/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.



Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, a cute AI from Assassination Classroom whose body is a giant screen on wheels.
>>5458 That does the job, thanks! How are you going to use the AI? For human recognition? For training to create new animations? If you're going with 3D, will you still use stock animations, e.g. from Mixamo, and just use the AI for selecting the behavior and corresponding animation? I'm thinking of maybe trying 2D first. I was going back to Godot to try to make a cheapass version of Live2D. When researching Vtuber tools, I noticed that the free Facerig clones were written in the Unity game engine. So there is potential for Godot plus AI to take the place of both Live2D and Facerig: read and process facial recognition from the webcam, then adjust the waifu puppet animation accordingly, all within the same game engine.
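As a rough illustration of that last idea, here is a minimal Python sketch (assuming OpenCV is installed) that reads the webcam, finds the largest face with the bundled Haar cascade, and turns its position and size into pan/tilt/closeness values a 2D puppet rig could consume. Forwarding those values into a Godot scene (e.g. over a local socket) is assumed, not shown.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        # use the largest detected face as the viewer
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        frame_h, frame_w = gray.shape
        pan = (x + w / 2) / frame_w * 2.0 - 1.0    # -1 (left) .. +1 (right)
        tilt = (y + h / 2) / frame_h * 2.0 - 1.0   # -1 (up) .. +1 (down)
        closeness = w / frame_w                    # crude distance proxy
        # here you would send pan/tilt/closeness to the puppet rig
        print(f"pan={pan:+.2f} tilt={tilt:+.2f} closeness={closeness:.2f}")
    cv2.imshow("debug", frame)
    if cv2.waitKey(1) == 27:                       # Esc quits
        break
cam.release()
cv2.destroyAllWindows()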
>>5459 Glad to hear it went smoothly. The possibilities are endless really. My first real objective is to do a walking simulation where AI controls movement of all the joints and can keep balanced even if hit with a beachball. Then I might extend it to running, jumping and picking up objects. I just wanted to create a little demo of what's possible with AI now to inspire people to get into waifudev. People could do a lot of other things with it, like facial recognition for custom Vtubers. I haven't given it too much thought yet. I'm more focused on developing AI, although I do want to create an unscripted video of my waifu giving an AI tutorial by the end of the year and I'll probably use Godot to do it. Which reminds me you can do real-time speech synthesis with models like WaveGlow within Godot: https://nv-adlr.github.io/WaveGlow
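The balance goal can be prototyped in miniature before touching Godot. Below is a toy Python sketch, not that anon's actual setup: a 1-D inverted pendulum gets hit with a "beachball" push and recovers via a hand-tuned PD controller. A real version would replace the PD feedback with a learned policy over many joints; all constants here are made up for the demo.

def simulate(push=0.5, steps=200, dt=0.01, kp=60.0, kd=12.0):
    # linearised inverted pendulum (unit length), angle in radians
    angle, velocity = 0.0, push            # push = sudden angular velocity from the hit
    for _ in range(steps):
        torque = -kp * angle - kd * velocity   # PD feedback toward upright
        accel = 9.81 * angle + torque          # gravity term plus control torque
        velocity += accel * dt
        angle += velocity * dt
    return angle

print(f"angle after 2 s of recovery: {simulate():+.4f} rad")   # near zero: she stayed up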
Open file (19.78 MB 1280x720 Viva Project v0.8.mp4)
>>340 Viva Project v0.8 comes out on October 31st. He just released a new video a few days ago. https://www.youtube.com/watch?v=CC4ate84BiM
Open file (176.19 KB 700x700 Relativity-VR-front.jpg)
Open file (187.12 KB 700x700 Relativty-VR-open.jpg)
Once again meta, since there's no dedicated thread for VR yet, as only a few have it and it's still quite expensive: https://www.relativty.com/ created an open-source VR headset which caught some attention and might take off. Meanwhile the Quest 2 requires that you share all your movement data and interactions with Facebook, and you can also lose all access to everything stored on that device and related to it, like bought games, contacts, save games, etc., if you break any of their rules on their VR platform or on Facebook. This most likely also covers fake names or wrong info about yourself in your profile. The one from https://www.relativty.com/ might develop into the cheaper alternative to some better headsets, but it's behind in quality and probably always will be. Since it seems to only require an investment of around $200 and access to a 3D printer, it might be very interesting for many here. It might be possible to improve it a lot and get it really good. The main issues seem to be the tracking, sensors and such. It uses PyTorch and CUDA for experimental tracking. It also needs a connection to a computer, which might create annoying problems with cables, while the more expensive ones are standalone. It works with Steam, though. Github: https://github.com/relativty/Relativty
>>5782 Yes, there's some room for improvement I'm sure. But it seems like remarkable progress so far for a hobbyist effort: 3D printing files, PCB plans, electronics & hardware lists, software, cabling, everything. Seems like an independent project by professionals. The Steam thing is a little worrying, but since they seem to be entirely open source thus far, hopefully there won't be any entrenched botnet in the product.
>>5783 With Steam I meant they're compatible, which is clearly a plus. It doesn't mean the headset is dependent on it, and it wouldn't be a problem anyway, since one could just remove that part in an open-source system. The problem with the Quest 2 is that the games are stored on the device and the device is dependent on having an account. So they have full control. In a way it isn't your device; you're just paying for it. So you would probably even need FB approval of any waifu software, and maybe a (probably expensive) developer account.
>>5784 yep, no debate on those points tbh.
bump for vr
>>5788 Yeah, please don't. Build one and tell us how it went, or develop some VR waifu, then everyone will be interested.
Since everything animated seems to go in here as well: here's a video from a channel that specializes in creating 3D animated waifus, which dance on YouTube to some music: https://youtu.be/8AmjFLdpkyw
>>8377 Thanks Anon. Yeah, I'd say this or the Waifu Simulator >>155 thread are both good choices for animation topics until we establish a specific thread for that.
So, we definitely want our robowaifu's AI to be able to travel around with us, whether she's wearing her physical robo-form or the more lightweight, non-atoms version. Securing/protecting her physical shell is pretty common-sense, but how do we protect her from assault when she's virtual? Specifically, how do we protect her properly when she's on our phones/tablets/whatever, when we're away from the privacy of our own home networks?
>>8592 The obvious solution is to do what the Gatebox device in the OP does and keep the device that the AI runs on at home and interact with the user through text messages or phone calls when the user is outside. All the AI assistants from major tech companies work on this principle through edge gateways keeping the important parts of the AI secure in cloud computing. Not only is the way they process user data a valuable trade secret but so is the data itself which is why they spend so much on developing virtual assistants. If you really wanted it to be secure you'd have several instances of it running on different types of hardware in several locations. Any compromises could be quickly detected and dealt with.
>>8593 I understand (well, sort of anyway heh). That seems like a pretty good idea just to rely on simple text messages for communicating with her 'back home'. Simple text should be easier to secure, and much easier to inspect for problems. I suppose we can build her virtual, traveling avatar to display/render locally correctly using just text messages back & forth. Shouldn't be too difficult to figure out how, once we have the other pieces in place. Thanks for the advice Anon!
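For anyone wanting to experiment with the "AI stays home, you just text her" layout, here is a minimal Python sketch of the home end: a tiny line-oriented TCP server that answers each incoming text with a reply. generate_reply() is a hypothetical stand-in for whatever model actually runs at home, and transport security (an SSH tunnel, WireGuard, etc. between phone and home box) is assumed rather than shown.

import socketserver

def generate_reply(message: str) -> str:
    # placeholder for the real language model call
    return "I got your message: " + message

class WaifuTextHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:                        # one message per line
            reply = generate_reply(raw.decode("utf-8", "replace").strip())
            self.wfile.write((reply + "\n").encode("utf-8"))

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5555), WaifuTextHandler) as srv:
        srv.serve_forever()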
>>1124 >I'm currently in the middle of a project where I'm gutting a street car, putting in an engine that is entirely too big for it, racing seats, new ECU, and basically building a street legal race car. Anon if you're still here, what the heck is going on with your project? We need our Racewaifus!
Open file (2.28 MB 1024x576 gia.webm)
To prepare your visual waifu to become a desktop assistant in Godot 3.1, go into Project Settings and set:
Display > Window > Width
Display > Window > Height
Display > Window > Borderless > On
Display > Window > Always On Top > On
Display > Window > Per Pixel Transparency > Allowed > On
Display > Window > Per Pixel Transparency > Enabled > On
When the scene starts, in a node's _ready function run:
get_tree().get_root().set_transparent_background(true)
Congrats, your waifu is now on the desktop and always with you. On Linux use Alt + Drag to move her around and Alt + Space to bring up the window manager if necessary. Make sure the window is appropriately sized to fit her or she will steal your clicks.
>>9025 That's neat, thanks Anon. Also, >that dialogue Lol!
Open file (1006.94 KB 720x720 dorothy.webm)
Progress. Next is to hook Dorothy up to a language model.
>>9270 Wonderful. Now I want to go hang out with Dorothy at the pub!
Creating holowaifus like Miku here >>9562 could become a thing with $30 480p Raspberry Pi displays. It'll be amazing when people can create their own mini holowaifus one day for under $100. Shoebox method: https://www.youtube.com/watch?v=iiJn9H-8H1M Pyramid method: https://www.youtube.com/watch?v=MrgGXQvAuR4 Also there's a life-sized Gatebox coming out for businesses. It seems like they're still lacking good AI for it but I can see it being a good business attraction. I could see this being used in arcades which are still popular in Japan.
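For the pyramid method, the screen just shows the same sprite four times, each copy rotated toward one face of the acrylic pyramid. Here's a rough Python/PIL sketch of that quad layout; "waifu.png" and the 720 px screen size are placeholders, the exact rotation directions may need flipping for a given pyramid, and a real setup would render animation frames rather than a still image.

from PIL import Image

def pyramid_layout(sprite_path, screen=720):
    canvas = Image.new("RGB", (screen, screen), "black")   # black background for Pepper's ghost
    sprite = Image.open(sprite_path).convert("RGBA")
    sprite.thumbnail((screen // 3, screen // 3))            # shrink in place to fit the layout
    c = screen // 2
    for angle, edge in ((0, "top"), (180, "bottom"), (90, "left"), (270, "right")):
        rotated = sprite.rotate(angle, expand=True)
        rw, rh = rotated.size
        if edge == "top":
            pos = (c - rw // 2, 0)
        elif edge == "bottom":
            pos = (c - rw // 2, screen - rh)
        elif edge == "left":
            pos = (0, c - rh // 2)
        else:
            pos = (screen - rw, c - rh // 2)
        canvas.paste(rotated, pos, rotated)                 # alpha-composite each copy
    return canvas

pyramid_layout("waifu.png").save("hologram_frame.png")      # placeholder file names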
peak visual waifu-ery
Stumbled across this gem from 2011. Japan has been hiding the waifus all along. https://www.youtube.com/watch?v=H6NzzTyglEw
>>10214 Wow that's actually quite impressive for the date, thanks Anon. I sure wish I'd known about this back then.
>>8592 she'll always be backed up on a bunker orbiting earth ; )
>>10215 We are working on Waifu Engine and have basically already surpassed that demo. Currently piecing it together into an app. https://alogs.theГунтretort.com/robowaifu/res/10361.html#q10361
>>10723 Neat, good luck!
Just put it in like that: >>10361
>>10736 Thanks, I appreciate it. I don't use this board system often =)
Does anyone know if HUAWEI HarmonyOS is going to include a waifu assistant in any form? If so, I'm going to make sure my next phone is one of theirs. They just created this in response to Google refusing to support Android on their hardware. So Huawei just made their own. Here's the product announcement show: https://www.youtube.com/watch?v=y2101ics8jc
I find it very disturbing that Google and the US' NSA & other 9-Eyes organizations are so intertwined now that they are effectively the same, single organization. Basically, you can't use any Google product today w/o your every interaction with it being instantly accessible to the Feds, already conveniently massaged for them into a pap format their diversity-tier staffs can readily digest. In the US at least, this kind of dragnet surveillance state is a flagrant and completely illegal violation of the 4th Amendment of the US Constitution. As some of us anons discussed a bit over in the basement, I'm much more comfortable with the Communist Chinese government having access to all my personal data and info than I am with my own government having it. I'm certainly no fan of Marxism (it's incredibly evil, actually), but at the very least the CCP isn't a ZOG puppet, as basically all of the West is now.
>tl;dr Chinese products are far less of a threat to us than American ones. Please give us AI waifus now, HUAWEI!
Open file (378.63 KB 714x476 huawei.png)
>>10805 >They just created this in response to Google refusing to support Android on their hardware. Lol my mistake. Turned out it was the Feds that banned it, actually. I guess I could have found this out myself, it's old news by now. www.cnet.com/news/lawmakers-to-u-s-companies-dont-buy-huawei-zte/ https://web.archive.org/web/20140911012926/www.cnet.com/news/lawmakers-to-u-s-companies-dont-buy-huawei-zte/ They were apparently concerned about Huawei spying on users. IRONY, the ZOG legislation. Anyway, I still can't figure out if they are going to include an AI in this operating system. Please let us know if you do.
Open file (27.35 KB 345x345 FUUUUUUU-.jpg)
>>10805
>Does anyone know if HUAWEI HarmonyOS is going to include a waifu assistant in any form?
Welp, after doing a little research into the project, I think I can answer my own question now. In a word: no.
>11) AI SUBSYSTEM
>New features:
>Added a unified AI engine framework to achieve rapid plug-in integration of algorithm capabilities. The framework mainly includes modules such as plug-in management, module management, and communication management, and carries out life cycle management and on-demand deployment of AI algorithm capabilities
>Provide developers with a development guide, and provide 2 AI capability plug-ins based on the AI engine framework and the corresponding AI application Sample, which is convenient for developers to quickly integrate AI algorithm capabilities in the AI engine framework
https://www.huaweicentral.com/huawei-announce-new-openharmony-1-1-0-lts-features-modification-next-years-schedule-and-more/
Surely, if they were bundling an AI assistant into the system, they would mention it in such product descriptions.
>mfw
Welp, maybe they will later -- especially if Samsung does the right thing and gives the world Sam (>>10763), which should convince Huawei to follow suit.
>>10805 i hate chinese people
>>10818 If they give us waifus/robowaifus unencumbered by NSA/FBI/MI5/... etc., etc. surveillance, then I love the Chinese ppl. Besides, they are entirely opposed to faggots, trannies, and all that gay shit, and aren't too kind to feminists either. Sure you're on the right side of this question, Anon?
Open file (52.09 KB 593x658 IMG_20210602_002310.jpg)
>>10818 But you should love decentralization of power, to some extent. At least enough of it that Western governments have to focus on some of their own issues instead of solving alleged world problems.
>>10820 Ugh. What is that thing in the image? I almost threw up in my mouth.
>>10829 Good point. >>10820 Daily reminder of rule #2 Please restrain yourself Anon from posting images of sodomites outside the basement.
>>10805 Ugh, independent of politics, Huawei is the Apple wannabe of China. Don't buy their crap for that reason alone. They try to make their interfaces so minimalist and so "smart" that the moment you have damp fingers their capacitive touchscreen won't let you get out of the context-sensitive menus until you miraculously swipe at just the right angle and acceleration. I hate these kinds of companies that try to reinvent the wheel by removing the headphone jack.
>>10846 Fair point. Every non-pozzed individual I know of detests Apple as a company, and as a primary tool pushing the wokeism sham on the world. But frankly I couldn't care less if it's a Russian, Chinese, or freakin' Samoan company delivering a high-quality robowaifu product -- as long as they aren't directly in bed with the 9-eyes intelligence agencies. These agencies are plainly just ZOG puppets today, and have become obviously evil. They clearly hold White men as their sworn enemies (ironically enough, given their own Anglo heritages), and this is plainly in alignment with their masters' agendas, of course. Glowniggers of any stripe are not the friends of men in general, and White men in particular.
Open file (227.74 KB 1552x873 sidekick.jpg)
Open file (68.86 KB 680x383 it's a cardboard box.jpg)
Stumbled across a Kickstarter for holographic AI companions on YouTube. https://www.youtube.com/watch?v=jsAUbxRePMo They're basically doing the shoebox method >>9570 but with patented lenses to increase the display size of a phone that's inserted into one of their cardboard boxes. The largest plastic one has its own display and battery. The output of the AI is slow and non-engaging yet they're promising a fun, talkative, and emotionally intelligent sidekick by June 2022. https://www.youtube.com/watch?v=M111_7Rh1mY As of now they've raised $800,000 in a month. https://www.kickstarter.com/projects/crazies/sidekicksai/description How do they have 1700 backers with no marketing, no viral video, and almost zero social media presence? Why is the average pledge $470? Is it an elaborate money laundering scheme?
>>11628 Neat. Thanks for the heads-up Anon. Very on-topic. I actually like the idea that it's just a cardboard box. Cheap construction materials can definitely lower costs and therefore potentially increase product reach. It's also a direct inspiration to DIY-ers. Finding inexpensive ways to produce components will certainly be vital to bringing the cost of robowaifus down to the point where kits are reasonably-priced -- especially in the beginning. >>11630 Thanks for the insights, FANG anon. The comments situation does seem telling.
>>11637 This is a great video. I had an idea before to use head tracking to create better 3D illusions. I hadn't even thought it'd be possible to make the illusion seem like it's coming out of the screen. Other properties could be calibrated for as well, such as Fresnel reflection, since the viewing angle changes the intensity of the reflection. Color correction could be done too, to compensate for the background and for birefringence, so she looks less ghostly and rainbow-colored.
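To make the Fresnel point concrete, here's a tiny Python sketch using Schlick's approximation: reflectance off glass or acrylic climbs steeply at grazing angles, so once head tracking gives the viewing angle, the waifu's rendered brightness could be scaled to keep her perceived intensity roughly constant. The base reflectance of 0.04 is a typical textbook value for glass; a real rig would need to calibrate it.

import math

def schlick_reflectance(view_angle_deg, r0=0.04):
    # Schlick's approximation: R(theta) = R0 + (1 - R0) * (1 - cos(theta))^5
    cos_theta = math.cos(math.radians(view_angle_deg))
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

for angle in (0, 30, 60, 80):
    r = schlick_reflectance(angle)
    print(f"{angle:2d} deg  reflectance {r:.3f}  brightness scale {1.0 / r:.1f}x")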
VR is getting cheaper, better, and lighter. Personally, I got a Quest 2 and my vision of waifus changed radically. I think we should focus on a robot that just assists us while we're in VR, and match the position of the robot with the VR waifu. As an example of very good games: https://sexywaifus.com/_vr/simpleselection-vr.html
Open file (202.41 KB 736x1472 736x.jpg)
>>13553 I long for the day when slim glasses can overlay cute foxes on a basic robowaifu frame. Closest thing I have to that now are degenerates on VR chat.
Something I didn't think of when developing visual waifus on a PC is that you can interact with them on a touchscreen with all of your fingers. It would be possible to embed the touch events into a language model's context and generate responses to the user's touch, like giving her head pats. The animation could also be directed by the output of the language model for a really interactive experience.
>>15953 >1st filename Top kek. I think it's pretty humorous that the machine developers thought ahead well enough to allocate animation resources to dealing with Anons keeping that 'finger contact' going. Obviously, /robowaifu/ will need to do the same! :^) Admittedly, I'm somewhat clueless about how the sensory inputs would tie into the language model, Anon? Maybe it's just my nerdish engineering viewpoint, but it strikes me that's more of a systems event that will trigger lots of cascading effects -- language responses included.
>>15954 Part of the program would need to respond to the touch event immediately: if you stroke a waifu's hair, it should move right away. The language model would also take this touch event into account to produce a sensible response instead of replies that are oblivious to it. It could also generate more complex animation instructions in the first tokens of a response, which would have about a 250ms delay, similar to human reaction time. It's not really desirable to have a fixed set of pre-made animations, since after seeing them over and over the waifu will feel rigid and stuck replaying them. With the language model, though, you could generate all kinds of different reactions that take into account both the conversation and the touch events.
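As a rough sketch of what "embedding touch events into the context" could look like in Python: each touch is serialized as a bracketed annotation interleaved with the dialogue, and the combined text becomes the prompt. model.generate() at the end is a hypothetical stand-in for whatever text generator is actually used.

from dataclasses import dataclass
from typing import List

@dataclass
class TouchEvent:
    region: str        # e.g. "hair", "head", "hand"
    kind: str          # e.g. "pat", "stroke", "poke"
    duration_s: float

def build_context(history: List[str], touches: List[TouchEvent]) -> str:
    # append touch annotations after the dialogue so the model can react to them
    lines = list(history)
    for t in touches:
        lines.append(f"[touch: {t.kind} on {t.region}, {t.duration_s:.1f}s]")
    lines.append("Waifu:")
    return "\n".join(lines)

context = build_context(
    ["Anon: I'm home!", "Waifu: Welcome back!"],
    [TouchEvent("hair", "pat", 1.5)],
)
print(context)
# reply = model.generate(context)   # hypothetical model call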
>>15956 OK, I'll take your word for it Anon. I'm sure I'll understand as we work through the algorithms themselves, even if the abstract isn't perfectly clear to me yet. You can be sure I'm very attuned to the needs of efficient processing and timely responses though! Lead on! :^) >>15953 BTW, thanks for taking the trouble of posting this Anon. Glad to see what these game manufacturers are up to. Nihongo culturalisms are pretty impactful to our goals here on /robowaifu/ tbh. Frankly they are well ahead of us for waifu aesthetics in most ways. Time to catch up! :^)
>>15953 That Madoka is the epitome of cuteness. If only there were a way to capture that voice and personality and translate it into English. >>15956 Timing of reactivity is important for preventing the uncanny valley from a communications standpoint. For her animations, it may be effective to have several possible animations for various responses that are chosen at random, though never repeating. Like, having a "welcome home" flag that triggers an associated animation when she's saying "welcome home".
>>15967 >Timing of reactivity is important for preventing the uncanny valley from a communications standpoint. You know, I just had a thought reading this sentence, Kywy. It's important to 'begin' a motion quickly (as in, within say 10ms), even though it isn't even her final form yet. :^) What I mean is that as soon as the need for a responsive motion is detected, her servos should begin the process immediately, in a micro way, even if the full motion output hasn't been fully decided yet. IMO this sort of immediacy of response is a subtle cue of 'being alive' that Anon will subconsciously pick up on. As you suggest, without it a rapid slide into the Uncanny Valley is likely to ensue. It's not the only thing needed to solve that issue, but it's likely to be a very important part of it. Just a flash insight idea.
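A toy Python sketch of that "twitch now, decide later" split, just to illustrate the idea: a fast reflex nudge fires the instant the event arrives, while a slower planner picks the full motion in a background thread. send_to_servos() and the 250ms planning delay are hypothetical placeholders for the real motor interface and model inference.

import threading
import time

def send_to_servos(command):
    print(f"{time.monotonic():.3f}s  servo: {command}")

def reflex(event):
    # tiny, generic motion in roughly the right direction, issued immediately
    send_to_servos(f"micro-move toward {event}")

def plan_full_motion(event):
    time.sleep(0.25)                       # stands in for inference / animation selection
    send_to_servos(f"full response to {event}")

def on_event(event):
    reflex(event)                          # fires within microseconds of the event
    threading.Thread(target=plan_full_motion, args=(event,), daemon=True).start()

on_event("head pat")
time.sleep(0.5)                            # keep the demo alive long enough for the planner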
I recognize that b/c of 'muh biomimicry' autism I have, I'm fairly inclined to go overboard into hyperrealism for robowaifus/visualwaifus, even though I know better. So my question is >"Are there simple-to-follow guidelines to keep from creating butt-fugly uncanny horrors, but instead create cute & charming aesthetics in the quest for great waifus?" Picrel is from the /valis/ thread that brought this back up to my mind. > https://anon.cafe/valis/res/2517.html#2517
