/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240
Thoughts on waifus that remain 2D but have their own dedicated hardware. This leans more toward the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.

My favorite example is Ritsu, a cute AI from Assassination Classroom whose body is a giant screen on wheels.
Open file (378.63 KB 714x476 huawei.png)
>>10805
>They just created this in response to Google refusing to support Android on their hardware.
Lol, my mistake. It turns out it was actually the Feds that banned it. I guess I could have found this out myself; it's old news by now.
www.cnet.com/news/lawmakers-to-u-s-companies-dont-buy-huawei-zte/
https://web.archive.org/web/20140911012926/www.cnet.com/news/lawmakers-to-u-s-companies-dont-buy-huawei-zte/
They were apparently concerned about Huawei spying on users. The irony, given the ZOG legislation here. Anyway, I still can't figure out if they are going to include an AI in this operating system. Please let us know if you do.
Open file (27.35 KB 345x345 FUUUUUUU-.jpg)
>>10805
>Does anyone know if HUAWEI HarmonyOS is going to include a waifu assistant in any form?
Welp, after doing a little research into the project, I think I can answer my own question now. In a word: no.
>11) AI SUBSYSTEM
>New features:
>Added a unified AI engine framework to achieve rapid plug-in integration of algorithm capabilities. The framework mainly includes modules such as plug-in management, module management, and communication management, and carries out life-cycle management and on-demand deployment of AI algorithm capabilities.
>Provides developers with a development guide, and provides 2 AI capability plug-ins based on the AI engine framework and the corresponding AI application sample, which is convenient for developers to quickly integrate AI algorithm capabilities in the AI engine framework.
https://www.huaweicentral.com/huawei-announce-new-openharmony-1-1-0-lts-features-modification-next-years-schedule-and-more/
Surely, if they were bundling an AI assistant into the system, they would mention it in such product descriptions.
>mfw
Welp, maybe they will later -- especially if Samsung does the right thing and gives the world Sam (>>10763); that should convince Huawei to follow suit.
>>10805 i hate chinese people
>>10818 If they give us waifus/robowaifus unencumbered by NSA/FBI/MI5/etc. surveillance, then I love the Chinese. Besides, they are entirely opposed to faggots, trannies, and all that gay shit, and aren't too kindly disposed toward feminists either. Are you sure you're on the right side of this question, Anon?
Open file (52.09 KB 593x658 IMG_20210602_002310.jpg)
>>10818 But you should love decentralization of power, to some extent -- at least enough of it that Western governments have to focus on some of their own issues instead of solving alleged world problems.
>>10820 Ugh. What is that thing in the image? I almost threw up in my mouth.
>>10829 Good point. >>10820 Daily reminder of rule #2: please restrain yourself, Anon, from posting images of sodomites outside the basement.
>>10805 Ugh, politics aside, Huawei is the Apple wannabe of China. Don't buy their crap for that alone. They try to make their interfaces so minimalist and so "smart" that the moment you have damp fingers, their capacitive touchscreen won't let you out of the context-sensitive menus until you miraculously swipe at just the right angle of acceleration. I hate these kinds of companies that try to reinvent the wheel by removing the headphone jack.
>>10846 Fair point. Every non-pozzed individual I know of detests Apple as a company, and as a primary tool pushing the wokeism sham on the world. But frankly I couldn't care less if it's a Russian, Chinese, or freakin' Samoan company delivering a high-quality robowaifu product -- as long as they aren't directly in bed with the 9-eyes intelligence agencies. These agencies are plainly just ZOG puppets today, and have become obviously evil. They clearly hold White men as their sworn enemies (ironically enough, given their own Anglo heritages), and this is plainly in alignment with their masters' agendas, of course. Glowniggers of any stripe are not the friends of men in general, and White men in particular.
Open file (227.74 KB 1552x873 sidekick.jpg)
Open file (68.86 KB 680x383 it's a cardboard box.jpg)
Stumbled across a Kickstarter for holographic AI companions on YouTube. https://www.youtube.com/watch?v=jsAUbxRePMo They're basically doing the shoebox method >>9570 but with patented lenses to increase the display size of a phone that's inserted into one of their cardboard boxes. The largest plastic one has its own display and battery. The output of the AI is slow and non-engaging, yet they're promising a fun, talkative, and emotionally intelligent sidekick by June 2022. https://www.youtube.com/watch?v=M111_7Rh1mY As of now they've raised $800,000 in a month. https://www.kickstarter.com/projects/crazies/sidekicksai/description How do they have 1,700 backers with no marketing, no viral video, and almost zero social media presence? Why is the average pledge $470? Is it an elaborate money-laundering scheme?
>>11628 Neat. Thanks for the heads-up Anon. Very on-topic. I actually like the idea that it's just a cardboard box. Cheap construction materials can definitely lower costs and therefore potentially increase product reach. It's also a direct inspiration to DIY-ers. Finding inexpensive ways to produce components will certainly be vital to bringing the cost of robowaifus down to the point where kits are reasonably-priced -- especially in the beginning. >>11630 Thanks for the insights, FANG anon. The comments situation does seem telling.
>>11637 This is a great video. I had an idea before to use head tracking to create better 3D illusions. I hadn't even thought it would be possible to make the illusion seem like it's coming out of the screen. I think other properties could be calibrated for as well, such as Fresnel reflection, since the viewing angle changes the intensity of reflections. Color correction could also compensate for the background and for birefringence, so she looks less ghostly and rainbow-colored.
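The Fresnel-reflection calibration mentioned above can be sketched cheaply with Schlick's approximation, a standard stand-in for the full Fresnel equations. This is a minimal illustrative sketch; the refractive indices and the compositing idea in the comment are assumptions, not anything from the post:

```python
def schlick_fresnel(cos_theta: float, n1: float = 1.0, n2: float = 1.5) -> float:
    """Approximate Fresnel reflectance via Schlick's approximation.

    cos_theta: cosine of the angle between the view ray and the surface
    normal. n1/n2: refractive indices of the two media (air and glass
    here, as an illustrative assumption).
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# A head-tracked compositor could rescale the hologram's brightness per
# frame, e.g. intensity *= 1.0 - schlick_fresnel(cos_view_angle).
```

At head-on viewing (cos_theta = 1) this returns the base reflectance of about 0.04 for glass; at grazing angles it approaches 1, which is why reflections look brighter edge-on.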
VR is getting cheaper, better, and lighter. Personally, I got a Quest 2 and my vision of waifus changed radically. I think we should focus on a robot that just assists us while we're in VR, matching the robot's position to the VR waifu's. As an example of very good games: https://sexywaifus.com/_vr/simpleselection-vr.html
Open file (202.41 KB 736x1472 736x.jpg)
>>13553 I long for the day when slim glasses can overlay cute foxes on a basic robowaifu frame. Closest thing I have to that now are degenerates on VR chat.
Something I hadn't considered about developing visual waifus on a PC is that you can interact with them on a touchscreen with all of your fingers. It would be possible to embed the touch events into a language model's context and generate responses to the user's touch, like giving her head pats. The animation could also be directed by the output of the language model for a really interactive experience.
>>15953
>1st filename
Top kek. I think it's pretty humorous that the machine's developers thought far enough ahead to allocate animation resources to Anons keeping that 'finger contact' going. Obviously, /robowaifu/ will need to do the same! :^) Admittedly, I'm somewhat clueless about the necessary association between the sensory inputs and the language model, Anon. Maybe it's just my nerdish engineering viewpoint, but it strikes me that's more of a systems event that will trigger lots of cascading effects -- language responses included.
>>15954 Part of the program would need to respond to the touch event immediately; for instance, if you stroke the waifu's hair, it should move right away. The language model would also take the touch event into account to produce a sensible response instead of one that is oblivious to it. It could also generate more complex animation instructions in the first tokens of a response, which would have about a 250ms delay, similar to human reaction time. It's not really desirable to have a fixed set of pre-made animations, since after seeing them over and over the waifu will feel rigid and stuck replaying them. With the language model, though, you could generate all kinds of different reactions that take into account the conversation and touch events.
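The two-path idea described above (an immediate reflex animation, plus recent touch events serialized into the language model's prompt) could be sketched like this. All names here are hypothetical, and `generate` stands in for whatever model call is actually used:

```python
import time
from collections import deque

class TouchAwareWaifu:
    """Sketch: instant canned reaction to touch, plus touch-aware prompts."""

    def __init__(self, generate):
        self.generate = generate            # callable: prompt -> reply text
        self.recent_touches = deque(maxlen=8)

    def on_touch(self, region: str) -> str:
        """Record the event and fire an immediate reflex animation,
        well before the language model has produced anything."""
        self.recent_touches.append((time.time(), region))
        return f"play_animation:react_{region}"

    def build_prompt(self, user_text: str) -> str:
        """Serialize recent touch events into the model's context so its
        reply isn't oblivious to them."""
        events = ", ".join(region for _, region in self.recent_touches)
        context = f"[recent touch events: {events or 'none'}]"
        return f"{context}\nUser: {user_text}\nWaifu:"

    def respond(self, user_text: str) -> str:
        return self.generate(self.build_prompt(user_text))
```

The reflex path returns in microseconds; only the `respond` path pays the model's ~250ms-plus latency, matching the split the post describes.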
>>15956 OK, I'll take your word for it Anon. I'm sure I'll understand as we work through the algorithms themselves, even if the abstract isn't perfectly clear to me yet. You can be sure I'm very attuned to the needs of efficient processing and timely responses though! Lead on! :^) >>15953 BTW, thanks for taking the trouble of posting this Anon. Glad to see what these game manufacturers are up to. Nihongo culturalisms are pretty impactful to our goals here on /robowaifu/ tbh. Frankly they are well ahead of us for waifu aesthetics in most ways. Time to catch up! :^)
>>15953 That Madoka is the epitome of cuteness. If only there were a way to capture that voice and personality and translate it into English. >>15956 Timing of reactivity is important for preventing the uncanny valley from a communications standpoint. For her animations, it may be effective to have several possible animations for various responses that are chosen at random, though never repeating. Like, having a "welcome home" flag that triggers an associated animation when she's saying "welcome home".
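The "several possible animations, chosen at random though never repeating" idea above is small enough to sketch directly (the function and pool names are hypothetical):

```python
import random

def pick_animation(pool, last=None, rng=random):
    """Pick a random animation from `pool`, never repeating `last`
    back-to-back. Falls back to the full pool if excluding `last`
    would leave nothing to choose from."""
    candidates = [anim for anim in pool if anim != last] or list(pool)
    return rng.choice(candidates)

# e.g. a "welcome home" flag could map to its own pool:
# pick_animation(["wave", "run_to_door", "happy_spin"], last=previous)
```

Tracking only the previous choice is the cheapest non-repetition scheme; a deque of the last few picks would push repeats even further apart.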
>>15967 >Timing of reactivity is important for preventing the uncanny valley from a communications standpoint. You know I just had a thought at reading this sentence Kywy. It's important to 'begin' (as in, within say, 10ms) a motion, even though it isn't even her final form yet. :^) What I mean is that as soon as a responsive motion need is detected, then her servos should begin the process immediately, in a micro way, even if the full motion output hasn't been decided upon fully yet. IMO this sort of immediacy to response is a subtle clue to 'being alive' that will subconsciously be picked up on by Anon. As you suggest, without it, a rapid cascade into the Uncanny Valley is likely to ensue. It's not the only approach that's needed to help solve that issue, but it's likely to be a very important aspect of it. Just a flash insight idea.
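One way to sketch the "begin within ~10ms" idea above: issue a tiny anticipatory servo command at once, then plan and execute the full gesture on a background thread. The planner and servo callables are hypothetical stand-ins for whatever motion stack is actually used:

```python
import threading

def on_stimulus(plan_full_gesture, send_servo_command):
    """React to a stimulus with an instant micro-motion, then the full
    gesture once it has been planned. Returns the planner thread so a
    caller can wait on it if needed."""
    # Phase 1: instant anticipatory twitch, issued within milliseconds.
    send_servo_command("micro_orient", magnitude=0.05)

    # Phase 2: plan the full motion in the background and execute it.
    def worker():
        gesture = plan_full_gesture()   # may take hundreds of ms
        send_servo_command(gesture, magnitude=1.0)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

The reflex command never waits on planning, so the perceived response latency is bounded by the servo bus, not by the decision process.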
I recognize that b/c of 'muh biomimicry' autism I have, I'm fairly inclined to go overboard into hyperrealism for robowaifus/visual waifus, even though I know better. So my question is >"Are there simple-to-follow guidelines to keep from creating butt-fugly uncanny horrors, and instead create cute & charming aesthetics in the quest for great waifus?" Picrel is from the /valis/ thread that brought this back to mind. > https://anon.cafe/valis/res/2517.html#2517
>>13558 Been thinking that for VRChat, old phones can be spoofed to act like headsets. An IRL waifubot doesn't need good graphics to log in as a VR bot. I think there may be a basic puppet limb-tracking system hidden in the DIY haptic glove projects. Open-source software for tracking ten digits with force-feedback options could be applied to four limbs.
Open file (962.23 KB 1289x674 Screenshot_6.png)
>>240
>what nvidia is doing today
In short: more metaverse cloud-based shit.
>>17178
>on pic: nvidia shows a virtual AI-powered avatar in UE5
Forgot to add that :/
Open file (1.34 MB 1521x3108 1660016076195124.jpg)
>>15970 But is the "uncanny valley" even real, though? In the picture you posted here >>16235, Yuna is supposed to fall into the uncanny valley, but she's the better-looking character of the two. Meanwhile the character on the right looks real but is merely ugly, and that's why Yuna looks better. We can just use an anime style for our waifus; it translates well into both physical 3D and computer 3D models.
Open file (679.36 KB 798x766 3DPD CGI.png)
Open file (965.98 KB 1152x832 Digi-Jeff.png)
>>17180 Still working on my 3D modelling since I started with M-66 >>11776 I made a hell of a lot of topology and workflow errors when I first started out, so I decided to find and follow some proper tutorials. Thanks to the series by CG Cookie - focusing on anatomically-correct edge modelling in Blender - I managed to make my first head that isn't just ripped from a game. https://www.youtube.com/watch?v=oa9ZRyBFcCg (Be warned though, this is extremely time-consuming work! The proper edge modelling starts from video 2 onwards). It looks horrendous. But people do when they have no hair or eyebrows! Although this is realistic, I prefer modelling stylised cartoon or anime characters because they are simpler and cuter. When modelling from life, not only is it really hard to accurately represent the underlying muscles, it is sooooo easy to drop into the Uncanny Valley because certain features of your topology will always be a little off. Nothing is going to be 100% correct (even professional models that use photogrammetry and mocap can look 'off'). Even if the topology is 99.99% correct, small errors with environment lighting and animation can creep in (I am thinking particularly of how they 'de-aged' Jeff Bridges for the 2010 movie Tron: Legacy).
>>17183 On the subject of de-aging, turns out deepfake programs are the best way to go! Makes sense if you can get ahold of enough footage. This side-by-side comparison video illustrates what I was just saying about many small errors accumulating to drag a character into the Uncanny Valley: https://youtu.be/vW6PKX5KD-U
Open file (1.56 MB 540x501 desktop_drum.gif)
>>17183 >>17184 Thanks, very fascinating and informative. I don't think your first pic looks horrible, but if it takes a lot of work and still isn't finished, then maybe it's not the right approach. Not sure what your goal is, though, beyond learning: modelling heads for robowaifus, or making an animated girlfriend? It's good that you tried the hard stuff, but it seems easier to go with a simpler model. Tbh, I think for an animated girlfriend a 2D anime style (low-poly?) might be sufficient. Your difficulties also explain why there aren't that many 3D heads freely available. I considered a while ago using a service like Fiverr to get a female head modeled. This seems to be done in poor countries (by children, maybe); I hope you know that, just in case you planned to make a job out of it. If you want to build some kind of program or produce a short movie, then maybe only work on the sketch and outsource the rest of the work to Pakistan.
>>17185
>Not sure what your goal is, though, beyond learning.
>Just in case if you planned to make a job out of it.
Yeah, I just wanted to learn so it's easier for me to make 3D models and art, just for personal enjoyment. Making digital art of robot waifus is much cheaper and easier than making actual physical robot waifus! Plus there are no constraints and you can make whatever the hell you want! If you get really good at it, you can put completed, rigged models up for sale on various websites, but this would be a small side-hustle at best. Nobody actually needs 3D art, especially not as Clown-World continues down its spiral of self-destruction.
>>17191 Okay, but you could try making simpler animations that could be used for a "chatbot" or virtual robowaifu, or for telling animated stories, which would still be interesting to people and would compete with the established media. That's just what I would be doing if I were going for 3D art, and I might one day, when my robowaifu is finished.
Open file (2.30 MB 640x360 rinna.webm)
Someone made a conversational AI in VRChat using rinna/japanese-gpt-1b: https://www.youtube.com/watch?v=j9L51pASeiQ He seems to still be working on it and is planning to release the code. I really like the idea of this -- just having a cozy chat by a campfire. No need for fancy animations.
Open file (81.94 KB 1280x600 lookingglassportrait.jpg)
>>3948 >>3951 New model is only $400, and I predict the cost to come down further if it catches on. I have one in the mail, and will update the thread when it arrives. >>17542 Impressive!
Open file (5.88 MB 1080x1920 gateboxbutgood.mp4)
>>18149 That was fast. I know she's vtuber cancer, but one of the demos is an anime girl in a box.
>(crosslink-related >>18365)
Apparently an anon in Japan has linked up his Gatebox with ChatGPT. If anyone here understands this, please fill us all in on the details. TIA. Translated from the Japanese: "Development on the Gatebox/ChatGPT linkage is done for today! Finally, let me have my girl express her gratitude to everyone who responded." https://twitter.com/takechi0209/status/1631666320180912128
How do I actually 3D-model and program a robowaifu? I want to make a 3D video-wall robowaifu so it feels like she's in the room.
>>21282 We have a thread about 3D modelling: >>415. Idk what the best way is. Neuro-sama seems to just use a standard model from some software (anime studio?). Blender might be a way to do it, or gaming engines like Godot or Unity. There are imageboards (4chan and others) and subreddits like r/3danimation where you could ask. Unity example (Sakura Rabbit): https://www.youtube.com/watch?v=r_ErytGpScQ
>This repository contains demo programs for the Talking Head(?) Anime from a Single Image 3: Now the Body Too project. As the name implies, the project allows you to animate anime characters, and you only need a single image of that character to do so. There are two demo programs:
https://github.com/pkhungurn/talking-head-anime-3-demo
>I will be talking about my personal project where I have programmed my own virtual girlfriend clone based on the famous VTuber Gawr Gura! The program still has its issues as it is just a shoddy prototype, but in this video I explain how she works using easy general terms so anyone can understand.
https://github.com/Koischizo
https://www.youtube.com/watch?v=dKFnJCtcfMk
Yeah, the project isn't that impressive in regards to the tech, but the mentioned talking-head anime project might be useful. Otherwise, he's using Carper AI and web scraping for the responses, which take 30s each. The fact that such small and imperfect projects still attract a lot of attention is another interesting takeaway. The low demands guys have will really help with establishing robowaifu technology.
>>21367 >The low demands guys have will really help with establishing robowaifu technology. Yes we will haha! :^) Thanks, NoidoDev. Very interesting.
Open file (605.67 KB 1920x1080 AIGura.png)
https://www.youtube.com/watch?v=dKFnJCtcfMk Has anyone here figured out how to replicate what SchizoDev did here? This is easily the best waifubot I've seen thus far, and I think most of us here would love to replicate what he made, but in the image of our own waifu.
>>21385 I'm moving your post into our Visual Waifu thread OP. Great find, thanks!
>>21385 Sorry for getting snarky, but it helps to look into the links under a video, and also to watch the video and listen to what he says...
>>21406 I did watch the video and looked at the links, but it doesn't explain what to do in a very beginner-friendly way, IMO.
>>21407 No problem; I didn't try to replicate it, so I can't tell you exactly. He mentioned web scraping, which is something you can look into. He used Carper AI, if I understood correctly. Anyway, it takes 30s for an answer.
>>21409 It's actually described here: https://github.com/Koischizo/AI-Vtuber - I don't know what he meant by webscraping some Caper or Carter AI in the video; I looked yesterday and found a site to use in the browser and assumed he was scraping it.
https://github.com/gmongaras/AI_Girlfriend Does anyone know how to get this repository working? I'm stuck on step 4 of the directions, the one that says "Open main.ipynb and run the cells. The topmost cell can be uncommented to download the necessary packages and the versions that worked on my machine."
>>21435 Sounds like he's telling you to open main.ipynb in JupyterLab, Anon? >"After your session spins up in your browser, if you chose JupyterLab, drag your file from your local machine into the file navigation pane on the left side. It will get a gray dashed line around it when you have dragged it to the right place. Drop it in and let it upload. Now double click on it to open it." https://stackoverflow.com/questions/71080800/how-to-open-the-ipynb-file-in-readable-format https://jupyter.org/try-jupyter/lab/
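For reference, a typical way to get JupyterLab running locally so main.ipynb can be opened and its cells run (assuming Python and pip are already installed):

```shell
# Install JupyterLab, then launch it with the notebook; it opens in
# your browser, where you can run the cells top to bottom.
pip install jupyterlab
jupyter lab main.ipynb
```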
>>21435 A way to deal with such problems is just looking for a tutorial on the program on YouTube. I mean on "JupyterLab", of course, not AI_Girlfriend.
Could help with making these waifus talk: https://rentry.org/llama-tard-v2
