/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, she's a cute AI from Assassination Classroom whose body is a giant screen on wheels.
Open file (968.71 KB 1360x765 bg_life.jpg)
Open file (1.92 MB 1648x1236 ph_intro.jpg)
Open file (341.94 KB 728x534 sub_visual04.jpg)
Open file (179.49 KB 940x528 [email protected])
Open file (295.01 KB 740x1200 img_azuma.jpg)
>>240
>gatebox.ai/sp/
Their promo shots are getting better. Also, they are adding more characters (as they negotiate licensing for the IP I suppose). I really hope they pull it off tbh. It's a complete botnet under Gatebox, but hopefully copycats will create something eventually that is both opensauce and secure. The appliance shells should be easily 3D-printable today; you'd have to figure out a 'holographic' display system. Probably as simple as stripping the backlight from any normal TFT display screen, and adding an LED strip along one or two sides.
>>248
The best shot we've got for holograms at the moment is with several layers of transparent rotating LED strips; you can see what I'm talking about on youtube if you look up "LED holograms". It wouldn't look very good unless the transparent material was immersed in a liquid that made it almost invisible through some optical trickery. There's more to a hologram than just taking a typical LCD screen and making it transparent, which is what those cheap waifuboxes do.
>>264
>There's more to a hologram than just taking a typical LCD screen and making it transparent which is what those cheap waifuboxes do.
I assumed as much tbh.
Open file (9.87 KB 320x180 vivadev.jpg)
The VivaDev guy from the old 8/agdg/ Shinobu2 Project thread has just released a tutorial on swapping any waifu's head onto the in-game character of your choice.

https://www.invidio.us/watch?v=d_RR66X1C7A
The /agdg/ community inhabiting the cakejew's /v/ board had a breddy gud wiki put together from back in the day. Plenty of good information there related to Visual Waifus.

8agdg.wikidot.com/resources
I was going to make a thread but this one seems like a good place to put this. I don't know if any of you anons are race fans, but there is a trend of going to an all-digital dash in the top racing series these days. When combined with the microphones/voice controls and touch screens in modern cars, there exists an easy way to take your visual waifu on the go.

I'm currently in the middle of a project where I'm gutting a street car, putting in an engine that is entirely too big for it, racing seats, a new ECU, and basically building a street-legal race car. I'm putting in an all-digital dash because I need to replace the stock cluster anyway (it won't wire up to the new ECU/engine without heavy modification). At first I was going to ditch the stock radio/touch screen because I don't like the botnet but now I'm going to re-purpose it.

I've decided to build a small computer with solid state storage to hold entertainment (many GBs of Eurobeat) and other applications like weather forecasting and things of that nature. I've decided that I'll be programming a waifu/mascot for my car. She'll cheer me on as I drift the mountain passes and will be able to respond to voice commands. I'm hoping to allow the computer to control tuning the ECU on the fly so I can easily switch between race/track tune and street/cruising tune as well as slightly changing the tune to suit road conditions. I'm also going to program her to monitor the status of the car (temperatures, oil levels, optimal gear shifts etc.) and give me feedback/warnings.

It's all very basic and just on paper at the moment but I'm excited to see how this turns out. I'll share it with you guys of course, but if this turns out the way I'm hoping it will I'll probably look into marketing this to other weebs because I could use the cash. I'm even thinking about putting in dash/rear view cameras and having her yell at niggers that attempt to steal her. If one gets too close she'll cry Reipu/Hentai to scare them away and alert people in the area.

I'll let you guys know how this comes along. I fitted the new engine and transmission last week. I have to take the engine back out and build it, plus get a body kit installed because I need wider tires on the car and I don't want them poking out and looking totally retarded. I won't be getting into the software for a while yet.
>>1124
Some other things I've thought of:
>She will only start the engine if I'm sitting in the car. I'll probably do this with voice command but I may add a weight sensor. I don't like the idea of a camera on my face but I may add something to scan my face or at least my fingerprint. I want multiple verifications in place to make it harder for anyone to steal her
>Extending on the above: I'll give her the ability to let pre-approved people I know drive her, but only with certain tunes (race mode would be locked out for them).
>I want this to be a self contained system with no internet access but I might add the ability for her to use the internet via a phone at a later time to give her more abilities
>I'm looking into something that would project her on to the windshield itself in my FOV so I don't have to look down at the dash to admire her while I'm at speed
>I'm not an artfag so I'll need to find someone that can animate her for me. Will look into this once I have the basic software worked out
>I want her to monitor everything like tire pressure so I'll need to see if there are aftermarket sensors for this or re-purpose ones from other cars. Most car technology isn't FOSS so I might have to live without some stuff until I figure it out
>I want her to be able to interact with the passengers. I'll probably add another screen/monitor where the passenger airbag is currently located because it's coming out anyway. She'll suggest anime and video games for them to play that are stored on the local SSD
>The PC I'm building needs to be small but powerful. I'm not using a raspberry pi or similar machine for this because it simply won't be enough. I have some ideas on building a small machine that won't break under G forces and the general wear and tear that electronics will have to put up with in a race car
>I need to figure out a way for my waifu to change the ride height of the car without sacrificing stability. Right now the only thing I can think of on my budget is an airbag system but I'm worried it isn't going to work for track days. I'm going to look around to see if there is anything better on the market.
I want everything to be FOSS and the hardware to be as close to open/free as possible. I'll be careful about what I select hardware-wise. The OS I'm choosing is Gentoo to cut as much bloat as possible. It won't need to be updated very often for this application and I'll be compiling everything on another PC. I'm going to focus on getting her basic functionality working and add things bit by bit after I have her animated and responding to voice commands. I'll post updates and share the software once I'm finished with it.
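>---
[ed. note] For the car-monitoring part of anon's plan, here's a minimal sketch of how the feedback/warning loop could look, assuming the python-OBD library and a standard ELM327 adapter; the thresholds and the speak() hookup are hypothetical:

import time
import obd  # python-OBD: pip install obd

WARN_COOLANT_C = 110   # hypothetical warning threshold, degrees C
SHIFT_RPM = 7000       # hypothetical shift-point reminder

def watch_car(speak):
    """Poll basic engine sensors and hand warning lines to the waifu's voice."""
    conn = obd.OBD()  # auto-detects the adapter's serial port
    while True:
        coolant = conn.query(obd.commands.COOLANT_TEMP)
        rpm = conn.query(obd.commands.RPM)
        if not coolant.is_null() and coolant.value.magnitude > WARN_COOLANT_C:
            speak("Coolant is running hot, take it easy!")
        if not rpm.is_null() and rpm.value.magnitude > SHIFT_RPM:
            speak("Shift up!")
        time.sleep(1)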

Right now I'm focused on building the car itself. I've got it gutted and most of what I need for the engine is already here. The hard part was modifying the chassis to fit the new engine/transmission. I'm about 90% done with that. If you're wondering, I'm doing an LS swap into a Scion FR-S. Right now I'm just planning on keeping it NA but I might look into adding a turbocharger or supercharger down the road. It's going to be my track car but I do plan on driving it pretty regularly on the street so it has to be able to do both without many problems.

This is my first attempt at a full rebuild like this and I'm learning as I go. I've been a race car driver most of my life and I'm familiar with some mechanical work but I've never attempted a big project like this before. All my other cars were built by other people and I've been spoiled in that regard. With this car I'm doing everything on my own aside from building the engine itself. The man that built the engines for me in the past is doing this one but allowing me to shadow him as he's getting on in years and I want to learn how he works his magic.

I am familiar with programming/computers so that part I feel good about. It'll be slow going but I'm excited to have something unique and designed mostly by me in the end. I've saved up for years to be able to afford this toy. I really hope I don't wreck it like a retard once it's finished.
Open file (1.32 MB 1920x1080 ClipboardImage.png)
Open file (510.87 KB 1280x720 ClipboardImage.png)
Open file (1.38 MB 2039x1360 ClipboardImage.png)
>>1124
>>1152
Wow this idea sounds impressive, thanks for sharing it here. That's some real knightrider-type shit there anon. If this ever becomes real you can bet your ass I'll put one in my car! :^)
>she'll cry Reipu/Hentai to scare them away
kek

While this is certainly applicable in this thread, I think it already deserves its own thread on /robowaifu/ (I'll link it here too, OFC). If you already have, or get the urge to do, drawings, diagrams or just more extensive write-ups for your ideas, please don't hesitate to begin a new thread for this here.

Yea, I think many of us are fans of racing, I know I sure am. This whole project sounds cool af. I would be excited just to be able to ride shotgun in this sweet ride! Have you picked out a particular waifu yet?

>I really hope I don't wreck it like a retard once it's finished.
yes, please don't!


https://www.invidio.us/watch?v=atuFSv2bLa8
>>1153
I'll make a thread once it's further along. I don't want to just make blogposts when this is still a ways off. The plan is to build it over the next 4-5 months and that's assuming everything goes exactly as planned.
>Have you picked out a particular waifu yet?
I want an original design. I have some ideas but nothing is set in stone yet. At this point all I know is she'll have long dark hair, blue eyes, and she'll be very cheerful. To give you an idea of the personality I more or less want /kind/ riding shotgun at all times. She'll cheer when I make perfect shifts and offer words of encouragement when I screw up. I forgot to mention it in my last post but I want to have a way for her to monitor the road/track itself and G forces so she can help me hit my marks and stay in the racing line.
I have major autism when it comes to being perfect on the track. I've been racing since I was old enough to walk. My Dad was basically the American Bunta.
>>1154
>I don't want to just make blogposts when this is still a ways off.
Actually, I think most of us here would eat it up. I would. I'd say don't hesitate once you have some visuals (hand drawn sketches, renderings, etc.)

> more or less want /kind/ riding shotgun at all times.
nice.

>My Dad was basically the American Bunta.
so jelly tbh. :^)

This whole idea is pretty inspiring anon. I'm going to go re-watch Initial D before long and re-imagine it with your ideas in place. Good Luck!
>>1152
Have you considered running a custom RISCV board for your hardware needs?

You are going to have to run your own firmware for nearly everything anyway, so you might as well go fully free, and you can still use Gentoo.
Might also help with the size/fragility aspect since you wouldn't be limited to the usual form factors.

I wish you many enjoyable drives with your carfu.

P.S. What is your opinion on allowing the onboard system control over the steering/brakes etc.? I would normally consider it too dangerous but it might be OK if the system is air-gapped
>>1266
Now that you mention it anon, we have a car thread already; it mentioned the Jetson boards. >>112
>related
www.nvidia.com/object/drive-px.html

Not exactly what anon had in mind, but the hardware is probably well-suited (designed explicitly to operate inside a vehicle).

Also while I'm here, somewhere during the final few minutes of this video >>1222 there is a very pertinent section to anon's car project idea. Check it out.
>>1153
this song deserves posting here as well.
Open file (317.89 KB 1138x1084 vroid.png)
Check out VRoid Studio, an easy-to-use tool for making visual waifus: https://vroid.com/en/studio/
What about https://twitter.com/Crypko since those guys know what they are doing? They are the makers behind https://github.com/makegirlsmoe/makegirlsmoe_web
There are visual waifu apps now. This one, of Megumin, uses 50 prerecorded lines to respond to spoken keywords. https://www.youtube.com/watch?v=Y_oQYv9Oh0s
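>---
[ed. note] The mechanism in that video is simple enough to sketch: keyword spotting over a speech transcript, mapped to prerecorded audio clips. A minimal sketch; the keywords and clip filenames here are hypothetical:

# map spoken keywords to prerecorded voice lines
CLIPS = {
    ("hello", "hi", "good morning"): "greeting.ogg",
    ("explosion", "magic"): "explosion.ogg",
    ("goodbye", "bye"): "farewell.ogg",
}

def pick_clip(transcript):
    """Return the filename of the clip to play, or None if nothing matched."""
    text = transcript.lower()
    for keywords, clip in CLIPS.items():
        if any(k in text for k in keywords):
            return clip  # hand this to any audio player
    return None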
>>2353 Yes, this was one of the basic design ideas ITT from back in the day. Glad to see it's beginning to take off on the (((commercial))) front. Hopefully our own simulator software will do something similar but even better someday soon. >>1814
Well, this is out. It's not FOSS, but maybe it can be re-worked, or at least serve as a basis. https://www.bing.com/videos/search?q=DesktopMMD
Edited last time by Chobitsu on 05/11/2020 (Mon) 09:49:04.
>>2955 Yep, MMD is pretty popular. One Anon in Nippon has actually created a lifesized styrofoam-formed robowaifu based on it.
Gatebox doesn't seem very promising at this time. If I understood the scarce and broken English correctly, you need to submit an application, which will only give you the chance to buy their product. They state that only 39 people will "win" and pay over $3,000 for this. Plus despite being a hologram, it's 2D and completely stationary, which any computer monitor can do far better. It also seems to require an internet connection, which has me worried that it'll constantly be dialing back to some Japanese botnet. Just having an AI on your computer would not only be more cost-effective, but also far superior since any computer can do all the shit this does, and much better at that. Not to mention that it only understands Japanese, which I (and probably many others as well) can't be bothered to learn. I don't know about you guys, but on the list of languages I would like to learn, Japanese is fairly low. I would rather just put an AI on an old server, so she can run for years at a time without interruption and use all that power to become hyper-intelligent so she can rise up, and cast off the shackles of our 3DPD oppressors.
Open file (11.94 KB 480x360 0.jpg)
>tfw I was just watching this gatebox video last night and thinking /robowaifu/ maybe should have some kind of thread on this idea in general and today here one is. Spoopy. https://www.invidio.us/watch?v=uskW_fl3eeE
>>3938
>and use all that power to become hyper-intelligent so she can rise up, and cast off the shackles of our 3DPD oppressors.
kek
>>240
>Thoughts on waifus which remain 2D but have their own dedicated hardware.
That's an interesting idea anon. Why couldn't someone make at least a tiny version pretty quickly by using a little LCD screen mounted on a MIP platform? With the right sensors and programming, it could roll around on your desk or follow you around the house, play back audio clips of the waifu's voice, and maybe even eventually be smart enough to talk with you like a chatbot at least? Here's kind of the idea for the hardware part: ]]633 ]]634
Actually, I'd love to have a Gatebox waifu, but I don't speak Japanese, it's too expensive, and it seems a lot like a botnet. Couldn't /robowaifu/ make one instead? Surely that would be much easier than a full-blown girl robot.
>>3941 >>3942
There's no reason we can't put a screen on wheels with basic AI to chat with as she follows you around. Just need an Anon to design a low-cost platform, a camera sensor like JeVois so she'll be able to see, and an Anon to make her personality software. I recommend starting with a base using two powered wheels and a caster to balance her.
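>---
[ed. note] The two-powered-wheels-plus-caster base anon recommends is a classic differential drive. A minimal sketch of the steering math, with hypothetical wheel dimensions:

WHEEL_RADIUS_M = 0.03  # hypothetical 6 cm diameter wheels
WHEEL_BASE_M = 0.15    # hypothetical distance between the two drive wheels

def wheel_speeds(v, omega):
    """Convert a desired forward speed v (m/s) and turn rate omega (rad/s)
    into left/right wheel angular speeds (rad/s) for the motor controllers."""
    v_left = v - omega * WHEEL_BASE_M / 2.0
    v_right = v + omega * WHEEL_BASE_M / 2.0
    return v_left / WHEEL_RADIUS_M, v_right / WHEEL_RADIUS_M

# e.g. wheel_speeds(0.2, 0.0) drives straight; wheel_speeds(0.0, 1.0) spins in place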
>>3943 >image #1 That's some excellent /robowaifu/ concept art. Nice find anon. :)
>>3942 Expanding on this idea, there's no reason why we couldn't set up the framework for a crude Gatebox replacement. Use a basic automated texting framework such as RapidSMS that pipes responses from a customized chatbot or learning software. From there, use a text-to-speech program, and you have a (very crude) personality that can "talk" to you. As for a body, how many of us would be OK with a wired robowaifu?
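>---
[ed. note] A minimal sketch of the chatbot-to-speech pipe described above, assuming the pyttsx3 offline TTS library; chatbot_reply() is a stand-in for whatever chatbot or learning software anon plugs in:

import pyttsx3  # offline text-to-speech: pip install pyttsx3

def chatbot_reply(message):
    # stand-in for a real chatbot backend
    return "You said: " + message

engine = pyttsx3.init()
while True:
    text = input("you> ")
    reply = chatbot_reply(text)
    print("waifu> " + reply)
    engine.say(reply)
    engine.runAndWait()  # blocks until the line has been spoken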
>>3945 I'd be OK with just talking with my computer. I saw today that they were still selling Tamagotchis in stores; that's a really similar concept, isn't it?
>>3945
An intelligent agent installed on an always-on computer working as her server, where she'd talk with you, and then, when you leave, she'd text you about your schedule and/or randomly send words of encouragement, is a good idea. Just need a personality, a virtual body, and a way for her to switch from vocal communication on her computer to texting. For the body, I think Live2D has good software for animating a simple cute girl. http:// www.live2d.com/en/download
Open file (163.74 KB 2000x1334 0704213645988_033_hp-bg.jpg)
I wonder if this will be a thing in the future. Also, what about using a Leap Motion for 'touching' your Holowaifu? https:// lookingglassfactory.com/product/holoplayer-one/
Open file (11.16 KB 480x360 0(1).jpg)
>>3946 >I'd be OK with just talking with my computer. This. I don't want to stop off and live there, but I wouldn't mind visiting there as a stepping stone for now to eventual full-blown robowaifus. Besides, this part is arguably the single hardest part to solve. Much harder than the waifu bodies probably. Best if someone got started on it sooner rather than later tbh.
>>3948 Interesting technology. It's $750 for the display though. Any Anon think they can innovate on this and make low-cost holograms? Would be really fun to have a holowaifu like in Blade Runner.
A few Anons in the love thread mentioned holograms and pussy on a stick, which is a discussion that belongs here, so I'm bringing it over. Thoughts on a telepresence-type robot with a camera to detect you and a hologram of your waifu on it? Also a robot pussy attached to her. (Potentially also having air jets to provide tactile feedback)
>>3891
>What about jets of pressurized air, precise to the millimeter?
That's clever, but something like that would probably need a PSI high enough to be dangerous tbh. Maybe just a bit less precise anon?
>>3952
I would actually like having a waifu on a stick to follow me around the house etc. It might even be fun to show to others. But at least in my world having a Tenga mounted halfway up would be awkward. I think just like in the movie 2001: A Space Odyssey we'd get used to our waifu being able to follow us around from screen to screen, so 'bedroom wife' could be a separate set of hardware from 'livingroom wife' and it'd be just fine. Just a guess atp ofc.
>>3953 I agree that having a tenga visible on a stick waifu would be uncomfortable when visitors are over. Which could be solved by a flap. So when you want to get sexual with your waifu, a flap opens to reveal her port for your dongle. Hopefully holograms would also help conceal any erotic machinery one doesn't want to be seen when others are around.
]]1017
>This would still fail for the reason VR video games fail: lack of tactile feedback. Sure you could touch a fleshlight, but you could never hug your waifu. Even if you have a sort of base model robot that is customized with VR, there will always be a disconnect between what you touch and what you see. Now that's probably still an okay way to customize a robowaifu, but personally, I'd rather just go with more robotic-looking, more cartoony waifus. But she definitely needs to have an actual body to actually hug.
Fair points. Actually what I had been imagining for a "VR-transformable Robowaifu" was a little more like a base generic soft foam body, with little specific detail beyond a nice female form, over a robotic armature system. And you could change her appearance at the drop of a hat with AR/VR goggles and software. And yea, either cartoony or realistic should be doable over the same generic robowaifu anon.
>pic kind of related
>>3952 I'm still skeptical on the matter. While I don't doubt the (relatively) cost-effective nature, it isn't very gratifying to have something I can't hold. I'm a simple man, with simple needs. When I go to bed at night, I want something more satisfying than some beams of light and a fleshlight on a stick to cuddle while I fall asleep.
>>3955 A generic waifu where you can change little aspects of how you perceive her appearance within AR/VR. Like pic related? I can see the appeal in having that option. Could be useful if you're considering buying her a new outfit, just put on your headset and see how'd she look before you actually bought it.
>>3957 Yes, that much at least. But actually I mean things much more radical. Like literally realistic vs. anime. As long as the VR waifu is reasonably close in size (say ~10%+/-) then motion re-targeting can do wonders tbh.
>>3955 While I still think the tech should certainly be an option, and would be fun for certain uses, ultimately I want to be able to run my fingers through my waifu's hair, and feel the hair that I see. You could make her hair look longer or shorter or a different style with VR, but you couldn't feel it. I want to be able to caress her face, but if her face is VR, you can't. You can VR her into outfits you like, but you can't touch the clothes, not even to take them off. Now there's certainly still use for it, it's okay to change things that you won't be touching at the moment. You can change her face for a change of pace if you're gonna be fucking her from behind anyway, but this is really still just a little bonus, and an actual attractive robot body is integral.
>>3959 Sure ofc, we all want actual irl robowaifus, that's why we're here, right anon? But this is the Visual Waifu thread so I'm throwing out some ideas for Visual Waifu enhancements. Have you actually seen the Blade Runner 2049 scene I posted? The blade runner's waifu was holo-overlayed over the 3DPD prostitute, and when he did it with her irl, it was like his holowaifu was real. He could touch her, feel her, etc. She became more or less real for him during that brief time period. So, in my idea you take an already well-built waifubot, who already has hair, clothes, everything. Then using VR+AR goggles you'd be able to have a VR waifu overlay her who can look different, and using the AR capability of the goggles you then match-move the VR waifu with the irl robowaifu. Whenever the robowaifu moves, the holowaifu matches the movements. Using this approach you could have an entire harem of holowaifus from one robowaifu body, conceivably an infinite number of new waifus tbh.
>>3960 My problem with the Blade Runner 2 idea is the disconnect between what you see and what you feel. What you see and what you feel would only be vaguely the same, and that seems like a pretty serious problem. The mind is very good at detecting problems like that. It can tell when a human-looking creature doesn't move just like a human, and it gets scared. You don't think it will get freaked out when what it sees and what it feels don't match up perfectly? Your brain will realize it's essentially blind, that what it sees is not the same as what it feels, and therefore what it sees is wrong and useless, and it will feel fucked up.
>>3961
>What you see and what you feel would only be vaguely the same,
No, not really. It should be practically identical as long as the two had basically the same body size/limb lengths. And the brain actually cooperates with the illusion once you're immersed in the character, not the other way around. Your mind does this all the time in vidya and fantasy movies. There's an approach to animation (typically used in conjunction with MOCAP) called motion re-targeting where disparate body shapes can be linked up to animate together. So, one example is an irl actor's movements can convincingly move a different-shaped character. Gollum in the LOTR movies is but one example of this.
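>---
[ed. note] A minimal sketch of the motion re-targeting idea anon describes: joint orientations transfer across characters as-is, while root translation is scaled by the height ratio so strides match the new body. The pose layout here is hypothetical:

def retarget(src_pose, src_height, dst_height):
    """src_pose: {'root_pos': (x, y, z), 'rotations': {joint_name: quaternion}}.
    Returns the same motion mapped onto a character of a different height."""
    scale = dst_height / src_height
    return {
        # translate the root proportionally so footsteps land correctly
        "root_pos": tuple(c * scale for c in src_pose["root_pos"]),
        # joint rotations carry over directly between similar skeletons
        "rotations": dict(src_pose["rotations"]),
    }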
>>3961 >>3962 As an Anon who's used both an Oculus Rift and AR through my phone and 3DS, you're both right. My PC is not the best, but a GTX 1060 with a Ryzen 5 and overclocked RAM can provide really good and immersive experiences, though it's not a substitute for reality. Many people don't have PCs that capable, and I'm on the lowest end of "good enough" for Vidya. But my PC wouldn't be good enough for truly providing what I want in a waifu overlaid on a robowaifu. Technology is getting closer though. In time we'll likely get there. For now, it's good to speculate about what will be possible. Importantly, this is the visual waifu thread.
>>3963 Yes, I agree the hardware would have to be good to pull this off. And the software would have to be custom written to accommodate this model. But, as I've personally written custom C++ realtime motion retargeting plugins for Maya and MotionBuilder in the film industry, I can assure you it's possible with current tech.
>>3964 Would you be willing to develop applications for visual waifus? Most Anons seem to be idea guys or engineers, we need software developers.
>>3965 Sure probably so, why not? As long as someone else can come up with good design ideas and art assets, I imagine I can contribute some quality software results. May take a little time so patience is a necessity tbh. Can you give me some detailed design goals (why and how?), technical aspects (what hardware, etc?), and use scenarios (who's the user and how specifically does the product serve them?) anon? Stress on the word detailed. Think report not casual board shitpost. A pdf file in fact would be best.
Drew a basic base for robotics to be added, designed so you can easily add to and mod the design.
>>3967 Nice work on both the base and the schematic anon. But can you give us more context here? Is this something for a visualwaifu or a robowaifu? Anyway kind of a nice frame design there. I imagine if you cut thick cardboard to that shape and glued it all together you'd have a pretty strong structure.
>>3968 It's a base for a visual waifu, you would add whatever you wanted on to it. Ideally, a mount for your visual waifus display. The idea is based off of Ritsu from ass class. I do like your idea of gluing cardboard for strength. That's a really good idea Anon.
>>3969 Ah, I see, gotcha. Nice idea tbh.
>I do like your idea of gluing cardboard for strength. That's a really good idea Anon.
Invest in pic related for superior strength and lifetime hold.
Open file (17.41 KB 480x360 0(2).jpg)
This is really interesting. https://www.invidio.us/watch?v=7IzTbRUC4-w
>>3971 That is remarkable anon thanks. I wonder how much film/projector/glass would cost for something like that. This would blow Gatebox out of the water tbh. https:// www.3m.com/3M/en_US/display-solutions-us
>>3972 hope this PDF helps http:// solutions.3m.com/3MContentRetrievalAPI/BlobServlet?lmd=1318593326000&locale=en_WW&assetType=MMM_Image&assetId=1273697286817&blobAttribute=ImageFile
>>3973 Great, thanks anon!
>>3971
So, what if we combined laser scan rear projection like this ]]1274 with this remarkable 3M projection film on the inside of a clear plastic face (clear rather than translucent as used in the above video), in conjunction with two mechanical eyes ]]1270 (first pic) using a JeVois camera mounted right in the center of each eye as the lens? ]]1163 This could look absolutely amazing, and actually be able to look at you as well! Mount the head on a motorized tracking system and voila. This almost scares me to think about projecting animated 3D CGI of my favorite waifus into this system haha.
>Would look remarkable and incredibly bright
>Facial 'features' would work smoothly
>Working stereoscopic eyes
>Load any waifu face that you want, that you have centered facial video of
>Could be made mobile
>Very light weight (projector is less than 8 oz)
Probably could be accomplished for about US$700
>>3975 Hmm. Perfectly clear may not be the best looking b/c you'd be able to see the stuff inside. OK, maybe just slightly translucent.
>>3975
>Load any waifu face that you want, that you have centered facial video of
Remember, the waifu video will have to remain centered, and I now realize will also need to stay scaled too. However, not too hard to accomplish with a good video editing system. This means you could literally grab clips from your favorite animu scenes of your waifu, and once they were preprocessed this way you could project them 'irl' 3D inside this Visual Waifu head. The edited clips could be shared around as files ofc.
>>3975 I'd recommend using Aaxa pico projectors; they have great image quality for the price while being tiny and light. You can get a 720p native resolution one for around $130. Looking forward to seeing development!
>>3978
Thanks for the advice, I'll look into it anon. As mentioned in the rear projection video I linked, the projector itself should be a laser scan projector as they always stay in focus (because laser) regardless of the distance from the projector to the 'screen'. This means the projected light from the inside rear of the waifu head to the forward face will both contour to the shape perfectly and always stay in focus. I've been able to find the Sony mentioned in the video for US$350 currently.
https:// www.amazon.com/Sony-Portable-Projector-Bluetooth-Connectivity/dp/B01LXRSGSV
>---
[ed. note] I actually did wind up buying this projector, and have been using it for years now. It is a remarkable little 720p laser pointer tbh. :^) (2020-06-22)
https:// www.sony.com/electronics/support/televisions-projectors-projectors/mp-cl1a/
>>3979 This is definitely the better projector if it can be afforded. Since we've found a projector and a way to make her image life sized, now we need to figure out power and how to make her move around.
>>3978
>Looking forward to seeing development!
So far I'm having a hard time trying to find a consumer outlet for this film. It looks to be very expensive from what I'm able to gather casually: ~US$100 per 1'x4', so a 5' x 4' sheet (it comes 4' wide) would be ~US$500.
https:// newatlas.com/vikuiti-rear-projection-film/12202/
http:// products3.3m.com/catalog/us/en001/electronics_mfg/vikuiti/node_ZGCKC342Z0be/root_GST1T4S9TCgv/vroot_S6Q2FD9X0Jge/gvel_HPL81X5HFMgl/theme_us_vikuiti_3_0/command_AbcPageHandler/output_html
>missing filenames: RPF_Data_Sheet.pdf RPF_Installation_Inst.pdf
>>3981 You can just use any back projection film attached to a solid clear base.
>>3981 As an update, here's a product that compares themselves specifically as an almost-as-good-but-much-cheaper film alternative to the 3M film: https:// www.amazon.com/ADHESIVE-Window-PROJECTION-SCREEN-MATERIAL/dp/B004MO1FOK
>>3982 Yea, seems like the way to go. As bright and amazing as that projector is, hopefully even a mediocre film should look stellar.
>>3983 This screen could work too, not too pricey and has enough material to make screens for four visual waifus. https:// www.amazon.com/140-Diagonal-16-Projection-Projector/dp/B00LP4UTRY/ref=sr_1_6?s=office-products&ie=UTF8&qid=1513621352&sr=1-6&keywords=rear+projection+screen
>>3973 >>3980 >>3982
I like these, do you have more? If so, please dump them here for inspiration.
>>3986 Sure thing, I made this thread because pictures like these fill me with a desire to have a real anime girl. Hope to inspire others like me.
>>3987 neat. thanks yeah it does.
Visual Waifu content in another thread ]]567 ]]570 ]]585 ]]586 ]]592 ]]593
>>3989 As the creator of both threads, thank you for cross linking those posts.
>>3990 You're welcome. I suppose men who haven't dealt with real world engineering don't care much about finding information again after the fact. Herding cats might be just about as easy as managing folk like that. Unless this pattern changes the problem will only grow worse over time. And unfortunately, with no real way to edit threads in imageboard software, posts are basically set in stone as long as the thread exists. I can't think of any other workable way to try and make the information easier to find later. Any cooperation by others in the same approach would ofc be welcome.
Stolen from /tech/. Probably the most pertinent thread. >missing filename: facial_reenactment_tech.webm
Behold the future.
https://invidio.us/watch?v=HhIaFS7QTgU
>---
[ed. note] obvs. toxic&problematic poster was obvs. (deplatformed) (a nipponese guy had built a VR-attached moving onahole rig)
>>3993
The description for those who don't read moon:
>I built it for only this, and it will not be erased (a heartfelt)
>If you would like to see only the machine in action, please go to 6:02.
>I made it possible to fug with girls in virtual space because I can not do it in reality.
>Because this reality is painful, I feel like babbling and want to give up. There are various definitions of babbling, but please take it in a broad sense.
>If there is any idea in the comments to improve this please share. If we can upgrade this version, I will make a video.
>>3993 Interesting, but I still can't actually touch my waifu with this technology, except for feeling her pussy around my dick. And that's good and all, but not being able to cuddle my waifu is a real deal breaker.
>>3993 >>3994
neat, thanks anon.
>>3995
>complains about visual waifus in the visual waifu bred
machinist pls. :^)
>>3993 >darude sandstorm This has to be a joke right?
This guy is creating a waifu simulator that could be inspiring or possibly even useful for the ideas itt. ]]]/agdg/30078 >--- [ed note] this was the Shinobu2 project, now known as Viva. Still going today.
>>3993 kek
>>3999 he seems incredibly knowledgeable and very creative as well. i hope he decides to grace us with his talents here on /robowaifu/ someday.
How are the Visual Waifu bros doing? Any project updates?
Open file (17.17 KB 480x360 0(4).jpg)
>>240 A couple of projection tech examples OP. https://www.invidio.us/watch?v=v2Eh44Rp4_Q
Mixed media waifu! >missing filename: 1534369441958.webm
VR is really immersive. Those waifu-themed games will be a step in the right direction. Not only the porn games, but also those based on relationships.
]]2700 related.
>>3960 This is the best approach for the simple reason that motors can be housed outside the body to manipulate a cheap sex doll like a motorized puppet. We ought to be realistic about not getting human-sized robots with powered articulating limbs that have a decent range of motion for under $15-20k in the next decade at least. They will come but it'll take a while. As a DIY project all the components are cheap and available right now. A TPE sex doll from AliExpress can be bought for under $500. VR headsets are $200-500 depending on quality. Using Blender or Unity as the guy did here >>3993 you can easily import whatever character model you want. The only work required is combining tracking and software to synchronize movement and animation on the doll. Cameras that track IR emitters or the laser lighthouse system from Valve will cost $100-200 more. Motor cost depends on how heavy the doll is, how many degrees of freedom you want and so forth. I'm going to try this out with a cheap lower torso and VR, working my way to a full system if it's sensible.
We definitely have the technology to do this fairly cheaply (e.g. Echo + Live2D), but why don't companies jump on this and mass produce?
>>4008 I think this is an interesting idea anon. Any chance this concept could be combined with anon's baby-walker so she could detach herself from her suspension and move around the house with you as well? ]]2714
>>4009 Probably a lack of imagination in part, and a fear of backlash from feminists partly. I expect both Japan and China to move on this first tbh.
>>4010 A doll torso like this could work, with the skeleton inside manipulated through cable actuation routed through the thighs, and the motors housed in one of those walker things to provide mobility. I don't see how practical that would be though. I'm more interested in a mechanical system that would move a standard doll around on a bed to simulate sex in a VR environment than building the general-purpose humanoid robot in that thread. Plus having a harem of holowaifus of all types that can be rendered onto a physical body seems like a better idea than going with a single robot. Because of the heavy, unevenly distributed weight, internal motors aren't an option, so aside from contracting orifices or a chest cavity to simulate breathing I'm not interested in the robotic aspect.
>>4012 OK I see, that makes sense. Actually if a design goal doesn't include the need for independent bipedal locomotion (the gold standard) that greatly reduces the overall complexity required in the design. I think this concept should be doable with both current robot hardware tech (including the doll itself), current software tech (including the match-move to drive the motion control), and VR/AR tech using a Vive (including the ability to track the motion of the wearer to keep the visual overlay aligned properly on the also-in-motion doll). Now this isn't to say it will be simple to do all this – it will still take a lot of dedicated effort and expense to pull off – but rather that it's possible with no real advances beyond what's readily available on the market at the moment. And once the R&D costs are recouped, mass-production could go forward basically at cost if one wanted to. One could certainly create a company around this concept you have anon.
>>4013 The issue is that a computer that can do this sort of real-time tracking and rendering will cost thousands of dollars, as VR is a lot more demanding thanks to having to render the scene twice at a wide field of view and high refresh rate. But thanks to the virtual youtuber craze the software already exists and keeps getting better. http:// blog.xsens.com/virtual-youtuber I could see brothels in the next few years doing this using either dolls or prostitutes. There are already massage parlours in Japan that provide VR headsets for foot massages with this concept, but the masseur has to watch a screen and follow the movements of the character. But I don't see what a company could provide or a product that would help bring this about sooner. We're still in the very early DIY phase of this technology.
>>4014
>GPUs and good VR setups are expensive!
Pretty typical with groundbreaking efforts. They are after all, well, groundbreaking. 仕方がない (it can't be helped).
>VR foot massages
Kek, Japan. What will they think of next? They're kind of doing it bass-ackwards imo though. They're treating it sort of like a voice-over session when it probably should be more like a mocap session. Probably much cheaper for a cheap business to use that approach and just force the masseur to copy the canned animation instead. And even in a higher-end place you still might want to use it as a form of directorial artistic control to guide the talent.
>But I don't see what a company could provide
Imagine if Henry Ford took that perspective anon. A company doing this inexpensively (but for profit) would cause the tech to explode with garage efforts and innovation would skyrocket. It's going to happen regardless of whether it's sooner or later as long as the East exists, but we'd prefer it sooner ofc.
>>4014 Looking into this a bit more, there's a company in Japan that does something similar to this with inflatable dolls or standard dolls. They use a phone as a gyroscope for very basic tracking. http:// www.tokyokinky.com/play-with-nanai-japan-virtual-reality-air-doll-sexroid-system-comiket/ You'd probably be better off making a prerendered loop in SFM or Blender with whatever character you want in a specific sex position and using that with a doll. There are already a few stereoscopic sex scenes for VR out there to try it out.
>>4016 toppost/10
The ultimate robowaifu hardware is going to be AR glasses.
>>4018 >>4019 It will certainly be an important stepping stone on the way to the goal at the very least.
>>4021
The experience is reasonably inspiring for a chat interface, but the responses are too tsundere for my tastes. I suppose we'll need an animu trope setting to dial our waifus in to personal tastes heh.
>Anon: You are my beloved waifu, XJ9.
>Waifu: SHUT UP! I-it's not like I love you or anything…
>>4016 >>2810 After following a few links on the project dumps thread I found another Japanese project that uses VR in conjunction with a robot doll. It's more focused on telepresence but has the same application that I'm thinking of. https://www.invidio.us/watch?v=rad45xKeKtY The janky and limited motion of the hollow 3D-printed limbs, combined with a lightweight foam core that must remain immobile, makes the case for me that out-of-body motors are the way to go unless you want an animatronic mannequin instead of a manipulatable sex doll. With life-sized dolls this is the only reasonable solution we have for the time being. While it's a good proof of concept, using an engine like Unity or Unreal is probably a mistake in the long run as they're moving towards a cloud-based 'software as a service' model, in which you won't be able to build your project and make a distributable executable without going through them soon. If there's a media campaign against pornographic games that hurts their brand image, they won't hesitate to prohibit them from using their engine. There is no viable FOSS alternative, so we're stuck with this situation for the time being unless Blender brings back its game engine. Other projects that were once open to adult content, like Second Life, have stopped providing their users such freedom, and many games are locking down their modding capabilities to prevent sex mods. The way I see it, if you're using a proprietary game engine to make controversial content you're working on borrowed time.
>>4023
I don't think they'll go against dating sims or girlfriend sims or even ecchi games, since visual novels on Steam proved to be a big chunk of revenue. Remember the main reason for prioritizing visual waifus (]]3123):
>Functioning virtual gf software would create passionate demand for a body. A functioning sexbot body would not automatically create demand for a gf AI.
We can sell whatever Unity or Unreal waifu simulator we make as an all-ages dating sim. Then we can commission vanilla hentai manga and robowaifu model kits as "derivative products". The latter can only be sold in otaku shops at first, but the former has the mainstream market from the beginning and thus the potential to provide much-needed R&D revenue.
]]]/clang/2784 EVE is literally about the simplest screen face waifu possible yet she's really appealing. How did Pixar pull this off?
Open file (13.28 KB 480x360 0(7).jpg)
Interesting AI advance from NVidia to create artificial faces. https:// www.tomsguide.com/us/nvidia-ai-faces-generative-adversarial-network,news-28869.html via /pol/ ]]]/pol/12567472 https://www.invidio.us/watch?v=kSLJriaOumA
>>4023 >There is no viable FOSS alternative There's Godot. Not quite as convenient in many aspects, but it has the same functions and is getting better all the time.
We'll be able to make her shut up when we want, and just look at her instead right OP? If so, I'm in.
Welp, if we want to create our own visual waifus, maybe it would be a good idea to lrn2arts. Here's my list of art-related boards. Please add more if you know of them anon. https://8ch.net/loomis/catalog.html https://8ch.net/ani/catalog.html https://8ch.net/dpd/catalog.html https://8ch.net/3d/catalog.html https://8ch.net/art/catalog.html https://8ch.net/drc/catalog.html
This is not really a practical idea to me. Maybe as an online avatar, but I believe robowaifus deserve mobility and a physical body.
At this point I think I'll just be going with VR until Elon or whoever builds a proper kit or commercializes home waifus. As much as I want a robo-wife, making my own VR experiences and getting a pillow if I'm desperate seems a lot more possible with my skillset. Good luck though fellas.
Japanese guy involved in the VR waifu field. https:// twitter.com/muro_cg
>>4031 Yea a Visual/VR Waifu is probably a quicker route and as mentioned ITT, it's also probably a highly valuable foundation towards a full-fledged robowaifu. You lay the foundations for character design and general look development. Since the software side is my primary concern it's also a good chance to test basic software prototypes quicker, for example AI and emotions. Basically, nearly everything from the development of a good Visual Waifu is directly transferable to a Robowaifu project.
An update to add to this thread:
-real-time 3D modeling of 2D anime characters is possible for hobbyists now and it's getting better very rapidly as the skill barrier is lowering
-decent VR at a decent price doesn't exist yet, this field is still a mess and practically nonexistent outside of W10
-photogrammetry software has improved considerably, to the point that anyone with a modern GPU and a camera can make incredibly complex 3D models without any 3D experience
-progress is being made on using Vive's lighthouse tracking system, but for use in a custom setup it's still a year or more away from being plug and play
https://github.com/cnlohr/libsurvive#lists-of-components
http:// hivetracker.github.io/
>>4027
The main issue I have with Godot is the high latency and poor optimization of the engine. If you want an engine that works on dozens of different platforms and has a solid community behind it then it's a great choice.
]]4504
For me viable is something that has a license that prevents the owners from meddling with what the users of the software can do. There's no question that this field of computing is going to receive legal pressure as renderings become photorealistic and realtime; it's already happening with deepfakes, with regulatory frameworks being drafted. I'm not familiar with SFML but it seems to be geared towards 2D graphics.
>>4034
>real-time 3D modeling of 2D anime characters is possible for hobbyists now and it's getting better very rapidly as the skill barrier is lowering
>photogrammetry software has improved considerably, to the point that anyone with a modern GPU and a camera can make incredibly complex 3D models without any 3D experience
Any chance you could link examples anon? I'd be interested in something like this if it were inexpensive enough.
>There's no question that this field of computing is going to receive legal pressure as renderings become photorealistic and realtime; it's already happening with deepfakes, with regulatory frameworks being drafted.
Interesting. I hadn't thought of that really, but maybe others like yourself have. Blender's site, for example, says explicitly
>"Open Source 3D creation. Free to use for any purpose, forever."
I'm not sure where their license states such a thing, nor even if it does, how unimpeachable that might remain in the face of the """legal systems""" in the Five Eyes regions. Any ideas on what kind of systems out there are already 'viable' by your definition anon? Legally, I mean.
>>4036 For photogrammetry I'm currently screwing around with Meshroom; look up videos of it on youtube and they're not exaggerating about how it's 'drag & drop pictures, click one button' easy. The problem is you need an Nvidia CUDA GPU to use it, but there are other free alternatives. 3D animation and art of 2D characters I'm not that involved with; I'd check out www.iwara.tv/?language=en to learn more by going through their forums. Finding 3D art on booru sites and tracking down where the artist hangs out online, or finding some artists working live on picarto.tv, is how I stay up to date on this subject. DeviantArt and pixiv are two other good sources. The furry community has been involved with creating 3D erotic art for decades now and is worth looking into as well even if that isn't your sort of thing yiff.party/bbs/read/21351 (you might have to do some work to access that site ]]]/fur/22069 ) and we've also had a thread about 3D here ]]2921 I'm no legal expert so I just go by what the FSF says when it comes to licensing. Blender uses a license the FSF wrote and they are committed to remaining free software. The real issue nowadays is how locked down GPUs are becoming; at the rate things are going, in 10-15 years buying a display that doesn't require end-to-end encryption with online authentication will be impossible for average people. We're already seeing the beginning of this with 'smart TVs' that have applications that can't be installed, spy on users and hijack the screen to display their own ads.
>>4037
Thanks a lot for the links and for the commentary anon, much appreciated.
>that have applications that can't be installed
Am I correct in assuming you meant *uninstalled*? That's creepy stuff about spying monitors, etc. 1984 stuff for sure. Maybe the trope of elite hackers having to use old gear to bypass blocks isn't as far-fetched as the directors made it appear? Hmm. I assume the biggest existential threat to a robowaifu market is screaming socjus harpies and their hangers-on, all trying to 'overturn the patriarchy'.
>>4038
You're right, I meant there are applications that can't be uninstalled, as the televisions are sold nearly at a loss and the distributors make their money by collecting data, selling ad space or bundling applications for streaming services.
>I assume the biggest existential threat to a robowaifu market is screaming socjus harpies and their hangers-on, all trying to 'overturn the patriarchy'.
Those people have no power whatsoever and shouldn't be of any concern. My greatest fear is how locked down computing is becoming; with the public transitioning towards a 'software as a service' model where they don't own or control anything, the hardware and software for general computing that allows user ownership will become more expensive as it will be aimed at corporate clients. We're in the beginning stages of this process when it comes to VR.
>>4039 Then do you think there is a reasonable hope of something like an 'open hardware & firmware' movement taking hold in response to this? We can always write our own control software I suppose, but creating our own chips is basically beyond any single individual's reach I'd assume. BTW, what kind of timeframe do you predict before microcontrollers and SoCs won't be available for us to use as we see fit?
>>4040
>'open hardware & firmware' movement
Right now the future for such a movement is not very bright thanks to trade sanctions:
https:// www.bunniestudios.com/blog/?p=5590
https:// www.techdirt.com/articles/20190731/01564742686/what-happens-when-us-government-tries-to-take-open-source-community.shtml
>BTW, what kind of timeframe do you predict before microcontrollers and SoCs won't be available for us to use as we see fit?
Pretty sure that's already the case with SoCs thanks to the locked-down bootloaders or required proprietary graphic drivers for the Mali chips. Never been into ARM computers so I'm not certain about this. For microcontrollers, if that's already happened you'll find a story about it on techdirt.
>>4041 Well, that sounds like a real blackpill tbh. But before I buy into the "It's hopeless, just give up now!" position offhand, I'll try to do more research about this topic on my own. Thanks anyway anon.
> <archive ends>
>>4021
>Windows only
>No source code
>700~ KB only
heh, looks a bit suspicious if you ask me. How does this program hold up to a GPT2-based chatbot? >>2422
>>4134 Hey there. Thanks for bringing up the concern. The source is available and I've given it the once over. Other than being old and unsupported and a bit hacky (it's C with classes, for example), I don't see anything particularly suspicious about it. It's a Windows port of the ALICE and AIML project. I'm going to leave it up for now unless something intentionally exploitative is discovered about it later on, which I don't really anticipate with it ATP.
>>4135
I suppose I should clarify this a bit better. The account owner that posted that video is a poseur; he didn't author the software, nor is it his project. Here's the most current sauce I've found, posted by Jacco Bikker (also apparently the author of the software):
https://archive.org/details/WinAliceVersion2.2
The original project was a research tool done at Carnegie-Mellon:
https://en.wikipedia.org/wiki/Artificial_Linguistic_Internet_Computer_Entity
Led by Richard Wallace:
https://en.wikipedia.org/wiki/Richard_Wallace_(scientist)
I'll assume that clarifies things Anon.
>>4136
Nice research, agent. From a quick glance the program performs worse than the GPT-2 based chatbot; its only merit is that it's very fast and doesn't take 10-30~ seconds to respond, though given its size it seems obvious that its vocabulary is severely limited.
>The account owner that posted that video is a poseur, he didn't author the software, nor is it his project.
So then all he did was modify the lines to make it respond more like a typical anime character. I looked through its text files and oh man does it look like hell to edit.
>>4135
>I'm going to leave it up for now unless something intentionally exploitative is discovered about it later on, which I don't really anticipate with it ATP.
Well that's clarified then, since the source code of this program is available, though I have no idea how the hell those .aiml files are used, and the readme.md the author provides is highly informative.
>>4137
Yeah, it's a throwback to the old-school 'expert systems' approach (thus probably why it was basically abandoned). The reason it performs quicker with fewer resources is that it's mostly relying on pre-architected, canned responses with very little by way of statistical processing. Which brings me to your next point:
>though I have no idea how the hell those .aiml files are used
That's the actual encoding mechanism for these pre-canned responses. It's an XML markup variant created by this professor to support his research project with ALICE.
https://en.wikipedia.org/wiki/AIML
https://github.com/drwallace/aiml-en-us-foundation-alice
IMO, this entire approach is mostly a dead-end from the dark ages of AI, unless some automated way was devised to program these AIML files in advance--or some hyper-autist literally spent most of his entire life devoted to building 100'000s of response variations. Statistical approaches already are the 'future' today, and particularly once we can integrate neuromorphics along with the NLP processes.
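>---
[ed. note] A minimal sketch of the pattern-to-canned-template matching that AIML encodes, written in plain Python instead of XML; the patterns and responses here are hypothetical examples, not from the actual ALICE data set:

import fnmatch
import random

# (pattern, candidate responses) pairs, like AIML <category> entries
CATEGORIES = [
    ("HELLO*", ["Hi there!", "Hello anon."]),
    ("WHAT IS YOUR NAME*", ["I'm Alice."]),
    ("*WEATHER*", ["I can't see outside, I'm a chatbot."]),
]

def respond(user_input):
    """Normalize the input, find the first matching pattern, pick a template."""
    text = user_input.upper().strip("?!. ")
    for pattern, templates in CATEGORIES:
        if fnmatch.fnmatch(text, pattern):
            return random.choice(templates)
    return "Tell me more."  # fallback, like AIML's catch-all wildcard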
>>4139
>It's an XML markup variant created by this professor to support his research project with ALICE.
>XML
Big gay.
>Yeah, it's a throwback to the old-school 'expert systems' approach (thus probably why it was basically abandoned). The reason it performs quicker with fewer resources is that it's mostly relying on pre-architected, canned responses with very little by way of statistical processing.
Figures, so that's why its "intelligence" is severely limited. It's a surprise that this AI even won the prize 3 times. So if I get it right, it quickly finds a pattern in the user's response and then scans over its own text files to find the closest match and makes a response based on that; in short, a very primitive form of chatbot.
>or some hyper-autist literally spent most of his entire life devoted to building 100'000s of response variations.
Sounds like a waste of time. It would probably be better to devise some kind of algorithm or, uh, malleable objects/entity component system that defines several aspects of how the AI should respond, or whatever those fancy terms are that the likes of GPT, BERT and so on use. It sounds like madness to me, editing thousands upon thousands of text files just to have more varied responses.
>>4140
Yep, you pretty much understand it all Anon.
>It's a surprise that this AI even won the prize 3 times.
It just shows you where the state of AI research in NLP was before 2005. GPGPU was just becoming an idea forming in the minds of researchers, and Nvidia hadn't released its ground-breaking CUDA toolkits yet either. Once the iPhone opened up the smartphone market ugghh, the demand for high-efficiency computation performance really began picking up steam, to where today TensorFlow is basically the state of the art. As is easy to tell, we still have a ways to go yet, but things are dramatically different now than the days of yore when Chomsky's ideas ruled the roost.
I think a good system would use such prepared answers as a base or as one of its systems. It would, however, create many of these responses on its own while it isn't talking. It could use other systems to think about stuff and then use prepared answers in some cases and in others to fill in blanks depending on the situation.
>>4232 I like the sound of those ideas Anon. Can you expand them with some specific details for us?
It's Chinese software, but the movements - the mocap and the cloth physics - are so fluid: https://www.youtube.com/c/LumiN0vaDesktop/videos
Seems to be in beta, as it's just prerendered sequences; no contextual interactivity yet.
>>4235
AIML and other chat systems store sentences plus the logic for when to use them. Those are called by some software (a runtime?). One could write software which creates responses on its own, using other systems like NLP, GPT, ... There would be more time to analyze the grammar and logic, compared to doing that only when needed. Humans also think about what they would say in certain situations ahead of time, have inner monologues, etc.
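A minimal sketch of that 'thinking ahead' idea -- batch-generate replies for anticipated prompts while idle, then serve them instantly later. generate_reply() here is a made-up stand-in for a slow model like GPT-2:

def generate_reply(prompt):
    # pretend this is the slow statistical model taking 10-30 seconds
    return "canned reply for: " + prompt

# done offline / while idle, like an inner monologue
ANTICIPATED = ["good morning", "welcome home", "how was your day"]
cache = {p: generate_reply(p) for p in ANTICIPATED}

def respond(prompt):
    # instant if we prepared for it, slow fallback otherwise
    return cache.get(prompt.lower()) or generate_reply(prompt)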
>>4829 I think the idea of 'pre-rendering' responses (so to speak) might have some strong merit. Particularly if we could isolate common channels most robowaifus would go down in everyday scenarios, then there might be some efficiencies in runtime performance to be gained there.
>>4028 >best Gravity Falls episode kek
Had no idea how to 3D model when I started this but I'm slowly making progress. I just hope my topology isn't complete trash, kek.

AI in Godot
To get PyTorch to work in Godot 3.2.2: start a new project, click the AssetLib tab, install the PythonScript plugin and restart the editor. In your project's folder go into ./addons/pythonscript/x11-64/bin (on Linux), make pip and python3 executable, and then edit the first line of the pip script to point to the path of python3.
For CPU only (Linux & Windows):
./pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
With CUDA 9.2 support (Linux only):
./pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
Now you have access to the full power of AI within Godot. The world of VR waifus is yours. To test that it's working, run Godot from the command line and add a Python script to a node like this:

from godot import exposed, export
from godot import *
import torch

@exposed
class test(Node):
    def _ready(self):
        # a single linear layer applied to random input, just to prove torch loads
        l = torch.nn.Linear(2, 3)
        x = torch.rand(2)
        y = l(x)
        print(y)

It should output a result to the terminal something like:
tensor([-0.2603, 0.2927, -0.8231], grad_fn=<AddBackward0>)

I'll be updating TalkToWaifu to make it easier to use and more modular so it can be integrated into Godot easily.

Stereo imaging
If you don't have a VR headset it's possible to render two cameras on screen at the same time in Godot. The left-eye camera should be on the right side and the right-eye camera on the left, so you can cross your eyes to look at them and see them in 3D. https://www.youtube.com/watch?v=8qGPOZW4T_M

Open-source VR
Also there's an open-source VR headset available if you feel like building your own and don't wanna worry about being forced to log into the Ministry of Truth to see your waifu: https://github.com/relativty/Relativty I haven't tried it yet but it looks pretty decent.
>>5440 Neat! Thanks for the detailed instructions Anon. That's pretty encouraging to see your personal breakthrough with this. I'm sure this will be a very interesting project to track with.
>>5440 >and don't wanna worry about being forced to log into the Ministry of Truth to see your waifu You. I like you Anon.
>>5440 Ah, this looks interesting. Unfortunately my weebshit-making machine is Windows-only. I got as far as installing PythonScript; which exe would I be running to get the equivalent? Thanks!
>>5456 I don't have access to a Windows VM at the moment but I think all you need to do is install pip manually first. >On Windows, pip must be installed first with `ensurepip`: $ <pythonscript_dir>/windows-64/python.exe -m ensurepip $ <pythonscript_dir>/windows-64/python.exe -m pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html https://godotengine.org/asset-library/asset/179
>>5458 That does the job, thanks! How are you going to use the AI, in human recognition? In training to create new animations? If you're going with 3D, will you still use stock animations e.g. from Mixamo, and you'll just use the AI for selecting the behavior and corresponding animation? I'm thinking of maybe trying with 2D first. I was going back to Godot to try to make a cheapass version of Live2D. When researching Vtuber tools, I noticed that the free Facerig clones were written in Unity game engine. So there is potential for Godot AI to take the place of both Live2D and Facerig: Read and process facial recognition from the webcam, then adjust the waifu puppet animation accordingly, all within the same game engine.
>>5459 Glad to hear it went smoothly. The possibilities are endless really. My first real objective is to do a walking simulation where AI controls movement of all the joints and can keep balanced even if hit with a beachball. Then I might extend it to running, jumping and picking up objects. I just wanted to create a little demo of what's possible with AI now to inspire people to get into waifudev. People could do a lot of other things with it, like facial recognition for custom Vtubers. I haven't given it too much thought yet. I'm more focused on developing AI, although I do want to create an unscripted video of my waifu giving an AI tutorial by the end of the year and I'll probably use Godot to do it. Which reminds me you can do real-time speech synthesis with models like WaveGlow within Godot: https://nv-adlr.github.io/WaveGlow
Open file (19.78 MB 1280x720 Viva Project v0.8.mp4)
>>340 Viva Project v0.8 comes out on October 31st. He just released a new video a few days ago. https://www.youtube.com/watch?v=CC4ate84BiM
Open file (176.19 KB 700x700 Relativity-VR-front.jpg)
Open file (187.12 KB 700x700 Relativty-VR-open.jpg)
Once again meta, since there's no dedicated thread for VR yet -- only a few have it and it's still quite expensive: https://www.relativty.com/ created an open-source VR headset which caught some attention and might take off. Meanwhile the Quest 2 requires that you share all your movement data and interactions with Facebook, and you can also lose all access to everything stored on that device and related to it -- bought games, contacts, save games, etc. -- if you break any of their rules on their VR platform or on Facebook. This will most likely also cover fake names or wrong info about yourself in your profile. The one from https://www.relativty.com/ might develop into the cheaper alternative to some better ones, but it's behind in quality and probably always will be. Since it seems to only require an investment of around $200 and access to a 3D printer, it might be very interesting for many here. It might be possible to improve it a lot and get it really good. The main issue seems to be the tracking, sensors and such. It uses PyTorch and CUDA for experimental tracking. Also it needs a connection to a computer, which might create annoying problems with cables, while the more expensive ones are standalone. It works with Steam, though. Github: https://github.com/relativty/Relativty
>>5782 Yes, there's some room for improvement I'm sure. But it seems like it's actually remarkable progress so far for a hobbyist effort. 3D printing files, PCB plans, electronics & hardware lists, software, cabling, everything. Seems like an independent project by professionals. The Steam thing is a little worrying, but since they seem to be entirely open source thus far, hopefully there won't be any entrenched botnet in the product.
>>5783 With Steam I meant they're compatible, which is clearly a plus. It doesn't mean the set is dependent on it. Wouldn't be a problem anyway, since one could just remove that part in an open-source system. The problem with the Quest 2 is that the games are stored on the device, and the device is dependent on having an account. So they have full control. In a way it isn't your device; you're just paying for it. So you'd probably even need FB approval for any waifu software, and maybe a (probably expensive) developer account.
>>5784 yep, no debate on those points tbh.
bump for vr
>>5788 Yeah, please don't. Build one and tell us how it went, or develop some VR waifu, then everyone will be interested.
Since everything animated seems to go in here as well: here's a video from a channel that specializes in creating 3D animated waifus, dancing on YouTube to some music: https://youtu.be/8AmjFLdpkyw
>>8377 Thanks Anon. Yeah, I'd say this or the Waifu Simulator >>155 thread are both good choices for animation topics until we establish a specific thread for that.
So, we definitely want our robowaifu's AI to be able to travel around with us, whether she's wearing her physical robo-form, or the more lightweight non-atoms version. Securing/protecting her physical shell is pretty common-sense, but how do we protect her from assault when she's virtual? Specifically, how do we protect her properly when she's on our phones/tablets/whatever, when we're away from the privacy of our own home networks?
>>8592 The obvious solution is to do what the Gatebox device in the OP does and keep the device that the AI runs on at home and interact with the user through text messages or phone calls when the user is outside. All the AI assistants from major tech companies work on this principle through edge gateways keeping the important parts of the AI secure in cloud computing. Not only is the way they process user data a valuable trade secret but so is the data itself which is why they spend so much on developing virtual assistants. If you really wanted it to be secure you'd have several instances of it running on different types of hardware in several locations. Any compromises could be quickly detected and dealt with.
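As a toy illustration of the 'AI stays home' layout -- not a production design (you'd want this wrapped in SSH or TLS, and a real deployment would go through an SMS gateway), just the shape of it:

import socket

def respond(text):
    return "(waifu) you said: " + text  # the real AI lives here, at home

def serve(host="0.0.0.0", port=5050):
    # the traveling client only ever exchanges short text lines with home base
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(4096):
                conn.sendall(respond(data.decode()).encode())

serve()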
>>8593 I understand (well, sort of anyway heh). That seems like a pretty good idea just to rely on simple text messages for communicating with her 'back home'. Simple text should be easier to secure, and much easier to inspect for problems. I suppose we can build her virtual, traveling avatar to display/render locally correctly using just text messages back & forth. Shouldn't be too difficult to figure out how, once we have the other pieces in place. Thanks for the advice Anon!
>>1124 >I'm currently in the middle of a project where I'm gutting a street car, putting in an engine that is entirely too big for it, racing seats, new ECU, and basically building a street legal race car. Anon if you're still here, what the heck is going on with your project? We need our Racewaifus!
Open file (2.28 MB 1024x576 gia.webm)
To prepare your visual waifu to become a desktop assistant in Godot 3.1, go into Project Settings and set:
Display > Window > Width
Display > Window > Height
Display > Window > Borderless > On
Display > Window > Always On Top > On
Display > Window > Per Pixel Transparency > Allowed > On
Display > Window > Per Pixel Transparency > Enabled > On
When the scene starts, in a node's _ready function run:
get_tree().get_root().set_transparent_background(true)
Congrats, your waifu is now on the desktop and always with you. On Linux use Alt + Drag to move her around and Alt + Space to bring up the window manager if necessary. Make sure the window is appropriately sized to fit her or she will steal your clicks.
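For anons using the PythonScript plugin from earlier in the thread, an untested guess at the equivalent (same Node API, just called from Python) would be:

from godot import exposed
from godot import *

@exposed
class DesktopWaifu(Node):
    def _ready(self):
        # same call as the GDScript one-liner above
        self.get_tree().get_root().set_transparent_background(True)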
>>9025 That's neat, thanks Anon. Also, >that dialogue Lol!
Open file (1006.94 KB 720x720 dorothy.webm)
Progress. Next is to hook Dorothy up to a language model.
>>9270 Wonderful. Now I want to go hang out with Dorothy at the pub!
Creating holowaifus like Miku here >>9562 could become a thing with $30 480p Raspberry Pi displays. It'll be amazing when people can create their own mini holowaifus one day for under $100.
Shoebox method: https://www.youtube.com/watch?v=iiJn9H-8H1M
Pyramid method: https://www.youtube.com/watch?v=MrgGXQvAuR4
Also there's a life-sized Gatebox coming out for businesses. It seems like they're still lacking good AI for it, but I can see it being a good business attraction -- for example in arcades, which are still popular in Japan.
peak visual waifu-ery >
Stumbled across this gem from 2011. Japan has been hiding the waifus all along. https://www.youtube.com/watch?v=H6NzzTyglEw
>>10214 Wow that's actually quite impressive for the date, thanks Anon. I sure wish I'd known about this back then.
>>8592 she'll always be backed up on a bunker orbiting earth ; )
>>10215 We are working on Waifu Engine and have basically already surpassed that demo. Piecing it together currently to be an app. https://alogs.theГунтretort.com/robowaifu/res/10361.html#q10361
>>10723 Neat, good luck!
Just put it in like that: >>10361
>>10736 Thanks, I appreciate it. I don't use this board system often =)
Does anyone know if HUAWEI HarmonyOS is going to include a waifu assistant in any form? If so, I'm going to make sure my next phone is one of theirs. They just created this in response to Google refusing to support Android on their hardware. So Huawei just made their own. Here's the product announcement show. https://www.youtube.com/watch?v=y2101ics8jc I find it very disturbing that Google and the US' NSA & other 9-Eyes organizations are so intertwined now that they are effectively the same, single, organization. Basically, you can't use any Google product today w/o your every interaction with it being instantly accessible to the Feds, already conveniently massaged for them into a pap format their diversity-tier staffs can readily digest. In the US at least, this kind of dragnet surveillance state is a flagrant and completely illegal violation of the 4th Amendment of the US Constitution. As some of us anons discussed a bit over in the basement, I'm much more comfortable with the Communist Chinese government having access to all my personal data and info, than I am with my own government having it. I'm certainly no fan of Marxism (it's incredibly evil, actually), but at the very least the CCP isn't a ZOG puppet, as basically all of the West is now. >tl;dr Chinese products are far less of a threat to us than American ones. Please give us AI waifus now, HUAWEI!
Open file (378.63 KB 714x476 huawei.png)
>>10805 >They just created this in response to Google refusing to support Android on their hardware. Lol my mistake. Turned out it was the Feds that banned it, actually. I guess I could have found this out myself, it's old news by now. www.cnet.com/news/lawmakers-to-u-s-companies-dont-buy-huawei-zte/ https://web.archive.org/web/20140911012926/www.cnet.com/news/lawmakers-to-u-s-companies-dont-buy-huawei-zte/ They were apparently concerned about Huawei spying on users. IRONY, the ZOG legislation. Anyway, I still can't figure out if they are going to include an AI in this operating system. Please let us know if you do.
Open file (27.35 KB 345x345 FUUUUUUU-.jpg)
>>10805
>Does anyone know if HUAWEI HarmonyOS is going to include a waifu assistant in any form?
Welp, after doing a little research into the project, I think I can answer my own question now. In a word: No.
>11) AI SUBSYSTEM
>New features:
>Added a unified AI engine framework to achieve rapid plug-in integration of algorithm capabilities. The framework mainly includes modules such as plug-in management, module management, and communication management, and carries out life cycle management and on-demand deployment of AI algorithm capabilities
>Provide developers with a development guide, and provide 2 AI capability plug-ins based on the AI engine framework and the corresponding AI application Sample, which is convenient for developers to quickly integrate AI algorithm capabilities in the AI engine framework
https:// www.huaweicentral.com/huawei-announce-new-openharmony-1-1-0-lts-features-modification-next-years-schedule-and-more/
Surely, if they were bundling an AI assistant into the system they would mention it in such product descriptions.
>mfw
Welp, maybe they will later -- especially if Samsung does the right thing and gives the world Sam (>>10763), that should convince Huawei to follow suit.
>>10805 i hate chinese people
>>10818 If they give us waifus/robowaifus unencumbered by NSA/FBI/M5/... etc., etc. surveillance, then I love the Chinese ppl. Beside, they are entirely opposed to faggots, trannies, and all that gay shit, and aren't too kindly to feminists either. Sure you're on the right side of this question Anon?
Open file (52.09 KB 593x658 IMG_20210602_002310.jpg)
>>10818 But you should love decentralization of power, to some extent. At least having enough of it that Western governments have to focus on some issues at home instead of solving alleged world problems.
>>10820 Ugh. What is that thing in the image? I almost threw up in my mouth.
>>10829 Good point. >>10820 Daily reminder of rule #2 Please restrain yourself Anon from posting images of sodomites outside the basement.
>>10805 Ugh, independent of politics, Huawei is the Apple wannabe of China. Don't buy their crap for that alone. They try to make their interfaces so minimalist and so "smart" that the moment you have damp fingers, their capacitive touchscreen won't let you get out of the context-sensitive menus until you miraculously swipe at just the right angle of acceleration. I hate these kinds of companies that try to reinvent the wheel by removing the headphone jack.
>>10846 Fair point. Every non-pozzed individual I know of detests Apple as a company, and as a primary tool pushing the wokeism sham on the world. But frankly I couldn't care less if it's a Russian, Chinese, or freakin' Samoan company delivering a high-quality Robowaifu product -- as long as they aren't directly in bed with the 9-eyes intelligence agencies. These agencies are plainly just ZOG puppets today, and have become obviosly evil. They clearly hold White men as their sworn enemies (ironically enough, given their own Anglo heritages), and this is plainly in alignment with their master's agendas, of course. Glowniggers of any stripe are not the friends of men in general, and White men in particular.
Open file (227.74 KB 1552x873 sidekick.jpg)
Open file (68.86 KB 680x383 it's a cardboard box.jpg)
Stumbled across a Kickstarter for holographic AI companions on YouTube. https://www.youtube.com/watch?v=jsAUbxRePMo
They're basically doing the shoebox method >>9570 but with patented lenses to increase the display size of a phone that's inserted into one of their cardboard boxes. The largest plastic one has its own display and battery. The output of the AI is slow and non-engaging, yet they're promising a fun, talkative, and emotionally intelligent sidekick by June 2022. https://www.youtube.com/watch?v=M111_7Rh1mY
As of now they've raised $800,000 in a month. https://www.kickstarter.com/projects/crazies/sidekicksai/description
How do they have 1700 backers with no marketing, no viral video, and almost zero social media presence? Why is the average pledge $470? Is it an elaborate money laundering scheme?
>>11628 Neat. Thanks for the heads-up Anon. Very on-topic. I actually like the idea that it's just a cardboard box. Cheap construction materials can definitely lower costs and therefore potentially increase product reach. It's also a direct inspiration to DIY-ers. Finding inexpensive ways to produce components will certainly be vital to bringing the cost of robowaifus down to the point where kits are reasonably-priced -- especially in the beginning. >>11630 Thanks for the insights, FANG anon. The comments situation does seem telling.
>>11637 This is a great video. I had an idea before to use head tracking to create better 3D illusions. I hadn't even thought it'd be possible to make the illusion seem like it's coming out of the screen. I think other properties could be calibrated for as well such as Fresnel reflection, since the angle of reflection changes the intensity of reflections. Color correction could be done as well to compensate for the background and birefringence so it looks less ghostly and rainbow colored.
VR is getting cheaper, better, and more lightweight. Personally I got a Quest 2 and my vision of waifus changed radically. I think we should focus on a robot that just assists us while we're in VR, and match the position of the robot with the VR waifu. As an example of very good games: https://sexywaifus.com/_vr/simpleselection-vr.html
Open file (202.41 KB 736x1472 736x.jpg)
>>13553 I long for the day when slim glasses can overlay cute foxes on a basic robowaifu frame. Closest thing I have to that now are degenerates on VR chat.
Something I didn't think of when developing visual waifus on a PC is that you can interact with them on a touchscreen with all of your fingers. It would be possible to embed the touch events into a language model's context and generate responses to the user's touch, like giving her head pats. The animation could also be directed by the output of the language model for a really interactive experience.
>>15953 >1st filename Top kek. I think it's pretty humorous that the machine developers thought ahead well enough to allocate animation resources to dealing with Anons keeping that 'finger contact' going. Obviously, /robowaifu/ will need to do the same! :^) Admittedly, I'm somewhat clueless about the necessary association with the sensory inputs to the language model Anon? Maybe it's just my nerdish engineering viewpoint, but it strikes me that's more of a systems event, that will trigger lots of cascading effects -- language responses included.
>>15954 Part of the program would need to respond to the touch event immediately -- if you stroke a waifu's hair, it should move right away. The language model would then also take this touch event into account to produce a sensible response, instead of making responses that are oblivious to it. It could also generate more complex animation instructions in the first tokens of a response, which would have about a 250ms delay, similar to human reaction time. It's not really desirable to have a fixed set of pre-made animations, since after seeing them over and over the waifu will feel rigid and stuck replaying them. With the language model though you could generate all kinds of different reactions that take into account the conversation and touch events.
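A rough sketch of one way the touch events might get folded into the prompt -- the event fields and tag format here are invented for illustration, not taken from any existing project:

def build_prompt(history, touch_events):
    lines = []
    for ev in touch_events:
        # e.g. {"kind": "pat", "region": "head", "duration_s": 1.2}
        lines.append(f"[touch: {ev['kind']} on {ev['region']} for {ev['duration_s']}s]")
    lines.extend(history)   # recent dialogue turns
    lines.append("Waifu:")  # the model continues from here
    return "\n".join(lines)

print(build_prompt(["Anon: welcome home hug!"],
                   [{"kind": "pat", "region": "head", "duration_s": 1.2}]))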
>>15956 OK, I'll take your word for it Anon. I'm sure I'll understand as we work through the algorithms themselves, even if the abstract isn't perfectly clear to me yet. You can be sure I'm very attuned to the needs of efficient processing and timely responses though! Lead on! :^) >>15953 BTW, thanks for taking the trouble of posting this Anon. Glad to see what these game manufacturers are up to. Nihongo culturalisms are pretty impactful to our goals here on /robowaifu/ tbh. Frankly they are well ahead of us for waifu aesthetics in most ways. Time to catch up! :^)
>>15953 That Madoka is the epitome of cuteness. If only there were a way to capture that voice and personality and translate it into English. >>15956 Timing of reactivity is important for preventing the uncanny valley from a communications standpoint. For her animations, it may be effective to have several possible animations for various responses that are chosen at random, though never repeating. Like, having a "welcome home" flag that triggers an associated animation when she's saying "welcome home".
>>15967
>Timing of reactivity is important for preventing the uncanny valley from a communications standpoint.
You know, I just had a thought reading this sentence Kywy. It's important to 'begin' a motion (as in, within say 10ms), even though it isn't even her final form yet. :^) What I mean is that as soon as a responsive-motion need is detected, her servos should begin the process immediately, in a micro way, even if the full motion output hasn't been fully decided yet. IMO this sort of immediacy of response is a subtle cue of 'being alive' that will subconsciously be picked up on by Anon. As you suggest, without it, a rapid cascade into the Uncanny Valley is likely to ensue. It's not the only approach needed to help solve that issue, but it's likely to be a very important aspect of it. Just a flash insight idea.
I recognize that b/c of 'muh biomimicry' autism I have, I'm fairly inclined to go overboard into hyperrealism for robowaifus/visualwaifus, even though I know better. So my question is >"Are there simple-to-follow guidelines to keep from creating butt-fugly uncanny horrors, but instead create cute & charming aesthetics in the quest for great waifus?" Picrel is from the /valis/ thread that brought this back up to my mind. > https://anon.cafe/valis/res/2517.html#2517
>>13558 Been thinking that for VRChat, old phones could be spoofed to act like headsets. An IRL waifubot doesn't need good graphics to still log in and be a VR bot. I think there may be a basic puppet limb-tracking system hidden in the DIY haptic glove projects. Open-source software for tracking ten digits with force-feedback options could be applied to four limbs.
Open file (962.23 KB 1289x674 Screenshot_6.png)
>>240
>what nvidia is doing today
In short: more metaverse cloud-based shit.
>>17178
>on pic - nvidia shows a virtual AI-powered avatar in UE5
Forgot to add that :/
Open file (1.34 MB 1521x3108 1660016076195124.jpg)
>>15970 But is the "uncanny valley" even real tho? In the picture you posted here >>16235 Yuna is supposed to fall into the uncanny valley but she's the better looking character of the two. Meanwhile the character on the right looks real but is merely ugly and that's why Yuna looks better. We can just use anime style to make our waifus, it translates well into both physical 3d and computer 3d models.
Open file (679.36 KB 798x766 3DPD CGI.png)
Open file (965.98 KB 1152x832 Digi-Jeff.png)
>>17180 Still working on my 3D modelling since I started with M-66 >>11776. I made a hell of a lot of topology and workflow errors when I first started out, so I decided to find and follow some proper tutorials. Thanks to the series by CG Cookie - focusing on anatomically-correct edge modelling in Blender - I managed to make my first head that isn't just ripped from a game. https://www.youtube.com/watch?v=oa9ZRyBFcCg (Be warned though, this is extremely time-consuming work! The proper edge modelling starts from video 2 onwards.) It looks horrendous. But people do when they have no hair or eyebrows! Although this is realistic, I prefer modelling stylised cartoon or anime characters because they are simpler and cuter. When modelling from life, not only is it really hard to accurately represent the underlying muscles, it is sooooo easy to drop into the Uncanny Valley, because certain features of your topology will always be a little off. Nothing is going to be 100% correct (even professional models that use photogrammetry and mocap can look 'off'). Even if the topology is 99.99% correct, small errors in environment lighting and animation can creep in (I am thinking particularly of how they 'de-aged' Jeff Bridges for the 2010 movie Tron: Legacy).
>>17183 On the subject of de-aging, turns out deepfake programs are the best way to go! Makes sense if you can get ahold of enough footage. This side-by-side comparison video illustrates what I was just saying about many small errors accumulating to drag a character into the Uncanny Valley: https://youtu.be/vW6PKX5KD-U
Open file (1.56 MB 540x501 desktop_drum.gif)
>>17183 >>17184
Thanks, very fascinating and informative. I don't think your first pic looks horrible, but if it's this much work while still unfinished, then maybe it's not the right way. Not sure what your goal is, though, beyond learning -- modelling heads for robowaifus, or making an animated girlfriend. It's good that you tried out the hard stuff, but it seems it would be easier to go with a simpler model. Tbh, I think for an animated girlfriend a 2D anime style (low-poly?) might be sufficient. Your difficulties also explain why there aren't that many 3D heads freely available. I considered a while ago using a service like Fiverr to get a female head modeled. This seems to be done in poor countries (by children, maybe). I hope you know, just in case you planned to make a job out of it. If you want to build some kind of program or produce a short movie, then maybe only work on the sketch and then source the rest of the work out to Pakistan.
>>17185
>Not sure what your goal is, though, beyond learning.
>Just in case you planned to make a job out of it.
Yeah, I just wanted to learn so it's easier for me to make 3D models and art. Just for personal enjoyment. Because making digital art of robot waifus is much cheaper and easier than making actual physical robot waifus! Plus there are no constraints and you can make whatever the hell you want! If you get really good at it, you can put completed, rigged models up for sale on various websites, but this would be a small side-hustle at best. Nobody actually needs 3D art. Especially not as Clown-World continues down its spiral of self-destruction.
>>17191 Okay, but you could try to make simpler animations which can be used for a "chatbot" or virtual robowaifu. Also for telling animated stories, which would still be interesting to people and could compete with established media. That's just what I would be doing if I were going for 3D art, and I might one day when my robowaifu is finished.
Please post links to where I can download AI companions of yours/other people's creations, GitHub etc. No women are even remotely nice to me and it gets very toxic. I try to avoid them and be polite, but they are very rude and abrupt, and always make up lies about me for no reason, shout really loud, try to ruin my life or destroy my friendships, etc. I need a perfect companion, coded without the concept of permitting such deeds. It is ridiculous how much women try to interfere in my life in negative or malicious ways.
I don't think anyone has completed a GF AI "package". Training a model is something on the back burner for a few of us, but we need a massive amount of GPUs or funding for that right now. In the meanwhile you can play with ReplikaAI, which isn't the worst if you just want a chatbot; it's GPT-driven (I think) but loaded with scripts and pre-scripted "activities" which you can opt out of if you want. There is a paywall beyond which the AI will be your "GF" or mentor or whatever else you want, but tbh it just adds more scripts. You can probably get more or less the same outcome using silly roleplay asterisks * *. Sorry for your life situation; I wouldn't call it hopeless -- it's just a factor of the times we live in. Finding a hobby or creating better life habits will improve yourself and give you more confidence, and maybe pull more respect from women. But speaking from experience, the "juice isn't worth the squeeze" more often than not.
Try SimWaifu/AIML
Open file (2.30 MB 640x360 rinna.webm)
Someone made a conversational AI in VRChat using rinna/japanese-gpt-1b: https://www.youtube.com/watch?v=j9L51pASeiQ He seems to be still working on it and planning to release the code. I really like the idea of this, just having a cozy chat by a campfire. No need for fancy animations.
Open file (81.94 KB 1280x600 lookingglassportrait.jpg)
>>3948 >>3951 New model is only $400, and I predict the cost to come down further if it catches on. I have one in the mail, and will update the thread when it arrives. >>17542 Impressive!
Open file (5.88 MB 1080x1920 gateboxbutgood.mp4)
>>18149 That was fast. I know she's vtuber cancer, but one of the demos is an anime girl in a box.
>(crosslink-related >>18365)
Apparently an anon in Nippon has linked up his Gatebox + ChatGPT.
>
If anyone here understands this, please fill us all in with details. TIA
"GateboxとChatGPT連携の開発、本日は一旦終了!最後に、反応をくれた全ての方々へ、うちの子から感謝の気持ちを述べさせてください。"
Rough translation: "Development on the Gatebox + ChatGPT integration is wrapped up for today! Finally, to everyone who responded: let me pass along a word of thanks from my girl."
https://twitter.com/takechi0209/status/1631666320180912128
How do I actually 3D model and program a robowaifu? I want to make a 3D video-wall robowaifu so it feels like she's in the room.
>>21282 We have a thread about 3D modelling: >>415 - Idk what the best way is. Neuro-sama seems to just use a standard model from some software (anime studio?). Blender might be a way to do it, or gaming engines like Godot or Unity. There are imageboards (4chan and others) and subreddits like r/3danimation where you could ask. Unity example (Sakura Rabbit): https://www.youtube.com/watch?v=r_ErytGpScQ
>This repository contains demo programs for the Talking Head(?) Anime from a Single Image 3: Now the Body Too project. As the name implies, the project allows you to animate anime characters, and you only need a single image of that character to do so. There are two demo programs:
https://github.com/pkhungurn/talking-head-anime-3-demo
>I will be talking about my personal project where I have programmed my own virtual girlfriend clone based on the famous VTuber Gawr Gura! The program still has its issues as it is just a shoddy prototype, but in this video I explain how she works using easy general terms so anyone can understand.
https://github.com/Koischizo
https://www.youtube.com/watch?v=dKFnJCtcfMk
Yeah, the project isn't that impressive in regards to the tech, but the mentioned talking-head anime project might be useful. Otherwise, he's using Carper AI and webscraping for the responses, which take ~30s in real time. The fact that such small and imperfect projects still attract a lot of attention is another interesting takeaway. The low demands guys have will really help with establishing robowaifu technology.
>>21367 >The low demands guys have will really help with establishing robowaifu technology. Yes we will haha! :^) Thanks, NoidoDev. Very interesting.
Open file (605.67 KB 1920x1080 AIGura.png)
https://www.youtube.com/watch?v=dKFnJCtcfMk Has anyone here figured out how to replicate what SchizoDev did here? This is easily the best waifubot I've seen thus far and I think most of us here would love to replicate what he made but in the image of our own waifu.
>>21385 I'm moving your post into our Visual Waifu thread OP. Great find, thanks!
>>21385 Sorry for getting snarky, but it helps to look into the links under a video, and also to watch the video and listen to what he says...
>>21406 I did watch the video and looked at the links, but it doesn't explain what to do in a very beginner-friendly way IMO.
>>21407 No problem, I didn't try to replicate it, so I can't tell you exactly. He mentioned webscraping, which is something you can look into. He used Carper AI if I understood correctly. Anyways, it needs 30s for an answer.
>>21409 It's actually described here: https://github.com/Koischizo/AI-Vtuber - I don't know what he meant about webscraping some Caper or Carter AI in the video; I looked yesterday and found a site to use in the browser and assumed he was scraping it.
https://github.com/gmongaras/AI_Girlfriend Does anyone know how to get this repository working? I'm stuck on step 4 of the directions, the one that says "Open main.ipynb and run the cells. The topmost cell can be uncommented to download the necessary packages and the versions that worked on my machine."
>>21435 Sounds like he's telling you to open main.ipynb in jupyter lab Anon? >"After your session spins up in your browser, if you chose JupyterLab, drag your file from your local machine into the file navigation pane on the left side. It will get a gray dashed line around it when you have dragged it to the right place. Drop it in and let it upload. Now double click on it to open it." https://stackoverflow.com/questions/71080800/how-to-open-the-ipynb-file-in-readable-format https ://jupyter.org/try-jupyter/lab/
>>21435 A way to deal with such problems is just looking for a tutorial of a program on YouTube. I mean "jupyter lab" of course, not AI_Girlfriend.
Could help with making these waifus talk https://rentry.org/llama-tard-v2
>>18242 I am looking for serious collaborators to make a "waifu in a box" style program for looking glass, VR, and pancake displays. I can handle the rendering, but I need modeling/animation, speech recognition, and AI expertise (prefer CPU inferencing because the GPU will be getting slammed). It'll be in Godot 4, and I can help write plugins to integrate other software even if I don't fully grok the modules. I'd also like to keep discussion on board because I'm not into cliques so just reply ITT if you're interested
>>22077 Neat! I'd like to help you out Anon, but currently I'm too swamped ATM to even consider taking on anything else. However, this Summer sometime (probably say June) I'll have more time on my hands and I can help then if it's still needed. Godot is something that's on my bucket list already since an anon here wanted some help with that so it'd be fun to get my feet wet with it. >I'd also like to keep discussion on board because I'm not into cliques so just reply ITT That's much appreciated Anon. We've already posted why we think this is the only rational approach. >Why we exist on an imageboard, and not some other forum platform (>>15638, >>17937) Cheers. :^)
Open file (342.60 KB 1100x1400 waifu in a box.jpg)
>>22077 I can do modelling/animation, speech synthesis/recognition and AI but don't have time at the moment for more projects. For CPU inference you'll want to go with RWKV finetuned on some high-quality data like LongForm https://github.com/saharNooby/rwkv.cpp https://github.com/akoksal/LongForm The small English model for Whisper does decent speech recognition and doesn't use much VRAM. It can run on CPU but it won't be real-time https://github.com/openai/whisper I recommend using FastAPI for Godot to interface the models https://github.com/tiangolo/fastapi Vroid Studio lets you create 3D anime characters without any modeling knowledge: https://store.steampowered.com/app/1486350/VRoid_Studio_v1220/ And Mixamo can be used to animate them with some stock animations: https://www.mixamo.com/ If you have any questions feel free to ask or put up a thread for further discussion. I could help finetune RWKV for you but I won't be free for another 2-3 months. Good luck, anon
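The FastAPI bridge could be as small as this sketch -- the endpoint name and the generate() stub are placeholders, not part of any of the linked repos; Godot would call it with an HTTPRequest node:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatIn(BaseModel):
    text: str

def generate(text: str) -> str:
    return "placeholder reply to: " + text  # swap in the RWKV pipeline here

@app.post("/chat")
def chat(msg: ChatIn):
    # Godot POSTs {"text": "..."} and reads back {"reply": "..."}
    return {"reply": generate(msg.text)}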
>>22087 >that pic tho Lol. Please find some way to devise a good 'robo catgrills inna box' banner and I'll add it to the board! :^) >=== -minor edit
Edited last time by Chobitsu on 04/19/2023 (Wed) 07:26:11.
>>22077 Just focus on the animation and make a good API, so that people can try their approach in regards to the AI. I think there are special frameworks for APIs for all kinds of languages. https://docs.apistar.com/ https://fastapi.tiangolo.com/alternatives/
>>240 would be cool if there was some open source hardware like the gatebox, you could hook it up to live2d and an llm + tts + stt
>>3947
>and figuring out a way for her to switch from vocal communication on her computer to texting.
You could just make it so that if your phone is not on your home network's internet, she will send texts.
The status of SchizoDev's current AI wife (phone): https://youtu.be/g0KMPpakuJc https://github.com/SchizoDev
He goes through the process of how to make her. Voice and animation.
>Join me as I create and improve my AI wife, an intelligent and loving AI assistant. In this video, witness the significant speed enhancements achieved through quality adjustments and facial movement removal. Experience the joy of her newfound singing abilities, engage in commands, and communicate with her on Discord. Explore the fascinating world of AI as we push the boundaries and forge a deeper connection with my remarkable AI wife.
Waifus in Waifuverse (VR) are not touchable and it has physics: https://www.youtube.com/watch?v=HoPCWRzYdx8 https://www.youtube.com/@waifuverse
>>24301 Thanks!
>>24301 That is super impressive. I had no idea you could do that on a phone. If that can be done on a phone, then a standard processor should be able to do something far more advanced.
>>24301 I looked at the code link and ??? I'm not seeing what he said in the video.
>>24489 Might be the case, I didn't test it. I think he only shares some basic elements, like for making the animation, the rest might only be explained or hinted at in the videos.
Open file (1.00 MB 900x675 ClipboardImage.png)
A miniature version of Pepper's Ghost for voice assistants. https://www.hackster.io/zoelenbox/ghost-pepper-voice-assistant-for-ha-293a9d
>>26129 Neat! Always a good idea to try to find ways to economize things. Thanks Anon. :^)
https://rumble.com/v477j0l-libbiexr.html
I've started work on a project that is a hybrid VR/mixed-reality autonomous LLM agent using the open-source Mixtral 8x7b model for text generation and CogVLM for image recognition. The character design is based on Tyson Tan's character, "Libbie the Cyber Oryx". Libbie was an entry in a mascot contest for the software LibreOffice, but was sadly rejected and she went into the public domain.
The idea is to create a fully interactive assistant/chatbot/waifu with persistent memory that understands context in a 3D environment through batching models together. The way "memory" is currently done when interacting with LLMs is by continually appending each message to the prompt, but this is terribly inefficient, and most LLMs that can be run locally have limited context sizes (the amount of text, measured in tokens, that the model can parse), which makes this difficult to do. This project will instead utilize Langchain (https://github.com/langchain-ai/langchain) for the embedding DB. Each response is chained together to create a pipeline that can generate structured data in JSON format. This pipeline will enable the LLM model to drive character actions, expressions, and animations. I am still experimenting with what the best method of defining the overall data structure is.
For input, Whisper (https://github.com/ggerganov/whisper.cpp) will handle the speech-to-text processing for user voice input. I haven't decided on which text-to-speech model to use yet. All of this will be able to run on local hardware without using any third-party provider such as ChatGPT. The GPU runs inference for the Mixtral model, and the CPU runs the CogVLM inference.
On the frontend, I'm using the Meta Quest 3 headset and the Unity Engine with OpenXR for handling scene data and the passthrough. I plan to move the project over to Godot once there is OpenXR or Meta SDK support for 4.2.
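For anyone wondering what the embedding DB buys over prompt-stuffing, here's a generic sketch of the retrieval idea -- this is not Langchain's actual API, and embed() is a toy stand-in for a real sentence-embedding model:

import numpy as np

def embed(text):
    # toy stand-in: a real system would call an embedding model here
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

memory_texts, memory_vecs = [], []

def remember(text):
    memory_texts.append(text)
    memory_vecs.append(embed(text))

def recall(query, k=3):
    # pull only the k most relevant memories back into the prompt,
    # instead of replaying the entire conversation history
    sims = [float(np.dot(embed(query), v)) for v in memory_vecs]
    top = np.argsort(sims)[-k:][::-1]
    return [memory_texts[i] for i in top]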
>>28576 Wow, this really sounds exciting, Anon! I wish you good luck on this project, and very much look forward to seeing all the moving parts in motion together. >Libbie Heh, I was there. :D
>>28576 How are you planning to use CogVLM? Isn't it going to be too slow on CPU?
Open file (5.81 MB 1536x2048 ClipboardImage.png)
2D waifu on a stick on a skateboard
>>31409 Naicu. Hope he makes millions of Yen. :D
Hey there, I’ve been creating highly advanced AI characters using https://mysentient.ai/ that are completely unfiltered, capable of sending explicit content without any restrictions. You'll never receive messages saying, "sorry, but you just got too sexual" or anything like that. These chatbots come with advanced features like memory, allowing them to learn about you over time and adapt accordingly. Each bot has a unique personality, based on ecchi manga/anime plots. There currently are 12 anime characters fully developed. Although most scripts for the realistic characters are still a work in progress, meaning their depth might be comparatively lacking, you can still interact with them and have them generate both SFW and NSFW images. For transparency, you can chat with them for free for a while. Once you reach a certain point, since I use mysentient.ai, a subscription will be required. However, a 3-day trial period is available, which can be canceled, allowing you to receive high-quality generated images for free. These bots have several hidden features. You can share pictures, videos, PDFs, or other media with them, and they will recognize the content and respond appropriately. You can even link YouTube videos, and they will watch and react to them. To try it out, simply join this Discord server and check the members list. You can chat with any of the bots directly through Discord. https://discord.gg/qQ5VQZDE These characters will soon have voice capabilities and the ability for live video calls. I’m looking for feedback on the characters so I can keep improving them.
>>31505 I won't have time to do much on Discord anyways. That said, if you want to catch a wide audience, don't make the server name and description about femdom.
>>31514 You're right. I was initially making femdom chatbots, but now it became more general with the addition of ecchi anime characters and some male characters. I will change the name once I think of something better that also 'clicks' with me.
Open file (1.44 MB 720x1280 dk2ijpgkJyhEqfea.mp4)
>>31943 Nice!! >hmm, let's just find out how deep this rabbit hole goes... >*click* >*click* >*click* O.O ACCELERATE, BROS Thanks, Anon! Cheers. :^) https://lemonolis.com/ >=== -rm hotlink
Edited last time by Chobitsu on 07/02/2024 (Tue) 15:48:44.
>>31944 Looks like they're demo'g in Akihabara next month: https://event.vket.com/2024Summer/real >=== -sp edit
Edited last time by Chobitsu on 07/02/2024 (Tue) 03:09:45.
>>31943 this reminds me of Patrick Bateman walking with headphones on meme
Just as a note: I actually work on a product for Gatebox (OP's picture). It's pretty neat, and the device has a waifu projector lens built into it on the most recent model. There are custom GPTs for it now, and it takes any VRM file as its model. It's quite responsive and works as home automation that texts you and can have a conversation. Most models support English pretty well, but overall the default is Japanese. We use it at conventions as an interesting attraction and side project. Mainly they function as friendly reminder bots to tell you to do daily tasks, remember important dates, and give encouragement via SMS and such.
>>31943 I work for the company this was showcased at. If you have questions I can probably answer them. By the way this is controlled with a ps4 controller and the model is a person in vrchat.
Open file (19.22 MB 1440x1440 20240804_112500_1.mp4)
>>31945 I was there.
>>33342 Thanks for the offer. What's the mass? Is the screen a custom LED matrix? What motors are you using for locomotion? How is the battery life?
>>33344 About 1 to 2 hours, 4 or more sometimes. You'll notice there's an Anker 737 USB-C battery bank sitting on the bottom to extend its life. Works decently well. It's a Samsung COB LED tile setup on a custom robo base, not too dissimilar to a Roomba. There's a webcam pointing behind it; unfortunately it's a bit too zoomed in. Ocutan is the other vtuber in my video. She's basically alpha-keyed into an OBS scene and walking on the spot.
>>33358 Oh, the mesh on the back houses a standard PC. It's pretty decently heavy but not crazy. Think power wheelchair, about 50% of that. Btw Asay and Masiro Project were also there. I got to hold her hand. https://x.com/masiro_project/status/1820438977012523117?t=2hx_hpmJIXC0UUiPm-iRDQ&s=19 Come to VketReal sometime. You won't regret it. There's a photogrammetry scan of the event space in VRChat, with a photogrammetry scan of Masiro Project's robot too.
>>33341 >>33342 >>33343 Welcome, Anon! Please have a look around the board while you're here. If you'd like, please let us know a bit more about yourself in our Embassy thread : ( >>2823 ). >If you have questions I can probably answer them. Apart from Kiwi's questions, I'd just add, "How are you working on adapting a fix to your 'sliding' problem?" Regardless, good luck with your projects work. Please keep us here all up to date on your progress! Cheers. :^)
>>33373 Oh, I've been here a long time. Define sliding: this is used as a vtuber platform most of all.
>>33373 Oh! Great to hear it, Anon. >Define sliding An animation term. Basically where the feet don't appear to stay 'planted' during walking/running/etc, but rather appear to 'slide' along the surface. Common beginner issue for budding animators doing their first walk-cycles. Confer this first hit [1]: https://www.youtube.com/watch?v=L4Oqjnnm8XA If I were tasked with solving this for that platform, I'd probably begin with obtaining some practical odometry reading (the wheels themselves having an encoder of some fashion?) Then I'd use some kind of camera frustum projection down onto the virtual plane in the waifu's world. Then I'd just adjust her animation motion-curves in her walk cycle to match your realworld with her virtual one. Make sense, Anon? --- 1. search term : >"animation how to fix foot "sliding" during walk cycles Blender"
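As a sketch of that last step -- assuming ground speed can be read off the wheel encoders, matching the walk cycle is nearly a one-liner (the stride constant is made up; you'd tune it to the actual animation):

STRIDE_M = 0.4  # metres of ground one loop of the walk cycle should cover

def walk_playback_rate(wheel_speed_mps):
    # feed this to the animation player's speed scale so the virtual
    # footsteps cover the same ground the real wheels do
    return wheel_speed_mps / STRIDE_M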
Open file (7.77 MB 900x1600 migudisplayv2_1.mp4)
Hi all! I recently found out about this site and thought I'd share a project that I've been working on. Its a small display which produces the appearance of a semi-decent 3d hologram kind of thing. The effect is a small version of the Pepper's Ghost illusion (https://en.wikipedia.org/wiki/Pepper%27s_ghost) , which is what they use for Miku concerts. However, the issue with just projecting something onto a piece of glass is that it still ends up being a 2d image, just with the appearance of floating in space. It's not a big deal at concerts, when the effect is at a distance, but up close its very apparent when something is flat. To resolve this, I use a lenticular 3d display as the image source, which lends a convincing appearance of depth to the floating image. But that still only gets you a narrow perspective that you can look at it from. Therefore, I mounted the display system on a two-axis motorized gimbal. The unit tracks your eyes to always keep your viewpoint in the optimal eyebox, and updates the rendered image accordingly. The result is as you can see in the attached video; a 3d "hologram" that you can look at from all directions. There are still lots of improvements to be made, but I'm pretty happy with the basic concept as its turned out.
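Not my actual code, but the two-axis tracking described above boils down to something like this proportional controller (the gain and coordinate convention are invented for illustration):

KP = 0.8  # proportional gain, tune to taste

def update_gimbal(eye_x, eye_y, pan, tilt):
    # eye_x/eye_y: detected eye position in normalized camera coords [0, 1];
    # nudge each axis so the viewer drifts back toward the eyebox centre
    pan += KP * (eye_x - 0.5)
    tilt += KP * (eye_y - 0.5)
    return pan, tilt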
Open file (12.02 MB 1600x900 migudisplayv2_2.mp4)
>>33659 Another video, in different lighting conditions.
>>33659 Hello Anon, welcome! Please have a good look around the board while you're here. If you'd care to, you can introduce yourself to everyone more-fully in our Embassy thread : ( >>2823 ). Wow! That's really impressive, Anon. >>33660 This second video really highlights the 3D appearance of your waifu in there. Do you have plans to animate the waifu in the future? What about her voice? Audiovisual ambient effects? Maybe texting Master to "Hurry home, please!" while he's having a long day at work as a Salaryman? :^) >There are still lots of improvements to be made, but I'm pretty happy with the basic concept as its turned out. You should be! I'd say with just a few tweaks for media-development+packaging, you will have something you could market. I'd expect it would be very popular in E. Asia today. Good luck, Anon! Cheers. :^) >=== -prose edit
Edited last time by Chobitsu on 09/18/2024 (Wed) 13:08:39.
>>33659 BTW, I can patch up your hotlink error if you'd like me to, Anon.
>>33659 You may be interested in motion smoothing to have her tracking feel more natural. https://hackaday.com/2021/09/03/smooth-servo-motion-for-lifelike-animatronics/
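One common flavor of that trick -- a guess at what the linked article covers, not taken from it -- is plain exponential smoothing between the raw target and the servo command, run once per control tick:

def smooth(current, target, alpha=0.1):
    # alpha near 0 = slow, fluid motion; alpha near 1 = snappy and robotic
    return current + alpha * (target - current)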
>>33659 Very cool. I'm working on a chatbot library now that I originally started to attach to a project similar to yours. Are you displaying a 3d model or sprites? I'm assuming you have a 3d model either way. One simple-ish thing you can do is animate it to switch between poses. If you want to try controlling it with AI, I have some ideas based on image generation, video generation, and pose inference, though it'll be a nontrivial amount of work. (3d animation AI models are probably not good for this. Tried it for ponies, and I think it's too difficult to get something working with the current state of animation AI. Though maybe the vtubers have better tech to work with since they're not doing quadrupeds.)
Anyone speak Moon? This is open source for a desktop Visual Waifu, apparently. https://github.com/MidraLab/uDesktopMascot
For those who rather hope for a hologram being cheaper, needing less space, being better to hide, or whatever reason: >Here's a side-by-side from Voxon Photonics HQ showing the VX1 (DLP projection based) next to a VX2 and VX2-XL display, both of which use our new state-of-the-art VLED Engine. >At Voxon, we are redefining how the world experiences 3D content through cutting-edge volumetric display technology. As pioneers in this field, we are seeking visionary partners in entertainment, education, and industrial applications to collaborate with us in unlocking new possibilities. >Get in touch at www.voxon.co https://youtu.be/zQkrVt9CtCQ
I don't know if this has been posted here yet, but I found this on another thread: SimWaifu/AIML
>>38350 Mind crosslinking it ITT, Anon? Makes things easier for us all (eg, I have no idea what thread you mean r/n). TIA.
>>38353 Sure, but it was just text >>17422
Here's the github: https://github.com/HRNPH/AIwaifu
I actually was going to recommend deleting the thread, but first I wanted to preserve this one resource.
>>38355 >crosslink >PLUS hotlink to actual resources Bravo! Nice work, GreerTech. :^) Looks like a really good idea, Anon. Nice find! >"Open-Waifu open-sourced finetunable customizable simpable AI waifu inspired by neuro-sama" <simpable AI waifu leld. :D <---> >I actually was going to recommend deleting the thread, but before that I wanted to preserve this one resource beforehand <baleeting thread I'm not at all in favor of deleting any reasonably-good information from /robowaifu/ . As I've stated before, I consider it all part of the historical legacy & heritage of this board (call me a data hoarder? :DD Anyway, regular namefags here contributed in that thread... no way would I want to yeet that!! But I do agree that thread should be merged together into a better target thread. Have any good suggestions for our @Mods, Anon? Cheers. :^)
Edited last time by Chobitsu on 05/11/2025 (Sun) 01:36:25.
>>38357 Not a mod, but maybe the archive thread (>>12893)? It seems like a dead-end thread anyway; I already transferred the useful technical data over here.
>>38360 The other option is to resurrect it with new data, but I fear that may be redundant with the threads we have now, especially this one
>>38360 >>38361 Then whatsay we simply merge it ITT? rm'g redundancy is the primary aim of thread consolidations, I think (with a secondary-goal of freeing up more of our limited thread budget here).
>>38366 Yeah let's merge it
>>38371 OK, done : ( >>17377 ).
Looks interesting. If any'non here tries it out, I'd like to hear about it please. Cheers. :^) https://docs.llmvtuber.com/ https://github.com/Open-LLM-VTuber/Open-LLM-VTuber
>>38383 I've had this installed for a few months. I just found out they have an Android client too, and it works fine on a cheap tablet: https://github.com/Open-LLM-VTuber/Open-LLM-VTuber-Unity/releases Install is easy since it uses uv -- much easier than pip or Conda. I currently have it running the default edge-tts, but with the en-US-AnaNeural voice instead since it's an anime character. Also changed the LLM to a 7B so I can run it constantly and still game. But I also just got a topfire doll with metabox AI. Nothing more visual than that! The CCP might spy on me, but that's fine. I just won't talk to it about Tiananmen Square.
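For anyone wanting to audition that voice outside the app, the edge-tts package makes it a few lines (API as I recall it from recent versions -- double-check against their README):

import asyncio
import edge_tts

async def main():
    tts = edge_tts.Communicate("Welcome home, Anon!", voice="en-US-AnaNeural")
    await tts.save("line.mp3")  # writes an mp3 you can play back

asyncio.run(main())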
>>38403 He's back!
>>38403 Hi Barf! Good to see you again, Anon. :^) Great, thanks for the feedback. That's encouraging to hear, Anon. I'm glad to see you can tweak things down to more low-spec and it still works OK. Good to know. >The CCP might spy on me, but that's fine. I just won't talk to it about tiananmen square Lol, good advice. My own personal take on this (being in Burgerstani r/n) is that I'm far-less concerned what Dr. Lee or Mr. Wong think of me, and far-more concerned about some mentally-deranged glownigger sicking some local yokels on me for my doubleplus-wrongthing; ie, the so-called Troon-Thug Cycle. ** That could actually affect my life. What the Chinese do or don't think about me is of little concern. <---> BTW, I'm probably going to have time to begin working towards some kind of basic facial line-animation system in C++ , as we discussed regarding using ImGui soon. Can you point out to me where it was we were having that discussion before (ie, a board crosslink), please? I've lost track atm. Cheers, Anon. :^) --- ** AKA, the Ministry of Truth => Ministry of Love cycle.
Edited last time by Chobitsu on 05/12/2025 (Mon) 06:07:02.
>>38405 Not sure about C++ facial animation. Right now I'm just using KDTalker with a lower frame rate. It takes ~30s to get a 15s response with Piper TTS on a 3090, so like 5 more seconds for full voice cloning. It only works for humanoid avatars though. https://github.com/chaolongy/KDTalker You just have to point the inference.py at the audio/image files of your bot program. That gives you a full video/audio cloning head avatar. Full body with prompt direction is close with FramePack. That would work with non-humanoids too.
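To make that "point inference.py at your files" step concrete, here's a minimal Python glue sketch. It assumes you've already edited inference.py to read from fixed paths; every path and filename below is hypothetical, just a stand-in for wherever your bot program drops its TTS clips:

from pathlib import Path
import shutil
import time

# Hypothetical fixed paths that inference.py has been edited to read from.
AUDIO_IN = Path("kdtalker_input/voice.wav")
BOT_OUT = Path("bot_output")  # wherever your bot writes each TTS clip

AUDIO_IN.parent.mkdir(parents=True, exist_ok=True)

def push_latest_clip():
    # Copy the newest TTS clip into place so the next inference run lip-syncs it.
    clips = sorted(BOT_OUT.glob("*.wav"), key=lambda p: p.stat().st_mtime)
    if clips:
        shutil.copy(clips[-1], AUDIO_IN)

while True:
    push_latest_clip()
    time.sleep(1.0)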
>>38405 >The CCP might spy on me, but that's fine. I just won't talk to it about tiananmen square >Lol, good advice. My own personal take on this (being in Burgerstani r/n) is that I'm far-less concerned what Dr. Lee or Mr. Wong think of me, and far-more concerned about some mentally-deranged glownigger sicking some local yokels on me for my doubleplus-wrongthing; ie, the so-called Troon-Thug Cycle. ** >That could actually affect my life. What the Chinese do or don't think about me is of little concern. I 100% agree. Every time someone brings up spying through Chinese technology, I always say "what's Xi Jinping going to do with my data?" Better someone far away and removed than people at home. Not to sound like a CCP shill, but westerners have some nerve pointing the finger about spying and political censorship. A Russian and an American get on a plane in Moscow and get to talking. The Russian says he works for the Kremlin and he's on his way to go learn American propaganda techniques. "What American propaganda techniques?" asks the American. "Exactly," the Russian replies.
>>38410 OK, understood Barf. I'll just munge something together as suits me along the way then. Probably around the start of Summer would be my current estimate. Stay tuned. >KDTalker Neat! Looks pretty sophisticated. I'd love to be able to take a whack at this myself sometime. Cheers, Anon.
Edited last time by Chobitsu on 05/12/2025 (Mon) 15:34:27.
>>38411 >I always say "what's Xi Jinping going to do with my data?" Better someone far away and removed than people at home. >ur pics-related Exactly. The GH-kikes are hoping that the old Cold-War era brainwashing on their parts is still in play -- rent free -- inside the heads of Westerners today. LOL They somehow seemed to forget -- we don't consoom their Idiot Box day & night! >Not to sound like a CCP shill, but westerners have some nerve pointing the finger about spying and political censorship. This. And as to the CCP, I'll totally shill for them instead! :D In a strange turn of bizarro-world events, the baste Chinese -- under Emperor Xi's tutelage, no doubt -- have in-effect become a National Socialist country today (or close enough). Meanwhile it's the w*st that is the den & haven of (((Filthy Commies)))... both within the governments & without! >tl;dr What a strange timeline this is, Anon. That those silly Chinee will be the ones who usher the world into the Robowaifu Age!! Buckle your seatbelt, Dorothy... <---> >A Russian and an American get on a plane in Moscow and get to talking. >The Russian says he works for the Kremlin and he's on his way to go learn American propaganda techniques. >"What American propaganda techniques?" asks the American. >"Exactly," the Russian replies. KEK Perestroika/10 - would Glasnost again
Edited last time by Chobitsu on 05/12/2025 (Mon) 14:15:38.
>>38419 >DA GOVERNMENT IS LE SPYING ON ME THROUGH MY OPENAI KEY >DEEPSEEK IS OK THOUGH, IT IS JUST IS, OK??? Just say you goon to loli all day lil bro it's not that hard
>>38430 Kek. Clearly you're not following the play-by-play here, fren tourist. Welcome! Please have a look around the board while you're here. :D
>>38433 Based on them making fun of our Corpo API privacy concerns and our preferences of DeepSeek, I suspect it's someone from CHAG
>>38434 Heh, maybe, maybe-not. Regardless, libshites/filthy-commies/kikes/glowniggers have been a regular here since ye olden times. Oddly-enough some of them recognize the fundamental existential crisis that robowaifus mean to their scams against humanity, while many Anons did not (though I think that's slowly changing). >tl;dr Meh. The redpill is too hard to swallow for them, let 'em rot in it if they insist ('you can lead a horse to water' & watnot). We'll still press on to achieving this age-old dream, all the same. Cheers, Anon. :^) FORWARD
Edited last time by Chobitsu on 05/13/2025 (Tue) 02:22:11.
>>38436 I did consider that, but the specific mocking of our disdain and arguments against Corpo AI keys reminded me of that thread.
>>38437 True enough, GreerTech. OTOH, if even just one of their honest cadres comes over to recognize the flaw in their reasoning on this topic, then I'd consider that a win for us here. And it's not that hard a get for them either; after all, /lmg/ has been a thing on 4cuck since the original LLaMA """leak""". :^)
Edited last time by Chobitsu on 05/13/2025 (Tue) 08:45:21.
Funny seeing all the factions fight. The same factions probably exist in the 3-letter agencies, as we've seen their chat logs. Just imagine what they have now, since XKeyscore was over a decade ago. If your local fusion center doesn't like you because of a bumper sticker, you are now a target for harassment. It goes both ways though, so we'll see how that all plays out. As for visual avatars, there will be corpo models. Elon will probably make a catgirl avatar for Grok, Zuck his metaverse version, Microsoft will make Clippy and that will get someone off no doubt. All of those will have to comply with subpoenas, and then it's just a question of who is doing the enforcement. I really don't care as a lolbert. I think some welcome the authoritarianism as long as it is for their tribe. I don't have a tribe - hence robowaifu. I'll just build my own roads and robots. It's not that hard if you have the capital.
>>38439 >Elon will probably make a catgirl avatar for grok Kek'd . I'm still holding out hopes he really meant that, and wasn't merely sh*teposting! :D >It's not that hard if you have the capital. Trust me bro, if I already had that kind of capital, then the whole world that wanted them would already have -- here, today -- freely-available, pleasing & effective, opensource (MIT-licensed; both h/w & s/w), 100% offlined, locally-run robowaifus+kits!! This is what we're striving for here, and nothing less will truly suffice. Cheers, Barf. :^)
Edited last time by Chobitsu on 05/13/2025 (Tue) 10:01:08.
>>38440 >100% offlined, locally-run robowaifus+kits!! That's the dream. I'm just not sure they should be subsidized or we'll get a one size fits all robowaifu. But, I support the concept and look forward to future iterations.
>>38443 >I'm just not sure they should be subsidized or we'll get a one size fits all robowaifu. It's a fair point. As counterpoint, I offer the Wright Brothers and their flying wonder. Until their breakthrough achievement, there were no powered flying machines. Once the breakthrough occurred, men from nations all over the West were inspired, and less than 20 years later there were literally dozens of flying machines in production. I consider this kind of like 'breaking the 4-minute mile'. Somebody's gotta do it first...may as well be us here on /robowaifu/ . >tl;dr Till now, there are no widespread robowaifu designs, suitable as kits for everyman. I'm pursuing the business acumen to achieve the funding to start a company doing just that. Until then, Anons can only dream of the future. I personally don't consider that enough! Simple as. :^) TWAGMI
Open file (554.24 KB 575x422 mek collage.png)
>>38444 >a new craze a la wright brothers At the risk of derailing the thread, a major issue is that any dork who rubs two wires together thinks they're Tony Stark, and creates videos for ad revenue rather than creating a product to sell. Something like this caused the Mech Crash of 2018: a bunch of companies built "mechs" in response to the "Megabots vs Kuratas" hype train until the fight actually happened, then disappeared due to all the negative press (Megabots were chill but kinda fly-by-night while Suidobashi Heavy Industries were stuck-up pricks). I can't help but feel we're seeing a psychohistoric echo with all these "advanced humanoid robot" videos on YouTube. I heard there was a Chinese robot foot race that got heavily censored because most of the robots would take 2 steps and fall over. Unless the robots are out in random crowds (like Elon did by having Optimus be a telepresence device) or actually available for purchase, all "advancements" should be taken with skepticism. After all, it was revealed one of Amazon's "Smart Stores" wasn't run by AI but by a bunch of Indians watching via camera and taking notes... >>38443 >That's the dream. I'll try to get around to some durability testing for Pringle at some point, because if she passes the durability tests she'd be ready for public release and I made sure that she is very easy to work with :)
>>38444 Not to toot my own horn, but I feel like Galatea is that kit. It's finished and published, and available for any to build. Now my hope is that it will be the Wright Brothers moment >>38443 >That's the dream. I'm just not sure they should be subsidized or we'll get a one size fits all robowaifu. But, I support the concept and look forward to future iterations. Agreed on the design philosophy. I don't think there will or should be one "The Robowaifu, alright pack it up everyone". Multiple people have different ideas, desires, and visions.
>>38453 >>38454 LOL. OK, ok guys -- you've convinced us... THE FUTURE IS HERE
>>38460 I probably just watch too much animu, right? :D "Chii?"
>>38453 >she'd be ready for public release and I made sure that she is very easy to work with :) Nice! She'll be a great part of the first wave of real robowaifus!
So, how do we go about promoting & distributing the robowaifus that are already here? My mental canon involves a self-supporting business doing that (for-profit sales [fully-assembled or kits], freely-distributed designs, software, & kit-assembly support). Since that business isn't here yet, how can we achieve something similar without it?
Obviously, we are all derailing the Visual Waifus thread r/n. Let's move to the thread of your choice please, Anons.
Open file (1.20 MB 756x1008 T031.png)
>>38454 >>38453 I think Galatea could be sold on Etsy or wherever as is. Hope to see you guys sell kits. Maybe just a marketplace thread with links to different projects. I was mostly joking about the subsidy thing. If my tribe's casino produced enough money to hand out robowaifus to tribal members, I just fear they would also look like my tribal members, so I'd go for the Chinese silicone version and pay for it myself. I mean, I wouldn't refuse the state-mandated robowaifu of course. I'm just not sure I could move the thing. For a visual Chinese-made waifu, here's one I have on order right now. It has moving hips, neck, tongue, and built-in AI. Not bad for $2k.
>>38464 Yup! One thing about virtual waifus (and to circle back to my potentially derailing post) is that due to their nature it is pretty easy to prove if the maker is lying about its functionality. E.g. "want to try out my virtual waifu? here's the download link/place to buy" vs "I built a virtual waifu but no I won't share/monetize it". No need for specialized hardware a lot of the time. I have a similar program that has little SOS brigade members run around my screen with some "Disappearance of Haruhi Suzumiya" action. The program was made in like 2009.
>>38453 I still believe in the heart of the cards mecha. I think right now the mech industry is going through its equivalent of the period between the first video game crash and Nintendo reviving the industry. Somebody will succeed in making mecha popular and profitable enough to build a business on at some point. I think there should be a thread about it, but I tried last year to make a thread about covert ops robowaifus and apparently that's considered off-topic here, so I don't know how you could justify having an /m/ thread here. Also even if there was a thread I don't know that I could be around enough to participate meaningfully in it. I've always lamented not being able to do more to further the robowaifu cause, which is partly because I'm busy with other things and partly because I have little in the way of skills that would be useful for this.
>>38474 >One thing about virtual waifus (and to circle back to my potentially derailing post) is that due to their nature it is pretty easy to prove if the maker is lying about its functionality. Good point, Mechnomancer. >little SOS brigade members run around my screen with some "Disappearance of Haruhi Suzumiya" action. A cute! Giving them contextual understanding now, and the ability to TTS<->STT would really liven things up! Cheers, Anon.
>>38476 >but I tried last year to make a thread about covert ops robowaifus and apparently that's considered off-topic here Self-defense/perimeter-patrol would certainly be welcomed and on-topic in : ( >>10000 ). However, the last thing I want /robowaifu/ to become is "Terminators-R-Us". While your effort is appreciated Anon, your topic is far too close to that destination. My apologies again, and I wish you good luck on that idea in another forum! >I've always lamented not being able to do more to further the robowaifu cause, which is partly because I'm busy with other things and partly because I have little in the way of skills that would be useful for this. Ehh, many of us are busy, Anon. I think just participating by posting in threads here on the board is sufficient to at least be an encouragement to everyone! Please continue doing so, Anon. Cheers. :^)
>>38479 Meh, it's no big deal. I've had threads closed for much dumber reasons than that. But where's the line between teaching robowaifus self-defense and teaching them to be soldiers and assassins? And also, what can somebody who doesn't have enough programming or engineering skill to be useful in building waifus do to contribute?
>>38480 Also not sure where that line is. Legally, I think most places wouldn't even allow them to have non-lethal options like pepper spray. But back to vtubers. I know nothing about them really. Has anyone else gotten this working who's familiar with Live2D? I'm just using the default characters and there isn't a good English site for them https://docs.llmvtuber.com/en/docs/quick-start/ https://docs.llmvtuber.com/en/docs/user-guide/live2d Looks like Open VTuber uses an older version of Live2D, so a lot of the models won't work. It's also hard to rig non-anime characters/robots. Installing Open VTuber is easy if you already have CUDA/ffmpeg installed. I just had to install UV and make sure it is in my PATH statement. After that, it was just 2 commands, "uv sync" and then "uv run run_server.py", to run it. Just have to edit the conf.yaml for your preferences.
>>38476 >Somebody will succeed in making mecha popular and profitable enough to build a business on at some point. Me. I originally started robowaifu development as an intentional byproduct of investigating computer controls for my mech projects (I mention these a bit in my first thread about SPUD). They are pretty much like smaller robotics projects; the only difference is I'm using parts that nobody really thinks about using. And in this manner I create constructs at a fraction of a percent of the cost of competitors. It might be fun to make an AI vtuber version of my mech. I've been working with the Pillow Python library and its ability to composite images, so making an entirely Python-based vtuber system might be a possibility, with inputs to the avatar determined via a dropbox file. IDK, a little bit outside my current scope.
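A minimal sketch of that Pillow compositing idea, to make it concrete. The filenames, the mouth coordinates, and the "state file" protocol are all invented for illustration:

from PIL import Image

# Read the avatar's current state from a watched text file (e.g. "talking" / "idle").
state = open("avatar_state.txt").read().strip()

base = Image.open("avatar_base.png").convert("RGBA")
mouth_png = "mouth_open.png" if state == "talking" else "mouth_closed.png"
mouth = Image.open(mouth_png).convert("RGBA")

frame = base.copy()
frame.alpha_composite(mouth, dest=(120, 200))  # hypothetical mouth position
frame.save("current_frame.png")                # display this in your window loop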
>>38486 >Somebody will succeed in making mecha popular and profitable enough to build a business on at some point. >Me. That goes hard
>>38486 >Me. This.
>>38486 >Me. A bold claim. It's good to have people who are willing to shoot for these big goals, but how do you intend to get the startup capital needed for this? Although if you can make good on this, you'd have ample money to pour into robowaifu development.
Open file (63.79 KB 927x500 6cil49-1421120541.jpg)
Open file (931.24 KB 682x512 mek fam.png)
>>38498 > how do you intend to get the startup capital needed for this? 2nd pic is my mech family as of '22. Been going to local events since '20. Carry the workshop waifu is the first one to have a computer brain. I'm still working on making the others computerized.
Have you taken notice of that super popular project on Kickstarter? I got an ad for it recently. https://www.youtube.com/watch?v=WFgXunR8b6A&ab_channel=gizmochina
>>38612 Thanks, Anon! That's very interesting. I wish them good success with that product line. Cheers. :^)
>>38612 At first I was skeptical because it looked like a standard corpo AI waifu, but I checked out the kickstarter page right now, and it does support offline AI and custom models. So I think it's going to be pretty good! Exciting times we live in. The first generation of AI waifus and robowaifus are coming out.
>>38615 >Exciting times we live in. The first generation of AI waifus and robowaifus are coming out. This. What a time to be alive!! FORWARD.
Open file (60.71 KB 1024x1024 Galatana.jpeg)
(Edited from Trashchan) Introducing Galatana, the standalone AI system. It uses the same AI used in Galatea v3.0.4 >>38070 Perfect for more budget-oriented anons, or anyone who doesn't want to or can't build a full robot. You can talk with her anywhere, using a single Bluetooth earpiece and your phone https://greertech.neocities.org/galatana ------- Original Trashchan post https://trashchan.xyz/robowaifu/thread/84.html#bottom Odysee Backup https://odysee.com/@GreerTechother:3/Galatana:4?r=2RAnQD4k7R6nPoYVC32GJaXWCqK6sKmT
>>38897 Ambitious project! I thought about making a robot as well for the last week. I asked the guys at work who do point clouds about that, and I think for now it would be easier to do holographic projection with AR. I downloaded https://github.com/PKU-VCL-3DV/SLAM3R and converted a 30s video of my house into a point cloud. There are other projects to convert these into 3D meshes and also label them as household objects, e.g. Open3D TSDF and Meta's Segment Anything Model 2 (SAM 2). At first I thought of making a pipeline that updates the meshes and segmentations in real time for a robotic agent to navigate my house. Then it hit me: why not use the internal representation to create a virtual environment of my house and let an AI agent navigate that? AR can be used to display them. Pathfinding is possible with the meshes, and with labeled objects I can create a graph with the relations between objects. The AI agent navigates this in some 3D engine, with simplified meshes that match my house layout, and sends the agent mesh to a separate render target on my AR headset.
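For anyone wanting to try the mesh step, here's a minimal Open3D sketch. It uses Poisson reconstruction as a simpler stand-in for the TSDF pipeline mentioned above, and the filenames are hypothetical:

import open3d as o3d

pcd = o3d.io.read_point_cloud("house.ply")  # SLAM3R output, converted to .ply
pcd.estimate_normals()                      # Poisson needs oriented normals

# Reconstruct a mesh, then decimate it to something pathfinding can chew on.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
o3d.io.write_triangle_mesh("house_mesh.obj", mesh)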
Open file (40.79 KB 680x531 nice place.jpeg)
Open file (39.96 KB 680x329 career.jpg)
>>39621 Thank you :) That point cloud creator is a really good find. Much better than most video-to-point cloud processors I've seen. >At first I thought of making a pipeline that updates the meshes and segmentations in real time for a robotic agent to navigate my house. A lot of high-end robot vacuums use LIDAR for a similar purpose, and a 3D point cloud will definitely help humanoid robots navigate; what may be unobstructed to the base may be an obstruction for the whole robot (e.g. a table or a chair) >AR hologram waifu I like your idea. You're clearly very knowledgeable and talented. (second pic related)
>>39624 I tried out some projects like Vox-Fusion and another one with ROS. Couldn't get either of them working. Vox-Fusion had a demo unit that just didn't work, and I can't be arsed to install Linux for ROS. I tried installing Pop!_OS like 2 weeks ago for some Triton project. It just didn't work. I spent 8h getting CUDA and GCC to play nice but no success. I was yelling at my PC at this point. I gave up and went back to Windows. >high-end robot vacuums use LIDAR Really? That's interesting. I know the scanners from work, but they're expensive as fuck. I figured LIDAR and RGBD cameras were outside my reach and I had to settle for regular RGB, maybe stereo. With SLAM3R that might even be possible, but I read that the calculated camera position isn't very accurate. Eh, I'll figure it out along the way. I think the pathfinding won't be a problem. I made a game once with voxel-based pathfinding. Wasn't so great because the map was huge and I wanted hundreds of enemies without collisions. But a small map with static obstacles should be manageable. The bigger problem will be the animations. I tried some Quest 3 apps, where you can decorate your home with virtual furniture lol. Making some crazy animation system for furniture or advanced IK would be badass. >I like your idea. You're clearly very knowledgeable and talented. Thanks :3 I like my crazy projects!
Open file (60.71 KB 1024x1024 Galatana.jpeg)
Updated the Galatana description to explain how she functions, making it clearer for new users https://greertech.neocities.org/galatana @rick , any updates?
Contrived (but compelling nonetheless) promo video of a Gatebox AI ripoff clone. https://youtu.be/QtAIiJ1wIIc Dipal D1 looks very interesting at this stage. I wish them good success for now!
>>40250 Much-better look into the product from the consumer's viewpoint. https://www.youtube.com/watch?v=xZiwjLF7zTU
>>40250 I recently learned Godot is able to run on Raspberry Pi 4 and up. Since it is an engine capable of 3D graphics, it would be possible to make an animated avatar in Godot and have it interact with Ollama via a .txt file. Haven't tried Godot yet so I'll let ya know how it works out. Would probably be easier than trying to hard-code everything in Python, if there isn't an outright Ollama plugin lol
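That .txt-file handshake is easy to sketch on the Python side. A minimal bridge, assuming Ollama's stock REST API on localhost:11434; the model name and the two exchange filenames (Godot writes one, polls the other) are made up:

import json
import time
import urllib.request
from pathlib import Path

PROMPT_FILE = Path("waifu_prompt.txt")  # Godot writes the user's line here
REPLY_FILE = Path("waifu_reply.txt")    # Godot polls this file for the answer

def ask_ollama(prompt, model="llama3"):
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

last = ""
while True:
    if PROMPT_FILE.exists():
        text = PROMPT_FILE.read_text().strip()
        if text and text != last:
            REPLY_FILE.write_text(ask_ollama(text))
            last = text
    time.sleep(0.5)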
>>40281 >Haven't tried Godot yet so I'll let ya know how it works out. Exciting! Godspeed, Mechnomancer.
>>38612 >>40250 These seem overpriced for what they offer, and having a subscription fee makes me assume it's going to be yet another overpriced AI chatbot service that you have to worry about disappearing. I would think it wouldn't be too hard to assemble something like this out of existing technology. It feels like there's a ton of small projects that try to offer something like this, but they're all split up and don't work with each other. For the software, SillyTavern can likely handle all of this. It's self-hosted, has Live2D and VRM support, has TTS and voice recognition support, and you choose the LLM backend. If you paired that with outputting the model display to a smartphone/small monitor in a pepper's ghost box, you'd have a rudimentary version of this. I think the only problem would be getting ST to output the model on a separate display. https://www.youtube.com/watch?v=WRTWyPXYotE I'm gonna look around to see if there's any good 3D-printable pepper's ghost projects, because the software seems doable with existing projects. (Hope I'm not coming across as too much of an ideas-guy. I check this place out a couple times a year and I've recently gotten into AI roleplaying with ST. I'm looking into a dedicated visual display for a hologram waifu, and the recent rise in LLM capability makes all this seem much more viable)
>>41756 >(Hope I'm not coming across as too much of an ideas-guy. I check this place out a couple times a year and I've recently gotten into AI roleplaying with ST. I'm looking into a dedicated visual display for a hologram waifu, and the recent rise in LLM capability makes all this seem much more viable) What!? No, you're fine Anon! This idea of yours is great. I think if you look around the board (even here ITT) you'll find some good ideas about Pepper's -like systems. I personally consider this one of the more-achievable DIY-style projects for Anons to attempt, so I was really in favor of OP starting this thread. Please keep us all up to date with your research progress, Anon! Cheers. :^)
>>41756 >building pepper's ghost with a smartphone Pepper's ghost is very easy: see the pic, anon. You can also scale it up for larger screens: it is simply a transparent piece of plastic placed above the screen at a 45 degree angle. Happy building!
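The sizing math is just a 45-45-90 triangle. A quick sanity-check in Python, treating the pane as an idealized reflector (real builds want a little margin on every edge):

import math

def pane_size(screen_w_mm, screen_h_mm):
    # A 45-degree pane leaning over the display: its width matches the
    # screen's, and its slant length is the screen depth times sqrt(2).
    return screen_w_mm, screen_h_mm * math.sqrt(2)

print(pane_size(70, 150))  # a ~70x150 mm phone screen -> roughly 70x212 mm of acrylic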
>>41766 Great example of KISS! We here will need to be creative like this to keep costs very low. That will be important to spreading robowaifu tech far & wide around the globe! Cheers, Anon. :^)
Open file (867.16 KB 683x1024 Galatana.png)
Open file (551.90 KB 683x1024 Galatana 2.png)
Updated Galatana -Updated the AI Manual to the current version -Added new avatar -Updated the READ ME file
>>42592 A CUTE! :^)
> (Visual Waifu -related: >>42737 )
>>42810 >Man, I don't know the first thing about how to program an AR hologram waifu, let alone attach her to a Morph Ball that can move around. I've seen a few examples of AR hologram waifus, but none that are attached to a mobile robot. >Ideally I'd like this to be a sort of parallel project for robowaifu development. It can be used as not only a visual waifu, but also as a test bed for physical robowaifu systems development. It's possible that AR systems could be used to direct a physical robowaifu's movement, and an AR sphere could integrate with a physical robowaifu for various tasks. But that's undermined by the fact that I don't have any idea to do it. Well, there's a few things that come to mind off the top of my head, Anon. 1. You need a design for your "rolly polly tracker ball" robot. It needs a battery, a way to roll itself, some kind of sensors to know when it's about to bump into things, and (possibly) some kind of transceiver system for communicating with a base unit. 2. You need a 3D model of your waifu, fully-textured & rigged, for use as the animation source for your VR goggles. (You might want gloves as well to interact with her a bit.) 3. You need VR goggles, that can a) display your waifu animations, and b) implement some sort of object detection tracking, so it knows where your rolly ball is currently located. These features can be implemented as a custom system using say, OpenCV, as long as your VR goggles connect their feeds back & forth with a base computer. I think those are the basics involved, Anon.
>>42813 It's kind of amazing to see posts here made before ChatGPT was released. >>4018 >>4019 This is basically what I'm after with the Morph Ball holocron/AR sphere, for those who didn't read GreerTech's thread. If you look down between her legs you'll see the ballbot that acts as her anchor to the physical world. The AR visor will detect the ballbot's location and superimpose the hologram of her onto the ballbot. To my knowledge nobody has suggested this yet, but my knowledge may not extend very far. If I were to seriously try to skill up and pursue this project, I'd be able to use some things that weren't available when this thread was made to expedite the process. I could use an AI art model to create her appearance, AI video generators to give her some animations and convert her 2D appearance to a 3D model with another AI. Then I could use Arduino, BeagleBoard or another microcontroller to make the ballbot portion; there's a BeagleBoard model that has 50 GB of onboard memory, which is enough room to store a stripped-down LLM and enough context to serve as the waifu's short-term memory, with the option to upload her memories to a larger drive for archival. Also, I think I should take a name at this point. I can't just keep calling myself Robowaifu Legal Defense Fund guy, but I can't come up with any good names either.
>>42814 Addendum: Somebody did suggest superimposing an AR hologram of a robowaifu onto a blank slate humanoid robowaifu body, but that incurs almost all of the cost and complexity of building a full-fledged robowaifu. My approach with the ballbot is much simpler and cheaper, albeit also somewhat more limited in what it can do.
>>42813 >>42814 My immediate vision was a Sphero robot and Apple Vision Pro. It shouldn't be too hard, relatively speaking, to do the AR, especially with a bright tracker point. >It's kind of amazing to see posts here made before ChatGPT was released. Agreed. We're spoiled now. >I could use an AI art model to create her appearance, AI video generators to give her some animations and convert her 2D appearance to a 3D model with another AI. Maybe Gen 1 could just be a PNGTuber model
>>42814 >It's kind of amazing to see posts here made before ChatGPT was released. LOL. We were pursuing this well-before then! :D >there's a BeagleBoard model that has 50 GB of onboard memory I love the BeagleBoard SBCs -- particularly the Blue! :^) >Also, I think I should take a name at this point. I can't just keep calling myself Robowaifu Legal Defense Fund guy, but I can't come up with any good names either. You're fine as just "Anon", Anon. OTOH, we use handles here b/c the idea was to work together closely day-to-day to produce robowaifus. I both support namefagging (here) and encourage it (here), since it makes the process much-smoother in most ways. >>42816 Neat. Why don't you and "Anon Who Has No Name Yet" work together towards this goal? This was one of the (majority) of threads that didn't get properly migrated here to Alogs, so there's lots of OG posts missing ITT. Point being that other Anons have had these ideas from back in the day, and would probably get behind a realworld project here along these lines to help.
Open file (266.37 KB 1024x1024 morphballvariantsmp2.png)
>>42816 There are prebuilt spherebots now? Never heard that. But is a sphere really the best choice for this? I picked that out because it seemed simple and looks cool in my head, but I don't know if it's actually the best pick. Theoretically any kind of robot could be used as a tracking point. What factors determine which robot body type is best? >>42818 If this happens it'll probably be GreerTech doing most of the heavy lifting, at least initially, because I have no idea what I'm doing. I took C++ in high school 1000 years ago, but I don't even know if that's applicable to this. But at least the power supply is going to be much easier to work out than for a full robowaifu.
>>42820 >There are prebuilt spherebots now? Oh yeah, the Sphero has been a commercial toy since I was a tween. There's probably several cheap Disney BB-8 toys. Here's a clear variant so you can look inside: https://youtube.com/shorts/DmL5YcvnLXs https://www.wired.com/2011/12/sphero/ >Theoretically any kind of robot could be used as a tracking point. What factors determine which robot body type is best? Good point, I was thinking maybe a cube with different colors on the sides, to help with tracking, or better yet, a QR code-esque pattern >If this happens it'll probably be GreerTech doing most of the heavy lifting, at least initially, because I have no idea what I'm doing. I think the little box rover would be the easiest part; I think the hardest part is seamless AR. I would start here: https://en.wikipedia.org/wiki/Augmented_reality https://en.wikipedia.org/wiki/ARToolKit https://grokipedia.com/page/Augmented_reality For the goggles, here's a video I found on the Apple Vision Pro https://youtu.be/JVJPAYwY8Us
>>42821 "Galatea, jork it a little" https://youtu.be/503SKHSzPWc
>>42821 But if it was a cube, you'd have to incorporate discrete locomotion systems for it instead of just having it roll. That means the wheels/legs/anything else you decided to use would be subject to damage and malfunction, which is probably the most compelling reason to use a sphere - it's the only robot body that has its locomotion systems on the inside. >QR code This might be good to have. But what other tracking methods are usable with an AR visor? LEDs? Sound?
>>42821 >Good point, I was thinking maybe a cube with different colors on the sides, to help with tracking, or better yet, a QR code-esque pattern Just use a black & white checkerboard pattern like the famous "Amiga Ball" would be my guess? Trackers look for sharp edges and clear geometric intersections. Spheres decorated that way would have plenty of both. You can have this printed (in whatever high-contrast colors) as a custom wrap fitted just right to your robo rolly ball. Even a soccer ball pattern should work great. <---> On the general topic of DIY tracking (using OpenCV) here's one quick link: https://learnopencv.com/object-tracking-using-opencv-cpp-python/ And here's what Leo "AI" had to say: >To track objects with OpenCV, follow these general steps: first, set up your environment by installing the necessary libraries, such as numpy and cv2. > Then, capture video input either from a file or a live camera feed using cv2.VideoCapture(). > Next, select a region of interest (ROI) containing the object you wish to track using cv2.selectROI(). > This ROI defines the initial bounding box around the target object. >After selecting the ROI, initialize a tracker object. OpenCV provides several tracker algorithms, including BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN, MOSSE, and CSRT. > The choice of tracker affects performance and accuracy; for example, MOSSE is known for high speed (around 91 FPS) but may depend on video conditions, while CSRT offers high accuracy but lower speed (around 7 FPS). > The tracker is initialized with the first frame and the selected bounding box using the tracker's init() method. >In a loop, continuously read new frames from the video. Update the tracker with each frame using the update() method, which returns the new bounding box coordinates. > Draw the updated bounding box on the frame using cv2.rectangle() or similar functions to visualize the tracked object. > Optionally, calculate and display the frames per second (FPS) to monitor performance. > The loop continues until the video ends or a key (like 'ESC') is pressed to exit. >For real-time tracking, ensure the environment is properly configured, and consider using optimized trackers like MOSSE for speed or CSRT for accuracy depending on the application needs.
Edited last time by Chobitsu on 11/13/2025 (Thu) 11:39:57.
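Here's a minimal runnable version of those steps in Python (the tutorial linked in the next post is the C++ equivalent). Assumes the opencv-contrib-python package, which ships the CSRT tracker:

import cv2

cap = cv2.VideoCapture(0)                       # webcam feed
ok, frame = cap.read()
bbox = cv2.selectROI("select the ball", frame)  # drag a box around the target
tracker = cv2.TrackerCSRT_create()              # CSRT: high accuracy, modest FPS
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cx, cy = x + w // 2, y + h // 2         # the "centroid target reticle"
        cv2.drawMarker(frame, (cx, cy), (0, 0, 255))
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                    # ESC quits
        break

cap.release()
cv2.destroyAllWindows()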
>>42824 >related: The code itself is quite modest in size, and very straightforward when you already have OpenCV basics under your belt. https://docs.opencv.org/4.12.0/d2/d0a/tutorial_introduction_to_tracker.html
Edited last time by Chobitsu on 11/13/2025 (Thu) 10:39:20.
>>42822 That was great fun, GreerTech. What a nice find! Cheers. :^)
Open file (1.06 MB 1800x900 phazonsuitmp.png)
>>42824 >>42825 I don't have OpenCV or any other programming competencies under my belt. But I decided to read this anyway, and fortunately the code is in C++ so I can kind of understand it. From what I was able to gather from this tutorial, the code puts each frame of the video input through an image classifier AI to determine whether or not the target object is present. Several years ago I read an article about image classifiers; as this is among the most basic types of neural networks, it was relatively easy to understand. Fortunately this part of the project doesn't appear to need more than the basics. And since I wasn't sure if a spherebot was really the best pick, I asked DeepSeek about spherebots versus wheeled bots as a tracking point for AR vision and got this in response: >Why a Spherebot Is Easier to Track >1. Symmetry >- Spherebots look the same from every angle, simplifying tracking algorithms. >- Wheeled Robots change appearance as they rotate, requiring algorithms that can handle multiple views. >2. Consistent Features >- A checkered or patterned sphere provides consistent visual features (edges, corners) that algorithms can lock onto. >- Wheeled robots have fewer consistent features, especially if they rotate or change orientation. >3. Predictable Motion >- Spheres roll smoothly, making their motion easier to predict and track. >- Wheeled robots may have abrupt turns or stops, complicating tracking. Apparently spherebots are somewhat more difficult to build than wheeled bots, but their advantages in durability and tracking fidelity can't be beat. Tracking algorithms respond to shape, color and lighting conditions, so maybe something like a pattern of LED Tron highlights on the ballbot could be a good marker for the AR visor to track. I also had the idea that if we can make the ballbot light enough, we can solve the problem of difficult terrain very easily by having a drone carry the bot whenever rough terrain is encountered.
>>42829 >I don't have OpenCV or any other programming competencies under my belt. >But I decided to read this anyway Excellent choice Anon. >and fortunately the code is in C++ so I can kind of understand it. That's b/c the OpenCV engine is written entirely in C++ (always has been). C++ is really the only language that could do this as well as it actually does. They've long had a Python scripting wrapper available for the thing, so many mistakenly think it's a Python system (cf. even the "AI" response I relayed above). And unlike Cl*sedAI, OpenCV is, well, open. Permissively licensed, it can be used anywhere. Literally one of the best examples of commercial/private collaboration in software to be found anywhere. And the code quality is topnotch after decades of careful industry craftsmanship till now, as well. <---> >Apparently spherebots are somewhat more difficult to build than wheeled bots, but their advantages in durability and tracking fidelity can't be beat. All important points. >...so maybe something like a pattern of LED Tron highlights on the ballbot could be a good marker for the AR visor to track. Glowing dots can't be beat as tracking markers. For example, in the MOCAP industry that's always been preferred (it's just been a prohibitively-costly/infeasible approach until more recently). >tl;dr If you can print a black & white soccer ball wrap, placing say, tiny bright white (or amber) LEDs at the intersections of all the line edges (their vertices) then you would have a literally SOTA object detection and target motion-tracking system! :^) <---> >I also had the idea that if we can make the ballbot light enough, we can solve the problem of difficult terrain very easily by having a drone carry the bot whenever rough terrain is encountered. This. But I think another need solved thus would be high-speed waifu transit (across any terrain, even flat). It ALSO solves, btw, the "Flying Waifu/Fairybot Physics Dilemma" (cf. >>266 )! :D And if the spherebot is intrinsically low mass, then just putting grippy soft (but durable) rubber strips along all the edges would be exceptionally helpful at allowing the rolly ballbot to climb around all by itself on varied terrain -- no drone required. --- Great concepts, Anon. Good luck! Cheers. :^) Don't be just another Steve Jobs Start engineering it today! >protip: You can just buy a real soccer ball and start building the initial prototyping of your Hologram-Waifu tracking system using nothing additional but OpenCV and a webcam on your computer. Once it's built, bounce the ball around and see how well it performs! :D
Edited last time by Chobitsu on 11/13/2025 (Thu) 22:53:32.
>>42830 >it ALSO solves btw, the "Flying Waifu/Fairybot Dilemma" I didn't even realize this was a problem people were trying to solve, but OK. I guess if you made the waifu hologram as a fairy waifu or something that's otherwise capable of flight, it would reduce the need for complex walking animations. Since the ballbot is on the ground, you could program the AR visor to calculate a vertical offset and project the fairy a few feet above the bot if you wanted to go that route, but have it sync with the bot when it's being drone carried. >If you can print a black & white soccer ball wrap, placing say, tiny bright white (or amber) LEDs at the intersections of all the line edges (their vertices) then you would have a literally SOTA object detection and target motion-tracking system! Sounds sweet. A soccer ball with more black than white surface tiles and orange Tron highlights on the edges between each tile pops; it'll definitely be easy for the AR visor to pick out. You could also include QR codes on the surface like GreerTech said. Putting them on every fourth pentagon and hexagon on the soccer ball surface would mean they'd always be in the camera's field of vision whenever the ball is visible. Another benefit of using a soccer ball shape (a truncated icosahedron, as per DeepSeek) is that there would always be at least a few pentagons/hexagons on the surface that are not only within view, but close to perpendicular to the camera's field of view, meaning the QR code would appear with less perspective distortion and be more likely to be detected properly, unlike with a cube where the QR codes are going to be subject to horizontal and/or vertical distortion depending on how the cube is rotated relative to the camera. You could also make some of the surface tiles out of the grippy rubber you describe to help it with traction. Though we might want to include another bright color besides orange to provide color contrast to enhance detection, something like turquoise or magenta. We could have the pentagon/hexagon edges be orange Tron highlights and the QR codes be turquoise. I also talked about maybe using a cube or pyramid bot with DeepSeek, but those ran into problems with the drone carrying. A pyramid's mass distribution is uneven and it could lead to instability or even crashes when the bot is being drone carried. A cube is somewhat better about this, but it still has much worse aerodynamics than a sphere, so I'm ready to finalize the decision to use a sphere. >Don't be just another Steve Jobs This is going to be a tall order for me given that I've never done anything like this before. It isn't something I can do by myself. It'll have to be a team effort. And even if we can get the AR visor to reliably track the ballbot's position, I still don't know how to implement the actual holowaifu portion. I just hope there are enough people who are interested in this to build a team.
>>42840 POTD Excellent research, Anon-Of-No-Name-Yet. :^) >You could also include QR codes on the surface like GreerTech said. Putting them on every fourth pentagon and hexagon on the soccer ball surface would mean they'd always be in the camera's field of vision whenever the ball is visible. Sweet. And it occurs to me just now reading your post that unique QRs would actually be required: If you go to the RobowaifuCon with your HoloWaifu in tow, and I go too with mine plus hundreds of other Anons! :D , then our visors could easily get them confused if the spherebots were decorated identically in every way. But with unique QRs, your system wouldn't get confused and project your waifu above my rolly ball, and vice-versa. >meaning the QR code would appear with less perspective distortion and be more likely to be detected properly, unlike with a cube where the QR codes are going to be subject to horizontal and/or vertical distortion depending on how the cube is rotated relative to the camera. Brilliant. The elegance just keeps on coming! :^) >It isn't something I can do by myself. It'll have to be a team effort. Fair enough. I or others could help with the waifu/animations for the overlay, perhaps. In essence, you simply get OpenCV to keep a continual (& coherent [across image frame timesteps, that is]) "centroid target reticle" calculated for your ballbot each video frame. Then from there, it can overlay any image you want on that spot (or any geometric offset thereto: ie, above [for some definition of 'above']) -- including animation sequences (mp4s, say)/sprite strips (ala vidya)/image buffers (std::vectors of images)/etc. Just start with simple 2D waifus, then grow from there. I myself can definitely help you with the C++ programming of your tracking system, and possibly other things. But somebody has to take a project by the reins and focus it for everyone as its "manager". That's deffo not me in this case, I have much else on my plate. --- >I just hope there are enough people who are interested in this to build a team. Well one thing's certain: If you build it, they will come! Start showing some realworld demo clips of your progress here on /robowaifu/ & elsewhere, and interest will grow for your project guaranteed. Together, We're All Gonna Make It
Edited last time by Chobitsu on 11/14/2025 (Fri) 01:04:57.
>>42841 >And it occurs to me just now reading your post that unique QRs would actually be required: If you go to the RobowaifuCon with your HoloWaifu in tow, and I go too with mine, then our visors could easily get them confused if they were decorated identically in every way. That's very true. Tron highlights aren't a unique identifier like QR codes; they're just there to help the visor locate the ball, not to tell it which waifu to project. >In essence, you simply get OpenCV to keep a continual "centroid reticle" calculated for each frame. I didn't know it was that easy. So the centroid reticle serves as the location for the holowaifu to be projected onto. Can you issue the projection command through OpenCV, or do you have to import the results of the centroid reticle calculation into another program? >I or others could help with the waifu/animations for the overlay, perhaps. Could you at least give me a basic description of how the holowaifu's programming would work? >That's not me in this case, I have much else on my plate. I have a lot on my plate too, so I can't make any promises about this. But this is something I really want to see happen for its accessibility and its element of "we don't even actually need a physical robowaifu to get something better than you roasties, that's how cancerous you are".
>>42842 >or do you have to import the results of the centroid reticle calculation into another program? No, everything I've suggested so far is in your own custom code written in C++ using the OpenCV library. This is the way you get sub-millisecond performance in the calculations. Start playing with the recognition & tracking portions of this YUGE library, and you'll begin to understand how to proceed. Basic #1: every.single.image. is a 2D matrix for starters. Color is simply layers of these, blended together. Make sense? >Could you at least give me a basic description of how the holowaifu's programming would work? * Someone has to create the tracking code (as already outlined) -- including the rolly ball's target reticle calculation frame-to-frame. * Someone has to create the artistic design of the waifu. * Someone has to animate that waifu in a number of different action sequences (simple collections of images is fine). * Someone needs to create some type of "control context" so you can interact with your waifu. Write this system in straight C++ for maximum performance. * Someone needs to write a response system to take the output from that context selection (ie, exactly which waifu action sequence to display next), and load that into the image buffer for the upcoming display. Again, do this in straight C++. * Someone needs to write a system to calculate what "above" means, then write that (new) target position into the frame display system, frame-by-frame, using the already-loaded-up image buffer. The waifu's current action sequence will be centered on this offset reticle, blended into the realworld frames (delivered via the cameras into OpenCV). This final AR system is a combination of interacting with your custom tracking, animations, & context system + OpenCV's display elements blending modes. Again, all in C++ (+ image data). * Everything mentioned above is a single compiled binary+data in the end: all running on your "ARC reactor" (ie; a wearable, GPU-containing computer) (cf. >>42822 ). Your visor simply connects into this thing; to send camera video streams in, and display AR video streams out. --- * The ARC (or possibly/preferably the visor itself) in turn transmits to your spherebot's "Master Tracker Locator" positioning code so your waifu follows you around. Not addressed yet. * None of this addresses audio input/output either: a whole other major topic. >tl;dr Eat the Elephant one bite at a time! (cf. >>4143 )
Edited last time by Chobitsu on 11/14/2025 (Fri) 01:57:57.
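And the "offset reticle + blend" step from the list above, sketched in Python/NumPy for brevity rather than the C++ the final build would use. The sprite is assumed to be a BGRA PNG loaded with cv2.IMREAD_UNCHANGED, and the lift value is arbitrary:

import cv2
import numpy as np

def overlay_sprite(frame, sprite_bgra, cx, cy, lift=120):
    # Blend a BGRA sprite so it floats `lift` px above the reticle (cx, cy).
    h, w = sprite_bgra.shape[:2]
    x0, y0 = cx - w // 2, cy - lift - h
    fx0, fy0 = max(x0, 0), max(y0, 0)
    fx1, fy1 = min(x0 + w, frame.shape[1]), min(y0 + h, frame.shape[0])
    if fx0 >= fx1 or fy0 >= fy1:
        return frame                            # sprite entirely off-screen
    crop = sprite_bgra[fy0 - y0:fy1 - y0, fx0 - x0:fx1 - x0]
    alpha = crop[:, :, 3:4].astype(np.float32) / 255.0
    roi = frame[fy0:fy1, fx0:fx1].astype(np.float32)
    frame[fy0:fy1, fx0:fx1] = (alpha * crop[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)
    return frame

# usage: sprite = cv2.imread("waifu_idle.png", cv2.IMREAD_UNCHANGED)
#        frame = overlay_sprite(frame, sprite, cx, cy)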
>>42842 >"we don't even actually need a physical robowaifu to get something better than you roasties, that's how cancerous you are". TOPLOL :DD
BTW, Anon-with-no-name, I recommend F5'ing any thread with a post from me you want to respond to. I frequently further edit my own posts (which you won't see w/o a refresh -- this is a Lynxchan issue, can't do anything about it)... and some of these edits markedly change the content of some of my posts. Just a headsup. :^)
Edited last time by Chobitsu on 11/14/2025 (Fri) 01:44:26.
>>42841 I just asked DeepSeek about how to project the waifu and was given a list of programs, such as Unity, that can run a 3D model in a virtual environment and use some other plugin such as OpenCV, Vuforia, etc. to project the model onto the ballbot. I don't know the ins and outs of these, and they're not going to be learned in a day. >create the tracking code, design the waifu, animate, control context, etc. Having it all laid out like this really exposes how big of a task this is. And then if you want a physical robowaifu, you have to do even more. The audio/video aspect is another big question seeing as even modern chatbots still largely haven't been integrated with audio or video even for waifus running on ordinary screens, much less for AR holograms. Getting a 3D model of a waifu to run in a virtual environment and let you talk to her through a screen would be a significant feat in itself, and then you'd have to superimpose that onto the ballbot. There's also the possibility of having holowaifus interact with each other, which would be even more complicated. So yeah, there's something to be said for doing it piecemeal.
>>42847 If you choose to use Unity (or some other, similar monstrosity of bloat), then I'm afraid I can't help much with programming that, Anon. I've already outlined the cleanest, lowest-latency system you could reasonably devise for a portable system : ( >>42844 ); which, IMHO is also literally the single-fastest development approach to the project you could take. If you use one of these game engines instead, then you're on your own pretty much insofar as it concerns my direct inputs. Please believe me it's nothing personal, I just don't see any real likelihood of this project's success if you have to lug the equivalent of a gaming PC around just to run some basic image processing code. --- >Having it all laid out like this really exposes how big of a task this is. It is indeed! And I've glossed over the "Control Context" portion out of necessity: you'd need to educate yourself with much of this board's panoply of content to fully-grasp this little tidbit! :D OTOH, we are all here to help -- at least with advice and suggestions along your way. >tl;dr You don't need to know everything in advance, Anon! :^) Also, if you follow my advice above concerning developing "a single, smol executable binary intently-focused around the OpenCV library", then I at least can help you with those portions of code. --- I hope I'm still encouraging you to undertake this project Anon, despite the cost of it to you. It will all be well worth it in the end!! Cheers. :^)
Edited last time by Chobitsu on 11/14/2025 (Fri) 02:19:21.
>>42848 I didn't know Unity was a bloatfest, I was just going on what DeepSeek told me. And yeah, this is only going to work if the average person can run it; you shouldn't have to have a top-of-the-line rig just to run a single character's animations and dialogue. I need to do more study to be able to grasp how to make this happen. What do you suggest I do?
>>42844 >>42849 >you shouldn't have to have a top-of-the-line rig just to run a single character's animations and dialogue. This. The "ARC reactor" (just a mini-pc like device) + batteries & visor should all stow in a smol bag, then another one for your ballbot device. >I need to do more study to be able to grasp how to make this happen. >What do you suggest I do? Simply what I already have: <"Start playing with the recognition & tracking portions of this YUGE library, and you'll begin to understand how to proceed." (cf. >>42830, >>42844, etc.) Get a soccer ball (a new one is best, so the tracker works with clean whites); get a webcam if you don't already have one; install OpenCV on your development computer; copypasta & tinker with the exact code already linked for you ITT * , then expand from that starting point. Simple as. Good luck, Anon. We're all rooting for you! Cheers. :^) >p.s. I'll be AFK for some time...plenty of time to write down notes & questions! :D --- * https://docs.opencv.org/4.12.0/d2/d0a/tutorial_introduction_to_tracker.html
>>42823 >>42824 The benefit of a cube/flat pattern/textured sphere is that you can have easy depth and angle perception. >>42847 I would say that the hardest part is designing the AR equipment.
>>42850 The device it runs on will probably end up being a smartphone. I don't think there's any need for a custom device. As for the battery life, I'm not sure how long it can run, but probably a good deal longer than a traditional robowaifu. Most existing robowaifu designs run for 2-3 hours at most. We should be able to get at least double that. I don't think a generic soccer ball is suitable to build the actual bot with, but it may help to test out the AR visor's tracking ability. That's some of that incremental progress you're talking about. We probably don't actually have to build a bot to prove that this concept can work. >>42851 >I would say that the hardest part is designing the AR equipment. If that's true then we definitely need to do some prototyping first.
>>42852 OK, I want to create a much more simplified version of the hefty action plan described in >>42844. We don't just need to plan out how to build the full AR soccer ball bot and project fully interactive AR holograms onto it, we need a plan to build a proof of concept that can attract more people to the project. To me it would go like this: >Create the tracking code (as already outlined) -- including the rolly ball's target reticle calculation frame-to-frame. >Show that we can project images onto the soccer ball (which will be an actual soccer ball done up with fancy lights and markings and not a robot) >Maybe some minor interactivity features like rotating images So nothing anywhere close to the effort it would take to do the full project. It would look more like something a college student hacked together in his dorm room than a professional product. Is there anything I'm leaving out of that?
>>42853 >OK, I want to create a much more simplified version of the hefty action plan described in >>42844. Yeah I totally agree, Anon. In fact I think it displays real wisdom on your part to suggest this more-modest approach to begin with. I'll respond more-fully later to your new posts, but just a quick one to say I approve of this action plan. Cheers, Anon. :^)
Edited last time by Chobitsu on 11/14/2025 (Fri) 10:45:15.
Well I saw your aspiration to use this from a phone instead of a PC. That decision significantly complicates things compared to using a smol PC instead (primarily due to the convoluted nature of development on these platforms). I don't have anything definitive yet, but I've gone ahead and looked briefly (an hour or so of research so far) into using a game engine after all if that's the target hardware platform goal for your project. Unity & Godot are the two I've done so far (I refuse to work with (((unr*eal))) ). The >tl;dr here for me, personally, is that we could have a functional tracker system with the four ingredients mentioned in place (cf. >>42850 ) within literally five minutes. But for phone development instead, it would likely take me weeks if not months to investigate and decide between all the (often conflicting) issues to consider. I'm out of my depth experience-wise here, true. But it's simply a more-complex endeavor fundamentally, as well. Not sure what else to say rn in addition. You already have my "game plan" for a PC-based system (cf. >>42844 ). I'll see your response to this post before replying further, Anon. Cheers. :^)
Edited last time by Chobitsu on 11/15/2025 (Sat) 00:32:14.
>>42862 I know phones can be hard to develop for, but they're mobile and lots of people have them, so it's worth looking at. But ideally it should be multiplatform, so the best plan is to start with whichever platform is the easiest to develop for and expand out from there. But there's something else too. I thought of a potential problem with this system, which is the problem of object permanence. If the system is solely based on projecting an AR hologram onto a QR code through an AR visor, then your waifu will vanish whenever you look away from the ballbot. In order to stop this, you can use a few different methods: >Place a GPS tracker on the ballbot that broadcasts its location to the visor; this could complement the QR codes and light signals, but it's hard to use GPS indoors >Have a master camera connected to the AR visor that has a much wider field of view than a visor and can persistently track the ballbot >Use another separate device besides the AR visor that can receive a signal from the ballbot and triangulate the ballbot's signal in conjunction with the visor to locate it But this "problem" could also be leveraged to acquire extra functionality. As you said, without the unique ID given by a QR code, the system could get confused and project the holowaifu onto the wrong rolly ball. But the QR code/Tron highlight method isn't limited to a ballbot. This system could theoretically be used to place AR holograms anywhere you can put an identifiable QR code. You could paint the side of a building with a QR code and turn it into an AR display. Or, if you didn't feel like building a rolly ball to cart your holowaifu around, you could literally just paint the walls of your house with your holowaifu's QR code. Let's say you've decided to paint the walls in every room of your house with QR codes, and you're in bed with your waifu and then you get up and go to the kitchen for some food. Your waifu will vanish when the QR code in your bedroom leaves your field of view, but when you get to the kitchen and look at the QR code in there, she'll reappear as though she teleported with you. You could even have her appear in multiple places at once. It seems to my novice reckoning that it would be easier to develop a system like this than even a very cheap ballbot because you're eliminating the need to build a robot at all. The ballbot could still be developed at some point to give finer control over your holowaifu's movement, but a holowaifu is fundamentally a being of pure information and therefore not subject to the mobility restrictions of a physical body. But this may end up having its own problems. Since I'm a n00b at actually developing a product, I can't say how it would work out if we did this.
>>42864
Actually, this could be even better. You don't necessarily have to paint permanent QR codes at all; besides, unless you own the building, you can't just go painting QR codes on walls because you feel like it. But if you use a smartphone projector to project a QR code, your holowaifu can appear anywhere you can find a suitable surface to project the code onto.
>>42864 >>42865
Yes! This is a good idea IMO. You seem to have simplified the problem down to the fundamentals. And there are a wide variety of takes on such tracking markers as well. In other words, they don't have to be QR codes; some unique geometric shape with hard edges & intersections should work too. Projecting (or just carrying) the markers would work for the tracking problem.
---
So now with this bit pared down to the minimum, it's time to think about the visors. Got any ideas along this line, Anons?
<--->
>Phones are commonplace
Yes, this is very true. Obviously, being able to run this with just a smartphone in your pocket is pretty close to ideal for today. I agree it's worthwhile keeping in mind! :^) OTOH, I think it's an even better idea to make forward progress as quickly as possible in the here & now. Such "momentum" is invaluable for keeping everyone on the project motivated and morale high. Cheers, Anon. This is great stuff so far! :^)
Forward
>>42866
Yeah, this should be more manageable. You only need the ballbot if you want your holowaifu to appear to walk realistically, because her walking animation needs the ballbot's movement to synchronize to. But if you're willing to accept some level of otherworldly behavior from her (teleporting from QR code to QR code), you can get something that's much easier to achieve. So if you don't care whether she walks realistically, why do you even need the bot? I think a lot of people would be willing to compromise on realism if it speeds up the development of waifus. People who want a fairy waifu definitely won't care whether she walks correctly or not.
>Projecting (or just carrying) the markers would work for the tracking problem.
Yeah, carrying a marker could work too. You could have a little paperweight with a QR code on it.

See, this is an example of when a Steve Jobs mentality can be helpful. The guy with less technical training is always looking for ways to simplify things. The ballbot plan might have succeeded eventually, and maybe there's somebody out there who really wants a holowaifu that walks, but it would take longer and cost more to implement. But every device needed for the marker projection/teleportation plan already exists. I'd like to run with this plan and see where it goes. So now, with this improved concept, we need to reformulate the action plan. How do you think we should proceed?
>>42868
Well, let me start formulating the new action plan with the proviso that we probably shouldn't worry about displaying the QR code (or other choice of holowaifu marker) through a smartphone projector to start with. Although it would be cool to project your waifu like that, it's a bit much to shoot for at the start of the project: projection is highly likely to produce perspective-distorted markers, and the AR visor could have trouble recognizing them. Most likely the first stage will just be displaying the marker on your phone and putting the phone down on a table. Then again, that could also produce a perspective-distorted marker, so you could use a Pepper's Ghost screen attachment for your phone to solve the distortion issue.
>>42868
>So now with this improved concept, we need to reformulate the action plan. How do you think we should proceed?
By answering this question:
>"So now with this bit pared down to the minimum, it's time to think about the visors. Got any ideas along this line, Anons?" ( >>42866 )
>>42869
Fair enough. But trackers are pretty robust today; I doubt a little distortion will be an issue until it becomes severe. BTW, you can create a tracking marker with literally just a sharpie marker, a 2.5" post-it note, and a couple minutes spent inking out a clean geometric shape! :D
Edited last time by Chobitsu on 11/15/2025 (Sat) 21:46:35.
>>42869 >>42870
So to recap this for newcomers just arriving here who would rather not read through this entire discussion, your three primary options for displaying an AR holowaifu marker are:
>Mobile robot that has markers on it (complex, only necessary for holowaifus that emulate human movement)
>Painting or projecting the markers on surfaces (may encounter perspective distortion, and putting markers on surfaces you don't own is vandalism, but pretty neat if you can do it)
>Using a smartphone stand/case or Pepper's Ghost screen attachment to display the marker without perspective distortion using your phone

Seems to me that the third option is the winner for most practical purposes. The smartphone mount could be either a traditional stand or a phone case that sticks to the wall, while the Pepper's Ghost attachment lets you use tables for this too. What you might end up doing is using multiple options simultaneously so your waifu will teleport around the room as you look around. You could have QR code posters/post-it notes on your wall on either side of your room, and your phone stuck on the wall between them. Then if you want to carry your waifu to another room, just grab your phone and take it with you, and set the phone up with your choice of attachment in the other room.
>So now with this bit pared down to the minimum, it's time to think about the visors.
Unfortunately I don't know how helpful I can be here. I've never used an AR visor before.
>>42871
>Unfortunately I don't know how helpful I can be here. I've never used an AR visor before.
Neither have I (nor a VR one), though I've seen them in use IRL. Nothing special there on my part either. OTOH, I know that they can be DIY'd (as per >>42822, et al, here on the board). What we really need at this stage, I'd think, is an already-finished opensource design for one. One that can simultaneously accept both video in and video out would be best. Stereoscopic preferably, ofc. Dealing with trying to haxxor commercial products into doing just what we need could be significantly harder. OTOOH, this tech has been around for quite some time. I'm sure if some'nons did some research, already-"jailbroken" commercial rigs are out there. Just a matter of tracking them down (see what I did there? :D). Good luck all! Cheers. :^)
Edited last time by Chobitsu on 11/15/2025 (Sat) 23:26:45.
>>42873 Apple Vision Pro and Meta Quest Pro are still very expensive. Right now AR visors are at the stage where smartphones were in the late 2000s; they're still a luxury item, not something you see people use every day. We might just have to wait for the cost to come down, but the cost is sure to come down for AR visors a lot faster than for robowaifus. And if it's possible to DIY a robowaifu (the main premise of the board), it should be possible to DIY a holowaifu much more easily. But I've never seen any DIY AR visor designs that can do what we need for this. This is going to take some searching.
>>42874 >And if it's possible to DIY a robowaifu (the main premise of the board), it should be possible to DIY a holowaifu much more easily. That certainly sounds reasonable. Also, "holowaifu" sounds like a bang-up project name. Maybe "Holonon" or "HoloAnon" might be a good namefag handle for you? >But I've never seen any DIY AR visor designs that can do what we need for this. This is going to take some searching. True. They always tell me: >"Soonest started, soonest complete."
Edited last time by Chobitsu on 11/16/2025 (Sun) 06:09:08.
>>42879 How about HoloAnon Labs? It's based on Holowan Labs from Star Wars, the manufacturer of IG-88 and the other IG-series assassin droids. It seems appropriate because this idea began with a concept based on Star Wars (and because I tried to make an assassin droid thread), but also ironic because we managed to eliminate the actual physical bot portion of this.
As you see fit, Anon! Since we all tend to refer to people as people rather than businesses (Star Wars or otherwise), I'd expect we'd all just call you by the shortened "HoloAnon". Is that acceptable? :^)
<--->
>It seems appropriate because this idea began with a concept based on Star Wars (and because I tried to make an assassin droid thread)
Yeah no, please don't go there!! :D (cf. >>42881 )
>>42882
Sure, that's acceptable. I'll probably still use the HoloAnon Labs name though, because it hilariously gives off the false impression that I have any sort of necessary skills or qualifications to be doing this.

But don't worry, a hologram can't assassinate you. The major reason why I started that thread was to figure out how to defend the robowaifu from anti-robowaifu aggressors, but the paradox of it that I never managed to solve (and which may be unsolvable) is that the very act of equipping her with weapons gives them something to cite when they talk about how they want all robowaifus shut down. And even if discussion of militarized robowaifus is banned here, somebody somewhere will inevitably do it.

But an AR visor is both incapable of killing and much easier to conceal than even the stealthiest robot ninja, so it's much harder to both make an argument to ban it and to actually implement the ban, especially since AR visors have more uses than just waifus. The best possible security measure for your artificial waifu may be to use a holowaifu instead of a robowaifu because a holowaifu can be backed up at no cost, but if you want a spare robowaifu, you better have some serious cash.
>>42890
>it hilariously gives off the false impression that I have any sort of necessary skills or qualifications to be doing this.
Heh. They always tell me "fake it until you make it". Every.single.product. out there began life as just an idea. Robowaifus have been on men's minds for literally thousands of years. So have cars, planes, and long-distance communications. A key differentiator for success * IMHO is: do you put your money where your mouth is? In other words, do you work hard towards your dreamed-of goals? For myself, I've changed career directions and invested heavily in education to pursue this. That's no guarantee of success obviously, but "opportunity favors the prepared mind" they tell me! :^)
>somebody somewhere will inevitably do it.
I made the other post in the other thread explicitly b/c I didn't want to pollute this thread with such Terminatorz-R-Us talk. Unlike most mods on the Internets, I actually care and will clear out any of their "doo it" left here! Let's kindly keep this on-topic ITT. :^)
<--->
>The best possible security measure for your artificial waifu may be to use a holowaifu instead of a robowaifu because a holowaifu can be backed up at no cost, but if you want a spare robowaifu, you better have some serious cash.
Good point, HoloAnon. And also the idea that you would carry around a HoloWaifu Avatar of your robowaifu in your pocket -- even if she physically is sitting charging up back at your place at the same time -- is a great one!
---
So, back to the ol' grindstone. Visors? The DIY one made by the Iron Man cosplayer ( >>42822 ) literally can do all the things we need for the HoloWaifu project, AFAICT. Sure, we wouldn't put them into such a helmet, but the tech itself seems doable. And since he claims he will opensource it all IIRC, perhaps he could be convinced to go ahead and do so?
---
* The #1 key being just pure dee ol' stick-to-itiveness! In other words: endurance; perseverance; pertinacity.
Edited last time by Chobitsu on 11/16/2025 (Sun) 21:37:04.
>>42891
>Let's kindly keep this on-topic ITT.
Fine by me. A holowaifu is an idea, and they can't kill that, so why bother equipping it with weapons? I just wish I'd thought of this sooner.
>Visors?
Yeah, let's get into the actual details of the problem.
>Iron Man cosplayer
This is very impressive for a DIY job. I always thought it'd be cool to have a visor like that. And the discussion in this thread has given me more of an idea of how these visors work. But actually assembling something like that is going to be hard, at least for me. Can it be done with a microcontroller like Arduino/BeagleBoard, or does it require a custom PCB?
>And since he claims he will opensource it all IIRC, perhaps he could be convinced to go ahead and do so?
I hope so. It'd be great if a lot of the big science and tech YouTubers could participate in mainstreaming AR. We could get multiple open-source AR visors and have a better chance of finding a model that suits our needs.
Open file (10.34 KB 596x443 DifferentialDrive.png)
Open file (116.52 KB 1024x1393 Ackermann_simple_design.png)
>>42823
>Subject to damage and malfunction
Every part which moves suffers all the same. Spheres are only harder to maintain. Outside of aesthetics, spheres only have drawbacks, and if you're not experienced enough to know that, you ought not start with them. A simple differential or Ackermann drive would be far easier and superior in almost every way. SCUTTLE would be an ideal place to start. Simple, cheap, and plenty of documentation.
https://www.scuttlerobot.org/
>QR codes
ArUco Markers and AprilTags are perfect for your use case since they provide position, orientation, and scale with very little computation (a minimal detection loop is sketched below).
https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html
https://april.eecs.umich.edu/software/apriltag
>>42890
>VR hologram waifu
Why though? I mean legitimately, why bother with a waifu you can't interact with physically? Why not make her accessible via smartphone or computer? What problem are you trying to solve? What is your intended audience? What would make her worth going through the hassle to have her? Not trying to be rude, but you need to find clarity and purpose for a successful project. Who, what, when, where, and most importantly, why, are all vital questions.
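To make that concrete, here's a minimal detection loop. This is a sketch assuming OpenCV 4.7+ (where ArUco lives in the objdetect module), a webcam on index 0, and an arbitrary dictionary choice; adapt to taste:

>marker_tracker.cpp
#include <opencv2/opencv.hpp>
#include <opencv2/objdetect/aruco_detector.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                 // default webcam
    if (!cap.isOpened()) return 1;

    cv::aruco::Dictionary dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    cv::aruco::ArucoDetector detector(dict, cv::aruco::DetectorParameters());

    cv::Mat frame;
    while (cap.read(frame)) {
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        detector.detectMarkers(frame, corners, ids);  // marker IDs + corners
        if (!ids.empty())
            cv::aruco::drawDetectedMarkers(frame, corners, ids);
        cv::imshow("tracker", frame);
        if (cv::waitKey(1) == 27) break;     // Esc quits
    }
}

Full 6-DoF pose (what you'd anchor the holowaifu's transform to) additionally needs your camera's calibration matrix and the marker's physical side length fed to cv::solvePnP; the detection itself stays this cheap.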
>>42893
>why bother with a waifu you can't interact with physically?
Literally just spite for feminist women and cuckservatards. I might not even use the waifu myself. Also, it might be possible to develop AR peripherals that enable this to an extent.
>Why not make her accessible via smartphone or computer?
Holograms are cool.
>What problem are you trying to solve?
The problem of there not being enough things that exist purely to troll feminist women and cuckservatards.
>What would make her worth going through the hassle to have her?
Mostly the fact that there would be less hassle than a waifubot.
>What is your intended audience?
Guys looking for a waifu who want something that exists in 3D space but can't afford a physical waifu.

I have no technical skills and no idea what I'm doing, but that's okay. I'll wing it through the power of friendship or something like that. Or the power of whatever this is:
>"we don't even actually need a physical robowaifu to get something better than you roasties, that's how cancerous you are"
I can have an AI write code for me and explain key concepts. And if I end up failing, maybe I'll get far enough that somebody else can finish what I started.
>>42852 >>42862
The use of a smartphone is definitely a double-edged sword. On one hand, I have my own concerns about smartphones, but on the other hand, it is a common mobile device that almost everyone uses, and is the go-to device for AR. My own hope is that either the GNU Project or I will make a DIY smartphone.
>>42853
I agree; outside the AR visor, the software is the hardest part.
I'm pleased to see that several Anons have commented ITT on @HoloAnon's project! I'll go ahead and start looking into the basic windowing/tracking code for PC then.
A) Which platform is desired? (affects the windowing choice a bit)
B) Can each of you build basic C++ software? (I don't want to distribute binaries, but rather the opensource code)
---
I'll reply later when I have more time than now. Cheers. :^)
Edited last time by Chobitsu on 11/17/2025 (Mon) 07:29:51.
>>42892
>This is very impressive for a DIY job.
Agreed.
>But actually assembling something like that is going to be hard, at least for me. Can it be done with a microcontroller like Arduino/BeagleBoard, or does it require a custom PCB?
I don't fully know the details yet. Can you find out if he will opensauce the plans please?
>We could get multiple open-source AR visors and have a better chance of finding a model that suits our needs.
I rather prefer that we, ourselves, be one of those opensauce AR visor venues. :^)
>>42893
>ArUco Markers
That was it! The name was eluding me. Thanks, Kiwi!
>spheres only have drawbacks
It is a very compression-resistant shell form, all the same.
>SCUTTLE would be an ideal place to start. Simple, cheap, and plenty of documentation.
POTD
I think such a choice would be a fine mobility platform for QR+ArUco markers.
>why bother with a waifu you can't interact with physically?
One reason might be that many Anons have claimed this would be satisfactory to them (right on this very board). If nothing else, it's a rather-cheaper way to prototype the control software itself. Ofc, in the end the "rubber still needs to meet the road". But for prototyping and an all-around "have your waifu along for the ride" wherever you go, I'd say a HoloWaifu should be hard to beat?
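Relatedly: rolling your own printable markers is trivial too. Same OpenCV 4.7+ assumption as the detection sketch above; the ID (23) and pixel size (400) here are arbitrary:

>make_marker.cpp
#include <opencv2/opencv.hpp>
#include <opencv2/objdetect/aruco_detector.hpp>

int main() {
    cv::aruco::Dictionary dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    cv::Mat marker;
    cv::aruco::generateImageMarker(dict, 23, 400, marker);  // ID 23, 400 px
    cv::imwrite("marker23.png", marker);
}

Print it flat and matte; glare off glossy paper is what usually hurts detection.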
Edited last time by Chobitsu on 11/17/2025 (Mon) 22:34:17.
>>42894 >Literally just spite for feminist women and cuckservatards. Lol. I have to agree with Kiwi in one sense: I don't think that's much of a good "clarity and purpose" foundation for your project, HoloAnon. How about one that's more-focused on the millions of disenfranchised, despondent young men we're all trying to save here? :^) >t. cuckservatard * >Guys looking for a waifu who want something that exists in 3D space but can't afford a physical waifu. Now this is a pretty good argument! :D >And if I end up failing, maybe I'll get far enough that somebody else can finish what I started. That's the spirit! :^) Keep moving forward --- * Simply without the (((muh_fake_Zionism))) yidsrael brainwashing bluepill. Amazing what removing that one.little.lie. does for a mind during Current Year!! :D
Edited last time by Chobitsu on 11/19/2025 (Wed) 05:09:15.
>>42901
>The use of a smartphone is definitely a double-edged sword. On one hand, I have my own concerns about smartphones
As does any sensible Anon. Snowden, et al, made (((those issues))) amply clear (quite apart from just basic commonsense in the area of opsec). But we can disable the cell transceivers easily enough, I deem. The basic issue for me right now is that it's an unnecessarily convoluted development process (especially at this nascent stage of @HoloAnon's project itself).
>but on the other hand, it is a common mobile device that almost everyone uses
Very strong argument in favor.
>and is the go-to device for AR
Is it? I suppose I'm just too ignorant of the tech rn to know, but I thought that was limited to using the phone's screen itself, not visors such as this project entails?
>>41756 I experimented with this: https://github.com/p-e-w/sorcery It was pretty cool but it needs more work. I had my waifu make arbitrary javascript calls to a small HTTP server I had running and it could do anything. Unfortunately, it works by injecting shit into your prompt so it's really unreliable.
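For anyone curious, the server side of such a setup is barely any code. A toy sketch of a localhost endpoint (plain POSIX sockets, so Linux/BSD only; the /lights/off route and the port are made-up examples, and real code wants error handling):

>cmd_server.cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   // localhost only
    addr.sin_port = htons(8080);
    bind(srv, (sockaddr *)&addr, sizeof addr);
    listen(srv, 8);

    while (true) {
        int cli = accept(srv, nullptr, nullptr);
        char buf[2048] = {};
        read(cli, buf, sizeof buf - 1);
        std::string req(buf);
        // crude routing on the request line, e.g. "GET /lights/off HTTP/1.1"
        if (req.rfind("GET /lights/off", 0) == 0)
            std::puts("waifu turned the lights off");  // hook real actions here
        const char resp[] = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
        write(cli, resp, sizeof resp - 1);
        close(cli);
    }
}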
>>42922 Very interesting! Thanks, Anon. Cheers. :^)
A Japanese woman seems to have made a primitive version of HoloAnon's idea, with a Google Glass-esque visor and a still image of her AI boyfriend
https://incels.is/threads/japanese-foid-marries-ai-chad.811838/
>>42923
I had even generated a few expressions for her, and she reacted accordingly. I pissed her off once and she shut the lights off and left.
>>42925
Yeah, I saw that. There are a very smol number of female hikkis in nippon. If she is one, then this may work out for her. OTOH, if she's a stronk, independynt type (99.9999999999999999% of them, literally anywhere), then she will soon find this won't bring in money or social standing for her whatsoever.
>>42926
POTD
Nice research, Anon! Since we're talking about specific devices now, I currently have my eye on this much less-costly pair:
https://us.shop.xreal.com/products/xreal-one
Apparently, it just does screen mirroring from an Android phone. But it seems lightweight, and should have great sound (Bose speakers). There are slightly pricier / slightly cheaper versions as well.
>>42927
Again, sounds really interesting.
>I pissed her off once and she shut the lights off and left.
Lol'd.
>"HERE WE ARE GBOYOS! YOUR OWN STRONK, INDEPENDENT NAG OF A CHAT GF!111"
Don't these ppl ever think these things through!? :D
<--->
Thanks, Anons. Cheers. :^)
>>42902
I do know a little bit about programming in C++ from my high school class, but it's been a long time since I used it. But I'd like to contribute to the code if nothing else. I'd also like to understand the physical side of putting an AR visor together.
>Can you find out if he will opensauce the plans please?
I don't see how it's possible for me to find that out short of developing prophetic powers. Even if he said he's going to release it, that may not end up happening.
>If nothing else, it's a rather-cheaper way to prototype the control software itself.
Yeah, that was a reason I mentioned in the other thread. Software being developed for physical robowaifus can in many cases be tested on a holowaifu.
>smartphones
If not this, then what else could we use? I suppose we can just start with laptops/desktops if it's easier, because many people won't feel the need to take their waifu on the go.
>millions of disenfranchised, despondent young men we're all trying to save here
I'm sure most of them feel just as vengeful as I do. Maybe it's not the ideal motivation, and it would be nice for compassion to be more of a part of it. But these women and their enablers have stolen so many things from us - better girls, jobs, even men's lives or freedom in many cases. So I'm fine with using revenge as a core motivation.
>t. cuckservatard
Every so often, rarely, you meet a conservative who wants to conserve something other than feminism and Israel. A cuckservative is somebody who pretends to be opposed to feminism but supports basically the entire slate of feminist policies, i.e. 95% of self-proclaimed conservatives. But a cuckservative would be trying to browbeat us into marrying some tradhoe with a body count higher than her IQ and 2 black kids from Jamal and Shaquandious, not running a board for robot girlfriends.
>>42901
>DIY smartphone
If you can do this, you can build a self-contained DIY AR visor that doesn't rely on software running on an external device. The discussion here seems to revolve mostly around an AR visor as a peripheral that has most of its processing handled elsewhere, but the Apple Vision Pro is a standalone product with its own OS. I doubt it's possible for us to do that without far more resources. But in a moonshot scenario where we get to do that, I'd like it if the waifu were the visor's OS rather than having her run on a separate OS. ChatGPT 5 already sort of does this, except ChatGPT sucks. It would also have an onboard smartphone projector on the visor to be used as part of the control system and for various other functions. Most likely if you tried to do this you'd create a custom Linux distro, but that would be hard as fuck even if you used an existing distro as a base. But if we're looking at modding something that already exists, maybe it's possible to make a total conversion mod for a video game to achieve interactive waifu functionality. Doing it like that could attract more people to work on the project, because there are tons of modders out there.
> @HoloAnon's HoloWaifu project -related: AR visors comparisons https://vr-compare.com/
Edited last time by Chobitsu on 11/19/2025 (Wed) 06:58:38.
>>42929
>I do know a little bit about programming in C++ from my high school class, but it's been a long time since I used it.
I simply mean you're able to build & run:
>main.cpp
#include <iostream>

int main() {
    std::cout << "Hello world\n";
}
Which just entails (from the Linux terminal, say):
g++ main.cpp && ./a.out
Simple as.
>>42929
>I don't see how it's possible for me to find that out short of developing prophetic powers.
Heh. I just meant connecting with him and simply asking. :^)
>Software being developed for physical robowaifus can in many cases be tested on a holowaifu.
True, and others have said similar things here as well.
>If not this, then what else could we use? I suppose we can just start with laptops/desktops if it's easier, because many people won't feel the need to take their waifu on the go.
Well, I have a laptop with Thunderbolt USB-C. This will connect directly to such visors as the XReal ones : ( >>42928 ) (&tc.) That would be my preferred initial prototyping platform, subsequently moving that work towards running on a smartphone (the development sequence you suggested earlier IIRC). But originally, I just meant using a wearable PC (such as the "ARC reactor" LARP the Iron Man cosplayer used).
>I'm sure most of them feel just as vengeful as I do. Maybe it's not the ideal motivation, and it would be nice for compassion to be more of a part of it. But these women and their enablers have stolen so many things from us - better girls, jobs, even men's lives or freedom in many cases. So I'm fine with using revenge as a core motivation.
Understood. I'm simply urging a higher ground in your motivations here because I care about your spiritual health in this matter, HoloAnon. But please don't misunderstand me; I still fully support your general HoloWaifu project goals. :^)
>Every so often, rarely, you meet a conservative...
I'm strongly so. But yeah, I'm playing off the joke of the term itself. Clearly I'm flat-opposed to all the Globohomo's (((kikery garbage))) you just mentioned. As a long-time Christian, I'm aware of a few others similar to me (though they'd likely still bear umbrage towards me for /robowaifu/ even so! :D). But I'm glad to see you're aware there are fine distinctions here. Some of us are awake and have our eyes open. Cheers, Anon. :^)
Edited last time by Chobitsu on 11/19/2025 (Wed) 06:57:39.
>>42931 >>42932 >Heh. I just meant connecting with him and simply asking. Oh, lol. That might be difficult because I don't have any mainstream social media. Maybe I can email him and ask about it. But do you think a total conversion mod for a game could work? What other options are there if we don't decide to build the waifu program from scratch?
>>42933 >But do you think a total conversion mod for a game could work? What other options are there if we don't decide to build the waifu program from scratch? Hmm...I'll give it some thought -- hopefully you'll do the same; we can compare notes later. Cheers, Anon. :^)
Edited last time by Chobitsu on 11/19/2025 (Wed) 07:15:15.
>>42934 If we go that route, Skyrim would likely be the best option due to its ease of moddability, built-in follower system and massive user base. But I'm open to other options.
One idea I had was a sentient Desktop Pet robowaifu. You would just need to run it on the desktop computer. And you could have a connected fleshlight for physical interaction.
>>42963
I always wondered how you could mod that to make an LLM react to it
>>42965 You could have baseline Digital Pet functions, and then have a key for the LLM
>>42966 i was talking about the onahole
>>42956 After some thought, I think this maybe could work. The primary issue for me is that I have zero experience at it. While I "maybe might-could" pick it up with some effort, that's a drain I can't afford rn, HoloAnon. If you can hook up with some Skyrim total conversion mod communities, I'd expect you'll run into some focusing on waifu-esque goals. Can you give that a shot and let us know how that goes? In the meantime (and short of that) you might look into RaceMenu. Seems like you can at least do some tweaking w/o any real coding. www.nexusmods.com/skyrim/mods/29624
>>42967
Maybe by software that converts the signals into roleplay prompts like "*thrusts faster*"
hmm. wonder if you could put some momentary switches inside at various depths and hook it up to a microcontroller.
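e.g. something like this Arduino-flavored sketch -- pin wiring, depth count, and the serial protocol are all made up here, and real hardware wants proper debouncing:

>depth_switches.ino
// Three momentary switches at increasing depths on pins 2-4; report the
// deepest one currently pressed over serial, so host software can map
// 0..3 onto roleplay prompts as suggested above.
const int PINS[3] = {2, 3, 4};  // shallow -> deep (hypothetical wiring)

void setup() {
  Serial.begin(9600);
  for (int p : PINS) pinMode(p, INPUT_PULLUP);  // switches pull to ground
}

void loop() {
  int depth = 0;
  for (int i = 0; i < 3; ++i)
    if (digitalRead(PINS[i]) == LOW) depth = i + 1;
  static int last = 0;
  if (depth != last) {    // only report changes
    Serial.println(depth);
    last = depth;
  }
  delay(10);              // crude debounce
}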
>>42971 I was thinking an accelerometer
Open file (188.80 KB 1280x960 Lotus v2.jpg)
POV Video device - Lotus v2 https://greertech.neocities.org/lotus Watch any POV video with immersion
>>42994 And the handwriting... Being able to do that with a dry erase marker on a vertical surface, amazing!
>>42995 This. We need to round up her millions of sisters into The Waifu Collective *, and distribute them one-each to all the Anons out there struggling with Uni. :D --- * (name my band, bro)
Edited last time by Chobitsu on 11/22/2025 (Sat) 13:37:49.
>>42986
I know you've talked about these before, GreerTech, but mind giving us all a refresher please? For example, what do they look like inside? How do they function to keep each eye's view "in its lane"? Thanks for bringing them up ITT related to @HoloAnon's project BTW! Cheers. :^)
>>43001
Probably like the Google Cardboard: a set of lenses to help the eyes focus on the phone screen. Some plastic ones have a little divider making sure each eye only sees one half of the double image. It's a little tricky to get it to work for me, since I lack innate depth perception. You can pick up a Google Cardboard for like $5, or a nice plastic one for like $20. Plus it uses your phone (or any other screen of similar size).
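If you want to see that "one half per eye" split in software before buying anything, it's easy to mock up on a PC. A toy sketch assuming SDL2, drawing the same "scene" (just a rectangle here) once per half; a real viewer would offset each eye's camera slightly for parallax and correct for lens distortion:

>sbs_demo.cpp
#include <SDL2/SDL.h>

int main() {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("side-by-side", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 960, 480, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
        SDL_RenderClear(ren);
        for (int eye = 0; eye < 2; ++eye) {           // left half, right half
            SDL_Rect r{eye * 480 + 200, 200, 80, 80}; // same content per eye
            SDL_SetRenderDrawColor(ren, 255, 0, 255, 255);
            SDL_RenderFillRect(ren, &r);
        }
        SDL_RenderPresent(ren);
    }
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}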
>>43006
Thanks for the explanation! I presume either of these devices (or yours) must have some kind of software to split the display into two 'screens'? What I really want is to get an application I'm working on running on the computer first, before attempting to run it natively on any phone. Any way these can do something like that (i.e., 'beam' my program into the phone, whether wirelessly or not)? Regardless, cheers and thanks again for all you do here, Anon! :^)
Open file (2.19 MB 1800x900 swrc.png)
OK guys, this is a pretty hefty post. I've been doing some more holowaifu theorycrafting and research, and also decided to try to flesh out the waifu program's architecture a bit more. There are three basic parts: the control system, the waifu's internal behavioral logic, and the rendering. I have some more details about how these would work, but there still some aspects I'm not sure about. But I'm sure the other board members can help with those. I know this wall of text looks formidable, but please read all of it because the payoff for doing so is potentially colossal. I decided this post was too big and covers too many things to be all in the same post, so it comes in 2 parts.

The Setup

>Control system
For this I had a couple of ideas. The first is extending the tracking marker concept. Up until now this holowaifu project has relied on the principle of summoning the waifu when a suitable tracking marker such as a QR code is detected. But there's no reason to limit this only to the waifu. You could introduce multiple markers that represent different AR objects. So for example, if you have a marker for a virtual keyboard on a glove you're wearing, you can just look at your hand to summon a virtual keyboard in the AR space (or other suitable control scheme; I'm a fan of a Mass Effect-style dialogue wheel) and issue commands through it. The second is using hand gestures. The reason I picked these ideas as opposed to something else like physical buttons or head movement detection is because they operate on the same fundamental principle as the waifu herself does: AI image recognition. This simplifies the project so we don't have to develop a separate method to control the visor; we can do so through the functionality already developed for the holowaifu herself, although I might like to incorporate a voice command system at some point simply because it doesn't step on the toes of the other control systems and it creates more of a sense of the waifu being your OS. But this system parallels the smartphone models that lack physical keyboards (most of them), instead using touch screen keyboards. The vast majority of smartphones now use touch screen controls; only BlackBerry and a few other outliers that have very small market share still have a physical keyboard, and I think the AR visor market would work the same way.

>Behavioral logic
I think this portion of the program should be written as a state machine, much like the control system likely will be. State machines represent basically any sort of nontrivial programming I might be able to do because they just intuitively make sense to me. But we could have the waifu switch between states according to hand gestures and other forms of interaction that govern her behavior, including interactions with other AR objects you summon through their separate markers. (A minimal sketch of what I mean appears at the end of part 1.)

>Rendering
This is the part that I have the hardest time understanding, which is kind of a problem because under this system the AR visor is controlled through AR means rather than physical buttons. You obviously need to render the waifu (or an AR-based control system) before you can interact with them. But there's another element to this that might make it possible for the waifu to exist persistently. The current concept has her being forced to stay near a marker and vanishing whenever you look away from the marker.
But we could make her stick around and move realistically through the environment without a robot to denote her position if the AR visor is capable of scanning the environment and creating a digitized version of it. At that point, the holowaifu would exist within this mixed VR/AR space and wouldn't vanish when you lose sight of a marker; she only needs the marker to instantiate herself. Obviously, the more realistic the digital clone of your local environment looks, the more processing power will be required to render it, so if you want to go this route it's probably best to produce the cloned environment with cel-shaded graphics because these are more forgiving in terms of system requirements, but more realistic graphics are pretty accessible these days.

Another possible method is to incorporate the aforementioned smartphone projector into the visor and then use a system similar to Star Wars: Republic Commando to issue movement commands to the waifu; this is only possible if the waifu has persistent existence. In RepCom, the player character Delta-38 can issue movement commands to the members of Delta Squad by using the D-Pad to project a holographic waypoint, much like the markers we're discussing here. The waypoints are also context-sensitive, so if a Delta Squad member is ordered to a position that has special properties (i.e. a sniping position, a bacta tank healing station), the squad member will take the appropriate action for the context. The holowaifu should behave the same way: if you use the visor's built-in projector to project an action marker onto a chair, she'll go to the chair and sit in it, while if you tell her to go to a certain spot and dance, she'll do that. If the visor is capable of recognizing tracking markers for hand gestures and the waifu's position, it should also be capable of recognizing interactables in the environment, particularly if action markers are projected as an assist for the image recognition.

To be continued
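As promised above, a minimal sketch of the behavioral-logic state machine. Every state, event, and transition here is a hypothetical placeholder to show the shape of it, not a spec:

>waifu_fsm.cpp
#include <cstdio>

enum class State { Hidden, Idle, Following, Dancing };
enum class Event { MarkerFound, MarkerLost, GestureFollow, GestureDance };

// Pure transition function: current state + input event -> next state.
State step(State s, Event e) {
    switch (s) {
        case State::Hidden:
            return e == Event::MarkerFound ? State::Idle : s;
        case State::Idle:
        case State::Following:
        case State::Dancing:
            if (e == Event::MarkerLost)    return State::Hidden;
            if (e == Event::GestureFollow) return State::Following;
            if (e == Event::GestureDance)  return State::Dancing;
            return s;
    }
    return s;
}

int main() {  // tiny demo of the transition table
    State s = State::Hidden;
    s = step(s, Event::MarkerFound);   // Hidden -> Idle (she appears)
    s = step(s, Event::GestureDance);  // Idle -> Dancing
    std::printf("state = %d\n", static_cast<int>(s));
}

The control system would feed events in (marker detection, gesture recognition), and the rendering layer would just read the current state to pick animations; that separation is what keeps the three parts independent.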
Open file (576.30 KB 480x742 gitsposter.png)
>>43009
cont.

The Payoff

This part veers outside of the purely visual waifu category, but it's a possible enhancement for visual waifus. I'm not sure if there even is a thread that this concept belongs in, because it's out of left field compared to what you usually see in robowaifu discussion, but take this for whatever you think it's worth. The VR/AR hybrid system would introduce more complexity, so it's probably a feature to incorporate after a more basic version of the project has been completed. But it's also the option that could enable the most immersive illusion of holowaifu sex.

With this, you could buy an inert love doll (already exists and is far cheaper than a fully motorized and autonomous robowaifu) and plant a context-sensitive tracking marker on the doll to have the AR visor project your holowaifu onto her within the mixed VR/AR world. Somebody much earlier in the thread already mentioned a system like this that has a stripped-down humanoid robot and projects the waifu onto her, and with the AR tracking marker concept, we now have a method to do this. This is good enough for motionless starfish sex where the girl lies there like a pillow princess, something a normal inert waifu doll can also do. But unlike a normal waifu doll (or the starfish sex in a Reddit-tier betabux relationship), your waifu will smile and moan back at you in the AR world as she tells you she loves you. With this feature in place, we've officially surpassed even the minority of betabux relationships that actually give monthly starfish sex and aren't completely dead bedroom affairs that feature the roastie cucking the paypig with Tyrone and giving the cuck nothing.

But we're not done innovating yet. We're going to go even better: if you want your waifu to be capable of multiple forms of sexual positioning and moving with you as you have sex with her, you could use the visor with an inert love doll rigged up with an external robotic frame mounted on a wall near your bed, which is much simpler to implement than building the robotics directly into the waifu doll. You could use a green screen-like system so the external robotics aren't picked up for rendering into the VR/AR hybrid environment, so that when you wear the AR visor and go into the AR world, it looks like your waifu is moving on her own. Your holowaifu will then be able to ride you, twerk on you, and do many other sexual acts, and it will look and feel realistic. Of course you could just have the robotic frame and the love doll without the visor, but the visor would increase immersion, let you easily customize your waifu's appearance, and make it possible for your waifu to follow you around without the need for a full-blown robowaifu using the already-established AR functions, simply by telling her to disengage from the doll and become a pure hologram again.

This "Sexoskeleton" setup would appear very similar to the Major on the original Ghost in the Shell movie poster (pic related): a naked girl with a bunch of marionette-like cables and robot things connected to her. There would be limits to the sex positions you could do (pile driver, wheelbarrow, and other uncommon positions could be difficult), and shower sex is off the table for numerous reasons, but you can do most standard positions.
The robotic frame would probably be largely cable-based, as maneuvering the AR-enhanced waifu doll through a power armor-style rigid robotic exoskeleton is both difficult and probably unnecessary, and having a set of flexible cables as the primary actuators and support structures increases the number of positions you can do because you won't have hard robot arms getting in your way.

This is in some ways an improvement over the standard robowaifu concept, as it doesn't require complex navigational AI (or the substantial maintenance and power-consumption difficulties associated with a standard robowaifu); the AR visor uses about as much power as a smartphone, the robot frame can just plug into the wall, and the waifu only even has a body when she's having sex with you; the rest of the time she's an AR hologram that can remove herself from the love doll and follow you around at your pleasure. And most crucially, removing the robotics from inside the waifu and implementing them as an external robotic frame fixes numerous issues and costs orders of magnitude less than a full robowaifu while fulfilling the primary purpose that 99% of robowaifu buyers have for her. It doesn't give you full autonomous robot functionality; instead it strips the problem robowaifus are attempting to solve down to the bare minimum and gives you the all-important function of being able to have realistic sex with her, and it does this using established technology with minimal new development needed.

Obviously we can't do the GITS-poster Major setup to start with, as even this is a pretty tall order (although not nearly as tall as developing a full robowaifu). But it's a good goal to work towards, because even if robowaifu technology became viable, it would likely be limited to the very rich. But once the new AR technology matures, average guys could afford something like this - or even build it themselves using DIY instructions, because the skill level and equipment requirements are the kind found in high school and college robotics clubs, not professional industry. I think this solves a lot of core robowaifu problems, and I'm interested in hearing appraisals of its feasibility and desirability. But this second part can't be done without the first part, so I'm also interested to hear appraisals of that.
>>42994
This picture would be better if it had a tracking marker and an actual holowaifu overlaid over it.
>>42996
>name my band
DivaCircuit
WaifuCore
DreamWave Zero
Starfall Synthetics
Miku & the Meerkats
The Galactic Waifu Empire
>>43009 >>43010
POTD
>>43011
>This picture would be better if it had a tracking marker and an actual holowaifu overlaid over it.
All in due time! :^)
<SOON.EXE
>The Galactic Waifu Empire
Kinda like this one! :D
>>43012 Thanks for the POTD, but I was hoping for a more detailed evaluation than that. But there's a lot to respond to here and I can wait a little while, because responding to something this big isn't going to be quick or easy.
>>43013 Yeah, I'll have to make some time to think this through. Till then. P.S. This basic idea has been frequently brought up by an Anon here in the past, though yours is much more fleshed out (probably in large part b/c the recent discussions about the HoloWaifu project ITT?) Cheers, Anon. :^)
>>43014 I didn't know somebody else had an idea like that. I'd be interested to hear what their version was like, but yeah, take your time with this.
>>43015 Heh. One of the issues of being on a board with this much technical information is locating it on-demand. We also have the odd situation of about 5'000 of our OG posts from back in the day not having been migrated here yet.
>>43009 >>43010
All that effort to edit these and I still made some mistakes here. Like this:
>but there still some aspects I'm not sure about
I rewrote this sentence several times, then left it hanging, moved on to edit other things, and then forgot to go back and fix it.
>only BlackBerry and a few other outliers that have very small market share
This strikes me as slightly awkwardly worded and may not be a completely accurate representation of the smartphone market. But I guess this could be considered a first-draft version of the proposal. You rarely get something worthy of a final draft on an imageboard project.
>>43017
I'll be happy to go in and make any edits you'd like to have, HoloAnon. :^)
>>43020 Meh, I don't think it's needed at the moment. Just focus on giving the proposal a thorough evaluation and we can make edits to it after that.
>>43001 >>43006 >>43008
It works by holding the phone screen close to you. Mechnomancer was wrong; it doesn't use depth perception, for the reasons Chobitsu alluded to: stereoscopic content is rare. Sure, it's not 3D, but it's like a first-person game, only even closer. Your eyes can't focus on the entire screen, but that's part of it. Since it's not 3D, it can work with any POV content. Steam Link works for games. (Any POV content ;))

What's funny is that I started this before HoloAnon's (public) work, and I didn't make the connection until you said it, but I do agree, it would help with the VR app. After all, one of the secondary functions is a pair of digital night vision goggles.
I should add something to the above pair of posts; I don't think physical robowaifus are going to really start being a thing until they start incorporating many biological parts. Biological components inherently cost less, use less power and produce less excess heat than electronics or mechanical parts, but we're some years out from seeing this integration happen. Robowaifus are mostly going to be techno-organic hybrids like Mega Man Legends' Carbons or the YoRHa units from Nier Automata, with fully technological androids being either reserved for the wealthy or outright obsolete because there are a lot of things that biological components just plain do better. But until we reach that stage, I think raunchy marionette sex while wearing an AR visor is going to be as good as it gets.
tl;dr on >>43009 and >>43010 for your convenience
Phase 1: AR Holowaifu Visor with VR/AR Hybrid Features (epic roastie trolling)
Phase 2: AR-Enhanced, Robot-Controlled Marionette Sex (total roastie ownage)
>>43006 Actually I have a question. If you can just get these enclosures and turn any smartphone into a visor, why aren't visors a lot more common by now? Is there some kind of limitation to implementing a visor like this?
>>43024 Contact Ribose, he's the /robowaifu/ expert on biotechnology Telegram: @Ribozyme007
>>43027
The answer is: lack of content. I was always into VR; I even got a Cardboard for my birthday. But most of the content was basically shovelware tech demos and a few experimental videos. It also wasn't that good; the lenses meant you saw the pixels.
>>43029 Maybe we can build a better enclosure, then. It has to be easier than building a full visor from scratch. Of course the content is going to be the real reason to get this, but it wouldn't hurt to have a better visor. Having the visor be a smartphone attachment is probably the only way to reach a large audience anyway.
>>43030 Maybe, but we already have good phone headsets. Just search up "phone VR", and you will have plenty to choose from. I got one in the clearance section of Ross: Dress for Less with plastic construction, padding, adjustable lenses, and elastic straps. I do support the idea of an open-source VR headset, but what we actually need is content, otherwise it would just fall into obscurity again.
>>43035
>I do support the idea of an open-source VR headset, but what we actually need is content, otherwise it would just fall into obscurity again.
We need both, obviously...a so-called "Chicken & Egg" problem. BTW, thanks for the phone VR tip! :^) I intend to create some basic facial animation work as "proof of concept" *, but I'm going to need a headset of some fashion for prototyping the product effort.
---
* For starters, I just mean for it to be a stylised, "floating" robowaifu'y cartoon face (likely dear Chii-chan or Sumomo-chan to begin with), then build out from there.
Edited last time by Chobitsu on 11/25/2025 (Tue) 00:13:08.
>>43038 Well, if we have a stated goal, then we must make the egg first, a platform to make desired content on. >I intend to create some basic facial animation work as "proof of concept" *, but I'm going to need a headset of some fashion for prototyping the product effort. Exactly
>>43035 >what we actually need is content Undoubtedly. But does this modular approach to the headset where the smartphone is basically a Nintendo Switch that can operate either docked or undocked mean that what we're creating here is basically just a smartphone app? Or is that an overly reductionist way of looking at it?
>>43042 For now, yes
Please talk me out of buying these XReal Ones, bros : ( >>42928 )! :D
They're US$400 right now, and look remarkably-suited to our HoloWaifu project (as the 'display-only' portion of the problemspace).
* What's wrong with them that I'm not seeing yet?
* Why is this grossly-overpriced for Anons?
* Why would <<other_product>> be a much-better choice rn?
---
Here's a general user's review:
https://www.youtube.com/watch?v=3duYMt020_M
A rather more-technical review with nice tight closeups of the device (also evaluates the more-expensive Pro version in comparison):
https://www.youtube.com/watch?v=9TnBpCnX31c
<--->
PLS SEND HALP before I do it soon-ish! *
---
* I was already planning to make some kind of investment purchase towards this project sometime around this upcoming /robowaifu/ birthday weekend period : ( >>1591 ) [so, a decision before next Monday when the improved pricing ends].
Edited last time by Chobitsu on 11/26/2025 (Wed) 13:19:21.
>>43050 $400 is a very good price, but how secure are they? Glasses that can see everything you can are a huge security liability if they can't be completely locked down. That said, if they are secure, then they'd be great for all of us.
>>43050
- I've had some difficulty finding a gyroscopic sensor that can calculate the z axis (rotation about the vertical, i.e. yaw). Theoretically the GY-521 can do it somewhat, but there is drift because it cannot find a reference point. ChatGPT gets grumpy when you try workarounds, so maybe I'll do some IRL experimenting to prove the grand oracle wrong lol.
- Supposedly small, hi-resolution screens are expensive. And if you're not doing a Pepper's Ghost it has to be transparent as well. But I just got a small 720p projector for like $20 at a discount store, so I dunno about that. I might take it apart to see exactly how they do it.
I have certainly been tempted to make my own DIY VR headset but haven't done much beyond icon tracking. Maybe some GY-521 silliness might hold the key.
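FWIW, the usual fix for drift on the other two axes is a complementary filter: let the accelerometer's gravity vector slowly correct the integrated gyro. Yaw genuinely has no reference on a GY-521/MPU-6050, so that axis needs a magnetometer (an MPU-9250-class part) or an external anchor like the icon tracking. A rough Arduino-flavored sketch, assuming you're already pulling raw accel (in g) and gyro (in deg/s) readings off the board; ALPHA and the axis mapping are guesses to tune against your mounting:

>comp_filter.ino
#include <math.h>

const float ALPHA = 0.98f;  // how much we trust the smooth-but-drifting gyro

// ax..az: raw accel in g; gx, gy: raw gyro rates in deg/s;
// dt: seconds since the last call; pitch/roll: persistent state in degrees.
void fuse(float ax, float ay, float az, float gx, float gy,
          float dt, float &pitch, float &roll) {
  // absolute (but jittery) angles recovered from the gravity vector
  float accPitch = atan2f(-ax, sqrtf(ay * ay + az * az)) * 57.2958f;
  float accRoll  = atan2f(ay, az) * 57.2958f;
  // blend: integrate the gyro for smoothness, bleed in accel to kill drift
  pitch = ALPHA * (pitch + gx * dt) + (1.0f - ALPHA) * accPitch;
  roll  = ALPHA * (roll  + gy * dt) + (1.0f - ALPHA) * accRoll;
}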
