/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

We are back again (again).

Our TOR hidden service has been restored.


“He conquers who endures.” -t. Persius


Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, a cute AI from Assassination Classroom whose body is a giant screen on wheels.
>>38443
>I'm just not sure they should be subsidized or we'll get a one size fits all robowaifu.
It's a fair point. As a counterpoint, I offer the Wright Brothers and their flying wonder. Until their breakthrough achievement, there were no powered flying machines. Once the breakthrough occurred, men from nations all over the West were inspired, and less than 20 years later there were literally dozens of flying machines in production. I consider this kind of like breaking the 4-minute mile. Somebody's gotta do it first... may as well be us here on /robowaifu/.
>tl;dr
Till now, there are no widespread robowaifu designs suitable as kits for everyman. I'm pursuing the business acumen to achieve the funding to start a company doing just that. Until then, Anons can only dream of the future. I personally don't consider that enough! Simple as. :^)
TWAGMI
Open file (554.24 KB 575x422 mek collage.png)
>>38444
>a new craze a la wright brothers
At the risk of derailing the thread, a major issue is that any dork who rubs two wires together thinks he's Tony Stark, and creates videos for ad revenue rather than a product to sell. Something like this caused the Mech Crash of 2018: a bunch of companies built "mechs" in response to the "Megabots vs Kuratas" hype train, until the fight actually happened and then they disappeared due to all the negative press (Megabots were chill but kinda fly-by-night, while Suidobashi Heavy Industries were stuck-up pricks). I can't help but feel we're seeing a psychohistoric echo with all these "advanced humanoid robot" videos on youtube. I heard there was a chinese robot foot race that got heavily censored because most of the robots would take 2 steps and fall over. Unless the robots are out in random crowds (like elon did with having optimus be a telepresence device) or actually available for purchase, all "advancements" should be taken with skepticism. After all, it was revealed one of Amazon's "Smart Stores" wasn't run by AI but by a bunch of Indians watching via camera and taking notes...
>>38443
>That's the dream.
I'll try to get around to some durability testing for Pringle at some point, because if she passes the durability tests she'd be ready for public release, and I made sure that she is very easy to work with :)
>>38444
Not to toot my own horn, but I feel like Galatea is that kit. It's finished, published, and available for anyone to build. Now my hope is that it will be the Wright Brothers moment.
>>38443
>That's the dream. I'm just not sure they should be subsidized or we'll get a one size fits all robowaifu. But, I support the concept and look forward to future iterations.
Agreed on the design philosophy. I don't think there will or should be one "The Robowaifu, alright pack it up everyone". Different people have different ideas, desires, and visions.
>>38453 >>38454 LOL. OK, ok guys -- you've convinced us... THE FUTURE IS HERE
>>38460 I probably just watch too much animu, right? :D "Chii?"
>>38453 >she'd be ready for public release and I made sure that she is very easy to work with :) Nice! She'll be a great part of the first wave of real robowaifus!
So, how do we go about promoting & distributing the robowaifus that are already here? My mental canon involves a self-supporting business doing that (for-profit sales [fully-assembled or kits], freely-distributed designs, software, & kit-assembly support). Since that business isn't here yet, how can we achieve something similar without it?
Obviously, we are all derailing the Visual Waifus thread r/n. Let's move to the thread of your choice please, Anons.
Open file (1.20 MB 756x1008 T031.png)
>>38454 >>38453
I think Galatea could be sold on Etsy or wherever as-is. Hope to see you guys sell kits. Maybe just a marketplace thread with links to different projects. I was mostly joking about the subsidy thing. If my tribe's casino produced enough money to hand out robowaifus to tribal members, I just fear they would also look like my tribal members, so I'd go for the Chinese silicone version and pay for it myself. I mean, I wouldn't refuse the state-mandated robowaifu of course. I'm just not sure I could move the thing. As for a visual Chinese-made waifu, here's one I have on order right now. It has moving hips, neck, and tongue, and built-in AI. Not bad for $2k.
>>38464
Yup! One thing about virtual waifus (and to circle back to my potentially derailing post) is that due to their nature it is pretty easy to prove whether the maker is lying about their functionality. E.g. "want to try out my virtual waifu? here's the download link/place to buy" vs "I built a virtual waifu but no I won't share/monetize it". No need for specialized hardware a lot of the time. I have a similar program that has little SOS brigade members run around my screen with some "Disappearance of Haruhi Suzumiya" action. The program was made in like 2009.
>>38453 I still believe in the heart of the cards mecha. I think right now the mech industry is going through its equivalent of the period between the first video game crash and Nintendo reviving the industry. Somebody will succeed in making mecha popular and profitable enough to build a business on at some point. I think there should be a thread about it, but I tried last year to make a thread about covert ops robowaifus and apparently that's considered off-topic here, so I don't know how you could justify having an /m/ thread here. Also even if there was a thread I don't know that I could be around enough to participate meaningfully in it. I've always lamented not being able to do more to further the robowaifu cause, which is partly because I'm busy with other things and partly because I have little in the way of skills that would be useful for this.
>>38474
>One thing about virtual waifus (and to circle back to my potentially derailing post) is that due to their nature it is pretty easy to prove if the maker is lying about its functionality.
Good point, Mechnomancer.
>little SOS brigade members run around my screen with some "Disappearance of Haruhi Suzumiya" action
A cute! Giving them contextual understanding now, and the ability to TTS<->STT, would really liven things up! Cheers, Anon.
>>38476
>but I tried last year to make a thread about covert ops robowaifus and apparently that's considered off-topic here
Self-defense/perimeter-patrol would certainly be welcomed and on-topic in : ( >>10000 ). However, the last thing I want /robowaifu/ to become is "Terminators-R-Us". While your effort is appreciated Anon, your topic is far too close to that destination. My apologies again, and I wish you good luck with that idea in another forum!
>I've always lamented not being able to do more to further the robowaifu cause, which is partly because I'm busy with other things and partly because I have little in the way of skills that would be useful for this.
Ehh, many of us are busy, Anon. I think just participating by posting in threads here on the board is sufficient to at least be an encouragement to everyone! Please continue doing so, Anon. Cheers. :^)
>>38479 Meh, it's no big deal. I've had threads closed for much dumber reasons than that. But where's the line between teaching robowaifus self-defense and teaching them to be soldiers and assassins? And also, what can somebody who doesn't have enough programming or engineering skill to be useful in building waifus do to contribute?
>>38480
Also not sure where that line is. Legally, I think most places wouldn't even allow them to have non-lethal options like pepper spray.
But back to vtubers. I know nothing about them really. Has anyone else gotten this working, and is anyone familiar with Live2D? I'm just using the default characters, and there isn't a good english site for them.
https://docs.llmvtuber.com/en/docs/quick-start/
https://docs.llmvtuber.com/en/docs/user-guide/live2d
Looks like Open-LLM-VTuber uses an older version of Live2D, so a lot of the models won't work. It's also hard to rig non-anime characters/robots. Installing it is easy if you already have CUDA/ffmpeg installed. I just had to install uv and make sure it is in my PATH. After that, it was just two commands, "uv sync" and then "uv run run_server.py", to run it. Just edit conf.yaml for your preferences.
>>38476
>Somebody will succeed in making mecha popular and profitable enough to build a business on at some point.
Me. I originally started robowaifu development as an intentional byproduct of investigating computer controls for my mech projects (I mention these a bit in my first thread about SPUD). They are pretty much like smaller robotics projects; the only difference is I'm using parts that nobody really thinks about using, and in this manner I create constructs at a fraction of a percent of the cost of competitors.
It might be fun to make an AI vtuber version of my mech. I've been working with the Pillow python library and its ability to composite images, so making an entirely python-based vtuber system might be a possibility, with inputs to the avatar determined via a dropbox file. IDK, a little bit outside my current scope.
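Very roughly, the Pillow compositing idea could look like this (a minimal sketch, not SPUD's actual code; the layer/expression setup is invented for illustration):

```python
# Minimal sketch of a python-side "vtuber" frame compositor using Pillow.
# Each expression is a stack of transparent layers (base sprite, eyes,
# mouth, etc.) alpha-composited into one output frame.
from PIL import Image

def composite_avatar(base: Image.Image, layers: list) -> Image.Image:
    """Stack RGBA layers over a base sprite, top layer last."""
    frame = base.convert("RGBA")
    for layer in layers:
        frame = Image.alpha_composite(frame, layer.convert("RGBA"))
    return frame
```

Swapping which mouth/eye layers get passed in each tick (driven by whatever the dropbox file says) would give simple lip-flap style animation.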
>>38486 >Somebody will succeed in making mecha popular and profitable enough to build a business on at some point. >Me. That goes hard
>>38486 >Me. This.
>>38486
>Me.
A bold claim. It's good to have people who are willing to shoot for these big goals, but how do you intend to get the startup capital needed for this? Although if you can make good on this, you'd have ample money to pour into robowaifu development.
Open file (63.79 KB 927x500 6cil49-1421120541.jpg)
Open file (931.24 KB 682x512 mek fam.png)
>>38498
>how do you intend to get the startup capital needed for this?
2nd pic is my mech family as of '22. Been going to local events since '20. Carry, the workshop waifu, is the first one to have a computer brain. I'm still working on computerizing the others.
Have you taken notice of that super-popular project on Kickstarter? I got an ad for it recently.
https://www.youtube.com/watch?v=WFgXunR8b6A&ab_channel=gizmochina
>>38612 Thanks, Anon! That's very interesting. I wish them good success with that product line. Cheers. :^)
>>38612 At first I was skeptical because it looked like a standard corpo AI waifu, but I checked out the kickstarter page right now, and it does support offline AI and custom models. So I think it's going to be pretty good! Exciting times we live in. The first generation of AI waifus and robowaifus are coming out.
>>38615 >Exciting times we live in. The first generation of AI waifus and robowaifus are coming out. This. What a time to be alive!! FORWARD.
Open file (60.71 KB 1024x1024 Galatana.jpeg)
(Edited from Trashchan)
Introducing Galatana, the standalone AI system. It uses the same AI used in Galatea v3.0.4 >>38070
Perfect for more budget-oriented anons, or anyone who doesn't want to or can't build a full robot. You can talk with her anywhere, using a single bluetooth earpiece and your phone.
https://greertech.neocities.org/galatana
-------
Original Trashchan post: https://trashchan.xyz/robowaifu/thread/84.html#bottom
Odysee backup: https://odysee.com/@GreerTechother:3/Galatana:4?r=2RAnQD4k7R6nPoYVC32GJaXWCqK6sKmT
>>38897
Ambitious project! I've been thinking about making a robot as well for the last week. I asked the guys at work who do point clouds about it, and I think for now it would be easier to do holographic projection with AR. I downloaded https://github.com/PKU-VCL-3DV/SLAM3R and converted a 30s video of my house into a point cloud. There are other projects to convert point clouds into 3d meshes and to label them as household objects: Open3D's TSDF integration and Meta's Segment Anything Model 2 (SAM 2). At first I thought of making a pipeline that updates the meshes and segmentations in real time for a robotic agent to navigate my house. Then it hit me: why not use the internal representation to create a virtual environment of my house and let an AI agent navigate that? AR can be used to display it. Pathfinding is possible with the meshes, and with labeled objects I can create a graph of the relations between objects. The AI agent navigates this in some 3d engine, with simplified meshes that match my house layout, and the agent mesh gets sent to a separate render target on my AR headset.
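The "labeled objects -> relation graph" step could be sketched like this (a toy version; the bounding boxes stand in for whatever the segmentation pipeline outputs, and the distance threshold and "near" relation are invented for illustration):

```python
# Toy sketch: build a spatial relation graph from labeled 3D objects.
# objects maps a label to an axis-aligned bounding box in meters:
# {name: ((xmin, ymin, zmin), (xmax, ymax, zmax))}
import math

def centroid(box):
    (x0, y0, z0), (x1, y1, z1) = box
    return ((x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2)

def build_relation_graph(objects, near_dist=1.5):
    """Connect every pair of objects whose centroids are within near_dist."""
    graph = {name: [] for name in objects}
    for a, box_a in objects.items():
        for b, box_b in objects.items():
            if a == b:
                continue
            if math.dist(centroid(box_a), centroid(box_b)) <= near_dist:
                graph[a].append(("near", b))
    return graph
```

An agent could then walk this graph ("the mug is near the table, the table is near the chair") instead of reasoning over raw mesh geometry.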
Open file (40.79 KB 680x531 nice place.jpeg)
Open file (39.96 KB 680x329 career.jpg)
>>39621
Thank you :) That point cloud creator is a really good find. Much better than most video-to-point-cloud processors I've seen.
>At first I thought of making a pipeline that updates the meshes and segmentations in real time for a robotic agent to navigate my house.
A lot of high-end robot vacuums use LIDAR for a similar purpose, and a 3D point cloud will definitely help humanoid robots navigate: what is unobstructed for the base may be an obstruction for the whole robot (e.g. a table or a chair).
>AR hologram waifu
I like your idea. You're clearly very knowledgeable and talented. (second pic related)
>>39624
I tried out some projects like Vox-Fusion and another one with ROS. Couldn't get either of them working. Vox-Fusion had a demo unit that just didn't work, and I can't be arsed to install linux for ROS. I tried installing Pop!_OS like 2 weeks ago for some Triton project. It just didn't work. I spent 8h getting CUDA and gcc to play nice but no success. I was yelling at my pc at this point. I gave up and went back to windows.
>high-end robot vacuums use LIDAR
Really? That's interesting. I know the scanners from work, but they're expensive as fuck. I figured LIDAR and RGBD cameras were outside my reach and I had to settle for regular RGB, maybe stereo. With SLAM3R that might even be possible, but I read that the calculated camera position isn't very accurate. Eh, I'll figure it out along the way.
I think the pathfinding won't be a problem. I made a game once with voxel-based pathfinding. It wasn't so great because the map was huge and I wanted hundreds of enemies without collisions, but a small map with static obstacles should be manageable. The bigger problem will be the animations. I tried some Quest 3 apps where you can decorate your home with virtual furniture lol. Making some crazy animation system for furniture or advanced IK would be badass.
>I like your idea. You're clearly very knowledgeable and talented.
Thanks :3 I like my crazy projects!
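For a small static map, the voxel-style pathfinding really is manageable; in 2D (one floor slice) it's just BFS over a grid, a sketch of which might look like:

```python
# BFS pathfinding over a static occupancy grid (0 = free, 1 = blocked).
# A 3D voxel version is the same idea with 6 neighbors instead of 4.
from collections import deque

def find_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk parents back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable
```

BFS gives shortest paths on a uniform-cost grid; for hundreds of agents you'd precompute a flow field instead, but one waifu on a house-sized map is trivial.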
Open file (60.71 KB 1024x1024 Galatana.jpeg)
Updated the Galatana description to explain how she functions, to make it clearer to new users.
https://greertech.neocities.org/galatana
@rick , any updates?
Contrived (but compelling nonetheless) promo video of a Gatebox clone.
https://youtu.be/QtAIiJ1wIIc
The Dipal D1 looks very interesting at this stage. I wish them good success for now!
>>40250
A much better look into the product, from the consumer's viewpoint:
https://www.youtube.com/watch?v=xZiwjLF7zTU
>>40250
I recently learned Godot is able to run on a Raspberry Pi 4 and up. Since it is an engine capable of 3d graphics, it would be possible to make an animated avatar in Godot and have it interact with Ollama via a .txt file. Haven't tried Godot yet so I'll let ya know how it works out. It would probably be easier than trying to hard-code everything in python, if there isn't an outright Ollama plugin lol
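The python side of that .txt handoff might look something like this. (A sketch only: the file names, polling scheme, and "llama3" model are assumptions, though the endpoint and JSON fields are Ollama's standard /api/generate interface.)

```python
# Sketch: Godot writes the user's line to prompt.txt; this script sends it
# to a local Ollama server and writes the reply to reply.txt for the
# avatar to pick up and speak.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Package one prompt as a non-streaming Ollama generate request."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def answer_prompt_file(prompt_path="prompt.txt", reply_path="reply.txt"):
    """Read Godot's prompt file, query Ollama, write the reply back."""
    with open(prompt_path) as f:
        prompt = f.read().strip()
    if not prompt:
        return
    with urllib.request.urlopen(build_request(prompt)) as resp:
        reply = json.loads(resp.read())["response"]
    with open(reply_path, "w") as f:
        f.write(reply)
```

Godot's side would just be FileAccess writes/reads on the same two files in a polling loop.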
>>40281 >Haven't tried godot yet so I'll let ya know how it works out. Exciting! Godspeed, Mechnomancer.
>>38612 >>40250
This seems overpriced for what it offers, and having a subscription fee makes me assume it's going to be yet another overpriced AI chatbot service that you have to worry about disappearing. I would think it wouldn't be too hard to assemble something like this out of existing technology. It feels like there are a ton of small projects that try to offer something like this, but they're all split up and don't work with each other.
For the software, SillyTavern can likely handle all of this. It's self-hosted, has Live2D and VRM support, has TTS and voice recognition support, and you choose the LLM backend. If you paired that with outputting the model display to a smartphone/small monitor in a Pepper's ghost box, you'd have a rudimentary version of this. I think the only problem would be getting ST to output the model on a separate display.
https://www.youtube.com/watch?v=WRTWyPXYotE
I'm gonna look around to see if there are any good 3D-printable Pepper's ghost projects, because the software seems do-able with existing projects.
(Hope I'm not coming across as too much of an ideas-guy. I check this place out a couple times a year and I've recently gotten into AI roleplaying with ST. I'm looking into a dedicated visual display for a hologram waifu, and the recent rise in LLM capability makes all this seem much more viable.)
>>41756
>(Hope I'm not coming across as too much of an ideas-guy. I check this place out a couple times a year and I've recently gotten into AI roleplaying with ST. I'm looking into a dedicated visual display for a hologram waifu, and the recent rise in LLM capability makes all this seem much more viable)
What!? No, you're fine Anon! This idea of yours is great. I think if you look around the board (even here ITT) you'll find some good ideas about Pepper's ghost-like systems. I personally consider this one of the more achievable DIY-style projects for Anons to attempt, so I was really in favor of OP starting this thread. Please keep us all up to date with your research progress, Anon! Cheers. :^)
>>41756
>building pepper's ghost with a smartphone
Pepper's ghost is very easy: see the pic, anon. You can also scale it up for larger screens: it is simply a transparent piece of plastic placed above the screen at a 45-degree angle. Happy building!
>>41766 Great example of KISS! We here will need to be creative like this to keep costs very low. That will be important to spreading robowaifu tech far & wide around the globe! Cheers, Anon. :^)
Open file (867.16 KB 683x1024 Galatana.png)
Open file (551.90 KB 683x1024 Galatana 2.png)
Updated Galatana -Updated the AI Manual to the current version -Added new avatar -Updated the READ ME file
>>42592 A CUTE! :^)
> (Visual Waifu -related: >>42737 )
>>42810
>Man, I don't know the first thing about how to program an AR hologram waifu, let alone attach her to a Morph Ball that can move around. I've seen a few examples of AR hologram waifus, but none that are attached to a mobile robot.
>Ideally I'd like this to be a sort of parallel project for robowaifu development. It can be used as not only a visual waifu, but also as a test bed for physical robowaifu systems development. It's possible that AR systems could be used to direct a physical robowaifu's movement, and an AR sphere could integrate with a physical robowaifu for various tasks. But that's undermined by the fact that I don't have any idea how to do it.
Well, there are a few things that come to mind off the top of my head, Anon.
1. You need a design for your "rolly polly tracker ball" robot. It needs a battery, a way to roll itself, some kind of sensors to know when it's about to bump into things, and (possibly) some kind of transceiver system for communicating with a base unit.
2. You need a 3D model of your waifu, fully textured & rigged, for use as the animation source for your VR goggles. (You might want gloves as well, to interact with her a bit.)
3. You need VR goggles that can a) display your waifu animations, and b) implement some sort of object-detection tracking, so they know where your rolly ball is currently located. These features can be implemented as a custom system using, say, OpenCV, as long as your VR goggles connect their feeds back & forth with a base computer.
I think those are the basics involved, Anon.
>>42813
It's kind of amazing to see posts here made before ChatGPT was released. >>4018 >>4019
This is basically what I'm after with the Morph Ball holocron/AR sphere, for those who didn't read GreerTech's thread. If you look down between her legs you'll see the ballbot that acts as her anchor to the physical world. The AR visor will detect the ballbot's location and superimpose the hologram of her onto the ballbot. To my knowledge nobody has suggested this yet, but my knowledge may not extend very far.
If I were to seriously try to skill up and pursue this project, I'd be able to use some things that weren't available when this thread was made to expedite the process. I could use an AI art model to create her appearance, AI video generators to give her some animations, and convert her 2D appearance to a 3D model with another AI. Then I could use an Arduino, BeagleBoard, or another microcontroller to make the ballbot portion; there's a BeagleBoard model that has 50 GB of onboard memory, which is enough room to store a stripped-down LLM and enough context to serve as the waifu's short-term memory, with the option to upload her memories to a larger drive for archival.
Also, I think I should take a name at this point. I can't just keep calling myself Robowaifu Legal Defense Fund guy, but I can't come up with any good names either.
>>42814 Addendum: Somebody did suggest superimposing an AR hologram of a robowaifu onto a blank slate humanoid robowaifu body, but that incurs almost all of the cost and complexity of building a full-fledged robowaifu. My approach with the ballbot is much simpler and cheaper, albeit also somewhat more limited in what it can do.
>>42813 >>42814
My immediate vision was a Sphero robot and an Apple Vision Pro. The AR shouldn't be too hard to do, especially with a bright tracker point.
>It's kind of amazing to see posts here made before ChatGPT was released.
Agreed. We're spoiled now.
>I could use an AI art model to create her appearance, AI video generators to give her some animations and convert her 2D appearance to a 3D model with another AI.
Maybe Gen 1 could just be a PNGTuber model.
>>42814
>It's kind of amazing to see posts here made before ChatGPT was released.
LOL. We were pursuing this well before then! :D
>there's a BeagleBoard model that has 50 GB of onboard memory
I love the BeagleBoard SBCs -- particularly the Blue! :^)
>Also, I think I should take a name at this point. I can't just keep calling myself Robowaifu Legal Defense Fund guy, but I can't come up with any good names either.
You're fine as just "Anon", Anon. OTOH, we use handles here b/c the idea was to work together closely day-to-day to produce robowaifus. I both support namefagging (here) and encourage it (here), since it makes the process much smoother in most ways.
>>42816
Neat. Why don't you and "Anon Who Has No Name Yet" work together towards this goal? This was one of the (majority of) threads that didn't get properly migrated here to Alogs, so there are lots of OG posts missing ITT. Point being, other Anons have had these ideas from back in the day, and would probably get behind a realworld project here along these lines to help.
Open file (266.37 KB 1024x1024 morphballvariantsmp2.png)
>>42816
There are prebuilt spherebots now? Never heard of that. But is a sphere really the best choice for this? I picked it because it seemed simple and looks cool in my head, but I don't know if it's actually the best pick. Theoretically any kind of robot could be used as a tracking point. What factors determine which robot body type is best?
>>42818
If this happens it'll probably be GreerTech doing most of the heavy lifting, at least initially, because I have no idea what I'm doing. I took C++ in high school 1000 years ago, but I don't even know if that's applicable to this. But at least the power supply is going to be much easier to work out than for a full robowaifu.
>>42820
>There are prebuilt spherebots now?
Oh yeah, the Sphero has been a commercial toy since I was a tween. There are probably several cheap Disney BB-8 toys too. Here's a clear variant so you can look inside:
https://youtube.com/shorts/DmL5YcvnLXs
https://www.wired.com/2011/12/sphero/
>Theoretically any kind of robot could be used as a tracking point. What factors determine which robot body type is best?
Good point. I was thinking maybe a cube with different colors on the sides to help with tracking, or better yet, a QR-code-esque pattern.
>If this happens it'll probably be GreerTech doing most of the heavy lifting, at least initially, because I have no idea what I'm doing.
I think the little box rover would be the easiest part; the hardest part is seamless AR. I would start here:
https://en.wikipedia.org/wiki/Augmented_reality
https://en.wikipedia.org/wiki/ARToolKit
https://grokipedia.com/page/Augmented_reality
For the goggles, here's a video I found on the Apple Vision Pro:
https://youtu.be/JVJPAYwY8Us
>>42821 "Galatea, jork it a little" https://youtu.be/503SKHSzPWc
>>42821
But if it was a cube, you'd have to incorporate discrete locomotion systems for it instead of just having it roll. That means the wheels/legs/anything else you decided to use would be subject to damage and malfunction, which is probably the most compelling reason to use a sphere: it's the only robot body that keeps its locomotion systems on the inside.
>QR code
This might be good to have. But what other tracking methods are usable with an AR visor? LEDs? Sound?
>>42821
>Good point, I was thinking maybe a cube with different colors on the sides, to help with tracking, or better yet, a QR code-esque pattern
Just use a black & white checkerboard pattern like the famous "Amiga Ball" would be my guess. Trackers look for sharp edges and clear geometric intersections; spheres decorated that way would have plenty of both.
<--->
On the general topic of DIY tracking (using OpenCV), here's one quick link:
https://learnopencv.com/the-complete-guide-to-object-tracking-in-computer-vision/
And here's what Leo had to say:
>To track objects with OpenCV, follow these general steps: first, set up your environment by installing the necessary libraries, such as numpy and cv2.
>Then, capture video input either from a file or a live camera feed using cv2.VideoCapture().
>Next, select a region of interest (ROI) containing the object you wish to track using cv2.selectROI(). This ROI defines the initial bounding box around the target object.
>After selecting the ROI, initialize a tracker object. OpenCV provides several tracker algorithms, including BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN, MOSSE, and CSRT.
>The choice of tracker affects performance and accuracy; for example, MOSSE is known for high speed (around 91 FPS) but may depend on video conditions, while CSRT offers high accuracy but lower speed (around 7 FPS).
>The tracker is initialized with the first frame and the selected bounding box using the tracker's init() method.
>In a loop, continuously read new frames from the video. Update the tracker with each frame using the update() method, which returns the new bounding box coordinates.
>Draw the updated bounding box on the frame using cv2.rectangle() or similar functions to visualize the tracked object.
>Optionally, calculate and display the frames per second (FPS) to monitor performance.
>The loop continues until the video ends or a key (like 'ESC') is pressed to exit.
>For real-time tracking, ensure the environment is properly configured, and consider using optimized trackers like MOSSE for speed or CSRT for accuracy depending on the application needs.
