/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

We are back again (again).

Our TOR hidden service has been restored.





Robo Face Development Robowaifu Technician 09/09/2019 (Mon) 02:08:16 No.9
This thread is dedicated to the study, design, and engineering of a cute face for robots.
Open file (273.38 KB 720x480 Fossilkat2.png)
Open file (193.24 KB 592x686 Peppercat_MS_Artwork.png)
>>37529 It's just that the Medabot designs from the anime I watched back in the late '90s were simpler. They weren't all good, but a lot of the designs from franchises that are still around, like Medabots and Digimon, have gotten needlessly more complex over time. And the eyes of Medabots were usually just round or somewhat almond-shaped colored shapes, or otherwise some kind of visor. In the anime the eyes were often animated expressively in the cartoony way you'd expect, but the designs could convey expression just fine with voice and body language alone. Apparently in the past 10 years they started adding visible pupils to the eyes, and now they've added one with an expressive face. It just seems weird to chase a modern trend in an otherwise sci-fi series. I remember feeling the same way when Mega Man 11 had quadcopters as stage enemies. But the other thing about it that bugs me is the grid on the face. It's most obvious with that chibi Twitter image: the eyes aren't made from a grid of cyan and purple LEDs or anything, yet there appears to be a black grid overlapping the expressive eyes. I don't know if it's just an artistic shortcut or what, but it really triggers my 'tism.
>>37538 The Naughty Heart design is very busy, and the slightly chibi body proportions don't help at all. I didn't notice the faux-pixels, but now I can't unsee it. What bugged me was that the "gems" on top looked like eyes to my brain.
>>37538 I see now, Anon. Thanks for the thoughtful reply. I can say that character designs typically evolve over time, regardless of the type of studio involved. Artists (and Art Directors, et al) get bored; the Current Thing(tm)(C)(R)(don't steal) catches the eyes of (((investors))), and they demand the studios shoot themselves in the foot to support it; fads change; etc., etc. Anyway, the topic at hand ITT seems to be both screen faces, and whether Ayylmao-style eyes are appealing. As to the latter, clearly there is an audience for them. Cheers, Anon. :^)
Edited last time by Chobitsu on 03/16/2025 (Sun) 16:12:57.
Open file (112.50 KB 500x900 1446343655315.jpg)
Open file (52.68 KB 736x801 Neco-Arc flapping.jpg)
>>37539 >>37550 I think the designs just get busier over time because they started out with a very good template, and eventually it keeps getting harder and harder to make new designs that look good and fit in with the art style while still remaining simple, so the designs become increasingly complex as they run out of ideas. The female Medabot designs also became more shapely over time, but I'm not going to complain about that. >the slightly chibi body proportions don't help at all. I mean, all the Medabots are about 3 feet tall, and are all supposed to share the same skeleton, so a chibi design is pretty much a given. >What bugged me was that the "gems" on top looked like eyes to my brain. Yeah, I think there's another design I saw that's supposed to look like it has Hikimayu eyebrows, and it just looks like it has two sets of eyes because of how it's colored. >Anyway, the topic at hand ITT seems to be both screen faces, and whether Ayylmao-style eyes are appealing. As to the latter, clearly there is an audience for them. I like the Ayylmao-style eyes for a few reasons; they're simple and utilitarian, and properly-shaped lenses can give them a wide field of view while keeping everything in focus. Screen faces, while expressive, add complexity and are power-hungry, at least to the extent that the screen is one of the most power-hungry parts of a smartphone. But I don't really know how much of a cost difference it makes. I've been thinking about how to make a cheap little android about 3 feet tall, like a Medabot or a Servbot from Mega Man Legends, that could help normalize domestic robots while still being cute enough to serve as a waifu with some slight modifications, or until a better body is available. But I'm really struggling with the face.
Open file (317.16 KB 1412x1725 Galatea v3.0 prototype.jpg)
>>37559 >cheap little android about 3 feet tall Not to toot my own horn too much, but that's basically what I'm doing. For the eyes, I use a commercial LED matrix (LED Name Badge). Flexibility and expression of a screen, but not too complicated or power intensive.
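For anyone who wants to play with the same LED-matrix-eyes idea in software before buying anything, here's a minimal sketch. It assumes a generic 4-in-1 MAX7219 8x32 module wired to a Raspberry Pi over SPI and driven with the luma.led_matrix library; a commercial LED name badge like mine ships with its own vendor firmware and protocol, so treat this purely as an illustration of the concept, not how the badge itself works.

# "LED matrix eyes" sketch -- assumes a MAX7219 8x32 module on a Raspberry Pi
# via SPI and the luma.led_matrix library (NOT the name badge's own firmware).
import time
from luma.core.interface.serial import spi, noop
from luma.core.render import canvas
from luma.led_matrix.device import max7219

serial = spi(port=0, device=0, gpio=noop())
device = max7219(serial, cascaded=4, block_orientation=-90)  # 32x8 pixels total

def draw_eyes(open_eyes=True):
    # Two simple round pupils when open, two flat lines when blinking.
    with canvas(device) as draw:
        for cx in (8, 24):                      # left/right eye centres
            if open_eyes:
                draw.ellipse((cx - 3, 1, cx + 3, 6), outline="white", fill="white")
            else:
                draw.line((cx - 3, 4, cx + 3, 4), fill="white")

while True:
    draw_eyes(open_eyes=True)
    time.sleep(2.5)        # hold the open pose
    draw_eyes(open_eyes=False)
    time.sleep(0.15)       # quick blink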
>>37562 And the cameras?
>>37563 I don't have cameras yet. I was considering maybe using a mini smartphone for the eyes, and having the camera on the side (the phone's front camera). I could also go for the Medabots method. Maybe another idea is a hidden webcam feeding a multimodal AI on my PC.
Open file (323.01 KB 1748x2561 3907fcbw66j71.jpg)
>>37566 Yeah, I'm putting a bit more focus on utility for now. I'm more concerned with not getting so utilitarian that I just end up with mobile robot arms.
Open file (422.51 KB 1536x2048 FlObG6IaAAAFvgh.jpg)
Are newer images just broken on this board? New, no bully. My own design choice would be a concave eye socket creating the illusion of eye contact while the iris still serves as the opening for a camera. I'm not keen on creating expressive facial motions when I have trouble reading and mirroring them in the first place. No moving parts. Pic related. Occasionally they use less concave or convex ones for poses looking away from the camera.
>>37597 There was some file data lost. We're working on resolving that over the next couple days, Anon. update: This has now been completed for all available files. <---> Nice work, Anon! Cheers.
Edited last time by Chobitsu on 04/25/2025 (Fri) 00:54:37.
>>37597 >My own design choice would be concave eye socket creating the illusion of eye contact while the iris is still able to serve as the opening of camera. Smart idea. It would allow for unique eye designs as well.
Open file (5.68 MB 200x356 FollowingEyes.gif)
>>37597 >Follow me eyes I like this idea! So much that I've used it in prototyping; >>18870 Always appreciate seeing this concept bubble up again. So much potential for heaps of cuteness for cheap.
Galatea v3.0.2-prototype. It's a radical departure from previous screen eyes. Benefits include more customization options.
>>37815 Starting to look a bit like night-vision optics, heh? :D BTW, is that a new head-print design for dear Galatea too, Anon?
Edited last time by Chobitsu on 04/26/2025 (Sat) 03:02:30.
>>37820 >Starting to look a bit like night-vision optics, heh? Interesting observation. Optical headgear is a good place for inspiration, especially when having to work with flat screens. >BTW, is that a new head-print design for dear Galatea too, Anon? Yes. It's going to be part of Galatea v3.0.2
>>37821 >Optical headgear is a good place for inspiration, especially when having to work with flat screens Seemingly-so! :D <insert: pic of cyoot animu robogril SMU Operator -- kitted out -- wearing GPNVG & doing perimeter patrol around Anon's flat at 1AM.jpg.mp4.exe>
Edited last time by Chobitsu on 04/27/2025 (Sun) 18:31:47.
Open file (202.12 KB 1243x2048 Solid Eyes.jpeg)
My "Solid Eyes" experiment shows you can have some of the aesthetic benefits of screen faces with a non-electronic solid design.
Open file (202.12 KB 1243x2048 Solid Eyes.jpeg)
>>38111 I'd love to hear opinions about the designs
Open file (202.12 KB 1243x2048 Solid Eyes.jpeg)
>>38113 Slightly different configuration of the screen eyes, more in line with the face
>>38111 I see what you're saying. Since we are all pretty used to seeing dear Galatea with screen eyes, these flow right along with that same motif. They certainly resemble common eyewear more than the phone does. Do you plan to install cams in them or anything? >>38113 >>38114 I think I prefer the first example just slightly ATM -- but that could easily simply be my current mood! I might flip on that tomorrow, depending. Also, I expect the IRL experience for you, with the screen right there, is different from what we get merely seeing the images posted. >tl;dr I personally think this is something that you yourself can judge best, GreerTech. Cheers. :^)
Edited last time by Chobitsu on 05/02/2025 (Fri) 12:29:14.
https://m.youtube.com/watch?v=yWrldOS6xBw >=== -rm uri fingerprinting
Edited last time by Chobitsu on 05/04/2025 (Sun) 17:21:49.
Would something like this work for making facial expressions? https://youtube.com/shorts/uh5CMuSAcQs?si=mAQebdKKg3UOv5PP
Open file (80.78 KB 755x763 Untitled.jpeg)
Open file (120.47 KB 1238x712 IMG_0437.jpeg)
Perfect timing. So idk about you guys, but to keep it simple I'll just have it range from happy to neutral. Cogley's skull is open source and has a wide range of motion; however, it has like 14 servos and must be quite a pain in the butt to assemble.
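To make the "happy to neutral" idea concrete, here's a rough sketch of one mood value in [0, 1] blended onto two mouth-corner servos. It assumes gpiozero's AngularServo on a Raspberry Pi; the GPIO pins and angle ranges are made-up placeholders and have nothing to do with Cogley's actual skull files.

# "happy <-> neutral" servo sketch -- gpiozero on a Raspberry Pi assumed;
# pins 17/18 and the angle ranges below are placeholders.
from time import sleep
from gpiozero import AngularServo

left_corner = AngularServo(17, min_angle=-45, max_angle=45)
right_corner = AngularServo(18, min_angle=-45, max_angle=45)

NEUTRAL = 0    # degrees: mouth corners level
HAPPY = 35     # degrees: corners pulled up

def set_mood(mood):
    # mood = 0.0 (neutral) .. 1.0 (happy); linear blend between the two poses.
    angle = NEUTRAL + mood * (HAPPY - NEUTRAL)
    left_corner.angle = angle
    right_corner.angle = -angle   # mirrored servo on the other side of the face

# ease from neutral to happy and back
for step in list(range(0, 11)) + list(range(10, -1, -1)):
    set_mood(step / 10)
    sleep(0.05)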
Open file (263.55 KB 1178x2048 scarecrow.jpeg)
I know big tech is longhoused/globohomoified, and wants to avoid any cuteness and human affection, but is "scarecrow animated through black magic" really the best way to go?
>>42578 And is that cloth removable and washable? The last thing I would use for a cleaning robot is exposed fabric.
>>42578 Heh, it is just about the most-unappealing take (from an organization that is meant to be taken seriously) for a robo-face that I've seen. We have literally thousands of examples of degenerate """art""" by kikes...they seemingly love it. Yet another example of this maybe? >>42579 This is an important practical concern. A fabric suit of sorts has been discussed here before (Skin, Materials, R&D threads), and what to do about cleaning, maintenance, and repair. That part of this android's design isn't the issue. The creepy af face design sure to give kiddies nightmares is. Cheers, Anon. :^)
Open file (61.40 KB 1383x1862 ClipboardImage.png)
>>42578 >>42586 I'd get a black one, put this face on it so the cameras go through the pupils, put an afro on it, and make it address me as "massa". When my waifu is completed, we will refer to it as "the help".
Open file (2.46 MB 969x1673 fixed neo.png)
>>42578 It only takes a permanent marker to make it better :D OR do like Boston Dynamics did with "Sparkles" https://www.youtube.com/watch?v=MG4PPkCyJig >>42586 > is just about the most-unappealing take (from an organization that is meant to be taken seriously) for a robo-face that I've seen. 10 bucks says the design is by diversity hires
>>42596 LOL. >10 buck says the design is by diversity hires Fair enough. A j*wess, maybe? I'll happily pay up if you can prove no kike was behind this. :D
>>42596 >10 buck says the design is by diversity hires Worse. I think it's just design by a committee of normies to maximize the appeal to normies and old boomers that dislike technology in general. That seems like the fastest way to make something as painfully bland and inoffensive as this robot.
>>42606 Well they fucked up and went in the opposite direction.
>>42607 What?
>>42607 I've heard the 'bot is worse. There is a waiver you have to sign, and one of the things it includes is the ability for employees to remotely control the robot without your knowledge. It's probably AI: Actually Indians. The sad thing is that this is nothing new. https://www.techspot.com/news/108173-builderai-collapses-after-revelation-ai-hundreds-engineers.html https://www.usatoday.com/story/money/shopping/2024/04/04/amazon-just-walk-out-indian-workers/73204975007/
>>42614 >jeet teleoperators Pretty inevitable. Only way for the Globohomo Big Tech/Gov to make the (((line go up))) during Current Year. Just say no, is my best advice! :^)
Edited last time by Chobitsu on 10/31/2025 (Fri) 20:50:09.
Open file (712.94 KB 1979x3067 opensourcejerk.jpg)
Happy Halloween!
>>42618 Happy Halloween, GreerTech.
>>42619 Lol I just realized I accidentally put this in the face thread rather than the meta thread
>>42621 Heh, you saw that too ehh? :D
Open file (316.14 KB 640x392 ClipboardImage.png)
Open file (1.50 MB 1600x1100 ClipboardImage.png)
Open file (540.82 KB 980x653 ClipboardImage.png)
I saw M3gan 2 on a flight the other day. Not as good as the first movie, and M3gan herself looked worse, somehow even more like a creepy doll than she did in the first one. But for a while her AI got put in a "Moxie" robot, which was surprisingly cute. Apparently it's a real thing, and the company that made them shut down last year, so they're expensive now, and I think an update can brick them. From what I've seen the actual toy has a face that's as expressive as in the movie, which I honestly wasn't expecting. I'd love to know exactly how the face is animated, but not enough to buy one.
>>42737 Interesting. From what I can tell by your pics that's a rear-projection system, similar to the tech discussed here: ( >>3971, >>3979, et al). >I'd love to know exactly how the face is animated Most-likely using standard animation tools, as is done commonly today in film & vidya. 2D in this case, and likely using Maya or Blender (though there are plenty of other DCC package options; cf. >>415, & our own @Mechnomancer's work in this area for dear SPUD: >>26306, >>34445). >tl;dr They simply modelled the key facial poses they wanted in 2D, then transitioned between them using the animation-timeline tools + scripting to act out the scene (following the literal screenplay script, and matching the VA's recordings). Commonplace stuff in film today. >but not enough to buy one. We'll all need to do something similar (in effect) to animate our own robowaifu's faces... so you better crack those books Anon! :D --- Thanks for the good post, Anon! Cheers. :^)
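If you want to see the shape of that "key poses + timeline + scripting" idea without opening Maya or Blender, here's a toy Python sketch. The pose names, parameters, and timeline are made up purely for illustration, not anything from an actual film pipeline.

# Toy "key poses on a timeline" sketch: each pose is a dict of abstract face
# parameters, and the script linearly interpolates between whichever two
# keyframes bracket the current time.
KEY_POSES = {
    "neutral": {"brow_raise": 0.0, "eye_open": 1.0, "mouth_smile": 0.0},
    "happy":   {"brow_raise": 0.3, "eye_open": 0.9, "mouth_smile": 1.0},
    "sleepy":  {"brow_raise": 0.0, "eye_open": 0.2, "mouth_smile": 0.1},
}

# (time_in_seconds, pose_name) -- the "screenplay" for this little scene
TIMELINE = [(0.0, "neutral"), (1.0, "happy"), (3.0, "sleepy"), (4.0, "neutral")]

def pose_at(t):
    # Linearly blend the two keyframes that bracket time t.
    if t <= TIMELINE[0][0]:
        return dict(KEY_POSES[TIMELINE[0][1]])
    for (t0, p0), (t1, p1) in zip(TIMELINE, TIMELINE[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            a, b = KEY_POSES[p0], KEY_POSES[p1]
            return {k: (1 - w) * a[k] + w * b[k] for k in a}
    return dict(KEY_POSES[TIMELINE[-1][1]])

for frame in range(0, 5 * 24, 24):           # sample once per second at 24 fps
    print(frame / 24, pose_at(frame / 24))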
Edited last time by Chobitsu on 11/07/2025 (Fri) 17:18:32.
>>42737 >>42739 This guy took apart this robot, and you and Chobitsu were right, it was a projector. https://youtu.be/aRK9Al7RGtc
>>42740 Great detective work, GreerTech! GG & Cheers. :^)
>>42739 >>42740 I figured it uses rear-projection, (which is apparently only 640x480) but I was actually wondering how the face is animated in real time. It looks more complex than anything I thought of doing, but doesn't look like it uses a 3D model.
>>42745 Well, as I explained already, it's being done as 2D animation. Kinda like oldschool Flash animations? (Think: MLP:FiM, as one example. There are thousands of others.) But today, this work is mostly done in yuge packages that can also do 3D (eg, Maya and Blender, as I mentioned). As far as "real time" goes, remember this is a big-budget film: every.single.frame. is carefully-crafted by teams. In this case the work of professional animators during pre-production/production phases. So the >tl;dr in the specific case of M3gan 2, this is not realtime. However, we deffo can do realtime, using animation-timeline tools + scripting, as I mentioned. Using MOCAP in particular, we can do realtime facial capture in remarkable detail now. Again, its use is pretty commonplace today in the film & vidya industries (during the pre-production/production phases, incl. motion-retargeting); also in themeparks (animatronics); VTuber uses (webcam capture, etc.); even in stage theatre (animatronics, etc.) AMA.
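For the webcam-capture side (the VTuber-style case, not a studio MOCAP volume), here's a minimal sketch assuming the mediapipe and opencv-python packages. The eyelid landmark indices are just the commonly-cited FaceMesh points, so treat the specific numbers as assumptions rather than gospel.

# Hobbyist stand-in for webcam facial capture: MediaPipe FaceMesh gives 468
# face landmarks per frame, from which you could drive your own expression
# parameters. Assumes opencv-python and mediapipe are installed.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1,
                                             refine_landmarks=True)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        # crude "eye openness" signal: vertical gap between the upper (159)
        # and lower (145) eyelid landmarks of the left eye (assumed indices)
        eye_open = abs(lm[159].y - lm[145].y)
        print(f"eye openness ~ {eye_open:.3f}")
    cv2.imshow("face capture", frame)
    if cv2.waitKey(1) & 0xFF == 27:           # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()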
Edited last time by Chobitsu on 11/07/2025 (Fri) 18:56:16.
>>42747 There's only a select number of reasonable expressions you can make; I think a library is enough. Mechnomancer did it with SPUD pringle, and I've made different eyes for Galatea.
>>42747 Well our typical 3D morphtargets for hero digital double work ran around 120 different models (many were 50/50 splits, mirrored L & R). * I don't know if you consider that a smol number, but I can tell you it takes weeks for a smol professional team to sculpt & tweak them all during pre-production for a single hi-fi 3D mesh! :^) --- * And this is just for facial -- not including anything from the lower neck down at all!
Edited last time by Chobitsu on 11/07/2025 (Fri) 20:19:17.
Open file (161.91 KB 256x256 ClipboardImage.png)
Open file (89.94 KB 1200x723 ClipboardImage.png)
Open file (662.01 KB 850x1360 ClipboardImage.png)
>>42748 I assume those are actual faces, right? For a Murder Drones/LAV/Astrobot/Kerfus/Galatea style face, you probably don't need 120 morph targets or a professional team.
>>42749 >I assume those are actual faces, right? Yes. Big actors would go into a scan volume and have high-fidelity (hi-fi) mesh captures made of their faces. We'd take those and meticulously redo them as animatable (ie, well-formed) 3D quad meshes; then use those to do the actual "acting" (ie, 3D animations, typically using the sculpted morphtargets as mentioned). In many cases this "acting" is driven by real acting done in MOCAP volumes, then applied (or retargeted) onto the hero facial mesh. A control rig of some sort also drives the underlying mesh so that directorial/artistic control is available during production. Usually a lo('r)-fi, "blocking" mesh is used during production animation, then rendered with hi-fi for sampling and during post (-production/final). The same processes mostly apply for the rest of the body, but usually without nearly the attention to detail that facial demands. >For a Murder Drones/LAV/Astrobot/Kerfus/Galatea style face, you probably don't need 120 or a professional team. No ofc not. For screen faces, such lo-fi work is fine. But for realworld, IRL robowaifus? Heh, let's just say "...it's complicated." :^)
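The morphtarget arithmetic itself fits in a few lines, for anyone curious. This is a sketch with random stand-in data (placeholder vertex count, made-up target names), not anything from an actual studio rig: each target stores an offset from the neutral mesh, and the animated face is the neutral mesh plus a weighted sum of those offsets.

# Morph target / blendshape blending in numpy -- stand-in data only.
import numpy as np

N_VERTS = 5000                                   # placeholder vertex count
rng = np.random.default_rng(0)

neutral = rng.normal(size=(N_VERTS, 3))          # stand-in for the neutral scan
targets = {                                      # sculpted poses (same topology)
    "smile_L":   neutral + rng.normal(scale=0.01, size=(N_VERTS, 3)),
    "smile_R":   neutral + rng.normal(scale=0.01, size=(N_VERTS, 3)),
    "brow_up_L": neutral + rng.normal(scale=0.01, size=(N_VERTS, 3)),
}

def blend(weights):
    # weights: dict of target name -> 0..1; returns the deformed mesh.
    out = neutral.copy()
    for name, w in weights.items():
        out += w * (targets[name] - neutral)     # add each weighted offset
    return out

frame = blend({"smile_L": 0.8, "smile_R": 0.8, "brow_up_L": 0.3})
print(frame.shape)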
Edited last time by Chobitsu on 11/07/2025 (Fri) 20:20:21.
>>42746 >As far as "real time" goes, remember this is a big-budget film: every.single.frame. is carefully-crafted by teams. In this case the work of professional animators during pre-production/production phases. So the >tl;dr in the specific case of M3gan 2, this is not realtime. I assumed as much for the movie, but I was talking about the actual toy, which surprisingly, seems to have about as much facial expression range. >>42747 >There's only a select amount of reasonable expressions you can make, I think a library is enough. When I think of animating facial expressions my mind goes to the Source engine's Faceposer, where the face mesh flexes to create as many facial expressions as any real face could make, but something like that would probably need to be simplified to run on a toy. The only other thing I think of is early 3D game graphics where there's textures for an eye and half of the mouth, and the most appropriate expression is picked by making the right combination of textures. But the Moxie face seemed better than that.
>>42751 >but I was talking about the actual toy, which surprisingly, seems to have about as much facial expression range. Ahh, understood. Yes (just like with a movie) you'd create a "library" of expressions (as I and @GreerTech indicated), then using some form of scripting (in the case of the toy), morph between them. We'd have to have the original programming from inside it to know precisely how it was done in that specific case, but this model is the general approach. As you suggested, the Moxie expressions are likely "baked out" beforehand into individual images and morphed between them with scripting (as with @Mechnomancer's & @GreerTech's approaches). Typical 2D stuff; and very low-cost, computationally. Great questions, Anon. Sounds like you're already well on your way... why not dive into the animatronics/programming aspects of IRL robowaifu faces (similar to @Mechnomancer's newer approaches)? :^)
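A sketch of how that "baked-out images + scripted morphing" approach could look, assuming Pillow and a folder of same-sized expression PNGs; the filenames are hypothetical, and this is not Moxie's, SPUD's, or Galatea's actual code.

# Baked expression library + crossfade sketch -- Pillow assumed; the PNG
# filenames are placeholders and must all share the same size/mode.
import time
from PIL import Image

EXPRESSIONS = {name: Image.open(f"{name}.png").convert("RGB")
               for name in ("neutral", "happy", "surprised")}

def crossfade(a, b, steps=12, hold=0.03):
    # Yield intermediate frames morphing expression a into expression b.
    for i in range(steps + 1):
        yield Image.blend(EXPRESSIONS[a], EXPRESSIONS[b], i / steps)
        time.sleep(hold)

# e.g. push each frame to whatever display serves as the face
# (here we just keep the final frame and save it)
for frame in crossfade("neutral", "happy"):
    last = frame
last.save("current_face.png")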
Edited last time by Chobitsu on 11/07/2025 (Fri) 20:27:57.
