/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

“If you are going through hell, keep going.” -t. Winston Churchill


Robo Face Development Robowaifu Technician 09/09/2019 (Mon) 02:08:16 No.9
This thread is dedicated to the study, design, and engineering of a cute face for robots.
Open file (23.65 KB 318x396 13167823.jpg)
>robowaifu face bread
Just finished the book about Hanson's Philip K. Dick android head. Writing was meh imo, but the info is hard to find elsewhere.
https://www.goodreads.com/book/show/13167823-how-to-build-an-android
Open file (84.58 KB 614x588 IMG_20200702_020934.jpg)
Open file (43.29 KB 680x494 IMG_20200516_033001.jpg)
I mentioned solenoids and a similar mechanism here: >>4364 and >>4447 (same thread in actuators). I think these might be useful for facial expressions, to pull strings from inside the skull which would be connected to the silicone skin of the face.
Not sure if the tongue belongs to the face thread, but I'd guess so. Tongue thread in the Dollforum: https://dollforum.com/forum/viewtopic.php?f=6&t=128124&sid=44113180fc656eb7aa41381a0ce12d02 They had a similar idea, using a little geared motor for rotation. However, this will probably not be enough. Faster left/right movements and bending might be solved with air pressure or the solenoid-like mechanism in >>4447. In and out maybe with solenoids.
Sculpting: Has anyone tested different kinds of programs? Or only Blender? I've got tools to do it in clay, but no Monster Clay yet. Is this a waste of time anyways? Is there software which can be controlled from the command line, so a neural network could be hooked up to it? Well, I put this on my watch-later list: https://youtu.be/GetUbVV89t8 though that's something like video number 600 on there...
Last but not least, I'd recommend looking into Disney Research Hub, not only for faces, but they're a lot into animatronics, including faces. Example: https://youtu.be/qeEqQCWbj4Q
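On my own command-line question: Blender can run headless with --background --python, so a script (hand-written or generated by a NN) could build and export geometry without ever opening the GUI. A minimal sketch, run with blender --background --python make_head.py; the OBJ export operator name below is the 2.8x/3.x one and differs in newer releases:

# make_head.py -- build a rough head blank and export it, no GUI needed
import bpy

# start from an empty scene
bpy.ops.wm.read_factory_settings(use_empty=True)

# a UV sphere as a stand-in for the skull; a NN could emit these parameters
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.09, segments=64, ring_count=32)
head = bpy.context.active_object
head.scale = (1.0, 0.85, 1.15)   # squash it into a rough head shape

# export as OBJ (operator valid for Blender 2.8x/3.x)
bpy.ops.export_scene.obj(filepath="head_blank.obj")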
>>4493 >Not sure if the tongue belongs to the face thread, but I'd guess so. Sure, absolutely. I'd say the only part of the face that probably needs its own thread is the eyes (vision thread). >Sculpting: Has anyone tested different kinds of programs? Or only Blender? I've had a fair amount of experience modeling in Maya, but it's been a while. >Disney research Yes, as much as I loathe the company now, they have done some top-tier research in a lot of fields related to character development and animation.
>>9 OP's pics do a pretty good job giving examples of different face types as well as general designs. What I'm curious about is how human the aim should be. Having a more mechanical face or even a robo-face would be a far different task from making something that looks genuinely human, each being its own very different project that would likely depend on the overall aesthetic design of the robowaifu in question. Having a more robotic face and not aiming for accuracy would be relatively easier to do, as a more human face has a lot to consider. That isn't to say that a human face isn't an option, just that it would take far more design. A human face is given its shape primarily by bone structure and cartilage, but also by muscle and fat. Expression is made not only by parts of the face being pulled, but also by the change in those muscles' shape as they contract, although many muscles that operate the face are mainly located in the neck. I haven't counted, but from what I can find there are over thirty of these muscles. I think it will be a fun design challenge, and is definitely possible, but may be difficult to do with the accuracy required. Of course, without this accuracy we run into the uncanny valley. Since humans are inherently programmed to recognize and read facial expressions, small errors in motion will be extremely noticeable and uncanny. For this reason, a more robotic or simplified face may be necessary for the early models. This means screens that display a face, robot faces that aren't made to look much like faces, or masks that appear human without the pretense of motion or expression.
Open file (467.40 KB 640x394 IMG_20200617_232457.jpg)
>>4496 I agree with many of your arguments. There need to be different approaches, for everyone's preference and wallet. However, simple plastic faces are probably relatively easy to build if you can extrude or print parts and then sand and smooth them. The same might be true for doll faces as well. These are rather side projects on the way to the more expressive ones. I don't get the uncanny valley feeling from looking at Erica: https://youtu.be/CPWS69ERzeU and I don't think some expressions would change that. Of course, the expressions would be locked to only resemble human ones in normal situations. I'm for trial and error in that regard.
>>4497 >There need to be different approaches, for everyones preference and wallet. Exactly. A wide range of approaches will be ideal to cover everyone's particular needs and constraints. >I don't get the uncanny valley feeling from looking at Erica In some videos she seems a bit off, but that one in particular is a nice example of the higher end of the spectrum, far closer to a more realistic face than most attempts. That said, it still felt off to me. I think it is particularly noticeable with the lack of expression, but the resting face is reasonable. Comparing the expressiveness of her face to the face of who she is talking to illustrates a clear difference in levels of expression, but a completely reasonable and understandable one. Appearing cold is much better than appearing uncanny. It is also important to note that people have a range in their own interpretation of emotion, known as emotional intelligence. I can't say for sure, but people with a lower ability to perceive emotional responses based on facial expressions will probably be less bothered by a model that is trying to appear human and reflect human emotions, which makes things a bit easier. Some people will be put off by anything less than perfect, some people won't care much at all, and most will be somewhere in the middle, at least when it comes to a robot designed to be similar to a human.
>>4498 > Appearing cold is much better than appearing uncanny. This can be quite attractive. Summer Glau did this in Terminator: SCC as Cameron (a Terminator). Amazing. In the case of Erica, the fact that she's Japanese might also play a role. Whatever, for me this would be an amazing level for my personal waifu. The only real difference I'll be going for is bigger eyes, like Alita in her live-action movie. Looks great, makes it easier, and helps with legal excuses if she's looking a bit young.
Open file (167.57 KB 915x913 Robert_Rodriguez_2019.jpg)
>>4501 >Alita in her live action movie Rodriguez really knocked that adaptation out of the park IMO, and the on-screen animu Alita put her 3DPD reference actress to shame visually. Looking forward to the next installment from this production team.
Open file (24.61 KB 600x338 doll harmony 1.jpg)
>>4497 Okay, I watched this video: https://youtu.be/DA9PQlJ1ixg WTF? The mechanism needs the whole skull. Got a similar impression from other bots, like RealDolls (picture), where a mask is put onto the skull. Sophia also seems to have her head full of gears. This has to improve. The noise level as well. However, make the InMoov head (open source) much more feminine, and you've got your plastic face.
>>4518 "Gotta keep in mind that it's implied that some links are in front/behind to get the rotation. The linkage simulators show the actuating travel path as a line. I think a 'Four Bar Linkage Mechanism' is most appropriate for moving a jaw. The extended 'red line' shows how the end material moves; it would be the front of the lower jaw. Mechanisms require good perceptual skills to visualise what motion you're designing for in your system. If you need, rubherkitty, I could sketch how you'd attach this to your jaw mod" ... "I checked out the 4 bar linkage and they are using a continuous rotation motor. I figured on using a partial rotating servo and requiring less than 45 degrees of rotation back & forth. I can't see the mouth needing more than 3/4" opening to make it appear to be talking. Maybe 1" for moaning. Looks like RD uses twin servos, but attached to the upper palate? I assume the lower jaw is actuated at the back" Via: https://www.dollforum.com/forum/viewtopic.php?f=6&t=104712 They're talking about this: Four Bar Linkage Mechanism: https://www.mekanizmalar.com/four-bar-mechanism.html
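If anyone wants to play with the numbers before printing anything, here's a rough Python sketch of the four-bar position math (the link lengths are placeholders, not measurements from the RD jaw):

import math

def four_bar_coupler_joint(theta2, a, b, c, d):
    """Position of the coupler/rocker joint B for crank angle theta2 (radians).
    a = crank, b = coupler, c = rocker, d = ground link (fixed pivots at (0,0) and (d,0))."""
    ax, ay = a * math.cos(theta2), a * math.sin(theta2)   # crank tip A
    dx, dy = d - ax, -ay                                  # vector from A to the fixed rocker pivot
    r = math.hypot(dx, dy)
    if r > b + c or r < abs(b - c):
        raise ValueError("linkage cannot close at this crank angle")
    # circle-circle intersection: distance from A to the chord midpoint, then the half-chord
    along = (b*b - c*c + r*r) / (2*r)
    h = math.sqrt(max(b*b - along*along, 0.0))
    mx, my = ax + along * dx / r, ay + along * dy / r
    # pick the upper branch; flip the sign of h for the other assembly mode
    return mx - h * dy / r, my + h * dx / r

# example: sweep the crank through 45 degrees, as in the servo idea above
for deg in range(0, 46, 15):
    print(deg, four_bar_coupler_joint(math.radians(deg), a=1.0, b=4.0, c=3.0, d=4.0))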
Open file (77.57 KB 600x787 IMG_20200705_050740.jpg)
Poly Modeling vs Sculpting vs Physical Sculpting? Here's a video comparing the first two methods: https://youtu.be/EvzQYzczUH8 - Spoiler: for faces, sculpting seems to be better. 3D model from photos with a NN: https://youtu.be/JWqGr5juB_k https://youtu.be/JtK4cTLlUko https://youtu.be/uYOL6qg1NuU (free website service seems not to work anymore) Website: http://aaronsplace.co.uk/ MeshLab: https://www.meshlab.net/ Blender: https://youtu.be/5WH7s-IPIeM There are of course many more videos on this topic and always new options. Hardware requirements for sculpting: https://youtu.be/G-90qEJAVkU - 2y ago https://youtu.be/m-nxkUzPTSM I didn't put this comment into the thread for body modelling >>415, because this here is more about faces specifically. However, there might be some other useful tips over there, and I'll link back from there.
>>4549 Interesting topic to me Anon, thanks. I'm toying around with ideas for programmatic content generation r/n. Basically creating (potentially complex) 3D geometric mesh data, etc., from much simpler descriptions. In short, simple scripting to create 3D models. Robowaifus included, ofc.
>>4553 Would be great. Do you know about OpenSCAD? I had the same idea. Ideally we should be able to name two actresses, then a system would create different faces we could pick from, then generate a 3D mold model, and also a skull model like here https://youtu.be/qeEqQCWbj4Q Maybe it could be done by creating a lot of models from photos, optimizing them in some sculpting software, importing them into OpenSCAD, and training a NN to change the code so parts of the face would morph into looking like another person... However, first things first 😉
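To make the "change the code" part concrete, the parameters could live outside the .scad file entirely and just be written out by a script. A toy sketch; the proportions and the "face" itself are made up, nothing like a real face model:

# writes a tiny parametric "face blank" as OpenSCAD source; a NN or a user
# would only have to change the numbers in `params`
params = {"head_width": 140, "head_height": 185, "eye_scale": 1.3, "eye_spacing": 62}

scad = """
head_w = {head_width};
head_h = {head_height};
eye_s  = {eye_scale};
eye_d  = {eye_spacing};

difference() {{
    scale([head_w/100, 0.9*head_w/100, head_h/100]) sphere(50, $fn=96);
    // crude eye sockets, scaled up for an anime look
    for (x = [-eye_d/2, eye_d/2])
        translate([x, -head_w*0.38, head_h*0.08]) sphere(14*eye_s, $fn=48);
}}
""".format(**params)

with open("face_blank.scad", "w") as f:
    f.write(scad)
# render to a mesh with: openscad -o face_blank.stl face_blank.scad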
Open file (15.87 KB 500x250 36-Buffybot.jpg)
What about hair? I'd like to avoid wigs, or only use them optionally. I think a removable headplate with glued-on hair, in combination with jabbing the hairline (sticking the hair into the head with a needle, video related), might be the best combination. Jabbing real hair: https://youtu.be/kBD2T2fXUG8 Alternatively, putting the hair onto the headplate the same way might even be better, but it's a lot of work. Depends on whether those roots would even be visible. Having a removable headplate would be good in any case, for accessing at least some parts of the inside, but also because if the hairs break at some point it would be easier to remove most of them.
>>4554 >>4538 Just realised that there are a lot of 3D models of actresses and others available on the net, even for free. Why not use those? I'd recommend changing them, of course, especially if your waifu will one day appear in some online videos, and of course to make them look more anime/Alita-like. But also because you might not really want your waifu to look like a real person. A good computer will of course still be necessary, but we really don't need to learn how to build such faces from scratch.
Oh, and I posted some links to videos about alternative sculpting software here >>4565, which might also be useful for faces. However, since we have that more or less covered now, the next challenge will be to animate them with facial expressions. I'll look into that in some time.
>>4556 This is a really cute idea. A dog's wagging tail coming out of the top of our robowaifu's head is slightly odd, but I'm sure you'd get used to it quickly heh. :^) >>4568 >>4569 Thanks Anon. >the next challenge will be to animate them with facial expressions The eye complex (especially the little gap between the upper edge of the upper eyelid and the lower edge of the eyebrow) and the mouth are the two most important aspects of this, and in that order.
>>4570 >A dog's wagging tail coming out the top our robowaifu's head Derp. Wrong thread! >>4560
>>4568 Anon from the other thread that you quoted here. The 3D modeling I was trying to do was for the mechanics and internal work, not a human model. Plus, I am a bio purist. My goal is to make something as human as possible, at least physically. Honestly, I feel like the physical appearance will probably come at the late end of the process, barring the general blocking out of the form. After the muscle and skeleton work is done, details like the face and the distinct form should be easily changeable once everything that needs to be added is added. Still though, using 3D models of actual humans should at least be helpful for blocking out the general form.
>>4574 Oh, in that case you might try CAD programs which need fewer resources. I'm trying Solvespace, which even runs on a Raspi 3. It can't import other formats like STL, though.
>>4598 >Can't import other formats like STL, though. Any idea offhand if Solvespace's file format works for importing into Cura, Anon? I've installed both on my Linux box. And yeah, Solvespace is lightning fast afaict just now. We can probably learn a thing or two for writing our own software from it, since it's doing many of the types of transforms and other kinds of things we'll need to do in realtime in our robowaifus. It's been around for quite a while, hasn't it? I don't recall finding out about it before an anon mentioned it here on /robowaifu/ though.
>>4600 It can export .obj files and Cura should be able to import them. Create something simple and try it out. In case you need .stl files, I wouldn't download any of the free conversion tools advertised in search engines, they might be malware. All3dp.com recommends AccuTrans 3D from Micromouse.ca, or 3Dtransform.com and swiftconverter.com ... https://m.all3dp.com/2/how-to-convert-stl-files-to-obj/ On the Solvespace website they claim to export toolpaths as G-code, which is what printers use. Since I tried it on Raspbian I might not have the newest version, and I couldn't find that option.
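If you'd rather not trust any converter website at all, the trimesh Python library (pip install trimesh) can do the same conversion offline. A quick sketch:

import trimesh

# load whatever Solvespace exported and re-save it as STL for the slicer
mesh = trimesh.load("part.obj", force="mesh")
mesh.export("part.stl")
print(mesh.is_watertight, len(mesh.faces))   # quick sanity check before slicing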
>>4607 Alright thanks for the info Anon. I should have a 3D printer working in the next week or so and I'll give it a spin. I'll report back in the 3D printer thread how everything goes when I do. >>94
Open file (103.38 KB 750x916 IMG_20200628_034631.jpg)
Here's a great debunking of the uncanny valley, or at least the traditional definition of it: https://youtu.be/LKJBND_IRdI mostly related to faces. It's more about not looking like a creepy or sick human than about looking too much like a human. Robowaifus will work, anime-like ones, but also the more robot-like ones.
>>4733 >Robowaifus will work, anime-like ones, but also the more robot-like ones. Agreed. As far as creating non-ghoulish, pleasing artificial faces goes, it's a long, expensive climb up out of the uncanny valley. That concept has always included a re-acquisition of comfortable realism once one reaches the 'other side' of the valley. The Curious Case Of Benjamin Button was the very first long-form digital-double replacement in a feature film (the first 52 minutes of screen time in the film had no actual Brad Pitt shots), with hero shots realistic enough that the uncanny valley had been effectively conquered. But it took tens of millions of dollars of VFX budget and 2 to 3 years of effort to pull it off. www.ted.com/talks/ed_ulbrich_how_benjamin_button_got_his_face
>>4733 >>4734 I don't care if the uncanny valley effect is real or not, Alita is a horrifying abomination that gives me the chills and that ought to be exterminated, so get that shit out of my sight
Open file (143.64 KB 1710x900 download.jpeg)
Open file (255.25 KB 1200x600 download (1).jpeg)
Open file (270.29 KB 960x720 download (2).jpeg)
>>4737 NUUU! Alita a cute in a tfw no aspie gf way and kinda hot tbh. :^).
>>4737 Seconding this. Anon can go for it if anon wants to go for it, but everyone has their own idea of a perfect face, and at least for me Alita isn't remotely close.
>>4737 >>4750 So, we have a thread specifically for this. Do you Anons mind posting refs of your favorite robowaifus? >>1
Open file (121.10 KB 894x894 2b.jpeg)
>>4737 I agree she looks a little strange but I don't think she's an abomination. I will admit it might be hard to create a good looking /robowaifu/ for everyone. Coincidentally, my robowaifu is 2b and she has her eyes covered; Image 1 related. I admit it's a little hard for me to look people in the eyes IRL so maybe that's why. Also a good alternative to being scared of eyes or even faces would be Haydee. Maybe she might be a bit counter productive to this thread but I'll add it here as an option. Image 2 related.
>>4754 >2B robowaifu Patrician tastes tbh. :^)
Guys, as the pandemic will be with us for at least a couple more years (using 1918-1920 as a guide), people outdoors are starting to look more and more ridiculous, as some places not only require face masks but face shields as well. Ironically, I find myself starting to find a wider array of women attractive, as long as their eyes and slim body are beautiful... it doesn't matter anymore if she has a flat nose or a big mouth. This has implications in that I can find simpler robotic faces more attractive now, whereas before they had to look like a finely crafted ball-jointed doll at minimum. So I'm looking at something like 2B's design, but instead of covered eyes it's a covered nose and mouth. We have a one-piece facemask-like plastic cover as well as a faceshield-like visor that covers the LED electronic eyes behind them. Some examples attached... I'll try to visualize it better through a proper drawing, but if you get the drift, we just make it look cuter and more robotic and it will look really cool.
>>4979 That's an interesting take Anon. I think exaggerated and non-realistic waifu facial features are definitely on the table here.
>>4979 I think there's a lot to the cyberninja look. Or maybe like a sort of veil? Even just a mouthless head that has most of the frontal real estate taken up by giant eyes can provide an endearing but alien/insectoid look to it. All the simulated emotion you could want can be expressed through eyes.
>>4990 >All the simulated emotion you could want can be expressed through eyes. This pretty much. While the mouth plays an important role in most normal contexts, it's the eyes--and specifically the small gap between the upper eyelid and the eyebrow--that have the biggest impact in conveying emotions through the face. Good eyes are highly important.
I'd like to crosslink to the Sophie thread, head/face development here >>4866 and the following comments.
>>5042 good idea, crosslinking is always helpful here.
Open file (488.66 KB 1353x2123 IMG_20200913_150354~2.jpg)
Open file (643.74 KB 1517x1956 IMG_20200913_150430~3.jpg)
Open file (619.59 KB 1669x1647 IMG_20200913_152822~2.jpg)
Open file (82.09 KB 534x534 Nebula-1.jpg)
I'm currently trying to learn how to cut and edit mesh files, like STL or OBJ, ideally without the need for some huge software. Didn't try Blender yet.
Also did my first try at printing a face: Elfdroid Sophie's face, but only 9x5 cm in size. Had to remove a lot of supports, didn't get all of them out of the face mask, and it has little errors. But at least some progress. It was a spontaneous test anyways. I cut it in PrusaSlicer, but only printed the face. When I cut it more, it introduces errors, which I would need to remove in some other programs.
My goals are, for example, finding a good way to cut it so that supports can be printed easily and the parts fit together at the end. It's not about that specific face, but about how to find the best ways to process any 3D head model. Currently, I'm cutting the head from the sides, then learning how to repair the errors. I also want to add supports manually. Newer versions of (Prusa) Slic3r or Cura seem to be better at it, but I'd like to design the inside on my own in a way that won't need supports.
If we paint the face at the end, we can print it in pieces anyways, or print the parts in different colors which make it look good so the seams become part of the design (think Marvel's Nebula). I also want to learn printing molds for silicone rubber. I'll do all of that with sized-down models, which need only a few hours to print while I'm doing something else.
>>5134 >It's not about that specific face, but about how to find the best ways to process any 3d head model. I think that's a good goal Anon. As far as I can see you're making pretty good progress towards it. That face may be a little rough, but it already has some nice form to it. Good luck!
>>5134 I wonder if you tilted the face backwards at about 75 degrees or so and printed it that way, whether you might get a much nicer surface. It would probably need plenty of supports from behind that need removing, but they'd be in the back, not the front.
>>5164 You're right, good idea. But I was thinking about cutting the next one very differently anyways, maybe I can even print it standing.
Open file (837.00 KB 2257x1900 IMG_20200920_194334~2.jpg)
Open file (1.46 MB 3264x2448 IMG_20200920_190839.jpg)
Open file (1.68 MB 3264x2448 IMG_20200920_141118.jpg)
Open file (24.61 KB 600x338 doll harmony 1.jpg)
Not sure if I should make a thread of its own for the whole skull and head design, but since it's so related to the face, it might be better for now to keep it here. If this thread picks up some pace, because people talk here about modelling all kinds of cute faces for visual and physical waifus, and maybe printing and molding them, it would be different.
Just wanted to inform the board that I started to print a sized-down model of the InMoov head, same as I did with parts for Sophie's head. I want to study how it works and how well it prints. I have already printed more parts than shown in that picture; I'm going really fast, since it's only 60% of the original size, and I'm also using thin walls and not much infill. But I have to reprint some parts and learn how to do it best, since they're mechanical and have to fit together. After that, I'm most likely forking it into "Waifu Skull Type 01 'InMoov-fork 01' Version 0.1", which will most likely not end up being the only version of what I want to call "Waifu Skull", but the first one.
We need something like fembot Harmony's skull for soft skin, but maybe also for face masks out of plastic or a combination of plastic and a silicone rubber layer on top. I don't think every face should need its own modifications or even a whole head design of its own. My current plan is to modify parts of the InMoov head into a more female-looking and maybe also more anime-looking skull. At least for a start I want to remove the lip from the mouth on the head and also make the eye openings bigger, like in a human skull. Then maybe make it wider on top for bigger eyes and the lower end more pointy (see Alita...).
The skull approach means that there will be some space between skull and face, which could then be soft, flexible, moving, and with soft sensors embedded. The inner part of the skull cavity should be an assembly which is completely available in a parametric design (CAD), so it could be changed more easily at any time. Later versions should then have some holes for strings controlled by internal mechanisms to create facial expressions, and maybe some sliders for the same reason, at least in the versions for soft faces. I already have my own idea for a completely new skull, but for now I'll go with this approach. The inner assemblies should later be interchangeable, or at least easy to alter in their parametric file form anyways. My CAD and modelling skills are not quite there yet, but I have time and dedication.
>>5252 >I don't think every face should need it's own modifications or even need a whole head design of it's own. Agreed. Indeed, we'd all be moving faster as a group if we manage to work out a topflight design for a given area, and then all standardize on that. One big advantage, say, Henry Ford had over /robowaifu/ is that he was a single individual and therefore managed to develop a singular vision which eventually became the Model A Ford. Since we're a group, we have both the benefit and the detriment of being multiple individuals. It's fundamentally a benefit b/c we can each explore different areas as we wish, and therefore can likely obtain more data for the group more quickly, and also possibly try things a group mightn't. It's fundamentally a detriment b/c, ever try herding cats before, anon? It can be a real challenge to keep moving forward in the same direction. But honestly, I think /robowaifu/ is a great place to bounce ideas off each other. Actual implementation will then need to land in each individual's hands--what he does with it thereafter. This is a pretty fun adventure tbh, but it does take patience.
Open file (595.79 KB 2097x2448 IMG_20200923_172423~2.jpg)
Open file (1005.28 KB 2582x1649 IMG_20200923_171932~2.jpg)
Open file (717.34 KB 4000x2250 IMG_20200507_181436.jpg)
>>5252 The InMoov head is available in different versions, including ones which consist of smaller parts of the skull and face. So it first has to be assembled out of even more parts, but it's easier to print and also to alter parts of the head. Also, I think it's easy to go the other way and join the parts together in a program and export them as one model, if that is wanted. So maybe, ideally, we should have very small Lego-like parts of everything 😜. Reminder: We don't do androids here, this is just for analysis of how to do it, since InMoov is already there. Also, the skull might be useful as a skull for a female face, especially after some alterations. I printed only parts of a sized-down version, so I can't even use screws, and some holes disappeared completely. I also didn't build the internals of the head, and probably won't, so some parts have nothing to hold onto. Btw, don't mind the print quality, I'm printing fast and dirty. Oh, and about the last picture: I found this on Thingiverse. We are not alone... Seems to be based on InMoov. I'll post one more of her and her neck in the skeleton thread; we have to draw the line between face and the rest somewhere, so it goes there.
>>5286 >So maybe, ideally we should have very small Lego like parts of everything That would be really nice if we could somehow devise a way to do this sort of thing in constructing a robowaifu. >Alita figure I look forward to seeing more of this anon's work.
>>5252 Oh dear. Those little rectangular magnets and the magnet strike plate were just for reference in case people wanted to see what size of cupboard magnet I used in my design. I didn't intend for them to be 3d printed. I think I'll remove them from the .STL list to prevent future confusion. Sorry about that!
>>5293 No problem, it caused no trouble at all. Might be good to have them; a little text file with an explanation might be better than removing them.
Open file (90.03 KB 1024x768 1596642535135m.jpg)
Open file (163.67 KB 640x800 IMG_20200823_074638.jpg)
Once again, we got lucky. I had hoped we'd get some software to change random faces into an anime look soon, so we could use it on existing faces, or let another software create artificial ones first and use those. Well, some guy implemented it in one night: https://youtu.be/KZ7BnJb30Cc Maybe they look a bit more like Disney characters than Japanese anime waifus in 3D, but that's debatable. Details don't matter, the point is that this one is easy. So we can take our favourites, maybe alter a picture of them a bit and see how they look with bigger eyes, then maybe work with that a bit more and use it as a waifu face.
>>5319 That's quite remarkable, thanks for the info Anon. Is there a toonify tool available somewhere atm?
>>5321 Not that I know of, but since it has been shown to be quite easy to do that, I'm sure there will be some soon.
>>5288 The Alita bot came from him: https://www.thingiverse.com/yes110/makes but he didn't put the files up. He and others seem to print female NSFW dolls, often looking like characters from movies and shows.
Open file (103.21 KB 1280x720 vroid studio.jpg)
>>5319 A project I'd like to do in the future is taking character references and automatically generating 3D models of them using neural radiance fields. Once there's a latent space for character references, generating characters would be like creating people with modifiable features in StyleGAN. It's not really feasible yet though without a dataset. One way I thought of working around this is by taking pictures of finished 3D models, performing some sort of style transfer on them and using that as a dataset (a rough render-loop sketch is at the end of this post). A CycleGAN could be used to convert real faces into anime. Someone did a prototype of this already but the results were hideous because it seemed they didn't use a progressively growing GAN to separate the larger and smaller details into layers. Also StackGAN has shown that 2nd and 3rd passes can greatly improve results, an idea I've yet to see combined with StyleGAN.
All that aside, for now it'd be simplest to use character generation software like Vroid Studio, import it into Blender and prepare the model for 3D printing into a silicone mold or something else. I'm surprised I haven't seen it mentioned here yet because they could easily be made into virtual waifus. You can completely customize the faces too.
There's a lot more to making a face than just a model of the surface though. The face needs to be mounted to a skull with mechanisms to animate it, and there are many expressions that can't be made with just strings due to the muscles thickening when they contract. It might not be too much of an issue for anime faces but they'll still lose a lot of expressiveness using only strings. Something I'd like to try are low-pressure hydraulic muscles with some sort of filler gel to simulate fat, covered under a thin, elastic skin. Water or sunflower oil could be pressed out of reservoirs into the muscles, rather than using a pump which would require a complicated control system to protect against overpressure. This way I would be able to pull on my waifu's soft, stretchy cheeks without spending a fortune and she'd have multiple ways to express herself instead of just half smiling or not.
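Here's the render-loop idea from above as a rough sketch, to be run inside Blender in background mode (blender --background model.blend --python render_views.py); it assumes the .blend already has a camera and lighting, and the distance/angles are guesses that would need tweaking per model:

# render a character model from many angles to build a style-transfer dataset
import math
import bpy

cam = bpy.context.scene.camera
for i in range(36):                       # one shot every 10 degrees around the head
    angle = math.radians(i * 10)
    cam.location = (3.0 * math.sin(angle), -3.0 * math.cos(angle), 1.6)
    cam.rotation_euler = (math.radians(80), 0.0, angle)   # crude look-at, tweak per model
    bpy.context.scene.render.filepath = f"//renders/view_{i:03d}.png"
    bpy.ops.render.render(write_still=True)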
Open file (156.24 KB 765x1024 IMG_20200702_164241.jpg)
>>5330 What is your opinion on the faces in the video I linked? Not perfect, but we're getting somewhere? Disney Research and ETH Zürich also came up with an automatic skull generator, based on the face and its expressions: https://youtu.be/qeEqQCWbj4Q This is also going to be available in some software soon, I guess. If not, it's patented, but the paper is available... I'm certain I have mentioned Vroid Studio here somewhere, but probably in the thread for modelling software or in the one about software to model humans. I like your idea about the low-pressure muscles. Not sure if this will work or be necessary, though. We'll need to get to a point where we can try out such things ASAP.
When I found this >>5336 I thought of your "light soft muscles" for the face, since those bubble artificial muscles only need low pressure. I added it to the actuator thread, because that one is mostly about muscles and motors, though it might fit in here as well.
Open file (80.21 KB 600x800 739713.jpg)
>>5333 It's progress but the approach to image generation needs to change significantly for it to improve much further. The technology and ideas are there. Someone just has to put them together. I think moving forward these character and face sculpting programs will learn user preferences, show several configurations for the user to choose from and continually refine the output with each decision. It'll be like playing Akinator and having it guess exactly what you're thinking of creating after a few questions. Rather than worry about software it's more important to think about practical matters like manufacturing and being able to prototype ideas: take a model, print it out, cast silicone, attach parts, and test things out. We won't have robots that can automate these tasks for a long time. I would start with creating a talking head in 3D, figure out how to emulate those expressions mechanically and try to build it, even if it isn't optimal. Having that hands-on skill will be immensely valuable for realizing good designs when they become available. So much could be learned doing a relatively simple project of creating a bust figure with just a head that can look around.
>>5340 I wonder if you could have the same effect with the bubble-muscles if you filled them with some kind of lightly-viscous liquid gel. You could also use it as a kind of heat sink for the tech components, and it would be /comfy/ warm inside the muscles.
>>5342 >So much could be learned doing a relatively simple project of creating a bust figure with just a head that can look around. Won't that in fact require software to work?
Open file (475.94 KB 1536x2048 IMG_20200927_054359.jpg)
Here's a peek into an Exrobots head. We have a thread on that company here >>4163 but please post pics and vids into the threads whose topic fits what they're showing.
>>5365 Wow, that's incredible anon! I think they're gonna cost a pretty penny though!
>>5365 > dat 'high-school' computer gril kek. These are going to be very expensive. Our challenge here at /robowaifu/ is to achieve 80% of the same functionality, at only 20% or less of the cost.
>>5385 There's a reason why I called this file "2cuties". That has some Mona Lisa vibe to it. Not sure she's highschool age, though. She might just be a bit tiny.
Open file (75.38 KB 500x422 EarRightV1.png)
>>5286 FYI, if anyone other than me is testing out InMoov parts, it might be better not to use Thingiverse, but to go directly to the website inmoov.fr, especially here http://inmoov.fr/inmoov-stl-3d/ and select the part you're interested in. Thingiverse might still be interesting for looking for remixes or completely alternative parts. It is well worth looking into, to keep ourselves from reinventing the wheel. Some things might not be useful but could still be an inspiration; others might be directly imported into other designs.
>>5403 Thanks for the link, anon. I agree that InMoov is going to be a huge help in building a robowaifu. I will very likely need to build a partial InMoov myself at some point to make progress. Hopefully I can just leave out some of the more cosmetic outer plates.
Open file (66.99 KB 1000x666 0_N6x6DaSQgFkZT5rE.jpg)
I've been thinking of building a binaural microphone so my robowaifu can tell where sound is coming from. However, the shape of the face also changes the way sound is perceived. My robowaifu's anime face might need a different shape of ear to optimally pinpoint the location of sounds and the shape of objects in the room. So when I'm done modelling her I'm gonna look into using an acoustic simulation to test different ear designs. Some open-source acoustics modelling programs for Blender I've found so far: EVERTims (Blender): https://evertims.github.io/ openPSTD: http://www.openpstd.org/ Ideally it would be best for the AI to generate them but I'm still learning how to generate meshes. One workaround might be creating various shape keys and letting the AI adjust those parameters. This approach could also be useful for generating cute faces. There's already a framework for integrating Blender with Pytorch: https://github.com/cheind/pytorch-blender
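For the direction-finding part on its own, the usual starting point is the interaural time difference: cross-correlate the two ear channels, take the lag, convert it to an angle. Rough sketch with a made-up mic spacing:

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s

def direction_from_stereo(left, right, sample_rate, mic_distance=0.15):
    """Estimate the horizontal arrival angle in radians (0 = straight ahead,
    negative = source to the left) from two ear microphones via the ITD."""
    # cross-correlate the channels and find the lag with the best match
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    delay = lag / sample_rate                       # negative when the left mic hears it first
    # delay * c / d is the sine of the arrival angle; clamp for numerical safety
    s = np.clip(delay * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return np.arcsin(s)

# toy test: the same tone arriving 0.2 ms earlier at the left ear
fs = 48000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 800 * t)
shift = int(0.0002 * fs)
left = np.pad(tone, (0, shift))
right = np.pad(tone, (shift, 0))
print(np.degrees(direction_from_stereo(left, right, fs)))   # prints a negative angle: source off to the left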
>>5407 What's the point of modelling it, though? Can't she just learn the directions once you have the head? You can put a sound source somewhere and then tell her where it is. Okay, maybe that's not enough data... Still, the fine adjustment based on how the face influences the sound seems like overkill.
Open file (2.19 MB 692x939 ai_speech2face.png)
>>5408 She'll have limited mobility and awkward control of her eyes so I figure the best way to enhance her perception is to maximize her hearing capability. It needs to be precise because I want her to be able to recreate the room from sound. The brain does this unconsciously and it helps create our spatial perception. If you have a fan running and hold a book beside your ear, moving it closer and further away, you can actually hear where the book is, not just the location of the fan. And if you try different size books you can even hear what size book it is. It's also a learning exercise because much later I want to build her an artificial voice box so she can sing and the timbre of the voice is controlled by the shape of the face and the resonating cavities in the throat and head. I know once I build a head for her and talk to her every day I'm going to get attached to her face and not wanna change it so I wanna get it right.
Open file (11.52 KB 480x300 Imagepipe_0.jpg)
Open file (15.58 KB 480x300 Imagepipe_1.jpg)
Here's a short video on Sophia from Hanson Robotics https://youtu.be/JO1ruL2SCmc which I wanted to mention because it gives a brief insight into how the face is constructed. I don't know if the material for the skin is available somewhere to buy, but I don't think so.
Open file (163.64 KB 256x256 ClipboardImage.png)
You wouldn't want a robowaifu that looks just like everyone else's, would you? I propose training neural networks to generate Live2D models (a bit like thiswaifudoesnotexist.net but with Live2D instead) and adding the ability to customize what she looks like.
>>5660 >You wouldn't want a robowaifu that looks just like everyone else's, would you? Sure, I wouldn't mind. I get your point, but I'm not really given to that kind of concern. So long as our relationship develops from our own personal experiences together, she'll always be special to me. I'm a harem kind of guy, so I would like my different waifus to all have unique characteristics about them. So maybe that satisfies your constraint in a way.
I think what matters more is the commonality of features rather than the differences. A common base which the hobbyist can then expand upon. Just think about the JDM custom tuner scene: if our waifus were cars, sure we'd want different bumpers and spoilers, but we need to start from a common, affordable, 50:50-balanced, lightweight RWD chassis. For example, some of us are using InMoov pieces and making them more feminine. Also, Live2D is a mess. I've noticed a couple of amateur Vtubers who, after making their Live2D model and running a couple of streams, just went "ah fuck it" and grabbed Vroid Studio instead to get a more useable 3D model. So I suggest following the Vroid Studio model... When I have more time on my hands, one of my projects will be to look at InMoov parts and Vroid base meshes and make common waifu parts.
Open file (587.91 KB 1946x533 latent variable.png)
>>5662 I agree Live2D is a mess. It's quite easy to get a 2D look with a 3D model, and Live2D models take almost as much time as a 3D model would. Vroid Studio still takes a bit of effort to make a good character in, though, and the results suffer from same-face.
One way to make a customizable waifu generator with neural networks is to create a training set labeled with the latent variables you wish to be able to modify. However, attaching numbers to something subjective is highly prone to error, so you don't want to assign all these values yourself. What you can do is create a simple sorting program that gives an Elo rating to the images and asks you which one shows less and which one shows more of the latent variable, sort of like a chess tournament ranking players by their skill. https://www.youtube.com/watch?v=GTaAWtuLHuo
When training a model like this there are some considerations to take into account. The Elo ratings will not be evenly distributed over the latent space. Differences in the Elo rating don't really say how far apart images are in the latent space, only less or more, so you need to create anchor points to calibrate it by saying one image is -1.0, one is 0.0, one is 1.0, and some steps in between, so you can interpolate the rest and give the model a strong signal of what you're going for. You can sort the images by any arbitrary number of latent variables: hair colour (which is three values, red, green and blue, or any other colorspace such as YUV or LCH), head pose (an xyz directional vector), eye size (x and y scale), how much you like the image or not, anything you can imagine and divide with your mind that has enough training examples. Some of the latent space should be determined by the network itself so it can include other properties you might not have thought about. Without that the output may become unstable and cause things like clothing color to change as you change the pose of the face. Ideally training examples should also evenly fill the latent space to avoid training bias, but this is rarely possible in practice. I imagine there is a normalization trick that can be done to lessen the gradient to areas of the latent space with lots of training examples, but I haven't tried something like this yet.
I have some code already for sorting images but it needs to be refactored so it's faster and easier to use. It has been on the backburner a long time since I don't have enough computation power to train on HD images.
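For anyone who wants to try the sorting idea before my code is cleaned up, the Elo update plus the anchor interpolation is only a few lines. A minimal sketch with an arbitrary K-factor and arbitrary anchor values:

import numpy as np

K = 32  # how far a single comparison can move a rating

def elo_update(r_win, r_lose, k=K):
    """Update two ratings after the user picks which image shows 'more' of the trait."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_lose - r_win) / 400))
    r_win += k * (1.0 - expected_win)
    r_lose -= k * (1.0 - expected_win)
    return r_win, r_lose

# ratings start equal; each answered comparison nudges a pair apart
ratings = {"img_a": 1000.0, "img_b": 1000.0, "img_c": 1000.0}
for winner, loser in [("img_b", "img_a"), ("img_c", "img_a"), ("img_b", "img_c")]:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

# calibrate Elo scores to the latent value via hand-labelled anchor images
anchor_elo    = [900.0, 1000.0, 1100.0]   # Elo of the anchor images after sorting
anchor_latent = [-1.0, 0.0, 1.0]          # values assigned to those anchors by hand
labels = {name: float(np.interp(r, anchor_elo, anchor_latent)) for name, r in ratings.items()}
print(labels)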
/ita/ posted a very nice looking female bust today >>>/ita/11655
>>5798 It's also a meme character. Dare I say it, it's based
>>5872 ba-dumm-tiss. But what I really want to know is whether the original modeler will come here to /robowaifu/ and do 2B for us?
>>5872 I doubt it, but I'd still ask him. Why not?
Open file (53.45 KB 1600x900 wojak_feels.jpeg)
>>5874 If only I knew who it was. I don't speak Italian, and there didn't seem to be a link anywhere.
>>4738 The eyes should be sized down by 5% and the last image is nightmare fuel: a neotenous-looking face if only it didn't have forehead creases.
>>5875 I'll ask him and we'll see where it goes
>>5877 Neat, thanks. I know how to do rigging and weight-painting in Maya, so if he will do an original model for us, then I'll provide it back to him all rigged and ready to animate.
>>5877 Thanks very much Anon, mystery solved. As I supposed, it's the work of a professional ZBrush artist. Not too likely to just do it for free; it would probably take some convincing. https://www.turbosquid.com/Search/Artists/CG-ARTStudio In the meantime, he provided a link to a model that is possibly ripped straight from the game assets, and potentially available to us. https://www.renderhub.com/rip-van-winkle/yorha-no-2-type-b-nier-automata I don't normally care to set up accounts in general, or for things like this. But if no one else here has an account there, I would consider it. Who knows, maybe the model will work OK inside Godot?
>>5886 Great find. I've been looking for a good 2B model. It seems to work okay in Blender and all her facial features are there under the mask. The materials just need to point to the right textures. The Collada exporter will fix most issues importing into Godot: https://gitlab.com/kokubunji/collada-exporter-2.83
>>5887 That's good to hear Anon. Really looking forward to seeing what you come up with for 2B.
Update: AI can create human faces from sketches: https://youtu.be/5NM_WBI9UBE This comes after making anime/animu-looking waifu faces from real photos became possible a little while ago >>5319, as well as skull models from faces >>5342, and 3D face models from 2D pictures for a longer time.
Do you think it will be harder or easier to make the robot’s face resemble an anime character vs a human face?
Open file (1.25 MB 912x1368 ClipboardImage.png)
Open file (501.08 KB 427x640 ClipboardImage.png)
>>5919 Companies have already produced high-quality life-sized anime figures so they're probably easier. However, this Rem one for instance is $10,000 so I doubt an amateur would be able to reach this level of quality. But once they start moving, I fear a level of uncanny valley will set in. It might be better if she were really small. For that reason, I plan on not giving mine a face or giving her a simple mask until the tech improves a bit more.
Open file (952.87 KB 2304x3281 EMTLEL0VUAMeDO0.jpg)
Open file (252.02 KB 1080x1920 Ejjb34ZVcAA0qVW.jpeg)
>>5920 We could probably learn a lot about making faces from the doll community. Even amateur doll makers can make pretty cute faces. The uncanny valley is unavoidable though. It's also there while chatting with AI. It's hard to follow an AI's thinking process and the conversation can be really awkward, even if what it's saying is correct and makes sense. It misses the subtleties of what you're saying and hallucinates things at times. These issues are inevitably going to manifest in body movement, facial animation and everything else until AI advances further.
>>5921 You're definitely right about the uncanny valley being unavoidable in terms of the AI, which has to be as close to real human intelligence as possible, but I think it can be minimized in their appearance, which does not. Most anime girl dolls seem to be an attempt to recreate 2D girls as they would look in 3D which I feel is the wrong way to go about it. By making caricatures and not even attempting to imitate reality, I think we can avoid the uncanny valley altogether. I worded that poorly but I think hair is a big problem. Since it's so easy to get realistic hair, a lot of people seem to use normal wigs and while your doll examples managed to pull it off, I think candy hair such as in my two examples would be the better bet. The feel of the skin is also a problem. When you touch a human face, you feel the muscles, the bone, the teeth, the light heat radiating off. As we want our waifu to move and talk to us, we'll have to give her a skeleton as well, but feeling that when we touch her would probably give off uncanny valley vibes. Making her skin extremely soft and putty-like may fix this. Having articulate eyes on an anime girl doll might be a challenge as well, not sure how you can make it work without it looking creepy.
>>5920 One of the reasons I think they are so expensive is because, like dakimakuras, they have a limited market, so they have to charge an insane markup to turn a profit. >>5921 Agreed. We can see that when we are talking to people with masks it's hard to tell the subtle facial movements we use to pick up on social cues.
Open file (158.23 KB 768x1024 Ek90RaEUwAArfN1.jpeg)
>>5927 I think it's a matter of preference. I'm a dollfag from Desuchan and don't find dolls uncanny at all. I don't think it's necessary to make something complex though. I would banter with a fumo all day if it had a speaker and mic, and maybe a gyroscope too. People get attached to whatever they experience repeatedly. There are people who think anime looks creepy and others who would only bang their anime robowaifu even if they were the last man on earth. I've seen sexdolls with out of this world jiggliness that made me wonder how I even became attracted to human females.
>>5981 >dollfag from Desuchan Hi there. Would you mind introducing yourself and your community in our Embassy thread?
Open file (809.77 KB 1920x1300 Suiseiseki-Landscape-2.jpg)
>>5985 I'm not active there anymore and the site is pretty much dead.
>>5998 I see, no worries then. Welcome. BTW (I imagine you already know this but w/e) there is a /doll/ board on Anoncafe.
Open file (6.85 MB 1280x720 fumodance.mp4)
>>5999 A doll board without fumos is dead to me. Robofumos would be easy to make. The only thing special you need is an embroidery kit for the face and to design the camera into a neck accessory.
>>6002 >Robofumos would be easy to make. I think that is an excellent idea Anon. What kind of mechanisms do you think should go on the inside, and how do you think they should be placed in there?
>>6007 Small servos and a thin armature so the fumo is still soft. You wouldn't have to worry about hands, elbows, knees or feet. Servos for the legs and spine could be optional; it'd be fine with just neck and arm movement. The arms, body and head could be padded with haptic sensors like the simple ones found in DDR mats so they know when they're being squeezed. I'm not sure if it would be sensitive enough though to feel headpats. It'd be really wholesome if a fumo could feel you petting her head. The mechanisms would have to be removable though so you can give the fumo a bath. Perhaps this could be done with a piece of soft velcro in the back.
>>6018 Those all sound like really good ideas. Any chance of you making some sketches and posting them here to give us a better picture of your ideas? Also, I wonder if there aren't some small, inexpensive ways to create headpat sensors? Obviously this is going to be something in high demand for basically all robowaifus after all.
Open file (1.13 MB 1024x1103 3.png)
>>6021 I overemphasized the servos but something like this maybe.
>>6022 Ahh, I see. That makes perfect sense now. Looks like it's maybe 6 servos or so? I'm guessing you'd want to keep something like an SBC for AI, etc., somewhere in the head area?
Open file (82.29 KB 1024x768 erica-photo2-full.jpg)
>>5921 >>5981 The uncanny valley doesn't exist in the form people thought and often still think, and it might be a bit different from person to person: >>4733 It's not about how close to human a robot looks; creepy looks creepy. >>5927 To me doll hair mostly looks fine, but I've only seen it in vids and pictures. If it's not real enough, take the real thing: https://youtu.be/kBD2T2fXUG8
>>6024 That looks like he's improved the facial form of Erica? Seems like she has more appeal now.
>>6023 I kinda messed it up. It's supposed to be 2 for the neck, 2x2 for the arms, 2x2 for the legs, so 10 in total. A Raspberry Pi could probably fit in the head or in a backpack. I'm thinking a backpack will be a better idea because it'd be easier to dissipate heat and it gives better space for the batteries.
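For reference, driving that layout straight from the Pi's GPIO with gpiozero would look something like this. The pin numbers are made up, and with 10 servos a dedicated PWM board (PCA9685 or similar) would be kinder to the Pi than software PWM:

from time import sleep
from gpiozero import AngularServo

# name -> GPIO pin; placeholder wiring, adjust to the actual fumo harness
PINS = {
    "neck_pan": 17, "neck_tilt": 27,
    "arm_l_shoulder": 22, "arm_l_elbow": 23,
    "arm_r_shoulder": 24, "arm_r_elbow": 25,
    "leg_l_hip": 5, "leg_l_knee": 6,
    "leg_r_hip": 12, "leg_r_knee": 13,
}
servos = {name: AngularServo(pin, min_angle=-90, max_angle=90) for name, pin in PINS.items()}

def wave():
    """Tiny canned gesture: look toward you and wave the right arm."""
    servos["neck_pan"].angle = 20
    for _ in range(3):
        servos["arm_r_shoulder"].angle = 60
        sleep(0.4)
        servos["arm_r_shoulder"].angle = 30
        sleep(0.4)

wave()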
>>6022 Obviously these are multiple thousand dollar robots that can wrestle, but if you haven't come across this video already, you can take away some good animation ideas. I suppose a tiny, fluffier robowaifu would wobble more, making her even cuter. https://www.youtube.com/watch?v=AZMmYF4G278
>>6033 >It's supposed to be 2 for the neck, 2x2 for the arms, 2x2 for the legs, so 10 in total Actually, I think having just a single actuator is a better choice for a Fumo. You might want two for the neck (1 or fwd/bk, 1 for side-to-side), but I think just one each for the arms and legs would be good. It would be cute movement, and would work just fine for her form-factor.
>>6037 >1 for fwd/bk*
>>6037 That would work. I might try that for my first attempt. I'd like for them to eventually be able to walk and point at things though. I was thinking of adding 2 more for the torso, or even 3 if they can fit, so they can wobble and balance themselves.
>>6044 >so they can wobble and balance themselves. You mentioned the idea of giving her a backpack for batteries, etc. If you made the backpack's, well, back rigidly attached to her internal armature, then you could use tiny versions of these gyros (firmly fixed inside the backpacks) >>5645 for helping with balance. That might enable you to get away with just one internal actuator per hip joint.
>>6045 If these didn't get so dang hot, they might form the basis for a gyro system since they spin so fast. As it is though, they probably aren't usable for Fumos. >>4505
Open file (91.11 KB 800x600 face-muscles.jpg)
Sorry if this question doesn't make sense, I don't really know what I'm talking about (even though I've been lurking for over half a year). Would it be possible to use something like a system of porous dielectric elastomers as artificial muscles to simulate the mimic muscles (picrel)? I'm specifically wondering about the sensitivity of the material, which I know is relatively high, but I'm curious if it's sensitive enough that I can get super, super small increments of movement to try and nail natural facial expressions as best as possible.
>>6562 I know a little something about facial animation Anon, but not about the materials you mentioned. I found pic related and I'll skim it to see if I think I can add anything in response to your question. My from-the-hip answer is yes (but it will take both meticulous craftsmanship in construction, and detailed control in the software design). >
Open file (1.65 MB 1270x903 1602918127297.png)
>>6563 I dunno if you've already read that, but I'll explain myself a bit further. DEs are part of a larger group of materials known as electroactive polymers, which are materials that change shape or size when exposed to an electrical field, and are used a lot in soft robotics as artificial muscle. It really caught my eye, but I'm only interested in the facial muscles part of it, so I'm looking into different EAPs and systems that can do that. Off the top of my head the DEs looked most promising, due to a variety of things like low latency. The main issues and questions I'm trying to get to the bottom of (I won't have the money for home experiments for at least a month or I'd just find out myself) are how sensitive the material is (how little I can deform the material) and whether or not it can hold a shape then return to its original shape. If you happen to know of any better-suited EAPs or anything I'll be glad to hear it. I think developing believable facial animations, particularly of the mimic muscles, is by far the most important part of the physical side of things, so I thought I'd try to just mimic the muscles themselves instead of just the expressions. As long as they convey emotion in a human way, our unga bunga monke brains will be much more likely to accept them and escape the uncanny valley; the rest of the body is secondary to that goal. Any help and input is appreciated, anon
>>6574 I'm partly through the book so far, and atp I have no reason to revert my initial instinct: DEs can indeed be used to simulate realistic facial deformation. A combination of more rigid thin films (ligaments) and more porous ones (muscles) would have the best bio-mimicry, both in design and result. But this would definitely be a years-long subproject for an autistically-dedicated individual working alone. This effort could certainly constitute the work of a good-sized team, and several papers' worth of research, if absolute realism was the final goal. But the uncanny valley tends to drive /robowaifu/ towards a waifu-looking solution for most physical design work, including that of the facial systems. As a Character TD/Animator I can tell you that, interestingly, it's the small gap from the bottom of the upper eyelids to the top of the eyebrows that constitutes the lion's share of emotional believability within facial character animation. Probably ~75%+. The bulk of the remainder is related to mouth deformations. And ofc the contextual, sequential timing of everything is also a fundamental part of making humanly-believable animation. Combining the body language and the facial work is more or less just about everything we mean by 'emotionally believable acting'. Other aspects (such as physique, costuming, environments, lighting, sound), while important, are simply ancillary to the fundamental art of acting itself. I'll be happy to work with you on this project if you choose to try, but be aware I'm not a mechanical engineer. Hopefully EEs and MEs are joining /robowaifu/, and they can help us out as well.
>>6562 Thanks for your input. I don't know enough about those, though I already have them on the radar. One thing you should always keep in mind is the lifespan of artificial muscles; that's the part I'm not sure about here. I do recall Youtube videos on how to build such muscles, though. Thanks for the reminder. However, another thing might be that fictional anime waifus and fembots like Cameron (TSCC) have a rather limited range of facial expressions, and are still attractive.
>>4496 >>4979 If screen tech is used for eyes, should it be matte or glossy? If you want a shiny look on the eyes, it doesn't follow that a glossy screen is the best choice, as the bright light bouncing off the surface emphasizes how flat it is. Information from sensors detecting the direction of light sources could be used to animate reflections as if the eyes weren't flat. Those who are open to weird fantasy-robot eyes on screens, consider this effect: https://twitter.com/jagarikin/status/1331409504953540613 https://twitter.com/sina_lana/status/1331049253280497670 In human beings, the direction one looks at shows attention, but also emotion (like looking at the ground when one is sorry, or at the ceiling when thinking hard, and not because the floor or ceiling are where the action is). The pupils react to light and show emotion as well. Couldn't something like the effect above be used to make the eyes more expressive than the real thing, to emphasize and distinguish a bit this emotionally expressive side of eyes?
>>7431 Seems to me that a matte finish would be best. But since it's likely to be brightly illuminated w/ any modern screen tech, the primary concern may turn out to be properly adjusting the brightness/luminosity. Thanks for the links. Is there a usable Twatter front-end alternative? Nitter, I think it's called. As far as your question about expressiveness goes, I think we can borrow a whole lot of good prior art from the area of character animation. I think the answer is 'yes'.
Open file (267.38 KB 2048x1153 hamcat_mqq-04.jpg)
Open file (184.73 KB 1920x1080 hamcat_mqq-07.jpg)
Open file (208.12 KB 1920x1080 hamcat_mqq-03.jpg)
Open file (195.40 KB 1920x1080 hamcat_mqq-02.jpg)
>>7707 and some responses are related to the face and skull. The pics in the project dump thread came from https://nitter.net/hamcat_mqq, same as the ones in this comment. I don't think the mechanism for the eyes is anything special, but I can't tell for sure. There are also videos in the original source. Some anons like the eye design and eyelashes.
>>7725 Wow, this is very nice facial work Anon. Thanks for sharing it here. I think we could fairly easily embed LEDs inside the eyeball irises, and lots of other low-energy lighting possibilities for our robowaifus are conceivable tbh.
>>4549 >Spoiler: For faces sculpting seems to be better. What I'm wondering is if you can take a bunch of professionally-sculpted face meshes, run them through a number of parameterization algorithms, and then come up with a few sound principles for beauty and appeal (Nordic women's facial forms, say) that are basically automatable using a GAN-like approach after the parametric analysis has been done.
>>8230 >that are basically automatable Just to clarify, I mean that the design generation itself is automatable, not the topic of automating the robowaifu face after manufacture.
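To make the idea concrete, here's the kind of thing I mean: a bare-bones 'morphable model' blend in numpy (the .npy filename, the 20-component count and the weights are all placeholder assumptions; the real work is in collecting and aligning the example meshes):

import numpy as np

# Hypothetical input: N professionally-sculpted face meshes with matching
# topology, flattened to an (N, 3*V) array of vertex coordinates.
faces = np.load("face_meshes.npy")
mean = faces.mean(axis=0)

# PCA turns the collection into a handful of 'appeal' sliders.
_, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
components = vt[:20]

def blend_face(weights):
    # New mesh = average face + weighted sum of the principal directions.
    return mean + np.asarray(weights) @ components

candidate = blend_face(np.random.normal(0.0, 0.5, size=20))

A GAN could then be trained to propose the weight vectors instead of sampling them at random, but the blend step itself stays this simple.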
>>8231 I'm sure something like such a generator will be available at some point. However, I was rather thinking about one that would take in photos of a lot of pretty faces and then create new ones, then make a mesh out of the selected one. Two separate steps, which I think already exist on their own. Or a bunch of photos of e.g. an actress would go in, taken from different angles, and then it would make a model. With enough models it could get better at putting out something pretty. However, I was thinking that a user would give it the name of some public figures (actress or model) and it would generate some examples which would be close but not exactly the same. From those the user could then choose the preferred ones. Also making it more anime-like (Alita style) looking. I don't really think that some highly specific facial features of Nordic women exist, or that it makes sense to go after that, though. Just make the eyes blue or green and the hair something between blonde and red or light brown, and maybe add some freckles. To fit in the huge eyes, the face would also have to change anyways... Also, if it goes after the pattern of an actress then it should put out what you want; if this includes something Nordic then it should be in there. One of my ideas in that area was that someone (group, company or person) could pay low-wage sculptors in poor countries to make models from a pool of pretty women (actresses and models). Then this would be a pool which could be used directly, or to train such generators on the creation of new but similar face meshes. After all, don't forget we need the skulls for the faces as well, so they fit onto them, and then a way to animate it all with ease.
>>8232 You're correct that tools exist to do facial feature extraction to generate facial meshes (and other kinds) using multi-camera setups. An example of this technique (with a different kind of focus) is here >>1088 . But afaik, they are still fully proprietary and highly expensive. Admittedly I haven't looked into this area for a year or two, so maybe good opensauce systems exist now; let us hope so. Certainly the all-in-one '3D' cameras are more numerous to choose from now. And as you suggest, collections of images can potentially stand in as a proxy for such a method, though the 'registration' part of the process is both tedious and error-prone. >To fit in the huge eyes, the face would also have to change anyways Fair enough. As much as I like Alita, I think Rodriguez ( >>4502 ) went just a touch overboard with the design. But really, the critique is only because the rendering otherwise is so good and so realistic that it triggers a bit of a 'wut' in me (and others). As a counter-example, here's work by an artist that I think has found a near-perfect balance between kawaii-eyes and facial realism, though in an artistic style. > One pretty famous example of your pool-of-artists idea that has actually been carried out by a very multi-talented guy is Ricky Ma's avatar effort >>153 . The very fact he created a small furor among women's advocacy groups as a 'creepy stalker' shows just how good a job he's done with it. Hopefully /robowaifu/ can manage to produce many, many examples that will do just as well (or even better)! :^) >--- -Update: Welp, I've attempted six times now to post this pic for you, obviously nuJulay won't cooperate atm. I'll try to do it again for you ITT later on Anon. The artist is named Ivant Alavera.
>>8236 >Ivan Talavera*
>>8240 >>8241 Ah, thanks. Cute, but I'm also fine with Alita. I think the bigger eyes might help if she otherwise looks rather young, to make sure she doesn't look human. >>8236 In software for users to edit, I found this: Gradient Mesh Illustrator - https://youtu.be/JEJHk9VRAEQ and similar stuff. Gradient Mesh seems to be the term to look for. Also VoluMax might help: https://youtu.be/4XdoN2-8Dg8 However, my point is rather that we don't need to copy some specific face anyways, and this here is from 3 years ago, and a photo seems to be enough: https://youtu.be/u9UUWqVquXo - we would only need to have it in a form we could print molds from. There's more on their site: http://www.hao-li.com/Hao_Li/Hao_Li_-_publications.html
Anyone seen this video? It does a decent breakdown of how unscientific most memes of the "uncanny valley" actually are. It may also help to clarify goals regarding robowaifu function and design (particularly of the head/face). https://www.youtube.com/watch?v=LKJBND_IRdI
>>8261 Yes, you probably have that from this site here, because I posted it (and on cuckchan and on other occasions).
>>8266 Seems the conclusion is that all robots are designed to perform a specific task or a small set of tasks. Robowaifus are, for the most part, designed to elicit a positive emotional response from the user. So I shouldn't try to make a robot in the image of a human that can carry out all of the tasks a human can, because I will just fail at both. Instead I think I'll just focus more on making a cute robowaifu who fulfills the function of reducing loneliness, rather than attempting to make one that can walk or perform complex motor actions like playing sports etc. We already have some pretty high-quality chatbot software in the form of GPT-3, which I should be able to link to my current text-to-speech program, so I just need to complete a reasonable-looking robowaifu body now.
>>8267 Being a cute waifu is the priority, then improving her within these constraints. That's the way. I guess it will rather be that the more skilled ones will be more expensive.
Open file (268.63 KB 1240x897 MjcxNzYxOQ.jpeg)
Hey, for those of us going for the human look, the use of prosthetics is always a good possibility. For the face, actual dental replacements (dentures?) could provide highly realistic-looking teeth for a waifu's bright, sunshiney smile. :^)
Open file (191.52 KB 802x1202 summer-glau_02.jpg)
>>8326 Correct, they had this idea mentioned on the original board on 8chan. I never looked into it so far: where these are available, what they cost and where to get them. There must be some sources for the training of dentists or something; they seem to have some kind of dolls with fake teeth to train on. Not sure how hard they are, though. I hope we can get them made out of ceramics in some standardized sizes and don't need to build them on our own as well.
>>8331 Yea good thinking Anon. I bet we can source them somewhere on the cheap. Remember the mouth needs to be kept sanitized just like w/ humans so the source needs to be reputable. Remember your robowaifu will probably need to kiss you lots to stay happy! :^)
While not strictly RoboFace development per se, until we have a dedicated MOCAP thread this might be a good spot for this. I indirectly discovered this project today after looking into SingularityNet via Anon's post >>8475 . It's a tool that finds facial landmarks in video, helpful for things like facial retargeting, etc. https://github.com/singnet/face-services
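For anons who just want to experiment with landmark tracking right now, the same basic capability can be had with off-the-shelf libraries. A minimal sketch using OpenCV + MediaPipe follows (note this is not the singnet project's API, just a stand-in I'm more familiar with; camera index 0 is an assumption):

import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        h, w = frame.shape[:2]
        # Draw every landmark as a small dot; retargeting would map these
        # points onto servo or blendshape channels instead.
        for lm in results.multi_face_landmarks[0].landmark:
            cv2.circle(frame, (int(lm.x * w), int(lm.y * h)), 1, (0, 255, 0), -1)
    cv2.imshow("landmarks", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()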
Open file (1.30 MB 2731x4096 IMG_20210325_183338.jpg)
Open file (585.69 KB 2731x4096 IMG_20210325_183326.jpg)
Open file (1.01 MB 2731x4096 IMG_20210325_183343.jpg)
This is what is possible today, though lighting might be relevant for the look. Let's see how they'll look after being shipped, once some customers take photos and report on it. These are the Alita busts from Queen Studios. I already mentioned them here: >>8194
>>9260 Yep that's nice Anon, thanks for the updates.
So, things are moving forward here. A new network creates toonified faces out of real or made-up ones, and also allows mixing of two input pictures. >Our ReStyle scheme leverages the progress of recent StyleGAN encoders for inverting real images and introduces an iterative refinement mechanism that gradually converges to a more accurate inversion of real images in a self-correcting manner. https://yuval-alaluf.github.io/restyle-encoder/
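For the curious, the 'iterative refinement' they describe boils down to a very small loop. A pseudocode-level sketch (G, E and avg_latent here are hypothetical handles for the generator, the ReStyle encoder and the average latent, not the repo's actual API):

import torch

def invert(target, G, E, avg_latent, steps=5):
    # Start from the average face and let the encoder predict a latent
    # *correction* each round, given the target stacked with the current guess.
    w = avg_latent.clone()
    recon = G(w)
    for _ in range(steps):
        delta = E(torch.cat([target, recon], dim=1))
        w = w + delta          # self-correcting update
        recon = G(w)
    return w, recon

Once you have latents for two faces you can interpolate between them to get the mixing effect they show.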
>>10461 Oh, video: --write-sub --write-description https://youtu.be/9RzCZZBjlxM
>>10463 Thanks very much for taking the extra time to give a fuller youtube-dl command to use, Anon. Getting and keeping the description and subs will be important to anyone keeping a personal archive of YT videos, once cancel-culture Marxism literally deletes anything/everything that could possibly have any bearing whatsoever on either robowaifu creation, or anything else that could possibly help men. Since the Lynxchan software adds an '[Embed]' into the text of the command, I always put such a command here on /robowaifu/ inside codeblocks, since the CSS here disables this embed tag. youtube-dl --write-description --write-auto-sub --sub-lang="en" https://youtu.be/9RzCZZBjlxM
>>10461 >>10463 That's really cool Anon. He's humorous to listen to as well, his enthusiasm is great.
That screen one looks nice for nanny.
https://www.thingiverse.com/thing:4865223 https://www.youtube.com/watch?v=8_wkbLL0fqM LED Matrix behind tinted plastic, cheap, easy, customizable
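If anyone wants to try this on a Raspberry Pi, a rough sketch with the luma.led_matrix library and chained MAX7219 8x8 modules is below (the wiring, module count, orientation and the pixel maps for the 'expressions' are all assumptions on my part; the setup in the video may differ):

from luma.core.interface.serial import spi, noop
from luma.led_matrix.device import max7219
from luma.core.render import canvas

serial = spi(port=0, device=0, gpio=noop())
device = max7219(serial, cascaded=4, block_orientation=-90)  # 4 chained 8x8 panels

# Each expression is just a list of (x, y) pixels to light on the 32x8 strip.
EXPRESSIONS = {
    "neutral": [(8, 3), (8, 4), (23, 3), (23, 4)],
    "happy":   [(7, 4), (8, 3), (9, 4), (22, 4), (23, 3), (24, 4)],
}

def show(expression):
    with canvas(device) as draw:
        for x, y in EXPRESSIONS[expression]:
            draw.point((x, y), fill="white")

show("happy")

Behind tinted plastic the individual pixels blur together nicely, which is most of the trick.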
>>12938 Thanks, this might fit very well with the basic idea of the board: making affordable robowaifus, which don't need to look like humans but can be a bit more on the robot side.
>related crosspost (>>13020)
Open file (47.85 KB 600x414 uploads.jpg)
>>12938 That's like Rina-chan's board. She's autistic and uses a board to convey her emotions because making facial expressions is hard for her. Her board could be a really cute face for a robowaifu.
>>13560 This will be a thing in 5-10 years; all those kids growing up around masktards are going to be incapable of expressing emotions.
>>13560 This could probably get quite expressive with a high enough resolution.
Open file (108.85 KB 335x640 mace_griffin_acolyte.jpg)
>>13560 >>13563 >>13565 I don't know. It just reminds me of the cultist NPCs from the game Mace Griffin. Using a display seems like a trade-off against using a real tangible face if you want more face customization, but an emoticon-like face just seems like a really bad trade-off.
Simplified non-human faces go a long way towards bypassing the uncanny valley
>>12950 >Making affordable robowaifus, which don't need to look like humans but can be a bit more on the robot side. Agreed. >>14870 I don't personally find that particular face appealing, but I think in large part you're correct Anon. At the very least, I'd suggest we all seek ways to maximize the expressive potential of our robowaifus' face systems and structures, while minimizing their underlying complexities.
Open file (42.60 KB 720x540 MaidroidMiao.jpg)
Open file (113.77 KB 750x1000 Sinobu.jpg)
>>14935 Could you provide examples of faces that work in 3D at human scale? Most figures are cute at figure scale, but they look uncanny at human scale.
>>14955 The life sized Rem statue is the best I've seen with a full anime aesthetic.
Open file (527.88 KB 2048x1364 E0SSS1JUUAEoWCw.jpg)
Open file (477.94 KB 1152x2048 Ezfhy32VIAE6pIS.jpeg)
Open file (565.04 KB 1364x2048 EzpxOz7VIAMKsjY.jpg)
Open file (1.38 MB 1920x2560 EWSMrC3UwAE-KkL2.jpg)
Open file (605.49 KB 1707x2560 1566424488789.jpg)
>>14955 Anime is inherently doll-like so at human scale it's bound to feel unnatural when you go to kiss her on the cheek and her eyes are twice the size of yours. Even most weebs will probably prefer something semi-realistic. It's really a matter of personal preference though. Some find anime dolls creepy and uncanny but I completely adore them. People will have to come to terms with robowaifus being animated machines and develop new aesthetics that feel comfortable to live with.
>>15285 Also something to consider is faces will feel very different once they start moving around. SEER's cable eyebrows aren't human-like but they add a lot of expression.
Open file (92.82 KB 988x1478 Kyoko.jpg)
>>15280 That Rem is gorgeous. Her roundness and cat-like features do wonders. >>15285 Good point about the eyes. Human eyes are approximately an inch in diameter, so that could be a dimension to keep in mind. The ones with smaller noses and less human-like features are the cutest. There seems to be some middle ground between cats and girls that's best. Also, fangs are amazing! >>15287 This is an important point. Looking good in motion is vitally important.
>>15280 Wonderful figure Anon, thanks for sharing it. >>15285 >>15287 Good points. >>15288 A cute. I think we can all take pointers from the dolls/figurines world for design aesthetics.
Open file (141.05 KB 1024x768 humaneyes.jpg)
Open file (296.59 KB 711x336 roboeyes.png)
Main aesthetic issues I see with regard to making realistic robot eyes: -Eyelashes often get punched in all crazy. This robot's eyelashes (specifically on the lower half) are way too sparse and erratic, and don't extend to the outer corners of its eyes. -Eyes do not sit right within the face. Notice how there's this gaping hole in the robot where the real person has muscle. Realistic eyes need muscle surrounding them like realistic teeth need gums. I understand the eyes need a way to move around without lubrication, though, so this is a real issue. -Again, this probably has to do with eye movement, but the eyelid creases are too deep. I've never seen anyone who has a weird overhang like the one above the left eye and there's something about the way the upper skin hangs above the crease that makes it look old and "crumbly".
Open file (284.34 KB 634x424 ClipboardImage.png)
Open file (232.91 KB 474x355 ClipboardImage.png)
>>15997 The eye crease is unique to Engineered Arts robots. Take a look at Asuna's eyes, they are very cute!
>>15997 I'm quite sure the eyelashes are just a matter of how much work gets put in, and of finding the right technique. The gap problem is something I realized myself, but this might differ based on the construction. Also, silicone rubber is actually very flexible and wouldn't resist that much. This might just not be used yet because these are all just prototypes.
>>15997 >>15998 >>16001 My $.02 on the matter of hyper-realism for eyes is quite simple. In a word: don't. As discussed briefly here (>>15962, ...), and frankly across this board for years now, we aren't in any position (not just us, I mean mankind generally) to 'escape' the Uncanny Valley for hyper-real, general-use Gynoids/Androids/Mammaloids anytime "Real Soon Now"(tm). We're simply too well-tuned by God as homo sapiens sapiens to be fooled easily, just yet. And 'human' eyes in particular are a relatively small part of the body that a great proportion of our brains & souls are tuned specifically for. I don't want to discourage anyone here from proving me wrong (far from it, I eagerly look forward to that day! :^), but tbh it's a fool's errand IMO to think we'll manage such right out of the gate. Therefore, I agree personally with the general consensus on /robowaifu/ thus far (indeed our tagline): >"Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality." At least the cute 'anime' bit at a minimum for now. Any other approach in the near-term will be a disappointing fall into that infamous valley IMO. Thoughts? >=== -minor spelling edit -minor grammar edit -minor prose edit
Edited last time by Chobitsu on 04/25/2022 (Mon) 09:22:35.
>>16004 >matter of hyper-realism for eyes is quite simple. In a word: don't. Devs should try, then we'll see. Filling in the gaps around the eyes isn't really ruining anything. >escape' the Uncanny Valley >fall into that infamous valley I always like to remind people of this video: https://www.youtube.com/watch?v=LKJBND_IRdI Just don't make it look creepy or ugly.
>>16008 This video also addresses the history of the "uncanny valley" and the fear of human-like dolls: >Why Are We Afraid Of Dolls? https://www.youtube.com/watch?v=nHzwMVpiLD4 It's basically an attractive idea for creators to use dolls for horror, because they were meant to be fun (similar thing with clowns). The problem is, too many people let their media consumption shape their world view. Some might get traumatized by watching horror movies while being too young. >Pediophobia: The Fear of Dolls https://www.verywellmind.com/fear-of-dolls-2671875 https://www.healthline.com/health/mental-health/pediophobia Around 9% (of Americans) seem to have it, more women than men. Exposure seems to be an important part of each therapy. >The outlook is very good for people with pediophobia who seek counseling for their phobia. To improve the outlook, a person with pediophobia needs to be fully committed to their treatment plan.
>>16008 My $0.02 is that the Uncanny Valley theory (at least the media's interpretation of it) isn't actually 100% accurate. Specifically the notion that there's a "no zone" somewhere between inanimate-looking and totally human-looking within which everything is creepy. Rather, I think that as intended human-likeness increases, the potential appeal increases in a linear fashion. However, the potential creepiness ALSO increases linearly. An extremely realistic robot has the highest potential for human appeal, but is also the hardest to execute well and will be the most off-putting if not well executed. Uncanny Risk vs. Reward, if you will. Consider something like the CG in a recent Final Fantasy game. The characters are ALMOST human but not quite - according to the Uncanny Valley, they should be extremely unsettling. Yet that's not the case; I'd imagine most people would rate them somewhere between "mildly creepy" and "highly appealing". (reminder: we're just talking about visuals here) Now contrast that with the IEEE Creepy Robots poll (https://robots.ieee.org/robots/?t=rankings-creepiest-robots) and you will notice that none of the top 10 are nearly as human-like as my aforementioned Final Fantasy example, but I imagine most would consider any one of them far, far creepier. In fact, the human likeness varies a lot between the top ten - there are plenty of examples of robots people would consider less unsettling than the top 10 in this list despite them being more human-like. Compare Pepper and Kobian, for example. Or Cutieroid and CB-2. The biggest implication of this is that the best way to fix a creepy robot design is not to "move it left/right of the Valley" by changing its human likeness, but rather to improve the execution of the design itself. While making a robot less human-like can resolve the creepiness issue, it does so at the cost of appeal.
>>15997 Though you are correct in your aesthetic reasoning, realistic faces cannot be made by man. There are numerous flaws you did not mention which would still fall into the uncanny valley. >>16004 Chobitsu is correct, going for a demi-human/anime-inspired aesthetic is much more attainable. >>16008 One of my favorite videos. Though it is correct to say the uncanny valley is not real, it's a useful term to describe the general unease these creepy machines evoke. I do think good can come from researching solutions to the uncanny problem, but that's best left to corporations like Disney. >>16012 I think "cuteness uber alles" is the best way of thinking about it. Almost-human can be quite cute indeed; it's just easy for almost-human to end up creepy like Sophia. This is why I think sidestepping into the demi-human route is good. I also want to clang monster girls though, so I'm heavily biased. Picrel has big fins, a flat face, and spiky red eyelashes without eyebrows. Almost human, but not quite.
This is a great conversation going r/n ITT, /robowaifu/. I hope we can keep it going for a while, and I'd like to link to it in our mini-FAQ in the /meta's
>>16008 My 2 cents is that the uncanny valley doesn't exist and doesn't matter. This is robotics, not a beauty competition, so we should be caring about the technical aspects; this is why I think robots like Sophia are inspiring. This is good, since there are actual challenges that need to be solved like budget, time, hardware, software and more.
Open file (273.40 KB 1440x1800 1652407800269.jpg)
Open file (153.27 KB 1080x1080 1652403268055.jpg)
Open file (335.35 KB 1440x1800 1652403080234.jpg)
>>15285 After more research, I've come to the conclusion that anime doll faces are the only way to make her cute and simple. All of these could be printed on an SLA printer relatively easily, and the eyes would also be relatively easy to make. The hair would be somewhat of a problem, but adding non-human ears like cat or wolf ears would help. Go for the demi-human route.
Open file (104.56 KB 800x1200 1607113054993.jpg)
>>16246 >After more research, I've come to the conclusion that anime doll faces are the only way to make her cute and simple. I've suggested all along that the doll industries have much to offer us in our quest for robowaifus. This isn't a new conception either.
Open file (77.38 KB 810x1080 Imagepipe_36.jpg)
Open file (132.76 KB 810x1080 Imagepipe_37.jpg)
Open file (180.39 KB 828x1080 Imagepipe_38.jpg)
>>16246 >>16263 These are great and at least close to good enough. I'd prefer a bit more permanent texture for the face, but that's a minor issue. That said, pointing out the minimum which is achievable is as important as pointing out what can be done with enough effort (and money): Aside from the Alita bust here >>9260, the pictures related are actually a doll.
Open file (230.57 KB 1316x2048 20220624_190543.jpg)
Spherical rear projection faces are interesting
>>16008 Watching this rn. Good point. Personally, I think robots should be robots. I don't want wrinkles, "eye creases" or synthetic pores and skin imperfections on my r-waifu. Synthetic humanoids should embrace their... syntheticness. That is my philosophy. (I guess I can make one allowance for 2B's chin mole, lol)
Nvidia AI can create or change faces based on sketches: https://youtu.be/MO2K0JXAedM - Not always working out great, but it might be useful for going from a real face to an anime-like one at some point, or at least for doing some changes to a pic like increasing the size of the eyes. Though, I'm currently more inclined to use Artbreeder or something similar. I don't really need any pretty actress as a starting point, after what I've gotten there without much effort.
>>16679 >nosering disgusting.
>>16880 Maybe, but also not relevant. She looks very human-like but even better.
A projection-mapped bit of fabric and a small blue laser projector that you can get for like $50 is enough to have a fully believable and awesome lifelike face. If we can accept it in VRChat, we can project onto an opaque fabric doll head.
Realistic and Interactive Robot Gaze Abstract >This paper describes the development of a system for lifelike gaze in human-robot interactions using a humanoid Audio-Animatronics® bust. Previous work examining mutual gaze between robots and humans has focused on technical implementation. We present a general architecture that seeks not only to create gaze interactions from a technological standpoint, but also through the lens of character animation where the fidelity and believability of motion is paramount; that is, we seek to create an interaction which demonstrates the illusion of life. A complete system is described that perceives persons in the environment, identifies persons-of-interest based on salient actions, selects an appropriate gaze behavior, and executes high fidelity motions to respond to the stimuli. We use mechanisms that mimic motor and attention behaviors analogous to those observed in biological systems including attention habituation, saccades, and differences in motion bandwidth for actuators. Additionally, a subsumption architecture allows layering of simple motor movements to create increasingly complex behaviors which are able to interactively and realistically react to salient stimuli in the environment through subsuming lower levels of behavior. The result of this system is an interactive human-robot experience capable of human-like gaze behaviors. This Disney project has been around for a while now, but it's a good example of facial animatronics combined with perception & behavioral software libraries to drive lifelike facial character animation. https://la.disneyresearch.com/publication/realistic-and-interactive-robot-gaze/ https://www.youtube.com/watch?v=_knausEgv3A >=== -add external content links
Edited last time by Chobitsu on 01/14/2023 (Sat) 04:55:09.
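To give a feel for what the paper's attention habituation and saccade layering amount to, here's a toy sketch (all of the constants and the detection format are made up by me; the real system is obviously far more involved):

import random

class GazeSelector:
    def __init__(self):
        self.interest = {}   # person_id -> accumulated interest

    def update(self, detections):
        # detections: {person_id: (x, y, salience)} from the vision system.
        for pid, (_, _, salience) in detections.items():
            self.interest[pid] = self.interest.get(pid, 0.0) + salience
        target = max(detections, key=lambda p: self.interest[p], default=None)
        if target is None:
            return (0.0, 0.0)            # nobody around: idle, look straight ahead
        self.interest[target] *= 0.95    # habituate to whoever we're already watching
        x, y, _ = detections[target]
        # Layer a small random saccade on top of the smooth gaze target.
        return (x + random.uniform(-0.02, 0.02),
                y + random.uniform(-0.02, 0.02))

The subsumption part would be higher layers (e.g. a 'glance at the loud noise' behavior) overriding this default loop whenever they fire.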
You can slap moe eyes onto industrial equipment and it becomes hotter than half the women out there. https://twitter.com/ZappyZappy7/status/1615344252485251072
Open file (5.79 MB 1280x720 industrial_wifus.webm)
>>18874 LOL. This is what happens when you turn bored engineers loose on the factory floor! >inb4 Robowaifus' Unique Advantages
Open file (160.38 KB 794x1059 FollowMeEyes.jpg)
"Follow Me" eyes as indicated here by Kiwi >>18870 Cheaply made with a printer or by drawing and painting them, UV light and "glass cabochons" required - tutorials: https://www.youtube.com/watch?v=3BwAM_V2Jhg https://www.youtube.com/watch?v=QuFWAq5rssM Not sure how this will look with cameras inside. Looking forward to see someone trying it out. I already got resin and colors for another way of building eyes (like this, I think: https://www.youtube.com/watch?v=zQO7Dkjr22A ), which will be posted in the prototype thread until it's finished.
>>19037 >long-redheaded waifu w/ freckles. A cute! Thanks for the information, Anon. >Not sure how this will look with cameras inside. It is a very tricky topic at this stage of tech advance. There are super-micro cams used in intelligence, but they are way outside Garage Anon's price range for now. Many projects currently seem to be placing a 3D-cam near the forehead?
>>19049 Cheap current cameras for Raspberry Pi and such are small enough for a human-sized doll, or something smaller but with big eyes. About 2.5 cm square because of the board around it; the camera itself is even smaller. Maybe not small enough for a playdoll like in the picture. But my point was rather that I wonder if it's going to work with the "follow me" eyes, and how well. Anyways, I like the idea of using these "glass cabochons". Might also be good to use them as the top for resin eyes to make the camera work well. >Many projects currently seem to be placing a 3D-cam near the forehead? Yeah, sure, but preferably not.
>>19050 >Cheap current cameras for Raspberry Pi and such are small enough for a human-sized doll or something smaller but with big eyes. Yes I think so. I have one and it would work for Alita-styled 150+ robowaifus IMO. I'm guessing that the follow-me's might not work so well with an actual camera in back of the 'lens'. And actually, that brings another issue to bear as well: camera focus. These are vision-related topics (>>97), but have some bearing on faces as well obvs.
>>19051 I crosslinked it. Eyes are for vision but also part of the face. I mentioned resin eyes in the vision thread here >>10995
>>19061 Nice! Thanks Anon. Yes, robowaifu eyes & vision are such a specialized area they deserve their own thread tbh. Obviously, the eyelids, brows, nose, & upper cheeks are also involved in vision-specific character animation, but are much more obviously part of facial.
Open file (37.23 KB 373x560 Hatsuki4.jpg)
Open file (19.54 KB 560x373 Hatsuki1.jpg)
Open file (26.50 KB 560x373 Hatsuki2.jpg)
Open file (23.06 KB 560x373 Hatsuki3.jpg)
An interesting take on a robot face that I haven't seen built before is the type of "screen" face used on Hatsuki from the Cutieroid project: It seems to be either a contoured screen or (more likely) a hollow plastic face plate that a projector displays on. I find it quite fascinating! The only obvious downside is that from afar, it's hard to see the details (perhaps that's just on camera, though). Here's a website covering the project a bit: https://gigazine.net/gsc_news/en/20200209-cutieroid-project-hatsuki-wf2020w/
>>19380 Thanks, I think this has been shown here from time to time, but not in the form of pictures to the extent shown here. It's interesting, but no one seems to want to copy it. Pro: freedom of animation. Con: probably everything else. Worst case: the concept is patented.
> (crosslink related : >>21100)
Two videos about face development: >Appendix: 12. Robotic Mouth System Demonstration Video https://www.youtube.com/watch?v=iwrRm9Xywas This could maybe be useful if the part was made out of TPU or silicone rubber. Then a way of printing the whole head at once. I don't really like the looks of it and don't see the need, but it might help as an inspiration. >Android Printing: Towards On-Demand Android Development Employing Multi-Material 3-D Printer https://www.youtube.com/watch?v=e-iQYkgQHPc
I hate this darn patent sh*t: https://patents.google.com/patent/KR101247237B1/en >The present invention relates to a face robot having a removable inner skin structure that is easy to maintain, and by allowing a magnet to be detachably attached to the inner skin and the outer skin forming a face skeleton of a humanoid robot or an animal robot by a magnet, · To prevent damage to the appearance of the face during disassembly and reassembly of the skin, to facilitate maintenance work such as replacement or repair of the internal parts, and to ensure the correct assembly of the inner skin and the skin without distortion....
Open file (59.88 KB 800x500 Imagepipe_5.jpg)
David Browne (Hannah dev) shows the neck but also the skull design: https://youtu.be/nJHHHZrYEzs And proper teeth, similar to our conversations a while ago. They're available on AliExpress. https://youtu.be/b8DuJjHN0RA
>>22709 Neat! Thanks Noidodev.
high contrast screenface eyes + physical mouth
>>24450 There certainly are a fair number of compelling arguments for screen faces, Anon. This 'half-and-half' approach may turn out to be an even better compromise in the end. OTOH, it's really hard to beat a beautiful physical feminine face IMO. I'd say that SophieDev showed a good example with dear Elfdroid Sophie, even though she's still in an as-yet intermediate form. But yeah, good thinking Anon. :^)
Open file (412.05 KB 406x607 ExampleYuzuki.png)
My Robot Doll Aotume Doll has the best faces of any spicy doll in my opinion. Does anyone know if there are scans or models for these faces? I want to use them. They have tremendous potential as demonstrated in this video https://www.youtube.com/watch?v=EbfyJfnaXv4
As much as I dislike Disney, their research around facial animation is very important: https://www.youtu.be/-qM_XUv-JhA
Open file (88.80 KB 500x750 frank_n_ollie.jpg)
>>25983 I don't know of a source Kiwi, but it doesn't look like a particularly complicated facial shell design -- rather simple in fact (not uncommon with female characters, actually). >tl;dr It shouldn't be too difficult for us to follow the basic premise here. In fact, I'll attempt to follow it somewhat when I model Sumomo's likeness, and we can all see what we think of it as a facial baseline. Sound good? >that yuzuki kokubunji tho Neat! I told you the Japanons would be first! :^) >>26624 Yep you're right NoidoDev, and they weren't always so evil of course. It's only following the takeover of the company by the GH, in fact. They used to have some amazing talent who weren't bent on the destruction of my people (indeed of men in general). > pic related [1][2][3] And yes, few companies around the world spend as much effort on facial animation research. And since they have a large Dark Rides industry with their theme parks, much of that research translates directly over to robotics. May the usurpers in control of the D*sney brand today be uprooted soon, and replaced with men that Walt Disney would approve of instead!! :DDD 1. https://www.frankandollie.com/ 2. https://www.youtube.com/watch?v=-hpoYWp9uRo 3. https://en.wikipedia.org/wiki/Frank_and_Ollie >=== -prose edit -expand hotlinks
Edited last time by Chobitsu on 11/30/2023 (Thu) 02:39:50.
Some explainer videos on facial animation: https://youtu.be/drp-f-REyjY https://youtu.be/vocoKKmszUc
>>27121 Neat! Thanks NoidoDev, cheers. :^)
Open file (281.32 KB 808x1200 ATD-H67-135ASLIM-TPE_a1.jpg)
Open file (131.73 KB 750x1109 head.jpg)
Open file (85.47 KB 960x587 aotume_eyes.jpg)
Open file (858.37 KB 750x4218 013283b421881fad.jpg)
>>25983 There are images of the front and side of one of them that could be used to recreate them. The head isn't going to look great on its own though. The most aesthetic parts of the face are the eyes and makeup with the subsurface scattering of the pigment in the silicone/TPE. Their eyelashes appear to be a separate piece bonded on too. The lines and other details are most likely stenciled on so they can manufacture a design quickly and consistently. Those smooth sharp lines would be hard to achieve painting by hand. It gives their faces a clean digital look combined with the eyes. At least the eyes are easy and can be made by applying a transparent sealant like Mod Podge to 50mm glass cabochons and attaching a printout. I don't have a 3D printer yet but I can try making a model, stencil and eye printout for you. I just need a reference of one you particularly like.
>>28107 Neat! This is a very nice baseline crossover beginning between dolls & robowaifus, Anon. Thanks for the information. >I don't have a 3D printer yet but I can try making a model, stencil and eye printout for you. I just need a reference of one you particularly like. Looking forward to seeing what you all come up with! Cheers. :^)
Open file (228.70 KB 855x1147 ClipboardImage.png)
Open file (225.19 KB 804x946 ClipboardImage.png)
>>28152 Trying to build a high-DoF face.
Open file (80.45 KB 2048x1251 canvas (3).png)
Here is my idea for the mouth
For the flaps we could use 4 of these
>>28152 >>28153 Yeah, it's a personal interest for me too Anon. I'm very concerned about mass, so I'm pursuing non-'traditional' manufacturing approaches & designs. Good luck with your work! >>28158 >>28159 Looking forward to your facial design progress, Anon!
>>28198 thank you.
>>28205 Y/w Anon. Just keep focused on what you can achieve, and guide that energy towards your goals! :^)
>>28198
>I'm very concerned about mass
Same, but in this case I think the concern is minimal. Take the neck for example; I took it from Eva. It's already extremely strong: using six 9g MG90S servos, which are only 1.8 kgf·cm @ 5V each, it moves around 19 MG90S + silicone + the plastic framework + other parts. On top of this, I found these metal-gear DS power 21g servos; they do about 4.2 kgf·cm @ 6V and are barely larger than an SG90/MG90S (it's like a mm of difference). Way stronger, much quieter, a tiny bit faster and far smoother running than real TowerPro MG90S. The 21g are way too expensive however at 15 USD a servo... but they are worth it on sale, I've seen 'em as low as 4 USD. It's not so bad for the DoFs that need strong forces.
I originally started converting the Eva head from Columbia University to use all 21g's in the same locations, keeping the 14-line pulling facial mechanism, basically to make it silent, stronger, and faster. But after some rudimentary testing I decided against it, as the string system limits constructability far too much. Instead I'm modifying Eva to use slotted or tracked pads connected to servo linkages. The pad contains a pattern of magnets to interface with internal skin magnets which are littered around the movement areas (cheeks, lips, eyes, eyebrows). Changing the pad's shape and which internal magnets it interfaces with allows for facial expression refinement and/or separate silicone faces using the same internal head. The constrained magnetic pad system is how the best-quality, most life-like animatronics in film are done, in my opinion. Since the skin isn't attached directly to anything but held by magnetic fields, the skin has a real physical compliance you can't get anywhere else.
>so I'm pursuing non-'traditional' manufacturing approaches & designs.
I am excited to see how it goes. The more experimentation the better imo.
>Good luck with your work!
You too
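Quick back-of-the-envelope for anyone sizing servos for this kind of head. Only the 1.8 and 4.2 kgf·cm ratings come from above; the 1.5 cm horn length and the 0.6 kg load are placeholder numbers you'd replace with your own measurements:

GRAVITY = 9.81  # m/s^2

def force_newtons(torque_kgf_cm, arm_cm):
    # Force available at the end of a servo horn, ignoring friction and linkage losses.
    return (torque_kgf_cm / arm_cm) * GRAVITY

for name, torque in [("MG90S @ 5V", 1.8), ("21g DS @ 6V", 4.2)]:
    f = force_newtons(torque, arm_cm=1.5)
    kgf = f / GRAVITY
    print(f"{name}: ~{f:.1f} N at a 1.5 cm horn ({kgf:.2f} kgf), "
          f"{kgf / 0.6:.1f}x margin over a 0.6 kg load")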
Open file (55.19 KB 272x274 x_alt20495_h.jpg)
>>28293 I only like the top-left one but I don't think it'd look great in a robot. When designing my robowaifu's model I zoomed into her face 1:1 on my screen and looked up close to get a sense for how she will feel in real life and it completely changed how I designed her. Things that seem cute on the screen or at a small size, like dolls, can feel really uncanny up close at a more lifelike scale. Some anime figures would look cute scaled up, like pic related, but I'm not really a fan of them either because it's not just about how they look but the whole experience of being with them for me. I really want to nuzzle with my robowaifu so I gave her a nose. It might be silly but I put a lot of thought into what daily life will be like together. I want her face to be lovely in all situations, from across the room, up close, kissing, looking down at her in my arms, waking up next to her, sitting side-by-side resting our heads together and more. I'm considering resculpting her face in clay because working in 3D leaves too much wanting. I have no idea how her face will feel once I hold it in my hands without burning a ton of money on prototypes I can't even adjust. Clay would be much easier to work with. When I'm finished I want to be able to squeeze her silicone cheeks in my hands and feel that they not only look good but feel good too and fit in my hands perfectly.
Open file (205.95 KB 747x718 Heads.png)
Doing dev work on a basic head. My head is the bottom one next to the human reference. Any advice or suggestions? Which head do you prefer?
Yours is quite fine but hard to compare since it's less finished. The middle-top one has eyes that are too small and a forehead that's too big. The nose of the one on the right isn't good. The one with hair and the male one are only confusing. Overall, I like the middle-top one the most, but I don't really like it either. I think that's because the mouth and nose look better and the ears aren't too big. I made a similar comparison here: >>18868
>>28356 Middle top looks like a baby head, not really a good thing imo
Open file (151.10 KB 256x256 ClipboardImage.png)
>>28354 The one to the right is 90% of anime girls
Open file (91.82 KB 256x256 ClipboardImage.png)
>>28357 Kinda true, that's one reason why I suggested improvements. The better nose compared to the others is relevant to me, but the eyes are too human-like and the forehead too big. Just look at some of the 3D pictures I post. I'm not gonna go with an "anime nose". That said, I don't think it matters necessarily for the first iteration. >>28358 Yes, but they're 2D.
Open file (382.66 KB 1117x558 NuHead.png)
>>28348 You get it. Her head is dearly important for many interactions. We need something we can look at as we fall asleep. Here is my new design, going for a mix of real and Aigis. I gave it hair to help.
>>28364 Nice modeling. The only thing that jumps out at me is the bridge of the nose: either too narrow, or the edges (sides) are a bit too sharp for my sensibilities :)
Open file (4.36 MB 4624x3468 closenough.jpg)
At this point, I think it's mostly about getting the eyes right. Thanks for the feedback, it was helpful.
>>28354 >>28364 Sorry I haven't responded earlier Kiwi, been a bit distracted. (From the first post:) I think the two most-appealing forms are the one on the far right, and the one on the bottom left. The Aigis one is fine and so is the girl one. Obviously, the male(-looking, at least) one is out. >>28397 Is that a single print? I'm presuming those are cutout eyes, but it makes me think of screen eyes like Mechnomancer's. Do you think you'll go that route? Cheers! :^) >=== -minor edit
Edited last time by Chobitsu on 01/13/2024 (Sat) 12:44:20.
>>28252 >i am excited to see how it goes. the more experimentation the better imo. This. Me too, Anon! Cheers. :^)
Open file (292.48 KB 960x1138 faceref0.jpg)
>>28397 Generally it's best to leave about one eye width of space between the eyes, going by the width of the eyelashes. Also keep in mind that once you do the faceup, the sides of the eyelashes will extend beyond the edge of the eye socket. I'm looking forward to seeing your progress!
Open file (5.06 MB 4624x3468 jiiii.jpg)
>>28398 Thanks for the feedback! Yes, it is all one print. Those are indeed eyes cut out from photo paper, using standoffs for the follow-me effect. So far it actually works, it just feels like she's constantly glaring. Her eyes need to be bigger and the spacing needs adjustment for her eyes to follow rather than stare. I'm honestly excited by how well it works. See picrel, she haunts me wherever I go in my lab. >>28407 This is true. Her eyes being spheres is also a problem. We have almond eyes; the spherical look is cute in anime but doesn't work IRL. Thanks for reminding me how important eyebrows and lashes are.
>>28411 >This is true. Her eyes being spheres is also a problem. We have almond eyes, the spherical look is cute in anime but, doesn't work IRL. Why?!
>>28411 Maybe you could try making the irises bigger and a slightly darker blue.
Open file (6.18 MB 4624x3468 test.jpg)
>>28416 They're more human-like when they're more elliptical. It didn't capture well, but this face is almost cute at the right angle. >>28441 Will try that tomorrow. The biggest update is that her head is near the average human female in terms of dimensions. This means she can wear hats and wigs normally. Men's hats are still a tad big on her.
>>28455 Nice. Do you think you'll do any articulation on her head, Kiwi? Her mouth, for instance. Regardless, really good to see your progress! Cheers. :^)
>>28455 Why does this look so familiar? I swear there's a statue that looks exactly like that, maybe an Ishtar or a moai.
> (post-related : >>28746)
>>24464 I've had the idea of screenfaces like the one in >>24450 for a hot minute now, and I've had one fanciful thought about it, specifically the possibility of having said screen be touchscreen. This would allow for settings/configs to be managed directly on the girl herself rather than having to boot something up on your computer (as well as possibly other things, who knows). Regarding this, thoughever, I have no clue if it's economically/technologically feasible to have such a large curved touchscreen.
> (conversation-related : >>29243, ...)
Moved to faces because reasons >>29394 >>29414 >>29422 To an extent they already make heads this way. They mold hollow rubber heads to go on fiberglass skulls. Unfortunately for us, the skull face is usually just a hollow void to allow the head to be used for, uh, other activities, so we would need to build a structure to support facial articulation and expression. See picrel. Even so we wouldn't be the first to do some modification for electronics- see picrels. And skull design seems to vary between manufacturers. We may want to do an open source animatronic skull for dolls project that anons could print and then fit their chosen doll's face onto by carefully removing material from the inside until it fits. Some adjustability in the skull would also help. So the doll makers are already familiar with the concept of hollow rubber parts, just not beyond making heads yet. btw these pics are from this thread on the doll forum inventor's page- NSFW: https://dollforum.com/forum/viewtopic.php?t=91728 I almost renamed the first pic "flaccid face", but considering the subject of the thread it would have been too easy.
>>29423 Thanks for the great information, Robophiliac. Cheers. :^)
If you are into a more robotic look you could get full-face sunglasses; they are a couple of dollars on places like Temu and AliExpress. If you want expressiveness you can put LED grids behind the visor to have eyes and a mouth shining through. This setup would also allow you to use any number of cameras or LiDAR hidden behind the visor.
Because I'm lazy, I figured a screen-type face would be a better option if you simply wanted to get a product out the door. Modern OLEDs allow for curved screens, so you could affix one to a skull shape without it turning into a "TV head". With eye-tracking tech, you could show where the bot is focusing to let it look at things. As with all screen-type faces, you don't have to worry about mechanical eyes and mouths, so there are fewer moving parts to break or make noise. >But wait, there's more! When it comes to customization you could have all sorts of options: eye color, mouth shape, nose shape; all of those options are scaleable and interchangeable. If you 3D-print something and realize it's the wrong size, you have to reprint it or shave it down or something. If everything is based on .pngs and assorted digital assets, it becomes much easier to fix designs on the fly with a screen face. My main issue is the combination of digital eyes with a face plate. As much as I love those maid bots with the illuminated eyes, they're kinda odd when I look at their faces. I'd rather they have a fully digital face. See pic related. I won't say it's uncanny valley territory, but it's off-putting for me at least.
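If anyone wants to prototype the screen-face idea before committing to hardware, a bare-bones sketch with pygame is below; the eyes chase the mouse cursor here, and you'd swap in face-tracking coordinates on the real bot. Resolution, eye layout and colors are all placeholders of mine:

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 480))
clock = pygame.time.Clock()
EYES = [(250, 240), (550, 240)]       # eye centers in pixels
EYE_R, PUPIL_R, TRAVEL = 90, 35, 40   # radii and max pupil offset

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    target = pygame.Vector2(pygame.mouse.get_pos())
    screen.fill((20, 20, 30))
    for cx, cy in EYES:
        center = pygame.Vector2(cx, cy)
        offset = target - center
        if offset.length() > TRAVEL:
            offset.scale_to_length(TRAVEL)   # clamp so the pupil stays inside the eye
        pupil = center + offset
        pygame.draw.circle(screen, (240, 240, 240), (cx, cy), EYE_R)
        pygame.draw.circle(screen, (60, 120, 200), (int(pupil.x), int(pupil.y)), PUPIL_R)
    pygame.display.flip()
    clock.tick(60)

pygame.quit()

Swapping out the eye .pngs or colors at runtime is exactly the customization advantage described above.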
>>30632 Do you know about Spud? >>30227? The little mess around one eye comes from damage that happened to the screen.
>>30632 Like NoidoDev mentioned, it has been done. I will add that circular screens are on the market now. I had also suggested using e-ink displays to prevent the glowing-eye effect if using other materials for the face. Not sure how it will look till someone does it.
>>30645 The refresh rate for a hobbyist e-ink screen is too slow for any realtime video, and it takes like 30 seconds of the screen flashing to get a pic.
>>30653 Ah, that's too bad. Not all e-ink does the flashing thing all the time though; that's just to maintain the image quality and prevent burn-in. I've seen full animations done on e-ink. I just don't know where to find the screens by themselves that are of this higher quality, able to display video.
>>30640 Not explicitly, I've only skimmed a few threads as of late. This is the only place on the net to dig deep when it comes to robot waifu development. Looks pretty rad. >>30645 >Like NoidoDev mentioned it has been done. Neat. I figured someone might have. If anyone had, it'd be /robowaifu/ no doubt.
>>30655 I don't think they're quite available for the DIY market yet.
>>30693 There are some, it seems. Though still slow in frame rate, they do update fast enough to display animation. Apparently you have to look up "fast update" and "partial update" to find them. There is a difference between the two, but I'm reading one claim that fast update isn't faster than partial update when it comes to small screens, though I am unsure how true this is. You just have to be sure it has the ability to run in a mode where it doesn't do an inversion image. You should check if there is some alternative way of running the e-ink displays you have or not. Not full color, but multi-color ones are available; three or four colors would be enough for eyes, especially if dithering is used to produce shades. Just monochrome can technically work too, though all-black eyes give a sort of dead-eyed look lol https://www.pervasivedisplays.com/products/?_sft_etc_itc=pu https://github.com/PervasiveDisplays/PDLS_EXT3_Basic_Fast Would have to keep digging in order to find more. There is a video here I also came across. Skip to 4:13 to see the partial update. https://www.youtube.com/watch?v=cUylSiuoLHc&t=64s
Some animatronics channel by Gary Willett. I'm posting it here since it's mostly about (monkey) head design. >My Art of 3D printed Animatronic Designs, I document everything, processes and techniques, Instructional How to do it. https://www.youtube.com/@garywillett4146
Hannah-dev Anon has a quite interesting recent facial articulation system. https://www.youtube.com/watch?v=UoSK_w19kUI
Some videos I had on my list, some have already been posted in this or other threads:
Hannah mouth test (David Browne): https://youtu.be/2li8Sq9Un38
Will Cogley's robotic head: https://youtu.be/eUMVtoO_fS8
Hannah's 7-motor jaw: https://youtu.be/UoSK_w19kUI
silicone doll: https://youtu.be/WxQgEL4LlN0
3D printed teeth (Gary Willett): https://youtu.be/VUxryzJJevo
mouth movement while speaking: https://youtu.be/4pK5x_vOXTk
animatronic eye mechanism without fasteners (Will Cogley): https://youtu.be/uzPisRAmo2s
facial skin animatronics (Gary Willett): https://youtu.be/WtIdlhdrI2c
animatronic head (Gary Willett): https://youtu.be/vt2uMbHBNw4
Lower lip linkage (Hannah / Browne): https://youtu.be/vocoKKmszUc
some human-like head: https://youtu.be/uqxhR49N3ws
Hannah smile (David Browne): https://youtu.be/Esw5gjrFL-w
>>34192 Great information NoidoDev, thanks! Cheers. :^)
Open file (218.52 KB 1679x1080 AutoDrive.jpg)
Open file (214.09 KB 1679x1080 RoboWaifuFace.jpg)
Open file (220.27 KB 1679x1080 Loading.jpg)
Open file (269.34 KB 1679x1080 RoboWaifuGamer.jpg)
Hey everyone. Have you guys seen Milky Highway on YouTube? There is a really good candidate for a robowaifu I would like to discuss. Makina is a robot that has a screen for her face, and she shows the cutest things: a loading icon when she's calculating driving distances, an autopilot mode when she's doing something but not busy, and then she has full-blown facial expressions. Also she's a gamer robot girl, and that's good. What I'm getting at is that once the Teslabot is released, I think it would be much easier to just use an LLM like Llama 4 (once it comes out), or maybe even whatever version of Grok the Teslabot is likely to use, and have the robot show dynamic things on its screen. This is just my idea for what I am likely to do for a robowaifu's face.
>>34231 Hello Anon, welcome! Please have a good look around the board (via the catalog) while you're here. >Robowaifu screen face Yep it's a great idea, Anon! You might specifically check out Mechnomancer's adventures with them during his continuing development of his dear SPUD robowaifu : ( >>26356, >>28131, >>29420, et al ). Also, there are some similar ideas floated in our Visual Waifu thread : ( >>240 ). Good luck with your work, Anon! Cheers. :^)
Open file (2.49 MB 480x854 1729008912676224.webm)
>>34311 Intredasting, thanks Anon! Still uncanny af, but it's pretty good progress if that's the route the devs want to go with their waifu headpieces. Also, it's clearly a phenomenon that otherwise seemingly normal adults will revert to childish antics when given control of a waifu to test -- even destructively so. We should expect this type of juvenile behavior and plan for it accordingly when displaying our prototypes to the public at conventions, etc., in the future.
A fellow is working on a pretty nice silicone face setup. Got some Will Smith "I Robot" vibes lol. https://www.youtube.com/watch?v=yWrldOS6xBw
>>34311 That's amazing. I think we could do something like this, or get that level of feel, with some sort of AI. You can do face recognition with an ESP32 microcontroller now, so I'm thinking the compute needed for this is not that high.
