/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.


SPUD (Specially Programmed UwU Droid) Mechnomancer 11/10/2023 (Fri) 23:18:11 No.26306
Henlo anons, stumbled here via a youtube rabbit hole & thought I'd share my little side project. Started out as just an elaborate way to do some mech R&D (making a system to generate animation files in Blender on Windows and export/transfer them to a Raspberry Pi system) and found tinkering with the various python libraries a kinda neat way to pass the time when weather doesn't permit my outside mech work. Plus I'd end up with a booth babe that I don't have to pay or worry about running off with a convention attendee. Currently running voice commands via google speech and chatgpt integration, but I'm looking into offline/local stuff like openchat. WEF and such are so desperate to make a totalitarian cyberpunk dystopia I might as well make the fun bits to go along with it. And yes. Chicks do dig giant robots.
Okay, that's quite far evolved but doesn't look like something for snuggling. Where do you plan to go with this? Mobility? Silicone Face? Manipulation of Objects? Also, did you have an Atari Jaguar (I did)?
Hello Mechnomancer, welcome! This is a very interesting looking project. >Plus I'd end up with a booth babe that I don't have to pay or worry about running off with a convention attendee. Yes, as Masiro Project and others demonstrate, anything remotely robowaifu-related is pretty popular at conventions. >WEF and such are so desperate to make a totalitarian cyberpunk dystopia I might as well make the fun bits to go along with it. >And yes. Chicks do dig giant robots. LOL. Indeed, the Globohomo (in all its rapacious, octopus-like branches & divisions) is well along in their evil plots to destroy the civilizations of the Earth. Hopefully we here and others like us will present some serious obstacles in their path by helping with the big task of deconstructing feminism in our own important, practical way. And personally, I'm less concerned about the chicks digging robohusbandos, and more concerned about them as a species returning to the sanity of their God-ordained roles within a man's family. Robowaifus can actually help with that process. Looking forward very much to seeing what progress you make with your project, Anon! Cheers. :^) >=== -fmt, prose edit
Edited last time by Chobitsu on 11/11/2023 (Sat) 17:33:59.
Open file (78.41 KB 667x1000 1.jpg)
>>26307 Well, nobody is particularly cuddly without their skin on. I'm working on establishing a nice solid frame first. I'll be modifying the face a bit to cover it with some stretchy fabric. If I can't get it walking like a WowWee Femisapien (I got one off ebay and it is cute to see wandering around on its own https://www.youtube.com/watch?v=rNezxoSEEB0 ) I'll be giving it a mobile base like the attached mecha musume image (minus the mech arms), and some fabric/foam covering. End goal is something like a VTuber but going the other way... might be a funny co-host for a gaming channel... robowaifu onlyfans... idk. Ask it to hold small items, clock, alarm, weather, conversation, etc. Will probably release source code and maybe sell the .stl files/assembly instructions for a few bucks (like the cost of a diy-project book, so I can recoup development costs). If y'all wanna mod them to do off-brand things it's no skin off my nose. >>26317 >And personally, I'm less concerned about the chicks digging robohusbandos I was making a Megas XLR reference :) Even in her unfinished state, lots of folks liked SPUD; I go to a few local cons, and a local fair allows me to demonstrate my mechs/robots there in perpetuity for free, since there were folks coming to the fair specifically to see my stuff. Of course SPUD was wearing her hoodie, so she looked a bit less like a necron gf XD Hope to get her mobile by con season next year, but I also have the mech to get walking on all six of her stubby legs. There's already articles about middle-aged childless women regretting their life choices. And unless they suddenly start FDA approval for growing babies in ziploc baggies like that sheep, feels like the great big ol' cargoship of society is startin' to turn (I hope for the best and plan for the worst, hence my 600lb dreadnaught-type mech). Competition breeds (pun intended) innovation, so guys leaving the dating pool for a machine (no offense meant) would prolly catalyze the turnabout.
Building SPUD, I better understand and appreciate all the little things the human body does that we take for granted, like compensating for the shifts in center of gravity from something as simple as raising an arm. So many little things one learns when building stuff. >=== -patch hotlink
Edited last time by Chobitsu on 11/13/2023 (Mon) 03:16:44.
>>26341 >since there were folks coming to the fair specifically to see my stuff. Excellent. That's a great sign Anon. I don't think most anons realize what a yuge sea-change is coming related to robowaifus. The demand is going to be outrageous. Every anon honestly struggling and striving right now to effectively solve every.little.thing. will eventually have an outstanding opportunity to be a first-mover (& shaker) in this new industry I predict will literally become larger than the global automotive & transportation industries before all is said and done. Just let that idea sink in for a minute! :DD >Building Spud, I better understand and appreciate all the little things the human body does that we take for granted, like compensating for the shifts in center of gravity from something as simple as raising an arm. So many little things one learns when building stuff. God's designs of higher life are nothing short of truly mind-boggling, if you simply open your eyes and observe. So, in the film industry, there's a sub-branch of EFX called animatronics (typically about the same as the concomitant field in the Dark Rides industry). A fairly commonplace tool for that is a so-called Monkey, which is simply a manually-adjusted motion control device fashioned into a crude approximation of the body form being controlled (much like a maquette, in the case of a humanoid). Not much by way of preprogramming numbers, code, &tc., is needed for the technician-artists driving the monkeys. They simply act out the performance they want, pose-to-pose, and the full-sized rig (whether practical or virtual) follows suit. Kind of like an indirect MoCap scenario, so to speak. All it takes is patience and good instincts; writing code &tc., isn't needed at all at this stage of the performance pipeline. Just be a naturally good actor with a great sense of balance & pose.
I've been working towards devising an inexpensive, bipedal-capable form for /robowaifu/, et al, and I've already started gravitating towards a monkey-like control system to program in the base movements such as walking, running, jumping, climbing, backflips, &tc. Once the animation data's recorded, then of course it can be further refined & tweaked afterwards. Eventually, we'll have a collection of software algorithms for 'bio'-kinematics control models, sufficiently accurate for retargeting; with which we can draw from ginormous amounts of professionally-mocap'd performance data to run upon our robowaifu's shells thereafter. >tl;dr It's gonna be ebin in the end!! Cheers. :^) >=== -prose edit
Edited last time by Chobitsu on 11/12/2023 (Sun) 00:27:36.
Open file (7.07 MB 246x315 powerarmorgif2.gif)
>>26345 I mean, not only do I have the WIP robowaifu but I also have a 600lb, 8ft pilotable mech and powerarmor (attached) XD What remains of Megabots was at one point stalking me, quite butthurt and jelly at my work.
>>26351 Man, I have to say that you have an amazing project going on. The robowaifu ain't bad either. You talk about selling kits; you know how many 40k enthusiasts would buy a walking space marine suit? That's cool as fuck, great job, I'd totally have sex with that mech.
Open file (32.54 KB 600x900 Gourai.png)
>>26341 >Gourai Based Please provide details on SPUD. Height, weight, speed, power consumption, capabilities, processor, motor controller, sensors, actuators, what printer are you using, etc... Information in a succinct post to make it easy for people to understand her quickly.
>>26352 Same energy as the attached lol. >>26355 SPUD specifications as of last public demo (paper ears/Atari hat, wearing hoodie): currently 5ft, about 30lbs, stationary. Standard-size servo package (similar to MG996R) with varying weight limits (9/25/35/75kg), controlled with a PCA9685 servo board connected via I2C to a Raspberry Pi 4 3GB. Power switch to swap between AC and the onboard 40V 2.5Ah lipo battery. Can create custom animations (arm movement) via "blend" file. Smol LCD to display eye icons and access the Raspbian OS; scripted in Python.
Voice commands include:
Activate/deactivate chatgpt integration
Weather (need to get a new python library as the old one has recently been deprecated)
Time
Date
Set alarm @ time with a message
Adjust listen time between 3 and 60 seconds
Debug show eye expressions
TTS for chatgpt (and chatgpt can control eye expressions)
Activate wave animation
Current WIP (necron-looking):
Separate lcd screens via SPI to free up the HDMI port so the OS can be accessed while SPUD is running
Physical blinking eyelids
Solid wiggly ears :3
More robust neck mechanism w/ ambient movement & face detection (webcam) where the eyes follow whatever is detected
Better cable management
Future plans:
Integration of local-ish LLM (separate tower PC, communication via wifi)
Localized speech-to-text (anyone wanna make the pocketsphinx library not sound like a hard Boston accent?)
Replacement of linear actuators with ASMC-04B
Walking legs or seated on a mobile base (or maybe both)
Dedicated GUI
More mouth movement via SG-90s
Cloth face cover (eyelids too?)
Robot-like cloth/foam covering/morphsuit
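Since the PCA9685 came up: a handy bit of arithmetic when driving hobby servos through it is converting a target angle into the chip's 12-bit tick count. This is only a sketch of the standard mapping; the 50 Hz frequency and 500-2500 µs pulse range are assumptions (check your servo's datasheet), and libraries like Adafruit's ServoKit wrap all of this for you.

```python
def angle_to_pca9685_counts(angle_deg, freq_hz=50, min_us=500, max_us=2500):
    """Map a servo angle (0-180 deg) to the PCA9685's 12-bit 'off' count.

    The PCA9685 splits each PWM period into 4096 ticks; a pulse of
    min_us..max_us maps linearly onto 0..180 degrees. The pulse range
    here is an assumption -- tune it per servo.
    """
    period_us = 1_000_000 / freq_hz            # 20,000 us at 50 Hz
    pulse_us = min_us + (max_us - min_us) * angle_deg / 180
    return round(pulse_us / period_us * 4096)  # ticks the output stays high
```

Knowing the raw mapping is still useful when setting per-servo soft limits so a 75kg-class servo can't slam a joint past its mechanical stop.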
>>26306 >>26351 Unfathomably based and incredibly awesome. (but God, I wish I had a mech suit too lol) If you have time, could you elaborate on your experiences/goals at fairs and conventions? I really don’t know how to build IRL interest for my own project ( >>24744 ), nor what benefits it could bring, but it could be worthwhile to start planning ahead for them. (then again, I’ve never been to a con’, and the fairs I’ve attended all had livestock stalls and a rodeo arena— needless to say, I’m pretty clueless) >>26351 Off topic, but are there any takeaways re: armor you’d be willing to share (if you considered that with your mech)? I’m hoping to armor my design’s battery carriage enough that it can survive a few shots from a handgun, if only to avoid my multi-year project going up in flames 'cause some inner-city thug was having a bad day…
>>26351 Looks pretty wild Anon. Stay safe in there! :D >>26356 Nice specs on Spud, Mechnomancer. I hope you manage all your WIP & future plans. Looking forward to watching your progress ITT! Cheers. :^)
Open file (1.04 MB 878x662 chonky servoos.png)
Open file (1.17 MB 675x897 mek doggo house.png)
>>26364 About IRL stuff:
1) It always takes longer than you think or plan (setting up a booth, transportation, or building stuff)
2) Most folks understand when something is a work-in-progress or prototype. The hard critic/butthurt naysayer is rare in meatspace
3) Unless you explicitly state you're a one-man band doing it out-of-pocket, folks will think you got university funding or somethin
My fair in bumfuk is mostly agricultural, but some of the vendors that follow the midway company with the fair sell anime merch. Used to be an Oscar the Robot wandering around when I was a kid https://www.oscartherobot.com/ but there doesn't seem to be one anymore, so I might as well be the change I wanna see and make an animu version. Biggest pain with my projects is designing bits to work exactly how I want to; I fly by the seat of my pants, making physical parts as quick as possible so they can be tested IRL. But once past that phase, building what you know is rather easy (mech parts, powerarmor, etc). Armor is the easy bit, so I'm focusing on making it work first, then protection. >>26368 >Stay safe in there! :D I already have a slight powerarmor injury from it closing around my ribs a little too tight; 2 years out there's just the occasional slight twinge/soreness - rib injuries are a bitch to heal. Mech goal is to try to start grassroots giant robot fights (jumbo piloted rockem-sockem)/giant robot first competitions. Like Megabots/Kuratas except, yknow, real and unscripted. I have a venue for the latter, I just need to rustle up competitors. Of course, as the weather turns I'll be focusing less on the mech and more on SPUD. Mech's to-do list is to install new motor pods, shorten leg lift linkages by 2.5" (that's cutting 16 pieces of 1" squarestock and drilling 16 new holes!), install leg control computers and program the walk sequence. Doubt I'll get that done by the time heavy snow sets in (see attached "mek doggo house.png").
Currently waiting on a shipment of flanges to mount the ASMC-04B 180kg servos to the 3d printed bits (see attached "chonky servoos.png") so I can get a start on the legs. Checked out David Browne, which gave me some ideas on how to upgrade SPUD's face even more (been thinking about switching over to cloth eyelids anyway).
>>26380 >rib injuries are a bitch to heal. They sure are. Sorry to hear it bro, hopefully it'll get better! >Of course, as the weather turns I'll be focusing less on the mech and more on SPUD. Great! SPUD is much more pertinent and of interest to /robowaifu/'s goals. >the ASMC-04B 180kg servos Their form-factor reminds me of dear Elfdroid Sophie's shoulder servos. (>>10152, >>7284)
Open file (268.80 KB 816x612 servos smol.jpg)
Open file (774.23 KB 624x816 spud scale.png)
>>26382 I'll keep the mechposting to a minimum :) Originally started robowaifu development with "Carry", the emotive tool holder/workshop waifu... get it? She can carry tools lol. Functions were pretty basic like on-board speech recognition for simple things like raising arms, bowing and looking cute. Attached pic is her among her other mek siblings as of spring 2022. She's currently in a state of disassembly in the mechworkshop. Those elfdroid sophie shoulder servos do indeed look familiar! Was kinda tempted to put them in Spud's shoulders but I'll stick with my small ones for safety concerns. I want to make sure she is bottom-heavy so when she does (hopefully) walk she'll be less likely to tip over. Attached is Spud next to a RSMedia (for scale and also inspiration for the devkit I'll eventually release) and a closeup of the double-jointed elbow (inspired by some art)
Open file (183.04 KB 1280x790 real_steel.jpg)
>>26384 >I'll keep the mechposting to a minimum :) No worries, it's amazing stuff. Good luck with starting a league competition system. Soon life mimics art. > We're simply more about 'make love, not war' here is all. :D >the double-jointed elbow Nice design. I actually immediately thought it was a knee from the thumbnails hehe. I'd like to use a design similar, but with rubber bumpers+some kind of elastomeric suspension at the joints. Keep moving forward this Winter, Mechnomancer. :^)
Open file (1.41 MB 735x955 servoknees.png)
>>26386 >We're simply more about 'make love, not war' here is all. :D To each their own, but I'd rather be a warrior in a garden than a gardener in a war (e.g. hope for the best, plan for the worst). Protecc the waifu with your laifu XD Joint was initially designed as a knee (see attached), but experimenting proved the servo package isn't really powerful enuff to move legs of such size (at least without bulky gears). Working on replacing the 2 eyelid MG995 servos with 4 SG90s (I also need to steal a few MG995 circuit boards for the mech leg motor controllers). This way a cloth eyelid can be stretched over the lcd screen, slide along the surface and have minimal gap between eye & face surface. Plus eyelids could be set to angles to give extra angry/sad expression.
>>26387 >Protecc the waifu It's a fair point : (>>10000). But as I mentioned before in another thread, I don't want /robowaifu/ to devolve into 'Terminatorz-R-Us' . I'm quite well-aware of the military connotations here, but this isn't really the proper forum for such developments. >This way a cloth eyelid can be stretched over the lcd screen, slide along the surface and have minimal gap between eye & face surface. >Plus eyelids could be set to angles to give extra angry/sad expression. Excellent. The eyes are 'the window to the soul', or so the old adage goes. You're already well along the way to highly-flexible eyelids simply by dint of choosing cloth materials. I'd suggest you confer vids-related : (>>1392) Anon; a very clean & simple concept for further warping of the cloth lids during emotive expressions, &tc. >=== -prose edit
Edited last time by Chobitsu on 11/13/2023 (Mon) 18:15:28.
Open file (244.00 KB 337x587 hearts.png)
>>26388 > I'd suggest you confer vids-related (>>1392) for a very clean concept for further warping the cloth lids during emotive expressions, Anon. Heh, I've seen ST-01 and kinda stole the eyebrows. The eyelids in the vid seem to have a bit of a spasm. If unintentional, that's usually caused by the servos either being under strain (approaching stall) or the signal line picking up electrical noise from somewhere (typically a power cable). Not a big deal with little robots, but I'm a little traumatized from that happening with my larger projects, enough to avoid it like the plague. I'll only mention my larger robot projects if relevant to/affecting the development of SPUD, such as the above. SPUD is built to be fren :) Something else I'll be looking into is having 2 different eye icons at once, so SPUD could be cross-eyed or have asymmetrical eye icons - the eye screens are currently wired in parallel. Could be as simple as modifying the library to change the Chip Select pin; have to experiment a bit.
>>26390 how much was the material cost for this roughly?
>>26390 >but I'm a little traumatized from that happening with my larger projects enough to avoid it like the plague. Twisted, shielded pair is the standard signalling go-to in EM-noisy environments. If that's not enough, go with electro-optical transducers + fiber. >Spud is built to be fren :) Even everyday robowaifus will face issues with this. And of course the more complex/evolved they become, the more likely. There is also the potential for unwanted exterior noise that needs blocking. I may say that all our 'breadbox' (a cooled, electronics housing system within the upper thorax) designs are Faraday cages as well. >Could be simple as modifying the library to change the Chip Select pin, have to experiment a bit. My animator's mindset compels me to think 'the sky's the limit' when discussing screen faces/eyes. :^) >=== -add 'Faraday cages' cmnt -minor edit
Edited last time by Chobitsu on 11/13/2023 (Mon) 19:09:45.
>>26391 Roughly $500. Most expensive bits are the 5x 70kg servos and the Pi. I do have a tendency to overengineer/use a large factor of safety (holdovers from experience with my larger projects), so you might be able to get away with cheaper servos. >>26393 >There is also the potential for unwanted, exterior noise. So far SPUD has experienced no motor noise, although I've had to split the loops into separate threads to get smooth motion (thread for face detection, thread for determining movement, thread for implementing movement). I've heard some autistic detail about the python "threading" library, but don't really care so long as it makes it run faster. I'm like a 40k ork in that way. I'll try to get some footage of the face/eye tracking stuff later today, it's fun to see the 'bot's eyes following a face around the screen :D
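For anons curious what that detect/decide/move thread split looks like in practice, here's a minimal sketch of the usual Python pattern: one shared state object guarded by a lock, and daemon threads for each loop. Every name here is made up for illustration; it's not SPUD's actual code. (The GIL means this helps most when the heavy step, like camera capture or cv2 detection, releases the GIL internally.)

```python
import threading
import time

class SharedState:
    """State passed between the loops; always touch it under the lock."""
    def __init__(self):
        self.lock = threading.Lock()
        self.face_xy = None    # last detected face position
        self.target = None     # where the eyes/servos should aim

def detect_loop(state, get_face, stop):
    """Slow loop: poll the detector (stand-in for cv2) and publish results."""
    while not stop.is_set():
        xy = get_face()
        with state.lock:
            state.face_xy = xy
        time.sleep(0.05)

def decide_loop(state, stop):
    """Fast loop: turn the latest detection into a movement target."""
    while not stop.is_set():
        with state.lock:
            if state.face_xy is not None:
                state.target = state.face_xy
        time.sleep(0.02)

def start(state, get_face):
    """Launch both loops as daemon threads; return the stop event."""
    stop = threading.Event()
    for fn, args in ((detect_loop, (state, get_face, stop)),
                     (decide_loop, (state, stop))):
        threading.Thread(target=fn, args=args, daemon=True).start()
    return stop
```

A third "implement movement" loop would consume `state.target` the same way, which is what keeps the servo writes smooth even when detection hitches.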
>>26395 >I'm like a 40k ork in that way. Heh, you have no idea. This type of stuff will prove to be deepest rabbit-hole of them all, before we all push across the finish line together in the end. >t. systems-engineering wannabe. :^) >I'll try to get some footage of the face/eye tracking stuff later today Looking forward to that Anon. Cheers.
Open file (3.52 MB 1801x903 spiny.png)
I forgot that I disassembled SPUD's stand while testing the ASMC-04Bs -give these babies enough power and they can ZOOOM-, so I did a little more prep for the upcoming shipment of servo flanges by doing a lower body/spine transplant. Good-bye fun-noodle spine and heavy mech-grade actuators! The process reminded me of the intro to Battle Angel Alita, schlepping the half-assembled torso around. I'll save you the messy details: results are attached. The rib cage connects in the front, with the tension of the ribcage holding the bolts in place, eliminating the need for nuts (other machine screws you may see self-tap). Plenty of space in the abdomen to stuff a tool battery & charger/AC adapter. I'd love to figure out a system to seamlessly switch between the two, but rebooting SPUD to switch between power modes wouldn't be too bad I guess. I kept the structure of the motors in mind, so they make the hips/upper thighs flare out from the pelvis; however, I think the hips are a bit too wide: over double the width of the ribcage! Already got some ideas to reduce the width, like rotating the pelvic ASMC-04Bs vertical, recessing the flange on the pelvic servo linkage, and increasing the length of the keyway notch on the hip motors all the way down the shaft. These things are like $50 each, so I'll practice on the one I accidentally broke rather than risk breaking more of them (legs are gonna be a total of 9 of these beasties; over the past 6 months I got 7). I just need to go over the wiring with a fine-tooth comb to ensure nothing went hinky before turning SPUD on again. My mech once caught on fire (don't worry, it was small enough to blow out) and that is an experience I want to avoid repeating.
Open file (48.66 KB 321x826 StandardFranxx.png)
>>26345 >Monkey Surprised this hasn't come up before, please make a monkey/mocap thread! >>26356 Thanks for the specs! >>26384 >Genista Nice! Personally want to clang a standard Franxx. >>26393 >Twisted pair Beat me to it. I'll add that you can use Cat5 cable for data lines. Wrapping wires in copper/aluminum tape is a cheap alternative.
>>26401 >>>Monkey >Surprised this hasn't come up before, please make a monkey/mocap thread! It did come up, or at least something similar, but just briefly. James Bruton made a video about it, or even two. I can't find the posting here on the board, since I can't compile Waifusearch. Anyways, I even bought the electronics for doing that. Problem is, no one here has an actual robot to control. >DMX controllers (I always forget the name) https://www.youtu.be/6TAfDX1u7w0 https://www.youtu.be/diVXbuRislM
>>26401 >please make a monkey/mocap thread! A Robowaifu Motion Capture/Motion Control thread would indeed be a good idea Kiwi. I'll think about what might make a good thread related to our specific needs. >Cat5 cable Yep, it's not only a widely-used standard interface, it also has 4 separate, balanced data channels per cable. Good thinking. >>26404 >since I can't compile Waifusearch. Huh? I thought you were using it regularly Anon. Is there something I can do to help? >DMX Yes it's a widespread C&C protocol, especially for stage lighting, etc. It's rather heavy for our internal, onboard robowaifu uses IMO, but it's a reasonable consideration (at least for Monkey captures, etc.) >le ebin blackpill Lol. I hope you get encouraged soon NoidoDev. We all count on you here! Cheers. :^) >=== -minor edit
Edited last time by Chobitsu on 11/14/2023 (Tue) 14:20:57.
>>26399 >Good-bye fun-noodle spine and heavy mech-grade actuators! Lol. The old spine does look pretty cool Anon. :^) >with tension of the ribcage holds the bolts in place eliminating the need for nuts I'd consider the ability to maintain good physical integrity despite mechanical orientation or dynamics to be a high priority for our robowaifus, Mechnomancer. Do you not see things that way? > cf. Tensegrity < of course, I personally have this wild idea that our robowaifus will be capable of world-class Parkour exploits, once we solve all the strength+power-to-weight issues effectively enough, so there's that... :DD Maybe I've watched too much animu/scifi? :^) >Already got some ideas to reduce the width Yes. Anything we can all do to reduce the volume necessary for internals, the better things will work out for everyone here & elsewhere. Good luck with your research along this line Anon! >My mech once caught on fire (don't worry it was small enough to blow out) and that is an experience I want to avoid repeating. Safety first! :^) >=== -minor edit
Edited last time by Chobitsu on 11/14/2023 (Tue) 14:47:40.
>>26405 I have it on my Raspi but not my PC (on a external disk which isn't connected to the Raspi right now). Trying to install it on my PC didn't work when I tried some hours ago. I might post something in the related thread: https://alogs.space/robowaifu/res/12.html#8678
Open file (93.05 KB 804x604 rsmedia.jpg)
>>26405 >Robowaifu Motion Capture/Motion Control As I understand it, Kibo-chan uses the Blender Servo Animation library. This allows bone rotation in Blender to be translated into servo id/PWM signals and fed to an attached Arduino, allowing the robot to be animated just like a videogame character. Sauce: https://github.com/timhendriks93/blender-servo-animation-arduino I haven't tried attaching the Arduino; I modified a companion plugin so that instead of exporting a highly formatted mess, it outputs a simple list of servo positions per frame. Unmodified plugin here: https://github.com/timhendriks93/blender-servo-animation#readme I'm aiming for similar flexibility as the RSMedia's development kit (see attached), which allowed custom movement/sounds to be uploaded to a special Robosapien V2.
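Once the plugin's output is flattened to a per-frame list of servo positions, playing it back from Python (rather than the Arduino side) is just stepping through frames at the animation's fps. A toy sketch of that idea; the frame format and the `set_servo` callback here are hypothetical, not the plugin's actual output:

```python
import time

def play_animation(frames, set_servo, fps=24, sleep=time.sleep):
    """Play back a flattened servo animation.

    frames: list of {servo_id: position} dicts, one per Blender frame.
    Sends every servo target for a frame, then waits out the frame
    period, mimicking what the Arduino library does with PWM updates.
    """
    period = 1.0 / fps
    for frame in frames:
        for servo_id, pos in frame.items():
            set_servo(servo_id, pos)   # e.g. write to a PCA9685 channel
        sleep(period)
```

Injecting `sleep` makes the playback testable without real-time waits, and `set_servo` can be anything from a serial write to a servo-board call.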
>>26420 Very nice information Mechnomancer, thanks kindly! I'll plan to add these links into the thread's OP. Cheers.
Open file (1.11 MB 633x848 spoodsit.png)
Got SPUD onto the old display stand, inspected the wiring and got the idling program running. However, OpenCV stopped recognizing the USB webcam, even though it's still recognized by Cheese. So I'm reinstalling OpenCV via pip, and if that doesn't work I'll try apt-get.
Open file (363.49 KB 462x517 PaperOtter.png)
>>26437 Is that upscaled papercraft hair?
Open file (1.25 MB 884x970 motahs.png)
>>26438 >papercraft Yup. Pepakura is a good way to check part proportions for relatively cheap. I also trace parts from paper onto 26 gauge steel and pop-rivet the tabs together for the more complicated armor-y bits of my larger projects. In comparison, SPUD is much easier than those (no need to schlep around 100lb parts lol). Found out the issue with the webcam. After reinstalling OpenCV (a long process) I found the default webcam backend for OpenCV -called "GStreamer"- has issues with its latest version, so I wrangled it into using the older (?) FFmpeg backend; not only does it work, but it runs faster. GStreamer still throws errors, but they don't stop the code from running, so I consider that a win. The narrower hip servo mount finished printing after 13 hours/50 meters of filament, so SPUD's hips should now be a bit less Willendorf. The new top piece makes 16 inches at the widest point on the hips/thighs, as opposed to approximately 22 inches for the older lower one. So, just gotta tweak the code for the new encoding method and I should have the demo of the face/eye tracking hopefully tonight. Also show off the papercraft... uh... chest panel.
>>26437 Sound like your prep work is progressing apace, Anon. BTW, there are dedicated smol cams that literally have Linux + full install of OpenCV directly onboard the camera. Makes this style of integration/maintenance close to a non-issue as long as you leave them dedicated solely to their task at hand (JeVois cameras, >>1110). http://www.jevois.org/
>>26441 >Pepakura is good way to check part proportions for incredibly cheap. FTFY :^) I think it's an amazingly-effective approach to many issues we face here. I've even used paper as a direct part of practical structural supports intended for real, highly-inexpensive (read: Everyman's) robowaifu kits. Great progress with this interim goal of volume consolidation. Of course, closer-quarters imply higher heat concentrations, and therefore may indicate some type of passive/active cooling locally. Cheers Mechnomancer. :^)
>>26444 >$300 camera Unless I experience performance issues, Ima stick with single-board computing/webcam for the robowaifu, except for LLM/speech recognition. Keeps it simpler and costs down. When I factor costs, I also include time. I had to re-write all the ambient movement code because reinstalling OpenCV broke it somewhere (or I'm just really bad at coding), but either way here's a close-up of the face wandering around and the eyes tracking my face every few seconds (using a Haar cascade; still have yet to get the machine learning library running so it can recognize individual faces). Also lowered the eyelids a bit so they obscure the top of the iris. Didn't have time to put on the papercraft vest/chest; might wait until I build the legs (servo flanges should be arriving tonight).
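The glue between a Haar-cascade detection and the eye icons is just normalizing the face rectangle's center from webcam pixel space into however far the pupil can travel on the LCD. A minimal sketch of that mapping; the 640x480 frame size and the ±20 px travel range are assumptions, not SPUD's real numbers:

```python
def face_to_eye_offset(face_box, frame_w=640, frame_h=480, travel=20):
    """Map a detected face rect (x, y, w, h) to pupil offsets in pixels.

    Returns (dx, dy) in -travel..+travel, where (0, 0) means the face
    is dead center in the webcam frame.
    """
    x, y, w, h = face_box
    cx = (x + w / 2) / frame_w * 2 - 1    # -1..1 across the frame width
    cy = (y + h / 2) / frame_h * 2 - 1    # -1..1 down the frame height
    return round(cx * travel), round(cy * travel)
```

With cv2, the face boxes from `CascadeClassifier.detectMultiScale` are already in this (x, y, w, h) shape, so the largest box can be fed straight in each frame.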
>>26463 Notice: You've doxxed yourself in your video, Mechnomancer Our policy up till now has been simply to delete anything that's personally-identifying, if it legitimately can be construed to be an accident. OTOH, perhaps this has been a bit overbearing, so in this case I'll leave it up to you: I recommend we delete your video, and you can reupload an edited/censored version if you wish. >the ambient movement code That looks nice Anon. Nice subtleties. >servo flanges should be arriving tonight Looking forward to the next updates. Cheers.
Open file (3.06 MB 2251x827 hipbiece.png)
>>26466 Oh, I'm active in several online communities, many more contentious than this one :) but if you'd prefer, I can delete it and use my Wyatt Earp action figure for a face recognition demo instead. Besides, if one aims to sell robowaifu kits, digital or otherwise, that's gonna leave digital footprints. Hip-piece servo hubs have been completed, with a knuckle for the secondary leg parallelogram linkage. Just need to 3d print more bits.
>>26468 >but if you'd prefer I can delete it and use my wyatt earp action figure If you would, thanks. I'll rm your vid this time for you. You're on your own hereafter unless you point out a mistake to us. >Besides if one aims to sell robowaifu kits digital or otherwise that's gonna leave digital foot prints That's yet to be determined. In the meantime, every anon here should exercise due diligence. Generally-speaking, Leftists can be quite vicious & demonic in my experience. Typical Filthy Commie stuff. Just use commonsense Anon. Remember, the very mild opposition we are facing today is as nothing compared to what is coming once the Globohomo sees their favorite pet feminism begin crumbling down as a result of widespread adoption of robowaifus by men the world over. >Just need to 3d print more bits. Looking good Mechnomancer! Eagerly waiting on the next updates!
Open file (1.67 MB 914x897 spuddy.png)
Need to re-print some parts as the pvc mounting points need further reinforcement, extend the ankle joint adapters and make the shins 6" longer. Even so, SPUD kinda stands.
>>26497 Excellent! Nice work today Mechnomancer. The hips rework is a nice improvement.
Open file (842.07 KB 603x793 spudpanel.png)
While waiting for more stable/reinforced ankle joints to print I applied some papercraft. Torso seems a bit long and shins too short, but since they're connected with pvc fiddling with the proportions shouldn't be too bad. Once I get ankle joints printed I'll be installing the gyroscope and powering up all the leg servos to test this neat simple bit of code I cooked up for making it balance.
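No idea what that "neat simple bit of code" for balancing looks like, but the textbook starting point with a gyroscope is a complementary filter (fusing the gyro's fast-but-drifting rate with the accelerometer's noisy-but-absolute angle) feeding a clamped proportional correction to the ankle servos. A sketch under those assumptions; the gain, limit and alpha values here are made up:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyro (fast, drifts over time) with accelerometer (noisy, absolute).

    angle_prev and accel_angle in degrees; gyro_rate in deg/s; dt in seconds.
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

def ankle_correction(lean_deg, gain=1.5, limit=10.0):
    """Proportional correction, clamped so a servo never slams its limits."""
    return max(-limit, min(limit, -gain * lean_deg))
```

Run the filter every sensor tick, then add `ankle_correction(angle)` to the ankle servo's neutral angle; that alone keeps a stationary biped from slowly tipping, and a full controller builds on it with integral/derivative terms.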
>>26576 Nice work on the boobs, I love the early years lara croft look.
>>26577 Thanks, final panels will probably be laying cloth onto a form and painting it with acrylic/latex paint to create the shape, then foam backing. The chest is more annoying than you'd think, because I have to reach around 'em to adjust stuff :/ SPUD is much higher poly than Tomb Raider 1 Lara tho lol
>>26576 It's an interesting effect of psychology that even a crude approximation of a robowaifu's shell makes her feel more 'alive'. Nice work Anon! Keep up the good work. :^) >>26578 >laying cloth onto a form and painting it with acrylic/latex paint to create the shape, then foam backing. Sounds like a great idea.
Open file (722.96 KB 796x900 render paper.png)
Putting all the 3d models together into a single file to check proportions and whatnot. Made a new chest panel specifically fitted to the model, and working on a new hip piece. Structure of the shins is actually thin enuff I might just remove the gundam panels and instead give it human-scale shins with big chunky boots to hide the ankle joint/big feet (and probably put batteries in the feet to be more bottom-heavy). Also need to get around to designing/replacing the wriggly ears with those chobit-style ports for the HDMI/usb hub.
>>26597 Excellent! Looking forward to seeing the new additions, Mechnomancer. Cheers. :^)
>>26597 The absolute madman!
Open file (4.79 MB 480x854 face_v1.mp4)
Did the papercraft overhaul (more on that later) and working on implementing a locally hosted object recognition AI model. It will inject what it "sees" into the LLM prompt before sending it off to a personal server. 2 tiers of objects: static objects such as "bookshelf", "entertainment center" or "toyshop", and dynamic objects like "microphone", "spotlight", "cup" and "person", with eye-tracking on the most prominent object or any person. I'd show that off, however deploying it on a Pi is more annoying than on windows, so in the meantime I tested out the cloth face using a manual servo testing board. It's a little rough -eyelids are dark grey and rather recessed so on cam they show up almost black, and the cloth isn't entirely smooth- but it does prove the concept. Gonna do a complete soft-face overhaul so the eyelids will be like the cheeks, nice and stretchy, and not so sunken.
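For the curious, the two-tier prompt injection and "look at the most prominent thing" logic is simple to sketch. This is a hypothetical reconstruction, not SPUD's actual code: the tier lists, wording, and (label, confidence) detection format are all assumptions.

```python
# Hypothetical sketch of folding two-tier object detections into an LLM
# prompt prefix. Tier contents and phrasing are made up for illustration.

STATIC_TIER = {"bookshelf", "entertainment center", "toyshop"}
DYNAMIC_TIER = {"microphone", "spotlight", "cup", "person"}

def build_scene_prefix(detections):
    """detections: list of (label, confidence) pairs from the vision model."""
    static = sorted({l for l, c in detections if l in STATIC_TIER})
    dynamic = sorted({l for l, c in detections if l in DYNAMIC_TIER})
    parts = []
    if static:
        parts.append("You are in a room with: " + ", ".join(static) + ".")
    if dynamic:
        parts.append("You can currently see: " + ", ".join(dynamic) + ".")
    return " ".join(parts)

def pick_gaze_target(detections):
    """Eye-track any person first, else the highest-confidence object."""
    people = [(l, c) for l, c in detections if l == "person"]
    pool = people or detections
    return max(pool, key=lambda lc: lc[1])[0] if pool else None
```

The prefix string would just be prepended to the user's transcribed speech before the request goes to the LLM server.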
>>27098 It looks creepy as fuck.
>>27098 Neat! You're making good progress with the head. We've talked here about impregnating a silicone-like material into a clothlike outer shell. Any chance you'll be experimenting with that approach? I see you've in fact begun mocking up the Chobits earpods. Nice! :^) >Gonna do a complete soft-face overhaul so the eyelids will be like the cheeks, nice and stretchy, and not so sunken. Looking forward to seeing that Mechnomancer. Keep up the great work!
>>27098 >what it "sees" where did you put the cameras, thought the eyes were just screens
Open file (391.00 KB 965x828 aigis_red_tie.png)
Open file (1.28 MB 1200x680 ClipboardImage.png)
>>26576 >Papercraft Based, would recommend laminating it or coating it with a silicone. >Blonde, red tie Just like my wife! >>27098 Please, don't use a cloth face. It works a treat for everything else. A cloth face looks like a Mondasian Cyberman every time no matter what. Face development is truly difficult. You'll get there eventually. Your Chobit ears are cute!
>>27176 It's at an intermediate stage right now, plan to color the cloth with some fleshtone acrylic paint. Still need to migrate the connectors into the chobit ears, but I just recently got a shipment of filament so I should be able to get started making those out of PLA and overhauling the face so it looks less possessed when the eyes are closed. If I do experiment with silicone faces it will be in the springtime, as I have no workspace I could safely ventilate that is also heated. Chicks dig scars but no one can see scars in your lungs so they don't do anyone any good.
>>27098 >>27204 >so it looks less possessed when the eyes are closed. Haha. Not sure if we discussed the mouth articulation issue ITT or not, but the topic came up with the earlier Elfdroid Sofie threads. The >tl;dr is basically that without a significant (read: expensive) design & engineering effort, it's rather unlikely to be a pleasing outcome to go for a single, flexible 'skin'. The choice that SophieDev came down on was a 'Nutcracker'-like mouth. Have you considered that approach yet, Mechnomancer? >chobit ears Deffo looking forward to seeing this approach used! :^)
Open file (196.14 KB 352x303 face strings.png)
>>27220 >mouth articulation be hard yo strings/cables running from around the mouth to various points on the head to move the mouth corners(the green on the pic, shooped in eyes so I stop scaring the 'technician) -even gently poking the fabric at the mounting points make a big difference. Lots of the fabric messiness around the mouth can be fixed by using properly sharp scissors while cutting, and putting a few layers of acrylic around the edges o3o Soon I'll be deploying oobabooga onto the raspi and getting the TTS connected (Espeak sounds robotic but it does real-time even on low-end systems). Might go for a hardware rather than software solution for moving the mouth up/down: an LED equalizer next to the output speaker to pick up the speech with a wire from one of the LEDS heading straight into a GPIO pin, which is only listened to while SPUD is talking.
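The equalizer-LED-into-GPIO trick reduces to a few lines of logic. A minimal sketch, with the pin reader injected as a callable so it runs off-Pi too; the servo pulse values are placeholders, and on a real Pi you'd pass something like `lambda: GPIO.input(PIN)`:

```python
# Sketch of the "only listen to the equalizer line while talking" mouth
# logic. Pulse widths below are assumed values, not SPUD's real calibration.

MOUTH_OPEN_US = 1800    # assumed servo pulse for an open jaw (microseconds)
MOUTH_CLOSED_US = 1200  # assumed servo pulse for a closed jaw

def mouth_pulse(read_pin, talking):
    """read_pin() returns the GPIO level fed by the LED equalizer box.
    The pin is ignored entirely unless SPUD is currently speaking."""
    if talking and read_pin():
        return MOUTH_OPEN_US
    return MOUTH_CLOSED_US
```

In the main loop this would be polled every few tens of milliseconds and the result written to the jaw servo channel.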
>>27223 Yes, that might be a workable idea Anon. Especially if you secured a couple of orbicularis rings around the lips before the zygomaticus retractors. Looking forward to seeing what you manage with this issue.
Open file (2.97 MB 426x240 ruff_face.mp4)
Going thru my vidya archives I just remembered when I built this little fellow. Might try to scale it up and make a papercraft face while waiting for parts to print.
>>27251 That's charming Mechnomancer. Totally-not-Kibo-chan is a good example that we can achieve nice, empathetic characters without delving deep into the mines of the Uncanny Valley.
Open file (3.97 MB 480x596 kami-chan test.mp4)
>>27254 I call it a "kami-chan" model of the SPUD line, "kami" spelled with the kanji for paper (紙ちゃん). I kitbashed a few papercrafts - scaling them up- and threw together a larger version. Things are a little crooked due to how the layers are mounted but the idea is there.
>>27289 Very nice, Anon. I'm planning to use a papercraft shell for my initial robowaifu prototypes (shell durability isn't a high-priority for me at the initial stage, but rather rapid design changes/mockups, &tc.) BTW, have you looked into software that can do 'unfolding' for you? That is, it can take (say) a Blender 3D model, and then give you 'cut' lines to make (exactly like marking edges hard/soft) with the goal of making flat prints (like Kiwi's mesh flats : >>26066, >>26153, ...), which can then be directly folded/assembled into a 3D-shape. This idea is very common in the papercraft world ofc, but it also works for 3D printing. >=== -minor edit
Edited last time by Chobitsu on 12/15/2023 (Fri) 10:27:58.
Open file (669.61 KB 523x891 fullbody.png)
Open file (10.33 MB 1080x1920 Data.mp4)
>>27290 The software you're describing is pepakura (other, more elaborate programs similar in nature exist for designing sewing patterns), which has been in use by diy communities (my experience is designing cosplay props/armor) for over a decade. Using pepakura is how I made Spud's paneling. I need to get around to making some better hair tho :) Pepakura costs like $45 for a registration key, but even with the unregistered version you can unfold 3d models and print to pdf, just can't save editable files. I finally managed to get mediapipe running on the raspberry pi so it can track me while my face is hidden (good thing? bad thing?), and have a combo of 2 object recognition libraries running (one with objects and the other general image recognition). The servohats have a tendency to get unplugged from their power source so I gotta fix that before SPUD can move again. Also hacked a cheap led "equalizer" into a gpio input for the raspberry pi. The equalizer hangs out around the bluetooth speaker and sends the signal to open/close SPUD's mouth according to noise level (have yet to connect it to the mouth servo, just have a print script running atm).
>>27438 >pepakura Great! So you are already aware of the idea then. I just wanted to make sure you weren't overlooking something to make things easier for you! :^) >(good thing? bad thing?) <le ebin anonymoose Sure, that should be sufficient to keep outsider amateurs at bay, which at the moment are the primary concerns. You'd need OG opsec to deal with the pros (full bodysuit, blurs, obfuscation, etc.) but I think this is sufficient for this point in time during Current Year -era, Anon. >The servohats have a tendency to get unplugged from their power source so I gotta fix that before SPUD can move again. Huh, I wonder why? >The equalizer hangs out around the bluetooth speaker and sends the signal to open/close SPUD's mouth according to noise level Pretty sure this was the route SophieDev chose as well (or something functionally similar). That fullbody is looking better Anon! Looking forward to your continued progress with SPUD, Mechnomancer. Cheers. :^)
>>27438 >zooms in >Gigachad 0.9982279331909
Open file (3.36 MB 1080x1920 Spuds Merry Christmas.mp4)
>>27462 The pca9685 servohat is a goofy little beastie that has problems with the official way to power it. So I came up with some alternatives (plugging in power where a servo should go). So far with WW 2.41 everything is on-board: no reliance on third-party stuff on servers located in parts unknown (I was working with chatgpt starting in feb and witnessed firsthand the neutering thereof over the course of several months). Until I either get folks to help translate the pocketsphinx library out of its hard boston accent or find another locally hosted speech-to-text engine I'll be using generic google speech-to-text. Spud wishes you all a Merry Christmas, and she can even "say so" herself lmao (complete with horrible anime-style lip synch). I'm working on a revision to the face to close the gap between eye screens and face surface to less than 1cm, and subsequent proper adjustments to the mouth to make it more fluid.
>>27496 Excellent progress, Mechnomancer. Nice work! Merry Christmas to you too bro. Please stop by the /christmas/ festival this weekend. Good luck with the further Spud design improvements. Cheers, Anon. :^)
Open file (892.33 KB 1920x1080 audio.mp4)
>>27497 Thanks. I appreciate the invite but I'm going outta town for Christmas. Yknow, normie type stuff lol. I just cobbled together a better TTS engine client/server script tested on localhost (audio gen took 1.3 seconds using the AI model on an Nvidia RTX 2080). Deployment of the client onto the raspi (and server config) will probably have to wait until after I return from christmas but the voice sounds much better. ".wav" files are verboten so I just plopped the audio onto a quick little vid.
Open file (723.37 KB 1024x768 chobits.jpg)
>>27513 Excellent. Yes, that's much better. May you have a Merry Christmas holiday with the fam and whatnot, Mechnomancer. Looking forward to your safe return with us here on /robowaifu/ . Cheers. :^)
Very impressive, what can I say. I don't know if you'd like to collaborate, seems like you don't need any help, but if I could take a peek at the files please. Good job. Wish I could say I did anything special for christmas but for the most part it was just another day :/ You should consider getting some funding by the way, like seriously. Like kickstarter at least. It does look kind of creepy though. Maybe I could help with the skin? Idk
Open file (1.18 MB 3264x2448 Spud New Face.jpg)
Open file (2.02 MB 3264x2448 Spud Current face.jpg)
>>27613 Thanks. I'm not comfortable releasing the files tho until I have some documentation to go with it -or at least a version I am happy with. I hold myself to (what I consider) high standards, probably higher than what is good (or profitable) for me. The current eyelid mechanism turned out a little wonkier than expected. Ended up with over 3cm between the eyelid and the face surface to get the eyelids to blink, giving it that FNAF eyeball look (major parallax distortion when not viewed head-on, can barely see the eyescreen frame-left in "Spud Current face.jpg"). But I'm working on a new mechanism to reduce the distance to less than 1 cm and have cloth eyelids slide linearly over the screens (see "Spud New Face.jpg"), using a ripped mmd 3d model for reference. Cleaning up the sides of the eye openings might be a challenge, but the fabric also tends to curl inward so that might make it easier. Best option to have the avatar (robowaifu body) connect to the host LLM computer will probably be via a LogMeIn Hamachi network (saves issues with port/ip silliness). PyTorch's torchaudio might work for locally hosted speech recognition, have to test it on my main computer then attempt pi deployment.
>>27638 Good luck with the eyes, Mechnomancer. I'm sure you'll figure it out and again I recommend you glance over SophieDev's work. Might get some inspiration there. >Best option to have the avatar (robowaifu body) connect to the host LLM computer will probably be via a LogMeIn Hamachi network (saves issues with port/ip silliness). So is this a local network (server in your own house), Anon? Some type of elaborate network hole-punching technique for such a setup is both overkill and much too complex in its attack surface, and: < LogMeIn ...lol no. The le 'Proud Wakandan / Stronk Independynt White Tranny' pics on their front page are not a good look for anons. Once such a company discovered that you are using its products for running cute & affectionate robowaifus of all things ("REEEEEE you're killing Feminism!111one!!") -- they would immediately cancel you, and jack-booted GH Big Police-State thugs would likely soon be kicking your front door in for an ensuing struggle session :^). Just use simple static IPs and problem solved (and this could easily be automated via scripting). Cheers, Anon. :^) >=== -prose edit
Edited last time by Chobitsu on 12/27/2023 (Wed) 02:59:57.
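The static-IP route really is only a few lines with stdlib sockets. A minimal single-request sketch (host/port are placeholders; a real setup would add message framing, retries, and some auth):

```python
# Minimal static-IP client/server pair using only the stdlib.
# One request, one reply; handler is any bytes -> bytes function.
import socket
import threading

def serve_once(host, port, handler):
    """Accept a single connection, reply with handler(request), and exit."""
    srv = socket.create_server((host, port))
    conn, _addr = srv.accept()
    with conn:
        data = conn.recv(4096)
        conn.sendall(handler(data))
    srv.close()

def ask(host, port, payload):
    """Send payload to the server at a known static IP and return the reply."""
    with socket.create_connection((host, port)) as c:
        c.sendall(payload)
        return c.recv(4096)
```

The avatar-side script would call `ask("192.168.x.x", PORT, text.encode())` against whatever static address the LLM box is given on the LAN.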
Open file (38.37 KB 930x475 workflow.png)
Using Pytorch for speech recognition is more of a pain than it is worth (some wonky thing about requiring a version of ffmpeg that isn't dependent on python to capture streaming mic audio or something) so I found Vosk to be a good alternative. Locally hosted with a 40mb model and purportedly works for Raspberry Pi! Voice commands for a raspberry pi... *glances at mek and powerarmor with malignant intent* Workflow is almost complete. Just need to add the TTS ai into the mix then split it into the Server/Avatar scripts. Separating the emotes out of Oobabooga's response is nothing fancy, just some annoyingly involved string manipulation (replacing asterisks with parentheses to enclose the emotive language is harder than it sounds!). It is kinda fun talking to it, like magic seeing the words I say appear, then a response - I've read a lot of vintage scifi as a kid so I have an unusual appreciation for this sort of thing (Asimov eat your heart out). However my voice is low enough that the vosk model sometimes misunderstands, but even google speech has some trouble with it (and smart TVs do not understand/hear me at all) so I'm not surprised. AI gets kinda confused, too. XD Talking in a higher pitch seems to solve it. Tempted to use SPUD to apply to a local makerspace "startup bootcamp program", market it as an artificial assistant/greeter. So like one of these, but more sophisticated/owo. https://www.youtube.com/shorts/UiYeUq-rAJk Companies would probably like some artificial secretaries/greeters that can not only answer questions but also keep their data private. You do not sail straight into the wind, you tack :)
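The asterisk-to-parentheses emote separation gets a lot less annoying with a regex. A sketch of one way to do it (not the original string-manipulation code): pull the *emotes* out for the expression system and hand the TTS only the spoken remainder.

```python
# One way to split *emotes* out of an LLM reply before it hits the TTS.
import re

EMOTE = re.compile(r"\*([^*]+)\*")  # anything between a pair of asterisks

def split_emotes(response):
    """Return (spoken_text, list_of_emotes)."""
    emotes = EMOTE.findall(response)
    spoken = EMOTE.sub("", response)
    spoken = re.sub(r"\s{2,}", " ", spoken).strip()  # tidy leftover gaps
    return spoken, emotes

def parenthesize(response):
    """Or keep the emotive language inline, just swapped to parentheses."""
    return EMOTE.sub(r"(\1)", response)
```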
>>27658 >Workflow is almost complete. Sound good! Good luck with this subsystem's progress Anon. Cheers.
>>27662 Thanks, I'm thinking about doing a first public release of a few simple scripts that let folks talk to the oobabooga LLM via VOSK speech detection, and having it talk back with the TTS (like the workflow except on 1 computer). There seems to be zero documentation or examples about combining all these AIs (speech detection/LLM/TTS), so I might as well be the first. If I'm feeling fancy maybe display a pic of a waifu avatar with mouthflaps :D
>>27687 I kind of did something related to that but it was for a waifu api that's now dead https://github.com/peteblank/waifu-conversation
>>27688 Lmao wtf get your waifu api for $999,999 a month hahaha
>>27688 >ded >>27689 >crazy cost Yeah that's why I'm looking into locally hosted stuff. Free from corporate overlords and their predatory pricing/nerfy-updates. Worst case scenario I upload my current version of oobabooga/models on github w/ installation instructions.
>>27690 Is anybody paying that amount lmao Id be willing to host an api at a much more reasonable price. Say $888,888 a month.
Well, now I can talk to the LLM all on my mid-tier gaming computer (next step is to distribute some of the load onto the Raspberry Pi). The AI's voice has some reverb/echo cuz my mic is also picking it up. I need to tune the character cuz it seems a little weird, though.
>>27724 Excellent! The response latency is nearly negligible at this point Anon. You definitely have the right idea trying to economize the backend hardware involved. And of course the 'frontend' (onboard) hardware will need to be extremely efficient in Flops/Watt to remain viable for low-end (read: cheap) robowaifu designs. Thanks for the update, Mechnomancer. Keep.Moving.Forward. Cheers. :^)
Oh I see that you posted an earlier model in the other thread. You're almost there, yeah, if you can make her look nicer... But again, amazing work.
>>27223 Really I think that may be the main thing. The mouth... The nutcracker sophiedev did also looks kind of weird. But if you combine the nutcracker with a mask or something that might make it look better I think.
Open file (6.41 MB 1364x768 expressionschatgpt.mp4)
>>27724 Going over the code, I can only conclude the LLM seems insane because it considers emoting (describing actions between asterisks) a crucial element, which I removed from the response so it didn't clog up the TTS. For diagnostic purposes I added it to the printout (but not sent to the TTS). It is pretty easy to detect words in a string then emote accordingly (assuming you have the graphics, etc), here is a vid of me doing just that with chatgpt early last year. The slow nerfing of chatgpt made me sad. A basic sample of detecting emoting content (with python): if "happy" in response: print("happy expression") #or whatever code you want to do to express happiness. Chatgpt is fine with working with limited responses for emoting. Locally hosted (& less sophisticated) LLMs can too with the proper prompt, but they do sometimes come up with words outside of the (admittedly small) list. That's what I got all you for, to come up with lists of possible emotes lol
>>27973 Very interesting progress Anon. We do have an Emotions thread : (>>17027) if you really want to open up a bigger, general discussion here on this topic. I'd certainly recommend we do so as a good way to kick off the New Year. And of course, this area is closely-related to Cognition as well : (>>24783) . And since emotions are often a deep part of social engagement, maybe the Society thread can provide some good context for your work : (>>106) , as well as our Personality thread : (>>18) . Regardless, as you point out LLMs have very distinct limitations (even if you have multi-billion dollar data centers, as our common enemies do), and I consider some of them fundamental. This is why we here are seeking for another solution in our Cognition thread; one that will run fully opensource & disconnected, on Anon's local hardware. >That's what I got all you for, to come up with lists of possible emotes lol Heh we're all in this together, Mechnomancer! Cheers. :^) >=== -minor edit -add'l crosslink
Edited last time by Chobitsu on 01/03/2024 (Wed) 07:09:26.
>>27978 Okay... what do you mean by AI? Do you mean you wish to make a language model now? I'm going to use something like llama2 that is not a problem for me. If you wanted to help you'd make a movement AI and you'd set goals on what is to be carried out. If you wish to make a language model you need goals too.
>>27978 >Open Source and locally hosted LLMs I present Oobabooga, a locally hosted web UI for LLMs with aims to be the stable diffusion of text generation. The OpenChat model is comparable to Chatgpt before the great Nerfing (and about 16gb), but Pygmalion (8gb) is ok too. https://github.com/oobabooga/text-generation-webui >>27979 Already got an LLM from hugging face and the ability to create animations for the physical body using blender (see first attachment in first post)
>>27289 Oh wow I missed that one. That one also has mechanical eyes instead of screen eyes. I don't know if mechanical or screen eyes are the way to go; I think mechanical, personally. >>27980 I'm going to make the skin for the face with TPU filament. I also made a stand to hold the face in place. I might have overdone the stand, it's almost as tall as the 3d printer (ender 3). The filament is black however. I might have to paint it unless we want blackfu. It'd also be nice for the hand. However for the hand, for example, I'm wondering if the TPU skin would be flexible enough for the finger movements...
Open file (231.64 KB 1280x720 IMG_20240103_203104.jpg)
>>27981 It actually didn't come out as tall as the 3d printer. I've been thinking my ol' ender 3 is good enough, but not really. The more I print on it the more I notice its flaws. Look at this gear for example. Although I did set the quality to standard, so still.
Open file (6.63 MB 1080x1920 New_Year.mp4)
Need to adjust the noise level detection a bit for the mouth. Not quite yet ready to deploy the client-server setup as I'm trying to get a better TTS engine running. Emotivoice has not only TTS but emotional intonation that can be locally hosted. I just need to figure out how to use "Docker" and hopefully I have the power to run both at once without much computational delay. If I could figure out how to manipulate .wav files -adding audio effects- via python that would be nice but there doesn't seem to be much out there.
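On the .wav manipulation question: a simple echo is doable with nothing but the stdlib `wave` module. A sketch assuming 16-bit mono audio (dedicated tools like sox or pydub are more robust, this just shows the idea):

```python
# Stdlib-only echo effect for 16-bit mono .wav data held in memory.
import io
import struct
import wave

def add_echo(wav_bytes, delay_s=0.25, decay=0.5):
    """Mix a delayed, attenuated copy of the signal back onto itself."""
    with wave.open(io.BytesIO(wav_bytes)) as w:
        params = w.getparams()
        assert params.sampwidth == 2 and params.nchannels == 1
        samples = list(struct.unpack("<%dh" % params.nframes,
                                     w.readframes(params.nframes)))
    delay = int(delay_s * params.framerate)
    out = samples + [0] * delay  # extend for the echo tail
    for i, s in enumerate(samples):
        mixed = out[i + delay] + int(s * decay)
        out[i + delay] = max(-32768, min(32767, mixed))  # clip to int16
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(params.framerate)
        w.writeframes(struct.pack("<%dh" % len(out), *out))
    return buf.getvalue()
```

Reverb would just be several of these passes with shorter delays and steeper decay.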
>>28031 Did you change the mouth mechanism? Nice, I think it goes up and down now, right? I had this idea of the skin being tpu filament. You're using something fuzzy... Yeah if you can get the skin to look somewhat like a sex doll and put on a wig it'd look nice-r in my opinion.
Open file (50.74 KB 200x200 15907.png)
>>28033 Mechnomancer. I appreciate what you're doing, however if you want to sell it I do believe that people are quite interested in the sexbot aspect and it looking nice-r. Now whenever somebody says something can not be sold I point to the token known as obamaharrypottersonic10inu, I think it was called. Now if that can be sold, anything can be sold.
>>28034 As I mentioned earlier in this thread, I am focusing on a nice solid frame that can handle further... infrastructure. Furthermore, I also previously mentioned the current face is a proof-of-concept, and I may experiment with casting silicone & such later this year when weather permits. There are plenty of the "fun bits" available pre-fab if you know where to look (*cough* spamazon *cough*), such as anatomically correct full-body suits for "cosplay" or a simple morphsuit if you're strapped for cash.
>>28031 Nice progress Anon. The shell (fabric?) skin is coming along. Do you have further plans for more-closely lip-syncing the mouth with audio? Cute touch with the Santa hat BTW. Cheers. :^)
Open file (399.47 KB 1217x443 earz.png)
>>28044 >le lip synch The mouth flapping is controlled by what was originally an LED equalizer box connected to a GPIO pin. It has a little knob to adjust the levels, which is what I need to fiddle with. Working on 3d printing chobit-style ears (one for ports and another for webcam), need to re-do a piece as I miscalculated the gap for the HDMI port.
>>28053 Mechnomancer are you from wisconsin? This site might be wisconsin supremacists... I will say however, Poppy and inmoov creators come from france. I am not a frenchy supremacist, however I do notice they contribute to this robotics stuff often.
>>28055 Well I REALLY better get going, I'll be seeing you on your youtube channel I guess. There were some disagreements here I guess.
Open file (641.00 KB 1280x919 chii_ears.png)
>>28053 >chobit-style ears Really looking forward to it.
Open file (3.84 MB 854x480 nodata.mp4)
>>28053 >I was so excited to receive my first persocom 25 years after reading Chobits but imagine my disappointment when I found out she didn't even come with an RCA cable! Now I have this futuristic piece of tech just sitting there telling me to plug her into a HDMI port to continue installation, probably judging me for owning a CRT. Don't waste your money. Wait until someone makes one that isn't just for kids who never read the manga.
Open file (13.88 KB 351x356 i_was_like_JUST.jpg)
>>28067 >mfw
Open file (911.60 KB 500x350 booma.gif)
>>28067 tfw you forget HDMI to rca converters exist :)
Open file (6.85 MB 960x540 newface_step1.mp4)
New faceplate installed: gap between eyescreens is reduced by 10mm, but with a few tweaks I will probably be able to squeeze it closer by another 5mm before attaching the cloth screen w/flexible cheeks & eyelids. I'll probably need to re-print the chobit ears, as I didn't put screws in the tips to secure the two halves together, so there is a gap of about 3mm. The hole for the webcam is also slightly too small in diameter, so Spud's view of the world looks like seeing thru a porthole. Also slightly increased iris size (don't know how it will look with the cloth face covering it, but it is an easy tweak). I'm thinking of using cloth pieces to build torso musculature in layers (such as in the below video) to put between the pla frame and any external (morphsuit) covering. Probably won't work out (look too uncanny) but it would be fun to try and not too expensive: walmart has cloth bolt ends of like 2m for $2-$4. https://www.youtube.com/watch?v=tGgwA7IY0hY
>>28131 >muscles Costumers use shaped foam to create definition and musculature in costumes, you're doing the same thing here, get a thick foam block and carve muscles from that.
>>28131 Seems like a silly question now, but why fixate on physical eyelids? It would probably be way easier to closely fit these "eye" screens to the faceplate and have them play some stylized blinking interruption rather than trying to add physical eyelids, so I am assuming it's more of a personal "why not" ?
>>28135 I think the eyelids are important. If you dont believe me watch lucky star.
>>28135 In earlier versions I did have a blinking animation on the LCDs, however when the eye-screens are off it looks possessed (ಠoಠ)و† Plus it's more difficult and I love me a challenge. I could do non-LCD eyes (doll eyes, etc) but I'm enamored with symbol irises. I've seen some work of embedding them in physical/animatronic eyeballs so eventually that might be an option.
Open file (16.93 KB 341x192 5034-329920408.jpg)
screen face is for karens
>>28131 Shaping EVA foam for the volume and putting cloth over it would look and feel good while keeping the weight minimal.
>>28131 Nice progress, Anon. It's a cute addition! :^) >I'll probably need to re-print the chobit ears Ehh, you'll get it sorted out I'm sure. Keep moving forward! >I'm thinking of using cloth pieces to build torso musculature in layers Is this for simulating volumetric bulk of her 'musculature'? If so, then I'm personally very-inclined to anon's suggestion about utilizing LARPfoam (>>28173) for such uses. It's certainly what I'm planning for the exterior substrates in my own designs. Cheers. >>28173 Very cute Aigis arts! :^)
Open file (696.24 KB 891x663 smine2.png)
Open file (1.01 MB 897x669 smine1.png)
Open file (444.23 KB 493x657 sat.png)
Open file (92.85 KB 1277x1022 balancing code.png)
>>28197 >le eva foam I mentioned EVA foam earlier ITT, problem is EVA foam doesn't really stretch. Adding a second joint in the spine to assist with future balancing (using a gy-521). In the 2nd revision of these parts I got the tolerances right for the joint, but the lower anchor was perpendicular to where it would attach to the waist so I'm re-printing the yellow bit. Spud got tired of standing so is now sitting on the workbench :3 Need to make the head less egg-shaped if I want to properly fit the wig on there. Also sharing a basic, 1-axis balancing code you can do yourself in python. Can't remember if the error handling works or if I added too much but ¯\_( ͡° ͜ʖ ͡°)_/¯
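For anons who can't read the screenshot, the same idea in miniature might look like the following. This is not the code from the image: the gain, pulse limits, and raw-accelerometer scaling are made-up numbers, only the shape of the idea (gy-521 tilt in, proportional servo correction out) is kept.

```python
# Minimal 1-axis balance sketch: pitch from a gy-521/MPU-6050 accel read
# nudges the spine servo pulse back toward level. All constants assumed.
import math

CENTER_US = 1250       # standing-position pulse mentioned upthread
US_PER_DEGREE = 5      # made-up proportional gain
MIN_US, MAX_US = 900, 1600

def pitch_deg(ax, ay, az):
    """Tilt about one axis from raw accelerometer counts."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def correction_pulse(ax, ay, az):
    """Proportional correction, clamped to the servo's safe range."""
    pulse = CENTER_US - US_PER_DEGREE * pitch_deg(ax, ay, az)
    return int(max(MIN_US, min(MAX_US, pulse)))
```

In a loop this would read the sensor, compute the pulse, and write it to the spine channel; a real version would add smoothing and the gyro term so it doesn't jitter on noisy accel reads.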
Open file (1.10 MB 1402x662 newspine.png)
Open file (581.51 KB 477x661 funnywig.png)
New lower spine joint is finally complete, just need to wait for a new ASMC-04B servo to arrive since the current one is the test unit I accidentally bricked. Then it is time for the 2nd spine transplant. Thankfully it is only a partial one (I made some notes so you have an idea how it will fit... once it is installed you probably won't notice unless you look real close lol). Wig was too smol, that's what I get for a $10 Spamazon special lol. Jaw panel sits differently than planned so I might need to re-print that, and re-print the face with some more mounting holes since it is currently only being held on by a single forehead screw and pressure from the ears against the head frame. Hehe, looks kinda chibi/big-headed in the pic lol.
>>28239 >>28353 Excellent progress Mechnomancer! Looking forward to seeing your redesigns in action. Cheers. :^)
Open file (5.24 MB 302x540 Carrysmol.mp4)
Since parts are not arriving until at least the 18th, I dug around in my archives and found some footage of the first robowaifu. Here is the Workshop Waifu "Carry" in the early stages, and was in fact my first project using a raspberry pi. I find the contrast between Carry and Spud striking, it is amazing they are only about a year apart :D
>>28450 > I find the contrast between Carry and Spud striking, it is amazing they are only about a year apart :D Amazing. You've come far in a short time, Mechnomancer. Good job! Actually, I rather like Carry. I particularly appreciate the fact you try to keep costs relatively-low. (>>28446) Thanks for these videos Anon, please post others you think would be helpful to beginner-anons here. Cheers. :^)
Open file (2.83 MB 302x540 Carry2.mp4)
>>28456 Ok, more Carry-posting :D Started making 'er a little taller in this vid, filmed about a week later.
Open file (953.51 KB 831x517 wigz.png)
Open file (353.17 KB 451x587 face_cloth.png)
Shipment arrived, threw the wigs together. Head looks a wee bit flat but I'm printing some skull panels (forehead first) that should help the shape. Also stretchy, pale cloth arrived for the face. The acrylic one in the background didn't turn out very well.
>>28601 Why exactly the need for fabric-skin? Is it just to hide the mechanical side of "emotive" elements?
>>28603 For a more seamless face: more natural eyelids (flexible) and to cover up the terrifying, fnaf-esque jaw :D
>>28601 >>28609 Thanks for the updates, Mechnomancer. Of course, do as you see fit, but I would highly suggest to you that you don't get overly hung-up on creating any appealing face r/n. I think your talents lie more in the mechanical side of things & I would recommend you focus more on those aspects of SPUD. In particular, I say you should consider making a serious effort this year at making her walk freely in a bipedal fashion. During this same era, there will be other anons focusing on the head/face, and then perhaps we can move forward all together by splitting the work up this way. >tl;dr I simply don't want you getting your 'truck stuck in the mud' over this area, and losing your thus-far-quite-excellent forward momentum. Cheers. :^)
Open file (289.54 KB 497x242 backup.png)
>>28611 I appreciate the concern, but I'd like SOME improvement over the past year, especially since SPUD will be one of my flagship items during my demo circuit. I'll just be slapping the new cloth on there like the old cloth face, with the addition of them cloth eyelids. Maybe I'll put some LEDs in the cheeks for blush XD Forgot to mention the final ASMC (20V, chonky) servo did arrive, so I can do the spine transplant; the result will be the ribcage being used to help balance Spud on both axes. Combined with the arrival of properly rated buck converters I should be able to get all the motors running off an internal battery (40V 2.5Ah lithium-ion). And since there is enough resistance in the motors' gearboxes, Spud can almost stand by itself UNPOWERED, so little energy will be used while idle. However, if I can't get bipedal motion working I do have an alternative (see attached).
>>28615 >especially since SPUD will be one of my flagship items during my demo circuit. Oh yeah. That makes sense then, I'd proceed too. Just friendly advice. :^) > ... so I can do the spine transplant ... Nice! That's exciting that you're going to soon have an improved spine for her. Looking forward to that! >(see attached) Ahh, the Tachikoma trick then ehh? :D I'm planning something similar for the smol headpat-daughteru Sumomo-chan. She'll probably only be a couple feet tall at most, and her having a little 'car' to ride around in will be both cute & helpful for the both of us!
Open file (1.10 MB 815x647 metal hair.png)
>>28626 Here's a vid demonstrating the spooder leg mechanism. I'm rather tempted to build a 3.5 foot RC spider (with a nerf gun turret on top) anyway, so making the robowaifu ride it wouldn't be too hard... theoretically. https://www.youtube.com/watch?v=wypThTfbclM >daughteru tbh I think calling it a "mecha musume" would work just as well. :) I had to take Spud's bottom half apart to properly align all the heavy motors' flanges (in the legs and spine), so at a 1250µs servo pulse they're roughly at a standing position (don't want any wacky hijinks when I start coding the movements). Have to redesign and reprint (again) the lower spine anchor as it was slightly too large for the hips to rotate freely (currently printing). I installed Spud's forehead plate and put the hair on, and greatly improved the way the wigs sit. Yeah, 2 wigs. The lower, brighter yellow one was originally a hair extension built into a baseball hat (looking like an owo Danny DeVito, should've gotten a pic for the lolz but I was in the zone), and the 2nd is a gold pixie-cut style wig. Slightly different colors but very fluffy: looks like a refugee from an 80s cyberpunk animu.
>>28627 >teh spooder leg mechanism Very cool linkage! It looks like you'd need a pretty low coefficient of friction between the pads and the surface for it to be able to pivot properly though? >(don't want any wacky hijinks when I start coding the movements) This. I like that you are not only willing and able to rework/rework/rework -- but that you apparently expect it going in. This is the key to success as highlighted by great men such as Henry Ford, et al. Just.don't.quit. This is the key to (eventual) success!! :^) >looks like a refugee from an 80s cyberpunk animu. Lel'd >=== -fmt edit
Edited last time by Chobitsu on 01/20/2024 (Sat) 06:14:58.
Open file (5.36 MB 856x480 Chonky Servo Test.mp4)
>>28631 >willing and able to rework/rework/rework -- but that you apparently expect it "How do you make God laugh? You tell him your plans" :D I'll post my bucket of 3d prints that didn't make the cut at some point. Made a vid of the chonky (180kg) servo being controlled by the robowaifu computer (raspberry pi). 5V servo control board with 15V going to the motor itself, both using buck converters from a 40V Atlas tool battery I got on clearance. Once the spine piece gets re-printed (again) I'll be connecting 'em all up.
>>28639 Looks good! Really looking forward to SPUD's advances this year, Mechnomancer. Cheers. :^)
Open file (6.44 MB 270x480 spudmotortestlores.mp4)
Test of the robowaifu head/abdomen motors without any motion filtering. I'm going to be replacing a joint in the neck so the head is a bit more balanced, then do another calibration of the leg motors via the robowaifu-puter before I attach Spud's spine to them. Kinda wobbly cuz it's only attached by 2 screws via PVC.
>>28645 That hair is totally going to get caught up in something.
>>28648 it's already obscuring the webcam in the chobit ear lol
>>28645 Nice, the servos seem to be well-rated for moving the mass around pretty quickly. I'm sure once you have the new spine mechanisms in place, Spud will be more stable. Cheers.
>>28653 Oh shiznit nice, I didn't even notice it. Good placement.
Open file (4.74 MB 1080x1920 meeme.mp4)
Well, first full face test was less than ideal. Droopy eyelids give Spud the appearance of a stroke victim D: . To be fair I did it in like 15 minutes, 30 tops. Might revise the mechanism to something like didney do https://www.youtube.com/watch?v=YRDBFc-TrtM Wouldn't require much, just remove the cloth from the sliding eyelid panel and attach it inside the circumference of the eye socket. What else to do: - A little bit of red/pink paint in the cheeks/lips to remove that corpse-like uniformity. - Corners of the mouth are perfect for sewing in a little wire anchor to be pulled on by a servo to move between a smile and frown. To save on servo channels I could have it synched with the eyebrows? idk - Darker eyebrows so they're visible thru the hair. - Felt eyelashes (instead of floppy thin fabric ones) - Look into a design to eliminate gap between face surface and lcd screens further cuz I'm always looking for a challenge. - Calibrate the rest of the chonky servos - Mount the gyroscope (Did a test directly linking it to the abdomen servos for the lulz, I wouldn't recommend it unless you want a robot flopping around like a fish) - Wider mouth? What I've done since last post: - Calibrated the left arm (adding wrist rotation that you can't really see in the video atm) and ankle joints, and documenting servopulse position into the servo testing library. I'm going to need a third servohat to control all the motors. Spud is going to be so flexible lol. - Stuffed a little padding in the face to make the cheeks nice full and round -and pinchable, if that's your thing. - Increased shoulder width (with cool orange pieces!)
>>28717 Impressive progress.
>>28717 Not sure how much effort you would put into designing a waifu mouth movement. Just thinking about how to intricately create functioning mouth muscle movement and a tongue with such depth and detail is just insane. I think this has to be one of the greatest challenges in developing a realistic mechanical facial expression.
>>28743 I'm certainly going to do this. We discussed this very often in other threads. This could use - solenoids pulling on strings which are attached to the skin or some "tissue" underneath - magnetic switches doing something like that - magnetic sliders moving the attached skin a little bit sideways - solenoids pushing some skin in the mouth region forward, e.g. tipping her lips for kissing - small air compressors or vacuum pumps? - some variant of (electric) soft muscle
Turns out Hamachi doesn't work on the Raspberry Pi beyond the 32-bit OS: one of its supporting daemons is out-of-date (referring to a system file that doesn't exist, which I spent several hours tracking down). For now I'm using ZeroTier to set up a virtual LAN for demonstration purposes. You can easily host everything on a single network without any third parties, but I plan for the avatar and server to be separated by significant distance. I've set up speech-to-text on the Pi and it runs surprisingly well; now I have to rebuild the voice-engine sending protocol, as I had to change from the zmq library to the socket library. Also documenting the entire process if not for others, at least for my own sanity when I inevitably touch some system file I shouldn't have and bjork the Pi's SD card
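Since zmq frames messages for you and raw TCP sockets are just byte streams, the rebuilt sending protocol needs its own framing. A minimal length-prefix sketch of what I mean (function names are made up for illustration, not the actual SPUD code):

```python
import struct

def pack_msg(payload: bytes) -> bytes:
    # 4-byte big-endian length header, then the payload
    return struct.pack(">I", len(payload)) + payload

def unpack_msgs(buf: bytes):
    """Pull complete messages out of a receive buffer.
    Returns (messages, leftover_bytes); leftovers are kept for the next recv()."""
    msgs = []
    while len(buf) >= 4:
        (length,) = struct.unpack(">I", buf[:4])
        if len(buf) < 4 + length:
            break  # message not fully received yet
        msgs.append(buf[4:4 + length])
        buf = buf[4 + length:]
    return msgs, buf
```

The sender calls pack_msg() before socket.sendall(); the receiver appends each recv() chunk to a buffer and runs unpack_msgs() on it, so partial reads don't corrupt anything.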
>>28912 My man almost set up his robowaifu network on the poor man's Minecraft 'server' app
>>28912 Looking forward to seeing what you come up with for this, Mechnomancer. >Also documenting the entire process if not for others, at least for my own sanity when I inevitably touch some system file I shouldn't have and bjork the pi's sd card Very good thinking, Anon! Cheers. :^)
>>28913 Only for when I take SPUD on the road :)
Open file (1.96 MB 779x1341 spudlegs.png)
Open file (1.16 MB 548x1255 spudlegdetails.png)
Woke up early this morning, couldn't get back to sleep so I wired up SPUD's legs. I figure I might be able to get away with syncing the hip rotation servos to the ankle rotation servos (at least for now). This would effectively reduce the # of servo channels required down to 3: 1 channel to lean from side to side, 1 to move the left leg forward/back and the other to move the right leg forward and back, very similar to the femisapien. An extra wire is required for the ASMC servo power, however they don't need a ground as they are apparently grounded on the servo signal wire. The legs stand even when unpowered, however there is a slight tendency to fall backwards so I might end up making the feet even bigger, and either have a pair of false feet sitting on some big platforms, or just some megaman-esque feet/shins. There's certainly gonna be some shenanigans when I try to get it to walk for the first time, I'll probably hold it up with a rope or something like they do for all them early versions of walking robots you see on youtube and stuff.
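If any of the paired servos end up mounted in mirrored orientations, a software alternative to hard-wiring them to one channel is to drive two channels with a pulse reflected about the center. A sketch of the arithmetic (the pulse range values are typical hobby-servo assumptions, not measured from the ASMC units):

```python
# Typical hobby-servo pulse range in microseconds (assumed values)
PULSE_MIN, PULSE_MAX = 1000, 2000
PULSE_CENTER = (PULSE_MIN + PULSE_MAX) // 2  # 1500

def mirror_pulse(pulse_us: int) -> int:
    """Reflect a pulse about center so a servo mounted in the opposite
    orientation moves symmetrically from the same logical command."""
    return 2 * PULSE_CENTER - pulse_us
```

So a 1250µs command on the hip becomes 1750µs on a mirrored ankle, and center stays center.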
>>29134 There are numerous studies on human locomotion & kinematics that show that average human walking gaits are in fact an accumulation of hundreds of subtle & not-so-subtle muscle movements, plus a coordinated inverted-pendulum 'mass throw' (as in 'controlled falling'). It takes us years to mature to the stage where our musculo-skeletal/nervous systems master this orchestration with aplomb. That we can even seriously begin to pin everything down to the degree that robo-DIY'rs today can honestly anticipate accomplishing this is nothing short of amazing IMO! :^) Good luck, Mechnomancer! >=== -sp, minor edit
Edited last time by Chobitsu on 02/05/2024 (Mon) 16:40:53.
Open file (5.83 MB 480x854 Leg Dance.mp4)
>>29135 One smol shuffle for robot, one giant leap for robowaifu-kind. The ankles are on the wrong way and there's a lot of joint slop, but it's progress. Especially since they're all being controlled on 1 servo signal lol.
>>29134 >I might end up making the feet even bigger permanent skis here we GO
>>29134 Very nice, Spud's on her way! :^) I would recommend you take about a 10Kg barbell weight affixed over her centerline, mounted atop an actuator-controlled vertical lever (thus your inverted 'pendulum'), and with that you can begin to coordinate the needed counter-balances. This will enable you to have her lift a foot completely up off the ground as she's sliding it forward. (If you want to get an innate sense of what's needed for the proper dynamics here, simply take a broom and balance it upside-down -- handle tip resting on your finger tips -- then walk forward like that.) Loosely suspend her first with overhead cords as you suggested! :^) >=== -minor edit
Edited last time by Chobitsu on 02/05/2024 (Mon) 16:54:46.
Open file (121.49 KB 502x869 Femisapien-a10.JPG)
>>29140 Scale up the femisapien to 5'5" and the feet are like 12" long (or something, it's been a while since I did the maths). I could always just move the ankle axis further forward instead of making 'em longer. We'll see. Not like my workshop floor is exactly level lmao.
Open file (3.96 MB 320x570 Body tilt.mp4)
>>29143 Already got torso tilting. I have a GY-521 gyroscope installed, too. Just need to deploy my balancing code.
>>29145 Yeah, very nice. It's obviously desirable to use the actual dynamical masses of the real system for prototyping, rather than a dummy mass. Also, don't neglect the mass of the head as its own 'inverted pendulum' for the upper torso. Not sure what the mass there is, but a real human head/neck is at least 1 stone on average. You can consider its dynamics kinda like the trim-tabs on an aircraft, or the tip of the tail of a cheetah racing across the savannah. Nice progress, Anon! >=== -prose edit
Edited last time by Chobitsu on 02/05/2024 (Mon) 18:27:48.
>>29145 time to walk like i shat myself i read that humans walk by throwing themselves off balance at the start of each step thats why robots have that 'i shat myself' walk because theyre doing the exact opposite and keeping balance for the entire process
>>29134 >I might end up making the feet even bigger Well, I hope you changed your mind >>29145 >>29147 Yeah, that's how I understand it as well. We need to make robots that can fall without breaking, then we won't need to care. I realize more and more that many if not most plastics parts should be made out of TPU: >>28944
>>29147 Well I'm using the femisapien's walking mechanism as reference and its walk cycle doesnt seem so... shitty. lol https://www.youtube.com/watch?v=rNezxoSEEB0
>>29145 Very nice! Where did you install the GY-521? Do you use PID for balancing?
>>29136 Spud is gonna have moving legs too? This is ambitious and very interesting! So cool to see someone else actually building a life-size robowaifu! Hopefully she can be set free one day! (Like M3GAN in that funny movie).
Open file (136.28 KB 291x201 gyro mount.png)
Open file (272.57 KB 467x436 broken ankle.png)
>>29151 >PID control Honestly, I never did find calculus particularly useful (although some of the supporting hardware/software I'm using most likely uses it behind the scenes). I'll certainly be using the concept of PID... but without the goofy maths. The gyroscope is currently just hanging out of Spud's neck (you can see it jiggling in the vid), but I'll probably mount it in the trapezius or something. Maybe use the accelerometer in it to make a shoving/balancing video with Spud going "hey" and "stop it". >>29152 I gotta aim for the hardest thing. That's my mojo. If the legs are a success then I'll scale it up for power armor and maybe even a bipedal riding mount (a la Iron Monger), however that isn't particularly robowaifu so I'll leave it at that :D If 2 legs don't work I'll make Spud a robo-spider throne to ride. During some tests I forgot to keep the main 20V powerline separate from the servo signal line. It picked up noise from the servos' current draw when they start/change positions: the messy signal makes the motors spazz more, which makes more noise, creating a feedback loop of messiness that made me glad I had an emergency cutoff. However, during the adventure the ankle joint over-extended, breaking it. Servo noise seems to be my kryptonite, whether in mech projects or robowaifus.
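For what it's worth, "PID without the goofy maths" really does boil down to a couple of multiplies once the gyro hands you an angle and a rate. A PD-only sketch of the idea (the gain values here are made-up placeholders, not tuned for Spud):

```python
def balance_correction(angle_deg: float, rate_dps: float,
                       kp: float = 8.0, kd: float = 0.5) -> float:
    """Proportional-derivative correction: lean angle and angular rate
    (e.g. from a GY-521) in, a servo pulse offset out.
    kp fights the lean itself; kd damps the swing so it doesn't oscillate."""
    return kp * angle_deg + kd * rate_dps
```

Each loop iteration you'd add the returned offset to the abdomen servos' neutral pulse; the integral term can be bolted on later if she drifts.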
>>29145 What do you think about adding some rubber strings between hip and the torso? These would help pull back the body towards the center.
>>29165 Depends how much slop is in the abdominal joints. Seem to be doing ok, since in the vid I'm using a manual servo testing board to move the chunky 180kg servos. I got a pack of little bungee cords I was using before I added the ab joint. Probably need to 3d print a knee linkage to replace the pvc one due to joint slop...
Open file (129.70 KB 1500x1330 71G-u-8xiHL._AC_SL1500_.jpg)
I succumbed to temptation and got one of these. Not only are they neat, but it could be a good temporary face mask while I work out the more realistic face. And allow for more (literal) flexibility for eye placement. It's expensive tho. Thank goodness for overtime pay!
>>29185 > spends WAY too much, in the autistic pursuit of basic progress... Heh, a man after my own heart. :D
>>29193 Well I've heard about these screens for years and I've always wanted one, so this gives me an excuse :D Did a thigh test and isolated the 20V power cable from the servo signal lines. Had no problems with signal noise, just that dang joint slop in the left knee. Definitely gotta replace that with a 3d printed joint. The servos might actually be fast enuff that I could do a walk cycle that doesn't balance the robot on 1 foot all the time (walking being a partial-falling sorta thing, like in previous posts here). If I put some padding between Spud's frame and the morphsuit it could help her survive falls... and make her huggable, I guess.
Open file (107.12 KB 474x474 ClipboardImage.png)
Open file (179.11 KB 474x474 ClipboardImage.png)
Open file (195.48 KB 474x474 ClipboardImage.png)
>>29206 Consider designing it around some kind of door hinge. Some might need constraining from the side, but this could also help make the leg easier to remove if necessary. I would really keep an eye on standard metal parts. If necessary, the printed parts holding them can be adjusted later to make the design work for other people, especially since these parts are small and simple themselves. Not everything has to be 3D printed just because it can be.
Open file (861.92 KB 591x559 roboelbows.png)
>>29207 Will probably do something similar as I did with the elbow (but not double jointed): held together with a machine screw and nylon lock nut. Easy to replace: I swapped out one of the parts and it took like 5 minutes tops.
>>29208 Ah, okay. This might be the better way. And in the knee it would even be less of a problem if it's a bit on the bulky side. Elbows should really not be too big.
>>29185 Very cool piece of tech! How much did it cost? how are you going to connect it?
Open file (226.32 KB 1500x1329 81qb27AydwL._AC_SL1500_.jpg)
>>29212 Cost 350 bucks O_O So its my only purchase until next payday It has some boards to connect it to HDMI.
Open file (5.56 MB 2109x1389 kneu knee.png)
New knee, who dis?
>>29215 How do you want to use it? Are you going to put it into SPUD's face, or do you want to replace the face with this flexible screen? I find the idea of debugging her through her face very interesting, assuming you are going to use the hdmi output of the raspberry
Open file (1.21 MB 300x189 bth.gif)
Open file (9.31 MB 1364x768 faces.mp4)
Put together a dynamic emotion library. How it works: first the script creates a list of all files in the emotion directory (each .png file has a filename describing its emotional state). Then the script scans the input text (which will eventually come from the AI but for now is manual input) for all possible words, and displays the file named with the last word detected - and, the important bit, it only looks for words that already have files associated with them. So adding a new expression is as easy as adding a .png file to the proper directory, named for the desired emotion. In future this system could also be integrated for animations (see first post), where Spud will even physically move according to the words the AI says (so if the LLM says it is dancing, the actual robot will dance a pre-programmed dance).
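The lookup logic boils down to something like this (a rough sketch, not my exact code - function and directory names are made up):

```python
import os

def load_emotions(directory: str) -> dict:
    # Map emotion name -> file path for every .png in the directory,
    # so new expressions appear just by dropping in a named file
    return {f[:-4]: os.path.join(directory, f)
            for f in os.listdir(directory) if f.endswith(".png")}

def pick_face(text: str, emotions: dict, default: str = "neutral") -> str:
    # The last word in the text that names a known emotion wins;
    # unknown words are simply ignored
    chosen = default
    for word in text.lower().split():
        if word in emotions:
            chosen = word
    return chosen
```

So with happy.png and sad.png in the directory, "I was sad but now I'm happy" lands on the happy face.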
>>29243 nice could do some kind of interpolation to mix those expressions, not pixel interpolation though guessing that would just give weird monstrosities
>>29243 Thanks, that's a good idea and I love how such a small thing can already be very intriguing. I contemplated something like this a few times after my more ambitious idea of extracting this from an AI generated video failed >>26098. Though, I thought more about doing this for each syllable to make speech and singing animations. I just thought it was too difficult and time consuming for me, especially since I have no experience in drawing. But something like this might be in particular interesting for an AI girlfriend app. Then again, I still have a hard time to believe that this doesn't already exist somewhere. I generally don't understand why animation isn't more automatized using standard patterns which are then maybe adjusted to some specific character.
>>29245 >animation isn't more automatized It is. A lot of modern cartoons are made similar to old cell style 'sections', where there is a body on one layer, and then any part that will move is on another layer. Bodies are broken down into a full front shot, profile, 3/5ths, arms, legs, head, and even the face are completely separate, and all these pieces are placed in a library or database to be used to create specific animations, while things like walk cycles, sitting, standing, can be reused in whole chunks like 3d modelling rigged animations can be transferred between models. Some digital animators can't actually draw at all and will never have to because they place cartoons together like it's done in south park.
>>29243 Cute!!
>>29250 I always thought so, but this seems to be some expensive software or some company internal work-flows. If it was available as some open source software or even just in some common animation program, then imo we would see much more animated videos.
>>29261 We see a lot less of this the more we head into completely digital works. Actually as cool and fast as digital art is, cartooning is actually losing some techniques. >>29258 >Much more animated videos You do, it's just so many of them are terrible. Youtube animation has leapt from dedicated amateurs to kids as young as 8 with their own channels full of terribly drawn yet very smooth animations. Toonboom and a couple of other programs released free versions of their software a few years back and now a lot of studio quality animation floats around on youtube.
>>29263 my point is digital motion tweening a la flash looks cheap af. frame-by-frame is more visually appealing, especially with smears and whatnot. however sometimes digital tweening works for certain elements like the shading in "Klaus". It would be more complicated to implement digital tweening using opencv rather than simply create some in-between frames by hand. but if one wants a cartoony effect, anime characters do tend to change facial expressions quickly
>>29243 Is this written in python? So at the moment you use one png per emotion? Those faces in the video look like they are built of parts (eyes + nose + mouth etc). Where did you get these faces from? Do you have a link? I would play around a little bit, try to construct them from parts.
>>29266 yup, python-based. Some nerds complain about how it is slow and xyz, and I pissed 'em off by saying "consumers don't care about backend fiddly stuff, they just want product". Opencv is a little weird when it comes to combining images, which is something I have to look into, because it would be nice to have a "layer" for eyebrows, eyes, cheeks, mouth, etc. Found a higher-res version of the faces lel. I've been fiddling with my local AI, however it needs some coaxing to emote anything so it can do facial expressions. I'm half-tempted to also scan what it says for emotional keywords. Or I could spend $300 on another graphics card and use a higher tier model.
>>29268 In my opinion it's way faster to get shit done in python. And that's what matters most in a small team project. What I really like about python is the low entry barrier. Just download and start scripting. So you are using opencv for drawing .pngs? Thanks for the faces!
>>29268 rendering layers is pretty trivial look into opengl instead or the compositor of your video driver if you want to be autistic
Open file (18.54 KB 363x87 solution.png)
I decided to brute-force my way to a solution, going through every single opencv function that seemed relevant. With all the folks trying to make HUDs for webcams a la ironman I'm surprised I had to actually trawl through documentation rather than some nerd going "hey look!". Using cv2.bitwise_and(image1,image2) to create a compiled image seems to work: it ANDs each pair of pixel values together, so white (255) passes a pixel through and black (0) zeroes it, which makes it behave like a mask. Neat! For some reason everyone tries to use "addWeighted" or variations thereof. However, interpreting emotional data in a multidimensional way (eyebrows, eye shape, mouth shape, face icons like blush, sweatdrop or forehead veins) will require some more thought and experimentation. This will be especially difficult using a less sophisticated AI model.
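For comparison, the textbook way to layer images without masking tricks is straight alpha ('over') compositing, which is just arithmetic and needs no special OpenCV call at all. A pure-Python, single-channel sketch (function names are my own, for illustration):

```python
def blend_pixel(fg: int, bg: int, alpha: int) -> int:
    """Classic 'over' blend for one channel; alpha runs 0 (transparent)
    to 255 (opaque), all values in 0..255."""
    return (fg * alpha + bg * (255 - alpha)) // 255

def composite_row(fg_row, bg_row, alpha_row):
    # Blend one row of single-channel pixels, foreground over background
    return [blend_pixel(f, b, a) for f, b, a in zip(fg_row, bg_row, alpha_row)]
```

In practice you'd do the same math vectorized over numpy arrays (which is essentially what cv2.addWeighted does, just with a single global weight instead of a per-pixel alpha map).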
>>29263 >Toonboom Thanks, noted. >>29272 Thanks, this could be very useful. For robots with screen faces and virtual waifus as well.
>>29272 Nice work. I'm glad that you're focusing somewhat on performance in your search for 'the right tool for the job' (another honorable & necessary goal). OpenCV's engine is effectively written in pure C++ (there may be a very smol amount of some inline ASM code for specific drivers, etc.) The Python serves as a frontend API to this backend system. This is quite similar to the architectures used with TensorFlow and PyTorch, for example (and the approach we also plan for the users of robowaifus, as well). The >tl;dr is: use the scripting interface for most jobs, since that is good enough. OTOH, if you ever need to 'open the hood' and tweak the system's basic characteristics and/or performance itself (and you can't find already-scripted means you can assemble together into precisely what you need), then you'll need to drop down into the underlying systems languages to accomplish that (C & C++). Thankfully, because all of these engines are opensource systems (and permissive ones also [Apache, BSD]), you can do exactly that -- and just as you see fit! :^) (In /robowaifu/ 's case, we haven't even finished building the waifu engines themselves yet; so the Pythonic APIs will come later on, as well.) >tl;dr Kinda like if you wanted to automate using Blender, then you'd use its Python API for that. If, on the other hand, you wanted to change Blender itself, you'd use C++ or C to do that. --- >However, interpreting emotional data in a multidimensional way (eyebrows, eye shape,mouth shape, face icons like blush sweatdrop or forehead veins) will require some more thought and experimentation. Abstraction. Write a little wrapper library for yourself that has all the high-level elements you need (happy, sad, etc.)... and ways to properly blend them together. Use that abstraction layer in your code you interact with the AI through. 
That way, regardless how either system changes -- on either side of the 'barrier' -- you've got one, go-to place to deal with that underlying complexity properly. --- Good luck Mechnomancer (and everyone else working towards this particular set of goals ITT). Looking forward to seeing all your results! Cheers. :^) >=== -fmt, prose edit
Edited last time by Chobitsu on 02/10/2024 (Sat) 12:15:34.
Open file (61.46 KB 680x393 199324749.jpeg)
>>29272 >a la ironman
Open file (4.37 MB 404x720 pepper'sghoose.mp4)
>>29276 I ended up using the "pepper's ghost" optical illusion. I could revisit it now, but that's not really relevant to making a robowaifu, unless you wanna make a virtual waifu living in a "holotank". It's pretty much just reflecting an LCD onto a piece of plastic at an angle.
>>29279 >unless you wanna make a virtual waifu living in a "holotank" We actually have an entire thread for precisely this topic, our Visual Waifus thread (>>240).
Open file (6.00 MB 480x586 bendyscreen.mp4)
screen arrived
Open file (389.96 KB 1023x555 tex.9.png)
>>29272 Don’t judge me too hard for my suspiciously in-depth knowledge on this, but one of the best places to find animation/character sheets (like picrel) is from VNs. Specifically, Emote-engine stuff, games by Yuzusoft or SMEE, anything advertising “Live2D” or the like. If memory serves, the best-quality sheets I’ve found were from - Nekopara - Meteor World Actor - Maitetsu - Nobleworks (partial— fully assembled faces like you’re already using) GARBro can extract most VNs these days, and even from traditional (non-animated) VNs, you can decompose faces with add/sub/and/neg/xor operations in GIMP or the like. Not sure if links to the high seas are allowed, but you can probably grab any VN you forgot your license code to (happens to the best of us, amiright?) from ryuugames or whatnot (just use a tampermonkey/greasemonkey link bypass script so you don’t have to fill out some retarded survey)
>>29279 I have to say that I am really overall impressed with your robotics and applied engineering, just really really good work there.
Open file (1.11 MB 1973x1305 better solution.png)
>>29289 Hmmm, I'll have to take a look; vtuber assets might work, too. I found a better solution that allows overlaying png files using their alpha channel. It is a bit clunky as I extract/convert the alpha channel to a separate alpha map, then load the image again for the diffuse, but it works. And it can be iterated, so multiple pngs can be overlaid; however, so far each added layer needs to be a file, not an image modified by the script outside of the function. Getting iris movement would probably require 2 masks: one for the eyelash layer (so it can sit on a custom skintone face), and another to crop the iris layer.
>>29295 Does your troll picture have a proper alpha channel? If so it should be accessible as the fourth value in the list (rgba). OpenCV seems to load it if you use the IMREAD_UNCHANGED param. You could separate the image loading from the overlay part and just iterate the overlay over a list of images, so that you add each layer with each iteration. This way you can do something like: paths = [path1, path2, path3] result = load(paths[0]) for path in paths[1:]: result = overlay(load(path), result) some pseudo python, I can't code shit without an IDE
Open file (1.21 MB 924x520 facetest.mp4)
>>29297 I just ended up repeating the code as many times as I needed for each layer instead of using a function or a loop, because I had to make some slight tweaks in each iteration. Created a basic engine where after a bit the face blinks, and I open the mouth with the spacebar. A few more buttons switch between the 2 current emotions; the naming scheme you can see to the right (as well as up to 12 different skin colors in case you want to switch between a tan waifu, pale waifu or custom colors/textures). Should probably stick to fairly lo-res images because of all the maths. IDK, I don't feel like doing performance tests. I can use elements from my previous dynamic emotion library to detect the available emotions based on whether the png files exist in the directory rather than coding it in manually, so adding a new emotional expression is as easy as making the graphics and naming them properly. Now I just need to connect it to oobabooga, and get oobabooga to act sane through the API.
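The blink timing can live in a tiny state object so it never blocks the main loop. A sketch of the idea (class name and the interval/duration numbers are made up, not from the actual engine):

```python
import time

class Blinker:
    """Time-based blink state: reports eyes-closed for a short window
    every few seconds, then restarts its own timer."""
    def __init__(self, interval=4.0, duration=0.15):
        self.interval = interval        # seconds between blinks (assumed)
        self.duration = duration        # seconds the lids stay shut
        self._last = time.monotonic()   # when the current cycle began

    def eyes_closed(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        elapsed = now - self._last
        if elapsed >= self.interval + self.duration:
            self._last = now            # blink over; start the next cycle
            return False
        return elapsed >= self.interval
```

Each pass of the render loop just asks eyes_closed() and swaps in the closed-eye pngs when it returns True; no sleeps, no threads needed for this part.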
>>29288 Remarkable. It will be very interesting seeing your design for installation of it. Good luck, Anon. --- Very nice progress with the emotions research! >=== -add'l resp -minor edit
Edited last time by Chobitsu on 02/12/2024 (Mon) 06:08:13.
Open file (5.44 MB 668x480 bendy owo.mp4)
>>29366 Here it is. There are some performance issues because the screen is 1920x1440 and I'm using an older raspi setup (32-bit), but some tweaks/threading can smooth it out. Also the left side of the screen is getting pinched D:
>>29403 Very nice! >screen is getting pinched It seems to me you'll need a slight redesign, and extend the frame bevel all the way across (ie, down, in this orientation).
>>29404 The top and bottom frame are connected by an independent piece (held on by the screws you see in the vid), so I just printed some new pieces 3mm longer and no more pinch. >I am pinching an lcd screen The future is weird, man.
>>29403 Are you loading your images on startup or before you display them? Maybe caching could reduce the performance impact.
>>29419 I was loading them during every iteration of the loop. However, the performance issue lies in the adding of the 5 different images, not the loading process. A 200x200 pixel face results in something like half a million pixel calculations every iteration. Not a big deal for desktop computers, but there is a considerable performance drop on the Pi. It also wouldn't be that big a deal except the flexible oled only accepts a resolution of 1920x1440 (higher res than my 30" monitor), and even when scaling up the 200p image after all the calculations there is still a performance drop. So I did end up implementing a sort of cache :) To get a bit into how it works: the face status is assigned with a function showface() with variables eyes, nose, mouth, base, skintone. These are assigned with variables that resolve to the image filepath, based on adding strings together elsewhere outside the showface function. So changing the variable emotion from "smile" to "pout" changes an entire set (assigning a non-available emotion will result in the face giving you a death stare until you fix it, plus an error printed on the console; I can upgrade that to an audio-visual cue/better error handling at some point). Example: mouth = eld + emotion + "_mopen.png" where eld is "hires_eye_expressions_crop/" (the directory relative to the script) and emotion is the base of the filename. This results in a change from "hires_eye_expressions_crop/smile_mopen.png" to "hires_eye_expressions_crop/pout_mopen.png" When a new face is calculated, the image is stored in a "faceprev" variable and the filenames are stored in a "current status" variable. This current status is then compared to the assigned statuses in the next iteration of the loop, and if they are the same it loads the image from faceprev instead of calculating a new one. If they are different, it calculates a new one (and a new faceprev). 
Perhaps there's a little performance lag during the change, but some threading can keep it separate from other functions like object detection and motor movement. I also had to divide the eyebrow/nose layer by 255 because it was getting blown out for some reason; the artifacts around the eyes at the darker skintone are something on the png, not the script ¯\_(ツ)_/¯ Sorry for the wall o' text, I like talking about it ᕙ(▀̿̿Ĺ̯̿̿▀̿ ̿) ᕗ
>>29420 welcome to software rendering lol, check if the library is actually using simd instructions otherwise yeah its going to be like 16x slower, dont know what cpu is on a pi i think its an arm cortex which has simd maybe you need to set some parameter to make it vectorise
>>29420 >the flexible oled is only capable of a resolution of 1920x1440 Is there really no way to lower the resolution you're outputting to it? I don't think I've ever encountered a monitor that won't display at lower resolutions.
>>29424 It's actually very common for Chinese screens to either have a terrible scaler or to lack one altogether. Doubly so for low-quantity bespoke solutions. As for this project, glad to see Mechnomancer integrating well within this board and creating an interesting facial architecture. One which should leverage the graphics processing power of his Pi more.
>>29420 This looks really good, Mechnomancer. I have to admit (against my better judgement) that it would be compelling for normals if you fashioned a sort of faux A*ple Vision Pro-like border rim around your screen. Kind of like a full-face mask. The curve actually does quite well at luring you into a perceptive 'suspension of disbelief', and I think the mask motif might quickly send most non-autists into an alternate robowaifu reality if Spud was geared up so. >>29421 This. IIRC, the Pi systems rely on a non-standard Broadcom phone chipset for graphics. I'm unaware if any GPU-specific APIs have been reversed for it yet, and even if so, whether the OpenCV library would have driver support for it. Regardless, vectorisation would help tremendously. His older chipset might not support it, but I would imagine the newer (RPi4 - up) might. Great advice, Anon. Thanks. >>29427 >As for this project, glad to see Mechnomancer integrating well within this board and creating an interesting facial architecture. Yes he's doing a great job here.
>>29436 if its not supported you could always use a lower colour depth 32bit is overkill when theres only a few colours used you could use a bmp with indexed colours instead of png then you only have like 4bits per pixel, much faster to process, i dont know how opencv works there has to be some kind of colour to alpha function to set a specific colour as a transparency like the magenta you see in oldschool bitmaps
Open file (874.82 KB 960x540 henlotheretest.mp4)
>>29424 It theoretically can do 640x480 according to the Pi, but the screen refuses to do that resolution. >>29436 It's a Pi 4 with a 32-bit OS (4gb). Definitely gonna make the screen holder more pretty; I just needed something quick because I don't wanna risk breaking it. That single screen alone cost more than all the heavy-duty lift motors in my crab mech! An advantage of a full-screen face is that when it is off the robot doesn't look possessed, unlike having a screen behind a faceplate without physical eyelids. >>29449 In order to do the overlaying process I actually convert the alpha in the png to an alpha map and use that to add/subtract parts of the images from each other. If I didn't extract alphas from the pngs I'd double the # of required files. I managed to put together a software-based mouthflap protocol. While the audio is playing I open the raw wav file as a list of frames, record the length of each iteration of the loop, and check the amplitude relative to the loop length (if the loop is 50ms long, it checks 50x12 frames ahead, as there are 12 wav frames per ms: if I was on the 1200th frame it would then check the 1800th frame. Just 1.26 seconds has over 15,000 frames!). I'm checking individual frames, a very small fraction of a sound (1/12th of 1ms!), so it's kinda amazing it even approximately syncs up, and since each iteration length is slightly different, I get different lip sync results each time. Sometimes the mouth hangs open (not in the video) and it is adorable. I could minimize this by doing averages of groups of frames but I'm done coding for the day.
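A rough sketch of that amplitude check using only the stdlib wave module. The threshold value is a guess, and it assumes 16-bit signed mono samples (the real sample rate comes from the WAV header, e.g. 12 kHz for 12 frames per ms):

```python
import struct
import wave


def mouth_open(wav_path, elapsed_ms, threshold=2000):
    """Peek at the single WAV frame at elapsed_ms into the file.

    Returns True if the sample there is loud enough that the mouth
    should be drawn open at this instant.
    """
    with wave.open(wav_path, "rb") as wav:
        frames_per_ms = wav.getframerate() / 1000.0  # e.g. 12 at 12 kHz
        index = int(elapsed_ms * frames_per_ms)
        if index >= wav.getnframes():
            return False  # audio finished, close the mouth
        wav.setpos(index)
        raw = wav.readframes(1)
    sample = struct.unpack("<h", raw[:2])[0]  # 16-bit signed mono assumed
    return abs(sample) > threshold
```

Averaging a small window of frames around `index` instead of reading just one would smooth out the "mouth hangs open" cases mentioned above.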
Just a quick update: loaded the face script onto 64-bit Raspbian; it seems to be running twice as fast lmao.
Another smol update: managed to piece together a script that puts all the raw face image data into a variable (aka a buffer) in the script rather than loading from disk every iteration. I don't know if there will be any performance increase as I haven't deployed it on the pi yet. Also got the script installed that creates a list of possible emotes based on the emotion graphic files, then scrapes a string to see if the emote exists and sets the expression accordingly. Is it clunky? Probably. Does it work? Yes.
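That emote discovery step could be sketched like this; the `_mopen.png` suffix convention is borrowed from the filename example earlier in the thread, and may not match the real naming scheme exactly:

```python
import os


def build_emote_list(directory, suffix="_mopen.png"):
    """Scan the graphics folder and derive emotion names from filenames."""
    emotes = set()
    for name in os.listdir(directory):
        if name.endswith(suffix):
            emotes.add(name[: -len(suffix)])  # "smile_mopen.png" -> "smile"
    return sorted(emotes)


def find_emote(text, emotes):
    """Return the first known emote mentioned in a recognized phrase."""
    for word in text.lower().split():
        if word in emotes:
            return word
    return None  # caller keeps the current expression
```

Adding a new emotion then really is just dropping correctly-named files into the folder: no code change needed.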
>>29500 probably, read speeds are io, ram, cache, reg ordered from slowest to fastest, could be worse if python uses some weird alignment and you end up with cache thrashing doesnt your pi have a 64 bit cpu, check with lscpu the pi 4 specs says it is, dont know why you would use a 32bit os then
>>29501 Well my "cache" is just a python list containing the picture data in the form of arrays. Between my wonky cache and better upscaling somehow magically appearing, performance on the pi has drastically improved. See video. The loading screen isn't just for looks: it more or less reflects actual loading progress after opencv is imported. Even the progress of the raw picture data being found/loaded into the list/cache and the emotion list being generated is represented in the loading bar. I have the pi communicating with a server script on my main computer that handles voice generation and will eventually communicate with oobabooga (other AI options are available), and of course the program takes the voice recognition data, looks only for the emotion keywords based on the picture library, and loads the set accordingly.
>>29504 yeah youre loading the pics to ram which is faster than reading from the disk, i say cache when i mean the cpu cache (ram is slow for the cpu so if you read from ram it reads an entire chunk into the cache because statistically youre going to ask for something nearby), you could use a profiler to see how many cache misses youre getting but its just autism at this point it looks fine, good work
Hey Mechnomancer, do you have a particular brand of servos that you'd recommend? I'm looking to pick up 5 or 10 of em for fixture prototyping over the next few months Non-continuous (up to 180deg on some of em), low torque (couple N*m at most), decent-ish speed (90deg/s?) btw, OpenGL ES is pretty well supported on the broadcom mystery-meat graphics chips, the free version of QT uses it and has a pretty straightforward canvas API too, if memory serves. Should be able to keep all the textures loaded and just swap out layers at runtime (50 bytes of VBO instead of an entire texture), so no delays and a sync'd framerate even during animation. >>29510 valgrind's cachegrind (and callgrind, to a lesser extent) would get you the benchmarks this anon mentioned, but I'd go for the hardware acceleration (GL ES) approach first
>>29522 I just get whatever servos I can. I tend to use the MG995 or the ETMall MG996R for low-torque applications (6x for $20), while GoolRC has 70kg ones and Injora has 35kg ones. All of these are the same size/form factor, but Injora comes with brand stickers you can put on your project. I get 'em off spamazon. I'm just aiming for the minimum viable waifu, as easy as possible. If y'all wanna do more fancy stuff like OpenGL be my guest but that ain't my jam, yknow? :) Got 2 of the modular systems done: adding more voice commands (and the scripts that run as the result) and emotions is as easy as making the files and naming them properly. A few extra features: a dedicated simon says protocol, a placeholder for when it will connect to the LLM, seeing what the bot sees. The weather forecast API appears to be down at the moment. Speech recognition has to be a separate program from the face display as it doesn't like to play nice with threading. Possibly some alternatives exist, but I kinda like having 2 programs as I can adjust one without having to restart the other. They communicate by putting data in a text file. Might be a little slow, but there is lag from the TTS engine anyway so another fraction of a second doesn't matter much. Making this modular has required so many loops it has driven me loopy ᕕ( ͡ʘڡ ͡ʘ)ᕗ
>>29449 >when theres only a few colours used you could use a bmp with indexed colours Very good advice. >>29451 >>29466 >>29500 >>29504 >>29602 Wow! That's a whirlwind of updates and a rather nice outcome to it all. GG, Mechnomancer! It's gonna be fun watching Spud come together. Cheers. :^) >>29522 >btw, OpenGL ES is pretty well supported on the broadcom mystery-meat graphics chips, the free version of QT uses it and has a pretty straightforward canvas API too, if memory serves. Excellent. That's really good to know, Anon. Thanks! :^)
>>29504 When I tried something similar, I started contemplating using GIFs for each sequence I need, hoping this would cut down on the memory costs and loading time. The idea was that the file format's compression algorithm would only update what's necessary in each moment.
Open file (523.49 KB 3120x4208 SPUD_screenface.jpg)
>>29623 Not sure if you're aware, but just in case (and for the other anons out there): from my experience, gif compression works by reducing colors/dithering, and optimization utilizes transparent pixels to only display the difference between frames. Anywho, did a quick test fit of the screen face, and just as I feared: I need to redesign the head because the screen juts out too far.
>>29666 I might have briefly read about this, but didn't remember. Thanks. The face looks nice, and a slightly bigger head will look better anyways.
Open file (1.06 MB 582x1446 spudsuit.png)
Waiting for the new head pieces to print, so I dressed SPUD in the morphsuit for the lulz. I definitely need to upgrade from the paper panels. Maybe even 2 morphsuits with padding sammiched in the middle would not only improve aesthetics but also protect against damage in case of a fall.
>>29672 Wow, this is maybe the best looking irl gynoid I have seen so far.
>>29672 the karen screen face is more my thing
Open file (35.81 KB 731x564 filez.png)
>>29673 Technically not a gynoid, as it doesn't have -nor do I plan for- any gynecological bits. After all, this particular unit is initially meant to be an artificial booth babe. :P When I do release SPUD I don't care what mods people wish to add. What a man does with his robot is none of my concern. >>29676 Well, I am thinking about customization in the future; you can do your own custom face, whether owo, realistic, iconographic or karen, as easy as making/editing the files :)
>>29677 >Technically not a gynoid, as it doesn't have -nor do I plan for- any gynecological bits. Yeah, this could be a point, but to me it's just that she's meant to be female. Most androids also don't have the male parts for reproduction.
Open file (9.08 MB 3201x1389 ouchie.png)
>>29681 I just figure gynoid should refer to the type of female android that has the fun bits ;) New head is complete, but SPUD got an owie in the process :'(
>>29683 Wait, is the screen broken?
>>29685 It's got a cluster of dead pixels :( Mental note: get a screen protector
>>29686 Damn, and that was a new screen too.
>>29683 >>29686 Great progress actually, Mechnomancer. Please don't be too bummed about the boo-boo on Spud's face. You can kind of think of it like EtR Lulu -- it gives her a charming dignity of having overcome obstacles! Just like you do, my friend! Keep. Moving. Forward. Cheers, Anon. :^) >=== -minor edit
Edited last time by Chobitsu on 02/20/2024 (Tue) 09:01:29.
>>29683 Can't you still return it on spamazon? I would totally just send it back and tell 'em it was broken when you first started it up.
Open file (1.26 MB 740x1062 beveragebottom.png)
Open file (922.27 KB 320x570 Tippity Tappity Test.mp4)
I found a beverage dispenser with a bottom almost exactly the same circumference as the current screen holder. So I carved a piece out of it and slapped the screen behind it. It even has a lip on one side to help hold the screen in! Talk about providence, eh? Now to 3d print a better chin.
>>29724 Neat! Very inventive appropriation Mechnomancer. Today I'm constantly on the lookout for such things, since a major goal here is to allow impoverished men to also build their own robowaifus. "One man's trash is another man's treasure", they tell me. Cheers. :^)
Open file (220.90 KB 307x467 chin.png)
>>29744 No sense reinventing the wheel :) I think the wig looks awful, so I've ordered a different one: a single wig instead of 2 combined together. Also installed the new chin, and will start work on re-printing some of the parts so they make a better shape under the morphsuit.
>>29752 The wig looks quite nice. But I might be a weirdo.
>>29752 And you'll be able to dress her up as a pirate for Halloween!
Open file (410.15 KB 559x465 headz.png)
>>29753 It might be my a̶u̶t̶i̶s̶m̶ artistic eye for perfectionism kicking in, but the top wig (crown) is actually rather small (even the reviews said so, but I didn't realize how smol it really was), so it has the appearance of the hair being plastered to the head rather than big and voluminous (see the red scribbles I did in the pic). I think it has to do with the screen tbh. Even so, the new wig will be arriving in a few weeks (it's one of a relatively esoteric character so no amazon prime turbo shipping). In the meantime I'll be trying out one of the other face "skins" I made that has a lower face silhouette on it. Just got done putting together a jukebox script, too. You need to put files in a specific folder and have the artist name in the mp3 metadata, but the script can more or less find songs when you say the artist name, play all the songs by the specified artist in a random order without repeating, and skip tracks when you say "next". I could do other metadata too, like song titles and albums, but I'll stick with the basics for now. I also found a nifty script that allows for a confidence interval on similar words, so if the speech recognition AI model thinks you said "whether forecast" instead of "weather forecast" I can add in that bit of fudge factor to make things easier.
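The stdlib can handle that fuzzy "whether/weather" matching on its own; a sketch with difflib, using a made-up command list (the cutoff is a tunable guess):

```python
import difflib

# hypothetical command vocabulary for illustration
COMMANDS = ["weather forecast", "play music", "next", "simon says"]


def match_command(heard, commands=COMMANDS, cutoff=0.75):
    """Map a possibly-misheard phrase onto the closest known command.

    Returns None when nothing is similar enough, so the caller can
    fall back to a "please repeat that" prompt.
    """
    hits = difflib.get_close_matches(heard.lower(), commands, n=1, cutoff=cutoff)
    return hits[0] if hits else None
```

Raising the cutoff makes SPUD stricter about what she accepts; lowering it makes her more forgiving of mumbling.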
>>29760 Since you've already ordered another wig, it might be a good idea to try styling this one as an experiment. Basic stuff, like curling, cutting, the like. That way you'll be better equipped to fix any issues that might happen with the wig you ordered.
Open file (234.82 KB 487x240 faces.png)
>>29761 I moved the sides further up and tried out a different face skin and compared the two. Only problem is now the hair isn't secured at all lmao
Open file (81.41 KB 541x454 1692831980580923.jpg)
>>29786 The right side looks super-cool now, Mechnomancer. Kinda like she's wearing a space/mech helmet! :) >yfw punished Spud
Thinking about it, I might increase the simplicity of SPUD's face assets further, like the attached, until I can perfect the physical face. Also, thanks to this screen face revision I should be able to swap between face types with relative ease. I think I can whip up a quick flat-screen-face module for anons who can't (or won't) shell out the $400 for a bendy screen.
Open file (7.04 MB 320x570 Jukeboxtest.mp4)
Jukebox is 95% complete, just need to add a pre-recorded voice prompt for when it misunderstands a command >_<
>>29907 <im sorry dave
>>29907 Nice progress, Mechnomancer. >>29909 Kek.
>>29907 >sorry, anon, you don't have the lincense to listen to that song.
>>29922 Actually, I do. Stringstorm is an independent music composer on youtube who made music for "If The Emperor Had a Text-To-Speech Device" (plus other original songs) and occasionally has his entire discography up for free on Christmas Day :)
Printing off some aesthetic panels for SPUD, starting with the arms. Full range of motion is preserved.
>>29940 Those should fill out the morphsuit nicely.
Open file (263.48 KB 299x422 glasses.png)
Open file (2.17 MB 920x1442 armpanelz.png)
I got a brainwave for the physical face: lcds disguised as glasses! Also the arm panels just look so nice I can't stand them being covered up by the morphsuit. Going for some Alita vibes lol.
>>29959 Arms do look really good, Mechnomancer. I'll be interested to see your results for the glasses. BTW did one of Spud's 'Chii ears' break, or maybe you have it open for service? Looks like it's drooping open. Good luck, Anon! Cheers. :^)
>>29959 That's a solid idea. I saw a video of a girl building a Clank for her cosplay project, and I liked how she did the eyes. https://www.youtube.com/watch?v=xIcRPAMU7oc Custom animations behind resin printed, smoothed and polished lenses.
Open file (274.42 KB 515x515 backside.png)
Open file (886.51 KB 759x1073 front.png)
Open file (899.48 KB 615x1073 hipback.png)
Open file (1.36 MB 805x1070 frontdown.png)
Open file (867.86 KB 795x1075 front_3_4.png)
Been spending the past few days printing parts for SPUD's hip panels. A total of 10 separate panels to keep down the amount of filament for supports, each panel taking about 8 hours to print (thankfully I have 2 printers). They will definitely need some sanding >_<; Also printed some new curved rib pieces. Need to adjust some of the pvc in the legs before I can get her standing again, but I did find my mech scale while cleaning the mechworks so I should be able to get an official weight soon. The next bit of coding I need to deploy is a) having voice-command scripts able to tell the main script to play audio without requiring the TTS server, e.g. playing a .wav file of SPUD saying "oops" or "try again" with the mouth flaps syncing to it, b) copying over the weather forecast script with the cache (the weather forecast service I use doesn't like more than about 10 API requests in a day lol) and c) mapping the servos (again) and integrating the old animation script.
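The weather cache in b) boils down to remembering the last response and its timestamp. A sketch, with a hypothetical fetch callable standing in for the real API request; the 3-hour expiry is an assumption chosen to stay under a ~10-calls-per-day limit:

```python
import time


class CachedForecast:
    """Reuse the last API response until it goes stale, to stay under rate limits."""

    def __init__(self, fetch, max_age_s=3 * 3600):
        self.fetch = fetch          # callable that actually hits the API
        self.max_age_s = max_age_s  # 3 h expiry -> at most ~8 real calls/day
        self._value = None
        self._stamp = 0.0

    def get(self):
        """Return the cached forecast, refetching only when it has expired."""
        now = time.time()
        if self._value is None or now - self._stamp > self.max_age_s:
            self._value = self.fetch()
            self._stamp = now
        return self._value
```

Every voice command can then call `get()` freely; the API only sees a request when the cached copy has actually aged out.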
>>30060 That is really excellent-looking progress, Mechnomancer. Looking forward to seeing Spud fired up and chatting away with her newest look! Cheers. :^)
>>30060 whoa boy, whoa mama
>>30061 Integrating the LLM is a little ways off: I still have to figure out how to get the oobabooga AI to act sane through the API. I think it has something to do with the history formatting not working properly, so I might have to implement my own solution. That will probably be just some list manipulation, which should be a piece of cake compared to the modular stuff I implemented (scraping directories and formatting filenames into voice commands and then detecting them by comparing every word). Custom history would work by adding each response to a list with a label prefix (user:/AI:) and deleting the oldest response to ensure it doesn't exceed the AI's limit. Maybe have a second set of history that I can ask the AI to summarize into a sentence or two (if that takes a while I can have SPUD say "Hold on, I'm thinking" or something if the user tries to talk to her while this is going on). Then reintegrate the memory retrieval protocol and inject the user preferences into the AI prompt, so it might one day just ask why the user likes "Samuel Adams" or whatever. Also, the wig arrived today, but I have to install a new forehead plate cuz without it the bangs are shaped all funny.
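The rolling-history list manipulation described could look something like this; the labels match the post, while the character budget is a placeholder for whatever the model's real context limit works out to:

```python
def trim_history(history, max_chars=2000):
    """Drop the oldest exchanges until the history fits the context budget."""
    while history and sum(len(line) for line in history) > max_chars:
        history.pop(0)  # oldest turn is at the front of the list
    return history


def add_turn(history, speaker, text, max_chars=2000):
    """Append a labeled turn ("user:"/"AI:") and keep the list under budget."""
    history.append(f"{speaker}: {text}")
    return trim_history(history, max_chars)


def build_prompt(history, user_text):
    """Assemble the prompt the LLM actually sees, ending on the AI's cue."""
    return "\n".join(history + [f"user: {user_text}", "AI:"])
```

A token count from the model's own tokenizer would be a better budget than characters, but the list mechanics stay the same.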
Since you have dead pixels, are you still set on using that same monitor for a full face? I was thinking a color eink display would be better since it won't glow in your eyes, which has a potentially more immersive effect, and the lower frame rate of these kinds of monitors likely isn't as big a deal when doing an anime-styled face. Eink can be flexible too, though that might be costly for a large screen and difficult to find, so you could go back closer to your original design with individual monitors for the eyes, but leave room for eyebrows in the monitors for more expression, and possibly use a third monitor for the mouth. Though I am uncertain if that would be any cheaper than a single large monitor. In addition to that, you could use some sort of air pump that inflates small balloons under the cheeks so SPUD's cheeks can puff up when smiling or pouting, assuming you use an animegao kigurumi or a silicone moulded doll face instead of the pepakura or stiff-looking plastic you were using before. That should be quieter than the noisy actuators you were using before and have a more humanoid feel. But I'm just a wanderer who stumbled on this board so don't mind me much.
>>30082 >But I'm just a wanderer who stumbled on this board so don't mind me much. Hello Anon, welcome! Please have a good look around the board while you're here. If you'd like to, please introduce yourself in our Embassy thread : (>>2823) . Great ideas about the face! BTW, we have a couple of related-threads here on these topics: Robo Face (>>9), and Robot Vision (>>97) . Please browse through those (and dozens of other good) threads to see if anything piques your interests. Thanks for stopping by and contributing, Anon! Cheers. :^)
>>30081 If you want some instructions on how to do this with GPT4ALl instead, I have that python code already..
>>30090 Thanks to you I looked up GPT4all and found some python code, I'll be sure to experiment with it once the model downloads. >>30082 Flexi-screen face is a temporary solution until I make a more articulated physical face (also an excuse for me to buy one to play with) :) Physical face will have servos in the cheeks to pull the mouth into a smile, so that should add a bit of life to it. Took a leaf from your book and checked out some e-ink screens: I could get a pair with color for relatively cheap (cheaper than the flex-screen, anyway). I'll have to see how long the screen displays a pic while unpowered, as that could eliminate the need for physical eyelids.
>>30090 If you are experienced, there is also https://sillytavernai.com/
Open file (24.78 KB 982x630 gptforall code.png)
>>30093 From what I can tell, SillyTavern is more of a front-end that can interface with various AIs including (but not limited to) chatgpt and oobabooga :) I did figure out some code for GPT4All, so I've attached it (so easy!). The biggest problem was figuring out why the AI was pretending to be the user (it just does that lol) and how to stop it (stop the text gen if parts of "USER" are in the token). Also created a list for user responses and AI responses that can be more easily read out, and randomized the temperature to get a bit more variation in responses. Some of its responses remind me of chatgpt before it was nerfed.
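That "stop when USER shows up" trick is just stream filtering, sketched here independently of any particular LLM library (a generic token iterator stands in for the model's streaming output):

```python
def collect_reply(token_stream, stop="USER"):
    """Accumulate streamed tokens, cutting off as soon as the model
    starts impersonating the user (i.e. the stop label appears).

    Checking the accumulated text rather than each token catches the
    case where "USER" arrives split across two tokens.
    """
    out = ""
    for token in token_stream:
        out += token
        if stop in out:
            return out[: out.index(stop)].rstrip()
    return out.rstrip()
```

Whatever streaming API the model exposes, feeding its token iterator into collect_reply() gives back just the AI's half of the exchange.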
Open file (635.61 KB 545x1065 bellybutton.png)
Open file (451.19 KB 799x855 newig_face.png)
Open file (1.39 MB 1525x519 capz.png)
Added an ab plate and some caps on the hip side motors; all of these press fit onto the chonky side motors. Going to build the thigh panels out of EVA foam so they don't break when SPUD sits. The screen face looks a bit odd with the wig; maybe if I added a bit of a fade on the top of it or something to make the transition to the edge of the screen less harsh... Still, looks kinda nice. Now I need to deploy GPT4all into the server script. Unfortunately (or fortunately, as the case may be) the mistral-7b-instruct model emotes only through ASCII emojis, which act a little funny in Python IDLE but fine in Notepad. But at least the format it uses for the emotes is consistent enough to develop a protocol... the fun part will be making the graphics for 38 different emotions. At least I got a good base to start with :)
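Since the emote format is consistent, the protocol could be as simple as a regex pass over the reply. The `:emote:` marker style below is purely a placeholder; the real pattern would be written to match whatever mistral-7b-instruct actually emits:

```python
import re

# hypothetical marker style, e.g. "I'd love to! :smile: Let's go."
EMOTE_RE = re.compile(r":([a-z_]+):")


def split_emotes(reply, known_emotes):
    """Pull recognized emote markers out of a reply, returning clean text
    for the TTS engine and the emote names for the face display."""
    found = [m for m in EMOTE_RE.findall(reply) if m in known_emotes]
    text = EMOTE_RE.sub("", reply)
    return " ".join(text.split()), found  # collapse leftover double spaces
```

Filtering against the known emote list means an unsupported emotion from the model degrades gracefully instead of triggering the death-stare fallback.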
>>30125 Very nice! I like the 'belly boop' plate & the new wig, Mechnomancer. :^) Some observations: * I think the cap is great, and it looks aesthetic. I'm concerned that it may trap too much heat in the actuator housing and overheat the motor? * The ears are a good initial design, and very cute. Now that you are homing in on the face proportions, however, they seem rather oversized to my eye. If you undertake a redesign for them at some point, I'd suggest you make them a bit more petite, more in keeping with Spud's cute feminine characteristics. * I think you have a good idea about the 'forehead fade' for her face illumination. You might also consider making that area more rounded as well, in keeping with your graphical design of the shape of her jaw/chin. * If you do go with color e-ink, you can probably place illuminating LEDs embedded within the bezel of her 'helmet' mask -- a la those commonly used in filmmaking for astronaut movies, etc. * I think you're very close to nailing the general proportions of the face to the rest of the upper body. This is easy to miss, but your pic #1 really shows this clearly as already working well. She's come a long way in a short time, Anon. Great progress! Cheers. :^) >=== -minor edit
Edited last time by Chobitsu on 03/05/2024 (Tue) 05:34:50.
Open file (131.68 KB 660x1624 spud3dmodel.png)
>>30126 The caps only cover the tip of the 550 motors on the servo, about 1/4 its length: there is also enough resistance in the gearbox that SPUD nearly stands *unpowered* so those motors won't be subject to much overheating unless SPUD is dancing non-stop for like 20 minutes or something. Even then, I could probably slip some heatsinks in there. The ears have to be their current size in order to hold the USB hub/hdmi cable combo. Can't make them smaller unless I get a smaller usb hub in there (having 4 ports is nice tho). I could make a headband sort of thing for where the screen meets the forehead, but this face design has already proven its point, and with a few more tweaks it should suffice until I re-do the articulated cloth face with the e-ink eyes. I designed SPUD's body pretty much in 1 single file and made sure every part looks nice together (with an exception of the PVC linkages in the limbs, which I eyeballed)
>>30127 I understand. Please understand Anon that when I'm making a constructive critique of someone else's project work, I'm not in any way denigrating that work, nor belittling the creator of it. Quite the opposite. I'm simply bringing an objective set of eyes to the problem, and if I make such a comment I'm literally trying to help 'plus' the work. After all, we're all in this together! >I designed SPUD's body pretty much in 1 single file and made sure every part looks nice together That it does, Mechnomancer! Looking forward to your progress. :^)
>>30132 Hehe I understand. I'm just responding to your constructive criticism. :) Were it not for posting here I'd have no idea about things like gpt4all. It is always nice to get feedback from folks working on the same subject. In my experience the biggest negative nancies are the folks who haven't built anything, hence have no idea of the effort/requirements it takes and thus probably don't have anything of value to contribute anyway. I haven't experienced it from folks here, and I have been on the receiving end of genuine butthurt elsewhere enough times to accurately identify it. There are so many design choices one can take that unless you just dive in and do something, one can be paralyzed by the sheer number of potential solutions.
>>30132 Thanks for your understanding, and for your mature professionalism Mechnomancer. :^) >There are so many design choice one can take that unless you just dive in and do something, one can be paralyzed just from the sheer number of potential solutions. This. It's hard for me to conceive of any other design & engineering undertaking apart from creating DIY robowaifus that has so many 'paralysis by analysis' opportunities! :D
>>30092 >Physical face will have servos in the cheeks to pull the mouth into a smile, so that should add a bit of life to it. Taking the hard route then. So many humanoid bots wind up looking off or even creepy as can be. It will take a lot more than just the four cables in your earlier drawing to make more convincing expressions I would think but it depends if you're combining this with other methods. >Took a leaf from your book and checked out some e-ink screens: I could get a pair with color for relatively cheap (cheaper than the flex-screen, anyway). Good to hear something I said is going to be used by someone. >I'll have to see how long the screen displays a pic while unpowered, as that could eliminate the need for physical eyelids. E-ink displays last a long time if not indefinitely without power. I have seen them on grocery store shelves with no visible power source as labels. At least that is the case when stationary. I am not sure what happens if they are exposed to vibrations, changing lighting conditions and EMFs.
Open file (959.28 KB 647x1091 facefade.png)
>>30133 >It's hard for me to conceive of any other design & engineering undertaking apart from creating DIY robowaifus that has so many 'paralysis by analysis' opportunities! :D Mechs and Powerarmor is another undertaking rife with such paralyzing opportunities. Since SPUD has GIMP installed adding gradient to the face texture took like 5 seconds :D
>>30146 >Mechs and Powerarmor is another undertaking rife with such paralyzing opportunities. I can only imagine. And human-safety is an even more-present concern in that case, AFAICT. >Since SPUD has GIMP installed adding gradient to the face texture took like 5 seconds :D That is indeed an improvement to Spud's look, IMO. Perhaps you could burn (lighten) around her eyebrows a bit as well? I think that would make her face 'pop' out of the mask a bit better. Cheers. :^)
Tested adding an additional expression to SPUD's library, and it went flawlessly. Oh yeah, also integrated the LLM into the server script (as well as a timeout). Having a reaction to one's voice (letting you know she is thinking about it) makes a big difference.
Open file (243.31 KB 329x457 fatface.png)
increased face width by 22 pixels, looks much better imo
Open file (752.87 KB 1000x565 The_Shootist_John_Wayne.png)
>>30161 Nice! >Having a reaction to one's voice (letting you know she is thinking about it) makes a big difference. Agreed. Her mouth seems a bit like a frown to my eye, rather than a "hmm, let me think about this.." Maybe a half-frown? >also <Red Dead Redemption Heh :D https://www.youtube.com/watch?v=9UIRoW13gOw >>30175 Having a full-screen face certainly offers many easy-to-change opportunities for nice variations!
Open file (18.36 KB 179x150 expression.png)
>>30178 Well this is what the face looks like when not obscured by hair :)
>>30182 Very cute design. You made a good choice here for Spud, Mechnomancer. :^)
Open file (675.43 KB 682x384 randeyetest.mp4)
Spent the past 12 hours or so slamming my head against a new addition: movable eyes! It is as easy to set up as naming the proper files! If iris files (and supporting alpha files) exist for an emotion file set, it will use them to make the irises move according to a coordinate set. If the files are not found it will display the static eye file (good for icon-like eyes). The graphics are a wee bit rough, but it's something probably only I'd notice anyway. Eye coordinates are a variable to be set like any other, where negative x/y values go left/up and positive x/y values go right/down (just don't exceed 10 in any direction tho). Currently I have them set to move to random positions during the blink, but other options are possible. I have them on the main computer; just need to transfer over to SPUD's avatar.
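A sketch of that coordinate convention (negative x/y is left/up, positive is right/down, magnitude capped at 10). The pixels-per-unit scale is an assumption for illustration:

```python
def iris_offset(x, y, px_per_unit=4):
    """Clamp eye coordinates to [-10, 10] and convert them to pixel offsets.

    Negative x/y moves the iris left/up, positive moves it right/down,
    so out-of-range requests can't push the iris off the eye graphic.
    """
    cx = max(-10, min(10, x))
    cy = max(-10, min(10, y))
    return cx * px_per_unit, cy * px_per_unit
```

Clamping in one place means the random-glance-during-blink code can pick any numbers it likes without risking a broken frame.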
>>30224 That's good news, Mechnomancer. I consider it a key metric that a project is both fundamentally sound and on the right course if things get easier and easier in the system, and the path forward with it gets clearer and clearer as time goes on. Please keep up the great work! Cheers. :^)
>>30226 A little copypasta and the code was easily deployed onto the avatar. Even added a little bit where the eyes might change direction without blinking. I could do a little tweak externalizing the AI object recognition so I can get smooth eye movement, but that only runs while SPUD's eyes are closed, and eyes rarely move smoothly anyway. Might try to make Spud track any people she sees, idk. I also put in a bit of code so when the pi detects an external monitor it automagically adjusts the face to the proper-ish orientation.
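A minimal sketch of how that external-monitor detection might work on a pi running X11, assuming the text of `xrandr --query` is available (the output name `HDMI-1` and the rotation angles are placeholders, not SPUD's real config):

```python
def connected_outputs(xrandr_text):
    """Parse `xrandr --query` output text and return the names of connected outputs."""
    outputs = []
    for line in xrandr_text.splitlines():
        parts = line.split()
        # output lines look like: "HDMI-1 connected 1920x1080+0+0 ..."
        if len(parts) >= 2 and parts[1] == "connected":
            outputs.append(parts[0])
    return outputs

def face_rotation(outputs, external_name="HDMI-1", external_deg=90, default_deg=0):
    """Choose a rotation angle for the face surface based on which display is present."""
    return external_deg if external_name in outputs else default_deg
```

You'd feed `connected_outputs()` the text from something like `subprocess.check_output(["xrandr", "--query"], text=True)`, then rotate the rendered face surface by the returned angle.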
>>30227 Facial animation is the subtlest of all animation. Good luck, Anon! :^)
>>30227 now for the shoulder-mounted rocket launcher, or just a gimbal for the head is fine i guess. i assume it's not that different from what you're doing with the eyes. my telescope has a computerized one and those things are heavy, so it should work with a head. you could reuse the same code: just subtract the object position from the current position to make it center on the object
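That subtract-and-center idea is essentially a proportional controller: each frame, move the servo a fraction of the pixel error. A tiny sketch, with made-up gain and angle limits:

```python
def track_step(target_px, frame_size, current_angle, gain=0.05, limits=(-45, 45)):
    """One proportional tracking step toward a detected object.

    target_px: detected object centre along one axis, in pixels.
    frame_size: camera frame width (or height) in pixels.
    current_angle: current servo angle in degrees.
    gain and limits are illustrative guesses, not tuned values.
    """
    error = target_px - frame_size / 2          # pixels off-centre (signed)
    new_angle = current_angle + gain * error    # small step toward the target
    lo, hi = limits
    return max(lo, min(hi, new_angle))          # respect the servo's travel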
>>30231 As Chobitsu mentioned earlier >>26388, this place really isn't meant for discussing the construction of waifu-terminators. Make waifu-wuv not war.
Open file (17.74 KB 1069x702 Untitled.png)
>>30240 war is the pinnacle of love. could still do that with the head, it's the same thing: a gimbal is just two motors perpendicular to each other
Open file (164.14 KB 1500x1500 71YNUuUeFuL._SL1500_.jpg)
>>30230 One of the best ways to avoid the uncanny valley is to go iconographic. Through simplification (and avoiding looking like a real human) the brain fills in the gaps with what it wants, rather than having to deal with the inaccuracies that exist. Even so, I ordered some 2-part casting silicone off Amazon so I can experiment with molds/face making. Cuz, yknow: it's difficult. But I'll make it easier on myself by going for the f̶e̶l̶i̶n̶e̶ anime look. However, many robots out there just seem to put a rubber mask over the face mechanisms. Instead, I'm going to be attaching cables and such directly to the face to move it. Will probably end up making a whole new head, but hey, that's how design goes. >>30257 Spud's neck already has motors (as do all her joints), I just don't have them on right now - buck converters have yet to be installed into the shell and the servo channels properly mapped. The object recognition AI returns screen coordinates of the people it detects, so it isn't that difficult to turn those into movement (I did that before the overhaul to the screenface, can't seem to find the footage tho).
Open file (2.81 MB 2101x1049 ankle.png)
Forgot to get mold release for the silicone. Oops! Anyway, finally redesigned the shin to be more stable - gonna end up with some femisapien/megaman-esque shins I guess - and assembled one (top 3). It is much sturdier than the old one (middle bottom) since it has a rotational axis on both sides. Once the second ankle is complete SPUD might be able to stand on her own two feet 𝘸𝘩𝘪𝘭𝘦 𝘶𝘯𝘱𝘰𝘸𝘦𝘳𝘦𝘥 ! Also going to start work re-designing the head for the eventual silicone face.
Open file (1.27 MB 541x1461 unpowered spud.png)
As I suspected: not only can SPUD stand with the new ankles, but can do so 𝘸𝘩𝘪𝘭𝘦 𝘶𝘯𝘱𝘰𝘸𝘦𝘳𝘦𝘥 .
>>30405 That's a good posture lol Are you going to round the feet out? Or at least cut off the corners for a sort of elongated octagonal or hexagonal shape. Would look better than rectangles.
>>30407 After I get Spud walking on those planks I'll be making the soles swappable. That way I can quickly/easily see what sort of foot shapes I can get away with.
>>30408 You're planning to make SPUD wobble forward kinda like a penguin, did I understand everything right?
>>30424 Well the plan is to get 'er to walk kinda like a femisapien: https://youtu.be/1UQJoVSl0Mw?t=8 Not sure how wobbly or penguin-like you would consider that.
>>30425 To get that stable, if you copied that walking style you'd need the ankles to tilt side to side. Notice how the femisapien steps on the inner side of the foot, and it causes a near fall over to the side on every step because of that? You need ankle tilt to counter that.
Open file (5.83 MB 480x854 Leg Dance.mp4)
>>30427 The pics here >>30373 are of the ankle motor :) I had to reprint some parts so the joint was more stable. Here is a vid of what it was like before (also the ankles were tilting the wrong way lol).
>>30428 I see. Yeah that would do. It seems a lot of side-to-side leaning walks involve stepping either flat-footed or on the outer edge of the foot first. If you could put some sort of mechanism on the torso to keep the center of gravity more centered, you'd possibly get a swaying-hips effect if any stability issues still arise.
>>30429 Did someone say Torso Twist? It's here :D >>29145 I need to properly mount the gyroscopic sensor in the torso tho.
>>30430 A twisting motion would help some appearances, but I meant keeping the upper part of the body from swaying side to side, because the tallest parts could sway a fair amount, potentially throwing off balance. Though you may have thought of that before.
Open file (2.28 MB 1533x1081 neckpistons.png)
Open file (370.17 KB 817x583 JBweld balljoints.png)
Open file (84.76 KB 743x539 neck fit.png)
Been having good weather so I've been doing non-robowaifu robotics over the past few days :P Since SPUD's head is only really held up while the servos are powered (and the servos heat up a bit while doing so), I figure since I'm eventually re-doing the head I should re-do the neck so it provides a bit more support. S̶t̶o̶l̶e̶ Modified some inmoov parts, but 3d-printed ball joints at this scale don't work very well. As a result, I'm just JB-welding some ball joints (originally meant for a lawn tractor steering arm) onto the 3d-printed screw drive sleeve. Won't be pretty, but I can cover it up with a 3d-printed shroud or something. Two screw-drive servos will roughly imitate neck tendons, with a third servo on top to twist the head left/right.
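A toy linearized model of the two-tendon idea: pitch extends both screw drives together, roll extends them differentially, and the twist servo is independent. All the constants below are illustrative guesses, not measurements from SPUD:

```python
def tendon_lengths(pitch_deg, roll_deg, rest_mm=80.0,
                   pitch_mm_per_deg=0.6, roll_mm_per_deg=0.4):
    """Map desired head pitch/roll to left/right screw-drive extensions (mm).

    Linearized toy model: both tendons extend together for pitch (nod)
    and in opposite directions for roll (head tilt). Real geometry is
    nonlinear; this is only a first approximation.
    """
    left = rest_mm + pitch_mm_per_deg * pitch_deg + roll_mm_per_deg * roll_deg
    right = rest_mm + pitch_mm_per_deg * pitch_deg - roll_mm_per_deg * roll_deg
    return left, right
```

For small angles this stays close to the true cable geometry, and it's enough to drive both screw-drive servos from a single (pitch, roll) command.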
>>30510 Looks good, but I don't understand what part you welded to what.
>>30512 This. I need to secure the servos and find the optimal height to get the maximum tilt range.
Open file (487.64 KB 734x528 kamichan idea.png)
With spring(ish) weather comes more time spent on my heavy-duty robotics, so not much robowaifu progress (unless your robowaifu needs a 1350lb servo). When it gets a bit warmer I'll be trying out the silicone I bought. Sometimes stepping back from a project and working on a second project can lead to ideas about the first project: I figured a good compromise for the next iteration of the face would be a hybrid of a physical face and screen face: make the eyes physically motorized like kami-chan here, but cut out holes for the irises and stick a smol lcd screen behind them. That way even when the screens are off and the eyes are open, it still looks kinda cartoony and cute. I did get some full-color e-ink screens for a laugh; however, they take 30 seconds of screen flashing to change a picture. Not very good for robowaifus. Good for nametags and signs tho.
Super-stoked seeing all the great progress continuing, Mechnomancer. Also, the new balljoints look really cool. >>30405 >but can do so 𝘸𝘩𝘪𝘭𝘦 𝘶𝘯𝘱𝘰𝘸𝘦𝘳𝘦𝘥 . Perfect!
