/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Happy New Year!

The recovered files have been restored.



“The race is not always to the swift, but to those who keep on running.” -t. Anonymous


Open file (380.52 KB 512x512 1 (25).png)
Open file (359.76 KB 512x512 1 (58).png)
Open file (360.42 KB 512x512 1 (62).png)
Open file (330.60 KB 512x512 1 (93).png)
Open file (380.42 KB 512x512 1 (104).png)
AI Robowaifu Art and References ** SoaringMoon 11/25/2022 (Fri) 06:43:32 No.17763 [Reply] [Last]
I generated a whole much of neat images with Stable Diffusion 1.6. Enjoy. You are free to use the for whatever. >OP images are my five favorite of the bunch. Some proportions are off obviously. < "a robowaifu with [color] hair, digital painting, trending on artstation" Was the generation phrase. --- >Sorry to spoil all your files, rather than just the one (w/ Lynxchan it's all or nothing after the fact). The Problem Glasses are a Leftist dog-whistle that is rather distasteful around here (and also a red-flag). Certainly not something we would want to look at year-after-year in the catalog. Hope you understand, OP. ** Probably best to limit it to image generation, but also tolerating clips. Anything more advanced goes into the current propaganda thread : (>> TBD) . >=== -edit subject -add footnote
Edited last time by Chobitsu on 09/06/2024 (Fri) 09:14:10.
286 posts and 550 images omitted.
>>32653 3D bust to 3D model via Hunyuan3D >>36287
>>36072 Heh. BTW, this is a good example of our opus about >Mind the fork, lass... (cf. >>4313, et al) If we watch our p's & q's, dot all our i's and cross all our t's, then we too can have snu-snu-sized waifus such as this, that we can casually carry around! :D
>>33117 I burned through the rest of my credits on Runway AI. I wasted 60-75€ over the last few months on a subscription which I didn't use. Now I'm out; this here is the rest. Now that I look through my pics here >>33029, I wish I had used some others or just made more. Maybe I will, but rather with another platform than Runway, maybe Pikalabs or Luma. I only post the relatively good videos here; the other ones are more flawed. The first three are about the robot waifu in the kitchen; these are just three variants of the same video. The third one is the shortest, but the endings of the others are a bit flawed.
>>36461 Apparently the max per posting is 20MB, not just per file.

Speech Synthesis/Recognition general Robowaifu Technician 09/13/2019 (Fri) 11:25:07 No.199 [Reply] [Last]
We want our robowaifus to speak to us, right?
en.wikipedia.org/wiki/Speech_synthesis https://archive.is/xxMI4
research.spa.aalto.fi/publications/theses/lemmetty_mst/contents.html https://archive.is/nQ6yt
The Tacotron project:
arxiv.org/abs/1703.10135
google.github.io/tacotron/ https://archive.is/PzKZd
No code available yet; hopefully they will release it.
github.com/google/tacotron/tree/master/demos
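Before any synthesizer (Tacotron included) sees your text, a front end has to normalize it, e.g. expanding digits into words. A minimal sketch of that step; the function names here are just illustrative, not from any of the linked projects:

```python
# Minimal text-normalization sketch for a TTS front end.
# Expands integers up to the millions into words, so a synthesizer
# sees "twenty three" instead of "23".
import re

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def number_to_words(n: int) -> str:
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, rest = divmod(n, 10)
        return TENS[tens] + (" " + ONES[rest] if rest else "")
    if n < 1000:
        hundreds, rest = divmod(n, 100)
        out = ONES[hundreds] + " hundred"
        return out + (" " + number_to_words(rest) if rest else "")
    thousands, rest = divmod(n, 1000)
    out = number_to_words(thousands) + " thousand"
    return out + (" " + number_to_words(rest) if rest else "")

def normalize(text: str) -> str:
    # Replace every run of digits with its spoken form.
    return re.sub(r"\d+", lambda m: number_to_words(int(m.group())), text)
```

Real front ends also handle dates, abbreviations, and currency, but the digit case alone already fixes most garbled TTS output.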


Edited last time by Chobitsu on 07/02/2023 (Sun) 04:22:22.
357 posts and 137 images omitted.
>>36419 Isn't Narrator in the accessibility settings just a built-in text-to-speech program?
>>36383 >>36417 Thanks for your help, Anons! :^)
>>36423 I tried that at first, but the problem is that as far as I know, it reads EVERYTHING on the screen.
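FWIW, Windows also ships the SAPI voice, which speaks only the string you hand it (unlike Narrator reading the whole screen). A hedged sketch driving it through PowerShell from Python; this assumes a stock Windows install, and is untested outside that:

```python
# Speak only a chosen string on Windows via the built-in SAPI voice,
# by shelling out to PowerShell's System.Speech assembly.
import subprocess
import sys

def build_speak_command(text: str) -> list:
    # Escape single quotes for the PowerShell single-quoted literal.
    safe = text.replace("'", "''")
    script = ("Add-Type -AssemblyName System.Speech; "
              "(New-Object System.Speech.Synthesis.SpeechSynthesizer)"
              f".Speak('{safe}')")
    return ["powershell", "-NoProfile", "-Command", script]

def say(text: str) -> None:
    if sys.platform != "win32":
        raise RuntimeError("This SAPI sketch is Windows-only")
    subprocess.run(build_speak_command(text), check=True)
```

On Linux (or under WSL with audio passthrough) the equivalent would be shelling out to `espeak` or piping into a local model instead.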
>>36419 >but me and many others are on Windows Okay, I assume this is for development, while the real system will more likely be Linux. Anyways, I don't know how exactly this works, but I think you can use the embedded Linux in Windows or whatever it is, and I assume there's also a repository. WSL: https://learn.microsoft.com/en-us/windows/wsl/about
>>36447 Yeah, WSL currently defaults to an Ubuntu 24 variant system + terminal. It's not a perfect match, but it's close enough in most respects. For example, I've been able to build & successfully run juCi++ [1][2][3] on it (this is a moderately complex & dependency-laden GTKMM -based GUI application, built from source) without any hiccups. This subsystem is very simple to set up, and I'll be happy to help anyone here who may be struggling to do so. Hopefully it can support Anon's other development needs, and if not then moving over to a full Linux system will be all the easier for them afterwards. Cheers. :^)
---
1. https://gitlab.com/cppit/jucipp/-/blob/master/docs/install.md#debianlinux-mintubuntu
2. Here's a one-liner to copypasta onto your new Ubuntu terminal for installing all its dependencies:
sudo apt-get install libclang-dev liblldb-dev || sudo apt-get install libclang-6.0-dev liblldb-6.0-dev || sudo apt-get install libclang-4.0-dev liblldb-4.0-dev || sudo apt-get install libclang-3.8-dev liblldb-3.8-dev; sudo apt-get install universal-ctags || sudo apt-get install exuberant-ctags; sudo apt-get install git cmake make g++ clang-format pkg-config libboost-filesystem-dev libboost-serialization-dev libgtksourceviewmm-3.0-dev aspell-en libaspell-dev libgit2-dev
Then just follow the rest of the install instructions from the link above (ie, git clone --recursive https://gitlab.com/cppit/jucipp , etc.)
3. WSL is also a great platform for Windows users to build & run BUMP for archiving this board, btw ( >>14866 ).
>===
-prose edit
-add footnote/hotlink
-add dependencies/BUMP footnotes
Edited last time by Chobitsu on 02/03/2025 (Mon) 15:36:01.

Open file (2.28 MB 320x570 05_AI response.mp4)
Open file (4.77 MB 320x570 06_Gyro Test.mp4)
Open file (8.29 MB 320x570 07B_Spud functions.mp4)
Open file (1.06 MB 582x1446 Bodysuit.png)
SPUD Thread 2: Robowaifu Boogaloo Mechnomancer 11/19/2024 (Tue) 02:27:15 No.34445 [Reply] [Last]
This first post is to show the 5 big milestones in the development of SPUD, the Specially Programmed UwU Droid. You can see the old thread here: >>26306
The end goal of SPUD is to provide a fairly high-functioning robot platform at a relatively low cost (free code but a few bucks for 3d print files) that can be used for a variety of purposes such as promotional, educational or companionship.
All AI used is hosted on local systems: no bowing to corporations any more than necessary, thank you. Various aspects of the code are/will be modular, meaning that adding a new voice command/expression/animation will be as easy as making the file, naming it and placing it in the correct folder (no need to mess around with the base code unless you REALLY want to).
While I'm researching more about bipedal walking I'll be making a companion for SPUD to ride on, so it might be a while before I return to the thread.
130 posts and 72 images omitted.
Open file (329.42 KB 1431x1051 rough foam panels.jpg)
>>36283 >control electronics/remote/etc I mean perhaps I could: I do have a spare one sitting around that is controlled via smartphone app. It would be possible to make it switch between app control and onboard computer (SPUD) controlled. >robo doog wheelchair Having SPUD looking like she's riding the doggo is an option, but having the entire robot look like a single unit is rather tempting. Been roughing out some panel shapes with scrap foam for SPUD, I'll definitely have to make some covers for SPUD's lumpy legs, and they'll probably lie a bit better when compressed with a morphsuit.
Open file (116.04 KB 304x1078 robowaifu morphsuit.jpg)
Yeah, SPUD looks ok in her morphsuit. Need to figure out how to get it to hug the boobles more, and shift the codpiece down by about 6cm so the gap between the hips/shin is less pronounced. And print some panels to cover the transition where the squish thighs go into the knees.
>>36364 >>36367 Very nice progress with the pads + suit, Mechnomancer! This should look really slick once you've gotten everything worked out here. Cheers. :^)
Open file (43.94 KB 404x448 concept.jpg)
>>36367 >morphsuit Do you plan on changing out the suit often? You could always attach velcro to the boobs. The red area is where I'd put some; blue is if there's not enough boob-hugging with the red alone. You could maybe utilize velcro for the other parts as well.
>>36364 This looks really promising, especially considering a power mesh suit on top keeping everything in the right shape. Probably better to use leggings and some top separately, to keep her internals more accessible. Also, maybe roll some soft textile up for the internals of the legs. >>36367 Ah, this progressed quickly. I didn't look at the other pictures while writing the above. Excellent.

Open file (23.97 KB 474x474 th.jpeg)
Local Non-LLM Chatbot GreerTech 01/13/2025 (Mon) 05:35:09 No.35589 [Reply] [Last]
I was thinking, instead of using a costly LLM that takes a high-end PC to run, and isn't very modifiable, what if we use a simple chatbot with prerecorded voice lines and/or Text-To-Speech? This was partially inspired by MiSide, I realized that you don't need complex responses to create a lovable character <---> >(Chatbot General >>250 ) >(How may we all accomplish this? >>35801, ...) >=== -add crosslinks-related
Edited last time by Chobitsu on 01/18/2025 (Sat) 21:34:03.
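OP's keyword-plus-canned-lines idea fits in a dozen lines, ELIZA-style. A sketch; all the rules and replies below are made-up placeholders, and each reply string could just as well name a prerecorded audio clip:

```python
# ELIZA-style sketch of the "no LLM needed" chatbot: match keywords
# against simple patterns, return a canned line for the match.
import random
import re

RULES = [
    (r"\b(hello|hi|hey)\b", ["Hi Anon!", "Welcome back!"]),
    (r"\bhow are you\b",    ["I'm doing great, thanks for asking."]),
    (r"\b(sad|tired)\b",    ["Want to talk about it?"]),
]
FALLBACKS = ["Tell me more.", "I see.", "Go on."]

def respond(user_text: str, rng: random.Random = random.Random(0)) -> str:
    lowered = user_text.lower()
    for pattern, replies in RULES:
        if re.search(pattern, lowered):
            return rng.choice(replies)
    return rng.choice(FALLBACKS)
```

Swapping the reply strings for filenames of voice-line clips (and playing them with any audio library) gives the prerecorded-voice variant with zero GPU cost.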
64 posts and 6 images omitted.
>>36171 Thanks, Grommet! You might also check the hotlink-related here too ( >>36113 ). Of course, our own Robowaifudev was working on solving this years ago, and CyberPonk is more-recently. I really like your two-tier approach thinking, in addition. Good ideas please keep them coming, Anon! Cheers. :^) TWAGMI >=== -prose edit
Edited last time by Chobitsu on 01/28/2025 (Tue) 09:42:21.
>>36181 Whoops, missed that.
BTW it's actually a fairly obvious idea but you never know. Sometimes the obvious gets completely overlooked. I see this sort of thing from time to time.
>>36361 Great! Thanks for the (linked) information, GreerTech. Cheers. :^)

Robot skeletons and armatures Robowaifu Technician 09/13/2019 (Fri) 11:26:51 No.200 [Reply] [Last]
What are the best designs and materials for creating a skeleton/framework for a mobile, life-sized gynoid robot?
243 posts and 123 images omitted.
This here https://youtu.be/Fd-0tHewFf4 is related to 3D printing >>94 and modelling >>415 but I think it's more general and is useful for armatures (shells) and flexible subskin elements. The video shows a method for making 3D printed parts that can give in to pressure from different directions. Something I was trying to do for quite some time: >>17110 >>17151 >>17195 >>17630 and related comments. It refers to PLA in the headline and in the video, but this doesn't matter. It's just that the part itself is flexible, while the material itself doesn't have to be.
>>36324 I noticed that the Missle_39 video you posted ( >>36299 ) contains this same style of structural, flexible printing within the torso volume of their robowaifu. That video convinced me of the value of such an approach, so it's added to the long list of research topics for me. Cheers NoidoDev, and thanks! :^)
>>36324 >>36325 Decided to do a snap to clarify specifically: https://trashchan.xyz/robowaifu/thread/26.html#43
>>36331 The part in the middle is just some regular infill, I think. It can be selected in the slicer. It looks clearly like "Gyroid Infill": https://help.prusa3d.com/article/infill-patterns_177130
>>36366 POTD Excellent resource, NoidoDev, thanks!! Yeah, that looks exactly like the same kind of infill. Just looking at it, I knew it would be strong in every direction (a fairly high-priority, in a dynamic system like a robowaifu), and the notes in your link confirmed that. <---> Thanks again, Anon. Cheers. :^)
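For reference, gyroid infill approximates the gyroid minimal surface, whose implicit equation is sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = 0, which is why it's equally strong in every direction. A tiny sketch for anyone generating their own lattice; the `scale`/`thickness` parameters are illustrative, not slicer settings:

```python
# Evaluate the gyroid implicit function; a point lies on the surface
# where f = 0, and inside a thickened gyroid shell where |f| < thickness.
import math

def gyroid(x: float, y: float, z: float, scale: float = 1.0) -> float:
    # One period of the pattern spans `scale` units in each axis.
    x, y, z = (2 * math.pi * v / scale for v in (x, y, z))
    return (math.sin(x) * math.cos(y)
            + math.sin(y) * math.cos(z)
            + math.sin(z) * math.cos(x))

def in_shell(x: float, y: float, z: float,
             thickness: float = 0.3, scale: float = 10.0) -> bool:
    return abs(gyroid(x, y, z, scale)) < thickness
```

Sampling `in_shell` over a voxel grid and meshing the result is one way to bake this lattice directly into a part instead of relying on slicer infill.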

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102 [Reply] [Last]
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect relevant philosophers. Discussion about the philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc!) also highly encouraged! I'll start ^^!
So, the philosophers I know that take this stuff seriously:
Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view regarding this is that he thinks Kant actually employed a logic that was far ahead of his time (and you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and has also posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics
Reza Negarestani - another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence: this includes sentient agents, sapient agents, and Geist. This guy draws from Kant as well, but he also builds on Hegel's ideas too. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. Like other Neo-Rationalists, he heavily relies on the works of Sellars and Robert Brandom.
Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular rule following, is very insightful!
Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is like way before AI was even a serious topic, lol.
Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!
Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses here with a colleague is that our current AI is Markovian, when fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He also has knowledge of analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius however is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith


257 posts and 113 images omitted.
>>36259 >...like the word 'literally' which is literally actually and not literally... I'm stealing this.
Very interesting convo. Thanks for all the details! Again, way above my pay grade, but as I started going through this yesterday, I also thought garbage in \ garbage out. But from the start, my intention was to say that one man's trash is another man's treasure, I guess. That is to say, if that garbage makes me happy, it has produced a valid use case, and that's all that matters to me; but then, I'm a proponent of the subjective theory of value.
>>36274 >I also thought garbage in \ garbage out. This. I think you're right, Barf! Cheers. :^) >>36307 Very interesting. We truly understand little about the human psyche, IMO. Lots to learn still. Thanks, GreerTech! Cheers. :^)
>>36307 Thanks! That sounds like the article I read. It seems like prompt engineers are closer to AGI than the GPU farms training the LLMs. People were shocked by reasoning models, but prompt engineers have been doing that for a while. The same could happen for imagination, I hope.

Open file (259.83 KB 1024x576 2-9d2706640db78d5f.png)
Single board computers & microcontrollers Robowaifu Technician 09/09/2019 (Mon) 05:06:55 No.16 [Reply] [Last]
Robotic control and data systems can be run by very small and inexpensive computers today. Please post info on SBCs & micro-controllers.
en.wikipedia.org/wiki/Single-board_computer https://archive.is/0gKHz
beagleboard.org/black https://archive.is/VNnAr
>===
-combine 'microcontrollers' into single word
Edited last time by Chobitsu on 06/25/2021 (Fri) 15:57:27.
221 posts and 60 images omitted.
>>36102 Neat! Thanks Barf. Looking forward to seeing this system in action. Cheers. :^)
>>35813 Handy listing. Thanks, Grommet! Cheers. :^)
>>36102 Big thanks for that!
Hooking a cheap external GPU onto an RPi is an option with DeepSeek (and other models as well of course). Probably very hard to beat something like this for an AI chat price/performance point rn, IMO. https://www.youtube.com/watch?v=o1sN1lB76EA
>>36238 Thanks. I saw the video and nearly skipped it. I didn't think it would finally work to add a GPU to it. Nice. Inside a waifu it would maybe need to be a physically smaller GPU, and maybe then it wouldn't work for DeepSeek (of that size), but it's good to have that option. A while ago I thought that it's only a matter of time until we see GPUs with a slot for a SBC and vRAM, so you can run it without a PC. Just with a power cable going into it. Or maybe it will be some small adapter, so you can still use the regular connectors. Anyways, from a while ago, one Raspi and several TPUs on a special board for that: https://youtu.be/oFNKfMCGiqE
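For anyone wiring an SBC setup like this into their own code: once the model is hosted locally (e.g. via ollama), it's one HTTP call, so nothing ever leaves the machine. A sketch using only the stdlib; the model tag is an assumption, check what your install actually pulled:

```python
# Query a locally hosted model through ollama's HTTP API.
# The endpoint is ollama's default; the model tag is illustrative.
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(prompt: str, model: str = "deepseek-r1:7b"):
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt: str) -> str:
    # Blocks until the full completion arrives (stream=False).
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` left at its default the server sends incremental chunks instead, which is what you'd want for feeding a TTS pipeline as tokens arrive.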

Humanoid Robot Projects Videos Robowaifu Technician 09/18/2019 (Wed) 04:02:08 No.374 [Reply] [Last]
I'd like to have a place to accumulate video links to the various humanoid – particularly gynoid – robotics projects that are out there. Whether they are commercial scale or small scale projects, if they involve humanoid robots, post them here. Bonus points if it's the work of a lone genius. I'll start: Ricky Ma of Hong Kong created a stir by creating a gynoid that resembled Scarlett Johansson. It's an ongoing project he calls an art project. I think it's pretty impressive even if it can't walk yet. https://www.invidio.us/watch?v=ZoSfq-jHSWw
===
Instructions on how to use yt-dlp to save videos ITT to your computer: (>>16357)
Edited last time by Chobitsu on 05/21/2022 (Sat) 14:20:15.
222 posts and 75 images omitted.
>>35201 Thanks, NoidoDev! I found this based on searching from your crosslink : https://x.com/missile_39?mx=2 Looks like they're making some great progress! I don't read Nihongo yet, so I'm unsure at this point what the goal with their project is. Regardless, I sure wish them well with it! Thanks again, Anon. Cheers. :^)
Open file (1.99 MB 1920x1080 clapclapclap.png)
> Hannah Dev, David Browne
Q&A: https://youtu.be/yFvSYekCuBM
Arms: https://youtu.be/UX-1hr3NPeo
> The Robot Studio
DexHand, reach, pick and place: https://youtu.be/uF7vVPG_mf0
Hand picking M8 screw: https://youtu.be/PucX_w9-fOs
DexHand and HOPE arm, repeated picking: https://youtu.be/JfiN_qcpODM
> Realbotix (known for Harmony)
CES, current product demo, price US$175k or more: https://youtu.be/2HQ84TVcbMw
> HBS Lab
Horizontal Grasping: https://youtu.be/CR_aLIKelv8
> Sanctuary AI
In-hand manipulation: https://youtu.be/O73vVHbSX1s
> Tesla bot
Walking (might be fake): https://youtu.be/xxoLCQTN0KA
> Chinese robots
Fails, probably biased source: https://youtu.be/12IwfzyHi0A
>>35675 POTD Nice work, NoidoDev. Kind of a treasure cache of good information. I'm particularly excited to see HannahDev's good progress with brushless actuation. I hope we here can somehow return the favor to him someday. Thanks again! Cheers. :^)
>>35207 >Missile_39 There's also a new video. I didn't watch it completely and it's in Japanese, but it shows the parts of the current robots and some ideas: https://youtu.be/ZC28u1Dqcpg

Robot Eyes/Vision General Robowaifu Technician 09/11/2019 (Wed) 01:13:09 No.97 [Reply] [Last]
Cameras, Lenses, Actuators, Control Systems
Unless you want to deck out your waifubot in dark glasses and a white cane, learning about vision systems is a good idea. Please post resources here.
opencv.org/ https://archive.is/7dFuu
github.com/opencv/opencv https://archive.is/PEFzq
www.robotshop.com/en/cameras-vision-sensors.html https://archive.is/7ESmt
>===
-patch subj
Edited last time by Chobitsu on 12/27/2024 (Fri) 17:31:13.
140 posts and 57 images omitted.
Open file (658.38 KB 1089x614 7.png)
These, with the convex lens that you can split apart, might be nice. No cameras, but they could probably be added; it has everything else on a custom PCB already. https://www.adafruit.com/product/4343
This is really exciting stuff lately ITT, Anons. Thanks for linking to resources for us all! Cheers. :^)
Researchers were able to tweak machine vision into being usable in low-light conditions https://techxplore.com/news/2025-01-neural-networks-machine-vision-conditions.html
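Not the neural approach from that article, but for anyone who wants a quick low-light baseline to compare against: plain gamma correction already lifts the shadows enough for many classical vision pipelines. A sketch using only numpy; the gamma value is just a typical starting point:

```python
# Classical low-light baseline: gamma correction via a lookup table.
# gamma < 1 brightens dark regions; gamma > 1 darkens them.
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float = 0.4) -> np.ndarray:
    """Apply gamma correction to a uint8 image (grayscale or color)."""
    # Precompute the 256-entry lookup table once, then index with the image.
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return lut[img]
```

A histogram-equalization pass (e.g. CLAHE in OpenCV) is the usual next step up if gamma alone leaves the contrast too flat.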
>>36237 Thanks GreerTech! I'm actually interested in devising a smol flotilla of surveillance drones (the tiny, racing ones) for a robowaifu's use for situational-awareness on grounds. Having 'night vision' is very useful for this ofc -- especially if no special hardware would be required! Cheers. :^)
>>25927 Unfortunately, it looks like the project may be dead (your link was broken and the last update was last January), but I wonder if it could be retooled with newer and more efficient LLMs and vision models. It definitely caught my eye; it solved the elephant in the room I was thinking about: how do we tie a vision model to an LLM? https://github.com/haotian-liu/LLaVA

AI Software Robowaifu Technician 09/10/2019 (Tue) 07:04:21 No.85 [Reply] [Last]
A large amount of this board seems dedicated to hardware. What about the software end of the design spectrum? Are there any good enough AIs to use?

The only ones I know about offhand are TeaseAi and Personality Forge.
140 posts and 44 images omitted.
>>35150 Press F <insert: Oh!? It's maintenance time already?.jpg>
>>31405 Very interesting, will do some testing on my Pi 5 with Gemma2 and return my findings. It also seems to be able to use the weights directly from ollama, which is super nice.
>>35044 >o3 just came out and it is multiple times better than chatgpt-4. the argument that the underlying tech for current ai is not good enough is very weak. Who claims that the tech is "too weak"? It's an online service, and not optimized to act like a human-like robot wife. We still need to build a framework to handle local LLMs. That said, improvements in such online services and self-hosted LLMs will make it easier getting help with research and coding.
>>35370 Machine learning isn't compressed google autocomplete like most people think. This becomes more evident when machine learning is applied to videogames. A genetic algorithm can make a boxer learn how to box on its own, for example: https://m.youtube.com/watch?v=SsJ_AusntiU&pp=ygUJYm94aW5nIGFp
>DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models [1][2]
>abstract:
>Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH. The mathematical reasoning capability of DeepSeekMath is attributed to two key factors: First, we harness the significant potential of publicly available web data through a meticulously engineered data selection pipeline. Second, we introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO), that enhances mathematical reasoning abilities while concurrently optimizing the memory usage of PPO.
They seem to go into some depth describing the optimization approaches they used to achieve the higher efficiencies with the available hardware.
---
1. https://arxiv.org/abs/2402.03300
2. https://github.com/deepseek-ai/DeepSeek-Math
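The memory saving GRPO offers over PPO comes from dropping the learned value network: per the abstract, each prompt gets a group of sampled completions, and the group's own reward statistics serve as the baseline. A toy sketch of just that normalization step (not DeepSeek's actual code):

```python
# Core of GRPO's group-relative baseline: normalize each completion's
# reward against the mean and std of its own sampled group, instead of
# using a learned value function as PPO does.
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Return one advantage per reward, normalized within the group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)          # population std of the group
    return [(r - mean) / (std + eps) for r in rewards]
```

These advantages then weight the standard clipped policy-gradient update exactly as in PPO; only the baseline changed, which is why the memory footprint drops.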
