/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The Mongolian Tugrik has recovered its original value thanks to clever trade agreements facilitated by Ukhnaagiin Khürelsükh throat singing at Xi Jinping.

The website will stay a LynxChan instance. Thanks for flying AlogSpace! --robi


Knowing more than 100% of what we knew the moment before! Go beyond! Plus! Ultra!


Open file (213.86 KB 406x532 13213d24132.PNG)
Open file (1.19 MB 1603x1640 non.png)
Robowaifu Technician 10/29/2020 (Thu) 21:56:16 No.6187 [Reply]
https://www.youtube.com/watch?v=SWI5KJvqIfg I have been working on creating waifus using GANs and the like... I've come across this project and I am totally amazed. Does anyone have any idea how we can achieve this quality of animation with GAN-created characters? I think accomplishing this kind of work would have a huge impact on our progress. Calling all the people who posted in the chatbot thread.
1 post omitted.
Open file (213.22 KB 400x400 sample4.png)
Open file (199.16 KB 400x400 sample1.png)
Open file (194.92 KB 400x400 sample2.png)
Open file (199.43 KB 400x400 sample3.png)
>>6188 Looking at some old tweets from them, I think it is safe to say that it doesn't look much different from StyleGAN on portraits. The shoulders are bad, & most of the work is done by their data cleaning to simplify the problem. Interps & style mixing are nothing special either. Gwern's work with some whack data was able to create similar characters. Also waifulabs - which is all run by StyleGAN - can create some really high-quality characters in different poses. And notice that they are a game development studio which does not work on AI waifu creation. Looks like hype-bait to me, to be honest. They probably cherrypicked some of the results, and maybe even manually tweaked it to create these kinds of animations. Considering their budget and data, that is entirely possible. I am not sure if they still use StyleGAN though. They do not drop even a clue. But honestly, given the current state of it and the time they spent on it, I think they use a different approach.
My chief concern is first and foremost: Is this open-source? If not, then it's relatively useless to us here on /robowaifu/, other than tangentially as inspiration. Hand-drawn, meticulously-crafted animu is far better in that role tbh.
>>6187 It appears the characters are generated with a GAN, then another model separates the character pieces into textures for a Live2D model. They're not animated with AI, but there are techniques to do such a thing: https://www.youtube.com/watch?v=p1b5aiTrGzY
Video on the state of anime GANs and anime created by AI, including animation for vtuber/avatar-style applications: https://youtu.be/DX1lUelmyUo One of the guys mentioned in the video, who creates a 3D model from a drawing, appears around 10:45: https://github.com/t-takasaka - I haven't found which repo it is on his GitHub yet, though he does seem to have some pose-estimation-to-avatar work in there. Other examples in the video might be more interesting for anons trying to build a virtual waifu. "Talking Head Anime 2", based on one picture: https://youtu.be/m13MLXNwdfY
>>16245 This would be tremendously helpful to us if we can find a straightforward way to accomplish this kind of thing in our robowaifu's onboard systems Anon ('character' recognition, situational awareness, hazard avoidance, etc.) Thanks! :^)

Robo Face Development Robowaifu Technician 09/09/2019 (Mon) 02:08:16 No.9 [Reply] [Last]
This thread is dedicated to the study, design, and engineering of a cute face for robots.
162 posts and 92 images omitted.
>>15997 Though you are correct in your aesthetic reasoning, realistic faces cannot be made by man. There are numerous flaws you did not mention which would still land in the uncanny valley. >>16004 Chobitsu is correct: going for a demi-human/anime-inspired aesthetic is much more attainable. >>16008 One of my favorite videos. Though it is correct to say the uncanny valley is not real, it's a useful term to describe the general unease these creepy machines provoke. I do think good can come from researching solutions to the uncanny problem, but that's best left to corporations like Disney. >>16012 I think "cuteness uber alles" is the best way of thinking about it. Almost-human can be quite cute indeed; it's just easy for almost-human to end up creepy like Sophia. This is why I think sidestepping into the demi-human route is good. I also want to clang monster girls though, so I'm heavily biased. Picrel has big fins, a flat face, and spiky red eyelashes without eyebrows. Almost human, but not quite.
This is a great conversation going on r/n ITT, /robowaifu/. I hope we can keep it going for a while, and I'd like to link to it in our mini-FAQ in the /meta thread.
>>16008 My 2 cents is that the uncanny valley doesn't exist and doesn't matter. This is robotics, not a beauty competition, so we should care about the technical aspects; this is why I think robots like Sophia are inspiring. This is good, since there are actual challenges that need to be solved: budget, time, hardware, software, and more.
Open file (273.40 KB 1440x1800 1652407800269.jpg)
Open file (153.27 KB 1080x1080 1652403268055.jpg)
Open file (335.35 KB 1440x1800 1652403080234.jpg)
>>15285 After more research, I've come to the conclusion that anime doll faces are the only way to make her cute and simple. All of these could be printed on an SLA printer relatively easily, and the eyes would also be relatively easy to make. The hair would be somewhat of a problem, but adding non-human ears like cat or wolf ears would help. Go for the demi-human route.
Open file (104.56 KB 800x1200 1607113054993.jpg)
>>16246 >After more research, I've come to the conclusion that anime doll faces are the only way to make her cute and simple. I've suggested all along that the doll industries have much to offer us in our quest for robowaifus. This isn't a new conception either.

Open file (185.64 KB 1317x493 NS-VQA on CLEVR.png)
Open file (102.53 KB 1065x470 NS-VQA accuracy.png)
Open file (86.77 KB 498x401 NS-VQA efficiency.png)
Neurosymbolic AI Robowaifu Technician 05/11/2022 (Wed) 07:20:50 No.16217 [Reply]
I stumbled upon a couple videos critiquing "Deep Learning" as inefficient, fragile, opaque, and narrow [1]. The critique claims Deep Learning requires too much data yet performs poorly when extrapolating beyond its training set, that how it arrives at its conclusions is opaque (so it's not immediately obvious why it breaks in certain cases), and that all that learned information cannot be transferred between domains easily. They then put forth "Neurosymbolic AI" as the solution to DL's ills and the next step of AI, along with NS-VQA as an impressive example at the end [2]. What does /robowaifu/ think about Neurosymbolic AI (NeSy)? NeSy is any approach that combines neural networks with symbolic AI techniques to take advantage of both their strengths. One example is Neuro-Symbolic Dynamic Reasoning (NS-DR) applied to the CLEVRER dataset [3], which cascades information from neural networks into a symbolic executor. Another example is for symbolic mathematics [4], which "significantly outperforms Wolfram Mathematica" in speed and accuracy. The promise is that NeSy will bring several benefits:
1. Out-of-distribution generalization
2. Interpretability
3. Reduced size of training data
4. Transferability
5. Reasoning
I brought it up because points 3 and 5, and to a lesser degree 4, are very relevant for the purpose of making a robot waifu's AI. Do you believe these promises are real? Or do you think it's an over-hyped meme some academics made up to distract us from Deep Learning? I'm split between believing these promises are real and this being academics trying to make "Neurosymbolic AI" a new buzzword. [5] tries to put forth a taxonomy of NeSy AIs. It labels [4] as an example of NeSy since it parses math expressions into symbolic trees, but [4] refers to itself as Deep Learning, not neurosymbolic or even symbolic. Ditto with AlphaGo and self-driving car AI. And the NS-DR example was beaten by DeepMind's end-to-end neural network Aloe [6], overwhelmingly so when answering CLEVRER's counterfactuals. A study reviewed how well NeSy implementations met their goals based on their papers, but its answer was inconclusive [7]. It's also annoying looking for articles on this topic because there are about five ways to write the term (Neurosymbolic, Neuro Symbolic, Neuro-Symbolic, Neural Symbolic, Neural-Symbolic).
>References
[1] MIT 6.S191 (2020): Neurosymbolic AI. <https://www.youtube.com/watch?v=4PuuziOgSU4>
[2] Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. <http://nsvqa.csail.mit.edu/>

Message too long. Click here to view full text.
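To make the NS-DR cascade concrete, here's a toy sketch in Python. Hedging heavily: perceive() is a stub standing in for a trained neural scene parser, and the mini program format is invented for illustration; this is not the papers' actual code.
[code]
# Toy neurosymbolic pipeline: a neural module parses, a symbolic executor reasons.

def perceive(image):
    """Stand-in for a neural scene parser: returns a symbolic scene table."""
    return [{"shape": "cube", "color": "red", "size": "large"},
            {"shape": "sphere", "color": "blue", "size": "small"}]

def execute(program, scene):
    """Deterministic symbolic executor over the parsed scene."""
    result = scene
    for op, arg in program:
        if op == "filter_color":
            result = [o for o in result if o["color"] == arg]
        elif op == "filter_shape":
            result = [o for o in result if o["shape"] == arg]
        elif op == "count":
            result = len(result)
    return result

scene = perceive(None)  # image omitted in this sketch
program = [("filter_color", "red"), ("count", None)]  # "how many red things?"
print(execute(program, scene))  # -> 1
[/code]
The point is the division of labor: the network only has to perceive, while the reasoning happens in a transparent, debuggable program.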

Open file (422.58 KB 1344x496 MDETR.png)
Open file (68.28 KB 712x440 the matrix has you.jpg)
>>16217 I think such critique is outdated. The impressive results of NS-VQA have been beaten by fully deep-learning approaches like MDETR.[1] It would be a bit ironic to call deep learning fragile and narrow and then proceed to write specific functions that only handle a certain type of data, of which the training set just happens to be a small subset, and call it generalization. Sure, it can handle 'out-of-distribution' examples with respect to the training set, but give it a truly out-of-distribution dataset with respect to the functions and these handwritten methods will fail completely. A lot of deep learning approaches these days can learn entire new classes of data from as few as 10-100 examples. ADAPET[2] learns difficult language understanding tasks from only 32 examples. RCE[3] can learn from a single success-state example of a finished task. DINO[4] can learn to identify objects from no labelled examples at all. CLIP[5] and CoCa[6] are examples of deep learning generalizing to datasets they were never trained on, including adversarial datasets, and outperforming specialized models, and this is just stuff off the top of my head. Someone ought to give DALL-E 2 the prompt "a school bus that is an ostrich" and put that meme to rest. That said, neurosymbolic AI has its place, and I've been using it lately to solve problems that aren't easily solvable with deep learning alone. There are times when using a discrete algorithm saves development time or outperforms existing deep learning approaches. I don't really think of what I'm doing as neurosymbolic AI either. Stepping away from matrix multiplications for a bit doesn't suddenly solve all your problems and become something entirely different from deep learning. You have to be really careful, actually, because often a simpler deep learning approach will outperform a more clever-seeming neurosymbolic one, which is clearly evident in the progression of AlphaGo to AlphaZero to MuZero. From my experience it hasn't really delivered much on the promises you listed, except maybe points 2 and 5. I wouldn't think of it as something good or bad though. It's just another tool, and it's what you do with that tool that counts. There was a good paper on how to do supervised training on classical algorithms. Basically you can teach a neural network to do a lot of what symbolic AI can do, even complicated algorithms like 3D rendering, finding the shortest path, or sorting. I think it shows we've barely scratched the surface of what neural networks are capable of. https://www.youtube.com/watch?v=01ENzpkjOCE https://arxiv.org/pdf/2110.05651.pdf
>Links
1. https://arxiv.org/abs/2104.12763
2. https://arxiv.org/abs/2103.11955
3. https://arxiv.org/abs/2103.12656
4. https://arxiv.org/abs/2104.14294
5. https://arxiv.org/abs/2103.00020

Message too long. Click here to view full text.
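Since CLIP/CoCa keep coming up ITT: the contrastive half is simple enough to fit in a few lines. A minimal PyTorch sketch of the symmetric loss over a batch of paired image/text embeddings; the temperature value is illustrative, and real training obviously needs the encoders and data pipeline around it.
[code]
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Project embeddings onto the unit sphere
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # All pairwise cosine similarities in the batch, scaled by temperature
    logits = image_emb @ text_emb.t() / temperature
    # Matching image/text pairs sit on the diagonal
    targets = torch.arange(len(image_emb), device=logits.device)
    # Symmetric cross-entropy: image->text and text->image
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Smoke test with random 8x512 "embeddings"
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
[/code]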

Open file (201.23 KB 1133x1700 spaghetti_mama.jpg)
Idling around the Interwebz today[a], I found myself reading the Chinese Room Argument article on the IEP[b], where I came across the editor's contention that the notion that "mind is everywhere" is an "absurd consequence".
>"Searle also insists the systems reply would have the absurd consequence that “mind is everywhere.” For instance, “there is a level of description at which my stomach does information processing” there being “nothing to prevent [describers] from treating the input and output of my digestive organs as information if they so desire.” "[1],[2]
I found that supposed refutation of this concept vaguely humorous on a personal level. As a devout Christian Believer, I would very strongly assert that indeed, Mind is everywhere. Always has been, always will be. To wit: The Holy Spirit sees and knows everything, everywhere. As King David wrote:
>7 Where can I go to escape Your Spirit?
> Where can I flee from Your presence?
>8 If I ascend to the heavens, You are there;
> if I make my bed in Sheol, You are there.
>9 If I rise on the wings of the dawn,
> if I settle by the farthest sea,
>10 even there Your hand will guide me;
> Your right hand will hold me fast.[3]
However, I definitely agree with the authors where they write that
>"it's just ridiculous"
to assert
>" “that while [the] person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might” ".[1],[2]

Message too long. Click here to view full text.

Edited last time by Chobitsu on 05/16/2022 (Mon) 00:13:09.

Reploid thread Robowaifu Technician 02/28/2022 (Mon) 04:13:32 No.15349 [Reply]
A few people thought it'd be a good idea to start a thread for Reploid builds, so here we are! To kick things off, here's a little progress on painting my own RiCO. It's just spray paint so it doesn't look stellar, and I screwed up a couple of parts. All the blue trim paint still needs doing as well. I don't care if it's perfect, I just want to get her done. Not to mention anything adjacent to "art" or "craftsmanship" is beyond me, mostly due to lack of patience: I don't want to spend thousands of hours grinding away with a paintbrush when I could be designing up cool (to me...) robotic mechanisms, for instance. I bet your bottom dollar none of my projects will be winning awards in the fit-and-finish department. Can't wait to see what happens with Pandora and whatever other Reploid projects people might be working on.
36 posts and 28 images omitted.
>>16021 >Great idea, mind if I borrow this technique for MaidCom? Please do. Check out how I did the rest of her eyes as well, perhaps it could scale up.
>>16029 It does scale up really well. Though I will innovate upon it for Pandora.
Edited last time by AllieDev on 04/27/2022 (Wed) 01:32:08.
>>15999 I'm really impressed with this. I have a lot of questions, I'll scan the rest of this thread to make sure they aren't already answered first.
Curious whether you've made any changes this week to your wonderful robowaifu, RiCOdev?
>>16243 RiCO will be on hold for the near future; your friendly neighborhood reploid builder has got bigger fish to fry. I'll still keep an eye out here to answer questions etc.

Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240 [Reply] [Last]
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, a cute AI from Assassination Classroom whose body is a giant screen on wheels.
188 posts and 92 images omitted.
>>15954 Part of the program would need to respond to the touch event immediately; if you stroke a waifu's hair, for example, she should move right away. The language model would also take this touch event into account to produce a sensible response, instead of making responses that are oblivious to it. It could also generate more complex animation instructions in the first tokens of a response, which would have about a 250 ms delay, similar to human reaction time. It's not really desirable to have a set of pre-made animations that the waifu is stuck with, since after seeing them over and over again she will feel rigid and stuck replaying them. With the language model, though, you could generate all kinds of different reactions that take the conversation and touch events into account.
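A rough sketch of that split in Python: the reflex fires instantly while the (slow) language model reply is generated in the background with the touch event injected into its context. Every name here is a made-up placeholder, and the sleep just simulates generation latency.
[code]
import threading, time

def reflex(event):
    """Immediate low-level reaction, e.g. a head-tilt servo command."""
    print(f"reflex: lean into {event['zone']} touch")

def generate_response(event, history):
    """Placeholder for the language model call; the touch event becomes
    part of the prompt so the reply isn't oblivious to it."""
    time.sleep(0.25)  # simulate ~250 ms generation delay
    return f"*notices her {event['zone']} being stroked* Hehe, that tickles."

def on_touch(event, history):
    reflex(event)  # fires before any LLM work starts
    threading.Thread(
        target=lambda: print(generate_response(event, history))).start()

on_touch({"zone": "hair"}, history=[])
time.sleep(0.5)  # let the background reply finish in this demo
[/code]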
>>15956 OK, I'll take your word for it Anon. I'm sure I'll understand as we work through the algorithms themselves, even if the abstract isn't perfectly clear to me yet. You can be sure I'm very attuned to the needs of efficient processing and timely responses though! Lead on! :^) >>15953 BTW, thanks for taking the trouble of posting this Anon. Glad to see what these game manufacturers are up to. Nihongo culturalisms are pretty impactful to our goals here on /robowaifu/ tbh. Frankly they are well ahead of us for waifu aesthetics in most ways. Time to catch up! :^)
>>15953 That Madoka is the epitome of cuteness. If only there were a way to capture that voice and personality and translate it into English. >>15956 Timing of reactivity is important for preventing the uncanny valley from a communications standpoint. For her animations, it may be effective to have several possible animations for various responses that are chosen at random, though never repeating. Like, having a "welcome home" flag that triggers an associated animation when she's saying "welcome home".
>>15967 >Timing of reactivity is important for preventing the uncanny valley from a communications standpoint. You know, I just had a thought reading this sentence, Kywy. It's important to 'begin' (as in, within say, 10 ms) a motion, even though it isn't even her final form yet. :^) What I mean is that as soon as a responsive motion need is detected, her servos should begin the process immediately, in a micro way, even if the full motion output hasn't been fully decided upon yet. IMO this sort of immediacy of response is a subtle clue to 'being alive' that will subconsciously be picked up on by Anon. As you suggest, without it, a rapid cascade into the Uncanny Valley is likely to ensue. It's not the only approach needed to help solve that issue, but it's likely to be a very important aspect of it. Just a flash insight idea.
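In code, that 'micro-motion first, plan later' idea might look something like this; the planner call and servo commands are hypothetical stand-ins, not a real API.
[code]
import time

def start_micro_motion(direction):
    """Nudge the relevant servos a few degrees inside the ~10 ms budget."""
    print(f"micro-motion: {direction}")

def plan_full_motion(stimulus):
    """Placeholder for the slower pipeline (perception, planning, IK)."""
    time.sleep(0.2)  # simulate planning latency
    return f"full gesture responding to {stimulus}"

def respond(stimulus, direction):
    start_micro_motion(direction)      # begin moving immediately
    plan = plan_full_motion(stimulus)  # decided while already in motion
    print(f"blend into: {plan}")

respond("hand approaching head", "tilt toward touch")
[/code]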
I recognize that b/c of 'muh biomimicry' autism I have, I'm fairly inclined to go overboard into hyperrealism for robowaifus/visualwaifus, even though I know better. So my question is >"Are there simple-to-follow guidelines to keep from creating butt-fugly uncanny horrors, but instead create cute & charming aesthetics in the quest for great waifus?" Picrel is from the /valis/ thread that brought this back up to my mind. > https://anon.cafe/valis/res/2517.html#2517

Waifu Robotics Project Dump Robowaifu Technician 09/18/2019 (Wed) 03:45:02 No.366 [Reply] [Last]
Edited last time by rw_bumpbot on 05/25/2020 (Mon) 04:54:42.
240 posts and 174 images omitted.
>>15733 >link Google is your friend: https://sweetiebot.net/ From what I understand they want to keep things rated PG. The voice generator (community talknet project) is unrelated and based in the /ppp/ thread on 4chan.org/mlp/. Enter if you dare ^:)
Open file (117.25 KB 640x798 cybergirl.jpg)
>>15731 >ponies I was thinking more >>15733 or picrel, but a cute robot horse has the PR advantage because it could easily be a children's toy.
>>15731 I think some of the ponies mentioned this project to us before, Anon, thanks. I wish those Russians good success! >>15733 Heh, /monster/ pls. :^) A hexapod waifu is actually a really good idea for a special-service meido 'waifu' IMO. Just my own subjective taste in the matter. But objectively, a hexapod locomotion base (especially combined with roller 'feet') is a superior stability platform from which to do housework & other work. No question. >>15736 Yep. I immediately came to a similar conclusion. But it's obvious they are going for police-force service with the push for that bot, and the price tag shows it. Shame, tbh.
Are you people serious? All the videos are hosted offsite. What the hell am I going to do with a filename, put it in Yandex? Fucking yahoo.jp??? Tor search??? Why do this?
>>16233 Heh, sorry about that Anon. You're just dealing with the missing information from when our first site was destroyed on us. Those were part of the recovery effort. Unfortunately our copies of the files were lost in the attack. Maybe someday some Anon might restore them here for us. Again apologies, you might see similar in other threads here too. But at least we still have a board! :^)

Electronics General Robowaifu Technician 09/11/2019 (Wed) 01:09:50 No.95 [Reply] [Last]
Electronics & Circuits Resources general

You can't build a robot w/o good electronics. Post good info about learning, building & using electronics.

www.allaboutcircuits.com/education/
71 posts and 19 images omitted.
>>14824 OK, I think this is a reasonably good thread (unless you have a better one in mind?) Thanks again, Anon.
>>14824 Interesting. What does this imply? Is there a significant improvement in the performance of chips built with this style of transistor? If so, then we'll have to make our own ASIC using that same method.
Open file (79.85 KB 600x424 BPv4-f.jpg)
>>734 For any Anons currently working on electronics boards that would benefit from Bus Pirate, there is also a v4 that has more RAM available. > http://dangerousprototypes.com/docs/Bus_Pirate_v4_vs_v3_comparison The firmware code is also available. https://github.com/BusPirate/Bus_Pirate
Open file (427.08 KB 1500x1500 super start kit.jpg)
Open file (127.68 KB 1649x795 rip mr servo.png)
I'm looking to get into electronics. Are the ELEGOO UNO starter kits any good? There's one on Amazon for $40. I basically just want to learn how to program a microcontroller, control servos with a controller and understand enough so I can start building a robowaifu. Or should I save my money and just play with the circuit simulator in TinkerCAD?
>>16224 I actually have the kit on the left, and I definitely recommend it for learning, Anon.
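If you do grab an UNO kit, a first servo test doesn't have to start in C either: load the StandardFirmata example sketch onto the board from the Arduino IDE, then drive pins from Python with pyFirmata. A minimal sketch; the serial port name is an assumption, so adjust it for your system.
[code]
# pip install pyfirmata; the board must be running the StandardFirmata sketch
import time
from pyfirmata import Arduino

board = Arduino('/dev/ttyUSB0')  # hypothetical port; e.g. 'COM3' on Windows
servo = board.get_pin('d:9:s')   # digital pin 9 configured as a servo

for angle in (0, 90, 180, 90):
    servo.write(angle)  # sweep the horn through a few positions
    time.sleep(1)

board.exit()
[/code]
TinkerCAD's simulator is fine for learning circuits, but for servos under load you'll want the real hardware sooner rather than later.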

Open file (1.04 MB 2999x1298 main-results-large.png)
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents Robowaifu Technician 01/21/2022 (Fri) 15:35:19 No.15047 [Reply]
https://wenlong.page/language-planner/ https://github.com/huangwl18/language-planner Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (e.g. "make breakfast"), to a chosen set of actionable steps (e.g. "open fridge"). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The conducted human evaluation reveals a trade-off between executability and correctness but shows a promising sign towards extracting actionable knowledge from language models. I think it's worth a whole thread. If not, move it to the appropriate section.
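The part most relevant to robowaifu control is the translation step: embed each step the LLM proposes and snap it to the nearest admissible action by cosine similarity. Here's a minimal sketch of that idea using the sentence-transformers library; the model name and tiny action list are illustrative stand-ins, not the paper's exact setup.
[code]
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')
admissible = ["walk to kitchen", "open fridge", "grab milk", "close fridge"]
admissible_emb = model.encode(admissible, convert_to_tensor=True)

def translate(generated_step):
    """Map a free-form LLM-generated step to the closest admissible action."""
    step_emb = model.encode(generated_step, convert_to_tensor=True)
    scores = util.cos_sim(step_emb, admissible_emb)[0]
    return admissible[int(scores.argmax())]

# Should map to something like "open fridge"
print(translate("head over to the refrigerator and open it"))
[/code]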
Update on this topic, this time from DeepMind. https://www.youtube.com/watch?v=L9kA8nSJdYw Slowly they're realizing what we've all been thinking about, though only in virtual spaces.
Open file (251.80 KB 1665x621 calm outline.png)
Open file (172.15 KB 859x785 calm planning.png)
Open file (85.32 KB 827x627 calm success.png)
Open file (337.52 KB 1523x725 coca.png)
Open file (119.53 KB 901x670 coca performance.png)
>>16197
>Human: show me how you polish the baluster
>Robowaifu: say no more goshujin-sama
There was a really interesting paper recently on context-aware language models that can do planning and achieve better-than-human performance on a flight-booking task, using a model the same size as GPT-2 small: https://sea-snell.github.io/CALM_LM_site/ It shows a lot of promise for using a Monte Carlo tree search for doing planning with language models, since it only takes 5 dialog generation attempts with the new method to outdo human performance without doing a tree search at all. Also a huge breakthrough in zero-shot multi-modal learning has been made that completely blows CLIP and SotA specialized models to pieces by using a simple-to-understand contrastive and captioning loss (CoCa) that can leverage existing models: https://arxiv.org/pdf/2205.01917.pdf This is going to be huge for embodied agents. It's a lot like the similarity measure used between sentence embeddings produced by the RoBERTa model in OP's paper to ensure the translated generated instructions are executable, except it does it between images and sentences. And there's another paper worth mentioning doing transfer learning from a language model trained on Wikipedia to an RL agent (on continuous control and games) that outperforms training from scratch: https://arxiv.org/pdf/2201.12122.pdf It seems we're headed towards a leap forward soon with goal-oriented embodied agents using language models.

Artificial Wombs general Robowaifu Technician 09/12/2019 (Thu) 03:11:54 No.157 [Reply] [Last]
I think we need to figure out how to fit a womb onto a waifubot. Where's the fun in having sex if you can't procreate?

Repost from a thread on /b/;
>"If you're like me and want to fuck a qt battlebot and get her pregnant, the best place to put an artificial womb is where a normal womb would be on a normal girl. The metal exterior could simply be a bunch of metal plates that unfold to allow space for the womb pod inside. The baby is probably safest inside the battlebot, and if she has good calibration then there shouldn't be problems with her falling and hurting the baby. After giving birth the metal plates could automatically fold back up again, shrinking the womb pod inside so she is combat effective again."

Well /robowaifu/? Feasible?
124 posts and 14 images omitted.
Open file (274.25 KB 650x1000 SplashTittyMonster.jpg)
Open file (1.40 MB 2189x1305 1586409061467.png)
Before making artificial wombs, how about focusing on how to make artificial functional mammary glands? If I did clone myself then I'd have something to feed him with, and if I didn't at least I could suck them myself. It also seems like a significantly lower bar in terms of complexity compared to an entire functioning uterus.
>>13154 Because the wombs aren't for being put into the robowaifus, but for having children. Glands aren't necessary. However, it seems to be possible to create genetically modified yeast that produces all kinds of milk, so that's not even a problem. I don't know the current state of it; here are two vids I haven't watched myself yet, the first short, the other really long: https://youtu.be/CXYg-qt4OCc https://youtu.be/ZiWnygcYsiQ I thought these had been available for years, but somehow I didn't hear more about it. So I'm curious myself.
>>13189 >Because the wombs aren't for being put into the robowaifus but for having children. I've argued with some weirdos who were very insistent that the wombs should be in the robowaifus, just because it's their fetish. I'm not saying mammary glands need to be in the waifubot, but if they fit, and you're into that, I don't see why not. Either way they'd be useful for raising the kid. >Glands aren't necessary. I'm going to have to disagree with you on that. I don't know a lot about fetal development, but I do know varying blood hormone levels can have significant effects on it. And hormones largely come from, and are regulated by, glands. When it comes to artificial organs, glands seem largely overlooked. As far as I know there's something of an artificial ovary and an artificial thymus (an organ that literally shrinks away with age), but other artificial endocrine organs basically don't exist. I think that in order to create a perfectly healthy baby with an artificial womb, you'd either need to replicate almost all of the other organs in the body to digest food/supplements into blood, or have a regular supply of healthy pregnant-woman blood. The simple fact that blood plasma donations are still a thing highlights that we still can't create an adequate blood plasma substitute. All an artificial mammary gland would do is make realistic breast milk, which is a simple task compared to making an artificial womb, but it's still a difficult task on its own, and it may solve some problems that would also need solving for artificial wombs.
Alright folks, there's a lot of good information being shared about what scientists are doing with this stuff. Here's a news flash though: anyone can follow the scientific method. We can break this whole thing down into smaller parts and grassroots this shit.
>ctrl-f 'amniotic fluid'
>no results
What does a human body do? Eat. What does a pregnant human body do? Eat. What does a human fetus do? Absorb nutrients from what the mother eats. Chemically speaking, literally everything the human body is capable of making is available in some form or another at your local grocery store. My question is: how difficult would it be to mix together a facsimile of amniotic fluid? What makes it up? How much of what, and how does that change during gestation? We need recipe cards or something.

Open file (173.41 KB 1080x1349 Alexandra Maslak.jpg)
Roastie Fear 2: Electric Boogaloo Robowaifu Technician 10/03/2019 (Thu) 07:25:28 No.1061 [Reply] [Last]
Your project is really disgusting
>===
Notice #2: It's been a while since I locked this thread, and hopefully the evil and hateful spirit that was dominating some anons on the board has gone elsewhere. Accordingly, I'll go ahead and unlock the thread again provisionally, as per the conditions here: >>12557 Let's keep it rational, OK? We're men here, not mere animals. Everyone stay focused on the prize we're all going for, and let's not get distracted. This thread has plenty of good advice in it. Mine those gems, and discard the rest. Notice:

Message too long. Click here to view full text.

Edited last time by Chobitsu on 09/02/2021 (Thu) 18:36:20.
118 posts and 37 images omitted.
>>13408 You aren't even taking into consideration financing options. I bet there are places that will do monthly payments, including insurance.
Open file (639.48 KB 1374x690 1525099257922.png)
Why do they do this? How different would it be if a robot wife was what they were looking at?
>---
SFW board Anon, pls keep it spoilered here. Thanks!
Edited last time by Chobitsu on 04/18/2022 (Mon) 05:40:31.
>>15920 Not too much different. I've mentioned the idea of artificial wombs to some women (in person) to bypass the abortion debate. Some of the most timid girls I know flew into a rage at the thought of their potential replacement. I personally think the roastie would be willing to commit violence at any prospect of losing her current hold on power. She will vote for any and all laws banning such. Thus, it may be best to avoid mentioning it to them, ignoring them and sharing only amongst men.
>>15920 >Why do they do this? B/c they're women. >How different would it be if a robot wife was what they were looking at? They would actually be angrier. Remember that women operate on instinct and feels. Sure, some may actually do an internal logic/fact check, but those are the unicorns; most will operate on their feelings toward something and justify it with whatever post-hoc rationalization. Example: robotic replacements will terrify them. Behavior: finding anything and everything to "shame" this idea, as though there is some higher moral principle being violated (this is to appeal to men's guilt), when in fact women are simply terrified whenever attention is taken away from them (whether by alcohol, sports, video games, other women, and now... gynoids/robowaifus).
>>16137 This. Clear & simple.
