/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The Mongolian Tugrik has recovered its original value thanks to clever trade agreements facilitated by Ukhnaagiin Khürelsükh throat singing at Xi Jinping.

The website will stay a LynxChan instance. Thanks for flying AlogSpace! --robi


Knowing more than 100% of what we knew the moment before! Go beyond! Plus! Ultra!


Open file (891.65 KB 640x360 skinship.gif)
Any chatbot creation step by step guide? Robowaifu Technician 07/17/2021 (Sat) 05:29:42 No.11538 [Reply]
So recently on /tech/ I expressed my interest to start creating my own waifu/partner chatbot (with voice and animated avatar) but wondered whether that is even possible now that I'm stuck with webdev. So one of the anons there pointed me to this board and to where I can get started on neural networks and achieving my dream. And when I came here and looked up the library/guide thread I sort of got very confused, because it feels more like a general catalogue than an actual guide. Sometimes things are too advanced for me (like the chatbot thread, where two replies in people are already discussing topics too advanced for me, like seq2seq and something else), or other times too basic for me (like basic ANN training, which I had already done before, and worse, the basic programming threads). I know this might feel like asking to be spoonfed, but bear with me: I've been stuck in a webdev job for a year, so I might not be the smartest fag out there to figure it all out myself. >=== -edit subject
Edited last time by Chobitsu on 08/08/2021 (Sun) 21:36:00.
24 posts and 5 images omitted.
>>11780 Alright, thanks for the video link. I'd also be interested to hear your response to my advice.
>>12025 >In that comment I literally wrote, "but I didn't want to try to figure out too many different things just yet." Ah, fair enough then Anon. My apologies. Anyway, thanks for the great contribution ITT now! I take it you've been here on /robowaifu/ for a bit? As far as knowing about robotics, I think that's mostly a matter of just diving into a project and beginning to learn. One of the things I appreciated most about the Elfdroid Sophie project was watching SophieDev learn things and adapt to issues and design improvements as he went along. Entertaining and educational. But anyway, good luck with your chatbot/waifu projects Anon. I wish you well in them.
>>11555 Starting the Deepmind lectures today anon, thank you.
>>12032 >telling about how useful the resources of this “home brew” club are. This site covers a wide spectrum and is (or was) more focused on building a robotic body than some avatar. Mostly it's about pointing people to resources to start learning how to do something, often from scratch, so be more patient with us.
Alright, I've made a quick pass at straightening the mess a few of you created ITT. The posts have been moved either to the WaifuEngine thread >>12270 or to the Lounge. Keep discussions on-topic or move it elsewhere, thanks.

AI Software Robowaifu Technician 09/10/2019 (Tue) 07:04:21 No.85 [Reply] [Last]
A large amount of this board seems dedicated to hardware; what about the software end of the design spectrum? Are there any good-enough AIs to use?

The only ones I know about offhand are TeaseAi and Personality Forge.
123 posts and 42 images omitted.
Does anyone have any resources on how the software integration would work? I.e., say you solve the vision piece so that waifubot can identify you as "husbandu," and you have the chatbot software so that you can talk to your waifu about whether NGE is a 2deep4u anime--how do you connect the two? How do you make it so that waifu recognizes you and says, "Hi, how's it going?"
>>12067 Is this one more of the many theoretical questions here? When building something, solutions for such problems will present themselves. Why theorize about it? And to what extent? Or the short answer: conditionals. Like "if".
>>12069 >Is this one more of the many theoretical questions here? No. Allow me to get more specific. I have OpenCV-based code that can identify stuff (actually, I just got that OakD thing ( https://www.kickstarter.com/projects/opencv/opencv-ai-kit ) and ran through the tutorials), and I have a really rudimentary chatbot software. When I've been trying to think through how to integrate the two, I get confused. For example, I could pipe the output of the OakD identification as chat into the chatbot subroutine, but then it will respond to _every_ stimulus, or respond to visual stimuli in ways that really don't make sense.
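A crude but effective way to keep the chatbot from reacting to every frame is to gate the detections on confidence and novelty before anything reaches the dialogue loop. A minimal sketch in Python; the labels, threshold, and cooldown are all hypothetical placeholders:

import time

SALIENT = {"husbandu"}   # labels worth talking about (hypothetical)
COOLDOWN_S = 30          # don't re-greet the same label for 30 seconds
last_seen = {}

def filter_stimuli(detections, now=None):
    """Yield only confident, novel detections worth passing to the chatbot."""
    now = now or time.time()
    for label, confidence in detections:
        if label not in SALIENT or confidence < 0.8:
            continue                  # ignore chairs, trees, low-confidence noise
        if now - last_seen.get(label, 0.0) < COOLDOWN_S:
            continue                  # seen too recently; stay quiet
        last_seen[label] = now
        yield label

for label in filter_stimuli([("husbandu", 0.93), ("chair", 0.99)]):
    print(f"Hi {label}, how's it going?")   # hand off to the chatbot here instead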
>>12067 In my experience the simplest way to think about it is like a database. You give the database a query and it gives a response. That query could be text, video, audio, pose data, or anything really, and the same goes for the response. You just need data to train it on, covering which responses to give for which queries. There was a post recently on multimodal learning with an existing transformer language model: >>11731 >>12079 With this, for example, you could output data from your OpenCV code and create an encoder that projects that data into the embedding space of the transformer model.
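To make the encoder idea above concrete, here's a minimal PyTorch sketch. It assumes the detector emits fixed-size feature vectors and the language model uses 768-wide embeddings; both dimensions are hypothetical placeholders:

import torch
import torch.nn as nn

class DetectionEncoder(nn.Module):
    """Projects detector output vectors into the LM's embedding space."""
    def __init__(self, det_dim=16, embed_dim=768):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(det_dim, embed_dim),
            nn.Tanh(),  # keep projected vectors in a range similar to embeddings
        )

    def forward(self, detections):       # (batch, n_boxes, det_dim)
        return self.proj(detections)     # (batch, n_boxes, embed_dim)

encoder = DetectionEncoder()
vision_tokens = encoder(torch.randn(1, 4, 16))
# prepend vision_tokens to the token embeddings fed into the transformer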
>>12086 Exactly what my brain needed. Thanks anon.

Open file (297.54 KB 656x525 Kita.jpg)
Opensimulator opensimulator 08/01/2021 (Sun) 22:21:57 No.12066 [Reply]
Introduction: As we know, creating an acceptable, functional robowaifu requires knowledge and techniques from different areas. One of them is simulation. There are many modern, easy-to-use game development frameworks and tools like Godot, Unity, and Unreal Engine. But even if they come with easy-to-use, out-of-the-box IDEs, you would still need to create environments and collect or make assets, and if you want to spread your work you will have to supply the other members with the source code and assets. Here is where OpenSimulator comes to the rescue.
What is OpenSimulator?: OpenSimulator is a .NET-based technology (which runs perfectly on Mono) that allows you to create distributed 3D world simulator environments which users can visit and interact with using different clients/viewers. You can split your simulators and connect them into a public or private grid, and grids can allow users from other grids to visit them; this is called the hypergrid (you can see the hypergrid as a 3D internet based on OpenSimulator). Here is the list of popular public OpenSimulator-based grids: http://opensimulator.org/wiki/Grid_List If you are wondering how many active users there are, you can get an idea from this hypergrid index: https://opensimworld.com/dir This technology is an open-source clone of a proprietary platform called Second Life, which unfortunately is not well known, or is underestimated and sold as a simple social 3D platform, when in fact it is a huge collaborative 3D development environment (in which even robowaifus exist).
What can I do inside OpenSimulator?


Edited last time by Chobitsu on 08/02/2021 (Mon) 07:12:11.
1 post omitted.
Curious why you're starting a new thread to promote this OP? We already have a robowaifu simulator thread.
Open file (937.65 KB 1280x718 Dc4e2vh.png)
>>12066 This reminds me of a video I saw of someone making a VRChat alternative that functions like a VR internet. People could physically hand people files, play with objects affected by physics, share pictures, open YouTube videos and do all kinds of fun shit. It reminded me of Dennou Coil in a way. People could explore other people's servers but also mix in their own stuff with it like augmented VR. If anyone knows what I'm talking about and knows the link, please post it. It was all anime too. >>12070 Agreed if OP doesn't have a server that warrants its own thread.
>>12076 >Agreed if OP doesn't have a server that warrants its own thread. I do, but I feel like it's not production-level yet, at least not for the project. I need to make a PoC region for this. Just wanted to share.
>>12073 >Curious why you're starting a new thread to promote this OP? I had the intention to do it in the past, but I am bad at explaining things, and I didn't want people to confuse it with just another simple game.
>>12076 >This reminds me of a video I saw of someone making a VRChat alternative that functions like a VR internet. People could physically hand people files, play with objects affected by physics, share pictures, open YouTube videos and do all kinds of fun shit. It reminded me of Dennou Coil in a way. People could explore other people's servers but also mix in their own stuff with it like augmented VR. If anyone knows what I'm talking about and knows the link, please post it. It was all anime too. It works exactly like that, but it's not VR. Well, there are some VR viewers, but they are experimental.

Robowaifu Systems Engineering Robowaifu Technician 09/11/2019 (Wed) 01:19:46 No.98 [Reply]
Creating a functional Robowaifu is a yuge Systems Engineering problem. It is arguably the single most complex technical engineering project in history, bar none, IMO. But don't be daunted by the scale of the problem, anon (and you will be if you actually think deeply about it for long, hehe), nor discouraged. Like every other major technical advance, it's a progressive process. A little here, a little there. In the words of Sir Isaac Newton, "If I have seen further it is by standing on the shoulders of Giants." Progress in things like this happens not primarily by leaps of genius--though ofc that also occurs--but rather chiefly by incremental steps towards the objective. If there's anything I'm beginning to recognize in life, it's that the key to success lies mainly in one unwavering agenda for your goals: Just don't quit.

>tl;dr
Post SE and Integration resources ITT.

www.nasa.gov/sites/default/files/atoms/files/nasa_systems_engineering_handbook.pdf
Edited last time by Chobitsu on 09/26/2019 (Thu) 11:46:43.
19 posts and 7 images omitted.
>related crosspost (>>11160)
Open file (27.41 KB 250x328 DDIA.jpeg)
Open file (76.53 KB 943x470 Selection_013.jpg)
While this is nominally a database book, it's largely focused on optimizations for data throughput. As such, it certainly qualifies as a valuable reference for robowaifu systems engineering. In particular, chapter 11, Stream Processing, is a very important topic in the realms the RPCS would seek to address. > www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/ https://github.com/ept/ddia-references
While this metaphor was explicitly developed for the software industry, it has many corollaries in other industries. For example, SophieDev Anon is trying his hand at 3D modeling. As a TD in the industry, I have seen literally dozens of examples of people, artists in particular, piling up technical debt by rushing work with excessive shortcuts to try and meet some intermediate asset checkpoint and just get it out the door. But during that effort, they were not considering the later costs of fixing their hot messes before being able to continue the overall project itself. That's technical debt in the creative industry, and it has direct implications for our robowaifu designs.
Another, more apparent, technical debt for some of us might be the choice to use the Python programming language as a means to 'quickly' get various AI projects up and going. Just like incurring real-world debts, this can speed up the prototyping stage measurably. But let's imagine it turns out that multi-gigabyte libraries have a hard time even fitting inside the small compute hardware platform of our early robowaifus, much less running well on it. Further, suppose that robowaifus need to respond in realtime/neartime for most AI-related tasks, and we find out that literally the only way to make these processes work properly is by using embedded C code on the microcontrollers instead. Now the original ideas will need to be recast into this more efficiency-oriented approach before they will actually work IRL. That's technical debt in the software industry (with a close corollary in the electronics industry), and it must be repaid quickly if the project isn't to stall out.
We could discuss mechanics, power, materials, and so on; technical debt is a potential phenomenon in all of these arenas. This is an important and fairly deep topic actually, and I'd like to begin a discussion with robowaifuists about how we can both take advantage of technical debt and also remediate it ('pay it back') in our works. If we don't account properly for this phenomenon early, we almost certainly as a group will fall years behind in our ability to deliver robowaifus (hopefully well before the globohomo ruins everything). For any anons unaware of the concept, here's where the idea got started:
http://wiki.c2.com/?WardExplainsDebtMetaphor
https://www.agilealliance.org/introduction-to-the-technical-debt-concept
>>11904 >technical debt >sculpting I tried to do everything in CAD, but it has limits. I never thought sculpting parts should be avoided completely, only minimized as much as possible. Either way, there aren't many users here who are trying out stuff and posting it in the first place. The more parts we have to work with, the better. >Python bashing, once again Code can be replaced piece by piece and called by the rest of the codebase. We won't have gigabytes of Python code anyways, I think. That aside, we will need to integrate as many programs from other people as possible, in all kinds of languages. Trying to write everything from scratch in C or C++ would be a delusional attempt, so frustrating that the few people who are even trying now would drop out, since there would be no hope of ever achieving anything in a reasonable amount of time.
>>11917 You seem quite antagonistic to my basic claims, so I'll put them all aside for the moment (my real-world experiences notwithstanding). In a more general sense then, can you offer any advice on my specific desired outcome for this conversation, Anon? Namely; >how we can both take advantage of technical debt, and also remediate it ('pay it back') in our works too.

Python General Robowaifu Technician 09/12/2019 (Thu) 03:29:04 No.159 [Reply] [Last]
Python Resources general

Python is by far the most common scripting language for AI/Machine Learning/Deep Learning frameworks and libraries. Post info on using it effectively.

wiki.python.org/moin/BeginnersGuide
https://archive.is/v9PyD

On my Debian-based distro, here's how I set up Python, PIP, TensorFlow, and the Scikit-Learn stack for use with AI development:
sudo apt-get install python python-pip python-dev
python -m pip install --upgrade pip
pip install --user tensorflow numpy scipy scikit-learn matplotlib ipython jupyter pandas sympy nose
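If the install succeeded, a quick one-liner (a minimal sanity check; substitute python3 on newer distros) confirms the stack imports cleanly:
python -c "import tensorflow, numpy, sklearn; print('OK')"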


LiClipse is a good Python IDE choice, and there are a number of others.
www.liclipse.com/download.html
https://archive.is/glcCm
51 posts and 12 images omitted.
Open file (20.03 KB 560x321 GottaPyFast.png)
Sentdex created an open-source Python code transformer model like GitHub Copilot. It has only been trained for two epochs, so it's not great, but it's interesting and fun to play around with. I feel it's gonna be important to stay on top of these developments to keep up with AI-accelerated productivity, so I made an easy-to-use GUI for running inference on these models (just press Control+Tab to generate). GottaPyFast: https://gitlab.com/robowaifudev/gottapyfast/ Sentdex Demo: https://www.youtube.com/watch?v=1PMECYArtuk Sentdex Model: https://huggingface.co/Sentdex/GPyT How to use the model:
from transformers import AutoTokenizer, AutoModelWithLMHead

# set device to cuda if you have a GPU
device = "cpu"
tokenizer = AutoTokenizer.from_pretrained("Sentdex/GPyT")
model = AutoModelWithLMHead.from_pretrained("Sentdex/GPyT").to(device)

def generate(model, query, max_length=100, top_p=0.9, temperature=1.0, do_sample=True):
    newlinechar = "<N>"
    # The original post is truncated here; what follows is a plausible
    # reconstruction, assuming the model's convention of encoding newlines as <N>.
    tokenized = tokenizer.encode(query.replace("\n", newlinechar), return_tensors="pt").to(device)
    output = model.generate(tokenized, max_length=max_length, top_p=top_p,
                            temperature=temperature, do_sample=do_sample,
                            pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0]).replace(newlinechar, "\n")


Trying to package PyTorch and Transformers for Windows is a nightmare. I put together some notes here on how to get it to work: https://gitlab.com/robowaifudev/cookbook/-/wikis/python/pyinstaller I think using MLPack would be much more practical. PyTorch takes up a massive 3.1 GB, which is completely unnecessary. Another option could be to rewrite the GPT-2 transformer model to use NumPy instead, which is only 5.4 MB. I'll look more into this later.
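For a sense of what such a rewrite would involve: the core of a GPT-2 block is just matrix math that NumPy already handles. A minimal sketch of single-head causal self-attention, with untrained random weights purely for illustration:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model). Each token may only attend to earlier tokens."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores += np.triu(np.full_like(scores, -1e9), k=1)   # causal mask
    return softmax(scores) @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 64))                        # 8 tokens, toy width 64
Wq, Wk, Wv = (rng.normal(size=(64, 64)) for _ in range(3))
print(causal_self_attention(x, Wq, Wk, Wv).shape)   # (8, 64)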
>>11683 Thanks for all your hard work here Anon. I apologize to you and everyone else here for being such a pussified faggot about Python. I recognize it's important to all of us, or else I wouldn't even consider picking it up. Please look into MLPack sooner rather than later if you at all can. It's probably our only real hope for doing waifu AI on a shoestring budget hardware-wise.
Open file (13.19 KB 849x445 chainer.png)
>>11684 MLPack's documentation is really lacking, especially for newer features, and it seems to be missing essential features. I'd have to sit down with it for 3-6 months to get transformers and text-to-speech models working in it. I'm looking into using Chainer, which is built on top of NumPy and quite popular in Japan. A basic application with Chainer packaged with PyInstaller compresses down to 14 MB. On top of that, there are already lots of ML models implemented in it. I think if I roll out some waifu tech with Chainer to garner interest, we could get some more help to build things in MLPack, which will be particularly useful for embedded systems and actual physical robowaifus.
>>11688 Migration guide from PyTorch to Chainer https://chainer.github.io/migration-guide/
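For anyone weighing the switch, the two APIs map over almost one-to-one. A minimal sketch of a Chainer model definition, a toy classifier for illustration only:

import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    """Rough Chainer equivalent of a small torch.nn.Module MLP."""
    def __init__(self, n_hidden=64, n_out=10):
        super().__init__()
        with self.init_scope():                   # registers the layers
            self.l1 = L.Linear(None, n_hidden)    # None infers the input size
            self.l2 = L.Linear(n_hidden, n_out)

    def forward(self, x):
        return self.l2(F.relu(self.l1(x)))

model = MLP()
y = model(np.zeros((1, 784), dtype=np.float32))   # lazy init on first call
print(y.shape)                                    # (1, 10)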

Open file (32.03 KB 400x400 FXCY9fGv.jpg)
THE LANGUAGE PROBLEM Robowaifu Technician 11/20/2020 (Fri) 13:22:39 No.6937 [Reply]
'Sup anons? I am here to remind you guys of something important: DO YOUR RESEARCH IN MULTIPLE LANGUAGES. Our mutual language is English, however that is not enough. We need people who can read three important languages: Japanese, Chinese, and Russian. I've been learning Japanese for 4 months, and with the help of a dictionary I am able to understand basic stuff. Here is the point: there is a whole other world out there.
1) Chinese: Chinese people work under very hard circumstances and put a great deal of effort into their jobs. Nearly none of their projects are translated into English, since Google is banned in China. However, there is a lot of great stuff there; even Microsoft runs their virtual woman project there. Since Chinese is too hard for me to learn, I generally use DeepL (the best translator out there) and Baidu (a Chinese search engine) to read the latest research and projects. I wish I knew Chinese well; in that case I would be able to find not-so-popular webpages and gather more information.
2) Japanese: Even though good Japanese projects get translated into English, most of the research there gets translated only when the project is ready to publish, and sometimes it is too hard to find. I try to read as many AI papers in Japanese as I can. Scientists there do great stuff. I've seen a lot of robowomen projects there, and you can also find some 3D printing projects for anime girls. Really worth looking.
3) Russian: Russian is the least important one in my opinion. But a professor of mine graduated from a university there, and he has a lot of academic books that aren't translated into English. You would be amazed at how much work they have on subjects such as algorithm theory, artificial intelligence, and computer science. Most of the CS work there is focused on the "science" part, so it is theory-weighted.
So right now we need people who can read Japanese and Chinese (Korean would be good as well, but there isn't that much research there tbh). Using DeepL is enough to understand most pages, but only a person fluent in Chinese/Japanese will be able to find the goldmines hidden deep there. I am pretty sure there are hundreds of Chinese people working on robowaifu-related projects that we are not aware of. The same applies to Japanese people, but since Chinese people are in a much worse situation, it becomes really hard to find them. Anyone have some recommendations? I wish I had the time and skills to learn all those languages, but I can only afford to learn one, and I am going with Japanese since I have a dream of moving there in the future. We need to brainstorm on this issue.
14 posts and 1 image omitted.
>>11389 >is an old post Heh, don't worry Anon. This isn't a typical IB in that sense. That is, there's no such thing as 'necrobumping' here (or any complaints about it either). If you have something to add, by all means put it in the correct thread.
Open file (476.56 KB 1100x600 example2.png)
Sometimes PDFs don't copy and paste text correctly because researchers upload scanned documents and whatever OCR they used on it sucks. For a long time I've been using Google Keep which has a great multilingual OCR feature but I'm looking for a simpler open-source solution so I don't have to copy pages and pages of paragraphs. So far I've found these two that support Asian languages: https://github.com/PaddlePaddle/PaddleOCR https://github.com/JaidedAI/EasyOCR It would be great to have a tool one day that automates PDF OCR and prepares it into a document for translation on DeepL. A lot of the time I just ignore research in other languages because it's such a hassle to read.
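For anyone wanting to try the second option, EasyOCR's API is only a few lines. A minimal sketch; the language codes, file name, and confidence cutoff are placeholder assumptions:

import easyocr

# build a reader for Japanese + English (downloads models on first run)
reader = easyocr.Reader(['ja', 'en'])

# readtext returns a list of (bounding_box, text, confidence) tuples
for bbox, text, confidence in reader.readtext('scanned_page.png'):
    if confidence > 0.5:    # drop low-confidence noise before sending to DeepL
        print(text)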
>>11491 I think I understand the need you're describing Anon. Having no experience with what's being depicted, I'm confused by the provided image however. Any chance you can break down what's being represented there for the uninitiate?
>>11492 It's optical character recognition. It's outputting the bounding box coordinates of text in the image, the predicted text, and the confidence level.
>>11494 Ah, I suspected that might be the case, seemed to make sense. I presume the Asian characters would be sent through some kind of translation software afterwards?

Open file (166.07 KB 1462x1003 rpcs.png)
Robowaifu Power and Control Systems Electronic Chronicler 06/22/2021 (Tue) 21:45:47 No.11018 [Reply]
Hi Anons, making a thread as suggested in >>10947. I've been thinking about this for a long while, and wanted to throw in a draft and see if anyone has comments/criticisms/additions. I present to you the draft of the Robowaifu Power and Control System (RPCS). This draft is by no means complete or definitive, but it is a starting point. Let's call it version 0.1a. The version follows major.minor and a letter for bug-fixes; minor is for feature addition/reduction, major for when we eventually get there XD. From a legal standpoint, this is under CC0 or public domain (unless Chobitsu has specific licensing for the content on /robowaifu/). I intend for the draft to develop further and stay open for use by anyone.
Summary
A full-size robowaifu system needs several things:
- Power distribution. Main system bus coming from a Li-ion battery (or other technology), plus one backup 5V emergency power supply used by the slow communications to check appendage integrity, sensors, etc.
- Main processor. In this post I won't be delving into it in great detail, mostly treating it as a black box.
- Low/medium-throughput communication for simple sensor, debug, or status information. Must be robust, and must work before any high-level software is up (including the network stack).
- High-throughput communication for large data-logging, visual processing, etc.
- "Spine", or communication interconnect. Multiplexes many connections from many sub-systems to the main processor. Includes the power distribution connection (allowing individual control of sub-systems).
- Sub-systems to actually do the fun stuff! (Arms, legs, nekomimi ears, etc.)
Terminology
Brain refers to the main processor (and all of its sub-systems treated as a single unit).
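Since the low/medium-throughput link has to work before any high-level software is up, something as dumb as a fixed-size binary frame fits it well. A minimal sketch in Python; the field layout is entirely hypothetical:

import struct

# Hypothetical 6-byte status frame: sub-system id, boot stage, error code,
# battery millivolts, and a simple XOR checksum over the payload.
def pack_status(subsys_id, stage, err, millivolts):
    payload = struct.pack("<BBBH", subsys_id, stage, err, millivolts)
    checksum = 0
    for b in payload:
        checksum ^= b
    return payload + struct.pack("B", checksum)

frame = pack_status(subsys_id=0x12, stage=3, err=0, millivolts=14800)
print(frame.hex())   # 6 bytes on the wire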


37 posts and 4 images omitted.
>>11155 I tried once or twice, but he was gone too fast, and my network wasn't very stable either, which might have caused it. I've installed an IRC client on my tablet, so I can try again.
>>11164 That's fine, no rush. I'll be happy to ask Robi about it myself. He'll need to enable you to make an account, so we'll probably need to arrange a timeframe with you for it.
>>11144 >That's a broad societal topic, and one I'm neither qualified nor enthusiastic about. I absolutely loathe public transportation, having had to actually use it frequently in urban America. It's both dangerous (blacks), and a practical nuisance. Mostly I do agree with you. I generally avoid using it when I can, however in Europe it's at least tolerable. In the USA I remember it being kinda shit. >My apologies Anon. LOL I definitely didn't succeed at being concise. :^) Hehe, you ain't the only one ;)
>>11150 >Now despite thinking that real autonomous cars are stupid, the STUDY of autonomous navigation is extremely useful. Specifically at a scale where you can install GPS modules and read AR markers (QR codes) to guide a machine along a planetary surface. I hadn't considered it on a global scale, but I imagine that would be massively useful to mankind even today. Oh, inter-planetary travel, when will you come... >Sometimes I even implement simple logic using discrete AND/OR/NAND/NOR gates and transistors if a microcontroller isn't necessary. I really like that line of thinking. Cheap-as-dirt MCUs have spoiled engineers and hobbyists alike (myself included sometimes).
>>11151 >Our road network is really cramped and badly designed. Very good points. It's too common for techies and marketing to engage in wishful thinking, while forgetting that infrastructure is a massive cost (one that not even a big corpo could really handle).


>>11175 One more thing about the brain. Just as a PC motherboard has a BIOS speaker, we should have a minimal set of hardware peripherals for the low-level Brain; or perhaps even skip the whole concept and put all low-level control in the Spine, with all the Brain sub-systems connected via an internal network. I imagine at least a speaker, some lights, and maybe a status display (or a debug connector for one?) being available for checking error codes and knowing what stage of the boot process the robowaifu is in.
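As a toy illustration of the beep-code idea, here's a sketch; the pulse() argument is a hypothetical stand-in for whatever GPIO/buzzer driver the board ends up using:

import time

BOOT_STAGES = {
    1: "power-on self test",
    2: "Spine link up",
    3: "sensor bus scan",
    4: "Brain handshake complete",
}

def signal_stage(stage, pulse=lambda ms: time.sleep(ms / 1000)):
    """Emit N short pulses for boot stage N, like BIOS beep codes."""
    for _ in range(stage):
        pulse(150)         # drive the buzzer/LED for 150 ms (placeholder just sleeps)
        time.sleep(0.15)   # gap between pulses
    print("boot stage", stage, ":", BOOT_STAGES.get(stage, "unknown"))

signal_stage(3)   # three beeps: she's stuck at the sensor bus scan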
>>11175 >Thanks m8. Hopefully we can discuss further via email this time. That'll be fine, I'll check in with it sometime soon. >>11164 >>11175 >I'm guessing this for another chap? Oh, heh. My apologies to you both. No, it was intended for you, Elec-Chron. Perhaps the other anon is helping us connect with Robi regarding your account setup? If so, thanks first Anon. >note BTW, I plan to migrate all the posting on our vol account discussion over to the /meta thread so we're not cluttering up your RPCS thread with such things. So, don't be surprised when a few posts disappear ITT.

Open file (154.32 KB 600x400 core3433.png)
New Paradigm of CPU and PCB Architecture meta ronin 06/18/2021 (Fri) 23:47:00 No.10965 [Reply]
Considering where we want to go, a robowaifu with brittle and fragile PCBs, soldering contacts, and delicate wires is something which can and should be improved upon. A closed CPU architecture is key, as well as the possibility to port the waifu to a new body if necessary. Several supplements and 6 shots of espresso later, I've sketched this (somewhat humorous but also somewhat serious) concept. This is only a processor and not a memory module, but there are a few things I wanted to bring up via this illustration:
1. The idea of discarding the PCB model for something suspended in heat-dissipating resin: mostly rigid, waterproof, and shock resistant. Those chips aren't going anywhere, plus it looks black/translucent and cool, and we want our waifu's aesthetic without clunky square innards. This gives it the Mega Man memory-crystal aesthetic.
2. CPUs dedicated to specific types of processing. Do one job and do it well.
a) Since our human brains are split into left-right, logical vs. abstract, I think it would be an advantage to have separate processors to, say, recall facts and perform calculations vs., say, paint a picture or appreciate nature. The "creative" CPU may very well work like Google AI or something of that sort via self-seeding feedback-recursion (therefore our waifus can imagine and dream too).
b) Mirror neurons and environmental modeling: important for our waifu to understand that we are like her, and not simply another object like a chair or a tree. Gives her the ability to understand how to interact with others and the world via empathy, by constantly comparing her own similar experiences with what she sees or experiences. This also requires an internal 3D spatial model of her environment, which would be continually updated from sensory input; and like us, fuzzy-logic assumption could fill in the gaps where necessary. I figure something like a GPU core array would do the trick here.
c) Impetus motivational chip: basically the reward/dopamine system.
d) Safety or hazard-prevention chip: would interface directly with the environmental simulation chip; any potential hazard or danger to herself or her owner (or even another, if she is the cause) would cause her to freeze or back up a step before assessing further. Would also be useful for necessary danger/pain-avoidance reflexes and self-preservation. I figure industrial (BUS, for example) systems such as those in factories, or those that manage car braking, steering, etc., would already be on the cutting edge of this, and we might be able to borrow something from them.
Ports would basically be for power, charging a small internal lithium battery, and another strictly for I/O.
Use this thread for any elaboration on this concept, or feedback (or your own ideas if you think you have something better along the same lines). -M.Ronin
8 posts and 1 image omitted.
>>11075 >Thermal paste is mostly a grease carrier with some powdered solids in it. For AS5 the powder is ground up silver and it is considered very unlikely that the silver particles will ever perfectly line up to cause a short because of the very large amount of grease and tiny amount of solids. Admittedly that's tricky; I imagine anything that surrounds the silver will impede its heat conductivity. This will require thinking outside the box; maybe metamaterials with the right properties could solve this one.
>>11076 >This will require thinking outside the box, maybe metamaterials Sounds great. I'm sure you can think of something interesting for it. Please keep us updated here on it ITT. My guess is some form of liquid carrier for the heat -- something that is also electrically insulative -- will be the proper course for us in keeping our high-heat (eg, hip actuators, etc.) robowaifu components nice and cooled off.
>>11072 >>11074 >>11075 >>11076 >>11079 Simply crosslinking this post here, since we already have an entire thread dedicated to this topic. (>>11080)
>>11081 Thanks! That led me to this as a possible solution: https://en.wikipedia.org/wiki/Silicone_oil
>>11075 Here we go, I posted in the other crosslinked thread. This is certainly doable. >>11106

HOW TO SOLVE IT Robowaifu Technician 07/08/2020 (Wed) 06:50:51 No.4143 [Reply] [Last]
How do we eat this elephant, /robowaifu/? This is a yuge task obviously, but OTOH we all know it's inevitable there will be robowaifus. It's simply a matter of time. For us (and for every other Anon) the only question is: will we create them ourselves, or will we have to take what we're handed out by the GlobohomoBotnet(TM)(R)(C)? In the interest of us achieving the former, I'll present this checklist from George Pólya. Hopefully it can help us begin to break down the problem into bite-sized chunks and make forward progress.
>---
First. UNDERSTANDING THE PROBLEM
You have to understand the problem.
>What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory?
>Draw a figure. Introduce suitable notation.
>Separate the various parts of the condition. Can you write them down?
Second.


Edited last time by Chobitsu on 07/08/2020 (Wed) 07:17:36.
63 posts and 15 images omitted.
>>10796 Such a kawaii outfit and pose!
>>10798 >Not quite perfect Anon. LOL true, sorry. I just go all Lord Katsumoto from 'The Last Samurai' when I see Kosaka-san singing and dancing.
>>10801 Kek, fair enough. It was a great moment!
>>10796 Thanks, and yes I'm going to put her into the next version.
>related crosspost >>1997

My reason to live Robowaifu Technician 09/13/2019 (Fri) 12:49:21 No.209 [Reply]
Okay, this is fucking hard to explain. I just know that a supernatural force guided me here, and I'm going to invest everything I have in this, but I have to do it with my own hands. I need help with files and basic notions of programming, but most importantly I need files to build a body/head, and to know how to make synthetic skin to coat it. It will look like 2B. I need your help, friends.
13 posts and 1 image omitted.
>>10483 Did you download the voice model that waifudev created a link for? I cannot find that Tacotron2 model; if you did, could you upload it for me? I want to try it out. Also, we basically have the pieces for 2B, well, the clothing at least, which we are about to create a mod package for.
>>10498 >I cannot find that Tacotron2 model, if you did could you upload it for me? No, sorry, I don't. The WaifuSynth dev, robowaifudev, seems to be around here fairly frequently. Maybe you can make a post in the Speech Synthesis (>>199) thread, or one of the AI threads such as the GPT one (>>250), and let him know the model has gone missing. I imagine he probably updated it, and possibly didn't update the anonfiles link to the new one. Good luck Anon.
>>10498 >>10502 I hadn't seen your other post before making this reply. I made another one to yours in the other thread, Em Elle E (>>10504).
The Faustian Spirit of the Aryan led me here. I can contribute nothing other than wishing you gentlemen good luck in the pursuit of robo waifus
>>10793 Nonsense. Thanks for the well-wishes and all, Anon. But literally any anon with an interest in robowaifus can contribute here. Unlike the common derogatory meme on the topic in typical IB circles, /robowaifu/ actually needs idea guys! Research & Development, Design & Engineering thrive on new blood and new ideas. >tl;dr Just start posting comments Anon, it will all work out for you here.
