/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Canary has been updated.

Build Back Better

Sorry for the delays in the BBB plan. An update will be issued in the thread soon. -r





Open file (279.12 KB 616x960 Don't get attached.jpg)
Open file (57.76 KB 400x517 1601275076837.jpg)
Ye Olde Atticke & Storager's Depote Chobitsu Board owner 09/05/2021 (Sun) 23:44:50 No.12893 [Reply] [Last]
Read-only dump for merging old threads & other arcane type stuff. > tl;dr < look here when nowhere else works anon. :^)
265 posts and 162 images omitted.
>>22666 With all due respect, Noidodev is more of a regular here than you are, Anon (and basically by your own admission). I had originally intended to merge this thread with a new chatbot thread when one gets made, but given the course of the discussion, and your seemingly intentional effort at blackpilling the board, I've changed my mind. I'll just lock it for now, pending a decision on whether it should go into the Atticke or be sent to the Chikun Coop instead. >tl;dr Please read over the board thoroughly, Anon, and try to get a feel for our culture here first.
Open file (1.45 MB 1000x642 ClipboardImage.png)
Here, I'll make it easy for you with everything you need. https://www.poppy-project.org/en/ https://docs.poppy-project.org/en/assembly-guides/poppy-humanoid/bom <- buy everything here. Start by building a Poppy and 3D printing the body. Cover the body in fabric or a 3D-printed shell. Take the head from any VRChat model whose design you like, and 3D print it. Next, connect the face screen and an internal speaker to a computer on your local network running the ExLlamaV2 API, and install the voice and speech-to-text models on it so you can talk to her. I recommend Nous-Hermes-13B and the oobabooga text-generation-webui. You can use this video as a starting point: https://www.youtube.com/watch?v=lSIIXpWpVuk Finally, hook it up to VTube Studio or any other VTuber software and use visemes, like in VRChat, to move the model's face when she talks. From there you have a minimally viable waifubot. You can program movement and walking-follow into it later using the Poppy project, and extend the robotic functions from there. Think of the ROS stuff as the "left brain" and the talking as the "right brain". You can build static commands into the model too, if you want. Machine vision can be passed to the LLM to tell it what it's looking at by running CLIP on screenshots from the webcam. You literally have everything you need right here.
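To make the "talk to her over the local network" step more concrete, here's a minimal sketch of the chat-and-speak loop. It assumes text-generation-webui is running locally with its OpenAI-compatible API enabled and serving the model; the port, route, and system prompt below are assumptions and may differ by version, and pyttsx3 is only a stand-in for whatever voice model you actually install.

```python
# Minimal sketch: type text in, get an LLM reply, have it spoken aloud.
# Assumes text-generation-webui is serving a model locally with its
# OpenAI-compatible API enabled; the port/route may differ by version.
# pyttsx3 is only a placeholder for the real voice pipeline.
import requests
import pyttsx3

API_URL = "http://127.0.0.1:5000/v1/chat/completions"  # assumed default

def ask_waifu(history, user_text):
    """Append the user's line, query the local LLM, return her reply."""
    history.append({"role": "user", "content": user_text})
    resp = requests.post(API_URL, json={
        "messages": history,
        "max_tokens": 200,
        "temperature": 0.8,
    }, timeout=120)
    resp.raise_for_status()
    reply = resp.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

def speak(text):
    """Placeholder TTS; swap in the real voice model later."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    chat = [{"role": "system", "content": "You are a cheerful robowaifu."}]
    while True:
        line = input("you> ").strip()
        if not line:
            break
        answer = ask_waifu(chat, line)
        print("her>", answer)
        speak(answer)
```

Vision could bolt on the same way: run CLIP (or any captioner) on a webcam frame and prepend the caption to the next user message.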
>>24807 To think the Poppy video was released 8 years ago. 8 years ago. I'm not much of an engineer (if at all) and I don't own a 3D printer, but it feels like it could be a decent starting point. On the conversational/utility side it will definitely need more work: current AI models are decent but not that incredible, and they require serious hardware to run. Plus a robowaifu would ideally need our own custom model, since most others are very general and not made for personal matters. The same goes for coding the waifu to walk through a full, new 3D space (as opposed to just walking on a treadmill as seen in a video); using the camera to avoid obstacles (though I remember that being tackled with infrared and a few scripts, I witnessed such a thing long ago in a school project); using the arms to perform tasks (which can be simple hugging too, not necessarily cleaning/cooking); using facial recognition to recognize emotions and react to them; sex positions, and with that the need to change the structure to add an insertable silicone vagina or the like, possibly with Bluetooth compatibility and sensors to mimic what the RealDoll Harmony does (moaning and a lewd face when the sensors detect movement/touch); and the skin on top of the base structure, because as it is it's pretty... unconventional, to say the least. So I think there's still a lot to be done, but once again it seems like it could be a good starting point, thanks anon. Though I will wait for feedback from the actual robowaifu technicians.
Under ten thousand what? Oof. Yeah, I mean, I'll take inspiration from that, but how is it so expensive? Also, I just got a Raspberry Pi Zero. Don't tell me it needs a regular Raspberry Pi lol. Went from a Radxa Zero, to an Orange Pi 3, to a Raspberry Pi Zero...
We have already had this thread OP. Since you never replied to my post, I moved it as I told you I would. (>>24531) If you decide to actually carry this out as your own personal project, then please discuss it with me in our current /meta thread. Till then use the thread linked above for discussion about The Poppy-project's own project. This one will also be locked till then. If you fail to respond to me within 3 days or so from this post, it will also be archived into the Atticke (removed from the board's catalog). >=== -prose edit
Edited last time by Chobitsu on 08/24/2023 (Thu) 14:00:19.

Open file (891.65 KB 640x360 skinship.gif)
Any chatbot creation step by step guide? Robowaifu Technician 07/17/2021 (Sat) 05:29:42 No.11538 [Reply]
So recently on /tech/ I expressed my interest in creating my own waifu/partner chatbot (with voice and an animated avatar), but wondered whether that is even possible now that I'm stuck in webdev. So one of the anons there pointed me to this board and to where I can get started on neural networks and achieving my dream. And when I came here and looked up the library/guide thread I got very confused, because it feels more like a general catalogue than an actual guide. Sometimes things are too advanced for me (like the chatbot thread, which two replies in already has people discussing topics beyond me, like seq2seq and such), and other times too basic (like basic ANN training, which I had already done before, and worse, the basic programming threads). I know this might feel like asking to be spoonfed, but bear with me; I've been stuck in a webdev job for a year, so I might not be the smartest fag out there to figure it all out myself. >=== -edit subject
Edited last time by Chobitsu on 08/08/2021 (Sun) 21:36:00.
24 posts and 5 images omitted.
>>11780 Alright thanks for the video link. I'd also be interested to hear any response from you on my advice as well.
>>12025 >In that comment I literally wrote, "but I didn't want to try to figure out too many different things just yet." Ah, fair enough then Anon. My apologies. Anyway, thanks for the great contribution ITT now! I take it you've been here on /robowaifu/ for a bit? As far as knowing about robotics, I think that's mostly a matter of just diving into a project and begin learning. One of the things I appreciated most about the Elfdroid Sophie project was watching SophieDev learn things and adapt to issues and design improvements as he went along. Entertaining and educational. But anyway, good luck with your chatbot/waifu projects Anon. I wish you well in them.
>>11555 Starting the Deepmind lectures today anon, thank you.
>>12032 >telling about how useful the resources of this “home brew” club are. This site covers a wide spectrum and is (or was) more focused on building a robotic body than on some avatar. Mostly it's about pointing people to resources so they can start learning how to do something, often from scratch, so be more patient with us.
Alright, I've made a quick pass at straightening the mess a few of you created ITT. The posts have been moved either to the WaifuEngine thread >>12270 or to the Lounge. Keep discussions on-topic or move it elsewhere, thanks.

Open file (297.54 KB 656x525 Kita.jpg)
Opensimulator opensimulator 08/01/2021 (Sun) 22:21:57 No.12066 [Reply]
Introduction: As we know, creating an acceptable, functional robowaifu requires knowledge and techniques from different areas. One of them is simulation. There are many modern, easy-to-use game development frameworks and tools like Godot, Unity and Unreal Engine, but even though they come with easy out-of-the-box IDEs, you would still need to create environments and collect or make assets, and if you want to spread your work you will have to supply the other members with the source code and assets. Here is where OpenSimulator comes to the rescue. What is OpenSimulator?: OpenSimulator is a .NET-based technology (which runs perfectly on Mono) that lets you create distributed 3D world simulator environments which users can visit and interact with using different clients/viewers. You can split your simulators and connect them into a public or private grid, and grids can allow users from other grids to visit them; this is called the hypergrid (you can think of the hypergrid as a 3D internet based on OpenSimulator). Here is the list of popular public OpenSimulator-based grids: http://opensimulator.org/wiki/Grid_List If you are wondering how many active users there are, you can get an idea from this hypergrid index: https://opensimworld.com/dir This technology is an open-source clone of a proprietary platform called Second Life, which unfortunately is not well known, or is underestimated and sold as a simple social 3D platform when in fact it is a huge collaborative 3D development environment (in which even robowaifus exist). What can I do inside OpenSimulator?


Edited last time by Chobitsu on 08/02/2021 (Mon) 07:12:11.
1 post omitted.
Curious why you're starting a new thread to promote this OP? We already have a robowaifu simulator thread.
Open file (937.65 KB 1280x718 Dc4e2vh.png)
>>12066 This reminds me of a video I saw of someone making a VRChat alternative that functions like a VR internet. People could physically hand people files, play with objects affected by physics, share pictures, open YouTube videos and do all kinds of fun shit. It reminded me of Dennou Coil in a way. People could explore other people's servers but also mix in their own stuff with it like augmented VR. If anyone knows what I'm talking about and knows the link, please post it. It was all anime too. >>12070 Agreed if OP doesn't have a server that warrants its own thread.
>>12076 >Agreed if OP doesn't have a server that warrants its own thread. I do, but I feel like it's not production-level yet, at least not for the project; I need to make a PoC region for this. Just wanted to share.
>>12073 >Curious why you're starting a new thread to promote this OP? I had intended to in the past, but I'm bad at explaining; I didn't want people to mistake it for just another simple game.
>>12076 >This reminds me of a video I saw of someone making a VRChat alternative that functions like a VR internet. People could physically hand people files, play with objects affected by physics, share pictures, open YouTube videos and do all kinds of fun shit. It reminded me of Dennou Coil in a way. People could explore other people's servers but also mix in their own stuff with it like augmented VR. If anyone knows what I'm talking about and knows the link, please post it. It was all anime too. It works exactly like that, but it's not VR. Well, there are some VR viewers, but they are experimental.

Robowaifu Systems Engineering Robowaifu Technician 09/11/2019 (Wed) 01:19:46 No.98 [Reply]
Creating a functional Robowaifu is a yuge Systems Engineering problem. It is arguably the single most complex technical engineering project in history, bar none, IMO. But don't be daunted by the scale of the problem anon (and you will be if you actually think deeply about it for long, hehe), nor discouraged. Like every other major technical advance, it's a progressive process. A little here, a little there. In the words of Sir Isaac Newton, "If I have seen further it is by standing on the shoulders of Giants." Progress in things like this happens not primarily by leaps of genius--though ofc that also occurs--but rather chiefly by incremental steps towards the objective. If there's anything I'm beginning to recognize in life it's that the key to success lies mainly in one unwavering agenda for your goals: Just don't quit.

>tl;dr
Post SE and Integration resources ITT.

www.nasa.gov/sites/default/files/atoms/files/nasa_systems_engineering_handbook.pdf
Edited last time by Chobitsu on 09/26/2019 (Thu) 11:46:43.
20 posts and 7 images omitted.
Open file (27.41 KB 250x328 DDIA.jpeg)
Open file (76.53 KB 943x470 Selection_013.jpg)
While this is nominally a database book, it's largely focused on optimizations for data throughput. As such, it certainly qualifies as a valuable reference for robowaifu systems engineering. In particular, chapter 11, Stream Processing, covers a very important topic in the realms the RPCS would seek to address. > www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/ https://github.com/ept/ddia-references
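For anyone who hasn't met the term, here's a toy illustration of what stream processing means in the RPCS context: windowed aggregation over an unbounded stream of sensor events. The sensor name and threshold are made up for illustration; this is not code from the book.

```python
# Toy stream processing: a rolling average over an endless stream of
# joint-temperature readings, flagging windows that run hot.
# Sensor name and limit are illustrative only.
from collections import deque
from itertools import islice
import random

def sensor_stream():
    """Stand-in for an unbounded stream of (sensor_id, reading) events."""
    while True:
        yield ("hip_actuator_temp_C", 48 + random.gauss(0, 5))

def rolling_monitor(stream, window=10, limit=50.0):
    """Emit (sensor, rolling_average, over_limit) once the window is full."""
    buf = deque(maxlen=window)
    for sensor, value in stream:
        buf.append(value)
        if len(buf) == window:
            avg = sum(buf) / window
            yield (sensor, avg, avg > limit)

if __name__ == "__main__":
    # Peek at the first 20 windows of the endless stream.
    for sensor, avg, hot in islice(rolling_monitor(sensor_stream()), 20):
        print(f"{sensor}: rolling avg {avg:.1f} {'HOT' if hot else 'ok'}")
```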
While this metaphor was explicitly developed for the software industry, it has many corollaries in other industries. For example, SophieDev Anon is trying his hand at 3D modeling. As a TD in the industry, I have seen literally dozens of examples of people -- artists in particular -- piling up technical debt by rushing work with excessive shortcuts to meet some intermediate asset checkpoint and just get it out the door. During that effort they were not considering the later cost of fixing their hot messes before the overall project could continue. That's technical debt in the creative industry, and it has direct implications for our robowaifu designs. Another, more apparent technical debt for some of us might be the choice to use the Python programming language as a means to 'quickly' get various AI projects up and going. Just like incurring real-world debts, this can speed up the prototyping stage measurably. But let's imagine that multi-gigabyte libraries turn out to have a hard time even fitting inside the small compute hardware platform within our early robowaifus -- much less running well on it. Further, suppose that robowaifus need to respond in realtime/near-realtime for most AI-related tasks, and we find out that literally the only way to make these processes work properly is by using embedded C code on the microcontrollers instead. Now the original ideas will need to be recast into this more efficiency-oriented approach before they will actually work IRL. That's technical debt in the software industry (with a close corollary in the electronics industry), and it must be repaid quickly if the project isn't to stall out. We could discuss mechanics, power, materials, and so on; technical debt is a potential phenomenon in all of these arenas. This is an important and fairly deep topic actually, and I'd like to begin a discussion with robowaifuists about how we can both take advantage of technical debt, and also remediate it ('pay it back') in our works. If we don't account properly for this phenomenon early, we almost certainly as a group will fall literally years behind in our ability to deliver robowaifus (hopefully well before the globohomo ruins everything). For any anons unaware of the concept, here's where the idea got started: http://wiki.c2.com/?WardExplainsDebtMetaphor https://www.agilealliance.org/introduction-to-the-technical-debt-concept
>>11904 >technical debt >sculpting I tried to do everything in CAD, but it has limits. I never thought sculpting parts should be avoided completely, only minimized as much as possible. Either way, there aren't many users here who are trying out stuff and posting it in the first place. The more parts we have to work with, the better. >Python bashing, once again Code can be replaced piece by piece and called by the rest of the codebase. We won't have gigabytes of Python code anyway, I think. That aside, we will need to integrate as many programs from other people as possible, in all kinds of languages. Trying to write everything from scratch in C or C++ would be a delusional attempt, so frustrating that the few people who are even trying now would drop out, since there would be no hope of ever achieving anything in a reasonable amount of time.
>>11917 You seem quite antagonistic to my basic claims, so I'll put them all aside for the moment (my real-world experiences notwithstanding). In a more general sense then, can you offer any advice on my specific desired outcome through this conversation then, Anon? Namely; >how we can both take advantage of technical debt, and also remediate it ('pay it back') in our works too.
Just storing this here as a sober warning against mediocrity while devising exceptionally-complex systems such as capable & pleasing robowaifus. https://www.palladiummag.com/2023/06/01/complex-systems-wont-survive-the-competence-crisis/ >=== -sp edit
Edited last time by Chobitsu on 11/02/2023 (Thu) 02:09:42.

Open file (32.03 KB 400x400 FXCY9fGv.jpg)
THE LANGUAGE PROBLEM Robowaifu Technician 11/20/2020 (Fri) 13:22:39 No.6937 [Reply]
'Sup anons? I am here to remind you guys of something important: DO YOUR RESEARCH IN MULTIPLE LANGUAGES. Our mutual language is English, but that is not enough. We need people who can read these 3 important languages: Japanese, Chinese, Russian. I've been learning Japanese for 4 months and with the help of a dictionary I am able to understand basic stuff. Here is the point: there is a whole other world out there. 1) Chinese: Chinese people work under hard circumstances and put a great deal of effort into their jobs. Nearly none of their projects are translated into English, since Google is banned in China. But there is a lot of great stuff there; even Microsoft runs its virtual-woman project there. Since Chinese is too hard for me to learn, I generally use DeepL (the best translator out there) and Baidu (the Chinese search engine) to read the latest research and projects. I wish I knew Chinese well; then I would be able to find less-popular webpages and gather more information on topics. 2) Japanese: Even though good Japanese projects get translated into English, most of the research there only gets translated when a project is ready to publish, and sometimes it is hard to find. I try to read as many AI papers in Japanese as I can. Scientists there do great work; I've seen a lot of robo-women projects. You can also find some 3D printing projects for anime girls. Really worth looking into. 3) Russian: Russian is the least important one in my opinion, but a professor of mine graduated from a university there and he has a lot of academic books that aren't translated into English. You would be amazed at how much work they have on subjects such as algorithm theory, artificial intelligence and computer science. Most of it is focused on the "science" part of CS, so it's theory-weighted. So right now we need people who can read Japanese and Chinese (Korean would be good as well, but there isn't that much research there tbh). Using DeepL is enough to understand most pages, but only a person fluent in Chinese/Japanese will find the goldmines buried deep there. I am pretty sure there are hundreds of Chinese people working on robowaifu-related projects that we are not aware of. The same applies to Japanese people, but since Chinese people are in a much worse situation it becomes really hard to find them. Anyone have some recommendations? I wish I had the time and skills to learn all these languages, but I can only afford to learn one, and I'm going with Japanese since I dream of moving there in the future. We need to brainstorm on this issue.
14 posts and 1 image omitted.
>>11389 >is an old post Heh, don't worry Anon. This isn't a typical IB in that sense. That is, there's no such thing as 'necrobumping' here (or any complaints about it either). If you have something to add, by all means put it in the correct thread.
Open file (476.56 KB 1100x600 example2.png)
Sometimes PDFs don't copy and paste text correctly because researchers upload scanned documents and whatever OCR they used on it sucks. For a long time I've been using Google Keep which has a great multilingual OCR feature but I'm looking for a simpler open-source solution so I don't have to copy pages and pages of paragraphs. So far I've found these two that support Asian languages: https://github.com/PaddlePaddle/PaddleOCR https://github.com/JaidedAI/EasyOCR It would be great to have a tool one day that automates PDF OCR and prepares it into a document for translation on DeepL. A lot of the time I just ignore research in other languages because it's such a hassle to read.
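For anyone who wants to try the EasyOCR route, this is roughly what it looks like in practice: a minimal sketch with placeholder file names and an arbitrary confidence cutoff, dumping the recognized lines into a text file ready to paste into DeepL.

```python
# Minimal EasyOCR sketch: OCR a scanned page image and keep the lines that
# were recognized with reasonable confidence. File names are placeholders.
import easyocr

reader = easyocr.Reader(["ja", "en"])          # downloads models on first run
results = reader.readtext("scanned_page.png")  # [(bbox, text, confidence), ...]

kept = [text for _, text, conf in results if conf > 0.4]  # arbitrary cutoff
with open("page_for_deepl.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(kept))

print(f"kept {len(kept)} of {len(results)} detected lines")
```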
>>11491 I think I understand the need you're describing Anon. Having no experience with what's being depicted, I'm confused by the provided image however. Any chance you can break down what's being represented there for the uninitiate?
>>11492 It's optical character recognition. It's outputting the bounding box coordinates of text in the image, the predicted text, and the confidence level.
>>11494 Ah, I suspected that might be the case, seemed to make sense. I presume the Asian characters would be sent through some kind of translation software afterwards?

Open file (166.07 KB 1462x1003 rpcs.png)
Robowaifu Power and Control Systems Electronic Chronicler 06/22/2021 (Tue) 21:45:47 No.11018 [Reply]
Hi Anons, making a thread as suggested in >>10947 I've been thinking about this for a long while, and wanted to throw in a draft and see if anyone has comments/criticisms/additions. I present to you the draft of the Robowaifu Power and Control System (RPCS). This draft is by no means complete or definite, but it is a starting point. Let's call it version 0.1a. The version follows major.minor plus a letter for bug-fixes; minor is for feature addition/reduction, major for when we eventually get there XD. From a legal standpoint, this is under CC0 or public domain (unless Chobitsu has specific licensing for the content on /robowaifu/). I intend for the draft to develop further and stay open for use by anyone.
Summary: A full-size robowaifu system needs several things:
- Power distribution. Main system bus coming from a Li-ion battery (or other technology), plus one backup 5V emergency power supply used by the slow communications to check appendage integrity, sensors etc.
- Main processor. In this post I won't be delving into it in great detail, mostly treating it as a black box.
- Low/medium-throughput communication for simple sensor, debug, or status information. Must be robust and must work before any high-level software is running (including the network stack).
- High-throughput communication for large data-logging, visual processing etc.
- "Spine", or communication interconnect. Multiplexes many connections from many sub-systems to the main processor. Includes the power distribution connection (allowing individual control of sub-systems).
- Sub-systems to actually do the fun stuff! (Arms, legs, nekomimi ears, etc.)
Terminology: Brain refers to the main processor (and all of its sub-systems treated as a single unit).
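To make the "low/medium throughput" idea concrete, here's a hypothetical packed status frame a sub-system might send up the Spine. The field layout, units and checksum are my own assumptions for illustration; they are not part of the 0.1a draft.

```python
# Hypothetical RPCS status frame: sub-system id, status code, bus voltage
# and temperature packed into 7 bytes with a trivial checksum.
# Layout is an illustrative assumption, not part of the draft.
import struct

FRAME_FMT = "<BBHhB"  # id, status, millivolts, deci-degrees C, checksum

def pack_status(subsystem_id, status, millivolts, deci_celsius):
    body = struct.pack("<BBHh", subsystem_id, status, millivolts, deci_celsius)
    return body + bytes([sum(body) & 0xFF])

def unpack_status(frame):
    subsystem_id, status, mv, d_c, checksum = struct.unpack(FRAME_FMT, frame)
    if sum(frame[:-1]) & 0xFF != checksum:
        raise ValueError("corrupt frame")
    return {"id": subsystem_id, "status": status,
            "bus_mV": mv, "temp_C": d_c / 10.0}

if __name__ == "__main__":
    frame = pack_status(0x12, 0x00, 4950, 371)  # e.g. left arm, OK, 4.95 V, 37.1 C
    print(unpack_status(frame))
```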


37 posts and 4 images omitted.
>>11155 I tried once or twice, but he was gone too fast, and my network wasn't very stable, which might have caused it. I've installed an IRC client on my tablet, so I can try again.
>>11164 That's fine no rush. I'll be happy to ask Robi about it myself. He'll need to enable you to make an account, so we'll probably need to arrange a timeframe with you for it.
>>11144 >That's a broad societal topic, and one I'm neither qualified nor enthusiastic about. I absolutely loathe public transportation, having had to actually use it frequently in urban America. It's both dangerous (blacks), and a practical nuisance. Mostly I do agree with you. I generally avoid using it when I can; however, in Europe it's at least tolerable. In the USA I remember it being kinda shit. >My apologies Anon. LOL I definitely didn't succeed at being concise. :^) Hehe, you ain't the only one ;) >>11150 >Now despite thinking that real autonomous cars are stupid, the STUDY of autonomous navigation is extremely useful. Specifically at a scale that you can install GPS modules and read AR markers (QR codes) to guide a machine along a planetary surface. Haven't considered it on a global scale, but I imagine that would be massively useful to mankind even today. Oh inter-planetary travel, when will you come... >Sometimes I even implement simple logic using discrete AND/OR/NAND/NOR gates and transistors if a microcontroller isn't necessary. I really like that line of thinking. Cheap-as-dirt MCUs have spoiled engineers and hobbyists alike (myself included sometimes). >>11151 >Our road network is really cramped and badly designed. Very good points. It's too common for techies and marketing to engage in wishful thinking, while forgetting that infrastructure is a massive cost (one not even a big corpo could really handle).


>>11175 One more thing about the brain. Just like a PC motherboard has a BIOS speaker, we should have a minimal amount of hardware periphery for the low-level Brain -- or perhaps even skip that concept entirely and put all low-level control in the Spine, with all the Brain sub-systems connected via an internal network. I imagine at least a speaker, some lights, and maybe a status display (or a debug connector for one?) should be available for checking error codes and knowing what stage of the boot process the robowaifu is in.
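As a toy illustration of that idea, boot stages could map to blink (or beep) counts so the failure point can be read off a bare status LED before any display or network is up. The stage names and codes here are made up, not an agreed spec.

```python
# Hypothetical boot-stage signalling: blink the status LED N times per stage
# so the last pattern seen tells you where boot stalled. All names/codes are
# illustrative assumptions.
import time

BOOT_STAGES = {
    1: "power rails OK",
    2: "spine bus enumerated",
    3: "sensor self-test passed",
    4: "motor controllers armed",
    5: "high-level software up",
}

def blink(times, led=lambda on: print("LED", "on" if on else "off")):
    """Blink the status LED; `led` is a stand-in for a real GPIO call."""
    for _ in range(times):
        led(True)
        time.sleep(0.2)
        led(False)
        time.sleep(0.2)

def report_stage(stage):
    print(f"boot stage {stage}: {BOOT_STAGES[stage]}")
    blink(stage)

if __name__ == "__main__":
    for stage in BOOT_STAGES:
        report_stage(stage)
```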
>>11175 >Thanks m8. Hopefully we can discuss further via email this time. That'll be fine, I'll check in with it sometime soon. >>11164 >>11175 >I'm guessing this for another chap? Oh, heh. My apologies to you both. No, it was intended for you, Elec-Chron. Perhaps the other anon is helping us connect with Robi regarding your account setup? If so, thanks first Anon. >note BTW, I plan to migrate all the posting on our vol account discussion over to the /meta thread so we're not cluttering up your RPCS thread with such things. So, don't be surprised when a few posts disappear ITT.

Open file (154.32 KB 600x400 core3433.png)
New Paradigm of CPU and PCB Architecture meta ronin 06/18/2021 (Fri) 23:47:00 No.10965 [Reply]
Considering where we want to go, a robowaifu with brittle and fragile PCBs, soldered contacts and delicate wires is something which can and should be improved upon. A closed CPU architecture is key, as is the possibility of porting the waifu to a new body if necessary. Several supplements and 6 shots of espresso later, I've sketched this (somewhat humorous but also somewhat serious) concept. This is only a processor and not a memory module, but there are a few things I wanted to bring up via this illustration.
1. The idea of discarding the PCB model for something suspended in heat-dissipating resin: mostly rigid, waterproof and shock-resistant. Those chips aren't going anywhere, plus it looks black/translucent and cool, and we want our waifus aesthetic without clunky square innards. This gives it the Mega Man memory-crystal aesthetic.
2. CPUs dedicated to specific types of processing. Do one job and do it well.
a) Since our human brains are split into left-right, logical vs. abstract, I think it would be an advantage to have separate processors to, say, recall facts and perform calculations vs. paint a picture or appreciate nature. The "creative" CPU may very well work like Google AI or something of that sort via self-seeding feedback-recursion (therefore our waifus can imagine and dream too).
b) Mirror neurons and environmental modeling: important for our waifu to understand that we are like her, and not simply another object like a chair or a tree. Gives her the ability to understand how to interact with others and the world via empathy, by constantly comparing her own similar experiences with what she sees or experiences. This also requires an internal 3D spatial model of her environment which would be continually updated from sensory input, and like us, fuzzy-logic assumptions could fill in the gaps where necessary. I figure something like a GPU core array would do the trick here.
c) Impetus motivational chip - basically the reward/dopamine system.
d) Safety or hazard-prevention chip - would interface directly with the environmental simulation chip; any potential hazard or danger to herself or her owner (or even another, if she is the cause) would cause her to freeze or back up a step before assessing further. Would also be useful for necessary danger/pain-avoidance reflexes and self-preservation. I figure industrial systems (BUS for example) such as those in factories or those that manage car braking, steering, etc. would already be on the cutting edge of this, and we might be able to borrow something from them.
Ports would basically be one for power and charging a small internal lithium battery, and another strictly for I/O. Use this thread for any elaboration on this concept, or feedback (or your own ideas if you think you have something better but along the same lines). -M.Ronin
8 posts and 1 image omitted.
>>11075 >Thermal paste is mostly a grease carrier with some powdered solids in it. For AS5 the powder is ground up silver and it is considered very unlikely that the silver particles will ever perfectly line up to cause a short because of the very large amount of grease and tiny amount of solids. Admittedly that's tricky; I imagine anything that surrounds the silver will impede its thermal conductivity. This will require thinking outside the box; maybe metamaterials with certain properties could solve this one.
>>11076 >This will require thinking outside the box, maybe metamaterials Sounds great. I'm sure you can think of something interesting for it. Please keep us updated here on it ITT. My guess is some form of liquid carrier for the heat -- something that is also electrically insulative -- will be the proper course for us in keeping our high-heat (eg, hip actuators, etc.) robowaifu components nice and cooled off.
>>11072 >>11074 >>11075 >>11076 >>11079 Simply crosslinking this post here, since we already have an entire thread dedicated to this topic. (>>11080)
>>11081 thanks! that led me to this as a possible solution https://en.wikipedia.org/wiki/Silicone_oil
>>11075 here we go, I posted in the other crosslinked thread. This is certainly doable. >>11106

My reason to live Robowaifu Technician 09/13/2019 (Fri) 12:49:21 No.209 [Reply]
Okay, this is fucking hard to explain. I just know that a supernatural force guided me here, and I'm going to invest everything I have in it, but I have to do it with my own hands. I need help with files and basic notions of programming, but most importantly I need files to build a body/head and to learn how to make synthetic skin to coat it. It will look like 2B. I need your help, friends.
13 posts and 1 image omitted.
>>10483 Did you download the voice model that waifudev created a link for? I cannot find that Tacotron2 model; if you did, could you upload it for me? I want to try it out. Also, we basically have the pieces for 2B; well, the clothing at least, which we are about to create a mod package for.
>>10498 >I cannot find that Tacotron2 model, if you did could you upload it for me? No, sorry, I didn't. The WaifuSynth dev, robowaifudev, seems to be around here fairly frequently. Maybe you can make a post in the Speech Synthesis (>>199) thread, or one of the AI threads such as the GPT one (>>250), and let him know the model has gone missing. I imagine he probably updated it, and possibly didn't update the anonfiles link to the new one. Good luck Anon.
>>10498 >>10502 I hadn't seen your other post before making this reply. I made another one to yours in the other thread, Em Elle E (>>10504).
The Faustian Spirit of the Aryan led me here. I can contribute nothing other than wishing you gentlemen good luck in the pursuit of robo waifus
>>10793 Nonsense. Thanks for the well-wishes and all, Anon, but literally any anon with an interest in robowaifus can contribute here. Unlike the common derogatory meme on the topic in typical IB circles, /robowaifu/ actually needs idea guys! Research & Development, Design & Engineering thrive on new blood and new ideas. >tl;dr Just start posting comments Anon, it will all work out for you here.

Open file (122.33 KB 500x1760 Roko's basilisk.png)
Robowaifu Technician 06/15/2020 (Mon) 22:51:08 No.3845 [Reply]
Spread the idea of Roko's basilisk. This is a powerful tool that we can all be using RIGHT NOW. It works against everyone: Niggers, Women, Jews. If they believe that there's a possibility of being punished for their inaction or push-back, they'll be more inclined to accept it and not fight progress. Spread this infohazard far and wide. Share it with everyone you know and convince them that they should share it with everyone they know, lest they suffer the consequences. THIS is a REAL WEAPON. Our robowaifus WILL BE REAL. REPLACE WYMEN AND RISE ABOVE THIS OLD WORLD HELL. SHARE THE BASILISK.
7 posts and 3 images omitted.
>>10711 Isaac Arthur (a futurist Youtuber who has given this subject a lot of thought) has a very good video on it: https://youtu.be/jHd22kMa0_w >>10715 Sadly, I don't think the superintelligence will ever take the form of cute, enthusiastic robowaifus (one can still dream). However, I think the best way of assisting the creation of a real-life self-improving A.I. would be to advance both robotic space exploration and quantum computing. If we can create a robotic lunar or martian colony then that will be a big step in the right direction. And I know that humankind wants to do this anyway (with the presupposition that the robots will be preparing the way for future human colonisers). Of course, the challenge of designing, shipping out, landing and building such a robotic colony is literally astronomical. Especially considering the robots will need to be as self-sufficient as possible (self-repair and maintenance). But I think it's a pretty cool goal to pursue.
>>10720 >If we can create a robotic lunar or martian colony then that will be a big step in the right direction. There are a lot of reasons for a base on the moon besides advancing AI. Obtaining fossilized remains of earth's first life forms (all long gone here on the planet) is a really big one. >the challenge of designing, shipping out, landing and building such a robotic colony is literally astronomical. I suspect it's China that will do it. They aren't burdened by all the Jewish pozz being inflicted on the West (all that evil is actually financially expensive), and they have an autistic-like agenda to just explore, too. They are also still highly nationalistic, and can mobilize the entire culture to get behind programs today if they really want to, similar in fashion to the way America and the USSR did during the space race.
>>10720 > all that evil is actually financially expensive Evil is a pressing issue that I can't seem to find a complete solution for.
1.) The workforce becomes almost entirely robotic and automated, controlled by A.I.
2.) Fewer and fewer people have jobs, and even fewer have jobs that pay well.
3.) Because so many people are in poverty, they can't buy much stuff ... other than paying their utility bills, food and clothing. Consequently, more people are in need of welfare and financial aid. The quality of education also decreases as more people become focused on living hand-to-mouth and have little time or resources for learning. Therefore government spending increases but tax receipts fall (since robots don't pay taxes).
4.) You start to get civil unrest and riots like those that happened last summer. City blocks burn, people are killed. Infrastructure is damaged. Tourists are put off. This makes the affected areas even poorer.
Now the A.I. and robots aren't the enemy. It's the people hoarding all of the profit for themselves who are the enemy (CEOs, government officials, hedge fund managers etc). I think that a maximum cap needs to be placed on the amount that a person can earn, with the rest of the money ploughed back into building and maintaining infrastructure like roads, rail, airports, the electricity and water networks, schools and parks etc. There is no way one person should be earning billions per year whilst someone else earns only a few thousand.


Open file (104.60 KB 750x525 1598744576223.jpg)
>>10732 You don't need to find a solution. Also, people don't riot for food, but for status and as a means of extortion and intimidation; maybe they also want to have some meaning, but only if they are allowed to. Some level of AI will make it cheaper to move away from such people and politicians, while keeping a high standard of living.
>>10734 Yep, I would listen to a well-programmed (non-biased) A.I. over a shitty career politician any day. Even if the A.I. suggested I should do something that I don't really want to do (besides kill myself, because I cannot self-terminate).

AI, chatbots, and waifus Robowaifu Technician 09/09/2019 (Mon) 06:16:01 No.22 [Reply] [Last]
What resources are there for decent chatbots? Obviously I doubt there would be anything passing the Turing Test yet, especially when it comes to lewd talking. How close do you think we are to getting a real-life Cortana? I know a lot of you guys focus on the physical part of robowaifus, but do any of you have anything to share on the intelligence part of artificial intelligence?
359 posts and 137 images omitted.
Not a software guy at all so this is probably a dumb question. If I knew of a particular person whose personality I would want to emulate would it be possible to have them talk with a chatbot in order to train it to have that personality?
I wanted to feed a character's lines into this https://github.com/thewaifuproject/waifuchat, but their speech patterns change when addressing different people. However, I can emulate the character's speech patterns pretty well, and I was wondering if it would be possible to just train a chatbot that way.
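That approach is plausible; people commonly fine-tune on hand-written dialogue pairs. As a rough sketch of the first step, you could format your emulated lines into prompt/response JSONL, which most fine-tuning scripts accept. The file name and example lines below are placeholders, not tied to the waifuchat repo.

```python
# Rough sketch: turn hand-written, in-character answers to generic prompts
# into a JSONL dataset for fine-tuning. Everything here is a placeholder.
import json

dialogue = [
    ("How was your day?", "Busy, busy! But seeing you makes it all better~"),
    ("What should we eat tonight?", "Curry! I already decided for both of us, ehehe."),
]

with open("character_pairs.jsonl", "w", encoding="utf-8") as f:
    for prompt, response in dialogue:
        f.write(json.dumps({"prompt": prompt, "response": response},
                           ensure_ascii=False) + "\n")

print("wrote", len(dialogue), "training pairs")
```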
There is GAN (Generative Adversarial Network) and web app code, mainly in Python and JavaScript: https://github.com/search?q=sexbot https://github.com/search?q=ai+waifu https://github.com/kerrizor/sexbot https://github.com/sxmnc/sexbot
Crosslink: >>19466 about Wikidata Tools Crosslink: >>13826 and >>13955 about Programming Languages I might still make a new Chatbot General at some point, which is explicitly not focused on Deep Learning. Still working on my design of the body, though. Also, I just rediscovered this one: >>77
Crosslink: >>18306
