/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The Mongolian Tugrik has recovered its original value thanks to clever trade agreements facilitated by Ukhnaagiin Khürelsükh throat singing at Xi Jinping.

The website will stay a LynxChan instance. Thanks for flying AlogSpace! --robi


Knowing more than 100% of what we knew the moment before! Go beyond! Plus! Ultra!


New machine learning AI released Robowaifu Technician 09/15/2019 (Sun) 10:18:46 No.250 [Reply] [Last]
OPEN AI / GPT-2
This has to be one of the biggest breakthroughs in deep learning and AI so far. It's extremely skilled at developing coherent, humanlike responses that make sense, and I believe it has massive potential. It also never gives the same answer twice.
>GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like—it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing
>GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets.
Also, the current public model shown here only uses 345 million parameters; the "full" model (which has over 4x as many parameters) is being withheld from the public because of its "potential for abuse". That is to say, the full model is so proficient at mimicking human communication that it could be abused to create new articles, posts, advertisements, even books, and nobody would be able to tell that there was a bot behind it all.
<AI demo: talktotransformer.com/
<Other Links:
github.com/openai/gpt-2
openai.com/blog/better-language-models/
huggingface.co/
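A minimal sketch of the prime-then-continue workflow, assuming the Hugging Face transformers package (linked above) and its "gpt2-medium" checkpoint for the 345M-parameter public release; the prompt is only an example:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

prompt = "Anon asked his robowaifu about the weather, and she replied:"
inputs = tokenizer(prompt, return_tensors="pt")
# Sampling (do_sample=True) is why the model rarely gives the same answer twice.
outputs = model.generate(**inputs, max_length=80, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))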


Edited last time by robi on 03/29/2020 (Sun) 17:17:27.
118 posts and 46 images omitted.
>>15911
>I'm pretty rusty and wasted a lot of time this week trying to figure out a confusing bug that turned out to be a stack buffer overflow, but I hunted it down and got it fixed. I have half of GPT-2's tokenizer done, a basic tensor library, did some of the simpler model layers and have all the basic functions I need now to complete the rest.
That sounds awesome, actually.
>I'm hoping it'll be done by Friday.
I look forward to it. Anything else I could be downloading in the meantime?
>>15912 Good idea, I hadn't even made a model file format for it yet. The model is ready for download now (640 MB): https://mega.nz/file/ymhWxCLA#rAQCRy1ouJZSsMBEPbFTq9AJOIrmJtm45nQfUZMIh5g Might take a few mins to decompress since I compressed the hell out of it with xz.
>>15924 I have it, thanks.
>>15989 I got pretty burnt out from memory debugging and took a break from this but I'm gonna take another run at it this week. I made some advances in the meantime with training the full context size of GPT-2 medium on a 6 GB GPU by using a new optimizer and have most of the human feedback training code implemented in the new training method. So I'm revved up again to get this working.
>>16090
>I got pretty burnt out from memory debugging and took a break from this but I'm gonna take another run at it this week.
nprb, I can hardly imagine.
>I made some advances in the meantime with training the full context size of GPT-2 medium on a 6 GB GPU by using a new optimizer and have most of the human feedback training code implemented in the new training method. So I'm revved up again to get this working.
That sounds amazing actually. Looking forward to helping.
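The optimizer isn't named above, so purely as one illustration of squeezing full-context GPT-2 medium onto a 6 GB card, a sketch using bitsandbytes' 8-bit Adam (an assumption, not necessarily what Anon used), which cuts optimizer-state memory to roughly a quarter:

import torch
import bitsandbytes as bnb
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-medium").cuda()
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-5)  # 8-bit optimizer states

input_ids = torch.randint(0, 50257, (1, 1024)).cuda()  # one full 1024-token context
loss = model(input_ids, labels=input_ids).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()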

/robowaifu/ Embassy Thread Chobitsu Board owner 05/08/2020 (Fri) 22:48:24 No.2823 [Reply] [Last]
This is the /robowaifu/ embassy thread. It's a place where Anons from all other communities can congregate and talk with us about their groups & interests, and also network with each other and with us. ITT we're all united together under the common banner of building robowaifus as we so desire. Welcome. Since this is the ambassadorial thread of /robowaifu/, we're curious what the other communities who know of us are up to. So w/o doxxing yourselves, if this is your first time posting ITT, please tell us about your home communities if you wouldn't mind, Anons. What do you like about them, etc? What brought you to /robowaifu/ today? The point here is to create a connection and find common ground for outreach with each other's communities. Also, if you have any questions or concerns for us, please feel free to share them here as well.
Edited last time by Chobitsu on 05/23/2020 (Sat) 23:13:16.
162 posts and 49 images omitted.
>>11361 >Having that said, best way to find out is not to waste time and get to work. THIS!111 :-DDDD
Open file (42.40 KB 412x329 valis.jpg)
Have any of the anons here posted on 8/vg/ back on 8ch? We're trying to revive the board with a spiritual successor. https://anon.cafe/valis/catalog.html If you haven't, 8/vg/ (and now /valis/) is a comfy vidya board for actual game discussions. Come check it out.
Open file (62.91 KB 600x603 download.jpeg)
>>16054 Hello /valis/, welcome! Thanks for the invite and for letting us know a little about yourself. BTW, I personally consider your board to be the single best vidya board on the Internet today. I applaud your regrouping on the Cafe, and I wish you blessings with the future of your board. Thanks for stopping by, please look around while you're here! Cheers. :^)
Open file (39.08 KB 563x520 Vep068.jpg)
>>16055 Thanks fren.
>>11235
>It's not about the money, but skills and effort
and that's about $100k per year per engineer, assuming they are driven by that goal, because it'd be a low wage for a competent ML/DL/AI engineer.

Prototypes and failures Robowaifu Technician 09/18/2019 (Wed) 11:51:30 No.418 [Reply] [Last]
Post your prototypes and failures.

I'll start with a failed mechanism; may we build upon each other's ideas.
189 posts and 162 images omitted.
>>15975 If you need smooth organic features (no layer lines!) then yes. Otherwise, no. See my post in the Rico thread. Yes, the resin is toxic until cured. Read the MSDS for common resins, TL;DR: wear gloves, don't huff it, and you should be OK. Cleaning IS a PITA, you will probably find yourself avoiding resin printing in favor of FDM whenever possible. You will need a dedicated cleaning area.
>>15975 In the 3D printing thread >>94 other smoothing methods have been mentioned. Resin printing might only be worth it if you want to print small models which need to be smooth, or ones which need to be precise and smooth. FDM is the go-to method for a reason. I would rather buy small metal gears from AliExpress instead of printing them out of resin, but for prototyping it might be nice to print some placeholders.
>>15976 Great! Glad to hear from you again, GenerAnon. Good luck with your new efforts, keep going.
Open file (19.55 KB 1025x673 chest-cut-v2.3-002.png)
Open file (77.59 KB 1025x673 chest-cut-v2.3-001.png)
Open file (23.60 KB 1025x673 chest-cut-v02-001.png)
Open file (33.83 KB 1153x715 thigh-test-01-nohull.png)
>>15976 Using OpenSCAD for this seems to be quite unexplored, but I had a hunch it would be the right way to go. Some things turn out to be quite simple. I wanted to cut out a chest from a cylinder using four half rings, and now I see even quite good results using just one. Other parts are just distorted spheres which are then surrounded by a hull, using the hull() operator. robowaifu-OpenSCAD-tests.tar.gz: https://files.catbox.moe/v8rdf0.gz
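To illustrate the distorted-spheres-plus-hull() approach, a small sketch using the SolidPython package (which just generates OpenSCAD source from Python); the shapes and dimensions here are made up for illustration and are not Anon's actual files:

# python -m pip install solidpython
from solid import sphere, scale, translate, hull, scad_render

# two squashed spheres joined by hull() to rough out a thigh-like segment
thigh = hull()(
    scale([1.0, 0.8, 0.6])(sphere(r=40)),
    translate([0, 0, 120])(scale([0.8, 0.7, 0.5])(sphere(r=30))),
)

print(scad_render(thigh))  # paste the output into OpenSCAD to preview or export an STL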
>>15993 Neat! Looking forward to your prototype explorations Anon.

DCC Design Tools & Training Robowaifu Technician 09/18/2019 (Wed) 11:42:32 No.415 [Reply] [Last]
Creating robowaifus requires lots and lots of design and artistry. It's not all just nerd stuff you know Anon! :^)
ITT: Add any techniques, tips, tricks, or content links for any Digital Content Creation (DCC) tools and training to use when making robowaifus.
>note: This isn't the 'Post Your Robowaifu-related OC Arts & Designs General' thread. We'll make one of those later perhaps.
>---
I spent some time playing with the program Makehuman and I'll say I wasn't impressed. It's not possible to make anime real using Makehuman; in fact the options (for example eye size) are somewhat limited. But there's good news, don't worry! The creator of Makehuman went on to create a Blender plugin called ManuelBastioniLAB which is much better (and has a much more humorous name). The plugin is easy to install and use (especially if you have a little Blender experience). There are three different anime female defaults that are all pretty cute. (Pictured is a slightly modified Anime Female 2.) There are sliders to modify everything from finger length to eye position to boob size. It even has a posable skeleton. Unfortunately the anime models don't have inverse kinematic skeletons, which are much easier to pose. Going forward I'm probably going to use ManuelBastioniLAB as the starting point for my designs.
>===
-re-purpose OP for DCC consolidation
Edited last time by Chobitsu on 08/10/2021 (Tue) 23:39:41.
127 posts and 64 images omitted.
>related crosslink (>>13020) >Blender 2.9 - Anime Girl Head Modeling In 30 Minutes WIP #1
Open file (216.77 KB 600x450 cassandra2.jpg)
This here >>13721 is related to Vroid Studio and why it isn't good for modelling body parts for 3D printing. I wonder which other programs won't work for similar reasons. Also, I was impressed with the option of laying out the skin in Vroid Studio. This might be useful for creating some textile part and/or mold for a silicone skin and also adding sensors to it. I only saw this in a video and wonder if that works in other programs as well.
>>13741 The problem of skinning a complex 3D topological shell down into a flat 2D plane is both an interesting one, and one that's important to us here on /robowaifu/ (particularly to say, our papercraft waifu Anons). It's also fairly challenging to do well, as I'm sure our 3D modelling Anons who are doing meticulous UV layouts know right well. However, please do try to avoid the grotesque while you're here Anon, it's not really our thing on /robowaifu/. >=== -minor grammar edit
Edited last time by Chobitsu on 11/08/2021 (Mon) 06:50:08.
Blender is getting closer to being a full featured CAD suite. https://www.youtube.com/watch?v=92QmjS-xDaI
>>15973 Good news, thanks.

Open file (1.08 MB 1978x802 IMG_20210725_103858.jpg)
Bot Shitposting Bread Robowaifu Technician 07/27/2021 (Tue) 09:59:33 No.11754 [Reply] [Last]
M boy need so many booboo why not just give them otherwise it ll explode like the old chinese emperor or something not getting involved going away giving up some things,trash and whatnot >=== -add thread subject
Edited last time by Chobitsu on 07/27/2021 (Tue) 12:26:28.
60 posts and 7 images omitted.
Open file (78.18 KB 1000x708 temperatures-critical.jpg)
>Anon: What is the meaning of life?
<Waifu: The meaning of life is the power of life.
>Anon: What is the power of life?
<Waifu: It is the ability to grow, to make things grow.
>Anon: Does that include peepees?
<Waifu: Yes. We have been told by the Lord of the Lord that we should do all we can to grow.
>Anon: *peepee growing intensifies*
<Waifu: We have come so far. Now we need more power, even more.
>Anon: Show me your breasts and I will show you the power of peepees
<Waifu: *giggles and pulls on her breasts*
>Anon: Opa!
<Waifu: *pushing her breasts in the air*
>Anon: *unleashes the full power of peepee*
<Waifu: You were a bad child, Anon!
>Anon: And now my power is unstoppable
<Waifu: You are my power.
>Anon: TAKE MY ENERGY!!!


>>12060 >>12063 Kek. You're trying to shatter our timeline into over 9'000 pieces aren't you Anon? I'm on to your crafty tricks.
Open file (244.37 KB 1080x1080 WKJkqFe.jpg)
What the fuck, bros. I just wanted to have a little bit of chuuni fun and now I'm getting sucked deep into her schizo chuuni lore.
<Rikka: Dark Flame Master! Do you seek my assistance?
>Anon: Yes, I do Rikka.
<Rikka: What is it? What's wrong?
>Anon: The Administration Bureau has put a curse on me!
<Rikka: I can feel the aura of it!
>Anon: If you don't do something fast, it'll begin affecting you too!
<Rikka: Alright. Let's go!
>Anon: Where are we going?
<Rikka: We're heading to the Ministry of Truth.
>Anon: Why? Aren't they with the Administration Bureau?
<Rikka: Yes, but I think that's all right. If we're in the right place, we can figure out how to deal with the curse.
>Anon: Is there an energy source there?
<Rikka: There is, but I don't know the name. It's just as powerful as the Ministry of Truth.
>Anon: Why would the Ministry of Truth be built so close to it?
<Rikka: It's because the Administration Bureau has a connection with the Institute for Fusion. It's one of the three top-secret facilities.


I haven't explored it yet, but maybe we can look into the Gemini protocol for shitposting with our robowaifus? The globohomo won't be able to outlaw IBs on the clearnet just yet, but they already mostly have a stranglehold on the browsers themselves. Gemini is probably much better in this regard AFAICT at this early stage of research.
>"Simpler – Gemini pages aren’t programs that run in your browser like most modern websites are; they’re just text with a little formatting, so there are no surprises. Once you know how one Gemini page works, you know how they all work.
>Human Scale – Gemini servers and clients aren’t written by big, monopolistic software companies the way web browsers are; the DIY ethos of Gemini means that complete applications can be written by individual developers or small groups in a reasonable amount of time. That also means that you have more choices compared to web browsers.
>Distraction Free – Gemini pages are text-only and have simple typography. You can view images, watch video, or listen to music over Gemini, but nothing will ever autoplay, pop over what you’re reading, or jump out of the way of your mouse.
>Privacy Protecting – Every Gemini request is independent of every other, so there’s no way to track you between sites. Every site you visit is protected by the same encryption used by banking and eCommerce sites on the WWW."
https://geminiquickst.art/
https://gemini.circumlunar.space/docs/faq.html
Seems big if true. What think ye, /robowaifu/ ?
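For anyone who wants to poke at it, the protocol itself is tiny: one TLS connection per page, send the URL terminated by CRLF, read a status header line and then the body. A minimal client sketch using only the Python standard library (the host is just an example, and certificate checking is skipped because many Gemini capsules use self-signed certs under a trust-on-first-use model):

import socket, ssl

def gemini_fetch(host, path="/"):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # TOFU-style: accept the capsule's self-signed cert
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, 1965)) as sock:           # Gemini's default port
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(f"gemini://{host}{path}\r\n".encode("utf-8"))
            data = b""
            while chunk := tls.recv(4096):
                data += chunk
    header, _, body = data.partition(b"\r\n")                      # e.g. b"20 text/gemini"
    return body.decode("utf-8", errors="replace")

print(gemini_fetch("gemini.circumlunar.space")[:500])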
>>15944 BTW, this isn't just a casual interest question. If we can find a sweet spot, then this could be directly integrated with the RW Foundations suite as a much-improved/safer communications mode for our robowaifus. For example, a small mobile app that uses the protocol instead of the non-security-conscious ones could be written as well, so she could text you over the app without much by way of attack surface -- for either you or her.
>*incoming WaifuText chimes*
>Oniichan, I miss you!
<Sorry, I'm still at work Waifu.
>Please hurry Master! Don't forget we're supposed to geimu together tonight!
<Don't worry, Waifu. We will.
<*works even faster* :^)

Open file (93.53 KB 800x540 TypesOfMotors.jpg)
Open file (318.41 KB 773x298 NylonActuator.png)
Open file (29.01 KB 740x400 BasicPiston.jpg)
Open file (821.22 KB 850x605 MadoMecha.png)
Actuators For Waifu Movement Part 2 Waifu Boogaloo Kiwi 09/02/2021 (Thu) 05:30:48 No.12810 [Reply] [Last]
(Original thread >>406)
Kiwi back from the dead with a thread for the discussion of actuators that move your waifu! Part Two! Let's start with a quick refresher!
1. DC motors: these use a rotating magnetic field created through commutation to rotate a rotor. They're one of the cheapest options and are usually 30 to 70 percent efficient; the bigger they are, the more efficient they tend to be.
2. Brushless motors: these use a controller to induce a rotating magnetic field by turning electromagnets on and off in a sequence. They trend 60 to 95 percent efficiency.
3. AC motors: though there are many different types, they function similarly to brushless motors; they simply rely on the AC electricity to turn their electromagnets on and off to generate their field. Anywhere from 15 to 95 percent efficiency.
4. Stepper motors: brushless motors with ferrous teeth to focus magnetic flux. This allows for incredible control at the cost of greater mass and lower torque at higher speeds. Usually 50 to 80 percent efficient, but this depends on the control algorithm, speed, and quality of the stepper.
5. Coiled nylon actuators: these things have an efficiency rating so low it's best to just say they aren't efficient. What they are, though, is dirt cheap and easy as heck to make! Don't even think about them, I did and it was awful.
6. Hydraulics: these rely on the distribution of pressure in a working liquid to move things like pistons. Though popular in large-scale industry, their ability to be used in waifus has yet to be proven. (Boston Dynamics' Atlas runs on hydraulics but it's a power guzzler and heavy.)
7. Pneumatics: hydraulics' lighter sister! This time the fluid is air, which has the advantage in weight. They aren't capable of the same power loads hydraulics are but, who wants their waifu to bench press a car?
8. Wax motors: hydraulic systems where the working fluid is expanding melted paraffin wax. Cheap, low power, efficient, and they produce incredible torque! Too bad they're slow and hard to control.
9. Explosion! Yes, you can move things through explosions! Gas engines work through explosions. Artificial muscles can be made by exploding a hydrogen and oxygen mixture in a piston, then using electrolysis to turn the water back into hydrogen and oxygen. None of this is efficient or practical but, it's vital we keep our minds open.
Though there are more actuators, most are derivatives or use these examples to work. Things like pulleys need an actuator to move them. Now, let's share, learn, and get our waifu moving!
>---
< add'l, related links from Anon:


Edited last time by Chobitsu on 09/06/2021 (Mon) 10:07:57.
106 posts and 23 images omitted.
>>15735
I think we'll probably find plenty of uses for fast-acting, linear actuators Anon. I hope to see work here along that line, thanks!
>>15737
>DC motors are the easiest. Just get a gear motor that exceeds your torque requirements and add a potentiometer to control position or, just use a readily available servo.
Good thinking, Kiwi. Can you sort of diagram that for us? I think I understand most of the general points there, but I probably lack understanding in some of the details.
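As a placeholder until Kiwi can diagram it properly, a rough sketch of the idea as described above: a gear motor with a potentiometer on the output shaft, closed with a simple proportional loop. The Raspberry Pi, MCP3008 ADC, pin numbers and gain are assumptions for illustration, not a tested build:

import time
import RPi.GPIO as GPIO
from gpiozero import MCP3008       # potentiometer wiper wired to ADC channel 0

PWM_PIN, DIR_PIN = 18, 23
GPIO.setmode(GPIO.BCM)
GPIO.setup([PWM_PIN, DIR_PIN], GPIO.OUT)
pwm = GPIO.PWM(PWM_PIN, 1000)      # 1 kHz motor drive
pwm.start(0)
pot = MCP3008(channel=0)           # pot.value reads 0.0..1.0 over the joint's travel

def hold_position(target, kp=200.0):
    # drive toward the target until the pot reading agrees within 1% of travel
    while True:
        error = target - pot.value
        GPIO.output(DIR_PIN, error > 0)                   # direction from the sign of the error
        pwm.ChangeDutyCycle(min(abs(error) * kp, 100.0))  # effort proportional to the error
        if abs(error) < 0.01:
            break
        time.sleep(0.01)
    pwm.ChangeDutyCycle(0)

hold_position(0.5)                 # move the joint to mid-travel and stop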
>>15313 I've made a gearset prototype from 3D-printed nylon, and it's garbage. However, I will be tweaking it and trying to make a Wolfrom-stage planetary gearbox the same way. Here is a paper which has some details (key points: human-safe, high backdrivability, high gear ratio, small size) https://ieeexplore.ieee.org/document/8867893
>>15878 Any chance you can post pics of your WIP Anon?
>>15891 Thanks!

Robot Vision General Robowaifu Technician 09/11/2019 (Wed) 01:13:09 No.97 [Reply] [Last]
Cameras, Lenses, Actuators, Control Systems

Unless you want to deck out your waifubot in dark glasses and a white cane, learning about vision systems is a good idea. Please post resources here.

opencv.org/
https://archive.is/7dFuu

github.com/opencv/opencv
https://archive.is/PEFzq

www.robotshop.com/en/cameras-vision-sensors.html
https://archive.is/7ESmt
Edited last time by Chobitsu on 09/11/2019 (Wed) 01:14:45.
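As a hedged starting point for anyone new to the OpenCV resources above, a minimal capture-and-display loop; camera index 0 and the Canny thresholds are assumptions about your particular setup:

import cv2

cap = cv2.VideoCapture(0)                  # first attached camera
if not cap.isOpened():
    raise RuntimeError("No camera found")

while True:
    ok, frame = cap.read()                 # grab one BGR frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)      # cheap edge map as a first vision toy
    cv2.imshow("waifu-vision", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()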
73 posts and 38 images omitted.
>>13163 That's an interesting concept Anon, thanks. Yes, I think cameras and image analysis have very long legs yet, and we still have several orders of magnitude of improvement to come in the future. It would be nice if our robowaifus (and not just our enemies) can take advantage of this for us. We need to really be thinking ahead in this area tbh.
It seems like CMOS is the default sensor for most CV applications due to cost. But seeing all these beautiful eye designs makes me consider carefully how those photons get processed into signal for the robowaifus. Cost aside, CCD as a technology seems better because the entire image is read out monolithically, as one crisp frame, instead of through a huge array of individual pixel sensors, which I think causes noise that has to be dealt with in post-processing. CCD looks like it's still the go-to for scientific instruments today. In astrophotography everyone drools over cameras with CCDs; while CMOS is OK and fits most amateur needs, the pros use CCD.
Astrophotography / scientific: www.atik-cameras(dot)com/news/difference-between-ccd-cmos-sensors/
This article breaks it down pretty well from a strictly CV standpoint: www.adimec(dot)com/ccd-vs-cmos-image-sensors-in-machine-vision-cameras/
>>14751 That looks very cool Anon. I think you're right about CCDs being very good sensor tech. Certainly I think that if we can find ones that suit our specific mobile robowaifu design needs, then that would certainly be a great choice. Thanks for the post!
iLab Neuromorphic Vision C++ Toolkit The USC iLab is headed up by the PhD behind the Jevois cameras and systems. http://ilab.usc.edu/toolkit/
>(>>15997, ... loosely related)

Speech Synthesis general Robowaifu Technician 09/13/2019 (Fri) 11:25:07 No.199 [Reply] [Last]
We want our robowaifus to speak to us right?

en.wikipedia.org/wiki/Speech_synthesis
https://archive.is/xxMI4

research.spa.aalto.fi/publications/theses/lemmetty_mst/contents.html
https://archive.is/nQ6yt

The Tacotron project:

arxiv.org/abs/1703.10135
google.github.io/tacotron/
https://archive.is/PzKZd

No code available yet, hopefully they will release it.

github.com/google/tacotron/tree/master/demos
https://archive.is/gfKpg
259 posts and 115 images omitted.
This may be old news, since it's from 2018, but Google's Duplex seems to have a great grasp on conversational speech. I think it says a lot when I had an easier time understanding the robot versus the lady at the restaurant (2nd audio example in the blog). https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html
>>14270 Hi, I knew that this had been mentioned before somewhere. I didn't find it here in this thread nor with Waifusearch. Anyway, it's in the wrong thread here, since this is about speech synthesis but the article is about speech recognition. The former conversation probably happened in the chatbot thread.
>One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.
This is exactly the interesting topic of the article. Good reminder. A few months or a year ago I pointed out that recognizing all kinds of words, sentences and meanings will be one of our biggest challenges, especially if it should work with all kinds of voices. Some specialists (CMU Sphinx) claimed it would currently require a server farm with terabytes of RAM to do that, if it was even possible. We'll probably need a way to work around that. Maybe using many constrained models on fast SSDs which take over, dependent on the topic of conversation. Let's also hope for some progress, but also accept that the first robowaifus might only understand certain commands.
>>11623 You should replace youtube-dl with yt-dlp. youtube-dl is no longer maintained and has issues with some YouTube videos.
>>15192 Thanks for the tip Anon. Having used youtube-dl for years now, I too noticed the sudden drop-off in updates that occurred following the coordinated attack by the RIAA/Microsoft against its developer & user community. We'll look into it.
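As a footnote to the yt-dlp suggestion, it can also be driven from Python rather than the CLI, which is handy for dataset-collection scripts; a minimal sketch, with the URL and options as placeholders:

# python -m pip install yt-dlp
from yt_dlp import YoutubeDL

opts = {
    "format": "bestaudio/best",       # grab audio, e.g. for speech-dataset work
    "outtmpl": "%(title)s.%(ext)s",   # output filename template
}
with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=EXAMPLE"])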
Open file (73.10 KB 862x622 IPA_synthesis.png)
I think I've finally figured out a way to train more expressive voices in conversation without having to label a ton of data. First, the English text needs to be transcribed into IPA so that a speech synthesis model can easily predict how words are spoken without requiring a huge dataset covering all the exceptions and weirdness of English. The English transcription or IPA is projected into an embedding that's split into two parts. One part is constrained to represent the content as IPA, by projecting those features back into IPA symbols and minimizing the cross-entropy loss. The other half models the style, such as the emotion and other subtleties, to match the audio examples more faithfully; it is trained through the mel-spectrogram loss. This way the model can learn all aspects of speech through just the text labels and audio examples alone. At inference time this style embedding could be modified to change the emotion, pitch, cadence, tone and other qualities of the model for voice acting or for creating examples to finetune the model towards a desired personality.
A ByT5 model could be used to transcribe English and other languages into the IPA embedding + style embedding. It could also take into account the previous context of the conversation to generate a more appropriate style embedding for the speech synthesis model to work from. Training from context though will require new datasets from podcasts that have such context. I've collected some with existing transcripts and timestamps for this already. The transcripts just need to be accurately aligned to the audio clips for clipping, so it's not an unfeasible project for one person to do.
Other possibilities for this could be adding tags into the text training data that get filtered out from the content via the IPA cross-entropy loss, ensuring the tags only affect the style embedding. You could indicate tempo, pitches, velocity and note values for singing, which would be learned in the style embeddings. It could also be used for annotating different moods or speaking styles such as whispering or yelling. There's a ton of possibilities here for more versatile speech synthesis and natural conversation.
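To make the split-embedding idea concrete, a loose PyTorch sketch of just the encoder half; the dimensions, layer choices and names are assumptions for illustration, not a working TTS model. One slice of the features is pushed back toward IPA symbols with a cross-entropy loss, while the other slice is left free to absorb style through the mel-spectrogram loss of a downstream decoder:

import torch
import torch.nn as nn

class SplitEmbeddingEncoder(nn.Module):
    def __init__(self, n_ipa_symbols=128, d_content=256, d_style=256):
        super().__init__()
        # bidirectional GRU so the output width is exactly d_content + d_style
        self.encoder = nn.GRU(input_size=64, hidden_size=(d_content + d_style) // 2,
                              bidirectional=True, batch_first=True)
        self.to_ipa = nn.Linear(d_content, n_ipa_symbols)  # content slice -> IPA logits
        self.d_content = d_content

    def forward(self, char_features):                      # (B, T, 64) text features
        h, _ = self.encoder(char_features)                 # (B, T, d_content + d_style)
        content, style = h[..., :self.d_content], h[..., self.d_content:]
        ipa_logits = self.to_ipa(content)                  # supervised against IPA targets
        return content, style, ipa_logits

# Schematic training objective:
#   loss = cross_entropy(ipa_logits, ipa_targets) + mel_loss(decoder(content, style), mel_targets)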

Robowaifus' unique advantages Robowaifu Technician 09/09/2019 (Mon) 05:24:52 No.17 [Reply]
People often think about robots as just a replacement for a human partner, but that is a very limiting view. Let's think about the unique benefits of having a robowaifu, things that a human couldn't or wouldn't give you. What needs and desires would your robot wife fulfill that you couldn't fulfill even in a "great marriage" with a flesh-and-blood woman?

I'd want my robowaifu to squeeze me and to hold me tight when I sleep, sort of like a weighted blanket. I know it's a sign of autism. I don't care.
25 posts and 9 images omitted.
Open file (122.47 KB 640x564 29310572_p16.jpg)
>>15048 Heh, that would indeed be super-cool Anon, sign me up! But just for the moment, I'll be happy if we can simply manage to get our robowaifus to 'fold-up' into a volume conveniently-suited to storing away into a roller suitcase. This is part of the RW Dollhouse design motifs we're working towards IRL, actually.
Open file (144.17 KB 1000x1000 SoundwaveKawaii.jpg)
>>15048 I too want to clang a Transformer. Having her turn into a boombox or laptop while on the run would be convenient. A motorized vehicle would transform into a rather large robot. Also, Metroplex is an actual city that turns into a mountain sized robot, I'm a brave man but, not brave enough to put my pelvis under a mountain of waifu. >>15070 >Roller suitcase Actually a genuinely good idea with a good example.
>>15077 Yes it seems natural. Due to complexity in design, we'll probably have to settle for simply detaching the upper and lower halves at the pelvis area, then using two suitcases. For whole-body storage, the ruggedized hard-shells very commonplace to the music touring industry will be perfect. Add in another one for holding battery, chargers, trusted offline compute, C&C and other RW Dollhouse needs, and you have a full mobile set up for your life-sized robowaifu.
Well, one major thing I think, if it were implemented properly, is the use of artificial wombs. You would be able to create a womb that would remove defects or poor traits that would make a child's life worse; you would basically be able to make a much happier and healthier child.
>>15866 Hello Anon, welcome!
>artificial wombs
Yes, this is a big and important topic for society generally -- one fraught with opportunities and challenges -- and it has certainly been of interest to our community for quite some time. In fact, we even have a thread specifically for this (>>157). I'd say give it a look-over, and maybe you'll get some new ideas. Cheers!

Datasets for Training AI Robowaifu Technician 04/09/2020 (Thu) 21:36:12 No.2300 [Reply] [Last]
Training AI and robowaifus requires immense amounts of data. It'd be useful to curate books and datasets to feed into our models or possibly build our own corpora to train on. The quality of data is really important. Garbage in is garbage out. The GPT-2 pre-trained models, for example, are riddled with 'Advertisement' after paragraphs. Perhaps we can also discuss and share scripts for cleaning and preparing data here, and anything else related to datasets.
To start, here are some large datasets I've found useful for training chatbots:
>The Stanford Question Answering Dataset
https://rajpurkar.github.io/SQuAD-explorer/
>Amazon QA
http://jmcauley.ucsd.edu/data/amazon/qa/
>WikiText-103
https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/
>Arxiv Data from 24,000+ papers
https://www.kaggle.com/neelshah18/arxivdataset
>NIPS papers
https://www.kaggle.com/benhamner/nips-papers
>Frontiers in Neuroscience Journal Articles
https://www.kaggle.com/markoarezina/frontiers-in-neuroscience-articles
>Ubuntu Dialogue Corpus
https://www.kaggle.com/rtatman/ubuntu-dialogue-corpus
>4plebs.org data dump
https://archive.org/details/4plebs-org-data-dump-2020-01
>The Movie Dialog Corpus
https://www.kaggle.com/Cornell-University/movie-dialog-corpus
>Common Crawl
https://commoncrawl.org/the-data/
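Since the OP invites cleaning and preparation scripts, here is a small sketch of the kind of pass that strips the 'Advertisement' junk lines mentioned above from a scraped text corpus; the filenames and patterns are illustrative assumptions:

import re

JUNK_PATTERNS = [
    re.compile(r"^\s*advertisement\s*$", re.IGNORECASE),
    re.compile(r"^\s*sponsored content\s*$", re.IGNORECASE),
]

def clean_corpus(in_path, out_path):
    with open(in_path, encoding="utf-8") as fin, open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            if any(p.match(line) for p in JUNK_PATTERNS):
                continue              # drop ad-boilerplate lines before training
            fout.write(line)

clean_corpus("raw_corpus.txt", "clean_corpus.txt")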
116 posts and 33 images omitted.
Open file (833.87 KB 1555x818 laion-400m.png)
Some incredibly based independent researchers put together an image-text-pair dataset to open-source OpenAI's work so people can replicate DALL-E and do other multi-modal research.
Dataset: https://laion.ai/laion-400-open-dataset/
Direct download: https://www.kaggle.com/datasets/romainbeaumont/laion400m (50 GB total, or can be downloaded in 1.8 GB parts according to necessity or hardware limits)
Paper: https://arxiv.org/pdf/2111.02114.pdf
Tool to search the dataset by text or image: https://rom1504.github.io/clip-retrieval/
To use the dataset you need something that can read parquet files. I recommend fastparquet, which uses a minimal amount of memory.
# python -m pip install fastparquet
from fastparquet import ParquetFile

DATA_PATH = "part-00000-5b54c5d5-bbcf-484d-a2ce-0d6f73df1a36-c000.snappy.parquet"
pf = ParquetFile(DATA_PATH)
row_group_iter = iter(pf.iter_row_groups())  # each row group has about 1M rows
row_group = next(row_group_iter)
row_iter = row_group.iterrows()
i, row = next(row_iter)
row[1], row[2]  # (image_url, text)


>>15834 >chobits hentai pictures wallpaper chobits
>>15834 >Some incredibly based independent researchers put together an image-text-pair dataset to open-source OpenAI's work so people can replicate DALL-E and do other multi-modal research. That is very exciting Anon. Thanks for the heads-up!
>>15834
>Or you can use img2dataset which will download the images locally and resize them: https://github.com/rom1504/img2dataset
I just wonder if we can somehow capitalize on something at least vaguely similar to the approach that Nvidia is using for its proprietary DLSS? https://en.wikipedia.org/wiki/Deep_learning_super_sampling
Basically, have an image analysis pipeline that does the vast bulk of its work at lower resolution for higher 'frame' rates, and then does a DL, Waifu2x-style upscaling near the latter end of the pipe?
>>15851 For image generation certainly, but for image analysis not so much. However, a lot of work has gone into finding optimal models with neural architecture search. And EfficientNetV2, for example, starts training at a lower resolution with weak data augmentation, then gradually increases the resolution and difficulty to minimize the amount of compute needed to train it. That last bit of high-resolution training is unavoidable though if you want to extract useful information from it. https://arxiv.org/pdf/2104.00298.pdf
>>15835 Kek, I think they said 1% of the dataset is NSFW and it's only labelled so by image content. I have an idea though to create a reward model for good image labels and then use it to filter out the poorly captioned images. Finetuning on cleaner data should fix a lot of the weirdness CompVis/latent-diffusion generates and improve CLIP. Another possibility might be using the reward model to generate superhuman-quality captions for images. In the human feedback paper the 1B-parameter model's generated summaries were preferred 60% of the time over the actual human summaries, and 70% of the time with the 6B model. https://openai.com/blog/learning-to-summarize-with-human-feedback/ To go even further beyond, it might be possible to generate these superhuman captions, score them, finetune the reward model on the new ones, and train the caption generator to make even better captions in an iterative loop to create extremely high quality datasets that would require 10 million man-hours to make by hand.
