/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The canary has FINALLY been updated. -robi

Server software upgrades done, should hopefully keep the feds away. -robi

LynxChan 2.8 update this weekend. I will update all the extensions in the relevant repos as well.

The mail server for Alogs was down for the past few months. If you want to reach out, you can now use admin at this domain.



Open file (259.83 KB 1024x576 2-9d2706640db78d5f.png)
Single board computers & microcontrollers Robowaifu Technician 09/09/2019 (Mon) 05:06:55 No.16 [Reply] [Last]
Robotic control and data systems can be run by very small and inexpensive computers today. Please post info on SBCs & micro-controllers. en.wikipedia.org/wiki/Single-board_computer https://archive.is/0gKHz beagleboard.org/black https://archive.is/VNnAr >=== -combine 'microcontrollers' into single word
Edited last time by Chobitsu on 06/25/2021 (Fri) 15:57:27.
65 posts and 35 images omitted.
Simple-to-install RaspberryPi 3 & 4 images, with Ubuntu 20.04 LTS + ROS Noetic included & preconfigured. Probably the single easiest/fastest way to get started with ROS. https://learn.ubiquityrobotics.com/noetic_pi_image_downloads
(>>16420 >low-cost AI -related)
USB C powered SBC with integrated Arduino Leonardo. Thought it might be of interest. https://www.dfrobot.com/product-1728.html
>>16467 Very neat Anon, great idea they had to do a combo like that. Definitely a little pricey for what we're going for here in general, but overall probably a decent technical approach. A couple of more-modest SBCs (RPi, etc.), a handful of MCUs, and I'm guessing that would cover most of the onboard-compute needs of an entry-tier basic robowaifu model.

Open file (213.86 KB 406x532 13213d24132.PNG)
Open file (1.19 MB 1603x1640 non.png)
Robowaifu Technician 10/29/2020 (Thu) 21:56:16 No.6187 [Reply]
https://www.youtube.com/watch?v=SWI5KJvqIfg I have been working on creating waifus using GANs etc... I've come across this project and I am totally amazed. Does anyone have any idea how we can achieve this quality of animation with GAN-created characters? I think accomplishing this kind of work would have a huge impact on our progression. Calling all the people who posted in the chatbot thread.
1 post omitted.
Open file (213.22 KB 400x400 sample4.png)
Open file (199.16 KB 400x400 sample1.png)
Open file (194.92 KB 400x400 sample2.png)
Open file (199.43 KB 400x400 sample3.png)
>>6188 Looking at some old tweets from them I think it is safe to say that it doesn't look much different from StyleGAN on portraits. The shoulders are bad, & most of the work is done by their data cleaning to simplify the problem. Interps & style mixing are nothing special either. Gwern's work with some whack data was able to create similar kinds of characters. Also waifulabs - which is all run by StyleGAN - can create some really high-quality characters from different positions. And notice that they are a game development studio which does not work on AI waifu creation. Looks like hype-bait to me to be honest. They probably cherrypicked some of the results and maybe even manually touched them up to create these animations. And considering their budget and data that is well possible. I am not sure if they still use StyleGAN though. They do not drop even a clue. But honestly, given the current state of it and the time they have spent on it, I think they use a different approach.
My chief concern, first and foremost: is this open-source? If not, then it's relatively useless to us here on /robowaifu/, other than tangentially as inspiration. Hand-drawn, meticulously-crafted animu is far better in that role tbh.
>>6187 It appears the characters are generated with a GAN then another model separates the character pieces into textures for a Live2D model. They're not animated with AI, but there are techniques to do such a thing: https://www.youtube.com/watch?v=p1b5aiTrGzY
Video on the state of anime GANs, anime created by AI, including animation for vtuber/avatar-style animations: https://youtu.be/DX1lUelmyUo One of the guys mentioned in the video, creating a 3D model from a drawing, around 10:45 in the video above: https://github.com/t-takasaka - didn't really find which repo it is on his GitHub yet. He seems to have some pose-estimation-to-avatar code in his repository, though. Other examples in the video might be more interesting for guys trying to build a virtual waifu. "Talking Head Anime 2", based on one picture: https://youtu.be/m13MLXNwdfY
>>16245 This would be tremendously helpful to us if we can find a straightforward way to accomplish this kind of thing in our robowaifu's onboard systems Anon ('character' recognition, situational awareness, hazard avoidance, etc.) Thanks! :^)

Open file (185.64 KB 1317x493 NS-VQA on CLEVR.png)
Open file (102.53 KB 1065x470 NS-VQA accuracy.png)
Open file (86.77 KB 498x401 NS-VQA efficiency.png)
Neurosymbolic AI Robowaifu Technician 05/11/2022 (Wed) 07:20:50 No.16217 [Reply]
I stumbled upon a couple of videos critiquing "Deep Learning" as inefficient, fragile, opaque, and narrow [1]. The claim is that Deep Learning requires too much data, yet performs poorly when extrapolating beyond its training set; that how it arrives at its conclusions is opaque, so it's not immediately obvious why it breaks in certain cases; and that all that learned information cannot be transferred between domains easily. They then put forth "Neurosymbolic AI" as the solution to DL's ills and the next step of AI, along with NS-VQA as an impressive example at the end [2]. What does /robowaifu/ think about Neurosymbolic AI (NeSy)? NeSy is any approach that combines neural networks with symbolic AI techniques to take advantage of both their strengths. One example is Neuro-Symbolic Dynamic Reasoning (NS-DR) applied to the CLEVRER dataset [3], which cascades information from neural networks into a symbolic executor. Another example is for symbolic mathematics [4], which "significantly outperforms Wolfram Mathematica" in speed and accuracy. The promise or goal is that NeSy will bring about several benefits: 1. Out-of-distribution generalization 2. Interpretability 3. Reduced size of training data 4. Transferability 5. Reasoning I brought it up because points 3 and 5, and to a lesser degree 4, are very relevant for the purpose of making a robot waifu's AI. Do you believe these promises are real? Or do you think it's an over-hyped meme some academics made up to distract us from Deep Learning? I'm split between believing these promises are real and this being academics trying to make "Neurosymbolic AI" a new buzzword. [5] tries to put forth a taxonomy of NeSy AIs. It labels [4] as an example of NeSy since it parses math expressions into symbolic trees, but [4] refers to itself as Deep Learning, not neurosymbolic or even symbolic. Ditto with AlphaGo and self-driving car AI.
And the NS-DR example was beaten by DeepMind's end-to-end neural network Aloe [6], and overwhelmingly so when answering CLEVRER's counterfactuals. A study reviewed how well NeSy implementations met the goals claimed in their papers, but its answer was inconclusive [7]. It's also annoying looking for articles on this topic because there are about five ways to write the term (Neurosymbolic, Neuro Symbolic, Neuro-Symbolic, Neural Symbolic, Neural-Symbolic). >References [1] MIT 6.S191 (2020): Neurosymbolic AI. <https://www.youtube.com/watch?v=4PuuziOgSU4> [2] Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. <http://nsvqa.csail.mit.edu/>
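For anons unfamiliar with how a pipeline like NS-VQA hangs together, here is a toy sketch of the pattern in Python: a stand-in "neural" perception stage emits a symbolic scene table, and a hand-written symbolic executor runs the question as a small program over it. Everything here (function names, the scene format, the program ops) is made up for illustration, not taken from the paper.

```python
# Toy neurosymbolic pattern: perception -> symbols -> symbolic executor.

def perceive(image):
    """Stand-in for a neural scene parser: image -> list of object dicts.
    A real system would run a detector here; we fake its output."""
    return [
        {"shape": "cube", "color": "red", "size": "large"},
        {"shape": "sphere", "color": "blue", "size": "small"},
        {"shape": "cube", "color": "blue", "size": "small"},
    ]

def execute(program, scene):
    """Symbolic executor: each step filters or reduces the object set."""
    objs = scene
    for op, arg in program:
        if op == "filter":           # keep objects matching attribute == value
            key, val = arg
            objs = [o for o in objs if o[key] == val]
        elif op == "count":          # terminal op: how many objects remain
            return len(objs)
    return objs

scene = perceive(None)
# "How many blue cubes are there?"
program = [("filter", ("color", "blue")), ("filter", ("shape", "cube")), ("count", None)]
print(execute(program, scene))  # 1
```

The appeal is obvious: the executor is fully interpretable and needs no training data at all. The catch, as the reply below argues, is that those hand-written ops only generalize within the world they were written for.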


Open file (422.58 KB 1344x496 MDETR.png)
Open file (68.28 KB 712x440 the matrix has you.jpg)
>>16217 I think such critique is outdated. The impressive results of NS-VQA have been beaten by full deep learning approaches like MDETR.[1] It would be a bit ironic to call deep learning fragile and narrow and then proceed to write specific functions that only handle a certain type of data - which the training set just happens to be a small subset of - and call it generalization. Sure, it can handle 'out-of-distribution' examples with respect to the training set, but give it a truly out-of-distribution dataset with respect to the functions and these handwritten methods will fail completely. A lot of deep learning approaches these days can learn entire new classes of data from as few as 10-100 examples. ADAPET[2] learns difficult language understanding tasks from only 32 examples. RCE[3] can learn from a single success-state example of a finished task. DINO[4] can learn to identify objects from no labelled examples at all. CLIP[5] and CoCa[6] are examples of deep learning generalizing to datasets they were never trained on, including adversarial datasets, and outperforming specialized models, and this is just stuff off the top of my head. Someone ought to give DALL-E 2 the prompt "a school bus that is an ostrich" and put that meme to rest. That said, neurosymbolic AI has its place, and I've been using it lately to solve problems that aren't easily solvable with deep learning alone. There are times when using a discrete algorithm saves development time or outperforms existing deep learning approaches. I don't really think of what I'm doing as neurosymbolic AI either. Stepping away from matrix multiplications for a bit doesn't suddenly solve all your problems and become something entirely different from deep learning. You have to be really careful, actually, because often a simpler deep learning approach will outperform a more clever-seeming neurosymbolic one, which is clearly evident in the progression of AlphaGo to AlphaZero to MuZero.
From my experience it hasn't really delivered much on the promises you listed, except maybe points 2 and 5. I wouldn't think of it as something good or bad though. It's just another tool, and it's what you do with that tool that counts. There was a good paper on how to do supervised training on classical algorithms. Basically you can teach a neural network to do a lot of what symbolic AI can do, even complicated algorithms like 3D rendering, finding the shortest path, or sorting. I think it shows we've barely scratched the surface of what neural networks are capable of doing. https://www.youtube.com/watch?v=01ENzpkjOCE https://arxiv.org/pdf/2110.05651.pdf >Links 1. https://arxiv.org/abs/2104.12763 2. https://arxiv.org/abs/2103.11955 3. https://arxiv.org/abs/2103.12656 4. https://arxiv.org/abs/2104.14294 5. https://arxiv.org/abs/2103.00020
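The supervised-training-on-classical-algorithms setup mentioned above boils down to using the classical algorithm as a free labelling oracle. A minimal sketch for the shortest-path case: training pairs are (random graph, ground-truth distances computed by Dijkstra). The network and training loop are omitted here; all names are illustrative, not from the paper.

```python
# Generate (graph, shortest-path distances) supervision pairs from Dijkstra.
import heapq
import random

def dijkstra(adj, src):
    """Classical shortest-path distances from src over a weighted adjacency dict
    mapping node -> list of (neighbor, weight)."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def make_example(n=5, seed=0):
    """One supervised training pair: (random weighted graph, oracle distances)."""
    rng = random.Random(seed)
    adj = {u: [(v, rng.randint(1, 9)) for v in range(n)
               if v != u and rng.random() < 0.5]
           for u in range(n)}
    return adj, dijkstra(adj, 0)

graph, target = make_example()
print(target[0])  # 0 - distance from the source to itself is always zero
```

Since the oracle is exact and arbitrarily cheap, you can generate as much clean training data as the network can eat, which is exactly what makes these algorithm-learning papers feasible.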


Open file (201.23 KB 1133x1700 spaghetti_mama.jpg)
Idling around the Interwebz today[a] and reading the Chinese Room Argument article on the IEP[b], I came across the editor's contention that the notion that "mind is everywhere" is an "absurd consequence". >"Searle also insists the systems reply would have the absurd consequence that “mind is everywhere.” For instance, “there is a level of description at which my stomach does information processing” there being “nothing to prevent [describers] from treating the input and output of my digestive organs as information if they so desire.” "[1],[2] I found that supposed refutation of this concept vaguely humorous on a personal level. As a devout Christian Believer, I would very strongly assert that indeed, Mind is everywhere. Always has been, always will be. To wit: The Holy Spirit sees and knows everything, everywhere. As King David wrote: >7 Where can I go to escape Your Spirit? > Where can I flee from Your presence? >8 If I ascend to the heavens, You are there; > if I make my bed in Sheol, You are there. >9 If I rise on the wings of the dawn, > if I settle by the farthest sea, >10 even there Your hand will guide me; > Your right hand will hold me fast.[3] However, I definitely agree with the authors where they write that >"it's just ridiculous" to assert >" “that while [the] person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might” ".[1],[2]


Edited last time by Chobitsu on 05/16/2022 (Mon) 00:13:09.

Reploid thread Robowaifu Technician 02/28/2022 (Mon) 04:13:32 No.15349 [Reply]
A few people thought it'd be a good idea to start a thread for Reploid builds, so here we are! To kick things off, here's a little progress on painting my own RiCO. It's just spray paint so it doesn't look stellar, and I screwed up a couple of parts. All the blue trim paint still needs doing as well. I don't care if it's perfect, I just want to get her done. Not to mention anything adjacent to "art" or "craftsmanship" is beyond me, mostly due to lack of patience: I don't want to spend thousands of hours grinding away with a paintbrush when I could be designing up cool (to me...) robotic mechanisms, for instance. I bet your bottom dollar none of my projects will be winning awards in the fit-and-finish department. Can't wait to see what happens with Pandora and whatever other Reploid projects people might be working on.
36 posts and 28 images omitted.
>>16021 >Great idea, mind if I borrow this technique for MaidCom? Please do. Check out how I did the rest of her eyes as well, perhaps it could scale up.
>>16029 It does scale up really well. Though I will innovate upon it for Pandora.
Edited last time by AllieDev on 04/27/2022 (Wed) 01:32:08.
>>15999 I'm really impressed with this. I have a lot of questions, I'll scan the rest of this thread to make sure they aren't already answered first.
Curious if you had any changes this week with your wonderful robowaifu, RiCOdev?
>>16243 RiCO will be on hold for the near future; your friendly neighborhood reploid builder has got bigger fish to fry. I'll still keep an eye out here to answer questions etc.

Waifu Robotics Project Dump Robowaifu Technician 09/18/2019 (Wed) 03:45:02 No.366 [Reply] [Last]
Edited last time by rw_bumpbot on 05/25/2020 (Mon) 04:54:42.
240 posts and 174 images omitted.
>>15733 >link Google is your friend: https://sweetiebot.net/ From what I understand they want to keep things rated PG. The voice generator (community talknet project) is unrelated and based in the /ppp/ thread on 4chan.org/mlp/. Enter if you dare ^:)
Open file (117.25 KB 640x798 cybergirl.jpg)
>>15731 >ponies I was thinking more >>15733 or picrel, but a cute robot horse has the PR advantage because it could easily be a children's toy.
>>15731 I think some of the ponies mentioned this project to us before Anon, thanks. I wish those Russians good success! >>15733 Heh, /monster/ pls. :^) A hexapod waifu is actually a really good idea for a special-service meido 'waifu' IMO. Just my own subjective tastes in the matter. But objectively, a hexapod locomotion base (especially combined with roller 'feet') is a superior stability platform from which to do housework & other work. No question. >>15736 Yep, I immediately came to a similar conclusion. But it's obvious they are going for police-force service with the push for that bot, and the price tag shows it. Shame, tbh.
Are you people serious? All the videos are hosted offsite. What the hell am I going to do with a filename, put it in Yandex? Fucking yahoo.jp??? Tor search??? Why do this?
>>16233 Heh, sorry about that Anon. You're just dealing with the missing information from when our first site was destroyed on us. Those were part of the recovery effort. Unfortunately our copies of the files were lost in the attack. Maybe someday some Anon might restore them here for us. Again apologies, you might see similar in other threads here too. But at least we still have a board! :^)

Electronics General Robowaifu Technician 09/11/2019 (Wed) 01:09:50 No.95 [Reply] [Last]
Electronics & Circuits Resources general

You can't build a robot w/o good electronics. Post good info about learning, building & using electronics.

www.allaboutcircuits.com/education/
71 posts and 19 images omitted.
>>14824 OK, I think this is a reasonably good thread (unless you have a better one in mind?) Thanks again, Anon.
>>14824 Interesting. What does this imply? Is there a significant improvement in the performance of chips using this style of transistor? If so, then we'll have to make our own ASIC using the same method.
Open file (79.85 KB 600x424 BPv4-f.jpg)
>>734 For any Anons currently working on electronics boards that would benefit from Bus Pirate, there is also a v4 that has more RAM available. > http://dangerousprototypes.com/docs/Bus_Pirate_v4_vs_v3_comparison The firmware code is also available. https://github.com/BusPirate/Bus_Pirate
Open file (427.08 KB 1500x1500 super start kit.jpg)
Open file (127.68 KB 1649x795 rip mr servo.png)
I'm looking to get into electronics. Are the ELEGOO UNO starter kits any good? There's one on Amazon for $40. I basically just want to learn how to program a microcontroller, control servos with a controller and understand enough so I can start building a robowaifu. Or should I save my money and just play with the circuit simulator in TinkerCAD?
>>16224 I actually have the kit on the left, and I definitely recommend them for learning Anon, sure.
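For what it's worth, the servo control those UNO kits teach reduces to simple timing math: hobby servos typically expect a ~50 Hz PWM signal whose high pulse runs from roughly 1.0 ms (0 degrees) to 2.0 ms (180 degrees), though the exact endpoints vary per servo. A quick sketch of that mapping (pure math, no hardware assumed):

```python
# Hobby-servo timing math: angle -> pulse width -> PWM duty cycle.

def angle_to_pulse_ms(angle_deg, min_ms=1.0, max_ms=2.0):
    """Map a servo angle in [0, 180] degrees to a pulse width in milliseconds.
    The 1.0-2.0 ms endpoints are the common convention; real servos vary."""
    angle_deg = max(0.0, min(180.0, angle_deg))      # clamp to the servo's range
    return min_ms + (max_ms - min_ms) * angle_deg / 180.0

def pulse_to_duty(pulse_ms, period_ms=20.0):
    """Duty-cycle fraction for the given pulse within a 50 Hz (20 ms) frame."""
    return pulse_ms / period_ms

mid = angle_to_pulse_ms(90)        # 1.5 ms pulse centers the servo
print(mid, pulse_to_duty(mid))     # 1.5 0.075
```

Whether you drive this from an UNO's `Servo` library or from a Pi's PWM pin, the numbers are the same, so playing with the math first (even in TinkerCAD's simulator) is a cheap way to build intuition before buying hardware.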

Open file (1.04 MB 2999x1298 main-results-large.png)
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents Robowaifu Technician 01/21/2022 (Fri) 15:35:19 No.15047 [Reply]
https://wenlong.page/language-planner/ https://github.com/huangwl18/language-planner Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (e.g. "make breakfast"), to a chosen set of actionable steps (e.g. "open fridge"). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The conducted human evaluation reveals a trade-off between executability and correctness but shows a promising sign towards extracting actionable knowledge from language models. I think it's worth a whole thread. If not, move it to the appropriate section.
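The paper's "semantic translation" step - snapping a free-form LLM plan step to the nearest admissible action - can be sketched with a toy embedding and cosine similarity. The real method uses a RoBERTa-style sentence encoder; the bag-of-words "embedding" and the action list below are stand-ins for illustration only.

```python
# Toy version of mapping generated plan steps onto admissible actions.
import math
from collections import Counter

def embed(text):
    """Toy embedding: lowercase bag-of-words counts (a real system would use
    a trained sentence encoder here)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def translate(step, admissible):
    """Snap a free-form generated step to the most similar admissible action."""
    return max(admissible, key=lambda act: cosine(embed(step), embed(act)))

actions = ["open fridge", "grab egg", "turn on stove", "walk to kitchen"]
print(translate("open the fridge door", actions))  # open fridge
```

The point is that the environment only accepts actions from a fixed vocabulary, so even a perfectly sensible LLM step like "open the fridge door" has to be projected onto the nearest legal one before it can be executed.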
Update in this topic, this time by DeepMind. https://www.youtube.com/watch?v=L9kA8nSJdYw Slowly, they realize what we all think about, yet only in virtual spaces.
Open file (251.80 KB 1665x621 calm outline.png)
Open file (172.15 KB 859x785 calm planning.png)
Open file (85.32 KB 827x627 calm success.png)
Open file (337.52 KB 1523x725 coca.png)
Open file (119.53 KB 901x670 coca performance.png)
>>16197 >Human: show me how you polish the baluster >Robowaifu: say no more goshujin-sama There was a really interesting paper recently on context-aware language models that can do planning and achieve better than human performance on a flight-booking task, using a model the same size as GPT-2 small: https://sea-snell.github.io/CALM_LM_site/ It shows a lot of promise for using a Monte Carlo tree search for doing planning with language models, since it only takes 5 dialog generation attempts with the new method to outdo human performance without doing a tree search at all. Also a huge breakthrough in zero-shot multi-modal learning has been made that completely blows CLIP and SotA specialized models to pieces by using a simple to understand contrastive and captioning loss (CoCa) that can leverage existing models: https://arxiv.org/pdf/2205.01917.pdf This is going to be huge for embodied agents. It's a lot like the similarity measure used between sentence embeddings produced by the RoBERTa model in OP's paper to ensure the translated generated instructions are executable, except it does it between images and sentences. And there's another paper worth mentioning doing transfer learning from a language model trained on Wikipedia to an RL agent (on continuous control and games) that outperforms training from scratch: https://arxiv.org/pdf/2201.12122.pdf It seems we're headed towards a leap forward soon with goal-oriented embodied agents using language models.
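For anyone wanting to see what the contrastive half of CLIP/CoCa actually computes: matched image/text embedding pairs are pulled together and mismatched pairs pushed apart via a softmax over pairwise similarities. A tiny pure-Python sketch with made-up 2-D embeddings (real models use learned encoders, huge batches, and a symmetric image-to-text plus text-to-image loss):

```python
# Minimal contrastive (InfoNCE-style) loss over image/text embedding pairs.
import math

def contrastive_loss(img_embs, txt_embs, temp=0.07):
    """Mean cross-entropy of each image picking out its own caption."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def norm(a):
        n = math.sqrt(dot(a, a))
        return [x / n for x in a]
    img = [norm(v) for v in img_embs]
    txt = [norm(v) for v in txt_embs]
    loss = 0.0
    for i, v in enumerate(img):
        logits = [dot(v, t) / temp for t in txt]   # similarity to every caption
        m = max(logits)                            # shift for numeric stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_z - logits[i]                  # -log softmax at the true pair
    return loss / len(img)

aligned    = contrastive_loss([[1, 0], [0, 1]], [[1, 0], [0, 1]])
misaligned = contrastive_loss([[1, 0], [0, 1]], [[0, 1], [1, 0]])
print(aligned < misaligned)  # True
```

This same image/sentence similarity score is what the OP paper leans on (via RoBERTa embeddings over sentences) to decide whether a generated instruction is close enough to an executable one.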

Artificial Wombs general Robowaifu Technician 09/12/2019 (Thu) 03:11:54 No.157 [Reply] [Last]
I think we need to figure out how to fit a womb onto a waifubot. Where's the fun in having sex if you can't procreate?

Repost from a thread on /b/;
>"If you're like me and want to fuck a qt battlebot and get her pregnant, the best place to put an artificial womb is where a normal womb would be on a normal girl. The metal exterior could simply be a bunch of metal plates that unfold to allow space for the womb pod inside. The baby is probably safest inside the battlebot, and if she has good calibration then there shouldn't be problems with her falling and hurting the baby. After giving birth the metal plates could automatically fold back up again, shrinking the womb pod inside so she is combat effective again."

Well /robowaifu/? Feasible?
124 posts and 14 images omitted.
Open file (274.25 KB 650x1000 SplashTittyMonster.jpg)
Open file (1.40 MB 2189x1305 1586409061467.png)
Before making artificial wombs, how about focusing on how to make artificial functional mammary glands? If I did clone myself then I'd have something to feed him with, and if I didn't at least I could suck them myself. It also seems like a significantly lower bar in terms of complexity compared to an entire functioning uterus.
>>13154 Because the wombs aren't for being put into the robowaifus but for having children. Glands aren't necessary. However, it seems to be possible to create genetically modified yeast that produces all kinds of milk, so it's not even a problem. I don't know about the current state; here are two vids I didn't watch myself yet, the first short, the other really long: https://youtu.be/CXYg-qt4OCc https://youtu.be/ZiWnygcYsiQ I thought these had been available for years, but somehow didn't hear more about it. So I'm curious myself.
>>13189 >Because the wombs aren't for being put into the robowaifus but for having children. I've argued with some weirdos who were very insistent that the wombs should be in the robowaifus, just because it's their fetish. I'm not saying mammary glands need to be in the waifubot, but if they fit, and you're into that, I don't see why not. Either way they'd be useful for raising the kid. >Glands aren't necessary. I'm going to have to disagree with you on that. I don't know a lot about fetal development, but I do know varying blood hormone levels can have significant effects on it, and hormones largely come from and are regulated by glands. When it comes to artificial organs, glands seem largely overlooked. As far as I know there's something of an artificial ovary and an artificial thymus (an organ that literally disappears with age), but other artificial endocrine organs basically don't exist. I think that in order to create a perfectly healthy baby with an artificial womb, you'd either need to replicate almost all of the other organs in the body to digest food/supplements to make blood, or have a regular supply of healthy pregnant-woman blood. The simple fact that blood plasma donations are still a thing highlights that we still can't create an adequate blood plasma substitute. All an artificial mammary gland would do is make realistic breast milk, which is a simple task compared to making an artificial womb, but it is still a difficult task on its own, and it may solve some problems that would also need solving for artificial wombs.
Alright folks, there's a lot of good information being shared about what scientists are doing with this stuff. Here's a news flash though: Anyone can follow the scientific method. We can break this whole thing down into smaller parts and grassroots this shit. >ctrl-f 'amniotic fluid' >no results What does a human body do? Eat. What does a pregnant human body do? Eat. What does a human fetus do? Absorb nutrients from what the mother eats. Chemically speaking, literally everything that the human body is capable of is available in some form or another at your local grocery store. My question is how difficult would it be to mix together a facsimile for amniotic fluid? What makes it up? How much of what, and how does that change during gestation? We need recipe cards or something.

Open file (173.41 KB 1080x1349 Alexandra Maslak.jpg)
Roastie Fear 2: Electric Boogaloo Robowaifu Technician 10/03/2019 (Thu) 07:25:28 No.1061 [Reply] [Last]
Your project is really disgusting >=== Notice #2: It's been a while since I locked this thread, and hopefully the evil and hateful spirit that was dominating some anons on the board has gone elsewhere. Accordingly I'll go ahead and unlock the thread again provisionally as per the conditions here: >>12557 Let's keep it rational OK? We're men here, not mere animals. Everyone stay focused on the prize we're all going for, and let's not get distracted. This thread has plenty of good advice in it. Mine those gems, and discard the rest. Notice:


Edited last time by Chobitsu on 09/02/2021 (Thu) 18:36:20.
118 posts and 37 images omitted.
>>13408 You aren't even taking into consideration financing options. I bet there are places that will do monthly payments, including insurance.
Open file (639.48 KB 1374x690 1525099257922.png)
Why do they do this? How different would it be if a robot wife was what they were looking at? >--- SFW board Anon, pls keep it spoilered here. Thanks!
Edited last time by Chobitsu on 04/18/2022 (Mon) 05:40:31.
>>15920 Not too much different. I've mentioned the idea of artificial wombs to some women (in person) to bypass the abortion debate. Some of the most timid girls I know went into a flying rage at the thought of their potential replacement. I personally think the roastie would be willing to commit violence at any prospect of them losing their current power hold. She will vote for any and all laws banning such. Thus, it may be best to avoid mentioning it to them, and just ignoring them and only sharing amongst men.
>>15920 >Why do they do this? b/c they're women >How different would it be if a robot wife was what they were looking at? They would actually be angrier. Remember that women are operating on instinct and feels. Sure, some may actually do an internal logic/fact check, but those are the unicorns; most will operate on their feeling toward something and justify it with whatever post-hoc. Example: robotic replacements will terrify them. Behavior: finding anything and everything to "shame" this idea, as though there is some higher moral principle being violated (this is to appeal to men's guilt), when in fact it is just that women are terrified whenever attention is taken away from them (whether by alcohol, sports, video games, other women, or now... gynoids/robowaifus).
>>16137 This. Clear & simple.

New machine learning AI released Robowaifu Technician 09/15/2019 (Sun) 10:18:46 No.250 [Reply] [Last]
OPEN AI/ GPT-2 This has to be one of the biggest breakthroughs in deep learning and AI so far. It's extremely skilled at developing coherent humanlike responses that make sense, and I believe it has massive potential; it also never gives the same answer twice. >GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like - it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing >GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. Also, the current public model shown here only uses 345 million parameters; the "full" AI (which has over 4x as many parameters) is being withheld from the public because of its "potential for abuse". That is to say, the full model is so proficient in mimicking human communication that it could be abused to create new articles, posts, advertisements, even books, and nobody would be able to tell that there was a bot behind it all. <AI demo: talktotransformer.com/ <Other Links: github.com/openai/gpt-2 openai.com/blog/better-language-models/ huggingface.co/
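The "never gives the same answer twice" behavior isn't magic, by the way: it comes from sampling the next token from a softmax over the model's output logits rather than always taking the argmax. A toy sketch with made-up logits (temperature controls how adventurous the sampling gets):

```python
# Temperature sampling over a toy logit vector, as language models do per token.
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]   # shift by max for numeric stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                           # inverse-CDF draw over the probs
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]
# Low temperature -> near-greedy (almost always picks the top logit);
# high temperature -> flatter distribution, more varied output.
print(sample_token(logits, temperature=0.1, rng=random.Random(0)))  # 0
```

A real model just does this once per generated token over a ~50k-entry vocabulary, which is why re-running the same prompt yields a different continuation each time.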


Edited last time by robi on 03/29/2020 (Sun) 17:17:27.
118 posts and 46 images omitted.
>>15911 >I'm pretty rusty and wasted a lot of time this week trying to figure out a confusing bug that turned out to be a stack buffer overflow, but I hunted it down and got it fixed. I have half of GPT-2's tokenizer done, a basic tensor library, did some of the simpler model layers and have all the basic functions I need now to complete the rest. That sounds awesome, actually. >I'm hoping it'll be done by Friday. I look forward to it. Anything else I could be downloading in the meantime?
>>15912 Good idea, I hadn't even made a model file format for it yet. The model is ready for download now (640 MB): https://mega.nz/file/ymhWxCLA#rAQCRy1ouJZSsMBEPbFTq9AJOIrmJtm45nQfUZMIh5g Might take a few mins to decompress since I compressed the hell out of it with xz.
>>15924 I have it, thanks.
>>15989 I got pretty burnt out from memory debugging and took a break from this but I'm gonna take another run at it this week. I made some advances in the meantime with training the full context size of GPT-2 medium on a 6 GB GPU by using a new optimizer and have most of the human feedback training code implemented in the new training method. So I'm revved up again to get this working.
>>16090 >I got pretty burnt out from memory debugging and took a break from this but I'm gonna take another run at it this week. nprb, I can hardly imagine. >I made some advances in the meantime with training the full context size of GPT-2 medium on a 6 GB GPU by using a new optimizer and have most of the human feedback training code implemented in the new training method. So I'm revved up again to get this working. That sounds amazing actually. Looking forward to helping.
