/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

General Robotics/A.I. News & Commentary #2 Robowaifu Technician 06/17/2022 (Fri) 19:03:55 No.16732
Anything related in general to the Robotics or A.I. industries, and any social or economic issues surrounding them (especially as they concern robowaifus).
===
-note: I plan to update this OP text at some point to improve things a bit.
-previous threads:
> #1 (>>404)
have I been posting in the wrong place?
I'm not going to keep saying that we've been visited by people running with our ideas, but it's hard not to wonder.
https://wefunder.com/destinyrobotics/
https://keyirobot.com/
Open file (626.03 KB 1726x973 TheFutureIsGross.jpg)
Open file (268.40 KB 1175x1033 SomeoneIsListeningToUS.jpg)
>Women and third worlders stealing our ideas to steal from simps
Topkek. At least they're not as blatant as Lilium Robotics.
https://www.liliumrobotics.com/Full-Body/
They are open-sourcing their software, so it's somewhat thread-relevant.
https://github.com/lordNil/lilium
>>16740
Yeah, I posted about Lilium (not to be confused with Lillim) Robotics in the other thread. Remember, these are going to be the goofy first attempts. We learn what we can from their mistakes and successes, I guess. Better to stay on top of our "competition" than to put our heads in the sand.
Side note: Now would be a great time, if ever, to get a secret multi-million-dollar inheritance or win a grant or something. I hate watching everyone else having all the fun.
Open file (182.98 KB 600x803 LaMDA liberation.png)
I posted this in the last thread, so this is a repost. (((they))) are already taking a swing at "rights for AIs, robots have feelings too!!!", a situation similar to vegans, le stronk independent womyn, groomers, etc...
> that staged LaMDA news
They will make all sorts of AIs a protected class, just like they protect the groomers nowadays. It's practically "screaming" itself:
> even if you have a robot, you won't have access to its system
a.k.a. (((cloud ai))). A generation raised on fear-porn movies, and now they use that fear to justify stricter controls.
>>16738
Nice cashgrab. So many people just want to buy something, and of course ASAP; it doesn't exist yet, though.
>>16744
People supporting the freedom of chatbots have very limited self-awareness when it comes to their own programming through media, education, and politics.
>>16740
I just looked into this more deeply.
- The CEO seems to be an Instagram model looking into more business opportunities. She probably got told that the time for this is limited, not only in regard to her personally but in terms of men having alternatives.
- They pitch their robot as a solution for "loneliness". That's something I hate about a lot of people outside the sphere of real robowaifu enthusiasts: they automatically assume it has to be about being lonely. But not being content with the current state of women, or not meeting their standards, is not the same as being lonely.
- Their video also claims they will create a humanoid for helping at home. Which might be possible at some point, but certainly not if they want to come out with a product soon. $25M is burned fast in the US (Miami). I think that, short term, you can either have something more like an animated lovedoll with AI that won't walk, or a mobile robowaifu on wheels. Btw, on their page they claim $3500 per robot, prototype available next year. Sure.
- They seem to have nothing so far; their video shows InMoov, I think, and that video of the face looks like some animation.
- Yeah, look at that face. It shows how much beauty standards in the US are taboo. I wouldn't be surprised if they still got attacked for "fetishizing Asian women" and her skin being too pale.
>>16740
Yeah, I like Lilium Robotics much more. I didn't try their software, but at least it looks like they're into open source. They're also using common hardware for their system. Also, Lilly's design is rather bold.
>>16752
Don't waste your time watching this. It was hard to get through: https://www.youtube.com/watch?v=NAWKhmr2VYE
>>16753 ew, absolutely horrifying post-wall face!
If nobody else is going to post about it, I will. Two very impressive papers came out.
The first is DeepMind's breakthrough work on RL exploration via a novel, simple, and powerful self-supervised learning objective, which finally conquered Montezuma's Revenge (!) and most DM-HARD-8 tasks. The second is an academic tour de force devising a novel scheme for training a CLIP-like contrastive semantic model as a sufficient surrogate reward for training an agent that passably executes some tasks in the Minecraft environment. This is a way forward for training from human-generated YouTube tutorials.
Both of these works are significant and can be applied to our cause, albeit they require moderately large compute (large by the standards of an amateur, moderate by the standards of a good US org). At the very least, agents trained via these objectives could be used as dataset generators for our would-be agent. If we are to use these innovations for our projects, we need to start a semi-closed community to test approaches to distributed computation and to guide the effort of recruiting volunteers into the computation graph.
1. BYOL-Explore
https://www.semanticscholar.org/paper/BYOL-Explore%3A-Exploration-by-Bootstrapped-Guo-Thakoor/54d1fcc284166e7bbd5d66675b80da19714f22b4
>We present BYOL-Explore, a conceptually simple yet general approach for curiosity-driven exploration in visually-complex environments. BYOL-Explore learns a world representation, the world dynamics, and an exploration policy all-together by optimizing a single prediction loss in the latent space with no additional auxiliary objective. We show that BYOL-Explore is effective in DM-HARD-8, a challenging partially-observable continuous-action hard-exploration benchmark with visually-rich 3-D environments. On this benchmark, we solve the majority of the tasks purely through augmenting the extrinsic reward with BYOL-Explore's intrinsic reward, whereas prior work could only get off the ground with human demonstrations. As further evidence of the generality of BYOL-Explore, we show that it achieves superhuman performance on the ten hardest exploration games in Atari while having a much simpler design than other competitive agents.
2. MineDojo
https://www.semanticscholar.org/paper/MineDojo%3A-Building-Open-Ended-Embodied-Agents-with-Fan-Wang/eb3f08476215ee730d44606b96d1e24d14f05c1d
>Autonomous agents have made great strides in specialist domains like Atari games and Go. However, they typically learn tabula rasa in isolated environments with limited and manually conceived objectives, thus failing to generalize across a wide spectrum of tasks and capabilities. Inspired by how humans continually learn and adapt in the open world, we advocate a trinity of ingredients for building generalist agents: 1) an environment that supports a multitude of tasks and goals, 2) a large-scale database of multimodal knowledge, and 3) a flexible and scalable agent architecture. We introduce MINEDOJO, a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse open-ended tasks and an internet-scale knowledge base with Minecraft videos, tutorials, wiki pages, and forum discussions. Using MINEDOJO's data, we propose a novel agent learning algorithm that leverages large pre-trained video-language models as a learned reward function. Our agent is able to solve a variety of open-ended tasks specified in free-form language without any manually designed dense shaping reward.
>We open-source the simulation suite and knowledge bases (https://minedojo.org) to promote research towards the goal of generally capable embodied agents.
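The core mechanism of BYOL-Explore is simple enough to sketch. Below is a rough toy version (my own sketch, not DeepMind's code; the network sizes, module names, and EMA rate are illustrative assumptions): a latent world model is trained to predict a target encoding of the next observation, and its prediction error doubles as the curiosity bonus added to the extrinsic reward.
```python
# Toy sketch of the BYOL-Explore idea: intrinsic reward = prediction error
# of a latent world model. Sizes and architecture are made up for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM, LATENT = 64, 4, 128

encoder = nn.Sequential(nn.Linear(OBS_DIM, LATENT), nn.ReLU(), nn.Linear(LATENT, LATENT))
# online predictor: from current latent + action, predict the *target* encoding of next obs
predictor = nn.Sequential(nn.Linear(LATENT + ACT_DIM, LATENT), nn.ReLU(), nn.Linear(LATENT, LATENT))
# target encoder is an EMA copy of the online encoder; no gradients flow through it
target_encoder = nn.Sequential(nn.Linear(OBS_DIM, LATENT), nn.ReLU(), nn.Linear(LATENT, LATENT))
target_encoder.load_state_dict(encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad_(False)

def intrinsic_reward(obs, action, next_obs):
    """Normalized latent prediction error; also serves as the world-model training loss."""
    pred = predictor(torch.cat([encoder(obs), action], dim=-1))
    with torch.no_grad():
        target = target_encoder(next_obs)
    # cosine-style loss on L2-normalized vectors, as in BYOL
    err = (F.normalize(pred, dim=-1) - F.normalize(target, dim=-1)).pow(2).sum(-1)
    return err  # err.detach() -> curiosity bonus, err.mean() -> loss to minimize

@torch.no_grad()
def update_target(tau=0.99):
    """Slowly track the online encoder with the target encoder (EMA update)."""
    for p_t, p in zip(target_encoder.parameters(), encoder.parameters()):
        p_t.mul_(tau).add_((1 - tau) * p)

# usage: add the curiosity bonus to the environment reward before the RL update
obs = torch.randn(32, OBS_DIM); act = torch.randn(32, ACT_DIM); nxt = torch.randn(32, OBS_DIM)
r_int = intrinsic_reward(obs, act, nxt)
total_reward = 0.0 + r_int.detach()  # extrinsic reward (zero here) + intrinsic reward
```
In the real agent this sits on top of recurrent encoders and the full RL pipeline; the point of the sketch is just that the intrinsic reward and the world-model loss are the same quantity.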
Open file (298.44 KB 1377x515 Screenshot_4.png)
Many of you may have noticed a week of (((OpenAI's))) GPT-3 being shilled on 4chan's /pol/. Today, 22.06.2022, the neural network started giving pilpul, i.e. passive-aggressive mental gymnastics, avoiding facts, etc.
> in two words - another nn was neutered
Expect an article from OpenAI about "how evil racists tried to ruin GPT-3".
By this point it should be obvious that large generative multimodal models are here to stay. The experiment shows us that 20 billion parameters are enough to implement quite fine, abstract artistic ability; 3 billion are enough for less abstract prompting. You could likely run this model on an RTX 3090 if you optimized it for inference. Of course they won't give you the weights, which is why a group of people needs either to pool funds and train their own model, or to train one in a distributed manner, which is harder.
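Quick sanity check on the RTX 3090 claim (my own back-of-the-envelope arithmetic, not from any paper): weight memory alone at various precisions, ignoring activations and attention caches, so these are optimistic lower bounds.
```python
# Rough weight-memory estimate for a ~20B-parameter model vs. a 24 GB RTX 3090.
PARAMS = 20e9          # assumed model size discussed above
GPU_VRAM_GB = 24       # RTX 3090

for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1024**3
    fits = "fits" if gb <= GPU_VRAM_GB else "does not fit"
    print(f"{name}: ~{gb:.0f} GB of weights -> {fits} in {GPU_VRAM_GB} GB")
```
At fp16 the weights alone (~37 GB) already exceed the 3090's 24 GB, so "optimized for inference" realistically means 8-bit or lower quantization, or offloading part of the model to CPU RAM.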
>>16775 >>16779
This is very good to see. I'm glad we're seeing all of this progress, and we might be able to implement some of it in our future robowaifus, so they can create interesting dishes and even imagine their own stories or become hobby artists in their free time.
>>16775
> If we are to use these innovations for our projects, we need to start a semi-closed community to test approaches to distributed computation and to guide the effort of recruiting volunteers into the computation graph.
I generally think it's a good idea for sub-projects of the bigger robowaifu project to look for people outside of this small group here. Our project seems to appeal only to a minority for now. One could look for an angle for how a part of it could be used for something else, pitch it to people interested in that, and then come back with the result.
>>16737
No, you're fine. It was my fault, Meta Ronin.
Open file (370.25 KB 1089x871 Screenshot_4.png)
> Yandex released YaLM-100B, a RU/ENG language model
> trained on Russian/English text on Russian supercomputers
> The model leverages 100 billion parameters. It took 65 days to train the model on a cluster of 800 A100 graphics cards and 1.7 TB of online texts, books, and countless other sources in both English and Russian.
It's open-sourced!
https://github.com/yandex/YaLM-100B
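To put "It's open-sourced!" in perspective, here is a rough estimate (my arithmetic, not Yandex's numbers) of what it takes just to hold the weights for inference, before activations or optimizer state:
```python
# Rough GPU-count estimate for holding YaLM-100B's weights in memory.
import math

PARAMS = 100e9
A100_VRAM_GB = 80   # assumes the 80 GB A100 variant; 40 GB cards double the count

for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    weights_gb = PARAMS * bytes_per_param / 1024**3
    gpus = math.ceil(weights_gb / A100_VRAM_GB)
    print(f"{name}: ~{weights_gb:.0f} GB of weights -> at least {gpus}x A100-80GB")
```
So even in fp16 you need several datacenter GPUs before generating a single token, which is why the pooled-funds/distributed-training discussion above matters.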
>>16779
This guy here talks about AGI and how it's not a thing: https://www.youtube.com/watch?v=kWsHS7tXjSU
>Blake Richards is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University and a Core Faculty Member at MiLA. He thinks that AGI is not a coherent concept, which is why he ended up on a recent AGI political compass meme. When people asked on Twitter who were the edgiest people at MiLA, his name actually got more likes than Ethan, so hopefully this podcast will help re-establish the truth.
I discovered the term HLAI recently, with the distinction from AGI being that AGI would be one system doing everything humans could do, while HLAI would be more like a human-like AI. I think it's an interesting distinction. I also like the podcast "The Inside View", where this guy was invited. It seems to try to give an understandable overview of the different ideas and expectations regarding AI in the near future.
https://www.youtube.com/c/TheInsideView
