/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Hold onto your papers Robowaifu Technician 09/16/2019 (Mon) 05:59:19 No.269
It's hard to keep track of all the developments that are happening in artificial intelligence and related areas. Perhaps we could share our favourite research papers to get a better feel for all the progress happening and what we need to do next to make robowaifus a reality.

I'm mostly focused on general intelligence but this thread can be for anything that has captured and held your attention for long periods of time, whether in speech synthesis, creativity, robotics, materials, or anything else.
Open file (26.90 KB 480x360 0.jpg)
Most here may already be familiar with AlphaZero, the general reinforcement learning algorithm that mastered Chess, Shogi, and Go through self-play. It's a significant contribution that everyone learning about AI needs to understand. It uses a ResNet with policy and value heads plus a Monte Carlo tree search to explore the most likely continuations, collapsing the search tree into an improved probability distribution over the possible moves. (A minimal sketch of the search loop follows the links below.)

Original AlphaGo (January 2016)
<Paper:
storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf

AlphaGo Zero (October 2017)
<Publication:
deepmind.com/blog/alphago-zero-learning-scratch/
<Paper:
www.nature.com/articles/nature24270
<How it works (summary):
www.youtube.com/watch?v=MgowR4pq3e8
<How it works:
www.youtube.com/watch?v=XuzIqE2IshY

AlphaZero (December 2018)
<Publication:
deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/
<Early paper:
arxiv.org/abs/1712.01815
<Full paper:
science.sciencemag.org/cgi/content/full/362/6419/1140?ijkey=XGd77kI6W4rSc&keytype=ref&siteid=sci
<How it works (in-depth):
www.youtube.com/watch?v=_Z31-5D3RZg
<Tutorial:
web.stanford.edu/~surag/posts/alphazero.html
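
As promised above, here's a minimal sketch of the PUCT-style search loop these papers describe, in Python. The `Game` interface, the `net(state)` policy/value function, and the exploration constant are all hypothetical stand-ins for illustration, not anything taken from the papers themselves:

```python
# Minimal sketch of AlphaZero-style MCTS with a policy prior (PUCT).
# Assumed/hypothetical interfaces: Game with next_state(s, a),
# is_terminal(s), outcome(s) from the current player's view;
# net(state) -> (dict of action->prior, value estimate).
import math

C_PUCT = 1.5  # exploration constant; an assumed typical value

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(s, a) from the policy network
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # W(s, a)
        self.children = {}      # action -> Node

    def q(self):                # mean action value Q(s, a)
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node):
    """Pick the child maximizing Q + U, the PUCT selection rule."""
    total = math.sqrt(sum(c.visits for c in node.children.values()) + 1)
    return max(node.children.items(),
               key=lambda kv: kv[1].q()
               + C_PUCT * kv[1].prior * total / (1 + kv[1].visits))

def simulate(game, net, state, node):
    """One simulation: descend by PUCT, expand a leaf with the network,
    and back the value up, flipping sign for the alternating players."""
    if game.is_terminal(state):
        return -game.outcome(state)
    if not node.children:                     # leaf: expand via policy net
        policy, value = net(state)
        for action, prior in policy.items():
            node.children[action] = Node(prior)
        return -value
    action, child = select_child(node)
    value = simulate(game, net, game.next_state(state, action), child)
    child.visits += 1
    child.value_sum += value
    return -value
```

Running `simulate` a few hundred times from the root and then picking the move with the highest visit count is what collapses the tree into the improved move distribution mentioned above.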

https://www.invidio.us/watch?v=2ciR6rA85tg
Papers mostly get posted in the relevant threads, but we could use this thread to share some sources for longer articles and papers to download.

arXiv is a free distribution service and an open-access archive for 1,811,158 scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. Materials on this site are not peer-reviewed by arXiv: https://arxiv.org/

>The Machine Intelligence Research Institute, formerly the Singularity Institute for Artificial Intelligence, is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute

>Artificial Intelligence @ MIRI
>We do foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.
https://intelligence.org/

This posting >>7855 is based on an article from there. Looking into their research might be useful once our AI gets more complex and isn't just an AIML chatbot. They have research, papers, and analyses on their website.
>>7857
>but we could use this thread to share some sources for longer articles and papers to download.
Good idea, thanks Anon.
Correct me if I'm wrong (please!) but I can't seem to find a better, bumpable thread for AI papers ATM in the catalog, so I'll post this here. Thanks again, OP.
---
>The alignment problem from a deep learning perspective [1]
abstract
>Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. We outline a case for expecting that, without substantial effort to prevent it, AGIs could learn to pursue goals which are very undesirable (in other words, misaligned) from a human perspective. We argue that AGIs trained in similar ways as today's most capable models could learn to act deceptively to receive higher reward; learn internally-represented goals which generalize beyond their training distributions; and pursue those goals using power-seeking strategies. We outline how the deployment of misaligned AGIs might irreversibly undermine human control over the world, and briefly review research directions aimed at preventing these problems.
1. https://arxiv.org/abs/2209.00626
>===
-minor edit
Edited last time by Chobitsu on 02/23/2023 (Thu) 13:43:58.
Kinda surprised this hasn't been posted here yet. Guess everyone's busy. A fairly lengthy (155p) paper.
---
>Sparks of Artificial General Intelligence: [1]
>Early experiments with GPT-4
abstract
>Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.
1. https://arxiv.org/abs/2303.12712
Open file (176.24 KB 1500x788 431m3MEME200223.jpg)
>>22326
At least it got the current president correct lol. I'd be interested to hear what it predicts about the future of the Ukraine situation.
All these papers seem to be really interesting, no doubt packed with very useful information. Such a shame I feel like a brainlet reading them. Is there like a dictionary or something for all the scientific terms in the papers?
>>22328
Hello Anon, welcome! Well, there's not really any way I can think of to shortcut a good science education. It takes a while! :^) Why not start by looking up the words you're unfamiliar with, then by repetition you'll gradually become more aware of the relationships, I'm sure. Good luck Anon.
Yeah, I think >>27 was meant for ideas around AI, though that can lead to sharing papers. Then we have a news thread where people might be sharing papers. One problem with this thread here is that the picture in the OP is hidden.
>>22326
>Kinda surprised this hasn't been posted here yet.
Not every popular paper is posted here on the board. The creative ones are more interesting. This one also seems to be a lot about society and such, which I'm trying to avoid.
>===
-minor typo
Edited last time by Chobitsu on 05/04/2023 (Thu) 21:25:39.
>>22328
Oh, and how rude of me, my apologies. YouChat can do a good job of answering questions. For now it's usable with an unvetted account, but you can be sure a bait & switch is to come, so fair warning.
>>22332
I see. Sure, I'm fine with that Noidodev, thanks! :^)
Finished some tooling for automated paper scraping and sorting. It has compiled every arXiv paper that has been posted to /robowaifu/ + some of my personal collection, for a total of 248 papers. Now papers hold onto you :^) https://robowaifu.tech/wiki/Latest_papers
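For anyone curious how such tooling might look, here's a rough sketch of the idea: pull new-style arXiv IDs out of post text with a regex, then look up titles and abstracts through the public arXiv export API. This is not RobowaifuDev's actual code; the input file name is hypothetical, and the regex ignores old-style IDs like cs/0101001:

```python
# Sketch of arXiv-paper scraping from a board post dump.
# The export API endpoint and Atom namespace below are real.
import re
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_ID = re.compile(r'arxiv\.org/(?:abs|pdf)/(\d{4}\.\d{4,5})', re.I)
ATOM = '{http://www.w3.org/2005/Atom}'

def extract_ids(text):
    """Collect unique new-style arXiv IDs cited anywhere in the text."""
    return sorted(set(ARXIV_ID.findall(text)))

def fetch_metadata(ids):
    """Query the public arXiv export API for title + abstract per ID."""
    url = 'http://export.arxiv.org/api/query?id_list=' + ','.join(ids)
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    for entry in root.iter(ATOM + 'entry'):
        yield (entry.findtext(ATOM + 'title', '').strip(),
               entry.findtext(ATOM + 'summary', '').strip())

if __name__ == '__main__':
    posts = open('robowaifu_posts.txt').read()   # hypothetical post dump
    for title, abstract in fetch_metadata(extract_ids(posts)):
        print(title)
```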
>>22353
Awesome! :^) -In Soviet Russia, papers hold on to you!
Also, not to be a nuisance, but I'm still having difficulty connecting reliably to your system. I use Tor exclusively so maybe that's part of the issue? Any chance you could set up a hidden service there, RobowaifuDev? Maybe that would make things work smoother for myself and a few others like me. Just an idea.
>===
-fmt, edit
Edited last time by Chobitsu on 05/05/2023 (Fri) 10:12:54.
>>22353
Thanks, that's one of the many things I wanted to do and never did. I once posted some code to retrieve data about papers here >>10317 - it might be useful for creating a search index based on the abstracts, for example.
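A search index over the abstracts could be as small as a TF-IDF ranker. A dependency-free sketch follows; everything here is illustrative and not tied to the code in >>10317:

```python
# Toy TF-IDF search index over (title, abstract) pairs. A real index
# would want stemming, phrase queries, and on-disk persistence.
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    return re.findall(r'[a-z]+', text.lower())

class AbstractIndex:
    def __init__(self, docs):           # docs: list of (title, abstract)
        self.docs = docs
        self.df = Counter()             # document frequency per term
        self.tf = []                    # per-document term counts
        for _, abstract in docs:
            counts = Counter(tokenize(abstract))
            self.tf.append(counts)
            self.df.update(counts.keys())

    def search(self, query, k=5):
        """Rank documents by summed tf * idf over the query terms."""
        scores = defaultdict(float)
        n = len(self.docs)
        for term in tokenize(query):
            idf = math.log((n + 1) / (self.df[term] + 1))
            for i, counts in enumerate(self.tf):
                if term in counts:
                    scores[i] += counts[term] * idf
        top = sorted(scores, key=scores.get, reverse=True)[:k]
        return [self.docs[i][0] for i in top]    # matching titles
```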
>>22355 Have no idea how to do that. My webhost blocks Tor due to abuse. Here's an archive.org snapshot though: https://web.archive.org/web/20230505214119/https://robowaifu.tech/wiki/Latest_papers
>>22384
I see. My apologies, but that connection issue was why I didn't contribute more back when I had more time available. I should've mentioned it a year ago haha. Thanks for the archive link Anon. Wow! What a remarkable list for our little corner of the internets. No wonder some search engines are treating us as a research hub. :^)
>>22353
I don't want to bother you guys too much, but I've been lurking here for a little while after first coming across the C++ programming textbook thread that was referenced in a 4chan /g/ thread; it's the first time I'm posting here. I wanted to show my appreciation for making this wiki, it was super helpful after the LLM paper Rentry for /lmg/ on 4chan got 404'd. I made a backup of it today and thought you might find some useful academic papers in it: https://rentry.org/lmg_papers_backup
I also made this Rentry for /lmg/: https://rentry.org/lmg-resources . It doesn't have much in terms of academic papers, but it has a bunch of large language model resources for AI. Except for the first section (the FAQ section), it's mostly done at this point. Hopefully these will be of use to you. Thanks again.
>>22499
Thanks. This was the list related to the 4chan anons' chatbot creation? Why did it get deleted?
>>22531
It's a bit of a story, but I'll condense it as much as possible. First, some needed context... back in late March and early April, a user named "Jart" effectively stole the work of another user, "Slaren", on GitHub. They were both working on the llama.cpp project. Slaren (allegedly) and a few other people from that GitHub showed up in the /lmg/ general telling their side, and a lot of drama went down, etc etc... The point is that a Rentry page covering this event was made (https://rentry.org/Jarted) and it was prominently placed in every general thread post.

Fast forward a few weeks: an anon made a new /lmg/ thread without the Jarted Rentry (after some protest in previous threads to remove it) and many anons lost their shit. At first, it looked like it was just going to be a one-day thing, but the argument dragged on for a whole week and created factions. Soon there were multiple /lmg/ threads up at once, with and without that Rentry. Eventually, things slowed down and newer threads didn't have the link. Allegedly, the anon who made the /lmg/ paper list posted that he would take down his Rentry if it wasn't back up in the OP post. A few days later, it was taken down. (It could also be that he just got sick of it all, etc... hard to say with confidence.) Eventually, a compromise was reached with the creation of a drama Rentry page to document and store all future drama Rentry pages. Peace was mostly restored. A few hours after I made the post here >>22499, the anon who originally made the "Local Models Related Papers" list returned to the /lmg/ threads and brought the Rentry back up.

TL;DR: The anon behind the local model Rentry (allegedly) took it down in protest over the drama, but came back soon after the compromise solution to said drama was made. Sorry for the wall of text; I tried my best to recap the events leading to that list being taken down.
>>22546
Ahh, so that was what all the arguments were about. I've basically been living under a rock and was hella confused when I visited the threads. BTW, why even bother making Rentries on dramas? It has nothing to do with local models itself. Plenty of other generals have dramas without needing to document them.
>>22548
For reference:
>https://boards.4channel.org/g/thread/93355544#p93355815
>https://boards.4channel.org/g/thread/93355544#p93355982
>https://boards.4channel.org/g/thread/93355544#p93356092
Tbh, after a full week of infighting, nothing getting done with LLMs, and good Rentry makers & devs fleeing /lmg/, it was just kind of agreed upon that a dedicated "Drama Rentry" needed to be made. Especially since other anons were making new Rentry pages covering even newer drama events (this time: a Huggingface user who goes by "mdegans" made threats against another dev who contributed to /lmg/, saying he would dox his activities to his employer because he made an uncensored LLM model). In the end, I volunteered to make the drama Rentry chart (https://rentry.org/lmg-drama) and thankfully, there hasn't been any fighting since. Should there be any new drama where some anon makes a Rentry covering it, it would just be dumped into the "lmg-drama" chart, and no one should fight over whether it needs to be in the OP thread post or not. Unfortunately, there will always be a few anons there who love to prolong drama and fling shit whenever they can.
>>22546 Thanks, but don't waste more time on this. I've got the picture. Good to have the links.
>but don't waste more time on this
No worries on that. I'll just swing by here and there, dropping off new papers as they come by. Thanks again!
>>22499 Hello Anon, welcome! Thanks for introducing yourself. It's good to know others know about our C++ class here. Hope you can stick around! Cheers. :^)
Language Models Meet World Models: Embodied Experiences Enhance Language Models https://arxiv.org/pdf/2305.10626.pdf
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model https://arxiv.org/pdf/2305.11176.pdf
>>22693 >>22694 Neat! Sounds like we're moving closer Anon. :^)
This site https://www.catalyzex.com/ shows you code implementations for a paper based on the arxiv ID.
>>23735
Wow, that's rather cool NoidoDev. Any ideas on the limitations imposed on free usage?
>>23755
Sorry, I don't understand the question. You mean the difference between the Pro and Free versions of the site? I didn't even look into that before you asked. I just thought to use it as much as I want, then we'll see if I want the Pro version. I think it just goes beyond search and adds notifications, even based on some filters:
>PRO ($5/month)
>Unlimited alerts (for new code for a paper/topic, latest developments, author's newest work, and more)
>Advanced search filters (language/framework, computational requirement, dataset, use case, hardware, etc.)
>ELITE ($10/month): All of the above + private bookmarks/notes/collections
>ALL-STAR ($30/month): All of the above + launch and run ML code
>>23771
>Sorry, I don't understand the question. You mean the difference between the Pro and Free versions of the site?
My apologies for not making that clear Anon, yes. It strikes me that they will lock the entire thing behind a paywall if it ever picks up steam (not uncommon now IMO), and I was just wondering if you knew more about their current operational mode. Regardless, thanks for the information NoidoDev, cheers. :^)
>>23785
I'd say in that case there will be a competitor, and the links include the arXiv ID of the paper, so they could be parsed and redirected to an alternative service.
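A sketch of that fallback idea: since the paper ID is embedded in the link, a few lines of parsing can repoint it anywhere. The CatalyzeX URL shape used below is an assumption; the arXiv abs URL format is real:

```python
# Rewrite a CatalyzeX-style link to a plain arXiv abstract link by
# extracting the embedded new-style arXiv ID.
import re

def rewrite_link(url, target='https://arxiv.org/abs/{}'):
    m = re.search(r'(\d{4}\.\d{4,5})', url)
    return target.format(m.group(1)) if m else url

# hypothetical input URL shape
print(rewrite_link('https://www.catalyzex.com/paper/arxiv:2305.10626'))
# -> https://arxiv.org/abs/2305.10626
```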
Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners
https://arxiv.org/abs/2307.01928
>Large language models (LLMs) exhibit a wide range of promising capabilities -- from step-by-step planning to commonsense reasoning -- that may provide utility for robots, but remain prone to confidently hallucinated predictions. In this work, we present KnowNo, which is a framework for measuring and aligning the uncertainty of LLM-based planners such that they know when they don't know and ask for help when needed. KnowNo builds on the theory of conformal prediction to provide statistical guarantees on task completion while minimizing human help in complex multi-step planning settings. Experiments across a variety of simulated and real robot setups that involve tasks with different modes of ambiguity (e.g., from spatial to numeric uncertainties, from human preferences to Winograd schemas) show that KnowNo performs favorably over modern baselines (which may involve ensembles or extensive prompt tuning) in terms of improving efficiency and autonomy, while providing formal assurances. KnowNo can be used with LLMs out of the box without model-finetuning, and suggests a promising lightweight approach to modeling uncertainty that can complement and scale with the growing capabilities of foundation models.
https://robot-help.github.io
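For intuition, here's a toy version of the split conformal prediction recipe the abstract leans on (not the paper's actual code; all numbers are made up): calibrate a score threshold on held-out multiple-choice examples so the prediction set covers the true option with probability at least 1 - alpha, then ask for help whenever the set isn't a singleton:

```python
# Toy split conformal prediction for a multiple-choice LLM planner.
import math

def conformal_threshold(true_option_probs, alpha=0.1):
    """Calibrate on held-out examples; inputs are the model probabilities
    assigned to the *correct* option in each calibration example."""
    scores = sorted(1.0 - p for p in true_option_probs)  # nonconformity
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))                 # conformal quantile
    return scores[k - 1] if k <= n else 1.0              # 1.0 = include all

def prediction_set(option_probs, qhat):
    """Every option whose nonconformity score clears the threshold."""
    return [o for o, p in option_probs.items() if 1.0 - p <= qhat]

def plan_or_ask(option_probs, qhat):
    s = prediction_set(option_probs, qhat)
    return ('act', s[0]) if len(s) == 1 else ('ask', s)

# made-up calibration probabilities and a toy ambiguous instruction
qhat = conformal_threshold([0.9, 0.8, 0.95, 0.7, 0.85,
                            0.6, 0.9, 0.75, 0.8], alpha=0.1)
print(plan_or_ask({'pick up the mug': 0.6, 'pick up the cup': 0.35}, qhat))
```

With the numbers above the threshold works out to 0.4, so only 'pick up the mug' makes the set and the planner acts on it; shrink its probability and the set grows to both options, triggering a request for human help.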
>>24017
Neat! It's important that we utilize highly-flexible ways to help our robowaifus properly grasp new, so-called """commonsense""" concepts, or even just everyday activities (cf. Chii's misunderstanding of the basics of cooking, or her inability to use a vacuum cleaner correctly). This sounds like it could be a valuable step along that pathway to understanding, (hopefully) using smol computing resources. BTW I saw this paper in a listing before but hadn't looked into it; thanks for posting its abstract here Anon. Now it's even more interesting to study.
>===
-prose edit
Edited last time by Chobitsu on 07/17/2023 (Mon) 21:14:59.
>https://arxiv.org/abs/2307.12008
>For the first time in the world, we succeeded in synthesizing the room-temperature superconductor (Tc≥400 K, 127∘C) working at ambient pressure with a modified lead-apatite (LK-99) structure. The superconductivity of LK-99 is proved with the Critical temperature (Tc), Zero-resistivity, Critical current (Ic), Critical magnetic field (Hc), and the Meissner effect. The superconductivity of LK-99 originates from minute structural distortion by a slight volume shrinkage (0.48 %), not by external factors such as temperature and pressure. The shrinkage is caused by Cu2+ substitution of Pb2+(2) ions in the insulating network of Pb(2)-phosphate and it generates the stress. It concurrently transfers to Pb(1) of the cylindrical column resulting in distortion of the cylindrical column interface, which creates superconducting quantum wells (SQWs) in the interface. The heat capacity results indicated that the new model is suitable for explaining the superconductivity of LK-99. The unique structure of LK-99 that allows the minute distorted structure to be maintained in the interfaces is the most important factor that LK-99 maintains and exhibits superconductivity at room temperatures and ambient pressure.
A room-temperature, ambient-pressure superconductor. It's probably too good to be true, probably fraud. But the process to make it, detailed in the papers, is super easy, so it'll be verified soon. If true, this'll probably solve the robowaifu battery problem.
>>24256
>If true, this'll probably solve the robowaifu battery problem.
If true, and also 'super easy' to manufacture as you indicate, it will revolutionize many things in the world. Robowaifus may get much lighter also, due to much slimmer, much lower-mass actuators. OTOH, it's unlikely IMO that the GH will ever allow this out onto the open market anytime soon. There are far too many vested interests this would effectively collapse. Even if true, it's likely to be scuttled (at least in the public's eye): 'Oh sorry guise, we were off by a decimal point LOL!'. Watching with great interest. :^)
>>24256 >>24258 Figured we might want to keep this here on the board, just for archival safekeeping.
>>24258
This'll help the globohomo with their green energy goals too. Remember the plan to put up solar panels in the Sahara, which would generate enough electricity to meet the demands of the whole world? Well, that was kind of unviable because of the giant transmission losses from transporting electricity over the large distances from MENA to European countries. But this brings it much closer within grasp, although there's still work to be done on the solar panels themselves and on energy storage. I too am watching this with great interest. If we finally get free energy, training and running AI becomes that much cheaper and easier.

This might also help with the robowaifu recharging and solar-charging part. One of the downsides of making your robowaifu solar-chargeable is that the power density of solar is very low, especially over a humanoid body, which makes charging either impossible or really slow. With this, it's easier to charge way faster and store much more energy. Although, like I said before, there's still work to be done on solar tech before we reach that level.

>>24262
Yeah, I probably should have done that. The Twitter and Reddit API lockdown has me woondering and goncerned, so I'm downloading as many datasets as I can. This might not be the place to ask, but does anyone have a torrent of the Raiders of the Lost Kek /pol/ dataset? Direct download takes forever.
>>24265
>does anyone have a torrent of the Raiders of the Lost Kek /pol/ dataset?
Here's a GPT-J 6B model generated from it:
https://archive.org/download/gpt4chan_model_float16/gpt4chan_model_float16_archive.torrent
>===
-add comment
Edited last time by Chobitsu on 07/27/2023 (Thu) 02:24:58.
>potential paper-related: (>>24648)
