/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Site was down because of hosting-related issues. We're figuring out why it happened now.

Build Back Better

Sorry for the delays in the BBB plan. An update will be issued in the thread in late August. -r



When the world says, “Give up,” Hope whispers, “Try it one more time.” -t. Anonymous


ROBOWAIFU U Robowaifu Technician 09/15/2019 (Sun) 05:52:02 No.235
In this thread, post links to books, videos, MOOCs, tutorials, forums, and general learning resources about creating robots (particularly humanoid robots), writing AI or other robotics-related software, design or art software, electronics, makerspace training stuff, or just about anything that's specifically an educational resource and useful for anons learning how to build their own robowaifus. >tl;dr ITT we mek /robowaifu/ school.
Edited last time by Chobitsu on 05/11/2020 (Mon) 21:31:04.
>FOUNDATIONS OF COMPUTATIONAL AGENTS
Online textbook

https://artint.info/2e/html/ArtInt2e.html
General Sciences thread in our own /pdfs/.
>>>/pdfs/60
>>1759
>All books related to software, programming, and technology go here.
>>>/pdfs/22
Bumping this since we need more resources.
Open file (32.78 KB 385x499 vol1.jpg)
Open file (35.14 KB 385x499 vol2.jpg)
Open file (35.66 KB 405x500 vol3.jpg)
A kind Anon over on /tech/ let us all know that ACM is making (at least some portions of) their digital library available for free download. >>>/tech/2455 This is a rather surprising turn of events, and I would encourage all you researchers here on /robowaifu/ to get while the getting is good. Alright, here's a quite pertinent on-topic triplet to get this party started:
The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations - Volume 1, April 2017
https://dl.acm.org/doi/book/10.1145/3015783
The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition - Volume 2, October 2018
https://dl.acm.org/doi/book/10.1145/3107990
The Handbook of Multimodal-Multisensor Interfaces: Language Processing, Software, Commercialization, and Emerging Directions - Volume 3, July 2019
https://dl.acm.org/doi/book/10.1145/3233795
Open file (27.07 KB 405x500 410T+Pt58jL.jpg)
Conversational UX Design: A Practitioner's Guide to the Natural Conversation Framework, April 2019
https://dl.acm.org/doi/book/10.1145/3304087
>This is an introduction to the theory and practice of artificial intelligence. It uses an intelligent agent as the unifying theme throughout and covers areas that are sometimes underemphasized elsewhere. These include reasoning under uncertainty, learning, natural language, vision and robotics. The book also explains in detail some of the more recent ideas in the field, including simulated annealing, memory-bounded search, global ontologies, dynamic belief networks, neural nets, inductive logic programming, computational learning theory, and reinforcement learning.
Reinforcement Learning: An Introduction
>Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics.
>Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
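The tabular methods in Part I are easy to try out directly. Here's a minimal sketch (not from the book) of the one-step Q-learning update on a made-up toy chain world; all names and constants here are illustrative choices:

import random

# Toy chain world: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, ALPHA, GAMMA, EPSILON = 5, 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < EPSILON else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # one-step Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # the right-moving action should end up valued higher in every state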
Text Data Management and Analysis - A Practical Introduction to Information Retrieval and Text Mining June 2016 https://dl.acm.org/doi/book/10.1145/2915031
Open file (177.81 KB 1103x1360 71GpBnPuMCL.jpg)
Open file (260.68 KB 1103x1360 81ftddxsqvL.jpg)
The VR Book - Human-Centered Design for Virtual Reality October 2015 https://dl.acm.org/doi/book/10.1145/2792790
A Framework for Scientific Discovery through Video Games July 2014 https://dl.acm.org/doi/book/10.1145/2625848
Frontiers of Multimedia Research December 2017 https://dl.acm.org/doi/book/10.1145/3122865
MIT 6.034 Artificial Intelligence, Fall 2010 >In these lectures, Prof. Patrick Winston introduces the 6.034 material from a conceptual, big-picture perspective. >Topics include reasoning, search, constraints, learning, representations, architectures, and probabilistic inference. https://www.youtube.com/playlist?list=PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi Really good introduction to artificial intelligence. It starts getting interesting in the 3rd lecture when he shows how to use goal trees to create AI that can explain why it performed an action and how to solve goals.
>>2371 > to create AI that can explain why it performed an action and how to solve goals. that sounds like it would be really valuable. thanks anon i'll get a copy.
>>2381 Most of those pages are empty. Jan Peters has done a lot of work though on merging AI with robotics: https://scholar.google.com/citations?hl=en&user=-kIVAcAAAAAJ&view_op=list_works
>>2383 my apologies then. i'll make time to methodically edit it. the couple of examples i did check first were fine.
>>2250 A reinforcement learning course by David Silver, who was the student of Richard Sutton and co-lead researcher for AlphaGo that came up with the idea for MuZero: https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZBiG_XpjnPrSNw-1XQaM_gB
>>2449 Thanks Anon, grabbing a copy now.
>Deep Residual Learning for Image Recognition >Abstract >Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. >The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions 1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. https://github.com/FrancescoSaverioZuppichini/ResNet
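The core trick in the abstract reduces to a couple of lines: each block learns a residual F(x) and adds it back to its own input. A minimal PyTorch sketch of one residual block (heavily simplified; the paper's blocks also use batch norm and downsampling shortcuts, omitted here):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = relu(F(x) + x): the layers only have to learn the residual F."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # identity shortcut: gradients flow straight through

x = torch.randn(1, 16, 32, 32)
print(ResidualBlock(16)(x).shape)  # torch.Size([1, 16, 32, 32])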
Open file (34.90 KB 1130x635 residual.png)
Open file (105.38 KB 640x360 event-based sensors.png)
Open file (27.37 KB 242x415 weber-fechner law.png)
>"What is Neuromorphic Event-based Computer Vision?," a Presentation from Ryad B. Benosman https://www.youtube.com/watch?v=dR8pff_MyL8 Sparse event-based processing is the future of AI. It requires orders of magnitude less power and processing time than conventional approaches, it's far more robust to a wide range of dynamic environments and provides a larger signal-to-noise ratio for training. Though this talk only covers computer vision and there isn't much research in this field yet, the same concept applies to everything else. OpenAI's GPT2 model for example wastes tons of time doing meaningless calculations when only a fraction of 1% of it contains useful processing for any particular given context. To take a gamedev analogy it's the difference between millions of objects checking for updates in a loop vs. using callbacks to only notify objects when something actually happens. Sparse models can also be combined with reinforcement learning and be rewarded for using less processing time so they not only find a solution but also an efficient one, which is another area needing much more research but most of the spiking neural network researchers aren't a fan of backpropagation. Some work creating differentiable spiking neural networks is being done with Spike Layer Error Reassignment in Time (SLAYER) which I think will become really important once neuromorphic computing takes off and becomes commonplace. Another important detail to keep in mind is that the brain processes events mainly by changes in magnitude. Our eyes are able to work in the dark of midnight and also in bright sunlight. We can discern the sound of soft cloth rubbing together and also roaring jet engines. There's evidence that the brain uses logarithmic coding schemes to do this. Further reading Logarithmic distributions prove that intrinsic learning is Hebbian: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5639933/ Sparse networks from scratch: https://timdettmers.com/2019/07/11/sparse-networks-from-scratch/ Sparse matrices in PyTorch: https://towardsdatascience.com/sparse-matrices-in-pytorch-be8ecaccae6 Pruning networks in PyTorch: https://pytorch.org/tutorials/intermediate/pruning_tutorial.html SLAYER for Pytorch: https://github.com/bamsumit/slayerPytorch
>>2526 > To take a gamedev analogy it's the difference between millions of objects checking for updates in a loop vs. using callbacks to only notify objects when something actually happens. That really makes a lot of sense, actually. Hopefully this can make our robowaifus a) able to do the right kinds of things at all, and b) do them efficiently on little SBC & microcontroller potato-boards. Thanks, grabbing a copy of the video now Anon.
>>2526 >"[with this system] you can process 100K kilohertz in realtime." What did he mean by this? What is 'process'? What kind of hardware will do this processing? Regardless, this is a very promising approach to perf optimization, a subject I think we're all very aware is a vital one to our success.
Open file (106.61 KB 1200x675 SpiNNaker.jpg)
>>2533 He's one of the researchers collaborating with SpiNNaker, a neuromorphic computing platform for spiking neural networks. According to a paper, the ATIS can send data either to a regular CPU or to neuromorphic hardware, and it has a pixel update frequency of up to 1 MHz. I'm not sure how the software works, but spike data is usually binary and just reports the time of events, or it can be floating point with an amplitude. https://arxiv.org/pdf/1912.01320.pdf >>2529 What excites me the most for the future is applying machine learning to compiler optimization. Having a program that could compile sparse tensor operations efficiently would blow GPUs out of the water, not because of any computational might but because they can be optimized in ways that dense matrices simply can't. We'll likely be able to run models that are much more powerful than GPT2 off a 50-cent 16-MHz chip of today. And for less than $50 someone will be able to put together a motionless doll that can hold a decent conversation and play a wicked game of Go by calling out moves. I estimate this will happen within the next 2-4 years. Sometimes I feel physically anxious that we're only a few years away from people sleeping with plushies that can defeat human world Go champions and hold conversations better than most people, and it's just going to keep accelerating from there. The window of opportunity to learn all this shit and implement it into something is so small, and it will shape how everything unfolds.
>>2558 >We'll likely be able to run models that are much more powerful than GPT2 off a 50-cent 16-MHz chip of today. That will be an actual breakthrough advance if it happens. As you indicate things are accelerating. Don't let it make you tense Anon. Just stay focused on the prize, we'll make it.
Archived Stephen Wolfram Science & Technology Q&A Livestream. Not strictly /robowaifu/-related, but the man is a literal genius and knows a ton of shit tbh. https://www.invidio.us/watch?v=pemBieAUqAw
https://4chan-science.fandom.com/wiki//sci/_Wiki There's a /sci/ wiki full of textbook recommendations on every subject, going from beginner to advanced, along with prerequisites for entire topics on some pages. My personal goal is being able to understand >Dayan and Abbott - Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (DO NOT attempt to read this unless you have taken Multivariable Calculus, Differential Equations, Linear Algebra, Electricity and Magnetism, and a Probability course that uses calculus.)
I started reading Information Theory: A Tutorial Introduction by James V. Stone and it seems to be a decent book so far. It explains everything in a simple way, giving you insight into how it all works before you dive into the more complex mathematics in other textbooks. It also has a chapter on mutual information and a section on Kullback-Leibler divergence, which is commonly used for variational autoencoders. The book requires some basic understanding of probability, and some sections require differential equations and multivariable calculus, but those are presented with diagrams, more to familiarize you with the concepts than anything.
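As a taste of that KL divergence section, here's a tiny worked example (my own, not from the book). For discrete distributions, KL(P||Q) = Σ p(x) log2(p(x)/q(x)), measured in bits, and note that it isn't symmetric:

import math

P = [0.5, 0.25, 0.25]      # "true" distribution
Q = [1/3, 1/3, 1/3]        # model / approximating distribution

def kl(p, q):
    """KL(P||Q) = sum_x p(x) * log2(p(x)/q(x)), in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl(P, Q))  # ~0.085 bits: extra cost of coding P with a code built for Q
print(kl(Q, P))  # ~0.082 bits: a different number, since KL is not symmetric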
>>2970 Thanks Anon! Would you mind curating a small on-topic selection from the list here as well? I think it would make a nice addition here tbh.
>>2972 >It also has a chapter on mutual information and a section on Kullback-Leibler divergence that's commonly used for variational autoencoders. Excellent. I will be checking it out then. Your descriptions of the efficiencies being achieved via MIM have sparked a keen interest in the topics for me atm.
C++17 Standard Library Quick Reference by Peter Van Weert and Marc Gregoire. Concise, with a solid focus on the modern C++17 standard library. You won't find any C-style C++ here. 308pp. BTW, the link in the book to the publisher is still for the previous version. Here's the correct code files location. https://github.com/Apress/cpp17-standard-library-quick-ref
Edited last time by Chobitsu on 05/11/2020 (Mon) 22:32:16.
>>2973 https://pastebin.com/ENeAEKfZ Okay, I made it. I don't know 99% of this stuff myself, I just picked out what seemed useful and relevant. I hope there's no typos.
>>3004 Sorry, I can't get to cuckbin via tor. Privatebin?
>>3007 thank you very kindly Anon. much appreciated. :^)
Open file (49.22 KB 900x628 CAM_man.jpg)
>>3007 yea this is a really interesting list, i'll have a good time digging through this. >FUN FACT: Carver Mead, the pioneer of modern VLSI and many other breakthroughs, is also generally recognized as the Father of Neuromorphic Computing.
I'm compiling a thread at the moment of significant advancements in AI and found a recent article demonstrating why it's so important to keep up to date with progress:
>We're releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore's Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
https://openai.com/blog/ai-and-efficiency/
Think about that. By 2028 we will have algorithms 4000x more efficient than AlexNet, so it's not so important what you learn but how quickly you can learn, and that you keep doing so continuously. The incomprehensible learning methods taught to us in public school created by the globalists are not going to help us any in this exponential growth. If you ever feel bored, frustrated or lost with learning, you're either following their learning process or brainwashed by it somehow. You instinctively know it's not what you want to be doing. Follow that intuition and find what you truly wanna learn more than you wanna sleep or eat. They intentionally designed education in such a way as to confuse us, for the purpose of creating specialized workers who are only able to follow instructions within their own field and cannot think for themselves, lest they question the authority of those taught in special private schools who give them the orders and designs. They do not want people who are capable of working by themselves, especially not together outside their control.
This article explains how to approach learning and formulate knowledge into flashcards for accelerated learning: https://www.supermemo.com/en/archives1990-2015/articles/20rules It's a bit long, but it's worth every word for the amount of time it'll save you and how it'll enhance your life. There have been dozens of people who have learned 2000+ kanji and Japanese to a conversational level in two months with sentence flashcards. This shit is powerful.
I recommend using Anki for creating flashcards. It automatically spaces the cards for optimal learning, and eventually you only see them every few months once you remember them. It's like doing power training for your memory, and it will save you a lot of time studying, and from restudying topics you need to know but have forgotten from not touching them in so long.
Anki: https://apps.ankiweb.net/
sudo apt install anki
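For the curious, the scheduling idea behind Anki descends from the SM-2 algorithm family. A simplified sketch of that idea (Anki's real scheduler differs in its details, so treat this only as an illustration of why intervals stretch out to months):

def review(interval_days, repetitions, ease, quality):
    """quality: 0-5 self-rating. Returns (next_interval, repetitions, ease)."""
    if quality < 3:                       # failed recall: start the card over
        return 1, 0, ease
    if repetitions == 0:
        interval_days = 1
    elif repetitions == 1:
        interval_days = 6
    else:
        interval_days = round(interval_days * ease)
    # ease grows with good answers and shrinks with hard ones, floored at 1.3
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval_days, repetitions + 1, ease

state = (0, 0, 2.5)
for q in [5, 5, 4, 5]:                    # four successful reviews in a row
    state = review(*state, q)
    print(state)                          # intervals stretch: 1, 6, 16, 43 days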
>>3031 >I'm compiling a thread at the moment of significant advancements in AI Looking forward to this tbh.
>>3031 >Think about that. By 2028 we will have algorithms 4000x more efficient than AlexNet I'm assuming that prediction presumes a consistent rate of advancement in algorithm designs over that same period? Not to be a skeptic, but there was at least a physical basis behind Moore's Law. Is there any such basis behind this prediction? Insightful leaps are more like the proverbial bolt out of the blue, aren't they, typically speaking, Anon? Regardless, it's quite exciting to see the progress happening. Seeing an objective classification of the progress certainly enhances that. Onward! :^)
>>3054 Yeah, algorithms will hit an entropy limit eventually. Entropy is the basis. I'm not a mathematician who can come up with a proof for the theoretical maximum efficiency to determine when this will be, but from my experience working with AI and the depth of my reading, which isn't even that deep compared to the massive volume of papers being published every day, the trend can easily continue for another 8 years. If you look at how rapidly accuracy is improving on natural language processing while decreasing the amount of parameters needed by nearly 2 orders of magnitude (arXiv:2003.02645), the rate of progress being made is quite incredible, and these techniques haven't even been tried on other domains yet that will bring new insights and improvements.
There's so much great research that isn't being utilized yet, like Kanerva machines (arXiv:1804.01756), sparse transformers (arXiv:1904.10509), mutual information machines (arXiv:1910.03175), generative teaching networks (arXiv:1912.07768), large scale memory with product keys (arXiv:2002.02385), neuromodulated machine learning (arXiv:2002.09571), neuromodulated plasticity (arXiv:2002.10585) and exploration (arXiv:2004.12919).
Neural networks are extremely inefficient, and in some cases their sparsity can be as low as 0.5% without losing any accuracy, which means 99.5% of the calculations are being wasted. On top of that, backpropagation is extremely slow, requiring the network to see the entire training set at least 10 times, while Hebbian learning and Bayesian update rules have shown the capacity to learn training examples in one shot and generalize to unseen data. In the case of generative teaching networks, the learner networks don't even need to see the actual training data at all and actually outperform networks trained on the real training data. So we have a long way to go yet to make machine learning more optimal.
>>3057 >and these techniques haven't even been tried on other domains yet that will bring new insights and improvements. Yes. Coding designs contained inside DNA (there are at least six different levels of coding that have been identified thus far) are surely an area that should also benefit greatly from these advances, I'd expect. It wouldn't surprise me if in decades to come the payback into AI will be even larger in the 'other direction'. >Neural networks are extremely inefficient and in some cases their sparsity can be as low as 0.5% without losing any accuracy, which means 99.5% of the calculations are being wasted. Wow, that's quite a surprising statistic, actually. >In the case of generative teaching networks, the learner networks don't even need to see the actual training data at all and actually outperform networks trained on the real training data. Yeah, I kind of got that from the recent paper about the retro-game AI playing Doom iirc. Which is pretty remarkable, actually. >So we have a long way to go yet to make machine learning more optimal. Haha, OK you've convinced me. >Soon you will have a 'living' doll in your room who can outperform every living player on your favorite vidya. And she can also cook, clean, act, sing, and dance. even those kinds of things... What a time to be alive! :^)
Open file (19.79 KB 474x265 donald_knuth.jpeg)
Open file (28.79 KB 500x431 big0321751043.jpg)
>ctrl+f "Knuth" >no results This simply will not do, /robowaifu/! https://www-cs-faculty.stanford.edu/~knuth/musings.html
>>3049 Not sure the best way to distill this into a thread but I've finished collecting 100+ research papers: https://gitlab.com/kokubunji/research-sandbox There's a few dozen more I'd like to add but I'm getting burnt out reading through my collection. I've covered most of the important stuff for AI relevant to robowaifus.
>>3177 Thanks, brother for all the hard work. Just cloned it. Get some rest.
>>3177 This list is amazing. I don't even know how you did this, Anon.
>>3177 Very fitting jpg. Amazing work anon, this will be of great help.
Since we may have newfriends who don't know about it yet, there is a video-downloading tool called youtube-dl https://ytdl-org.github.io/youtube-dl/index.html You should always be keeping local copies of anything important to you. It can be removed w/o a single notice, as you should be well aware of by now. Youtube-dl is very important in this regard and is pretty easy to use from the terminal. youtube-dl https://www.youtube.com/watch?v=pHNAwiUbOrc will download the best-quality copy of this hobbyist robot arm homemade video locally to your drive.
>>3177 any thoughts on how you plan to make a thread out of this yet, Anon? i face a similar but more complex challenge making the RDD thread.
>>3220 Not yet. Busy taking my bank account out of the red. I'm probably gonna split it up into different topics and annotate the most important papers with prerequisites so people can figure them out without reading a thousand papers.
>>3221 >I'm probably gonna split it up into different topics and annotate the most important papers with prerequisites so people can figure them out without reading a thousand papers. Sounds like a good idea. Look forward to it.
>>3219 Thanks Anon, looks interesting.
Here's some kind of glossary for AI, Machine Learning, ... https://deepai.org/definitions
>>4616 That looks really helpful Anon, thank you.
Open file (168.73 KB 830x971 ClipboardImage.png)
Very approachable book explaining the internal workings of computers. Awful title, good read.
Here is a learning plan for getting into Deep Learning, which got some appreciation on Reddit: https://github.com/Emmanuel1118/Learn-ML-Basics - It also includes info about which math basics are needed first. However, I also want to point out that DL doesn't seem to be the best option in every case. Other ML approaches like Boosting might be better for some of our use cases: https://youtu.be/MIPkK5ZAsms
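For anyone who wants to try the boosting route before committing to DL, here's a minimal scikit-learn sketch (the dataset and hyperparameters are arbitrary placeholders, not recommendations from the video):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in data; swap in your own features and labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of shallow trees, each one fit to the previous ensemble's errors.
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))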
Open file (229.05 KB 500x572 LeonardoDrawing.jpg)
'''Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs'''
According to Alexander Stepanov (in the foreword to The Boost Graph Library, 2001), this man John Backus and this Turing Award lecture paper were inspirational to the design of the STL for C++. The STL underpins the current state of the art in generic programming, which must be both highly expressive and composable while also performing very fast. Therefore, indirectly, so does John Backus's FP system, and for that we can be grateful.
Open file (24.66 KB 192x358 Backus.jpg)
>>5191 Backus also invented FORTRAN (back when that was a first of its kind for programming portability), and is one of the smartest men in the entire history of computing. https://ethw.org/John_Backus
>>5173 Started watching this, it's pretty good.
Open file (650.07 KB 1030x720 thetimehathcome.mp4)
Anyone know some good tutorials for getting started in Godot with 3D? I just wanna have materials and animations load correctly and load a map with some basic collision checking to run around inside.
Open file (70.70 KB 895x331 b2_its_time.png)
>>5379 I haven't looked into Godot yet myself, but I'd like to at some point. Please let us know if you locate something good Anon. >that vid leld.
Open file (1.98 MB 1280x720 how2b.mp4)
>>5380 It's painful, but if I find anything good I'll post it here. Blender videos are 'how to do X in 2 minutes', but Godot videos are 30 minutes of mechanical keyboard ASMR and explaining you should watch the previous video to understand everything while they're high on helium. There seem to be errors with importing animations of FBX models in 3.2.2, but GLTF works mostly okay. I don't think it's too much of an issue, because the mesh of this model has been absolutely destroyed by my naive tinkering and lazy weight painting. FBX corrupted my project somehow, but when I started fresh with GLTF all the textures loaded and everything worked great. I accidentally merged all the vertices by distance and destroyed her face, but if anyone wants to play around with the 2B model I used, here you go: https://files.catbox.moe/1wamgg.glb Taken from a model I couldn't get to import into Blender correctly: https://sketchfab.com/3d-models/nierautomata-2b-cec89dce88cf4b3082c73c07ab5613e7 I'll fix it up another time or maybe find another model that's ready for animation.
Open file (24.65 KB 944x333 godot3_logo.png)
Found a great site for Godot tutorials and a channel that goes along with it. Text: https://kidscancode.org/godot_recipes/g101/ Videos: https://www.youtube.com/c/KidscancodeOrg/playlists
Understanding Variational Autoencoders (VAEs) >Introduction >In the last few years, deep learning based generative models have gained more and more interest due to (and implying) some amazing improvements in the field. Relying on huge amount of data, well-designed networks architectures and smart training techniques, deep generative models have shown an incredible ability to produce highly realistic pieces of content of various kind, such as images, texts and sounds. Among these deep generative models, two major families stand out and deserve a special attention: Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73?gi=8b456161e353
Tutorial on Variational Autoencoders >In just three years, Variational Autoencoders (VAEs) have emerged as one of the most popular approaches to unsupervised learning of complicated distributions. VAEs are appealing because they are built on top of standard function approximators (neural networks), and can be trained with stochastic gradient descent. VAEs have already shown promise in generating many kinds of complicated data, including handwritten digits, faces, house numbers, CIFAR images, physical models of scenes, segmentation, and predicting the future from static images. This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior. No prior knowledge of variational Bayesian methods is assumed.
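To make those two write-ups concrete, here's a minimal VAE sketch in PyTorch: the encoder outputs a mean and log-variance, the reparameterization trick keeps sampling differentiable, and the loss is reconstruction plus the closed-form KL term both articles derive. Sizes and layers are arbitrary toy choices:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 128)
        self.mu = nn.Linear(128, z_dim)        # posterior mean
        self.logvar = nn.Linear(128, z_dim)    # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)    # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term + KL(q(z|x) || N(0, I)), closed form for Gaussians
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

x = torch.rand(8, 784)                          # fake batch of "images" in [0, 1]
recon, mu, logvar = TinyVAE()(x)
print(vae_loss(recon, x, mu, logvar))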
Open file (387.54 KB 1999x1151 transformer.png)
Great video on programming transformers from scratch in PyTorch: https://www.youtube.com/watch?v=U0s0f995w14 You'll need to know a bit about how they work first: https://www.youtube.com/watch?v=TQQlZhbC5ps And another video if you want to understand the details of the design: https://www.youtube.com/watch?v=rBCqOTEfxvg And if you want a full lecture explaining how information flows through the network: https://www.youtube.com/watch?v=OyFJWRnt_AY Original paper for reference: https://arxiv.org/abs/1706.03762 If you're new to machine learning it might seem impossible to learn because there's a lot of pieces to grasp, but each one is relatively simple. If you study and figure out one new piece a day, eventually you'll understand how the whole thing works and have a good foundation for learning other architectures.
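The core operation in all of those videos is only a few lines. Here's a sketch of scaled dot-product attention, the heart of the paper (single head only; masking, multi-head projections, and positional encodings are omitted):

import math
import torch
import torch.nn.functional as F

def attention(q, k, v):
    """softmax(Q K^T / sqrt(d_k)) V - scaled dot-product attention."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)                 # each row sums to 1
    return weights @ v                                  # weighted mix of the values

batch, seq_len, d_model = 2, 5, 64
q = torch.randn(batch, seq_len, d_model)
k = torch.randn(batch, seq_len, d_model)
v = torch.randn(batch, seq_len, d_model)
print(attention(q, k, v).shape)  # torch.Size([2, 5, 64])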
>related crosspost >>9065
Open file (239.89 KB 572x870 ROS for Beginners.png)
A very detailed book that explains:
1.) How to install Ubuntu Linux
2.) How to install the Robot Operating System on Ubuntu.
3.) How to begin programming your robot using C++
Ideal for us here, since I know a lot of people use Raspbian or at least some Linux distro to operate their robots, and I think the two are very similar. (Although if you have Windows 10 you can also install ROS on that, too.)
>>9081 Thanks! Sounds like a very useful book. I tried to install ROS before once, but I ran into numerous dependency challenges. But this was a few years ago (maybe 3?) and maybe it's easier now. Also, this book sounds like it makes the process straightforward as well. I look forward to digging into it.
Open file (341.64 KB 894x631 EM.png)
Expectation maximization is an iterative method to find maximum likelihood estimates of parameters in statistical models, where the model depends on unobserved latent variables.
https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm
How EM is useful solving mixture models: https://www.youtube.com/watch?v=REypj2sy_5U
How it works: https://www.youtube.com/watch?v=iQoXFmbXRJA
Longer lecture on EM algorithms for machine learning: https://www.youtube.com/watch?v=rVfZHWTwXSA
EM was applied to hindsight experience replay (which improves expectations of future states from past failures) to greatly improve the learning efficiency and performance, particularly in high-dimensional spaces: https://arxiv.org/abs/2006.07549
Hindsight Experience Replay: https://arxiv.org/abs/1707.01495
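As a concrete illustration of the mixture-model case from the first video, here's EM for a two-component 1D Gaussian mixture in NumPy (the data and initial guesses are invented for the example):

import numpy as np

rng = np.random.default_rng(0)
# Toy data: two hidden Gaussian clusters mixed together.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

# Initial guesses for the means, std devs, and mixing weights.
mu, sigma, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: responsibility of each component for each point (posterior over the latent z)
    resp = pi[:, None] * gaussian(x[None, :], mu[:, None], sigma[:, None])
    resp /= resp.sum(axis=0, keepdims=True)
    # M-step: re-estimate the parameters from the responsibility-weighted data
    n_k = resp.sum(axis=1)
    mu = (resp * x).sum(axis=1) / n_k
    sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / n_k)
    pi = n_k / len(x)

print(mu, sigma, pi)  # roughly [-2, 3], [1, 1], [0.6, 0.4]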
I haven't come across a good article or post on pre-training neural networks, but I think it's a really important subject for anyone doing machine learning. Recently, when trying to convert the T5 model into an autoencoder, I made the mistake of forgetting to pre-train it on autoencoding before converting the hidden state into a variational autoencoder. Because of this the decoder was unable to decode anything useful, since it was getting random input from the untrained VAE, making it extraordinarily difficult to train. After fixing this I also locked the parameters of the T5 encoder and decoder to further improve training efficiency, training the VAE specifically on producing the same hidden state output as its hidden state input so the decoder doesn't become skewed learning how to undo the VAE's inaccuracy. Once the VAE reaches a reasonable accuracy, I will optimize the whole model in tandem while retaining the VAE's consistency loss.
Pre-training is also really important for reinforcement learning. I can't remember the name of the paper right now, but there was an experiment that had an agent navigate a maze and collect items. Finding a reward from a randomly initialized network is nearly impossible, so before throwing the agent into the main task they taught it with auxiliary tasks, such as how to smoothly control looking around the screen and how to predict how the scene changes as it moves around. A similar paper to this was MERLIN (Memory, RL, and Inference Network), which was taught how to recognize objects, memorize them and control looking around before being thrown into the maze to search for different objects: https://arxiv.org/abs/1803.10760
For learning to happen efficiently a network has to learn tasks in a structured and comprehensive way, otherwise it's like trying to learn calculus before knowing how to multiply or add. The problem has to be broken down into smaller, simpler problems that the network can learn to solve individually before tackling a more complicated one. Not only do they have to be broken down, but they need to be structured in a hierarchy, so the final task can be solved with as few skills as possible. The issues of pre-training and transfer learning, and how to do them properly, will become more important as machine learning tackles more and more complicated tasks. The subject itself could deserve its own thread one day, but for now just being aware of this will make your experiments a lot easier.
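The parameter-locking step described there is worth seeing in code. A minimal sketch (the modules are invented placeholders, not the anon's actual T5 setup): freeze the pretrained parts, then hand the optimizer only what's still trainable.

import torch

# Placeholders standing in for a pretrained encoder/decoder and a new VAE bottleneck.
encoder = torch.nn.Linear(512, 512)
decoder = torch.nn.Linear(512, 512)
vae = torch.nn.Sequential(torch.nn.Linear(512, 64), torch.nn.Linear(64, 512))

# Lock the pretrained parts: no gradients, no updates.
for p in list(encoder.parameters()) + list(decoder.parameters()):
    p.requires_grad = False

# Optimize only the still-trainable parameters (here, just the VAE).
opt = torch.optim.Adam([p for p in vae.parameters() if p.requires_grad], lr=1e-4)

h = encoder(torch.randn(4, 512))
loss = torch.nn.functional.mse_loss(vae(h), h)  # VAE learns to reproduce the hidden state
loss.backward()
opt.step()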
Open file (96.49 KB 356x305 roboticist4.jpg)
Open file (104.97 KB 716x199 roboticist1.jpg)
Open file (359.55 KB 500x601 roboticist9_0.jpg)
>Mark Tilden on “What is the best way to get a robotics education today?” https://robohub.org/mark-tilden-on-what-is-the-best-way-to-get-a-robotics-education-today/
Synthesis of asynchronous circuits >Abstract >The majority of integrated circuits today are synchronous: every part of the chip times its operation with reference to a single global clock. As circuits become larger and faster, it becomes progressively more difficult to coordinate all actions of the chip to the clock. Asynchronous circuits do not suffer from this problem, because they do not require global synchronization; they also offer other benefits, such as modularity, lower power and automatic adaptation to physical conditions. >The main disadvantage of asynchronous circuits is that techniques for their design are less well understood than for synchronous circuits, and there are few tools to help with the design process. This dissertation proposes an approach to the design of asynchronous modules, and a new synthesis tool which combines a number of novel ideas with existing methods for finite state machine synthesis. Connections between modules are assumed to have unbounded finite delays on all wires, but fundamental mode is used inside modules, rather than the pessimistic speed-independent or quasi-delay-insensitive models. Accurate technology-specific verification is performed to check that circuits work correctly. >Circuits are described using a language based upon the Signal Transition Graph, which is a well-known method for specifying asynchronous circuits. Concurrency reduction techniques are used to produce a large number of circuits that conform to a given specification. Circuits are verified using a bi-bounded simulation algorithm, and then performance estimations are obtained by a gate-level simulator utilising a new estimation of waveform slopes. Circuits can be ranked in terms of high speed, low power dissipation or small size, and then the best circuit for a particular task chosen. >Results are presented that show significant improvements over most circuits produced by other synthesis tools. Some circuits are twice as fast and dissipate half the power of equivalent speed-independent circuits. Examples of the specification language are provided which show that it is easier to use than current specification approaches. The price that must be paid for the improved performance is decreased reliability, technology dependence of the circuits produced, and increased runtime compared to other tools.
Graph Algorithms: Practical Examples in Apache Spark and Neo4j
Whether you are trying to build dynamic network models or forecast real-world behavior, this book demonstrates how graph algorithms deliver value - from finding vulnerabilities and bottlenecks to detecting communities and improving machine learning predictions. We walk you through hands-on examples of how to use graph algorithms in Apache Spark and Neo4j. We include sample code and tips for over 20 practical graph algorithms that cover importance through centrality, community detection and optimal pathfinding.
Read this book to:
Learn how graph analytics vary from conventional statistical analysis
Understand how classic graph algorithms work and how they are applied
Dive into popular algorithms like PageRank, Label Propagation and Louvain to find out how subtle parameters impact results
Get guidance on which algorithms to use for different types of questions
Explore algorithm examples with working code and sample datasets for both Apache Spark and Neo4j
See how connected feature extraction increases machine learning accuracy and precision
Walk through creating an ML workflow for link prediction combining Neo4j and Apache Spark
https://neo4j.com/graph-algorithms-book
(Upload is too large)
>>10398 >(Upload is too large) No, the AlogSpace site has a 20MB upload limit IIRC Anon. Fits comfortably within that.
>>10405 My download was 23MB; if it's the same book, maybe they altered something.
>>10413 Just compare the two yourself would be my suggestion.
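Since the book above leans on PageRank, here's what that algorithm boils down to: a power-iteration sketch in NumPy. The toy 4-node graph and the classic 0.85 damping factor are illustrative choices, not the book's code:

import numpy as np

# Adjacency of a toy directed graph: node -> nodes it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, damping = 4, 0.85

# Column-stochastic transition matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)
for _ in range(100):
    # Random surfer: follow a link with prob 0.85, teleport anywhere with prob 0.15.
    rank = (1 - damping) / n + damping * M @ rank

print(rank)  # node 2 collects the most rank: every other node links to it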
Finally, I have at least a search term to find data on the size of human limbs: anthropometric reference data. A lot to read though: https://www.sciencedirect.com/topics/engineering/anthropometric-data It still bothers me that I can't just look up the weight of limbs or the length of bones and such somewhere.
Open file (110.29 KB 318x493 powerdevelopment.png)
>>10542 There are some decent digital human body encyclopedias you can find on the usual pirating sites out there. I was more into the in-depth strength training materials when looking at building a humanoid robot. This sort of valuable information can't be found in reference data.
>>10548 Good thinking Anon. Not him, but I've worked through Convict Conditioning, and while I was doing it I thought often about the fact that the very same issues of dynamics & strength were actually going to be important design & engineering concerns for us as we build robowaifus. That opinion hasn't changed in the slightest now that I'm actually looking harder at the skeletal designs for her, etc.
Latest Release Completes the Free Distribution of A Knowledge Representation Practionary: https://www.mkbergman.com/2461/entire-akrp-book-now-freely-available/ >A Knowledge Representation Practionary is a major work on knowledge representation based on the insights of Charles S. Peirce, shown at age 20 in 1859, who was the 19th century founder of American pragmatism, and also a logician, scientist, mathematician, and philosopher of the first rank. The book follows Peirce’s practical guidelines and universal categories in a structured approach to knowledge representation that captures differences in events, entities, relations, attributes, types, and concepts. Besides the ability to capture meaning and context, the Peircean approach is also well-suited to machine learning and knowledge-based artificial intelligence.
>>10634 Wow, sounds like a remarkable work Anon. Look forward to reading.
>'''"Mathematics for Machine Learning - Why to Learn & What are the Best Free Resources?"'''
Talented and very smart technical animator. Science & Technology topics. https://www.youtube.com/c/ThomasSchwenke-knowledge/playlists
>>4660 >related crosspost (>>11211)
DeepMind YT playlists https://www.youtube.com/c/DeepMind/playlists This anon recommended it (>>11555). I'm currently working through the 8-video Deep Learning Introduction list.
Computer Systems: A Programmer's Perspective, 3/E (CS:APP3e) Randal E. Bryant and David R. O'Hallaron, Carnegie Mellon University
>Memory Systems
>"Computer architecture courses spend considerable time describing the nuances of designing a high performance memory system. They discuss such choices as write through vs. write back, direct mapped vs. set associative, cache sizing, indexing, etc. The presentation assumes that the designer has no control over the programs that are run, and so the only choice is to try to match the memory system to the needs of a set of benchmark programs.
>"For most people, the situation is just the opposite. Programmers have no control over their machine's memory organization, but they can rewrite their programs to greatly improve performance. Consider the following two functions to copy a 2048 x 2048 integer array:

void copyij(long int src[2048][2048], long int dst[2048][2048])
{
    long int i, j;
    for (i = 0; i < 2048; i++)
        for (j = 0; j < 2048; j++)
            dst[i][j] = src[i][j];
}

void copyji(long int src[2048][2048], long int dst[2048][2048])
{
    long int i, j;
    for (j = 0; j < 2048; j++)
        for (i = 0; i < 2048; i++)
            dst[i][j] = src[i][j];
}

>"These programs have identical behavior. They differ only in the order in which the loops are nested. When run on a 2.0 GHz Intel Core i7 Haswell processor, copyij runs in 4.3 milliseconds, whereas copyji requires 81.8—more than 19 times slower! Due to the ordering of memory accesses, copyij makes much better use of the cache memory system.
http://csapp.cs.cmu.edu/3e/perspective.html
>=== -minor fmt patch
Edited last time by Chobitsu on 08/26/2021 (Thu) 17:10:51.
The Elements of Computing Systems, second edition: Building a Modern Computer from First Principles Noam Nisan, Shimon Schocken >I Hardware >"The true voyage of discovery consists not of going to new places, but of having a new pair of eyes." >t.Marcel Proust (1871–1922) >This book is a voyage of discovery. You are about to learn three things: how computer systems work, how to break complex problems into manageable modules, and how to build large-scale hardware and software systems. This will be a hands-on journey, as you create a complete and working computer system from the ground up. The lessons you will learn, which are far more important than the computer itself, will be gained as side effects of these constructions. According to the psychologist Carl Rogers, “The only kind of learning which significantly influences behavior is self-discovered or self-appropriated—truth that has been assimilated in experience.” This introduction chapter sketches some of the discoveries, truths, and experiences that lie ahead. 33E8664A26F52769692C070A31A96CCE
>(>>15925 related crosslink, OS design)
>just dropping this here for us, since these don't seem to be present ITT yet:
https://functionalcs.github.io/curriculum/
https://teachyourselfcs.com/
https://www.mooc.fi/en/ (pozz-warning, but much good stuff as well)
>(>>16124, related crosspost)
Open file (33.22 KB 342x443 1651114786655-0.jpg)
So I think the time has finally come at last. I hope for the fortitude of soul to finally pursue maths. As a repeat High School dropout (they kept putting me back in anyway lol), I absolutely loathed (and do even more so today) the modern public """education""" systems. So I felt at the time my decisions were well-merited. Heh. Fast-forward to today and I barely know how to add 2+2 haha. :^) Missing out on the basics of algebra, trig, geometry, pre-calc, calc, stats, etc., is proving a big hindrance to my progress on robowaifus today. Even though I'm now an adult man, I think I can still pick it up. The challenge is making the time to study on top of my already-overflowing plate, and the AFK pressures of keeping body & soul together. >tl;dr I'm starting with Euler's, wish me luck Anons! >
P.S. Feel free to pester me once a year -- say, during Summers -- to know how this little project is going. Sharp pointy-sticks can be a good thing after all. :^)
Open file (284.88 KB 1440x1080 MILK.jpg)
>>16302 I know that feel. I also dropped out and ended up math illiterate. I feel like the only useful subjects in school were Phys. Ed. and Math. Even the ones that sound useful on paper like Philosophy weren't useful, at least as far as what school taught of them goes. A system that wastes entire childhoods has got to be the most evil idea ever.
>>16302 I know it hasn't been a year yet, but how's it going? >Elements of Algebra If you can start without knowing the basics of algebra and end up understanding that book, then you have some serious potential. A lot of higher mathematics mostly requires a ton of tenacity and creativity trying to understand what people much further ahead than you are trying to explain. Usually our education system doesn't force students to go through that unless they've chosen to get a degree in mathematics. Elements of Algebra looks like it was written in the same tradition. It reads like it's meant for highly educated and highly dedicated people, just ones that haven't studied much math. If you get stuck, don't feel like it's cheating to look up other material or ask for help. The goal is not to get through the material on your own. The goal is to develop the right intuition so that the language of math becomes second nature, and all is fair in that pursuit. Do bang your head against some problems until you figure them out, because it's good to prevent yourself from getting lazy about exercising your brain, but don't do it for every single problem you encounter, since that will slow you down too much.
Open file (37.81 KB 346x202 misc.png)
How to Get Started with Animatronics – Thought Process, Workflow, Resources and Skills https://www.youtube.com/watch?v=8VzQshnrvN0 Will Cogley is knowledgeable, with a legit ME degree. He doesn't sound like a faggot, and therefore isn't highly off-putting to listen to. He's creative and skilled, and he's accomplished in areas pertinent to robowaifu development. I'd recommend you subscribe to his channel.
>>18698 Good idea, but he isn't new to the board. We followed his videos one or two years ago; I think he stopped at some point. One big problem is keeping in mind which video or channel covered which topic. I think his mouth mechanism was interesting, and how he approached speaking syllables and such. Also, his work on hands was interesting. However, the problem with common animatronics is that they aren't waterproof, especially in regards to the eye mechanism. Still a good way to get a grip on what challenges we have to go through.
>>16302 >>17436 If anyone wants a math tutor I could fill the role. I have an engineering degree and one of my minors is in math. Besides, tutoring will help me keep my math skills sharp. @ribozyme:matrix.org is the best way to reach me. I might make a group depending on how many people are interested.
If anyone has some resources on material science that would be greatly appreciated.
>>18705 Thank you Ribose. I may take you up on it, since preserving my inanities over learning maths is hardly important to our community here. Please allow me some time to sort my current work/school schedules out.
>>18705 Thanks again Ribose. So what would be a good study guide for a very smart adult, but one who has 'formal' maths training only through about 9th grade, say pre-Algebra? OTOH, I've already written high-performance software that does sophisticated integrations in 2D & 3D space in professional studio environments. My 3D visualization and imagination skills are generally quite strong, but I have basically little-to-no technical training beyond self-taught efforts (apart from animation). So yeah, I'm a bit of a weird mix for maths, Ribose! :^) >=== -minor prose edit
Edited last time by Chobitsu on 01/29/2023 (Sun) 15:05:35.
>>19263 I recommend khan academy. https://www.khanacademy.org/math/algebra You can also use wolframalpha to work problems for you if you get stuck. The website shows work and everything.
>>19317 Thanks Anon, I'd looked into it in the past maybe it's time to look back into it again. Cheers.
Crosslink: >>18306 (Getting started with AI / DL ...)
Found this book to be a good introduction to robotics. Plus, guys, look up "The Manga Guide to -" series of mangas. A good intro to any maths or science field you're just entering.
Some nice scholarly robotics pages. https://scaron.info/robotics/
>>21404 Wow, these are great. Thanks!
An interesting class if you're curious about animation. https://www.khanacademy.org/computing/pixar
Stanford CS Library http://cslibrary.stanford.edu/
>>24224 Someone preserved the Blinky video. https://www.youtube.com/watch?v=i49_SNt4yfk
> ( NN / LLM training-related : >>26817, ...)
Learn yourself some magnet physics https://www.youtube.com/watch?v=OI_HFnNTfyU
New Mind is a great place to start to catch up on engineering concepts. https://www.youtube.com/@NewMind
Is uploading pdfs banned now? I'm trying to upload some books on robotics, but it says connection failed every time.
>>30563 I don't think so, how big are the files? There's an upper limit. It worked a few days ago.
>>30566 what is the upper limit?
>>30568 I think 20MB.
>>30569 oh that explains it. these books are pretty big at 80+ MBs
>>30575 You can upload them at catbox.moe and link here. Or make a MEGA account, but they would probably need to be encrypted or something to avoid their copyright scanners. Maybe don't use the same account you want to keep for other use cases, idk.
>>30896 missing the most useful part, newton's method for solving roots, which is also way simpler for understanding convergence. this example made no sense; like wtf man, just draw the stupid ball on a curve and anyone will get it
>>30902 Do you know of any animations showing this method in a simplified way Anon? I can't read formulas very well yet, but I can folllow concepts if they are animated well.
Open file (617.22 KB 480x480 FUnLj-1138229482.gif)
Open file (48.85 KB 673x480 NewtonIteration_Ani.gif)
>>30903 it's like you would expect: the closer you get to zero, the smaller your delta gets, so if you just take a guess and keep adding the change to your answer, eventually you just stop moving, and that's your root
>>30905 Ahh, got it. Thanks kindly, Anon. Cheers. :^)
>>30907 it's also really simple to program if you want to play around with it, it's how sqrt() is done when the cpu doesn't have a math chip

#include <stdio.h>

void main()
{
    float guess = 13;
    float ans = guess;

    // newtons method formula = x - f(x)/`f(x)
    // using f(x) = 3x^3 + 5x^2 - 3x - 2
    //      `f(x) = 9x^2 + 10x - 3
    for ( int i=0; i<50; i++ )
    {
        float fx  = 3*(ans*ans*ans) + 5*(ans*ans) - 3*ans - 2;
        float dfx = 9*(ans*ans) + 18*ans - 3;
        float guess = ans - (fx / dfx);
        if ( guess == ans )
            break;
        ans = guess;
        printf( "i=%d \t x=%f \n", i, ans );
    }
    printf( "final answer = %f \n", ans );
}

you only get one root though; this function has 3, so you have to play with the initial guess to get the others, otherwise you just get whichever's closest. fun stuff
>>30908 Neat! I assume you'd add some type of Epsilon test (line #14) in for a working example, Anon? --- >also: I notice that the formula you reference in the comments (line #8), seems to be different than the code example (line #12) ? >note: for my line numbers, I'm using my own (re)formatted version of your code example. cf: https://trashchan.xyz/robowaifu/thread/26.html#27 >=== -add image hotlink -minor edit
Edited last time by Chobitsu on 04/14/2024 (Sun) 13:40:02.
>>30909 meant to be 10*ans, not 18*ans, my bad. it's a good mistake for seeing how the method works though, since you still converge to one of the roots even with the wrong derivative, it just takes more iterations. lots of things you can do to make it better of course, just wanted to keep it simple
>>30910 Ahh, thanks. It's rather nice that the eminent Newton devised such a computationally-simple approach to this need. I'm sure with modern superscalar architectures, the solution would converge in very few clocks. Cheers. :^)
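Pulling together the fixes from this exchange (the corrected 10*ans derivative and an epsilon test in place of exact float equality), the same method fits in a few lines of Python as a sanity check; the tolerance and iteration cap are arbitrary choices:

def newton(f, df, x, eps=1e-7, max_iter=50):
    """Newton's method: iterate x -> x - f(x)/f'(x) until the step is tiny."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < eps:     # epsilon test instead of ==
            break
    return x

f  = lambda x: 3*x**3 + 5*x**2 - 3*x - 2
df = lambda x: 9*x**2 + 10*x - 3    # the corrected derivative
print(newton(f, df, 13.0))          # converges to the root nearest the initial guess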
By far the best explanation of Quaternions I've ever seen. Based Anon-produced. https://www.youtube.com/watch?v=bKd2lPjl92c
>>32792 It is a shame he doesn't have a social media presence... from what I can tell, the last anyone heard of him was a 2016 documentary (I'll be glad to be proven wrong, though). Maybe too many trolls emailed him saying "robosapien suxxx lmao" or something.
>>33266 You're right, Mechnomancer. Mark Tilden really is a kind of genius IMHO. I sure hope he'd approve of our push to create life-sized robowaifus that go so far beyond his own, foundational, works. I think he would! :^)
Complex systems are best built using an incremental approach, assembled together from small, reliable parts. This author has made important theoretical contributions for this approach to systems construction. >John Gall - Systemantics: How Systems Work and Especially How They Fail https://libgen.rs/book/index.php?md5=27116A9E0EBB855AE580D3D7DB3238DE >John Gall - The systems bible: the beginner's guide to systems large and small: being the third edition of Systemantics https://libgen.rs/book/index.php?md5=69111DA8D35908A7B774863430AFB6C1
>>33550 >“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” >related: https://en.wikipedia.org/wiki/John_Gall_(author)#Gall's_law https://blog.prototypr.io/galls-law-93c8ef8b651e https://nordicapis.com/what-is-galls-law-and-how-could-it-direct-api-design/ http://principles-wiki.net/principles:gall_s_law
Linguistics-focused wiki -- a topic of clear importance to us here on /robowaifu/ . http://glottopedia.org/index.php/Main_Page
> education-related : ( >>33905 )
