/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.


Open file (14.96 KB 280x280 wfu.jpg)
Beginners guide to AI, ML, DL. Beginner Anon 11/10/2020 (Tue) 07:12:47 No.6560
I already know we have a thread dedicated to books, videos, tutorials etc. But there are a lot of resources there, and as a beginner it is pretty confusing to find the correct route to learn ML/DL well enough to be able to contribute to the robowaifu project. That is why I thought we needed a thread like this.

Assuming that I only have basic programming in Python, dedication, and love for robowaifus, but no maths, no statistics, no physics, and no college education, how can I get advanced enough to create AI waifus? I need a complete pathway directing me to my aim. I've seen that some of you recommended books about reinforcement learning and some general books, but can I really learn enough by just reading them? AI is a huge field, so it's pretty easy to get lost.

What I did so far was buy a great non-English book about AI: philosophical discussions of it, general algorithms, problem-solving techniques, its history, limitations, game theory... But it's not a technical book. Because of that I also bought a few courses on Udemy, about either Machine Learning or Deep Learning. I am hoping to learn the basic algorithms through those, but because I don't have the maths it is sometimes hard to understand the concepts. For example, even when learning linear regression, it is easy to use a Python library, but I can't understand how it actually works because of my lack of calculus. Because of that I have a hard time understanding algorithms.

>>5818 >>6550 Can those anons please help me? Which resources should I use in order to be able to produce robowaifus? If possible, you could even create a list of books/courses I need to follow one by one to achieve that aim of mine. If not, I can send you the resources I have and you can help me put them in order. I also need some guidance about maths, as you can tell. Yesterday, after deciding and promising myself that I will give whatever it takes to build robowaifus, I bought 3 courses on linear algebra, calculus and stats, but I'm not really good at them. I am waiting for your answers, anons. Thanks a lot!
Just letting you know I saw your post, OP. I'm one of the two you linked. I'll have to think it through before giving you a reply, so please be patient; it may take me a few days. The other Anon may be able to reply sooner, as he's much more versed than I am. Good thread topic for /robowaifu/, OP, so thanks for taking the time to spell things out well in your OP. :^)
Open file (184.67 KB 1669x542 freematerials.png)
First of all, save your money for hardware and use http://libgen.rs/ for free books. I find the 'handbook' series from the publisher Springer has great work on a variety of topics this board discusses. You can also use https://torrentz2.is/ for online courses; this is what I found doing a search for 'udemy machine learning'. Be careful when pirating, as the sites linked to on that torrent search engine aren't always safe. I'd recommend against amassing a hoard of learning materials, as those courses or books won't do you any good if your goals are unrealistic. Start with the basics, move on to more difficult topics, and constantly reevaluate your goals.
>>6561 Okay anon, I'll be waiting for your response. Thank you for sparing your time to help me :)

>>6566 Oh, thanks for letting me know about that torrent site. Looks like there is a lot of great stuff there.
>I'd recommend against amassing a hoard of learning materials, as those courses or books won't do you any good if your goals are unrealistic
Yeah, I figured. That was the reason I bought some basic Udemy courses in the first place. I'm thinking about starting with ML without heavy maths and learning basic concepts such as: Data Preprocessing, Numpy, Pandas, Inferential Stats, Data Visualisation, Linear Regression, Polynomial Regression, Gradient Descent, KNN, Model Performance Metrics, Naive Bayes, Logistic Regression, SVM, SVR, Decision Trees, Decision Tree Regression, Random Forest Regression, Kernel SVM, K-means Clustering, Hierarchical Clustering, Apriori, Eclat, UCB, Thompson Sampling, Ensembling, Unsupervised Learning, NLP, PCA, LDA, XGBoost.

I bought 2 courses which cover the topics above. Both of them claim to have only a prerequisite of "some basic high school maths and programming". I guess those basic high school math topics are simply linear algebra, calculus and maybe a little bit of stats. The courses only give you an idea of how the algorithms work, skipping the heavy math parts, and show you how to implement them using Python libraries.

I thought about finishing the maths and stats courses first and then getting into machine learning, but I know that won't happen: I will get bored and never finish those courses if I only focus on them. While studying those 2 ML courses I'll try to learn linear algebra, limits, derivatives, integrals, some trig and stats. After finishing them I will move on to a more theory-based course which teaches you how to implement those algorithms with the heavy math and without libraries. Once I complete that I will move on to deep learning, again with a programming-based course first and then a theory-based one. Then, by implementing stuff, reading papers and doing Kaggle competitions, I will get used to creating projects. Maybe after that I can start to read some of the books you guys recommended. Is there anything wrong with that approach? Pls give me some feedback.
Open file (114.23 KB 1011x720 __.jpg)
>>6560
>Assuming that I only have basic programming in Python, dedication, and love for robowaifus, but no maths, no statistics, no physics, and no college education, how can I get advanced enough to create AI waifus?
Creating a simple chatbot AI with a seq2seq network in Python is possible without any mathematical knowledge. However, it won't be very good or keep your attention long, and you won't be able to improve on it until you understand how it works. At the minimum you should study matrices, calculus, statistics, linear algebra and programming. Once you have a basic understanding of those fundamental topics, you can jump into whatever area of AI interests you and focus on studying topics that are relevant to you. I'll dump some good textbooks on these later in >>235
For a deeper understanding you should also study trigonometry, discrete mathematics, differential equations, Fourier transforms, and entropy and information theory. In discrete mathematics you'll wanna pay special attention to graphs and search algorithms.
I recommend 3Blue1Brown's channel for learning mathematics: https://www.youtube.com/c/3blue1brown/playlists
He also did a really good series explaining how neural networks work: https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
DeepMind has a comprehensive deep learning course here: https://www.youtube.com/watch?v=iOh7QUZGyiU&list=PLqYmG7hTraZCkftCvihsG2eCTH2OyGScc
And a good reinforcement learning course: https://www.youtube.com/watch?v=ISk80iLhdfU&list=PLqYmG7hTraZBKeNJ-JE_eyJHZ7XgBoAyb
Henry AI Labs explains various AI topics: https://www.youtube.com/channel/UCHB9VepY6kYvZjj0Bgxnpbw
Yannic Kilcher covers recent advances in AI: https://www.youtube.com/channel/UCZHmQk67mSJgfCCTn7xBfew
Machine Learning Street Talk is a good AI podcast covering the interesting stuff going on in AI: https://www.youtube.com/channel/UCMLtBahI5DMrt0NPvDSoIRQ
Schmidhuber's website covers a lot of interesting topics in AI research: http://people.idsia.ch/~juergen/
>Which resources should I use in order to be able to produce robowaifus?
At the minimum you'll need to know 3D modelling, 3D printing, C or C++ programming, electricity, microcontrollers, physics, classical mechanics, servos and power systems. Someone else will have to chime in on this since my specialty is AI.
>>6567 All those courses are overkill for a start. You should focus on:
>Data Preprocessing, Numpy, Pandas, Inferential Stats, Data Visualisation, Linear Regression, Polynomial Regression, Gradient Descent, KNN, Model Performance Metrics, Naive Bayes, Logistic Regression, SVM, SVR, Decision Trees, Decision Tree Regression, Random Forest Regression, Kernel SVM, K-means Clustering, Hierarchical Clustering, Apriori, Eclat, UCB, Thompson Sampling, Ensembling, Unsupervised Learning, NLP, PCA, LDA, XGBoost
The other stuff is good to know, but more advanced than you really need for now unless you want to study those areas. You should soldier through basic mathematics and statistics first, just enough to be familiar with what you have to study deeper to take on an AI project you're interested in. If you don't understand the mathematical notation being used in machine learning courses (particularly around summation, products, matrices and probability), you'll be completely lost on what to do or learn. Once you dive into machine learning you'll see how important mathematics and statistics are, and you won't be bored studying them when you can see how much you need them.
It's a good idea not to learn anything unless you understand why you're learning it, otherwise you may spend years learning stuff you'll never use.
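To make the linear-regression point above concrete, here is a minimal sketch (not taken from any of the courses mentioned, purely an illustration with made-up numbers) of roughly what a library does under the hood when it fits a line. The two gradient lines are exactly the calculus being recommended: partial derivatives of the mean-squared-error loss.

```python
import numpy as np

# Toy data: y is roughly 3*x + 2 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)

# Model: y_hat = w*x + b.  Loss: mean squared error.
w, b = 0.0, 0.0
lr = 0.01  # learning rate

for step in range(5000):
    y_hat = w * x + b
    error = y_hat - y
    # These two lines are the calculus: derivatives of the
    # mean-squared-error loss with respect to w and b.
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}  (true values were 3 and 2)")
```

A library such as scikit-learn solves this particular problem in closed form instead of iterating, but seeing the update rule written out is what the calculus and linear algebra courses buy you.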
>>6570 > It's a good idea not to learn anything unless you understand why you're learning it, otherwise you may spend years learning stuff you'll never use. This. Not OP, but thanks for the great material in your post Anon. Very helpful.
Open file (26.51 KB 400x400 43044190.jpg)
>>6570 Well, after reading your post I can clearly understand that I'll need maths and physics in the end.
>All those courses are overkill for a start.
>You should soldier through basic mathematics and statistics first
I think I will study both of those 2 courses to the end even if it is overkill, because rather than only studying maths I would prefer dividing my time fifty-fifty and studying both programming/ML and maths. I don't know which anon you are, but one of you said that he isn't good at maths or physics and improved himself by playing around with programming a lot. I am kinda in the same boat, but it is pretty clear to me that I will need to learn maths and physics anyway. I believe that if I divide my time in two and study maths and programming/ML at the same time, then by the end of those 2 ML courses I will have finished trigonometry, linear algebra, calculus 1-2, statistics and probability. After that I can study more advanced maths, such as
>discrete mathematics, differential equations, Fourier transforms, and entropy and information theory
as well as deep learning. This way I will have time to improve both my math and programming skills.

Even though my motivation is to create a robowaifu, I want to improve myself as much as I can in the field of AI, and I will try to improve my programming too. Starting from today I will begin with one of those ML courses and a trig course. I don't think I will be able to get into the robotics side for at least a year, and I will also save physics for later. Maybe once I get into college for CS I can work with other engineers and learn from them about robotics. Thank you a lot for helping; I wonder what the other anon is going to recommend. But I think the path I should follow is clear now: I will take one left step and then one right step, so that I won't fall over from lacking skills in maths or programming.
>>6573 OK, Anon, I've made my birthday present to /robowaifu/ today. Thank you as your question was the spur for me to finally dig in and get this ball rolling for us. Hopefully it will be an ongoing help to all of us! Cheers. >>7143
>>6573 Godspeed anon! Remember to have a break if you are getting a headache! Not kind on the old grey matter, some of that stuff.
>>7214 Thank you anon! Sorry for the late reply, I've been busy studying :^)
>Remember to have a break if you are getting a headache!
I will, thank you. But I wish I had more brain processing power and ability to focus. I put in ~4 hours daily; I can't focus more than that. I am trying to get it up to 6 hours for now. Any suggestions on that topic?
>>7374 Not that anon, but I'd say just relax and don't unduly pressure yourself. Work hard, but also take time to relax and remember the fun of why you're doing this. The old adage "Rome wasn't built in a day" has stood the test of time for a reason. Keep your nutrition levels up and take regular short breaks at the very least.
>>7374 Don't study too hard; that won't make it better. Resting is part of the process. I try to follow some of the advice in A Mind For Numbers: How to Excel at Math and Science, which you can also listen to as an mp3 (magnet link in the text file; the book and mp3 are on limetorrents.info or in the linked text file, I can't upload any of it here). You might listen to the book while going for a walk, though that has the downside of not being able to take notes. https://files.catbox.moe/2bvsmk.epub https://files.catbox.moe/tcm2is.txt
[5 image attachments]
Stolen from Twitter for us here. Mainly overviews. Cheat sheets will follow.
[5 image attachments]
More...
Think I got the last few from here: https://www.datasciencecentral.com/page/search?q=cheat+sheet The last one is an example of how to make a list of resources related to a topic and then learn it step by step, rather than trying to understand it all on the first try.
Thanks for all the effort Anon, much appreciated.
[5 image attachments]
>>8247 Thanks!
[30 image attachments (cheat sheets)]
Okay, thanks. That's the better way to do it.
>>8256 Y/w. There's still the pdfs to get if you're interested.
Here's a good example of a little lecture on how to understand an AI problem better: https://nitter.dark.fail/svpino/status/1357302018428256258#m - I saved it as a PDF; no idea if that's useful, but that Nitter instance might not be there forever.
>>8447 Neat, thanks Anon.
[4 image attachments]
Posted on Twitter by data scientists.
>A curated list of awesome, free machine learning and artificial intelligence courses with video lectures. All courses are available as high-quality video lectures by some of the best AI researchers and teachers on this planet. >Besides the video lectures, I linked course websites with lecture notes, additional readings and assignments. https://github.com/luspr/awesome-ml-courses
>>8495 Thanks! That's a lot of content. It would be great if we could find someplace off of YouTube to keep this stuff safe.
Has anyone here read the book "Artificial Intelligence: A Modern Approach"? Any thoughts on it?
>>8609 Not yet, but noted, in case I haven't already. Then again, I've probably downloaded and noted the names of enough reading material to take me 10 years or more to read and work through. I'll start on anything beyond simple chatbots, image and voice recognition once I've mastered all these basic things and the stuff I'm already working on, and once I have a functional female body which needs to get smarter. Which will be the case in 5-10 years, I hope. Related: the uploaded file and https://github.com/aimacode
I'll probably go with that, as soon as I have the time: >Dive into Deep Learning: An interactive deep learning book with code, math, and discussions, based on the NumPy interface. >Implemented with NumPy/MXNet, PyTorch, and TensorFlow >Adopted at 175 universities from 40 countries >Each section is an executable Jupyter notebook >You can modify the code and tune hyperparameters to get instant feedback to accumulate practical experiences in deep learning. http://d2l.ai/
[10 image attachments]
>>9266 > #1 That actually helps me understand that term a lot better now. I still don't understand the implications of one way or other, but at least I kind of have an idea what it means now. > #2 I know exactly what that is -- personally -- but I can also add that eventually you'll holistically put everything together in your head if you just keep moving forward. Don't quit and you'll succeed. > #3 Somehow, I can understand an idea when it's represented graphically like that, rather than a bunch of 'indecipherable' math symbols of a formula. I don't think I'm alone in this either. > #4 LOL. Surely this is a simple thing to pick up over the weekend. R-right, Anon? :^)
>>9274 The last pic is more about having an overview and being aware that there's more than deep learning. In some cases another approach might work better, or be good enough and easier to achieve. It's also useful to use these terms to look for videos to get an idea about something. Maybe one of them fits a problem that someone wants to solve.
>>9278 Makes sense, thanks Anon. It's a big field.
Open file (220.57 KB 1199x540 IMG_20210331_191630.jpg)
Open file (52.91 KB 1404x794 IMG_20210331_191334.jpg)
> completely removing the background of a picture (robust PCA) > PCA's main goal: dimensionality reduction. >You can take a bunch of features that describe an object and, using PCA, come up with the list of those that matter the most. >You can then throw away the rest without losing the essence of your object. https://nitter.dark.fail/svpino/status/1377255703933501445
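As a rough illustration of the "keep only the features that matter" point from that thread, here is a small sketch using scikit-learn's PCA. The data is random and purely illustrative; it assumes numpy and scikit-learn are installed.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fake dataset: 200 samples with 50 features, but only ~3 underlying
# directions actually carry signal; the rest is small noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 50))

# Keep just enough components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)          # e.g. (200, 50) -> (200, 3)
print(pca.explained_variance_ratio_.round(3))  # how much each kept component matters
```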
>>9375 That's pretty powerful. I imagine glowniggers are using this idea extensively for surveillance isolation. Not only would this work with a 'pre-prepared' empty background plate for extraction, but a separate system could conceivably create (and keep updated under varying lighting conditions, say) an 'empty' plate from a crowded scene simply by continuously sweeping the scene and finding areas that don't change much frame-to-frame. These blank sections can then all be stitched together to create the base plate to use during the main extraction process. Make sense? Ofc, a robowaifu could use this exact same technique for good instead: staying alert to important changes in a scene, alerting her master to anything she sees that might be an impending threat, or even taking action herself to intervene. Simplification is the key to both understanding and to efficiency in visual processing and other areas.
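Not the robust-PCA method itself, but a crude numpy sketch of the "sweep the scene and keep what doesn't change" idea described above: take the per-pixel median over a stack of frames, and anything that moves gets averaged away, leaving an approximate empty plate. The frame loading is assumed to happen elsewhere (OpenCV or similar); the threshold value is a made-up placeholder.

```python
import numpy as np

def estimate_background(frames):
    """frames: array of shape (num_frames, height, width[, channels]).
    The per-pixel median over time suppresses anything that only appears
    in a few frames (people, cars), keeping the static scene."""
    return np.median(frames, axis=0)

def changed_regions(frame, background, threshold=30):
    """Boolean mask of pixels that differ noticeably from the plate."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    if diff.ndim == 3:            # colour frames: collapse the channel axis
        diff = diff.max(axis=-1)
    return diff > threshold

# Hypothetical usage: `video_frames` would come from a camera or video file.
# background = estimate_background(video_frames)
# mask = changed_regions(video_frames[0], background)
```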
Related: GPT-2 for beginners >>9371
Illustrated guide to transformers, a step by step introduction: https://youtu.be/4Bdc55j80l8
[5 image attachments]
This could also fit in the math thread, but it's notation used in ML.
>>6560 Here's a holistic beginner's understanding of all that you see: the framework people use for machine learning is to build a mathematical function that can make a prediction. That function can be created in many ways; at the end of the day it's supposed to provide you with a value of some kind that you act on, or output. More recently that function is created using deep learning: a parametric system that learns to capture data and carve out regions between high-dimensional datapoints, partitioning that data to do classification or to make predictions through regression. Obviously there are many other ways to do this; these are just the high-level constructions. I would suggest you buy Grokking Deep Learning by Andrew Trask; he gives you a really good, deep insight into DL. In practice, however, a lot of the algorithms we use supplement DL techniques: we generally use some older ML algorithms and do feature engineering through PCA or various other techniques.
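To put the "parametric function that carves up regions between datapoints" idea into code, here is a tiny two-layer network in plain numpy, in the spirit of Grokking Deep Learning but not taken from the book. It learns XOR, a toy problem a single straight line cannot separate; all sizes and the learning rate are arbitrary choices for the example.

```python
import numpy as np

# XOR: the classic example a single linear model cannot separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # input  -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: the "parametric function that makes a prediction".
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule applied layer by layer (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should end up close to [0, 1, 1, 0]
```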
Open file (60.57 KB 700x355 p_value.png)
Open file (6.89 KB 446x291 variance.png)
10 Must-Know Statistical Concepts for Data Scientists: https://www.kdnuggets.com/2021/04/10-statistical-concepts-data-scientists.html
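Since the two pics attached there are about p-values and variance, here is a tiny sketch tying both terms to code. The numbers are made up (hypothetical servo response times), and it assumes numpy and scipy are available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two made-up samples, e.g. servo response times (ms) before and after a tweak.
before = rng.normal(loc=50, scale=5, size=40)
after = rng.normal(loc=47, scale=5, size=40)

# Variance: average squared distance from the mean (ddof=1 -> sample variance).
print("variance before:", np.var(before, ddof=1))

# p-value: probability of seeing a difference at least this large
# if the tweak actually changed nothing (the null hypothesis).
t_stat, p_value = stats.ttest_ind(before, after)
print("t =", round(t_stat, 2), " p =", round(p_value, 4))
```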
>related crosspost (>>10613, >>10614)
>>10607 Thanks very much Anon! Bookmarked.
Open file (120.30 KB metan.jpg)
Hey /robowaifu/, I am back after such a long break! Damn... I created this thread back in 2020... time does not really wait for us, I guess. Well, not much progress has been made. When life makes you face much tougher situations you can't think of anything else, lol. But I'm back, and I am studying at a university too!

I started by studying some maths: algebra, trig, precalculus, stats and calculus. I remember most of the topics from my high school years, but I had not looked at them even once for the past 2 years or so. I also found the Springer book Deep Generative Modelling: https://link.springer.com/book/10.1007/978-3-030-93158-2 The book might look frustrating at first, but really you can simplify the wishy-washy math stuff into basic logic and understand what is going on. The book is also supported by code segments, so it is a lot easier to understand. And there is also this book for those who might be interested: https://artint.info/ It follows a similar way of teaching and provides code for everything, but in my case it is harder to follow than the Springer book. So yeah, pick whichever you would like.

Seeing the progress you people have been making, I am kinda disappointed not to have been able to contribute... I will be around from now on, hopefully; in my spare time I will try to contribute to the board... It feels nice to be back.
>>16409 Welcome back! It's never too late to contribute, Beginner-kun. Looking forward to your contributions. Here's a neat, inexpensive way to get into low-cost AI: the ESP32-S3 has AI acceleration. https://www.cnx-software.com/2021/08/29/esp32-s3-ai-capabilities-esp-dl-library/ Here's a GitHub repo that could help those interested: https://github.com/espressif/esp-dl
>>16409 Welcome back Anon!
>Seeing the progress you people have been making, I am kinda disappointed not to have been able to contribute...
I felt the same way when I first found the board. It's better to just jump in whenever you feel ready.
>>16409 Welcome back, OP! So, if you recall, OP, your posts not only started a nice, important thread, but also led indirectly to another important thread, namely our Library Thread (>>7210). So thanks! :^)
>now at Uni
I'll be praying for you Anon, haha. I too am currently undertaking a regime of maths (>>16302), so maybe we can share notes. Thanks for the books; I've looked briefly so far into the 2nd one.
>Seeing the progress you people have been making, I am kinda disappointed not to have been able to contribute...
>I will be around from now on, hopefully; in my spare time I will try to contribute to the board... It feels nice to be back.
Glad to have you back!
>>16410 Heh, I can clean this up for you if you'd like Anon. :^)
>>16420 Neat, thanks for the info Anon.
Hey Chobitsu! I am glad to see you again. Yeah, please go ahead and fix the post.
>So thanks! :^)
If it wasn't for you and the great contributors of the board, I would not have had a place to post that, so I thank you! And the library thread was really necessary. I wish the board had a better search function as well; I was trying to find some specific posts and it took me a long while to remember which threads they were in.
>so maybe we can share notes
My university provided me with a platform full of questions. Basically, they have around 250 types of questions for precalculus, for instance. The system is automated: it generates an unlimited number of questions of each specific type and also explains the solution for every question. It changes the variables randomly and gives you space to solve as many as you want. I believe the platform requires money for independent use. Besides that, I just study from Khan Academy, but the book you mentioned caught my interest; I will probably look into it. If I ever find any good books on that matter, I will make sure to share them with you.
>>6560
>But there are a lot of resources there, and as a beginner it is pretty confusing to find the correct route to learn ML/DL well enough to be able to contribute to the robowaifu project.
I can give relatively uncommon advice here: DL is more about intuition + engineering than theory anyway. Just hack things until they work, and feel good about it. Understanding will come later. Install PyTorch and play with the tensor API; go through their basic tutorial https://pytorch.org/tutorials/beginner/nn_tutorial.html while hacking on it and trying to understand as much as possible. Develop a hoarder mentality: filter r/MachineLearning and github.com/trending/python for cool repos and models, try to clone & run them, fix them and build around them. This should be a self-reinforcing activity; you should not have other dopaminergic timesinks, because then you will take the path of least resistance.
Read cool people's repos to get a feeling for the trade:
https://github.com/karpathy
https://github.com/lucidrains
Read blogs:
http://karpathy.github.io/
https://lilianweng.github.io/posts/2018-06-24-attention/
https://evjang.com/2021/10/23/generalization.html
https://www.gwern.net/Scaling-hypothesis
https://twitter.com/ak92501
When you feel more confident, you can start delving into papers on your own; use https://arxiv-sanity-lite.com/ https://www.semanticscholar.org/ https://inciteful.xyz/ to travel across the citation graph. git gud.
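In the hack-first spirit of >>16464, here is roughly the smallest complete PyTorch training loop to start poking at; it is along the lines of what the linked nn_tutorial builds up to, but the toy task and all the numbers here are mine, not the tutorial's.

```python
import torch
from torch import nn

# Toy task: learn y = 3x + 2 from noisy samples.
x = torch.linspace(0, 10, 200).unsqueeze(1)
y = 3 * x + 2 + torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # autograd fills in the gradients
    opt.step()        # the optimizer nudges the weights

print("final loss:", loss.item())
```

Swap the toy data for a real dataset and the Sequential for something bigger, and you have the skeleton most of the repos linked above are built around.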
>>16460 >If it wasn't for you and the great contributors of the board, I would not have a place to post that so I thank you! Glad to be of assistance Anon. >And the library thread was really necessary, I wish that the board had a better search function as well. Agreed. >I was trying to find some specific posts and it took me a long while to remember which threads they were on. You know Beginner-kun, if you can build programs from source, then you might look into Waifusearch. We put it together to deal with this conundrum. Doesn't do anything complex yet (Boolean OR), but it's fairly quick at finding related posts for a simple term. For example to lookup 'the Pile', pic related is the result for /robowaifu/ : > >-Latest version of Waifusearch v0.2a >(>>8678) >My University provided me with a platform full of questions >It changes the variables randomly and gives you space to solve as much as you want. Sounds handy! >Besides that, I just study from Khan Academy, but the book you mentioned caught my interest. I will probably look into it. If I ever find any good books on that matter, I will make sure to share them with you. Thanks! Looking forward to your progress Anon. I chose Euler b/c Anon recommended him, so I scanned information about the man. He was a real genius, and the story behind that book's creation is incredible tbh. Seems like a good fit for me personally to start out with, time will tell heh. :^)
>>16464 Thanks for all the great links & advice Anon, appreciated.
>Chobitsu
Thanks for the great board, and your patience... I had an idea that it may be fruitful to download some of these training data sets. I have seen some, like a full Reddit, 4chan, stuff like that. The reason is that the other day I saw that one of these big sets, I think Reddit, is saying they need to be paid for it. So I think you can still get this stuff, but maybe not forever. While looking for links I found this: https://academictorrents.com/browse.php So this is a huge mass of torrents on academic subjects, and lo and behold one of the first ones was Udemy - Artificial Intelligence A-Z Learn How To Build An AI https://academictorrents.com/details/750ab85e01a0d443bd0d19e49b250f8896e1e791 Now I don't know enough to judge, but this might be a good way to get a toehold on AI stuff. I thought I would mention it so the link "might" be helpful to someone. I'm downloading it.
Here's Reddit: Reddit comments/submissions 2005-06 to 2022-12, 1.99TB. I need a new hard drive for this. https://academictorrents.com/details/7c0645c94321311bb05bd879ddee4d0eba08aaee
I wish there was a set of 1950s home economics course books. That would be ideal for a waifu. They used to train girls to treat Men well. Have you seen any of the home-econ videos from the fifties? Girls just laugh and laugh at these, but... it would be so nice. So pleasant to be treated like they show. What the hell, I decided to add a few links to these. I don't know how you could train the waifu on these videos, but if you could it would be great.
Sexist PSAs From The '40s and '50s Show How Far Women Have Come _ NowThis https://www.youtube.com/watch?v=_-OAIAhBiHc
"Tips for a Happy Marriage" old 1950s film https://www.youtube.com/watch?v=32hRC1li-T0
How They Raised Girls To Be Women In The 1950s https://www.youtube.com/watch?v=HW5FMRJesC8
Good Wife's Guide Training Video (a spoof, but lots of real videos were just like this and were used to train young girls) https://www.youtube.com/watch?v=DB5TOsS5EyI
The next one of course has been wiped: https://web.archive.org/web/20170430093822/http://goodwifesguide.co.uk:80/tag/wife-training/
If we could save a bunch of this stuff and use it for training it might be invaluable.
>>23129 Nprb Anon. We're all in this together. :^) >>23131 >I wish there was a set of 1950's home economics course books. That would be idea for a waifu. Great idea Anon! I sure hope you can locate mountains of this stuff. Let's see what we can manage with it if you do. Thanks for all the helpful links Grommet. Cheers.
I don't know anything about this, but it was recommended by a programmer with 40 years of experience. I read his forum and he talks about the various stuff he's doing; he writes all sorts of software for various companies. He is also using AI to write software and port older code of his, and it's working well for him. He suggested the libraries they have are useful.
tinyML Foundation - The community for ultra-low power machine learning at the edge https://www.tinyml.org/
I have no idea of their capabilities, but it sure sounds like something that could be helpful. They are doing AI stuff with microcontrollers, so they say. He also suggested this video: https://www.youtube.com/watch?v=tL1zltXuHO8
>>23409 Thanks Grommet! Here's a platform-specific repo for it, opensauce: https://github.com/Efinix-Inc/tinyml
And a general search for it on SJWH*b: https://github.com/topics/tinyml
Harvard even has an open course on the topic: pll.harvard.edu/course/fundamentals-tinyml
Cheers. :^)
>>23414 Another, maybe different, one: https://www.tensorflow.org/lite I think it's the same or possibly an offshoot. It does get mentioned in the link you provided.
The AI Engine That Fits In 100K
https://hackaday.com/2023/08/02/the-ai-engine-that-fits-in-100k/
Of course they're hyping this a bit, but... it could be useful. I was reading the other day that something like 99% of the neuron capacity is useless in present algorithms. I read here >>3031 that "...it now takes 44 times less compute to train a neural network... algorithmic progress has yielded more gains than classical hardware efficiency..." and "...By 2028 we will have algorithms 4000x more efficient than AlexNet." So maybe it's not all hype. Maybe there is some big increase in efficiency; maybe even enough that we could get some sort of interactive waifu, at a child level, with commodity processors.
Abstract: https://research.nvidia.com/labs/par/Perfusion/
Paper: https://arxiv.org/abs/2305.01644
This talk shows some ways to get into it and what would be a good thing to work on: https://youtu.be/5Sze3kHAZqE - He recommends fine-tuning and transfer learning, and is a bit skeptical about RAG. He mentions cerebras/btlm-3b as an underappreciated model around the 01:06:00 timestamp. Pretty sure it has been recommended to me through this board or the Discord, since I already had the paper. Phi-1.5 is also mentioned later, which seems to work for Python snippets but not much more.
>Fast.ai's "Practical Deep Learning" courses have been watched by over 6,000,000 people, and the fastai library has over 25,000 stars on Github. Jeremy Howard, one of the creators of Fast, is now one of the most prominent and respected voices in the machine learning industry; but that wasn't always the case...
Read the full show notes here: https://www.latent.space/p/fastai
0:00:00 Introduction
0:01:14 Jeremy's background
0:02:53 Founding FastMail and Optimal Decisions
0:04:05 Starting Fast.ai with Rachel Thomas
0:05:28 Developing the ULMFit natural language processing model
0:10:11 Jeremy's goal of making AI more accessible
0:14:30 Fine-tuning language models - issues with memorization and catastrophic forgetting
0:18:09 The development of GPT and other language models around the same time as ULMFit
0:20:00 Issues with validation loss metrics when fine-tuning language models
0:22:16 Jeremy's motivation to do valuable work with AI that helps society
0:26:39 Starting fast.ai to spread AI capabilities more widely
0:29:27 Overview of fast.ai - courses, library, research
0:34:20 Using progressive resizing and other techniques to win the DAWNBench competition
0:38:42 Discovering the single-shot memorization phenomenon in language model fine-tuning
0:43:13 Why fine tuning is simply continued pre-training
0:46:47 Chris Lattner and Modular AI
0:48:38 Issues with incentives and citations limiting innovation in research
0:52:49 Joining AI research communities through Discord servers
0:55:23 Mojo
1:03:08 Most exciting areas - continued focus on transfer learning and small models
1:06:56 Pushing capabilities of small models through transfer learning
1:10:58 Opening up coding through AI to more people
1:13:51 Current state of AI capabilities compared to computer vision in 2013 - lots of basic research needed
1:17:08 Lightning Round
>BTLM-3B-8k-base: Licensed for commercial use (Apache 2.0). State of the art 3B parameter model. Provides 7B model performance in a 3B model via performance enhancements from ALiBi, SwiGLU, maximal update parameterization (muP) and the extensively deduplicated and cleaned SlimPajama-627B dataset. Fits in devices with as little as 3GB of memory when quantized to 4-bit. One of few 3B models that supports 8k sequence length thanks to ALiBi. Requires 71% fewer training FLOPs, has 58% smaller memory footprint for inference than comparable 7B models.
https://huggingface.co/cerebras/btlm-3b-8k-base (search for other variants)
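If anyone wants to poke at the cerebras/btlm-3b-8k-base model mentioned above, the usual Hugging Face transformers pattern looks roughly like this. Check the model card for the exact arguments; BTLM ships custom model code on the Hub (hence trust_remote_code), and the prompt and generation settings here are placeholder guesses, not recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cerebras/btlm-3b-8k-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,   # BTLM provides its own model code on the Hub
    torch_dtype="auto",
)

prompt = "A robowaifu's morning routine begins with"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```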
What would be the "roadmap" to becoming a "robowaifu engineer"? The amount of things I have to learn is colossal if I even aspire to contribute something to this goal. Being a second-year student in computer science with a lot of free time, where should I focus? I think the answer is more than obvious: artificial intelligence, since the body needs a mind. My idea is to start by reading "Artificial Intelligence: A Modern Approach" and "Deep Learning" (Adaptive Computation and Machine Learning series). I understand that AI is mainly developed in C++ and Python; perhaps after reading those books I will have a broader view than I have now. I am just starting, but I want to be a faithful follower of this goal. We have to achieve it.
>>33746 Hello, Anon. Welcome!
>What would be the "roadmap" to becoming a "robowaifu engineer"? The amount of things I have to learn is colossal if I even aspire to contribute something to this goal.
Honestly -- whatever path you decide upon -- I think you've already taken two of the most important steps you could possibly ever take as an aspiring Robowaifu Technician, Anon:
1. You recognize the mountainous scale of the innumerable tasks involved with this overarching goal.
2. Yet you remain undaunted by that realization, and go on 'into the breach' regardless.
This shows admirable (if possibly foolish! :D) moral character within you. Trust me, if you follow through on this commitment, you're going to need it! Also, finding your way here probably involved an important set of life-choices as well, heh. :^)
>Being a second-year student in computer science with a lot of free time, where should I focus?
Lol, teach me your ways Sempai!! I'm a fulltime student, and I (seemingly) barely have time to manage just the official curricula, let alone anything else in life.
<--->
>I think the answer is more than obvious: artificial intelligence, since the body needs a mind
I'm not personally so sure that answer is """obvious""", tbh. The simple truth is that no one knows how to do this yet. We're attempting something brand new here, Anon. Trailblazers. Frontiersmen. Clearly the body does need a 'mind', but it also needs a whole lot of other things as well. As you yourself have already posited, the litany of needs for achieving wholesome, loving, & effective robowaifus is vast. Certainly things like: Mechanical Engineering, Electrical Engineering, Electronics Engineering, Actuation Engineering, Systems Engineering, Signals & Sensor Fusion, Simulation & Predictive Systems, Assembly/Factory Engineering, Maintenance Engineering, Mobile Power Engineering, Thermal Engineering, Command & Control/PID/Industrial Engineering, Mobility Engineering, Privacy/Safety/Security Engineering, Human-safety Engineering, Materials Science, Durability & Corrosion-resistance Engineering, Dustproofing/Waterproofing/Environmental Engineering, Meshnet/Wireless/Networking Engineering; and technician/QA work in every one of these areas (and many more). (I'll just presume a priori that any of these areas imply a significant interest in maths & science on your part, with no further comment.) And that's just an incomplete list on the engineering side!
If you intend to be a robowaifu-industry entrepreneur (like me), then that doesn't even touch on Art, Design, Ergonomics, Psychology, Cuddlin' Stuff, Marketing & Propaganda, Politics, Business, Fundraising, Legal (to ward off the GH's Sh*kelsteins + their pet thugs & puppets), Physical research labs & maintenance, Resource planning, Materials production, Construction factories, Sales & Distribution, etc., etc., etc. So yeah, bro. The doors into the Robowaifu Age future are practically endless...CYOA!!! :DDD
<--->
>I understand that AI is mainly developed in C++ and Python; perhaps after reading those books I will have a broader view than I have now. I am just starting,
I'd suggest you immediately dig into C++ in a deep way. We have some good resources here, but Stroustrup's PPP3 is simply the best place to start. We have a serious shortage of C++ devs here on /robowaifu/, but we already have plenty of Python devs.
>tl;dr
You can't run the realtime processing needs of a robowaifu on Python; you need C++ to do that. Also, see that big list of engineering interests above?
Think C++ might be important to any of them for a robowaifu runtime/implementation systems-complex yet to be devised? Yeah, practically all of them! :D
<--->
>but I want to be a faithful follower of this goal. We have to achieve it
Really proud of you, brother. I'd suggest you take a break for a week, a month, and think this through hard. Is this the life you really want to pursue? Your life is yours to do with as you please. But if you choose this route, it's going to be one of the most difficult things you could possibly attempt in your life. The overall, society-wide payoff will be glorious in the end -- but there's no guarantee of your personal success with this whole endeavor. Such is the nature of war. The real questions to ask yourself rn are: "Just how bad do I want this/need this? Am I willing to risk utter failure on the one-off chance of success at it?" Those answers will determine your future course in life, I think.
<--->
OTOH, this can't be contained; IT.IS.INEVITABLE. The only primary issue yet to be determined is the answer to just this one question:
> "Will this entire, monumental, future industry be left solely in the hands of the greedy, demonically-evil Globohomo & their puppets; or will men all over the world be able to enjoy the fruits of their own labors in peace & content with robowaifus of their own devising by their sides?"
This is for Anons like us -- here & elsewhere -- to decide. Will we -- along with all the honored greats -- go "...Once more unto the breach, dear friends, once more" [1]; or will we shy away from the monumental task at hand, simply because it's hard? I know what my choice already is!! Follow us if you may! Cheers, Anon. :^) TWAGMI
---
1. https://www.youtube.com/watch?v=q6pWPiNUiyg
>>29488 NoidoDev, what do you think of this video? Can a person of average intelligence get anything from it, or at least a good overview? Or, even better, be able to build something with this data? I wonder if we could use the mentioned AI, cerebras/btlm-3b, as a base for a distributed mind. I look at AIs and it seems so daunting. But I could see training one to recognize voice, one to deal with movement, one to deal with vision (specifically not running into things), and maybe one for instruction on a low level, like move here, pick up this, etc. If you could get the first part working, which I think would be easier, you could then work on the higher-level stuff as hardware becomes more and more capable. There has to be some way to break this down into parts to make the task easier; none of us are going to come up with supercomputers any time soon.
I wish there was more work done on swapping small AIs out to SSDs. It would seem you could use a small AI to determine which specialty AI was needed in memory. And it may be that you could have a small AI index things in memory and pull from the drive just like a paging file does now. Sometimes the robowaifu would slow down and have to "think", but I don't believe that would be the worst thing.
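The "page specialist models in and out of memory" idea above could start as simply as this: a hypothetical dispatcher that lazily loads whichever specialist is needed and evicts the least-recently-used one when too many are resident. Everything here (the names, the loader functions, the limit of two resident models) is invented to illustrate the shape of it, not a real robowaifu runtime.

```python
from collections import OrderedDict

class ModelPager:
    """Keeps at most `max_loaded` specialist models in memory at once,
    loading on demand and evicting the least-recently-used."""

    def __init__(self, loaders, max_loaded=2):
        self.loaders = loaders        # name -> function that loads the model
        self.max_loaded = max_loaded
        self.loaded = OrderedDict()   # name -> model object, in LRU order

    def get(self, name):
        if name in self.loaded:
            self.loaded.move_to_end(name)           # mark as recently used
        else:
            if len(self.loaded) >= self.max_loaded:
                evicted, model = self.loaded.popitem(last=False)
                del model                            # let it be garbage-collected
                print(f"unloaded {evicted}")
            self.loaded[name] = self.loaders[name]()
            print(f"loaded {name}")
        return self.loaded[name]

# Hypothetical usage: each loader would really deserialize a model from an SSD.
pager = ModelPager({
    "voice": lambda: "voice-model",
    "vision": lambda: "vision-model",
    "motion": lambda: "motion-model",
})
pager.get("voice"); pager.get("vision"); pager.get("motion")  # evicts "voice"
```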
>>33765
>But I could see training one to recognize voice, one to deal with movement, one to deal with vision (specifically not running into things), and maybe one for instruction on a low level, like move here, pick up this, etc.
If I remember right, Mark Tilden referred to this as a "horse and rider" setup, where you have a high-level program giving direction to a lower-level program. The lower level worries about not stepping in a hole, etc., while the higher level worries about where the pair are going. I too have experienced the boons of separating different functions into different programs/AIs. To give a real-life example of what you're talking about: my voice recognition AI doesn't like to run in the same program as the image recognition AI. The programs also run at different speeds, e.g. on a RasPi it takes half a second for the image recognition to run, while the servo program can run a couple dozen times a second, and the voice detection pauses its program until words are heard (or a 5-second timeout). These different speeds/natures of the code require separation, which in turn requires developing a way for the programs to communicate with each other.
>>33746
>Starting
The best way to start is to look for code or a library that does what you want (like image recognition) and try tweaking it to fit your needs, like making it interact with other programs, e.g. if an object is recognized in an image, move a servo.
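Riffing on >>33767's point about programs running at different speeds, here is a minimal sketch of that kind of separation: a slow "vision" process and a fast "servo" loop running independently and talking through a queue, so the fast loop never blocks on the slow one. The timings, message format and loop counts are all invented for the example; real code would pass recognition results from an actual camera pipeline.

```python
import time
from multiprocessing import Process, Queue

def vision_worker(q):
    """Slow process: pretend image recognition takes ~0.5 s per frame."""
    for frame in range(5):
        time.sleep(0.5)
        q.put(("object_seen", frame))   # send only the result, not the image

def servo_loop(q):
    """Fast loop: ticks ~20x per second, reacting whenever a result shows up."""
    target = None
    for tick in range(120):
        while not q.empty():
            event, frame = q.get()
            target = frame              # update the goal from the vision result
        # ...here the real code would command the servos toward `target`...
        time.sleep(0.05)
    print("last target seen:", target)

if __name__ == "__main__":
    q = Queue()
    p = Process(target=vision_worker, args=(q,))
    p.start()
    servo_loop(q)
    p.join()
```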
>>33767
>I've experienced some programs running at different speeds
Asynchrony is a deep topic in systems engineering for complex 'systems-of-systems' -- which full-blown robowaifus will certainly be in the end. The buffering and test, test, test combo has been the most successful engineering approach to this issue thus far, AFAICT. Just imagine the timing difficulties that had to be surmounted by the men who created and operated the Apollo spacecraft out to the Moon & back! Lol, our problems here are actually much more complicated (by at least a couple orders of magnitude)!! :DD
Kiwi discussed the desire that
>A thread dedicated to man machine relationships may be needed
( >>33634 ), and I agree. While my guess is that he meant for that thread to primarily be focused on the psychological/social aspects of those relationships, I would argue that the engineering of complex parts of our robowaifu's systems that in any way involve responsiveness or interaction-timing with her Master (or others) is definitely fair game for such a thread. The reason is simple: the timing of interactions -- particularly verbal ones -- clearly affects the social perceptions of those interactions (for all parties involved).
>tl;dr
< If it takes our robowaifus more than half a second to begin to respond to her Master's engagements, then we've waited too long...
Cheers. :^)
>>33905 Thanks, Anon. Nice degree programs. And at least one version of many of these lectures are available online! Cheers. :^)
>>33767 Thanks for the advice. It's welcome.
