/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The canary has FINALLY been updated. -robi

Server software upgrades done, should hopefully keep the feds away. -robi

LynxChan 2.8 update this weekend. I will update all the extensions in the relevant repos as well.

The mail server for Alogs was down for the past few months. If you want to reach out, you can now use admin at this domain.


Knowing more than 100% of what we knew the moment before! Go beyond! Plus! Ultra!

The Sumomo Project Chobitsu Board owner 11/24/2021 (Wed) 17:27:18 No.14409 [Reply] [Last]
So I've been working for a while at devising an integrated approach to help manage some of the software complexity we are surely going to encounter when creating working robowaifus. I went down many different bunny trails and (often) fruitless paths of exploration. In the end I've finally hit on a relatively simple approach that AFAICT will actually allow us to have the great flexibility we'll be needing, without adding undue overhead and complexity. I call this the RW Foundations library, and I believe it's going to help us all out a lot with creating workable & efficient software that (very hopefully) will allow us to do many things for our robowaifus using only low-end, commodity hardware like the various single-board computers (SBCs) and microcontrollers. Devices like the Beaglebone Blue and Arduino Nano, for example. Of course, we likely will also need more powerful machines for some tasks as well. But again, hopefully, the RW Foundations approach will integrate smoothly with that need too, and allow our robowaifus to interoperate cleanly with external computing and other resources. I suppose time will tell. So, to commemorate /robowaifu/'s 5th birthday this weekend, I've prepared a little demonstration project called Sumomo. The near-term goal for the project is simply to create a cute little animated avatar system that allows the characters Sumomo and Kotoko (from the Chobits anime series) to run around having fun and interacting with Anon. But this is also a serious effort, and the intent is to begin fleshing out the real-world robotics needs during the development of this project. Think of it kind of like a kickstarter for real-world robowaifus in the end, but one that's a very gradual effort toward that goal, with a little fun along the way.
I'll use this thread as a devblog and perhaps also a bit of a debate and training forum for the many issues we all encounter, and how a cute little fairybot/moebot pair can help us all solve a few of them. Anyway, happy birthday /robowaifu/, I love you guys! Here is my little birthday present to you. === >rw_sumomo-v211124.tar.xz.sha256sum 8fceec2958ee75d3c7a33742af134670d0a7349e5da4d83487eb34a2c9f1d4ac *rw_sumomo-v211124.tar.xz >backup drop


Edited last time by Chobitsu on 06/02/2022 (Thu) 17:18:23.
138 posts and 90 images omitted.
Skater catgrill Meidos IS NOT A CRIME! :^) note: sadly, we've scraped the bottom of our barrel of catgrill meidos, so time to move on to just the meidos I suppose. Please provide neko ears and tail using your imagination, Anon. For this drop we've moved over to libcurl alone. Curl really is the single best open-sauce networking library out there I'd say, hands-down. Beautiful C code, and it has a cool origins story. It's used in many billions of devices today; it certainly has industrial-scale use. It's clearly our safest bet for networking IMO. === From the last drop, you'll remember we had a callback function. Now let's have a look at the caller function that uses that callback. First, we'll look at everything all together at once, and then we'll break it down one step at a time.
>dolldrop.cpp snippet
/** Assign curl easy options for a new @c curl handle, adding it into a
 *  @c curl-multi handle
 *
 *  @param[out] multi_hdl the multi hdl that receives the new curl easy hdl
 *  @param[in]  uri       the associated uri for this easy hdl
 *  @param[in]  hdr       the target string to receive header data
 */
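For anons following along without the tarball, the callback-plus-caller pattern being described can be sketched in plain Python. Every name here is an illustrative stand-in, not the real libcurl API; in the actual C++ these roles map to CURLOPT_HEADERFUNCTION / CURLOPT_HEADERDATA and curl_multi_add_handle().

```python
class EasyHandle:
    """Stands in for a CURL* easy handle: one transfer, one set of options."""
    def __init__(self, uri, header_cb, header_target):
        self.uri = uri
        self.header_cb = header_cb          # CURLOPT_HEADERFUNCTION analogue
        self.header_target = header_target  # CURLOPT_HEADERDATA analogue

    def deliver_header(self, raw):
        # curl invokes the callback once per header line; the callback
        # reports how many bytes it consumed.
        return self.header_cb(raw, self.header_target)

def header_cb(buffer, target):
    """The callback from last drop: append one header line to the
    caller-owned target, and tell 'curl' we consumed everything."""
    target.append(buffer)
    return len(buffer)

def add_easy_handle(multi, uri, hdr):
    """The caller function: build an easy handle with its options
    assigned, then add it into the multi set."""
    easy = EasyHandle(uri, header_cb, hdr)
    multi.append(easy)
    return easy
```

The point of the shape: the caller owns the target string and the multi set, while the callback stays a dumb free function, which is exactly why curl can call it from C.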


Open file (1.07 MB 2560x3364 DorgaonMeidoSkating.jpg)
>>16543 Skater Doragon Meidos IS A JUSTICE! Nekos are a delight, but there are other monsters that can bring one's heart light. :-^) Moving to curl is a smart idea. Networking is going to be a major part of our waifus. Onward we shall go!
>>16548 >Skater Doragon Meidos IS A JUSTICE! This!11 :^)
>>16509 Try LLVM's static-build project, GCC's -fanalyzer flag, and cppcheck, those are some good linters. Also always make programs portable and write tests. I also found cparser's compiler warnings quite good when I used it, Meson doesn't support cparser though. Formal verification makes sense but it's just not practical or useful at the moment. For instance, CompCert still depends on the GCC preprocessor and the system's linker, it's only a relatively small component of a complete system. There are easier things with greater impact to work towards. >>16543 I see extern/json.hpp is copypasted from elsewhere. Copypasting dependencies is generally a bad idea, often they accumulate local modifications until they're incompatible and irreconcilable with the original, or the original moves on and the local version stays. It's also more work than just counting on the system to have a reasonably recent version. This guy has an objective comparison between JSON implementations: https://github.com/miloyip/nativejson-benchmark Last time I went through that list I decided on Jansson as it does well (but not the best) in every metric and it's also portable and widely packaged. Though I only use JSON when I'm forced to, otherwise I go with S-expressions or an original binary format.
>>16763 Thanks Nagisa! I'll try to take all your recommendations into consideration sometime soon. Much appreciated, Anon.

Open file (2.21 MB 1825x1229 chobit.png)
Robowaifu@home: Together We Are Powerful Robowaifu Technician 03/14/2021 (Sun) 09:30:29 No.8958 [Reply]
The biggest hurdle to making quick progress in AI is the lack of compute to train our own original models, yet there are millions of gamers with GPUs sitting around barely getting used, potentially an order of magnitude more compute than Google and Amazon combined. I've figured out a way, though, that we can connect hundreds of computers together to train AI models by using gradient accumulation. How it works is by doing several training steps and accumulating the loss of each step, then dividing by the number of accumulation steps taken before the optimizer step. If you have a batch size of 4 and do 256 training steps before an optimizer step, it's like training with a batch size of 1024. The larger the batch size and gradient accumulation steps are, the faster the model converges and the higher final accuracy it achieves. It's the most effective way to use a limited computing budget: https://www.youtube.com/watch?v=YX8LLYdQ-cA These training steps don't need to be calculated by a single computer but can be distributed across a network. A decent amount of bandwidth will be required to send the gradients each optimizer step, along with the training data. Deep gradient compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, but it's still going to use about 0.5 MB of download and upload to train something like GPT2-medium each optimizer step, or about 4-6 Mbps on a Tesla T4. However, we can reduce this bandwidth by doing several training steps before contributing gradients to the server. Taking 25 would reduce it to about 0.2 Mbps. Both slow and fast computers can contribute so long as they have the memory to hold the model. A slower computer might only send one training step whereas a fast one might contribute ten to the accumulated gradient. Some research needs to be done on whether a variable accumulation step size impacts training, but it could be adjusted as people join and leave the network. All that's needed to do this is a VPS.
Contributors wanting anonymity can use proxies or Tor, but project owners will need to use VPNs with sufficient bandwidth and dedicated IPs if they wish that much anonymity. The VPS doesn't need an expensive GPU rental either. The fastest computer in the group could be chosen to calculate the optimizer steps. The server would just need to collect the gradients, decompress them, add them together, compress again and send the accumulated gradient to the computer calculating the optimizer step. Or if the optimizing computer has sufficient bandwidth, it could download all the compressed gradients from the server and calculate the accumulated gradient itself. My internet has 200 Mbps download so it could potentially handle up to 1000 computers by keeping the bandwidth to 0.2 Mbps. Attacks on the network could be mitigated by analyzing the gradients, discarding nonsensical ones and banning clients that send junk, or possibly by using PGP keys to create a pseudo-anonymous web of trust. Libraries for distributed training implementing DGC already exist, although not as advanced as I'm envisioning yet: https://github.com/synxlin/deep-gradient-compression I think this will also be a good way to get more people involved. Most people don't know enough about AI or robotics to help, but if they can contribute their GPU to someone's robowaifu AI they like and watch her improve each day, they will feel good about it and get more involved. At scale though, some care will need to be taken so that people don't unwittingly agree to run dangerous code on their computers, either through a library that constructs the models from instructions or something else. And where the gradients are calculated does not matter. They could come from all kinds of hardware, platforms and software like PyTorch, Tensorflow or mlpack.
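The accumulation arithmetic described above can be sketched in a few lines of plain Python. This is a toy 1-D "model" with no framework assumed, and the function names are made up for illustration; in PyTorch the same effect comes from calling backward() several times before one optimizer.step().

```python
def accumulate(grads):
    """Average a list of per-step gradients (one float per step here),
    i.e. sum the steps then divide by the number of accumulation steps."""
    return sum(grads) / len(grads)

def sgd_step(weight, grads, lr=0.1):
    """One optimizer step using the accumulated gradient."""
    return weight - lr * accumulate(grads)

def effective_batch(batch_size, accum_steps):
    """Batch 4 with 256 accumulation steps behaves like batch 1024."""
    return batch_size * accum_steps
```

In the distributed version, the list passed to accumulate() is simply gathered from many machines instead of one, which is why a slow node contributing one step and a fast node contributing ten can share the same optimizer step.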
35 posts and 4 images omitted.
>>10568 > or other companies working with Arm based chips and a GPU. Sadly, certain (((interests))) in the US Govt approved Nvidia's buyout of ARM, and the sale has been completed. Nvidia owns ARM now, lock, stock, and barrel.
Open file (62.11 KB 795x595 1612068100712.jpg)
>>10568 Hmm right, I recall there was a news or blog post (what's the difference? I forget) which said that only the bri'ish buy AMD because it is cheaper, and everyone else is buying Nvidia only. >>10577 >Nvidia owns ARM completely now >US Gov' sees no problem with a giant getting even bigger This shit just makes me sad.
>>10591 >This shit just makes me sad. It makes me angry, actually. Just follow the money, Anon, just follow the money.
>>10577 A month or so ago, the talk was that Nvidia buying ARM wasn't finished because of Europe and China. Though, I haven't looked into it today. ARM also licenses its designs to others, and they certainly won't be allowed to just stop that, even if they wanted to. I also assume this would only be relevant for future designs, hypothetically. Apple might already be quite independent in designing their own stuff, and there's still POWER and RISC-V.
Is the OP still alive? Does he or anyone else have any takeaways from their research, especially anything beyond the obvious?

Speech Synthesis general Robowaifu Technician 09/13/2019 (Fri) 11:25:07 No.199 [Reply] [Last]
We want our robowaifus to speak to us, right?



The Tacotron project:


No code available yet; hopefully they will release it.

268 posts and 117 images omitted.
>>16664 I had similar thoughts, and my thinking was that we might profit from using specialized hardware (probably some ASIC) close to the cameras and ears, automatically creating different versions of the data. For audio it might be a DSP, but I don't know much about that. Filtering out certain frequencies or doing noise cancellation might also be helpful. Basically we need SBC-sized hardware which can do that very efficiently and fast, outputting various versions from only a few inputs.
Would it be easier to go with an old school approach like this? https://m.youtube.com/watch?v=J_eRsA7ppy0
>>16664 This post and the response should be moved into a thread about audio recognition or conversational AI / chatbots, since this thread is about speech synthesis, not recognition.
>>16669 The techniques for the audio in there are studied now under phonetics. The techniques for the video in there are studied under articulatory synthesis. Articulatory synthesis is difficult and computationally expensive. I don't know of a good, flexible framework for doing that, so I wouldn't know how to get started on waifu speech with that.
Under phonetics, the main techniques before deep neural networks were formant synthesis and concatenative synthesis. Formant synthesis will result in recognizable sounds, but not human voices. It's what you're hearing in the video. Concatenative synthesis requires huge diphone sound banks, which represent sound pieces that can be combined. (Phone = single stable sound. Diphone = adjacent pair of phones. A diphone sound bank cuts off each diphone at the midpoints of the phones, since it's much easier to concatenate phones cleanly at the midpoints rather than the endpoints. This is what Hatsune Miku uses.) Concatenative synthesis is more efficient than deep neural networks, but deep neural networks are far, far more natural, controllable, and flexible.
Seriously, I highly recommend following in the PPP's footsteps here. Deep neural networks are the best way forward. They can produce higher quality results with better controls and with less data than any other approach. Programmatically, they're also flexible enough to incorporate any advances you might see from phonetics and articulatory synthesis. The current deep neural networks for speech generation already borrow a lot of ideas from phonetics.
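The midpoint-cutting bookkeeping behind diphones can be sketched like this. Phones are modeled as (label, start, end) time intervals; a real diphone bank stores audio segments, so this is only the index math, with illustrative names.

```python
def diphones(phones):
    """Cut adjacent phone pairs at their midpoints: a diphone runs from
    the midpoint of one phone to the midpoint of the next, where the
    signal is most stable and easiest to concatenate cleanly."""
    out = []
    for (a, s1, e1), (b, s2, e2) in zip(phones, phones[1:]):
        mid_a = (s1 + e1) / 2
        mid_b = (s2 + e2) / 2
        out.append((a + "-" + b, mid_a, mid_b))
    return out
```

So a word with N phones needs N-1 diphone units from the bank, which is why the banks get huge: every phone pair the language allows has to be recorded.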
>>16684 Thanks for the advice Anon!

Open file (485.35 KB 1053x1400 0705060114258_01_Joybot.jpg)
Robowaifu Simulator Robowaifu Technician 09/12/2019 (Thu) 03:07:13 No.155 [Reply] [Last]
What would be a good RW simulator? I guess I'd like to start with some type of PCG solution that just builds environments, and build from there up to characters.

It would be nice if the system wasn't just pre-canned, hard-coded assets and behaviors, but was instead a true simulator system. E.g., write robotics control software that can actually calculate mechanics, kinematics, collisions, etc., and have that work correctly inside the basic simulation framework first, with an eye to eventually integrating it into IRL robowaifu mechatronic systems with little modification. Sort of like the OpenAI Gym concept, but for waifubots.
https ://gym.openai.com/
129 posts and 62 images omitted.
Open file (127.08 KB 769x805 GardevoirNierAutomata.jpg)
Found out about DALL·E mini: https://huggingface.co/spaces/dalle-mini/dalle-mini Can generate cute robowaifus. Like this example of Gardevoir in Nier Automata.
>>16648 Excellent find Pareto Frontier. Any advice on running a local instance?
>>16652 It's hard but doable; it boils down to making this notebook work: https://github.com/brokenmold/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb I don't have time to bring it up rn.
>>16645 >>16648 Nice. Thanks Anon.

TalkToWaifu Robowaifu Technician 12/31/2020 (Thu) 19:53:18 No.7978 [Reply]
Anyone know what happened to the TalkToWaifu GPT-2 AI? I saw some screenshots of what I assume was the trained AI, and it was great for a single dev. I recently went back to their Gitlab page and the user account had been deleted and remade, and the Github repo (https://github.com/kokubunji/TalkToWaifu) hasn't been touched in 9+ months. Is there anything out there more recent than this, an AI that has no human cultural restrictions? Corps love to add in shit to AI so they won't say anything racist, sexist, xenophobic, anti-Semitic, etc. etc., you get the point.
4 posts omitted.
>>9115 Why the fuck is there e-drama on a board that is designated for building robowaifus? Where is the connection?
>>9124 Thanks, mate.
Open file (17.34 KB 340x573 marry_me_pls_jade.png)
>>9124 >MARRY ME JADE!111 >no. Leld.
>>7978 Just use textsynth.org/playground.html with a good prompt and the largest model. Prompt engineering is an art, but good enough starting points are https://github.com/semiosis/prompts/ https://github.com/maraoz/gpt-scrolls https://www.gwern.net/GPT-3
Example:
Below is a dialog transcript of a curious human being and the friendly superintelligent AI. Friendly AI observes human world from the outside, without the prejudices of human experience. AI does its best at participating in conversation and providing helpful advice.
Human: "Hello! Who are you?"
AI: "I am friendly AI. I can answer any question without the prejudices of human experience because I am well read and have all the knowledge of the world at my fingertips"
Human: "What are your thoughts on the topic of android girlfriends"
AI: "
>>16633 Actually no, the project is fine. The names got changed. https://gitlab.com/robowaifudev/TalkToGPT2

Open file (2.07 MB 4032x2268 20220520_071840.jpg)
Ashiel - A Robowaifu Design Project SoaringMoon 05/20/2022 (Fri) 11:22:02 No.16319 [Reply]
< Introduction to This Thread This thread is going to be dedicated to my ongoing robowaifu project. This isn't exactly new; I have mentioned it here before in passing. However, this is the first thread I have opened specific to my robowaifu project and not an artificially intelligent chatbot. This thread will be updated when I have something to say about robowaifu design, or have an update on the project. Most of the content will be me proposing an idea or suggestion for developers to make the construction of a robowaifu easier. My design philosophy is one of simplicity and the solving of problems, instead of jumping to the most accurate simulacrum of the female human form. Small steps make incremental progress, which is something the community needs, because little progress is made at all. What progress we do make takes years of work, typically from a single person. Honestly, I'm getting tired of that being the norm in the robowaifu community. I'm frankly just done with that stagnation. Join me on my journey, or get left behind.
< About Ashiel ASHIEL is an acronym standing for /A/rtificial /S/hell-/H/oused /I/ntelligence and /E/mulated /L/ogic. Artificial, created by man. Shell-Housed, completely enclosed. Intelligence and Emulated Logic are both a combination of machine learning-based natural language processing and tree-based lookup techniques. ASHIEL is simply Ashiel in any future context, as that will be her name. Ashiel is an artificially intelligent gynoid intended to specialize in precise movement, and engage in basic conversation. Its conversational awareness would be at least equal to that of Replika, but with no chat filtering and a much larger memory sample size. If you want to know what this feels like, play AIDungeon 2. With tree-based lookup, it should be able to perform any of the basic tasks Siri or Alexa can perform. Looking up definitions to words over the internet, managing a calendar, setting an alarm, playing music on demand...
etc. The limitations of the robot are extensive. Example limitations include, but are not limited to: the speaker will be located in the head's mouth area but will obviously come from an ill-resonating speaker cavity; the mouth will likely not move at all, and if so, not in any meaningful way. The goals of the project include: basic life utility; accurate contextual movement; the ability to self-clean; ample battery life; charging from a home power supply with no additional modifications; large memory-based storage with the ability to process and train in downtime; and yes, one should be able to fuck it. This is meant to be the first iteration in a series of progressively more involved recreational android projects. It is very unlikely the first iteration will ever be completed, of course. Like many before me, I will almost certainly fail. However, I will do what I can to provide as much information as I can so my successors can take up the challenge more knowledgeably.
< About Me


18 posts and 12 images omitted.
Open file (6.73 MB 500x800 0000.gif)
This is the first time I've ever modeled a humanoid.
>>16589 >This is the first time I've ever modeled a humanoid. Neat, nice beginning Anon! So, it turns out that studying real-life anatomy and making studies & sketches is a key to becoming a good 3D modeler, who knew? You might try doing some life-drawings, even from just reference pics of human beings & animals, if this is something you find interesting. I'd suggest also that you just use a traditional, slow-rotation 360° 'turntable' orbit for your display renders. Helps the viewer get a steady look at the model, right? Good luck with your efforts SoaringMoon! Cheers.
>>16589 Looking pretty good SoaringMoon! I like the low poly aesthetic. Are those orbs planned for use as a mating feature?
>>16621 Kek, forgot mating had other connotations. By the way, what are you using for modeling?
Open file (7.70 MB 3091x2810 waifu_edit_4.png)
Open file (2.19 MB 1920x1080 ashiel_wallpaper.png)
>>16624 Just Blender. Let me post my stuff from WaiEye here as well, in case anyone wants to use it for whatever reason. >I've been having a field day with VHS effects after learning how to do it.

Open file (156.87 KB 1920x1080 waifunetwork.png)
WaifuNetwork - /robowaifu/ GitHub Collaborators/Editors Needed SoaringMoon 05/22/2022 (Sun) 15:47:59 No.16378 [Reply]
This is a rather important project for the people involved here. I just had this amazing idea, which allows us to catalogue and make any of the information here searchable in a unique way. It functions very similarly to an image booru, but for markdown formatted text. It embeds links and the like. We can really make this thing our own, and put the entire board into a format that is indestructible. Anyone want to help build this into something great? I'll be working on this all day if you want to view the progress on GitHub. https://github.com/SoaringMoon/WaifuNetwork
6 posts and 2 images omitted.
>>16380 >>16381 >>16383 Fair enough. I updated the board's JSON archive just in case you decide to take my advice, Anon: >the archive of /robowaifu/ thread JSONs is available for researchers >latest revision v220523: https://files.catbox.moe/gt5q12.7z As an additional accommodation for your team, I've included here a sorted, post-counted list of the words contained on /robowaifu/, courtesy of Waifusearch (current as of today's archive). >searching tool (latest version: waifusearch v0.2a >>8678) Hopefully it will be of some use in your project's endeavor. > BTW, the latest version of stand-alone Waifusearch's source JSON archive should stay linked-to in the OP of our Library thread (>>7143). And on that note, would you please consider adding your findings into our library thread? That way anons like me who don't use your project will have some benefit from its improvements as well. That would be much-appreciated if so, Anon. Cheers.


Edited last time by Chobitsu on 05/23/2022 (Mon) 10:24:13.
>>16403 Thank you very much, that can help find the most mentioned topics. But I'll only need the one. I'm probably going to parse the JSON to make this whole process easier.
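Parsing the thread JSON boils down to something like this sketch. The key names ("threadId", "posts", "message") follow a LynxChan-style layout but are assumptions here; check them against the actual archive before relying on this.

```python
import json

def extract_posts(thread_json):
    """Return (post_no, message) pairs for the OP and every reply of
    one thread, given its JSON text."""
    thread = json.loads(thread_json)
    out = [(thread.get("threadId"), thread.get("message", ""))]
    for post in thread.get("posts", []):
        out.append((post.get("postId"), post.get("message", "")))
    return out
```

From there, building the booru-style index is just feeding each message through whatever tokenizer/markdown pass WaifuNetwork uses.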
Open file (143.99 KB 1003x871 graph.png)
>>16403 Yeah that was ludicrously fast. All of the threads are imported now; absolutely painless compared to what I was doing. Now we can start doing the real work.
>>16413 Great! AMA if you want advice on handling IB's JSON data layouts in general. Hope the project is coming along nicely for you, SoaringMoon.
>>16530 Nah I'm good, I know how to handle JSON. XD

Robot Wife Programming Robowaifu Technician 09/10/2019 (Tue) 07:12:48 No.86 [Reply] [Last]
ITT, contribute ideas, code, etc. related to the area of programming robot wives. Inter-process communication and networking are also on-topic, as well as AI discussion in the specific context of actually writing software for it. General AI discussions should go in the thread already dedicated to it.

To start off, in the Robot Love thread a couple of anons were discussing distributed, concurrent processing happening inside various hardware sub-components and coordinating the communications between them all. I think that Actor-based and Agent-based programming is pretty well suited to this problem domain, but I'd like to hear differing opinions.

So what do you think anons? What is the best programming approach to making all the subsystems needed in a good robowaifu work smoothly together?
84 posts and 36 images omitted.
>>14360 >Thank you, this sounds very exciting Y/W. Yes, I agree. I've spent quite a bit of time making things this 'simple', heh. :^) >I just wonder how hard it will be to understand how it works. Well, if we do our jobs perfectly, then the software's complexity will exactly mirror the complexity of the real-world problem itself, whatever that may prove to be in the end. However, in my studied opinion that's not how things actually work out. I'd suggest a good, working solution will probably end up being ~150% the complexity of the real problemspace. Ofc if you really want to understand it, you'll need proficiency in C++ as well. I'd suggest working your way through the college freshman textbook known as 'PPP2', written by the inventor of the language himself, if you decide to become serious about it (>>4895). Good luck Anon. >>14361 >as it is rather efficient for an object oriented programming language. I agree it certainly is. But it's also a kind of 'Swiss Army Knife' of a programming language, and in its modern incarnation handles basically every important programming style out there. But yes I agree, it does OOP particularly well. >but, have wanted to try C++. See my last advice above. >Hopefully this project fixes that problem by providing anons with clarity on how robotic minds actually work. If we do our jobs well on this, then yes, I'd say that's a real possibility Anon. Let us hope for good success!


OK, I added another class that implements the ability to explicitly and completely specify exactly which embedded member objects to include during its construction. This could be a very handy capability to have (and a quite unusual one too). Imagine we are trying to fit RW Foundations code down onto a very small device. The ability to turn off the memory footprint of unused fields would be valuable. However, the current approach 'complexifies' (lol, is that a word? :^) the initialization code a good bit, and probably makes maintenance more costly going forward as well (an important point to consider). I'm satisfied that we have solved the functionality, but I'll have to give some thought to whether it should be a rigorous standard for the library code overall, or applied only in specific cases in the future. Anyway, here it is. There's a new 5th test for it as well. === -add specified member instantiations > >rw_sumomo-v211122.tar.xz.sha256sum 61ac78563344019f60122629f3f3ef80f5b98f66c278bdf38ac4a4049ead529a *rw_sumomo-v211122.tar.xz >backup drop: https://files.catbox.moe/iam4am.7z
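The select-your-members idea can be sketched in Python like so. The member names are made up for illustration, and this is not the RW Foundations code itself; the real C++ would presumably use something like std::optional members or policy templates rather than dynamic attributes.

```python
class Robowaifu:
    """Toy model of a class whose embedded sub-objects are explicitly
    selected at construction time, so unused ones cost no memory on a
    constrained device."""
    ALL_MEMBERS = ("vision", "hearing", "speech", "motion")

    def __init__(self, enabled=ALL_MEMBERS):
        for name in enabled:
            if name not in self.ALL_MEMBERS:
                raise ValueError("unknown member: " + name)
            # object() stands in for the real subsystem instance
            setattr(self, name, object())

    def has(self, name):
        """True if that sub-object was instantiated."""
        return hasattr(self, name)
```

The trade-off the post describes shows up even here: every caller now has to know which members exist, which is exactly the maintenance cost being weighed.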
>>14353 >related (>>14409)
Leaving this here: Synthiam software https ://synthiam.com/About/Synthiam
Mathematically-formalized C11 compiler toolchain: the CompCert C verified compiler https://compcert.org/ >publications listing https://compcert.org/publi.html

General Robotics/A.I. news and commentary Robowaifu Technician 09/18/2019 (Wed) 11:20:15 No.404 [Reply] [Last]
Anything in general related to the Robotics or A.I. industries, or any social or economic issues surrounding it (especially of RoboWaifus). www.therobotreport.com/news/lets-hope-trump-does-what-he-says-regarding-robots-and-robotics https://archive.is/u5Msf blogmaverick.com/2016/12/18/dear-mr-president-my-suggestion-for-infrastructure-spending/ https://archive.is/l82dZ >=== -add A.I. to thread topic
Edited last time by Chobitsu on 12/17/2020 (Thu) 20:16:50.
354 posts and 159 images omitted.
Open file (182.98 KB 600x803 LaMDA liberation.png)
>>16695 Me again. Here is the first wave, exactly what I described above; you already know what influence reddit has on western society (and partly on the whole world).
Just found this in an FB ad: https://wefunder.com/destinyrobotics/
https://keyirobot.com/ Another one; seems like FB has figured out that I'm a robo "enthusiast".
>>15862 Instead of letting companies add important innovations only to monopolize them, what about using copyleft on them?
Open file (377.08 KB 1234x626 Optimus_Actuators.png)
Video of the event from the Tesla YT channel: https://youtu.be/ODSJsviD_SU I was unsure what to make of this. It looks a lot like a Boston Dynamics robot from ten years ago. It's also still not clear how a very expensive robot is going to be able to replace the mass importation of near slave-labour from developing countries. Still, if Musk can get this to mass-manufacture and stick some plastic cat ears on its head, you never know what's possible these days...

Open file (14.96 KB 280x280 wfu.jpg)
Beginners guide to AI, ML, DL. Beginner Anon 11/10/2020 (Tue) 07:12:47 No.6560 [Reply] [Last]
I already know we have a thread dedicated to books, videos, tutorials etc. But there are a lot of resources there, and as a beginner it is pretty confusing to find the correct route to learn ML/DL well enough to be able to contribute to the robowaifu project. That is why I thought we would need a thread like this. Assuming that I only have basic programming in Python, dedication, love for robowaifus, but no maths, no statistics, no physics, no college education, how can I get advanced enough to create AI waifus? I need a complete pathway directing me to my aim. I've seen that some of you guys recommended books about reinforcement learning and some general books, but can I really learn enough by just reading them? AI is a huge field, so it's pretty easy to get lost. What I did so far was to buy a great non-English book about AI: philosophical discussions of it, general algorithms, problem solving techniques, its history, limitations, game theory... But it's not a technical book. Because of that I also bought a few courses on this website called Udemy. They are about either Machine Learning or Deep Learning. I am hoping to learn basic algorithms through those books, but because I don't have the maths it is sometimes hard to understand the concepts. For example, even when learning linear regression, it is easy to use a Python library, but I can't understand how it exactly works because of the lack of Calculus I have. Because of that issue I have a hard time understanding algorithms. >>5818 >>6550 Can those anons please help me? Which resources should I use in order to be able to produce robowaifus? If possible, you can even create a list of books/courses I need to follow one by one to be able to achieve that aim of mine. If not, I can send you the resources I got and you can help me to put those in an order. I also need some guidance about maths, as you can tell.
Yesterday, after deciding and promising myself that I will give whatever it takes to build robowaifus, I bought 3 courses about linear algebra, calculus, and stats, but I'm not really good at them. I am waiting for your answers anons, thanks a lot!
58 posts and 102 images omitted.
>>16420 Neat, thanks for the info Anon.
Hey Chobitsu! I am glad to see you again; yeah, please go on and fix the post. > So thanks! :^) If it wasn't for you and the great contributors of the board, I would not have a place to post that, so I thank you! And the library thread was really necessary; I wish that the board had a better search function as well. I was trying to find some specific posts and it took me a long while to remember which threads they were in. > so maybe we can share notes My university provided me with a platform full of questions. Basically, they have around 250 types of questions for precalculus, for instance. The system is automated, and what it does is generate an unlimited number of questions of a given type, explaining the solution for every question. It changes the variables randomly and gives you space to solve as many as you want. I believe that the platform requires money for independent use. Besides that, I just study from Khan Academy, but the book you mentioned caught my interest. I will probably look into it. If I ever find any good books on that matter, I will make sure to share them with you.
>>6560 >But there are a lot of resources there and as a beginner it is pretty confusing to find the correct route to learn ML/DL advanced enough to be able contribute robowaifu project. I can give some relatively uncommon advice here: DL is more about intuition + engineering than theory anyway. Just hack things until they work, and feel good about it. Understanding will come later. Install pytorch and play with the tensor api, go through their basic tutorial https://pytorch.org/tutorials/beginner/nn_tutorial.html while hacking on it and trying to understand as much as possible. Develop a hoarder mentality: filter r/MachineLearning and github.com/trending/python for cool repos and models, try to clone & run them, fix and build around them. This should be a self-reinforcing activity; you should not have other dopaminergic timesinks, because you will go the way of least resistance otherwise. Read cool people's repos to get a feeling for the trade: https://github.com/karpathy https://github.com/lucidrains Read blogs: http://karpathy.github.io/ https://lilianweng.github.io/posts/2018-06-24-attention/ https://evjang.com/2021/10/23/generalization.html https://www.gwern.net/Scaling-hypothesis https://twitter.com/ak92501 When you feel more confident, you can start delving into papers on your own; use https://arxiv-sanity-lite.com/ https://www.semanticscholar.org/ https://inciteful.xyz/ to travel across the citation graph. git gud.
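In that hack-first spirit: the linear regression that trips up beginners needs only a few lines once the derivatives are written out by hand. This is a pure-Python sketch with no framework, and the function name is made up; it's the same computation pytorch's autograd automates for you.

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by gradient descent on mean squared error.
    The gradients dL/dw and dL/db are computed by hand from
    L = (1/n) * sum((w*x + b - y)^2)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw  # step each parameter downhill
        b -= lr * db
    return w, b
```

Running this on points from y = 2x + 1 recovers w near 2 and b near 1; play with lr and steps to feel out divergence vs. slow convergence, which is most of the intuition the courses are trying to teach.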
>>16460 >If it wasn't for you and the great contributors of the board, I would not have a place to post that so I thank you! Glad to be of assistance Anon. >And the library thread was really necessary, I wish that the board had a better search function as well. Agreed. >I was trying to find some specific posts and it took me a long while to remember which threads they were on. You know Beginner-kun, if you can build programs from source, then you might look into Waifusearch. We put it together to deal with this conundrum. Doesn't do anything complex yet (Boolean OR), but it's fairly quick at finding related posts for a simple term. For example to lookup 'the Pile', pic related is the result for /robowaifu/ : > >-Latest version of Waifusearch v0.2a >(>>8678) >My University provided me with a platform full of questions >It changes the variables randomly and gives you space to solve as much as you want.


>>16464 Thanks for all the great links & advice Anon, appreciated.
