/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The site was down because of hosting-related issues. We're figuring out now why it happened.

Build Back Better

Sorry for the delays in the BBB plan. An update will be issued in the thread in late August. -r

When the world says, “Give up,” Hope whispers, “Try it one more time.” -t. Anonymous


Open file (2.21 MB 1825x1229 chobit.png)
Robowaifu@home: Together We Are Powerful Robowaifu Technician 03/14/2021 (Sun) 09:30:29 No.8958 [Reply] [Last]
The biggest hurdle to making quick progress in AI is the lack of compute to train our own original models, yet there are millions of gamers with GPUs sitting around barely being used, potentially an order of magnitude more compute than Google and Amazon combined. I've figured out a way we can connect hundreds of computers together to train AI models by using gradient accumulation. It works by doing several training steps and accumulating the loss of each step, then dividing by the number of accumulation steps taken before the optimizer step. If you have a batch size of 4 and do 256 training steps before an optimizer step, it's like training with a batch size of 1024. The larger the batch size and gradient accumulation steps are, the faster the model converges and the higher the final accuracy it achieves. It's the most effective way to use a limited computing budget: https://www.youtube.com/watch?v=YX8LLYdQ-cA

These training steps don't need to be calculated by a single computer but can be distributed across a network. A decent amount of bandwidth will be required to send the gradients each optimizer step, plus the training data. Deep gradient compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, but it's still going to use about 0.5 MB of download and upload to train something like GPT2-medium each optimizer step, or about 4-6 Mbps on a Tesla T4. However, we can reduce this bandwidth by doing several training steps before contributing gradients to the server. Taking 25 would reduce it to about 0.2 Mbps. Both slow and fast computers can contribute so long as they have the memory to hold the model. A slower computer might only send one training step whereas a fast one might contribute ten to the accumulated gradient. Some research needs to be done on whether a variable accumulation step size impacts training, but it could be adjusted as people join and leave the network. All that's needed to do this is a VPS.
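The accumulation arithmetic can be sketched with a toy linear model in numpy (the data, targets, and learning rate here are all made up for illustration; a real run would use a deep net in a framework like PyTorch):

```python
import numpy as np

# Toy linear model to illustrate the accumulation arithmetic (hypothetical data/LR).
rng = np.random.default_rng(0)
w = np.zeros(10)                       # model weights
ACCUM_STEPS, BATCH, LR = 256, 4, 0.01  # 256 micro-batches of 4 ~ batch size 1024

accum_grad = np.zeros_like(w)
for _ in range(ACCUM_STEPS):
    X = rng.normal(size=(BATCH, 10))   # stand-in for a real training micro-batch
    y = X @ np.ones(10)                # hypothetical targets
    err = X @ w - y
    grad = X.T @ err / BATCH           # MSE gradient for this micro-batch
    accum_grad += grad / ACCUM_STEPS   # divide by the number of accumulation steps
w = w - LR * accum_grad                # one optimizer step for the whole lot
```

The division by ACCUM_STEPS inside the loop is what makes 256 micro-batches of 4 equivalent to one averaged batch of 1024, and each micro-batch could just as well have been computed on a different machine.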
Contributors wanting anonymity can use proxies or Tor, but project owners will need VPNs with sufficient bandwidth and dedicated IPs if they want that much anonymity. The VPS doesn't need an expensive GPU rental either. The fastest computer in the group could be chosen to calculate the optimizer steps. The server would just need to collect the gradients, decompress them, add them together, compress again, and send the accumulated gradient to the computer calculating the optimizer step. Or, if the optimizing computer has sufficient bandwidth, it could download all the compressed gradients from the server and calculate the accumulated gradient itself. My internet has 200 Mbps download, so it could potentially handle up to 1000 computers by keeping the per-client bandwidth to 0.2 Mbps. Attacks on the network could be mitigated by analyzing the gradients, discarding nonsensical ones and banning clients that send junk, or possibly by using PGP keys to create a pseudo-anonymous web of trust. Libraries for distributed training implementing DGC already exist, although not as advanced as I'm envisioning yet: https://github.com/synxlin/deep-gradient-compression

I think this will also be a good way to get more people involved. Most people don't know enough about AI or robotics to help, but if they can contribute their GPU to someone's robowaifu AI they like and watch her improve each day, they will feel good about it and get more involved. At scale, though, some care will need to be taken that people don't unknowingly run dangerous code on their computers, either through a library that constructs the models from instructions or something else. And where the gradients are calculated does not matter. They could come from all kinds of hardware, platforms and software like PyTorch, TensorFlow or mlpack.
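For a sense of how deep gradient compression saves bandwidth, here is a minimal top-k sparsification sketch (the ratio and tensor sizes are made up; real DGC adds momentum correction and a local error-feedback buffer on top of this):

```python
import numpy as np

def compress_topk(grad, ratio=0.01):
    """Keep only the largest-magnitude `ratio` fraction of entries,
    returning their indices and values plus the original shape."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], grad.shape

def decompress_topk(idx, vals, shape):
    """Rebuild a dense gradient: zeros everywhere except the kept entries."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

grad = np.random.default_rng(0).normal(size=(1000, 100))
idx, vals, shape = compress_topk(grad, ratio=0.01)
restored = decompress_topk(idx, vals, shape)
# Only 1% of the values (plus their indices) cross the network; in real DGC
# the dropped 99% accumulate locally so nothing is lost over many steps.
```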
98 posts and 30 images omitted.
>NuNet is building a globally decentralized computing framework that combines latent computing power of independently owned compute devices across the globe into a dynamic ecosystem of compute resources, individually rewarded via tokenomic ecosystem based on NuNet Utility Token (NTX). https://www.nunet.io
Open file (33.58 KB 855x540 NuNet Tokenomics.png)
>>26510 While the basic idea behind the claims is sound (ours is much better however), the entire thing strikes me as yet another scam. If I'm correct, then it's an effort to sweep up any unencumbered compute resources not already controlled by the GH, into their already-obscenely-large hardware stable.
Related: >>30759 >I'm working on infrastructure that's friendly to distributed development of complex AI applications
> platform-related : ( >>34123 )

RoboWaifuBanners Robowaifu Technician 09/15/2019 (Sun) 10:29:19 No.252 [Reply] [Last]
This thread is for the sharing of robowaifu banners. As per the rules, follow these requirements: >banner requirements: File size must be lower than 500 KB and dimensions are 300x100 exactly. Allowed file formats are .jpg, .png, and .gif. >=== -fmt cleanup
Edited last time by Chobitsu on 01/26/2023 (Thu) 18:54:19.
122 posts and 91 images omitted.
>>19013 >>19019 Either of these would be acceptable as banners here Anon, but you'll need to figure out a creative layout that meets our formatting requirements (>>252). I'd also suggest you try to simplify so that they 'read' well, as with your first sauce material. >=== -add crosslink -minor prose edit
Edited last time by Chobitsu on 01/26/2023 (Thu) 19:00:15.
>>19074 Then these are not banners but memes: Robowaifu Propaganda and Recruitment >>2705
>>19418 Ahh, got it. Thanks Anon!
>>9674 Not the anon who asked but it sounds sad. What happened to her in the end?
>>26448 Hello newbie, welcome! I suppose you'll have to find out for yourself! You can find sites that will let you read this mango online. I hope you enjoy our board, Anon. Please look around and don't be afraid to ask questions along the way. Cheers! :^)

HOW TO SOLVE IT Robowaifu Technician 07/08/2020 (Wed) 06:50:51 No.4143 [Reply] [Last]
How do we eat this elephant, /robowaifu/? This is a yuge task obviously, but OTOH, we all know it's inevitable there will be robowaifus. It's simply a matter of time. For us (and for every other Anon) the only question is will we create them ourselves, or will we have to take what we're handed out by the GlobohomoBotnet(TM)(R)(C)? In the interest of us achieving the former I'll present this checklist from George Pólya. Hopefully it can help us begin to break down the problem into bite-sized chunks and make forward progress.
>---
First. UNDERSTANDING THE PROBLEM
You have to understand the problem.
>What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory?
>Draw a figure. Introduce suitable notation.
>Separate the various parts of the condition. Can you write them down?
Second.


Edited last time by Chobitsu on 07/19/2023 (Wed) 14:10:30.
119 posts and 30 images omitted.
>>24612 My apologies for not responding sooner, Anon.
>nor do I think that is a useful goal for robowaifu
While I think that's a matter of degree, I'll agree with the sentiment in general. BTW, we now have a new thread on-topic with this: (>>24783).
>I don't think we need to copy the monkey meat to succeed, AI NNs are vaguely inspired by real neurons but are not that similar in reality.
Interestingly, the so-called 'monkey meat', as you put it, is now being used in conjunction with in-silico devices, just as predicted by /robowaifu/ ahead of time. (>>24827) Only time will tell how useful this approach will be for robowaifus, but I think there is little doubt we'll see these systems being used in guided missiles and drones within the decade.
>my ML TODO
I'd suggest starting with the recommended post from our /meta threads, Anon.
-How to get started with AI/ML for beginners (>>18306)
>But we can take advantage of something really nice. We already have LLMs; you can use large LLMs to create the labeled data and basically extract what they learned.
We are already seeing exciting developments where LLMs are being used to train other models.


>>24612 >- I need to do a deep dive into psychology, this would be useful for figuring out what "mechanics" this mind should be governed by. (requesting opinions and advice) I answered here >>24861
I upgraded this: >>10317. Though, nobody seems to care, since it was broken and no one complained or fixed it. It's not for downloading the files, just for getting the metadata for those which you already have. It doesn't work for renamed files, for example where the title of the paper was put into the name. I want to use this to extract the metadata and be able to use it in something like Obsidian, so I can have the description of the paper there and the title with the link. At some point, making keywords into tags automatically would also be interesting. (Indentation might be botched in the code, since copying from Emacs seems to not work very well)
# pip install arxiv first
import os
import arxiv

# Getting metadata for your ArXiv.org documents
AI_PAPERS_DIR = os.path.expanduser("~/Games/Not-Games/AI_Papers/")
if not os.path.isdir(AI_PAPERS_DIR):
    AI_PAPERS_DIR = os.path.expanduser(input("The dir with papers: "))
filenames = os.listdir(AI_PAPERS_DIR)
id_list = []
for filename in filenames:
    if len(filename.split('.')) == 3:
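Since the snippet keys off filenames splitting into three dot-separated parts (e.g. `2103.00020.pdf`), the ID check could also be made explicit with a small helper (a sketch with hypothetical filenames, not part of the original script):

```python
import re

def arxiv_id_from_filename(filename):
    """Return the arXiv ID for names like '2103.00020.pdf', else None.
    Renamed files (e.g. with the paper title) won't match, as noted above."""
    m = re.fullmatch(r"(\d{4}\.\d{4,5})(v\d+)?\.pdf", filename)
    return m.group(1) if m else None

# Hypothetical filenames for illustration:
ids = [arxiv_id_from_filename(f)
       for f in ["2103.00020.pdf", "Attention Is All You Need.pdf"]]
# -> ['2103.00020', None]
```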


>>26312 Thanks kindly, NoidoDev. :^)
>related : John Gall : Systemantics : ( >>33550, >>33553 ) >=== -add crosslink
Edited last time by Chobitsu on 09/14/2024 (Sat) 15:24:36.

Open file (156.87 KB 1920x1080 waifunetwork.png)
WaifuNetwork - /robowaifu/ GitHub Collaborators/Editors Needed SoaringMoon 05/22/2022 (Sun) 15:47:59 No.16378 [Reply]
This is a rather important project for the people involved here. I just had this amazing idea, which allows us to catalogue and make any of the information here searchable in a unique way. It functions very similarly to an image booru, but for markdown formatted text. It embeds links and the like. We can really make this thing our own, and put the entire board into a format that is indestructible. Anyone want to help build this into something great? I'll be working on this all day if you want to view the progress on GitHub. https://github.com/SoaringMoon/WaifuNetwork
10 posts and 4 images omitted.
>>16530 Nah I'm good, I know how to handle JSON. XD
>>16413 Finally started to sort my data regarding robowaifus and AI in Obsidian today.
> https://github.com/SoaringMoon/WaifuNetwork
The data seems to not be available anymore. Does anyone have it, or should I contact OP? I might have downloaded it somewhere but can't find it right now. It also might be a good idea to make this into a project using something like LangChain, NLTK, or something else to extract some keywords and make them into tags.
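As a starting point for the tagging idea, a dependency-free keyword sketch (NLTK or LangChain would do this far more robustly; the stopword list here is a tiny hypothetical stand-in):

```python
import re
from collections import Counter

# Tiny hypothetical stopword list; NLTK ships a proper one.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for", "on", "with"}

def keywords(text, n=5):
    """Return the n most frequent non-stopword terms as candidate tags."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(n)]

tags = keywords("Gradient compression for distributed training of gradient-based models")
# -> ['gradient', 'compression', 'distributed', 'training', 'based']
```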
>>26259 >or should I contact OP? Yeah I think that'd be cool to find out what SoaringMoon's up to these days.
>>26263 >find out what SoaringMoon's up to these days. Well, that's easy, he's working in gaming. That's on his Github. He also has a Substack, and I saw him on the Discord a while ago. Probably just deleted the RW repo on Github for PR reasons.
>>26266 Cool! Well I hope he's doing well and will stop by to say hi again, and let us know how he's doing at some point. >Probably just deleted the RW repo on Github for PR reasons. Yeah, I warned you about hiding your power level the stairs, bro! :^)

OpenGL Resources Robowaifu Technician 09/12/2019 (Thu) 03:26:10 No.158 [Reply]
Good VR is one important interim goal for a fully-realized RoboWaifu, and it's also much easier to visualize an algorithm graphically at a glance than 'trying to second-guess a mathematical function'. OpenGL is by far the most portable approach to graphics atm.

>"This is an excellent site for starting with OpenGL from scratch. It covers from very introductory topics all the way to advanced topics like Deferred Shading. Liberal usage of accompanying images and code. Strongly recommended."

learnopengl.com/
https://archive.is/BAJ0e

Plus the two related books below, but learnopengl.com is particularly well-suited for beginners.

www.openglsuperbible.com/
https://archive.is/NvdZQ

opengl-redbook.com/
https://archive.is/xPxml

www.opengl.org
https://archive.fo/EZA0p
Edited last time by Chobitsu on 09/26/2019 (Thu) 08:14:37.
22 posts and 18 images omitted.
>>1848
Unrelated: Does the system redraw from the bottom-up? For some reason I thought the drawing happened from the upper-left corner down, but apparently not. Given that the coordinate origin for the system starts in the lower-left corner (similar to a Cartesian system) I guess that makes sense. Live and learn.
>>26098 This here about the Lobster language is related to OpenGL. This is why it is fast; I don't think the fact that it's compiled matters much for speed, though it might have other advantages. For now I'm trying to use Python with OpenGL to do the same thing. Not to be confused with OpenCL, which I also need to use. I found the term "delta compression" for calculating differences between frames. I hope I can make the animations smaller that way. My current way of "programming" this is asking ChatGPT for every step while learning about how it works. With basic knowledge of Python it works relatively well, even with GPT-3. I'm getting the terms I need to look up for how to do things, and code which needs some polishing.
>>26105 That's how most video codecs like WebM work: you keep only keyframes and replace what's in between them with only the transformation required for the next frames. There must be lots of libraries for this.
Open file (18.98 KB 768x768 delta_frame_15.png)
Open file (33.69 KB 768x768 delta_frame_37.png)
>>26106 Thanks, that's what I asked ChatGPT so I had something to ask further. I need to make it interactive, though, or like very short gifs with sound.
>>26108 Okay, I fell into a rabbit hole here. It was interesting, but probably useless. I rather need a collection of GIFs or so. I wondered why no one else had this idea before; I'm starting to understand where the problem is and why it might be hard or impossible.
[quote]
You are correct. In a delta compression approach where you keep the original frames and use delta frames to reconstruct, the primary advantage is not in reducing storage requirements but rather in potentially reducing the processing load when displaying or working with the frames in real-time. The advantage lies in the efficiency of processing and transmitting the frames, especially when dealing with limited computational resources or bandwidth. Here's a clearer explanation of the advantage:
1. Reduced Processing Load: By storing and transmitting only the delta frames, you can reduce the amount of data that needs to be processed, especially when displaying or working with the frames in real-time. Instead of working with full frames, you process the smaller delta frames, which can be computationally less intensive.
2. Real-Time Efficiency: In applications where real-time processing or streaming is crucial, delta compression can be advantageous. It allows for quicker decoding and display of frames, which is important in video conferencing, surveillance, and interactive applications.
3. Bandwidth Efficiency: When transmitting video data over a network, delta compression can reduce the required network bandwidth, making it feasible to stream video even with limited bandwidth.
However, it's important to note that you still need the original frames to apply the delta frames and reconstruct the complete frames. The advantage is in processing efficiency, not in storage efficiency. You trade off storage efficiency for computational and bandwidth efficiency.
If your priority is purely reducing storage requirements and you don't need real-time processing or streaming, then traditional video codecs that achieve high compression ratios while storing complete frames might be more suitable for your use case.
[/quote]
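The keyframe-plus-delta idea is easy to demonstrate with numpy (toy 8-bit grayscale frames with a hypothetical 20x20 change; real codecs add motion compensation and entropy coding on top):

```python
import numpy as np

rng = np.random.default_rng(1)
key = rng.integers(0, 256, size=(768, 768), dtype=np.uint8)  # keyframe
nxt = key.copy()
nxt[100:120, 100:120] += 1        # a small 20x20 change (uint8 math wraps mod 256)

delta = nxt.astype(np.int16) - key  # signed delta frame: zero almost everywhere
reconstructed = (key + delta).astype(np.uint8)

# Storing only delta's nonzero entries is where the savings come from:
changed_fraction = np.count_nonzero(delta) / delta.size  # 400 / 589824 here
```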

Python General Robowaifu Technician 09/12/2019 (Thu) 03:29:04 No.159 [Reply] [Last]
Python Resources general

Python is by far the most common scripting language for AI/Machine Learning/Deep Learning frameworks and libraries. Post info on using it effectively.

wiki.python.org/moin/BeginnersGuide
https://archive.is/v9PyD

On my Debian-based distro, here's how I set up Python, PIP, TensorFlow, and the Scikit-Learn stack for use with AI development:
sudo apt-get install python python-pip python-dev
python -m pip install --upgrade pip
pip install --user tensorflow numpy scipy scikit-learn matplotlib ipython jupyter pandas sympy nose


LiClipse is a good Python IDE choice, and there are a number of others.
www.liclipse.com/download.html
https://archive.is/glcCm
58 posts and 14 images omitted.
Open file (79.43 KB 483x280 Screenshot_70.png)
Open file (124.54 KB 638x305 Screenshot_71.png)
Open file (124.48 KB 686x313 Screenshot_73.png)
Open file (186.01 KB 636x374 Screenshot_74.png)
Advanced use of exceptions in Python for reliability and debugging: >I Take Exception to Your Exceptions: Using Custom Errors to Get Your Point Across https://youtu.be/wJ5EO7tnDiQ (audio quality is a bit suboptimal)
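In the spirit of the talk, a minimal custom-exception hierarchy might look like this (all the names here are hypothetical, not taken from the video):

```python
class ServoError(Exception):
    """Base class for a hypothetical robowaifu servo subsystem."""

class ServoTimeoutError(ServoError):
    """Raised when a joint doesn't answer in time; carries context for debugging."""
    def __init__(self, joint, timeout_s):
        self.joint, self.timeout_s = joint, timeout_s
        super().__init__(f"servo '{joint}' did not respond within {timeout_s}s")

try:
    raise ServoTimeoutError("left_elbow", 0.5)
except ServoError as e:   # catching the base class catches every servo failure
    print(e)              # prints: servo 'left_elbow' did not respond within 0.5s
```

Catching the domain-specific base class instead of bare `Exception` is the main point: callers get precise handling, and the error object carries the data needed to debug.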
>>24658 Great stuff NoidoDev. Exceptions are based.
I watched this video at 1.75x speed as a refresher, since I had some gaps from not writing Python in quite a while. It might also work for beginners with experience in some other language:
> Python As Fast as Possible - Learn Python in ~75 Minutes
https://youtu.be/VchuKL44s6E
As a beginner you should of course go slower and test the code you've learned. While we're at it, Python got a lot of new features, including compiling better to C code now using Cython: https://youtu.be/e6zFlbEU76I
https://automatetheboringstuff.com/ Free online ebook. >Practical Programming for Total Beginners >If you've ever spent hours renaming files or updating hundreds of spreadsheet cells, you know how tedious tasks like these can be. But what if you could have your computer do them for you?
Related: Some anon wants to get started >>26616

Agalmatophilia Robowaifu Technician 09/09/2019 (Mon) 04:50:40 No.15 [Reply]
Is the desire for one's own waifubot a legitimate personal expression or simply yet another one of the many new psychoses stemming from First World problems?
10 posts and 2 images omitted.
both
> Here's a list of ways they could be used, generated by ChatGPT. While the list is quite optimistic, it's noteworthy that there are plenty of people already reporting using models in some of these ways.
> Companionship:
> Virtual Companions: LLMs can serve as virtual companions, providing conversation and interaction for individuals who may feel lonely or isolated.
> Social Simulation: By simulating social interactions, LLMs can help individuals practice social skills in a safe and controlled environment.
> Counseling and Therapy:
> Mental Health Screening: LLMs can be used to conduct initial mental health screenings, helping to identify individuals who may need professional help.
> Cognitive Behavioral Therapy (CBT) Support: LLMs can assist in delivering cognitive behavioral therapy exercises, helping individuals to manage symptoms of disorders like anxiety or depression.
> Emotional Support:
> Mood Monitoring: LLMs can be used to track an individual's mood over time, providing insights into emotional patterns and triggers.
> Crisis Support: While not a replacement for professional intervention, LLMs can provide immediate responses in crisis situations, offering support until professional help can be accessed.
> Education and Awareness:
> Mental Health Education: LLMs can provide information and resources on mental health topics, helping to raise awareness and reduce stigma.
> Stress Management Techniques: They can educate individuals on various stress management techniques such as mindfulness, breathing exercises, and relaxation techniques.
> Personal Development:
> Mindfulness and Meditation Guidance: LLMs can guide individuals through mindfulness and meditation exercises to promote mental well-being.
> Motivational Support: By offering encouragement and tracking progress, LLMs can help individuals stay motivated towards achieving personal goals.
> Behavioral Change:


I think it's beautiful how you can spot obvious and clear parallels between the Pygmalion and Galatea myth and robowaifu technicians. Sometimes I wonder if those ancient Greeks would have been able to notice it as well.
>>26055 Nice, I didn't know there was a painting of it. I might look for a reproduction or art print sometime.
>>26057 there are multiple paintings, so you can pick and choose

/robowaifu/meta-8: Its Summertime, why even wait? Robowaifu Technician 06/24/2023 (Sat) 19:24:05 No.23415 [Reply] [Last]
/meta, offtopic, & QTDDTOT
>---
General /robowaifu/ team survey (please reply ITT) (>>15486)
>---
Mini-FAQ
>A few hand-picked posts on various /robowaifu/-related topics
-Why is keeping mass (weight) low so important? (>>4313)
-How to get started with AI/ML for beginners (>>18306)
-"The Big 4" things we need to solve here (>>15182)
-HOW TO SOLVE IT (>>4143)
-Why we exist on an imageboard, and not some other forum platform (>>15638, >>17937)
-This is madness! You can't possibly succeed, so why even bother? (>>20208, >>23969)
-All AI programming is done in Python. So why are you using C++ here? (>>21057, >>21091)


Edited last time by Chobitsu on 10/15/2023 (Sun) 09:37:18.
374 posts and 135 images omitted.
>>26088
>Well shit they didn't take me. Anyone have any tips on how to move forward? The company told me to learn UiPath.
Oh, too bad, but don't let them discourage you. With UiPath they probably meant doing something around learning how to make a user interface, usability and such. I'm not sure if I can give you advice, but I would make some projects which I could show off, and not listen too much to one company in regards to the area. Maybe look into LangChain.
We need a new /meta/ - this one is bump limited now.
>>26090 Hahah, I mean, to be fair, automation will become a very important skill and there are some job listings that demand knowledge of it (UiPath, Automation Anywhere). But as a newbie I gotta do Python first I guess. >>26094 Shouldn't I first get a foothold in Python? I don't want to mix these two languages. For now I at least have a UiPath certificate.
>>26100 Yeah, learning how to script is pretty much a necessity if you want to use a computer for its intended purpose. Scripting isn't programming; the language isn't really important, you can switch languages easily once you already know one, it's just preference at that point.
NEW THREAD NEW THREAD NEW THREAD (>>26137) (>>26137) (>>26137) (>>26137) (>>26137) NEW THREAD NEW THREAD NEW THREAD

Robowaifu Thermal Management Robowaifu Technician 09/15/2019 (Sun) 05:49:44 No.234 [Reply] [Last]
Question: How are we going to solve keeping all the motors, pumps, computers, sensors, and other heat-producing gizmos inside a real robowaifu cooled off anons?
61 posts and 16 images omitted.
Today I looked into the filling of cold packs. One of mine broke a while ago and I wondered what this goo inside is made of. I thought it might be some kind of silicone rubber. Instead I found that such fillings are ways to store heat: https://en.wikipedia.org/wiki/Phase-change_material
>Organic PCMs: Hydrocarbons, primarily paraffins (CnH2n+2) and lipids but also sugar alcohols.
>Inorganic: Salt hydrates
>Hygroscopic materials
>Solid-solid PCMs
>The most commonly used PCMs are salt hydrates, fatty acids and esters, and various paraffins (such as octadecane). Recently also ionic liquids were investigated as novel PCMs.
For example:
>highly water absorbent
>expands when placed in water
https://en.wikipedia.org/wiki/Polyacrylamide
Useful for things like Thermal Comfort: https://en.wikipedia.org/wiki/Thermal_comfort
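For a rough sense of scale, the heat a melting PCM can absorb is Q = m·L. A back-of-the-envelope sketch (the latent-heat figure is an approximate textbook value for paraffin; the mass and heat load are made-up assumptions):

```python
# Back-of-the-envelope: heat absorbed by a melting paraffin PCM pack, Q = m * L.
LATENT_HEAT_PARAFFIN = 200e3   # J/kg, approximate textbook value for paraffin
mass_kg = 0.5                  # hypothetical PCM pack tucked inside the torso
waste_heat_w = 50.0            # hypothetical steady heat load from motors/compute

q_joules = mass_kg * LATENT_HEAT_PARAFFIN      # 100 kJ absorbed during melting
buffer_seconds = q_joules / waste_heat_w       # ~2000 s (~33 min) of buffering
```

So a half-kilo pack could soak up bursts of waste heat for roughly half an hour before it finishes melting and needs to dump that heat somewhere.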


Open file (153.35 KB 445x478 KoakumaNB.png)
Wings! We pump thermal fluids through the robowaifu and then use thin "wings" filled with little tubes as heat exchangers. I took inspiration from how elephants use their ears to help regulate their body temperatures.
>>25300 Yeah I always found thermal chemistry fascinating NoidoDev, thanks! :^) >>25305 Heh, this is in fact a great idea Anon.
Don't think I'd seen the term 'thermoelastic' ITT yet. Particularly interesting was the phrase 'high volumetric energy density'. Presumably this implies a more efficient solid-state cooling system by volume and/or mass. >XJTU latest breakthrough in solid-state refrigeration published in Science http://en.xjtu.edu.cn/2023-05/19/c_893543.htm >=== -minor edit
Edited last time by Chobitsu on 09/24/2023 (Sun) 01:01:32.
>>25479 Not sure if it's related: >>22748 but it's also solid-state cooling, I just forgot the hyphen in solid-state.

Open file (8.45 MB 2000x2811 Chii_w_Atashi_kawaii.png)
Open file (442.93 KB 3109x3840 percept_UML.dia.png)
Cognitive Architecture : WIP Kiwi, NoidoDev 09/17/2023 (Sun) 20:38:37 No.25368 [Reply]
"An algorithm without a mind cannot have a heart." Chii
Cogito Ergo Chii
Chii thinks, therefore Chii is.
---
# Introduction
By cognitive architecture we mean software that ties different elements of other software together to create a system that can perform tasks that one AI model based on deep learning would not be able to do, or only with downsides in areas like alignment, speed or efficiency. We study the building blocks of the brain and the human mind by various definitions, with the goal of creating something that thinks in a relatively similar way to a human being. Let's start with the three main aspects of mind:
* Sentience: The ability to experience sensations and feelings. Her sensors communicate states to her. She senses your hand holding hers and can react. Feelings: having emotions. Her hand being held brings her happiness. This builds on her capacity for subjective experience, related to qualia.
* Self-awareness: The capacity to differentiate the self from external actors and objects. When presented with a mirror, echo, or other self-referential sensory input, it is recognized as the self. She sees herself in your eyes' reflection and recognizes that is her, that she is being held by you.


Edited last time by Chobitsu on 09/18/2023 (Mon) 00:53:46.
# Cognitive Architecture
- Separating it from conversational AI is probably impossible, since we need to use it for inner dialog and retrieval of data from LLMs
- I'm not sure if optimizations for conversations, like faster understanding and anticipation of where a conversation goes, should also go in here or in another thread
- We need to approach this from different directions: lists of capabilities humans have, concrete scenarios and how to solve them, configuration optimizations
- The system will need to be as flexible as possible, making it possible to add new features everywhere
# Definitions of consciousness which we might be able to implement
- ANY comment on consciousness has to make clear to which definition it refers, or make the case for a new one which can be implemented in code (including deep learning)
- <list, it's five or so>
# List of projects working on Cognitive Architecture or elements of it
- Dave Shapiro: https://www.youtube.com/@DavidShapiroAutomator
- OpenCog - Some guys from IBM Watson worked on it
- LangChain
- ... <there are more, we need a list>


Edited last time by Chobitsu on 09/17/2023 (Sun) 20:54:52.
