/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Canary has been updated.

Build Back Better

Sorry for the delays in the BBB plan. An update will be issued in the thread soon. -r





CNC Machine Thread Robowaifu Technician 05/12/2020 (Tue) 02:13:40 No.2991 [Reply] [Last]
Many of the parts needed to build our robowaifus will need to be custom made, and they will need to be metal. For parts with a high tolerance for imperfections, a 3d printer can print a mold and a small-scale foundry can then cast the piece in metal (probably copper or aluminum). BUT there will be pieces that need a higher degree of precision (such as joints). For these pieces a CNC machine would be useful. CNC machines range widely in size, price, and accuracy, and I would like to find models suitable for our purposes. I know there are CNC machines available for under $300 that can cut materials up to copper in hardness, but I don't know whether that will be enough for our purposes. (https://www.sainsmart.com/products/sainsmart-genmitsu-cnc-router-pro-diy-kit?variant=15941458296905&currency=USD&utm_campaign=gs-2018-08-06&utm_source=google&utm_medium=smart_campaign) Some CNC machines can also engrave printed circuit boards, and that may prove useful for our purposes as well. Are there any anons who know more about CNC machines? Anons looking to buy one, ask your questions here.
61 posts and 11 images omitted.
EDM machine (via Levi Jannsen Discord) https://youtu.be/5CeCxkFVCdM >"EDM" typically refers to "Electrical Discharge Machining." EDM is a machining process that uses electrical discharges (sparks) to remove material from a workpiece. It is often used for precision machining of complex shapes or for creating fine features in hard materials.
>>11424 >He deleted his posting I replied to? Seems like it. >>15712 >>26357 Interesting stuff Anons, thanks! :^)
Open file (733.51 KB 1024x853 PhotoChemicalMachining.jpg)
An excellent tutorial for at home photo chemical machining. https://www.instructables.com/Micro-Machining-at-Home-on-the-Cheap/
>>26798 Excellent find, Kiwi! This could definitely be a useful approach for the DIY Robowaifuist. Cheers. :^)
Some mentions of metal in the 3D printing thread: >>23931 (low melting point alloys) >>24149 (metal casting at the end of the comment) >>26053 >>31511

Fundamental Design Ideas that should ALWAYS be accounted for Grommet 07/19/2023 (Wed) 01:59:58 No.24047 [Reply]
So I'm thinking about design and realize some mistakes I made. There is likely no better shortcut to advancing design than these fundamental principles as stated by Elon Musk.

1. Make The Requirements Less Dumb
“Step one: Make the requirements less dumb. The requirements are definitely dumb; it does not matter who gave them to you. It’s particularly dangerous when they come from an intelligent person, as you may not question them enough. Everyone’s wrong. No matter who you are, everyone is wrong some of the time. All designs are wrong, it’s just a matter of how wrong,” explains Musk.

2. Try And Delete Part Of The Process
“Step two: try very hard to delete the part or process. If parts are not being added back into the design at least 10% of the time, [it means that] not enough parts are being deleted. The bias tends to be very strongly toward ‘let’s add this part or process step in case we need it’. Additionally, each required part and process must come from a name, not a department, as a department cannot be asked why a requirement exists, but a person can,” says Musk.

3. Simplify Or Optimize
“Step three: simplify and optimize the design. This is the most common error of a smart engineer — to optimize something that should simply not exist,” according to Musk. He himself has been a victim of implementing these steps out of order. He refers to a “mental straitjacket” that develops in traditional schools, where you always have to answer the question regardless of whether the premise makes any sense at all.

4. Accelerate Cycle Time
“Step four: accelerate cycle time. You’re moving too slowly, go faster! But don’t go faster until you’ve worked on the other three things first,” explains Musk. Here he uses another example of how these steps should occur in order. During a wrongheaded process you should simply stop, not accelerate. He says, “If you’re digging your grave, don’t dig it faster.”


Edited last time by Chobitsu on 11/29/2023 (Wed) 02:52:46.
16 posts and 5 images omitted.
>>24134 The topic is literally just OP expressing their love for the guy. You're calling out those other anons for disagreeing. OP you're acting like a cultist, you could have said all of that without inserting your celebrity into it.
>>26580 Lolwut? I think I'm pretty skeptical of the guy. OTOH, I'm not at all jelly of him and have earnest hopes for the Teslabot program. Regardless, he's certainly done the world a service regarding Xitter, even if some of the approaches to managing the thing seem bonkers to me. :D
>>26580 >The topic is literally just OP expressing their love for the guy You don't understand. One of the most valuable things in the world is to have a simple, straightforward guide to operating effectively. "...you could have said all of that without inserting your celebrity into it...." He said it. He stated them in the sequence listed. These principles, which he stated off the cuff in an interview with Everyday Astronaut, are pure gold. If you internalize them they will make you more effective in whatever you are doing. People for some reason really get triggered, like little boys saying girls have cooties, when you mention him or what he has done. And somehow admiration for most excellent productivity, and for changing the trajectory of several technologies on the planet, is called "love" or "worship" or some other derogatory term. No, the guy has done a shitload of impressive things, and respecting that is...just respect. Very few people in history have produced so much change in such a short time as he has. It's just a fact.
>>26590 > You don't understand. One of the most valuable things in the world is to have a simple straightforward guide to operating effectively. Do you actually think Elon is the first person that thought efficiency is good? You could ask any engineer and they would explain how important efficiency is to you. Also if you honest to god believe Elon is a genius at engineering, look at how modular and resistant Tesla cars or his rockets are. And most things he talked about don't apply because we aren't selling status symbols to the masses, we are producing robots to love. 1)Make The requirements less dumb If you made the requirements, they aren't "dumb" as long as you don't think they are dumb. Every requirement you started with should be added to the end product over time as long as there aren't any engineering difficulties and you want those features. 2) Try And Delete Part Of The Process The product should be quality checked at every step and the more check there is the better. 3)Simplify Or Optimize Good advice but some people died because he simplifyed his car's doors to the point they weren't functioning. 4)Accelerate Cycle Time That thinking of his is why Tesla factories get far more injuries than other car factories, i strongly believe Tesla factories would be closed 10 times by now if other car manifacturers ran factories as dangerous as his. 5. Automate Soo smart of him to realise industrial revolution happened. But doesn't apply to us yet because we aren't manifacturing in masses.
>>26592 You don't like his ideas, don't use them but don't pretend that stating them is worship or any of the other stuff you throw out there. No one forcing you to do anything. Ideas are not people.

Open file (2.21 MB 1825x1229 chobit.png)
Robowaifu@home: Together We Are Powerful Robowaifu Technician 03/14/2021 (Sun) 09:30:29 No.8958 [Reply] [Last]
The biggest hurdle to making quick progress in AI is the lack of compute to train our own original models, yet there are millions of gamers with GPUs sitting around barely getting used, potentially an order of magnitude more compute than Google and Amazon combined. I've figured out a way we can connect hundreds of computers together to train AI models by using gradient accumulation. How it works is by doing several training steps and accumulating the gradients of each step, then dividing by the number of accumulation steps taken before the optimizer step. If you have a batch size of 4 and do 256 training steps before an optimizer step, it's like training with a batch size of 1024. The larger the batch size and gradient accumulation steps are, the faster the model converges and the higher final accuracy it achieves. It's the most effective way to use a limited computing budget: https://www.youtube.com/watch?v=YX8LLYdQ-cA These training steps don't need to be calculated by a single computer but can be distributed across a network. A decent amount of bandwidth will be required to send the gradients each optimizer step, plus the training data. Deep gradient compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, but it's still going to use about 0.5 MB of download and upload to train something like GPT2-medium each optimizer step, or about 4-6 Mbps on a Tesla T4. However, we can reduce this bandwidth by doing several training steps before contributing gradients to the server. Taking 25 would reduce it to about 0.2 Mbps. Both slow and fast computers can contribute so long as they have the memory to hold the model. A slower computer might only send one training step whereas a fast one might contribute ten to the accumulated gradient. Some research needs to be done on whether a variable accumulation step size impacts training, but it could be adjusted as people join and leave the network. All that's needed to do this is a VPS.
Contributors wanting anonymity can use proxies or Tor, but project owners will need to use VPNs with sufficient bandwidth and dedicated IPs if they want that much anonymity. The VPS doesn't need an expensive GPU rental either. The fastest computer in the group could be chosen to calculate the optimizer steps. The server would just need to collect the gradients, decompress them, add them together, compress again and send the accumulated gradient to the computer calculating the optimizer step. Or, if the optimizing computer has sufficient bandwidth, it could download all the compressed gradients from the server and calculate the accumulated gradient itself. My internet has 200 Mbps download, so it could potentially handle up to 1000 computers by keeping the bandwidth to 0.2 Mbps each. Attacks on the network could be mitigated by analyzing the gradients, discarding nonsensical ones and banning clients that send junk, or possibly by using PGP keys to create a pseudo-anonymous web of trust. Libraries for distributed training implementing DGC already exist, although not as advanced as I'm envisioning yet: https://github.com/synxlin/deep-gradient-compression I think this will also be a good way to get more people involved. Most people don't know enough about AI or robotics to help, but if they can contribute their GPU to someone's robowaifu AI they like and watch her improve each day, they will feel good about it and get more involved. At scale, though, some care will need to be taken that people aren't tricked into running dangerous code on their computers, either through a library that constructs the models from instructions or something else. And where the gradients are calculated does not matter. They could come from all kinds of hardware, platforms and software like PyTorch, Tensorflow or mlpack.
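The accumulation arithmetic described above can be sketched in a few lines of plain Python with a toy linear model (not any particular framework's API): averaging the gradients of equal-sized micro-batches reproduces the full-batch gradient, which is what lets many small contributions from the network stand in for one large batch.

```python
def grad_mse(w, xs, ys):
    """Gradient of mean squared error for the toy model y = w*x over a batch."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def accumulated_grad(w, xs, ys, micro_batch_size):
    """Average micro-batch gradients, the way contributing workers would."""
    total = 0.0
    steps = 0
    for i in range(0, len(xs), micro_batch_size):
        total += grad_mse(w, xs[i:i + micro_batch_size], ys[i:i + micro_batch_size])
        steps += 1
    return total / steps  # divide by the number of accumulation steps

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]
w = 0.5
full = grad_mse(w, xs, ys)                         # one big batch of 4
accum = accumulated_grad(w, xs, ys, 2)             # two micro-batches of 2
print(abs(full - accum) < 1e-9)                    # True: the gradients agree
```

With equal-sized micro-batches the two are mathematically identical; a real distributed setup adds gradient compression and networking on top, but the core identity is just this averaging step.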
97 posts and 30 images omitted.
> conceivably-related question (>>23971)
>NuNet is building a globally decentralized computing framework that combines latent computing power of independently owned compute devices across the globe into a dynamic ecosystem of compute resources, individually rewarded via tokenomic ecosystem based on NuNet Utility Token (NTX). https://www.nunet.io
Open file (33.58 KB 855x540 NuNet Tokenomics.png)
>>26510 While the basic idea behind the claims is sound (ours is much better however), the entire thing strikes me as yet another scam. If I'm correct, then it's an effort to sweep up any unencumbered compute resources not already controlled by the GH, into their already-obscenely-large hardware stable.
Related: >>30759 >I'm working on infrastructure that's friendly to distributed development of complex AI applications

RoboWaifuBanners Robowaifu Technician 09/15/2019 (Sun) 10:29:19 No.252 [Reply] [Last]
This thread is for the sharing of robowaifu banners. As per the rules, follow these requirements: >banner requirements: File size must be lower than 500 KB and dimensions are 300x100 exactly. Allowed file formats are .jpg, .png, and .gif. >=== -fmt cleanup
Edited last time by Chobitsu on 01/26/2023 (Thu) 18:54:19.
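As an aside, the size and dimension rules above can be machine-checked for PNGs without any imaging library, since a PNG stores its width and height at fixed offsets inside the IHDR chunk. A hypothetical helper (PNG-only; .jpg and .gif would need their own header parsing):

```python
import struct

def check_banner(data):
    """Return (ok, reason) for raw PNG bytes against the board's banner rules."""
    if len(data) > 500 * 1024:
        return False, "file larger than 500 KB"
    if data[:8] != b"\x89PNG\r\n\x1a\n":      # fixed 8-byte PNG signature
        return False, "not a PNG"
    # After the signature: 4-byte chunk length, 4-byte "IHDR" type,
    # then big-endian width and height at offsets 16 and 20.
    width, height = struct.unpack(">II", data[16:24])
    if (width, height) != (300, 100):
        return False, f"dimensions are {width}x{height}, need 300x100"
    return True, "ok"

# Minimal fake header just to exercise the check (not a viewable image):
fake = b"\x89PNG\r\n\x1a\n" + b"\x00\x00\x00\rIHDR" + struct.pack(">II", 300, 100)
print(check_banner(fake))  # (True, 'ok')
```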
122 posts and 91 images omitted.
>>19013 >>19019 Either of these would be acceptable as banners here Anon, but you'll need to figure out a creative layout that meets our formatting requirements (>>252). I'd also suggest you try to simplify so that they 'read' well, as with your first sauce material. >=== -add crosslink -minor prose edit
Edited last time by Chobitsu on 01/26/2023 (Thu) 19:00:15.
>>19074 Then these are not banners but memes: Robowaifu Propaganda and Recruitment >>2705
>>19418 Ahh, got it. Thanks Anon!
>>9674 Not the anon who asked but it sounds sad. What happened to her in the end?
>>26448 Hello newbie, welcome! I suppose you'll have to find out for yourself! You can find sites that will let you read this mango online. I hope you enjoy our board, Anon. Please look around and don't be afraid to ask questions along the way. Cheers! :^)

HOW TO SOLVE IT Robowaifu Technician 07/08/2020 (Wed) 06:50:51 No.4143 [Reply] [Last]
How do we eat this elephant, /robowaifu/? This is a yuge task obviously, but OTOH, we all know it's inevitable there will be robowaifus. It's simply a matter of time. For us (and for every other Anon) the only question is will we create them ourselves, or will we have to take what we're handed out by the GlobohomoBotnet(TM)(R)(C)? In the interest of us achieving the former I'll present this checklist from George Pólya. Hopefully it can help us begin to break down the problem into bite-sized chunks and make forward progress. >--- First. UNDERSTANDING THE PROBLEM You have to understand the problem. >What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory? >Draw a figure. Introduce suitable notation. >Separate the various parts of the condition. Can you write them down? Second.


Edited last time by Chobitsu on 07/19/2023 (Wed) 14:10:30.
118 posts and 30 images omitted.
>>24559 I feel people are taking the wrong message from my post. I never said that I want to simulate a human mind, nor do I think that is a useful goal for robowaifu. My message is that machine learning is a useful tool in your toolbox, but it should not be the only thing you use. >I wonder if you could subdivide this between different AIs? Yes, absolutely, and that will be very important. >There could be a text file that stores every event the robowaifu recognizes, it gets fed to an LLM to summarize it, this summary gets stored to the "long-term memories" file... A better approach would be to identify important things to remember (you could ask the LLM), create an embedding of the information and then store the embedding with the text inside a database. To retrieve a memory, take the input before it’s fed into the LLM and query the DB for related memories to be inserted into the pre-prompt. (This is not a new idea.) Skim this readme, I think you will find it useful: https://github.com/wawawario2/long_term_memory >>24568 I don’t think we need to copy the monkey meat to succeed; AI NNs are vaguely inspired by real neurons but are not that similar in reality. So there is no reason the waifu brain needs to be accurate. I don’t think there is a "subconscious language", but the brain for sure is a collection of many independent systems that share information, not a single system. The reason I speculate this is due to the existence of conditions like "callosal syndrome" (where the connection between the two hemispheres is damaged) and the types of behaviors associated with it. So one way this idea could be applied is that a waifu brain would have some sort of event bus with different "modules" that listen for and publish events. I would love some input right now; here are the things on my "TODO" list:
- I need to do a deep dive into psychology; this would be useful for figuring out what "mechanics" this mind should be governed by. (requesting opinions and advice)
- I need to get more hands-on experience with actually working on ML models; I have a background in programming but nothing to do with ML. (requesting opinions and advice)
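The embed-then-retrieve memory scheme described above can be sketched with a toy bag-of-words "embedding" standing in for a real model. Everything here (embed(), MemoryStore) is purely illustrative, not any library's API; a real system would use a proper embedding model and a vector database.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would call a model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    def __init__(self):
        self.items = []  # list of (text, embedding) pairs

    def remember(self, text):
        self.items.append((text, embed(text)))

    def recall(self, query, k=1):
        """Return the k stored memories most related to the query."""
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.remember("anon likes green tea in the morning")
store.remember("the servo in the left arm needs replacing")
print(store.recall("what tea does anon like?"))
```

The recalled text would then be prepended to the pre-prompt before the LLM call, exactly as the post describes.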


>>24612 My apologies for not responding sooner, Anon. >nor do I think that is a useful goal for robowaifu. While I think that's a matter of degree, I'll agree with the sentiment in general. BTW, we now have a new thread on-topic with this : (>>24783). >I don’t think we need to copy the monkey meat to succeed, AI NN are vaguely inspired by real neurons but are not the that similar in reality. Interestingly, the so-called 'monkey meat', as you put it, is now being used in conjunction with in-silico devices, just as predicted by /robowaifu/ ahead of time. (>>24827) > Only time will tell how useful this approach will be for robowaifus, but I think there is little doubt we'll see these systems being used in guided missiles and drones within the decade. >my ML TODO I'd suggest starting with the recommended post from our /meta threads, Anon. > -How to get started with AI/ML for beginners (>>18306) >But we can take advantage of something really nice. We already have LLMs: you can use large LLMs to create the labeled data and basically extract what they learned. We are already seeing exciting developments where LLMs are being used to train other models.


>>24612 >- I need to do a deep dive into psychology, this would be useful for figuring out what "mechanics" this mind should be governed by. (requesting opinions and advice) I answered here >>24861
I upgraded this: >>10317. Though, nobody seems to care, since it was broken and no one complained or fixed it. It's not for downloading the files, just for getting the metadata for those you already have. It doesn't work for renamed files, for example where the title of the paper was put into the filename. I want to use this to extract the metadata and use it in something like Obsidian, so I can have the description of the paper there along with the title and the link. At some point, automatically turning keywords into tags would also be interesting. (Indentation might have been botched in the original post, since copying from Emacs seems to not work very well)

# pip install arxiv first
import os
import arxiv

# Getting metadata for your ArXiv.org documents
AI_PAPERS_DIR = os.path.expanduser("~/Games/Not-Games/AI_Papers/")
if not os.path.isdir(AI_PAPERS_DIR):  # fall back to asking if the default dir is missing
    AI_PAPERS_DIR = os.path.expanduser(input("The dir with papers: "))
filenames = os.listdir(AI_PAPERS_DIR)
id_list = []
for filename in filenames:
    if len(filename.split('.')) == 3:


>>26312 Thanks kindly, NoidoDev. :^)

Open file (156.87 KB 1920x1080 waifunetwork.png)
WaifuNetwork - /robowaifu/ GitHub Collaborators/Editors Needed SoaringMoon 05/22/2022 (Sun) 15:47:59 No.16378 [Reply]
This is a rather important project for the people involved here. I just had this amazing idea, which allows us to catalogue and make any of the information here searchable in a unique way. It functions very similarly to an image booru, but for markdown formatted text. It embeds links and the like. We can really make this thing our own, and put the entire board into a format that is indestructible. Anyone want to help build this into something great? I'll be working on this all day if you want to view the progress on GitHub. https://github.com/SoaringMoon/WaifuNetwork
10 posts and 4 images omitted.
>>16530 Nah I'm good, I know how to handle JSON. XD
>>16413 Finally started to sort my data in regards to robowaifu and AI in Obsidian today. > https://github.com/SoaringMoon/WaifuNetwork The data seems to not be available anymore. Does anyone have it, or should I contact OP? I might have downloaded it somewhere but can't find it right now. It might also be a good idea to make this into a project using something like LangChain, NLTK, or something else to extract keywords and turn them into tags.
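For the keyword-to-tag idea, a minimal stand-in using only the standard library shows the shape of the pipeline (LangChain or NLTK would do this far better; the stopword list here is an arbitrary sample): tokenize, drop stopwords, keep the most frequent terms as tags.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real pipeline would use NLTK's.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "for"}

def extract_tags(text, n=3):
    """Return the n most frequent non-stopword terms as candidate tags."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n)]

doc = "Robowaifu actuators: servo sizing, servo torque curves and servo control."
print(extract_tags(doc))  # 'servo' ranks first
```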
>>26259 >or should I contact OP? Yeah I think that'd be cool to find out what SoaringMoon's up to these days.
>>26263 >find out what SoaringMoon's up to these days. Well, that's easy, he's working in gaming. That's on his Github. He also has a Substack, and I saw him on the Discord a while ago. Probably just deleted the RW repo on Github for PR reasons.
>>26266 Cool! Well I hope he's doing well and will stop by to say hi again, and let us know how he's doing at some point. >Probably just deleted the RW repo on Github for PR reasons. Yeah, I warned you about hiding your power level the stairs, bro! :^)

OpenGL Resources Robowaifu Technician 09/12/2019 (Thu) 03:26:10 No.158 [Reply]
Good VR is one important interim goal for a fully-realized RoboWaifu, and it's also much easier to visualize an algorithm graphically at a glance than 'trying to second-guess a mathematical function'. OpenGL is by far the most portable approach to graphics atm.

>"This is an excellent site for starting with OpenGL from scratch. It covers from very introductory topics all the way to advanced topics like Deferred Shading. Liberal usage of accompanying images and code. Strongly recommended."

learnopengl.com/
https://archive.is/BAJ0e

Plus the two books related, but learnopengl.com is particularly well-suited for beginners.

www.openglsuperbible.com/
https://archive.is/NvdZQ

opengl-redbook.com/
https://archive.is/xPxml

www.opengl.org
https://archive.fo/EZA0p
Edited last time by Chobitsu on 09/26/2019 (Thu) 08:14:37.
22 posts and 18 images omitted.
>>1848
Unrelated: Does the system redraw from the bottom-up? For some reason I thought the drawing happened from the upper-left corner down, but apparently not. Given that the coordinate origin for the system starts in the lower-left corner (similar to a Cartesian system) I guess that makes sense. Live and learn.
>>26098 This post about the Lobster language is related to OpenGL; that's why it is fast. I don't think the fact that it's compiled matters much for speed, though it might have other advantages. For now I'm trying to use Python with OpenGL to do the same thing. Not to be confused with OpenCL, which I also need to use. I found the term "delta compression" for calculating differences between frames. I hope I can make the animations smaller that way. My current way of "programming" this is asking ChatGPT for every step while learning how it works. With basic knowledge of Python it works relatively well, even with GPT-3. I'm getting the terms I need to look up how to do things, plus code that needs some polishing.
>>26105 That's how most video codecs like WebM work: you keep only keyframes and replace what's in between them with only the transformation required to produce the next frames. There must be lots of libraries for this.
Open file (18.98 KB 768x768 delta_frame_15.png)
Open file (33.69 KB 768x768 delta_frame_37.png)
>>26106 Thanks, that's what I asked ChatGPT so I had something to ask further. I need to make it interactive, though, or like very short gifs with sound.
>>26108 Okay, I fell into a rabbit hole here. It was interesting, but probably useless. I rather need a collection of GIFs or so. I wondered why no one else had this idea before; I'm starting to understand where the problem is and why it might be hard or impossible. [quote] You are correct. In a delta compression approach where you keep the original frames and use delta frames to reconstruct, the primary advantage is not in reducing storage requirements but rather in potentially reducing the processing load when displaying or working with the frames in real-time. The advantage lies in the efficiency of processing and transmitting the frames, especially when dealing with limited computational resources or bandwidth. Here's a clearer explanation of the advantage:
1. Reduced Processing Load: By storing and transmitting only the delta frames, you can reduce the amount of data that needs to be processed, especially when displaying or working with the frames in real-time. Instead of working with full frames, you process the smaller delta frames, which can be computationally less intensive.
2. Real-Time Efficiency: In applications where real-time processing or streaming is crucial, delta compression can be advantageous. It allows for quicker decoding and display of frames, which is important in video conferencing, surveillance, and interactive applications.
3. Bandwidth Efficiency: When transmitting video data over a network, delta compression can reduce the required network bandwidth, making it feasible to stream video even with limited bandwidth.
However, it's important to note that you still need the original frames to apply the delta frames and reconstruct the complete frames. The advantage is in processing efficiency, not in storage efficiency. You trade off storage efficiency for computational and bandwidth efficiency.
If your priority is purely reducing storage requirements and you don't need real-time processing or streaming, then traditional video codecs that achieve high compression ratios while storing complete frames might be more suitable for your use case.[/quote]
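The delta-frame scheme being discussed reduces to a per-pixel subtract on encode and a per-pixel add on decode. A minimal sketch on flat lists of pixel values (real codecs add motion compensation and entropy coding on top of this):

```python
def delta_encode(prev, cur):
    """Per-pixel difference between two frames (flat lists of pixel values)."""
    return [c - p for p, c in zip(prev, cur)]

def delta_decode(prev, delta):
    """Rebuild the next frame from the previous frame plus its delta."""
    return [p + d for p, d in zip(prev, delta)]

keyframe = [10, 10, 200, 200]
frame2   = [10, 12, 198, 200]
delta = delta_encode(keyframe, frame2)
print(delta)                                    # [0, 2, -2, 0]
print(delta_decode(keyframe, delta) == frame2)  # True
```

Note how the delta is mostly zeros for similar frames, which is exactly what makes it compress well, and why you still need the keyframe to reconstruct.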

Python General Robowaifu Technician 09/12/2019 (Thu) 03:29:04 No.159 [Reply] [Last]
Python Resources general

Python is by far the most common scripting language for AI/Machine Learning/Deep Learning frameworks and libraries. Post info on using it effectively.

wiki.python.org/moin/BeginnersGuide
https://archive.is/v9PyD

On my Debian-based distro, here's how I set up Python, PIP, TensorFlow, and the Scikit-Learn stack for use with AI development:
sudo apt-get install python3 python3-pip python3-dev
python3 -m pip install --upgrade pip
python3 -m pip install --user tensorflow numpy scipy scikit-learn matplotlib ipython jupyter pandas sympy nose


LiClipse is a good Python IDE choice, and there are a number of others.
www.liclipse.com/download.html
https://archive.is/glcCm
58 posts and 14 images omitted.
Open file (79.43 KB 483x280 Screenshot_70.png)
Open file (124.54 KB 638x305 Screenshot_71.png)
Open file (124.48 KB 686x313 Screenshot_73.png)
Open file (186.01 KB 636x374 Screenshot_74.png)
Advanced use of exceptions in Python for reliability and debugging: >I Take Exception to Your Exceptions: Using Custom Errors to Get Your Point Across https://youtu.be/wJ5EO7tnDiQ (audio quality is a bit suboptimal)
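In the spirit of the linked talk, here is a small example of a custom exception type that carries context, so callers can catch the specific failure and inspect it rather than parsing a message string. ServoError and its fields are made up for illustration:

```python
class ServoError(Exception):
    """Raised when a (hypothetical) servo rejects a commanded angle."""
    def __init__(self, joint, angle):
        super().__init__(f"joint {joint!r} cannot reach {angle} degrees")
        self.joint = joint    # structured context travels with the exception
        self.angle = angle

def set_angle(joint, angle):
    """Command a joint angle, rejecting anything outside 0-180 degrees."""
    if not 0 <= angle <= 180:
        raise ServoError(joint, angle)
    return angle

try:
    set_angle("left_elbow", 270)
except ServoError as e:
    print(e.joint, e.angle)  # left_elbow 270
```

Because ServoError subclasses Exception, generic handlers still catch it, but code that cares can catch ServoError specifically and use the attached fields.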
>>24658 Great stuff NoidoDev. Exceptions are based.
I watched this video at 1.75x speed as a refresher, since I had some gaps from not writing Python in quite a while. It might also work for beginners with experience in some other language: > Python As Fast as Possible - Learn Python in ~75 Minutes https://youtu.be/VchuKL44s6E As a beginner you should of course go slower and test the code as you learn. While we're at it, Python has gained a lot of new features, including compiling better to C code now using Cython: https://youtu.be/e6zFlbEU76I
https://automatetheboringstuff.com/ Free online ebook. >Practical Programming for Total Beginners >If you've ever spent hours renaming files or updating hundreds of spreadsheet cells, you know how tedious tasks like these can be. But what if you could have your computer do them for you?
Related: Some anon wants to get started >>26616

Agalmatophilia Robowaifu Technician 09/09/2019 (Mon) 04:50:40 No.15 [Reply]
Is the desire for one's own waifubot a legitimate personal expression or simply yet another one of the many new psychoses stemming from First World problems?
10 posts and 2 images omitted.
both
> Here’s a list of ways they could be used, generated by ChatGPT. While the list is quite optimistic, it’s noteworthy that there are plenty of people already reporting using models in some of these ways. > Companionship: > Virtual Companions: LLMs can serve as virtual companions, providing conversation and interaction for individuals who may feel lonely or isolated. > Social Simulation: By simulating social interactions, LLMs can help individuals practice social skills in a safe and controlled environment. > Counseling and Therapy: > Mental Health Screening: LLMs can be used to conduct initial mental health screenings, helping to identify individuals who may need professional help. > Cognitive Behavioral Therapy (CBT) Support: LLMs can assist in delivering cognitive behavioral therapy exercises, helping individuals to manage symptoms of disorders like anxiety or depression. > Emotional Support: > Mood Monitoring: LLMs can be used to track an individual’s mood over time, providing insights into emotional patterns and triggers. > Crisis Support: While not a replacement for professional intervention, LLMs can provide immediate responses in crisis situations, offering support until professional help can be accessed. > Education and Awareness: > Mental Health Education: LLMs can provide information and resources on mental health topics, helping to raise awareness and reduce stigma. > Stress Management Techniques: They can educate individuals on various stress management techniques such as mindfulness, breathing exercises, and relaxation techniques. > Personal Development: > Mindfulness and Meditation Guidance: LLMs can guide individuals through mindfulness and meditation exercises to promote mental well-being. > Motivational Support: By offering encouragement and tracking progress, LLMs can help individuals stay motivated towards achieving personal goals. > Behavioral Change:


I think it's beautiful how you can spot obvious and clear parallels between the Pygmalion and Galatea myth and robowaifu technicians. Sometimes I wonder if those ancient Greeks would have been able to notice it as well.
>>26055 Nice, I didn't know there was a painting of it. I might look for a reproduction or art print in some time.
>>26057 there are multiple paintings, so you can pick and choose

/robowaifu/meta-8: Its Summertime, why even wait? Robowaifu Technician 06/24/2023 (Sat) 19:24:05 No.23415 [Reply] [Last]
/meta, offtopic, & QTDDTOT >--- General /robowaifu/ team survey (please reply ITT) (>>15486) >--- Mini-FAQ >A few hand-picked posts on various /robowaifu/-related topics -Why is keeping mass (weight) low so important? (>>4313) -How to get started with AI/ML for beginners (>>18306) -"The Big 4" things we need to solve here (>>15182) -HOW TO SOLVE IT (>>4143) -Why we exist on an imageboard, and not some other forum platform (>>15638, >>17937) -This is madness! You can't possibly succeed, so why even bother? (>>20208, >>23969) -All AI programming is done in Python. So why are you using C++ here? (>>21057, >>21091)


Edited last time by Chobitsu on 10/15/2023 (Sun) 09:37:18.
374 posts and 135 images omitted.
>>26088 >Well shit they didn’t take me. Anyone have any tips on how to move forward? The company told me to learn UiPath. Oh, too bad, but don't let them discourage you. UiPath is a robotic process automation (RPA) platform, so they likely meant learning workflow automation rather than user interfaces. I'm not sure I can give you advice, but I would build some projects I could show off, and not listen too much to one company in regards to the area. Maybe look into LangChain.
We need a new /meta/ - this one is bump limited now.
>>26090 hahah I mean, to be fair, automation will become a very important skill and there are some job listings that demand knowledge of it (UiPath, Automation Anywhere). But as a newbie I gotta do Python first I guess. >>26094 Shouldn’t I first get a foothold in Python? I don’t want to mix these two languages. For now I at least have a UiPath certificate.
>>26100 Yeah, learning how to script is pretty much a necessity if you want to use a computer for its intended purpose. Scripting isn't programming; the language isn't really important, and you can switch languages easily once you know one. It's just preference at that point.
NEW THREAD NEW THREAD NEW THREAD (>>26137) (>>26137) (>>26137) (>>26137) (>>26137) NEW THREAD NEW THREAD NEW THREAD
