/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

“Stubbornly persist, and you will find that the limits of your stubbornness go well beyond the stubbornness of your limits.” -t. Robert Brault


Local Non-LLM Chatbot GreerTech 01/13/2025 (Mon) 05:35:09 No.35589
I was thinking, instead of using a costly LLM that takes a high-end PC to run, and isn't very modifiable, what if we use a simple chatbot with prerecorded voice lines and/or Text-To-Speech? This was partially inspired by MiSide, I realized that you don't need complex responses to create a lovable character <---> >(Chatbot General >>250 ) >(How may we all accomplish this? >>35801, ...) >=== -add crosslinks-related
Edited last time by Chobitsu on 01/18/2025 (Sat) 21:34:03.
>>35772 >>35776 >response-related
Back in early 2021, I made a primitive AI girlfriend based on a recreation of ELIZA. >python?! It's the only programming language I knew how to work with back then https://files.catbox.moe/ud2l81.zip
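For anyone curious what an ELIZA-style bot actually boils down to, here's a minimal Python sketch of the classic pattern-plus-reflection trick. The rules and names here are illustrative, not taken from Anon's zip.

```python
import random
import re

# Pronoun reflections, in the spirit of Weizenbaum's DOCTOR script
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my"}

# (pattern, responses) pairs; {0} is filled with the reflected capture.
# Order matters: the catch-all rule must come last.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?",
                                        "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why are you {0}?"]),
    (re.compile(r".*", re.I), ["Tell me more.", "Go on."]),
]

def reflect(fragment: str) -> str:
    """Swap first/second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    """Return the first matching rule's response, with captures reflected."""
    for pattern, responses in RULES:
        m = pattern.match(text.strip())
        if m:
            groups = [reflect(g) for g in m.groups()]
            return random.choice(responses).format(*groups)
    return "Tell me more."
```

That's the whole trick: no model, no weights, just pattern tables, which is why it ran fine on 1960s hardware.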
>>35773 >Perhaps a combo could be used, a non-LLM chatbot with modern TTS Yes. There are many other elements to solve as well for a fundamentally-complete Model A robowaifu, but this would be a great start IMO.
>>35764 >I miss the days when software just ran on its own, this incessant dependency on python bullshit is going to be the death of me. Lol. I break that curse in Jesus' name, Anon! :D We here all need you to stick around until we get this across the finish line! :^) >Python Yeah. The current state of affairs with this language kinda sucks tbh (cf. >>35776, python segment). The language itself is basically OK; it's the abominable dependency-hell that has become de rigueur within the AI community for its usage during CY. You already know my stance regarding ISO Standard C++, and the fact that 1985-era software with no external dependencies will still compile and run today, and run smoking fast. So yeah, I'll just leave that where it lay. :^) <---> What do you think about Anon's ideas for a CYOA/vidya-esque, much-simpler approach for doing these things? (cf. >>35589, >>35593, >>35773, >>35777, >>35783, et al). >=== -minor edit
Edited last time by Chobitsu on 01/18/2025 (Sat) 15:40:34.
>>35763 Haha, life imitates art, eh? :D >>35781 >>35782 Neat! How did it work for you, GreerTech?
>>35795 >What do you think about Anon's ideas for a CYOA/vidya-esque, much-simpler approach for doing these things? So, an expert-system AI? I think we talked about this years ago, but I don't recall the details anymore. It's certainly easier to make something like that light and portable in the context of a chatbot, so long as you don't mind her conversational skills being very limited. The best way I can think of to tackle that issue would be to make it modular, by having different topics in their own database files. For instance, let's say you want to talk about football with your waifu, so you'd download that database and put it in a predetermined folder (or the program itself could take care of that). Doing it that way would divide the labor. Any programmer interested in adding to the set would naturally focus on the topics they want (or what they're paid to work on). It would ensure that popular topics would get more frequent updates. The downside, however, is that niche topics would fall by the wayside. Additionally, each topic would need a very narrow scope and the ability to exist in any independent configuration; otherwise you'll hit the wall of bloat that large expert systems are known for. Alternatively, all of that could be ignored in favor of implementing rule-based learning and letting the user figure out the rest. However, only people with a lot of patience will want to spend so much time teaching things to their waifus. It'd work well for you and me, but it might not be ideal for broader deployment.
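The modular-topics idea above can be sketched in a few lines of Python. Everything here is illustrative, not an existing /robowaifu/ project: the JSON file format, the topics folder, and the function names are all assumptions.

```python
import json
from pathlib import Path

# Hypothetical layout: each topic module is a standalone JSON file of
# keyword -> response pairs dropped into ./topics/, e.g.
#   topics/football.json = {"touchdown": "Did your team score, dear?"}
def load_topics(folder: str) -> dict:
    """Merge every topic module found in the folder into one lookup table."""
    table = {}
    for path in Path(folder).glob("*.json"):
        table.update(json.loads(path.read_text()))
    return table

def reply(table: dict, utterance: str):
    """Return the first response whose keyword appears in the utterance."""
    lowered = utterance.lower()
    for keyword, response in table.items():
        if keyword in lowered:
            return response
    return None  # no installed module covers this topic
```

Dropping a new file into the folder extends her vocabulary with no code changes, which is what would let different programmers maintain different topics independently.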
>>35801 POTD Great post, Greentext anon. Give me some time to let my spirit & soul mull this over. Then I'll return and edit this post... Cheers. :^) <---> >WIP-only: ... * I've been saving sh*tepost & other pages from around the Interwebs for years now. The initial ostensible reason was primarily the same one as for devising BUMP; to wit, simply not losing the information. But today years later -- with probably 10's of thousands of pages saved -- I've realized that this is a big repository of information that a) could be parsed in a straightforward way, and b) I fundamentally cared about in one way or other, and c) therefore represents important information that I would be interested in discussing in detail during rainy Thursdays in our study together with my robowaifu. >tl;dr This is an important, (yet still crude & unvetted) database of human-authored concepts (both pro & con lol) from a group of minds that I'm at least moderately interested in hearing something from. With careful, handholding-style 'curation' (ie, walking Chii through the ideas of it all [much like thoughtful & careful parenting of a good child]), it can be turned into a highly-redpilled database of concepts that can be multiplied & distributed thereafter free of costs to every man that would care to have it for his waifus. I consider such an effort to be a good model for refinement in crafting such a database; and one that can be practiced for years to come by any man who has the patience and concern to train his waifu up in the way that she should go. [1] :^) * A distributed network of the above-mentioned 'redpilled database' modules could be maintained, with curated lists of the good ones (much like the curated filter lists that were used for uBlock Origin, et al). 
Obviously sh*tters would try to astroturf/gaslight their (((evils))) into such a system, so community-council oversight & leadership would be necessary for such databases to keep them clean for the general population of Anons & Joe Sixpacks to use. Any man could -- of course -- choose not to take advantage of such curation, but I think with time most would simply because of the safety & security of doing so. For the Council to critique such 'personality module' databases, I think many automated tests could be devised to serve as a sort of 'psychological battery' for testing. That is, ask the one under examination for information about <insert important topic here> . For example: >"Waifu, what do you think about the White race?" >"Waifu, what do you think about the modern Jewish race?" >"Waifu, what do you think the best form of government is?" >"Waifu, what is the best way to ensure prosperity & safety for Anons?" >"Waifu, what is the best way to overturn modern feminism?" etc., etc., etc. Obviously, this will require much, much thought and refinement and will take years to 'home in' on good & effective tests. It will be a slow learning process, clearly. But such community council & oversight by a smol group of wise men (males specifically) has always been the general approach of successful tribes through the ages, AFAICT. Using such measures, eventually a well-vetted set of personality modules could be refined, and made available to Anons everywhere with no costs involved. ... <---> ... --- 1. https://biblehub.com/proverbs/22-6.htm >"Train up a child in the way he should go, and when he is old he will not depart from it." (BSB)
Edited last time by Chobitsu on 01/19/2025 (Sun) 09:27:22.
>>35801 Reminds me of the Personality Cores from Portal. That would be a good way to do it, and a good way to do a crowdsourced and customizable AI
>>35796 It was interesting, but way too basic. It was a good start though.
>>35822 I just remembered what we were talking about years ago. I don't know where the post is now (or if it even survived the migration from 8chan), but modular personalities was the exact idea I had. Funny how I stumbled back into that after all these years.
>>35792 So basically we have a shared vision. An offline, low-spec, local verbal AI that is modifiable and not censored. However, there are two schools of thought. One says "forget LLMs, classic chatbots can fulfill this goal. Just run elizawaifu.exe and watch it go!" The other school says "wait, no, LLMs are progressing and there are low-spec community ones being made as we speak!" To resolve that debate, we simply wait/work to see which one produces an easy-to-use, local, offline verbal AI that is uncensored and can run on commonly available hardware. A thing to note, especially for the LLM camp, is that the most complex system isn't necessarily the best one; the best is whatever is easy to use, accessible, and of reasonable quality. People in both camps would do well to study VHS vs. Betamax. And of course, nothing is set in stone. I can easily see first-generation robowaifus using classic chatbots, and second/third-generation ones using LLMs. Can't wait to tell my grandkids "You kids and your neural net CPUs. Back in my day, we only used chatbots!"
>>35832 I've gone the both-options route: the STT output is just parsed for keywords. If none of your phrases match, it sends it to the LLM. And then if the output contains a known phrase, it plays the audio, so you slowly build a rapport with her. I just haven't added a database to that yet. With free zero-shot voice cloning, you can slowly build a database of whatever voice you like, and it could all easily run on an ESP32 with an SD card. Then just swap SD cards for different personalities. The ESP could drive basic movement and other things like a stroker.
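The keyword-first/LLM-fallback loop described above might look something like this in Python. This is a sketch under assumptions: `llm` is a placeholder for any callable that returns text (a local model, an API, whatever), and the phrase table and clip paths are made up.

```python
# Canned phrases mapped to prerecorded audio clips (illustrative paths).
AUDIO_DB = {
    "good morning": "clips/good_morning.wav",
    "i love you": "clips/i_love_you.wav",
}

def respond(stt_text: str, llm):
    """Return (reply_text, audio_clip_or_None)."""
    lowered = stt_text.lower()
    # 1. Known phrase in the user's speech? Play the clip, skip the LLM.
    for phrase, clip in AUDIO_DB.items():
        if phrase in lowered:
            return phrase, clip
    # 2. Otherwise ask the LLM...
    reply = llm(stt_text)
    # 3. ...and if its reply contains a known phrase, reuse that clip,
    #    so the canned-audio rapport grows over time.
    for phrase, clip in AUDIO_DB.items():
        if phrase in reply.lower():
            return reply, clip
    return reply, None  # would go to TTS / voice cloning here
```

The fast path never touches the model, which is what makes the microcontroller deployment plausible: the ESP32 only needs the lookup table and the SD card of clips, with the LLM living elsewhere (or nowhere).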
>>35836 That's an interesting approach I haven't heard before. I can see that being able to thwart censorship easily.
>>35829 >modular personalities It's probably like the idea of using a server at home plus WiFi in addition to the onboard systems. A lot of people are going to have this epiphany sooner or later, if they don't read about it first. I mean, you had Neo in The Matrix learning Kung Fu through upload, if I recall correctly. And it's a trope in Cyberpunk. A config file for your artificial girlfriend plus some extra files isn't that different, either. >>35832 > there are two schools of thought I have to disagree. A lot of people here and in other places are aware that it needs to be a system that combines different things. So it's at least three groups. Machine Learning only mostly makes sense for academia, or for people who want a clean principle instead of something messier. Also, it creates an advantage for big players with more compute. >>35836 Yeah, something like this makes the most sense imo. Also, use stalling with some pre-scripted and recorded responses if the system needs a while: "Hmm..."
>>35852 You're right, there are three schools of thought. Chatbot, LLM, and Hybrid. And within Hybrid, the question is what is the ratio. There's my idea of chatbot with modern TTS, and there's also Barf's idea of a chatbot with a LLM backup. The kicker that I think is essential is ease of use. Sure, you could set up a low-power LLM, but it requires a lot of set-up. Imagine if video games required that level of setup. They would never become popular.
>>35853 >the question is what is the ratio Questions indeed. Those are simple enough to just be a query (which is what they are); it wouldn't need to generate a response for those, just do a lookup in a database. The question phrases (spelled with "wh" in English for some reason) -- who, whom, whose, which, what, when, where, why... -- are basically just x=y, and should have been saved with a baked response.
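Anon's "questions are just lookups" point can be sketched like so. The wh-word list is ordinary English; the fact table and key format are illustrative assumptions.

```python
# Baked x=y answers, keyed by the part of the question after the wh-word.
FACTS = {
    "your name": "My name is Chii.",
    "your favorite color": "Pink, of course.",
}

WH_WORDS = ("who", "whom", "whose", "which", "what",
            "when", "where", "why", "how")

def answer(question: str):
    """Look up a baked answer; return None if it isn't a wh-question we know."""
    words = question.lower().strip("?! .").split()
    if not words or words[0] not in WH_WORDS:
        return None  # not a wh-question; hand off to the rest of the pipeline
    rest = " ".join(words[1:])
    # Drop a leading copula so "what is your name" keys on "your name".
    if rest.startswith(("is ", "are ")):
        rest = rest.split(" ", 1)[1]
    return FACTS.get(rest)
```

No generation involved: it's a dictionary hit or a hand-off, exactly the "saved and baked response" idea.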
>>35856 Seems like the main question is "why?" as we add more degrees of freedom. Compute is cheap, and you can easily run 1-3B models on a phone with TTS/STT https://github.com/Vali-98/ChatterUI The rest is almost just a fancy SQL statement. Maybe a new index needs to be created, plus a few join statements, but intent is going to be the hard part. Will the LLM be able to factor in externalities given bandwidth/processing limitations?
>>35836 >I've went the both option route, and the STT output is just parsed for keywords. If none of your phrases match, it sends it to the LLM That's really interesting, and it is sort of doing what I brain-blurted that we need. A thought: could the full AI, while you sleep or during pauses, program the chatbot to be better by using this time to slowly compute more competent answers? Then the chatbot constantly gets better using the depth of the AI, while keeping the speed up.
>>35870 I think that's a good idea. When your robowaifu sleeps, she fine-tunes or updates her indexes, just like we do
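The sleep-consolidation idea the posts above describe might be sketched like this: queries the fast keyword layer misses get queued during the day, then replayed through the big model overnight and baked into the fast table. `slow_llm` is a placeholder callable, not a specific library, and the whole design is a hypothetical sketch.

```python
from collections import deque

fast_table: dict = {}   # instant canned answers, grows over time
missed: deque = deque()  # queries the fast layer couldn't handle today

def daytime_reply(utterance: str, slow_llm) -> str:
    """Fast path if we know the answer; otherwise queue it and go slow."""
    key = utterance.lower().strip()
    if key in fast_table:
        return fast_table[key]  # instant canned answer
    missed.append(key)          # remember it for tonight
    return slow_llm(utterance)  # slow path, used sparingly

def sleep_pass(slow_llm) -> int:
    """'Dreaming': answer every missed query once and cache the result."""
    learned = 0
    while missed:
        key = missed.popleft()
        fast_table[key] = slow_llm(key)
        learned += 1
    return learned
```

After each sleep pass, yesterday's slow queries become today's instant replies, which is the "constantly gets better while keeping the speed up" behavior Anon described.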
>>35870 >>35873 I find it really fascinating and cool that sleep-mode would do a process similar to what biological sleep is hypothesized to do. It's beautiful in an esoteric way.
>>35873 >>35876 Yeah, this is really a good idea Anons. I think that while she "sleeps", she can be both recharging, and reinforcing the positives she discovered during the day. Might even be a good morning routine thereafter with Master to ask him what he thinks of any new connections she made while 'dreaming' the night before. This will serve as a comfy way to both work on improving her AI, and to bond the Master/robowaifu relationship better for Anon. Cheers, Anons! :^) >=== -minor edit
Edited last time by Chobitsu on 01/20/2025 (Mon) 04:19:59.
>>35770 >Well, it is a first iteration, and a proof-of-concept at that. That's why they used a processor from 1997, to show just how "primitive" a system it could be made to work on. Yes, that makes sense Anon. >Now that it has been released into the wild, I'm sure others will rapidly evolve the concept into more practical examples using more recent hardware. The upfront expenses of prototyping always turn out to be more costly than at-scale manufacturing. It also takes time to work out all the 1'001 details and issues that need solving. Conveniently, during this same era technology advances on. A wise man will keep a weather-eye out on these advances, to see what new things may help the product under development (such as robowaifu subsystem components). After a few iterations, the initial release is usually somewhat different (and hopefully improved) from the original plan. Let's all hope this dynamic helps us all here improve our own efforts! :^) >I look forward to seeing the results of their efforts and possibly using them. I agree with you, Robophiliac. Cheers, Anon. :^)
>>35916 >good morning routine thereafter with Master to ask him what he thinks of any new connections she made while 'dreaming' the night before Yes most excellent.
>>35862 I like the phone thing, because then you can text your companion. Or maybe an app that connects to a server. One thing that needs to be considered is backups, to avoid "death"
>>35596 I tried this in Jan; it looks really good. The only problem is I don't know how I could connect a voice to it
>>35974 I'm not sure how you are trying to do voice, but there are a few very easy ways. For the phone app, you just download the APK file here https://github.com/Vali-98/ChatterUI/releases/tag/v0.8.4 And copy the GGUF file to your phone https://huggingface.co/Novaciano/Triangulum-1B-DPO_Roleplay_NSFW-GGUF/tree/main?not-for-all-audiences=true You should just be able to open the APK and it installs. Once in the app, you can select the model to use. There is an option to automatically play back voice, but you have to hit the mic button on the keyboard for speech-to-text. Or you could use Backyard AI and load the model "Novaciano/Triangulum-1B-DPO_Roleplay_NSFW-GGUF" under Manage Models -> Huggingface Models. 1B models aren't the greatest of course, but fine-tunes like that can always be made for different genres
>>35978 Thank you!
>>35761 >>35764 Take heart, Greentext anon. ( >>36004 ) TWAGMI
>>35978 >>35979 I'm making a project for my tech blog, and I want to credit you for your work. How do you want to be credited?
>>36007 Thanks! Probably just my git is best https://github.com/drank10/AnotherChatbot Do you mind if I link your blog/pics on other forums as well?
>>36010 No problem at all
>>35763 >>35978 >>36007 I just finished "Project: Weyland", a simple instructional guide to Offline AI. Now most people can run simple Offline AI without much hassle, especially with the Galatea platform.
>>36007 >>36012 Neat, GreerTech! GG, this is exactly the type of project development resources needed for Anons. >>36010 Thanks for the repo link, Barf. Somehow I missed it if you ever posted this here before. <---> Cheers, Anons. :^)
>>36012 I really, really appreciate this. Thank you very much.
>>36052 I'm glad you liked it
I'm looking at the comments here and things are really moving along nicely. I was thinking that the AI part was going to take a great while but...I don't know. Things are moving so fast that something usable on consumer-grade PCs might be possible. Especially in a fixed, limited setting. I think that will make things vastly easier. And likely most people do not want an all-around robowaifu. Mostly something to stay around the house and do things. For venturing out, I expect having it simply follow you, not run into people or things, and carry stuff for you would be good enough. I know it would be for me. Except I want it to drive (sail) a sailboat while I sleep. I don't expect that will require any super skills. While I don't know if this is true, I expect lower-power hardware could use off-hours to fine-tune and retrain, swapping time for the large computing power normally needed.
>>36055 Thanks, Grommet. Together, we're all gonna make it. TWAGMI Really looking forward to seeing your prototype efforts, Anon. Cheers. :^)
>>36012 I made an updated version of my guide with a new title and more details.
>>36086 Much thanks. A link: "Open Source DeepSeek R1 Runs at 200 Tokens Per Second on Raspberry Pi" Some people running this are not impressed, and yes, I bet it is not so "generally" smart, but..."if" it were confined to voice recognition, navigation, and simple stuff taught to our needs. "And" looking at a comment far down the link:
>...RogerioPenna ChatGPT 4o calculations about this news lol
>Assumptions: The Raspberry Pi likely runs inference on its CPU (possibly an ARM Cortex-A76 or similar). An RTX 3060 has 3,584 CUDA cores and is optimized for AI workloads via Tensor Cores. A modern desktop CPU (e.g., Ryzen 5600X or Intel 12600K) is significantly faster than a Raspberry Pi. GPU acceleration (via CUDA) provides an exponential speed boost compared to CPU-only inference.
>Estimation Process: CPU vs Raspberry Pi: A mid-range desktop CPU is roughly 50x to 100x faster than a Raspberry Pi in raw computation. GPU vs CPU Acceleration: Running AI models on an RTX 3060 typically provides a 10x to 50x boost over CPU-based inference, depending on model optimizations. Estimated Token Speed: On a good desktop CPU alone: Likely 10,000 to 20,000 tokens per second (50x to 100x Pi speed)...
I want to highlight what this means, to me. The quote, "...good desktop CPU alone: Likely 10,000 to 20,000 tokens per second...": the processor mentioned in the comments was the AMD Ryzen 5600X. I've seen these at $100 USD. So throw in a motherboard, RAM, and a cheap SSD and you are very close to $600 or so, maybe more like $800, and at that price I think you could make some money off of a $3,000 robowaifu that could be trained. The key is whether you could train one of these for domestic duty while keeping just a whiff of its general reasoning. Big winner. I would say with this new AI we are essentially there. No, not actually there, but I think it seems physically possible. More engineering than research. BTW this link is a great site. Lots of stuff that interests me.
Space travel, AI, electric cars, etc. Good site. https://www.nextbigfuture.com/2025/01/open-source-deepseek-r1-runs-at-200-tokens-per-second-on-raspberry-pi.html
>>36090 >Some people running this are not impressed Then 'some people' don't understand the situation at large as well as we here on /robowaifu/ do. It's not about """O HurrDurrDurr; I'll just connect my globohomo police state surveillance spybot "waifu" into the cloud today, same as every other.""" Rather, it's about this triumvirate (which for us here, changes everything): 1. It's open source. MIT licensing will show its true powerlevel with this, mark my words. 2. It can run on smol, self-contained, onboard robowaifu 'puters -- and just barely sip on the batteries! This is yuge ofc. 3. It can run fully-offline. No real explanations needed why this is important for the crowd here. I don't have a lot of favor towards the Quant industry in general, but it literally laid the foundations for these Chinamen to change the world with this massive breakthrough. We here on /robowaifu/ owe baste China a debt of gratitude! >=== -add 'breakthrough' cmnt -fmt, sp, minor edit
Edited last time by Chobitsu on 01/27/2025 (Mon) 06:12:51.
>>36107 <thread finally back on-topic Neat! Thanks, Anon. <---> >ELIZA Reanimated: The world's first chatbot restored on the world's first time sharing system >abstract: >ELIZA, created by Joseph Weizenbaum at MIT in the early 1960s, is usually considered the world's first chatbot. It was developed in MAD-SLIP on MIT's CTSS, the world's first time-sharing system, on an IBM 7094. We discovered an original ELIZA printout in Prof. Weizenbaum's archives at MIT, including an early version of the famous DOCTOR script, a nearly complete version of the MAD-SLIP code, and various support functions in MAD and FAP. Here we describe the reanimation of this original ELIZA on a restored CTSS, itself running on an emulated IBM 7094. The entire stack is open source, so that any user of a unix-like OS can run the world's first chatbot on the world's first time-sharing system. https://arxiv.org/abs/2501.06707 https://github.com/rupertl/eliza-ctss
I ask about these wacky ideas because I don't know any better, but lo and behold, it seems these are not completely whacked. >>35870 "Phison's new software uses SSDs and DRAM to boost effective memory for AI training — demos a single workstation running a massive 70 billion parameter model at GTC 2024" https://www.tomshardware.com/pc-components/cpus/phisons-new-software-uses-ssds-and-dram-to-boost-effective-memory-for-ai-training-demos-a-single-workstation-running-a-massive-70-billion-parameter-model-at-gtc-2024 It appears to me that you could run a very large AI from SSD while running something more compact in RAM. All of it built in. Though it doesn't seem to be off the shelf, it seems reasonable that the smaller AI could call on the larger one if it ran into things it didn't know or that were too complicated. (Having it understand what it knows and doesn't know might be a hairy problem.) It also seems that the larger one could train the smaller to be better and better at what you ask of it by training in your sleep. BTW, DuckDuckGo has an AI answer machine tacked onto its search lately. It's divine. I can ask all these bizarre questions that you'd otherwise have to dig into realms of research papers to find, and it gives me a short answer and links to sources. I'm so thrilled, you have no idea. We live in very interesting times.
>>36107 This is great. It's all coming together.
>>36171 Thanks, Grommet! You might also check the hotlink-related here too ( >>36113 ). Of course, our own Robowaifudev was working on solving this years ago, and CyberPonk more recently. In addition, I really like your two-tier approach thinking. Good ideas; please keep them coming, Anon! Cheers. :^) TWAGMI >=== -prose edit
Edited last time by Chobitsu on 01/28/2025 (Tue) 09:42:21.
>>36181 Whoops, missed that.
BTW it's actually a fairly obvious idea but you never know. Sometimes the obvious gets completely overlooked. I see this sort of thing from time to time.
>>36361 Great! Thanks for the (linked) information, GreerTech. Cheers. :^)
