/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.



Open file (23.97 KB 474x474 th.jpeg)
Local Non-LLM Chatbot GreerTech 01/13/2025 (Mon) 05:35:09 No.35589
I was thinking: instead of using a costly LLM that takes a high-end PC to run and isn't very modifiable, what if we used a simple chatbot with prerecorded voice lines and/or Text-To-Speech? This was partially inspired by MiSide; I realized that you don't need complex responses to create a lovable character.
<--->
>(Chatbot General >>250 )
>(How may we all accomplish this? >>35801, ...)
>=== -add crosslinks-related
Edited last time by Chobitsu on 01/18/2025 (Sat) 21:34:03.
Animal Crossing was pretty convincing at creating a lively town full of villagers. Of course, only the villagers could talk to you, and you couldn't really prompt them yourself. I think the most you could do was answer yes-or-no questions and give the villagers a nickname. It was good enough to provide the villagers with the illusion of life.
The problem with a "dumb" chatbot is you'll eventually exhaust its content. It'd be great for a robot working around your house that didn't need deep conversation capabilities or long context recall. I'd probably treat it like an NPC at that point and skip its dialogue once I'd heard all it had to offer.
>"You look like you could use some-" (a prompt I've heard before to initiate intimate encounters)
>"Yeah, yeah, that's nice, now suck my dick."
or
>I approach them and they greet me with the usual prompt "Hello, can I-" (I interrupt them because I don't want to wait for their drawn-out spiel)
>"Wash the dishes."
>"Okay..."
I haven't played MiSide, but it looks like an interesting game. LLMs are pretty slick though; with some of the Mistral models you can get a good amount of mileage out of a tiny resource footprint. It's an odd concept, though, giving a robot the intelligence to perform tasks around your house but no way to communicate without prescripted dialogue. Assuming the bot was advanced enough to navigate 3-D space and do chores around your house, it'd presumably have the computing power to run an LLM anyway. I might be overshooting the scope of your vision though.
>>35593
True. It depends on the computing power and scope of the robowaifu. Mine is very low power; it uses a robot vacuum base and a Bluetooth speaker connected to either a computer or a phone.
>>35594
You can run both whisper.cpp STT and a 1B LLM on a Raspberry Pi or Orange Pi, and TTS should be quick using canned voices from Coqui TTS / Piper.
There are roleplay 1B fine-tune models like this one that are only about 800MB:
https://huggingface.co/Novaciano/Triangulum-1B-DPO_Roleplay_NSFW-GGUF
That would give you full end-to-end speech on a Roomba, and then it's just a question of how much compute you want to add / offload.
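A minimal sketch of that end-to-end pipeline (STT -> small LLM -> TTS), assuming whisper.cpp, llama.cpp, and Piper are already built and on disk. The binary names, model paths, and flags below are assumptions for illustration and will vary by version; treat this as the shape of the pipeline, not exact invocations.
```python
# Rough pipeline sketch: record -> whisper.cpp (STT) -> llama.cpp (LLM) -> Piper (TTS).
# Binary names, model paths, and flags are assumed and may differ in your build.
import subprocess

def transcribe(wav_path: str) -> str:
    # whisper.cpp is assumed to write "<wav_path>.txt" when given -otxt.
    subprocess.run(["./whisper-main", "-m", "models/ggml-tiny.en.bin",
                    "-f", wav_path, "-otxt"], check=True)
    with open(wav_path + ".txt") as f:
        return f.read().strip()

def generate_reply(user_text: str) -> str:
    # Any small GGUF model (e.g. an ~800MB 1B roleplay finetune) keeps this usable on a Pi.
    out = subprocess.run(["./llama-cli", "-m", "models/waifu-1b.gguf",
                          "-p", f"User: {user_text}\nWaifu:", "-n", "64"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split("Waifu:")[-1].strip()

def speak(text: str, out_wav: str = "reply.wav") -> None:
    # Piper is assumed to read text on stdin and write a wav.
    subprocess.run(["piper", "--model", "voices/en_US-lessac-medium.onnx",
                    "--output_file", out_wav], input=text, text=True, check=True)

if __name__ == "__main__":
    heard = transcribe("input.wav")
    reply = generate_reply(heard)
    speak(reply)
```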
>>35596
Thank you! I suppose the next move is to make that into an easy-to-run package. I want verbal robots to be moderately easy to use.
>>35589
>I realized that you don't need complex responses to create a lovable character
This -- Chobits, during Chii's 'early days' with Hideki, being a canonical example of this idea, IMO.
Also, I'm highly in favor of Anons seeking alternatives to the whole "LLM-only-since-its-the-do-all-be-all-and-end-all-final-word-in-all-things-AI'y-and-robotic'y" mindset. I know when this general issue is mentioned, Anons here & elsewhere pay lip service to the retort, but then proceed on with the GH programming in the matter just as before. Alway rember:
>Anyone/anything that ties you down to using the cloud isn't your fren, and it isn't in line with the spirit of Anonymous either.
Kudos to you for this very thread idea, OP. Cheers. :^)
>>35593
>vidya tropes as guiding principles for robowaifu interaction design...
Good thinking, Anon. Obviously, this is both a well-worn path (ie, reliably-achievable today), and one that is low-overhead in general, computationally speaking.
>>35594
>Mine is very low power
This is great! The Model A by the great Henry Ford's team wasn't particularly powerful, but it changed the world. I recommend we all adopt a similar approach during these early years, for a wide variety of reasons.
>>35596
Great information, Barf! Thanks for supplying links to resources for every'non here.
<--->
Cheers, Anons. :^)
>>35602
>"LLM-only-since-its-the-do-all-be-all-and-end-all-final-word-in-all-things-AI'y-and-robotic'y" mindset
I agree. Pre-recorded sound or video files played in response to STT input are very easy to do, and a good option for playing lines mixed with background noise/music/moaning, etc. The program I made already has that option if anyone needs a code example.
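A rough sketch of that keyword-to-clip idea (not Anon's actual program): map keywords in the STT transcript to prerecorded files and play whichever matches. The clip paths and the use of `aplay` are assumptions.
```python
# Minimal keyword -> prerecorded clip lookup; clip names and `aplay` are illustrative assumptions.
import subprocess

RESPONSES = {
    "hello": "clips/greeting.wav",
    "dishes": "clips/acknowledge.wav",
    "goodnight": "clips/goodnight.wav",
}

def respond(stt_text: str) -> bool:
    """Play the first clip whose keyword appears in the transcribed text."""
    lowered = stt_text.lower()
    for keyword, clip in RESPONSES.items():
        if keyword in lowered:
            subprocess.run(["aplay", clip], check=False)
            return True
    return False  # no match; a caller could fall back to TTS or an LLM here

if __name__ == "__main__":
    respond("Hello, could you wash the dishes?")
```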
Chatbot General >>250
Looks like there is hope for smaller LLMs running on minimal hardware.
"Modern AI On Vintage Hardware: LLama 2 Runs On Windows 98"
https://hackaday.com/2025/01/13/modern-ai-on-vintage-hardware-llama-2-runs-on-windows-98/
Referring article:
https://www.msn.com/en-us/news/technology/blasting-ai-into-the-past-modders-get-llama-ai-working-on-an-old-windows-98-pc/ar-BB1rqMOf?ocid
Maybe there's hope for my AMD FX-8350.
>>35752 That's amazing! This gives me hope for local LLMs for all sorts of purposes.
>>35621
You make a good point, NoidoDev. I think this concept seems sufficiently unique to warrant its own thread (at least I'm hopeful that something unique may come out of the
>"don't use costly LLMs, bro!"
thinking that seems to be the premise). Our other general chatbot thread basically went full-ahead down the path that's led us to the ponderous monstrosities of today (which clearly were focused on one thing, and one thing alone IMO: ensuring that AI was fully in the control of the GH, to the exclusion of any focus but the elites' gold purses). Again, my take.
While the focus on edge computing has become prominent during the '23-'24 era, most still go down a heavyweight track, computationally-speaking. I want us to depart from that mindset completely as our primary focus. I'm hopeful that can happen ITT.
<--->
But as always, thanks NoidoDev! We all appreciate you trying to herd us into keeping things better-organized! :^)
BTW, I'll add that crosslink into the OP ITT. Cheers, Anon.
>>35752
Very cool, Robophiliac. Thanks for the information! Cheers. :^)
>>35755
This.
>>35757
Exactly my thoughts. It's easy to talk about the latest over-9000B-parameter model, but practical and/or inexpensive implementation requires more efficient technology. Not to mention that most high-end models are more restricted.
Open file (69.93 KB 680x378 all coming together.jpg)
>>35752
>mfw I have one of the shittiest ME laptops ever made with an over-spec SSD
Time to brush up on your serial networking protocols, because we're just a hop, skip, and a jump away from 720k waifus.
>>35761
The first man to make a user-friendly, low-spec, local AI will be the Peter Weyland of our time.
Open file (732.88 KB 568x538 unknown.png)
>>35761 >>35763
Yeah, I just skimmed through the instructions and realized that the process is way more complicated than it should be for the result. Honestly, it's my fault for expecting better. I miss the days when software just ran on its own; this incessant dependency on Python bullshit is going to be the death of me. For the time being, it looks like my min-maxed meme machine is going to stay a word processor.
>>35764 You can almost say Python SLITHERS into everything and CONSTRICTS software
>>35764 Well, it is a first iteration, and a proof-of-concept at that. That's why they used a processor from 1997, to show just how "primitive" a system it could be made to work on. Now that it has been released into the wild, I'm sure others will rapidly evolve the concept into more practical examples using more recent hardware. I look forward to seeing the results of their efforts and possibly using them.
>>35621
Yes, these posts all belong there.
>>327575
I disagree, unless we make this the chatbot-sans-LLM thread, and that thread the LLM chatbot thread. That would be ideal, considering how quickly LLMs dominate all AI discussion. Having a space to discuss alternatives would be an eloquent solution. I'm too lazy to move these posts to the other thread either way. (MaidCom dev eats all my free time.)
>>35761
Could you run an SQLite bench? My idea for an ultra-low-end AI involves a large (hundreds of GBs) reserve of information and various lines to insert as strings in a fill-in-the-blanks format. Essentially a pre-rendered AI, where processing speed is traded for seek speed and being enormous, with various smol programs rapidly loaded whenever needed. Dedicated calculators are better than the best ClosedAI can manage anyways. As for making her good at conversation, I'm a hardware guy. A tiny AI, perhaps transformer-based, would handle her "persona" when just chattin'. My personal intention is for MaidCom to default to a small and efficient SBC similar to the ODROID-M1S, with a huge SSD bolted on, that barely uses any power while being good enough.
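A rough sketch of what that "pre-rendered AI" could look like in SQLite; the schema, intent names, and fill-in-the-blank template format are all invented for illustration. The point is that answering is a seek plus a string fill, not generation.
```python
# Sketch of a "pre-rendered" response store: one big indexed SQLite table of
# fill-in-the-blank templates. Schema and intent names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect("responses.db")
conn.execute("""CREATE TABLE IF NOT EXISTS lines (
                    intent   TEXT,
                    template TEXT)""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_intent ON lines(intent)")
conn.execute("INSERT INTO lines VALUES (?, ?)",
             ("weather_smalltalk", "I heard it will be {condition} today, Master."))
conn.commit()

def respond(intent: str, **blanks) -> str:
    # Seek by intent, then fill the blanks; no generation involved.
    row = conn.execute("SELECT template FROM lines WHERE intent = ? LIMIT 1",
                       (intent,)).fetchone()
    return row[0].format(**blanks) if row else "Hmm."

print(respond("weather_smalltalk", condition="rainy"))
```
With an index on the lookup key, the cost stays roughly constant even as the table grows into the hundreds of GBs, which is the trade Kiwi describes: disk space and seek speed in place of compute.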
Perhaps a combo could be used, a non-LLM chatbot with modern TTS
>>35772 >>35776 >response-related
Back in early 2021, I made a primitive AI girlfriend based on a recreation of ELIZA.
>python?!
It's the only programming language I knew how to work with back then.
https://files.catbox.moe/ud2l81.zip
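The catbox archive above is Anon's own code; the snippet below is only a generic ELIZA-style sketch of the pattern-plus-canned-reply approach, not a reproduction of it.
```python
# Generic ELIZA-style responder: regex patterns with canned, slot-filling replies.
import re
import random

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),   ["What makes you say you are {0}?"]),
    (re.compile(r".*"),                  ["Tell me more.", "I see. Go on."]),  # catch-all fallback
]

def reply(text: str) -> str:
    for pattern, answers in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(answers).format(*match.groups())
    return "Hmm."

print(reply("I feel lonely today"))
```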
>>35773
>Perhaps a combo could be used, a non-LLM chatbot with modern TTS
Yes. There are many other elements to solve as well for a fundamentally-complete Model A robowaifu, but this would be a great start IMO.
>>35764
>I miss the days when software just ran on its own; this incessant dependency on Python bullshit is going to be the death of me.
Lol. I break that curse in Jesus' name, Anon! :D We here all need you to stick around until we get this across the finish line! :^)
>Python
Yeah. The current state of affairs with this language kinda sucks tbh (cf. >>35776, python segment). The language itself is basically OK; it's the abominable dependency-hell that has become de rigueur for its usage within the AI community during CY. You already know my stand regarding ISO Standard C++, and the fact that 1985-era software with no external dependencies will still compile today and run smoking fast. So yeah, I'll just leave that where it lay. :^)
<--->
What do you think about Anon's ideas for a CYOA/vidya-esque, much-simpler approach for doing these things? (cf. >>35589, >>35593, >>35773, >>35777, >>35783, et al.)
>=== -minor edit
Edited last time by Chobitsu on 01/18/2025 (Sat) 15:40:34.
>>35763
Haha, life imitates art, eh? :D
>>35781 >>35782
Neat! How did it work for you, GreerTech?
>>35795
>What do you think about Anon's ideas for a CYOA/vidya-esque, much-simpler approach for doing these things?
So, an expert-system AI? I think we talked about this years ago, but I don't recall the details anymore. It's certainly easier to make something like that light and portable in the context of a chatbot, so long as you don't mind her conversational skills being very limited.
The best way I can think of to tackle that issue would be to make it modular, by having different topics in their own database files. For instance, say you want to talk about football with your waifu: you'd download that database and put it in a predetermined folder (or the program itself could take care of that). Doing it that way would divide the labor. Any programmer interested in adding to the set would naturally focus on the topics they want (or what they're paid to work on). It would ensure that popular topics get more frequent updates. The downside, however, is that niche topics would fall by the wayside. Additionally, each topic would need a very narrow scope and the ability to exist in any independent configuration, otherwise you'll hit the wall of bloat that large expert systems are known for.
Alternatively, all of that could be ignored in favor of implementing rule-based learning and letting the user figure out the rest. However, only people with a lot of patience will want to spend so much time teaching things to their waifus. It'd work well for you and me, but it might not be ideal for broader deployment.
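One way the modular-topic idea could be sketched, assuming each topic ships as a JSON file of keyword/response pairs dropped into a `topics/` folder; the folder name and file layout are assumptions, not an agreed format.
```python
# Load every topic module found in topics/ and merge them into one lookup table.
# The folder name and JSON layout ({"keyword": "response", ...}) are assumptions.
import json
from pathlib import Path

def load_topics(folder: str = "topics") -> dict[str, str]:
    table: dict[str, str] = {}
    for path in sorted(Path(folder).glob("*.json")):
        table.update(json.loads(path.read_text()))  # later files override earlier ones
    return table

def answer(table: dict[str, str], user_text: str) -> str:
    lowered = user_text.lower()
    for keyword, response in table.items():
        if keyword in lowered:
            return response
    return "I don't have a module for that yet."

if __name__ == "__main__":
    topics = load_topics()  # e.g. drop football.json into topics/ to add that subject
    print(answer(topics, "Did you watch the football game?"))
```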
>>35801
POTD
Great post, Greentext anon. Give me some time to let my spirit & soul mull this over. Then I'll return and edit this post... Cheers. :^)
<--->
>WIP-only:
...
* I've been saving sh*tepost & other pages from around the Interwebs for years now. The initial ostensible reason was primarily the same one as for devising BUMP; to wit, simply not losing the information. But today, years later -- with probably tens of thousands of pages saved -- I've realized that this is a big repository of information that a) could be parsed in a straightforward way, b) I fundamentally cared about in one way or other, and c) therefore represents important information that I would be interested in discussing in detail during rainy Thursdays in our study together with my robowaifu.
>tl;dr
This is an important (yet still crude & unvetted) database of human-authored concepts (both pro & con, lol) from a group of minds that I'm at least moderately interested in hearing something from. With careful, handholding-style 'curation' (ie, walking Chii through the ideas of it all [much like thoughtful & careful parenting of a good child]), it can be turned into a highly-redpilled database of concepts that can be multiplied & distributed thereafter free of cost to every man that would care to have it for his waifus. I consider such an effort to be a good model for refinement in crafting such a database; and one that can be practiced for years to come by any man who has the patience and concern to train his waifu up in the way that she should go. [1] :^)
* A distributed network of the above-mentioned 'redpilled database' modules could be maintained, with curated lists of the good ones (much like the curated filter lists used for uBlock Origin, et al). Obviously sh*tters would try to astroturf/gaslight their (((evils))) into such a system, so community-council oversight & leadership would be necessary to keep such databases clean for the general population of Anons & Joe Sixpacks to use. Any man could -- of course -- choose not to take advantage of such curation, but I think with time most would, simply because of the safety & security of doing so.
For the Council to critique such 'personality module' databases, I think many automated tests could be devised to serve as a sort of 'psychological battery' for testing. That is, ask the one under examination for information about <insert important topic here>. For example:
>"Waifu, what do you think about the White race?"
>"Waifu, what do you think about the modern Jewish race?"
>"Waifu, what do you think the best form of government is?"
>"Waifu, what is the best way to ensure prosperity & safety for Anons?"
>"Waifu, what is the best way to overturn modern feminism?"
etc., etc., etc.
Obviously, this will require much, much thought and refinement, and it will take years to 'home in' on good & effective tests. It will be a slow learning process, clearly. But such community council & oversight by a smol group of wise men (males specifically) has always been the general approach of successful tribes through the ages, AFAICT. Using such measures, eventually a well-vetted set of personality modules could be refined and made available to Anons everywhere with no costs involved.
...
<--->
...
---
1. https://biblehub.com/proverbs/22-6.htm
>"Train up a child in the way he should go, and when he is old he will not depart from it." (BSB)
Edited last time by Chobitsu on 01/19/2025 (Sun) 09:27:22.
>>35801
Reminds me of the Personality Cores from Portal. That would be a good way to do it, and a good way to build a crowdsourced, customizable AI.
>>35796 It was interesting, but way too basic. It was a good start though.
>>35822 I just remembered what we were talking about years ago. I don't know where the post is now (or if it even survived the migration from 8chan), but modular personalities was the exact idea I had. Funny how I stumbled back into that after all these years.
>>35792
So basically we have a shared vision: an offline, low-spec, local verbal AI that is modifiable and not censored. However, there are two schools of thought. One says "forget LLMs, classic chatbots can fulfill this goal. Just run elizawaifu.exe and watch it go!" The other school says "wait, no, LLMs are progressing, and there are low-spec community ones being made as we speak!"
To resolve that debate, we simply wait/work to see which one produces an easy-to-use, local, offline verbal AI that is uncensored and can run on commonly available hardware. A thing to note, especially for the LLM camp, is that the most complex system isn't necessarily the best one; it's whatever is easy to use, accessible, and of reasonable quality. People in both camps would do well to study VHS vs. Betamax.
And of course, nothing is set in stone. I can easily see 1st-generation robowaifus using classic chatbots, and second/third-generation ones using LLMs. Can't wait to tell my grandkids "You kids and your neural net CPUs. Back in my day, we only used chatbots!"
>>35832
I went the both-options route: the STT output is just parsed for keywords. If none of your phrases match, it sends it to the LLM. Then, if the LLM output contains a known phrase, it plays the audio, so you slowly build a rapport with her. I just haven't added a database to that yet. With free zero-shot voice cloning, you can slowly build a database of whatever voice you like, and it could all easily run on an ESP32 with an SD card. Then just swap SD cards for different personalities. The ESP could drive basic movement and other things like a stroker.
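A sketch of that routing logic: canned phrases first, LLM as fallback, and reuse of known audio when the LLM happens to land on a phrase you already have a clip for. The `ask_llm`, `play_clip`, and `speak_tts` helpers are stand-ins for whatever STT/LLM/audio pieces a given build uses, not a real API.
```python
# Hybrid routing sketch: canned responses first, LLM fallback second, and clip reuse
# when the LLM's reply contains a phrase we already recorded. Helpers are assumptions.
CANNED = {"good morning": "clips/good_morning.wav",
          "i love you": "clips/love_you.wav"}

def handle(stt_text: str, ask_llm, play_clip, speak_tts) -> None:
    lowered = stt_text.lower()
    for phrase, clip in CANNED.items():
        if phrase in lowered:
            play_clip(clip)        # exact match: instant canned response
            return
    reply = ask_llm(stt_text)      # no match: defer to the LLM
    for phrase, clip in CANNED.items():
        if phrase in reply.lower():
            play_clip(clip)        # the LLM landed on a known phrase; reuse its audio
            return
    speak_tts(reply)               # otherwise synthesize (or queue for voice cloning later)
```
Swapping the CANNED table (or the SD card it lives on) would then swap personalities, as described above.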
>>35836 That's an interesting approach I haven't heard before. I can see that being able to thwart censorship easily.
>>35829
>modular personalities
It's probably like the idea of using a server at home plus Wi-Fi in addition to the onboard systems. A lot of people are going to have this epiphany sooner or later, if they don't read about it first. I mean, you had Neo in The Matrix learning Kung Fu through upload, if I recall correctly. And it's a trope in cyberpunk. A config file for your artificial girlfriend plus some extra files is not that different.
>>35832
>there are two schools of thought
I have to disagree. A lot of people here and in other places are aware that it needs to be a system that combines different things, so it's at least three groups. Machine-learning-only mostly makes sense for academia, or for people who want a clean principle instead of something messier. Also, it creates an advantage for big players with more compute.
>>35836
Yeah, something like this makes the most sense imo. Also, use stalling with some pre-scripted and recorded response ("Hmm...") if the system needs a while.
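The stalling trick could look something like this: run the slow generation in a background thread and play a filler clip if it hasn't returned within a second or two. The helper names and the filler clip path are assumptions.
```python
# Play a short filler ("Hmm...") if the real answer takes too long to arrive.
import threading

def answer_with_stall(user_text: str, ask_llm, play_clip, speak_tts,
                      patience: float = 1.5) -> None:
    result = {}
    worker = threading.Thread(target=lambda: result.update(reply=ask_llm(user_text)))
    worker.start()
    worker.join(timeout=patience)   # give the model a moment to answer quickly
    if worker.is_alive():
        play_clip("clips/hmm.wav")  # stall while the model keeps thinking
        worker.join()
    speak_tts(result["reply"])
```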
>>35852
You're right, there are three schools of thought: chatbot, LLM, and hybrid. And within hybrid, the question is what the ratio is. There's my idea of a chatbot with modern TTS, and there's also Barf's idea of a chatbot with an LLM backup. The kicker I think is essential is ease of use. Sure, you could set up a low-power LLM, but it requires a lot of setup. Imagine if video games required that level of setup; they would never have become popular.
>>35853
>the question is what the ratio is
Questions indeed. Those are simple enough to just be a query (which is what they are); it wouldn't need to generate a response for those, just a lookup in a database. The q-phrases (w in English for some reason) -- who, whom, whose, which, what, when, where, why... -- those things are basically just x=y and should be saved with a baked response.
>>35856
Seems like the main question is "why" as we add more degrees of freedom. Compute is cheap, and you can easily run 1-3B models on a phone with TTS/STT:
https://github.com/Vali-98/ChatterUI
The rest is almost just a fancy SQL statement. Maybe a new index needs to be created, plus a few join statements, but the intent is going to be hard. Will the LLM be able to factor in externalities given bandwidth/processing limitations?
>>35836
>I went the both-options route: the STT output is just parsed for keywords. If none of your phrases match, it sends it to the LLM
That's really interesting, and it's sort of doing what I brain-blurted that we need. A thought: could the full AI, while you sleep or during pauses, program the chatbot to be better by using this time to slowly work out more competent answers? Then the chatbot constantly gets better using the depth of the AI, while the chatbot keeps the speed up.
>>35870
I think that's a good idea. When your robowaifu sleeps, she fine-tunes or updates her indexes, just like us.
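One possible shape for that "sleep" pass, assuming the day's unmatched questions were logged as JSON lines and the canned responses live in a SQLite table; the file, table, and helper names here are all illustrative.
```python
# Overnight pass (sketch): replay the day's unmatched questions through the big model
# and cache its answers so the fast canned layer improves while she "sleeps".
# `ask_big_llm` and the file/table names are assumptions.
import json
import sqlite3

def dream(log_path: str, db_path: str, ask_big_llm) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS canned (question TEXT PRIMARY KEY, answer TEXT)")
    with open(log_path) as f:
        for line in f:
            question = json.loads(line)["question"]
            answer = ask_big_llm(question)  # slow, but nobody is waiting at 3 AM
            conn.execute("INSERT OR REPLACE INTO canned VALUES (?, ?)", (question, answer))
    conn.commit()
    conn.close()
```
Next morning, the fast keyword layer can serve those cached answers instantly, which is the "gets better while keeping speed up" behavior described above.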
>>35870 >>35873
I find it really fascinating and cool that sleep mode would do something similar to what biological sleep is hypothesized to do. It's beautiful in an esoteric way.
>>35873 >>35876
Yeah, this is really a good idea, Anons. I think that while she "sleeps", she can be both recharging and reinforcing the positives she discovered during the day. It might even make for a good morning routine thereafter for her to ask Master what he thinks of any new connections she made while 'dreaming' the night before. This will serve as a comfy way both to work on improving her AI and to better bond the Master/robowaifu relationship for Anon. Cheers, Anons! :^)
>=== -minor edit
Edited last time by Chobitsu on 01/20/2025 (Mon) 04:19:59.
>>35770
>Well, it is a first iteration, and a proof-of-concept at that. That's why they used a processor from 1997, to show just how "primitive" a system it could be made to work on.
Yes, that makes sense, Anon.
>Now that it has been released into the wild, I'm sure others will rapidly evolve the concept into more practical examples using more recent hardware.
The upfront expenses of prototyping always turn out to be more costly than at-scale manufacturing. It also takes time to work out all the 1'001 details and issues that need solving. Conveniently, during this same era technology advances on. A wise man will keep a weather eye on these advances, to see what new things may help the product under development (such as robowaifu subsystem components). After a few iterations, the initial release is usually somewhat different (and hopefully improved) from the original plan. Let's all hope this dynamic helps us all here improve our own efforts! :^)
>I look forward to seeing the results of their efforts and possibly using them.
I agree with you, Robophiliac. Cheers, Anon. :^)
>>35916
>good morning routine thereafter for her to ask Master what he thinks of any new connections she made while 'dreaming' the night before
Yes, most excellent.
