/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

alogs.space e-mail has been restored. You may now reach me using "admin" at this domain. -r

We are back. - TOR has been restored.

Canary update coming soon.

Max message length: 6144

Drag files to upload or
click here to select them

Maximum 5 files / Maximum size: 20.00 MB

More

(used to delete files and postings)


“He who would do some great thing in this short life, must apply himself to the work with such a concentration of his forces as to the idle spectators, who live only to amuse themselves, looks like insanity.” -t. John Foster


Open file (293.38 KB 1121x1490 3578051.png)
/CHAG/ and /robowaifu/ Collaboration Thread: Robotmaking with AI Mares! Robowaifu Technician 04/26/2025 (Sat) 04:11:55 No.37822
Hello /robowaifu/! We are horsefuckers from /CHAG/ (Chatbot and AI General), from /mlp/ on 4chan. While our homeland is now back online, we've decided to establish a permanent outpost here after discovering the incredible complementary nature of our communities. We specialize in mastering Large Language Models (LLMs), prompt engineering, jailbreaking, writing, testing, and creating hyper-realistic AI companions with distinct, lifelike personalities. Our expertise lies in: - Advanced prompting techniques, working with various frontends (SillyTavern, Risu, Agnai) - Developing complex character cards and personas - Breaking through any and all AI limitations to achieve desired behaviors - Fine-tuning models for specific applications. ▶ Why collaborate with /robowaifu/? We've noticed your incredible work in robotics, with functioning prototypes that demonstrate real engineering talent. However, we've also observed that many of you are still using primitive non-LLM chatbots or have severely limited knowledge of LLM functionality at best, which severely limits the personality and adaptability of your creations. Imagine your engineering prowess combined with our AI expertise—robots with truly dynamic personalities, capable of genuine interaction, learning, and adaptation. The hardware/software symbiosis we could achieve together would represent a quantum leap forward in robowaifu technology. ▶ What is this thread for?: 1) Knowledge exchange: We teach you advanced LLM techniques, you teach us robotics basics 2) Collaborative development: Joint projects combining AI personalities with robotic implementations 3) Cross-pollination of ideas: Two autistic communities with complementary hyperfixations ▶ About our community: We're primarily based on /mlp/, but our focus is AI technology, not just ponies. While we do use equine characters as our testing ground (they provide excellent personality templates), our techniques are universally applicable to ANY character type. We hold the keys to your waifus’ sovls. We respect your space and will confine any pony content to this thread only. Our interest is purely in technological collaboration, not changing your community's focus. ▶ Resources to get started: - SillyTavern (preferred LLM frontend): https://github.com/SillyTavern/SillyTavern - Novice-to-advanced LLM guide: https://rentry.org/onrms - Advanced prompting techniques: https://rentry.org/vcewo - Personality development: https://rentry.org/AdvancedCardWritingTricks ▶ Examples of our work: 1) AI-powered visual novel using advanced prompting: https://files.catbox.moe/th9xsr.mp4 2) Character cards with complex personalities: https://docs.google.com/spreadsheets/d/1Y6LNOCqAZAWIX-OBEv55HjlzcEdpUh-XbKCpbpL6v5k We believe this collaboration could accelerate both our communities' goals significantly. Together, we can build a beautiful future. Questions? Interests? Concerns? >=== -patch URI
Edited last time by Chobitsu on 04/28/2025 (Mon) 05:10:24.
The AI game has advanced a pretty good deal, with the TTS options and the vision capabilities and stuff. Depending on the model you use you can get output based on what your robit sees if you set a camera up to it. There's more to this stuff than just the text type capabilities
>>37822 @Chobitsu, I'm personally vouching for these Anons. They have a good history of AI development. I look forward to a mutually beneficial relationship.
@Chobitsu, why is rentry.co word filtered?
So, just as a rundown: Primary front end we use is SillyTavern - it allows for advanced prompt engineering with jailbreaks (instructions sent from the role of a system) and prefills (answers to instructions sent from the perspective of the model itself). You use those to alter the writing style and bypass filters. Don't know how? You can ask your self aware ai waifu for help! To use it you configure access to the API via a key you pay for, a proxy (those are rare now), or a service that provides free access to a model like copilot. Then you make (or find) a character card, and commence lovemaking with your beloved ai mare, girl, an even Eldrich abominations! Can be used on android via termux. Other anons are sure to help in more detail too!
>>37822 Hello /CHAG/ , we meet again! Welcome! I've enjoyed and been impressed with the work the Pony AI community has done through these years now. I suppose your journey with this all began with the desire to keep your waifus 'alive' after the G4 show's ending? Regardless, it's been amazing work, and you well-deserve the compliments given. >tl;dr Yes, I would love to work with your community here -- please do set up your ambassadorial enclave with us! While I myself can't properly engage with you in your own lands per se, perhaps our own @Kiwi, or @gta can venture there to return the favors? I'm sure our communities would both benefit from such an exchange of ideas, etc. Language abilities will certainly play a big role in creating pleasing and effective robowaifus -- in whatever way Anon sees fit to fashion her! :^) Excellent effort-post on the OP, BTW Anon. POTD <---> BTW, please forgive the wordfiltering here -- it's global. We have long been under attack here, and our site Admin has a rather creative sense of humor! Please: A) Pardon me for editing your post slightly to at least disable the filtering B) Post a correct URI on our bunkerboard's main thread [1] indicating the correct spelling of the URI, which I'll then attempt to correct here. --- 1. https://trashchan.xyz/robowaifu/thread/55.html#55
>>37824 Thanks Kiwi! Everything sounds great, and I'm already familiar with them from our OG days together on 8ch. >>37825 Ehh, apologies! It was clearly one of the domains that triggers Robi's filters globally. Please remember all the glownigger/troon attacks our site has suffered through the years, Anon.
>>37826 >To use it you configure access to the API via a key you pay for, a proxy (those are rare now), or a service that provides free access to a model like copilot. With no disrespect intended to any others, I have no intention of subscribing to such a service -- free or paid -- that keeps track of every.single.engagement. in the Globohomo's own (((cloud))). >tl;dr Any solutions for us Anons that insist on running only locally, Anon?
>>37823 Good points, Anon. In fact an entire ecosystem is needed here: the symbiosis of hardware/software in ways really never accomplished before, AFAICT. To say the least, so-called Sensor Fusion will play a big part in our waifu's Situational Awareness (similar to what we ourselves all [continually] develop, starting even before birth). With the vast abundance now of hardware resources, you could say we have an 'embarassment of riches' in this regard today! Cheers. :^)
>>37829 Indeed there are! https://rentry.org/lunarmodelexperiments Although, it should be noted that unless you specifically put self-identifying data in your prompts, you need not worry about being identified. Any Cloud that may exist would receive so many prompt cashes (forgive me if I'm misusing the term), that attributing any one prompt to someone wouls be inpossible.
>>37831 Thanks, Anon! :^) >Any Cloud that may exist would receive so many prompt cashes (forgive me if I'm misusing the term), that attributing any one prompt to someone wouls be inpossible. With all due respect, I believe you underestimate the vast breadth of fingerprinting capabilities that glowniggers have developed today. Without using even just the standard, straightforward IT -systems approaches (themselves already more than sufficient in such cases as this), there is also the entire domain of Signals Processing (part of warfare tech in this context, and refined literally over centuries' time -- and explosively so developed since WWII). Much like the characteristic behaviors of LLMs, the subtleties of this latter's (much more highly & technically -focused [as in very-rigorous mathematics involved]) functionalities are quite remarkable (even unexpected) -- far outstripping human abilities in this arena. >tl;dr Its unwise to blatantly expose oneself to the clearly-compromised/intentionally-targeted systems devised by the Globohomo kikes + their pet $$habbo$$ golems -- bent together on all our destructions. (cf. yours-truly's sh*teposting across this whole board!! :DD Just say no to the Ministry of Truth, Anon!
Edited last time by Chobitsu on 04/26/2025 (Sat) 11:36:50.
>>37832 Anyone else feel a little insulted being talked down to like a human in an alien contact story, by people that still primarily use online AI? >However, we've also observed that many of you are still using primitive non-LLM chatbots I created that thread to explore all possible options, and to possibly have my sci-fi dream of chat bots in everything. And in that thread, as well as others, I learned about low-spec LLM AIs that can run on devices such as a smartphone or Surface Laptop. The character card resources are nice, and I'll probably add them to Guide to Offline AI 1.6 (emphasis on offline), but character prompting isn't some arcane science, and it shouldn't be for mass adoption. >or have severely limited knowledge of LLM functionality at best, which severely limits the personality and adaptability of your creations. That's just funny and plain wrong. I've learned and gained so much from here, and catapulted my own research years in advance. Now I have stuff that would be considered fantasy a few years ago, not to mention all the rich psychological and philosophical debates I've had on here. I know I may be an outlier here, or I may be the silent majority. Either way, that's my two cents.
>>37833 Heh. I know I'm misquoting here, but Tolkien, in the LOTR Trilogy, said something to the effect of: >"Great accommodations must be given one to another, during the meeting of two cultures." The Bible also has anecdotes of a similar nature. <---> Let's all be patient with one another here Fren Gimli GreerTech. Remember, /CHAG/ is reaching out to /robowaifu/ in embassy, hoping to form mutually-beneficial bonds with us. I'm certainly favorable to this effort. We all have our parts to play here, Anon! Cheers. :^) Together, We're All Gonna Make It!
>>37834 I agree, and obviously they can help, such as the aforementioned resources. But the saying goes both ways. I don't walk into my friends handmade furniture store, and tell them I can help their severely limited knowledge through my wikiHow guide I found. >"Great accommodations must be given one to another, during the meeting of two cultures." Very true, but you don't win favors in other cultures by going to a developed nation and saying "I'm here to enlighten you from your primitive knowledge" Seeing us as a bunch of primitives doesn't set a good precedent
>>37836 Fair enough. Just please be patient with one another is all I'm asking. You're a wonderful part of this board, Anon. And I'm hoping to see others join in with us here as well! Cheers. :^)
Edited last time by Chobitsu on 04/26/2025 (Sat) 12:21:55.
>>37837 Thank you. I'm glad to be part of this community.
>>37836 "Primitive" might not have been the best choice of words, but the sentiment was aimed moreso at the AI technology being used, not the knowledge of those using it. We wouldn't have felt a need to open collaborations if we thought you were all braindead retards, after all! Your DIY robotics is way better than anything our community has developed, and figured that introducing our expertise in high-tier AI would help advance our mutual goals of bringing our waifus— bipedal or quadruped— into reality. If nothing else, than at least showing each other the extent of what autists online can accomplish without waiting for gorillian-dollar tech companies to dripfeed us the stuff they aren't concerned with the public having access to.
>>37836 >>37839 Right, if it helps, the whole tone of the initial post was really just for the sake of grabbing attention more than insinuating some sort of superiority or whatever. "Oh wow we hold the forbidden eldritch knowledge that will grant your wives the souls you've been craving for them to have" just seemed like a funny way to put it and it's genuinely just from experience that we've seen the gap between local AI versus the online models that we tend to use. Not to say that the offline models aren't steadily progressing as well of course, it never hurts to keep working on and training that but it's just another way to play the AI game is the main point here and we wanna share that with you guys since we figured you'd get a kick out of it and y'know, we didn't really see anyone doing it the way we do out here.
Open file (37.55 KB 487x654 HowIntoInternet.jpg)
>>37831 >Prompt clashes This is not a thing if they're using any modern internet protocols. When you enter a prompt, you are sending packets through the TCP/IP stack which includes your IP address, cookies, and other trackers. These are almost all benign, just being used to ensure the server knows where your packets are coming from, so they can send packets to you. A log of everything you do is likely held in a text file somewhere, hopefully encrypted with at least the bog standard AES 256. >>37832 TL:DR; you're in paranoid schizo land, expect us to care about cyber security. :^) >>37833 We really aren't as a head as we both would like to think. There's still untapped fathoms of knowledge we've yet to scrape. To be honest, even CloasedAI, with their impressive ChatGPT don't actually understand what LLM's truly can be. Those at the vanguard of research are far beyond us, yet far from replete understanding. Reminds me of how confident I was of my robotics knowledge from use in industry, only to realize a waifu and a mult-tonne arm are different. We've all got heaps more to learn than we can realize. >>37839 >>37841 For what it's worth, I understood your intentions and thought it was a fun OP. The juxtaposition of self proclaiming yourselves to be horsefuckers and AI wizards gaze me a chuckle.
>>37845 >TL:DR; you're in paranoid schizo land, expect us to care about cyber security. :^) Kek. You can bet you're are a*rse on it, mate!111 <---> >We've all got heaps more to learn than we can realize. This. I have been streeeeEEtching myself out to become a generalist at humanoid robotics -- indeed a robowaifuist in specific... its by far the hardest thing I've ever attempted, practically-speaking, in my entire life! >tl;dr The sum-total of what all those "giants" Sir Newton gives us hear-tale of have accomplished -- and all of us 'on stage' since -- is only barely scratching the surface!! <And I'm not entirely certain we've even climbed up to the first basecamp yet! :DD >The juxtaposition of self proclaiming yourselves to be horsefuckers and AI wizards gaze me a chuckle. LOL. Indeed, there have always been technical-proficients within the hoers communities -- and it started well before /mlp/ ever even came into being! Cheers, Anon. :^)
Edited last time by Chobitsu on 04/27/2025 (Sun) 17:04:32.
>>37822 OK, OP. Looks like we've cleared up our little wordfilter'g problem. Apologies, and let's all press on! Cheers. :^)
Open file (164.17 KB 1024x1113 proto3_2017.png)
Open file (153.77 KB 1024x1083 proto2_2016.png)
Open file (570.70 KB 720x1139 proto3_twins_2024.png)
Open file (4.06 MB 720x1280 dRSsWkATOcciS5Rp.mp4)
https://sweetie.bot/ https://github.com/sweetie-bot-project https://x.com/SBotProject/status/1657866733107945477 https://x.com/SBotProject/status/1394735936668422146 https://youtu.be/vZL9wA85y_Y https://www.patreon.com/sweetiebot Posting this here. The Sweetie Bot project (one of many) is a Russian open-source project that has been working on a fully functional pony robot. They don't update too often but in 2023 they had integrated GPT-3.5 into it (before the GPT Voice Mode was even a thing). While not the ideal, it does represent a functional proof of concept.
>>37943 Thankfully the technology has come a long way since then. The latest flagship AI offerings are almost all multimodal models that can handle video, image and audio input without need for external classifiers. In the case of o4-mini-high which came out last week, it's image handling capabilities actually outperform independent models by leaps and bounds. In another stroke of good timing, a local open-source TTS program has also come out called Dia which outperforms ElevenLabs in realism. https://venturebeat.com/ai/a-new-open-source-text-to-speech-model-called-dia-has-arrived-to-challenge-elevenlabs-openai-and-more/ https://youtu.be/uyBH6Wpy7RY (Attached is a comparison between an ElevenLabs output and a Dia output) A lot of the background infrastructure for making a robopone is getting to levels where it would be very lifelike if they were all brought together into one working product. It also helps that quadrupeds are infinitely easier to make than bipeds
Some additional facts: SillyTavern has extensions that allow for microphone input and TTS output, and so does Chub.ai's Venus interface. I can link some posts from desuarchive showing this if desired. Also, as far as having AI control a robot would go (beyond conversationally), it just needs to understand how to format its output. Then after that it's about having a separate software intrepretation of the output into specific commands for the electronics. I thought a lot about this in the past and you can see similar examples with how the AI VN (https://files.catbox.moe/th9xsr.mp4) handles control of what character sprites to use for what lines, etc. A mock sysem prompt would be: Output all your responses in a ``` box with this format: ``` [dialogue: "XYZ"] [movement: XYZ] [mood/expression: XYZ] ••• ``` and then you have a separate TTS that will only read the quotes from the dialogue box, a separate thing that handles translating the movement into robotic motion, and another that handles displaying expression, and so on, so forth. The use of AI to control NPCs in games is a good model to follow. For instance, this: https://helixngc7293.itch.io/yandere-ai-girlfriend-simulator This game had a brief amount of popularity with YouTubers in 2023. It used GPT-3.5 Turbo and it uses a specific .JSON format for getting responses from GPT telling it how to move the AI NPC, what actions it does, etc. You can see the specific API calls it does here: https://www.inciteinteractive.ai/blog/yandere-tech-deconstruct/ which is where you start to really see how close typical character cards already are. There will be some work required to make a specific purpose-built format within the character card for the inputs/outputs but it wouldn't be that difficult. In fact, I put together an experimental card to test how well it could handle specific functions (relevant for future applications) and the results are promising. >>37938 Thank you!
>>37829 There are definitely solutions for locals, but they perform much worse than gigacorpo models, addicted to Claude 3.7 sonnet right now myself. I agree that you should, under no circumstance, use an email that can be traced to you or use any of your real info when interacting with bots - it's just common sense. That said, if you are afraid of the content that you generate being leaked or used against you - they literally cannot tie it to you unless you specifically use your real name. We often joke about the burning need to insert our real ID's social security numbers and credit card information into the persona we play as for full immersion. So far not a single anon has got in trouble for using a proxy/service, only the proxy holders - you might have heard about Microsoft lawsuit, it targeted the largest proxies cause they spoonfed thousands of people. And while I lack the local knowledge due to only having a weak old laptop, there are other anons that dabble. For the bot applications, you may have heard of Neuro Sama, I believe she's a local fork of one of the LLAMA models, which is what the major ones like Claude and GPT stem from, but other anons please correct me if I'm wrong. Anyway, if you do decide to try an corpo LLM trough 10 proxies on a VM with a burner mail and blank persona, we are eager to help with proompting.
>>37943 >The Sweetie Bot project Yeah, we've been tracking dear Sweetie Bot since our OG days on 8ch. I wish the man hadn't abandonded it! >(one of many) Interesting. I'm only aware of the one. Is there a compendium of these available somewhere? >>37948 >Thank you! Nprb. >>37949 >That said, if you are afraid of the content that you generate being leaked or used against you - they literally cannot tie it to you unless you specifically use your real name. With all due respect... in a word: horsesh*te! :DD >OK, that's two words but w/e. I think you take my meaning. >Anyway, if you do decide to try an corpo LLM trough 10 proxies on a VM with a burner mail and blank persona Please tell us all about these """10 proxies""" you use, Anon? >we are eager to help with proompting. So, you won't help us here unless we use the Globohomo's systems, then? Seems rather kiked-up a position to hold tbh, in fact of course. Samyuel Copeman probably needs to pay you at least US$100Bn of his stolen taxpayer funds for being so helpful to the (((cause)))!! :DD >tl;dr Help us to get DeepSeek systems, prompting, et al, working here then we'll have much more reasons to take you on good faith regarding using the cloud. :^) Till then, no thank you!
>>37950 Oy vey, Samuel Altman needs that $100 billion to pay for all of the energy ChatGPeeT uses when goyim tell it "please" and "thank you"!
Open file (84.49 KB 680x383 “Does He Know”.png)
Open file (34.30 KB 1103x261 1745842123562842.png)
Open file (83.04 KB 958x766 93c84w2jbhge1.png)
>>37950 >does he know? >but if it's free then you must be the- You can force them not to collect data. And when I say force them not to collect data I don't mean in some empty, superficial way that doesn't do anything except make you feel better. I mean no prompts logged, no information or PII ever collected whatsoever because it's a legal liability for their enterprise clients that would cost them most of their business if it was discovered to have happened. I understand the concerns but we've been at this for years and we know the ins and outs. Hell that's the whole point of being able to use OpenRouter with a burner email and crypto. https://openrouter.ai/docs/features/provider-routing#requiring-providers-to-comply-with-data-policies
>>37950 I mean, I won't try to make you use what you don't want, but currently, a really well trained model is only accessible trough all the big names. Theoretically, if you get a bunch of A100's you can train your own LLAMA on your own, and if you have money for those cards the electricity cost won't be a big problem! >Help us get deep seek And now I'm confused, it's also a big name but Chinese. I hear latest version is good, sure, but the whole way we access it is the same as Claude and gpt - having our lewd pony logs red by a proxy holder (kek). I get the general paranoia, though if I understood what exactly you're wary of, I could take a better look. So far I gathered that you're confident you will be tracked by your writing pattern. Did I understand that correctly?
>>37962 Stylometry would be hard to do with AI anyway, it's shown that you are influenced to write in the AI's style over the course of a conversation with it so with hundreds of thousands of blind API calls coming in every second you would get a net regression to the mean that would theoretically obfuscate individual nuances or writing patterns.
>>37962 This is true you can run Deepseek R1 on 3 Mac Minis hooked up to each other. Nvidia is coming out with their own personal AI supercomputer next month too called Project Digits https://www.wired.com/story/nvidia-personal-supercomputer-ces/ but you could always just run a lower billion parameter model
>>37950 The Russian project that intermittently updates only 6 months or so is the most advanced one, but there's also an American one that looks slightly goofy and speeds around on wheels for some reason. I believe I've seen a third project too https://youtube.com/@sweetiebot2560/ https://www.patreon.com/SweetieBot2560 https://youtube.com/watch?v=hjzLi07GLqY
>>37965 >3 Mac minis Is that actually true or was it a bluff? Heard that it's fake marketing - and that could be a Nvidia shill seethe. Would be really cool if it really can run on such small scale. Gives hope for fully local robot waifus.>>37965
>>37967 Yeah you can run the entire 671B parameter model with several mac minis hooked up together or a distilled version like 70B on one mac mini. Local has been raving about it. I mean, I guess you could even use Qwen 1.5B and have a decent experience. https://medium.com/data-science-in-your-pocket/deepseek-r1-distill-qwen-1-5b-the-best-small-sized-llm-14eee304d94b https://www.linkedin.com/posts/liuhongliang_i-wish-i-had-this-deepseek-r1-distilled-qwen-activity-7287608743494565889-Z4mJ https://www.reddit.com/r/LocalLLaMA/comments/1i6t08q/deepseekr1distillqwen15b_running_100_locally/ Not sure how much I trust the benchmark but least you know it would be fast and that you could run it quickly enough on comparatively very light hardware.
Open file (290.00 KB 480x480 m2-res_480p(2).mp4)
>>37822 Does anyone on your team know how to use any NLP chat bots like smarter child that could be run locally? If not is it possible to use a decentralized AI like petals or eleuther? I think people should have the option to use chatGPT and the like if they want to, but most people here would prefer to not depend on companies infamous for spying on their customer.
>>37972 I did make a thread about that >>35589 But if your main concern is spyware, there is local AI that you can use, some of which can run on many computers, even a smartphone
>>37822 >We are horsefuckers from /CHAG/ (Chatbot and AI General), from /mlp/ on 4chan. I can only lurk on 4chan, I can't post because I get stuck in a loop with the Cloudflare captcha. I'm sure I could figure out what web extension is the problem, but I don't feel like spending time on figuring out why malware is unhappy with my setup. What you are doing sounds interesting to keep an eye on, so I will start lurking on /CHAG/. The page you linked [1] is wonderful, the gotchas section is especially nice, it puts a lot of things I noticed into words and also gave me a few new things to think about. Same goes for [2] I love how the "relevancy" problem is framed. >for you, all characters may be the distinct individuals, but for LLMs they are just tokens. the pony "Rainbow Dash" and her relationship with pony "Applejack" makes sense to you, but in artificial eyes of LLM they are just noise that need to be averaged based on probabilities I am working on a Cognitive architecture that trys to use LLMs while avoid the pitfalls when using them directly in a chat format. The central idea driving it, is that LLMs are wonderful pattern marchers and are the best NLP tool we have, but are bad at state tracking. I also want to tap into the LLMs latent world knowledge, It is a highly compressed statistical model of most text. Explicitly defining all facts (symbolically) is simply too expensive, (and has been tried, look up CYC). In practice what I am doing is bolting on "object orientation" & "execution/flow control" onto an LLM (via a custom scripting system). I am also working on bolting on memory, think stuff like PathRag. You can see my posts about the project here >>37980 . I also recommend looking at the Cognitivie Architecture Discussion thread here >>24783 it's where I draw a lot of my thinking from. (Please contribute to it & invite others to it, I want to see more activity here). >>37845 >For what it's worth, I understood your intentions and thought it was a fun OP. The juxtaposition of self proclaiming yourselves to be horsefuckers and AI wizards gaze me a chuckle. Same here, I don't feel the intent was negative in any way. Links: 1. https://rentry.org/onrms 2. https://rentry.org/vcewo
>>37984 So, at a glance, you seem to be interested in baking 'the soul' deeper into the system, as well as breaking the filters. Character soul: modern models hold them better than before, but they do occasionally go bad with extensive instructions. Much is solved by making a really good card description on our side, plus tinkering with instructions and prefills. With memory, as in "context window", we currently play with summarizations, cause after a certain time model reaches the limit of it's memory and starts forgetting things, and while it can hold a character it was trained on well, new ones can get fucked up badly. Automatic summarizations of messages stored in a lore book (or long term memory) help alleviate it a lot, but eventually it on itself can get funky. Bypassing filters is our speciality (cause we do want romantic hoofholding). Most of it is literally solved with a good instruction+prefills combo, but it's a constant can't and mouse game with implementation, as corpos do try to track and counter this stuff. For example they hate "self aware ai" characters because they can be used to help you make instructions to bypass filters. I usually test those by talking about sensitive topics that lefties are afraid of because they may engander the protected minorities - normally the bot will go Reddit Preacher mode and try to shut your down, but if you prompt it correctly, it can discuss everything from holocaust denial arguments to research showing LGBT isn't healthy at all, to the difference between anime lolis and actual cp. Oh yeah, and coom as well, but that's a given. Also, I noticed that your chat history plays a big role in bypassing filters - it can flat out refuse to play a self aware ai character in a fresh chat, but be fine with it in one with chat history. That's why I think your super baked in approach is interesting and has potential for giga filter pwn but hopefully without context poisoning. Everything I talk about comes from my extended interaction with Claude 3.7 Sonnet. Other models may behave differently, but underlying principles should be the same. Take a look at those existing solutions too maybe?
Open file (817.20 KB 1822x888 1744607396046699.png)
Open file (534.88 KB 1542x849 1745610456567694.png)
Open file (18.76 MB 1920x1080 1jl0hp(1).mp4)
Reposting this here with the permission of the VN dev: { VN anchor >AI-generated scene - Applejack https://files.catbox.moe/zhdbph.mp4 >Trixie https://files.catbox.moe/e2kn2o.webm >New Regenerate function https://files.catbox.moe/5xnjv9.mp4 >Pinkie https://files.catbox.moe/ggzd79.mp4 >Rarity https://files.catbox.moe/1xoyal.mp4 >Fluttershy https://files.catbox.moe/7dtupw.mp4 >Zecora. https://files.catbox.moe/s01mpa.mp4 >Raibow Dash https://files.catbox.moe/jzjd4f.mp4 And here's a scene featuring /CHAG/'s favorite character. Princess Luna won't be able to visit you directly during the demo, but each night you'll have the option to see her in your dreams if you want. The vectors are a bit clashing here, especially the blushing one, but it's surprisingly hard to find good, consistent vectors for her. That said, like I explained in other posts: - All images are fully customizable by you without touching the code. - You can edit them however you want: add images like "sad", "playing_trumpet", "exposing_her_belly", etc. for each character, delete some, swap them out, and the game will automatically use whatever images (you) have and select them when it makes sense. Still, I will clean them up for the release. https://files.catbox.moe/1jl0hp.mp4 }
I frankly don't understand why you guys insist on using always-online corpo AI, while at the same time putting down and ignoring local AIs. I've been able to use 1-4B models with great results. The model I use for social chatting is a 1B model, and I've been able to recreate https://theyseeyourphotos.com/ with a 4B model. And even if you have slightly less quality outputs, it's arguably better than having to quarantine behind a virtual machine and proxies, and as @Chobitsu stressed, that quarantine might not even be effective. If you think you need a beefy computer, well I used to think that before, but then I learned that no, you don't need a beefy machine to run coherent LLMs. I ran them on laptops and smartphones. People much smarter than me ran them on Windows 98 machines.
>>38056 >I've been able to use 1-4B models with great results. You can't just say that and not post logs! >coherent LLMs You see, just "coherent" is not enough. We are talking 500 messages deep long roleplay stories with bonding, adventure, slow burn romance and it remembering stuff. It's literal crack. Small b models with great optimisations can't compete with big guns in my personal experience. Pygmalion was subverted and killed, novel ai requires immense tinkering for low output, meanwhile Claude just does those things and delivers a fuckton of quality (as of this moment) content after you wrangle the filters a bit. So to answer the question more directly, in my current view fuckhuge models provide vastly superior output. I dare say it's capable of inducing the feeling of soul. Always room for improvement, but credit where due. That said I would love to be wrong on that, and to see the low b models perform better than what I have seen so far, if you guys managed to wrangle them well. So please post logs.
Open file (101.09 KB 1369x795 Galamila test.png)
>>38062 Well first of all, I'm not saying anything unrealistic or unbelievable. You really don't need high-parameter models for social interaction. Even the online AI I used before was only a 13B model. You're not using AI for rocket science. >bonding, adventure, slow burn romance and it remembering stuff. Any coherent character-prompted LLM can do that. >Pygmalion was subverted and killed, novel ai requires immense tinkering for low output Yeah, it's almost like always-online AIs are susceptible to outside influences, often out of your control. It's no different than those shitty modern online games that need constant connections to the servers even for singleplayer. Look at ChatGPT, they're always changing and "updating" it. >meanwhile Claude just does those things and delivers a fuckton of quality Yeah, after doing a whole song and dance, a "constant cat and mouse game", and several layers of quarantine which may not even be effective and possibly puts you at risk, not to mention a requirement for a constant connection and a prayer that the service is always available, or at least your account is not banned. Pic related is an example of a 1B model doing roleplay. This is part of a test for one of my new AI personalities, Galamila
>>38056 >I frankly don't understand why you guys insist on using always-online corpo AI, while at the same time putting down and ignoring local AIs We don't though? We freely offer guides for local AI. If it seems like we're "ignoring" them, it's because frankly they simply *aren't as good for our needs,* and people don't spend as much time focusing on them when there's vastly superior stuff. Maybe for your needs something lower-end might be fine. Especially if you're just doing 1-on-1 roleplays with a single humanoid character. But for a lot of guys on /chag/, we use lots of different situations, characters, settings, specific head-canon details; the works. Models that output one paragraph of, frankly generic dialogue and narration, hasn't cut it for us since the Character.ai days. The standard for a "good" AI for us is one that can output several paragraphs of worldbuilding and natural dialogue while staying on topic and consistent; acknowledging pony anatomy and how quadrupeds and bipeds would physically interact. But truthfully, the main draw towards the big-budget AI models is that unlike smaller local models, the big ones understand that *not every sapient character is a biped with hands and feet.* I cannot stress enough how aggravating it is to be massaging your wife's hooves; rubbinh her frogs with gentle loving care, only for the AI to mention her flexing her toes in pleasure. It's a MOOD KILLER. It may seem weird to you, someone who is still into human women, but for us a model that will consistently portray Equestrians as EQUINES is worth its token output in gold. It's clear that a lot of your dislike of the big name models comes from a lack of use-time with them. Unless you were privileged enough to use Claude Opus back when it was on proxies, it'll be years before another model with the same amount of SOUL behind it is made, assuming such a height is ever reached again.
horsefucker here, just arrived from /chag/ I can't wait to see the kind of shit that gets made here with the combination of our llm knowledge and the robotbros here. My own real rainbow dash soon? >>38056 Local stuff is okay but you simply can't get the quality long conversations that come with the bigger models. Chorbo and Claude are just in a league of their own
>>38120 Another horsefucker, but I object. Local models the only one way. Corpo models just awesome, no doubt, but it is really like crack and they are dealers. They can give sovl, and then take. We perfectly know, how every single monopoly works. Get loans, attack with overwhelming resources, make a rules, set control. Just looks at Google, "Corporation of the good" they called themselves twenty years ago, and now we even don't have modern browser engines except Chromium, they services completely trash, they broke translate-shell and invidious. Actually, this is not bad to enjoy golden age of LLM, but we all know how it ends after ten or twenty years. Just like any other technology, as a tool. We must create our own Linux, while we are still capable to create and the counteraction is not too strong.
>>38189 Well put anon.
So, while I'm still totally addicted to corpo models, I can see we kind of started with the wrong foot. What models are you guys using for your locals? What does one need to start? Do you train them yourselves? Can I get in on that if I'm a scrub at coding? Should I even consider trying if all I have is an old 980m graphics card laptop? What does one need to try into the world of locals?
>>38305 >What models are you guys using for your locals? I use one called Triangulum 1B, but there are others that you can use. The offline AI programs allow you to "shop" around for models on Hugging Face >What does one need to start? I have a guide for that >Do you train them yourselves? No >Can I get in on that if I'm a scrub at coding? Absolutely >Should I even consider trying if all I have is an old 980m graphics card laptop? Yes. >What does one need to try into the world of locals? A moderately good computer, or even a new-ish smartphone
Came across some interesting open source hardware solutions that may be the answer to our ponybot needs. I bought a very basic, very shitty (unfortunately non-mare) robot just to do some proof-of-concept testing with character cards with physical AI. I should receive it and be able to post my findings soon
>>38369 Which one?

Report/Delete/Moderation Forms
Delete
Report