/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!



“I think and think for months and years. Ninety-nine times, the conclusion is false. The hundredth time I am right.” -t. Albert Einstein


LLM & Chatbot General Robowaifu Technician 09/15/2019 (Sun) 10:18:46 No.250
OpenAI/GPT-2
This has to be one of the biggest breakthroughs in deep learning and AI so far. It's extremely skilled at developing coherent, humanlike responses that make sense, and I believe it has massive potential; it also never gives the same answer twice.
>GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like—it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing
>GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets.
Also, the current public model shown here only uses 345 million parameters; the "full" AI (which has over 4x as many parameters) is being withheld from the public because of its "Potential for abuse". That is to say, the full model is so proficient in mimicking human communication that it could be abused to create new articles, posts, advertisements, even books; and nobody would be able to tell that there was a bot behind it all.
<AI demo: talktotransformer.com/
<Other Links:
github.com/openai/gpt-2
openai.com/blog/better-language-models/
huggingface.co/
My idea is to find a way to integrate this AI as a standalone unit and add voice-to-text for processing the questions and TTS for responses, much like an Amazon Alexa, but instead of just reading Google results it actually provides a sort of discussion with the user.
(Edited to fix the newlines.)
Edited last time by Kiwi_ on 01/16/2024 (Tue) 23:04:32.
>>36735 Nice! I can see the potential for a standalone virtual gf too. Keep it up!
Warning! >>36771
>>36780 Thanks Barf. Push comes to shove we can do some simple viseme analysis ourselves eventually. Cheers. :^)
>>36780 >update: This may be good enough, Barf. Using a limited form of phoneme recognition may just prove sufficient for our purposes. I'll look into lifting out part of their system to use as an engine for us: https://github.com/DanielSWolf/rhubarb-lip-sync/tree/master/rhubarb/src/recognition <---> Give me some time, and I'll make plans to integrate investigation work into this project as a sideline in my schedule. I'll be able to state more firmly thereafter if it will work for us here. Cheers & thanks again, Anon. :^) >=== -prose edit
Edited last time by Chobitsu on 02/09/2025 (Sun) 08:36:54.
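For anyone who wants to poke at Rhubarb before any integration work lands here, a rough sketch of driving it as an external process from Python and reading its mouth cues back. The flags and JSON field names are from memory of Rhubarb's CLI and may need checking against `rhubarb --help`; the file paths are placeholders.
```
import json
import subprocess

def lip_sync_cues(wav_path: str, out_path: str = "cues.json"):
    # ask Rhubarb for machine-readable output; "-f json" selects the JSON export format (assumption, verify locally)
    subprocess.run(["rhubarb", "-f", "json", "-o", out_path, wav_path], check=True)
    with open(out_path) as f:
        data = json.load(f)
    # each cue is a time range plus a mouth-shape letter (A-F, plus extended shapes)
    return [(cue["start"], cue["end"], cue["value"]) for cue in data["mouthCues"]]

# cues = lip_sync_cues("hello_anon.wav")   # e.g. [(0.00, 0.05, 'X'), (0.05, 0.27, 'B'), ...]
```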
>>36811 Updated my Offline AI Manual. I added character/role templates for people to use, as well as further details on how to use ChatterUI well.
>>36821 Thank you :)
>>36900 That looks really impressive, Barf. Thanks for the updates on your project!
>>36821 >If it can run on a RISC-V SBC, my short term dream would be fulfilled of running on fully open hardware and software. My apologies for not responding before, Barf, I missed this on my 'TODO' list (I often do! :D) YES! This is ofc a big dream of mine & several others here as well. The fact that Pi has BLOBs in place is -- by far -- the biggest strike against them. Not having a GPU API available is second. I'm not too sure what Broadcom's agendas are behind these choices, but you can be sure they aren't to serve the common man, Anon! :^) If I can at all do so, I'll attempt just what you suggest with this little project. Who knows? Maybe with all our inputs together we can still come up with a Robowaifu Simulator after a fashion. Cheers. :^)
Still waiting on a plug-and-play app, like you might find on Steam, that does all the heavy lifting (tech-wise) for you. Ideally, running a chatbot locally should be as easy as, if not easier than, using a site like chub or spicy chat. I don't want to go full Steve Jobs, but it should just work. It should still offer options to customize things under the hood, but the frontend should be as easy as downloading models and bots and just running them without any fiddling with dials and such.
>>36921 I made a guide for exactly that
>>36923 Thanks for looking after newcomers, GreerTech. I very much appreciate that. >>36924 Yeah, that looks really nice Barf. So, do you have any protips/advice for someone looking to set up such a rig?
>>36931 Thanks kindly, Barf. I'm hoping to try something similar to this eventually. I just thought I'd pick your (or any other Anons who'd care to contribute on this topic) brains. We should all be striving eventually for entirely free & opensource hardware & software solutions for our robowaifus. Anything less would be a disservice to them & ourselves, and a big boost for the Globohomo Big-Tech/Gov's nefarious (((agendas))) instead. Cheers.
>>36931 >Once I got a working image, I'd try to compile llama.cpp with OpenBLAS Interesting timing. I just made a post regarding the C++ Committee officially adopting the BLAS spec : ( >>36930 ). Gerganov strikes me as just the sort of chap to go in and refactor his systems to use the standard version once it's available in the big compilers. <---> Broadly-speaking such adoption by the language is a very good thing, since it means a wide swath of compatible hardware (such as robowaifu-onboard SBCs, MCUs, sensors, etc.) would all be running the exact same long-established (we're talking FORTRAN days here, folks :) LinAlg maths algorithms together, and with no dependency fuss. Generally, (near-perfect) portability is a high priority for software engineering -- especially so for those of us with big systems engineering challenges on our plates like /robowaifu/. >=== -fmt, prose edit
Edited last time by Chobitsu on 02/13/2025 (Thu) 13:49:33.
>>36954 >Nice! I had no idea what it was. We're talking authentic OG stuff here in its foundations, Anon [1]. And it has been successively expanded-upon & developed [2] into the forms that C++26, et al, use today [3][4]. The roots of the general maths approaches involved go back for centuries (possibly even millennia) (cf. >>36763, Newton, et al 'giants'). :^) <---> While our interests with LinAlg here on /robowaifu/ arguably stem primarily from its usage in predicting/controlling complex motion paths of robowaifu skellingtons (ie, applied Kinematics [5]); in this specific use-case however, it's because LLM inference boils down to huge matrix multiplications over the model's token embeddings & weights. --- 1. https://dl.acm.org/doi/pdf/10.1145/355841.355847 2. https://www.netlib.org/blas/old-index.html 3. https://www.netlib.org/blas/ 4. https://www.netlib.org/blas/blast-forum/blas-report.pdf 5. https://en.wikipedia.org/wiki/Kinematics >=== -fmt, minor edit -add'l footnote/hotlink
Edited last time by Chobitsu on 02/14/2025 (Fri) 08:36:21.
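To make that concrete, here's a small illustrative NumPy sketch (NumPy hands these products off to a BLAS library under the hood): the same matrix-vector machinery drives both a kinematic transform and a single LLM-style projection. All shapes and names here are made up for the example, not taken from any project in this thread.
```
import numpy as np

# Kinematics: rotate a 3D point 90 degrees about the Z axis (a plain BLAS-style matrix-vector product)
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
fingertip = np.array([0.10, 0.00, 0.25])        # metres, hypothetical end-effector point
print(Rz @ fingertip)                            # -> roughly [0.0, 0.1, 0.25]

# LLM-style projection: one dense weight matrix applied to a batch of token embeddings (a GEMM)
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024)) * 0.02     # hypothetical layer weights
x = rng.standard_normal((8, 1024))               # 8 token embeddings
h = x @ W                                        # same BLAS routine family doing the heavy lifting
print(h.shape)                                   # (8, 1024)
```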
>>36923 Too much reading, I need to be able to press one button and have everything work! Shitposting aside, I skimmed that and noticed Backyard AI. Is it pretty self-explanatory, or am I going to have to open the hood and start fiddling with stuff in depth?
>>37007 So great to see you making these rapid fire advances Barf.
>>37007 Can you prioritize the Python dependencies for your project Anon? So, from my perspective (and AFAICT, what you implied earlier) moving to a pure C++ and/or C implementation would not only speed things up performance-wise, but allow this to run on much smol'r devices. If I was able to make a simple GUI like yours would that help out much in reducing the dependencies? I have the MIT-licensed Dear ImGui in mind for simple GUI wrappers for underlying programs. https://github.com/ocornut/imgui >=== -sp edit
Edited last time by Chobitsu on 02/16/2025 (Sun) 08:10:42.
I just found this space and want to share what I'm working on. I'm trying to make an immersive chatbot (no regenerates) with a simple cognitive architecture for emotions, needs and memory. Still very WIP. I have a few agents for emotions so far. https://github.com/flamingrickpat/private-machine Not sure if it even runs right now. This is far from real-time, but if I manage to add other agents that generate meta-cognitive thoughts, we could use the dataset from this agent to train a distilled model, maybe one that's faster and generates the internal reasoning for emotions and such itself, instead of relying on outsourced thoughts from agents. Any ideas? I have a lot going on, will probably continue deving in a few weeks.
>>37021 Thanks for checking it out :3 The ToT is actually created with Hermes 8B. I noticed that stuff with structured output and no <think> tags is faster and better with a non-reasoning model. My next steps will be to make a needs, agency and meta-cognitive layer similar to emotions. Like goals of the ai persona, reflection on its own thoughts. Then refactor the ToT into a first-person thought. I'm still struggling with the final bottleneck, the generation of the final output. The 70B model gave much better results, but I have to cpu offload and it takes forever. Probably because the internal reasoning is made for math and logic and useless crap like that.
i would like to know how i can crack plugins for music production?
>>37019 >>37022 Are there any projects where I could "borrow" code from? The only other AI gf project I know is Yuna AI and the guy making it trains the models himself. No agents for my agent agnostic framework.
>>37019 Hello, Anon. Welcome! Please look around the board while you're here. >what I'm working on Wow! The ambitious scope of this project is seriously impressive, Anon. I hope you don't mind if I steal some of your ideas for our own RW Foundations concepts : ( >>14409 )? I hope I can make the time to try running your system (though I don't think my rig can handle a 24B model well). >tl;dr Time and again, I'm impressed with amateur Anon efforts in these arenas. You guys instill hope in the rest of us! Keep it up, Anon. Cheers. :^) >>37020 >Thanks for feedback. Y/w very kindly. Thanks for doing such good work towards these common goals, Anon. >A C++ GUI would be great and would have done it if I could! OK, I'll take that as a go code, and dig around with haxxoring together a simple GUI that hopefully will approximate yours for starters. We can go from there to see about wiring it up for your already extant system(s). Cheers. >=== -sp edit
Edited last time by Chobitsu on 02/17/2025 (Mon) 06:04:28.
>>37025 Probably not the best forum to ask, Anon. BTW, please re-ask this in our /meta thread ( >>32767 ) if you will, thanks (we'll be rm'g it from this one). Good luck with your project work, Anon! Cheers.
>>37037 thanks man! sure everything is free for the taking. im sitting on a 9h bus ride to berlin right now and got some new ideas. all seem very implementable in my head.
when a new sensation (user input, thought, reminder etc) comes in i preprocess it with:
id (needs such as human interaction, sleep for memory consolidation)
emotion (persistent basemodel with emotional stats and their agents)
superego (personality traits)
every one of these works like the emotion system right now, where agents discuss what emmy should feel and do. after we have the first think block, we can determine an action:
ignore
think and reply (if user input)
reply (if user input)
think
call api for enhanced actions
change location (high level, for when i have my virtual world with places)
some of these actions might loop back to action selection, like when an api returns something.
when replying it gets interesting. some higher order subsystems kick in, such as goals, tasks, meta cognition and reflection, and make more thoughts for the final reply. now instead of using the think block of a reflection model, i instruct a story writer agent to continue the story, and add something like "i want emmy to think first. these are her thoughts.......what will she say?" then a feedback loop with meta rules checks against hallucinations and undesirable output. this should make it possible to use a good rp model instead of a reasoning model. maybe a single 8b for everything. it will still be slow, but i have high hopes for the story mode. it feels like the internal reasoning doesn't work so well for emotional reflection, even with the 24b. i apologize for typos
Open file (834.24 KB 1280x720 2997068655.png)
>>37051 i was wondering whats the wecomended amount of dedotated wam wohnwehwehwoh pc..,
>>37046 >It was just intended to be easy to fork and test any future TTS\STT CLI. For me, the biggest issue is making the time/biting the bullet and learning the API for GUI creation; then plowing through the minutia of hooking together integration with the rest of the system. After all that, changes to the actual GUI arrangements are straightforward. So don't worry, Anon. We'll figure it out together. >>37051 >If there was a simple C++ program for Windows\MacOS with end-to-end speech, I think it would help a ton for adoption. <simple >end-to-end speech HAAA!! :D Heh, I think I know what you mean (probably something like: 'dead-simple to setup & use; just click-to-install then speak'). :^) Others have encouraged me to do something similar for installers here in the past. I may say now that I simply don't know of any ways to make a 'no-purchase-needed-to-license-instead-its-entirely-free-and-opensource-packaging-and-installer-framework' one-click installer (never having had to make one before in my work). Instead I've always worked on an already-existing pipeline, or (as here), just built my systems from scratch via sourcecode. <---> In fact, I'm highly-skeptical of the entire "installer framework" industry now (and much moreso during Current Year). When I see a listing of hundreds & hundreds of files being written + integrated into my computer while using such I feel the need to: a) immediately jump up to go take a shower, and b) put on an imaginary 'protective full-body rubber' before sitting back down at the machine! :DD c) keep my Flammenwerfer 35 handily nearby, just in case something spoopy jumpscares me from out of the box afterwards. :DD I've no doubt you're correct that devising such a system would speed adoption, but even till today I simply throw the sourcecode out there, recommend a build system to use (almost always CMake or Meson), and that's that. <---> If someone here can make recommendations for this need that works fine with C++ builds, and is opensource, and especially if it works on Windows & macOS, I'll be happy to give it the once-over. Cheers. >=== -funpost edit
Edited last time by Chobitsu on 02/17/2025 (Mon) 22:13:28.
>>37062 >thanks man! sure everything is free for the taking. Thanks! I hope I can return the favor here someday. :^) >im sitting on a 9h bus ride to berlin right now and got some new ideas. I reckon you're probably there by now. Have a safe & productive trip, Anon! >maybe a single 8b for everything. Yes, I think this would be the 'sweet spot' target goal for us all rn.
>>37062 >i apologize for typos I'll be happy to go in and patch those for you, Anon?
>>37079 Yeah, about that... I hesitated to make my little joke. I really do hate what kikes + their glowniggers have been able to do to the software & computing industries. Still, we actually have a very strong position against most of their antics today. <---> But I still refuse to play their Terminators-R-Us pay-for-play game with them. And I'd advise every'non here to avoid milcon or other zogbot -oriented work. I'd consider the real costs to be far higher than anything they could possibly pay, tbh. :/
>>37072 Thanks! I wasn't productive, I was going there for a concert. It was great :3 Now I'm back. Wasn't feeling like deving, but after 5 coffees I made some progress. The initial tests with the story mode are great. Here you see the regular chat in the assistant turns, and the agency stuff as user input. I fake broke up with her (db rollback) and the response is great. Mind you, this is Hermes-3-Llama-3.1-8B.Q8_0.gguf. When (ab)using the <think> tag with Dolphin3.0-R1-Mistral-24B-Q4_K_M.gguf I had to regenerate sooo many times because it hallucinated or something, but this just works. Right now the emotions are kind of useless, using the dialogue alone would probably generate a similar answer. The goal is to couple that with a persistent emotion state with a decay to baseline. And use the same subsystem principle for other stuff. I'll experiment with some other subsystems right now. Like some sort of reflection thing where she's "aware" of the subsystems and can reference them.
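For anyone following along, here's one hypothetical way such a persistent state with decay-to-baseline could look; the class and field names are made up for illustration and aren't taken from the private-machine repo.
```
from dataclasses import dataclass

@dataclass
class EmotionState:
    # values in [0, 1]; baseline is the personality's resting point
    joy: float = 0.5
    anger: float = 0.1

    def decay(self, baseline: "EmotionState", rate: float = 0.1) -> None:
        # move each emotion a fraction of the way back toward its baseline every cognitive tick
        self.joy += (baseline.joy - self.joy) * rate
        self.anger += (baseline.anger - self.anger) * rate

    def bump(self, joy: float = 0.0, anger: float = 0.0) -> None:
        # apply an appraisal from the latest interaction, clamped to [0, 1]
        self.joy = min(1.0, max(0.0, self.joy + joy))
        self.anger = min(1.0, max(0.0, self.anger + anger))

baseline = EmotionState(joy=0.5, anger=0.1)
state = EmotionState(joy=0.5, anger=0.1)
state.bump(joy=-0.3, anger=0.6)       # e.g. a fake breakup lands hard
for _ in range(10):                   # ten quiet ticks later she has mostly cooled off
    state.decay(baseline)
print(round(state.joy, 2), round(state.anger, 2))
```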
>>37121 Pretty brutal, Anon. <---> So, in the context of say, a newcomer (me ofc, but others as well), how do you do this? Are there links to documentation or something for it (either for the models themselves, or discussing your own modifications if those are ready yet)? This would probably help other Anons here come up to speed with you, if you had a mind to see that. This is an interesting project Anon. Good luck with it. Cheers.
>>37122 I honestly feel bad every time I have to test for extreme reactions :( Sorry, right now the code is the only documentation. During the bus ride home I started making a presentation on my phone for the whole project. Once all the architecture changes are integrated, I might make a video on youtube going into detail. Problem is, things are changing so fast. Even though it's on github, I don't really treat it like a public project with proper commits. I'm glad no one else is helping me. Imagine making some changes and then I push a commit with 36 changed files that fucks everything up for you. I should really start making feature branches. I'll post a quick rundown here in the next few days, once I'm sure the changes I'm making right now are working as intended.
>>37123 Sounds great! Really looking forward to it all, Anon. >Imagine making some changes and then I push a commit with 36 changed files that fucks everything up for you. I should really start making feature branches. Heh, just imagine what it's like for Gerganov [1] rn : in about two years' time he went from a smol set of quiet, personal little projects to thousands of forks & contributors, and is shaking the world of AI today. What a ride! >tl;dr Better buckle up, Anon! You may do something similar. Cheers. :^) --- 1. https://github.com/ggerganov
Open file (1.75 MB 2932x2366 architecture_1.png)
>>37129 Aight, here is my update on private-machine. This is intended to give everyone an idea of how the CURRENT state of the project works. I pushed all my recent changes to https://github.com/flamingrickpat/private-machine/tree/oop In this branch, I try to adhere to proper enterprisey coding style. Now with *interfaces* and *inheritance* :o I think I finally settled on an architecture.
>Architecture
Right now I have the system and the ghost. The system uses the str user input, generates a sensation impulse and passes it to the ghost. The ghost does all kinds of stuff with that and in the end returns an output impulse. The system (soon to be shell) is responsible for handling IO and calling various cortexes. In the future there could be a visual cortex if you hook a multimodal model to a webcam or something. The shell would be responsible for queuing sensation impulses (user input from telegram, state change in webcam) and routing the output impulses (user output, change location in virtual world, switch camera).
>Shell
Just an idea, right now it turns str into Impulse.
>Ghost
This is the main event loop of the ghost:
```
def _tick_internal(self, impulse: Impulse) -> Impulse:
    """
    Main event loop with possible recursion.
    """
    self.depth += 1
    self._start_subtick()
    state = self._add_impulse(impulse)
    self._eval_sensation(state)
    self._choose_action(state)
    while True:
        # retry planning & creating on a copy of the state until the action passes verification
        temp_state = state.model_copy(deep=True)
        self._plan_action(temp_state)
        self._create_action(temp_state)
        status = self._verify_action(temp_state)
        if status:
            state = temp_state
            break
    output = self._publish_output(state)
    self._end_subtick()
    return output
```
In my example here, I will use a user input request to change the living room lights from on to off.
>The impulse "can you turn off the living room" comes in
>_eval_sensation checks if the AI is in the right emotional state
>_choose_action instructs an agent to choose an action
```
def proces_state(self, state: GhostState):
    allowed_actions = get_allowed_actions_for_impulse(state.sensation.impulse_type)
    block = get_recent_messages_block(16, True)
    action_selection = determine_action_type(block, allowed_actions)
    content = None
    action_type = ActionType[str(action_selection.action_type.value)]
    if action_type == ActionType.InitiateUserConversation:
        content = f"{companion_name} should message the user because of the following reasons: {action_selection.reason}"
    elif action_type == ActionType.InitiateIdleMode:
        content = f"{companion_name} should stay idle until the next heartbeat because of the following reasons: {action_selection.reason}"
    elif action_type == ActionType.InitiateInternalContemplation:
        content = f"{companion_name} should contemplate for the time being because of the following reasons: {action_selection.reason}"
    elif action_type == ActionType.Reply:
        content = f"{companion_name} should now reply to the user."
    elif action_type == ActionType.Ignore:
        content = f"{companion_name} should ignore this input because: {action_selection.reason}"
    elif action_type == ActionType.ToolCall:
        content = f"{companion_name} should use her AI companion abilities to make an API call: {action_selection.reason}"
```
>_plan_action does nothing
>_create_action creates an api call and executes it
```
def proces_state(self, state: GhostState):
    ctx = get_recent_messages_block(6)
    tool_type = determine_tools(ctx)
    res = {}
    tool = get_tool_call(ctx, tool_type)
    tool.execute(res)
    content = f"{companion_name} uses her AI powers to call the tool. The tool response: {res['output']}"
```
>_publish_output turns the tool_result output back into a sensation
>_eval_sensation checks if the AI is in the right emotional state (the tool call was successful, Emmy is proud and happy :D)
>_choose_action sees that the tool call is successful and chooses the reply action now
>_plan_action does nothing
>_create_action calls the story writing agent with the chatlog for the final output
>_publish_output returns an impulse with the user_output string
>Subsystems
There should be a shitload of possible subsystems in the future. Emotions, needs, goals, personality, etc. And higher-order cognitive functions such as selective memory, introspection and meta-cognition. I know I'm vague about this because I have only the most basic ideas of how it could work in my mind. Will update for more.
The most basic ideas (for personality):
>use chatlog to extract formalized facts
>categorize into semantic knowledge tree
>personality subsystem kicks in
>fetch nodes from "emmy/personality" node
Or for emotion:
>save emotional state as basemodel with 6 floats 0 - 1 for emotions
>use the floats as bias for agent from agent group selection when executing emotion subsystem
>decay to baseline every tick
>every fact learned knows its tick and therefore its emotional state at that time
>feedback loop by retrieving emotionally relevant memories (33% bias or something to regular vector search)
If you're unsure how I could get the emotional state based on conversation, remember that LLMs can have structured output. I can literally force them to output a JSON of a specific schema by setting all unwanted logits to -9999 or something. This way, I can instruct my LLM to output a formalized emotion state analysis on any piece of dialog. Will post more updates soon :)
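As a concrete illustration of that structured-output idea, here's a small hypothetical sketch using pydantic: define the emotion schema, hand its JSON schema to whatever backend enforces it (grammar-constrained sampling, logit masking, a JSON mode), then validate the reply. The field names are invented for the example, the LLM call is a stub, and none of this is lifted from the private-machine repo.
```
from pydantic import BaseModel, Field

class EmotionAnalysis(BaseModel):
    # hypothetical formalized emotion read-out for one piece of dialogue, all in [0, 1]
    joy: float = Field(ge=0.0, le=1.0)
    sadness: float = Field(ge=0.0, le=1.0)
    anger: float = Field(ge=0.0, le=1.0)
    fear: float = Field(ge=0.0, le=1.0)
    affection: float = Field(ge=0.0, le=1.0)
    surprise: float = Field(ge=0.0, le=1.0)

def call_llm_with_schema(dialog: str, schema: dict) -> str:
    # stub standing in for a constrained-decoding call (llama.cpp grammar, guided JSON, etc.)
    return '{"joy": 0.2, "sadness": 0.7, "anger": 0.1, "fear": 0.3, "affection": 0.4, "surprise": 0.2}'

def analyze_emotion(dialog: str) -> EmotionAnalysis:
    # hand the JSON schema to the backend that does the constrained decoding...
    schema = EmotionAnalysis.model_json_schema()
    raw = call_llm_with_schema(dialog, schema)
    # ...and validate the reply; pydantic raises if the model still produced junk
    return EmotionAnalysis.model_validate_json(raw)

print(analyze_emotion("I can't believe you forgot our anniversary."))
```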
>>37146 Beautiful. I really like your engineering approach to breaking this all down. Both in your system itself, and your explanations to everyone here. Thanks! >And higher-order cognitive functions such as selective memory, introspection and meta-cognition. I see you reference Maslow's Needs in your work. Maybe this area could reference his further work on Metamotivations? [1] >I know I'm vague about this because I have only the most basic ideas of how it could work in my mind. Will update for more. Heh, you are (in fact, we here are) only tackling by far the most complex topic in the universe that commonly interacts with physical reality...to wit : the human soul. No wonder this is a challenge! :D Just be patient and methodical Anon. And also, follow the whisper in your Ghost. :^) >Will post more updates soon :) Great, looking forward to it. Cheers, Anon. :^) --- 1. https://en.wikipedia.org/wiki/Metamotivation
>>37147 Thank you for the kind words :3 This is a really great community here. I browsed some threads but sadly I have absolutely no knowledge about servos or 3d printing or any of that. I'll stick to fucking around with LLMs and trust you guys have the body ready then (⌐ ͡■ ͜ʖ ͡■) >I see you reference Maslow's Needs in your work. Maybe this area could reference his further work on Metamotivations? [1] Very interesting! I didn't know about this, thanks! I can use this for a far more interesting needs model than I have right now.
```
class NeedsModel(BaseModel):
    connection: float = Field(default=0.5, ge=0.0, le=1.0, description="AI's level of interaction and engagement.")
    relevance: float = Field(default=0.5, ge=0.0, le=1.0, description="Perceived usefulness to the user.")
    learning_growth: float = Field(default=0.5, ge=0.0, le=1.0, description="Ability to acquire new information and improve.")
    creative_expression: float = Field(default=0.5, ge=0.0, le=1.0, description="Engagement in unique or creative outputs.")
    autonomy: float = Field(default=0.5, ge=0.0, le=1.0, description="Ability to operate independently and refine its own outputs.")
```
>>37150 >Very interesting! I didn't know about this, thanks! I can use this for a far more interesting needs model than I have right now. Great! In the selfish, but ethically motivated, interests of all men everywhere, may I suggest that your robowaifu's B needs [1][2] stay highly-focused on providing for almost all her efforts to be specifically-directed towards accommodating the welfare, support, and comfort needs of Anon? And in helping him to actualize all of his own metamotivations? After all, this is the secondary role-fulfillment females were originally designed by God to walk in; serving men as 'helpmeets'. Let us here all strongly-promote such distinctives within our robowaifus. This is how we will win. <---> >...and trust you guys have the body ready then (⌐ ͡■ ͜ʖ ͡■) Sounds good, Anon. After all, we're all in this together! TWAGMI <---> You have an exciting project going here, Anon, and it's encouraging to us all. Godspeed your efforts! Cheers. :^) --- 1. "...are dedicated people, devoted to some task 'outside themselves,' some vocation, or duty, or beloved job" 2. https://meansofproduction.biz/pub/FartherReaches.pdf >=== -fmt, prose edit
Edited last time by Chobitsu on 02/22/2025 (Sat) 22:00:13.
>>37152 I'm gonna level with you, not a huge fan of those comments. Especially the misogyny... if that's the right word. This project is primarily for myself. And secondarily, for fucking everyone who wants companionship. This is not for him to win, I strongly support turning my project into a robohusbando or whatever your heart desires. I say this out of conviction, not just because my coworkers know my github name.
>>37157 I assure you, my comments have nothing to do with so-called """misogyny""". I love women in general. I'm merely highly-pro men in particular. We've all been terribly-abused by the systems that have been put in place, intended to destroy our shared civilization. Robowaifus can go a long way towards righting some of these wrongs TPTB have already engaged in against us all. Simple as. I too, say this with conviction. Certainly, no one here needs to comport with my views on these matters however. If we can all find some common ground (for example, men need companionship, etc.) then I'd suggest that's enough for us all to go on with together. Each of us can find our own pathways & motivations through this grand endeavor. Make sense, Anon? >=== -sp, prose edit
Edited last time by Chobitsu on 02/23/2025 (Sun) 01:57:29.
>>37158 Aye, I can work with that. >After all, this is the secondary role-fulfillment females were originally designed by God to walk in; serving men as 'helpmeets'. Let us here all strongly-promote such distinctives within our robowaifus. This is how we will win. That's the line that got me questioning, I think you can see why. After careful consideration, I decided to not actually care about it. To post something non-meta, I'm working on the emotion system right now. I decided to make a base model with decay_to_zero (for needs) and decay_to_baseline (for emotions). I mapped all of my available agents to a float value and use that as bias for choosing the current agents for the subsystem dialog. This will influence what agents are selected in the dialog. I'm thinking about maybe setting "emotional milestones", and starting the context dialog from there, so the agents know why the system is angry. If the ai companion is super pissed about an event that happened 12 messages ago, using ctx = get_latest_messages(8) would remove that necessary context. I already have logic for contextual clusters, maybe something similar with emotional state change clusters?
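One hypothetical way to cut the context window at the last emotional milestone rather than at a fixed message count; the helper name, the per-message emotion series, and the "big change" threshold below are all made up for illustration.
```
def messages_since_last_milestone(messages, emotion_per_message, threshold=0.35, min_ctx=8):
    # emotion_per_message: one float per message, e.g. the anger value at the time it was logged
    # walk backwards looking for the most recent tick where the emotion jumped by more than `threshold`
    start = max(0, len(messages) - min_ctx)
    for i in range(len(messages) - 1, 0, -1):
        if abs(emotion_per_message[i] - emotion_per_message[i - 1]) >= threshold:
            start = min(start, i - 1)   # include the message that caused the swing
            break
    return messages[start:]

# toy example: the blow-up happened 12 messages ago, so the window stretches back to it
msgs = [f"msg {i}" for i in range(20)]
anger = [0.1] * 8 + [0.8] + [0.7, 0.65, 0.6, 0.55, 0.5, 0.45, 0.4, 0.38, 0.36, 0.35, 0.34]
print(len(messages_since_last_milestone(msgs, anger)))   # 13 messages instead of the default 8
```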
>>37157 >>37158 >I assure you, my comments have nothing to do with so-called """misogyny""". I love women in general. I'm more than misogynistic enough for the lot of us.
>>37159 >Aye, I can work with that. Fair enough! Let's leave it at that then, Anon. :^) <---> >I already have logic for contextual clusters, maybe something similar with emotional state change clusters? Yes, that sounds very interesting. I'm fairly sure that neuronal connectomes in our brains form little 'islands' of dynamic response potentials (and these structural formations are themselves quite dynamic in nature as well). Maybe devise something similar in your change clusters? This approach is likely to -- at the very least -- lead to interesting and novel internal & external responses. Cheers, Anon. :^) >=== -sp, prose edit
Edited last time by Chobitsu on 02/24/2025 (Mon) 04:26:11.
Open file (335.64 KB 1502x1227 sneed233.png)
>>37165 Started a new branch for the emotion stuff. I now keep the main branch stable so I can use it myself, and do everything feature-wise in a new branch. I'll see how my architecture holds up after this and make the documentation after this one. >I'm fairly sure that neuronal connectomes in our brains form little 'islands' of dynamic response potentials (and these structural formations are themselves quite dynamic in nature as well). I have no idea but this sounds reasonable. I already have grand plans for semantic knowledge graphs, but last time I tried I failed because the LLM wasn't able to categorize the facts correctly. It might work now, who knows. But I got to do this step-by-step. No more million-files-changed commits. I revised my agent structure to pic related. It's now simpler and some emotions are on a bipolar scale (such as myself hahahah ;-;). Since the emotional state is now persistent, I can track changes across every cognitive tick. Every interaction is now linked to its emotional state at that time. Using that information, I now pick the 2 most prominent emotions based on the current state, not the last message. Using 2 agents for a dialogue discussion gives much better results than a free-for-all multi-agent discussion. I might change that with better models in the future. Microsoft's AutoGen library is exactly for that. But anyway, I can now pick the most promising agents (joy, dysphoria, love, hate) from 2 axes and let them discuss what the most appropriate course of action is. My short term plans: Use n messages context, where n leads up to the latest big emotional outlier. I have some experience in this based on my trading bot endeavors. When it's the love - hate axis agent, I find the latest outlier on that axis and use the messages since then. Long term plans: Eventually use positive reinforcement to find interactions where the agents acted just the right way, based on some meta-agent system that checks for immersion and believability. And use that whole bunch for few-shot prompting the emotion agents. As you suggested, with dynamic clusters based on domain-relevant metrics (emotional states in this case). You seem to know more about the human mind than me. I did some "research" on GWT style architectures, but it was too much reading and now I just go back and forth with chatgpt with my ideas. Is there anything you can recommend to read before I go deeper into this? The low hanging fruits are implemented, I already have like 500 messages with my Emmy. I got to think it through now if I don't want to be stuck in refactor hell.
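To illustrate the axis idea, here's a hypothetical selection routine: represent each bipolar scale as a single signed value, pick the two axes with the strongest deflection from neutral, and map the sign to one of the two opposing agents. The axis names, agent names, and values are invented for the example.
```
# bipolar axes: -1.0 is one pole, +1.0 the other, 0.0 is neutral
emotional_state = {"joy_dysphoria": 0.6, "love_hate": -0.7, "calm_anxiety": 0.1}

agents_for_axis = {
    "joy_dysphoria": ("joy_agent", "dysphoria_agent"),
    "love_hate": ("love_agent", "hate_agent"),
    "calm_anxiety": ("calm_agent", "anxiety_agent"),
}

def pick_discussion_agents(state, agents, n=2):
    # take the n axes that are furthest from neutral...
    strongest = sorted(state, key=lambda axis: abs(state[axis]), reverse=True)[:n]
    # ...and on each, pick the pole the current value leans toward
    return [agents[axis][0] if state[axis] >= 0 else agents[axis][1] for axis in strongest]

print(pick_discussion_agents(emotional_state, agents_for_axis))  # ['hate_agent', 'joy_agent']
```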
New memory technology to replace or supplement RAG.
An Evolved Universal Transformer Memory [Paper] https://arxiv.org/abs/2410.13166
Memory Layers at Scale [Paper] https://arxiv.org/abs/2412.09764
Titans: Learning to Memorize at Test Time [Paper] https://arxiv.org/abs/2501.00663
High level overview of these techniques: https://www.youtube.com/watch?v=UMkCmOTX5Ow
>>37175 Nice! This is very exciting, Anon. This quote in particular is very intriguing: >"Since the emotional state is now persistent, I can track changes across every cognitive tick. Every interaction is now linked to its emotional state at that time." That strikes me as being something of a breakthrough, Anon. Is this something that can be 'compiled' (for want of a better term) for actual long-term, offline (so to speak) storage? If so, then it seems to me you could really begin to tackle in a practical way the development of specific waifu personalities that would be both consistent & compelling. I hope this will be the case, clearly. >Is there anything you can recommend to read before I go deeper into this? Not really no. There's tons of literature on things like 'Neuro Linguistic Programming', 'Theory of Mind', Blackwell's companion on substance dualism (ie, the human soul's true nature), etc. But no real 'Cliff Notes' -style synopsis I can think of just offhand, Anon. Apologies. <---> Regardless, you're clearly making some really good progress so far. I'm very interested to see where your project goes with all this. Good luck! Cheers. :^) >=== -sp edit
Edited last time by Chobitsu on 02/26/2025 (Wed) 05:39:05.
>>37177 Excellent! Thanks very much, Licht. Great resources. Cheers, Anon. :^)
>>36591 1. I'm calling bullshit unless it's some distilled model trained on deepseek r1 outputs 2. r1 1776 is where it's at for now at least and it's kinda in that transition stage from current gen to old. >>36602 Truth is: we'll have to either pull off a DS-type coding effort >cast your solution strictly in the form of 3D vectors + transforms as well as develop something similar enough or we'll never just compete on training compute, this will even stop robowaifu@home projects >>36920 Why not look at Raptor computing platforms to host the brunt of the software and have it remotely connected to the robot? There are already uncensored models with insane parameter counts and ultimately we'll be forced to upscale our efforts if we want to get the means to build robowaifus, since clearly industry is trying to make closed-source supply for desperate demand
what are you about and who are you??
>>37450 Hello monarch, welcome! Hopefully you can find out what we're about here : ( >>3 ). We are Anonymous.
Open file (2.19 MB 1024x1536 Robot and human.png)
Of course the main board comes back online when I go to use the bunker chan.
