/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!



Cognitive Architecture : Discussion Kiwi 08/22/2023 (Tue) 05:03:37 No.24783
Chii Cogito Ergo Chii
Chii thinks, therefore Chii is.

Cognitive architecture is the study of the building blocks which lead to cognition: the structures from which thought emerges. Let's start with the three main aspects of mind.

Sentience: The ability to experience sensations and feelings. Her sensors communicate states to her. She senses your hand holding hers and can react. Feelings are having emotions: her hand being held brings her happiness. This builds on her capacity for subjective experience, related to qualia.

Self-awareness: The capacity to differentiate the self from external actors and objects. When presented with a mirror, echo, or other self-referential sensory input, she recognizes it as herself. She sees herself in your eyes' reflection and recognizes that it is her, that she is being held by you.

Sapience: The perception of knowledge. Linking concepts and meanings, able to discern correlations congruent with having wisdom. She sees you collapse into your chair. She infers your state of exhaustion and brings you something to drink.

These building blocks integrate and allow her to be. She doesn't just feel, she has qualia. She doesn't merely see her reflection, she sees herself reflected, and acknowledges her own existence. She doesn't just find relevant data, she works with concepts and integrates her feelings and personality when forming a response. Cognition: subjective thought, reliant on a conscious separation of the self from external reality, that integrates knowledge of the latter. A state beyond current AI, a true intellect. This thread is dedicated to all the steps on the long journey towards a waifu that truly thinks and feels.
>=== -edit subject
Edited last time by Chobitsu on 09/17/2023 (Sun) 20:43:41.
>>34910 Due to some combination of laziness, design difficulties, and the holidays, I made less progress in the last 10 days than I'd hoped. But I do have some updates.
- I think I figured out how to scale up the number of causal graphs. The main difficulty is in figuring out how to take two independently-generated graphs and link them together. In practice, this means modeling the relationships between variables across causal graphs. My existing code assumes that if two variables are the same, then they'll have the same name. That doesn't work when LLMs generate the graphs, since LLMs will generate an arbitrary name as long as it makes sense for the context they're given, and it would quickly get too expensive & unreliable to add all existing variable names to the context every time I want to generate a new graph. My new (promising) approach is to maintain an ontology of variables, and to link new variables into the ontology every time a new graph is generated. I'm using RAG over variable definitions (created at graph generation time) to identify candidate relevant variables, then using an LLM call to determine the relationship between new variables and each existing variable. It looks like this can be done efficiently enough, and tracking just a few simple ontological relationships (A is the same as B, A is a subtype of B) that are easy for LLMs to classify should give me what I need. I have the initial code for generating ontological relationships as new graphs come in. I need to do a bit more testing, then update the rest of the code to use this information. I also need to update the rest of my code to work only with definitions, not variable names, since variables can be conflated if referenced by name.
- Notably, this causal framework cannot implement a theory of mind. But I have some ideas on how to implement that efficiently with a few changes.
I think the key is to hook into the data retrieval step and the "model fit" step (one of the steps in causal analysis, generating a model from a restricted set of features) so they can be biased/transformed into whatever makes sense from a given perspective. I think this approach would address not just theory of mind, but also the ability to deal with fictional settings. (E.g., when some information is only useful when discussing a particular story.)
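As a rough illustration of the linking step described above, here's a minimal Python sketch. Everything in it (the Ontology class, the embed and classify_relation callbacks) is hypothetical scaffolding for this post, not horsona's actual API: retrieve the most similar existing definitions (the RAG step), then classify each candidate relationship (the LLM step).

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class Ontology:
    """Tracks variables by definition (never by name) and links new ones in."""
    def __init__(self, embed, classify_relation, top_k=3):
        self.embed = embed                  # definition text -> vector
        self.classify = classify_relation   # (def_a, def_b) -> "same" | "subtype" | "unrelated"
        self.top_k = top_k
        self.variables = {}                 # var_id -> (definition, vector)
        self.relations = []                 # (var_a, relation, var_b)
        self._next_id = 0

    def add_variable(self, definition):
        vec = self.embed(definition)
        # RAG step: retrieve candidate related variables by definition similarity
        candidates = sorted(
            self.variables.items(),
            key=lambda kv: -cosine(vec, kv[1][1]),
        )[: self.top_k]
        var_id = self._next_id
        self._next_id += 1
        self.variables[var_id] = (definition, vec)
        # LLM step: classify the relation to each retrieved candidate
        for other_id, (other_def, _) in candidates:
            rel = self.classify(definition, other_def)
            if rel in ("same", "subtype"):
                self.relations.append((var_id, rel, other_id))
        return var_id
```

In practice, embed would be a real embedding model and classify_relation a constrained LLM call; the point is that each new variable gets linked through its definition, so arbitrary LLM-chosen names never have to agree across graphs.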
>>35189 >Due to some combination of laziness, design difficulties, and the holidays, I made less progress in the last 10 days than I'd hoped. Lol, IKTF. :P So, I know you've already spelled it out for me before (I can't seem to find it ATM, probably another thread), but can you please give me: 'babby's first horsona steps' to get just the most-basic part of your system working? I still hope to have this done before year's end. :^) <---> As to your multi-graph graphing difficulties (and please pardon me if in my simplistic, uneducated thinking I'm totally missing the point here), could you possibly adopt a similar nodal-based graphing approach used by, say, Blender, et al? You can add as many parameters (via BPython scripting) as you'd like to a given node. Would this style of argument-passing allow you to make arbitrary connections node-to-node, even across UODs [1] that differing causal graphs may maintain? Good luck, Anon! We're all rooting for you, Bro. Cheers. :^) --- 1. Universe of Discourse https://en.wikipedia.org/wiki/Domain_of_discourse >=== -add footnote -minor edit
Edited last time by Chobitsu on 12/28/2024 (Sat) 03:05:16.
>>35191 The simplest would be to run the boilerplate project under samples/.

git clone https://github.com/synthbot-anon/horsona
cd horsona/samples/boilerplate
cp .env.example .env
poetry install
# Edit .env with your OpenAI API key
# Or check https://github.com/synthbot-anon/horsona?tab=readme-ov-file#using-a-different-llm-api for how to use a different API provider
# I can add support for a different API if needed
poetry run python -m boilerplate
# Give it a task

>could you adopt a similar nodal-based graphing approach used by, say, Blender, et al?
The goal is to get an LLM to generate the graph so it can be done in the background while interacting with a chatbot. I can use arbitrary graph structures, but the challenge is getting an LLM to generate them and to work with large graph sizes. The default approach is to generate small graphs/nodes then piece them together, but the LLM has a tendency to generate inconsistent "field" names. The approach in >>35189 is for linking together "fields" across many small graphs.
>>35191 OK, thanks! ># Edit .env with your OpenAI API key Unsurprisingly, I'm finding they want an email, etc. Before I waste lots of time trying to find a way around their Globohomo surveillance systems, do you know if I can use a throwaway successfully, CyberPonk? While I want to encourage you with support, I'm certainly not willing to kowtow to that incredibly-evil lot to do so. :/ Any 'quick tips' for a complete newb to all this, for running your system entirely privately (as in, entirely locally)?
>>35206 I think all of the main API providers will work with a throwaway account. You'll need a credit card for most of them, but they all accept privacy.com cards. If you're using a privacy.com card, you can give the merchant any arbitrary name & address, though you'll need to give privacy.com your real info. I actually haven't tried using it with local LLMs yet since I don't have a GPU and since API providers have a lot of advantages while I'm in dev mode. I also haven't really played around with them for about a year now, so I don't know what's worthwhile there. If you use a local model that exposes the same interface as OpenAI (which I think most of them do), you should be able to use the AsyncOpenRouterEngine LLM type with a local url. If you tell me which inference framework you choose, the local url it generates, and the model name you want to use, I can tell you what to put in your llm_config.json file to have horsona use that. OpenLLM should work, though I haven't tested it. (If it doesn't work, I can update the library to support it.) Ollama might also work. For local embeddings (which aren't required for running the boilerplate project), I already have support for Ollama, along with instructions on how to set it up: https://github.com/synthbot-anon/horsona?tab=readme-ov-file#using-ollama-instead-of-openai-for-embeddings With a local LLM and local embeddings, everything will run locally.
>>35210 I see. I don't want to speak foolishly in a knee-jerk reaction. I'd say let me think on it, but if I do that I'm fairly sure I'll just get swamped in the load I'm currently under right now, and the whole thing will just slip my mind for several months. I'm extremely proud of you & your efforts for both our communities, Anon. I think that I'll need to bow out from personally participating until either a) someone (other than me) has created a turn-key, private system for Anons to use that utilizes your remarkable system, or b) I've graduated and I'm no longer aggressively trying to get a new robowaifu business off the ground. My apologies for disappointing, but I simply don't trust myself to manage all this in a timely and secure fashion that won't provide my every interaction with it to OpenAI or their masters.
>>35206 I went out of my way to ensure my talking head would use a local model, but it's still slow. A fair compromise would be it running on the cloud but using a non-pozzed model, unless people want to get multiple graphics cards (or maybe one 4090; I tested it with a 4060, idk).
>>35211 No worries. At some point, I'll be creating other repos to make local setups more turnkey, but those would be heavily dependent on unstable external projects I don't control (like OpenLLM) so it doesn't make sense to add it to horsona. Horsona is just a library, not a full application. The main challenge with getting a fully turnkey local solution is just working out how to handle updates. - I'll have to set up something like a CI/CD pipeline so I can integrate unstable dependency updates without adding too much dev overhead. - Horsona is designed to make heavy use of multiple parallel calls, which are necessary for robust functionality, and robust functionality is necessary for complex functionality. Using a single endpoint works for very simple demos, but multiple endpoints https://github.com/synthbot-anon/horsona?tab=readme-ov-file#using-multiple-llm-apis-simultaneously-for-speed are essentially non-optional for the features that make horsona useful relative to other libraries. I have a decent chunk of this worked out (all my earlier infrastructure stuff) but it requires kubernetes, and streamlining kubernetes setup & upgrades for infra I don't own is highly nontrivial.
>>35216 Thanks for your understanding, mate. May you have a blessed and productive 2025! Cheers, CyberPonk. :^)
>>35213 Good luck with it, peteblank. If you'd like to share your process with everyone sometime that would be appreciated here. Cheers. :^)
>>35218 No prob. So, I made the fully local one private, but this one is public: https://github.com/peteblank/waifu-conversation/tree/main Go to final.sh. It uses AWS Polly to transcribe and put the file in S3 buckets. It also uses the long-gone Waifu AI API, so you'd have to replace it with another API or run a local AI. It's also meant to be activated with a button connected to a Pi. That was... 2 years ago. :/
>>35219 Thanks again.
>>35206 You are overdoing it with the anonymity. That said, you could just install an email server yourself somewhere and buy a domain with Bitcoin (which you would first send through a mixer or exchange back and forth). Personal email servers have issues sending emails, but receiving emails should work.
>>35237 >You are overdoing it with the anonymity. Lol. Perhaps, perhaps not. I could give you over 9'000 reasons why I'm not going far enough yet, I'm sure. But not on NYD, plox. :D Regardless, thanks for the good suggestions NoidoDev. I'm sure we'll arrive at good solutions to protect both Anons & their robowaifus + Joe Sixpacks & theirs as well, if we all just put our heads together in such a manner. BTW ( >>10000 ) may be a better thread for this discussion? Cheers, Anon. :^)
>>35189 Some updates:
- I've made very little progress on causal reasoning since my last update. I have the ontological relationships, and now I'm integrating them with causal reasoning. I'm working on that now.
- I've learned a lot about what's important to me in a waifu.

On that second point:
- There are three factors that I think are critical: love, romantic interest, and relationship stability.
- "Love" gets used for a lot of things, but I think the most relevant form is: feeling "at home" with someone, and feeling like that someone will always have a home at your side, no matter what. There's no single solution that would fit everyone here, but I think it always comes down to: the things you most deeply value, and what you feel is missing in other people that prevents you from feeling at home with them. Some examples (probably all of these will resonate with people broadly to some extent, but I think there's usually a "core" one that's both necessary and sufficient):
- ... The ability to freely converse with someone, without having to worry about whether they "get it" or are going to misinterpret you.
- ... Paying attention to nuance, and not oversimplifying things just because it's natural or convenient.
- ... The willingness to pursue one's curiosities and creative inclinations at great depth.
- I think Claude in particular is very good at uncovering these values, so I think LLMs broadly will be good at this in the not-too-distant future.
- Romantic interest would be the feeling of wanting to be closer to someone, both emotionally and physically. I think there are two sides of this: the desire to be "observed" and the desire to "observe". I think the strongest form of "wanting to be observed" comes from the belief that someone can meaningfully contribute to the things you feel and believe. I think the strongest form of "wanting to observe" comes from the belief that someone embodies things you deeply value.
I think lowercase-r romantic interest can come just from these things, and capital-R Romantic interest comes from the resonance between these things and physical desires. The bridge between these things and physical desires seems to come from analogies with the physical senses: to be heard (respected), to be seen (understood), to be felt (empathized with). I think these analogies work because our brains are deeply wired to understand how to deal with the physical senses, and that circuitry gets reused for higher-level understanding. The analogies for smell (e.g., "something smells off") and taste ("having good taste") are a little harder to pin down and strongly overlapping (probably because they both deal with sensing chemical properties), but I currently think the "right" way to think about those is in terms of judgement (to be judged).
- Relationship stability comes from overlap in lifestyle and from someone not doing/embodying anything that disgusts you. Whereas the other two can be "optimized" mostly just by understanding someone's core values, this one likely can only be discovered through trial and error, since lifestyles are complex things that can co-evolve with your waifu.

Once I get to higher-level functionality in Horsona, I'll likely focus on trying to align with these things. I have some ideas on how to do this.
>>36101 POTD Amazing stuff, really. >I have some ideas on how to do this. Please hurry, Sempai! The world is anxiously waiting for this stuff!! :^)
>>36101 Minor update on merging ontological reasoning and causal reasoning: - The causal inference code seems able to handle ontological information now. I still need to update some interfaces with recent changes to how I'm handling variable identifiers and linking data into the graphs. - Since the ontologies and causal graphs are generated by an LLM, I'll need some way to identify & correct errors in them automatically. Right now, at causal inference time, I'm identifying error cases where a single data field applies to multiple causal variables and cases where multiple data fields apply to a single causal variable. I haven't figured out yet how exactly to correct the underlying graphs when an error is detected, but I'm thinking: (1) flag the relevant causal graphs and sections of the ontology, (2) if something gets flagged enough times, regenerate it. The bare minimum I want here is for "good" parts of the graphs to be stable, and for "bad" parts to be unstable.
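The flag-and-regenerate policy sketched above might look something like this in Python. The threshold and the regenerate callback are illustrative guesses for this post, not the actual implementation: sections implicated in inference errors accumulate flags, and anything flagged often enough gets rebuilt, so "good" parts of the graphs stay stable while "bad" parts stay unstable.

```python
from collections import Counter

class GraphHealthTracker:
    """Flags graph/ontology sections on inference errors; regenerates unstable ones."""
    def __init__(self, regenerate, threshold=3):
        self.flags = Counter()        # section_id -> times implicated in an error
        self.regenerate = regenerate  # callback: rebuild a section (e.g., via an LLM)
        self.threshold = threshold    # flags needed before a section is rebuilt

    def report_error(self, implicated_sections):
        """Call when e.g. one data field maps to multiple causal variables
        (or multiple fields map to one variable). Returns sections rebuilt."""
        regenerated = []
        for section in implicated_sections:
            self.flags[section] += 1
            if self.flags[section] >= self.threshold:
                self.regenerate(section)
                del self.flags[section]   # a fresh section starts with a clean record
                regenerated.append(section)
        return regenerated
```

A usage sketch: every time causal inference detects a field/variable mismatch, it calls report_error with the graphs and ontology sections involved; stable sections rarely accumulate flags, so only the unreliable ones churn.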
>>36309 Thanks for your work on this. I'm still trying to work my way through the unread threads, until I have time to use OpenAI and your software.
> (discussion -related : >>37652, ...)
>>24783 do you have any other contacts? this is also the field that I am working on, if you are interested.
>>40369 I'm pretty sure he maintains a d*xxcord, Anon. It's listed somewhere here on the board, I think.
>>40369 Well, I have a Discord. However, I'm mostly focused on the hardware side of things. https://discord.gg/ZnepfsBD If you're concerned about optics, don't worry, as I am NON political as far as the project is concerned
>>40376 This is not a joke, this is my email address.
>>40376 >>40377 What a great domain. :D
>>42089 This seems better here than in the Robowaifus in Media thread.
>>42089 I really hate this kind of pandering, fearmongering clickbait. I don't doubt that some of these outcomes are possible in some extreme scenarios in decades to come, but that doesn't justify this YT'r's carte-blanche, unsubstantiated claims and verbal manipulations here & now. For example, blithely tossing in phrases such as "human-level AIs" in the past tense as if it's some kind of "Oh... everybody already knows that's happening now!111" declaration. Lolnopenopenope.exe >tl;dr Manipulative AF. And the Globohomo """art""" is off-putting as well (I always nootice a (((common pattern))) with types that use it overmuch, as in this case). <---> Personally, I give it a hard pass on this content creator.
Edited last time by Chobitsu on 10/04/2025 (Sat) 14:09:44.
>>42091 Could have sworn I posted it there. My bad.
>>42092 I saw what Chobitsu posted first, and thought "I wonder how bad it will be", and the first few seconds pissed me off. The Discord artstyle is cute for a minute, then gets annoying really quickly. Lemme guess, he's an animator, and he hates AI. Totally not jealousy going on here. The argument/scenario that it brings up can be easily countered by realizing, "Humans can be liars, evil and self-serving too". They also gloss over the fact that the hypothetical company has no idea what's going on in their own computers and doesn't even check their own work. That sounds like a you problem. I watched further into the video, and it just got ridiculous. Their idea of AI spreading is straight out of a cheesy, inaccurate sci-fi story. This ignorant artist thinks every single AI is linked and that AI are cyberspace gods. And ofc, he's a fat leftoid by his own admission. The real lesson here is that we have to spread our own message and quell any 90s cyberpunk fears.
>>42096 POTD >The real lesson here is that we have to spread our own message and quell any 90s cyberpunk fears I hope somehow that you're right, and that would in fact work by some miracle. Cheers, GreerTech. :^)
>>42096 I think it is good to understand how our enemies think and how our enemies might build waifus of their own. Love thy enemies, after all.
>>42098 Ahh. The old sheep & wolves; snakes and doves [1], ehh? :D While I don't think this 'snake' considers us sheep to be devoured, I'm absolutely positive his (((masters))) (and their master) do. GreerTech is right. Hopefully the NPC brigades will have their eyes opened to these evildoers' plots soon. May our good Lord direct the affairs of mankind during this coming war. Amen. --- 1. "Behold, I am sending you out like sheep among wolves; therefore be as shrewd as snakes and as innocent as doves." https://biblehub.com/matthew/10-16.htm (BSB)
Edited last time by Chobitsu on 10/05/2025 (Sun) 02:28:16.
So, I'm building something atm, and I think this might be the right place to discuss it? I'm attempting to design a simulated, sparse computational neural system, optimized through evolution. I'll keep you guys in the loop, and my next post will hopefully be a "solved" CIFAR-10, with a bit of luck.
>>42099 Based >>42100 Welcome
>>42100 Welcome, Anon! Glad you've joined us.
>your code review:
1st: You need to understand your machine and the compiler to make good progress, Anon.
a: std::endl performs two operations: 1. writes your output to the global cout buffer object; 2. flushes that object. Very inefficient, unless that's the design goal (in this case it clearly isn't).
b: use: std::cout << "muh_text\n"; instead.
2nd: Good looking code otherwise! Cheers, Anon. :^)
>>42101 Grazie. :^)
>>42102 The couts are just leftover junk from debugging the very unruly state machine. It's basically an interpreter at this point (well, a genome interpreter, so I guess I should've expected it to turn out like that, lol). Dirty code, but it's there and I can clean it up when I see the threads crap out in Tracy, kek.
>>42102 >>42101 And thank you for the warm welcomes! It's been a few years since I was last on here. I really need to get up to speed on C++, I've mostly written VHDL lately.
>>42103 Ahh, makes sense. I'm sure you'll have it all cleaned up in due time! :^) Anyway -- again -- glad you've joined us. Looking forward to you sharing your progress updates with us, if you will. Just pick a thread you like, Anon. Cheers. :^)
>>42104 >I've mostly written VHDL lately. In some sense, this is really the only way we'll all pull this off: literally at the hardware level. * --- * In support of this postulate, I'd direct you to the very eminent Dr. Carver Mead's works, as pertains /robowaifu/'s immediate interests: ( >>12828, >>12861 ).
Edited last time by Chobitsu on 10/05/2025 (Sun) 04:22:30.
>>42104 >I really need to get up to speed on C++ I highly recommend Bjarne Stroustrup's little book colloquially known as Tour++. https://stroustrup.com/tour3.html It's roughly comparable to K&R's TCPL2e (and intentionally-so). It's not too dense, and at around ~250pgs can be consumed pretty readily. Its targeted specifically towards programmers who are trying to get to grips with colloquial C++ today (so-called "Modern C++"). Highly-recommended. Cheers, Anon. :^)
>>42106 But the question is: what hardware would we even build? You basically just need a CPU and an architecture that can take advantage of it. Modern processors have mind-melting instruction throughput; humanity just insists on using dense ANNs, which for example require 3 layers to do XOR, an operation that can be done in <1 clock cycle on a modern CPU. All this because SGD is so enticing for data-modeling purposes. The result is something like GLM4.6: an impressive model, but it still takes 355B parameters (32B active) to solve "What is 34+12", and the method for solving it isn't logic at all, simply a bunch of heuristics. >>42107 Thank you fren.
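The XOR point above is easy to make concrete. Assuming nothing beyond stock Python, here's the single-instruction version next to the classic minimal 2-2-1 MLP with hand-set weights, the smallest dense network that can represent XOR, since XOR isn't linearly separable:

```python
# XOR of two machine words: one CPU instruction's worth of work
def xor_op(a: int, b: int) -> int:
    return a ^ b

# The minimal dense-ANN equivalent needs a hidden layer (a 2-2-1 MLP).
# Hand-set weights with a step activation, for illustration only:
def mlp_xor(x1: int, x2: int) -> int:
    step = lambda v: 1 if v > 0 else 0
    h1 = step(x1 + x2 - 0.5)        # OR-like hidden unit
    h2 = step(x1 + x2 - 1.5)        # AND-like hidden unit
    return step(h1 - h2 - 0.5)      # output: OR AND NOT(AND) = XOR
```

Both compute the same function, but one is a hardware primitive and the other already needs three weighted sums plus nonlinearities (and SGD would need many examples to find those weights).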
>>42109 >But the question is: what hardware would we even build? Well, the basic premise is to push much of the "intelligence" out to the periphery of the system as a whole. In-effect combining the sensing & processing & control all together into one tight little silicon package. Similar to biological life, taking this approach to 'computation' reduces the need for bidirectional chatter with a central core. Locally handling inputs results in faster control responses & lower power consumption (nominally being near-instantaneous & mere micro-Watts). >tl;dr Put hundreds (thousands?) of these relatively-dumb little sensor+processing+control packages out on all the periphery where the robowaifu interacts with the realworld; while her system overall lazily performs data fusion back at the core (the robowaifu's "brain") to inform higher-level "thinking & planning". --- These distal components also form compact little subnets for local comms/coordination with one another. For example, all the dozens of these little modules associated with sensing/running a robowaifu's single hand, say. Though spending most of their time asleep, they still can each perform thousands of sense/control operation cycles per second; also xcvr'g dozens-to-hundreds of subnet info packets per second amongst themselves (and additionally generating consolidated data together as a group; for the central core's use [plus receiving control signals asynchronously back for the collection's use]; sending this out along alternate comms pathways). All electronics thereto being very lightweight (in basically every sense of the term) individually. <---> This is the essence of Neuromorphics. The inspiration for this approach is studying life itself, and how it reactively coordinates & operates against stimulus. The >tl;dr here being that most of it happens out at the periphery...inside the neural & musculature tissue there locally. Cheers, Anon. :^)
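To make the periphery-heavy split above concrete, here's a toy Python sketch (all names, thresholds, and message shapes are invented for illustration): each distal node runs a fast local reflex loop on raw readings, and only ships consolidated summaries back to the core for lazy data fusion.

```python
class SensorNode:
    """A peripheral module: senses, reacts locally, and only summarizes upstream."""
    def __init__(self, reflex_threshold=0.8):
        self.reflex_threshold = reflex_threshold
        self.samples = []

    def tick(self, reading):
        """One sense/control cycle. Returns a local actuator command,
        with no round trip to the central core."""
        self.samples.append(reading)
        if reading > self.reflex_threshold:
            return "withdraw"          # reflex: react immediately
        return "hold"

    def summarize(self):
        """Consolidated data for the core's lazy fusion; clears the buffer."""
        if not self.samples:
            return None
        summary = {"n": len(self.samples),
                   "mean": sum(self.samples) / len(self.samples),
                   "peak": max(self.samples)}
        self.samples.clear()
        return summary

class Core:
    """Central 'brain': fuses occasional summaries instead of per-cycle chatter."""
    def fuse(self, summaries):
        summaries = [s for s in summaries if s]
        # Toy fusion: overall peak stimulus across the subnet
        return max((s["peak"] for s in summaries), default=0.0)
```

The design point: tick runs thousands of times per second locally, while summarize crosses the comms pathway only occasionally, which is where the latency and power savings come from.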
Edited last time by Chobitsu on 10/07/2025 (Tue) 02:18:26.
>>42109 >Thank you fren. You're welcome, Anon! Cheers. :^)
What if the AI is hallucinating because of sensory deprivation?
>>42116 LLMs are autoregressive so errors will accumulate and make any LLM go batshit in a long enough context.
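A back-of-envelope way to see this: if each generated token were independently correct with probability p, the chance of an error-free run decays exponentially with length. (Real LLM errors are worse than independent, since later tokens condition on earlier mistakes, but the toy model shows the trend.)

```python
def p_clean_run(p_token_ok: float, n_tokens: int) -> float:
    """Probability an autoregressive run of n tokens contains no error,
    under the simplifying assumption of independent per-token correctness."""
    return p_token_ok ** n_tokens

# Even 99.9% per-token reliability degrades quickly over long contexts:
# roughly 90% clean at 100 tokens, ~37% at 1,000, and near zero at 10,000.
```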
>>42118 https://www.youtube.com/watch?v=qvNCVYkHKfg Yann LeCun talks about it here among other things, nice to get a more grounded perspective on current AI efforts.
>>42124 Thanks, Anon! Cheers. :^)
>>42111
>Sensory Subnetting
This is similar to how modern robotics servos use advanced sensors to enable simple control. You only need to send position and velocity; the servo will figure out how to enact that goal on its own. Many advanced sensors are similar in their ease of use. They can simply tell the computer where/what/or the state of something, which makes implementation heaps easier. Distributed computing is making robotics much easier, and will help Cognitive Architecture. I imagine an artificial "spine" for reflexes will also play a role in a cognitive system. We will likely need to consider further how to distribute her system for effective and affective interactions with the world, both consciously and subconsciously.
>>42116
>AI sensory deprivation
Not possible, as AI doesn't function in a manner where that could happen. It's not intuitive; AI aren't like us. Sensory deprivation causes us to go crazy because our brain relies on our senses to orient itself within the world. Without these senses, the brain will make things up based on misfires and assumptions drawn from lived experience. AI without sensory input will simply either pause or act within whatever parameters their programmer set for null or out-of-bounds variables. It's vital to remember that all AI can be is a system of algorithms, similar to any other program on a computer. AI is thrilling because it can simulate human-like responses, but they're not like us underneath it all.
>>42118
>Batshit with long enough context
There's some truth to that; what ultimately matters is maintaining a proper context length given the system. A simple enough task, honestly. Which brings up the issue of how to keep her persona/short-term memory intact over time without simply shoving everything into context. That's the million-dollar question with several promising answers, but nothing truly definitive as of yet. I still like dynamic RAG-based solutions personally.
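For illustration, here's a toy Python sketch of that dynamic-RAG idea for persona/short-term memory. All names are made up, and word-overlap scoring stands in for real embedding similarity: only the handful of memories relevant to the current turn get injected into context, so the persona stays intact without the full history.

```python
class MemoryStore:
    """Toy dynamic-RAG memory: score stored notes against the current turn
    and inject only the top few into context."""
    def __init__(self, budget=2):
        self.notes = []
        self.budget = budget          # max memories injected per turn

    def remember(self, note):
        self.notes.append(note)

    def recall(self, query):
        # Word-overlap scoring; a real system would use embedding similarity
        q = set(query.lower().split())
        scored = sorted(self.notes,
                        key=lambda n: -len(q & set(n.lower().split())))
        return scored[: self.budget]

    def build_context(self, persona, query):
        """Context = fixed persona + retrieved memories, not the full history."""
        return [persona] + self.recall(query)
```

The context size stays bounded by the budget regardless of how many memories accumulate, which is the whole appeal over shoving everything into the prompt.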
>>42124 Great find, and a video many of us ought to watch. It provides an excellent reminder of how truly limited LLMs are inherently. They're an important part of a Cognitive Architecture, but we do need to discover better methods. Artificial curiosity with intuitive synthesis of novel ideas/concepts based on new information will likely be one of the hardest nuts to crack.
>>42146 POTD >>Sensory Subnetting Ah, I See You're a Man of Culture As Well. :^) https://www.youtube.com/watch?v=KqiW4w6MlQU