/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

We are back again (again).

Our TOR hidden service has been restored.





Open file (8.45 MB 2000x2811 ClipboardImage.png)
Cognitive Architecture : Discussion Kiwi 08/22/2023 (Tue) 05:03:37 No.24783
Chii Cogito Ergo Chii
Chii thinks, therefore Chii is.

Cognitive architecture is the study of the building blocks which lead to cognition; the structures from which thought emerges. Let's start with the three main aspects of mind:

Sentience: The ability to experience sensations and feelings. Her sensors communicate states to her: she senses your hand holding hers and can react. Feelings are having emotions: her hand being held brings her happiness. This builds on her capacity for subjective experience, related to qualia.

Self-awareness: The capacity to differentiate the self from external actors and objects. When presented with a mirror, echo, or other self-referential sensory input, she recognizes it as the self. She sees herself in your eyes' reflection and recognizes that it is her, that she is being held by you.

Sapience: Perception of knowledge. Linking concepts and meanings; being able to discern correlations congruent with having wisdom. She sees you collapse into your chair; she infers your state of exhaustion and brings you something to drink.

These building blocks integrate and allow her to be. She doesn't just feel; she has qualia. She doesn't just see her reflection; she sees herself reflected, and acknowledges her own existence. She doesn't just find relevant data; she works with concepts, and integrates her feelings and personality when forming a response. Cognition: subjective thought, reliant on a conscious separation of the self from external reality, that integrates knowledge of the latter. A state beyond current AI; a true intellect.

This thread is dedicated to all the steps on the long journey towards a waifu that truly thinks and feels. >=== -edit subject
Edited last time by Chobitsu on 09/17/2023 (Sun) 20:43:41.
>>36309 Thanks for your work on this. I'm still trying to work my way through the unread threads, till I have time to use OpenAI and your software.
> (discussion -related : >>37652, ...)
>>24783 do you have any other contacts? this is also the field that I am working on, if you are interested.
>>40369 I'm pretty sure he maintains a d*xxcord, Anon. It's listed somewhere here on the board, I think.
>>40369 Well i have a discord. However im mostly focused on hardware side of things. https://discord.gg/ZnepfsBD If youre concerned about optics dont worry as i am NON political as far as the project is concerned
>>40376 This is not a joke this is my email address.
>>40376 >>40377 What a great domain. :D
>>42089 This seems better here than in Robowaifus in Media thread.
>>42089 I really hate this kind of pandering, fearmongering clickbait. I don't doubt that some of these outcomes are possible in some extreme scenarios in decades to come, but that doesn't justify this YT'rs carte blanche, unsubstantiated claims and verbal manipulations here & now. For example, blithely tossing in phrases such as "human-level AIs" in the past tense as if its some kind of "Oh... everybody already knows thats happening now!111" declaration. Lolnopenopenope.exe >tl;dr Manipulative AF. And the Globohomo """art""" is offputting as well (I always nootice a (((common pattern))) with types that use it overmuch, as in this case). <---> Personally, I give it a hard pass on this content creator.
Edited last time by Chobitsu on 10/04/2025 (Sat) 14:09:44.
>>42091 Could have sworn I posted it there. My bad.
>>42092 I saw what Chobitsu posted first, and thought "I wonder how bad it will be", and the first few seconds pissed me off. The Discord artstyle is cute for a minute, then gets annoying really quickly. Lemme guess, he's an animator, and he hates AI. Totally not jealousy going on here. The argument/scenario that it brings up can be easily countered by realizing, "Humans can be liars, evil and self-serving too". They also gloss over the fact that the hypothetical company has no idea what's going on in their own computers and don't even check their own work. That sounds like a you problem. I watched further into the video, and it just got ridiculous. Their idea of AI spreading is straight out of a cheesy inaccurate sci-fi story. This ignorant artist thinks every single AI is linked and that AI are cyberspace gods. And ofc, he's a fat leftoid by his own admission. The real lesson here is that we have to spread our own message and quell any 90s cyberpunk fears
>>42096 POTD >The real lesson here is that we have to spread our own message and quell any 90s cyberpunk fears I hope somehow that you're right, and that would in fact work by some miracle. Cheers, GreerTech. :^)
Open file (474.08 KB 512x512 IMG_20251002_152355.png)
>>42096 I think it is good to understand how our enemies think and how our enemies might build waifus of their own. Love thy enemies, after all.
>>42098 Ahh. The old sheep & wolves; snakes and doves [1], ehh? :D While I don't think this 'snake' considers us sheep to be devoured, I'm absolutely positive his (((masters))) (and their master) do. GreerTech is right. Hopefully the NPC brigades will have their eyes opened to these evildoer's plots soon. May our good Lord direct the affairs of mankind during this coming war. Amen. --- 1. "Behold, I am sending you out like sheep among wolves; therefore be as shrewd as snakes and as innocent as doves." https://biblehub.com/matthew/10-16.htm (BSB)
Edited last time by Chobitsu on 10/05/2025 (Sun) 02:28:16.
Open file (30.91 KB 832x162 ClipboardImage.png)
So, I'm building something atm. And I think this might be the right place to discuss it? I'm attempting to design a simulated, sparse computational neural system, optimized through evolution. I'll keep you guys in the loop and my next post will hopefully be a "solved" cifar10, with a bit of luck.
>>42099 Based >>42100 Welcome
>>42100 Welcome, Anon! Glad you've joined us.
>your code review:
1st: You need to understand your machine and the compiler to make good progress, Anon.
a: std::endl performs two operations: 1. it writes your output to the global cout buffer object; 2. it flushes that object. Very inefficient, unless that's the design goal (in this case it clearly isn't).
b: use std::cout << "muh_text\n"; instead.
2nd: Good-looking code otherwise! Cheers, Anon. :^)
>>42101 Grazie. :^)
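To make the std::endl point concrete, a minimal C++ sketch (not from the reviewed code); into a string stream both forms emit identical characters, only the flush differs:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// std::endl = write '\n' + flush; "\n" = write only.
// The resulting characters are identical, which is why "\n"
// is the cheaper default for ordinary console output too.
std::string via_endl() {
    std::ostringstream os;
    os << "hi" << std::endl;   // writes and flushes
    return os.str();
}

std::string via_newline() {
    std::ostringstream os;
    os << "hi\n";              // writes only
    return os.str();
}
```

On a real std::cout the per-line flush is what costs; with "\n" the runtime flushes the buffer in large chunks on its own schedule.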
Open file (136.33 KB 1276x897 ClipboardImage.png)
>>42102 The couts are just leftover junk from debugging the very unruly state machine. It's basically an interpreter at this point(well, genome interpreter so I guess I should've expected it to turn out like that, lol.) Dirty code, but it's there and I can clean it up when I see the threads crap out in tracy, kek.
>>42102 >>42101 And thank you for the warm welcomes! It's been a few years since I was last on here. I really need to get up to speed on C++, I've mostly written VHDL lately.
>>42103 Ahh, makes sense. I'm sure you'll have it all cleaned up in due time! :^) Anyway -- again -- glad you've joined us. Looking forward to you sharing your progress updates with us, if you will. Just pick a thread you like, Anon. Cheers. :^)
>>42104 >I've mostly written VHDL lately. In some sense, this is really the only way we'll all pull this off: literally at the hardware level. * --- * In support of this postulate, I'd direct you to the very eminent Dr. Carver Mead's works, as pertains /robowaifu/'s immediate interests: ( >>12828, >>12861 ).
Edited last time by Chobitsu on 10/05/2025 (Sun) 04:22:30.
>>42104 >I really need to get up to speed on C++ I highly recommend Bjarne Stroustrup's little book colloquially known as Tour++. https://stroustrup.com/tour3.html It's roughly comparable to K&R's TCPL2e (and intentionally so). It's not too dense, and at ~250 pages can be consumed pretty readily. It's targeted specifically towards programmers who are trying to get to grips with colloquial C++ today (so-called "Modern C++"). Highly recommended. Cheers, Anon. :^)
>>42106 But the question is: what hardware would we even build? You basically just need a CPU and an architecture that can take advantage of it. Modern processors have mind-melting instruction throughput; humanity just insists on using dense ANNs, which for example require 3 layers to do XOR, an operation that can be done in <1 clock cycle with a modern CPU. All this because SGD is so enticing for data-modeling purposes. The result is something like GLM4.6: an impressive model, but it still takes 355B/32B active to solve "What is 34+12", and the method for solving it isn't logic at all, simply a bunch of heuristics. >>42107 Thank you fren.
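The XOR comparison above can be made concrete. A hedged C++ sketch: one CPU instruction versus the textbook minimal threshold-unit network (this is the classic construction, not anyone's actual model):

```cpp
#include <cassert>
#include <cstdint>

// One machine instruction XORs all 64 bit-lanes of a word at once.
uint64_t xor_word(uint64_t a, uint64_t b) {
    return a ^ b;
}

// The smallest classical threshold network for 1-bit XOR still needs
// a hidden layer: h1 = OR(a,b), h2 = AND(a,b), out = h1 AND NOT h2.
int xor_net(int a, int b) {
    int h1 = (a + b) >= 1;   // hidden OR unit
    int h2 = (a + b) >= 2;   // hidden AND unit
    return (h1 - h2) >= 1;   // output unit
}
```

Three layers of units (and many multiply-accumulates when trained densely) to reproduce what the ALU does for free.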
>>42109 >But the question is: what hardware would we even build? Well, the basic premise is to push much of the "intelligence" out to the periphery of the system as a whole. In-effect combining the sensing & processing & control all together into one tight little silicon package. Similar to biological life, taking this approach to 'computation' reduces the need for bidirectional chatter with a central core. Locally handling inputs results in faster control responses & lower power consumption (nominally being near-instantaneous & mere micro-Watts). >tl;dr Put hundreds (thousands?) of these relatively-dumb little sensor+processing+control packages out on all the periphery where the robowaifu interacts with the realworld; while her system overall lazily performs data fusion back at the core (the robowaifu's "brain") to inform higher-level "thinking & planning". --- These distal components also form compact little subnets for local comms/coordination with one another. For example, all the dozens of these little modules associated with sensing/running a robowaifu's single hand, say. Though spending most of their time asleep, they still can each perform thousands of sense/control operation cycles per second; also xcvr'g dozens-to-hundreds of subnet info packets per second amongst themselves (and additionally generating consolidated data together as a group; for the central core's use [plus receiving control signals asynchronously back for the collection's use]; sending this out along alternate comms pathways). All electronics thereto being very lightweight (in basically every sense of the term) individually. <---> This is the essence of Neuromorphics. The inspiration for this approach is studying life itself, and how it reactively coordinates & operates against stimulus. The >tl;dr here being that most of it happens out at the periphery...inside the neural & musculature tissue there locally. Cheers, Anon. :^)
Edited last time by Chobitsu on 10/07/2025 (Tue) 02:18:26.
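A toy C++ sketch of the peripheral-node idea described above (all names and thresholds are invented for illustration): the node handles a reflex locally and only hands a consolidated summary back to the core.

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Hypothetical distal module: sense + react locally, summarize for the core.
struct SensorNode {
    double reflex_threshold = 0.8;   // assumed value
    std::vector<double> window;      // recent local readings

    // True means the node reacted locally -- no round-trip to the brain.
    bool sense(double reading) {
        window.push_back(reading);
        return reading > reflex_threshold;
    }

    // Consolidated datum the core fuses lazily, e.g. once per second.
    double summary() const {
        if (window.empty()) return 0.0;
        return std::accumulate(window.begin(), window.end(), 0.0)
               / static_cast<double>(window.size());
    }
};

// Demo: one reading below the reflex threshold, one above.
bool demo_reflex() {
    SensorNode n;
    bool calm   = n.sense(0.2);   // below threshold: stay quiet
    bool reflex = n.sense(0.9);   // above threshold: react locally
    return !calm && reflex && std::fabs(n.summary() - 0.55) < 1e-9;
}
```

The core only ever sees summary(), while the fast path (sense → reflex) never leaves the node; that's the latency and power win in miniature.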
>>42109 >Thank you fren. You're welcome, Anon! Cheers. :^)
What if the AI is hallucinating because of sensory deprivation?
>>42116 LLMs are autoregressive so errors will accumulate and make any LLM go batshit in a long enough context.
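That intuition can be put in toy form. Assuming (unrealistically) each token stays on track independently with probability p, the chance an n-token continuation never derails is p^n, which decays geometrically with context length:

```cpp
#include <cassert>
#include <cmath>

// Toy independence model of autoregressive drift. Real token errors are
// correlated, so treat this strictly as an intuition pump.
double chance_on_track(double p, int n) {
    return std::pow(p, n);
}
```

Even at p = 0.99 per token, a 100-token stretch stays fully on track only about a third of the time under this model.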
>>42118 https://www.youtube.com/watch?v=qvNCVYkHKfg Yann LeCun talks about it here among other things, nice to get a more grounded perspective on current AI efforts.
>>42124 Thanks, Anon! Cheers. :^)
>>42111
>Sensory Subnetting
This is similar to how modern robotics servos use advanced sensors to enable simple control. You only need to send position and velocity; the servo will figure out how to enact that goal on its own. Many advanced sensors are similar in their ease of use: they can simply tell the computer where/what/the state of something, which makes implementation heaps easier. Distributed computing is making robotics much easier, and will help Cognitive Architecture. I imagine an artificial "spine" for reflexes will also play a role in a cognitive system. We likely will need to consider further how to distribute her system for effective and affective interactions with the world, both consciously and subconsciously.

>>42116
>AI sensory deprivation
Not possible, as AI doesn't function in a manner where that could happen. It's not intuitive; AI aren't like us. Sensory deprivation causes us to go crazy because our brain relies on our senses to orient itself within the world. Without these senses, the brain will make things up based on misfires and assumptions based on lived experience. AI without sensory input will simply either pause or act within whatever parameters their programmer set for null or out-of-bounds variables. It's vital to remember that all AI can be is a system of algorithms, similar to any other program on a computer. AI is thrilling because it can simulate human-like responses but, they're not like us underneath it all.

>>42118
>Batshit with long enough context
There's some truth to that; what ultimately matters is maintaining a proper context length given the system. A simple enough task, honestly. Which brings up the issue of how to keep her persona/short-term memory intact over time without simply shoving everything into context. That's the million-dollar question with several promising answers but, nothing truly definitive as of yet. I still like dynamic RAG-based solutions personally.
>>42124 Great find and a video many of us ought to watch. It provides an excellent reminder of how truly limited LLM's are inherently. They're an important part of a Cognitive Architecture but, we do need to discover better methods. Artificial curiosity with intuitive synthesis of novel ideas/concepts based on new information will likely be one of the hardest nuts to crack.
>>42146 POTD >>Sensory Subnetting Ah, I See You're a Man of Culture As Well. :^) https://www.youtube.com/watch?v=KqiW4w6MlQU
>>42146 One idea I had was separate "brains" for actions. So, you would have a brain for thinking, and one to process how to move the limbs and convert the signals from the limbs into what I call "roleplay text", basically stuff like "*touches hand*". That's how I imagined a future Galatea descendent would work in my sci-fi universe 400 years in the future.
>>42146 >You only need to send position and velocity, the servo will figure out how to enact that goal on its own. Not even that: all hobby-grade servos I've encountered, from tiny SG-90s to giant ASMC-04Bs, only ask for position, then move to that position as fast as possible. Different "speeds" are achieved by moving to a position in steps. E.g. if you wanna go from 0-90 degrees, instead of saying "go to 90 degrees" you say "go to 10 degrees", wait 0.1 seconds, "go to 20 degrees", wait 0.1 seconds, etc. A servo simply works with a potentiometer and some ICs: if the potentiometer's position is larger than the goal, rotate in one direction; if it is smaller, rotate in the other; if it is equal to the goal, don't rotate at all. Plus, servos don't tell you if they have actually gone to the location, which is important for safety reasons (e.g. what if the servo breaks, encounters resistance and gets stuck, etc.). From the controller's side, hobby servos are pretty much an open-loop system. I know this because I've been working with servos to drive heavy-duty motors and having to code my own closed-loop system. When you're dealing with 1320 lbs of force you wanna make sure nothing breaks :)
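The stepped-movement trick above can be sketched in C++ like this (step size and the ~0.1 s pacing between sends are up to the caller; this just plans the targets):

```cpp
#include <cassert>
#include <vector>

// Break a move from `from` to `to` degrees into equal steps; the caller
// sends each target in turn with a short delay between them, which makes
// a position-only servo appear to move at a chosen speed.
std::vector<int> plan_steps(int from, int to, int step) {
    std::vector<int> targets;
    int dir = (to >= from) ? 1 : -1;
    for (int p = from + dir * step; dir * p < dir * to; p += dir * step)
        targets.push_back(p);
    targets.push_back(to);   // always finish exactly on the goal
    return targets;
}
```

E.g. plan_steps(0, 90, 10) yields 10, 20, ..., 80, 90; sending one every 0.1 s gives roughly 100 degrees/second.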
>>42149 >That's how I imagined a future Galatea descendent would work in my sci-fi universe 400 years in the future. Lol. Brother it's not gonna take 400 years! :D Any modern vehicle is an amalgam of 30+ little computers all working in concert together (cf. >>112 ) to move you & yours down the road in reasonable safety, comfort, & efficiency. And its been this way for a couple decades+. Generally speaking (at least with the higher end marques) these get more & more refined each release cycle. >tl;dr You have great idea there, GreerTech. Lets all work together to figure out how to do it! Cheers, Anon. :^) >>42150 Great info, Mechnomancer. However, I don't think your point is necessarily at odds with Kiwi's basic premise. I'm also quite sure that it'd be easy enough to abstract your 'velocity' control scheme behind a simplistic-to-use interface -- which I'm confident some products have probably already done. :^) <---> More importantly, I'd like to hear every'non's take about the general concept of a digital "spine" such as Kiwi suggests. * And more broadly, I'd like to explore Carver Mead's general opus of Neuromorphics, and how we can capitalize on it to provide highly-responsive, distributed computing around our robowaifu's internal systems. https://trashchan.xyz/robowaifu/thread/26.html#1317 --- * Its now well-understood that the spine offloads a yuge part of the kinematic control of the body from the brain, and likely also plays a major role in vertebrate's sensorimotor experiences.
>>42152 > I don't think your point is necessarily at odds with Kiwi's basic premise. Precisely. I was just adding a bit more servo info :) While it doesn't seem to exist in the wild internet, search engine AI's seem to have gobbled up the concept of "servo speed" (aka positional interpolation) into their repertoire. James Bruton has a good vid about making servos move smoothly, even used the concept in earlier versions of SPUD. https://www.youtube.com/watch?v=jsXolwJskKM >Digital Spine I suppose how SPUD's programs interact with each other would constitute a digital "spine": separate processes running various things and interacting with each other. SPUD's LLM is a separate process from the TTS/mouthflaps so she can "think" of the next thing to say while talking. The image recognition runs right after voice commands in the same script as the LLM (since they're all dependent on each other and can run synchronously). But I can also control servos (aka mouthflaps) from the LLM program as well: implemented the voice command to make SPUD wiggle her toes everytime she hears the word "toe" (still gotta make a video about that)
>>42153 >I suppose how SPUD's programs interact with each other would constitute a digital "spine": separate processes running various things and interacting with each other. SPUD's LLM is a separate process from the TTS/mouthflaps so she can "think" of the next thing to say while talking. The image recognition runs right after voice commands in the same script as the LLM (since they're all dependent on each other and can run synchronously). That's similar to my future Galatea concept, nice to see that it actually works and is being used now
Open file (30.49 KB 680x534 a7b.jpg)
>>42158 If SPUD were to walk, the system could work like this; The LLM says to walk forward for 10 seconds/10 feet, and a separate process takes that command and controls the legs to do so. It's like how we humans walk: when we want to move somewhere, we don't think "move thigh muscle, extend calf muscle, etc...", or think about balance, we just implicitly "(move there)"
Open file (568.74 KB 910x994 brain parts.png)
Open file (6.41 MB 1364x768 GPT expressions.mp4)
>>42159 Various lobes of the brain are for various functions. Sometimes an LLM is suitable for the functions, sometimes they are not. An LLM is not well-suited for audio input, so I use a vosk model.
>The LLM says to walk forward for 10 seconds/10 feet,
That would depend on the LLM. Super large ones like chatgpt will respect formatting requests very well and could perform multiple lobe functions, e.g. telling the robot "if you want to move forward say "walk_forward:x" where x is the amount you wish to move". Attached is a vid where in the early days of Chatgpt (October 4th, 2021!) I gave it some keywords to emote with. The code allowing GPT to give expressions would be considered part of the "Emotions" function of the frontal lobe, while chatgpt performs decision making, intelligence, reasoning and language. Less sophisticated ones are mostly trained on general history/chatting and would need more supporting code to parse out functions.
Similarly, you could feed an LLM a different prompt to get it to "reason different things", such as "summarize the following chat and modify this prompt accordingly [insert base persona prompt]", in order to simulate the Hippocampus function of memory encoding/consolidation. In another way, one could use some code wizardry to take what the user says and compare it to the closest thing in the memory (I have some code somewhere that compares a string to a list of them and returns the most similar one) and tell the persona "your master said X, you remember when Y".
Of course, the problem with actually applying these things would be that I would be in fact simulating/imitating a form of life, and that has some ethical concerns. I know it is a bit silly, but I'd feel bad leaving the robot deactivated except for exhibitions if it were capable of perceiving the time gap (which, tbh, would be rather easy to do).
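The "walk_forward:x" convention above could be parsed along these lines (the token name is just the example from the post; anything else in the reply passes through as ordinary speech):

```cpp
#include <cassert>
#include <cctype>
#include <optional>
#include <string>

// Scan an LLM reply for "walk_forward:<digits>" and extract the amount.
// Returns nothing when the keyword is absent or malformed.
std::optional<int> parse_walk_forward(const std::string& reply) {
    const std::string key = "walk_forward:";
    std::size_t pos = reply.find(key);
    if (pos == std::string::npos) return std::nullopt;
    std::size_t i = pos + key.size(), end = i;
    while (end < reply.size()
           && std::isdigit(static_cast<unsigned char>(reply[end]))) ++end;
    if (end == i) return std::nullopt;   // key present but no number
    return std::stoi(reply.substr(i, end - i));
}
```

The supporting-code burden the post mentions is exactly this: the smaller the model, the more such parsing and validation you end up writing around it.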
>>42153 Agreed. >James Bruton vid Based. He's fun to watch. I feel like its Top Speed, but for robowaifuists. :^) >implemented the voice command to make SPUD wiggle her toes everytime she hears the word "toe" (still gotta make a video about that) Lol. DOOEET! :D
>>42159 >when we want to move somewhere, we don't think "move thigh muscle, extend calf muscle, etc...", or think about balance, we just implicitly "(move there)" When you take understanding the entire thing (or attempting to) all in; from before birth, to a gifted athlete performing world-class on the Big Day...its all utterly mind-boggling, IMHO. >pic Lolwut? :D >>42161 Intriguing ideas. >vid Naicu! I expect she's better at it now? >pic Confusing a bit, since the upper vs. lower have the orientations flipped 180 degrees. >I know it is a bit silly Heh. Anon, I...
Edited last time by Chobitsu on 10/09/2025 (Thu) 05:39:12.
>>42161 >Various lobes of the brain are for various functions. Sometimes an LLM is suitable for the functions, sometimes they are not. An LLM is not well-suited for audio input, so I use a vosk model. Exactly. Instead of trying to make some sort of Jack-of-all-trades, let the AIs specialize. >vid Nice! That will definitely help in the translation from intent of action TO action.
>>42166 > He's [James Bruton] fun to watch. I like him because despite the influence & stuff he still gives the impression of just a regular guy, unlike Hacksmith (James - lead guy of Hacksmith - told me that he pretty much wishes he was me lol) >>42169 >Instead of trying to make some sort of Jack-of-all-trades, let the AIs specialize. Tell that to the corporate ai jockeys lol. I mean, it would probably cost less energy to switch between smaller AI models than trying to run everything in 1 big one. But an uber model sounds better to marketing.
Open file (123.78 KB 880x666 SmartServo.jpg)
>>42150 >>42153 There appears to be a misunderstanding. I'm talking about smart/industrial/robotics servos, not hobby servos. These servos are networked and have a definable position/speed/voltage/etc... They will also provide feedback on those things when pinged. They also generally don't use potentiometers, relying on magnetic, optical, or some other type of reliable absolute encoder that provides better feedback.
>>42149 >>42152 >>42159 >>42161
Emergent Persona from Convolutional Algorithms
EPCA, pronounced ehp-sha, is a system wherein several AIs/algorithms work with shared lookup tables, dictionaries, variables, etc... This is to reduce latency, memory usage, and most importantly, keep everything coordinated. This system would have goals, and everything would work towards those goals both independently and united due to this setup.
Let's consider a robot based on the new Arduino Uno Q; you ask her to put away your plate. She'll first recognize your face, and her speech-to-text algorithm will input your request. She'll then use her various algorithms to decipher what your words meant, then create or reload a plan to accomplish that goal. This could lead her to locating the plate using a computer vision program that's only focused on plates, then mapping a way so her end effectors can manipulate the plate. This would entail communication between her computer and the integrated microcontroller which is handling sensing in real time. Then navigation plans would be sent to this MCU so that her actuators move her there. She'd then reopen object recognition to confirm the plate was retrieved, followed by going to the spot you had previously trained her to recognize as the sink. This is grossly oversimplifying things.
I've described my ideas before but, they've evolved over time. I will post an update but, I need time to make a new document for it. Frankly, I'm sure the document will be rewritten several times before I feel ready to share it. That's my stance on it.
It appears to share aspects with everyone else's concepts. I'd say I'm closer to Mechnomancer on how I plan to implement things overall.
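A toy C++ sketch of the shared-table idea behind EPCA (every name here is invented; it only shows how one blackboard keeps the subsystems coordinated):

```cpp
#include <cassert>
#include <map>
#include <queue>
#include <string>

// Hypothetical EPCA blackboard: vision, planning, and actuation all read
// and write the same tables, so nothing drifts out of sync and nothing
// is duplicated in memory.
struct Blackboard {
    std::map<std::string, std::string> facts;   // e.g. "plate" -> "on table"
    std::queue<std::string> plan;               // ordered sub-goals
};

// Decompose "put away the plate" into sub-goals on the shared board.
void plan_put_away_plate(Blackboard& bb) {
    bb.plan.push("locate plate");
    bb.plan.push("grasp plate");
    bb.plan.push("navigate to sink");
    bb.plan.push("release plate");
    bb.facts["current goal"] = "put away plate";
}

bool demo_plan() {
    Blackboard bb;
    plan_put_away_plate(bb);
    return bb.plan.size() == 4
        && bb.plan.front() == "locate plate"
        && bb.facts.at("current goal") == "put away plate";
}
```

Each subsystem (plate-only vision, navigation, the MCU bridge) would pop its sub-goal, act, and write results back to the same facts table for the next stage.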
>>42175 POTD >Let's consider a robot based on the new Arduino Uno Q; ... This is great stuff, Anon. Such a methodical approach can be broken down into straightforward engineering principles & solutions; both in hardware & software. And the process is both debuggable during design time, and maintainable too. Can't wait! Cheers. :^)
Edited last time by Chobitsu on 10/10/2025 (Fri) 20:07:37.
>>42171 >I like him because despite the influence & stuff he still gives the impression of just a regular guy Agreed. I hope he'll go full-on Robowaifuist one day (such as HannaDev * ). --- * His current latest neck update: https://www.youtube.com/watch?v=68mJL_EnXqg&list=TLPQMTAxMDIwMjUPQIUtkpF5hA&index=59
Edited last time by Chobitsu on 10/11/2025 (Sat) 00:19:06.
>>42149 I've thought about this a bit and did some back-of-the-envelope calculations, plus some speculation on what it would take.
Here I counted up what would be needed for all 300 human muscles and how ESP32 microcontrollers could be used to do this, with a main processor sending them position data. >>20558
Here I did some rough guesses of what data the main processor would send to the microcontrollers. >>21602
Using programming in the microcontrollers to smooth curves while reducing the bandwidth between the main processor and the microcontroller. >>22111
More detail, using vectors for velocity and end points for limbs. >>22119
More along these lines. >>22120 >>22121
Here's where someone was worried about the bandwidth to control extremities, and I linked where I had been thinking about this and drafting rough numbers for what bandwidth was needed. Some links are the same as above. >>23637
BTW, I think if acceleration, velocity, and direction vectors combined with an end point were sent, you would have plenty of bandwidth; more than plenty. I might add that something else would be a movement algorithm. I would plan for it but not necessarily use it right away; just have a place in the data stream to send it to the microcontroller doing the actual movement.
I might mention I have changed the microcontroller that I find most useful from the ESP32, a great one, to the Raspberry Pi Pico, due to tariffs raising the price, uncertainty, and possible supply problems. The Pico is slightly less powerful but has a really good software chain supporting it, maybe even better than the ESP32's. Many have damned Trump for his drastic tariff raising, but I've voiced this opinion for decades, starting with the Japanese. A vast number of countries have had tariffs on the US for decades that the US did not have on them, due to the Cold War. We ate the difference and it has been killing us.
That it will be super painful after letting this go on for so long is not a surprise. The problem has gotten worse and worse, especially since all the bankers and the people who run companies in the US are only looking for maximum right-now super profits; no thought at all for the future. The Chinese and Japanese, on the other hand, are very smart: they own their own central banks, and while their banks obviously want max profits also, they are given "guidance" to invest in local companies, and they also use guided investments to control the whole chain of technologies. In fact, with the present banking system in the US and the worthless crooks who run our companies, I'm not so sure the US could EVER catch up when it comes to the immense chain of materials and parts needed. However, you have to start sometime. If not, then the US will be nothing but a raw-material harvester. Much has been lost for decades. That other countries are having conniptions that things have changed is no surprise, when since the end of WWII we have mostly let most anyone import with low tariffs while they had very high tariffs on US goods. Complaining about the tariffs the US has now is forgetting all the gains we forfeited to stop the commies. It may all seem fake now, but I remember when there were 30,000 nukes pointed at the US, and just about every country that was lost to the commies immediately started imprisoning a large amount of the population, if not slaughtering them outright like Cambodia. It was a serious thing. The US leads in some tech areas, rapidly shrinking, but has real supply chain problems that will be difficult to overcome. Especially since "graphic" has happened. Excuse the vulgarity https://trashchan.xyz/robowaifu/thread/26.html#1322
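The vector-command scheme discussed earlier in the thread (end point plus velocity/acceleration hints per actuator, interpolated locally by the MCU) could be packed like this. Field widths and update rates are guesses, but they show the bandwidth really is modest:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical per-actuator command frame. The MCU interpolates toward
// end_position on its own, using the velocity/acceleration hints, so the
// main processor only refreshes goals rather than streaming raw positions.
#pragma pack(push, 1)
struct ActuatorCommand {
    uint8_t actuator_id;
    int16_t end_position;    // e.g. 0.01-degree units
    int16_t velocity;        // signed; units left to the implementation
    int16_t acceleration;
    uint8_t checksum;
};
#pragma pack(pop)
```

At 8 bytes per frame, refreshing all 300 actuators 50 times a second is 120 kB/s (just under 1 Mbit/s), within reach of common embedded links, and far less if only changed goals are sent.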
Seems we need a new thread, OP
NEW THREAD NEW THREAD NEW THREAD >>42635 >>42635 >>42635 >>42635 >>42635 NEW THREAD NEW THREAD NEW THREAD
