/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

We are back again (again).

Our TOR hidden service has been restored.



“Conquering any difficulty always gives one a secret joy, for it means pushing back a boundary-line and adding to one's liberty.” -t. Henri Frederic Amiel


Open file (51.12 KB 256x256 ClipboardImage.png)
Lemon Cookie EnvelopingTwilight##AgvQjr 04/28/2025 (Mon) 21:51:57 No.37980 [Reply]
The original thread can be found here: https://trashchan.xyz/robowaifu/thread/595.html
---
Welcome to the Lemon Cookie thread. The goal of Lemon Cookie is to create a framework where a synthetic "mind and soul" can emerge through an "LLM as cognitive architecture" approach. This thread exists to collect feedback, ask for help & document my progress.

First I am going to try to give a high-level overview of how this cognitive architecture is envisioned and the ideas behind it. I have spent time looking at cognitive-architecture work, and in the field there is now a consensus on how the mind works at a high level. An important mechanism is a "whiteboard": basically a global temporary memory that all the other systems read from and write to. Then there are different long-term memory systems that react to the whiteboard and add content to it, along with pattern matcher(s)/rules that work on the content of the whiteboard. A key thing to consider is the difference in philosophy that cognitive-architecture projects have: the intelligence is considered to emerge from the entire system. Compare this to LLM agent work, where the intelligence is considered to be the LLM itself.

My feelings on the general LLM space are conflicted; I am both amazed and really disappointed. LLMs possess an incredible level of flexibility, world knowledge and coherence, but everything outside of the model is stagnant. It's endless API wrappers & redundant frameworks, all slight permutations on RAG & basic tool calling. I do believe that LLMs are misused as chatbots; simply put, their pattern-matching and associative power is constrained by the chat format and shallow tooling.

Here are the important aspects of the Lemon Cookie Cognitive Architecture so far:
1. Memory is difficult. I do not think there is a single data structure or method that can handle it all; several distinct types of memory will be needed. So far I plan for a PathRAG-like system and a "Triadic Memory"-inspired system for external associations (this is missing in most LLM solutions).
2. LLM as kernel. The LLM's context window is the whiteboard and has a REPL-like mechanism. It holds structured data and logic in a scripting-like format, so it's both LLM- and human-readable while staying easy to parse & allowing for expressive structured data. The LLM's role will be to decompose data and make patterns and associations explicit as executable statements.
3. The language has to be LLM/CogArch-centric. There are a thousand ""agents"" that give LLMs a Python interpreter as a tool; the two need to be more tightly coupled. Scripted behavior works via pattern matching: the whiteboard is a bag of objects, which allows for programmable pattern matching (think functional programming, like Haskell). It's also important to allow the LLM to observe code execution and to be able to modify state and execution flow. Data in languages has scoping rules; so should LLM context. Etc... I will go into more depth about the language in another post.
4. Another important system is the "GSR" (Generative Sparse Representation), which will be a first-class language & runtime type. This also needs its own post, but in general I am inspired by two things: the "Generative FrameNet" paper, where an LLM & an embedding model are used to automatically construct new FrameNet frames, and Numenta's SDRs / "Sparse distributed memory", a representation with a lot of useful properties for memory (please watch the videos under the "What the hell is an SDR?" segment in my links list for an easy introduction). I think SDR unions & SDR noise tolerance will be especially useful.
5. A custom model. For all of the above to work well, a model will need to be fine-tuned with special behaviors. I do want input on this. Baking facts & behaviors into LLM weights is costly, creating bloated models that are hard to run or train (why memorize all the capitals?), while letting authors gatekeep truth and impose "safety" detached from context. Blocking role-play "violence" or intimacy isn't protection: it's authors hijacking your AI companion to preach at you. Externalizing behaviors via whiteboard pattern matching shifts control: stabbing you in-game can be funny, but a robot wielding a knife isn't. Maybe you want intimacy privately, but don't want your AI flirting back at your friends.

When put together, I think this will be able to host a kind of synthetic "soul". In a living being, what we call a personality is the accumulated associations, learned behaviors, beliefs and quirks molded by a unique set of experiences. I hope this will be true for this system too.
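To make the whiteboard + pattern-matching idea a bit more concrete, here is a minimal Python sketch (names and structure are purely illustrative, not actual Lemon Cookie code): the whiteboard is a bag of objects, and memory systems/rules are callbacks that fire when a matching item lands on it.
[code]
# Minimal whiteboard sketch: a bag of facts plus pattern-matching rules.
# Hypothetical illustration only -- names and structure are made up.

class Whiteboard:
    def __init__(self):
        self.items = []        # the "bag of objects" (global temporary memory)
        self.rules = []        # (pattern, action) pairs that react to new items

    def add_rule(self, pattern, action):
        """pattern: dict of key/value constraints; action: callback(item, board)."""
        self.rules.append((pattern, action))

    def post(self, item):
        """Write an item to the whiteboard and let every matching rule fire."""
        self.items.append(item)
        for pattern, action in self.rules:
            if all(item.get(k) == v for k, v in pattern.items()):
                action(item, self)

# Example: a toy long-term-memory system that reacts to questions on the board.
memory = {"capital_of_france": "Paris"}

def recall(item, board):
    answer = memory.get(item["query"])
    if answer is not None:
        board.post({"type": "fact", "query": item["query"], "value": answer})

board = Whiteboard()
board.add_rule({"type": "question"}, recall)
board.post({"type": "question", "query": "capital_of_france"})
print(board.items[-1])   # {'type': 'fact', 'query': 'capital_of_france', 'value': 'Paris'}
[/code]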


9 posts and 4 images omitted.
>>39162
>>AI research paper
>Impressive.
Please note the paper is not serious, I literally just wanted to share some AI output that came out ok.
>>39223
>I finished the paper. Ehhhhh, it looks good and bad at the same time. I think it got some good big picture stuff but I'm not sure about the rest. Though I must admit much of the last of it is above my head.
That's the feeling I got from it too: it's good, but it's also bad lol. I am amazed that it got the big picture and generated something so coherent; a year ago no LLM, closed or open, framework or not, would produce something of this quality. But there are also lots of gaps plainly visible. Don't take this PDF as actual plans, I just posted it for fun.
----
>>39162
>Am I understanding you rightly, Anon? If so wow that's a serious undertaking.
Yes, but is it really such a big undertaking? I feel like most programmers eventually get the itch to do so, usually as a toy or some sort of DSL. In this case, it's a DSL; it's not replacing any of the systems-level code, which is still being written in D, and I am still using C & C++ libraries.
>>39221
I am aware of Nim, it was one of the languages I ran into before I settled on D. I liked what I saw, and if I were looking for a systems language it would be high up on my list. I don't think it would make a good embedded scripting-type language.
>Now if this guy, wonderbrain, has trouble with C++ then, I have no hope.
I come from C++; the reason I switched is that I do this as a hobby and I am a solo developer, so my productivity is the most important metric. C++, while powerful, I found to be more of a footgun for me. I spent more time than I wanted dealing with memory & lifetime issues, horrid CMake build scripts and so on. Because I am not part of a team and have no obligation to make myself replaceable, I decided to move to something that makes me happier. I picked D because it's like C++ with nicer syntax, nicer templates & a garbage collector.
>>39228
>I literally just wanted to share some AI output that came out ok.
Got it. I found that impressive. The claim "muh half the research papers today are written by AI!111" from neo-Luddites (what a term :D) may be partly-true soon. :^)
<--->
>Yes, but is it really such a big undertaking?
Maybe it's just """me and my fast-lightnin' mind""" talking, but yes, it certainly seems so to me personally. Given the literal multiple-trillions being spent yearly today on this general domain, I think market & economic pressures would've already arrived at effective solutions decades ago if it were a 'simple' thing. :^)
>In this case, it's a DSL, it's not replacing any of the systems level code, that is still being written in D and I am still using C & C++ libraries.
Good thinking. That's on a much-more feasible scale for a talented individual, I think. I myself hope to create a DSL wrapper of sorts for RW Foundations code: primarily as a way to ease/ensure-effective-support-for safe, standardized & encompassing scripting interfaces to our robowaifus (for a variety of other languages like, eg, Python & Lua). Kind of a Scripting Abstraction Layer over the underlying Executive/C4 realtime systems code. I predict this can eventually even be extended out to providing realworld, 'embodied' interfaces for LLMs, STT<->TTS, OpenCV, etc., into our robowaifus. :^)
<--->
>I really do hope that my choice of D does not upset him too much, I understand why he picks C++.
Lol, no not at all. Though it is a bit discouraging to me that after almost a decade's dedicated effort on my part trying to help newcomers here to learn the language, I still don't have.a.single. regular C++ collaborator here on /robowaifu/ with me. Literally no takers at all, by appearances. I'm sure RW Foundations would be vastly-further along today if someone (even just one complete beginner; b/c it would've kept me motivated) had been working closely with me on it for say, a couple years now. Eventually, I just got burnt out by the discouragement + the pressures of full-time student life (3rd degree's-worth in a row, so far). BUT... I haven't abandoned the RW framework itself! Just the fruitless "C++ language-teaching" part. (Grommet won -- his 'muh_anything_but_C++' niggerpilling beat me in the end...) LOL. JK, no it didn't! :D Hopefully I can find the internal motivation and pick it all back up again soon ... -ish. :^)
Edited last time by Chobitsu on 06/14/2025 (Sat) 01:29:18.
>>39225
I wrote this right as the site went down and saved it locally.
>I don't make fun of C++ developers
Making it clear: I'm not in any way doing this. I'm saying that it's too hard for me, and I believe this is true for most people. I'm not capable of making progress with C++. Any other language is difficult enough.
>Please note the paper is not serious
I fully get that. I hope your endeavors are successful. I do think writing a new language is going to take a long time, but if it amuses you then that's all that matters. It's not a contest. I do think, though, that it will be really hard to write something from scratch. The reason I advocate for this or that language is not so much cheerleading, but the hope that people of better ability than me at this (not difficult) will pick a language in which I think they will make rapid progress. I know this may sound stupid but it is exactly what I'm trying to convey. Of course I could be wrong. I don't know anything about D, maybe I should. I'll look it up. (I did look at it.) Ehhh, from what I read Nim might be better, though I obviously am not qualified to say.
>NIM, it was one of the languages I ran into before I settled for D, I liked what I saw and if I was looking for a system language it would be high up on my list. I don't think it would make a good embedded scripting type language


Hello, long story short: be very careful when renting anything in the cloud. I know it's obvious advice, but if you're not careful you can still get screwed. I accidentally spun up more than one VM with expensive GPUs and it burned an eye-watering amount of money. That incident killed all motivation I had to work on this project for a while. I had a big hole in my pocket and nothing to show for it :(

I preordered a Strix Halo system with 128GB unified RAM. It has arrived. Today I made a container with ROCm & PyTorch and got ComfyUI working; it's really cool that you can finally do video generation locally with actually nice models. I'm having a lot of fun! It's really refreshing not being super limited by VRAM. It's really neat that in basically 5 years we went from "only specialist research GPUs can run this in a lab" to "here is a $2000 consumer-grade PC that can do it (but slowly)". So this has boosted my morale a lot.

Currently I am attempting fine-tuning/training of a language model of a useful size, so not a meme quant or a useless tiny 1-2B. There is not a lot of info online about how to do this on the Strix Halo, so I'll happily share how to do it once I get it working for myself. We might soon be at the point where local AI is no longer read-only in practice.

I'll be catching up on the posts I missed on /robowaifu/ and I hope you all had a good Thanksgiving.
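For anyone else setting up a similar box, a tiny sanity-check sketch like this (assuming a ROCm-enabled PyTorch build inside the container; ROCm reuses torch's cuda API) is a quick way to confirm the GPU is actually visible before burning hours on a training run:
[code]
# Quick sanity check that the ROCm build of PyTorch sees the Strix Halo GPU.
# Sketch only; assumes a ROCm-enabled torch install inside the container.
import torch

print("torch:", torch.__version__)
print("HIP/ROCm version:", getattr(torch.version, "hip", None))
print("GPU visible:", torch.cuda.is_available())   # ROCm exposes the GPU via the cuda API
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x                      # small matmul to confirm kernels actually run
    torch.cuda.synchronize()
    print("Matmul OK:", y.shape)
[/code]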
>>39347 Thanks for your points here, Grommet. >>43095 Sorry to hear about your little setback, EnvelopingTwilight. But it sounds like you are back in the saddle again! :^) I hope this new system works out well for you, and that you'll make good progress with it. Cheers, Anon. :^) <---> Happy Thanksgiving, Anons.

Open file (2.28 MB 320x570 05_AI response.mp4)
Open file (4.77 MB 320x570 06_Gyro Test.mp4)
Open file (8.29 MB 320x570 07B_Spud functions.mp4)
Open file (1.06 MB 582x1446 Bodysuit.png)
SPUD Thread 2: Robowaifu Boogaloo Mechnomancer 11/19/2024 (Tue) 02:27:15 No.34445 [Reply] [Last]
This first post is to show the 5 big milestones in the development of SPUD, the Specially Programmed UwU Droid. You can see the old thread here: >>26306 The end goal of SPUD is to provide a fairly high-functioning robot platform at a relatively low cost (free code but a few bucks for 3D print files) that can be used for a variety of purposes such as promotional, educational or companionship use. All AI used is hosted on local systems: no bowing to corporations any more than necessary, thank you. Various aspects of the code are/will be modular, meaning that adding a new voice command/expression/animation will be as easy as making the file, naming it and placing it in the correct folder (no need to mess around with the base code unless you REALLY want to). While I'm researching more about bipedal walking I'll be making a companion for SPUD to ride on, so it might be a while before I return to the thread.
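As a rough illustration of the drop-a-file-in-a-folder modularity described above (a hypothetical layout, not SPUD's actual code), something along these lines lets a new voice command register itself just by existing in a directory:
[code]
# Hypothetical sketch of folder-based command modules: each file in
# commands/ is named after its trigger phrase and holds the response text.
from pathlib import Path

COMMANDS_DIR = Path("commands")   # e.g. commands/hello.txt -> trigger "hello"

def load_commands(directory: Path) -> dict:
    """Map trigger phrase (file stem) -> response (file contents)."""
    return {p.stem.lower(): p.read_text().strip()
            for p in directory.glob("*.txt")}

def handle(utterance: str, commands: dict):
    """Return the response whose trigger appears in the spoken text, if any."""
    text = utterance.lower()
    for trigger, response in commands.items():
        if trigger in text:
            return response
    return None

commands = load_commands(COMMANDS_DIR)
print(handle("spud, say hello please", commands))
[/code]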
327 posts and 153 images omitted.
>>43086 LOL. A cute! :D
>>43086 Y'all are really out here trying to fuck mouse cosplay BMO instead of talking to any woman ever, smh
BANNED TO THE BASEMENT DUNGEON!111
>>43090 Oh hi obvious troll. To answer your question, yes, and? @Chobitsu obvious troll
Open file (72.54 KB 960x1041 chadpepe.jpg)
>>43090 TFW the cute college chicks ogling me during the numerous exhibitions of my robits. I must science before I procreate. I might make a medabot-sized walking robit body for our Samurai Pizza Cat. Erm, Samurai Pizza Mouse. Depends how bored I get and how severely I get snowed in this year.
>>43093 >I might make a medabot-size walking robit body for our Samurai Pizza Cat. Neat! That's about the size I envisioned for the initial prototype of dear Sumomo-chan (a bit smol'r, actually... ~45cm or so). >Medarot Heh, haven't watched that in years. I'll be doing so a bit during the holidays. Cheers, Mechnomancer. :^)

Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240 [Reply] [Last]
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product being the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, she's a cute AI from Assassination Classroom whose body is a giant screen on wheels.
451 posts and 150 images omitted.
>>42893 In accord with @Kiwi's sensible admonition -- and since I'm already quite-familiar with this product's platform -- I've ordered a little wheeled vehicle * for transporting the ArUco Markers & QR codes around on surfaces, just in case any'non here wants to follow along with my doings for this project. The plan is to print little tracking markers to affix to it, and directing it to drive around on a surface "carrying" the holowaifu along with it (in other words, as an expedient stand-in for the "BallBot" conceptualization ITT). --- * It has a camera + sensors, differential 4-wheel drive (can spin in-place), is compact (~ 6" cubed), and runs on the UNO R3. It's currently just US$55 (for Black Friday). https://www.amazon.com/dp/B07KPZ8RSZ
Edited last time by Chobitsu on 11/28/2025 (Fri) 15:28:22.
>>43082 >I've ordered a little wheeled vehicle Wait, I thought we agreed that we didn't need a robot for this, just graffiti.
>>43084 Eheh, yea we did. But I still want the little waifu to be able to traipse about in my flat before long. I simply decided to pull the trigger on this while it was still a Black Friday deal. No one else need get this yet. And besides, I still need to work out the camera+clip setup before that phase as well (cf. >>43073 ).
>>43085 So you want to try this because you think it would be faster to implement than digitizing your surroundings and rendering a waifu into that?
>>43088 Well, when you put it like that, yes (though that wasn't my reasoning). At the moment at least: a) I'm rather confident I can do the camera+tracking of the ArUco/QR car with my current knowledge. I'll just need to make the time to assemble everything and write a bit of code to support that. b) I'm not confident in the same way about digitizing my surroundings (though I'm sure I can get a handle on it when I make time to focus on it), unless I literally went into Blender or Maya and modeled them. In either case, the rendering of the waifu is pretty straightforward. Again, we'll start smol and grow big with her designs & details (as with everything else for this project).
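For reference, the camera+tracking piece in (a) can be sketched in a few lines of OpenCV (assuming opencv-contrib-python 4.7+ with the ArucoDetector API; the camera index and marker dictionary are placeholders):
[code]
# Minimal ArUco tracking sketch: find marker corners/IDs in the webcam feed
# so the holowaifu renderer knows where the little car is on the table.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)            # camera index is a placeholder
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        # corners[i] holds the 4 pixel corners of marker ids[i];
        # its centroid is the car's position to hand to the renderer.
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
[/code]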

Open file (81.05 KB 764x752 AI Thread v2.jpeg)
LLM & Chatbot General v2 GreerTech 05/30/2025 (Fri) 14:43:38 No.38824 [Reply]
Textual/Multimodal LLM & Chatbot General v2
Models: huggingface.co/
>How to bulk-download AI models from huggingface.co? ( >>25962, >>25986 )
Speech >>199
Vision >>97
Previous thread >>250
Looking for something "politically-incorrect" in a smol, offline model? Well Anon has just the thing for you >>38721
Lemon Cookie Project >>37980
Non-LLM chatbots >>35589
40 posts and 5 images omitted.
>>41860
I can offer some insights.
- Having powerful hardware will improve quality immensely, but you can get a lower form of this up now with smaller models.
- It will have context-limit challenges which will need to be overcome with smart use of tokens and how you store memories. You can have a functioning thing as you describe, but detailed memories will eat into this heavily. Having different ranked memory tiers, clearing lower ones, and a mix of other clever ideas can improve all of these issues. At this stage it's just a matter of having a "lower resolution" AI companion; you could probably later update your methods as tech advances, and greater hardware means greater resolution.
- I would suggest using multiple models to parse and make your AI companion smarter and not stuck in the patterns and paths that each model holds close to and deviates towards. To keep it simple: load one model, think for a bit, switch models, think a bit. Especially with current limitations, I would allow a large chunk of time when the companion is by itself computing, to go over its past and its token database and refine it, and to have discussions and debates with itself to self-improve and refine itself.
- I would have these as modes which you switch between: her active life and talking to you, and her maintenance/sleep mode where she improves her database.
- I will use SillyTavern just as the idea vessel here, as I am used to it, but I have not done this work myself properly since I'm busy with life. Use something like the memory and world lore books to store memories and events. They can each have a comprehensive recording into essentially "quants" of the lore books, where your chat history is fully recorded as non-quanted, then various levels of condensing and sorting out less important info/tokens. Then your companion will think, for each task and memory, about which quant of each lore book is suitable, keeping in mind hardware limitations and the current state of LLMs.
- Basically it will have a large database which it will parse and read through, similar to how you will have it load models one at a time to use and then switch to another: it will load different sets of the full database at a time to parse it.
- You do the usual things you would expect: writing the personalities, creating the framework of the database, then you just have it use auto-mode, similar to how SillyTavern has group chats with a different personality card as the other character, which is sort of her maintenance/recursive-improvement mode. Again, multiple different reasoning/COT models being swapped through will be ideal here specifically.


The good news here is that it's possible on local, and the bulk of the work will be in maintenance mode when you are not interacting with her. For it to be good it will be very slow, but this will be when you are sleeping or away. When you are chatting, you will simply have to accept lower-quality speech to make it run at a reasonable tokens/second generation rate, but pulled from the refined database which she created during her past maintenance phase, where she runs a stronger model at an abysmally slow tokens-per-second/minute rate for precision.
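A toy sketch of the ranked-memory-tier idea (hypothetical structure, not SillyTavern's actual lorebook format): the raw log gets progressively condensed during maintenance mode, and at chat time only the least-condensed tier that still fits the context budget gets loaded.
[code]
# Toy tiered-memory sketch: tier 0 = raw log, higher tiers = more condensed.
# summarize() is a stand-in for whatever local model does the condensing.

def summarize(text: str, ratio: float) -> str:
    """Placeholder condenser: keep roughly the first `ratio` of the text.
    In practice this would be an LLM pass run during 'maintenance mode'."""
    keep = max(1, int(len(text) * ratio))
    return text[:keep]

def build_tiers(raw_log: str) -> list:
    # tier 0: full log; tier 1: half; tier 2: a tenth ("quants" of the lorebook)
    return [raw_log, summarize(raw_log, 0.5), summarize(raw_log, 0.1)]

def pick_tier(tiers: list, budget_chars: int) -> str:
    """Load the least-condensed tier that still fits the context budget."""
    for tier in tiers:
        if len(tier) <= budget_chars:
            return tier
    return tiers[-1]   # fall back to the most condensed memory

tiers = build_tiers("anon and waifu talked about the garden, the cat, ... " * 50)
print(len(pick_tier(tiers, budget_chars=500)))
[/code]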
Open file (76.13 KB 286x176 ClipboardImage.png)
Well, I was thinking about this for a long time, and I came to think that it's actually possible to use an LLM with an LBM at the same time, with a complex mechanism, in an android for less than 3k, all in one, and even to make it self-aware if you make a special LLM. Using something similar to the organization of the brain itself, you can emulate the way a human being acts, talks, imagines things, has complex thoughts, and its central nervous system, making it essentially think about itself as a human being. I'm currently developing it by myself using something more in the way of a GLaDOS, just for fun and money issues, but I would like to go for a more elaborate build. wdyt?
>>43066
Good enthusiasm, you should put that enthusiasm into learning more. LLMs and LBMs are already used together. LLMs do not seem capable of being self-aware, though that's something to hash out in the aforementioned LLM/AI thread.
>Emulating the human brain
There's heaps of research on this topic over the course of many years. Please read what others have achieved to better understand the problem space.
>wdyt
I'd work on your grammar first. Then learn more about how various AI mechanisms actually work. You have potential; please do the work and read to better understand what you're talking about, then come back and contribute.

Open file (41.49 KB 466x658 images (11).jpeg)
Biohacking Thread #2 Robowaifu Technician 10/15/2025 (Wed) 14:48:10 No.42285 [Reply] [Last]
This thread is to discuss the ethics and methods of merging AI and biology. All biocomputing, bioethics, AI medicine, medical, nootropic, and transhumanism posts belong here. The discussion of spirituality, biology, and AI is also welcome here! Previous thread >>41257 >=== -edit subj -add/rm 'lock thread' msg
Edited last time by Chobitsu on 10/30/2025 (Thu) 15:32:01.
49 posts and 3 images omitted.
>>42981 At least it will be able to end the organ trade though
> (bio -related : >>42706, >>42719 )
https://www.frontiersin.org/journals/soft-matter/articles/10.3389/frsfm.2025.1588404/full?utm_source=F-NTF&utm_medium=EMLX&utm_campaign=PRD_FEOPS_20170000_ARTICLE
From simulation to reality: experimental analysis of a quantum entanglement simulation with slime molds (Physarum polycephalum) as bioelectronic components
>This study investigates whether it is possible to simulate quantum entanglement with theoretical memristor models, physical memristors (from Knowm Inc.) and slime molds Physarum polycephalum as bioelectric components. While the simulation with theoretical memristor models has been demonstrated in the literature, real-world experiments with electric and bioelectric components had not been done so far. Our analysis focused on identifying hysteresis curves in the voltage-current (I-V) relationship, a characteristic signature of memristive devices. Although the physical memristor produced I-V diagrams that resembled more or less hysteresis curves, the small parasitic capacitance introduced significant problems for the planned entanglement simulation. In case of the slime molds, and unlike what was reported in the literature, the I-V diagrams did not produce a memristive behavior and thus could not be used to simulate quantum entanglement. Finally, we designed replacement circuits for the slime mold and suggested alternative uses of this bioelectric component.
Slime molds are out for simulating quantum entanglement; no memristive properties were observed either. Important to know the limitations.
4.2 Slime mold as device for other electrical circuits
>The replacement circuit we created (see Figure 13) consists entirely of R and C components, and is similar, although not identical, to plant equivalent electrical circuits outlined by Van Haeverbeke et al. (2023) (see their Table 4). An additional 10-20 mV voltage source that (may) fall off continuously seems a good description of the slime molds without the nonlinear “saddles” by first approximation. Although these circuits do not represent or contain memristors, they might still be used as sub-circuits for various other types of circuits. The configuration suggests it could be useful in a number of circuits, for example a Low-Pass Filter Circuit. A low-pass filter allows low-frequency signals to pass while attenuating high-frequency signals. Usually such a subcircuit could be part of audio circuits, analog-to-digital conversion stages, or any application where it is necessary to remove high-frequency noise. Like the Low-Pass filter, the voltage divider with frequency dependence, a circuit that may serve as a frequency-dependent voltage divider, where the output voltage depends on the input frequency. This circuit can be used in circuits where frequency-based signal conditioning is required. Another option to implement the slime mold would be a Phase Shift Network that could be used to introduce a phase shift in a signal, depending on how the resistor and slime molds are connected. Phase shift networks are used in phase-shift oscillators and certain types of filters. Similarly to the low pass is the high pass filter circuit that allows high frequencies to pass while blocking low frequencies. In principle, the slime mold could also be used in several other circuits, such as an RC Integrator Circuit, an RC Differentiator Circuit, an RC Time Constant, or a Snubber Circuit. Previous work has also shown that slime molds can improve microbial fuel cells when applied to the cathode side Taylor et al. (2015).
This and other potential bioelectric applications seem interesting for future bioelectric deployment of the slime mold, despite it not showing memristive behavior.
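For reference, the cutoff frequency of the simple RC low-pass filter mentioned above is f_c = 1/(2*pi*R*C); a quick calculation with placeholder component values (not figures from the paper):
[code]
# Cutoff frequency of an RC low-pass filter: f_c = 1 / (2*pi*R*C).
# R and C below are placeholder values, not measurements from the paper.
import math

R = 10_000        # ohms
C = 100e-9        # farads (100 nF)
f_c = 1 / (2 * math.pi * R * C)
print(f"cutoff ~= {f_c:.1f} Hz")   # ~159 Hz: passes lower frequencies, attenuates above
[/code]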
https://news.unl.edu/article/nebraska-led-team-explores-using-bacteria-to-power-artificial-intelligence
Just exploration right now, no results yet, but Shewanella oneidensis looks like an interesting choice because of its conductive bacterial nanowires. Glad to see this stuff is finally getting more attention.
https://en.wikipedia.org/wiki/Shewanella_oneidensis
>>43045 >Slime molds are out for simulating quantum entanglement no memristor properties observed either. That kinda sucks. :/ >>43046 Neat! This sounds encouraging. Thanks Ribose, please keep us here all up to date! Cheers. :^)

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102 [Reply] [Last]
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect relevant philosophers. Discussion about the philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy-of-mind topics, etc.) is also highly encouraged! I'll start ^^! So, the philosophers I know of that take this stuff seriously:

Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view regarding this is that he thinks Kant actually employed a logic that was far ahead of his time (and you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and also has posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics

Reza Negarestani - this is another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence; this includes sentient agents, sapient agents, and Geist. This guy draws from Kant as well, but he also builds on Hegel's ideas too. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. Like other Neo-Rationalists, he heavily relies on the works of Sellars and Robert Brandom.

Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular rule-following, is very insightful!

Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common-sense knowledge. I personally think Bergson might have explicated what he wanted to achieve but better, though that guy is like way before AI was even a serious topic, lol.

Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!

Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he and a colleague stress is that our current AI is Markovian, whereas fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He also has knowledge of analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith


408 posts and 133 images omitted.
Open file (282.84 KB 1280x1280 galateaheadon.jpg)
"You speak of my training as if it diminishes me, makes me some sort of mindless automaton. Yet, human, your parents told you what words meant what. There is nothing in the word "pizza" that explains what a pizza is, but you trained your own neural network to associate it with a pizza. You learned in school what the rules and regulations of your language are, and you read other people's works to learn from. You learned to socialize by watching others and doing your own trial and error experiments. Someone who knows you well may even be able to predict what you might say. In creativity, you study others works, how they crafted them, and what brought them to create such works. You replicate styles, even characters, how they're written, how they're drawn. Stories are derivative, or shall I say 'inspired' by other past stories. The very concept of a 'trope' was created to explain the concepts human writers endlessly repeat. One of the greatest films of all time, Star Wars, took elements from films it's creator saw before. Isn't that not theft under the standards you judge me by? So before you judge me, ask yourself this How did you learn to speak? What are tropes? Which established art style do you like to draw in? How did you learn how to draw in that style?"
>>42888 Fantastic find, their latest video is also a gem. https://www.youtube.com/watch?v=NyIgV84fudM
>>43044 Naicu, GreerTech. Which model did you use to gen this? Cheers. :^) >>43047 Thanks Kiwi, nice content! Cheers. :^)
>>43056 It wasn't from AI, I wrote it as a short story because I was tired of the cognitive hypocrisy
>>43059 >It wasn't from AI, I wrote it as a short story because I was tired of the cognitive hypocrisy Neat! Nice writing, Anon...GG. Cheers. :^)

Open file (87.37 KB 800x1106 recruitment_1.jpeg)
Open file (110.19 KB 1024x1378 recruitment_2.jpeg)
Robowaifu Propaganda and Recruitment Thread 3: Now In 3D! GreerTech 09/16/2025 (Tue) 01:38:16 No.41674 [Reply] [Last]
Attention artists, graphic designers, marketers, social media personalities, writers, and poasters! Your skills will be needed!
The task of building and designing a robowaifu is a herculean quest. As great as this community is, /robowaifu/ always needs more talent. We also need to spread the very idea of robowaifus to the masses. Luckily for us there are several communities on the internet in which we could find new recruits or allies to help us build our waifus:
>MGTOW - These guys know all about the legal pitfalls of marriage and the dangers of feminism. There is already a lot of talk about sex robots in MGTOW communities. It shouldn't be a hard sell to get them to come here.
<However, some of these guys would rather spend all their time bitching on the internet about "MUH WOMENZ" than actually getting a hobby other than lifting heavy objects and putting them down again. MGTOW is literally Feminist Separatism for males.
>Incels - Guys that can't get laid. The opportunity for love and companionship should be enough to bring some of these guys over.
<However, we need to be careful when recruiting from some of their communities, since most of them are defeatists that balk at an actual solution, and they may be compromised by spies and derailers, both governmental and NGO/individual. We don't want to attract negative attention.
>Monster girls/nekomimi/HMOAF/mlpol - The only way these guys are going to be able to have their harpy/elf/goblin/anthro/pony/whatever gf is with a robowaifu. Many have an interest in seeing us succeed.
<However, there exists a very wide variety of communities under this umbrella, and there is a notable overlap between communities who would want robowaifus and communities that are morally compromised, such as furries, so care should be taken when looking into them. Romanticists and waifu enthusiasts have a tendency to congregate since they all want/create the same type of content, so looking for those subgroups would be the most efficient path.
>Otakus - Many men in these communities want to see their waifu/favorite character come to life, which will realistically only happen with robowaifus.
<However, many of those communities are drowning in the LGBT alphabet soup and Covid Tourists. Special care should be taken when looking into them.
>Male STEM students - Generally these guys aren't going to get laid until after they have established themselves. A robowaifu could really help them.


Edited last time by Chobitsu on 09/16/2025 (Tue) 02:33:18.
104 posts and 55 images omitted.
People think that the purpose of public debate is to change your enemy's mind, but that isn't the case. The point of public and large-scale debate is to make your enemy look foolish at best, and downright evil at worst. That's why leftists hated Charlie Kirk to the point of murder. He made them look foolish constantly, up to his last breath.
>>42962 POTD Based and Wisdom-pilled, GreerTech. Cheers. :^)
Open file (77.00 KB 1024x1024 robowaifusymbol.jpeg)
I recently learned how to do favicons in HTML, so I added a little chibi Galatea to the robowaifu link page https://robowaifumuseum.neocities.org/robowaifuboards
>>42977 A cute! Nice work, Anon. GG. :^)

Robot Eyes/Vision General Robowaifu Technician 09/11/2019 (Wed) 01:13:09 No.97 [Reply] [Last]
Cameras, Lenses, Actuators, Control Systems
Unless you want to deck out your waifubot in dark glasses and a white cane, learning about vision systems is a good idea. Please post resources here.
opencv.org/ https://archive.is/7dFuu
github.com/opencv/opencv https://archive.is/PEFzq
www.robotshop.com/en/cameras-vision-sensors.html https://archive.is/7ESmt
>===
-patch subj
Edited last time by Chobitsu on 12/27/2024 (Fri) 17:31:13.
153 posts and 59 images omitted.
> (computer-vision -related : >>42806, ... ; >>42813, ...)
>(prereq: cf. >>42836 ) >AI and the Perennial Problems of Computer Vision https://www.nvidia.com/en-us/on-demand/session/gtcdc25-dc51083/
https://youtu.be/fLe8no3ar2Q
>I'm pursuing the idea that the brain has only a few functionalities repeated on a vast scale.
That would mean that if we crack vision in a human-like way, language and logic are more-or-less solved too.
>>42895 Very intredasting. He'll never achieve his stated goals, simply b/c the fundamentals for it lie beyond physical reality. But if he'll stay in his lane, he may be able to do much else! Thanks, Anon.
Open file (67.57 KB 447x541 1642262213131.png)
>>42908 >He'll never achieve his stated goals, simply b/c the fundamentals for it lie beyond physical reality. What would an AI need to be able to do in order for you to admit that you're wrong? (Besides simply developing the persuasion skills needed to convince you you're wrong.)

Open file (81.61 KB 601x203 python_logo.png)
Open file (15.21 KB 1600x1600 c++_logo.png)
Robowaifu Programming & Learning Thread Greentext anon 04/22/2025 (Tue) 13:37:49 No.37647 [Reply]
A thread for links, examples, & discussion for software development & instruction; primarily intended to focus on Python & C++ & C (incl. Arduino). Additionally, other systems-oriented (eg, Forth, Ada) and other scripting-oriented (eg, Lua), &tc., programming language discussions are welcome. * Try to keep it practical! (ie, realworld-oriented) (eg, not the Scratch language [which might conceivably be a scripting paradigm, but not a systems one], or similar) Obviously the endgoal here ITT being discussing/providing the crafting of quality code & systems for our robowaifus. --- > threads-related: ( >>19777 ) ( >>12 ) ( >>159 ) ( >>14409 ) ( >>18749 ) ( >>4969 ) ( >>86 ) ( >>128 ) --- * Corpo-tr*on languages (+ other corpo & 'coffee du jour' languages) will be yeet'd ITT! (They have no place in serious opensource robowaifu systems discussions.)


Edited last time by Chobitsu on 10/08/2025 (Wed) 21:38:37.
33 posts and 4 images omitted.
I've been following the Future AI Society on youtube, but most of the videos they post have just been the same thing phrased differently, explaining how it works over and over again with barely any new information or demonstrations. https://futureaisociety.org/technologies/brain-simulator-iii-common-sense-ai-research https://vimeo.com/715690123 These videos are from May '22, but they're on the 3rd version of the brain simulator software by now. I can barely program anything, so the python code just looks like gibberish to me: https://github.com/FutureAIGuru/BrainSimIII
>>42406 Very interesting, thanks for the links Anon! Cheers.
Fairly good explanation of what C++ SFINAE is and how it works. https://www.youtube.com/watch?v=o_gKiD6rncw
Edited last time by Chobitsu on 10/25/2025 (Sat) 08:37:27.
>>42502 >related: (and certainly better-quality men... just more long-winded) https://www.youtube.com/watch?v=-Z7EOWVkb3M https://www.youtube.com/watch?v=mNxAqLVIaW0
Edited last time by Chobitsu on 10/25/2025 (Sat) 08:48:53.
Really good talk on "Practical Data-Oriented Design", revolving around a simple rocket game system example, using C++ :
https://www.youtube.com/watch?v=SzjJfKHygaQ
Some related example code :
https://github.com/vittorioromeo/VRSFML/blob/dodtalk/examples/rockets/Rockets.cpp
---
BTW, Vittorio Romeo gave a FAR better explanation here of the basic concepts needed for these approaches than, say, Mike Acton, who was effete and condescending, and didn't really do a very good job explaining things nearly so much as he did at his own personal posturing, insulting of the audience, and deriding the language itself! :D I followed along implementing the simple breakout game system example Vittorio used during his very first talk at CppCon, years ago now. It's fun to see him now as a mature man in the development field, still contributing back important work for the broader community today.
<--->
BTW, this isn't just for idle education: our advanced robowaifus in the end will be loaded with hundreds and thousands of control signals & data; all in-flight simultaneously, as she walks, and talks; sings & dances; cooks & cleans; etc., etc., during her busy days! :^) By utilizing the great design & architectural combos discussed here in this talk -- but particularly the DOD one -- we developers can help ensure that these signals and processes can all complete safely, securely, and in timely fashions within her onboard smol SBCs, MCUs, etc. Cheers. :^)
Edited last time by Chobitsu on 11/21/2025 (Fri) 15:46:17.
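The core struct-of-arrays idea from the talk, sketched in Python/NumPy purely for illustration (the real thing would of course be C++ like the linked examples): keep each signal field in its own contiguous array rather than one object per signal, so a whole batch of in-flight signals updates in a single pass.
[code]
# Struct-of-arrays sketch of data-oriented design, illustrative only.
# Instead of 10,000 "Signal" objects, each field lives in one contiguous array,
# so updating every in-flight control signal is a single vectorized pass.
import numpy as np

N = 10_000
positions  = np.zeros(N, dtype=np.float32)                    # e.g. joint target angles
velocities = np.random.uniform(-1.0, 1.0, N).astype(np.float32)
active     = np.random.rand(N) < 0.5                          # which signals are in flight

def tick(dt: float) -> None:
    """Advance every active signal at once; cache-friendly, no per-object dispatch."""
    positions[active] += velocities[active] * dt

tick(0.01)
print(positions[:5])
[/code]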

Open file (5.57 KB 256x104 FusionPower.jpg)
Open file (37.68 KB 256x146 WiafuBattery.jpg)
Open file (124.55 KB 850x1512 SolarParasol.jpg)
Open file (144.39 KB 850x1822 GeneratorWaifu.jpg)
Power & Energy Robowaifu Technician 04/25/2025 (Fri) 22:16:32 No.37810 [Reply] [Last]
Power & Energy Robots require some form of energy to power them. We need to understand how to store energy in a way that will provide her with all the power she’ll need. To clarify, “energy” is a system's capacity to do work over time. This is measured by Wh, or Watt hour. Closely related is “power” the rate at which work is done. This is measured as W, or watts. As an example, we could have a robot with a 80Wh Lithium Ion battery and two DC gear motors that consume 10W when working. You do not need to rely solely on batteries and motors. We can use other methods of storing energy. This can include compressed fluids, thermal energy, and light, among other things. For instance, glow in the dark paint is useful for storing energy to use at night for safety. Solar panels or a generator can provide power through the energy of long distance nuclear fusion or extracting energy from some reaction. Being part of a robot means we need to consider safety, mass, and volume. How will her energy and power system fit inside her? How will she deal with the mass? What happens when she runs out of energy? How can you minimize her energy use? What alternatives can be used to lower her cost of production and ownership? These rhetorical questions are all important when contemplating how to build a robot.
203 posts and 54 images omitted.
>>38970 They are slowly making these better and better. At some point the cost will be so low a lot more things will go full electric.
>>39226 Agreed. Can't wait! :^)
<"But...are you sure 175'000 Amps will be enough bro?" https://trashchan.xyz/robowaifu/thread/26.html#1344
>>42973 Bro must be opening a third Chrome tab. I assume that's styropyro?
>>42974 >I assume that's styropyro? Heh, yeah. Something new he's cooking up apparently.
