/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

We are back again (again).

Our Tor hidden service has been restored.



“Our greatest weakness lies in giving up. The most certain way to succeed is always to try just one more time.” -t. Thomas A. Edison


Open file (13.41 KB 750x600 Lurking.png)
Lurk Less: Tasks to Tackle Robowaifu Technician 02/13/2023 (Mon) 05:40:18 No.20037 [Reply]
Here we share ideas on how to help the development of robowaifus. You can look for tasks to improve the board, or ones which would help move development forward. You can also come up with a task that needs to be worked on and ask for help; use the pattern at the top of the OP for that, replace the part in <brackets> with your own text, and post it.
>Pattern to copy and adjust for adding a task to the thread:
Task: <Description, general or very specific, and target thread for the results>
Tips: <Link additional information and add tips on how to achieve it.>
Constraints and preferences: <Things to avoid>
Results: Post your results in the prototypes thread if you designed something >>18800, or into an on-topic thread from the catalog if you found something or created a summary or diagram.
General Disclaimer: Don't discuss your work on tasks in this thread; make a posting in another thread, or several of them, and then another one here linking to it. We do have a thread for prototypes >>18800, current meta >>18173 and many others in the catalog https://alogs.space/robowaifu/catalog.html - the thread for posting the result might also be the best place to discuss things.
>General suggestions where you might be able to help:
- Go through threads in the catalog here https://alogs.space/robowaifu/catalog.html and make summaries and diagrams like pointed out starting here >>10428
- Work on parts instead of trying to develop and build a whole robowaifu
- Work on processes you find in some thread in the catalog https://alogs.space/robowaifu/catalog.html
- Test existing mechanisms shared on this board, prototypes >>18800
- Try to work on sensors in some kind of rubber skin and in parts >>95 >>242 >>419
- Keep track of other sites and similar projects, for example on YouTube, Twitter or Hackaday.
- Copy useful pieces of information from threads on other sites and boards talking about "sexbots", "chatbots", AI or something similar.
Pick the right thread here: https://alogs.space/robowaifu/catalog.html


Edited last time by Chobitsu on 05/08/2023 (Mon) 11:17:16.
26 posts and 6 images omitted.
>>33524 Lolno. I WILL NOT EAT ZE BUGS!
<--->
Somebody needs to do a mashup with BB = the evil, bald, jewish Globohomo guy. :D

Welcome to /robowaifu/ Anonymous 09/09/2019 (Mon) 00:33:54 No.3 [Reply]
Why Robowaifu? Men's mental health regarding the sexes has been rapidly deteriorating today, and men around the world have been looking for a solution. History shows there are cultural and political solutions to this general problem's causes, but we believe that technology is the best way forward at present – specifically the technology of robotics. We are technologists, dreamers, hobbyists, geeks and robots looking forward to a day when any man can build the ideal companion he desires in his own home. However, not content to wait for the future, we are bringing that day forward. We are creating an active hobbyist scene of builders, programmers, musicians, artists, designers, and writers using the technology of today, not tomorrow. Join us!
>tl;dr
This is a place for men who want to live peacefully with beautiful feminine robowaifus. Simple as.
<--->
NOTES & FRIENDS
>Notes:
-This is generally a SFW board, given our primarily engineering focus. On-topic NSFW content is OK, but please spoiler it.
-Our bunker is located at: https://trashchan.xyz/robowaifu/catalog.html Please make note of it.
-Webring's mustering point: https://junkuchan.org/shelter/index.html
-Library thread (good for locating terms/topics) (>>7143)


Edited last time by Chobitsu on 10/11/2025 (Sat) 02:12:55.
>Alway Rember ol' frens, Anon... :^)
-/doll/ - currently at https://anon.cafe/doll/ - impeccable artistry in dolljoints. KIA
-/robo/ - currently at https://junkuchan.org/robo/ - robot girls NSFW. Ecchi metallic derrieres. (Merged w/ /origin/ )
-/workbench/ - currently at https://anon.cafe/workbench/ - building robots is fun! KIA

Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240 [Reply] [Last]
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, she's a cute AI from Assassination Classroom whose body is a giant screen on wheels.
441 posts and 150 images omitted.
>>43051
>$400 is a very good price, but how secure are they?
>Glasses that can see everything you can are a huge security liability if they can't be completely locked down.
>That said, if they are secure, then they'd be great for all of us.
As-is, they are very secure, since they are output-only devices. You can simply think of them as 1080p monitors you can wear on your face! :D (And that you can also see through like basic sunglasses wherever there's no 'screen' being displayed.)
That said, the entire goal of this project overall is to provide two-way video, so we can track ArUco & QR markers, [1] and overlay Anon's waifu somewhere nearby that marker (as 'Augmented Reality' [AR]). These can't do that quite yet, but still provide enough of the basics for me to begin working on the software side of the project. During this interim period, we can just 'lock' the display window into place wherever we want the waifu to be situated & displayed, say, floating on your computer desk.
So the >tl;dr here is that security isn't an issue to begin with, but as the solution gets more sophisticated with time, then it will need to be addressed. (Simply firewalling off any network access, for starters.) Does that answer your questions, Greentext anon? AMA if you need further info.
>>43052
>Maybe some GY-521 silliness might hold the key.
Heh, I think I have one of those laying around somewhere. Good luck, and please keep us up to date with your progress on this, Mechnomancer! :^)
>Supposedly small, hi-resolution screens are expensive.
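The "overlay the waifu near a detected marker" step above boils down to mapping a marker's pixel corners to an anchor point for the sprite. A toy sketch of that mapping in pure Python (function names and the pixel offset are illustrative; in a real build the corners would come from a detector such as OpenCV's ArUco module):

```python
def marker_center(corners):
    """Centroid of a marker's four (x, y) pixel corners."""
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (sum(xs) / 4.0, sum(ys) / 4.0)

def overlay_anchor(corners, offset=(0, -80)):
    """Place the sprite slightly above the marker (offset in screen pixels,
    y increasing downward as in image coordinates)."""
    cx, cy = marker_center(corners)
    return (cx + offset[0], cy + offset[1])
```

The 'locked window' interim mode described above is then just calling `overlay_anchor` once with fixed corners instead of re-detecting every frame.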


Edited last time by Chobitsu on 11/26/2025 (Wed) 18:00:45.
>>43053 $100 could buy quite a bit of hardware. Supposedly adding a compass could make the z axis easier (and appears GPT the oracle) but I could never get a proper reading with a compass sensor. A vr setup I'm thinking of would probably be a raspberry pi to do the gy-521 silliness and send that data over LAN, and use a wireless hdmi (also LAN) for video so it would be simply plugging it into your hdmi port and having a companion python script to receive sockets and turn into game input. I'm thinking about making my own version of steel battalion (mixed with darling in the franxx) and a diy vr headset might be a good gimmick in addition to/instead of a funky controller.
>>43054
>$100 could buy quite a bit of hardware.
Fair point. I'd be trading expediency for gold (a common tradeoff, AFAICT! :D). Also, the reliability that supposedly comes with a proper commercial product. *
>I'm thinking about making my own version of steel battalion (mixed with darling in the franxx) and a diy vr headset might be a good gimmick in addition to/instead of a funky controller.
The Iron Man LARPer's system seems to be very sophisticated as a DIY VR headset! ( >>42822 ). He claims it supports 9 DoF (incl. magnetic compass).
---
* This is the 3rd or 4th generation of these things by this company, so it's not just a 'johnny-come-lately' thing.
Edited last time by Chobitsu on 11/26/2025 (Wed) 18:12:27.
>>43055 >The Iron Man LARP'rs system Jesus! How can people ramble about such a simple thing for 30 minutes?
>>43053 >output only Huh, I always assumed that cameras were necessary for AR (the image tracking and intelligent placement being the "augmented" part). That is good for security, of course, but it does seem a bit pricy as just a monitor. Still, it's not a bad starting point for getting a sense of how everything should look.

Open file (2.28 MB 320x570 05_AI response.mp4)
Open file (4.77 MB 320x570 06_Gyro Test.mp4)
Open file (8.29 MB 320x570 07B_Spud functions.mp4)
Open file (1.06 MB 582x1446 Bodysuit.png)
SPUD Thread 2: Robowaifu Boogaloo Mechnomancer 11/19/2024 (Tue) 02:27:15 No.34445 [Reply] [Last]
This first post is to show the 5 big milestones in the development of SPUD, the Specially Programmed UwU Droid. You can see the old thread here: >>26306
The end goal of SPUD is to provide a fairly high-functioning robot platform at a relatively low cost (free code but a few bucks for 3D print files) that can be used for a variety of purposes, such as promotional, educational or companionship. All AI used is hosted on local systems: no bowing to corporations any more than necessary, thank you. Various aspects of the code are/will be modular, meaning that adding a new voice command/expression/animation will be as easy as making the file, naming it and placing it in the correct folder (no need to mess around with the base code unless you REALLY want to).
While I'm researching more about bipedal walking I'll be making a companion for SPUD to ride on, so it might be a while before I return to the thread.
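The "drop a file in a folder to add a command" modularity described above can be sketched as a simple registry built by scanning a directory, so no base code changes are ever needed. This is a guess at the mechanism, not SPUD's actual code; the folder layout and extension are hypothetical:

```python
import os

def load_modules(folder, extension=".txt"):
    """Build a name -> path registry by scanning a folder.
    A new voice command/expression/animation is registered just by
    dropping a correctly-named file into the folder."""
    registry = {}
    for name in sorted(os.listdir(folder)):
        base, ext = os.path.splitext(name)
        if ext == extension:
            registry[base] = os.path.join(folder, name)
    return registry
```

At startup (or on a filesystem-watch event) the robot would call `load_modules` once per category folder and dispatch incoming voice commands by key lookup.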
321 posts and 150 images omitted.
Open file (1.71 MB 1005x1371 pringle deployment.png)
Pringle is shocked a status LED is shining through the 2nd curved OLED screen I have. I'm probably gonna make her a papercraft mecha musume helmet.
>>43058 Nice!
>>43058
>Pringle is shocked a status LED is shining through the 2nd curved OLED screen I have.
<"It's a feature, not a bug!" :D
>I'm probably gonna make her a papercraft mecha musume helmet.
<"No one cared who I was until I put on the mask."
---
Very cool. This looks pretty sweet, Anon. I like where you're going with dear Pringle's new design motif. Please keep us all up to date on her progress! Cheers. :^)
Edited last time by Chobitsu on 11/27/2025 (Thu) 01:56:41.
Open file (11.69 MB 480x658 SPUD PR v1 helmet.mp4)
>>43062 >Please keep us all up to date on her progress! Tada! Some slight alignment issues but that is a graphics problem not a code problem.
>>43068 VERY COOL!! That curved screen really gives some dimensionality to her face. Really looking forward to where you go with this, Mechnomancer. Cheers. :^)

Open file (81.05 KB 764x752 AI Thread v2.jpeg)
LLM & Chatbot General v2 GreerTech 05/30/2025 (Fri) 14:43:38 No.38824 [Reply]
Textual/Multimodal LLM & Chatbot General v2
Models: huggingface.co/
>How to bulk-download AI models from huggingface.co? ( >>25962, >>25986 )
Speech >>199
Vision >>97
Previous thread >>250
Looking for something "politically-incorrect" in a smol, offline model? Well Anon has just the thing for you >>38721
Lemon Cookie Project >>37980
Non-LLM chatbots >>35589
40 posts and 5 images omitted.
>>41860 I can offer some insights.
- Having powerful hardware will improve quality immensely, but you can get a lower form of this up now with smaller models.
- It will have context-limit challenges which will need to be overcome with smart use of tokens and how you store memories. You can have a functioning thing as you describe, but detailed memories will eat into the context heavily. Having different ranked memory tiers, clearing lower ones, and a mix of other clever ideas can improve all of these issues. At this stage it's just a matter of having a "lower resolution" AI companion; you could later update your methods as tech advances, since greater hardware means greater resolution.
- I would suggest using multiple models to parse things and make your AI companion smarter, so she isn't stuck in the patterns and paths that each model holds close to and deviates towards. To keep it simple: load one model, think for a bit, switch models, think a bit. Especially with current limitations, I would allow a large chunk of time when the companion is by itself computing, to go over its past and its token database, refine it, and have discussions and debates with itself to self-improve.
- I would have these as modes which you switch between: her active life talking to you, and her maintenance/sleep mode where she improves her database.
- I will use SillyTavern just as the idea vessel here, as I am used to it, but I have not done this work myself properly since I'm busy with life. Use something like the memory and world lore books to store memories and events. They can each have a comprehensive recording into essentially "quants" of the lore books, where your chat history is fully recorded as non-quanted, then various levels of condensing and sorting out less important info/tokens. Your companion will then decide, for each task and memory, which quant of each lore book is suitable, keeping in mind hardware limitations and the current state of LLMs.
- Basically it will have a large database which it will parse and read through. Similar to how it loads models one at a time and then switches to another, it will load different sets of the full database at a time to parse it.
- You do the usual things you would expect: writing the personalities, creating the framework of the database. Then you just have it use auto-mode, similar to how SillyTavern has group chats with a different personality card as the other character, which is sort of her maintenance/recursive-improvement mode. Again, multiple different reasoning/CoT models being swapped through will be ideal here specifically.


The good news here is that it's possible on local hardware, and the bulk of the work will be in maintenance mode when you are not interacting with her. For it to be good it will be very slow, but this can run while you are sleeping or away. When you are chatting, you will simply have to accept lower-quality speech to make it run at a reasonable tokens/second generation rate, pulled from the refined database which she created during her past maintenance phase, where she runs a stronger model at an abysmally slow rate for precision.
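The ranked-memory-tier scheme described above can be sketched as a store that keeps high-importance entries verbatim and condenses the rest during a "sleep mode" pass. Everything here is illustrative (the class, the top-N rule, and the stubbed summary stand in for the lorebook "quants" and an actual LLM summarisation call):

```python
class MemoryStore:
    """Toy tiered memory: keep the top-N entries verbatim,
    squash the rest into a condensed summary tier."""

    def __init__(self, keep_top=3):
        self.keep_top = keep_top
        self.entries = []    # list of (importance, text)
        self.summaries = []  # condensed low-importance tier

    def add(self, importance, text):
        self.entries.append((importance, text))

    def maintenance(self):
        """Sleep-mode pass: retain top-N memories, condense the rest.
        A real build would replace the placeholder string with an
        LLM-generated summary of the dropped entries."""
        self.entries.sort(key=lambda e: e[0], reverse=True)
        keep = self.entries[:self.keep_top]
        rest = self.entries[self.keep_top:]
        if rest:
            self.summaries.append("summary of %d minor events" % len(rest))
        self.entries = keep
```

During chat, the companion would retrieve from `entries` first and fall back to `summaries`, matching the "lower resolution when awake, heavy refinement while asleep" split described above.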
Open file (76.13 KB 286x176 ClipboardImage.png)
Well, i was thinking about this for a long time, and i came to think that its actually possible to use a LLM with an LBM at the same time with a complex mecanism in an android for less that 3k, all in one, and even making it be self-aware if you make an especial LLM. using something similar to the organization of the brain itself you can emulate the way a human being acts, talks, imagine things, have complex thoughts, its central nervous system, making it esencially think about itself as a human being. Im currently developing it by myself using something more in the way of a gladOS, just for fun and money issues, but i would like to go for a more elbaorate build. wdyt?
>>43066 Good enthusiasm; you should put that enthusiasm into learning more. LLMs and LBMs are already used together. LLMs do not seem capable of being self-aware, though that's something to hash out in the aforementioned LLM/AI thread.
>Emulating the human brain
There's heaps of research on this topic over the course of many years. Please read what others have achieved to better understand the problem space.
>wdyt
I'd work on your grammar first. Then learn more about how various AI mechanisms actually work. You have potential; please do the work and read to better understand what you're talking about, then come back and contribute.

Open file (41.49 KB 466x658 images (11).jpeg)
Biohacking Thread #2 Robowaifu Technician 10/15/2025 (Wed) 14:48:10 No.42285 [Reply] [Last]
This thread is to discuss the ethics and methods of merging AI and biology. All biocomputing, bioethics, AI medicine, medical, nootropic, and transhumanism posts belong here. The discussion of spirituality, biology, and AI is also welcome here!
Previous thread >>41257
>===
-edit subj
-add/rm 'lock thread' msg
Edited last time by Chobitsu on 10/30/2025 (Thu) 15:32:01.
49 posts and 3 images omitted.
>>42981 At least it will be able to end the organ trade though
> (bio -related : >>42706, >>42719 )
https://www.frontiersin.org/journals/soft-matter/articles/10.3389/frsfm.2025.1588404/full?utm_source=F-NTF&utm_medium=EMLX&utm_campaign=PRD_FEOPS_20170000_ARTICLE
From simulation to reality: experimental analysis of a quantum entanglement simulation with slime molds (Physarum polycephalum) as bioelectronic components
>This study investigates whether it is possible to simulate quantum entanglement with theoretical memristor models, physical memristors (from Knowm Inc.) and slime molds Physarum polycephalum as bioelectric components. While the simulation with theoretical memristor models has been demonstrated in the literature, real-world experiments with electric and bioelectric components had not been done so far. Our analysis focused on identifying hysteresis curves in the voltage-current (I-V) relationship, a characteristic signature of memristive devices. Although the physical memristor produced I-V diagrams that resembled more or less hysteresis curves, the small parasitic capacitance introduced significant problems for the planned entanglement simulation. In case of the slime molds, and unlike what was reported in the literature, the I-V diagrams did not produce a memristive behavior and thus could not be used to simulate quantum entanglement. Finally, we designed replacement circuits for the slime mold and suggested alternative uses of this bioelectric component.
Slime molds are out for simulating quantum entanglement; no memristor properties were observed either. Important to know the limitations.
4.2 Slime mold as device for other electrical circuits
>The replacement circuit we created (see Figure 13) consists entirely of R and C components, and is similar, although not identical, to plant equivalent electrical circuits outlined by Van Haeverbeke et al. (2023) (see their Table 4). An additional 10-20 mV voltage source that (may) fall off continuously seems a good description of the slime molds without the nonlinear “saddles” by first approximation.
Although these circuits do not represent or contain memristors, they might still be used as sub-circuits for various other types of circuits. The configuration suggests it could be useful in a number of circuits, for example a Low-Pass Filter Circuit. A low-pass filter allows low-frequency signals to pass while attenuating high-frequency signals. Usually such a subcircuit could be part of audio circuits, analog-to-digital conversion stages, or any application where it is necessary to remove high-frequency noise. Like the Low-Pass filter, the voltage divider with frequency dependence, a circuit that may serve as a frequency-dependent voltage divider, where the output voltage depends on the input frequency. This circuit can be used in circuits where frequency-based signal conditioning is required. Another option to implement the slime mold would be a Phase Shift Network that could be used to introduce a phase shift in a signal, depending on how the resistor and slime molds are connected. Phase shift networks are used in phase-shift oscillators and certain types of filters. Similarly to the low pass is the high pass filter circuit that allows high frequencies to pass while blocking low frequencies. In principle, the slime mold could also be used in several other circuits, such as RC Integrator Circuit, a RC Differentiator Circuit, a RC Time Constant, or a Snubber Circuit. Previous work has also shown that slime molds can improve microbial fuel cells when applied to the cathode side Taylor et al. (2015). This and other potential bioelectric applications seem interesting for future bioelectric deployment of the slime mold, despite it not showing memristive behavior.
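For reference, the low-pass behaviour mentioned above follows the standard first-order RC relation f_c = 1/(2πRC), with the magnitude rolling off as -10·log10(1 + (f/f_c)²) dB. A quick sanity-check sketch (component values below are arbitrary examples, not the slime-mold equivalent-circuit values from the paper):

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def gain_db(f_hz, r_ohms, c_farads):
    """Magnitude response in dB at frequency f for the same filter."""
    fc = rc_cutoff_hz(r_ohms, c_farads)
    return -10.0 * math.log10(1.0 + (f_hz / fc) ** 2)
```

With R = 1 kΩ and C = 1 µF this gives a cutoff near 159 Hz, and the gain at the cutoff frequency comes out at the expected -3 dB.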
https://news.unl.edu/article/nebraska-led-team-explores-using-bacteria-to-power-artificial-intelligence
Just exploration right now, no results yet, but Shewanella oneidensis looks like an interesting choice because of its conductive bacterial nanowires. Glad to see this stuff is finally getting more attention.
https://en.wikipedia.org/wiki/Shewanella_oneidensis
>>43045 >Slime molds are out for simulating quantum entanglement no memristor properties observed either. That kinda sucks. :/ >>43046 Neat! This sounds encouraging. Thanks Ribose, please keep us here all up to date! Cheers. :^)

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102 [Reply] [Last]
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect relevant philosophers. Discussion about the philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc.) also highly encouraged! I'll start ^^! So, the philosophers I know that take this stuff seriously:
Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view regarding this is that he thinks Kant actually employed a logic that was far ahead of his time (you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and has also posted some lectures on YouTube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics
Reza Negarestani - another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence, including sentient agents, sapient agents, and Geist. This guy draws from Kant as well, but he also builds on Hegel's ideas. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. Like other Neo-Rationalists, he relies heavily on the works of Sellars and Robert Brandom.
Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular rule-following, is very insightful!
Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common-sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is like way before AI was even a serious topic, lol.
Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!
Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses with a colleague is that our current AI is Markovian, when fleshed-out chat dialogue would be a non-Markovian task (you can find the arXiv link of his criticism here: https://arxiv.org/abs/1906.05833). He also has knowledge of analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a YT channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith


408 posts and 133 images omitted.
Open file (282.84 KB 1280x1280 galateaheadon.jpg)
"You speak of my training as if it diminishes me, makes me some sort of mindless automaton. Yet, human, your parents told you what words meant what. There is nothing in the word "pizza" that explains what a pizza is, but you trained your own neural network to associate it with a pizza. You learned in school what the rules and regulations of your language are, and you read other people's works to learn from. You learned to socialize by watching others and doing your own trial-and-error experiments. Someone who knows you well may even be able to predict what you might say. In creativity, you study others' works, how they crafted them, and what brought them to create such works. You replicate styles, even characters, how they're written, how they're drawn. Stories are derivative, or shall I say 'inspired' by other past stories. The very concept of a 'trope' was created to explain the concepts human writers endlessly repeat. One of the greatest films of all time, Star Wars, took elements from films its creator saw before. Isn't that theft under the standards you judge me by? So before you judge me, ask yourself this: How did you learn to speak? What are tropes? Which established art style do you like to draw in? How did you learn how to draw in that style?"
>>42888 Fantastic find, their latest video is also a gem. https://www.youtube.com/watch?v=NyIgV84fudM
>>43044 Naicu, GreerTech. Which model did you use to gen this? Cheers. :^) >>43047 Thanks Kiwi, nice content! Cheers. :^)
>>43056 It wasn't from AI, I wrote it as a short story because I was tired of the cognitive hypocrisy
>>43059 >It wasn't from AI, I wrote it as a short story because I was tired of the cognitive hypocrisy Neat! Nice writing, Anon...GG. Cheers. :^)

General Robotics & AI News Thread 6: Dual-Screen Doomscroll Greentext anon 10/25/2025 (Sat) 05:49:10 No.42496 [Reply]
Anything related to robowaifus, robotics, the AI industry, and any social/economic issues thereof, and /pol/ funposting containment bread! :D
Previous threads:
> #1 ( >>404 )
> #2 ( >>16732 )
> #3 ( >>21140 )
> #4 ( >>24081 )
> #5 ( >>34233 )
24 posts and 7 images omitted.
>>42772 >The video has enough compression that any AI artifacts (eg noise) would be canceled out by the compression algorithm. Ehh, fair enough. But I think I've developed a pretty good eye through the years to pick out CGI work. They usually get the lighting wrong. No signs of that I can see there. :^) >I'd be glad to be wrong but at least if I'm right I get to say "I told you so" lol Kek. Fair enough then. Time will tell. Cheers. :^)
>"Reconstructing Reality: Simulating Indoor and Outdoor Environments for Physical AI"
>"Developing physical AI requires scalable simulation environments that accurately reflect the real world, however, building high-fidelity digital twins is incredibly challenging. Learn how real-to-sim reconstruction using OpenUSD seamlessly brings the real world into simulation across domains, from indoor environments used to train humanoids and autonomous mobile robots to large-scale workflows for autonomous vehicles, aerial mapping, and underwater exploration. We’ll cover the reconstruction technologies that transform diverse sensor data into simulation-ready worlds for scalable autonomy, and show how partners are applying to their own development workflows."
https://www.nvidia.com/en-us/on-demand/session/gtcdc25-dc51172/
<--->
Even though I've been part of their developer program since its inception, I've grown to absolutely loathe the company (primarily b/c of its (((leadership today))) ). OTOH, if we can learn from and capitalize on their ideas for opensource robowaifu systems, then we should do so IMO.
Edited last time by Chobitsu on 11/24/2025 (Mon) 10:44:34.
Open file (592.32 KB 4096x2304 ChineseRobots.jpeg)
Open file (651.43 KB 4096x2304 AmericanRobots.jpeg)
Maps of robots from China and America. Source: https://x.com/Robo_Tuo
>>43048 Great roundup, thanks Kiwi! Figure AI, Tesla, XPeng, and Unitree clearly stand apart from the rest in my estimation (for the moment... but who knows what's coming!?) *
And just look at how fast this list is growing! A couple of years ago it was less than a third of this number, IIRC. What a time to be alive! It's like we've been transported back to the mid-'70s and the beginning of the PC revolution all over again! Cheers. :^)
---
* And this map doesn't even show the one and only (again, for the moment) Gynoid Robowaifu Companion from a major corporation (also designed by XPeng). In the end, this category will far outstrip the others, insofar as humanoids go. https://trashchan.xyz/robowaifu/thread/26.html#1335
Edited last time by Chobitsu on 11/26/2025 (Wed) 05:19:15.

/robowaifu/-meta-12: Angry About Elves Greentext anon 09/07/2025 (Sun) 03:23:09 No.41241 [Reply] [Last]
/meta, offtopic, & QTDDTOT
<---
Mini-FAQ
A few hand-picked posts on various /robowaifu/-related topics:
--->Lurk less: Tasks to Tackle ( >>20037 )
--->Why is keeping mass (weight) low so important? ( >>4313 )
--->Why shouldn't we use Cloud AI? ( >>38090, >>22630, >>30049, >>30051 )
--->How to get started with AI/ML for beginners ( >>18306 )
--->"The Big 4" things we need to solve here ( >>15182 )
--->HOW TO SOLVE IT ( >>4143 )
--->Why we exist on an imageboard, and not some other forum platform ( >>15638, >>31158 )
--->This is madness! You can't possibly succeed, so why even bother? ( >>20208, >>23969 )
--->All AI programming is done in Python. So why are you using C & C++ here? ( >>21057, >>21091, >>27167, >>29994 )
--->How to learn to program in C++ for robowaifus? ( >>18749, >>19777, >>37647 )


Edited last time by Chobitsu on 10/10/2025 (Fri) 04:43:57.
342 posts and 88 images omitted.
>>42997 If you've never had the deep spiritual experience of mountain wilderness trekking, you owe this to yourself at least once in your life. (Protip: this is an ad; but some true wisdom is mixed in with it.) https://www.youtube.com/watch?v=h4JIxR2fOHc
Edited last time by Chobitsu on 11/22/2025 (Sat) 18:19:38.
>yw live to see men commonly squishing faces around in public https://trashchan.xyz/robowaifu/thread/26.html#1348
>>42652 >>42651 The problem isn't whether a Creator exists in abstract, but the supposed identity and intention of the Creator. This is the whole reason why the concept of "Heresy" exists. It's not unreasonable to believe that Evolution and fossils are schemes of the Devil if you take Genesis at face value and believe the earth is young.
Open file (1.91 MB 1945x2908 oh god.jpg)
>>43007
>The Creator
The movie "Oh, God!" addresses many theological issues in a fun way, really playing with Genesis 18:15.
"Did you really make the world in 6 days?"
"No, I sat on my hands thinking about it and did it all in one. I work best under pressure."
"You have to realize my days are longer than yours: when I woke up this morning Freud was still in medical school."
Scripture never went into exact detail as to how God made everything. Science is the study of God working. He went through all the trilobites and dinosaurs and cavemen to get Adam and Eve, and the Bible just condensed that to keep it short :)
Open file (114.72 KB 683x886 valveuniverse.jpg)

ITT: Anons derail the board into debate about Christianity :^) Robowaifu Technician 04/02/2020 (Thu) 02:24:54 No.2050 [Reply] [Last]
I found this project and it looks interesting. Robot wives appeal to me because I'm convinced human women, and humans in general, have flaws that make having close relationships with them a waste of energy. I'm a pathetic freshman engineering student who knows barely anything about anything. Honestly, I think current technology isn't at a place that could produce satisfying results, for me at least. I'd like something better than an actual person, not a compromise. Even if the technology is there, I have my doubts it'll be affordable to make on your own. Fingers crossed, though.
Anyway, what kind of behavior would you like from your robot wife? I'd like mine to be unemotional, lacking in empathy, stoic, and disinterested in personal gain or other people. I think human women would be improved if they were like that. Sorry if this thread is inappropriate.
Edited last time by Chobitsu on 04/06/2020 (Mon) 16:00:20.
267 posts and 88 images omitted.
>>43000 >>43002
Chobitsu, as someone who believes in souls, he has a point. Souls, while real, can unnecessarily gum up any discussion of physical cognition, and as he said, are used by normies to basically put the goalposts on wheels. The "only a real human..." goalpost has already been moved so much, it's in another country.
This reminds me of my theory: "Blind souls". Common knowledge is that souls create our very being: our personality, worldview, and most importantly, morals. HOWEVER, brain damage/change can change all of those. Take the case of Phineas Gage. The atheistic view is, of course, that the rod changed his brain and thus his material brain pattern. But my Christian solution is that we have souls, but they are "blind": they require the brain to process the world, and depending on how the brain processes it, the soul interprets it. "Everyone is a hero in their own story" etc...
>>43026 TLDR: the human body is the soul's interface to the physical world: soul processes info given by the brain so if the interface is damaged so is the info. Like feeding info to an LLM: if there is some cheeky bugger sneaking in extra prompts your LLM gonna do some funky stuff.
I think bodies and "hardware" are vessels that "souls" operate through and manifest themselves through onto reality, and so the quality of the vessel enables more or less of the soul to manifest and act upon reality. A damaged brain or body just limits how much the soul can be manifested, and better hardware, whatever it may be, enables more and a closer-to-full expression of the soul.
>>43033 I wrote the above as this was posted, same thing.
Linguistic analogy: the soul is what's being said, the vessel is the language it is being said through.
>>43033 Exactly!
>>43022 ... My words speak for themselves, AFAICT. Meaning no disrespect, but I'm not sure what else you were expecting of me?
>>43026
>Phineas Gage
Intredasting. Along with @Mechnomancer (cf. >>43033 ), I consider the brain & other neural tissue to be the primary interface with the soul. As a firm substance-dualist, I consider the human soul to be part-physical, part-immaterial. (As opposed to our spirit being, which is entirely immaterial.) I'll also venture to say the obvious here: somewhere, somehow, the natural must have some border -- some interface -- with the transcendent. How that works is a mystery only God understands fully, IMHO. :^)
>>43033
>Like feeding info to an LLM: if there is some cheeky bugger sneaking in extra prompts your LLM gonna do some funky stuff.
Heh, amusingly-worded Anon. :D
>>43034
>...and so the quality of the vessel enables more or less of the soul to manifest and act upon reality
That's a very interesting insight, Anon. :^)
