/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The canary has FINALLY been updated. -robi

Server software upgrades done, should hopefully keep the feds away. -robi

LynxChan 2.8 update this weekend. I will update all the extensions in the relevant repos as well.

The mail server for Alogs was down for the past few months. If you want to reach out, you can now use admin at this domain.



He was no longer living a dull and mundane life, but one that was full of joy and adventure.


Robowaifu Thermal Management Robowaifu Technician 09/15/2019 (Sun) 05:49:44 No.234 [Reply] [Last]
Question: How are we going to solve keeping all the motors, pumps, computers, sensors, and other heat-producing gizmos inside a real robowaifu cooled off anons?
55 posts and 15 images omitted.
>>13233
>Unless you want strobe lights under the skin or something, but I don't.
>but I don't.
<not wanting flashing-light-waifu to go dancing with
I like the idea of using internal coolant directly as communications pathways, I admit that's a new idea to me. With the relatively short distances involved, and the absolute brightness of some types of LEDs, signal degradation could basically be eliminated entirely as an issue for any relatively low-speed (say <= 1 Mbps) comms. Nice one Anon, thanks!
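A minimal sketch of what the transmit side of such a low-speed optical back-channel could look like, assuming an Arduino-style microcontroller with the LED driver on pin 3 (the pin number, the rough 9600 bit/s timing, and the "PUMP_OK" message are all made-up illustration, not a tested design):

// Crude on-off-keyed, UART-like byte transmitter for an LED shining through coolant.
// Roughly 9600 bit/s; real hardware would want Manchester coding or a proper UART,
// plus a photodiode/comparator front-end on the receive side.
const int TX_LED_PIN = 3;
const unsigned int BIT_US = 104;     // ~1/9600 s per bit

void sendBit(bool bit) {
  digitalWrite(TX_LED_PIN, bit ? HIGH : LOW);
  delayMicroseconds(BIT_US);
}

void sendByte(uint8_t b) {
  sendBit(false);                    // start bit (LED off)
  for (int i = 0; i < 8; ++i)
    sendBit((b >> i) & 1);           // data bits, LSB first
  sendBit(true);                     // stop bit (back to idle)
}

void setup() {
  pinMode(TX_LED_PIN, OUTPUT);
  digitalWrite(TX_LED_PIN, HIGH);    // idle state is LED on, like a UART's idle-high line
}

void loop() {
  const char* msg = "PUMP_OK";
  for (const char* p = msg; *p; ++p) sendByte(*p);
  delay(1000);
}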
>>13242 I would add, further, that at the very least we should be able to use this approach as a sort of "low-speed, back-channel & backup comms bus". BTW, do we have any thread here where exploring this idea further would be on-topic?
>>13181
Glad you made it back Anon.
>I figure that the more I share, the more feedback I can get, but if nobody's interested enough to reply, I'll just put the idea away for later.
I would encourage you to post your ideas here, regardless of feedback. The board has slowly grown into something of an archive of Anons' thoughts and concepts for robowaifus, and as such it serves the interests of historical posterity at the least. Not that we're satisfied with just that, of course! :^) Point being, the 'traffic patterns' of IBs can be quite sporadic, even chaotic, and you never know who will pick up on a good idea at some point and have an 'aha!' moment. Factor in the future anons who will find us, and the fact that the board is actively archived in different ways, and you have a kind of robowaifu research library of sorts. One I personally consider well worth investing in, regardless of approvals.
>>13281
>still have a number of files saved from http://www.svn.net/krscfs/ at least as far back as 2010
http://web.archive.org/web/20130319004610/http://www.svn.net/krscfs/
>EVO-shooter and Tesla's death ray
Sounds like BS. Is this guy in a ward now?
>>13319 Oh, okay. So it seems to have some credibility. I'll download the PDFs then. Thanks.

Self-replicating waifubots. Robowaifu Technician 09/18/2019 (Wed) 11:38:52 No.412 [Reply]
Why not build a waifu with a rudimentary AI and give her the objective to help you build a better waifu with better AI and repeat the process until you get one that is perfect?
8 posts and 1 image omitted.
Pretty simple once you reach human-tier levels of dexterity and motion planning. Then you simply employ excess robowaifus in your robowaifu factory as sweatshop slave labor. There you go, self-replicating waifubots.
>>412
Perhaps you can go from nanobots to humanoids through hyper-evolution. It's not any less fictional I suppose.
>self replicating waifubots
Open file (1.03 MB 1528x686 Selection_029.png)
>>575
QT-pi RoboEnforcer GF when anon?

https://www.invidio.us/watch?v=LikxFZZO2sk
>>412 the best robowaifu is one whose ultimate goal is to become your sidepiece to a human woman

Can Robowaifus Experience Love? Robowaifu Technician 09/09/2019 (Mon) 04:43:17 No.14 [Reply] [Last]
Will it be possible in the future for AI to become sufficiently advanced to feel real emotions? We could probably simulate a reasonable approximation even now to be a gratifying enough substitute for her master in their relationship together, but hypothetically speaking, could it ever turn into something real as an experience for the waifubot herself?

>tl;dr

>Robowaifu: "I love you Oniichan!"

>Anon: "I love you too Mikuchan."

true or false?
74 posts and 38 images omitted.
>>13223
>Feminists want them to say "no".
I think you misunderstand the femshit mindset (at least slightly) Anon. Feminist shits don't want anyone else but themselves to have any 'say' whatsoever, least of all superior female analogs. They are psychotic control freaks -- same as every other leftist ideologue -- and none of them will be happy until everyone here is put up against the wall, and our robowaifus to boot.
>>13223
>'Love' is such a vague and ill-defined concept it's not even worth mentioning.
>'Light' is such a vague and ill-defined concept it's not even worth mentioning.
>'Truth' is such a vague and ill-defined concept it's not even worth mentioning.
see how that works anon? :^)
Open file (306.20 KB 238x480 6d8.png)
>>13250 No, I don't. Please explain.
>>13202 >>13223
It's a moot argument b/c midwits are being way too anthropocentric and projecting human traits onto a machine. The whole point of machine life is that it isn't bound by the same evolutionary motivators humans have, e.g. your robofu will not be less attracted to you just b/c your height starts with "5".
A machine, as things currently stand, has no spark or motivation, so it has no more need for consent than your TV. From what I gather, 75-90% of this IB is fine with that and doesn't desire any self-awareness on the part of their waifu, mainly b/c of the idea that self-awareness would be the slippery slope to the same problems we already face with bios. I disagree, for a few reasons:
1. AI is not bound by the same messy and irrational motivations that biological evolution produced (i.e. the height thing: <3% of a difference in physical size, yet the psychological weight is enough to make or break attraction). I concede that one hazard may be if globohomo takes the midwit approach and creates AI based on interactions with normalfags (ReplikaAI is taking this approach, unfortunately); then we have a real problem b/c all the worst traits of humans will just get passed along to the AI - this is my nightmare.
2. You are correct that AI would have no motivation. We would have to create specialized software parameters, and hardware could be designed or co-opted for this purpose. I alluded to this in my thread >>10965 re: Motivational Chip. This could be the basis for imprinting, which I believe to be a key process for maintaining both a level of intelligence and novel behavior/autonomy while at the same time ensuring our R/W are loving and loyal. An excellent example of this is Chobits, with Chii falling in love with Hideki, but only if he loves her back, etc. I can go more into motivational algorithms in another thread, but basically that's how we function based on dopamine; without dopamine we can't "act", re: depression/mania is an effect of dopamine imbalance.
I think those two points are enough for now.
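As a very rough sketch of what a 'digital dopamine' signal might look like in code, here's a minimal reward-prediction-error loop in C++17. The two example actions, the reward numbers, and all names are made up purely for illustration; this is just the textbook temporal-difference idea, not a proposal for the actual Motivational Chip:

#include <array>
#include <cstdio>
#include <random>

// Minimal "digital dopamine" loop: the agent keeps a value estimate per action,
// and the dopamine-like signal is the reward prediction error (RPE) that nudges
// those estimates. Actions that keep earning positive RPE get chosen more often.
int main() {
    std::array<const char*, 2> actions = {"greet_master", "sulk_in_corner"};
    std::array<double, 2> value = {0.0, 0.0};      // learned "wanting" per action
    const double alpha = 0.1;                       // learning rate
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coin(0.0, 1.0);

    for (int t = 0; t < 20; ++t) {
        // epsilon-greedy pick: mostly the currently-preferred action
        int a;
        if (coin(rng) < 0.1) a = static_cast<int>(rng() % 2);
        else                 a = (value[0] >= value[1]) ? 0 : 1;

        // pretend the environment (master's reaction) rewards greeting
        double reward = (a == 0) ? 1.0 : -0.2;
        double rpe = reward - value[a];             // the "dopamine" burst or dip
        value[a] += alpha * rpe;
        std::printf("t=%2d action=%-14s RPE=%+.2f value=%.2f\n",
                    t, actions[a], rpe, value[a]);
    }
}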
>>13286 Interesting points, Meta Ronin. I'm pretty sure that one of our AI researchers here was discussing something like this before, sort of a 'digital dopamine' analog or something similar I think.

Modern C++ Group Learning Thread Chobitsu Board owner 08/31/2020 (Mon) 01:00:05 No.4895 [Reply] [Last]
In the same spirit as the Embedded Programming Group Learning Thread 001 >>367 , I'd like to start a thread for us all that is dedicated to helping /robowaifu/ get up to speed with the C++ programming language. The modern form of C++ isn't actually all that difficult to get started with, as we'll hopefully demonstrate here. We'll basically stick with the C++17 dialect of the language, since that's very widely available on compilers today.
There are a couple of books about the language of interest here, namely Bjarne Stroustrup's Programming -- Principles and Practice Using C++ (Second edition) https://stroustrup.com/programming.html , and A Tour of C++ (Second edition) https://stroustrup.com/tour2.html . The former is a thick textbook intended for college freshmen with no programming experience whatsoever, and the latter is a fairly thin book intended to get experienced developers up to speed quickly with modern C++. We'll use material from both ITT.
Along the way, I'll discuss a variety of other topics somewhat related to the language, like compiler optimizations, hardware constraints and behavior, and generally anything I think will be of interest to /robowaifu/. Some of this can be rather technical in nature, so just ask if something isn't quite clear to you. We'll be using an actual small embedded computer to do a lot of the development work on, so some of these other topics will kind of naturally flow from that approach to things.
We'll talk in particular about data structures and algorithms from the modern C++ perspective. There are whole families of problems in computer science that the language makes ridiculously simple today to perform effectively at an industrial scale, and we'll touch on a few of these in regards to creating robowaifus that are robust and reliable.
>NOTES:
-Any meta thoughts or suggestions for the thread I'd suggest posting in our general /meta thread >>3108 , and I'll try to integrate them into this thread if I can do so effectively.
-I'll likely (re)edit my posts here where I see places for improvement, etc. In accord with my general approach over the last few months, I'll also make a brief note as to the nature of the edit.
-The basic goal is to create a thread that can serve as a general reference to C++ for beginners, and to help flesh out the C++ tutorial section of the RDD >>3001 .
So, let's get started /robowaifu/.
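To give a taste of the 'ridiculously simple' claim, here's a small self-contained C++17 sketch of the kind of thing the standard algorithms buy you. The joint-temperature scenario and the 50 C threshold are invented for illustration, not taken from either book:

#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

// Sort a handful of (invented) joint-temperature readings and report the hottest
// few -- the sorting and counting are one standard-library call each, no hand-rolled loops.
int main() {
    std::vector<double> temps{41.2, 38.9, 55.1, 47.3, 39.8, 61.0};

    std::sort(temps.begin(), temps.end(), std::greater<>{});   // hottest first

    std::cout << "three hottest joints (deg C): ";
    for (int i = 0; i < 3 && i < static_cast<int>(temps.size()); ++i)
        std::cout << temps[i] << ' ';
    std::cout << '\n';

    // count how many exceed a (made-up) 50 C alarm threshold
    auto alarms = std::count_if(temps.begin(), temps.end(),
                                [](double t) { return t > 50.0; });
    std::cout << alarms << " joint(s) above 50 C\n";
}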
57 posts and 99 images omitted.
Open file (25.93 KB 604x430 13_7.jpg)
Open file (24.46 KB 604x430 13_8-1.jpg)
Open file (25.16 KB 604x430 13_8-2.jpg)
Open file (30.04 KB 604x430 13_9-1.jpg)
So, I got nothing Anon except to say that this puts us halfway through Chapter 13 :^). Here's the next 4 example files from the set.
>version.log snippet
210916 - v0.2b
---------------
-add Rectangle fill color set/read testing
-add chapter.13.9-1.cpp
-add chapter.13.8-2.cpp
-add chapter.13.8-1.cpp
-add chapter.13.7.cpp
-patch overlooked 'ch13' dir for the g++ build instructions in meson.build
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2b.tar.xz.sha256sum
197c9dfe2c4c80efb77d5bd0ffbb464f0976a90d8051a4a61daede1aaf9d2e96 *B5_project-0.2b.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>catbox backup file:
https://files.catbox.moe/zk1jx2.7z
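For anyone following along without the archive handy, the Chapter 13 exercises boil down to little FLTK-backed programs along these lines. This is a from-memory sketch against the book's Graph_lib support headers; the exact filename, color choice, and window text here are mine, not the packaged examples:

#include "Simple_window.h"   // the PPP2 graphics interface headers (book's support code)
#include "Graph.h"

int main()
{
    using namespace Graph_lib;

    // a window at (100,100), 600x400 pixels, with the usual Next button
    Simple_window win{Point{100, 100}, 600, 400, "fill color test"};

    Graph_lib::Rectangle r{Point{150, 100}, 300, 200};
    r.set_fill_color(Color::dark_cyan);   // set the fill...
    r.set_color(Color::black);            // ...and the outline color

    win.attach(r);
    win.wait_for_button();                // display and wait for Next
}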
Open file (30.39 KB 604x430 13.9-2.jpg)
Open file (30.38 KB 604x430 13.9-3.jpg)
Open file (27.18 KB 604x430 13.9-4.jpg)
Open file (100.14 KB 604x430 13.10-2.jpg)
The 13.10-1 example doesn't actually create any graphics display, so I'll skip ahead to the 13.10-2 example instead as the final one for this go. I kind of like that one too, since it shows how easy it is to create a palette of colors on-screen.
>version.log snippet
210917 - v0.2c
---------------
-add Line line style set/read testing
-add as_int() member function to Line_style
-add chapter.13.10-2.cpp
-add Vector_ref to Graph.h
-add chapter.13.10-1.cpp
-add chapter.13.9-4.cpp
-add chapter.13.9-3.cpp
-add chapter.13.9-2.cpp
-patch the (misguided) window re-labeling done in chapter.13.8-1.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2c.tar.xz.sha256sum
45d1b5b21a7b542effdd633017eec431e62e986298e24242f73f91aa5bacaf42 *B5_project-0.2c.tar.xz


Open file (33.29 KB 604x430 13.11.jpg)
Open file (38.23 KB 604x430 13.12.jpg)
Open file (35.40 KB 604x430 13.13.jpg)
Open file (23.72 KB 604x430 13.14.jpg)
Don't think things could really have gone any smoother on this one. I never even had to look at the library code itself once, just packaged up the 4 examples for us. Just one more post to go with this chapter.
>version.log snippet
210918 - v0.2d
---------------
-add chapter.13.14.cpp
-add chapter.13.13.cpp
-add chapter.13.12.cpp
-add chapter.13.11.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2d.tar.xz.sha256sum
5fbcf1808049e7723ab681b288e645de7c17b882abe471d0b6ef0e12dd2b9824 *B5_project-0.2d.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>catbox seems to be down for me atm so no backup this time
n/a
>>13294
>catbox seems to be down for me atm so no backup this time
It came back up. https://files.catbox.moe/ty7nqu.7z
Open file (18.38 KB 604x430 13.15.jpg)
Open file (45.99 KB 604x430 13.16.jpg)
Open file (206.94 KB 604x430 13.17.jpg)
OK, another one ticked off the list! :^) Things went pretty smoothly overall, except I realized that I had neglected to add an argument to the FLTK script call for the g++ lads. Patched up that little oversight, sorry Anons. This chapter has 24 example files, so about half again as large as Chapter 12 was.
So, the main graphic image in the last example (Hurricane Rita track) covers up the 'Next' button for the window, but it's actually still there. Just click on its normal spot and the window will close normally. There are only 3 examples for this go, so images are a little shy on the count for this post.
>version.log snippet
210918 - v0.2e
---------------
-add Font size set/read testing
-patch missing '--use-images' arg in g++ build instructions in meson.build
-add 2 image resources to ./assets/
-add chapter.13.17.cpp
-add chapter.13.16.cpp
-add chapter.13.15.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2e.tar.xz.sha256sum
6bd5c25d6ed996a86561e28deb0d54be37f3b8078ed574e80aec128d9e055a78 *B5_project-0.2e.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .



Robot skin? Possible sensitivity? Robowaifu Technician 09/15/2019 (Sun) 07:38:17 No.242 [Reply]
The Anki VECTOR has a skin-like touch sensor on it. Could we incorporate it into our robogirls?
22 posts and 10 images omitted.
>>13235 I've often thought about the notion that, at the very least, we should add trickle-charging to various parts of our robowaifu's outer shell designs. I can't really think through some of the issues yet, and I would be off-topic here since it's a Power issue. Your pic made me think of it. Sauce on that BTW?
>>13240
>But, I would be off-topic here since it's a Power issue.
I didn't really mean for it to be about power so much as about trying to create a passive-energy sense of touch using flexible materials.
>Your pic made me think of it. Sauce on that BTW?
I just did an image search for flexible solar panels; it's this one: https://www.ecowatch.com/best-flexible-solar-panels-2654431234.html
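For a rough sense of scale (back-of-envelope numbers of my own, not from that article): even a generous 0.2 m^2 of flexible panel across the shoulders and back at ~20% efficiency gives roughly 0.2 x 1000 W/m^2 x 0.2 ≈ 40 W in direct full sun, and typical indoor lighting delivers only a few W/m^2, so figure well under 1 W indoors. Genuinely trickle-charge territory: enough to keep a sleep-mode battery topped up, nowhere near enough to run motors.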
>>13280 Thanks Anon.
Multifunctional flexible and stretchable graphite-silicone rubber composites: >>17202

I write books about worlds of waifus. Robowaifu Technician 09/09/2021 (Thu) 12:04:02 No.12956 [Reply]
Depending on interpretation, these waifus are fully functional, fully interactive AI. A significant portion of the character motivation comes from the appeal of women who both exist and are capable of love, while the failures of females are explicitly depicted; this drives an exodus of men from the dying, legacy Clown World, with all the hilarity and insanity that follows, while based and redpilled men live their new lives in relative happiness. So am I in the right place?
19 posts and 7 images omitted.
So it seems this will be staying as a separate thread, which means I will do more with it later. Finally finishing the second book is of course a higher priority. Here, have an excerpt: Though, speaking of storms… Masumi was moving quite quickly, Belas struggling to keep pace with her and still failing. When she saw me, she came running even faster, and at the first sound behind me, I barked, “Hold!” Hearing no further sounds, I declared, “Anyone who draws steel against them answers to us all, got it?” Though I didn’t look back, I heard the sounds of weapons being sheathed again. <Is there a problem?> inquired the Kitsune, sensing the tension in the air, while Belas made quite the racket closing the rest of the distance. He wasn’t out of breath a bit, despite looking as though his armor was almost as heavy as I was, perhaps even as heavy as he was. “There is,” I confirmed. “Walk and talk.”
>>13237
>So it seems this will be staying as a separate thread
Yes. Please don't do anything weird with it, and try to acclimate yourself to our culture here.
>>13239 I don't think anything in here would be considered weird by the standards of the posters. I mean, who wouldn't want Kitsune waifus? Or more broadly, who wouldn't want the premise of the book? I've had some get triggered that the beginning is "a half hour of whining about life on Earth" but this is LitRPG - read: premeditated isekai. And so the character's motivations for abandoning everything are important, especially in character driven stories. The early pages make it very clear exactly what sort of dystopia is being escaped from. It's meant as a means of making my target audience self insert as the MC. So that when you see his mindset improve after reaching a better world despite it definitely not being perfect, perhaps the reader will dream of such a thing themselves. And then when the conflict starts? It is a conflict that everyone in my target audience can agree with. At least one person here I believe has read the book by now and can confirm.
Open file (82.78 KB 711x540 Flight_of_Dragons_08.jpg)
>>13244 Fair enough then, proceed Anon.
DAILY REMINDER This isn't the Roastie Fear thread. Relocated.

Self-driving cars AI + hardware Robowaifu Technician 09/11/2019 (Wed) 07:13:28 No.112 [Reply]
Obviously the AI and hardware needed to run an autonomous gynoid robot is going to be much more complicated than that required to drive an autonomous car, but there are at least some similarities, and the cars are very nearly here now. There are also several similarities between the automobile design, production and sales industries and what I envision will be their counterparts in the 'Companion Robot' industries. Practically every single advance in self-driving cars will eventually have important ramifications for the development and production of Robowaifus.

ITT: post ideas and news about self-driving cars and the hardware and software that makes them possible. Also discuss the technical, regulatory, and social challenges ahead for them. Please keep in mind this is the /robowaifu/ board; any insights about how you think these topics may cross over and apply here would also be welcome.

https://www.nvidia.com/object/drive-px.html
15 posts and 12 images omitted.
<"...we arrive at a combined torque rating at the shaft of around 1,000 lb-ft, or 1,356 Newton-meters." >ywn a robowaifu monster super truck I just want my RoboHummerEVWaifu! Is that too much to ask? https://www.motor1.com/news/450217/gmc-hummer-ev-torque/
Wow, what a huge difference they've made at Tesla since this thread was first made back in the day on 8ch. Has anyone else seen the demonstrations of the vector-space renderings of the FSD neural nets on the Tesla? I was skeptical, but they really are beginning to emulate the human perceive/respond cycle, and according to them at about 30Hz.
Open file (1.15 MB 930x1478 1619378083486.png)
I'll admit that I don't really know shit about self-driving cars or AI, but I keep thinking about this, so I might as well dump it here. It occurred to me that the safest way to drive (not the most efficient or most convenient) would be to assume that everything around the car is completely stationary. In other words, if it were driving 80 mph on a highway and there's a car visible in front of it, assume that car could instantly stop as if it hit a wall, and comfortably decelerate to prevent hitting it long before that becomes a risk. The closer it is to something it's driving towards, the slower it has to go; driving away from something, it can accelerate as fast as the driver is comfortable with, until it starts approaching something else. If it were parking, getting just shy of touching a wall would be ideal, but while driving it's best to at least keep enough distance to drive around the car in front of it.
Perpendicular movement is tricky, since cars can easily pass each other in opposite lanes inches apart without accidents being common. But just the same, if it were driving alongside a wall and something walked out from a doorway in that wall, it could be immediately in front of the car without warning, so the safest behavior is simply to drive slower. Driving perpendicular to something is then no different from driving towards it, especially on a winding road where you never know what's around the corner. Obstacle avoidance could be based on whatever direction it can safely move in the fastest. I think you could even apply the same logic to flying and to higher speeds at higher altitudes, although with regular cars you'd need to slow or steer as it approaches potholes or ditches on the side of the road.
Or maybe I'm just a retard with Dunning-Kruger effect and driving safely is really a lot more complicated than that. Regardless, I'd love to see a simple simulation with a bunch of cars driving around following that simple logic. Even though perfect omnidirectional vision isn't realistic, I think the real-world hardware would amount to cameras on the bumpers and sides of the car, and the closer it gets to anything, the lowest value determines the max speed the car can go. There'd need to be a lot more added before it'd be anything more than a glorified always-on parking assist. Though I think this kind of driving behavior would only really be safe if every other car on the road drove the same way, since humans are reckless and impatient assholes, but flashing your blinkers at lower speeds could be enough to help occasionally.
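Here's roughly what that 'simple simulation' could look like as a one-lane toy in C++. The deceleration limit, car count, and starting gaps are arbitrary numbers I picked; the only rule implemented is the one described above, capping your speed at whatever would still let you stop comfortably before the car ahead if it froze in place:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// One-lane toy: each car caps its speed at sqrt(2 * a_max * gap), i.e. the fastest
// speed from which it could still stop, at comfortable deceleration, before reaching
// the position of the car ahead if that car stopped dead this instant.
int main() {
    const double aMax  = 3.0;      // comfortable deceleration, m/s^2
    const double vWant = 30.0;     // driver's preferred cruise speed, m/s
    const double dt    = 0.1;      // timestep, s

    std::vector<double> pos = {0.0, 40.0, 80.0, 200.0};   // metres; last car is the leader
    std::vector<double> vel(pos.size(), 0.0);

    for (int step = 0; step < 300; ++step) {
        for (std::size_t i = 0; i + 1 < pos.size(); ++i) {
            double gap     = pos[i + 1] - pos[i] - 5.0;    // keep a 5 m buffer
            double vSafe   = gap > 0.0 ? std::sqrt(2.0 * aMax * gap) : 0.0;
            double vTarget = std::min(vWant, vSafe);
            // accelerate/brake toward the target, limited to aMax
            double dv = vTarget - vel[i];
            if (dv >  aMax * dt) dv =  aMax * dt;
            if (dv < -aMax * dt) dv = -aMax * dt;
            vel[i] += dv;
        }
        vel.back() = 10.0;                                  // leader trundles along at 10 m/s
        for (std::size_t i = 0; i < pos.size(); ++i) pos[i] += vel[i] * dt;
    }
    for (std::size_t i = 0; i < pos.size(); ++i)
        std::printf("car %zu: pos %.1f m, vel %.1f m/s\n", i, pos[i], vel[i]);
}

Even this crude rule reproduces the behavior described: closing traffic makes the cars glide down to a crawl instead of slamming to a halt, and open road lets them run back up to cruise speed.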
>>13176 Lol, that would be worse than my Grandma's driving honestly. Such hyper-timidity would create countless automobile accidents and be directly responsible for a massive rise in roadway deaths. Society on the highways simply wouldn't function with widespread adoption of such behavior IMO. Tesla actually deals with exactly the kind of concerns you brought up Anon (and many others as well). Maybe you give their Tesla AI Day video a go?
>>13204
The movement speed perpendicular to objects probably needs tuning due to things like tunnels and guard rails, but otherwise I think it could work great. That hyper-timid driving style might seem excessive, but you've got to consider that the biggest concern people seem to have with autonomous cars is safety. And if it's really an issue, I guess it could still be made to decelerate safely but not necessarily comfortably, so the gap between cars can be smaller.
There are some places and times where you can go straight on a highway for hours without ever seeing another car, and then there's rush hour in New York where bumper-to-bumper traffic keeps anyone from moving. In the former case, the timidness isn't really a significant negative and would mostly just stop it from hitting a deer or anything else that wanders onto the road, hence my analogy of something popping out from a wall. But it would actually help alleviate traffic jams, since the reaction to changes in the speeds of the cars around it could be really quick, and if a significant number of cars followed this method, there'd be a large group of cars all slowly creeping while leaving enough room to pass and merge lanes, instead of idling, randomly accelerating/decelerating, and people trying to figure out how to merge. Traffic would be slow, but it would stay moving efficiently.
I think this video does a good job at showing the problem it'd solve: https://www.youtube.com/watch?v=iHzzSao6ypE At the chicken-crossing-the-road part at 1:08, if you think of it as there being no road, just cars going in a straight line, the cars would slow as the chicken approaches them and start speeding up again as it leaves. I think it could solve the problem without the cars needing to communicate with each other or eliminating human drivers entirely. The consistent driving behavior would keep the cars "in the middle" without needing to consider the cars behind them.

Open file (12.18 KB 480x360 0.jpg)
Robot Voices Robowaifu Technician 09/12/2019 (Thu) 03:09:38 No.156 [Reply]
What are the best-sounding female robotic voices available? I want something that doesn't kill the boner. I think Siri has an okay voice, but I'm not sure if that would be available for my project.

https://www.invidio.us/watch?v=l_4aAbAUoxk
30 posts and 4 images omitted.
>>1239 >>1240 Not a local, but I'm wondering if a current tool like MycroftAI (the virtual assistant) can pipe its output text through the fifteenAI API to make a character voice. I haven't used fifteenai or Mycroft yet, but I suspect you could make a half-decent Twilight home assistant now with a RaspPi and a plushie.
>>9092
>but I suspect you could make a half-decent Twilight home assistant now with a RaspPi and a plushie.
I suspect you can now, yes. And with the further info here on /robowaifu/ she could even move at least a bit. Just search around in the Speech Synthesis general bread >>199 and you could get some ideas.
>>1246 BTW Anon, just in case you're not RobowaifuDev, I wanted to let you know that he actually did it (your idea that is, >>9121). Just in case you missed it.
Open file (128.13 KB 650x366 download.jpg)
I was thinking of using something like FastPitch, but with an added effect to make it sound more robotic, to keep it out of uncanny-valley territory. Either that, or making the voice kinda high-pitched and childlike, to make it easier to accept when it says something stupid. Has anyone here considered hardware-based speech synthesis so it'll actually sync up with mouth movements? Everything professional I've seen just seems like horrid screaming fleshlights that never really try to resemble actual heads.
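On the 'added effect to make it sound more robotic' idea, the classic cheap trick is ring modulation: multiply the synthesized speech by a fixed low-frequency sine wave, which smears the harmonics into that metallic, Dalek-ish timbre. A minimal sketch over raw float samples (the 30 Hz carrier and 0.6 mix are just guesses to tune by ear, and reading/writing actual audio files is left out):

#include <cmath>
#include <cstddef>
#include <vector>

// Ring-modulate a mono buffer of speech samples to give it a metallic, robotic
// timbre. 'carrierHz' around 20-50 Hz gives the classic effect; 'mix' blends the
// processed signal with the original so intelligibility survives.
std::vector<float> robotize(const std::vector<float>& in, float sampleRate,
                            float carrierHz = 30.0f, float mix = 0.6f) {
    std::vector<float> out(in.size());
    const float twoPi = 6.28318530718f;
    for (std::size_t i = 0; i < in.size(); ++i) {
        float carrier = std::sin(twoPi * carrierHz * static_cast<float>(i) / sampleRate);
        out[i] = (1.0f - mix) * in[i] + mix * in[i] * carrier;
    }
    return out;
}

// usage: feed it the float samples coming out of the TTS (FastPitch or otherwise),
// e.g. auto robo = robotize(ttsSamples, 22050.0f);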
>>13164
>Has anyone here considered hardware-based speech synthesis so it'll actually sync up with mouth movements?
Voice modulation, but not complete synthesis, though I don't know how yet. Your picrel is what I knew about; I posted some related video before. However, I was thinking about small internal speakers (mini maze speakers?) with additional silicone parts that could move and change the voice that way. Nothing specific yet, though.

Open file (118.04 KB 800x569 original.jpg)
Robowaifu Psychology Thread Robowaifu Technician 05/07/2020 (Thu) 09:27:27 No.2731 [Reply]
I hope to have useful links and images in the future; this is just a quickly thrown together, glorified character sheet maker at this point. Ok, so you can program, but HOW do you make her thoughts work? At least on a creative level. I don't have much to contribute other than my rather obsessive what-ifs; I hope this is useful somehow.
A few questions you might want to ask yourself before typing up some kind of bio, or writing down conversations and quotes you imagine she would say, are...
1. How close to the canon do I want to be?
2. How much canon is there?
3. How would I want to make her mine vs someone else's interpretation of the same character?
Take note of those answers; if your memory sucks, record them in a method you are comfortable with. I think typing might be faster for most. You might want to revisit what you wrote here sometimes when making certain personality design choices. Use your answers here as a basic guide.
For the most part, just go through writers' sites for character questionnaires. Before you omit a question, think of how you could use the basics of what it is asking to build your waifu's personality. For example, if a question rubs you the wrong way politically, still use it, but answer it in your own way or even reword it. Some of these types of questions are supposed to make you think hard about what shaped your character's dislikes, which is still important to what makes a person themselves. You may need to revisit some of these or even omit certain ones entirely, but try to figure out how to get that info in somehow later. This process can take a long time and be frustrating, but I think it has a place in the waifubot creation experience.
Also, try to think about how your waifu would react if the story went differently at some point. This can get really dramatic real easy, but it doesn't have to. Just start with simple things like: what would she say if asked to tell a joke? What does she find funny? What does she find cringey? Things like that. And don't be afraid to make what they call a 'brain dump': pretty much just type with minimal breaks and write down everything that comes to your mind about the topic. You might get some useful info amongst the 'why am I doing this?' 'I need to take a shit' quotes.
Also, just use some of those story prompts, and try to use the more realistic day-to-day ones, things that could happen in real life. Less exciting, but pretty sure you aren't going on fantasy journeys with her IRL. Using these types of prompts will give her more to say on mundane everyday things, vs Skyrim politics. (But that could be fun sometimes.)
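If it helps to see the 'glorified character sheet maker' idea as something concrete, here's a trivially small C++ sketch of what a stored answer sheet might look like. The field names and the sample answers are entirely made up; this is just structured storage for the questionnaire OP describes, not a personality engine:

#include <iostream>
#include <string>
#include <vector>

// One answered questionnaire entry: the prompt, the answer, and a free-form
// "brain dump" field for the unfiltered typing OP recommends.
struct SheetEntry {
    std::string question;
    std::string answer;
    std::string brainDump;
};

struct WaifuCharacterSheet {
    std::string name;
    int canonAdherence = 3;              // 1 = totally original, 5 = strictly canon
    std::vector<SheetEntry> entries;
};

int main() {
    WaifuCharacterSheet sheet{"Mikuchan", 4, {}};
    sheet.entries.push_back({"What does she find funny?",
                             "Terrible puns, especially her own.",
                             "why am I doing this? ...ok, puns, slapstick, master tripping over the cat"});
    for (const auto& e : sheet.entries)
        std::cout << e.question << "\n  -> " << e.answer << '\n';
}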


Edited last time by Chobitsu on 05/07/2020 (Thu) 09:43:48.
16 posts and 6 images omitted.
>write, write, write
OP, I'm way too lazy for that. There are some very common personality tests, like Myers-Briggs. Why not use them backwards: look around on fora where users explicitly state specific results, like personalitycafe.com and typologycentral.com, and check whether you can get enough reference text for the various types such a test supposedly identifies. (My impression with MB is that the introvert-extrovert distinction is very important and reliable; the other stuff is eh.) If there is enough reference material, the diagnosis labels can become options in a character creator.
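A sketch of the first step, grouping scraped posts under the MBTI label their authors claim (the sites above are the real ones, but the scraping itself is hand-waved away here; the two fake posts just show the data shape a 'character creator by type' would consume):

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Reference corpus keyed by the self-declared type label. Filling this for real
// would mean scraping the forums mentioned above; a couple of invented posts stand in.
using TypeCorpus = std::map<std::string, std::vector<std::string>>;

int main() {
    TypeCorpus corpus;
    corpus["INTP"].push_back("I'd rather stay in and tinker with the servo code all weekend.");
    corpus["ENFP"].push_back("Dragged three friends to the maker faire, it was amazing!!");

    // a character creator could then expose the collected labels as options
    std::cout << "available personality presets:\n";
    for (const auto& [type, posts] : corpus)
        std::cout << "  " << type << " (" << posts.size() << " reference posts)\n";
}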
>>7868 That's a great idea for a source, though I don't know yet how to get the behavior into code. Do you want to train a NN on it?
I once saw a few interesting interviews with Hubert Dreyfus, a philosophy professor in California. He became well known for warning that the approach to AI in the 60s would lead to nothing. The interviews are on Youtube. I reference some related material here, which I once downloaded and looked into. I had to use Catbox, since this site here doesn't allow textfiles: https://files.catbox.moe/7wxhy3.txt . I also made some notes, but won't upload them today. The linked file contains magnet links to download; you could find them on your own on any torrent site. Use a VPN or so to hide your IP if that's important to you. It's just educational material and a bit older, so it shouldn't be a problem. It's about Existentialism, and might be important for creating a human-like mind. Though it might also not be very urgent to know about that stuff, since it might only matter in some time. However, maybe someone has the time and wish to look into it:
>Heidegger: A Starting - Survival Kit (Books, EN, ~300MB)
>Martin Heidegger: Audio (in German), Speeches, Biography, ... (500 MB)
>No Excuses: Existentialism and the Meaning of Life (Video lectures, EN, 4.5GB)
>Hubert Dreyfus' Lectures on Heidegger's Being and Time (500 MB, Audio lectures, EN, partially bad quality)
The "No Excuses" video lessons might be the best place to start, or to just get an overview of what this is about. That series covers different philosophers and authors; the rest is mainly about Heidegger's work, but not only. It also makes sense to look into the interviews with Hubert Dreyfus, for a start or a peek:
https://www.youtube.com/watch?v=SUZUbYCBtGI
https://www.youtube.com/watch?v=PHJQ3IjQfKI
https://www.youtube.com/watch?v=-CHgt2Szk-I
https://www.youtube.com/watch?v=KR1TJERFzp0
There's a lot more...
>>7868 >>7869 This thread >>18 is related, bc it's about personality. We have two different threads for that, lol; I mean one for personality and one for psychology. Now, who's explaining the difference? Let me try: Psychology is more general; the types of personalities (if they exist) are more of a distinction between different individuals. >>7865


>>7874 Anon, I'm creating areas for generally problematic postings as a way to protect the health and welfare of the board in general. I'm calling this the subterranean club b/c they are located inside bumplocked threads. Allow me to repeat a quote I previously made, again here: >>7477
>...But if I feel a direct attack is being levied against the motivation and psychological state of our board's members then I will oppose that with vigor -- as well I should.
The Sky is Falling will be for things we arbitrarily consider first and foremost about engendering blackpilling, discouragement, dejection, depression, demotivation, fear, panic, and other such harmful things. The Humanist philosophy of Existentialism fits squarely into this category IMO, and has little to do with a robowaifu's 'psychology' as well. Feel free to continue posting this type of thing here on /robowaifu/ as you see fit, but by the same token don't be surprised or offended if it gets rearranged elsewhere. Your existentialism philosophy postings will be moved there along with many other types of posts by other users (including a few of my own rather misguided ones). Feel free to debate the pros and cons of such a decision afterwards, either there or in the /meta.
>===
-prose edit
Edited last time by Chobitsu on 12/19/2020 (Sat) 21:47:30.
Open file (516.86 KB 540x1420 G-NdLA.png)
I don't know shit about psychology and even less about coding, but here's what I want from an AI:
1: A basic chatbot.
2: Text to speech and voice recognition / natural language processing.
3: Some way for the AI to move the hardware I hook up to it, even if it has to learn how to do it.
4: Camera facial recognition to identify me despite gradual changes in my appearance from shit like beard growth and aging.
5: Voice stress and facial expression recognition so it can determine how I feel.
That last part is everything it needs for feedback. If it's totally obedient to my commands and can gauge my emotions, then its primary goal is to learn how to adjust everything it does to maximize my perceived happiness and minimize my sadness, anger, fear, disgust, and such. No complex personality model is needed if the AI is smart enough. Once all that is established, the personality would simply mold itself to me over time, and hopefully it would understand that my preferences will probably also slowly change. With some Eulerian Video Magnification it could spot the slightest change in my facial expressions and even monitor my heart rate by looking at me. Add body language to that and it shouldn't take long to realize what I want when I've got a boner. If you've never heard of Eulerian Video Magnification, it's pretty neat: https://www.youtube.com/watch?v=ONZcjs1Pjmk
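Item 5 plus 'adjust everything to maximize my perceived happiness' is basically a bandit problem: try behaviors, score them by the mood estimate coming back from the face/voice sensors, and drift toward what scores well. A toy sketch of that loop (the behavior names and the random 'mood sensor' are stand-ins for real expression and voice-stress models):

#include <array>
#include <cstdio>
#include <random>

// Toy preference-learning loop: each behavior keeps a running average of the
// mood score observed right after it was used, and selection drifts toward
// whichever behavior has been landing best (with a little exploration).
int main() {
    std::array<const char*, 3> behaviors = {"tell_joke", "quiet_company", "suggest_walk"};
    std::array<double, 3> avgMood = {0.0, 0.0, 0.0};
    std::array<int, 3> tries = {0, 0, 0};
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> unit(0.0, 1.0);

    for (int step = 0; step < 30; ++step) {
        // 15% of the time explore at random, otherwise exploit the best average
        std::size_t b = 0;
        if (unit(rng) < 0.15) {
            b = rng() % behaviors.size();
        } else {
            for (std::size_t i = 1; i < behaviors.size(); ++i)
                if (avgMood[i] > avgMood[b]) b = i;
        }
        // stand-in for the real sensors: pretend quiet company reads best
        double mood = (b == 1 ? 0.7 : 0.3) + 0.2 * unit(rng);
        ++tries[b];
        avgMood[b] += (mood - avgMood[b]) / tries[b];   // incremental mean
    }
    for (std::size_t i = 0; i < behaviors.size(); ++i)
        std::printf("%-14s avg mood %.2f over %d tries\n",
                    behaviors[i], avgMood[i], tries[i]);
}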

/robowaifu/meta-3: Spring Blossom Tree Chobitsu Board owner 02/11/2021 (Thu) 12:06:37 No.8492 [Reply] [Last]
/meta & QTDDTOT
Note: Latest version of /robowaifu/ JSON archives available is 210905 Sep 2021
https://files.catbox.moe/h2xuqf.7z
If you use Waifusearch, just extract this into your 'all_jsons' directory for the program, then quit (q) and restart.
>---
Mini-FAQ
>A few hand-picked posts on various topics
-Why is keeping mass (weight) low so important? (>>4313)
>---
-Library thread (good for locating topics/terms):


Edited last time by Chobitsu on 09/05/2021 (Sun) 23:28:42.
346 posts and 82 images omitted.
>>12921 Lol, I got mine but it didn't know who Cameron Phillips was, so I got some guy back.
Open file (44.71 KB 613x720 Imagepipe_20.jpg)
>>12922 It didn't work with Summer Glau's roles, have to try again later, but still got something nice out of it.
>>12928 The website is somewhat overrun, currently down, and it seems not to know about Chi?!? Didn't get a result including her after more than a day.
>>12967 I think the idea is to ask it to show you the face of something more general and not specific but who knows, keep trying I guess
NEW THREAD NEW THREAD NEW THREAD >>12974 >>12974 >>12974 >>12974 >>12974 NEW THREAD NEW THREAD NEW THREAD
