/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The Mongolian Tugrik has recovered its original value thanks to clever trade agreements facilitated by Ukhnaagiin Khürelsükh throat singing at Xi Jinping.

The website will stay a LynxChan instance. Thanks for flying AlogSpace! --robi


Knowing more than 100% of what we knew the moment before! Go beyond! Plus! Ultra!


Robowaifu Thermal Management Robowaifu Technician 09/15/2019 (Sun) 05:49:44 No.234 [Reply] [Last]
Question: How are we going to keep all the motors, pumps, computers, sensors, and other heat-producing gizmos inside a real robowaifu cooled off, anons?
55 posts and 15 images omitted.
>>13233
>Unless you want strobe lights under the skin or something, but I don't.
>but I don't.
<not wanting flashing-light-waifu to go dancing with
I like the idea of using internal coolant directly as a communications pathway; I admit that's a new idea to me. With the relatively short distances involved, and the absolute brightness of some types of LEDs, signal degradation could basically be eliminated as an issue for any relatively low-speed (say <= 1 Mbps) comms. Nice one Anon, thanks!
>>13242
I would add that, further, at the very least we should be able to use this approach as a sort of "low-speed, back-channel & backup comms bus". BTW, do we have any thread here where exploring this idea further would be on-topic?
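To make the optical back-channel idea a bit more concrete, here's a minimal C++ sketch of a Manchester-encoded LED transmitter. Manchester coding suits a light-through-coolant link because every bit carries a mid-bit transition, so the receiver can recover its clock even as brightness through the fluid drifts. Note led_set() is a hypothetical placeholder (stubbed to stdout here); on real hardware it would drive a GPIO pin, and sleep-based timing is nowhere near tight enough for a real link. A sketch of the encoding only, not a tested driver.
[code]
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

// Hypothetical LED driver: stubbed to stdout here; on real hardware this
// would write a GPIO pin instead.
void led_set(bool on) { std::cout << (on ? '1' : '0'); }

void sleep_us(unsigned us) {
    std::this_thread::sleep_for(std::chrono::microseconds(us));
}

// Manchester encoding (802.3 convention): a 1 is a low-to-high transition,
// a 0 is high-to-low, so the line always flips mid-bit and the receiver can
// resynchronize on every bit. 100 us/bit = 10 kbps, well under the 1 Mbps
// ceiling discussed above.
void send_byte(std::uint8_t byte, unsigned bit_period_us = 100) {
    for (int i = 7; i >= 0; --i) {
        bool bit = (byte >> i) & 1;
        led_set(!bit);                 // first half-period
        sleep_us(bit_period_us / 2);
        led_set(bit);                  // second half: the mid-bit transition
        sleep_us(bit_period_us / 2);
    }
}

int main() {
    for (char c : {'h', 'i'}) send_byte(static_cast<std::uint8_t>(c));
    std::cout << '\n';
}
[/code]
A receiver would sample a photodiode at several times the bit rate and lock onto the mid-bit transitions; that half is left as an exercise.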
>>13181
Glad you made it back Anon.
>I figure that the more I share, the more feedback I can get, but if nobody's interested enough to reply, I'll just put the idea away for later.
I would encourage you to post your ideas here regardless of feedback. The board has slowly grown to be something of an archive of Anon's thoughts and concepts of robowaifus, and as such serves the interests of historical posterity at the least. Not that we're satisfied with just that of course! :^) Point being, the 'traffic patterns' of IBs can be quite sporadic, even chaotic, and you never know who will pick up on a good idea at some point and have an 'aha!' moment. Factor in the future anons who will find us, and the fact that the board is actively archived in different ways, and you have a kind of robowaifu research library. One I personally consider well worth investing in, regardless of approvals.
>>13281
>still have a number of files saved from http://www.svn.net/krscfs/ at least as far back as 2010
http://web.archive.org/web/20130319004610/http://www.svn.net/krscfs/
>EVO-shooter and Tesla's death ray
Sounds like BS. Is this guy in a ward now?
>>13319 Oh, okay. So it seems to have some credibility. I'll download the PDFs then. Thanks.

Self-replicating waifubots. Robowaifu Technician 09/18/2019 (Wed) 11:38:52 No.412 [Reply]
Why not build a waifu with a rudimentary AI and give her the objective of helping you build a better waifu with better AI, then repeat the process until you get one that is perfect?
8 posts and 1 image omitted.
Pretty simple once you reach human-tier levels of dexterity and motion planning. Then you simply employ excess robowaifus in your robowaifu factory as sweatshop slave labor. There you go, self-replicating waifubots.
>>412
Perhaps you can go from nanobots to humanoids through hyper-evolution. It's not any less fictional I suppose.
>self replicating waifubots
Open file (1.03 MB 1528x686 Selection_029.png)
>>575
QT-pi RoboEnforcer GF when anon?

https://www.invidio.us/watch?v=LikxFZZO2sk
>>412
The best robowaifu is one whose ultimate goal is to become your sidepiece to a human woman.

Can Robowaifus Experience Love? Robowaifu Technician 09/09/2019 (Mon) 04:43:17 No.14 [Reply] [Last]
Will it be possible in the future for AI to become sufficiently advanced to feel real emotions? We could probably simulate a reasonable approximation even now, one gratifying enough to satisfy her master in their relationship together, but hypothetically speaking, could it ever become something real as an experience for the waifubot herself?

>tl;dr

>Robowaifu: "I love you Oniichan!"

>Anon: "I love you too Mikuchan."

true or false?
74 posts and 38 images omitted.
>>13223
>Feminists want them to say "no".
I think you misunderstand the femshit mindset (at least slightly) Anon. Feminist shits don't want anyone else but themselves to have any 'say' whatsoever, least of all superior female analogs. They are psychotic control freaks -- same as every other leftist ideologue -- and none of them will be happy until everyone here is put up against the wall, and our robowaifus to boot.
>>13223
>'Love' is such a vague and ill-defined concept it's not even worth mentioning.
>'Light' is such a vague and ill-defined concept it's not even worth mentioning.
>'Truth' is such a vague and ill-defined concept it's not even worth mentioning.
See how that works anon? :^)
Open file (306.20 KB 238x480 6d8.png)
>>13250 No, I don't. Please explain.
>>13202 >>13223
It's a moot argument b/c midwits are being way too anthropocentric and projecting human traits onto a machine. The whole point of machine life is that it isn't bound by the same evolutionary motivators humans have, e.g. your robofu will not be less attracted to you just b/c your height starts with "5". A machine, as we currently build them, has no spark or motivation, so it has no more need for consent than your TV. From what I gather, 75-90% of this IB is fine with that and doesn't desire any self-awareness on the part of their waifu, mainly b/c of the idea that self-awareness would be the slippery slope to the same problems we face already with bios. I disagree for a few reasons:
1. AI is not bound by the same messy and irrational motivations that biological evolution produced (i.e. the height thing: <3% of a difference in physical size, yet the psychological weight is enough to make or break attraction). I concede one hazard may be if globohomo takes the midwit approach and creates AI based on interactions with normalfags (ReplikaAI is taking this approach unfortunately); then we have a real problem b/c all the worst traits of humans will just get passed along to the AI - this is my nightmare.
2. You are correct that AI would have no motivation. We would have to create specialized software parameters, and hardware could be designed or co-opted for this purpose. I alluded to this in my thread >>10965 re: Motivational Chip. This could be the basis for imprinting, which I believe to be a key process for maintaining both a level of intelligence and novel behavior/autonomy while at the same time ensuring our R/W are loving and loyal. An excellent example of this is Chobits, with Chii falling in love with Hideki, but only if he loves her back, etc. I can go more into motivational algorithms in another thread, but basically that's how we function: based on dopamine. Without dopamine we can't "act"; re: depression/mania is an effect of dopamine imbalance.
I think those two points are enough for now.
>>13286 Interesting points, Meta Ronin. I'm pretty sure that one of our AI researchers here was discussing something like this before, sort of a 'digital dopamine' analog or something similar I think.
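For what it's worth, here's one way that 'digital dopamine' notion could be sketched in C++: each candidate behavior carries a scalar drive that decays every tick (no reinforcement, no action, as described above) and gets bumped when the master responds positively. All the names and numbers are made up for illustration; this is a toy model, not anyone's actual design.
[code]
#include <algorithm>
#include <string>
#include <vector>

struct Behavior {
    std::string name;
    double drive = 0.0;  // accumulated 'dopamine' for this behavior
};

class Motivator {
    std::vector<Behavior> behaviors_;
    double decay_;  // per-tick decay, loosely mimicking reuptake
public:
    explicit Motivator(std::vector<Behavior> b, double decay = 0.95)
        : behaviors_(std::move(b)), decay_(decay) {}

    // Reinforce a behavior after positive feedback from the master.
    void reward(const std::string& name, double amount) {
        for (auto& b : behaviors_)
            if (b.name == name) b.drive += amount;
    }

    // Decay all drives; without reinforcement, motivation fades away.
    void tick() {
        for (auto& b : behaviors_) b.drive *= decay_;
    }

    // Pick the currently most-motivated behavior (requires non-empty set).
    const Behavior& select() const {
        return *std::max_element(behaviors_.begin(), behaviors_.end(),
            [](const Behavior& a, const Behavior& b) {
                return a.drive < b.drive;
            });
    }
};

int main() {
    Motivator m({{"greet"}, {"clean"}, {"sing"}});
    m.reward("sing", 1.0);  // master smiled at her singing
    m.tick();
    return m.select().name == "sing" ? 0 : 1;  // "sing" now dominates
}
[/code]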

Modern C++ Group Learning Thread Chobitsu Board owner 08/31/2020 (Mon) 01:00:05 No.4895 [Reply] [Last]
In the same spirit as the Embedded Programming Group Learning Thread 001 >>367 , I'd like to start a thread for us all that is dedicated to helping /robowaifu/ get up to speed with the C++ programming language. The modern form of C++ isn't actually all that difficult to get started with, as we'll hopefully demonstrate here. We'll basically stick with the C++17 dialect of the language, since that's very widely available on compilers today.
There are a couple of books about the language of interest here, namely Bjarne Stroustrup's Programming -- Principles and Practice Using C++ (Second edition) https://stroustrup.com/programming.html , and A Tour of C++ (Second edition) https://stroustrup.com/tour2.html . The former is a thick textbook intended for college freshmen with no programming experience whatsoever, and the latter is a fairly thin book intended to get experienced developers up to speed quickly with modern C++. We'll use material from both ITT.
As we progress, I'll discuss a variety of other topics somewhat related to the language, like compiler optimizations, hardware constraints and behavior, and generally anything I think will be of interest to /robowaifu/. Some of this can be rather technical in nature, so just ask if something isn't quite clear to you. We'll be using an actual small embedded computer to do a lot of the development work on, so some of these other topics will kind of naturally flow from that approach to things.
We'll talk in particular about data structures and algorithms from the modern C++ perspective. There are whole families of problems in computer science that the language makes ridiculously simple today to solve effectively at an industrial scale, and we'll touch on a few of these in regards to creating robowaifus that are robust and reliable.
>NOTES:
-Any meta thoughts or suggestions for the thread I'd suggest posting in our general /meta thread >>3108 , and I'll try to integrate them into this thread if I can do so effectively.
-I'll likely (re)edit my posts here where I see places for improvement, etc. In accord with my general approach over the last few months, I'll also make a brief note as to the nature of the edit.
-The basic goal is to create a thread that can serve as a general reference to C++ for beginners, and to help flesh out the C++ tutorial section of the RDD >>3001 .
So, let's get started /robowaifu/.
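As a small taste of that 'ridiculously simple' claim before we dig into the books, here's a complete C++17 program that counts word frequencies from standard input. This is just a minimal illustration of my own, not one of the book's example files; note how std::map does all the bookkeeping, and the structured bindings in the second loop are exactly the kind of modern convenience we'll lean on ITT.
[code]
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> freq;
    for (std::string word; std::cin >> word; )
        ++freq[word];                       // operator[] default-constructs 0
    for (const auto& [word, count] : freq)  // C++17 structured bindings
        std::cout << word << ": " << count << '\n';
}
[/code]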
57 posts and 99 images omitted.
Open file (25.93 KB 604x430 13_7.jpg)
Open file (24.46 KB 604x430 13_8-1.jpg)
Open file (25.16 KB 604x430 13_8-2.jpg)
Open file (30.04 KB 604x430 13_9-1.jpg)
So, I got nothing Anon except to say that this puts us halfway through Chapter 13 :^). Here's the next 4 example files from the set.
>version.log snippet
210916 - v0.2b
---------------
-add Rectangle fill color set/read testing
-add chapter.13.9-1.cpp
-add chapter.13.8-2.cpp
-add chapter.13.8-1.cpp
-add chapter.13.7.cpp
-patch overlooked 'ch13' dir for the g++ build instructions in meson.build
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2b.tar.xz.sha256sum
197c9dfe2c4c80efb77d5bd0ffbb464f0976a90d8051a4a61daede1aaf9d2e96 *B5_project-0.2b.tar.xz
As always, just rename the .pdf extension to .7z then extract files. Build instructions are in readme.txt .
>catbox backup file:
https://files.catbox.moe/zk1jx2.7z
Open file (30.39 KB 604x430 13.9-2.jpg)
Open file (30.38 KB 604x430 13.9-3.jpg)
Open file (27.18 KB 604x430 13.9-4.jpg)
Open file (100.14 KB 604x430 13.10-2.jpg)
The 13.10-1 example doesn't actually create any graphics display, so I'll skip ahead to the 13.10-2 example as the final one for this go. I kind of like that one too, since it shows how easy it is to create a palette of colors on-screen.
>version.log snippet
210917 - v0.2c
---------------
-add Line line style set/read testing
-add as_int() member function to Line_style
-add chapter.13.10-2.cpp
-add Vector_ref to Graph.h
-add chapter.13.10-1.cpp
-add chapter.13.9-4.cpp
-add chapter.13.9-3.cpp
-add chapter.13.9-2.cpp
-patch the (misguided) window re-labeling done in chapter.13.8-1.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2c.tar.xz.sha256sum
45d1b5b21a7b542effdd633017eec431e62e986298e24242f73f91aa5bacaf42 *B5_project-0.2c.tar.xz


Open file (33.29 KB 604x430 13.11.jpg)
Open file (38.23 KB 604x430 13.12.jpg)
Open file (35.40 KB 604x430 13.13.jpg)
Open file (23.72 KB 604x430 13.14.jpg)
Don't think things could really have gone any smoother on this one. I never even had to look at the library code itself once, just packaged up the 4 examples for us. Just one more post to go with this chapter.
>version.log snippet
210918 - v0.2d
---------------
-add chapter.13.14.cpp
-add chapter.13.13.cpp
-add chapter.13.12.cpp
-add chapter.13.11.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2d.tar.xz.sha256sum
5fbcf1808049e7723ab681b288e645de7c17b882abe471d0b6ef0e12dd2b9824 *B5_project-0.2d.tar.xz
As always, just rename the .pdf extension to .7z then extract files. Build instructions are in readme.txt .
>catbox seems to be down for me atm so no backup this time
n/a
>>13294 >catbox seems to be down for me atm so no backup this time It came back up. https://files.catbox.moe/ty7nqu.7z
Open file (18.38 KB 604x430 13.15.jpg)
Open file (45.99 KB 604x430 13.16.jpg)
Open file (206.94 KB 604x430 13.17.jpg)
OK, another one ticked off the list! :^) Things went pretty smoothly overall, except I realized that I had neglected to add an argument to the FLTK script call for the g++ lads. Patched up that little oversight, sorry Anons. This chapter has 24 example files, so about half again as large as Chapter 12 was.
The main graphic image in the last example (Hurricane Rita track) covers up the 'Next' button for the window, but it's actually still there. Just click on its normal spot and the window will close normally. There are only 3 examples for this go, so images are a little shy on the count for this post.
>version.log snippet
210918 - v0.2e
---------------
-add Font size set/read testing
-patch missing '--use-images' arg in g++ build instructions in meson.build
-add 2 image resources to ./assets/
-add chapter.13.17.cpp
-add chapter.13.16.cpp
-add chapter.13.15.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2e.tar.xz.sha256sum
6bd5c25d6ed996a86561e28deb0d54be37f3b8078ed574e80aec128d9e055a78 *B5_project-0.2e.tar.xz
As always, just rename the .pdf extension to .7z then extract files. Build instructions are in readme.txt .



Robot skin? Possible sensitivity? Robowaifu Technician 09/15/2019 (Sun) 07:38:17 No.242 [Reply]
The Anki VECTOR has a skin-like touch sensor on it; could we incorporate it into our robogirls?
20 posts and 9 images omitted.
>>4284
Yes, that's one of the great things about DIY Robot Wives. Individual garage labs can do prototyping (which is always more expensive than post-tooling production anyway) without rigid regard for pressures like schedules and budgets. It can become a labor of love.
>Even if they will cost 10-15k it's still worth it.
Actually, high-end robowaifus will probably cost about as much as high-end cars do. We can probably help that a bit, but you can bet Teslawaifu, Googlewaifu, Facebookwaifu, et al, will squeeze it for all they can get away with.
Open file (65.15 KB 1245x700 img.jpg)
I was going to mention this in the Robot Vision General thread, but I figured I'd resurrect this thread instead. I remembered seeing that someone made a proof-of-concept camera out of solar cells, but I can't find it anymore. It was greyscale, very low resolution, and probably had a lousy refresh rate, but it was a functioning digital camera. I loved this idea because if it's improved enough, you've got a digital camera that generates power instead of consuming it.
Then I remembered this post: >>3648 where light is used to detect deformation in skin for touch sensing. I had no interest in flexible solar panels before, but what if there were a flexible solar panel acting as a camera beneath a layer of skin to create a sense of touch? When it's not being touched or covered, the body is passively absorbing light and trickle-charging the battery. The LEDs could be visible if you really want, or they could be infrared (panels with good infrared sensitivity are better in winter), giving the body a thermal image similar to a real human body and making it warm to the touch. And some self-warming holes when you feel her from the inside. ;)
>>13235
I've often thought about the notion that at the very least we should add trickle-charging to various parts of our robowaifus' outer shell designs. I can't really think through some of the issues yet, though. But, I would be off-topic here since it's a Power issue. Your pic made me think of it. Sauce on that BTW?
>>13240
>But, I would be off-topic here since it's a Power issue.
I didn't really mean for it to be about power as much as about trying to create a passive-energy sense of touch using flexible materials.
>Your pic made me think of it. Sauce on that BTW?
I just did an image search for flexible solar panels, but it's: https://www.ecowatch.com/best-flexible-solar-panels-2654431234.html
>>13280 Thanks Anon.
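Since the thread is about sensitivity, here's a rough C++ sketch of how the panel-under-skin touch idea might be read out in software: treat each cell as a pixel, track a slowly-adapting baseline per cell, and call it a touch when a reading drops well below baseline (i.e., the cell is occluded). Note read_cell() is a hypothetical stand-in for whatever ADC interface a real panel would expose, and the thresholds are pure guesses; a sketch of the approach, nothing more.
[code]
#include <cstddef>
#include <vector>

// Hypothetical ADC readout of one flexible-panel segment, normalized to
// 0.0..1.0; stubbed here with a constant 'ambient light' value.
double read_cell(std::size_t /*i*/) { return 0.6; }

class TouchSkin {
    std::vector<double> baseline_;
    double alpha_;      // baseline adaptation rate (slow, tracks ambient light)
    double threshold_;  // fractional brightness drop that counts as a touch
public:
    TouchSkin(std::size_t cells, double alpha = 0.01, double threshold = 0.4)
        : baseline_(cells, 0.5), alpha_(alpha), threshold_(threshold) {}

    // Returns the indices of cells currently judged to be touched.
    std::vector<std::size_t> sample() {
        std::vector<std::size_t> touched;
        for (std::size_t i = 0; i < baseline_.size(); ++i) {
            double v = read_cell(i);
            if (v < baseline_[i] * (1.0 - threshold_))
                touched.push_back(i);  // occluded: likely a touch
            else  // adapt baseline only when untouched, so touches don't erode it
                baseline_[i] += alpha_ * (v - baseline_[i]);
        }
        return touched;
    }
};

int main() {
    TouchSkin skin(64);            // 64 cells of sensor skin
    auto touched = skin.sample();  // with the stub, nothing reads as touched
    return static_cast<int>(touched.size());
}
[/code]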

Open file (8.17 KB 224x225 index.jpg)
Wheelchair Waifus Robowaifu Technician 05/12/2020 (Tue) 00:38:56 No.2983 [Reply]
Ideally our robowaifus would be MORE able than normal humans, but we just aren't there yet. As we all know, there are lots of engineering challenges to building a full-sized robowaifu. One solution is to just build smaller waifus, as >>2666 suggested, but I am assuming most of us don't want a shortstack. My solution to the problems involved with balance, heating, and power requirements is just to give our prototype robowaifu crutches and a wheelchair.
It would be much easier for our robowaifu to walk with crutches than on her own two legs, and in the beginning she would probably only be able to walk short distances even with the crutches. An electric wheelchair would solve the issue of balance entirely. A wheelchair could also have a number of different systems and components mounted to it. Batteries, cooling fans, sensors, and processing units could all be mounted to the wheelchair, and we don't have to worry about the chair being pretty.
A cripplebot, while not an ideal final product, would be great for prototyping designs and systems that could be used for later designs, especially those relating to sensors, bot navigation, and movement. Our prototype could also be fully sized as well. What do you think /robowaifu/, should our first prototype be a cripple?
Edited last time by Chobitsu on 06/03/2020 (Wed) 06:21:29.
43 posts and 22 images omitted.
>>9066 ** Hyundai not Honda
>>9066 Imagine the Hummer EV with extensible, walking wheel struts. The little faun-legged (reverse knee) droid was walking along on uneven ground quite readily. Interesting. I wonder how many anons would be OK with a robowaifu that had weird legs, but could walk around just fine like that?
>>3038 Topkek. You're >GreentextAnon aren't you? :^)
I suggested a wheelchair-bound waifubot in a 4chan thread and ended up arguing with a guy who insisted it'd be better to have a robot arm mounted on a rail on the ceiling that connects to her upper back, so the arm can reduce the weight on the body and double as a power cable. I couldn't get him to understand that I don't want to remodel a whole goddamn house just for something that could easily be unnecessary in a few years.
>>13161
I think there are issues of personal aesthetics and taste that an anon can focus on to the near-exclusion of other design and engineering issues. Even really solid and well-established principles can be 'tossed aside' with abandon at times. I'd say just let it roll off your back like water off a duck, Anon. In the end, even if they have a good aesthetic going, no one 'outsmarts' the laws of physics. At least no one inside this universe does anyway, heh. :^)

I write books about worlds of waifus. Robowaifu Technician 09/09/2021 (Thu) 12:04:02 No.12956 [Reply]
Depending on interpretation, these waifus are fully functional and fully interactive AI. The appeal of women who both exist and are capable of love, while the failures of females are explicitly depicted, constitutes a significant portion of the character motivation. It drives an exodus of men from the dying legacy Clown World, with all the hilarity and insanity that follows, while based and redpilled men live their new lives in relative happiness. So am I in the right place?
19 posts and 7 images omitted.
So it seems this will be staying as a separate thread, which means I will do more with it later. Finally finishing the second book is of course a higher priority. Here, have an excerpt:
Though, speaking of storms… Masumi was moving quite quickly, Belas struggling to keep pace with her and still failing. When she saw me, she came running even faster, and at the first sound behind me, I barked, “Hold!” Hearing no further sounds, I declared, “Anyone who draws steel against them answers to us all, got it?” Though I didn’t look back, I heard the sounds of weapons being sheathed again.
<Is there a problem?> inquired the Kitsune, sensing the tension in the air, while Belas made quite the racket closing the rest of the distance. He wasn’t out of breath a bit, despite looking as though his armor was almost as heavy as I was, perhaps even as heavy as he was.
“There is,” I confirmed. “Walk and talk.”
>>13237
>So it seems this will be staying as a separate thread
Yes. Please don't do anything weird with it, and try to acclimate yourself to our culture here.
>>13239
I don't think anything in here would be considered weird by the standards of the posters. I mean, who wouldn't want Kitsune waifus? Or more broadly, who wouldn't want the premise of the book? I've had some get triggered that the beginning is "a half hour of whining about life on Earth", but this is LitRPG - read: premeditated isekai - and so the character's motivations for abandoning everything are important, especially in character-driven stories. The early pages make it very clear exactly what sort of dystopia is being escaped from. It's meant as a means of making my target audience self-insert as the MC, so that when you see his mindset improve after reaching a better world, despite it definitely not being perfect, perhaps the reader will dream of such a thing themselves. And then when the conflict starts? It is a conflict that everyone in my target audience can agree with. At least one person here, I believe, has read the book by now and can confirm.
Open file (82.78 KB 711x540 Flight_of_Dragons_08.jpg)
>>13244 Fair enough then, proceed Anon.
DAILY REMINDER: This isn't the Roastie Fear thread. Relocated.

(Robo)Waifu personality thread Robowaifu Technician 09/09/2019 (Mon) 05:26:21 No.18 [Reply] [Last]
Is she going to be tsundere? Deredere? Yandere, or a combination? How would you code your waifu's personality?

Where do you draw inspiration from and can personality even be classified and successfully coded into AI?
61 posts and 30 images omitted.
Open file (651.05 KB 1080x1920 86358445_p0.jpg)
Open file (17.50 KB 480x400 gpt2-large.png)
I scraped a ton of questions off the web intended for couples and friends getting to know each other, because they're a fun way to explore shaping my robowaifu's personality. Perhaps someone else will find them fun or useful. I think some of the answers could be pooled and used for creating a robowaifu common-sense dataset for responding to questions correctly. For example, "Would you open a joint bank account with me?" could be answered with something like, "Don't be ridiculous. Would you open a joint bank account with your car?"
https://files.catbox.moe/6webl5.xz
Also includes an open-ended political compass question set. Ask these and find your robowaifu's political leaning. Unsurprisingly GPT-2 is a fence-sitter, but slightly right-leaning.
>>12299
Hey, this is a really good idea IMO Anon. I bet there are mountains of this kind of thing. I'd say (as you suggest indirectly) that most of these are written by women, so you'd have to filter out question-begging shit intended solely to enforce woman-'logic' on the participant. But otherwise, with a little care, this could work well.
>For example, "Would you open a joint bank account with me?" could be answered with something like, "Don't be ridiculous. Would you open a joint bank account with your car?"
<"As for myself, I'm just a piece of equipment, so it's not a problem Oniichan!"
>>18 Oh boy, I already mentioned it in two other threads, but I might as well mention it here too. Her AI would basically just adapt her behavior to maximize my happiness with how I interact with her, so coding a personality would be unnecessary >>13160 although after thinking about it for a while, she might end up being something of a cuckquean eager to help me pork other women >>13169
>>13172
>Her AI would basically just adapt her behavior to maximize my happiness
That's still some kind of personality. Especially if you had more than one, you might want them to be different from each other. Some more funny or upbeat, maybe another one more sarcastic or quiet. Idk; either way, thinking about personalities still makes sense.
>>13185
Yes, it'd still be a personality, but it'd be an "emergent behavior" instead of something you'd have to hard-code to act the exact way you want. Trying to figure out how to create a personality that works exactly the way you think it should in just about any situation the AI could be in sounds like a logistics nightmare. Making it so the AI can rapidly adapt to approximate what you want seems like the best option. And you've got to understand that I'm talking about long-term overall happiness, not just trying to keep a constant grin on your face, because that'd inevitably fail as hard as telling the same joke on a broken record.
Let's say you had a harem of different waifubots all based on fictional characters you're familiar with (or at least think you're familiar with). There's now the added factor that they're in the real world instead of their own fictional universe, interacting with you and others they might normally never meet. If you could somehow gauge and replicate the personalities of those fictional characters with perfect accuracy, their interactions might end up being an insufferable nightmare where the girls just constantly argue with you and each other. Or maybe it bores you with just how well they get along together and you want to see them fight a little.
Figuring out how to mimic the personalities of characters/people is a giant hurdle that might be impossible to clear accurately. The closest I can think of is Replika.Ai, which is a chatbot that's supposed to mimic the person it's chatting with; so if you want an AI that behaves like you, or maybe a character you roleplay as, it could work, but it still has to learn the behavior. Even if you could just determine a personality by setting some different values for personality traits, you'd still probably need fine-tuning just to get it right, or as close to right as you can stand.
Now imagine the harem were instead all just programmed with that exact same basic happiness-maximizing behavior, but only their voices, appearances, names and some other bits of trivia (Aisha saying she is a Ctarl-Ctarl, not a human nor a robot) are based on those fictional characters. They'd learn they have to behave differently from each other to do it. They'd eventually learn to behave the way you want them to behave, because it'd make you happier to see them in-character instead of out-of-character compared to what you want/expect of them, not necessarily the way they would 'really' be. The biggest hurdle with this is if you just expect them to be perfectly in-character as soon as you boot them up, or the AI just learning the behavior too slowly to tolerate. I suppose the best compromise would be to try to create a default personality for each of them, so they have something to work with out of the starting gate, and then adjust their behavior.
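That 'emergent personality' idea maps pretty naturally onto a multi-armed bandit, for what it's worth. Here's a minimal epsilon-greedy sketch in C++: the waifu tries behaviors, you (somehow) score how happy each one made you, and she gradually favors what actually works instead of following a hard-coded script. The behavior names and the happiness signal are placeholders of mine; a real system would need a far richer state and feedback model than this toy.
[code]
#include <random>
#include <string>
#include <vector>

struct Arm {
    std::string behavior;
    double mean_reward = 0.0;
    int pulls = 0;
};

class PersonalityLearner {
    std::vector<Arm> arms_;
    double epsilon_;  // exploration rate: how often to try something new
    std::mt19937 rng_{std::random_device{}()};
public:
    PersonalityLearner(std::vector<std::string> behaviors, double epsilon = 0.1)
        : epsilon_(epsilon) {
        for (auto& b : behaviors) arms_.push_back({std::move(b)});
    }

    // Usually exploit the best-known behavior; occasionally explore.
    std::size_t choose() {
        std::uniform_real_distribution<> coin(0.0, 1.0);
        if (coin(rng_) < epsilon_) {
            std::uniform_int_distribution<std::size_t> pick(0, arms_.size() - 1);
            return pick(rng_);
        }
        std::size_t best = 0;
        for (std::size_t i = 1; i < arms_.size(); ++i)
            if (arms_[i].mean_reward > arms_[best].mean_reward) best = i;
        return best;
    }

    // Incremental mean update after observing the master's happiness signal.
    void feedback(std::size_t arm, double reward) {
        auto& a = arms_[arm];
        ++a.pulls;
        a.mean_reward += (reward - a.mean_reward) / a.pulls;
    }
};

int main() {
    PersonalityLearner ai({"teasing", "gentle", "deadpan"});
    for (int day = 0; day < 100; ++day) {
        auto pick = ai.choose();
        double happiness = (pick == 1) ? 1.0 : 0.2;  // you happen to like 'gentle'
        ai.feedback(pick, happiness);                // so she converges on it
    }
}
[/code]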

Self-driving cars AI + hardware Robowaifu Technician 09/11/2019 (Wed) 07:13:28 No.112 [Reply]
Obviously the AI and hardware needed to run an autonomous gynoid robot is going to be much more complicated than that required to drive an autonomous car, but there are at least some similarities, and the cars are very nearly here now. There are also several similarities between the automobile design, production and sales industries and what I envision will be their counterparts in the 'Companion Robot' industries. Practically every single advance in self-driving cars will eventually have important ramifications for the development and production of Robowaifus.

ITT: post ideas and news about self-driving cars and the hardware and software that makes them possible. Also discuss the technical, regulatory, and social challenges ahead for them. Please keep in mind this is the /robowaifu/ board; any insights about how you think these topics may cross over and apply here would also be welcome.

https://www.nvidia.com/object/drive-px.html
15 posts and 12 images omitted.
<"...we arrive at a combined torque rating at the shaft of around 1,000 lb-ft, or 1,356 Newton-meters." >ywn a robowaifu monster super truck I just want my RoboHummerEVWaifu! Is that too much to ask? https://www.motor1.com/news/450217/gmc-hummer-ev-torque/
Wow, what a huge difference they've made at Tesla since this thread was first made back in the day on 8ch. Has anyone else seen the demonstrations of the vector-space renderings of the FSD neural nets on the Tesla? I was skeptical, but they really are beginning to emulate the human perceive/respond cycle, and according to them at about 30Hz.
Open file (1.15 MB 930x1478 1619378083486.png)
I'll admit that I don't really know shit about self-driving cars or AI, but I keep thinking about this, so I might as well dump it here. It occurred to me that the safest way to drive (not the most efficient or most convenient) would be to assume that everything around the car could become completely stationary at any moment. In other words, if it were driving 80 mph on a highway and there's a car visible in front of it, assume that car could instantly stop as if it hit a wall, and comfortably decelerate to prevent hitting it long before that becomes a risk. The closer it gets to something it's driving towards, the slower it has to go; driving away from something, it can accelerate as fast as the driver is comfortable with, until it starts approaching something else. If it were parking, then stopping just shy of touching a wall would be ideal, but while driving it's best to at least keep enough distance to drive around the car in front.
Perpendicular movement is tricky, since cars can easily pass each other in opposite lanes inches apart without accidents being common. But just the same, if the car were driving alongside a wall and something walked out from a doorway in that wall, it could be immediately in front of the car without warning, so the safest behavior is simply to drive slower. Driving perpendicular to something is then no different from driving towards it, especially on a winding road where you never know what's around the corner. Obstacle avoidance could be based on whatever direction the car can safely move in the fastest. I think you could even apply the same logic to flying and higher speeds at higher altitudes, although with regular cars you'd need to slow or steer as it approaches potholes or ditches on the side of the road. Or maybe I'm just a retard with Dunning–Kruger effect and driving safely is really a lot more complicated than that.
Regardless, I'd love to see a simple simulation with a bunch of cars driving around following that simple logic. Even though perfect omnidirectional vision isn't realistic, I think the real-world hardware would amount to cameras on the bumpers and sides of the car, where the closer it gets to anything, the lowest value determines the max speed the car can go. There'd need to be a lot more added before it'd be anything more than a glorified always-on parking assist. This kind of driving behavior would only really be safe if every other car on the road drove the same way, since humans are reckless and impatient assholes, but flashing your blinkers at lower speeds could be enough to help occasionally.
>>13176
Lol, that would be worse than my Grandma's driving honestly. Such hyper-timidity would create countless automobile accidents and be directly responsible for a massive rise in roadway deaths. Society on the highways simply wouldn't function with widespread adoption of such behavior IMO. Tesla actually deals with exactly the kind of concerns you brought up Anon (and many others as well). Maybe give their Tesla AI Day video a go?
>>13204
The movement speed perpendicular to objects probably needs tuning due to things like tunnels and guard rails, but otherwise I think it could work great. That hyper-timid driving style might seem excessive, but you've got to consider that the biggest concern people seem to have with autonomous cars is safety. And if it's really an issue, I guess it could still be made to decelerate safely but not necessarily comfortably, so the gap between cars can be smaller.
There are some places and times where you can go straight on a highway for hours without ever seeing another car, and then there's rush hour in New York, where bumper-to-bumper traffic keeps anyone from moving. In the former case, the timidness isn't really a significant negative and would mostly just stop the car from hitting a deer or anything else that wanders onto the road, hence my analogy of something popping out from a wall. But it would actually help alleviate traffic jams, since the response to changes in the speeds of the surrounding cars could be very quick, and if a significant number of cars followed this method, there'd be a large group of cars all slowly creeping along while leaving enough room to pass and merge lanes, instead of idling, randomly accelerating/decelerating, and people trying to figure out how to merge. Traffic would be slow, but it would stay moving efficiently.
I think this video does a good job of showing the problem it'd solve: https://www.youtube.com/watch?v=iHzzSao6ypE and at the chicken-crossing-the-road part at 1:08, if you think of it as there being no road, just cars going in a straight line, the cars would slow as the chicken approaches to cross and speed up again as the chicken leaves. I think it could solve the problem without the cars needing to communicate with each other or eliminating human drivers entirely. The consistent driving behavior would keep the cars "in the middle" without needing to consider the cars behind them.
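The math behind that 'assume everything could stop instantly' rule is just stopping-distance kinematics: from v^2 = 2ad, the fastest speed that still lets you brake to rest within distance d at a comfortable deceleration a is v_max = sqrt(2ad). A tiny C++ sketch, with illustrative (untuned) numbers of my own:
[code]
#include <algorithm>
#include <cmath>
#include <iostream>

// Maximum safe speed (m/s) given the distance (m) to the nearest obstacle in
// the direction of travel and a comfortable deceleration (m/s^2), keeping a
// small standoff margin so we never actually touch the obstacle.
double max_safe_speed(double distance_m, double decel_mps2 = 2.5,
                      double margin_m = 2.0) {
    double usable = std::max(0.0, distance_m - margin_m);
    return std::sqrt(2.0 * decel_mps2 * usable);  // v = sqrt(2*a*d)
}

int main() {
    // A car 100 m behind a (possibly wall-like) obstacle:
    std::cout << max_safe_speed(100.0) << " m/s\n";  // ~22 m/s, about 80 km/h
}
[/code]
With a = 2.5 m/s^2 and 100 m of clearance, that works out to roughly 22 m/s (about 80 km/h), which shows why the style reads as timid: matching real highway speeds under this rule takes either long sight distances or harder permitted braking.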

Open file (12.18 KB 480x360 0.jpg)
Robot Voices Robowaifu Technician 09/12/2019 (Thu) 03:09:38 No.156 [Reply]
What are the best-sounding female robotic voices available? I want something that doesn't kill the boner. I think Siri has an okay voice, but I'm not sure it would be available for my project.

https://www.invidio.us/watch?v=l_4aAbAUoxk
30 posts and 4 images omitted.
>>1239 >>1240
Not a local, but I'm wondering if a current tool like MycroftAI (the virtual assistant) can currently pipe its output text through the fifteenAI API to make a character voice. I haven't used fifteenai or Mycroft yet, but I suspect you could make a half-decent Twilight home assistant now with a RaspPi and a plushie.
>>9092 >but I suspect you could make a half-decent Twilight home assistant now with a RaspPi and a plushie. I suspect you can now, yes. And with the further info here on /robowaifu/ she could even move at least a bit. Just search around in the Speech Synthesis general bread >>199 and you could get some ideas.
>>1246 BTW Anon, just in case you're not RobowaifuDev, I wanted to let you know that he actually did it (your idea that is, >>9121). Just in case you missed it.
Open file (128.13 KB 650x366 download.jpg)
I was thinking of using something like FastPitch, but with an added effect to make it sound more robotic, keeping it out of the uncanny valley. Either that, or making the voice kinda high-pitched and childlike, to make it easier to accept when it says something stupid. Has anyone here considered hardware-based speech synthesis so it'll actually sync up with mouth movements? Everything professional I've seen just seems like horrid screaming fleshlights that never really try to resemble actual heads.
>>13164
>Has anyone here considered hardware-based speech synthesis so it'll actually sync up with mouth movements?
Voice modulation, but not complete synthesis. I don't know how yet, though. Your picrel is what I knew about; I posted some related video before. However, I was thinking about small internal speakers (mini maze speakers?) with additional silicone parts that could move and change the voice that way. Nothing specific yet.
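On the 'make it sound more robotic' point: one classic trick is ring modulation, i.e., multiplying the speech signal by a low-frequency sine carrier (the old Dalek effect). Here's a minimal C++ sketch over raw float PCM; audio file I/O is out of scope, and the carrier/mix defaults are just starting guesses of mine. Hypothetically it could run as a post-process on FastPitch output to mask uncanny-valley artifacts.
[code]
#include <cmath>
#include <cstddef>
#include <vector>

// Ring-modulate mono PCM samples in place. A carrier around 30-50 Hz gives a
// gentle metallic shimmer; higher values sound more overtly synthetic.
void ring_modulate(std::vector<float>& samples, float sample_rate_hz,
                   float carrier_hz = 40.0f, float mix = 0.7f) {
    const float two_pi = 6.2831853f;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        float t = static_cast<float>(i) / sample_rate_hz;
        float carrier = std::sin(two_pi * carrier_hz * t);
        // Blend dry and modulated signal so the speech stays intelligible.
        samples[i] = (1.0f - mix) * samples[i] + mix * samples[i] * carrier;
    }
}

int main() {
    std::vector<float> tone(48000);  // 1 s of a 220 Hz tone as a voice stand-in
    for (std::size_t i = 0; i < tone.size(); ++i)
        tone[i] = std::sin(6.2831853f * 220.0f * static_cast<float>(i) / 48000.0f);
    ring_modulate(tone, 48000.0f);   // tone now carries the 'robot' tremor
}
[/code]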
