/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.


Datasets for Training AI Robowaifu Technician 04/09/2020 (Thu) 21:36:12 No.2300 [Reply] [Last]
Training AI and robowaifus requires immense amounts of data. It'd be useful to curate books and datasets to feed into our models, or possibly build our own corpora to train on. The quality of data is really important. Garbage in, garbage out. The GPT2 pre-trained models, for example, are riddled with 'Advertisement' after paragraphs. Perhaps we can also discuss and share scripts for cleaning and preparing data here, and anything else related to datasets.
To start, here are some large datasets I've found useful for training chatbots:
>The Stanford Question Answering Dataset
https://rajpurkar.github.io/SQuAD-explorer/
>Amazon QA
http://jmcauley.ucsd.edu/data/amazon/qa/
>WikiText-103
https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/
>Arxiv Data from 24,000+ papers
https://www.kaggle.com/neelshah18/arxivdataset
>NIPS papers
https://www.kaggle.com/benhamner/nips-papers
>Frontiers in Neuroscience Journal Articles
https://www.kaggle.com/markoarezina/frontiers-in-neuroscience-articles
>Ubuntu Dialogue Corpus
https://www.kaggle.com/rtatman/ubuntu-dialogue-corpus
>4plebs.org data dump
https://archive.org/details/4plebs-org-data-dump-2020-01
>The Movie Dialog Corpus
https://www.kaggle.com/Cornell-University/movie-dialog-corpus
>Common Crawl
https://commoncrawl.org/the-data/
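Since OP mentions GPT2 scrapes being riddled with 'Advertisement' artifacts, here's a minimal Python cleaning sketch to get the script-sharing started. The exact pattern is an assumption; tune it against whatever corpus you're actually scrubbing:

```python
import re

# The stray ad-marker line seen in GPT-2-era web scrapes.
# This exact pattern is an assumption, not a canonical list.
AD_LINE = re.compile(r"^\s*Advertisement\s*$")

def clean_text(text: str) -> str:
    """Drop 'Advertisement' lines, then collapse runs of blank lines."""
    lines = [ln for ln in text.splitlines() if not AD_LINE.match(ln)]
    out, prev_blank = [], False
    for ln in lines:
        if ln.strip():
            out.append(ln)
            prev_blank = False
        elif not prev_blank:  # keep at most one blank line in a row
            out.append("")
            prev_blank = True
    return "\n".join(out)
```

Run it over each document before tokenizing; it drops the ad-marker lines and collapses the blank runs they leave behind.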
A finetuned GPT2-1.5B model beats Alpaca-7B using a 700 MB dataset of generated instructions and responses from ChatGPT. Next to come will be metalearning via filtering training data to maximize performance. Combined with external memory we should be able to generate datasets even better than ChatGPT. Things are starting to get interesting.
Dataset: https://huggingface.co/datasets/MBZUAI/LaMini-instruction (parquet format, I recommend using pyarrow)
Models: https://github.com/mbzuai-nlp/LaMini-LM
Paper: https://arxiv.org/abs/2304.14402
tl;dr more high-quality data with greater variety is all you need
>We introduce DataComp-1B, a dataset created by applying a simple filtering algorithm to the 12.8B candidate pool. The resulting 1.4B subset enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet.
>Our new ViT-L/14 model outperforms a larger ViT-g/14 trained on LAION-2B by 0.7 percentage points while requiring 9x less training compute.
Code to build dataset: https://github.com/mlfoundations/datacomp
Paper: https://arxiv.org/abs/2304.14108
Website: https://www.datacomp.ai/
They could do much better but this is a start.
>Synthetic Data from Diffusion Models Improves ImageNet Classification
Paper: https://arxiv.org/abs/2304.08466
Datasets are shrinking. Loss functions suddenly dropping. Better synthesized data piling. Are you ready to foom?
>>22215 >>22216 Very encouraging Anon, thanks! :^)
Even more instruction tuning data, this time with responses generated by GPT-4. A 7B LLaMA model finetuned with this dataset greatly outperforms Alpaca and is competitive with GPT-4 when rated by human evaluators on helpfulness. https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
>>22240 Wow. This could be a big deal sounds like. Please keep us up to date with your research on it Anon! :^)

Robowaifu Propaganda and Recruitment Robowaifu Technician 05/07/2020 (Thu) 05:39:42 No.2705 [Reply] [Last]
Attention drawfags and writefags! Your skills will be needed. The task of building and designing a robowaifu is a herculean quest. As great as this community is, /robowaifu/ simply does not have the manpower to reach our goal of DIY robowaifus. Luckily for us, there are several communities on the internet where we could find new recruits or allies to help us build our waifus:
MGTOW - These guys know all about the legal pitfalls of marriage and the dangers of feminism. There is already a lot of talk about sex robots in MGTOW communities. It shouldn't be a hard sell to get them to come here.
Incels - Guys that can't get laid. The opportunity for love and companionship should be enough to bring some of these guys over. We need to be careful when recruiting from some of their communities; we don't want to attract negative attention.
Monster girl/furry/mlp fandoms - The only way these guys are going to be able to have their harpy/elf/goblin/anthro/pony/whatever gf is with a robowaifu. They have an interest in seeing us succeed. Again, we need to be careful here not to attract the wrong kind of people that will bring us the wrong kind of attention.
Male STEM students - Generally these guys aren't going to get laid until after they have established themselves. A robowaifu could really help them. This may be a harder sell because many of them have been brainwashed in university, but they have skills that we could really use.
Transhumanists/biohackers - Many of the technologies involved in building a robowaifu could be used in transhumanist or biohacking applications, such as building an avatar. They may have some interest in helping us out. We will need to be careful which transhumanist communities we go after, as many of them are full of feminism, tumblr-tier sexualities and genders, and SJWs.
Cyberpunks and technophiles - These guys (and they are usually guys) are all-around into technology and may just enjoy working on the kinds of projects we need to do. They are often into programming and AI.
>>20312 I thought ads on 4chan were like $20 for a month or something ridiculously low. Schizos are advertising their youtube channels now.
>>20321 Perhaps, but they didn't originate here AFAICT.
>>20334 >I thought ads on 4chan were like $20 for a month or something ridiculously low. Sounds great! If you take some out, please let us all see the ads themselves first, Anon.
I'd like to request the board come up with some materials we can use abroad during the lead-up week for our upcoming local C++ Classroom (>>19777). The plan is -- all things being equal -- to begin our classes here on May 6th. So if you can have something ready before next Monday, then we'd all have a full workweek to use them in promoting our programming class to other boards on the Internet. TIA, /robowaifu/! :^)
>>22169 I thought everyone just used Midjourney or Stable Diffusion for stuff like that nowadays.

Robowaifu Media Propaganda and Merchandizing Anon 01/29/2023 (Sun) 22:15:50 No.19295 [Reply]
That Time I Incarnated My Christian Cat Girl Maid Wife in a Short Dress
Together we can create the robowaifu-verse! Let's start making media to further our cause. There's tremendous potential for us to reach a wide market and prime minds to accept their future cat grill meidos in tiny miniskirts. We have text-based stories floating around, but audio and visuals are paramount to gain a general audience.
I will start with a short about a young man building a small cat girl and learning to understand her love. Importantly, she will be very limited, in the same way our first models will be, to set certain expectations. Mahoro is impossible, but the slow and simple robowaifus that can exist have their own charm. Let's show the world the love that will soon exist.
---
>thread-related
Robowaifu Propaganda and Recruitment (>>2705)
>>20957 >animating a face that's extremely simple (i.e. linework), yet also cute and sufficiently feminine (i.e. linework with eyelashes and blushing capabilities). Eve from Wall-E may be nearly the perfect icon of this idea. Her 'face' is just a pair of animated eyes.
>>20972
>Bigger ears can be an option.
I'd suggest a bit rounder-looking tbh. 'Maximum-Fluff' should be our motto here! :^)
>Keeping it simple allows for our minds to fill in the gaps, she becomes easier to map dreams onto.
Very insightful, Kiwi. This.
>>20972
>Could you provide some examples?
Here's my shitty illustration of what that'd look like. I've never seen this done before, so I can't give you any real examples; it's just a concept I thought up on the spot.
>an Android phone with enough power to handle object recognition... etcetera
Ah, I was operating under the assumption that the bulk of her processing power would be handled elsewhere, like a torso-mounted SBC or laptop board. If you intend to run that stuff entirely from a phone, then that changes things. I don't know much about engineering, but I assume it'd be better to have as little weight in the head as possible, and modern smartphones tend to have a bit of heft to them. As far as screens go, you can get some pretty cheap Chinese ones these days that can run off of USB. I don't really know the general specs of your planned waifu (namely, size and power supply), but it might be worth looking into.
>>20974 Seconded, you could poke your eyes out with those sharp corners!
>>20976 I've decided after some testing to just use a head that's compatible with human child accessories. This way we can rely on things that already exist. Back to making a basic model for the commercial.
>>22131
>This way we can rely on things that already exist.
IIRC we had a varied discussion back in the day when we were first trying to hash out the right sizes for our initial robowaifu prototypes. Dollfan (and probably other anons) mentioned this point specifically for clothing options. The stuff is expensive new, but go to a Goodwill-like place somewhere in the 'nice' parts of town and you should be able to pick up girls' clothing items relatively inexpensively. Just don't be like Hideki in episode 4 and forget the local convenience store option! :^)

Actuators For Waifu Movement Part 2 Waifu Boogaloo Kiwi 09/02/2021 (Thu) 05:30:48 No.12810 [Reply] [Last]
(Original thread >>406)
Kiwi back from the dead with a thread for the discussion of actuators that move your waifu! Part Two! Let's start with a quick refresher!
1. DC motors: these use a rotating magnetic field created through commutation to rotate a rotor! They're one of the cheapest options and are usually 30 to 70 percent efficient. The bigger they are, the more efficient they tend to be.
2. Brushless motors: these use a controller to induce a rotating magnetic field by turning electromagnets on and off in a sequence. They trend 60 to 95 percent efficiency.
3. AC motors: though there are many different types, they function similarly to brushless motors; they simply rely on the AC electricity to turn their electromagnets on and off to generate their field. Anywhere from 15 to 95 percent efficiency.
4. Stepper motors: brushless motors with ferrous teeth to focus magnetic flux. This allows for incredible control at the cost of greater mass and lower torque at higher speeds. Usually 50 to 80 percent efficient, but this depends on the control algorithm, speed, and quality of the stepper.
5. Coiled nylon actuators! These things have an efficiency rating so low it's best to just say they aren't efficient. What they are though is dirt cheap and easy as heck to make! Don't even think about them, I did and it was awful.
6. Hydraulics! These rely on the distribution of pressure in a working liquid to move things like pistons. Though popular in large-scale industry, their ability to be used in waifus has yet to be proven. (Boston Dynamics' Atlas runs on hydraulics, but it's a power guzzler and heavy.)
7. Pneumatics: hydraulics' lighter sister! This time the fluid is air! This has the advantage in weight. They aren't capable of the same power loads hydraulics are, but who wants their waifu to bench press a car?
8. Wax motors: hydraulic systems where the working fluid is expanding melted paraffin wax! Cheap, low power, efficient, and produce incredible torque! Too bad they're slow and hard to control.
9. Explosion! Yes, you can move things through explosions! Gas engines work through explosions! Artificial muscles can be made by exploding a hydrogen and oxygen mixture in a piston, then using electrolysis to turn the water back into hydrogen and oxygen. None of this is efficient or practical, but it's vital we keep our minds open.
Though there are more actuators, most are derivatives or use these examples to work. Things like pulleys need an actuator to move them. Now, let's share, learn, and get our waifus moving!
>---
< add'l, related links from Anon:


>>22111 Let me see if I understand your concept, Grommet. Are you saying that the core decides the next stepwise action for (say) a limb, calculates a B-spline to direct it to the FK target, and sends only that signal data across the network?
>>22113 Sort of, but not quite. The peripheral controllers only get directions from the brain and then calculate how to move there, except if they hit something, as you and others have discussed. That being said, I'm working with a rough idea here, so there could be changes.
On the macro level, this sort of framework could start out with a rough, not-so-good movement and then, as the waifu moves, send correcting end-position, direction, and velocity vectors. This gives you quick movement but allows finer control as it performs whatever movement is required: fast computation of gross movement, but also good control, because it has time to "think" over the movement period. It also allows us to change our programming or functions to get better as we learn what works, while providing a fixed framework to start with. All the microcontrollers and end-actuator controls have a fixed set of instructions which can be fine-tuned by the main processor, while only using a small set of instructions. Rough progress first, then further refinement as you learn and upgrade.
I made up the framework, but got the idea of refining the movement as the waifu moves from a paper where they used it with success, and also from a general sort of thinking about wavelets for signal processing. Wavelets, I think, are generally a signal-processing way of adding tiny, more specific waveforms on top of larger gross waveforms to represent complicated signals. And they tend to be fast. I'm not saying I know how to program all of this, or even yet how to set up the set of equations, but I do know about some of these things generally, and know that this sort of computing has worked on other tasks. So I would be a long way off from making this happen, but it can't hurt to start with what works.
On the big-picture level, the main brain sees the world for the waifu to move in and tells the peripherals how to move and where to go. They don't understand the world, only how to move their one piece of it. On the other hand, by giving a complex instruction to the peripherals instead of minute-by-minute control from the brain, you conserve network bandwidth, which will likely be low with the cheap parts we will use (1 Mbit/s with CAN bus 2.0). I really like CAN bus because it seems most of the problems we need to solve have already been solved for it, and it's everywhere for all sorts of stuff. Further thought: you could have a separate CAN bus for each limb plus the head/backbone (5), or maybe one for the head alone for facial movements (6). https://www.autopi.io/blog/can-bus-explained/
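To put a number on the bandwidth savings Grommet describes: a single hypothetical endpoint-correction command (3-axis target plus speed, packed as int16 millimetres and mm/s) fits exactly in one classic CAN 2.0 8-byte data field. The layout here is just an illustration of the idea, not a proposed standard:

```python
import struct

def pack_limb_command(x_mm, y_mm, z_mm, speed_mm_s):
    """Pack an endpoint target into one 8-byte CAN 2.0 payload.
    '<4h' = four little-endian int16s; this layout is an assumption
    for illustration, not a real protocol."""
    return struct.pack("<4h", x_mm, y_mm, z_mm, speed_mm_s)

def unpack_limb_command(payload):
    """Inverse of pack_limb_command, for the limb-side microcontroller."""
    return struct.unpack("<4h", payload)

frame = pack_limb_command(250, -40, 1100, 300)
print(len(frame))  # 8 bytes: fits a classic CAN data field exactly
```

So one correcting vector per limb endpoint costs a single frame; the brain can stream these at whatever rate the bus allows while the peripheral interpolates in between.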


>>22119 Thinking a little more, maybe a B-spline is not the most efficient. There are other curve-generation functions. https://en.wikipedia.org/wiki/Non-uniform_rational_B-spline
Maybe the lowest-bandwidth set of commands is to do the wrist and foot as I said, and then send a set of fingertip and toe-tip positions, velocities, etc. for each fingertip and toe tip. I believe that the wrist point (actually the point where the forearm meets the wrist) and the foot point (actually where the lower leg meets the foot), combined with toe-tip and fingertip points referenced to them, will give you low bandwidth but precise control. Move your hand around while holding your forearm: you'll see that if you specify where the fingertips go, the rest of the wrist and fingers move to make the tips go where you want. I actually should start saying the forearm end and lower-leg end; 'wrist' and 'foot' are not correct terms for gross limb movement. I hope I didn't confuse anyone with the incorrect nomenclature. I'll start using the phrases 'leg end' and 'arm end' points: the foot connects to the leg end, and the wrist connects to the arm end.
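For what it's worth, a full NURBS evaluator is probably overkill on a peripheral microcontroller; a plain cubic Bezier evaluated with De Casteljau's algorithm gives a smooth per-axis path from just four control points. A quick sketch (one axis only; the control values are arbitrary examples):

```python
def decasteljau(points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation between control points (De Casteljau)."""
    pts = list(points)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# One axis of a fingertip move: starts at 0 mm, ends at 10 mm.
ctrl = [0.0, 2.0, 8.0, 10.0]
print(decasteljau(ctrl, 0.5))  # midpoint of this symmetric curve: 5.0
```

The brain would send the four control values per axis; the limb controller sweeps t from 0 to 1 at whatever rate the move requires, so the whole trajectory costs only the control points' worth of bandwidth.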
>>22120 I'd better say this again to be super specific; most will understand right away, so bear with me for those who won't. I made a mistake and was not precise. When you say you're moving your foot or hand, it can have two meanings. One: you're moving the whole foot, which actually means moving the lower-leg end point. But you could also mean "articulating the foot", which in fact means moving the toe tips. So if I said move the foot, I really meant the lower leg; if I'm talking about "articulating" the foot, I mean the toe tips. Same for the hand, whose gross position means the end point of the arm meeting the wrist.
CAN bus on the ESP32 at 125 kbps (the default) should be enough for the network if you divide each limb and the head onto different busses. "...The ESP32 has an integrated CAN controller and therefore doesn't need an external controller necessarily. You only need to specify the RX and TX pins. Any GPIO will work..." https://esphome.io/components/canbus.html
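Some rough arithmetic on that 125 kbps default. The ~47-bit frame overhead is the standard CAN 2.0A figure for an 11-bit-ID frame; the bit-stuffing allowance is a worst-case-ish guess:

```python
# Classic CAN 2.0A frame carrying 8 data bytes:
# ~47 bits of overhead (SOF, ID, control, CRC, ACK, EOF, IFS) + 64 data bits.
payload_bits = 8 * 8
overhead_bits = 47
frame_bits = payload_bits + overhead_bits      # 111 bits before stuffing
stuffed_bits = int(frame_bits * 1.2)           # ~20% stuffing allowance (estimate)

bitrate = 125_000  # ESP32 default from the esphome docs quoted above
frames_per_sec = bitrate // stuffed_bits
print(frames_per_sec)  # on the order of 900+ full frames per second
```

At roughly 900+ frames per second, a 50 Hz command rate per joint leaves room for around 18 joints' worth of commands on one bus, which supports the one-bus-per-limb idea upthread.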

R&D General Robowaifu Technician 09/10/2019 (Tue) 06:58:26 No.83 [Reply] [Last]
This is a thread to discuss smaller waifu building problems, solutions, proposals and questions that don't warrant a thread. Keep it technical. I'll start.

Liquid battery and cooling in one
Having a single "artificial blood" system for liquid cooling and power storage would eliminate the need for a vulnerable solid state battery, eliminate the need for a separate cooling system, and solve the problem of extending those systems to extremities.
I have heard of flow batteries, you'd just need to use a pair of liquids that's safe enough and not too sensitive to changes in temperature.
This one looks like it fits the bill. The downside is that your waifu would essentially be running on herbicide. (though from what I gather, it's in soluble salt form and thus less dangerous than the usual variety)

How close are we to creating artificial muscles? And what's the second best option?
Muscles are perfect at what they do; they're powerful, compact, efficient, they carry their own weight, they aren't dependent on remote parts of the system, they can be controlled precisely, and they can perform many roles depending on their layout alone.
We could grow actual organic muscles for this purpose already but that's just fucking gross, and you'd need a lot of extra bloat to maintain them.
What we need are strands of whatever that can contract using electrical energy. Piezo does the trick at small scales, but would it be enough to match the real thing? There have been attempts, but nothing concrete so far.
What are some examples of technology that one could currently use instead?

High level and low level intelligence emulation
I've noticed a pattern in programs that emulate other computing hardware.
The first emulators that do the job at acceptable speeds are always the ones that use hacks and shortcuts to get the job done.
It comes down to a tradeoff. Analyzing and recompiling or reinterpreting the code itself on a more abstract level will introduce errors, but it is an order of magnitude more efficient than simulating every part of the circuitry down to each cycle. This is why a relatively high-level emulator of a 6th-gen video game console has system requirements close to those of a cycle-accurate emulator of the SNES.
Now, I want to present an analogy here. If training neural networks for every damn thing and trying to blindly replicate an organic system is akin to accurately emulating every logic gate in a circuit, what are some shortcuts we could take?
It is commonly repeated that a human brain has immense computing power, but this assumption is based just on the number of neurons observed, and it's likely that many of them have nothing to do with intelligence or consciousness. If we trim those, the estimated computing power drops to a more reasonable level. In addition, our computers just aren't built for doing things the way neural systems do. They're better at some things, and worse at others. If we can do something in a digital way instead of trying to simulate an analog circuit doing the same thing, that's more computing power we can save, possibly bridging the gap way earlier than we expected to.
The most obvious way to handle this would be doing as many mundane processing and hardware control tasks as possible in an optimized, digital way, and then using a GPU or another kind of circuit altogether to handle the magical "frontal lobe" part, so to speak.
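To make the "trim the estimate" argument concrete, here's the back-of-envelope version. Every constant below is a rough assumption pulled from commonly-quoted ballparks, not a measurement:

```python
# Commonly-quoted ballparks (assumptions, not measurements):
neurons = 8.6e10            # ~86 billion neurons
synapses_per_neuron = 1e3   # low-end of the usual 1e3-1e4 range
rate_hz = 10                # assumed average firing rate

ops = neurons * synapses_per_neuron * rate_hz   # ~8.6e14 synaptic events/s

# If, as argued above, only a fraction of that handles 'intelligence':
trimmed = ops * 0.1
print(f"{trimmed:.1e}")  # ~8.6e13 events/s
```

Treating one synaptic event as roughly one multiply-accumulate, the trimmed figure lands in the tens-of-teraops range, which is the ballpark of a single modern GPU; that's the whole point of the shortcut argument above.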
>>22284 Why not use a peristaltic pump? Except for the problem that the tube might break at some point.
Some very creative gears: https://youtu.be/0ZPo3HxR0KI
>>22178 Sorry, but I can't work on this. Maybe in two or three weeks.
>>22178 >>22288 That's really good advice Noidodev, it would make for a really good new OP. Maybe the original OP will return in the meantime, but if not, then by all means give it a shot if you're willing! Cheers. :^)
>>22286 That might work. My goal would be making a non-electric one. I'd need to figure out a way to make a clockwork movement with enough torque to drive the pump, and to install the self-winding mechanism Rolex uses (which winds the watch as you move it), so that it self-winds every time robot-chan moves.

C++ General Robowaifu Technician 09/09/2019 (Mon) 02:49:55 No.12 [Reply] [Last]
C++ Resources general

The C++ programming language is currently the primary AI-engine language in use.

BTW, if you're new to C++ and you're stuck on Windows (either you can't or won't upgrade to Linux), then you can at least incorporate a good, open shell into your system to begin with so you can follow along. Start at this link, and if you have any questions just ask ITT:
>>14759 Sure. But I haven't done drivers. Never touched an Arduino either.
>>14765 Neat! That's good to hear, Anon. Welcome aboard!
>But I haven't done drivers.
It'll be easy. You just need to devise a standardized way to talk to the device's pins. As you might imagine, pin numbers can be all over the place, and may also require small amounts of preamble ceremony beforehand. We just need to turn that into easy-to-remember interfaces, standard across all the devices we'll use in our robowaifus. I'd suggest you have a look at Derek Molloy's videos for a starter, Anon. https://www.youtube.com/c/DerekMolloyatDCU/playlists
>Never touched an Arduino either.
You might want to pick up a handful then; they're quite inexpensive (for the power they bring). Here's but one of hundreds of possible sources, 3 for ~US$20: www.amazon.com/REXQualis-Board-ATmega328P-Compatible-Arduino/dp/B07WK4VG58
>>14738 Kek, feel free to email the libcaptcha author (kefeer@brokestream.com). That library has some problems indeed; I noticed various things were wrong when I used it for the ib. I even felt tempted to rewrite it entirely, or at least partly, or at the very least fix the compiler warnings, but got scared by the copyright notice (points 1 and 2 in particular), and didn't want to waste time writing my own. I ended up doing things like avoiding the gifSize "constant" entirely and declaring my own ("constexpr int gifSize = 17646"; constexpr is a C++ specifier for expressions that are known at compile time) to avoid some of the semantic problems that you mention. You're wrong about the internal linking though: g++ uses internal linkage by default for symbols marked "const" (and static, obviously). I never #include the libcaptcha.c file, so the compiler would throw an error (even before linking) if I tried to use those symbols anyway. The C++ part (my code) is even worse: hard-coded queries, an obscure ORM (which needs its own compiler to generate implementation files), blocking queries everywhere, and non-threadsafe globals in a multi-threading context. It nonetheless works. I might try to write something better in Zig or C++ in the distant future.
>>14826 >blocking queries everywhere, and non-threadsafe globals in a multi-threading context. We'd certainly be interested in seeing solutions to these issues Anon. BTW, do you know of Apple's libdispatch ?
Realized I was being kinda silly devising my own convoluted logging system out of concern for performance latency, once it suddenly dawned on me the C++ standard library already sported a perfectly-serviceable buffered logging system--which can lazily write to files. Here's a little RAII-compliant hack for doing just that. [1]

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

class File_clog {
public:
    File_clog() = delete; // no default ctor

    // globally redirects all std::clog output into the given filename instead
    File_clog(std::string const& name)
        : log_nm{name}, log_file{log_nm}
    {
        // assign this log file's buffer into std::clog,
        // remembering std::clog's original buffer so it can be restored:
        auto log_buff = log_file.rdbuf();
        orig_buff = std::clog.rdbuf(log_buff);
    }

    // RAII: restore std::clog's original buffer on destruction
    ~File_clog() { std::clog.rdbuf(orig_buff); }

private:
    std::string log_nm;
    std::ofstream log_file;
    std::streambuf* orig_buff{};
};



Short Stacks the Obvious Solution Robowaifu Technician 05/06/2020 (Wed) 10:51:46 No.2666 [Reply]
If we want a robowaifu that anyone can make at home, then we will need to make it small enough to be printed on a cheap 3D printer. Let's say the Ender 3, because it has a large printing area and a large community, since it is so affordable. Our robowaifu doesn't have to be printed all in one go; in fact it would probably be better if she was printed in parts, with those parts used to make silicone casts around robotic internals. Short stack robowaifus provide an easily printable solution and require considerably less material to make than conventional sexbots. They could be easier and cheaper for the average man to produce. There is also a large community surrounding short stacks, as it is a popular fetish. Goblins, kobolds, dwarves, halflings, and gnomes are very popular in fantasy, and several 3D models for them can easily be found on the internet, which would give men options for their robowaifus. Engineering a smaller robowaifu would also be considerably easier: less stress would be put on her skeleton and less energy would be required to make her move. Short stacks in particular often have thicker legs and feet, which would fix a lot of issues with balance. What are your thoughts, /robowaifu/, have you taken the short stack pill?
>>21838 >Humans are too hard to replicate, mini mechanical monster girls make more sense. Kind of like the movie M3GAN, life-sized robodolls will actually be a market for rich normalfags in the nearish future. Maybe you can capitalize on that as a 'fluffy headpat robodaughteru' market?
>>21840 There are countless benefits to this design. Cute fluffy tails are fantastic for balancing. Fox ears are great for stereo sound localization, and the fluffy white tuft will also act as a pop filter. She can just wear shirts as dresses. The benefits go on and on; just need to make it not be a loli somehow.
>>21857
>The benefits go on and on
Indeed. The primary one IMO being smaller volume (which loosely correlates with smaller mass, in principle). Square-Cube Law and all that. I'll go ahead and venture an off-the-cuff 'estimate' that a humanoid robot which is ~130cm (full doll-sized, like M3GAN) would likely have under ~60% of the volume of a ~165cm one. That's a big savings for roughly a foot's difference in height! That savings translates directly into much more feasible engineering designs, plus quicker & cheaper costs to manufacture & ship. If the dolly market is your target domain (instead of Anons+Robowaifus), then you'd have many benefits thereby, as you mentioned. But of course that kind of leaves Anon & the other men of the world cast adrift if we all choose that route, right Anon? :^)
>just need to make it not be a loli somehow.
Well I'd say just don't give her ladybits, for starters. 'Anatomically Correct' in the established doll markets might raise a few eyebrows, but for the robowaifu domain it will trigger an avalanche of snowflake feefees like a runaway freight train haha! However, I'm sure we'll all figure it out together in the end, Kiwi. The key to our success here is to just keep moving forward. "Build More & Think Less" at this stage, as our motto-Anon has suggested. Cheers! :^)
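Sanity-checking that off-the-cuff volume estimate with the Square-Cube Law (uniform scaling assumed):

```python
# Heights from the post above, in cm; uniform geometric scaling assumed.
h_small, h_big = 130.0, 165.0
volume_ratio = (h_small / h_big) ** 3  # volume scales with the cube of height
print(round(volume_ratio, 2))  # 0.49
```

So about half the volume, comfortably under the ~60% guess; mass savings should follow roughly the same ratio if the build density stays similar.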
>>21857 The difference in mass and volume will indeed be staggering. The balancing will be much easier as well. Honestly, balancing at 150cm is one of the biggest pains. I don't see why a small fox can't be a waifu for the average Anon. "Build more, think less" is honestly great advice; I, and many others, think too much.
>>21861 >I don't see why a small fox can't be a waifu for the average Anon. Maybe so, there's a large crowd of people that really just want some kind of tangible companion that can offer them some degree of interactivity. Headpat daughterus and visual waifus fall into this category for example. While there may be some anons who want a smol waifu, I think the vast majority would want something approaching lifesized.

Self-driving cars AI + hardware Robowaifu Technician 09/11/2019 (Wed) 07:13:28 No.112 [Reply]
Obviously the AI and hardware needed to run an autonomous gynoid robot is going to be much more complicated than that required to drive an autonomous car, but there are at least some similarities, and the cars are very nearly here now. There are also several similarities between the automobile design, production and sales industries and what I envision will be their counterparts in the 'Companion Robot' industries. Practically every single advance in self-driving cars will eventually have important ramifications for the development and production of Robowaifus.

ITT: post ideas and news about self-driving cars and the hardware and software that make them possible. Also discuss the technical, regulatory, and social challenges ahead for them. Please keep in mind this is the /robowaifu/ board; any insights about how you think these topics may cross over and apply here would also be welcome.

https://www.nvidia.com/object/drive-px.html
16 posts and 13 images omitted.
Wow, what a huge difference they've made at Tesla since this thread was first made back in the day on 8ch. Has anyone else seen the demonstrations of the vector-space renderings of the FSD neural nets on the Teslas? I was skeptical, but they really are beginning to emulate the human perceive/respond cycle, and according to them at about 30Hz.
Open file (1.15 MB 930x1478 1619378083486.png)
I'll admit that I don't really know shit about self-driving cars or AI, but I keep thinking about this, so I might as well dump it here. It occurred to me that the safest way to drive (not the most efficient or most convenient) would be to assume that everything around the car is completely stationary. In other words, if it were driving 80 mph on a highway and there's a car visible in front of it, assume that car is going to instantly stop as if it hit a wall, and comfortably decelerate to prevent hitting it long before that becomes a risk. The closer it gets to something it's driving towards, the slower it has to go, but driving away from something, it can accelerate as fast as the driver is comfortable with, until it starts approaching something else. If it were parking, then getting just shy of touching a wall would be ideal, but while driving it's best to at least keep enough distance to drive around the car in front of it. Perpendicular movement is tricky, since cars can easily pass each other in opposite lanes inches apart without accidents being common, but just the same, if it were driving alongside a wall and something walked out from a doorway in that wall, it could be immediately in front of the car without warning, so the safest behavior is simply to drive slower; driving perpendicular to something is then no different from driving towards it, especially on a winding road where you never know what's around the corner. Obstacle avoidance could be based on whatever direction it can safely move in the fastest. I think you could even apply the same logic to flying, and higher speeds at higher altitudes, although with regular cars you'd need to slow or steer as it approaches potholes or ditches on the side of the road. Or maybe I'm just a retard with Dunning–Kruger effect and driving safely is really a lot more complicated than that. 
Regardless, I'd love to see a simple simulation with a bunch of cars driving around following that simple logic. Even if perfect omnidirectional vision isn't realistic, I think the real-world hardware would amount to cameras on the bumpers and sides of the car, and the closer any of them gets to anything, the lowest value determines the max speed the car can go. There'd need to be a lot more added before it'd be anything more than a glorified always-on parking assist. Though I think this kind of driving behavior would only really be safe if every other car on the road drove the same way, since humans are reckless and impatient assholes, but flashing your blinkers at lower speeds could be enough to help occasionally.
>>13176 Lol, that would be worse than my Grandma's driving honestly. Such hyper-timidity would create countless automobile accidents and be directly responsible for a massive rise in roadway deaths. Society on the highways simply wouldn't function with widespread adoption of such behavior IMO. Tesla actually deals with exactly the kinds of concerns you brought up Anon (and many others as well). Maybe give their Tesla AI Day video a go?
>>13204 The movement speed perpendicular to objects probably needs tuning due to things like tunnels and guard rails, but otherwise I think it could work great. That hyper-timid driving style might seem excessive, but you've got to consider that the biggest concern people seem to have with autonomous cars is safety. And if it's really an issue, I guess it could still be made to decelerate safely but not necessarily comfortably, so the gap between cars can be smaller. There are some places and times where you can go straight on a highway for hours without ever seeing another car, and then there's rush hour in New York where bumper-to-bumper traffic keeps anyone from moving. In the case of the former, the timidness isn't really a significant negative and would mostly just stop it from hitting a deer or anything else that wanders onto the road, hence my analogy of something popping out from a wall. But it would actually help alleviate traffic jams, since the reaction to changes in the speeds of the cars around it could be nearly instant, and if a significant number of cars followed this method, then there'd be a large group of cars all slowly creeping while leaving enough room to pass and merge lanes, instead of idling and randomly accelerating/decelerating and people trying to figure out how to merge. Traffic would be slow, but it would stay moving efficiently. I think this video does a really good job of showing the problem it'd solve: https://www.youtube.com/watch?v=iHzzSao6ypE and at the chicken-crossing-the-road part at 1:08, if you think of it as there being no road, just cars going in a straight line, the cars would slow as the chicken approaches and start speeding up again as the chicken leaves. I think it could solve the problem without the cars needing to communicate with each other or eliminating human drivers entirely. The consistent driving behavior would keep the cars "in the middle" without needing to consider cars behind them.
https://insideevs.com/news/659974/tesla-ai-fsd-beta-interview-dr-know-it-all-john-gibbs/ Interview with a proponent of EVs, discussing some of the AI aspects of Tesla's self-driving cars.

C++ programming textbook; PPP2 Chobitsu Board owner 01/16/2023 (Mon) 03:57:21 No.18749 [Reply] [Last]
This is /robowaifu/'s official C++ learning textbook thread. It is based directly on Bjarne Stroustrup's college freshman textbook, Programming Principles and Practice Using C++, commonly referred to as PPP2. [1] This textbook thread belongs with this C++ Learning Classroom thread: (>>19777). note: This is a read-only document in essence. If you happen to catch the thread unlocked while it's still under construction, please resist the temptation to reply ITT -- it will get deleted! For now, just reply in /meta please. :^) --- Full program archives (through chapter 11): >file drop 230504 https://anonfiles.com/o8i6j4p7z8/PPP2_v0_1a_7z >PPP2-v0.1a.tar.xz.sha256sum b8a117369432ccaf82b3d1ad2037dd87bd7344bda9d0e9968ebe14be673db47a *PPP2-v0.1a.tar.xz ---


Edited last time by Chobitsu on 05/04/2023 (Thu) 13:51:11.
236 posts and 237 images omitted.
Open file (211.88 KB 1140x2262 PPP2_p395.png)
>"A >> operator reads into objects of a given type according to that type’s standard format." >"The standard library istream library also provides facilities for reading individual characters and whole lines." >"What if we wanted to read everything on that line at once and decide how to format it later? That could be done using the function getline()." >"One common reason for wanting to read a whole line is that the definition of whitespace isn’t always appropriate." --- >p395 command line + possible output: g++ -std=c++20 -O2 -Wall -pedantic ./ch_11/main_p395.cpp && ./a.out > > Dennis Ritchie >


Open file (246.17 KB 1140x1762 PPP2_p397.png)
>"Usually, we read integers, floating-point numbers, words, etc. as defined by format conventions. However, we can — and sometimes must — go down a level of abstraction and read individual characters." >"That’s more work, but when we read individual characters, we have full control over what we are doing." >"When we read individual characters, we usually want to classify them: Is this character a digit? Is this character uppercase? And so forth." note: see the code example for the listing of them. --- >p397 command line + possible output: g++ -std=c++20 -O2 -pedantic ./ch_11/main_p397.cpp && ./a.out > --- >p397 example code https://rentry.org/PPP2_p397 https://coliru.stacked-crooked.com/a/c12fc47b5e326104
Open file (139.90 KB 1140x1337 PPP2_p399.png)
>"This section provides a semi-realistic example of the use of iostreams to solve a real problem. When we read strings, words are by default separated by whitespace. Unfortunately, istream doesn’t offer a facility for us to define what characters make up whitespace or in some other way directly change how >> reads a string." >"So, what do we do if we need another definition of whitespace?" >"For most purposes we must treat punctuation just like whitespace. How might we get rid of such punctuation? We could read characters, remove the punctuation characters — or turn them into whitespace — and then read the “cleaned-up” input again" --- >p399 command line + possible output: g++ -std=c++20 -O2 -Wall -pedantic ./ch_11/main_p399.cpp && ./a.out > --- >p399 example code https://rentry.org/PPP2_p399 https://coliru.stacked-crooked.com/a/de67503d3a55b7a6
Open file (397.32 KB 1140x2987 PPP2_p400.png)
>>21874 >"Unfortunately, the code above is messy and rather special-purpose. What would we do if we had another definition of punctuation?" >"Let’s provide a more general and useful way of removing unwanted characters from an input stream." >"The basic idea is to read words from an ordinary input stream and then treat the user-specified “whitespace” characters as whitespace" >"To become a programmer, you need to read code, and not just carefully polished solutions to educational problems. This is [one such] example [of real code]." >"In another few days or weeks, this will become easy for you to read, and you will be looking at ways to improve the solution." --- >p400 command line + possible output: g++ -std=c++20 -O2 -Wall -pedantic ./ch_11/main_p400.cpp && ./a.out <<< "There are only two kinds of languages: languages that people complain about, and languages that people don't use." please enter words (ctrl+d to end input) about and are complain


Edited last time by Chobitsu on 04/10/2023 (Mon) 02:44:48.
Open file (340.11 KB 1140x2565 PPP2_p391_v2.png)
>>21869 >"note: this is an extended & reworked example of the book's original, that demonstrates moving data for both the character & binary formats to/from disk files, and confirming the resultant data are identical." Stop. Congratulations Anon, you've finished with chapter 11; and in fact with a couple of very detailed chapters loaded with information. C++ I/O streams are a bit tricky to master at first, but they are an exceptionally powerful and (nearly-always) elegant approach to I/O management in general. We can't use them everywhere, but when we can they offer us a unified approach to the world of data I/O that otherwise can be quite chaotic! Your time invested in mastering C++ streams is time well-spent Anon. Cheers. :^) --- >p391_v2 command line + possible output: g++ -std=c++20 -O2 -Wall -pedantic ./ch_11/main_p391_v2.cpp && ./a.out -rw-r--r-- 1 2001 2000 20 Apr 9 04:53 ints.bin -rw-r--r-- 1 2001 2000 57 Apr 9 04:53 ints.txt sizeof(char): 1 sizeof(int): 4 sizeof(double): 8


Waifu Robotics Project Dump Robowaifu Technician 09/18/2019 (Wed) 03:45:02 No.366 [Reply] [Last]
Edited last time by rw_bumpbot on 05/25/2020 (Mon) 04:54:42.
249 posts and 178 images omitted.
Open file (343.54 KB 565x1026 Liliumrobotics_LA-001.png)
Open file (273.47 KB 1198x2005 Liliumrobotics_LA-001.jpg)
Open file (177.43 KB 1072x1066 Liliumrobotics_Head-001.jpg)
Lilium Robotics is at it, with a planned Kickstarter campaign for a sex-enabled, taller gynoid: https://liliumrobotics.com/Projects/ - The cute catgirl seems like it will never become available as a product or open-source model. Only the head remains: https://liliumrobotics.com/Head/
Japan, c'mon, what are you doing? Panzer waifu: https://www.youtube.com/watch?v=qs1Wvdb1-wo
>>9267 missile_93's robot's name is apparently Adachi Rei. She has moving legs now: https://vxtwitter.com/missile_39/status/1639877257660624896?t=nSWxs0EoE8X0rSpXu7pL5w&s=33
>>21534 Kek. >ywn ride around inside your waifu's large head resting on a tracked vehicle. I have high hopes for the world of robowaifus from the Nipponese tbh. Are they the heroes we all need? BTW, thanks for all the positive inputs across the board during the past couple weeks NoidoDev. That's much appreciated! Cheers. :^) >=== -minor edit
Edited last time by Chobitsu on 03/31/2023 (Fri) 00:32:36.
Open file (1.65 MB 3024x3024 3673321654421404503.jpg)
Open file (348.60 KB 1920x1080 7547271654421116153.jpg)
Open file (214.78 KB 1440x1440 8092471654421363694.jpg)
Open file (164.01 KB 1600x900 517311660145646695.jpg)
Not exactly a waifu yet! :^) but still a breddy cool project. Looks pretty inexpensive to build and gets around bipedally on two wheels. Has wooden dowels! :^) https://hackaday.io/project/185729-upkie-homemade-wheeled-biped-robot https://github.com/tasts-robots/upkie_locomotion
