/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The Mongolian Tugrik has recovered its original value thanks to clever trade agreements facilitated by Ukhnaagiin Khürelsükh throat singing at Xi Jinping.

The website will stay a LynxChan instance. Thanks for flying AlogSpace! --robi

Knowing more than 100% of what we knew the moment before! Go beyond! Plus! Ultra!


The Library of /robowaifu/ Card Catalogue Robowaifu Technician 11/26/2020 (Thu) 07:11:30 No.7143 [Reply] [Last]
Robowaifus are a big topic. They need a big library index! :^)
Note: This is a living document. Please contribute topical thread/post crosslinks!
Thread category quick-jumps:
>>7150 AI / VIRTUAL_SIM / UX_ETC
>>7152 HARDWARE / MISC_ENGINEERING
>>7154 DESIGN-FOCUSED
>>7156 SOFTWARE_DEVELOPMENT / ETC
>>7159 BIO / CYBORG
>>7162 EDUCATION
>>7164 PERSONAL PROJECTS
>>7167 SOCIETY / PHILOSOPHY / ETC
>>7169 BUSINESS(-ISH)
>>7172 BOARD-ORIENTED
>>7174 MISCELLANEOUS


Edited last time by Chobitsu on 05/23/2022 (Mon) 04:51:00.
123 posts and 35 images omitted.
waifusearch> Tensegrity

THREAD SUBJECT                    POST LINK
R&D General                       >>5448    tensegrity
Waifu Materials                   >>6507    "
Robot skeletons and armatures     >>4398    "
"                                 >>4416    "
"                                 >>8089    "
"                                 >>8158    "
Building the ultimate waifu.      >>7653    "
Papercraft waifu                  >>9016    "
Actuators for waifu movement!     >>5108    "
/robowaifu/ Embassy Thread        >>4832    "
"                                 >>4833    "
"                                 >>4844    "
"                                 >>4848    "
"                                 >>4855    "



Welcome to /robowaifu/ Anonymous 09/09/2019 (Mon) 00:33:54 No.3 [Reply]
Why Robowaifu? Most of the world's modern women have failed their men and their societies, feminism is rampant, and men around the world have been looking for a solution. History shows there are cultural and political solutions to this problem, but we believe that technology is the best way forward at present – specifically the technology of robotics. We are technologists, dreamers, hobbyists, geeks and robots looking forward to a day when any man can build the ideal companion he desires in his own home. However, we are not content to wait for the future; we are bringing that day forward. We are creating an active hobbyist scene of builders, programmers, artists, designers, and writers using the technology of today, not tomorrow. Join us!

NOTES & FRIENDS
> Notes:
-This is generally a SFW board, given our primarily engineering focus. On-topic NSFW content is OK, but please spoiler it.
-Our bunker is located at: https://anon.cafe/robowaifu/catalog.html Please make note of it.
> Friends:
-/clang/ - currently at https://8kun.top/clang/ - toaster-love NSFW. Metal clanging noises in the night.
-/monster/ - currently at https://smuglo.li/monster/ - bizarre NSFW. Respect the robot.
-/tech/ - currently at >>>/tech/ - installing Gentoo Anon? They'll fix you up.
-/britfeel/ - currently at https://anon.cafe/britfeel/ - some good lads. Go share a pint!
-/server/ - currently at https://anon.cafe/server/ - multi-board board. Eclectic thing of beauty.
-/f/ - currently at https://anon.cafe/f/res/4.html#4 - doing flashtech old-school.
-/kind/ - currently at https://2kind.moe/kind/ - be excellent to each other.


Edited last time by Chobitsu on 05/09/2022 (Mon) 21:03:13.

General Robotics/A.I. news and commentary Robowaifu Technician 09/18/2019 (Wed) 11:20:15 No.404 [Reply] [Last]
Anything in general related to the Robotics or A.I. industries, or any social or economic issues surrounding them (especially concerning RoboWaifus).
www.therobotreport.com/news/lets-hope-trump-does-what-he-says-regarding-robots-and-robotics
https://archive.is/u5Msf
blogmaverick.com/2016/12/18/dear-mr-president-my-suggestion-for-infrastructure-spending/
https://archive.is/l82dZ
>===
-add A.I. to thread topic
Edited last time by Chobitsu on 12/17/2020 (Thu) 20:16:50.
345 posts and 146 images omitted.
>>16481 LOL. <inb4 le epin memery Just a quick note to let Anons know this thread is almost at its autosage limit. I'd like suggestions for the OP of #2 please. Thread subject (if you think it should be changed), OP text, links, pics?
>>16482 It would be cool to combine the usual with the scaling-hypothesis link https://www.gwern.net/Scaling-hypothesis and some lore (maybe a single image with a mashup of DL memes) https://mobile.twitter.com/npcollapse/status/1286596281507487745 Also, "blessings of scale" could make it into the name.
Open file (35.94 KB 640x480 sentiment.png)
>>16482 It might be good to have a thread dedicated to new papers and technology, for more technical discussion that doesn't fit in any particular thread, and another for more general news about robotics and AI.
>>2480 I did some quick sentiment analysis back in April and there were a lot more people positive about MaSiRo than negative. About a third were clearly positive and looking forward to having robowaifus, but had reservations that the technology has a long way to go before they would get one. Some said they only needed minor improvements, and some were enthusiastic and wanted to buy one right away even with the flaws. Most of the negative sentiment was fear, followed by people wanting to destroy the robomaids. Some negative comments weren't directed toward robowaifus but rather at women and MaSiRo's creator. And a few comments were extremely vocal against robots taking jobs and replacing women. Given how vicious some of the top negative comments were, it's quite encouraging to see the enthusiasm in the top positive comments was even stronger.
>>2484 Someone just needs to make a video of a robomaid chopping vegetables for dinner with a big knife and normies will repost it for years to come, shitting their pants. Look at the boomers on /g/ and /sci/ who still think machine learning is stuck in 2016. If any meaningful opposition were to arise against robowaifus it would have to come from a subculture, given the amount of dedication it takes to build them. Most working on them have already been burnt by or ostracized from society and don't care what anyone thinks. They hold no power over us. So don't let your dreams be memes, unless your dreams are more layers, then get stacking. :^)
Open file (31.00 KB 474x623 FPtD8sBVIAMKpH9.jpeg)
Open file (185.41 KB 1024x1024 FQBS5pvWYAkSlOw.jpeg)
Open file (41.58 KB 300x100 1588925531715.png)
>>16482 This one is pretty good. We're hitting levels of AI progress that shouldn't even be possible. Now we just need to get Rem printed out and take our memes even further beyond. I'd prefer something pleasing to look at rather than a meme, though, since we'll probably be looking at it for 2+ years until the next thread. The libraries in the sticky are quite comfy and never get old.
>>16483
>Also, "blessings of scale" could make it into the name
Not to be contentious, but is scale really a 'blessing'? I mean for us Anons. Now obviously large-scale computing hardware will play into the hands of the Globohomo Big-Tech/Gov, but it hardly does so to the benefit of Joe Anon (who is /robowaifu/'s primary target 'audience' after all). I feel that Anon's goals here (>>16496) would instead serve us (indeed, the entire planet) much, much better than some kind of always-on (even if only partially so) lock-in to massive data centers for our robowaifus. No thanks! :^)
>>16487
>It might be good to have a thread dedicated to new papers and technology for more technical discussion that doesn't fit in any particular thread and another for more general news about robotics and AI.
Probably a good idea, but tbh we already have at least one 'AI Papers' thread (maybe two). Besides, I hardly feel qualified myself to start such a thread with a decent, basic OP. I think I'll leave that to RobowaifuDev or others here if they want to make a different one. Ofc, I can always go in and edit the subject+message of any existing thread, so we can re-purpose any standing thread if the team wants to.
>Given how vicious some of the top negative comments were it's quite encouraging to see the enthusiasm in the top positive comments was even stronger.
At least it looks to be roughly on par, even before there is a robowaifu industry in existence. Ponder the ramifications of that for a second: even before an industry exists. Robowaifus are in fact a thousands-years-old idea whose time has finally come. What an opportunity, what a time to be alive! :^) You can expect this debate to heat up fiercely once we and others begin making great strides in a practical way, Anon.
>Someone just needs to make a video of a robomaid chopping vegetables for dinner with a big knife and normies will repost it for years to come shitting their pants.
This. As I suggested to Kywy, once we accomplish this, even the rabid, brainwashed feminists will be going nuts wanting one of their own (>>15543).


Edited last time by Chobitsu on 05/28/2022 (Sat) 16:07:14.

Robowaifu-OS & Robowaifu-Brain(cluster) Robowaifu Technician 09/13/2019 (Fri) 11:29:59 No.201 [Reply] [Last]
I realize it's a bit grandiose (though probably no more than the whole idea of creating an irl robowaifu in the first place), but I want to begin thinking about how to create a working robowaifu 'brain', and how to create a special operating system to run on her so she will have the best chance of remaining an open, safe & secure platform.

OS Language Choice
C is by far the single largest source of security holes in software history, so it's out more or less by default. I'm sure that will cause many C developers to sneer at the very thought of a non-C-based operating system, but the unavoidable cost of fixing the large number of bugs and security holes inevitable in a large C project is simply more than a small team can bear. There is much else to do besides writing code here, and C hooks can be generated wherever deemed necessary as well.

C++ is the best candidate for me personally, since it's the language I know best (I also know C). It's basically as low-level as C but with far better abstractions and much better type-checking. And just like C, you can inline Assembler code wherever needed in C++. Although poorly-written C++ can be as bad as C in terms of safety, owing to the necessity of C compatibility, it also has many facilities that keep the sane coder who adheres to simple, tried-and-true guidelines from going there. There is also a good C++ project already ongoing that could be used for a clustered unikernel OS approach, for speed and safety. This approach could save drastic amounts of time for many reasons, not least tightly-constrained debugging: every 'process' is literally its own single-threaded kernel, and mountains of old-style cruft (and thinking) typical of OS development simply vanish.
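Purely as an illustration of what such guidelines buy you (a minimal sketch, not taken from any RW codebase; the names are made up): passing a view like std::span instead of a raw pointer-plus-length pair makes the classic size-mismatch family of C bugs unwritable, while the generated machine code stays just as lean.

// Minimal sketch: the same task written C-style and guideline-C++ style (C++20).
#include <array>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <span>

// C-style: the caller must keep the pointer and the length in sync by hand.
int sum_c_style(const int* data, std::size_t len) {
    int total = 0;
    for (std::size_t i = 0; i < len; ++i) total += data[i];
    return total;
}

// Guideline C++: std::span carries its size with it, so a mismatched length can't be passed.
int sum_cpp(std::span<const int> data) {
    return std::accumulate(data.begin(), data.end(), 0);
}

int main() {
    std::array<int, 4> joint_angles{10, 20, 30, 40};    // fixed-size, on the stack, nothing to free
    std::cout << sum_cpp(joint_angles) << '\n';         // size deduced automatically
    std::cout << sum_c_style(joint_angles.data(), joint_angles.size()) << '\n';
}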

FORTRAN is a very well-established language for the sciences, but a) there aren't a lot of FORTRAN coders, and b) it's probably not the greatest at being a general-purpose language anyway. I'm sure it could be made to run robotics hardware, but would probably be a challenge to turn into an OS.

There are plenty of du-jour SJW & coffee languages out there, but quite apart from the rampant faggotry & SJWtardism plainly evident in most of their communities, none of them have the kind of industrial experience and pure backbone that C, C++, or FORTRAN have.

D and Ada are special cases and possibly bear due consideration in some years' time, but for now C++ is the obvious choice to me for a Robowaifu OS foundation, with Python probably being the best scripting language for it.

(1 of 2)
55 posts and 19 images omitted.
>>13174 lel'd. >How do I Well, you start by not letting her get behind the wheel at night anon.
>>201 https://www.mythic-ai.com/technology/ https://youtu.be/GVsUOuSjvcg relevant and of interest for AI computing technology.
>just dropping this here for refs:
Operating Systems: Three Easy Pieces
Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau
Arpaci-Dusseau Books, August 2018 (Version 1.00)
>"The book is centered around three conceptual pieces that are fundamental to operating systems: virtualization, concurrency, and persistence. In understanding the conceptual, you will also learn the practical, including how an operating system does things like schedule the CPU, manage memory, and store files persistently. Lots of fun stuff! Or maybe not so fun?"
https://pages.cs.wisc.edu/~remzi/OSTEP/
>>201 tbh as a side project I would be interested if you succeeded in buying a few cheap Intel Xeon Phis from ebay and integrating them into a system with non-trivial performance. But my main platform is gaming GPUs for now.
>>16469 Not a bad idea Anon. I think the main point to start with is maybe a basic 4-SBC cluster simply to provide cheap+energy-efficient failover safety for our starter robowaifus. Thereafter, the sky's the limit so yea.

Open file (410.75 KB 1122x745 birbs_over_water.png)
Open file (99.96 KB 768x512 k0p9tx.jpg)
/robowaifu/meta-5: It's Good To Be Alive Robowaifu Technician 03/07/2022 (Mon) 00:23:10 No.15434 [Reply] [Last]
General /robowaifu/ team survey: (>>15486) Please respond, Anon
/meta & QTDDTOT
Note: The latest version of the /robowaifu/ JSON archives available is v220523 May 2022: https://files.catbox.moe/gt5q12.7z
If you use Waifusearch, just extract this into your 'all_jsons' directory for the program, then quit (q) and restart.
Mini-FAQ


Edited last time by Chobitsu on 05/24/2022 (Tue) 05:08:32.
210 posts and 75 images omitted.
Open file (175.19 KB 235x480 Lapis-figurine.png)
I've just bought a 3D printer (Creality Ender 3 Pro, as my first printer) https://www.creality.com/products/ender-3-pro-3d-printer I will most likely pick it up on Friday. I already have filaments, which I got for free from a convention. If it works out well I'll be expanding on it.
>>16461 Congrats Hik! That's a nice one IMO. Looking forward to your printout pics!
>figurine
A Cute! I'm really looking forward to an Aoki Lapis robowaifu Anon. :^)

waifusearch> Ender

THREAD SUBJECT                     POST LINK
3D printer resources               -> https://alogs.space/robowaifu/res/94.html#8850       ender
"                                  -> https://alogs.space/robowaifu/res/94.html#15714      "
Prototypes and failures            -> https://alogs.space/robowaifu/res/418.html#15926     "
"                                  -> https://alogs.space/robowaifu/res/418.html#15930     "
Short Stacks the Obvious Solutio   -> https://alogs.space/robowaifu/res/2666.html#2666     "
/robowaifu/meta-4: Rugged Mounta   -> https://alogs.space/robowaifu/res/12974.html#15333   "
The Sumomo Project                 -> https://alogs.space/robowaifu/res/14409.html#15256   "
kiwi's Tutorials                   -> https://alogs.space/robowaifu/res/14704.html#14711   "


Open file (1.96 MB 1288x966 3dprinter-assembled.png)
Open file (1.58 MB 1159x758 3dprinter-parts.png)
I got and assembled my 3D printer today. Took about 3 hours to put it together. I have yet to configure it and set up software for it and such; will get to that soon, then I'll make test prints and tweak slic3r settings and so on.
>>16494 Congrats! And welcome to the 3D-printer calibration technicians' club! You have to choose a slicer (cura, slic3r) and a CAD program (solvespace, freecad, cadquery, openscad or something else) to continue.
>>16494 Gratz Hik! May your 3D-printing adventures go smoothly. Cheers.

AI Design principles and philosophy Robowaifu Technician 09/09/2019 (Mon) 06:44:15 No.27 [Reply] [Last]
My understanding of AI is somewhat limited, but personally I find the software end of things far more interesting than the hardware side. To me a robot that cannot realistically react or hold a conversation is little better than a realdoll or a dakimakura.

As such, this is a thread for understanding the basics of creating an AI that can communicate and react like a human. Some examples I can think of are:

>ELIZA
ELIZA was one of the first chatbots, and was programmed to respond to specific cues with specific responses. For example, she would respond to "Hello" with "How are you". Although this is one of the most basic and intuitive ways to program a chat AI, it is limited in that every possible cue must have a response pre-programmed in. Besides being time-consuming, this makes the AI inflexible and unadaptive.
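A toy sketch of that cue-and-response idea (C++, purely illustrative; the cue table here is made up): every possible exchange has to be typed in by hand, which is exactly the inflexibility described above.

// ELIZA-style canned responder: fixed cue -> fixed response, nothing is learned.
#include <cctype>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    const std::unordered_map<std::string, std::string> rules{
        {"hello",        "How are you?"},
        {"how are you?", "I am fine. Tell me more about yourself."},
        {"bye",          "Goodbye."},
    };
    std::string line;
    while (std::getline(std::cin, line)) {
        for (char& c : line) c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        auto it = rules.find(line);
        // Any cue that was never scripted falls through to a canned deflection.
        std::cout << (it != rules.end() ? it->second : std::string("Please go on.")) << '\n';
    }
}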

>Cleverbot
The invention of Cleverbot began with the novel idea to create a chatbot using the responses of human users. Cleverbot is able to learn cues and responses from the people who use it. While this makes Cleverbot a bit more intelligent than ELIZA, Cleverbot still has very stilted responses and is not able to hold a sensible conversation.

>Taybot
Taybot is the best chatbot I have ever seen and shows a remarkable degree of intelligence, being able to both learn from her users and respond in a meaningful manner. Taybot may even be able to understand the underlying principles of language and sentence construction, rather than simply responding to phrases in a rote fashion. Unfortunately, I am not sure how exactly Taybot was programmed or what principles she uses, and it was surely very time-intensive.

Which of these AI formats is most appealing? Which is most realistic for us to develop? Are there any other types you can think of? Please share these and any other AI discussion in this thread!
111 posts and 49 images omitted.
>>16312 >It would be more practical for it to do planning and have a faster, more lightweight system to handle the movements. Everything would be able to run onboard but it wouldn't be ideal. Realistically, low-level movement and locomotion would be handled by a separate model or a traditional software system. Gato is useful for slow-realtime actions (unless you enhance it in more than a few ways).
Open file (107.55 KB 1013x573 RETRO.png)
>>16254 I very much like seeing this here, great taste. Note that even the largest model is quite small by modern standards - you could run it on a 6GB-VRAM GPU with a few tricks. It uses a vanilla transformer and short context; this is clearly just a baseline compared to what could be done here. Stay tuned.
>>16110 I respect the creativity, but I do think that you overcomplicate the solution, although a semantically rich memory index mechanism sounds interesting in theory. Still, as of now it looks brittle, as memorizing should be learned in the context of a large, rich, general-purpose supervision source. RETRO https://arxiv.org/abs/2112.04426 used a banal frozen BERT + FAISS as the encoder & index for language modeling, and did quite well, outperforming dense models larger than it by 1+ OOM.
>If the model finds a contradiction somewhere, it should be possible to resolve it then update its own model or at the very least correct memories in the database.
If you have some strong runtime supervision, you can just edit the index. Retrieval-based models are targeted towards this usecase as well. There is a good, if a bit dated, overview of QA approaches: https://lilianweng.github.io/posts/2020-10-29-odqa/
There are some attempts at retrieval-enhanced RL, but the success is modest for now: https://www.semanticscholar.org/paper/Retrieval-Augmented-Reinforcement-Learning-Goyal-Friesen/82938e991a4094022bc190714c5033df4c35aaf2
I think a fruitful engineering direction is building upon DPR for QA-specific embedding indexing:
https://huggingface.co/docs/transformers/model_doc/dpr
https://github.com/facebookresearch/DPR
The retrieval mechanics could be improved with a binary network computing semantic bitvectors https://github.com/swuxyj/DeepHash-pytorch and using the well-developed MIPS primitives: https://blog.vespa.ai/billion-scale-knn/
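For anons who want to see the retrieval mechanic at its smallest: the core step is just "embed the query, take the maximum-inner-product neighbours from the index, prepend their text to the model's context". A brute-force C++ sketch of that MIPS step (toy embeddings and made-up entries; real systems swap the scoring loop for FAISS/ScaNN/Vespa-style ANN structures):

// Brute-force maximum inner product search over a tiny in-memory "index".
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Entry { std::string text; std::vector<float> emb; };

float dot(const std::vector<float>& a, const std::vector<float>& b) {
    float s = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Score every stored chunk against the query embedding and return the top_k texts.
std::vector<std::string> retrieve(const std::vector<Entry>& index,
                                  const std::vector<float>& query, std::size_t top_k) {
    std::vector<std::pair<float, std::size_t>> scored;
    for (std::size_t i = 0; i < index.size(); ++i)
        scored.emplace_back(dot(query, index[i].emb), i);
    const std::size_t k = std::min(top_k, scored.size());
    std::partial_sort(scored.begin(), scored.begin() + k, scored.end(),
                      [](const auto& a, const auto& b) { return a.first > b.first; });
    std::vector<std::string> out;
    for (std::size_t i = 0; i < k; ++i) out.push_back(index[scored[i].second].text);
    return out;                          // these chunks get prepended to the LM's context
}

int main() {
    std::vector<Entry> index{{"Mahoro is a combat android.",      {0.9f, 0.1f}},
                             {"Kurumi wears a blue maid outfit.", {0.2f, 0.8f}}};
    for (const auto& t : retrieve(index, {0.1f, 0.9f}, 1)) std::cout << t << '\n';
}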


>>16468 >Stay tuned. I like the ring of that, Pareto Frontier. Looking forward with anticipation to your thread tbh.
Open file (205.82 KB 701x497 image-search.png)
Open file (134.63 KB 1041x385 hashnet.png)
>>16468 The idea behind aligning the classification embeddings is that the query lacks the information it's trying to retrieve from memory. A frozen BERT model trained for semantic search isn't going to match well from a query like "what is the name of the robowaifu in the blue maid dress?" to character descriptions of Yuzuki, Mahoro or Kurumi. It has to learn to connect those dots. If it struggles with figuring that out on its own then I will pretrain it with a human-feedback reward model: https://openai.com/blog/instruction-following/ Also, the encoder for the summarization model can be used for the classification embeddings, which reduces the memory cost of having to use another model. Training will still be done on large general-purpose datasets. The memory can be cleared after pretraining with no issue and filled later with a minimal factory default that is useful for an AI waifu.
RETRO is evidence that basic memory retrieval works even without good matching, and augmenting the context with knowledge from a seq2seq model has also been done successfully, with improvements to consistency and truthfulness: https://arxiv.org/abs/2111.05204
The hashing strategy was inspired by product-key memory for doing approximate nearest-neighbour search: https://arxiv.org/abs/1907.05242 but using the score instead for a binary code so it can work with a database or any binary search tree, plus a continuous relaxation to make the hash differentiable: https://www.youtube.com/watch?v=01ENzpkjOCE Vespa.ai seems to be using a similar method, placing several items in a bucket via a binary hash code and then doing a fine-level search over the bucket: https://arxiv.org/abs/2106.00882 and https://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W03/papers/Lin_Deep_Learning_of_2015_CVPR_paper.pdf
From the repo you linked, it looks like HashNet is the simplest and most effective, and similar to what I was planning to do with a continuous relaxation to make the binary hash codes differentiable: https://openaccess.thecvf.com/content_ICCV_2017/papers/Cao_HashNet_Deep_Learning_ICCV_2017_paper.pdf Using FAISS is out of the question though, since it uses too much memory for an SBC and can't scale up to GBs let alone TBs. I'm not familiar with DPR and will have to read up on it when I have time.
There's a bit of a difference in our projects since your target platform is a gaming GPU. My goal is to create an artificial intellect that doesn't need to rely on the memory of large language models and utilizes memory from disk instead. This way it can run off an SBC with only 512 MB of RAM, which are both affordable and in great stock (at least the non-WiFi versions that can take a USB WiFi dongle). I've given up trying to do anything with large language models since I have neither the compute nor the money to rent it. The idea will also scale up to larger compute, such as a gaming GPU, if anyone with the resources becomes interested in doing that.
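To make the bucket-then-fine-search idea concrete, here is a rough C++ sketch (the 64-bit codes and the 16-bit bucket split are arbitrary illustrative choices; the differentiable-relaxation training that actually produces the codes is omitted entirely):

// Coarse-to-fine lookup over binary hash codes: high bits choose a bucket,
// full Hamming distance ranks the candidates inside that bucket.
#include <bit>
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <utility>
#include <vector>

using Code = std::uint64_t;

int hamming(Code a, Code b) { return std::popcount(a ^ b); }

std::uint16_t bucket_of(Code c) { return static_cast<std::uint16_t>(c >> 48); }   // top 16 bits

struct BinaryCodeStore {
    std::unordered_map<std::uint16_t, std::vector<std::pair<Code, int>>> buckets;  // code -> record id

    void insert(Code code, int record_id) { buckets[bucket_of(code)].push_back({code, record_id}); }

    // Return the record id in the query's bucket with the smallest Hamming distance.
    int nearest(Code query) const {
        auto it = buckets.find(bucket_of(query));
        if (it == buckets.end()) return -1;     // empty bucket; a real system would probe neighbouring buckets
        int best_id = -1, best_dist = 65;
        for (const auto& [code, id] : it->second)
            if (int d = hamming(code, query); d < best_dist) { best_dist = d; best_id = id; }
        return best_id;
    }
};

int main() {
    BinaryCodeStore store;
    store.insert(0xFACE000000000001ull, 42);    // two records that hash into the same bucket
    store.insert(0xFACE0000000000FFull, 7);
    std::cout << store.nearest(0xFACE000000000003ull) << '\n';   // closest code belongs to record 42
}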
>>16496 >My goal is to create an artificial intellect that doesn't need to rely on the memory of large language models and utilizes memory from disk instead. This way it can run off an SBC with only 512 MB of RAM which are both affordable and in great stock (at least non-WiFi versions that can take a USB WiFi dongle). You are the hero we all need, but don't deserve Anon! Godspeed.

/robowaifu/ + /monster/, its benefits, and the uncanny valley Robowaifu Technician 05/03/2021 (Mon) 14:02:40 No.10259 [Reply]
Discussing the potential benefits of creating monster girls via robotics instead of 1-to-1 replicas of humans, and what parts can be substituted to get them into production as soon as possible.
Firstly, many of the animal parts that could be substituted for human ones are much simpler to work with than the human appendages, which have a ton of bones and complex joints in the hands and feet. My primary example of this is bird/harpy species (image 1), which have relatively simple structures and much less complexity in the hands and feet. For example, the wings of bird species typically only have around three or four joints total, compared to the twenty-seven in the human hand, while the legs typically only have two or three, compared to the thirty-three in the human foot. As you can guess, having to work with a tenth of the bones and joints (and no opposable thumbs and all that) makes things incredibly easier. And while I used bird species as an example, the same argument could be put forward for MG species with paws and other more simplistic appendages, such as Bogey (image 2) and insect hybrids (image 3).
Secondly, intentionally making it appear not to be human is a way to circumvent the uncanny valley. It's incredibly difficult to make completely convincing human movement, and one of the simplest ways around that is just to suspend the need for it entirely. We as humans are incredibly sensitive to the uncanny valley of our own species - even something as benign as a prosthetic limb can trigger it - but if we were to create something that we don't expect to move in such a way, it's theoretically entirely possible to just not have to deal with it (for the extremities, anyway), leaving more time to focus on other aspects, such as the face. On the topic of the face, slight things could be substituted there too (again, for instance, insect girls), in order to draw attention away from the uncanny valley until technology is advanced enough that said uncanny valley can be eliminated entirely.
These possibilities, while certainly not to the taste of every anon, could be used as a way to accelerate production to the point that it picks up investors and begins to breed competition and innovation among people with wayyyyyyy more money and manpower than us, which I believe should be the end goal for this board as a whole. Any ideas or input is sincerely appreciated.
20 posts and 8 images omitted.
>>13697 There are Anons here working on making monster girls real. Monster girls and robowaifus make sense together.
>>13697 Imagine, being so upset about one guy writing something in an online community. In a thread about monster girl robots. Dude.
>>13698 As you think. >>13699 I will get mad about what I want.
Open file (490.38 KB 525x910 loona.png)
>put robo-skellington in life-size plush
>Uncanny valley solved?
You don't have to worry about replicating skin, and it's more huggable than hard plastic.
>>16492 Yep, good thinking Anon. And actually, we've had similar concepts going here for quite some time.

waifusearch> plush OR plushie OR daki OR dakimakura

THREAD SUBJECT                     POST LINK
AI Design principles and philoso   -> https://alogs.space/robowaifu/res/27.html#27         dakimakura
What can we buy today?             -> https://alogs.space/robowaifu/res/101.html#101       "
Who wouldn't hug a kiwi.           -> https://alogs.space/robowaifu/res/104.html#6127      "
"                                  -> https://alogs.space/robowaifu/res/104.html#6132      "
"                                  -> https://alogs.space/robowaifu/res/104.html#6176      plushie
"                                  -> https://alogs.space/robowaifu/res/104.html#14761     daki
Waifus in society                  -> https://alogs.space/robowaifu/res/106.html#2267      dakimakura
Robot Voices                       -> https://alogs.space/robowaifu/res/156.html#9092      plushie
"                                  -> https://alogs.space/robowaifu/res/156.html#9093      "
Waifu Robotics Project Dump        -> https://alogs.space/robowaifu/res/366.html#3501      daki
Robowaifu Propaganda and Recruit   -> https://alogs.space/robowaifu/res/2705.html#2738     "
/robowaifu/ Embassy Thread         -> https://alogs.space/robowaifu/res/2823.html#10983    plushie



Open file (363.25 KB 1027x1874 MaidComRef.png)
MaidCom Development Kiwi 03/16/2022 (Wed) 23:30:40 No.15630 [Reply] [Last]
Welcome to the /robowaifu/ board's project. Together, we will engineer a modular robot that will serve and provide companionship to their Anon faithfully. See picrel for details on the current design. This robot will begin as a basic maid robot, then move forward towards more capable robots with functionality approaching Chii/2B/Dorothy. The first goal is to have a meter-tall robot which functions as a mobile server, bearing an appearance that approximates her owner's waifu. This should be completed by December 2022, with further major updates happening yearly until Anons can build a PersoCom-class robot with relative ease and affordability.
162 posts and 81 images omitted.
Kywy, Anon just posted a video clip that could have potential for our meshtegrity approach for robowaifus: > (>>16415 >Tendon-driven leg -related)
Open file (149.69 KB 634x1535 compositeic1.jpg)
>>16374 ROS has a lot of "documentation" but it actually says very little. I only found the source code because I looked at the Ubuntu compilation instructions, which suggested adding a source repository to Ubuntu, and the repository was hosted on Github. I'm wary of that organization. I jumped into the source code of one of their C projects and I see a load of useless wrappers around stdlib functions:
https://github.com/ros2/rcutils/blob/master/src/strcasecmp.c
https://github.com/ros2/rcutils/blob/master/src/strdup.c
Every line of code has a cost; adding lines of code to do argument checking in the callee isn't worth it. Programmers will have to lose time learning those wrappers, using them will tie programs to ROS, and they don't actually provide any benefit: you still have an error you have to check if you pass null pointers to those stdlib wrappers.
The other thing that code does is let the caller specify the memory allocator. I doubt they found a use for a custom memory allocator in their stdlib wrappers. Most likely they're just losing memory and performance by adding more parameters, and more layers of pointers to dereference, to functions.
If you check at the callee, you still have to check at the caller, because then you have to check whether the callee returned an error. Worse, the errors are only going to pile up, and you'll have to come up with increasingly complex mechanisms to signal what kind of error happened. As errors from one part of the program are propagated to increasingly distant places, it becomes impossible to handle them. If there is no way to signal what kind of error happened, as is the case here, error checking paradoxically becomes impossible because someone added error checking to a function.
If you check at the caller, the callee doesn't have to check anything, and every function it passes the argument to doesn't have to do any checking either. The errors aren't allowed to snowball, keeping error checking to a minimum, and errors are only checked where they might be acquired, keeping them close to their source. All of this without letting any error go unchecked. This is also far fewer lines of code, so it results in less code to maintain, smaller binaries, and fewer chances of bugs cropping up, because you can't have bugs if you don't have code. Errors should be checked where they might be acquired.
Also, type names ending with _t are reserved by POSIX: creating one such type is UB, though it's unlikely to cause trouble. And this memcpy call https://github.com/ros2/rcutils/blob/master/src/strdup.c#L52 copies 1 extra byte unnecessarily.
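A tiny C-flavoured sketch of the contrast being drawn here (function names are made up, not taken from rcutils): when the callee checks, the caller still has to test the return value, so the check is simply duplicated; when the check lives at the point where the value is acquired, everything downstream can assume a valid argument.

// Argument checking at the callee vs. at the caller (illustrative only).
#include <cstdio>
#include <cstdlib>
#include <cstring>

// "Callee checks" (the wrapper shape criticized above): the null test here buys nothing,
// because the caller must still check the returned pointer anyway.
char* checked_dup(const char* s) {
    if (s == nullptr) return nullptr;
    char* out = static_cast<char*>(std::malloc(std::strlen(s) + 1));
    if (out != nullptr) std::strcpy(out, s);
    return out;
}

// "Caller checks": validate once where the value enters the program; downstream code never
// re-tests the argument, and the only remaining error (allocation) is checked at its source.
int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: demo <name>\n"); return 1; }  // acquisition point
    char* copy = checked_dup(argv[1]);   // argv[1] is already known to be non-null here
    if (copy == nullptr) return 1;
    std::puts(copy);
    std::free(copy);
    return 0;
}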


>>16231 How are you doing, Ricardo? You still with us bro? Just checking up on you, seems like it's been a couple weeks since the team has heard from you. Cheers.
>>16394 Thank you, and not so much. My connection got cancelled due to freak snow and I was stranded in Denver for 4 days. Made the most of it, but I only arrived back home the other night and still have the remainder of a full workweek. What can I say; when it rains, it pours (snows?).
>>16484 4 days? Wow, that sucks. Anyway, glad you're back safe and sound Meta Ronin.

The Sumomo Project Chobitsu Board owner 11/24/2021 (Wed) 17:27:18 No.14409 [Reply] [Last]
So I've been working for a while at devising an integrated approach to help manage some of the software complexity we are surely going to encounter when creating working robowaifus. I went down many different bunny trails and (often) fruitless paths of exploration. In the end I've finally hit on a relatively simplistic approach that AFAICT will actually allow us to have the great flexibility we'll be needing, without adding undue overhead and complexity.
I call this the RW Foundations library, and I believe it's going to help us all out a lot with creating workable & efficient software that (very hopefully) will allow us to do many things for our robowaifus using only low-end, commodity hardware like the various single-board computers (SBCs) and microcontrollers. Devices like the Beaglebone Blue and Arduino Nano, for example. Of course, we likely will also need more powerful machines for some tasks as well. But again, hopefully, the RW Foundations approach will integrate smoothly with that need as well and allow our robowaifus to smoothly interoperate with external computing and other resources. I suppose time will tell.
So, to commemorate /robowaifu/'s 5th birthday this weekend, I've prepared a little demonstration project called Sumomo. The near-term goal for the project is simply to create a cute little animated avatar system that allows the characters Sumomo and Kotoko (from the Chobits anime series) to run around having fun and interacting with Anon. But this is also a serious effort, and the intent is to begin fleshing out the real-world robotics needs during the development of this project. Think of it kind of like a kickstarter for real-world robowaifus in the end, but one that's a very gradual effort toward that goal and a little fun along the way.
I'll use this thread as a devblog and perhaps also a bit of a debate and training forum for the many issues we all encounter, and how a cute little fairybot/moebot pair can help us all solve a few of them. Anyway, happy birthday /robowaifu/ I love you guys! Here is my little birthday present to you.
===
>rw_sumomo-v211124.tar.xz.sha256sum
8fceec2958ee75d3c7a33742af134670d0a7349e5da4d83487eb34a2c9f1d4ac *rw_sumomo-v211124.tar.xz
>backup drop


Edited last time by Chobitsu on 05/20/2022 (Fri) 20:36:38.
131 posts and 88 images omitted.
Open file (716.84 KB 500x281 shaking.gif)
Hello, newfag here. I have been lurking here for a bit. I am new to programming and C++ in general, so an experienced programmer would probably get a stroke looking at my code, but it seems to work and that is all I am looking for at this point.
Today I finally got back-propagation working in the neural network library in C++ that I was working on. So far I have created CSV file parsing, as well as a matrix operations library using std::vector. The neural nets use matrix algebra to compute the outputs. It can initialize a random neural network with weights between -1 and 1 using 2 vectors: the 1st to determine how many neurons are in each layer, the 2nd to determine the type of neuron in each layer (different activation functions), as well as Nx, the number of input variables. There are also functions to scale input and output data between 0 and 1.
I have finally got the 1st-order training method of gradient descent to work, with OK results. The training dataset isn't too large and IDK if I am using the correct network layouts, but the outputs seem somewhat on the mark. I need to learn a lot more about layouts of neural nets. The next step is a lot more testing, and adding the more efficient 2nd-order methods, which will involve computing the Hessian and Jacobian matrices. (RIP my brain) After I get these 2 methods working, I'll get into actual chatbot learning to utilize these neural nets for my waifubot. I will post updates if that is OK.
My test folder: https://anonfiles.com/[redacted]/test.tar_gz
===
Note: It's bad form generally (and especially on IBs) to include anonymous precompiled binary executables inside a drop, Anon. Give it a shot making another drop that's just the source code, build files, and dataset files. Please post that instead, and I'll fix up your post here with the new droplink. Cheers.
>t. Chobitsu
>===
-add/edit admonition cmnt
Edited last time by Chobitsu on 05/24/2022 (Tue) 22:42:37.
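For anons who want to follow along with the 1st-order method Anon describes, the core of batch gradient descent fits in a few lines. A bare-bones sketch (single linear neuron, mean-squared error, made-up data; nothing here is taken from Anon's library):

// One linear neuron y = w*x + b trained by batch gradient descent on MSE loss.
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> x{0.0, 1.0, 2.0, 3.0};
    std::vector<double> y{1.0, 3.0, 5.0, 7.0};      // target relation: y = 2x + 1
    double w = 0.0, b = 0.0, lr = 0.05;

    for (int epoch = 0; epoch < 500; ++epoch) {
        double grad_w = 0.0, grad_b = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            double err = (w * x[i] + b) - y[i];     // forward pass + residual
            grad_w += 2.0 * err * x[i];             // dMSE/dw accumulated over the batch
            grad_b += 2.0 * err;                    // dMSE/db
        }
        w -= lr * grad_w / x.size();                // average the gradient, then step
        b -= lr * grad_b / x.size();
    }
    std::cout << "w=" << w << " b=" << b << '\n';   // should approach w=2, b=1
}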
>>16408 Sorry, but it's better if you create the repository, you're the author, and I'm not looking for more responsibilities.
>>16451 Understood. Well, as I indicated ATM /robowaifu/ is my 'repository'. It certainly affords us a reasonable degree of communication. If you can find some way to dist + push patches up to catbox or some other file service, I'm sure we can get by.
>>16455 Should be doable with any of a git hook, gitlab action, or meson target. I've done tangentially related stuff but not this in particular. Either way, I'm already working with 2 dependencies /robowaifu/ uses to fix a bug and add a way to use Meson's dependency() so I already found something to work on.
>>16489 Excellent. Thanks Nagisa, your help will be very welcome! Cheers.

R&D General Robowaifu Technician 09/10/2019 (Tue) 06:58:26 No.83 [Reply] [Last]
This is a thread to discuss smaller waifu building problems, solutions, proposals and questions that don't warrant a thread. Keep it technical. I'll start.

Liquid battery and cooling in one
Having a single "artificial blood" system for liquid cooling and power storage would eliminate the need for a vulnerable solid state battery, eliminate the need for a separate cooling system, and solve the problem of extending those systems to extremities.
I have heard of flow batteries, you'd just need to use a pair of liquids that's safe enough and not too sensitive to changes in temperature.
This one looks like it fits the bill. The downside is that your waifu would essentially be running on herbicide. (though from what I gather, it's in soluble salt form and thus less dangerous than the usual variety)
https://www.seas.harvard.edu/news/2017/02/long-lasting-flow-battery-could-run-for-more-than-decade-with-minimum-upkeep

How close are we to creating artificial muscles? And what's the second best option?
Muscles are perfect at what they do; they're powerful, compact, efficient, they carry their own weight, they aren't dependent on remote parts of the system, they can be controlled precisely, and they can perform many roles depending on their layout alone.
We could grow actual organic muscles for this purpose already but that's just fucking gross, and you'd need a lot of extra bloat to maintain them.
What we need are strands of whatever that can contract using electrical energy. Piezo does the trick at small scales, but would it be enough to match the real thing? There have been attempts, but nothing concrete so far.
What are some examples of technology that one could currently use instead?

High level and low level intelligence emulation
I've noticed a pattern in programs that emulate other computing hardware.
The first emulators that do the job at acceptable speeds are always the ones that use hacks and shortcuts to get the job done.
It comes down to a tradeoff. Analyzing and recompiling or reinterpreting the code itself on a more abstract level will introduce errors, but it is an order of magnitude more efficient than simulating every part of the circuitry down to each cycle. This is why a relatively high-level emulator of a 6th-gen video game console has system requirements close to those of a cycle-accurate emulator of the SNES.
Now, I want to present an analogy here. If training neural networks for every damn thing and trying to blindly replicate an organic system is akin to accurately emulating every logic gate in a circuit, what are some shortcuts we could take?
It is commonly repeated that the human brain has immense computing power, but this assumption is based just on the number of neurons observed, and most of them probably have nothing to do with intelligence or consciousness. If we trim those, the estimated computing power drops to a more reasonable level. In addition, our computers just aren't built for doing things the way neural systems do. They're better at some things, and worse at others. If we can do something in a digital way instead of trying to simulate an analog circuit doing the same thing, that's more computing power we could save, possibly bridging the gap way earlier than we expected.
The most obvious way to handle this would be doing as many mundane processing and hardware control tasks as possible in an optimized, digital way, and then using a GPU or another kind of circuit altogether to handle the magical "frontal lobe" part, so to speak.
278 posts and 120 images omitted.
>>9272 >My original idea was just to use text but Plastic Memories shot down that idea fast. If you're still here Anon, I'm curious if you could spell out your reasoning here. Just as a preface a) I'm very familiar with Plastic Memories, and b) I wrote BUMP & Waifusearch to use only the filesystem+textfiles, specifically because they are more archivable/distributable/durable that way. At least that's my take on this issue. I'd be glad to hear yours though.
>>16476 >I'm bursting now, with so many things that were fun or made me happy... >Too many for me to write down in my diary English is a lossy and inefficient way to encode information. One has to be articulate and verbose to encode a memory into text, and then that text has to be decoded later into a useful format but most of the information is actually missing. Only the most valuable points get encoded and the amount of information they can contain is constrained by the language they're in. It's not just text with that problem either. Similar problems arise trying to express a sound as a picture or a walk through a forest as audio. Books, logs, articles and messages have their place but words have limited usefulness in an intuitive long-term memory. Feelings in particular contain so much information because the soup of different hormones and chemicals in the body are exciting certain memories and inhibiting others. This causes a unique behaviour and thought process to emerge that can never be exactly reproduced again, even with the same ingredients, because the memories change. Remembering these feelings is not something that can be done in text.
>>16479 All fair points, and no debate. But, knowing a little bit about how databases actually work, I'm not yet convinced that they offer a priori some kind of enhanced capability for solving this expansive problem. From my perspective that's really more an issue in the domain of algorithm, rather than data store. Isn't basically any kind of feasible operation on the data, doable in either form? It's all just operations on locations in memory ultimately AFAICT. So, a 1 is a 1 whether it's in a DB or a text file on the filesystem. My primary motivation for relying on text files in this type scenario would be complete openness and visibility of all data themselves. As mentioned, this approach also brings a boatload of other benefits for data safeguards and management. I hope I'm making sense Anon.
>>16480 Both have strengths and weaknesses. Being able to upload files into a robowaifu and automatically index them would be convenient, and using a file system would speed up creating incremental backups with rsync too, and also offer system file permissions to use. You could have different file-permission groups to prevent strangers or friends from accessing personal data, and be able to store sensitive data in encrypted files or partitions.
Being able to efficiently search and retrieve metadata across different indexes and joined tables will also be invaluable. For example, to store research papers for semantic search, one table might contain the papers and another the sentence embeddings for each sentence in a paper. Then the sentence embeddings can be efficiently searched, returning which paper and page the match is found on, and doing other things like finding the top-k best-matching papers or searching certain papers within a given date span or topic. Another table could contain references for papers so they can be quickly retrieved and searched across as well, with constraints like only searching abstracts or introductions. Other things could be done, like counting how many papers cite a paper, multiplied by a weight of how interesting those citing papers are, to suggest other interesting popular papers. These embeddings wouldn't be limited to only text but could also cover images, animations, 2D and 3D visualizations of the text, and other modalities if desired.
Transformers are quite good at translating natural-language questions into SQL queries, so a robowaifu would be able to quickly respond to a large variety of complex questions from any saved data in her system, given the database has proper indexes for the generated queries. I'm expecting robowaifus will have around 4 to 128 TB of data by 2030, and being able to perform complex searches on that data in milliseconds will be crucial.
The metadata database could be automatically built and updated from the file system. A book could have a JSON metadata file and various text files containing the content, making it a lot easier to modify, merge and delete data and manage it with git. This database would be completely safe to lose (though costly to rebuild) and would just be for indexing the file system. It could also be encrypted, hidden behind file permissions, and take into account system file permissions to prevent access to metadata.
>>16485 Thanks for the thoughtful response Anon.
>The metadata database could be automatically built and updated from the file system. A book could have a JSON metadata file and various text files containing the content, making it a lot easier to modify, merge and delete data and manage it with git. This database would be completely safe to lose (though costly to rebuild) and would just be for indexing the file system. It could also be encrypted, hidden behind file permissions, and take into account system file permissions to prevent access to metadata.
Well, I feel you understand where I'm coming from. It's not too difficult to devise a table-based schema (say, 3rd Normal Form) using nothing but flat-file CSV textual data. Obviously, using binary-style numerical data representation is significantly more compact than ASCII, etc., but it is, in essence, still open text. Additionally, as you clearly recognize, the filesystem is a nearly-universal data store that has decades of support from OS designers going back (effectively) to the literal beginnings of all computing. Philosophically, the first rock, the second rock makes the marks upon. :^)
>tl;dr
The long heritage of filesystem support for text data can't be ignored IMO. OTOH, rapid indexes, etc., are also obviously vital for practical runtime performance. As mentioned elsewhere (and fully recognizing that the data itself is relatively small by comparison), Waifusearch's textfile-based system can generally return tree-oriented search responses in a few hundred microseconds - often less than 100us (and that with no particular effort at optimization, rather simply using standard COTS STL libs in C++ in a straightforward manner). I realize that the systems that need to underlie our robowaifus' runtimes are vastly more complex than just database lookups, but my primary point is that these are mostly questions of algorithm, not data store. Anything we can do to go back to basics and reduce as many dependencies as feasible, the better & safer it will be for both Anon and his robowaifus.
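As a small, concrete illustration of the 'algorithm, not data store' point: a C++ sketch (the CSV layout and filename are hypothetical) that keeps the data as an open, flat text file on disk but builds an in-memory hash index at startup, which is the kind of work a database index would otherwise be doing for lookups.

// Load a flat CSV table (one "id,title" pair per line) into a hash index, then answer
// lookups from memory. The on-disk store stays plain, inspectable text.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <unordered_map>

std::unordered_map<std::string, std::string> load_index(const std::string& csv_path) {
    std::unordered_map<std::string, std::string> title_by_id;
    std::ifstream in(csv_path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream row(line);
        std::string id, title;
        if (std::getline(row, id, ',') && std::getline(row, title))   // title keeps any later commas
            title_by_id[id] = title;
    }
    return title_by_id;
}

int main() {
    auto papers = load_index("papers.csv");    // hypothetical file, e.g. "7143,Library Card Catalogue"
    auto it = papers.find("7143");
    std::cout << (it != papers.end() ? it->second : std::string("not found")) << '\n';
}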
