/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The canary has FINALLY been updated. -robi

Server software upgrades done, should hopefully keep the feds away. -robi

LynxChan 2.8 update this weekend. I will update all the extensions in the relevant repos as well.

The mail server for Alogs was down for the past few months. If you want to reach out, you can now use admin at this domain.





DCC Design Tools & Training Robowaifu Technician 09/18/2019 (Wed) 11:42:32 No.415 [Reply] [Last]
Creating robowaifus requires lots and lots of design and artistry. It's not all just nerd stuff you know Anon! :^)
ITT: Add any techniques, tips, tricks, or content links for any Digital Content Creation (DCC) tools and training to use when making robowaifus.
>note: This isn't the 'Post Your Robowaifu-related OC Arts & Designs General' thread. We'll make one of those later perhaps.
>---
I spent some time playing with the program Makehuman and I'll say I wasn't impressed. It's not possible to make anime real using Makehuman; in fact the options (for example eye size) are somewhat limited. But there's good news, don't worry! The creator of Makehuman went on to create a Blender plugin called ManuelBastioniLAB which is much better (and has a much more humorous name). The plugin is easy to install and use (especially if you have a little Blender experience). There are three different anime female defaults that are all pretty cute. (Pictured is a slightly modified Anime Female 2.) There are sliders to modify everything from finger length to eye position to boob size. It even has a posable skeleton. Unfortunately the anime models don't have inverse kinematic skeletons, which are much easier to pose. Going forward I'm probably going to use ManuelBastioniLAB as the starting point for my designs.
>===
-re-purpose OP for DCC consolidation
Edited last time by Chobitsu on 08/10/2021 (Tue) 23:39:41.
135 posts and 66 images omitted.
>>16856 Neat, Anon. TDs have to focus on their artist colleagues just as much (more, actually) as their pipeline devs. Will be interesting to look into, thanks!
>>16856
~/tor_ytdl_retry.sh https://www.youtube.com/watch?v=iXu2t3e9NkA
Nice, concise official video highlighting several differences between the two products. ~24mins.
>via
https://www.bforartists.de/the-differences-to-blender/
Open file (11.34 KB 336x188 hqdefault.jpg)
>>16856
>Quickstart
~/tor_ytdl_retry.sh https://www.youtube.com/playlist?list=PLB0iqEbIPQTZEkNWmGcIFGubrLYSDi5Og
>Quickstart_play.m3u
Quickstart Intro [Gm7aCzI4xws].webm
Quickstart Navigation [TDzMx7huGzk].mkv
Quickstart Scale and Extrude [-oaTVNOIf-c].webm
Quickstart Modeling [f9ubBGV4js0].webm
Quickstart UV Mapping Smart UV Project [oRebOmPy2iU].webm
Quickstart UV Mapping Unwrapping [qTCTPf0gFiY].webm
Quickstart UV Mapping Cleaning up [f004Kvke10c].webm
Quickstart Export UV Layout [sRYZLgqPJjM].webm
Quickstart Adding Cycles Material [h3LFl59qnLk].webm
Quickstart Camera [c34RBNXvIZI].webm
Quickstart Lights [X83gVkyT4B8].webm
Quickstart Rendersettings Cycles [W8EZNTQYCOY].webm


>>12248
>updated Learn_Blender_3_for_Complete_Beginners
yt-dlp --write-description --write-subs https://www.youtube.com/playlist?list=PLn3ukorJv4vuU3ILv3g3xnUyEGOQR-D8J
>play.m3u
Blender 3 - Complete Beginners Guide - Part 1 [jnj2BL4chaQ].webm
Blender 3 - Complete Beginners Guide - Part 2 - Materials & Rendering [g5lHlUB66r0].webm
Blender 3 - Complete Beginners Guide - Part 3 - The Old Man [zt2ldQ23uOE].webm
Blender 3 - Complete Beginners Guide - Part 4 - Edit Mode [iSHMlLSzsrk].webm
Blender 3 - complete beginners guide - part 5 - the monster [kSrqpVZ1raY].webm
Blender 3 - Complete Beginners Guide - Part 6 - The Monster [kRA3_M74vIw].webm
Blender 3 - Complete Beginners Guide - Part 7 - The Street [3GILW1wnP8Y].webm
Blender 3 - Complete Beginners Guide - Part 8 - The Lighting [8x2a8UalJCQ].webm
Create Any Animal in Blender 3 - Detailed Beginner Tutorial [GHv42up5xA0].webm
Updated for version 3. Highly recommended for Blender beginners. Cheers.

Robowaifu-OS & Robowaifu-Brain(cluster) Robowaifu Technician 09/13/2019 (Fri) 11:29:59 No.201 [Reply] [Last]
I realize it's a bit grandiose (though probably no more than the whole idea of creating an irl robowaifu in the first place), but I want to begin thinking about how to create a working robowaifu 'brain', and how to create a special operating system to run on her so she will have the best chance of remaining an open, safe & secure platform.

OS Language Choice
C is by far the single largest source of security holes in software history, so it's out more or less by default. I'm sure that causes many C developers to sneer at the very thought of a non-C-based operating system, but the unavoidable cost of fixing the large numbers of bugs and security holes that are inevitable in a large C project is simply more than can be borne by a small team. There is much else to do besides writing code here, and C hooks can be generated wherever deemed necessary as well.

C++ is the best candidate for me personally, since it's the language I know best (I know C as well). It's also basically as low-level as C, but with far better abstractions and much better type-checking. And just like C, you can inline Assembler code wherever needed in C++. Although poorly-written C++ can be as bad as C code in terms of safety due to the necessity of it being compatible with C, it also has many facilities to not go there for the sane coder who adheres to simple, tried-and-true guidelines. There is also a good C++ project already ongoing that could be used for a clustered unikernel OS approach for speed and safety. This approach could save drastic amounts of time for many reasons, not the least of which is tightly constrained debugging. Every 'process' is literally its own single-threaded kernel, and mountains of old-style cruft (and thinking) typical with OS development simply vanishes.
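To make the "C hooks can be generated wherever needed" point concrete, here's a minimal sketch of a C++ core exposing a plain-C interface. All the names here are hypothetical illustrations, not from any real project:

```cpp
#include <cstdint>

// Hypothetical C++ "core" object wrapped by generated plain-C hooks,
// the way a C++-based robowaifu OS could still serve C firmware.
class ServoController {
public:
    void set_angle(std::int32_t a) { angle_ = a; }
    std::int32_t angle() const { return angle_; }
private:
    std::int32_t angle_ = 0;
};

// The generated C hook layer: opaque handle in, plain functions out.
extern "C" {
    void* servo_create() { return new ServoController(); }
    void servo_set_angle(void* h, std::int32_t a) {
        static_cast<ServoController*>(h)->set_angle(a);
    }
    std::int32_t servo_get_angle(void* h) {
        return static_cast<ServoController*>(h)->angle();
    }
    void servo_destroy(void* h) { delete static_cast<ServoController*>(h); }
}
```

C callers only ever see the `servo_*` functions and a `void*`, so the C++ internals stay free to use whatever abstractions they like.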

FORTRAN is a very well-established language for the sciences, but a) there aren't a lot of FORTRAN coders, and b) it's probably not the greatest at being a general-purpose language anyway. I'm sure it could be made to run robotics hardware, but would probably be a challenge to turn into an OS.

There are plenty of dujour SJW & coffee languages out there, but quite apart from the rampant faggotry & SJWtardism plainly evident in most of their communities, none of them have the kind of industrial experience and pure backbone that C, C++, or FORTRAN have.

D and Ada are special cases and possibly bear due consideration in some years' time, but for now C++ is the obvious choice to me for a Robowaifu OS foundation, with Python probably being the best scripting language for it.

(1 of 2)
66 posts and 23 images omitted.
>>16615 It should be noted that quantum supremacy-type calculations aren't of any use except for being provably hard for classical systems to simulate. My bet is we will train general intelligence on classical hardware years before any quantum hardware is up to the task.
>>16620 This doesn't seem correct, considering that Gaussian Boson Sampling can be done one trillion times faster than on the fastest supercomputers today. A ratio of a minute to 100 million is simply astonishing. China took the lead easily by using a 76-photon prototype. We are just beginning to learn about the advantages of quantum computing; in the next 5-10 years we will discover a lot more computational advantages.
> we will train general intelligence on classical hardware
Due to the scaling laws of neural nets there will never be such a thing as AGI, maybe only human-level AI (HLAI). Any computing system can only represent efficiently (through a short program) a tiny subset of all possible outputs. Most outputs require a program as long as themselves. Algorithmic approximability can be achieved only to a degree. And most Turing-reducible problems are exactly those which can be limit-computed. So to go beyond, you have to use algorithmic approximability. And this implies that general intelligence is therefore not possible for a subset of all possible outputs.
>>16628 Truthfully, things that generate headlines, like the Gaussian boson sampling you speak of, are no more than toy problems that do not translate to generalized approaches. It doesn't matter whether a triangular prism can do optical FFT 100 million or 100 billion times as fast (latency? bandwidth?) as some supercomputer; it fundamentally cannot be generalized in any comparable way. I think people hype photonics too darn much. I believe within the next 10-20 years we will see nothing but more improvements to classical microarchitecture. Eventually we will find better ways to take advantage of the laws of nature to compute for us (like that light prism), but it's certainly not going to be the hypebait you see today.
Dropping this here for now, since I'm not sure where else it would go on /robowaifu/, or even if it's interesting here?
>ToaruOS
>ToaruOS is a "complete" operating system for x86-64 PCs, with experimental support for ARMv8.
>While many independent, hobby, and research OSes aim to experiment with new designs, ToaruOS is intended as an educational resource, providing a representative microcosm of functionality found in major desktop operating systems.
>The OS includes a kernel, bootloader, dynamic shared object linker, C standard library, its own composited windowing system, a dynamic bytecode-compiled programming language, advanced code editor, and dozens of other utilities and example applications.
>There are no external runtime dependencies and all required source code, totalling roughly 100k lines of (primarily) C, is included in this repository, save for Kuroko, which lives separately.
https://github.com/klange/toaruos

Electronics General Robowaifu Technician 09/11/2019 (Wed) 01:09:50 No.95 [Reply] [Last]
Electronics & Circuits Resources general

You can't build a robot w/o good electronics. Post good info about learning, building & using electronics.

www.allaboutcircuits.com/education/
72 posts and 19 images omitted.
>>14824 Interesting. What does this imply? Is there a significant improvement in the performance of the chips using this style of transistor? If so, then we'll have to make our own ASIC using that same method.
Open file (79.85 KB 600x424 BPv4-f.jpg)
>>734 For any Anons currently working on electronics boards that would benefit from Bus Pirate, there is also a v4 that has more RAM available. > http://dangerousprototypes.com/docs/Bus_Pirate_v4_vs_v3_comparison The firmware code is also available. https://github.com/BusPirate/Bus_Pirate
Open file (427.08 KB 1500x1500 super start kit.jpg)
Open file (127.68 KB 1649x795 rip mr servo.png)
I'm looking to get into electronics. Are the ELEGOO UNO starter kits any good? There's one on Amazon for $40. I basically just want to learn how to program a microcontroller, control servos with a controller and understand enough so I can start building a robowaifu. Or should I save my money and just play with the circuit simulator in TinkerCAD?
>>16224 I actually have the kit on the left, and I definitely recommend them for learning Anon, sure.
I don't recall exactly where we were all talking about creating DIY garage-fabs, so I'll put this here for now. >Using mercury lamps as a UV light source ASML is able to get 220nm features out of a dry process. https://www.asml.com/en/products/duv-lithography-systems/twinscan-xt-400l Surely not cheap, but conceivable for a small robowaifu factory.

Black Magic M66 3D Modelling Project & Battledroid Robowaifus SophieDev 07/27/2021 (Tue) 14:03:16 No.11776 [Reply] [Last]
Decided it may be best to place my 3D digital modelling efforts in a separate thread to my work on 'Elfdroid Sophie'. I think this thread will still belong under "Personal Projects". At the moment the 3D modelling begins over on her thread: >>11644 >>11657 I'm using some source material from Masamune Shirow's 1987 OVA "Black Magic - M66" (which itself was based off his 1983 manga). However, as a child of the late eighties/early nineties, I loved 'Transformers', and I was also a big fan of the 'Heavy Gear' series (similar to MechWarrior), and Armored Core. So I often get the urge to create battle-ready robowaifus, but I can only create them virtually. Because obviously I cannot get hold of machine-guns, explosives, tank cannons, and rocket launchers IRL. Good thing, too, since if I were given access to live ammunition I would almost certainly blow myself into a flying streamer of giblets. But I digress. This thread may also be a good place for any equally deranged anons who wish to post images of military-grade robowaifus, either found online or their own creations?
178 posts and 78 images omitted.
>>17689 Apologies, but even site Admins can't change images after the fact under Lynx's software, nor can I. There are some tools to perform image optimizations out there beforehand ofc. Trimage is one. https://trimage.org/ >=== -add trimage cmnt/link
Edited last time by Chobitsu on 11/15/2022 (Tue) 22:23:14.
>>17667 >>17691 >xlink-related
>>17691 Thanks.
I'd like to see battle droids with tails and "wings" in their designs. Not only could both help with balance but the "wings" could be used to disperse heat as well.
>>17696 Horns that act as antennas or long ears with the same function as the horns or wings would also be cool.

Minimalist Breadboard Waifu Robowaifu Technician 10/10/2022 (Mon) 04:32:16 No.17493 [Reply]
Did an engineering exercise to make a """recreational companion robot""". Worked on it for a week or two and hit the MVP. My preferred alternative git service is on the fritz so I'm posting the code here.
>What does it do?
You press the button to stimulate it, and it makes faces based on the stimulation level.
The goal was to demonstrate how little is needed to make a companion robot. A "minimum viable waifu", if you will. I think small, easily replicable lil' deliverables like this would help interest in the robowaifu project, because the bar to entry is low (in both skill and cost). It has meme potential.
I hope you guys find it useful in some way. If there is enough interest in the project, I may start working on it again.
>---
>related
> (>>367, Embedded Programming Group Learning Thread 001)
>===
-add C programming thread crosslink
Edited last time by Chobitsu on 10/13/2022 (Thu) 20:26:05.
5 posts and 1 image omitted.
>>17506
>I guess the big question is "what features would be most useful"?
I think the 'big' answer is: whatever we here decide they should be. :^)
For now, for my own part I'd suggest that the software is the first place to begin, since it is by far the most malleable & inexpensive to start with for initial prototyping. Function stubs can be written out to crystallize the notions well before the hardware need be spec'd. EG:

int respond_to_boop(int const boop_strength) {
  if (boop_strength > 50)
    return OUCH_RESPONSE;
  else
    return LOL_RESPONSE;
}

This should make it all clear enough to everyone where you're going with things. It will also allow for very gradual introduction of H/W designs for the overall project, for any anon who may later take an interest in a specific notion or hardware capability. Make sense?
BTW, I can add a cross-link to our C programming class thread into your OP if you'd like me to?
>===


Edited last time by Chobitsu on 10/11/2022 (Tue) 20:17:57.
>>17507 >BTW, I can add a cross-link to our C programming class thread into your OP if you'd like me to? I don't see why not. >I think the 'big' answer is: whatever we here decide they should be. :^) Hmm. I'm leaning towards a system that uses the Arduino for I/O and offloads the "thinking" to a PC. It would give devs a lot more flexibility while keeping hardware costs down. But I guess a better question to ask is "what problem are we trying to solve"? What need exists, that a device of this caliber would fill?
>>17508 I think a "self-improvement tamagotchi" waifu/companion would be the most useful product. TL;DW - it avoids the problems of cellphone apps (distractions) and high-powered robots (cost and complexity) while giving the benefits of a mechanical friend (always there, and has unlimited patience).
>>17508 >I don't see why not. done >I'm leaning towards a system that uses the Arduino for I/O and offloads the "thinking" to a PC. It would give devs a lot more flexibility while keeping hardware costs down. Yes, we've discussed this notion frequently here on /robowaifu/. Our RW Foundations effort (>>14409) is being developed with direct support for this paradigm in mind; and more specifically to support a better-secured approach to the problem (eg, including offline air-gapped). >But I guess a better question to ask is "what problem are we trying to solve"? What need exists, that a device of this caliber would fill? In a nutshell? >"Start small, grow big." I also think Anon is correct that Tamagotchi-like waifus are a great fit for your thread, OP (>>17495, >>17509). >=== -minor grmr, sp edit -add 'secure approach' cmnt
Edited last time by Chobitsu on 10/13/2022 (Thu) 21:37:16.
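The Arduino-for-I/O, PC-for-thinking split discussed above implies some wire protocol between the two halves. A rough PC-side sketch, assuming a made-up line-based `SENSOR:VALUE` format over serial (the format and all names here are illustrative assumptions, not anything standardized):

```cpp
#include <map>
#include <optional>
#include <string>
#include <utility>

// Parse one "SENSOR:VALUE" line as it might arrive from the Arduino.
std::optional<std::pair<std::string, int>> parse_reading(const std::string& line) {
    const auto colon = line.find(':');
    if (colon == std::string::npos) return std::nullopt;
    try {
        return std::make_pair(line.substr(0, colon),
                              std::stoi(line.substr(colon + 1)));
    } catch (...) {  // stoi throws on non-numeric values
        return std::nullopt;
    }
}

// The PC-side "brain": look at the latest sensor state and pick a
// command string to send back down the wire.
std::string decide_command(const std::map<std::string, int>& state) {
    const auto it = state.find("BOOP");
    if (it != state.end() && it->second > 50) return "FACE:OUCH";
    return "FACE:SMILE";
}
```

The Arduino side then stays dumb: read sensors, print lines, act on whatever `FACE:` command comes back. All the interesting logic lives on the PC where it's cheap to iterate.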
> (>>17505, potentially-related)

Robotics Hardware General Robowaifu Technician 09/10/2019 (Tue) 06:21:04 No.81 [Reply]
Servos, Actuators, Structural, Mechatronics, etc.

You can't build a robot without robot parts tbh. Please post good resources for obtaining or constructing them.

www.servocity.com/
https://archive.is/Vdd1P
7 posts and 2 images omitted.
Open file (4.75 MB 4624x3472 IMG_20220903_105556.jpg)
>>17213 Posting in this thread now. I am attempting to make a silicone sensor while avoiding patent infringement. It appears that every possible patent is either expired, abandoned, or not applicable, so I'll proceed. So far I have created this giant mess. >pic related
I have a couple questions. 1. Would it be feasible to simulate muscles by twisting cords using electric motors to shorten them, or by simply reeling up cable/cord? 2. If so, would pairs of these "muscles" working opposite each other, like biceps and triceps, be able to regenerate electricity as one pulled against the other to unwind/unreel against the opposing motor? Obviously there would still be energy loss, but could you reduce the loss by using the motors as regenerators? I'm asking because I had a weird dream after learning about Iceland's giant wooden puppet, where there was a wooden doll that moved using twisting cords as muscles. It obviously looked feasible in my dream, but my dreams are often retarded.
>>17429 I like your sketch Anon.
>1. Would it be feasible to simulate muscles by twisting cords using electric motors to shorten them, or simply reeling up cable/cord?
Sounds doable. I've been trying my hand at a similar design.
>2. If so, would pairs of these "muscles" working opposite each other, like biceps and triceps, be able to regenerate electricity as one pulled against the other to unwind/unreel against the opposing motor? Obviously there would still be energy loss but could you reduce the loss by using motors as regenerators?
Wouldn't work. Any energy the relaxed motor would generate would be extra energy the motor under current would have to consume. The reason something like regenerative braking works for EVs is that you're taking energy out of the wheels at a time when you don't want the wheels to spin.
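The point that antagonistic "regeneration" is a net loss can be put in rough numbers. A toy calculation (the efficiency figures below are assumptions pulled out of thin air, just to show the shape of the problem):

```cpp
// One flex cycle: the active motor does useful work W, plus work W spent
// back-driving the opposing "muscle", which recovers only a fraction.
// Efficiencies here are illustrative guesses, not datasheet numbers.
double cycle_cost_with_regen(double work_j,
                             double motor_eff,       // electrical -> mechanical
                             double generator_eff) { // mechanical -> electrical
    const double drawn = 2.0 * work_j / motor_eff;   // useful work + back-driving
    const double recovered = work_j * generator_eff;
    return drawn - recovered;
}

// Baseline: the opposing motor simply freewheels instead of generating.
double cycle_cost_freewheel(double work_j, double motor_eff) {
    return work_j / motor_eff;
}
```

With, say, 70% motor and 60% generator efficiency, the "regenerating" pair costs about 22.6 J per 10 J of useful work versus about 14.3 J freewheeling; the generator only claws back part of the extra work you spent fighting it in the first place.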
Open file (38.98 KB 741x599 Jupiter1.jpg)
>>17449 Thanks, maybe I'll learn to draw on the computer someday. (I made this Jupiter with a drawing pad a while back, but pencil to paper just feels more natural. It also helps to get the idea across quickly.) I used to be pretty good with Aldus FreeHand back in the day, but that was bought out by Adobe and I just hate the Illustrator interface.

Open file (40.50 KB 568x525 FoobsWIthTheDew??.jpg)
Emotions in Robowaifus. Robowaifu Technician 07/26/2022 (Tue) 02:05:49 No.17027 [Reply]
Hello, part-time lurker here. (Please excuse me if a thread on this topic exists already.) I have an idea on how we could plan to implement emotions easily in our Robowaifus. This idea stems from Chobits, where Persocoms change behavior based on battery level. So please consider this. Emotions would be separated into two groups: internal and external stimuli. Internal stimuli emotions are things like lethargy, hunger, weakness, etc., things that at their base are derived from low battery and damaged components. External stimuli emotions are things like happiness, sadness, etc., provoked by outside events, mostly relating to how the humans (and master) around her act. A mob-mentality way of processing emotions. All of this would be devoid of any requirement for AI, which would quicken development until we make/get a general AI. So until that time comes, I think this artificial implementation of emotions would work fine. Though when AIs enter the picture, this emotion concept is simple enough that a compatibility layer could be added so that the AI can connect and change these emotions into something more intelligent. Perhaps a more human emotional response system [irrational first thought into a more thought-out rational/personality-centered response], or a direct change of the base emotional responses by the AI as it distinguishes itself from the stock personality into something new. :]
> (>>18 - related-thread, personality)


Edited last time by Chobitsu on 07/27/2022 (Wed) 00:27:23.
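OP's two-group scheme is simple enough to sketch directly. A minimal sketch with made-up thresholds and mood names (nothing here is canonical, it just shows how little logic the battery-driven version needs):

```cpp
#include <string>
#include <vector>

// Internal stimuli: derived purely from battery level and component
// health, per the Chobits-style idea. Thresholds are arbitrary placeholders.
std::string internal_mood(int battery_percent, bool component_damaged) {
    if (component_damaged)    return "weak";
    if (battery_percent < 15) return "lethargic";
    if (battery_percent < 40) return "hungry";
    return "content";
}

// External stimuli: a crude mob-mentality read of nearby humans, where
// each observed mood is scored from -1 (negative) to +1 (positive).
std::string external_mood(const std::vector<int>& observed_scores) {
    int sum = 0;
    for (int s : observed_scores) sum += s;
    if (observed_scores.empty() || sum == 0) return "neutral";
    return sum > 0 ? "happy" : "sad";
}
```

A later AI "compatibility layer" could then simply overwrite the strings these functions return, exactly as OP suggests.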
23 posts and 6 images omitted.
Open file (43.38 KB 649x576 coloring.png)
When latent impressions stored from our lifetime of experiences become active they cause an emotional reaction, an actual chemical reaction in the body that activates certain parts of the brain, which then leads to a conscious thought process, which further develops into actions. If you observe your emotional reactions you will notice that most, if not all of them, are either about getting what you want or not getting what you want. If you trace them back to their source they all arise from self-preservation, either from the primal needs such as food, sex and sleep or attachment to an identity (which includes family, friends, community, country, species, environment and even ideas).

Latent impressions color our thought process and bias it in many ways. Think of the word 'car' and observe your thoughts. What comes to mind first? What color is it? What shape is it? Did an actual car arise in your mind or another vehicle like a truck? Is it big or small? Do you like cars or dislike them? Do they remind you of something else or something from the past or future? If you ask friends what comes to mind first about a word, you'll find everyone colors words differently. Some very little, some a lot. Most of these colorings come from our desires being fulfilled or unfulfilled, which become stored as latent impressions and bias our attention.

Language models are already fully capable of coloring 'thoughts'. The difference is their latent impressions come from an amalgamation of data collected from the internet. There's no cyclical process involved between the resulting actions affecting the latent impressions and those new ones creating fresh actions, since current models do not have a plastic memory. So the first step towards creating emotions is creating a working memory. Once we have that we could have a much more productive conversation about emotions and engineering ideal ones.
One idea I've had for building a working memory into off-the-shelf models is to do something akin to prefix tuning or multi-modal few-shot learning: prefix embeddings to the context which are continuously updated to remember as much as possible. Like our own latent impressions, the context would activate different parts of the memory bank, which would in turn influence the prefix embeddings and the resulting generation. This would be the first step towards a working memory. From there it would need to develop into inserting embeddings into the context and coloring the token embeddings themselves, within some constraints to ensure stability.
I believe OP had the right idea and that almost immediately the thread went into overthinking mode. Start simple, like reacting to low battery status. I would also like to emphasize: start transparent. One can say that emotional states are related to different modes of problem solving and so on and so forth, but this all gets very indirect. At the start, I'd rather only have emotions that are directly and immediately communicated, so you have immediate feedback about how well this works. So, ideas about simulating an emotion like nostalgia (is that even an emotion?) I would put aside for the time being.
The state of the eyelids is something practical to start with. Multiple aspects could be measured and summed together to create the overall effect:
-battery status
-time of the day
-darkness for some time
-movement (& how much & how fast & which direction)
-eyelid status of other faces
-low noise level for some time
-sudden noise increase
-human voice
-voice being emotional or not (I mean what you register even without knowing a language, this can't be very complex)
-hearing words with extreme or dull emotional connotation
-registering vibrations
-body position (standing, sitting, sitting laid back, lying flat)
-extreme temperature and rapid temperature changes
There is no necessity to perfectly measure an aspect (the measure just has to be better than deciding by coin flip), nor do you need to have something for all or even most aspects. Summing together whatever of these silly tiny things you implement badly will make the overall effect more realistic and sophisticated than the parts.
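That summing idea can be sketched directly. Each badly-measured aspect contributes a value in [0,1] with a weight; the weights and the "half open when clueless" default below are placeholder choices, not recommendations:

```cpp
#include <algorithm>
#include <vector>

// One cheap, imperfect measure of "how open should the eyelids be",
// normalized to [0,1], plus its weight in the sum. Any single measure
// only has to beat a coin flip; the sum does the smoothing.
struct Aspect {
    double value;   // e.g. battery status, ambient light, recent noise...
    double weight;
};

// Weighted average of all implemented aspects, clamped to [0,1]
// (0 = eyes closed, 1 = wide open).
double eyelid_openness(const std::vector<Aspect>& aspects) {
    double sum = 0.0, total = 0.0;
    for (const auto& a : aspects) {
        sum += a.value * a.weight;
        total += a.weight;
    }
    if (total == 0.0) return 0.5;  // no measures at all: half open
    return std::clamp(sum / total, 0.0, 1.0);
}
```

Adding a new aspect is just pushing one more `{value, weight}` pair into the vector, so badly-implemented measures can be bolted on one at a time.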
>>17457 Excellent post Anon, thanks.
>>17457 The uncanny valley video here >>10260 describes the differences in approaches well. There are two problems to solve: 1. How do you make something emotional? 2. How do you make emotions realistic? In any case, I wrote this up: https://colab.research.google.com/drive/1B6AedPTyACKvnlynKUNyA75XPkEvVAAp?usp=sharing I have two extremes on that page. In the first cell, emotions are described with text, and they can be arbitrarily precise. In the second cell, emotions are described by a few measures that can be added. There are different advantages to each. If there are a fixed number of emotions, text-based emotions would be low complexity, easy to specify, and easy to test. If there's a continuum of simple emotions, measure-based emotions would be low complexity, harder to specify, and easy to test. If there are complex emotions, text-based emotions would be high complexity, easy to specify, and hard to test. It might not matter which approach is taken to start with since it seems possible to hybridize the two approaches. "On a scale of [...] how well does this statement match your feelings on this [...] event?" As a result, it should be possible to start with one approach, then later get the benefits of the other approach.
>>17463 replikaAI does something similar with CakeChat (which has been linked in here via Luka's GitHub) >Training data >The model was trained on a preprocessed Twitter corpus with ~50 million dialogs (11Gb of text data). To clean up the corpus, we removed URLs, retweets and citations; mentions and hashtags that are not preceded by regular words or punctuation marks; messages that contain more than 30 tokens. >We used our emotions classifier to label each utterance with one of the following 5 emotions: "neutral", "joy", "anger", "sadness", "fear", and used these labels during training. To mark-up your own corpus with emotions you can use, for example, DeepMoji tool. >Unfortunately, due to Twitter's privacy policy, we are not allowed to provide our dataset. You can train a dialog model on any text conversational dataset available to you, a great overview of existing conversational datasets can be found here: https://breakend.github.io/DialogDatasets/ >The training data should be a txt file, where each line is a valid json object, representing a list of dialog utterances. Refer to our dummy train dataset to see the necessary file structure. Replace this dummy corpus with your data before training.
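For anons building their own corpus, the cleanup rules quoted there are simple to reproduce. A rough filter in the same spirit (this is my own approximation covering only a subset of what the CakeChat pipeline actually does):

```cpp
#include <sstream>
#include <string>

// Keep a raw message for training only if it has no URL, isn't a
// retweet, and stays within the 30-token limit from the quoted docs.
bool keep_for_training(const std::string& msg) {
    if (msg.find("http://") != std::string::npos ||
        msg.find("https://") != std::string::npos)
        return false;
    if (msg.rfind("RT ", 0) == 0)  // classic retweet prefix
        return false;
    std::istringstream ss(msg);
    std::string token;
    int count = 0;
    while (ss >> token) ++count;
    return count > 0 && count <= 30;
}
```

Mentions, hashtags, and citations would need their own checks on top of this, as would the emotion labeling step they describe.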

/robowaifu/ + /monster/, its benefits, and the uncanny valley Robowaifu Technician 05/03/2021 (Mon) 14:02:40 No.10259 [Reply]
Discussing the potential benefits of creating monster girls via robotics instead of 1-to-1 replicas of humans, and what parts can be substituted to get them into production as soon as possible.
Firstly is the fact that many of the animal parts that could be substituted for human ones are much simpler to work with than the human appendages, which have a ton of bones and complex joints in the hands and feet. My primary example of this is bird/harpy species (image 1), which have relatively simple structures and much less complexity in the hands and feet. For example, the wings of bird species typically only have around three or four joints total, compared to the twenty-seven in the human hand, while the legs typically only have two or three, compared to the thirty-three in the human foot. As you can guess, having to work with a tenth of the bones and joints (and no opposable thumbs and all that) makes things incredibly easier. And while I used bird species as an example, the same argument could be put forward for MG species with paws and other more simplistic appendages, such as Bogey (image 2) and insect hybrids (image 3).
Secondly is intentionally making it appear to not be human in order to circumvent the uncanny valley. It's incredibly difficult to make completely convincing human movement, and one of the simplest ways around that is just to suspend the need for it entirely. We as humans are incredibly sensitive to the uncanny valley of our own species; even something as benign as a prosthetic limb can trigger it. But if we were to create something that we don't expect to move in such a way, it's theoretically entirely possible to just not have to deal with it (for the extremities part of it, anyways), leaving more time to focus on other aspects, such as the face.
On the topic of the face, so too could slight things be substituted there (again, for instance, insect girls), in order to draw attention away from the uncanny valley until technology is advanced enough that said uncanny valley can be eliminated entirely. These possibilities, while certainly not to the taste of every anon, could be used as a way to accelerate production to the point that it picks up investors and begins to breed competition and innovation among people with wayyyyyyy more money and manpower than us, which I believe should be the end goal for this board as a whole. Any ideas or input is sincerely appreciated.
22 posts and 9 images omitted.
>>13698 As you think >>13699 I will get mad on what I want.
>>16492 Yep, good thinking Anon. And actually, we've had similar concepts going here for quite some time.
waifusearch> plush OR plushie OR daki OR dakimakura
THREAD SUBJECT                      POST LINK
AI Design principles and philoso -> https://alogs.space/robowaifu/res/27.html#27        dakimakura
What can we buy today?           -> https://alogs.space/robowaifu/res/101.html#101      "
Who wouldn't hug a kiwi.         -> https://alogs.space/robowaifu/res/104.html#6127     "
"                                -> https://alogs.space/robowaifu/res/104.html#6132     "
"                                -> https://alogs.space/robowaifu/res/104.html#6176     plushie
"                                -> https://alogs.space/robowaifu/res/104.html#14761    daki
Waifus in society                -> https://alogs.space/robowaifu/res/106.html#2267     dakimakura
Robot Voices                     -> https://alogs.space/robowaifu/res/156.html#9092     plushie
"                                -> https://alogs.space/robowaifu/res/156.html#9093     "
Waifu Robotics Project Dump      -> https://alogs.space/robowaifu/res/366.html#3501     daki
Robowaifu Propaganda and Recruit -> https://alogs.space/robowaifu/res/2705.html#2738    "
/robowaifu/ Embassy Thread       -> https://alogs.space/robowaifu/res/2823.html#10983   plushie


Some of the most mobile robots around today are snakes. It got me thinking that a naga robot would be easier to build than a biped. The tail could hold a large number of pneumatic artificial muscles, which are cheap, relatively lightweight, and powerful, making balancing and moving easier. It might be nice to have a bot that wraps you in its sexy scaly tail at night and massages you to sleep with it.
>>17434 /monster/, pls :^) You are definitely correct about the ease of design vs. a biped. Snek robots are already wildly successful for industrial applications involving pipes, crevices, and other space-constrained environments.
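For the curious, the classic way snake robots achieve that mobility is lateral undulation: each joint tracks a sine wave that is phase-shifted along the body (the "serpenoid curve"). A minimal sketch, assuming a chain of n identically spaced joints; the function name and default parameters are illustrative, not from any particular robot:

```python
import math

def serpenoid_angles(n_joints, t, amplitude=0.5, spatial_freq=2 * math.pi, speed=1.0):
    """Joint angles (radians) for lateral undulation at time t.

    Each joint follows the same sine wave, phase-shifted by its
    position along the body, producing a traveling wave that
    pushes the snake forward against the ground.
    """
    return [
        amplitude * math.sin(speed * t + spatial_freq * i / n_joints)
        for i in range(n_joints)
    ]

# One snapshot of a 12-joint tail at t=0; stream these to the servos
# (or pneumatic valves) at your control loop rate to get motion.
angles = serpenoid_angles(12, t=0.0)
```

Varying `amplitude` and `speed` changes stride and pace, which is why a naga tail is a far easier control problem than bipedal balance.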
>>17434 >pneumatic artificial muscles that are cheap and relatively lightweight and powerful The pneumatic muscles I've seen online are very expensive. Where have you found any cheap ones to purchase? https://www.robotshop.com/en/210mm-stroke-45lb-air-muscle.html This one is 99 dollars, but that will add up quickly because you'll need 5-15 in a tail.
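The cost arithmetic in this post is worth spelling out, since it's the main argument against off-the-shelf air muscles. A trivial sketch using the $99 unit price quoted above (the spares parameter is my own assumption, not from the post):

```python
def tail_muscle_cost(unit_price, count, spares=0):
    """Total actuator cost for a tail: muscles needed plus spares."""
    return unit_price * (count + spares)

low = tail_muscle_cost(99.0, 5)    # 495.0 for a minimal 5-muscle tail
high = tail_muscle_cost(99.0, 15)  # 1485.0 for a fully articulated one
print(low, high)
```

So even before valves, tubing, and a compressor, commercial muscles put a naga tail in the $500-$1500 range, which is why DIY McKibben muscles keep coming up.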

Building the ultimate waifu. Robowaifu Technician 09/15/2019 (Sun) 08:06:26 No.246 [Reply]
For shits and giggles, let's discuss what we would do to build our ultimate robowaifu in an age wherein synthetic flesh is already a thing and we're closer and closer to AI.
>>7666 It can be a long-term thing, like a great life's work or magnum opus. Some of the goals can be realized before others, and then eventually (it being modular) it might get finalized.
>>7667 >like a great life work or magnum opus You. I like you Anon. I wouldn't personally be doing this at all if I didn't consider it a great achievement when we all succeed together at this. I would that more young men in our increasingly-corrupted Western Civilization could find something worthy of a life's-pursuit. Godspeed to /robowaifu/ during this approaching Christmas season. >=== -minor prose edit
Edited last time by Chobitsu on 12/09/2020 (Wed) 20:12:46.
>>246 I wouldn't care as long as she is cute and her AI doesn't take over the world
>>1137 I have a doll that I never warm up. It's TPE and as long as it's powdered well it feels so good to get into bed with a cold soft doll on a hot summer night. The only downside is the rubbery medical smell never went away. I've showered it dozens of times now and it still smells like medical grade TPE. I would rather have a cold hard plastic like PVC and have even considered getting her a PVC catsuit to wear to bed. On cold winter nights it's not as satisfying but I still appreciate the feel of it even though it feels nothing like real skin. In many ways it feels a lot better than a woman.
>>7678 >I wouldn't care as long as she is cute and her AI doesn't take over the world Hard disagree on the take-over-the-world part. Take the Dr. W pill.

Open file (659.28 KB 862x859 lime_mit_mug.png)
Open-Source Licenses Comparison Robowaifu Technician 07/24/2020 (Fri) 06:24:05 No.4451 [Reply] [Last]
Hi anons! After looking at the introductory comment in >>2701, which mentions the use of the MIT licence for robowaifu projects, I read the terms: https://opensource.org/licenses/MIT Seems fine to me; however, I've also been considering the 3-clause BSD licence: https://opensource.org/licenses/BSD-3-Clause >>4432 What I like about the BSD licence is that endorsement using the creator's name (3rd clause) requires asking permission first. I like that term as it allows me to decide whether I should endorse a derivative or not. Do you think that's a valid concern? Initially I also thought BSD had the advantage of forcing retention of the copyright notice, however MIT seems to do that too. It has been mentioned that MIT is already used and planned to be used. How would these two licences interplay with each other? Can I get a term similar to BSD's third clause but with MIT?


Edited last time by Chobitsu on 07/24/2020 (Fri) 14:07:59.
BTW, OpenWRT violates the shit out of all the licenses and gets into 0 trouble for it. Go to the OpenWRT repositories right now and download any package: it's just a .tar.gz tarball with a different extension. Look into the contents of the package: there is no license text anywhere. A lot of FLOSS licenses require that the license text be distributed with the binary, but OpenWRT simply doesn't do it. I approve. OpenBSD does almost the same thing; they very rarely distribute the loicense, so they're violating the license of most of their packages that have such a term.
>>16352 >I use ISC. This is in essence just what OpenBSD advocates today. https://cvsweb.openbsd.org/src/share/misc/license.template?rev=HEAD
>discussion-related > (>>16429, >>16431, >>16526, >>16538, >>16549, >>16551)
I use CC-BY-NC-SA for everything. This allows people to use it freely in the same manner as GPL non-commercially but requires people to ask permission to use it commercially. You can always grant copyright to people outside of the license terms by releasing your work under a different license to specific people. The use of dual licenses allows you to forbid certain individuals and/or entities from using your work as you see fit on a case by case basis while not inhibiting anyone from merely contributing to your project.
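The dual-licensing scheme described above is usually expressed right in the source file header. A hypothetical sketch of what that looks like (project name, email, and SPDX expression are made-up placeholders, not a real project):

```python
# SPDX-License-Identifier: CC-BY-NC-SA-4.0
#
# Copyright (c) 2023 Anon
#
# This file is licensed to the public under CC BY-NC-SA 4.0:
# non-commercial use, with attribution, share-alike.
#
# Commercial use requires a separate license granted on a
# case-by-case basis; contact the copyright holder. As the sole
# copyright holder, Anon may grant (or refuse) such licenses to
# specific parties without affecting the public CC terms.
```

Note that this case-by-case commercial grant only works cleanly if you hold the full copyright, which is why dual-licensed projects typically require contributors to assign or broadly license their contributions back.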
