/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The canary has FINALLY been updated. -robi

Server software upgrades done, should hopefully keep the feds away. -robi

LynxChan 2.8 update this weekend. I will update all the extensions in the relevant repos as well.

The mail server for Alogs was down for the past few months. If you want to reach out, you can now use admin at this domain.



Knowing more than 100% of what we knew the moment before! Go beyond! Plus! Ultra!


The Library of /robowaifu/ Card Catalogue Robowaifu Technician 11/26/2020 (Thu) 07:11:30 No.7143 [Reply] [Last]
Robowaifus are a big topic. They need a big library index! :^)
Note: This is a living document. Please contribute topical thread/post crosslinks!
Thread category quick-jumps:
>>7150 AI / VIRTUAL_SIM / UX_ETC
>>7152 HARDWARE / MISC_ENGINEERING
>>7154 DESIGN-FOCUSED
>>7156 SOFTWARE_DEVELOPMENT / ETC
>>7159 BIO / CYBORG
>>7162 EDUCATION
>>7164 PERSONAL PROJECTS
>>7167 SOCIETY / PHILOSOPHY / ETC
>>7169 BUSINESS(-ISH)
>>7172 BOARD-ORIENTED
>>7174 MISCELLANEOUS


Edited last time by Chobitsu on 05/23/2022 (Mon) 04:51:00.
123 posts and 35 images omitted.
waifusearch> Tensegrity

THREAD SUBJECT                    POST LINK
R&D General                       >>5448   tensegrity
Waifu Materials                   >>6507   "
Robot skeletons and armatures     >>4398   "
"                                 >>4416   "
"                                 >>8089   "
"                                 >>8158   "
Building the ultimate waifu.      >>7653   "
Papercraft waifu                  >>9016   "
Actuators for waifu movement!     >>5108   "
/robowaifu/ Embassy Thread        >>4832   "
"                                 >>4833   "
"                                 >>4844   "
"                                 >>4848   "
"                                 >>4855   "



Welcome to /robowaifu/ Anonymous 09/09/2019 (Mon) 00:33:54 No.3 [Reply]
Why Robowaifu? Most of the world's modern women have failed their men and their societies, feminism is rampant, and men around the world have been looking for a solution. History shows there are cultural and political solutions to this problem, but we believe that technology is the best way forward at present – specifically the technology of robotics. We are technologists, dreamers, hobbyists, geeks and robots looking forward to a day when any man can build the ideal companion he desires in his own home. We are not content to wait for the future, however; we are bringing that day forward ourselves. We are creating an active hobbyist scene of builders, programmers, artists, designers, and writers using the technology of today, not tomorrow. Join us!

NOTES & FRIENDS
> Notes:
-This is generally a SFW board, given our primarily engineering focus. On-topic NSFW content is OK, but please spoiler it.
-Our bunker is located at: https://anon.cafe/robowaifu/catalog.html Please make note of it.
> Friends:
-/clang/ - currently at https://8kun.top/clang/ - toaster-love NSFW. Metal clanging noises in the night.
-/monster/ - currently at https://smuglo.li/monster/ - bizarre NSFW. Respect the robot.
-/tech/ - currently at >>>/tech/ - installing Gentoo Anon? They'll fix you up.
-/britfeel/ - currently at https://anon.cafe/britfeel/ - some good lads. Go share a pint!
-/server/ - currently at https://anon.cafe/server/ - multi-board board. Eclectic thing of beauty.
-/f/ - currently at https://anon.cafe/f/res/4.html#4 - doing flashtech old-school.
-/kind/ - currently at https://2kind.moe/kind/ - be excellent to each other.


Edited last time by Chobitsu on 05/09/2022 (Mon) 21:03:13.

Black Magic M66 3D Modelling Project & Battledroid Robowaifus SophieDev 07/27/2021 (Tue) 14:03:16 No.11776 [Reply] [Last]
Decided it may be best to place my 3D digital modelling efforts in a separate thread from my work on 'Elfdroid Sophie'. I think this thread will still belong under "Personal Projects". At the moment the 3D modelling begins over on her thread: >>11644 >>11657
I'm using some source material from Masamune Shirow's 1987 OVA "Black Magic - M66" (which itself was based on his 1983 manga). However, as a child of the late eighties/early nineties, I loved 'Transformers', and I was also a big fan of the 'Heavy Gear' series (similar to MechWarrior) and Armored Core. So I often get the urge to create battle-ready robowaifus, but I can only create them virtually, because obviously I cannot get hold of machine-guns, explosives, tank cannons, and rocket launchers IRL. Good thing, too, since if I were given access to live ammunition I would almost certainly blow myself into a flying streamer of giblets. But I digress. This thread may also be a good place for any equally deranged anons who wish to post images of military-grade robowaifus, either found online or their own creations.
141 posts and 66 images omitted.
>>15356 A cute! Elfy is a good idea IMO as a more 'low-impact' waifu design that brings lots of benefits to the table, for newcomers in particular, SophieDev (I don't mean you or any of the other regulars here, haha, except me perhaps). Simpler lines, volumes, movements, etc. BTW, you might investigate what the pony community is doing along these lines. Not only have they reached out to us here more than once (>>8118, >>1563, >>11024), but there are probably some of them already doing plushie-oriented work there. Ofc our own dear Kiwi-chan's waifu design is also pretty closely related (>>104).
Open file (667.24 KB 758x764 Elfy_Bed.png)
When she is tired after a hard week of transporting completely legal, safe and legitimate goods across the Schengen Area... Blender's cloth physics lets me tuck Elfy into bed as many times as I want.
>>15513 LOL, a cute. CUTE
>"I'm going to turn the lights out now, sleep tight Elfy"
>"I WANT SOME WATER!"
*gives small glass of wawa to Elfy-gril*
>"There, all better?"
*Elfy nods*
*gives headpats*
>"That's a girl. Now you be a good gril now, won't you?"
>"Uh huh"
>"OK then, we'll see you in the morning"
>"Good night!"
Open file (1.34 MB 1922x1702 Rektopology - Copy.png)
Open file (813.86 KB 1344x728 Elfy_Upgrades - Copy.png)
When I initially "finished" this basemesh, I had no idea about correct topology: quads, edge loops, face loops, and how clean topology is essential for weight painting. My mesh was a literal nightmare: multiple layers inside layers, too dense, and looking like a broken mirror. However, after more than a month of repair work on and off, M-66's basemesh is almost completely repaired. I was having major issues getting the humanoid Metarig to attach to my first basemesh, and weight-painting was nigh-on impossible. Now I know why. To be honest, it's a miracle that Blender even managed to calculate a rigged and weighted model at all from the mess that I started with. In the meantime I have also been making upgrades to my Smurf-like character Elfy (better hair, fingers, toes, tongue, teeth, and an actual body instead of just clothes). The two completely different characters have now become linked in my mind and are the best of friends.

Open file (40.50 KB 568x525 FoobsWIthTheDew??.jpg)
Emotions in Robowaifus. Robowaifu Technician 07/26/2022 (Tue) 02:05:49 No.17027 [Reply]
Hello, part-time lurker here. (Please excuse me if a thread on this topic exists already.) I have an idea for how we could implement emotions easily in our robowaifus. This idea stems from Chobits, where Persocoms change behavior based on battery level. So please consider this: emotions would be separated into two groups, internal and external stimuli. Internal-stimulus emotions are things like lethargy, hunger, weakness, etc.: things that at their base are derived from low battery and damaged components. External-stimulus emotions are things like happiness, sadness, etc., provoked by outside events, mostly relating to how the humans (and master) around her act; a mob-mentality way of processing emotions. All of this would be devoid of any requirement for AI, which would quicken development until we make/get a general AI. So until that time comes, I think this artificial implementation of emotions would work fine. When AIs do enter the picture, this emotion concept is simple enough that a compatibility layer could be added so that the AI can connect to these emotions and change them into something more intelligent: perhaps a more human emotional response system [irrational first thought refined into a more thought-out, rational/personality-centered response], or a direct change of the base emotional responses by the AI as it distinguishes itself from the stock personality into something new. :]
> (>>18 - related-thread, personality)
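To make OP's idea concrete, here's a minimal sketch of the two-stimulus-group model with no AI involved, just plain arithmetic. All names, scales, and thresholds below are hypothetical illustrations, not anything from the post:
[code]
# Minimal sketch of the two-group emotion model (hypothetical names/thresholds).
from dataclasses import dataclass

@dataclass
class EmotionState:
    battery: float = 1.0     # internal stimulus: 0.0 (empty) .. 1.0 (full)
    damaged_parts: int = 0   # internal stimulus: count of faulty components
    mood: float = 0.0        # external stimulus: -1.0 (sad) .. +1.0 (happy)

    def internal_emotions(self) -> dict:
        """Derive 'feelings' directly from the machine's own condition."""
        return {
            "lethargy": max(0.0, 0.5 - self.battery) * 2.0,  # grows below 50% charge
            "weakness": min(1.0, self.damaged_parts / 5.0),
        }

    def observe_event(self, valence: float) -> None:
        """External stimulus: nudge mood toward a nearby event's valence."""
        self.mood = max(-1.0, min(1.0, 0.8 * self.mood + 0.2 * valence))

state = EmotionState(battery=0.3)
state.observe_event(+1.0)  # master smiles at her
print(state.internal_emotions(), round(state.mood, 2))
[/code]
An AI "compatibility layer" as OP describes would then just read and write these same fields instead of the hand-coded rules.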


Edited last time by Chobitsu on 07/27/2022 (Wed) 00:27:23.
21 posts and 6 images omitted.
>>17382 (cont & final)
>Excuse me if I've misunderstood topology and their transformations
It's fine, I think I understand what you mean. Normally, topology only tells you which things are connected to which other things. Density is usually associated with measure spaces or distributions. Flatness and curvature are usually associated with geometry. So the intuitive pictures we have in mind may be different, but most of what you described makes sense for topologies with or without measures and geometries. The only part that requires more than topology is about reinforcing a conscious thread (enlarging a peak), which would require a measure space. In machine learning, it's pretty common to use measure spaces (for probabilities) and geometries (for derivatives) to picture these things anyway, so it's not confusing at all to me.

I think one difference in how we're thinking about this is that, when I say "landmark", the picture I have in mind isn't analogous to a point on an electron cloud. It's analogous to the electron cloud itself. Sometimes the "cloud" might get reduced down to a single point, but usually that doesn't happen. So if the conscious is traversing a topological space, it's not walking along the space, it's shifting between different subspaces within that topological space. When I think of the conscious picking a path from a pathset provided by the subconscious, what I imagine is this:
- The subconscious has an overall space it's working within.
- The subconscious picks out a bunch of (potentially overlapping) subspaces that seem interesting.
- The conscious picks one or more of those subspaces.
- The subconscious expands on that choice by finding new interesting subspaces within the (union of the) selected subspaces.

>attach a feeling to something in order to process it
I think we're thinking the same thing here. Tying it back to vision: the data coming into our eyes consists of only colors, but we often think of objects as being defined mostly by their shapes. The colors provide the cues we need to infer both shapes and context (lighting), and to a lesser extent, the colors themselves provide some final cues for us to identify objects. We have landmarks in the space of objects by which we recognize objects through all of these things (shapes, context, and colors), and we associate those landmarks with language. For us to be able to process an object, we need to process the landmark associated with that object. That happens when the conscious "expands" on that landmark by focusing on its subspaces. (A subspace here would be, e.g., the object in various contexts, taking form in various shapes, and being recolored in various ways.) All of this begins with colors that come in through our eyes, and a color is just a "vision feeling". There should be a similar process going on for all feelings, including "emotion feelings".

>>17345 I actually suspect that ethics and morality aren't foundational, and that they're derived from something else. I think that's why ethicists don't seem to come up with things that become widespread and uncontested, which is something most other academic fields seem able to do. People's sense of right and wrong seems to change with time. I suspect what's more important is that there's some degree of agreement in what narratives people ascribe to the world and to the roles people can play within those narratives.
That gives people a common basis for discussing actions and outcomes: they can say that things are right or wrong in terms of the stories they're acting out. Here's one way to identify ethical compatibility: you can rank stories in terms of which story worlds you would prefer to live in. A robowaifu would be a match for you (in terms of ethics, at least) if and only if your rankings and hers converge "quickly enough" (which depends on how much patience you have for people with temporarily-different ethics from you).
>>17345 There's a discussion of "relevance realization" that seems relevant to what we're discussing here about the conscious selecting branches for the subconscious to expand on. It starts at 32:14 and continues until 41:27. https://www.youtube.com/watch?v=yImlXr5Tr8g&t=1934s He points out some connections to opponent processes, which were originally used to describe color perception. Here's a summary:
- Relevance realization is about the perspective/framing through which information and options are made available. It determines what's salient.
- Relevance realization must happen at a more fundamental level than propositional logic, or anything involving language. That's because the words we use implicitly come with a choice of framing.
- The process of relevance realization can be influenced by how we represent things, but it cannot depend on any particular choice of representation.
- There seems to be an evolutionary process within the brain that's involved in coming up with representations.
- Vervaeke pointed out three opponent processes that seem relevant for cognition: threat-opportunity (same as valence?), relaxing-arousing, and wandering-focusing. Some background information unrelated to the video: in vision, the three opponent processes are blue-yellow, red-green, and black-white.
- There are bottom-up things that direct your attention (like a sudden clap), and top-down things that direct your attention (language).
- Salience is whatever stands out to you. It's what makes subconscious aspects of relevance realization available to working memory. Working memory enables feedback loops with sensory information, and it acts as a global mechanism for coordinating subconscious processes. There seems to be evidence that working memory is a "higher-order relevance filter" (i.e., something that keeps track of the relevance of information to the process of relevance realization).
- Higher-order relevance filters & working memory are required when facing situations that are novel, complex, and ill-defined. Vervaeke suggests that these are the things consciousness is required for.
This seems to me like a very elegant picture of conscious-subconscious interactions. It ties together a lot of theories I've heard about consciousness, like that it's relevant for global coordination, memory, attention, and feelings.
Open file (43.38 KB 649x576 coloring.png)
When latent impressions stored from our lifetime of experiences become active, they cause an emotional reaction, an actual chemical reaction in the body that activates certain parts of the brain, which then leads to a conscious thought process, which further develops into actions. If you observe your emotional reactions you will notice that most, if not all of them, are either about getting what you want or not getting what you want. If you trace them back to their source, they all arise from self-preservation, either from primal needs such as food, sex and sleep, or from attachment to an identity (which includes family, friends, community, country, species, environment and even ideas).

Latent impressions color our thought process and bias it in many ways. Think of the word 'car' and observe your thoughts. What comes to mind first? What color is it? What shape is it? Did an actual car arise in your mind, or another vehicle like a truck? Is it big or small? Do you like cars or dislike them? Do they remind you of something else, or of something from the past or future? If you ask friends what comes to mind first for a word, you'll find everyone colors words differently. Some very little, some a lot. Most of these colorings come from our desires being fulfilled or unfulfilled, which become stored as latent impressions and bias our attention.

Language models are already fully capable of coloring 'thoughts'. The difference is that their latent impressions come from an amalgamation of data collected from the internet. There's no cyclical process where the resulting actions affect the latent impressions and those new impressions create fresh actions, since current models do not have a plastic memory. So the first step towards creating emotions is creating a working memory. Once we have that, we could have a much more productive conversation about emotions and engineering ideal ones.

One idea I've had for building a working memory into off-the-shelf models is to do something akin to prefix tuning or multi-modal few-shot learning: prefix embeddings to the context which are continuously updated to remember as much as possible. Like our own latent impressions, the context would activate different parts of the memory bank, which would in turn influence the prefix embeddings and the resulting generation. This would be the first step towards a working memory. From there it would need to develop into inserting embeddings into the context and coloring the token embeddings themselves, within some constraints to ensure stability.
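To make the prefix-embedding idea concrete, here's a toy sketch of what such a memory bank might look like. To be clear, this is my own interpretation, not an established method: the slot count, the mean-pooled query, and the write rule are all assumptions.
[code]
# Hypothetical sketch: a bank of "latent impression" vectors is matched against
# the current context, and the best-matching slots are prepended as prefix
# embeddings (akin to prefix tuning).
import torch

class WorkingMemory(torch.nn.Module):
    def __init__(self, n_slots: int = 64, d_model: int = 768, n_prefix: int = 8):
        super().__init__()
        self.bank = torch.nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        self.n_prefix = n_prefix

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (seq_len, d_model)
        query = token_embeds.mean(dim=0)            # crude summary of the context
        scores = self.bank @ query                  # relevance of each memory slot
        top = scores.topk(self.n_prefix).indices
        return torch.cat([self.bank[top], token_embeds], dim=0)  # prepend memory

    @torch.no_grad()
    def write(self, token_embeds: torch.Tensor, lr: float = 0.1) -> None:
        # Plasticity: nudge the most relevant slot toward the current context.
        query = token_embeds.mean(dim=0)
        slot = (self.bank @ query).argmax()
        self.bank[slot] = (1 - lr) * self.bank[slot] + lr * query
[/code]
The forward pass would feed into a frozen language model; the write() step is the "cyclical process" part, where generation feeds back into the stored impressions.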
I believe OP had the right idea, and that almost immediately the thread went into overthinking mode. Start simple, like reacting to low battery status. I would also like to emphasize: start transparent. One can say that emotional states are related to different modes of problem solving and so on, but this all gets very indirect. At the start, I'd rather only have emotions that are directly and immediately communicated, so you get immediate feedback about how well this works. So, ideas about simulating an emotion like nostalgia (is that even an emotion?) I would put aside for the time being. The state of the eyelids is something practical to start with. Multiple aspects could be measured and summed together to create the overall effect:
-battery status
-time of the day
-darkness for some time
-movement (& how much & how fast & which direction)
-eyelid status of other faces
-low noise level for some time
-sudden noise increase
-human voice
-voice being emotional or not (I mean what you register even without knowing the language; this can't be very complex)
-hearing words with extreme or dull emotional connotations
-registering vibrations
-body position (standing, sitting, sitting laid back, lying flat)
-extreme temperatures and rapid temperature changes
There is no necessity to perfectly measure an aspect (the measure just has to be better than deciding by coin flip), nor do you need to have something for all or even most aspects. Summing together whatever of these silly tiny things you implement, however badly, will make the overall effect more realistic and sophisticated than the parts.
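A toy version of the summing approach described above. The signal names, weights, and scaling here are made up; the point is only that many weak, badly-measured signals can sum to a plausible overall eyelid state:
[code]
# Each signal is a rough 0..1 "sleepiness" estimate that only has to beat
# a coin flip. Missing signals are simply ignored.

def eyelid_openness(signals: dict, weights: dict) -> float:
    """Return 1.0 for eyes wide open, 0.0 for fully closed."""
    used = [k for k in signals if k in weights]
    sleepiness = sum(weights[k] * signals[k] for k in used)
    total = sum(weights[k] for k in used) or 1.0
    return max(0.0, min(1.0, 1.0 - sleepiness / total))

signals = {
    "low_battery": 0.8,        # battery nearly empty
    "late_hour": 0.9,          # it's 11 pm
    "darkness_duration": 0.6,  # it's been dark for a while
    "noise_level": 0.1,        # quiet room
}
weights = {"low_battery": 2.0, "late_hour": 1.0,
           "darkness_duration": 1.0, "noise_level": 0.5}
print(round(eyelid_openness(signals, weights), 2))  # 0.3, mostly-closed eyes
[/code]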
>>17457 Excellent post Anon, thanks.

Open file (410.75 KB 1122x745 birbs_over_water.png)
Open file (99.96 KB 768x512 k0p9tx.jpg)
/robowaifu/meta-5: It's Good To Be Alive Robowaifu Technician 03/07/2022 (Mon) 00:23:10 No.15434 [Reply] [Last]
/meta, offtopic, & QTDDTOT
General /robowaifu/ team survey: (>>15486)
Note: Latest version of /robowaifu/ JSON archives available is v220523 May 2022: https://files.catbox.moe/gt5q12.7z
If you use Waifusearch, just extract this into your 'all_jsons' directory for the program, then quit (q) and restart.
Mini-FAQ
>A few hand-picked posts on various topics
-Why is keeping mass (weight) low so important? (>>4313)
-HOW TO SOLVE IT (>>4143)
-/robowaifu/'s systems-engineering goals, brief synopsis (>>16376)


Edited last time by Chobitsu on 07/21/2022 (Thu) 21:41:47.
297 posts and 128 images omitted.
>>17423 I understand. On the AI side, it doesn't take a large amount of resources to build on Stable Diffusion or GPT-J 6B. You can train new tokens for these models ("softprompts", "textual inversion") on Google Colab for free, and you can rent an A100 to fine-tune models for, currently, $0.35/hour, or an RTX 3090 for $0.04/hour. In some cases, it's feasible to train massive models from scratch for free because there are groups (Google TRC, Stability AI, CoreWeave) that give free compute resources to open-source AI research projects. I'm not familiar with robotics, but my understanding is that a lot of the design work can be done using free software (MuJoCo, SketchUp, IsaacGym). The actual manufacturing and real-world tests might get expensive, and I don't know what it would take to create models of off-the-shelf parts (would manufacturers be willing to provide them?), but maybe those things can be worked out later. I'd find it very exciting if we could get a good waifu design & simulation working with mostly-plausible components and environments, even if we didn't know how to manufacture it.
>>17424
>The question intended for /robowaifu/ was what happens if they use AI advancement partially driven by /robowaifu/ to improve their uses for AI: mass surveillance, social control, etc.
In that case, I would say that's an acceptable risk, because a world without waifus is unacceptable. I would take steps to prevent people from misusing my [hypothetical] advances for surveillance & control, especially if that surveillance & control inhibits waifu progress, but I would not try to prevent misuse if it means stopping waifu progress. I'm curious how others feel about this point. https://strawpoll.com/polls/Q0Zp4kle6ZM
>>17430 A world with total surveillance is almost unavoidable, except if someone lobbies for "privacy zones" where drones are restricted, or you build a Faraday-caged home, etc. No one outlawed security cameras, and while they undermine privacy, such measures should reduce petty crime and dangerous crime in urban areas, perhaps making them safe again one day.
tl;dr: total privacy is a lost cause, better learn to deal with it. Those in power, who have a lot to lose if their private affairs get out, will surely put in some limitations that the rest of us can benefit from too. That's how the legal system works.
>>17430 Also, by the time waifus with government or corporate backdoors might be a standard thing, it may be a moot point, when there are already "fly" drones at a density of a few per every few cubic meters of any public space.
>>17415 Duly noted Meta Ronin. I'll see if I can come up with something sufficiently cheesy for /meta-6. :^)
>>17430 >and you can rent an A100 to fine-tune models for, currently, $0.35/hour, or an RTX 3090 for $0.04/hour Please share where you can get such cheap compute. Lambda is $1.10/hr for an A100 and A100s and 3090s are never available on vast.ai.

/robowaifu/ + /monster/, its benefits, and the uncanny valley Robowaifu Technician 05/03/2021 (Mon) 14:02:40 No.10259 [Reply]
Discussing the potential benefits of creating monster girls via robotics instead of 1-to-1 replicas of humans, and what parts can be substituted to get them into production as soon as possible.

Firstly, many of the animal parts that could be substituted for human ones are much simpler to work with than the human appendages, which have a ton of bones and complex joints in the hands and feet. My primary example of this is bird/harpy species (image 1), which have relatively simple structures and much less complexity in the hands and feet. For example, the wings of bird species typically only have around three or four joints total, compared to the twenty-seven in the human hand, while the legs typically only have two or three, compared to the thirty-three in the human foot. As you can guess, having to work with a tenth of the bones and joints, and without opposable thumbs and all that, makes things incredibly easier. And while I used bird species as an example, the same argument could be put forward for MG species with paws and other more simplistic appendages, such as Bogey (image 2) and insect hybrids (image 3).

Secondly, intentionally making it appear not to be human can circumvent the uncanny valley. It's incredibly difficult to make completely convincing human movement, and one of the simplest ways around that is just to suspend the need for it entirely. We as humans are incredibly sensitive to the uncanny valley of our own species; even something as benign as a prosthetic limb can trigger it. But if we were to create something that we don't expect to move in a human way, it's theoretically possible to just not have to deal with it (for the extremities, anyway), leaving more time to focus on other aspects, such as the face. On the topic of the face, slight things could be substituted there too (again, for instance, insect girls), in order to draw attention away from the uncanny valley until technology is advanced enough that said uncanny valley can be eliminated entirely.

These possibilities, while certainly not to the taste of every anon, could be used as a way to accelerate production to the point that it picks up investors and begins to breed competition and innovation among people with wayyyyyyy more money and manpower than us, which I believe should be the end goal for this board as a whole. Any ideas or input is sincerely appreciated.
22 posts and 9 images omitted.
>>13698 As you think.
>>13699 I will get mad about what I want.
>>16492 Yep, good thinking Anon. And actually, we've had similar concepts going here for quite some time.

waifusearch> plush OR plushie OR daki OR dakimakura

THREAD SUBJECT                       POST LINK
AI Design principles and philoso  -> https://alogs.space/robowaifu/res/27.html#27       dakimakura
What can we buy today?            -> https://alogs.space/robowaifu/res/101.html#101     "
Who wouldn't hug a kiwi.          -> https://alogs.space/robowaifu/res/104.html#6127    "
"                                 -> https://alogs.space/robowaifu/res/104.html#6132    "
"                                 -> https://alogs.space/robowaifu/res/104.html#6176    plushie
"                                 -> https://alogs.space/robowaifu/res/104.html#14761   daki
Waifus in society                 -> https://alogs.space/robowaifu/res/106.html#2267    dakimakura
Robot Voices                      -> https://alogs.space/robowaifu/res/156.html#9092    plushie
"                                 -> https://alogs.space/robowaifu/res/156.html#9093    "
Waifu Robotics Project Dump       -> https://alogs.space/robowaifu/res/366.html#3501    daki
Robowaifu Propaganda and Recruit  -> https://alogs.space/robowaifu/res/2705.html#2738   "
/robowaifu/ Embassy Thread        -> https://alogs.space/robowaifu/res/2823.html#10983  plushie


Some of the most mobile robots around today are snakes. It got me thinking that a naga robot would be easier than a biped: the tail could hold a large number of pneumatic artificial muscles that are cheap, relatively lightweight, and powerful, making balancing and moving easier. It might be nice to have a bot that wraps you in its sexy scaley tail at night and massages you to sleep.
>>17434 /monster/, pls :^) You are definitely correct about the ease of design vs. a biped. Snek robots are already wildly successful for industrial uses involving pipes, crevasses and other space-constrained applications.
>>17434
>pneumatic artificial muscles that are cheap and relatively lightweight and powerful
The pneumatic muscles I've seen online are very expensive. Where have you found any cheap ones to purchase? https://www.robotshop.com/en/210mm-stroke-45lb-air-muscle.html This one is 99 dollars, and that will add up quickly because you'll need 5-15 in a tail.

Robotics Hardware General Robowaifu Technician 09/10/2019 (Tue) 06:21:04 No.81 [Reply]
Servos, Actuators, Structural, Mechatronics, etc.

You can't build a robot without robot parts tbh. Please post good resources for obtaining or constructing them.

www.servocity.com/
https://archive.is/Vdd1P
7 posts and 2 images omitted.
Open file (4.75 MB 4624x3472 IMG_20220903_105556.jpg)
>>17213 Posting in this thread now. I am attempting to make a silicone sensor while avoiding patent infringement. It appears that every possible patent is either expired, abandoned, or not applicable, so I'll proceed. So far I have created this giant mess. >pic related
I have a couple of questions.
1. Would it be feasible to simulate muscles by twisting cords using electric motors to shorten them, or simply reeling up cable/cord?
2. If so, would pairs of these "muscles" working opposite each other, like biceps and triceps, be able to regenerate electricity as one pulled against the other to unwind/unreel against the opposing motor? Obviously there would still be energy loss, but could you reduce the loss by using the motors as regenerators?
I'm asking because I had a weird dream, after learning about Iceland's giant wooden puppet, where there was a wooden doll that moved using twisting cords as muscles. It obviously looked feasible in my dream, but my dreams are often retarded.
Open file (395.22 KB 4096x3072 20220910_123058.jpg)
>>17428 Power could just as simply go into pulling along a single axis; the "twisting" just introduces a gear-ratio effect. That being said, I've been really keen on the idea of recouping kinetic energy as electrical energy via opposing actuators, like you speak of here. My own design was something like a solenoid, where the opposing actuator could induce a charge, much like a braking system will recoup some kinetic energy as charge. (Note that you won't perfectly conserve energy, because if the arm, for example, were to lift an object, it would have to exert an additional "X" force to recoup "(X energy * efficiency coefficient) - friction".) HOWEVER: if the robot is merely moving its own body in space and returns to more or less the original state, there's no reason recouping some charge wouldn't work. Example: you swing your arm forward, it swings back and returns to the same position. (If I'm wrong here please feel free to explain how.)
Pic somewhat related: a design for translating the oblique movement of the actuator into linear movement, perpendicular to the axis of the "joint".
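For anons curious about the gear-ratio point: the standard first-order kinematic model of a twisted-string transmission treats the cord as an ideal, inextensible string of length L and radius r twisted through motor angle theta, so the shortening is L - sqrt(L^2 - (theta*r)^2). A quick sketch (the example numbers are made up):
[code]
import math

def contraction(L: float, r: float, theta: float) -> float:
    """L: cord length (m), r: cord radius (m), theta: motor twist (rad)."""
    assert theta * r < L, "cord fully wound; model no longer valid"
    return L - math.sqrt(L**2 - (theta * r) ** 2)

# A 30 cm cord of 1 mm radius twisted 20 full turns:
L, r = 0.30, 0.001
theta = 20 * 2 * math.pi
print(f"{contraction(L, r, theta) * 1000:.1f} mm of shortening")  # ~27.6 mm
# Note the nonlinearity: early turns shorten the cord very little per turn
# (a high reduction ratio, i.e. large force), later turns shorten it faster.
[/code]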
>>17429 I like your sketch Anon.
>1. Would it be feasible to simulate muscles by twisting cords using electric motors to shorten them, or simply reeling up cable/cord?
Sounds doable. I've been trying my hand at a similar design.
>2. If so, would pairs of these "muscles" working opposite each other, like biceps and triceps, be able to regenerate electricity as one pulled against the other to unwind/unreel against the opposing motor?
Wouldn't work. Any energy the relaxed motor would generate would be extra energy the motor under current would have to consume. The reason stuff like regenerative braking works for EVs is that you're taking energy from the wheels while you don't want the wheels to spin.

General Robotics/A.I. News & Commentary #2 Robowaifu Technician 06/17/2022 (Fri) 19:03:55 No.16732 [Reply] [Last]
Anything in general related to the Robotics or A.I. industries, and any social or economic issues surrounding them (especially of robowaifus).
===
-note: I'll plan to update this OP text at some point to improve things a bit.
-previous threads:
> #1 (>>404)
52 posts and 37 images omitted.
>>17399 I think that's someone else. I'm the math anon.
>retrieval-augmented models
If you haven't seen them yet, I highly recommend checking out external attention models. https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens
>>17403 >>17406 There's also this one from Google: https://ai.googleblog.com/2022/02/can-robots-follow-instructions-for-new.html
They try to get a robo to generalize to new tasks by:
- Training it on a hundred tasks associated with task descriptions,
- Then passing the descriptions through a language model before giving them to the robo.
I see it isn't posted here yet, so here's some more Stable Diffusion stuff.
- The code & model were posted here: >>17259
- Textual Inversion, for creating reference tokens usable with Stable Diffusion: https://github.com/rinongal/textual_inversion
- A community-built repo of reference tokens: https://huggingface.co/sd-concepts-library
- Some people are also doing prompt weighting with Stable Diffusion, which was previously used with VQGAN: https://github.com/tnwei/vqgan-clip-app/blob/main/docs/tips-n-tricks.md
- ... This supports negative-weight prompts, which let you tell the model that you want X and not Y.
Plus a bonus blog post on AI progress: https://astralcodexten.substack.com/p/i-won-my-three-year-ai-progress-bet The main takeaway is that, 3 months ago, the leading text-to-image model was approximately 3 years ahead of what even optimistic experts believed, and that was after accounting for DALL-E 2.
It starts with humans.
> the "Atom Touch", the first artificial prosthetic arm capable of near-full human range of motion, a basic sense of touch, and mind control
https://atomlimbs.com/touch/preview
Nothing prevents it from being used in robotics.
>>17438 I like how you think.
New framework for simulation that works with Unity, Blender, and Godot: https://github.com/huggingface/simulate
New Q&A tool that's very easy to use: https://twitter.com/osanseviero/status/1572332963378958338
Stable Diffusion prompt generator for creating good prompts: https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion

The important question Robowaifu Technician 09/18/2019 (Wed) 11:54:39 No.419 [Reply] [Last]
Vagoo. I can't speak for anyone but myself, but I'd like to get.. intimate with my fembot. I'd like to know what my options are for her robopussy. I was thinking something like a fleshlight with sensors that triggers voice and arm action. I'm using MyRobotLab. Is anyone familiar with it?

Robosex general I guess
47 posts and 15 images omitted.
>>13354 >>13358 >>13361
>Why would I need to increase my performance? Why? What for? Why?
In my mid-20s my stamina was gone, orgasms barely felt like anything, barely any sex drive, barely any semen, a hard time getting or keeping an erection. Doctors literally didn't seem to care when I complained about it, and I went to a few, and each recommended completely different, unrelated medications. The only one that seemed even close to making sense was for high cholesterol. Your performance is a very good indicator of your health. If you have no testosterone, not only are you going to be shit in bed, but mentally you're going to be fucking miserable.
>Also, it is more difficult to build, probably difficult to even have a scientific foundation for it, and just a very special use case.
No, if anything it's easier to build. It'd basically just be a fleshlight with sensors for collecting data. Then someone with a far better understanding of math than I have would use that data, along with other bits of information about taking supplements, or time spent lifting, the temperature you shower at, or whatever, and find what's actually better and what's just retarded broscience (like the shower thing). Like I said, I don't see a point in combining the idea with a robowaifu, since unless it just lays there lifelessly, the data would be useless.
>>13368 Sounds like a bad hormonal imbalance. Supplements, getting enough sleep, cutting back on alcohol and caffeine, eating more saturated fat and substantially less sugar, and getting proper sunlight and exercise will fix all of this. Barring something medically catastrophic, I'd say you were a victim of the urban/postmodern lifestyle. Sexual function is an indicator, not a cause. Have you fixed this since then?
>>13372 we can move the conversation to >>39 if you have more to add
Open file (239.02 KB 426x238 i2mixt4wK-5ngRvy.mp4)
China has multiple types of these machines; I have seen at least 3 different ones.
>https://nextshark.com/china-sperm-extractor/

F = ma Robowaifu Technician 12/13/2020 (Sun) 04:24:19 No.7777 [Reply]
Alright mathematicians/physicists, report in. Us plebeians need your honest help to create robowaifus in beginner's terms. How do we make our robowaifus properly dance with us at the Royal Ball?
>tl;dr
Surely it will be the laws of physics, and not mere hyperbole, that bring us all real robowaifus in the end. Moar maths kthx.
28 posts and 5 images omitted.
>>15186 AUUUUUUUUGH! :^) I still haven't made time yet Anon. I haven't forgotten.
Some stuff for dance generation: https://expressivemachinery.gatech.edu/projects/luminai/ It looks like this is based on Viewpoint theory, which is a theory of improvisation. There's a book about it called "The Viewpoints Book".
>>15459 Sounds interesting Anon. Unfortunately, they are blocking access over Tor, so I'm unable to see it. But some good things have come out of GA Tech in our area, so yep, I can believe it. Thanks!
This one showcases controlling dance movements with high-level controls, including music based controls: https://www.youtube.com/watch?v=YhH4PYEkVnY This alone might be enough to teach a robo how to dance.
>>17444 Very interesting research. Nice find, Anon.
