/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Canary has been updated.

Build Back Better

Sorry for the delays in the BBB plan. An update will be issued in the thread soon. -r



Have a nice day, Anon!


General Robotics/A.I./Software News, Commentary, + /pol/ Funposting Zone #4 NoidoDev ##eCt7e4 07/19/2023 (Wed) 23:21:28 No.24081 [Reply] [Last]
Anything in general related to the Robotics or A.I. industries, and any social or economic issues surrounding them (especially concerning robowaifus). -previous threads: > #1 (>>404) > #2 (>>16732) > #3 (>>21140)
430 posts and 173 images omitted.
>>32173 Indeed, the same idea can be extended even to environments that don't have multiple independent episodes, basically where the agent only experiences one very long "episode". However, this requires two additions. First, you need a discount factor g, a number between 0 and 1 and strictly less than 1. This is used to make the agent prioritize near-term rewards over long-term rewards; for example, a reward that is 100 actions away is discounted by a factor of g^100. You cannot handle the "one very long episode" case without some kind of time horizon, and the discount factor effectively creates a kind of soft time horizon. Second, you have to bootstrap off a learnable value function estimate. The value function equals the expected value of the total reward the agent gets when starting from a state and using its policy until the end of the episode. When there is only "one very long episode", this needs to be the infinite sum over all future actions, which is still a finite number thanks to the discount factor g. You can then cut the "one very long episode" into spans of arbitrary length. For each span, you assume the total reward after the cut will equal the value function estimate, and you can still train the agent with policy gradient. At first, the value function estimate is randomly initialized and totally inaccurate, so you simultaneously need to train it to more closely resemble the true value function. This training is also done with the cut-up spans: you again apply the value function estimate at the end of the span, and bootstrap off it to compute more accurate (but still biased) estimates of the value function for the other steps in the span. With a discount factor g strictly less than 1, the training eventually makes the estimate converge to the true value. And when the value function is estimated accurately, the policy converges to the optimal actions.
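A minimal sketch of the bootstrapped, discounted return computation described above (pure Python; the span data, the discount value, and the function name are illustrative assumptions, not something from the post):
```
def bootstrapped_returns(rewards, value_after_span, g=0.99):
    """Discounted return targets for one span cut out of a long episode.

    rewards: list of per-step rewards inside the span.
    value_after_span: learned value estimate V(s) for the state reached
        right after the span; it stands in for all future reward.
    g: discount factor, strictly less than 1 (the "soft time horizon").
    """
    returns = [0.0] * len(rewards)
    running = value_after_span               # bootstrap off the value function
    for t in reversed(range(len(rewards))):
        running = rewards[t] + g * running   # R_t = r_t + g * R_{t+1}
        returns[t] = running
    return returns

# Example: a 4-step span whose tail is summarized by a value estimate of 2.0
print(bootstrapped_returns([0.0, 0.0, 1.0, 0.0], value_after_span=2.0))
```
These targets are what both the policy-gradient update and the value-function regression would be trained against.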
>>32195 That is amazingly-interesting, Anon. It's also very encouraging to know that this approach can serve in a less-preplanned, more-adhoc environment. Any estimate about the hardware compute costs involved? POTD
>>32197 Yeah, this would be an example of a model-free RL algorithm. It only needs to know what state the agent is in, which action is taken, and what the reward for the action is. My impression is that for RL used in robotics, the dominant cost is the simulation of the environment (or worse, the data collection from running the agent in the real world), not the calculations or updates from the loss function. Running these simulations can be pretty similar to running high-graphics-quality 3D games. For RLHF with LLMs you have a reward model that's a neural net, so you need big enough GPUs to fit the models. But regardless, you want an algorithm that converges in as few steps as possible. With model-free RL, the training speed is mostly limited by how quickly you can gain information about the environment. You want a policy that explores interesting actions more often than uninteresting actions during training. You cannot optimize prematurely based on too little information, or the policy will get stuck in a local optimum. This is also why LLMs need supervised fine-tuning before RLHF, and why some RL experiments add some kind of curiosity to the agent. If you have control over the reward definition, you also want to shape the reward so that partial progress is rewarded too, even if the agent fails to achieve the final goal. See Nvidia's Eureka paper; they used LLMs to write reward functions to do this.
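To illustrate the reward-shaping point, a toy sketch where a sparse goal bonus is augmented with a dense progress term (the distance-based shaping and the scaling constants are assumptions made up for the example, not taken from the Eureka paper):
```
def shaped_reward(distance_to_goal, prev_distance_to_goal, reached_goal):
    """Sparse goal bonus plus a dense term that rewards partial progress.

    With only the sparse term, the agent sees zero reward until it happens
    to reach the goal, so exploration is very slow; the dense term gives a
    learning signal on every step that closes the distance.
    """
    sparse = 10.0 if reached_goal else 0.0
    dense = prev_distance_to_goal - distance_to_goal  # > 0 when moving closer
    return sparse + 0.1 * dense
```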
>>32201 Great! That's about what I expected regarding costs. We still need to try finding a solution for Robowaifu@home for training needs, heh. I've grabbed the paper and I'll make time to go over it soon. Thanks, Anon! Cheers. :^)
>>32201 >You want a policy that explores interesting actions more often than uninteresting actions during training. You cannot optimize prematurely based on too little information, or the policy will be stuck in a local optimum. I've mentioned before in other threads that the AI I'm looking to make will have "needs" to motivate her actions. Most will be essential things like changing the battery, but otherwise her needs are to cater to my needs. One of the needs is "boredom", which will punish her for being idle too long or for too much repetition. It might be worth it to do things in a less-than-optimal way if it means potentially discovering a more efficient way of doing something.

Robowaifu fiction to promote the product and expand the market Robowaifu Technician 09/09/2019 (Mon) 07:17:19 No.29 [Reply] [Last]
>order companionbot from obscure japanese website
>you're not a pedo, but size is a major factor in the practicality of these designs, so the loli-robot is by far the cheapest and most reliable option
>you open the box and find your companion, purposely designed to look like a cartoon robot, rather than a real person
>still, the robot's purpose is obvious when you realize it is nude and has genitals
>since it is a lolibot, you, a 32 year old wizard NEET, can't exactly go to the store and buy clothes that fit it. So you'd better do an extra good job at hiding it from any guests that come over.
>lol you never have any guests. Guess some problems solve themselves.
>before turning the robot on, you have to set up the software options on your computer. You adjust a series of sliders regarding personality traits, before selecting the English option, and choosing your preferred voice from a list.
>then you agonize for hours over picking a name
>other, more expensive models are wi-fi compatible, but you purposely chose the cheapest option with no wireless connectivity, not just because you're cheap, but because you don't want people spying on your waifu
>you save the settings to a flash drive which is inserted in the robot's navel, after removing a waterproof cover, of course. But this is when you realize you don't actually know how to turn the robot on
>after rifling through the manual you find the on/off procedure, which involves bending the fingers into a certain configuration before pressing in the port on the robot's navel with one hand and pinching the buttons that are the robot's g-spot and clitoris with the other.
>the robot immediately comes to life, opening its eyes and looking directly at you, in a rather compromising position
>Your sudden reaction of shock subsides when you remind yourself that it's simply a robot.
>But the awkwardness comes back when the robot speaks, in very broken Engrish
>still, you can understand it as it introduces itself with the name you've given it, in the voice you chose for it.
>you know that you chose those options, but when the robot asks you for your name, you still answer just as awkwardly as when a real girl would ask you your name at the bank or whatever
>actually, more awkwardly because your fingers are inside it. So you freeze up, as you do even in simpler situations
>but the robot is programmed for your happiness, and detects your stress, smiling at you in an attempt to make you feel better. But only briefly, because you programmed it with just the mildest hint of tsundere
>it tells you to not feel stressed, and assures you that it is not being damaged by your touch
>you remove yourself from the robot's vagina, and notice a brief, subtle shudder. Nice attention to detail from the creators
>You stand up in front of the robot and watch it as it looks around the room, studying its surroundings. It moves in an unnaturally smooth motion, but manages to not be too uncanny due to looking like a robot, rather than a human.
>as the robot's eyes scan the room, you notice that they stop for just a tiny bit longer than usual as they look straight ahead. Straight ahead at your boner, which happens to be right at the small robot's face level.
>once again your mind forgets that you are dealing with a machine, and you awkwardly try to create small talk to defuse the situation, asking the robot if it requires anything else at the moment. It declines, and instead asks if there is anything you desire
>you, the autist you are, refuse to let the robot do anything for you, and instead say that you are going to go and make a sandwich.
>you tell the robot to make itself comfortable, then cringe to yourself when you realize the absurdity of that statement.

(1 of 6)
252 posts and 99 images omitted.
Greentext anon, once again back to get into the swing of things. As much as I'd like to say I've made great progress, I've tumbled quite a bit. The reasons for this are varied and require context to explain, but the important part is my continued refusal to give in. For now, I'll just post those other poems I said I would before. These will be the highlights of my book of poems I started writing before ( >>23878 ), featuring one poem per ten. If for whatever reason you want to see a number I haven't posted (up to 232), do let me know and I'll post it here. These have no titles, and will only be referred to by their numbers.
21
Velour fur, masterfully woven,
Encasing the lightning within,
She moves with grace,
Clopping to and fro,
Eyes glimmering soulfully,
Intelligence and passion,


>>32200 pretty good
>Right near the end, I discovered that I made a mistake
this got me excited until I realized it wasn't part of the poem. It's a nice juxtaposition if you add a stanza like this; a perfect world doesn't seem that appealing without a little chaos to drive it home
>>32200 Beautiful work Greentext Anon. Your poetry is truly wonderful. I hope you'll post more smaller batches over time.
Master! I hear behind me.
Fluttering plastic in warm air.
Strings of a marionette, hidden in plastic and cloth.
Eyes surreal, large, shining, crafted with care.
A reflection which still haunts when met.
Once more, she asserts her heart's longing.
It's Sunday, church was calling.
Yet, she doesn't know.
Her motors weren't muscle.
Her battery wasn't a heart.
Air flowed instead of blood.
A whir of fans instead of breath.
Yet, she insists, she is real.
So, under clothes covering it all,
I let her lean on me in our pew.


Do you hear that, anons? That is the sound of inevitability. That is the sound of robopony gfs. I present to you: Carriage Return
I sit at the desk in my cluttered bedroom, staring at my typewriter as I feel the weight of oblivion weighing down on my mind. Nothing more than a modest hiss can be heard as I attempt to stimulate my brain into continuing its productivity. But alas, the result is the same. A cloud that is as empty now as it is full when inspiration strikes. Adjusting my headphones, I decide that my current playlist just isn't getting me into the flow. I'm feeling a bit grungey right now, that may do the trick. As I lean over to my desktop keyboard to mess around in my music folder, a sound from outside the cacophonous echo chamber of empty thoughts hits me. The door to my bedroom creaks as a feminine snout peeks through, clad in precision-cut mahogany synthetic fur. "Still stuck, honey?" Inky's demure voice cuts in, her olive eyes boring into my own.


These are some very impressive works by several Anons ITT over the past few months. You guys never cease to amaze me!! Cheers. :^)

nandroid project II Emmy-Pilled 09/11/2023 (Mon) 01:03:11 No.25306 [Reply] [Last]
Building my own personal nandroid doll. Continuation of the previous thread: https://alogs.space/robowaifu/res/19226.html#
213 posts and 81 images omitted.
>>32203 Beautiful!! This is really coming together well, Emmy-Pilled. Do you think you'll embed any form(s) of illumination inside Emmy's head Anon? Cheers. :^)
>>32231 yes, much like the previous ones. However, I feel the pupils are too solid a color for light, and the annoying air bubbles would be highlighted as well. Might have to go for a fourth pair to get it right.
>>32232 Shiny! Looking forward to seeing her fully finished one day.
>>32233 another overhaul is coming; as time passes I find myself more and more displeased by her eyelashes. A new model will have to be made and printed
>>32232 I'm sure you'll do right by your robowaifu in the end, Emmy-Pilled! Keep moving forward.

Robowaifu Simulator Robowaifu Technician 09/12/2019 (Thu) 03:07:13 No.155 [Reply] [Last]
What would be a good RW simulator? I guess I'd like to start with some type of PCG solution that just builds environments, and build up from there to characters.

It would be nice if the system wasn't just pre-canned, hard-coded assets and behaviors but was instead a true simulator system. E.g., write robotics control software code that can actually calculate mechanics, kinematics, collisions, etc., and have that work correctly inside the basic simulation framework first, with an eye to eventually integrating it into IRL robowaifu mechatronic systems with little modification. Sort of like the OpenAI Gym concept but for waifubots.
https://gym.openai.com/
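For reference, a bare-bones sketch of what a Gym-style simulator interface for this could look like (the class name, state layout, and placeholder reward are assumptions for illustration, not an existing project):
```
import numpy as np

class WaifuSimEnv:
    """Minimal Gym-style skeleton: reset() -> observation,
    step(action) -> (observation, reward, done, info).
    Real dynamics, kinematics, and collision checks would
    replace the toy update below."""

    def __init__(self, n_joints=12):
        self.n_joints = n_joints
        self.joint_angles = np.zeros(n_joints)

    def reset(self):
        self.joint_angles = np.zeros(self.n_joints)
        return self.joint_angles.copy()

    def step(self, action):
        # Toy dynamics: actions are joint-angle deltas, clipped to a safe range.
        self.joint_angles = np.clip(self.joint_angles + action, -np.pi, np.pi)
        reward = -float(np.sum(np.abs(action)))  # placeholder: penalize effort
        done = False
        return self.joint_angles.copy(), reward, done, {}

env = WaifuSimEnv()
obs = env.reset()
obs, reward, done, info = env.step(np.full(env.n_joints, 0.01))
```
Keeping to an interface like this is what would later let the same control code be swapped between the simulator and real mechatronics.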
141 posts and 64 images omitted.
>>29242 >>29296 That sounds very encouraging, SchaltkreisMeister! Good luck getting this system up and running successfully. Cheers. :^)
David Browne did some fast muscle design simulation: https://youtu.be/J7RxSPLLw-s
>>29390 Cool. I'm going to check this out NoidoDev, thanks!
I had some idea about using symbols to let the AI do reasoning about objects and their positions in the world. I thought of something like ASCII art. So basically a picture of a view would be mapped into a 2D or 3D model of the world based on symbols which can be moved around. Then I had the idea that there might be game engines useful as a base for that. I found these: > PyPlayScii is a Python package that enables a simple object-oriented implementation of ASCII art games. By assigning the shapes of the game objects with texts separated by newline characters and determining what to do every frame, you can quickly implement an ASCII art game which can be run directly in a terminal window. The following shows an example of an ASCII art game implemented with PyPlayScii. https://pypi.org/project/pyplayscii/ An alternative in Scala, and probably more used and supported, would be the CosPlay engine: https://cosplayengine.com
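As a toy illustration of the symbol-grid idea (the symbols, grid size, and object names are made up for the example):
```
# Map a scene into a small 2D grid of symbols that can be queried and updated.
EMPTY = "."

def make_grid(width, height):
    return [[EMPTY for _ in range(width)] for _ in range(height)]

def place(grid, symbol, x, y):
    grid[y][x] = symbol

def move(grid, x, y, new_x, new_y):
    grid[new_y][new_x], grid[y][x] = grid[y][x], EMPTY

def render(grid):
    return "\n".join("".join(row) for row in grid)

world = make_grid(8, 4)
place(world, "W", 2, 1)   # the waifu
place(world, "T", 6, 2)   # a table
move(world, 2, 1, 3, 1)   # the waifu steps one cell to the right
print(render(world))
```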
>>32210 This is a very-cool idea, Anon. Also, thanks for the links. Cheers. :^)

Speech Synthesis/Recognition general Robowaifu Technician 09/13/2019 (Fri) 11:25:07 No.199 [Reply] [Last]
We want our robowaifus to speak to us, right?
en.wikipedia.org/wiki/Speech_synthesis https://archive.is/xxMI4
research.spa.aalto.fi/publications/theses/lemmetty_mst/contents.html https://archive.is/nQ6yt
The Tacotron project:
arxiv.org/abs/1703.10135
google.github.io/tacotron/ https://archive.is/PzKZd
No code available yet; hopefully they will release it.
github.com/google/tacotron/tree/master/demos


Edited last time by Chobitsu on 07/02/2023 (Sun) 04:22:22.
336 posts and 136 images omitted.
>>31027 I've been thinking about designing something similar, now I'm totally gonna s̶t̶e̶a̶l̶ be inspired by this.
>>31027 That is great. I mentioned doing something sorta the same with facial expressions. I believe this is the same sort of "framework" or idea. Here's the paper: "Real-time lip synchronization between text-to-speech (TTS) system and robot mouth". Well, I can't upload it. I get an error saying, "Tor users can not upload files". What??????? Here's an address for the paper. https://sci-hub.ru/10.1109/roman.2010.5598656
>>31049 >Well I can't upload it. I get an error saying,"Tor users can not upload files". What??????? Lol, welcome to my world! :D TBH I think Robbit must've disabled file posting by Torfags. I hope he changes that soon.
Just wanted to mention that Suno, the AI music creation model, is based on Bark, the speech generation model. They needed around two years to get from there to where we are now. I have a source, a video where this is mentioned, but it's also about a lot of other things; this just gets mentioned in it.
>>32169 Neat! That's an interesting heritage. Impressive results in a fairly short time, too. Thanks, NoidoDev. Cheers. :^)

Emmy The Robot Robowaifu Technician 04/15/2024 (Mon) 20:31:05 No.30919 [Reply] [Last]
Welcome all Nandroids fans to the Emmy thread, for discussing and posting about EtR. Off-topic posts and personal attacks will be deleted. --- Also, be sure to check out Emmy-Pilled's project thread! (>>25306) Important Community Links: Boorus, etc.: https://nandroid.booru.org/index.php https://emmytherobot.art/ (Jumbo controlled, be careful.) Google Docs: https://docs.google.com/spreadsheets/d/1mXuNh9ESedCiDZclVuz9uiL7nTNk3U9SgCE_CRHi3Us/htmlview# Webtoons: https://m.webtoons.com/en/canvas/emmy-the-robot/list?title_no=402201 > previous threads : >>27481 >>26629
Edited last time by Kiwi_ on 06/24/2024 (Mon) 18:13:49.
541 posts and 300 images omitted.
>>32190 built for BWC
>>32192 off model
>>32192 Look at how blacked they made my bot.
new thread new thread >>32205 new thread new thread

LLM & Chatbot General Robowaifu Technician 09/15/2019 (Sun) 10:18:46 No.250 [Reply] [Last]
OpenAI/GPT-2 This has to be one of the biggest breakthroughs in deep learning and AI so far. It's extremely skilled in developing coherent humanlike responses that make sense, and I believe it has massive potential. It also never gives the same answer twice. >GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like—it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing >GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. Also, the current public model shown here only uses 345 million parameters; the "full" AI (which has over 4x as many parameters) is being withheld from the public because of its "potential for abuse". That is to say, the full model is so proficient in mimicking human communication that it could be abused to create new articles, posts, advertisements, even books, and nobody would be able to tell that there was a bot behind it all.
<AI demo: talktotransformer.com/
<Other Links: github.com/openai/gpt-2 openai.com/blog/better-language-models/ huggingface.co/


Edited last time by Kiwi_ on 01/16/2024 (Tue) 23:04:32.
268 posts and 84 images omitted.
Some guy (Morgan Millipede) started to reverse engineer Neuro-Sama: https://youtu.be/uLG8Bvy47-4 - basically just a humorous introduction on how to do this (he has a $4k computer, though, and she's slower in her responses at the beginning). 4chan responded: https://youtu.be/PRAEuS-PkAk - Her response time improved since the first video.
>>30821 Lol. Thanks NoidoDev, I'll try to make time to look these over. Cheers. :^)
>llama3-70b on Groq runs at 300 tokens/s for 7k tokens
>mixtral-8x7b at 550 tokens/s for 7k tokens
>my tinyllama-1.1b model extended to 12k tokens runs at 0.5 tokens/s
I don't feel so good, bros. How do we make faster models? I have an idea to use Matryoshka representation learning to reduce the hidden dimension size dynamically: https://arxiv.org/abs/2205.13147 but even if I truncate the model's 2048 dimensions down to 512 dimensions, it will perform at 8 tokens/s at best. And who knows how much slower it will be once I get to 32k context. If it's possible to reduce 90% of the tokens to 64 dimensions, then it might get 70 tokens/s at the very most, but GPU latency will probably fuck that down to 20 tokens/s. I could also prune a few layers of the model, quantize it to 4 bits, and implement mixture of depths https://arxiv.org/abs/2404.02258 but that will only give a tiny speed-up and I don't want the accuracy to drop further than it already has. With the much smaller model size, though, I could convert it into a sparse mixture-of-experts model https://arxiv.org/abs/2401.04088 with 16 experts to make up for the loss in accuracy without sacrificing speed. The model will eventually be finetuned with self-rewarding ORPO too, hopefully providing a boost in usefulness to overcome its barebones compute, although I'll likely use Llama3-70b to bootstrap the reward labels until it's capable of consistently self-improving on its own. Odds ratio preference optimization (ORPO): https://arxiv.org/abs/2403.07691 Self-rewarding LMs: https://arxiv.org/abs/2401.10020 The T5 efficient model worked fine with a hidden dimension size of 512 after finetuning: https://arxiv.org/abs/2109.10686 And Matryoshka representation learning also worked well using a 16-dimension embedding for a 1k-class classification task. I forget the paper, but I remember reading one years ago where they found some layers in transformers are only making a decision between a few choices, so a large hidden size might not be necessary in those cases. To convert the model's hidden states to Matryoshka I plan to add importance biases to parameters and train the biases with the rest of the parameters frozen, then take the softmax over them and top-k. After training, the parameters could be sorted and the importance biases pruned, and then the model parameters could be finetuned. I may have to train an even smaller model from scratch, though, since TinyLlama uses 32 attention heads.
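A minimal PyTorch sketch of the Matryoshka-style nested training objective mentioned above (the prefix sizes, the shared output head, and the uniform loss weighting are assumptions for illustration, not the anon's actual plan):
```
import torch.nn as nn
import torch.nn.functional as F

class MatryoshkaLMHead(nn.Module):
    """LM head trained so that prefixes of the hidden state stay usable,
    letting the hidden dimension be truncated at inference time."""

    def __init__(self, hidden_dim=2048, vocab_size=32000,
                 nested_dims=(64, 256, 512, 2048)):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, vocab_size, bias=False)
        self.nested_dims = nested_dims

    def forward(self, hidden_states, targets):
        # hidden_states: (batch, seq, hidden_dim); targets: (batch, seq)
        loss = 0.0
        for d in self.nested_dims:
            # Use only the first d hidden dims and the matching weight slice.
            logits = F.linear(hidden_states[..., :d], self.proj.weight[:, :d])
            loss = loss + F.cross_entropy(
                logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        return loss / len(self.nested_dims)
```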
>>31006 >use Matryoshka representation learning to reduce the hidden dimension size dynamically This seems both interesting & promising, Anon. Good luck with your research. Cheers. :^)
Kyutai - fast and unhinged, the real girlfriend experience: https://youtu.be/ZY2hBv9ob8U https://youtu.be/bu7-YODAcfs

Actuators For Waifu Movement Part 3 Kiwi 12/06/2023 (Wed) 01:18:16 No.27021 [Reply] [Last]
(1st thread >>406, 2nd thread >>12810) Kiwi back again with a thread for discussing actuators to move your waifu! Part Three! Let's start with a quick introduction to common actuators!
1. DC motors: these use brushes to switch the ferrous-core electromagnets on a rotor to rotate its magnetic field relative to surrounding magnets! They're one of the cheapest options with an average efficiency range of 30 to 90%. Larger DC motors and motors with higher turn counts are more efficient.
1.5 Coreless DC motors: by removing ferrous materials, losses from hysteresis are almost eliminated, dramatically increasing efficiency to nearly 90% even in small motors. Eliminating the ferrous materials reduces flux focusing, resulting in weaker fields and higher speeds.
2. Brushless DC motors (BLDC): these use a controller to switch the electromagnets on a stator to rotate the magnets of a rotor! Without brushes, they have the potential to be more efficient with higher power density compared to DC motors. Their efficiency and behavior vary depending on the algorithm and sensors used to control them. Coreless brushless motors exist but are rare and only used for very niche applications.
3. AC motors: a wide and incredibly varied category. They all rely on AC's frequency to control them, with single-phase AC motors relying on shaded poles, capacitors, or some other method to induce a rotating magnetic field. 3-phase AC motors naturally have a rotating field, which usually gives them higher efficiency and power density. Notably, most AC motors are brushless. The most commonly used brushed AC motor is the universal motor, which can also run on DC (hence the name).
4. Stepper motors: brushless motors with ferrous teeth to focus magnetic flux. This allows for incredible control (stepping) at the cost of greater mass, subsequently giving them higher rotary inertia. Usually 50 to 80% efficient depending on control algorithm/speed/quality of the stepper. Due to their increasing mass production (& ubiquitous low-cost controllers), they have appeal as a lower-cost alternative to BLDC motors if one carefully designs around them.
5. Coiled nylon actuators! These things have an efficiency rating so low it's best to just say they aren't efficient. (0.01% typical, 2% achieved under extremely specific conditions in a lab.) Though they are exciting due to their incredibly low cost of fabrication, they're far too slow and the energy requirements are nonsensical. https://youtu.be/S4-3_DnKE9E https://youtu.be/wltLEzQnznM
6. Hydraulics! These rely on the distribution of pressure in a working liquid to move things like pistons. Though popular in large-scale industry, their ability to be used in waifus has yet to be proven. (Boston Dynamics' Atlas runs on hydraulics but it's a power guzzler and heavy.) Efficiency varies wildly depending on implementation. They would work great for a giantess!
7. Pneumatics, hydraulics' lighter sister! This time the fluid is air! This has the advantage in weight. They aren't capable of the same power loads hydraulics are but, who wants their waifu to bench press a car? (Too loud and inefficient for mobile robotics.)
8. Wax motors: hydraulic systems where the working fluid is expanding melted (commonly paraffin) wax! Cheap, low power, and produce incredible forces! Too bad they're slow and hard to control.
9. Explosions! Yes, you can move things through explosions! Gas engines work through explosions! Artificial muscles can be made by exploding a hydrogen and oxygen mixture in a piston, then using electrolysis to turn the water back into hydrogen and oxygen. None of this is efficient or practical but it's vital we keep our minds open!
Though there are more actuators, most are derivatives or use these examples to work. Things like pulleys need an actuator to move them. Now, let's share, learn, and get our waifus moving!


Edited last time by Chobitsu on 12/06/2023 (Wed) 03:06:55.
149 posts and 41 images omitted.
>>31961 >High pull force magnets It's important to remember you are given the force at contact with steel, which is pretty close to your guess. >400+ pounds of force at 40W You can indeed get tremendous forces at low currents. The problem is this requires the steel to be touching the electromagnet, which would prohibit rotation. Using many of them to create a rotating magnetic field would be more expensive, heavier, and larger than a comparable purpose-built motor. What I think you're going towards are solenoids. They will provide the forces at low power you are looking for. This presents you with 2 problems to solve. 1. How will you control the stroke to fit your needs? 2. How will you retain position without the solenoid becoming a heater? Some helpful links: https://science.howstuffworks.com/solenoid.htm https://audioxpress.com/article/voice-coils-a-tutorial https://www.machinedesign.com/mechanical-motion-systems/article/21836669/what-is-a-voice-coil-actuator
>>31961 >>31978 Thanks, Anons.
> (actuators-related : >>31995)
>>31978 Great graph. Very informative. >What I think you're going towards are solenoids No, I was just showing a general rule of thumb of what sort of forces we could get for what power. I do realize the force they are quoting is with the magnet directly connected to a thick steel plate. A thin one would not show this sort of force; it would have to be thick. But I have brainstormed solenoid-type actuators here: >>9984 >>10002 It's a terribly retarded idea, and likely noisy, but...really cheap and simple.
> (actuator/joint-braking convo-related : >>32321, ...) >=== -minor edit
Edited last time by Chobitsu on 07/22/2024 (Mon) 01:31:54.

R&D General NoidoDev ##eCt7e4 07/21/2023 (Fri) 15:25:47 No.24152 [Reply] [Last]
This is a thread to discuss smaller or general waifu-building problems, solutions, proposals and questions that don't warrant a thread or touch on more than one topic. In a way this is a technical meta, minus news. Keep it technical. A lot of topics in the old thread here >>83 have a thread of their own by now. The main topics in the old thread, with links to the related dedicated threads, are listed here - it was mostly about actuation at the beginning. Topics in the old OP:
- liquid battery and cooling in one (flow batteries) >>5080
- artificial muscles (related to actuators >>12810)
- high-level and low-level intelligence emulation (AI) (related to AI >>77 >>22 >>250 >>27 >>201)
- wear and maintenance, including repairs
- sanitation >>1627 (related to actuators >>12810)
> cheap hydraulic and pneumatic muscles
> woven sleeves out of strong nylon fishing line
> exhaust excess heat by breathing and panting (related to thermal management >>234) >>1635 (related to energy systems >>5080)
> sitting in her 'recharging chair'


133 posts and 44 images omitted.
>>31848 Not inherently, because you could easily use something like plywood or plastic sheets to make them less delicate and longer-lasting as a result, so it would no longer fall under papercraft in such cases.
>>31860 >2.5D robotics True, it doesn't always fit into the papercraft topic. Maybe there needs to be a thread of its own at some point, if there's enough to post about it.
I like this robot's eyebrows. I'm trying to figure out how they made the corners move. Just regular servos? And I wonder why they made the corners go back that far, when shorter eyebrows would look more natural.
>>31835 >>31848 >>31860 >>31862 Glad to see this topic being brought back up here Anons. It was a good idea years ago, and it's still a good one today! Cheers. :^) >>31917 My general impression from his videos is that there are two simple servos per eyebrow. Both located at the furrow of the brow; one controlling the rotation, one controlling the vertical 'sliding'? (cf. : >>15287) . The other end(s) seems simply to be fixed in place. I hope you'll give it a shot Anon, and let us here know about your results. Cheers. :^) >=== -minor edit
Edited last time by Chobitsu on 07/01/2024 (Mon) 09:01:12.
Full color image transfer to the base of 3D prints. https://www.youtube.com/watch?v=eElO5aso8kY

Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240 [Reply] [Last]
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, she's a cute AI from Assassination Classroom whose body is a giant screen on wheels.
244 posts and 118 images omitted.
>>31943 Nice!! >hmm, let's just find out how deep this rabbit hole goes... >*click* >*click* >*click* O.O ACCELERATE, BROS Thanks, Anon! Cheers. :^) https://lemonolis.com/ >=== -rm hotlink
Edited last time by Chobitsu on 07/02/2024 (Tue) 15:48:44.
>>31944 Looks like they're demo'g in Akihabara next month: https://event.vket.com/2024Summer/real >=== -sp edit
Edited last time by Chobitsu on 07/02/2024 (Tue) 03:09:45.
>>31943 this reminds me of the Patrick Bateman walking with headphones on meme
