/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The Mongolian Tugrik has recovered its original value thanks to clever trade agreements facilitated by Ukhnaagiin Khürelsükh throat singing at Xi Jinping.

The website will stay a LynxChan instance. Thanks for flying AlogSpace! --robi

Knowing more than 100% of what we knew the moment before! Go beyond! Plus! Ultra!


R&D General Robowaifu Technician 09/10/2019 (Tue) 06:58:26 No.83
This is a thread to discuss smaller waifu building problems, solutions, proposals and questions that don't warrant a thread. Keep it technical. I'll start.

Liquid battery and cooling in one
Having a single "artificial blood" system for liquid cooling and power storage would eliminate the need for both a vulnerable solid-state battery and a separate cooling system, and would solve the problem of extending those systems to the extremities.
I have heard of flow batteries; you'd just need a pair of liquids that are safe enough and not too sensitive to changes in temperature.
This one looks like it fits the bill. The downside is that your waifu would essentially be running on herbicide. (though from what I gather, it's in soluble salt form and thus less dangerous than the usual variety)
https://www.seas.harvard.edu/news/2017/02/long-lasting-flow-battery-could-run-for-more-than-decade-with-minimum-upkeep

How close are we to creating artificial muscles? And what's the second best option?
Muscles are perfect at what they do: they're powerful, compact and efficient; they carry their own weight; they aren't dependent on remote parts of the system; they can be controlled precisely; and they can perform many roles depending on their layout alone.
We could grow actual organic muscles for this purpose already but that's just fucking gross, and you'd need a lot of extra bloat to maintain them.
What we need are strands of whatever that can contract using electrical energy. Piezo does the trick at small scales, but would it be enough to match the real thing? There have been attempts, but nothing concrete so far.
What are some examples of technology that one could currently use instead?

High level and low level intelligence emulation
I've noticed a pattern in programs that emulate other computing hardware.
The first emulators that do the job at acceptable speeds are always the ones that use hacks and shortcuts to get the job done.
It comes down to a tradeoff. Analyzing and recompiling or reinterpreting the code itself on a more abstract level will introduce errors, but it is an order of magnitude more efficient than simulating every part of the circuitry down to each cycle. This is why a relatively high-level emulator of a 6th-gen video game console has system requirements close to those of a cycle-accurate SNES emulator.
Now, I want to present an analogy here. If training neural networks for every damn thing and trying to blindly replicate an organic system is akin to accurately emulating every logic gate in a circuit, what are some shortcuts we could take?
It is commonly repeated that a human brain has immense computing power, but this assumption is based only on the number of neurons observed, and it's likely that most of them have nothing to do with intelligence or consciousness. If we trim those, the estimated computing power drops to a more reasonable level. In addition, our computers just aren't built for doing things the way neural systems do. They're better at some things, and worse at others. If we can do something in a digital way instead of trying to simulate an analog circuit doing the same thing, that's more computing power saved, possibly bridging the gap far earlier than expected.
The most obvious way to handle this would be doing as many mundane processing and hardware control tasks as possible in an optimized, digital way, and then using a GPU or another kind of circuit altogether to handle the magical "frontal lobe" part, so to speak.
I'd just say that I think it's definitely possible to run good AI on mobile computers. We just haven't figured out how to -- yet. I'll remind everyone ITT that the human brain runs on about 12 watts of power, continuous. That's obviously far less than the gigantic sums of energy being sucked up by big tech attempting this. Their approach is fundamentally wrong, simple as.
>>9325
Yeah, hard disk speed will be the greatest limiting factor in this. Some caution needs to be taken not to cause page faults to the system's virtual memory. This can be avoided by maintaining enough free memory and storing parameters as many small tensors rather than single big ones. I did some tests in PyTorch and the throughput to my GPU is 1.4 GB/s, which is nowhere close to my HDD speed of 100 MB/s. So long as parameter saving/loading remains within hard disk limits, the model can be scaled as much as desired breadth-wise across experts, or possibly depth-wise across layers (if I can find a way to unload and reload parameters efficiently during gradient checkpointing without waiting on disk reads). The entire network doesn't need to be reloaded either: a 500-million-parameter model could have 25 million parameters saved and read per second.
It will require a new neural network architecture that works like a switch transformer but is optimized around hard disk throughput. To my knowledge this hasn't been explored before, at least publicly, because SotA models are too big and inefficient to utilize the disk in any reasonable manner, and if a model isn't pushing SotA on some pre-established benchmark then the paper is either rejected or ignored even if accepted. Another idea I have is that it might be possible to pre-route minibatches so expert networks can do several training steps before being saved to disk while the next ones are preloaded. This way models could use much larger expert networks, on the order of 1 GB.
Something I've been experimenting with is absolutely gigantic transformers that use parameter sharing like ALBERT. The model I'm working on right now has dozens of layers and a hidden size of 2048 (slightly larger than GPT-2 XL), but the whole model is only 60 million parameters (120 MB), which is slightly smaller than DistilGPT2. It would use well over 32 GB of memory if it didn't have factorized embeddings, parameter sharing between layers, gradient checkpointing and some other tricks, but since it does, it only needs about 0.8 GB while training with a batch size of 1. Despite using so little memory it's doing a ton of calculations for each layer, so a training step takes about 4 seconds, which would be more than enough time to read and write the whole model to disk. According to this paper, training large then compressing is also more computationally efficient than training a smaller model: https://arxiv.org/pdf/2002.11794.pdf And from what I've seen so far I would agree. The convergence speed is stupidly fast using such a large hidden size. I haven't tried compressing it yet but the option is there. It'll take some tinkering to find what utilizes my compute budget best and what results in a good compressed model that can run fast on lower-end systems.
My ideal right now is to create a language model that can do inference in near-realtime on a desktop CPU for use in games and other applications. Creating a smaller model that works on a Raspberry Pi is possible, but I don't think the quality will be good enough without getting these switch transformers working first. For now someone could finetune a pretrained T5-small model, which is only 60M parameters, and use 16-bit float precision. If someone has PyTorch working on the Raspberry Pi I'd be happy to help write some scripts to do that. The lite transformer library I'm working on should be useful for creating models for Raspberry Pis, but they will take several weeks or months to train from scratch.
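For illustration, the ALBERT-style layer sharing mentioned above is only a few lines in PyTorch. A minimal sketch, with placeholder sizes rather than the exact model described:
[code]
# ALBERT-style parameter sharing: one transformer layer's weights get reused
# for every "layer" of the forward pass, so extra depth adds compute but no
# parameters. Sizes here are illustrative assumptions, not the model above.
import torch
import torch.nn as nn

class SharedLayerTransformer(nn.Module):
    def __init__(self, hidden=2048, heads=16, depth=24):
        super().__init__()
        # One layer instantiated once...
        self.layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, dim_feedforward=4 * hidden)
        self.depth = depth

    def forward(self, x):
        # ...applied depth times, sharing the same weights on every pass.
        for _ in range(self.depth):
            x = self.layer(x)
        return x

model = SharedLayerTransformer()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters, independent of depth={model.depth}")
[/code]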
It would be better if the model weights could be loaded and run in mlpack because PyTorch just hogs up way too much memory. For now the best solution is what >>9332 said: the hard work can be done by a server and sent via wifi, while onboard there could be simpler AI that functions like instincts to make sure the robowaifu doesn't fall over or crash into something. Our robowaifus won't be leaving home anytime soon anyway, even if they could walk, and I can't imagine a Raspberry Pi managing to do speech recognition, speech synthesis and language model inference all together in realtime unless someone comes up with a more efficient model for speech.
I haven't checked out BOINC yet. Looking into distributed training right now feels a bit like putting the cart before the horse, but if you have a link I'll take a quick look and see.
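As a toy illustration of that server/onboard split, a hedged sketch; the host, port, timeout and fallback line are all made-up assumptions, not a real protocol:
[code]
# The onboard SBC sends recognized text to a LAN server for language-model
# inference, and falls back to a canned "instinct" response if the server
# is slow or unreachable. Address and framing are assumptions.
import socket

SERVER = ("192.168.1.50", 5000)  # assumed LAN address of the home server

def ask_server(utterance: str, timeout: float = 0.5) -> str:
    try:
        with socket.create_connection(SERVER, timeout=timeout) as sock:
            sock.sendall(utterance.encode("utf-8"))
            sock.shutdown(socket.SHUT_WR)  # signal end of request
            return sock.recv(65536).decode("utf-8")
    except OSError:
        # Onboard fallback: keep responding even when the link is down.
        return "Sorry, let me think about that for a moment."

print(ask_server("What's the weather like?"))
[/code]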
>>9338
>and if a model isn't pushing SotA on some pre-established benchmark then the paper is either rejected or ignored even if accepted.
Of course. That is one of the several mechanisms of control being utilized by them to keep good AI out of anyone else's hands.
>Despite using so little memory it's doing a ton of calculations for each layer, so a training step takes about 4 seconds
Well, it's a far better computing model IMO to keep the data and work units up on the accelerator device itself than to invoke the long chain of physics penalties of data transport back and forth. Trying to keep the speed of light (or at least the switching speeds of the silicon gates) as the fundamental limiting factor of your algorithms is practically always going to be a net win in the end. Solve fast, then get out is a good mantra to follow, generally speaking. Certainly from an energy-consumption perspective, reducing the movement of data 'through the docks' should be a high priority.
>Creating a smaller model that works on a Raspberry Pi is possible, but I don't think the quality will be good enough without getting these switch transformers working first.
Frankly, quality isn't the issue r/n. Existence is. It's far better to have a dry crust of crude brown bread than literally nothing at all. At the moment, almost all others are starving with nothing in hand whatsoever. And ofc, I'm not suggesting (even remotely) that the training occur on embedded devices. Simply that the runtime experience does. We should be able to pre-compute these models and then distribute them freely to Anons everywhere.
>It would be better if the model weights could be loaded and run in mlpack because PyTorch just hogs up way too much memory.
Not to beat a dead horse (well, not much at least :^), but Python was always a mistake. We need to go strictly with compiled languages. mlpack is a brilliant choice. Both they and the Armadillo team deserve huge applause for what they are doing, which IMHO will be the key to allowing robowaifus to become real at all.
And as far as cluster computing at home goes, sure, that's fine atm. But it should only be a temporary stopgap, not a final destination. The SoC SBCs clustered together inside our robowaifus will be the approach that works in the end. They will not only do speech recognition, but computer vision, text generation, speech synthesis, kinematics calculations, robotics animation, sensor fusion, agenda planning. The whole package. And they will be able to do everything on just a couple hundred watts of power as well. This must be our goal here. Anything less will fall short, and ultimately remain only a nerd-niche rather than the groundswell sea-change that can free men everywhere from gynocentrism, abuse and loneliness.
I need to put together a new JSON batch for Waifusearch. I'll do so, find the BOINC link for you, and also push the data up to catbox today or tomorrow. Good work Anon, just keep moving forward.
>===
-minor prose edit
Edited last time by Chobitsu on 03/30/2021 (Tue) 21:23:48.
>>9334
I think you misunderstood me. To make it clearer: the servers would be at our homes, just not inside the body of the waifu. I didn't mean servers in some data center. At the beginning this will be necessary, later maybe optional, but better for some tasks. There might be a difference in quality and price as well. The cheapest ones are for people who can't afford an additional computer; others might do it to improve her learning speed and reaction time at home. Of course, they should become more and more independent, no disagreement there.
>>9338
Thanks for your ideas, btw. They're amazing, as much as I can tell. Good to see that the robowaifu dream attracts smart and creative people.
>>9346
>they should become more and more independent, no disagreement there.
Agreed, and thanks for clarifying that for me Anon. Yes, this will certainly be a progression of the engineering choices. As you seem to suggest, necessarily crude at first, then advancing to a more usable state with time. Things won't be perfect initially, and that's OK. I'd suggest we should all keep the end goal clear in our minds however. Mobile, autonomous robowaifus are feasible soon, and they're also the only effective robowaifu ideal that will actually change everything for us all.
Open file (1.99 MB 410x230 python.gif)
>>9343
>And ofc, I'm not suggesting (even remotely) that the training occur on embedded devices.
Maybe not creating pretrained models, but it would be good if embedded devices could do a little bit of fine-tuning on models to learn new things on the go or optimize themselves.
>We need to go strictly with compiled languages.
I'm gonna make an effort to switch to mlpack by the end of this year and NimTorch for making pretrained models. All the devs working on mlpack and Armadillo have been doing outstanding work. I went to grab the PyTorch 1.8.1 update today and it's 2 GB now. Damn thing was too big when it was 300 MB years ago. It's just getting ridiculous at this point. Speaking of which, NimTorch might be a good solution for the Raspberry Pi until there are easy-to-use mlpack models available. I'll have to get a Raspberry Pi for myself and try it out.
>The SoC SBCs clustered together inside our robowaifus will be the approach that works in the end.
I don't think it's a pipe dream to imagine just one SBC doing everything. AI has already reached a tipping point. There's so much progress being made I can't keep up with it. Sometimes I spend a whole week reading papers and don't make a dent in all the progress. I'm still catching up on ones from 2018. If algorithmic efficiency continues to increase at an exponential pace there might come a day when we can perform all these functions with very little power. If I could go back in time 2 years and tell myself the stuff I'd be doing today, I wouldn't have believed it at all. In 1-2 years we'll probably be doing crazy stuff on SBCs that seems like fantasy today. So I think the more important question to ask is how we can make the hardware as energy-efficient as possible. The software side has hundreds of thousands of engineers and researchers working on it, but robowaifu hardware not so much.
>>9346
It's amazing seeing the devotion robowaifudevs from Japan put into their work. If it wasn't for them I wouldn't work as hard as I do, and even then it's only a tenth of what they're putting out. I hope one day there's more communication between Japan and the West. The only way that's gonna happen though is if we create something that gets their attention.
>>9354
I'm not so much against Python as others here are, since it's easy to learn and use, and also widely used. However, I'm going to look into Nim as well if it really is going to be used here, as soon as I seriously get into ML. Not sure how much weight to give the many PyTorch tutorials and repositories out there; I assume they can't really be ignored.
>>9354
>that pic
kek. Just a quick pop-in to give you the crosslink to anon's BOINC posting from before. I should be back later today Anon to continue further.
mlcathome >>9105 >>9106
>>9356
If you're getting started with ML, PyTorch is the best way to go. Nim is a fairly new language and NimTorch hasn't received any updates in 2 years. It doesn't have full support for PyTorch's features, but it compiles straight to C++. I've taken a closer look at it now, and by porting some features it should be able to run transformer models. It's missing some things like the cross-entropy loss function needed to train them and the Module class. If the missing features are easy to implement in NimTorch I might do that, but I feel my efforts would be best spent getting PyTorch models working in mlpack.
>>9363
It appears they're using the Torch C++ API. BOINC sends a model file to the client, the client runs it, and then reports back with the results and the updated model file. The work needed to do distributed training on the same model is a lot more complicated than that. https://gitlab.com/mlcathome/mlds
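For reference, the missing loss is small enough to port by hand. A hedged PyTorch sketch of the math such a port would need to reproduce (this is not NimTorch code):
[code]
# Cross-entropy from log-softmax + gather, mirroring
# torch.nn.functional.cross_entropy with mean reduction.
import torch

def cross_entropy(logits, targets):
    log_probs = torch.log_softmax(logits, dim=-1)
    # Log-probability of the correct class for each sample in the batch.
    picked = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return -picked.mean()

logits = torch.randn(4, 10, requires_grad=True)  # (batch, classes)
targets = torch.randint(0, 10, (4,))
loss = cross_entropy(logits, targets)
loss.backward()  # gradients flow, so it's usable for training
[/code]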
>>9354
>tl;dr
I've been tracking down information on NimTorch, ATen, and this nifty little tool Anon just brought to our attention, aitextgen >>9371. I'll make some efforts over the next few weeks to see if a) I can get these up and working on my old RPi 2b, and b) I can 'translate' some common PyTorch examples over to it and run them successfully this way. Also, mlpack and Armadillo seem to be under active development, so there's that too.
Yeah, it's amazing to see what you and others here have managed over the past couple of years. I'm feeling quite confident we are going to actually pull this all off before too long.
>>9370
Thanks for taking the time to give it the once-over Anon. BOINC is very well-established as a distribution platform by now, but it's certainly not the only way forward. I'm sure we'll eventually work something out. Still planning to go with Robowaifu@home as the name BTW?
>>9356
Don't take our bantz too srsbzns Anon, it's just an ongoing joke. I rather like the language when I actually force myself to sit down to it (I don't often need to). It is very easy to learn, relatively speaking. I'd also just add that yes, you're certainly right about one thing: Python's certainly very popular >>9036 :^)
>>9405
Thanks, then Nim isn't really that interesting to me. I'd hope we could do some Lisp (family) instead, Common Lisp, Scheme or Clojure, for those who don't want to work too close to the machine. Even C# has a better image than C++, though I've never really tried either.
>>9405
>I'd hope we could do some Lisp (family) instead, Common Lisp, Scheme or Clojure, for those who don't want to work too close to the machine.
Sure, by all means Anon! Would you mind sharing how you want to proceed with your project? I mean, do you think you want to work on a physical robowaifu as well, or just stick with the AI side of things, for instance?
Just doing a little simple math.
https://www.tweaktown.com/news/74601/the-worlds-largest-chip-2-6-trillion-transistors-and-850-000-cores/index.html
Nov 3 2020: Cerebras has just unveiled the world's largest chip, which packs a mind-boggling 2.6 trillion transistors and 850,000 cores, built on TSMC's 7nm process.
If you have 2.6 trillion transistors on one chip and each artificial silicon neuron takes 10,000 transistors, that's 260,000,000 neurons per chip, so you would need roughly 385 chips for a 100-billion-neuron brain. (Some say 100 billion is a good number of neurons for an average human brain, but there is some dispute about this.) So about 9 doublings gives us a human at wafer scale. I suspect that you would not need 10,000 transistors for each neuron, and that each neuron of this size could simulate a hundred or even a thousand biological neurons, because neurons are so slow. The figures I've seen show human-compatible computation by 2025. Give it 5 years after that for the software, and we could have a super-intelligent robowaifu, totally loyal to us and programmed to make us happy.
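Checking that estimate (all inputs are the post's assumptions, not measurements):
[code]
# Transistors per chip / transistors per artificial neuron, then how many
# chips a 100-billion-neuron brain would take.
import math

transistors_per_chip = 2.6e12
transistors_per_neuron = 1e4            # assumed cost of one silicon neuron
neurons_per_chip = transistors_per_chip / transistors_per_neuron
chips_needed = 100e9 / neurons_per_chip
doublings = math.log2(chips_needed)     # doublings of one chip's capacity

print(f"{neurons_per_chip:,.0f} neurons/chip")  # 260,000,000
print(f"{chips_needed:.0f} chips")              # ~385
print(f"{doublings:.1f} doublings")             # ~8.6, call it 9
[/code]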
>>9407 Pretty mind-boggling ramifications. We better hurry /robowaifu/ !!
>>9346 "...The servers would be at our homes..." To lowe cost the waifu could have simple walking and moving around functions embedded. So it could carry stuff, follow you, simple stuff. These functions would take far less computer power than stuff you want to happen at home. Higher functions talking, cleaning, cooking, etc. could be run from local home based servers lowering cost until the computing cost and size come down such that they could be embedded.
>>9448
>until computing cost and size come down enough that they can be embedded too.
You're right about that Anon, all else being equal. Computing costs are going to continue to fall relative to computing power. In fact, the rate of improvement for GPU architectures is still growing faster than exponentially.
>Planetary roller screws
>high speed, high stiffness in linear movement
>suitable for frequent changes in direction
>high-speed acceleration
>high power density
>high peak load acceptance and reliability
>precise control
This looks like something we should keep in mind: https://youtu.be/-JU4Xxwv1TI
>>9958 Thank you Anon!
>>9958
>Planetary roller screws
That is excellent. It got me looking around at related links, and I found this one on ball screws: https://en.wikipedia.org/wiki/Ball_screw
One of the major design decisions for DIY manufacture, I think, is to be able to 3D print as much as we can. It might take a while, but we could afford the slower build time compared to a traditional manufacturer.
So I'm looking at this ball screw link and had an idea. Look at how the balls flow down in a spiral; then there's a recirculating track that feeds them back up to the top to start circulating down the screw track again. Now imagine you have a shuttle that stops the balls and grabs a ball on the way back up. Instead of a rotary motion to feed the balls, you have a shuttle that only moves up and down and releases the balls after it's shuttled them up one ball space. You have a simple solenoid with a coil and a release to drive the balls, which would in turn drive the screw. Driving a motor is much more complicated than just feeding power to a mechanism that would shuttle balls faster or slower depending on the voltage. Reverse could be as simple as reversing the voltage. The shuttle would also act as a simple locking clutch automatically, so you would not need a separate clutch or logic to hold position as you would with a motor.
>>9984
This sounds like quite an interesting idea Anon. Mind drawing us a sketch or two to highlight it? Also,
>Driving a motor is much more complicated than just feeding power...
I'm guessing you mean 'isn't much more complicated'?
>>9984
What?!? I have no idea what this describes. Is someone testing his text generator? Also, these parts could and probably should be bought as metal parts.
>>9990
Actually, he's just referring to the idea of using a solenoid-actuated 'shuttle' (kind of like a tiny elevator, if you like) that moves bearings back to the ready position, instead of bearing raceways loaded up with them. Interesting idea actually. Reducing the number of moving parts has nice benefits, and could conceivably be lighter in weight as well. OTOH, adding in the solenoid/shuttle system brings its own issues into the mix. As with all engineering, it's going to be a trade-off. I'd recommend this anon try his idea out, and compare it against a standard ballscrew system side by side.
>solenoid-actuated 'shuttle'
I don't deny the idea is a bit of a kludge, but I bet if you price one of those planetary roller screws or ball screws made of metal it would freak you out. The price would be high, and we need 650 muscles to make a real-looking humanoid. https://en.wikipedia.org/wiki/List_of_muscles_of_the_human_body
So I found a source for these, and they are $32.95 USD with no motor. You still have to have a motor. https://www.banggood.com/SFU1204-400mm-Ball-Screw-With-SFU1204-Single-Ballnut-For-BKBF10-End-Machine-CNC-Parts-p-1091450.html?utm_campaign=BestToolLocker_March&utm_content=2635&p=KR28032004379201507P&cur_warehouse=CN
Here's a video on ball screws: https://www.youtube.com/watch?v=WoPPwGxgWEY
Here's a video on solenoids; see at 6:16 how the current pulls the metal part forward: https://www.youtube.com/watch?v=BbmocfETTFo
Now imagine every time the solenoid moves it has one ball trapped and pulls it one ball length. If this is part of the ball screw, it would push the screw one ball length. Make sure the solenoid is spring-loaded and has a cut-off switch that is hit every time the solenoid moves one length. So: current moves the steel shuttle, which moves one ball length and turns the screw; at full extension, the ball hits a switch that disconnects the current; the spring moves the switch back, and if power is still there, it repeats with another ball.
Now I fully accept that this is an abortion, but... it's damn cheap and uses stuff you can make yourself. If you have motors, they have to have controllers to start, stop and hold them. Say you lift a cup with an arm: you need to hold it balanced, or it drops as soon as power is taken off. With the shuttle and ball screw, the shuttle won't let the balls through, so you have a built-in clutch. Some of the 3D printer plastic with added carbon fiber is very strong. Not as strong as steel, but make it bigger and the force it can hold is just as good.
It just occurred to me: maybe the cheapest and easiest way to make the solenoid is to instead make an actuator that's like a rail gun. https://en.wikipedia.org/wiki/Railgun You only need two conductors and a crossover conductor, which would also be your shuttle for the balls. No winding of coils, and you could just slide a couple of copper rails and a copper cross piece into slots made when you 3D print the parts.
The goal of all this is that if you don't have some sort of transmission, the current needed will be too much. Hence the ball screw to raise torque. A lot of what I'm doing is just thinking out loud and gaming out what could be cheap and doable without a lot of cash or specialized machinery.
>>10002
>Now imagine every time the solenoid moves it has one ball trapped and pulls it one ball length. If this is part of the ball screw, it would push the screw one ball length. Make sure the solenoid is spring-loaded and has a cut-off switch that is hit every time the solenoid moves one length. So: current moves the steel shuttle, which moves one ball length and turns the screw; at full extension, the ball hits a switch that disconnects the current; the spring moves the switch back, and if power is still there, it repeats with another ball.
/k/ would love you Anon. Actually, I hope you can test your idea out IRL. Among the many thousands of engineering tradeoffs needed for a working robowaifu prototype, the size of actuators is certainly going to be an important one. The reduced volume of your bearing slide is certainly a benefit. I'd also try to come up with further design ideas that reduce the bearing count as well. The only way to know for sure is to try the idea out directly Anon.
>>10002
>It just occurred to me: maybe the cheapest and easiest way to make the solenoid is to instead make an actuator that's like a rail gun.
LOL. /k/ would really love you!
>>10002
>actuator that's like a rail gun
Something like this was one of my first "genius" ideas for getting around well-known actuation methods. I put it into the thread for failures and prototypes. Magnetic power is very weak; you won't drag anything with such a little "coil gun" at room temperature.
Maybe letting something rotate along a screw, which is then used to hold something, might work. However, that's basically like a motor, maybe a slower version of it. I had similar thoughts on how to control the eyeballs: maybe not using a motor with fast rotation, but using some magnetism to let them move and hold in place. I plan to use magnets to guide them anyway, and also to transfer power from a motor to the eyeballs, in case I use a motor, which is more likely. The advantage of using magnets here would be that the eyes wouldn't be fixed to some mechanism, and the whole mechanism to move them could be smaller.
The best use case for solenoids is not to use the pin to push something, but to block something coming from the other direction, that is, from the side. That way the blocking power is not related to the strength of the magnetic force, but to the strength of the pin and the whole construction of the solenoid and whatever it is attached to.
>>10030 "Magnetic power is very weak" I'm sure your right about a rail gun being weak but I do not think magnetic power is particularly weak. Look at the size of Teslas motors for his cars and how much torque these have. We need way, way, way less power. I did a little looking around and found a paper I had read before but forgot about. It covers "flux concentrators" for rail guns. A quote,"...The device was used at MIT for high field research and also for industrial metal forming. In 1965, Chapman[20] used a flux concentrator with a tapered bore for accelerating milligram metal spheres to hypervelocities. Using a first stage explosive flux compressor, Chapman managed to reach peak fields in excess of 7 megagauss, starting with an initial field of only 40 kilogauss..." Hmmm...maybe...I get that we don't need hyper velocities but the concentration part helps to raise the force to a small area. This is in turn used to push the balls. The balls and screw guides are in reality a transmission of sorts that allows taking a small fast force and turning into a slower more powerful force. Lucky for us humans are very weak,"...During a bicycle race, an elite cyclist can produce close to 400 watts of mechanical power...", that's not a lot. So just walking around you could get away with less than a hundred watts. https://www.coilgun.info/theorymath/electroguns.htm "...The best use case for solenoids is not to use the pin to push something, but to block something..." I get this. Totally understand but if we are going to make something that we can readily hack up in a garage without spending several thousand dollars for brush-less DC servos or some other such pricey stuff we're going to have to do something a bit odd and get creative. It doesn't matter if it's not perfect if it's cheap and easy to prototype. I really don't like pneumatics or hydraulics. They are very inefficient and super noisy. I think to get something acceptable you will have to make it electric.
>>10044
I just guess that there's a reason that motors with moving parts are used for motion, not just stationary magnets or a coil gun. Feel free to build something that shows what you mean. There are videos on YT where small metal parts move through a wound coil like a train. BTW, if this moves fast and hard, how do you prevent accidents where parts shoot out of the "muscle"?
>spending ... brushless DC servos
The key words here are spending and servos. We just shouldn't do that. I still haven't encountered an argument for why measuring the movements couldn't happen outside of the motor. I still think we don't need these amazing top-notch servos.
>pneumatics or hydraulics. They are very inefficient and super noisy
They aren't always noisy at all; I'm sure that depends on factors like the speed and power of inflation and deflation. Efficiency? What does it matter, if the tank is loaded while she's plugged in? That part probably goes faster than recharging the batteries.
>"I just guess, that there's a reason that motors with moving parts are being used for motion, not just stationary magnets or a coil gun." Most have gears to take the less powerful motor and trade speed of the motor for slower with more torque. The solenoid shuttling balls in the ball roller screw does the exact same but I'm guessing it would be lot easier to build in a garage. Also think of the cost of actuators if you have to have copper conductors the whole length of the motion. Expensive. I'm just throwing out ideas. I do not mind at all if people argue against them. On my part throwing out a lot of ideas means some of them will be stupid but...oh well. Such is the price for progress. I liked the Planetary roller screws here and was brain storming on how to make something that functioned like this without all the expensive gearing. Those gears must cost a fortune. I'm actually enamored of this stuff I posted here. Elastomor actuators. This stuff can even have logic computing elements built into it. WOW. Later down the thread I link to book on this stuff. The main problem with it is it doesn't pull it spreads out or pushes so some system would have to translate the motion into a pull. Possibly some sort of undulating motion could be used. Use that motion to make pumps with fluids that could circulate warm water to keep the temperature of the skin warm and provide muscle action. >=== -rm g_untd links
Edited last time by Chobitsu on 04/22/2021 (Thu) 11:43:54.
The links I referenced were messed up; here they are again. Planetary roller screws >>9958 And I got the name wrong: it's Dielectric Elastomers >>8502
>>10047 BTW, you should be able to delete your own posts to clean things up a bit Anon.
It wouldn't let me. It asked for another password???? It appears that it will only let you delete the very last post you made and not the one before it. If you would delete it I would appreciate it. Sorry for screwing it up. I think I've figured out how to link correctly now.
>>10051 No worries, done. It's not based on post recency (AFAICT), but is based on your IP. Your address probably changed since. Happens with Tor frequently, or with phoneposting from time to time. >I think I figured out how link correctly now. It's easy if you just click on the little post number at the end of a post's headline (if you have JS enabled). BTW, thanks Anon.
>>10623
Sounds good, done.
>No one needs more work.
No worries, it's fine. It's literally in everyone's interest to have things well-organized Anon.
>>9272
>My original idea was just to use text but Plastic Memories shot down that idea fast.
If you're still here Anon, I'm curious if you could spell out your reasoning here. Just as a preface: a) I'm very familiar with Plastic Memories, and b) I wrote BUMP & Waifusearch to use only the filesystem + textfiles, specifically because they are more archivable/distributable/durable that way. At least that's my take on this issue. I'd be glad to hear yours though.
>>16476
>I'm bursting now, with so many things that were fun or made me happy...
>Too many for me to write down in my diary
English is a lossy and inefficient way to encode information. One has to be articulate and verbose to encode a memory into text, and then that text has to be decoded later into a useful format, but most of the information is actually missing. Only the most valuable points get encoded, and the amount of information they can contain is constrained by the language they're in. It's not just text with that problem either; similar problems arise trying to express a sound as a picture, or a walk through a forest as audio. Books, logs, articles and messages have their place, but words have limited usefulness in an intuitive long-term memory. Feelings in particular contain so much information because the soup of different hormones and chemicals in the body is exciting certain memories and inhibiting others. This causes a unique behaviour and thought process to emerge that can never be exactly reproduced again, even with the same ingredients, because the memories change. Remembering these feelings is not something that can be done in text.
>>16479
All fair points, and no debate. But, knowing a little bit about how databases actually work, I'm not yet convinced that they offer, a priori, some kind of enhanced capability for solving this expansive problem. From my perspective that's really more an issue in the domain of algorithms, rather than the data store. Isn't basically any kind of feasible operation on the data doable in either form? It's all just operations on locations in memory ultimately, AFAICT. So a 1 is a 1, whether it's in a DB or a text file on the filesystem. My primary motivation for relying on text files in this type of scenario would be complete openness and visibility of the data themselves. As mentioned, this approach also brings a boatload of other benefits for data safeguards and management. I hope I'm making sense Anon.
>>16480
Both have strengths and weaknesses. Being able to upload files into a robowaifu and automatically index them would be convenient, and using a file system would speed up creating incremental backups with rsync and also offer system file permissions to use. You could have different file-permission groups to prevent strangers or friends from accessing personal data, and be able to store sensitive data in encrypted files or partitions.
Being able to efficiently search and retrieve metadata across different indexes and joined tables will also be invaluable. For example, to store research papers for semantic search, one table might contain the papers and another the sentence embeddings for each sentence in a paper. Then the sentence embeddings can be searched efficiently and return which paper and page the match is found on, plus do other stuff like finding the top-k best-matching papers, or searching certain papers within a given date span or topic. Another table could contain references for papers, so they can be quickly retrieved and searched across as well, with constraints like only searching abstracts or introductions. Other things could be done, like counting how many papers cite a paper, multiplied by a weight of how interesting those citing papers are, to suggest other interesting popular papers. These embeddings wouldn't be limited to only text but could cover images, animations, 2D and 3D visualizations of the text, and other modalities if desired.
Transformers are quite good at translating natural-language questions into SQL queries, so a robowaifu would be able to quickly respond to a large variety of complex questions from any saved data in her system, given the database has proper indexes for the generated queries. I'm expecting robowaifus will have around 4 to 128 TB of data by 2030, and being able to perform complex searches on that data in milliseconds will be crucial.
The metadata database could be automatically built and updated from the file system. A book could have a JSON metadata file and various text files containing the content, making it a lot easier to modify, merge and delete data and manage it with git. This database would be completely safe to lose (though costly to rebuild) and would just be for indexing the file system. It could also be encrypted, hidden behind file permissions, and take system file permissions into account to prevent access to metadata.
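A minimal sketch of that kind of schema using Python's built-in sqlite3; the table and column names are illustrative assumptions, not a settled design:
[code]
# Papers, per-sentence embeddings, and citations as separate indexed tables.
import sqlite3

db = sqlite3.connect("waifu_memory.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS papers (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    topic TEXT,
    added DATE
);
CREATE TABLE IF NOT EXISTS sentences (
    paper_id  INTEGER REFERENCES papers(id),
    page      INTEGER,
    text      TEXT,
    embedding BLOB  -- packed float vector for semantic search
);
CREATE TABLE IF NOT EXISTS citations (
    citing_id INTEGER REFERENCES papers(id),
    cited_id  INTEGER REFERENCES papers(id)
);
CREATE INDEX IF NOT EXISTS idx_sent_paper ON sentences(paper_id);
""")

# e.g. "which papers on actuators cite paper 42, from 2020 on?"
rows = db.execute("""
    SELECT p.title FROM citations c
    JOIN papers p ON p.id = c.citing_id
    WHERE c.cited_id = ? AND p.topic = ? AND p.added >= ?
""", (42, "actuators", "2020-01-01")).fetchall()
[/code]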
>>16485
Thanks for the thoughtful response Anon.
>The metadata database could be automatically built and updated from the file system. A book could have a JSON metadata file and various text files containing the content, making it a lot easier to modify, merge and delete data and manage it with git. This database would be completely safe to lose (though costly to rebuild) and would just be for indexing the file system. It could also be encrypted, hidden behind file permissions, and take system file permissions into account to prevent access to metadata.
Well, I feel you understand where I'm coming from. It's not too difficult to devise a table-based schema (say, 3rd Normal Form) using nothing but flat-file CSV textual data. Obviously, binary-style numerical data representation is significantly more compact than ASCII, etc., but it is, in essence, still open text. Additionally, as you clearly recognize, the filesystem is a nearly-universal data store that has decades of support from OS designers, going back (effectively) to the literal beginnings of all computing. Philosophically, the first rock, the second rock makes the marks upon. :^)
>tl;dr
The long heritage of filesystem support for text data can't be ignored IMO. OTOH, rapid indexes, etc., are also obviously vital for practical runtime performance. As mentioned elsewhere (and fully recognizing that the data itself is relatively small by comparison), Waifusearch's textfile-based system can generally return tree-oriented search responses in a few hundred microseconds, often less than 100us (and that with no particular effort at optimization, rather simply using standard COTS STL libs in C++ in a straightforward manner). I realize that the systems that need to underlie our robowaifus' runtimes are vastly more complex than just database lookups, but my primary point is that these are mostly questions of algorithm, not data store. Anything we can do to go back to basics and reduce as many dependencies as feasible, the better and safer it will be for both Anon and his robowaifus.
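And as a toy of the flat-file side of the argument, a hedged sketch; the file names and columns are made-up assumptions:
[code]
# 3NF-style tables as plain CSV, with an in-memory index built at startup.
# After indexing, a lookup is a dict hit: microseconds, not disk seeks.
import csv
from collections import defaultdict

# papers.csv:    id,title,topic
# sentences.csv: paper_id,page,text
by_paper = defaultdict(list)
with open("sentences.csv", newline="") as f:
    for row in csv.DictReader(f):
        by_paper[row["paper_id"]].append((int(row["page"]), row["text"]))

for page, text in by_paper.get("42", []):  # all sentences of paper 42
    print(page, text)
[/code]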
>>16485
I've been giving your excellent practical-vision post here more thought Anon. Having an "RWFS" idiom mechanism for structure/naming/contents/etc. could be a big help here. In all likelihood (particularly in the beginning), this scheme should ride on top of a traditional 'real' *NIX filesystem. We can evaluate the performance characteristics of various OS/FS platforms further along in time, but for starters I'd suggest something like Alpine Linux + EXT4.
-Regarding filesystem directory structures: if you don't mind directories having non-human-readable (i.e., nonsensical) names, then we can gain some nice optimizations for searches by simply using the hashing bin codes as the directory names themselves.
-These hashes can actually encode pre-computed lookup data, in a fashion similar to that used by the package manager nix.
-The entire tree structure can be indexed into very compact metadata (as embeddings or any other format) in advance, and a daemon process can continually keep this index updated as new information streams into storage from higher-level robowaifu processes.
-The exact path to any specific data of interest would simply be the calculated path to it, composed merely of its hashtree. The filesystem's structure will always mirror this hashtree, so the location on disk is known precisely the instant its hash is calculated.
-Multi-indexes can be supported by virtue of using symlinks back to a datum's primary tree location, and can be scattered anywhere in the system roughly for free (compared to the size of the data itself).
-Some form of low-impact, nearline/offline backup scheme (marshaled rsync?) daemon can be used for queuing up updates for sneaker-netting over to external storage, etc. Thence to git, platter-based, w/e.
There's more to say on this set of related topics to your ideas, but this is enough to go on with for now. I now plan to carve out some time for filesystem optimization studies. Hopefully some kind of prototype of this could be ready in two months or so. Cheers.
>===
-add 'symlinks' cmnt
-add 'nearline/offline backup' cmnt
Edited last time by Chobitsu on 05/30/2022 (Mon) 11:24:13.
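A hedged sketch of that hash-binned layout (git-object style); the store root, hash choice and bin depth are illustrative assumptions:
[code]
# The content hash alone determines the on-disk path, so the location is
# known the instant the hash is calculated -- no index lookup needed.
import hashlib
from pathlib import Path

STORE = Path("/var/lib/rwfs")  # assumed store root

def path_for(data: bytes) -> Path:
    h = hashlib.sha256(data).hexdigest()
    # Two levels of 256 bins each keeps directories small: ab/cd/abcd...
    return STORE / h[:2] / h[2:4] / h

def put(data: bytes) -> Path:
    p = path_for(data)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_bytes(data)
    return p

print(path_for(b"some memory"))
[/code]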
>>16519
>Regarding filesystem directory structures: if you don't mind directories having non-human-readable (i.e., nonsensical) names, then we can gain some nice optimizations for searches by simply using the hashing bin codes as the directory names themselves.
Yeah, that would be fine. The indexed data could be kept separate from unindexed human-readable data. Directory depth would have to be limited though, since each directory costs an inode and a block (4 KiB). Once every bin is occupied, a 24-bit hash would cost about 70 GB in directories alone, or about 0.5 GB for 16-bit. This will be an issue when creating multiple indexes.
>Multi-indexes can be supported by virtue of using symlinks back to a datum's primary tree location, and can be scattered anywhere in the system roughly for free
Symlinks use up inodes. For indexes it would be better to use hard links. One concern is that ext4 only supports 32-bit inode numbers, which limits the maximum to 2^32. For now, up to a 1 TB drive should be fine, since I estimate the ratio of inodes to filesystem capacity will be around 1 inode per 256 bytes in a worst-case scenario, but this isn't a big issue to worry about right now. Another issue is that each inode itself costs 256 bytes. This could be worked around by storing multiple memories per file. All the memories in a bin (and possibly nearby bins) need to be read anyway, so keeping them in one file would save lookup and read time too.
>Some form of low-impact, nearline/offline backup scheme (marshaled rsync?) daemon can be used for queuing up updates for sneaker-netting over to external storage, etc. Thence to git, platter-based, w/e.
Changed files could be added to a list and copied in the next incremental backup with the rsync --files-from option, or to an archive file. I don't think git will be ideal for millions of files, or even usable for billions of files. Backups will need an absolute minimum of overhead. They will likely be binary files as well, since storing tokens as 16-bit integers saves both read time and space.
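A quick check of those directory-cost figures (ext4 defaults assumed: one 256-byte inode plus one 4 KiB block per directory):
[code]
# Cost of pre-creating every bin for a 16-bit vs 24-bit hash.
for bits in (16, 24):
    bins = 2 ** bits
    cost = bins * (256 + 4096)  # bytes per directory: inode + one block
    print(f"{bits}-bit hash: {bins:,} bins -> {cost / 1e9:.2f} GB")
# ~0.29 GB and ~73 GB: the same order as the rough figures above.
[/code]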
>>16536
>This could be worked around by storing multiple memories per file. All the memories in a bin (and possibly nearby bins) need to be read anyway, so keeping them in one file would save lookup and read time too.
Yeah, that general idea was floating around in my head as well. I would suggest we adopt some type of DoD (data-oriented design) as used with gaming systems as a further optimization approach too. I'm not sure what optimization strategies will prove optimal for the disk's electronics ATM, but ultimately everything needs to get into processor/cache memory. Keeping cache lines full is the key to perf.
>Backups will need an absolute minimum of overhead.
Very much so. Still, a vital need IMO.
>They will likely be binary files as well, since storing tokens as 16-bit integers saves both read time and space.
I'm not sure if bfloat16 is your plan, but it seems to bring a lot to the table for us?
Open file (51.17 KB 1080x683 Imagepipe_40.jpg)
Crush ribs for inserting ball bearings into plastic parts, so they stay put without the need to print the part very precisely: https://youtu.be/0h6dCeATkrU
>>16707 Important concept for manufacturing tbh.
Video about which scents men prefer on women, including Estradiol (Estrogen): https://youtu.be/QXkkaqANM8I
- Aside from that one, the most interesting seem to be rose oil, vanilla, fresh oranges, and black licorice (which allegedly has a strong physical effect on a certain blood flow), doughnuts and other food, peppermint, the combination of lavender and pumpkin pie (most attractive), and jasmine. Vetiver and musk were also mentioned, but those are more general and ambiguous.
- Full article with more links: https://www.thelist.com/149560/scents-that-surprisingly-make-women-more-attractive-to-men/
- Study on hormones in body odor: https://royalsocietypublishing.org/doi/full/10.1098/rspb.2018.1520
- Study on how the scent of roses makes women more attractive: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0098347
- Study on "Human Male Sexual Response to Olfactory Stimuli" (black licorice and others): https://aanos.org/human-male-sexual-response-to-olfactory-stimuli/
What if we ran cooling fluid through big elf/fox/cat ears, or structures that look like wings? We could get a lot of surface area without compromising her appearance. Maybe wing-like structures could help with balance too, or a tail.
Open file (471.64 KB 656x717 OtomediusMiniWings.jpg)
Open file (165.72 KB 920x920 Swerzer.png)
>>16766
Wings with wheels would serve many functions admirably. They'd provide a large surface area that's often exposed to moving air, as you mentioned. The balancing aspect would allow her to be a quadruped while maintaining the aesthetics of a biped. I also have a thing for wings.
Open file (60.09 KB 796x945 Lambda.jpg)
>>16767 Her wings could be folded to look like a dress while they provide balancing points of contact.
>>16767
You don't need to make her a quadruped to get better balance. The point of the "wings" would be to spread mass away from the pivot point, thus increasing her moment of inertia. It's just like spreading your arms out to help you balance.
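Toy numbers for that balance point; the masses and lengths are made up, only the I = m*r^2 scaling matters:
[code]
# Moment of inertia of a point mass is I = m * r**2, so moving the same
# mass twice as far from the pivot quadruples its resistance to tipping.
mass = 0.5  # kg, one "wing"
for r in (0.1, 0.4):  # metres from the pivot: folded vs spread
    print(f"r = {r} m -> I = {mass * r**2:.3f} kg*m^2")
# 0.005 vs 0.080 kg*m^2: 16x more angular inertia for the same disturbance.
[/code]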
