/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Open file (259.83 KB 1024x576 2-9d2706640db78d5f.png)
Single board computers & microcontrollers Robowaifu Technician 09/09/2019 (Mon) 05:06:55 No.16
Robotic control and data systems can be run by very small and inexpensive computers today. Please post info on SBCs & micro-controllers. en.wikipedia.org/wiki/Single-board_computer https://archive.is/0gKHz beagleboard.org/black https://archive.is/VNnAr >=== -combine 'microcontrollers' into single word
Edited last time by Chobitsu on 06/25/2021 (Fri) 15:57:27.
>>27555 You should use a water cooling line if you want liquid cooling. Mineral oil is only ideal for full-immersion liquid cooling. They are entirely different concepts which should be considered as having nothing in common, aside from a liquid carrying heat. If you'd like her to drink and pee, a soft-line water cooling system is recommended. If you'll drink her pee, please incorporate a filter near her urethra.
Your waifu will be on the cloud or localhost and you will be happy
>Explore the groundbreaking Coral AI mini PCIe accelerator by Google, a game-changer for home automation and DIY projects, in my latest video where I integrate this innovative chip with the ZimaBoard and Zima Blade for superior performance and cost-effectiveness. Discover how this setup outperforms others, like the Raspberry Pi 5, in speed and thermal efficiency, and follow my journey from troubleshooting software issues to successfully running Frigate, an advanced home lab computer vision system. Learn how this affordable, under $100 setup can revolutionize your home tech projects! https://youtu.be/mdOEaNV8NXw >LattePanda's Mu is the latest entrant in the 'Pi Killer' battle, but it has a trick up its sleeve. https://youtu.be/GKGtRrElu30 SBC comparisons: https://github.com/geerlingguy/sbc-reviews
Well, looking around at RISC-V micro-controllers and other MCs, to my surprise they are now blowing out MCs at 6 cents, 10 cents, etc. with 32-bit RISC-V and also Arm Cortex cores. I'm stupefied. I said I couldn't imagine how you could get touch, but if your MC has 10 or 16 touch inputs and costs 15 cents or less each, well, there it is. You have to get some conductive Mylar or whatever on both sides of something you could squeeze together, or some sort of conductive material that changes value when squeezed. Not that this is easy, but once you have it figured out you could mass produce it. I admit to being amazed. Here's a write-up on the "...Arm Cortex-M0+ made by Puya, it's less than 10 cents in moderate quantities..." https://jaycarlson.net/2023/02/04/the-cheapest-flash-microcontroller-you-can-buy-is-actually-an-arm-cortex-m0/ Maybe a good full handful of these would be all you needed for motion, actuator control (you still need MOSFETs for power), sensing, and touch/pressure sensing. I found a link with their "high end" PY32F030F18P6TU (8KB RAM, 64KB flash, TSSOP-20): 6 for $1.81, 60+ for $13.76. WOW! https://www.lcsc.com/product-detail/Microcontroller-Units-MCUs-MPUs-SOCs_PUYA-PY32F030F18P6TU_C3018715.html If you had 120 or maybe 200 of these at around $30-$40 you'd have all the control you need. They also have the new RISC-V parts mentioned in the link above for less than 20 cents. Here's a few links talking about this (BTW the above part seems to have great support compared to some of the other Chinese offerings the article mentions): https://hackaday.com/2024/01/04/ch32-risc-v-mcus-get-official-arduino-support/ https://www.cnx-software.com/2023/02/09/8-cents-for-an-arm-cortex-m0-microcontroller-meet-puya-py32-series-mcus/ https://hackaday.com/2023/03/02/a-ch32v003-toolchain-if-you-can-get-one-to-try-it-on/ https://hackaday.com/2023/04/19/risc-v-supercluster-for-very-low-cost/ I think the key to this is some sort of framework using vectors sent by the main CPU to these micro-controllers, letting them do all the compute and feedback sensing to perform the move; a sketch of that idea follows below.
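That last idea, the main CPU sending move vectors and letting cheap MCUs handle execution and feedback, might look something like this minimal MicroPython sketch. The pins, the three-field MOVE command, and the ADC feedback are all hypothetical placeholders I made up, not a worked-out protocol:

```python
# Hypothetical sketch: the MCU receives "MOVE <pulse_us> <duration_ms>" from
# the main CPU over UART, ramps a servo there, and reports the sensed position.
from machine import Pin, PWM, ADC, UART
import time

uart = UART(1, baudrate=115200)     # link to the main CPU (pins board-specific)
servo = PWM(Pin(4), freq=50)        # one actuator channel
feedback = ADC(Pin(34))             # position/pressure sensor on this joint

def run_move(target_us, duration_ms, start_us=1500):
    """Ramp the pulse width to target over duration, one step per 20 ms frame."""
    steps = max(1, duration_ms // 20)
    for i in range(1, steps + 1):
        pulse = start_us + (target_us - start_us) * i // steps
        servo.duty_ns(pulse * 1000)
        time.sleep_ms(20)
    return feedback.read_u16()      # where we actually ended up

while True:
    line = uart.readline()          # e.g. b"MOVE 1800 500\n"
    if line and line.startswith(b"MOVE"):
        _, tgt, dur = line.split()
        pos = run_move(int(tgt.decode()), int(dur.decode()))
        uart.write(b"DONE " + str(pos).encode() + b"\n")
```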
>>32872 >We may devise some custom encoders if we can't find suitable ones for things like the knuckles, eyes, etc. I wish to repeat myself because, to me, it seems the absolute master genius, super-fi, bestest-in-the-world idea for encoders for anything that moves. It's the guy's idea on Hackaday to send light through fiber optics (plastic fibers are super cheap) and then have the light vary due to movement. This would in turn be fed through another fiber optic back to a simple mass-produced camera chip. So you could have millions of inputs on one micro-controller with a camera. Called "Fibergrid": https://hackaday.io/project/167317-fibergrid#menu-details and some rough speculation about usage by me: >>24233
Another good link by the same guy with a rundown on under-$1 USD micro-controllers. https://jaycarlson.net/microcontrollers/
And even more: a comparison of Linux-based single board computers. This guy is really good, and the links I got from him are, in my opinion, a must-have for any sort of prototyping we need to do. https://jaycarlson.net/embedded-linux/
>>32880 >This guy is really good, and the links I got from him are, in my opinion, a must-have for any sort of prototyping we need to do. Yeah, it's really good stuff Grommet. Thanks! He mentions the PY32 series [1] (including one for US 10 cents!). I really like that he has a set of C libraries for that platform already assembled together [2]. Cheers. :^) --- 1. https://jaycarlson.net/2023/02/04/the-cheapest-flash-microcontroller-you-can-buy-is-actually-an-arm-cortex-m0/ 2. https://github.com/jaydcarlson/py32-template
>>32874 I'll take another look at this. What we really need is an encoder/sensor that is simply in the form-factor of a fiber itself. That way we can simply run it up through the knuckles to obtain position information with no add'l muss & fuss (such as with rotary encoders, etc.). Know anything about such a form-factor sensor, Anon?
Open file (112.73 KB 1024x1024 image-122.jpeg)
>>32874 Thanks for all the interesting comments, but especially for that. The idea of using light for robowaifus crossed my mind years ago, but that was just fantasizing, and then you posted this at some point already. This is really something we should keep in mind for sensors in the skin. Though it would need to get quite small, and maybe use the light spectrum. Idk, I guess there are reasons why this doesn't already exist. That said, I recently thought about whether people could build their own optics-based AI accelerator (GPU-like). Not fitting into a PC, maybe filling some rather big drawer if necessary, but good enough to run Llama 3. Just in case the "Pausing AI" crazies become somewhat successful, or just to discourage politicians in the first place. I thought about how to do that, and wondered how long it would take to calculate the color coming from a fibre, since that would influence the frequency of change that could be allowed for transmitting information. Unfortunately, the camera he uses is rather big, and he seems to only use the amount of light without analyzing the spectrum, which tells me this is the hard part. Which is what I thought. I wanted to use 50k pixels (220x220) and analyze the spectrum of each, haha. I don't really plan to pursue that, but maker projects like this one would be a place to look into it. So it's good to know it exists. Companies working on optics-based computing would try to go very small. I think this approach only helps if the hardware parts are farther away from each other, which is why this isn't being done. The distance would probably be good for cooling, though. The project I thought about would be more about having something we can build and repair: bigger parts than regular chips, but still small enough to fit into a room and run one big LLM at any time.
>>32893 Why does it need to "run" through the knuckles? Can't it pick up light based on the position of a connection?
>>32893 Simple. Ease-of-manufacture. There's also the very-likely adjunct benefit of lowered cost too (fewer parts, less mass, etc.) >pick up light based on the position of a connection Maybe you have something there, Anon. Can you explicitly & particularly define your idea of how 'position of a connection' might work in your concept? >=== -minor edit
Edited last time by Chobitsu on 08/17/2024 (Sat) 20:47:16.
>>32896 Related about buses, sensors and multiplexing in the engineering general thread: >>32900
Interesting chip, the Sohu AI chip: "...Most large language models (LLMs) use matrix multiplication for the majority of their compute tasks and Etched estimated that Nvidia's H100 GPUs only use 3.3 percent of their transistors for this key task. This means that the remaining 96.7% silicon is used for other tasks, which are still essential for general-purpose AI chips..." They're showing Llama 70B throughput at high speeds. Llama 70B should be really powerful. https://www.tomshardware.com/tech-industry/artificial-intelligence/sohu-ai-chip-claimed-to-run-models-20x-faster-and-cheaper-than-nvidia-h100-gpus
>>32882 This Fibergrid thing this guy thought up is, in my opinion, Nobel-prize-winning material. It's one of the most impressive pieces of out-of-the-box thinking I've seen in a long time. I mean, it makes it sooooo easy to monitor literally hundreds of thousands of sensors with one chip, an LED, fiber, and a camera. There's all sorts of ways to run this. For joints I mentioned Gray code, but that's actually a little overboard; it's good for "exact" placement, and we don't need that precision. Something as simple as a slit that opens and closes as the joint rotates: one fiber feeds light and the return picks up the intensity based on how far the slit opens. Another: a piece of paper with shading going from white to black. Shine on it, then return the reflection. So many ways. Try as I might, I have not come up with what I feel is a super satisfactory way to use this for pressure sensing. I have posted a few ideas but, I admit, they are a bit kludgy. I am still thinking about it, and if anyone has any ideas about how to vary light with pressure please chime in. I just remembered one idea I had that just might work: slit the fiber all the way down its length. If pressed, it would open the fiber and leak light. Have one set going down a limb or body part and many horizontal. Might work, but... difficult to say. The beauty of this is you can use a camera to record the reflection, and with a vertical and horizontal grid you get position. And you could do high-detail, full-body sensing with a $9 camera/micro-controller combination and a few dollars of plastic fiber.
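To make the camera side of this concrete, here's a rough sketch of what the readout could look like, assuming an ordinary USB camera readable by OpenCV. The per-fiber pixel coordinates are made-up placeholders you'd get from a one-time calibration image:

```python
# Fibergrid-style readout sketch: each fiber end sits in front of a known
# patch of camera pixels, and the "sensor value" is just that patch's
# brightness. Spot coordinates below are hypothetical.
import cv2

FIBER_SPOTS = {"elbow": (120, 80), "wrist": (200, 80), "finger1": (120, 160)}
PATCH = 4  # half-width of the pixel patch averaged per fiber

cap = cv2.VideoCapture(0)

def read_fibers():
    ok, frame = cap.read()
    if not ok:
        return {}
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    readings = {}
    for name, (x, y) in FIBER_SPOTS.items():
        patch = gray[y - PATCH:y + PATCH, x - PATCH:x + PATCH]
        readings[name] = float(patch.mean())   # 0..255 light intensity
    return readings

print(read_fibers())   # e.g. {'elbow': 181.2, 'wrist': 40.6, 'finger1': 93.0}
```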
>>32882 >we can simply run it up through the knuckles to obtain position information with no add'l muss & fuss Right after I wrote the last comment I read yours again, and maybe, just maybe, a split fiber would do this. One way, the simple way, is to run the fiber down through the joint with a split at the joint, then run it back up to the camera. What would really be slick is to put a reflector on the end of the fiber and have a beam splitter right at the light/camera interface. But... that adds so much complexity... (just thinking out loud). "If" you could find a cheap, fast way to do this, it would be very cool. As for the split fiber: it could be done with a template in a jig, to split to a certain depth and length. You would need to glue on a piece of black something (paper, plastic, cloth, whatever) to absorb light. You "might" have to have a sort of wedge shape above the fiber to force it open. I haven't tried this. I know they have force-sensing systems for fibers and I looked at them, a lot, but they were super complicated things. One good thing about this is the calibration could be automated: bend the limb and read the output as it bends. Same with touch: touch it all over while recording the light variance per force. So it wouldn't have to be perfect to work. I would suggest using multiple camera-pixel inputs for each feedback light source, so even if a few go bad you can still re-calibrate it without chucking the whole thing.
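That automated calibration could be as simple as a sweep-and-fit. A minimal sketch under stated assumptions: move_joint() and read_fiber() are hypothetical stand-ins for your actuator command and the camera-patch reading in the sketch above, and the intensity-vs-angle curve is assumed to be smooth and monotonic:

```python
# Sweep the joint through known angles, record the fiber's light intensity
# at each one, fit a curve, then invert it at runtime.
import numpy as np

def move_joint(angle_deg):
    pass          # hypothetical: drive the joint to a known angle

def read_fiber():
    return 0.0    # hypothetical: current light intensity for this joint

angles = np.linspace(0, 90, 19)   # sweep in 5-degree steps
samples = []
for a in angles:
    move_joint(a)
    samples.append(read_fiber())
samples = np.array(samples)

coeffs = np.polyfit(samples, angles, deg=3)   # fit intensity -> angle

def angle_from_light(intensity):
    """Runtime lookup: convert a raw light reading back into a joint angle."""
    return float(np.polyval(coeffs, intensity))
```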
Open file (101.71 KB 1024x1024 image-58.jpeg)
>>32918 > Fibergrid thing... Nobel-prize-winning material. It's one of the most impressive pieces of out-of-the-box thinking ... I don't understand this argument; what did I miss? Combining light and cameras, or optical sensors, is very obvious. If you just know data transfer through optical fibers exists, then how would you imagine that working? If you know about the concept or idea of a computer based on it, how else would it work? It's pixels or a matrix, intensity, the spectrum of light (color), and frequency of change on top of it. It's something very obvious.
>>32891 >I wanted to use 50k pixels (220x220) and analyze the spectrum of each, Shine it on a grating. The colors will separate and you can measure not only the different spectra but the intensity. A cheap Fresnel lens might work. A simple dirt-cheap way would be to shine the light on a regular music CD or optical DVD. You might have to move the CD around a bit to get the effect you want. To be precise, you are measuring the reflection off of a grated mirrored surface, or in the case of a Fresnel lens, the light passing through it. You can buy Fresnel lenses that you stick in the back window of campers so they can see behind them. I'm not sure if that would work, but it might. I "know" a grating could work. You could make these with a razor blade, a piece of plastic, and a little tricky scraping. Drawing Holograms by Hand http://amasci.com/amateur/hand1.html I wonder... I do not think pressure on a fiber will cause it to change light frequency. Do you know better, or have you heard of such? Please say so if you have. What would you use to change the light frequency?
>>32891 >wondered how long it would take to calculate the color coming from a fibre Instantly, or as fast as your light sensor works.
>>32920 >I don't understand this argument, what did I miss? Combining light and cameras, or optical sensors, is very obvious. It's brilliant. Using light and sensors has been around for a long time, yes, but using the individual pixels of a CCD camera and its preexisting software to multiply the number of sensors into the thousands with one small chip, by reading the reflective feedback from a variable light source, is... brilliant. It's so good. And if it's so obvious, how come no one else thought of it? But he did. He has another crazy idea for robots. He takes an oscillating arm, or something, and feeds it through the entire robot, and then uses some sort of clutches to grab the, whatever it is that moves, and that moves the muscles. It's the craziest, most whacked thing ever, but it's also... brilliant. He's built prototypes of these. And in fact, if you tweak it a little... say, use an AC waveform, and then use magnetic amplifiers all throughout the body. The magnetic amplifier blocks the AC until you use a smaller signal to unblock it... well then... once unblocked it could be fed to a rectifier and provide DC to power a motor, solenoid, whatever. So you have one big-ass AC supply flowing at all times, only siphoning off power to those muscles that are unblocked. And you wonder why I keep bleating about magnetic amplifiers. That's why.
>>32921 > Fresnel lens Interesting. I vaguely recall these lenses from astronomy, for analyzing stars. Your idea isn't bad. I was thinking more of a gradual filter, while this one seems to do a hard cut-off, but I haven't looked into it. I thought more about some glass sphere or cylinder changing what spectrum it lets through. Maybe changing the color based on some flow of electricity, or something more indirect like vibrations. But I also don't want too many parts which could break. Anyways, this might actually be good enough, I mean better than nothing, though it would also need to be very small, and this is where it probably would get tricky. That said, let's not forget this is more of a spontaneous thought experiment. >>32917 >Sohu AI chip Yeah, I came across this. But we can't buy it yet. I would like this for at home, or better, as an SBC. Anyways, one of the comments from the linked news: >In theory, Intel seems to have shown that their Intel MESO (Magnetoelectric spin-orbit) logic, a post-CMOS technology related to spintronics, is much more amenable to both low-power logic and low-power neuromorphic computing. >For example, emerging Non-Volatile-Memory (NVM) / Persistent Memory MRAM is already based on spintronics phenomena >Therefore much more R&D resources should be allocated to develop new manufacturing tools to improve and lower MRAM manufacturing cost, and then improve those tools to evolve to MESO manufacturing: this would be much, much more groundbreaking!!! https://www.techspot.com/news/77688-intel-envisions-meso-logic-devices-superseding-cmos-tech.html https://www.imec-int.com/en/press/imecs-extremely-scaled-sot-mram-devices-show-record-low-switching-energy-and-virtually >fibergrid sensors Btw, can we move the discussion of sensors out of this thread for computers, which was even meant just for SBCs? Buses, sensors and multiplexing in the engineering general thread: >>32900 I fell for it myself again with the posting of >>32920 Keeping the speculation on optical computing in this thread might be okayish, since it already moved beyond SBCs and small accelerators to GPUs and such.
Open file (112.63 KB 1024x1024 image-63.jpeg)
Since we're doing hardware today: >A research paper from UC Santa Cruz and accompanying writeup discussing how AI researchers found a way to run modern, billion-parameter-scale LLMs on just 13 watts of power. That's about the same as a 100W-equivalent LED bulb, but more importantly, it's about 50 times more efficient than the 700W of power that's needed by data center GPUs like the Nvidia H100 and H200, never mind the upcoming Blackwell B200 that can use up to 1200W per GPU. >The work was done using custom FPGA hardware, but the researchers clarify that (most) of their efficiency gains can be applied through open-source software and tweaking of existing setups. Most of the gains come from the removal of matrix multiplication (MatMul) from the LLM training and inference processes. >How was MatMul removed from a neural network while maintaining the same performance and accuracy? The researchers combined two methods. First, they converted the numeric system to a "ternary" system using -1, 0, and 1. This makes computation possible with summing rather than multiplying numbers. They then introduced time-based computation to the equation, giving the network an effective "memory" to allow it to perform even faster with fewer operations being run. >The mainstream model that the researchers used as a reference point is Meta's LLaMA LLM. The endeavor was inspired by a Microsoft paper on using ternary numbers in neural networks, though Microsoft did not go as far as removing matrix multiplication or open-sourcing their model like the UC Santa Cruz researchers did. >It boils down to an optimization problem. Rui-Jie Zhu, one of the graduate students working on the paper, says, "We replaced the expensive operation with cheaper operations." Whether the approach can be universally applied to AI and LLM solutions remains to be seen, but if viable it has the potential to radically alter the AI landscape. >We've witnessed a seemingly insatiable desire for power from leading AI companies over the past year. This research suggests that much of this has been a race to be first while using inefficient processing methods. We've heard comments from reputable figures like Arm's CEO warning that AI power demands continuing to increase at current rates would consume one-third of the United States' power by 2030. Cutting power use down to 1/50 of the current amount would represent a massive improvement. >Here's hoping Meta, OpenAI, Google, Nvidia, and all the other major players will find ways to leverage this open-source breakthrough. Faster and far more efficient processing of AI workloads would bring us closer to human brain level functioning—a brain gets by with approximately 0.3 kWh of power per day by some estimates, or 1/56 of what an Nvidia H100 requires. Of course, many LLMs require tens of thousands of such GPUs and months of training; hence, our gray matter isn't exactly outdated yet. It seems to be mostly coming from some software, but the FPGA will likely still be necessary. Not sure if this should rather go into >>250, but it's very much related to the hardware we would need. - https://www.tomshardware.com/tech-industry/artificial-intelligence/ai-researchers-found-a-way-to-run-llms-at-a-lightbulb-esque-13-watts-with-no-loss-in-performance - https://arxiv.org/abs/2406.02528 - https://news.ucsc.edu/2024/06/matmul-free-llm.html - https://github.com/ridgerchu/matmulfreellm
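The ternary part is easy to demo. A toy illustration of the core arithmetic trick (just the idea, not the paper's actual kernel): with weights restricted to -1, 0, and +1, a matrix-vector product needs no multiplies at all, only adds and subtracts:

```python
# Multiply-free "matmul" with ternary weights in {-1, 0, +1}.
import numpy as np

def ternary_matvec(W, x):
    """W: (out, in) matrix of -1/0/+1; x: input vector. No multiplies used."""
    out = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        row = W[i]
        # dot(row, x) reduces to: sum the x's under +1, subtract those under -1
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

W = np.random.choice([-1, 0, 1], size=(4, 8))
x = np.random.randn(8)
assert np.allclose(ternary_matvec(W, x), W @ x)  # same answer as a real matmul
```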
This is some great information content recently ITT, Anons. Thanks! >=== -parse-edit post into R&D thread; per : ( >>32925 )
Edited last time by Chobitsu on 08/19/2024 (Mon) 07:50:44.
>>32927 Really good links. Very important stuff.
I talked about an idea for pressure touch sensors here (in the wrong thread) >>32918 and here >>32919 and much earlier >>24233 >>30980
I did not mean to post the above here. Sigh, but it does reference the previous post so...maybe keep it??
>>32957 >>32960 Eheh, it's fine Grommet. Since this is a) great stuff, and b) you've helped everyone by 'consolidating' the access to your related-posts, and c) it's almost impossible with this forum format to stay on-topic (w/o rather Draconian moderation -- no one wants that) * ... then I say let's keep it! :^) --- * However I do think that (especially given the highly-technical & expansive nature of what we're attempting here) we do a better job at this than most any other IB I've ever seen. Give yourselves a pat on the back, Anons! :D >=== -add footnote
Edited last time by Chobitsu on 08/19/2024 (Mon) 18:55:33.
An idea for waifu computers. I expect the sweet spot price-wise is standard PC mini-ATX motherboards. Another good choice would be Mini-ITX, a 170 mm × 170 mm (6.7 in × 6.7 in) motherboard. https://en.wikipedia.org/wiki/Mini-ITX These are usually fanless, but most would be lacking in computing power. Micro-ATX, 244x244mm (9.6×9.6 inches), would likely be the best for performance while staying manageable in size. You would have to have a full-size waifu to fit it, though. The problem is waifus are, or need to be, able to go in water to wash themselves, and they are going to get wet. So I was thinking a good way to waterproof them would be to use standard food packaging material: vacuum packing. On the other side of this, you could make a plastic box enclosing all the electronics and run water over the whole board through this box for cooling. The box would make a firm mount for the motherboard and contain the water cooling. The plastic film would stay cool and keep the water off the electronics while providing excellent cooling. This could be circulated through the body to simulate body heat. If it would not be too expensive, you could use optical links to carry signals in and out of the board.
>>33785 > On the other side of this you could make a plastic box enclosing all the electronics and run water over the whole board through this box for cooling. The box would make a firm mount for the motherboard and contain the water cooling. The plastic film would stay cool and keep the water off the electronics while providing excellent cooling. Yep. I call this little jewel the robowaifu's Breadbox, and if push-comes-to-shove, thermal-engineering-wise, then we'll actually need to duct cooling tubes into the thing to keep the electronics cool. I expect just regular distilled water should be sufficient for this, thermal-density-wise. Good thinking, Anon!
Open file (86.52 KB 519x422 ITX.png)
Open file (135.98 KB 618x890 Cart.png)
>>33785 >Mini ITX The only mobo size that actually makes sense for a waifu. Placement in her frame would be tricky, I'd suggest tilting it in the chest. >Lacking in compute I'm unsure of what you mean. ITX boards can run almost any desktop processor and GPU. As for what I'd actually do with an ITX PC, I'd just stick it in her movement cart. That's a battery behind the ITX board.
>>33809 >Maidcom Mini's gettin' some roller skates! Based. :^)
>>33809 >>Lacking in compute >I'm unsure of what you mean Fanless, as a general rule, means less power. And yes, there are exceptions, but "generally". For us, likely water-cooled, so no problem. I did not think about that at the time and was talking... generally. >=== -fmt patch
Edited last time by Chobitsu on 10/03/2024 (Thu) 03:52:36.
>>35587 >RPi5 w/ 8GB is about US$55 US$120 for the 16GB version (official price): https://youtu.be/apWi16EROKc - it might be too slow for such a big model, unless stalling methods are also employed so the user doesn't directly chat with the LLM. Jetson Orin Nano from Nvidia: https://youtu.be/QHBr8hekCzg and https://youtu.be/fcGD7kHgxqE But there's also an alternative to Google's Coral TPU for the Raspi, Hailo: https://youtu.be/Qinb3j8J8-k Not an SBC, but shows where ARM is going: https://youtu.be/AshDjtlV6go
>>35622 Maybe to justify the costs, we could go the Chobits route and have the robots do other computer tasks, like calendar, alarm, internet search, VOIP, etc. TL;DR: Raspberry Pi Persocom
Every so often I look around, see processors that are new to me, and try to find the number of additions per second: a basic metric that I can use as a reference. So I see this newer processor, the Intel N100, and here are some rough numbers compared to the ESP32. https://esp32.com/viewtopic.php?f=14&p=139224 ESP32-S3: 240 MHz, performing at up to 600 DMIPS. Integer addition 239.77 MOP/s, double addition 6.63 MOP/s. $7-$15 depending on what's on the board, with bare-bones boards as low as $5. https://www.cpubenchmark.net/cpu.php?cpu=Intel+N100&id=5157# Intel N100: integer math 16,333 MOps/s, 6 watts power, $128.00 USD. Now the N100 is more expensive, but if you take the operations per second, then by these integer-addition numbers it would take roughly 16,333 / 240, about 68 ESP32s, to equal one N100 (see the back-of-envelope sketch below). I personally believe that one ESP32 could do all the calculations to move a waifu. To be specific, it may not be able to calculate "where" to move and "how", but given vectors on where to move, it could, I think, have plenty of power to monitor the movement and its progress. I did some rough, very rough, calculations on what sort of performance you would need to drive and monitor a waifu, and some possible strategies for how to set up the programming. There's some comparison to insect brains, which move along just fine with little in the way of compute. Here's a few comments. Warning: they are long. >>20558 >>21602 Thinking about the number of actuators for a direct equivalence to human muscles. >>22109 Computing movement curves to make movement graceful. >>22111 >>22113 >>22119 >>22120 >>22121 A link of links, many the same. >>23637 >>32995 Enough. I did this to tie in some rough numbers so I myself, and others, could have some sort of starting brainstorming reference.
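For anyone who wants to redo that ratio with their own numbers, the arithmetic is just a division (the workload figures are the rough benchmark numbers quoted above, not gospel):

```python
# Back-of-envelope: how many ESP32s equal one N100 on integer additions?
esp32_int_adds = 240e6     # ~240 MOP/s integer addition (ESP32-S3, forum test)
n100_int_ops = 16_333e6    # ~16,333 MOps/s (PassMark "Integer Math")

print(n100_int_ops / esp32_int_adds)  # ~68 ESP32s per N100, by this metric
```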
I was also thinking ESP32, using the robot project cars or tanks as a base, since they can do basic obstacle avoidance, object following, and line tracking, and come with all the parts/code to start. And then you can scale up. Most of the parts for bigger wheels/tracks can be printed too. https://www.youtube.com/watch?v=cIn8e9K3UcQ&t=16m55s
>>35831 >ESP32 There's a lot of good software for it. Face recognition is another. I didn't know about the obstacle avoidance. Could be very useful.
Open file (23.96 KB WebServo.pdf)
Here's some basic MicroPython code for the ESP8266, though it should work better with the dual-core ESP32. It starts a web server with endpoints of /start and /stop that can be added to your program. I ran into an issue at first with the 8266 because it is single-core, so threading was a problem. The servo worked fine without the web server, but had blocking problems with the web server until asyncio was added. I added this to my chatbot program to start before the play_audio function and stop on the record_audio function. I need to work on threading now in my chatbot program, but it works until there is a blocking problem again. I need to add some debugging, but should be able to get it to work properly. This could be used to turn the neck, jaw, or hands with different endpoints. It looks like a lot of you are way past this, but I wanted to post the code. I have this working on GreerTech's bot now, but need to glue it down and get a servo that is quieter. With a slightly bigger metal-gear servo, I should be able to use a doll head on it too.
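For anons who don't want to open the PDF, here's a guess at the general shape of such a setup (not Barf's actual code): a MicroPython uasyncio server whose /start and /stop endpoints toggle a servo-sweep task, so the servo loop never blocks the socket handling. The pin and duty values are placeholders:

```python
# Hypothetical sketch of a /start-/stop servo web server in MicroPython.
import uasyncio as asyncio
from machine import Pin, PWM

servo = PWM(Pin(2), freq=50)   # placeholder pin
running = False

async def sweep():
    # Sweep the servo back and forth whenever 'running' is set.
    while True:
        if running:
            for duty in (51, 77, 102, 77):   # ~1.0/1.5/2.0 ms pulses at 50 Hz
                servo.duty(duty)
                await asyncio.sleep_ms(300)
        else:
            await asyncio.sleep_ms(100)

async def handle(reader, writer):
    global running
    req = await reader.readline()            # e.g. b"GET /start HTTP/1.1"
    if b"/start" in req:
        running = True
    elif b"/stop" in req:
        running = False
    body = b"on" if running else b"off"
    await writer.awrite(b"HTTP/1.0 200 OK\r\n\r\n" + body)
    await writer.aclose()

async def main():
    asyncio.create_task(sweep())
    await asyncio.start_server(handle, "0.0.0.0", 80)
    while True:
        await asyncio.sleep(3600)            # keep the event loop alive

asyncio.run(main())
```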
>>36102 Neat! Thanks Barf. Looking forward to seeing this system in action. Cheers. :^)
>>35813 Handy listing. Thanks, Grommet! Cheers. :^)
>>36102 Big thanks for that!
Hooking a cheap external GPU onto an RPi is an option with DeepSeek (and other models as well of course). Probably very hard to beat something like this for an AI chat price/performance point rn, IMO. https://www.youtube.com/watch?v=o1sN1lB76EA
>>36238 Thanks. I saw the video and nearly skipped it. I didn't think it would finally work to add a GPU to it. Nice. Inside a waifu it would maybe need to be a physically smaller GPU, and maybe then it wouldn't work for DeepSeek (of that size), but it's good to have that option. I thought a while ago that it would only be a matter of time till we see GPUs with slots for an SBC and VRAM, so you can run one without a PC, just with a power cable going into it. Or maybe it will be some small adapter, so you can still use the regular connectors. Anyways, from a while ago, one Raspi and several TPUs on a special board: https://youtu.be/oFNKfMCGiqE
I have one of these ESP32s now, and I'm wondering what I could do with it that won't take a large amount of time. Anyone here done something with this, similar to link related? Thinking about setting up something for a robot, but this seems easier. "ESPHome is a system to control your microcontrollers by simple yet powerful configuration files and control them remotely through Home Automation systems." https://esphome.io/
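If you go the ESPHome route, the whole node is driven by a YAML file you flash to the board. A minimal sketch of what one looks like; the node name, WiFi credentials, and servo pin are placeholders to adapt, and the servo block assumes ESPHome's stock servo component over a ledc output:

```yaml
# Minimal ESPHome node sketch -- names, credentials, and pins are placeholders.
esphome:
  name: robowaifu-node

esp32:
  board: esp32dev

wifi:
  ssid: "YourNetwork"
  password: "YourPassword"

# Lets Home Assistant discover and control this node.
api:
ota:
  - platform: esphome
logger:

# Example peripheral: a servo on GPIO26 (servos want a 50 Hz PWM output).
output:
  - platform: ledc
    pin: GPIO26
    id: servo_out
    frequency: 50 Hz

servo:
  - id: neck_servo
    output: servo_out
```

Flash it with `esphome run <file>.yaml` and the node should show up in Home Assistant over the native API.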
>>37009 I set up Home Assistant with Ollama but never did anything with it, since I don't have any appliances or things to automate yet. But I might try a smart home starter kit like this and slowly move parts over. https://www.keyestudio.com/products/keyestudio-smart-home-kit-with-plus-board-for-arduino-diy-stem
>>37011 I saw in the gatebox video the waifu runs the vacuum for the owner. You can do that too.
>>37023 Japan leading the way, but that's my long-term goal. My robowaifu already runs my vac thanks to GreerTech, but someday it will be able to detect if I fall or have a heart attack, and pick me up with her Standard Open Arm 100.
>>37023 Even if she can't pick you up (a sizable & delicate, just-so engineering task, BTW [humans are fragile in general, particularly the incapacitated]), just being able to monitor & recognize your distress and call for others on your behalf could be lifesaving. I think dear Galatea could definitely be enhanced to do so given current, easily-accessible technologies. Looking forward to seeing these developments! Cheers.
>>37055 Getting breathing, heart, and fall detection from esphome / home assistant would be a huge selling point. https://esphome.io/#health-safety Mid-term, I would love to see a Seeed Studio-type basic starter kit for a companion bot. I don't see why that isn't already available. Depending on who makes it, there's no reason it can't also be RISC-V ESP32s. If I'm going to have a ton of sensors in my home, I want to be sure all the code and hardware has been audited. For picking up, just a crutch can be a huge help for many.
>>37058 >Getting breathing, heart, and fall detection from esphome / home assistant would be a huge selling point. Oh yeah, nice find, Anon! Especially targeting the home-healthcare market to begin with, such tech should indeed increase your sales potential. Maybe you can tinker together a good system, and have a company swoop in with a buyout? Regardless, good luck with your projects, Anon!
