/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Build Back Better

Sorry for the delays in the BBB plan. An update will be issued in the thread in late August. -r

Open file (259.83 KB 1024x576 2-9d2706640db78d5f.png)
Single board computers & microcontrollers Robowaifu Technician 09/09/2019 (Mon) 05:06:55 No.16
Robotic control and data systems can be run by very small and inexpensive computers today. Please post info on SBCs & micro-controllers. en.wikipedia.org/wiki/Single-board_computer https://archive.is/0gKHz beagleboard.org/black https://archive.is/VNnAr >=== -combine 'microcontrollers' into single word
Edited last time by Chobitsu on 06/25/2021 (Fri) 15:57:27.
>>25437 Okay, but make sure you know how to take care of the cooling.
>>25396 >4.2 TFLOPS!! Nice. You'll need active cooling. Here's Nvidia's CUDA page. It's basically an extension of C programming (cf. >>20980) . https://developer.nvidia.com/cuda-toolkit >=== -add crosslink
Edited last time by Chobitsu on 09/21/2023 (Thu) 22:50:59.
As already mentioned in the meta-thread: The P40 (server GPU) also exists with 24GB. I didn't know. It's better than the M40. https://technical.city/en/video/Tesla-M40-24-GB-vs-Tesla-P40 - They need extra work: adding coolers, or mounting a cooler in a spare PCI slot.
This thread is becoming more and more a thread about GPUs. Maybe we should make one for that, or simply for AI-related hardware topics in PCs and home servers. Right now these topics are spread over many threads, since in the past we didn't use this thread for them. Anyways, am I the only one with an AMD APU in a PC? It seems not to have interested anyone that we can now run neural networks on such an APU. https://www.tomshardware.com/news/dollar95-amd-cpu-becomes-16gb-gpu-to-run-ai-software https://youtu.be/H9oaNZNJdrw https://youtu.be/HPO7fu7Vyw4 I haven't tested it yet, but I'll try to keep it in mind next time I reboot; I need to change some BIOS settings first.
>>25759 I have a 5700G and it is honestly terrible for AI compared to a GPU. Yes, I can feed it heaps of RAM; it is too slow for that to matter, though. My RTX 3060, undervolted and underclocked to 65 watts, generates a Stable Diffusion image at 1024x768 with several LoRAs in 27 seconds. Considering an RTX 3060 can be had for 200 USD equivalent, or even less if you're careful, it is hard to see the point.
>>25769 One point is that there are mini-PCs with APUs which would fit into a robowaifu. That aside, I'm more interested in trying it for some LLM. It can allocate at least 16GB, and some claim 64GB. Also, if this works, then when a PC/homeserver is only a few GB short of running a model, the missing memory could come from the APU.
>>25770 LLMs do make sense for these systems. In which case, using llama.cpp (https://github.com/ggerganov/llama.cpp) is ideal. Those Zen cores are also very powerful, and utilizing them together with the iGPU can lead to good results. This hybrid approach is still being improved but has great potential. I'm looking forward to further refinement of the ROCm drivers. AMD has been making strides that have improved their AI performance over the last year, which should continue to benefit APU-based systems.
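llama.cpp's hybrid mode works by offloading some transformer layers to the GPU (the `-ngl` / `--n-gpu-layers` option) while the rest run on the CPU cores. A minimal sketch of how you might size that number for an APU's carved-out VRAM; the per-layer size and the reserve figure are made-up placeholders, not measured values:

```python
def layers_to_offload(vram_mb, layer_size_mb, n_layers, reserve_mb=512):
    """Rough count of transformer layers that fit in the iGPU's carved-out VRAM.

    layer_size_mb: approximate per-layer weight size for your quantised model
                   (a guess; measure it for your actual GGUF file)
    reserve_mb:    headroom left for the KV cache and scratch buffers (also a guess)
    """
    usable = max(0, vram_mb - reserve_mb)
    return min(n_layers, int(usable // layer_size_mb))

# e.g. a 16GB carve-out easily holds a small quantised model entirely on the iGPU,
# while a 4GB carve-out only takes a handful of layers:
print(layers_to_offload(16384, 400, 32))  # -> 32
print(layers_to_offload(4096, 400, 32))   # -> 8
```

The returned number would then be passed as `-ngl` on the llama.cpp command line; tune it down if the run OOMs.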
Old chromebooks as an alternative to SBCs: https://www.youtu.be/1qfSJxcgH5I (sometimes $99 for 10 of them, often cheaper). Of course, don't use an outdated version of Linux Mint like he might be doing in the video.
>>19428 More interesting development based around the RK3588's NPU. Useful Sensors have created a good library for using Transformer models on the NPU efficiently, called Useful Transformers. https://github.com/usefulsensors/useful-transformers Link to useful sensors https://usefulsensors.com/ Video demo of voice recognition and offline LLM response. https://youtu.be/K6qgN_QXH6M
>>26099 Hey that's probably a good idea, NoidoDev. >>26254 >Transformer models on the NPU efficiently Interesting.
There's progress in connecting (old) AMD GPUs to Raspis. This might become useful to run small models in mobile systems, I guess. https://youtu.be/BLg-1w2QayU - Work in progress, there are still issues.
>>26613 >There's progress in connecting (old) AMD GPUs to Raspis. Very cool. Lack of GPU dev for that Broadcom chipset has always been a shortcoming of the Pi. Maybe this will eventually turn into something important for this field. Appreciated, NoidoDev.
I've been thinking lately: assuming you're trying to build a full-size humanoid body, would it be better to have single boards placed all over the body parts? Instead of one powerful centralized board micromanaging everything, your robowaifu would be a decentralized supercomputer with many single boards throughout the humanoid body. Maybe this could solve the heat issue from the CPU? I'm not sure about the latency of this design, though; I think there would be a hub somewhere that connects every single board to one central board.
Open file (427.53 KB 1728x1728 FnmJKwtaYAEnvRH.jpg)
>>27359 There might be some merits to a system like you're describing, but there are some complicating factors. Firstly, each and every board will have to be powered one way or another. How they're powered will greatly affect the logistics of the setup. If they're powered by USB, then you have to either ensure that the central board can supply power to each peripheral board, or install a powered hub if that's not possible. Some might be powered by basic power outputs that exist on the central SBC. If so, you'll need to wire each one properly and ensure that the central board can even power everything. If not, you'll need a power supply. Similar problems exist with other powering schemes. You're adding a whole lot of cables that'll have to be managed. Depending on how the internal workings of the waifu are planned (and there are varying schools of thought here), this could get messy pretty quickly. You could opt for PoE boards, ensuring that you only need one cable per board, but this is more expensive. You'll also need to install a PoE switch, which needs a whole bunch of power and may run hot if it's not cooled properly. Speaking of power, each board will likely have redundant components that consume power without doing much. I doubt such a setup would consume less power than just having one board managing everything all the time (not even accounting for separate power supply schemes). If you're planning on having a battery-powered waifu, this is a huge tradeoff to make. Every watt counts. If you're planning on having a tethered waifu (like me), then your main focus will be on placing as many components outside of the waifu as possible. It's better to have a beefier computer managing everything while it sits in its designated corner. If heat is your concern, the motor controllers and power supply should be your main focus anyway. All of that will put out far more heat than a laptop board when under significant load.
Just think about how many motors you'll need for a waifu to work. Pretty much everyone here is cutting back on articulation in some way or another using various methods, but almost all of us still end up with relatively high figures, because we want as much poseability out of our waifus as we can manage. Seriously, think about it. Imagine any one physical activity you'd want to do with your waifu. Hugging? That'll need at least four points of articulation. A good hug? Triple that. Handholding? Add another five. Per hand. Don't forget the sensors, either. Even with a demultiplexer, you're looking at some hefty motor requirements unless you want your waifu to only be able to move one point of articulation at a time. Swapping out a single baked potato for a tray of fries won't change the fact that they're sitting in an oven.
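Putting rough numbers on the tally above (these are the post's own illustrative figures, not a real design):

```python
# Articulation budget using the post's example figures:
# a basic hug needs ~4 points, a "good" hug triples that,
# and handholding adds 5 per hand. Each point of articulation
# is roughly one motor plus one driver channel.
points = {
    "basic_hug": 4,
    "good_hug": 4 * 3,   # "triple that"
    "hands": 5 * 2,      # five per hand, two hands
}
total = points["good_hug"] + points["hands"]
print(total)  # -> 22: a good hug plus both hands already needs 22 channels
```

And that's before the legs, neck, or any sensors share the bus.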
>>27359 Yes, there are a number of design tradeoffs for a distributed, onboard compute network for our robowaifus. It certainly increases the complexity from a systems-engineering perspective, but perhaps in some contexts the benefits are worth the pain! :D Good thinking, Anon. Thanks for sharing it. Cheers. :^) >>27360 Good, straightforward analysis, Anon. >Swapping out a single baked potato for a tray of fries won't change the fact that they're sitting in an oven. Lel'd. :^) >=== -add'l reply -sp edit
Edited last time by Chobitsu on 12/17/2023 (Sun) 04:40:20.
>>27360 I like the idea of having all the internal electronic parts, including the board, fully submerged in water. I guess you might have seen a desktop PC motherboard running in a desktop case filled with water. I forgot how the theory works; I think it has to do with an electrolyte, like how an old car battery works. Perhaps this technique could help cool down your robowaifu.
>>27359 Think about it slightly different. You want the heat to have a warm body. You have to either distribute electricity or the heat. Distributing the latter might only be possible through liquid, especially since it would have to go for a longer distance. I think it's probably easier to make a coupler in the joints for electricity than for liquid. >>27378 I think they're rather using some plant oil or some oil derivative. I don't think it's such a good idea. We won't need it and it would add difficulties. I think this is mainly useful for overclocking.
>Explore the groundbreaking Coral AI mini PCIe accelerator by Google, a game-changer for home automation and DIY projects, in my latest video where I integrate this innovative chip with the ZimaBoard and Zima Blade for superior performance and cost-effectiveness. Discover how this setup outperforms others, like the Raspberry Pi 5, in speed and thermal efficiency, and follow my journey from troubleshooting software issues to successfully running Frigate, an advanced home lab computer vision system. Learn how this affordable, under $100 setup can revolutionize your home tech projects! https://youtu.be/mdOEaNV8NXw
Open file (839.96 KB 1600x865 ShionOverHeating.png)
>>27359 It is most efficient to keep everything as centralized as possible. Remember, it is trivial to route power and data throughout a body. >Heat issue If this is a problem, your design has failed; make something better. I've designed and implemented several systems and heat has never been a problem because I don't allow it to be. >>27360 >Power logistics This is an important factor. Minimize the range of voltages needed; ideally everything requires voltage within the standard discharge curve of your battery. Always strive to reach this ideal state. >USB Not recommended for internal components unless there is a need. For instance, an internal USB can be used for adding special components like SDR, GPS, video capture, and other special-need components which don't make sense for a base model. >PoE Expensive; makes sense for charging her while transferring data. Great idea for an external connection. Internally, there's no reason not to just use wires. I trust you already knew all of this, just wanted to add clarity for other Anons. >>27378 >Submersion cooling Mineral oil is better for this. Still used for high-performance computers. You need to actively filter the oil and replace it yearly. It is completely unnecessary and only ever makes sense as an aesthetic choice. >>27474 >Heating her skin Connecting her heat sinks to her skin is a great idea. Huge surface area to dissipate heat while making her feel more alive. >>27478 Looking forward to stronger accelerators. These are good for offloading object recognition and/or word recognition on low-compute platforms. Could be useful to slap into an old Thinkpad to give her eyes and ears.
>>27525 All very good points, and that's kinda what I was getting at in regards to his idea of putting small boards everywhere. Not all of them are designed to get their power the same way. I could have made that a bit more clear, though. Since mai waifu's going to be a tethered TENV system, I'm always thinking about how to make that specific kind of waifu work. However, I still maintain that the number-crunching is best kept remote, even (or especially) for a battery powered system. Modern day WiFi is fantastic, so I don't see much reason to have a larger computer using up space and power in your V1 waifu unless you actually plan on having her be especially mobile from the get-go. Just stick a desktop in the corner and let it chug away.
Open file (286.14 KB 1800x2103 322rrrr3MEM120623.jpg.jpeg)
>>27526 >pic >tfw i have the original of that somewhere...
>>27525 >Mineral oil She will be able to piss and drink fresh mineral oil as part of her routine self-maintenance cycle ^_^. Just wondering, though: is there a reason to choose mineral oil instead of any other type of non-conductive chemical substance?
>>27552 I don't see why you couldn't go with something else that's both electrically insulative and thermally conductive, but you'll have to be careful that whatever you use doesn't cause rot, corrosion, or dissolution, or leave any film or substrate behind on the surfaces it's interacting with. At least, not enough to make any difference over a decade or so. One of these is bound to happen eventually when you have a liquid mechanically interacting with a surface. Also, be sure to watch out for your own health. Don't get anything toxic, even if you think she'll be well sealed (leaks happen...). All that being said, flushing might not even be necessary (at least, not on a regular basis) if you seal everything properly. The reason mineral oil coolers need to be refilled is because the oil evaporates over time. Those systems are rarely completely sealed because they're only designed to operate upright. If your waifu is only ever going to be standing or sitting, then this is probably fine, but if you want her to operate in any orientation, then she'll need her cooling system to be perfectly sealed.
>>27555 You can add a water-cooling line if you want liquid cooling. Mineral oil is simply ideal for immersion liquid cooling. They are entirely different concepts which should be considered as having nothing in common, aside from a liquid carrying heat. If you'd like her to drink and pee, a soft-line water-cooling system is recommended. If you'll drink her pee, please incorporate a filter near her urethra.
Your waifu will be on the cloud or localhost and you will be happy
>LattePanda's Mu is the latest entrant in the 'Pi Killer' battle, but it has a trick up its sleeve. https://youtu.be/GKGtRrElu30 SBC comparisons: https://github.com/geerlingguy/sbc-reviews
Well, I was looking around at RISC-V microcontrollers and other MCUs, and to my surprise they are now blowing out MCUs at 6 cents, 10 cents, etc., with 32-bit RISC-V and also Arm Cortex cores. I'm stupefied. I said I couldn't imagine how you could get touch, but if your MCU has 10 or 16 touch inputs and costs 15 cents or less, well, there it is. You have to get some conductive Mylar or whatever on both sides of something you could squeeze together, or some sort of conductive material that changes value when squeezed. Not that this is easy, but once you have it figured out you could mass-produce it. I admit to being amazed. Here's a write-up on the "...Arm Cortex-M0+ made by Puya, it's less than 10 cents in moderate quantities..." https://jaycarlson.net/2023/02/04/the-cheapest-flash-microcontroller-you-can-buy-is-actually-an-arm-cortex-m0/ Maybe a good full handful of these would be all you needed for motion and actuator control (you still need MOSFETs for power) plus touch/pressure sensing. I found a link with their "high end" PY32F030F18P6TU (8KB RAM, 64KB flash, TSSOP-20): 6 for $1.81 WOW!!! 60+ for $13.76 https://www.lcsc.com/product-detail/Microcontroller-Units-MCUs-MPUs-SOCs_PUYA-PY32F030F18P6TU_C3018715.html If you had 120 or maybe 200 of these at around $30-$40, you'd have all the control you need. They also have the new RISC-V mentioned at the link above for less than 20 cents.
Here are a few links talking about this (BTW, the above part seems to have great support compared to some of the other Chinese offerings the article mentioned): https://hackaday.com/2024/01/04/ch32-risc-v-mcus-get-official-arduino-support/ https://www.cnx-software.com/2023/02/09/8-cents-for-an-arm-cortex-m0-microcontroller-meet-puya-py32-series-mcus/ https://hackaday.com/2023/03/02/a-ch32v003-toolchain-if-you-can-get-one-to-try-it-on/ https://hackaday.com/2023/04/19/risc-v-supercluster-for-very-low-cost/ I think the key to this is some sort of framework using vectors sent by the main CPU to these microcontrollers, letting them do all the compute and feedback sensing to perform the move.
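On most MCUs with capacitive-touch peripherals, a touched pad reads noticeably below its untouched baseline count (that polarity is an assumption here; check your part's datasheet). The firmware side is just baseline-plus-threshold logic, sketched in plain Python:

```python
def calibrate(idle_samples):
    """Average some readings taken at power-up, before anyone touches the pad,
    to establish the untouched baseline count."""
    return sum(idle_samples) / len(idle_samples)

def touched(raw, baseline, margin=0.15):
    """True when the pad reads `margin` (default 15%) below its baseline.
    The margin is a placeholder; it needs tuning per pad and skin material."""
    return raw < baseline * (1 - margin)

baseline = calibrate([1000, 1004, 996])
print(touched(800, baseline))   # firm press, well below threshold -> True
print(touched(980, baseline))   # idle noise -> False
```

On a Puya-class part the same logic would be a few lines of C reading the touch peripheral's count register in a loop; re-calibrating the baseline periodically also compensates for temperature drift.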
>>32872 >We may devise some custom encoders if we can't find suitable ones for things like the knuckles, eyes, etc. I wish to repeat myself because, to me, it seems the absolute master genius, super-fi, bestest-in-the-world idea for encoders for anything that moves. It's the guy's idea on Hackaday to send light through fiber optics (plastic fibers are super cheap) and then have the light vary due to movement. This would in turn be fed through another fiber optic back to a simple mass-produced camera chip. So you could have millions of inputs on one microcontroller with a camera. Called "Fibergrid": https://hackaday.io/project/167317-fibergrid#menu-details and some rough speculation about usage by me: >>24233
Another good link by the same guy with a run down on under $1 USD micro-controllers. https://jaycarlson.net/microcontrollers/
And even more: a comparison of Linux-based single-board computers. This guy is really good, and the links I got from him are, in my opinion, a must-have for any sort of prototyping we need to do. https://jaycarlson.net/embedded-linux/
>>32880 >This guy is really good and the links I got from him are, in my opinion, a must-have for any sort of prototyping we need to do. Yeah, it's really good stuff, Grommet. Thanks! He mentions the PY32 series [1] (including one for US10cents!). I really like that he has a set of C libraries for that platform already assembled together [2]. Cheers. :^) --- 1. https://jaycarlson.net/2023/02/04/the-cheapest-flash-microcontroller-you-can-buy-is-actually-an-arm-cortex-m0/ 2. https://github.com/jaydcarlson/py32-template
>>32874 I'll take another look at this. What we really need is an encoder/sensor that is simply in the form factor of a fiber itself. That way we can simply run it up through the knuckles to obtain position information with no add'l muss & fuss (such as with rotary encoders, etc.). Know anything about such a form-factor sensor, Anon?
Open file (112.73 KB 1024x1024 image-122.jpeg)
>>32874 Thanks for all the interesting comments, but especially for that. The idea to use light for robowaifus crossed my mind years ago, but it was just fantasizing, and then you posted this at some point already. This is really something we should keep in mind for sensors in the skin, though it would need to get quite small and maybe use the light spectrum. Idk, I guess there are reasons why this doesn't already exist. That said, I recently thought about whether people could build their own optics-based AI accelerator (GPU-like). Not for fitting into a PC, but some rather big drawer if necessary, yet good enough to run Llama 3. Just in case the "Pausing AI" crazies were somewhat successful, or just to discourage politicians in the first place. I thought about how to do that, and wondered how long it would take to calculate the color coming from a fibre, since that would limit the frequency of change that could be used for transmitting information. Unfortunately, the camera he uses is rather big, and he seems to only use the amount of light without analyzing the spectrum, which tells me this is the hard part. Which is what I thought. I wanted to use 50k pixels (220x220) and analyze the spectrum of each, haha. I don't really plan to pursue that, but maker projects like this one would be the place to look into it, so it's good to know it exists. Companies working on optics-based computing would try to go very small. I think this approach only helps if the hardware components are farther apart from each other, which is why this isn't being done; the distance would probably be good for cooling, though. The project I thought about would be more about having something we can build and repair ourselves: bigger parts than regular chips, but still small enough to fit into a room and run one big LLM at any time.
>>32893 Why does it need to "run" through the knuckles? Can't it pick up light based on the position of a connection?
>>32893 Simple. Ease-of-manufacture. There's also the very-likely adjunct benefit of lowered-cost too (less parts, less mass, etc.) >pick up light based on the position of a connection Maybe you have something there, Anon. Can you explicitly & particularly define your idea of how 'position of a connection' might work in your concept? >=== -minor edit
Edited last time by Chobitsu on 08/17/2024 (Sat) 20:47:16.
>>32896 Related about buses, sensors and multiplexing in the engineering general thread: >>32900
Interesting chip, the Sohu AI chip: "...Most large language models (LLMs) use matrix multiplication for the majority of their compute tasks and Etched estimated that Nvidia's H100 GPUs only use 3.3% of their transistors for this key task. This means that the remaining 96.7% of silicon is used for other tasks, which are still essential for general-purpose AI chips..." They're showing Llama 70B throughput at high speeds. Llama 70B should be really powerful. https://www.tomshardware.com/tech-industry/artificial-intelligence/sohu-ai-chip-claimed-to-run-models-20x-faster-and-cheaper-than-nvidia-h100-gpus
>>32882 This fibergrid thing this guy thought up is, in my opinion, Nobel Prize-winning material. It's one of the most impressive pieces of out-of-the-box thinking I've seen in a long time. I mean, it makes it sooooo easy to monitor literally hundreds of thousands of sensors with one chip, an LED, fiber, and a camera. There are all sorts of ways to run this. For joints I mentioned Gray code, but that's actually a little overboard, though good for "exact" placement. We don't need that precision. Something as simple as a slit that opens and closes as the joint rotates: one fiber feeds light, and the return picks up the intensity based on how far the slit opens. Another: a piece of paper with shading going from white to black. Shine on it, then return the reflection. So many ways. Try as I might, I have not come up with what I feel is a super-satisfactory way to use this for pressure sensing. I have posted a few ideas, but I admit they are a bit kludgy. I am still thinking about it, and if anyone has any ideas about how to vary light with pressure, please chime in. I just remembered one idea I had that just might work: slit the fiber all the way down its length. If pressed, it would open and leak light. Have one set going down a limb or body part and many horizontal. Might work, but... difficult to say. The beauty of this is you can use a camera to record the reflection, and with a vertical and horizontal grid you get position. And you could do high-detail, full-body sensing with a $9 camera/microcontroller combination and a few dollars of plastic fiber.
>>32882 >we can simply run it up through the knuckles to obtain position information with no add'l muss & fuss Right after I wrote the last comment I read yours again, and maybe, just maybe, a split fiber would do this. One way, the simple way, is to run the fiber down through the joint with a split at the joint, then run it back up to the camera. What would really be slick is to put a reflector on the end of the fiber and have a beam splitter right at the light/camera interface. But... that adds so much complexity (just thinking out loud). "If" you could find a cheap, fast way to do this, it would be very cool. As for the split fiber: it could be done with a jig and a template to split to a certain depth and length. You would need to glue on a piece of black something (paper, plastic, cloth, whatever) to absorb light. You "might" have to have a sort of wedge shape above the fiber to force it open. I haven't tried this. I know they have force-sensing systems for fibers, and I looked at them a lot, but they were super-complicated things. One good thing about this is the calibration could be automated: bend the limb and read the output as it bends. Same with touch: touch it all over while recording the light variance per force. So it wouldn't have to be perfect to work. I would suggest using multiple camera pixel inputs for each feedback light source, so even if a few go bad you can still re-calibrate without chucking the whole thing.
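The automated calibration described above comes down to a lookup table: record the mean brightness of the camera pixels watching one fibre end at a few known joint angles, then interpolate at runtime. A minimal sketch; the brightness/angle values are made-up placeholders, and averaging several pixels per fibre gives the fault tolerance mentioned in the post:

```python
import numpy as np

# Calibration table recorded once: mean ROI brightness at known joint angles.
# These numbers are illustrative only; brightness falls as the slit opens.
cal_angles = np.array([0.0, 30.0, 60.0, 90.0])   # degrees
cal_bright = np.array([210.0, 150.0, 95.0, 40.0])  # mean pixel value at each angle

def angle_from_roi(frame, roi):
    """Estimate a joint angle from the patch of camera pixels assigned to one fibre.

    frame: 2D grayscale image array; roi: (y0, y1, x0, x1) pixel bounds.
    """
    y0, y1, x0, x1 = roi
    b = frame[y0:y1, x0:x1].mean()  # average many pixels so a few dead ones don't matter
    # np.interp needs an ascending x-axis, so flip the descending brightness table
    return float(np.interp(b, cal_bright[::-1], cal_angles[::-1]))

frame = np.full((10, 10), 150.0)        # fake frame: uniform brightness 150
print(angle_from_roi(frame, (0, 5, 0, 5)))  # -> 30.0
```

Touch/pressure calibration would work the same way, just mapping brightness loss to force instead of angle.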
Open file (101.71 KB 1024x1024 image-58.jpeg)
>>32918 >fibergrid thing... Nobel Prize-winning material. One of the most impressive pieces of out-of-the-box thinking... I don't understand this argument; what did I miss? Combining light and cameras, or optical sensors, is very obvious. If you just know data transfer through optical fibers exists, then how would you imagine that working? If you know about the concept or idea of a computer based on it, how else would it work? It's pixels or a matrix, intensity, the spectrum of light (color), and the frequency of change on top of it. It's something very obvious.
>>32891 >I wanted to use 50k pixels (220x220) and analyze the spectrum of each Shine it on a grating. The colors will separate, and you can measure not only the different spectra but also the intensity. A cheap Fresnel lens would work. A simple, dirt-cheap way would be to shine the light on a regular music CD or optical DVD; you might have to move the CD around a bit to get the effect you want. To be precise, you are measuring the reflection off a grated mirrored surface, or in the case of a Fresnel lens, the transmission through it. You can buy Fresnel lenses that you stick in the back window of campers so they can see behind them. I'm not sure if that would work, but it might; I "know" a grating could work. You could make these with a razor blade, a piece of plastic, and a little tricky scraping. Drawing Holograms by Hand http://amasci.com/amateur/hand1.html I wonder, though: I do not think pressure on a fiber will cause it to change light frequency. Do you know better, or have you heard of such? Please say so if you have. What would you use to change the light frequency?
>>32891 >wondered how long it would take to calculate the color coming from a fibre Instantly, or as fast as your light sensor works.
>>32920 >I don't understand this argument, what did I miss? Combining light and cameras, or optical sensors, is very obvious. It's brilliant. Using light and sensors has been around for a long time, yes, but using the individual pixels of a CCD camera and its preexisting software to multiply the number of sensors into the thousands with one small chip, by reading the reflective feedback from a variable light source, is... brilliant. It's so good. And if it's so obvious, how come no one else thought of it? But he did. He has another crazy idea for robots: he takes an oscillating arm, or something, and feeds it through the entire robot, then uses some sort of clutches to grab the whatever-it-is that moves, and that moves the muscles. It's the craziest, most whacked thing ever, but it's also... brilliant. He's built prototypes of these. And in fact, if you tweak it a little... say, use an AC waveform, and then use magnetic amplifiers all throughout the body. The magnetic amplifier blocks the AC until you use a smaller signal to unblock it. Well then, once unblocked, it could be fed to a rectifier and provide DC to power a motor, solenoid, whatever. So you have one big-ass AC supply flowing at all times, only siphoning off power to those muscles that are unblocked. And you wonder why I keep bleating about magnetic amplifiers. That's why.
>>32921 >Fresnel lens Interesting. I recall these lenses vaguely from astronomy, for analyzing stars. Your idea isn't bad. I was thinking more of a gradual filter, while this one seems to do a hard cut-off, but I haven't looked into it. I thought more about some glass sphere or cylinder changing what spectrum it lets through, maybe changing the color based on some flow of electricity, or something more indirect like vibrations. But I also don't want too many parts which could break. Anyways, this might actually be good enough, I mean better than nothing, though it would also need to be very small, and this is where it would probably get tricky. That said, let's not forget this is more of a spontaneous thought experiment. >>32917 >Sohu AI chip Yeah, I came across this. But we can't buy it yet. I would like this for at home, or better, as an SBC. Anyways, one of the comments from the linked news: >In theory, Intel seems to have shown that their Intel MESO (magnetoelectric spin-orbit) logic, a post-CMOS technology related to spintronics, is much more amenable to both low-power logic and low-power neuromorphic computing. >For example, emerging non-volatile memory (NVM) / persistent-memory MRAM is already based on spintronics phenomena. >Therefore much more R&D resources should be allocated to developing new manufacturing tools to improve and lower MRAM manufacturing cost, and then improving those tools to evolve to MESO manufacturing: this would be much, much more groundbreaking!!! https://www.techspot.com/news/77688-intel-envisions-meso-logic-devices-superseding-cmos-tech.html https://www.imec-int.com/en/press/imecs-extremely-scaled-sot-mram-devices-show-record-low-switching-energy-and-virtually >fibergrid sensors Btw, can we move the discussion of sensors out of this thread, which was meant just for SBCs and computers?
Buses, sensors and multiplexing in the engineering general thread: >>32900 I fell for it myself again with the posting of >>32920 - Keeping the speculations on optical computing in this thread might be okayish, since it already moved beyond SBCs and small accelerators to GPUs and such.
Open file (112.63 KB 1024x1024 image-63.jpeg)
Since we're doing hardware today: >A research paper from UC Santa Cruz and accompanying writeup discussing how AI researchers found a way to run modern, billion-parameter-scale LLMs on just 13 watts of power. That's about the same as a 100W-equivalent LED bulb, but more importantly, it's about 50 times more efficient than the 700W of power that's needed by data center GPUs like the Nvidia H100 and H200, never mind the upcoming Blackwell B200 that can use up to 1200W per GPU. >The work was done using custom FPGA hardware, but the researchers clarify that (most) of their efficiency gains can be applied through open-source software and tweaking of existing setups. Most of the gains come from the removal of matrix multiplication (MatMul) from the LLM training and inference processes. >How was MatMul removed from a neural network while maintaining the same performance and accuracy? The researchers combined two methods. First, they converted the numeric system to a "ternary" system using -1, 0, and 1. This makes computation possible with summing rather than multiplying numbers. They then introduced time-based computation to the equation, giving the network an effective "memory" to allow it to perform even faster with fewer operations being run. >The mainstream model that the researchers used as a reference point is Meta's LLaMA LLM. The endeavor was inspired by a Microsoft paper on using ternary numbers in neural networks, though Microsoft did not go as far as removing matrix multiplication or open-sourcing their model like the UC Santa Cruz researchers did. >It boils down to an optimization problem. Rui-Jie Zhu, one of the graduate students working on the paper, says, "We replaced the expensive operation with cheaper operations." Whether the approach can be universally applied to AI and LLM solutions remains to be seen, but if viable it has the potential to radically alter the AI landscape. 
>We've witnessed a seemingly insatiable desire for power from leading AI companies over the past year. This research suggests that much of this has been a race to be first while using inefficient processing methods. We've heard comments from reputable figures like Arm's CEO warning that AI power demands continuing to increase at current rates would consume one-third of the United States' power by 2030. Cutting power use down to 1/50 of the current amount would represent a massive improvement. >Here's hoping Meta, OpenAI, Google, Nvidia, and all the other major players will find ways to leverage this open-source breakthrough. Faster and far more efficient processing of AI workloads would bring us closer to human brain level functioning—a brain gets by with approximately 0.3 kWh of power per day by some estimates, or 1/56 of what an Nvidia H100 requires. Of course, many LLMs require tens of thousands of such GPUs and months of training; hence, our gray matter isn't exactly outdated yet. It seems to be mostly coming from some software, but the FPGA will likely still be necessary. Not sure if this should rather go into >>250, but it's very much related to the hardware we would need. - https://www.tomshardware.com/tech-industry/artificial-intelligence/ai-researchers-found-a-way-to-run-llms-at-a-lightbulb-esque-13-watts-with-no-loss-in-performance - https://arxiv.org/abs/2406.02528 - https://news.ucsc.edu/2024/06/matmul-free-llm.html - https://github.com/ridgerchu/matmulfreellm -
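The ternary trick quoted above is easy to see in miniature: with weights restricted to {-1, 0, +1}, a matrix-vector product needs only signed additions, no multiplies. This is just an illustration of the idea, not the paper's actual kernel (which also adds the time-based "memory" component):

```python
import numpy as np

def ternary_matvec(W, x):
    """Matrix-vector product where every weight is -1, 0, or +1:
    multiplication reduces to adding, subtracting, or skipping each input."""
    assert set(np.unique(W)) <= {-1, 0, 1}, "weights must be ternary"
    # for each output row: sum inputs where the weight is +1,
    # subtract inputs where it is -1, ignore zeros entirely
    return np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

W = np.array([[1, -1, 0],
              [0,  1, 1]])
x = np.array([2.0, 3.0, 4.0])
print(ternary_matvec(W, x))  # -> [-1.  7.], identical to W @ x
```

On hardware this is why the approach maps so well to FPGAs: adders are far cheaper in silicon (and watts) than multipliers.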
This is some great information content recently ITT, Anons. Thanks! >=== -parse-edit post into R&D thread; per : ( >>32925 )
Edited last time by Chobitsu on 08/19/2024 (Mon) 07:50:44.
>>32927 Really good links. Very important stuff.
I talked about an idea for pressure/touch sensors here (in the wrong thread) >>32918 and here >>32919 and much earlier >>24233 >>30980
I did not mean to post the above here. Sigh, but it does reference the previous post so...maybe keep it??
>>32957 >>32960 Eheh, it's fine Grommet. Since this is a) great stuff, and b) you've helped everyone by 'consolidating' the access to your related-posts, and c) it's almost impossible with this forum format to stay on-topic (w/o rather Draconian moderation -- no one wants that) * ... then I say let's keep it! :^) --- * However I do think that (especially given the highly-technical & expansive nature of what we're attempting here) we do a better job at this than most any other IB I've ever seen. Give yourselves a pat on the back, Anons! :D >=== -add footnote
Edited last time by Chobitsu on 08/19/2024 (Mon) 18:55:33.