>>32848
>I hadn't heard that
He mentioned it during his 2024 Tesla shareholder meeting ( >>31772 ).
>You can bet his robot likely has the same processors to cut cost.
It does. They have one in the chest (exactly where I'm urging Anons to locate the 'breadbox' [a cooled Faraday cage + protective frame] for their robowaifus). BTW, this AI "chip" is actually a largish assembly of about 1ft^2 in area (think chip carrier on steroids). You can check out the first Tesla AI Day event to see it.
>(36 trillion operations per second) damn that's fast
Certainly impressive, but nothing like what's coming in the future! :D
>It's going to be tough to find something with that speed and mass of built in memory.
We won't need to, IMO. We're already on a trajectory to deliver robowaifus at a much lower cost, using nothing more than commodity, specialized compute hardware suited to amateur DIYers (RPis, ESP32s, etc.). We may devise some custom encoders if we can't find suitable ones for things like the knuckles, eyes, etc., but other than that we are looking at COTS parts + 3D-printed physical components. The actuators are the biggest cost factor, by far (since the software [at least ours] will all be free-as-in-speech & free-as-in-beer).
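For the custom-encoder idea above, a minimal sketch of the core logic: decoding a standard two-channel quadrature encoder in software. This is a hypothetical, hardware-agnostic example (the class name and sampling approach are my own assumptions, not anything specified in the thread); on a real RPi/ESP32 you'd feed it actual GPIO reads or use a hardware pulse counter instead.

```python
# Hypothetical quadrature-decoder sketch -- not tied to any specific encoder
# or board mentioned above. Each update() feeds one sample of the A and B
# channel levels; a Gray-code transition table yields -1, 0, or +1 counts.

# Transition table keyed by (previous AB state, current AB state).
# Valid single steps give +/-1; no-change or skipped states give 0.
_STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

class QuadratureDecoder:
    def __init__(self):
        self.state = 0b00   # last seen (A << 1) | B
        self.count = 0      # accumulated position, in quadrature counts

    def update(self, a: int, b: int) -> int:
        """Feed one sample of channels A and B; return the running count."""
        new = (a << 1) | b
        self.count += _STEP.get((self.state, new), 0)
        self.state = new
        return self.count
```

One full forward Gray-code cycle (00 -> 01 -> 11 -> 10 -> 00) advances the count by 4; the reverse sequence subtracts 4. In practice you'd sample fast enough (or use interrupts / the ESP32's pulse-counter peripheral) that no transitions are skipped.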
>muh home server
My intent is to do everything onboard the robowaifu. Any other approach will preclude smooth-and-fun Walk & Picnic in the Park days with robowaifu -- OBVIOUSLY A VERY-HIGH-PRIORITY DESIGN GOAL!! ( cf. >>32036 ) :DD
OTOH, having a detachable randoseru for extra battery/compute/cooling capacity is a given for my designs during these early years. Also, a home server (-room?) setup is a must if you want to produce private, custom-trained language models, etc., as well.
>===
-
minor edit
Edited last time by Chobitsu on 08/16/2024 (Fri) 19:37:25.