>>39001
>NEEDS:
>speech, robot movement, power supply and safety features
<--->
Let's address your primary concern here first.
Premise:
>A US$500Bn-taxpayer-money-theft-tier, Samyuel Copeman-tier, AI video generator setup is needed to:
>dynamically run [robowaifu] movements
Issues:
>muh nuclear-powered datacenter is needed!!
>it'll cost muh six-gorillion j*wbuxx!!
>else its muh Baste Chinese
>then its muh glowniggers, femsh*tes, and conservicucks!!
>...and all that that implies!111 :DD
My apologies for being so facetious, Anon. Just kidding around a bit. All due respect to you. :^)
---
This is a fallacy. We
don't need AI Movie Generation-scale capabilities either to a) control a robowaifu's movements, or b) understand other humanoids' (such as her Master's) movements. This problem was solved long ago in the computer animation industry by skellingtons + (so-called) rigging. Both are highly-mathematical, highly-optimized (computationally-speaking) processes today that are relatively low-power.
A) As to the first need, it's simply a matter of animatronics-style animation of her skellington's rig. (This is similar to character animation in vidya & film, BTW.)
B) As to the second need, it's, roughly speaking, been commoditized to the point that it will
literally run on a few-bucks-cost microcontroller today. [1]
So no, we won't need yuge-a*rse datacenters just to manage our robowaifu's bodily motions, nor for her to analyze properly the people/robowaifus in the scene around her. It can all be run onboard, fully self-contained, within the robowaifu's internal compute hardware of today (ie, tiny-PCs, SBCs, MCUs).
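To make the 'cheap math' claim concrete, here's a minimal sketch of 2-link planar forward kinematics -- the kind of per-joint trig a skellington rig evaluates each frame. (The link lengths & joint angles below are made-up illustration values, not from any real robowaifu build; real rigs chain more joints in 3D, but it's the same handful of multiply-adds per joint.)

```python
# Minimal 2-link planar forward kinematics sketch.
# A full rig just repeats this per joint down the kinematic chain --
# a few trig ops per joint per frame, trivially cheap for an MCU/SBC.
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """Return the (x, y) position of a 2-link arm's end effector.

    l1, l2: link lengths (meters); theta1, theta2: joint angles (radians).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Illustrative values: 30cm upper arm, 25cm forearm, shoulder 30deg, elbow 45deg.
x, y = forward_kinematics(0.3, 0.25, math.radians(30), math.radians(45))
```

Point being: this is closed-form arithmetic, not neural inference -- exactly why it ran fine on 1990s animation workstations, and runs fine on today's microcontrollers.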
---
The
real issue is the
systems control (ie, 'Executive', or 'C4 Systems') software to generally oversee & operate/send-commands to the robowaifu herself.
This part is some real frontiersman, trailblazing stuff at this stage. Much has been done within industrial work, and humanoid systems are coming along now too. But we all still have a looong climb up that mountain to go! :^)
>(cf. 'Safety' below)
<--->
Speech.
As you mentioned, speech is largely a done deal today as well. The quality & hardware needs vary dramatically at this stage, but my instincts tell me that we can get working robowaifus at a reasonable level today with little more than consumer-tier compute hardware -- all running locally there inside Anon's flat -- 100% disconnected & offline.
<--->
Ahh, batteries...
Yes, it's a big issue still. There are ongoing research efforts (primarily in the EV auto industry), and incremental improvements are occurring. As you mentioned, qwik-replace/augmentation approaches are probably needed for our robowaifus today.
Additionally, incremental improvements are still happening in actuator research...an important point.
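For a sense of scale on the battery question, here's the back-of-the-envelope runtime arithmetic behind the qwik-replace idea: usable watt-hours divided by average draw gives you hours between swaps. (Every number here is an illustrative assumption, not a measurement from any actual build.)

```python
# Rough runtime estimate for a quick-swap battery pack scheme.
# All figures are illustrative assumptions, not real measurements.

def runtime_hours(pack_wh, avg_draw_w, usable_fraction=0.8):
    """Estimate hours of operation from pack capacity and average power draw.

    usable_fraction reserves some charge (here 20%) to protect pack longevity.
    """
    return (pack_wh * usable_fraction) / avg_draw_w

# e.g. a hypothetical 200 Wh pack at ~100 W average draw:
hours = runtime_hours(200, 100)  # -> 1.6 hours between swaps
```

Numbers like that are why hot-swappable packs (rather than one giant internal battery) look like the practical near-term answer.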
<--->
Safety.
This is literally
the.hardest.task. that we here on /robowaifu/ & elsewhere face, I deem. We're decades away from highly-solid solutions in this arena IMO -- and even then
they will only be provisional!
OTOH, we don't have perfect safety today with cars/planes/dishwashers/toasters. That fact doesn't stop multi-billion dollar industries from humming along around all of these, nor millions & millions of consumers from using them.
>tl;dr
We'll muddle along somehow, I think.
BUT we'll need some kind of legal-defense union [2] to keep the greedy lawyers, et al, from attempting to destroy the 'little guys' trying to participate in this nascent robowaifu industry.
<--->
I hope that touches the bases sufficiently to address your great post properly, Anon. Cheers! :^)
---
1.
(cf.
>>38841, et al)
2.
(cf.
>>38515, et al)
Edited last time by Chobitsu on 06/02/2025 (Mon) 22:37:05.