>>42109
>But the question is: what hardware would we even build?
Well, the basic premise is to push much of the "intelligence" out to the periphery of the system as a whole. In effect, it combines the sensing & processing & control all together into one tight little silicon package. Similar to biological life, taking this approach to 'computation' reduces the need for bidirectional chatter with a central core. Handling inputs locally gives faster control responses & lower power consumption (nominally near-instantaneous, and mere micro-Watts).
>tl;dr
Put hundreds (thousands?) of these relatively-dumb little sensor+processing+control packages out along all the periphery where the robowaifu interacts with the real world; while her system overall lazily performs data fusion back at the core (the robowaifu's "brain") to inform higher-level "thinking & planning".
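To make that a bit more concrete, here's a minimal sketch (in C) of what one such peripheral module's firmware loop might look like. It isn't tied to any real chip or library: read_sensor(), drive_actuator(), send_to_core(), sleep_until_next_tick(), and the threshold values are all hypothetical stand-ins for whatever the actual silicon would provide.
[code]
/* Minimal sketch of one peripheral sense+process+control module.
 * Hypothetical HAL calls only: read_sensor(), drive_actuator(),
 * send_to_core(), sleep_until_next_tick() stand in for real hardware. */
#include <stdint.h>

#define PRESSURE_LIMIT 900u  /* assumed raw ADC threshold, for illustration  */
#define REPORT_EVERY   100u  /* forward one summary to the core every N ticks */

extern uint16_t read_sensor(void);            /* hypothetical: local ADC read   */
extern void     drive_actuator(int16_t cmd);  /* hypothetical: local PWM/driver */
extern void     send_to_core(uint16_t avg);   /* hypothetical: uplink message   */
extern void     sleep_until_next_tick(void);  /* hypothetical: low-power wait   */

void module_loop(void)
{
    uint32_t sum = 0, ticks = 0;

    for (;;) {
        uint16_t raw = read_sensor();

        /* Local reflex: react immediately, no round-trip to the core. */
        if (raw > PRESSURE_LIMIT)
            drive_actuator(-50);  /* e.g. ease a finger joint back slightly */

        /* Lazily accumulate a summary for the core's data fusion. */
        sum += raw;
        if (++ticks == REPORT_EVERY) {
            send_to_core((uint16_t)(sum / REPORT_EVERY));
            sum = 0;
            ticks = 0;
        }

        sleep_until_next_tick();  /* spend most of the time asleep */
    }
}
[/code]
The point of the sketch is just the shape of the loop: the reflex happens right there inside the module, and only a lazy, consolidated summary ever travels back to the core.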
---
These distal components also form compact little subnets for local comms/coordination with one another: for example, all the dozens of little modules associated with sensing/running a robowaifu's single hand. Though they spend most of their time asleep, each one can still perform thousands of sense/control cycles per second, while also transceiving dozens-to-hundreds of subnet info packets per second amongst themselves. As a group they additionally generate consolidated data for the central core's use (and asynchronously receive control signals back for the collection's use), sending all of this out along alternate comms pathways.
All of the electronics involved are individually very lightweight (in basically every sense of the term).
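As a rough illustration of that subnet traffic, here's a short C sketch of a tiny packet format plus the group-consolidation step a hand's modules might run before one of them forwards a compact summary up the alternate pathway to the core. The field names, the 24-module count, and the consolidate() helper are all assumptions made up for this example, not an actual spec.
[code]
/* Rough sketch of one subnet packet plus the group-consolidation step
 * for a hand's modules. Every name and number here is an assumption
 * made up purely to illustrate the data flow described above. */
#include <stdint.h>

#define HAND_MODULES 24  /* assumed count of modules in one hand subnet */

typedef struct {
    uint8_t  module_id;  /* which fingertip/joint module sent this  */
    uint8_t  flags;      /* bit 0: contact detected (by assumption) */
    uint16_t reading;    /* latest local sensor value               */
} subnet_packet_t;

typedef struct {
    uint8_t  contact_mask[(HAND_MODULES + 7) / 8];  /* one bit per module */
    uint16_t mean_reading;                          /* whole-hand average */
} hand_summary_t;

/* Fold the latest packet from each module into one compact summary that
 * a single designated module forwards out to the central core. */
void consolidate(const subnet_packet_t pkts[HAND_MODULES], hand_summary_t *out)
{
    uint32_t sum = 0;

    for (unsigned i = 0; i < sizeof out->contact_mask; ++i)
        out->contact_mask[i] = 0;

    for (unsigned i = 0; i < HAND_MODULES; ++i) {
        sum += pkts[i].reading;
        if (pkts[i].flags & 0x01u)
            out->contact_mask[i / 8] |= (uint8_t)(1u << (i % 8));
    }
    out->mean_reading = (uint16_t)(sum / HAND_MODULES);
}
[/code]
One compact summary like this is what the core fuses lazily, instead of chewing through every raw packet from every fingertip.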
<--->
This is the essence of Neuromorphics. The inspiration for this approach is studying life itself, and how it reactively coordinates & operates in response to stimuli. The >tl;dr here being that most of it happens out at the periphery... inside the neural & muscle tissue there, locally. Cheers, Anon. :^)