>>42751
>but I was talking about the actual toy, which surprisingly, seems to have about as much facial expression range.
Ahh, understood. Yes, just like with a movie, you'd create a "library" of expressions (as @GreerTech and I indicated), then use some form of scripting (in the case of the toy) to morph between them. We'd need the original programming from inside it to know precisely how it was done in that specific case, but that's the general approach.
As you suggested, the Moxie expressions are likely "baked out" beforehand as individual images, then morphed between with scripting (as with @Mechnomancer's & @GreerTech's approaches). Typical 2D stuff, and very low-cost computationally.
Great questions, Anon. Sounds like you're already well on your way... why not dive into the animatronics/programming aspects of IRL robowaifu faces (similar to @Mechnomancer's newer approaches)? :^)