>>10357 (me)
>>10400
>What does chatbot-style-AI mean? Some already existing system?
Basically, Cleverbot, Evie, Replika, anything that takes user text input and responds with an AI-derived reply to mimic a back-and-forth conversation. I had the thought of structuring it this way to allow hot-swappable AIs: if a newer, better-coded AI comes to light, then as long as it has the same basic text-in, text-out interface, it can be swapped in and trained to utilize the rest of its body.
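To make the "hot-swappable" part concrete, here's a rough sketch of what that text-in, text-out contract could look like. All the class and function names are just made up for illustration, not from any existing library:
```python
# Minimal sketch of a hot-swappable text-in, text-out contract.
# Names here are hypothetical placeholders.
from abc import ABC, abstractmethod

class Chatbot(ABC):
    """Any AI that takes user text and returns reply text can plug in here."""

    @abstractmethod
    def respond(self, user_text: str) -> str:
        ...

class CleverbotStyleBot(Chatbot):
    def respond(self, user_text: str) -> str:
        # Placeholder: call whatever backend model you actually have.
        return "canned reply to: " + user_text

def run_turn(bot: Chatbot, user_text: str) -> str:
    # The rest of the body only ever sees this function, so swapping the
    # bot out for a newer model doesn't touch anything downstream.
    return bot.respond(user_text)
```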
>>10427
>This idea that the mind would only deliver abstract ideas to another "component", perhaps even a more basic machine learning model
This is what I was trying to get at. Instead of forcing the chatbot AI (which is designed first and foremost to speak like a human, not move like a human) to learn the nitty-gritty of each action down at the metal (move ABC servo XYZ degrees, move DEF servo XYZ degrees, etc.), it calls out an abstract command that other code can pick up and carry out in place of the AI. The chatbot isn't moving, the action handler is; all the chatbot has to do is invoke a command, and the action handler carries it out. Granted, this leaves a lot open to interpretation by the action handler, but text analysis can supply other information beyond the command invocation to influence how the handler moves (like those ML scripts that predict the emotion behind words as a set of confidence values, which can be plugged in to give the movement emotional color based on the AI's mood). Full, direct control of the waifubody by the AI would be cool, but the computing power, training data, and virtual training environment needed to train the AI both to SOUND human and ACT human seem impractical for a proof-of-concept.
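Roughly how I picture the split, in sketch form. The command tag, the emotion stub, and the handler methods are all placeholders; a real version would hook the handler into actual servo control and a real emotion model:
```python
# Sketch of the chatbot / action-handler split described above.
# Command names, the emotion dict, and the "servo" calls are invented
# for illustration only.

def classify_emotion(text: str) -> dict:
    # Stand-in for an ML emotion model that returns confidence values.
    return {"happy": 0.7, "sad": 0.1, "angry": 0.2}

class ActionHandler:
    def wave(self, mood: dict) -> None:
        # Higher "happy" confidence -> bigger wave.
        amplitude = 20 + 40 * mood.get("happy", 0.0)
        print(f"waving arm, amplitude {amplitude:.0f} degrees")
        # Real version: loop over servo setpoints here.

    def dispatch(self, command: str, mood: dict) -> None:
        # The chatbot only names the action; the handler owns the servos.
        handlers = {"wave": self.wave}
        if command in handlers:
            handlers[command](mood)

# Suppose the chatbot's output carries a command tag like this:
reply = "Hi! [wave]"
mood = classify_emotion(reply)
if "[wave]" in reply:
    ActionHandler().dispatch("wave", mood)
```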
>>10400
>if it's a high level command which has many sub-commands and requires recognizing objects and planning motion, it's way more difficult. What happens in your text analyzer, and from there to the action, will be very complex.
This feels like a more achievable goal than native control, at least to me. For lack of a better way of explaining it, BostonDynamics' Spot can scan its environment, build a model of it, and figure out how to move around without falling over or bumping into things, and the end user can code movement without manually telling each servo how to move and where to step; it's all abstracted away, and is simple enough to drive with a gamepad-style controller, no code required. Granted, this is a bit of an unfair comparison since Spot is an engineering masterpiece with over 30 years of development behind it (and is indeed very complex), but considering Spot-like bots exist and Replika-like AIs exist while robowaifus don't yet, I think this model is a good way of cross-breeding the two technologies if direct control isn't viable. At least to me, coding movement control this way seems far easier than trying to wrap my head around an AI/ML model learning to walk in a virtual environment and then translating that virtual movement to IRL movement.
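For illustration, the kind of "abstracted away" interface I mean might look something like this. walk_to() and everything under it are hypothetical, not Spot's actual API:
```python
# Sketch of high-level motion control hiding servo-level detail.
# All names and the toy "planner" are made up for illustration.

class MotionController:
    def walk_to(self, x: float, y: float) -> None:
        # Caller only gives a goal; planning and balance live below this line.
        for step in self.plan_footsteps(x, y):
            self.execute_step(step)

    def plan_footsteps(self, x: float, y: float):
        # Stand-in for a real footstep / path planner.
        return [("left", x / 2, y / 2), ("right", x, y)]

    def execute_step(self, step) -> None:
        foot, fx, fy = step
        # Real version: convert this footstep into individual servo targets.
        print(f"placing {foot} foot at ({fx:.1f}, {fy:.1f})")

# The chatbot / action handler only ever says where to go:
MotionController().walk_to(1.0, 0.5)
```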