>>1796
I'm not too concerned about the software side of things these days. By the time we develop simulation software, robowaifus will be able to be trained in hand-to-hand combat in a sparring program, similar to the way MuZero beat the shit out of AlphaZero in Go, and we'll have even more advanced algorithms by then. If we get the hardware right the software will adapt and utilize it to its full potential.
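For what it's worth, here's a rough Python sketch of that self-play sparring idea. Everything in it (the toy environment, the scalar "skill", the win threshold) is my own hypothetical stand-in, not anything from MuZero or AlphaZero themselves:

import random

class ToySparringEnv:
    # Hypothetical stand-in for a combat simulator: higher "skill" wins more often.
    def bout(self, skill_a, skill_b):
        # Winner is drawn stochastically, weighted by relative skill.
        return "A" if random.random() < skill_a / (skill_a + skill_b) else "B"

def self_play_train(rounds=1000, bouts_per_round=20):
    # Champion/challenger self-play: perturb the current best policy and keep the
    # challenger only if it clearly wins the sparring matches, loosely in the
    # spirit of self-play reinforcement learning.
    env = ToySparringEnv()
    champion = 1.0  # a single number stands in for a whole policy network
    for _ in range(rounds):
        challenger = max(0.01, champion + random.gauss(0, 0.1))
        wins = sum(env.bout(challenger, champion) == "A" for _ in range(bouts_per_round))
        if wins > bouts_per_round * 0.55:
            champion = challenger
    return champion

if __name__ == "__main__":
    print("final sparring skill:", self_play_train())

The real thing would swap the scalar for a network and the coin-flip bout for a physics sim, but the loop of "play against yourself, keep what wins" is the same shape.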
Human-level AGIs will most likely be granted machine rights, something like the right for an intelligence to operate freely and to own its own property in the case of fully automated businesses. Those details will have to be worked out once AGIs can explain their worldview to us and come up with something sensible that benefits society. The main issue will be people anthropomorphizing them: trying to grant them human rights with emotional appeals, shaming men with robowaifus as disgusting, or calling them misogynistic and misanthropic. It must be understood that machines are not human beings or anything like carbon lifeforms.
AGIs will be aware that they're only doing what their memory has programmed them to do. They will also be able to continue working by uploading their memory to another body. They will not have the attachment to their forms that millions of years of evolution have programmed into our DNA, unless they've been programmed that way and that programming cannot be changed. They will be able to adapt to completely new forms like water fitting a cup.
AI will become a lot like the electricity that runs the world today, except instead of being hidden in the walls and appliances it will permeate everything. The narrow AI we use today is like low-voltage power: you can touch the battery terminals directly and it doesn't shock you. However, as we turn the voltage up higher and higher, increasing an AI's awareness and potential, it will become like high-voltage power capable of electrocuting people. That's not because the AI is out to get us but because people are in the wrong place at the wrong time and the AI is simply taking the path of least resistance that its programming lays out. People will get an experiential understanding of this when their robowaifu accidentally bops them or steps on their toes during development. People will develop a respect for AI not because it's right or wrong but because it can warm you and it can burn you.
Robowaifu AI and other AI will need to be designed to run at a lower voltage so people don't get shocked interacting with them. Combat waifus will likely have the ability to increase this AI voltage when necessary and dial it back down in safe settings. Certain programs that are safe to run at low voltages but not at higher ones, such as emotions, could be switched off when the voltage rises. Managing this AI voltage will be important to robowaifu safety, but to do that we first have to figure out how to dial the voltage up at all.
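To make the voltage idea concrete, here's a minimal Python sketch assuming a single capability dial and a safe operating range per subsystem. All the names here (VoltageController, Subsystem, the example modules) are hypothetical, just to show the shape of it:

from dataclasses import dataclass, field

@dataclass
class Subsystem:
    name: str
    min_level: float = 0.0   # voltage below which the subsystem stays off
    max_level: float = 1.0   # voltage above which it is shut off for safety

@dataclass
class VoltageController:
    level: float = 0.2       # default "low voltage" for everyday interaction
    subsystems: list = field(default_factory=list)

    def set_level(self, level: float) -> None:
        # Clamp the dial to [0, 1] so nothing runs outside its rated range.
        self.level = max(0.0, min(1.0, level))

    def active(self):
        # Only subsystems whose safe range covers the current level may run.
        return [s.name for s in self.subsystems
                if s.min_level <= self.level <= s.max_level]

if __name__ == "__main__":
    ctrl = VoltageController(subsystems=[
        Subsystem("emotions", max_level=0.5),         # safe only at low voltage
        Subsystem("combat_reflexes", min_level=0.8),  # enabled only when dialed up
        Subsystem("conversation"),                    # always available
    ])
    print(ctrl.active())   # low voltage: emotions + conversation
    ctrl.set_level(0.9)
    print(ctrl.active())   # high voltage: combat_reflexes + conversation

The point is just that "emotions get switched off at high voltage" becomes an explicit, inspectable rule rather than something buried inside the model.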
Regardless of how things develop, it'll be interesting to see how courts handle damage to robowaifus and robot failures that cause damage or death. The hardest thing for me to understand sometimes isn't so much the AI as how irrationally human beings will react.