>>17931
I wasn't able to find any other board listed that would be good for giving a summary of what I've learned from discussing robowaifus with others, so I'll just list it here:
1. No one else I talked to thought robowaifus should have anything other than an electronic circuitry design. They did not think a pneumatic or soft-body design to avoid electrocution risk was worth it. Notably, they also had no concern about the electrocution or overheating risk to a consumer that comes with an electronic circuitry design.
A synthetic, self-cleaning vagina will be needed in a robowaifu. Transgender surgery may provide a low-tech stepping stone to more advanced medical technology for this (another reason why trans surgery, while not good to be around, is not something a waifubot community has an interest in outlawing completely).
Piezoelectric and photosynthetic skins are basically still in the realm of science fiction and probably won't be available for at least a few centuries:
https://www.futuretimeline.net/the-far-future-2300-2999.htm
2. The general sequence of development that was agreed on was:
AI waifu chatbots and mobile apps -> prototyping and testing aspects of waifubots like inverse kinematics with robotic joints in high-fidelity virtual reality -> testing and release of waifubots irl
For AI waifu chatbots and mobile apps, the preferred programming languages were Python, JavaScript and Kotlin, with Kotlin being particularly useful for drag-and-drop mobile apps:
https://www.geeksforgeeks.org/android-drag-and-drop-with-kotlin/. When releasing these apps, you need to release them on an open-source app store, since both Google and Apple will remove your app from their app stores.
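As a minimal sketch of the chatbot side (all names here are hypothetical, not from any real project), a keyword-matching reply loop in Python might look like this; a real app would swap the keyword table for an NLP model:

```python
# Minimal rule-based waifu chatbot sketch (hypothetical example).
RESPONSES = {
    "hello": "Welcome home!",
    "tired": "You should rest. I'll be here.",
}
DEFAULT = "Tell me more."

def reply(message: str) -> str:
    """Return the first keyword-matched response, or a default."""
    lowered = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lowered:
            return response
    return DEFAULT

if __name__ == "__main__":
    print(reply("Hello, I'm back"))
```

The same `reply()` function could sit behind a webapp endpoint or a mobile frontend, which is why Python/JavaScript/Kotlin all end up in the mix.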
For actual waifubots, the preferred programming language was either C or Python. C's lack of garbage collection means memory must be managed manually, which was suggested as a downside of C compared to a garbage-collected language such as Python.
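To illustrate the garbage-collection point: in Python, objects are freed automatically once unreachable, even when they form reference cycles, whereas in C every malloc needs a matching free. A small sketch using the standard-library `gc` and `weakref` modules:

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.other = None

# Build a reference cycle: a <-> b
a, b = Node(), Node()
a.other, b.other = b, a
probe = weakref.ref(a)  # lets us observe when 'a' is actually freed

del a, b      # drop our references; the cycle alone keeps the objects alive
gc.collect()  # Python's cycle collector reclaims them

print(probe() is None)  # True: the cycle was collected automatically
```

In C the equivalent cycle would simply leak unless you freed both nodes yourself.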
High-fidelity virtual reality is nowhere near ready for use in prototyping and testing and will likely take 5 to 10 more years to mature. RTX 4080-class headsets and Meta's work on APUs (combined CPU+GPU chips) look promising. Maybe Apple's VR headset will be promising too, but they exercise much more creative control these days and consider themselves much more of a "family friendly company", so it's less likely than in the past. Inverse kinematics has been around for a long time, but there has been a lot of advancement in computer vision, which, along with NLP, will be the final piece of the puzzle for irl waifubots.
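For the inverse-kinematics piece, the classic warm-up problem is a two-link planar arm, which has a closed-form solution via the law of cosines. A sketch (link lengths are arbitrary example values, not from any real robot):

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Joint angles (one elbow branch) putting a 2-link planar arm's tip at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1=1.0, l2=1.0):
    """Forward kinematics, to verify the IK solution."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y
```

Real limbs with more joints need numerical solvers (e.g. Jacobian-based methods), but this is the kind of thing VR prototyping would exercise.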
3. Alternatives to AWS will be needed since they censor anything that is politically incorrect. You need a stable cloud provider. One alternative that was suggested to me was:
https://www.proxmox.com/en/
Docker is very useful for running containerized apps, but it would be a good idea to find an alternative Docker-like engine:
https://alternativeto.net/software/docker/
Kubernetes or Apache Mesos are good for deployment, management and scaling of containerized apps alongside using Docker, as described in this link:
https://archive.is/OOg9v
Kubernetes and Mesos are open source, so they have less risk of being restricted than Docker or a Docker-like engine.
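As a sketch of what deploying a containerized app on Kubernetes looks like (the names and image below are placeholders, not a real project), a minimal Deployment manifest:

```yaml
# Hypothetical Deployment: three replicas of a chatbot container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: waifubot-chat
spec:
  replicas: 3
  selector:
    matchLabels:
      app: waifubot-chat
  template:
    metadata:
      labels:
        app: waifubot-chat
    spec:
      containers:
        - name: chat
          image: example.org/waifubot-chat:latest  # placeholder image
          ports:
            - containerPort: 8080
```

Scaling is then just changing `replicas` (or adding an autoscaler), which is what makes frequent updates practical.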
This more efficient deployment of containerized apps can let multiple users work on a project at a time, which speeds up progress and allows for more frequent updates/improvements, similar to the smaller-scale patches released for video games. This can potentially be useful for achieving economies of scale in mass production and frequent updating of waifubot apps.
4. Python code for GANs and webapp chatbots can be thought of as the interface. The real work with machine learning and state of the art NLP models takes place in PyTorch or TensorFlow.
PyTorch is easier to use and more pythonic, so it's recommended over TensorFlow:
https://builtin.com/data-science/pytorch-vs-tensorflow
https://cs230.stanford.edu/blog/pytorch/
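As a bare-bones illustration of the PyTorch side (a toy model for shape-checking, not any state-of-the-art NLP architecture), embedding token ids and projecting to vocabulary logits:

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy language model: embedding -> mean pooling -> vocab logits."""
    def __init__(self, vocab_size=100, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, token_ids):
        h = self.embed(token_ids)   # (batch, seq, dim)
        h = h.mean(dim=1)           # crude pooling over the sequence
        return self.head(h)         # (batch, vocab_size)

model = TinyLM()
logits = model(torch.tensor([[1, 2, 3, 4]]))
print(logits.shape)  # torch.Size([1, 100])
```

The Python "interface" code (webapp, GAN frontend) would call something like this model's `forward` under the hood; the tensor math runs inside the framework.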
5. GPT-3 is getting cucked by feminist hate-speech filtering:
https://arxiv.org/abs/2103.12407v1. However, they will not be able to keep this up as machine learning techniques, networking, and computing power improve, since more powerful models will be able to be built regardless.
6. You can train your own GPT-3-style model, but because current deep learning models are bad at continual learning and are hard to retrain, it will remain difficult to get a waifubot to say anything you want while these 'hate speech' restrictions are programmed in by SJWs/feminists.
7. Quantum computing can potentially help in dramatically speeding up machine learning.
8. GANs are useful for creating images, but RNNs and transformers are much more useful for NLP. Transformers have become the gold standard for NLP in recent years.
https://www.quora.com/What-is-the-difference-between-CNNs-and-GANs
https://deepai.org/publication/a-comparative-study-on-transformer-vs-rnn-in-speech-applications
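The core operation behind transformers is scaled dot-product attention: each query is scored against all keys, the scores are softmaxed into weights, and the values are averaged by those weights. A plain-Python sketch (no framework, toy list-of-lists vectors):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Dot each query against every key, scaled by sqrt(dimension)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # weights sum to 1
        # Weighted average of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

Unlike an RNN, every position attends to every other position in one shot, which is why transformers handle long sequences so much better.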
9. RNNs are better for NLP than GANs, but they suffer from lack of convergence, difficulty in training, and trouble processing long sequences:
https://www.geeksforgeeks.org/difference-between-ann-cnn-and-rnn/
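For reference, what makes an RNN recurrent is a hidden state fed back in at every step, and it's also why long sequences are hard (information decays over many steps). A bare pure-Python cell with a single hidden unit (weights are arbitrary example values, not trained):

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0, b=0.0):
    """One recurrent step: new hidden state from old state and input."""
    return math.tanh(w_h * h + w_x * x + b)

def run_rnn(inputs, h0=0.0):
    """Feed a sequence through, carrying the hidden state forward."""
    h = h0
    history = []
    for x in inputs:
        h = rnn_step(h, x)
        history.append(h)
    return history
```

Each step squashes through tanh, so an early input's influence shrinks with every subsequent step, which is the long-sequence problem in miniature.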