>>24559
I feel people are taking the wrong message from my post. I never said that I want to simulate a human mind, nor do I think that is a useful goal for a robowaifu. My message is that machine learning is a useful tool in your toolbox, but it should not be the only thing you use.
>I wonder if you could subdivide this between different AIs?
Yes absolutely and that will be very important.
>There could be a text file that stores every event the robowaifu recognizes, it gets fed to an LLM to summarize it, this summary gets stored to the "long-term memories" file...
A better approach would be to identify the important things to remember (you could ask the LLM), create an embedding of that information, and then store the embedding with the text inside a database. To retrieve a memory, take the input before it's fed into the LLM and query the DB for related memories to insert into the pre-prompt. (This is not a new idea.) Skim this readme, I think you will find it useful; a rough sketch of the retrieval loop is below the link.
https://github.com/wawawario2/long_term_memory
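Here is a minimal sketch of that memory loop in Python, assuming sentence-transformers for the embeddings and a plain in-memory list standing in for the database; the model name, function names and the k=3 cutoff are placeholders I picked for illustration:

```python
# Minimal sketch of embedding-based memory retrieval.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
memories = []  # list of (embedding, text) pairs; a real vector DB would go here

def remember(text: str) -> None:
    """Embed a piece of information flagged as important and store it."""
    emb = model.encode(text, normalize_embeddings=True)
    memories.append((emb, text))

def recall(user_input: str, k: int = 3) -> list[str]:
    """Embed the incoming input and return the k most similar stored memories."""
    if not memories:
        return []
    query = model.encode(user_input, normalize_embeddings=True)
    scores = [float(np.dot(query, emb)) for emb, _ in memories]
    top = np.argsort(scores)[::-1][:k]
    return [memories[i][1] for i in top]

# The recalled snippets would then be pasted into the pre-prompt before the LLM call:
# preprompt = "Relevant memories:\n" + "\n".join(recall(user_input))
```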
>>24568
I don't think we need to copy the monkey meat to succeed; artificial NNs are vaguely inspired by real neurons but are not actually that similar. So there is no reason the waifu brain needs to be biologically accurate. I don't think there is a "subconscious language", but the brain is for sure a collection of many independent systems that share information, not a single monolithic system.
The reason I speculate this is the existence of conditions like "callosal syndrome" (where the connection between the two hemispheres is damaged) and the types of behaviors associated with it.
So one way this idea could be applied is to give the waifu brain some sort of event bus with different "modules" that listen for and publish events.
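A toy version of what that bus could look like (the module names and event types here are made up, just to show the shape of the idea):

```python
# Toy publish/subscribe event bus; the "vision", "memory" and "dialogue" modules are hypothetical.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        """Register a module's handler for a given event type."""
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        """Deliver an event to every module listening for it."""
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
# Two independent "modules" react to the same event without knowing about each other.
bus.subscribe("object_seen", lambda e: print(f"memory module: storing sighting of {e['label']}"))
bus.subscribe("object_seen", lambda e: print(f"dialogue module: commenting on {e['label']}"))
# A hypothetical vision module publishes what it noticed.
bus.publish("object_seen", {"label": "coffee mug"})
```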
I would love some input right now; here are the things on my "TODO" list.
- I need to do a deep dive into psychology; this would be useful for figuring out what "mechanics" this mind should be governed by. (requesting opinions and advice)
- I need to get more hands-on experience actually working on ML models; I have a background in programming but nothing to do with ML. (requesting opinions and advice)
I don't know where to start when it comes to psychology; the entire field looks like it's filled with traps and is very pozzed. I have run into interesting ideas at random (I learned about "misattribution of arousal" via a book on game design), but I don't know where to start looking systematically. I would love input!
For ML, I would like to take a smaller-is-better approach. I don't think it's a nicety; it's essential for success. GloboHomo is begging for regulation and we can't continue to rely on the cloud and absurd amounts of compute. Even if things don't get that bad, it's still good to keep costs down so everything stays within a single anon's budget. And even if we could collectivize the effort, centralized groups are more easily subverted and attacked, and centralization robs the individual of the ability to make their waifu in their own singular image.
What makes LLMs so powerful is that they are trained on absurd amounts of unlabeled data. Given enough data, compute and network size, an LLM learns the patterns contained within language, including facts, ideas and bits of general reasoning (all in service of predicting text better). It's important to point out that LLMs are slow, expensive and hard to control. Creating smaller and simpler networks is hard because finding labeled data is hard. But we can take advantage of something really nice: we already have LLMs. You can use a large LLM to create the labeled data and basically extract what it has learned. We are already seeing exciting developments where LLMs are being used to train other models.
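To make that concrete, here is a minimal sketch of using a big LLM to label data for a small, cheap classifier; `query_llm`, the emotion label set and the output file name are placeholders I made up, not any particular API:

```python
# Sketch: use a big LLM to turn unlabeled text into labeled training data for a small model.
import json

LABELS = ["happy", "sad", "angry", "neutral"]  # illustrative label set

def query_llm(prompt: str) -> str:
    # Stub: swap in whatever big LLM you have access to (hosted API or local inference).
    raise NotImplementedError

def label_utterance(utterance: str) -> dict:
    """Ask the big LLM to classify one utterance and return a training example."""
    prompt = (
        f"Classify the emotion of the following utterance as one of {LABELS}. "
        f"Answer with just the label.\n\nUtterance: {utterance}"
    )
    label = query_llm(prompt).strip().lower()
    return {"text": utterance, "label": label}

def build_dataset(utterances, path="labeled_emotions.jsonl") -> None:
    """Dump LLM-labeled examples to a JSONL file a small classifier can be trained on."""
    with open(path, "w") as f:
        for u in utterances:
            f.write(json.dumps(label_utterance(u)) + "\n")
```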
I ask anons to look at this problem from a high level: what works for big tech may not work for individual anons or even small groups of "us" (I'm a newfag so I won't speak for everyone here). Big tech is not going to push this; it's the reason they don't allow it in their API/usage agreements, they do not want this. I think it's important to understand that we are smaller than big tech and that we play by a different set of rules. We may not have as many resources, but we also have advantages that globohomo corporations do not. Remember, there is no moat :^)
A current task that could be of a lot of use to "us" is figuring out what kind of structured and labeled data would be most useful for our mission. It would be a good move to generate it now while access to very big LLMs is still quite open. I would love input on this!