I have a project idea, but I need some general advice.
The goal is to have a local LLM running on a Linux server, which it could control via the CLI. The user could talk to it and give it tasks, but the main part would be that, when left with nothing to do, it could decide on its own goals to pursue (coding, writing a personal website, browsing the web...). One could say the goal of the experiment is an AI that can operate mostly by itself and accomplish something meaningful while doing so.
This would give the waifu the ability to have her own life outside of user interaction, making her both more useful and more realistic. I am planning to write an LLM wrapper that would implement an API for interacting with the system, as well as handle multiple levels of memory and personality, both controlled by the model itself. More complex features like MCP support or a connection to external hardware are being considered as possible extensions, but the current plan is a simple, Linux-controlling AI companion.
My knowledge of the current local LLM landscape is severely limited, however, so I've come here to ask for help.
Are local LLMs there yet?
What is the least demanding model that you would consider capable of operating a Linux system and not freaking out when left alone?
How powerful would the hardware need to be to run it?
What backend should I use? (Ollama, llama.cpp, something else?)
Anything else I should know/consider?