You mean AI? I rarely use it.
AI is a broad term that is mostly marketing fluff. I am talking about a language model you can run locally.
You can install Alpaca, which is now an official GNOME app, and then download the Mistral model. Once it downloads, you can ask it things about what to do with the OS. It all runs locally, so you need enough RAM and storage: roughly 8 GB of RAM and a few gigabytes of disk space. For example:
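Here's a minimal sketch of that setup from a terminal; I'm assuming Alpaca's Flathub app ID is still com.jeffser.Alpaca (check flathub.org if it differs):

```sh
# Install Alpaca from Flathub (app ID assumed; verify on flathub.org)
flatpak install flathub com.jeffser.Alpaca

# Launch the app; Mistral is then downloaded from the built-in model manager
flatpak run com.jeffser.Alpaca
```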
I’m indeed open to the idea if it’s locally hosted, but Ollama isn’t available in my country… I’ll see if there’s an LLM app that isn’t an Ollama fork.
Ollama runs locally, so it can be used in any country.
I know, but won’t I need to download the models in the app to run them locally?
Yes, but that’s pretty minor. You can just run `ollama pull <model name>`.
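For example, pulling and chatting with Mistral looks like this (using the model name as it appears in the Ollama library):

```sh
# Download the Mistral model to the local model store
ollama pull mistral

# Start an interactive chat session with it, entirely on your own machine
ollama run mistral
```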