Running large language models locally offers several benefits. The main one is that it removes the dependency on a stable internet connection: you can keep working with LLMs even when Wi-Fi is unreliable or unavailable, and your tools remain accessible wherever you are.

To run models locally, I use 🦙 Ollama, which lets me run LLMs without relying on cloud-based services and gives me greater control over my data and model usage.
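
As an illustration, here is a minimal sketch of querying a locally running model through Ollama's Python client. It assumes the Ollama server is running on its default port and that a model such as `llama3` has already been pulled.

```python
# Minimal sketch: querying a locally running Ollama server with its Python client.
# Assumes Ollama is running on the default port (11434) and that a model
# such as "llama3" has already been pulled (e.g. via `ollama pull llama3`).
import ollama

response = ollama.chat(
    model="llama3",  # any locally pulled model name works here
    messages=[{"role": "user", "content": "Explain local LLMs in one sentence."}],
)
print(response["message"]["content"])
```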

For interacting with these models, I utilise open-webui, a chat-based user interface that makes it easy to communicate with the LLMs. This tool can use Ollama as a backend.
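
For reference, a quick way to confirm that the Ollama backend a frontend like open-webui will connect to is running, and to list the locally available models, is to call its HTTP API directly (assuming the default address):

```python
# Minimal sketch: listing the models exposed by the local Ollama server,
# i.e. the same HTTP API a chat frontend such as open-webui talks to.
# Assumes Ollama is listening on its default address, http://localhost:11434.
import requests

tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
for model in tags.get("models", []):
    print(model["name"])
```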