
In the latest Ollama 0.6.5 release, support was added for Mistral Small 3.1.

In an earlier post, I highlighted the announcement and summarized the key features.

I've been using it with GitHub Models, and based on vibes, I found it more reliable at structured output than Gemma 3, especially given its model and context window size.
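
Since Ollama supports structured outputs via JSON schemas, here's a minimal sketch of how I'd test that locally with the `ollama` Python client and Pydantic. The model tag `mistral-small3.1` and the `Release` schema are assumptions for illustration, not anything from the release notes.

```python
# Minimal sketch: structured output with Mistral Small 3.1 via the ollama Python client.
# Assumes `pip install ollama pydantic` and that the model tag is `mistral-small3.1`.
from ollama import chat
from pydantic import BaseModel


class Release(BaseModel):
    # Illustrative schema; any Pydantic model works here.
    name: str
    version: str
    features: list[str]


response = chat(
    model="mistral-small3.1",
    messages=[
        {"role": "user", "content": "Summarize the Ollama 0.6.5 release as JSON."}
    ],
    # Constrain the model's output to this JSON schema.
    format=Release.model_json_schema(),
)

# Validate the returned JSON against the schema.
release = Release.model_validate_json(response.message.content)
print(release)
```

If the structured output holds up as well locally as it did on GitHub Models, this kind of schema-constrained call is where the reliability difference versus Gemma 3 should show.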

Now that it's on Ollama though, I can use it offline as well. However, I'm not sure how well that'll work on my ARM64 device, which only has 16GB of RAM.

