Getting started with Ollama on Windows
Ollama recently announced preview support for Windows, which means you can now run AI models like Llama, Phi, and many others locally on your PC. In this post, I'll go over how you can get started with Ollama on Windows.
Install Ollama
The first thing you'll want to do is install Ollama.
You can do so by downloading the installer from the Ollama website and following the installation prompts.
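Once the installation finishes, it's worth doing a quick sanity check from a terminal to confirm the Ollama CLI is on your PATH (the exact version number you see will differ):

ollama --version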
Get a model
Once you've installed Ollama, it's time to get a model.
Open PowerShell
Run the following command:
ollama pull llama2
In this case, I'm pulling llama2, but you can choose another model. You can even download several models and switch between them. For a full list of supported models, see the Ollama model documentation.
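If you pull more than one model, it helps to see what's already on your machine, and you can also try a model directly from the terminal before touching the API. For example, assuming you pulled llama2 as above:

# List the models you've downloaded
ollama list

# Start an interactive chat session with a model (type /bye to exit)
ollama run llama2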
Use the model
Now that you have the model, it's time to use it. The easiest way to do that is through the REST API: when you install Ollama, it starts a server that hosts your models. Another neat thing is that the REST API is also OpenAI API compatible (more on that at the end of this section).
Open PowerShell
Send the following request:
(Invoke-WebRequest -Method POST -Body '{"model":"llama2", "prompt":"Why is the sky blue?", "stream": false}' -Uri http://localhost:11434/api/generate).Content | ConvertFrom-Json
This command issues an HTTP POST request to the Ollama server listening on port 11434 and converts the JSON response into a PowerShell object.
The main things to highlight in the body:
- model: The model you'll use. Make sure this is one of the models you pulled.
- prompt: The input to the model.
- stream: Whether to stream the response back incrementally or return it all at once.
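Because the output is piped through ConvertFrom-Json, the reply comes back as a PowerShell object, so you can capture it in a variable and pull out just the generated text. Here's a minimal sketch of that, using the same llama2 prompt as above (with stream set to false, the response field holds the full answer):

# Capture the parsed reply and print only the generated text
$reply = (Invoke-WebRequest -Method POST -Body '{"model":"llama2", "prompt":"Why is the sky blue?", "stream": false}' -Uri http://localhost:11434/api/generate).Content | ConvertFrom-Json
$reply.response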
For more details on the REST API, see the Ollama REST API documentation.
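And since the server is OpenAI API compatible, you can also hit the OpenAI-style chat completions endpoint on the same port. Here's a rough sketch of what that looks like; the reply follows the standard OpenAI chat completion shape:

# Call the OpenAI-compatible chat completions endpoint on the local Ollama server
$body = '{"model":"llama2", "messages":[{"role":"user", "content":"Why is the sky blue?"}]}'
$chat = (Invoke-WebRequest -Method POST -Body $body -Uri http://localhost:11434/v1/chat/completions).Content | ConvertFrom-Json
$chat.choices[0].message.content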
Conclusion
In this post, I went over how you can quickly install Ollama and start using generative AI models like Llama and Phi locally on your Windows PC. If you use macOS or Linux, you can follow similar steps to those outlined in this guide to get started on those operating systems. Happy coding!