What You'll Learn
- What is Ollama? — An open-source tool to run large language models (LLMs) locally on your machine without cloud dependencies.
- System Requirements — Minimum hardware and OS requirements for Windows and macOS.
- Step-by-Step Installation — Download, install, and verify Ollama on both platforms.
- Running Your First Model — Pull a model like Llama 2 or Mistral and run your first inference.
- Using the REST API — Integrate Ollama into your applications via its built-in HTTP API.
- Troubleshooting — Common issues and their resolutions.
Quick Start Preview
macOS Installation
# Install Ollama via Homebrew (or download the app from ollama.com/download;
# the install.sh script is intended for Linux)
brew install ollama
# Verify installation
ollama --version
# Pull and run a model
ollama pull llama2
ollama run llama2
Windows Installation
- Download the installer from ollama.com/download
- Run the installer and follow the on-screen instructions
- Open PowerShell or Command Prompt
- Verify with ollama --version
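On either platform, once the install is verified, you can test a model without entering the interactive session: `ollama run` also accepts a prompt as a trailing argument (the model name `llama2` here is just the one from the quick start above).

```shell
# Make sure the model is available locally
ollama pull llama2
# One-shot inference: pass the prompt inline instead of opening the REPL
ollama run llama2 "Explain what a large language model is in one sentence."
```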
Key Features Covered in the Guide
- 💻 Local AI Development — Run LLMs entirely on your machine: no API keys, no cloud costs.
- 🔌 REST API Access — Built-in HTTP endpoints to integrate AI into any application.
- 📦 Model Library — Access Llama 2, Mistral, CodeLlama, Phi-2, and dozens more models.
- ⚙️ Cross-Platform — Works seamlessly on macOS, Windows, and Linux.
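The REST API mentioned above is served by the local Ollama process, which listens on localhost port 11434 by default. A minimal sketch of a generation request (the model name and prompt are placeholders; setting "stream" to false returns the full response as a single JSON object instead of streamed chunks):

```shell
# POST a completion request to the local Ollama server's /api/generate endpoint
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```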
