
🦙 Install Ollama on Windows & macOS

A complete, step-by-step walkthrough for installing and running Ollama locally on your machine—whether you're on Windows or macOS.

What You'll Learn

  • What is Ollama? — An open-source tool to run large language models (LLMs) locally on your machine without cloud dependencies.
  • System Requirements — Minimum hardware and OS requirements for Windows and macOS.
  • Step-by-Step Installation — Download, install, and verify Ollama on both platforms.
  • Running Your First Model — Pull a model like Llama 2 or Mistral and run your first inference.
  • Using the REST API — Integrate Ollama into your applications via its built-in HTTP API.
  • Troubleshooting — Common issues and their resolutions.

Quick Start Preview

macOS Installation

# Install with Homebrew (or download the app from ollama.com/download)
brew install ollama

# Start the local server (the desktop app starts it automatically)
ollama serve

# Verify installation
ollama --version

# Pull and run a model
ollama pull llama2
ollama run llama2
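Once a model is pulled, `ollama run` also accepts a prompt as an argument, returning a single response instead of opening the interactive session:

```shell
# One-shot prompt: prints the model's answer and exits
ollama run llama2 "Explain in one sentence what a local LLM is."
```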

Windows Installation

  1. Download the installer from ollama.com/download
  2. Run the installer and follow the on-screen instructions
  3. Open PowerShell or Command Prompt
  4. Verify with ollama --version
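The CLI is identical across platforms, so once the installer finishes, the same commands shown for macOS work in PowerShell or Command Prompt:

```shell
# Pull a model and start an interactive chat session
ollama pull llama2
ollama run llama2
```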

Key Features Covered in the Guide

💻

Local AI Development

Run LLMs entirely on your machine — no API keys, no cloud costs.

🔌

REST API Access

Built-in HTTP endpoints to integrate AI into any application.
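As a quick sketch of what the guide covers: the Ollama server listens on port 11434 by default, and a completion can be requested with a single curl call (this assumes the server is running and llama2 has already been pulled):

```shell
# POST a prompt to the local /api/generate endpoint;
# "stream": false returns one JSON object instead of a token stream
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```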

📦

Model Library

Access Llama 2, Mistral, CodeLlama, Phi-2 and dozens more models.
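For example, models can be pulled, listed, and removed entirely from the CLI (model names are illustrative; library availability may change over time):

```shell
# Download an additional model from the library
ollama pull mistral

# Show everything installed locally, with sizes
ollama list

# Free up disk space by removing a model
ollama rm mistral
```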

⚙️

Cross-Platform

Works seamlessly on macOS, Windows, and Linux.

Download the Full Guide

Get the complete PDF with screenshots, code snippets, and advanced configuration tips.

Download PDF