How to Set Up Ollama for Agent Zero
Overview
This guide explains how to correctly set up Ollama as the language model backend for Agent Zero (Agent 0).
It assumes Agent Zero is already running (Docker or local) and focuses only on Ollama integration.
Prerequisites
- Agent Zero running and accessible in a browser
- Docker installed (if Agent Zero runs in Docker)
- Network access between Agent Zero and Ollama
Step 1: Install Ollama
macOS
- Download Ollama from the official website
- Install the application normally
- Ollama runs automatically as a background service
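If you prefer the command line, Ollama can also be installed with Homebrew (this assumes Homebrew is present; the standard app download works just as well):
# Optional: install the Ollama CLI/server via Homebrew
$ brew install ollama
# Start the server in the foreground if it is not already running
$ ollama serve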
Linux
- Install using the official install script
- Ensure the Ollama service is running
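A minimal sketch of the Linux install and service check using the official script:
# Install Ollama with the official install script
$ curl -fsSL https://ollama.com/install.sh | sh
# Confirm the systemd service created by the script is active
$ systemctl status ollama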
Step 2: Verify Ollama Is Running
- Confirm Ollama listens on port 11434
- Ensure the API endpoint is reachable
- The default API base URL is http://localhost:11434
- If Ollama runs on a different machine than Agent Zero, use that machine's LAN IP instead (for example http://192.168.1.190:11434)
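To confirm reachability from the machine that will run Agent Zero (swap localhost for your host's IP if Ollama runs elsewhere):
# Should print "Ollama is running"
$ curl http://localhost:11434
# Lists the models currently installed in Ollama
$ curl http://localhost:11434/api/tags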
Step 3: Pull a Model
Pull at least one model that Agent Zero will use.
- Example models:
- gemma3:12b
- llama3
- mistral
After pulling, verify the model exists and can respond using the Ollama CLI.
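For example, with gemma3:12b (substitute whichever model you chose):
# Download the model
$ ollama pull gemma3:12b
# Confirm it is installed
$ ollama list
# Quick response test from the CLI
$ ollama run gemma3:12b "Say hello"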
Step 4: Open Agent Zero Settings
- Open Agent Zero in your browser
- Click Settings
- Go to Agent Settings
- Select Chat Model
Step 5: Configure Chat Model
Main Chat Model
- Chat model provider: Ollama
- Chat model name: gemma3:12b (or your chosen model)
- Chat model API base URL:
- Local Agent Zero: http://localhost:11434, or the host's LAN IP (for example http://192.168.1.190:11434)
- Docker Agent Zero: http://host.docker.internal:11434 or the host's LAN IP (see the note after this list)
- Context length: Set according to your model (example: 8192 or higher)
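Note on Docker networking: on Linux, host.docker.internal is not defined by default (Docker Desktop on macOS/Windows provides it automatically). One common workaround is to map it to the host gateway when starting the Agent Zero container; the image name below is a placeholder for your actual Agent Zero image and the remaining flags are elided:
# Linux only: make host.docker.internal resolve to the Docker host
$ docker run --add-host=host.docker.internal:host-gateway ... <agent-zero-image>
Alternatively, simply use the host's LAN IP as the API base URL.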
Step 6: Configure Utility Model
Agent Zero requires a utility model for memory, summarization, and internal tasks.
- Utility model provider: Ollama
- Utility model name: Same as the chat model, or a smaller, faster model (example below)
- Utility model API base URL: Same as chat model
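Using the same model for both keeps things simple; if you want a lighter utility model, pull one separately. The model chosen here is only an illustration:
# Pull a smaller model for utility tasks (summarization, memory, etc.)
$ ollama pull llama3.2:3b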
Step 7: Save and Restart
- Click Save in Settings
- Restart Agent Zero
- If using Docker, restart the container
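For a Docker setup, the restart typically looks like this (replace modest_jones with your container's name as shown by docker ps):
# Find the Agent Zero container name
$ docker ps
# Restart the container so the new model settings take effect
$ docker restart modest_jones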
Step 8: Test the Setup
- Start a new chat
- Send a simple message such as:
What is 2 + 2?
- If Ollama is configured correctly, Agent Zero will respond normally
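If the chat does not answer, you can rule out Agent Zero itself by querying Ollama's API directly (adjust the host and model name to your setup):
# Ask the model directly through Ollama's generate endpoint
$ curl http://localhost:11434/api/generate -d '{"model": "gemma3:12b", "prompt": "What is 2 + 2?", "stream": false}'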
Troubleshooting
- No response or errors: Check the API base URL and confirm Ollama is running
- Docker issues: Ensure Ollama is reachable from inside the container (see the check below)
- Repeated tool errors: Verify Agent 0 prompt files are not enforcing tool-only output
- Slow responses: Ensure the model fits your system resources
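For the Docker case, a quick reachability check from inside the container (this assumes curl is available in the container image):
# From inside the Agent Zero container, check that Ollama answers
$ docker exec -it modest_jones curl http://host.docker.internal:11434
# If this fails, Ollama may only be listening on localhost; setting OLLAMA_HOST=0.0.0.0
# in the Ollama service environment and restarting Ollama makes it reachable from containers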
Summary
- Ollama runs the LLM locally
- Agent Zero connects via HTTP API
- Both chat and utility models must be configured
- Restart is required after changes
#-----------------------------------------#
# Quick reference for this setup
# Model:        gemma3:12b
# API base URL: http://192.168.1.190:11434
#-----------------------------------------#
# Open a shell inside the Agent Zero container (named modest_jones in this setup)
$ docker exec -it modest_jones bash
# Restart the Agent Zero container after changing settings
$ docker restart modest_jones
#-----------------------------------------#