# Ollama Security

## Overview
Ollama provides the local LLM inference capabilities. Because it processes sensitive prompt data (potential vulnerabilities, user inputs), it is secured behind a strict proxy layer.
## Security Configuration

### 1. Reverse Proxy (Nginx)
Ollama exposes an unauthenticated API by default. We place it behind an Nginx proxy to enforce security controls:
- TLS/SSL: All communication is encrypted via HTTPS.
- Authentication: the proxy requires a valid `OLLAMA_API_KEY`. Requests without this header are rejected before they reach the LLM service.
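A proxy enforcing both controls might look like the sketch below. The server name, certificate paths, and the use of a bearer token in the `Authorization` header are illustrative assumptions, not the project's actual configuration:

```nginx
server {
    listen 443 ssl;
    server_name ollama.internal.example;  # assumed hostname

    ssl_certificate     /etc/nginx/certs/ollama.crt;
    ssl_certificate_key /etc/nginx/certs/ollama.key;

    location / {
        # Reject any request whose Authorization header does not carry
        # the expected key. Nginx does not expand environment variables
        # in config files, so ${OLLAMA_API_KEY} would be substituted at
        # deploy time (e.g. with envsubst in the container entrypoint).
        if ($http_authorization != "Bearer ${OLLAMA_API_KEY}") {
            return 401;
        }
        proxy_pass http://ollama:11434;  # Ollama's default port
    }
}
```

Because the check runs in the proxy, an invalid key is rejected with `401` before any bytes reach the inference service.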
### 2. Network Isolation
- No Direct Access: The raw Ollama container ports are NOT published to the host or other containers.
- Internal Network: Only accessible via the `internal-services` network through the `ollama-proxy`.
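In Docker Compose terms, the isolation above amounts to publishing no ports on the Ollama service and marking the shared network as internal. This excerpt is a sketch; service names, image tags, and the extra `edge` network (needed so the proxy itself stays reachable) are assumptions:

```yaml
services:
  ollama:
    image: ollama/ollama
    networks: [internal-services]   # no "ports:" entry — 11434 is never published

  ollama-proxy:
    image: nginx
    ports:
      - "443:443"                   # the only externally reachable endpoint
    networks: [internal-services, edge]

networks:
  internal-services:
    internal: true                  # Docker blocks host/external routing for this network
  edge: {}                          # assumed public-facing network for the proxy
```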
### 3. Resource Limits

- Running in Docker allows GPU/CPU limits to be set on the container, preventing denial-of-service via resource exhaustion.
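Such limits can be declared directly on the service. The values below are placeholders to be tuned to the host hardware, and the NVIDIA device reservation assumes the NVIDIA Container Toolkit is installed:

```yaml
services:
  ollama:
    deploy:
      resources:
        limits:
          cpus: "4.0"       # cap CPU time to mitigate exhaustion attacks
          memory: 16g       # hard memory ceiling for the container
        reservations:
          devices:
            - driver: nvidia
              count: 1      # expose a single GPU to the container
              capabilities: [gpu]
```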