# Installation Guide
This guide walks you through installing PenLocal-AI on your system.
## Prerequisites
- Docker and Docker Compose v2.x or later
- Git for cloning the repository
- Ports 443 and 8000 available (check with `netstat -tuln`)
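The port check can also be scripted. A minimal sketch using bash's `/dev/tcp` feature (bash-specific; the `port_in_use` helper name is illustrative, not part of the installer):

```shell
# Illustrative helper: succeeds if something is already listening on the
# given TCP port on localhost (uses bash's built-in /dev/tcp redirection).
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Check the two ports PenLocal-AI needs before installing.
for port in 443 8000; do
  if port_in_use "$port"; then
    echo "Port $port is in use -- free it before installing"
  else
    echo "Port $port is available"
  fi
done
```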
## Hardware Requirements
| Profile | RAM | Storage | Notes |
|---|---|---|---|
| CPU | 16GB+ | 20GB+ | Slower inference |
| NVIDIA GPU | 16GB+ | 20GB+ | Requires CUDA drivers |
| AMD GPU | 16GB+ | 20GB+ | Linux only, ROCm required |
## Quick Install
The installer will:
- Generate secure environment variables (`.env` file)
- Ask about Ollama installation (local LLM)
- Prompt for GPU profile selection (if Ollama enabled)
- Start all Docker containers
- Import n8n workflows and credentials
## Installation Steps
### Step 1: Environment Generation
The installer runs `generate-env.sh` (or `generate-env.ps1` on Windows), which creates:
- Random 32-character API keys
- Secure database passwords
- Encryption keys for the vault
- JWT secrets for n8n
**Security Note:** Never commit your `.env` file to version control; it contains sensitive secrets.
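The steps above can be sketched with `openssl`. The variable names below are examples only, not taken from the real `generate-env.sh`:

```shell
# Illustrative sketch of generating the kinds of secrets the installer
# creates. Variable names are examples, not the real script's output.
API_KEY="$(openssl rand -hex 16)"            # 32 hex characters
DB_PASSWORD="$(openssl rand -hex 24)"        # 48 hex characters
VAULT_ENCRYPTION_KEY="$(openssl rand -hex 32)"
N8N_JWT_SECRET="$(openssl rand -hex 32)"

# Write the values to .env and lock down permissions.
cat > .env <<EOF
API_KEY=${API_KEY}
DB_PASSWORD=${DB_PASSWORD}
VAULT_ENCRYPTION_KEY=${VAULT_ENCRYPTION_KEY}
N8N_JWT_SECRET=${N8N_JWT_SECRET}
EOF
chmod 600 .env   # secrets readable by the owner only
```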
### Step 2: Ollama Selection
Choose Yes for local AI inference, or No if you'll use a remote Ollama instance.
### Step 3: GPU Profile
If you selected Ollama, choose your hardware:
| Option | Profile | Requirements |
|---|---|---|
| 1 | `gpu-nvidia` | NVIDIA GPU + CUDA drivers |
| 2 | `gpu-amd` | AMD GPU + ROCm (Linux only) |
| 3 | `cpu` | No GPU acceleration |
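If you are unsure which option applies to your machine, a quick detection heuristic (unofficial; the installer prompts you instead of auto-detecting):

```shell
# Unofficial heuristic for suggesting a profile based on available tools.
if command -v nvidia-smi >/dev/null 2>&1; then
  PROFILE="gpu-nvidia"
elif command -v rocminfo >/dev/null 2>&1 && [ "$(uname -s)" = "Linux" ]; then
  PROFILE="gpu-amd"
else
  PROFILE="cpu"
fi
echo "Suggested profile: $PROFILE"
```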
### Step 4: Container Startup
The installer then starts all containers, equivalent to running `docker compose --profile <profile> up -d` with the profile chosen in Step 3 (the Manual Installation section below lists the exact commands).
### Step 5: n8n Import
Workflows and credentials are imported via `import-n8n.sh`:
- Pentest orchestration workflows
- Credential templates for services
- Pre-configured AI tool connections
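The script's exact contents are not shown here, but imports of this kind typically use the n8n CLI inside the running container. A hedged sketch (the service name `n8n` and the `/import` paths are assumptions):

```shell
# Hypothetical sketch of the import step; the real import-n8n.sh may differ.
# Assumes the compose service is named "n8n" and files are mounted at /import.
docker compose exec n8n n8n import:workflow --separate --input=/import/workflows
docker compose exec n8n n8n import:credentials --separate --input=/import/credentials
```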
## Post-Installation
### Access Points
| Service | URL | Purpose |
|---|---|---|
| n8n | `https://127.0.0.1` | Create account, manage workflows |
| Pentest Manager | `https://127.0.0.1:8000` | Start pentests, view results |
| Documentation | `http://127.0.0.1:8081` | This documentation |
### First-Time Setup
1. **Create n8n Account**
    - Navigate to `https://127.0.0.1`
    - Create your admin account
    - This is used for workflow management
2. **Create Pentest Manager Account**
    - Navigate to `https://127.0.0.1:8000`
    - Log in to the admin account and reset the password
    - Set up MFA
3. **Generate API Key**
    - Go to Profile in Pentest Manager
    - Click "Generate API Key"
    - Save this key securely
4. **Add Ollama Connection**
    - Go to Profile → Ollama Connections
    - Add a connection with URL `https://ollama-proxy:11434`
    - Use your `OLLAMA_API_KEY` from `.env`
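To sanity-check the connection details from inside the Docker network, you could query Ollama's `/api/tags` model-listing endpoint. Treat this as a sketch: `-k` accepts the self-signed certificate, and the exact way the proxy validates `OLLAMA_API_KEY` (here assumed to be a Bearer token) may differ:

```shell
# Sketch: list available models through the proxy from a container on the
# same Docker network. Assumes the proxy expects a Bearer token.
docker compose exec n8n curl -ks \
  -H "Authorization: Bearer ${OLLAMA_API_KEY}" \
  https://ollama-proxy:11434/api/tags
```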
## Manual Installation
If you prefer manual setup:
```shell
# 1. Clone repository
git clone <repository-url>
cd pentest-agent

# 2. Generate environment (or copy and edit manually)
./generate-env.sh
# OR
cp .env.example .env
# Edit .env with your values

# 3. Start containers (pick one)
docker compose --profile gpu-nvidia up -d   # NVIDIA
docker compose --profile gpu-amd up -d      # AMD
docker compose --profile cpu up -d          # CPU only
docker compose up -d                        # No Ollama

# 4. Wait for services to be healthy
docker compose ps

# 5. Import n8n workflows
./import-n8n.sh
```
## Apple Silicon (M1/M2/M3)

Apple Silicon Macs cannot pass the GPU through to Docker containers. Two options:
### Option 1: Run Ollama Locally on Mac
```shell
# Install Ollama on the Mac
brew install ollama
ollama serve

# In another terminal, pull the model
ollama pull qwen3:14b

# Start PenLocal without the Ollama container
docker compose up -d
```
Then configure Ollama connection in Pentest Manager:
- URL: `http://host.docker.internal:11434`
### Option 2: CPU Mode in Docker
Use the `cpu` profile shown in Manual Installation above. This works, but inference will be slower.
## Stopping and Restarting
```shell
# Stop all services
docker compose down

# Stop and remove volumes (DATA LOSS!)
docker compose down -v

# Restart services
docker compose up -d

# View logs
docker compose logs -f

# View specific service logs
docker compose logs -f n8n
docker compose logs -f pentest-webapp
```
## Troubleshooting
### Port Already in Use
Find and stop the conflicting process:
```shell
# Linux/macOS
sudo lsof -i :443
sudo kill <PID>

# Windows
netstat -ano | findstr :443
taskkill /PID <PID> /F
```
### SSL Certificate Errors
Self-signed certificates are generated on first run. If you see certificate warnings:
- This is expected for self-signed certs
- Accept the certificate in your browser
- Or import `nginx/ssl/ca-bundle.crt` into your trust store
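As one example, on Debian/Ubuntu the bundle can be trusted system-wide (paths vary by distro; macOS and Windows use their own keychain/certificate stores instead):

```shell
# Debian/Ubuntu: add the self-signed CA to the system trust store
sudo cp nginx/ssl/ca-bundle.crt /usr/local/share/ca-certificates/penlocal-ca.crt
sudo update-ca-certificates
```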
### Ollama Model Download
First run downloads the Qwen3 model (~8GB). Check progress with `docker compose logs -f ollama` (assuming the Ollama compose service is named `ollama`).