
Installation Guide

This guide walks you through installing PenLocal-AI on your system.

Prerequisites

  • Docker and Docker Compose v2.x or later
  • Git for cloning the repository
  • Ports available: 443, 8000 (check with netstat -tuln)
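If netstat is unavailable, the port check can also be sketched in plain bash using the `/dev/tcp` pseudo-device (bash-specific, not POSIX sh):

```shell
# Check whether the required ports are free (bash's /dev/tcp trick).
# port_free succeeds when nothing is listening on the given port.
port_free() {
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 443 8000; do
  if port_free "$port"; then
    echo "Port $port is free"
  else
    echo "Port $port is already in use"
  fi
done
```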

Hardware Requirements

Profile      RAM    Storage  Notes
CPU          16GB+  20GB+    Slower inference
NVIDIA GPU   16GB+  20GB+    Requires CUDA drivers
AMD GPU      16GB+  20GB+    Linux only, ROCm required

Quick Install

Linux/macOS:

# Clone the repository
git clone <repository-url>
cd PenLocal-AI

# Run the installer
chmod +x install.sh
./install.sh

Windows (PowerShell):

# Clone the repository
git clone <repository-url>
cd PenLocal-AI

# Run the installer
.\install.ps1

The installer will:

  1. Generate secure environment variables (.env file)
  2. Ask about Ollama installation (local LLM)
  3. Prompt for GPU profile selection (if Ollama enabled)
  4. Start all Docker containers
  5. Import n8n workflows and credentials

Installation Steps

Step 1: Environment Generation

The installer runs generate-env.sh (or generate-env.ps1 on Windows), which creates:

  • Random 32-character API keys
  • Secure database passwords
  • Encryption keys for the vault
  • JWT secrets for n8n
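For reference, a random 32-character key of this kind can be produced with openssl; the actual scripts may use a different method, this is just an equivalent sketch:

```shell
# 16 random bytes, hex-encoded -> a 32-character key, comparable to what
# generate-env.sh writes into .env (the exact method there may differ).
API_KEY="$(openssl rand -hex 16)"
echo "API_KEY=${API_KEY}"
```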

Security Note

Never commit your .env file to version control. It contains sensitive secrets.

Step 2: Ollama Selection

Do you want to install Ollama (local LLM)?
  1) Yes
  2) No

Choose Yes for local AI inference, or No if you'll use a remote Ollama instance.

Step 3: GPU Profile

If you selected Ollama, choose your hardware:

Option  Profile     Requirements
1       gpu-nvidia  NVIDIA GPU + CUDA drivers
2       gpu-amd     AMD GPU + ROCm (Linux only)
3       cpu         No GPU acceleration
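If you are unsure which option applies, checking for the vendor tools is a reasonable heuristic (nvidia-smi ships with the CUDA drivers, rocminfo with ROCm):

```shell
# Pick a profile based on which GPU tooling is installed.
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "Use profile: gpu-nvidia"
elif command -v rocminfo >/dev/null 2>&1; then
  echo "Use profile: gpu-amd"
else
  echo "Use profile: cpu"
fi
```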

Step 4: Container Startup

The installer runs:

# With Ollama
docker compose --profile <profile> up -d

# Without Ollama
docker compose up -d

Step 5: n8n Import

Workflows and credentials are imported via import-n8n.sh:

  • Pentest orchestration workflows
  • Credential templates for services
  • Pre-configured AI tool connections

Post-Installation

Access Points

Service          URL                     Purpose
n8n              https://127.0.0.1       Create account, manage workflows
Pentest Manager  https://127.0.0.1:8000  Start pentests, view results
Documentation    http://127.0.0.1:8081   This documentation
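Once the containers are up, a quick way to confirm each service answers is to request its URL and look at the HTTP status; this is just a sketch, and -k is needed because the certificates are self-signed:

```shell
# Print the HTTP status for each service; 000 means not reachable yet.
for url in https://127.0.0.1 https://127.0.0.1:8000 http://127.0.0.1:8081; do
  code=$(curl -k -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" || true)
  echo "$url -> ${code:-000}"
done
```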

First-Time Setup

  1. Create n8n Account
       • Navigate to https://127.0.0.1
       • Create your admin account
       • This account is used for workflow management

  2. Create Pentest Manager Account
       • Navigate to https://127.0.0.1:8000
       • Log in to the admin account and reset the password
       • Set up MFA

  3. Generate API Key
       • Go to Profile in Pentest Manager
       • Click "Generate API Key"
       • Save this key securely

  4. Add Ollama Connection
       • Go to Profile → Ollama Connections
       • Add a connection with URL: https://ollama-proxy:11434
       • Use your OLLAMA_API_KEY from .env
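As a hypothetical example of using the generated key, you would typically send it as a bearer token; the header scheme and the endpoint path below are assumptions for illustration only, so substitute the ones from the Pentest Manager interface:

```shell
# Hypothetical: header scheme and /api/pentests path are illustrative,
# not taken from the Pentest Manager documentation.
API_KEY="paste-your-key-here"
AUTH_HEADER="Authorization: Bearer ${API_KEY}"
curl -k -s -H "$AUTH_HEADER" --max-time 5 https://127.0.0.1:8000/api/pentests || true
```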

Manual Installation

If you prefer manual setup:

# 1. Clone repository
git clone <repository-url>
cd PenLocal-AI

# 2. Generate environment (or copy and edit manually)
./generate-env.sh
# OR
cp .env.example .env
# Edit .env with your values

# 3. Start containers
docker compose --profile gpu-nvidia up -d  # NVIDIA
docker compose --profile gpu-amd up -d     # AMD
docker compose --profile cpu up -d         # CPU only
docker compose up -d                       # No Ollama

# 4. Wait for services to be healthy
docker compose ps

# 5. Import n8n workflows
./import-n8n.sh

Apple Silicon (M1/M2/M3)

Apple Silicon Macs cannot expose the GPU to Docker containers. Options:

Option 1: Run Ollama Locally on Mac

# Install Ollama on Mac
brew install ollama
ollama serve

# In another terminal, pull the model
ollama pull qwen3:14b

# Start PenLocal without Ollama container
docker compose up -d

Then configure the Ollama connection in Pentest Manager with the URL http://host.docker.internal:11434 (this hostname lets containers reach services running natively on the Mac).
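You can verify the natively-running Ollama is reachable before wiring it up; /api/tags is Ollama's standard model-listing endpoint:

```shell
# Prints the HTTP status; 200 means Ollama is up, 000 means unreachable.
curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 \
  http://localhost:11434/api/tags || true
```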

Option 2: CPU Mode in Docker

docker compose --profile cpu up -d

This works but inference will be slower.

Stopping and Restarting

# Stop all services
docker compose down

# Stop and remove volumes (DATA LOSS!)
docker compose down -v

# Restart services
docker compose up -d

# View logs
docker compose logs -f

# View specific service logs
docker compose logs -f n8n
docker compose logs -f pentest-webapp

Troubleshooting

Port Already in Use

ERROR: Required port(s) 443 8000 already in use

Find and stop the conflicting process:

# Linux/macOS
sudo lsof -i :443
sudo kill <PID>

# Windows
netstat -ano | findstr :443
taskkill /PID <PID> /F

SSL Certificate Errors

Self-signed certificates are generated on first run. If you see certificate warnings:

  1. This is expected for self-signed certs
  2. Accept the certificate in your browser
  3. Or import nginx/ssl/ca-bundle.crt to your trust store
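Importing the CA bundle varies by platform; the commands below use the usual system trust-store locations, so verify the paths for your distribution:

```shell
# Debian/Ubuntu: the trust store expects .crt files in this directory.
sudo cp nginx/ssl/ca-bundle.crt /usr/local/share/ca-certificates/penlocal-ca.crt
sudo update-ca-certificates

# macOS: add the certificate to the System keychain as a trusted root.
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain nginx/ssl/ca-bundle.crt
```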

Ollama Model Download

First run downloads the Qwen3 model (~8GB). Check progress:

docker compose logs -f ollama

Container Health Issues

# Check container status
docker compose ps

# Restart unhealthy container
docker compose restart <service-name>

# Rebuild container
docker compose build <service-name>
docker compose up -d <service-name>