
Multi-Tenant n8n Setup for Agencies

Run multiple isolated n8n instances for different clients with proper security and monitoring.

February 2, 2026

Running n8n for multiple clients requires careful architecture to ensure security, reliability, and manageability. This guide covers the key decisions and patterns for agency-scale deployments.

Architecture Options

Option 1: Separate Instances per Client

Each client gets their own n8n instance.

Pros:
- Complete isolation
- Client-specific configurations
- Easy to hand off ownership
- Clear cost attribution

Cons:
- Higher operational overhead
- More resources required
- Harder to update uniformly
- More complex monitoring

Best for: Large clients, sensitive data, potential handoffs

Option 2: Shared Instance with Projects

All clients on one n8n instance using projects/folders.

Pros:
- Lower operational cost
- Easier updates
- Simpler monitoring
- Resource efficiency

Cons:
- Weaker credential isolation than separate instances
- Resource contention possible
- Single point of failure
- Harder to isolate issues

Best for: Small clients, simple workflows, tight budgets

Option 3: Hybrid Approach

Shared instance for small clients, dedicated for large ones.

Pros:
- Balance of efficiency and isolation
- Flexible scaling
- Appropriate security levels

Cons:
- More complex to manage
- Two different operational models

Best for: Agencies with varied client sizes

Recommended Setup: Separate Instances

For most agencies, separate instances provide the best balance of security, manageability, and client value.

Infrastructure Stack

┌─────────────────────────────────────────┐
│           Load Balancer                 │
│         (nginx/traefik)                 │
└─────────────────────────────────────────┘
                    │
     ┌──────────────┼──────────────┐
     │              │              │
     ▼              ▼              ▼
┌──────────┐  ┌──────────┐  ┌──────────┐
│   n8n    │  │   n8n    │  │   n8n    │
│ Client A │  │ Client B │  │ Client C │
└──────────┘  └──────────┘  └──────────┘
     │              │              │
     ▼              ▼              ▼
┌──────────┐  ┌──────────┐  ┌──────────┐
│ Postgres │  │ Postgres │  │ Postgres │
│   DB A   │  │   DB B   │  │   DB C   │
└──────────┘  └──────────┘  └──────────┘

Docker Compose Template

version: '3.8'

services:
  n8n-clienta:
    image: n8nio/n8n
    environment:
      - N8N_HOST=clienta.n8n.youragency.com
      - N8N_PROTOCOL=https
      - N8N_ENCRYPTION_KEY=${CLIENTA_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=db-clienta
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_PASSWORD=${CLIENTA_DB_PASSWORD}
    volumes:
      - clienta-data:/home/node/.n8n
    labels:
      - "traefik.http.routers.n8n-clienta.rule=Host(`clienta.n8n.youragency.com`)"

  db-clienta:
    image: postgres:14
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_PASSWORD=${CLIENTA_DB_PASSWORD}
    volumes:
      - clienta-db:/var/lib/postgresql/data

volumes:
  clienta-data:
  clienta-db:

Security Considerations

Network Isolation:
- Separate Docker networks per client
- No cross-client communication
- VPN or IP allowlisting for admin access
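The "no cross-client communication" rule can be enforced directly in Compose by giving each client its own networks and marking the database network internal so it never routes outside. A minimal sketch against the template above (the network names are illustrative):

```yaml
services:
  n8n-clienta:
    networks:
      - clienta-frontend   # reachable by the nginx/traefik proxy
      - clienta-backend    # shared only with this client's database
  db-clienta:
    networks:
      - clienta-backend

networks:
  clienta-frontend:
  clienta-backend:
    internal: true         # no external routing; n8n <-> Postgres only
```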

Credentials:
- Unique encryption keys per instance
- Rotate API keys periodically
- Use secrets management (Vault, AWS Secrets Manager)
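n8n encrypts stored credentials with the value of N8N_ENCRYPTION_KEY, so each instance should get its own randomly generated key. A quick sketch for generating one per client (the env-file name is an assumption; a secrets manager is preferable in production):

```shell
# Generate a unique 32-byte (64 hex chars) encryption key for one instance
# and append it to that client's env file.
CLIENTA_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "CLIENTA_ENCRYPTION_KEY=${CLIENTA_ENCRYPTION_KEY}" >> .env.clienta
echo "key length: ${#CLIENTA_ENCRYPTION_KEY}"   # prints "key length: 64"
```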

Access Control:
- SSO where possible
- Strong password policies
- Regular access reviews

Monitoring at Scale

Centralized Logging

Send all instance logs to a central location:

services:
  n8n-clienta:
    logging:
      driver: "fluentd"
      options:
        fluentd-address: "logs.youragency.com:24224"
        tag: "n8n.clienta"

Metrics Collection

Track per-instance:
- Execution count and success rate
- Queue depth
- Memory and CPU usage
- API response times

Use Prometheus + Grafana or similar.
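n8n can expose a Prometheus /metrics endpoint when started with N8N_METRICS=true, which pairs naturally with one scrape job covering all client instances. A sketch of the Prometheus side (hostnames and the client label are placeholders):

```yaml
scrape_configs:
  - job_name: "n8n-clients"
    metrics_path: /metrics
    static_configs:
      - targets: ["clienta.n8n.youragency.com"]
        labels: { client: "clienta" }
      - targets: ["clientb.n8n.youragency.com"]
        labels: { client: "clientb" }
```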

Alerting Rules

Set up alerts for:
- Instance unreachable
- High error rate (>10%)
- Resource exhaustion
- Unusual execution patterns
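The "instance unreachable" alert maps directly onto Prometheus's built-in `up` metric. A hedged rule sketch, assuming the scrape job is labeled per client:

```yaml
groups:
  - name: n8n-clients
    rules:
      - alert: N8nInstanceDown
        expr: up{job="n8n-clients"} == 0
        for: 5m
        labels: { severity: critical }
        annotations:
          summary: "n8n instance for {{ $labels.client }} is unreachable"
```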

Operational Procedures

Deployment

Use infrastructure-as-code:

# Provision new client
./scripts/provision-client.sh --name acme --plan pro

# This creates:
# - Docker containers
# - Database
# - DNS records
# - Monitoring dashboards
# - Backup jobs
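A provision script like the one above might start as the sketch below: parse flags, then render a per-client compose file from a shared template. The template path and CLIENT placeholder are assumptions, and the DNS, monitoring, and backup steps are left as stubs.

```shell
set -eu

provision_client() {
  NAME="" ; PLAN="starter"
  while [ $# -gt 0 ]; do
    case "$1" in
      --name) NAME="$2"; shift 2 ;;
      --plan) PLAN="$2"; shift 2 ;;
      *) echo "unknown flag: $1" >&2; return 1 ;;
    esac
  done
  [ -n "$NAME" ] || { echo "--name is required" >&2; return 1; }

  # Render a per-client compose file by substituting the CLIENT
  # placeholder in a shared template (hypothetical layout).
  sed "s/CLIENT/${NAME}/g" templates/docker-compose.client.yml \
    > "docker-compose.${NAME}.yml"

  # Stubs: DNS record, monitoring dashboard, backup job.
  echo "provisioned ${NAME} on plan ${PLAN}"
}

# Example with a tiny inline template:
mkdir -p templates
printf 'services:\n  n8n-CLIENT:\n    image: n8nio/n8n\n' \
  > templates/docker-compose.client.yml
provision_client --name acme --plan pro   # prints "provisioned acme on plan pro"
```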

Updates

Roll out updates gradually:

  1. Update staging instance
  2. Test critical workflows
  3. Update pilot client
  4. Monitor for 24 hours
  5. Roll out to remaining clients

Backup Strategy

- Database backups: Daily, 30-day retention
- Workflow exports: Weekly
- Credentials backup: Encrypted, stored separately
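The daily dump plus 30-day retention can be a small cron-driven script; a sketch, assuming per-client backup directories (the paths and pg_dump connection details are placeholders):

```shell
set -eu

BACKUP_DIR="backups/clienta"
mkdir -p "$BACKUP_DIR"

# Dump and compress the client database (uncomment with real credentials):
# pg_dump -h db-clienta -U postgres -d n8n | gzip \
#   > "$BACKUP_DIR/n8n-$(date +%F).sql.gz"

# Enforce the 30-day retention window.
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +30 -delete
```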

Disaster Recovery

Document and test:
- Instance recovery time
- Database restoration
- Workflow import
- Credential restoration

Cost Optimization

Right-Sizing

- Start small, scale up as needed
- Use auto-scaling where available
- Review resource usage monthly

Resource Sharing

For small clients:
- Share database servers (separate databases)
- Use container orchestration for efficiency
- Consider time-based scaling
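Sharing a database server while keeping databases separate means one isolated database and role per client. A sketch that emits the SQL to pipe into psql on the shared host (the naming scheme is an assumption):

```shell
# Emit per-client isolation SQL for a shared Postgres server.
make_client_db_sql() {
  client="$1"
  cat <<SQL
CREATE ROLE n8n_${client} LOGIN;
CREATE DATABASE n8n_${client} OWNER n8n_${client};
REVOKE CONNECT ON DATABASE n8n_${client} FROM PUBLIC;
SQL
}

make_client_db_sql acme
```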

Client Handoffs

When handing off to a client:

  1. Export all workflows
  2. Document credentials (or transfer ownership)
  3. Provide runbooks
  4. Schedule knowledge transfer
  5. Support transition period
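Step 1 can lean on n8n's own CLI (`n8n export:workflow` / `n8n export:credentials`). A dry-run sketch that prints the export commands for an instance (the container name and paths are assumptions):

```shell
# Print the export commands for a client handoff (dry run); run the
# printed commands against the real instance.
print_handoff_commands() {
  client="$1"
  cat <<EOF
docker exec n8n-${client} n8n export:workflow --all --output=/tmp/workflows.json
docker exec n8n-${client} n8n export:credentials --all --decrypted --output=/tmp/credentials.json
docker cp n8n-${client}:/tmp/workflows.json ./handoff-${client}/
docker cp n8n-${client}:/tmp/credentials.json ./handoff-${client}/
EOF
}

print_handoff_commands clienta
```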