The Scaling Problem
Your D2C brand grows from 50 to 500 concurrent users. Your single Odoo server:
• CPU: 100% (maxed out)
• Memory: 95% (nearly full)
• Response time: 10 seconds (was 1 second)
• Every page load lags
• Staff complains constantly
Option A (vertical scaling: buy a bigger server) vs. Option B (horizontal scaling: multiple servers):

| Factor | Option A: Vertical | Option B: Horizontal |
| --- | --- | --- |
| Cost | $50,000/year for one monster server | 3 × $15,000/year = $45,000/year |
| Risk | Single point of failure | Lower: if one server fails, the others absorb its load |
| Complexity | Low | Higher (but manageable) |
| Long-term | Can't scale forever (physical limits) | Scale out indefinitely by adding servers |
Result: Better value, better uptime, future-proof.
We've implemented 150+ Odoo systems. The ones that scale horizontally? They handle 10x growth without breaking a sweat. The ones that don't? They hit the scaling wall, panic, buy emergency hardware, then find it's not enough. That's $80,000-$200,000 in unplanned hardware spending and emergency consulting.
Load Balancing Architecture
What it does:
User → Load Balancer (Nginx/HAProxy) → Odoo Server 1
                                     → Odoo Server 2
                                     → Odoo Server 3
Load balancer decides which server gets each request.
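Round-robin (the default strategy in both Nginx and HAProxy) can be sketched in a few lines of shell. This is purely illustrative — the server names are placeholders, and real balancers also account for health checks and connection counts:

```shell
# Simulate round-robin assignment of 9 requests across 3 backends.
servers=(odoo1 odoo2 odoo3)
for request in $(seq 1 9); do
  # Request n goes to server ((n - 1) mod 3)
  idx=$(( (request - 1) % ${#servers[@]} ))
  echo "request $request -> ${servers[$idx]}"
done
```

Requests 1, 4, 7 land on odoo1; 2, 5, 8 on odoo2; 3, 6, 9 on odoo3 — each server sees exactly a third of the traffic.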
Nginx as Load Balancer
Architecture
Internet
↓
nginx (Port 80/443) — Load balancer
↓
Odoo1 (Port 8069)
Odoo2 (Port 8069)
Odoo3 (Port 8069)
↓
PostgreSQL (Shared database)
Nginx Configuration
upstream odoo_backend {
    # Round-robin (the default) across 3 Odoo servers
    server 192.168.1.10:8069;
    server 192.168.1.11:8069;
    server 192.168.1.12:8069;

    # Optional: weight (server 1 gets 2x traffic)
    # server 192.168.1.10:8069 weight=2;

    # Optional: sticky sessions (user stays on same server)
    # ip_hash;
}

# Longpolling runs on a separate Odoo port (8072 when workers > 0),
# so it needs its own upstream
upstream odoo_chat {
    server 192.168.1.10:8072;
    server 192.168.1.11:8072;
    server 192.168.1.12:8072;
}

server {
    listen 80;
    server_name your-domain.com;

    # Redirect HTTP to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name your-domain.com;

    # SSL certificates
    ssl_certificate /etc/ssl/certs/your-cert.crt;
    ssl_certificate_key /etc/ssl/private/your-key.key;

    # Proxy to Odoo servers
    location / {
        proxy_pass http://odoo_backend;

        # Headers for proper proxying
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (for real-time features)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }

    # Longpolling (chat, notifications) — note the dedicated 8072 upstream
    location /longpolling {
        proxy_pass http://odoo_chat;
    }
}
sudo nginx -t # Test config
sudo systemctl restart nginx
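It's also worth probing each backend directly, bypassing the balancer, to confirm all three Odoo servers are actually answering. A quick sketch (the IPs are the example addresses above — adjust to your network):

```shell
# Classify an HTTP status code: 2xx/3xx means the backend is serving.
status_label() {
  case "$1" in
    2*|3*) echo "UP" ;;
    *)     echo "DOWN" ;;
  esac
}

# Probe each Odoo backend's /web endpoint with a short timeout.
# curl failures (unreachable host, curl missing) fall back to code 000 = DOWN.
for ip in 192.168.1.10 192.168.1.11 192.168.1.12; do
  code=$(curl -s -o /dev/null --connect-timeout 2 -w '%{http_code}' "http://$ip:8069/web" || echo 000)
  echo "$ip: $code ($(status_label "$code"))"
done
```

Any DOWN backend will silently shrink your capacity even though the balancer keeps the site up — which is exactly why HAProxy's built-in health checks (next section) are worth having.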
HAProxy (More Powerful Alternative)
global
    log stdout local0
    log stdout local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

# Frontend: what clients connect to
frontend web_frontend
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/your-cert.pem
    redirect scheme https code 301 if !{ ssl_fc }
    default_backend odoo_servers

# Backend: pool of Odoo servers
backend odoo_servers
    balance roundrobin              # Load-balancing algorithm
    option httpchk GET /web         # Health checks
    server odoo1 192.168.1.10:8069 check
    server odoo2 192.168.1.11:8069 check
    server odoo3 192.168.1.12:8069 check

# Admin panel
listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 30s
sudo systemctl restart haproxy
# View stats at http://your-ip:8404/stats
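If you'd rather avoid a shared session store entirely, HAProxy can pin each browser to one backend with an inserted cookie. A sketch of the sticky-session variant of the backend stanza above:

```haproxy
backend odoo_servers
    balance roundrobin
    # Insert a SRV cookie so each browser keeps hitting the same server
    cookie SRV insert indirect nocache
    option httpchk GET /web
    server odoo1 192.168.1.10:8069 check cookie odoo1
    server odoo2 192.168.1.11:8069 check cookie odoo2
    server odoo3 192.168.1.12:8069 check cookie odoo3
```

The trade-off: sticky sessions are simpler, but a user's session dies with their pinned server. A shared session store, covered next, survives backend failures.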
Session Management (Critical)
Problem: User logs in on Server1. Request goes to Server2. Server2 doesn't know user is logged in.
Solution: Store sessions in Redis (a shared in-memory store that every Odoo server can reach)
Setup Redis
# Install Redis
sudo apt-get install redis-server
# Edit /etc/redis/redis.conf
bind 0.0.0.0 # Accessible from other servers — firewall port 6379 to the Odoo subnet
port 6379
requirepass your_password # Set a password (essential once Redis is network-exposed)
# Restart and verify
sudo systemctl restart redis-server
redis-cli -a your_password ping # Should answer PONG
Configure Odoo Servers
[options]
session_store = redis
session_store_url = redis://:your_password@192.168.1.5:6379/0
# (192.168.1.5 = Redis server IP, reachable from all Odoo servers)
# Note: stock Odoo has no built-in Redis session store — these keys assume a
# community add-on such as OCA's session_redis (check that module's docs for
# its exact configuration). Alternatively, share the session directory
# (data_dir) across servers via NFS.
Result: User logs in on Server1 → session stored in Redis → Server2 reads same session → seamless.
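The connection URL follows the pattern `redis://:<password>@<host>:<port>/<db-number>`. A shell sketch pulling the example URL apart, purely to illustrate the anatomy:

```shell
url="redis://:your_password@192.168.1.5:6379/0"

# Strip the scheme, then split credentials from host:port/db
rest="${url#redis://}"                        # :your_password@192.168.1.5:6379/0
password="${rest%%@*}"; password="${password#:}"
hostpart="${rest#*@}"                         # 192.168.1.5:6379/0
host="${hostpart%%:*}"
port="${hostpart#*:}"; port="${port%%/*}"
db="${hostpart##*/}"

echo "host=$host port=$port db=$db"
```

The trailing `/0` selects Redis logical database 0 — keep it identical on every Odoo server so they all read the same sessions.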
Real D2C Example: Complete 3-Server Setup
Architecture
Users
↓
Nginx (Load Balancer) - 192.168.1.1
↓
Odoo1 - 192.168.1.10
Odoo2 - 192.168.1.11
Odoo3 - 192.168.1.12
↓
PostgreSQL - 192.168.1.20
↓
Redis (Sessions) - 192.168.1.30
Step 1: Set Up PostgreSQL
sudo apt-get install postgresql
# Create the Odoo role first, then a database it owns
sudo -u postgres createuser odoo
sudo -u postgres psql -c "ALTER USER odoo WITH PASSWORD 'odoo_password';"
sudo -u postgres createdb -O odoo odoo_db
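One step the commands above omit: PostgreSQL listens only on localhost by default, so the three Odoo servers won't be able to connect. You'll need roughly the following (the `16` in the paths is an assumption — substitute your PostgreSQL version):

```ini
# /etc/postgresql/16/main/postgresql.conf
listen_addresses = '*'        # or a specific interface, e.g. '192.168.1.20'

# /etc/postgresql/16/main/pg_hba.conf — allow the Odoo subnet, password auth
host  all  odoo  192.168.1.0/24  scram-sha-256
```

Then restart with `sudo systemctl restart postgresql` and confirm connectivity from an Odoo server with `psql -h 192.168.1.20 -U odoo odoo_db`.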
Step 2: Set Up Redis
sudo apt-get install redis-server
# Configure for network access
sudo nano /etc/redis/redis.conf
# Set: bind 0.0.0.0, requirepass your_password
sudo systemctl restart redis-server
Step 3: Install Odoo on Each Server
# From the Odoo apt repository — add it first per Odoo's official install docs
sudo apt-get install odoo
# Edit /etc/odoo/odoo.conf
[options]
db_host = 192.168.1.20 # PostgreSQL server IP
db_user = odoo
db_password = odoo_password
db_name = odoo_db
proxy_mode = True # Trust X-Forwarded-* headers from the load balancer
session_store = redis # Requires a shared-session add-on (e.g. OCA session_redis); not in stock Odoo
session_store_url = redis://:your_password@192.168.1.30:6379/0
workers = 4 # 4 worker processes per server
longpolling_port = 8072 # For chat/notifications
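Worker count and memory limits are worth tuning per server. A common rule-of-thumb starting point (illustrative values, not gospel — the config above uses 4 workers, which suits a smaller box):

```ini
[options]
# Rule of thumb: workers ≈ (2 × CPU cores) + 1
workers = 9                      # e.g. for a 4-core server
max_cron_threads = 2             # threads reserved for scheduled jobs
limit_memory_soft = 2147483648   # 2 GB: worker recycled after its current request
limit_memory_hard = 2684354560   # 2.5 GB: worker killed immediately
limit_time_cpu = 600             # seconds of CPU time per request
limit_time_real = 1200           # seconds of wall-clock time per request
```

The memory limits matter in a cluster: they make a leaking worker restart itself instead of slowly starving the whole server.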
Step 4: Set Up Nginx Load Balancer
sudo apt-get install nginx
# Use the load-balancer config from the Nginx section above
sudo nano /etc/nginx/nginx.conf
upstream odoo_backend {
server 192.168.1.10:8069;
server 192.168.1.11:8069;
server 192.168.1.12:8069;
}
# ... rest of config
sudo systemctl restart nginx
Step 5: Verify All Working
# From any machine
curl http://your-domain.com
# Should load-balance across 3 servers
# Check Nginx stats if configured
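To watch the rotation directly, you can have Nginx stamp each response with the backend that served it — a debugging aid worth removing in production:

```nginx
# Inside the "location /" block of the load-balancer config
add_header X-Served-By $upstream_addr always;
```

Then `curl -sI http://your-domain.com | grep -i x-served-by`, repeated a few times, should cycle through 192.168.1.10, .11, and .12. (`$upstream_addr` is a standard Nginx variable holding the chosen backend's address.)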
Database Scaling (1,000+ Concurrent Users)
For 1,000+ concurrent users, the database becomes the bottleneck.
Solution: Read replicas
Odoo Servers
↓ (writes)
PostgreSQL Primary - 192.168.1.20
↓ (replicates)
PostgreSQL Replica1 - 192.168.1.21
PostgreSQL Replica2 - 192.168.1.22
↓ (reads)
Odoo Servers
Writes go to primary.
Reads go to replicas (distributes load).
Note: Odoo itself has no built-in read/write splitting — every connection goes to the single db_host. In practice, reads are routed to replicas with a proxy such as Pgpool-II sitting between Odoo and PostgreSQL, or with custom modules that point read-heavy workloads (reports, dashboards, BI tools) at a replica directly.
[options]
db_host = 192.168.1.20 # Odoo still sees one endpoint; the proxy splits reads/writes behind it
Result: since most ERP queries are reads, offloading them to replicas can cut primary database load dramatically — reductions around 70% are typical.
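Setting up the replicas themselves is standard PostgreSQL streaming replication. The primary needs roughly this (a sketch — `replicator` is an assumed replication role you create first; check the PostgreSQL docs for your version):

```ini
# Primary (192.168.1.20) postgresql.conf
wal_level = replica
max_wal_senders = 5

# Primary pg_hba.conf — allow the replicas to stream WAL
host  replication  replicator  192.168.1.21/32  scram-sha-256
host  replication  replicator  192.168.1.22/32  scram-sha-256
```

Each replica is then seeded from the primary with `pg_basebackup -h 192.168.1.20 -U replicator -D <data directory> -R`; the `-R` flag writes the standby configuration so the replica starts streaming on boot.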
Your Action Items
For 50-200 Users (Single Server)
❏ Skip load balancing
❏ Focus on PostgreSQL optimization
❏ Add Redis caching
For 200-500 Users (2-3 Servers)
❏ Set up Nginx load balancer
❏ Install Redis for sessions
❏ Use shared PostgreSQL database
❏ Test failover (one server down)
For 500+ Users (5+ Servers)
❏ Consider Docker Swarm or Kubernetes
❏ Add read replicas to database
❏ Implement CDN for static files
❏ Monitor load across servers
Free Scaling Assessment
Stop guessing about scaling. We'll analyze your current user load, project 12-month growth, design the optimal architecture (1-5 servers), estimate costs, and implement a load balancer. Most D2C brands scale from 1 → 3 servers at $50K/year revenue. Proper planning saves $100,000+ in wasted infrastructure.
