The Database Performance Problem
Your D2C brand runs an Odoo system with 2 million orders. Reports take 3 minutes to load, customer list views freeze, and staff complain constantly.
The Problem: PostgreSQL is running on default settings (shared_buffers = 128MB), sized for a laptop, not a production server with 32GB of RAM.
The Solution: Proper tuning.
| Metric | Improvement |
|---|---|
| Reports | Load in 10 seconds (18x faster) |
| List Views | Instant |
| Concurrent Capacity | Doubles |
| CPU Usage | Cut by half or more |
The Cost: 2 hours of tuning, zero code changes.
We've implemented 150+ Odoo systems. The ones properly tuned? 95%+ staff adoption, zero performance complaints. The ones with default PostgreSQL settings? Staff uses workarounds, blames Odoo, company considers replacing it. That's $80,000-$200,000 in lost productivity and unnecessary platform migrations.
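Before anything else, confirm whether your server really is on defaults. A minimal check (pg_settings is a standard catalog view; run it in psql as the postgres user):
-- Is the server still on factory defaults?
-- source = 'default' means the value was never changed.
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size',
               'work_mem', 'max_connections');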
The PostgreSQL Performance Pyramid
From the base of the pyramid to the tip:
- Configuration: roughly half of the total gains
- Indexing: roughly half of the total gains
- Query optimization: ~5% of gains
- Fine-tuning parameters: the last ~1%
Focus: 50% of effort on indexing, 50% on configuration, small tweaks on queries.
PostgreSQL Configuration (Biggest Gains)
Location: /var/lib/postgresql/data/postgresql.conf (Debian/Ubuntu packages put it at /etc/postgresql/<version>/main/postgresql.conf; SHOW config_file; tells you for certain)
Critical Parameters
# SHARED BUFFERS (25% of system RAM)
# Default: 128MB → Production: 8GB (for 32GB server)
shared_buffers = 8GB
# EFFECTIVE CACHE SIZE (50% of system RAM)
# Helps query planner decide execution plan
effective_cache_size = 16GB
# WORK MEMORY (Total RAM / (max_connections * 2))
# For 32GB RAM, 100 connections: 32GB / 200 = ~164MB
work_mem = 164MB
# MAINTENANCE WORK MEMORY (used for VACUUM, CREATE INDEX)
# 1-2GB typically
maintenance_work_mem = 2GB
# MAX CONNECTIONS
# Each Odoo worker keeps its own connection pool (db_maxconn, default 64);
# the default max_connections = 100 is often too low
max_connections = 200
# RANDOM PAGE COST (planner's cost estimate for a non-sequential page read)
# Default 4.0 assumes spinning disks; use ~1.1 on SSD/NVMe (which you should be on)
random_page_cost = 1.1
# AUTOVACUUM (aggressive for production)
autovacuum = on
autovacuum_naptime = 30s
autovacuum_vacuum_scale_factor = 0.01 # Vacuum at 1% changes
autovacuum_analyze_scale_factor = 0.005 # Analyze at 0.5% changes
# LOGGING (find slow queries)
log_min_duration_statement = 1000 # Log queries > 1 second
log_lock_waits = on
How to Apply
# 1. Edit config
sudo nano /var/lib/postgresql/data/postgresql.conf
# 2. Restart PostgreSQL
sudo systemctl restart postgresql
# 3. Verify settings applied
sudo -u postgres psql -c "SELECT name, setting FROM pg_settings WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem');"
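If you'd rather not hand-edit the file, the same settings can be applied from psql with ALTER SYSTEM, which writes them to postgresql.auto.conf (standard PostgreSQL; shared_buffers still requires a restart):
ALTER SYSTEM SET shared_buffers = '8GB';        -- needs a restart
ALTER SYSTEM SET effective_cache_size = '16GB'; -- reload is enough
ALTER SYSTEM SET work_mem = '164MB';
-- Reload settings that don't require a restart:
SELECT pg_reload_conf();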
30-50% performance improvement
Just from configuration tuning
Indexing (The Most Important Optimization)
What it is: A database index on frequently-searched fields. Without an index, the database scans every row (slow). With one, it uses a sorted lookup structure (fast).
Create Indexes
-- Index on frequently-filtered field
CREATE INDEX idx_sale_order_partner_id
ON sale_order(partner_id);
-- Index on frequently-sorted field
CREATE INDEX idx_sale_order_date
ON sale_order(order_date DESC);
-- Composite index (multiple fields in one query)
CREATE INDEX idx_sale_order_state_date
ON sale_order(state, order_date DESC);
-- Partial index (only active orders)
CREATE INDEX idx_sale_order_active
ON sale_order(partner_id) WHERE state IN ('draft', 'sale', 'sent');
-- Expression index (case-insensitive customer email lookups)
CREATE INDEX idx_partner_email_lower
ON res_partner (lower(email));
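Indexes aren't free: each one costs disk space and slows writes. A quick way to see what the indexes on sale_order cost, and whether they're earning their keep (standard pg_stat_user_indexes view):
SELECT indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size,
       idx_scan AS times_used
FROM pg_stat_user_indexes
WHERE relname = 'sale_order'
ORDER BY pg_relation_size(indexrelid) DESC;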
In Odoo Models (Python)
from odoo import fields, models

class SaleOrder(models.Model):
    _inherit = 'sale.order'

    # Index on frequently-searched field
    partner_id = fields.Many2one(
        'res.partner',
        string='Customer',
        index=True,  # Creates a database index on the column
    )

    # Index on frequently-sorted field
    order_date = fields.Date(
        string='Order Date',
        index=True,
    )

    # Index on state (frequently filtered)
    state = fields.Selection(
        selection=[...],
        index=True,
    )
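After upgrading the module, it's worth confirming the indexes actually landed in the database (pg_indexes is a standard catalog view; the index names are generated by Odoo and may differ between versions):
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'sale_order';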
Find Missing Indexes
-- Tables with heavy sequential scans (likely missing indexes)
SELECT
    relname,
    seq_scan,
    seq_tup_read,
    idx_scan,
    idx_tup_fetch
FROM pg_stat_user_tables
WHERE seq_scan > 1000 -- high sequential-scan count = missing index
ORDER BY seq_scan DESC
LIMIT 10;
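For per-query statistics, the pg_stat_statements extension is the tool: it ships with PostgreSQL's contrib package but must be listed in shared_preload_libraries before it works. A sketch of the usual "worst offenders" query, assuming PostgreSQL 13+ column names:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT query, calls,
       round(mean_exec_time::numeric, 1) AS mean_ms,
       round(total_exec_time::numeric, 1) AS total_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;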
5-100x faster queries
On indexed fields
Query Analysis (EXPLAIN ANALYZE)
Find Slow Queries
-- Logs typically live under /var/log/postgresql/ (SHOW log_directory; to confirm)
-- Look for queries taking longer than 1000 ms
-- Or enable slow-query logging for the current session:
SET log_min_duration_statement = 1000;
-- Server-wide: ALTER SYSTEM SET log_min_duration_statement = 1000;
-- followed by SELECT pg_reload_conf();
Analyze Query
-- EXPLAIN ANALYZE shows execution plan
EXPLAIN ANALYZE
SELECT * FROM sale_order
WHERE state = 'sale'
AND partner_id = 123
AND order_date > '2025-01-01';
-- Output shows:
-- Seq Scan on sale_order (slow)
-- Rows=10000 (scanned all 10,000 rows!)
-- Add a composite index covering all three filter columns, then re-run:
CREATE INDEX idx_sale_order_composite ON sale_order(state, partner_id, order_date);
EXPLAIN ANALYZE
SELECT * FROM sale_order
WHERE state = 'sale'
AND partner_id = 123
AND order_date > '2025-01-01';
-- Output shows:
-- Index Scan using idx_sale_order_composite (fast)
-- Rows=10 (only 10 rows scanned!)
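One more option worth knowing: EXPLAIN's BUFFERS flag reports how many pages each plan node found in shared_buffers versus read from disk, which often reveals an undersized buffer cache (standard EXPLAIN syntax):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM sale_order
WHERE state = 'sale' AND partner_id = 123;
-- "shared hit" = pages served from shared_buffers
-- "read"       = pages fetched from disk or the OS cache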
Read EXPLAIN ANALYZE Output
| Indicator | Meaning |
|---|---|
| Seq Scan | Bad on large tables: every row is read |
| Index Scan | Good: the planner is using an index |
| rows= | Planner's estimated row count |
| actual rows= | Rows actually returned (measured by ANALYZE) |
| Filter | Extra filtering applied after the scan |
VACUUM & ANALYZE (Maintenance)
What They Do
VACUUM: Removes dead rows, reclaims space
ANALYZE: Updates statistics for query planner
Run Manually
-- VACUUM removes deleted rows
VACUUM VERBOSE;
-- ANALYZE updates statistics
ANALYZE VERBOSE;
-- Do both (recommended)
VACUUM ANALYZE VERBOSE;
-- On specific table
VACUUM ANALYZE VERBOSE sale_order;
-- Full vacuum (rewrites the table under an exclusive lock;
-- reclaims the most space, but run it only in a maintenance window)
VACUUM FULL ANALYZE VERBOSE sale_order;
Schedule with Cron
# Add to root's crontab (crontab -e): run at 2 AM daily
0 2 * * * sudo -u postgres psql -d odoo_db -c "VACUUM ANALYZE;"
Result: Faster queries, recovered disk space, improved performance.
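To check whether autovacuum is keeping up, look at dead-row counts and last-vacuum timestamps (standard pg_stat_user_tables columns):
SELECT relname, n_live_tup, n_dead_tup,
       last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;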
Real D2C Example: Complete Optimization
Scenario: a D2C brand with 10+ years of data (5 million orders). The whole system is slow.
Step 1: Analyze Current Performance
# Check PostgreSQL version
sudo -u postgres psql -c "SELECT version();"
# Get server RAM
free -h
# Get largest tables
sudo -u postgres psql -d odoo_db -c "SELECT schemaname, tablename, pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) FROM pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema') ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC LIMIT 10;"
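One more baseline worth recording before touching anything: the buffer cache hit ratio. Values much below 0.99 on a busy OLTP database usually mean shared_buffers is too small (standard pg_stat_database counters):
SELECT sum(blks_hit)::float
       / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
FROM pg_stat_database;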
Step 2: Optimize Configuration
shared_buffers = 8GB
effective_cache_size = 16GB
work_mem = 164MB
maintenance_work_mem = 2GB
random_page_cost = 1.1
Step 3: Add Critical Indexes
-- Sale orders (most searched)
CREATE INDEX idx_so_partner ON sale_order(partner_id);
CREATE INDEX idx_so_state ON sale_order(state);
CREATE INDEX idx_so_date ON sale_order(order_date DESC);
CREATE INDEX idx_so_composite ON sale_order(state, partner_id, order_date);
-- Sale order lines
CREATE INDEX idx_sol_order ON sale_order_line(order_id);
CREATE INDEX idx_sol_product ON sale_order_line(product_id);
-- Invoices
CREATE INDEX idx_inv_partner ON account_move(partner_id);
CREATE INDEX idx_inv_state ON account_move(state);
-- Stock moves
CREATE INDEX idx_sm_product ON stock_move(product_id);
CREATE INDEX idx_sm_state ON stock_move(state);
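One caution for this step: on a live 5-million-row table, plain CREATE INDEX blocks writes until the build finishes. The CONCURRENTLY variant trades a slower build for no write lock (standard PostgreSQL; it cannot run inside a transaction block):
-- The composite index from above, built without blocking writes:
CREATE INDEX CONCURRENTLY idx_so_composite
ON sale_order(state, partner_id, order_date);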
Step 4: Run Maintenance
# Vacuum and analyze the whole database (-z = analyze, -v = verbose)
sudo -u postgres vacuumdb -z -v odoo_db
# May take ~30 minutes at this size, but it's worth it
Step 5: Monitor & Verify
-- Check index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC
LIMIT 20;
-- Find unused indexes (removal candidates; keep anything backing a
-- PRIMARY KEY or UNIQUE constraint)
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
AND indexname NOT LIKE 'pg_toast%'
ORDER BY pg_relation_size(indexrelid) DESC;
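If an index still shows idx_scan = 0 after a representative period (a full business cycle, not just a quiet weekend), it can go. The index name below is a placeholder, not one from this guide:
-- CONCURRENTLY avoids holding a write lock during the drop:
DROP INDEX CONCURRENTLY IF EXISTS idx_unused_example;  -- placeholder name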
Result
| Metric | Before | After | Improvement |
|---|---|---|---|
| Reports | 3 minutes | 15 seconds | 12x faster |
| List Views | 5 seconds | 0.5 seconds | 10x faster |
| CPU Usage | 80% | 30% | 50% reduction |
| User Capacity | 50 concurrent | 150 concurrent | 3x capacity |
Your Action Items
Immediate (1 hour)
❏ Check current configuration (shared_buffers should be 25% of RAM)
❏ Update if needed, restart PostgreSQL
❏ Enable slow query logging (log_min_duration_statement = 1000)
Short-term (4 hours)
❏ Run EXPLAIN ANALYZE on 5 slowest queries
❏ Create indexes on frequently-filtered fields
❏ Run VACUUM ANALYZE
Ongoing
❏ Monitor slow queries daily
❏ Add indexes as needed (don't over-index)
❏ Schedule weekly VACUUM ANALYZE
❏ Review index usage (remove unused)
Free PostgreSQL Performance Audit
Stop accepting slow databases. Most D2C brands can achieve 10-50x performance gains with proper tuning. Cost: 4-8 hours of consulting. Value: $100,000+ in infrastructure you don't need to buy. We'll analyze your current configuration against the optimum, identify missing indexes from real query logs, run EXPLAIN ANALYZE on your slowest queries, create an optimization plan with a timeline, then implement and measure the improvements.
