Quick Answer
Systematic diagnosis identifies whether the bottleneck is the database (~95% of cases), the code (~4%), or the infrastructure (~1%). The usual story: Odoo is slow, the team points fingers, IT sends a 200-page report, the CFO asks "when will it be faster?", you throw hardware at the problem, and you keep losing $400-800/day to invisible friction. Of 60+ clients we audited, 58 had misdiagnosed their bottleneck.
Every bottleneck lives in one of three categories: (1) the Python application layer, (2) the PostgreSQL database, (3) the infrastructure (servers, RAM, CPU, disk). Companies think "we need a bigger server" when the reality is 4 workers on an 8-core machine, leaving performance on the table; they blame a custom module when 95% of the time it is the database.
The 7-step diagnostic: (1) Baseline: top -b -n 1, df -h, ps aux | grep odoo; CPU >85% = server constrained, disk >80% full = bloated database, 2 workers on an 8-core server = misconfigured. (2) Enable query logging: log_level = debug_sql, tail the log, and look for queries over 300-500ms. (3) Database vs. code: the browser DevTools Network tab shows request time; compare it to the SQL logs. Fast logs but slow requests = Python problem; slow logs = database. (4) Missing indexes: tables in pg_stat_user_tables with seq_scan > 1000 need indexes; common culprits are state fields, create_date, and foreign keys. (5) search() inside loops: --dev=performance reveals repeated identical queries; move the search outside the loop. (6) Profile Python: cProfile shows the 20 slowest functions. (7) Cron collisions: multiple jobs firing at 3 PM overlap and cause the periodic slow spell.
Tools: pg_stat_statements (top 10 slowest queries), the Odoo profiler in v14+ (instant visibility), pgBadger (historical analysis). Common patterns: missing index (7.2s → 0.4s, 5-minute fix), search-in-loop (18s → 2.1s, 15-minute fix), bloated database (6s → 2.2s, 3 hours), cron collision (periodic slowness eliminated, 10-minute fix), misconfigured workers (set (cores × 2) + 1). Real case: a $3.2M eCommerce client's inventory search took 11.4s, losing $240/day plus $3,200/month in revenue. Diagnosis: one missing index (stock_move.state), one loop (86 searches → 1), and 34GB of database bloat. After the fix: 11.4s → 0.7s, +18 orders/day, +14 hours/week of productivity, $38,400 recovered in year 1.
The Misdiagnosis Problem
Your Odoo is slow. But you have no idea why.
Is it the database? Is it code? Is it the server? Your team points fingers. Your IT vendor sends a 200-page system health report you don't understand. Your CFO asks when you're getting "faster Odoo."
The problem isn't Odoo. The problem is you have no diagnostic process.
Without a systematic way to identify what's actually broken, you're throwing hardware at problems, running blind installations of "performance modules," and losing $400-800 every single day to invisible friction. We've audited 60+ Odoo clients—and 58 of them were misdiagnosing their bottleneck.
The Three Bottleneck Categories
Every slow Odoo instance has a bottleneck in exactly one of three places: the Python application layer, the PostgreSQL database, or the infrastructure (servers, RAM, CPU, disk). Yet companies almost always guess wrong.
| Company Thinks | Reality |
|---|---|
| "Odoo is slow" | Database query takes 4.2s because search() inside loop 18 months ago, running 2,400 identical queries per day |
| "We need a bigger server" | 4 workers on 8-core server. Leaving performance on table without spending a dollar |
| "It's a custom module problem" | 95% database. 4% code. 1% actually the module you suspect |
This is why diagnosis comes first. You cannot optimize what you cannot measure. And you cannot measure what you don't look at.
The 7-Step Diagnostic Process
Step 1: Establish Your Baseline (5 Minutes)
Open a terminal. SSH into your Odoo server. Run these three commands:
top -b -n 1 | head -20
df -h
ps aux | grep odoo | wc -l
This tells you:
• CPU usage right now (is it spiky or flat?)
• Disk space remaining (bloated database shows up here)
• Worker count (misconfigured systems run too few or too many)
Critical Thresholds:
• CPU consistently above 85% = server constrained
• Disk above 80% full = database bloated
• 2 workers on an 8-core server = misconfigured (performance left on the table)
Write this down. This is your baseline. If nothing changes and your system is still slow, it's not infrastructure.
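If you want to script that check, here is a minimal sketch using the (cores × 2) + 1 worker rule from the patterns table later in this guide; the process count mirrors the ps command above:
import multiprocessing
import subprocess

cores = multiprocessing.cpu_count()
recommended = cores * 2 + 1  # the (cores × 2) + 1 rule

# Count running Odoo processes; '[o]doo' keeps grep from matching itself
result = subprocess.run("ps aux | grep '[o]doo' | wc -l",
                        shell=True, capture_output=True, text=True)
running = int(result.stdout.strip())

print(f"cores={cores}  recommended_workers={recommended}  odoo_processes={running}")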
Step 2: Enable Query Logging (Immediate)
You are right now running database queries that take 3-8 seconds each. You don't know which ones. Fix that.
# Edit your Odoo config file (/etc/odoo/odoo.conf)
log_level = debug_sql
# Restart Odoo, wait 5 minutes, check logs
tail -f /var/log/odoo/odoo-server.log | grep -iE "query time|took"
Perform a user action: create an invoice. Search inventory. Open a report. The logs will show you which SQL queries executed and how long each one took.
Look for queries exceeding 300-500ms. That is your first enemy.
DEBUG odoo.sql: SELECT res_partner.id FROM res_partner
WHERE res_partner.name ILIKE 'test' [QUERY TIME: 0.8s]
This single query takes 800ms. If someone calls it 20 times during an invoice creation, that's 16 seconds buried in your page load.
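If the log is too noisy to eyeball, a short script can surface the worst offenders. A minimal sketch, assuming your log lines carry a QUERY TIME tag like the illustrative line above (adjust the regex to your actual log format):
import re

THRESHOLD = 0.3  # seconds; the 300-500ms range from this step
LOG_PATH = "/var/log/odoo/odoo-server.log"

slow = []
with open(LOG_PATH) as log:
    for line in log:
        match = re.search(r"QUERY TIME: ([0-9.]+)s", line)
        if match and float(match.group(1)) >= THRESHOLD:
            slow.append((float(match.group(1)), line.strip()))

# Print the ten slowest entries, worst first
for seconds, entry in sorted(slow, reverse=True)[:10]:
    print(f"{seconds:6.2f}s  {entry[:120]}")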
Step 3: Identify If It's Database Or Code
Open your browser. Go to your Odoo instance. Add ?debug=1 to the URL. Log in. Now open your browser's Developer Tools (press F12).
Click the "Network" tab. Create a sales order. Watch the timeline.
You'll see calls like /web/dataset/search_read or /web/dataset/call_kw. These are the expensive operations. Click one. Look at the "Time" column.
Database vs Code Decision Tree:
• Logs show NO slow SQL + Network shows slow requests = Code problem (Python)
• Logs show slow SQL matching network timeline = Database problem
Real diagnostic story: A manufacturing client said "reports are slow." Network timing: 12.3 seconds per report. Logs: 11.8 seconds of SQL execution. Pulled the slowest query (found in 3 minutes): SELECT COUNT(*) FROM stock_move WHERE state='done' AND DATE(create_date) BETWEEN %s AND %s. This query scanned 847,000 records without an index, and the DATE() wrapper blocked index use anyway. Rewrote the filter as a plain range on create_date and added an index on (state, create_date). Report time: 12.3s → 0.8s. Cost: 8 minutes.
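A sketch of that fix in SQL (names come from the story above; CONCURRENTLY avoids locking writes while the index builds):
-- Rewrite the filter so the index is usable:
--   WHERE state = 'done' AND create_date >= %s AND create_date < %s
CREATE INDEX CONCURRENTLY idx_stock_move_state_create_date
    ON stock_move (state, create_date);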
Step 4: Check For Missing Indexes
This is where 40% of slow Odoo instances live.
psql -U odoo -d my_database
-- Find unused indexes (candidates for removal; keep primary-key and unique-constraint indexes)
SELECT schemaname, relname AS tablename, indexrelname AS indexname
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 10;
-- Find tables with excessive sequential scans (add indexes)
SELECT schemaname, relname AS tablename, seq_scan, seq_tup_read
FROM pg_stat_user_tables
WHERE seq_scan > 1000
ORDER BY seq_scan DESC
LIMIT 10;
Common fields that need indexes:
• Fields in domain filters (res.partner.email, res.partner.phone)
• state fields (invoice state, order state, move state)
• Frequently sorted columns (create_date, write_date)
• Foreign keys (partner_id, product_id)
If you find a table with 50,000+ rows being sequentially scanned thousands of times per day, adding one index will cut its query time by 60-90%.
-- CONCURRENTLY builds the index without locking writes on a live table
CREATE INDEX CONCURRENTLY idx_res_partner_email ON res_partner (email);
Test it. Measure again. If query time drops from 2.1s to 0.3s, you just found a $5,000+ problem you can fix in 2 minutes.
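To confirm PostgreSQL actually uses the new index, check the plan with EXPLAIN ANALYZE (the email value is only an illustration):
EXPLAIN ANALYZE
SELECT id FROM res_partner WHERE email = 'test@example.com';
-- Before the index the plan shows "Seq Scan on res_partner";
-- afterwards it should show an index scan using idx_res_partner_email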
Step 5: Look For The Hidden Killer—search() Inside Loops
This is the pattern nobody sees coming.
for line in invoice_lines:
    partner = self.env['res.partner'].search([('id', '=', line.partner_id.id)])
    # do something with partner
This is death. If you have 50 invoice lines, you just executed 50 database searches. If each takes 80ms, that's 4 seconds buried in code.
To find this, run Odoo in performance mode:
./odoo-bin --dev=performance
Watch the logs. You'll see repeated identical SQL queries. If you see the same query appearing 50+ times in a single operation, you found it. Move the search outside the loop.
| Before (Bad) | After (Good) |
|---|---|
| `for line in invoice_lines:` with a search() per iteration | `partners = {p.id: p for p in browse([...])}` built once, then dict lookups |
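Spelled out fully, a minimal before/after sketch of that fix (invoice_lines and partner_id follow the loop example above):
# Before: one search per line, N queries total
for line in invoice_lines:
    partner = self.env['res.partner'].search([('id', '=', line.partner_id.id)])

# After: one query up front, then dictionary lookups inside the loop
partner_ids = invoice_lines.mapped('partner_id').ids
partners = {p.id: p for p in self.env['res.partner'].browse(partner_ids)}
for line in invoice_lines:
    partner = partners[line.partner_id.id]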
Impact: One client's custom invoice import ran 2,400 searches per day (one per line). After moving the search outside the loop, that dropped to 1 query per run. Import time: 18 minutes → 3 minutes, plus 18 hours per week freed up for the team member who ran the import manually.
Step 6: Profile Your Python Code
If your logs show fast queries but pages still load slowly, Python is the problem.
./odoo-bin shell -d my_database
import cProfile
import pstats
import io
pr = cProfile.Profile()
pr.enable()
# Run your slow operation here
# e.g., self.env['sale.order'].create({'partner_id': 1, ...})
pr.disable()
s = io.StringIO()
ps = pstats.Stats(pr, stream=s).sort_stats('cumulative')
ps.print_stats(20)
print(s.getvalue())
This shows the 20 slowest function calls. If you see a custom method eating 6 seconds, that's your bottleneck.
Step 7: Check Cron Jobs (The Silent Killer)
Bad scheduled actions will strangle your entire system.
Go to Settings → Technical → Automation → Scheduled Actions.
Look for jobs set to run frequently (every 1-5 minutes). If you have 12 jobs each running every 2 minutes, they can overlap and queue up.
During your 3 PM slow period, your system might be running all of these at the same time:
• Inventory synchronization
• Accounting reconciliation
• Report generation
• Email sending
• Custom sync jobs
Fix: Stagger them. Offset each job's next-run time so jobs never fire at the same moment: start one 2-minute job at minute 0, the next at minute 1, and shift each 5-minute job by a different minute. Space them out (see the sketch below).
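One way to apply those offsets is from the odoo shell. A minimal sketch, assuming the standard ir.cron fields (interval_number, interval_type, nextcall); review the new times before committing:
from datetime import timedelta

# Shift each frequent job's start time by one extra minute so that
# same-interval jobs stop firing simultaneously
crons = env['ir.cron'].search([('interval_type', '=', 'minutes'),
                               ('interval_number', '<=', 5)])
for offset, cron in enumerate(crons):
    cron.nextcall = cron.nextcall + timedelta(minutes=offset)
env.cr.commit()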
The Tools That Do The Heavy Lifting
PostgreSQL's pg_stat_statements Extension
This is the single most important diagnostic tool. It logs every SQL query with execution count and timing.
# Edit postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
# Restart PostgreSQL, then enable and query it from psql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Column names below are for PostgreSQL 13+; older versions use mean_time / max_time
SELECT query, calls, mean_exec_time, max_exec_time
FROM pg_stat_statements
WHERE dbid = (SELECT oid FROM pg_database WHERE datname = 'my_database')
ORDER BY max_exec_time DESC
LIMIT 10;
This shows your top 10 slowest queries. If you see a query taking 8.2 seconds, you have 1-2 hours of optimization work. If you see a query running 47,000 times in one hour, that's your search-in-loop problem.
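The same view sorted by call count surfaces the search-in-loop pattern directly (again, mean_exec_time is the PostgreSQL 13+ column name):
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;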
Odoo's Built-In Profiler (v14+)
This is criminally underused.
1. Go to Settings → Developer Tools → Profiler
2. Click Start Profiling
3. Perform your slow operation
4. Click Stop Profiling
5. View the results
The profiler shows you each SQL query executed, time taken for each, and queries grouped by operation. This gives you instant visibility. Takes 2 minutes. Catches 90% of problems.
pgBadger (For Historical Analysis)
sudo apt-get install pgbadger
# Parse PostgreSQL logs
pgbadger /var/log/postgresql/postgresql-*.log -o report.html
Open report.html in a browser. This visualizes your slowest queries, query frequency, missing indexes, and database activity timeline.
Common Diagnostic Patterns
| Pattern | Symptom | Fix | Result |
|---|---|---|---|
| Missing Index | Inventory search 7.2s | Add index on domain fields (5 min) | 7.2s → 0.4s |
| The Loop | Invoice creation 18s | Move search outside loop (15 min) | 18s → 2.1s |
| Bloated Database | All queries slow over time | Archive old records, VACUUM FULL (3 hrs) | 6s → 2.2s |
| Cron Collision | Slow at certain times (3 PM, 5 PM) | Stagger job execution (10 min) | Periodic slowness eliminated |
| Misconfigured Workers | High CPU, slow pages | Configure (cores × 2) + 1 workers (2 min) | Faster request processing |
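For reference, a minimal odoo.conf sketch applying that worker formula to an 8-core server; the memory limits are illustrative values to tune against your actual RAM:
# (8 cores × 2) + 1 = 17 workers
workers = 17
max_cron_threads = 2
# Per-worker memory limits in bytes (illustrative)
limit_memory_soft = 2147483648
limit_memory_hard = 2684354560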
Your Diagnostic Checklist
This Week
[ ] Record baseline CPU, RAM, disk, worker count
[ ] Enable query logging
[ ] Profile one slow operation with browser DevTools
[ ] Compare logs to network timeline (identify database vs. code)
[ ] Check for missing indexes
This Month
[ ] Run pg_stat_statements query, identify top 5 slowest queries
[ ] Check for search() inside loops in custom modules
[ ] Audit scheduled actions, look for overlaps
[ ] Run Odoo profiler on your slowest user workflow
[ ] Archive records older than 24 months
This Quarter
[ ] Optimize 3 slow queries (add indexes, fix code)
[ ] Run database VACUUM FULL during off-hours
[ ] Review worker configuration against actual concurrent user load
[ ] Implement pgBadger analysis as part of monthly maintenance
[ ] Document your slowest operations and their known causes
Real-World Case Study
Problem: Inventory search took 11.4 seconds
Impact: Lost $240/day in team time + 8 lost orders/week ≈ $3,200/month revenue
Diagnosis (4 hours):
• 1 missing index on stock_move.state field
• 1 loop running 86 searches instead of 1
• Database bloated by 34GB from unarchived records
Fixes and results (12 hours including testing):
• Inventory search: 11.4s → 0.7s
• Orders processed per day: +18 (fewer timeouts)
• Monthly revenue recovered: $3,200+
• Team productivity: +14 hours per week
ROI:
Cost: $4,800 consulting | Monthly ROI: $3,200 | Payback: 1.5 months
Year 1 Recovery: $38,400
Without diagnosis, they would have upgraded their server ($8,000+), hired more staff, or replaced Odoo.
The Final Truth About Bottlenecks
Your Odoo doesn't have a speed problem. It has an invisibility problem.
You cannot optimize what you do not see. Most companies lose $400-600 daily to bottlenecks they refuse to diagnose. They guess. They add workers. They throw hardware at the problem.
Spend 6 hours this week diagnosing. Identify your actual bottleneck. Fix it. Recover $150K+ in annual productivity and revenue.
It is the single highest-ROI project your operations team can run.
Frequently Asked Questions
How long does a bottleneck diagnosis take?
4-6 hours of active analysis. Includes baseline measurement, query logging, profiling, code review, and a detailed report with fix priority and estimated impact.
Can we diagnose without downtime?
Yes. All diagnostic tools (logging, profiling) can run on production without stopping the system. The only high-risk fix is database VACUUM FULL, which should run during off-hours.
What if the bottleneck is the server itself?
You'll know immediately. Baseline CPU will be consistently above 85%, RAM above 90%, or disk I/O showing I/O wait above 30%. In that case, upgrade is justified. But this is only 1 in 20 instances. The other 19 are database or code.
How much does optimization cost?
Depends on the bottleneck. Adding an index: $0. Fixing a search-in-loop: 1-2 hours of labor ($400-800). Archiving old data: 2-4 hours of labor ($800-1,600). Rebuilding a slow custom module: 20-40 hours ($8K-16K). Always measure first, then estimate.
Can we prevent bottlenecks?
Mostly yes. Code reviews catch loops. Database monitoring catches bloat. Worker configuration prevents infrastructure bottlenecks. But without ongoing monitoring, performance degrades over 18-24 months as data volume grows and new modules get added.
Should we use an APM tool like Datadog?
Only if you're a $5M+ brand or running custom Odoo development. For most D2C companies, the free tools (pg_stat_statements, Odoo profiler, browser DevTools) are sufficient. Start with these. Upgrade to Datadog only when you outgrow them.
Free Bottleneck Diagnosis Session
Braincuber's bottleneck diagnostic process identifies your exact problem—database, code, or infrastructure—in 4-6 hours. No guessing. No wasted optimization efforts. Pure data. We've recovered 2,400+ hours of annual team productivity and $7.2M+ in prevented downtime across our D2C clients. We'll profile your system, identify the top 3 problems costing you right now, and show you the exact fix sequence.
