Quick Answer
N+1 query problem: one query gets N records, then N additional queries (one per record) fetch related data. The problem: a dashboard loads 100 orders and displays customer name + product count + notes. With N+1, each order triggers a query for order.partner_id.name (queries 1-100) and len(order.order_line) (queries 101-200) = 200+ queries, 45 seconds. The solution: orders.mapped('partner_id.name') prefetches all partner names (1 query for 100), and order.order_line then hits the prefetch cache (no new queries) = 3 queries total, 0.3 seconds, 150x faster. Root cause: lazy loading (the Odoo default) = accessing a related field triggers a database query. Prefetch optimization = the first access fetches the field for ALL records in the recordset; subsequent accesses use the cache. Prefetch breaks if: you loop through orders.ids and browse(order_id) = a new recordset each iteration = loses prefetch = N+1 queries. Detection methods: (1) --log-sql logs every SQL query with timing. (2) log_level = debug_sql in odoo.conf. (3) The cursor's query counter (cr.sql_log_count) counts queries programmatically. (4) The Profiler in debug mode shows query counts. (5) The browser network tab shows multiple /web/dataset/search_read calls. Common patterns: loop with field access, loop with search, loop with create/write, breaking the recordset in a loop. Solutions: (1) mapped('partner_id.name') for related fields (1 query). (2) read_group() for aggregates with GROUP BY (1 query). (3) search_read() for specific fields only (faster). (4) Batch write() = orders.write(), not a write per loop iteration (1 query). (5) Keep recordsets intact = for order in orders, not for id in orders.ids. Real example: a sales-rep dashboard with order counts/revenue/last order = 500+ queries (30s) unoptimized vs 4 queries (0.2s) optimized, using read_group() twice + one raw SQL query. Impact: understanding N+1 = lightning-fast reports, instant dashboards. Not understanding = timeouts, an unusable system, and $80k-$200k in infrastructure upgrades you don't need.
The N+1 Performance Gap
Your D2C dashboard loads a list of 100 orders. For each order, it displays:
• Customer name
• Total products ordered
• Order notes
Scenario A: N+1 Problem
orders = self.env['sale.order'].search([])
for order in orders:
    name = order.partner_id.name     # Queries 1-100
    count = len(order.order_line)    # Queries 101-200
    notes = order.note               # Stored on the order, already in memory
# Total: 200+ queries
# Time: 45 seconds
Scenario B: Optimized
orders = self.env['sale.order'].search([])
names = orders.mapped('partner_id.name')  # 1 query for all 100
for order in orders:
    name = order.partner_id.name     # Prefetched, no new query
    count = len(order.order_line)    # Prefetched, no new query
# Total: 3 queries
# Time: 0.3 seconds
45 seconds → 0.3 seconds
That's 150x faster
We've implemented 150+ Odoo systems. The ones where developers understand N+1 problems? Lightning-fast reports, instant dashboards, happy users. The ones where developers don't? Pages time out, queries pile up, and performance degrades until the system is unusable. That's $80,000-$200,000 in infrastructure upgrades you don't need to buy.
Understanding N+1 Problems
What it is: One query gets N records. Then N additional queries (one per record) fetch related data.
Why It Happens: Lazy Loading
# LAZY LOADING (default in Odoo)
orders = self.env['sale.order'].search([])  # Query 1: get 100 orders
for order in orders:
    # Without prefetch, each access triggers a query
    customer = order.partner_id.name  # Queries 2-101 (100 queries!)
    email = order.partner_id.email    # Queries 102-201 (100 more!)
Odoo Prefetch Optimization (Automatic)
# First access triggers prefetch
order1 = orders[0]
customer1 = order1.partner_id.name  # Query 1: fetches partner for ALL records
# Subsequent accesses use the prefetch cache (no queries!)
order2 = orders[1]
customer2 = order2.partner_id.name  # Prefetched, no query
When Prefetch FAILS (Critical!)
# BREAKS PREFETCH (causes N+1)
orders = self.env['sale.order'].search([])
for order_id in orders.ids:  # Lost the recordset!
    order = self.env['sale.order'].browse(order_id)  # New recordset, no prefetch
    name = order.partner_id.name  # N+1 query!
Detecting N+1 Problems
Method 1: Enable SQL Query Logging
# Run Odoo with SQL query logging
./odoo-bin --log-sql -d database_name
# Logs every SQL query with timing
# Look for duplicate queries or high counts
Method 2: Check Debug Log
# In odoo.conf
[options]
log_level = debug_sql
# Now all SQL queries appear in logs with timing
Method 3: Use Profiler in Debug Mode
1. Settings → Debug Tools → Profiler
2. Start profiling
3. Open your page/report
4. Stop profiling
5. View profile report (shows queries executed)
Method 4: Count Queries Programmatically
# The database cursor keeps a running count of executed queries
count_before = self.env.cr.sql_log_count
orders = self.env['sale.order'].search([])
for order in orders:
    name = order.partner_id.name
print(self.env.cr.sql_log_count - count_before, "SQL queries executed")
# If the count is far above 2-3 for a loop like this, you have an N+1 problem
Method 5: Monitor Browser Network Tab
1. Open browser Developer Tools (F12)
2. Network tab
3. Open your Odoo page
4. Look for /web/dataset/search_read calls
5. Check payload (should be 1 query, not multiple)
6. If many calls: N+1 problem
Common N+1 Patterns
Pattern 1: Loop with Field Access
# ❌ N+1 PROBLEM (100 orders = 100+ queries)
orders = self.env['sale.order'].search([])
for order in orders:
    print(order.partner_id.name)  # Query per order
# ✅ FIX: Use mapped()
names = orders.mapped('partner_id.name') # 1 query for all
Pattern 2: Loop with Search
# ❌ N+1 PROBLEM (100 customers = 100 searches)
customers = self.env['res.partner'].search([])
for customer in customers:
    orders = self.env['sale.order'].search([('partner_id', '=', customer.id)])
    print(len(orders))  # Query per customer
# ✅ FIX: Use read_group()
data = self.env['sale.order'].read_group(
    domain=[],
    fields=['partner_id'],
    groupby=['partner_id'],
)
order_counts = {d['partner_id'][0]: d['partner_id_count'] for d in data}
for customer in customers:
    print(order_counts.get(customer.id, 0))  # No additional queries
Pattern 3: Loop with Create/Write
# ❌ N+1 PROBLEM (100 writes = 100 queries)
for order in orders:
    order.write({'state': 'done'})  # Query per order
# ✅ FIX: Batch write
orders.write({'state': 'done'}) # 1 query for all
Pattern 4: Breaking Recordset in Loop
# ❌ N+1 PROBLEM (loses prefetch benefit)
for order_id in orders.ids:
    order = self.env['sale.order'].browse(order_id)  # New recordset!
    print(order.partner_id.name)  # N+1 query
# ✅ FIX: Keep original recordset
for order in orders:
    print(order.partner_id.name)  # Uses prefetch
Real D2C Example: Complete N+1 Fix
Scenario: Dashboard showing sales reps with their order counts, total revenue, and last order date.
WRONG (N+1 Problem - 500+ Queries)
class SalesRepDashboard:
    def get_dashboard_data(self):
        reps = self.env['res.users'].search([])  # Query 1
        data = []
        for rep in reps:
            # For each rep: search orders, then read fields
            orders = self.env['sale.order'].search([  # One query per rep
                ('user_id', '=', rep.id),
            ])
            total_revenue = sum(o.amount_total for o in orders)  # More reads per rep
            last_order = orders[0] if orders else False  # Already in memory
            data.append({
                'name': rep.name,
                'order_count': len(orders),
                'revenue': total_revenue,
                'last_order_date': last_order.date_order if last_order else False,
            })
        # Total: 1 + (number of reps) searches + field reads = 500+ queries!
        return data
RIGHT (Optimized - 4 Queries)
class SalesRepDashboard:
    def get_dashboard_data(self):
        # Get reps
        reps = self.env['res.users'].search([])  # Query 1
        # Get order counts by rep (1 query)
        order_data = self.env['sale.order'].read_group(
            domain=[('user_id', 'in', reps.ids)],
            fields=['user_id'],
            groupby=['user_id'],
        )
        order_counts = {
            d['user_id'][0]: d['user_id_count'] for d in order_data
        }  # Query 2
        # Get revenue by rep (1 query)
        revenue_data = self.env['sale.order'].read_group(
            domain=[('user_id', 'in', reps.ids), ('state', '=', 'done')],
            fields=['user_id', 'amount_total:sum'],
            groupby=['user_id'],
        )
        revenues = {
            d['user_id'][0]: d['amount_total'] for d in revenue_data
        }  # Query 3
        # Get last order per rep (1 query with raw SQL);
        # guard against an empty IN () clause
        last_orders = {}
        if reps.ids:
            self.env.cr.execute("""
                SELECT DISTINCT ON (user_id) user_id, date_order
                FROM sale_order
                WHERE user_id IN %s
                ORDER BY user_id, date_order DESC
            """, (tuple(reps.ids),))
            last_orders = dict(self.env.cr.fetchall())  # Query 4
        # Build dashboard data (no queries)
        data = []
        for rep in reps:
            data.append({
                'name': rep.name,
                'order_count': order_counts.get(rep.id, 0),
                'revenue': revenues.get(rep.id, 0),
                'last_order_date': last_orders.get(rep.id),
            })
        # Total: 4 queries (vs 500+ before)
        # Time: 0.2 seconds (vs 30 seconds before)
        return data
Fixing N+1 Problems (Solutions)
Solution 1: Use mapped()
# Get data from related records
names = orders.mapped('partner_id.name') # 1 query
emails = orders.mapped('partner_id.email') # Uses prefetch, no new query
Solution 2: Use read_group()
# Aggregate data efficiently
data = self.env['sale.order'].read_group(
    domain=[],
    fields=['partner_id', 'amount_total:sum'],
    groupby=['partner_id'],
)
# Single query with GROUP BY
Solution 3: Use search_read()
# Get only needed fields
data = self.env['sale.order'].search_read(
    domain=[],
    fields=['name', 'partner_id', 'amount_total'],  # Only needed fields
)
# Faster than loading full objects
Solution 4: Batch Operations
# Wrong
for order in orders:
    order.write({'state': 'done'})
# Right
orders.write({'state': 'done'}) # 1 query for all
Solution 5: Keep Recordsets Intact
# Wrong
for order_id in orders.ids:
    order = self.env['sale.order'].browse(order_id)
# Right
for order in orders:
    # Uses the original recordset's prefetch
    pass
Action Items: Eliminate N+1 Problems
Identify N+1 Problems
❏ Enable SQL logging (--log-sql)
❏ Open slow page/report
❏ Count SQL queries in log
❏ If > 20 queries for simple page: N+1 problem
Fix N+1 Problems
❏ Replace loops with mapped() / read_group()
❏ Use search_read() instead of search() + loop
❏ Batch write() calls
❏ Keep recordsets intact (don't break in loops)
Monitor Ongoing
❏ Profile dashboards/reports monthly
❏ Set query limits (alert if > 10 queries)
❏ Train team on N+1 prevention
❏ Code review for query issues
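The "set query limits" item above can be automated. The sketch below shows one possible shape: a query-budget decorator that fails a function exceeding a limit. The names query_budget and CountingCursor are invented helpers, and sqlite3 stands in for PostgreSQL so the example is runnable; inside Odoo the same check can compare the cursor's running query counter before and after the call.

```python
# Hypothetical query-budget guard: fail fast when a function runs too many
# queries. CountingCursor proxies any DB-API cursor and counts execute() calls.
import sqlite3
from functools import wraps

class CountingCursor:
    """Proxy that counts execute() calls on a DB-API cursor."""
    def __init__(self, cursor):
        self._cursor = cursor
        self.count = 0

    def execute(self, sql, *args):
        self.count += 1
        return self._cursor.execute(sql, *args)

    def __getattr__(self, name):
        return getattr(self._cursor, name)

def query_budget(limit):
    """Decorator: wrap the cursor argument and assert a maximum query count."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(cursor, *args, **kwargs):
            counting = CountingCursor(cursor)
            result = fn(counting, *args, **kwargs)
            if counting.count > limit:
                raise AssertionError(
                    f"{fn.__name__} ran {counting.count} queries (limit {limit})")
            return result
        return wrapper
    return decorator

@query_budget(limit=5)
def noisy(cur):
    for i in range(10):  # 10 queries: blows the budget of 5
        cur.execute("INSERT INTO t VALUES (?)", (i,))

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
try:
    noisy(cur)
    exceeded = False
except AssertionError:
    exceeded = True
print(exceeded)  # True
```

Dropping a guard like this into tests for dashboards and reports turns N+1 regressions into failing builds instead of production slowdowns.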
Frequently Asked Questions
What is an N+1 query problem in Odoo?
N+1 problem: one query gets N records, then N additional queries fetch related data. Example: orders = search([]) gets 100 orders (query 1), then for each order, accessing order.partner_id.name triggers a query (queries 2-101) = 101 total queries. Root cause: lazy loading = Odoo doesn't fetch related data until it is accessed, and each access triggers a separate database query. Odoo optimization: prefetch = the first access to a related field fetches it for ALL records in the recordset; subsequent accesses use cached data (no new queries). Prefetch breaks when: looping through orders.ids and calling browse(id) creates a new recordset each iteration = loses the prefetch cache = N queries. Detection: enable SQL logging (--log-sql) and look for duplicate SELECT statements with the same structure but different IDs. Impact: 100 records with N+1 = 100+ queries (slow); optimized = 2-3 queries (fast). Common in: reports looping over records, dashboards showing aggregates, list views with computed fields accessing relations.
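The query-count gap described above can be made concrete with a toy simulation. This is NOT Odoo's real ORM: query_log, PARTNERS, ORDERS, fetch_partner, and fetch_partners are invented stand-ins that only count "queries", to show why lazy per-record access scales as N while a batched prefetch stays at 1.

```python
# Toy model: lazy loading (one query per record) vs prefetch (one batched query).
query_log = []

PARTNERS = {1: "Alice", 2: "Bob", 3: "Carol"}
ORDERS = [{"id": i, "partner_id": (i % 3) + 1} for i in range(100)]

def fetch_partner(pid):
    """Lazy loading: one SELECT per record."""
    query_log.append(f"SELECT name FROM partner WHERE id={pid}")
    return PARTNERS[pid]

def fetch_partners(pids):
    """Prefetch: one SELECT for the whole batch."""
    query_log.append(f"SELECT name FROM partner WHERE id IN {tuple(sorted(set(pids)))}")
    return {pid: PARTNERS[pid] for pid in pids}

# N+1 style: one query per order
names = [fetch_partner(o["partner_id"]) for o in ORDERS]
n_plus_one = len(query_log)

# Prefetch style: one batched query, then pure cache hits
query_log.clear()
cache = fetch_partners([o["partner_id"] for o in ORDERS])
names = [cache[o["partner_id"]] for o in ORDERS]
prefetched = len(query_log)

print(n_plus_one, prefetched)  # 100 1
```

Both loops produce identical results; only the query count differs, which is exactly the trade Odoo's prefetch mechanism makes for you automatically.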
How do I detect N+1 query problems in Odoo?
Use SQL logging, the cursor's query counter, or the built-in Profiler to detect N+1 problems. Method 1 - SQL logging: ./odoo-bin --log-sql -d dbname logs every SQL query with timing to the console; look for duplicate queries or >20 queries for a simple page. Method 2 - Debug SQL: in odoo.conf set log_level = debug_sql; all queries appear in the log file with execution time. Method 3 - Query counter: record self.env.cr.sql_log_count before and after a block of code; the difference is the number of queries executed. Method 4 - Profiler: Settings → Debug Tools → Profiler, start profiling, open the page, stop profiling; the report shows query count and timing. Method 5 - Browser DevTools: F12 → Network tab; multiple /web/dataset/search_read calls = N+1 on the frontend. Red flags: >10 queries for a simple page, duplicate SELECTs with the same structure, page load >2 seconds, queries proportional to record count (100 records = 100 queries). Benchmark: a simple list view should be 2-5 queries, a dashboard 5-10 queries, a complex report 10-20 queries.
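The "duplicate SELECTs with the same structure" red flag can be checked mechanically once you have a log. A possible heuristic, in plain Python: normalize literal values away, then count repeated query shapes. The log_lines below are invented examples of the kind of lines SQL logging emits.

```python
# Heuristic N+1 detector for a SQL log: replace literals with placeholders,
# then flag any query shape that appears more than once.
import re
from collections import Counter

log_lines = [
    "SELECT name FROM res_partner WHERE id = 7",
    "SELECT name FROM res_partner WHERE id = 8",
    "SELECT name FROM res_partner WHERE id = 9",
    "SELECT id FROM sale_order WHERE state = 'done'",
]

def normalize(sql):
    # Replace numeric literals and quoted strings with '?' placeholders
    sql = re.sub(r"\b\d+\b", "?", sql)
    return re.sub(r"'[^']*'", "?", sql)

shapes = Counter(normalize(line) for line in log_lines)
suspects = {shape: n for shape, n in shapes.items() if n > 1}
print(suspects)
# {'SELECT name FROM res_partner WHERE id = ?': 3}
```

A shape repeated N times with only the ID changing is the signature of a loop issuing one query per record.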
What are the best ways to fix N+1 query problems in Odoo?
Use mapped(), read_group(), search_read(), batch operations, and keep recordsets intact. Solution 1 - mapped(): orders.mapped('partner_id.name') fetches all partner names in 1 query instead of loop accessing order.partner_id.name (N queries). Solution 2 - read_group(): Aggregate with GROUP BY = read_group(domain, fields=['partner_id'], groupby=['partner_id']) returns counts/sums in 1 query vs loop with search per customer. Solution 3 - search_read(): search_read(domain, fields=['name','partner_id','amount_total']) loads only needed fields (faster) vs search() loading all fields. Solution 4 - Batch write: orders.write({'state':'done'}) = 1 query vs loop order.write() = N queries. Solution 5 - Preserve recordsets: for order in orders (correct) vs for id in orders.ids then browse(id) (breaks prefetch). Advanced: Raw SQL with self.env.cr.execute() for complex aggregations. Pattern: Fetch data outside loop (mapped/read_group), then iterate using cached data. Result: 500+ queries → 4 queries, 30s → 0.2s.
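What read_group() buys you can be sketched in plain Python: aggregate everything in one pass over the data instead of issuing one lookup per customer. The orders list below is an invented stand-in for rows the ORM would return with a single GROUP BY query.

```python
# One pass over the data replaces N per-customer searches.
from collections import Counter, defaultdict

orders = [
    {"partner_id": 1, "amount_total": 100.0},
    {"partner_id": 2, "amount_total": 250.0},
    {"partner_id": 1, "amount_total": 50.0},
]

# Counts per customer (read_group with groupby=['partner_id'] does this in SQL)
order_counts = Counter(o["partner_id"] for o in orders)

# Revenue per customer (read_group with fields=['amount_total:sum'])
revenue = defaultdict(float)
for o in orders:
    revenue[o["partner_id"]] += o["amount_total"]

print(order_counts[1], revenue[1])  # 2 150.0
```

The pattern generalizes: fetch or aggregate once outside the loop, then iterate over the in-memory dictionaries, which costs zero additional queries.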
Why does Odoo's prefetch optimization sometimes fail?
Prefetch fails when you break the original recordset or create new recordsets in loops. How prefetch works: orders = search([]) creates recordset, first access to orders[0].partner_id.name triggers query fetching partner_id for ALL orders in recordset, subsequent orders[1].partner_id.name uses cached data (no query). Breaking pattern 1: for order_id in orders.ids then order = browse(order_id) = creates new single-record recordset each iteration = no shared prefetch cache = N queries. Breaking pattern 2: Splitting recordset with filtered() then iterating both = separate prefetch caches. Breaking pattern 3: Calling search() inside loop = new recordsets each time. Correct pattern: Keep original recordset intact, iterate with for order in orders, all iterations share same prefetch cache. Prefetch scope: Only works within same recordset, accessing order.partner_id.name prefetches partner_id for all orders, but doesn't prefetch order.partner_id.country_id (need separate access). Manual prefetch: orders.mapped('partner_id.country_id') forces prefetch of nested relation.
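The shared-cache mechanics above can be modeled in a few lines. RecordSet, partner_name, DB, and queries are invented toy constructs, not Odoo's API: the point is only that records created together share one cache, while browse()-ing ids one by one creates fresh recordsets whose empty caches force a query per record.

```python
# Toy model of a shared prefetch cache vs per-record browsing.
queries = 0
DB = {i: f"partner-{i}" for i in range(5)}

class RecordSet:
    def __init__(self, ids):
        self.ids = ids
        self.cache = {}

    def partner_name(self, rid):
        global queries
        if rid not in self.cache:
            queries += 1                                    # one query fills...
            self.cache.update({i: DB[i] for i in self.ids})  # ...ALL ids at once
        return self.cache[rid]

    def browse(self, rid):
        return RecordSet([rid])  # new recordset, empty cache

orders = RecordSet(list(DB))

queries = 0
for rid in orders.ids:                 # shared cache: 1 query total
    orders.partner_name(rid)
shared = queries

queries = 0
for rid in orders.ids:                 # broken: new recordset each time
    orders.browse(rid).partner_name(rid)
broken = queries

print(shared, broken)  # 1 5
```

The first loop pays one query and then reads from the cache; the second pays one query per iteration, which is the N+1 pattern in miniature.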
Free N+1 Query Audit
Stop accepting slow systems. We'll profile your slowest pages and reports, identify every N+1 problem, provide specific fixes, then implement them and measure the improvements. Most D2C brands have 10-20 N+1 problems causing 50% of their performance issues. Fixing them usually takes 1-4 hours of work and delivers 5-100x speedups.
