Quick Answer
Stop slow queries from destroying performance. N+1 queries are the most common mistake: accessing related fields in loops can trigger 500+ queries. Fix: let Odoo prefetch automatically, use mapped(), or search_read(). Result: 501 queries → 2 queries, 30s → 0.3s (100x faster). Other critical mistakes: looping write() instead of batching (50s → 0.5s), using len(search()) instead of search_count(), calling search() inside loops (101 queries → 2), forgetting super() (breaks the workflow), computed fields without store=True (can't be searched), onchange side effects (data-integrity issues), missing indexes (seconds → instant), and loading full objects instead of using search_read() (2-10x slower). Poor ORM usage = $60k-$150k in lost productivity.
The Performance Problem
Your D2C brand processes 500 orders daily. A developer writes code to send a confirmation email to each customer.
Mistake Code
```python
# WRONG - Browsing records one at a time defeats prefetching (N+1 queries)
order_ids = self.env['sale.order'].search([('state', '=', 'sale')]).ids  # Query 1
for order_id in order_ids:
    order = self.env['sale.order'].browse(order_id)  # single-record recordset
    customer_name = order.partner_id.name   # one partner query PER order
    customer_email = order.partner_id.email
    send_email(customer_email, customer_name)
```
Result: Search query: 1 + Partner queries: 500 = 501 total queries. Time: 30+ seconds.
Correct Code
```python
# RIGHT - Iterate the recordset; Odoo prefetches related records in batch
orders = self.env['sale.order'].search([('state', '=', 'sale')])
for order in orders:
    customer_name = order.partner_id.name    # first access prefetches ALL partners
    customer_email = order.partner_id.email  # served from cache, no extra query
    send_email(customer_email, customer_name)
```
Result: Search query: 1 + Prefetch query: 1 = 2 total queries. Time: 0.3 seconds. 100x faster!
We've implemented 150+ Odoo systems. The ones where developers understand ORM best practices? Their systems perform beautifully even with millions of records. Queries run in milliseconds. The ones where developers don't? Pages time out. Reports freeze. Users complain. That's $60,000-$150,000 in lost productivity and frustrated teams.
Mistake 1: The N+1 Query Problem (Most Common)
What it is: One query to get N records, then one additional query per record.
```python
# WRONG - 501 queries (browsing one record at a time)
order_ids = self.env['sale.order'].search([]).ids  # Query 1
for order_id in order_ids:
    order = self.env['sale.order'].browse(order_id)
    email = order.partner_id.email  # Queries 2-501: one partner read per order
```
Solution 1: Let Odoo Prefetch (Simplest)
```python
# RIGHT - 2 queries (automatic prefetch)
orders = self.env['sale.order'].search([])  # Query 1
for order in orders:
    email = order.partner_id.email  # Query 2: prefetch fetches ALL partners at once
```
When you access order.partner_id.email on the first iteration, Odoo automatically fetches that field for ALL records in the recordset. Subsequent iterations don't query.
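The effect of prefetching can be simulated without Odoo. The sketch below uses a fake database that counts round trips; `FakeDB`, `fetch_one`, and `fetch_many` are made-up names for illustration, not Odoo APIs:

```python
# Framework-free sketch of N+1 vs. batched prefetch.
class FakeDB:
    def __init__(self, rows):
        self.rows = rows          # {id: email}
        self.query_count = 0

    def fetch_one(self, rec_id):
        self.query_count += 1     # one round trip per record (N+1 pattern)
        return self.rows[rec_id]

    def fetch_many(self, rec_ids):
        self.query_count += 1     # one round trip for the whole batch
        return [self.rows[i] for i in rec_ids]

db = FakeDB({i: f"user{i}@example.com" for i in range(500)})

# N+1 style: one query per record
emails_slow = [db.fetch_one(i) for i in range(500)]
n_plus_one = db.query_count          # 500 queries

db.query_count = 0
# Prefetch style: one query for all records
emails_fast = db.fetch_many(list(range(500)))
batched = db.query_count             # 1 query
```

The data returned is identical either way; only the number of round trips changes, and round trips are what dominate wall-clock time.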
Solution 2: Use mapped() (Cleanest)
```python
# RIGHT - 2 queries (explicit)
orders = self.env['sale.order'].search([])
emails = orders.mapped('partner_id.email')  # fetches all partner emails at once
for email in emails:
    send_email(email)
```
Solution 3: Use search_read() (Fastest for Data-Only)
```python
# RIGHT - 2 queries (best for simple data retrieval)
data = self.env['sale.order'].search_read(
    [],
    ['partner_id'],  # only fetch the fields you need
)
# Batch-browse the partners instead of browsing one per row
partner_ids = [d['partner_id'][0] for d in data if d['partner_id']]
partners = self.env['res.partner'].browse(partner_ids)
emails = partners.mapped('email')  # one prefetch query for all partners
```
When to Use Each Solution
| Solution | Use Case |
|---|---|
| Prefetch (automatic) | Need full objects with business logic |
| mapped() | Extract specific fields cleanly |
| search_read() | Reports, exports, API responses (data only) |
Mistake 2: Looping Instead of Batch Operations
```python
# WRONG - Inefficient: one write per record
orders = self.env['sale.order'].search([('state', '=', 'draft')])
for order in orders:
    order.write({
        'state': 'sent',
        'x_notification_sent': True,
    })
```
Result: 1 search query + N individual writes, each with its own UPDATE plus recompute/trigger overhead. 1,000 orders: ~50 seconds.
```python
# RIGHT - Batch operation
orders = self.env['sale.order'].search([('state', '=', 'draft')])
orders.write({
    'state': 'sent',
    'x_notification_sent': True,
})
```
Result: 1 search query + 1 batch write = 2 queries. 1,000 orders: 0.5 seconds.
More Batch Examples
```python
# Batch unlink (delete)
cancelled = self.env['sale.order'].search([('state', '=', 'cancelled')])
cancelled.unlink()  # Delete all at once, not one by one

# Batch filtered operations
orders = self.env['sale.order'].search([('state', '=', 'sale')])
high_value_orders = orders.filtered(lambda o: o.amount_total > 5000)
high_value_orders.write({'x_priority': 'high'})
```
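When records need *different* values, you can still batch by grouping them first: one write() per distinct value, not one per record. A framework-free sketch of the grouping step (`plan_batch_writes` and the priority values are illustrative):

```python
from collections import defaultdict

def plan_batch_writes(records):
    """records: list of (record_id, priority) pairs.

    Groups record ids by the value they should receive, so each
    distinct value becomes ONE batch write instead of one per record.
    """
    groups = defaultdict(list)
    for rec_id, priority in records:
        groups[priority].append(rec_id)
    # Each entry maps to a single recordset.write({'x_priority': ...})
    return dict(groups)

plan = plan_batch_writes([(1, 'high'), (2, 'low'), (3, 'high'), (4, 'low')])
# 2 writes instead of 4
```

With K distinct values you issue K writes regardless of how many records there are, which keeps the query count flat as data grows.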
Mistake 3: Using len() Instead of search_count()
```python
# WRONG - Retrieves all records into memory
orders = self.env['sale.order'].search([('state', '=', 'draft')])
count = len(orders)  # Loads ALL records just to count!
```
Why it's wrong: Loads 10,000 records into memory, uses CPU to iterate, memory bloat, slow. 10,000 orders: 5 seconds.
```python
# RIGHT - Just count at the database level
count = self.env['sale.order'].search_count([('state', '=', 'draft')])
```
Result: 1 simple query: SELECT COUNT(*) ... No memory load. Instant. 10,000 orders: 0.1 seconds.
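The database-level difference is easy to demonstrate with plain SQL. A minimal sqlite3 sketch (table and column names are illustrative; PostgreSQL, which Odoo uses, behaves analogously):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sale_order (id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany(
    "INSERT INTO sale_order (state) VALUES (?)",
    [("draft",) if i % 2 == 0 else ("sale",) for i in range(10_000)],
)

# len(search()) equivalent: pull every matching row, count in Python
rows = conn.execute("SELECT id FROM sale_order WHERE state = 'draft'").fetchall()
slow_count = len(rows)            # transfers 5,000 rows just to count them

# search_count() equivalent: let the database count
fast_count = conn.execute(
    "SELECT COUNT(*) FROM sale_order WHERE state = 'draft'"
).fetchone()[0]                   # transfers a single number
```

Both give the same answer, but the COUNT(*) version moves one integer across the wire instead of thousands of rows.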
Mistake 4: Calling search() Inside Loops
```python
# WRONG - N queries inside loop
orders = self.env['sale.order'].search([])  # Query 1
for order in orders:
    # For EACH order, search for invoices - causes N additional queries
    invoices = self.env['account.invoice'].search([
        ('order_id', '=', order.id)
    ])
    process_invoices(invoices)
```
Result: With 100 orders: 101 queries.
```python
# RIGHT - Fetch all invoices once
orders = self.env['sale.order'].search([])

# Get all invoices for all orders in ONE query
invoices = self.env['account.invoice'].search([
    ('order_id', 'in', orders.ids)
])

# Group by order for processing
invoices_by_order = {}
for invoice in invoices:
    invoices_by_order.setdefault(invoice.order_id.id, []).append(invoice)

# Process
for order in orders:
    order_invoices = invoices_by_order.get(order.id, [])
    process_invoices(order_invoices)
```
Result: 101 queries → 2 queries.
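The fetch-once-then-group pattern works on any parent/child data. A framework-free sketch with plain dicts (the field names are illustrative):

```python
from collections import defaultdict

# Child rows fetched in ONE query, then grouped in memory by parent id.
invoices = [
    {"id": 10, "order_id": 1},
    {"id": 11, "order_id": 2},
    {"id": 12, "order_id": 1},
]

invoices_by_order = defaultdict(list)
for inv in invoices:
    invoices_by_order[inv["order_id"]].append(inv["id"])

# Each parent now gets its children with a dict lookup, no extra query.
```

The in-memory grouping is O(N) and costs microseconds; the queries it replaces cost milliseconds each, which is why this trade is always worth making.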
Mistake 5: Forgetting to Call super()
```python
# WRONG - Missing super() call
class SaleOrder(models.Model):
    _inherit = 'sale.order'

    def action_confirm(self):
        """Confirm order."""
        # Do custom logic
        self.x_custom_flag = True
        # FORGOT to call the parent method - and returns nothing!
```
Why it breaks: Parent method action_confirm() never runs. Order workflow doesn't execute. Inventory doesn't update. Email templates don't send. Everything breaks silently.
```python
# RIGHT - Calls super()
class SaleOrder(models.Model):
    _inherit = 'sale.order'

    def action_confirm(self):
        """Confirm order."""
        # Call parent FIRST
        result = super().action_confirm()
        # THEN do custom logic
        self.x_custom_flag = True
        return result
```
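The same failure mode can be shown in plain Python, without Odoo. `Base`, `Broken`, `Correct`, and the effects list are illustrative stand-ins for the parent workflow:

```python
# Plain-Python illustration of why skipping super() breaks the chain.
class Base:
    def confirm(self):
        return ["workflow", "inventory", "emails"]   # parent side effects

class Broken(Base):
    def confirm(self):
        # No super() call: parent logic silently never runs
        return ["custom_flag"]

class Correct(Base):
    def confirm(self):
        result = super().confirm()    # parent runs first
        result.append("custom_flag")  # then custom logic
        return result

broken = Broken().confirm()    # the parent's workflow effects are lost
correct = Correct().confirm()  # parent effects plus the customization
```

Note that `Broken` raises no error; the damage is silent, which is exactly what makes the Odoo version of this bug so hard to diagnose.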
Mistake 6: Not Using store=True for Searchable Computed Fields
```python
# WRONG - Can't search on a non-stored computed field
class SaleOrder(models.Model):
    _inherit = 'sale.order'

    profit = fields.Float(compute='_compute_profit')  # No store=True

    @api.depends('amount_total', 'amount_cost')
    def _compute_profit(self):
        for order in self:
            order.profit = order.amount_total - order.amount_cost

# Try to search - FAILS!
orders = self.env['sale.order'].search([('profit', '>', 1000)])  # Error!
```
Why it fails: computed fields without store=True exist only in memory; there is no database column to query. (A non-stored field can alternatively be made searchable by supplying a search= method on the field, but store=True is the usual fix.)
```python
# RIGHT - store=True allows searching
class SaleOrder(models.Model):
    _inherit = 'sale.order'

    profit = fields.Float(
        compute='_compute_profit',
        store=True,  # CRITICAL - makes it searchable
    )

    @api.depends('amount_total', 'amount_cost')
    def _compute_profit(self):
        for order in self:
            order.profit = order.amount_total - order.amount_cost

# Now searches work
orders = self.env['sale.order'].search([('profit', '>', 1000)])  # Works!
```
Mistake 7: Creating Onchange Side Effects
```python
# WRONG - Onchange with side effects
@api.onchange('product_id')
def _onchange_product(self):
    # Runs while editing the form - must not write to the database
    self.env['product.product'].search([...]).write({...})  # Side effect!
```
Why it's wrong: Onchange runs in the form, shouldn't touch database. Causes unexpected side effects, performance issues, data integrity problems.
```python
# RIGHT - Onchange only updates the current (unsaved) record
@api.onchange('product_id')
def _onchange_product(self):
    if self.product_id:
        # Only update THIS record in memory, not the database
        self.price = self.product_id.list_price
        self.description = self.product_id.description
```
Mistake 8: Not Indexing Frequently-Searched Fields
```python
# WRONG - No index on the searched field
class Product(models.Model):
    _inherit = 'product.product'

    sku = fields.Char()  # No index

# Searching 100,000 products by SKU is slow
products = self.env['product.product'].search([('sku', '=', 'ABC-123')])
```
Why it's slow: Database scans all 100,000 records. Takes seconds.
```python
# RIGHT - Add an index
class Product(models.Model):
    _inherit = 'product.product'

    sku = fields.Char(index=True)  # Add index

# Now the same search is instant
products = self.env['product.product'].search([('sku', '=', 'ABC-123')])
```
What to Index
```python
class SaleOrder(models.Model):
    _inherit = 'sale.order'

    # Illustrative candidates - index the fields you actually filter/sort on
    customer_id = fields.Many2one('res.partner', index=True)  # Often filtered
    state = fields.Selection(index=True)                      # Often filtered
    order_date = fields.Date(index=True)                      # Often sorted/filtered
    order_number = fields.Char(index=True)                    # Often searched
```
Mistake 9: Not Using search_read() for Bulk Data Extraction
```python
# WRONG - Loads full objects
orders = self.env['sale.order'].search([])  # Load all records as objects
data = []
for order in orders:
    data.append({
        'id': order.id,
        'name': order.name,
        'amount': order.amount_total,
    })
```
Why it's slow: Loads full objects into memory. Excess ORM overhead.
```python
# RIGHT - Direct data read
data = self.env['sale.order'].search_read(
    [],
    ['name', 'amount_total'],  # 'id' is always included
)
# data = [
#     {'id': 1, 'name': 'Order #001', 'amount_total': 1500},
#     {'id': 2, 'name': 'Order #002', 'amount_total': 2000},
# ]
```
Use case: Reports, exports, API responses. Speed: 2-10x faster than loading full objects.
Performance Impact Summary
| Mistake | Wrong (Slow) | Right (Fast) |
|---|---|---|
| N+1 Queries | 501 queries, 30s | 2 queries, 0.3s |
| Batch Operations | 1,001 queries, 50s | 2 queries, 0.5s |
| len() vs search_count() | Load 10k records, 5s | 1 COUNT query, 0.1s |
| search() in Loop | 101 queries | 2 queries |
Action Items: Optimize Your ORM
Audit Your Code
❏ Search for loops with database queries inside
❏ Search for N+1 patterns (accessing related fields in loops)
❏ Check for len() used on search results
❏ Check for missing super() calls
❏ Identify frequently-searched fields without indexes
Fix High-Impact Issues
❏ Convert loops to batch operations
❏ Replace len(search()) with search_count()
❏ Use mapped() for field extraction
❏ Add indexes to search fields
❏ Use search_read() for reports
Long-Term
❏ Profile your code regularly
❏ Monitor slow queries in log
❏ Refactor hotspots (pages that load slowly)
❏ Document ORM best practices for your team
Frequently Asked Questions
What is the N+1 query problem and how do I fix it?
N+1 problem: 1 query to get N records + 1 query per record when accessing related fields = N+1 total queries. Example: 500 orders, accessing order.partner_id.email in loop = 501 queries, 30 seconds. Fix: Let Odoo prefetch automatically (first access fetches all), use mapped('partner_id.email') for extraction, or search_read() for data-only. Result: 2 queries, 0.3 seconds (100x faster).
When should I use search_count() instead of len()?
Always use search_count() when you only need the count. len(search()) loads ALL records into memory just to count them (10k records = 5 seconds, memory bloat). search_count() runs simple SELECT COUNT(*) query at database level (0.1 seconds, no memory load). Only use len() if you need the actual recordset for other operations.
Why must I always call super() when overriding methods?
Forgetting super() breaks parent method execution. Example: Override action_confirm() without super() = order workflow doesn't run, inventory not updated, emails not sent. Everything breaks silently. Always: Call result = super().method_name() first, then add custom logic, return result. This ensures parent functionality runs before your customizations.
When should I add index=True to fields?
Add index=True to fields used in: (1) Search/filter operations frequently, (2) Domain filters, (3) Foreign keys (Many2one), (4) Status/state fields, (5) Sort operations. Without index: database scans all 100k records (seconds). With index: instant lookup. Common candidates: customer_id, state, order_date, order_number, sku. Don't over-index (slows writes)—only fields you actually search/filter on.
Free ORM Performance Audit
Stop writing slow code. We'll profile your current code, identify N+1 problems, find missing indexes, refactor hotspots, and show you real performance gains. Most D2C brands have dozens of slow queries they don't know about. Fixing them saves $30,000-$80,000 in lost productivity.
