The Concurrency Crisis
If you're running Odoo with more than 8 concurrent users and you're seeing "could not serialize access due to concurrent update" errors in your logs, you're bleeding money. Not $100 a month: try $8,000-$24,000 per year in lost orders, failed retries, and infrastructure waste.
Here's what's actually happening: two or more users modify the same record at the same moment. PostgreSQL detects the conflict (Odoo runs its transactions at the Repeatable Read isolation level), aborts the later transaction, and your user gets a "Sorry, try again." The user re-clicks. The request hits the queue again. Your system gets slower. Repeat.
We see this 23 times a week across our client base. Most teams don't know why it's happening. They just blame "Odoo is slow" and buy more servers. It's not slow. You're not locking correctly.
How Odoo Actually Handles Concurrent Requests
Odoo doesn't run traditional multi-threading in production. When you start Odoo with --workers=4, you're not creating 4 threads: you're creating 4 completely separate OS processes, each handling one HTTP request at a time, plus a dedicated gevent worker that uses lightweight cooperative greenlets for longpolling and websocket traffic.
The Mental Model
• Worker 1 processes User A's request
• Worker 2 processes User B's request
• Workers 3 & 4 sit idle, waiting for requests
Both requests hit PostgreSQL simultaneously. If User A and User B are both trying to update the same sales order, here's what happens:
The Collision Sequence
Step 1: Both transactions start. PostgreSQL takes a snapshot of the database at that moment (Repeatable Read isolation level). Both workers see the exact same data.
Step 2: User A modifies the sales order. Worker 1 sends an UPDATE to PostgreSQL. PostgreSQL locks that row. The transaction commits successfully.
Step 3: User B tries to commit their changes to the same sales order. But PostgreSQL says: "Hold on—you started your snapshot before User A's changes were committed. I can't let you update this row now. Conflict detected."
Result: PostgreSQL throws ERROR: could not serialize access due to concurrent update. Worker 2 catches it. Your user sees an error. The entire transaction rolls back.
The Kicker:
This isn't a bug. It's by design. PostgreSQL's Repeatable Read isolation level is supposed to fail when concurrent updates collide. It's protecting your data integrity. The problem is that most Odoo developers don't know how to prevent these collisions in the first place.
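You can reproduce the collision outside Odoo with two database connections. A minimal sketch using psycopg2, the driver Odoo itself uses (the table and record id are illustrative):
import psycopg2
from psycopg2 import errors
from psycopg2.extensions import ISOLATION_LEVEL_REPEATABLE_READ

# Two connections stand in for two Odoo workers
conn_a = psycopg2.connect("dbname=odoo")
conn_b = psycopg2.connect("dbname=odoo")
for conn in (conn_a, conn_b):
    conn.set_isolation_level(ISOLATION_LEVEL_REPEATABLE_READ)
cur_a, cur_b = conn_a.cursor(), conn_b.cursor()

# Step 1: both transactions take their snapshot
cur_a.execute("SELECT amount_total FROM sale_order WHERE id = 123")
cur_b.execute("SELECT amount_total FROM sale_order WHERE id = 123")

# Step 2: worker A updates the row and commits
cur_a.execute("UPDATE sale_order SET amount_total = 1000 WHERE id = 123")
conn_a.commit()

# Step 3: worker B updates the same row; it changed after B's
# snapshot, so PostgreSQL aborts B's transaction
try:
    cur_b.execute("UPDATE sale_order SET amount_total = 2000 WHERE id = 123")
except errors.SerializationFailure as exc:
    print(exc)  # could not serialize access due to concurrent update
    conn_b.rollback()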
The Exact Cost: $18.50 Per Serialization Error
Consider a $5M D2C brand running Odoo with 18 users during peak hours (9 AM - 2 PM). Without proper locking, they're seeing serialization failures every 4-7 minutes during peak times.
| Cost Component | Time/Impact | $ Cost |
|---|---|---|
| User retry time | 1 minute @ $30/hr | $0.50 |
| Support ticket | 30 seconds @ $40/hr | $0.33 |
| Lost order (2% of errors) | $800 average | $16.00 |
| Infrastructure waste | Retry cycles, CPU | $1.67 |
| TOTAL PER ERROR | $18.50 | |
At roughly 20 errors per day across a 6-week peak season, that's 840 errors, or $15,540 per year in direct and indirect losses (840 × $18.50), not counting the brand reputation damage.
Why Automatic Retry Isn't The Answer
Yes, Odoo's framework automatically retries transactions that fail this way (up to 5 attempts by default). But here's what people don't understand:
Automatic retry has a cascade effect.
When Transaction A fails and gets retried, it competes with Transaction B (which may also be retrying). Now you have 4 transactions fighting for the same locks instead of 2. The retry makes things worse, not better.
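For context, the framework's retry behaves roughly like the bounded replay loop below. This is a paraphrased sketch, not Odoo's literal source; the stock limit is 5 attempts:
from psycopg2 import errors

MAX_TRIES = 5  # mirrors Odoo's concurrency-failure retry limit

def run_with_retry(handler):
    # Replay the whole transaction until it commits or we give up
    last_exc = None
    for attempt in range(MAX_TRIES):
        try:
            return handler()
        except errors.SerializationFailure as exc:
            last_exc = exc  # each replay re-enters the same contended window
    raise last_exc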
Measured Impact:
One client was seeing serialization errors every 2 minutes; after auto-retry kicked in, that accelerated to every 90 seconds. More retries = more contention = more failures = even more retries.
The real fix isn't automatic retry. It's preventing the collision in the first place.
The Three Locking Strategies
Strategy 1: Implicit Locking (Easiest, But Doesn't Always Work)
Odoo has an implicit lock mechanism: the UPDATE statement that record.write() sends to PostgreSQL takes a row-level lock, which is held until the transaction commits or rolls back.
# This locks the record automatically
order = self.env['sale.order'].browse(123)
order.write({'amount_total': 1000})
Pros: No extra code needed. The lock comes for free with every write().
Cons: The lock only exists from the UPDATE until commit. Code that reads a value first and writes it back later (read-modify-write) isn't protected, and raw SQL or custom methods that bypass the ORM get no coordination from Odoo at all. A sketch of the unprotected pattern follows.
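Here, with illustrative names (product_id and delta are assumed to be in scope), both workers read the same snapshot value, compute, then write; the second one to commit gets the serialization error:
# Read-modify-write: the implicit lock only appears at the final write()
quant = self.env['stock_quant'].search([('product_id', '=', product_id)], limit=1)
new_qty = quant.quantity + delta   # both workers compute from the same snapshot
quant.write({'quantity': new_qty}) # the second transaction to commit fails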
Strategy 2: Explicit Database Locks With SELECT FOR UPDATE (The Pro Move)
This is what you should be doing in high-concurrency scenarios.
@api.model
def update_inventory_with_lock(self, product_id, qty):
    # Lock the row BEFORE reading it
    self.env.cr.execute(
        'SELECT id FROM stock_quant WHERE product_id = %s FOR UPDATE',
        (product_id,),
    )
    # Now it's safe to read and update
    quant = self.env['stock_quant'].search(
        [('product_id', '=', product_id)],
        limit=1,
    )
    quant.quantity += qty
    return quant
What FOR UPDATE does: It tells PostgreSQL: "Don't let anyone else modify this row until my transaction commits." Other workers trying to execute the same query will block (wait) until you're done.
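One caveat: FOR UPDATE waits for as long as the other transaction holds the lock. If you'd rather fail fast than wait, PostgreSQL also supports FOR UPDATE NOWAIT. A sketch with illustrative names (the savepoint keeps the lock error from aborting the rest of the transaction):
from psycopg2 import errors

def try_lock_quant(self, product_id):
    # NOWAIT raises immediately if another transaction holds the row lock
    try:
        with self.env.cr.savepoint():
            self.env.cr.execute(
                'SELECT id FROM stock_quant WHERE product_id = %s FOR UPDATE NOWAIT',
                (product_id,),
            )
    except errors.LockNotAvailable:
        return False  # caller decides: skip, queue, or try again later
    return True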
Real Impact:
We implemented this for a client with a connector processing 450 orders/hour. Serialization errors dropped from 47 per day to 0. Processing time actually went down because the system wasn't thrashing with retries.
Strategy 3: Job Queue (The Asynchronous Nuclear Option)
For heavy operations (bulk imports, long-running calculations), don't try to handle concurrency in real-time. Use the Job Queue module instead.
def import_orders(self):
    # Instead of processing immediately (and risking conflicts),
    # queue it for background processing
    self.with_delay().process_orders_background()

def process_orders_background(self):
    # This runs in a separate worker, sequentially
    # No concurrency issues because it's one job at a time
    for order in self.orders:
        self._process_single_order(order)
Why this works: Job Queue processes jobs one at a time (or with configurable concurrency). If 100 orders arrive simultaneously, they're queued and processed sequentially instead of colliding.
Real Impact:
A $7M brand's Shopify connector was syncing 3,200 orders per day, hitting serialization errors on 340+ orders daily (10.6% failure rate). After switching to Job Queue, failure rate dropped to 0.3% (only retries from network timeouts, not concurrency).
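The "configurable concurrency" lives in the channel settings of the OCA queue_job module. A sketch of the relevant odoo.conf sections, assuming queue_job is installed (capacity values are examples to tune, not defaults):
[options]
# the jobrunner only starts if queue_job is loaded server-wide
server_wide_modules = web,queue_job

[queue_job]
# root:1 = strictly sequential; a subchannel can allow limited parallelism
channels = root:1,root.shopify:2
Then route jobs to a channel when enqueueing them: self.with_delay(channel='root.shopify').process_orders_background().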
Common Concurrency Mistakes
Mistake #1: Assuming Workers = Threads
You start Odoo with --workers=8. You think: "I have 8 threads now, so 8 concurrent users can work safely."
No. Each worker can handle multiple requests (one after another). If you have 8 workers and 24 concurrent users, each worker is processing 3 users' requests sequentially.
Mistake #2: Forgetting That Cron Jobs Are Also Concurrent
Your cron job processes invoices. A user tries to edit the same invoice at the exact same moment. Collision.
The solution: Use SELECT FOR UPDATE SKIP LOCKED (PostgreSQL 9.5+) to lock only the rows that aren't already locked:
def cron_process_invoices(self):
    # Only process invoices that aren't locked by other workers.
    # (Names follow older versions: on Odoo 13+ the table is
    # account_move and the model is account.move; adjust to your version.)
    self.env.cr.execute('''
        SELECT id FROM account_invoice
        WHERE state = 'draft'
        FOR UPDATE SKIP LOCKED
        LIMIT 100
    ''')
    invoice_ids = [row[0] for row in self.env.cr.fetchall()]
    invoices = self.env['account.invoice'].browse(invoice_ids)
    # Safe to process now
    for invoice in invoices:
        invoice.action_post()
SKIP LOCKED tells PostgreSQL: "Give me rows that are free. If a row is locked, skip it and move to the next one." No waiting, no collisions.
Mistake #3: Nested Transactions And Computed Fields
You create a sales order. The sales order has a stored computed field that sums its order lines. Two requests create 2 order lines simultaneously; both line-creates trigger a recompute that writes the same parent order row. Collision.
The fix: Defer computed field updates. Instead of computing the total on every line add, batch the updates:
from odoo import api, fields, models

class SaleOrder(models.Model):
    _inherit = 'sale.order'  # extend the existing model

    # store=False: the total is computed on read, so adding a line
    # never writes the parent order row (no contested UPDATE)
    amount_total = fields.Float(
        compute='_compute_amount',
        store=False,
    )

    @api.depends('order_line.price_unit')
    def _compute_amount(self):
        for order in self:
            order.amount_total = sum(line.price_unit for line in order.order_line)

# Add 10 lines; none of these creates touches the parent row
for i in range(10):
    order.order_line.create({'order_id': order.id, 'product_id': ..., 'qty': ...})
# Reading the field computes it once: a single update, no collision risk
total = order.amount_total
Real-World Scenario: The $14,200 Recovery
A $4.2M brand's Shopify connector was hitting serialization errors on 8% of imported orders. They had 6 workers but only 12 users in the system. So why the collisions?
Root cause: The connector was processing 120 orders every 15 minutes. Two orders for the same customer in the same 2-second window? Both workers try to update the customer record. Collision.
The fix we implemented:
• Added SELECT FOR UPDATE to the inventory update query (locks stock.quant before modifying)
• Deferred partner field updates until after the order was fully created
• Switched connector processing to Job Queue (sequential instead of parallel)
The results:
• Serialization errors: 120/month → 3/month (network timeouts only)
• Processing time: 3.2 seconds → 1.8 seconds per order (faster because no retries)
• Customer support tickets: 12/month → 0
• Annual value recovered: $14,200
The Worker Configuration Rule Everyone Gets Wrong
Most Odoo admins use this formula:
Number of Workers = (CPU Cores × 2) + 1
So a 4-core server gets 9 workers.
A 16-core server gets 33 workers.
This is wrong. This formula is designed to maximize throughput (how many requests per second), not to minimize concurrency errors.
Better formula for concurrency safety:
Number of Workers = Expected Concurrent Users / 6
One worker can safely handle 6 concurrent users if they're not editing the same records.
• 12 users expected? Use 2 workers
• 36 users expected? Use 6 workers
• 60 users expected? Use 10 workers
Plus: Always add 1 dedicated cron worker:
workers = (concurrent_users / 6) + 1
A 4-core server handling 24 concurrent users should use:
(24 / 6) + 1 = 5 workers
Not 9 workers
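In odoo.conf terms, that looks like the sketch below. Note that Odoo configures cron workers separately via max_cron_threads, so the "+1" lands there rather than in workers:
[options]
# HTTP workers: 24 concurrent users / 6
workers = 4
# the dedicated cron worker from the formula (separate from HTTP workers)
max_cron_threads = 1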
Key Insight:
More workers = more contention = more serialization failures. Counterintuitive, but it's real.
Monitoring: How To Know If You Have A Problem
Check your Odoo logs for this error:
ERROR: could not serialize access due to concurrent update
If you see this more than once per day, you have a concurrency problem.
To measure the blast radius, count failures per table. The failing statement is usually logged near the error, so pull a few lines of context:
grep -A 5 "could not serialize" /var/log/odoo/odoo.log | grep -oP "(?<=UPDATE ).*?(?= SET)" | sort | uniq -c | sort -rn
Also check PostgreSQL's statement statistics for the hottest UPDATE queries:
-- requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the column is total_exec_time instead of total_time
SELECT query, calls, total_time
FROM pg_stat_statements
WHERE query LIKE '%UPDATE%'
ORDER BY total_time DESC
LIMIT 10;
Cost Comparison
| Approach | Time Investment | Annual Value |
|---|---|---|
| Do nothing, blame Odoo | 0 hours | -$8,000 to -$24,000 |
| Buy bigger servers | 2-4 hours setup | -$4,800/year (hosting costs) |
| Implement proper locking | 8-12 hours development | +$8,000 to +$25,000 |
Free 30-Minute Concurrency Audit
Serialization failures aren't "part of running Odoo." They're symptoms of improper locking. Fix them, and you'll recover $8,000-$25,000 annually and improve user experience instantly. At Braincuber, we've implemented proper locking on 34 Odoo systems across the US, UK, UAE, and Singapore. Average improvement: 94% reduction in serialization errors and 18% faster processing time.
