Quick Answer
Default to the ORM. Use the ORM for writes (create/update/delete), validation, security, computed fields, related records, and small datasets (under 10k records); it handles validation, access control, audit trails, and triggers. Exception: use SQL for large aggregations. Example: a 2M-order revenue report takes the ORM 12s/500MB because it loads every record, versus 0.2s/0.1MB for SQL because the database aggregates and returns a single number. That is 60x faster. Critical: always parameterize SQL and NEVER concatenate user input into queries. Use cr.execute("SELECT * FROM table WHERE id = %s", (value,)) to prevent SQL injection. The wrong choice costs $50k-$150k in security fixes or infrastructure.
The Performance Problem
Your D2C brand needs to generate a daily revenue report. 2 million orders in the database.
Developer A (Uses ORM)
# Uses ORM to aggregate
orders = self.env['sale.order'].search([
    ('state', '=', 'done'),
    ('order_date', '>=', cutoff_date),
])
revenue = sum(o.amount_total for o in orders) # Loads 2M records!
Time: 12+ seconds. Memory: 500+ MB.
Developer B (Uses SQL)
# Uses raw SQL
cr = self.env.cr
cr.execute("""
    SELECT SUM(amount_total) AS revenue
    FROM sale_order
    WHERE state = 'done'
      AND order_date >= %s
""", (cutoff_date,))
revenue = cr.fetchone()[0] # Gets single number
Time: 0.2 seconds. Memory: 0.1 MB. 60x faster!
We've implemented 150+ Odoo systems. The ones where developers know WHEN to use SQL vs ORM? They have blazing-fast systems with zero security holes. The ones who use raw SQL everywhere? They've been hacked. The ones who only use ORM? They have slow reports that timeout. That's $50,000-$150,000 in either security fixes or infrastructure upgrades.
The Decision Framework
| Scenario | Use This |
|---|---|
| Create/Update/Delete data | ORM |
| Validation/Constraints matter | ORM |
| Access control/Security needed | ORM |
| Computed fields required | ORM |
| Small datasets (under 10k) | ORM |
| Large aggregations (100k+ records) | SQL |
| Read-only analytics/reports | SQL |
| Data migration (one-time bulk) | SQL |
| Complex multi-table joins | SQL (if ORM too slow) |
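The table above collapses to a tiny heuristic. A sketch, purely illustrative (the function name and the 100k threshold come from this article, not from any Odoo API):

```python
def choose_data_access(operation, record_count, read_only=False):
    """Heuristic version of the decision table; thresholds are rules of thumb."""
    if operation in ("create", "update", "delete"):
        return "ORM"  # writes always go through the ORM
    if read_only and record_count >= 100_000:
        return "SQL"  # large read-only aggregations belong in the database
    return "ORM"      # default: validation, security, computed fields
```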
When to Use ORM (The Default)
Use ORM for: Standard business operations, data modifications, computed fields, related records, user-facing security.
Example: Creating a Customer with Validation
# RIGHT - Use ORM
customer = self.env['res.partner'].create({
    'name': 'Acme Inc',
    'email': 'contact@acme.com',
    'country_id': self.env.ref('base.us').id,
})
# ORM handles:
# ✓ Field validation
# ✓ Onchange triggers
# ✓ Constraints (@api.constrains)
# ✓ Computed fields (profit margin auto-calculates)
# ✓ Access control (users can only see their customers)
# ✓ Audit trail (who created it, when)
Performance: Small Datasets (Under 10,000 Records)
# ORM: 0.5 seconds
customers = self.env['res.partner'].search([
    ('country_id.code', '=', 'US'),
    ('customer_rank', '>', 0),
])
for customer in customers:
    process_customer(customer)
# SQL: 0.4 seconds (slightly faster, but negligible)
Difference: Negligible. Use ORM for simplicity and safety.
When to Use Raw SQL (The Exception)
Use raw SQL ONLY for: Large aggregations (100,000+ records), read-only analytics/reports, data migration scripts, complex multi-table calculations where ORM is too slow.
Example: Daily Revenue Report (2 Million Orders)
# WRONG - ORM approach
orders = self.env['sale.order'].search([
    ('state', '=', 'done'),
])
# This loads 2,000,000 order objects into memory! ❌
revenue = sum(o.amount_total for o in orders)
Time: 12+ seconds, Memory: 500+ MB
# RIGHT - SQL approach
cr = self.env.cr
cr.execute("""
    SELECT
        SUM(amount_total) AS revenue,
        COUNT(*) AS order_count,
        DATE(order_date) AS order_date
    FROM sale_order
    WHERE state = 'done'
    GROUP BY DATE(order_date)
    ORDER BY order_date DESC
""")
for revenue, count, date in cr.fetchall():
    print(f"{date}: {revenue} ({count} orders)")
Time: 0.2 seconds, Memory: 0.1 MB. The database does the heavy lifting. You only get the result.
Real D2C Benchmark (2 Million Orders)
| Approach | Time | Memory |
|---|---|---|
| ORM (sum in Python) | 12 seconds | 500 MB |
| SQL (database aggregation) | 0.2 seconds | 0.1 MB |
SQL is 60x faster.
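The pattern behind the benchmark is easy to reproduce outside Odoo. A minimal sketch using Python's built-in sqlite3 (Odoo's psycopg2 cursor behaves the same way): pull every row and sum in Python, versus let the database aggregate and fetch one number:

```python
import sqlite3

# Stand-in for the benchmark: aggregate in Python vs. in the database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sale_order (amount_total REAL)")
cur.executemany("INSERT INTO sale_order VALUES (?)",
                [(i * 1.0,) for i in range(10_000)])

# "ORM-style": fetch every row into Python, then sum
rows = cur.execute("SELECT amount_total FROM sale_order").fetchall()
python_sum = sum(r[0] for r in rows)

# "SQL-style": the database aggregates, one number comes back
db_sum = cur.execute("SELECT SUM(amount_total) FROM sale_order").fetchone()[0]

assert python_sum == db_sum  # same answer, very different memory/time profile
```

Both approaches return the same total; only the second one avoids materializing every row in application memory, which is where the 60x gap comes from at 2M rows.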
How to Write Safe Raw SQL (Prevent SQL Injection)
The critical rule: ALWAYS use parameterized queries.
WRONG (Vulnerable to SQL Injection)
# ❌ DANGEROUS - SQL injection vulnerability
search_keyword = "Product'; DROP TABLE sale_order; --"
cr = self.env.cr
cr.execute(
    "SELECT id, name FROM product_product WHERE name LIKE '" + search_keyword + "'"
)
# An attacker could delete your entire table!
RIGHT (Parameterized Query)
# ✅ SAFE - SQL injection proof
search_keyword = "Product"
cr = self.env.cr
cr.execute(
    "SELECT id, name FROM product_product WHERE name ILIKE %s",
    (search_keyword,)  # Separate from the query
)
# The database treats search_keyword as data, never code
How Parameterized Queries Work
# Bad way (string concatenation)
query = "SELECT * FROM sale_order WHERE id = " + str(order_id)  # Vulnerable!
# Good way (parameterized)
query = "SELECT * FROM sale_order WHERE id = %s"
cr.execute(query, (order_id,))  # Safe!
# The %s is a placeholder; the database driver safely binds the value.
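You can watch this safety property in action with Python's built-in sqlite3 (its placeholder is ? where Odoo's psycopg2 uses %s; the binding behavior is the same). The "malicious" input is compared as a plain string, and the table survives:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE product (name TEXT)")
cur.execute("INSERT INTO product VALUES ('Widget')")

malicious = "x'; DROP TABLE product; --"
# The placeholder binds the value as data, never as SQL:
cur.execute("SELECT name FROM product WHERE name = ?", (malicious,))
assert cur.fetchall() == []  # no match, and no injection

# The table is still intact:
cur.execute("SELECT COUNT(*) FROM product")
assert cur.fetchone()[0] == 1
```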
More Safe SQL Examples
# Multiple parameters
cr.execute(
    "SELECT * FROM sale_order WHERE state = %s AND customer_id = %s",
    ('done', customer_id)
)
# With dates
cutoff_date = fields.Date.today()
cr.execute(
    "SELECT * FROM sale_order WHERE order_date >= %s",
    (cutoff_date,)
)
# With LIKE (text search)
search = "%organic%"  # Add wildcards in the value, not the query
cr.execute(
    "SELECT * FROM product_product WHERE name ILIKE %s",
    (search,)
)
Real-World Scenario: Sales Report by Region
Requirement: Generate daily sales by region, excluding certain product categories.
ORM Approach (Slow for Large Datasets)
# ❌ Slow - loads thousands of records
orders = self.env['sale.order'].search([
    ('state', '=', 'done'),
    ('order_date', '>=', start_date),
    ('order_date', '<=', end_date),
])
# Manually aggregate
sales_by_region = {}
excluded_categories = ['discontinued', 'archived']
for order in orders:
    region = order.partner_id.state_id.name  # Extra query per order!
    if region not in sales_by_region:
        sales_by_region[region] = 0
    # Only count lines outside the excluded categories
    valid_lines = [l for l in order.order_line
                   if l.product_id.categ_id.name not in excluded_categories]
    total = sum(l.price_total for l in valid_lines)
    sales_by_region[region] += total
Performance: 8+ seconds for 10,000 orders
SQL Approach (Fast)
# ✅ Fast - single database query
cr = self.env.cr
cr.execute("""
    SELECT
        rp.state_id,
        rs.name AS region,
        SUM(sol.price_total) AS total_sales,
        COUNT(DISTINCT so.id) AS order_count
    FROM sale_order so
    JOIN res_partner rp ON so.partner_id = rp.id
    JOIN res_country_state rs ON rp.state_id = rs.id
    JOIN sale_order_line sol ON so.id = sol.order_id
    JOIN product_product pp ON sol.product_id = pp.id
    JOIN product_category pc ON pp.categ_id = pc.id
    WHERE so.state = %s
      AND so.order_date BETWEEN %s AND %s
      AND pc.name NOT IN ('discontinued', 'archived')
    GROUP BY rp.state_id, rs.name
    ORDER BY total_sales DESC
""", ('done', start_date, end_date))
results = cr.fetchall()
for region_id, region_name, total_sales, order_count in results:
    print(f"{region_name}: {total_sales} ({order_count} orders)")
Performance: 0.3 seconds for 10,000 orders. 25x faster.
Real-World Scenario: Data Migration
Migrate customer data from old system to Odoo.
ORM Approach (Safe but Slow)
# Uses ORM - safe but slow for 100K records
for old_customer in old_database.customers:
    self.env['res.partner'].create({
        'name': old_customer['name'],
        'email': old_customer['email'],
        'phone': old_customer['phone'],
        # ... 20 more fields
    })
# Time: 2+ minutes for 100K records
SQL Approach (Fast, One-Time Operation)
# Direct insert - fastest for bulk data
cr = self.env.cr
records = []
for old_customer in old_database.customers:
    records.append((
        old_customer['name'],
        old_customer['email'],
        old_customer['phone'],
        # ... 20 more fields
    ))
cr.executemany("""
    INSERT INTO res_partner (name, email, phone, ...)
    VALUES (%s, %s, %s, ...)
""", records)
self.env.cr.commit()
# Time: 10 seconds for 100K records
Difference: 12x faster. For one-time migrations, SQL is worth it.
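If the records list itself gets large, batching the inserts keeps memory bounded and lets you commit in stages. A minimal sketch (the chunked helper and the batch size are illustrative, not part of Odoo):

```python
def chunked(rows, size=1000):
    """Yield successive fixed-size batches of rows for executemany()."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

# Usage sketch, assuming `cr` and `records` from the migration above:
# for batch in chunked(records, 5000):
#     cr.executemany("INSERT INTO res_partner (name, email, phone, ...) "
#                    "VALUES (%s, %s, %s, ...)", batch)
```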
Anti-Patterns (Mistakes to Avoid)
Anti-Pattern 1: Using Raw SQL for Simple Operations
# ❌ WRONG - Overkill for simple search
cr = self.env.cr
cr.execute("SELECT id FROM sale_order WHERE state = %s", ('done',))
order_ids = [row[0] for row in cr.fetchall()]
# ✅ RIGHT - Use ORM
orders = self.env['sale.order'].search([('state', '=', 'done')])
order_ids = orders.ids
Why: ORM is simpler, handles security, is fast enough for most data.
Anti-Pattern 2: Forgetting to Parameterize
# ❌ WRONG - SQL injection vulnerability
customer_id = request.params.get('customer_id')
cr.execute(f"SELECT * FROM sale_order WHERE customer_id = {customer_id}")
# ✅ RIGHT - Parameterized
customer_id = request.params.get('customer_id')
cr.execute("SELECT * FROM sale_order WHERE customer_id = %s", (customer_id,))
Why: Parameterized queries are injection-proof.
Anti-Pattern 3: Raw SQL for Writes
# ❌ WRONG - Bypasses validation and triggers
cr = self.env.cr
cr.execute("UPDATE sale_order SET state = %s WHERE id = %s", ('done', order_id))
# ✅ RIGHT - Use ORM
order = self.env['sale.order'].browse(order_id)
order.state = 'done' # Triggers validations, computed fields, etc.
Why: ORM enforces business rules. Raw SQL bypasses them.
Action Items: Optimize Your SQL/ORM Usage
Audit Your Code
❏ Search for cr.execute() calls - are they parameterized?
❏ Search for string concatenation in queries - fix SQL injection risks
❏ Identify slow pages/reports
❏ Check if raw SQL is needed or ORM would work
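For the first two checkboxes, a rough grep-style helper can do a first pass over your codebase. This is illustrative only: the regex catches the common single-line patterns (f-strings, + concatenation, %-formatting inside a cr.execute call) and will miss multi-line or indirect query building:

```python
import re

# Flags cr.execute(...) calls that appear to build SQL dynamically:
# an f-string literal, or a string literal followed by + or % formatting.
RISKY_EXECUTE = re.compile(
    r"""cr\.execute\(\s*(?:f["']|["'][^"']*["']\s*[+%]\s)"""
)

def flag_risky_lines(source):
    """Return 1-based line numbers that look like injection risks."""
    return [no for no, line in enumerate(source.splitlines(), 1)
            if RISKY_EXECUTE.search(line)]
```

Anything it flags should be rewritten with %s placeholders and a parameter tuple; anything it misses still needs a human eye.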
For Slow Reports
❏ Try ORM first with domain filtering
❏ Add indexes to frequently-searched fields
❏ Use search_read() instead of search()
❏ Only if still slow: switch to raw SQL
For Data Operations
❏ Always use ORM for creates/updates/deletes
❏ Always parameterize if you use raw SQL
❏ Profile before and after optimization
❏ Document why you chose SQL over ORM
Frequently Asked Questions
When should I use raw SQL instead of ORM?
Use raw SQL ONLY for: (1) Large aggregations (100k+ records, database aggregation 60x faster than loading all records), (2) Read-only analytics/reports, (3) Data migration scripts (bulk inserts 12x faster), (4) Complex multi-table calculations where ORM is too slow. Always default to ORM for: writes (create/update/delete), validation/constraints, access control/security, computed fields, related records, small datasets (under 10k).
How do I prevent SQL injection in raw SQL queries?
ALWAYS use parameterized queries. NEVER concatenate user input into SQL strings. Wrong: cr.execute("SELECT * FROM table WHERE id = " + user_id) (vulnerable!). Right: cr.execute("SELECT * FROM table WHERE id = %s", (user_id,)) (safe!). Use %s placeholders and pass values as tuple in second argument. Database driver treats values as data, never code. This prevents attackers from injecting malicious SQL.
Why shouldn't I use raw SQL for data writes (create/update/delete)?
Raw SQL bypasses Odoo's business logic layer. Using cr.execute("UPDATE table SET field = value") skips: (1) Field validation, (2) Constraints (@api.constrains), (3) Computed field updates, (4) Onchange triggers, (5) Access control/security, (6) Audit trails. Result: broken data integrity, security holes, missing computed values. Always use ORM for writes (record.write(), record.create()) to enforce business rules.
What's the performance difference between ORM and SQL for large aggregations?
Real benchmark (2M orders, daily revenue): ORM approach: sum(o.amount_total for o in orders) = 12 seconds, 500 MB (loads all 2M records into memory). SQL approach: SELECT SUM(amount_total) FROM sale_order = 0.2 seconds, 0.1 MB (database aggregates, returns single number). 60x faster. For small datasets (under 10k), ORM vs SQL difference is negligible (0.5s vs 0.4s). Use ORM for simplicity unless dealing with 100k+ records.
Free Database Performance Audit
Stop guessing whether to use SQL or ORM. We'll profile your slowest pages/reports, identify N+1 problems, determine if SQL is actually needed, build optimized SQL queries, add necessary indexes, and measure performance improvement. Most D2C brands have 3-5 queries that are 10-100x slower than necessary. Fixing them saves $40,000-$100,000 in infrastructure costs and user frustration.
