Operations · February 20, 2026 · 8 min read

Odoo Monitoring: What to Track and When to Panic

Most Odoo outages are predictable. Disk fills up, workers max out, a slow query spirals. Here's what to monitor so you catch problems before your users do.

A production Odoo instance has three layers that can fail independently: the server (CPU, memory, disk), PostgreSQL (connections, queries, cache), and the Odoo application itself (workers, response times, errors). Monitor all three.

Server Metrics

CPU Usage (warning: > 70% sustained, critical: > 90% for 5+ min)

Odoo workers are CPU-bound. High CPU means workers are saturated and requests queue up.

Memory Usage (warning: > 80%, critical: > 95%)

Odoo workers consume 150-300MB each. OOM kills cause instant downtime.
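To confirm whether the OOM killer has actually fired, search the kernel log. The `journalctl` invocation assumes a systemd host; the `dmesg` fallback works elsewhere:

```shell
# Recent OOM-killer activity from the kernel log (systemd hosts)
journalctl -k --since "24 hours ago" | grep -iE "out of memory|oom-killer"

# Without systemd, check the kernel ring buffer instead
dmesg | grep -iE "out of memory|oom-killer"
```

A hit here names the killed process, which tells you whether it was an Odoo worker or something else on the box.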

Disk Usage (warning: > 75%, critical: > 90%)

Filestore grows with attachments. Full disk crashes PostgreSQL and Odoo simultaneously.
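When disk climbs, the filestore is the usual culprit. A quick way to see where the space went — the data_dir path and database name below are common defaults, so adjust for your install:

```shell
# Size of the attachment filestore (default data_dir location is an assumption)
du -sh /var/lib/odoo/.local/share/Odoo/filestore/odoo_production

# Database size for comparison
psql -U odoo -d odoo_production -tA -c \
    "SELECT pg_size_pretty(pg_database_size('odoo_production'));"
```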

Load Average (warning: > 2x CPU cores, critical: > 4x CPU cores)

Indicates how many processes are waiting. High load with low CPU often means I/O bottleneck.
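A sketch of how to put load average in context against the core count, using `nproc` and `/proc/loadavg`:

```shell
CORES=$(nproc)
LOAD_1M=$(awk '{print $1}' /proc/loadavg)

# Warn when the 1-minute load exceeds 2x the core count
if awk -v l="$LOAD_1M" -v c="$CORES" 'BEGIN { exit !(l > 2 * c) }'; then
    echo "WARNING: load $LOAD_1M on $CORES cores"
fi
```

The awk comparison handles the fractional load value, which a plain `[ -gt ]` test would reject.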

PostgreSQL Metrics

Active Connections (warning: > 80% of max, critical: > 95% of max)

Each Odoo worker holds a DB connection. Running out means new requests fail.
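Breaking connections down by state shows whether you are near the limit because of real traffic or idle sessions (the user and database names here are examples):

```shell
# Connection count per state for the Odoo database
psql -U odoo -d odoo_production -c "
    SELECT state, count(*)
    FROM pg_stat_activity
    WHERE datname = 'odoo_production'
    GROUP BY state
    ORDER BY count(*) DESC;"
```

A pile of `idle in transaction` rows points at stuck workers rather than load.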

Slow Queries (warning: > 1s average, critical: > 5s average)

Slow queries block workers. One bad query can cascade into site-wide slowness.
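To see what is slow right now, as opposed to historically, `pg_stat_activity` shows in-flight queries; the 5-second cutoff below is an example threshold:

```shell
# Queries that have been running for more than 5 seconds
psql -U odoo -d odoo_production -c "
    SELECT pid, now() - query_start AS runtime, substr(query, 1, 60) AS query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '5 seconds'
    ORDER BY runtime DESC;"
```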

Cache Hit Ratio (warning: < 95%, critical: < 90%)

Low cache ratio means PostgreSQL reads from disk instead of memory. Increase shared_buffers.
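The ratio comes straight out of `pg_stat_database` — a minimal sketch, with the database name as an example:

```shell
# Buffer cache hit ratio as a percentage
psql -U odoo -d odoo_production -tA -c "
    SELECT round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2)
    FROM pg_stat_database
    WHERE datname = 'odoo_production';"
```

The `nullif` guards against division by zero on a freshly started server with no reads yet.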

Dead Tuples (warning: > 10% of table, critical: > 25% of table)

Odoo updates and deletes create dead tuples. Autovacuum should clean them, but can fall behind.
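`pg_stat_user_tables` exposes the dead tuple counts per table; this query (database name is an example) surfaces the worst offenders:

```shell
# Tables with the most dead tuples and their dead percentage
psql -U odoo -d odoo_production -c "
    SELECT relname, n_live_tup, n_dead_tup,
           round(n_dead_tup * 100.0 / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;"
```

If a hot table like `ir_attachment` or `mail_message` sits above the thresholds, a manual `VACUUM ANALYZE` on it is a reasonable first response.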

Odoo Application Metrics

Response Time p95 (warning: > 2s, critical: > 5s)

Users notice anything over 2 seconds. Measure at the 95th percentile, not average.
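If Odoo sits behind nginx, you can approximate p95 from the access log. This assumes `$request_time` is logged as the last field, which depends on your `log_format` — check before trusting the numbers:

```shell
# Approximate p95 request time from an nginx access log
awk '{ print $NF }' /var/log/nginx/odoo-access.log \
    | sort -n \
    | awk '{ a[NR] = $1 } END { if (NR) print "p95:", a[int(NR * 0.95)] }'
```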

Worker Utilization (warning: > 70%, critical: > 90%)

When all workers are busy, new requests wait. Add workers or optimize slow endpoints.

HTTP 500 Rate (warning: > 0.1%, critical: > 1%)

500 errors mean unhandled exceptions. Each one is a broken user experience.

Cron Queue Depth (warning: > 50 pending, critical: > 200 pending)

Backed-up cron jobs mean emails aren't sending and scheduled actions aren't running.
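A rough way to gauge the backlog is counting active crons whose `nextcall` is already in the past. This ignores per-job intervals and priorities, so treat it as a proxy rather than an exact queue depth:

```shell
# Active cron jobs that are overdue (database name is an example)
psql -U odoo -d odoo_production -tA -c "
    SELECT count(*)
    FROM ir_cron
    WHERE active = true
      AND nextcall <= now();"
```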

Quick Health Check Script

A basic shell script that checks the essentials and exits non-zero if anything is wrong. Run it from cron every 5 minutes and alert on failure.

#!/bin/bash
# odoo-health-check.sh -- run from cron every 5 minutes, alert on non-zero exit
ODOO_URL="http://localhost:8069/web/health"
DB_NAME="odoo_production"
DISK_THRESHOLD=85
MEM_THRESHOLD=90

# Check Odoo HTTP response
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 "$ODOO_URL")
if [ "$HTTP_CODE" != "200" ]; then
    echo "CRITICAL: Odoo not responding (HTTP $HTTP_CODE)"
    exit 2
fi

# Check disk usage
DISK_PCT=$(df / | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$DISK_PCT" -gt "$DISK_THRESHOLD" ]; then
    echo "WARNING: Disk usage at ${DISK_PCT}%"
    exit 1
fi

# Check memory usage
MEM_PCT=$(free | awk '/^Mem:/ {printf "%.0f", $3/$2*100}')
if [ "$MEM_PCT" -gt "$MEM_THRESHOLD" ]; then
    echo "WARNING: Memory usage at ${MEM_PCT}%"
    exit 1
fi

# Check PostgreSQL connections (-tA gives unpadded, tuples-only output)
PG_CONNS=$(psql -U odoo -d "$DB_NAME" -tA -c \
    "SELECT count(*) FROM pg_stat_activity;")
PG_MAX=$(psql -U odoo -d "$DB_NAME" -tA -c \
    "SHOW max_connections;")
PG_PCT=$((PG_CONNS * 100 / PG_MAX))
if [ "$PG_PCT" -gt 85 ]; then
    echo "WARNING: PostgreSQL connections at ${PG_PCT}%"
    exit 1
fi

echo "OK: All checks passed"
exit 0
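A minimal cron entry to drive the script — the install path and alert address are hypothetical, and `mail` assumes a working local MTA:

```shell
# /etc/cron.d/odoo-health (paths and address are examples)
*/5 * * * * root /usr/local/bin/odoo-health-check.sh || echo "Odoo health check failed on $(hostname)" | mail -s "Odoo alert" ops@example.com
```

The `||` means you only get mail on failure, which keeps the noise down.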

Monitoring PostgreSQL Slow Queries

Enable slow query logging in postgresql.conf to catch problematic queries:

# Log queries slower than 1 second
log_min_duration_statement = 1000
# Log lock waits longer than deadlock_timeout
log_lock_waits = on
deadlock_timeout = 5s
# Track query statistics (requires a server restart, then
# CREATE EXTENSION pg_stat_statements in the database)
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all

Then query the top offenders:

-- Top 10 slowest queries by total execution time
SELECT
    calls,
    round(total_exec_time::numeric, 2) AS total_ms,
    round(mean_exec_time::numeric, 2) AS avg_ms,
    substr(query, 1, 80) AS query_preview
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

Odoo Log Patterns to Watch

Odoo's log file tells you more than most people realize. Watch for these patterns:

"bus.Bus unavailable": the longpolling/websocket worker crashed. Live chat and notifications stop working.

"OperationalError: FATAL: too many connections": PostgreSQL ran out of connections. Workers can't reach the database.

"WARNING: ... took ... seconds": Odoo's built-in slow-operation logging. Usually indicates a heavy RPC call or report generation.

"MemoryError": a worker ran out of memory, usually from loading too much data at once (large exports, unbounded searches).

"OSError: [Errno 28] No space left on device": the disk is full. Odoo and PostgreSQL will both crash shortly.
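All of these patterns can be scanned for in one pass; the log path below varies by install, so adjust it for your setup:

```shell
# Scan the Odoo log for the known-bad patterns above
LOG=/var/log/odoo/odoo-server.log   # adjust for your install
grep -nE "bus\.Bus unavailable|too many connections|MemoryError|No space left on device" \
    "$LOG" | tail -n 20
```

Run it from the same cron that drives your health check, or wire the patterns into whatever log shipper you already use.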

Alerting Strategy

Not every metric needs an alert. Too many alerts lead to alert fatigue, and then you ignore the one that matters.

Page immediately

Odoo HTTP unresponsive, disk > 95%, PostgreSQL down, OOM killer active

Slack/email warning

Disk > 80%, CPU > 80% for 10+ min, slow query p95 > 3s, error rate > 0.5%

Daily dashboard review

Cache hit ratio, connection trends, filestore growth rate, backup success

Weekly audit

Dead tuple ratio, index usage, unused indexes, query plan changes

OEC.sh: Built-In Monitoring

Setting up monitoring from scratch takes hours of Prometheus/Grafana configuration, alert routing, and ongoing maintenance. OEC.sh includes monitoring out of the box:

  • Real-time CPU, memory, and disk dashboards per instance
  • PostgreSQL connection and slow query tracking
  • Odoo worker utilization and response time graphs
  • Configurable alerts via email and webhook
  • 30-day metric retention for trend analysis
  • One-click log viewer with search and filtering
