Practical Guide

Build an AI Chatbot for Odoo Without Enterprise

Create an AI-powered chatbot for Odoo Community using n8n and OpenAI. Handle customer queries, search products, check orders — no Enterprise license needed. Total infrastructure cost: under $30/month.

15 min read
Updated February 2026
Odoo 16+

Odoo Enterprise includes an AI-powered chatbot in its Live Chat module. It reads your knowledge base, answers customer questions, and escalates to human agents when it's stuck. It works well. It also requires every user on your instance to have an Enterprise license.

We're going to build the same thing — and then extend it beyond what Enterprise offers — using n8n, OpenAI, and Odoo Community's existing APIs. The entire stack is self-hosted. Total infrastructure cost: under $30/month. Per-conversation AI cost: less than a penny.

This isn't a toy demo. By the end of this guide you'll have a chatbot that can answer product questions, check order status, look up account balances, and seamlessly hand off to a human agent — all running on your own servers.

What Enterprise's AI Chatbot Does (and What We're Building)

Enterprise's built-in chatbot lives inside the Live Chat module. When a visitor starts a chat on your website, the AI bot responds using content from Odoo's Knowledge module (articles, FAQ pages, product docs). If the bot can't answer confidently, it transfers the conversation to a human operator.

What we're building does the same thing, with three differences:

1. Data sources are broader

Enterprise's bot is limited to the Knowledge module. Ours can query any Odoo model — products, sales orders, invoices, stock levels, CRM data. A customer can ask “Where's my order?” and get a real answer pulled from stock.picking in real time.

2. Model choice

Enterprise uses whatever AI provider Odoo SA has partnered with. Our stack lets you choose: OpenAI GPT-4o for maximum quality, GPT-4o-mini for cost efficiency, Claude for nuanced conversations, or a local LLM via Ollama for complete data privacy.

3. Workflow logic

n8n sits between the chat input and the AI, which means you can add business logic: route VIP customers to human agents immediately, auto-create support tickets for certain keywords, log conversations to a CRM note, trigger follow-up emails. The chatbot becomes a workflow entry point, not just a Q&A widget.
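As a concrete sketch of that pre-AI business logic, a Code node placed right after the webhook could short-circuit VIP customers to a human before any model call. The VIP domain list here is a made-up placeholder; in practice you'd load it from Odoo or an environment variable:

```javascript
// Pre-AI business logic: known VIP customers skip the bot entirely.
// VIP_DOMAINS is a placeholder list, not real data.
const VIP_DOMAINS = ['bigcustomer.com'];

function needsImmediateHuman(customerEmail) {
  if (!customerEmail) return false;           // anonymous visitors go to the bot
  const domain = customerEmail.split('@')[1] || '';
  return VIP_DOMAINS.includes(domain.toLowerCase());
}
```

In n8n this would feed an IF node that routes matching sessions straight to the human-handoff branch.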

Architecture Overview

Here's the data flow, end to end:

Data Flow (text)
Customer → Website chat widget → Webhook → n8n
                                              │
                                              ├─→ OpenAI: understand intent
                                              │
                                              ├─→ Odoo API: fetch relevant data
                                              │     (products, orders, invoices,
                                              │      knowledge base articles)
                                              │
                                              ├─→ OpenAI: generate response
                                              │     with Odoo data as context
                                              │
                                              └─→ Webhook response → Customer

All components run as Docker containers on the same network. n8n talks to Odoo via XML-RPC (internal network, no public exposure). OpenAI calls go out over HTTPS. The chat widget connects to n8n via a webhook URL.

If you use a local LLM instead of OpenAI, the entire pipeline stays on your server. Nothing leaves your infrastructure.

Prerequisites

Before starting, you need:

  • Odoo Community Edition (v16 or later) running and accessible
  • n8n instance (we'll set this up if you don't have one)
  • OpenAI API key with at least $5 credit — or an Anthropic API key, or Ollama installed for local LLMs
  • Basic familiarity with Docker Compose (our Docker Compose guide covers the fundamentals)
  • 15–30 minutes of uninterrupted setup time

Step 1: Set Up n8n Alongside Odoo

If n8n is already running in your stack, skip to Step 2.

Add the following service to your existing docker-compose.yml:

docker-compose.yml (yaml)
services:
  # ... your existing odoo and db services ...

  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      # Note: the N8N_BASIC_AUTH_* variables were removed in n8n 1.0. Current
      # images use built-in user management (you create an owner account on
      # first launch), so these three lines only apply to pre-1.0 images.
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD:-changethis}
      - N8N_HOST=${N8N_HOST:-n8n.yourdomain.com}
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_HOST:-n8n.yourdomain.com}/
      - GENERIC_TIMEZONE=UTC
    volumes:
      - n8n_data:/home/node/.n8n
    networks:
      - odoo-network

volumes:
  n8n_data:

The key configuration: n8n must be on the same Docker network as Odoo so it can reach Odoo's XML-RPC endpoint at http://odoo:8069 (using the container name, not a public URL). This keeps API traffic internal and fast.

Terminal (bash)
docker compose up -d n8n

Verify n8n is running at http://your-server:5678. Recent n8n versions (1.0+) will prompt you to create an owner account on first launch; on older images, log in with the basic-auth credentials you set.

Don't have a Docker Compose setup yet? Use the OEC.sh Docker Compose Generator to create one with Odoo + PostgreSQL + n8n pre-configured.

Step 2: Create the Chatbot Workflow in n8n

Open n8n and create a new workflow. We'll build it node by node.

Node 1: Webhook (Entry Point)

  • Add a Webhook node
  • Method: POST
  • Path: /odoo-chatbot (this becomes your webhook URL: https://n8n.yourdomain.com/webhook/odoo-chatbot)
  • Response mode: Using 'Respond to Webhook' Node (the reply is sent by the Respond to Webhook node we add as the final step of the chain)
  • Authentication: None (the chat widget will call this publicly) — or add Header Auth if your widget supports it

The webhook receives a JSON payload from your chat widget:

Webhook Payload (json)
{
  "message": "What's the status of my order SO-2024-0142?",
  "session_id": "abc123",
  "customer_email": "jane@example.com"
}
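Before passing this payload onward, it's worth validating it in a Code node so malformed requests fail fast instead of reaching the AI. A minimal sketch; field names match the payload above, and the 2,000-character cap is an arbitrary guard to bound token cost:

```javascript
// Validate and clean the incoming webhook payload.
// Returns { valid: true, ... } or { valid: false, errors: [...] }.
function validatePayload(body) {
  const errors = [];
  if (!body || typeof body.message !== 'string' || !body.message.trim()) {
    errors.push('message is required');
  }
  if (!body || typeof body.session_id !== 'string' || !body.session_id) {
    errors.push('session_id is required');
  }
  // customer_email is optional, but reject junk values early
  if (body && body.customer_email && !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(body.customer_email)) {
    errors.push('customer_email is not a valid address');
  }
  if (errors.length) return { valid: false, errors };
  return {
    valid: true,
    message: body.message.trim().slice(0, 2000), // cap length to bound token cost
    session_id: body.session_id,
    customer_email: body.customer_email || null,
  };
}
```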

Node 2: Conversation Memory (Context Window)

Add a Code node named “Load Conversation History.” This loads the last N messages from the session to give the AI context:

Load Conversation History (javascript)
// Simple in-memory conversation store
// For production, use Redis or a database
const sessionId = $input.first().json.session_id;
const message = $input.first().json.message;

// Get existing conversation from workflow static data
const staticData = $getWorkflowStaticData('global');
if (!staticData.conversations) {
  staticData.conversations = {};
}
if (!staticData.conversations[sessionId]) {
  staticData.conversations[sessionId] = [];
}

// Add current message
staticData.conversations[sessionId].push({
  role: 'user',
  content: message,
  timestamp: new Date().toISOString()
});

// Keep last 10 messages for context
const history = staticData.conversations[sessionId].slice(-10);

return [{
  json: {
    ...$input.first().json,
    conversation_history: history
  }
}];

Node 3: Intent Classification (OpenAI)

Add an OpenAI Chat node named “Classify Intent.” Model: gpt-4o-mini (fast and cheap for classification). System prompt:

Intent Classification Prompt (text)
You are an intent classifier for an e-commerce support chatbot. Given a customer message, classify it into exactly one category:

- ORDER_STATUS: customer is asking about an order, delivery, or shipment
- PRODUCT_INQUIRY: customer is asking about a product, pricing, or availability
- ACCOUNT_QUESTION: customer is asking about their account, invoices, or payments
- SUPPORT_REQUEST: customer needs help with a problem or wants to file a complaint
- GENERAL: anything else (greetings, small talk, questions about the company)

Respond with ONLY the category name, nothing else.

This costs about $0.0001 per classification and takes ~200ms. It tells the next node which Odoo models to query.
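Since the classifier's output feeds a Switch node, it pays to normalize the model's reply and fall back to GENERAL if it goes off-script. A small guard you could run in a Code node between the two:

```javascript
// The model is told to return only a category name, but guard against
// stray whitespace, punctuation, or an off-script answer anyway.
const INTENTS = ['ORDER_STATUS', 'PRODUCT_INQUIRY', 'ACCOUNT_QUESTION', 'SUPPORT_REQUEST', 'GENERAL'];

function normalizeIntent(raw) {
  const cleaned = String(raw || '').trim().toUpperCase().replace(/[^A-Z_]/g, '');
  return INTENTS.includes(cleaned) ? cleaned : 'GENERAL';
}
```

Anything unrecognized routes to the GENERAL branch, which skips data lookup and lets the response prompt handle it gracefully.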

Node 4: Route by Intent (Switch Node)

Add a Switch node. Route on the output of the classification node:

  • ORDER_STATUS → Order Lookup
  • PRODUCT_INQUIRY → Product Search
  • ACCOUNT_QUESTION → Account Lookup
  • SUPPORT_REQUEST → Human Handoff (or ticket creation)
  • GENERAL → Skip data lookup, go straight to response generation

Node 5: Odoo Data Retrieval (Per Intent)

Each branch has an Odoo node configured for the relevant query. Here are the three main branches:

Order Lookup

  • Resource: sale.order
  • Operation: Search and Read
  • Domain: [["name", "ilike", "<order number>"], ["partner_id.email", "=", "<customer email>"]]
  • Fields: name, state, amount_total, date_order, commitment_date, picking_ids

Product Search

  • Resource: product.template
  • Operation: Search and Read
  • Domain: [["name", "ilike", "<search term>"]]
  • Fields: name, list_price, qty_available, description_sale, categ_id
  • Limit: 5

Account Lookup

  • Resource: account.move
  • Operation: Search and Read
  • Domain: [["partner_id.email", "=", "<customer email>"], ["state", "=", "posted"]]
  • Fields: name, state, amount_total, amount_residual, invoice_date, invoice_date_due
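As a sketch of what feeds those Odoo nodes, here are helpers for pulling an order reference out of free text and building two of the domains above. The SO-prefix regex is an assumption about your sequence format; adjust it to match your own order numbering:

```javascript
// Pull an order reference like "SO-2024-0142" out of free text.
// The SO prefix and digit grouping are assumptions; adapt to your sequences.
function extractOrderRef(text) {
  const m = String(text).match(/\bSO[-/]?\d{4}[-/]?\d{3,5}\b/i);
  return m ? m[0].toUpperCase() : null;
}

// Domain for the Order Lookup branch. Scoping by email keeps customers
// from reading each other's orders.
function orderDomain(orderRef, customerEmail) {
  const domain = [['name', 'ilike', orderRef]];
  if (customerEmail) domain.push(['partner_id.email', '=', customerEmail]);
  return domain;
}

// Domain for the Account Lookup branch (posted invoices only).
function openInvoiceDomain(customerEmail) {
  return [['partner_id.email', '=', customerEmail], ['state', '=', 'posted']];
}
```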

Node 6: Generate Response (OpenAI)

Add an OpenAI Chat node named “Generate Response.” Model: gpt-4o-mini (or gpt-4o for higher quality). This is where you define your chatbot's personality and rules:

System Prompt (text)
You are a helpful customer support assistant for [Your Company Name],
an online store running on Odoo. You are friendly, professional, and concise.

RULES:
1. Answer based ONLY on the Odoo data provided in the context. Never make up
   order statuses, prices, or product details.
2. If the data doesn't contain enough information to answer, say so honestly
   and offer to connect the customer with a human agent.
3. Keep responses under 150 words. Customers are chatting, not reading essays.
4. Format prices as currency with 2 decimal places.
5. For order status, translate Odoo states: "sale" = "Confirmed",
   "done" = "Completed", "cancel" = "Cancelled", "draft" = "Draft/Pending".
6. Never reveal internal system details, field names, or technical errors.
   If something goes wrong, say "Let me connect you with a team member."
7. If asked about returns, exchanges, or complaints, always offer to
   create a support ticket.

COMPANY INFO:
- Name: [Your Company Name]
- Return policy: [Your return policy summary]
- Business hours: [Your hours]
- Support email: [Your support email]

The user message should concatenate the conversation history + the Odoo data context + the current message.
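That assembly can be sketched as a small helper, assuming `history` holds prior turns only (drop the last entry first if your memory node already appended the current message):

```javascript
// Assemble the messages array for the Generate Response call:
// system prompt, prior turns, then the current message with the Odoo
// lookup results attached as context.
function buildMessages(systemPrompt, history, odooData, currentMessage) {
  const context = odooData && odooData.length
    ? 'ODOO DATA:\n' + JSON.stringify(odooData, null, 2)
    : 'ODOO DATA: (no records found)';
  return [
    { role: 'system', content: systemPrompt },
    // strip timestamps; the chat API only wants role + content
    ...history.map(({ role, content }) => ({ role, content })),
    { role: 'user', content: context + '\n\nCUSTOMER MESSAGE: ' + currentMessage },
  ];
}
```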

Node 7: Save Response and Return

Add a Code node to save the assistant's response to the conversation history. Then add a Respond to Webhook node that returns the AI response as JSON:

Webhook Response (json)
{
  "response": "Your order SO-2024-0142 was confirmed on January 15th for $2,450.00. It's currently being prepared for shipment. Based on the expected delivery date, you should receive it by January 22nd. Would you like me to check anything else?",
  "session_id": "abc123",
  "needs_human": false
}

Step 3: Connect to Odoo's XML-RPC API

In n8n, go to Settings > Credentials > Add Credential > Odoo.

  • Site URL: http://odoo:8069 (Docker internal) or https://your-odoo.com
  • Database Name: your Odoo database name (check the Odoo URL or database manager)
  • Username: a dedicated bot user's login (create one in Odoo)
  • Password / API Key: use an API key (more secure than a password)

Create a dedicated Odoo user for the chatbot rather than using your admin account: something like chatbot@yourcompany.com, with access rights limited to:

  • Sales: User (read sales orders)
  • Inventory: User (read delivery status)
  • Accounting: Billing (read invoices — be careful with this, scope appropriately)
  • Website: restricted (no admin access)

Generate an API key for this user: Settings > Users > [chatbot user] > API Keys > New API Key. Use this key in n8n instead of the user's password.

Test the connection in n8n. You should see a green checkmark. If it fails, verify the URL is reachable from n8n's container (try docker exec -it n8n wget -qO- http://odoo:8069/web/database/list).
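If you want to probe the credentials outside n8n, Odoo also exposes a JSON-RPC endpoint at /jsonrpc backed by the same auth as XML-RPC, which you can hit with plain fetch. A minimal sketch; the URL, database name, login, and key are placeholders:

```javascript
// Build the JSON-RPC envelope for Odoo's common.authenticate call.
function buildAuthRequest(db, login, apiKey) {
  return {
    jsonrpc: '2.0',
    method: 'call',
    params: { service: 'common', method: 'authenticate', args: [db, login, apiKey, {}] },
    id: 1,
  };
}

// POST it to Odoo's /jsonrpc endpoint (fetch is global in Node 18+ and browsers).
async function odooAuthenticate(url, db, login, apiKey) {
  const res = await fetch(`${url}/jsonrpc`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildAuthRequest(db, login, apiKey)),
  });
  const data = await res.json();
  if (data.error) throw new Error(data.error.message || 'Odoo RPC error');
  return data.result; // numeric uid on success, false on bad credentials
}

// Usage (placeholder values):
// odooAuthenticate('http://odoo:8069', 'mydb', 'chatbot@yourcompany.com', 'API_KEY')
//   .then(uid => console.log('uid:', uid));
```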

Step 4: Train the AI with Your Business Context

The system prompt in Node 6 is your chatbot's brain. The better you craft it, the better your chatbot performs. Here's how to go beyond the basics.

Inject Your Product Catalog

For small catalogs (under 100 products), include a summary directly in the system prompt:

Product Catalog Prompt (text)
PRODUCT CATALOG:
- Widget Pro ($49.99) - Professional-grade widget, stainless steel
- Widget Basic ($19.99) - Entry-level widget, plastic housing
- Widget Kit ($79.99) - Pro + Basic + accessories, best value
- Spare Parts Pack ($12.99) - Replacement parts for all models

For large catalogs, use the RAG (Retrieval-Augmented Generation) pattern: before the AI generates a response, search Odoo's product.template for products matching the customer's query, and inject only the relevant products into the context. This is already what our architecture does in Node 5 — the Odoo product search provides just-in-time context.
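Whichever path you take, the lookup results need to be flattened into prompt text. A small formatter for the product branch; the field names match the Node 5 search, and the stock wording is a stylistic choice:

```javascript
// Turn product.template records into a compact context block for the prompt.
function formatProductContext(products, currency = '$') {
  if (!products || !products.length) return 'No matching products found.';
  return products
    .map(p => `- ${p.name} (${currency}${Number(p.list_price).toFixed(2)})` +
              (p.qty_available > 0 ? ` - ${p.qty_available} in stock` : ' - out of stock'))
    .join('\n');
}
```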

Add Business Rules

Your system prompt should include rules the AI must follow:

Business Rules Prompt (text)
PRICING RULES:
- Orders over $500 qualify for free shipping
- Wholesale pricing available for orders of 50+ units (direct to sales team)
- Active promotion: 15% off Widget Pro through March 31, 2026 (code: SPRING15)

ESCALATION RULES:
- Any mention of "refund" or "cancel" → offer to create a support ticket
- Questions about custom orders → transfer to human agent
- Complaints about quality → apologize, create ticket, flag as priority

Conversation Starters

Configure your chat widget to show suggested questions:

  • “What products do you have?”
  • “Where's my order?”
  • “What are your business hours?”

These guide customers toward queries your bot handles well, improving the first-interaction experience.

Step 5: Add the Chat Widget to Your Website

You need a frontend chat widget that sends messages to n8n's webhook and displays responses. Two options:

Option A: Use Odoo's Live Chat (Redirect to n8n)

If you're running Odoo's Website module, Live Chat is available in Community Edition. Configure Live Chat to send messages to your n8n webhook instead of (or in addition to) the default operator routing. This requires a custom Live Chat handler — a small Odoo module override that POSTs to n8n before routing to a human operator.

Option B: Standalone Chat Widget (Recommended)

Embed a lightweight chat widget on your website that talks directly to n8n. Here's a minimal implementation:

chat-widget.html (html)
<!-- Add before </body> -->
<div id="oec-chat-widget">
  <div id="oec-chat-toggle" onclick="toggleChat()">
    <svg width="24" height="24" viewBox="0 0 24 24" fill="white">
      <path d="M20 2H4c-1.1 0-2 .9-2 2v18l4-4h14c1.1 0 2-.9 2-2V4c0-1.1-.9-2-2-2z"/>
    </svg>
  </div>
  <div id="oec-chat-window" style="display:none;">
    <div id="oec-chat-header">
      <strong>Support Chat</strong>
      <span onclick="toggleChat()" style="cursor:pointer">&times;</span>
    </div>
    <div id="oec-chat-messages"></div>
    <div id="oec-chat-input">
      <input type="text" id="oec-msg" placeholder="Type a message..."
             onkeypress="if(event.key==='Enter')sendMessage()">
      <button onclick="sendMessage()">Send</button>
    </div>
  </div>
</div>

<script>
const WEBHOOK_URL = 'https://n8n.yourdomain.com/webhook/odoo-chatbot';
const SESSION_ID = 'sess_' + (crypto.randomUUID ? crypto.randomUUID() : Math.random().toString(36).slice(2, 11));

function toggleChat() {
  const win = document.getElementById('oec-chat-window');
  win.style.display = win.style.display === 'none' ? 'flex' : 'none';
}

async function sendMessage() {
  const input = document.getElementById('oec-msg');
  const message = input.value.trim();
  if (!message) return;

  appendMessage('You', message);
  input.value = '';

  try {
    const res = await fetch(WEBHOOK_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        message: message,
        session_id: SESSION_ID,
        customer_email: getCustomerEmail()
      })
    });
    const data = await res.json();
    appendMessage('Support', data.response);

    if (data.needs_human) {
      appendMessage('System', 'Connecting you with a team member...');
    }
  } catch (err) {
    appendMessage('System', 'Connection issue. Please try again.');
  }
}

function appendMessage(sender, text) {
  const container = document.getElementById('oec-chat-messages');
  const div = document.createElement('div');
  div.className = 'chat-msg ' + (sender === 'You' ? 'user' : 'bot');
  const label = document.createElement('strong');
  label.textContent = sender + ': ';
  div.appendChild(label);
  // Use a text node, not innerHTML, so message content can't inject HTML
  div.appendChild(document.createTextNode(text));
  container.appendChild(div);
  container.scrollTop = container.scrollHeight;
}

function getCustomerEmail() {
  return null; // Return logged-in user's email if available
}
</script>

<style>
#oec-chat-widget {
  position: fixed; bottom: 20px; right: 20px;
  z-index: 9999; font-family: sans-serif;
}
#oec-chat-toggle {
  width: 56px; height: 56px; border-radius: 50%;
  background: #714B67; display: flex; align-items: center;
  justify-content: center; cursor: pointer;
  box-shadow: 0 2px 12px rgba(0,0,0,0.15);
}
#oec-chat-window {
  flex-direction: column; width: 360px; height: 480px;
  background: white; border-radius: 12px;
  box-shadow: 0 4px 20px rgba(0,0,0,0.15);
  overflow: hidden; position: absolute;
  bottom: 70px; right: 0;
}
#oec-chat-header {
  padding: 14px 16px; background: #714B67; color: white;
  display: flex; justify-content: space-between;
  align-items: center;
}
#oec-chat-messages { flex: 1; overflow-y: auto; padding: 12px; }
.chat-msg {
  margin-bottom: 10px; padding: 8px 12px;
  border-radius: 8px; font-size: 14px; line-height: 1.4;
}
.chat-msg.user { background: #f0f0f0; }
.chat-msg.bot { background: #f3e8f0; }
#oec-chat-input {
  display: flex; padding: 8px; border-top: 1px solid #eee;
}
#oec-chat-input input {
  flex: 1; padding: 8px 12px; border: 1px solid #ddd;
  border-radius: 6px; outline: none;
}
#oec-chat-input button {
  margin-left: 8px; padding: 8px 16px;
  background: #714B67; color: white; border: none;
  border-radius: 6px; cursor: pointer;
}
</style>

Drop this snippet into your website's HTML. Change WEBHOOK_URL to your n8n webhook URL. The styling uses Odoo's purple (#714B67) — adjust to match your brand.

Need the full stack deployed?

OEC.sh runs Odoo + n8n + PostgreSQL on the cloud provider of your choice. Automated backups, SSL, monitoring — and a Docker Compose generator to get started.

  • Free tier available
  • No credit card required
  • Any cloud provider
Try OEC.sh Free

Testing and Improving

Test Scenarios

Before going live, test these conversation types:

  • Product inquiry: "Do you sell widgets?" → searches product.template, returns matches with prices
  • Order status: "Where's my order SO-2024-0142?" → looks up the order, returns state and delivery estimate
  • Account balance: "Do I have any unpaid invoices?" → queries account.move for open invoices
  • Out of scope: "What's the weather today?" → politely explains it can only help with store-related questions
  • Escalation trigger: "I want a refund" → offers to create a support ticket, flags for human follow-up
  • Greeting: "Hello!" → friendly greeting plus suggested actions
  • Multi-turn: order status, then "When will it arrive?" → uses conversation history to maintain context

Logging for Quality Improvement

Add a logging node in n8n that saves every conversation to a Google Sheet, Airtable, or a custom Odoo model. Review weekly. Look for:

  • Questions the bot answered incorrectly (fix the system prompt)
  • Questions the bot couldn't answer (add data sources or business rules)
  • Common questions you hadn't anticipated (add to conversation starters)
  • Conversations that escalated unnecessarily (adjust escalation thresholds)

Human Agent Handoff

When the bot can't answer or the customer asks for a human, your options:

1. Odoo Live Chat operator

Redirect the session to a human agent in Odoo's Live Chat. This requires the Live Chat module installed (available in Community).

2. Email notification

n8n sends an email to your support team with the conversation transcript and customer details.

3. Ticket creation (recommended)

n8n creates a crm.lead in Odoo (or a helpdesk.ticket, if you've installed the OCA helpdesk module; Odoo's own Helpdesk app is Enterprise-only) with the conversation attached. This is the most reliable option: it creates an auditable record and works even outside business hours.
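A sketch of that ticket-creation call as a JSON-RPC execute_kw request. The uid comes from a prior authenticate call; crm.lead and the fields used here are standard, but verify them against your instance:

```javascript
// Build the execute_kw envelope that creates a crm.lead carrying the
// chat transcript. session is a plain object with session_id,
// customer_email, and transcript (the joined conversation text).
function buildCreateLeadRequest(db, uid, apiKey, session) {
  return {
    jsonrpc: '2.0',
    method: 'call',
    params: {
      service: 'object',
      method: 'execute_kw',
      args: [db, uid, apiKey, 'crm.lead', 'create', [{
        name: `Chatbot escalation (session ${session.session_id})`,
        email_from: session.customer_email,
        description: session.transcript,
      }]],
    },
    id: 2,
  };
}
```

POST this to the same /jsonrpc endpoint used for authentication; the result is the id of the new record.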

Advanced: Using Local LLMs Instead of OpenAI

If you need complete data privacy or want to eliminate API costs entirely, replace OpenAI with a self-hosted model via Ollama.

Add Ollama to Your Stack

docker-compose.yml (yaml)
services:
  # ... existing services ...

  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    volumes:
      - ollama_data:/root/.ollama
    networks:
      - odoo-network
    # Uncomment below for GPU acceleration
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - capabilities: [gpu]

volumes:
  ollama_data:
Terminal (bash)
docker compose up -d ollama
docker exec -it ollama ollama pull mistral  # 7B model, good balance of quality and speed

Configure n8n for Ollama

In n8n's OpenAI Chat nodes, instead of using the OpenAI credential, add an HTTP Request node:

  • URL: http://ollama:11434/v1/chat/completions
  • Method: POST
Ollama Request Body (json)
{
  "model": "mistral",
  "messages": [
    {"role": "system", "content": "Your system prompt here"},
    {"role": "user", "content": "Customer message + Odoo context"}
  ],
  "temperature": 0.3
}

Alternatively, n8n supports Ollama natively through the AI Agent node — configure it with the Ollama URL and model name.

Performance Expectations

  • Mistral 7B (4.1 GB): ~5 tokens/sec on CPU, ~40 tokens/sec on GPU; good for most queries
  • Llama 3 8B (4.7 GB): ~4 tokens/sec on CPU, ~35 tokens/sec on GPU; slightly better reasoning
  • Phi-3 Mini 3.8B (2.3 GB): ~10 tokens/sec on CPU, ~60 tokens/sec on GPU; fastest, adequate quality

CPU-only inference on a modern 8-core server produces a response in 5–15 seconds. With a GPU, responses come in under 2 seconds. For a customer-facing chatbot, GPU is recommended if you expect more than a handful of concurrent conversations.

Cost Breakdown

OpenAI API Costs

Using GPT-4o-mini (the sweet spot for chatbot workloads):

  • 500 conversations/mo: $0.05 intent classification + $0.50 response generation ≈ $0.55/month (Odoo data lookups are free)
  • 1,000 conversations/mo: $0.10 + $1.00 ≈ $1.10/month
  • 5,000 conversations/mo: $0.50 + $5.00 ≈ $5.50/month
  • 10,000 conversations/mo: $1.00 + $10.00 ≈ $11.00/month

For higher quality (GPT-4o instead of mini), multiply by roughly 10x. Still cheap — $11/month for 1,000 conversations with the best model available.
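To adapt these figures to your own traffic, here's a rough estimator. Every number in it is an assumption: the token counts are guesses about typical conversations, and the per-1M-token prices are placeholders, so check OpenAI's current pricing page before relying on it:

```javascript
// Back-of-envelope monthly cost model. All defaults are assumptions.
function estimateMonthlyCost(conversations, opts = {}) {
  const {
    inputPricePer1M = 0.15,     // assumed gpt-4o-mini input price, USD per 1M tokens
    outputPricePer1M = 0.60,    // assumed gpt-4o-mini output price, USD per 1M tokens
    inputTokensPerTurn = 1500,  // system prompt + history + Odoo context
    outputTokensPerTurn = 150,  // roughly a 150-word reply
    turnsPerConversation = 4,   // a few back-and-forth exchanges
  } = opts;
  const perTurn =
    (inputTokensPerTurn / 1e6) * inputPricePer1M +
    (outputTokensPerTurn / 1e6) * outputPricePer1M;
  const perConversation = perTurn * turnsPerConversation;
  return { perConversation, perMonth: perConversation * conversations };
}
```

With these defaults, 1,000 conversations work out to roughly $1.26/month, in the same ballpark as the table above, and each conversation stays under a penny.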

Local LLM Costs

Zero per-request cost. Your only expense is the server running Ollama:

  • CPU-only (shared with Odoo): $0 marginal cost
  • Dedicated GPU VPS (e.g., Hetzner GPU): $50–150/month
  • Your own GPU server: amortized hardware cost

Comparison with Enterprise

Enterprise Live Chat with AI chatbot requires Enterprise licensing for all users, not just the chatbot feature:

  • 5 users, 500 chats/mo: $1,494/yr Enterprise vs $7/yr API (self-hosted, OpenAI) vs $0/yr (self-hosted, local LLM)
  • 25 users, 2,000 chats/mo: $7,470/yr vs $40/yr vs $0/yr
  • 50 users, 5,000 chats/mo: $14,940/yr vs $66/yr vs $0/yr

The takeaway: The self-hosted chatbot costs 98–100% less than the Enterprise equivalent. The gap widens as your team grows because Enterprise charges per user while your chatbot costs scale with conversation volume, not headcount.

Deploy on OEC.sh

You now have a three-container stack: Odoo + n8n + PostgreSQL (plus optionally Ollama for local AI). That's a production system that needs proper infrastructure: monitoring, backups, SSL certificates, DNS management, and a server that doesn't go down.

OEC.sh runs this full stack on the cloud provider of your choice. No vendor lock-in — pick DigitalOcean, AWS, Google Cloud, Hetzner, or OVH. We handle the infrastructure: automated backups, SSL via Let's Encrypt, server monitoring, and one-click scaling when your chatbot goes viral.

Frequently Asked Questions

Can I use this chatbot with Odoo's existing Live Chat module?

Yes. Odoo Community includes the Live Chat module. You can configure it to work alongside the AI chatbot — the AI handles initial responses and escalates to human operators via Live Chat when needed. The two systems complement each other.

How many concurrent conversations can this handle?

n8n handles webhook requests asynchronously. A single n8n instance comfortably manages 50–100 concurrent conversations. If you’re using OpenAI, the bottleneck is API latency (~500ms–1s per call), not n8n. For high-traffic sites (1,000+ concurrent chats), run multiple n8n workers behind a load balancer.

What if the customer asks something the bot can't answer?

The system prompt instructs the AI to be honest when it doesn’t have enough information. It will offer to connect the customer with a human agent or create a support ticket. The needs_human flag in the response lets your chat widget trigger a handoff flow.

Can I use Anthropic Claude instead of OpenAI?

Yes. n8n has a native Anthropic node. Replace the OpenAI Chat nodes with Anthropic Chat nodes, configure your API key, and select your preferred Claude model. Claude Sonnet is comparable in quality and cost to GPT-4o-mini for chatbot workloads. Claude Haiku is even cheaper for intent classification.

Does this work with Odoo's website or only external websites?

Both. The chat widget is a standalone HTML/JS snippet that works on any website. If your storefront runs on Odoo Website, embed the widget there. If your storefront is a separate frontend (Next.js, Shopify, etc.) that connects to Odoo as a backend, embed the widget on that frontend. The webhook URL doesn’t care where the request comes from.

How do I handle multiple languages?

Modern AI models handle multilingual conversations natively. GPT-4o and Claude respond in whatever language the customer writes in. Add a note to your system prompt: “Respond in the same language the customer uses. Default to English if uncertain.” No translation layer needed. For local LLMs, Mistral handles European languages well; Llama 3 has broader multilingual support.

Build Your Odoo AI Chatbot Today

Under $30/month for the full stack. Less than a penny per conversation. No Enterprise license required. Deploy on the cloud provider you already use.