Engineering · May 7, 2026 · 14 min read

Build Anything With the OEC.sh Public API: 6 Real Examples

Most Odoo platforms make you click through a web UI to deploy. Ours has a REST API. Here are 6 things you can build with it in an afternoon, with the actual code we run in production.

Quick API primer

The Public API lives at https://platform.oec.sh/api/v1. Every endpoint is JSON in, JSON out, authenticated with a bearer token. There is no SDK to wrestle with. curl works. requests works. fetch works. Pick your weapon.

Generate a key in the dashboard under Settings, API Keys. Each key is JWT-signed and scoped to the org you created it from. Send it as a Bearer token on every request:

curl https://platform.oec.sh/api/v1/projects \
  -H "Authorization: Bearer oec_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

Three things worth knowing before you write a single line of automation:

  • Idempotency keys. Pass an Idempotency-Key header on any mutating request and we deduplicate retries for 24 hours. Network blipped during a deploy? Retry the exact same call. You will not get two deploys, you will get the same response twice.
  • Rate limits. Each key gets a few hundred requests per minute. Limits are returned in X-RateLimit-Remaining and X-RateLimit-Reset headers, so you can back off cleanly. If you hit the ceiling, you get a 429 with a Retry-After header. No surprises.
  • Pagination. List endpoints return { data, next_cursor }. Pass ?cursor=... to walk forward. Page size defaults to 50, max 200.
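Those three behaviors combine naturally into one small client helper. Here is a sketch of the cursor walk, with the page fetcher injected so the logic can be exercised without the network; the function and parameter names are ours, not part of the API:

```python
from typing import Callable, Iterator, Optional


def walk_pages(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Walk a { data, next_cursor } list endpoint until the cursor runs out.

    fetch_page takes a cursor (None for the first page) and returns the
    decoded JSON body -- in real use it would be a GET with ?cursor=...
    """
    cursor = None
    while True:
        body = fetch_page(cursor)
        yield from body["data"]
        cursor = body.get("next_cursor")
        if not cursor:
            break
```

Because the fetcher is a plain callable, the same generator works with curl-style scripts, httpx, or a mocked two-page response in a unit test.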

The full OpenAPI spec is at platform.oec.sh/api/v1/docs. Drop the JSON into Postman, Bruno, or your favorite codegen tool and you have typed clients in five minutes. The endpoints you will use most often: /api-keys, /projects, /environments, /deployments, /webhooks, plus deploy, clone, and rollback action endpoints under each environment.

One quick note on plans. The Public API requires Starter or above. Free accounts can receive incoming webhooks but cannot mint API keys. Start free, kick the tires, then upgrade when you need outbound automation.

1. Deploy on every push

This is the obvious one, so we are getting it out of the way first. You merge a PR to main, GitHub Actions calls our deploy endpoint, the environment redeploys with the new commit. No webhook configuration in our dashboard, no clicking around, no manual intervention. Your repo is the source of truth.

Drop this in .github/workflows/deploy.yml:

name: Deploy to OEC.sh

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger OEC.sh deploy
        run: |
          curl -X POST \
            https://platform.oec.sh/api/v1/environments/${{ secrets.OECSH_ENV_ID }}/deploy \
            -H "Authorization: Bearer ${{ secrets.OECSH_API_KEY }}" \
            -H "Idempotency-Key: gh-${{ github.run_id }}" \
            -H "Content-Type: application/json" \
            -d '{"ref": "${{ github.sha }}", "wait": false}'
      - name: Print deploy URL
        run: echo "Watch deploy at https://platform.oec.sh/envs/${{ secrets.OECSH_ENV_ID }}/deployments"

Two secrets to set in GitHub: OECSH_API_KEY and OECSH_ENV_ID. The environment ID is in the URL when you open an environment in our dashboard.

The Idempotency-Key uses the GitHub run ID, which means if the workflow retries (network flake, runner died), we will not double-deploy. The same key returns the same deployment record. Beautiful.

Why bother with Actions when our dashboard already supports git push deploys? Because Actions gives you everything around the deploy: run tests first, build assets, post Slack messages, gate on approvals, deploy to staging then prod. The API is the escape hatch when the GUI workflow is too rigid. We use this exact pattern internally.

GitLab works the same way. Bitbucket Pipelines works the same way. CircleCI works the same way. It is just a curl. See our GitLab CI/CD guide for the equivalent .gitlab-ci.yml.

2. Custom deployment dashboard

An agency we work with runs 40+ Odoo environments across 18 clients. Their support team needed one screen showing every environment, status, last deploy, and current Odoo version. Our dashboard does this, but they wanted it embedded inside their internal tooling next to ticket queues and SLA timers.

Three endpoints, one Python script, done. The whole data fetcher is about 40 lines:

import json
import os

import httpx

API = "https://platform.oec.sh/api/v1"
KEY = os.environ["OECSH_API_KEY"]
HEADERS = {"Authorization": f"Bearer {KEY}"}


def paginate(path: str):
    cursor = None
    while True:
        params = {"cursor": cursor} if cursor else {}
        r = httpx.get(f"{API}{path}", headers=HEADERS, params=params, timeout=30)
        r.raise_for_status()
        body = r.json()
        yield from body["data"]
        cursor = body.get("next_cursor")
        if not cursor:
            break


def fleet_status():
    projects = {p["id"]: p for p in paginate("/projects")}
    rows = []
    for env in paginate("/environments"):
        recent = list(paginate(f"/environments/{env['id']}/deployments?limit=1"))
        last = recent[0] if recent else None
        rows.append({
            "client": projects[env["project_id"]]["name"],
            "environment": env["name"],
            "url": env["primary_url"],
            "status": env["status"],
            "odoo_version": env["odoo_version"],
            "last_deploy_at": last["finished_at"] if last else None,
            "last_deploy_status": last["status"] if last else None,
            "last_commit": (last or {}).get("git_sha", "")[:7],
        })
    return rows


if __name__ == "__main__":
    print(json.dumps(fleet_status(), indent=2, default=str))

Run that on a 30-second cron, push the JSON to a Redis cache, render it in whatever frontend you like. Their version is a Next.js app. Ours is a Grafana dashboard pulling from the same JSON via the Infinity datasource. Metabase, Retool, Internal.io, anything that speaks HTTP works.

The expensive part of this script is the inner loop fetching one deployment per environment. With 200 environments that is 201 requests. If that makes you nervous, fetch the deployments in one shot with /api/v1/deployments?limit=200&status=succeeded,failed at the org level and collapse the result to the latest record per environment, cutting the request count to a handful. The API supports that exact filter.
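If you go the org-level route, the collapse step is a few lines. A sketch, assuming the org-level list comes back newest-first like the per-environment lists; the helper name is ours:

```python
def latest_per_environment(deployments: list) -> dict:
    """Collapse an org-level deployment list (newest first) into one
    latest-deployment record per environment_id -- a single paginated
    request instead of one request per environment."""
    latest = {}
    for dep in deployments:
        # setdefault keeps the first (i.e. newest) record seen per env
        latest.setdefault(dep["environment_id"], dep)
    return latest
```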

One thing this lets you build that we do not ship out of the box: a leaderboard of which client has the most stale Odoo version. Sort by odoo_version ascending and you have a sales lead list for upgrade engagements.

3. Slack, Teams, Discord notifications

Every team wants deploy notifications in chat. We support this two ways. The lazy way: configure an outgoing webhook with the destination platform set to Slack, Teams, or Discord, and we will format the payload natively for each. No JSON wrangling. No transformation layer. The receiver gets a properly styled message with attachments, colors, and clickable links.

Add a webhook via the API:

curl -X POST https://platform.oec.sh/api/v1/webhooks \
  -H "Authorization: Bearer $OECSH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://hooks.slack.com/services/T0000/B0000/xxxxx",
    "format": "slack",
    "events": [
      "deployment.succeeded",
      "deployment.failed",
      "environment.created",
      "server.degraded"
    ],
    "secret": "whsec_choose_a_long_random_string"
  }'

Swap format: "slack" for teams or discord and we render the right payload shape for that platform. Same events, same data, different wire format.

The harder but more flexible way: set format: "generic" and point the webhook at your own server. You get our raw JSON event envelope and can do whatever you want with it: route to PagerDuty, write to a database, fan out to multiple destinations. Here is what a successful deploy event looks like on the wire:

{
  "id": "evt_01HXMZ8QPK3R5T7V9X1Z3B5C7D",
  "type": "deployment.succeeded",
  "created_at": "2026-05-07T14:23:11.421Z",
  "data": {
    "deployment": {
      "id": "dep_01HXMZ8QPK3R5T7V9X1Z3B5C7E",
      "environment_id": "env_01HXMZ0K3RJ8QM2P5T7V9X1Z3B",
      "project_id": "prj_01HXMY9P3RJ8QM2P5T7V9X1Z3B",
      "git_sha": "a3f8c2d",
      "git_branch": "main",
      "started_at": "2026-05-07T14:21:48.000Z",
      "finished_at": "2026-05-07T14:23:11.000Z",
      "duration_seconds": 83,
      "actor": "github-actions"
    },
    "environment": {
      "name": "production",
      "primary_url": "https://acme.apps.oec.sh",
      "odoo_version": "18.0"
    }
  }
}

Verify the signature on the receiving end so nobody can forge events. We sign every outgoing webhook with HMAC-SHA256 using the secret you registered:

from fastapi import FastAPI, Request, HTTPException
import hmac, hashlib, os, time

WEBHOOK_SECRET = os.environ["OECSH_WEBHOOK_SECRET"].encode()
app = FastAPI()


@app.post("/oecsh/webhook")
async def receive(request: Request):
    body = await request.body()
    sig = request.headers.get("X-OECSH-Signature", "")
    ts = request.headers.get("X-OECSH-Timestamp", "0")
    # Reject events older than 5 minutes (replay protection)
    if abs(time.time() - int(ts)) > 300:
        raise HTTPException(400, "stale event")
    expected = hmac.new(
        WEBHOOK_SECRET,
        f"{ts}.".encode() + body,
        hashlib.sha256,
    ).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise HTTPException(401, "bad signature")
    event = await request.json()
    if event["type"] == "deployment.failed":
        # route to your incident system, page someone, whatever
        ...
    return {"ok": True}

Use hmac.compare_digest, not ==. The constant-time comparison prevents timing attacks. This bites people surprisingly often.
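While wiring this up, it helps to generate valid signatures locally so you can exercise the handler before pointing real webhooks at it. A small helper mirroring the HMAC-SHA256 over {timestamp}.{body} scheme described above; the helper itself is ours, not part of any SDK:

```python
import hashlib
import hmac
import time
from typing import Optional, Tuple


def sign_event(secret: bytes, body: bytes, ts: Optional[int] = None) -> Tuple[str, str]:
    """Produce (X-OECSH-Timestamp, X-OECSH-Signature) header values for a
    payload, using HMAC-SHA256 over "{timestamp}.{body}"."""
    ts = int(time.time()) if ts is None else ts
    digest = hmac.new(secret, f"{ts}.".encode() + body, hashlib.sha256).hexdigest()
    return str(ts), digest
```

Feed the returned pair into a test client request against your receiver and you can cover the happy path, the stale-timestamp path, and the bad-signature path without any live traffic.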

4. Auto-clone prod to staging weekly

Stale staging is worse than no staging. If your staging database is from three months ago, every test you run there is a lie. The fix is a scheduled clone of production, sanitized, into your staging environment.

We have an Automation Rules feature for this exact thing, configured in the dashboard with a cron expression and a clone source. If you are on Pro you should just use that. But if you already have a scheduler (Jenkins, Airflow, plain cron, GitHub scheduled workflows), the API works fine:

# /etc/cron.d/oecsh-staging-refresh
# Weekly clone of production into staging, sanitized, Sunday 02:00 UTC
# OECSH_API_KEY, STAGING_ENV and PROD_ENV must be defined as VAR=value lines
# above this entry -- cron.d files do not inherit your shell environment.
0 2 * * 0 ops curl -X POST \
  https://platform.oec.sh/api/v1/environments/$STAGING_ENV/clone \
  -H "Authorization: Bearer $OECSH_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: weekly-clone-$(date +\%Y-\%U)" \
  -d '{"source_env": "'$PROD_ENV'", "sanitize": true, "wait": false}' \
  >> /var/log/oecsh-staging-refresh.log 2>&1

The sanitize: true flag scrubs PII before writing to staging. We replace email addresses with deterministic hashes, blank password hashes, and clear API keys stored in the database. Your developers can poke around without exposing real customer data.
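If you are curious what deterministic pseudonymization looks like in practice, here is a sketch of the idea. This is our illustration, not the platform's actual scrubber:

```python
import hashlib


def sanitize_email(email: str, salt: bytes = b"staging") -> str:
    """Deterministically pseudonymize an email: the same input always maps to
    the same output, so foreign keys and dedup logic keep working in staging,
    but the real address is unrecoverable from the result."""
    digest = hashlib.sha256(salt + email.strip().lower().encode()).hexdigest()[:12]
    return f"user_{digest}@example.invalid"
```

The determinism is the important property: a customer who appears in ten tables still resolves to one consistent fake identity after the scrub.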

Idempotency key uses the ISO week number, so a retried cron job in the same week will not run a second clone. If your scheduler retries, you waste no compute.

The clone endpoint returns immediately with wait: false and gives you a job_id you can poll. Pass wait: true if you want the request to block until the clone is done, useful when the next step in your pipeline depends on staging being ready.
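A polling loop for the wait: false path might look like this. The status-fetching callable is injected because the exact job endpoint and status values are assumptions on our part; check the OpenAPI spec for the real shapes:

```python
import time
from typing import Callable


def wait_for_job(
    get_status: Callable[[], str],
    timeout: float = 1800,
    interval: float = 10,
) -> str:
    """Poll a job until it leaves an in-flight state or the timeout expires.
    get_status would typically GET the job record (by the job_id the clone
    endpoint returned) and return its status field."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        # "queued"/"running" are assumed in-flight states for illustration
        if status not in ("queued", "running"):
            return status
        time.sleep(interval)
    raise TimeoutError("clone job did not finish in time")
```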

5. Multi-tenant agency dashboard

This is the big one. Agencies running Odoo for multiple clients want a customer-facing portal where each client can see their own environments, request deploys, and read their own metrics, without seeing each other or seeing the agency's internal data.

On the Agency plan you get scoped API keys. Mint a key per client, scoped to that client's org, and embed our API into your portal. Each client only sees their own environments because their key cannot see anything else. Here is the mint request:

curl -X POST https://platform.oec.sh/api/v1/api-keys \
  -H "Authorization: Bearer $AGENCY_OWNER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "client-acme-portal",
    "scope": "org",
    "org_id": "org_01HXMW8QPK3R5T7V9X1Z3B5C7D",
    "permissions": ["projects:read", "environments:read", "deployments:read", "deployments:trigger"]
  }'

Store the returned token securely, treat it like a password. Use it server-side from your portal backend, never expose it to the browser. The portal frontend talks to your backend, your backend talks to our API. Standard pattern.

Two more pieces complete the agency setup. First, registration tokens. When you onboard a new client, generate a registration token via the API and email it to them. They click the link, set a password, and they are in:

# Reuses httpx plus the API and HEADERS constants from the script in Example 2.
def invite_client(client_email: str, org_id: str) -> str:
    r = httpx.post(
        f"{API}/registration-tokens",
        headers=HEADERS,
        json={
            "email": client_email,
            "org_id": org_id,
            "role": "viewer",
            "expires_in_seconds": 86400 * 7,  # 7 days
        },
    )
    r.raise_for_status()
    token = r.json()["token"]
    return f"https://platform.oec.sh/accept-invite?token={token}"

Second, Connect-As impersonation for support. When a client opens a ticket, your support engineers can request a Connect-As session that lets them act as the client in our dashboard for a bounded time window. Every action is audit-logged with the original engineer's identity, so you keep accountability:

curl -X POST https://platform.oec.sh/api/v1/connect-as/sessions \
  -H "Authorization: Bearer $AGENCY_OWNER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "target_user_id": "usr_01HXMW8QPK3R5T7V9X1Z3B5C7E",
    "reason": "ticket-4291: customer reports failing deploy",
    "duration_minutes": 30
  }'

The response is a one-time URL. Open it, you are the customer, the timer starts. After 30 minutes you are kicked back to your own session. Read more about how Connect-As works if you want the security model details.

Put it together: registration tokens onboard clients, scoped keys power their portal, Connect-As lets you support them. That is the whole agency control plane in three endpoints.

Building an agency stack? Start with our Agency plan for cross-org Connect-As and multi-org keys. The free tier is the wrong size for this use case.

6. Custom monitoring and alerting

Some teams want deploy failures going to PagerDuty, not Slack. Some want Opsgenie. Some have a homegrown SIEM that swallows everything. The Public API plus webhooks gives you both pull and push paths, and you decide which one fits the destination.

Pull pattern, for catching state you missed: hit /deployments?status=failed&since=... on a schedule and reconcile against your incident system. Useful as a belt-and-suspenders check in case a webhook delivery silently failed. Push pattern, for real-time alerts: webhooks fire on deployment.failed within seconds.
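The pull-side reconciliation reduces to a set difference: failed deployments that have no incident yet, keyed the same way the push path keys its dedup. A minimal sketch, with the helper name ours:

```python
def missing_incidents(failed_deployments: list, open_dedup_keys: set) -> list:
    """Return failed deployments with no matching open incident, using the
    same dedup key scheme as the push path ("oecsh-deploy-{id}"). Feed the
    result back through the same trigger function the webhook handler uses."""
    return [
        d for d in failed_deployments
        if f"oecsh-deploy-{d['id']}" not in open_dedup_keys
    ]
```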

Here is the push path wired to PagerDuty Events API v2. The webhook receiver from Example 3 routes failures into PagerDuty incidents:

import os, httpx

PD_KEY = os.environ["PAGERDUTY_ROUTING_KEY"]


def trigger_pagerduty(event: dict):
    deploy = event["data"]["deployment"]
    env = event["data"]["environment"]
    payload = {
        "routing_key": PD_KEY,
        "event_action": "trigger",
        "dedup_key": f"oecsh-deploy-{deploy['id']}",
        "payload": {
            "summary": f"Deploy failed: {env['name']} ({deploy['git_sha'][:7]})",
            "severity": "error",
            "source": "oec.sh",
            "component": env["name"],
            "custom_details": {
                "environment_url": env["primary_url"],
                "git_branch": deploy["git_branch"],
                "duration_seconds": deploy.get("duration_seconds"),
                "actor": deploy.get("actor"),
                "deploy_logs": f"https://platform.oec.sh/deployments/{deploy['id']}",
            },
        },
        "links": [
            {"href": f"https://platform.oec.sh/deployments/{deploy['id']}", "text": "View deploy"},
            {"href": env["primary_url"], "text": "Environment"},
        ],
    }
    httpx.post("https://events.pagerduty.com/v2/enqueue", json=payload, timeout=10)


# inside your verified webhook handler:
#     if event["type"] == "deployment.failed":
#         trigger_pagerduty(event)

The dedup_key uses our deployment ID, so retries do not create duplicate incidents. PagerDuty will collapse them. Same pattern works for Opsgenie (just change the endpoint) and for any SIEM that accepts JSON over HTTP.

Want to be smarter about it? Only page on deployment.failed in production environments, route staging failures to a Slack channel instead. The event payload includes the environment name, so the routing logic is one if-statement. Five lines of Python turns into the kind of nuanced alert routing that usually requires a dedicated alerting platform.
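That routing if-statement, spelled out. Environment names and destination labels are illustrative:

```python
def route_alert(event: dict) -> str:
    """Decide where a deploy event goes: page only on production failures,
    send other failures to chat, ignore the rest."""
    env_name = event["data"]["environment"]["name"]
    if event["type"] == "deployment.failed" and env_name == "production":
        return "pagerduty"
    if event["type"] == "deployment.failed":
        return "slack"
    return "ignore"
```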

Webhook signing and the delivery log

We touched on signature verification in Example 3. Here is the rest of what you should know about how outgoing webhooks behave on our side, because debugging silent webhook failures is a special kind of hell and we tried to make it easy.

Every outgoing request is signed with HMAC-SHA256 over {timestamp}.{body} using the secret you set on the webhook. Two headers come along for the ride: X-OECSH-Timestamp and X-OECSH-Signature. Including the timestamp in the signed payload prevents replay attacks: the receiver checks freshness and rejects anything older than 5 minutes.

Node version, since the Python one is in Example 3:

import crypto from "node:crypto";

export function verifyOecshSignature(rawBody, headers, secret) {
  const sig = headers["x-oecsh-signature"];
  const ts = headers["x-oecsh-timestamp"];
  if (!sig || !ts) return false;
  // Replay protection
  if (Math.abs(Date.now() / 1000 - Number(ts)) > 300) return false;
  const expected = crypto
    .createHmac("sha256", secret)
    .update(`${ts}.`)
    .update(rawBody)
    .digest("hex");
  const expectedBuf = Buffer.from(expected, "hex");
  const sigBuf = Buffer.from(sig, "hex");
  // timingSafeEqual throws on length mismatch, so guard first
  if (sigBuf.length !== expectedBuf.length) return false;
  return crypto.timingSafeEqual(expectedBuf, sigBuf);
}

Now the operational stuff. Every webhook gets a delivery log, viewable in the dashboard or via /api/v1/webhooks/{id}/deliveries. We keep 30 days of history. Each delivery shows the event type, response code, response body (truncated), and how many retries it took. If your endpoint is returning 500s and you cannot figure out why, the delivery log shows you exactly what we sent and what came back.

Retries follow exponential backoff: 1s, 5s, 30s, 2m, 10m, 1h. After 5 consecutive failures we auto-pause the webhook and flag it in the dashboard. This is deliberate: a broken receiver should not generate thousands of failed retries. Fix the endpoint, click resume, we replay the queued events. The whole replay-after-fix flow is also exposed via the API if you want to wire it into your own deploy pipeline.
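For reference, that backoff schedule as a lookup, handy if you want your receiver's own retry budget to line up with ours. The helper is our illustration of the documented delays:

```python
from typing import Optional

# The delays stated in the docs: 1s, 5s, 30s, 2m, 10m, 1h
RETRY_SCHEDULE = [1, 5, 30, 120, 600, 3600]


def next_retry_delay(attempt: int) -> Optional[int]:
    """Seconds before retry number `attempt` (1-based). Returns None once
    the schedule is exhausted and the webhook would be paused."""
    if 1 <= attempt <= len(RETRY_SCHEDULE):
        return RETRY_SCHEDULE[attempt - 1]
    return None
```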

What is gated on which plan

Quick reference so you do not have to dig through pricing:

Free, $0/mo

Incoming webhooks (receive events from us). No outbound API key minting. Use this tier to develop your webhook receiver locally without paying for it.

Starter, $19/mo

Everything in Free, plus full Public API access. Mint API keys, deploy via API, build dashboards. This is the entry point for any kind of serious automation.

Pro, $39/mo

Everything in Starter, plus unlimited Automation Rules (the dashboard-native version of Example 4) and self-hosted GitLab integration. If you have an internal GitLab and you want native deploy hooks, this is your tier.

Agency, $199/mo

Everything in Pro, plus cross-org Connect-As via API and multi-org keys. Required for the agency portal pattern in Example 5.

TL;DR

We built a Public API because we wanted one ourselves. Every example above is something we either run internally or that customers asked us to make possible. JWT auth, idempotency keys, native Slack/Teams/Discord formats, signed webhooks with delivery logs and auto-pause, scoped multi-org keys, Connect-As. The pieces fit together.

Pick one of the six examples, copy the code, point it at your environments. You will be deploying from CI by lunch. Pull the OpenAPI spec into your codegen tool and you have a typed client by dinner. The platform is genuinely yours to script.

Get your first API key in 30 seconds

Free tier covers webhook receivers. Starter unlocks the Public API at $19/mo. Cancel anytime.