Idempotent Webhooks

rrule.net delivers webhooks at least once. In rare cases — a slow network, a worker restart at the wrong moment — your endpoint may receive the same event twice. This guide explains how to handle that safely in a few lines of code.

Why at-least-once?

The scheduler sends a webhook over the network. Two things can go wrong:

  1. The HTTP request times out before your server responds, so the scheduler marks the delivery as failed and retries.
  2. Your server received and processed the request, but the response was lost in transit.

In both cases, the scheduler has no way to distinguish "not received" from "received but response lost". Retrying is the safe default. The consequence: your handler may be called twice for the same occurrence. If your handler sends an email or charges a card, that matters.
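To make the risk concrete, here is a minimal sketch of a handler with no deduplication receiving the same delivery twice. sendEmail is a hypothetical stand-in for any side effect:

```typescript
// Counts side effects so the duplicate becomes visible.
let emailsSent = 0

// Hypothetical stand-in for a real side effect (email, charge, ...).
function sendEmail(executionId: string): void {
  emailsSent++
}

// Naive handler: acts on every delivery, retries included.
function naiveHandler(executionId: string): number {
  sendEmail(executionId)
  return 200 // the scheduler sees success both times
}

// A retry after a lost response: same execution_id, delivered twice.
naiveHandler('7f3a1c2d-4e5b-6f7a-8b9c-0d1e2f3a4b5c')
naiveHandler('7f3a1c2d-4e5b-6f7a-8b9c-0d1e2f3a4b5c')
// emailsSent is now 2: the user received the same email twice
```

The rest of this guide is about making that second call a no-op.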

The idempotency key

Every webhook delivery carries a stable, unique identifier for that occurrence: the X-RRule-Execution-Id header. It is also present in the request body as execution_id. Retried deliveries of the same occurrence always carry the same value.

POST https://your-app.com/webhook
User-Agent: rrule.net-scheduler/1.0
X-RRule-Execution-Id: 7f3a1c2d-4e5b-6f7a-8b9c-0d1e2f3a4b5c
X-RRule-Scheduled-For: 2026-03-10T17:00:00.000Z
X-RRule-Schedule-Id: a1b2c3d4-...
Content-Type: application/json

{
  "schedule_id": "a1b2c3d4-...",
  "execution_id": "7f3a1c2d-4e5b-6f7a-8b9c-0d1e2f3a4b5c",
  "scheduled_for": "2026-03-10T17:00:00.000Z",
  "executed_at": "2026-03-10T17:00:02.341Z",
  "timezone": "Europe/Paris",
  "input": {
    "type": "rrule",
    "value": "FREQ=MONTHLY;BYDAY=+2MO;BYHOUR=18;BYMINUTE=0"
  }
}

The scheduler considers a delivery successful when your endpoint returns any 2xx status code within 30 seconds. It retries on network errors and non-2xx responses.
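One consequence of the 30-second window: if your handler does slow work inline, a long-running job can be counted as a failure and redelivered. A common mitigation (not specific to rrule.net; a sketch only, with a hypothetical in-memory queue) is to enqueue the work and acknowledge immediately. The trade-off is that a crash between the 200 and the worker run loses the job unless the queue is durable:

```typescript
// Acknowledge fast, work later: push the delivery onto a queue and
// return 200 well inside the 30-second budget.
type Job = { executionId: string; body: unknown }

// Hypothetical in-memory queue; a real deployment would use a durable
// job queue so an acknowledged delivery cannot be lost on a crash.
const queue: Job[] = []

function handleWebhook(executionId: string, body: unknown): number {
  queue.push({ executionId, body }) // cheap, fast, no network calls
  return 200
}

// A separate worker drains the queue at its own pace.
function drainOne(): Job | undefined {
  return queue.shift()
}

const status = handleWebhook('7f3a1c2d-4e5b-6f7a-8b9c-0d1e2f3a4b5c', {
  scheduled_for: '2026-03-10T17:00:00.000Z',
})
// status === 200 immediately; the job waits in the queue for a worker
```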

The pattern: store before act

The core idea is simple: record that you have seen an execution_id before doing any work. On the next delivery, check first. If you have already seen that ID, return 200 OK immediately — no work is done twice.

┌─────────────────────────────────────────────────────┐
│ Receive webhook                                     │
│                                                     │
│  1. Read execution_id from header or body           │
│  2. Check: is this ID already in processed_ids?     │
│     Yes: return 200, do nothing                     │
│     No:  INSERT execution_id (deduplicate here)     │
│  3. Do the actual work (send email, charge card...) │
│  4. Return 200                                      │
└─────────────────────────────────────────────────────┘

Step 2 must use an atomic insert with a unique constraint — not a read-then-write — to be safe under concurrent requests.
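The difference is easy to demonstrate with an in-memory sketch, where a Set stands in for the database and an await stands in for the round-trip between the read and the write:

```typescript
const processed = new Set<string>()

// UNSAFE: read-then-write. The await between the check and the insert
// lets a concurrent delivery pass the check before either records the ID.
async function racyDedup(id: string): Promise<boolean> {
  const seen = processed.has(id)
  await Promise.resolve() // stands in for the database round-trip
  if (seen) return false
  processed.add(id)
  return true // claims "first delivery"
}

// SAFE: one atomic check-and-insert, the in-memory analogue of
// INSERT ... ON CONFLICT DO NOTHING.
function atomicDedup(id: string): boolean {
  if (processed.has(id)) return false
  processed.add(id)
  return true
}

// Two retried deliveries arriving at the same time:
const [a, b] = await Promise.all([racyDedup('dup'), racyDedup('dup')])
// a and b are BOTH true: both callers believe they are first.

const first = atomicDedup('once')
const second = atomicDedup('once')
// first is true, second is false: exactly one caller does the work.
```

atomicDedup only looks safe here because JavaScript is single-threaded; in a real system serving concurrent requests, the atomicity must come from the database's unique constraint.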

Examples

Node.js / TypeScript (Hono + Postgres)

Uses a processed_webhooks table with a unique constraint on execution_id.

-- Migration: one-time setup
CREATE TABLE processed_webhooks (
  execution_id TEXT PRIMARY KEY,
  processed_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Optional: clean up old records after 30 days
CREATE INDEX ON processed_webhooks (processed_at);

// webhook.ts
import { Hono } from 'hono'
import { sql } from './db'

const app = new Hono()

app.post('/webhook', async (c) => {
  const executionId = c.req.header('X-RRule-Execution-Id')
  if (!executionId) return c.text('Missing execution ID', 400)

  // Atomic deduplication: fails silently if already processed
  const result = await sql`
    INSERT INTO processed_webhooks (execution_id)
    VALUES (${executionId})
    ON CONFLICT (execution_id) DO NOTHING
    RETURNING execution_id
  `

  if (result.length === 0) {
    // Already processed — return 200 so the scheduler stops retrying
    return c.text('Already processed', 200)
  }

  // Safe to act: this is the first delivery
  const body = await c.req.json()
  await sendEmail({ scheduledFor: body.scheduled_for })

  return c.text('OK', 200)
})

Python (FastAPI + SQLAlchemy)

Same pattern, using PostgreSQL's ON CONFLICT DO NOTHING.

from fastapi import FastAPI, Request, Header
from sqlalchemy import text
from db import engine  # your async SQLAlchemy engine (create_async_engine)

app = FastAPI()

@app.post("/webhook")
async def handle_webhook(
    request: Request,
    x_rrule_execution_id: str = Header(...)
):
    async with engine.begin() as conn:
        result = await conn.execute(
            text("""
                INSERT INTO processed_webhooks (execution_id)
                VALUES (:id)
                ON CONFLICT (execution_id) DO NOTHING
                RETURNING execution_id
            """),
            {"id": x_rrule_execution_id}
        )

        if result.rowcount == 0:
            return {"status": "already_processed"}

        # First delivery — safe to act
        body = await request.json()
        await send_email(scheduled_for=body["scheduled_for"])

    return {"status": "ok"}

Without a database (Redis SET NX)

If you don't have a relational DB, Redis SET NX is atomic and works as a dedup store.

import express from 'express'
import { createClient } from 'redis'

const app = express()
app.use(express.json())

const redis = createClient()
await redis.connect()

app.post('/webhook', async (req, res) => {
  const executionId = req.headers['x-rrule-execution-id']
  if (typeof executionId !== 'string') {
    return res.status(400).send('Missing execution ID')
  }

  // SET only if Not eXists — atomic, race-condition safe
  // Expires after 7 days (webhooks are never retried beyond 24h)
  const isNew = await redis.set(
    `webhook:${executionId}`,
    '1',
    { NX: true, EX: 60 * 60 * 24 * 7 }
  )

  if (!isNew) {
    return res.status(200).send('already processed')
  }

  await sendEmail(req.body)
  res.status(200).send('ok')
})

What to return

The scheduler interprets the response status as follows:

Status            Scheduler behaviour
2xx               Success — failure counter reset, next occurrence scheduled.
4xx / 5xx         Failure — failure counter incremented. Auto-paused after 5 consecutive failures.
Timeout (> 30s)   Treated as failure — same as 5xx.

Always return 200 for already-processed deliveries. Returning 4xx or 5xx would trigger a retry and increment the failure counter, which could eventually auto-pause your schedule.

rrule.net sends X-RRule-Execution-Id on every delivery. With a single unique constraint in your database, you get robust, duplicate-free webhook processing.