Generic Webhook Alert Source

The Generic Webhook Alert Source lets Rootly ingest alerts from any tool that can fire HTTP webhooks. If your monitoring, observability, security, or data-quality platform can send a POST with a JSON body, Rootly can turn that event into an alert, route it to the right responder, and trigger downstream automation. Use it when your alerting tool doesn’t have a dedicated Rootly integration, when you’re ingesting alerts from custom or internal services, or when you want a single, unified pattern for handling webhook-based signals across multiple sources.

Compatible Alerting Tools

If a tool can send an HTTP POST with a JSON body, it works with Rootly. The Generic Webhook Alert Source is the standard way to wire up alerts from:
  • Application performance & error tracking — BugSnag, AppOptics, Coralogix
  • Infrastructure & log monitoring — Sumo Logic, Elastic, Chronosphere, Nagios, PRTG
  • Uptime & synthetic monitoring — Pingdom, StatusCake, uptime.com, Checkly, Cronitor, Runscope
  • Security & threat detection — CrowdStrike, Expel
  • Data quality & observability — Monte Carlo
It also works for internal services, custom monitoring jobs, scheduled scripts, and any system you build in-house that needs to page an on-call responder. If your tool has a dedicated Rootly integration, prefer that — native sources reduce setup time and ship with vendor-specific field mappings. The Generic Webhook Alert Source is the right choice when no native integration exists.

How It Works

The Generic Webhook Alert Source gives you a webhook endpoint URL that your external system POSTs alert events to. When a request arrives, Rootly:
  1. Authenticates the request using the Bearer Token or HMAC signature you configured for the source.
  2. Parses the JSON body and extracts the fields you mapped — title, description, identifier, state, routing target.
  3. Creates or updates an alert based on the External Identifier. If Rootly already has an alert with the same identifier, follow-up events update it; otherwise a new alert is created.
  4. Routes the alert to the target you specified, either from the URL or from the payload itself.
  5. Triggers alert workflows, which can create incidents, page on-call, post to Slack, or run any other downstream action you’ve configured.
Webhook events are processed asynchronously. The POST returns quickly; the alert appears in Rootly shortly after.
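
A minimal sketch of the sending side in Python, assuming a Bearer Token source; the endpoint URL, secret, and payload fields are illustrative (the fields reuse the mapping example later on this page):

import requests

# Placeholder values -- copy the real endpoint URL and secret from the
# source configuration in Rootly.
WEBHOOK_URL = "https://webhooks.rootly.com/webhooks/incoming/generic_webhooks/notify/<type>/<id>"
SECRET = "your-bearer-secret"

event = {
    "alert_title": "High CPU on web-01",
    "alert_description": "CPU > 90% for 5 minutes",
    "external_id": "monitor-12345",
    "state": "triggered",
    "service": "web-api",
}

# Rootly authenticates the request and processes it asynchronously: a 2xx
# response means the event was accepted, not that the alert exists yet.
response = requests.post(
    WEBHOOK_URL,
    json=event,
    headers={"Authorization": f"Bearer {SECRET}"},
    timeout=10,
)
response.raise_for_status()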

Authentication

Rootly supports two authentication models. Pick based on how sensitive the alert data is and how much setup the sending system can absorb.

Bearer Token (default)

A static secret sent in the Authorization header, or as a secret query string parameter on the webhook URL. Prefer the header form when your sending tool supports it — query parameters tend to be captured in proxy and load-balancer access logs, so they’re more exposure-prone than a header. Either form is simple to set up: paste the secret into your sending tool and you’re done.
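
If the sending tool can only set the URL and not headers, the query-string form looks roughly like this; the parameter name below is hypothetical, so copy the exact secret-bearing URL from your source configuration:

import requests

# The "secret" parameter name is hypothetical -- Rootly shows the exact URL
# (including the secret parameter) in the source configuration.
requests.post(
    "https://webhooks.rootly.com/webhooks/incoming/generic_webhooks/notify/<type>/<id>?secret=your-bearer-secret",
    json={"alert_title": "High CPU on web-01", "external_id": "monitor-12345", "state": "triggered"},
)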

HMAC Signature

The sender signs the raw request body with a shared secret using HMAC-SHA256, and Rootly verifies the signature in the X-Webhook-Signature-256 header using a timing-safe comparison. The signing secret never travels over the wire. Use this when the sending system can compute HMAC signatures and you want stronger tamper-evidence on every request.
HMAC must be enabled by Rootly support before you can select it during source creation. The Bearer Token stays required in HMAC mode — Rootly uses the bearer to identify the source and the HMAC signature to verify the request body.
Code samples for both methods, including Python and Node.js HMAC signing examples, live on the Installation page.
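
In the meantime, a minimal sketch of the signing side in Python, assuming an HMAC-SHA256 hex digest in the X-Webhook-Signature-256 header; confirm the digest encoding (and any prefix convention) against the Installation page samples:

import hashlib
import hmac
import json

import requests

WEBHOOK_URL = "https://webhooks.rootly.com/webhooks/incoming/generic_webhooks/notify/<type>/<id>"
BEARER_SECRET = "your-bearer-secret"        # still required in HMAC mode
SIGNING_SECRET = b"your-hmac-signing-secret"

event = {
    "alert_title": "High CPU on web-01",
    "external_id": "monitor-12345",
    "state": "triggered",
}

# Serialize once and sign those exact bytes: the signature must cover the raw
# request body, not a re-serialized or pretty-printed copy of it.
body = json.dumps(event).encode("utf-8")
signature = hmac.new(SIGNING_SECRET, body, hashlib.sha256).hexdigest()

requests.post(
    WEBHOOK_URL,
    data=body,  # send the signed bytes verbatim
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {BEARER_SECRET}",
        # Hex digest assumed here; check the Installation page for the
        # exact encoding Rootly expects.
        "X-Webhook-Signature-256": signature,
    },
    timeout=10,
)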

Mapping Payload Fields

Rootly doesn’t require a specific JSON shape. You map fields from your sending tool’s payload to Rootly’s alert fields during source creation. A typical payload from a monitoring tool:
{
  "alert_title": "High CPU on web-01",
  "alert_description": "CPU > 90% for 5 minutes",
  "external_id": "monitor-12345",
  "state": "triggered",
  "service": "web-api"
}
Map those fields to Rootly’s alert fields in the source configuration:
Rootly Field           Mapped From           Purpose
Title                  alert_title           Headline shown on the alert
Description            alert_description     Body text on the alert
External Identifier    external_id           Stable key Rootly uses to match follow-up events to the same alert
State                  state                 Lifecycle state — drives auto-resolution when a recovery event arrives
Routing Target         service               Which service, team, or escalation policy receives the alert
The External Identifier is what Rootly uses to match recovery events back to the original alert when auto-resolution is configured. Map it to whatever field your sending tool uses as the alert’s persistent ID (monitor_id, incident_id, alert_uuid, etc.) so that a follow-up “resolved” event can close the original alert instead of being ignored. See the Installation guide for the field-mapping UI walkthrough.
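
For example, a later event from the same monitor carries the same external_id, so Rootly updates the existing alert instead of creating a duplicate (field names reused from the sample payload above):

{
  "alert_title": "High CPU on web-01",
  "alert_description": "CPU > 95% for 15 minutes",
  "external_id": "monitor-12345",
  "state": "triggered",
  "service": "web-api"
}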

Routing Alerts

There are two ways to tell Rootly where an alert should go. You can mix both across sources, but a single source typically uses one or the other.

Route By URL

Include the target type and ID directly in the webhook URL Rootly generates:
POST https://webhooks.rootly.com/webhooks/incoming/generic_webhooks/notify/<type>/<id>
Use this when every alert from this source should always route to the same target — one service, one team, or one escalation policy. Cleanest setup, no payload mapping required for the target.
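
For example, a source that should always page one service might use a URL like the following; the type segment and ID are illustrative (the UUID is borrowed from the payload-routing example below), so copy the exact URL Rootly generates for your source:

POST https://webhooks.rootly.com/webhooks/incoming/generic_webhooks/notify/service/8c4a5e91-1b2d-4c3e-9f6a-7d8b2c5e9a01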

Route By Payload

Send the target type and ID in the JSON body and let Rootly extract them via field mappings:
{
  "alert_title": "Database slow query",
  "notification_target_type": "service",
  "notification_target_id": "8c4a5e91-1b2d-4c3e-9f6a-7d8b2c5e9a01"
}
The notification_target_id is the Rootly resource’s internal ID — open the service, team, or escalation policy in Rootly and copy the ID from its edit page. Names and slugs aren’t accepted in this field.
Use this when a single source needs to route alerts to different targets depending on the event — useful when your monitoring tool already tags events by service or team and you want that metadata to drive routing.
Common target types: service, group (or team), escalationPolicy.

Auto-Resolution

When your sending tool fires a recovery or “cleared” event, Rootly can transition the original alert to resolved automatically. The match happens on the External Identifier, and the trigger is an exact-match value in the State field (typically resolved, recovered, or ok). Configure auto-resolution after you’ve set up the source and its field mappings. The dedicated Auto-Resolution page covers the matching rules, payload requirements, and how to define the resolved-state value.
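
A recovery event, sketched with the same illustrative field names as the mapping example above; the external_id matches the original alert and state carries the exact resolved-state value you configured:

{
  "alert_title": "High CPU on web-01",
  "external_id": "monitor-12345",
  "state": "resolved",
  "service": "web-api"
}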

From Alert to Incident

Alerts created through this source behave like any other Rootly alert — regular signals that can drive any of Rootly’s alert-driven automation:
  • Create incidents automatically based on alert content, severity, or service
  • Page on-call responders through escalation policies
  • Post notifications to Slack, Microsoft Teams, or email
  • Trigger downstream workflows that update runbooks, status pages, ticketing systems, or run custom scripts
The webhook source ingests the alert; alert workflows decide what the alert means and what should happen next.

Installation

The full step-by-step setup — creating the source, copying the endpoint URL, configuring authentication, mapping payload fields, configuring routing, and testing the integration — lives on the dedicated Installation page. It includes Python and Node.js code samples for HMAC signing and screenshots of the source-creation UI.

Troubleshooting

The webhook is rejected with an authentication error
The most common cause is a missing or mismatched Bearer Token. Confirm:
  • The Authorization: Bearer <secret> header is present on the request (or the secret query string parameter is set)
  • The secret matches the one shown in the source configuration in Rootly
  • If you’re using HMAC, the X-Webhook-Signature-256 header is also present and computed over the raw request body — not a re-serialized or pretty-printed version

The POST succeeds but no alert appears
Webhook events are processed asynchronously, so check again after a few seconds. If the alert still doesn’t appear:
  • Verify the Title field mapping points at a non-empty field in the payload
  • Check the source’s recent activity in Rootly to confirm the payload was received
  • Confirm the routing target referenced in the URL or payload exists and isn’t archived

Alert fields are empty or wrong
The field mapping is pointing at JSON paths that don’t exist in the incoming payload. Capture a real payload using a request inspector (RequestBin, webhook.site, or a local server), compare it against your mapping, and update the JSON paths to match the actual structure.
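
A throwaway local capture server, if you would rather not use a hosted inspector; point the sending tool at it and it prints whatever arrives:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CaptureHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and print the raw body exactly as sent -- this is the JSON
        # structure your Rootly field mappings need to match.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(json.dumps(json.loads(body), indent=2))
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), CaptureHandler).serve_forever()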

Recovery events create new alerts instead of updating the original
The External Identifier isn’t mapped to a stable, unique value, so the resolved event can’t be matched back to the original alert. Map it to whatever field your sending tool uses as the persistent ID (monitor_id, incident_id, alert_uuid, etc.) so trigger and recovery events share the same identifier.

Auto-resolution isn’t resolving alerts
Either the recovery payload has a different External Identifier than the original alert, or the State field isn’t mapping to the exact resolved value you configured. See the Auto-Resolution page for the matching rules.

Alerts from one source need to reach different targets
Switch from URL-based routing to payload-based routing. Map the target type and target ID fields from the incoming JSON so the routing decision lives in the event itself.

Frequently Asked Questions

Does my tool need a native Rootly integration?
No. If your tool can send an HTTP POST with a JSON body, the Generic Webhook Alert Source can ingest its alerts. Native integrations are nicer when they exist because they ship with vendor-specific field mappings and reduce setup time, but they’re not required.

What’s the difference between an alert and an incident?
Alerts are signals — discrete events from your monitoring stack. Incidents are coordinated response — the war room, the timeline, the postmortem. One incident may be informed by many alerts. The webhook source creates alerts; alert workflows decide when an alert should escalate to an incident.

Can multiple tools share one webhook source?
You can, but it’s cleaner to create one source per sending tool. Separate sources make it easier to route alerts differently per tool, attribute issues during troubleshooting, and manage authentication independently.

Can one alert notify multiple targets?
Route the alert to a single target with the webhook, then use an alert workflow to fan out the response — page on-call, post to Slack, create an incident, update a status page. The webhook ingests the alert; the workflow decides what should happen next.

Should the Bearer secret go in a header or in the URL?
Both work. Rootly accepts the Bearer secret either as the Authorization: Bearer <secret> header or as a secret query string parameter on the webhook URL.

Can Rootly compose the alert title from multiple payload fields?
The webhook source maps from existing payload fields, and alert workflows fire after an alert is created and routed — not before. To compose a richer title (combining service name, severity, and a summary, for example), construct the title in your sending system before the webhook is sent and map the prepared field as the Title.

What payload format does Rootly expect?
Any valid JSON. Rootly doesn’t require a specific schema — instead you map your tool’s existing fields to Rootly’s alert fields during source setup. Most monitoring tools work out of the box; you just point them at the webhook URL and configure the mapping.

Installation

Step-by-step setup with code samples, payload mapping, routing, and testing.

Auto-Resolution

Match recovery events to the original alert and resolve automatically.

Alert Workflows

Turn alerts into incidents, page on-call, post notifications, and run custom automation.