The Generic Webhook Alert Source lets Rootly ingest alerts from any tool that can fire HTTP webhooks. If your monitoring, observability, security, or data-quality platform can send a POST with a JSON body, Rootly can turn that event into an alert, route it to the right responder, and trigger downstream automation. Use it when your alerting tool doesn’t have a dedicated Rootly integration, when you’re ingesting alerts from custom or internal services, or when you want a single, unified pattern for handling webhook-based signals across multiple sources.
Documentation Index
Fetch the complete documentation index at: https://docs.rootly.com/llms.txt
Use this file to discover all available pages before exploring further.
Compatible Alerting Tools
If a tool can send an HTTP POST with a JSON body, it works with Rootly. The Generic Webhook Alert Source is the standard way to wire up alerts from:
- Application performance & error tracking — BugSnag, AppOptics, Coralogix
- Infrastructure & log monitoring — Sumo Logic, Elastic, Chronosphere, Nagios, PRTG
- Uptime & synthetic monitoring — Pingdom, StatusCake, uptime.com, Checkly, Cronitor, Runscope
- Security & threat detection — Crowdstrike, Expel
- Data quality & observability — Monte Carlo
How It Works
The Generic Webhook Alert Source gives you a webhook endpoint URL that your external system POSTs alert events to. When a request arrives, Rootly:
- Authenticates the request using the Bearer Token or HMAC signature you configured for the source.
- Parses the JSON body and extracts the fields you mapped — title, description, identifier, state, routing target.
- Creates or updates an alert based on the External Identifier. If Rootly already has an alert with the same identifier, follow-up events update it; otherwise a new alert is created.
- Routes the alert to the target you specified, either from the URL or from the payload itself.
- Triggers alert workflows, which can create incidents, page on-call, post to Slack, or run any other downstream action you’ve configured.
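The create-or-update step above can be sketched as an upsert keyed on the External Identifier. This is an illustrative model of the matching behavior, not Rootly's actual implementation:

```python
# Illustrative sketch of create-or-update matching on the External
# Identifier; not Rootly's actual implementation.
alerts = {}  # external_id -> alert record

def ingest(event):
    """Upsert an alert keyed on its external identifier."""
    ext_id = event["external_id"]
    if ext_id in alerts:
        alerts[ext_id].update(event)   # follow-up event updates the alert
        return "updated"
    alerts[ext_id] = dict(event)       # first event creates it
    return "created"

print(ingest({"external_id": "monitor-42", "state": "triggered",
              "alert_title": "High error rate"}))          # created
print(ingest({"external_id": "monitor-42", "state": "resolved"}))  # updated
```

Because follow-up events merge into the existing record, a recovery event only needs the identifier and the new state; earlier fields such as the title are preserved.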
Authentication
Rootly supports two authentication models. Pick based on how sensitive the alert data is and how much setup the sending system can absorb.
Bearer Token (default)
A static secret sent in the Authorization header, or as a secret query-string parameter on the webhook URL. Prefer the header form when your sending tool supports it — query parameters tend to be captured in proxy and load-balancer access logs, so they’re more exposure-prone than a header. Either form is simple to set up: paste the secret into your sending tool and you’re done.
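A minimal sender-side sketch of the header form, using only the standard library. The endpoint URL and token are placeholders; copy the real values from your source's settings page in Rootly:

```python
import json
import urllib.request

# Placeholders: copy the real endpoint URL and secret from your
# Generic Webhook source's settings page in Rootly.
WEBHOOK_URL = "https://webhooks.rootly.example/generic/abc123"
BEARER_TOKEN = "s3cr3t-token"

def build_request(payload: dict) -> urllib.request.Request:
    """Build a POST with the secret in the Authorization header
    (preferred over a query-string parameter, which proxies may log)."""
    return urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {BEARER_TOKEN}",
        },
        method="POST",
    )

req = build_request({"alert_title": "Disk almost full", "state": "triggered"})
# urllib.request.urlopen(req) would deliver the event to Rootly.
```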
HMAC Signature
The sender signs the raw request body with a shared secret using HMAC-SHA256, and Rootly verifies the signature in the X-Webhook-Signature-256 header using a timing-safe comparison. The signing secret never travels over the wire. Use this when the sending system can compute HMAC signatures and you want stronger tamper-evidence on every request.
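The signing side can be sketched with the standard library. This assumes the signature is sent as a plain hex digest; check the Installation page's code samples for the exact encoding Rootly expects:

```python
import hashlib
import hmac

SIGNING_SECRET = b"shared-signing-secret"  # placeholder

def sign(raw_body: bytes) -> str:
    """HMAC-SHA256 over the raw request body, hex-encoded; sent in the
    X-Webhook-Signature-256 header."""
    return hmac.new(SIGNING_SECRET, raw_body, hashlib.sha256).hexdigest()

def verify(raw_body: bytes, received_signature: str) -> bool:
    """Timing-safe comparison, mirroring the check Rootly performs."""
    return hmac.compare_digest(sign(raw_body), received_signature)

body = b'{"alert_title": "High latency", "state": "triggered"}'
signature = sign(body)
# headers = {"X-Webhook-Signature-256": signature}
```

Note that the signature covers the raw bytes of the body, so the sender must sign exactly what it transmits; re-serializing the JSON after signing would invalidate the signature.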
Mapping Payload Fields
Rootly doesn’t require a specific JSON shape. You map fields from your sending tool’s payload to Rootly’s alert fields during source creation. A typical mapping for a monitoring-tool payload:
| Rootly Field | Mapped From | Purpose |
|---|---|---|
| Title | alert_title | Headline shown on the alert |
| Description | alert_description | Body text on the alert |
| External Identifier | external_id | Stable key Rootly uses to match follow-up events to the same alert |
| State | state | Lifecycle state — drives auto-resolution when a recovery event arrives |
| Routing Target | service | Which service, team, or escalation policy receives the alert |
Map the External Identifier to a value that stays stable across events for the same underlying issue (monitor_id, incident_id, alert_uuid, etc.) so that a follow-up “resolved” event can close the original alert instead of being ignored.
See the Installation guide for the field-mapping UI walkthrough.
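Under the mapping above, a sending tool's payload might look like the following. The field names are the illustrative ones from the table; your tool's actual field names will differ:

```python
import json

# Example payload using the illustrative field names from the mapping
# table; each comment notes the Rootly field it maps to.
event = {
    "alert_title": "CPU > 90% on web-01",         # -> Title
    "alert_description": "Sustained for 15 min",  # -> Description
    "external_id": "monitor-8731",                # -> External Identifier
    "state": "triggered",                         # -> State
    "service": "frontend",                        # -> Routing Target
}
body = json.dumps(event)
```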
Routing Alerts
There are two ways to tell Rootly where an alert should go. You can mix both across sources, but a single source typically uses one or the other.
Route By URL
Route By Payload
The notification_target_id is the Rootly resource’s internal ID — open the service, team, or escalation policy in Rootly and copy the ID from its edit page. Names and slugs aren’t accepted in this field. Use this when a single source needs to route alerts to different targets depending on the event — useful when your monitoring tool already tags events by service or team and you want that metadata to drive routing. Supported target types are service, group (or team), and escalationPolicy.
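A hedged sketch of a route-by-payload event. notification_target_id and the target-type values come from the text above; the field name notification_target_type carrying the type is an assumption for illustration:

```python
# Route-by-payload sketch. notification_target_id and the target-type
# values (service, group, escalationPolicy) come from the docs;
# "notification_target_type" is an assumed field name for illustration.
event = {
    "alert_title": "Queue depth exceeded",
    "external_id": "monitor-5512",
    "state": "triggered",
    "notification_target_id": "f1e2d3c4",  # internal ID copied from Rootly
    "notification_target_type": "escalationPolicy",
}
```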
Auto-Resolution
When your sending tool fires a recovery or “cleared” event, Rootly can transition the original alert to resolved automatically. The match happens on the External Identifier, and the trigger is an exact-match value in the State field (typically resolved, recovered, or ok).
Configure auto-resolution after you’ve set up the source and its field mappings. The dedicated Auto-Resolution page covers the matching rules, payload requirements, and how to define the resolved-state value.
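A trigger/recovery pair under this model might look like the following sketch. The field names are illustrative; what matters is that both events share the identifier and the recovery carries the configured resolved-state value:

```python
# Trigger and recovery events for the same issue must share the same
# external identifier; the recovery's state must exactly match the
# resolved-state value configured on the source (e.g. "resolved").
trigger = {
    "external_id": "monitor-204",
    "alert_title": "Healthcheck failing",
    "state": "triggered",
}
recovery = {
    "external_id": "monitor-204",  # same identifier -> matches the open alert
    "state": "resolved",           # exact-match value drives auto-resolution
}
```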
From Alert to Incident
Alerts created through this source behave like any other Rootly alert — they’re regular signals that can drive any of Rootly’s alert-driven automation:
- Create incidents automatically based on alert content, severity, or service
- Page on-call responders through escalation policies
- Post notifications to Slack, Microsoft Teams, or email
- Trigger downstream workflows that update runbooks, status pages, ticketing systems, or run custom scripts
Installation
The full step-by-step setup — creating the source, copying the endpoint URL, configuring authentication, mapping payload fields, configuring routing, and testing the integration — lives on the dedicated Installation page. It includes Python and Node.js code samples for HMAC signing and screenshots of the source-creation UI.
Troubleshooting
Rootly returns 401 Unauthorized
The webhook returns 200 but no alert appears in Rootly
- Verify the Title field mapping points at a non-empty field in the payload
- Check the source’s recent activity in Rootly to confirm the payload was received
- Confirm the routing target referenced in the URL or payload exists and isn’t archived
Alerts appear but the fields are empty
Recoveries create new alerts instead of closing the original
Map the External Identifier to a value that stays stable across events (monitor_id, incident_id, alert_uuid, etc.) so trigger and recovery events share the same identifier.
Recovery events don't resolve the original alert
One source needs to route to multiple teams
Frequently Asked Questions
Does my monitoring tool need a native Rootly integration?
What's the difference between an alert and an incident in Rootly?
Can I use one webhook source for multiple tools?
How do I send the same alert to multiple targets?
What if my tool only supports query-string secrets, not Authorization headers?
Can I customize the alert title beyond what the payload contains?
What payload shape does Rootly expect?