How Alerts Work
Alerts represent events generated by your monitoring and operational tools when something needs attention, such as a firing Datadog monitor, a PagerDuty incident, or a Zendesk ticket. In Rootly:
- Each alert has a status: open, triggered, acknowledged, or resolved.
- Alerts can be linked to incidents, so responders see the right telemetry and tickets in context.
- Alert records track a request count and last received time, so you can see how often a condition has fired.
For example, an alert is created in Rootly when:
- A PagerDuty incident has been created
- A Datadog alert has been triggered
- A Zendesk ticket has been created
Supported Alert Sources
Rootly integrates with many alerting and ticketing tools. Some of the most common include:

| Integration | Trigger |
|---|---|
| PagerDuty | When a PagerDuty Incident is created |
| Opsgenie | When an Opsgenie Incident is created |
| Splunk On-Call (VictorOps) | When a VictorOps Incident is created |
| Datadog | When a Datadog alert is triggered |
| Zendesk | When a Zendesk ticket is created (customizable) |
| Nobl9 | When an SLO is not satisfied |
| Sentry | When a Sentry alert is triggered |
Alert ingestion is rate limited to 50 alerts per minute per source/API key by default.
This limit is configurable per team, and higher limits are available for Enterprise customers upon request.
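To make the default limit concrete, here is a minimal sketch of a sliding-window counter that admits at most 50 alerts per minute per source/API key. The class and method names are illustrative only; they are not Rootly's implementation, just a way to reason about the limit described above.

```python
import time
from collections import defaultdict, deque

# Illustrative sketch of a per-source sliding-window rate limit
# (default: 50 alerts per minute per source/API key). Names are
# hypothetical; this is not Rootly's internal implementation.
class AlertRateLimiter:
    def __init__(self, limit_per_minute: int = 50):
        self.limit = limit_per_minute
        self.window_seconds = 60
        self._timestamps = defaultdict(deque)  # source key -> recent arrival times

    def allow(self, source_key: str) -> bool:
        """Return True if an alert from this source/API key may be ingested now."""
        now = time.monotonic()
        window = self._timestamps[source_key]
        # Drop arrivals that fell out of the 60-second window.
        while window and now - window[0] >= self.window_seconds:
            window.popleft()
        if len(window) >= self.limit:
            return False  # over the per-minute limit for this source
        window.append(now)
        return True
```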
Alert Deduplication
Monitoring systems often keep sending alerts while a condition remains unhealthy. Instead of flooding responders with multiple identical alerts, Rootly provides two layers of deduplication:
- Configurable per-Alert Source dedupe (by unique identifier)
- Payload-based suppression (ignored duplicate requests)
1. Combining Alerts by Unique Identifier
At the Alert Source level, you can tell Rootly to combine duplicate alerts into one alert using a stable identifier from the payload or alert fields. To configure:
- Open an Alert Source in Rootly Web.
- Go to the Events tab.
- Toggle on Combine duplicate alerts into one alert.
- Choose where the identifier comes from:
  - Payload – use a JSONPath into the raw payload (recommended for most cases)
  - Alert field – use a specific alert attribute
- Provide the deduplication key path (e.g., a JSONPath value).
- Optionally apply a regular expression to normalize the value before matching.
In the Alert Source UI, you can preview sample alerts.
Clicking a purple pill in the payload viewer copies its JSONPath—use this directly as your deduplication key path.
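For illustration, the sketch below shows how a deduplication key might be derived from an incoming payload: a JSONPath-style path selects a stable identifier, and an optional regular expression normalizes it before matching. The payload shape, path, and regex here are invented for the example; use the pills in the payload viewer to find real paths for your source.

```python
import re

# Hypothetical incoming payload from a monitoring tool; the shape and
# field names are invented for illustration only.
payload = {
    "monitor": {"id": 4815162342, "name": "High error rate on checkout"},
    "status": "triggered",
}

# Deduplication key path, written JSONPath-style ("$.monitor.id").
# Resolved here with a tiny helper rather than a full JSONPath library.
def resolve_path(data: dict, path: str):
    value = data
    for part in path.lstrip("$.").split("."):
        value = value[part]
    return value

dedup_key = str(resolve_path(payload, "$.monitor.id"))

# Optional regex normalization, e.g. keep only the digits of the identifier.
match = re.search(r"\d+", dedup_key)
if match:
    dedup_key = match.group(0)

print(dedup_key)  # "4815162342" -> alerts sharing this key are combined
```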
When a new alert request arrives:
- If its deduplication key matches an existing alert, Rootly:
  - Does not create a new alert
  - Adds a duplicate/ignored request event to the original alert
  - Increments the alert’s requests count
  - Updates last_request_at
On the combined alert, you’ll see:
- A badge like ×3 next to the alert
- A tooltip indicating how many matching requests have been received and when the last one arrived
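A minimal sketch of this combine-on-match behavior, assuming an in-memory store keyed by deduplication key. The data structures and function names are illustrative; real alert records live in Rootly, not in your code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative in-memory model of a Rootly-style alert record.
@dataclass
class Alert:
    dedup_key: str
    summary: str
    status: str = "triggered"
    requests_count: int = 1
    last_request_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    events: list = field(default_factory=list)

alerts_by_key: dict[str, Alert] = {}

def ingest(dedup_key: str, summary: str) -> Alert:
    """Combine requests that share a deduplication key into one alert."""
    existing = alerts_by_key.get(dedup_key)
    if existing:
        # No new alert: record an ignored/duplicate request on the original,
        # bump the requests count, and refresh last_request_at.
        existing.events.append("ignored_alert_request")
        existing.requests_count += 1
        existing.last_request_at = datetime.now(timezone.utc)
        return existing
    alert = Alert(dedup_key=dedup_key, summary=summary)
    alerts_by_key[dedup_key] = alert
    return alert

# Three requests with the same key yield one alert with a ×3 badge.
for _ in range(3):
    alert = ingest("4815162342", "High error rate on checkout")
print(alert.requests_count)  # 3
```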
2. Payload-Based Duplicate Suppression
In addition to key-based dedupe, Rootly can also suppress exact payload duplicates at the team level. When enabled, if Rootly sees another alert with the exact same request body as a previous “ignored” event for that alert:
- The request is counted against the same alert
- Rootly records an internal event (e.g., "ignored_alert_request")
- No new alert is created
Linking Alerts to Incidents
Alerts become most useful when tied directly to incidents. In Rootly, alerts can be associated with incidents via:
- Integration mappings and workflows
  - e.g., “When a PagerDuty incident is created, attach the alert to the corresponding Rootly incident.”
- Automation logic
  - e.g., based on service, environment, or alert attributes (see the sketch after this list)
- Manual linking from the incident or alert views
Once linked, responders can:
- Jump from the incident to the underlying alert(s)
- See how many times the alert fired (via the requests count)
- Use alert details to drive mitigation and follow-up tasks
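As a rough illustration of the automation-logic approach above, the routing rules below match alert attributes such as service and environment to decide which incident an alert should attach to. The attribute names, rules, and return values are hypothetical; real linking is configured through Rootly workflows and integrations.

```python
# Hypothetical routing rules: which incident should an alert attach to,
# based on its attributes? Field names and incident IDs are invented
# for illustration only.
ROUTING_RULES = [
    {"service": "checkout", "environment": "production", "incident_id": "INC-1024"},
    {"service": "search",   "environment": "production", "incident_id": "INC-1025"},
]

def route_alert(alert: dict) -> str | None:
    """Return the incident ID this alert should be linked to, if any."""
    for rule in ROUTING_RULES:
        if (alert.get("service") == rule["service"]
                and alert.get("environment") == rule["environment"]):
            return rule["incident_id"]
    return None  # leave unlinked; a responder can link it manually

print(route_alert({"service": "checkout", "environment": "production"}))  # "INC-1024"
```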
Best Practices
- **Choose a stable deduplication key.** Use identifiers like monitor IDs, incident keys, or ticket IDs; avoid full message text or highly variable fields.
- **Start narrow, then broaden if needed.** Begin with conservative dedup rules and relax them as you gain confidence, to avoid accidentally merging unrelated alerts.
- **Link alerts to incidents early.** Use workflows to automatically attach alerts to incidents as soon as they are ingested.
- **Watch the request count.** A high ×N count on an alert is a strong signal of ongoing or flapping conditions and can inform severity and prioritization.
- **Tune rate limits for noisy environments.** If you know a source can spike, consider increasing the per-source rate limit for that team.
Troubleshooting
Alerts aren’t being combined as expected
Confirm that Combine duplicate alerts into one alert is enabled on the Alert Source and that the deduplication key path points to a stable, consistent value.
If the key changes between alerts, Rootly treats them as separate alerts.
I see fewer alerts than my provider shows
Often this means deduplication is working as designed.
Multiple provider alerts may be mapped to a single Rootly alert, with extra occurrences recorded as ignored/duplicate requests on the original alert.
The alert requests counter isn’t increasing
Make sure:
- Deduplication is configured correctly, or payload-based suppression is enabled
- Incoming payloads actually match the configured dedup key or body
If the identifier or body differs, Rootly will create separate alerts instead of incrementing the existing one.
Alerts are hitting rate limits
Rootly enforces a per-team, per-source/API key rate limit (default 50 alerts/minute).
For high-throughput environments, increase the alerts rate limit per minute in team settings or contact support for higher Enterprise limits.
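If the senders are under your control, a client-side retry with backoff can smooth over short bursts while limits are being tuned. The sketch below assumes the ingestion endpoint signals rate limiting with HTTP 429 (an assumption; check your integration's actual behavior), and the URL is a placeholder for your alert source's webhook URL.

```python
import time
import urllib.error
import urllib.request

# Placeholder URL; use the webhook/ingestion URL configured for your alert
# source. The 429 handling is an assumption about how the endpoint signals
# rate limiting, not documented Rootly behavior.
ALERT_URL = "https://example.invalid/alert-source-webhook"

def post_alert_with_backoff(body: bytes, max_attempts: int = 5) -> bool:
    """POST an alert payload, backing off exponentially if rate limited."""
    delay = 1.0
    for _ in range(max_attempts):
        request = urllib.request.Request(
            ALERT_URL, data=body, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(request) as response:
                return 200 <= response.status < 300
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise  # not a rate-limit problem; surface the error
            time.sleep(delay)  # wait before retrying the same payload
            delay *= 2
    return False
```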
An alert from my tool never appears in Rootly
Check:
- The integration is installed and authenticated
- The mapping points to the right team or alert source
- The webhook or outbound configuration is using the correct URL
- The payload contains all required fields for that integration
Also review integration error logs in Rootly for more details.