# Overview Source: https://docs.rootly.com/ai/ai Discover how Rootly AI acts as your experienced engineer, providing proactive troubleshooting, summaries, and conversational incident management throughout the entire lifecycle. Using the power of GenAI at every stage of the incident lifecycle—from alert to retrospective—Rootly AI provides proactive troubleshooting tips, accurate summaries, and automatic metric reports with simple conversational prompts. With Rootly AI, you can use: * [Generated Incident Title](/ai/generated-incident-title "Generated Incident Title") * [Incident Summarization](/ai/incident-summarization "Incident Summarization") * [Incident Catchup](/ai/incident-catchup "Incident Catchup") * [Mitigation and Resolution](/ai/mitigation-and-resolution-summary "Mitigation and Resolution") * [Ask Rootly AI](/ai/ask-rootly-ai "Ask Rootly AI") * [Rootly AI Editor](/ai/rootly-ai-editor "Rootly AI Editor") * [Virtual Meeting Bot](/ai/ai-meeting-bot) Rootly AI also ensures privacy and flexibility, allowing users to seamlessly opt in or out of AI features and customize data access permissions. Learn more [here](https://shared.archbee.space/doc/wPy7HIXCArkg_ZqVYHTQt/Wm_3udbDOJFGTkssXTWch "here"). Document image # AI Meeting Bot Source: https://docs.rootly.com/ai/ai-meeting-bot Automatically record, transcribe, and summarize incident bridge calls to break down communication silos and support postmortem documentation. Your AI Meeting Bot helps break down communication silos between incident bridges and the rest of your team. ### Configuration[](#yihQh) To get started, integrate your Rootly instance with your virtual meeting room software of choice. Check out our [integration documentation](/integrations "integration documentation") for details on setting up specific integrations. Once Rootly is successfully integrated with your virtual meeting room software, click into your integration from the Integrations page and toggle on 'Meeting transcript and summary'. Document image During your incident bridges, admit your Rootly Meeting Bot into the meeting to begin recording and transcribing your call. **Note**: Make sure your team uses the virtual meeting room that Rootly creates for you when the incident starts so your Meeting Bot can join the call. The meeting link can be found in the pinned Slack message at the top of your incident's Slack channel. Document image ### On incident bridges[](#PAphj) Once your Meeting Bot has been admitted to your incident bridge, it will immediately begin recording and transcribing the meeting. Any new responders joining the incident can use [Incident Catchup](/ai/incident-catchup "Incident Catchup") to receive up-to-date information on the incident; Rootly AI will also include information from your incident bridge to give responders a complete picture of the incident. ### After your incident[](#CMHSJ) Once the call has ended and your incident is resolved, the Meeting Bot will update your incident with a call recording, transcription, and summary of what was discussed to support your postmortem rituals. These can be found in the Meeting tab of your incident. Document image # Ask Rootly AI Source: https://docs.rootly.com/ai/ask-rootly-ai Interactive AI assistant that answers questions about incidents, provides summaries, and helps with incident management tasks directly in Slack or the web interface. Got a simple question to ask? Need a summary for your customer-facing teams?
Ask Rootly AI can help answer questions and handle a variety of prompts in an incident channel in Slack or on the web. **Via the web in an incident:** Document image **Via Slack in an incident channel: ask @rootly questions** Document image *Note: Ask Rootly AI is restricted to answering questions about the current incident and incident management practices.* ### Ask Rootly AI Questions and Prompts[](#PPE0U) **General Questions** * What happened? * What caused the incident? * Who is the commander? * When was the incident declared? * When did this incident start? **Overview of Actions Taken** * What have we tried? * What questions have we asked? * What decisions have we made so far? * What should I do next? * What are the next steps for this incident? **Summaries** * Write me a summary * Write me a summary to share with an executive * Write me some customer-facing communication summarizing the incident * Write me a status page update **General Help** * What are you capable of? * What are examples of things you can do? ### Configuration[](#g0Ckv) Ask Rootly AI is available for all customers. To enable, head to Rootly AI > Enable Rootly AI and opt in to Rootly AI capabilities. Note that only Admins can enable Rootly AI. To ensure the best results, 'Slack channel message visibility' is set to 'All messages' by default. You can change this at any time. Document image # Data Privacy for AI Source: https://docs.rootly.com/ai/data-privacy-for-ai Learn about Rootly's data privacy and security safeguards for AI features, including what data is shared with OpenAI and how your incident information is protected. Rootly is dedicated to maintaining the highest standards of privacy and security. [Read more about our data philosophy](https://rootly.com/blog/building-a-privacy-first-ai-for-incident-management). * Rootly AI, driven by OpenAI, incorporates multiple safeguards to ensure the security of your data, providing you with peace of mind. * Data sent to OpenAI is solely used to provide Rootly AI services and is neither stored nor used for training purposes by OpenAI. * We automatically redact the following PII before sending any data to OpenAI: * email addresses, phone numbers, credit card numbers, social security numbers (SSNs), and passwords in URLs * Private incident data is **never** sent to OpenAI. * Rootly AI never uses your data (even anonymized) to improve results for other customers; it stays within the walls of your organization and is only used there. * You may opt out at any time via the [AI configuration page](https://rootly.com/account/ai/configurations). How your data is used will never change without your explicit approval. * Optionally, organizations may [integrate their OpenAI account](https://rootly.com/account/integrations/open_ai_accounts/new) to take advantage of any organization-specific data retention policies. **Data from the incident that will be considered includes:** * Built-in and custom fields * Human-created timeline events * Completed action items * Timestamps * Alert source * Mitigated and resolved messages * Slack messages from the incident channel (depending upon [Slack channel message visibility](https://rootly.com/account/ai/configurations)) **Data that is not considered includes:** * Incident feedback * Automated timeline events relating to action items, workflow runs, and playbooks * Any data from private incidents Note: To enable higher-quality output, [Slack Scope Updates](/ai/slack-scope-updates) are required.
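To make the redaction guarantee above more concrete, here is an illustrative sketch of the kind of masking applied to the PII categories listed. This is a simplified Python example, not Rootly's actual redaction implementation; the patterns and labels are hypothetical.

```python
# Illustrative only (not Rootly's implementation): masking a few of the
# PII categories listed above before text would leave your instance.
import re

PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "phone": r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b",
}

def redact(text: str) -> str:
    """Replace matches of each PII pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Paged jane@example.com at 555-123-4567"))
# Paged [REDACTED EMAIL] at [REDACTED PHONE]
```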
# Generated Incident Title Source: https://docs.rootly.com/ai/generated-incident-title Automatically generate descriptive and accurate incident titles using AI that analyzes incident information and updates as more details become available. Rootly AI can instantly generate descriptive and accurate incident titles by pulling up-to-date information about the incident. As more information comes in about the incident, you can regenerate new, even more informative titles. You can generate an incident title on the web or in Slack by using the command `/rootly update`. **Via web in an incident: trigger by clicking the magic pen** Document image **Via Slack in an incident channel: trigger via the** `/rootly update` **command then clicking the 'Generate with AI' button** Document image *Note: Incident title generation is not available in private incidents.* ### Configuration[](#dorky) Generated incident titles by Rootly AI are available for all customers. To enable, head to **Rootly AI** > **Enable Rootly AI** and opt in to Rootly AI capabilities. Note that only Admins can enable Rootly AI. To ensure the best results, **'Slack channel message visibility'** is set to **'All messages'** by default. You can change this at any time. Document image # Incident Catchup Source: https://docs.rootly.com/ai/incident-catchup Quickly get up to speed on ongoing incidents using AI-generated summaries when joining an incident Slack channel as a responder. Have you ever been pulled into an incident without knowing what's going on or where to begin? Well, Rootly AI can help with just that. As a responder joining an incident Slack channel, you can learn everything you need to know in seconds with a single Slack command: `/rootly catchup`. Summaries will only be visible to you, and you can generate a new summary by clicking on 'Update Summary.' Via Slack in an incident channel: Run the command `/rootly catchup`. Document image Document image Note: Incident summary generation is not available in private incidents. ### Configuration[](#oRX6M) Incident Catchup is available for all customers. To enable, head to Rootly AI > Enable Rootly AI and opt in to Rootly AI capabilities. Note that only Admins can enable Rootly AI. To ensure the best results, **'Slack channel message visibility'** is set to **'All messages'** by default. You can change this at any time. Document image # Incident Summarization Source: https://docs.rootly.com/ai/incident-summarization Generate AI-powered incident summaries that analyze historical incidents, suggest resolution steps, and invite previous responders to help resolve current incidents. Generate incident summaries instantly with Rootly AI. Using historical incidents, our advanced AI models will detect key similarities. We'll tell you how similar incidents were resolved in the past, suggest the best next steps, and optionally invite previous responders to help. You can do this whether you're on the web or on Slack by using the commands `/rootly summary` or `/rootly update`. **Via web in an incident: click the genius pen** Document image **Via Slack in an incident channel: run the** `/rootly summary` **or** `/rootly update` **command and select the 'Generate with AI' button** Document image *Note: Incident summarization is not available in private incidents.* ### Configuration[](#kOMIh) Incident summaries generated by Rootly AI are available for all customers. To enable, head to Rootly AI > Enable Rootly AI and opt in to Rootly AI capabilities.
Note that only Admins can enable Rootly AI. To ensure the best results, **'Slack channel message visibility'** is set to **'All messages'** by default. You can change this at any time. Document image # Mitigation And Resolution Summary Source: https://docs.rootly.com/ai/mitigation-and-resolution-summary Generate AI-powered mitigation and resolution summaries for incidents using Rootly AI through the web interface or Slack commands. Do you need to write up mitigation and resolution summaries? Rootly AI has you covered! By pulling information on the current state of the incident, Rootly AI will give you a mitigation and resolution summary just like that, whether you are on the web or in Slack, using the commands `/rootly mitigate` or `/rootly resolve`. **Via web in an incident: click the genius pen** Document image **Via Slack in an incident channel: run the** `/rootly mitigate` **or** `/rootly resolve` **commands and select the 'Generate with AI' button** Document image ### Configuration[](#joai1) Mitigation and resolution summaries generated by Rootly AI are available for all customers. To enable, head to Rootly AI > Enable Rootly AI and opt in to Rootly AI capabilities. Note that only Admins can enable Rootly AI. To ensure the best results, 'Slack channel message visibility' is set to 'All messages' by default. You can change this at any time. Document image # Rootly AI Editor Source: https://docs.rootly.com/ai/rootly-ai-editor Improve your writing across Rootly with AI-powered text editing that fixes spelling and grammar, adjusts length, and simplifies language. Being an award-winning writer isn't for everyone, but don't worry, we've got you covered. Rootly AI Editor is available across Rootly in our text inputs, and it's there to help fix spelling and grammar, shorten or lengthen sentences, and even simplify the language used. **Via web: trigger by highlighting any text in a text input** Document image ### Configuration[](#IX4jj) Rootly AI Editor is available for all customers. To enable, head to Rootly AI > Enable Rootly AI and opt in to Rootly AI capabilities. Note that only Admins can enable Rootly AI. To ensure the best results, 'Slack channel message visibility' is set to 'All messages' by default. You can change this at any time. Document image # Slack Scope Updates Source: https://docs.rootly.com/ai/slack-scope-updates Configure enhanced Slack permissions to enable Rootly AI features with customizable privacy levels for incident channel message ingestion. Administrators will be prompted to update Slack Scopes to enable Rootly AI. These updated scopes enable the ingestion of the contents of incident Slack channels, providing a higher-quality experience when generating AI responses. These enhanced scopes are only used in the context of Rootly AI. They can be [configured to five varying levels of privacy](https://rootly.com/account/ai/configurations "configured to five varying levels of privacy"), from fully permissive (ingesting every message in an incident channel), to ingesting content only from specific types of incidents (public or private), to completely off (ingesting no Slack messages). Document image # Alert Fields Source: https://docs.rootly.com/alerts/alert-fields Use Alert Fields to extract, normalize, and store structured alert data for routing, enrichment, automation, and triage across Rootly. Alert Fields allow you to extract key information from incoming alert payloads and store it in a normalized format that can be used consistently across Rootly.
This removes the need to understand every alert provider’s unique payload structure—Rootly handles that translation automatically. Alert Fields are populated automatically on alert creation or update, depending on the mappings you configure on each Alert Source. *** ### Overview Different observability tools send alerts in very different formats. Alert Fields standardize this by letting you: * Normalize metadata such as environment, severity, region, service, or product area * Route alerts consistently, regardless of which tool sent them * Enrich alerts with structured information to help responders triage faster * Build metrics and dashboards using clean, uniform data * Simplify workflows across multi-tool monitoring environments Alert Fields become part of the alert record itself and are accessible everywhere Rootly evaluates conditions, displays alert information, or triggers automation. *** ### How Alert Fields Work When an alert is ingested: 1. Rootly reads the raw payload from the alert source. 2. Each configured mapping is evaluated using Liquid. 3. The results are stored as `alert_field_values`. 4. The normalized fields are then available throughout the platform. Rootly automatically seeds built-in fields when creating a new Alert Source so you can map values immediately. Alert Fields configuration *** ### Examples **Route alerts by impacted product area**\ Map a `product_area` field using Liquid, then build routes that send alerts to the correct on-call team. **Enrich alert details for responders**\ Extract severity, region, deployment ID, customer tier, or any custom metadata. **Build better metrics and dashboards**\ Use normalized field values to track trends without parsing different payload structures. **Simplify multi-tool environments**\ Create one `severity` field and map Datadog, PagerDuty, Opsgenie, and Sentry severities into it consistently. *** ### Configuring Alert Fields To configure Alert Fields: Navigate to the Alert Source and select the Fields tab to view all fields currently mapped. Click Add Field to select an existing field or create a new one.\ New fields immediately become available across all alert sources. Specify a Liquid expression that extracts a value from the alert payload.\ Reference recent alerts using the preview on the right. Click any purple pill in the payload viewer to copy its Liquid expression. All future alerts from this source will populate the field using your mapping. If the title or description fields are left blank, Rootly automatically assigns reasonable defaults (for example, using the subject line for email alert sources). *** ### Using Alert Fields in Alert Routes Alert Fields can be referenced directly in Alert Route conditions.\ This allows your routing logic to be built once and work across all sources, as long as each source maps its payload fields correctly. Examples: * Route all `severity = critical` alerts to the primary on-call * Route `region = EU` alerts to the EMEA team * Route alerts associated with a specific service or component * Route customer-impacting alerts differently from internal signals Learn more on the **[Alert Routes](/alert-routing)** page. *** ### Using Alert Fields for Auto-Resolution Rules (Email Sources) Email alert sources support auto-resolution rules based on Alert Fields. To set this up: 1. Open the email alert source. 2. Define auto-resolution conditions. 3. Reference Alert Fields in those conditions (e.g., subject text, severity, environment). 
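To make this concrete, here is a minimal sketch of how normalized Alert Field values could drive such a condition. The payload shape, Liquid mappings, field names, and matching rule below are hypothetical illustrations, not Rootly's evaluation engine.

```python
# Hypothetical Alert Field values for an incoming email alert, as they
# might be produced by Liquid mappings such as {{ subject }} or
# {{ payload.check.state }} configured on the Alert Source.
alert_fields = {
    "subject": "[RESOLVED] CPU usage back to normal on web-01",
    "severity": "warning",
    "environment": "production",
}

# Hypothetical auto-resolution condition: resolve when the subject
# indicates recovery and the alert belongs to the production environment.
def should_auto_resolve(fields: dict) -> bool:
    return (
        "resolved" in fields.get("subject", "").lower()
        and fields.get("environment") == "production"
    )

print(should_auto_resolve(alert_fields))  # True for the sample above
```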
When a new email alert arrives, Rootly evaluates your conditions and automatically resolves the alert if they match. *** ### Accessing Alert Fields as a Responder Responders can view alert field values in: * **Web:** Alert details panel * **Slack:** Alert details and context blocks * **Mobile:** Alert details in the Rootly mobile app This gives responders immediate access to normalized metadata without reviewing raw JSON payloads. *** ### Best Practices Use shared fields (severity, environment, service, region, etc.) to keep routing behavior consistent across monitoring tools. Test Liquid mappings with real alerts to avoid mismatches or null values. Map differences at the Alert Source layer rather than building multiple routing rules for each provider. Adopt consistent formatting across sources (e.g., PRODUCTION, STAGING, DEV). Alert Fields make workflow triggers more reliable and much easier to maintain. *** ### Troubleshooting * Ensure the field is mapped on the correct Alert Source. * Confirm your Liquid expression returns a value. * Check that the alert payload changed (fields update when payload changes). * Verify your team has Alert Fields enabled. * Confirm the payload path is accurate. * Use purple-pill copy from the alert payload preview. * Add default guards in Liquid where necessary. * Not all providers send uniform payloads. * Some alerts may lack the field entirely. * The mapping may require a conditional or fallback. * Verify the field is correctly populated before routing. * Compare formatting (case sensitivity, whitespace, arrays). * Ensure the route condition exactly matches the field value. *** ### Summary Alert Fields give your organization a unified layer of structured alert data across multiple tools. They power consistent routing, faster triage, stronger automation, and cleaner reporting—making them one of the most important parts of a scalable alerting setup in Rootly. Let them do the heavy lifting so your responders don’t have to. # Alert Grouping Source: https://docs.rootly.com/alerts/alert-grouping Reduce noise by automatically grouping related alerts into a single, leader-driven alert. ### Overview Alert Grouping reduces noise and alert fatigue by consolidating related alerts into a **single leader alert** with associated **member alerts**.\ Responders are paged for the leader, while subsequent matching alerts join its group silently. This helps you: * Avoid duplicate pages from multiple monitors on the same issue * Keep alert timelines focused on one “source of truth” * Improve prioritization and reduce cognitive load for on-call responders **Non-paging alerts** (alerts that do not route to any team, service, or escalation policy) are not grouped. Only alerts that participate in routing and paging can form or join groups. *** ### How Alert Grouping Works Each **Alert Group** defines *which alerts belong together* based on: * **Destinations** – what the alert was routed to (team, service, escalation policy) * **Time Window** – how long alerts are considered part of the same episode * **Content Matching** – which alert attributes or payload fields must match When an alert arrives: 1. Rootly finds any **active group** whose rules and time window match. 2. If a match is found: * The existing alert becomes (or remains) the **group leader** * The new alert is added as a **member**, and **does not re-page** 3. If no group matches: * A **new leader alert** and group are created (and the responder is paged according to routing rules) *** ### Group Leaders vs. 
Members Within a group: * The **leader alert**: * Is the alert that originally caused the page * Acts as the **source of truth** for the group * Drives status and noise updates (e.g., acknowledged, resolved) * **Member alerts**: * Join silently (no additional pages) * Inherit status changes from the leader * Appear on the group timeline for context You can view an alert’s group on the **Alert details page** under the **Alert Group** tab. *** ### When to Enable Alert Grouping Alert Grouping is especially useful when: * A single service has **many monitors** (error rate, latency, CPU, DB health, etc.) * A failure in one component triggers multiple alarms across: * APM, logging, metrics, and infrastructure tools * You want to treat a burst of related alerts as **one incident episode** rather than many independent pages Example: > “Service A has 5 monitors. When it goes down, all 5 fire at once. With grouping, responders get **one page** and then see all related alerts attached to that leader.” *** ### Creating an Alert Group To create a new Alert Group in the web app: * Go to **Alerts → Grouping** * Click **+ New Alert Group** * Enter a **Name** (required) and a **Description** (optional) Destinations define **which routed alerts** are candidates for this group. Under **Destinations**, choose one of: * **All services, teams, and escalation policies** * **All services** * **All teams** * **All escalation policies** * **Select routes** – only alerts routed to specific: * Services * Teams * Escalation policies To group only alerts routed to specific targets: * Choose **Select routes** * Pick the services, teams, or escalation policies you want to include Destination scoping ensures you don’t accidentally group unrelated alerts\ (for example, SRE and Security alerts) into the same cluster. Next, decide how strictly routing must match inside a group. You can choose: * **Groups should only contain alerts for the same route** * Alerts must be routed to the **exact same** service, team, or escalation policy * Example: alerts routed to Service A group only with other Service A alerts * **Groups can contain alerts for any selected route** * Alerts routed to **any** of the selected targets are allowed in the same group * Example: all alerts routed to any SRE team are grouped together Internally, Rootly treats this as: * `same` – route must match exactly * `any` – any eligible route can join the group The **Time Window** defines how long alerts should be considered part of the same group. * Specify the window in **minutes** * Rootly supports values between: * **5 minutes (minimum)** * **7 days (maximum)** The time window is **rolling** and is anchored to the **last alert** added to the group. With a **10-minute** window, the group remains open as long as new alerts keep arriving within 10 minutes of the last one.\ Once 10 minutes pass with no new alerts, a **new group** will be created next time a matching alert arrives. Content Matching defines **what must be similar** between grouped alerts. You can group on: * **Alert Title** – matches the alert’s `summary` * **Alert Description** – the alert’s `description` * **Alert Urgency** – e.g., High, Medium, Low * **Source Link** – the `external_url` (e.g., link to Datadog, PagerDuty, etc.) 
* **Payload** – any field within the incoming alert payload (via JSONPath) * **Alert Fields** – normalized fields you’ve configured on the Alert Source Operators include: * `is one of` / `is not one of` * `contains` / `does not contain` * `starts with` / `ends with` * `matches regex` * `is empty` To group by a payload field, choose **Payload** and provide a **JSONPath**\ (for example `$.alert.feature`). Alert grouping conditions configuration For convenience, Rootly provides quick toggles such as **Group by Title** and **Group by Urgency**, which automatically create the appropriate underlying conditions. *** ### Working with Alert Groups Once your Alert Groups are configured and active: * The **first alert** that matches a group becomes the **leader** and triggers paging * Subsequent alerts that match: * Are added as **members** * **Do not** re-page responders * Update the group’s timeline with additional context When you change the leader’s status: * All member alerts’ statuses are updated to match (e.g., resolving the leader resolves its members) * Noise controls on the leader (e.g., marking as noise) propagate to members as they join You can inspect group membership by: * Opening an alert in the web UI * Navigating to the **Alert Group** tab *** ### Example: Grouping Multiple Monitors for One Service Suppose you have the following monitors for `checkout-service`: * Error rate > threshold * P95 latency > threshold * CPU saturation * Database connection errors If the database experiences a serious issue, **all four monitors** might fire. Without grouping: * The on-call may receive 4 pages * Each alert appears as independent noise With alert grouping: * Destination condition: **Select routes → Service: checkout-service** * Route logic: **Groups should only contain alerts for the same route** * Time window: **10 minutes** * Content matching: **Group by Service + Urgency** (or only by destination) Result: * First alert pages and becomes the **leader** * Remaining alerts join silently as **members** * The responder sees one alert with a rich history of related signals *** ### Best Practices * **Start narrow** * Group by **route + short time window** first; broaden later if needed * **Use content carefully** * Combining **Title + Urgency + Payload** can create very precise groupings * Avoid overly broad conditions that might lump unrelated incidents together * **Align with incident semantics** * Think of an Alert Group as “all signals about the same episode,” not “all alerts about the same service forever” * **Regularly review grouped alerts** * Use the Alert Group tab and alert timelines to validate whether groupings still make sense as your monitoring evolves *** ### Troubleshooting * Verify the **Destination** condition includes those routes * Check whether the **route logic** is set to “same route” vs “any selected route” * Confirm the **Time Window** hasn’t expired between alerts * Make sure content matching conditions (Title, Urgency, Payload, etc.) 
actually match * Narrow the **Destination** scope (e.g., from “all teams” to “specific teams/services”) * Switch from “any selected route” to “same route” * Add or tighten **Content Matching** conditions (e.g., require matching Title and Urgency) * Reduce the **Time Window** duration * The previous group’s time window may have expired * Conditions may have changed (e.g., different title or urgency) * Destination may differ (e.g., different service or team) * This is expected: only alerts that **route to a team, service, or escalation policy** can group * Convert important non-paging alerts into routed alerts via **Alert Routes** if you want them to participate in grouping # Alert Routing Source: https://docs.rootly.com/alerts/alert-routing Use Alert Routes to determine which teams, services, and escalation policies receive incoming alerts from your monitoring tools. ### Overview Alert Routing ensures that alerts from your monitoring and observability systems reach the correct responders quickly and reliably. Rootly provides a unified routing layer that works across all alert sources, enabling consistent on-call workflows. Rootly supports two routing pathways: 1. **Routing inside your monitoring tool** (Datadog, PagerDuty, Opsgenie, etc.) 2. **Routing inside Rootly** using centralized **Alert Routes** This guide focuses on routing **inside Rootly**. Alert Routes page *** ### What Is an Alert Route? An **Alert Route** defines *when*, *how*, and *to whom* Rootly should send alerts. It supports evaluation against: * Alert Sources * Alert Fields (normalized metadata) * Raw payload values (JSONPath) * Teams, services, and escalation policies **Tip:** Alert Routes work best when combined with **Alert Fields**, which let you write stable routing logic even when payload schemas vary across providers. *** ### Creating an Alert Route Navigate to **Alerts → Routes** and click **New Route**. Each route requires: #### Name A descriptive title that clarifies the route’s purpose. #### Alert Sources Select one or more alert sources the route should evaluate.\ Sources can be added or removed at any time. See all integrations → [Integrations Overview](/integrations/overview) #### Owning Team Controls who can edit the route. **Permissions:** * Team Admins may only create routes **for their own team**. * Teams can only route alerts **from alert sources they own**. After creating a route, you can begin adding Routing Rules. *** ### Configuring Routing Rules Routing Rules determine *which alerts should page responders* and *where they should go*. Click **Add routing rule** to create one. Routing Rule creation *** ### Routing Rule Conditions Conditions define when a rule should trigger. #### Select a Field You may reference: * **Alert Fields** (recommended) * **Payload values via JSONPath** Alert Fields ensure your routing logic remains stable even if payload structures change. Condition builder #### Choose an Operator Supported operators include: * *is one of* * *contains* * *starts with* * *matches regex* * *is empty* * and more Use **regex** when values vary across alert providers and need flexible matching. #### Add Additional Conditions Use **AND/OR** groups to define complex routing logic. #### Live Preview Rootly shows matching historical alerts to validate your logic. Matching alerts preview *** ### Routing Rule Destinations Each rule must specify **who receives the alert**. 
You may route alerts to: * **Teams** * **Services** * **Escalation Policies** Routing to a team or service automatically triggers its configured escalation policy. For easier reporting and maintenance, we recommend routing to **teams** or **services**, not directly to escalation policies. Rules may include **multiple destinations**, all of which will be paged when the rule fires. *** ### Completing the Alert Route A route may contain any number of rules.\ Rootly evaluates rules **top-to-bottom**, so ordering matters. Use the rule menu (**… → Reorder rule**) to adjust order. *** ### How Rootly Routes Alerts Rootly evaluates alerts in two sequential stages. #### Stage 1 — Payload-Based Routing If the alert payload contains a **target ID** (team or service), Rootly immediately routes the alert there without evaluating Alert Routes. #### Stage 2 — Evaluate Alert Routes If the alert does not specify a target: #### Evaluate Routes Rootly evaluates **every Alert Route associated with the alert’s source**. #### Evaluate Rules Within each route, rules are evaluated **from top to bottom**. * The first matching rule triggers paging * Rootly stops evaluating additional rules in that route * Other routes referencing the same source will still run If no rules match, the alert becomes a **Non-Paging Alert**. Review these in the Alerts dashboard by filtering **Status → Non-Paging**. Reordering rules Order rules **most specific → least specific** to avoid unintended matches. *** ### Alert Timeline Every routed alert includes a timeline event documenting: * Which **Alert Route** was applied * Which **Routing Rule** matched * Which **destinations** were paged Alert timeline routing This ensures responders understand *why* they were paged. *** ### Best Practices * Prefer **Alert Fields** over JSONPath for stability. * Start with broad routing categories and refine with specific rules. * Keep rule names action-oriented and descriptive. * Regularly check **Non-Paging Alerts** for routing gaps. * Route to **teams/services**, not escalation policies, for better ownership. * Combine routes thoughtfully when different teams own different tools. *** ### Troubleshooting * Ensure the alert source is included in at least one route. * Verify that at least one rule matches the alert. * Confirm the alert payload does not contain a `target_id`, which overrides routing. * Check the rule order; a broader rule may be matching first. * Validate operators and values used in conditions. * Ensure Alert Field mappings are extracting values correctly. * All routes referencing the alert source are evaluated. * Remove unnecessary alert sources from routes. * Tighten condition logic. * Review the alert payload preview (purple pill tokens). * Confirm your JSONPath reflects the actual alert structure. * Use Alert Fields whenever possible. # Alert Statuses Source: https://docs.rootly.com/alerts/alert-statuses Understand how alerts progress through their lifecycle and how Rootly enforces valid state transitions. ### Overview Every alert in Rootly progresses through a well-defined **finite state machine (FSM)** that dictates how it escalates, notifies responders, synchronizes with alert groups, and ultimately resolves.\ Understanding these states ensures predictable behavior across Routing, On-Call Escalation Policies, Alert Grouping, and integrations like Slack. Rootly alerts can be in one of four statuses: * **open** * **triggered** * **acknowledged** * **resolved** These values are stored on the alert’s canonical `status` enum. 
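A compact way to picture the state machine described on this page is as a mapping from each status to the statuses it may move to. The sketch below is conceptual Python based on the transitions documented on this page, not Rootly's implementation.

```python
# Conceptual sketch of the allowed status transitions described below.
# It mirrors the summary table on this page; it is not Rootly's code.
ALLOWED_TRANSITIONS = {
    "open":         {"triggered", "resolved"},
    "triggered":    {"acknowledged", "resolved", "triggered"},  # self = retrigger
    "acknowledged": {"resolved", "triggered"},                  # triggered = retrigger
    "resolved":     {"triggered"},                              # reopen / retrigger
}

def can_transition(current: str, new: str) -> bool:
    """Return True if the state machine permits moving from current to new."""
    return new in ALLOWED_TRANSITIONS.get(current, set())

assert can_transition("open", "triggered")          # routing assigns a target
assert can_transition("acknowledged", "triggered")  # ack timeout retrigger
assert not can_transition("resolved", "acknowledged")
```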
All transitions, notification triggers, and timeline events are governed by Rootly's internal state machine. Alert Status State Machine *** ### Status Definitions #### **open** The alert has been created but **has not yet been assigned a notification target**. Typical reasons for an `open` alert: * The alert was ingested but did not match a Routing Rule * The alert is a *non-paging alert* * It was created manually without a destination This status allows two transitions: * `open → triggered` (once a notification target is assigned via routing or manual paging) * `open → resolved` An alert immediately transitions from **open → triggered** when Routing assigns a team, service, user, or escalation policy. #### **triggered** The alert is **actively paging responders**. This is the state where on-call users are notified based on escalation logic. A triggered alert: * Sends notifications (SMS, push, phone call, Slack) * Can be acknowledged by responders * Can be resolved manually or via automation Allowed transitions: * `triggered → acknowledged` * `triggered → resolved` * `triggered → triggered` (retrigger—e.g., ack timeout, forced escalation, manual actions) All transitions into `triggered` create a `status_update` timeline event. #### **acknowledged** A responder confirmed that they have seen the alert and are working on it. Escalation pauses unless a timeout or retrigger occurs. Allowed transitions: * `acknowledged → resolved` * `acknowledged → triggered` (ack timeout or manual retrigger) If an acknowledged alert hits **acknowledgement timeout**, Rootly automatically **re-triggers** it and resumes escalation. #### **resolved** A terminal state indicating no further action is required. Notifications cease and Rootly records `ended_at`. However, Rootly allows: * `resolved → triggered` (re-open on regression, manual retrigger, new escalation) This ensures alerts can be reopened without creating duplicates. Resolved alerts remain visible and analyzable in your alert history, even after re-triggering. *** ### Summary Table of Allowed Transitions | From ↓ | To: triggered | To: acknowledged | To: resolved | | ---------------- | ------------- | ---------------- | ------------ | | **open** | ✅ | — | ✅ | | **triggered** | ✅ (retrigger) | ✅ | ✅ | | **acknowledged** | ✅ (retrigger) | — | ✅ | | **resolved** | ✅ (retrigger) | — | — | Retriggering is a first-class action in Rootly. A retrigger transitions an alert **back to `triggered`**, restarts escalation, and produces appropriate timeline events. *** ### How Rootly Records Status Changes Every transition writes a `status_update` event into the alert timeline. A status event includes: * The new status * The previous status * Who performed the action (user or system) * Metadata such as escalation step, ack timeout, grouping rule, or routing origin These timeline entries power audit trails, analytics, and seamless Slack updates. *** ### Interaction With Alert Grouping When an alert is part of an **Alert Group**, status synchronization is automatic: #### Leader Alert Behavior * The **group leader** is the first alert in the group (the one that paged). * Any change to the leader's status cascades to all members. * Member alerts update timestamps, noise indicators, and events to match the leader. #### Member Alert Behavior * Members never independently influence group state. * Status changes come exclusively from the group leader. * Retriggering the leader retriggers all members.
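The leader-to-member cascade described above can be pictured with a small sketch. This is conceptual Python, not Rootly's implementation; the class names are made up for illustration.

```python
# Conceptual sketch (not Rootly's code): status changes applied to a
# group leader cascade to every member alert in the group.
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    status: str = "triggered"

@dataclass
class AlertGroup:
    leader: Alert
    members: list = field(default_factory=list)

    def set_leader_status(self, status: str) -> None:
        # A status change on the leader is applied to every member as well.
        self.leader.status = status
        for member in self.members:
            member.status = status

group = AlertGroup(leader=Alert("alert-1"), members=[Alert("alert-2"), Alert("alert-3")])
group.set_leader_status("resolved")
print([a.status for a in [group.leader, *group.members]])  # all 'resolved'
```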
This ensures responders never lose the true “source of paging,” even when many alerts represent the same event. *** ### Visual Indicators Across Rootly Rootly uses consistent color/status styling across the Web UI, Slack, and Mobile: * 🟥 **Open / Triggered** — Requires action * 🟧 **Acknowledged** — Someone is actively working the alert * 🟩 **Resolved** — Incident has concluded These indicators appear in: * Alert lists * Slack alert threads * Alert details * Incident sidebars when alerts link to incidents *** ### Timestamp Behavior Each alert automatically manages its own lifecycle timestamps: * **started\_at** – Set when the alert is created * **ended\_at** – Set when transitioning into `resolved` * **ended\_at** is cleared if later retriggered This enables clean duration metrics (MTTA, MTTR, paging duration, escalation analytics). *** ### Troubleshooting * It may not match any routing rules * The alert source may not be associated with an Alert Route * No notification target was assigned * The alert may be a non-paging alert * Ensure its status is **triggered**, not **open** * Validate the routing rule actually assigned a team or escalation policy * Confirm notification channels are enabled * Check for quiet-only escalation paths * Review acknowledgement timeout settings * Check whether escalation policies intentionally retrigger * Ensure grouping leader logic isn’t retriggering members This is expected if: * A user manually retriggered * The system detected a regression * A new routing condition matched and assigned a destination *** ### Summary Alert Statuses are the backbone of Rootly’s alerting engine. They define: * How and when responders are notified * How escalation policies activate * How grouping behaves * How alerts appear in dashboards and Slack * How timeline events reflect real-world activity By enforcing strict, predictable transitions—and exposing complete audit trails—Rootly ensures smooth, reliable alerting workflows from ingestion → paging → acknowledgment → resolution → retriggering if needed. # Alert Urgency Source: https://docs.rootly.com/alerts/alert-urgency Learn how Alert Urgencies determine alert priority across Alert Sources, Heartbeats, Live Call Routing, and Escalation Policies. ### Overview Alert Urgency controls **how quickly responders must act** when an alert is triggered.\ It’s the core signal Rootly uses to decide: * How aggressively to page on-call responders * Whether notifications should be **audible** (wake people up) or **quiet** * Which escalation paths apply during or outside of **working hours** Configured well, urgency ensures true incidents get immediate attention, while low-impact noise stays non-disruptive.
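As a rough illustration of how urgency can interact with working hours to decide between audible and quiet notifications, consider the sketch below. The urgency names, schedule, and mapping are hypothetical and do not represent Rootly's actual notification logic.

```python
# Hypothetical sketch: choosing an audible or quiet notification based
# on urgency and whether the responder is inside working hours.
from datetime import datetime

WORKING_HOURS = range(9, 18)  # hypothetical 09:00 to 18:00 schedule

def notification_mode(urgency: str, now: datetime) -> str:
    """Pick a notification mode for a triggered alert."""
    in_hours = now.hour in WORKING_HOURS
    if urgency == "high":
        return "audible"                        # always wake the responder
    if urgency == "medium":
        return "audible" if in_hours else "quiet"
    return "quiet"                              # low urgency stays non-disruptive

print(notification_mode("medium", datetime(2024, 1, 1, 3, 0)))  # 'quiet'
```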