Introduction
The Azure OpenAI integration lets you connect Rootly to AI models running inside your organization’s Azure subscription. Rather than using Rootly’s default OpenAI account, requests are routed through your own Azure OpenAI resource — giving you control over data residency, compliance posture, content filtering policies, and the specific model deployment your team has provisioned.

With the Azure OpenAI integration, you can:

- Generate AI-powered summaries, analyses, and responses within incident workflows
- Send custom prompts to your Azure-deployed GPT models with full Liquid template support
- Use a system prompt to define the model’s role, tone, or output constraints
- Satisfy data residency and enterprise compliance requirements by keeping requests within Azure infrastructure
Before You Begin
Before setting up the Azure OpenAI integration, make sure you have:

- A Rootly account with permission to manage integrations
- An active Azure OpenAI resource with at least one model deployed
- Your Azure OpenAI API key, resource name, and deployment name (details below)
Azure OpenAI resources are not automatically created with your Azure subscription. You must request access and deploy a model in the Azure Portal before connecting to Rootly.
Finding Your Credentials
API Key

- Go to the Azure Portal and open your Azure OpenAI resource.
- Select Keys and Endpoint from the left menu.
- Copy either KEY 1 or KEY 2.
Resource Name

Your resource name is the name of the Azure OpenAI resource itself, as shown at the top of the resource page in the Azure Portal (for example, my-company-openai). It also appears as the subdomain of your resource’s endpoint URL.
Deployment Name
- In the Azure Portal, open your Azure OpenAI resource.
- Select Model deployments or Deployments from the left menu.
- Copy the name of the deployment you want Rootly to use (for example, gpt-4o or gpt-35-turbo).
Installation
Open the integrations page in Rootly
Navigate to the integrations page in your Rootly workspace and select Azure OpenAI.
Enter your credentials
Enter your API Key, Resource Name, and Deployment Name into the respective fields. Rootly constructs your Azure OpenAI endpoint from the resource name and deployment name you provide. Your API key is encrypted at rest in Rootly.
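The exact URL Rootly builds is internal to Rootly, but Azure OpenAI data-plane requests follow a well-documented pattern, which shows how the three credentials fit together. A minimal sketch (the resource and deployment names are hypothetical examples, and the `api-version` value varies by Azure release):

```python
def azure_openai_chat_url(resource_name: str, deployment_name: str,
                          api_version: str = "2024-02-01") -> str:
    """Build the chat-completions URL for an Azure OpenAI deployment.

    Azure OpenAI data-plane URLs follow the documented pattern:
    https://{resource}.openai.azure.com/openai/deployments/{deployment}/...
    The API key is sent separately, in the `api-key` request header.
    """
    return (
        f"https://{resource_name}.openai.azure.com"
        f"/openai/deployments/{deployment_name}"
        f"/chat/completions?api-version={api_version}"
    )

# Hypothetical example names:
print(azure_openai_chat_url("my-company-openai", "gpt-4o"))
# → https://my-company-openai.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-01
```

This also explains why a key from a different resource fails: the key must belong to the resource named in the URL.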
Workflow Actions
Create OpenAI Chat Completion
Sends a prompt to your Azure-deployed model and captures the response as a workflow output. This is the same action used by the standard OpenAI integration — when Azure OpenAI is connected, Rootly routes requests through your Azure resource automatically.

| Field | Description | Required |
|---|---|---|
| Model | Your configured Azure deployment — set by the deployment name you provided | Yes |
| Prompt | The user message — supports Liquid templating | Yes |
| System Prompt | Instructions for the model’s role or behavior — supports Liquid templating | No |
| Temperature | Sampling temperature between 0.0 and 2.0 — controls randomness | No |
| Max Tokens | Maximum number of tokens in the response | No |
| Top P | Nucleus sampling probability between 0.0 and 1.0 | No |
Use Liquid variables in your prompts to include live incident context — for example, {{ incident.title }}, {{ incident.severity }}, and {{ incident.description }}. See the Liquid variables reference for all available fields.

Troubleshooting
The API key is rejected on save
Confirm that the API key is active and has not been revoked. In the Azure Portal, go to Keys and Endpoint and verify the key is still valid. Also confirm that the resource name matches the Azure OpenAI resource the key belongs to — using a key from a different resource will cause authentication to fail.
The workflow action fails with an authentication error
If the integration was working and then stopped, the API key may have been rotated in Azure. Update the key in the integration settings. Also check that the deployment name has not been renamed or deleted in the Azure Portal.
The workflow action fails with a rate limit error
Azure OpenAI enforces rate limits based on your provisioned capacity (tokens per minute and requests per minute). Running many concurrent Rootly workflows may exceed these limits. Consider staggering workflows, reducing token usage with more focused prompts, or increasing your Azure OpenAI quota in the Azure Portal.
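Rootly handles its own retries, but if your scripts call the same deployment alongside Rootly workflows, they compete for the same quota. A standard exponential-backoff pattern for HTTP 429 responses looks like this (a sketch; `send_request` is a hypothetical stand-in for your HTTP call):

```python
import random
import time

def call_with_backoff(send_request, max_retries: int = 5, sleep=time.sleep):
    """Retry a rate-limited call with exponential backoff and jitter.

    `send_request` is any callable returning (status_code, payload); a 429
    status means the rate limit was hit. `sleep` is injectable so waits can
    be skipped in tests.
    """
    for attempt in range(max_retries):
        status, payload = send_request()
        if status != 429:
            return payload
        # Wait 1s, 2s, 4s, ... (capped at 30s) plus jitter before retrying.
        sleep(min(2 ** attempt, 30) + random.random())
    raise RuntimeError("rate limit: retries exhausted")
```

Azure 429 responses typically include a `Retry-After` header; honoring it instead of a fixed schedule is a further refinement.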
The model is not responding as expected
The model behavior is determined by the deployment you configured — Rootly uses your deployment name directly and does not select a model. If you need a different model, create a new deployment in the Azure Portal and update the deployment name in Rootly.
Liquid variables are not rendering correctly in the prompt
Check your Liquid syntax — unclosed tags or undefined variables can cause rendering failures. Use the Liquid variables reference to confirm variable names and test your template in a low-stakes workflow first.
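Rootly renders Liquid for you, but you can sanity-check variable paths locally before wiring them into a workflow. The rough sketch below mimics only simple `{{ dotted.path }}` substitution — it is not a Liquid implementation (no filters, tags, or logic) — and deliberately leaves unknown variables visible so typos stand out:

```python
import re

def render_simple_liquid(template: str, context: dict) -> str:
    """Substitute {{ dotted.path }} variables from nested dicts.

    Unknown paths are left as-is (real Liquid engines typically render
    them as empty), which makes misspelled variable names easy to spot.
    """
    def lookup(match):
        value = context
        for key in match.group(1).split("."):
            if isinstance(value, dict) and key in value:
                value = value[key]
            else:
                return match.group(0)  # leave unresolved for inspection
        return str(value)

    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

# Hypothetical incident context using fields named on this page:
incident = {"incident": {"title": "DB outage", "severity": "SEV1"}}
print(render_simple_liquid(
    "Summarize {{ incident.title }} ({{ incident.severity }})", incident))
# → Summarize DB outage (SEV1)
```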
Related Pages
OpenAI
Use Rootly’s standard OpenAI integration if you don’t need Azure-specific data residency or compliance controls.
Incident Workflows
Build workflows that use Azure OpenAI models to analyze, summarize, or respond to incidents.
Liquid Variables
Reference for all incident variables available in Liquid-templated prompts.