# Alerting
Alerting notifies your team when flows encounter problems — failures, dead-letter buildup, latency spikes, or connector health changes.
## Alert channels

Channels define where notifications are sent:
| Channel type | Description |
|---|---|
| `slack_webhook` | Post to a Slack channel via incoming webhook |
| `email` | Send email notifications |
| `generic_webhook` | POST to any HTTP endpoint |
### Creating a channel via the dashboard

In the fyrn dashboard, go to Settings → Alert Channels → New Channel. Select the channel type, provide the required configuration (webhook URL, email address, etc.), and save.
### Creating a channel via the API

```shell
curl -X POST https://app.fyrn.ai/api/v1/alert-channels \
  -H "Authorization: Bearer $FYRN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ops-slack",
    "type": "slack_webhook",
    "config": { "webhookUrl": "https://hooks.slack.com/services/T00/B00/xxxx" }
  }'
```

```json
{
  "id": "ach_9f3k2m",
  "name": "ops-slack",
  "type": "slack_webhook",
  "createdAt": "2026-03-05T10:00:00Z"
}
```

For email channels, set `type` to `"email"` and pass `"recipients"` in `config`:

```json
{
  "name": "oncall-email",
  "type": "email",
  "config": { "recipients": ["oncall@example.com", "platform@example.com"] }
}
```

For generic webhooks, set `type` to `"generic_webhook"` and pass the target `"url"` and optional `"headers"`:

```json
{
  "name": "pagerduty-webhook",
  "type": "generic_webhook",
  "config": {
    "url": "https://events.pagerduty.com/integration/xxxx/enqueue",
    "headers": { "Content-Type": "application/json" }
  }
}
```

## Alert rules

Rules define what conditions trigger a notification:
### Condition types

| Condition | Description |
|---|---|
| `failure_count` | Total failures exceed threshold within window |
| `consecutive_failures` | N failures in a row |
| `dead_letter_depth` | Dead-letter queue exceeds threshold |
| `latency_threshold` | Processing time exceeds threshold |
| `error_rate` | Error percentage exceeds threshold |
| `healing_triggered` | Self-healing event detected |
| `connector_health_change` | Connector health status changed |
### Rule configuration

| Field | Description |
|---|---|
| `channelId` | Which channel to notify |
| `flowId` | Optional — scope to a specific flow |
| `connectorInstanceId` | Optional — scope to a specific connector |
| `conditionType` | One of the condition types above |
| `threshold` | Numeric threshold for the condition |
| `windowSeconds` | Time window for evaluation |
| `cooldownSeconds` | Suppress duplicate alerts for this duration |
| `enabled` | Enable/disable the rule |
## Cooldown (suppression)

The `cooldownSeconds` field prevents alert fatigue. After a rule fires, it won’t fire again until the cooldown expires.
Every alert evaluation is recorded in the alert history. When a rule fires during an active cooldown window, the event is logged with a suppressed status instead of being delivered. This gives you full visibility into how often conditions are met, even when notifications are suppressed.
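The gating described above can be sketched in a few lines. This is an illustrative model, not fyrn’s actual implementation; `CooldownGate` and its method names are hypothetical:

```python
import time


class CooldownGate:
    """Tracks the last delivered alert per rule and decides sent vs. suppressed."""

    def __init__(self):
        self._last_fired = {}  # rule_id -> timestamp of last delivered alert

    def evaluate(self, rule_id, cooldown_seconds, now=None):
        """Return "sent" if the alert should be delivered, else "suppressed".

        Every call represents one condition match; both outcomes would be
        recorded in alert history.
        """
        now = time.time() if now is None else now
        last = self._last_fired.get(rule_id)
        if cooldown_seconds > 0 and last is not None and now - last < cooldown_seconds:
            return "suppressed"
        self._last_fired[rule_id] = now
        return "sent"
```

Note that a `cooldown_seconds` of 0 never suppresses, matching the documented behavior of `cooldownSeconds: 0`.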
Recommended cooldown values by alert type:
| Condition | Suggested `cooldownSeconds` | Rationale |
|---|---|---|
| `failure_count` | 300 (5 min) | Gives time to investigate before re-alerting |
| `consecutive_failures` | 600 (10 min) | Persistent failures need longer investigation windows |
| `dead_letter_depth` | 900 (15 min) | Queue depth changes slowly; frequent alerts add noise |
| `latency_threshold` | 180 (3 min) | Latency spikes can be transient; short cooldown catches recurring issues |
| `error_rate` | 300 (5 min) | Balances responsiveness with noise reduction |
| `healing_triggered` | 60 (1 min) | Self-healing events are high-signal; keep cooldown short |
| `connector_health_change` | 600 (10 min) | Health status tends to flap during outages |
Set `cooldownSeconds` to `0` to disable suppression entirely — every condition match will fire a notification.
## Alert history

Every alert attempt is recorded:

| Status | Description |
|---|---|
| `sent` | Notification delivered successfully |
| `failed` | Delivery failed (e.g., Slack webhook error) |
| `suppressed` | Within cooldown period, not sent |
Query alert history with `GET /api/v1/alert-history`. Filter by rule, flow, status, or time range:

```shell
curl "https://app.fyrn.ai/api/v1/alert-history?ruleId=arl_4x8j1n&status=sent&limit=10" \
  -H "Authorization: Bearer $FYRN_API_KEY"
```

```json
{
  "data": [
    {
      "id": "ah_7k2m9p",
      "ruleId": "arl_4x8j1n",
      "channelId": "ach_9f3k2m",
      "status": "sent",
      "conditionType": "failure_count",
      "conditionValue": 12,
      "threshold": 5,
      "flowId": "flow_abc123",
      "firedAt": "2026-03-05T09:32:00Z"
    }
  ],
  "pagination": { "total": 47, "limit": 10, "offset": 0 }
}
```

To see suppressed alerts, set `status=suppressed`. To view all events regardless of status, omit the `status` parameter.
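For result sets larger than one page, the `pagination` object can drive a simple offset loop. A sketch, where `fetch_page` is a hypothetical stand-in for the authenticated GET request shown above:

```python
def fetch_all_history(fetch_page, limit=100):
    """Collect every alert-history event by walking offset pages.

    fetch_page(limit, offset) is expected to return the decoded JSON
    response: {"data": [...], "pagination": {"total": N, ...}}.
    """
    events, offset = [], 0
    while True:
        page = fetch_page(limit, offset)
        events.extend(page["data"])
        offset += limit
        if offset >= page["pagination"]["total"]:
            return events
```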
## Setup examples

### Slack alert for flow failures

1. Create the Slack channel:
```shell
curl -X POST https://app.fyrn.ai/api/v1/alert-channels \
  -H "Authorization: Bearer $FYRN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "eng-alerts-slack",
    "type": "slack_webhook",
    "config": { "webhookUrl": "https://hooks.slack.com/services/T00/B00/xxxx" }
  }'
```

2. Create an alert rule for flow failures:

```shell
curl -X POST https://app.fyrn.ai/api/v1/alert-rules \
  -H "Authorization: Bearer $FYRN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "channelId": "ach_9f3k2m",
    "flowId": "flow_abc123",
    "conditionType": "failure_count",
    "threshold": 5,
    "windowSeconds": 600,
    "cooldownSeconds": 300,
    "enabled": true
  }'
```

This fires a Slack notification when the flow hits 5 or more failures within a 10-minute window. After firing, the rule is suppressed for 5 minutes.
### Email alert for dead-letter queue

1. Create the email channel:

```shell
curl -X POST https://app.fyrn.ai/api/v1/alert-channels \
  -H "Authorization: Bearer $FYRN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "oncall-email",
    "type": "email",
    "config": { "recipients": ["oncall@example.com"] }
  }'
```

2. Create an alert rule for dead-letter queue depth:

```shell
curl -X POST https://app.fyrn.ai/api/v1/alert-rules \
  -H "Authorization: Bearer $FYRN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "channelId": "ach_email01",
    "conditionType": "dead_letter_depth",
    "threshold": 100,
    "windowSeconds": 0,
    "cooldownSeconds": 900,
    "enabled": true
  }'
```

This sends an email when the dead-letter queue for any flow exceeds 100 messages. The 15-minute cooldown prevents inbox flooding during sustained issues. Omitting `flowId` applies the rule across all flows.
### Webhook alert for latency

1. Create the webhook channel:

```shell
curl -X POST https://app.fyrn.ai/api/v1/alert-channels \
  -H "Authorization: Bearer $FYRN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "pagerduty",
    "type": "generic_webhook",
    "config": {
      "url": "https://events.pagerduty.com/integration/xxxx/enqueue",
      "headers": { "Content-Type": "application/json" }
    }
  }'
```

2. Create an alert rule for latency:

```shell
curl -X POST https://app.fyrn.ai/api/v1/alert-rules \
  -H "Authorization: Bearer $FYRN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "channelId": "ach_pd01",
    "flowId": "flow_payments",
    "conditionType": "latency_threshold",
    "threshold": 5000,
    "windowSeconds": 300,
    "cooldownSeconds": 180,
    "enabled": true
  }'
```

This fires a webhook when the `flow_payments` flow exceeds 5000 ms processing time within a 5-minute window. The payload POSTed to your webhook endpoint includes the full alert context — rule ID, flow ID, condition value, and timestamp.
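The exact webhook payload schema is not shown here. Based on the fields named above and the alert-history response shape, a delivered payload might look roughly like this (field names are assumptions, not a documented contract):

```json
{
  "ruleId": "arl_4x8j1n",
  "flowId": "flow_payments",
  "conditionType": "latency_threshold",
  "conditionValue": 7250,
  "threshold": 5000,
  "firedAt": "2026-03-05T09:32:00Z"
}
```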
## What’s next

- Testing Flows — Catch issues before they trigger alerts
- CLI Usage — Monitor flows from the terminal
- REST API — Alert management endpoints