
# Alerting

Alerting notifies your team when flows encounter problems — failures, dead-letter buildup, latency spikes, or connector health changes.

## Channels

Channels define where notifications are sent:

| Channel type | Description |
| --- | --- |
| `slack_webhook` | Post to a Slack channel via incoming webhook |
| `email` | Send email notifications |
| `generic_webhook` | `POST` to any HTTP endpoint |

In the fyrn dashboard, go to Settings → Alert Channels → New Channel. Select the channel type, provide the required configuration (webhook URL, email address, etc.), and save.

You can also create a channel via the API:

```sh
curl -X POST https://app.fyrn.ai/api/v1/alert-channels \
  -H "Authorization: Bearer $FYRN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ops-slack",
    "type": "slack_webhook",
    "config": {
      "webhookUrl": "https://hooks.slack.com/services/T00/B00/xxxx"
    }
  }'
```

The response:

```json
{
  "id": "ach_9f3k2m",
  "name": "ops-slack",
  "type": "slack_webhook",
  "createdAt": "2026-03-05T10:00:00Z"
}
```

For email channels, set `type` to `"email"` and pass `"recipients"` in `config`:

```json
{
  "name": "oncall-email",
  "type": "email",
  "config": {
    "recipients": ["oncall@example.com", "platform@example.com"]
  }
}
```

For generic webhooks, set `type` to `"generic_webhook"` and pass the target `"url"` and optional `"headers"`:

```json
{
  "name": "pagerduty-webhook",
  "type": "generic_webhook",
  "config": {
    "url": "https://events.pagerduty.com/integration/xxxx/enqueue",
    "headers": {
      "Content-Type": "application/json"
    }
  }
}
```

## Rules

Rules define what conditions trigger a notification:

| Condition | Description |
| --- | --- |
| `failure_count` | Total failures exceed the threshold within the window |
| `consecutive_failures` | N failures in a row |
| `dead_letter_depth` | Dead-letter queue depth exceeds the threshold |
| `latency_threshold` | Processing time exceeds the threshold |
| `error_rate` | Error percentage exceeds the threshold |
| `healing_triggered` | A self-healing event was detected |
| `connector_health_change` | A connector's health status changed |
A rule has the following fields:

| Field | Description |
| --- | --- |
| `channelId` | Which channel to notify |
| `flowId` | Optional; scope to a specific flow |
| `connectorInstanceId` | Optional; scope to a specific connector |
| `conditionType` | One of the condition types above |
| `threshold` | Numeric threshold for the condition |
| `windowSeconds` | Time window for evaluation |
| `cooldownSeconds` | Suppress duplicate alerts for this duration |
| `enabled` | Enable or disable the rule |

## Cooldowns

The `cooldownSeconds` field prevents alert fatigue. After an alert fires, it won’t fire again for the same rule until the cooldown expires.

Every alert evaluation is recorded in the alert history. When a rule fires during an active cooldown window, the event is logged with a `suppressed` status instead of being delivered. This gives you full visibility into how often conditions are met, even when notifications are suppressed.
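The evaluation flow above can be sketched in a few lines. This is a minimal illustration of the documented behavior (fire, suppress during cooldown, record everything), not fyrn's actual implementation; the rule and record shapes are assumed:

```python
import time

def evaluate_rule(rule, condition_met, history, now=None):
    """Decide whether a matched condition is sent or suppressed.

    `rule` is a dict with "id" and "cooldownSeconds"; `history` is a
    list of alert-history records. Both shapes are illustrative.
    """
    if not condition_met:
        return None
    now = now if now is not None else time.time()
    # Find the most recent delivered alert for this rule, if any.
    last_sent = max(
        (h["firedAt"] for h in history
         if h["ruleId"] == rule["id"] and h["status"] == "sent"),
        default=None,
    )
    in_cooldown = (
        rule["cooldownSeconds"] > 0
        and last_sent is not None
        and now - last_sent < rule["cooldownSeconds"]
    )
    # Every matching evaluation is recorded, even when suppressed.
    record = {
        "ruleId": rule["id"],
        "status": "suppressed" if in_cooldown else "sent",
        "firedAt": now,
    }
    history.append(record)
    return record
```

Note that with `cooldownSeconds` set to 0, `in_cooldown` is always false, so every match is delivered.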

Recommended cooldown values by alert type:

| Condition | Suggested `cooldownSeconds` | Rationale |
| --- | --- | --- |
| `failure_count` | 300 (5 min) | Gives time to investigate before re-alerting |
| `consecutive_failures` | 600 (10 min) | Persistent failures need longer investigation windows |
| `dead_letter_depth` | 900 (15 min) | Queue depth changes slowly; frequent alerts add noise |
| `latency_threshold` | 180 (3 min) | Latency spikes can be transient; a short cooldown catches recurring issues |
| `error_rate` | 300 (5 min) | Balances responsiveness with noise reduction |
| `healing_triggered` | 60 (1 min) | Self-healing events are high-signal; keep the cooldown short |
| `connector_health_change` | 600 (10 min) | Health status tends to flap during outages |

Set `cooldownSeconds` to `0` to disable suppression entirely; every condition match will fire a notification.


## Alert history

Every alert attempt is recorded with one of three statuses:

| Status | Description |
| --- | --- |
| `sent` | Notification delivered successfully |
| `failed` | Delivery failed (e.g., a Slack webhook error) |
| `suppressed` | Within the cooldown period; not sent |

Query alert history with `GET /api/v1/alert-history`. Filter by rule, flow, status, or time range:

```sh
curl "https://app.fyrn.ai/api/v1/alert-history?ruleId=arl_4x8j1n&status=sent&limit=10" \
  -H "Authorization: Bearer $FYRN_API_KEY"
```

The response:

```json
{
  "data": [
    {
      "id": "ah_7k2m9p",
      "ruleId": "arl_4x8j1n",
      "channelId": "ach_9f3k2m",
      "status": "sent",
      "conditionType": "failure_count",
      "conditionValue": 12,
      "threshold": 5,
      "flowId": "flow_abc123",
      "firedAt": "2026-03-05T09:32:00Z"
    }
  ],
  "pagination": {
    "total": 47,
    "limit": 10,
    "offset": 0
  }
}
```

To see suppressed alerts, set `status=suppressed`. To view all events regardless of status, omit the `status` parameter.


## Example: Slack alert on flow failures

1. Create the Slack channel:

   ```sh
   curl -X POST https://app.fyrn.ai/api/v1/alert-channels \
     -H "Authorization: Bearer $FYRN_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "name": "eng-alerts-slack",
       "type": "slack_webhook",
       "config": {
         "webhookUrl": "https://hooks.slack.com/services/T00/B00/xxxx"
       }
     }'
   ```

2. Create an alert rule for flow failures:

   ```sh
   curl -X POST https://app.fyrn.ai/api/v1/alert-rules \
     -H "Authorization: Bearer $FYRN_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "channelId": "ach_9f3k2m",
       "flowId": "flow_abc123",
       "conditionType": "failure_count",
       "threshold": 5,
       "windowSeconds": 600,
       "cooldownSeconds": 300,
       "enabled": true
     }'
   ```

This fires a Slack notification when the flow hits 5 or more failures within a 10-minute window. After firing, the rule is suppressed for 5 minutes.
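For intuition, a `failure_count` condition like this one can be modeled as a sliding-window counter over failure timestamps. This is an illustrative sketch of the evaluation semantics described above; the class name and structure are assumptions, not fyrn's internals:

```python
from collections import deque

class FailureCountCondition:
    """Sliding-window failure counter: the condition is met when the
    number of failures inside `window_seconds` reaches `threshold`.
    (Illustrative sketch; names and structure are assumed.)"""

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window_seconds = window_seconds
        self.failures = deque()  # timestamps of observed failures

    def record_failure(self, timestamp):
        self.failures.append(timestamp)

    def is_met(self, now):
        # Drop failures that have aged out of the window.
        while self.failures and now - self.failures[0] > self.window_seconds:
            self.failures.popleft()
        return len(self.failures) >= self.threshold
```

With `threshold=5` and `window_seconds=600`, five failures inside any 10-minute span meet the condition, matching the rule above.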

## Example: Email alert on dead-letter buildup

1. Create the email channel:

   ```sh
   curl -X POST https://app.fyrn.ai/api/v1/alert-channels \
     -H "Authorization: Bearer $FYRN_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "name": "oncall-email",
       "type": "email",
       "config": {
         "recipients": ["oncall@example.com"]
       }
     }'
   ```

2. Create an alert rule for dead-letter queue depth:

   ```sh
   curl -X POST https://app.fyrn.ai/api/v1/alert-rules \
     -H "Authorization: Bearer $FYRN_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "channelId": "ach_email01",
       "conditionType": "dead_letter_depth",
       "threshold": 100,
       "windowSeconds": 0,
       "cooldownSeconds": 900,
       "enabled": true
     }'
   ```

This sends an email when the dead-letter queue for any flow exceeds 100 messages. The 15-minute cooldown prevents inbox flooding during sustained issues. Omitting `flowId` applies the rule across all flows.

## Example: PagerDuty webhook on latency spikes

1. Create the webhook channel:

   ```sh
   curl -X POST https://app.fyrn.ai/api/v1/alert-channels \
     -H "Authorization: Bearer $FYRN_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "name": "pagerduty",
       "type": "generic_webhook",
       "config": {
         "url": "https://events.pagerduty.com/integration/xxxx/enqueue",
         "headers": {
           "Content-Type": "application/json"
         }
       }
     }'
   ```

2. Create an alert rule for latency:

   ```sh
   curl -X POST https://app.fyrn.ai/api/v1/alert-rules \
     -H "Authorization: Bearer $FYRN_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "channelId": "ach_pd01",
       "flowId": "flow_payments",
       "conditionType": "latency_threshold",
       "threshold": 5000,
       "windowSeconds": 300,
       "cooldownSeconds": 180,
       "enabled": true
     }'
   ```

This fires a webhook when processing time for the `flow_payments` flow exceeds 5000 ms within a 5-minute window. The payload POSTed to your webhook endpoint includes the full alert context: rule ID, flow ID, condition value, and timestamp.
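The exact webhook body is not documented here, so the payload below is an assumption: an illustrative example assembled from the fields the alert-history API returns (`ruleId`, `channelId`, `flowId`, `conditionType`, `conditionValue`, `threshold`, `firedAt`), with made-up values:

```json
{
  "ruleId": "arl_4x8j1n",
  "channelId": "ach_pd01",
  "flowId": "flow_payments",
  "conditionType": "latency_threshold",
  "conditionValue": 6200,
  "threshold": 5000,
  "firedAt": "2026-03-05T09:32:00Z"
}
```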