Alerts & Anomaly Detection
Policy violation rate alerts, auth spike detection, per-tenant anomaly thresholds, and webhook integrations for PagerDuty and Slack.
Phase note: ships in Phase 2.
CoreSDK ships a built-in alerting engine that evaluates rules against the metrics and log stream it produces. Rules fire webhooks, PagerDuty incidents, or Slack messages without requiring an external alerting system — though you can also route alerts through Prometheus Alertmanager if you prefer.
Policy violation rate alerts
Trigger an alert when the fraction of denied policy evaluations exceeds a threshold within a rolling window.
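Conceptually, the denial rate is denied evaluations divided by total evaluations inside the window; a minimal sketch of that check (illustrative only, not CoreSDK internals):

```python
from collections import deque

THRESHOLD = 0.05  # matches the 5 % threshold in the examples below

def denial_rate(window):
    """Fraction of denied decisions in the rolling window."""
    return sum(window) / len(window) if window else 0.0

# Rolling window of recent policy decisions (True = denied).
window = deque(maxlen=1000)
for denied in [False] * 94 + [True] * 6:
    window.append(denied)

fires = denial_rate(window) > THRESHOLD  # 0.06 > 0.05, so the alert fires
```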
```rust
use coresdk_engine::alerting::{Alert, Condition, Window};

let engine = Engine::from_env().await?;
// Alerting configured via coresdk-sidecar.yaml — Phase 2
// alert:
//   - name: high-policy-denial-rate
//     condition: policy_denial_rate > 0.05
//     window: 5m
//     severity: warning
```

```python
from coresdk.alerting import Alert, Condition, Window

sdk = CoreSDKClient(SDKConfig(
    tenant="acme",
    alerts=[
        Alert(
            name="high-policy-denial-rate",
            condition=Condition.policy_denial_rate().above(0.05),  # > 5 %
            window=Window.rolling_minutes(5),
            severity="warning",
        )
    ],
))
```

```go
import "github.com/coresdk/sdk-go/alerting"

sdk, err := coresdk.New(coresdk.Config{
    Tenant: "acme",
    Alerts: []alerting.Alert{
        alerting.NewAlert("high-policy-denial-rate").
            Condition(alerting.PolicyDenialRate().Above(0.05)).
            Window(alerting.RollingMinutes(5)).
            Severity("warning"),
    },
})
```

```typescript
import { Alert, Condition, Window } from "@coresdk/sdk/alerting";

const sdk = await CoreSDK.create({
  tenant: "acme",
  alerts: [
    Alert.create("high-policy-denial-rate")
      .condition(Condition.policyDenialRate().above(0.05)) // > 5 %
      .window(Window.rollingMinutes(5))
      .severity("warning"),
  ],
});
```

Failed auth spike detection
Detect sudden increases in authentication failures — useful for catching credential-stuffing attacks or misconfigured clients before they escalate.
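The condition is an OR of a relative and an absolute rate; a hedged sketch of that evaluation, using the 10 % and 50/sec figures from the examples below:

```python
def auth_spike(failures, total, window_seconds):
    """True when either the failure ratio or the absolute failure rate spikes."""
    failure_rate = failures / total if total else 0.0  # fraction of attempts failing
    failures_per_sec = failures / window_seconds       # absolute failure rate
    return failure_rate > 0.10 or failures_per_sec > 50.0

# 7,000 failures out of 200,000 attempts in 2 minutes: the ratio is low (3.5 %),
# but roughly 58 failures/sec trips the absolute branch.
```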
```rust
Alert::new("auth-failure-spike")
    .condition(
        Condition::auth_failure_rate()
            .above(0.10)                  // > 10 % failure rate
            .or_absolute_rate_above(50.0) // OR > 50 failures/sec
    )
    .window(Window::rolling_minutes(2))
    .severity("critical")
```

```python
Alert(
    name="auth-failure-spike",
    condition=(
        Condition.auth_failure_rate().above(0.10)     # > 10 % failure rate
        | Condition.auth_absolute_rate().above(50.0)  # OR > 50 failures/sec
    ),
    window=Window.rolling_minutes(2),
    severity="critical",
)
```

```go
alerting.NewAlert("auth-failure-spike").
    Condition(
        alerting.AuthFailureRate().Above(0.10).
            OrAbsoluteRateAbove(50.0),
    ).
    Window(alerting.RollingMinutes(2)).
    Severity("critical")
```

```typescript
Alert.create("auth-failure-spike")
  .condition(
    Condition.authFailureRate().above(0.10) // > 10 % failure rate
      .orAbsoluteRateAbove(50.0)            // OR > 50 failures/sec
  )
  .window(Window.rollingMinutes(2))
  .severity("critical")
```

Per-tenant anomaly thresholds
Set independent alert thresholds per tenant so a noisy tenant does not mask problems in quieter ones.
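A spike factor compares a tenant's current request rate to its recent baseline: a factor of 3.0 fires when traffic exceeds three times the baseline. A sketch of that comparison, using the per-tenant factors from the examples below (illustrative only, not CoreSDK internals):

```python
SPIKE_FACTORS = {"acme": 3.0, "beta-corp": 5.0}  # values from the example config

def rate_spike(tenant, current_rps, baseline_rps):
    """True when a tenant's request rate exceeds its spike factor times baseline."""
    if baseline_rps <= 0:
        return False  # no baseline yet; avoid alerting on cold start
    return current_rps > SPIKE_FACTORS[tenant] * baseline_rps
```

With these factors, 400 req/s against a 100 req/s baseline fires for `acme` (4x > 3x) but not for `beta-corp` (4x < 5x), which is exactly the isolation per-tenant thresholds buy you.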
```rust
use coresdk_engine::alerting::TenantAlertConfig;

// Phase 2 — configure per-tenant alert thresholds via sidecar YAML
// tenant_alerts:
//   - tenant_id: acme
//     policy_denial_rate_threshold: 0.03
//     auth_failure_rate_threshold: 0.05
//     request_rate_spike_factor: 3.0
//   - tenant_id: beta-corp
//     policy_denial_rate_threshold: 0.10
//     auth_failure_rate_threshold: 0.15
//     request_rate_spike_factor: 5.0
```

```python
from coresdk.alerting import TenantAlertConfig

sdk = CoreSDKClient(SDKConfig(
    tenant="acme",
    tenant_alerts=[
        TenantAlertConfig(
            tenant_id="acme",
            policy_denial_rate_threshold=0.03,
            auth_failure_rate_threshold=0.05,
            request_rate_spike_factor=3.0,
        ),
        TenantAlertConfig(
            tenant_id="beta-corp",
            policy_denial_rate_threshold=0.10,
            auth_failure_rate_threshold=0.15,
            request_rate_spike_factor=5.0,
        ),
    ],
))
```

```go
sdk, err := coresdk.New(coresdk.Config{
    Tenant: "acme",
    TenantAlerts: []alerting.TenantAlertConfig{
        {
            TenantID:                    "acme",
            PolicyDenialRateThreshold:   0.03,
            AuthFailureRateThreshold:    0.05,
            RequestRateSpikeFactorAbove: 3.0,
        },
        {
            TenantID:                    "beta-corp",
            PolicyDenialRateThreshold:   0.10,
            AuthFailureRateThreshold:    0.15,
            RequestRateSpikeFactorAbove: 5.0,
        },
    },
})
```

```typescript
const sdk = await CoreSDK.create({
  tenant: "acme",
  tenantAlerts: [
    {
      tenantId: "acme",
      policyDenialRateThreshold: 0.03,
      authFailureRateThreshold: 0.05,
      requestRateSpikeFactorAbove: 3.0,
    },
    {
      tenantId: "beta-corp",
      policyDenialRateThreshold: 0.10,
      authFailureRateThreshold: 0.15,
      requestRateSpikeFactorAbove: 5.0,
    },
  ],
});
```

Webhook notifications
Send alert payloads to any HTTP endpoint. CoreSDK retries with exponential backoff on failure.
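The retry schedule can be pictured as doubling delays up to a cap; the base delay and cap here are assumptions for illustration, not CoreSDK's documented values:

```python
def backoff_delays(attempts, base=1.0, cap=30.0):
    """Exponential backoff: base, 2*base, 4*base, ... seconds, clamped to cap."""
    return [min(cap, base * 2 ** i) for i in range(attempts)]

# With retry_attempts: 3 as in the examples below, the delays would be
# 1s, 2s, 4s before giving up.
```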
```rust
use coresdk_engine::alerting::Webhook;

// Phase 2 — configure webhook sinks via sidecar YAML
// alert_sinks:
//   - type: webhook
//     url: https://hooks.example.com/coresdk-alerts
//     headers:
//       Authorization: "Bearer my-secret"
//     retry_attempts: 3
```

```python
from coresdk.alerting import Webhook

sdk = CoreSDKClient(SDKConfig(
    tenant="acme",
    alert_sinks=[
        Webhook(
            url="https://hooks.example.com/coresdk-alerts",
            headers={"Authorization": "Bearer my-secret"},
            retry_attempts=3,
        )
    ],
))
```

```go
sdk, err := coresdk.New(coresdk.Config{
    Tenant: "acme",
    AlertSinks: []alerting.Sink{
        alerting.NewWebhook("https://hooks.example.com/coresdk-alerts").
            Header("Authorization", "Bearer my-secret").
            RetryAttempts(3),
    },
})
```

```typescript
import { Webhook } from "@coresdk/sdk/alerting";

const sdk = await CoreSDK.create({
  tenant: "acme",
  alertSinks: [
    Webhook.create("https://hooks.example.com/coresdk-alerts")
      .header("Authorization", "Bearer my-secret")
      .retryAttempts(3),
  ],
});
```

The webhook payload shape:
```json
{
  "alert_name": "high-policy-denial-rate",
  "severity": "warning",
  "tenant_id": "acme",
  "fired_at": "2026-03-19T14:35:00Z",
  "value": 0.08,
  "threshold": 0.05,
  "window_seconds": 300,
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "labels": {
    "service_name": "orders-service",
    "environment": "production"
  }
}
```

PagerDuty and Slack integrations
CoreSDK provides first-class sinks for PagerDuty and Slack so you do not need to write a webhook adapter.
PagerDuty
```rust
use coresdk_engine::alerting::PagerDuty;

.alert_sink(
    PagerDuty::new("pdl_...") // Events API v2 integration key
        .severity_map(|s| match s {
            "critical" => pagerduty::Severity::Critical,
            "warning" => pagerduty::Severity::Warning,
            _ => pagerduty::Severity::Info,
        }),
)
```

```yaml
# Equivalent YAML config (for sidecar / operator deployments)
alert_sinks:
  - type: pagerduty
    integration_key: pdl_...
    severity_map:
      critical: critical
      warning: warning
```

Slack
```rust
use coresdk_engine::alerting::Slack;

.alert_sink(
    Slack::new("https://hooks.slack.com/services/T.../B.../...")
        .channel("#coresdk-alerts")
        .mention_on_severity("critical", "@oncall"),
)
```

```yaml
# Equivalent YAML config
alert_sinks:
  - type: slack
    webhook_url: https://hooks.slack.com/services/T.../B.../...
    channel: "#coresdk-alerts"
    mention_on_severity:
      critical: "@oncall"
```

Multiple sinks can be active at the same time — combine PagerDuty for critical alerts with Slack for warnings.
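Routing by severity across multiple sinks can be sketched as a simple dispatch table; this is illustrative only, not a CoreSDK API:

```python
SINKS_BY_SEVERITY = {
    "critical": ["pagerduty", "slack"],  # page on-call and post to Slack
    "warning":  ["slack"],               # Slack only
}

def route(severity):
    """Return the sinks an alert of this severity is delivered to."""
    return SINKS_BY_SEVERITY.get(severity, ["slack"])
```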
Next steps
- Metrics — the underlying metrics that feed alert conditions
- Structured Logging — log lines emitted on alert fire/resolve
- OpenTelemetry — link alert events to the triggering trace