# Structured Logging
JSON and OTel-compatible structured logs with per-tenant filtering and export to Datadog, Loki, and CloudWatch.
**Phase note.** Available in Phase 1b, shipping with the sidecar daemon and wrapper SDKs. Phase 1a users (Rust crate only) access it through the core engine directly.
CoreSDK emits structured JSON logs for every auth decision, policy evaluation, and SDK lifecycle event. Every log line carries trace context so you can pivot from a log to the full distributed trace.
## Log format
All log lines are JSON and conform to the OpenTelemetry Log Data Model.
```json
{
  "timestamp": "2026-03-19T14:32:01.245Z",
  "severity": "INFO",
  "body": "policy evaluation completed",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7",
  "attributes": {
    "coresdk.tenant_id": "acme",
    "coresdk.user_id": "usr_2xK9",
    "coresdk.policy.action": "orders:read",
    "coresdk.policy.result": "allowed",
    "coresdk.policy.latency_us": 42,
    "service.name": "orders-service",
    "service.version": "1.4.0"
  }
}
```

When OTel tracing is enabled, `trace_id` and `span_id` are always present, so you can correlate a log line with its distributed trace directly in backends like Grafana, Datadog, and Loki.
## Log levels
| Level | When emitted |
|---|---|
| ERROR | Auth failures, policy engine panics, network errors |
| WARN | Deprecated config, elevated error rates, near-threshold anomalies |
| INFO | Auth decisions, policy results, SDK startup/shutdown |
| DEBUG | Token parsing steps, Rego input/output, cache hits/misses |
| TRACE | Raw HTTP payloads, JWKS fetch internals (never in production) |
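The levels form the usual severity hierarchy, with ERROR most severe and TRACE least: setting a level keeps that level and everything more severe. As an illustration of how a log consumer might apply such a threshold (a standalone sketch, not SDK code):

```python
# Severity order matches the table above: ERROR is most severe, TRACE least.
LEVELS = ["ERROR", "WARN", "INFO", "DEBUG", "TRACE"]

def passes(severity: str, threshold: str) -> bool:
    """True if a record at `severity` is kept under `threshold`."""
    return LEVELS.index(severity) <= LEVELS.index(threshold)

records = [
    {"severity": "DEBUG", "body": "cache miss"},
    {"severity": "INFO",  "body": "policy evaluation completed"},
    {"severity": "ERROR", "body": "JWKS fetch failed"},
]

# At threshold INFO, DEBUG is dropped; INFO and ERROR survive.
kept = [r for r in records if passes(r["severity"], "INFO")]
print([r["severity"] for r in kept])  # → ['INFO', 'ERROR']
```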
```rust
let engine = Engine::from_env().await?;
// CORESDK_LOG_LEVEL=info   # error | warn | info | debug | trace
// CORESDK_LOG_FORMAT=json  # json | pretty
```

```python
# Configure via environment variables
# CORESDK_LOG_LEVEL=info   # error | warn | info | debug | trace
# CORESDK_LOG_FORMAT=json  # json | pretty
```

```go
sdk, err := coresdk.New(coresdk.Config{
	Tenant:    "acme",
	LogLevel:  coresdk.LogLevelInfo,  // LogLevelError | Warn | Info | Debug | Trace
	LogFormat: coresdk.LogFormatJSON, // LogFormatJSON | LogFormatPretty
})
```

```typescript
const sdk = await CoreSDK.create({
  tenant: "acme",
  logLevel: "info",   // "error" | "warn" | "info" | "debug" | "trace"
  logFormat: "json",  // "json" | "pretty"
});
```

## Filtering by tenant or trace
Use the `log_filter` option to narrow log output to a specific tenant, user, or trace ID. Filters follow the same syntax as `core trace tail`.
```rust
let engine = Engine::from_env().await?;
// CORESDK_LOG_FILTER="tenant_id=acme AND policy.result=denied"
```

```python
# CORESDK_LOG_FILTER="tenant_id=acme AND policy.result=denied"
```

```go
sdk, err := coresdk.New(coresdk.Config{
	Tenant:    "acme",
	LogFilter: "tenant_id=acme AND policy.result=denied",
})
```

```typescript
const sdk = await CoreSDK.create({
  tenant: "acme",
  logFilter: "tenant_id=acme AND policy.result=denied",
});
```

You can also filter at the CLI level without redeploying:
```shell
core log tail --tenant acme
core log tail --tenant acme --filter 'policy.result=denied'
core log tail --tenant acme --filter 'user_id=usr_2xK9'
core log tail --tenant acme --trace-id 4bf92f3577b34da6a3ce929d0e0e4736
```

## Exporting to Datadog, Loki, and CloudWatch
CoreSDK supports log export over OTLP (HTTP or gRPC). Point your OTLP endpoint at any compatible collector and the logs flow automatically.
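Endpoint selection can usually stay out of application config entirely via the standard OpenTelemetry exporter environment variables. A minimal sketch: the variable names come from the OTel specification, but only `OTEL_EXPORTER_OTLP_ENDPOINT` appears in this page's CoreSDK examples, so treat the protocol and headers variables as assumptions about what your collector needs.

```shell
# Standard OTel exporter settings (variable names per the OpenTelemetry spec).
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc                        # or http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317  # any OTLP-compatible collector
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer TOKEN" # only if your collector needs auth
```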
### Datadog
```yaml
# datadog-agent.yaml
otlp_config:
  receiver:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  logs_enabled: true
```

```rust
let engine = Engine::from_env().await?;
// OTEL_EXPORTER_OTLP_ENDPOINT=http://datadog-agent:4317
```

### Grafana Loki (via OpenTelemetry Collector)
```yaml
# otel-collector.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  loki:
    endpoint: http://loki:3100/loki/api/v1/push
    labels:
      attributes:
        coresdk.tenant_id: tenant_id
        coresdk.policy.result: policy_result
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [loki]
```

### AWS CloudWatch
```yaml
# otel-collector.yaml
exporters:
  awscloudwatchlogs:
    log_group_name: /coresdk/orders-service
    log_stream_name: production
    region: us-east-1
```

## Adding custom fields
**Phase note.** Ships in Phase 2.
Attach arbitrary key-value pairs to every log line emitted within a request context.
```rust
use coresdk_engine::logging::with_fields;

async fn handle_request(req: Request) -> Response {
    with_fields! {
        "order.id" => req.order_id,
        "order.region" => req.region,
    };
    // all logs within this scope carry the extra fields
    process(req).await
}
```

```python
from coresdk.tracing import log_context  # Phase 2

with log_context(order_id=req.order_id, order_region=req.region):
    # all logs within this block carry the extra fields
    await process(req)
```

```go
import "github.com/coresdk/sdk-go/logging"

ctx = logging.WithFields(ctx, map[string]any{
	"order.id":     req.OrderID,
	"order.region": req.Region,
})
// pass ctx through; all SDK log calls pick up the extra fields
result, err := sdk.Auth(ctx, token)
```

```typescript
import { withLogFields } from "@coresdk/sdk";

await withLogFields(
  { "order.id": req.orderId, "order.region": req.region },
  async () => {
    // all logs within this callback carry the extra fields
    await process(req);
  }
);
```

## Next steps
- OpenTelemetry — correlate logs with distributed traces
- Metrics — Prometheus export and Grafana dashboards
- Alerts & Anomaly Detection — act on log-derived signals