Pluggable Caching Layer
Configure in-memory and Redis cache adapters for JWKS, policy results, tenant context, and config — with HMAC-SHA256 integrity and full OTel metrics.
CoreSDK caches four categories of data to avoid repeated network round-trips on the hot path:
| Cache store | What is stored | Default TTL |
|---|---|---|
| jwks | JWK public keys fetched from the JWKS endpoint | 15 minutes |
| policy | Rego policy evaluation results, keyed by input hash | 30 seconds |
| tenant | Tenant context objects (plan, feature flags, metadata) | 5 minutes |
| config | Remote SDK configuration snapshots | 10 minutes |
The default adapter is an in-process LRU memory cache (Phase 1). The Redis adapter ships in Phase 2 — swapping to Redis requires one additional config block and no application code changes.
CacheAdapter trait
All cache backends implement the CacheAdapter trait. You can provide a custom adapter by implementing it:
use coresdk_cache::{CacheAdapter, CacheManager, CacheError};
use std::time::Duration;
struct MyCustomCache { /* ... */ }
impl CacheAdapter for MyCustomCache {
fn get(&self, key: &str) -> Result<Option<Vec<u8>>, CacheError> { todo!() }
fn set(&self, key: &str, value: Vec<u8>, ttl: Duration) -> Result<(), CacheError> { todo!() }
fn delete(&self, key: &str) -> Result<(), CacheError> { todo!() }
}
// CacheManager wraps any adapter and adds get_json / set_json helpers
let manager = CacheManager::new(MyCustomCache { /* ... */ });
// Serialize any serde::Serialize value; deserialize with get_json
manager.set_json("jwks:acme", &my_jwks, Duration::from_secs(900))?;
let jwks: Option<MyJwkSet> = manager.get_json("jwks:acme")?;
CacheManager::get_json and CacheManager::set_json handle JSON serialisation, HMAC-SHA256 signing, and TTL-based expiry automatically. The underlying get/set bytes API is available for non-JSON values.
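The sign-on-write / verify-on-read behavior can be sketched in a few lines of Python. This is an illustration of the scheme, not the SDK's actual implementation; the signing key and store are hypothetical stand-ins.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"example-signing-key"  # illustrative only; real keys arrive over mTLS

def signed_set(store: dict, key: str, value) -> None:
    """Serialize to JSON and prepend an HMAC-SHA256 signature."""
    payload = json.dumps(value).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    store[key] = sig + payload  # 32-byte signature prefix + payload

def signed_get(store: dict, key: str):
    """Return the value only if the signature verifies; otherwise None (a miss)."""
    raw = store.get(key)
    if raw is None:
        return None
    sig, payload = raw[:32], raw[32:]
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or corrupted -> treated as a cache MISS
    return json.loads(payload)
```

A corrupted entry simply verifies as a miss, which is why integrity failures fall through to a live fetch rather than erroring out.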
Default in-memory cache
The in-memory adapter is zero-configuration. It runs inside the SDK process with sensible bounds so it cannot grow unbounded.
use coresdk_engine::{CoreSDK, cache::InMemoryCache};
use std::time::Duration;
let sdk = CoreSDK::from_env()?
.tenant("acme")
// InMemoryCache is the default — this is explicit for clarity
.cache(
InMemoryCache::builder()
.jwks_ttl(Duration::from_secs(900)) // 15 min
.jwks_max_entries(512)
.policy_ttl(Duration::from_secs(30)) // 30 sec
.policy_max_entries(10_000)
.tenant_ttl(Duration::from_secs(300)) // 5 min
.tenant_max_entries(2_048)
.config_ttl(Duration::from_secs(600)) // 10 min
.config_max_entries(256)
.build(),
)
.build()
.await?;
from coresdk import CoreSDKClient, SDKConfig
# In-memory cache is built-in — no extra configuration required
_sdk = CoreSDKClient(SDKConfig(
sidecar_addr="[::1]:50051",
tenant_id="acme-corp",
fail_mode="open",
))
import (
"time"
"github.com/coresdk/sdk"
"github.com/coresdk/sdk/cache"
)
client, err := sdk.New(sdk.Config{
Tenant: "acme",
// InMemoryCache is the default — this is explicit for clarity
Cache: cache.NewInMemory(cache.InMemoryOptions{
JWKSTtl: 15 * time.Minute,
JWKSMaxEntries: 512,
PolicyTtl: 30 * time.Second,
PolicyMaxEntries: 10_000,
TenantTtl: 5 * time.Minute,
TenantMaxEntries: 2_048,
ConfigTtl: 10 * time.Minute,
ConfigMaxEntries: 256,
}),
})
import { CoreSDK, InMemoryCache } from "@coresdk/sdk";
const sdk = new CoreSDK({
tenant: "acme",
// InMemoryCache is the default — this is explicit for clarity
cache: new InMemoryCache({
jwksTtl: 900, // seconds — 15 min
jwksMaxEntries: 512,
policyTtl: 30, // 30 sec
policyMaxEntries: 10_000,
tenantTtl: 300, // 5 min
tenantMaxEntries: 2_048,
configTtl: 600, // 10 min
configMaxEntries: 256,
}),
});
When an entry exceeds its TTL or the store reaches max_entries, the LRU eviction policy removes the least-recently-used entry. Eviction counts are exposed as OTel metrics (see Cache metrics below).
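The combined TTL-plus-LRU bounding can be sketched with a small Python class (an illustration of the policy, not SDK code):

```python
import time
from collections import OrderedDict

class TtlLruCache:
    """Bounded cache: entries expire after ttl seconds, and the
    least-recently-used entry is evicted once max_entries is reached."""

    def __init__(self, max_entries: int, ttl: float):
        self.max_entries = max_entries
        self.ttl = ttl
        self._data: OrderedDict = OrderedDict()  # key -> (expires_at, value)
        self.evictions = 0  # the counter exported as coresdk.cache.evictions

    def set(self, key, value):
        if key in self._data:
            self._data.pop(key)
        elif len(self._data) >= self.max_entries:
            self._data.popitem(last=False)  # drop the least-recently-used entry
            self.evictions += 1
        self._data[key] = (time.monotonic() + self.ttl, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # expired -> miss
            return None
        self._data.move_to_end(key)  # mark as recently used
        return value
```

Note that a read refreshes recency but never extends the TTL — an entry expires at a fixed deadline regardless of how often it is hit.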
Redis adapter
The Redis cache adapter ships in Phase 2.
The Redis adapter shares the cache across all SDK instances in a fleet. It requires TLS 1.3 and authentication — unencrypted or unauthenticated connections are rejected at SDK initialization.
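The reject-at-initialization behavior for plaintext URLs can be illustrated with a small sketch (a hypothetical helper, not the SDK's code):

```python
from urllib.parse import urlparse

class TlsRequiredError(ValueError):
    """Raised when a cache URL does not use the TLS-enforcing rediss:// scheme."""

def validate_cache_url(url: str) -> str:
    """Refuse plaintext redis:// URLs up front, mirroring the SDK's
    policy of rejecting unencrypted connections at initialization."""
    parsed = urlparse(url)
    if parsed.scheme != "rediss":
        raise TlsRequiredError(
            f"unencrypted scheme {parsed.scheme!r} rejected; use rediss://"
        )
    return url
```

Failing fast here is deliberate: a misconfigured scheme surfaces at startup rather than as silent plaintext traffic in production.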
use coresdk_engine::{CoreSDK, cache::RedisCache};
use std::time::Duration;
let sdk = CoreSDK::from_env()?
.tenant("acme")
.cache(
RedisCache::builder()
.url("rediss://cache.internal:6380") // rediss:// enforces TLS
.password("$REDIS_PASSWORD")
.tls_ca_cert("/etc/coresdk/redis-ca.pem")
// optional: mutual TLS client certificate
.tls_client_cert("/etc/coresdk/redis-client.pem")
.tls_client_key("/etc/coresdk/redis-client-key.pem")
.key_prefix("coresdk:acme:") // namespace per tenant
.connect_timeout(Duration::from_secs(2))
.command_timeout(Duration::from_millis(200))
.build()
.await?,
)
.build()
.await?;
# Redis cache adapter ships Phase 2.
# Configure via coresdk.toml or CORESDK_CACHE_* env vars (Phase 2).
import (
"time"
"github.com/coresdk/sdk"
"github.com/coresdk/sdk/cache"
)
client, err := sdk.New(sdk.Config{
Tenant: "acme",
Cache: cache.NewRedis(cache.RedisOptions{
URL: "rediss://cache.internal:6380", // rediss:// enforces TLS
Password: os.Getenv("REDIS_PASSWORD"),
TLSCACert: "/etc/coresdk/redis-ca.pem",
// optional: mutual TLS client certificate
TLSClientCert: "/etc/coresdk/redis-client.pem",
TLSClientKey: "/etc/coresdk/redis-client-key.pem",
KeyPrefix: "coresdk:acme:", // namespace per tenant
ConnectTimeout: 2 * time.Second,
CommandTimeout: 200 * time.Millisecond,
}),
})
import { CoreSDK, RedisCache } from "@coresdk/sdk";
const sdk = new CoreSDK({
tenant: "acme",
cache: new RedisCache({
url: "rediss://cache.internal:6380", // rediss:// enforces TLS
password: process.env.REDIS_PASSWORD,
tlsCaCert: "/etc/coresdk/redis-ca.pem",
// optional: mutual TLS client certificate
tlsClientCert: "/etc/coresdk/redis-client.pem",
tlsClientKey: "/etc/coresdk/redis-client-key.pem",
keyPrefix: "coresdk:acme:", // namespace per tenant
connectTimeout: 2000, // ms
commandTimeout: 200, // ms
}),
});
The rediss:// scheme (double s) is required. Using redis:// causes initialization to fail with CacheError::TlsRequired.
TLS requirements
| Requirement | Detail |
|---|---|
| Protocol | TLS 1.3 minimum (TLS 1.2 rejected) |
| Server authentication | CA certificate must be provided; system roots are not trusted for cache backends |
| Client authentication (mTLS) | Optional but recommended; cert + key must be provided together |
| Cipher suites | TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256 |
Redis Cluster
Redis Cluster is supported by providing a seed node list. The SDK discovers the full cluster topology automatically and routes commands to the correct shard.
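The shard routing mentioned above follows the standard Redis Cluster algorithm: a key maps to one of 16384 slots via CRC16 (the XMODEM variant) modulo 16384, and a {hash tag} pins related keys to the same slot. A minimal sketch of that mapping, independent of the SDK:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0x0000), the checksum Redis Cluster
    uses for key-to-slot mapping."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to a cluster slot; a non-empty {hash tag} means only the
    tag is hashed, so tagged keys land on the same shard."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1 : end]
    return crc16_xmodem(key.encode()) % 16384
```

This is why per-tenant key prefixes like coresdk:acme: spread across shards unless they use a hash tag.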
use coresdk_engine::{CoreSDK, cache::RedisClusterCache};
use std::time::Duration;
let sdk = CoreSDK::from_env()?
.tenant("acme")
.cache(
RedisClusterCache::builder()
.nodes(vec![
"rediss://cache-0.internal:6380",
"rediss://cache-1.internal:6380",
"rediss://cache-2.internal:6380",
])
.password("$REDIS_PASSWORD")
.tls_ca_cert("/etc/coresdk/redis-ca.pem")
.key_prefix("coresdk:acme:")
.read_from_replicas(true) // spread read load across replicas
.connect_timeout(Duration::from_secs(2))
.command_timeout(Duration::from_millis(200))
.build()
.await?,
)
.build()
.await?;
# Redis Cluster adapter ships Phase 2.
client, err := sdk.New(sdk.Config{
Tenant: "acme",
Cache: cache.NewRedisCluster(cache.RedisClusterOptions{
Nodes: []string{
"rediss://cache-0.internal:6380",
"rediss://cache-1.internal:6380",
"rediss://cache-2.internal:6380",
},
Password: os.Getenv("REDIS_PASSWORD"),
TLSCACert: "/etc/coresdk/redis-ca.pem",
KeyPrefix: "coresdk:acme:",
ReadFromReplicas: true, // spread read load across replicas
ConnectTimeout: 2 * time.Second,
CommandTimeout: 200 * time.Millisecond,
}),
})
import { CoreSDK, RedisClusterCache } from "@coresdk/sdk";
const sdk = new CoreSDK({
tenant: "acme",
cache: new RedisClusterCache({
nodes: [
"rediss://cache-0.internal:6380",
"rediss://cache-1.internal:6380",
"rediss://cache-2.internal:6380",
],
password: process.env.REDIS_PASSWORD,
tlsCaCert: "/etc/coresdk/redis-ca.pem",
keyPrefix: "coresdk:acme:",
readFromReplicas: true, // spread read load across replicas
connectTimeout: 2000,
commandTimeout: 200,
}),
});
Redis Sentinel
For high-availability deployments that use Sentinel instead of Cluster:
use coresdk_engine::{CoreSDK, cache::RedisSentinelCache};
let sdk = CoreSDK::from_env()?
.tenant("acme")
.cache(
RedisSentinelCache::builder()
.sentinels(vec![
"rediss://sentinel-0.internal:26380",
"rediss://sentinel-1.internal:26380",
"rediss://sentinel-2.internal:26380",
])
.master_name("mymaster")
.password("$REDIS_PASSWORD")
.tls_ca_cert("/etc/coresdk/redis-ca.pem")
.key_prefix("coresdk:acme:")
.build()
.await?,
)
.build()
.await?;
# Redis Sentinel adapter ships Phase 2.
client, err := sdk.New(sdk.Config{
Tenant: "acme",
Cache: cache.NewRedisSentinel(cache.RedisSentinelOptions{
Sentinels: []string{
"rediss://sentinel-0.internal:26380",
"rediss://sentinel-1.internal:26380",
"rediss://sentinel-2.internal:26380",
},
MasterName: "mymaster",
Password: os.Getenv("REDIS_PASSWORD"),
TLSCACert: "/etc/coresdk/redis-ca.pem",
KeyPrefix: "coresdk:acme:",
}),
})
import { CoreSDK, RedisSentinelCache } from "@coresdk/sdk";
const sdk = new CoreSDK({
tenant: "acme",
cache: new RedisSentinelCache({
sentinels: [
"rediss://sentinel-0.internal:26380",
"rediss://sentinel-1.internal:26380",
"rediss://sentinel-2.internal:26380",
],
masterName: "mymaster",
password: process.env.REDIS_PASSWORD,
tlsCaCert: "/etc/coresdk/redis-ca.pem",
keyPrefix: "coresdk:acme:",
}),
});
Per-store TTL tuning
TTLs can be configured independently per cache store regardless of which adapter is in use. Shorter TTLs reduce the window during which a revoked key or changed policy stays cached; longer TTLs reduce latency and backend load.
use coresdk_engine::{CoreSDK, cache::{RedisCache, CacheTtls}};
use std::time::Duration;
let sdk = CoreSDK::from_env()?
.tenant("acme")
.cache(
RedisCache::builder()
.url("rediss://cache.internal:6380")
.password("$REDIS_PASSWORD")
.tls_ca_cert("/etc/coresdk/redis-ca.pem")
.ttls(
CacheTtls::builder()
// JWKS: short TTL tightens the key rotation window
.jwks(Duration::from_secs(300)) // 5 min
// Policy: increase for stable, rarely-changing policies
.policy(Duration::from_secs(120)) // 2 min
// Tenant: increase if tenant metadata changes infrequently
.tenant(Duration::from_secs(600)) // 10 min
// Config: long TTL is fine; config changes are rare
.config(Duration::from_secs(1800)) // 30 min
.build(),
)
.build()
.await?,
)
.build()
.await?;
# Redis TTL tuning ships Phase 2.
# For Phase 1 in-memory TTLs, set CORESDK_CACHE_JWKS_TTL etc. as env vars.
client, err := sdk.New(sdk.Config{
Tenant: "acme",
Cache: cache.NewRedis(cache.RedisOptions{
URL: "rediss://cache.internal:6380",
Password: os.Getenv("REDIS_PASSWORD"),
TLSCACert: "/etc/coresdk/redis-ca.pem",
TTLs: cache.TTLs{
// JWKS: short TTL tightens the key rotation window
JWKS: 5 * time.Minute,
// Policy: increase for stable, rarely-changing policies
Policy: 2 * time.Minute,
// Tenant: increase if tenant metadata changes infrequently
Tenant: 10 * time.Minute,
// Config: long TTL is fine; config changes are rare
Config: 30 * time.Minute,
},
}),
})
import { CoreSDK, RedisCache } from "@coresdk/sdk";
const sdk = new CoreSDK({
tenant: "acme",
cache: new RedisCache({
url: "rediss://cache.internal:6380",
password: process.env.REDIS_PASSWORD,
tlsCaCert: "/etc/coresdk/redis-ca.pem",
ttls: {
// JWKS: short TTL tightens the key rotation window
jwks: 300, // 5 min
// Policy: increase for stable, rarely-changing policies
policy: 120, // 2 min
// Tenant: increase if tenant metadata changes infrequently
tenant: 600, // 10 min
// Config: long TTL is fine; config changes are rare
config: 1800, // 30 min
},
}),
});
TTL reference
| Store | Minimum safe TTL | Recommended default | Maximum recommended TTL |
|---|---|---|---|
| jwks | 60 seconds | 15 minutes | 1 hour |
| policy | 5 seconds | 30 seconds | 10 minutes |
| tenant | 30 seconds | 5 minutes | 30 minutes |
| config | 5 minutes | 10 minutes | 1 hour |
Setting the jwks TTL below 60 seconds is not recommended: JWKS endpoint rate limits vary by identity provider, and burst fetches on a cold start can cause cascading failures.
Cache integrity
All values written to external cache backends (Redis) are signed with HMAC-SHA256. If a value fails signature verification on read — due to tampering, corruption, or a key mismatch — CoreSDK treats it as a cache miss and falls back to a live fetch.
Key distribution: HMAC signing keys are delivered to the SDK over the same mTLS channel used for config. They are held only in process memory and are never written to disk or stored in the cache itself.
Key rotation: Signing keys rotate automatically every 24 hours. During the rotation window both the old and new key are accepted on reads. Writes always use the new key.
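The dual-key acceptance window can be sketched as follows (an illustration of the rotation scheme; the keys shown are hypothetical):

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    """HMAC-SHA256 signature over the serialized cache value."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_with_rotation(current_key: bytes, previous_key: bytes,
                         sig: bytes, payload: bytes) -> bool:
    """Accept signatures from either the current or the previous key, so
    entries written before a rotation remain readable during the window."""
    for key in (current_key, previous_key):
        if hmac.compare_digest(sig, sign(key, payload)):
            return True
    return False
```

Because writes always use the new key, the old key ages out of the cache naturally once every pre-rotation entry has expired or been rewritten.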
Cache read path
───────────────
1. Fetch raw bytes from store (Redis / in-memory)
2. Verify HMAC-SHA256 signature
├── valid → deserialize and return
└── invalid → log warning, record metric, treat as MISS
↓
3. MISS: live fetch from origin (JWKS endpoint / policy engine / etc.)
├── success → write back to cache with fresh signature → return
    └── failure → return CacheError::OriginUnreachable
Cache miss behavior
A cache miss never silently degrades to an insecure state. The fallback chain is strict:
- Cache miss or integrity failure — CoreSDK attempts a live fetch from the origin (JWKS endpoint, policy engine, tenant API, or config API).
- Live fetch succeeds — result is stored in cache and returned normally. The request proceeds.
- Live fetch fails — CoreSDK returns a hard error. The request is rejected. No fallback to a stale value.
This means that if both the cache and the origin are unavailable simultaneously, requests will fail rather than proceed with potentially outdated security data.
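The strict fallback chain amounts to a read-through with no stale path. A minimal sketch (the function and exception names are illustrative, not SDK API):

```python
class OriginUnreachableError(Exception):
    """Both the cache and the origin failed; the request must be rejected."""

def read_through(cache_get, origin_fetch, cache_set, key):
    """Strict fallback: cache hit, else live fetch, else hard error.
    A stale or unverified value is never returned."""
    value = cache_get(key)  # returns None on miss or integrity failure
    if value is not None:
        return value
    try:
        value = origin_fetch(key)
    except Exception as exc:
        raise OriginUnreachableError(key) from exc
    cache_set(key, value)   # write back (with a fresh signature) for next time
    return value
```

There is deliberately no branch that serves an expired entry when the origin is down — that is the "no fallback to a stale value" guarantee above.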
To control the live-fetch retry behavior:
RedisCache::builder()
// ...
.miss_fetch_timeout(Duration::from_secs(3)) // max time for a live fetch
.miss_fetch_retries(2) // attempts before hard failure
.build()
.await?
# Redis miss fetch tuning ships Phase 2.
cache.NewRedis(cache.RedisOptions{
// ...
MissFetchTimeout: 3 * time.Second,
MissFetchRetries: 2,
})
new RedisCache({
// ...
missFetchTimeout: 3000, // ms
missFetchRetries: 2,
})
Cache metrics
CoreSDK exports cache telemetry through the same OpenTelemetry pipeline used for auth and policy metrics. No additional configuration is required if OTEL is already enabled.
| Metric | Type | Labels | Description |
|---|---|---|---|
| coresdk.cache.hits | Counter | store, adapter | Cache lookups that returned a valid entry |
| coresdk.cache.misses | Counter | store, adapter | Cache lookups that resulted in a live fetch |
| coresdk.cache.integrity_failures | Counter | store, adapter | Entries rejected due to HMAC verification failure |
| coresdk.cache.evictions | Counter | store, adapter | Entries removed by LRU eviction (in-memory only) |
| coresdk.cache.origin_fetches | Counter | store, result | Live fetch attempts, labelled success or failure |
| coresdk.cache.origin_fetch_latency | Histogram | store | Round-trip latency for live fetches (milliseconds) |
| coresdk.cache.size | Gauge | store, adapter | Current number of entries in the store |
| coresdk.cache.ttl_remaining | Histogram | store | Remaining TTL (seconds) of entries at read time |
Label values for store: jwks, policy, tenant, config.
Label values for adapter: memory, redis.
A healthy deployment should show a hit rate above 95% for jwks and tenant stores and above 80% for policy during steady-state traffic. Elevated integrity_failures indicate a signing key mismatch and should be treated as a security alert.
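For an alert rule, the hit rate is derived from the two counters above. A sketch of that check, using the thresholds from this section (the helper names are illustrative):

```python
def hit_rate(hits: int, misses: int) -> float:
    """Hit rate as a fraction of all lookups; 0.0 when there is no traffic."""
    total = hits + misses
    return hits / total if total else 0.0

def healthy(store: str, hits: int, misses: int) -> bool:
    """Apply the documented steady-state thresholds per store; stores
    without a documented threshold are considered healthy."""
    thresholds = {"jwks": 0.95, "tenant": 0.95, "policy": 0.80}
    return hit_rate(hits, misses) >= thresholds.get(store, 0.0)
```

In practice the same comparison would be expressed as a query over coresdk.cache.hits and coresdk.cache.misses in your metrics backend.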
Sidecar daemon (YAML) configuration
When running CoreSDK as a sidecar daemon, the cache adapter is configured in the sidecar YAML manifest rather than in application code. The options map 1:1 to the SDK builder API.
# /etc/coresdk/config.yaml
cache:
adapter: redis # "memory" | "redis" | "redis-cluster" | "redis-sentinel"
memory: # used when adapter is "memory"
jwks_ttl: 900
jwks_max_entries: 512
policy_ttl: 30
policy_max_entries: 10000
tenant_ttl: 300
tenant_max_entries: 2048
config_ttl: 600
config_max_entries: 256
redis: # used when adapter is "redis"
url: "rediss://cache.internal:6380"
password: "${REDIS_PASSWORD}"
tls_ca_cert: /etc/coresdk/redis-ca.pem
tls_client_cert: /etc/coresdk/redis-client.pem # optional mTLS
tls_client_key: /etc/coresdk/redis-client-key.pem
key_prefix: "coresdk:acme:"
connect_timeout: 2s
command_timeout: 200ms
miss_fetch_timeout: 3s
miss_fetch_retries: 2
redis_cluster: # used when adapter is "redis-cluster"
nodes:
- "rediss://cache-0.internal:6380"
- "rediss://cache-1.internal:6380"
- "rediss://cache-2.internal:6380"
password: "${REDIS_PASSWORD}"
tls_ca_cert: /etc/coresdk/redis-ca.pem
key_prefix: "coresdk:acme:"
read_from_replicas: true
connect_timeout: 2s
command_timeout: 200ms
redis_sentinel: # used when adapter is "redis-sentinel"
sentinels:
- "rediss://sentinel-0.internal:26380"
- "rediss://sentinel-1.internal:26380"
- "rediss://sentinel-2.internal:26380"
master_name: mymaster
password: "${REDIS_PASSWORD}"
tls_ca_cert: /etc/coresdk/redis-ca.pem
key_prefix: "coresdk:acme:"
ttls: # applies to all adapters
jwks: 300
policy: 120
tenant: 600
config: 1800
Environment variable interpolation (${VAR}) is supported for any string value in the sidecar config. Secrets should never be written as literal values in the YAML file.
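The ${VAR} expansion can be sketched as a small substitution pass. This is an illustration of the mechanism, not the sidecar's actual implementation; note that it fails loudly on an unset variable rather than silently writing an empty secret:

```python
import os
import re

_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def interpolate(value: str, env=os.environ) -> str:
    """Replace ${VAR} references in a config string with environment values."""
    def sub(match):
        name = match.group(1)
        if name not in env:
            raise KeyError(f"environment variable {name} is not set")
        return env[name]
    return _VAR.sub(sub, value)
```

Applied to the manifest above, "${REDIS_PASSWORD}" resolves at load time from the sidecar's environment, so the secret never appears in the YAML on disk.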
Next steps
- OpenTelemetry — configure the OTEL exporter that receives cache metrics
- JWT authentication — understand JWKS fetching and how cache TTLs affect key rotation latency
- Rego authorization — policy evaluation details and how input hashing works for the policy cache key
- Configuration reference — full list of all SDK configuration options