Caches in the Maverics Orchestrator provide named key-value storage that multiple modes and features can share. Define a cache once, reference it by name from any mode that supports caching, and swap the underlying implementation without changing any mode configuration. This page covers how caches work, which modes use them, data protection features, and how to get started.
Production support status:
  • Redis cache — Supported for production. Use Redis caches for shared data across Orchestrator instances in multi-node deployments.
  • Cluster cache — Experimental. Requires the experimental.clusters feature flag. Not recommended for production workloads. This feature may be changed or removed without notice.
Console terminology: In the Maverics Console, Orchestrator instances and configuration delivery are managed through Deployments. When working directly with YAML, configuration is managed as files delivered via the -config flag or MAVERICS_CONFIG environment variable.

How Caches Work

Caches are defined by name and referenced by any mode or feature that supports caching. All cache types behave identically from a mode's perspective, so you can swap the underlying implementation without changing the rest of your configuration.

Caches are configured independently of session stores: sessions persist user authentication state, while caches provide general-purpose key-value storage for mode-specific data such as tokens, authorization state, and provider metadata.

The Orchestrator includes a built-in local cache named local_default that works out of the box with no configuration. This cache is in-memory and single-node only; it does not share data across Orchestrator instances. For multi-node deployments, configure an external cache so all instances share the same cached data.
Do not use the name local_default for custom caches — it is reserved for the built-in local cache.

Caches and Orchestrator Modes

Different Orchestrator modes use caches for different purposes. Modes reference caches by name via their cache field:
  • OIDC Provider — Stores token and authorization server state. A shared cache is required for multi-node deployments so any Orchestrator instance can validate tokens and continue authorization flows started on another node.
  • SAML Provider — Stores request data and provider state. A shared cache is recommended for multi-node deployments to ensure SAML authentication flows complete correctly regardless of which node receives the response.
  • LDAP Provider — Stores shared credentials for facade deployments. When multiple Orchestrator instances serve LDAP traffic, a shared cache ensures consistent credential state.
  • HTTP Proxy — Does not use caches directly. HTTP Proxy mode uses sessions for user authentication state.
maverics.yaml
# Modes reference caches by name
oidcProvider:
  cache: my-cache
  # ... other oidcProvider config

samlProvider:
  cache: my-cache
  # ... other samlProvider config
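The my-cache referenced by both modes above would be defined once and shared by them. A minimal sketch follows, assuming a top-level caches list; the field names inside the entry are illustrative assumptions, not the verified schema (see the cache detail pages for each type's actual fields):

```yaml
# Hypothetical definition of the cache referenced above.
# Field names below are illustrative assumptions, not the verified schema.
caches:
  - name: my-cache
    type: redis                      # production-supported; shared across nodes
    connection:
      address: redis.internal:6379   # placeholder address
```

Because modes only reference the cache by name, switching my-cache from one implementation to another changes this definition only, not the oidcProvider or samlProvider configuration.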

Key Concepts

Data Protection

External caches support client-side data protection so sensitive values are secured before leaving the Orchestrator. These features are configured per cache and apply to all data stored in that cache.
  • Client-side encryption — Values are encrypted using AES-256-GCM before being sent to the cache service. The encryption key never leaves the Orchestrator.
  • Key hashing — Cache keys are hashed before storage so the cache service never sees sensitive key names.
  • Key rotation — Configure both a current key and old keys. The Orchestrator encrypts new data with the current key and decrypts existing data with either the current or old keys, enabling seamless key rotation without data loss.
  • Key prefix — Feature-specific prefixes are added to cache keys by default. Disable prefixing (keys.disablePrefix: true) when Service Extensions need to read data written by external systems.
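The per-cache data protection options above could be sketched as follows. Apart from keys.disablePrefix, which is quoted above, the field names here are illustrative assumptions; consult the cache detail pages for the actual schema:

```yaml
# Illustrative only — encryption and rotation field names are assumptions,
# not the verified schema. keys.disablePrefix is documented on this page.
caches:
  - name: my-cache
    type: redis
    encryption:
      currentKey: <current-aes-key>    # encrypts all new writes (AES-256-GCM)
      oldKeys:
        - <previous-aes-key>           # still accepted when decrypting old entries
    keys:
      hashKeys: true                   # hash key names before storage (assumed field name)
      disablePrefix: false             # keep feature-specific key prefixes (default)
```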
Service Extensions can read and write cache entries directly using the api.Cache() interface, which exposes GetBytes and SetBytes methods with TTL support. This is useful for caching expensive lookups (e.g., external API calls) so they are not repeated on every request. See the cache detail pages below for the full set of encryption, hashing, and connection fields for each cache type.

Built-in Local Cache

Every Orchestrator ships with a built-in cache named local_default. This cache is in-memory, requires no configuration, and is used automatically when a mode does not specify a cache field. Because it is local to each Orchestrator instance, it does not share data across nodes — for multi-node deployments, configure an external cache instead.
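Grounded in the behavior above, a mode that omits its cache field falls back to the built-in cache automatically:

```yaml
# No cache field: this mode automatically uses the built-in
# local_default cache (in-memory, single node).
oidcProvider:
  # cache field intentionally omitted
  # ... other oidcProvider config
```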

Setup

Each cache type has its own setup guide and configuration reference. See the cache detail pages below to get started: