The cluster cache is a distributed key-value store built directly into the Orchestrator. Data written on one node propagates automatically to every other node in the cluster — no external infrastructure to deploy or manage. This makes it the simplest option for sharing cached data across Orchestrator instances in environments where eventual consistency is acceptable.
Cluster caches require the experimental.clusters feature flag. They are not supported for production use and may be changed or removed without notice. For production multi-node deployments, use a Redis cache.
Console deployments: Cluster caches are not available natively in the Maverics Console. To use cluster caches with Console-deployed Orchestrators, use the config override feature. Config override requires enablement for your organization — contact your Strata account team or Strata support to enable it.
Prerequisite: Cluster caches require a configured cluster. See the Clusters reference page for setup instructions including discovery, encryption, and data plane configuration.

Overview

Every node in the cluster holds a replica of every cache entry. When a node writes a cache value — a provider token, a policy decision, a user attribute — that value propagates to all other nodes automatically. Any node can serve any request without reaching out to an external store. A cluster must be defined in the configuration before cluster caches can be used.

Consistency Model

Cluster caches are eventually consistent. A write completes immediately on the local node and is then propagated to other nodes via the gossip protocol. Other nodes may briefly return stale data until the update arrives. In practice, for small clusters (3-10 nodes) within the same region, updates typically converge in under one second. Cross-region deployments or larger clusters may take a few seconds. Conflict resolution is automatic and deterministic — no manual intervention is required, including after network partitions.
Because cluster caches are eventually consistent, a cache entry written on one node may not be immediately visible on another. For most Orchestrator use cases, the brief convergence window does not affect correctness. If your deployment requires immediate consistency for cached data, use a Redis cache instead.
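The write-locally-then-gossip behavior described above can be illustrated with a toy model. This is not the Orchestrator's actual implementation; it is a minimal Python sketch in which each node holds a full replica and pushes versioned entries to a peer each round, so a write is immediately visible locally but only eventually visible elsewhere:

```python
class Node:
    """Toy cluster node holding a full replica of the cache."""

    def __init__(self, name):
        self.name = name
        self.store = {}  # key -> (value, version)

    def write(self, key, value, version):
        # A local write completes immediately on this node.
        self.store[key] = (value, version)

    def gossip_to(self, peer):
        # Push entries the peer is missing or holds at an older version.
        # Comparing versions makes conflict resolution deterministic.
        for key, (value, version) in self.store.items():
            current = peer.store.get(key)
            if current is None or current[1] < version:
                peer.store[key] = (value, version)


nodes = [Node(f"node-{i}") for i in range(3)]
nodes[0].write("session:abc", "token-123", version=1)

# Before any gossip round, other nodes would return a stale miss.
assert "session:abc" not in nodes[1].store

# Each round, node i pushes to node (i + 1) % n -- a deterministic
# ring schedule standing in for random peer selection.
for _ in range(len(nodes) - 1):
    for i, node in enumerate(nodes):
        node.gossip_to(nodes[(i + 1) % len(nodes)])

# After gossip, every replica has converged on the same entry.
assert all("session:abc" in n.store for n in nodes)
```

The brief window between the local write and the gossip rounds is the convergence window described above.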

When to Use Cluster Cache vs. Redis

|                   | Cluster Cache | Redis |
|-------------------|---------------|-------|
| Infrastructure    | None — built into the Orchestrator | Requires a separate Redis deployment |
| Consistency       | Eventually consistent (sub-second convergence in most cases) | Immediately consistent |
| Production status | Experimental | Production-supported |
| Best for          | Development, testing, and deployments that prioritize simplicity | Production workloads that require immediate consistency |

Use Cases

  • Infrastructure-free shared state — every node sees the same data without deploying and managing Redis
  • Regional data isolation — with multi-cluster topologies, keep regional cache data in regional clusters for performance and data residency compliance while still routing requests globally
  • Development velocity — no Redis to install locally; clustering just works between Orchestrator nodes

Configuration

Cluster caches are defined with a name, type, and a reference to a configured cluster:
maverics.yaml
caches:
  - name: my-cache
    type: cluster
    cluster:
      name: my-cluster
Modes that support caching reference the cache by name. For example:
maverics.yaml
oidcProvider:
  cache: my-cache
  # ... other oidcProvider config

samlProvider:
  cache: my-cache
  # ... other samlProvider config
The cluster cache requires a clusters definition in your configuration. See the Clusters reference for complete cluster setup including membership, data plane, and discovery.

Configuration Reference

Cluster Cache Fields

| Key | Type | Default | Required | Description |
|-----|------|---------|----------|-------------|
| caches[].name | string | | Yes | Unique cache name |
| caches[].type | string | | Yes | Must be "cluster" |
| caches[].cluster.name | string | | Yes | Cluster name; must match a cluster defined in clusters[] |

Examples

A minimal cluster cache configuration. The cache references a configured cluster by name — all nodes in that cluster automatically share cached data.
maverics.yaml
caches:
  - name: my-cache
    type: cluster
    cluster:
      name: my-cluster

clusters:
  - name: my-cluster
    # ... cluster configuration (see Clusters reference)
An OIDC Provider mode using a cluster cache for shared token and authorization state. This configuration is suitable for development and testing environments where you want multi-node state sharing without deploying Redis.
maverics.yaml
caches:
  - name: oidc-cache
    type: cluster
    cluster:
      name: dev-cluster

clusters:
  - name: dev-cluster
    # ... cluster configuration (see Clusters reference)

oidcProvider:
  cache: oidc-cache
  # ... other oidcProvider config

Troubleshooting

  • Cluster cache not working — cluster caches require the experimental.clusters feature flag and a valid cluster definition. Verify that your cluster is configured and that the cache’s cluster name matches a defined cluster.
  • Cache data not shared across nodes — ensure all nodes are members of the same cluster and that the data plane is configured correctly. Check Orchestrator logs for cluster membership and replication errors.
  • Stale reads after a write — cluster caches are eventually consistent. A write on one node may take up to a few seconds to appear on other nodes. This is normal behavior, not a bug. If your use case cannot tolerate eventual consistency, use a Redis cache.
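When a consumer must read a value shortly after another node wrote it, a short bounded retry can tolerate the convergence window instead of treating a miss as an error. A minimal sketch, assuming a hypothetical get(key) callable that returns None on a cache miss (not a real Orchestrator API):

```python
import time


def read_with_retry(get, key, attempts=5, delay=0.2):
    """Poll an eventually consistent cache read.

    Retries a miss up to `attempts` times, sleeping `delay` seconds
    between tries, to allow a recent write on another node to arrive.
    Returns None if the key never appears within the retry budget.
    """
    for i in range(attempts):
        value = get(key)
        if value is not None:
            return value
        if i < attempts - 1:
            time.sleep(delay)
    return None


# Simulate a value that becomes visible on the third read, as if
# gossip delivered it between attempts.
calls = {"n": 0}

def flaky_get(key):
    calls["n"] += 1
    return "token-123" if calls["n"] >= 3 else None

result = read_with_retry(flaky_get, "session:abc", delay=0)
assert result == "token-123"
```

Keep the retry budget small; if reads routinely need more than a few seconds to converge, that points at a cluster membership or data plane problem rather than normal gossip delay.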