Clusters are an experimental feature — see Experimental Features for important caveats. Enable with features: { experimental.clusters: true } in your configuration file.
When you scale the Orchestrator beyond a single instance, each node needs access to shared session, cache, and routing state. Clustering solves this by letting Orchestrator nodes share state directly with each other — eliminating the need for sticky sessions on your load balancer and external caching infrastructure like Redis or Memcached. Any node can handle any request, enabling true horizontal scaling with automatic failover. Clustering is ideal for multi-node deployments where you want high availability without introducing additional infrastructure to manage. Nodes discover each other automatically, replicate data in real time, and detect failures through built-in health monitoring.
Console deployments: Clustering is not available natively in the Maverics Console. To enable clustering for Console-deployed Orchestrators, use the config override feature. Config override requires enablement for your organization — contact your Strata account team or Strata support to enable it.

How It Works

Clustering uses two network channels:
  1. Membership channel — A gossip protocol for discovering and monitoring cluster members. Nodes exchange heartbeats to detect failures and maintain a consistent membership view.
  2. Data channel — A dedicated plane for replicating session data, cache entries, and routing decisions across nodes.
Both channels require the encryption.psk pre-shared key. All nodes in a cluster must use the same PSK value. The optional nodeKey.file provides a persistent identity for the node across restarts.
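For example, an encryption block combining the shared PSK with a persistent node identity might look like the following (the key file path is an illustrative assumption for your deployment):

```yaml
encryption:
  psk: <secrets.cluster-psk>            # must be identical on every node in the cluster
  nodeKey:
    file: /var/lib/maverics/cluster-node.key   # hypothetical path; keeps the node's identity stable across restarts
```

Without nodeKey.file, a node generates a fresh identity each time it starts.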
The experimental.clusters feature flag must be set to true in the features map. Without the flag, the Orchestrator ignores cluster configuration and runs as a standalone instance. See Feature Flags for details.
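As a minimal sketch, assuming the features map sits at the top level of the same configuration file that defines your clusters:

```yaml
# The flag gates all cluster behavior; without it, cluster
# configuration is ignored and the node runs standalone.
features:
  experimental.clusters: true

clusters:
  - name: my-cluster
    # addresses, discovery, and encryption as shown in the discovery examples
```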
Pre-shared key security: The PSK protects all cluster communication — treat it with the same care as a TLS private key.
  • Generate securely: Use a cryptographically random value, for example: openssl rand -hex 32. Any secure generation method producing exactly 32 bytes of entropy works.
  • Store in a secret provider: Always reference the PSK via a secret provider (<secrets.cluster-psk>). Never hardcode the PSK in configuration files, environment variables, or source code.
  • Rotate periodically: Establish a rotation schedule. To rotate, update the PSK value in your secret provider and perform a rolling restart of cluster nodes — nodes will re-establish trust using the new PSK.
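The generation step above can be run directly; how you then load the value into your secret provider is deployment-specific and not shown here:

```shell
# Generate a 32-byte pre-shared key, printed as 64 hex characters.
# Store the output in your secret provider; never paste it into
# configuration files or commit it to source control.
openssl rand -hex 32
```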

Discovery Methods

The Orchestrator supports two methods for cluster members to find each other.

Static Discovery

Static discovery lists known node addresses directly. Use this when cluster members have stable, known IP addresses or hostnames.
clusters:
  - name: my-cluster
    addresses:
      membership: /ip4/0.0.0.0/tcp/9450
      data: /ip4/0.0.0.0/tcp/9451
    discovery:
      method: static
      static:
        nodes:
          - endpoints:
              membership: /ip4/10.0.0.1/tcp/9450
          - endpoints:
              membership: /ip4/10.0.0.2/tcp/9450
          - endpoints:
              membership: /ip4/10.0.0.3/tcp/9450
    encryption:
      psk: <secrets.cluster-psk>
Each node in the static.nodes array provides the membership endpoint of a peer. The current node’s own address can be included — the gossip protocol ignores self-connections.

DNS SRV Discovery

DNS SRV discovery queries DNS SRV records to find cluster members dynamically. Use this in container orchestration environments (Kubernetes, ECS) where node addresses change.
clusters:
  - name: my-cluster
    addresses:
      membership: /ip4/0.0.0.0/tcp/9450
      data: /ip4/0.0.0.0/tcp/9451
    discovery:
      method: srvdns
      srvDNS:
        dnsAddress: "10.0.0.53:53"
        pollInterval: "30s"
        names:
          - "_maverics-membership._tcp.orchestrator.internal"
    encryption:
      psk: <secrets.cluster-psk>
The pollInterval controls how frequently the Orchestrator queries DNS for updated membership records. Shorter intervals detect new nodes faster but increase DNS load.
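For illustration, hypothetical SRV records that would answer the query in the example above (the TTL, hostnames, and zone layout are assumptions for your environment):

```
; A 30-second TTL pairs naturally with the 30s pollInterval above.
; Each record advertises one node's membership port (priority 0, weight 0, port 9450).
_maverics-membership._tcp.orchestrator.internal. 30 IN SRV 0 0 9450 node-1.orchestrator.internal.
_maverics-membership._tcp.orchestrator.internal. 30 IN SRV 0 0 9450 node-2.orchestrator.internal.
_maverics-membership._tcp.orchestrator.internal. 30 IN SRV 0 0 9450 node-3.orchestrator.internal.
```

Adding or removing a record adds or removes a peer from discovery within one poll interval.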

Cluster-Backed Services

Once a cluster is defined, other configuration sections can reference it by name to share state across nodes.

Session Store

Store user sessions in the cluster for shared session state across all nodes:
session:
  store:
    type: cluster
    cluster:
      name: my-cluster
See Sessions for the full session configuration reference.

Cache

Create a cluster-backed cache store for distributed data sharing:
caches:
  - name: my-cache
    type: cluster
    cluster:
      name: my-cluster
See Caches for cache encryption and key hashing options.

HTTP Routing

Route incoming requests across cluster nodes so that any node can handle any request:
http:
  routing:
    enabled: true
    type: cluster
    cluster:
      name: my-cluster
See HTTP Server Configuration for the full HTTP server configuration reference.

Field Reference

| Key | Type | Default | Required | Description |
|-----|------|---------|----------|-------------|
| clusters[].name | string | | Yes | Cluster name, referenced by session store, cache, and routing configuration |
| clusters[].disabled | boolean | false | No | Disable the cluster without removing its configuration |
| clusters[].addresses.membership | string (multiaddr) | | Yes | Gossip protocol bind address for cluster membership |
| clusters[].addresses.data | string (multiaddr) | | Yes | Data plane bind address for session, cache, and routing traffic |
| clusters[].discovery.method | string | | Yes | Discovery method: "static" or "srvdns" |
| clusters[].discovery.static.nodes[].endpoints.membership | string (multiaddr) | | Yes (static) | Peer node membership address |
| clusters[].discovery.srvDNS.dnsAddress | string | | No (srvdns) | DNS server address and port for SRV lookups |
| clusters[].discovery.srvDNS.pollInterval | duration string | | No (srvdns) | How often to poll DNS for membership changes |
| clusters[].discovery.srvDNS.names | array of strings | | Yes (srvdns) | SRV record names to query for peer discovery |
| clusters[].encryption.psk | string | | Yes | Pre-shared key for establishing trust between cluster nodes (use a secret reference) |
| clusters[].encryption.nodeKey.file | string | | No | File path for a persistent node key pair |
Addresses use multiaddr format: /ip4/<address>/tcp/<port>.