By the end of this guide, you will have a horizontally scaled Maverics deployment with multiple Orchestrator instances, sticky sessions, local session storage, and mode-appropriate Redis caching.

A single Orchestrator instance can handle a significant amount of traffic — the Orchestrator is small, lightweight, and efficient. But production workloads often require multiple instances for high availability (if one instance goes down, the others continue serving traffic) and for handling peak loads that exceed a single instance’s capacity. Scaling the Orchestrator horizontally requires sticky sessions (cookie-based session affinity) on your load balancer so that each user’s requests consistently reach the same Orchestrator instance, plus Redis caching when your mode requires cross-instance data sharing.
Console terminology: In the Maverics Console, Orchestrator instances and configuration delivery are managed through Deployments. When working directly with YAML, configuration is managed as files delivered via the -config flag or MAVERICS_CONFIG environment variable.
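When managing YAML directly, every instance in the pool should start from the same configuration file. A minimal sketch, assuming the Orchestrator binary is named maverics and the configuration lives at /etc/maverics/maverics.yaml (both are placeholders for your environment):

# Start an Orchestrator instance from a shared configuration file.
# Binary name and file path are placeholders; adjust to your installation.
maverics -config /etc/maverics/maverics.yaml

# Equivalent, using the environment variable instead of the flag:
MAVERICS_CONFIG=/etc/maverics/maverics.yaml maverics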

Prerequisites

  • A running Orchestrator deployment — If you have not deployed to production yet, follow the Deploy to Production guide first.
  • A load balancer that supports cookie-based session affinity (sticky sessions) — Any HTTP load balancer that supports health check-based routing and cookie-based affinity (NGINX Plus, HAProxy, AWS ALB, F5 BIG-IP, GCP Load Balancer, etc.).
  • A Redis instance — Required for OIDC Provider mode; recommended for SAML Provider mode; not needed for other modes. See the Redis Cache reference for setup.

Scale Your Deployment

1. Configure sticky sessions on your load balancer

Sessions are stored locally on each Orchestrator instance. Sticky sessions (session affinity) ensure that each user’s requests consistently reach the same instance, so the Orchestrator can find the user’s session on every request.

Configure your load balancer to use cookie-based affinity targeting the maverics_session cookie (or your custom session.cookie.name value). Cookie-based affinity is more reliable than IP-based affinity because it works correctly when multiple users share the same IP address (e.g., behind a corporate NAT) or when a user’s IP changes mid-session (e.g., switching networks on a mobile device).
Console UI documentation is coming soon. This section will walk you through configuring this component using the Maverics Console’s visual interface, including step-by-step screenshots and field descriptions.
[Screenshot: Session storage settings in Maverics Console showing local session store configuration]
If an Orchestrator node goes down, sessions on that node are lost. Users are transparently re-authenticated via their upstream IdP session — they experience a brief redirect but are not prompted to log in again.
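Before configuring the affinity rule, you can confirm the exact cookie name your deployment issues. A quick check, assuming your application is reachable at app.example.com (a placeholder); depending on your flow, the cookie may only appear after the authentication redirect completes:

# Look for the session cookie the affinity rule must target
# (maverics_session by default, or your custom session.cookie.name value).
curl -sk -D - -o /dev/null https://app.example.com/ | grep -i '^set-cookie'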
2. Add Redis caching (if needed)

Redis is a cache for connector tokens and provider data — not a session store. In multi-node deployments, Redis ensures that provider-specific data (authorization codes, SAML request data, connector tokens) is accessible from any Orchestrator instance, even though each user’s requests are routed to the same instance via sticky sessions.

The need for Redis depends on which Orchestrator mode you are using (a connectivity check follows the list):
  • OIDC Provider: Redis cache is required for multi-node deployments. The OIDC Provider stores authorization codes, token state, and provider data in the cache. Without Redis, an authorization code issued by one instance cannot be redeemed by another — and even with sticky sessions, the token endpoint request may arrive from the application server rather than the user’s browser, bypassing session affinity.
  • SAML Provider: Redis cache is recommended. The SAML Provider caches SAML request data (AuthnRequest and LogoutRequest form parameters) so that after a user authenticates upstream, any Orchestrator instance can restore the original request context and generate the SAML response. Without Redis, if the authentication callback arrives at a different instance than the one that received the original AuthnRequest, the SAML response flow will fail because the request data is only in local memory.
  • HTTP Proxy: Redis cache is not needed. Each instance manages its own connector tokens independently.
  • LDAP Provider: A shared cache is not typically needed for standalone LDAP Provider deployments — the LDAP Provider uses per-connection bind state. However, in facade (sandwich) deployments where HTTP Proxy and LDAP Provider run as separate Orchestrator instances, a shared cache is required so that one-time credentials generated by the proxy can be validated by the LDAP Provider. When both modes run on a single Orchestrator, the local cache is sufficient. See LDAP Provider — Pairing with HTTP Proxy and the Caches reference.
  • AI Identity Gateway: A cache is not typically needed. AI Identity Gateway uses token-based state tied to the upstream OAuth authorization server.
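If your mode calls for Redis, confirm that every Orchestrator host can reach it before wiring it into the configuration. A basic connectivity check; the hostname, ports, and TLS settings below are placeholders for your environment:

# Run from each Orchestrator host; expect "PONG" in response.
redis-cli -h redis.internal -p 6379 ping

# If your Redis deployment requires TLS and a password (Redis 6+):
redis-cli -h redis.internal -p 6380 --tls -a "$REDIS_PASSWORD" ping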
Console UI documentation is coming soon. This section will walk you through configuring this component using the Maverics Console’s visual interface, including step-by-step screenshots and field descriptions.
[Screenshot: Redis cache settings in Maverics Console showing cache and provider configuration]
3. Configure load balancing

A load balancer distributes incoming requests across your Orchestrator instances. Configure it to use health check-based routing so that traffic only goes to healthy instances, and cookie-based sticky sessions so that each user’s requests reach the same instance.

Key load balancer configuration:
  • Health check endpoint — Point the load balancer’s health check at the Orchestrator’s /status endpoint. Remove instances from the pool when they return an unhealthy status.
  • Sticky sessions — Configure cookie-based session affinity using the maverics_session cookie (or your custom session.cookie.name value). This ensures each user’s requests consistently reach the same Orchestrator instance.
  • Connection draining — When an instance is removed from the pool (during a rolling update or failure), allow existing connections to complete before terminating the instance. This prevents request failures during deployments.
  • TLS termination — Decide whether TLS terminates at the load balancer or at the Orchestrator. Terminating at the load balancer is simpler to manage when running multiple instances.
Load balancers are external to the Orchestrator — they are configured in your infrastructure layer, not in maverics.yaml. The example below shows an NGINX Plus configuration that combines active health checks against /status with cookie-based affinity on the Orchestrator’s session cookie; adapt the same pattern to HAProxy, AWS ALB, F5 BIG-IP, or whichever load balancer you use:
upstream maverics {
    # Shared memory zone required for NGINX Plus active health checks.
    zone maverics_upstream 64k;

    server 10.0.1.10:9443;
    server 10.0.1.11:9443;
    server 10.0.1.12:9443;

    # Learn affinity from the session cookie the Orchestrator itself sets.
    # If you customized session.cookie.name, use that value instead of
    # maverics_session in both variables below.
    sticky learn
        create=$upstream_cookie_maverics_session
        lookup=$cookie_maverics_session
        zone=maverics_sessions:1m;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    # Replace with the paths to your TLS certificate and private key.
    ssl_certificate     /etc/nginx/tls/app.example.com.crt;
    ssl_certificate_key /etc/nginx/tls/app.example.com.key;

    location / {
        proxy_pass https://maverics;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Actively poll each instance's /status endpoint and remove
        # unhealthy instances from the pool (NGINX Plus).
        health_check uri=/status interval=5 fails=2 passes=2;
    }

    location /status {
        proxy_pass https://maverics/status;
    }
}
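After editing the configuration, validate the syntax and reload NGINX so the new upstream and affinity settings take effect without dropping existing connections:

# Test the configuration, then apply it with a graceful reload.
sudo nginx -t && sudo nginx -s reload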
Always use cookie-based affinity rather than IP-based affinity. Cookie-based affinity works correctly when users share IPs (corporate NAT) or change networks mid-session (mobile devices). Each user’s requests will consistently reach the same Orchestrator instance based on their session cookie.
4. Verify high availability

With multiple instances behind a load balancer and sticky sessions configured, verify that your deployment is truly highly available by testing failover.
# Verify all instances are healthy via the load balancer
curl -s https://your-load-balancer/status | jq .

# Authenticate through the load balancer and verify your session works
# Make several requests to confirm sticky sessions are routing correctly
Test a failover scenario (a command-line sketch follows the list):
  1. Authenticate through the load balancer — Establish a session
  2. Stop one Orchestrator instance — Simulate a failure
  3. Make another request — If the stopped instance was handling your session, the load balancer routes you to a healthy instance. The Orchestrator redirects you to your upstream IdP, which recognizes your existing IdP session and completes authentication silently — you experience a brief redirect but are not prompted to log in again.
  4. Restart the stopped instance — It should rejoin the pool automatically once the health check passes
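The drill can also be scripted. The sketch below assumes the Orchestrator runs under a systemd unit named maverics on each node, that the instance IPs match the load balancer example above, and that you saved your authenticated session cookie to cookies.txt; all three are assumptions to adapt to your environment.

# Simulate a failure on one node (assumes a systemd unit named "maverics").
ssh 10.0.1.10 'sudo systemctl stop maverics'

# Request the app again through the load balancer with the saved session cookie.
# Expect a 200 after a brief redirect through the upstream IdP.
curl -sk -L -b cookies.txt -c cookies.txt -o /dev/null -w '%{http_code}\n' https://app.example.com/

# Bring the node back and confirm it reports healthy before it rejoins the pool.
ssh 10.0.1.10 'sudo systemctl start maverics'
curl -sk https://10.0.1.10:9443/status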
Success! Your Orchestrator deployment is horizontally scaled with high availability. Each instance stores sessions locally, the load balancer uses cookie-based sticky sessions to route users consistently, and failover triggers transparent re-authentication via the upstream IdP.

Troubleshooting

If users are being asked to re-authenticate unexpectedly, sticky sessions may be misconfigured:
  • Verify the load balancer has cookie-based session affinity configured on the session cookie (maverics_session by default, or your custom session.cookie.name value).
  • Verify all instances use store.type: "local".
  • Check load balancer logs to confirm requests from the same user consistently reach the same instance (a client-side check is sketched after this list).
  • If using IP-based affinity instead of cookie-based, switch to cookie-based affinity. IP-based affinity breaks when multiple users share the same IP (corporate NAT) or when a user’s IP changes mid-session.
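As a quick client-side check of affinity, replay several requests with the same cookie jar and confirm in the load balancer’s access logs that a single upstream served all of them. The hostname and cookie jar path below are placeholders:

# Five requests with the same session cookie should all hit the same instance;
# confirm via the upstream address field in your load balancer's access logs.
for i in 1 2 3 4 5; do
  curl -sk -b cookies.txt -o /dev/null -w '%{http_code}\n' https://app.example.com/
done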
If a new or restarted Orchestrator instance is not receiving traffic, check that:
  • The instance’s health endpoint is returning a healthy status
  • The load balancer’s health check is configured to poll the correct port and path
  • The health check interval and threshold are set appropriately — some load balancers require multiple consecutive healthy responses before adding an instance to the pool
  • Network rules allow the load balancer to reach the instance on both the application port and the health check port (the probe after this list, run from the load balancer host, can help confirm this)
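A direct probe from the load balancer host can separate network problems from instance health problems. The instance IPs and port below are the placeholders used in the load balancer example above:

# Run from the load balancer host. A connection timeout suggests a network
# or firewall rule issue; a non-200 status suggests an unhealthy instance.
for ip in 10.0.1.10 10.0.1.11 10.0.1.12; do
  echo -n "$ip: "
  curl -sk --connect-timeout 3 -o /dev/null -w '%{http_code}\n' "https://$ip:9443/status"
done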