Console terminology: In the Maverics Console, Orchestrator instances and
configuration delivery are managed through Deployments. When working directly
with YAML, configuration is managed as files delivered via the -config flag or
the MAVERICS_CONFIG environment variable.
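For example, a single Orchestrator container might receive its configuration through that environment variable. The sketch below is hypothetical: the image reference, mount paths, and port are placeholders for whatever your deployment actually uses.

```yaml
# Hypothetical sketch: one Orchestrator instance with its YAML configuration
# delivered via the MAVERICS_CONFIG environment variable.
# The image reference, mount paths, and port are placeholders.
services:
  orchestrator:
    image: example.com/maverics/orchestrator:latest   # placeholder image reference
    environment:
      MAVERICS_CONFIG: /etc/maverics/maverics.yaml     # path to the mounted config file
    volumes:
      - ./maverics.yaml:/etc/maverics/maverics.yaml:ro
    ports:
      - "443:443"
```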
Prerequisites
- A running Orchestrator deployment — If you have not deployed to production yet, follow the Deploy to Production guide first.
- A load balancer that supports cookie-based session affinity (sticky sessions) — Any HTTP load balancer that supports health check-based routing and cookie-based affinity (NGINX Plus, HAProxy, AWS ALB, F5 BIG-IP, GCP Load Balancer, etc.).
- A Redis instance — Required for OIDC Provider mode; recommended for SAML Provider mode; not needed for other modes. See the Redis Cache reference for setup.
Scale Your Deployment
Configure sticky sessions on your load balancer
Sessions are stored locally on each Orchestrator instance. Sticky sessions (session affinity) ensure that each user’s requests consistently reach the same instance, so the Orchestrator can find the user’s session on every request.

Configure your load balancer to use cookie-based affinity targeting the maverics_session cookie (or your custom session.cookie.name value). Cookie-based affinity is more reliable than IP-based affinity because it works correctly when multiple users share the same IP address (e.g., behind a corporate NAT) or when a user’s IP changes mid-session (e.g., switching networks on a mobile device).
If an Orchestrator node goes down, sessions on that node are lost. Users are transparently re-authenticated via their upstream IdP session — they experience a brief redirect but are not prompted to log in again.
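For reference, the two maverics.yaml settings this guide relies on are the session cookie name and the local session store. The sketch below is hypothetical: it places the keys where the dotted paths session.cookie.name and store.type suggest, and the exact structure may differ, so confirm against the session store reference.

```yaml
# Hypothetical sketch only: key placement follows the dotted paths used in this
# guide (session.cookie.name, store.type); verify the exact structure in the
# session store reference before using.
session:
  cookie:
    name: maverics_session   # the cookie your load balancer pins affinity on
store:
  type: "local"              # sessions stay on each instance; sticky sessions required
```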
Add Redis caching (if needed)
Redis is a cache for connector tokens and provider data — not a session store. In multi-node deployments, Redis ensures that provider-specific data (authorization codes, SAML request data, connector tokens) is accessible from any Orchestrator instance, even though each user’s requests are routed to the same instance via sticky sessions.

The need for Redis depends on which Orchestrator mode you are using:
- OIDC Provider: Redis cache is required for multi-node deployments. The OIDC Provider stores authorization codes, token state, and provider data in the cache. Without Redis, an authorization code issued by one instance cannot be redeemed by another — and even with sticky sessions, the token endpoint request may arrive from the application server rather than the user’s browser, bypassing session affinity.
- SAML Provider: Redis cache is recommended. The SAML Provider caches SAML request data (AuthnRequest and LogoutRequest form parameters) so that after a user authenticates upstream, any Orchestrator instance can restore the original request context and generate the SAML response. Without Redis, if the authentication callback arrives at a different instance than the one that received the original AuthnRequest, the SAML response flow will fail because the request data is only in local memory.
- HTTP Proxy: Redis cache is not needed. Each instance manages its own connector tokens independently.
- LDAP Provider: A shared cache is not typically needed for standalone LDAP Provider deployments — the LDAP Provider uses per-connection bind state. However, in facade (sandwich) deployments where HTTP Proxy and LDAP Provider run as separate Orchestrator instances, a shared cache is required so that one-time credentials generated by the proxy can be validated by the LDAP Provider. When both modes run on a single Orchestrator, the local cache is sufficient. See LDAP Provider — Pairing with HTTP Proxy and the Caches reference.
- AI Identity Gateway: A cache is not typically needed. AI Identity Gateway uses token-based state tied to the upstream OAuth authorization server.
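If you do not already operate Redis, one way to stand up an instance for testing is with Docker Compose; production deployments typically use a managed or highly available Redis. This is a sketch with placeholder values, and the Redis Cache reference covers how to point the Orchestrator at it.

```yaml
# Sketch: a single Redis instance for testing a multi-node Orchestrator setup.
# The password and port mapping are placeholders; production deployments should
# use a managed or replicated Redis with TLS enabled.
services:
  redis:
    image: redis:7
    command: ["redis-server", "--requirepass", "change-me"]   # placeholder password
    ports:
      - "6379:6379"
```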
Configure load balancing
A load balancer distributes incoming requests across your Orchestrator instances. Configure it to use health check-based routing so that traffic only goes to healthy instances, and cookie-based sticky sessions so that each user’s requests reach the same instance.

Key load balancer configuration:
- Health check endpoint — Point the load balancer’s health check at the Orchestrator’s /status endpoint. Remove instances from the pool when they return an unhealthy status.
- Sticky sessions — Configure cookie-based session affinity using the maverics_session cookie (or your custom session.cookie.name value). This ensures each user’s requests consistently reach the same Orchestrator instance.
- Connection draining — When an instance is removed from the pool (during a rolling update or failure), allow existing connections to complete before terminating the instance. This prevents request failures during deployments.
- TLS termination — Decide whether TLS terminates at the load balancer or at the Orchestrator. Terminating at the load balancer is simpler to manage when running multiple instances; if TLS terminates at the Orchestrator, it is configured in maverics.yaml.

Select your load balancer below for configuration examples:
- NGINX Plus
- HAProxy
- AWS ALB
- F5 BIG-IP
- Azure Application Gateway
- GCP Load Balancer
- Traefik
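As one concrete illustration, a Traefik file-provider configuration for two Orchestrator instances might look like the sketch below. The hostnames, backend URLs, and cookie name are placeholders, and note that Traefik pins affinity with a cookie it issues itself rather than by reading the maverics_session cookie; the effect (each user consistently reaching one instance) is the same.

```yaml
# Sketch: Traefik dynamic configuration (file provider) fronting two
# Orchestrator instances. Hostnames, URLs, and the affinity cookie name are
# placeholders; adapt entry points and TLS settings to your environment.
http:
  routers:
    orchestrator:
      rule: "Host(`login.example.com`)"   # placeholder public hostname
      service: orchestrator
  services:
    orchestrator:
      loadBalancer:
        sticky:
          cookie:
            name: lb_affinity             # Traefik-issued affinity cookie
            secure: true
            httpOnly: true
        healthCheck:
          path: /status                   # Orchestrator health endpoint
          interval: 10s
          timeout: 3s
        servers:
          - url: "https://orchestrator-1.internal"   # placeholder instance
          - url: "https://orchestrator-2.internal"   # placeholder instance
```

The other load balancers listed above expose equivalent cookie-affinity, health-check, and connection-draining settings under their own configuration syntax.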
Verify high availability
With multiple instances behind a load balancer and sticky sessions configured, verify that your deployment is truly highly available by testing failover.

Test a failover scenario:
- Authenticate through the load balancer — Establish a session
- Stop one Orchestrator instance — Simulate a failure
- Make another request — If the stopped instance was handling your session, the load balancer routes you to a healthy instance. The Orchestrator redirects you to your upstream IdP, which recognizes your existing IdP session and completes authentication silently — you experience a brief redirect but are not prompted to log in again.
- Restart the stopped instance — It should rejoin the pool automatically once the health check passes
Success! Your Orchestrator deployment is horizontally scaled with high
availability. Each instance stores sessions locally, the load balancer uses
cookie-based sticky sessions to route users consistently, and failover
triggers transparent re-authentication via the upstream IdP.
Troubleshooting
Sessions lost when requests hit a different instance
If users are being asked to re-authenticate unexpectedly, sticky sessions
may be misconfigured:
- Verify the load balancer has cookie-based session affinity configured on the session cookie (maverics_session by default, or your custom session.cookie.name value).
- Verify all instances use store.type: "local".
- Check load balancer logs to confirm requests from the same user consistently reach the same instance.
- If using IP-based affinity instead of cookie-based, switch to cookie-based affinity. IP-based affinity breaks when multiple users share the same IP (corporate NAT) or when a user’s IP changes mid-session.
Instance not joining the load balancer pool
If a new or restarted Orchestrator instance is not receiving traffic, check
that:
- The instance’s health endpoint is returning a healthy status
- The load balancer’s health check is configured to poll the correct port and path
- The health check interval and threshold are set appropriately — some load balancers require multiple consecutive healthy responses before adding an instance to the pool
- Network rules allow the load balancer to reach the instance on both the application port and the health check port