Console terminology: In the Maverics Console, Orchestrator instances and configuration delivery are managed through Deployments. When working directly with YAML, configuration is managed as files delivered via the -config flag or the MAVERICS_CONFIG environment variable.

Prerequisites
- The Orchestrator installed — See the installation reference for system requirements and installation methods.
- A working configuration file — You should have a YAML configuration that works in development. See the configuration reference for config file structure.
- A target environment — Docker, Kubernetes, a Linux/macOS host, or a Windows server where the Orchestrator will run.
Deploy to Production
Prepare production configuration
Production configuration differs from development in a few important ways. In development, you might hardcode secrets and use default ports. In production, you want secrets managed externally, environment-specific values injected at runtime, and explicit resource boundaries.

The Orchestrator supports environment variable substitution in YAML configuration, so you can keep a single config file and vary behavior per environment. For secrets like IdP client credentials and TLS certificates, use an external secret provider instead of putting values directly in your config file.

Key production configuration concerns:
- Secrets — Use a secret provider (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) instead of plaintext values
- External config sources — Pull configuration from a config source for centralized management across multiple Orchestrator instances
- TLS — Enable TLS termination at the Orchestrator or ensure your load balancer handles it. See the Transport Layer Security (TLS) Reference
- Logging — Set the log level to info or warn for production (not debug). See the Monitor guide for full observability setup
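As a minimal sketch of runtime injection, assuming a launch script that exports values before starting the Orchestrator (MAVERICS_CONFIG comes from this guide; the paths, variable names, and launch command are illustrative assumptions):

```shell
# Sketch: inject environment-specific values at launch.
# MAVERICS_CONFIG is from this guide; the other names and paths are illustrative.
export MAVERICS_CONFIG=/etc/maverics/maverics.yaml
export APP_HOST=app.example.com   # referenced from the YAML via substitution
export LOG_LEVEL=info             # info or warn in production, not debug
echo "starting with config: $MAVERICS_CONFIG (log level: $LOG_LEVEL)"
# ./maverics   # actual launch command depends on your installation
```

The same config file can then ship to every environment, with only the exported values changing per deployment.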
- Console UI
- Configuration
Console UI documentation is coming soon. This section will walk you
through configuring this component using the Maverics Console’s visual
interface, including step-by-step screenshots and field descriptions.
Configure secret provider
Secret providers are configured via the MAVERICS_SECRET_PROVIDER environment variable or the -secretProvider CLI flag, not in YAML. Only one secret provider may be active at a time.
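A minimal sketch of selecting the provider at launch. The value shown is a deliberate placeholder: the exact provider string depends on which secret provider you use, so consult its reference rather than copying this literally.

```shell
# Sketch: select the secret provider via environment variable.
# The value is a placeholder; see your secret provider's reference for the syntax.
export MAVERICS_SECRET_PROVIDER="<provider-specific-value>"
# Equivalent CLI form (flag name from this guide; binary name is illustrative):
# ./maverics -secretProvider "<provider-specific-value>"
echo "secret provider configured: ${MAVERICS_SECRET_PROVIDER:+yes}"
```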
Deploy to your target environment
The Orchestrator is small and lightweight — it deploys almost anywhere. Choose the deployment model that matches your infrastructure.

Docker deployment is the simplest path to production. Mount your configuration file and any TLS certificates, expose the Orchestrator’s listening port, and set environment variables for secrets.

Kubernetes deployment is ideal for teams already running workloads on Kubernetes. The Orchestrator runs as a Deployment with a Service, and you configure it through ConfigMaps (for YAML configuration) and Secrets (for credentials). Kubernetes also gives you built-in health check integration, rolling updates, and automatic restarts. Strata provides an official Helm chart for streamlined Kubernetes installation.

Windows deployment is supported via the MSI installer, which registers the Orchestrator as a Windows service with automatic startup. The installer provides a guided setup for configuration source, TLS certificates, and environment variables.
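For the Docker path, a hedged sketch of a run command, assembled as a dry run so it can be reviewed before executing. The image name, port, and paths are illustrative assumptions, so treat this as a template rather than a copy-paste command:

```shell
# Sketch: assemble a docker run command for review before executing it.
# Image name, port, and paths are illustrative assumptions.
CONFIG_PATH="$PWD/maverics.yaml"
RUN_CMD="docker run -d \
  -v $CONFIG_PATH:/etc/maverics/maverics.yaml:ro \
  -e MAVERICS_CONFIG=/etc/maverics/maverics.yaml \
  -p 443:443 \
  example/maverics-orchestrator"
echo "$RUN_CMD"
# eval "$RUN_CMD"   # run it once reviewed
```

Mounting the config read-only (`:ro`) keeps the container from mutating the file, which matches the "configuration delivered from outside" model described above.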
Configure health checks and readiness probes
The Orchestrator exposes a health endpoint that reports its operational status — whether it has started successfully and is ready to handle traffic. In production, you should configure your infrastructure to poll this endpoint and take action when the Orchestrator is unhealthy.

The status endpoint returns a JSON response indicating the Orchestrator’s operational state. Your load balancer, container orchestrator, or process manager can use this endpoint to:
- Route traffic only to healthy instances — Remove unhealthy instances from the load balancer pool
- Restart failed instances — Kubernetes liveness probes and Docker restart policies use health checks to detect and recover from failures automatically
- Block traffic during startup — Readiness probes prevent traffic from reaching an instance before it has finished initializing
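The polling behavior above can be sketched as a small retry loop. The endpoint URL below is an assumption (substitute your configured address and health path), and the loop shape mirrors what a readiness probe does: block until healthy, give up after a bounded number of checks.

```shell
# Sketch: poll the health endpoint until it reports healthy, with bounded retries.
# The URL passed in is an assumption; use your configured address and health path.
wait_healthy() {
  url=$1; tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      echo "healthy"; return 0
    fi
    i=$((i + 1)); sleep 1
  done
  echo "unhealthy after $tries checks"; return 1
}
# wait_healthy "http://localhost:8080/status"
```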
Verify the deployment
With the Orchestrator deployed and health checks configured, verify that everything is working end-to-end. Start by confirming the health endpoint returns a healthy status, then test the full authentication flow.

Walk through the full user flow to confirm end-to-end functionality:
- Visit a protected URL — You should be redirected to your identity provider
- Authenticate — Log in with valid credentials
- Verify redirect — After authentication, you should land on your protected application
- Check logs — Confirm the Orchestrator logged the authentication event
Success! Your Orchestrator is deployed in production with environment-specific
configuration, health monitoring, and automated restarts. The health endpoint
reports a healthy status, and the full authentication flow works end-to-end.
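The log-check step can be scripted. A minimal sketch, assuming a file log destination; the log path and the matched text are assumptions about your logging configuration, so adjust both to your setup:

```shell
# Sketch: confirm an authentication event reached the logs.
# The log path and matched string are assumptions; adjust to your log output.
check_auth_logged() {
  log=$1
  if grep -qi "authentication" "$log" 2>/dev/null; then
    echo "auth event found"
  else
    echo "no auth event logged"
  fi
}
# check_auth_logged /var/log/maverics/orchestrator.log
```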
Troubleshooting
Orchestrator fails to start
Check the Orchestrator’s startup logs for the specific error. The most common
causes are:
- Invalid YAML — Indentation errors, missing colons, or unclosed quotes. Run your config through a YAML validator.
- Missing required fields — The Orchestrator validates its configuration on startup and logs which fields are missing.
- Port conflict — Another process is already using the configured listening port. Check with lsof -i :8080 or change the port in your configuration.
- Permission denied — The Orchestrator process does not have permission to read the config file, TLS certificates, or bind to the configured port.
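The missing-file and permission-denied causes can be caught before startup with a quick preflight check. A sketch; the config path in the commented call is illustrative:

```shell
# Sketch: preflight check that the config file exists and is readable
# by the current user before starting the Orchestrator.
preflight() {
  conf=$1
  if [ -r "$conf" ]; then
    echo "config readable: $conf"
  else
    echo "config missing or unreadable: $conf"
  fi
}
# preflight /etc/maverics/maverics.yaml
```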
Configuration not loading from external source
If you are using an external config source,
verify that:
- The config source credentials are correct and the Orchestrator can reach the external service over the network
- The config source path or key matches what the Orchestrator expects
- Environment variables referenced in the configuration are set in the Orchestrator’s runtime environment (not just your local shell)
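The environment-variable check can be automated with a small helper; the variable names in the commented example call are illustrative:

```shell
# Sketch: report env vars referenced by the config that are missing
# from the runtime environment. Names in the example call are illustrative.
require_env() {
  missing=0
  for v in "$@"; do
    if ! printenv "$v" >/dev/null; then
      echo "missing: $v"
      missing=1
    fi
  done
  return "$missing"
}
# require_env APP_HOST IDP_CLIENT_ID
```

Run this in the Orchestrator's actual runtime environment (the container, service unit, or shell that launches it), not your local shell, since that is where the gap usually hides.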
Enable debug logging temporarily to see the exact config source requests and responses.

Health check returns unhealthy or times out
An unhealthy status usually means one or more connectors failed to initialize.
Check the Orchestrator’s logs for startup errors.

If the health endpoint times out entirely, the Orchestrator may not be listening
on the expected port. Verify the port configuration and check that no firewall
rules are blocking access to the health endpoint.