By the end of this guide, you will have a production-ready Maverics Orchestrator deployment with environment-specific configuration, health monitoring, and automated restarts. This guide focuses on production deployment — not installation. If you have not yet installed the Orchestrator, start with the installation reference for system requirements and platform options. This guide picks up where installation leaves off, covering the production-specific concerns that matter when real users depend on your deployment.
Console terminology: In the Maverics Console, Orchestrator instances and configuration delivery are managed through Deployments. When working directly with YAML, configuration is managed as files delivered via the -config flag or MAVERICS_CONFIG environment variable.

Prerequisites

  • The Orchestrator installed — See the installation reference for system requirements and installation methods.
  • A working configuration file — You should have a YAML configuration that works in development. See the configuration reference for config file structure.
  • A target environment — Docker, Kubernetes, a Linux/macOS host, or a Windows server where the Orchestrator will run.

Deploy to Production

Step 1: Prepare production configuration

Production configuration differs from development in a few important ways. In development, you might hardcode secrets and use default ports. In production, you want secrets managed externally, environment-specific values injected at runtime, and explicit resource boundaries.

The Orchestrator supports environment variable substitution in YAML configuration — so you can keep a single config file and vary behavior per environment. For secrets like IdP client credentials and TLS certificates, use an external secret provider instead of putting values directly in your config file.

Key production configuration concerns:
  • Secrets — Use a secret provider (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) instead of plaintext values
  • External config sources — Pull configuration from a config source for centralized management across multiple Orchestrator instances
  • TLS — Enable TLS termination at the Orchestrator or ensure your load balancer handles it. See the Transport Layer Security (TLS) Reference
  • Logging — Set log level to info or warn for production (not debug). See the Monitor guide for full observability setup
Console UI documentation is coming soon. This section will walk you through configuring this component using the Maverics Console’s visual interface, including step-by-step screenshots and field descriptions.
[Screenshot: Production configuration panel in Maverics Console showing environment variables and secret provider settings]
Keep your production configuration in version control — but use environment variable references for anything sensitive. This way your config file is auditable without exposing secrets.
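As a sketch, a version-controlled config might reference runtime values instead of embedding them. The structure and substitution token syntax below are illustrative only; the exact field names and token format are in the configuration reference.

```yaml
# Illustrative only: sensitive values resolved at runtime, not hardcoded.
oidc:
  clientID: my-app
  clientSecret: <oidc-client-secret>   # resolved via the secret provider
tls:
  certFile: /etc/maverics/tls/server.crt
  keyFile: /etc/maverics/tls/server.key
```

The file itself stays auditable in version control; only the references to secrets appear, never the secret values.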
Step 2: Configure secret provider

Secret providers are configured via the MAVERICS_SECRET_PROVIDER environment variable or the -secretProvider CLI flag — not in YAML. Only one secret provider may be active at a time.
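For example, selecting a provider through the environment might look like the following sketch. The value is a placeholder; the exact identifier and any provider-specific settings (Vault address, AWS region, and so on) are documented per provider.

```shell
# Select the secret provider outside of YAML. The value "hashicorpVault" is
# a placeholder for illustration; check the secret provider reference for
# the exact format your provider expects.
export MAVERICS_SECRET_PROVIDER="hashicorpVault"
echo "secret provider: $MAVERICS_SECRET_PROVIDER"

# Equivalent CLI form (also a sketch):
#   maverics -secretProvider hashicorpVault -config /etc/maverics/maverics.yaml
```

Because only one provider can be active, deployment scripts should set this in exactly one place per environment.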
[Screenshot: Secret provider configuration in Maverics Console]
Step 3: Deploy to your target environment

The Orchestrator is small and lightweight — it deploys almost anywhere. Choose the deployment model that matches your infrastructure.

Docker deployment is the simplest path to production. Mount your configuration file and any TLS certificates, expose the Orchestrator’s listening port, and set environment variables for secrets.

Kubernetes deployment is ideal for teams already running workloads on Kubernetes. The Orchestrator runs as a Deployment with a Service, and you configure it through ConfigMaps (for YAML configuration) and Secrets (for credentials). Kubernetes also gives you built-in health check integration, rolling updates, and automatic restarts. Strata provides an official Helm chart for streamlined Kubernetes installation.

Windows deployment is supported via the MSI installer, which registers the Orchestrator as a Windows service with automatic startup. The installer provides a guided setup for configuration source, TLS certificates, and environment variables.
[Screenshot: Deployment wizard in Maverics Console showing Docker and Kubernetes deployment options]
Whichever deployment model you choose, the Orchestrator behaves the same way. The only differences are how you deliver the configuration file and how you manage the process lifecycle (restarts, scaling, health checks).
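As a minimal Docker sketch of that model — image name, paths, and port below are placeholders for your own deployment:

```shell
# Illustrative only: mount config and TLS material read-only, deliver the
# config path and secret provider via the environment, and let Docker
# restart the container on failure.
docker run -d --name maverics \
  --restart unless-stopped \
  -p 9443:9443 \
  -v "$PWD/maverics.yaml:/etc/maverics/maverics.yaml:ro" \
  -v "$PWD/tls:/etc/maverics/tls:ro" \
  -e MAVERICS_CONFIG=/etc/maverics/maverics.yaml \
  -e MAVERICS_SECRET_PROVIDER="$MAVERICS_SECRET_PROVIDER" \
  your-registry/maverics-orchestrator:latest
```

The `--restart unless-stopped` policy gives you the same automatic-restart behavior that Kubernetes provides through liveness probes.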
Step 4: Configure health checks and readiness probes

The Orchestrator exposes a health endpoint that reports its operational status — whether it has started successfully and is ready to handle traffic. In production, you should configure your infrastructure to poll this endpoint and take action when the Orchestrator is unhealthy.

The status endpoint returns a JSON response indicating the Orchestrator’s operational state. Your load balancer, container orchestrator, or process manager can use this endpoint to:
  • Route traffic only to healthy instances — Remove unhealthy instances from the load balancer pool
  • Restart failed instances — Kubernetes liveness probes and Docker restart policies use health checks to detect and recover from failures automatically
  • Block traffic during startup — Readiness probes prevent traffic from reaching an instance before it has finished initializing
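In Kubernetes, the liveness and readiness checks above map onto probes against the status endpoint. A sketch, assuming the Orchestrator listens on 9443 with TLS; port, path scheme, and timings are placeholders to tune for your deployment:

```yaml
# Illustrative probe configuration for the Orchestrator container spec.
livenessProbe:
  httpGet:
    path: /status
    port: 9443
    scheme: HTTPS
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /status
    port: 9443
    scheme: HTTPS
  periodSeconds: 5
```

A failing liveness probe restarts the container; a failing readiness probe only removes it from Service endpoints, which covers the "block traffic during startup" case.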
# Check the status endpoint
curl -s https://localhost:9443/status | jq .
{
  "status": "up"
}
[Screenshot: Health check settings in Maverics Console showing endpoint configuration and probe intervals]
Step 5: Verify the deployment

With the Orchestrator deployed and health checks configured, verify that everything is working end-to-end. Start by confirming the health endpoint returns a healthy status, then test the full authentication flow.
# Verify status
curl -s https://your-orchestrator-host:9443/status | jq .

# Check that the Orchestrator is serving traffic
curl -I https://your-orchestrator-host:9443/
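A deploy script can gate on the status response. As a sketch, the JSON payload is inlined below for illustration; in a real check it would come from `curl -s https://your-orchestrator-host:9443/status`:

```shell
# Fail the deploy unless the status payload reports "up".
payload='{"status": "up"}'
if printf '%s' "$payload" | grep -q '"status": *"up"'; then
  health=healthy
else
  health=unhealthy
fi
echo "$health"
```

Using `grep` keeps the check dependency-free; with `jq` installed, `jq -r .status` gives the same answer more robustly.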
Walk through the full user flow to confirm end-to-end functionality:
  1. Visit a protected URL — You should be redirected to your identity provider
  2. Authenticate — Log in with valid credentials
  3. Verify redirect — After authentication, you should land on your protected application
  4. Check logs — Confirm the Orchestrator logged the authentication event
Success! Your Orchestrator is deployed in production with environment-specific configuration, health monitoring, and automated restarts. The health endpoint reports a healthy status, and the full authentication flow works end-to-end.

Troubleshooting

If the Orchestrator fails to start, check its startup logs for the specific error. The most common causes are:
  • Invalid YAML — Indentation errors, missing colons, or unclosed quotes. Run your config through a YAML validator.
  • Missing required fields — The Orchestrator validates its configuration on startup and logs which fields are missing.
  • Port conflict — Another process is already using the configured listening port. Check with lsof -i :8080 or change the port in your configuration.
  • Permission denied — The Orchestrator process does not have permission to read the config file, TLS certificates, or bind to the configured port.
If you are using an external config source, verify that:
  • The config source credentials are correct and the Orchestrator can reach the external service over the network
  • The config source path or key matches what the Orchestrator expects
  • Environment variables referenced in the configuration are set in the Orchestrator’s runtime environment (not just your local shell)
Enable debug logging temporarily to see the exact config source requests and responses.
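A quick way to catch the missing-variable case is to verify the service's environment before starting it. A sketch; the variable names below are placeholders for whatever your configuration actually references:

```shell
# Simulate the service environment, then confirm every referenced variable
# is present. In a real deployment these exports come from your service
# manager or container runtime, not the script itself.
export MAVERICS_CONFIG=/etc/maverics/maverics.yaml
export OIDC_CLIENT_SECRET=placeholder

for v in MAVERICS_CONFIG OIDC_CLIENT_SECRET; do
  if [ -z "$(eval echo "\$$v")" ]; then
    echo "missing: $v"
  fi
done
echo "environment check complete"
```

Run the check with the same user and environment the Orchestrator process uses — a variable set in your login shell is not necessarily set for a systemd service or container.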
An unhealthy status usually means one or more connectors failed to initialize. Check the Orchestrator’s logs for startup errors.

If the health endpoint times out entirely, the Orchestrator may not be listening on the expected port. Verify the port configuration and check that no firewall rules are blocking access to the health endpoint.