Console terminology: In the Maverics Console, Orchestrator instances and
configuration delivery are managed through Deployments. When working directly
with YAML, configuration is managed as files delivered via the `-config` flag or
the `MAVERICS_CONFIG` environment variable.

Startup Issues
These issues prevent the Orchestrator from starting or completing its initialization.

Orchestrator will not start
The Orchestrator validates its configuration on startup and exits with an error
if anything is invalid. Check the error output for the specific cause.

Common causes:
- Invalid YAML syntax — Indentation errors, missing colons, unclosed quotes, or tabs instead of spaces. Run your config file through a YAML validator to catch syntax errors.
- Missing required fields — The Orchestrator logs which required fields are missing. Check the configuration reference for the minimum required configuration.
- Permission denied — The Orchestrator process does not have permission to read the config file, bind to the configured port, or access TLS certificates. Check file permissions and ensure the process runs with appropriate privileges.
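A quick pre-flight check can catch the YAML and permission causes before you restart. This sketch assumes a config at /etc/maverics/maverics.yaml, a service user named maverics, and PyYAML installed; substitute your own values:

```bash
CONFIG=/etc/maverics/maverics.yaml   # assumed path

# Catch YAML syntax errors (indentation, tabs, unclosed quotes).
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' "$CONFIG" \
  && echo "YAML parses cleanly"

# Confirm the service account can read the config file.
ls -l "$CONFIG"
sudo -u maverics test -r "$CONFIG" && echo "config readable" || echo "permission denied"
```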
Configuration file not found
The Orchestrator cannot locate the configuration file at the specified path.

Common causes:
- Wrong path — The `-config` flag points to a path that does not exist. Double-check the path, including case sensitivity on Linux.
- Mount not available — In Docker or Kubernetes, the volume mount may not be configured correctly. Verify the mount path inside the container matches the config flag.
- Environment variable not set — If you use an environment variable for the config path, confirm it is set in the Orchestrator’s runtime environment.
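A few quick checks, assuming an example path and a container named maverics:

```bash
# Confirm the file exists at the exact path passed to -config (case-sensitive on Linux).
ls -l /etc/maverics/maverics.yaml

# In Docker/Kubernetes, confirm the mount is visible inside the container at that path.
docker exec maverics ls -l /etc/maverics/maverics.yaml

# If the path is supplied via an environment variable, confirm it is set for the process.
docker exec maverics printenv MAVERICS_CONFIG
```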
Port already in use
The Orchestrator cannot bind to its configured listening port because another
process is already using it.

Common causes:
- Previous instance still running — A previous Orchestrator process did not shut down cleanly. Find and stop it before starting a new one.
- Another service — A different application is using the same port.
- Port below 1024 — Ports below 1024 require root privileges on most systems.
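To see which process owns the port, a quick check (port 8443 and the systemd unit name maverics are examples; use your configured port and service name):

```bash
# Identify the process currently bound to the port.
sudo ss -tlnp | grep ':8443'
# or:
sudo lsof -i :8443

# If it is a stale Orchestrator instance, stop it cleanly before starting a new one.
sudo systemctl stop maverics
```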
Authentication Issues
These issues affect user login flows, token validation, and SSO behavior.

Users not redirected to identity provider
When users visit a protected URL, they should be redirected to the identity
provider’s login page. If the redirect is not happening, the issue is usually
in the application route or connector configuration.

Common causes:
- Route not matched — The request URL does not match any configured application route. Check that the route’s URL pattern matches the URL the user is visiting.
- Connector not assigned — The application route does not have an identity connector assigned, so the Orchestrator does not know where to redirect for authentication.
- Connector misconfigured — The identity connector’s authorization endpoint URL is wrong or unreachable.
Troubleshooting steps:
- Check the Orchestrator’s logs for the incoming request — the log entry shows which route (if any) matched and whether a connector was invoked
- Verify the redirect URL in the browser’s developer tools (Network tab) to see if the Orchestrator is issuing a redirect at all
- Test the IdP’s authorization endpoint directly with `curl` to confirm it is reachable
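For example, from a shell you can confirm both halves of the flow. The hostnames and paths below are placeholders:

```bash
# 1. Request a protected URL and check whether the Orchestrator responds with a
#    redirect (for example, a 302 whose Location header points at the IdP).
curl -sk -o /dev/null -D - https://app.example.com/protected/

# 2. Confirm the IdP's authorization endpoint is reachable from this host.
curl -skI https://idp.example.com/oauth2/authorize
```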
Tokens rejected by upstream application
The user authenticates successfully, but the upstream application rejects
the request or shows an unauthorized error.

Common causes:
- Missing headers — The Orchestrator is not injecting the identity headers that the upstream application expects. Check your header injection configuration.
- Wrong header format — The upstream application expects headers in a specific format (for example, a Bearer token in the Authorization header) but the Orchestrator is sending a different format.
- Clock skew — Token validation failures can occur when the Orchestrator’s clock and the upstream application’s clock are out of sync. Ensure both systems use NTP.
Troubleshooting steps:
- Use `curl -v` to inspect the exact headers the Orchestrator sends to the upstream
- Check the upstream application’s logs for the specific rejection reason
- Compare the expected header format with what the Orchestrator sends
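To rule out a header-format mismatch, you can also call the upstream directly with the format it documents and see whether it accepts the request. The URL and header names below are purely illustrative, not values the Orchestrator necessarily sends:

```bash
# Illustrative only: substitute your upstream URL and the header names/format your
# application expects. $TOKEN is a valid access token obtained separately.
curl -v https://upstream.internal.example.com/api/me \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Remote-User: jdoe@example.com"

# Rule out clock skew: compare UTC time on the Orchestrator and upstream hosts.
date -u
```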
SSO redirect loop
The user gets stuck in an infinite loop between the Orchestrator and the
identity provider — redirecting back and forth without ever landing on the
application.

Common causes:
- Callback URL mismatch — The callback URL registered in the identity provider does not match the Orchestrator’s expected redirect URI. Both sides must match exactly (including protocol, host, port, and path).
- Session not persisting — The session cookie is not being set or read correctly. Check cookie domain and path settings, and ensure the browser is not blocking third-party cookies.
- Application behind the Orchestrator redirecting again — The upstream application has its own authentication that triggers another redirect. Disable the upstream app’s authentication since the Orchestrator handles it.
Troubleshooting steps:
- Open the browser’s developer tools and watch the Network tab to see the exact redirect sequence
- Check that the callback URL in your IdP configuration matches the Orchestrator’s redirect URI exactly
- Look for `Set-Cookie` headers in the Orchestrator’s responses to confirm the session cookie is being set
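You can also reproduce the loop from the command line with a cookie jar; if the Orchestrator never sets a session cookie, the loop is usually a cookie domain/path problem. The URL is a placeholder:

```bash
# Follow the redirect chain and print status lines, Location, and Set-Cookie headers.
curl -skL -c cookies.txt -o /dev/null -D - https://app.example.com/protected/ \
  | grep -iE '^(HTTP|location|set-cookie)'

# Inspect any cookies that were actually stored.
cat cookies.txt
```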
Configuration Issues
These issues affect how the Orchestrator reads and applies its configuration.

Configuration changes not taking effect
You updated the configuration file but the Orchestrator’s behavior has not
changed.

Common causes:
- Orchestrator not restarted — The Orchestrator reads its configuration at startup. Changes to the config file require a restart to take effect.
- Wrong config file — You may have edited a different config file than the one the Orchestrator is using. Check the `-config` flag or environment variable to confirm the path.
- Cached config — If using an external config source, the config may be cached. Check the config source’s cache TTL and refresh settings.
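To confirm which file the running process actually loaded (the process and systemd unit names below are assumptions):

```bash
# Look for the -config argument on the running process.
ps -eo pid,args | grep -i '[m]averics'

# If the path comes from MAVERICS_CONFIG, check the service's environment.
sudo systemctl show maverics -p Environment

# Changes are read at startup, so restart after editing.
sudo systemctl restart maverics
```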
Environment variable not resolving
You used an environment variable reference in your configuration file but the
Orchestrator is using the literal string instead of the variable’s value.

Common causes:
- Variable not set — The environment variable is not set in the Orchestrator’s runtime environment. Variables set in your local shell are not automatically available to services started by systemd, Docker, or Kubernetes.
- Wrong syntax — The Orchestrator uses `{{ env.VAR_NAME }}` syntax (double curly braces with spaces) for environment variable references in YAML. See the configuration reference for details.
- Variable set after startup — Environment variables are read at process startup. If you set a variable after the Orchestrator started, restart it.
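Check the environment of the Orchestrator process itself, not your interactive shell. The unit, container, namespace, and variable names below are examples:

```bash
# systemd: show the environment the unit was started with.
sudo systemctl show maverics -p Environment

# Docker: inspect the container's environment.
docker exec maverics printenv IDP_CLIENT_ID

# Kubernetes: inspect the pod's environment.
kubectl exec -n maverics deploy/maverics -- printenv IDP_CLIENT_ID
```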
Secret not loading from secret provider
A secret referenced in the configuration is not loading from the external
secret provider.

Common causes:
- Authentication failure — The Orchestrator cannot authenticate to the secret provider. Check the secret provider’s credentials and IAM permissions.
- Wrong secret path — The secret path or key does not match what exists in the secret provider. Paths are case-sensitive and must match exactly. In YAML, secrets are referenced with `<namespace.key>` syntax (angle brackets).
- Network access — The Orchestrator cannot reach the secret provider’s endpoint. Check network policies, firewall rules, and DNS resolution.
Troubleshooting steps:
- Enable `debug` logging to see the exact secret provider requests and responses
- Test the secret provider connection independently using its CLI tool (for example, `vault read secret/data/maverics` for HashiCorp Vault)
- Verify IAM roles and policies grant read access to the specific secret path
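For HashiCorp Vault, for instance, you can reproduce the read outside the Orchestrator. The Vault address is a placeholder, and this assumes VAULT_TOKEN is set with read access to the path:

```bash
export VAULT_ADDR=https://vault.example.com:8200   # placeholder address

# Reproduce the exact read the Orchestrator would perform; paths are case-sensitive.
vault read secret/data/maverics

# Basic network/TLS reachability of the provider from the Orchestrator's host.
curl -sk "$VAULT_ADDR/v1/sys/health"
```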
Connectivity Issues
These issues affect the Orchestrator’s ability to communicate with external services.

Cannot reach upstream application
The Orchestrator returns a 502 Bad Gateway error because it cannot connect to
the upstream application.

Common causes:
- Upstream not running — The upstream application is down or not listening on the expected port.
- Wrong upstream URL — The upstream URL in the route configuration is incorrect (wrong host, port, or protocol).
- Network isolation — Firewall rules, security groups, or Kubernetes NetworkPolicies are blocking traffic from the Orchestrator to the upstream.
- DNS resolution — The Orchestrator cannot resolve the upstream hostname.
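Testing from the Orchestrator’s own host (or inside its container or pod) isolates whether the problem is the upstream or the network path. The hostname and port below are placeholders; use the exact URL from your route configuration:

```bash
# DNS resolution as seen by this host.
getent hosts upstream.internal.example.com

# TCP, TLS, and HTTP reachability of the upstream itself.
curl -vk https://upstream.internal.example.com:8443/
```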
Identity provider connection timeout
The Orchestrator times out when trying to reach the identity provider during
authentication flows.

Common causes:
- IdP endpoint unreachable — The identity provider’s endpoints are not reachable from the Orchestrator’s network. This is common in air-gapped environments or when network egress is restricted.
- DNS issues — The Orchestrator cannot resolve the IdP’s hostname.
- TLS handshake failure — The Orchestrator cannot complete a TLS handshake with the IdP, often because of a missing CA certificate or certificate verification failure.
- Proxy required — Your network requires an HTTP proxy for outbound connections, and the Orchestrator is not configured to use it.
Troubleshooting steps:
- Test the IdP’s OIDC discovery endpoint directly from the Orchestrator’s host: `curl -v https://your-idp.example.com/.well-known/openid-configuration`
- Check for proxy requirements and set `HTTP_PROXY`/`HTTPS_PROXY` environment variables if needed
- Verify the IdP’s TLS certificate chain is trusted by the Orchestrator’s CA bundle
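A quick sequence to narrow down which layer is failing (the IdP hostname, proxy address, and CA bundle path are placeholders):

```bash
# Plain reachability of the discovery document from the Orchestrator's host.
curl -v https://your-idp.example.com/.well-known/openid-configuration

# If outbound traffic must traverse a proxy, test through it explicitly.
HTTPS_PROXY=http://proxy.example.com:3128 \
  curl -v https://your-idp.example.com/.well-known/openid-configuration

# If the TLS handshake fails, test against the CA bundle the Orchestrator trusts.
curl -v --cacert /path/to/ca-bundle.pem \
  https://your-idp.example.com/.well-known/openid-configuration
```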
Health check failing intermittently
The health endpoint (`/status`) returns `{"status": "up"}` when the
Orchestrator is operational. If your load balancer intermittently removes and
re-adds the instance, the Orchestrator may be unresponsive during those
periods.

Common causes:
- Resource pressure — The Orchestrator is running low on memory or CPU, causing slow health check responses that the load balancer interprets as timeouts.
- Network instability — Intermittent network issues between the load balancer and the Orchestrator instance.
- Process restarts — The Orchestrator is restarting due to configuration reloads or OOM kills, causing brief unavailability windows.
Troubleshooting steps:
- Monitor the Orchestrator’s resource usage (CPU, memory) during failure periods
- Check system logs (`journalctl -u maverics` or `docker logs`) for restarts
- Increase the load balancer’s health check timeout and failure threshold to tolerate brief interruptions without removing the instance from the pool
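To see whether the instance is genuinely slow or briefly unavailable, poll the health endpoint and correlate any failures with restarts. The host, port, and systemd unit name are placeholders:

```bash
# Poll /status and record status code and response time every 5 seconds.
while true; do
  curl -sk -o /dev/null -w '%{http_code} %{time_total}s\n' \
    https://orchestrator.example.com/status
  sleep 5
done

# In another shell, correlate failures with restarts or OOM kills.
journalctl -u maverics --since "1 hour ago" | grep -iE 'oom|killed|start'
```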
Getting Help
If you have worked through the troubleshooting steps above and are still stuck, here is how to get additional help.

Collect diagnostic information before reaching out to support. This makes it much faster to identify and resolve your issue:
- Orchestrator version — Run the Orchestrator with the `-version` flag or check the startup logs
- Configuration file — A sanitized copy of your config file (remove secrets and credentials)
- Logs — The relevant log entries around the time the issue occurred. Include the full error message and any stack traces. Enable `debug` logging temporarily for more detail
- Environment details — Operating system, container runtime version, Kubernetes version, cloud provider, and any relevant network configuration
- Strata Identity support — Contact the Strata support team for production issues. Include the diagnostic information above.
- Documentation — The Orchestrator Configuration Reference pages often have the specific detail you need for configuration questions.
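As a rough sketch of pulling the diagnostic items above together (the binary name, unit name, and paths are assumptions; redact secrets from the config copy before sharing):

```bash
# Collect version, recent logs, environment details, and a config copy to sanitize.
maverics -version > diagnostics.txt 2>&1 || true
uname -a >> diagnostics.txt
journalctl -u maverics --since "2 hours ago" > orchestrator.log
cp /etc/maverics/maverics.yaml config-to-sanitize.yaml   # redact secrets before sending
```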
Related Pages
Operations Overview
Back to the Operations guides hub
Orchestrator Configuration Reference
Full reference for configuration file structure, environment variables, and runtime settings
Telemetry
Complete configuration reference for telemetry, logging, and health check settings
Secret Providers Reference
Configuration for HashiCorp Vault, AWS, Azure, and other secret providers