Troubleshooting is easier with a systematic approach. This guide organizes the most common Maverics Orchestrator issues by category (startup, authentication, configuration, and connectivity) so you can quickly find the problem that matches your symptoms and follow the resolution steps. Each section covers the likely causes, diagnostic commands, and fixes.
Console terminology: In the Maverics Console, Orchestrator instances and configuration delivery are managed through Deployments. When working directly with YAML, configuration is managed as files delivered via the -config flag or MAVERICS_CONFIG environment variable.
Before diving into a specific issue, check the Orchestrator’s logs first. The Orchestrator emits structured JSON logs that include error details, request IDs, and component names. Most issues are diagnosable from the log output alone. See the Monitor and Observe guide for log configuration.

Startup Issues

These issues prevent the Orchestrator from starting or completing its initialization.

Configuration validation fails

The Orchestrator validates its configuration on startup and exits with an error if anything is invalid. Check the error output for the specific cause.
Common causes:
  • Invalid YAML syntax — Indentation errors, missing colons, unclosed quotes, or tabs instead of spaces. Run your config file through a YAML validator to catch syntax errors.
  • Missing required fields — The Orchestrator logs which required fields are missing. Check the configuration reference for the minimum required configuration.
  • Permission denied — The Orchestrator process does not have permission to read the config file, bind to the configured port, or access TLS certificates. Check file permissions and ensure the process runs with appropriate privileges.
Diagnostic steps:
# Check the Orchestrator's error output
maverics -config /etc/maverics/config.yaml 2>&1 | head -20

# Validate YAML syntax
python3 -c "import yaml; yaml.safe_load(open('/etc/maverics/config.yaml'))"

# Check file permissions
ls -la /etc/maverics/config.yaml
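
If the validator passes but the Orchestrator still rejects the file, tabs are a frequent culprit, since YAML forbids them for indentation. A quick sketch for spotting them (assuming GNU grep and coreutils; adjust for macOS):
# Find tab characters in the config file
grep -nP '\t' /etc/maverics/config.yaml

# Make indentation visible (tabs show as ^I)
cat -A /etc/maverics/config.yaml | head -40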

Configuration file not found

The Orchestrator cannot locate the configuration file at the specified path.
Common causes:
  • Wrong path — The -config flag points to a path that does not exist. Double-check the path, including case sensitivity on Linux.
  • Mount not available — In Docker or Kubernetes, the volume mount may not be configured correctly. Verify the mount path inside the container matches the config flag.
  • Environment variable not set — If you use an environment variable for the config path, confirm it is set in the Orchestrator’s runtime environment.
Diagnostic steps:
# Check if the file exists at the expected path
ls -la /etc/maverics/config.yaml

# In Docker, check the mount
docker inspect <container-id> | jq '.[0].Mounts'

# In Kubernetes, check the ConfigMap mount
kubectl exec <pod-name> -- ls -la /etc/maverics/
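
If you rely on the MAVERICS_CONFIG environment variable rather than the -config flag, confirm the variable is actually set in the process’s environment. A sketch, reusing the service and container placeholders from above:
# For a systemd service
systemctl show maverics -p Environment

# Inside a container or pod
docker exec <container-id> printenv MAVERICS_CONFIG
kubectl exec <pod-name> -- printenv MAVERICS_CONFIG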

Port already in use

The Orchestrator cannot bind to its configured listening port because another process is already using it.
Common causes:
  • Previous instance still running — A previous Orchestrator process did not shut down cleanly. Find and stop it before starting a new one.
  • Another service — A different application is using the same port.
  • Port below 1024 — Ports below 1024 require root privileges on most systems.
Diagnostic steps:
# Find what is using the port (Linux/macOS)
lsof -i :8080

# Kill the previous process if needed
kill <PID>

# Or change the port in your configuration
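
If lsof is unavailable, ss works on modern Linux. And if the conflict stems from needing a port below 1024, Linux capabilities let the binary bind without running as root. A hedged sketch, assuming the binary is installed at /usr/local/bin/maverics:
# Alternative to lsof on Linux
ss -ltnp | grep :8080

# Allow binding to privileged ports without root (Linux only)
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/maverics
getcap /usr/local/bin/maverics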

Authentication Issues

These issues affect user login flows, token validation, and SSO behavior.

Users are not redirected to the identity provider

When users visit a protected URL, they should be redirected to the identity provider’s login page. If the redirect is not happening, the issue is usually in the application route or connector configuration.
Common causes:
  • Route not matched — The request URL does not match any configured application route. Check that the route’s URL pattern matches the URL the user is visiting.
  • Connector not assigned — The application route does not have an identity connector assigned, so the Orchestrator does not know where to redirect for authentication.
  • Connector misconfigured — The identity connector’s authorization endpoint URL is wrong or unreachable.
Diagnostic steps:
  • Check the Orchestrator’s logs for the incoming request — the log entry shows which route (if any) matched and whether a connector was invoked
  • Verify the redirect URL in the browser’s developer tools (Network tab) to see if the Orchestrator is issuing a redirect at all
  • Test the IdP’s authorization endpoint directly with curl to confirm it is reachable
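
The first two checks can also be reproduced from the command line; a sketch using a hypothetical protected URL:
# Request a protected URL with no session; expect a 302 whose Location
# header points at the IdP's authorization endpoint
curl -sv -o /dev/null https://app.example.com/protected 2>&1 | grep -iE '^< (HTTP|location)'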

Upstream application rejects authenticated requests

The user authenticates successfully, but the upstream application rejects the request or shows an unauthorized error.
Common causes:
  • Missing headers — The Orchestrator is not injecting the identity headers that the upstream application expects. Check your header injection configuration.
  • Wrong header format — The upstream application expects headers in a specific format (for example, a Bearer token in the Authorization header) but the Orchestrator is sending a different format.
  • Clock skew — Token validation failures can occur when the Orchestrator’s clock and the upstream application’s clock are out of sync. Ensure both systems use NTP.
Diagnostic steps:
  • Use curl -v to inspect the exact headers the Orchestrator sends to the upstream
  • Check the upstream application’s logs for the specific rejection reason
  • Compare the expected header format with what the Orchestrator sends
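
To isolate whether the upstream’s expectations are the problem, bypass the Orchestrator and send the headers yourself. A sketch with hypothetical header names and values; substitute the headers your configuration actually injects:
# If this request also fails, the problem is on the upstream side
curl -v -H "Authorization: Bearer <token>" \
     -H "X-User-Email: test@example.com" \
     http://upstream-host:8080/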

Redirect loop between the Orchestrator and the identity provider

The user gets stuck in an infinite loop between the Orchestrator and the identity provider, redirecting back and forth without ever landing on the application.
Common causes:
  • Callback URL mismatch — The callback URL registered in the identity provider does not match the Orchestrator’s expected redirect URI. Both sides must match exactly (including protocol, host, port, and path).
  • Session not persisting — The session cookie is not being set or read correctly. Check cookie domain and path settings, and ensure the browser is not blocking third-party cookies.
  • Application behind the Orchestrator redirecting again — The upstream application has its own authentication that triggers another redirect. Disable the upstream app’s authentication since the Orchestrator handles it.
Diagnostic steps:
  • Open the browser’s developer tools and watch the Network tab to see the exact redirect sequence
  • Check that the callback URL in your IdP configuration matches the Orchestrator’s redirect URI exactly
  • Look for Set-Cookie headers in the Orchestrator’s responses to confirm the session cookie is being set
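
curl can capture the redirect chain and cookie behavior without browser extensions or cached state getting in the way. A sketch, assuming a hypothetical application URL:
# Trace each Location and Set-Cookie header across up to 10 redirects
curl -sv -L --max-redirs 10 -c cookies.txt -b cookies.txt \
     -o /dev/null https://app.example.com/ 2>&1 | grep -iE 'location:|set-cookie:'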

Configuration Issues

These issues affect how the Orchestrator reads and applies its configuration.

Configuration changes are not taking effect

You updated the configuration file, but the Orchestrator’s behavior has not changed.
Common causes:
  • Orchestrator not restarted — The Orchestrator reads its configuration at startup. Changes to the config file require a restart to take effect.
  • Wrong config file — You may have edited a different config file than the one the Orchestrator is using. Check the -config flag or environment variable to confirm the path.
  • Cached config — If using an external config source, the config may be cached. Check the config source’s cache TTL and refresh settings.
Diagnostic steps:
# Check which config file the Orchestrator is using
ps aux | grep maverics

# Restart the Orchestrator to pick up changes
systemctl restart maverics
# or in Docker
docker restart <container-id>
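
If you suspect you edited a different file than the one the Orchestrator loaded, compare checksums between the copy you changed and the one the process reads. A sketch using the container path from earlier examples:
# The two hashes should match if you edited the right file
sha256sum /etc/maverics/config.yaml
docker exec <container-id> sha256sum /etc/maverics/config.yaml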

Environment variable references are not resolving

You used an environment variable reference in your configuration file, but the Orchestrator is using the literal string instead of the variable’s value.
Common causes:
  • Variable not set — The environment variable is not set in the Orchestrator’s runtime environment. Variables set in your local shell are not automatically available to services started by systemd, Docker, or Kubernetes.
  • Wrong syntax — The Orchestrator uses {{ env.VAR_NAME }} syntax (double curly braces with spaces) for environment variable references in YAML. See the configuration reference for details.
  • Variable set after startup — Environment variables are read at process startup. If you set a variable after the Orchestrator started, restart it.
Diagnostic steps:
# Check if the variable is set in the Orchestrator's environment
# For systemd
systemctl show maverics -p Environment

# For Docker
docker inspect <container-id> | jq '.[0].Config.Env'

# For Kubernetes
kubectl exec <pod-name> -- env | grep YOUR_VAR
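
If the variable is missing for a systemd-managed service, a drop-in override is the usual fix. A sketch, assuming a service named maverics and a hypothetical variable:
# Open a drop-in override in your editor
sudo systemctl edit maverics
# Add the following, then save:
#   [Service]
#   Environment="YOUR_VAR=value"
sudo systemctl restart maverics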

Secrets are not loading from the external provider

A secret referenced in the configuration is not loading from the external secret provider.
Common causes:
  • Authentication failure — The Orchestrator cannot authenticate to the secret provider. Check the secret provider’s credentials and IAM permissions.
  • Wrong secret path — The secret path or key does not match what exists in the secret provider. Paths are case-sensitive and must match exactly. In YAML, secrets are referenced with <namespace.key> syntax (angle brackets).
  • Network access — The Orchestrator cannot reach the secret provider’s endpoint. Check network policies, firewall rules, and DNS resolution.
Diagnostic steps:
  • Enable debug logging to see the exact secret provider requests and responses
  • Test the secret provider connection independently using its CLI tool (for example, vault read secret/data/maverics for HashiCorp Vault)
  • Verify IAM roles and policies grant read access to the specific secret path
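
Testing the provider independently rules the Orchestrator out entirely. Sketches for two common providers; the secret paths are placeholders, so substitute your own:
# HashiCorp Vault (KV v2)
vault kv get secret/maverics

# AWS Secrets Manager
aws secretsmanager get-secret-value --secret-id maverics/oidc-client-secret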

Connectivity Issues

These issues affect the Orchestrator’s ability to communicate with external services.

502 Bad Gateway errors

The Orchestrator returns a 502 Bad Gateway error because it cannot connect to the upstream application.
Common causes:
  • Upstream not running — The upstream application is down or not listening on the expected port.
  • Wrong upstream URL — The upstream URL in the route configuration is incorrect (wrong host, port, or protocol).
  • Network isolation — Firewall rules, security groups, or Kubernetes NetworkPolicies are blocking traffic from the Orchestrator to the upstream.
  • DNS resolution — The Orchestrator cannot resolve the upstream hostname.
Diagnostic steps:
# Test connectivity from the Orchestrator's host
curl -v http://upstream-host:8080/

# Check DNS resolution
nslookup upstream-host

# In Kubernetes, check network policies
kubectl get networkpolicies -n <namespace>
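
In Kubernetes, connectivity from your workstation proves little; test from the Orchestrator’s own network context instead. A sketch using a throwaway curl pod:
# Test from inside the cluster, in the Orchestrator's namespace
kubectl run conn-test --rm -it --restart=Never -n <namespace> \
  --image=curlimages/curl -- curl -v http://upstream-host:8080/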

Timeouts reaching the identity provider

The Orchestrator times out when trying to reach the identity provider during authentication flows.
Common causes:
  • IdP endpoint unreachable — The identity provider’s endpoints are not reachable from the Orchestrator’s network. This is common in air-gapped environments or when network egress is restricted.
  • DNS issues — The Orchestrator cannot resolve the IdP’s hostname.
  • TLS handshake failure — The Orchestrator cannot complete a TLS handshake with the IdP, often because of a missing CA certificate or certificate verification failure.
  • Proxy required — Your network requires an HTTP proxy for outbound connections, and the Orchestrator is not configured to use it.
Diagnostic steps:
  • Test the IdP’s OIDC discovery endpoint directly from the Orchestrator’s host: curl -v https://your-idp.example.com/.well-known/openid-configuration
  • Check for proxy requirements and set HTTP_PROXY / HTTPS_PROXY environment variables if needed
  • Verify the IdP’s TLS certificate chain is trusted by the Orchestrator’s CA bundle
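
openssl can separate TLS handshake failures from DNS or routing problems. A sketch, reusing the IdP hostname from the example above and a hypothetical proxy address:
# Inspect the certificate chain as the Orchestrator would see it
openssl s_client -connect your-idp.example.com:443 \
  -servername your-idp.example.com -showcerts </dev/null

# If outbound traffic must go through a proxy, set it for the process
export HTTPS_PROXY=http://proxy.example.com:3128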

Load balancer health checks fail intermittently

The health endpoint (/status) returns {"status": "up"} when the Orchestrator is operational. If your load balancer intermittently removes and re-adds the instance, the Orchestrator may be unresponsive during those periods.
Common causes:
  • Resource pressure — The Orchestrator is running low on memory or CPU, causing slow health check responses that the load balancer interprets as timeouts.
  • Network instability — Intermittent network issues between the load balancer and the Orchestrator instance.
  • Process restarts — The Orchestrator is restarting due to configuration reloads or OOM kills, causing brief unavailability windows.
Diagnostic steps:
# Verify the health endpoint responds
curl -s https://localhost:9443/status

# Expected response: {"status": "up"}
  • Monitor the Orchestrator’s resource usage (CPU, memory) during failure periods
  • Check system logs (journalctl -u maverics or docker logs) for restarts
  • Increase the load balancer’s health check timeout and failure threshold to tolerate brief interruptions without removing the instance from the pool
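
To catch intermittent slowness, poll the health endpoint at the interval your load balancer uses and record the latency. A sketch (add -k if the endpoint uses a self-signed certificate):
# Poll /status every 5 seconds, logging status code and response time
while true; do
  curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' https://localhost:9443/status
  sleep 5
done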

Getting Help

If you have worked through the troubleshooting steps above and are still stuck, here is how to get additional help. Collect the following diagnostic information before reaching out to support; having it up front makes identifying and resolving your issue much faster:
  • Orchestrator version — Run the Orchestrator with a -version flag or check the startup logs
  • Configuration file — A sanitized copy of your config file (remove secrets and credentials)
  • Logs — The relevant log entries around the time the issue occurred. Include the full error message and any stack traces. Enable debug logging temporarily for more detail
  • Environment details — Operating system, container runtime version, Kubernetes version, cloud provider, and any relevant network configuration
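
A sketch for gathering the basics into one file before contacting support, assuming a systemd deployment and the -version flag mentioned above:
# Collect version, recent logs, and a sanitized config into one bundle
maverics -version > support-bundle.txt 2>&1
journalctl -u maverics --since "1 hour ago" >> support-bundle.txt
# Crude sanitization: drop lines that look like credentials; review by hand too
grep -viE 'secret|password|key' /etc/maverics/config.yaml >> support-bundle.txt
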
Support channels:
  • Strata Identity support — Contact the Strata support team for production issues. Include the diagnostic information above.
  • Documentation — The Orchestrator Configuration Reference pages often have the specific detail you need for configuration questions.