Prerequisites
- Kubernetes 1.24.0 or later
- Helm 3.x
Container Image
The Orchestrator container image is downloaded from the Maverics Console — it is not available from a public container registry. For Kubernetes deployments, you must push this image to a container registry accessible by your cluster.
Load and Push to a Registry
Start by downloading the container image tarball (`maverics-orchestrator.tar`) from the Download Orchestrator Software modal inside any Deployment in the Console. Load it into your local Docker image store, then tag and push it to your private registry.
- AWS ECR
- GCP Artifact Registry
- Azure ACR
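The flow is the same regardless of provider. A generic sketch (the registry hostname `registry.example.com` and repository path are placeholders — substitute your own, and check the `docker load` output for the exact loaded image name):

```shell
# Load the tarball into the local Docker image store
docker load -i maverics-orchestrator.tar

# Tag the loaded image for your private registry
docker tag strata/orchestrator:latest registry.example.com/maverics/orchestrator:latest

# Authenticate to the registry, then push
docker login registry.example.com
docker push registry.example.com/maverics/orchestrator:latest
```

Provider-specific registries (ECR, Artifact Registry, ACR) differ mainly in the login step and the registry URL format.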
Configure Kubernetes Image Pull
After pushing to a private registry, configure the Helm chart to pull from it by setting the `image` values:
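For example (the repository path is a placeholder, and the secret name assumes you have created a pull secret for the registry):

```yaml
image:
  repository: registry.example.com/maverics/orchestrator  # your private registry path
  tag: "1.2.3"             # pin an explicit version rather than latest
  pullPolicy: IfNotPresent

# Only needed when the registry requires credentials
imagePullSecrets:
  - name: maverics-registry-creds
```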
ECR tokens expire after 12 hours. For production AWS deployments, use workload IAM (Option 2) instead of image pull secrets to avoid token rotation issues.
- AWS (IRSA)
- GCP (Workload Identity)
- Azure (Workload Identity)
IAM Roles for Service Accounts (IRSA) lets Kubernetes pods assume an IAM role to pull from ECR without static credentials.
Prerequisites: EKS cluster with OIDC provider enabled.
Steps:
- Create an IAM policy granting `ecr:GetDownloadUrlForLayer`, `ecr:BatchGetImage`, and `ecr:GetAuthorizationToken`
- Create an IAM role with a trust policy for the EKS OIDC provider
- Annotate the ServiceAccount via Helm values:
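The annotation might look like this (the account ID and role name in the ARN are placeholders; `eks.amazonaws.com/role-arn` is the standard IRSA annotation key):

```yaml
serviceAccount:
  create: true
  annotations:
    # Role created in the previous step, with the ECR pull policy attached
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/maverics-ecr-pull
```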
With IRSA configured, no `imagePullSecrets` are needed.
Install and Uninstall
Install or upgrade the chart with Helm, supplying configuration through a values file (`-f values.yaml`) or inline overrides (`--set key=value`).
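A typical lifecycle might look like this (the release name, chart reference, and namespace are illustrative):

```shell
# Install (or upgrade in place) with a custom values file
helm upgrade --install maverics ./maverics-helm-chart \
  --namespace maverics --create-namespace \
  -f values.yaml

# Same, with an inline override instead of a values file
helm upgrade --install maverics ./maverics-helm-chart \
  --namespace maverics --set replicaCount=2

# Remove the release
helm uninstall maverics --namespace maverics
```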
Key Chart Settings
The sections below surface the most commonly configured chart settings. For the complete list, see All Available Options.
Image and Replicas
| Setting | Default | Description |
|---|---|---|
| `replicaCount` | `1` | Number of Orchestrator pods |
| `image.repository` | `strata/orchestrator` | Container image repository |
| `image.tag` | Chart `appVersion` | Image tag (defaults to the chart's `appVersion`) |
| `image.pullPolicy` | `IfNotPresent` | Image pull policy |
| `imagePullSecrets` | `[]` | Secrets for private registries |
Networking
| Setting | Default | Description |
|---|---|---|
| `service.type` | `ClusterIP` | Service type (`ClusterIP`, `LoadBalancer`, `NodePort`) |
| `service.port` | `8080` | Service port |
| `ingress.enabled` | `false` | Enable Ingress resource creation |
| `hostNetwork` | `false` | Use host networking (bypasses kube-proxy) |
Configure `ingress.hosts`, `ingress.paths`, and `ingress.tls` for TLS termination at the Ingress controller.
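A sketch of an Ingress with TLS (hostname and Secret name are placeholders, and the exact list structure may differ slightly in this chart — verify against the chart's default values):

```yaml
ingress:
  enabled: true
  hosts:
    - orchestrator.example.com
  paths:
    - /
  tls:
    - secretName: orchestrator-tls   # Secret containing tls.crt and tls.key
      hosts:
        - orchestrator.example.com
```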
Security
The chart ships with secure defaults: the container runs as a non-root user (UID 10001), drops all Linux capabilities, uses a read-only root filesystem, and applies a `RuntimeDefault` seccomp profile. A ServiceAccount is created automatically (`serviceAccount.create: true`) — add annotations via `serviceAccount.annotations` for cloud IAM integration (e.g., AWS IRSA, GCP Workload Identity).
Configuration
| Setting | Description |
|---|---|
| `orchestrator.baseConfig` | Base Orchestrator config (version + HTTP address) |
| `orchestrator.customConfig` | Inline YAML config overrides |
| `orchestrator.customConfigMapName` | Reference an external ConfigMap for config |
| `env` | Set environment variables directly |
| `envValueFrom` | Reference ConfigMap or Secret keys as env vars |
| `envFromSecrets` / `envFromConfigMaps` | Bulk-import all keys from a Secret or ConfigMap |
| `extraSecretMounts` | Mount Secrets as files (TLS certs, credentials) |
| `extraVolumeMounts` | Mount additional volumes |
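An illustrative combination of these settings (all names are placeholders, and the exact key shapes — for example, whether `env` takes a map or a list — should be verified against the chart's default values):

```yaml
env:
  LOG_LEVEL: info                    # plain environment variable

envValueFrom:
  SESSION_SECRET:                    # env var sourced from a Secret key
    secretKeyRef:
      name: orchestrator-secrets
      key: session-secret

extraSecretMounts:
  - name: tls-certs                  # mount a Secret as files
    secretName: orchestrator-tls
    mountPath: /etc/maverics/tls
    readOnly: true
```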
Resources and Scaling
| Setting | Default | Description |
|---|---|---|
| `resources` | `{}` | CPU/memory requests and limits |
| `autoscaling.enabled` | `false` | Enable Horizontal Pod Autoscaler |
| `podDisruptionBudget` | — | Configure a PDB for availability during rollouts |
| `terminationGracePeriodSeconds` | `120` | Graceful shutdown timeout |
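For example (the numbers are illustrative starting points, and the `autoscaling` sub-keys follow common Helm chart conventions — confirm them against this chart's defaults):

```yaml
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    memory: 512Mi      # consider omitting a CPU limit to avoid throttling

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 75

podDisruptionBudget:
  minAvailable: 1      # keep at least one pod up during voluntary disruptions
```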
Health Checks
The chart configures readiness and liveness probes against the `/status` endpoint. The liveness probe has a 25-second initial delay to allow for startup. These defaults work for most deployments.
Scheduling
Use `nodeSelector`, `tolerations`, and `affinity` to control pod placement across nodes and availability zones.
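For example, spreading replicas across zones and keeping them on a dedicated node pool (the taint key/value and pod labels are placeholders — match them to your cluster and to the chart's actual pod labels):

```yaml
nodeSelector:
  kubernetes.io/os: linux

tolerations:
  - key: dedicated
    operator: Equal
    value: maverics
    effect: NoSchedule

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: topology.kubernetes.io/zone   # spread across zones
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: orchestrator   # verify the chart's label
```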
Clustering
The Orchestrator supports multi-node clustering for high availability. Enable clustering with `orchestrator.clusters.create: true`. Clustering uses DNS service discovery with a pre-shared key (PSK) for inter-node authentication. Ports 9450 (TCP/UDP, membership) and 9451 (TCP, data sync) must be open between pods.
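A minimal sketch of enabling clustering (only `orchestrator.clusters.create` is documented here; any PSK or discovery sub-keys belong to the chart's schema and should be checked there):

```yaml
replicaCount: 3          # clustering only makes sense with multiple pods

orchestrator:
  clusters:
    create: true         # enables DNS discovery + PSK-authenticated membership
```

Make sure any NetworkPolicies allow pod-to-pod traffic on ports 9450 (TCP/UDP) and 9451 (TCP).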
Cloud Integration
For Console-managed deployments, enable `cloud.enabled: true` and configure `cloud.config` with bundle and key management settings. This mode supports S3-compatible storage for configuration bundles, allowing the Console to push configuration updates to your cluster.
Minimal Production Example
A starter `values.yaml` for a production-ready deployment:
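A hedged sketch of such a file, combining the settings documented above (registry path, hostname, and sizing are placeholders):

```yaml
replicaCount: 2

image:
  repository: registry.example.com/maverics/orchestrator
  tag: "1.2.3"           # pin an explicit version

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  hosts:
    - orchestrator.example.com

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    memory: 512Mi

podDisruptionBudget:
  minAvailable: 1

terminationGracePeriodSeconds: 120
```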
Example Configurations
The Helm chart repository includes ready-to-use example configurations in the `examples/` directory:
- Standalone Orchestrator (minimal)
- Clustered Orchestrator
- OpenShift deployment
- Cloud integration with local config
- Cloud integration with S3 storage
- External ConfigMap reference
All Available Options
To see the complete list of configurable chart options, run `helm show values` against the chart.
Related Pages
Installation Overview
System requirements, download options, CLI flags, and environment variables
Configuration
Configure the Orchestrator after installation
Getting Started
End-to-end quick-start guide