Deploy the Orchestrator to Kubernetes using the official Strata Helm chart. The charts are available publicly on GitHub — they are not downloaded through the Console.
Prerequisites
- Kubernetes 1.24.0 or later
- Helm 3.x
Container Image
The Orchestrator container image is downloaded from the Maverics Console — it is not available from a public container registry. For Kubernetes deployments, you must push this image to a container registry accessible by your cluster.
Load and Push to a Registry
Start by downloading the container image tarball (maverics-orchestrator.tar) from the Download Orchestrator Software modal inside any Deployment in the Console. Load it into your local Docker image store, then tag and push it to your private registry.
# Load the Console-provided image
docker load -i maverics-orchestrator.tar
Push the loaded image to your cloud provider’s container registry:
AWS ECR:

# Authenticate to ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Create the repository (first time only)
aws ecr create-repository --repository-name maverics-orchestrator --region us-east-1

# Tag and push
docker tag maverics-orchestrator:latest \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/maverics-orchestrator:v2026.02.1
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/maverics-orchestrator:v2026.02.1

GCP Artifact Registry:

# Authenticate to Artifact Registry
gcloud auth configure-docker us-central1-docker.pkg.dev

# Create the repository (first time only)
gcloud artifacts repositories create maverics \
  --repository-format=docker --location=us-central1

# Tag and push
docker tag maverics-orchestrator:latest \
  us-central1-docker.pkg.dev/my-project/maverics/orchestrator:v2026.02.1
docker push us-central1-docker.pkg.dev/my-project/maverics/orchestrator:v2026.02.1

Azure ACR:

# Authenticate to ACR
az acr login --name myregistry

# Tag and push
docker tag maverics-orchestrator:latest \
  myregistry.azurecr.io/maverics-orchestrator:v2026.02.1
docker push myregistry.azurecr.io/maverics-orchestrator:v2026.02.1
After pushing to a private registry, configure the Helm chart to pull from it by setting the image values:
image:
  repository: 123456789012.dkr.ecr.us-east-1.amazonaws.com/maverics-orchestrator
  tag: "v2026.02.1"
  pullPolicy: IfNotPresent
There are two approaches for authenticating your cluster to pull from the private registry.
Option 1: Image pull secrets (works with all registries):
# Create the pull secret
kubectl create secret docker-registry maverics-pull-secret \
  --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password=$(aws ecr get-login-password --region us-east-1) \
  -n maverics
Reference the secret in your Helm values:
imagePullSecrets:
  - name: maverics-pull-secret
ECR tokens expire after 12 hours. For production AWS deployments, use workload IAM (Option 2) instead of image pull secrets to avoid token rotation issues.
Option 2: Workload IAM (recommended for cloud-managed clusters):
Workload IAM lets Kubernetes pods authenticate to your cloud provider’s container registry without static credentials. This is the recommended approach for production deployments.
IAM Roles for Service Accounts (IRSA) lets Kubernetes pods assume an IAM role to pull from ECR without static credentials.

Prerequisites: an EKS cluster with an OIDC provider enabled.

Steps:
- Create an IAM policy granting ecr:GetDownloadUrlForLayer, ecr:BatchGetImage, and ecr:GetAuthorizationToken
- Create an IAM role with a trust policy for the EKS OIDC provider
- Annotate the ServiceAccount via Helm values:
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/maverics-ecr-access"
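If you use eksctl, the policy and role steps can be combined into a single command. This is a sketch, not the only way to do it: the cluster name, namespace, ServiceAccount name, and policy ARN below are placeholders for your environment, and it assumes the ECR pull policy from step 1 already exists. The role name matches the annotation shown in the Helm values.

```shell
# Sketch: create the IAM role and OIDC trust policy with eksctl
# (cluster, namespace, ServiceAccount, and policy names are placeholders)
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace maverics \
  --name my-orchestrator \
  --attach-policy-arn arn:aws:iam::123456789012:policy/maverics-ecr-pull \
  --role-name maverics-ecr-access \
  --role-only \
  --approve
```

The --role-only flag creates the IAM role and trust policy without creating the Kubernetes ServiceAccount, since the Helm chart creates and annotates it for you.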
With IRSA configured, no imagePullSecrets are needed.

GCP Workload Identity binds a Kubernetes ServiceAccount to a Google Cloud service account for keyless registry access.

Steps:
- Create a Google Cloud service account with roles/artifactregistry.reader
- Bind it to the Kubernetes ServiceAccount using gcloud iam service-accounts add-iam-policy-binding
- Annotate the ServiceAccount via Helm values:
serviceAccount:
  create: true
  annotations:
    iam.gke.io/gcp-service-account: "[email protected]"
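As a sketch of the binding step, assuming the chart's ServiceAccount is named my-orchestrator in the maverics namespace and the project ID is my-project (both placeholders):

```shell
# Allow the Kubernetes ServiceAccount to impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding \
  [email protected] \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[maverics/my-orchestrator]"
```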
Azure Workload Identity enables pods to authenticate to ACR using federated identity credentials.

Steps:
- Create a managed identity with the AcrPull role on the ACR
- Establish a federated credential for the Kubernetes ServiceAccount
- Configure Helm values:
serviceAccount:
  create: true
  annotations:
    azure.workload.identity/client-id: "<managed-identity-client-id>"
podLabels:
  azure.workload.identity/use: "true"
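The first two steps might look like the following sketch. The resource group, identity, cluster, and ServiceAccount names are placeholders, and it assumes an AKS cluster with the OIDC issuer enabled:

```shell
# Sketch: managed identity with AcrPull and a federated credential
# (resource group, identity, cluster, and ServiceAccount names are placeholders)
az identity create --name maverics-identity --resource-group my-rg

az role assignment create \
  --assignee "$(az identity show --name maverics-identity --resource-group my-rg --query clientId -o tsv)" \
  --role AcrPull \
  --scope "$(az acr show --name myregistry --query id -o tsv)"

az identity federated-credential create \
  --name maverics-federated \
  --identity-name maverics-identity \
  --resource-group my-rg \
  --issuer "$(az aks show --name my-cluster --resource-group my-rg --query oidcIssuerProfile.issuerUrl -o tsv)" \
  --subject system:serviceaccount:maverics:my-orchestrator
```

The clientId returned by az identity show is the value to use in the azure.workload.identity/client-id annotation.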
The Security subsection of Key Chart Settings mentions ServiceAccount annotations for cloud IAM integration — this section provides the specific configuration details for each cloud provider.
Install and Uninstall
# Add the Strata Helm repository
helm repo add strata https://strata-io.github.io/helm-charts
helm repo update
# Install the Orchestrator
helm install my-orchestrator strata/orchestrator
# Uninstall
helm delete my-orchestrator
Customize the installation with a values file (-f values.yaml) or inline overrides (--set key=value).
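For example, to install (or upgrade an existing release) with a custom values file plus an inline override:

```shell
# Install or upgrade in one step, applying a values file and an override
helm upgrade --install my-orchestrator strata/orchestrator \
  -f values.yaml \
  --set replicaCount=2
```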
Key Chart Settings
The sections below surface the most commonly configured chart settings. For the complete list, see All Available Options.
Image and Replicas
| Setting | Default | Description |
|---|---|---|
| replicaCount | 1 | Number of Orchestrator pods |
| image.repository | strata/orchestrator | Container image repository |
| image.tag | Chart appVersion | Image tag (defaults to the chart’s appVersion) |
| image.pullPolicy | IfNotPresent | Image pull policy |
| imagePullSecrets | [] | Secrets for private registries |
Networking
| Setting | Default | Description |
|---|---|---|
| service.type | ClusterIP | Service type (ClusterIP, LoadBalancer, NodePort) |
| service.port | 8080 | Service port |
| ingress.enabled | false | Enable Ingress resource creation |
| hostNetwork | false | Use host networking (bypasses kube-proxy) |
When Ingress is enabled, configure ingress.hosts, ingress.paths, and ingress.tls for TLS termination at the Ingress controller.
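A sketch of those Ingress values, assuming a hostname of orchestrator.example.com and a pre-created TLS Secret. The nesting of hosts and paths varies between charts; this sketch uses the common convention, so verify the exact shape with helm show values:

```yaml
ingress:
  enabled: true
  hosts:
    - host: orchestrator.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: orchestrator-example-tls   # pre-created TLS Secret (placeholder name)
      hosts:
        - orchestrator.example.com
```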
Security
The chart ships with secure defaults: the container runs as a non-root user (UID 10001), drops all Linux capabilities, uses a read-only root filesystem, and applies a RuntimeDefault seccomp profile. A ServiceAccount is created automatically (serviceAccount.create: true) — add annotations via serviceAccount.annotations for cloud IAM integration (e.g., AWS IRSA, GCP Workload Identity).
Configuration
| Setting | Description |
|---|---|
| orchestrator.baseConfig | Base Orchestrator config (version + HTTP address) |
| orchestrator.customConfig | Inline YAML config overrides |
| orchestrator.customConfigMapName | Reference an external ConfigMap for config |
| env | Set environment variables directly |
| envValueFrom | Reference ConfigMap or Secret keys as env vars |
| envFromSecrets / envFromConfigMaps | Bulk-import all keys from a Secret or ConfigMap |
| extraSecretMounts | Mount Secrets as files (TLS certs, credentials) |
| extraVolumeMounts | Mount additional volumes |
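A sketch combining a few of these settings. The ConfigMap and Secret names are placeholders, and the exact value shapes for customConfigMapName and envFromSecrets are assumptions based on common chart conventions; confirm them with helm show values:

```yaml
orchestrator:
  customConfigMapName: maverics-config   # pre-created ConfigMap holding the Orchestrator config
envFromSecrets:
  - maverics-app-secrets                 # every key in the Secret becomes an env var
extraSecretMounts:
  - name: tls-certs
    secretName: orchestrator-tls         # pre-created Secret with TLS material
    mountPath: /etc/maverics/certs
    readOnly: true
```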
Resources and Scaling
| Setting | Default | Description |
|---|---|---|
| resources | {} | CPU/memory requests and limits |
| autoscaling.enabled | false | Enable Horizontal Pod Autoscaler |
| podDisruptionBudget | — | Configure PDB for availability during rollouts |
| terminationGracePeriodSeconds | 120 | Graceful shutdown timeout |
Set explicit resource requests and limits for production deployments (e.g., resources.requests.cpu: 250m, resources.requests.memory: 256Mi) to ensure predictable scheduling and prevent resource contention.
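A hedged sketch that sets those requests and enables the HPA and a PodDisruptionBudget. The field names under autoscaling and podDisruptionBudget follow common Helm chart conventions rather than confirmed chart schema; verify them with helm show values:

```yaml
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi
autoscaling:
  enabled: true
  minReplicas: 2        # assumed field names; check the chart's values
  maxReplicas: 5
  targetCPUUtilizationPercentage: 75
podDisruptionBudget:
  minAvailable: 1       # keep at least one pod during voluntary disruptions
```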
Health Checks
The chart configures readiness and liveness probes against the /status endpoint. The liveness probe has a 25-second initial delay to allow startup. These defaults work for most deployments.
Scheduling
Use nodeSelector, tolerations, and affinity to control pod placement across nodes and availability zones.
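For example, to pin pods to Linux nodes and spread replicas across availability zones. The node and topology keys are standard Kubernetes well-known labels; the pod label selector is a placeholder for whatever labels the chart actually applies:

```yaml
nodeSelector:
  kubernetes.io/os: linux
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: topology.kubernetes.io/zone
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: orchestrator   # placeholder; match the chart's labels
```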
Clustering
The Orchestrator supports multi-node clustering for high availability. Enable clustering with orchestrator.clusters.create: true. Clustering uses DNS service discovery with a pre-shared key (PSK) for inter-node authentication. Requires ports 9450 (TCP/UDP, membership) and 9451 (TCP, data sync) open between pods.
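A minimal sketch of a clustered deployment. Only orchestrator.clusters.create is confirmed above; how the PSK is supplied depends on your Orchestrator configuration, so treat the comment as a pointer rather than chart schema:

```yaml
replicaCount: 3
orchestrator:
  clusters:
    create: true
# Supply the pre-shared key via your Orchestrator config or a mounted Secret,
# and ensure ports 9450 (TCP/UDP) and 9451 (TCP) are open between pods.
```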
Cloud Integration
For Console-managed deployments, enable cloud.enabled: true and configure cloud.config with bundle and key management settings. This mode supports S3-compatible storage for configuration bundles, allowing the Console to push configuration updates to your cluster.
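A heavily hedged sketch of those values; whether cloud.config takes a block string and what keys it accepts are assumptions here, so start from the chart's cloud integration examples for the authoritative schema:

```yaml
cloud:
  enabled: true
  config: |
    # bundle storage (e.g., S3-compatible) and key management settings go here;
    # see the chart's cloud integration examples for the full schema
```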
Minimal Production Example
A starter values.yaml for a production-ready deployment:
replicaCount: 2
image:
  tag: "v2026.02.1"   # pin the version you pushed; avoid "latest" in production
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi
service:
  type: ClusterIP
  port: 8080
env:
  - name: MAVERICS_HTTP_ADDRESS
    value: ":8080"
extraSecretMounts:
  - name: tls-certs
    secretName: orchestrator-tls
    mountPath: /etc/maverics/certs
    readOnly: true
Example Configurations
The Helm chart repository includes ready-to-use example configurations in the examples/ directory:
- Standalone Orchestrator (minimal)
- Clustered Orchestrator
- OpenShift deployment
- Cloud integration with local config
- Cloud integration with S3 storage
- External ConfigMap reference
All Available Options
Run the following command to see the complete list of configurable chart options:
helm show values strata/orchestrator
Related Pages