Kubernetes
Background
The Maverics Orchestrator is a highly extensible, highly scalable Golang application that can run on bare-metal servers, virtual machines, and any Kubernetes (K8s) distribution at version 1.24.0 or later. While it is not possible to detail every runtime scenario, this document should be used together with the orchestrator Helm chart to provide best practices and guidance on configuring orchestrator pods in a production K8s cluster. Strata currently tests orchestrators on AWS EKS and Red Hat OpenShift Service on AWS (ROSA).
The configuration guidance provided here applies whether you manage and operate the orchestrator with the template-driven Helm package manager for more complex configurations, or with lighter-weight, less abstracted tooling such as Kustomize. We recommend Strata’s orchestrator Helm chart, which models best practices for orchestrator deployment. Early releases of the chart are available to select customers, with publication to public repositories targeted for Q4 2023.
Architecture Recommendations
The following is a list of K8s constructs and components, along with recommended tools and configuration that can streamline the deployment and operation of orchestrators on K8s clusters. Each component in the orchestrator Helm chart is described below; customer-specific overrides are possible through, in the case of Helm, the values.yaml file.
Secrets Management
Kubernetes uses its Secrets feature for confidential information such as passwords, keys, and tokens, but the default configuration does not provide sufficient levels of protection and encryption for the secrets required by the orchestrator (e.g. encryption keys, signing keys, client secrets).
Before deploying the orchestrator in a production environment, follow the Good practices for Kubernetes Secrets guide to ensure that the sensitive data saved as Secrets is as secure as possible. In particular:
- Enable encryption at rest
- Configure RBAC rules for least-privilege access
- Restrict Secret access to specific containers
- Use an external Secret store provider (e.g. AWS Secrets Manager)
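To illustrate the least-privilege RBAC point, the sketch below grants an orchestrator service account read access to a single named Secret. All names (namespace, Secret, service account) are placeholders, not values taken from the Helm chart:

```yaml
# Role granting read access to one named Secret only (names are illustrative).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: orchestrator-secret-reader
  namespace: maverics
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["orchestrator-keys"]   # restrict access to this Secret only
    verbs: ["get"]
---
# Bind the Role to the orchestrator's service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-secret-reader
  namespace: maverics
subjects:
  - kind: ServiceAccount
    name: orchestrator
    namespace: maverics
roleRef:
  kind: Role
  name: orchestrator-secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the `rules` entry names the Secret explicitly, pods running under any other service account, or requesting any other Secret, are denied by default.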
StatefulSets
StatefulSet is the workload API object particularly suited for applications that require stable, unique network identifiers, stable persistent storage and ordered, graceful deployment and scaling.
StatefulSets are required for the orchestrator on K8s because the orchestrator is stateful with respect to identity data such as sessions, users’ claims, and tokens. StatefulSets maintain a sticky identity for each orchestrator Pod, which ensures that after rescheduling, the association between a Pod and its volume remains intact. While volumes are not used today, they may be used in future orchestrator feature development.
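A minimal StatefulSet sketch is shown below; the image reference, labels, and port are illustrative placeholders, not values from the Strata chart. The key property is that each Pod receives a stable ordinal identity (orchestrator-0, orchestrator-1, ...) tied to the named headless Service:

```yaml
# Minimal StatefulSet sketch (image, labels, and names are illustrative).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: orchestrator
spec:
  serviceName: orchestrator        # headless Service providing stable network identities
  replicas: 2
  selector:
    matchLabels:
      app: orchestrator
  template:
    metadata:
      labels:
        app: orchestrator
    spec:
      containers:
        - name: orchestrator
          image: registry.example.com/maverics/orchestrator:latest  # placeholder
          ports:
            - containerPort: 8443
```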
Services
Services provide a way to expose orchestrators running on a set of Pods as a network service. This allows applications to reference the primary orchestrator by service name, similar to the way a CNAME on a load balancer or virtual IP address works.
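A Service selecting the orchestrator Pods might look like the following sketch; the selector label and ports are assumptions for illustration:

```yaml
# Service fronting the orchestrator Pods (names and ports are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: orchestrator
spec:
  selector:
    app: orchestrator      # matches the labels on the orchestrator Pods
  ports:
    - name: https
      port: 443            # port clients connect to
      targetPort: 8443     # port the orchestrator container listens on
```

In-cluster applications can then reach the orchestrator at the stable DNS name orchestrator.&lt;namespace&gt;.svc, regardless of Pod rescheduling.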
Network Policy
The appropriate network configuration for securing an orchestrator should ensure appropriate protection for traffic coming through ingress, also referred to as north/south traffic. This will be heavily dependent on customer security policies and how orchestrators have been configured.
For east/west traffic (traffic flowing between services and from services to datastores), Strata uses and recommends the enhanced networking features of Calico, which allow deeper management of cluster networking policies. The Calico plugin provides features such as:
- Policies can be applied to pods, containers, virtual machines, or interfaces.
- Rules can contain a specific action (such as restriction, permission, or logging).
- Rules can contain ports, port ranges, protocols, HTTP/ICMP attributes, IPs, subnets, or selectors for nodes (such as hosts or environments).
- Traffic flow can be controlled via DNAT settings and policies.
Since the purpose of an orchestrator is to manage traffic based on identity, the extended features of a tool such as Calico can enhance the security of orchestrator deployments.
Today, Network Policies must be configured out of band from the orchestrator Helm chart; however, the chart has access to all the data needed to template them. Once that support is added, customers using a CNI that supports Network Policies will gain an additional layer of security out of the box.
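Until the chart templates these itself, a standard Kubernetes NetworkPolicy (enforced by Calico or any CNI that supports the API) can be applied out of band. The sketch below restricts ingress to orchestrator Pods to the ingress controller's namespace and limits egress to a datastore; all labels and namespace names are placeholders:

```yaml
# Illustrative NetworkPolicy: only the ingress controller may reach the
# orchestrator Pods, and the Pods may only reach a labeled datastore.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orchestrator-east-west
spec:
  podSelector:
    matchLabels:
      app: orchestrator
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # placeholder namespace
      ports:
        - protocol: TCP
          port: 8443
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: datastore   # placeholder datastore label
```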
Pod Disruption Budgets (PDB)
As with any production orchestrator installation, Strata recommends that some form of high availability always be configured so that key identity traffic is not disrupted. The PDB for the orchestrator is currently disabled by default in the Helm chart's values.yaml file. However, Strata advises customers to configure it based on their tolerance for voluntary disruptions and the number of orchestrator replicas running in their specific environment.
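As a starting point, a PDB that keeps at least one orchestrator serving during voluntary disruptions (node drains, upgrades) could look like the sketch below; the selector label is illustrative and the `minAvailable` value should match your own disruption tolerance:

```yaml
# Keep at least one orchestrator Pod available during voluntary disruptions.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: orchestrator
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: orchestrator    # must match the orchestrator Pod labels
```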
ConfigMaps
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Orchestrator Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
Non-secret values set when deploying the orchestrator Helm chart (e.g. options set in the customConfig section) are saved to a ConfigMap in the namespace of the deployment.
Ingress
The orchestrator imposes no restrictions on the choice of ingress. Strata uses AWS Application Load Balancers (ALBs) as the ingress controller layer in internal K8s clusters. The orchestrator has been tested with the ALB and NGINX ingress controllers.
When scaling an orchestrator to multiple instances, configure the ingress controller for session affinity (also known as “sticky sessions”) so that front-channel communication with clients consistently routes to the same orchestrator.
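With the NGINX ingress controller, session affinity is enabled through annotations, as in the sketch below; the hostname, cookie name, and backend Service details are placeholders:

```yaml
# Cookie-based session affinity with the NGINX ingress controller
# (host, cookie name, and service details are illustrative).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orchestrator
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "orchestrator-route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: sso.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orchestrator
                port:
                  number: 443
```

The ALB ingress controller offers equivalent stickiness via target-group attributes rather than cookie annotations.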
Horizontal Pod Autoscaler (HPA)
Strata suggests starting with a minimum replica count of 2 and a maximum of 5. This assumes conservative sizing of the orchestrator Pod, for example 0.5 vCPU and 256 MB RAM. Adjust these values as more applications are added and load on the orchestrator increases. We recommend targeting 70% CPU utilization for the autoscaler.
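The suggested starting point translates directly into an autoscaling/v2 HorizontalPodAutoscaler; the target workload name below is illustrative:

```yaml
# HPA matching the suggested starting point: 2-5 replicas, 70% CPU target.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orchestrator
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: orchestrator    # illustrative workload name
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Note that utilization is computed against the Pod's CPU request, so the resource requests discussed below must be set for the HPA to function.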
Best Practices - Dependent on Environment
Use of Anti-Affinity Rules
Pods of the orchestrator StatefulSet should be spread across different nodes so that a single node failure does not bring down multiple instances of the orchestrator. This is configurable by overriding the default affinity value within the Helm chart using the customer-specific override file values.yaml.
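An anti-affinity override could look like the sketch below; the exact shape of the chart's affinity value and the Pod label are assumptions to check against the chart's own values.yaml:

```yaml
# values.yaml override sketch: require orchestrator Pods on distinct nodes.
# (Key structure and labels are assumptions; verify against the chart.)
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: orchestrator
        topologyKey: kubernetes.io/hostname   # spread across nodes
```

Using preferredDuringSchedulingIgnoredDuringExecution instead makes the spread a soft preference, which avoids unschedulable Pods on small clusters.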
Setting Resource Request and Limits
This can be enabled by overriding the default resources setting within the Helm chart using the customer-specific override file values.yaml. The appropriate limits depend on how the orchestrator is deployed. For example, when configured as an OIDC provider, the orchestrator needs different resources than when configured to proxy application traffic. Irrespective of the orchestrator configuration, the amount and pattern of traffic passing through the orchestrator will heavily influence the resource requirements.
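A values.yaml override using the conservative Pod sizing mentioned earlier (0.5 vCPU, 256 MB RAM) as the request could look like the sketch below; the limit values are illustrative assumptions, not tested recommendations:

```yaml
# values.yaml override sketch: requests from the conservative sizing above,
# limits chosen for illustration only.
resources:
  requests:
    cpu: 500m        # 0.5 vCPU
    memory: 256Mi
  limits:
    cpu: "1"         # illustrative ceiling
    memory: 512Mi    # illustrative ceiling
```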
Use of Horizontal Pod Autoscaler (HPA)
Horizontal pod autoscaling can be enabled and configured in the autoscaling block of the Helm values.
When the orchestrator is configured as a proxy (App Gateway), pods can be autoscaled provided that the ingress is configured for sticky sessions. The orchestrator instance keeps its session cache locally, so it is important that front channel traffic for clients continues routing to the same instance.
When the orchestrator is configured as an OIDC Provider, a cache must be configured (see Orchestrator as an OIDC Provider) before setting replicaCount higher than 1 or enabling the pod autoscaler.
Orchestrator as an OIDC Provider
To run the orchestrator’s OIDC provider with multiple instances, an external cache (e.g. Redis) or distributed orchestrator cache (currently in development) is required. All instances running the same OIDC provider configuration need access to the same metadata cache.
Instructions for external cache configuration vary depending on the environment. Appropriate orchestrator configuration settings will be provided by a Strata support engineer during deployment planning.