Kubernetes Configuration Manager — Helm
For example, deploying the Nginx-alpine application required 4 resource manifests: Namespace, Deployment, Service, and ConfigMap. However, this suite of manifests covers the deployment to a single environment, e.g., sandbox. The application also has to be promoted through staging and production environments, each of which references a separate set of manifests. The configuration must be tailored for each environment, such as allocating more CPU and memory in production because it handles more traffic, or running a different number of replicas in each cluster. In this case, an engineering team ends up managing 3 sets of manifests, 1 for each cluster.
The number of manifests multiplies further when the application is distributed across multiple regions. If the application is also released in AP (Asia Pacific) and the US, the team ends up managing 9 different sets of manifests.
To solve the challenge of managing multiple sets of manifests, we can use a configuration management tool such as Helm.
Helm is a package manager that manages Kubernetes configuration with charts. A Helm chart is a collection of YAML files that describe the desired state of multiple Kubernetes resources. These files can be parameterized using Go templates.
Prerequisites:
- Kubernetes Cluster
- Helm
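Before going further, it is worth confirming that the Helm client is installed and that kubectl already points at the target cluster. A quick sanity check (assuming Helm 3) looks like this:
# Check the Helm client version
helm version
# Confirm kubectl can reach the cluster Helm will deploy to
kubectl cluster-info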
Helm Chart
A Helm chart is composed of the following files:
.
├── Chart.yaml
├── templates
│ ├── configmap.yaml
│ ├── deployment.yaml
│ ├── namespace.yaml
│ └── service.yaml
├── values-prod.yaml
├── values-staging.yaml
└── values.yaml
- Chart.yaml — exposes chart details, such as the description, version, and dependencies
- templates/ folder — contains the templated YAML manifests for the Kubernetes resources
- values.yaml — the default input configuration file for the chart. If no other values file is supplied, the parameters in this file are used.
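You do not have to create these files by hand. As a starting point, Helm can scaffold a chart skeleton that you can then trim down to the layout shown above (the chart name nginx-deployment here is simply the one used in this example):
# Scaffold a new chart skeleton, then adapt it to the structure above
helm create nginx-deployment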
Chart.yaml
A Chart.yaml file contains the apiVersion, name, description, version, and maintainer details.
As an example, we are going to deploy an Nginx Deployment on Kubernetes with Helm:
apiVersion: v1
name: nginx-deployment
description: Install Nginx deployment manifests
keywords:
  - nginx
version: 1.0.0
maintainers:
  - name: siva naik
    email: sivanaikk0903@gmail.com
- version is the version of the Chart
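Once Chart.yaml is in place, the chart metadata can be checked before going any further. A short sketch, assuming the chart lives in a local directory named nginx-deployment:
# Run Helm's built-in checks against the chart directory
helm lint ./nginx-deployment
# Print the chart metadata as Helm parses it
helm show chart ./nginx-deployment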
templates directory
The templates directory contains the templated manifest files for the Kubernetes resources. These templates expect inputs, which we pass through values files.
templates/
├── configmap.yaml
├── deployment.yaml
├── namespace.yaml
└── service.yaml
- configmap.yaml contains the manifest to create a ConfigMap
apiVersion: v1
data:
  {{ .Values.configmap.data }}
  #version: alpine
kind: ConfigMap
metadata:
  name: nginx-version
  namespace: {{ .Values.namespace.name }}
The expressions in {{ .. }} syntax are variables that reference parameters defined in the values.yaml file. For example, the Namespace name is taken from the name field under the namespace block. This is how the manifests are templated with Go templates: instead of hardcoding the name of the Namespace, it is parameterized as follows:
namespace: {{ .Values.namespace.name }}
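To see how these variables resolve against values.yaml before anything is applied to the cluster, the chart can be rendered locally. A minimal sketch, where the release name demo-nginx and the chart path are assumptions for this example:
# Render the templates locally using the default values.yaml, without installing anything
helm template demo-nginx ./nginx-deployment
# Render with the staging values instead
helm template demo-nginx ./nginx-deployment -f ./nginx-deployment/values-staging.yaml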
- namespace.yaml
This file contains the manifest for the Kubernetes Namespace that hosts the Nginx resources.
apiVersion: v1
kind: Namespace
metadata:
  labels:
    tier: test
  name: {{ .Values.namespace.name }}
- service.yaml
This file contains the manifest to create a Service in Kubernetes.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
    tag: alpine
  name: nginx-alpine
  namespace: {{ .Values.namespace.name }}
spec:
  ports:
    - port: {{ .Values.service.port }}
      protocol: TCP
      targetPort: {{ .Values.service.port }}
  selector:
    app: nginx
    tag: alpine
  type: {{ .Values.service.type }}
- deployment.yaml
This file contains the manifest to create the Nginx Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
tag: alpine
name: nginx-alpine
namespace: {{ .Values.namespace.name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: nginx
tag: alpine
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: nginx
tag: alpine
spec:
containers:
- image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
name: nginx-alpine
resources:
{{ toYaml .Values.resources | indent 12 }}
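The toYaml .Values.resources | indent 12 expression serializes the whole resources block from the values file and indents it twelve spaces so it lines up under resources:. With the default values.yaml shown below, that line should render to roughly the following snippet (a sketch of the expected output, not literal helm output):
          resources:
            requests:
              cpu: 50m
              memory: 256Mi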
values.yaml file
The values.yaml file contains default input parameters for a Helm chart. The parameters are consumed by the templated YAML manifests through the .Values object.
We can maintain multiple values files and pass one as an argument based on the environment we are deploying to, say sandbox, staging, or production.
- values.yaml
namespace:
  name: demo
service:
  port: 8111
  type: ClusterIP
image:
  repository: nginx
  tag: alpine
  pullPolicy: IfNotPresent
replicaCount: 3
resources:
  requests:
    cpu: 50m
    memory: 256Mi
configmap:
  data: "version: alpine"
- values-staging.yaml
namespace:
  name: staging
service:
  port: 8111
  type: ClusterIP
image:
  repository: nginx
  tag: 1.18.0
  pullPolicy: IfNotPresent
replicaCount: 1
resources:
  requests:
    cpu: 50m
    memory: 128Mi
configmap:
  data: "version: 1.18.0"
- values-prod.yaml
namespace:
  name: prod
service:
  port: 80
  type: ClusterIP
image:
  repository: nginx
  tag: 1.17.0
  pullPolicy: IfNotPresent
replicaCount: 2
resources:
  requests:
    cpu: 70m
    memory: 256Mi
configmap:
  data: "version: 1.17.0"
Each values file carries different values depending on the environment in which the resources are deployed.
Let's install the Helm chart we created:
helm install [name] [helm-chart-path]
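For the chart in this example, the per-environment installs would look roughly like the following (release names such as nginx-sandbox are assumptions, and the chart is assumed to live in ./nginx-deployment):
# Sandbox: uses the default values.yaml
helm install nginx-sandbox ./nginx-deployment
# Staging: overrides the defaults with values-staging.yaml
helm install nginx-staging ./nginx-deployment -f ./nginx-deployment/values-staging.yaml
# Production: overrides the defaults with values-prod.yaml
helm install nginx-prod ./nginx-deployment -f ./nginx-deployment/values-prod.yaml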
- List the installed Helm releases
helm list
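After a release is installed, it can be updated or removed with the same chart. A short sketch of the usual lifecycle commands, using the release and chart names assumed above:
# Roll out a change, e.g. bump the replica count for the sandbox release
helm upgrade nginx-sandbox ./nginx-deployment --set replicaCount=5
# Inspect the state and revision history of a release
helm status nginx-sandbox
helm history nginx-sandbox
# Remove a release and the Kubernetes resources it created
helm uninstall nginx-sandbox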
Thank you!