Deep Dive Into Helm
Introduction
It has been quite a journey to get to this point. This post is a continuation of the previous post, where I went into great detail about how this blog is created and managed. To recap, this blog is a Hugo static site generated from plain text files and managed in a Git repository. When I check in content, the site is built and deployed to Kubernetes using Drone.
I covered everything in the previous post except how the generated website is deployed to Kubernetes. The site content and a web server are put into a container and uploaded to a container registry. To automate deploying that container to the test and production sites, I use Helm.
Helm is a package manager for Kubernetes. Like any package manager, it can be used to install, upgrade/modify, and remove applications. It uses charts which, at their core, are templated Kubernetes manifests. We can pass values to Helm at runtime and those values are substituted into the templates to generate the final Kubernetes manifests.
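For example, installing or upgrading a release of a chart with runtime overrides is a single command. A minimal sketch (the release name, namespace, chart path, and values here are illustrative):

helm upgrade --install blog ./helm/chartname \
  --namespace blog --create-namespace \
  --set image.tag=1.2.3 \
  -f values-production.yaml

Anything passed with --set or -f is merged over the chart's default values before the templates are rendered.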
The Chart
The Helm chart lives directly in the site's Git repository under helm/chartname.
Similar to .gitignore, .helmignore tells Helm to ignore some files:
helm/chartname/.helmignore:
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
The Chart.yaml file defines metadata about the chart.
helm/chartname/Chart.yaml:
apiVersion: v2
name: chartname
description: Helm chart to deploy my Hugo site
type: application
# Chart version
version: 0.1.0
appVersion: "latest"
The values.yaml file defines the default values for the chart. These values can be overridden by passing new values at runtime; for example, different values are passed when deploying to the test site and the live/production site (see the sketch after the file).
helm/chartname/values.yaml:
# Default values for chartname.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  repository: hub.domain.tld/user/repo
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
imageCredentials:
  registry: quay.io
  username: someone
  password: sillyness
  email: [email protected]
nameOverride: ""
fullnameOverride: ""
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
podAnnotations: {}
podSecurityContext: {}
  # fsGroup: 2000
securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
service:
  type: ClusterIP
  port: 80
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
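Any of these defaults can be overridden per environment. As a sketch (release names, namespaces, tags, and the password variable are illustrative, not the values I actually use), the test and production deployments might differ only in what's passed on the command line:

# Test site: deploy the image built from the current commit
helm upgrade --install blog-test ./helm/chartname \
  --namespace blog-test \
  --set image.tag=${DRONE_COMMIT_SHA} \
  --set imageCredentials.password=${REGISTRY_PASSWORD}

# Production site: deploy a known-good tag with an extra replica
helm upgrade --install blog ./helm/chartname \
  --namespace blog \
  --set image.tag=1.2.3 \
  --set replicaCount=2 \
  --set imageCredentials.password=${REGISTRY_PASSWORD}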
The templates directory contains all of the templates that make up the chart. They are Kubernetes manifests with placeholders for the values passed at runtime. The _helpers.tpl file defines additional derived values used throughout the templates.
helm/chartname/templates/_helpers.tpl:
{{/*
Expand the name of the chart.
*/}}
{{- define "chartname.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "chartname.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "chartname.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "chartname.labels" -}}
helm.sh/chart: {{ include "chartname.chart" . }}
{{ include "chartname.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "chartname.selectorLabels" -}}
app.kubernetes.io/name: {{ include "chartname.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "chartname.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "chartname.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Create the contents of the imagePullSecret
*/}}
{{- define "chartname.imagePullSecret" }}
{{- with .Values.imageCredentials }}
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
{{- end }}
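These helpers can be checked without touching the cluster by rendering the chart locally. A quick sketch, assuming the chart path used above; helm template prints the rendered manifests and --show-only limits the output to a single file:

# Render only the image pull secret to confirm the helper produces valid output
helm template blog ./helm/chartname --show-only templates/secret.yaml

# Catch basic chart mistakes before deploying
helm lint ./helm/chartname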
The notes file is rendered and printed after an install or upgrade completes to give the Helm user information about the release.
helm/chartname/templates/NOTES.txt:
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "chartname.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "chartname.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "chartname.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "chartname.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}
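Helm prints these notes at the end of every install or upgrade, and they can be recalled later for a running release (the release name and namespace are illustrative):

helm get notes blog --namespace blog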
The deployment manifest runs the site container on the cluster as one or more pods:
helm/chartname/templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chartname.fullname" . }}
  labels:
    {{- include "chartname.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "chartname.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "chartname.selectorLabels" . | nindent 8 }}
    spec:
      imagePullSecrets:
        - name: regauth-{{ include "chartname.fullname" . }}
      serviceAccountName: {{ include "chartname.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
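After an upgrade, the rollout can be checked using the labels defined in _helpers.tpl. A small sketch, assuming a release named blog in the blog namespace, which makes the fullname blog-chartname:

# Wait for the new pods to pass their readiness probes
kubectl rollout status deployment/blog-chartname --namespace blog

# List the pods belonging to this release
kubectl get pods --namespace blog \
  -l "app.kubernetes.io/instance=blog,app.kubernetes.io/name=chartname"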
The horizontal pod autoscaler is defined in hpa.yaml and is only rendered if autoscaling is enabled.
helm/chartname/templates/hpa.yaml:
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "chartname.fullname" . }}
  labels:
    {{- include "chartname.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "chartname.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
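Autoscaling is off by default, so this manifest is only rendered when it's switched on at deploy time. A sketch of enabling it (the limits are illustrative):

helm upgrade --install blog ./helm/chartname \
  --namespace blog \
  --set autoscaling.enabled=true \
  --set autoscaling.maxReplicas=3

# Confirm the autoscaler exists and is tracking the deployment
kubectl get hpa --namespace blog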
The ingress, rendered only when it's enabled, is defined in ingress.yaml.
helm/chartname/templates/ingress.yaml:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "chartname.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "chartname.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
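Since the default values file above doesn't include an ingress section, the hosts, paths, and TLS settings have to be supplied at deploy time. A hedged sketch, with the hostname and secret name purely illustrative, using an override file passed with -f:

cat > ingress-values.yaml <<'EOF'
ingress:
  enabled: true
  annotations: {}
  hosts:
    - host: blog.example.com
      paths:
        - path: /
  tls:
    - secretName: blog-example-com-tls
      hosts:
        - blog.example.com
EOF

helm upgrade --install blog ./helm/chartname --namespace blog -f ingress-values.yaml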
The registry credentials are stored in a Kubernetes secret, built from the imageCredentials values by the helper defined earlier.
helm/chartname/templates/secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: regauth-{{ include "chartname.fullname" . }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "chartname.imagePullSecret" . }}
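Once deployed, the generated dockerconfigjson can be decoded to confirm the credentials rendered correctly (the secret and namespace names are illustrative):

kubectl get secret regauth-blog-chartname --namespace blog \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d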
The service definition exposes the pods within the cluster:
helm/chartname/templates/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "chartname.fullname" . }}
  labels:
    {{- include "chartname.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "chartname.selectorLabels" . | nindent 4 }}
The service account definition, rendered only if serviceAccount.create is true:
helm/chartname/templates/serviceaccount.yaml:
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "chartname.serviceAccountName" . }}
  labels:
    {{- include "chartname.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
I also want Helm to verify that the deployment succeeded, so I define a test. In this case, it's a busybox container that attempts to connect to the new service.
helm/chartname/templates/tests/test-connection.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "chartname.fullname" . }}-test-connection"
  labels:
    {{- include "chartname.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "chartname.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never
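The test pod is only created when the tests are run explicitly. A short sketch, again assuming a release named blog:

# The wget pod must reach the service for the test (and the release) to pass
helm test blog --namespace blog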
This Helm chart can be used standalone to deploy the site, or it can be automated with a CI/CD pipeline, as I do with Drone for this Hugo site.
Next Steps
This post brings me to the conclusion of this series, in which I was "catching up" on documenting a lot of work done over a period of months. It started in April 2022 with migrating from my original Kubernetes cluster to a new k3s cluster. I'd decided that the new setup needed to be completely automated, such that it could be deployed entirely with Terraform and Ansible. I ran into some serious roadblocks, and the blog was actually down for a period. The old cluster was still running, but fast becoming unstable. At one point, I almost gave up on the effort.
Ultimately, I was able to resolve many of the roadblocks and accomplish the goal of the migration, plus the stretch goal of moving this blog to a static site. However, I realized that I hadn't documented the journey at all. Since the purpose of this blog, more than anything else, is to force me to document what I've done, I knew I had to catch up. I've written a (mostly) weekly post since September 2022, and this post concludes that effort.
I hope to continue the (mostly) weekly post routine and I’m definitely planning to roll out new applications on the cluster. However, I’m also heavily involved in starting an IT services business that is consuming an enormous amount of my time. When I started this project, I was working remotely in a salaried job and I was given the freedom to spend part of my day on personal projects. My personal project time is now limited to weekends.
This is all to say that my posts may stray into other areas and topics. When I have something to share about the self-hosting project, I will. If I don’t have anything, I will still try to post on another topic that interests me. Stay tuned.