Automated Testing for Kubernetes and Helm Charts using Terratest
Yoriyasu Yano
Published February 27, 2019

Update, March 2, 2020: We’ve updated this blog post for Helm v3!

Helm is a popular package management solution for Kubernetes. It is like apt, yum, or brew for Kubernetes, in that it allows you to deploy a complex application and all of its dependencies with a single command: helm install stable/mysql.

Developing Helm Charts, however, is a less pleasant experience. Here is an example Helm Chart:

apiVersion: v1
kind: Pod
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image }}"

Helm Charts are written using Go templates to render YAML, which can lead to a frustrating experience. The lack of editor support (mixing templates with YAML makes syntax highlighting hard), difficult syntax (have you forgotten to chomp whitespace? Or did you chomp in the wrong direction?), and confusing error messages (can't evaluate field image in type interface {}) all make developing charts painful. Not to mention how easy it is to shoot yourself in the foot and sneak in subtle bugs.
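
To make the whitespace-chomping complaint concrete, here is a standalone sketch using Go's text/template package, the same engine Helm uses to render charts (the template content is invented for illustration). A {{- trims the whitespace before an action, including the preceding newline, while a -}} trims after it; chomping the wrong side can glue two YAML lines together or leave stray blank lines that break your indentation.

package main

import (
	"os"
	"text/template"
)

func main() {
	// Without the `-` in `{{- if ... }}` and `{{- end }}`, the lines
	// containing the actions would leave blank lines behind in the rendered
	// YAML; chomping the wrong side can instead glue two YAML lines together.
	tmpl := template.Must(template.New("demo").Parse(
		"spec:\n" +
			"  image: {{ .Image }}\n" +
			"{{- if .Debug }}\n" +
			"  debug: true\n" +
			"{{- end }}\n"))
	tmpl.Execute(os.Stdout, map[string]interface{}{
		"Image": "nginx:1.15.8",
		"Debug": true,
	})
	// Output:
	// spec:
	//   image: nginx:1.15.8
	//   debug: true
}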

At Gruntwork, one of the things we learned writing over 300,000 lines of infrastructure code is that agility requires safety. To move fast, you need safety mechanisms to help you catch issues before they’ve had a chance to do lots of damage. As chart developers, how can we better protect ourselves as we try to build Helm Charts?

For Terraform, we faced a similar situation and our answer was Terratest, a Swiss Army knife for testing infrastructure code, including Packer, Docker, and Terraform. Over the past year we expanded Terratest with functionality to cover Kubernetes testing, including Helm Charts. In this post I’ll talk about how you can use the helm and k8s modules of Terratest to build a continuous integration pipeline for your charts to catch bugs before you release them to the public, or your internal teams.

Here is what this post will cover:

  • Example chart
  • Testing Overview
  • Template testing
  • Integration testing
  • Using Helm as a Template Engine
  • Try it out!

Example chart

To demo the concepts, we need a concrete Helm Chart to test. Here is a minimal chart that deploys a Pod listening on port 80 (e.g., you could use this Pod to run nginx). It exposes a single input value that specifies the container image.

The directory structure is:

minimal-pod
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   └── pod.yaml
└── values.yaml

Chart.yaml and templates/_helpers.tpl are the defaults generated by helm create. values.yaml includes a single entry for providing the container image spec:

# values.yaml
image: ""

The templates/pod.yaml file includes a template for a single pod that deploys the container and exposes port 80:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: {{ include "minimal-pod.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "minimal-pod.name" . }}
    helm.sh/chart: {{ include "minimal-pod.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image }}"
      ports:
        - name: http
          containerPort: 80
          protocol: TCP

Throughout the post, we will use Terratest to write tests that verify various properties of this chart. All the code from this post is available in the GitHub repo, terratest-helm-testing-example. If you would like to follow along and run the examples, you can refer to the root README of the repo for the exact instructions on how to run the tests.

Testing Overview

At a high level, helm chart testing falls into three categories:

  • Template testing (unit testing): these tests render the templates against various input values, but do not necessarily deploy the results. They let you verify that the template rendered the expected resources in the manner you intended. These tests are fast to execute and can catch syntactic errors in your template, but because you don’t actually deploy the infrastructure, you can’t catch issues with how the resources integrate (e.g., resource dependencies and deployment order).
  • Integration testing: these tests take the rendered templates and deploy them to a real Kubernetes cluster. You can then verify the deployed infrastructure works as intended by hitting the endpoints or querying Kubernetes for the resources. These tests closely resemble an actual deployment and give you a good approximation of how the chart will behave when you push it to production. However, they are expensive and slow to run, because you have to deploy real infrastructure and run validations against live endpoints.
  • Production smoke tests: these tests run against the deployed infrastructure as part of helm install or upgrade. They can be used for non-invasive validation of a deployment to catch issues that require a rollback. Since these run on the actual production infrastructure, you are limited in what you can test. Smoke tests are a native feature of helm known as “test hooks,” so we won’t be covering them in this blog post. You can read more about Helm test hooks in the official documentation.

In this post we will do a deep dive into template testing and integration testing on our example chart. So let’s start with template tests!

Template testing

Template tests can be used to catch syntactic issues with your helm chart templates. For example, in our example chart, we might want to verify that the container image is correctly rendered in the right spot in the Pod template. If you were verifying this by hand, you would:

  1. Provide an example container as input
  2. Render the template
  3. Verify the image attribute of the pod is derived from the input

You can write this exact test in Terratest:

package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	corev1 "k8s.io/api/core/v1"
)

func TestPodTemplateRendersContainerImage(t *testing.T) {
	// Path to the helm chart we will test
	helmChartPath := "../charts/minimal-pod"

	// Setup the args. For this test, we will set the following input values:
	// - image=nginx:1.15.8
	options := &helm.Options{
		SetValues: map[string]string{"image": "nginx:1.15.8"},
	}

	// Run RenderTemplate to render the template and capture the output.
	output := helm.RenderTemplate(
		t, options, helmChartPath, "nginx",
		[]string{"templates/pod.yaml"})

	// Now we use the kubernetes/client-go library to parse the template
	// output into the Pod struct. This will ensure the Pod resource is
	// rendered correctly.
	var pod corev1.Pod
	helm.UnmarshalK8SYaml(t, output, &pod)

	// Finally, we verify the pod spec is set to the expected container
	// image value.
	expectedContainerImage := "nginx:1.15.8"
	podContainers := pod.Spec.Containers
	if podContainers[0].Image != expectedContainerImage {
		t.Fatalf(
			"Rendered container image (%s) is not expected (%s)",
			podContainers[0].Image,
			expectedContainerImage,
		)
	}
}

The above code runs helm template --set image=nginx:1.15.8 --show-only templates/pod.yaml to render the template, and then parses the generated YAML using kubernetes/client-go to get a statically typed struct representing the Pod resource. This has the advantage of catching subtle bugs in the template by ensuring that it conforms to the expected schema of the resource. As an added bonus, checking the values is easier because you can rely on Go’s type system to access the attributes of the rendered YAML config.
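
Since the output is unmarshaled into a typed corev1.Pod struct, you can assert on any field the schema exposes, not just the container image. For example, here is a sketch of a second test, following the same pattern as above, that verifies the release name flows through to the app.kubernetes.io/instance label set in templates/pod.yaml:

package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	corev1 "k8s.io/api/core/v1"
)

// Verify that the release name ("nginx", passed to RenderTemplate) ends up
// in the app.kubernetes.io/instance label, which templates/pod.yaml sets
// from .Release.Name.
func TestPodTemplateRendersInstanceLabel(t *testing.T) {
	options := &helm.Options{
		SetValues: map[string]string{"image": "nginx:1.15.8"},
	}
	output := helm.RenderTemplate(
		t, options, "../charts/minimal-pod", "nginx",
		[]string{"templates/pod.yaml"})

	var pod corev1.Pod
	helm.UnmarshalK8SYaml(t, output, &pod)

	if pod.ObjectMeta.Labels["app.kubernetes.io/instance"] != "nginx" {
		t.Fatalf("unexpected instance label: %v", pod.ObjectMeta.Labels)
	}
}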

If you put this in a file minimal_pod_template_test.go and run it, you will see output similar to the following (truncated for readability):

=== RUN   TestPodTemplateRendersContainerImage
Running command helm with args [template --set image=nginx:1.15.8 --show-only templates/pod.yaml nginx ../charts/minimal-pod]
---
# Source: minimal-pod/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-minimal-pod
  labels:
    app.kubernetes.io/name: minimal-pod
    helm.sh/chart: minimal-pod-0.1.0
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/managed-by: Helm
spec:
  containers:
    - name: minimal-pod
      image: "nginx:1.15.8"
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
--- PASS: TestPodTemplateRendersContainerImage (0.05s)
PASS
ok      github.com/gruntwork-io/helm-chart-testing-example/test 0.086s

Note how it shows you the rendered template output. You can use this output to help debug any test failures.

The advantage of using Terratest for your helm chart testing is that you now have an automated test that takes less than 1/10th of a second, and that can be run on every change to your chart using CI. For fairly large charts, manually testing all the different scenarios is close to impossible, so you would end up only focusing on the updated areas. But what if you need to upgrade helm to a new version? With Terratest, you can run your tests against the new version locally and in parallel, covering a wide surface area in a relatively short amount of time.
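
Since Terratest tests are plain Go tests, parallelism comes for free from the go test runner. Here is a minimal sketch (the test names and bodies are placeholders): each test opts in by calling t.Parallel(), and go test's -parallel flag caps how many run at once.

package test

import "testing"

// Tests that call t.Parallel() run concurrently with each other; cap the
// fan-out with e.g. `go test -parallel 8` (defaults to GOMAXPROCS).
func TestRendersWithNginxImage(t *testing.T) {
	t.Parallel()
	// ... render the chart with image=nginx:1.15.8 and assert, as above ...
}

func TestRendersWithCustomImage(t *testing.T) {
	t.Parallel()
	// ... render the chart with a different image input and assert ...
}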

On the other hand, what if you wanted to test that the container image actually exists, that the selected port is actually the correct one for the container, or that your startup scripts actually start your container without errors? Template tests won’t catch these issues, because verifying them requires actually deploying the container on real infrastructure. For that, you can use integration tests.

Integration testing

Unlike template tests, integration tests deploy the rendered template onto a real Kubernetes cluster. You can test against production-grade clusters such as EKS or GKE, or run locally against minikube. Because of this, you can check that the charts not only render correctly, but actually do what you want: e.g., that the app has all the necessary resources, can be reached, can store data, etc. If template tests are the syntactic tests of your charts, you can consider integration tests the semantic tests.

If we were to test that our example chart can deploy an Nginx container and that it exposes the right ports, we might do the following:

  1. Provide inputs to deploy the nginx container
  2. Deploy the chart using helm install
  3. Verify we can access nginx using port forward
  4. Undeploy using helm delete

You can use Terratest to automate these steps as well:

package test

import (
	"fmt"
	"strings"
	"testing"
	"time"

	"github.com/gruntwork-io/terratest/modules/helm"
	http_helper "github.com/gruntwork-io/terratest/modules/http-helper"
	"github.com/gruntwork-io/terratest/modules/k8s"
	"github.com/gruntwork-io/terratest/modules/random"
)

func TestPodDeploysContainerImage(t *testing.T) {
	// Path to the helm chart we will test
	helmChartPath := "../charts/minimal-pod"

	// Setup the kubectl config and context. Here we choose to use the
	// defaults, which is:
	// - HOME/.kube/config for the kubectl config file
	// - Current context of the kubectl config file
	// Change this to target a different Kubernetes cluster.
	// We also specify to use the default namespace.
	kubectlOptions := k8s.NewKubectlOptions("", "", "default")

	// Setup the args. For this test, we will set the following input values:
	// - image=nginx:1.15.8
	options := &helm.Options{
		SetValues: map[string]string{"image": "nginx:1.15.8"},
	}

	// We generate a unique release name that we can refer to. By doing so,
	// we can schedule the delete call here so that at the end of the test,
	// we run `helm delete RELEASE_NAME` to clean up any resources that were
	// created.
	releaseName := fmt.Sprintf(
		"nginx-%s", strings.ToLower(random.UniqueId()))
	defer helm.Delete(t, options, releaseName, true)

	// Deploy the chart using `helm install`.
	helm.Install(t, options, helmChartPath, releaseName)

	// Wait for the pod to come up. It takes some time for the Pod to start,
	// so retry a few times.
	podName := fmt.Sprintf("%s-minimal-pod", releaseName)
	retries := 15
	sleep := 5 * time.Second
	k8s.WaitUntilPodAvailable(
		t, kubectlOptions, podName, retries, sleep)

	// Now let's verify the pod. We will first open a tunnel to the pod,
	// making sure to close it at the end of the test.
	tunnel := k8s.NewTunnel(
		kubectlOptions, k8s.ResourceTypePod, podName, 0, 80)
	defer tunnel.Close()
	tunnel.ForwardPort(t)

	// ... and now that we have the tunnel, we will verify that we get back
	// a 200 OK with the nginx welcome page.
	endpoint := fmt.Sprintf("http://%s", tunnel.Endpoint())
	http_helper.HttpGetWithRetryWithCustomValidation(
		t,
		endpoint,
		retries,
		sleep,
		func(statusCode int, body string) bool {
			isOk := statusCode == 200
			isNginx := strings.Contains(body, "Welcome to nginx")
			return isOk && isNginx
		},
	)
}

The code above does all the steps of the manual test, including running helm install to deploy the chart, kubectl port-forward to open a tunnel to the Pod, making HTTP requests to the Pod via the open tunnel (retrying up to 15 times with 5 seconds between retries), closing the port forward tunnel (using defer to run it at the end of the test, whether the test succeeds or fails), and running helm delete to delete the release and thereby undeploy the resources.

You can put this in a file minimal_pod_integration_test.go and run it against minikube. This will output something similar to the following (truncated for readability):

=== RUN   TestPodDeploysContainerImage
Running command helm with args [install --set image=nginx:1.15.8 -n nginx-eftw1b /Users/yoriy/go/src/github.com/gruntwork-io/helm-chart-testing-example/charts/minimal-pod]
NAME:   nginx-eftw1b
LAST DEPLOYED: Sat Feb 23 14:45:13 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod
NAME                      AGE
nginx-eftw1b-minimal-pod  0s
Wait for pod nginx-eftw1b-minimal-pod to be provisioned.
Configuring kubectl using config file /Users/yoriy/.kube/config with context
Wait for pod nginx-eftw1b-minimal-pod to be provisioned. returned an error: Pod nginx-eftw1b-minimal-pod is not available. Sleeping for 5s and will try again.
... SNIPPED FOR BREVITY ...
Pod is now available
Creating a port forwarding tunnel for resource pod/nginx-eftw1b-minimal-pod routing local port 0 to remote port 80
Selected pod nginx-eftw1b-minimal-pod to open port forward to
Using URL https://192.168.99.141:8443/api/v1/namespaces/default/pods/nginx-eftw1b-minimal-pod/portforward to create portforward
Requested local port is 0. Selecting an open port on host system
Selected port 49400
Successfully created port forwarding tunnel
HTTP GET to URL http://localhost:49400
Making an HTTP GET call to URL http://localhost:49400
Running command helm with args [delete nginx-eftw1b]
release "nginx-eftw1b" deleted
--- PASS: TestPodDeploysContainerImage (25.76s)
PASS
ok      github.com/gruntwork-io/helm-chart-testing-example/test 25.792s

Of course, sometimes you will want to test on actual cloud infrastructure (e.g., if you had a load balancer resource as part of the config). If you have Terraform code to deploy a Kubernetes cluster, you can combine this with the Terraform testing capabilities in Terratest to deploy your Kubernetes cluster before deploying the helm charts for testing.
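
Here is a hedged sketch of what that combination might look like, assuming a hypothetical eks-cluster Terraform module that exposes a kubeconfig_path output (neither is part of the example repo):

package test

import (
	"fmt"
	"strings"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/gruntwork-io/terratest/modules/k8s"
	"github.com/gruntwork-io/terratest/modules/random"
	"github.com/gruntwork-io/terratest/modules/terraform"
)

func TestChartOnFreshCluster(t *testing.T) {
	// Stand up a cluster with Terraform first. The ../eks-cluster module and
	// its kubeconfig_path output are hypothetical placeholders.
	terraformOptions := &terraform.Options{
		TerraformDir: "../eks-cluster",
	}
	// Tear the cluster down at the end of the test, pass or fail.
	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	// Point kubectl (and helm) at the new cluster via its kubeconfig.
	kubeconfigPath := terraform.Output(t, terraformOptions, "kubeconfig_path")
	kubectlOptions := k8s.NewKubectlOptions("", kubeconfigPath, "default")

	// From here, the test proceeds exactly like the integration test above.
	options := &helm.Options{
		KubectlOptions: kubectlOptions,
		SetValues:      map[string]string{"image": "nginx:1.15.8"},
	}
	releaseName := fmt.Sprintf(
		"nginx-%s", strings.ToLower(random.UniqueId()))
	defer helm.Delete(t, options, releaseName, true)
	helm.Install(t, options, "../charts/minimal-pod", releaseName)
	// ... validate the pod as in TestPodDeploysContainerImage ...
}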

Using Helm as a Template Engine

One way people use helm is as a pure templating engine. That is, rather than relying on helm to do the release tracking, you use helm as a templating engine to generate Kubernetes manifest files that you apply directly with kubectl apply. This style of using helm is more conducive to a GitOps flow.

Terratest supports this workflow by providing functions to run kubectl apply on an arbitrary yaml file. The example above can be updated to instead use helm template to render the template and kubectl apply to deploy it. This looks something like:

package test

import (
	"fmt"
	"strings"
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/gruntwork-io/terratest/modules/k8s"
	"github.com/gruntwork-io/terratest/modules/random"
)

func TestPodDeploysContainerImageHelmTemplateEngine(t *testing.T) {
	// Path to the helm chart we will test
	helmChartPath := "../charts/minimal-pod"

	// Setup the kubectl config and context. Here we choose to use the
	// defaults, which is:
	// - HOME/.kube/config for the kubectl config file
	// - Current context of the kubectl config file
	// Change this to target a different Kubernetes cluster.
	// We also specify to use the default namespace.
	kubectlOptions := k8s.NewKubectlOptions("", "", "default")

	// Setup the args. For this test, we will set the following input values:
	// - image=nginx:1.15.8
	// - fullnameOverride=minimal-pod-RANDOM_STRING
	// We use a fullnameOverride so we can find the Pod later.
	podName := fmt.Sprintf(
		"minimal-pod-%s",
		strings.ToLower(random.UniqueId()),
	)
	options := &helm.Options{
		SetValues: map[string]string{
			"image":            "nginx:1.15.8",
			"fullnameOverride": podName,
		},
	}

	// Run RenderTemplate to render the template and get the output.
	output := helm.RenderTemplate(
		t, options, helmChartPath, "nginx", []string{})

	// Make sure to delete the resources at the end of the test.
	defer k8s.KubectlDeleteFromString(t, kubectlOptions, output)

	// Now use kubectl to apply the rendered template.
	k8s.KubectlApplyFromString(t, kubectlOptions, output)

	// Now that the chart is deployed, verify the deployment. This will
	// perform the same validation as the previous `helm install` example.
	verifyNginxPod(t, kubectlOptions, podName)
}

Like the previous example, this requires a working Kubernetes cluster to run against. However, unlike the previous example, this test only uses helm as a templating engine, relying on kubectl apply and kubectl delete to actually manage the resources on the Kubernetes cluster.
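
The verifyNginxPod helper called above is defined in the example repo. As a sketch of what it might look like (the repo’s version may differ), it simply factors out the wait, tunnel, and HTTP validation steps from the earlier helm install test:

package test

import (
	"fmt"
	"strings"
	"testing"
	"time"

	http_helper "github.com/gruntwork-io/terratest/modules/http-helper"
	"github.com/gruntwork-io/terratest/modules/k8s"
)

func verifyNginxPod(
	t *testing.T, kubectlOptions *k8s.KubectlOptions, podName string,
) {
	// Wait for the pod to come up, retrying while it starts.
	k8s.WaitUntilPodAvailable(t, kubectlOptions, podName, 15, 5*time.Second)

	// Open a tunnel to the pod, closing it when the helper returns.
	tunnel := k8s.NewTunnel(
		kubectlOptions, k8s.ResourceTypePod, podName, 0, 80)
	defer tunnel.Close()
	tunnel.ForwardPort(t)

	// Verify we get back a 200 OK with the nginx welcome page.
	http_helper.HttpGetWithRetryWithCustomValidation(
		t,
		fmt.Sprintf("http://%s", tunnel.Endpoint()),
		15,
		5*time.Second,
		func(statusCode int, body string) bool {
			return statusCode == 200 &&
				strings.Contains(body, "Welcome to nginx")
		},
	)
}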

Try it out!

The above is a small taste of the various validation functions Terratest provides for helm chart testing. To learn more:

  1. Check out the example repository for executable versions of the code samples from this post.
  2. Check out the examples folder and the corresponding automated tests for those examples in the test folder for fully working (and tested!) sample code.
  3. Browse through the list of Terratest packages to get a sense of all the tools available in Terratest.
  4. Read our Testing Best Practices Guide.
  5. For an example of real world usage of the patterns in this post, see gruntwork-io/helm-kubernetes-services.

Happy testing!
