3 Different Strategies for Managing Kubernetes Manifests

Kubernetes is a great abstraction for managing deployments of containerized applications. The way Kubernetes achieves this abstraction is through YAML configuration files called manifests. These manifests let us represent all cluster resources, such as pods, volumes and services. Additionally, third parties that implement the operator pattern can define their own custom resources for things like CI, security policies, and even infrastructure provisioning. Kubernetes solves a lot of problems for us elegantly, but it also creates a new one: the management of many YAML manifest files.

The problems with managing YAML

The main problem with managing Kubernetes manifests is that it is tedious and error-prone. Most applications deployed to a cluster pass through multiple environments on their way to production, and each environment typically has at least some differences in its configuration. Those differences push us to copy the same set of YAML files for every environment and replace only the values that change. Keeping all of those copies in sync quickly becomes a chore and can lead to configuration drift or deployment failures.

 

Strategies for Managing K8s Manifests

Luckily, the Kubernetes community has provided us with some strategies for managing YAML configurations that reduce the toil and the risk of errors.


Kustomize and the Manifest Hierarchy

Since Kubernetes 1.14, kustomize has been part of the core Kubernetes CLI (kubectl). Kustomize lets you build a hierarchy of manifests: a “base” configuration holds the shared defaults, and “overlay” configurations override only the differences.

To illustrate how this works, let’s set up a simple Redis deployment and use an environment variable called REDIS_PORT to change the default port in each environment.

The directory structure for our example looks like this:
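
One way to organize it is a base directory plus one overlay per environment (the directory and file names here are illustrative):

redis/
├── base/
│   ├── kustomization.yaml
│   ├── configmap.yaml
│   ├── deployment.yaml
│   └── service.yaml
└── overlays/
    ├── qa/
    │   ├── kustomization.yaml
    │   ├── configmap.yaml
    │   └── service-patch.yaml
    └── production/
        ├── kustomization.yaml
        ├── configmap.yaml
        └── service-patch.yaml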

The manifests in the base directory will contain sensible defaults and look like this:
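
Something along these lines would work as a starting point; the resource names, the Redis image tag and the way the port is wired into the container command are our own illustrative choices:

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - configmap.yaml
  - deployment.yaml
  - service.yaml

# base/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  REDIS_PORT: "6379"

# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:6.0
          # Kubernetes expands $(REDIS_PORT) in args from the container's env
          command: ["redis-server"]
          args: ["--port", "$(REDIS_PORT)"]
          env:
            - name: REDIS_PORT
              valueFrom:
                configMapKeyRef:
                  name: redis-config
                  key: REDIS_PORT

# base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379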

Since, in our example, Redis will be deployed to QA and production with non-standard REDIS_PORT values, we will create “overlay” directories that specify only the changes for each environment.

QA overlays:
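
A sketch of the QA overlay, assuming QA runs Redis on port 6380 (a value we picked purely for illustration); the ConfigMap is overridden with a strategic merge patch, while the Service port entry is replaced with a JSON 6902 patch so the old port is not merged back in:

# overlays/qa/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # older kustomize versions use "bases:" for this
patchesStrategicMerge:
  - configmap.yaml
patchesJson6902:
  - target:
      version: v1
      kind: Service
      name: redis
    path: service-patch.yaml

# overlays/qa/configmap.yaml -- only the QA-specific value
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  REDIS_PORT: "6380"

# overlays/qa/service-patch.yaml
- op: replace
  path: /spec/ports/0/port
  value: 6380
- op: replace
  path: /spec/ports/0/targetPort
  value: 6380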

For each additional environment, there is a subdirectory in the overlays directory that includes a kustomization.yaml file and the values that you would like to override.

In this example we are only changing the values inside the base ConfigMap and Service for each environment, but a real application often has many more changes to manage, which can still add up to a lot of YAML files! This strategy at least lets us manage only the deltas for each environment.
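
To render or apply a given environment, point kubectl at the overlay directory rather than the base:

# Preview the merged manifests for QA without applying them
kubectl kustomize overlays/qa

# Apply the QA overlay to the cluster
kubectl apply -k overlays/qa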

Helm and Chart Repositories

Helm is a package manager for Kubernetes that lets you template a set of Kubernetes manifest files using Go templates (plus the Sprig function library) and manage the values to be injected in a values.yaml file. This allows you to create a “chart” for your application, which you can version, update and install on any Kubernetes cluster. Helm, unlike Kustomize, is not part of the official Kubernetes project, so you must install the helm CLI separately using the package manager of your choice.

If we imagine the same application as above, which uses a ConfigMap to source a REDIS_PORT environment variable, the helm manifests would look like this:
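
A chart laid out roughly like this would cover it; the chart name redis-chart matches the install example later on, and the template contents are only a sketch:

redis-chart/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── _helpers.tpl
    ├── configmap.yaml
    ├── deployment.yaml
    └── service.yaml

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  REDIS_PORT: {{ .Values.redisPort | quote }}

# templates/service.yaml (abridged)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    app: {{ .Release.Name }}
  ports:
    - port: {{ .Values.redisPort }}
      targetPort: {{ .Values.redisPort }}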

The values that populate the templates are sourced from the chart’s values.yaml file, the _helpers.tpl file of named template helpers, and the Chart.yaml metadata file.
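
A minimal Chart.yaml for this chart might look like the following (the version numbers are illustrative):

# Chart.yaml
apiVersion: v2          # v2 marks this as a Helm 3 chart
name: redis-chart
description: Deploys Redis with a configurable port
type: application
version: 0.1.0          # version of the chart itself
appVersion: "6.0"       # version of the application being packaged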

Our example values.yaml file might look like this:
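
The only key our templates actually need is redisPort; the remaining keys below are illustrative defaults:

# values.yaml
redisPort: 6379
replicaCount: 1
image:
  repository: redis
  tag: "6.0"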

As you can probably imagine, the redisPort value will be injected into our manifests wherever you see {{ .Values.redisPort }}. Using this strategy, we can template all of our Kubernetes manifests.
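
You can preview the fully rendered manifests locally before touching a cluster; the release names here are arbitrary:

# Render the chart locally with the default values
helm template redis-dev ./redis-chart

# Render it again for QA, overriding values.yaml from the command line
helm template redis-qa ./redis-chart --set redisPort=6380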

Once you have created a helm chart for your application, there are additional features to help you install and manage these assets.

The first is the concept of helm “releases”, which lets you install named instances of your application to a cluster. When you install a release of your application, its metadata, including the chart version, application version and release revision, is stored in your cluster as a Kubernetes Secret. Because this metadata is stored, helm is able to perform atomic deployments and rollbacks.

Here’s a basic example of installing our redis-chart helm chart using the helm CLI:
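
Something along these lines, where redis-dev is an arbitrary release name and ./redis-chart is the chart directory from above:

# Install a named release of the chart
helm install redis-dev ./redis-chart

# Upgrade the release atomically; helm rolls back automatically if the upgrade fails
helm upgrade redis-dev ./redis-chart --atomic --set redisPort=6380

# Inspect the release history and roll back to an earlier revision by hand
helm history redis-dev
helm rollback redis-dev 1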

The second feature is the concept of chart repositories, which you can version, upload and install your helm charts from. The helm project provides ChartMuseum, a free and open source chart repository server that you can install and manage yourself, and there are also public repositories that host community-provided charts. By using a chart repository you can store all of your application charts in one place, alongside charts for commonly used software.
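
Day-to-day use of a chart repository looks roughly like this; the bitnami repository is just a well-known public example:

# Add a public chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a community-maintained chart from that repository
helm install my-redis bitnami/redis

# Package your own chart so it can be uploaded to a repository you manage
helm package ./redis-chart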

The drawbacks of using helm as your YAML manifest management strategy are helm’s learning curve, complicated chart templates, the management of a chart repository, and the versioning of charts. The values.yaml file can also become difficult to manage over time, as it is not uncommon for it to contain excerpts of other manifest files.

Some real-world examples:

https://github.com/elastic/helm-charts

https://github.com/prometheus-community/helm-charts

ShuttleOps Continuous Delivery

ShuttleOps is a Continuous Delivery tool for Kubernetes deployments that removes the burden of managing YAML files for your application. ShuttleOps does this by providing a visual pipeline editor that builds and updates your YAML manifests based on form inputs. It also lets you clone pipelines into new environments with a single click and manage the delta values for each environment.

To recreate our deployment above, we can create a new pipeline called redis-dev:

After creating the pipeline, we select the library/redis container image from Docker Hub.

Once we’ve added our container, we can add a ConfigMap that sets the REDIS_PORT environment variable like in our Kustomize example.

Finally, we can edit the container settings to add our ConfigMap key reference, custom command and custom argument values.

And that’s it, we’re all set to deploy our Redis container to the development environment. Since in our example we have a requirement to change the REDIS_PORT per environment, let’s create two more pipelines to model our QA and Prod deployments.

From the deploy page overflow menu, select “Create New Environment”.

We can easily create multiple new environments pre-configured with the values from the source pipeline and manage the environment specific REDIS_PORT within that pipeline.

As you can see, there are no complicated YAML management strategies and no additional chart repositories to deploy on your infrastructure. This allows you to focus on the development of your application and not the development of Kubernetes YAML manifests.

Focusing on what matters

Managing Kubernetes manifests is not a trivial task, and the community has responded by creating some great tools to help us manage them better. We at ShuttleOps feel that although these tools are great, they require a significant time investment that detracts from your core business: your application!
