Watch Joe Beda go through this blog post and share his thoughts about these tools on TGI Kubernetes 079: YTT and Kapp.
The k14s (stands for "Kubernetes Tools") GitHub organization (https://github.com/k14s) contains several tools we created as a result of working with complex, multi-purpose tools like Helm. We believe that working with simple, single-purpose tools that easily interoperate with one another results in a better workflow than the all-in-one approach chosen by Helm. We have found this approach easier to understand and debug.
In this blog post we will focus on the local application development workflow; however, the tools introduced here also work well for other workflows, for example, production GitOps deployments or manual application deploys. We plan to publish additional blog posts covering other workflows. Let us know what you are most interested in!
We break down local application development workflow into the following stages:
- Source code authoring
- Configuration authoring (e.g. YAML configuration files)
- Packaging (e.g. Dockerfile)
- Deployment (e.g. kubectl apply ...)
Helm arguably tries to address stages 2, 3, and 4, combining configuration, packaging, and deployment in one tool. The community has varied opinions on the advantages and disadvantages of using Helm. Here, let's explore an alternative approach with tools from k14s.
For each stage, we have open sourced a tool that we believe addresses that stage's challenges (sections below explore each tool in detail):
- configuration -> ytt for YAML configuration and templating
- packaging -> kbld for building Docker images and recording image references
- deployment -> kapp for deploying k8s resources
We'll use k8s-simple-app-example application as our example to showcase how these tools can work together to develop and deploy an application.
Before getting too deep, let's get some basic preparations out of the way:
- Find a Kubernetes cluster (preferably Minikube as it better fits local development; Docker for Mac/Linux is another good option as it now includes Kubernetes)
- Check that the cluster works by running kubectl get nodes
- Install k14s tools by following instructions on https://k14s.io/
Deploying the application
To get started with our example application, clone k8s-simple-app-example locally:
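The repository lives in the k14s GitHub org referenced above, so cloning it looks like this (a sketch; the repository path follows the org and name used throughout this post):

```bash
git clone https://github.com/k14s/k8s-simple-app-example
cd k8s-simple-app-example
```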
The repository contains a simple Go application consisting of app.go (an HTTP web server) and a Dockerfile for packaging. Multiple config-step-* directories contain variations of the application configuration that we will use in each step.
Typically, an application deployed to Kubernetes will include Deployment and Service resources in its configuration. In our example, the config-step-1-minimal/ directory contains config.yml, which contains exactly that. (Note that the Docker image is already preset and the HELLO_MSG environment variable is hard-coded. We'll get to those shortly.)
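While the exact contents of config.yml are in the repo, a minimal Deployment-plus-Service pair of this shape is what to expect (a sketch, not the verbatim file; the Service ports and the HELLO_MSG value are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-app
spec:
  selector:
    matchLabels: {app: simple-app}
  template:
    metadata:
      labels: {app: simple-app}
    spec:
      containers:
      - name: simple-app
        image: docker.io/dkalinin/k8s-simple-app   # preset image
        env:
        - name: HELLO_MSG
          value: stranger                          # hard-coded for now
---
apiVersion: v1
kind: Service
metadata:
  name: simple-app
spec:
  selector: {app: simple-app}
  ports:
  - port: 80
    targetPort: 8080
```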
Traditionally, you would use kubectl apply -f config-step-1-minimal/config.yml to deploy this application. However, kubectl (1) does not indicate which resources are affected and how they are affected before applying changes, and (2) does not yet have robust prune functionality to converge a set of resources (GH issue). kapp addresses and improves on several of kubectl's limitations, as it was designed from the start around the notion of a "Kubernetes application" - a set of resources with the same label:
- kapp separates the change calculation phase (diff) from the change apply phase (apply) to give users visibility and confidence regarding what's about to change in the cluster
- kapp tracks and converges resources based on a unique generated label, freeing its users from worrying about cleaning up old deleted resources as the application is updated
- kapp orders certain resources so that the Kubernetes API server can successfully process them (e.g., CRDs and namespaces before other resources)
- kapp tries to wait for resources to become ready before considering the deploy a success
Let us deploy our application with kapp:
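The deploy command takes an application name and the configuration files (a sketch; kapp will show a diff of the pending changes and ask for confirmation before applying them):

```bash
kapp deploy -a simple-app -f config-step-1-minimal/
```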
simple-app received a unique label, kapp.k14s.io/app=1557433075084066000, for resource tracking. Using this label, kapp tracks and allows inspection of all the Kubernetes resources created for simple-app. Note that it even knows about resources it did not directly create (such as ReplicaSet and Endpoints).
kapp's inspect and logs commands demonstrate why it's convenient to view resources in "bulk" (via a label). For example, the logs command will tail any existing or new Pod that is part of the simple-app application, even after we make changes and redeploy.
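For example (both subcommands take the same -a application name; -f follows the logs):

```bash
kapp inspect -a simple-app
kapp logs -f -a simple-app
```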
Additional kapp resources:
Accessing the deployed application
Once deployed successfully, you can access the application at 127.0.0.1:8080 in your browser with the help of the kubectl port-forward command:
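Assuming the Service is named simple-app and serves on port 80 (check with kubectl get svc), a port-forward might look like:

```bash
kubectl port-forward svc/simple-app 8080:80
```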
One downside to the kubectl command above: it has to be restarted if the application Pod is recreated.
Alternatively, you can use k14s' kwt tool, which exposes cluster IP subnets and cluster DNS to your machine, so you can access the application without requiring any restarts. With kwt installed, run the following command:
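kwt's net subcommand needs root privileges to change routing, hence sudo (-E preserves your environment so your kubectl configuration is found):

```bash
sudo -E kwt net start
```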
Additional kwt resources:
Deploying configuration changes
Let's make a change to the application configuration to simulate a common occurrence in a development workflow. A simple observable change we can make is to change the value of the HELLO_MSG environment variable in the application configuration.
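Redeploying is the same kapp deploy invocation as before; adding the --diff-changes (-c) flag makes kapp print a line-by-line diff of what changed:

```bash
kapp deploy -a simple-app -f config-step-1-minimal/ --diff-changes
```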
The output of the redeploy highlights several kapp features:
- kapp detected a single change to the simple-app Deployment by comparing the given local configuration against the live cluster copy
- kapp showed the changes in a git-style diff
- since the simple-app Service was not changed in any way, it was not "touched" during the apply phase at all
- kapp waited for Pods associated with a Deployment to converge to their ready state before exiting successfully
To double check that our change applied, go ahead and refresh your browser window with our deployed application.
Given that kapp does not care where application configuration comes from, you can use it with any other tool that produces Kubernetes configuration, for example, Helm's template command.
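For example, piping rendered Helm chart output into kapp could look like this (a sketch; the chart path and application name are placeholders):

```bash
helm template my-chart/ | kapp deploy -a my-app -f- -y
```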
Templating application configuration
Managing application configuration is a hard problem. As an application matures, its configuration typically needs to be tweaked for different environments and different constraints. This leads to the desire to expose several (hopefully not too many) configuration knobs that can be tweaked at deploy time.
This problem is typically solved in two ways: templating or patching. ytt supports both approaches. In this section we'll see how ytt lets us template YAML configuration, and in the next section, we'll see how it can patch YAML configuration via overlays.
Unlike many other tools used for templating, ytt takes a different approach to working with YAML files. Instead of interpreting YAML configuration as plain text, it works with YAML structures such as maps, lists, YAML documents, and scalars. By doing so, ytt is able to eliminate a lot of the problems that plague other tools (character escaping, ambiguity, etc.). Additionally, ytt provides a Python-like language (Starlark) that executes in a hermetic environment, making it friendly yet more deterministic than using general-purpose languages directly or unfamiliar custom templating languages. Take a look at "ytt: The YAML Templating Tool that simplifies complex configuration management" for a more detailed introduction.
To tie it all together, let's take a look at config-step-2-template/config.yml. You'll immediately notice that YAML comments (#@ ...) store templating metadata within a YAML file, for example:
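A representative fragment (a sketch in ytt syntax, not the verbatim file; it references the hello_msg data value discussed below):

```yaml
#@ load("@ytt:data", "data")
env:
- name: HELLO_MSG
  value: #@ data.values.hello_msg
```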
The above snippet tells ytt that the value of the HELLO_MSG environment variable should be set to the value of data.values.hello_msg. The data.values structure comes from ytt's builtin data library, which allows us to expose configuration knobs through a separate file, namely config-step-2-template/values.yml. Deployers of simple-app can now decide, for example, what hello message to set without making application code or configuration changes.
Let's chain ytt and kapp to deploy an update, and note the -v flag, which sets the hello_msg data value.
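The chained invocation might look like this (a sketch; the hello_msg value is arbitrary):

```bash
ytt -f config-step-2-template/ -v hello_msg=friend | kapp deploy -a simple-app -f- --diff-changes -y
```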
We covered one simple way to use ytt to help you manage application configuration. Take a look at the examples in the ytt interactive playground to learn about other ytt features that may help you manage YAML configuration more effectively.
Additional ytt resources:
- ytt: The YAML Templating Tool that simplifies complex configuration management
- ytt interactive playground
- ytt docs
Patching application configuration
ytt also offers another way to customize application configuration. Instead of relying on configuration providers (e.g. authors of k8s-simple-app) to expose a set of configuration knobs, configuration consumers (e.g. users that deploy k8s-simple-app) can use the ytt overlay feature to patch YAML documents with arbitrary changes.
For example, our simple app configuration templates do not make the Deployment's spec.replicas configurable as a data value to control how many Pods are running. Instead of asking the authors of simple app to expose a new data value, we can create an overlay file, config-step-2a-overlays/custom-scale.yml, that changes spec.replicas to a new value.
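Such an overlay is plain ytt YAML; a sketch of what custom-scale.yml could contain (the replica count is arbitrary, not the repo's value):

```yaml
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind": "Deployment"})
---
spec:
  replicas: 2
```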
Building container images locally
Kubernetes embraced the use of container images to package source code and its dependencies. One way to deliver an updated application is to rebuild a container image when the source code changes. kbld is a small tool that provides a simple way to insert container image building into the deployment workflow. kbld looks for images within application configuration (currently it looks for image keys), checks whether there is associated source code, builds those images via Docker if so (other builders could be plugged in), and finally captures the built image digests and updates the configuration with the new references.
Before running kbld, let's change app.go by uncommenting fmt.Fprintf(w, "<p>local change</p>") to make a small change in our application.
config-step-3-build-local/build.yml is a new file in this config directory; it specifies that docker.io/dkalinin/k8s-simple-app should be built from the current working directory where kbld runs (the root of the repo).
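kbld's build configuration is itself a YAML document; a sketch of what build.yml could contain, following kbld's Sources configuration format:

```yaml
apiVersion: kbld.k14s.io/v1alpha1
kind: Sources
sources:
- image: docker.io/dkalinin/k8s-simple-app
  path: .  # build from the root of the repo
```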
If you are using Minikube, make sure kbld has access to the Docker CLI by running eval $(minikube docker-env). If you are using Docker for Mac (or a related product that comes with Docker and Kubernetes), make sure that docker ps succeeds. If you do not have a local environment (i.e., you are running a remote cluster and only have a local Docker daemon), read on, but you may have to wait until the next section, where we show how to use a remote registry.
Let's insert kbld between ytt and kapp so that images used in our configuration are built before they are deployed by kapp:
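The full chain might look like this (a sketch):

```bash
ytt -f config-step-3-build-local/ | kbld -f- | kapp deploy -a simple-app -f- --diff-changes -y
```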
As you can see, the above output shows that kbld received ytt's produced configuration, used the docker build command to build the simple app image, and ultimately captured a specific image reference and passed it on to kapp.
Once the deploy is successful, check out the application in your browser; it should show the updated response.
It's also worth noting that kbld not only builds images and updates references, but also annotates Kubernetes resources with the image metadata it collects, making it quickly accessible for debugging. This may not be that useful during development, but it comes in handy when investigating the state of an environment (staging, production, etc.):
```
$ kapp inspect -a simple-app --raw --filter-kind Deployment | kbld inspect -f-

Images

Image     kbld:docker-io-dkalinin-k8s-simple-app-sha256-f999be3e0d96c78dc4d4c8330c8de8aff3c91f5e152f021d01cb3cd0e92a1797
Metadata  - Path: /Users/pivotal/workspace/k14s-go/src/github.com/k14s/k8s-simple-app-example
            Type: local
          - Dirty: false
            RemoteURL: firstname.lastname@example.org:k14s/k8s-simple-app-example
            SHA: e877718521f7ccea0ab0844db0f86fe123a8d8ef
            Type: git
Resource  deployment/simple-app (apps/v1) namespace: default

1 images

Succeeded
```
Building and pushing container images to a registry
The above section showed how to use kbld with a local cluster backed by a local Docker daemon. No remote registry was involved; however, for a production environment, or in the absence of a local environment, you will need to instruct kbld to push the built images to a registry accessible from your cluster.
config-step-4-build-local/build.yml specifies that docker.io/dkalinin/k8s-simple-app should be pushed to the repository specified by the push_images_repo data value.
Before continuing, make sure that your Docker daemon is authenticated to the registry where the image will be pushed, via the docker login command.
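A push-enabled deploy could then look like this (a sketch; push_images_repo is the data value named above, and the repository value is a placeholder for your own registry):

```bash
ytt -f config-step-4-build-local/ -v push_images_repo=docker.io/your-username/your-repo | kbld -f- | kapp deploy -a simple-app -f- -y
```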
As a benefit of using kbld, you will see that an image digest reference (e.g. index.docker.io/your-username/your-repo@sha256:4c8b96...) was used instead of a tagged reference (e.g. kbld:docker-io...). Digest references are preferred to other image reference forms because they are immutable, hence providing a guarantee that the exact version of the built software will be deployed.
Clean up cluster resources
Given that kapp tracks all the resources that were deployed to the Kubernetes cluster, deleting them is as easy as running the kapp delete command:
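Like deploy, kapp delete shows the resources it is about to affect and asks for confirmation before removing them:

```bash
kapp delete -a simple-app
```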
We've seen how ytt, kbld, kapp, and kwt can be used together (ytt ... | kbld -f- | kapp deploy ...) to deploy and iterate on an application running on Kubernetes. Each of these tools has been designed to be single-purpose and composable with other tools from the k14s org and the larger Kubernetes ecosystem.
About the Authors
Dmitriy Kalinin is a Software Engineer at Pivotal working on Kubernetes and Cloud Foundry projects. (@dmitriykalinin on Twitter)
Nima Kaviani is a Senior Cloud Engineer with IBM. Nima has been a contributor to the Cloud Foundry, Kubernetes, and Knative open source projects. He holds a PhD in Computer Science and tweets and blogs about distributed systems, life, and technology in general. (@nimak on Twitter)