Rethinking Security from Scratch: The Case for Shifting Container Security from the Edge to the Core

March 30, 2020 Chris Milsted

Here’s a simple possibility that is often overlooked but can have serious consequences: building the smallest possible container image might break your existing operational security processes. In this blog post, you will see why new DevSecOps thinking is necessary as we look at the impact a development-led change can have on your operational security.

Organizations are often accustomed to establishing an image-based trust system at the edge. If, however, you consider the new super-minimal containers that can be created, the security model must evolve to focus on trusting the build system instead.

To explain the motivation for such a shift, let’s explore container images, newer ideas like distroless and multi-stage builds, scratch builds, and how common vulnerabilities and exposures are reported. At the core of your process today will probably be a connection between a semantic version of software and a security assessment, and this blog post demonstrates why you might need to modify this thinking.

Setting the Stage: OCI Images and Layers

One of the features of the OCI image and runtime specifications is that images are made up of layers, and those layers can be inherited from other images. As an example, you can compare the alpine and golang:alpine container images to see how they relate to each other. The golang:alpine container pulled from Docker Hub references the alpine:latest container image as a base layer.

You can see this relationship in more detail in the docker inspect output. Here is the relevant portion for the alpine image:

    "Architecture": "amd64",
    "Os": "linux",
    "Size": 5552690,
    "VirtualSize": 5552690,
    "GraphDriver": {
        "Data": {
            "MergedDir": "/var/lib/docker/overlay2/784af3f8492d8d7ade0a82bbaa6dace2bd694d4c0f1a4ab1510cd43cec0c67d9/merged",

And here is the same portion of the docker inspect output for the golang:alpine image:

    "Architecture": "amd64",
    "Os": "linux",
    "Size": 359122654,
    "VirtualSize": 359122654,
    "GraphDriver": {
        "Data": {
            "LowerDir": "/var/lib/docker/overlay2/c34762b4720a23034e53c9f35be009071921086ca140db65d13a82940b9ebf35/diff:/var/lib/docker/overlay2/9988592682bef4c04e144f6d153b9116686e8c0a879e71e41cfc79e07037a19d/diff:/var/lib/docker/overlay2/96f7522bc8c562a266b524645dd43931d6bdf37560f940fe50cb85177f08fe02/diff:/var/lib/docker/overlay2/784af3f8492d8d7ade0a82bbaa6dace2bd694d4c0f1a4ab1510cd43cec0c67d9/diff",

The layer from alpine:latest is referenced by the golang:alpine image, meaning that it is only stored once on the filesystem. If you pull the Alpine image and then pull the golang:alpine image, only the new layers for the golang:alpine image need to be copied over the network. The base Alpine layer simply points to the layer from the Alpine container that’s already on the local machine.

There are benefits to keeping images as small as possible: smaller images are faster to pull and transfer over the network. Remember, though, that common layers are not copied again if all your images are derived from the same base image. Larger images that share layers therefore need not be a penalty in terms of the disk space used to store them or the network bandwidth needed to pull them, because shared layers are re-used rather than copied or stored a second time.
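
To see this sharing for yourself, the following is a minimal sketch using the Docker Go SDK (github.com/docker/docker/client); it assumes the Docker daemon is running locally and that both images have already been pulled. The layer digest belonging to alpine:latest will appear in both lists.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/docker/docker/client"
)

func main() {
    ctx := context.Background()

    // Connect to the local Docker daemon using the standard environment settings.
    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        log.Fatal(err)
    }

    for _, image := range []string{"alpine:latest", "golang:alpine"} {
        inspect, _, err := cli.ImageInspectWithRaw(ctx, image)
        if err != nil {
            log.Fatal(err)
        }

        // RootFS.Layers lists the layer digests that make up the image;
        // digests shared with the base image are stored only once on disk.
        fmt.Println(image)
        for _, layer := range inspect.RootFS.Layers {
            fmt.Println("  ", layer)
        }
    }
}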

Looking at Options: Distroless Images and Multi-Stage Builds

In your search for the smallest container image possible, one option is the distroless approach coupled with multi-stage builds. Multi-stage builds were invented as a way to keep build and run concerns separate, and they also allow you to keep runtime containers to a minimal size. One common pattern is to pull a large container that contains compilers or other build tooling, and to give that build stage a name using the AS directive, as seen below.

Once the build has completed, you can then inject the build artifacts into the much smaller runtime container by using the COPY directive with a --from= argument. Having just the build artifacts and their runtime dependencies in a minimal container keeps images small. For security, it also removes development tooling from containers running in production.

# Build stage: compile the Go application
FROM golang:1.7.3 AS builder
WORKDIR /go/src/github.com/
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

# Runtime stage: copy only the compiled binary into a small base image
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/app .
CMD ["./app"]

Distroless container images are images that contain just enough packages to run a binary for a given language. They even remove package managers and shells (which makes it impossible to exec into a shell in the running container). These can be used as the runtime container image in the above example to create very small images, but note that without a package manager or shell it is hard to work out what they contain.

Building Images from Scratch

There has been an industry move to shrink container images as close to zero as possible. Growing in popularity with languages like Go, which can produce statically linked binaries, is the idea of a “scratch” build. You can start a Dockerfile with “FROM scratch” to initialise an empty layer. This empty layer can then be populated with binaries and dependencies so that the application is completely self-contained and there is no longer any inheritance of layers from a more traditional Linux-based container, such as the “FROM alpine:latest” line in the previous code block.

Below is some example Go code, helloworld.go, that serves a simple web page, which makes the resulting container easy to test. You can use any language or code as long as you can compile it into a static binary. This static binary is then placed in a scratch container:

package main

import (
    "fmt"
    "net/http"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "Hello World from Go in minimal Docker container")
}

func main() {
    http.HandleFunc("/", helloHandler)

    fmt.Println("Started, serving at 8080")
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        panic("ListenAndServe: " + err.Error())
    }
}

When you run this code and browse to port 8080, you will get a simple web page back that says “Hello World from Go in minimal Docker container”.

You can build the code to a statically compiled binary using the Go compiler. You should add the CGO_ENABLED=0 flag to make sure that no linking to C libraries is required. If you see a strange error similar to ‘standard_init_linux.go:211: exec user process caused "no such file or directory"’ appear when you run your container, check that the CGO_ENABLED flag was set.

$ env CGO_ENABLED=0 go build -a -o helloworld_static
$ ldd helloworld_static
    not a dynamic executable

You can then inject this binary into a minimal scratch container using a Dockerfile as follows:

FROM scratch

# List the maintainer
LABEL maintainer="Chris Milsted"

# Copy the pre-built static binary from the host into the empty image
COPY ./helloworld_static ./helloworld

# Expose port 8080 to the outside world
EXPOSE 8080

# Command to run the executable
CMD ["./helloworld"]

Looking at this from a developer’s point of view, it achieves the goal of producing the smallest possible container: it contains just the application binary. Either scratch containers or distroless containers can be used for this.

Identifying a Breakdown in Security Information

When you expand this viewpoint to DevSecOps, however, that team’s view is very different. If you now inspect the container layer, all you can see is a static binary, and it is unclear which version of Go was used to compile the code or even which Git commit it was built from.

This is the breakdown in the security model mentioned at the beginning of this blog post. Historically, you would have looked at the edges of your runtime system to determine what is happening. Now that information is no longer visible at runtime and can only be determined from build-time information. So how does your DevSecOps team track common vulnerabilities and exposures today? Let’s explore that question next.

Tracing a Line through the Traditional Security Model

When you are asked if your software contains a security vulnerability, traditionally you would have looked at the bits your operating system or container base image provided. If you look at the Alpine image from the earlier example, you can see it is made up of a number of packages that have a semantic version associated with them:

# docker run --rm -ti alpine sh  
/ # apk list
musl-1.1.22-r3 x86_64 {musl} (MIT) [installed]
zlib-1.2.11-r1 x86_64 {zlib} (zlib) [installed]
apk-tools-2.10.4-r2 x86_64 {apk-tools} (GPL2) [installed]
musl-utils-1.1.22-r3 x86_64 {musl} (MIT BSD GPL2+) [installed]
libssl1.1-1.1.1d-r0 x86_64 {openssl} (OpenSSL) [installed]
alpine-baselayout-3.1.2-r0 x86_64 {alpine-baselayout} (GPL-2.0-only) [installed]
alpine-keys-2.1-r2 x86_64 {alpine-keys} (MIT) [installed]
busybox-1.30.1-r2 x86_64 {busybox} (GPL-2.0) [installed]
scanelf-1.2.3-r0 x86_64 {pax-utils} (GPL-2.0) [installed]
libc-utils-0.7.1-r0 x86_64 {libc-dev} (BSD) [installed]
libtls-standalone-2.9.1-r0 x86_64 {libtls-standalone} (ISC) [installed]
ssl_client-1.30.1-r2 x86_64 {busybox} (GPL-2.0) [installed]
ca-certificates-cacert-20190108-r0 x86_64 {ca-certificates} (MPL-2.0 GPL-2.0-or-later) [installed]
libcrypto1.1-1.1.1d-r0 x86_64 {openssl} (OpenSSL) [installed]

When a security issue is identified, it is given a Common Vulnerabilities and Exposures (CVE) identifier, which can be looked up in CVE databases such as those maintained by NIST or MITRE. Looking at CVE-2019-14697, for example, we can see that it is fixed in a specific version of Alpine Linux, 3.10.2. So the logic that most security teams work with could be summarized as:

  1. Identify the CVEs that are of concern (usually important and critical CVEs).
  2. Work out which version of a software package has the fix.
  3. Map the software package to a version of a Linux distribution, e.g. Alpine, if needed.

The mapping of CVE through to the semantic version of Alpine allows teams to assess the security of a container. To scale up this process, you can use a software scanning tool to gather all the “versions” in the containers running on your Kubernetes cluster. For example, the default drivers and data sources for the Clair security scanner do exactly this.
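
As a rough illustration of this mapping logic, the sketch below boils the idea down to a lookup from package name to the version that carries the fix. The data structures and the fix version are illustrative assumptions only; real scanners such as Clair use full vulnerability data feeds and proper version comparison.

package main

import "fmt"

// vulnerability maps a CVE to the package and distribution release that fixes it.
type vulnerability struct {
    CVE          string
    Package      string
    Distribution string
    FixedVersion string
}

func main() {
    knownVulns := []vulnerability{
        {CVE: "CVE-2019-14697", Package: "musl", Distribution: "alpine-3.10", FixedVersion: "1.1.22-r3"},
    }

    // Installed packages as reported by "apk list" inside the running container.
    installed := map[string]string{"musl": "1.1.22-r2"}

    for _, v := range knownVulns {
        current, ok := installed[v.Package]
        if !ok {
            continue
        }
        // A naive string comparison stands in for real version comparison logic.
        if current < v.FixedVersion {
            fmt.Printf("%s: %s %s is vulnerable, fixed in %s\n", v.CVE, v.Package, current, v.FixedVersion)
        }
    }
}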

When you then try to apply this mapping logic (CVE to package version to base container image) to your scratch build, the logic path is no longer possible because all you have is a self-contained application binary. However, when the application was compiled, you did know everything: the Git tag of the code you had checked out, the version of the Go compiler you used, and the additional libraries you needed to include.

Shifting Toward Build-Based Security

A blog post on the Polyverse website discusses how semantic versioning is becoming less useful in this new world of agile development. It points in the direction of a new security pattern, one in which we trust the build system rather than the running artifacts. How, then, can an enterprise move to a chain of trust based on when a container was built?

Such an approach would re-establish the audit trail your security teams are looking for, and there are aspects of the existing OCI specification, such as the unique SHA256 image digest, that can be re-used as part of a solution. Extending the existing CVE security flow, you will also need to establish a feed from your build system to the security database. At build time, you would pass in details such as the Go version and Git commit reference for your scratch container above, as well as the SHA of the container image created during the build process.

Let’s revisit the technical decision to move to minimalist scratch builds. If you are going to allow people to create containers from scratch images, then you need to move the point of trust to the build system, because that is the only point where the relevant information is known. In this case, you would follow a workflow similar to the following:

  1. Extend your current security database, which maps CVEs to package versions, to also include a build database that maps a container’s SHA to the code, language, and libraries used at build time.
  2. When you create a new container image, send details to the “build database” system that map the container SHA to a set of artifacts you have CVE information about (a minimal sketch of such a build record follows this list).
  3. As new CVE information is fed in, trigger a rules engine to build up a list of container SHAs that have vulnerabilities.
  4. Pass this information on to a policy engine that can enforce a runtime policy, either to trigger alerts (in the case of a production environment) or to stop the use of the container (in the case of a non-production environment).
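
As a minimal sketch of step 2, the snippet below shows the kind of build record a CI pipeline could send to such a build database. The struct fields, the example values, and the builddb.example.com endpoint are all assumptions for illustration; they are not part of any existing tool.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// BuildRecord captures the build-time facts that are no longer visible
// inside a scratch container at runtime.
type BuildRecord struct {
    ImageSHA256 string   `json:"image_sha256"` // digest of the image produced by the build
    GitCommit   string   `json:"git_commit"`   // commit the binary was built from
    GitTag      string   `json:"git_tag"`
    GoVersion   string   `json:"go_version"`   // compiler used for the build
    Modules     []string `json:"modules"`      // additional libraries compiled in
}

func main() {
    record := BuildRecord{
        ImageSHA256: "sha256:placeholder", // would come from the registry push step
        GitCommit:   "abc1234",
        GitTag:      "v1.2.3",
        GoVersion:   "go1.13.4",
        Modules:     []string{"golang.org/x/net"},
    }

    body, err := json.Marshal(record)
    if err != nil {
        log.Fatal(err)
    }

    // "builddb.example.com" is a placeholder for whatever build database
    // or metadata service your pipeline feeds.
    resp, err := http.Post("https://builddb.example.com/records", "application/json", bytes.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println("build record stored:", resp.Status)
}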

Following the Polyverse blog, you could also feed other details into the build database, such as the APIs exposed by a given Git version. This data could also map alpha, beta, or stable versioning information for each API to each of the container SHAs. You could, for instance, construct a mapping table that records, for every container SHA256, the Go version used to build it, the Git tag of the code, and the maturity of the APIs it exposes. This mapping would allow you to build up a full DevSecOps picture that discloses each container’s capabilities and exposure to vulnerabilities.

From such a table, you can then come up with a policy that maps CVEs for, say, Go version 1.13.4 to a SHA256 signature and from there to a set of running containers. You also have the Git tag, which you can map back to a code commit and a set of application capabilities. Cloudflare, for example, has modified their Go binaries to display these values with runtime flags.
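
A minimal sketch of that pattern is shown below. The variable names and the -ldflags injection are assumptions for illustration, not Cloudflare’s actual implementation; the Go compiler version, however, really is baked into every Go binary and can be read with runtime.Version().

package main

import (
    "flag"
    "fmt"
    "runtime"
)

// These variables are populated at build time, for example:
//   go build -ldflags "-X main.gitTag=$(git describe --tags) -X main.gitCommit=$(git rev-parse HEAD)"
var (
    gitTag    = "unknown"
    gitCommit = "unknown"
)

func main() {
    showVersion := flag.Bool("version", false, "print build metadata and exit")
    flag.Parse()

    if *showVersion {
        fmt.Println("git tag:   ", gitTag)
        fmt.Println("git commit:", gitCommit)
        fmt.Println("go version:", runtime.Version()) // compiler version baked into the binary
        return
    }

    // ... normal application start-up would go here ...
}

Running the container with a -version flag of this sort gives your security and operations teams a way to recover at runtime the same facts that the build database recorded at build time.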

As you can see, while smaller scratch-based builds are appealing in the developer world because they get builds deployed faster, there are consequences from a security operations point of view. Moving to a build-based system of trust is the logical next step, but it is not how security departments have traditionally worked. Building the new capabilities and control points will be a journey for some organizations that will take time.

Combining Close Collaboration with New Tools to Improve Security

With modern languages and the industry’s move to microservices, there has been a shift in thinking about container images towards a more minimal approach. However, when you add people and process factors to this technology shift, you can see some challenges to the way things like security have been handled in the past.

Containers, Kubernetes, and microservices require close collaboration with business functions like security and audit to make sure that technical decisions taken in isolation do not impair a company’s ability to understand its supply chain and security posture. There is a whole new raft of tooling to help organisations with this, such as VMware Secure State from CloudHealth by VMware.

About the Author

Chris is based in the UK and is a Staff Field Engineer for VMware. He spends most of his working time wrangling Kubernetes and most of his spare time playing field hockey badly and being a taxi driver for two children who are growing up rapidly.
