Managing Stateful Docker Containers with Cloud Foundry BOSH

May 27, 2014 Ferran Rodenas

Docker provides a simple and convenient way to package an application and deploy it in isolation on a target machine. But what happens when you want to deploy multiple containers in multiple machines? How do you monitor the status of your containers? How do you deal automatically with stateful data when your containers or the virtual machines hosting the containers fail?

Learn in this article how Cloud Foundry BOSH can help you to orchestrate your multi-node containerized applications on your choice of IaaS, manage its lifecycle including stateful data, while providing monitoring, alerting and self-healing capabilities.

Containers koolaid

OS-level virtualization (aka containers) is not a new technology; it has been floating around for a while, but lately it has become a hot topic since Docker came onto the scene. So what makes Docker so special if it is just a wrapper around an old approach? I would say it is mainly, though not only, its social aspect: being able to build a consistent application container, from scratch or reusing a layer from another container image, and share it easily, via a Dockerfile or via the Docker registry, with the rest of the world is a really amazing capability, and it is what made Docker so popular.

On the other hand, Platform as a Service (PaaS) technologies have been among the earliest adopters of containers, as a PaaS usually requires:

  1. an abstraction layer from the underlying infrastructure running the PaaS;

  2. a better density/efficiency of resources without the overhead of virtual machines (see economics of application virtualization);

  3. a strong isolation between applications as they can run on the same physical or virtual instance;

  4. really rapid resource management (to create, destroy, scale, … the application).

So it is clear that PaaS and containers are a good match. But although various PaaS platforms have implemented their own way to package and deploy an application in a reusable way (i.e. Buildpacks in Cloud Foundry), recently there have been some efforts to integrate Docker images or Dockerfiles with Cloud Foundry. The Cloud Foundry engineering team is working on Diego, with a clear mandate to make it relatively easy to support multiple platforms, including Docker; and in parallel, our friends at CloudCredo are experimenting with Decker, a reimplementation of the Cloud Foundry DEAs using Docker as a backend.

Docker BOSH Release FTW!

But while the work to support a Docker image with “cf push” is still under development, we have been searching for a way to use Docker images now, with existing Cloud Foundry tooling. So today we want to introduce and release as open source an experimental project: a CF-BOSH release for Docker.

Why is this CF-BOSH release so “awesomic”?

  • It works with a standard CF-BOSH without any modification; you will only need a stemcell with kernel >= 3.8 (the Ubuntu Trusty ones);

  • You can orchestrate multiple Docker containers into multiple virtual machines;

  • You can deploy your containers to your choice of IaaS (AWS, OpenStack, vSphere, vCHS, CloudStack or Google Compute Engine) using the same deployment tool;

  • It will automatically monitor your containers and restart them in case of failure;

  • It allows you to set dependencies between containers running in the same virtual machine, so if a container fails, restarting it will also restart all of its dependent containers;

  • It will also automatically monitor your virtual machines and recreate them in case of failure;

  • It allows you to bind host volumes to your Docker containers in a very easy way;

  • It allows you to easily resize any data disk attached to a Docker container without losing any data.

Let’s take an in-depth look at each of these features:

Become the director of the Docker Orchestra!

You can create as many virtual machines as you want (unless you reach your IaaS quota!), and deploy as many containers in each virtual machine as you want (unless you reach the limits of your virtual machine!). How does it work? CF-BOSH uses a declarative approach when deploying your system.

First, in the “jobs” section of the CF-BOSH deployment manifest, we declare how many jobs we want to deploy. Each “job” will become a virtual machine, and on every virtual machine we need to install two templates. The “docker” template will install the Docker bits and start the Docker daemon. The “containers” template will install and start the specified Docker containers.

  jobs:
    - name: docker-vm-1
      templates:
        - name: docker
        - name: containers
    - name: docker-vm-2
      templates:
        - name: docker
        - name: containers

Then we need to set the “job properties” to specify what containers should be deployed on every job (virtual machine). The “containers” template allows you to customize an array of “containers” and set, for every container, what Docker image should be deployed, the entrypoint or the command to run the container, if it should expose ports, or if it should bind a host disk to the container.

    properties:
      containers:
        - name: redis
          image: "dockerfile/redis"
          command: "--dir /var/lib/redis/ --appendonly yes"
          entrypoint: "redis-server"
          ports:
            - "6379:6379"
          bind_volumes:
            - "/var/lib/redis"
        - name: mysql
          image: "google/mysql"
          ports:
            - "3306:3306"
          bind_volumes:
            - "/mysql"

And if you prefer to create a custom Docker image instead of reusing an existing one, you can also set the Dockerfile:

    - name: elasticsearch
      image: "bosh/elasticsearch"
      dockerfile: |
        FROM dockerfile/java
        RUN \
          cd /tmp && \
          wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.tar.gz && \
          tar xvzf elasticsearch-1.1.1.tar.gz && \
          rm -f elasticsearch-1.1.1.tar.gz && \
          mv /tmp/elasticsearch-1.1.1 /elasticsearch
        WORKDIR /data
        CMD ["/elasticsearch/bin/elasticsearch"]
        EXPOSE 9200
        EXPOSE 9300

For a more detailed list of options, check the README in the Docker CF-BOSH release GitHub repository.

Be careful when deploying more than one container on the same job (virtual machine) if you explicitly expose ports on the host interface: since the network on the host is not isolated, you cannot start two containers that both expose the same host port (i.e. port 80). Also, containers on the same job (virtual machine) will share the same persistent disk, although every container will use a different path.
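For example, this hypothetical fragment (the container names are invented, and the key names are assumed to follow the release’s manifest format) shows two web containers on the same job mapped to different host ports to avoid that conflict:

```yaml
properties:
  containers:
    - name: web-1
      image: "dockerfile/nginx"
      ports:
        - "80:80"       # this container claims host port 80
    - name: web-2
      image: "dockerfile/nginx"
      ports:
        - "8080:80"     # same container port, but a different host port
```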

Let’s see this in action. We are going to deploy 3 different services (Redis, MySQL, Elasticsearch) running in containers on a single virtual machine hosted at Google Compute Engine. For Redis and MySQL we are going to use pre-baked Docker images (fetched from the public Docker registry), and for Elasticsearch we are going to use a Dockerfile with instructions on how to build the image, so it will be built on-the-fly while the container is deployed. We will also test that we can connect from our local laptop to the services running inside containers on the virtual machine.

Killing the wrong PID

The Docker CF-BOSH release monitors not only that the Docker daemon process is up and listening, but also that each of the deployed containers is running. If for some reason the Docker daemon or one of the containers dies (because the application exits abnormally, or because we accidentally killed the wrong PID), CF-BOSH will automatically detect the failure and restart the processes, without any human intervention, until it succeeds.

Let’s see this in action. We will ssh into one of the virtual machines hosting our Docker containers and manually kill a process running inside a container to see how, automatically and after just a few seconds, CF-BOSH restarts it. Then we will repeat the same exercise, but killing the Docker daemon instead:

OMG, my VMs are ephemeral!

A common scenario in cloud infrastructures is that you need to deal with ephemeral virtual machines. Your instance can fail, you can accidentally destroy it, or your IaaS provider may simply kill it because it must perform some maintenance tasks (AWS, I’m looking at you!). In those cases, CF-BOSH will automatically create new virtual machines to replace the missing instances.

There is a component in CF-BOSH, named “Health Monitor”, that periodically pings a CF-BOSH agent running inside each virtual machine to check its state. If for some reason the Health Monitor is unable to contact the CF-BOSH agent (because the virtual machine is “gone”), or the state of the deployment on a particular virtual machine is not what CF-BOSH is expecting, then it will trigger an alert. The alert is then passed through a list of responders (email, pager, …). But there is a special responder that performs the health management: the “resurrector”. The resurrector will automatically communicate with the IaaS and ask that the failed VM be replaced; once this is done, it will reattach the existing persistent disk to the virtual machine, and then deploy and start the processes again.
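As a sketch, in a BOSH deployment manifest of that era the resurrector responder is typically enabled through the Health Monitor properties; the threshold values below are illustrative defaults, not recommendations:

```yaml
properties:
  hm:
    resurrector_enabled: true
    resurrector:
      minimum_down_jobs: 5     # always resurrect if fewer than 5 jobs are down
      percent_threshold: 0.2   # otherwise, only if <= 20% of jobs are down
      time_threshold: 600      # within a 600-second window
```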

Let’s see this in action. We will manually destroy a virtual machine that has an attached persistent disk, and we will see how CF-BOSH “resurrects” the virtual machine, reattaches the persistent disk, and restarts all of our processes without losing any data.

Disk full error! :(

Creating and attaching a persistent disk to a virtual machine is an easy task, but sizing the disk correctly up front is not. Our service might become really popular and start generating lots of data, or we might miserably miss our expectations and need to shrink our deployment to avoid incurring more costs. How can we then resize the disk without losing any data? CF-BOSH can also help here!

When you declare a persistent disk in your deployment manifest file, CF-BOSH creates a persistent disk and attaches it to the virtual machine. If you later decide to modify the size of this persistent disk, CF-BOSH will, in an orderly way: stop the processes on your virtual machine, so all data is flushed from the running service to the disk; create a new persistent disk with the updated size; attach the new disk to the virtual machine; copy all data from the old persistent disk to the new one; detach and delete the old persistent disk; and then start the processes again.

Let’s see this in action. With a simple modification in our deployment manifest (the size of the persistent disk), CF-BOSH will resize our data partition without losing any data:
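A minimal sketch of that manifest change, reusing the job name from the earlier example (the sizes are illustrative; BOSH persistent disk sizes are expressed in MB):

```yaml
jobs:
  - name: docker-vm-1
    templates:
      - name: docker
      - name: containers
    persistent_disk: 8192   # e.g. was 4096; the next deploy migrates the data
```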


We have shown you how, using CF-BOSH and the Docker CF-BOSH release, it is pretty easy to orchestrate your Docker containers on your choice of IaaS. We have also shown why you don’t need to worry about the state of your containers: CF-BOSH constantly takes care not only of your Docker containers, but also of the virtual machines that host them.

Additional work

As the project is open source, we encourage our community to contribute ideas to make this CF-BOSH release even better. What would be uber-cool? Adding a private Docker registry to the Docker CF-BOSH release, so you can host your pre-built Docker images in your very own private image depot. Or adding a service discovery mechanism, so you can link containers hosted on different virtual machines.


If you’d like to learn more about CF-BOSH, see the CF-BOSH documentation website and/or check the source code at our GitHub repository. For any specific questions about CF-BOSH or the Docker CF-BOSH release, use the CF-BOSH Users Google Group.

Bonus: Do you want to manage Docker resources, including containers, images, hosts, and more all from a single management interface? Then try also our Shipyard CF-BOSH release!
