Containers enable consistent deployment and execution

Containers are popular with both developers and operators because they offer a straightforward way to deploy and manage applications, regardless of the target environment. They facilitate DevOps (and DevSecOps) practices by improving handoffs between development and operations teams.

What is a container?

A container encapsulates an application in a form that’s portable and easy to deploy. Containers can run without changes on any compatible system—in any private cloud or public cloud—and they consume resources efficiently, enabling high density and resource utilization. Although containers can be used with almost any application, they’re frequently associated with microservices, in which multiple containers run separate application components or services. The containers that make up microservices are typically coordinated and managed using a container orchestration platform, such as Kubernetes.

Containers vs. VMs

At the simplest level, the difference between a virtual machine (VM) and a container is that every VM runs a full or partial instance of an operating system, whereas multiple containers share a single operating system instance. A container is a lightweight, standalone, executable package that, in conjunction with the host system, includes everything necessary to run an application: code, runtime, system tools, system libraries, and settings. This enables multiple containerized applications to run independently on a single host system. And since multiple containers can run inside a VM, you can combine the benefits of both.

Multiple containers can run in lightweight VMs to increase security and isolation. The VM creates an infrastructure-level boundary that container traffic cannot cross, reducing exposure if a service is compromised.

Containers isolate and abstract resources

Containerization standards

Standards for container formatting and runtime environments are controlled by the Open Container Initiative (OCI), a lightweight, open governance project formed in 2015 for the express purpose of creating open industry standards. The OCI currently offers two specifications: the Runtime Specification (runtime-spec) and the Image Specification (image-spec). The Runtime Specification outlines how to run a filesystem bundle that's unpacked on disk. A typical OCI implementation downloads an OCI image, unpacks it into an OCI runtime filesystem bundle, and executes that bundle via an OCI-compliant runtime.
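A runtime filesystem bundle is driven by a small JSON configuration file. A minimal sketch of an OCI runtime-spec `config.json` might look like the following (the version string and paths are illustrative):

```json
{
  "ociVersion": "1.0.2",
  "process": {
    "cwd": "/",
    "args": ["sh"]
  },
  "root": {
    "path": "rootfs"
  }
}
```

The `root.path` field points at the unpacked filesystem, and `process` describes the command the runtime executes inside it; real bundles add mounts, namespaces, and resource limits on top of this skeleton.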

What’s the relationship between containers and Docker?

Docker has been almost synonymous with containers from the beginning, and it continues to be widely used by developers to build container images. The Docker environment includes a container runtime as well as container build and image management tools.

However, it’s important to be aware that on December 2, 2020, the Kubernetes contributors announced the deprecation of Docker as a container runtime (via the dockershim component). Kubernetes has shifted to the Container Runtime Interface (CRI), which supports a broader set of container runtimes with smooth interoperability between them.

Because Docker builds an OCI-standard container image, those images will run on any OCI-compliant container runtime. Therefore, developers can continue to use Docker to build, share, and run containers on Kubernetes.

Why containers?

The monolithic architectures that define many of today’s existing applications can slow your business down. Modern applications built with containers promise to accelerate the delivery of new functionality and create an environment of continuous innovation. These benefits are catching the attention of many organizations.

Containers offer your organization many advantages:

Deployment consistency reduces time to market.

Developing and packaging an application, along with its dependencies, for handoff to operations is a time-consuming and costly process. By bundling everything together, containers improve the consistency and speed of pushing new applications and updates into production.

Execution consistency improves quality.

Because containers bundle applications with their dependencies, they provide greater consistency and reliability when moving applications from testing into production, enabling teams to improve the quality of releases and boost customer satisfaction.

Easier developer-to-operator handoffs speed delivery.

Containers are built for portability: small, immutable image packages that can be shared easily, giving developers and operators more time to focus on building business-critical applications and ensuring robust application delivery.

Better isolation protects against failures.

A containerized application is isolated and abstracted from the OS and other containers (assuming cloud native best practices are followed), so one container can fail without causing downtime for other running containers. By monitoring containers and starting or stopping container instances as needed, Kubernetes makes containerized apps both more resilient and more efficient.

Updating apps to respond to CVEs is easier.

Containers can be updated and redeployed more quickly than traditional apps. Container updates can be automated, so that new containers with updated code can be built and pushed to your container registry for production deployment when CVEs are identified.

What to keep in mind if you’re considering containers

Containers are just one of many important features of cloud native development. If you’re moving to containers, there are multiple elements to bear in mind:

Understand containers and container orchestration

To get started with containers, you need to know about both containers and container orchestration.

Container orchestration helps manage the complexity of the container lifecycle. This becomes especially important when you’re operating distributed applications and large numbers of containers.

Kubernetes is an open source container orchestrator that has become the de facto standard. Kubernetes automates deployment, load balancing, resource allocation, and security enforcement for containers via declarative configuration and automation. It keeps containerized applications running in their desired state, ensuring they’re scalable and resilient.
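Kubernetes' declarative model can be sketched with a minimal Deployment manifest (the names and image below are hypothetical). The operator declares a desired state, here three replicas, and Kubernetes continuously reconciles the cluster toward it, restarting or rescheduling containers that fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical application name
spec:
  replicas: 3               # desired state: three running copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0   # hypothetical image reference
          resources:
            requests:       # resource requirements used for scheduling
              cpu: "250m"
              memory: "128Mi"
```

If a node goes down or a container crashes, the scheduler places new copies elsewhere until the observed state matches the declared `replicas` count again.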

You can access Kubernetes in a number of ways: as open source, as a service in most public clouds, or via prepackaged Kubernetes distributions. There is an ecosystem of projects and products that supplements and extends the abilities of containers and Kubernetes.

See KubeAcademy to learn more about Kubernetes and container orchestration. Also, see Kubernetes vs. Docker to learn how the two technologies relate to one another.

The process of building containers can be automated

If containerizing an application or microservice requires too much manual effort, it can slow down agile development teams. When container creation is automated, developers can focus on their source code and don’t have to be packaging experts or verify the provenance of every image building block.

Functionally, container images are built in layers. With an automated, declarative approach to container builds, whenever one of the layers changes, only that individual layer must be updated. Then that new container image can be redeployed. For example, if only system libraries need to be updated, you only have to rebuild the layer that contains the libraries. This reduces the burden on testing and validation practices, as the application code and any other code layers remain unchanged. This allows for more secure containers to be pushed into production faster and more frequently.
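The layer model can be sketched in a build definition (image and file names are hypothetical). Instructions earlier in the file form lower layers, so a change to application code rebuilds only the final layers while cached library layers are reused:

```dockerfile
# Illustrative build file; the base image and paths are hypothetical.
FROM python:3.12-slim

# Lower layer: OS packages. Rebuilt only when this instruction changes.
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Middle layer: language dependencies, reused from cache while
# requirements.txt is unchanged.
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# Top layer: application code. Editing source invalidates only this layer,
# so the library layers below do not need to be rebuilt or revalidated.
COPY src/ /app/src/
CMD ["python", "/app/src/main.py"]
```

Ordering the dependency installation before the source copy is what makes the cache effective: frequent code changes touch only the topmost layer.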

Every package within an image should be given a digital fingerprint (hash) to prove that it has not changed since it left its source. When metadata about the libraries and binaries in the stack is available, container updating is simplified: with granular knowledge of the package versions running in production, teams can more easily assess their exposure to vulnerabilities and make targeted mitigation plans.
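The fingerprinting idea can be sketched in a few lines (the function name and sample contents are illustrative): a content hash is computed when the package leaves its source, and recomputed at verification time, so any change to the bytes produces a mismatch.

```python
import hashlib

def digest(blob: bytes) -> str:
    """Content-addressable fingerprint, as used for image layers and packages."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

# Fingerprint recorded when the package leaves its source.
original = b"libexample-1.2.3 contents"
fingerprint = digest(original)

# Verification later: recompute and compare.
assert digest(original) == fingerprint               # unchanged -> matches
assert digest(b"tampered contents") != fingerprint   # any change -> mismatch
```

This is the same content-addressing scheme OCI images use for layer digests, which is why a layer's identity and its contents can never silently diverge.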

Cloud Native Buildpacks are an example of open source technology that centrally automates and manages building and updating container images from source code. Buildpacks help to automate the process of creating OCI-compliant containers. VMware Tanzu Build Service takes Buildpacks to another level by adding a declarative image configuration model that creates and continuously updates OCI-compliant containers based on a desired state. And by giving operators a centralized control plane for managing updates, it removes the need to track dependencies individually for thousands of containers with varying patch levels.

Manual vs. automated container build process

Manual: Developer builds the container. Developers take the source code and all dependencies and wrap them in a container definition. This package is deployed to the target environment.
Automated: Developer commits code to a repository. Developers commit source code to a repository, where it’s automatically packaged and pushed to the target environment.

Manual: Developer identifies middleware. Developers choose a base image, language dependencies, and middleware components and versions that make up the portable image.
Automated: Declarative approach to middleware. Instead of developers selecting and configuring language runtimes and middleware, the container is assembled from a known-good definition.

Manual: Developer is responsible for lifecycle management. If there are bugs or updates to language runtimes or middleware, it’s up to developers to update affected containers before they’re tested and redeployed to the target environment.
Automated: Lifecycle management is continuous. Containers are automatically rebuilt when source code, the base OS, or middleware is updated. Then the container is tested and redeployed through CI/CD workflows.

Manual: Developer maintains documentation of container contents. With every update, developers must manually track the changing versions of the container’s different layers.
Automated: Automated metadata creation of container contents. As updates are made, metadata is updated to document the changes. This metadata can be programmatically inspected and audited.

Container lifecycle security should encompass development

As organizations adopt new ways of deploying containerized applications, they need to take a DevSecOps approach, changing the way they implement and manage security policies. Like testing, integration, and deployment, security needs to be built in at the ground level of application development and automated as much as possible.

Baking security into an application early in the container lifecycle is the so-called “shift left” of an organization’s security model. Security teams, working with development and operations teams, can adapt existing governance and compliance policies to accommodate the new container and application lifecycle and new tools. Development and delivery teams are then responsible for the implementation of those practices, performing the day-to-day decision making around the security of applications and providing evidence demonstrating that they are meeting the organization’s policies.

Some best practices for container security include:

  • Use programming frameworks that make adopting recommended security practices and patterns easier, enabling developers to create secure applications by default.
  • Standardize the code used in the base OS, as well as application dependencies for your container builds.
  • Know what’s in your containers via well-documented code provenance (metadata), which also automates policy enforcement and monitoring.
  • Use a private container registry for managing approved, validated container images and base OS images (including third-party containers).
  • Rigorously control access and deployment policies for the private registry.
  • Automate container builds so that updates to application code, dependencies, or OS libraries trigger rebuilds.
  • Implement a zero-trust, role-based access control policy for accessing runtime platforms.

The result is that security is no longer an “add-on” to a deployed application—or a hurdle for a development team to overcome after the fact. Secure practices simply become “the way things are done.”

What is a container platform?

Containerization and container orchestration are part of a larger set of capabilities needed for an enterprise software platform. Containers by themselves lack the necessary capabilities for security, high availability, application lifecycle management, and more. For any organization seeking scale, security, and deployment consistency, an enterprise platform is required to efficiently run containerized applications.

An effective container management platform typically includes:

Container orchestration:

Orchestration and scheduling of containers (typically via Kubernetes) to keep containers operating in the desired state based on the resource requirements of each container

Lifecycle management:

Using declarative APIs to manage the lifecycle of multiple clusters consistently across clouds (e.g., provisioning, scaling, upgrading, and deleting clusters)

Security and compliance:

Ability to secure the container lifecycle using validated components and automation, from how to build containers to how those containers interact in production


Observability and monitoring:

Analyzing operational telemetry collected across distributed data sources, such as applications, services, containers, and multi-cloud infrastructure, to provide contextual, actionable insights

Connectivity and networking:

Dynamic, robust Kubernetes ingress services and application connectivity management for fine-grained access, policy enforcement, and encryption between services

Developer experience:

Streamlining the container lifecycle from the start, including development and delivery automation (e.g., automated container builds and self-service deployment environments)

VMware Tanzu is designed for containers and modern apps

VMware Tanzu drives modern applications on modern infrastructure. It simplifies operating containers across multi- and hybrid-cloud environments, while freeing developers to build great apps that support continuous delivery workflows.

VMware Tanzu Application Platform is a full-stack platform that makes it possible to operationalize containers and DevSecOps practices.

VMware Tanzu Labs helps teams adopt up-to-date development practices, modernize existing applications, and stand up a container management platform customized for their specific organization.