12-Factor Containerized Microservices: Leveraging VMware Tanzu and the Best of Kubernetes

February 14, 2022 Kirti Apte

At VMware, as we talk to enterprise customers about their application deployment patterns, challenges, and future requirements, we observe a common theme. Most of them are embarking on a modern application design and deployment path by using containers and Kubernetes as foundational technologies and by implementing their applications as microservices. Sometimes these modern enterprise applications are for new, “greenfield” business requirements, and sometimes they represent an evolution of legacy, monolithic applications. In all cases, however, the key technology and business goals are consistent: 24-7 availability, resiliency, scalability, portability, and agility of deployment. 

Kubernetes and a microservices-based architecture offer significant promise to meet these goals. A common approach to building microservices-based applications is to start with a simple application service and then split it into a set of smaller, self-contained, and interconnected microservices. Stateless UI and API services can be the first candidates for this evolution. The 12-factor app methodology, a set of guidelines developed in 2012 and still widely used today, provides a well-defined framework for developing these modern microservices. Kubernetes is a popular container orchestration platform that can be used to deploy and manage the lifecycle of such microservices, and the goal should be to leverage its capabilities to the fullest. VMware Tanzu is a portfolio of products to build, run, and manage Kubernetes-based workloads in hybrid multi-cloud environments.

This post shows how organizations can leverage Kubernetes container orchestration through the VMware Tanzu portfolio and 12-factor application patterns to bring scalability, portability, and resiliency to their application stacks.

For simplicity, let’s divide 12-factor application patterns into the coding, deployment, and operation phases as part of transforming a monolith into containerized stateless microservices.

Code 

Factor I: Codebase

“One codebase tracked in revision control, many deploys.”

Typically, an application is composed of multiple components, with each component supporting UI, business logic, and database functions. The core principles to follow when designing a microservices-based codebase are single responsibility, high cohesion, and loose coupling. Each service has a single purpose and includes all the functions needed to carry out that single purpose. The following diagram shows multiple microservices working together as a logical unit to form a shopping cart application ecosystem.

Even though each microservice can adopt its own choice of technology, bringing in standardization and consistency helps developers share components across multiple teams. For example, Spring Initializr or VMware Tanzu Application Platform Application Accelerators can be used to build a shopping cart application starter kit for UI and API services that can then be shared across multiple teams, enforcing standard coding practices across development teams.

All services are revision-controlled and follow a version control contract with other services so that they can be upgraded or downgraded independently. You can have different versions of the shopping microservice running on development, staging, and production environments, as shown below. For example, the production container spec for the shopping cart UI microservice uses version 1 of the shopping cart UI image:

spec:
  containers:
  - name: shopping-cart-ui
    image: shopping-cart-ui:v1
    imagePullPolicy: IfNotPresent

Factor V: Build, release, run

“Strictly separate build, release, and run stages.”

Once your codebase is in place, you can build, release, and run your code in the Kubernetes environment. There needs to be strict separation between the build, release, and run phases.

Typically, once code gets merged from a feature branch to the main branch, continuous integration tooling triggers a build and builds the necessary container images. You may want to run automation and unit tests at this stage. For our example, we are using VMware Tanzu Build Service, which is integrated with continuous integration tooling as shown below. Tanzu Build Service creates container images and manages the entire lifecycle of container images by bringing in patching and upgrading automation. 

The build stage fetches application dependencies, compiles binaries, and produces the container images. Container images are pushed to a container registry, such as Harbor or Artifactory. Typically, vulnerability and compliance scans are run on the images, and various tags are created. You can then define an image replication strategy to push a subset of images with specific tags to different environments, such as development, test, staging, or production. The resulting release contains both the build and the config and is ready for immediate deployment in the Kubernetes-based environment.

The run stage (also known as “runtime”) runs the app in the Kubernetes-based environment, by launching some set of the app's processes against a selected release.

Managing the lifecycle of enterprise applications requires alignment between people, tools, and processes. Subject matter experts (SMEs) who specialize in operations, security, and compliance need to be onboarded. Clear processes need to be defined for the frequency of production deployments, the gate conditions for promotion from development and test to production, and the tagging and branching strategy.

Factor X: Dev/prod parity

“Keep development, staging, and production as similar as possible.”

The 12-factor application strategy is designed for continuous deployment by keeping the gap between development and production small. Differences between backing services mean that tiny incompatibilities crop up, causing the code that worked and passed tests in development or staging to fail in production. Ideally, one predefined set of microservices needs to be deployed in different environments, such as integration, staging, and production.

Another important consideration is fault tolerance. Service unavailability is part of the service lifecycle, so handling it needs to be built into the microservices design. Each service needs to handle errors and outages from dependent services gracefully. A momentary failure can be resolved with retries or cache refreshes, while a long-term failure can be handled by returning proper error messages to the consumers of your service.

CI/CD processes need to run integration builds with key automated tests to catch integration issues as early as possible. For example, the shopping cart UI microservice can run functional tests on pull request builds, and long-running tests once a day to simulate a production-like data workload. The following screenshot shows unit test output integrated with a Travis CI build.

Deploy

Factor II: Dependencies

“Explicitly declare and isolate dependencies.”

A 12-factor app is expected to declare all of its build-time and runtime dependencies explicitly, via a dependency declaration manifest. For instance, in our shopping cart example, the UI microservice is built using Spring Boot and Java. The application dependencies are declared explicitly in the pom.xml file as shown below.

Runtime dependencies require other services to be running and ready to service requests. For example, the shopping cart UI service in the above example depends on cart service, payment service, delivery service, and styling service. Kubernetes makes probing of these runtime dependencies easy by standardizing liveness and readiness probes for Kubernetes pods. These probes can be used to check if the backing services are running and are ready to receive requests. An example probe definition for a Kubernetes pod is shown below.
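A probe definition along these lines might look as follows; this is a sketch, and the endpoint paths, port, and timing values are illustrative assumptions, not taken from the actual shopping cart deployment:

```yaml
# Liveness and readiness probes for the shopping cart UI container.
# The /healthz and /ready endpoints and port 8080 are illustrative assumptions.
spec:
  containers:
  - name: shopping-cart-ui
    image: shopping-cart-ui:v1
    livenessProbe:          # restarts the container if the app stops responding
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:         # removes the pod from service endpoints until ready
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```

The readiness probe is where runtime dependencies matter most: the handler behind the readiness endpoint can verify that backing services such as the cart and payment services are reachable before the pod receives traffic.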

Factor III: Config

“Store config in the environment.”

A 12-factor app needs strict separation of its configuration from its code. Development, staging, and production environments need different configurations to deploy the same code. You can use Kubernetes ConfigMap and Secret objects to store configuration in a declarative way. For example, as shown in the diagram below, the shopping cart UI service declares its configuration using a Kubernetes ConfigMap and Secrets. Because the contents of ConfigMap objects are passed as environment variables to the runtime containers, this model scales across a large number of deployment targets. Tanzu Build Service treats environment-specific configuration as code when building the container image, which is then deployed on the Kubernetes-based deployment targets. Configuration is thus abstracted from the code, so the same code can be deployed to different environments by simply injecting the necessary configuration at runtime to satisfy the functional and nonfunctional requirements of the application.
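As a minimal sketch of this pattern (the ConfigMap name, keys, and values are illustrative assumptions), a ConfigMap can hold environment-specific settings that are injected into the container as environment variables:

```yaml
# Hypothetical environment-specific configuration for the shopping cart UI.
apiVersion: v1
kind: ConfigMap
metadata:
  name: shopping-cart-ui-config
data:
  CART_SERVICE_URL: "http://cart-service:8080"
  LOG_LEVEL: "info"
---
# The container consumes every key in the ConfigMap as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: shopping-cart-ui
spec:
  containers:
  - name: shopping-cart-ui
    image: shopping-cart-ui:v1
    envFrom:
    - configMapRef:
        name: shopping-cart-ui-config
```

Deploying the same image to staging or production then only requires a different ConfigMap (and Secrets for credentials) in each environment, with no change to the code or image.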

  

Factor VI: Processes

“Execute the app as one or more stateless processes.”

Twelve-factor runtime processes (the container processes that run the application) are stateless. Any data that needs to be persisted must be stored in a stateful backing service, such as a database. The shopping cart application implements the shopping cart UI, cart, product, and delivery API microservices as stateless services, and uses MongoDB, Redis, and MySQL as stateful database services.

Factor IV: Backing services

“Treat backing services as attached resources.”

Modern microservices-based applications often consume backing services, such as caching, databases, logging, monitoring, and messaging. For example, the shopping cart application uses Redis as a caching solution, MongoDB for storing product information and user data, and MySQL for the catalog database. The 12-factor app treats all backing services as attached resources: an attached resource can be swapped without changing the application code in case of failures. VMware Application Catalog provides curated images and Helm charts for commonly used open source technologies that can be used to deploy and manage these backing services. You can find more information about VMware Application Catalog and how it simplifies deployment of backing services via Helm charts in this blog post.

Factor VII: Port binding

“Export services via port binding.”

Services need to be exposed for external or inter-service access via well-defined ports. You can use an application load balancer to expose services externally. Within Kubernetes, each service can call another service by its service name, which is resolved by Kubernetes DNS. An important design consideration is to isolate service providers from service consumers via fixed contracts, covering the API, the data, and the access pattern, so that they can evolve independently.
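Port binding in Kubernetes can be sketched with a Service object; the service name, labels, and ports below are illustrative assumptions:

```yaml
# A Service exposing the shopping cart UI on a well-defined port.
apiVersion: v1
kind: Service
metadata:
  name: shopping-cart-ui
spec:
  selector:
    app: shopping-cart-ui   # routes traffic to pods carrying this label
  ports:
  - port: 80                # port consumers use via the DNS name shopping-cart-ui
    targetPort: 8080        # port the container process actually listens on
```

Other services in the cluster can then reach the UI at http://shopping-cart-ui without knowing which pods are backing it.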

Additionally, sophisticated load balancers can be leveraged to perform blue-green and canary deployments to test new versions of service providers or consumers while limiting overall exposure to the running application. For instance, load balancers provided by common service mesh implementations allow you to configure traffic routing and traffic prioritization. Kubernetes has built-in support for front-end traffic management tools like load balancers and ingress controllers via declarative configuration constructs. For VMware Tanzu–based deployments, you can use NSX Advanced Load Balancer as a layer 7 application load balancer to expose services externally. We also recommend using VMware Tanzu Service Mesh to manage and secure service-to-service communication, perform certificate rotation, implement advanced load balancing such as weighted routing, and provide service discovery. In our shopping cart application example, Tanzu Service Mesh is used for the shopping cart service, as shown in the figure below. It has built-in support for showing key KPIs, such as latency, duration, and requests per second for service-to-service communication. The communication between the services is encrypted with mutual TLS (mTLS).

Operate

Factor VIII: Concurrency

“Scale out via the process model.”

The Kubernetes pod-based deployment model truly shines when it comes time to scale out application components to meet changing demands. Stateless processes follow a share-nothing design, which makes scaling out to the desired level of concurrency a simple and reliable operation. Kubernetes ReplicaSets can be used to run concurrent container processes to maintain high availability for critical microservices. For example, three replicas of the shopping cart microservice are running to maintain high availability for the cart service, as shown in the image below.
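This pattern can be sketched with a Deployment, which manages a ReplicaSet under the hood; the names and image tag are illustrative assumptions:

```yaml
# A Deployment keeping three identical, stateless replicas of the cart service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart
spec:
  replicas: 3               # desired level of concurrency; raise to scale out
  selector:
    matchLabels:
      app: shopping-cart
  template:
    metadata:
      labels:
        app: shopping-cart
    spec:
      containers:
      - name: shopping-cart
        image: shopping-cart:v1
```

Because the replicas share nothing, Kubernetes can add or remove them freely and spread traffic across whichever replicas are healthy.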

You can scale out the shopping cart UI or API based on service-level agreement (SLA) parameters by adding more replicas. Tanzu Service Mesh can be used to define service-level objectives (SLOs) for the shopping cart service and to scale in and scale out instances based on the service-level objectives, as shown in the image below.

Tanzu Service Mesh allows selection from a range of metrics, such as latency (p90), traffic (requests), or compute resource metrics (CPU and memory usage), to make scaling decisions.
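In plain Kubernetes terms, comparable scale-out behavior can be sketched with a HorizontalPodAutoscaler; this is an illustrative sketch (names and thresholds are assumptions), not the Tanzu Service Mesh SLO configuration itself:

```yaml
# Autoscaler that keeps between 3 and 10 replicas of the cart Deployment,
# targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shopping-cart-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shopping-cart
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```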

Factor IX: Disposability

“Maximize robustness with fast startup and graceful shutdown.”

The three key principles of disposability are fast startup, graceful shutdown, and responsiveness. For microservices, disposability means that when an application process stops abruptly, users should be minimally impacted and failures should be handled gracefully. This can again be achieved using Kubernetes ReplicaSets, which maintain the desired number of replicas to preserve availability for the microservices, and the Horizontal Pod Autoscaler, which lets you set upper and lower bounds on that number. Capacity planning is an important aspect of designing microservices. You can use Kubernetes requests and limits to declare expected CPU and memory usage: requests ensure Kubernetes reserves the required resources for the microservices, while limits cap their consumption. You can define requests and limits for CPU and memory in the deployment config, as shown below.
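A container spec along these lines might look as follows; the specific values and the grace period are illustrative assumptions:

```yaml
# Resource requests/limits and graceful-shutdown settings for the cart container.
spec:
  terminationGracePeriodSeconds: 30   # time the process gets to shut down cleanly
  containers:
  - name: shopping-cart
    image: shopping-cart:v1
    resources:
      requests:           # reserved for scheduling; the pod is guaranteed this much
        cpu: 250m
        memory: 256Mi
      limits:             # hard ceiling; the container cannot exceed these
        cpu: 500m
        memory: 512Mi
```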

Factor XI: Logs

“Treat logs as event streams.”

It is recommended to have a separate process for routing and processing the logs generated by the application. The microservices must report health and diagnostic information that provides insight into their runtime health so that problems can be detected and diagnosed proactively. It is important to establish standard, consistent practices for log formatting and for how health and diagnostic information is collected for each service. For example, the shopping cart application uses the Fluent Bit agent to collect logs from the different microservices. Common open source technologies like Elasticsearch can be used to aggregate logs from different sources, and Kibana can be used for log visualization. Developers can use any log aggregation solution of their choice.

Routing application logs
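As a rough sketch of this log-routing setup, Fluent Bit is commonly run as a Kubernetes DaemonSet so that one collector runs on every node; the namespace, image tag, and mount path below are illustrative assumptions, and a real deployment also needs a Fluent Bit configuration pointing at the aggregation backend:

```yaml
# One Fluent Bit pod per node, tailing container logs from the host.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.8
        volumeMounts:
        - name: varlog
          mountPath: /var/log    # where container runtimes write log files
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Because log routing runs as its own process, the application containers simply write to stdout/stderr and never manage log files themselves.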

Factor XII: Admin tasks

“Run admin/management tasks as one-off processes.”

Data migration, cache warming, backup, and restore are examples of administrative tasks performed as part of the application lifecycle. Kubernetes Jobs can be used to carry out these administrative tasks. These tasks should be completely decoupled from the application microservices and hence executed as separate processes.
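A one-off data migration can be sketched as a Kubernetes Job; the Job name, image, and command are hypothetical illustrations, not part of the actual shopping cart application:

```yaml
# One-off admin task run to completion, separate from the app's long-running pods.
apiVersion: batch/v1
kind: Job
metadata:
  name: catalog-db-migration
spec:
  backoffLimit: 2          # retry a failed migration at most twice
  template:
    spec:
      restartPolicy: Never # a Job pod runs to completion instead of restarting
      containers:
      - name: migrate
        image: shopping-cart-tools:v1          # hypothetical admin-tooling image
        command: ["/bin/sh", "-c", "run-migrations.sh"]
```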

Additional Tanzu features to augment a 12-factor application lifecycle

End-to-end observability

Modern cloud native applications need to be resilient and available 24-7 to meet business needs. As microservices grow in number, communication between microservices becomes complex. Observability for microservices is critical for gaining visibility into communication failures and reacting to failures quickly and proactively. VMware Tanzu Observability provides full-stack observability for our shopping cart application by supporting three key pillars: metrics, logging, and tracing.

Metrics

Kubernetes metrics, such as CPU and memory usage across nodes, containers, and pods, can be collected by installing the Kubernetes-based agent on the cluster.

CPU and memory usage across containers and pods

Distributed tracing

Our shopping cart application code can be instrumented to collect distributed tracing information to identify a root cause, as shown below.

Distributed tracing for the example shopping cart application

Observability across the entire stack

With Tanzu Observability, it is possible to achieve observability across infrastructure, operating system, Kubernetes, and application, as shown in the image below. This end-to-end visibility of the runtime environment is important for isolating or correlating issues at different layers to quickly perform root cause analysis.

Full-stack dashboard

Logging

Tanzu Observability provides out-of-the-box integration with log collection tools like Elasticsearch (ELK) for data ingestion. With an end-to-end log and metrics view, it is possible to correlate events, set up alerts, and further integrate resolution actions via PagerDuty and Slack.

Audit logs

“Know what, when, who, and where for all critical operations.”

A well-designed microservice should have a clear audit trail of who did what, and when. VMware Tanzu Mission Control collects and stores logs of audit events, including service-level actions as well as cluster-level interactions between Tanzu Mission Control and the provisioned Kubernetes clusters. Each log entry shows what was done, when and where it was done, and who did it, as shown in the example below. This information can then be used to generate audit reports.

A resilient, production-grade enterprise application starts with a sound architectural philosophy. The 12-factor app framework provides a foundation for production-ready design and lifecycle management with clearly articulated criteria. The VMware Tanzu portfolio includes capabilities to implement, deploy, and manage 12-factor apps efficiently.  

If you're starting on a containerization journey to deploy in Kubernetes, note the factors that you may have already applied, and apply any factors that you are missing. Share your findings with others.

Further reading

Beyond the 12-Factor App (blog post)

Establishing an SRE-Based Incident Lifecycle Program (ebook)

Oh, the Microservices You’ll Build! Learn Microservices, from Zero to Hero (ebook)

Responsible Microservices (ebook)

11 Recommended Security Practices to Manage the Container Lifecycle (white paper)
