Preparing and Deploying Kubernetes Workloads
Deploying to Kubernetes can be an involved process, but it is also one that can be instrumented in ways that improve on what you are running today. While writing YAML files hundreds of lines long can be tiresome, there are tools that ease, or even entirely remove, that burden. Additionally, there are features of Kubernetes you can leverage in your application to improve its performance and operation.
Kubernetes does not demand specifics about the applications that run on top of it. They need not be microservices, 12-factor, or adhere to any other particular software philosophy. However, for an application to run well on Kubernetes, there are aspects of your application you may wish to reconsider. Kubernetes is a distributed system whose behavior differs from what many are used to in a traditional environment.
The scripts and systems used in CI/CD pipelines to deploy and update applications are limited by the Kubernetes resources they can manage. In many cases this may be perfectly sufficient. An update to the image in a Deployment spec may be all that is required to update an application, for example. However, this model is often insufficient when dealing with workloads that are stateful, distributed, or complex.
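As a sketch of that simple model, an image update amounts to changing a single field in the Deployment manifest; the names and registry below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          # Bumping this tag and re-applying the manifest is all
          # that is needed to trigger a rolling update of the Pods.
          image: registry.example.com/web:1.4.2
```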
In control theory, a system is observable if its internal states and behavior can be determined by looking only at its inputs and outputs. In software, observability means we can answer most questions about a system’s status and performance by looking from the outside. The system has been instrumented to externalize and make available measurements useful to those responsible for the platform’s success and reliability.
Application Readiness Checklist
This list is a starting place for considerations about your application running on Kubernetes. It is not exhaustive and should be expanded based on your requirements.

Required

The following are items that must be completed before running on Kubernetes.

- Application runs in a container: For the workload to run in a Kubernetes Pod, it must be packaged in a container.
- Application / container is not dependent on host configuration: Pods are scheduled across multiple hosts and may be rescheduled based on the needs of the system.
Assign Pods to Nodes With Bitnami Helm Chart Affinity Rules
To help users implement affinity rules, Bitnami has enhanced its Helm charts by including opinionated affinities in their manifest files. Check out this step-by-step guide to learn how to adapt them to your needs.
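As an illustration of the kind of rule such charts include, a Deployment’s pod template can prefer spreading replicas across nodes with pod anti-affinity. The values below are illustrative, not Bitnami’s exact defaults:

```yaml
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          # Prefer (but do not require) scheduling replicas on
          # different nodes, keyed on the Pod's app label.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app          # hypothetical label
                topologyKey: kubernetes.io/hostname
```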
Best Practices for Creating Production-Ready Helm Charts
This tutorial covers the best practices that any chart developer should follow.
Learn all about Carvel, a set of reliable, single-purpose, composable tools that aid in building, configuring, and deploying your applications to Kubernetes.
Creating Your First Helm Chart
Create your first ever Helm chart and learn what goes inside these packages.
The developer workflow typically involves writing code, executing automated tests, building the application, and running the app locally. In most cases, developers repeat these steps throughout the day, creating a development cycle. The efficiency of the development cycle has a direct impact on the time it takes development teams to ship new features and fix bugs. For this reason, minimizing the time it takes to iterate through the cycle is desirable.
Exporting Application Metrics
Exposing useful metrics is critical to understanding what is happening with your software in production. Without this quantifiable data, it is almost impossible to manage and develop your application intelligently. This guide covers how to expose metrics from your app for collection by Prometheus.

What Makes a Good Metric

A good metric provides quantifiable measurements on a time series that help you understand:

- Application Performance
- Resource Consumption

Application Performance

This category is often expressed as “user experience” and encompasses measurements that indicate whether users or client apps are getting what they should reasonably expect from the application.
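One common way to mark a Pod for Prometheus collection is with scrape annotations. Note these are a convention honored by many Prometheus Kubernetes service-discovery configurations, not a Kubernetes built-in, and the names below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app             # hypothetical Pod name
  annotations:
    # Conventional hints read by Prometheus scrape configs that
    # are set up to honor them; not enforced by Kubernetes itself.
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
    prometheus.io/path: "/metrics"
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0.0
      ports:
        - containerPort: 9090
          name: metrics
```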
Application configuration is anything that varies between environments. For example, stateful applications depend on different database endpoints in testing and production environments. A best practice in cloud-native development is to decouple configuration from code. This means keeping database endpoints and credentials separate from the application’s source code. If your application has environment-specific configuration hard-coded into its repository, VMware recommends refactoring your application to decouple source code from configuration. Runtime Injection Once configuration has been decoupled from source code, configure your application to consume it at runtime by injecting environment variables or mounting a file into the container.
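As a minimal sketch of runtime injection (all names hypothetical), a ConfigMap can supply an environment variable to a container, keeping the endpoint out of the application’s source code:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: postgres://db.staging.example.com:5432/app
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0.0
      env:
        # Injects the ConfigMap entry as an environment variable
        # at runtime; a different ConfigMap per environment lets
        # the same image run unchanged in testing and production.
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DATABASE_URL
```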
Getting Started with kapp
Deploy to Kubernetes using kapp, a tool that provides an easier way to deploy and view all resources created together, regardless of what namespace they’re in.
Getting Started with kapp-controller
This guide will walk you through the basics of kapp-controller and help you get started with it.
Getting Started with Kubeapps
Walk through the process of deploying Kubeapps for your cluster and installing an example application with this step-by-step Kubeapps guide.
Getting Started with Using Helm to Deploy Apps on Kubernetes
Learn how to use Helm to define, install, and upgrade applications and deploy apps on Kubernetes, from setup to configuring and changing values.
Getting Started with ytt
This guide will walk you through the basics of ytt and help you get started with it.
Throughout the lifecycle of an application, running pods are terminated for multiple reasons. In some cases, Kubernetes terminates pods due to user input (when updating or deleting a deployment, for example). In others, Kubernetes terminates pods because it needs to free resources on a given node. Regardless of the scenario, Kubernetes allows the containers running in a pod to shut down gracefully within a configurable period. Pod Shutdown Scenarios The following diagrams depict the possible pod shutdown scenarios.
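The configurable period and shutdown behavior mentioned above map to two Pod spec fields, sketched here with hypothetical names and timings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  # How long Kubernetes waits after sending SIGTERM before it
  # force-kills the container with SIGKILL (default is 30s).
  terminationGracePeriodSeconds: 45
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0.0
      lifecycle:
        preStop:
          exec:
            # Runs before SIGTERM is delivered; a short sleep
            # gives endpoints time to stop routing new traffic
            # while in-flight requests drain.
            command: ["sh", "-c", "sleep 5"]
```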
How to use Harbor Registry to Eliminate Docker Hub Rate Limits
Watch Paul walk through this guide on Tanzu.TV Shortcuts. On August 24, 2020, Docker announced they would be implementing rate limits on Docker Hub, and the limits took effect on November 2, 2020, ending our free ride of unlimited Docker image pulls. Unless you’re a paid customer of Docker, or very lucky, you’ve probably started to see errors like this: “ERROR: toomanyrequests: Too Many Requests” or “You have reached your pull rate limit.”
Installing Harbor on Kubernetes with Project Contour, Cert Manager, and Let’s Encrypt
Looking to run a private container image for self-hosting or enterprise purposes? This guide walks through deploying Harbor to Kubernetes.
Label Best Practices
Labels are a means of describing and identifying the components that make up an application with arbitrary key/value pairs. Labels are attached to Kubernetes API objects at creation time, or can be added, modified, or removed later. Labels are simple pieces of metadata that help with the organization and administration of an application’s lifecycle. Labels are not always arbitrary; some are applied automatically to API objects by Kubernetes, typically via the kubelet.
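Kubernetes documents a set of recommended labels under the app.kubernetes.io/ prefix; applying them to an object’s metadata might look like the fragment below (values are illustrative):

```yaml
# Metadata fragment only; a full Deployment would also need a spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    # Recommended labels documented by Kubernetes; tools like
    # kubectl and dashboards can key off these shared names.
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-prod
    app.kubernetes.io/version: "6.4"
    app.kubernetes.io/component: frontend
    app.kubernetes.io/part-of: blog
    app.kubernetes.io/managed-by: helm
```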
Logging Best Practices
Logs help you monitor events and debug problems in an application. As the complexity of a cloud-native environment grows with the number of deployed applications, debugging and monitoring become more difficult. Organizations can maintain observability and the ability to troubleshoot by adhering to the following guidelines.

Logging Types

Cloud-native platforms handle application logging in three primary ways:

- Centralized Logging: An external system pulls logs from the stdout of your application. You do not need to configure anything.
Microservices with Spring Cloud Kubernetes Reference Architecture
This Reference Architecture demonstrates the design, development, and deployment of Spring Boot microservices on Kubernetes. Each section covers architectural recommendations and configuration for each concern when applicable. High-level key recommendations:

- Consider Best Practices in Cloud Native Applications and The 12 Factor App
- Keep each microservice in a separate Maven or Gradle project
- Prefer using dependencies when inheriting from the parent project instead of using a relative path
- Use Spring Initializr, a web application that can generate a Spring Boot project structure: fill in your project details, pick your options, and download a bundled-up project

This architecture demonstrates a complex Cloud Native application that addresses the following concerns:
In Kubernetes, the desired state of the system is declared via resources sent to the API server. Resources are stored as JSON or YAML files called manifests. Managing manifests can be cumbersome, but there are many tools that can help. To inform tooling choices, it is helpful to define the nature of the problems commonly encountered and identify the approaches each tool takes to address them.

Challenges

Value Duplication

Managing manifests as flat data files (YAML) violates the DRY principle.
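One way to address value duplication, and the approach taken by ytt (covered elsewhere in this guide), is to extract repeated values into data values and reference them from a template. A minimal sketch with hypothetical names, rendered with something like `ytt -f deployment.yml -f values.yml`:

```yaml
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
  #! The app name is templated once instead of being repeated
  #! in the selector, pod labels, and container name below.
  name: #@ data.values.name
spec:
  replicas: #@ data.values.replicas
  selector:
    matchLabels:
      app: #@ data.values.name
  template:
    metadata:
      labels:
        app: #@ data.values.name
    spec:
      containers:
        - name: #@ data.values.name
          image: #@ data.values.image
```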
Probing Application State
Adding probes to your application provides two critical pieces of information to the system running it. Is the application ready to receive traffic? Is the application healthy? Cloud Native platforms have methods to probe the application and answer these questions. In the case of Kubernetes, the kubelet (the Kubernetes agent that runs on every host) can execute a command inside the container, make an HTTP request, or open a TCP connection.
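The two questions above map directly to a container’s readiness and liveness probes; a minimal sketch, with hypothetical names, ports, and endpoint paths:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0.0
      ports:
        - containerPort: 8080
      # Is the application ready to receive traffic? While this
      # probe fails, the Pod is removed from Service endpoints.
      readinessProbe:
        httpGet:
          path: /ready          # hypothetical endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      # Is the application healthy? Repeated failures cause the
      # kubelet to restart the container.
      livenessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20
```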
Troubleshooting Applications on Kubernetes
Common steps for troubleshooting applications running on Kubernetes.
What Is Helm?
Learn the basics of Helm, a tool to help you define, install, and upgrade applications running on Kubernetes, and explore how it works.