Building Your Kubernetes Platform
The tooling you add to your Kubernetes cluster is what makes it an application platform. Networking, management, and more turn it from a solution for orchestrating containers into a platform for all of your workloads.
Autoscaling Reference Architecture
Guidance for autoscaling application workloads and cluster compute resources
Calico Reference Architecture
A reference architecture for running the Calico CNI in Kubernetes
Challenges Managing Multiple Clusters Across Multiple Clouds
While Kubernetes provides a rich and capable environment for modern applications, it introduces a lot of moving parts and day-2 operational issues. How do you create and enforce security policy in a highly fluid environment? How do you make sure that your identity and access control systems are configured correctly? How do you make certain that everything stays properly configured? These challenges are hard enough to get right in a single Kubernetes cluster, but we don’t live in a world of single Kubernetes clusters.
Contour Reference Architecture
A reference architecture for implementing Contour as a Kubernetes ingress controller
Controlling Ingress with Contour
Use Contour to quickly deploy cloud native applications by using the flexible IngressRoute API
Forwarding Client Certificates with NGINX Ingress
A look at annotations to configure Kubernetes NGINX Ingress for forwarding client certificates
Getting Started with Contour - To Ingress and Beyond
Contour is an open source Kubernetes ingress controller that acts as a control plane for the Envoy edge and service proxy. Contour supports dynamic configuration updates and multi-team ingress delegation while maintaining a lightweight profile. Built for Kubernetes, Contour empowers you to quickly deploy cloud native applications using the flexible HTTPProxy API, a lightweight system that provides many of the advanced routing features of a service mesh.
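As an illustration of the HTTPProxy API described above, here is a minimal sketch of an HTTPProxy object that routes all traffic for a virtual host to a backing Service. The hostname, object names, and Service details are hypothetical placeholders, not taken from the article:

```yaml
# Minimal HTTPProxy sketch: routes app.example.com to a Service named "web".
# Names and the FQDN are hypothetical.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: basic
  namespace: default
spec:
  virtualhost:
    fqdn: app.example.com     # hypothetical external hostname
  routes:
    - conditions:
        - prefix: /           # match all request paths
      services:
        - name: web           # hypothetical backing Service
          port: 80
```

Because HTTPProxy is a namespaced Kubernetes resource, per-team routes like this can be delegated to individual namespaces, which is what enables the multi-team ingress delegation mentioned above.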
Getting Started with VMware Tanzu Application Platform Beta on KIND, part 1
A guide for installing the VMware Tanzu Application Platform Beta locally, on KIND
Getting Started with VMware Tanzu Application Platform Beta on KIND, part 2
A guide for utilizing Tanzu Application Platform locally, on KIND
Fundamental to the deployment of most software is the ability to route traffic to network services. This is especially true when the software platform adopts a microservices architecture. Traditionally, exposing such services has been an arduous task. Concerns such as service discovery, port contention, and even load balancing were often left as an exercise for the operator. These capabilities were, no doubt, available, but were often configured and operated through manual user intervention.
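In Kubernetes, much of that manual work is absorbed by the platform itself: a Service object gives a workload a stable DNS name for discovery, decouples the externally visible port from the container port, and load balances across all matching pods. A minimal sketch, with a hypothetical service name and label:

```yaml
# Sketch of a ClusterIP Service: pods labeled app=checkout become reachable
# at a stable in-cluster DNS name, with traffic balanced across them.
# The name "checkout" and its labels are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: checkout
  namespace: default
spec:
  selector:
    app: checkout         # hypothetical pod label to match
  ports:
    - port: 80            # port clients connect to
      targetPort: 8080    # port the container actually listens on
```

Other pods in the cluster can then reach this workload at `checkout.default.svc.cluster.local` without knowing which hosts or ports the individual pods occupy.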
Sonobuoy for CNCF Conformance and Kubernetes
Using Sonobuoy for cluster conformance testing with Kubernetes
STIG Compliant Tanzu Kubernetes Grid 1-click install into an air-gapped environment
This blog post walks you through installing a Security Technical Implementation Guide–hardened Tanzu Kubernetes Grid for multi-cloud clusters with Federal Information Processing Standards enabled on Amazon Web Services.
Kubernetes is an inherently multi-tenant system. The term tenant can have many meanings; for the purposes of this page, we consider a workload (e.g., a Kubernetes pod) to be a tenant. In most Kubernetes environments, pods are scheduled alongside other pods on the same hosts. Kubernetes has features, such as namespaces, that provide the illusion of workload boundaries. However, odds are that pods in different namespaces will run together on the same host.
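Since a namespace alone does not control where a pod lands, host-level separation has to be requested explicitly through scheduling constraints. One common pattern, sketched below with hypothetical names and labels (the node label, taint, and image are illustrative assumptions, not a prescribed configuration), combines a nodeSelector with a matching toleration so that a sensitive workload only runs on dedicated, tainted nodes:

```yaml
# Sketch: pinning a sensitive pod to dedicated nodes.
# The node label, taint key/value, and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
  namespace: payments
spec:
  nodeSelector:
    workload-class: isolated      # only schedule onto nodes with this label
  tolerations:
    - key: "workload-class"       # tolerate the taint keeping other pods off
      operator: "Equal"
      value: "isolated"
      effect: "NoSchedule"
  containers:
    - name: app
      image: payments-api:1.0     # hypothetical image
```

The taint keeps ordinary pods off the dedicated nodes, and the toleration plus nodeSelector keeps this pod on them, turning the namespace's logical boundary into an actual host boundary.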