Go from Tricky Complexity to Radical Simplicity by Automating Networking for Kubernetes Clusters

January 30, 2019 Joseph Griffiths

Out of the box, open-source Kubernetes struggles to provide secure multi-tenant ingress to clusters, which can make it a challenge to stand up the Kubernetes API and worker nodes with all the required networking. You can, however, radically simplify many operational aspects of running Kubernetes in production by using VMware PKS, and the networking that is automated when a cluster is created serves as a prime example.

To illustrate how VMware PKS automatically sets up networking, this blog post provides a deeper dive into the networks that are created when you issue the following command:

$ pks create-cluster my-cluster -e my-cluster.corp.local -p small

This command creates a new Kubernetes cluster named my-cluster, with an external name of my-cluster.corp.local, using the small plan. Plans are defined as part of the VMware PKS installation and can be resized at any time. Each plan defines:

  • The number of master/etcd nodes and their size
  • The number of worker nodes and their size
  • The availability zone to use
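
You can list the plans available in your installation with the pks plans command, which prints each plan's name, ID, and description (the set of plans varies by installation):

$ pks plans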

You can see the small plan inputs in the following screenshot:

You can check the status of the create-cluster operation with the pks cluster command:

$ pks cluster my-cluster

Name:                     my-cluster
Plan Name:                small
UUID:                     02bf0307-b6cd-4545-b9f2-1f07d5890e7f
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Instance provisioning in progress
Kubernetes Master Host:   my-cluster.corp.local
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  In Progress
Network Profile Name:
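
Once Last Action State reports succeeded, a typical next step is to fetch credentials for the new cluster and point kubectl at it:

$ pks get-credentials my-cluster
$ kubectl cluster-info

The get-credentials command adds a context for my-cluster to your kubeconfig and makes it the active context, so the kubectl commands later in this post target the new cluster.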

Once you issue the command, the master/etcd and worker nodes are deployed along with all the required networking. Several networks are created during cluster creation, and all of them include the cluster's UUID, which makes them simple to track in NSX-T. Searching in NSX-T for the UUID provides the following information:

As you can see, the operation has created several logical routers to handle VMware PKS traffic:

  • One T1 router for the Kubernetes master node (pks-UUID-cluster-router)
  • One T1 router for the load balancer (lb-pks-UUID-cluster-router)
  • Four T1 routers, one per namespace; you can list the namespaces with the following command:
$ kubectl get ns -o wide

NAME          STATUS    AGE
default       Active    22h
kube-public   Active    22h
kube-system   Active    22h
pks-system    Active    22h
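
If you prefer an API to the NSX-T UI, you can enumerate the same routers through the NSX-T Manager REST API. Here is a minimal sketch, assuming a manager reachable at the placeholder address nsx-manager.corp.local; because every router created for the cluster embeds the cluster UUID in its display name, grepping for the UUID isolates them:

$ curl -ks -u admin https://nsx-manager.corp.local/api/v1/logical-routers | python -m json.tool | grep 02bf0307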

To see what is running inside each namespace, you can run the following command:

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   heapster-6d5f964dbd-qnpdd               1/1       Running   0          22h
kube-system   kube-dns-6b697fcdbd-7zh4f               3/3       Running   0          22h
kube-system   kubernetes-dashboard-785584f46b-pdtp6   1/1       Running   0          22h
kube-system   metrics-server-5f68584c5b-fl2fs         1/1       Running   0          22h
kube-system   monitoring-influxdb-54759946d4-s7p58    1/1       Running   0          22h
kube-system   telemetry-agent-7c944bb46b-8mrp4        1/1       Running   0          22h
pks-system    fluent-bit-gr26h                        1/1       Running   0          22h
pks-system    fluent-bit-lrwrb                        1/1       Running   0          22h
pks-system    fluent-bit-tfjb6                        1/1       Running   0          22h
pks-system    sink-controller-578859d5f-xlx8m         1/1       Running   0          22h

Here's a description of what each namespace is used for:

Namespace     What it is used for
default       The default namespace for containers
kube-public   Used by cluster communications
kube-system   Heapster, kube-dns, kubernetes-dashboard, metrics-server, monitoring-influxdb, telemetry-agent
pks-system    Fluent Bit, sink-controller

When you add namespaces to the Kubernetes cluster, additional T1 routers are deployed. With VMware PKS, all of this is handled automatically, making it simple to deploy a Kubernetes cluster with integrated networking. This is best illustrated by adding a namespace called new-namespace to our cluster using this command:

$ kubectl create namespace new-namespace

namespace/new-namespace created

You can see the new namespace by using the following command:

$ kubectl get ns -o wide

NAME            STATUS    AGE
default         Active    23h
kube-public     Active    23h
kube-system     Active    23h
new-namespace   Active    38s
pks-system      Active    22h
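
Because NSX-T allocates a subnet to each namespace behind its T1 router, a pod scheduled into new-namespace receives an address from that subnet. As a quick check (the pod name and image here are arbitrary):

$ kubectl run test-pod --image=nginx --restart=Never -n new-namespace
$ kubectl get pod test-pod -n new-namespace -o wide

The -o wide output includes the pod's IP address, which falls inside the subnet NSX-T carved out for the namespace.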

In NSX, you can use the UUID to check that a new T1 router has been deployed for the new namespace.
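
The API sketch from earlier works here too; grepping the logical router list for the namespace name confirms the new T1 router:

$ curl -ks -u admin https://nsx-manager.corp.local/api/v1/logical-routers | python -m json.tool | grep new-namespace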

Removal of the namespace also cleans up all the networking constructs, making the experience seamless for end users:

$ kubectl delete namespaces new-namespace

namespace "new-namespace" deleted

In NSX, you can see that the T1 router for new-namespace has been removed.

As illustrated, the tight integration between Kubernetes and NSX-T built into VMware PKS allows for easier administration of container-based environments.


About the Author

Joseph Griffiths is a virtualization-focused architect who has deployed and architected complex cloud-based solutions. He is business-driven with strong people skills and a determination to deliver well-documented, repeatable results. Honest and hardworking, with an interest in all technology, he has managed many technical projects and implementations to successful completion. He also has the honor of being a double VMware Certified Design Expert (#143). Joseph enjoys public speaking and education opportunities. He blogs about technology at blog.jgriffiths.org and can be found on Twitter @Gortees.
