Integrating Istio with VMware Enterprise PKS

March 12, 2019 VMware Tanzu

By Pranay Bakre, Alka Gupta, Kendrick Coleman

The adoption of modern distributed architectures has made it hard for enterprises to monitor, manage, and secure services in a consistent way. The challenges include having each individual service handle retries, flow control, circuit breaking, and authentication and authorization across an increased attack surface, as well as meeting service-level monitoring requirements and maintaining cross-platform operations tool chains. Istio is an open source service mesh that connects, secures, controls, and observes services in a Kubernetes environment. It provides a modular set of services and components, including:

  • Sidecar proxies (Envoy): Handle ingress-egress traffic between services in the cluster and from a service to external services transparently
  • Pilot: Configures the proxies at runtime
  • Mixer: Enforces ACLs, rate limits, quotas, authentication, request tracing, and telemetry collection
  • Certificate authority: Issues and rotates security certificates for service identities
  • Initializer: Injects sidecar proxies
  • Ingress: Manages external access to the services

As part of the Istio integration with Kubernetes, an Envoy proxy is deployed as a sidecar in the same Kubernetes pod as the relevant service. Envoy transparently mediates all inbound and outbound traffic for every service in the mesh, which allows Istio to extract a wealth of signals about that traffic. The sidecar proxy model also lets you add Istio capabilities to an existing deployment with no need to rearchitect or rewrite code.

Below is a comparative view of a Kubernetes pod before and after integrating Istio.  

Below is an architecture of the Istio deployment in Kubernetes.


Compatibility with VMware Enterprise PKS

VMware Enterprise PKS, which is certified by the Cloud Native Computing Foundation (CNCF) through its Kubernetes Software Conformance Certification program, is compatible with upstream open source Kubernetes. Integrating open source Istio with VMware Enterprise PKS is straightforward.

VMware Enterprise PKS improves operational efficiency in deploying, running, and managing Kubernetes clusters in production with faster time to value.

This blog showcases how an Istio service mesh can be created and integrated easily with Kubernetes clusters provisioned by VMware Enterprise PKS.  

Creating an Istio Service Mesh

Follow the steps below to create an Istio service mesh in VMware Enterprise PKS and deploy a sample application.

Prerequisites:

  • Make sure you have a running VMware Enterprise PKS 1.2 or 1.3 environment; see the installation documentation. The integration uses features such as network profiles and medium load balancers.
  • For manual Istio sidecar injection in the pods, ensure there’s at least one plan in the PKS tile with ‘Enable Privileged Containers – Use with Caution’ and ‘Disable DenyEscalatingExec’ fields checked. (Not required for automatic Istio sidecar injection; details provided below.)
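With the prerequisites in place, log in to your PKS API endpoint with the VMware Enterprise PKS CLI so that the pks commands below can run. The hostname and credentials here are placeholders for your own environment, and the exact flags may vary slightly between PKS CLI versions:

pks login -a <PKS-API-HOSTNAME> -u <USERNAME> -p <PASSWORD> --skip-ssl-validation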

1. Istio requires a large number of virtual servers. Define a medium-sized load balancer by creating a JSON file named network-profile-medium.json:

network-profile-medium.json:

{
    "description": "network profile with medium size LB",
    "name": "network-profile-medium",
    "parameters": {
        "lb_size": "medium"
    }
}

2. Create a network profile using the VMware Enterprise PKS CLI and the network-profile-medium.json file created above: pks create-network-profile network-profile-medium.json

Verify the successful creation of network profile with the following command: pks network-profiles

The output should confirm that the network profile was created and list it by name.

3. Create a Kubernetes cluster using the CLI and specify the network profile created in the previous step: pks create-cluster medium-lb-cluster --external-hostname mediumlb --plan large --num-nodes 4 --network-profile network-profile-medium  
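Cluster provisioning takes several minutes. As an optional check, you can monitor the status of the new cluster with the PKS CLI and wait until the last action is reported as succeeded:

pks cluster medium-lb-cluster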

 4. Switch to the newly created cluster’s context and check if the Kubernetes nodes can be accessed: pks get-credentials medium-lb-cluster  
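Once the credentials have been fetched, a quick way to confirm that the worker nodes are reachable is to list them with kubectl:

kubectl get nodes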

5. Download the Istio bits and add its binary location to the PATH variable. Execute the following command to download Istio: curl -L https://git.io/getLatestIstio | sh -  
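The archive extracts into a versioned directory (istio-1.0.5 in this walkthrough). One way to put the istioctl binary on your PATH for the current shell session, assuming that directory name, is:

export PATH=$PWD/istio-1.0.5/bin:$PATH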

6. Run the istioctl command to verify that the Istio command-line utility is working: istioctl
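For example, printing the client version confirms that the binary is installed and on your PATH:

istioctl version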

Note: Istio can also be installed using Helm. Follow the tutorial in this GitHub link to install Istio by using Helm.  
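As a rough sketch of the Helm-based path, assuming Helm is already installed and using the charts shipped with the Istio release, you can render the chart locally and apply it instead of the istio-demo-auth.yaml used below:

helm template install/kubernetes/helm/istio --name istio --namespace istio-system > istio.yaml
kubectl create namespace istio-system
kubectl apply -f istio.yaml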

7. Create Custom Resource Definitions (CRDs) or use the ones provided with Istio  

8. For using the CRDs provided by Istio, change the current directory to the extracted directory of Istio: istio-1.0.5  

9. Apply the crds.yaml file located at install/kubernetes/helm/istio/templates/crds.yaml: kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

10. Use kubectl get crd to look at all the CRDs created  
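All of the Istio CRDs belong to *.istio.io API groups, so a quick way to filter for them is:

kubectl get crd | grep istio.io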

11. Install Istio on the cluster with mutual TLS authentication: kubectl apply -f install/kubernetes/istio-demo-auth.yaml  

12. Verify the installation is complete by checking that the Istio pods are running: kubectl get pods --namespace istio-system  


13. Also, check the services in the istio-system namespace: kubectl get services --namespace istio-system


14. An Istio sidecar needs to run in each pod in the service mesh. There are two ways of injecting sidecars: manual injection and automatic injection. Manual injection is preferable when you want to keep the option of deploying pods to the default namespace without a sidecar later on. The same outcome can be achieved by deploying applications to a dedicated namespace and enabling automatic sidecar injection only for that namespace, but manual injection gives you more flexibility in the default namespace.

15. Follow step ‘a’ for injecting the sidecar into the deployment manually or step ‘b’ for automatic sidecar injection:

a. The sample application provided with Istio uses the default namespace. Find the application YAML file under the Istio directory extracted before. Execute the following command to deploy the application: kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

b. Enable automatic sidecar injection for the default namespace using this command: kubectl label namespace default istio-injection=enabled

And then deploy the application using this command: kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml  
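Whichever injection method you use, you can confirm that the sidecar was added by listing the containers in one of the application pods; each pod should show an istio-proxy container next to the application container. For example, for the Bookinfo productpage pod:

kubectl get pods -l app=productpage -o jsonpath='{.items[0].spec.containers[*].name}'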

16. Check the pods and services of the application using the following command: kubectl get pods

If the pods are not running, describe them to troubleshoot the cause of the error: kubectl describe pods


17. Now, to access this application from outside the cluster, configure the Istio ingress gateway for the application using the following command: kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

18. Verify that the gateway resource has been created with kubectl get gateway. Then locate the External IP field of the istio-ingressgateway service and use it to access the application: kubectl get services istio-ingressgateway --namespace istio-system
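As a sketch of putting this together, assuming the load balancer has assigned an external IP to the istio-ingressgateway service and that the HTTP port is named http2 (the default in the demo profile), you can fetch the Bookinfo product page with:

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
curl -s http://$INGRESS_HOST:$INGRESS_PORT/productpage | grep -o "<title>.*</title>"

You can also simply open http://<EXTERNAL-IP>/productpage in a browser.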

19. Using Istio, you can configure rules to control the routing of traffic within the service mesh. Locate the YAML file for the default destination rules and deploy it: kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml  
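To inspect the rules that were just applied (a read-only check), list the destination rules:

kubectl get destinationrules -o yaml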

20. Observe the application's behavior by reloading the page. This blog is a simple illustration of how easily Istio can be set up on Kubernetes clusters provisioned by VMware Enterprise PKS.
