
Kubernetes Ingress resources and controllers provide higher-level routing capabilities, such as HTTP, for services running on your cluster. In this lesson, you'll learn about these resources and see how Contour implements them.

Steve Sloka

Sr. Member of Technical Staff at VMware

Steve is a Sr. Member of Technical Staff at VMware working as a maintainer of Contour as well as a Kubernetes contributor since early 2015.


Hi, I'm Steve Sloka. I'm a senior member of technical staff at VMware, and I'm also a Contour maintainer. In this video, we're going to talk about what Ingress is and how you can implement Contour as an Ingress controller for Kubernetes. Let's get started.

So first off, what is Ingress? When I think of the word Ingress, I think of incoming. We're going to talk about how we can get traffic from outside of our cluster, into our cluster, and route it to some backend application. Now, why use Ingress? Why would you want to implement this technology?

Well, what we want to do first is provide traffic consolidation. Right? So find a way to have all of our traffic route through one entry point into our cluster. The alternative is to have every application have its own way to manage that incoming traffic. Right? And depending on how many applications you deploy, you could have hundreds and hundreds of these different applications' entry points into your cluster. So Ingress, we can consolidate all of those into one place.

Now, when everything's consolidated into one entry point, we can manage all of the TLS certificates at that entry point as well. Right? So we can have a set of certificates, all managed in one place, without having each backend application deal with them individually. We can abstract our configuration away, so the applications don't have to know or care about how they get deployed and how they get accessed from outside of the cluster. We can just have the applications be deployed and rely on Ingress to provide that abstraction and connectivity into the cluster.

And Ingress gives us path-based routing, which we sometimes call L7, or layer seven of the stack. And what that means is that we can route requests based on the path. So /foo, for instance, can route to one application, and /foo/bar can route to a different one.

Here's an example of what an Ingress controller might look like. So you can see we have requests from the internet or outside of the cluster. They hit some sort of load balancer. The load balancer's job is to then send traffic across all the replicas of your Ingress controller. Right? And that's to provide capacity to your cluster. Once the Ingress controller receives a request, it inspects it, again, that layer seven, and then it routes it to the right place in the application or into the cluster, depending on how you've configured it.

So an example here, you can see on the top, we have a get request for xyz.com. And you can see it hits the load balancer, comes into the Ingress controller, and it's inspected. The Ingress controller, based on its configuration, decides this should go to the web application, and it routes that to those two green boxes on the top. Similarly, there is a request to xyz.com/blog. Again, it comes into the load balancer, hits the Ingress controller. And the Ingress controller says, "Oh, /blog should route to the blog application." And you can see that request in purple going down to the two green boxes in the bottom.

This is what the Ingress resource might look like. So you can see the version we're working with is networking.k8s.io/v1. Now, the v1 version is new in Kubernetes 1.19. If you're using a version prior to this, you might be using v1beta1. Now, the kind here is Ingress, and you can see we have some rules under the spec. We can see that there's a path rule for /blog, and that /blog routes to the service named blog over service port 80. So any request to /blog is going to route to the application in Kubernetes called blog on service port 80.
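A minimal sketch of the resource being described, using the networking.k8s.io/v1 schema; the metadata name is illustrative, since the video only narrates the rules:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog-ingress    # name assumed for illustration
spec:
  rules:
  - http:
      paths:
      - path: /blog
        pathType: Prefix
        backend:
          service:
            name: blog      # the backend Service
            port:
              number: 80    # the Service port
```

In v1beta1, the backend would instead be written as `serviceName: blog` and `servicePort: 80`, which is one of the schema changes the v1 API introduced.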

Let's talk about how Contour implements itself as an Ingress controller. So first off, Contour is a CNCF project. Contour is open source, so it's free to use. Now, Contour's architecture diagram looks very similar to the one we looked at previously, which outlined how Ingress controllers work generically. But you'll notice there are now two boxes where we had one before. The same thing is going to happen: a request comes in, it hits the load balancer. And then, in Contour's case, Contour leverages Envoy as its data path component. So all traffic routes through Envoy, and Envoy is actually doing the routing of the requests.

Contour's job is to look at the cluster and watch for things like services, endpoints, secrets, and Ingress objects. When any of those things change in the cluster, Contour builds a new configuration and passes that down to Envoy. Now, in this relationship, Envoy is the client and Contour is its server. And in Envoy-speak, Contour implements the xDS protocol; it's the xDS server for Envoy. What this means is that this is how Contour configures Envoy's clusters, listeners, and all those different bits in Envoy.

So Contour can stream those changes down to Envoy in near real-time and not require any restarts, which is a great benefit. Again, anytime something changes in the cluster, Contour will rebuild that configuration and pass that down to Envoy.

Now, let's take a look at something that's unique to Contour: HTTPProxy, our custom resource definition, or CRD. This exists to solve a couple of problems that we've had with Ingress. One goal we had for this CRD is to safely support Ingress in multi-team clusters. This is where you have lots of different teams working in a single cluster, and you want them to be able to self-manage their own resources.

Contour implements this safely through something called delegation of routing. With that L7, path-based routing, you can carve off different portions of those paths and delegate permissions to different teams in different namespaces. With those delegations, they can then self-manage their own resources and not worry about breaking other teams in the cluster.

We also wanted to provide a sensible home for common configuration parameters, things like 301 redirects, all of those things that commonly exist as annotations today. We wanted to find a good spot in the spec of this CRD to place those. Contour also supports the idea of delegation of TLS secrets. When you have a certificate you want to attach to the entry point for your Ingress object, sometimes you just don't want to let other teams have access to the secrets and keys, so Contour allows you to keep those secrets in one namespace but allow users in different namespaces to actually consume them.
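A hedged sketch of what TLS secret delegation looks like with the HTTPProxy CRD; the namespaces, secret name, hostname, and service are all illustrative, not taken from the video:

```yaml
# The team that owns the certificate grants other namespaces access to it.
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: wildcard-cert
  namespace: certs          # namespace that holds the secret (assumed)
spec:
  delegations:
  - secretName: wildcard
    targetNamespaces:
    - team-a                # team-a may now reference certs/wildcard
---
# A team consumes the delegated secret via a namespace/name reference.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: blog
  namespace: team-a
spec:
  virtualhost:
    fqdn: blog.example.com  # hostname assumed for illustration
    tls:
      secretName: certs/wildcard   # cross-namespace reference enabled above
  routes:
  - conditions:
    - prefix: /blog
    services:
    - name: blog
      port: 80
```

The team in `team-a` never needs read access to the secret itself; Contour resolves the cross-namespace reference only because the delegation allows it.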

Let's take a quick look at how Contour gets deployed. Okay, what I have here is a cluster running in kind on my laptop. It's a single-node cluster, and it's running version 1.18. So the first thing I want to do is deploy Contour. We'll hop over here to the projectcontour.io website. This is a great resource to learn more about Contour, its architecture, the CRD we just looked at, and anything else really. We'll go ahead and click on this Getting Started link.

And down here, we'll grab this command, and this is basically a single YAML file. We provide this quick-start, which puts all the bits you might need into one YAML file so that you can easily deploy Contour. Alright, so let's go here to our terminal, and we'll go ahead and apply this. This will create all the different things that we need in our cluster. Alright, let's go ahead and get our pods in the projectcontour namespace.
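The commands being run here are roughly the following; the quick-start URL is the one published on the Getting Started page, and both commands assume a running cluster:

```shell
# Apply the single-file quick-start manifest for Contour.
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

# List the pods that were just created.
kubectl get pods -n projectcontour
```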

Now this is the default place where Contour gets deployed. Again, you can customize this if you'd like. But let's dig into some of what we just deployed. The first thing you see here is we have two Contour pods. Now again, Contour is the server and Envoy is the client, so we have multiple instances to provide redundancy for Envoy. Contour runs as a Deployment in Kubernetes. Right? So you can have as many of those as you'd like. Envoy runs as a DaemonSet, and this means we're going to have one instance of Envoy per node.

Again, when Envoy spins up, it looks for Contour, and Contour then streams down its configuration. This last bit here is the contour-certgen pod, and it's actually a Job. What it does is secure the communication between Contour and Envoy, because they're running in different pods in the cluster. It's just generating self-signed certificates to secure that traffic. But if you'd like, you can always swap in your own certificates.

Let's go ahead and get our pods again. Here we can see that everything is up and running: our two Contour pods are running, and our Envoy pod is running. What we need next is a routing object and an application to deploy. Right? And that's what I have here. This is a very simple HTTPProxy object. It's going to host the local.projectcontour.io domain, and we're going to handle the / prefix, meaning all paths. So it's going to handle the root of this domain name.

And when we get a request over this URL to this path, it's going to route it to the rootapp application running on port 80. This other bit here is just some setup. Right? It's creating a namespace for me, and a deployment so that the sample application is running. It runs a simple echo server, so whatever request we send it, it just echoes back to us. This is really useful when you're trying to understand how requests get routed and how the different path-based configurations come into play. It's very easy to see what's happening with this echo server.
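The routing half of the demo can be sketched as the HTTPProxy below; the namespace, service name, and echo image used in the video aren't shown on screen in this transcript, so those are assumptions:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: basic
  namespace: rootapp        # namespace assumed; the video creates it in the same file
spec:
  virtualhost:
    fqdn: local.projectcontour.io
  routes:
  - conditions:
    - prefix: /             # handle all paths under this domain
    services:
    - name: rootapp         # the echo server's Service (name assumed)
      port: 80
```

Alongside this, the demo file would contain the Namespace, a Deployment running an echo-server image, and a Service exposing it on port 80.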

Alright, let's go ahead and apply this. We'll kubectl apply the demo file. Alright, we created the namespace, the deployment, the service for the deployment, as well as the HTTPProxy resource. So let's go ahead and get our proxies. And you can see here, I've got one, which is the one we just created. It's in the root namespace, the name is basic, here's that fully qualified domain name, and the status is valid. So Contour provides status feedback to the user to understand if there's something wrong.

So if you had some sort of configuration error or something else that was incorrect in your setup, Contour would give you information back here to understand what's wrong. Now that it's valid, what we can do is go ahead and curl it. So we'll curl local.projectcontour.io. And there you go, there's that echo server I talked about. You can see what the application's configured with, here's the request that we sent, local.projectcontour.io, and here are the headers on the request.

Now, if I do a similar request to /blog, what you'll see is the same application respond, with the request path /blog. And this is because we only have one application configured in our routing object, and that's for /. So, basically, any path that you send to the server is going to get a response from the same application.
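The two requests above can be reproduced roughly as follows; this assumes local.projectcontour.io resolves to the cluster's entry point, and the exact echoed output depends on the echo image the demo uses:

```shell
# Hits the / route; the echo server reflects the request back.
curl http://local.projectcontour.io/

# Same application answers, since only the / prefix is configured.
curl http://local.projectcontour.io/blog
```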

Alright. Well, thank you very much for attending. Hopefully, this was a great intro to Ingress and how Contour can be implemented as an Ingress controller. Thank you.
