Demonstrating Certificate Management by Deploying Harbor with an SSL Endpoint

June 12, 2019 Tom Scanlan

This post is a continuation of a blog series that highlights an easy path forward for operations teams that need to up their certificate-management game for Kubernetes. The first entry covered the tools you can use for automated certificate management. This entry deploys the Harbor container registry with an SSL endpoint to show the tools in use. The final entry, to be published soon, summarizes two alternatives that may work better with your existing certificate workflow and help improve developer velocity and production robustness.

TLS for Harbor

In this example, we’ll stand up a Harbor registry using a Helm chart and verify that it is serving TLS properly. Harbor is an open source registry that stores container images, signs them as trusted, and scans them for vulnerabilities.

Dependencies

The first dependency is a Kubernetes cluster with Contour, Tiller, and cert-manager installed. Contour is an ingress controller for Kubernetes. Tiller is the in-cluster component of Helm that interacts with the Kubernetes API server to install Kubernetes resources. For information about cert-manager, see the first blog post in this series.

Assuming you have Contour, Tiller, and cert-manager installed in your cluster, you can set up cert-manager to issue certificates by putting the following ClusterIssuer resources into a file and applying them to your cluster. Generally, operations will be the team doing this, and they will get notifications via email when certificate renewals occur. It is also possible for developers to own this configuration in dev and test environments.

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  # This issuer has strict rate limits, so use it only after
  # any bugs in your ingress stanzas have been worked out
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: youremail@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  # This issuer will not give trusted certificates, but it has high rate limits,
  # so it can be used for testing initial certificate generation
  name: letsencrypt-stage
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: youremail@example.com
    privateKeySecretRef:
      name: letsencrypt-stage
    # Enable the HTTP-01 challenge provider
    http01: {}

Assuming you’ve put this code in a YAML file named issuers.yml, here’s the command to apply it to your cluster:

kubectl apply -f issuers.yml

When you installed Contour, a load balancer service was created to listen for ingress traffic. You can find the address of the load balancer by running the following command and looking at the LoadBalancer Ingress line:

kubectl describe svc -n heptio-contour contour

Name:                     contour
Namespace:                heptio-contour
...snip...
Selector:                 app=contour
Type:                     LoadBalancer
IP:                       10.0.0.77
LoadBalancer Ingress:     contour.incident-reporting-9b14fb86-8887-11e9-8c92-12d4897b3f98.57197a32-43f3-46f6-bd80-bc65267b6d7c.vke-user.com
Port:                     http  80/TCP
TargetPort:               8080/TCP
NodePort:                 http  30436/TCP
Endpoints:                10.2.1.3:8080,10.2.1.4:8080
Port:                     https  443/TCP
TargetPort:               8443/TCP
NodePort:                 https  30405/TCP
Endpoints:                10.2.1.3:8443,10.2.1.4:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   

Any traffic directed at this ingress address (here it is contour.incident-reporting-9b14fb86-8887-11e9-8c92-12d4897b3f98.57197a32-43f3-46f6-bd80-bc65267b6d7c.vke-user.com) will be handled by Contour and ultimately directed to a service based on the inbound host address. So, if we want to host Harbor at the URL https://harbor.demo.example.com, then we need to create a DNS CNAME record for harbor.demo.example.com that points to the load balancer address. Developers will need an automated method to update DNS records in dev and test environments, and operations will want to use the same type of automation for updating DNS in production.
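As an illustration, a BIND-style zone entry for that CNAME might look like the following. This is a sketch: it reuses the load balancer hostname from the output above, and the 300-second TTL is an arbitrary choice for the example.

```
; Hypothetical zone entry: point the Harbor hostname at the Contour load balancer
harbor.demo.example.com.  300  IN  CNAME  contour.incident-reporting-9b14fb86-8887-11e9-8c92-12d4897b3f98.57197a32-43f3-46f6-bd80-bc65267b6d7c.vke-user.com.
```

However your DNS is managed, the shape is the same: one CNAME per hostname, all pointing at the single Contour load balancer address.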

Installing Harbor

Finally, use the Helm command-line interface to install the Harbor chart. Specify the domain name, the issuer, the ingress controller, and the secret that will hold the generated certificate. Note that you must keep the backslashes (\) in the annotation lines; they escape the dots inside the annotation keys so Helm does not interpret them as nested keys.

helm install -n demo-harbor \
    https://github.com/goharbor/harbor-helm/tarball/1.0.1 \
    --set expose.ingress.hosts.core=harbor.demo.example.com \
    --set expose.ingress.annotations.'kubernetes\.io/ingress\.class'=contour \
    --set expose.ingress.annotations.'certmanager\.k8s\.io/cluster-issuer'=letsencrypt-prod \
    --set externalURL=https://harbor.demo.example.com \
    --set expose.tls.secretName=demo-harbor-harbor-ingress-cert \
    --set notary.enabled=false
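If you prefer a values file to a long run of --set flags, the same settings can be written as YAML. This is only a re-expression of the command above; harbor-values.yml is a hypothetical filename.

```yaml
# harbor-values.yml -- the same settings as the --set flags above
expose:
  ingress:
    hosts:
      core: harbor.demo.example.com
    annotations:
      kubernetes.io/ingress.class: contour
      certmanager.k8s.io/cluster-issuer: letsencrypt-prod
  tls:
    secretName: demo-harbor-harbor-ingress-cert
externalURL: https://harbor.demo.example.com
notary:
  enabled: false
```

You would then install with helm install -n demo-harbor https://github.com/goharbor/harbor-helm/tarball/1.0.1 -f harbor-values.yml.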

After a few minutes, Harbor should be up and available. Open a browser, go to your Harbor domain name, and look for the padlock icon that indicates a valid certificate. Then test the certificate by entering your DNS name at SSL Labs (https://www.ssllabs.com/ssltest/). You should get an “A” rating if everything is working.

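You can also eyeball the certificate from the command line with openssl. The sketch below generates a throwaway self-signed certificate so the inspection step can run without a live cluster; the commented s_client line shows the equivalent check against the real endpoint (harbor.demo.example.com is the hypothetical domain used throughout this example).

```shell
# Create a throwaway self-signed certificate so the inspection step runs offline.
# (Against the real endpoint, skip this and use the s_client line below instead.)
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/demo.key -out /tmp/demo.crt \
    -subj "/CN=harbor.demo.example.com" -days 90 2>/dev/null

# Print the subject and expiry date, the two fields worth checking.
openssl x509 -in /tmp/demo.crt -noout -subject -enddate

# Against the live endpoint, the equivalent check would be:
# echo | openssl s_client -connect harbor.demo.example.com:443 \
#     -servername harbor.demo.example.com 2>/dev/null \
#     | openssl x509 -noout -subject -enddate
```

The -servername flag matters when checking the live endpoint: Contour routes on SNI, so without it you may be shown a different certificate than the one served for your Harbor hostname.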
Checking the Certificate’s Status

You can also check the status of the certificate by running this command:

kubectl describe certificate demo-harbor-harbor-ingress-cert

Here’s an example of the command’s output:

Status:
  Conditions:
    Last Transition Time:  2019-04-12T20:56:21Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2019-07-11T19:56:20Z
Events:
  Type    Reason         Age   From          Message
  ----    ------         ----  ----          -------
  Normal  OrderCreated   36s   cert-manager  Created Order resource "demo-harbor-harbor-ingress-cert-3490026767"
  Normal  OrderComplete  9s    cert-manager  Order "demo-harbor-harbor-ingress-cert-3490026767" completed successfully
  Normal  CertIssued     9s    cert-manager  Certificate issued successfully

Lean Back and Relax: Renewal Is Automatic

The best part is that cert-manager renews the certificate automatically, and if you destroy and re-create the service, a new certificate is issued without any manual steps. Boiled down: once operations creates good issuers, developers only need to craft a valid ingress stanza and run some automation to update DNS records.
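A minimal sketch of such an ingress stanza follows, reusing the annotation keys from the Harbor install above. The names demo-app, its backing service, and app.demo.example.com are hypothetical, and extensions/v1beta1 was the current Ingress API version at the time of writing.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    # Route through Contour and have cert-manager issue the certificate
    kubernetes.io/ingress.class: contour
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: app.demo.example.com
    http:
      paths:
      - backend:
          serviceName: demo-app
          servicePort: 80
  tls:
  - hosts:
    - app.demo.example.com
    # cert-manager stores the signed certificate in this secret
    secretName: demo-app-tls
```

With the issuers already in place, applying a stanza like this is the entire developer workflow: cert-manager sees the annotation and the tls block, completes the ACME challenge, and populates the secret.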

Now you've seen how some common tooling can be used to enable developers to manage certificates in environments they control and to enable operations to do the same in production. This automation should relieve the drudgery and stress of managing certificates in a Kubernetes world.

Similar methods can be used for non-Kubernetes services. As a next step, head to Let's Encrypt's Getting Started guide to see how to automate TLS certificates for existing legacy services.

Summary

We’ve covered a concrete deployment of Harbor using tools to automatically create and manage TLS certificates for Kubernetes HTTPS services. You should understand the tools involved and the general process that cert-manager uses to manage the certificate lifecycle.

Check back at the Cloud Native Apps Blog for the remaining entry in this series to see another strategy that may be more adaptable to your existing certificate management workflow.

About the Author

Tom is an architect responsible for researching and applying emerging technologies to business problems. He has been in the technology industry for roughly 20 years, starting in a systems and network engineering role. Tom soon found a passion for automating everything, which led him into software engineering, where he has spent the bulk of his career in DevOps-type roles.
