Backing Up and Restoring Apps on VMware Enterprise PKS with Velero

November 6, 2019 Cormac Hogan

Velero version 1.1 provides support for backing up Kubernetes applications orchestrated by VMware Enterprise PKS. This post details how to install and configure Velero to back up and restore a stateless application running in a Kubernetes cluster deployed on VMware vSphere by VMware Enterprise PKS. At this time, there is no vSphere plug-in for snapshotting stateful applications during a Velero backup; instead, Velero relies on a third-party open source program called restic. This post does not, however, include an example of how to back up a stateful application; that is covered in another tutorial.

Overview of Steps

Here’s a quick overview of the steps we’ll go through to back up and restore a stateless application. The instructions assume that the Kubernetes nodes in your cluster have Internet access in order to pull the Velero images.

  • Download and extract Velero v1.1.
  • Deploy and configure a MinIO object store.
  • Install Velero using the velero install command, ensuring that both restic support and a MinIO publicUrl are included.
  • Implement steps to support VMware Enterprise PKS.
  • Run a test backup and restore of a stateless application that has been deployed on Kubernetes through VMware Enterprise PKS.

Download and Extract Velero v1.1

The Velero v1.1 binary can be downloaded from the Velero releases page on GitHub. Download and extract it on the desktop from which you wish to manage your Velero backups, then copy or move the velero binary to somewhere in your $PATH.
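
For example, on a Linux desktop the download and extraction might look like the following. The file name and URL here are assumptions based on the v1.1.0 linux-amd64 release artifact; adjust them for your platform:

$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.1.0/velero-v1.1.0-linux-amd64.tar.gz
$ tar -xvf velero-v1.1.0-linux-amd64.tar.gz
$ sudo mv velero-v1.1.0-linux-amd64/velero /usr/local/bin/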

Deploy and Configure a MinIO Object Store as a Backup Destination

Velero sends data and metadata about the Kubernetes objects being backed up to an S3-compatible object store. If you do not have an S3 object store available, Velero provides a manifest file to create a MinIO S3 object store on your Kubernetes cluster. With MinIO, all Velero backups can be kept on premises.

Note: Stateful backups of applications deployed on Kubernetes on vSphere that use the restic plug-in for backing up Persistent Volumes would send the backup data to the same S3 object store.

There are a few different steps required to successfully deploy the MinIO S3 object store.

  1. Create a MinIO credentials secret file. A simple credentials file containing the login and password (ID and key) for the local on-premises MinIO S3 object store must be created. Here is an example of such a credentials file:
    $ cat credentials-velero
    [default]
    aws_access_key_id = minio
    aws_secret_access_key = minio123
  2. Expose the MinIO service on a NodePort. While this step is optional, it is useful for two reasons. The first is that it gives you a way to access the MinIO portal through a browser and examine the backups. The second is that it enables you to specify a publicUrl for MinIO, which in turn means that you can access backup and restore logs from the MinIO S3 object store. To expose the MinIO service on a NodePort, you must modify the manifest at examples/minio/00-minio-deployment.yaml. The only change is to the type: field, from ClusterIP to NodePort:
    spec:
      # ClusterIP is recommended for production environments.
      # Change to NodePort if needed per documentation,
      # but only if you run Minio in a test or trial environment,
      # for example with Minikube.
      type: NodePort
  3. Create the MinIO object store. After making the changes above, run the following command to create the MinIO object store.
    $ kubectl apply -f examples/minio/00-minio-deployment.yaml
    namespace/velero created
    deployment.apps/minio created
    service/minio created
    job.batch/minio-setup created
  4. Verify that the MinIO object store has deployed successfully. Retrieve both the Kubernetes node on which the MinIO pod is running and the port on which the MinIO service has been exposed. With this information, you can verify that MinIO is working.
    $ kubectl get pods -n velero
    NAME                     READY   STATUS      RESTARTS   AGE
    minio-66dc75bb8d-95xpp   1/1     Running     0          25s
    minio-setup-zpnfl        0/1     Completed   0          25s
    $ kubectl describe pod minio-66dc75bb8d-95xpp -n velero | grep -i Node:
    Node: 140ab5aa-0159-4612-b68c-df39dbea2245/192.168.192.5
    $ kubectl get svc -n velero
    NAME    TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
    minio   NodePort   192.0.2.82   <none>        9000:32109/TCP   5s
    In the above outputs, the node on which the MinIO object store is deployed has IP address 192.168.192.5, and the NodePort on which the MinIO service is exposed is 32109. If you now point a browser at that node:port combination, you should see the MinIO object store web interface. You can use the credentials provided in the credentials-velero file earlier to log in. Keep this information handy; we will use it to log in to the MinIO object store again later.
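
If you prefer the command line to a browser for this check, recent MinIO builds expose an unauthenticated liveness endpoint; an HTTP 200 response confirms the service is reachable on the NodePort (substitute your own node IP address and port):

$ curl http://192.168.192.5:32109/minio/health/live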

Install Velero

To install Velero, use the velero install command. There are a few options that need to be included. Since there is no vSphere plug-in at this time, we rely on restic to back up the contents of Persistent Volumes when Kubernetes is running on vSphere, so the command must include the option to enable restic support. And since we set up a publicUrl for MinIO, we should include that in the command as well.

To successfully create the restic pods when deploying Velero, you need to select the check boxes for 'Allow Privileged' and 'DenyEscalatingExec' on the PKS plan in Pivotal Ops Manager, and then re-apply the PKS configuration.

Here is a sample command based on a default installation of Velero for Kubernetes running on vSphere. Make sure the credentials-velero secret file created earlier resides in the directory where the command is run:

$ velero install --provider aws --bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--use-restic \
--backup-location-config \
region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000,publicUrl=http://192.168.192.5:32109

As the command runs, you should see output related to the creation of Velero objects in Kubernetes. If everything goes well, the output should end with the following message:

Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.

Yes, that is a small sailboat in the output (Velero is Spanish for sailboat).
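
Before moving on, it is worth confirming that the Velero deployment and the restic DaemonSet pods were actually created:

$ kubectl get pods -n velero

Note that on VMware Enterprise PKS the restic pods will not report Ready until the hostPath change described in the next section has been made.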

Modify the hostPath in the Restic DaemonSet

This step is specific to VMware Enterprise PKS. On native Kubernetes nodes, pods are located under /var/lib/kubelet/pods, but on VMware Enterprise PKS nodes they are located under /var/vcap/data/kubelet/pods.

This step points restic to the correct location of the pods for backup purposes when Kubernetes is deployed by VMware Enterprise PKS. First, identify the restic DaemonSet:

$ kubectl get ds --all-namespaces

NAMESPACE     NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   vrops-cadvisor   3         3         3       3            3           <none>          5d3h
pks-system    fluent-bit       3         3         3       3            3           <none>          5d3h
pks-system    telegraf         3         3         3       3            3           <none>          5d3h
velero        restic           3         3         0       3            0           <none>          2m21s

Next, edit the DaemonSet and change the hostPath setting. The before and after edits are shown below:

$ kubectl edit ds restic -n velero

Change from:

      volumes:
      - hostPath:
          path: /var/lib/kubelet/pods
          type: ""
        name: host-pods

To:

      volumes:
      - hostPath:
          path: /var/vcap/data/kubelet/pods
          type: ""
        name: host-pods

When you save the changes, you should see a message similar to the following:

daemonset.extensions/restic edited
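
If you would rather script this change than make it interactively, a kubectl patch along the following lines should work. This is a sketch: it assumes the host-pods volume is the first entry in the DaemonSet's volumes array, so check the index against your own manifest first:

$ kubectl patch ds restic -n velero --type json -p \
  '[{"op": "replace", "path": "/spec/template/spec/volumes/0/hostPath/path", "value": "/var/vcap/data/kubelet/pods"}]'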

Deploy a Sample Application to Test Backup

Velero provides a sample nginx application for backup testing. This nginx deployment assumes the presence of a load balancer for its service. Because VMware Enterprise PKS supports NSX-T integration, NSX-T will provide this service for you if it is configured to do so. If you do not have a load balancer as part of your Container Network Interface (CNI), there are some easily configurable ones available to get you started, such as MetalLB.

Note: This application is stateless. It does not create any Persistent Volumes; thus, the restic driver is not used as part of this example. To test whether restic is working correctly, you will need to back up a stateful application that is using Persistent Volumes.

To deploy the sample nginx application, run the following command:

$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created

Check that the deployment was successful by using the following commands:

$ kubectl get pods --all-namespaces | grep nginx
nginx-example         nginx-deployment-5f8798768c-5jdkn        1/1     Running     0          8s
nginx-example         nginx-deployment-5f8798768c-lrsw6        1/1     Running     0          8s
$ kubectl get svc --namespace=nginx-example
NAME       TYPE           CLUSTER-IP    EXTERNAL-IP                   PORT(S)        AGE
my-nginx   LoadBalancer   192.0.2.147   198.51.100.1,192.168.191.70   80:30942/TCP   32s

In this example, a load balancer has provided the nginx service with an external IP address of 192.168.191.70. If you point a browser to that IP address (or whichever external IP address your environment assigned), you should see the nginx welcome page saying that the nginx web server is successfully installed and working.
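
The same check can be run from the command line. A successful response returns the HTML of the nginx welcome page (substitute the external IP address of your own service):

$ curl http://192.168.191.70
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...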

We're now ready to do a backup and restore of the nginx application.

Take Your First Velero Backup

In this example, we are going to give the velero backup command a label selector so that it backs up only the applications that match app=nginx. Thus, we do not back up everything in the Kubernetes cluster, only the nginx application-specific items.

$ velero backup create nginx-backup --selector app=nginx
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
$ velero backup get
NAME           STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
nginx-backup   Completed   2019-08-07 16:13:44 +0100 IST   29d       default            app=nginx

For additional details on the objects that were backed up, you can use the velero backup describe nginx-backup --details command. You can now log in to the MinIO object store through a browser and verify that the backup actually exists. If you go to the following URL, you should see the name of the backup (nginx-backup) under the velero/backups folder:

192.168.192.5:32109/minio/velero/backups/
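
On-demand backups like this one are fine for testing, but Velero can also take backups on a schedule. As a sketch, the following command (reusing the same label selector) would take a backup at 1:00 a.m. every day and retain each one for 30 days:

$ velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx --ttl 720h0m0s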

Destroy Your Application

Let’s now go ahead and remove the nginx-example namespace, and then restore the application from our backup.

$ kubectl delete ns nginx-example
namespace "nginx-example" deleted

This command should also have removed the nginx deployment and service.
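
You can confirm that the namespace is fully gone before attempting the restore. Namespace deletion is asynchronous, so the namespace may linger in a Terminating state for a few moments:

$ kubectl get ns nginx-example
Error from server (NotFound): namespaces "nginx-example" not found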

Do Your First Velero Restore

Restores are also done from the command line by using the velero restore command. You simply need to specify which backup you wish to restore, using the --from-backup option.

$ velero restore create nginx-restore --from-backup nginx-backup
Restore request "nginx-restore" submitted successfully.
Run `velero restore describe nginx-restore` or `velero restore logs nginx-restore` for more details.

Verify that the Restore Succeeded

The command velero restore describe nginx-restore --details can be used to examine the restore in detail and check to see if it has successfully completed.

Once you see that the restore has completed, you can check whether the namespace, deployment, and service have been restored using the kubectl commands shown previously. One item to note is that the nginx service may be restored with a new IP address from the load balancer. This is normal.

$ kubectl get ns | grep nginx
nginx-example         Active   17s
$ kubectl get svc --all-namespaces | grep nginx
nginx-example   my-nginx               LoadBalancer   192.0.2.225   198.51.100.1,192.168.191.67   80:32350/TCP        23s

Now let’s see if we can reach our nginx web server on the new IP address. If you point your browser to 192.168.191.67 (or whichever external IP address your environment assigned), you should once again see the nginx welcome page saying that the nginx web server is successfully installed and working. The restore was successful.

Backups and restores are now working on Kubernetes deployed by VMware Enterprise PKS on vSphere using Velero v1.1.

Feedback and Participation

As always, we welcome feedback and participation in the development of Velero. All information on how to contact us or become active can be found on the Velero Community page.

You can find us on Kubernetes Slack in the #velero channel, and follow us on Twitter at @projectvelero.

About the Author

Cormac Hogan is a Director and Chief Technologist in the Office of the CTO in the Hyper-Converged Infrastructure (HCI) Business Unit at VMware. He has been with VMware since April 2005 and has previously held roles in VMware’s Technical Marketing and Technical Support organizations. He has written a number of storage-related white papers and has given numerous presentations on storage best practices and vSphere storage features. He is also the co-author of the “Essential Virtual SAN” book published by VMware Press.
