Tutorial: Run the MongoDB Enterprise Kubernetes Operator atop TKGI

June 18, 2019 Maggie Ambrose

If you have been paying attention to conversations in the Kubernetes community, you’ve probably come across the ongoing debate among community leaders about running stateful workloads in Kubernetes. Despite varying opinions, there is no question that the K8s community is rallying to make it easier and safer to run stateful workloads.

One of Pivotal’s long time technology partners, MongoDB, is one organization making strides in this space and offering a way to provision and manage fully functional MongoDB instances running in containers via the MongoDB Enterprise Kubernetes Operator (GA this week!). This blog post will walk through getting the MongoDB Operator and database instances up and running on an Enterprise Pivotal Container Service (PKS) cluster. But first, let’s answer a few important questions.

Why Is MongoDB on K8s Interesting?

The MongoDB Operator makes it easy for Kubernetes users to create and manage the lifecycle of MongoDB instances on-demand through just a few commands. This drives developer productivity by enabling self-service access to instances rather than waiting on a manual ticket queue. The high availability built into MongoDB Replica Sets, combined with the portability of Kubernetes and containers, greatly improves the operational experience as well.

In addition to increased efficiency, some users are excited about the potential resource utilization benefits of running MongoDB and other workloads in containers instead of VMs.

OK, So Why PKS?

Pivotal Container Service (PKS) is an enterprise-ready K8s offering that runs on private and public clouds. PKS offers rolling upgrades, health monitoring and self-healing for Kubernetes clusters. In a nutshell, PKS focuses on making it easy for platform operators to deliver secure and reliable Kubernetes clusters on-demand. The example in this post uses PKS as the underlying Kubernetes cluster for provisioning MongoDB instances with the MongoDB Operator.

Getting Started

Creating a PKS Cluster

To begin, you must either provision a new cluster or have access to an existing K8s cluster on which to deploy MongoDB. One option is to use the PKS CLI to create a cluster:

mambrose$ pks create-cluster demo --external-hostname demo.haas-115.pez.pivotal.io --plan small --json

{
  "name": "demo",
  "plan_name": "small",
  "last_action": "CREATE",
  "last_action_state": "in progress",
  "last_action_description": "Creating cluster",
  "uuid": "f2d30f26-ba60-4917-9104-d7c2ba0c6696",
  "kubernetes_master_ips": [
     "In Progress"
  ],
  "parameters": {
     "kubernetes_master_host": "demo.haas-115.pez.pivotal.io",
     "kubernetes_master_port": 8443,
     "kubernetes_worker_instances": 3
  }
}

Using the PKS CLI create-cluster command, the cluster is provisioned from a pre-configured plan (named “small”) in PKS. The plan used here is set up with a single master node, three worker nodes, and privileged containers enabled.

Our new cluster should be finished provisioning in about 15 minutes (time for a quick coffee break!). We can query the status of our cluster using the PKS CLI. It is ready when we see that CREATE succeeded.

mambrose$ pks cluster demo

Name:                     demo
Plan Name:                small
UUID:                     f2d30f26-ba60-4917-9104-d7c2ba0c6696
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   demo.haas-115.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  10.0.11.0
Network Profile Name:

We’re ready to go! Next, we set our kubectl context to point at this cluster.

mambrose$ pks get-credentials demo

Fetching credentials for cluster demo.
Context set for cluster demo.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>
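With the context set, a quick sanity check confirms that kubectl is talking to the new cluster; the three worker nodes from the "small" plan should report Ready:

mambrose$ kubectl get nodes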

Setting up MongoDB Ops Manager

MongoDB Ops Manager is a prerequisite for using the MongoDB Operator. Ops Manager maintains the state of your running MongoDB instances. You can use MongoDB Cloud Manager to get started quickly, or you can follow the guide to set up MongoDB Ops Manager on a virtual machine.

Once you have MongoDB Ops Manager available, there are a few pieces of information you will need to have handy to continue your config:

  • Create a public API key and note it down securely.

  • Whitelist your worker nodes' IP addresses, because the MongoDB Operator will need to access MongoDB Ops Manager for certain requests (one way to find these addresses is shown after this list).

  • Note down the URL of Ops Manager. For Cloud Manager, the Base URL will be https://cloud.mongodb.com.

  • (optional) Create a new Organization and note down the organization ID. This 24-character string can be found in the URL after you have created your organization. (If you skip this step, the MongoDB Operator will create an organization for you.)

  • (optional) Create a new Project in the organization and note down the name (the MongoDB Operator can also create this for you).
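One way to find the worker node addresses to whitelist is with kubectl; look at the EXTERNAL-IP column. (Depending on your network topology, outbound traffic to Ops Manager may instead originate from a NAT or gateway address, in which case whitelist that address instead.)

mambrose$ kubectl get nodes -o wide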

Installing and Creating MongoDB Instances

The Operator Pattern

The MongoDB Operator follows the standard Operator pattern. The components of the MongoDB Operator are the CustomResourceDefinitions (CRDs) for MongoDB instances and MongoDB users, the Controller that reconciles requests for those CRDs, and the custom resource (CR) instances, which represent the actual running MongoDB deployments. If the Operator pattern is unfamiliar to you, I advise reading through Aaron Meza's essential comic to get a foundational understanding.

Installing the MongoDB Operator on the Cluster

We start by cloning the operator repository locally.

mambrose$ git clone https://github.com/mongodb/mongodb-enterprise-kubernetes

Next we create a new namespace in our cluster, where we will install the MongoDB Operator and provision our MongoDB CR instances.

mambrose$ kubectl create ns mongodb
namespace "mongodb" created

MongoDB has simplified the installation of the CRDs and Controller together through a Helm Chart. We have the option to customize the deployment by modifying the helm_chart/values.yaml file, for example to install the MongoDB Operator as namespace-scoped vs. cluster-wide, or to use images from a private registry. For this exercise, we will accept the default values, including a namespace-scoped Operator and images from the public quay.io registry.

The helm template command renders an expanded manifest (here named operator.yaml) using the Chart values specified in the values.yaml file.

mambrose$ cd mongodb-enterprise-kubernetes
mambrose$ helm template helm_chart > operator.yaml

Applying the expanded manifest creates all the necessary objects in our cluster and namespace.

mambrose$ kubectl apply -f operator.yaml
serviceaccount "mongodb-enterprise-operator" created
customresourcedefinition.apiextensions.k8s.io "mongodb.mongodb.com" created
role.rbac.authorization.k8s.io "mongodb-enterprise-operator" created
rolebinding.rbac.authorization.k8s.io "mongodb-enterprise-operator" created
clusterrole.rbac.authorization.k8s.io "mongodb-enterprise-operator-mongodb-certs" created
clusterrolebinding.rbac.authorization.k8s.io "mongodb-enterprise-operator-mongodb-certs-binding" created
deployment.apps "mongodb-enterprise-operator" created

We can validate that the install was successful by listing the CRDs available in the cluster and checking that the Controller Pod is running.

mambrose$ kubectl get crd
NAME                                 AGE
clustermetricsinks.apps.pivotal.io   25m
clustersinks.apps.pivotal.io         25m
metricsinks.apps.pivotal.io          25m
mongodb.mongodb.com                  9m
mongodbusers.mongodb.com             9m
sinks.apps.pivotal.io                25m

mambrose$ kubectl get pod -n mongodb
NAME                                           READY   STATUS    RESTARTS
mongodb-enterprise-operator-78bbcfd5b4-k92ft   1/1     Running   0

Notice that the mongodb.mongodb.com and mongodbusers.mongodb.com CRDs are now available in the cluster. This means we can now request MongoDB custom resources (CRs) from the Kubernetes API Server.
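As an additional check, we can ask the API server which resource types the new mongodb.com API group serves (kubectl 1.11 or later):

mambrose$ kubectl api-resources --api-group=mongodb.com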

Note: The MongoDB Operator and mongodbusers.mongodb.com CRD can manage database users for deployments with TLS and X.509 internal cluster authentication enabled. In this example we will manage database users through the Cloud Manager UI. A future tutorial will go through enabling TLS and X.509 authentication and using the mongodbusers CRD.

However, before creating any MongoDB CRs, we need to create a ConfigMap and Secret for the MongoDB CRs to reference.

Create Prerequisites for MongoDB CR Instances

We need to create a ConfigMap object to link the Ops Manager project we created earlier to the MongoDB Operator. We create the ConfigMap based on the included samples/project.yaml file, modifying it to include the values from the Setting Up MongoDB Ops Manager step earlier, as well as the namespace:

mambrose$ cat samples/project.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
 name: my-project
 namespace: mongodb
data:
 projectName: operatorproject

 #Optional parameter - if skipped a new organization with the same name as project will be created automatically.
 #orgId: my-org-id

 baseUrl: https://cloud.mongodb.com

mambrose$ kubectl apply -f samples/project.yaml
configmap "my-project" created

We can validate that our ConfigMap was set up and exists:

mambrose$ kubectl describe configmaps -n mongodb
Name:         my-project
Namespace:    mongodb
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"baseUrl":"https://cloud.mongodb.com","projectName":"operatorproject"},"kind":"ConfigMap","metadata":{"annotations":{},"name...

Data
====
baseUrl:
----
https://cloud.mongodb.com
projectName:
----
operatorproject
Events:  <none>

We also need to create a Kubernetes Secret object. This is what allows the MongoDB Operator and instances to authenticate with Ops Manager. Create this Secret using the values from the Setting Up MongoDB Ops Manager step earlier:

mambrose$ kubectl -n mongodb create secret generic mongo-credentials --from-literal="user=<user>" --from-literal="publicApiKey=<api-key>"
secret "mongo-credentials" created

Again, we can validate that our Secret was set up and exists:

mambrose$ kubectl describe secret mongo-credentials -n mongodb
Name:         mongo-credentials
Namespace:    mongodb
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
publicApiKey:  36 bytes
user:          19 bytes

Create Your First MongoDB Instance with kubectl

Now we have everything in place to provision a new MongoDB instance from the MongoDB CRD. Let's start off by creating a new Standalone MongoDB instance.

We can start with the samples/minimal/standalone.yaml file. Change the spec.project field to match the metadata.name of the ConfigMap created earlier:

mambrose$ cat samples/minimal/standalone.yaml
---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
 name: my-standalone
spec:
 version: 4.0.0
 type: Standalone
 # Before you create this object, you'll need to create a project ConfigMap and a
 # credentials Secret. For instructions on how to do this, please refer to our
 # documentation, here:
 # https://docs.opsmanager.mongodb.com/current/tutorial/install-k8s-operator
 project: my-project
 credentials: mongo-credentials

 # This flag allows the creation of pods without persistent volumes. This is for
 # testing only, and must not be used in production. 'false' will disable
 # Persistent Volume Claims. The default is 'true'
 persistent: true


Then we will kubectl apply this configuration to create a standalone MongoDB instance. It is important to include the namespace when the MongoDB Operator is namespace-scoped. In our example, the Operator watches for changes in the "mongodb" namespace, which is set as the default in the helm_chart/values.yaml file from the "Installing the MongoDB Operator on the Cluster" step above.

mambrose$ kubectl apply -f samples/minimal/standalone.yaml -n mongodb
mongodb.mongodb.com "my-standalone" created

The status of the standalone instance will change from "Pending" to "Running" once it is successfully created.
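A convenient way to follow this transition is to watch the resource; mdb is the short name for the MongoDB CRD, as also used in the describe command below:

mambrose$ kubectl get mdb my-standalone -n mongodb --watch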

mambrose$ kubectl describe mdb my-standalone -n mongodb
Name:         my-standalone
Namespace:    mongodb
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"mongodb.com/v1","kind":"MongoDB","metadata":{"annotations":{},"name":"my-standalone","namespace":"mongodb"},"spec":{"credentials":"mongo...
API Version:  mongodb.com/v1
Kind:         MongoDB
Metadata:
  Creation Timestamp:  2019-06-04T19:59:11Z
  Generation:          3
  Resource Version:    3113064
  Self Link:           /apis/mongodb.com/v1/namespaces/mongodb/mongodb/my-standalone
  UID:                 38a02511-8703-11e9-8379-42010a000b14
Spec:
  Credentials:         mongo-credentials
  Exposed Externally:  false
  Persistent:          false
  Project:             my-project
  Security:
    Tls:
      Enabled:  false
  Type:         Standalone
  Version:      4.0.0
Status:
  Last Transition:  2019-06-04T20:00:11Z
  Link:             https://cloud.mongodb.com/v2/<instance-id>
  Phase:            Running
  Type:             Standalone
  Version:          4.0.0
Events:             <none>

We can also validate the successful creation of the MongoDB instance in the Ops Manager UI by following the link in the Status section.
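For a quick check from the command line as well, we can ping the database directly, assuming the mongo shell is available inside the database container (the Pod name follows the <name>-0 convention, which also appears in the connection string later in this post):

mambrose$ kubectl exec -it my-standalone-0 -n mongodb -- mongo --eval 'db.runCommand({ ping: 1 })'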

Consuming MongoDB from a Spring App

Great! We have a MongoDB instance up and running in our PKS cluster. Now let's see it in action with a Spring application that uses the Standalone instance as a backing service.

First, we will create a new namespace for our demo Spring application to run in.

mambrose$ kubectl create ns demos
namespace "demos" created

Next, we will create a user that the demo "spring-music" application will use to read from and write to the database. We can create this user from the MongoDB Ops Manager UI by going to the "Security" tab, or from the MongoDB shell, as sketched below.
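A minimal sketch of the shell approach, assuming a user name and password that match the connection string used further below (adjust the roles to fit your needs):

> use admin
> db.createUser({
    user: "spring-music",
    pwd: "<password>",
    roles: [ { role: "readWrite", db: "admin" } ]
  })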

We also need to enable authentication for the Standalone MongoDB instance. By default, the MongoDB CR instances are created with authentication disabled. Once we enable authentication, all CR instances created in the same project will inherit the enabled authentication settings and the roles created in the project.

Now we will apply the manifest for the "spring-music" application. You can copy the manifest yaml from Jason Mimmick's MongoDB examples. This manifest creates the application and exposes it on a public IP address with a LoadBalancer.
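For orientation, here is a rough sketch of what such a manifest can look like. The image name is a placeholder, and the real manifest in the linked repo may wire things differently; Spring Boot's relaxed binding maps the SPRING_DATA_MONGODB_URI environment variable to the spring.data.mongodb.uri property, which is one way to feed the app the connection string from the Secret created below.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-music
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-music
  template:
    metadata:
      labels:
        app: spring-music
    spec:
      containers:
      - name: spring-music
        image: <spring-music-image>  # placeholder - use the image from the linked repo
        ports:
        - containerPort: 8080
        env:
        # Spring Boot reads this as spring.data.mongodb.uri
        - name: SPRING_DATA_MONGODB_URI
          valueFrom:
            secretKeyRef:
              name: spring-music-db
              key: mongodburi
---
apiVersion: v1
kind: Service
metadata:
  name: spring-music
spec:
  type: LoadBalancer
  selector:
    app: spring-music
  ports:
  - port: 80
    targetPort: 8080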

mambrose$ kubectl create secret generic spring-music-db --from-literal=mongodburi="mongodb://spring-music:<password>@my-standalone-0.my-standalone-svc.mongodb.svc.cluster.local:27017/admin?standalone=my-standalone-0&authSource=admin&retryWrites=true" -n demos
secret "spring-music-db" created

mambrose$ kubectl apply -f spring-music.yaml -n demos
deployment.apps "spring-music" created
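The public IP assigned by the LoadBalancer can be found by listing the Services in the namespace and reading the EXTERNAL-IP column (the Service name comes from the manifest; spring-music in the sketch above):

mambrose$ kubectl get svc -n demos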

We can now access the "spring-music" application on its public IP address. In the application, we can add or delete albums from the MongoDB database. We can also view the real-time operations on the database from Ops Manager.

In Conclusion

Now you are familiar with how to install and run the MongoDB Enterprise Kubernetes Operator on Pivotal Container Service. We are very excited about this new capability for our customers to run and consume MongoDB in containers on Kubernetes. Go ahead and give it a try, and let us know what you think! Learn more about how Pivotal and MongoDB are working together here.

Sources:

MongoDB has great, detailed documentation on how to install the Kubernetes Operator, which we follow.

About the Author

Maggie Ambrose

Maggie Ambrose is a Partner Solutions Architect at Pivotal.
