All Things Pivotal Podcast Episode #13: Using Service Brokers in Pivotal Cloud Foundry

January 28, 2015 Simon Elisha

Microservices are becoming an increasingly popular development pattern for modern, cloud-ready applications. This is an “extreme” example of what developers have known for a long time—consuming loosely coupled services is easier than writing it all yourself and creating a monolith that makes change and development difficult.

If we want to consume services, does it not make sense to have an easy, consistent and reliable way to do so? What are some of the capabilities we would want from this consumption model? Naturally we would want a way to identify services, to create them, bind to them, and then clean them up once we are done.

We need to do this at all stages of the software lifecycle, from development, test, QA and into production. If we could do this without the use of hard-coded credentials, YAML files and the like—all the better!

A key capability of Cloud Foundry is that of the Service Broker—a simple and consistent way to access services that may be running on top of Cloud Foundry, controlled by Cloud Foundry or running totally independently of Cloud Foundry.

In this week’s episode we examine the concept of the Service Broker, how it works, how you can build your own, and what it brings to the developer’s toolkit.

PLAY EPISODE #13

 

RESOURCES:

Transcript

Speaker 1:
Welcome to the All Things Pivotal podcast. The podcast at the intersection of agile, cloud and big data. Stay tuned for regular updates, technical details, architecture discussions and interviews. Please share your feedback with us by emailing podcast@pivotal.io.

Simon Elisha:
Hello everyone and welcome back to the All Things Pivotal podcast. Great to have you back. My name is Simon Elisha, and today we are going to be talking about services. In particular we are going to talk about the services architecture in relation to Pivotal CF, or to Cloud Foundry in general, and how that works.

Now, what are services? Services are things that are consumed by applications running on the Pivotal CF platform, so these could be database services. They could be messaging services. They could be logging services. They could be any kind of service that you like. Obviously, this ties in very closely with concepts like service-oriented architecture, or the model [inaudible 00:01:05], which is microservices.

Anything that consumes something else needs services. One of the things that Pivotal CF does really effectively is provide a structure around how these services are defined, how they are consumed, and how their life cycle is maintained, because this is really not a trivial exercise when you think about it. Having a structure, a format, an API and so on around this concept means that we have a consistent and measured approach.

What we will do today is go into a little bit of depth into how this actually works, and how you would actually go about building a service broker as well, but before we get into the depth let us go a little high level and talk about the framework and the API. Within Pivotal CF we have what’s called the Service Broker API, and this API allows us to define service brokers.

Now, what is a service broker? A service broker will advertise a catalog of service offerings and service plans, as well as interpreting the calls for the provisioning, binding, unbinding and de-provisioning of those particular services. Now, what happens behind the scenes on a service-by-service basis may change and may be very different. In general it will require the creation of something when you provision and the deletion of something when you de-provision, but not always, particularly if you are consuming a third-party service directly via another API.
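Those operations correspond to a small set of HTTP endpoints in the v2 Service Broker API. As a rough sketch (the handler names are invented; the paths follow the v2 broker API, with instance and binding IDs supplied by Cloud Foundry):

```python
# The core operations of the v2 Service Broker API, expressed as a
# route table mapping (HTTP verb, path) to the broker operation.
BROKER_ROUTES = {
    ("GET",    "/v2/catalog"): "catalog",  # advertise services and plans
    ("PUT",    "/v2/service_instances/{id}"): "provision",
    ("PATCH",  "/v2/service_instances/{id}"): "update",  # e.g. change plan
    ("PUT",    "/v2/service_instances/{id}/service_bindings/{binding_id}"): "bind",
    ("DELETE", "/v2/service_instances/{id}/service_bindings/{binding_id}"): "unbind",
    ("DELETE", "/v2/service_instances/{id}"): "deprovision",
}
```

A broker is simply an HTTP server that answers on these routes; anything behind them is up to the implementer.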

What this service broker does, though, is [inaudible 00:02:37] have a consistent way to interact with all the services in your environment. It basically abstracts what is going on. The service instance that you are consuming could represent a single database on a multitenant server. It could be a dedicated cluster. It could just be an account on a web application. It really does not matter; from your application’s perspective, it is consumed in a very consistent fashion.

It is important to understand that this is a service broker, not a gateway. What this means is the broker is responsible for the mechanics and the binding around the service, so the setup, the creation, the credentials, etc., but traffic does not flow through the broker when the application is consuming the service itself, so it is a broker, not a gateway. Now, how you deploy or implement that particular broker is really up to the developer.

There is no massive requirement around it. There is no specific delineation about how it should be built. There are actually many, many deployment models that are possible, and I want to share four specific ones, really just to give you some conceptual idea of how that might work. First, a service broker, and the service deployed by the broker, may be completely packaged and deployed by BOSH alongside Cloud Foundry.

BOSH, which deploys Cloud Foundry under the covers, could also be deploying the particular service. Second, the broker could be packaged and deployed by BOSH alongside Cloud Foundry, but the rest of the service may be deployed and maintained by other means outside the framework of BOSH. Third, the broker, and optionally the service itself, could be pushed as an application into Cloud Foundry user space, so it is running on top of Cloud Foundry, on the DEA rather than next to the DEA from a [inaudible 00:04:29] perspective. Or fourth, the entire service, including the broker, could be deployed and maintained completely outside of Cloud Foundry by other means.

You don’t even need to know what they are; it just happens outside of your world. There are a number of different ways you can do things. Now, what actually happens in the broker itself? What does the broker have to do? The broker has some specific tasks. The first one is to implement what is called a catalog, and basically this catalog allows us to query the broker and say, hey, what is your capability? What do you do?

The catalog will tell you all about its capabilities, what it brings to the table, what the offering looks like, a whole bunch of metadata that the system will use. This includes information like what the plans are, so I may have different sizes of implementations available; whether it costs money or whether it is free; what the names are; what the descriptions are; how I can bind applications to it; and other types of metadata that are passed back, etc.
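As a concrete sketch, here is the shape of a minimal catalog response a broker might return. The service name, IDs and plan details are invented for illustration; the structure (services, plans, `bindable`, `free`) follows the broker API's catalog format:

```python
# A minimal catalog payload, as a broker might return from GET /v2/catalog.
# All names and IDs here are hypothetical.
CATALOG = {
    "services": [{
        "id": "example-mysql-service-id",
        "name": "example-mysql",
        "description": "MySQL databases on demand",
        "bindable": True,  # applications may bind to instances
        "plans": [
            {"id": "plan-small-id", "name": "small",
             "description": "Database on a shared server", "free": True},
            {"id": "plan-large-id", "name": "large",
             "description": "Dedicated VM", "free": False},
        ],
    }]
}
```

The plans here are what developers later see (and choose between) in the marketplace once they are made public.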

This is the identifying part of the broker. The next part of the broker’s capability is the provision part. This is one of the cool bits, because it actually goes and does something. It says, hey, I am going to go out and take whatever action is necessary to create a new service resource for the developer. Now again, what actually happens at this step will depend on the service, and there are many and varied implementations. Let me give you some examples with a MySQL service, or a database service. Provisioning could result in any of these outcomes.

It could be a new dedicated mysqld process running in its own virtual machine. It could be a new dedicated mysqld process running in a lightweight container on a shared VM. It could be a new dedicated mysqld process running on a shared VM itself, so not in a container. It could be a new dedicated database on an existing shared, running mysqld, so rather than provisioning a new mysqld I have just created a new dedicated database.

It could be a database with a schema already created but no data in it. It could be a copy of a full database, so it may be a QA database with a full copy of data in it, ready to go for me as well. It could be many different variations. There really is no sort of one [inaudible 00:06:49] approach. The concept, though, is that we can issue the command to provision the service, and the service gets provisioned for us.
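To make one of those variations concrete, here is a sketch of a provision (and matching de-provision) handler for the "dedicated database on a shared mysqld" case. Everything here is hypothetical: a dict stands in for the shared server, where a real broker would issue CREATE DATABASE / DROP DATABASE against it.

```python
# Stand-in for a shared MySQL server: database name -> metadata.
SHARED_SERVER = {}

def provision(instance_id, plan_id):
    """Create a dedicated database for this service instance."""
    db_name = "cf_" + instance_id.replace("-", "_")
    if db_name in SHARED_SERVER:
        # Broker API: a conflicting instance is reported as 409 Conflict.
        raise ValueError("instance already exists")
    SHARED_SERVER[db_name] = {"plan": plan_id}
    # The provision response may include a dashboard URL for the instance.
    return {"dashboard_url": "https://broker.example.com/dashboards/" + db_name}

def deprovision(instance_id):
    """Tear down whatever provisioning created."""
    SHARED_SERVER.pop("cf_" + instance_id.replace("-", "_"), None)
```

The same two-function shape fits the other variations too; only the body changes (BOSH deploy, container launch, third-party API call, and so on).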

We can also update an existing service that has been provisioned, so we may modify its service plan, upgrading or downgrading to a different plan; we may move from a service of one particular size to another as an application scales, or to a different capability as we go. The next step is that of binding the service. This is a very important step. This is where we take the service that has been provisioned and connect it to an application or other services that we are running.

The trick here is this allows us to have a very consistent and straightforward way to connect services to one another. Basically, we will know whether this works or not, and we will also be issued unique credentials for the application to use to access that particular service. That binding process allows us to pass credentials in a secure fashion to the application as necessary.
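A bind handler in that style might look like the following sketch. The hostname, port and naming scheme are invented; the point is that each binding mints fresh credentials rather than sharing one set:

```python
import secrets

def bind(instance_id, binding_id):
    """Issue unique credentials for one application binding."""
    username = "u_" + binding_id[:8]
    password = secrets.token_urlsafe(16)  # fresh secret per binding
    # A real broker would now CREATE USER / GRANT on the instance's database.
    db_name = "cf_" + instance_id
    return {"credentials": {
        "hostname": "mysql.example.com",
        "port": 3306,
        "name": db_name,
        "username": username,
        "password": password,
        "uri": f"mysql://{username}:{password}@mysql.example.com:3306/{db_name}",
    }}
```

The returned credentials block is what ultimately lands in the application's environment at bind time; unbind would be the mirror image, revoking the user.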

Naturally we also have an unbinding process, which means we disconnect the service from those other services or applications that are consuming it. Unbinding the service does not destroy the service. It simply disconnects it, because it may be bound to multiple sources, or you may want a different life cycle for it.

When we want to actually get rid of the service itself, we use the process of de-provisioning, and this is where we delete or tear down any resources that may have been created in the process of provisioning the service in the background for us. Again, it may be implemented in many different ways. It may be a physical process, maybe using BOSH to go and delete some virtual machines, or it may be a more logical process that makes an API call to a third-party service to go ahead and delete that service for us.

That is all around the building of the service broker, so a developer function: you take that API, you build the broker, you deploy it, and you have that broker ready to go. Once that broker is ready to go, from an operations perspective you do two steps. You have to register the broker, and you make your plans public. Registering the broker essentially allows the Cloud Controller to go and talk to the broker through that catalog command we spoke about, the catalog API.

It will basically go and say, hey, tell me about yourself, what you are and how I should register you, and it passes all that metadata across. Once I have done that, the plans presented by the service broker are not made available to developers yet. There is another step that has to take place. You have to make the plans public, and once you make those plans public they can be provisioned and used by our developers.
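With the cf CLI, those two operator steps look roughly like this (the broker name, credentials and URL are placeholders; the commands themselves are the standard cf CLI ones):

```shell
# 1. Register the broker so the Cloud Controller can fetch its catalog:
cf create-service-broker my-broker admin s3cr3t https://broker.example.com

# 2. Make the service's plans public so developers can see and provision them:
cf enable-service-access example-mysql
```

Until the second step, the service exists in the Cloud Controller but stays invisible in the marketplace.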

You could have multiple brokers, multiple services and multiple plans, as long as the naming scheme is different between them. What this means is you essentially provide a list of service brokers to your particular developer community, and they can consume them as they like. Now, where do they go to consume the services from? Well, this is where the wonderful world of the services marketplace comes into play.

The services marketplace is an aggregate catalog of all of the services and all the plans that are exposed to end users of the Cloud Foundry instance. The services can come from one or from many service brokers, and if you are using Ops Manager you will see them represented in the marketplace, often as tiles, visual tiles, in the environment. This marketplace allows you to shop, basically, for the services you want to consume, and the services could be running anywhere: on Cloud Foundry, adjacent to Cloud Foundry, or somewhere completely different.

In this marketplace you see what the services are and what they provide. There is a whole bunch of metadata, and it is quite flexible in terms of the metadata, because it has been left as more of a community-standard approach where you can define what you want. You can see what is available to you, what the services may look like, and what they may cost as well, so you can do proper billing or showback-type situations and know what these services are going to cost to consume.

Now, what are some examples of particular services? How do you use them? Well, you may bind to a Redis cache. You may bind to a messaging system, so it could be RabbitMQ. You may bind to databases. You may bind to blob storage, etc. When you bind to these particular services, one of the things that happens during that bind call is you will receive a set of credentials in an environment variable in your application.

This environment variable, probably the main one to remember, is called VCAP_SERVICES, and this contains information. What information, you may ask? Well, I will tell you. The information includes the URI, so the connection string to connect to a particular service; the fully qualified domain name of the host; the port; the name of the service provider or the service instance; a virtual host if necessary; and a username and a password.

All this information gets passed back to the application. It can then be parsed and used appropriately, either directly by the framework, if the framework supports it natively, or within your own code. What this means is it solves that credential-passing problem that we often have in application development. Instead of having credentials stored in YAML files and updated manually, or having some sort of separate process defining who gets what credentials and how, the system automatically creates them through that service broker and provides the correct connection credentials to pass into the application.
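Reading those credentials in your own code is a one-liner with a JSON parser. A minimal sketch, assuming a bound service labelled "example-mysql" (the label and credential layout depend on your broker):

```python
import json
import os

def mysql_uri():
    """Pull the connection string for a bound service out of VCAP_SERVICES."""
    services = json.loads(os.environ["VCAP_SERVICES"])
    # VCAP_SERVICES maps each service label to a list of bound instances.
    instance = services["example-mysql"][0]
    return instance["credentials"]["uri"]
```

Many frameworks do this parsing for you natively, in which case you never touch the variable directly.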

The application seamlessly consumes those credentials, uses them to connect to the service as necessary, and then gets rid of them afterwards. Now, a well-written service broker provides fresh and unique credentials to every application that is bound to a particular service at bind time. That is the best practice. It does not always happen that way, depending on the implementation of the service, but if you are building a service broker it is highly recommended to do it that way.

Give a fresh set of credentials to the application every time the application connects. You may be saying, I might want to turn my hand to building one of these brokers of which you speak. Are there some examples? Well, there are. We will be providing them in the show notes, but there are some really good examples for both Ruby and Java, and there are also examples for other languages out there as well, so depending on the language that you like you can go ahead and see how they are built, get some pointers and get some tips, etc.

That is a little bit of a deeper dive into services and the service broker. Again, the service broker allows you to create services. It defines how those services are obtained, and where they are taken from, but it is not a gateway through which traffic to and from the service traverses. It is simply a method for creating those services. I hope that has demystified that a little bit and given you some ideas of what you could do with that kind of capability.

Again, if you are enjoying the podcast, please do share it with others. We love to spread the word. It has been great to see the audience growing, and the number of countries that we are speaking to grow over time as well. As ever, if there are things you would like us to talk about, please get in touch with us at podcast@pivotal.io. Until then, thanks so much for listening. Keep on building.

Speaker 1:
Thanks for listening to the All Things Pivotal podcast. If you enjoyed it, please share it with others. We love hearing the feedback, so please send any comments or suggestions to podcast@pivotal.io.

About the Author

Simon Elisha is CTO & Senior Manager of Field Engineering for Australia & New Zealand at Pivotal. With over 24 years' industry experience in everything from mainframes to the latest cloud architectures, Simon brings a refreshing and insightful view of the business value of IT. Passionate about technology, he is a pragmatist who looks for the best solution to the task at hand. He has held roles at EDS, PricewaterhouseCoopers, VERITAS Software, Hitachi Data Systems, Cisco Systems and Amazon Web Services.

