Patch While You Relax - Automated PCF Upgrades With Concourse

August 29, 2018 Brian McClain

Hackers are gunning for your enterprise IT systems. And the types of attacks are growing more sophisticated every day. When a new vulnerability is discovered, it’s open season on your applications and infrastructure. So it’s imperative that you apply fixes to your systems the moment a patch is issued. This isn’t just an IT issue, either. Your CEO cares about security, which means you should care about rapid patching.

Keeping your enterprise IT systems up to date with the latest security patches is therefore one of the most important and effective preemptive security measures you can take. In fact, this is a major reason why enterprises are adopting Pivotal Cloud Foundry. Rapidly releasing high-quality software your customers love is how you differentiate your business these days. But none of that matters if your systems are only patched once every few months. Speed is important in security too!

Let’s take a deeper look at how Pivotal Cloud Foundry handles rapid patching.

Stay Up to Date with Ops Manager     

Applying the latest patches promptly ensures vulnerability and bug fixes reach your platform, keeping it safe and secure. PCF Ops Manager makes this process seamless, alerting operators to new patches for their service tiles and allowing them to deploy them with just a few clicks.

There comes a point, however, when the sheer number of environments an operator manages makes it virtually impossible to roll out security patches and upgrades manually. This is where automation comes in. The ability to automatically upgrade environments with zero intervention allows operators to rapidly roll out patches to all of their CF foundations, from development to staging to production.

"Risk is no longer change, it is lack of change." - Brian Kirkland, Senior Cloud Architect, Verizon

At SpringOne Platform 2017, Brian Kirkland, Senior Cloud Architect at Verizon, spoke about the company’s journey from the traditional enterprise operating model of an infrequent, off-hours patching schedule to a more automated approach. Thanks to Pivotal Cloud Foundry, Verizon developers are now able to roll out patches to their applications at any time of day and much more frequently than in the past. But the Verizon operations team wanted to apply the same model to patching their foundations and services. To do so, the ops team turned to Ryan Pei’s team at Pivotal, which is developing the PCF Concourse pipelines.

Automation with Concourse and PCF Pipelines

The power of Concourse is undeniable, giving developers a pluggable automation tool to enable continuous software integration and deployment. This same power can also be used to continuously deliver and upgrade infrastructure. PCF Pipelines are a collection of Concourse pipelines to automatically install and patch PCF. Specifically, the install-pcf pipeline takes you from zero to a running PCF foundation, configuring everything from networks and load balancers to databases all the way up through Ops Manager and Pivotal Application Service (PAS) itself.

While Ops Manager expects quite a bit of infrastructure to be set up ahead of time, Concourse automates a majority of the manual work required to set up a new foundation. With a bit of YAML configuration, you can configure Concourse to watch for new patches from the Pivotal Network every thirty minutes, automatically pull them into your Ops Manager, and apply them to your environment, with as much or as little intervention as you desire. And because all configuration moves into a YAML file supplied to Concourse, the manual half of the process becomes much easier.

The install-pcf pipeline in action, setting up a new environment

Managing pipelines becomes easier and easier with each Concourse release. The recently released version 4.0.0 enhances authentication, adding LDAP support and improving usability for users who are part of multiple teams. For those managing multiple pipelines across multiple teams, this means the dashboard now contains a full-picture view of every pipeline you manage. Be sure to keep an eye on the Concourse releases page, as every release is consistently jam-packed with features and improvements!

In Concourse 4.0.0, users now can see pipelines for all teams they are a part of simultaneously

Automate Smoke Test Pipelines

In their SpringOne Platform presentation, Brian and Ryan touched on another important detail. In addition to using Concourse to deploy and continuously update PCF, Verizon also uses it to automate their smoke test pipelines, ensuring applications are always up and running as expected. Thanks to the flexibility of Concourse, ops teams can go a step further if they choose and configure the update pipeline to kick off an entire suite of tests once their environment has been patched, further increasing confidence in the deployment. While some opt not to automatically patch production, this allows ops teams to confidently roll patches up from lower environments, test the software, and even manually kick off a production upgrade with the click of a single button if they wish.
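To make that concrete, here is a rough sketch of what such a follow-on job could look like in Concourse. This is an illustration, not the actual Verizon pipeline; the job, resource, and task names are hypothetical:

jobs:
- name: run-smoke-tests
  plan:
  - get: pivnet-product                  # hypothetical resource shared with the upgrade job
    passed: [upgrade-tile]               # only run against versions the upgrade job has applied
    trigger: true                        # kick off automatically once the upgrade succeeds
  - task: smoke-tests
    file: ci-tasks/run-smoke-tests.yml   # hypothetical task that runs the smoke test suite

Because the get step carries a passed constraint, the smoke tests only fire for product versions that have already made it through the upgrade job, which is what enables a gated promotion up toward production.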

Configuring Your Upgrade Pipeline

If you take a look at the pipeline to upgrade PCF tiles, you’ll see it’s split into two separate files. The pipeline.yml file contains all of the logic on how to upgrade a tile, while the params.yml file contains all of the parameters specific to our environment. For our example, let’s say we’re running on GCP and we want to upgrade PAS. First, we’ll let the pipeline know which IaaS we’re running on:

# The IaaS name for which stemcell to download. This must match the IaaS name
# within the stemcell to download, e.g. "vsphere", "aws", "azure", "google", and must be lowercase.
iaas_type: google

Next, we’ll provide the pipeline with our Ops Manager credentials. Since we’re using a username and password to authenticate, as mentioned in the comments in the YAML, we’ll set the client ID and secret to empty values:

# Operations Manager
# ------------------------------
# Credentials for Operations Manager. These are used for uploading, staging,
# and deploying the product file on Operations Manager.
# Either opsman_client_id/opsman_client_secret or opsman_admin_username/opsman_admin_password needs to be specified.
# If you are using opsman_admin_username/opsman_admin_password, edit opsman_client_id/opsman_client_secret to be an empty value.
# If you are using opsman_client_id/opsman_client_secret, edit opsman_admin_username/opsman_admin_password to be an empty value.
opsman_admin_username: admin
opsman_admin_password: $uperp@ssw0rd
opsman_client_id:
opsman_client_secret:
opsman_domain_or_ip_address: https://myopsman.some.host

We’ll also configure which tile we’re upgrading and give it some rules around which versions to automatically upgrade to. For example, in this case we can say we’re running ERT 2.1.11 and we only want to automatically apply bug fixes (i.e., we’ll apply version 2.1.12, but not 2.2.0).

# om-linux
# ------------------------------
# The name of the product on Pivotal Network. This is used to configure the
# resource that will fetch the product file.
#
# This can be found in the URL of the product page, e.g. for rabbitmq the URL
# is https://network.pivotal.io/products/pivotal-rabbitmq-service, and the
# product slug is 'pivotal-rabbitmq-service'.
product_slug: "elastic-runtime"

# The minor product version to track, as a regexp. To track 2.1.x of a product, this would be "^2\.1\.[0-9]+$", as shown below.
product_version_regex: ^2\.1\.[0-9]+$
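If you also want to pick up new minor versions automatically (say, 2.2.x as well as 2.1.x), the regular expression can be loosened. This is an optional tweak, not part of the stock params.yml:

# Track every 2.x.x release, including minor upgrades such as 2.2.0:
product_version_regex: ^2\.[0-9]+\.[0-9]+$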

Finally, we’ll grab our API token from Pivotal Network as described in the comment so that our pipeline can authenticate downloads on our behalf:

# Resource
# ------------------------------
# The token used to download the product file from Pivotal Network. Find this
# on your Pivotal Network profile page:
# https://network.pivotal.io/users/dashboard/edit-profile
pivnet_token: 000000000000_0000000
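For context, this token is wired into a Pivotal Network resource in pipeline.yml, which is what actually polls for and downloads new product versions. A simplified sketch of that resource definition, roughly following the pcf-pipelines pattern (exact names may differ):

- name: pivnet-product
  type: pivnet                                 # resource type backed by the pivnet-resource
  source:
    api_token: {{pivnet_token}}                # the token from params.yml
    product_slug: {{product_slug}}             # e.g. elastic-runtime
    product_version: {{product_version_regex}} # the version regex from params.yml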

Great! We’ve successfully gathered all of the configuration needed for our pipeline to upgrade our tile. But before we send it up to Concourse, let’s take a peek at the pipeline.yml file, as this is where we define how often to check Pivotal Network for updates. By default, it will check every 30 minutes, but we can change this as needed:

- name: schedule
  type: time
  source:
    days:
    - Sunday
    - Monday
    - Tuesday
    - Wednesday
    - Thursday
    - Friday
    - Saturday
    interval: 30m
    location: America/Los_Angeles
    start: 12:00 AM
    stop: 11:59 PM
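For completeness, the upgrade job in pipeline.yml consumes this resource as a trigger, so each tick of the clock (or a new product version) kicks off a run. A simplified sketch, with step details trimmed and names that may differ slightly from the real pipeline:

jobs:
- name: upgrade-tile
  plan:
  - get: schedule
    trigger: true                    # run whenever the time resource produces a new version
  - get: pivnet-product
    params: {globs: ["*.pivotal"]}   # download only the .pivotal product file
  # ...upload, stage, and apply-changes steps follow here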

Assuming you already have a Concourse environment up and running, we’re ready to create the pipeline! The syntax is just like any other pipeline: we provide the fly CLI a name for the pipeline and the paths to our pipeline YAML and the file where we define our variables. Then we’ll unpause the pipeline and let it start doing its work:

fly -t my-concourse set-pipeline --pipeline upgrade-ert --config pipeline.yml --load-vars-from params.yml
fly -t my-concourse unpause-pipeline -p upgrade-ert
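To confirm the pipeline was created and is no longer paused, you can list the pipelines on your Concourse target:

fly -t my-concourse pipelines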

That’s it! Our pipeline is up and running, watching for new versions of PAS to become available, and it will automatically bring them into our environment. We can use this same pipeline to upgrade other tiles in Ops Manager as well, including service tiles such as MySQL, or even other offerings such as PKS.

Once configured, our new pipeline will ensure PAS is always up to date

Interested in hearing more about Concourse and platform operations? Make sure to attend SpringOne Platform, September 24th to 27th, 2018 in Washington, D.C.! The agenda is packed with great speakers presenting on a wide range of topics for developers and operators alike. Register today with code S1P200_BMcClain to save $200!

About the Author

Brian McClain

Brian is a Principal Product Marketing Manager on the Technical Marketing team at Pivotal, with a focus on technical educational content for Pivotal customers as well as the open source communities. Prior to Pivotal, Brian worked on both the development and operations of software, with a heavy focus on Cloud Foundry and BOSH at companies in many industries, including finance, entertainment, and technology. He loves learning and experimenting with new technologies, and more importantly, sharing the lessons learned along the way.
