Continuous Deployment From GitHub To PWS Via Concourse

May 3, 2016 Dan Higham

 

In celebration of the first release of Concourse, we thought it would be a good idea to show just how trivial it is to create a Concourse pipeline that continuously deploys new versions of a web application to Pivotal Web Services. This is the same principle that Pivotal Web Services (PWS) uses to automatically scoop up buildpack updates, test them, and deploy them, so your applications are always up to date and secure.

This post explains the steps with an example Concourse manifest, a Go application and its test, and a pipeline YAML file that defines the GitHub source and the target platform, in this case Pivotal Web Services. Once set up, the pipeline will automatically run whenever a new version of the application is pushed to GitHub.

The web application is a very basic Go app, and it contains a test, which must pass before deployment. So first of all, we need a Concourse instance. I already have a BOSH director set up, and it is configured to use the “cloud config” style of configuration. I have already uploaded the latest release of Concourse and the latest BOSH stemcell for my infrastructure (example commands below). There is only one thing left to do: deploy the Concourse manifest, shown below.
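For reference, with the v1 BOSH CLI those uploads look something like this; the stemcell URL depends on your IaaS, so it is left as a placeholder:

$ bosh upload release https://bosh.io/d/github.com/concourse/concourse
$ bosh upload release https://bosh.io/d/github.com/cloudfoundry-incubator/garden-linux-release
$ bosh upload stemcell <stemcell-url-for-your-iaas>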

concourse.yml

---
name: concourse

director_uuid: 00b14a50-a411-4e6d-b8fb-618c7007c6f7

releases:
- name: concourse
  version: latest
- name: garden-linux
  version: latest

stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: web
  instances: 1
  vm_type: default
  stemcell: trusty
  azs: [z1]
  networks: [{name: private}]
  jobs:
  - name: atc
    release: concourse
    properties:
      external_url: http://ci.aaa.com

      # replace with username/password, or configure GitHub auth
      basic_auth_username: ciadmin
      basic_auth_password: xxxxxxxxxxx

      postgresql_database: &atc_db atc
  - name: tsa
    release: concourse
    properties: {}

- name: db
  instances: 1
  vm_type: default
  stemcell: trusty
  persistent_disk_type: large
  azs: [z1]
  networks: [{name: private}]
  jobs:
  - name: postgresql
    release: concourse
    properties:
      databases:
      - name: *atc_db
        # make up a role and password
        role: atc_admin
        password: xxxxxxxxxxx

- name: worker
  instances: 1
  vm_type: large
  stemcell: trusty
  azs: [z1]
  networks: [{name: private}]
  jobs:
  - name: groundcrew
    release: concourse
    properties: {}
  - name: baggageclaim
    release: concourse
    properties: {}
  - name: garden
    release: garden-linux
    properties:
      garden:
        listen_network: tcp
        listen_address: 0.0.0.0:7777

update:
  canaries: 1
  max_in_flight: 1
  serial: false
  canary_watch_time: 1000-60000
  update_watch_time: 1000-60000

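Deploying with the v1 BOSH CLI is then a matter of setting the deployment and deploying, assuming the CLI is already targeting the director:

$ bosh deployment concourse.yml
$ bosh deploy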
At this point the manifest is deployed, but no pipelines are defined yet. To understand what the pipeline needs to do, we should look at the application first. The two files of interest are app.go (the application itself) and app_test.go, which contains one small test for the HTTP handler and its output. Both are shown below.

NOTE: The example application that will be deployed by the pipeline is available on GitHub.

app.go

package main

import (
  "log"
  "net/http"

  "github.com/gorilla/mux"
)

func main() {
  rtr := mux.NewRouter()
  rtr.Handle("/", rootHandler()).Methods("GET")

  http.Handle("/", rtr)

  // Wrap the default mux in the logging handler before serving.
  if err := http.ListenAndServe(":8080", Log(http.DefaultServeMux)); err != nil {
    log.Fatal("ListenAndServe: ", err)
  }
}

// Log logs every request before passing it on to the wrapped handler.
func Log(handler http.Handler) http.Handler {
  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    log.Printf("%s %s %s", r.RemoteAddr, r.Method, r.URL)
    handler.ServeHTTP(w, r)
  })
}

// rootHandler responds to GET / with a plain-text greeting.
func rootHandler() http.Handler {
  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("Hello, World!"))
  })
}
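Running the app locally is a quick sanity check; assuming gorilla/mux is already on the GOPATH, this should print the greeting:

$ go run app.go &
$ curl http://localhost:8080/
Hello, World!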

app_test.go

package main

import (
  "net/http"
  "net/http/httptest"
  "testing"
)

func TestRootHandler(t *testing.T) {
  handler := rootHandler()
  req, _ := http.NewRequest("GET", "/", nil)
  w := httptest.NewRecorder()
  handler.ServeHTTP(w, req)

  if w.Code != http.StatusOK {
    t.Errorf("Home page returned %v, not %v", w.Code, http.StatusOK)
  }

  body := w.Body.String()

  expectedBody := "Hello, World!"
  if body != expectedBody {
    t.Errorf("Body was %q, not %q", body, expectedBody)
  }
}
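The test can be run locally before pushing, using the same steps the pipeline task will perform:

$ go get github.com/tools/godep
$ godep restore
$ go test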

The pipeline for this application should have Go run any tests in the repository, in this case just app_test.go. If the tests pass, the pipeline should push the application to an account on Pivotal Web Services. Generally, it’s good practice to keep the CI task definition with the application itself; the Concourse task for running the tests is shown below.

ci/tests.yml

---
platform: linux

image_resource:
  type: docker-image
  source:
    repository: golang
    tag: '1.6'

inputs:
- name: simple-go-webapp

run:
  path: bash
  args: ['-c', 'go get github.com/tools/godep; cd simple-go-webapp; godep restore; go test']

This application uses godep to manage its dependencies. The task makes a single call to Bash, which installs godep, moves into the app directory, restores the Go dependencies, and runs the tests. The task definition also specifies that this all happens inside the golang:1.6 Docker image. With that task defined in the application repository, the rest of the pipeline lives in its own file.
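A handy property of file-based task definitions is that they can be run one-off, before any pipeline exists, using fly execute (this assumes a fly target named higham-ci, which is set up further below):

$ fly -t higham-ci execute --config ci/tests.yml --input simple-go-webapp=.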

pipeline.yml

---
jobs:
- name: run-tests
  public: true
  serial: true
  plan:
  - get: simple-go-webapp
    trigger: true
  - task: run-tests
    file: simple-go-webapp/ci/tests.yml
  - put: deploy-web-app
    params:
      manifest: simple-go-webapp/manifest.yml
      path: simple-go-webapp/

resources:
- name: simple-go-webapp
  type: git
  source:
    uri: https://github.com/danhigham/simple-go-webapp.git

- name: deploy-web-app
  type: cf
  source:
    api: https://api.run.pivotal.io
    username: {{cf-user}}
    password: {{cf-password}}
    organization: {{cf-org}}
    space: {{cf-space}}
    skip_cert_check: false

This pipeline definition has two resources and one job. The first resource is simply a pointer to the Git repository that contains the web application, and the second defines the Cloud Foundry instance we wish to deploy the application to. The job definition itself is also very simple: in its most basic form, a name and a plan are defined. The plan has three steps: get the application from GitHub, run the tests, and deploy the application.

With a new Concourse instance, the first step is to log in and assign a new alias using the “fly” command line tool.

$ fly --target higham-ci login --concourse-url https://ci.bosh-east.high.am
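If the local fly binary does not match the Concourse server version, it can update itself against the target:

$ fly -t higham-ci sync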


As shown in the pipeline definition, sensitive parameters such as username and password for Cloud Foundry are substituted for variables. When submitting the pipeline definition, the values for those parameters can be set in a separate file.
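A minimal cf-env.yml might look like the following; all of the values here are placeholders:

cf-env.yml

cf-user: ci-user@example.com
cf-password: xxxxxxxxxxx
cf-org: my-org
cf-space: development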

$ fly -t higham-ci set-pipeline --pipeline go-webapp --config pipeline.yml --load-vars-from cf-env.yml
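Note that newly set pipelines start out paused, so unpause the pipeline before expecting any builds to run:

$ fly -t higham-ci unpause-pipeline --pipeline go-webapp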


Inspecting the pipeline using the UI shows the two resources and the job.


The pipeline will automatically run whenever a new version of the application is pushed to GitHub. We can also start a run manually, either by selecting the run-tests job in the UI and clicking the plus symbol at the top right, or from the command line as shown below.
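From fly, trigger-job kicks off a new build and watch streams its output:

$ fly -t higham-ci trigger-job --job go-webapp/run-tests
$ fly -t higham-ci watch --job go-webapp/run-tests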


For more information on Concourse, check out the documentation at concourse.ci. A related blog post also explains how to use Concourse to deploy new versions of buildpacks as they become available on GitHub.

 
