Pivotal CF 1.1 Advances Enterprise PaaS with New Capabilities

March 26, 2014 Vik Rana

At Pivotal, we see an overwhelming desire from customers to innovate with greater speed through software. In November, we launched Pivotal CF, the leading enterprise PaaS, powered by Cloud Foundry. Pivotal CF provides a turnkey private PaaS experience for agile development teams to deploy, scale, and update applications. Customers have given us great feedback on capabilities that help them on-board new applications and leapfrog their competition using Pivotal CF. Providing a platform that enables teams to release software more often is not only a capability Pivotal CF enables, it’s a principle we embrace in delivering Pivotal Software. Just four months after the initial release, we are delivering many new capabilities for developers, cloud operators, and service providers. We will continue to deliver a fast pace of innovation and frequent releases to help enterprises become excellent at software.

Pivotal CF 1.0 delivered many industry firsts for customers:

  • Integrated platform for provisioning and binding Apache Hadoop to applications running on Cloud Foundry
  • Elastic Runtime Service providing a complete, scalable execution environment, extensible to any framework or language running on Linux
  • Automatic provisioning and binding of Pivotal One services – Pivotal HD, Pivotal RabbitMQ and Pivotal MySQL
  • IaaS integrated PaaS Operations Manager for turnkey deployments and updates

What’s new in Pivotal CF 1.1:

  • Improved app event log aggregation – developers can now go to a unified log stream for full application event visibility (Watch) and drain logs to a 3rd party tool like Splunk for analysis (Watch)
  • Buildpack Management – operators can add new runtimes using buildpacks and control the order in which buildpacks are applied (Watch)
  • Monitoring of Pivotal CF components with JMX (Beta add-on) – Access VM and Cloud Foundry component health statistics via JMX for integration with compatible logging, monitoring, and alerting tools
  • Higher availability – this release introduces a 3rd generation application health manager for higher system and application availability
  • Simple experience for adding new Services – providers can develop and expose new services in the Pivotal CF catalog using a streamlined V2 Service Broker API
  • High velocity deployment and updates for Pivotal HD – enterprises can go from zero to Hadoop in minutes using Pivotal CF’s support for parallel deployment (Watch CF BOSH deploy and scale a Hadoop cluster on AWS faster than Amazon Elastic MapReduce)

Operations Manager

  • Faster Developer Console – improved performance, enhanced usability for managing teams and interacting with services, and a new look and feel

[Screenshot: the updated Developer Console]

  • Faster CF CLI – improved performance, with native installers for all modern versions of Windows, OS X, and Linux

Let’s take a quick look at what some of these new capabilities mean for customers.

Improved App Event Log Aggregation

Deployed applications receive integrated logging so developers can go to one place to see what is happening with their applications. Pivotal CF now aggregates an application’s lifecycle events (e.g. staging, start, stop, restart), events from components like the DEA and Router, and application events (captured from STDERR and STDOUT) into a unified log stream. This allows developers to:

  • Tail logs interactively from the CF CLI – Watch
  • Dump a recent set of application logs
  • Analyze their logs with 3rd party tools by continually draining them to a remote syslog drain URL – Watch

In Pivotal CF 1.1, log streams are scoped to a unique application ID and instance index so developers can quickly understand application behavior and pinpoint issues.
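The logging commands described above can be sketched with the cf CLI. This is illustrative only: the app name matches the spring-music example used later in this post, and the drain host and port are placeholders you would replace with your own syslog endpoint.

```shell
# Tail the unified log stream interactively
cf logs spring-music

# Dump the most recent set of log lines and exit
cf logs spring-music --recent

# Drain logs to a 3rd party tool (e.g. Splunk) via a
# user-provided service instance pointing at a syslog endpoint
cf create-user-provided-service splunk-drain -l syslog://logs.example.com:5000
cf bind-service spring-music splunk-drain
cf restart spring-music   # restart so the drain takes effect
```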

Let’s see a few examples of how easy it is to understand an app’s behavior by inspecting the log. Watch the demo.

First, we push the sample spring music app with cf push spring-music, then immediately tail the log using cf logs spring-music in another window:

[Screenshot: tailing the unified log stream with cf logs]

We can see an ‘application staging’ request ([DEA]), then a Cloud Controller application event ([API]), followed by a detailed set of application staging events ([STG]), and finally an ‘application start’ event ([DEA]). The staging event log entries are particularly useful for debugging long-running staging tasks, e.g. an application with a large number of runtime dependencies and/or long-running database initialization tasks. Finally, logging output from the first application instance ([App/0]) – captured from STDERR – is shown.

Adding a 3rd party tool for log search and analysis is also easy using a remote syslog drain. In this example, we’ve set up a Splunk syslog drain. Watch. We can bind to the remote service using a ‘user-provided service instance’.

Let’s now go to our application in a browser. We refresh our Splunk event console and now see our log stream showing router ([RTR]) Apache-formatted web log entries alongside Spring framework INFO logging. From here we can do full text searches and apply filters for debugging and log trend analysis.

[Screenshot: Splunk event console showing the application log stream]

The unified log stream is also useful for understanding why instances of an app crashed. Here’s how a sample app instance crash event appears in the new unified logging format – again notice the exit reason correlated with the timestamp, app instance index (0) and application GUID:

[Screenshot: an app instance crash event in the unified logging format]

Buildpack Management

Developers simply upload their application files to Pivotal CF for an “it just works” experience. Buildpacks detect, download and configure the appropriate languages, frameworks, containers and libraries for the application, relieving the developer of this burden. Buildpacks are a shared approach with Heroku, IBM and a broad ecosystem of providers, ensuring support for almost any language. With Pivotal CF 1.1, cloud operators can bring languages, frameworks and application containers (PHP, Python, tcServer, WebSphere Liberty, etc.) developers love into the organization using admin buildpacks while controlling the order in which buildpacks are applied. Operators can also change buildpack configuration details.

Let’s take a look at the common use case of specifying the version of the JDK used for running Java applications. Watch the demo. We slip into the cloud operator role to fork the Pivotal CF default Java buildpack and specify the version of the Open JDK in ./config/open_jdk_jre.yml. The example below shows pinning the Open JDK version to 1.7.0_40.
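The change in the forked buildpack amounts to a one-line edit. A sketch of the relevant stanza is shown below; the exact keys in ./config/open_jdk_jre.yml vary between java-buildpack releases, so check your fork before editing.

```yaml
# config/open_jdk_jre.yml in the forked java-buildpack
# Pin the Open JDK version (the default uses a wildcard such as 1.7.0_+)
version: 1.7.0_40
```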

[Screenshot: forking the Java buildpack and pinning the Open JDK version in open_jdk_jre.yml]

We can then simply download the forked repo as a zip file and unpack it in our local environment. In our role as cloud admin we then log into the Pivotal CF instance using the new version of the CF CLI. Once logged in, we can use the buildpack commands to upload the modified Java buildpack:

[Screenshot: uploading the modified Java buildpack with the cf CLI]

This will upload the Java buildpack, name it java-buildpack-modified and place it at index 0, meaning it will be the first buildpack run when applications are pushed to Pivotal CF, ahead of the system-supplied Java buildpack. After uploading, we can verify the buildpack and its position:

[Screenshot: verifying the uploaded buildpack and its position]
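With the cf CLI, the upload and verification steps above might look like the following. The zip filename comes from this example’s forked buildpack; position 0 places it ahead of the system-supplied buildpacks.

```shell
# Upload the modified buildpack and place it first in the detection order
cf create-buildpack java-buildpack-modified java-buildpack-modified.zip 0

# List all buildpacks with their positions to verify
cf buildpacks
```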

That’s it! Now whenever a Java-based application is pushed, the modified Java buildpack will be used to stage and run the application instead of the system-supplied one.

To verify installation push a simple web application:

[Screenshot: pushing a sample web application staged with the modified buildpack]

Should we ever want to uninstall the buildpack, it can also be done via cf:

[Screenshot: uninstalling the buildpack with the cf CLI]
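The uninstall is a single command, using the name we assigned at upload time:

```shell
# Remove the modified buildpack from the platform
cf delete-buildpack java-buildpack-modified
```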

In the developer role, we can also specify a custom buildpack by URL when pushing an application:

[Screenshot: pushing with a custom buildpack URL]
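Specifying a buildpack at push time uses the -b flag. In this sketch, the app name is a placeholder, and the URL points at the open source Cloud Foundry Java buildpack repository as an example of any Git-hosted buildpack:

```shell
# Push with an explicit custom buildpack instead of relying on detection
cf push my-app -b https://github.com/cloudfoundry/java-buildpack.git
```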

In the operator role, we can control the buildpack environment based on our organization’s needs by selecting the ‘Disable Custom Buildpacks’ option in Operations Manager at install time, disabling the -b custom buildpack option on cf push:

[Screenshot: the ‘Disable Custom Buildpacks’ option in Operations Manager]

Monitoring Pivotal CF Using JMX

Operators looking to monitor the health and performance of their Pivotal CF deployment can now do so with the Pivotal Ops Metrics Add-On (beta). This add-on delivers typical machine metrics (CPU, memory, disk) and statistics for the various components of a Pivotal CF deployment via the JMX protocol:

  • Router
    • Number of requests for different CF components
    • Number of responses for different CF components
  • Cloud Controller
    • Number of completed requests
    • Number of outstanding requests
  • DEA
    • CPU Utilization
    • Amount of disk space allocated for applications
    • Amount of memory allocated for applications
    • Amount of disk space used
    • Amount of memory used

Here’s an example of the JMX data from a DEA instance during two cf push operations. In the middle graph, the amount of memory available first goes down by 1GB and then an additional 2GB as more app instances are added. (The graph expresses the amount of available memory and disk as a percentage.) At the same time, the amount of CPU and memory actually used hasn’t changed much, as there’s no traffic going to the applications during the push request.

[Screenshot: JMX graphs from a DEA instance during two cf push operations]

Operators can access this information through a JMX-compatible monitoring tool (e.g. JConsole, Java Mission Control) of their choice and integrate it with their existing monitoring and alerting infrastructure. This information can also be used for proactive monitoring use cases such as expanding capacity of Pivotal CF components based on historical resource utilization. For example, an operator could choose to expand capacity over time by scaling out the number of DEA instances when DEA memory utilization crossed a given threshold.
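As a quick way to browse these metrics, JConsole (bundled with the JDK) can attach directly to a remote JMX endpoint. The host and port below are placeholders; use the values configured for your Ops Metrics deployment.

```shell
# Attach JConsole to the Ops Metrics JMX endpoint
# (replace host and port with your deployment's values)
jconsole metrics.example.com:44444
```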

Simple Experience for Adding New Services

Add-on services are one of the primary modes for extending functionality of the Cloud Foundry platform, and can deliver a broad range of benefits to a software development team. Services can provide data persistence for applications, as well as search, caching, messaging, and more. But services not only enhance applications, they can also better enable development teams themselves, delivering self-service provisioning of any resource a service provider can automate, such as accounts on a continuous integration system or multi-tenant project management application.

Services can be deployed anywhere your users and their applications can reach them, and may be self-operated or provided by another team or organization. Integration is by way of a Service Broker, a component operated by the service provider which advertises a catalog of one or more services, and translates API calls from Cloud Foundry into service-specific requests for resources and credentials. For more information, see Cloud Foundry Services for operators and service authors, and our Developer Guide to Services for platform end users.

With the release of the v2 Service Broker API, and new operator-facing features in Cloud Foundry, providing end users with self-service, on-demand provisioning of new service offerings has become much easier. We’ve moved responsibility for catalog management and orphan mitigation out of the service broker and into the platform, removing the need for service brokers to read and write to the platform; all API calls are now outbound to the service brokers. By implementing a v2 service broker, service providers can support multiple Cloud Foundry instances; simply provide CF operators with unique credentials to your broker.
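Under the v2 API, a broker advertises what it offers as a JSON catalog served from its GET /v2/catalog endpoint. A minimal sketch is below; the field names follow the v2 Service Broker API, while the service name, plan name, and GUIDs are invented for illustration.

```json
{
  "services": [{
    "id": "service-guid-here",
    "name": "example-db",
    "description": "An example database service",
    "bindable": true,
    "plans": [{
      "id": "plan-guid-here",
      "name": "small",
      "description": "A small development instance"
    }]
  }]
}
```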

Along with these API changes, we’re putting control of the Service Marketplace into the hands of the Cloud Foundry operator. The Marketplace is the aggregate of all services advertised by all service brokers registered with a Cloud Foundry instance. With a URL and credentials obtained from a service provider, an operator can register the provider’s service broker with Cloud Foundry. Upon registration of a broker, the platform will fetch the catalog of services the broker offers. New service offerings are initially available only to the operator, who can then decide whether to make a service available to all end users or only to particular organizations. For more information, see Managing Service Brokers.
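Registration itself is a one-line cf CLI command run by the operator. The broker name, credentials, and URL below are placeholders for the values obtained from the service provider.

```shell
# Register the provider's broker; the platform then fetches its catalog
cf create-service-broker mybroker broker-user broker-pass https://broker.example.com

# Verify the registered brokers
cf service-brokers
```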

High Velocity Pivotal HD Deployment and Updates

Our goal is to enable operators to deploy and update distributed systems in minutes so enterprises can be more agile in responding to business needs. Pivotal CF Operations Manager automates large-scale service deployment by taking control of the underlying IaaS API to start distributed system components as a set of ‘jobs’ running across a resource pool of Linux containers and VMs. In Pivotal CF 1.1 these jobs are now started in parallel and component packages are pre-compiled. Cloud operators can now deploy Pivotal HD in just minutes. Watch CF BOSH deploy and scale a Hadoop cluster on AWS faster than Amazon Elastic MapReduce.


Summary

We are committed to helping a new generation of developers transform software delivery in enterprises and will continue to deliver a steady pace of innovation to:

  • Enable developers with an end-to-end platform where cloud services and runtimes as a service allow them to build and update applications easily
  • Enable every application and the platform itself with built-in services for operational benefits: high availability, logging, monitoring, auditability for compliance, etc.
  • Enable enterprises to deliver large scale services and applications on choice of IaaS to optimize efficiency, cost, geographic distribution, capacity planning and regulatory compliance

Download or learn more to get started on the journey to great software.

