Security & Compliance with Pivotal Platform

Introduction

What kinds of threats do IT security experts face today? Let’s look at the sobering picture presented in the Symantec 2017 Internet Security Threat Report. Here are a few alarming statistics from 2016:

  • Over a billion identities were exposed
  • Vulnerabilities were found in over three quarters of websites
  • Ransomware detections increased over 36 percent
  • One in 131 emails sent was malicious, the highest rate in five years.

The volume of threats is growing at an exponential rate, and attackers are moving faster than ever. Thankfully, the threats you face fall into a few familiar categories:

  • Malware. This is a catch-all term for viruses, trojan horses, worms, spyware, and other programs that have malicious intent.
  • Advanced persistent threats. These are breaches where an attacker gains access to a network and stays there undetected for a long period of time. The longer the threat stays undetected, the more data that’s at risk.
  • Leaked credentials. Credentials control access to information or other resources. No matter how hard an organization tries to lock-down employee credentials to critical systems, they always seem to get out into the wild.

Pivotal Platform, a modern, cloud-native product, can play an instrumental role in improving your security posture. At the same time, it can also help your development and operations teams work more effectively. In this paper, we review the security features of Pivotal Platform, and discuss how you can use them to reduce risk in your organization.

Identity & Access Management

The cornerstone of any enterprise security implementation is identity and access management (IAM). The platform embraces a microservice-based approach to securing identity and access across an organization. Further, Pivotal Platform works with your existing identity management systems. We offer integrations with on-premises options like Active Directory Federation Services, and cloud-based tools like Azure AD and Google Cloud Platform OIDC.

Pivotal Platform's identity model can be easily extended as deployments grow with new services. Pivotal Platform’s SAML-based authentication is compatible with a multitude of providers.

Pivotal Platform also includes an extensive API that allows for user management, and Access Control List (ACL) reporting.

Security Services and IAM groups that perform user access audits will find the details in the following sections helpful.

User Account and Authentication (UAA)

The User Account and Authentication (UAA) is the identity management service for Pivotal Platform. It is an OAuth2 provider, issuing tokens for client applications and APIs when they act on behalf of Pivotal Platform users. UAA works with the login server to authenticate users with their Pivotal Platform credentials. It performs single sign-on (SSO) duties. UAA has endpoints for managing user accounts, and other functions like registering OAuth2 clients.
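Because UAA issues OAuth2 tokens as JWTs, their claims (subject, scopes, issuer) can be inspected once decoded. The sketch below decodes a fabricated token payload; the issuer URL, subject, and scope values are made up for illustration, and a real client must verify the token's signature against UAA's signing key before trusting any claim.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT access token."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWT encoding strips
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A fabricated token for illustration only (the signature is a placeholder)
claims = {
    "sub": "a-user-guid",                          # fabricated subject
    "scope": ["cloud_controller.read"],            # fabricated scope
    "iss": "https://uaa.example.com/oauth/token",  # fabricated issuer
}
fake_token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "signature-placeholder",
])

print(decode_jwt_payload(fake_token)["scope"])  # ['cloud_controller.read']
```

In production, token verification and introspection are handled by UAA and client libraries; this sketch only shows what travels inside the token.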

Single Sign-On (SSO)

Users log in via the Single Sign-On service to access other applications that are hosted or protected by SSO. This approach improves both security and productivity, since users do not have to log in to each application separately.

SSO converts legacy identity protocols into modern cloud-native, federated protocols for your applications and APIs to consume. Developers are responsible for selecting the authentication method for application users. They can choose native authentication provided by the UAA, or opt for external identity providers.

After authentication, the Single Sign-On service uses OAuth 2.0 for authorization. (OAuth 2.0 is an authorization framework that grants applications delegated access to resources on behalf of a resource owner.)

Developers define resources required by an application bound to a Single Sign-On (SSO) service instance and administrators grant resource permissions.

Integration guides for popular identity management systems (like Active Directory) are available in the SSO documentation.

CredHub

CredHub is a centralized credential management component for Pivotal Platform. CredHub secures credential generation, storage, lifecycle management, and access. CredHub can mitigate the risk of leaked credentials, a common culprit in data breaches.

CredHub performs a number of different functions to help generate and protect the credentials in your deployment, including:

  • Securing data for storage
  • Authentication
  • Authorization
  • Access and change logging
  • Data typing
  • Credential generation
  • Credential metadata
  • Credential versioning

CredHub was first introduced in Pivotal Platform 1.11. Subsequent releases of Pivotal Platform feature CredHub more prominently in credential management scenarios.
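To illustrate the generation and versioning concepts listed above, here is a toy in-memory model. This is not CredHub's actual API (CredHub is accessed over an authenticated HTTPS API or the credhub CLI); it is only a sketch of the behavior, with invented names.

```python
import secrets

class CredentialStore:
    """Toy model of CredHub-style generation, typing, and versioning."""

    def __init__(self):
        self._store = {}  # credential name -> list of versions (newest last)

    def generate(self, name: str, cred_type: str = "password", length: int = 30) -> str:
        """Generate a random credential value and record it as a new version."""
        value = secrets.token_urlsafe(length)[:length]
        self._store.setdefault(name, []).append({"type": cred_type, "value": value})
        return value

    def get(self, name: str) -> dict:
        """Return the latest version of a credential."""
        return self._store[name][-1]

    def versions(self, name: str) -> int:
        return len(self._store[name])

store = CredentialStore()
first = store.generate("/my-org/db-password")
second = store.generate("/my-org/db-password")  # rotation creates a new version
assert store.get("/my-org/db-password")["value"] == second
print(store.versions("/my-org/db-password"))    # 2
```

The point of the sketch: rotation never edits a credential in place; it appends a new version, which is what makes frequent rotation (the "rotate" of the three R's) cheap and auditable.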

Role-Based Access Controls (User Roles)

Pivotal Platform supports enterprise access controls with Orgs, Spaces, Roles, and Permissions. These constructs work in concert to ensure developers and operators have the right level of access. The Apps Manager is a web-based tool for administering these roles and permissions.

  • Org. An org is a development account used by an individual or a team. Collaborators access an org with user accounts. Collaborators share the org's resource quota plan, applications, services availability, and custom domains.
  • User Accounts. A user account represents an individual in a Pivotal Platform installation. A user may have different roles in different spaces within an org.
  • Spaces. Every application and service is part of a space. Each org contains at least one space. A space provides a shared location for application development, deployment, and maintenance. Each space role applies only to a particular space.
  • Roles and Permissions. A user can have one or more roles. These roles define the user’s permissions in the org and within specific spaces in that org.
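The interplay of roles and permissions can be sketched as a simple lookup. The role names below (SpaceManager, SpaceDeveloper, SpaceAuditor) are the platform's own, but the permission names are invented for the example; the authoritative role/permission matrix is in the platform documentation.

```python
# Illustrative mapping only; permission names are invented for this sketch.
SPACE_PERMISSIONS = {
    "SpaceManager":   {"view_space", "manage_space_users"},
    "SpaceDeveloper": {"view_space", "push_apps", "bind_services"},
    "SpaceAuditor":   {"view_space"},
}

def can(user_roles, permission):
    """Return True if any of the user's space roles grants the permission."""
    return any(permission in SPACE_PERMISSIONS.get(r, set()) for r in user_roles)

assert can({"SpaceDeveloper"}, "push_apps")       # developers can push apps
assert not can({"SpaceAuditor"}, "push_apps")     # auditors are read-only
```

A user's effective permissions are the union of all of their roles, and each space role applies only within its space.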

REST API (cf curl)

Pivotal Platform offers several tools to monitor and manage the platform. Ops Manager includes a dashboard for monitoring VM status and retrieving log files from system components, as well as an API interface and CLI. Pivotal’s Ops Metrics is a JMX interface that exposes platform statistics; it can be used with monitoring tools like Splunk. Furthermore, the Loggregator system is often used to drain logs into Datadog, ELK, and Splunk. Most enterprises extend these tools with customizations to support their business scenarios.

To this end, the Pivotal Platform API is open source and freely available. While this offers flexibility in scripting, REST APIs can be challenging to use on their own; hence, the familiar curl command is packaged into the Pivotal Platform CLI as ‘cf curl’.

cf curl can be used to generate custom reports for auditing and identity management scenarios. A common scenario: an operator needs to produce a list of the users who have recently accessed Pivotal Platform. The operator also needs to know what was accessed, when, and for how long.

Operators can write a script that uses cf curl, and then integrate it into admin pipelines. An example of such a script, which generates an ACL report detailing org access with CSV output, can be found on GitHub.
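A minimal sketch of such a report generator, assuming a response shaped like the Cloud Controller v2 user_roles endpoint; in a real script the JSON would come from invoking cf curl (for example via a subprocess), and the sample usernames below are fabricated.

```python
import csv
import io
import json

# Sample payload shaped like GET /v2/organizations/:guid/user_roles;
# in practice this comes from `cf curl`, not a string literal.
sample = json.loads("""
{"resources": [
  {"entity": {"username": "alice@example.com",
              "organization_roles": ["org_user", "org_manager"]}},
  {"entity": {"username": "bob@example.com",
              "organization_roles": ["org_user"]}}
]}
""")

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["username", "org_roles"])
for resource in sample["resources"]:
    entity = resource["entity"]
    writer.writerow([entity["username"], ";".join(entity["organization_roles"])])

print(out.getvalue())
```

The same pattern (cf curl, then transform to CSV) extends to audit events, space roles, and other resources the API exposes.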

Service Brokers

Developers using Pivotal Platform often extend their code by binding their app to a variety of services (databases, metrics, machine learning, and so on). Service Brokers manage the lifecycle of these bound services. They manage calls for the provision/creation, binding/unbinding, and deprovisioning of services. Documentation details the implementation, deployment, and general concepts of Service Brokers in the Pivotal Platform ecosystem.

Org Assignment of Services

Services are often consumed through service plans. Application developers select a service plan in order to consume a service. Service plans include some element of sizing, so developers can ensure efficient use of that service. Platform operators will want granular control over access to service plans. All new service plans from standard brokers are private by default. So when you add a new broker, or add a new plan to an existing broker’s catalog, service plans won’t immediately be available to end users. Administrators must enable service plans for end users, and manage limited service availability.

Alternatively, brokers can use the Pivotal Platform permissions model to limit access. Space-scoped brokers are registered to a specific space in the org. All users within that space can automatically access the broker’s service plans. With space-scoped brokers, service visibility is not managed separately.

Administrators can use the service-access CLI command to see the current access control setting for every service plan in the marketplace, across all service brokers.

$ cf service-access
getting service access as admin...
broker: elasticsearch-broker
   service        plan     access    orgs
   elasticsearch  standard limited
broker: p-mysql
   service   plan        access   orgs
   p-mysql   100mb-dev   all

Service Brokers and IAM

Admins use the cf enable-service-access command to give users access to service plans. The command grants access at the org level, or across all orgs.

When an org has access to a plan, its users see the plan in the services marketplace (cf marketplace) and its space developer users can provision instances of the plan in their spaces.

Admins use the cf disable-service-access command to disable user access to service plans. The command denies access at the org level or across all orgs.

The -p and -o flags to cf disable-service-access let the admin deny access to specific service plans or orgs.

Threat and Vulnerability Mitigation

Threat and vulnerability mitigation is one of the most important aspects of securing IT systems. Teams tasked with handling vulnerabilities work to prevent attacks and mitigate them. They liaise with other groups (such as Incident Response and the NOC) to coordinate organization-wide responses to newly discovered vulnerabilities.

Triaging vulnerabilities on an automated cloud-native platform like Pivotal Platform is easier than with traditional infrastructure. This convenience stems from new concepts and methodologies, collectively called “immutable infrastructure.”

Pivotal Platform supports the following patterns and practices, called the Three R's of Enterprise Security:

  • Repair. Repair vulnerable software as soon as updates are available.
  • Repave. Repave servers and applications from a known good state. Do this often.
  • Rotate. Rotate user credentials frequently, so they are only useful for short periods of time.

Another important element for Pivotal Platform customers: Pivotal attempts to deliver a fix for high or critical CVEs within 48 hours. The platform allows you to remediate with zero downtime. Pivotal Application Security has more details.

In this section, we review how Pivotal, Pivotal Platform, and related components help you reduce your exposure to vulnerabilities. We’ll also explain how the platform helps you quickly resolve them.

We also discuss topics such as highly available, stateless virtualization components (managed by BOSH), automated container builds with standardized dependency packaging, and container hardening. Finally, we demonstrate how Pivotal Platform handles CVE mitigation—for all containers and VMs in your deployments, across any cloud.

BOSH & Stemcells

The BOSH Deployment Model

BOSH unifies release, deployment, and lifecycle management of distributed systems. It encompasses the software layer, as well as the control and management of the underlying infrastructure. BOSH is central to the “repair” and “repave” security models.

BOSH implements three layers of packaging. Together, these create the BOSH deployment model, shown in Figure 1. Starting from the bottom:

  • A stemcell is an operating system for the deployment. This can be Linux or Windows Server.
  • A release is the source code for what the operator wants to deploy. Examples include Pivotal Platform, Kubernetes, Redis, RabbitMQ, and etcd.
  • A manifest is a configuration file that describes how the system should be deployed to its chosen infrastructure target.

Figure 1: BOSH’s flexible deployment model, unifying all components of a software system into a single BOSH package.
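The three layers come together in a deployment manifest. Here is a skeletal sketch; every name and version below is illustrative, not a supported product manifest.

```yaml
# Skeletal BOSH deployment manifest (illustrative names and versions)
name: redis-deployment

releases:            # the software to deploy
- name: redis
  version: "15"

stemcells:           # the base OS image
- alias: default
  os: ubuntu-xenial
  version: latest

instance_groups:     # how the release jobs map onto VMs
- name: redis-node
  instances: 1
  azs: [z1]
  vm_type: small
  stemcell: default
  networks:
  - name: default
  jobs:
  - name: redis-server
    release: redis

update:              # rollout policy for zero-downtime updates
  canaries: 1
  max_in_flight: 1
  canary_watch_time: 30000
  update_watch_time: 30000
```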

Let’s step through these three layers, starting with the stemcell.

BOSH Stemcell

A stemcell is the base Operating System (OS) image that powers the release. Stemcells are created with a minimalist mindset; each includes a slimmed-down OS, a few common utilities, and the BOSH agent. Let’s consider an example stemcell, shown in Figure 2. Three pieces stand out:

  • image – OS image in a format understood by the target infrastructure (usually a .raw, .qcow, or .ova)
  • stemcell.MF – a YAML file with stemcell metadata
  • stemcell_dpkg_l.txt – a text file that lists the packages installed on the stemcell. This file is simply a reference to tell the operator what’s included in the stemcell.

Figure 2: BOSH’s minimal stemcell architecture
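For example, the stemcell.MF metadata might look like the following (all values are illustrative placeholders):

```yaml
# Illustrative stemcell.MF contents
name: bosh-vsphere-esxi-ubuntu-xenial-go_agent
version: "621.74"
operating_system: ubuntu-xenial
sha1: <checksum of the packaged image>
cloud_properties:
  infrastructure: vsphere
```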

As a first step in a deployment, BOSH creates virtual machines (VMs) and lays down a versioned OS. The BOSH team creates and hardens stemcells daily by configuring them to very strict requirements, thereby reducing their attack surface. As a result, each stemcell has fewer vulnerabilities. The stemcells inside of Pivotal Platform have been hardened in accordance with best-practice guidance from well-known references such as the DISA STIGs, which provide authoritative, customer-verifiable guidance.

Stemcells provide a powerful separation between the OS and the other software packages bundled in a deployment. Each stemcell—no matter the underlying infrastructure—is exactly the same. This allows for rapid, reliable mobility between different infrastructure targets. Stemcells are distributed and updated via https://bosh.io.

Now, let’s review how BOSH and stemcells work together to help enterprises mitigate vulnerabilities.

“Repair”

Security teams regularly repair vulnerable, unpatched software in enterprise systems. And yet, many organizations run their business day-to-day with a significant percentage of systems running with known vulnerabilities. Why is this? Two reasons:

  • Patching is typically a manual task, performed by operations staff as part of regular systems maintenance.
  • Ops teams are forced to “do more with less.” There are simply too few engineers on staff. This risk is bigger than most CIOs would like to admit.

Meanwhile, newer attack vectors such as Advanced Persistent Threats (APTs) use unpatched software to wreak havoc among commercial and government systems. What’s an organization to do?

For adopters of Pivotal Platform, the answer is surprisingly simple.

The “Repair” functionality inside of Pivotal Platform uses the power of automation to patch vulnerable software, operating systems, and environments in a consistent, continuous fashion, with zero downtime.

Here’s the usual flow.

1. The first step to mitigating vulnerabilities is to identify them. Pivotal receives private reports on vulnerabilities from customers, and from field architects via a secure disclosure process. Pivotal also monitors public repositories of software security vulnerabilities to identify newly discovered vulnerabilities that might affect one or more of our products.

2. Once identified, vulnerabilities are classified with a Low, Medium, or High severity rating. Pivotal attempts to deliver a fix within 48 hours of disclosure. The fixes are posted on pivotal.io/security.

3. The vulnerability is fixed and tested. Once the patch is ready for enterprise consumption, it is published. Vulnerabilities in both software and the operating system are addressed through new releases that contain all of the required and latest security patches and fixes.

4. Pivotal Platform administrators (typically operations staff) download and apply the new release. Every core component of Pivotal Platform uses the same stemcell. When it comes time to patch, fixes are not applied individually to an operating system or a vulnerable software component. Rather, the entire platform is restaged and redeployed using the updated OS and software releases. BOSH handles the heavy burden of redeploying and updating every component, and ensures a successful deployment of the patched OS and software releases (with zero downtime) to the production platform. Workloads are automatically rebalanced to other operating environments while BOSH redeploys and updates vulnerable operating environments and software components.

The result? A redeployed and updated Pivotal Platform environment. The ease and convenience of this process has profound effects upon the enterprise. All components, including operating system and individual software components, are now patched and updated to the most recent, secure level. Configuration drift amongst operating environments is now fully mitigated—all systems are now running the latest patch and kernel level of the operating system.

This approach conveniently addresses the two issues that cause a high percentage of vulnerable servers: manual tasks and a lack of staff. Automation, like that offered by Pivotal Platform, is the way to protect IT systems faster, with less effort.

“Repave”

What’s running a legacy production enterprise app today? We see one pattern more often than others.

A typical enterprise system was hand-crafted by systems administrators and operations teams. Sysadmins install the operating system, middleware, and tools for configuration management and monitoring.

Workloads and software are then moved to the system, running on a physical or virtual server. Countless hours are spent to ensure the configurations of these systems meet the organization’s operational and security baselines. Eventually, the system is deemed ready for production. The system is unique; it’s sometimes even given a name. From there, the system serves its purpose for several years, with incremental changes every six months.

The challenge with this approach? These systems are one-of-a-kind snowflakes. They require a specific, one-off set of operational and security procedures to stay online.

This configuration is highly vulnerable to an Advanced Persistent Threat (APT) style attack. Malware can finagle its way into a system, and take hold over an extended period of time.

The “repave” concept in Pivotal Platform offers a new approach to protect against APTs.

Ideally, systems should be part of a lean supply chain, enabling the business to clone systems repeatedly. With immutable infrastructure, servers, software, and their associated configuration can easily be thrown away and replaced instantly.

Pivotal Platform takes this idea to its logical conclusion: repave virtual machines and containers every few hours with zero downtime. This ensures production systems are fully patched and running at the most secure, vetted operational baseline. By repaving servers and applications from a known good state on a routine basis, an attacker has a much smaller window to strike. Any malware is overwritten—“paved over”—with the clean, secure baseline of the operating system, middleware, and application code. This dramatically improves the security posture of organizations. It makes them much more secure against APT style attacks.

Use Concourse for Pivotal Platform to Repair and Repave

Concourse for Pivotal Platform is a tool to help teams create and run continuous integration and delivery pipelines in and for Pivotal Platform. This is the recommended way to repair and repave the platform.

Pivotal patches critical vulnerabilities anywhere in the platform—embedded operating system, middleware, Pivotal Platform component—typically within 48 hours of a fix becoming available. With Concourse for Pivotal Platform, customers can set up pipelines that detect and deploy that patch to their Pivotal Platform installations automatically, often with zero downtime.

Rigorous automated testing and continuously updated platforms improve a company's security posture while also freeing up operators to focus on delivering new features to application developers—allowing IT organizations to focus on creating value-added software for their customers.

Figure 3: Concourse for Pivotal Platform automatically applies new updates and patches to Pivotal Platform.

Heightened Security

Pivotal Platform users can repair vulnerable operating systems and application stacks consistently within hours of patch availability. In addition to leveraging Concourse to update the platform itself, Concourse users can continuously deploy their own applications.

Pivotal Platform's ability to repair vulnerable operating systems and application stacks consistently within hours of patch availability allows organizations to adopt a “faster is safer” approach to cybersecurity with the three Rs of enterprise security: (1) Rotate the credentials frequently so they are only useful for short periods of time, (2) Repave servers and applications from a known good state to cut down on the amount of time an attack can live, and (3) Repair vulnerable software as soon as updates are available.

Ultimately, Pivotal Platform customers can load, test, and apply security patches to their entire cloud platform with complete automation.

Platform Automation

Pivotal Platform is used by developers at the world’s largest companies, who support tens of thousands of applications that run their multi-billion dollar businesses. At Pivotal, we apply this approach to our own operations, which allows us to maintain Pivotal Web Services® (PWS) and the thousands of applications running on it with two IT operators. With Concourse, our customers can operate the applications running their company with the same level of efficiency.

Concourse removes the need to scale operations teams as the number of onboarded application development teams increases. This automation not only speeds up incident response times but also provides a consistent experience for developers across Pivotal Platform environments: across public clouds and on-prem; across development, test, and production.

Buildpacks

Buildpacks in Pivotal Platform help speed deployment. A buildpack detects the type of application being deployed (JVM-based, .NET, Python, etc.). It then “builds” the correct runtime, middleware, and shared library stack to run the app on the platform.

Buildpacks enable developers and release engineers to deploy application code that’s automatically configured to run. At the same time, InfoSec teams have governance and control over the runtime definition. That means security teams always know what’s running on the platform—useful insight that helps assess risk!

When a cf push command is performed, the application code is uploaded to Pivotal Platform. At this point, a “staging” process is started by the Cloud Controller in the platform. The staging process invokes a “detect” method in each of the installed buildpacks to determine which one is applicable. Then, the applicable buildpack’s compile method is called. The output of the compile method: a container image called a “droplet”, which includes the application code as well as any other binaries and libraries required by the application. A droplet is then used to create the containers that host the application.

This is the automated containerization process in Pivotal Platform.
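The detect-then-compile flow can be sketched as a conceptual model. Real buildpacks are standalone programs invoked by the platform against the uploaded application files, not Python classes, and the file-name heuristics below are deliberately simplified.

```python
# Conceptual model of buildpack staging; names and heuristics are illustrative.
class JavaBuildpack:
    name = "java_buildpack"
    def detect(self, files):
        # Real buildpacks inspect the uploaded app directory
        return any(f.endswith((".jar", ".war")) or f == "pom.xml" for f in files)
    def compile(self, files):
        return {"droplet": files + ["jre/", "tomcat/"]}  # app + runtime deps

class PythonBuildpack:
    name = "python_buildpack"
    def detect(self, files):
        return "requirements.txt" in files
    def compile(self, files):
        return {"droplet": files + ["python/", "site-packages/"]}

def stage(app_files, buildpacks):
    """Pick the first buildpack whose detect succeeds, then compile a droplet."""
    for bp in buildpacks:
        if bp.detect(app_files):
            return bp.name, bp.compile(app_files)
    raise RuntimeError("no buildpack detected this app type")

name, droplet = stage(["requirements.txt", "app.py"],
                      [JavaBuildpack(), PythonBuildpack()])
print(name)  # python_buildpack
```

Because the platform, not the developer, assembles the droplet's runtime and libraries, InfoSec retains control over what actually runs in every container.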

App Packaging with Buildpacks and Containers

It’s hard to find a developer laptop these days that doesn’t have Docker on it. This tech beautifully solves the ubiquitous “works on my machine” problem. However, developer-built containers can expose an organization to more vulnerabilities. We compare and contrast developer-built containers with platform-built containers below.

BOSH Add-ons

BOSH is easily extended with add-ons in order to protect against specific types of vulnerabilities. These add-ons help you customize configurations to your requirements. For instance, IT Security may mandate SSH banners, vulnerability scanning agents, or pre-existing login credentials. They may also require integration to a specific monitoring system.

A platform operations team meets these requirements simply by updating the BOSH Runtime-Config with custom YAML configuration.

To view your Runtime-Config, run bosh runtime-config. To update it, pipe the config to a YAML file, make updates as needed, and then run bosh update runtime-config myruntimeconfig.yml. Any add-ons in the Runtime-Config will propagate to all VMs in the deployment.

Here’s a list of common add-ons that may be useful to your organization’s vulnerability management program.

  • Custom SSH Banners – this add-on simplifies the chore of updating VMs to include custom SSH banners. BOSH makes it easy, with an os-conf release that allows a Pivotal Platform admin to include OS-specific configuration details in a deployment. Note this is an open-source add-on.

releases:
- name: os-conf
  version: 3

addons:
- name: misc
  jobs:
  - name: login_banner
    release: os-conf
  properties:
    login_banner:
      text: |
        This computer system is for authorized use only. All activity
        is logged and regularly checked by system administrators.
        Individuals attempting to connect to, port-scan, deface, hack,
        or otherwise interfere with any services on this system will
        be reported.

Aside from allowing custom banners, the add-on also enables configuration of the operating system if needed by an organization:

  • Add UNIX users to a VM
  • Enable IPv6
  • Configure the resolv.conf search domain
  • Change TCP keepalive kernel args
  • Apply arbitrary sysctls
  • Clam AV – Pivotal Platform customers in regulated industries may be required by their auditor to include anti-malware protection in their deployment. Use this add-on to install the ClamAV antivirus agent on each host within a Pivotal Platform deployment. This add-on is provided by Pivotal and fully supported.
  • File Integrity Monitoring (FIM) – an add-on commonly required by auditors. The FIM add-on enables an operator to install a Pivotal FIM agent on each host within a Pivotal Platform deployment. This add-on is fully supported, and provides the following features:
    • Continuous file integrity monitoring protection for stemcells.
    • Real-time alerts when changes to monitored files and directories are detected.
    • Log messages, identifying the changed file, sent to syslog.
    • Post-installation verification procedures to ensure the agent is functioning and alerts are received.
  • IPSec – enables authentication and encryption at the IP layer. The add-on encrypts and secures network traffic between VMs in a Pivotal Platform deployment. It is configurable via a manifest file.
  • NESSUS Agent – an open-source add-on that deploys a Nessus Agent to each VM. Requires access to a licensed Nessus Manager.
  • OSSEC Host-based IDS – a host-based Intrusion Detection System. It performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting, and active response.
  • Snort – an open-source network intrusion prevention system. It’s capable of performing real-time traffic analysis and packet logging on IP networks.
  • Tripwire – a free software security and data integrity tool. It’s useful for monitoring and alerting on specific file changes in Pivotal Platform.
  • OS Image Hardening – allows BOSH to run custom hardening tasks in a system, as used in cloud.gov (see configuration script here):
    • Safe Defaults – /etc/modprobe.d
    • Redirect protections
    • System Access, Authentication and Authorization
    • Password Policy
    • SSH Settings
    • Set warning banner for login services
    • Restrict Core Dumps
    • Change permissions on home directory
    • Ensure syslog emits at least one entry each minute
    • Ensure rpcbind does not run at start (Nessus check 6.7)

Monitoring and Security Incident Response

Pivotal Platform provides a number of mechanisms for application monitoring and security. Further, Pivotal Platform bundles centralized, aggregated logging and metrics for the platform itself. As a result, developers and operators have visibility and insight into the performance, health, and security of applications and the platform that runs them.

Loggregator

Logs generated by running applications, services, and backend processes are streamed, aggregated, and collected by Loggregator. The Loggregator Firehose provides a combined stream of logs from all apps, plus metrics emitted from Pivotal Platform components. This enables application developers and operators to troubleshoot events that impact applications and the platform (operating system, infrastructure, etc.).

Security events from the Cloud Controller are logged with the Common Event Format (CEF). CEF specifies the following format for log entries:

CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension

Entries in the Cloud Controller log use the following format:

CEF:CEF_VERSION|cloud_foundry|cloud_controller_ng|CC_API_VERSION|
SIGNATURE_ID|NAME|SEVERITY|rt=TIMESTAMP suser=USERNAME suid=USER_GUID
cs1Label=userAuthenticationMechanism cs1=AUTH_MECHANISM
cs2Label=vcapRequestId cs2=VCAP_REQUEST_ID request=REQUEST
requestMethod=REQUEST_METHOD cs3Label=result cs3=RESULT
cs4Label=httpStatusCode cs4=HTTP_STATUS_CODE src=SOURCE_ADDRESS
dst=DESTINATION_ADDRESS cs5Label=xForwardedFor cs5=X_FORWARDED_FOR_HEADER
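A log pipeline consuming these entries can split the seven pipe-delimited header fields from the key=value extension. The sketch below is a minimal parser: it ignores CEF escaping rules (escaped pipes and equals signs) that a production parser must handle, and the sample record's values are fabricated.

```python
def parse_cef(line):
    """Split a CEF record into its seven header fields plus the extension."""
    parts = line.split("|", 7)
    header = dict(zip(
        ["cef_version", "device_vendor", "device_product",
         "device_version", "signature_id", "name", "severity"],
        parts[:7]))
    extension = {}
    for pair in parts[7].split():       # extension is space-separated key=value
        if "=" in pair:
            key, _, value = pair.partition("=")
            extension[key] = value
    return header, extension

# Fabricated sample record in the Cloud Controller's CEF shape
sample = ("CEF:0|cloud_foundry|cloud_controller_ng|2.100.0|"
          "audit.app.create|app-create|0|rt=1510000000 suser=alice "
          "requestMethod=POST cs3Label=result cs3=success")
header, ext = parse_cef(sample)
print(header["device_product"], ext["suser"])  # cloud_controller_ng alice
```

Parsed this way, the records feed naturally into SIEM tooling for alerting on suspicious Cloud Controller activity.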

How do teams use Loggregator? Two ways.

  • App developers tail their application logs, or opt to dump the recent logs with simple commands in the Pivotal Platform Command Line Interface (cf CLI).

  • Operators and administrators access the Loggregator Firehose, the combined stream of logs from all apps, and the metrics data from Pivotal Platform components. From there, operators deploy nozzles to the Firehose. A nozzle is a component that monitors the Firehose for specified events and metrics, and streams this data to external services.

Syslog Forwarding

Pivotal Platform aggregates logs for all instances of your applications. It does the same for requests made to your applications through internal components of Pivotal Platform. For example, when the Cloud Foundry Router forwards a request to an application, the Router records that event in the log stream for that app. Run the following command to access the log stream for an app in the terminal:

$ cf logs YOUR-APP-NAME

If you want to persist more than the limited amount of logging information that Pivotal Platform can buffer, drain these logs to a log management service. Let’s examine how operators achieve this.

Complete the following steps to set up a communication channel between the log management service and your Pivotal Platform deployment:

  1. Obtain the external IP addresses that your Pivotal Platform administrator assigns to outbound traffic.

  2. Provide these IP addresses to the log management service. The specific steps to configure a third-party log management service depend on the service.

  3. Whitelist these IP addresses to ensure unrestricted log routing to your log management service.

  4. Record the syslog URL provided by the third-party service. Third-party services typically provide a syslog URL to use as an endpoint for incoming log data. You use this syslog URL when creating a user-provided service instance for the drain.

  5. Pivotal Platform uses the syslog URL to route messages to the service. The syslog URL has a scheme of syslog, syslog-tls, or https, and can include a port number. For example: syslog://logs.example.com:1234
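The steps above hinge on a well-formed drain URL. A minimal validation sketch, using only Python's standard library, checks the scheme and port before the URL is handed to a user-provided service instance:

```python
from urllib.parse import urlparse

# Schemes the platform can route drain traffic to, per the step above
ALLOWED_SCHEMES = {"syslog", "syslog-tls", "https"}

def validate_drain_url(url):
    """Check that a drain URL uses a supported scheme; return its parts."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    return parsed.scheme, parsed.hostname, parsed.port

print(validate_drain_url("syslog://logs.example.com:1234"))
# ('syslog', 'logs.example.com', 1234)
```

With a valid URL in hand, the operator creates and binds the drain, for example with cf create-user-provided-service my-drain -l syslog-tls://logs.example.com:6514 followed by cf bind-service YOUR-APP-NAME my-drain (the service name, host, and port here are illustrative).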

Detailed documentation on how to integrate with other log management tools (like Splunk) is provided in the Pivotal Platform documentation.

Data Loss Prevention

Microservices increase the amount of traffic on a network, and with that increase comes a substantially widened attack surface, raising the risk of data loss. Enterprises need Data Loss Prevention (DLP) strategies that are also agile to implement. While Pivotal Platform is deployed into locked-down networks that restrict ingress traffic, organizations should also be concerned about microservice egress to systems that are outside the scope of an application's functional requirements. Pivotal provides Application Security Groups (ASGs), which can be configured to limit the egress options of code pushed to the platform. Zero-trust networking, container hardening add-ons, and the best practices outlined below further decrease the DLP attack surface of microservices.

Ingress Control

Route Services & Pivotal Platform Router

Pivotal Platform application developers may wish to transform or process requests before they reach an application. These are called “Route Services.” Common route services include authentication, rate limiting, and caching services.

Route services are added to applications through the Pivotal Platform marketplace mechanism. Developers can use them to apply various transformations to application requests by binding an application’s route to a service instance. Through integrations with service brokers and, optionally, with the Pivotal Platform routing tier, developers can add these capabilities with a familiar, automated, self-service, and on-demand user experience. Popular add-on services are available in the Pivotal Platform Marketplace.

Pivotal Platform supports the following three models for Route Services: Fully-brokered services, Static, brokered services, and User-provided services.

Fully-Brokered Service

In the fully-brokered service model, the CF router receives all traffic before any processing by the route service. Developers can bind a route service to any app. If an app is bound to a route service, the CF router sends its traffic to the service. After the route service processes requests, it sends them back to the load balancer in front of the CF router. The second time through, the CF router recognizes that the route service has already handled them, and forwards them directly to app instances.

Static, Brokered Service

In the static, brokered service model, an operator installs a static routing service, which might be a piece of hardware, in front of the load balancer. The routing service runs outside of Pivotal Platform and receives traffic to all apps running in the CF deployment. The service provider creates a service broker to publish the service to the CF marketplace. As with a fully-brokered service, a developer can use the service by instantiating it with cf create-service and binding it to an app with cf bind-route-service.

User-Provided Service

If a route service is not listed in the CF marketplace by a broker, a developer can still bind it to their app as a user-provided service. The service can run anywhere, either inside or outside of CF, but it must fulfill certain integration requirements (described in Service Instance Responsibilities). The service also needs to be reachable by an outbound connection from the CF router.

This model is identical to the fully-brokered service model, except without the broker. Developers configure the service manually, outside of Pivotal Platform. They can then create a user-provided service instance and bind it to their application using familiar cf cli commands. The developer will need to supply the URL of their route service.
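The user-provided flow can be sketched with two cf CLI commands. The route service URL, domain, and hostname below are placeholders for illustration.

```shell
# Register the externally hosted route service by its URL (-r flag)
cf create-user-provided-service my-route-service -r https://route-service.example.com

# Bind the route for my-app.example.com to the route service;
# the CF router now sends that route's traffic through the service first
cf bind-route-service example.com my-route-service --hostname my-app
```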

TCP Router

TCP Routing enables applications that require inbound requests on non-HTTP protocols to run on Pivotal Platform. You can use TCP Routing to comply with regulatory rules that require your organization to terminate TLS as close to your apps as possible, so that packets are not decrypted before reaching the application layer.

Figure 4: the layers of network address translation that occur in Pivotal Platform in support of TCP Routing.

Let’s examine an example workflow that covers route ports, backend ports, and app ports for this scenario.

  • A developer creates a TCP route for their application based on a TCP domain and a route port, and maps this route to one or more applications.
  • Clients make requests to the route. DNS resolves the domain name to the load balancer.
  • The load balancer listens on the port and forwards requests for the domain to the TCP routers. The load balancer must listen on a range of ports to support multiple TCP route creation. Additionally, Pivotal Platform must be configured with this range, so that the platform knows what ports can be reserved when developers create TCP routes.
  • The TCP router can be dynamically configured to listen on the port when the route is mapped to an application. The domain the request was originally sent to is no longer relevant to the routing of the request to the application. The TCP router keeps a dynamically updated record of the backends for each route port. The backends represent instances of an application mapped to the route. The TCP Router chooses a backend using a round-robin load balancing algorithm for each new TCP connection from a client. As the TCP Router is protocol agnostic, it does not recognize individual requests, only TCP connections. All client requests transit the same connection to the selected backend until the client or backend closes the connection. Each subsequent connection triggers the selection of a backend.
  • Because containers each have their own private network, the TCP router does not have direct access to application containers. When a container is created for an application instance, a port on the Cell VM is randomly chosen and iptables are configured to forward requests for this port to the internal interface on the container. The TCP router then receives a mapping of the route port to the Cell IP and port.
  • The Diego Cell only routes requests to port 8080, the App Port, on the container internal interface. The App Port is the port on which applications must listen.
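The developer-facing portion of the workflow above can be sketched with the cf CLI. The domain and app name are placeholders, and exact flag names can vary between cf CLI versions; an administrator first shares a TCP domain, then a developer maps a route to it.

```shell
# Admin: create a shared domain backed by the TCP router group
cf create-shared-domain tcp.example.com --router-group default-tcp

# Developer: map a TCP route to the app, letting the platform
# reserve a route port from its configured range
cf map-route my-app tcp.example.com --random-port
```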

Securing Traffic into Pivotal Platform

Organizations have several ways to secure traffic into Pivotal Platform. We summarize these below. Technical documentation describes these methods in greater detail.

Mutual TLS

Applications that require mutual TLS (mTLS) need metadata from client certificates to authorize requests. Pivotal Platform supports this use case without bypassing layer-7 load balancers and the Gorouter.

The HTTP header X-Forwarded-Client-Cert (XFCC) may be used to pass the originating client certificate along the data path to the application. Each component in the data path must trust that the downstream component has not allowed the header to be tampered with.

If you configure the load balancer to terminate TLS and set the XFCC header from the received client certificate, then you must also configure the load balancer to strip this header if it is present in client requests. This configuration is required to prevent spoofing of the client certificate.

SSL/TLS Termination Options for HTTP Routing

There are several options for terminating SSL/TLS for HTTP traffic. You can terminate TLS at the Gorouter, your load balancer, or at both. The following table summarizes SSL/TLS termination options and which option to choose for your deployment.

Egress Control with Application Security Groups

Application Security Groups (ASGs) are collections of egress rules that specify the protocols, ports, and IP address ranges to which app or task instances send traffic. Because ASGs define allow rules, their order of evaluation is unimportant when multiple ASGs apply to the same space or deployment. The platform sets up rules to filter and log outbound network traffic from app and task instances. ASGs apply to both buildpack-based and Docker-based apps and tasks.

When apps or tasks begin staging, they need traffic rules permissive enough to allow them to pull resources from the network. After an app or task is running, the traffic rules can be more restrictive and secure. To distinguish between these two security requirements, administrators can define one ASG for app and task staging, and another for app and task runtime.

To provide granular control when securing a deployment, an administrator can assign ASGs to apply to all app and task instances for the entire deployment, or assign ASGs to spaces to apply only to apps and tasks in a particular space.

ASGs can be complicated to configure correctly, especially when the specific IP addresses listed in a group change. To simplify securing a deployment while still permitting apps to reach external services, operators can deploy the services into a subnet that is separate from their Pivotal Platform deployment. The operators can then create ASGs for the apps that whitelist those service subnets, while denying access to any virtual machine (VM) hosting other apps.
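The subnet-whitelisting pattern can be sketched as follows. The JSON rule format is the one ASGs use (protocol, destination, ports); the subnet, group name, org, and space below are hypothetical, and the admin commands shown are the cf CLI v6 forms.

```shell
# asg-rules.json: allow egress only to the dedicated services subnet
cat > asg-rules.json <<'EOF'
[
  {
    "protocol": "tcp",
    "destination": "10.0.11.0/24",
    "ports": "5432,6379",
    "description": "Allow access to the services subnet only"
  }
]
EOF

# Admin: create the ASG from the rules file, then bind it to a space
cf create-security-group services-egress asg-rules.json
cf bind-security-group services-egress my-org my-space
```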

Administrators can use both ASGs and new container networking to control egress. This table is a useful comparison of each feature.

Intra-Platform Control: Zero-Trust Networking in Pivotal Platform

Many enterprises want to apply the Zero Trust Model in their data center. The "Zero Trust" approach eschews the idea of "trusted" and "untrusted" networks. Instead, all network traffic is untrusted. Why is this approach so popular? Two reasons.

First, enterprise architectures are growing in complexity. It's much harder to enforce this dual model when things change rapidly. Second, the threat landscape is evolving quickly. A network that is trusted today could become compromised tomorrow.

Customers use Pivotal Platform in conjunction with on-premises network virtualization (like NSX from VMware) to improve their security posture with zero-trust principles. Two features are key in this pattern: isolation segments and container-to-container networking. Let’s review both.

Trusted and verifiable network isolation segments. This feature is critical in highly secure and regulated environments. Workloads with sensitive data can be subject to compliance and accreditation standards. Often, these apps must run isolated from other apps and traffic. Pivotal Platform supports isolation segments for compute isolation (i.e. where apps run) and network isolation (how application traffic traverses a network).

Now, operators can easily configure networks for application deployment that are separate from other workloads. With NSX, users can see how network traffic flows through Pivotal Platform. Verification of the required isolation is instant. Enforcing isolation policies—and proving compliance—in traditional networks is far more difficult.

Container-to-container (C2C) networking. With this feature, Pivotal Platform apps can directly communicate with each other. Developers can tailor networking policies for app-to-app interactions, boosting security. No more whitelisting traffic, no more public routes for private apps!
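A C2C policy is declared per app pair with a single cf CLI command. The sketch below uses the cf CLI v6 syntax (v7+ drops the `--destination-app` flag) and hypothetical app names.

```shell
# Allow the frontend app to reach the backend app directly over the
# container network on port 8080 — no public route for backend needed
cf add-network-policy frontend --destination-app backend --protocol tcp --port 8080
```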

 

Figure 5: Container networking in Pivotal Platform.

Over time, administrators will be able to use "C2C networking" to enforce even more granular controls. Want to limit specific apps so they only access specific services? You can do that with future support for controls down to the CIDR, protocol, and port level.

These programmatic controls are a best practice for an enterprise. With thousands of apps and hundreds of users, what can you trust? Zero!

Container Hardening

Pivotal Platform's secure containerization is just part of a platform-wide security system that leads the industry in protecting your applications on the cloud. Three features below are of particular interest.

Unprivileged Containers On Pivotal Platform

Unprivileged containers are a security technique that maps the root user inside the container to an unprivileged regular user at the Linux operating system level. This prevents an application from inheriting root access on the host if it breaks out of the container. By using the full set of user-namespacing features in Linux, Pivotal Platform isolates containers sharing the same host.

Pivotal Platform also reduces the set of Linux system capabilities for processes started inside a container using a variety of methods, including Linux control groups (cgroups). Pivotal Platform takes other steps to harden containers, such as blocking all outbound network traffic by default, which can be overridden or fine-tuned with an Application Security Group (ASG).

Limiting access to Linux kernel functions in this manner nicely complements tools like AppArmor and Seccomp by providing another layer of security in case the others become compromised.

AppArmor

AppArmor is a Mandatory Access Control (MAC) system that is part of the upstream Linux kernel. Thanks to Docker’s AppArmor integration being contributed to OCI, AppArmor works well with runC. And since runC is now the default container runtime in Pivotal Platform, Pivotal Platform users get AppArmor compatibility “for free.”

Generally regarded as easy to use, AppArmor focuses on programs, not users, restricting a given program’s access inside a container to system resources like network, disk, etc. Security administrators can use AppArmor to create enforcement policies that are simple to author and easy to audit. Without an easily auditable system, it’s quite possible to inadvertently introduce security loopholes with a policy that isn’t internally consistent.

In Pivotal Platform, AppArmor is pre-configured with a default policy and enforced by default for all unprivileged containers. AppArmor can dramatically increase your default security posture!

Seccomp

Seccomp (Secure Computing Mode) is also part of the upstream Linux kernel, and restricts the set of system calls a program inside a container can access. This sandbox greatly reduces the surface area for break-out exploits. Docker asserts in its documentation that its default seccomp profile disables around 44 system calls out of 300+. Whitelisting just the critical Linux system calls your program needs balances security against application compatibility, and provides an entry level of lockdown for containerized applications. In Pivotal Platform 1.6+, this is set up out of the box: we use the same Docker default seccomp profile to maximize compatibility with existing images. It’s like getting free beer in an open container!

It’s also worth noting that security administrators often use third-party tools like Grsecurity (or similar) to ensure security at the Linux kernel level inside the container, since access control systems and sandboxes reside atop the kernel and LSM.

Disaster Recovery

BOSH Backup and Restore: Cloud-Native DR

BOSH Backup and Restore (BBR) is a command-line tool for orchestrating the backup and restore of Pivotal Platform and related services. Use it to run automated backups of Pivotal Platform, and to restore components when needed.

Backup and restore in distributed systems are thorny issues. Custom code, data services, and platforms constantly change across multiple infrastructure targets. How do you accurately capture the state of the system? How do you bring it back reliably?

BOSH helps ease these challenges, so we made it the backbone of BBR. After all, Pivotal Platform components are simply BOSH deployments.

BBR offers security and operations teams a few benefits.

  • Structured Backup and Recovery for Pivotal Platform. Use BOSH Backup and Restore to reliably create backups of core Pivotal Platform components and their data (CredHub, UAA, BOSH Director, and ERT).
  • Consistent, Distributed Backups. Each component includes its own backup instructions. This decentralized structure helps keep scripts in sync. Meanwhile, “locking” features ensure data integrity and consistent, distributed backups across your deployment.
  • Supports Essential Data Services. Because BOSH Backup and Restore supports on-demand instances, you can use it to back up and restore many popular Pivotal Platform add-ons.

BBR: How does it work?

Operators trigger a backup or a restore for a BOSH deployment or BOSH director using the BBR binary on a jumpbox. BOSH Backup and Restore then looks at the jobs in the deployment or director for backup or restore scripts, and then triggers those scripts in the prescribed order. The artifacts are then transferred to or from the jumpbox. The operator is responsible for transferring artifacts to or from external storage.
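From the jumpbox, the operator flow above looks roughly like this. The BOSH Director address, credentials, and deployment name are placeholders, and additional flags (such as `--ca-cert`) are typically required in a real environment.

```shell
# Verify the deployment's backup scripts are present and orderable
bbr deployment --target 10.0.0.5 --username bbr-user \
  --deployment cf-deployment pre-backup-check

# Take the backup; artifacts land on the jumpbox, and the operator
# is responsible for copying them to external storage
bbr deployment --target 10.0.0.5 --username bbr-user \
  --deployment cf-deployment backup
```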

Figure 6: BOSH Backup and Restore is designed for distributed systems that change often.

Meeting PCI and Federal Compliance with Pivotal Platform

Meeting standards isn’t just about technology; it is a matter of people, process, and technology working in concert. Pivotal has partnered with some of the largest companies and government organizations in the world to address these requirements for their most critical applications.

PCI on Pivotal Platform

Very little happens in enterprise IT without security and compliance teams giving the green light. And so it goes for companies that host credit card information and related metadata. Here, Payment Card Industry Data Security Standard (PCI DSS) compliance is in play.

A recently published paper helps compliance teams get comfortable with Pivotal Platform for this use case. Security and compliance guru John Field explains, in exquisite detail, how to achieve PCI compliance with Pivotal Platform at the core of your application stack.

Want even more context? The author is meticulous! The paper includes links to public Pivotal Tracker stories that map to specific sections of the PCI standard. (Here’s an example). For those who wade through complex regulations often, this granularity is a boon.

Companies are realizing they can improve velocity while simultaneously boosting compliance. The PCI paper is further proof of how Pivotal Platform makes it a reality.

Federal Compliance and Reaching ATO

The US Federal government mandates that information technology systems abide by several different regulations. The regulations in play depend on where an IT system will reside, and how it will be utilized. The Federal Information Security Management Act (FISMA) of 2002 was one of the first pieces of legislation passed in this area. Other standards and frameworks have been subsequently created, including the Federal Risk and Authorization Management Program (FedRAMP), and the Department of Defense Cloud Computing Security Requirements Guide (DoD SRG). In addition, a number of security and operational certifications and baselines have been created, such as the DISA STIG.

Systems that follow these standards, frameworks, and certifications can be operated on a production network. This concept, known as Authority-to-Operate, or ATO, is often the desired outcome from compliance efforts.

Pivotal Platform is a structured platform designed to meet operational and security compliance standards and requirements. It can help organizations rapidly achieve ATO.

Federal government organizations across the US Department of Defense, Intelligence Community, and Civilian agencies use Pivotal Platform in production. Several federal customers have even developed an ATO-In-A-Day process. The end result: government developers get their apps in production much faster.

Recommended Reading

Appendix: Cloud-Native Security

Cloud-Native Security: Go Faster to Be More Secure

The security tradition in the enterprise today resists change. Security teams tend to be most comfortable when systems change slowly. Why is this the case? If employees rarely change systems, any unexpected change is likely the work of an intruder.

As such, the answer to any request is almost always “no.” Change is resisted at every level because any change to the system is the sign of a potential threat.

Contrast this approach to application development and operations. These groups are now working together in new ways (broadly dubbed “DevOps”) to deliver new code faster.

The move to DevOps didn’t happen overnight. But changing business conditions have now made this practice mainstream with big companies.

For security teams, the changing nature of security threats is now shifting the mindset from “slow” to “fast.” Constant, more sophisticated, and ever-evolving threats require you to rethink your approach in the cloud-native era.

Threats are evolving faster than ever

Malware and advanced persistent threats are proliferating. Malicious programs can be created and deployed for next to nothing. Hundreds of new threats attempt to penetrate enterprise systems every day. Traditional security measures can’t evolve nearly as quickly. A cloud-native approach offers both external perimeter and internal systems protection.

Mitigating credential leakage is possible

The fact is credentials will always be leaked, but systems administrators don’t have to sit idle and let it happen. They can shrink credential lifespans from the weeks or months that give hackers plenty of time to exploit them, down to hours or even 15 minutes. A cloud-native security approach helps ensure leaked credentials quickly become worthless.

Enterprises need to conduct a realistic assessment of the security challenges they face and understand why today’s approaches to security are falling short:

Are systems at risk due to the patching we are intentionally not doing?

Vendors continuously release patches and that’s awesome! However, the practical reality is that a typical enterprise has procured thousands of servers over several years. Each one is loaded with different software packages. The effort to patch these systems regularly (let alone quickly) is mind-boggling. So what happens? System administrators are pragmatic. They triage. The truth is that systems go knowingly unpatched. That’s a broken process.

Are organizations, processes, and tooling designed to react to threats, rather than prevent them?

By the time you’ve detected an attack, it’s too late. Further, finding a breach is only the beginning; you still have to fix it.

Are your security vendors only offering incremental improvements?

Big vendors of the cloud-native era certainly look different than the dominant providers from a decade ago, but where are the revolutionary vendors in the security area? Enterprise buyers and security vendors are still having the same conversations about the same products that they did in the dot-com era. Now, products might be delivered “as a service,” added into an on-premises private cloud, or served up as a virtual appliance in a public cloud. These are hardly earth-shattering advancements in enterprise IT compared to infrastructure as a service, Agile development, or microservices.

Are you prevented from updating production systems frequently?

Going to production with new software takes months. It’s a painful, arduous journey, and once new bits are online, no one wants to change anything. Why? Because it might break, and that would be bad. Here’s what’s worse: a static environment is fertile ground for attacks. The way production systems are managed today couldn’t be more inviting to attackers—and unfortunately, cyber criminals know it.

All of these points are symptoms of a larger issue—a mindset that believes “going slower reduces risk.” In fact, the opposite is true. The faster systems change, the harder they are to penetrate. That’s the core idea of cloud-native.

Citations

1. Contrast to the more familiar Discretionary Access Control, or DAC, where users may decide for themselves who is granted access to their resources. With MAC, the permissions on a resource can only be changed by a dedicated security administrator, not by the resource users.
