Improving Cloud Foundry Loggregator scalability with a shared-nothing architecture

November 26, 2019 Jesse Weaver

This post was co-authored by Jesse Weaver, Mitch Seaman, Maxwell Eshleman, Caitlyn Yu, and Tom Chen of Pivotal.

“Why is logging so expensive? It amounts to 30% of my platform cost.”

“I’m seeing log loss and I don’t know why!”

“How many Dopplers do I need?”

“Why are my metrics missing?”

Loggregator is one of Cloud Foundry’s original components, responsible for egressing application logs and platform metrics. It is a crucial tool that helps operators understand app and system health, but, as platform size scales, it has also been one of the platform’s biggest pain points. To handle logging and metrics for a large platform, a CF operator needs up to 60 VMs and a nozzle configured to an external partner like Splunk or Wavefront. Even then, large platforms drop logs because of the centralized bottleneck of the Firehose.

Last year, the Loggregator team set out to make Cloud Foundry logging and metrics simpler, cheaper, and more scalable by implementing a shared-nothing architecture. Here’s why we did that and where we landed.

Drawbacks of the legacy architecture

Historically, log and metric egress in CF revolved around a central distribution mechanism called the Firehose. In that architecture, platform VMs write their logs and metrics in a custom envelope format to a set of Dopplers, which then forward them to Traffic Controllers (TC), Reverse Log Proxies (RLP), and other destinations. For instance, syslog adapters convert Loggregator envelopes into the standard syslog RFC5424 format, and partner nozzles export envelopes to external services like Datadog, Splunk, or AppDynamics.
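To make the conversion step concrete, the sketch below builds an RFC 5424 line from an envelope’s fields. It is illustrative only: the envelope field names and the PROCID convention shown here are assumptions for this example, not the Syslog Adapter’s exact mapping.

```python
from datetime import datetime, timezone

def to_rfc5424(envelope):
    # Envelope timestamps are in nanoseconds; RFC 5424 wants an ISO 8601 time.
    ts = datetime.fromtimestamp(envelope["timestamp_ns"] // 10**9,
                                tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    # RFC 5424 layout: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
    # PRI 14 = facility user-level (1) * 8 + severity informational (6).
    return "<14>1 {} {} {} {} - - {}".format(
        ts, envelope["host"], envelope["app_id"],
        envelope["proc_id"], envelope["message"])

line = to_rfc5424({
    "timestamp_ns": 1574769600000000000,   # hypothetical envelope fields
    "host": "cell-0.example.com",
    "app_id": "my-app",
    "proc_id": "[APP/PROC/WEB/0]",
    "message": "request handled in 12ms",
})
```

Every hop that performs a translation like this one adds latency, which is part of why the conversion components are expensive to run at scale.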

The issues with this setup:

  1. The bottleneck in Dopplers and RLPs leads to dropped logs as the number of syslog drains approaches roughly 10,000.

  2. To egress all logs to a syslog aggregator (like Splunk), an individual drain has to be set up for each application, or Firehose nozzles have to be created, which hit the same bottleneck. This is time-consuming and can bring down the whole logging pipeline.

  3. There are a lot of hops in log egress, each of which adds cost and latency. Syslog Adapters and Dopplers are particularly expensive.

  4. The envelope format is specific to Cloud Foundry, and many components exist solely to convert from envelopes to more widely supported log and metric formats.
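To make the second drawback concrete, this is what per-app drain setup looks like with the cf CLI, using a user-provided service bound to each app; `logs.example.com:6514` is a placeholder for your aggregator’s endpoint, and this must be repeated per application.

```shell
# Hypothetical per-app drain setup; the endpoint is a placeholder.
cf create-user-provided-service my-drain -l syslog-tls://logs.example.com:6514
cf bind-service my-app my-drain
# On older CF versions the app must be restaged to pick up the drain:
cf restage my-app
```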

A new architecture: shared-nothing and agent-based

Loggregator is changing over to a shared-nothing architecture to improve the scalability of syslog drains and to enable whole-platform syslog egress. This lets Loggregator handle more drains and more logs/metrics per second, converges on a community-standard log format, and improves the resource efficiency of log and metric egress.

The new architecture deploys a Syslog Agent to each VM, which sends log and metric traffic directly from components to syslog drains. That’s it. 

This is far simpler and cheaper, and it scales alongside VMs without unduly affecting their CPU or memory usage. To satisfy an operator’s need to send all of a platform’s metrics and application logs to one place, we’ve introduced Aggregate Drains: syslog destinations that receive all logs and metrics from every agent on the foundation. Log Cache now receives logs and metrics via an aggregate drain rather than the Firehose. Operators can create additional aggregate drains to egress directly to external destinations (such as Splunk or Datadog).

In the above diagram, “Aggregator” refers to any external observability service, such as Splunk, Datadog or AppDynamics.

The benefits of this architecture:

  1. Log and metric egress scales automatically as the work is spread across Syslog Agents living on every VM in the foundation.
  2. The system can handle larger log/metric load. We’ve tested with more than 250,000 logs/metrics per second.
  3. Operators can reduce infrastructure cost by removing the previous, centralized syslog egress architecture (the Syslog Adapter, Log API, and Doppler VMs).
  4. The architecture is simpler, which reduces cost and complexity for operators and speeds up egress due to fewer network hops.
  5. Less configuration is required to egress all app logs to aggregators like Splunk due to aggregate drains. Custom nozzles are no longer needed.

How do I get one?

Pivotal Platform customers can try out this new architecture as follows, depending on which version of PAS they’re running.

PAS 2.6

  • By checking “Enable agent-based syslog egress for app logs” in “System Logging”, all of your syslog drains will start running on the new, more scalable syslog agents.

PAS 2.7 

  • To check whether your existing integrations will continue to work after the upcoming V1 Firehose deprecation, you can disable the components it relies on (the Traffic Controllers) by unchecking the “Enable V1 Firehose” checkbox under “System Logging”.

PAS 2.8 

  • You can shut down the entire centralized logging pipeline by unchecking the “Enable V1 Firehose” and “Enable V2 Firehose” checkboxes, then scale down or remove the VMs in question.

  • Rather than setting up app drains for every app in your foundation, you can add a syslog or https destination for all app logs (an aggregate drain) under “System logging”.

  • To consume metrics for applications and components, we’ve made it possible to scrape the metrics directly from each VM in the platform, using Prometheus-style scrapeable endpoints.
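As a sketch of what consuming those endpoints looks like, the snippet below parses the Prometheus text exposition format. The sample payload and metric names are invented for illustration, and the actual endpoint paths and ports are deployment-specific; a real consumer would fetch the text over HTTP first.

```python
def parse_prometheus_text(payload):
    """Minimal parse of Prometheus exposition text into
    {metric_name: {labels_string: value}}. Not fully general
    (e.g. ignores timestamps and escaped label values)."""
    metrics = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blank/HELP/TYPE lines
            continue
        name_and_labels, value = line.rsplit(" ", 1)
        if "{" in name_and_labels:
            name, labels = name_and_labels.split("{", 1)
            labels = "{" + labels
        else:
            name, labels = name_and_labels, ""
        metrics.setdefault(name, {})[labels] = float(value)
    return metrics

sample = """\
# HELP ingress Messages ingressed by the agent
# TYPE ingress counter
ingress{scope="agent"} 12345
cpu_user 0.42
"""
m = parse_prometheus_text(sample)
```

Because the format is a community standard, any Prometheus-compatible tooling can scrape these endpoints without a CF-specific nozzle.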

Many of these represent large changes for operators, developers, and integrators. We strongly believe these changes will improve the scalability of the platform while requiring less effort to translate PAS-specific formats. We want to help you transition to this new architecture and answer any questions, so please get in touch.

