I recently dealt with a recall notice for my car. Something to do with the clutch. The car company had telemetry that proved something was wrong, tested a fix, and rolled it out. If I had built my own car—let's be honest, I'd be dead by now—I wouldn't have noticed an issue with a single defective part. When we take on responsibility for building and running our own complex machinery, we take on risk. Same goes for what's in your data center.
Every day, Pivotal uses cloud servers—mostly on Google Cloud Platform—to constantly test our platform. Each of our 60+ engineering teams runs comprehensive unit and integration tests on its individual product. These product tests are typically done on one IaaS against a single version of PCF. When satisfied with its build, the team triggers integration tests on a fully-loaded PCF instance, targeting multiple PCF versions, on every IaaS. We call this our Master Pipeline: a continuous integration platform that tests every new Ops Manager, tile, and add-on release together on the three main public IaaS providers. We spend millions of dollars per year to do this, and it's worth every cent. Here's why.
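To make the scale of that matrix concrete, here's a minimal sketch of what "multiple PCF versions, on every IaaS" implies for test coverage. The provider and version lists below are hypothetical placeholders, not Pivotal's actual lineup:

```python
from itertools import product

# Hypothetical values for illustration only; the real Master Pipeline
# covers whichever IaaS providers and PCF versions are supported.
IAAS_PROVIDERS = ["gcp", "aws", "azure"]
PCF_VERSIONS = ["1.12", "2.0", "2.1"]

def integration_matrix(providers, versions):
    """Every (IaaS, PCF version) pair the pipeline must exercise."""
    return list(product(providers, versions))

jobs = integration_matrix(IAAS_PROVIDERS, PCF_VERSIONS)
print(len(jobs))  # 3 providers x 3 versions = 9 integration runs per build
```

Even this toy matrix shows why the cost grows multiplicatively: every new version or provider multiplies, rather than adds to, the number of full integration runs.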
You shouldn’t have to absorb the time and cost of packaging software
Stop me if this sounds familiar. You get new or updated software to install. First, you scour the documentation to figure out the supported OS version. You roll the dice and install the software in a test lab to make sure it works with other components in your environment. Inevitably, there's some missing dependency or obscure configuration that costs you six wasted days. It's exhausting.
Why does this happen time and time again? Packaging. It comes down to pre-tested packages versus user-tested packages. Virtually all software is distributed without its dependencies, and it's rarely tested across all the various installation scenarios. There are too many possibilities! It's left to you to make sure all the components actually work together. You're stuck figuring out installation combinations and packaging up a given release. There's a better way. Here's how Pivotal does it.
Figure 1 - How we determine the responsibility of our master pipeline
Pivotal offers you a centralized, standardized way to deploy platforms and products. Before you fetch that software, each component is thoroughly tested on its own and as part of the larger platform. Software such as PCF Metrics or Spring Cloud Services is continuously tested by its own team. Then it goes through our Master Pipeline, which tests all the PCF components together, on every IaaS, multiple times per day. This ensures that everything—from the server and OS to the network and software—is tested to simulate exactly how you're going to use it. The result? You pull software that has explicit dependency declarations and has been tested against real-life installation scenarios.
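As a toy model of what explicit dependency declarations buy you: an installer can verify compatibility up front instead of discovering a missing dependency mid-install. All product names and version numbers below are made up for illustration:

```python
# Each product declares the exact component versions it was tested
# against, so compatibility can be checked before anything is installed.
# Names and versions are hypothetical, not real PCF metadata.
MANIFEST = {
    "pcf-metrics": {"ops-manager": "2.0", "elastic-runtime": "2.0"},
    "spring-cloud-services": {"elastic-runtime": "2.0"},
}

INSTALLED = {"ops-manager": "2.0", "elastic-runtime": "1.12"}

def missing_dependencies(product, manifest, installed):
    """Return declared dependencies this environment doesn't satisfy."""
    return {name: ver for name, ver in manifest[product].items()
            if installed.get(name) != ver}

print(missing_dependencies("pcf-metrics", MANIFEST, INSTALLED))
# {'elastic-runtime': '2.0'} -> upgrade elastic-runtime before installing
```

The point of the sketch: when dependencies are declared rather than implied, the six wasted days of trial-and-error become a single up-front check.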
You can deploy full-stack patches that shrink the time you’re exposed to threats
Patching software may be the most important thing that companies haven't figured out. You're not alone! Why is it so tough? Because most platforms, whether on-premises or in the public cloud, are a collection of brittle layers. Change one thing, and another thing breaks. This complexity slows the pace of updates applied by IT, which means that in the name of availability, you leave your systems exposed for long periods. I'm here to say that you can have both availability and protection.
In distributed systems, there's always a vulnerability popping up. Linux kernel problems, OpenSSL issues, PCF product bugs, and more. The question is, how fast can you address it? If you're packaging your own software and responsible for testing new builds, the answer is "not very." Because of Pivotal's investment in continuous integration, we can fix, test, and distribute patches in hours. And these patches aren't just tested locally, but tested against the entire product as part of the Master Pipeline. This means that you can confidently patch any layer of PCF without fear of breaking something elsewhere. That's huge.
Figure 2 - A view of the master pipeline results, showing tested PCF versions and IaaSes
Continue growing the usage and utility of your platform with trusted extensions
Unless you have some strange goals, you deploy platforms for people to use them. Platforms that stagnate die off. But sometimes we hesitate to expand the footprint of a given platform because of the perceived operational cost of managing more things. If we scale out, can we handle the new infrastructure? If we add a new component, how does that change our responsibilities? Those are legitimate concerns.
Our investment in operational efficiency alleviates those concerns. If you have a starter-size PCF environment and want to bolt on more things, fear not. Add Windows servers for .NET workloads. Give developers better telemetry with PCF Metrics. Add anti-virus agents to hosts. Each of these PCF add-ons is part of the Master Pipeline, and tested daily against the entirety of PCF on every IaaS. Add them, and trust that they "just work." Because they do! Looking at our broader ecosystem of partners? Great. Each partner goes through a continuous integration process as well.
We’re using state-of-the-art techniques and technologies to produce quality software. But we’re not finished! The Master Pipeline team continues to evolve the service so that product teams have a fully dynamic, collaborative way to test their software.
Use 2018 to focus on your most important things, and leave the platform to us!
About the author: Richard Seroter