Note: Join me at SpringOne Platform in October to hear real-life stories about the complexities and successes that your peers experience in their transformation. It may be the most important thing you do this year.
Look, I'm impressed by anyone who accomplishes something for the first time. You've got my admiration for running that marathon, sending out the first edition of your newsletter, or winning a blackjack tournament. Do you know what impresses me even more? Those who do something meaningful again, and again, and again. That requires more than heroics or luck; it requires a process.
Getting software to production is hard. It often involves a mix of coordination, scripting, manual steps, and yes, even some begging. For many teams, it's a herculean cross-functional effort that's avoided because it's so painful. But in this era where your user experience is a deciding factor for potential customers, you need to build and ship software regularly. The best companies have invested in automated, repeatable software delivery and follow these four practices.
Shine A Light On Your End-To-End Value Stream
Do you really know how work flows from idea to production? Or, more importantly, in reverse: do you know the value your customer buys from you, and the path through your business to deliver it? As Tasktop CEO Mik Kersten says in his terrific book Project to Product, "[T]o avoid the pitfalls of local optimization, focus on the end-to-end value stream." It's important to "see" the entire system and find the bottlenecks that actually hold you back before jumping into (potentially futile) localized improvements.
Value Stream Mapping is a key approach for @pivotal - @AlexLuttschyn is sharing the details at #IDCDEVOPS18 @PivotalDACH pic.twitter.com/FhIkMudJ1J— oliverwelte (@oliver_welte) October 18, 2018
This means that before you can start automating delivery pipelines, you need to investigate and document the current state. Who are the involved teams? Where does work sit waiting? What are the entrance and exit criteria for each stage? Where would replacing outdated or manual steps with technology make a major impact? Oftentimes, you (and your stakeholders) will be surprised when you visualize the complete value stream and notice the people, processes, and technologies in play. Improvement requires attention to all three!
New whitepaper: Crossing the Value Stream: Improving Development with Pivotal and Cloud Foundry https://t.co/xFYA5z9cR9 < looks good, whether you use @pivotalcf or not; proven patterns listed. pic.twitter.com/XZRAXoRetH— Richard Seroter (@rseroter) February 16, 2018
Focus on Improving Lead Time From Code Commit to Production
One of the key measures of success outlined in the Accelerate State of DevOps Report is "lead time for changes." In this context, lead time relates to "how long does it take to go from code committed to code successfully running in production?" For the elite companies, it's less than one day. For low performers, it takes between one and six months. To learn more about the report, watch a recent replay of the webinar I did with the report’s author, Dr. Nicole Forsgren.
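As a worked example, "lead time for changes" is simply the elapsed time between a commit and that change running in production. A minimal sketch in Python (the function name and timestamps are illustrative, not from the report):

```python
from datetime import datetime

# Hypothetical helper: the DORA "lead time for changes" metric,
# measured as hours elapsed from code commit to production deployment.
def lead_time_hours(commit_time: str, deploy_time: str) -> float:
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(deploy_time, fmt) - datetime.strptime(commit_time, fmt)
    return delta.total_seconds() / 3600

# Elite performers land under 24 hours; low performers take months.
print(lead_time_hours("2019-06-01T09:00:00", "2019-06-01T17:30:00"))  # 8.5
```

Track this for every change, not just the happy path; the median and the outliers together show where work sits waiting.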
After your value stream analysis, you should have a handle on the path to production for code after it's checked into a source control repository.
Is it immediately tested against other code modules in the system?
Are you running security scans or checking dependencies?
How about building and packaging code into containers?
Where does code sit waiting for production deployment?
How is code rolled out to new users?
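Each of those questions maps to a stage a delivery pipeline can automate. A minimal sketch of that idea, assuming placeholder stage names and `echo` commands (a real pipeline would invoke your actual build, scan, and deploy tooling):

```python
import subprocess

# Illustrative stages mirroring the questions above; commands are placeholders.
STAGES = [
    ("integration-test", "echo running tests against other modules"),
    ("security-scan",    "echo scanning code and dependencies"),
    ("package",          "echo building and packaging a container"),
    ("deploy",           "echo rolling out to production"),
]

def run_pipeline() -> bool:
    """Run each stage in order, stopping at the first failure."""
    for name, cmd in STAGES:
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"stage {name} failed")
            return False
    return True
```

The point isn't the tool; it's that code never sits waiting between stages for a human to pick it up.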
Companies in every industry have proven that they can automate these steps, and get value into the hands of their customers faster. This means they learn faster, incorporate feedback faster, and increase their chances of creating loyal customers. Want to learn more about continuous integration and delivery? Read our CIO’s guide.
"You know how I get through walls? I don't see walls!!" Paul Gorup - @Cerner... and neither does a good CI/CD pipeline. #SpringOne @s1p #Concourse @CernerEng #CloudComputing @wattersjames pic.twitter.com/K0oTsUBscp— Greg Meyer (@Greg_Meyer93) December 7, 2017
Put Apps and Platforms On Pipelines
Digital transformation isn't about shoveling more features and apps into the market. It's about changing your relationship with customers through useful software. That means the relationship involves more than shiny new things. It's reinforced through a reliable, secure, cost-effective set of services. How do you do that? By also continuously delivering your underlying platforms.
What hurts your platform's reliability? Taking major downtime during quarterly upgrades. What puts your platforms—which all your mission-critical apps run atop—at risk? Leaving them unpatched or using rarely-changed credentials. And what keeps your platform costs high, thus making it harder to pass savings on to customers? Large teams doing intensive manual management of multi-site platforms. There's a better way.
no problem... already patched in PCF envs!! suck it, CVEs! #Kubernetes' First Major Security Hole Discovered https://t.co/8EGhWgdX4u #CyberSecurity #CloudComputing @pivotalcf @wattersjames @pivotal— Greg Meyer (@Greg_Meyer93) December 5, 2018
I've seen financial services firms, healthcare companies, government agencies, and retailers all put their platform onto pipelines. That means that they're continuously updated (without taking downtime), immediately patched when vulnerabilities emerge, and completely hands-off for system upgrades. This results in improved reliability, better security, and lower costs.
It's exciting to share this story. It's an honor to do this for @CernerEng which directly results in a better security posture for HealthCare solutions. The goal this year is to get more solutions brought on to get the same benefits. https://t.co/v09Bc7ytAG— Bryan Kelly (@xyloman2) February 5, 2019
Invest In Improving Availability While Increasing the Rate Of Change
Another key finding from the 2019 Accelerate State of DevOps Report? Top performers have a 7x lower change failure rate, even though they do 208x more frequent code deployments!
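To make those two metrics concrete, here's a sketch with purely illustrative numbers (the teams and counts below are invented, not taken from the report):

```python
def change_failure_rate(deploys: int, failures: int) -> float:
    """Fraction of production deployments that require remediation."""
    return failures / deploys

# Hypothetical quarter: a team deploying daily vs. one deploying monthly.
frequent = change_failure_rate(deploys=208, failures=8)  # ~3.8%
rare     = change_failure_rate(deploys=4,   failures=1)  # 25%
print(round(rare / frequent, 1))  # the frequent deployer fails ~6.5x less often
```

Counterintuitive at first, but smaller batches mean each deployment carries less risk and is easier to diagnose when something does go wrong.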
You can go fast and improve stability. Make no mistake, that's not a trivial accomplishment. Yes, deploying small changes frequently means a smaller change surface and simpler debugging. But complex change processes, brittle architectures, and thin infrastructure APIs all make it hard to continuously deliver software and platforms. Instead, you need clear change processes, free of review boards and heavy on automation. You need a resilient architecture that can tolerate rolling upgrades to compute and storage. And you need infrastructure APIs that make it possible to automate all the necessary provisioning, de-provisioning, and configuration activities.
From 45 days to 5 days to upgrade 9 @pivotalcf foundations for 1 patch for 1 product/@RichRuedinII of @ExpressScripts on using PCF Pipelines #SpringOne pic.twitter.com/utPxoOnCDI— Dormain Drewitz 🧟♀️ (@DormainDrewitz) December 7, 2017
We covered a lot of ground in this five-part blog series.
I offered an overview of digital transformation and some keys to success.
We looked at the paradox of choice and how to focus on outcomes for your customers.
I encouraged an investment in design thinking and scaling the design discipline within your organization.
We explored the value of processing data faster, and how to start embracing a streaming mindset.
And here, we looked at automating delivery to get value to your customers faster.
The key to all of this is deeply understanding what your customers need, and staying laser-focused on the desired outcomes.