The companies we at VMware Tanzu work with are constantly looking for new, better ways of developing and releasing quality software faster. But digital transformation means fundamentally changing the way you do business, a process that can be derailed by any number of obstacles. In his recent video series, Michael Coté identifies 14 reasons why it’s hard to change development practices in large organizations. Today, we look at the second digital transformation bottleneck: security.
We live in dangerous times, with news of sophisticated cyberattacks, phishing campaigns, and massive data breaches appearing at an alarming rate. It’s no surprise, then, that while governments introduce new legislation to protect critical national infrastructure, businesses are laser-focused on their own potential exposure and are allocating money to improve their cyber defenses.
Development teams need to write more secure code, and fixing security vulnerabilities early in the process is far more cost-effective than fixing them after an application has been deployed to production. In this blog post, we’ll discuss the role of security in our organizations, explore the fundamentals of a “shift left, shield right” methodology, and look at some specific areas that can help developers and security professionals work more collaboratively.
Enterprises accumulate security risks and are too slow to change
I’ve spent more years than I care to admit working with large-scale infrastructure environments as an engineer, a manager, and an executive. No matter the industry, a few problems seem to be endemic to enterprise environments.
Security testing is all too often an exercise completed at the end of development, when the cost of fixing problems is significantly greater. The security team often has limited contextual knowledge of the application, its purpose, or its dependencies. There will be immense pressure to get the application, which is likely already running behind schedule, into production. Project managers will send a flurry of requests to infrastructure engineers to diagnose connectivity issues, open firewall holes, and answer seemingly random questions about the underlying infrastructure to pacify the security auditor.
Throughout the delivery of a project, various problems will be identified: some are added to teams’ backlogs, others to a project risk register, and a few are just talked about and left unactioned—"just so long as we can go to production." Of course, once the application is live, the project team is disbanded and responsibility falls to the individual (often siloed) support teams that look after their respective areas, each with its own never-ending queue of other priorities.
The application will hopefully receive regular updates as bugs are inevitably reported, but it’s likely that the software dependencies, middleware, or operating systems will be left untouched. The operations teams know little of the application and the developers don’t want any infrastructure downtime for patching. Predictably, the net result is an aging estate, running old software that is susceptible to a broad range of security vulnerabilities.
And then it happens.
On some lazy Friday afternoon, news breaks of a brand-new, critical zero-day exploit that’s been seen in the wild. Soon the company is scrambling to formulate its response, starting with a seemingly sensible question—“What’s our exposure?”—and that’s where the problem begins: there’s no clear record of software dependencies, of where each application is running, or of the impact if a service is temporarily taken offline for patching.
Understanding security at the source, rather than testing at the destination
“Shift left” was a term coined back in 2001 by Larry Smith to explain the benefits of involving QA teams early in the software development process, where he found that “bugs are cheap when caught young.” He found that problems were identified sooner, fixed while they were still low risk, and teams were able to work more in parallel, avoiding critical path restrictions and freeing up valuable engineering time. Perhaps most importantly though, there was evidence of a culture change; tying QA to development was a clear statement that QA was valuable and “real” engineering, encouraging people who were good at it to do it.
Today, “shift left” has been successfully applied across the many disciplines of software development. The lessons from Larry Smith still stand true for how we approach security today. We talk about DevSecOps culture and how we can foster collaboration between development, security, and operations teams, removing barriers, building bridges, and appreciating that great software needs input from everyone. As I spoke about in my previous article, we must think of quality, scalability, security, and auditability as critical features of our applications.
Cloud native security basics
Understanding a large, dynamic, sprawling production estate is hard work. Retrospectively trying to unravel a spaghetti of dependencies, or to identify the owner of every component, can feel like conducting an archaeological dig during an earthquake.
A core tenet of cloud native security is to regularly “repave” servers and applications from a known good state, rather than attempt in-place upgrades or patching. Through modern practices such as infrastructure as code, supported by configuration management tools, we can record the desired state of our environments and have tooling automatically apply changes to ensure that state is maintained. We can be confident that our applications and infrastructure are deployed as designed and carry no unauthorized changes—even those made by a well-intentioned engineer trying to quickly fix a problem “out of band.”
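The desired-state loop at the heart of these tools can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not the implementation of any real tool; the setting names and values are invented for the example.

```python
# Minimal sketch of desired-state reconciliation, the pattern behind
# infrastructure-as-code and configuration management tools.
# All setting names and values below are hypothetical examples.

desired_state = {
    "ssh.PermitRootLogin": "no",
    "ntp.server": "time.corp.example.com",
    "firewall.default_policy": "deny",
}

def reconcile(desired, actual):
    """Return the changes needed to bring `actual` back to `desired`."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"have": have, "want": want}
    return drift

# A well-intentioned "out of band" fix shows up as drift...
actual_state = {
    "ssh.PermitRootLogin": "yes",   # changed by hand during an incident
    "ntp.server": "time.corp.example.com",
    "firewall.default_policy": "deny",
}

changes = reconcile(desired_state, actual_state)

# ...and the tool reverts it on the next run, restoring the known good state.
for key, diff in changes.items():
    actual_state[key] = diff["want"]
```

Real tools add scheduling, secrets handling, and change reporting on top, but the core idea is exactly this: the desired state is recorded declaratively, and drift is detected and corrected automatically rather than patched by hand.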
Combatting tool sprawl
Empowering developers to choose, use, and build their own tooling has led to some amazing improvements in agility, innovation, and time to market. However, it has also led to an explosion in the variety of software used across the estate, deployment processes bespoke to each team, and headaches for security teams expected to manage potential attack vectors. Here are four new tools that give you more security controls over your application lifecycle.
While there are often straightforward best practices for development and project configuration, getting developers to follow those practices has proved difficult. Tools like VMware Tanzu Application Accelerator can guide developers toward approved blueprints that bootstrap application development, providing ready-made templates that meet best practices and can include company-specific configuration or coding styles. This is a practical way to encourage developers to use common software libraries, follow corporate coding standards, and include standardized methods for authentication, logging, and so on. Architects, auditors, and others can lay the tracks toward compliance in practical ways that developers can follow, rather than in arcane policy documents that gather dust on a little-visited SharePoint site.
Control how container images are built
Letting developers mess with operating systems is usually a bad idea, security-wise. So if you’re letting developers build their own container images, complete with the operating system and supporting services underneath their applications, you’re introducing a huge security risk. VMware Tanzu Build Service is an incredible tool that brings the magic of Cloud Native Buildpacks to automate the building of container images appropriate for your application. With support for most common programming languages, you simply point it at your source code and Tanzu Build Service will analyze the code and build an appropriate container image with all the necessary software dependencies. Developers can continue to focus on writing software, safe in the knowledge that their applications sit on a secure foundation. Operators can push updates to buildpacks whenever necessary to guard against new vulnerabilities. And security auditors gain one-to-many efficiencies, knowing that a single set of secure buildpacks can be used by thousands of application deployments.
Continually scan container images for CVEs
Most development teams have already embraced containers, finding them a convenient way to quickly build a runtime that contains everything their app needs to run. But containers often include pre-built image layers from public repositories, and those layers can contain security vulnerabilities. The Sysdig 2022 Cloud Native Security and Usage Report found that 75 percent of containers have “high” or “critical” patchable vulnerabilities. Many container registries, such as Harbor, can perform a vulnerability scan whenever a container image is uploaded, giving developers instant feedback. Such scanning detects known vulnerabilities in every layer of the container image and points developers straight to the relevant security advisory (CVE) and the offending layer. Developers can use this information in their automated pipelines, allowing only secure containers to be deployed to production environments. With a good build process in place, you can even automate rebuilding without developers needing to do anything. Wells Fargo, for example, uses this practice to rebuild production multiple times a week, easily deploying security patches without troubling developers.
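A pipeline gate on scan results can be very simple in principle. The sketch below uses an invented, simplified report structure for illustration; a real pipeline would parse the actual JSON emitted by your registry’s scanner and would typically be wired into a CI step that fails the build.

```python
# Hypothetical sketch of a CI pipeline gate: block deployment when the
# registry's vulnerability scan reports high or critical CVEs.
# The report structure below is illustrative, not any scanner's real schema.

BLOCKING_SEVERITIES = {"High", "Critical"}

def gate(scan_report):
    """Return (allowed, blocking_cves) for a scanned image."""
    blocking = [
        v["cve"]
        for v in scan_report["vulnerabilities"]
        if v["severity"] in BLOCKING_SEVERITIES
    ]
    return (len(blocking) == 0, blocking)

# Example scan result for one image (values are illustrative).
report = {
    "image": "registry.example.com/team/app:1.4.2",
    "vulnerabilities": [
        {"cve": "CVE-2021-44228", "severity": "Critical", "layer": "sha256:ab12"},
        {"cve": "CVE-2020-8911", "severity": "Medium", "layer": "sha256:cd34"},
    ],
}

allowed, blocking = gate(report)
if not allowed:
    print("Deployment blocked by: " + ", ".join(blocking))
```

The value of the gate is that the policy (which severities block a release) lives in one place, set by the security team, while developers get the feedback automatically on every push instead of at a late-stage audit.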
Of course, improving how we approach security earlier in the development cycle doesn’t mean we can ignore traditional cybersecurity. We most definitely need to do all we can to protect our production assets from malicious attack and should deploy network firewalls and web application firewalls alongside a mature suite of observability and audit tools to detect potential attacks.
For cloud native applications, developers are moving toward service mesh technology to simplify connectivity between application microservices. VMware Tanzu Service Mesh provides end-to-end encryption, fine-grained access control, and API threat protection. Platforms like Kubernetes and Cloud Foundry also give us new insights into our applications and new opportunities to secure them from attack. Using tools like VMware Carbon Black, we can not only limit network traffic like a traditional firewall, but also restrict which processes are allowed to run inside our containers. With these additional controls in place, an application may still be vulnerable, but an attacker cannot execute a payload or use the application as a stepping stone to traverse to other applications.
Our businesses rely more and more on software, and we must continue to do all we can to protect ourselves from the increasing threat of cyberattacks. When building and running software, we must treat security as a core feature of our applications and include vulnerability assessment early in our development processes.
Adopting modern cloud native tooling enables us to embed security into our applications as an intrinsic part of our development practices. Doing so not only increases the likelihood that best practices are adopted, compared to traditional policy documentation, but also begins to build a culture of mutual respect across IT disciplines and fosters more collaboration between teams.
If you’re keen to build better quality software and avoid a potentially devastating breach of your customer data, download this white paper on best security practices for managing containers, get an early peek at the Securing Cloud Applications ebook, and check out how the VMware Tanzu portfolio of cloud native platform software and services can help enable DevSecOps outcomes in your business.
About the author: Bryan Ross