Edge computing is becoming a business imperative. As the data generated by endpoints such as cars, sensors, cameras, and operational technologies explodes, the number of applications businesses run at the edge will only grow. In fact, Gartner predicts that, “by 2022, as a result of digital business projects, 75 percent of enterprise-generated data will be created and processed outside the traditional, centralized data center or cloud—an increase from the less than 10 percent generated today.”*
There are many use cases for running applications at the edge, and they vary by industry. For example, in retail, we see use cases like in-store analytics, faster checkout at the point of sale, theft prevention, smart inventory, and aisle management. In manufacturing, use cases include video surveillance and predictive maintenance to enhance a plant’s productivity. The possibilities are endless and are backed by genuine outcomes that businesses want to achieve.
Traditionally, all of these applications were run in virtual machines (VMs), but with the emergence of Kubernetes and microservices architectures, more and more applications are being containerized. As such, there is a need for running cloud native compute stacks at edge locations. The reality is that most organizations today will have a mix of both VM-based applications as well as new, container-based applications.
Let’s review how companies approach their edge journey, run through four requirements for cloud native at the edge, and talk about how the VMware Tanzu solution architecture can help.
Edge journey to cloud native
Organizations can take many different approaches to edge computing. Some have a large infrastructure footprint at the edge, essentially hundreds of mini data centers, because they need more processing power and independent edge locations. Examples include cruise ships, distribution centers, and service centers. Then there is the smaller infrastructure footprint at the edge: thousands of edge sites with resource-constrained, two- to three-server deployments. This is the remote office/branch office (ROBO) example, which includes retail stores.
VMware helps customers with their edge journeys across various implementations. Specifically for the small, two- to three-node deployments, VMware offers a ROBO topology that enables you to run VM workloads at these locations. With the vSphere ROBO topology, you can manage your remote offices and branch offices with little or no local IT staff. You can rapidly provision servers through virtualization, minimize host configuration drift, and gain enhanced visibility into regulatory compliance across multiple sites.
Now, with the addition of VMware Tanzu, you can simplify how you architect and deploy a cloud native stack at hundreds or even thousands of edge locations so that you can run cloud native as well as traditional applications at the edge, as shown in the image below.
Cloud native edge architecture for ROBO topology
4 capabilities for running a cloud native stack at the edge
For those companies that already have ROBOs with servers running edge workloads, the act of running modern, distributed applications across edge locations can add a new layer of complexity. There is a significant difference between running a cloud native platform at the core data center and running it at the edge. Let’s review the four key capabilities you need to run a cloud native stack at the edge.
1. Run “just enough” Kubernetes infrastructure at the edge
You need a platform that can fit across varied footprints of edge deployments and still have a consistent Kubernetes experience. Along with limited compute capacity, these edge locations can have a very spotty network connection, with limited bandwidth. Connectivity back to the core is not always guaranteed. It is important for the platform to be able to support edge deployments that are completely autonomous while also enabling highly available and resilient deployments that can handle server or site failures.
2. Centralize fleet management of Kubernetes clusters at scale
Scale is the name of the game, but it also presents several challenges. For companies modernizing their application estate, it is not uncommon to have cloud native deployments across many locations: Kubernetes clusters in core data centers (in the tens), mini data centers away from the core (in the hundreds), and thousands of smaller two- to three-server deployments at locations like stores. To operationalize this vast Kubernetes estate, a centralized control plane is critical: it helps you deploy, manage, and better secure clusters, apply consistent policies, and gain insight into all your edge deployments.
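To make group-based fleet management concrete, here is a minimal Python sketch. It is not Tanzu Mission Control's actual API; the `Cluster`, `ClusterGroup`, and `apply_policy` names are hypothetical. The idea it shows is bucketing clusters by site type so that a policy lands on a whole group at once rather than being configured cluster by cluster:

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    site_type: str  # e.g., "core", "mini-dc", or "store"

@dataclass
class ClusterGroup:
    name: str
    clusters: list = field(default_factory=list)
    policies: dict = field(default_factory=dict)

def group_fleet(clusters):
    """Bucket clusters by site type so policies can be applied per group."""
    groups = {}
    for c in clusters:
        groups.setdefault(c.site_type, ClusterGroup(name=c.site_type)).clusters.append(c)
    return groups

def apply_policy(group, policy_name, policy):
    """Attach one policy to every cluster in a group; return the clusters it reached."""
    group.policies[policy_name] = policy
    return [c.name for c in group.clusters]
```

With a structure like this, adding the thousandth store cluster to the fleet means adding it to the `store` group; every policy already attached to that group applies to it automatically.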
3. Deploy and manage app lifecycle at the edge
The common goal for all application development teams and their business partners is to drive the pace of innovation with the value they deliver to customers. Applications at the edge are no different. Abstracting and automating away the unique challenges that edge computing (at scale) presents is necessary to achieve the desired business outcomes. You will be deploying sets of microservice applications on fleets of edge locations, and so will need to think about and invest in deployment strategies for a variety of applications.
Questions you will need to answer include: How do we do canary deployments? When do the updates actually propagate to locations? Where does the container registry that holds all the applications sit? It’s even more important that the non-production development environment be as close to identical as possible to the hundreds or thousands of edge sites in order to avoid bugs in production. It’s also imperative that application and operations teams work together to automate the blueprint for the entire stack using GitOps or a similar approach, to the point that a disaster recovery strategy can be backed by bootstrapping edge environments and applications from scratch in the event a site gets corrupted or damaged.
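As an illustration of one such deployment strategy, here is a hedged Python sketch of splitting a fleet of edge sites into rollout waves: a small canary wave first, then fixed-size batches. The function and its parameters are hypothetical, not part of any VMware tool:

```python
def rollout_waves(sites, canary_count=2, batch_size=50):
    """Split a fleet of edge sites into rollout waves.

    The first wave is a small canary set; updates only propagate to the
    remaining sites, in fixed-size batches, once the canary looks healthy.
    """
    waves = []
    if canary_count:
        waves.append(sites[:canary_count])  # canary wave
    remaining = sites[canary_count:]
    for i in range(0, len(remaining), batch_size):
        waves.append(remaining[i:i + batch_size])
    return waves
```

A GitOps controller could then advance from one wave to the next only after health checks pass, which directly answers the "when do updates actually propagate" question with a policy rather than a manual decision.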
4. Centralize visibility, monitoring, and observability
Knowing what’s going on at any given moment across so many deployments is a daunting task. Managing everything from physical infrastructure to applications to the network requires a single pane of glass, with drilldown and alerting. You need to be able to create alerts tuned by advanced analytics, troubleshoot systems, and understand the impact of running production code at the edge.
In order to do this reliably, you need to consider the quality of the network over which your metrics and filtered logs are shipped; in some cases, this information can tolerate little or no data loss. With a centralized control plane, the observability solution needs to be able to store and forward data from edge sites and be configurable for things like frequency, filtering, local storage for buffering, prioritization, and more. At the same time, collector agents and proxies need to consume as little CPU and memory as possible so that business applications have the most room to operate. In some cases, local access to logs and metrics also needs to be available, especially when outbound network connectivity is lost.
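The store-and-forward pattern can be sketched in a few lines of Python. This is a hypothetical buffer for illustration, not Tanzu Observability's actual agent; it shows bounded local storage with priority-based shedding, so that when the link to the central control plane is down, a full buffer drops the oldest low-priority points before touching high-priority ones:

```python
import collections

class StoreAndForwardBuffer:
    """Buffer telemetry locally at an edge site; flush when connectivity returns.

    When the bounded buffer is full, the oldest low-priority points are
    dropped first, so critical data survives long network outages.
    """
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.high = collections.deque()
        self.low = collections.deque()

    def record(self, point, priority="low"):
        if len(self.high) + len(self.low) >= self.capacity:
            # Shed oldest low-priority data before any high-priority data.
            (self.low if self.low else self.high).popleft()
        (self.high if priority == "high" else self.low).append(point)

    def flush(self, send):
        """Ship high-priority points first, then the rest; empties the buffer."""
        for queue in (self.high, self.low):
            while queue:
                send(queue.popleft())
```

In a real agent, `send` would be the network call to the central observability backend, and `capacity` would be tuned against the local disk or memory budget the collector is allowed to consume.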
VMware Tanzu edge solution architecture: A best practice example
The VMware Tanzu edge solution architecture offers a best practice for running a cloud native stack at edge locations using the ROBO topology. It comprises core VMware Tanzu capabilities, including:
Unified Kubernetes runtime – VMware Tanzu Kubernetes Grid provides a consistent, upstream-compatible implementation of Kubernetes that is tested, signed, and supported by VMware.
Global multicluster management – VMware Tanzu Mission Control is a SaaS-based centralized management platform for consistently operating and securing your Kubernetes infrastructure and modern applications across multiple teams and clouds, regardless of where they reside.
Full-stack observability – VMware Tanzu Observability by Wavefront is another SaaS offering that provides high-performance streaming analytics with 3D observability (e.g., metrics, histograms, traces/spans). You can use it to collect data from many services and sources across your entire application stack, including edge sites.
The Tanzu Edge solution architecture:
Illustrates a tested deployment of VMware Tanzu edge solution architecture atop the vSphere ROBO topology
Demonstrates the configurability, scalability, and resiliency of a cloud native stack running in a vSphere ROBO topology, as well as how to operationalize it
Articulates the modern application architectures and deployment strategies along with the variables that need to be considered for reliably delivering feature updates to the edge
Details a SaaS management control plane that enhances edge computing for modern applications and simplifies the maintenance overhead for operations teams
Check out our detailed Solution Guide for more information about the architecture itself and how to get started.
*Gartner, Top 10 Strategic Technology Trends for 2020, October 2019