Developers would prefer to focus on business logic, not infrastructure. Serverless computing offers an efficient way to build and consume applications, simplifying life for developers and increasing productivity.
What is serverless?
Serverless computing is a cloud native computing paradigm that encourages a DevOps approach to software development and deployment. Developers write code for a serverless environment without needing to concern themselves with the details of servers, virtual machines, containers, or other infrastructure particulars. All of that is handled automatically via backend services with the underlying infrastructure complexity abstracted away.
The term serverless itself can be a bit misleading. There are still servers running the application workloads, but developers are insulated from the details. Developers can write code and run it in a public cloud using services such as AWS Lambda or Microsoft Azure Functions. It’s also possible to run a serverless environment on-premises, for example by setting up and running Knative, a serverless runtime environment for Kubernetes.
From a developer’s perspective, serverless eliminates the usual considerations regarding infrastructure. There’s no provisioning. No patching. No capacity planning. No scaling. All that’s required is the code or service written in the form of a function. The serverless runtime takes care of everything else, enabling developers to focus on the application and business logic. Resources are only consumed when a function is in use.
Serverless development is based on a number of simple principles.
Write code as functions
Functions are small, single-purpose pieces of code that run dynamically, usually in response to an event trigger. (Think of functions as the purest form of microservices.) To get started with function-as-a-service (FaaS), developers simply bring their function code and wire up event triggers. Many providers offer FaaS in a pay-as-you-go model.
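A function in this style can be sketched in a few lines. The handler signature below follows the common FaaS convention of an event payload plus a context object; the event shape (a JSON order record) is invented for illustration, not any provider's real schema.

```python
import json

def handler(event, context=None):
    """Entry point the FaaS runtime invokes; 'event' carries the trigger payload.
    The order-record event shape here is illustrative only."""
    order = json.loads(event["body"])
    total = sum(item["price"] * item["qty"] for item in order["items"])
    # The return value goes back to the caller or on to the next service.
    return {"statusCode": 200,
            "body": json.dumps({"order_id": order["id"], "total": total})}

# Invoking the handler locally, the way a runtime would:
event = {"body": json.dumps({"id": "A17",
                             "items": [{"price": 2.5, "qty": 4},
                                       {"price": 1.0, "qty": 2}]})}
response = handler(event)
print(response["statusCode"])  # 200
```

The function holds no state of its own, which is what lets the platform start, stop, and replicate it freely.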
Though the terms serverless and functions are sometimes used interchangeably, they aren’t the same thing. Functions are frequently used as the compute layer for serverless workloads. But you can write serverless applications without using functions. And you can write functions that don’t run on a serverless platform.
Use event-driven architecture
As the name implies, code in an event-driven architecture (EDA) runs in response to events, where an event might be the receipt of a message, a completed file upload, or the insertion of a record in a database. Serverless functions are usually purpose-built to work with events and data streams. That makes them a perfect fit for EDA. FaaS solutions commonly include integrations with components like message brokers or data stores. Developers can trigger serverless functions to respond to events related to these services.
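The pattern can be sketched as a function that dispatches on an event's type. The event schema below is invented for illustration; real event sources (object stores, message brokers, databases) each define their own.

```python
# Minimal sketch of event-driven dispatch: the runtime delivers an event,
# and the function routes it by type. Event names and fields are assumptions
# of this example, not a real service's schema.

def on_file_uploaded(detail):
    return f"indexing {detail['key']} ({detail['size']} bytes)"

def on_record_inserted(detail):
    return f"auditing row {detail['row_id']} in {detail['table']}"

# Route each event type to its handler; unknown events are ignored.
ROUTES = {
    "file.uploaded": on_file_uploaded,
    "record.inserted": on_record_inserted,
}

def handle_event(event):
    handler = ROUTES.get(event["type"])
    return handler(event["detail"]) if handler else None

print(handle_event({"type": "file.uploaded",
                    "detail": {"key": "reports/q3.csv", "size": 2048}}))
```

In a managed FaaS platform this routing is usually configured in the trigger wiring rather than written by hand, but the shape of the code is the same.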
Connect managed services together
Since serverless functions are stateless, all state and configuration information is kept in backend services, including databases, message queues, authentication providers, or routing services such as an API gateway. In a serverless architecture, these services are all managed. Just as developers don’t have to think about the infrastructure required to run their code, they don’t need to worry about the related services either. With serverless functions as the “glue,” developers can connect managed services together into a coordinated system, so there’s no need to think about infrastructure in any context.
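The "glue" role can be sketched with in-memory stand-ins for the backend services. In production these would be managed offerings (a message queue, a database); the stubs below only illustrate the stateless pattern: all state lives in the backing services, none in the function itself.

```python
from collections import deque

queue = deque()   # stand-in for a managed message queue
database = {}     # stand-in for a managed key-value database

def enqueue_signup(email):
    queue.append({"email": email})

def process_signup(event):
    """Stateless glue function: reads the event, writes state to a backend service."""
    database[event["email"]] = {"status": "active"}
    return f"provisioned {event['email']}"

enqueue_signup("dev@example.com")
while queue:  # in a real deployment, the serverless runtime drives this loop
    print(process_signup(queue.popleft()))

print(database["dev@example.com"]["status"])  # active
```

Because the function keeps nothing between invocations, any instance of it can process any message, which is what makes automatic scaling safe.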
What are the benefits of serverless?
Depending on what you want to optimize, serverless can help your business in many ways. Perhaps you want to reduce your overall compute costs or achieve a faster time to market. Done correctly, a serverless architecture offers a number of benefits for optimizing software development and maintainability.
Scale to zero.
Scale to zero is a key serverless concept, allowing compute resources to be consumed only when they’re in use. A serverless runtime will automatically scale out a function to handle increased load. The runtime will also scale to zero when the function isn’t in use. When the next request comes in, the runtime is ready to spin back up. With public cloud serverless solutions, customers are only billed for the time that the function is running.
Write less code.
A goal of the serverless model is to allow developers to write as little code as possible. By writing narrowly scoped units of code, it’s easier to focus on business logic. Serverless platforms abstract the complexity of packaging, deployment, and event consumption. This way, developers don’t fiddle with complex integrations or boilerplate code.
There can be significant advantages to having less code. Lower complexity leads to fewer bugs. Your attack surface is smaller, improving code security. Less code also means your code is easier to maintain over time.
Ship code faster with serverless architecture.
Organizations are always looking for ways to get applications to market quickly. A serverless architecture can enable you to ship code faster than ever before. Developers use functions to stitch managed services together into a coordinated system. There’s less to build from scratch and no need to worry about wiring up custom components; the serverless runtime handles routing, DNS, load balancing, and firewall rules, making deployments that much easier.
With a serverless framework, developers no longer need to worry about complicated build and deployment processes. As long as they don’t change external interfaces, functions can be updated independently as enhancements are made, tested, and approved.
Manage only what you build.
Serverless workloads depend mostly on backend services. Teams are only responsible for what they've built—their function code. Since there are no low-level implementation details to manage, Day 2 operational tasks look much different than they do for a more traditional, server-based application model.
Focus on business outcomes.
In a serverless architecture, there’s an emphasis on writing business logic instead of plumbing or packaging. This frees up developers to focus on solving specific business challenges. That means your organization can concentrate on outcomes instead of managing technology. It’s a pure realization of cloud native—underlying infrastructure complexities are abstracted away.
Update or complement existing apps.
Traditional monolithic apps can be hard to modernize. A serverless framework can let you add new features or new integrations in less time while avoiding a full modernization effort.
What to keep in mind if you’re considering serverless
Serverless is a powerful idea. There’s no infrastructure to manage; developers just bring code and everything else is taken care of. It can enable many promising capabilities for your organization. But as with any cloud native application, there are things to keep in mind before going down the serverless path.
Your team may have to develop new skill sets.
Building applications to embrace serverless architecture is a fundamental change, and there’s a learning curve. Developers may face new challenges when working with serverless. For some, event-driven patterns and asynchronous operations are new concepts to master. What’s more, teams must become familiar with the managed services they’re connecting. It’s also sometimes tricky to test code locally, and developers need to adapt to a new set of tooling. A good rule of thumb is: If you’re not ready for microservices, you’re not ready for serverless.
Plan for performance.
If you care about performance, plan accordingly. The pay-per-use model may seem attractive, but it doesn’t come for free. For one thing, when a function scales to zero, it has to be ready to spin up and back into action when triggered by an event. (The time it takes for this to happen is called a “cold start.”)
The performance hit on the first request may be significant, depending on your code, chosen serverless runtime, and use case. Don’t forget to factor in startup times and network latency. Also, be aware of any additional charges associated with adjacent managed services. You may opt to receive or send data in batches to limit bandwidth charges or connection costs.
If you do serverless wrong, it could cost you.
When adopting any new architecture, be realistic. Technology alone is rarely enough without the culture and practice to go with it. If you aren’t using cloud native service APIs, you could run into problems with persistent connections. Avoid long-running functions and don’t write functions with too many dependencies. As with microservices, doing serverless wrong can be detrimental and end up costing you more.
Choose your managed services carefully.
The list of choices for managed services in the public cloud is always growing. Pick the right ones for the job and properly integrate them with your application. The wrong choice creates the kind of technical debt you’re trying to avoid by using serverless.
What are the best use cases for serverless?
If your team is new to serverless, it’s important to think about use cases. One of the most obvious use cases for serverless is building APIs. The serverless framework scales out automatically to accommodate the processing load without your having to plan for it explicitly. For example, the Lambda service in AWS can work in conjunction with the API Gateway service, and multiple instances of the same function run in parallel to accommodate simultaneous user requests.
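An API-backing function typically receives an HTTP-style event and returns a status code and body. The sketch below follows the API Gateway "proxy" convention of httpMethod and pathParameters fields; treat the exact event shape as an assumption of this example.

```python
import json

# Illustrative product catalog; a real API would read from a managed database.
PRODUCTS = {"42": {"name": "widget", "price": 9.99}}

def api_handler(event, context=None):
    """HTTP-backed function: map an API event to a status code and JSON body."""
    if event.get("httpMethod") != "GET":
        return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}
    product = PRODUCTS.get(event["pathParameters"]["id"])
    if product is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(product)}

resp = api_handler({"httpMethod": "GET", "pathParameters": {"id": "42"}})
print(resp["statusCode"])  # 200
```

Under load, the platform runs as many copies of this handler as there are concurrent requests; none of that concurrency appears in the code.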
Similarly, using a serverless framework you can create an autoscaling website without dedicating a lot of time to upfront infrastructure setup and planning.
Event-streaming applications are another popular use case. Whether you’re using an event-streaming service or monitoring event logs, a serverless framework can be used to create a scalable and elastic approach to event handling.
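A stream-processing function typically receives records in batches, often with base64-encoded payloads, a convention several streaming services use. The record shape below is illustrative.

```python
import base64
import json

def process_batch(batch):
    """Decode a batch of stream records and aggregate them.
    The batch/record field names here are assumptions of this sketch."""
    readings = []
    for record in batch["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        readings.append(payload["temperature"])
    # A real function might write this aggregate to a managed data store.
    return {"count": len(readings), "max": max(readings)}

def encode(obj):
    """Helper to build base64-encoded test records."""
    return base64.b64encode(json.dumps(obj).encode()).decode()

batch = {"records": [{"data": encode({"temperature": t})} for t in (18.5, 21.0, 19.2)]}
print(process_batch(batch))  # {'count': 3, 'max': 21.0}
```

As throughput rises, the platform simply invokes more copies of the function against more batches, giving the elastic behavior described above.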
Companies are using serverless to handle a wide variety of asynchronous tasks. For example, many applications require image, video, or file manipulations. Such tasks often run as serverless functions running efficiently in the background.
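A background file-manipulation task can be as simple as the sketch below, which compresses an uploaded file. Here the file bytes are passed in directly; in a real deployment the function would be triggered by an upload event and fetch the bytes from a managed object store.

```python
import gzip

def compress_upload(filename, data):
    """Asynchronous task sketch: compress uploaded file bytes.
    A real function would write the result back to an object store."""
    compressed = gzip.compress(data)
    return {"file": filename + ".gz",
            "original_bytes": len(data),
            "compressed_bytes": len(compressed)}

result = compress_upload("report.txt", b"serverless " * 1000)
print(result["compressed_bytes"] < result["original_bytes"])  # True
```

Because nobody is waiting on the response, tasks like this tolerate cold starts well, which is part of why they are such a good serverless fit.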
When is serverless NOT the right choice?
No tool is right for every job, so it’s important to think about when a serverless architecture may not be the best choice. As a developer, you necessarily give up control over infrastructure and backend services with serverless, but sometimes having control is necessary or desirable.
Cloud-based services may not deliver the same performance every time a function is invoked, especially after a cold start. (See the section: Plan for performance.) Applications that require maximum performance or specialized resources may be a bad fit.
If an application runs for a long time, your usage costs could be higher running that application as a function than it would be running the same application with dedicated resources on-premises or in the cloud. Applications that have consistent resource usage—rather than elastic usage in response to spikes in demand—may not be best suited to a serverless architecture.
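A back-of-the-envelope comparison makes the trade-off concrete. All rates below are made-up round numbers for illustration, not real pricing from any provider.

```python
# Compare per-use billing against a flat-rate dedicated server.
# Both prices are invented for illustration only.
PRICE_PER_FN_SECOND = 0.00002   # assumed: cost per second of function execution
PRICE_SERVER_MONTH = 50.0       # assumed: flat monthly cost of a dedicated server
SECONDS_PER_MONTH = 30 * 24 * 3600

def serverless_cost(busy_fraction):
    """Monthly cost when you pay only while the function is actually running."""
    return busy_fraction * SECONDS_PER_MONTH * PRICE_PER_FN_SECOND

# Spiky workload, busy 2% of the time: serverless is far cheaper.
print(round(serverless_cost(0.02), 2))  # ~1.04, versus 50.0 for the server

# Constant workload, busy 100% of the time: the dedicated server wins.
print(round(serverless_cost(1.0), 2))   # ~51.84, versus 50.0 for the server
```

The crossover point depends entirely on your actual rates and duty cycle, but the shape of the result is general: pay-per-use rewards idle time and penalizes constant load.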
Finally, it’s important to recognize that you can’t just move your serverless functions from one service to another; some work will be necessary to switch vendors. When you build an application for a particular service, you risk getting locked in.
Serverless can run on-premises
As noted in the introduction—while most people think of serverless in conjunction with cloud services—you can also run a serverless framework on-premises (or set up your own serverless framework in the public cloud). This can be a good way to get the benefits of serverless while addressing concerns about lack of control and vendor lock-in. Of course, when you run a serverless framework yourself, someone in your organization has to take responsibility for configuring and managing infrastructure and setting up and supporting necessary backend services such as databases.
If your organization is running Kubernetes, you can set up serverless Kubernetes environments using tools such as Knative, Kubeless, Apache OpenWhisk, and others.
Serverless at VMware
VMware recognizes the importance of using cloud native application methods like serverless to speed application delivery. We’re always working to increase your organization’s success by delivering the latest capabilities and making them easier to consume.
Cloud Native Runtimes™ for VMware Tanzu™
Cloud Native Runtimes for VMware Tanzu enables developers to leverage the power of Kubernetes for serverless use cases without having to master the Kubernetes API. Cloud Native Runtimes for VMware Tanzu can be used by itself or in concert with other VMware Tanzu capabilities to get modern cloud native applications with event-based architectures up and running on Kubernetes more quickly, regardless of a developer’s level of experience with the platform.
Cloud Native Runtimes for VMware Tanzu integrates with Tanzu Kubernetes Grid and vSphere 7.0 as well as Kubernetes services from AWS, Microsoft Azure, and Google Cloud.
To learn more:
- Try out our serverless capabilities by joining the Cloud Native Runtimes for VMware Tanzu beta.
- Read about the general availability release of Cloud Native Runtimes’ serving capabilities in our announcement blog.
- Read the Cloud Native Runtimes for VMware Tanzu documentation.
VMware Tanzu Application Platform
Tanzu Application Platform simplifies and secures the container lifecycle to speed the delivery of modern apps at scale. With its modular, full-stack capabilities, you can embrace DevSecOps and stand up a platform for modern apps that ensures security throughout the container lifecycle. Automatically build a stream of compliant containers. Secure your software supply chain end to end.
What does it mean to go serverless?
An organization going serverless signifies it’s adopting serverless computing, a cloud native computing paradigm that brings new, efficient ways to build, deploy, and consume applications. With serverless, developers don’t have to manage anything but the application.
Where is serverless used?
Serverless computing is used by developers to build and deploy applications. Common use cases include building APIs, creating websites that autoscale, event streaming, and handling asynchronous tasks. Serverless capabilities are available from cloud providers and can also be set up on-premises.
Is serverless a function-as-a-service (FaaS) offering?
FaaS is a form of serverless computing in which developers deploy functions—small, single-purpose pieces of code that run dynamically—allowing them to write as little code as possible. The terms aren’t interchangeable, though: functions are frequently the compute layer for serverless workloads, but serverless applications can be built without them.
What are the benefits of using a serverless architecture?
A serverless architecture has a number of benefits, including writing less code, paying for consumption rather than allocation, shipping code faster, managing only what you build, and focusing on business outcomes.