Cloud native technologies—including containers and Kubernetes, continuous integration and continuous delivery (CI/CD), DevOps, and microservices—have become a dominant force in software development and are indispensable to modern application delivery. Many teams are turning to a microservices architecture to improve software delivery speed, independence, and innovation.
What is microservices architecture?
Traditionally, development teams relied on monolithic architectures for business applications. Although a monolithic architecture can work well for applications with few dependencies, microservices can significantly accelerate an organization's development processes. Monolithic applications often have a single codebase, are owned and managed by a single (often large) team, and are built and deployed as a single unit.
The term microservices refers to an architectural approach based on multiple smaller, more modular services. Each microservice has its own codebase and is usually owned and maintained by a separate small team. Microservices:
- Are loosely coupled. Services can be updated independently.
- Have a bounded context. A service doesn’t need to know anything about surrounding services to function properly.
In other words, microservices are (relatively) small, independent services that work together as part of a larger system. The microservices approach—and associated tools such as CI/CD, containers, and Kubernetes—enable teams to adapt more quickly to changing demands and accelerate the delivery of new software features. Containers are commonly used for microservices because they provide a scalable, portable standalone package; Kubernetes lets you conveniently orchestrate groups of containers and other services that run collectively as applications.
Microservices architecture is the opposite of a traditional monolithic architecture that has tightly integrated modules that ship infrequently and scale as a single unit. Microservices have become popular with companies that need greater agility and scalability.
Microservices development includes several important characteristics:
- Each instance of a service—of which there may be many operating in parallel—runs as a separate process in its own container and communicates with other services and the outside world via APIs.
- Individual microservices can be deployed, upgraded, scaled, and restarted independently from the other services that make up the application.
- When managed by an automated system such as Kubernetes, microservices can be updated without disrupting the running application or negatively impacting users.
- Developers are free to choose the best technology to build each microservice and encapsulate the necessary business logic.
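The characteristics above can be made concrete with a minimal sketch: a single service that owns its own data and exposes it only through an HTTP API. The "inventory" service, its routes, and its data are all hypothetical, and a real service would use a proper web framework; this version uses only the Python standard library.

```python
# Minimal sketch of a single microservice: it owns its data and exposes
# it only through a small HTTP API. The "inventory" service and its
# routes are illustrative, not a real product's API.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# The service owns this store; no other service reads it directly.
INVENTORY = {"sku-1": 12, "sku-2": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /stock/<sku> returns the stock level as JSON.
        _, _, sku = self.path.rpartition("/")
        if sku in INVENTORY:
            body = json.dumps({"sku": sku, "stock": INVENTORY[sku]}).encode()
            self.send_response(200)
        else:
            body = b'{"error": "unknown sku"}'
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep demo output quiet

# Port 0 asks the OS for any free port; in production the container
# platform would assign and publish the address instead.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/stock/sku-1"
reply = json.loads(urlopen(url).read())
print(reply)  # {'sku': 'sku-1', 'stock': 12}
server.shutdown()
```

Other services, and the outside world, interact only with the API; the service's process, data store, and release cycle remain its own.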
Today, IT teams in businesses across industries—from retail to financial services to manufacturing—use microservices for new applications and to break down existing monoliths. But it’s important to bear in mind that microservices aren’t a simple code rewrite. They require a different mindset, approach, and operational model.
By adopting a microservices architecture, teams can be more responsive to customer needs. Rather than adhering rigidly to fixed release schedules, software teams are empowered to ship new capabilities more rapidly when customers need them. When developers use microservices design patterns to create new apps or break apart older applications, they’re helping to improve software development processes, release software faster, and improve collaboration within and among software teams.
Microservices build on Agile and DevOps principles, helping software teams work in parallel while iterating quickly on discrete capabilities. A successful microservices architecture relies heavily on repeatable automation, supports fine-grained scaling of services, and uses patterns designed to keep the system running even when individual components fail, ensuring greater reliability.
Microservices have additional benefits:
- Increased modularity and the ability to separate services by business requirements
- Ability to scale out individual microservices that need more resources without having to scale out an entire application
- Delivery of multiple services from a single host, allowing for better resource utilization
- Support for continuous code refactoring to increase the benefits of microservices over time
Monolithic vs. Microservices Architecture

| Microservices architecture | Monolithic architecture |
| --- | --- |
| Has a single focus. It does one thing and does it well. Microservices are targeted at a specific problem domain and contain everything they need (data included) to manage that experience. The "micro" in microservices is about scope, not size. | Has a wide focus. Tightly integrated software packages attempt to solve many business challenges at once, creating many code dependencies. |
| Is loosely coupled. Microservices development demands that services be as self-sufficient as possible and avoid hard-coded references to other services. | Is tightly coupled. Monoliths are often a tangled web of interdependent components that cannot be deployed without a carefully crafted sequence of steps. |
| Is delivered continuously. Microservices are ideal for teams with apps that have constantly evolving feature sets. To deliver value to market as quickly as possible, microservices are delivered to production regularly through automation. | Relies on scheduled delivery. Applications are developed and updates are delivered as scheduled, often with a quarterly or annual cadence. |
| Has independent teams that own the service lifecycle. The microservices transformation is as much about team structure as it is about technology. Microservices are built, shipped, and run by independent teams. Not every service needs this treatment, but it's a powerful model for business-critical services. | Has project teams that hand off the service lifecycle. Project teams build the first iteration of software and are then pulled apart for the next assignment; the software is handed over to an operations team to maintain. |
| Has design patterns and technology that emphasize distributed systems at scale. A microservices architecture depends on a set of capabilities for service discovery, messaging, network routing, failure detection, logging, storage, identity, and more. Teams cannot build and run microservices with the same approach and tools as monolithic software. | Puts process first. Siloed tools and processes focused on key development stages, QA, and release to production produce monolithic software. |
SOA vs. microservices
The term service-oriented architecture (SOA) first appeared in the 1990s to describe an approach to componentizing application services for an entire enterprise. Microservices and SOA share the principle of breaking down monolithic applications into smaller services, but there are significant differences.
SOA typically has an enterprise scope—a single service might be used by many separate applications all communicating across an enterprise service bus (ESB). Microservices, on the other hand, limit the scope to the application. A microservice only serves the needs of one application, typically communicating via APIs versus a shared ESB. If another application requires the same microservice, it runs that microservice separately. This difference impacts the way data is managed.
With SOA, all applications obtain and modify data from the source. This has advantages for data integrity and consistency, but it can result in bottlenecks and slow services. With SOA, each service may be designed to read and write data from a backend relational database that is the system of record for the company. By comparison, microservices are typically designed to have access to all the data they need locally—simplifying design and improving performance—possibly at the cost of data duplication.
In a typical microservices example, each microservice in an application might have its own database—an e-commerce app might have a service that manages account information with an account database, a service that manages inventory with its own inventory database, and so on—allowing each service to run without bottlenecks, with mechanisms to synchronize data on the backend to ensure consistency if necessary.
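The database-per-service pattern and its backend synchronization can be sketched as follows. The services, the event list standing in for a message broker, and all names are illustrative; a real system would use a broker such as RabbitMQ or Kafka and accept eventual consistency between the copies.

```python
# Sketch of the database-per-service pattern from the e-commerce example,
# with a hypothetical event queue to synchronize duplicated data between
# services. All class and field names here are illustrative.

events = []  # stands in for a backend message broker

class AccountService:
    def __init__(self):
        self.db = {}  # this service's own database

    def create_account(self, user_id, email):
        self.db[user_id] = {"email": email}
        # Publish the change so other services can update their copies.
        events.append(("account_created", user_id, email))

class NotificationService:
    def __init__(self):
        self.db = {}  # a local copy of only the data this service needs

    def consume(self):
        # Sync by consuming events: each service keeps its own copy of
        # shared data instead of querying a central database, trading
        # some duplication for independence and performance.
        while events:
            kind, user_id, email = events.pop(0)
            if kind == "account_created":
                self.db[user_id] = email

accounts = AccountService()
notifications = NotificationService()
accounts.create_account("u1", "ada@example.com")
notifications.consume()
print(notifications.db["u1"])  # ada@example.com
```

Between `create_account` and `consume` the two stores briefly disagree; that window is the eventual consistency a microservices design accepts in exchange for removing the shared-database bottleneck.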
What to keep in mind if you’re considering microservices architecture
Microservices place new demands on organizations and infrastructure. If you’re thinking about getting started, it’s important to ask yourself the following questions.
Is your organization ready?
A microservices transition is as much about organization and culture as technology. Teams have to be ready to embrace an automation-centric, continuous-delivery (CD) approach to software. Is your company ready to eliminate functional silos and have self-sufficient teams that build and operate services? Can your change management process accommodate a deployment pipeline with no human involvement? The way you answer these questions will help you decide whether your organization is prepared.
Do you have overeager developers?
In the rush to “microservices all the things,” developers may commit to significant coding time on existing applications that aren’t a high priority for change. Low-use applications or ones that don’t serve business-critical functions may be better off in their monolithic states. Microservices increase agility at the cost of some added complexity. Ensure you need the former before signing up for the latter.
Are your services coordinated?
Microservices are loosely coupled to one another and can be extremely dynamic—with new instances starting and stopping in response to increases and decreases in load. How do you find the current URL of a service, or route traffic when you have an elastic number of service instances? How do services exchange data? In many cases, the technology you have in place today to handle service discovery, load balancing, and messaging may be inadequate to handle the dynamics introduced by microservices. Have you deployed Kubernetes, or are you exploring it? Do you have the dedication and budget to invest in change?
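The discovery and routing problem can be sketched with a tiny in-process registry that load-balances across an elastic set of instances. The registry class and the instance URLs are hypothetical; in practice this role is usually played by Kubernetes Services, DNS, or a dedicated registry such as Consul.

```python
# Sketch of client-side service discovery with round-robin load
# balancing. The ServiceRegistry class and URLs are illustrative;
# real deployments typically use Kubernetes Services or DNS.
import itertools

class ServiceRegistry:
    def __init__(self):
        self._instances = {}  # service name -> list of instance URLs
        self._cursors = {}    # service name -> round-robin iterator

    def register(self, name, url):
        # Instances are added as they start...
        self._instances.setdefault(name, []).append(url)
        self._cursors[name] = itertools.cycle(self._instances[name])

    def deregister(self, name, url):
        # ...and removed as they stop, keeping the view current.
        self._instances[name].remove(url)
        self._cursors[name] = itertools.cycle(self._instances[name])

    def resolve(self, name):
        # Callers ask the registry instead of hard-coding a URL.
        return next(self._cursors[name])

registry = ServiceRegistry()
registry.register("inventory", "http://10.0.0.5:8080")
registry.register("inventory", "http://10.0.0.6:8080")

picks = [registry.resolve("inventory") for _ in range(4)]
print(picks)  # alternates between the two instances
```

Because callers resolve a name rather than an address, instances can come and go with load without any caller being reconfigured.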
Do you need a service mesh?
Over time, you may develop many microservices applications, leading to increased complexity. A service mesh is a dedicated infrastructure layer that manages communication between individual services, making monitoring, networking, and security less complex. A service mesh addresses challenges associated with a microservices architecture by intercepting network communications across a containerized application deployed on Kubernetes to manage and help secure microservices as they interact.
Is your Day 2 management up to the rigors of a more dynamic environment?
As the number of apps and services you operate grows, so does the operational risk. Spreading hundreds of microservices across hundreds or thousands of servers will create management headaches without a new approach. Is it difficult to patch or upgrade underlying machines? Can you track dependencies and identify applications at risk? How hard will it be to keep dozens of microservices instances updated with the latest application configuration?
Why are REST APIs so important to microservices?
REST (Representational State Transfer) is an established architectural style for distributed systems. Microservices typically communicate with each other through well-defined REST APIs, using HTTP to send requests and receive responses. Other protocols can be used for microservices communication, but they may be less familiar and accessible than REST. Because callers depend only on the API contract, you can update the code behind a microservice with no impact on the other services that call it.
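The value of a stable REST contract can be shown with a small sketch: the routing table, the `/orders/{id}` path, and both handlers are hypothetical, and the dictionary stands in for a real HTTP round trip.

```python
# Sketch of why a stable REST contract lets a service evolve freely:
# callers depend only on the method, path, and response shape, never
# on the implementation behind them. All names here are illustrative.
import json

routes = {}  # (HTTP method, path template) -> handler function

def route(method, path):
    def register(fn):
        routes[(method, path)] = fn
        return fn
    return register

@route("GET", "/orders/{id}")
def get_order_v1(order_id):
    return {"id": order_id, "status": "shipped"}

def call(method, path, order_id):
    # Stand-in for an HTTP request/response cycle.
    handler = routes[(method, path)]
    return json.dumps(handler(order_id))

before = call("GET", "/orders/{id}", "42")

# The service team rewrites the handler (new storage, new logic).
# The contract is unchanged, so existing callers keep working.
@route("GET", "/orders/{id}")
def get_order_v2(order_id):
    status = {"42": "shipped"}.get(order_id, "unknown")  # new lookup logic
    return {"id": order_id, "status": status}

after = call("GET", "/orders/{id}", "42")
print(before == after)  # True: callers see no difference
```

As long as the method, path, and response shape are preserved, the implementation behind the endpoint can change on the service team's own schedule.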
Which applications should you move to a microservices architecture?
Your company’s software teams are writing more code than ever. Here’s how to tell which apps and systems you should prioritize to move to microservices:
- Varying rates of change: If parts of your system need to evolve at different speeds or in different directions, separate them into microservices to enable each component to have an independent lifecycle. For example, an e-commerce app might split the cart functions from the search or recommendation engine code.
- Independent lifecycles: If a code commit for a module needs to have a completely independent lifecycle, make it a microservice with its own code repository and CI/CD pipeline.
- Independent scalability: When the load or throughput requirements vary for different parts of a system, it’s likely that the scaling requirements do, too. Those parts should be independent microservices so they can scale independently and efficiently.
- Need for failure isolation: When the failure of an app isn’t an option, create a microservice to isolate unreliable dependencies from the rest of the system and provide failover capabilities to protect the availability of that microservice.
- Need to simplify interactions with external dependencies: When you need to protect systems from external dependencies that change frequently, create a microservice. A payment processor is a good example.
- Freedom of choice: Microservices allow different teams to use their preferred technology stacks without creating conflicts.
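The failure-isolation point above is often realized with a circuit breaker: after repeated failures, calls to an unreliable dependency fail fast (or fall back) instead of stalling the whole system. The class below is a minimal sketch, and the flaky payment processor is a hypothetical stand-in for an external dependency.

```python
# Minimal circuit-breaker sketch for failure isolation: after repeated
# failures the breaker "opens" and callers get a fallback immediately
# instead of hammering the failing dependency.
class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, fn, fallback):
        if self.open:
            return fallback()        # fail fast: dependency is isolated
        try:
            result = fn()
            self.failures = 0        # success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True     # trip the breaker
            return fallback()

def flaky_payment_processor():       # hypothetical external dependency
    raise ConnectionError("payment gateway timeout")

breaker = CircuitBreaker(threshold=3)
results = [breaker.call(flaky_payment_processor, lambda: "queued-for-retry")
           for _ in range(5)]
print(results[-1], breaker.open)  # queued-for-retry True
```

A production breaker would also close again after a cool-down period; the point here is only that the unreliable dependency's failures stop propagating to the rest of the system.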
How should you create, deploy, and manage a microservices application?
Although there’s no single answer that satisfies every requirement, consider the following suggestions as a good starting point:
- Deploy your microservices application on an on-premises or cloud Kubernetes cluster and take advantage of Kubernetes microservices orchestration, autoscaling, and resiliency features.
- Connect your microservices with REST APIs—well-defined contracts for requests and responses—so many microservices can work together in one application.
- Eliminate dependencies and create independent, loosely coupled code with bounded contexts for each microservice. Because a microservices-based application is a distributed system that typically runs in more than one place as a process, also choose a communication protocol (e.g., HTTP, AMQP, or TCP).
- Avoid relying on the same data repositories to simplify and decentralize data. This ensures that a single database upgrade or patch doesn't take down every service.
- Separate how code is governed to optimize for speed.
- Secure communication between services, for example by serving REST APIs over TLS (SSL).
- Adopt a continuous integration/continuous delivery pipeline (CI/CD pipeline) and automate infrastructure deployment to scale without human intervention.
- Continuously monitor and fix code used in and to deploy microservices.
3 steps from monoliths to microservices
Teams grappling with legacy portfolios have modernization decisions to make. Many choose to decompose monoliths into microservices. Available tools coupled with best practices, processes and techniques can make the transition easier.
Step 1: Start with monoliths.
We believe monoliths are an appropriate early-stage choice. Problems arise when too many people are working in the same codebase. When developers report that the portfolio is complicated, documentation is limited, and resources are too tightly coupled—or that they aren't learning new skills because they have to maintain legacy apps and infrastructure—it's time to turn to microservices.
Step 2: Find the seams to discover bounded contexts.
The key to identifying the right set of microservices to break down a monolith is to find the “seams” of the application—so you know the bounded contexts. Then you have to extract the bounded context from the application.
Put a fence around the bounded context and figure out what’s making inbound and outbound calls. What are the dependencies? What are the constraints? This gives you an understanding of the coupling and the events attached to it. Next, you put an API fence around the bounded context and start extracting the bounded context into its own application.
Step 3: Modernize your monolith.
An abrupt rip-and-replace upgrade of an existing monolith can be a recipe for disaster. A more stepwise and deliberate modernization approach often works better. An approach known as the strangler pattern allows you to build a new microservices system around the edges of the old system, gradually retiring more and more of the legacy app over time. The strangler pattern reduces project risk by incrementally improving an application, using a series of small, easily digestible steps, boosting your chances of success. Teams also deliver value on a regular cadence, while carefully monitoring progress towards the goal of complete modernization.
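The strangler pattern described above can be sketched as a routing facade: migrated paths go to new microservices, and everything else falls through to the legacy monolith. The handlers, paths, and routing table are all hypothetical; in practice this facade is usually an API gateway or ingress layer.

```python
# Sketch of the strangler pattern: a routing facade sends migrated
# paths to new microservices and everything else to the legacy
# monolith, retiring the old system one slice at a time.
def monolith(path):
    return f"monolith handled {path}"

def new_cart_service(path):
    return f"cart microservice handled {path}"

# Paths migrated so far; this table grows as each slice is extracted.
migrated = {"/cart": new_cart_service}

def facade(path):
    for prefix, service in migrated.items():
        if path.startswith(prefix):
            return service(path)
    return monolith(path)  # not yet migrated: fall through to legacy

print(facade("/cart/add"))    # cart microservice handled /cart/add
print(facade("/search?q=x"))  # monolith handled /search?q=x
```

Each new entry in the routing table is one small, reversible step; traffic shifts to the new services gradually, and the monolith shrinks until it can be retired.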
Questions to ask before you make the move:
- Is your organization ready to eliminate silos and have self-sufficient teams that build and operate services?
- Can your change-management process tolerate a deployment pipeline with no human involvement?
- Do you have over-eager developers that will try microservices development for every application?
- Do the key applications you want to transition serve business-critical functions?
- Are your services coordinated?
- Do you have the team dedication and budget to invest in microservices?
- Is it difficult to patch or upgrade underlying machines?
- Do you know how to deploy Kubernetes? Will it be hard to keep dozens of microservices up to date with the latest application configuration?
Achieve your microservices vision with VMware
At VMware, we help you design a high-performing microservices architecture, and then provide a world-class environment to run your microservices.
Team with VMware Tanzu Labs to initially target applications that require feature iterations and extreme scalability, and then learn how to build teams focused on delivery.
Deploy and manage microservices with VMware Tanzu Application Service—our multi-cloud product for rapidly delivering apps, containers, and functions. Quickly build loosely coupled, secure, resilient applications that sit behind a high-performing routing tier and use a robust logging and monitoring subsystem with VMware Tanzu Application Service. Deliver all of this to production continuously using integrated deployment components.
Empower your developers with patterns from Spring Cloud Services that overcome key challenges and operational overheads when building distributed systems with microservices.