Hi, Spring fans! In this installment, Josh Long (@starbuxman) talks to Spring observability guru Jonatan Ivanov (@jonatan_ivanov)
At a high level, RabbitMQ is an open source message broker. A message broker accepts messages from a producer (the message sender) and holds them in a queue so that a consumer (the message receiver) can retrieve them. This allows multiple producers and consumers to share the same queue without having to pass messages directly between each other. What RabbitMQ excels at is doing this at scale whilst staying lightweight and easy to deploy.
To get started with a basic RabbitMQ implementation, check out this guide.
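If you happen to be working in Spring, a minimal sketch of that producer/queue/consumer round trip might look something like the following. This is only an illustration, not the guide's code: it assumes the spring-boot-starter-amqp dependency and a broker reachable with default settings, and the queue name and class names are invented.

import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Configuration
class MessagingConfig {

    // Declare the queue on the broker at startup (the name is illustrative)
    @Bean
    Queue ordersQueue() {
        return new Queue("orders");
    }
}

@Component
class OrderProducer {

    private final RabbitTemplate rabbitTemplate;

    OrderProducer(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Publish through the default exchange; the routing key is the queue name
    void send(String payload) {
        rabbitTemplate.convertAndSend("orders", payload);
    }
}

@Component
class OrderConsumer {

    // Invoked whenever a message arrives on the "orders" queue
    @RabbitListener(queues = "orders")
    void receive(String payload) {
        System.out.println("received: " + payload);
    }
}

The producer never needs to know which consumers exist, which is exactly the decoupling described above.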
The first question you may have is “why do I want to add additional complexity?”
If you were not using a message broker, you would most likely be using an HTTP or socket-based form of communication. These methods can be difficult to scale and are not always robust. HTTP communication can also tightly couple two systems together - increasing inter-dependency, which is undesirable.
The addition of a message broker improves the fault tolerance and resiliency of the systems in which it is employed.
Message brokers are also easy to scale: their publish/subscribe pattern means many more services can be added without having to modify existing systems.
RabbitMQ has four types of exchanges (or message routers) available to route messages in different ways. You will focus …
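As one hedged illustration of what wiring an exchange up looks like in code - here a direct exchange declared with Spring AMQP, with invented exchange, queue, and routing-key names - the declarations might look roughly like this:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class RoutingConfig {

    // A direct exchange delivers a message to the queues whose binding key
    // exactly matches the message's routing key
    @Bean
    DirectExchange ordersExchange() {
        return new DirectExchange("orders.exchange");
    }

    @Bean
    Queue invoiceQueue() {
        return new Queue("invoices");
    }

    // Bind the queue to the exchange under the "order.created" routing key
    @Bean
    Binding invoiceBinding(Queue invoiceQueue, DirectExchange ordersExchange) {
        return BindingBuilder.bind(invoiceQueue).to(ordersExchange).with("order.created");
    }
}

Fanout, topic, and headers exchanges are declared the same way; only the matching rules differ.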
Aloha, Spring fans! Welcome to another installment of This Week in Spring!
I’m still on vacation on the beautiful island of Maui, Hawaii, but I wanted to say hello (“aloha!”) and share this week’s latest roundup of all that’s good and glorious in the wide and wonderful world of Springdom.
Funny thing, today - the 2nd of August, 2022 - is also my 12-year anniversary on the Spring team. It continues to be a helluva ride, and I so look forward to all that lies ahead. Thank you, Spring team, for everything. Also, sort of coincidentally, I just got a nice promotion. (Thanks, Spring team and VMware). I’m also just about to hit 60,000 followers on Twitter. It’s been a weird, wonderful week, and not just because of all that, but because we’ve got a ton of great stuff to dive into this week (besides the ocean, which I’ll return to promptly after finishing this roundup)!
Hi, Spring fans! In this installment Josh Long (@starbuxman) talks to a very busy rabbit-herder on the RabbitMQ team, Dan Carwin (@dcarwin)
Aloha, Spring fans! I’m on vacation, reporting to you from the paradise-like island of Maui, Hawaii, and hoping that you’re having a wonderful day! My family and I love Hawaii. It’s brimming with beauty and serenity, and while the island of Maui, in the state of Hawaii, is very small, the islands are humbling. They make you feel so very small. It’s surreal to sit there on the beach as the sun creeps down beyond the horizon and to realize there’s nothing but pitch black darkness and water for as far as you can see, starting just a few meters away. It’s endless. It has no end. Like the bugs in code. Endless. And humbling.
I’ve spent so much time on the beach with my partner and our daughter that it actually feels kind of weird just to sit here at the keyboard and write out this blog! But I’m happy to do it. It’s gratifying to learn new things. And so, with that, let’s dive into this week’s installment:
Hi, Spring fans! In this episode, Josh Long (@starbuxman) talks to a person who knows more than most about the awesome implications of both the words “Spring” and “Cloud,” Spring Cloud Kubernetes lead Ryan Baxter (@ryanjbaxter).
Hi, Spring fans! In this installment, Josh Long (@starbuxman) looks at some of the amazing opportunities for building Spring Boot applications intended for production in Kubernetes in mid 2022.
The code, as usual, is available on the spring-tips GitHub organization.
Hi, Spring fans! Welcome to another installment of This Week in Spring! This week I’m trying to wind down some threads and take some vacation with my family. It’s going to be an amazing time, indeed! But that doesn’t stop the deluge of novelties and news in the wide world of Springdom, so we’ve got a lot to cover this week. Let’s get to it!
Before I go, though, I have a new “Spring Tips” episode dropping Wednesday morning, at midnight, so be on the lookout :)
Hi, Spring fans! In this installment, Josh Long (@starbuxman) talks to his friend, teammate, and architect extraordinaire, Nate Schutta (@ntschutta)
Hi, Spring fans! Welcome to another installment of This Week in Spring! How are you? This week I’m writing you from sunny Seattle, Washington, where we’re having our next installment of the SpringOne Tour series. It’s been a ton of fun seeing all these fun and friendly faces again and getting to see people, many of whom I haven’t seen since before the pandemic! I’ve also had a lot of fun seeing some friends from some of the big cloud companies here, Microsoft and AWS. It’s always interesting to learn how people are using the latest and greatest from Spring to build amazing systems and software targeting these cloud platforms.
We’ve got a lot to cover this week so let’s dive right into it!
There’s a new post on the expand operator in Project Reactor: Pagination in a Reactive Application. It looks at how to elegantly expand the contents of a reactive stream as new data (in a data pagination scenario) …
Hi, Spring fans! In this installment Josh Long (@starbuxman) talks to fellow teammate and Kubernetes ecosystem legend Leigh Capili (@capileigh) about GitOps, Kubernetes, Puppet/Chef, continuous delivery, how Zoom scales if you deploy on-prem, being a developer advocate, Flux, and so much more.
Hi, Spring fans! Welcome to another installment of This Week in Spring! This week’s all sorts of weird for me. It’s Tuesday! But here in the US we just celebrated the 4th of July, and I, like many Americans, took a long weekend. Took some time with the family to do a little road trip up north to visit Mt. Shasta, Crater Lake, Lassen National Park, etc. It was a ton of fun, and a lot of driving! Anyway, it all kinda blurs together and felt like just one weekend, and today feels like Monday. I only just realized it was Tuesday! And you know what that means? It’s time for our weekly dive into the wild and wonderful world of Springdom!
Hi, Spring fans! In this installment, Josh Long (@starbuxman) talks to fellow Spring Developer Advocate Dan Vega (@therealdanvega)
Hi, Spring fans! This year, SpringOne is back in person, and is being held in my hometown of San Francisco, California, December 6th-8th. (Have you registered?) And today (June 28th, 2022) is the last day to submit to the Call For Papers! If you have a good idea or story you want to share, submit today!
Either way, I hope to see you in December in San Francisco, a city so famous for its foggy nightscapes that I think it’s fair to say we are the original cloud natives!
Hi, Spring fans! Welcome to another installment of This Week in Spring! I’m writing this from the Big Apple, New York City! I’m here for the SpringOne Tour 2022 NYC event. This is my first time back in New York City since before the pandemic, and it has been so much fun. I’ve been catching up with people I’ve not seen in years. I even accidentally bumped into people I had no idea were going to be in town at the same time as I was. New York City is like a magnet for fun, and for fun people. Anyway, we’ve got a lot of stuff to get to in this roundup, so let’s dive right into it.
Also: if you’re just reading this, today - the 28th of June - is the last day to submit to SpringOne 2022, being held in my hometown of San Francisco, California, in December of this year!
Hi, Spring fans! In these installments, we continue our series introducing the Spring for GraphQL project. This series features Spring for GraphQL lead Rossen Stoyanchev (@rstoya05) - whose work you may know from basically everything in the wide and wonderful world of Springdom having to do with the web (HTTP, RSocket, WebSockets, GraphQL, JSF, MVC, etc.) - and GraphQL Java engine founder and lead Andi Marek (@andimarek), and of course yours truly, Spring Developer Advocate Josh Long (@starbuxman). It provides an in-depth look at all things Spring for GraphQL.
This week I’m publishing two new installments.
The first episode this week is part seven of eight, focusing on how to secure a Spring for GraphQL application with Spring Security.
The last episode this week, part eight of eight and the final episode of the series introducing the new and novel Spring for GraphQL project, looks at how to integrate Spring for GraphQL and Spring Data.
This continues the series we started last week, with episodes one and two, which I recap here:
In this first installment, we look at the basics of using the GraphQL Java engine that underpins Spring for GraphQL.
In this second installment, we look at using the Spring for GraphQL component model by writing queries.
Episode three of the series looks at batching requests with Spring for GraphQL’s @BatchMapping support. This …
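To make those episode descriptions a little more concrete, here is a rough sketch of the controller-based component model and @BatchMapping together. The Customer and Account types, the schema fields, and the lookups are invented for illustration; they are not taken from the episodes.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.springframework.graphql.data.method.annotation.Argument;
import org.springframework.graphql.data.method.annotation.BatchMapping;
import org.springframework.graphql.data.method.annotation.QueryMapping;
import org.springframework.stereotype.Controller;

record Customer(Integer id, String name) {}
record Account(Integer customerId, String number) {}

@Controller
class CustomerGraphqlController {

    // Backs a hypothetical "customerById" field on the Query type
    @QueryMapping
    Customer customerById(@Argument Integer id) {
        return new Customer(id, "customer-" + id);
    }

    // Resolves the "account" field for a whole batch of Customers in one call,
    // avoiding the N+1 lookups that per-object field resolvers would cause
    @BatchMapping
    Map<Customer, Account> account(List<Customer> customers) {
        return customers.stream()
            .collect(Collectors.toMap(c -> c, c -> new Account(c.id(), "acct-" + c.id())));
    }
}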
Hi, Spring fans! In this installment, I (@starbuxman) talk to my old friend, world-famous polyglot and code curmudgeon, software philosopher, industry veteran, and legend of ecosystems aplenty, Ted Neward (@tedneward)
We’re pleased to announce that the Tanzu Toolkit for Visual Studio is now generally available. Tanzu Toolkit for Visual Studio is an extension for Visual Studio 2019 and 2022 that enables users of Tanzu Application Service (“TAS”) or other Cloud Foundry distributions to manage applications directly from within Visual Studio IDE.
Tanzu Application Service continues to be an excellent place to run cloud native applications, particularly those that are written in .NET.
While some features provided by this extension are already available in Tanzu Apps Manager, bringing them into Visual Studio reduces the impact of context switching and makes it easier to navigate directly to the correct application instance. Other features of the extension also simplify otherwise complicated tasks.
Cloud native application developers need to be able to accomplish a few things in order to be productive:
This extension provides those capabilities within an IDE used by many …
Hi, Spring fans! Welcome to another installment of This Week in Spring! How are you? It’s been a hot minute since we last chatted. I was in Germany this time last week. Now, I’m back in beautiful San Francisco. Today the weather will climb to a monumental 84 F! That’s very unusual, for any time of the year, here in San Francisco. Most places here in San Francisco don’t have air conditioning. Some have heating. I bought a brand new condo in 2014 and it didn’t have air conditioning. You just open the window. I am privileged enough that I have air conditioning today, of course. I mention all this to say that it’s hot here! I worry for the elderly! When it gets this hot, the YMCA and other organizations typically invite elderly people to come in and get some cool air and water. It’s dangerous. Some days it gets even hotter. Very rare, but it does happen. I hope you’re all doing well. Take care of yourselves and each other, my friends.
And, speaking of being hot, let’s look at this week’s roundup of the latest-and-greatest that’s hot off the press!
Hi, Spring fans! In this installment, Josh Long (@starbuxman) talks to Spring Framework contributor Sébastien Deleuze (@sdeleuze) on GraalVM, AOT, Project Leyden, and WebAssembly.
Hi, Spring fans! In these installments, we continue our series introducing the Spring for GraphQL project. This series features Spring for GraphQL lead Rossen Stoyanchev (@rstoya05) - whose work you may know from basically everything in the wide and wonderful world of Springdom having to do with the web (HTTP, RSocket, WebSockets, GraphQL, JSF, MVC, etc.) - and GraphQL Java engine founder and lead Andi Marek (@andimarek), and of course yours truly, Spring Developer Advocate Josh Long (@starbuxman). It provides an in-depth look at all things Spring for GraphQL.
This week I’m publishing two new installments.
The first of this week’s installments, part five of the series, looks at using GraphQL subscriptions to stream data in a way that is agnostic of the supported transport protocols: SSE, WebSockets, and RSocket. In this episode, we look at the RSocket support in particular.
The second of this week’s installments, part six of the series, looks at using the Spring for GraphQL clients to talk to HTTP, WebSocket, and RSocket-powered GraphQL services.
This continues the series we started last week, with episodes one and two, which I recap here:
In this first installment, we look at the basics of using the GraphQL Java engine that underpins Spring for GraphQL.
In this second installment, we look at using the Spring for GraphQL component model by writing queries. …
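As a hedged sketch of the client side mentioned in part six above - the endpoint URL, the query document, and the Customer type are invented for the example - an HTTP-based GraphQL client might look like this; WebSocket- and RSocket-based clients follow the same retrieve/toEntity pattern.

import org.springframework.graphql.client.HttpGraphQlClient;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

class CustomerClient {

    record Customer(Integer id, String name) {}

    Mono<Customer> fetchCustomer(Integer id) {
        // Point the client at the GraphQL endpoint of the target service
        WebClient webClient = WebClient.create("http://localhost:8080/graphql");
        HttpGraphQlClient graphQlClient = HttpGraphQlClient.create(webClient);

        // Send a query document and map the selected field onto a Java type
        return graphQlClient
            .document("{ customerById(id: %d) { id name } }".formatted(id))
            .retrieve("customerById")
            .toEntity(Customer.class);
    }
}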
When the community first saw .NET 6, there was a little bit of an uproar - myself included - about how the way we structured web applications had changed.
In earlier versions of .NET Core, we had grown familiar with the symbiotic relationship between the Startup and Program classes. We even engineered ways to add a Startup class to Azure Functions and console applications.
The Startup class was the place to register all of the application’s dependencies, set up the middleware, and of course, configure the configuration.
Yet, in .NET 6, all of that changed with the launch of top-level statements. By making the program’s entry point a static method, the new Program class could relinquish its hold on ceremony, including all of the setup we used to do in the Startup class—so no more Startup class!
The Program class now looks like this:
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
I haven’t omitted any code—this is the …
Tanzu Observability enables you to monitor your Kubernetes infrastructure metrics (e.g., containers, pods, etc.) from Wavefront dashboards, as well as create alerts from those dashboards. You can also automatically collect metrics from applications and workloads using built-in plug-ins such as Prometheus, Telegraf, etc.
Tanzu Observability recognizes Tanzu Kubernetes Grid (TKG) clusters just like any other Kubernetes cluster. For more information, read this documentation.
Within the product, installing the Kubernetes integration is as simple as deploying a Helm chart (seen below). The Helm chart is customized for different types of Kubernetes clusters.
If you do not have a Tanzu Observability license, you can start a free trial here.
Once the integration is flowing metrics, usually within just a few minutes, you can use the dashboards provided with the integration to start observing your clusters.
Tanzu Observability has something for everyone. If you’re a Spring developer, instrumenting your application is a breeze. If you’re a Kubernetes operator, Tanzu Observability has you covered as well!
It’s that simple to instrument your TKG clusters with VMware Tanzu Observability by Wavefront. Happy observing!
VMware Tanzu Application Platform is a modular, application-aware platform that provides a rich set of developer tooling and a prepaved path to production to build and deploy software quickly in a consistent, scalable, and secure way.
Tanzu Application Platform automates the process of taking the application code and deploying it to production with the help of supply chain choreography.
Tanzu Application Platform installation comes with three versions of the Out-of-the-Box (OOTB) Supply Chain to promote code to production:
In this post, we’ll take a closer look at the OOTB Testing and Scanning Supply Chain. The Out-of-the-Box Testing and Scanning Supply Chain contains all of the same elements as the Out-of-the-Box Testing Supply Chain, but it also includes integrations with the secure scanning components of Tanzu Application Platform.
A few different custom resource definitions (CRDs) make up the supply chain choreography. Please note that the below list only covers the objects that are part of the OOTB Supply Chains.
ClusterSupplyChain – A graph of interconnected Kubernetes resources with the shared purpose of producing deployable Kubernetes configuration
Delivery – A graph of interconnected Kubernetes resources with the shared purpose of deploying Kubernetes configuration
Runnable – Cartographer’s …
Originally published on the Bitnami blog
A new release of Kubeapps is out, and it introduces major changes that mark a milestone in the history of this tool. We are thrilled to announce that the support of different Kubernetes packages has now become a reality with the implementation of the Kubeapps API service. Helm charts are no longer the only option to choose from, as now Kubeapps users can deploy Carvel and Flux packages as well!
In addition to this new capability, the Kubeapps team has solved a long-standing security issue by removing the reverse proxy to the Kubernetes API server.
Keep reading to learn more about how the team has implemented a pluggable architecture that allows users to discover new implementations that make Kubeapps a robust and secure way to deploy applications on Kubernetes infrastructure.
We will also cover how to deploy and manage Carvel and Flux packages from the Kubeapps user interface (UI).
The design of the Kubeapps APIs server addresses two main goals for Kubeapps.
Enables pluggable support for presenting catalogs of different Kubernetes packaging formats for installation.
Prior to this release, Kubeapps was coupled to Helm and tied closely to the Helm packaging API for listing, installing, and updating Helm charts. The team has abstracted the functionality for listing, …
The whirlwind that was .NET Beyond 2022 just wrapped up. If you had a chance to attend, we hope you learned a lot, and had some fun in the process. If you couldn’t attend, check out some talk summaries below. If you want to know more, check out all the talks on YouTube.
Python and Java developers who have tried using F# in a .NET ecosystem are in awe of how succinct their code becomes.
“It’s not like Python,” said Philip Carter, a senior product manager at HoneyComb, “It has a heritage based on functional programming, not out of smorgasbord programming that Python does.”
Carter, who spent six years working at Microsoft and five years working with F# for Visual Studio Code, said that F# got its start as a basic Microsoft research project: “It’s really hard for a research project to become a real product,” said Carter.
According to Carter, F# provides excellent Visual Studio Code integration because it lets you use first class software development kits (SDKs) such as Badger, AWS, or any other service that has a standard .NET or small library. He also likes Immutable first, a feature that forces developers to structure their programming so that everything flows cleanly, from top to bottom. He said you can declare something as mutable by just turning off the Immutable first feature.
Immutable first …
In this previous blog, you learnt how to implement an API gateway in .NET 6 with YARP, a reverse proxy library. All external traffic can be routed through the API gateway to the backend services, making securing and managing the application far easier.
However, if you have a scalable, distributed system, your API gateway may not know where all of the instances of your services actually are. That’s where a service registry, such as Netflix Eureka can save the day.
Eureka is a RESTful service that is primarily used in the AWS cloud for the purpose of discovery, load balancing, and failover of middle-tier servers. It plays a critical role in Netflix’s mid-tier infrastructure. It’s built with Java, but for your purposes, you can run Eureka easily from a Docker container.
You’ll register all your services and instances with Eureka and then Eureka will tell your API gateway where everything is, so that it can direct traffic to the right place.
Before you begin you will need:
If you followed along with the Build an API gateway with .NET 6, C# and YARP blog …
Consider an API gateway to be a virtual “garden gate” to all your backend services. Implementing one means that all external traffic must pass through the gateway. This is great as it increases security and simplifies a lot of processes such as rate limiting and throttling.
There are many paid-for services that offer API management, but they can be costly and you may not need all the features they offer.
In this tutorial, you will build a basic API Gateway using YARP or “Yet Another Reverse Proxy”. YARP is an open-source library built by developers from within Microsoft. It’s highly customisable, but you are just going to use a simple implementation today.
Before you begin you will need:
Below is a simple diagram consisting of 3 services and a database - note that our demo looks slightly different to this. The front-end client app is talking directly to all three services. This means that each service will need to manage its own security, and it makes implementing patterns such as service discovery much harder.
Once you add in an API gateway, as you can see in the diagram below, all external traffic …
On December 9, a vulnerability in one of the most popular Java libraries was revealed. Log4j (version 2) was affected by a zero-day exploit that resulted in Remote Code Execution (RCE), allowing attackers to run arbitrary code in vulnerable environments. At this stage, everyone has heard about CVE-2021-44228, also known as Log4Shell.
Log4j is a library prevalent in Java ecosystems used by millions of applications everywhere, so the impact of this CVE has been massive. Proof of its impact is the high CVSS score given to this CVE: 10 out of 10.
Also, products from major cloud vendors, such as AWS, Intel, Cisco, Red Hat, and even VMware, have been widely affected by the vulnerability. The impact on businesses was enormous: most engineering and operations teams stopped their daily activities and product development in order to prioritize applying patches coming from upstream projects and patching their own in-house software to address this critical vulnerability. Considering the potential effects and risks that this vulnerability can have on applications and sites built using this library, the team behind Log4j immediately started to work on a fix.
In this blog post, we go over the responses and mitigations that have been released, how some of them didn’t solve the problem, and which patches are available to keep your installations secure against this …
If you’re a C# developer, you’ll recognise the oftentimes extensive list of using directives at the top of a .cs or .razor file. You’ll have also most likely considered a way to obfuscate them - maybe in a #REGION, maybe a setting in an IDE. Many are duplicated across multiple files; I’m looking at you, System, and all your little derivatives!
Although there is nothing inherently wrong with this - it’s been a convention since 2002 - it just takes up a lot of screen space at the top of every file.
With C# and .NET 6, however, all that can change…
Before you can start making use of global using directives, you will need:
Many of the most common using directives will already be in a global format out of the box, known as “implicit using directives”, but I find this obfuscation a little confusing.
Implicit using directives will mean the compiler automatically adds a set of using directives based on the project type, meaning the most common libraries will be available out-of-the-box. For example, in a console application, the following directives are implicitly included in the application:
using System;
using System.IO;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using …
Most of us do application development work on our local machine. We also come to understand that HTTPS (TLS/SSL) is the new standard for all our web applications. But we often skip using it on our local machine because either:
The reason why you should care about it for local development is:
Every difference between your local development and production adds to the risk that your code won’t run in production.
This post will help you understand the process and set you up to grok what you are doing when you run the actual commands to make this work on your local machine. This post is NOT going to talk about certificates in production or in a public key infrastructure (PKI). While many of the concepts are similar, the security implications are much more serious once you move past just your local machine.
We will start by talking about TLS/SSL for web applications. You may also know that you can use TLS/SSL for connecting database (DB) client software to the DB server. Finally, certificates are heavily used in Kubernetes, even if you are running it on your local machine. As an application developer, it is getting harder and harder to ignore the use of TLS/SSL in your daily development work. And to make …
First published on https://blog.bitnami.com/2021/11/deploy-applications-with-confidence-vmware-application.html.
As more organizations adopt Kubernetes as the preferred infrastructure for running their IT resources, enterprise SRE teams tend to adopt a GitOps mindset.
The GitOps approach consists of embracing different practices that manage infrastructure configuration as code. This means that Git becomes the single source of truth and, as such, all operations are tracked via commits and pull requests. Thus, every action performed on the infrastructure will leave a trace and can be reverted, if needed.
These practices bring a lot of benefits to IT admins, since automation and ease of managing Kubernetes configurations are extremely important to them.
Despite this, there’s a high probability of discovering security risks when managing the access to the applications running in a Kubernetes cluster. This is where Sealed Secrets comes in. Sealed Secrets is a Kubernetes controller and a tool for one-way encrypted secrets.
When cluster operators and administrators follow the GitOps approach, they find that they can manage all Kubernetes configurations through Git except for secrets. Sealed Secrets solves this problem by encrypting the secret into a new Kubernetes object called “SealedSecret” …
Modern application teams that release frequently to production find that dynamic service routing is a crucial capability. Deployment strategies like blue-green and canary are dependent upon routing. These strategies involve multiple concurrent “versions” of a service to be deployed and routing rules to determine how traffic is sent to each version. Ideally the routing rules are exposed as an API and can be managed via an application operator, or even better, with automation.
Taking this to the next level of detail, let us look at a number of use cases:
These are realistic and common requirements, and the solution domain spans several aspects of a distributed and complex system of cloud technologies. …
Well, of course you do! It’s the reason you enrolled in online training, became part of the Kubernetes Community and read case studies on all things Kubernetes. You put a lot of time and effort into learning how to build a Kubernetes Operator to manage software and reduce operational toil for your company.
Get ready for all your hard work to pay off. You are about to build a Kubernetes Operator. Your first order of business is to assemble a team of Kubernetes enthusiasts. There are some things that the team is collectively going to need to discuss, including the foundational feature set for building the operator, and the type of operations that the operator is going to control like upgrades, backups, restores, and failovers. It’s also a good idea to collaborate over design considerations so that the team is effectively working with, and not fighting against, Kubernetes patterns.
Assemble a development team that collectively knows how to use:
Keep in mind that the operator you are building is an extension to the Kubernetes control plane. Understanding how that control plane works so that you can seamlessly extend it is crucial. Having deep knowledge and experience in both …
The Kubernetes tools landscape keeps growing, with more and more companies and projects building specific tools to tackle specific challenges. Making sense of all these tools and how they can be used to build a SaaS platform on top of Kubernetes is a full-time job.
These platforms are commonly built to provision a set of domain-specific components that provide the services (a set of features) that the platform is offering. You might end up having internal “customers,” such as different departments or teams that require new instances of these components, or external customers that are willing to pay for a managed instance of these components. Whether they are internal or external, application platforms should provide a self-service approach, where each customer can access a portal and easily request new instances (that could be via GUI, CLI, and/or whatever is most natural for that user).
If you are building this kind of platform, and you are also fully invested in Kubernetes, you might want to look for tools that are built on top of the Kubernetes APIs. This way, the solution you build can be run on top of any Kubernetes installation, managed or on-premises.
This blog post covers three different angles that you will need to cover if you are tasked to build one of these SaaS platforms.
Are you thinking about building an application platform with Kubernetes? If so, this article is for you. It discusses the major platform elements that you should consider. To some degree, every use case is different. You likely have your own edge cases and unique requirements, but by the time you solve the necessary items in this article, you will be familiar enough with the topic to get the job done.
The first thing to recognize is that Kubernetes is a container orchestrator. Scheduling and running containerized workloads across clusters of machines is a complex concern. Kubernetes uses sophisticated systems to achieve these ends, but its purpose is fairly narrow. Kubernetes provides interfaces for container networking, persistent storage, and container runtimes, but it does not solve these concerns directly, or provide enterprise-grade authentication for its API. Instead, Kubernetes allows you to configure a webhook to implement this functionality. It does not provide comprehensive tenancy, observability, service routing, or policy control systems. Platform services must be installed to provide these.
Therein lies the first pattern to familiarize yourself with when using Kubernetes: it is a supremely extensible and composable system. Kubernetes provides the foundation upon which to build an application platform that meets your organization’s specific needs. It does incur …
Kubernetes is a wonderful piece of software and provides developers capabilities they have not had readily available to them in most organizations. This is a game changer for what developer productivity can look like in the future. I am excited about the commitment of VMware to make Kubernetes accessible to the masses by simplifying not only the operational use of the platform, but also creating and collaborating on tools targeted specifically for developers and their applications.
You see, Kubernetes can and should be more than just a deployment platform for our application code. We should be able to utilize the power and features of Kubernetes for our daily development processes. Think about what you could accomplish if you had the power and flexibility of Kubernetes for your local development before you perform a git push. Think about all of the things you could play with and prototype before kicking off your CI/CD pipelines for a full integration test.
But what good does it do to have a great developer experience on top of Kubernetes if developers don’t have access to Kubernetes to begin with? This is problem No. 1 that we need to address. When VMware wanted to create an open source Kubernetes offering, I knew I had to be part of it. VMware Tanzu Community Edition is the open source project that will bring Kubernetes to a developer workstation near you. Chances are your IT …
You may have noticed a few changes since the last time you visited the Tanzu Developer Center. The team has been hard at work making sure our content is easier to find, and more oriented toward reaching specific objectives. Today, we have launched that redesign!
We started the Tanzu Developer Center with the goal of creating a space where developers could learn about best practices for developing, deploying, and managing applications — applications that are built to take advantage of current platform technologies and frameworks.
Since its launch in June 2020 (see our launch announcement blog), the Tanzu Developer Center has grown to host hundreds of guides and blog posts. Dedicated guides walk readers through how to use specific technologies. Timely blog posts provide thought leadership and new product announcements.
Since then, along with a lot of additional content, we have continued to add functionality to the site to better serve our visitors. In our 1st birthday announcement in June 2021, we discussed changes, such as the addition of Workshops, which gave developers a hands-on way to use technologies with easy-to-follow guides, as well as fully functional environments to experiment in. Outcomes brought additional structure to guides by creating content series oriented toward learning more complex concepts.
But with the addition of so much new content and new features, and …
A new Kubeapps release is out, and it is even easier to run in TKG clusters! The previous version of Kubeapps required manually updating the current Pinniped version to the latest; this step is no longer required. Cluster administrators can now configure Kubeapps to use the built-in Pinniped instance to authenticate through the same OIDC provider as they have already installed in their VMware Tanzu™ Kubernetes Grid (TKG) clusters.
Keep reading to learn more about how to benefit from installing the Kubeapps 2.3.4 version.
Kubeapps enables users to consume and manage open-source trusted and validated solutions through an intuitive web-based interface.
With the previous release, Tanzu users gained the ability to deploy Kubeapps directly to TKG workload clusters. This integration allows users to operate Kubernetes deployments through a web-based dashboard, both on-premises in vSphere and in the public cloud on Amazon EC2 or Microsoft Azure.
Kubeapps provides a wide catalog of solutions that are ready to run on Kubernetes. In addition to the default Kubeapps catalog, Tanzu users have the flexibility to configure either VMware Tanzu™ Application Catalog (TAC) as a private chart repository or any of the VMware Marketplace™ Catalog or the Bitnami Application Catalog as public chart repositories. This extends the number of available solutions and sources for …
SpringOne is the leading conference for the most popular and beloved Java framework, Spring. This year’s conference, held September 1–2, was packed with useful information and exciting announcements. In addition to the keynotes and breakout sessions, the self-paced labs had in-depth training on the latest breakthroughs in Spring and related technologies. The labs are made for all skill levels and can be completed even with little Spring knowledge, all within a real environment. Best of all, they are free, and now they will be hosted at the VMware Tanzu Developer Center!
Learning with the labs in the Tanzu Developer Center is a shortcut to leveling up your coding skills and trying new technologies. The labs are responsive and provide a real environment designed to reduce student errors. Your environment exposes containers and real Kubernetes clusters and includes the required packages and tools, while avoiding the hassle of setting up a new environment. The labs include an integrated code editor, terminals, and easy-to-follow instructions. This is a big time saver: there is no software to delete or uninstall and no environment to tear down afterwards. The SpringOne labs give you the opportunity to test new tech without having …
This post is for getting started with Beta 1. On October 5th 2021, VMware Tanzu Application Platform Beta 2 was released. And since then, other Betas have been released. Get more information on Tanzu Application Platform here.
By now you may have seen the announcement at the recent SpringOne conference for VMware’s new Tanzu Application Platform. You understand the power a platform like this can bring to your production environments, but have you considered what it can do for your inner loop development? Not every commit goes to production. That’s why you need a way to deploy and test your changes locally before going to production.
You also need a way to locally evaluate and use the Tanzu Application Platform. You want to understand how it works, and what it can do for your organization before a potential deployment. If this sounds like you, I have a 2-part series on installing and using Tanzu Application Platform Beta 1 locally, using KIND.
Part 1 shows you how to install all the necessary components of the Tanzu Application Platform onto a KIND Kubernetes cluster.
Part 2 shows you how to access and utilize the Tanzu Application Platform to deploy a sample application.
Both of these guides will heavily leverage the existing install documentation for Tanzu Application Platform, although heavily modified for this specific use case (i.e. deploying in KIND). …
“VMware is pleased to join the Docker Verified Publisher’s program. This provides developers unrestricted access to our artifacts and allows them to safely adopt the popular open-source technologies we’ve made available. We are excited that VMware Tanzu customers, in particular, will benefit from a wider range of complementary services they can leverage as they quickly get apps to market.” - Ashok Aletty, VP Engineering, VMware
In May 2021, Docker, Inc™ announced the launch of its Docker Verified Publisher Program which helps developers recognize trusted publisher software. For development teams, this is huge, since this program simplifies the consumption of secure and verified components for them, as they build their applications.
When building container-based applications or deployment templates such as Helm charts, it is a frequent practice to grab pre-built building blocks to quickly create application images. A common concern among developers is to make sure that the pieces being used to build their applications are secure, reliable, maintained and up to date. Nobody wants to spend time fixing security issues or exposing their software supply chain to malicious content.
To make it easier to select robust, trusted, and reliable software when navigating through Docker Hub, Docker has launched the Docker Verified …
Kubernetes is the leading trendsetter in the future of autonomous software, having made it possible for companies throughout the world to experience a tremendous reduction in human toil when it comes to all types of software management and deployment.
Kubernetes has a reputation for being a complex software system with high startup costs and an intense learning curve, yet it remains steadfastly popular among companies that made the initial investment and immediately started reaping the benefits of improved efficiency and effectiveness in delivering automated, on-demand software that accelerates time-to-value.
Many of the companies that made the lucrative decision to choose Kubernetes as a distributed software system to manage their applications quickly recognized the value of expanding the power of their Kubernetes ecosystem through Kubernetes operators that reduce operational toil in platform services and tenant workloads.
You can leverage Kubernetes operators to accomplish all types of automated tasks, including software deployments, management, troubleshooting and updates through custom resources to define the state of the system, and custom controllers to reconcile the existing state of the system with the desired state of the system defined in the custom resource.
There’s also an impressive assortment of Kubernetes …
Container images enable you to bundle an application with all of its dependencies—soup to nuts, all the way down to the OS file system. Effectively, you are packaging your app and its environment into a single, immutable, and runnable artifact. You can then drop that image onto any container runtime and you’re (nearly) off to the races.
The benefits of taking this approach over deploying an application-only artifact onto a custom and curated environment are well established: greater predictability, repeatability, portability, and scalability, to name just a few. So, what’s the catch? The responsibility of providing the runtime and OS shifts from the ops or IT team that formerly created and maintained the target environment to the dev or DevOps team that is now packaging the application as an image. With this transition, organizations large and small must reimagine how they ensure consistency, security, transparency, and upkeep of these modernized deployable artifacts.
How you build your images is a key part of the answer. Let’s compare two approaches—Dockerfile and Cloud Native Buildpacks—to see how they measure up when it comes to meeting, or exacerbating, these challenges.
Dockerfile is the oldest and most common approach for building images. A Dockerfile is a script where each command begins with a keyword called a Dockerfile instruction. Each instruction …
Special thanks to Antonio Gamez and Michael Nelson, members of the VMware Kubeapps Team
The latest version of Kubeapps (v2.3.2) is now available for deployment on VMware Tanzu™ Kubernetes Grid™ (TKG) workload clusters. VMware Tanzu users already benefit from deploying Kubeapps in several environments, and now, with a little configuration, Kubeapps can be integrated with your TKG workload cluster.
In addition to this capability, Kubeapps also features full compatibility with the latest versions of Pinniped, which means that it can be used with any OIDC provider for your TKG clusters and even in managed clusters such as Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE).
Want to know more? Keep reading to discover the latest capabilities of Kubeapps that enable developers and cluster administrators to deploy and manage trusted open-source content in TKG clusters.
Kubeapps is an in-cluster web-based application that enables users with a one-time installation to deploy, manage, and upgrade applications on a Kubernetes cluster.
This past year, the Kubeapps team has added key new features to support different use cases and scenarios. Firstly, we added support for private Helm and Docker registries, and later, in Kubeapps version 2.0, we built support to run Kubeapps on various VMware Tanzu™ platforms such as Tanzu™ Mission Control, …
My, how time flies. It seems like just yesterday that the Tanzu Developer Center launched. Our initial showing had what we believed at the time to be a wide array of content, from guides to videos to code samples. Over the last 12 months, though, we’ve really seen our content grow.
With shows that run monthly, weekly—even daily—each and every episode is always a great time. Tanzu Tuesdays, for example, hosted by Tiffany Jernigan, features a new guest every week who takes you into a deep-dive of a topic of their choosing, complete with live demos and coding. Code, on the other hand, is a weekly show hosted by the Spring developer advocates, who walk you through complex, real-world scenarios and show you the tools and techniques you can use to solve them. Make sure to check out all of our shows on Tanzu.TV!
In September of 2020, we launched self-paced workshops on the Tanzu Developer Center. Complete with your own personal environment right in the browser, these workshops offer hands-on instructions for working with new technologies and techniques. For example, our Kubernetes Fundamentals Workshop teaches you how to prepare and deploy your applications on Kubernetes without having to set up your own cluster or install anything locally.
VMware Tanzu Labs has actually been around for a long, long time. Previously known as VMware …
We hope you are enjoying your time at KubeCon and CloudNativeCon! Hopefully, you have seen VMware’s keynote presentation around just a handful of the open source projects VMware is involved in. Well, if you are interested in learning more about those projects, and maybe even trying them out for yourself, you’ve come to the right place. Below, we have some great content for you to look through for each of these projects. Enjoy!
One of the imperative architectural concerns for software architects is to protect APIs and service endpoints from harmful events such as denial-of-service attacks, cascading failures, or overuse of resources. Rate limiting is a technique used to control the rate at which an API or a service is consumed, which in turn can protect you from the events that can bring your services to a screeching halt. In a distributed system, no better option exists than to centralize configuring and managing the rate at which consumers can interact with APIs. Only those requests within a defined rate would make it to the API. Any more would return an HTTP 429 (“Too Many Requests”) error.
Spring Cloud Gateway is a simple and lightweight component that can be used to limit API consumption rates. In this post, I am going to demonstrate how easily that can be accomplished using a configuration method. As shown in the figure below, the demonstration consists of both a front- and backend service, with a Spring Cloud Gateway service in between.
No code whatsoever is needed to include the Spring Cloud Gateway in the architecture. You instead need to include the Spring Cloud dependency org.springframework.cloud:spring-cloud-starter-gateway in a vanilla Spring Boot application, and then you’ll be set to go with the appropriate configuration settings.
Requests received by Spring Cloud Gateway from a …
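The post itself achieves this purely through configuration properties; as a rough equivalent expressed in the Java routing DSL - the route id, path, backend URI, and rates here are invented, and the RedisRateLimiter additionally assumes a reactive Redis dependency on the classpath - the idea looks something like this:

import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.cloud.gateway.filter.ratelimit.RedisRateLimiter;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Mono;

@Configuration
class GatewayRateLimitConfig {

    // Decide what a "consumer" is for rate-limiting purposes; here, the client IP
    // (a production resolver should also handle a missing remote address)
    @Bean
    KeyResolver ipKeyResolver() {
        return exchange -> Mono.just(
            exchange.getRequest().getRemoteAddress().getAddress().getHostAddress());
    }

    @Bean
    RouteLocator routes(RouteLocatorBuilder builder, KeyResolver ipKeyResolver) {
        // Allow roughly 10 requests per second with bursts up to 20;
        // anything beyond that gets an HTTP 429 back from the gateway
        RedisRateLimiter rateLimiter = new RedisRateLimiter(10, 20);

        return builder.routes()
            .route("backend", r -> r.path("/api/**")
                .filters(f -> f.requestRateLimiter(c -> c
                    .setRateLimiter(rateLimiter)
                    .setKeyResolver(ipKeyResolver)))
                .uri("http://backend:8080"))
            .build();
    }
}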
A great feature of Tanzu Observability is that all context about the chart or dashboard that you are looking at is encoded in the URL, which makes it easy for you to share those links with your colleagues and to deep link into our product from other places such as wiki pages. A consequence of this is that the URL slug is rather involved. This is not a problem when the UI generates the URL, but it becomes very tedious when customers try to create the URL on their own in order to automate and embed Tanzu Observability charts and dashboards outside of the product itself.
To help customers take better advantage of Tanzu Observability charts and dashboards as well as allow easier automation and customization, we recently open sourced our Tanzu Observability URL slug generation code. This code lets you programmatically generate links to charts and dashboards that you can then embed wherever you like to give users an easy to find view of the metrics that matter to them.
If you are not familiar with a URL slug, it is the last part of a URL that comes after the domain name. For example:
https://www.vmware.com/company.html
In the URL above, “company.html” is referred to as the URL slug.
In some cases, the URL slug is relatively simple. In the case of a Tanzu Observability chart or dashboard, a lot of information is encoded in the slug which makes it difficult for …
How To Measure Anything by Douglas Hubbard is a popular book at VMware Tanzu Labs. In it, Hubbard takes a strong, opinionated stance on the concept of measurement, the value of measurement, and how to measure things that are considered immeasurable.
One of our team members published an old-fashioned book report on How to Measure Anything, which you can read here. Below we summarize one fascinating and helpful section of the book, along with our notes and thoughts, as this section is particularly helpful for the Product Valuation workshop.
First we want to highlight Hubbard’s analysis of why people often think things are “immeasurable.”
There are just three reasons why people think that something can’t be measured; these reasons are based on misconceptions about different aspects of measurement. We’ll refer to these misconceptions as the concept of, object of, and method of measurement:
We’ll expand upon each of those misconceptions.
Implicit or explicit in all of these misconceptions is that measurement is a certainty—an exact quantity with …
If you’re using one of the great observability tools out there, you probably already mark your data with important events that may affect it—deployments, configuration changes, code commits, and more. But what about changes Kubernetes makes on its own, like autoscaling events?
Knative is a Kubernetes-based platform used to deploy and manage serverless workloads. It has two components: serving and eventing, both of which can be deployed independently. In this post, we’re going to focus on eventing, which can automatically mark events in your data or trigger other events based on your needs.
The eventing component of Knative is a loosely coupled system of event producers and consumers that allows for multiple modes of usage and event transformation.
Among the other components in this system are the broker, which routes the events over channels, and triggers, which subscribe specific consumers to events. For our example, we’re going to keep things very simple, with a single broker using a single in-memory channel, which itself is not to be used in production.
If we want Kubernetes events as a source, we can use the API server source as an event producer. This will publish any changes seen by the API server to the channel we’re using, and we can consume that event with a small golang application and forward to the observability tool of our …
Those four letters that strike dread in the hearts of every Kubernetes user. That short acronym that pierces like a knife in the dark. The aura of terror that follows it, enveloping everyone and everything as its reach seems to grow to the ends of time itself.
YAML.
Alright, maybe that’s a bit dramatic, but there’s no doubt that YAML has developed a reputation for being a pain, namely due to the combination of semantics and empty space that gets deserialized to typed values by a library that you hope follows the same logic as others. This has fostered frustration among developers and operators no matter what the context. But is the issue as simple as “YAML is a pain”? Or is it a bit more nuanced than that?
Last year, at Software Circus: Nightmares on Cloud Street, Joe Beda gave a talk on this very subject titled I’m Sorry About The YAML. In it, he explores the factors that contribute to YAML’s reputation, or the so-called “two wolves” inside the hatred of YAML—the frustration with YAML itself and the problem that it’s being used to solve—and how they contribute to each other.
Beda starts by talking about YAML itself, both writing it and reading it. Of course, the first thing that comes to mind is the meaningful use of blank space. Opinions run high in this discussion, as it’s a situation with which Python developers are intimately familiar. Indeed, …
In a previous post, we discussed the advantages of running JupyterHub on Kubernetes. We also showed you how to install a local Kubernetes cluster using kind on your Mac, as well as how to install the JupyterHub Helm chart on a Kubernetes cluster.
In this post, we will focus on the experience of the developers, who are going to be leveraging our service to develop new models using scikit-learn or perform calculations and transformations of large datasets using pandas. To illustrate the value that Jupyter Notebooks and JupyterHub provide in a multiuser environment, we will clone a Git repository containing two example Jupyter Notebooks that we can work with.
Each user that accesses JupyterHub will have their own workspace complete with a single-user Jupyter Notebook server, which uses the JupyterLab Interface. To demonstrate the capabilities of JupyterHub and Python, we will check out the following sample notebooks that we have written and executed:
scikit-learn library for Python
Note: Each time a user logs into the JupyterHub web page, an additional pod will be instantiated for that user and a 10GB …
Provisioning environments for data scientists and analysts to run simulations, test new models, or experiment with new datasets can be time-consuming and error-prone. Python is a popular choice for data science use cases, and one of the easiest ways to leverage Python is through Jupyter Notebooks. A web-based development environment for multiple languages, Jupyter Notebooks support the creation and sharing of documents that contain code, equations, visualizations, output, and markup text all in the same document. Because Jupyter Notebooks are just text files, they can be easily stored and managed in a source code repository such as GitLab or GitHub. JupyterHub, meanwhile, is a multiuser hub that spawns, manages, isolates, and proxies multiple instances of a single-user Jupyter Notebook server.
Kubernetes provides the perfect abstractions and API to automate consistent and isolated environments for data scientists to conduct their work. Combining these three things—Jupyter Notebooks, Python, and Kubernetes—into one powerful platform therefore makes a lot of sense.
In the first post in this two-part series, you will learn how to deploy a Kubernetes cluster using kind on a Mac, then how to install JupyterHub into that cluster. In the second post, we will show you how to use the data science and machine learning notebooks you have created on your newly deployed JupyterHub service …
Who will speak for the various, meaningless phrases and jargon that fills our ears? “Digital transformation,” for example. Year after year, surveys of Very Important People in the form of Gartner’s CIO Agenda Report and others show rising interest, even “do or die” desire for digital transformation. These efforts seem to always be behind: They’re either underfunded or in the process of getting more funding; skilled people are consistently hard to find. And the headwinds! Always with the macro-global headwinds.
But surely a company must transform in order to remain competitive. Indeed, if all these executives are craving “digital transformation” and complaining about how hard it is to achieve, it must be something very important, right?
Well, sort of.
The problem with “digital transformation” is that it’s become an umbrella term to mean any spending on or change to IT. We need to implement remote working? Then we need digital transformation. Our goal is now better analytics? That means digital transformation! Upping our sales through Instagram? Roll in the digital transformation!
When a term is used for everything, it loses its meaning. In such cases, I like to replace “digital transformation” or whatever the phrase of the moment is with “Computers are awesome!” Doing so helps me remember that all people are talking about is using computers to conduct …
Three years ago, a colleague of mine wrote a post to help readers understand when to use RabbitMQ and when to use Apache Kafka, which many found to be very useful. While the two solutions take very different approaches architecturally and can solve very different problems, many find themselves comparing them for overlapping solutions. In an increasingly distributed environment where more and more services need to communicate with each other, RabbitMQ and Kafka have both come to be popular services that facilitate that communication.
It has been three years since that post was published, however, which in technology can be a lifetime. We thought this would be a great opportunity to revisit how RabbitMQ and Kafka have changed, check if their strengths have shifted, and see how they fit into today’s use cases.
RabbitMQ is often summarized as an “open source distributed message broker.” Written in Erlang, it facilitates the efficient delivery of messages in complex routing scenarios. Initially built around the popular AMQP protocol, it’s also highly compatible with existing technologies, while its capabilities can be expanded through plug-ins enabled on the server. RabbitMQ brokers can be distributed and configured to be reliable in case of network or server failure.
Apache Kafka, on the other hand, is described as a “distributed event streaming …
KubeCon North America is coming up soon! It will take place virtually November 17th-20th.
The schedule is chock-full of very interesting talks, from introductory overviews to advanced deep dives. When I first saw it, I ended up copying down as many talks as possible to share here because they are all just so good. But I figured I should probably curate a bit, so below you will find a list of my top recommendations, broken down, for the most part, by their respective Special Interest Group (SIG) names.
Talks with a 🌱 next to them are introductory/deep dive talks, and each SIG section header links to its respective SIG page.
If you can, definitely watch all of the keynotes. There is a complimentary pass just for the keynotes if you’re unable to attend the rest of the conference.
If you’re interested in seeing what cool things different companies are working on and/or are interested in, check out the sponsored sessions on Day 1.
🌱 Admission Control, We Have a Problem - Ryan Jarvinen, Red Hat
This is an interactive session that will teach you how Admission Controllers play a critical role in securing Kubernetes APIs. You will be able to “implement basic input validation and testing of webhooks for the Admission Controller.”
It’s the day and age of mountains of microservices, running on various platforms, consuming multiple services from multiple providers. As applications become more distributed, they become more complex. Even splitting a monolith into multiple smaller microservices introduces several points of failure. What happens when two of those services can’t reach each other over the network? What if one service relies on another and that one crashes? And if the application slows to a crawl, where would you start looking to figure out why?
Rather than guessing and hoping, you can lean on properly instrumented observability. Being able to aggregate logs and metrics, as well as trace a request as it flows through various applications and services, is as achievable as ever. No matter your language, framework, or platform of choice, chances are you have some great options.
But first, let’s talk about why you should care about observability.
I think of observability as the ability to infer the correlation between (seemingly) disparate systems. That means bringing together metrics from many systems in a way that allows us to find answers to questions that speed up both MTTD (the mean time to detect an issue) and MTTR (the mean time to resolve an issue). By themselves, metrics such as CPU, memory, response time, error rates, and latency are valuable, but they will not …
Octant is a tool designed to enable developers without a deep knowledge of Kubernetes to become productive as quickly as possible. This post will walk through an NGINX deployment with a faulty NGINX configuration to demonstrate how common pitfalls can be identified using Octant.
Before you get started, here are the tools you’ll need:
Let’s start with a ConfigMap containing a basic NGINX configuration and set the mount path to /etc/nginx. A deployment will mount the volume containing the NGINX configuration for the container. Copy the YAML below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 3;
    error_log /var/log/nginx/error.log;
    events {
      worker_connections 10240;
    }
    http {
      log_format main
        'remote_addr:$remote_addr\t'
        'time_local:$time_local\t'
        'method:$request_method\t'
        'uri:$request_uri\t'
        'host:$host\t'
        'status:$status\t'
        'bytes_sent:$body_bytes_sent\t'
        'referer:$http_referer\t'
        'useragent:$http_user_agent\t' …
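The post then defines the Deployment that mounts this ConfigMap. As a rough sketch (the labels and image tag here are illustrative, not necessarily the post's exact manifest), mounting the ConfigMap at /etc/nginx could look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-conf          # volume defined below
          mountPath: /etc/nginx     # the mount path mentioned above
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf          # the ConfigMap from the previous snippet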
The transition to working remotely due to COVID-19 has proven to be quite a challenge—including for those of us at VMware Tanzu Labs. While remotely communicating and collaborating using digital tools is not new to us, doing so in an entirely distributed environment for the foreseeable future is.
The challenges have been especially daunting when we’ve had to apply the principles of Extreme Programming (XP), namely constant communication, intense collaboration, and ongoing reflection. On top of all that, we engage with new customers on a regular basis, kicking off projects multiple times throughout the course of a year. In this new normal, our practitioners must not only quickly bootstrap burgeoning relationships and get up to speed on unfamiliar domains, but also build customer-based teams that embrace all these new ways of working and, in the process, redefine a set of previously shared norms.
There are undoubtedly more challenges left for us to face in our new, all-remote set-up. In the meantime, however, we set out to identify the most pressing problems and come up with the best possible solutions to address them. The result is a series of remote working tips that we are thrilled to share with you.
Our remote working tips cover a wide range of topics, from general considerations to specific advice for high-collaboration teams:
Building relationships – How to create …
One of the biggest challenges I face when developing applications that will run on Kubernetes is having a local environment that I can spin up at any time—one that won’t give me any problems, won’t cost me money when left on during the weekend or at night, and that I can be confident will have all the same functionality as my cloud-based environment. That’s why I use minikube for local development, as it’s the tool that gives me the best “developer experience” possible. None of the alternatives can really compare.
But minikube is not perfect. There are two things in particular that require some additional configuration. The first is that every time you create a minikube instance you get a different IP address, which becomes an obstacle when you want to recreate your environments. The second is that I prefer my minikube instances to have a registry, which, like the services I choose to work with, should also be secure. But while the minikube documentation provides instructions for how to secure a registry, it’s still a complicated process.
For both of these reasons, I’m going to explain how I set up my minikube instances so they can be used and recreated easily, and so they give me the ability to work with trusted secured services.
When working with secure services, I want the minikube instance to have secure routes and internal access. The easiest …
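The full setup is described in the rest of the post; as a starting point, and assuming a recent minikube release (the flags and values here are illustrative, not the author's exact configuration), a basic instance with the bundled registry addon can be created like this:

# Start a minikube instance (driver and resource sizes are illustrative)
minikube start --driver=docker --cpus=2 --memory=4096

# Enable the built-in registry addon
minikube addons enable registry

# See which IP this instance was assigned; note that it can change when the instance is recreated
minikube ip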
SpringOne 2020 just wrapped, and the self-paced workshops were a complete success! Moreover, all of your requests to continue providing these workshops beyond the conference have been heard. Their future home will be the Tanzu Developer Center. UPDATE: These workshops are available to try out now!
For those that missed SpringOne 2020, we’ll quickly recap what the workshops focused on and what they accomplished. Feel free to jump around if you need to; the recap is meant to be a quick read with plenty of pictures.
A total of 10 self-paced workshops covering a range of open source technology—from CI/CD with Tekton and several Spring technologies to infrastructure tools like Kubernetes, Octant, and Carvel—were available at SpringOne 2020:
Each of these workshops has an environment prepared and ready, which is quite refreshing when you are accustomed to spending 15-30 minutes setting up to follow a tutorial. The workshop environment is also native to the technology being used. For example, you can interact with actual Kubernetes clusters in a Kubernetes workshop or work …
You’re halfway through delivering your feature and you decide to take a look at your diff. Doing so gives you a sinking feeling in your stomach, because you see a lot more changes than you were expecting, some of which were refactorings, like renames or structural changes you wish were separated into their own commits. Teasing apart these smaller commits can be messy and would take too much effort at this point. But if you’d tried to preemptively break them apart at random, you’d have run the risk of overengineering your work or creating unnecessary changes.
Writing commit messages first can help pairs navigate between feature delivery and what could be smaller, more atomic commits. It’s analogous to working from “stories” off a backlog or trying to get the next failing test to pass. If you or your pair gets lost in the weeds, pointing back to the story provides a direct and egoless way to get back on track. In test-driven development, pairs stay focused by thinking about small, incremental, falsifiable functionality one failing test at a time. Writing commit messages first can help pairs articulate the space between a feature story and individual tests.
You can use a command line tool such as Stacker to keep track of your commit messages, or simply use pencil and paper. The first commit message is easy to write and provides a frictionless …
For quite a while now, I’ve kept an eye on RedMonk’s programming language rankings — which track the usage of all programming languages based on data from GitHub, as well as discussion about them on Stack Overflow — to get insight into the various language trends. In the January 2020 update, something interesting happened: Python reached No. 2 on the list, taking over Java.
As RedMonk pointed out, “[T]he numerical ranking is substantially less relevant than the language’s tier or grouping. In many cases, one spot on the list is not distinguishable from the next.” However, it’s still interesting, especially as Python has continued to hold the No. 2 spot into the latest ranking after spending approximately seven years ranked third or fourth.
RedMonk isn’t alone in its findings, either. GitHub’s report, The State of the Octoverse, also ranked Python as the second most popular language used on that website, just behind JavaScript. Not only that, it also found that Python remains among the top 10 fastest-growing languages in the community, despite already having a foothold with developers. The JetBrains Python Developers Survey in 2019 found that one of the most popular things developers use Python for is web development, with Flask and Django fighting for the top web framework spot.
At one time Python was my main language of choice. For the past few years, I’ve primarily been a Java …
Your cloud-native application has been designed, built, and tested locally, and now you’re taking the next step: deployment to Kubernetes.
Isolated environments can be provisioned as namespaces in a self-service manner with minimal human intervention through the Kubernetes scheduler. However, as the number of microservices increases, continually updating and replacing applications with newer versions, along with observing and monitoring the success/failure of deployments, becomes an increasing burden.
Deployment processes can be performed with zero or minimal downtime, at the expense of increased resource consumption or having to support concurrent app versions.
What options do you have to turn deployments into a predictable, repeatable, consistent, easy-to-observe process? Kubernetes, with declarative deployments.
The concept of deployment in Kubernetes encapsulates the upgrade and rollback processes of a group of containers and makes its execution a repeatable and automated activity.
The declarative nature of a Kubernetes deployment allows you to specify how the deployed state should look, rather than the steps to get there. Declarative deployments allow us to describe how an application should be updated, leveraging different strategies while allowing for fine-tuning of the various aspects of the deployment process.
The …
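As a minimal sketch of what such a declarative deployment looks like (the application name and image are placeholders), a rolling update strategy can be expressed directly in the Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                  # placeholder name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                 # allow one extra pod during the rollout
      maxUnavailable: 0           # keep full capacity while updating
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo-app:1.0.1   # changing this tag triggers a new rollout

Applying an updated manifest with kubectl apply is enough to start the upgrade, and kubectl rollout undo rolls it back if needed.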
Throughout most of my career as a developer, I have written code using .NET (mostly C#). But lately, I have been spending more time with Spring, and I keep hearing comments about exciting changes in .NET around containers. I decided it was time to go back and check out what I had missed. This article highlights some of these changes, emphasizing the ones most relevant to containers and microservices; after all, I am part of the VMware Tanzu Portfolio.
Microsoft released .NET Core 3.0 on Sept. 23, 2019, and a couple of months later, on Dec. 3, 2019, version 3.1 followed. Version 3.0 had already reached its end of life, while version 3.1, with its LTS designation, will have support until Dec. 3, 2022 (more details here).
.NET Core 3.1 contains a tiny number of changes compared to version 3.0. These are mainly related to Blazor and Windows Desktop, in addition to the LTS designation. The bulk of significant changes were in version 3.0. I have selected a subset of items that I believe have a more significant impact on my day-to-day role at VMware Tanzu Labs. For the complete list of changes, go here and here.
Before version 3, running .NET Core in a container was not for the faint of heart. CoreCLR was inefficient when allocating GC heaps and quickly ran into Out-of-Memory situations. The new version of .NET Core has made significant …
When you’re first learning how Kubernetes works, or are developing code that leverages Kubernetes, you’re likely to find yourself looking to one of the many options available to run it locally. As with almost anything in technology, there are more options than you probably know what to do with, which can leave you asking yourself which one you should use. Minikube? Kind? Microk8s? Even Docker Desktop ships with the ability to spin up Kubernetes.
Consider a scenario in which you need to develop and test on Kubernetes locally. For example, Spring Cloud Kubernetes gives you tools such as service discovery, which enables you to look up Kubernetes services, as well as the ability to set properties in your code using ConfigMaps. This post will use a simple two-tier application that has a frontend (written in Spring) that looks up where the backend service is (also written in Spring) by looking up the service name that exposes it. The backend service presents a REST API that reports inventory information about an imaginary grocery store, and the frontend application visualizes it.
Since the whole point is to develop locally, you’ll deploy the backend, then get a shell into another pod that mounts a volume from your local machine containing the frontend code. This will allow you to rapidly iterate changes to the code without building and deploying a new container every time. …
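As a rough sketch of that workflow, and assuming minikube is the local cluster (the post may use different tooling; the image and paths here are illustrative), sharing local code with a pod could look something like this:

# Share a local directory into the cluster node (runs until interrupted)
minikube mount "$(pwd)/frontend:/src"

A development pod can then mount that path via a hostPath volume and give you a shell to iterate in:

apiVersion: v1
kind: Pod
metadata:
  name: frontend-dev
spec:
  containers:
  - name: dev
    image: eclipse-temurin:17-jdk        # illustrative; any JDK image works for a Spring app
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: src
      mountPath: /workspace
  volumes:
  - name: src
    hostPath:
      path: /src                         # the target of the minikube mount above

Running kubectl exec -it frontend-dev -- bash then drops you into a shell with your local sources available under /workspace.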
You can start to build modern, cloud native apps today using the latest innovations from Spring, Cassandra and Kubernetes. This blog will show you code samples and a fully working demo to get you up to speed in minutes with these open-source technologies.
There’s no shortage of buzzwords, such as “digital transformation”, “cloud native” and “serverless,” being thrown around the internet. Peeling back a layer of the buzzword onion, we do see significant changes in the technology world that have inspired these terms. First, most companies are becoming technology companies as having a presence in the digital space grows as a requirement for survival. Second, the mad dash to the cloud is showing no signs of slowing down. Third, time to market for new applications matters more than ever.
So, how has this affected technology practitioners? Well, the developer population is multiplying rapidly, and the pressure for faster delivery of these new digital experiences is getting more extreme by the day. It’s now a fundamental expectation that applications will be there whenever, wherever, and however users want to engage. The cloud movement is creating complex architectures that span on-premises and cloud environments, not to mention the sheer amount of data coming through these services is exploding. Sounds fun!
Open-source technology …
Understanding the way containers communicate will make your life easier in many ways. Technologies like Kubernetes and Docker abstract away details that make containers special, but can also abstract away intuition and understanding. Without that knowledge, challenges arise—educated problem-solving adds confidence to decision-making!
In this post, we will demystify containers and cover some networking basics by explaining how to create two rudimentary containers and connect them with a virtual network so they can talk to each other. The host machine, which is the machine where the network lives, views this network as if it were completely external. So, we will connect the network to the host. We’ve also included a bonus section about connecting the network to the internet so your containers can reach Google. You do not need a Linux machine to run through the exercises.
A container can be considered synonymous with a Linux network namespace. Keep this in mind. Essentially, a container is a namespace.
Each container runtime uses a namespace differently. For example, containers in Docker get their own namespace, while in CoreOS’ rkt, groups of containers share namespaces, each of which is called a pod.
Containers are based on Linux networking, and so insights learned in either can be applied to both. You can apply these concepts to …
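To make the namespace idea concrete, here is a simplified sketch using the ip tool on a Linux host; it wires two namespaces together directly with a veth pair, whereas the post's full walkthrough also connects them to the host:

# Create two network namespaces, our two rudimentary "containers"
sudo ip netns add container1
sudo ip netns add container2

# Create a virtual ethernet (veth) pair and move one end into each namespace
sudo ip link add veth1 type veth peer name veth2
sudo ip link set veth1 netns container1
sudo ip link set veth2 netns container2

# Give each end an address on the same subnet and bring the interfaces up
sudo ip netns exec container1 ip addr add 10.0.0.1/24 dev veth1
sudo ip netns exec container2 ip addr add 10.0.0.2/24 dev veth2
sudo ip netns exec container1 ip link set veth1 up
sudo ip netns exec container2 ip link set veth2 up

# The two namespaces can now reach each other
sudo ip netns exec container1 ping -c 1 10.0.0.2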
If you’re like me and you’ve worked with Git for some time, you might have a couple of commands committed to your memory—from git commit for recording your changes, to git log for sensing “where” you are.
I have found git checkout to be a command that I reach for pretty frequently, as it performs more than one operation. But a single command doing more than one thing might produce a suboptimal user experience for someone learning Git. I can almost picture an XKCD strip:
Learner: What do I run to change the branch I’m on?
You: Use git checkout <branch>.
Learner: What can I run to discard changes to a file?
You: Use… git checkout <file>.
Learner: OK…
Even if you have the commands memorized, there have likely been times when you had to pause after typing a git checkout command while you tried to match it with the operation you had in mind (e.g., “I just typed git checkout … to do X, but I thought git checkout does Y, does this really do what I want?”).
Let’s take a look at what git checkout can do, and an alternative (or two) that can make for a friendlier user experience in Git.

What does git checkout do?

Perhaps you were trying something out and made some changes to the files in your local Git repository, and you now want to discard those changes. You can do so by calling git checkout with one …
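For reference, Git 2.23 introduced two commands that split git checkout's responsibilities, which is likely the friendlier alternative alluded to above; a few representative invocations (branch and file names are placeholders):

# Switching branches, previously `git checkout <branch>`
git switch my-feature-branch

# Creating and switching to a new branch, previously `git checkout -b <branch>`
git switch -c my-new-branch

# Discarding local changes to a file, previously `git checkout -- <file>`
git restore path/to/file.txt

# Unstaging a file without touching its contents
git restore --staged path/to/file.txt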
While running software in containers is very popular, it can be a little confusing to figure out the best way to get your code into a container. Now that the industry is mostly unified on Open Container Initiative (OCI) Standard container image formats, they can be built in any number of ways.
Building via Dockerfiles is the most commonly used approach, but there are also other tools that can make the job easier, with less upfront learning and some other advantages.
If you’re not familiar with the specification for Dockerfiles, you can find it here. The basic layout looks something like this:
FROM debian:latest
ADD my-app-file /app/
CMD /app/my-app-file
The first thing we need is a starting point; in this case, we’re using a debian image at the latest version. There are also language-specific base images, like python or golang, and ones tied to specific distributions.
The next lines include whatever steps we need to prepare the image, and the last line tells the image what command to run when the image is executed. There are a lot of variations on this, but these are the basics. How can we make it better? A slightly more complicated, real application will make that easier to show. Here’s a very simple golang http server application:
package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", func (w http. …
Your app is destined for the cloud, but it’s going to meet some challenges along the way. First stop is the always fun whiteboarding session(s). Then come the sticky notes, which inevitably yield a backlog. Only when those two steps are complete does the Zen art of coding begin.
But ask yourself this: While whiteboarding the app’s design, how often is the developer’s local environment considered? Probably never. In fact, I bet during design a local environment doesn’t even make it into the afterthoughts. Instead, it’s just one of those “figure it out” things.
Take, for example, the design of a microservice. Most likely it’s going to depend on external configuration, like Spring Cloud Config. Ever consider how a developer is going to test against a config server locally? Do they have access to local containerization, like Docker? Or are they left to waste countless hours rigging up environment variables, only to find a totally different schema when the app is pushed to its platform?
I’ve been there and done that. It’s frustrating, wasteful, and breaks one of the most important of the 12 factors: No. 10, parity between environments. It’s also a personal favorite, and one I’ve been known to be ornery about.
The goal is not about following a good or bad design (although there’s plenty of room for bad decisions). It’s about …
Developers and VMware. The pairing might not make sense to you at first. As an application developer, maybe you have only limited experience working with VMware software. It’s probably just the place where your software runs on-premises. Or it’s the thing you get access to a couple of days after putting in that infrastructure request ticket.
But make no mistake, VMware Tanzu is for you. If you are a developer working in a large enterprise, or even a small- to medium-sized business, you are now being asked to build “modern” applications or to “modernize” your existing apps. VMware Tanzu is a collection of software created expressly to help you with this application modernization effort. It brings together innovations via VMware’s acquisitions of Heptio, Bitnami, Wavefront, and Pivotal with lots of open-source DNA from projects like Kubernetes and Cloud Foundry. And don’t worry, this site isn’t about infrastructure software like vSphere or NSX. Instead, it covers the topics you need to know to write modern software: Spring, .NET, Python, RabbitMQ, Kafka, CI/CD platforms, and much more.
So now you see how a VMware-hosted developer site can be focused on app modernization. But what exactly is meant by “app modernization”? Is it simply the latest in a long line of technology fads that just means more work for you?
It can sometimes feel like that. Change is hard, …