Cloud Native security for non-security folks, or those who are newer to security. This is part 2! In this part, we roll on with the security topics and tackle permission management, secrets management, and more.
In the first part of this series, we introduced some concepts specific to Cloud Native security, in particular the “4 C’s of Cloud Native security”. We gave a number of leads and insights about securing Kubernetes as a platform. In this next part, we’re going to roll on with the security topics, and tackle permission management, secrets management, and more!
When we develop with a local Kubernetes cluster like minikube or KinD, we usually don't have to worry too much about permissions. In these clusters, we typically have access to everything: we are kubernetes-admin, or some equivalent of the good old root user on UNIX systems. (Technically, we belong to the system:masters group, which is granted the cluster-admin ClusterRole.)
However, when deploying to "real" clusters (production, or even staging or testing environments), we will hopefully use fine-grained permissions. We might only get permissions to deploy to a specific namespace. Or perhaps we won't get permissions to create/update/delete anything at all, because deployment will be done by a CI/CD pipeline, and all we can do is view resources, logs, and events for troubleshooting purposes. It might be a good idea to grant even narrower permissions on production systems; for instance, making sure that we can't use kubectl exec there. That way, we won't be able to, say, dump and run away with the user credit card database… or, more realistically, we won't risk that unfortunate event if our laptop gets stolen by a malicious actor who would then proceed to do the same!
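As an illustration, here is a sketch of what such a narrow, read-only troubleshooting Role could look like (the role name and exact resource list are assumptions; adjust them to your environment). Because the Role grants no verbs on the pods/exec subresource, kubectl exec is denied:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only-troubleshooter   # hypothetical name
  namespace: production
rules:
  - apiGroups: [""]
    # view resources, logs, and events; note the absence of "pods/exec"
    resources: ["pods", "pods/log", "events", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
```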
These permissions are typically implemented with the Kubernetes RBAC (Role-Based Access Control) system. This system lets us grant permissions to people (you and your devs) as well as robots (i.e., automated services and code running on your clusters) that need to talk to the Kubernetes API. The RBAC system lets us implement very specific permissions; for instance, "let this custom autoscaler change the number of replicas for this particular Deployment in this particular namespace, but don't let it do anything else". The custom autoscaler won't be able to scale other Deployments, or even view them; and on the targeted Deployment, it will only be able to change the number of replicas, not the labels, images, or anything else. Presumably, the attack surface of a custom autoscaler would be small; but just in case, if that autoscaler code gets exploited, it won't be able to start new pods that mine cryptocurrency, or access and extract sensitive data. The RBAC system is very powerful, but it can be a bit confusing or overwhelming at first, which is why we're going to dedicate a bit of time to user management and permissions.
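A hedged sketch of what such an autoscaler's Role could look like (the role, namespace, and Deployment names are hypothetical). The patch verb on the deployments/scale subresource, restricted with resourceNames, lets it change replica counts on that one Deployment and nothing else:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: scaler-of-queueworker   # hypothetical name
  namespace: prod
rules:
  - apiGroups: ["apps"]
    resources: ["deployments/scale"]   # only the scale subresource
    resourceNames: ["queueworker"]     # only this particular Deployment
    verbs: ["get", "patch"]
```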
We have two different types of “auth” that we need to know about and understand.
AuthN (authentication): Who are you? Are you really who you claim you are?
AuthZ (authorization): What can you do? Do you have permission to do what you’re trying to do?
Let's start with authentication, or user management. Kubernetes gives us a lot of flexibility. We can use TLS certificates (issued by a Kubernetes-specific CA or otherwise), JWT (JSON Web Tokens) issued by Kubernetes itself or by various OIDC (OpenID Connect) providers (self-hosted like Dex/Keycloak, or SaaS like Okta), or even piggyback on our cloud provider's IAM to map cloud users or roles to Kubernetes users. We can mix and match these different methods; and in fact, we will typically mix and match them within a cluster, because most often, nodes will authenticate with certificates (which can be issued with the Kubernetes TLS bootstrap mechanism and renewed automatically with the Kubernetes CSR API) while controllers and other workloads running in pods will leverage ServiceAccount tokens (JWT generated by the control plane).
We can make an arbitrary distinction in Kubernetes between humans (actual people interacting with the Kubernetes API with kubectl and other tools) and robots (controllers and other automated processes). It's worth mentioning that in Kubernetes, human users don't exist as API objects: we can't run, for instance, kubectl get users, as there is no "User" object or resource. Users exist implicitly; for instance, when we create a RoleBinding stating "user ada.lovelace can create deployments in the dev namespace", then whenever someone authenticates with a TLS certificate or a properly signed JWT with "ada.lovelace" in the subject field, they will be granted the corresponding permissions.
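For instance, the RoleBinding mentioned above could look like this (it assumes a Role named deployment-creator already exists in the dev namespace; that role name is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ada-can-deploy
  namespace: dev
subjects:
  - kind: User
    name: ada.lovelace   # no User object exists anywhere; this is just a string
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-creator   # hypothetical Role granting "create" on deployments
  apiGroup: rbac.authorization.k8s.io
```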
This means that there is no way to directly "list all users", other than listing all the permissions that have been granted (technically: all the RoleBinding and ClusterRoleBinding objects) and aggregating the results. There are many tools (open source or proprietary) out there to help us with that task, for instance the access-matrix kubectl plugin.
"Robots", on the other hand, will typically use ServiceAccounts. ServiceAccounts are Kubernetes API objects: we can list them (with kubectl get serviceaccounts) and manage them with e.g. YAML manifests. When a namespace is created, it automatically receives a ServiceAccount named "default", and that ServiceAccount is automatically used by all pods created in the namespace (unless another one is created and then specified explicitly in pod templates). Kubernetes generates tokens for these ServiceAccounts and injects them into the containers' filesystems, so that any workload running on a Kubernetes cluster automatically uses its ServiceAccount identity. This doesn't require any work or configuration on our end, because the standard Kubernetes client library (in Go and other languages) will automatically detect that it's running in a container in a Kubernetes cluster, and will automatically obtain that token as well as the API server address, port, and CA certificate!
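To use a ServiceAccount other than "default", we create it and reference it in the pod spec. A minimal sketch (the names and image are hypothetical); the token ends up mounted at /var/run/secrets/kubernetes.io/serviceaccount/token inside the container:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: reporting          # hypothetical name
  namespace: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: reporter
  namespace: dev
spec:
  serviceAccountName: reporting   # if omitted, the "default" ServiceAccount is used
  containers:
    - name: app
      image: ghcr.io/example/reporter:1.0   # hypothetical image
```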
To go even further, check out the SPIFFE project, which can be used to authenticate from one service to another. It's an advanced use case, but these short-lived identities can also help you move away from long-lived secrets.
What happens after the Kubernetes API server has authenticated a request, i.e. validated who or what is making that request? Then it moves to the authorization phase, which means checking that the identified principal (user, ServiceAccount…) has the required permissions to perform the request. The Kubernetes API server has a few different authorizers that can approve or deny a particular request: Node authorization, ABAC (Attribute-Based Access Control), RBAC, and Webhook. In practice, RBAC is by far the most common, so that's what we'll focus on.
Role-Based Access Control can be a tiny bit frustrating at first, especially if you're looking for the simplest way to grant one particular privilege to a specific user. Unfortunately, "let user grace.hopper create pods in the staging namespace!" doesn't translate to a simple one-liner kubectl command. It translates to two commands: one to create a Role, and one to create a RoleBinding.
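Concretely, the two commands would look something like `kubectl create role pod-creator --verb=create --resource=pods --namespace=staging`, followed by `kubectl create rolebinding grace-pod-creator --role=pod-creator --user=grace.hopper --namespace=staging` (the role and binding names are hypothetical). They produce manifests equivalent to:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator          # hypothetical name
  namespace: staging
rules:
  - apiGroups: [""]          # "" is the core API group, where pods live
    resources: ["pods"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: grace-pod-creator    # hypothetical name
  namespace: staging
subjects:
  - kind: User
    name: grace.hopper
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-creator
  apiGroup: rbac.authorization.k8s.io
```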
While this level of indirection might seem like an unnecessary complication at first, it will encourage us to write reusable components (the Roles and ClusterRoles) and avoid permission drift (e.g. one user having different permissions from another because these permissions have been set individually instead of being factored into a Role).
That being said, let’s dive into the details!
To grant permissions to users, these permissions must be gathered into a Role or ClusterRole.
A Role or ClusterRole is a collection of rules, and each rule is defined by a resource, a verb, an API group… If you're getting started with the Kubernetes permission model, it can be difficult to know what to put there, or even to remember what goes in a rule.
Thanks to kubectl, there is a relatively straightforward way to figure it out, though!
First, if we run kubectl create role -h, it will kindly remind us of the syntax required to create a role:
kubectl create role NAME --verb=verb --resource=resource.group/subresource [...]
Let’s pretend that you want to let a user view the pods of a given namespace. Have that user try to run e.g. “kubectl get pods” in that namespace, and they should see a message similar to:
Error from server (Forbidden): pods is forbidden: User "charlie.devoops" cannot list resource "pods" in API group "" in the namespace "production"
All the information that we need is in that message: the verb (list), the resource (pods), and the API group ("", i.e. the core API group).
We can therefore create the corresponding role with:
kubectl create role viewer-of-pods --verb=list --resource=pods
Pro tip: if we want to generate a YAML manifest for that Role (and conveniently edit it, add more rules, etc.), we can easily do it by adding the -o yaml --dry-run=client options to the command line:
kubectl create role viewer-of-pods --verb=list --resource=pods -o yaml --dry-run=client
Now that we have a Role, we need to bind it to a user with a RoleBinding.
A RoleBinding can bind a Role (but also a ClusterRole, as we'll discuss a bit later) to one or multiple "principals". These principals can be any combination of users, groups, and ServiceAccounts.
Groups correspond to the Organization ("O") field in TLS certificates, and to the groups claim in OIDC tokens. With groups, instead of binding a Role to a specific user, it is possible to bind that Role to a group (for instance, "developers") and then issue certificates or tokens that mention that group. This partially shifts the burden (and responsibility) of permission management to the user provisioning system (and can save us the trouble of manually enumerating users on the Kubernetes side).
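For example, a RoleBinding for a whole group might look like this (the binding name, namespace, and group name are assumptions; "edit" is one of the built-in ClusterRoles):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-can-edit   # hypothetical name
  namespace: dev
subjects:
  - kind: Group
    name: developers   # matches the "O" field of certificates or the OIDC "groups" claim
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit         # built-in ClusterRole
  apiGroup: rbac.authorization.k8s.io
```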
If you need help crafting your first RoleBindings, once again, kubectl proves to be very helpful!
kubectl create rolebinding --help
…
Usage:
  kubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none] [options]
And once again, you can add -o yaml --dry-run=client to the command if you need to generate YAML manifests.
We have at our disposal Roles and RoleBindings, as well as ClusterRoles and ClusterRoleBindings. You probably noticed the same pattern for other Kubernetes resources; for instance cert-manager has Issuers and ClusterIssuers. This is common for resources that might justifiably exist within a single Namespace (and be available solely in that Namespace) or at the cluster scope (and be available from all Namespaces).
In the context of RBAC, we can similarly work at two different levels:
Roles and RoleBindings are relevant for two main use-cases: granting permissions to resources within a single Namespace, and letting Namespace-level administrators manage permissions within their own Namespace.
The latter use-case is particularly interesting because it means that you don't need to be an all-powerful cluster administrator to grant permissions. This gives us the possibility of implementing a 3-tier user system: cluster administrators, who can do anything on the cluster; Namespace administrators, who have full control within their own Namespace (including granting permissions to others in that Namespace); and regular users, who only get the permissions that were granted to them.
Meanwhile, ClusterRoles and ClusterRoleBindings are relevant for the following use-cases: granting permissions to cluster-scoped resources (like Nodes or Namespaces themselves), and granting permissions across all Namespaces at once.
Finally, note that it is possible to create a RoleBinding that references a ClusterRole. In that case, the RoleBinding gives permissions only to resources in its own Namespace; but this allows the role to be defined once in the cluster, instead of having to duplicate many identical roles in each and every Namespace. In fact, Kubernetes ships with 4 such user-facing ClusterRoles out of the box: cluster-admin, admin, edit, and view, which can be used to grant predefined sets of permissions to users.
Does that sound like a lot (maybe too much) of information and new concepts? Well, here is a comparatively simple method that you can use to isolate users and workloads while keeping things simple. Don't give blanket admin permissions on the whole cluster (with a ClusterRoleBinding), because that would be like giving root access to the entire cluster. But feel free to give admin permissions within a Namespace (with a RoleBinding). So, when in doubt, if someone needs to access a bunch of stuff, put that stuff in a Namespace, give them access to that Namespace, and call it a day!
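A sketch of that "Namespace admin" pattern, using the built-in admin ClusterRole (the Namespace, binding, and group names are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-blue-admins     # hypothetical name
  namespace: team-blue       # hypothetical Namespace
subjects:
  - kind: Group
    name: team-blue          # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin   # built-in ClusterRole; grants full control, but only within this Namespace
  apiGroup: rbac.authorization.k8s.io
```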
After creating a few (or more than a few) Roles and RoleBindings, it can become harder to track exactly who can do what, and to make sure that we didn’t accidentally grant too many permissions.
Good news, everyone! There are many tools to help us review who can do what.
kubectl auth can-i --list is available out of the box, and lists the permissions of the current user. It can be used for penetration testing: run it inside a pod, and you will immediately see the permissions granted to the pod's ServiceAccount! It can also be used with the --as option to see the permissions of another user; for instance, kubectl auth can-i --list --as bob.
Then, a number of options are available as kubectl plugins. You can install them manually, or install krew and then use krew to install these plugins super easily.
kubectl who-can will list which users, groups, or ServiceAccounts have the permission to perform a specific action (e.g. kubectl who-can delete pods).
kubectl access-matrix will show you a matrix of the permissions of a user (or group, or ServiceAccount), indicating which actions that user can perform on which resources. Alternatively, it can show a matrix for a particular resource, and list all the principals who have access to that resource and what they can do with it.
kubectl rbac-tool can even generate visual diagrams showing the relationships between users and permissions!
Keep in mind that permissions are additive. We can’t “take away” permissions or make exceptions. For instance, if we gave someone permissions to “update Deployments”, we can’t make an exception to protect a particular Deployment. Instead, we need to give them permissions on all other Deployments individually. Or, put the “protected” Deployment in a specific Namespace, and then give permissions to all the Namespaces (again, individually) except the one with the “protected” Deployment.
That being said: if we wanted to block updates to specific resources (e.g. "the Deployment named queueworker in Namespaces whose name starts with prod-"), or block specific updates (e.g. "prevent scaling Deployment queueworker below 2 or above 10"), we could leverage dynamic admission webhooks and implement even finer controls. We don't recommend using these webhooks for permission management, though, because the corresponding permissions (or rather, restrictions) won't show up in RBAC auditing tools. These webhooks are better suited to policy control, i.e. implementing rules like "it is forbidden to use tag :latest", for instance.
“Secrets” are the mechanism through which applications running on Kubernetes typically obtain sensitive information like TLS keys, database credentials, API tokens, and the like. As you can certainly imagine, there will be many recommendations and guidelines about the use of these secrets!
First things first: make sure that all sensitive information is properly stashed in Secrets and not in ConfigMaps. It is unfortunately too easy to load a configuration file in a ConfigMap while forgetting that the file contains a database password for example.
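For reference, here is a minimal Secret manifest (the name and value are of course placeholders); stringData accepts plain text, which Kubernetes base64-encodes when storing it:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # hypothetical name
  namespace: prod
type: Opaque
stringData:
  DATABASE_PASSWORD: "changeme"   # placeholder value; never commit real secrets to git
```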
Secrets and ConfigMaps can have different RBAC permissions, meaning that our teams will easily be able to double-check configuration values (in ConfigMaps) but won't be able to access (accidentally or maliciously) our precious Secrets.
Secrets can also be encrypted at rest. This won’t change how you access Secrets through the Kubernetes API, but it will change how they get stored in etcd. This means that someone who would gain access to your etcd servers (or to their backups, or to their storage systems if applicable) won’t be able to get their hands on your secrets.
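On self-managed clusters, encryption at rest is enabled by passing an EncryptionConfiguration file to the API server (via --encryption-provider-config). A sketch, assuming the aescbc provider (the key value is a placeholder):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback, so Secrets written before encryption was enabled can still be read
```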
If you fully embrace the declarative nature of Kubernetes, perhaps by using GitOps tooling and storing all your manifests in git repositories, you might wonder how to safely include secrets. Indeed, committing secrets to a git repository would make them available to anyone with read access to that repository!
There are many solutions to this challenge, each addressing a different use-case.
SealedSecrets lets you use asymmetric (public key) encryption so that anyone can add secrets to a repository by encrypting them with a public key (transforming a regular Secret into a SealedSecret); but these secrets can only be decrypted using a private key held by the cluster (transforming them back into regular Secrets).
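A SealedSecret is produced by piping a regular Secret through the kubeseal CLI; the resulting object is safe to commit. Roughly (names are reused from the earlier example, and the ciphertext shown is a placeholder):

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: prod
spec:
  encryptedData:
    DATABASE_PASSWORD: AgB3...   # placeholder ciphertext produced by kubeseal
```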
Kamus uses private key encryption at the application level, meaning that the control plane doesn’t “see” the decrypted secrets and the app doesn’t need to entrust the control plane with these secrets.
HashiCorp Vault can take care of storing your secrets safely, and then expose them in various ways. If you are already a user of Vault, you might be excited (or perhaps overwhelmed!) to know that there are at least three ways to expose Vault secrets to Kubernetes apps: with a sidecar (running the Vault Agent), with a CSI driver, and with an operator.
SOPS is a pretty popular option used in many other contexts. If you already use it extensively (for instance, to safeguard your infrastructure secrets for usage with Terraform) you will be able to extend that usage to Kubernetes as well.
And of course, you can leverage your cloud provider’s Key Management Service (KMS) if you are using one, for secret data encryption.
The Kubernetes RBAC system defines distinct get and list permissions. One could assume that get gives you all the details about a given object, while list would only let us enumerate objects, without their details. One would be wrong! The list permission lets you enumerate objects, but also gives you their content. This means that the list permission on Secrets, for instance, lets you list all Secrets in a Namespace with their content.
This has interesting consequences for Ingress Controllers handling TLS traffic. The Ingress specification lets us put a TLS key and certificate in a Secret, and then specify that Secret name in an Ingress resource. This will typically require our Ingress Controller to be granted list permission on Secrets. This means that a vulnerability in the Ingress Controller could lead to a compromise of all Secrets, which might lead to a full cluster compromise if we have Secrets holding privileged ServiceAccount tokens. Using bound ServiceAccount tokens (the default since Kubernetes 1.24) mitigates that risk. Another possible mitigation is to avoid deploying a cluster-wide Ingress Controller, and instead deploy one Ingress Controller per Namespace, giving each one access only to the Secrets in its own Namespace. That's not always convenient, though!
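For context, this is how an Ingress references a TLS Secret (the host, names, and backend here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web   # hypothetical name
  namespace: prod
spec:
  tls:
    - hosts: ["www.example.com"]
      secretName: www-example-com-tls   # the Ingress Controller must be able to read this Secret
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```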
This is indeed a lot of ground to cover! The Kubernetes documentation has a great security checklist to help us make sure that we've got most of our bases covered. Check it out once in a while, in case it gets updated.
In addition to all that, the most hardened clusters won’t be exposed to classic vulnerabilities, but might still be vulnerable to software supply chain attacks. In a nutshell, this means finding a way to insert malicious code inside the applications deployed on the clusters, or inside their dependencies. Defending against that type of attack involves securing a lot of links along our software delivery chain: source repositories, container registries (and other artifact registries), provenance of third-party images and libraries used in our own apps, our CI/CD pipeline itself… This deserves a full write-up on its own. In the meantime, if you want to get a feel of what this entails, you can take a look at sigstore and in-toto.
And beyond security concerns, there are many other details, big and small, that we need to cover before we can stamp our Kubernetes clusters as "production-ready". Even when using a state-of-the-art managed Kubernetes cluster (spun up by clicking around in the web console, or with something like eksctl create cluster), we're still missing a lot of critical components.
Logging is indispensable in general, but in particular for good security, because we want to audit who does what. By default, container logs are stored locally (on the node running the container), which means that if we lose the node (to an outage or because it was scaled down or simply recycled), we lose the logs. Enable log collection, either through some facility offered by your cloud provider, or by installing something like Datadog, Loki, etc.
Control plane logging deserves its own separate entry, because it might require separate steps to be enabled. In particular in managed clusters, since the control plane runs outside of your nodes, you’ll need to make sure that control plane logs are shipped somewhere - and pay particular attention to the API server audit logs.
Observability also matters. While it’s not strictly part of the security perimeter, it’s important to have metrics to help with outage resolution, performance analysis, and general troubleshooting. Metrics can even help to predict (and anticipate) outages, by detecting when SLOs are affected before they affect user experience. Tracing will also be extremely helpful when troubleshooting multi-tier applications, especially microservices.
Backups are also important. Of course, they can help in case of catastrophic outage or human error; but they can also save the day in case of e.g. ransomware attack, and as such, they are part of a good security posture. If you want a backup tool designed specifically for Kubernetes and Cloud Native apps, have a look at Velero.
Finally, don’t forget that at the end of the day, our clusters are essentially running our code, which means that we need to pay attention to that, too. This is not Kubernetes-specific, but you can check OWASP for resources that can help to analyze source code or compiled versions of code to find security flaws.