On Securing the Kubernetes Dashboard

Joe Beda
Heptio
Feb 28, 2018 · 13 min read


Recently, Tesla (the car company) was alerted by security firm RedLock that its Kubernetes infrastructure had been compromised. The attackers were using Tesla’s infrastructure resources to mine cryptocurrency. This type of attack has been called “cryptojacking”.

The vector of attack in this case was a Kubernetes Dashboard that was exposed to the general internet with no authentication and elevated privileges. Not only this, but core AWS API keys and secrets were visible. How do you prevent this from happening to you?

This post is a cleaned up version of the content that was covered in my weekly live stream. See TGIK8s 027 if you want to see me demonstrate this stuff with other fun asides and digressions.

Photo by Sergey Svechnikov on Unsplash

How did the Tesla attack happen?

The details in the blog post disclosing the attack are a bit sparse, so I’m going to do my best to speculate within reason. Two things must have been true for this to happen.

First, the Kubernetes Dashboard had elevated privileges on the cluster. This happens either by running a cluster without RBAC or by explicitly granting the dashboard’s ServiceAccount elevated privileges.

Second, the Kubernetes Dashboard was exposed to the internet. By default the Dashboard isn’t explicitly exposed outside of the cluster. However, the ease with which users can expose services makes this all too easy to do. Exposing the dashboard can often be a one-line change in a Kubernetes YAML file.

Don’t ignore the basics

Many times, avoiding attacks like this is a matter of simple security hygiene. To start, run a recent version of Kubernetes with RBAC (Role-Based Access Control) turned on.

Recent versions of Kubernetes have made huge strides in securing the cluster. When set up properly, communication between parts of the cluster is now authenticated and encrypted. There are now bootstrap mechanisms (used by kubeadm and other projects) that help to establish that encryption. In addition, the default configuration for installing the dashboard has been continually locked down over the past few versions.

RBAC is an absolute must for any secure installation of Kubernetes. Most installers and distributions, at this point, will enable RBAC for new clusters. Make sure you take advantage of that. Note, however, that neither AKS (Azure Kubernetes Service) nor kops enables RBAC by default at the time of this writing. (AKS is adding support soon; RBAC support is a gate for general availability of the service.)
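
If you aren’t sure where an existing cluster stands, a quick first check is to see whether the RBAC API group is being served:

# If this prints rbac.authorization.k8s.io/v1 (or a beta version on older
# clusters), the RBAC API is being served. The authoritative check is the
# API server's --authorization-mode flag.
kubectl api-versions | grep rbac.authorization.k8s.io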

Security isn’t just for production! In the world of infrastructure, the intent of your cluster doesn’t matter to attackers. While you may not be exposing user data, you are exposing your infrastructure to attacks like cryptojacking. That can easily inflate your cloud bill. It can also be the first step in a deeper, more targeted attack on your “production” cluster.

Brad Geesaman gave a great talk on hardening Kubernetes at KubeCon in Austin. The video and slides are available and it is well worth your time to watch.

Accessing the dashboard

By default, the Dashboard isn’t accessible outside the cluster. In fact, in some configurations (such as the Quick Start for Kubernetes by Heptio on AWS), network policy is used to restrict ingress to the dashboard within the cluster, so that other semi-trusted applications inside the cluster cannot elevate privileges through a misconfigured dashboard.

The easiest and most common way to access the dashboard is through kubectl proxy. This creates a local web server that securely proxies requests to the dashboard through the Kubernetes API server.

This is an easy two-step process, assuming you already have kubectl authentication to your cluster set up. Simply type kubectl proxy and then navigate your browser to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ (on older setups you may need to use http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy/ instead).
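
Concretely, that looks like this:

# Start a local web server that proxies to the API server
# (listens on localhost:8001 by default)
kubectl proxy

# Then point your browser at the dashboard URL above.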

But what if you want to access the dashboard without having to muck with the command line? You may be tempted to simply set type: LoadBalancer on the dashboard service. Don’t do this! It will (usually) make your dashboard accessible to the world. On a well-functioning cluster with RBAC this isn’t an immediate disaster, but it is still not advised. First, if you accidentally empower the dashboard ServiceAccount (more below), you’ve given the keys to your cluster to the public. But even if you don’t do that, you are still at risk if there is a bug in the dashboard itself, which could allow an attacker to hijack the dashboard and compromise the credentials of real users as they use it.
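
It only takes a second to confirm your dashboard Service hasn’t already been switched to something more exposed:

# ClusterIP means the dashboard is only reachable from inside the cluster;
# LoadBalancer or NodePort means it is exposed more broadly.
kubectl -n kube-system get service kubernetes-dashboard \
  -o jsonpath='{.spec.type}'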

If you must expose the dashboard without kubectl proxy there are two options:

  • Preferred: Use an authenticating proxy (example in the tutorial section).
  • Expose the dashboard using a type: NodePort service and secure your network. This will make the dashboard available to anyone that can directly reach any cluster node. This may or may not be appropriate depending on your configuration (a sketch of the change follows below).
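
If you do take the NodePort route (and have secured your network accordingly), a minimal sketch of the change is a one-line patch:

# Switch the dashboard Service to NodePort. Anyone who can reach any
# cluster node on the allocated port can now reach the dashboard.
kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'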

If you want to learn more about the different types of Kubernetes Services, check out TGIK 002. You can also refer to the Service chapter in a book I co-authored, Kubernetes: Up and Running. That chapter is available in an excerpt offered for free by Heptio.

Authenticating and the dashboard

All authentication and authorization in Kubernetes is done at the API server. Authentication in Kubernetes is incredibly flexible and makes it possible to integrate almost any authentication system. However, because of this flexibility, there is currently no generic way for users to delegate their access to another application.

Components within the Kubernetes control plane use a synthetic user called a “ServiceAccount” to authenticate to the API server. When using a recent version of the recommended deployment YAML for the dashboard, the dashboard ServiceAccount has very few privileges on the cluster. This means that on its own, the dashboard has very little power. This is great from a security perspective but not so great from an “able to do anything at all” perspective.
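
You can see just how little power that is by impersonating the dashboard’s ServiceAccount (this assumes you have impersonation rights on the cluster):

# With a recent recommended deployment YAML this should answer "no"
kubectl auth can-i list pods \
  --as=system:serviceaccount:kube-system:kubernetes-dashboard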

The Dashboard GitHub repo has a useful page covering the technical details of authentication and authorization for the dashboard.

Avoid: empowering the dashboard ServiceAccount

Because the ServiceAccount for the dashboard has few permissions, the dashboard seems broken with many “permission denied” errors. Users often solve this by granting the ServiceAccount “root” privileges on the cluster. This can be a one line admin operation. It is even referenced (with caveats) on the dashboard wiki. Please do not give the dashboard root privileges. It is almost never appropriate for any real installation of Kubernetes to give the dashboard root access. Even in “development” or “toy” clusters this can create bad habits.
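
For the record, the one-liner in question looks something like the following. It is shown here so you can recognize it, not so you can run it:

# DO NOT DO THIS: it binds the dashboard's ServiceAccount to cluster-admin,
# which is exactly the misconfiguration behind attacks like the one on Tesla.
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard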

Preferred: per-user credentials

The current best way to make the dashboard work is to give it credentials for a user only when that user is active. There are two main ways that this can be done.

The first way to provide credentials to the dashboard is via the login page. If you access the dashboard without credentials, it’ll show you a login page. The goal of this login page is to capture credentials from the user, encrypt them, and store them in a cookie for future access.

One gotcha: the login page only works if you are using token auth to access the API server. This is often not the case when first bootstrapping a cluster. For instance, there is no generic token-based authentication set up with a stock kubeadm cluster. (Basic username:password auth is also supported by the dashboard, but its usage is discouraged.)

The second mechanism to get your credentials to the dashboard is to have something that is sitting in front of the Dashboard set an “Authorization” header for you. Whatever is in that header is then passed on to the Kubernetes API server. This can be done with a browser extension (such as Requestly) or via an authenticating proxy (more on that later).
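
The header itself is the standard bearer token form (the value below is a placeholder):

Authorization: Bearer <your-token-here>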

With both of these approaches, token expiration can also be a problem. Oftentimes the token used is part of an OAuth flow that the dashboard is not aware of. This means that the login page will reset after a few minutes to a few hours, and the user is then required to log in again with an up-to-date token.

Using a ServiceAccount token

One way to work around both of these issues and get a long-lived token is to create a ServiceAccount and extract and use its token. This is a slight abuse of the ServiceAccount mechanism and there may be better ways in the future. There may also be other options depending on whether and how you have alternative authentication mechanisms implemented in your cluster.

To create a service account to access the cluster, do something like this:

# Create the service account in the current namespace
# (we assume default)
kubectl create serviceaccount my-dashboard-sa

# Give that service account root on the cluster
kubectl create clusterrolebinding my-dashboard-sa \
  --clusterrole=cluster-admin \
  --serviceaccount=default:my-dashboard-sa

# Find the secret that was created to hold the token for the SA
kubectl get secrets

# Show the contents of the secret to extract the token
kubectl describe secret my-dashboard-sa-token-xxxxx

The token in that service account is a root password! Protect it as such. You can now put that in the login screen (or configure Requestly) and have reliable authentication to the dashboard.
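
If you’d rather script the extraction than read the token out of kubectl describe, something like this works for the ServiceAccount created above:

# Look up the name of the auto-created token Secret for the ServiceAccount,
# then pull out the token field and base64-decode it.
kubectl get secret $(kubectl get serviceaccount my-dashboard-sa \
  -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode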

Astute readers may wonder why it is better to use a ServiceAccount token than to just empower the dashboard’s own ServiceAccount. There are a couple of reasons. First, with tokens it is possible to create multiple ServiceAccounts with more limited permissions. Oftentimes users can create ServiceAccounts themselves that mirror their own permissions. Second, if the dashboard were accidentally exposed publicly, the default set of permissions is limited; anyone accessing the dashboard is required to bring extra information to make it work.

Conclusion

The Dashboard is a great way to visualize and understand what is going on in your cluster. But that power can cut both ways. If you aren’t careful it is very easy to misconfigure it to give too much access to the wrong people. Kubernetes is a fast moving project and it is easy to find out of date information that gets things working but in an insecure way. From now on, make sure you use the latest security features for your cluster (RBAC, network policy, etc.) and think hard before you expose anything outside your cluster.

You can start putting this into action right now with the following tutorial for using oauth2_proxy in front of the Kubernetes Dashboard. Also, join me most Fridays at 1pm Pacific for TGI Kubernetes. If you or your company needs help getting going with Kubernetes, please reach out to us at heptio.com.

Tutorial: Screening with oauth2_proxy

The best way to expose the Kubernetes Dashboard (or any other dashboard, like Jenkins) is to use an authenticating proxy. This is a proxy that sits in front of a service and only lets traffic through if the user has authenticated. Bitly has been nice enough to open source their OAuth2-based authenticating proxy, oauth2_proxy. It works pretty well with the Kubernetes Dashboard.

In this example we are also going to be using GitHub as our authentication provider. oauth2_proxy supports multiple providers; refer to its documentation for details.

When we are done, the request flow will look like this: the Ingress will use Let’s Encrypt to get certificates for use over the internet, the oauth2_proxy will then authenticate the user with GitHub (this will include redirecting the user to GitHub), and finally the traffic will pass to the dashboard, which in turn talks to the API server.

Step 1: Have a cluster with Ingress and TLS configured

To start with, you need a cluster that can expose services with TLS on a URL you control. In this case I’m assuming you are using Contour with Let’s Encrypt and Jetstack’s cert-manager. Our own Dave Cheney just wrote a blog post on doing exactly this; please refer to that.

For this tutorial we assume you are hosting your dashboard at k8s.i.example.com. The “i” in that domain stands for “internal” and is a good way to signal to users that this isn’t public facing. Replace it with the correct domain wherever you see it in these instructions.

Step 2: Create a GitHub app

Go to https://github.com/settings/developers and create a new OAuth application. Users will see this information when logging in through the proxy, so make sure it is something they’ll trust. The key thing to get right is the callback URL: set it to https://k8s.i.example.com/oauth2/callback.

Out of this you’ll get a Client ID and a Client Secret.

Step 3: Create a Kubernetes Secret for these values

Run the following command with your values substituted in:

kubectl create secret generic dashboard-proxy-secret \
  -o yaml --dry-run \
  -n kube-system \
  --from-literal=client-id=97a5d47e775f844b06d0 \
  --from-literal=client-secret=f2fbea21867b40b4964716eedf23eccdab5a2487 \
  --from-literal=cookie=$(openssl rand -hex 16) > dashboard-proxy-secret.yaml

Now apply this with kubectl apply -f dashboard-proxy-secret.yaml.
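
You can confirm the three keys landed (without printing their values) with:

# Lists the keys (client-id, client-secret, cookie) and their sizes in bytes
kubectl -n kube-system describe secret dashboard-proxy-secret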

If you are checking this into your source control repository, you may want to investigate a system such as Sealed Secrets from Bitnami.

Step 4: Launch the proxy with Ingress

We are going to create four objects here with a bunch of YAML:

  • A Deployment for the proxy itself. This will actually run the proxy. Its configuration is a combination of command line flags and environment variables pulled from the Secret we created in the previous step. Make sure you fix up the redirect-url and github-org flags to match your configuration.
  • A Service object. This is useful so that the Ingress system can find the proxy.
  • A Certificate object. This is used to ask cert-manager to provision a certificate for us with Let’s Encrypt. Make sure you fix up the DNS name to the one you are using.
  • An Ingress configuration. This tells our ingress system to route traffic to the proxy.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: dashboard-proxy
  name: dashboard-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard-proxy
  template:
    metadata:
      labels:
        app: dashboard-proxy
    spec:
      containers:
      - args:
        - --cookie-secure=false
        - --provider=github
        - --upstream=http://kubernetes-dashboard.kube-system.svc.cluster.local
        - --http-address=0.0.0.0:8080
        - --redirect-url=https://k8s.i.example.com/oauth2/callback
        - --email-domain=*
        - --github-org=YOUR-ORG
        - --pass-basic-auth=false
        - --pass-access-token=false
        env:
        - name: OAUTH2_PROXY_COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              key: cookie
              name: dashboard-proxy-secret
        - name: OAUTH2_PROXY_CLIENT_ID
          valueFrom:
            secretKeyRef:
              key: client-id
              name: dashboard-proxy-secret
        - name: OAUTH2_PROXY_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              key: client-secret
              name: dashboard-proxy-secret
        image: a5huynh/oauth2_proxy:2.2
        name: oauth-proxy
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: dashboard-proxy
  name: dashboard-proxy
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: dashboard-proxy
  type: ClusterIP
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: dashboard-proxy-tls
  namespace: kube-system
spec:
  secretName: dashboard-proxy-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: k8s.i.example.com
  dnsNames:
  - k8s.i.example.com
  acme:
    config:
    - http01: {}
      domains:
      - k8s.i.example.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-proxy
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: contour
spec:
  rules:
  - host: k8s.i.example.com
    http:
      paths:
      - backend:
          serviceName: dashboard-proxy
          servicePort: 80
        path: /
  tls:
  - hosts:
    - k8s.i.example.com
    secretName: dashboard-proxy-tls

Step 5: Use HTTP between the proxy and dashboard

By default the dashboard configures HTTPS with a self-signed certificate. This is a great approach! In a situation where you can’t have a public certificate, a self-signed cert is better than nothing: it provides protection from eavesdropping, though not from man-in-the-middle attacks.

The problem is that oauth2_proxy doesn’t know how to deal with self-signed certificates on upstream services. This means we have to expose the dashboard over plain old HTTP instead of HTTPS.

We are going to do this by editing both the Dashboard Deployment and the Dashboard Service.

First, edit the Deployment with kubectl -n kube-system edit deployment kubernetes-dashboard. This will launch your editor so that you can reconfigure the Deployment. Make the following changes:

  • Change all instances of 8443 to 9090
  • Set the livenessProbe scheme to HTTP (instead of HTTPS)
  • Set the arguments (and remove auto-generate-certificates):
- --insecure-bind-address=0.0.0.0
- --insecure-port=9090
- --enable-insecure-login

Now edit the Service with kubectl -n kube-system edit service kubernetes-dashboard. Fix up the ports section to look like this:

ports:
- port: 80
  protocol: TCP
  targetPort: 9090

Finally, some clusters may be extra paranoid about access to the dashboard. The Heptio quickstart, for example, implements a network policy to block all access to the dashboard. You can disable this with kubectl -n kube-system delete networkpolicy deny-dashboard. Depending on your threat model, you could instead write a replacement network policy that only allows traffic to the dashboard from the proxy.
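
A minimal sketch of such a replacement policy, assuming the dashboard pods carry the default k8s-app=kubernetes-dashboard label and the proxy pods are labeled app=dashboard-proxy as in the Deployment above:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashboard-from-proxy-only
  namespace: kube-system
spec:
  # Select the dashboard pods...
  podSelector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  policyTypes:
  - Ingress
  # ...and only admit traffic from the oauth2_proxy pods on the
  # (now plain HTTP) dashboard port.
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: dashboard-proxy
    ports:
    - protocol: TCP
      port: 9090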

One note: if you still want to access the dashboard with kubectl proxy after this you have to go to a slightly different URL: http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy/#!/login.

Step 6: Test it out

If all goes well you can hit https://k8s.i.example.com and get a sign-in page. Click through and grant the application access to your identity (you also have to grant access to the GitHub org in question). You should then be forwarded to the Kubernetes Dashboard, where you can use your token as necessary.

At some point in the future, it should be possible to have the oauth2_proxy forward the authentication to the API Server and have the API Server trust that token for auth. When that is possible we can skip the Dashboard login screen altogether!

The technique of using oauth2_proxy is useful for more than just the dashboard. This is a great way to secure access to any sort of internal web service without having to set up a VPN.
