---
controls:
version: "eks-1.5.0"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
  - id: 3.1
    text: "Worker Node Configuration Files"
    checks:
      - id: 3.1.1
        text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Automated)"
        audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "644"
        remediation: |
          Run the below command (based on the file location on your system) on each worker node.
          For example,
          chmod 644 $kubeletkubeconfig
        scored: true
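      # The bitmask comparison passes when every permission bit set on the file is also
      # set in 644, so modes such as 600 or 640 pass while 664 or 755 fail.
      # A minimal verification sketch, assuming the default EKS AMI kubeconfig path:
      #   stat -c permissions=%a /var/lib/kubelet/kubeconfig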

      - id: 3.1.2
        text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
        audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
        tests:
          test_items:
            - flag: root:root
        remediation: |
          Run the below command (based on the file location on your system) on each worker node.
          For example,
          chown root:root $kubeletkubeconfig
        scored: true

      - id: 3.1.3
        text: "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated)"
        audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "644"
        remediation: |
          Run the following command (using the config file location identified in the Audit step)
          chmod 644 $kubeletconf
        scored: true

      - id: 3.1.4
        text: "Ensure that the kubelet configuration file ownership is set to root:root (Automated)"
        audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
        tests:
          test_items:
            - flag: root:root
        remediation: |
          Run the following command (using the config file location identified in the Audit step)
          chown root:root $kubeletconf
        scored: true
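      # Checks 3.1.1-3.1.4 can be remediated in one pass. A minimal sketch, assuming the
      # default EKS AMI locations for both files (substitute your paths if they differ):
      #   for f in /var/lib/kubelet/kubeconfig /etc/kubernetes/kubelet/kubelet-config.json; do
      #     chmod 644 "$f" && chown root:root "$f"
      #   done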

  - id: 3.2
    text: "Kubelet"
    checks:
      - id: 3.2.1
        text: "Ensure that the Anonymous Auth is Not Enabled (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: "--anonymous-auth"
              path: '{.authentication.anonymous.enabled}'
              set: true
              compare:
                op: eq
                value: false
        remediation: |
          Remediation Method 1:
          If configuring via the Kubelet config file, you first need to locate the file.
          To do this, SSH to each node and execute the following command to find the kubelet
          process:
          ps -ef | grep kubelet
          The output of the above command provides details of the active kubelet process, from
          which we can see the location of the configuration file provided to the kubelet service
          with the --config argument. The file can be viewed with a command such as more or
          less, like so:
          sudo less /path/to/kubelet-config.json
          Disable Anonymous Authentication by setting the following parameter:
          "authentication": { "anonymous": { "enabled": false } }

          Remediation Method 2:
          If using executable arguments, edit the kubelet service file on each worker node and
          ensure the below parameters are part of the KUBELET_ARGS variable string.
          For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
          Bottlerocket AMIs, this file can be found at
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
          you may need to look up documentation for your chosen operating system to determine
          which service manager is configured:
          --anonymous-auth=false

          For Both Remediation Steps:
          Based on your system, restart the kubelet service and check the service status.
          The following example is for operating systems using systemd, such as the Amazon
          EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
          command. If systemctl is not available then you will need to look up documentation for
          your chosen operating system to determine which service manager is configured:
          systemctl daemon-reload
          systemctl restart kubelet.service
          systemctl status kubelet -l
        scored: true
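      # An illustrative kubelet-config.json fragment with anonymous authentication
      # disabled; kind and apiVersion are the standard KubeletConfiguration values,
      # all other fields are omitted:
      #   {
      #     "kind": "KubeletConfiguration",
      #     "apiVersion": "kubelet.config.k8s.io/v1beta1",
      #     "authentication": { "anonymous": { "enabled": false } }
      #   }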

      - id: 3.2.2
        text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --authorization-mode
              path: '{.authorization.mode}'
              set: true
              compare:
                op: nothave
                value: AlwaysAllow
        remediation: |
          Remediation Method 1:
          If configuring via the Kubelet config file, you first need to locate the file.
          To do this, SSH to each node and execute the following command to find the kubelet
          process:
          ps -ef | grep kubelet
          The output of the above command provides details of the active kubelet process, from
          which we can see the location of the configuration file provided to the kubelet service
          with the --config argument. The file can be viewed with a command such as more or
          less, like so:
          sudo less /path/to/kubelet-config.json
          Enable Webhook Authentication by setting the following parameter:
          "authentication": { "webhook": { "enabled": true } }
          Next, set the Authorization Mode to Webhook by setting the following parameter:
          "authorization": { "mode": "Webhook }
          Finer detail of the authentication and authorization fields can be found in the
          Kubelet Configuration documentation.

          Remediation Method 2:
          If using executable arguments, edit the kubelet service file on each worker node and
          ensure the below parameters are part of the KUBELET_ARGS variable string.
          For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
          Bottlerocket AMIs, this file can be found at
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
          you may need to look up documentation for your chosen operating system to determine
          which service manager is configured:
          --authentication-token-webhook
          --authorization-mode=Webhook

          For Both Remediation Steps:
          Based on your system, restart the kubelet service and check the service status.
          The following example is for operating systems using systemd, such as the Amazon
          EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
          command. If systemctl is not available then you will need to look up documentation for
          your chosen operating system to determine which service manager is configured:
          systemctl daemon-reload
          systemctl restart kubelet.service
          systemctl status kubelet -l
        scored: true
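      # The two settings from Remediation Method 1 combined into one illustrative
      # kubelet-config.json fragment (other fields omitted):
      #   {
      #     "authentication": { "webhook": { "enabled": true } },
      #     "authorization": { "mode": "Webhook" }
      #   }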

      - id: 3.2.3
        text: "Ensure that a Client CA File is Configured (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --client-ca-file
              path: '{.authentication.x509.clientCAFile}'
              set: true
        remediation: |
          Remediation Method 1:
          If configuring via the Kubelet config file, you first need to locate the file.
          To do this, SSH to each node and execute the following command to find the kubelet
          process:
          ps -ef | grep kubelet
          The output of the above command provides details of the active kubelet process, from
          which we can see the location of the configuration file provided to the kubelet service
          with the --config argument. The file can be viewed with a command such as more or
          less, like so:
          sudo less /path/to/kubelet-config.json
          Configure the client certificate authority file by setting the following parameter
          appropriately:
          "authentication": { "x509": {"clientCAFile": <path/to/client-ca-file> } }"

          Remediation Method 2:
          If using executable arguments, edit the kubelet service file on each worker node and
          ensure the below parameters are part of the KUBELET_ARGS variable string.
          For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
          Bottlerocket AMIs, this file can be found at
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
          you may need to look up documentation for your chosen operating system to determine
          which service manager is configured:
          --client-ca-file=<path/to/client-ca-file>

          For Both Remediation Steps:
          Based on your system, restart the kubelet service and check the service status.
          The following example is for operating systems using systemd, such as the Amazon
          EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
          command. If systemctl is not available then you will need to look up documentation for
          your chosen operating system to determine which service manager is configured:
          systemctl daemon-reload
          systemctl restart kubelet.service
          systemctl status kubelet -l
        scored: true
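      # An illustrative config-file equivalent of the --client-ca-file argument; the CA
      # path below is an assumption based on the EKS optimised AMI layout, so verify it
      # on your nodes before use:
      #   "authentication": { "x509": { "clientCAFile": "/etc/kubernetes/pki/ca.crt" } }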

      - id: 3.2.4
        text: "Ensure that the --read-only-port is disabled (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: "--read-only-port"
              path: '{.readOnlyPort}'
              set: true
              compare:
                op: eq
                value: 0
        remediation: |
          If modifying the Kubelet config file, edit the kubelet-config.json file
          /etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to 0
          "readOnlyPort": 0
          If using executable arguments, edit the kubelet service file
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
          worker node and add the below parameter at the end of the KUBELET_ARGS variable
          string.
          --read-only-port=0

          Based on your system, restart the kubelet service and check status
          systemctl daemon-reload
          systemctl restart kubelet.service
          systemctl status kubelet -l
        scored: true
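      # A quick functional check, in addition to the audit above: the kubelet read-only
      # port defaults to 10255, so with readOnlyPort set to 0 a request from the node
      # itself should be refused. A sketch:
      #   curl -s http://localhost:10255/pods || echo "read-only port disabled"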

      - id: 3.2.5
        text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --streaming-connection-idle-timeout
              path: '{.streamingConnectionIdleTimeout}'
              set: true
              compare:
                op: noteq
                value: 0
            - flag: --streaming-connection-idle-timeout
              path: '{.streamingConnectionIdleTimeout}'
              set: false
          bin_op: or
        remediation: |
          Remediation Method 1:
          If modifying the Kubelet config file, edit the kubelet-config.json file
          /etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to a
          non-zero value in the format of #h#m#s
          "streamingConnectionIdleTimeout": "4h0m0s"
          You should ensure that the kubelet service file
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not
          specify a --streaming-connection-idle-timeout argument because it would
          override the Kubelet config file.

          Remediation Method 2:
          If using executable arguments, edit the kubelet service file
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
          worker node and add the below parameter at the end of the KUBELET_ARGS variable
          string.
          --streaming-connection-idle-timeout=4h0m0s

          Remediation Method 3:
          If using the API configz endpoint, consider searching for the status of
          "streamingConnectionIdleTimeout" by extracting the live configuration from the
          nodes running kubelet.
          See the detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet
          in a Live Cluster, and then rerun the curl statement from the audit process to check
          for kubelet configuration changes:
          kubectl proxy --port=8001 &
          export HOSTNAME_PORT=localhost:8001 (example host and port number)
          export NODE_NAME=ip-192-168-31-226.ec2.internal (example node name from "kubectl get nodes")
          curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"

          For all three remediations:
          Based on your system, restart the kubelet service and check status
          systemctl daemon-reload
          systemctl restart kubelet.service
          systemctl status kubelet -l
        scored: true
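      # The configz output used in Remediation Method 3 nests the kubelet settings under
      # .kubeletconfig; a sketch for extracting just the timeout value, assuming jq is
      # available and the proxy variables from the step above are set:
      #   curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz" \
      #     | jq '.kubeletconfig.streamingConnectionIdleTimeout'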

      - id: 3.2.6
        text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --make-iptables-util-chains
              path: '{.makeIPTablesUtilChains}'
              set: true
              compare:
                op: eq
                value: true
            - flag: --make-iptables-util-chains
              path: '{.makeIPTablesUtilChains}'
              set: false
          bin_op: or
        remediation: |
          Remediation Method 1:
          If modifying the Kubelet config file, edit the kubelet-config.json file
          /etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
          true
          "makeIPTablesUtilChains": true
          Ensure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
          does not set the --make-iptables-util-chains argument because that would
          override your Kubelet config file.

          Remediation Method 2:
          If using executable arguments, edit the kubelet service file
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
          worker node and add the below parameter at the end of the KUBELET_ARGS variable
          string.
          --make-iptables-util-chains=true

          Remediation Method 3:
          If using the API configz endpoint, consider searching for the status of
          "makeIPTablesUtilChains": true by extracting the live configuration from the nodes
          running kubelet.
          See the detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet
          in a Live Cluster, and then rerun the curl statement from the audit process to check
          for kubelet configuration changes:
          kubectl proxy --port=8001 &
          export HOSTNAME_PORT=localhost:8001 (example host and port number)
          export NODE_NAME=ip-192-168-31-226.ec2.internal (example node name from "kubectl get nodes")
          curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"

          For all three remediations:
          Based on your system, restart the kubelet service and check status
          systemctl daemon-reload
          systemctl restart kubelet.service
          systemctl status kubelet -l
        scored: true
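      # The same configz technique applies here; a sketch for checking the live value,
      # assuming jq and the proxy variables from Remediation Method 3 above:
      #   curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz" \
      #     | jq '.kubeletconfig.makeIPTablesUtilChains'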

      - id: 3.2.7
        text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --event-qps
              path: '{.eventRecordQPS}'
              set: true
              compare:
                op: gte
                value: 0
        remediation: |
          If using a Kubelet config file, edit the file to set eventRecordQPS to an
          appropriate level.
          If using command line arguments, edit the kubelet service file
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each worker node
          and add the below parameter at the end of the KUBELET_ARGS variable string:
          --event-qps=<appropriate level>
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: true
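      # For eventRecordQPS a value of 0 means no rate limit, which is why it guarantees
      # full event capture; a positive value caps event creations per second. An
      # illustrative kubelet-config.json fragment:
      #   "eventRecordQPS": 0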

      - id: 3.2.8
        text: "Ensure that the --rotate-certificates argument is not present or is set to true (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --rotate-certificates
              path: '{.rotateCertificates}'
              set: true
              compare:
                op: eq
                value: true
            - flag: --rotate-certificates
              path: '{.rotateCertificates}'
              set: false
          bin_op: or
        remediation: |
          Remediation Method 1:
          If modifying the Kubelet config file, edit the kubelet-config.json file
          /etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
          true
          "RotateCertificate":true
          Additionally, ensure that the kubelet service file
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set the
          --rotate-certificates executable argument to false because this would override
          the Kubelet config file.

          Remediation Method 2:
          If using executable arguments, edit the kubelet service file
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
          worker node and add the below parameter at the end of the KUBELET_ARGS variable
          string.
          --rotate-certificates=true

          For Both Remediation Steps:
          Based on your system, restart the kubelet service and check the service status:
          systemctl daemon-reload
          systemctl restart kubelet.service
          systemctl status kubelet -l
        scored: true
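      # Client certificate rotation can also be observed from the API server side: a
      # kubelet with rotation enabled periodically files CSRs against the
      # kubernetes.io/kube-apiserver-client-kubelet signer. A verification sketch:
      #   kubectl get csr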

      - id: 3.2.9
        text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: RotateKubeletServerCertificate
              path: '{.featureGates.RotateKubeletServerCertificate}'
              set: true
              compare:
                op: eq
                value: true
        remediation: |
          Remediation Method 1:
          If modifying the Kubelet config file, edit the kubelet-config.json file
          /etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
          true

          "featureGates": {
            "RotateKubeletServerCertificate":true
          },

          Additionally, ensure that the kubelet service file
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set
          the --rotate-kubelet-server-certificate executable argument to false because
          this would override the Kubelet config file.

          Remediation Method 2:
          If using executable arguments, edit the kubelet service file
          /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
          worker node and add the below parameter at the end of the KUBELET_ARGS variable
          string.
          --rotate-kubelet-server-certificate=true

          Remediation Method 3:
          If using the API configz endpoint, consider searching for the status of
          "RotateKubeletServerCertificate" by extracting the live configuration from the
          nodes running kubelet.
          See the detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet
          in a Live Cluster, and then rerun the curl statement from the audit process to check
          for kubelet configuration changes:
          kubectl proxy --port=8001 &
          export HOSTNAME_PORT=localhost:8001 (example host and port number)
          export NODE_NAME=ip-192-168-31-226.ec2.internal (example node name from "kubectl get nodes")
          curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"

          For all three remediation methods:
          Restart the kubelet service and check status. The example below is for when using
          systemctl to manage services:
          systemctl daemon-reload
          systemctl restart kubelet.service
          systemctl status kubelet -l
        scored: true
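      # Serving certificate rotation can be observed the same way: with the feature gate
      # enabled, kubelets file CSRs against the kubernetes.io/kubelet-serving signer,
      # which may require manual approval depending on your cluster's approver setup.
      # A verification sketch:
      #   kubectl get csr | grep kubelet-serving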