
Add cis 1.6 (#678)

* Add new cis version yamls

Add new cis version yamls

* Add new cis version yamls

* Add cis-1.6 to versions table

* support version mapping cis-1.6

* support version mapping cis-1.6

* Update controlplane.yaml

* Update etcd.yaml

* Update node.yaml

* Update policies.yaml

* Create job.data

* Create job-node.data

* Create job-master.data

* Create add-tls-kind.yaml

* Change node version to 1.15.0

* Add tests for cis-1.6

* Delete node_only.yaml

* Change tests 1.1.19-1.1.21

Change 1.1.19-1.1.21 because of failing tests

* Update job.data

* Update job-master.data

* Update job-master.data

* Update job.data

* fix 1.2.35 remediation 

tabs instead of spaces

* Update job-master.data

* Remove extra space

* Update job.data

* Create node_only.yaml

* Add tests for cis-1.6

Add tests for cis-1.6 and change some from 1.5 to 1.6

* Fix typo

* Add mapping for cis-1.6

* Remove extra space in 1.2.35 remediation

* Update job.data

* Update job-master.data

* Fix typo in 1.2.35

* Remove trailing spaces

* Remove trailing spaces

* Remove trailing spaces

* Remove trailing spaces

* Add version 1.19 kubernetes support

* Add version 1.19 kubernetes support

* Add version 1.19 kubernetes support
yoavrotems 2020-09-17 18:54:43 +03:00 committed by GitHub
parent 041c437339
commit 7280438eb5
16 changed files with 2582 additions and 10 deletions

README.md

@@ -58,7 +58,8 @@ kube-bench supports the tests for Kubernetes as defined in the [CIS Kubernetes B
|---|---|---|
| [1.3.0](https://workbench.cisecurity.org/benchmarks/602) | cis-1.3 | 1.11-1.12 |
| [1.4.1](https://workbench.cisecurity.org/benchmarks/2351) | cis-1.4 | 1.13-1.14 |
-| [1.5.1](https://workbench.cisecurity.org/benchmarks/4892) | cis-1.5 | 1.15- |
+| [1.5.1](https://workbench.cisecurity.org/benchmarks/4892) | cis-1.5 | 1.15 |
+| [1.6.0](https://workbench.cisecurity.org/benchmarks/4834) | cis-1.6 | 1.16- |
| [GKE 1.0.0](https://workbench.cisecurity.org/benchmarks/4536) | gke-1.0 | GKE |
| [EKS 1.0.0](https://workbench.cisecurity.org/benchmarks/5190) | eks-1.0 | EKS |
| Red Hat OpenShift hardening guide | rh-0.7 | OCP 3.10-3.11 |
@@ -122,6 +123,7 @@ The following table shows the valid targets based on the CIS Benchmark version.
| cis-1.3| master, node |
| cis-1.4| master, node |
| cis-1.5| master, controlplane, node, etcd, policies |
+| cis-1.6| master, controlplane, node, etcd, policies |
| gke-1.0| master, controlplane, node, etcd, policies, managedservices |
| eks-1.0| controlplane, node, policies, managedservices |

2
cfg/cis-1.6/config.yaml Normal file

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml

35
cfg/cis-1.6/controlplane.yaml Normal file

@@ -0,0 +1,35 @@
---
controls:
version: 1.6
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
type: "manual"
remediation: |
Create an audit policy file for your cluster.
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Consider modification of the audit policy in use on the cluster to include these items, at a
minimum.
scored: false

124
cfg/cis-1.6/etcd.yaml Normal file

@@ -0,0 +1,124 @@
---
controls:
version: 1.6
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration Files"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
- flag: "--key-file"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/cert-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
set: false
- flag: "--auto-tls"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
- flag: "--peer-key-file"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--peer-client-cert-auth"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
set: false
- flag: "--peer-auto-tls"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false

982
cfg/cis-1.6/master.yaml Normal file

@@ -0,0 +1,982 @@
---
controls:
version: 1.6
id: 1
text: "Master Node Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Master Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the
master node.
For example, chmod 644 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod 644 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod 644 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then stat -c permissions=%a $etcdconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod 644 $etcdconf
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then stat -c %U:%G $etcdconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)"
audit: "stat -c permissions=%a <path/to/cni/files>"
type: "manual"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod 644 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: "stat -c %U:%G <path/to/cni/files>"
type: "manual"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%' | xargs stat -c permissions=%a
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the below command:
ps -ef | grep etcd
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%' | xargs stat -c %U:%G
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the below command:
ps -ef | grep etcd
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
scored: true
- id: 1.1.13
text: "Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c permissions=%a /etc/kubernetes/admin.conf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod 644 /etc/kubernetes/admin.conf
scored: true
- id: 1.1.14
text: "Ensure that the admin.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c %U:%G /etc/kubernetes/admin.conf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root /etc/kubernetes/admin.conf
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/scheduler.conf; then stat -c permissions=%a /etc/kubernetes/scheduler.conf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod 644 /etc/kubernetes/scheduler.conf
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/scheduler.conf; then stat -c %U:%G /etc/kubernetes/scheduler.conf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root /etc/kubernetes/scheduler.conf
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/controller-manager.conf; then stat -c permissions=%a /etc/kubernetes/controller-manager.conf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod 644 /etc/kubernetes/controller-manager.conf
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/controller-manager.conf; then stat -c %U:%G /etc/kubernetes/controller-manager.conf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root /etc/kubernetes/controller-manager.conf
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /etc/kubernetes/pki/ | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root root"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)"
audit: "find /etc/kubernetes/pki -name '*.crt' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod -R 644 /etc/kubernetes/pki/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /etc/kubernetes/pki -name '*.key' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: manual
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --basic-auth-file argument is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--basic-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the master node and remove the --basic-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the master node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.4
text: "Ensure that the --kubelet-https argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--kubelet-https"
compare:
op: eq
value: true
- flag: "--kubelet-https"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and remove the --kubelet-https parameter.
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the master node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.6
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the master node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.9
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --authorization-mode parameter to a value that includes RBAC,
for example:
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.10
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.12
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.13
text: "Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "SecurityContextDeny"
- flag: "--enable-admission-plugins"
compare:
op: has
value: "PodSecurityPolicy"
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...
scored: false
- id: 1.2.14
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the master node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.15
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.16
text: "Ensure that the admission control plugin PodSecurityPolicy is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "PodSecurityPolicy"
remediation: |
Follow the documentation and create Pod Security Policy objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the master node and set the --enable-admission-plugins parameter to a
value that includes PodSecurityPolicy:
--enable-admission-plugins=...,PodSecurityPolicy,...
Then restart the API Server.
scored: true
- id: 1.2.17
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the master node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.18
text: "Ensure that the --insecure-bind-address argument is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--insecure-bind-address"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and remove the --insecure-bind-address parameter.
scored: true
- id: 1.2.19
text: "Ensure that the --insecure-port argument is set to 0 (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--insecure-port"
compare:
op: eq
value: 0
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the below parameter.
--insecure-port=0
scored: true
- id: 1.2.20
text: "Ensure that the --secure-port argument is not set to 0 (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--secure-port"
compare:
op: gt
value: 0
- flag: "--secure-port"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.
scored: true
- id: 1.2.21
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.22
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example:
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.23
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --audit-log-maxage parameter to 30 or as an appropriate number of days:
--audit-log-maxage=30
scored: true
- id: 1.2.24
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value.
--audit-log-maxbackup=10
scored: true
- id: 1.2.25
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB:
--audit-log-maxsize=100
scored: true
- id: 1.2.26
text: "Ensure that the --request-timeout argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--request-timeout"
set: false
- flag: "--request-timeout"
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate and if needed.
For example,
--request-timeout=300s
scored: true
- id: 1.2.27
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.28
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the master node and set the --service-account-key-file parameter
to the public key file for service accounts:
--service-account-key-file=<filename>
scored: true
- id: 1.2.29
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the master node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.30
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the master node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.31
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the master node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.32
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the master node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.33
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the master node and set the --encryption-provider-config parameter to the path of that file: --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.34
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: "manual"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
scored: false
- id: 1.2.35
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: has
value: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
scored: false
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the master node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example:
--terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the master node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the master node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the master node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the master node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--feature-gates"
compare:
op: eq
value: "RotateKubeletServerCertificate=true"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the master node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the master node and ensure the correct value for the --bind-address parameter
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the master node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the master node and ensure the correct value for the --bind-address parameter
scored: true

452
cfg/cis-1.6/node.yaml Normal file

@@ -0,0 +1,452 @@
---
controls:
version: 1.6
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $proxykubeconfig
scored: false
- id: 4.1.4
text: "Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: false
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command to modify the file permissions of the --client-ca-file:
chmod 644 <filename>
scored: false
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
scored: true
- id: 4.1.10
text: "Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Ensure that the --read-only-port argument is set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set readOnlyPort to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --protect-kernel-defaults
path: '{.protectKernelDefaults}'
compare:
op: eq
value: true
remediation: |
If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.8
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin "
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set tlsCertFile to the location
of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.11
text: "Ensure that the --rotate-certificates argument is not set to false (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to add the line rotateCertificates: true or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.12
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: eq
value: true
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.13
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false

239
cfg/cis-1.6/policies.yaml Normal file

@@ -0,0 +1,239 @@
---
controls:
version: 1.6
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.2
text: "Pod Security Policies"
checks:
- id: 5.2.1
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that
the .spec.privileged field is omitted or set to false.
scored: false
- id: 5.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Manual)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostPID field is omitted or set to false.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Manual)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostIPC field is omitted or set to false.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Manual)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostNetwork field is omitted or set to false.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Manual)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.allowPrivilegeEscalation field is omitted or set to false.
scored: false
- id: 5.2.6
text: "Minimize the admission of root containers (Manual)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
UIDs not including 0.
scored: false
- id: 5.2.7
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.requiredDropCapabilities is set to include either NET_RAW or ALL.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with added capabilities (Manual)"
type: "manual"
remediation: |
Ensure that allowedCapabilities is not present in PSPs for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
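For the 5.2 checks above, a single restrictive PodSecurityPolicy sketch can illustrate the fields the remediations reference. This is a hedged example rather than a drop-in policy: the name is a placeholder, and the seLinux, supplementalGroups, fsGroup, and volumes entries are assumptions added only to make the object complete.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false                  # 5.2.1
  hostPID: false                     # 5.2.2
  hostIPC: false                     # 5.2.3
  hostNetwork: false                 # 5.2.4
  allowPrivilegeEscalation: false    # 5.2.5
  runAsUser:
    rule: MustRunAsNonRoot           # 5.2.6
  requiredDropCapabilities:
    - NET_RAW                        # 5.2.7
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - projected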
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports Network Policies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have Network Policies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
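As a hedged starting point for namespaces that still lack policies, a default-deny ingress NetworkPolicy might look like this; the namespace is a placeholder:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example-ns
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules are listed, so all ingress traffic is denied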
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using secrets as files over secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: false
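A minimal sketch of mounting a secret as a file instead of exposing it through environment variables; the pod, secret, and image names are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: secret-as-file
spec:
  containers:
    - name: app
      image: example/app:latest
      volumeMounts:
        - name: creds
          mountPath: /etc/creds    # the secret's keys appear here as files
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: app-credentials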
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and set up image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your pod definitions (Manual)"
type: "manual"
remediation: |
Seccomp is an alpha feature currently. By default, all alpha features are disabled. So, you
would need to enable alpha features in the apiserver by passing the
"--feature-gates=AllAlpha=true" argument.
Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_API_ARGS
parameter to "--feature-gates=AllAlpha=true"
KUBE_API_ARGS="--feature-gates=AllAlpha=true"
Based on your system, restart the kube-apiserver service. For example:
systemctl restart kube-apiserver.service
Use annotations to enable the docker/default seccomp profile in your pod definitions. An
example is as below:
apiVersion: v1
kind: Pod
metadata:
name: trustworthy-pod
annotations:
seccomp.security.alpha.kubernetes.io/pod: docker/default
spec:
containers:
- name: trustworthy-container
image: sotrustworthy:latest
scored: false
- id: 5.7.3
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false

View File

@ -194,9 +194,10 @@ version_mapping:
"1.13": "cis-1.4"
"1.14": "cis-1.4"
"1.15": "cis-1.5"
"1.16": "cis-1.5"
"1.17": "cis-1.5"
"1.18": "cis-1.5"
"1.16": "cis-1.6"
"1.17": "cis-1.6"
"1.18": "cis-1.6"
"1.19": "cis-1.6"
"eks-1.0": "eks-1.0"
"gke-1.0": "gke-1.0"
"ocp-3.10": "rh-0.7"
@ -215,6 +216,12 @@ target_mapping:
- "controlplane"
- "etcd"
- "policies"
"cis-1.6":
- "master"
- "node"
- "controlplane"
- "etcd"
- "policies"
"gke-1.0":
- "master"
- "node"

View File

@ -211,9 +211,10 @@ func TestMapToCISVersion(t *testing.T) {
{kubeVersion: "1.13", succeed: true, exp: "cis-1.4"},
{kubeVersion: "1.14", succeed: true, exp: "cis-1.4"},
{kubeVersion: "1.15", succeed: true, exp: "cis-1.5"},
{kubeVersion: "1.16", succeed: true, exp: "cis-1.5"},
{kubeVersion: "1.17", succeed: true, exp: "cis-1.5"},
{kubeVersion: "1.18", succeed: true, exp: "cis-1.5"},
{kubeVersion: "1.16", succeed: true, exp: "cis-1.6"},
{kubeVersion: "1.17", succeed: true, exp: "cis-1.6"},
{kubeVersion: "1.18", succeed: true, exp: "cis-1.6"},
{kubeVersion: "1.19", succeed: true, exp: "cis-1.6"},
{kubeVersion: "gke-1.0", succeed: true, exp: "gke-1.0"},
{kubeVersion: "ocp-3.10", succeed: true, exp: "rh-0.7"},
{kubeVersion: "ocp-3.11", succeed: true, exp: "rh-0.7"},
@ -398,6 +399,18 @@ func TestValidTargets(t *testing.T) {
targets: []string{"master", "node", "controlplane", "etcd", "policies"},
expected: true,
},
{
name: "cis-1.6 no Pikachu",
benchmark: "cis-1.6",
targets: []string{"master", "node", "controlplane", "etcd", "Pikachu"},
expected: false,
},
{
name: "cis-1.6 valid",
benchmark: "cis-1.6",
targets: []string{"master", "node", "controlplane", "etcd", "policies"},
expected: true,
},
{
name: "gke-1.0 valid",
benchmark: "gke-1.0",

View File

@ -69,7 +69,9 @@ version_mapping:
"1.13": "cis-1.4"
"1.14": "cis-1.4"
"1.15": "cis-1.5"
"1.16": "cis-1.5"
"1.17": "cis-1.5"
"1.16": "cis-1.6"
"1.17": "cis-1.6"
"1.18": "cis-1.6"
"1.19": "cis-1.6"
"ocp-3.10": "rh-0.7"
"ocp-3.11": "rh-0.7"

View File

@ -92,6 +92,10 @@ func TestCheckCIS15WithKind(t *testing.T) {
testCheckCISWithKind(t, "cis-1.5")
}
func TestCheckCIS16WithKind(t *testing.T) {
testCheckCISWithKind(t, "cis-1.6")
}
// This is a simple "diff" between 2 strings containing multiple lines.
// It's not a comprehensive diff between the 2 strings.
// It does not indicate when lines are deleted.

View File

@ -16,4 +16,4 @@ kubeadmConfigPatchesJson6902:
nodes:
# the control plane node config
- role: control-plane
image: "kindest/node:v1.18.0"
image: "kindest/node:v1.15.0"

View File

@ -0,0 +1,19 @@
---
apiVersion: kind.sigs.k8s.io/v1alpha3
kind: Cluster
networking:
apiServerAddress: "0.0.0.0"
kubeadmConfigPatchesJson6902:
- group: kubelet.config.k8s.io
version: v1beta1
kind: KubeletConfiguration
patch: |
- op: add
path: /tlsCipherSuites
value: ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"]
nodes:
# the control plane node config
- role: control-plane
image: "kindest/node:v1.18.0"

View File

@ -0,0 +1,183 @@
[INFO] 1 Master Node Security Configuration
[INFO] 1.1 Master Node Configuration Files
[PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
[PASS] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
[PASS] 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
[PASS] 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
[WARN] 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
[WARN] 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
[PASS] 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
[FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
[PASS] 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
[PASS] 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
[PASS] 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
[FAIL] 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
[PASS] 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)
[PASS] 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
[INFO] 1.2 API Server
[WARN] 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
[PASS] 1.2.2 Ensure that the --basic-auth-file argument is not set (Automated)
[PASS] 1.2.3 Ensure that the --token-auth-file parameter is not set (Automated)
[PASS] 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
[PASS] 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
[FAIL] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
[PASS] 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[PASS] 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
[PASS] 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
[WARN] 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
[PASS] 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
[WARN] 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
[WARN] 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
[PASS] 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
[PASS] 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
[FAIL] 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)
[PASS] 1.2.17 Ensure that the admission control plugin NodeRestriction is set (Automated)
[PASS] 1.2.18 Ensure that the --insecure-bind-address argument is not set (Automated)
[PASS] 1.2.19 Ensure that the --insecure-port argument is set to 0 (Automated)
[PASS] 1.2.20 Ensure that the --secure-port argument is not set to 0 (Automated)
[FAIL] 1.2.21 Ensure that the --profiling argument is set to false (Automated)
[FAIL] 1.2.22 Ensure that the --audit-log-path argument is set (Automated)
[FAIL] 1.2.23 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
[FAIL] 1.2.24 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
[FAIL] 1.2.25 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
[PASS] 1.2.26 Ensure that the --request-timeout argument is set as appropriate (Automated)
[PASS] 1.2.27 Ensure that the --service-account-lookup argument is set to true (Automated)
[PASS] 1.2.28 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
[PASS] 1.2.29 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
[PASS] 1.2.30 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
[PASS] 1.2.31 Ensure that the --client-ca-file argument is set as appropriate (Automated)
[PASS] 1.2.32 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
[WARN] 1.2.33 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
[WARN] 1.2.34 Ensure that encryption providers are appropriately configured (Manual)
[WARN] 1.2.35 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
[INFO] 1.3 Controller Manager
[WARN] 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
[FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated)
[PASS] 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
[PASS] 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
[PASS] 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
[FAIL] 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
[PASS] 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
[INFO] 1.4 Scheduler
[FAIL] 1.4.1 Ensure that the --profiling argument is set to false (Automated)
[PASS] 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
== Remediations ==
1.1.9 Run the below command (based on the file location on your system) on the master node.
For example,
chmod 644 <path/to/cni/files>
1.1.10 Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root <path/to/cni/files>
1.1.12 On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the below command:
ps -ef | grep etcd
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
1.1.19 Run the below command (based on the file location on your system) on the master node.
For example,
chown -R root:root /etc/kubernetes/pki/
1.2.1 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--anonymous-auth=false
1.2.6 Follow the Kubernetes documentation and setup the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
1.2.10 Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
1.2.12 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
1.2.13 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...
1.2.16 Follow the documentation and create Pod Security Policy objects as per your environment.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to a
value that includes PodSecurityPolicy:
--enable-admission-plugins=...,PodSecurityPolicy,...
Then restart the API Server.
1.2.21 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--profiling=false
1.2.22 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example:
--audit-log-path=/var/log/apiserver/audit.log
1.2.23 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-maxage parameter to 30 or as an appropriate number of days:
--audit-log-maxage=30
1.2.24 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value.
--audit-log-maxbackup=10
1.2.25 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB:
--audit-log-maxsize=100
1.2.33 Follow the Kubernetes documentation and configure a EncryptionConfig file.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --encryption-provider-config parameter to the path of that file: --encryption-provider-config=</path/to/EncryptionConfig/File>
1.2.34 Follow the Kubernetes documentation and configure a EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
1.2.35 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM
_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM
_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM
_SHA384
1.3.1 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the master node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example:
--terminated-pod-gc-threshold=10
1.3.2 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the master node and set the below parameter.
--profiling=false
1.3.6 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the master node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
1.4.1 Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
on the master node and set the below parameter.
--profiling=false
== Summary ==
43 checks PASS
12 checks FAIL
10 checks WARN
0 checks INFO

View File

@ -0,0 +1,77 @@
[INFO] 4 Worker Node Security Configuration
[INFO] 4.1 Worker Node Configuration Files
[PASS] 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
[PASS] 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
[WARN] 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
[WARN] 4.1.4 Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)
[PASS] 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
[PASS] 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Manual)
[PASS] 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
[PASS] 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
[PASS] 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
[PASS] 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
[INFO] 4.2 Kubelet
[PASS] 4.2.1 Ensure that the anonymous-auth argument is set to false (Automated)
[PASS] 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[PASS] 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
[PASS] 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
[PASS] 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
[FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
[PASS] 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
[PASS] 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
[WARN] 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
[WARN] 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
[PASS] 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual)
[WARN] 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
[PASS] 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
== Remediations ==
4.1.3 Run the below command (based on the file location on your system) on the each worker node.
For example,
chmod 644 /etc/kubernetes/proxy.conf
4.1.4 Run the below command (based on the file location on your system) on the each worker node.
For example, chown root:root /etc/kubernetes/proxy.conf
4.2.6 If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
4.2.9 If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
4.2.10 If using a Kubelet config file, edit the file to set tlsCertFile to the location
of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
4.2.12 Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
== Summary ==
17 checks PASS
1 checks FAIL
5 checks WARN
0 checks INFO

431
integration/testdata/cis-1.6/job.data vendored Normal file
View File

@ -0,0 +1,431 @@
[INFO] 1 Master Node Security Configuration
[INFO] 1.1 Master Node Configuration Files
[PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
[PASS] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
[PASS] 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
[PASS] 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
[WARN] 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
[WARN] 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
[PASS] 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
[FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
[PASS] 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
[PASS] 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
[PASS] 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
[PASS] 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
[FAIL] 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
[PASS] 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)
[PASS] 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
[INFO] 1.2 API Server
[WARN] 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
[PASS] 1.2.2 Ensure that the --basic-auth-file argument is not set (Automated)
[PASS] 1.2.3 Ensure that the --token-auth-file parameter is not set (Automated)
[PASS] 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
[PASS] 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
[FAIL] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
[PASS] 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[PASS] 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
[PASS] 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
[WARN] 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
[PASS] 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
[WARN] 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
[WARN] 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
[PASS] 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
[PASS] 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
[FAIL] 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)
[PASS] 1.2.17 Ensure that the admission control plugin NodeRestriction is set (Automated)
[PASS] 1.2.18 Ensure that the --insecure-bind-address argument is not set (Automated)
[PASS] 1.2.19 Ensure that the --insecure-port argument is set to 0 (Automated)
[PASS] 1.2.20 Ensure that the --secure-port argument is not set to 0 (Automated)
[FAIL] 1.2.21 Ensure that the --profiling argument is set to false (Automated)
[FAIL] 1.2.22 Ensure that the --audit-log-path argument is set (Automated)
[FAIL] 1.2.23 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
[FAIL] 1.2.24 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
[FAIL] 1.2.25 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
[PASS] 1.2.26 Ensure that the --request-timeout argument is set as appropriate (Automated)
[PASS] 1.2.27 Ensure that the --service-account-lookup argument is set to true (Automated)
[PASS] 1.2.28 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
[PASS] 1.2.29 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
[PASS] 1.2.30 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
[PASS] 1.2.31 Ensure that the --client-ca-file argument is set as appropriate (Automated)
[PASS] 1.2.32 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
[WARN] 1.2.33 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
[WARN] 1.2.34 Ensure that encryption providers are appropriately configured (Manual)
[WARN] 1.2.35 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
[INFO] 1.3 Controller Manager
[WARN] 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
[FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated)
[PASS] 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
[PASS] 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
[PASS] 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
[FAIL] 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
[PASS] 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
[INFO] 1.4 Scheduler
[FAIL] 1.4.1 Ensure that the --profiling argument is set to false (Automated)
[PASS] 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
== Remediations ==
1.1.9 Run the below command (based on the file location on your system) on the master node.
For example,
chmod 644 <path/to/cni/files>
1.1.10 Run the below command (based on the file location on your system) on the master node.
For example,
chown root:root <path/to/cni/files>
1.1.12 On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the below command:
ps -ef | grep etcd
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
1.1.19 Run the below command (based on the file location on your system) on the master node.
For example,
chown -R root:root /etc/kubernetes/pki/
1.2.1 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--anonymous-auth=false
1.2.6 Follow the Kubernetes documentation and setup the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
1.2.10 Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
1.2.12 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
1.2.13 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...
1.2.16 Follow the documentation and create Pod Security Policy objects as per your environment.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to a
value that includes PodSecurityPolicy:
--enable-admission-plugins=...,PodSecurityPolicy,...
Then restart the API Server.
1.2.21 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--profiling=false
1.2.22 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example:
--audit-log-path=/var/log/apiserver/audit.log
1.2.23 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-maxage parameter to 30 or as an appropriate number of days:
--audit-log-maxage=30
1.2.24 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value.
--audit-log-maxbackup=10
1.2.25 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB:
--audit-log-maxsize=100
1.2.33 Follow the Kubernetes documentation and configure a EncryptionConfig file.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --encryption-provider-config parameter to the path of that file: --encryption-provider-config=</path/to/EncryptionConfig/File>
1.2.34 Follow the Kubernetes documentation and configure a EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
1.2.35 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM
_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM
_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM
_SHA384
1.3.1 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the master node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example:
--terminated-pod-gc-threshold=10
1.3.2 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the master node and set the below parameter.
--profiling=false
1.3.6 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the master node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
1.4.1 Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
on the master node and set the below parameter.
--profiling=false
== Summary ==
43 checks PASS
12 checks FAIL
10 checks WARN
0 checks INFO
[INFO] 2 Etcd Node Configuration
[INFO] 2 Etcd Node Configuration Files
[PASS] 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
[PASS] 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
[PASS] 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
[PASS] 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
[PASS] 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
[PASS] 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
[PASS] 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual)
== Summary ==
7 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO
[INFO] 3 Control Plane Configuration
[INFO] 3.1 Authentication and Authorization
[WARN] 3.1.1 Client certificate authentication should not be used for users (Manual)
[INFO] 3.2 Logging
[WARN] 3.2.1 Ensure that a minimal audit policy is created (Manual)
[WARN] 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
== Remediations ==
3.1.1 Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
3.2.1 Create an audit policy file for your cluster.
3.2.2 Consider modification of the audit policy in use on the cluster to include these items, at a
minimum.
== Summary ==
0 checks PASS
0 checks FAIL
3 checks WARN
0 checks INFO
[INFO] 4 Worker Node Security Configuration
[INFO] 4.1 Worker Node Configuration Files
[PASS] 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
[PASS] 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
[WARN] 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
[WARN] 4.1.4 Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)
[PASS] 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
[PASS] 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Manual)
[PASS] 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
[PASS] 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
[PASS] 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
[PASS] 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
[INFO] 4.2 Kubelet
[PASS] 4.2.1 Ensure that the anonymous-auth argument is set to false (Automated)
[PASS] 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[PASS] 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
[PASS] 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
[PASS] 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
[FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
[PASS] 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
[PASS] 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
[WARN] 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
[WARN] 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
[PASS] 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual)
[WARN] 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
[PASS] 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
== Remediations ==
4.1.3 Run the below command (based on the file location on your system) on the each worker node.
For example,
chmod 644 /etc/kubernetes/proxy.conf
4.1.4 Run the below command (based on the file location on your system) on the each worker node.
For example, chown root:root /etc/kubernetes/proxy.conf
4.2.6 If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
4.2.9 If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
4.2.10 If using a Kubelet config file, edit the file to set tlsCertFile to the location
of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
4.2.12 Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
== Summary ==
17 checks PASS
1 checks FAIL
5 checks WARN
0 checks INFO
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[WARN] 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
[WARN] 5.1.2 Minimize access to secrets (Manual)
[WARN] 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
[WARN] 5.1.4 Minimize access to create pods (Manual)
[WARN] 5.1.5 Ensure that default service accounts are not actively used. (Manual)
[WARN] 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
[INFO] 5.2 Pod Security Policies
[WARN] 5.2.1 Minimize the admission of privileged containers (Manual)
[WARN] 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Manual)
[WARN] 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Manual)
[WARN] 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Manual)
[WARN] 5.2.5 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
[WARN] 5.2.6 Minimize the admission of root containers (Manual)
[WARN] 5.2.7 Minimize the admission of containers with the NET_RAW capability (Manual)
[WARN] 5.2.8 Minimize the admission of containers with added capabilities (Manual)
[WARN] 5.2.9 Minimize the admission of containers with capabilities assigned (Manual)
[INFO] 5.3 Network Policies and CNI
[WARN] 5.3.1 Ensure that the CNI in use supports Network Policies (Manual)
[WARN] 5.3.2 Ensure that all Namespaces have Network Policies defined (Manual)
[INFO] 5.4 Secrets Management
[WARN] 5.4.1 Prefer using secrets as files over secrets as environment variables (Manual)
[WARN] 5.4.2 Consider external secret storage (Manual)
[INFO] 5.5 Extensible Admission Control
[WARN] 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
[INFO] 5.7 General Policies
[WARN] 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
[WARN] 5.7.2 Ensure that the seccomp profile is set to docker/default in your pod definitions (Manual)
[WARN] 5.7.3 Apply Security Context to Your Pods and Containers (Manual)
[WARN] 5.7.4 The default namespace should not be used (Manual)
== Remediations ==
5.1.1 Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
5.1.2 Where possible, remove get, list and watch access to secret objects in the cluster.
5.1.3 Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
5.1.4 Where possible, remove create access to pod objects in the cluster.
5.1.5 Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
5.1.6 Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
5.2.1 Create a PSP as described in the Kubernetes documentation, ensuring that
the .spec.privileged field is omitted or set to false.
5.2.2 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostPID field is omitted or set to false.
5.2.3 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostIPC field is omitted or set to false.
5.2.4 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostNetwork field is omitted or set to false.
5.2.5 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.allowPrivilegeEscalation field is omitted or set to false.
5.2.6 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
UIDs not including 0.
5.2.7 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.requiredDropCapabilities is set to include either NET_RAW or ALL.
5.2.8 Ensure that allowedCapabilities is not present in PSPs for the cluster unless
it is set to an empty array.
5.2.9 Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
5.3.1 If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
5.3.2 Follow the documentation and create NetworkPolicy objects as you need them.
5.4.1 If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
5.4.2 Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
5.5.1 Follow the Kubernetes documentation and set up image provenance.
5.7.1 Follow the documentation and create namespaces for objects in your deployment as you need
them.
5.7.2 Seccomp is an alpha feature currently. By default, all alpha features are disabled. So, you
would need to enable alpha features in the apiserver by passing the
"--feature-gates=AllAlpha=true" argument.
Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_API_ARGS
parameter to "--feature-gates=AllAlpha=true"
KUBE_API_ARGS="--feature-gates=AllAlpha=true"
Based on your system, restart the kube-apiserver service. For example:
systemctl restart kube-apiserver.service
Use annotations to enable the docker/default seccomp profile in your pod definitions. An
example is as below:
apiVersion: v1
kind: Pod
metadata:
name: trustworthy-pod
annotations:
seccomp.security.alpha.kubernetes.io/pod: docker/default
spec:
containers:
- name: trustworthy-container
image: sotrustworthy:latest
5.7.3 Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.
5.7.4 Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
== Summary ==
0 checks PASS
0 checks FAIL
24 checks WARN
0 checks INFO