---
controls:
version: "k3s-cis-1.8"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
  - id: 1.1
    text: "Control Plane Node Configuration Files"
    checks:
      - id: 1.1.1
        text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
        type: "skip"
        audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          Not Applicable.
          By default, K3s embeds the api server within the k3s process. There is no API server pod specification file.
        scored: true
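      # Note: the bitmask comparison used throughout this group passes when every permission
      # bit set on the file is also set in the target value, i.e. the file is at least as
      # restrictive as the target. Illustrative outcomes against value "600":
      #   permissions=600 -> pass (equal)
      #   permissions=400 -> pass (more restrictive)
      #   permissions=644 -> fail (group/other read bits fall outside the mask)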
      - id: 1.1.2
        text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
        type: "skip"
        audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
        tests:
          test_items:
            - flag: "root:root"
        remediation: |
          Not Applicable.
          By default, K3s embeds the api server within the k3s process. There is no API server pod specification file.
        scored: true
      - id: 1.1.3
        text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
        type: "skip"
        audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          Not Applicable.
          By default, K3s embeds the controller manager within the k3s process. There is no controller manager pod specification file.
        scored: true
      - id: 1.1.4
        text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
        type: "skip"
        audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
        tests:
          test_items:
            - flag: "root:root"
        remediation: |
          Not Applicable.
          By default, K3s embeds the controller manager within the k3s process. There is no controller manager pod specification file.
        scored: true
      - id: 1.1.5
        text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
        type: "skip"
        audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          Not Applicable.
          By default, K3s embeds the scheduler within the k3s process. There is no scheduler pod specification file.
        scored: true
      - id: 1.1.6
        text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
        type: "skip"
        audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
        tests:
          test_items:
            - flag: "root:root"
        remediation: |
          Not Applicable.
          By default, K3s embeds the scheduler within the k3s process. There is no scheduler pod specification file.
        scored: true
      - id: 1.1.7
        text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
        type: "skip"
        audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
        use_multiple_values: true
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          Not Applicable.
          By default, K3s embeds etcd within the k3s process. There is no etcd pod specification file.
        scored: true
      - id: 1.1.8
        text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
        type: "skip"
        audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
        use_multiple_values: true
        tests:
          test_items:
            - flag: "root:root"
        remediation: |
          Not Applicable.
          By default, K3s embeds etcd within the k3s process. There is no etcd pod specification file.
        scored: true
      - id: 1.1.9
        text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Automated)"
        audit: find /var/lib/cni/networks -type f ! -name lock 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
        use_multiple_values: true
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          By default, K3s sets the CNI file permissions to 600.
          Note that for many CNIs, a lock file is created with permissions 750. This is expected and can be ignored.
          If you modify your CNI configuration, ensure that the permissions are set to 600.
          For example, chmod 600 /var/lib/cni/networks/<filename>
        scored: true
      - id: 1.1.10
        text: "Ensure that the Container Network Interface file ownership is set to root:root (Automated)"
        audit: find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
        use_multiple_values: true
        tests:
          test_items:
            - flag: "root:root"
        remediation: |
          Run the below command (based on the file location on your system) on the control plane node.
          For example,
          chown root:root /var/lib/cni/networks/<filename>
        scored: true
      - id: 1.1.11
        text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
        audit: |
          if [ "$(journalctl -m -u k3s | grep -m1 'Managed etcd cluster' | wc -l)" -gt 0 ]; then
            stat -c permissions=%a /var/lib/rancher/k3s/server/db/etcd
          else
            echo "permissions=700"
          fi
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "700"
        remediation: |
          By default, K3s stores managed etcd data at /var/lib/rancher/k3s/server/db/etcd
          and sets the directory permissions to 700.
          If this check fails, run the below command on the etcd server node.
          chmod 700 /var/lib/rancher/k3s/server/db/etcd
        scored: true
      - id: 1.1.12
        text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
        audit: ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%' | xargs stat -c %U:%G
        type: "skip"
        tests:
          test_items:
            - flag: "etcd:etcd"
        remediation: |
          Not Applicable.
          For K3s, etcd is embedded within the k3s process. There is no separate etcd process.
          Therefore the etcd data directory ownership is managed by the k3s process and should be root:root.
        scored: true
      - id: 1.1.13
        text: "Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)"
        audit: "/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'"
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          Run the below command (based on the file location on your system) on the control plane node.
          For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
        scored: true
      - id: 1.1.14
        text: "Ensure that the admin.conf file ownership is set to root:root (Automated)"
        audit: "/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'"
        tests:
          test_items:
            - flag: "root:root"
              compare:
                op: eq
                value: "root:root"
              set: true
        remediation: |
          Run the below command (based on the file location on your system) on the control plane node.
          For example, chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig
        scored: true
      - id: 1.1.15
        text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
        audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          Run the below command (based on the file location on your system) on the control plane node.
          For example,
          chmod 600 $schedulerkubeconfig
        scored: true
      - id: 1.1.16
        text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
        audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
        tests:
          test_items:
            - flag: "root:root"
        remediation: |
          Run the below command (based on the file location on your system) on the control plane node.
          For example,
          chown root:root $schedulerkubeconfig
        scored: true
      - id: 1.1.17
        text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
        audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          Run the below command (based on the file location on your system) on the control plane node.
          For example,
          chmod 600 $controllermanagerkubeconfig
        scored: true
      - id: 1.1.18
        text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
        audit: "stat -c %U:%G $controllermanagerkubeconfig"
        tests:
          test_items:
            - flag: "root:root"
              compare:
                op: eq
                value: "root:root"
              set: true
        remediation: |
          Run the below command (based on the file location on your system) on the control plane node.
          For example,
          chown root:root $controllermanagerkubeconfig
        scored: true
      - id: 1.1.19
        text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
        audit: "stat -c %U:%G /var/lib/rancher/k3s/server/tls"
        use_multiple_values: true
        tests:
          test_items:
            - flag: "root:root"
        remediation: |
          Run the below command (based on the file location on your system) on the control plane node.
          For example,
          chown -R root:root /var/lib/rancher/k3s/server/tls
        scored: true
      - id: 1.1.20
        text: "Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)"
        audit: "/bin/sh -c 'stat -c permissions=%a /var/lib/rancher/k3s/server/tls/*.crt'"
        use_multiple_values: true
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          Run the below command (based on the file location on your system) on the master node.
          For example,
          chmod -R 600 /var/lib/rancher/k3s/server/tls/*.crt
        scored: false
      - id: 1.1.21
        text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)"
        audit: "/bin/sh -c 'stat -c permissions=%a /var/lib/rancher/k3s/server/tls/*.key'"
        use_multiple_values: true
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: |
          Run the below command (based on the file location on your system) on the master node.
          For example,
          chmod -R 600 /var/lib/rancher/k3s/server/tls/*.key
        scored: true
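  # Note: tokens such as $apiserverconf, $etcdconf, $etcdbin, $schedulerkubeconfig and
  # $controllermanagerkubeconfig in the audits above are not shell variables; kube-bench
  # substitutes them from the component paths defined in this benchmark's config.yaml
  # before the audit command is run.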
  - id: 1.2
    text: "API Server"
    checks:
      - id: 1.2.1
        text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'"
        tests:
          test_items:
            - flag: "--anonymous-auth"
              compare:
                op: eq
                value: false
        remediation: |
          By default, K3s sets the --anonymous-auth argument to false.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove anything similar to below.
          kube-apiserver-arg:
            - "anonymous-auth=true"
        scored: true
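      # The journalctl audits in this group parse the "Running kube-apiserver" line that K3s
      # logs with the full flag list of its embedded apiserver. The kube-apiserver-arg
      # snippets in the remediations are fragments of /etc/rancher/k3s/config.yaml; an
      # illustrative (not prescriptive) sketch of that file:
      #   kube-apiserver-arg:
      #     - "anonymous-auth=false"
      #     - "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log"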
      - id: 1.2.2
        text: "Ensure that the --token-auth-file parameter is not set (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          test_items:
            - flag: "--token-auth-file"
              set: false
        remediation: |
          Follow the documentation and configure alternate mechanisms for authentication.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove anything similar to below.
          kube-apiserver-arg:
            - "token-auth-file=<path>"
        scored: true
      - id: 1.2.3
        text: "Ensure that the --DenyServiceExternalIPs is not set (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          bin_op: or
          test_items:
            - flag: "--enable-admission-plugins"
              compare:
                op: nothave
                value: "DenyServiceExternalIPs"
              set: true
            - flag: "--enable-admission-plugins"
              set: false
        remediation: |
          By default, K3s does not set DenyServiceExternalIPs.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "enable-admission-plugins=DenyServiceExternalIPs"
        scored: true
      - id: 1.2.4
        text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          bin_op: and
          test_items:
            - flag: "--kubelet-client-certificate"
            - flag: "--kubelet-client-key"
        remediation: |
          By default, K3s automatically provides the kubelet client certificate and key.
          They are generated and located at /var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt and /var/lib/rancher/k3s/server/tls/client-kube-apiserver.key
          If for some reason you need to provide your own certificate and key, you can set the
          below parameters in the K3s config file /etc/rancher/k3s/config.yaml.
          kube-apiserver-arg:
            - "kubelet-client-certificate=<path/to/client-cert-file>"
            - "kubelet-client-key=<path/to/client-key-file>"
        scored: true
      - id: 1.2.5
        text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'"
        tests:
          test_items:
            - flag: "--kubelet-certificate-authority"
        remediation: |
          By default, K3s automatically provides the kubelet CA cert file, at /var/lib/rancher/k3s/server/tls/server-ca.crt.
          If for some reason you need to provide your own ca certificate, look at using the k3s certificate command line tool.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "kubelet-certificate-authority=<path/to/ca-cert-file>"
        scored: true
      - id: 1.2.6
        text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
        tests:
          test_items:
            - flag: "--authorization-mode"
              compare:
                op: nothave
                value: "AlwaysAllow"
        remediation: |
          By default, K3s does not set the --authorization-mode argument to AlwaysAllow.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "authorization-mode=AlwaysAllow"
        scored: true
      - id: 1.2.7
        text: "Ensure that the --authorization-mode argument includes Node (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
        tests:
          test_items:
            - flag: "--authorization-mode"
              compare:
                op: has
                value: "Node"
        remediation: |
          By default, K3s sets the --authorization-mode argument to Node and RBAC.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and
          ensure that you are not overriding authorization-mode.
        scored: true
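      # To inspect what these audits parse, the flag list of the embedded apiserver can be
      # pulled from the journal directly; for example, to isolate authorization-mode:
      #   journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | tr ' ' '\n' | grep -- --authorization-mode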
      - id: 1.2.8
        text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
        tests:
          test_items:
            - flag: "--authorization-mode"
              compare:
                op: has
                value: "RBAC"
        remediation: |
          By default, K3s sets the --authorization-mode argument to Node and RBAC.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and
          ensure that you are not overriding authorization-mode.
        scored: true
      - id: 1.2.9
        text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
        tests:
          test_items:
            - flag: "--enable-admission-plugins"
              compare:
                op: has
                value: "EventRateLimit"
        remediation: |
          Follow the Kubernetes documentation and set the desired limits in a configuration file.
          Then, edit the K3s config file /etc/rancher/k3s/config.yaml and set the below parameters.
          kube-apiserver-arg:
            - "enable-admission-plugins=...,EventRateLimit,..."
            - "admission-control-config-file=<path/to/configuration/file>"
        scored: false
      - id: 1.2.10
        text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
        tests:
          bin_op: or
          test_items:
            - flag: "--enable-admission-plugins"
              compare:
                op: nothave
                value: AlwaysAdmit
            - flag: "--enable-admission-plugins"
              set: false
        remediation: |
          By default, K3s does not set the --enable-admission-plugins argument to AlwaysAdmit.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "enable-admission-plugins=AlwaysAdmit"
        scored: true
      - id: 1.2.11
        text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          test_items:
            - flag: "--enable-admission-plugins"
              compare:
                op: has
                value: "AlwaysPullImages"
        remediation: |
          Permissive, per CIS guidelines,
          "This setting could impact offline or isolated clusters, which have images pre-loaded and
          do not have access to a registry to pull in-use images. This setting is not appropriate for
          clusters which use this configuration."
          Edit the K3s config file /etc/rancher/k3s/config.yaml and set the below parameter.
          kube-apiserver-arg:
            - "enable-admission-plugins=...,AlwaysPullImages,..."
        scored: false
      - id: 1.2.12
        text: "Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)"
        type: "skip"
        audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
        tests:
          bin_op: or
          test_items:
            - flag: "--enable-admission-plugins"
              compare:
                op: has
                value: "SecurityContextDeny"
            - flag: "--enable-admission-plugins"
              compare:
                op: has
                value: "PodSecurityPolicy"
        remediation: |
          Not Applicable.
          Enabling Pod Security Policy is no longer supported on K3s v1.25+ and will cause applications to unexpectedly fail.
        scored: false
      - id: 1.2.13
        text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          bin_op: or
          test_items:
            - flag: "--disable-admission-plugins"
              compare:
                op: nothave
                value: "ServiceAccount"
            - flag: "--disable-admission-plugins"
              set: false
        remediation: |
          By default, K3s does not set the --disable-admission-plugins argument.
          Follow the documentation and create ServiceAccount objects as per your environment.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "disable-admission-plugins=ServiceAccount"
        scored: true
      - id: 1.2.14
        text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          bin_op: or
          test_items:
            - flag: "--disable-admission-plugins"
              compare:
                op: nothave
                value: "NamespaceLifecycle"
            - flag: "--disable-admission-plugins"
              set: false
        remediation: |
          By default, K3s does not set the --disable-admission-plugins argument.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "disable-admission-plugins=...,NamespaceLifecycle,..."
        scored: true
      - id: 1.2.15
        text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
        tests:
          test_items:
            - flag: "--enable-admission-plugins"
              compare:
                op: has
                value: "NodeRestriction"
        remediation: |
          By default, K3s sets the --enable-admission-plugins argument to NodeRestriction.
          If using the K3s config file /etc/rancher/k3s/config.yaml, check that you are not overriding the admission plugins.
          If you are, include NodeRestriction in the list.
          kube-apiserver-arg:
            - "enable-admission-plugins=...,NodeRestriction,..."
        scored: true
      - id: 1.2.16
        text: "Ensure that the --profiling argument is set to false (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'"
        tests:
          test_items:
            - flag: "--profiling"
              compare:
                op: eq
                value: false
        remediation: |
          By default, K3s sets the --profiling argument to false.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "profiling=true"
        scored: true
      - id: 1.2.17
        text: "Ensure that the --audit-log-path argument is set (Manual)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          test_items:
            - flag: "--audit-log-path"
        remediation: |
          Edit the K3s config file /etc/rancher/k3s/config.yaml and set the audit-log-path parameter to a suitable path and
          file where you would like audit logs to be written. For example,
          kube-apiserver-arg:
            - "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log"
        scored: false
      - id: 1.2.18
        text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Manual)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          test_items:
            - flag: "--audit-log-maxage"
              compare:
                op: gte
                value: 30
        remediation: |
          Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and
          set the audit-log-maxage parameter to 30 or to an appropriate number of days. For example,
          kube-apiserver-arg:
            - "audit-log-maxage=30"
        scored: false
      - id: 1.2.19
        text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Manual)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          test_items:
            - flag: "--audit-log-maxbackup"
              compare:
                op: gte
                value: 10
        remediation: |
          Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and
          set the audit-log-maxbackup parameter to 10 or to an appropriate value. For example,
          kube-apiserver-arg:
            - "audit-log-maxbackup=10"
        scored: false
      - id: 1.2.20
        text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Manual)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          test_items:
            - flag: "--audit-log-maxsize"
              compare:
                op: gte
                value: 100
        remediation: |
          Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and
          set the audit-log-maxsize parameter to an appropriate size in MB. For example,
          kube-apiserver-arg:
            - "audit-log-maxsize=100"
        scored: false
      - id: 1.2.21
        text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          test_items:
            - flag: "--request-timeout"
        remediation: |
          Permissive, per CIS guidelines,
          "it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed".
          Edit the K3s config file /etc/rancher/k3s/config.yaml
          and set the below parameter if needed. For example,
          kube-apiserver-arg:
            - "request-timeout=300s"
        scored: false
      - id: 1.2.22
        text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          bin_op: or
          test_items:
            - flag: "--service-account-lookup"
              set: false
            - flag: "--service-account-lookup"
              compare:
                op: eq
                value: true
        remediation: |
          By default, K3s does not set the --service-account-lookup argument.
          Edit the K3s config file /etc/rancher/k3s/config.yaml and set the service-account-lookup parameter. For example,
          kube-apiserver-arg:
            - "service-account-lookup=true"
          Alternatively, you can delete the service-account-lookup parameter from this file so
          that the default takes effect.
        scored: true
      - id: 1.2.23
        text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
        tests:
          test_items:
            - flag: "--service-account-key-file"
        remediation: |
          By default, K3s automatically generates and sets the service account key file.
          It is located at /var/lib/rancher/k3s/server/tls/service.key.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "service-account-key-file=<path>"
        scored: true
      - id: 1.2.24
        text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
        audit: |
          if [ "$(journalctl -m -u k3s | grep -m1 'Managed etcd cluster' | wc -l)" -gt 0 ]; then
            journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1
          else
            echo "--etcd-certfile AND --etcd-keyfile"
          fi
        tests:
          bin_op: and
          test_items:
            - flag: "--etcd-certfile"
              set: true
            - flag: "--etcd-keyfile"
              set: true
        remediation: |
          By default, K3s automatically generates and sets the etcd certificate and key files.
          They are located at /var/lib/rancher/k3s/server/tls/etcd/client.crt and /var/lib/rancher/k3s/server/tls/etcd/client.key.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "etcd-certfile=<path>"
            - "etcd-keyfile=<path>"
        scored: true
      - id: 1.2.25
        text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
        audit: "journalctl -m -u k3s | grep -A1 'Running kube-apiserver' | tail -n2"
        tests:
          bin_op: and
          test_items:
            - flag: "--tls-cert-file"
              set: true
            - flag: "--tls-private-key-file"
              set: true
        remediation: |
          By default, K3s automatically generates and provides the TLS certificate and private key for the apiserver.
          They are generated and located at /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt and /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "tls-cert-file=<path>"
            - "tls-private-key-file=<path>"
        scored: true
      - id: 1.2.26
        text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'"
        tests:
          test_items:
            - flag: "--client-ca-file"
        remediation: |
          By default, K3s automatically provides the client certificate authority file.
          It is generated and located at /var/lib/rancher/k3s/server/tls/client-ca.crt.
          If for some reason you need to provide your own ca certificate, look at using the k3s certificate command line tool.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "client-ca-file=<path>"
        scored: true
      - id: 1.2.27
        text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'"
        tests:
          test_items:
            - flag: "--etcd-cafile"
        remediation: |
          By default, K3s automatically provides the etcd certificate authority file.
          It is generated and located at /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt.
          If for some reason you need to provide your own ca certificate, look at using the k3s certificate command line tool.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-apiserver-arg:
            - "etcd-cafile=<path>"
        scored: true
      - id: 1.2.28
        text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'"
        tests:
          test_items:
            - flag: "--encryption-provider-config"
        remediation: |
          K3s can be configured to use encryption providers to encrypt secrets at rest.
          Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and set the below parameter.
          secrets-encryption: true
          Secrets encryption can then be managed with the k3s secrets-encrypt command line tool.
          If needed, you can find the generated encryption config at /var/lib/rancher/k3s/server/cred/encryption-config.json.
        scored: false
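      # A minimal sketch of enabling and then verifying secrets encryption per the
      # remediation above (assumes a default K3s install; status is the verification
      # subcommand of the k3s secrets-encrypt tool referenced in these remediations):
      #   # /etc/rancher/k3s/config.yaml
      #   secrets-encryption: true
      #   # after restarting the k3s service:
      #   k3s secrets-encrypt status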
      - id: 1.2.29
        text: "Ensure that encryption providers are appropriately configured (Manual)"
        audit: |
          ENCRYPTION_PROVIDER_CONFIG=$(journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
          if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -o 'providers\"\:\[.*\]' $ENCRYPTION_PROVIDER_CONFIG | grep -o "[A-Za-z]*" | head -2 | tail -1 | sed 's/^/provider=/'; fi
        tests:
          test_items:
            - flag: "provider"
              compare:
                op: valid_elements
                value: "aescbc,kms,secretbox"
        remediation: |
          K3s can be configured to use encryption providers to encrypt secrets at rest. K3s will utilize the aescbc provider.
          Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and set the below parameter.
          secrets-encryption: true
          Secrets encryption can then be managed with the k3s secrets-encrypt command line tool.
          If needed, you can find the generated encryption config at /var/lib/rancher/k3s/server/cred/encryption-config.json
        scored: false
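      # Illustrative shape of the generated encryption-config.json whose "providers" list the
      # audit above extracts (field names per the Kubernetes EncryptionConfiguration API;
      # key material elided):
      #   {"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1",
      #    "resources":[{"resources":["secrets"],
      #      "providers":[{"aescbc":{"keys":[{"name":"aescbckey","secret":"<base64>"}]}},{"identity":{}}]}]}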
      - id: 1.2.30
        text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'"
        tests:
          test_items:
            - flag: "--tls-cipher-suites"
              compare:
                op: valid_elements
                value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
        remediation: |
          By default, the K3s kube-apiserver complies with this test.
          Changes to these values may cause regression; ensure that all apiserver clients support the new TLS configuration before applying it in production deployments.
          If a custom TLS configuration is required, consider also creating a custom version of this rule that aligns with your requirements.
          If this check fails, remove any custom configuration around `tls-cipher-suites` or update the /etc/rancher/k3s/config.yaml file to match the default by adding the following:
          kube-apiserver-arg:
            - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
        scored: true
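      # An illustrative spot-check of the cipher actually negotiated by the apiserver
      # (assumes the default K3s API port 6443 on the local node and an installed openssl CLI):
      #   openssl s_client -connect 127.0.0.1:6443 -tls1_2 </dev/null 2>/dev/null | grep 'Cipher'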
  - id: 1.3
    text: "Controller Manager"
    checks:
      - id: 1.3.1
        text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
        audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'"
        tests:
          test_items:
            - flag: "--terminated-pod-gc-threshold"
        remediation: |
          Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node
          and set the terminated-pod-gc-threshold parameter to an appropriate threshold. For example,
          kube-controller-manager-arg:
            - "terminated-pod-gc-threshold=10"
        scored: false
      - id: 1.3.2
        text: "Ensure that the --profiling argument is set to false (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'"
        tests:
          test_items:
            - flag: "--profiling"
              compare:
                op: eq
                value: false
        remediation: |
          By default, K3s sets the --profiling argument to false.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-controller-manager-arg:
            - "profiling=true"
        scored: true
      - id: 1.3.3
        text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'"
        tests:
          test_items:
            - flag: "--use-service-account-credentials"
              compare:
                op: noteq
                value: false
        remediation: |
          By default, K3s sets the --use-service-account-credentials argument to true.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-controller-manager-arg:
            - "use-service-account-credentials=false"
        scored: true
      - id: 1.3.4
        text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'"
        tests:
          test_items:
            - flag: "--service-account-private-key-file"
        remediation: |
          By default, K3s automatically provides the service account private key file.
          It is generated and located at /var/lib/rancher/k3s/server/tls/service.current.key.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-controller-manager-arg:
            - "service-account-private-key-file=<path>"
        scored: true
      - id: 1.3.5
        text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'"
        tests:
          test_items:
            - flag: "--root-ca-file"
        remediation: |
          By default, K3s automatically provides the root CA file.
          It is generated and located at /var/lib/rancher/k3s/server/tls/server-ca.crt.
          If for some reason you need to provide your own ca certificate, look at using the k3s certificate command line tool.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-controller-manager-arg:
            - "root-ca-file=<path>"
        scored: true
      - id: 1.3.6
        text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1"
        tests:
          bin_op: or
          test_items:
            - flag: "--feature-gates"
              compare:
                op: nothave
                value: "RotateKubeletServerCertificate=false"
              set: true
            - flag: "--feature-gates"
              set: false
        remediation: |
          By default, K3s does not set the RotateKubeletServerCertificate feature gate,
          which leaves it at its upstream Kubernetes default of true.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-controller-manager-arg:
            - "feature-gates=RotateKubeletServerCertificate=false"
        scored: true
      - id: 1.3.7
        text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1"
        tests:
          bin_op: or
          test_items:
            - flag: "--bind-address"
              compare:
                op: eq
                value: "127.0.0.1"
              set: true
            - flag: "--bind-address"
              set: false
        remediation: |
          By default, K3s sets the --bind-address argument to 127.0.0.1
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-controller-manager-arg:
            - "bind-address=<IP>"
        scored: true
  - id: 1.4
    text: "Scheduler"
    checks:
      - id: 1.4.1
        text: "Ensure that the --profiling argument is set to false (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'profiling'"
        tests:
          test_items:
            - flag: "--profiling"
              compare:
                op: eq
                value: false
              set: true
        remediation: |
          By default, K3s sets the --profiling argument to false.
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-scheduler-arg:
            - "profiling=true"
        scored: true
      - id: 1.4.2
        text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
        audit: "journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'"
        tests:
          bin_op: or
          test_items:
            - flag: "--bind-address"
              compare:
                op: eq
                value: "127.0.0.1"
              set: true
            - flag: "--bind-address"
              set: false
        remediation: |
          By default, K3s sets the --bind-address argument to 127.0.0.1
          If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
          kube-scheduler-arg:
            - "bind-address=<IP>"
        scored: true
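      # An illustrative spot-check that the controller manager and scheduler listen only on
      # the loopback interface per checks 1.3.7 and 1.4.2 (assumes the upstream default
      # secure ports: 10257 for kube-controller-manager, 10259 for kube-scheduler):
      #   ss -ltnp | grep -E ':10257|:10259'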