---
controls:
version: "k3s-cis-1.24"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
type: "skip"
tests:
test_items:
- flag: root:root
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: true
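# Illustrative only (kept as comments so this benchmark file still parses): a by-hand
# version of the 4.1.3 audit and remediation, assuming $proxykubeconfig resolves to the
# kube-proxy kubeconfig path configured for this profile.
#   stat -c permissions=%a "$proxykubeconfig"   # expect 600 or more restrictive (e.g. 600, 400)
#   chmod 600 "$proxykubeconfig"                # remediation if the permissions are too open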
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)"
audit: 'stat -c %U:%G $proxykubeconfig'
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: true
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: 'stat -c %U:%G $kubeletkubeconfig'
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: "root:root"
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)"
audit: "stat -c permissions=%a $kubeletcafile"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
set: true
remediation: |
Run the following command to modify the file permissions of the --client-ca-file.
chmod 600 $kubeletcafile
scored: true
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Automated)"
audit: "stat -c %U:%G $kubeletcafile"
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root $kubeletcafile
scored: true
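# Illustrative only: a by-hand ownership check and fix matching 4.1.6 and 4.1.8, assuming
# $kubeletkubeconfig and $kubeletcafile resolve to the paths configured for this profile.
#   stat -c %U:%G "$kubeletcafile"     # expect root:root
#   chown root:root "$kubeletcafile"   # remediation if the ownership differs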
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
type: "skip"
tests:
test_items:
- flag: root:root
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | wc -l) -gt 0; then journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'' '
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
set: true
remediation: |
By default, K3s sets the --anonymous-auth to false. If you have set this to a different value, you
should set it back to false. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "anonymous-auth=true"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="anonymous-auth=true"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
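# Illustrative only: the 4.2.x audits above parse the kubelet arguments that k3s logs at startup.
# A hedged by-hand spot check (assumes systemd journal access on the node) could be:
#   journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | tail -n1 | grep -o -- "--anonymous-auth=[^ \"]*"
# and a hypothetical /etc/rancher/k3s/config.yaml entry that would make this check FAIL is:
#   kubelet-arg:
#     - "anonymous-auth=true"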
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | wc -l) -gt 0; then journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | tail -n1 | grep "authorization-mode"; else echo "--authorization-mode=Webhook"; fi'' '
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
set: true
remediation: |
By default, K3s does not set the --authorization-mode to AlwaysAllow.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "authorization-mode=AlwaysAllow"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="authorization-mode=AlwaysAllow"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | wc -l) -gt 0; then journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | tail -n1 | grep "client-ca-file"; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'' '
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
By default, K3s automatically provides the client ca certificate for the Kubelet.
It is generated and located at /var/lib/rancher/k3s/agent/client-ca.crt
scored: true
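# Illustrative only: on a default k3s agent the client CA named in the remediation above should
# already exist; a simple presence check (path is the k3s default) is:
#   test -s /var/lib/rancher/k3s/agent/client-ca.crt && echo "client CA present"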
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
By default, K3s sets the --read-only-port to 0. If you have set this to a different value, you
should set it back to 0. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "read-only-port=XXXX"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="read-only-port=XXXX"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
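# Illustrative only: with read-only-port left at the k3s default of 0, the kubelet's
# unauthenticated HTTP endpoint should not answer. A hedged probe, assuming the conventional
# read-only port 10255 and curl available on the node:
#   curl -s --max-time 3 http://127.0.0.1:10255/healthz || echo "read-only port closed (expected)"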
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.
kubelet-arg:
- "streaming-connection-idle-timeout=5m"
If using the command line, run K3s with --kubelet-arg="streaming-connection-idle-timeout=5m".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
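# Illustrative only: the timeout accepts a Go-style duration string (for example "5m" or "1h").
# A hypothetical /etc/rancher/k3s/config.yaml fragment mirroring the remediation above:
#   kubelet-arg:
#     - "streaming-connection-idle-timeout=1h"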
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --protect-kernel-defaults
path: '{.protectKernelDefaults}'
compare:
op: eq
value: true
set: true
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter.
protect-kernel-defaults: true
If using the command line, run K3s with --protect-kernel-defaults=true.
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
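# Illustrative only: with protect-kernel-defaults enabled the kubelet refuses to start unless the
# host sysctls already match its expectations. A commonly used set (an assumption; verify against
# your kubelet version), e.g. placed in /etc/sysctl.d/90-kubelet.conf:
#   vm.overcommit_memory = 1
#   vm.panic_on_oom = 0
#   kernel.panic = 10
#   kernel.panic_on_oops = 1
# then reload with: sysctl --system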
- id: 4.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
set: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter.
kubelet-arg:
- "make-iptables-util-chains=true"
If using the command line, run K3s with --kubelet-arg="make-iptables-util-chains=true".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.8
text: "Ensure that the --hostname-override argument is not set (Automated)"
type: "skip"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Not Applicable.
By default, K3s does set the --hostname-override argument. Per CIS guidelines this is acceptable,
as some cloud providers require this flag to ensure that the hostname matches the node name.
scored: true
- id: 4.2.9
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: eq
value: 0
remediation: |
By default, K3s sets the event-qps to 0. Should you wish to change this, proceed as follows.
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.
kubelet-arg:
- "event-qps=<value>"
If using the command line, run K3s with --kubelet-arg="event-qps=<value>".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
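# Illustrative only: a hypothetical /etc/rancher/k3s/config.yaml fragment that rate-limits event
# recording (5 is an arbitrary example value; 0, the k3s default noted above, disables the limit):
#   kubelet-arg:
#     - "event-qps=5"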
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --tls-cert-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.crt'
- flag: --tls-private-key-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.key'
remediation: |
By default, K3s automatically provides the TLS certificate and private key for the Kubelet.
They are generated and located at /var/lib/rancher/k3s/agent/serving-kubelet.crt and /var/lib/rancher/k3s/agent/serving-kubelet.key
If for some reason you need to provide your own certificate and key, you can set the
below parameters in the K3s config file /etc/rancher/k3s/config.yaml.
kubelet-arg:
- "tls-cert-file=<path/to/tls-cert-file>"
- "tls-private-key-file=<path/to/tls-private-key-file>"
scored: true
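# Illustrative only: inspecting the default k3s-generated serving certificate named in the
# remediation above (assumes openssl is available on the node):
#   openssl x509 -in /var/lib/rancher/k3s/agent/serving-kubelet.crt -noout -subject -enddate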
- id: 4.2.11
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
By default, K3s does not set the --rotate-certificates argument. If you have set this flag with a value of `false`, you should either set it to `true` or completely remove the flag.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any rotate-certificates parameter.
If using the command line, remove the K3s flag --kubelet-arg="rotate-certificates".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.12
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
By default, K3s does not set the RotateKubeletServerCertificate feature gate.
If you have set this feature gate to false, you should either set it back to true or remove the setting entirely.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any feature-gates=RotateKubeletServerCertificate=false parameter.
If using the command line, remove the K3s flag --kubelet-arg="feature-gates=RotateKubeletServerCertificate=false".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.13
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, edit the file to set the tls-cipher-suites kubelet argument to
kubelet-arg:
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
or to a subset of these values.
If using the command line, add the K3s flag --kubelet-arg="tls-cipher-suites=<same values as above>"
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
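# Illustrative only: a hedged way to confirm an allowed cipher is negotiable against the kubelet's
# TLS port (assumes openssl on the node, the default kubelet port 10250, and uses the OpenSSL-style
# name for TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256):
#   openssl s_client -connect 127.0.0.1:10250 -tls1_2 -cipher ECDHE-RSA-AES128-GCM-SHA256 </dev/null 2>/dev/null | grep "Cipher"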