
add support VMware Tanzu(TKGI) Benchmarks v1.2.53 (#1452)

* add support for VMware Tanzu (TKGI) Benchmarks v1.2.53
With this change, we are adding:
1. the latest Kubernetes CIS benchmarks for VMware Tanzu 1.2.53
2. logic so that kube-bench can auto-detect the VMware platform and execute the respective VMware TKGI compliance checks
3. a job-tkgi.yaml file to run the benchmark as a job in a TKGI cluster
Reference document for the checks: https://network.pivotal.io/products/p-compliance-scanner/#/releases/1248397

KiranBodipi 2023-06-01 19:07:50 +05:30 committed by GitHub
parent 84f80b59b8
commit ca8743c1f7
11 changed files with 2073 additions and 1 deletion

cfg/config.yaml

@@ -270,6 +270,7 @@ version_mapping:
"aks-1.0": "aks-1.0"
"ack-1.0": "ack-1.0"
"cis-1.6-k3s": "cis-1.6-k3s"
"tkgi-1.2.53": "tkgi-1.2.53"
target_mapping:
"cis-1.5":
@@ -372,3 +373,9 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"tkgi-1.2.53":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"

cfg/tkgi-1.2.53/config.yaml Normal file

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml

cfg/tkgi-1.2.53/controlplane.yaml Normal file

@@ -0,0 +1,67 @@
---
controls:
version: "tkgi-1.2.53"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users"
audit: ps -ef | grep kube-apiserver | grep -- "--oidc-issuer-url="
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
Exception
This setting is site-specific. It can be set via the "Configure created clusters to use UAA as the OIDC provider"
option in the "UAA" section.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created"
audit: ps -ef | grep kube-apiserver | grep -v tini | grep -- "--audit-policy-file="
tests:
test_items:
- flag: "--audit-policy-file"
remediation: |
Create an audit policy file for your cluster.
scored: true
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns"
audit: |
diff /var/vcap/jobs/kube-apiserver/config/audit_policy.yml \ <(echo "--- apiVersion: audit.k8s.io/v1beta1 kind:
Policy rules: - level: None resources: - group: '' resources: - endpoints - services - services/status users: -
system:kube-proxy verbs: - watch - level: None resources: - group: '' resources: - nodes - nodes/status users: -
kubelet verbs: - get - level: None resources: - group: '' resources: - nodes - nodes/status userGroups: -
system:nodes verbs: - get - level: None namespaces: - kube-system resources: - group: '' resources: -
endpoints users: - system:kube-controller-manager - system:kube-scheduler - system:serviceaccount:kube-
system:endpoint-controller verbs: - get - update - level: None resources: - group: '' resources: - namespaces -
namespaces/status - namespaces/finalize users: - system:apiserver verbs: - get - level: None resources: -
group: metrics.k8s.io users: - system:kube-controller-manager verbs: - get - list - level: None
nonResourceURLs: - \"/healthz*\" - \"/version\" - \"/swagger*\" - level: None resources: - group: '' resources: -
events - level: Request omitStages: - RequestReceived resources: - group: '' resources: - nodes/status -
pods/status userGroups: - system:nodes verbs: - update - patch - level: Request omitStages: -
RequestReceived users: - system:serviceaccount:kube-system:namespace-controller verbs: - deletecollection -
level: Metadata omitStages: - RequestReceived resources: - group: '' resources: - secrets - configmaps - group:
authentication.k8s.io resources: - tokenreviews - level: Request omitStages: - RequestReceived resources: -
group: '' - group: admissionregistration.k8s.io - group: apiextensions.k8s.io - group: apiregistration.k8s.io -
group: apps - group: authentication.k8s.io - group: authorization.k8s.io - group: autoscaling - group: batch -
group: certificates.k8s.io - group: extensions - group: metrics.k8s.io - group: networking.k8s.io - group: policy -
group: rbac.authorization.k8s.io - group: settings.k8s.io - group: storage.k8s.io verbs: - get - list - watch - level:
RequestResponse omitStages: - RequestReceived resources: - group: '' - group: admissionregistration.k8s.io -
group: apiextensions.k8s.io - group: apiregistration.k8s.io - group: apps - group: authentication.k8s.io - group:
authorization.k8s.io - group: autoscaling - group: batch - group: certificates.k8s.io - group: extensions - group:
metrics.k8s.io - group: networking.k8s.io - group: policy - group: rbac.authorization.k8s.io - group:
settings.k8s.io - group: storage.k8s.io - level: Metadata omitStages: - RequestReceived ")
type: "manual"
remediation: |
Consider modification of the audit policy in use on the cluster to include these items, at a
minimum.
scored: false
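
Check 3.2.1 only asserts that `--audit-policy-file` is set, so any policy file satisfies it. As a rough starting point, a minimal policy could be created as sketched below; the destination path is taken from the audit commands above, while the one-rule policy body is a generic illustration, not the TKGI default (3.2.2 lists what a production policy should cover):

```
# Hypothetical minimal policy; expand per check 3.2.2 before relying on it.
cat <<'EOF' > /var/vcap/jobs/kube-apiserver/config/audit_policy.yml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
EOF
```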

121
cfg/tkgi-1.2.53/etcd.yaml Normal file

@@ -0,0 +1,121 @@
---
controls:
version: "tkgi-1.2.53"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration Files"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate"
audit: ps -ef | grep etcd | grep -- "--cert-file=/var/vcap/jobs/etcd/config/etcd.crt" | grep -- "--key-file=/var/vcap/jobs/etcd/config/etcd.key"
type: manual
tests:
bin_op: and
test_items:
- flag: "--cert-file"
- flag: "--key-file"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/ca-file>
--key-file=</path/to/key-file>
scored: false
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true"
audit: ps -ef | grep etcd | grep -- "--client\-cert\-auth"
tests:
test_items:
- flag: "--client-cert-auth"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file etcd config on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true"
audit: ps -ef | grep etcd | grep -v -- "--auto-tls"
tests:
test_items:
- flag: "--auto-tls"
compare:
op: eq
value: true
set: false
remediation: |
Edit the etcd pod specification file etcd config on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate"
audit: ps -ef | grep etcd | grep -- "--peer-cert-file=/var/vcap/jobs/etcd/config/peer.crt" | grep -- "--peer-key-file=/var/vcap/jobs/etcd/config/peer.key"
type: manual
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
- flag: "--peer-key-file"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file etcd config on the
master node and set the below parameters.
--peer-client-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: false
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true"
audit: ps -ef | grep etcd | grep -- "--peer\-client\-cert\-auth"
tests:
test_items:
- flag: "--peer-client-cert-auth"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file etcd config on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true"
audit: ps -ef | grep etcd | grep -v -- "--peer-auto-tls"
tests:
test_items:
- flag: "--peer-auto-tls"
compare:
op: eq
value: true
set: false
remediation: |
Edit the etcd pod specification file etcd config on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd"
audit: diff /var/vcap/jobs/kube-apiserver/config/kubernetes-ca.pem /var/vcap/jobs/etcd/config/etcd-ca.crt | grep -c "^>" | grep -v "^0$"
type: manual
tests:
test_items:
- flag: "--trusted-ca-file"
remediation: |
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file etcd config on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false
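
Checks 2.1-2.6 each grep the etcd process table for one or two TLS flags; when reviewing a node by hand it can be handy to dump them all at once. A convenience sketch (not part of the benchmark):

```
# Print every TLS-related flag currently passed to etcd, one per line.
ps -ef | grep [e]tcd | tr ' ' '\n' | grep -E -- '^--(cert-file|key-file|client-cert-auth|auto-tls|peer-cert-file|peer-key-file|peer-client-cert-auth|peer-auto-tls)'
```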

1098
cfg/tkgi-1.2.53/master.yaml Normal file

File diff suppressed because it is too large

418
cfg/tkgi-1.2.53/node.yaml Normal file

@@ -0,0 +1,418 @@
---
controls:
version: "tkgi-1.2.53"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 644 or more restrictive"
audit: stat -c permissions=%a /var/vcap/jobs/kubelet/monit
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 /var/vcap/jobs/kubelet/monit
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root"
audit: stat -c %U:%G /var/vcap/jobs/kubelet/monit
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root /var/vcap/jobs/kubelet/monit
Exception
File is group owned by vcap
scored: true
- id: 4.1.3
text: "Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive"
audit: stat -c permissions=%a /var/vcap/jobs/kube-proxy/config/kubeconfig
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 /var/vcap/jobs/kube-proxy/config/kubeconfig
scored: true
- id: 4.1.4
text: "Ensure that the proxy kubeconfig file ownership is set to root:root"
audit: stat -c %U:%G /var/vcap/jobs/kube-proxy/config/kubeconfig
type: manual
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root /var/vcap/jobs/kube-proxy/config/kubeconfig
Exception
File is group owned by vcap
scored: false
- id: 4.1.5
text: "Ensure that the kubelet.conf file permissions are set to 644 or more restrictive"
audit: stat -c permissions=%a /var/vcap/jobs/kube-proxy/config/kubeconfig
type: manual
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 /var/vcap/jobs/kube-proxy/config/kubeconfig
Exception
kubeadm is not used to provision/bootstrap the cluster. kubeadm and associated config files do not exist on worker nodes.
scored: false
- id: 4.1.6
text: "Ensure that the kubelet.conf file ownership is set to root:root"
audit: stat -c %U:%G /etc/kubernetes/kubelet.conf
type: manual
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root /etc/kubernetes/kubelet.conf
Exception
file ownership is vcap:vcap
scored: false
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive"
audit: stat -c permissions=%a /var/vcap/jobs/kubelet/config/kubelet-client-ca.pem
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command to modify the file permissions of the
--client-ca-file chmod 644 <filename>
scored: true
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root"
audit: stat -c %U:%G /var/vcap/jobs/kubelet/config/kubelet-client-ca.pem
type: manual
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
Exception
File is group owned by vcap
scored: false
- id: 4.1.9
text: "Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive"
audit: stat -c permissions=%a /var/vcap/jobs/kubelet/config/kubeletconfig.yml
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 /var/vcap/jobs/kubelet/config/kubeletconfig.yml
scored: true
- id: 4.1.10
text: "Ensure that the kubelet --config configuration file ownership is set to root:root"
audit: stat -c %U:%G /var/vcap/jobs/kubelet/config/kubeletconfig.yml
type: manual
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root /var/vcap/jobs/kubelet/config/kubeletconfig.yml
Exception
File is group owned by vcap
scored: false
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the anonymous-auth argument is set to false"
audit: grep "^authentication:\n\s{2}anonymous:\n\s{4}enabled:\sfalse$" /var/vcap/jobs/kubelet/config/kubeletconfig.yml
tests:
test_items:
- flag: "enabled: false"
remediation: |
If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow"
audit: |
grep "^authorization:\n\s{2}mode: AlwaysAllow$" /var/vcap/jobs/kubelet/config/kubeletconfig.yml
tests:
test_items:
- flag: "AlwaysAllow"
set: false
remediation: |
If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
using executable arguments, edit the kubelet service file
on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate"
audit: |
grep ^authentication:\n\s{2}anonymous:\n\s{4}enabled:\sfalse\n(\s{2}webhook:\n\s{4}cacheTTL:\s\d+s\n\s{4}enabled:.*\n)?
\s{2}x509:\n\s{4}clientCAFile:\s"\/var\/vcap\/jobs\/kubelet\/config\/kubelet-client-ca\.pem" /var/vcap/jobs/kubelet/config/kubeletconfig.yml
tests:
test_items:
- flag: "clientCAFile"
remediation: |
If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Ensure that the --read-only-port argument is set to 0"
audit: |
grep "readOnlyPort: 0" /var/vcap/jobs/kubelet/config/kubeletconfig.yml
tests:
test_items:
- flag: "readOnlyPort: 0"
remediation: |
If using a Kubelet config file, edit the file to set readOnlyPort to 0.
If using command line arguments, edit the kubelet service file
on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0"
audit: |
grep -- "streamingConnectionIdleTimeout: 0" /var/vcap/jobs/kubelet/config/kubeletconfig.yml
tests:
test_items:
- flag: "streamingConnectionIdleTimeout: 0"
set: false
remediation: |
If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
value other than 0.
If using command line arguments, edit the kubelet service file
on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true"
audit: |
grep -- "protectKernelDefaults: true" /var/vcap/jobs/kubelet/config/kubeletconfig.yml
tests:
test_items:
- flag: "protectKernelDefaults: true"
remediation: |
If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file
on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true"
audit: |
grep -- "makeIPTablesUtilChains: true" /var/vcap/jobs/kubelet/config/kubeletconfig.yml
tests:
test_items:
- flag: "makeIPTablesUtilChains: true"
remediation: |
If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.8
text: "Ensure that the --hostname-override argument is not set"
audit: |
ps -ef | grep [k]ubelet | grep -- --[c]onfig=/var/vcap/jobs/kubelet/config/kubeletconfig.yml | grep -v -- --hostname-override
type: manual
remediation: |
Edit the kubelet service file
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Exception
On GCE, the hostname needs to be set to the instance name so the GCE cloud provider can manage the instance.
In other cases it is set to the IP address of the VM.
scored: false
- id: 4.2.9
text: "Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture"
audit: grep -- "--event-qps" /var/vcap/jobs/kubelet/config/kubeletconfig.yml
type: manual
tests:
test_items:
- flag: "--event-qps"
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file
on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate"
audit: |
grep ^tlsCertFile:\s\"\/var\/vcap\/jobs\/kubelet\/config\/kubelet\.pem\"\ntlsPrivateKeyFile:\s\"\/var\/vcap\/jobs\/kubelet\/config\/kubelet-key\.pem\"$
/var/vcap/jobs/kubelet/config/kubeletconfig.yml
tests:
bin_op: and
test_items:
- flag: "tlsCertFile"
- flag: "tlsPrivateKeyFile"
remediation: |
If using a Kubelet config file, edit the file to set tlsCertFile to the location
of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.11
text: "Ensure that the --rotate-certificates argument is not set to false"
audit: ps -ef | grep kubele[t] | grep -- "--rotate-certificates=false"
type: manual
tests:
test_items:
- flag: "--rotate-certificates=false"
set: false
remediation: |
If using a Kubelet config file, edit the file to add the line rotateCertificates: true or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Exception
Certificate rotation is handled by Credhub
scored: false
- id: 4.2.12
text: "Verify that the RotateKubeletServerCertificate argument is set to true"
audit: ps -ef | grep kubele[t] | grep -- "--feature-gates=\(\w\+\|,\)*RotateKubeletServerCertificate=true\(\w\+\|,\)*"
type: manual
tests:
test_items:
- flag: "RotateKubeletServerCertificate=true"
remediation: |
Edit the kubelet service file
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Exception
Certificate rotation is handled by Credhub
scored: false
- id: 4.2.13
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers"
audit: ps -ef | grep kubele[t] | grep -- "--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
type: manual
tests:
test_items:
- flag: --tls-cipher-suites
compare:
op: regex
value: (TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256|TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256|TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305|TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384|TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305|TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384|TLS_RSA_WITH_AES_256_GCM_SHA384|TLS_RSA_WITH_AES_128_GCM_SHA256)
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
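
Most of the 4.2.x audits above grep `/var/vcap/jobs/kubelet/config/kubeletconfig.yml` for specific keys. Collecting those patterns, a kubelet config that passes them would contain stanzas like the ones written below; the layout is inferred from the grep expressions and the values are illustrative, not VMware-published defaults:

```
# Sketch of the settings implied by checks 4.2.1-4.2.10 (illustrative only).
cat <<'EOF' > /tmp/kubeletconfig-example.yml
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: "/var/vcap/jobs/kubelet/config/kubelet-client-ca.pem"
authorization:
  mode: Webhook
readOnlyPort: 0
protectKernelDefaults: true
makeIPTablesUtilChains: true
tlsCertFile: "/var/vcap/jobs/kubelet/config/kubelet.pem"
tlsPrivateKeyFile: "/var/vcap/jobs/kubelet/config/kubelet-key.pem"
EOF
# For instance, check 4.2.4's audit grep succeeds against it:
grep "readOnlyPort: 0" /tmp/kubeletconfig-example.yml
```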

cfg/tkgi-1.2.53/policies.yaml Normal file

@@ -0,0 +1,287 @@
---
controls:
version: "tkgi-1.2.53"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]
Exception
This is a site-specific setting.
scored: false
- id: 5.1.2
text: "Minimize access to secrets"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
Exception
This is a site-specific setting.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
Exception
This is a site-specific setting.
scored: false
- id: 5.1.4
text: "Minimize access to create pods"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
Exception
This is a site-specific setting.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used."
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
Exception
This is a site-specific setting.
scored: false
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
Exception
This is a site-specific setting.
scored: false
- id: 5.2
text: "Pod Security Policies"
checks:
- id: 5.2.1
text: "Minimize the admission of privileged containers"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that
the .spec.privileged field is omitted or set to false.
Exception
This is a site-specific setting.
scored: false
- id: 5.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostPID field is omitted or set to false.
Exception
This is a site-specific setting.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostIPC field is omitted or set to false.
Exception
This is a site-specific setting.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host network namespace"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostNetwork field is omitted or set to false.
Exception
This is a site-specific setting.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.allowPrivilegeEscalation field is omitted or set to false.
Exception
This is a site-specific setting.
scored: false
- id: 5.2.6
text: "Minimize the admission of root containers"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
UIDs not including 0.
Exception
This is a site-specific setting.
scored: false
- id: 5.2.7
text: "Minimize the admission of containers with the NET_RAW capability"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.requiredDropCapabilities is set to include either NET_RAW or ALL.
Exception
This is a site-specific setting.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with added capabilities"
type: "manual"
remediation: |
Ensure that allowedCapabilities is not present in PSPs for the cluster unless
it is set to an empty array.
Exception
This is a site-specific setting.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with capabilities assigned"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
Exception
This is a site-specific setting.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports Network Policies"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
Exception
This is a site-specific setting.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have Network Policies defined"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
Exception
This is a site-specific setting.
scored: false
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using secrets as files over secrets as environment variables"
type: "manual"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
Exception
This is a site-specific setting.
scored: false
- id: 5.4.2
text: "Consider external secret storage"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
Exception
This is a site-specific setting.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
Exception
This is a site-specific setting.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
Exception
This is a site-specific setting.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your pod definitions"
type: "manual"
remediation: |
Seccomp is currently an alpha feature. By default, all alpha features are disabled. So, you
would need to enable alpha features in the apiserver by passing the
"--feature-gates=AllAlpha=true" argument.
Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_API_ARGS
parameter to "--feature-gates=AllAlpha=true"
KUBE_API_ARGS="--feature-gates=AllAlpha=true"
Based on your system, restart the kube-apiserver service. For example:
systemctl restart kube-apiserver.service
Use annotations to enable the docker/default seccomp profile in your pod definitions. An
example is as below:
apiVersion: v1
kind: Pod
metadata:
name: trustworthy-pod
annotations:
seccomp.security.alpha.kubernetes.io/pod: docker/default
spec:
containers:
- name: trustworthy-container
image: sotrustworthy:latest
Exception
This is a site-specific setting.
scored: false
- id: 5.7.3
text: "Apply Security Context to Your Pods and Containers "
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.
Exception
This is a site-specific setting.
scored: false
- id: 5.7.4
text: "The default namespace should not be used"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
Exception
This is a site-specific setting.
scored: false
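
Every 5.x check is a manual, site-specific review. For 5.1.1 in particular, the bindings to audit can be enumerated directly; a sketch using kubectl's jsonpath filter:

```
# List every ClusterRoleBinding that grants the cluster-admin role, for review.
kubectl get clusterrolebindings -o jsonpath='{range .items[?(@.roleRef.name=="cluster-admin")]}{.metadata.name}{"\n"}{end}'
```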

cmd/util.go

@@ -447,7 +447,7 @@ func getPlatformInfo() Platform {
}
func getPlatformInfoFromVersion(s string) Platform {
- versionRe := regexp.MustCompile(`v(\d+\.\d+)\.\d+-(\w+)(?:[.\-])\w+`)
+ versionRe := regexp.MustCompile(`v(\d+\.\d+)\.\d+[-+](\w+)(?:[.\-])\w+`)
subs := versionRe.FindStringSubmatch(s)
if len(subs) < 3 {
return Platform{}
@@ -479,6 +479,8 @@ func getPlatformBenchmarkVersion(platform Platform) string {
case "4.1":
return "rh-1.0"
}
case "vmware":
return "tkgi-1.2.53"
}
return ""
}
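
The regex change above is the heart of the auto-detection: the old pattern only accepted `-` between the patch version and the platform token, while VMware-style builds separate them with `+`. A quick sketch of the difference using `grep -E`, whose ERE syntax is close to Go's `regexp` here (the version strings are made-up samples, not taken from a real cluster):

```
# "-eks-..." matched before; "+vmware..." only matches with the new [-+] class.
for v in v1.23.17-eks-a59e1f0 v1.24.9+vmware.1; do
  printf '%s -> ' "$v"
  echo "$v" | grep -Eq 'v[0-9]+\.[0-9]+\.[0-9]+[-+][A-Za-z0-9_]+[.-][A-Za-z0-9_]+' && echo match || echo "no match"
done
```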

docs/platforms.md

@@ -27,3 +27,4 @@ Some defined by other hardening guides.
| CIS | [OCP4 1.1.0](https://workbench.cisecurity.org/benchmarks/6778) | rh-1.0 | OCP 4.1- |
| CIS | [1.6.0-k3s](https://docs.rancher.cn/docs/k3s/security/self-assessment/_index) | cis-1.6-k3s | k3s v1.16-v1.24 |
| DISA | [Kubernetes Ver 1, Rel 6](https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_Kubernetes_V1R6_STIG.zip) | eks-stig-kubernetes-v1r6 | EKS |
| CIS | [TKGI 1.2.53](https://network.pivotal.io/products/p-compliance-scanner#/releases/1248397) | tkgi-1.2.53 | vmware |

docs/running.md

@@ -177,3 +177,18 @@ To run the benchmark as a job in your ACK cluster apply the included `job-ack.yaml`
```
kubectl apply -f job-ack.yaml
```
### Running in a VMware TKGI cluster

| CIS Benchmark | Targets |
|---------------|--------------------------------------------|
| tkgi-1.2.53 | master, etcd, controlplane, node, policies |

kube-bench includes benchmarks for the VMware TKGI platform.
To run these checks, specify `--benchmark tkgi-1.2.53` when running the `kube-bench` command.
To run the benchmark as a job in your VMware TKGI cluster, apply the included `job-tkgi.yaml`:
```
kubectl apply -f job-tkgi.yaml
```
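
Once the job has completed, the results are available in the pod logs; the job is named `kube-bench` in `job-tkgi.yaml`, so for example:

```
kubectl logs job/kube-bench
```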

54
job-tkgi.yaml Normal file

@@ -0,0 +1,54 @@
---
apiVersion: batch/v1
kind: Job
metadata:
name: kube-bench
spec:
template:
spec:
hostPID: true
containers:
- name: kube-bench
image: docker.io/aquasec/kube-bench:latest
command:
[
"kube-bench",
"run",
"--targets",
"node,policies",
"--benchmark",
"tkgi-1.2.53",
]
volumeMounts:
- name: var-vcap-jobs
mountPath: /var/vcap/jobs
readOnly: true
- name: var-vcap-packages
mountPath: /var/vcap/packages
readOnly: true
- name: var-vcap-store-etcd
mountPath: /var/vcap/store/etcd
readOnly: true
- name: var-vcap-sys
mountPath: /var/vcap/sys
readOnly: true
- name: etc-kubernetes
mountPath: /etc/kubernetes
readOnly: true
restartPolicy: Never
volumes:
- name: var-vcap-jobs
hostPath:
path: "/var/vcap/jobs"
- name: var-vcap-packages
hostPath:
path: "/var/vcap/packages"
- name: var-vcap-store-etcd
hostPath:
path: "/var/vcap/store/etcd"
- name: var-vcap-sys
hostPath:
path: "/var/vcap/sys"
- name: etc-kubernetes
hostPath:
path: "/etc/kubernetes"