mirror of https://github.com/aquasecurity/kube-bench.git synced 2024-12-20 05:38:13 +00:00

Merge branch 'master' into no-master-binaries

This commit is contained in:
Liz Rice 2019-04-24 10:02:32 +01:00 committed by GitHub
commit e5b6603da5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
11 changed files with 2709 additions and 1914 deletions

Gopkg.lock generated

@@ -189,6 +189,17 @@
pruneopts = "UT"
revision = "c95af922eae69f190717a0b7148960af8c55a072"
[[projects]]
digest = "1:e8e3acc03397f71fad44385631e665c639a8d55bd187bcfa6e70b695e3705edd"
name = "k8s.io/client-go"
packages = [
"third_party/forked/golang/template",
"util/jsonpath",
]
pruneopts = "UT"
revision = "e64494209f554a6723674bd494d69445fb76a1d4"
version = "v10.0.0"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
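This dependency bump pulls in client-go's `util/jsonpath` package, which backs the new `path:`-style checks added in this commit. As a rough illustration only (kube-bench itself uses the full client-go jsonpath implementation, not this sketch), resolving a simple dotted JSONPath against a parsed kubelet config looks like:

```python
import json
import re

def resolve(path, doc):
    """Resolve a simple dotted JSONPath like '{.a.b.c}' against a parsed
    document. Illustrative sketch only: kube-bench relies on
    k8s.io/client-go's util/jsonpath, which implements the full syntax."""
    m = re.fullmatch(r"\{\.(.+)\}", path)
    if m is None:
        raise ValueError("unsupported path: %s" % path)
    node = doc
    for key in m.group(1).split("."):
        if not isinstance(node, dict) or key not in node:
            return None  # treated as "not set" by a set: false test
        node = node[key]
    return node

kubelet_cfg = json.loads('{"authentication": {"anonymous": {"enabled": false}}}')
print(resolve("{.authentication.anonymous.enabled}", kubelet_cfg))  # False
print(resolve("{.readOnlyPort}", kubelet_cfg))                      # None
```

A `None` result corresponds to a `set: false` match, while a concrete value can be fed to a `compare:` operation such as `op: eq`.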

README.md

@@ -25,6 +25,8 @@ kube-bench supports the tests for Kubernetes as defined in the CIS Benchmarks 1.
By default kube-bench will determine the test set to run based on the Kubernetes version running on the machine.
There is also preliminary support for Red Hat's Openshift Hardening Guide for 3.10 and 3.11. Please note that kube-bench does not automatically detect Openshift - see below.
## Installation
You can choose to
@@ -47,14 +49,14 @@ You can even use your own configs by mounting them over the default ones in `/op
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t -v path/to/my-config.yaml:/opt/kube-bench/cfg/config.yaml aquasec/kube-bench:latest [master|node]
```
-> Note: the tests require either the kubelet or kubectl binary in the path in order to know the Kubernetes version. You can pass `-v $(which kubectl):/usr/bin/kubectl` to the above invocations to resolve this.
+> Note: the tests require either the kubelet or kubectl binary in the path in order to auto-detect the Kubernetes version. You can pass `-v $(which kubectl):/usr/bin/kubectl` to the above invocations to resolve this.
### Running in a kubernetes cluster
You can run kube-bench inside a pod, but it will need access to the host's PID namespace in order to check the running processes, as well as access to some directories on the host where config files and other files are stored.
Master nodes are automatically detected by kube-bench and will run master checks when possible.
-The detection is done by verifying that mandatory components for master are running. (see [config file](#configuration).
+The detection is done by verifying that mandatory components for master, as defined in the config files, are running (see [Configuration](#configuration)).
The supplied `job.yaml` file can be applied to run the tests as a job. For example:
@@ -72,7 +74,7 @@ NAME READY STATUS RESTARTS AGE
kube-bench-j76s9 0/1 Completed 0 11s
# The results are held in the pod's logs
-k logs kube-bench-j76s9
+kubectl logs kube-bench-j76s9
[INFO] 1 Master Node Security Configuration
[INFO] 1.1 API Server
...
@@ -84,6 +86,15 @@ To run the tests on the master node, the pod needs to be scheduled on that node.
The default labels applied to master nodes have changed since Kubernetes 1.11, so if you are using an older version you may need to modify the nodeSelector and tolerations to run the job on the master node.
### Running in an EKS cluster
There is a `job-eks.yaml` file for running the kube-bench node checks on an EKS cluster. **Note that you must update the image reference in `job-eks.yaml`.** Typically you will push the container image for kube-bench to ECR and refer to it there in the YAML file.
There are two significant differences on EKS:
* It uses [config files in JSON format](https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/)
* It's not possible to schedule jobs onto the master node, so master checks can't be performed
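Concretely, the required edit in `job-eks.yaml` amounts to pointing the container image at your ECR repository. A hypothetical excerpt (the field layout follows the standard Kubernetes Job spec; the registry placeholders are yours to fill in, so check the actual file in the repo for the exact layout):

```yaml
# Hypothetical excerpt of job-eks.yaml - only the image line needs changing.
spec:
  template:
    spec:
      containers:
        - name: kube-bench
          image: <aws-account-id>.dkr.ecr.<region>.amazonaws.com/kube-bench:latest
```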
### Installing from a container
This command copies the kube-bench binary and configuration files to your host from the Docker container:
@@ -112,6 +123,9 @@ go build -o kube-bench .
./kube-bench
```
## Running on OpenShift
kube-bench includes a set of test files for Red Hat's OpenShift hardening guide for OCP 3.10 and 3.11. To run this you will need to specify `--version ocp-3.10` when you run the `kube-bench` command (either directly or through YAML). This config version is valid for OCP 3.10 and 3.11.
## Configuration
@@ -190,6 +204,19 @@ tests:
value:
...
```
You can also define jsonpath and yamlpath tests using the following syntax:
```
tests:
- path:
set:
compare:
op:
value:
...
```
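For a concrete instance, check 2.1.2 in the new `cfg/1.11-json/node.yaml` below expresses the anonymous-auth requirement as a jsonpath test against the kubelet config file:

```yaml
tests:
  test_items:
    - path: "{.authentication.anonymous.enabled}"
      compare:
        op: eq
        value: false
      set: true
```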
Tests have various `operations` which are used to compare the output of audit commands for success.
These operations are:

cfg/1.11-json/config.yaml Normal file

@@ -0,0 +1,20 @@
---
# Config file for systems such as EKS where config is in JSON files
# Master nodes are controlled by EKS and not user-accessible
node:
kubernetes:
confs:
- "/var/lib/kubelet/kubeconfig"
kubeconfig:
- "/var/lib/kubelet/kubeconfig"
kubelet:
bins:
- "hyperkube kubelet"
- "kubelet"
defaultconf: "/etc/kubernetes/kubelet/kubelet-config.json"
defaultsvc: "/etc/systemd/system/kubelet.service"
defaultkubeconfig: "/var/lib/kubelet/kubeconfig"
proxy:
defaultkubeconfig: "/var/lib/kubelet/kubeconfig"

cfg/1.11-json/node.yaml Normal file

@@ -0,0 +1,508 @@
---
controls:
version: 1.11
id: 2
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 2.1
text: "Kubelet"
checks:
- id: 2.1.1
text: "Ensure that the --allow-privileged argument is set to false (Scored)"
audit: "ps -fC $kubeletbin"
tests:
test_items:
- flag: "--allow-privileged"
compare:
op: eq
value: false
set: true
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--allow-privileged=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.2
text: "Ensure that the --anonymous-auth argument is set to false (Scored)"
audit: "cat $kubeletconf"
tests:
test_items:
- path: "{.authentication.anonymous.enabled}"
compare:
op: eq
value: false
set: true
remediation: |
If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.3
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Scored)"
audit: "cat $kubeletconf"
tests:
test_items:
- path: "{.authorization.mode}"
compare:
op: noteq
value: "AlwaysAllow"
set: true
remediation: |
If using a Kubelet config file, edit the file to set authorization: mode to Webhook.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.4
text: "Ensure that the --client-ca-file argument is set as appropriate (Scored)"
audit: "cat $kubeletconf"
tests:
test_items:
- path: "{.authentication.x509.clientCAFile}"
set: true
remediation: |
If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.5
text: "Ensure that the --read-only-port argument is set to 0 (Scored)"
audit: "cat $kubeletconf"
tests:
bin_op: or
test_items:
- path: "{.readOnlyPort}"
set: false
- path: "{.readOnlyPort}"
compare:
op: eq
value: "0"
set: true
remediation: |
If using a Kubelet config file, edit the file to set readOnlyPort to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.6
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Scored)"
audit: "cat $kubeletconf"
tests:
bin_op: or
test_items:
- path: "{.streamingConnectionIdleTimeout}"
set: false
- path: "{.streamingConnectionIdleTimeout}"
compare:
op: noteq
value: 0
set: true
remediation: |
If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.7
text: "Ensure that the --protect-kernel-defaults argument is set to true (Scored)"
audit: "cat $kubeletconf"
tests:
test_items:
- path: "{.protectKernelDefaults}"
compare:
op: eq
value: true
set: true
remediation: |
If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.8
text: "Ensure that the --make-iptables-util-chains argument is set to true (Scored)"
audit: "cat $kubeletconf"
tests:
bin_op: or
test_items:
- path: "{.makeIPTablesUtilChains}"
set: false
- path: "{.makeIPTablesUtilChains}"
compare:
op: eq
value: true
set: true
remediation: |
If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.9
text: "Ensure that the --hostname-override argument is not set (Scored)"
audit: "cat $kubeletconf"
tests:
test_items:
- path: "{.hostnameOverride}"
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.10
text: "Ensure that the --event-qps argument is set to 0 (Scored)"
audit: "cat $kubeletconf"
tests:
test_items:
- path: "{.eventRecordQPS}"
compare:
op: eq
value: 0
set: true
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS: 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--event-qps=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.11
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)"
audit: "cat $kubeletconf"
tests:
bin_op: and
test_items:
- path: "{.tlsCertFile}"
set: true
- path: "{.tlsPrivateKeyFile}"
set: true
remediation: |
If using a Kubelet config file, edit the file to set tlsCertFile to the location of the certificate
file to use to identify this Kubelet, and tlsPrivateKeyFile to the location of the
corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.12
text: "Ensure that the --cadvisor-port argument is set to 0 (Scored)"
audit: "cat $kubeletconf"
tests:
bin_op: or
test_items:
- path: "{.cadvisorPort}"
compare:
op: eq
value: 0
set: true
- path: "{.cadvisorPort}"
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CADVISOR_ARGS variable.
--cadvisor-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.13
text: "Ensure that the --rotate-certificates argument is not set to false (Scored)"
audit: "cat $kubeletconf"
tests:
bin_op: or
test_items:
- path: "{.rotateCertificates}"
set: false
- path: "{.rotateCertificates}"
compare:
op: noteq
value: "false"
set: true
remediation: |
If using a Kubelet config file, edit the file to add the line rotateCertificates: true.
If using command line arguments, edit the kubelet service file $kubeletsvc
on each worker node and add --rotate-certificates=true argument to the KUBELET_CERTIFICATE_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.14
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)"
audit: "cat $kubeletconf"
tests:
test_items:
- path: "{.featureGates.RotateKubeletServerCertificate}"
compare:
op: eq
value: true
set: true
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 2.1.15
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Not Scored)"
audit: "cat $kubeletconf"
tests:
test_items:
- path: "{.tlsCipherSuites}"
compare:
op: eq
value: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
set: true
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
If using executable arguments, edit the kubelet service file $kubeletconf on each worker node and set the below parameter.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
scored: false
- id: 2.2
text: "Configuration Files"
checks:
- id: 2.2.1
text: "Ensure that the kubelet.conf file permissions are set to 644 or
more restrictive (Scored)"
audit: "/bin/sh -c 'if test -e $kubeletkubeconfig; then stat -c %a $kubeletkubeconfig; fi'"
tests:
bin_op: or
test_items:
- flag: "644"
compare:
op: eq
value: "644"
set: true
- flag: "640"
compare:
op: eq
value: "640"
set: true
- flag: "600"
compare:
op: eq
value: "600"
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker
node. For example,
chmod 644 $kubeletkubeconfig
scored: true
- id: 2.2.2
text: "Ensure that the kubelet.conf file ownership is set to root:root (Scored)"
audit: "/bin/sh -c 'if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: root:root
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker
node. For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 2.2.3
text: "Ensure that the kubelet service file permissions are set to 644 or
more restrictive (Scored)"
audit: "/bin/sh -c 'if test -e $kubeletsvc; then stat -c %a $kubeletsvc; fi'"
tests:
bin_op: or
test_items:
- flag: "644"
compare:
op: eq
value: 644
set: true
- flag: "640"
compare:
op: eq
value: "640"
set: true
- flag: "600"
compare:
op: eq
value: "600"
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker
node. For example,
chmod 644 $kubeletsvc
scored: true
- id: 2.2.4
text: "Ensure that the kubelet service file ownership is set to root:root (Scored)"
audit: "/bin/sh -c 'if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'"
tests:
test_items:
- flag: "root:root"
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker
node. For example,
chown root:root $kubeletsvc
scored: true
- id: 2.2.5
text: "Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Scored)"
audit: "/bin/sh -c 'if test -e $proxykubeconfig; then stat -c %a $proxykubeconfig; fi'"
tests:
bin_op: or
test_items:
- flag: "644"
compare:
op: eq
value: "644"
set: true
- flag: "640"
compare:
op: eq
value: "640"
set: true
- flag: "600"
compare:
op: eq
value: "600"
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker
node. For example,
chmod 644 $proxykubeconfig
scored: true
- id: 2.2.6
text: "Ensure that the proxy kubeconfig file ownership is set to root:root (Scored)"
audit: "/bin/sh -c 'if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker
node. For example,
chown root:root $proxykubeconfig
scored: true
- id: 2.2.7
text: "Ensure that the certificate authorities file permissions are set to
644 or more restrictive (Scored)"
type: manual
remediation: |
Run the following command to modify the file permissions of the --client-ca-file
chmod 644 <filename>
scored: true
- id: 2.2.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Scored)"
audit: "/bin/sh -c 'if test -e $ca-file; then stat -c %U:%G $ca-file; fi'"
type: manual
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: true
- id: 2.2.9
text: "Ensure that the kubelet configuration file ownership is set to root:root (Scored)"
audit: "/bin/sh -c 'if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'"
tests:
test_items:
- flag: "root:root"
set: true
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 2.2.10
text: "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored)"
audit: "/bin/sh -c 'if test -e $kubeletconf; then stat -c %a $kubeletconf; fi'"
tests:
bin_op: or
test_items:
- flag: "644"
compare:
op: eq
value: "644"
set: true
- flag: "640"
compare:
op: eq
value: "640"
set: true
- flag: "600"
compare:
op: eq
value: "600"
set: true
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
scored: true
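Several of the checks above (2.1.5, 2.1.6, 2.1.8, 2.1.13) combine two `test_items` with `bin_op: or`: the check passes if the key is absent from the config file (`set: false`), or present with a safe value. A simplified sketch of that logic for the read-only-port check (2.1.5), not kube-bench's actual test engine:

```python
def read_only_port_ok(cfg):
    """Mirror of check 2.1.5: pass when readOnlyPort is absent (set: false)
    OR present and equal to 0 (op: eq, value: "0"). Simplified sketch, not
    kube-bench's real evaluator."""
    absent = "readOnlyPort" not in cfg
    is_zero = cfg.get("readOnlyPort") == 0
    return absent or is_zero

print(read_only_port_ok({}))                      # True: key not set
print(read_only_port_ok({"readOnlyPort": 0}))     # True: explicitly 0
print(read_only_port_ok({"readOnlyPort": 10255})) # False: read-only port open
```

The `bin_op: or` pattern is what lets these checks accept the kubelet's secure defaults without requiring every field to be spelled out in the config file.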

File diff suppressed because it is too large


@ -1,376 +1,376 @@
--- ---
controls: controls:
id: 2 id: 2
text: "Worker Node Security Configuration" text: "Worker Node Security Configuration"
type: "node" type: "node"
groups: groups:
- id: 2.1 - id: 7
text: "Kubelet" text: "Kubelet"
checks: checks:
- id: 2.1.1 - id: 7.1
text: "Ensure that the --allow-privileged argument is set to false (Scored)" text: "Use Security Context Constraints to manage privileged containers as needed"
type: "skip" type: "skip"
scored: true scored: true
- id: 2.1.2 - id: 7.2
text: "Ensure that the --anonymous-auth argument is set to false (Scored)" text: "Ensure anonymous-auth is not disabled"
type: "skip" type: "skip"
scored: true scored: true
- id: 2.1.3 - id: 7.3
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Scored)" text: "Verify that the --authorization-mode argument is set to WebHook"
audit: "grep -A1 authorization-mode /etc/origin/node/node-config.yaml" audit: "grep -A1 authorization-mode /etc/origin/node/node-config.yaml"
tests: tests:
bin_op: or bin_op: or
test_items: test_items:
- flag: "authorization-mode" - flag: "authorization-mode"
set: false set: false
- flag: "authorization-mode: Webhook" - flag: "authorization-mode: Webhook"
compare: compare:
op: has op: has
value: "Webhook" value: "Webhook"
set: true set: true
remediation: | remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and remove authorization-mode under Edit the Openshift node config file /etc/origin/node/node-config.yaml and remove authorization-mode under
kubeletArguments in /etc/origin/node/node-config.yaml or set it to "Webhook". kubeletArguments in /etc/origin/node/node-config.yaml or set it to "Webhook".
scored: true scored: true
- id: 2.1.4 - id: 7.4
text: "Ensure that the --client-ca-file argument is set as appropriate (Scored)" text: "Verify the OpenShift default for the client-ca-file argument"
audit: "grep -A1 client-ca-file /etc/origin/node/node-config.yaml" audit: "grep -A1 client-ca-file /etc/origin/node/node-config.yaml"
tests: tests:
test_items: test_items:
- flag: "client-ca-file" - flag: "client-ca-file"
set: false set: false
remediation: | remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and remove any configuration returned by the following: Edit the Openshift node config file /etc/origin/node/node-config.yaml and remove any configuration returned by the following:
grep -A1 client-ca-file /etc/origin/node/node-config.yaml grep -A1 client-ca-file /etc/origin/node/node-config.yaml
Reset to the OpenShift default. Reset to the OpenShift default.
See https://github.com/openshift/openshift-ansible/blob/release-3.10/roles/openshift_node_group/templates/node-config.yaml.j2#L65 See https://github.com/openshift/openshift-ansible/blob/release-3.10/roles/openshift_node_group/templates/node-config.yaml.j2#L65
The config file does not have this defined in kubeletArgument, but in PodManifestConfig. The config file does not have this defined in kubeletArgument, but in PodManifestConfig.
scored: true scored: true
- id: 2.1.5 - id: 7.5
text: "Ensure that the --read-only-port argument is set to 0 (Scored)" text: "Verify the OpenShift default setting for the read-only-port argument"
audit: "grep -A1 read-only-port /etc/origin/node/node-config.yaml" audit: "grep -A1 read-only-port /etc/origin/node/node-config.yaml"
tests: tests:
bin_op: or bin_op: or
test_items: test_items:
- flag: "read-only-port" - flag: "read-only-port"
set: false set: false
- flag: "read-only-port: 0" - flag: "read-only-port: 0"
compare: compare:
op: has op: has
value: "0" value: "0"
set: true set: true
remediation: | remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and removed so that the OpenShift default is applied. Edit the Openshift node config file /etc/origin/node/node-config.yaml and removed so that the OpenShift default is applied.
scored: true scored: true
- id: 2.1.6 - id: 7.6
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Scored)" text: "Adjust the streaming-connection-idle-timeout argument"
audit: "grep -A1 streaming-connection-idle-timeout /etc/origin/node/node-config.yaml" audit: "grep -A1 streaming-connection-idle-timeout /etc/origin/node/node-config.yaml"
tests: tests:
bin_op: or bin_op: or
test_items: test_items:
- flag: "streaming-connection-idle-timeout" - flag: "streaming-connection-idle-timeout"
set: false set: false
- flag: "0" - flag: "5m"
set: false set: false
remediation: | remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and set the streaming-connection-timeout Edit the Openshift node config file /etc/origin/node/node-config.yaml and set the streaming-connection-timeout
value like the following in node-config.yaml. value like the following in node-config.yaml.
kubeletArguments: kubeletArguments:
 streaming-connection-idle-timeout:  streaming-connection-idle-timeout:
   - "5m"    - "5m"
scored: true scored: true
- id: 2.1.7 - id: 7.7
text: "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" text: "Verify the OpenShift defaults for the protect-kernel-defaults argument"
type: "skip" type: "skip"
scored: true scored: true
- id: 2.1.8 - id: 7.8
text: "Ensure that the --make-iptables-util-chains argument is set to true (Scored)" text: "Verify the OpenShift default value of true for the make-iptables-util-chains argument"
audit: "grep -A1 make-iptables-util-chains /etc/origin/node/node-config.yaml" audit: "grep -A1 make-iptables-util-chains /etc/origin/node/node-config.yaml"
tests: tests:
bin_op: or bin_op: or
test_items: test_items:
- flag: "make-iptables-util-chains" - flag: "make-iptables-util-chains"
set: false set: false
- flag: "make-iptables-util-chains: true" - flag: "make-iptables-util-chains: true"
compare: compare:
op: has op: has
value: "true" value: "true"
set: true set: true
remediation: | remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and reset make-iptables-util-chains to the OpenShift Edit the Openshift node config file /etc/origin/node/node-config.yaml and reset make-iptables-util-chains to the OpenShift
default value of true. default value of true.
scored: true scored: true
id: 2.1.9 - id: 7.9
text: "Ensure that the --keep-terminated-pod-volumeskeep-terminated-pod-volumes argument is set to false (Scored)" text: "Verify that the --keep-terminated-pod-volumes argument is set to false"
audit: "grep -A1 keep-terminated-pod-volumes /etc/origin/node/node-config.yaml" audit: "grep -A1 keep-terminated-pod-volumes /etc/origin/node/node-config.yaml"
tests: tests:
test_items: test_items:
- flag: "keep-terminated-pod-volumes: false" - flag: "keep-terminated-pod-volumes: false"
compare: compare:
op: has op: has
value: "false" value: "false"
set: true set: true
remediation: | remediation: |
Reset to the OpenShift defaults Reset to the OpenShift defaults
scored: true scored: true
- id: 2.1.10 - id: 7.10
text: "Ensure that the --hostname-override argument is not set (Scored)" text: "Verify the OpenShift defaults for the hostname-override argument"
type: "skip" type: "skip"
scored: true scored: true
- id: 2.1.11 - id: 7.11
text: "Ensure that the --event-qps argument is set to 0 (Scored)" text: "Set the --event-qps argument to 0"
audit: "grep -A1 event-qps /etc/origin/node/node-config.yaml" audit: "grep -A1 event-qps /etc/origin/node/node-config.yaml"
tests: tests:
bin_op: or bin_op: or
test_items: test_items:
- flag: "event-qps" - flag: "event-qps"
set: false set: false
- flag: "event-qps: 0" - flag: "event-qps: 0"
compare: compare:
op: has op: has
value: "0" value: "0"
set: true set: true
remediation: | remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml set the event-qps argument to 0 in Edit the Openshift node config file /etc/origin/node/node-config.yaml set the event-qps argument to 0 in
the kubeletArguments section of. the kubeletArguments section of.
scored: true scored: true
- id: 2.1.12 - id: 7.12
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)" text: "Verify the OpenShift cert-dir flag for HTTPS traffic"
audit: "grep -A1 cert-dir /etc/origin/node/node-config.yaml" audit: "grep -A1 cert-dir /etc/origin/node/node-config.yaml"
tests: tests:
test_items: test_items:
- flag: "/etc/origin/node/certificates" - flag: "/etc/origin/node/certificates"
compare: compare:
op: has op: has
value: "/etc/origin/node/certificates" value: "/etc/origin/node/certificates"
set: true set: true
remediation: | remediation: |
Reset to the OpenShift default values. Reset to the OpenShift default values.
scored: true scored: true
- id: 7.13
  text: "Verify the OpenShift default of 0 for the cadvisor-port argument"
  audit: "grep -A1 cadvisor-port /etc/origin/node/node-config.yaml"
  tests:
    bin_op: or
    test_items:
    - flag: "cadvisor-port"
      set: false
    - flag: "cadvisor-port: 0"
      compare:
        op: has
        value: "0"
      set: true
  remediation: |
    Edit the Openshift node config file /etc/origin/node/node-config.yaml and remove the cadvisor-port flag
    if it is set in the kubeletArguments section.
  scored: true
- id: 7.14
  text: "Verify that the RotateKubeletClientCertificate argument is set to true"
  audit: "grep -B1 RotateKubeletClientCertificate=true /etc/origin/node/node-config.yaml"
  tests:
    test_items:
    - flag: "RotateKubeletClientCertificate=true"
      compare:
        op: has
        value: "true"
      set: true
  remediation: |
    Edit the Openshift node config file /etc/origin/node/node-config.yaml and set RotateKubeletClientCertificate to true.
  scored: true
- id: 7.15
  text: "Verify that the RotateKubeletServerCertificate argument is set to true"
  audit: "grep -B1 RotateKubeletServerCertificate=true /etc/origin/node/node-config.yaml"
  tests:
    test_items:
    - flag: "RotateKubeletServerCertificate=true"
      compare:
        op: has
        value: "true"
      set: true
  remediation: |
    Edit the Openshift node config file /etc/origin/node/node-config.yaml and set RotateKubeletServerCertificate to true.
  scored: true
- id: 8
  text: "Configuration Files"
  checks:
  - id: 8.1
    text: "Verify the OpenShift default permissions for the kubelet.conf file"
    audit: "stat -c %a /etc/origin/node/node.kubeconfig"
    tests:
      bin_op: or
      test_items:
      - flag: "644"
        compare:
          op: eq
          value: "644"
        set: true
      - flag: "640"
        compare:
          op: eq
          value: "640"
        set: true
      - flag: "600"
        compare:
          op: eq
          value: "600"
        set: true
    remediation: |
      Run the below command on each worker node.
      chmod 644 /etc/origin/node/node.kubeconfig
    scored: true
  - id: 8.2
    text: "Verify the kubeconfig file ownership of root:root"
    audit: "stat -c %U:%G /etc/origin/node/node.kubeconfig"
    tests:
      test_items:
      - flag: "root:root"
        compare:
          op: eq
          value: root:root
        set: true
    remediation: |
      Run the below command on each worker node.
      chown root:root /etc/origin/node/node.kubeconfig
    scored: true
  - id: 8.3
    text: "Verify the kubelet service file permissions of 644"
    audit: "stat -c %a /etc/systemd/system/atomic-openshift-node.service"
    tests:
      bin_op: or
      test_items:
      - flag: "644"
        compare:
          op: eq
          value: "644"
        set: true
      - flag: "640"
        compare:
          op: eq
          value: "640"
        set: true
      - flag: "600"
        compare:
          op: eq
          value: "600"
        set: true
    remediation: |
      Run the below command on each worker node.
      chmod 644 /etc/systemd/system/atomic-openshift-node.service
    scored: true
  - id: 8.4
    text: "Verify the kubelet service file ownership of root:root"
    audit: "stat -c %U:%G /etc/systemd/system/atomic-openshift-node.service"
    tests:
      test_items:
      - flag: "root:root"
        compare:
          op: eq
          value: root:root
        set: true
    remediation: |
      Run the below command on each worker node.
      chown root:root /etc/systemd/system/atomic-openshift-node.service
    scored: true
  - id: 8.5
    text: "Verify the OpenShift default permissions for the proxy kubeconfig file"
    audit: "stat -c %a /etc/origin/node/node.kubeconfig"
    tests:
      bin_op: or
      test_items:
      - flag: "644"
        compare:
          op: eq
          value: "644"
        set: true
      - flag: "640"
        compare:
          op: eq
          value: "640"
        set: true
      - flag: "600"
        compare:
          op: eq
          value: "600"
        set: true
    remediation: |
      Run the below command on each worker node.
      chmod 644 /etc/origin/node/node.kubeconfig
    scored: true
  - id: 8.6
    text: "Verify the proxy kubeconfig file ownership of root:root"
    audit: "stat -c %U:%G /etc/origin/node/node.kubeconfig"
    tests:
      test_items:
      - flag: "root:root"
        compare:
          op: eq
          value: root:root
        set: true
    remediation: |
      Run the below command on each worker node.
      chown root:root /etc/origin/node/node.kubeconfig
    scored: true
  - id: 8.7
    text: "Verify the OpenShift default permissions for the certificate authorities file."
    audit: "stat -c %a /etc/origin/node/client-ca.crt"
    tests:
      bin_op: or
      test_items:
      - flag: "644"
        compare:
          op: eq
          value: "644"
        set: true
      - flag: "640"
        compare:
          op: eq
          value: "640"
        set: true
      - flag: "600"
        compare:
          op: eq
          value: "600"
        set: true
    remediation: |
      Run the below command on each worker node.
      chmod 644 /etc/origin/node/client-ca.crt
    scored: true
  - id: 8.8
    text: "Verify the client certificate authorities file ownership of root:root"
    audit: "stat -c %U:%G /etc/origin/node/client-ca.crt"
    tests:
      test_items:
      - flag: "root:root"
        compare:
          op: eq
          value: root:root
        set: true
    remediation: |
      Run the below command on each worker node.
      chown root:root /etc/origin/node/client-ca.crt
    scored: true


@@ -2,6 +2,8 @@ package check

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"

	yaml "gopkg.in/yaml.v2"

@@ -11,31 +13,28 @@ const cfgDir = "../cfg/"

// validate that the files we're shipping are valid YAML
func TestYamlFiles(t *testing.T) {
	err := filepath.Walk(cfgDir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			t.Fatalf("failure accessing path %q: %v\n", path, err)
		}
		if !info.IsDir() {
			t.Logf("reading file: %s", path)
			in, err := ioutil.ReadFile(path)
			if err != nil {
				t.Fatalf("error opening file %s: %v", path, err)
			}

			c := new(Controls)
			err = yaml.Unmarshal(in, c)
			if err == nil {
				t.Logf("YAML file successfully unmarshalled: %s", path)
			} else {
				t.Fatalf("failed to load YAML from %s: %v", path, err)
			}
		}
		return nil
	})
	if err != nil {
		t.Fatalf("failure walking cfg dir: %v\n", err)
	}
}


@@ -157,7 +157,6 @@ groups:
          value: Something
        set: true
  - id: 14
    text: "check that flag some-arg is set to some-val with ':' separator"
    tests:
@@ -167,3 +166,134 @@
          op: eq
          value: some-val
        set: true
  - id: 15
    text: "jsonpath correct value on field"
    tests:
      test_items:
      - path: "{.readOnlyPort}"
        compare:
          op: eq
          value: 15000
        set: true
      - path: "{.readOnlyPort}"
        compare:
          op: gte
          value: 15000
        set: true
      - path: "{.readOnlyPort}"
        compare:
          op: lte
          value: 15000
        set: true
  - id: 16
    text: "jsonpath correct case-sensitive value on string field"
    tests:
      test_items:
      - path: "{.stringValue}"
        compare:
          op: noteq
          value: "None"
        set: true
      - path: "{.stringValue}"
        compare:
          op: noteq
          value: "webhook,Something,RBAC"
        set: true
      - path: "{.stringValue}"
        compare:
          op: eq
          value: "WebHook,Something,RBAC"
        set: true
  - id: 17
    text: "jsonpath correct value on boolean field"
    tests:
      test_items:
      - path: "{.trueValue}"
        compare:
          op: noteq
          value: somethingElse
        set: true
      - path: "{.trueValue}"
        compare:
          op: noteq
          value: false
        set: true
      - path: "{.trueValue}"
        compare:
          op: eq
          value: true
        set: true
  - id: 18
    text: "jsonpath field absent"
    tests:
      test_items:
      - path: "{.notARealField}"
        set: false
  - id: 19
    text: "jsonpath correct value on nested field"
    tests:
      test_items:
      - path: "{.authentication.anonymous.enabled}"
        compare:
          op: eq
          value: "false"
        set: true
  - id: 20
    text: "yamlpath correct value on field"
    tests:
      test_items:
      - path: "{.readOnlyPort}"
        compare:
          op: gt
          value: 14999
        set: true
  - id: 21
    text: "yamlpath field absent"
    tests:
      test_items:
      - path: "{.fieldThatIsUnset}"
        set: false
  - id: 22
    text: "yamlpath correct value on nested field"
    tests:
      test_items:
      - path: "{.authentication.anonymous.enabled}"
        compare:
          op: eq
          value: "false"
        set: true
  - id: 23
    text: "path on invalid json"
    tests:
      test_items:
      - path: "{.authentication.anonymous.enabled}"
        compare:
          op: eq
          value: "false"
        set: true
  - id: 24
    text: "path with broken expression"
    tests:
      test_items:
      - path: "{.missingClosingBrace"
        set: true
  - id: 25
    text: "yamlpath on invalid yaml"
    tests:
      test_items:
      - path: "{.authentication.anonymous.enabled}"
        compare:
          op: eq
          value: "false"
        set: true


@@ -15,11 +15,16 @@
package check

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
	"strconv"
	"strings"

	yaml "gopkg.in/yaml.v2"
	"k8s.io/client-go/util/jsonpath"
)

// test:
@@ -38,6 +43,7 @@ const (
type testItem struct {
	Flag   string
	Path   string
	Output string
	Value  string
	Set    bool
@@ -54,33 +60,79 @@ type testOutput struct {
	actualResult string
}

func failTestItem(s string) *testOutput {
	return &testOutput{testResult: false, actualResult: s}
}

func (t *testItem) execute(s string) *testOutput {
	result := &testOutput{}
	var match bool
	var flagVal string

	if t.Flag != "" {
		// Flag comparison: check if the flag is present in the input
		match = strings.Contains(s, t.Flag)
	} else {
		// Path != "" - we don't know whether it's YAML or JSON but
		// we can just try one then the other
		buf := new(bytes.Buffer)
		var jsonInterface interface{}

		if t.Path != "" {
			err := json.Unmarshal([]byte(s), &jsonInterface)
			if err != nil {
				err := yaml.Unmarshal([]byte(s), &jsonInterface)
				if err != nil {
					fmt.Fprintf(os.Stderr, "failed to load YAML or JSON from provided input \"%s\": %v\n", s, err)
					return failTestItem("failed to load YAML or JSON")
				}
			}
		}

		// Parse the jsonpath/yamlpath expression...
		j := jsonpath.New("jsonpath")
		j.AllowMissingKeys(true)
		err := j.Parse(t.Path)
		if err != nil {
			fmt.Fprintf(os.Stderr, "unable to parse path expression \"%s\": %v\n", t.Path, err)
			return failTestItem("unable to parse path expression")
		}

		err = j.Execute(buf, jsonInterface)
		if err != nil {
			fmt.Fprintf(os.Stderr, "error executing path expression \"%s\": %v\n", t.Path, err)
			return failTestItem("error executing path expression")
		}

		jsonpathResult := fmt.Sprintf("%s", buf)
		match = (jsonpathResult != "")
		flagVal = jsonpathResult
	}

	if t.Set {
		isset := match

		if isset && t.Compare.Op != "" {
			if t.Flag != "" {
				// Expects flags in the form;
				// --flag=somevalue
				// flag: somevalue
				// --flag
				// somevalue
				pttn := `(` + t.Flag + `)(=|: *)*([^\s]*) *`
				flagRe := regexp.MustCompile(pttn)
				vals := flagRe.FindStringSubmatch(s)

				if len(vals) > 0 {
					if vals[3] != "" {
						flagVal = vals[3]
					} else {
						flagVal = vals[1]
					}
				} else {
					fmt.Fprintf(os.Stderr, "invalid flag in testitem definition")
					os.Exit(1)
				}
			}
		}

		result.actualResult = strings.ToLower(flagVal)


@@ -120,6 +120,38 @@ func TestTestExecute(t *testing.T) {
			controls.Groups[0].Checks[14],
			"2:45 kube-apiserver some-arg:some-val --admission-control=Something ---audit-log-maxage=40",
		},
		{
			controls.Groups[0].Checks[15],
			"{\"readOnlyPort\": 15000}",
		},
		{
			controls.Groups[0].Checks[16],
			"{\"stringValue\": \"WebHook,Something,RBAC\"}",
		},
		{
			controls.Groups[0].Checks[17],
			"{\"trueValue\": true}",
		},
		{
			controls.Groups[0].Checks[18],
			"{\"readOnlyPort\": 15000}",
		},
		{
			controls.Groups[0].Checks[19],
			"{\"authentication\": { \"anonymous\": {\"enabled\": false}}}",
		},
		{
			controls.Groups[0].Checks[20],
			"readOnlyPort: 15000",
		},
		{
			controls.Groups[0].Checks[21],
			"readOnlyPort: 15000",
		},
		{
			controls.Groups[0].Checks[22],
			"authentication:\n  anonymous:\n    enabled: false",
		},
	}

	for _, c := range cases {
@@ -129,3 +161,31 @@ func TestTestExecute(t *testing.T) {
		}
	}
}

func TestTestExecuteExceptions(t *testing.T) {
	cases := []struct {
		*Check
		str string
	}{
		{
			controls.Groups[0].Checks[23],
			"this is not valid json {} at all",
		},
		{
			controls.Groups[0].Checks[24],
			"{\"key\": \"value\"}",
		},
		{
			controls.Groups[0].Checks[25],
			"broken } yaml\nenabled: true",
		},
	}

	for _, c := range cases {
		res := c.Tests.execute(c.str).testResult
		if res {
			t.Errorf("%s, expected:%v, got:%v\n", c.Text, false, res)
		}
	}
}

job-eks.yaml Normal file

@@ -0,0 +1,34 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true
      containers:
      - name: kube-bench
        # Push the image to your ECR and then refer to it here
        image: <ID.dkr.ecr.region.amazonaws.com/aquasec/kube-bench:ref>
        command: ["kube-bench", "--version", "1.11-json"]
        volumeMounts:
        - name: var-lib-kubelet
          mountPath: /var/lib/kubelet
        - name: etc-systemd
          mountPath: /etc/systemd
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
      restartPolicy: Never
      volumes:
      - name: var-lib-kubelet
        hostPath:
          path: "/var/lib/kubelet"
      - name: etc-systemd
        hostPath:
          path: "/etc/systemd"
      - name: etc-kubernetes
        hostPath:
          path: "/etc/kubernetes"
      - name: usr-bin
        hostPath:
          path: "/usr/bin"