## Commands
| Command | Description |
|---|---|
| `help` | Prints help about any command |
| `run` | List of components to run |
| `version` | Print kube-bench version |
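As a quick illustration (a sketch based on the commands listed above), both `version` and `help` can be invoked directly:

```
kube-bench version
kube-bench help run
```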
## Flags
| Flag | Description |
|---|---|
| `--alsologtostderr` | log to standard error as well as files |
| `--asff` | Send findings to AWS Security Hub for any benchmark tests that fail or that generate a warning. See [this page][kube-bench-aws-security-hub] for more information on how to enable the kube-bench integration with AWS Security Hub. |
| `--benchmark` | Manually specify CIS benchmark version |
| `-c`, `--check` | A comma-delimited list of checks to run, as specified in the Benchmark document. |
| `--config` | config file (default is `./cfg/config.yaml`) |
| `--exit-code` | Specify the exit code to return when one or more checks fail |
| `--group` | Run all the checks under this comma-delimited list of groups. |
| `--include-test-output` | Prints the actual result when a test fails. |
| `--json` | Prints the results as JSON |
| `--junit` | Prints the results as JUnit |
| `--log_backtrace_at traceLocation` | when logging hits line file:N, emit a stack trace (default :0) |
| `--logtostderr` | log to standard error instead of files |
| `--noremediations` | Disable printing of the remediations section to stdout. |
| `--noresults` | Disable printing of the results section to stdout. |
| `--nototals` | Disable calculating and printing of totals for failed, passed, etc. checks across all sections |
| `--outputfile` | Writes the JSON results to an output file |
| `--pgsql` | Save the results to PostgreSQL |
| `--scored` | Run the scored CIS checks (default true) |
| `--skip string` | Comma-separated list of checks to be skipped |
| `--stderrthreshold severity` | logs at or above this threshold go to stderr (default 2) |
| `-v`, `--v Level` | log level for V logs (default 0) |
| `--version string` | Manually specify Kubernetes version, automatically detected if unset |
| `--vmodule moduleSpec` | comma-separated list of pattern=N settings for file-filtered logging |
## Examples
### Report kube-bench findings to AWS Security Hub
You can configure kube-bench with the `--asff` option to send findings to AWS Security Hub for any benchmark tests that fail or that generate a warning. See [this page][kube-bench-aws-security-hub] for more information on how to enable the kube-bench integration with AWS Security Hub.
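As a minimal sketch (assuming the AWS account, region, and cluster details have already been configured for the integration as described on that page), the flag is simply added to the normal invocation:

```
kube-bench --asff
```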
### Specifying the benchmark or Kubernetes version
`kube-bench` uses the Kubernetes API, or access to the `kubectl` or `kubelet` executables, to try to determine the Kubernetes version, and hence which benchmark to run. If you wish to override this, or if none of these methods are available, you can specify either the Kubernetes version or the CIS Benchmark as a command line parameter.

You can specify a particular version of Kubernetes by setting the `--version` flag or the `KUBE_BENCH_VERSION` environment variable. The value of `--version` takes precedence over the value of `KUBE_BENCH_VERSION`.
For example, run kube-bench using the tests for Kubernetes version 1.13:

```
kube-bench --version 1.13
```
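The same version can be supplied through the environment variable instead of the flag; a sketch:

```
KUBE_BENCH_VERSION=1.13 kube-bench
```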
You can specify `--benchmark` to run a specific CIS Benchmark version:

```
kube-bench --benchmark cis-1.5
```
Note: it is an error to specify both the `--version` and `--benchmark` flags together.
### Specifying Benchmark sections
If you want to run specific CIS Benchmark sections (i.e. master, node, etcd, etc.), you can use the `run --targets` subcommand.

```
kube-bench run --targets master,node
```

or

```
kube-bench run --targets master,node,etcd,policies
```
If no targets are specified, `kube-bench` will determine the appropriate targets based on the CIS Benchmark version and the components detected on the node. The detection is done by verifying which components are running, as defined in the config files (see Configuration).
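Targets can be combined with the flags described above; for example, a sketch that pins both the benchmark version and the sections to run:

```
kube-bench run --targets node --benchmark cis-1.5
```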
### Run specific check or group
`kube-bench` supports running individual checks by specifying the check's `id` as a comma-delimited list on the command line with the `--check` | `-c` flag.

```
kube-bench --check="1.1.1,1.1.2,1.2.1,1.3.3"
```
`kube-bench` supports running all checks under a group by specifying the group's `id` as a comma-delimited list on the command line with the `--group` | `-g` flag.

```
kube-bench --group="1.1,2.2"
```

This will run all checks 1.1.X and 2.2.X.
### Skip specific check or group
`kube-bench` supports skipping checks or groups by specifying their `id`s as a comma-delimited list on the command line with the `--skip` flag.

```
kube-bench --skip="1.1,1.2.1,1.3.3"
```

This will skip the 1.1.X group and the individual checks 1.2.1 and 1.3.3. Skipped checks return [INFO] output.
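`--skip` can be combined with the other flags; for instance, a sketch that audits only the master components while skipping one group:

```
kube-bench run --targets master --skip="1.1"
```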
### Exit code
`kube-bench` supports returning a unique exit code when one or more checks fail.

```
kube-bench --exit-code 42
```

This will return 42 if one or more checks failed, and 0 if none failed.

Note: [WARN] is not [FAIL].
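This is mainly useful in scripts and CI pipelines; a minimal sketch (the message and structure are illustrative):

```
# Treat any failed check as a pipeline failure
kube-bench --exit-code 42
if [ $? -eq 42 ]; then
  echo "kube-bench reported one or more failed checks"
  exit 1
fi
```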
### Output manipulation flags
There are four output states:
- [PASS] indicates that the test was run successfully, and passed.
- [FAIL] indicates that the test was run successfully, and failed. The remediation output describes how to correct the configuration, or includes an error message describing why the test could not be run.
- [WARN] means this test needs further attention; for example, it is a test that needs to be run manually. Check the remediation output for further information.
- [INFO] is informational output that needs no further action.
Note:
- If the test is Manual, this always generates WARN (because the user has to run it manually)
- If the test is Scored, and kube-bench was unable to run the test, this generates FAIL (because the test has not been passed, and as a Scored test, if it doesn't pass then it must be considered a failure).
- If the test is Not Scored, and kube-bench was unable to run the test, this generates WARN.
- If the test is Scored, its type is empty, and there are no `test_items` present, it generates a WARN. This is to highlight tests that appear to be incompletely defined.
`kube-bench` supports multiple output manipulation flags.

`kube-bench --include-test-output` will print the output of failing checks in the results section:

```
[INFO] 1 Master Node Security Configuration
[INFO] 1.1 Master Node Configuration Files
[FAIL] 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
**permissions=777**
```
Note: `--noresults`, `--noremediations` and `--include-test-output` will not affect the JSON output, only stdout. Only `--nototals` affects the JSON output, because it does not call the function that calculates the totals.
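For example, a sketch that writes the JSON results to a file without the totals section (the file name is illustrative):

```
kube-bench --json --nototals --outputfile results.json
```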
## Troubleshooting
Running `kube-bench` with the `-v 3` parameter will generate debug logs that can be very helpful for debugging problems.

If you are using one of the example `job*.yaml` files, you will need to edit the `command` field, for example `["kube-bench", "-v", "3"]`. Once the job has run, the logs can be retrieved using `kubectl logs` on the job's pod.
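A sketch of retrieving those logs, assuming the job was created from the example manifest and is named `kube-bench` in the current namespace:

```
kubectl logs job/kube-bench
```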