
Fixes Issue 269 - Numbering to use CIS Versions (#511)

* starting benchmark flag

* Revert "starting benchmark flag"

This reverts commit 58fc948626.

* fixes issue #269

* add more unit tests

* fix bug

* Update cmd/common.go

Co-Authored-By: Liz Rice <liz@lizrice.com>

* fixes as per PR review

* fixes as per PR review

* adds more tests

* fixed tests

* changes as per PR Review

* changes as per PR Review

* updated README

* Update README.md

Co-Authored-By: Liz Rice <liz@lizrice.com>

* Update README.md

Co-Authored-By: Liz Rice <liz@lizrice.com>

* Update README.md

Co-Authored-By: Liz Rice <liz@lizrice.com>

* Update README.md

Co-Authored-By: Liz Rice <liz@lizrice.com>

* changes as per PR review
Roberto Rojas 2019-11-05 16:31:27 -05:00 committed by GitHub
parent 8276e521d4
commit 7ca438b618
19 changed files with 424 additions and 1953 deletions


@@ -19,39 +19,43 @@ Tests are configured with YAML files, making this tool easy to update as test sp
Table of Contents
=================
* [CIS Kubernetes Benchmark support](#cis-kubernetes-benchmark-support)
* [Installation](#installation)
* [Running kube-bench](#running-kube-bench)
* [Running inside a container](#running-inside-a-container)
* [Running in a kubernetes cluster](#running-in-a-kubernetes-cluster)
* [Running in an EKS cluster](#running-in-an-eks-cluster)
* [Installing from a container](#installing-from-a-container)
* [Installing from sources](#installing-from-sources)
* [Running on OpenShift](#running-on-openshift)
* [Output](#output)
* [Configuration](#configuration)
* [Test config YAML representation](#test-config-yaml-representation)
* [Omitting checks](#omitting-checks)
* [Roadmap](#roadmap)
* [Testing locally with kind](#testing-locally-with-kind)
* [Contributing](#contributing)
* [Bugs](#bugs)
* [Features](#features)
* [Pull Requests](#pull-requests)
- [Table of Contents](#table-of-contents)
- [CIS Kubernetes Benchmark support](#cis-kubernetes-benchmark-support)
- [Installation](#installation)
- [Running kube-bench](#running-kube-bench)
- [Running inside a container](#running-inside-a-container)
- [Running in a Kubernetes cluster](#running-in-a-kubernetes-cluster)
- [Running in an EKS cluster](#running-in-an-eks-cluster)
- [Installing from a container](#installing-from-a-container)
- [Installing from sources](#installing-from-sources)
- [Running on OpenShift](#running-on-openshift)
- [Output](#output)
- [Configuration](#configuration)
- [Test config YAML representation](#test-config-yaml-representation)
- [Omitting checks](#omitting-checks)
- [Roadmap](#roadmap)
- [Testing locally with kind](#testing-locally-with-kind)
- [Contributing](#contributing)
- [Bugs](#bugs)
- [Features](#features)
- [Pull Requests](#pull-requests)
## CIS Kubernetes Benchmark support
kube-bench supports the tests for Kubernetes as defined in the CIS Benchmarks 1.3.0 to 1.4.0 respectively.
kube-bench supports the tests for Kubernetes as defined in the CIS Benchmarks 1.3.0 to 1.4.1 respectively.
| CIS Kubernetes Benchmark | kube-bench config | Kubernetes versions |
|---|---|---|
| 1.3.0| 1.11 | 1.11-1.12 |
| 1.4.1| 1.13 | 1.13- |
| 1.3.0| cis-1.3 | 1.11-1.12 |
| 1.4.1| cis-1.4 | 1.13- |
By default kube-bench will determine the test set to run based on the Kubernetes version running on the machine.
There is also preliminary support for Red Hat's OpenShift Hardening Guide for 3.10 and 3.11. Please note that kube-bench does not automatically detect OpenShift - see below.
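A minimal sketch of that auto-detection idea, assuming `kubectl version --short` prints a line such as `Server Version: v1.13.10` (this is a simplified stand-in, not the project's actual `getKubeVersion` implementation):

```
package main

import (
    "fmt"
    "os/exec"
    "regexp"
)

// detectKubeMinorVersion asks kubectl for the server version and keeps only
// the "major.minor" part, which is the granularity the version mapping uses.
func detectKubeMinorVersion() (string, error) {
    out, err := exec.Command("kubectl", "version", "--short").CombinedOutput()
    if err != nil {
        return "", fmt.Errorf("unable to run kubectl: %v", err)
    }
    // Expect a line such as "Server Version: v1.13.10".
    m := regexp.MustCompile(`Server Version: v?(\d+\.\d+)`).FindSubmatch(out)
    if m == nil {
        return "", fmt.Errorf("could not parse kubectl output: %q", out)
    }
    return string(m[1]), nil
}

func main() {
    v, err := detectKubeMinorVersion()
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("detected Kubernetes version:", v) // e.g. "1.13"
}
```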
## Installation
You can choose to
@@ -78,16 +82,31 @@ For example, run kube-bench against a master with version auto-detection:
kube-bench master
```
Or run kube-bench against a node with the node `controls` for Kubernetes version 1.13:
```
kube-bench node --version 1.13
```
`controls` for the various versions of Kubernetes can be found in directories
with same name as the Kubernetes versions under `cfg/`, for example `cfg/1.13`.
`controls` are also organized by distribution under the `cfg` directory for
example `cfg/ocp-3.10`.
`kube-bench` will map the `--version` to the corresponding CIS Benchmark version as indicated by the version mapping table above.
For example, if you specify:
```
kube-bench node --version 1.13
```
`kube-bench` will map `1.13` to CIS Benchmark version `cis-1.4`.
Also, you can specify `--benchmark` to run a specific CIS Benchmark version:
```
kube-bench node --benchmark cis-1.4
```
`controls` for the various CIS Benchmark versions can be found in directories
with the same name as the CIS Benchmark versions under `cfg/`, for example `cfg/cis-1.4`.
**Note:** It is an error to specify both the `--version` and `--benchmark` flags together.
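For illustration, here is a minimal sketch of that resolution logic, using a mapping shaped like the `version_mapping` section of `cfg/config.yaml`; it is a simplified version of the behaviour, not the exact `mapToBenchmarkVersion` helper from `cmd/common.go`:

```
package main

import (
    "fmt"
    "strconv"
    "strings"
)

// versionMapping mirrors the version_mapping section added to cfg/config.yaml.
var versionMapping = map[string]string{
    "1.11": "cis-1.3",
    "1.12": "cis-1.3",
    "1.13": "cis-1.4",
    "1.14": "cis-1.4",
    "1.15": "cis-1.4",
}

// resolveBenchmark looks up the CIS Benchmark for a Kubernetes "major.minor"
// version, falling back through older minor versions until a match is found.
func resolveBenchmark(kubeVersion string) (string, error) {
    v := kubeVersion
    for v != "" {
        if bv, ok := versionMapping[v]; ok {
            return bv, nil
        }
        // Walk back one minor version, e.g. "1.16" -> "1.15".
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            break
        }
        minor, err := strconv.Atoi(parts[1])
        if err != nil || minor <= 1 {
            break
        }
        v = fmt.Sprintf("%s.%d", parts[0], minor-1)
    }
    return "", fmt.Errorf("no benchmark mapping found for Kubernetes version %s", kubeVersion)
}

func main() {
    bv, err := resolveBenchmark("1.16") // not in the table, falls back to 1.15
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(bv) // prints "cis-1.4"
}
```

Versions newer than the table (for example `1.16`) fall back to the most recent mapped minor version, which is also the behaviour the unit tests in this commit expect.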
### Running inside a container
@@ -183,7 +202,15 @@ go build -o kube-bench .
## Running on OpenShift
kube-bench includes a set of test files for Red Hat's OpenShift hardening guide for OCP 3.10 and 3.11. To run this you will need to specify `--version ocp-3.10` when you run the `kube-bench` command (either directly or through YAML). This config version is valid for OCP 3.10 and 3.11.
| OpenShift Hardening Guide | kube-bench config |
|---|---|
| ocp-3.10| rh-0.7 |
| ocp-3.11| rh-0.7 |
kube-bench includes a set of test files for Red Hat's OpenShift hardening guide for OCP 3.10 and 3.11. To run this you will need to specify `--benchmark rh-0.7`, `--version ocp-3.10`, or `--version ocp-3.11`
when you run the `kube-bench` command (either directly or through YAML).
## Output


@@ -139,4 +139,13 @@ node:
svc:
- "/lib/systemd/system/kube-proxy.service"
defaultconf: /etc/kubernetes/addons/kube-proxy-daemonset.yaml
defaultkubeconfig: "/etc/kubernetes/proxy.conf"
version_mapping:
"1.11": "cis-1.3"
"1.12": "cis-1.3"
"1.13": "cis-1.4"
"1.14": "cis-1.4"
"1.15": "cis-1.4"
"ocp-3.10": "rh-0.7"
"ocp-3.11": "rh-0.7"

File diff suppressed because it is too large.


@@ -1,376 +0,0 @@
---
controls:
id: 2
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 7
text: "Kubelet"
checks:
- id: 7.1
text: "Use Security Context Constraints to manage privileged containers as needed"
type: "skip"
scored: true
- id: 7.2
text: "Ensure anonymous-auth is not disabled"
type: "skip"
scored: true
- id: 7.3
text: "Verify that the --authorization-mode argument is set to WebHook"
audit: "grep -A1 authorization-mode /etc/origin/node/node-config.yaml"
tests:
bin_op: or
test_items:
- flag: "authorization-mode"
set: false
- flag: "authorization-mode: Webhook"
compare:
op: has
value: "Webhook"
set: true
remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and remove authorization-mode under
kubeletArguments in /etc/origin/node/node-config.yaml or set it to "Webhook".
scored: true
- id: 7.4
text: "Verify the OpenShift default for the client-ca-file argument"
audit: "grep -A1 client-ca-file /etc/origin/node/node-config.yaml"
tests:
test_items:
- flag: "client-ca-file"
set: false
remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and remove any configuration returned by the following:
grep -A1 client-ca-file /etc/origin/node/node-config.yaml
Reset to the OpenShift default.
See https://github.com/openshift/openshift-ansible/blob/release-3.10/roles/openshift_node_group/templates/node-config.yaml.j2#L65
The config file does not have this defined in kubeletArgument, but in PodManifestConfig.
scored: true
- id: 7.5
text: "Verify the OpenShift default setting for the read-only-port argument"
audit: "grep -A1 read-only-port /etc/origin/node/node-config.yaml"
tests:
bin_op: or
test_items:
- flag: "read-only-port"
set: false
- flag: "read-only-port: 0"
compare:
op: has
value: "0"
set: true
remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and removed so that the OpenShift default is applied.
scored: true
- id: 7.6
text: "Adjust the streaming-connection-idle-timeout argument"
audit: "grep -A1 streaming-connection-idle-timeout /etc/origin/node/node-config.yaml"
tests:
bin_op: or
test_items:
- flag: "streaming-connection-idle-timeout"
set: false
- flag: "5m"
set: false
remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and set the streaming-connection-timeout
value like the following in node-config.yaml.
kubeletArguments:
 streaming-connection-idle-timeout:
   - "5m"
scored: true
- id: 7.7
text: "Verify the OpenShift defaults for the protect-kernel-defaults argument"
type: "skip"
scored: true
- id: 7.8
text: "Verify the OpenShift default value of true for the make-iptables-util-chains argument"
audit: "grep -A1 make-iptables-util-chains /etc/origin/node/node-config.yaml"
tests:
bin_op: or
test_items:
- flag: "make-iptables-util-chains"
set: false
- flag: "make-iptables-util-chains: true"
compare:
op: has
value: "true"
set: true
remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and reset make-iptables-util-chains to the OpenShift
default value of true.
scored: true
- id: 7.9
text: "Verify that the --keep-terminated-pod-volumes argument is set to false"
audit: "grep -A1 keep-terminated-pod-volumes /etc/origin/node/node-config.yaml"
tests:
test_items:
- flag: "keep-terminated-pod-volumes: false"
compare:
op: has
value: "false"
set: true
remediation: |
Reset to the OpenShift defaults
scored: true
- id: 7.10
text: "Verify the OpenShift defaults for the hostname-override argument"
type: "skip"
scored: true
- id: 7.11
text: "Set the --event-qps argument to 0"
audit: "grep -A1 event-qps /etc/origin/node/node-config.yaml"
tests:
bin_op: or
test_items:
- flag: "event-qps"
set: false
- flag: "event-qps: 0"
compare:
op: has
value: "0"
set: true
remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml set the event-qps argument to 0 in
the kubeletArguments section of.
scored: true
- id: 7.12
text: "Verify the OpenShift cert-dir flag for HTTPS traffic"
audit: "grep -A1 cert-dir /etc/origin/node/node-config.yaml"
tests:
test_items:
- flag: "/etc/origin/node/certificates"
compare:
op: has
value: "/etc/origin/node/certificates"
set: true
remediation: |
Reset to the OpenShift default values.
scored: true
- id: 7.13
text: "Verify the OpenShift default of 0 for the cadvisor-port argument"
audit: "grep -A1 cadvisor-port /etc/origin/node/node-config.yaml"
tests:
bin_op: or
test_items:
- flag: "cadvisor-port"
set: false
- flag: "cadvisor-port: 0"
compare:
op: has
value: "0"
set: true
remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and remove the cadvisor-port flag
if it is set in the kubeletArguments section.
scored: true
- id: 7.14
text: "Verify that the RotateKubeletClientCertificate argument is set to true"
audit: "grep -B1 RotateKubeletClientCertificate=true /etc/origin/node/node-config.yaml"
tests:
test_items:
- flag: "RotateKubeletClientCertificate=true"
compare:
op: has
value: "true"
set: true
remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and set RotateKubeletClientCertificate to true.
scored: true
- id: 7.15
text: "Verify that the RotateKubeletServerCertificate argument is set to true"
audit: "grep -B1 RotateKubeletServerCertificate=true /etc/origin/node/node-config.yaml"
tests:
test_items:
- flag: "RotateKubeletServerCertificate=true"
compare:
op: has
value: "true"
set: true
remediation: |
Edit the Openshift node config file /etc/origin/node/node-config.yaml and set RotateKubeletServerCertificate to true.
scored: true
- id: 8
text: "Configuration Files"
checks:
- id: 8.1
text: "Verify the OpenShift default permissions for the kubelet.conf file"
audit: "stat -c %a /etc/origin/node/node.kubeconfig"
tests:
bin_op: or
test_items:
- flag: "644"
compare:
op: eq
value: "644"
set: true
- flag: "640"
compare:
op: eq
value: "640"
set: true
- flag: "600"
compare:
op: eq
value: "600"
set: true
remediation: |
Run the below command on each worker node.
chmod 644 /etc/origin/node/node.kubeconfig
scored: true
- id: 8.2
text: "Verify the kubeconfig file ownership of root:root"
audit: "stat -c %U:%G /etc/origin/node/node.kubeconfig"
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: root:root
set: true
remediation: |
Run the below command on each worker node.
chown root:root /etc/origin/node/node.kubeconfig
scored: true
- id: 8.3
text: "Verify the kubelet service file permissions of 644"
audit: "stat -c %a /etc/systemd/system/atomic-openshift-node.service"
tests:
bin_op: or
test_items:
- flag: "644"
compare:
op: eq
value: "644"
set: true
- flag: "640"
compare:
op: eq
value: "640"
set: true
- flag: "600"
compare:
op: eq
value: "600"
set: true
remediation: |
Run the below command on each worker node.
chmod 644 /etc/systemd/system/atomic-openshift-node.service
scored: true
- id: 8.4
text: "Verify the kubelet service file ownership of root:root"
audit: "stat -c %U:%G /etc/systemd/system/atomic-openshift-node.service"
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: root:root
set: true
remediation: |
Run the below command on each worker node.
chown root:root /etc/systemd/system/atomic-openshift-node.service
scored: true
- id: 8.5
text: "Verify the OpenShift default permissions for the proxy kubeconfig file"
audit: "stat -c %a /etc/origin/node/node.kubeconfig"
tests:
bin_op: or
test_items:
- flag: "644"
compare:
op: eq
value: "644"
set: true
- flag: "640"
compare:
op: eq
value: "640"
set: true
- flag: "600"
compare:
op: eq
value: "600"
set: true
remediation: |
Run the below command on each worker node.
chmod 644 /etc/origin/node/node.kubeconfig
scored: true
- id: 8.6
text: "Verify the proxy kubeconfig file ownership of root:root"
audit: "stat -c %U:%G /etc/origin/node/node.kubeconfig"
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: root:root
set: true
remediation: |
Run the below command on each worker node.
chown root:root /etc/origin/node/node.kubeconfig
scored: true
- id: 8.7
text: "Verify the OpenShift default permissions for the certificate authorities file."
audit: "stat -c %a /etc/origin/node/client-ca.crt"
tests:
bin_op: or
test_items:
- flag: "644"
compare:
op: eq
value: "644"
set: true
- flag: "640"
compare:
op: eq
value: "640"
set: true
- flag: "600"
compare:
op: eq
value: "600"
set: true
remediation: |
Run the below command on each worker node.
chmod 644 /etc/origin/node/client-ca.crt
scored: true
- id: 8.8
text: "Verify the client certificate authorities file ownership of root:root"
audit: "stat -c %U:%G /etc/origin/node/client-ca.crt"
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: root:root
set: true
remediation: |
Run the below command on each worker node.
chown root:root /etc/origin/node/client-ca.crt
scored: true


@@ -1,27 +0,0 @@
---
## Version-specific settings that override the values in cfg/config.yaml
master:
apiserver:
bins:
- openshift start master api
- hypershift openshift-kube-apiserver
scheduler:
bins:
- "openshift start master controllers"
confs:
- /etc/origin/master/scheduler.json
controllermanager:
bins:
- "openshift start master controllers"
etcd:
bins:
- openshift start etcd
node:
proxy:
bins:
- openshift start network


@@ -209,15 +209,12 @@ func loadConfig(nodetype check.NodeType) string {
file = nodeFile
}
runningVersion := ""
if kubeVersion == "" {
runningVersion, err = getKubeVersion()
if err != nil {
exitWithError(fmt.Errorf("Version check failed: \n%s", err))
}
benchmarkVersion, err := getBenchmarkVersion(kubeVersion, benchmarkVersion, viper.GetViper())
if err != nil {
exitWithError(err)
}
path, err := getConfigFilePath(kubeVersion, runningVersion, file)
path, err := getConfigFilePath(benchmarkVersion, file)
if err != nil {
exitWithError(fmt.Errorf("can't find %s controls file in %s: %v", nodetype, cfgDir, err))
}
@@ -237,6 +234,60 @@ func loadConfig(nodetype check.NodeType) string {
return filepath.Join(path, file)
}
// mapToBenchmarkVersion maps a Kubernetes version to its CIS Benchmark version,
// walking back through older minor versions until a match is found in the mapping.
func mapToBenchmarkVersion(kubeToBenchmarkMap map[string]string, kv string) (string, error) {
cisVersion, found := kubeToBenchmarkMap[kv]
for !found && (kv != defaultKubeVersion && !isEmpty(kv)) {
kv = decrementVersion(kv)
cisVersion, found = kubeToBenchmarkMap[kv]
glog.V(2).Info(fmt.Sprintf("mapToBenchmarkVersion for cisVersion: %q found: %t\n", cisVersion, found))
}
if !found {
glog.V(1).Info(fmt.Sprintf("mapToBenchmarkVersion unable to find a match for: %q", kv))
glog.V(3).Info(fmt.Sprintf("mapToBenchmarkVersion kubeToBenchmarkMap: %#v", kubeToBenchmarkMap))
return "", fmt.Errorf("Unable to find a matching Benchmark Version for kubernetes version: %s", kubeVersion)
}
return cisVersion, nil
}
func loadVersionMapping(v *viper.Viper) (map[string]string, error) {
kubeToBenchmarkMap := v.GetStringMapString("version_mapping")
if kubeToBenchmarkMap == nil || (len(kubeToBenchmarkMap) == 0) {
return nil, fmt.Errorf("config file is missing 'version_mapping' section")
}
return kubeToBenchmarkMap, nil
}
// getBenchmarkVersion resolves the CIS Benchmark version to run: it rejects the
// combination of --version and --benchmark, and otherwise maps the given (or
// auto-detected) Kubernetes version through the config's version_mapping.
func getBenchmarkVersion(kubeVersion, benchmarkVersion string, v *viper.Viper) (bv string, err error) {
if !isEmpty(kubeVersion) && !isEmpty(benchmarkVersion) {
return "", fmt.Errorf("It is an error to specify both --version and --benchmark flags")
}
if isEmpty(benchmarkVersion) {
if isEmpty(kubeVersion) {
kubeVersion, err = getKubeVersion()
if err != nil {
return "", fmt.Errorf("Version check failed: %s\nAlternatively, you can specify the version with --version", err)
}
}
kubeToBenchmarkMap, err := loadVersionMapping(v)
if err != nil {
return "", err
}
benchmarkVersion, err = mapToBenchmarkVersion(kubeToBenchmarkMap, kubeVersion)
if err != nil {
return "", err
}
glog.V(2).Info(fmt.Sprintf("Mapped Kubernetes version: %s to Benchmark version: %s", kubeVersion, benchmarkVersion))
}
return benchmarkVersion, nil
}
// isMaster verify if master components are running on the node.
func isMaster() bool {
glog.V(2).Info("Checking if the current node is running master components")


@@ -16,6 +16,10 @@ package cmd
import (
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/aquasecurity/kube-bench/check"
@@ -166,3 +170,234 @@ func TestIsMaster(t *testing.T) {
assert.Equal(t, tc.isMaster, isMaster(), tc.name)
}
}
func TestMapToCISVersion(t *testing.T) {
viperWithData, err := loadConfigForTest()
if err != nil {
t.Fatalf("Unable to load config file %v", err)
}
kubeToBenchmarkMap, err := loadVersionMapping(viperWithData)
if err != nil {
t.Fatalf("Unable to load config file %v", err)
}
cases := []struct {
kubeVersion string
succeed bool
exp string
}{
{kubeVersion: "1.9", succeed: false, exp: ""},
{kubeVersion: "1.11", succeed: true, exp: "cis-1.3"},
{kubeVersion: "1.12", succeed: true, exp: "cis-1.3"},
{kubeVersion: "1.13", succeed: true, exp: "cis-1.4"},
{kubeVersion: "1.16", succeed: true, exp: "cis-1.4"},
{kubeVersion: "ocp-3.10", succeed: true, exp: "rh-0.7"},
{kubeVersion: "ocp-3.11", succeed: true, exp: "rh-0.7"},
{kubeVersion: "unknown", succeed: false, exp: ""},
}
for _, c := range cases {
rv, err := mapToBenchmarkVersion(kubeToBenchmarkMap, c.kubeVersion)
if c.succeed {
if err != nil {
t.Errorf("[%q]-Unexpected error: %v", c.kubeVersion, err)
}
if len(rv) == 0 {
t.Errorf("[%q]-missing return value", c.kubeVersion)
}
if c.exp != rv {
t.Errorf("[%q]- expected %q but Got %q", c.kubeVersion, c.exp, rv)
}
} else {
if c.exp != rv {
t.Errorf("mapToBenchmarkVersion kubeversion: %q Got %q expected %s", c.kubeVersion, rv, c.exp)
}
}
}
}
func TestLoadVersionMapping(t *testing.T) {
setDefault := func(v *viper.Viper, key string, value interface{}) *viper.Viper {
v.SetDefault(key, value)
return v
}
viperWithData, err := loadConfigForTest()
if err != nil {
t.Fatalf("Unable to load config file %v", err)
}
cases := []struct {
n string
v *viper.Viper
succeed bool
}{
{n: "empty", v: viper.New(), succeed: false},
{
n: "novals",
v: setDefault(viper.New(), "version_mapping", "novals"),
succeed: false,
},
{
n: "good",
v: viperWithData,
succeed: true,
},
}
for _, c := range cases {
rv, err := loadVersionMapping(c.v)
if c.succeed {
if err != nil {
t.Errorf("[%q]-Unexpected error: %v", c.n, err)
}
if len(rv) == 0 {
t.Errorf("[%q]-missing mapping value", c.n)
}
} else {
if err == nil {
t.Errorf("[%q]-Expected error but got none", c.n)
}
}
}
}
func TestGetBenchmarkVersion(t *testing.T) {
viperWithData, err := loadConfigForTest()
if err != nil {
t.Fatalf("Unable to load config file %v", err)
}
type getBenchmarkVersionFnToTest func(kubeVersion, benchmarkVersion string, v *viper.Viper) (string, error)
withFakeKubectl := func(kubeVersion, benchmarkVersion string, v *viper.Viper, fn getBenchmarkVersionFnToTest) (string, error) {
execCode := `#!/bin/sh
echo "Server Version: v1.13.10"
`
restore, err := fakeExecutableInPath("kubectl", execCode)
if err != nil {
t.Fatal("Failed when calling fakeExecutableInPath ", err)
}
defer restore()
return fn(kubeVersion, benchmarkVersion, v)
}
withNoPath := func(kubeVersion, benchmarkVersion string, v *viper.Viper, fn getBenchmarkVersionFnToTest) (string, error) {
restore, err := prunePath()
if err != nil {
t.Fatal("Failed when calling prunePath ", err)
}
defer restore()
return fn(kubeVersion, benchmarkVersion, v)
}
type getBenchmarkVersionFn func(string, string, *viper.Viper, getBenchmarkVersionFnToTest) (string, error)
cases := []struct {
n string
kubeVersion string
benchmarkVersion string
v *viper.Viper
callFn getBenchmarkVersionFn
exp string
succeed bool
}{
{n: "both versions", kubeVersion: "1.11", benchmarkVersion: "cis-1.3", exp: "cis-1.3", callFn: withNoPath, v: viper.New(), succeed: false},
{n: "no version-missing-kubectl", kubeVersion: "", benchmarkVersion: "", v: viperWithData, exp: "", callFn: withNoPath, succeed: false},
{n: "no version-fakeKubectl", kubeVersion: "", benchmarkVersion: "", v: viperWithData, exp: "cis-1.4", callFn: withFakeKubectl, succeed: true},
{n: "kubeVersion", kubeVersion: "1.11", benchmarkVersion: "", v: viperWithData, exp: "cis-1.3", callFn: withNoPath, succeed: true},
{n: "ocpVersion310", kubeVersion: "ocp-3.10", benchmarkVersion: "", v: viperWithData, exp: "rh-0.7", callFn: withNoPath, succeed: true},
{n: "ocpVersion311", kubeVersion: "ocp-3.11", benchmarkVersion: "", v: viperWithData, exp: "rh-0.7", callFn: withNoPath, succeed: true},
}
for _, c := range cases {
rv, err := c.callFn(c.kubeVersion, c.benchmarkVersion, c.v, getBenchmarkVersion)
if c.succeed {
if err != nil {
t.Errorf("[%q]-Unexpected error: %v", c.n, err)
}
if len(rv) == 0 {
t.Errorf("[%q]-missing return value", c.n)
}
if c.exp != rv {
t.Errorf("[%q]- expected %q but Got %q", c.n, c.exp, rv)
}
} else {
if err == nil {
t.Errorf("[%q]-Expected error but got none", c.n)
}
}
}
}
func loadConfigForTest() (*viper.Viper, error) {
viperWithData := viper.New()
viperWithData.SetConfigFile(filepath.Join("..", cfgDir, "config.yaml"))
if err := viperWithData.ReadInConfig(); err != nil {
return nil, err
}
return viperWithData, nil
}
type restoreFn func()
func fakeExecutableInPath(execFile, execCode string) (restoreFn, error) {
pathenv := os.Getenv("PATH")
tmp, err := ioutil.TempDir("", "TestfakeExecutableInPath")
if err != nil {
return nil, err
}
wd, err := os.Getwd()
if err != nil {
return nil, err
}
err = os.Chdir(tmp)
if err != nil {
return nil, err
}
if len(execCode) > 0 {
ioutil.WriteFile(filepath.Join(tmp, execFile), []byte(execCode), 0700)
} else {
f, err := os.OpenFile(execFile, os.O_CREATE|os.O_EXCL, 0700)
if err != nil {
return nil, err
}
err = f.Close()
if err != nil {
return nil, err
}
}
err = os.Setenv("PATH", fmt.Sprintf("%s:%s", tmp, pathenv))
if err != nil {
return nil, err
}
restorePath := func() {
os.RemoveAll(tmp)
os.Chdir(wd)
os.Setenv("PATH", pathenv)
}
return restorePath, nil
}
func prunePath() (restoreFn, error) {
pathenv := os.Getenv("PATH")
err := os.Setenv("PATH", "")
if err != nil {
return nil, err
}
restorePath := func() {
os.Setenv("PATH", pathenv)
}
return restorePath, nil
}


@@ -36,6 +36,7 @@ var (
envVarsPrefix = "KUBE_BENCH"
defaultKubeVersion = "1.11"
kubeVersion string
benchmarkVersion string
cfgFile string
cfgDir string
jsonFmt bool
@@ -113,6 +114,7 @@ func init() {
RootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file (default is ./cfg/config.yaml)")
RootCmd.PersistentFlags().StringVarP(&cfgDir, "config-dir", "D", "./cfg/", "config directory")
RootCmd.PersistentFlags().StringVar(&kubeVersion, "version", "", "Manually specify Kubernetes version, automatically detected if unset")
RootCmd.PersistentFlags().StringVar(&benchmarkVersion, "benchmark", "", "Manually specify CIS benchmark version. It would be an error to specify both --version and --benchmark flags")
goflag.CommandLine.VisitAll(func(goflag *goflag.Flag) {
RootCmd.PersistentFlags().AddGoFlag(goflag)


@@ -121,41 +121,19 @@ func getBinaries(v *viper.Viper, nodetype check.NodeType) (map[string]string, er
return binmap, nil
}
// getConfigFilePath locates the config files we should be using based on either the specified
// version, or the running version of kubernetes if not specified
func getConfigFilePath(specifiedVersion string, runningVersion string, filename string) (path string, err error) {
var fileVersion string
// getConfigFilePath locates the config files we should be using for the given CIS Benchmark version
func getConfigFilePath(benchmarkVersion string, filename string) (path string, err error) {
glog.V(2).Info(fmt.Sprintf("Looking for config specific CIS version %q", benchmarkVersion))
if specifiedVersion != "" {
fileVersion = specifiedVersion
} else {
fileVersion = runningVersion
}
glog.V(2).Info(fmt.Sprintf("Looking for config for version %s", fileVersion))
for {
path = filepath.Join(cfgDir, fileVersion)
file := filepath.Join(path, string(filename))
glog.V(2).Info(fmt.Sprintf("Looking for config file: %s\n", file))
if _, err = os.Stat(file); !os.IsNotExist(err) {
if specifiedVersion == "" && fileVersion != runningVersion {
glog.V(1).Info(fmt.Sprintf("No test file found for %s - using tests for Kubernetes %s\n", runningVersion, fileVersion))
}
return path, nil
}
// If we were given an explicit version to look for, don't look for any others
if specifiedVersion != "" {
return "", err
}
fileVersion = decrementVersion(fileVersion)
if fileVersion == "" {
return "", fmt.Errorf("no test files found <= runningVersion")
}
path = filepath.Join(cfgDir, benchmarkVersion)
file := filepath.Join(path, string(filename))
glog.V(2).Info(fmt.Sprintf("Looking for config file: %s", file))
if _, err = os.Stat(file); os.IsNotExist(err) {
glog.V(2).Infof("error accessing config file: %q error: %v\n", file, err)
return "", fmt.Errorf("no test files found <= benchmark version: %s", benchmarkVersion)
}
return path, nil
}
// decrementVersion decrements the version number
@@ -163,6 +141,9 @@ func getConfigFilePath(specifiedVersion string, runningVersion string, filename
// just in case someone wants to specify their own test files for that version
func decrementVersion(version string) string {
split := strings.Split(version, ".")
if len(split) < 2 {
return ""
}
minor, err := strconv.Atoi(split[1])
if err != nil {
return ""
@@ -367,6 +348,11 @@ func makeSubstitutions(s string, ext string, m map[string]string) string {
return s
}
func isEmpty(str string) bool {
return len(strings.TrimSpace(str)) == 0
}
func buildComponentMissingErrorMessage(nodetype check.NodeType, component string, bins []string) string {
errMessageTemplate := `


@@ -409,7 +409,7 @@ func TestGetConfigFilePath(t *testing.T) {
t.Fatalf("Failed to create temp directory")
}
defer os.RemoveAll(cfgDir)
d := filepath.Join(cfgDir, "1.8")
d := filepath.Join(cfgDir, "cis-1.4")
err = os.Mkdir(d, 0666)
if err != nil {
t.Fatalf("Failed to create temp file")
@@ -417,29 +417,57 @@
ioutil.WriteFile(filepath.Join(d, "master.yaml"), []byte("hello world"), 0666)
cases := []struct {
specifiedVersion string
runningVersion string
benchmarkVersion string
succeed bool
exp string
}{
{runningVersion: "1.8", succeed: true, exp: d},
{runningVersion: "1.9", succeed: true, exp: d},
{runningVersion: "1.10", succeed: true, exp: d},
{runningVersion: "1.1", succeed: false},
{specifiedVersion: "1.8", succeed: true, exp: d},
{specifiedVersion: "1.9", succeed: false},
{specifiedVersion: "1.10", succeed: false},
{benchmarkVersion: "cis-1.4", succeed: true, exp: d},
{benchmarkVersion: "cis-1.5", succeed: false, exp: ""},
{benchmarkVersion: "1.1", succeed: false, exp: ""},
}
for _, c := range cases {
t.Run(c.specifiedVersion+"-"+c.runningVersion, func(t *testing.T) {
path, err := getConfigFilePath(c.specifiedVersion, c.runningVersion, "/master.yaml")
if err != nil && c.succeed {
t.Fatalf("Error %v", err)
}
if path != c.exp {
t.Fatalf("Got %s expected %s", path, c.exp)
t.Run(c.benchmarkVersion, func(t *testing.T) {
path, err := getConfigFilePath(c.benchmarkVersion, "/master.yaml")
if c.succeed {
if err != nil {
t.Fatalf("Error %v", err)
}
if path != c.exp {
t.Fatalf("Got %s expected %s", path, c.exp)
}
} else {
if err == nil {
t.Fatalf("Expected Error, but none")
}
}
})
}
}
func TestDecrementVersion(t *testing.T) {
cases := []struct {
kubeVersion string
succeed bool
exp string
}{
{kubeVersion: "1.13", succeed: true, exp: "1.12"},
{kubeVersion: "1.15", succeed: true, exp: "1.14"},
{kubeVersion: "1.11", succeed: true, exp: "1.10"},
{kubeVersion: "1.1", succeed: true, exp: ""},
{kubeVersion: "invalid", succeed: false, exp: ""},
}
for _, c := range cases {
rv := decrementVersion(c.kubeVersion)
if c.succeed {
if c.exp != rv {
t.Fatalf("decrementVersion(%q) - Got %q expected %s", c.kubeVersion, rv, c.exp)
}
} else {
if len(rv) > 0 {
t.Fatalf("decrementVersion(%q) - Expected empty string but Got %s", c.kubeVersion, rv)
}
}
}
}