
Add GKE 1.6 CIS benchmark for GCP environment (#1672)

* Add config entries for GKE 1.6 controls

* Add gke-1.6.0 control plane recommendations

* Add gke-1.6.0 worker node recommendations

* Add gke-1.6.0 policy recommendations

* Add managed services and policy recommendation

* Add master recommendations

* Fix formatting across gke-1.6.0 files

* Add gke-1.6.0 benchmark selection based on k8s version

* Workaround: hardcode kubelet config path for gke-1.6.0

* Fix tests for makeIPTablesUtilChains

* Change scored field for all node tests to true

* Fix kubelet config file permission value to check for

---------

Co-authored-by: afdesk <work@afdesk.com>
Abubakr-Sadik Nii Nai Davis 2024-10-11 04:49:35 +00:00 committed by GitHub
parent e47725299e
commit a15e8acaa3
11 changed files with 1409 additions and 1 deletion


@@ -288,6 +288,7 @@ version_mapping:
"eks-1.2.0": "eks-1.2.0"
"gke-1.0": "gke-1.0"
"gke-1.2.0": "gke-1.2.0"
"gke-1.6.0": "gke-1.6.0"
"ocp-3.10": "rh-0.7"
"ocp-3.11": "rh-0.7"
"ocp-4.0": "rh-1.0"
@@ -380,6 +381,12 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"gke-1.6.0":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"eks-1.0.1":
- "master"
- "node"


@@ -0,0 +1,9 @@
---
## Version-specific settings that override the values in cfg/config.yaml
node:
proxy:
defaultkubeconfig: "/var/lib/kubelet/kubeconfig"
kubelet:
defaultconf: "/etc/kubernetes/kubelet/kubelet-config.yaml"


@@ -0,0 +1,20 @@
---
controls:
version: "gke-1.6.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Authentication and Authorization"
checks:
- id: 2.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
You can remediate the availability of client certificates in your GKE cluster. See
Recommendation 5.8.1.
scored: false


@@ -0,0 +1,617 @@
---
controls:
version: "gke-1.6.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning is enabled (Automated)"
type: "manual"
remediation: |
For Images Hosted in GCR:
Using Command Line:
gcloud services enable containeranalysis.googleapis.com
For Images Hosted in AR:
Using Command Line:
gcloud services enable containerscanning.googleapis.com
scored: false
- id: 5.1.2
text: "Minimize user access to Container Image repositories (Manual)"
type: "manual"
remediation: |
For Images Hosted in AR:
Using Command Line:
gcloud artifacts repositories set-iam-policy <repository-name> <path-to-policy-file> \
--location <repository-location>
To learn how to configure policy files see: https://cloud.google.com/artifact-registry/docs/access-control#grant
For Images Hosted in GCR:
Using Command Line:
To change roles at the GCR bucket level:
Firstly, run the following if read permissions are required:
gsutil iam ch <type>:<email_address>:objectViewer gs://artifacts.<project_id>.appspot.com
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com
where:
<type> can be one of the following:
user, if the <email_address> is a Google account.
serviceAccount, if <email_address> specifies a Service account.
<email_address> can be one of the following:
a Google account (for example, someone@example.com).
a Cloud IAM service account.
To modify roles defined at the project level and subsequently inherited within the GCR
bucket, or the Service Account User role, extract the IAM policy file, modify it
accordingly and apply it using:
gcloud projects set-iam-policy <project_id> <policy_file>
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Container Image repositories (Manual)"
type: "manual"
remediation: |
For Images Hosted in AR:
Using Command Line:
Add artifactregistry.reader role
gcloud artifacts repositories add-iam-policy-binding <repository> \
--location=<repository-location> \
--member='serviceAccount:<email-address>' \
--role='roles/artifactregistry.reader'
Remove any roles other than artifactregistry.reader
gcloud artifacts repositories remove-iam-policy-binding <repository> \
--location <repository-location> \
--member='serviceAccount:<email-address>' \
--role='<role-name>'
For Images Hosted in GCR:
For an account explicitly granted to the bucket:
Firstly add read access to the Kubernetes Service Account:
gsutil iam ch <type>:<email_address>:objectViewer gs://artifacts.<project_id>.appspot.com
where:
<type> can be one of the following:
user, if the <email_address> is a Google account.
serviceAccount, if <email_address> specifies a Service account.
<email_address> can be one of the following:
a Google account (for example, someone@example.com).
a Cloud IAM service account.
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com
For an account that inherits access to the GCR Bucket through Project level
permissions, modify the Projects IAM policy file accordingly, then upload it using:
gcloud projects set-iam-policy <project_id> <policy_file>
scored: false
- id: 5.1.4
text: "Ensure only trusted container images are used (Manual)"
type: "manual"
remediation: |
Using Command Line:
Update the cluster to enable Binary Authorization:
gcloud container clusters update <cluster_name> --enable-binauthz
Create a Binary Authorization Policy using the Binary Authorization Policy Reference:
https://cloud.google.com/binary-authorization/docs/policy-yaml-reference for guidance.
Import the policy file into Binary Authorization:
gcloud container binauthz policy import <yaml_policy>
scored: false
- id: 5.2
text: "Identity and Access Management (IAM)"
checks:
- id: 5.2.1
text: "Ensure GKE clusters are not running using the Compute Engine default service account (Automated))"
type: "manual"
remediation: |
Using Command Line:
To create a minimally privileged service account:
gcloud iam service-accounts create <node_sa_name> \
--display-name "GKE Node Service Account"
export NODE_SA_EMAIL=$(gcloud iam service-accounts list \
--format='value(email)' --filter='displayName:GKE Node Service Account')
Grant the following roles to the service account:
export PROJECT_ID=$(gcloud config get-value project)
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/monitoring.viewer
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/logging.logWriter
To create a new Node pool using the Service account, run the following command:
gcloud container node-pools create <node_pool> \
--service-account=<sa_name>@<project_id>.iam.gserviceaccount.com \
--cluster=<cluster_name> --zone <compute_zone>
Note: The workloads will need to be migrated to the new Node pool, and the old node
pools that use the default service account should be deleted to complete the
remediation.
scored: false
- id: 5.2.2
text: "Prefer using dedicated GCP Service Accounts and Workload Identity (Manual)"
type: "manual"
remediation: |
Using Command Line:
gcloud container clusters update <cluster_name> --zone <cluster_zone> \
--workload-pool <project_id>.svc.id.goog
Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.
Then, modify existing Node pools to enable GKE_METADATA_SERVER:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <cluster_zone> --workload-metadata=GKE_METADATA
Workloads may need to be modified in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
Also consider the effects on the availability of hosted workloads as Node pools
are updated. It may be more appropriate to create new Node Pools.
scored: false
- id: 5.3
text: "Cloud Key Management Service (Cloud KMS)"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS (Automated)"
type: "manual"
remediation: |
To create a key:
Create a key ring:
gcloud kms keyrings create <ring_name> --location <location> --project \
<key_project_id>
Create a key:
gcloud kms keys create <key_name> --location <location> --keyring <ring_name> \
--purpose encryption --project <key_project_id>
Grant the Kubernetes Engine Service Agent service account the Cloud KMS
CryptoKey Encrypter/Decrypter role:
gcloud kms keys add-iam-policy-binding <key_name> --location <location> \
--keyring <ring_name> --member serviceAccount:<service_account_name> \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter --project <key_project_id>
To create a new cluster with Application-layer Secrets Encryption:
gcloud container clusters create <cluster_name> --cluster-version=latest \
--zone <zone> \
--database-encryption-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name> \
--project <cluster_project_id>
To enable on an existing cluster:
gcloud container clusters update <cluster_name> --zone <zone> \
--database-encryption-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name> \
--project <cluster_project_id>
scored: false
- id: 5.4
text: "Node Metadata"
checks:
- id: 5.4.1
text: "Ensure the GKE Metadata Server is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
gcloud container clusters update <cluster_name> --identity-namespace=<project_id>.svc.id.goog
Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.
To modify an existing Node pool to enable GKE Metadata Server:
gcloud container node-pools update <node_pool_name> --cluster=<cluster_name> \
--workload-metadata-from-node=GKE_METADATA_SERVER
Workloads may need modification in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
scored: false
- id: 5.5
text: "Node Configuration and Maintenance"
checks:
- id: 5.5.1
text: "Ensure Container-Optimized OS (cos_containerd) is used for GKE node images (Automated)"
type: "manual"
remediation: |
Using Command Line:
To set the node image to cos for an existing cluster's Node pool:
gcloud container clusters upgrade <cluster_name> --image-type cos_containerd \
--zone <compute_zone> --node-pool <node_pool_name>
scored: false
- id: 5.5.2
text: "Ensure Node Auto-Repair is enabled for GKE nodes (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable node auto-repair for an existing cluster's Node pool:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --enable-autorepair
scored: false
- id: 5.5.3
text: "Ensure Node Auto-Upgrade is enabled for GKE nodes (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable node auto-upgrade for an existing cluster's Node pool, run the following
command:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <cluster_zone> --enable-autoupgrade
scored: false
- id: 5.5.4
text: "When creating New Clusters - Automate GKE version management using Release Channels (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new cluster by running the following command:
gcloud container clusters create <cluster_name> --zone <cluster_zone> \
--release-channel <release_channel>
where <release_channel> is stable or regular, according to requirements.
scored: false
- id: 5.5.5
text: "Ensure Shielded GKE Nodes are Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To migrate an existing cluster, the flag --enable-shielded-nodes needs to be
specified in the cluster update command:
gcloud container clusters update <cluster_name> --zone <cluster_zone> \
--enable-shielded-nodes
scored: false
- id: 5.5.6
text: "Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Integrity Monitoring enabled, run the
following command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --shielded-integrity-monitoring
Workloads from existing non-conforming Node pools will need to be migrated to the
newly created Node pool, then delete non-conforming Node pools to complete the
remediation
scored: false
- id: 5.5.7
text: "Ensure Secure Boot for Shielded GKE Nodes is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Secure Boot enabled, run the following
command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --shielded-secure-boot
Workloads will need to be migrated from existing non-conforming Node pools to the
newly created Node pool, then delete the non-conforming pools.
scored: false
- id: 5.6
text: "Cluster Networking"
checks:
- id: 5.6.1
text: "Enable VPC Flow Logs and Intranode Visibility (Automated)"
type: "manual"
remediation: |
Using Command Line:
1. Find the subnetwork name associated with the cluster.
gcloud container clusters describe <cluster_name> \
--region <cluster_region> --format json | jq '.subnetwork'
2. Update the subnetwork to enable VPC Flow Logs.
gcloud compute networks subnets update <subnet_name> --enable-flow-logs
scored: false
- id: 5.6.2
text: "Ensure use of VPC-native clusters (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable Alias IP on a new cluster, run the following command:
gcloud container clusters create <cluster_name> --zone <compute_zone> \
--enable-ip-alias
If using Autopilot configuration mode:
gcloud container clusters create-auto <cluster_name> \
--zone <compute_zone>
scored: false
- id: 5.6.3
text: "Ensure Control Plane Authorized Networks is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable Control Plane Authorized Networks for an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--enable-master-authorized-networks
Along with this, you can list authorized networks using the --master-authorized-networks
flag which contains a list of up to 20 external networks that are allowed to
connect to your cluster's control plane through HTTPS. You provide these networks as
a comma-separated list of addresses in CIDR notation (such as 90.90.100.0/24).
scored: false
- id: 5.6.4
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: |
Using Command Line:
Create a cluster with a Private Endpoint enabled and Public Access disabled by including
the --enable-private-endpoint flag within the cluster create command:
gcloud container clusters create <cluster_name> --enable-private-endpoint
Setting this flag also requires the setting of --enable-private-nodes, --enable-ip-alias
and --master-ipv4-cidr=<master_cidr_range>.
scored: false
- id: 5.6.5
text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: |
Using Command Line:
To create a cluster with Private Nodes enabled, include the --enable-private-nodes
flag within the cluster create command:
gcloud container clusters create <cluster_name> --enable-private-nodes
Setting this flag also requires the setting of --enable-ip-alias and
--master-ipv4-cidr=<master_cidr_range>.
scored: false
- id: 5.6.6
text: "Consider firewalling GKE worker nodes (Manual)"
type: "manual"
remediation: |
Using Command Line:
Use the following command to generate firewall rules, setting the variables as
appropriate:
gcloud compute firewall-rules create <firewall_rule_name> \
--network <network> --priority <priority> --direction <direction> \
--action <action> --target-tags <tag> \
--target-service-accounts <service_account> \
--source-ranges <source_cidr_range> --source-tags <source_tags> \
--source-service-accounts <source_service_account> \
--destination-ranges <destination_cidr_range> --rules <rules>
scored: false
- id: 5.6.7
text: "Ensure use of Google-managed SSL Certificates (Automated)"
type: "manual"
remediation: |
If services of type:LoadBalancer are discovered, consider replacing the Service with
an Ingress.
To configure the Ingress and use Google-managed SSL certificates, follow the
instructions as listed at: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs.
scored: false
- id: 5.7
text: "Logging"
checks:
- id: 5.7.1
text: "Ensure Logging and Cloud Monitoring is Enabled (Automated)"
type: "manual"
remediation: |
To enable Logging for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--logging=<components_to_be_logged>
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--logging
for a list of available components for logging.
To enable Cloud Monitoring for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--monitoring=<components_to_be_logged>
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--monitoring
for a list of available components for Cloud Monitoring.
scored: false
- id: 5.7.2
text: "Enable Linux auditd logging (Manual)"
type: "manual"
remediation: |
Using Command Line:
Download the example manifests:
curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml > cos-auditd-logging.yaml
Edit the example manifests if needed. Then, deploy them:
kubectl apply -f cos-auditd-logging.yaml
Verify that the logging Pods have started. If a different Namespace was defined in the
manifests, replace cos-auditd with the name of the namespace being used:
kubectl get pods --namespace=cos-auditd
scored: false
- id: 5.8
text: "Authentication and Authorization"
checks:
- id: 5.8.1
text: "Ensure authentication using Client Certificates is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new cluster without a Client Certificate:
gcloud container clusters create [CLUSTER_NAME] \
--no-issue-client-certificate
scored: false
- id: 5.8.2
text: "Manage Kubernetes RBAC users with Google Groups for GKE (Manual)"
type: "manual"
remediation: |
Using Command Line:
Follow the G Suite Groups instructions at:
https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#google-groups-for-gke.
Then, create a cluster with:
gcloud container clusters create <cluster_name> --security-group <security_group_name>
Finally create Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings that
reference the G Suite Groups.
scored: false
- id: 5.8.3
text: "Ensure Legacy Authorization (ABAC) is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To disable Legacy Authorization for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--no-enable-legacy-authorization
scored: false
- id: 5.9
text: "Storage"
checks:
- id: 5.9.1
text: "Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD) (Manual)"
type: "manual"
remediation: |
Using Command Line:
Follow the instructions detailed at: https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek.
scored: false
- id: 5.9.2
text: "Enable Customer-Managed Encryption Keys (CMEK) for Boot Disks (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new node pool using customer-managed encryption keys for the node boot
disk, of <disk_type> either pd-standard or pd-ssd:
gcloud container node-pools create <cluster_name> --disk-type <disk_type> \
--boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
Create a cluster using customer-managed encryption keys for the node boot disk, of
<disk_type> either pd-standard or pd-ssd:
gcloud container clusters create <cluster_name> --disk-type <disk_type> \
--boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
scored: false
- id: 5.10
text: "Other Cluster Configurations"
checks:
- id: 5.10.1
text: "Ensure Kubernetes Web UI is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To disable the Kubernetes Dashboard on an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <zone> \
--update-addons=KubernetesDashboard=DISABLED
scored: false
- id: 5.10.2
text: "Ensure that Alpha clusters are not used for production workloads (Automated)"
type: "manual"
remediation: |
Using Command Line:
Upon creating a new cluster
gcloud container clusters create [CLUSTER_NAME] \
--zone [COMPUTE_ZONE]
Do not use the --enable-kubernetes-alpha argument.
scored: false
- id: 5.10.3
text: "Consider GKE Sandbox for running untrusted workloads (Manual)"
type: "manual"
remediation: |
Using Command Line:
To enable GKE Sandbox on an existing cluster, a new Node pool must be created,
which can be done using:
gcloud container node-pools create <node_pool_name> --zone <compute-zone> \
--cluster <cluster_name> --image-type=cos_containerd --sandbox="type=gvisor"
scored: false
- id: 5.10.4
text: "Ensure use of Binary Authorization (Automated)"
type: "manual"
remediation: |
Using Command Line:
Update the cluster to enable Binary Authorization:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--binauthz-evaluation-mode=<evaluation_mode>
Example:
gcloud container clusters update $CLUSTER_NAME --zone $COMPUTE_ZONE \
--binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
See: https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--binauthz-evaluation-mode
for more details around the evaluation modes available.
Create a Binary Authorization Policy using the Binary Authorization Policy Reference:
https://cloud.google.com/binary-authorization/docs/policy-yaml-reference for guidance.
Import the policy file into Binary Authorization:
gcloud container binauthz policy import <yaml_policy>
scored: false
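
Recommendation 5.1.4 and the check above both end by importing a policy file with gcloud container binauthz policy import. A minimal illustrative policy is sketched below; the project and attestor names are placeholders, and the field set follows the Binary Authorization policy YAML reference rather than anything mandated by this benchmark.

# Illustrative Binary Authorization policy (placeholders, not a recommended production policy)
name: projects/<project_id>/policy
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/<project_id>/attestors/<attestor_name>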
- id: 5.10.5
text: "Enable Security Posture (Manual)"
type: "manual"
remediation: |
Enable security posture via the UI, the gcloud CLI, or the API.
https://cloud.google.com/kubernetes-engine/docs/how-to/protect-workload-configuration
scored: false


@@ -0,0 +1,6 @@
---
controls:
version: "gke-1.6.0"
id: 1
text: "Control Plane Components"
type: "master"

cfg/gke-1.6.0/node.yaml

@@ -0,0 +1,506 @@
---
controls:
version: "gke-1.6.0"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $proxykubeconfig
scored: true
- id: 3.1.2
text: "Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example:
chown root:root $proxykubeconfig
scored: true
- id: 3.1.3
text: "Ensure that the kubelet configuration file has permissions set to 600 (Manual)"
audit: '/bin/sh -c ''if test -e /home/kubernetes/kubelet-config.yaml; then stat -c permissions=%a /home/kubernetes/kubelet-config.yaml; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the kubelet config file location)
chmod 600 /home/kubernetes/kubelet-config.yaml
scored: true
- id: 3.1.4
text: "Ensure that the kubelet configuration file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e /home/kubernetes/kubelet-config.yaml; then stat -c %U:%G /home/kubernetes/kubelet-config.yaml; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root /home/kubernetes/kubelet-config.yaml
scored: true
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the Anonymous Auth is Not Enabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /home/kubernetes/kubelet-config.yaml
Disable Anonymous Authentication by setting the following parameter:
"authentication": { "anonymous": { "enabled": false } }
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--anonymous-auth=false
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Enable Webhook Authentication by setting the following parameter:
"authentication": { "webhook": { "enabled": true } }
Next, set the Authorization Mode to Webhook by setting the following parameter:
"authorization": { "mode": "Webhook }
Finer detail of the authentication and authorization fields can be found in the
Kubelet Configuration documentation (https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/).
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--authentication-token-webhook
--authorization-mode=Webhook
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.3
text: "Ensure that a Client CA File is Configured (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Configure the client certificate authority file by setting the following parameter
appropriately:
"authentication": { "x509": {"clientCAFile": <path/to/client-ca-file> } }"
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--client-ca-file=<path/to/client-ca-file>
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.4
text: "Ensure that the --read-only-port argument is disabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
bin_op: or
remediation: |
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to 0
"readOnlyPort": 0
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--read-only-port=0
For each remediation:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.yaml file
/etc/kubernetes/kubelet-config.yaml and set the below parameter to a non-zero
value in the format of #h#m#s
"streamingConnectionIdleTimeout": "4h0m0s"
You should ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not
specify a --streaming-connection-idle-timeout argument because it would
override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--streaming-connection-idle-timeout=4h0m0s
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"streamingConnectionIdleTimeout": by extracting the live configuration from the
nodes running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),
and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from
"kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"makeIPTablesUtilChains": true
Ensure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
does not set the --make-iptables-util-chains argument because that would
override your Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--make-iptables-util-chains=true
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"makeIPTablesUtilChains.: true by extracting the live configuration from the nodes
running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),
and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from
"kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.7
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
on each worker node and set the --event-qps parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an appropriate level.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.8
text: "Ensure that the --rotate-certificates argument is not present or is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.yaml file
/etc/kubernetes/kubelet/kubelet-config.yaml and set the below parameter to
true
"RotateCertificate":true
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set the --rotate-certificates
executable argument to false because this would override the Kubelet
config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-certificates=true
scored: true
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: eq
value: true
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.yaml file
/etc/kubernetes/kubelet-config.yaml and set the below parameter to true
"featureGates": {
"RotateKubeletServerCertificate":true
},
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set
the --rotate-kubelet-server-certificate executable argument to false because
this would override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-kubelet-server-certificate=true
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"RotateKubeletServerCertificate": by extracting the live configuration from the
nodes running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),
and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from
"kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediation methods:
Restart the kubelet service and check status. The example below is for when using
systemctl to manage services:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
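
Taken together, the 3.2.x checks above map onto a small set of KubeletConfiguration fields. The fragment below is an illustrative sketch assembled from the JSONPath expressions used by those checks; the client CA path is a placeholder and actual GKE node defaults may differ.

# Illustrative kubelet config fragment satisfying the 3.2.x checks
authentication:
  anonymous:
    enabled: false                         # 3.2.1
  webhook:
    enabled: true                          # 3.2.2
  x509:
    clientCAFile: <path/to/client-ca-file> # 3.2.3 (placeholder)
authorization:
  mode: Webhook                            # 3.2.2
readOnlyPort: 0                            # 3.2.4
streamingConnectionIdleTimeout: 4h0m0s     # 3.2.5
makeIPTablesUtilChains: true               # 3.2.6
eventRecordQPS: 0                          # 3.2.7 (or a level that ensures appropriate event capture)
rotateCertificates: true                   # 3.2.8
featureGates:
  RotateKubeletServerCertificate: true     # 3.2.9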

cfg/gke-1.6.0/policies.yaml

@@ -0,0 +1,238 @@
---
controls:
version: "gke-1.6.0"
id: 4
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]
scored: false
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: false
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 4.1.4
text: "Ensure that default service accounts are not actively used (Automated)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific
access to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
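
To illustrate the remediation above, the default service account can be redefined declaratively; the namespace is a placeholder:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: <namespace>            # placeholder
automountServiceAccountToken: false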
- id: 4.1.5
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 4.1.6
text: "Avoid use of system:masters group (Automated)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 4.1.7
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 4.1.8
text: "Avoid bindings to system:anonymous (Automated)"
type: "manual"
remediation: |
Identify all clusterrolebindings and rolebindings to the user system:anonymous.
Check if they are used and review the permissions associated with the binding using the
commands in the Audit section above or refer to GKE documentation
(https://cloud.google.com/kubernetes-engine/docs/best-practices/rbac#detect-prevent-default).
Strongly consider replacing unsafe bindings with an authenticated, user-defined group.
Where possible, bind to non-default, user-defined groups with least-privilege roles.
If there are any unsafe bindings to the user system:anonymous, proceed to delete them
after consideration for cluster operations with only necessary, safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: false
- id: 4.1.9
text: "Avoid non-default bindings to system:unauthenticated (Automated)"
type: "manual"
remediation: |
Identify all non-default clusterrolebindings and rolebindings to the group
system:unauthenticated. Check if they are used and review the permissions
associated with the binding using the commands in the Audit section above or refer to
GKE documentation (https://cloud.google.com/kubernetes-engine/docs/best-practices/rbac#detect-prevent-default).
Strongly consider replacing non-default, unsafe bindings with an authenticated, user-
defined group. Where possible, bind to non-default, user-defined groups with least-
privilege roles.
If there are any non-default, unsafe bindings to the group system:unauthenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: false
- id: 4.1.10
text: "Avoid non-default bindings to system:authenticated (Automated)"
type: "manual"
remediation: |
Identify all non-default clusterrolebindings and rolebindings to the group
system:authenticated. Check if they are used and review the permissions associated
with the binding using the commands in the Audit section above or refer to GKE
documentation.
Strongly consider replacing non-default, unsafe bindings with an authenticated, user-
defined group. Where possible, bind to non-default, user-defined groups with least-
privilege roles.
If there are any non-default, unsafe bindings to the group system:authenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: false
- id: 4.2
text: "Pod Security Standards"
checks:
- id: 4.2.1
text: "Ensure that the cluster enforces Pod Security Standard Baseline profile or stricter for all namespaces. (Manual)"
type: "manual"
remediation: |
Ensure that Pod Security Admission is in place for every namespace which contains
user workloads.
Run the following command to enforce the Baseline profile in a namespace:
kubectl label namespace <namespace-name> pod-security.kubernetes.io/enforce=baseline
scored: false
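
The same enforcement can also be expressed declaratively on the Namespace object; a minimal sketch with a placeholder name:

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace>                 # placeholder
  labels:
    pod-security.kubernetes.io/enforce: baseline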
- id: 4.3
text: "Network Policies and CNI"
checks:
- id: 4.3.1
text: "Ensure that the CNI in use supports Network Policies (Manual)"
type: "manual"
remediation: |
To use a CNI plugin with Network Policy, enable Network Policy in GKE, and the CNI plugin
will be updated. See Recommendation 5.6.7.
scored: false
- id: 4.3.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as needed.
See: https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy#creating_a_network_policy
for more information.
scored: false
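
A common starting point is a default-deny ingress policy in each namespace; an illustrative manifest (the namespace is a placeholder):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: <namespace>            # placeholder
spec:
  podSelector: {}                   # selects every pod in the namespace
  policyTypes:
    - Ingress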
- id: 4.4
text: "Secrets Management"
checks:
- id: 4.4.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
type: "manual"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: false
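
For reference, consuming a secret as a mounted file rather than an environment variable looks like the sketch below; the pod, image and secret names are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: <pod_name>                  # placeholder
spec:
  containers:
    - name: app
      image: <image>                # placeholder
      volumeMounts:
        - name: app-secret
          mountPath: /etc/app/secret
          readOnly: true
  volumes:
    - name: app-secret
      secret:
        secretName: <secret_name>   # placeholder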
- id: 4.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.5
text: "Extensible Admission Control"
checks:
- id: 4.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
Also see recommendation 5.10.4.
scored: false
- id: 4.6
text: "General Policies"
checks:
- id: 4.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.6.2
text: "Ensure that the seccomp profile is set to RuntimeDefault in your pod definitions (Automated)"
type: "manual"
remediation: |
Use security context to enable the RuntimeDefault seccomp profile in your pod
definitions. An example is as below:
{
"namespace": "kube-system",
"name": "metrics-server-v0.7.0-dbcc8ddf6-gz7d4",
"seccompProfile": "RuntimeDefault"
}
scored: false
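
The snippet above resembles audit output rather than a pod spec; in a pod definition the profile is set under securityContext, for example (the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: <pod_name>                  # placeholder
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: <image>                # placeholder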
- id: 4.6.3
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Google Container-
Optimized OS Benchmark.
scored: false
- id: 4.6.4
text: "The default namespace should not be used (Automated)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false


@@ -494,6 +494,8 @@ func getPlatformBenchmarkVersion(platform Platform) string {
switch platform.Version {
case "1.15", "1.16", "1.17", "1.18", "1.19":
return "gke-1.0"
case "1.29", "1.30", "1.31":
return "gke-1.6.0"
default:
return "gke-1.2.0"
}


@@ -25,6 +25,7 @@ The following table shows the valid targets based on the CIS Benchmark version.
| cis-1.9 | master, controlplane, node, etcd, policies |
| gke-1.0 | master, controlplane, node, etcd, policies, managedservices |
| gke-1.2.0 | controlplane, node, policies, managedservices |
| gke-1.6.0 | controlplane, node, policies, managedservices |
| eks-1.0.1 | controlplane, node, policies, managedservices |
| eks-1.1.0 | controlplane, node, policies, managedservices |
| eks-1.2.0 | controlplane, node, policies, managedservices |


@@ -20,6 +20,7 @@ Some defined by other hardening guides.
| CIS | [1.9](https://workbench.cisecurity.org/benchmarks/16828) | cis-1.9 | 1.27-1.29 |
| CIS | [GKE 1.0.0](https://workbench.cisecurity.org/benchmarks/4536) | gke-1.0 | GKE |
| CIS | [GKE 1.2.0](https://workbench.cisecurity.org/benchmarks/7534) | gke-1.2.0 | GKE |
| CIS | [GKE 1.6.0](https://workbench.cisecurity.org/benchmarks/16093) | gke-1.6.0 | GKE |
| CIS | [EKS 1.0.1](https://workbench.cisecurity.org/benchmarks/6041) | eks-1.0.1 | EKS |
| CIS | [EKS 1.1.0](https://workbench.cisecurity.org/benchmarks/6248) | eks-1.1.0 | EKS |
| CIS | [EKS 1.2.0](https://workbench.cisecurity.org/benchmarks/9681) | eks-1.2.0 | EKS |


@@ -154,8 +154,9 @@ oc apply -f job.yaml
| ------------- | ----------------------------------------------------------- |
| gke-1.0 | master, controlplane, node, etcd, policies, managedservices |
| gke-1.2.0 | master, controlplane, node, policies, managedservices |
| gke-1.6.0 | master, controlplane, node, policies, managedservices |
kube-bench includes benchmarks for GKE. To run this you will need to specify `--benchmark gke-1.0`, `--benchmark gke-1.2.0` or `--benchmark gke-1.6.0` when you run the `kube-bench` command.
To run the benchmark as a job in your GKE cluster apply the included `job-gke.yaml`.