Mirror of https://github.com/aquasecurity/kube-bench.git
Add latest CIS benchmarks

parent d70459b77c
commit 651f12d21c
cfg/aks-1.3/config.yaml (new file, 2 lines)
@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml
cfg/aks-1.3/controlplane.yaml (new file, 31 lines)
@@ -0,0 +1,31 @@
---
controls:
version: "aks-1.3"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
  - id: 2.1
    text: "Logging"
    checks:
      - id: 2.1.1
        text: "Enable audit Logs"
        type: "manual"
        remediation: |
          Azure audit logs are enabled and managed in the Azure portal. To enable log collection for
          the Kubernetes master components in your AKS cluster, open the Azure portal in a web
          browser and complete the following steps:
          1. Select the resource group for your AKS cluster, such as myResourceGroup. Don't
             select the resource group that contains your individual AKS cluster resources, such
             as MC_myResourceGroup_myAKSCluster_eastus.
          2. On the left-hand side, choose Diagnostic settings.
          3. Select your AKS cluster, such as myAKSCluster, then choose to Add diagnostic setting.
          4. Enter a name, such as myAKSClusterLogs, then select the option to Send to Log Analytics.
          5. Select an existing workspace or create a new one. If you create a workspace, provide
             a workspace name, a resource group, and a location.
          6. In the list of available logs, select the logs you wish to enable. For this example,
             enable the kube-audit and kube-audit-admin logs. Common logs include the
             kube-apiserver, kube-controller-manager, and kube-scheduler. You can return and change
             the collected logs once Log Analytics workspaces are enabled.
          7. When ready, select Save to enable collection of the selected logs.
        scored: false
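The portal walkthrough above can also be scripted. A minimal sketch using the Azure CLI; the resource group, cluster, and workspace names are placeholders, not part of the benchmark:

    az monitor diagnostic-settings create \
      --name myAKSClusterLogs \
      --resource "$(az aks show -g myResourceGroup -n myAKSCluster --query id -o tsv)" \
      --workspace myLogAnalyticsWorkspaceResourceID \
      --logs '[{"category": "kube-audit", "enabled": true}, {"category": "kube-audit-admin", "enabled": true}]'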
cfg/aks-1.3/managedservices.yaml (new file, 144 lines)
@@ -0,0 +1,144 @@
---
controls:
version: "aks-1.3"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
  - id: 5.1
    text: "Image Registry and Image Scanning"
    checks:
      - id: 5.1.1
        text: "Ensure Image Vulnerability Scanning using Azure Defender image scanning or a third party provider (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.1.2
        text: "Minimize user access to Azure Container Registry (ACR) (Manual)"
        type: "manual"
        remediation: |
          Azure Container Registry
          If you use Azure Container Registry (ACR) as your container image store, you need to grant
          permissions to the service principal for your AKS cluster to read and pull images. Currently,
          the recommended configuration is to use the az aks create or az aks update command to
          integrate with a registry and assign the appropriate role for the service principal. For
          detailed steps, see Authenticate with Azure Container Registry from Azure Kubernetes
          Service.
          To avoid needing an Owner or Azure account administrator role, you can configure a
          service principal manually or use an existing service principal to authenticate ACR from
          AKS. For more information, see ACR authentication with service principals or Authenticate
          from Kubernetes with a pull secret.
        scored: false

      - id: 5.1.3
        text: "Minimize cluster access to read-only for Azure Container Registry (ACR) (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.1.4
        text: "Minimize Container Registries to only those approved (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

  - id: 5.2
    text: "Access and identity options for Azure Kubernetes Service (AKS)"
    checks:
      - id: 5.2.1
        text: "Prefer using dedicated AKS Service Accounts (Manual)"
        type: "manual"
        remediation: |
          Azure Active Directory integration
          The security of AKS clusters can be enhanced with the integration of Azure Active Directory
          (AD). Built on decades of enterprise identity management, Azure AD is a multi-tenant,
          cloud-based directory and identity management service that combines core directory
          services, application access management, and identity protection. With Azure AD, you can
          integrate on-premises identities into AKS clusters to provide a single source for account
          management and security.
          Azure Active Directory integration with AKS clusters
          With Azure AD-integrated AKS clusters, you can grant users or groups access to Kubernetes
          resources within a namespace or across the cluster. To obtain a kubectl configuration
          context, a user can run the az aks get-credentials command. When a user then interacts
          with the AKS cluster with kubectl, they're prompted to sign in with their Azure AD
          credentials. This approach provides a single source for user account management and
          password credentials. The user can only access the resources as defined by the cluster
          administrator.
          Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect
          is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID
          Connect, see the OpenID Connect documentation. From inside of the Kubernetes cluster,
          Webhook Token Authentication is used to verify authentication tokens. Webhook token
          authentication is configured and managed as part of the AKS cluster.
        scored: false

  - id: 5.3
    text: "Key Management Service (KMS)"
    checks:
      - id: 5.3.1
        text: "Ensure Kubernetes Secrets are encrypted (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

  - id: 5.4
    text: "Cluster Networking"
    checks:
      - id: 5.4.1
        text: "Restrict Access to the Control Plane Endpoint (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.4.2
        text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.4.3
        text: "Ensure clusters are created with Private Nodes (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.4.4
        text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.4.5
        text: "Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

  - id: 5.5
    text: "Authentication and Authorization"
    checks:
      - id: 5.5.1
        text: "Manage Kubernetes RBAC users with Azure AD (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.5.2
        text: "Use Azure RBAC for Kubernetes Authorization (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

  - id: 5.6
    text: "Other Cluster Configurations"
    checks:
      - id: 5.6.1
        text: "Restrict untrusted workloads (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.6.2
        text: "Hostile multi-tenant workloads (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false
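For the Azure AD integration that 5.2.1 and 5.5.1 describe, a hedged sketch of enabling AKS-managed Azure AD on an existing cluster; the admin group object ID is a placeholder:

    az aks update -g myResourceGroup -n myAKSCluster \
      --enable-aad --aad-admin-group-object-ids <AAD_GROUP_OBJECT_ID>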
cfg/aks-1.3/master.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
---
controls:
version: "aks-1.3"
id: 1
text: "Control Plane Components"
type: "master"
cfg/aks-1.3/node.yaml (new file, 298 lines)
@@ -0,0 +1,298 @@
---
controls:
version: "aks-1.3"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
  - id: 3.1
    text: "Worker Node Configuration Files"
    checks:
      - id: 3.1.1
        text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Manual)"
        audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "644"
        remediation: |
          Run the below command (based on the file location on your system) on each worker node.
          For example,
          chmod 644 $kubeletkubeconfig
        scored: false

      - id: 3.1.2
        text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Manual)"
        audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
        tests:
          test_items:
            - flag: root:root
        remediation: |
          Run the below command (based on the file location on your system) on each worker node.
          For example,
          chown root:root $kubeletkubeconfig
        scored: false

      - id: 3.1.3
        text: "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Manual)"
        audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "644"
        remediation: |
          Run the following command (using the config file location identified in the Audit step)
          chmod 644 $kubeletconf
        scored: false

      - id: 3.1.4
        text: "Ensure that the kubelet configuration file ownership is set to root:root (Manual)"
        audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
        tests:
          test_items:
            - flag: root:root
        remediation: |
          Run the following command (using the config file location identified in the Audit step)
          chown root:root $kubeletconf
        scored: false

  - id: 3.2
    text: "Kubelet"
    checks:
      - id: 3.2.1
        text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: "--anonymous-auth"
              path: '{.authentication.anonymous.enabled}'
              compare:
                op: eq
                value: false
        remediation: |
          If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
          false.
          If using executable arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
          --anonymous-auth=false
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.2
        text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --authorization-mode
              path: '{.authorization.mode}'
              compare:
                op: nothave
                value: AlwaysAllow
        remediation: |
          If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
          using executable arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_AUTHZ_ARGS variable.
          --authorization-mode=Webhook
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.3
        text: "Ensure that the --client-ca-file argument is set as appropriate (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --client-ca-file
              path: '{.authentication.x509.clientCAFile}'
              set: true
        remediation: |
          If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
          the location of the client CA file.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_AUTHZ_ARGS variable.
          --client-ca-file=<path/to/client-ca-file>
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.4
        text: "Ensure that the --read-only-port argument is set to 0 (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: "--read-only-port"
              path: '{.readOnlyPort}'
              set: true
              compare:
                op: eq
                value: 0
        remediation: |
          If using a Kubelet config file, edit the file to set readOnlyPort to 0.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
          --read-only-port=0
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.5
        text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --streaming-connection-idle-timeout
              path: '{.streamingConnectionIdleTimeout}'
              set: true
              compare:
                op: noteq
                value: 0
            - flag: --streaming-connection-idle-timeout
              path: '{.streamingConnectionIdleTimeout}'
              set: false
          bin_op: or
        remediation: |
          If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
          value other than 0.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
          --streaming-connection-idle-timeout=5m
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.6
        text: "Ensure that the --make-iptables-util-chains argument is set to true (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --make-iptables-util-chains
              path: '{.makeIPTablesUtilChains}'
              set: true
              compare:
                op: eq
                value: true
            - flag: --make-iptables-util-chains
              path: '{.makeIPTablesUtilChains}'
              set: false
          bin_op: or
        remediation: |
          If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          remove the --make-iptables-util-chains argument from the
          KUBELET_SYSTEM_PODS_ARGS variable.
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.7
        text: "Ensure that the --hostname-override argument is not set (Manual)"
        # This is one of those properties that can only be set as a command line argument.
        # To check if the property is set as expected, we need to parse the kubelet command
        # instead of reading the Kubelet Configuration file.
        audit: "/bin/ps -fC $kubeletbin"
        tests:
          test_items:
            - flag: --hostname-override
              set: false
        remediation: |
          Edit the kubelet service file $kubeletsvc
          on each worker node and remove the --hostname-override argument from the
          KUBELET_SYSTEM_PODS_ARGS variable.
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.8
        text: "Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --event-qps
              path: '{.eventRecordQPS}'
              set: true
              compare:
                op: eq
                value: 0
        remediation: |
          If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.9
        text: "Ensure that the --rotate-certificates argument is not set to false (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --rotate-certificates
              path: '{.rotateCertificates}'
              set: true
              compare:
                op: eq
                value: true
            - flag: --rotate-certificates
              path: '{.rotateCertificates}'
              set: false
          bin_op: or
        remediation: |
          If using a Kubelet config file, edit the file to add the line rotateCertificates: true or
          remove it altogether to use the default value.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
          variable.
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.10
        text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: RotateKubeletServerCertificate
              path: '{.featureGates.RotateKubeletServerCertificate}'
              set: true
              compare:
                op: eq
                value: true
        remediation: |
          Edit the kubelet service file $kubeletsvc
          on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
          --feature-gates=RotateKubeletServerCertificate=true
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false
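The 3.1.x audits build on stat format strings, so compliant output is predictable. For example, with $kubeletkubeconfig resolving to /var/lib/kubelet/kubeconfig (a typical location, used here only for illustration):

    stat -c permissions=%a /var/lib/kubelet/kubeconfig   # prints permissions=644 on a compliant node
    stat -c %U:%G /var/lib/kubelet/kubeconfig            # prints root:root on a compliant node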
cfg/aks-1.3/policies.yaml (new file, 214 lines)
@@ -0,0 +1,214 @@
---
controls:
version: "aks-1.3"
id: 4
text: "Policies"
type: "policies"
groups:
  - id: 4.1
    text: "RBAC and Service Accounts"
    checks:
      - id: 4.1.1
        text: "Ensure that the cluster-admin role is only used where required (Manual)"
        type: "manual"
        remediation: |
          Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
          if they need this role or if they could use a role with fewer privileges.
          Where possible, first bind users to a lower privileged role and then remove the
          clusterrolebinding to the cluster-admin role:
          kubectl delete clusterrolebinding [name]
        scored: false

      - id: 4.1.2
        text: "Minimize access to secrets (Manual)"
        type: "manual"
        remediation: |
          Where possible, remove get, list and watch access to secret objects in the cluster.
        scored: false

      - id: 4.1.3
        text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
        type: "manual"
        remediation: |
          Where possible, replace any use of wildcards in clusterroles and roles with specific
          objects or actions.
        scored: false

      - id: 4.1.4
        text: "Minimize access to create pods (Manual)"
        type: "manual"
        remediation: |
          Where possible, remove create access to pod objects in the cluster.
        scored: false

      - id: 4.1.5
        text: "Ensure that default service accounts are not actively used (Manual)"
        type: "manual"
        remediation: |
          Create explicit service accounts wherever a Kubernetes workload requires specific access
          to the Kubernetes API server.
          Modify the configuration of each default service account to include this value
          automountServiceAccountToken: false
        scored: false

      - id: 4.1.6
        text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
        type: "manual"
        remediation: |
          Modify the definition of pods and service accounts which do not need to mount service
          account tokens to disable it.
        scored: false

  - id: 4.2
    text: "Pod Security Policies"
    checks:
      - id: 4.2.1
        text: "Minimize the admission of privileged containers (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that
          the .spec.privileged field is omitted or set to false.
        scored: false

      - id: 4.2.2
        text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.hostPID field is omitted or set to false.
        scored: false

      - id: 4.2.3
        text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.hostIPC field is omitted or set to false.
        scored: false

      - id: 4.2.4
        text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.hostNetwork field is omitted or set to false.
        scored: false

      - id: 4.2.5
        text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.allowPrivilegeEscalation field is omitted or set to false.
        scored: false

      - id: 4.2.6
        text: "Minimize the admission of root containers (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
          UIDs not including 0.
        scored: false

      - id: 4.2.7
        text: "Minimize the admission of containers with the NET_RAW capability (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.requiredDropCapabilities is set to include either NET_RAW or ALL.
        scored: false

      - id: 4.2.8
        text: "Minimize the admission of containers with added capabilities (Automated)"
        type: "manual"
        remediation: |
          Ensure that allowedCapabilities is not present in PSPs for the cluster unless
          it is set to an empty array.
        scored: false

      - id: 4.2.9
        text: "Minimize the admission of containers with capabilities assigned (Manual)"
        type: "manual"
        remediation: |
          Review the use of capabilities in applications running on your cluster. Where a namespace
          contains applications which do not require any Linux capabilities to operate, consider adding
          a PSP which forbids the admission of containers which do not drop all capabilities.
        scored: false

  - id: 4.3
    text: "Azure Policy / OPA"
    checks: []

  - id: 4.4
    text: "CNI Plugin"
    checks:
      - id: 4.4.1
        text: "Ensure that the latest CNI version is used (Manual)"
        type: "manual"
        remediation: |
          Review the documentation of the Azure CNI plugin, and ensure the latest CNI version is used.
        scored: false

      - id: 4.4.2
        text: "Ensure that all Namespaces have Network Policies defined (Manual)"
        type: "manual"
        remediation: |
          Follow the documentation and create NetworkPolicy objects as you need them.
        scored: false

  - id: 4.5
    text: "Secrets Management"
    checks:
      - id: 4.5.1
        text: "Prefer using secrets as files over secrets as environment variables (Manual)"
        type: "manual"
        remediation: |
          If possible, rewrite application code to read secrets from mounted secret files, rather than
          from environment variables.
        scored: false

      - id: 4.5.2
        text: "Consider external secret storage (Manual)"
        type: "manual"
        remediation: |
          Refer to the secrets management options offered by your cloud provider or a third-party
          secrets management solution.
        scored: false

  - id: 4.6
    text: "Extensible Admission Control"
    checks:
      - id: 4.6.1
        text: "Verify that admission controllers are working as expected (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

  - id: 4.7
    text: "General Policies"
    checks:
      - id: 4.7.1
        text: "Create administrative boundaries between resources using namespaces (Manual)"
        type: "manual"
        remediation: |
          Follow the documentation and create namespaces for objects in your deployment as you need
          them.
        scored: false

      - id: 4.7.2
        text: "Apply Security Context to Your Pods and Containers (Manual)"
        type: "manual"
        remediation: |
          Follow the Kubernetes documentation and apply security contexts to your pods. For a
          suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
          Containers.
        scored: false

      - id: 4.7.3
        text: "The default namespace should not be used (Manual)"
        type: "manual"
        remediation: |
          Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
          resources and that all new resources are created in a specific namespace.
        scored: false
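Two of the manual checks above lend themselves to one-liners. For 4.1.1, existing bindings to cluster-admin can be listed before any are deleted, and for 4.1.5 the default service account can be patched in place; both are standard kubectl:

    kubectl get clusterrolebindings -o wide | grep cluster-admin
    kubectl patch serviceaccount default -p '{"automountServiceAccountToken": false}'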
cfg/config.yaml (modified)
@@ -262,12 +262,15 @@ version_mapping:
   "eks-1.0.1": "eks-1.0.1"
   "eks-1.1.0": "eks-1.1.0"
   "eks-1.2.0": "eks-1.2.0"
+  "eks-1.3.0": "eks-1.3.0"
   "gke-1.0": "gke-1.0"
   "gke-1.2.0": "gke-1.2.0"
+  "gke-1.4.0": "gke-1.4.0"
   "ocp-3.10": "rh-0.7"
   "ocp-3.11": "rh-0.7"
   "ocp-4.0": "rh-1.0"
   "aks-1.0": "aks-1.0"
+  "aks-1.3": "aks-1.3"
   "ack-1.0": "ack-1.0"
   "cis-1.6-k3s": "cis-1.6-k3s"
   "tkgi-1.2.53": "tkgi-1.2.53"
@@ -328,6 +331,12 @@ target_mapping:
     - "controlplane"
     - "policies"
     - "managedservices"
+  "gke-1.4.0":
+    - "master"
+    - "node"
+    - "controlplane"
+    - "policies"
+    - "managedservices"
   "eks-1.0.1":
     - "master"
     - "node"
@@ -346,6 +355,12 @@ target_mapping:
     - "controlplane"
     - "policies"
     - "managedservices"
+  "eks-1.3.0":
+    - "master"
+    - "node"
+    - "controlplane"
+    - "policies"
+    - "managedservices"
   "rh-0.7":
     - "master"
     - "node"
@@ -355,6 +370,12 @@ target_mapping:
     - "controlplane"
     - "policies"
     - "managedservices"
+  "aks-1.3":
+    - "master"
+    - "node"
+    - "controlplane"
+    - "policies"
+    - "managedservices"
   "ack-1.0":
     - "master"
     - "node"
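With these mappings in place, each new benchmark can be selected explicitly; a sketch, assuming the existing --benchmark flag:

    kube-bench --benchmark aks-1.3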
cfg/eks-1.3.0/config.yaml (new file, 9 lines)
@@ -0,0 +1,9 @@
---
## Version-specific settings that override the values in cfg/config.yaml
## These settings are required if you are using the --asff option to report findings to AWS Security Hub
## AWS account number is required.
AWS_ACCOUNT: "<AWS_ACCT_NUMBER>"
## AWS region is required.
AWS_REGION: "<AWS_REGION>"
## EKS Cluster ARN is required.
CLUSTER_ARN: "<AWS_CLUSTER_ARN>"
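Once the three placeholders are filled in, findings can be pushed to AWS Security Hub; a sketch, assuming the --asff option referenced in the comments above:

    kube-bench --benchmark eks-1.3.0 --asff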
cfg/eks-1.3.0/controlplane.yaml (new file, 14 lines)
@@ -0,0 +1,14 @@
---
controls:
version: "eks-1.3.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
  - id: 2.1
    text: "Logging"
    checks:
      - id: 2.1.1
        text: "Enable audit logs (Manual)"
        remediation: "Enable control plane logging for API Server, Audit, Authenticator, Controller Manager, and Scheduler."
        scored: false
cfg/eks-1.3.0/managedservices.yaml (new file, 154 lines)
@@ -0,0 +1,154 @@
---
controls:
version: "eks-1.3.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
  - id: 5.1
    text: "Image Registry and Image Scanning"
    checks:
      - id: 5.1.1
        text: "Ensure Image Vulnerability Scanning using Amazon ECR image scanning or a third-party provider (Manual)"
        type: "manual"
        remediation: |
          To utilize AWS ECR for image scanning, please follow the steps below:

          To create a repository configured for scan on push (AWS CLI):
          aws ecr create-repository --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE

          To edit the settings of an existing repository (AWS CLI):
          aws ecr put-image-scanning-configuration --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE

          Use the following steps to start a manual image scan using the AWS Management Console.
          Open the Amazon ECR console at https://console.aws.amazon.com/ecr/repositories.
          From the navigation bar, choose the Region to create your repository in.
          In the navigation pane, choose Repositories.
          On the Repositories page, choose the repository that contains the image to scan.
          On the Images page, select the image to scan and then choose Scan.
        scored: false

      - id: 5.1.2
        text: "Minimize user access to Amazon ECR (Manual)"
        type: "manual"
        remediation: |
          Before you use IAM to manage access to Amazon ECR, you should understand what IAM features
          are available to use with Amazon ECR. To get a high-level view of how Amazon ECR and other
          AWS services work with IAM, see AWS Services That Work with IAM in the IAM User Guide.
        scored: false

      - id: 5.1.3
        text: "Minimize cluster access to read-only for Amazon ECR (Manual)"
        type: "manual"
        remediation: |
          You can use your Amazon ECR images with Amazon EKS, but you need to satisfy the following prerequisites.

          The Amazon EKS worker node IAM role (NodeInstanceRole) that you use with your worker nodes must possess
          the following IAM policy permissions for Amazon ECR.

          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Action": [
                  "ecr:BatchCheckLayerAvailability",
                  "ecr:BatchGetImage",
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:GetAuthorizationToken"
                ],
                "Resource": "*"
              }
            ]
          }
        scored: false

      - id: 5.1.4
        text: "Minimize Container Registries to only those approved (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

  - id: 5.2
    text: "Identity and Access Management (IAM)"
    checks:
      - id: 5.2.1
        text: "Prefer using dedicated Amazon EKS Service Accounts (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

  - id: 5.3
    text: "AWS Key Management Service (KMS)"
    checks:
      - id: 5.3.1
        text: "Ensure Kubernetes Secrets are encrypted using Customer Master Keys (CMKs) managed in AWS KMS (Manual)"
        type: "manual"
        remediation: |
          This process can only be performed during Cluster Creation.

          Enable 'Secrets Encryption' during Amazon EKS cluster creation as described
          in the links within the 'References' section.
        scored: false

  - id: 5.4
    text: "Cluster Networking"
    checks:
      - id: 5.4.1
        text: "Restrict Access to the Control Plane Endpoint (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.4.2
        text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.4.3
        text: "Ensure clusters are created with Private Nodes (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.4.4
        text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

      - id: 5.4.5
        text: "Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)"
        type: "manual"
        remediation: "No remediation"
        scored: false

  - id: 5.5
    text: "Authentication and Authorization"
    checks:
      - id: 5.5.1
        text: "Manage Kubernetes RBAC users with AWS IAM Authenticator for Kubernetes (Manual)"
        type: "manual"
        remediation: |
          Refer to the 'Managing users or IAM roles for your cluster' in Amazon EKS documentation.
        scored: false

  - id: 5.6
    text: "Other Cluster Configurations"
    checks:
      - id: 5.6.1
        text: "Consider Fargate for running untrusted workloads (Manual)"
        type: "manual"
        remediation: |
          Create a Fargate profile for your cluster. Before you can schedule pods running on Fargate
          in your cluster, you must define a Fargate profile that specifies which pods should use
          Fargate when they are launched. For more information, see AWS Fargate profile.

          Note: If you created your cluster with eksctl using the --fargate option, then a Fargate profile has
          already been created for your cluster with selectors for all pods in the kube-system
          and default namespaces. Use the following procedure to create Fargate profiles for
          any other namespaces you would like to use with Fargate.
        scored: false
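The scan-on-push setting from 5.1.1 can be verified after the fact; both commands are standard AWS CLI, with the repository name, tag, and region as placeholders:

    aws ecr describe-repositories --repository-names $REPO_NAME --region $REGION_CODE
    aws ecr describe-image-scan-findings --repository-name $REPO_NAME --image-id imageTag=latest --region $REGION_CODE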
cfg/eks-1.3.0/master.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
---
controls:
version: "eks-1.3.0"
id: 1
text: "Control Plane Components"
type: "master"
cfg/eks-1.3.0/node.yaml (new file, 307 lines)
@@ -0,0 +1,307 @@
---
controls:
version: "eks-1.3.0"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
  - id: 3.1
    text: "Worker Node Configuration Files"
    checks:
      - id: 3.1.1
        text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Manual)"
        audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "644"
        remediation: |
          Run the below command (based on the file location on your system) on each worker node.
          For example,
          chmod 644 $kubeletkubeconfig
        scored: false

      - id: 3.1.2
        text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Manual)"
        audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
        tests:
          test_items:
            - flag: root:root
        remediation: |
          Run the below command (based on the file location on your system) on each worker node.
          For example,
          chown root:root $kubeletkubeconfig
        scored: false

      - id: 3.1.3
        text: "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Manual)"
        audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "644"
        remediation: |
          Run the following command (using the config file location identified in the Audit step)
          chmod 644 $kubeletconf
        scored: false

      - id: 3.1.4
        text: "Ensure that the kubelet configuration file ownership is set to root:root (Manual)"
        audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
        tests:
          test_items:
            - flag: root:root
        remediation: |
          Run the following command (using the config file location identified in the Audit step)
          chown root:root $kubeletconf
        scored: false

  - id: 3.2
    text: "Kubelet"
    checks:
      - id: 3.2.1
        text: "Ensure that the Anonymous Auth is Not Enabled (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: "--anonymous-auth"
              path: '{.authentication.anonymous.enabled}'
              set: true
              compare:
                op: eq
                value: false
        remediation: |
          If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
          false.
          If using executable arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
          --anonymous-auth=false
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: true

      - id: 3.2.2
        text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --authorization-mode
              path: '{.authorization.mode}'
              set: true
              compare:
                op: nothave
                value: AlwaysAllow
        remediation: |
          If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
          using executable arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_AUTHZ_ARGS variable.
          --authorization-mode=Webhook
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: true

      - id: 3.2.3
        text: "Ensure that a Client CA File is Configured (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --client-ca-file
              path: '{.authentication.x509.clientCAFile}'
              set: true
        remediation: |
          If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
          the location of the client CA file.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_AUTHZ_ARGS variable.
          --client-ca-file=<path/to/client-ca-file>
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.4
        text: "Ensure that the --read-only-port is disabled (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: "--read-only-port"
              path: '{.readOnlyPort}'
              set: true
              compare:
                op: eq
                value: 0
        remediation: |
          If using a Kubelet config file, edit the file to set readOnlyPort to 0.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
          --read-only-port=0
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.5
        text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --streaming-connection-idle-timeout
              path: '{.streamingConnectionIdleTimeout}'
              set: true
              compare:
                op: noteq
                value: 0
            - flag: --streaming-connection-idle-timeout
              path: '{.streamingConnectionIdleTimeout}'
              set: false
          bin_op: or
        remediation: |
          If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
          value other than 0.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
          --streaming-connection-idle-timeout=5m
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: true

      - id: 3.2.6
        text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --make-iptables-util-chains
              path: '{.makeIPTablesUtilChains}'
              set: true
              compare:
                op: eq
                value: true
            - flag: --make-iptables-util-chains
              path: '{.makeIPTablesUtilChains}'
              set: false
          bin_op: or
        remediation: |
          If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          remove the --make-iptables-util-chains argument from the
          KUBELET_SYSTEM_PODS_ARGS variable.
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: true

      - id: 3.2.7
        text: "Ensure that the --hostname-override argument is not set (Manual)"
        # This is one of those properties that can only be set as a command line argument.
        # To check if the property is set as expected, we need to parse the kubelet command
        # instead of reading the Kubelet Configuration file.
        audit: "/bin/ps -fC $kubeletbin"
        tests:
          test_items:
            - flag: --hostname-override
              set: false
        remediation: |
          Edit the kubelet service file $kubeletsvc
          on each worker node and remove the --hostname-override argument from the
          KUBELET_SYSTEM_PODS_ARGS variable.
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.8
        text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --event-qps
              path: '{.eventRecordQPS}'
              set: true
              compare:
                op: gte
                value: 0
        remediation: |
          If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.9
        text: "Ensure that the --rotate-certificates argument is not present or is set to true (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: --rotate-certificates
              path: '{.rotateCertificates}'
              set: true
              compare:
                op: eq
                value: true
            - flag: --rotate-certificates
              path: '{.rotateCertificates}'
              set: false
          bin_op: or
        remediation: |
          If using a Kubelet config file, edit the file to add the line rotateCertificates: true or
          remove it altogether to use the default value.
          If using command line arguments, edit the kubelet service file
          $kubeletsvc on each worker node and
          remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
          variable.
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

      - id: 3.2.10
        text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Manual)"
        audit: "/bin/ps -fC $kubeletbin"
        audit_config: "/bin/cat $kubeletconf"
        tests:
          test_items:
            - flag: RotateKubeletServerCertificate
              path: '{.featureGates.RotateKubeletServerCertificate}'
              set: true
              compare:
                op: eq
                value: true
        remediation: |
          Edit the kubelet service file $kubeletsvc
          on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
          --feature-gates=RotateKubeletServerCertificate=true
          Based on your system, restart the kubelet service. For example:
          systemctl daemon-reload
          systemctl restart kubelet.service
        scored: false

  - id: 3.3
    text: "Container Optimized OS"
    checks:
      - id: 3.3.1
        text: "Prefer using a container-optimized OS when possible (Manual)"
        remediation: "No remediation"
        scored: false
cfg/eks-1.3.0/policies.yaml (new file, 209 lines)
@@ -0,0 +1,209 @@
---
controls:
version: "eks-1.3.0"
id: 4
text: "Policies"
type: "policies"
groups:
  - id: 4.1
    text: "RBAC and Service Accounts"
    checks:
      - id: 4.1.1
        text: "Ensure that the cluster-admin role is only used where required (Manual)"
        type: "manual"
        remediation: |
          Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
          if they need this role or if they could use a role with fewer privileges.
          Where possible, first bind users to a lower privileged role and then remove the
          clusterrolebinding to the cluster-admin role:
          kubectl delete clusterrolebinding [name]
        scored: false

      - id: 4.1.2
        text: "Minimize access to secrets (Manual)"
        type: "manual"
        remediation: |
          Where possible, remove get, list and watch access to secret objects in the cluster.
        scored: false

      - id: 4.1.3
        text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
        type: "manual"
        remediation: |
          Where possible, replace any use of wildcards in clusterroles and roles with specific
          objects or actions.
        scored: false

      - id: 4.1.4
        text: "Minimize access to create pods (Manual)"
        type: "manual"
        remediation: |
          Where possible, remove create access to pod objects in the cluster.
        scored: false

      - id: 4.1.5
        text: "Ensure that default service accounts are not actively used (Manual)"
        type: "manual"
        remediation: |
          Create explicit service accounts wherever a Kubernetes workload requires specific access
          to the Kubernetes API server.
          Modify the configuration of each default service account to include this value
          automountServiceAccountToken: false
        scored: false

      - id: 4.1.6
        text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
        type: "manual"
        remediation: |
          Modify the definition of pods and service accounts which do not need to mount service
          account tokens to disable it.
        scored: false

      - id: 4.1.7
        text: "Avoid use of system:masters group (Manual)"
        type: "manual"
        remediation: |
          Remove the system:masters group from all users in the cluster.
        scored: false

      - id: 4.1.8
        text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
        type: "manual"
        remediation: |
          Where possible, remove the impersonate, bind and escalate rights from subjects.
        scored: false

  - id: 4.2
    text: "Pod Security Policies"
    checks:
      - id: 4.2.1
        text: "Minimize the admission of privileged containers (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that
          the .spec.privileged field is omitted or set to false.
        scored: false

      - id: 4.2.2
        text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.hostPID field is omitted or set to false.
        scored: false

      - id: 4.2.3
        text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.hostIPC field is omitted or set to false.
        scored: false

      - id: 4.2.4
        text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.hostNetwork field is omitted or set to false.
        scored: false

      - id: 4.2.5
        text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.allowPrivilegeEscalation field is omitted or set to false.
        scored: false

      - id: 4.2.6
        text: "Minimize the admission of root containers (Automated)"
        type: "manual"
        remediation: |
          Create a PSP as described in the Kubernetes documentation, ensuring that the
          .spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
          UIDs not including 0.
        scored: false

      - id: 4.2.7
        text: "Minimize the admission of containers with added capabilities (Manual)"
        type: "manual"
        remediation: |
          Ensure that allowedCapabilities is not present in PSPs for the cluster unless
          it is set to an empty array.
        scored: false

      - id: 4.2.8
        text: "Minimize the admission of containers with capabilities assigned (Manual)"
        type: "manual"
        remediation: |
          Review the use of capabilities in applications running on your cluster. Where a namespace
          contains applications which do not require any Linux capabilities to operate, consider adding
          a PSP which forbids the admission of containers which do not drop all capabilities.
        scored: false

  - id: 4.3
    text: "CNI Plugin"
    checks:
      - id: 4.3.1
        text: "Ensure CNI plugin supports network policies (Manual)"
        type: "manual"
        remediation: |
          As with RBAC policies, network policies should adhere to the policy of least privileged
          access. Start by creating a deny all policy that restricts all inbound and outbound traffic
          from a namespace or create a global policy using Calico.
        scored: false

      - id: 4.3.2
        text: "Ensure that all Namespaces have Network Policies defined (Manual)"
        type: "manual"
        remediation: |
          Follow the documentation and create NetworkPolicy objects as you need them.
        scored: false

  - id: 4.4
    text: "Secrets Management"
    checks:
      - id: 4.4.1
        text: "Prefer using secrets as files over secrets as environment variables (Manual)"
        type: "manual"
        remediation: |
          If possible, rewrite application code to read secrets from mounted secret files, rather than
          from environment variables.
        scored: false

      - id: 4.4.2
        text: "Consider external secret storage (Manual)"
        type: "manual"
        remediation: |
          Refer to the secrets management options offered by your cloud provider or a third-party
          secrets management solution.
        scored: false

  - id: 4.5
    text: "General Policies"
    checks:
      - id: 4.5.1
        text: "Create administrative boundaries between resources using namespaces (Manual)"
        type: "manual"
        remediation: |
          Follow the documentation and create namespaces for objects in your deployment as you need
          them.
        scored: false

      - id: 4.5.2
        text: "Apply Security Context to Your Pods and Containers (Manual)"
        type: "manual"
        remediation: |
          Follow the Kubernetes documentation and apply security contexts to your pods. For a
          suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
          Containers.
        scored: false

      - id: 4.5.3
        text: "The default namespace should not be used (Manual)"
        type: "manual"
        remediation: |
          Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
          resources and that all new resources are created in a specific namespace.
        scored: false
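The deny-all starting point suggested in 4.3.1 is a single short manifest applied per namespace; a minimal sketch, with the namespace name as a placeholder:

    kubectl apply -n my-namespace -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
    spec:
      podSelector: {}
      policyTypes: ["Ingress", "Egress"]
    EOF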
cfg/gke-1.4.0/config.yaml (new file, 2 lines)
@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml
cfg/gke-1.4.0/controlplane.yaml (new file, 35 lines)
@@ -0,0 +1,35 @@
---
controls:
version: "gke-1.4.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
  - id: 2.1
    text: "Authentication and Authorization"
    checks:
      - id: 2.1.1
        text: "Client certificate authentication should not be used for users (Manual)"
        type: "manual"
        remediation: |
          Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
          implemented in place of client certificates.
          You can remediate the availability of client certificates in your GKE cluster. See
          Recommendation 6.8.2.
        scored: false

  - id: 2.2
    text: "Logging"
    type: skip
    checks:
      - id: 2.2.1
        text: "Ensure that a minimal audit policy is created (Manual)"
        type: "manual"
        remediation: "This control cannot be modified in GKE."
        scored: false

      - id: 2.2.2
        text: "Ensure that the audit policy covers key security concerns (Manual)"
        type: "manual"
        remediation: "This control cannot be modified in GKE."
        scored: false
706
cfg/gke-1.4.0/managedservices.yaml
Normal file
@@ -0,0 +1,706 @@
---
controls:
version: "gke-1.4.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning using GCR Container Analysis or a third-party provider (Manual)"
type: "manual"
remediation: |
Using Command Line:

gcloud services enable containerscanning.googleapis.com
scored: false

- id: 5.1.2
text: "Minimize user access to GCR (Manual)"
type: "manual"
remediation: |
Using Command Line:
To change roles at the GCR bucket level:
Firstly, run the following if read permissions are required:

gsutil iam ch [TYPE]:[EMAIL-ADDRESS]:objectViewer \
gs://artifacts.[PROJECT_ID].appspot.com

Then remove the excessively privileged role (Storage Admin / Storage Object Admin /
Storage Object Creator) using:

gsutil iam ch -d [TYPE]:[EMAIL-ADDRESS]:[ROLE] \
gs://artifacts.[PROJECT_ID].appspot.com

where:
[TYPE] can be one of the following:
o user, if the [EMAIL-ADDRESS] is a Google account
o serviceAccount, if [EMAIL-ADDRESS] specifies a Service account
[EMAIL-ADDRESS] can be one of the following:
o a Google account (for example, someone@example.com)
o a Cloud IAM service account
To modify roles defined at the project level and subsequently inherited within the GCR
bucket, or the Service Account User role, extract the IAM policy file, modify it accordingly
and apply it using:

gcloud projects set-iam-policy [PROJECT_ID] [POLICY_FILE]
scored: false

- id: 5.1.3
text: "Minimize cluster access to read-only for GCR (Manual)"
type: "manual"
remediation: |
Using Command Line:
For an account explicitly granted access to the bucket, first add read access to the
Kubernetes Service Account:

gsutil iam ch [TYPE]:[EMAIL-ADDRESS]:objectViewer \
gs://artifacts.[PROJECT_ID].appspot.com

where:
[TYPE] can be one of the following:
o user, if the [EMAIL-ADDRESS] is a Google account
o serviceAccount, if [EMAIL-ADDRESS] specifies a Service account
[EMAIL-ADDRESS] can be one of the following:
o a Google account (for example, someone@example.com)
o a Cloud IAM service account

Then remove the excessively privileged role (Storage Admin / Storage Object Admin /
Storage Object Creator) using:

gsutil iam ch -d [TYPE]:[EMAIL-ADDRESS]:[ROLE] \
gs://artifacts.[PROJECT_ID].appspot.com

For an account that inherits access to the GCR Bucket through Project level permissions,
modify the Projects IAM policy file accordingly, then upload it using:

gcloud projects set-iam-policy [PROJECT_ID] [POLICY_FILE]
scored: false

- id: 5.1.4
text: "Minimize Container Registries to only those approved (Manual)"
type: "manual"
remediation: |
Using Command Line:
First, update the cluster to enable Binary Authorization:

gcloud container clusters update [CLUSTER_NAME] \
--enable-binauthz

Create a Binary Authorization Policy using the Binary Authorization Policy Reference
(https://cloud.google.com/binary-authorization/docs/policy-yaml-reference) for guidance.
Import the policy file into Binary Authorization:

gcloud container binauthz policy import [YAML_POLICY]
scored: false

- id: 5.2
text: "Identity and Access Management (IAM)"
checks:
- id: 5.2.1
text: "Ensure GKE clusters are not running using the Compute Engine default service account (Manual)"
type: "manual"
remediation: |
Using Command Line:
Firstly, create a minimally privileged service account:

gcloud iam service-accounts create [SA_NAME] \
--display-name "GKE Node Service Account"
export NODE_SA_EMAIL=`gcloud iam service-accounts list \
--format='value(email)' \
--filter='displayName:GKE Node Service Account'`

Grant the following roles to the service account:

export PROJECT_ID=`gcloud config get-value project`
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$NODE_SA_EMAIL \
--role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$NODE_SA_EMAIL \
--role roles/monitoring.viewer
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$NODE_SA_EMAIL \
--role roles/logging.logWriter

To create a new Node pool using the Service account, run the following command:

gcloud container node-pools create [NODE_POOL] \
--service-account=[SA_NAME]@[PROJECT_ID].iam.gserviceaccount.com \
--cluster=[CLUSTER_NAME] --zone [COMPUTE_ZONE]

You will need to migrate your workloads to the new Node pool, and delete Node pools that
use the default service account to complete the remediation.
scored: false

- id: 5.2.2
text: "Prefer using dedicated GCP Service Accounts and Workload Identity (Manual)"
type: "manual"
remediation: |
Using Command Line:

gcloud beta container clusters update [CLUSTER_NAME] --zone [CLUSTER_ZONE] \
--identity-namespace=[PROJECT_ID].svc.id.goog

Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.

Then, modify existing Node pools to enable GKE_METADATA_SERVER:

gcloud beta container node-pools update [NODEPOOL_NAME] \
--cluster=[CLUSTER_NAME] --zone [CLUSTER_ZONE] \
--workload-metadata-from-node=GKE_METADATA_SERVER

You may also need to modify workloads in order for them to use Workload Identity as
described within https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
Also consider the effects on the availability of your hosted workloads as Node
pools are updated; it may be more appropriate to create new Node Pools.
scored: false

- id: 5.3
text: "Cloud Key Management Service (Cloud KMS)"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS (Manual)"
type: "manual"
remediation: |
Using Command Line:
To create a key:

Create a key ring:

gcloud kms keyrings create [RING_NAME] \
--location [LOCATION] \
--project [KEY_PROJECT_ID]

Create a key:

gcloud kms keys create [KEY_NAME] \
--location [LOCATION] \
--keyring [RING_NAME] \
--purpose encryption \
--project [KEY_PROJECT_ID]

Grant the Kubernetes Engine Service Agent service account the Cloud KMS CryptoKey
Encrypter/Decrypter role:

gcloud kms keys add-iam-policy-binding [KEY_NAME] \
--location [LOCATION] \
--keyring [RING_NAME] \
--member serviceAccount:[SERVICE_ACCOUNT_NAME] \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter \
--project [KEY_PROJECT_ID]

To create a new cluster with Application-layer Secrets Encryption:

gcloud container clusters create [CLUSTER_NAME] \
--cluster-version=latest \
--zone [ZONE] \
--database-encryption-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME] \
--project [CLUSTER_PROJECT_ID]

To enable on an existing cluster:

gcloud container clusters update [CLUSTER_NAME] \
--zone [ZONE] \
--database-encryption-key projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME] \
--project [CLUSTER_PROJECT_ID]
scored: false

- id: 5.4
text: "Node Metadata"
checks:
- id: 5.4.1
text: "Ensure legacy Compute Engine instance metadata APIs are Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To update an existing cluster, create a new Node pool with the legacy GCE metadata
endpoint disabled:

gcloud container node-pools create [POOL_NAME] \
--metadata disable-legacy-endpoints=true \
--cluster [CLUSTER_NAME] \
--zone [COMPUTE_ZONE]

You will need to migrate workloads from any existing non-conforming Node pools to the
new Node pool, then delete the non-conforming Node pools to complete the remediation.
scored: false

- id: 5.4.2
text: "Ensure the GKE Metadata Server is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:

gcloud beta container clusters update [CLUSTER_NAME] \
--identity-namespace=[PROJECT_ID].svc.id.goog

Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.

To modify an existing Node pool to enable GKE Metadata Server:

gcloud beta container node-pools update [NODEPOOL_NAME] \
--cluster=[CLUSTER_NAME] \
--workload-metadata-from-node=GKE_METADATA_SERVER

You may also need to modify workloads in order for them to use Workload Identity as
described within https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
scored: false

- id: 5.5
text: "Node Configuration and Maintenance"
checks:
- id: 5.5.1
text: "Ensure Container-Optimized OS (COS) is used for GKE node images (Automated)"
type: "manual"
remediation: |
Using Command Line:
To set the node image to cos for an existing cluster's Node pool:

gcloud container clusters upgrade [CLUSTER_NAME] \
--image-type cos \
--zone [COMPUTE_ZONE] --node-pool [POOL_NAME]
scored: false

- id: 5.5.2
text: "Ensure Node Auto-Repair is enabled for GKE nodes (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable node auto-repair for an existing cluster's Node pool, run the following
command:

gcloud container node-pools update [POOL_NAME] \
--cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
--enable-autorepair
scored: false

- id: 5.5.3
text: "Ensure Node Auto-Upgrade is enabled for GKE nodes (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable node auto-upgrade for an existing cluster's Node pool, run the following
command:

gcloud container node-pools update [NODE_POOL] \
--cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
--enable-autoupgrade
scored: false

- id: 5.5.4
text: "When creating New Clusters - Automate GKE version management using Release Channels (Manual)"
type: "manual"
remediation: |
Using Command Line:
Create a new cluster by running the following command:

gcloud beta container clusters create [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--release-channel [RELEASE_CHANNEL]

where [RELEASE_CHANNEL] is stable or regular, according to your needs.
scored: false

- id: 5.5.5
text: "Ensure Shielded GKE Nodes are Enabled (Manual)"
type: "manual"
remediation: |
Using Command Line:
To migrate an existing cluster, the --enable-shielded-nodes flag needs to be
specified in the cluster update command:

gcloud beta container clusters update [CLUSTER_NAME] \
--zone [CLUSTER_ZONE] \
--enable-shielded-nodes
scored: false

- id: 5.5.6
text: "Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Integrity Monitoring enabled, run the
following command:

gcloud beta container node-pools create [NODEPOOL_NAME] \
--cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
--shielded-integrity-monitoring

You will also need to migrate workloads from existing non-conforming Node pools to the
newly created Node pool, then delete the non-conforming pools.
scored: false

- id: 5.5.7
text: "Ensure Secure Boot for Shielded GKE Nodes is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Secure Boot enabled, run the following
command:

gcloud beta container node-pools create [NODEPOOL_NAME] \
--cluster [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
--shielded-secure-boot

You will also need to migrate workloads from existing non-conforming Node pools to the
newly created Node pool, then delete the non-conforming pools.
scored: false

- id: 5.6
text: "Cluster Networking"
checks:
- id: 5.6.1
text: "Enable VPC Flow Logs and Intranode Visibility (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable intranode visibility on an existing cluster, run the following command:

gcloud beta container clusters update [CLUSTER_NAME] \
--enable-intra-node-visibility
scored: false

- id: 5.6.2
text: "Ensure use of VPC-native clusters (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable Alias IP on a new cluster, run the following command:

gcloud container clusters create [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--enable-ip-alias
scored: false

- id: 5.6.3
text: "Ensure Master Authorized Networks is Enabled (Manual)"
type: "manual"
remediation: |
Using Command Line:
To check Master Authorized Networks status for an existing cluster, run the following
command:

gcloud container clusters describe [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--format json | jq '.masterAuthorizedNetworksConfig'

The output should return

{
"enabled": true
}

if Master Authorized Networks is enabled.

If Master Authorized Networks is disabled, the above command will return null ({}).
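To enable Master Authorized Networks on an existing cluster, a command along the
following lines should work (the [AUTHORIZED_CIDR] placeholder is illustrative; supply
one or more CIDR blocks that are allowed to reach the master):

gcloud container clusters update [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--enable-master-authorized-networks \
--master-authorized-networks [AUTHORIZED_CIDR]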
scored: false

- id: 5.6.4
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: |
Using Command Line:
Create a cluster with a Private Endpoint enabled and Public Access disabled by including
the --enable-private-endpoint flag within the cluster create command:

gcloud container clusters create [CLUSTER_NAME] \
--enable-private-endpoint

Setting this flag also requires the setting of --enable-private-nodes, --enable-ip-alias
and --master-ipv4-cidr=[MASTER_CIDR_RANGE].
scored: false

- id: 5.6.5
text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: |
Using Command Line:
To create a cluster with Private Nodes enabled, include the --enable-private-nodes flag
within the cluster create command:

gcloud container clusters create [CLUSTER_NAME] \
--enable-private-nodes

Setting this flag also requires the setting of --enable-ip-alias and
--master-ipv4-cidr=[MASTER_CIDR_RANGE].
scored: false

- id: 5.6.6
text: "Consider firewalling GKE worker nodes (Manual)"
type: "manual"
remediation: |
Using Command Line:
Use the following command to generate firewall rules, setting the variables as appropriate.
You may want to use the target [TAG] and [SERVICE_ACCOUNT] previously identified.

gcloud compute firewall-rules create [FIREWALL_RULE_NAME] \
--network [NETWORK] \
--priority [PRIORITY] \
--direction [DIRECTION] \
--action [ACTION] \
--target-tags [TAG] \
--target-service-accounts [SERVICE_ACCOUNT] \
--source-ranges [SOURCE_CIDR_RANGE] \
--source-tags [SOURCE_TAGS] \
--source-service-accounts=[SOURCE_SERVICE_ACCOUNT] \
--destination-ranges [DESTINATION_CIDR_RANGE] \
--rules [RULES]
scored: false

- id: 5.6.7
text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
type: "manual"
remediation: |
Using Command Line:
To enable Network Policy for an existing cluster, firstly enable the Network Policy add-on:

gcloud container clusters update [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--update-addons NetworkPolicy=ENABLED

Then, enable Network Policy:

gcloud container clusters update [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--enable-network-policy
scored: false

- id: 5.6.8
text: "Ensure use of Google-managed SSL Certificates (Manual)"
type: "manual"
remediation: |
If services of type:LoadBalancer are discovered, consider replacing the Service with an
Ingress.

To configure the Ingress and use Google-managed SSL certificates, follow the instructions
as listed at https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs.
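As a sketch, a ManagedCertificate resource looks like the following (the name and domain
are placeholders), and the Ingress then references it through the
networking.gke.io/managed-certificates annotation:

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: example-certificate
spec:
  domains:
    - example.yourdomain.com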
scored: false

- id: 5.7
text: "Logging"
checks:
- id: 5.7.1
text: "Ensure Stackdriver Kubernetes Logging and Monitoring is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:

STACKDRIVER KUBERNETES ENGINE MONITORING SUPPORT (PREFERRED):
To enable Stackdriver Kubernetes Engine Monitoring for an existing cluster, run the
following command:

gcloud container clusters update [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--enable-stackdriver-kubernetes

LEGACY STACKDRIVER SUPPORT:
Both Logging and Monitoring support must be enabled.
To enable Legacy Stackdriver Logging for an existing cluster, run the following command:

gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
--logging-service logging.googleapis.com

To enable Legacy Stackdriver Monitoring for an existing cluster, run the following
command:

gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
--monitoring-service monitoring.googleapis.com
scored: false

- id: 5.7.2
text: "Enable Linux auditd logging (Manual)"
type: "manual"
remediation: |
Using Command Line:
Download the example manifests:

curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml \
> cos-auditd-logging.yaml

Edit the example manifests if needed. Then, deploy them:

kubectl apply -f cos-auditd-logging.yaml

Verify that the logging Pods have started. If you defined a different Namespace in your
manifests, replace cos-auditd with the name of the namespace you're using:

kubectl get pods --namespace=cos-auditd
scored: false

- id: 5.8
text: "Authentication and Authorization"
checks:
- id: 5.8.1
text: "Ensure Basic Authentication using static passwords is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To update an existing cluster and disable Basic Authentication by removing the static
password:

gcloud container clusters update [CLUSTER_NAME] \
--no-enable-basic-auth
scored: false

- id: 5.8.2
text: "Ensure authentication using Client Certificates is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new cluster without a Client Certificate:

gcloud container clusters create [CLUSTER_NAME] \
--no-issue-client-certificate
scored: false

- id: 5.8.3
text: "Manage Kubernetes RBAC users with Google Groups for GKE (Manual)"
type: "manual"
remediation: |
Using Command Line:
Follow the G Suite Groups instructions at
https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#google-groups-for-gke.

Then, create a cluster with:

gcloud beta container clusters create my-cluster \
--security-group="gke-security-groups@[yourdomain.com]"

Finally, create Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings that
reference your G Suite Groups.
scored: false

- id: 5.8.4
text: "Ensure Legacy Authorization (ABAC) is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To disable Legacy Authorization for an existing cluster, run the following command:

gcloud container clusters update [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--no-enable-legacy-authorization
scored: false

- id: 5.9
text: "Storage"
checks:
- id: 5.9.1
text: "Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD) (Manual)"
type: "manual"
remediation: |
Using Command Line:
FOR NODE BOOT DISKS:
Create a new node pool using customer-managed encryption keys for the node boot disk,
where [DISK_TYPE] is either pd-standard or pd-ssd:

gcloud beta container node-pools create [NODE_POOL_NAME] \
--disk-type [DISK_TYPE] \
--boot-disk-kms-key \
projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]

Create a cluster using customer-managed encryption keys for the node boot disk,
where [DISK_TYPE] is either pd-standard or pd-ssd:

gcloud beta container clusters create [CLUSTER_NAME] \
--disk-type [DISK_TYPE] \
--boot-disk-kms-key \
projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]

FOR ATTACHED DISKS:
Follow the instructions detailed at
https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek.
scored: false

- id: 5.10
text: "Other Cluster Configurations"
checks:
- id: 5.10.1
text: "Ensure Kubernetes Web UI is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To disable the Kubernetes Dashboard on an existing cluster, run the following command:

gcloud container clusters update [CLUSTER_NAME] \
--zone [ZONE] \
--update-addons=KubernetesDashboard=DISABLED
scored: false

- id: 5.10.2
text: "Ensure that Alpha clusters are not used for production workloads (Automated)"
type: "manual"
remediation: |
Using Command Line:
Upon creating a new cluster:

gcloud container clusters create [CLUSTER_NAME] \
--zone [COMPUTE_ZONE]

Do not use the --enable-kubernetes-alpha argument.
scored: false

- id: 5.10.3
text: "Ensure Pod Security Policy is Enabled and set as appropriate (Manual)"
type: "manual"
remediation: |
Using Command Line:
To enable Pod Security Policy for an existing cluster, run the following command:

gcloud beta container clusters update [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--enable-pod-security-policy
scored: false

- id: 5.10.4
text: "Consider GKE Sandbox for running untrusted workloads (Manual)"
type: "manual"
remediation: |
Using Command Line:
To enable GKE Sandbox on an existing cluster, a new Node pool must be created:

gcloud container node-pools create [NODE_POOL_NAME] \
--zone=[COMPUTE_ZONE] \
--cluster=[CLUSTER_NAME] \
--image-type=cos_containerd \
--sandbox type=gvisor
scored: false

- id: 5.10.5
text: "Ensure use of Binary Authorization (Automated)"
type: "manual"
remediation: |
Using Command Line:
Firstly, update the cluster to enable Binary Authorization:

gcloud container clusters update [CLUSTER_NAME] \
--zone [COMPUTE_ZONE] \
--enable-binauthz

Create a Binary Authorization Policy using the Binary Authorization Policy Reference
(https://cloud.google.com/binary-authorization/docs/policy-yaml-reference) for
guidance.

Import the policy file into Binary Authorization:

gcloud container binauthz policy import [YAML_POLICY]
scored: false

- id: 5.10.6
text: "Enable Cloud Security Command Center (Cloud SCC) (Manual)"
type: "manual"
remediation: |
Using Command Line:
Follow the instructions at
https://cloud.google.com/security-command-center/docs/quickstart-scc-setup.
scored: false
6
cfg/gke-1.4.0/master.yaml
Normal file
@@ -0,0 +1,6 @@
---
controls:
version: "gke-1.4.0"
id: 1
text: "Control Plane Components"
type: "master"
312
cfg/gke-1.4.0/node.yaml
Normal file
@@ -0,0 +1,312 @@
---
controls:
version: "gke-1.4.0"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $proxykubeconfig
scored: false

- id: 3.1.2
text: "Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false

- id: 3.1.3
text: "Ensure that the kubelet configuration file permissions are set to 644 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step):
chmod 644 /var/lib/kubelet/config.yaml
scored: false

- id: 3.1.4
text: "Ensure that the kubelet configuration file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step):
chown root:root /etc/kubernetes/kubelet.conf
scored: false

- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
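For reference, the corresponding kubelet config file fragment (a sketch; surrounding
configuration keys are omitted) would be:

authentication:
  anonymous:
    enabled: false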
scored: true

- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true

- id: 3.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true

- id: 3.2.4
text: "Ensure that the --read-only-port argument is set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set readOnlyPort to 0.
If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false

- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true

- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true

- id: 3.2.7
text: "Ensure that the --hostname-override argument is not set (Manual)"
audit: "/bin/ps -fC $kubeletbin"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false

- id: 3.2.8
text: "Ensure that the --event-qps argument is set to 5 or higher, or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: gte
value: 5
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--event-qps=5
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true

- id: 3.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set tlsCertFile to the location
of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false

- id: 3.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to add the line rotateCertificates: true or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false

- id: 3.2.11
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: eq
value: true
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
239
cfg/gke-1.4.0/policies.yaml
Normal file
@@ -0,0 +1,239 @@
---
controls:
version: "gke-1.4.0"
id: 4
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false

- id: 4.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: false

- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
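For example, an illustrative Role that names specific resources and verbs instead of
wildcards (the name and namespace are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: example-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]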
scored: false

- id: 4.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false

- id: 4.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
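One way to apply this (the namespace is a placeholder) is to patch each default
service account:

kubectl patch serviceaccount default \
  --namespace example-namespace \
  -p '{"automountServiceAccountToken": false}'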
scored: true

- id: 4.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false

- id: 4.2
text: "Pod Security Policies"
checks:
- id: 4.2.1
text: "Minimize the admission of privileged containers (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that
the .spec.privileged field is omitted or set to false.
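A minimal illustrative PodSecurityPolicy (the name is a placeholder, and the remaining
required fields are set permissively purely for brevity):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny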
scored: false

- id: 4.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostPID field is omitted or set to false.
scored: false

- id: 4.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostIPC field is omitted or set to false.
scored: false

- id: 4.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostNetwork field is omitted or set to false.
scored: false

- id: 4.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.allowPrivilegeEscalation field is omitted or set to false.
scored: false

- id: 4.2.6
text: "Minimize the admission of root containers (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
UIDs not including 0.
scored: false

- id: 4.2.7
text: "Minimize the admission of containers with the NET_RAW capability (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.requiredDropCapabilities is set to include either NET_RAW or ALL.
scored: false

- id: 4.2.8
text: "Minimize the admission of containers with added capabilities (Automated)"
type: "manual"
remediation: |
Ensure that allowedCapabilities is not present in PSPs for the cluster unless
it is set to an empty array.
scored: false

- id: 4.2.9
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false

- id: 4.3
text: "Network Policies and CNI"
checks:
- id: 4.3.1
text: "Ensure that the CNI in use supports Network Policies (Manual)"
type: "manual"
remediation: |
To use a CNI plugin with Network Policy, enable Network Policy in GKE, and the CNI plugin
will be updated. See Recommendation 6.6.7.
scored: false

- id: 4.3.2
text: "Ensure that all Namespaces have Network Policies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false

- id: 4.4
text: "Secrets Management"
checks:
- id: 4.4.1
text: "Prefer using secrets as files over secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: false

- id: 4.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false

- id: 4.5
text: "Extensible Admission Control"
checks:
- id: 4.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
See also Recommendation 6.10.5 for GKE specifically.
scored: false

- id: 4.6
text: "General Policies"
checks:
- id: 4.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false

- id: 4.6.2
text: "Ensure that the seccomp profile is set to docker/default in your pod definitions (Manual)"
type: "manual"
remediation: |
Seccomp is an alpha feature currently. By default, all alpha features are disabled. So, you
would need to enable alpha features in the apiserver by passing the
"--feature-gates=AllAlpha=true" argument.
Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_API_ARGS
parameter to "--feature-gates=AllAlpha=true":
KUBE_API_ARGS="--feature-gates=AllAlpha=true"
Based on your system, restart the kube-apiserver service. For example:
systemctl restart kube-apiserver.service
Use annotations to enable the docker/default seccomp profile in your pod definitions. An
example is as below:
apiVersion: v1
kind: Pod
metadata:
  name: trustworthy-pod
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: docker/default
spec:
  containers:
    - name: trustworthy-container
      image: sotrustworthy:latest
scored: false

- id: 4.6.3
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false

- id: 4.6.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false