Mirror of https://github.com/aquasecurity/kube-bench.git (synced 2025-08-03 12:28:09 +00:00)

Commit dbe7ed14cc ("fix the mentioned issues"), parent ec556cd19f
@@ -9,7 +9,7 @@ groups:
     text: "Image Registry and Image Scanning"
     checks:
       - id: 5.1.1
-        text: "Ensure Image Vulnerability Scanning using Microsoft Defender for Cloud (MDC) image scanning or a third party provider (Automated)"
+        text: "Ensure Image Vulnerability Scanning using Microsoft Defender for Cloud (MDC) image scanning or a third party provider (Manual)"
         type: "manual"
         remediation: |
           Enable MDC for Container Registries by running the following Azure CLI command:
@@ -99,7 +99,7 @@ groups:
     text: "Cluster Networking"
     checks:
       - id: 5.4.1
-        text: "Restrict Access to the Control Plane Endpoint (Automated)"
+        text: "Restrict Access to the Control Plane Endpoint (Manual)"
         type: "manual"
         remediation: |
           By enabling private endpoint access to the Kubernetes API server, all communication between your nodes and the API server stays within your VPC. You can also limit the IP addresses that can access your API server from the internet, or completely disable internet access to the API server.
@@ -110,7 +110,7 @@ groups:
         scored: false

       - id: 5.4.2
-        text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Automated)"
+        text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
         type: "manual"
         remediation: |
           To use a private endpoint, create a new private endpoint in your virtual network, then create a link between your virtual network and a new private DNS zone.
@@ -120,7 +120,7 @@ groups:
         scored: false

       - id: 5.4.3
-        text: "Ensure clusters are created with Private Nodes (Automated)"
+        text: "Ensure clusters are created with Private Nodes (Manual)"
         type: "manual"
         remediation: |
           To create a private cluster, use the following command:
@@ -138,7 +138,7 @@ groups:
         scored: false

       - id: 5.4.4
-        text: "Ensure Network Policy is Enabled and set as appropriate (Automated)"
+        text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
         type: "manual"
         remediation: |
           Utilize Calico or another network policy engine to segment and isolate your traffic.
@@ -21,7 +21,7 @@ groups:
           Run the below command (based on the file location on your system) on the each worker node.
           For example,
           chmod 644 $kubeletkubeconfig
-        scored: false
+        scored: true

       - id: 3.1.2
         text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
@@ -33,7 +33,7 @@ groups:
           Run the below command (based on the file location on your system) on the each worker node.
           For example,
           chown root:root $kubeletkubeconfig
-        scored: false
+        scored: true

       - id: 3.1.3
         text: "Ensure that the azure.json file has permissions set to 644 or more restrictive (Automated)"
@@ -47,7 +47,7 @@ groups:
         remediation: |
           Run the following command (using the config file location identified in the Audit step)
           chmod 644 $kubeletconf
-        scored: false
+        scored: true

       - id: 3.1.4
         text: "Ensure that the azure.json file ownership is set to root:root (Automated)"
@@ -58,7 +58,7 @@ groups:
         remediation: |
           Run the following command (using the config file location identified in the Audit step)
           chown root:root $kubeletconf
-        scored: false
+        scored: true


   - id: 3.2
@@ -85,7 +85,7 @@ groups:
           Based on your system, restart the kubelet service. For example:
           systemctl daemon-reload
           systemctl restart kubelet.service
-        scored: false
+        scored: true

       - id: 3.2.2
         text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
@@ -107,7 +107,7 @@ groups:
           Based on your system, restart the kubelet service. For example:
           systemctl daemon-reload
           systemctl restart kubelet.service
-        scored: false
+        scored: true

       - id: 3.2.3
         text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
@@ -128,7 +128,7 @@ groups:
           Based on your system, restart the kubelet service. For example:
           systemctl daemon-reload
           systemctl restart kubelet.service
-        scored: false
+        scored: true

       - id: 3.2.4
         text: "Ensure that the --read-only-port is secured (Automated)"
@@ -151,7 +151,7 @@ groups:
           Based on your system, restart the kubelet service. For example:
           systemctl daemon-reload
           systemctl restart kubelet.service
-        scored: false
+        scored: true

       - id: 3.2.5
         text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
@@ -179,7 +179,7 @@ groups:
           Based on your system, restart the kubelet service. For example:
           systemctl daemon-reload
           systemctl restart kubelet.service
-        scored: false
+        scored: true

       - id: 3.2.6
         text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
@@ -206,7 +206,7 @@ groups:
           Based on your system, restart the kubelet service. For example:
           systemctl daemon-reload
           systemctl restart kubelet.service
-        scored: false
+        scored: true


       - id: 3.2.7
@@ -230,7 +230,7 @@ groups:
           systemctl daemon-reload
           systemctl restart kubelet.service
           systemctl status kubelet -l
-        scored: false
+        scored: true


       - id: 3.2.8
@@ -259,7 +259,7 @@ groups:
           systemctl daemon-reload
           systemctl restart kubelet.service
           systemctl status kubelet -l
-        scored: false
+        scored: true

       - id: 3.2.9
         text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
@@ -280,4 +280,4 @@ groups:
           Based on your system, restart the kubelet service. For example:
           systemctl daemon-reload
           systemctl restart kubelet.service
-        scored: false
+        scored: true
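The 3.1.x remediations above boil down to a chmod/chown followed by re-running the audit. A minimal sketch of that verify-after-fix loop against a scratch file rather than the real kubelet kubeconfig; the `stat -c` flags assume GNU coreutils (Linux worker nodes):

```shell
# Sketch of the 3.1.x permission remediation/audit loop on a temp file
# (paths and tooling are assumptions: GNU coreutils `stat -c` on Linux).
f=$(mktemp)

chmod 644 "$f"              # remediation step, as in checks 3.1.1 / 3.1.3
perms=$(stat -c '%a' "$f")  # audit step: read back the octal mode
echo "mode=$perms"          # prints "mode=644"

chown "$(id -un)" "$f"      # ownership fix analogous to 3.1.2 / 3.1.4
owner=$(stat -c '%U' "$f")  # audit step: read back the owner
echo "owner=$owner"

rm -f "$f"
```

On a real node the file would be `$kubeletkubeconfig` or `$kubeletconf` and the chown target `root:root`, which requires root.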
@@ -10,138 +10,173 @@ groups:
     checks:
       - id: 4.1.1
         text: "Ensure that the cluster-admin role is only used where required (Automated)"
-        audit: "kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name"
-        audit_config: "kubectl get clusterrolebindings"
+        audit: |
+          kubectl get clusterrolebindings -o json | jq -r '
+            .items[]
+            | select(.roleRef.name == "cluster-admin")
+            | .subjects[]?
+            | select(.kind != "Group" or (.name != "system:masters" and .name != "system:nodes"))
+            | "FOUND_CLUSTER_ADMIN_BINDING"
+          ' || echo "NO_CLUSTER_ADMIN_BINDINGS"
         tests:
           test_items:
-            - flag: cluster-admin
-              path: '{.roleRef.name}'
+            - flag: "NO_CLUSTER_ADMIN_BINDINGS"
               set: true
               compare:
                 op: eq
-                value: "cluster-admin"
+                value: "NO_CLUSTER_ADMIN_BINDINGS"
         remediation: |
-          Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-          if they need this role or if they could use a role with fewer privileges.
-          Where possible, first bind users to a lower privileged role and then remove the
-          clusterrolebinding to the cluster-admin role :
-          kubectl delete clusterrolebinding [name]
-        scored: false
+          Identify all clusterrolebindings to the cluster-admin role using:
+
+          kubectl get clusterrolebindings --no-headers | grep cluster-admin
+
+          Review if each of them actually needs this role. If not, remove the binding:
+
+          kubectl delete clusterrolebinding <binding-name>
+
+          Where possible, assign a less privileged ClusterRole.
+        scored: true

       - id: 4.1.2
         text: "Minimize access to secrets (Automated)"
-        audit: "kubectl get roles,rolebindings --all-namespaces -o=custom-columns=NAME:.metadata.name,ROLE:.rules[*].resources,SUBJECT:.subjects[*].name"
-        audit_config: "kubectl get roles --all-namespaces -o json"
+        audit: |
+          count=$(kubectl get roles --all-namespaces -o json | jq '
+            .items[]
+            | select(.rules[]?
+              | (.resources[]? == "secrets")
+                and ((.verbs[]? == "get") or (.verbs[]? == "list") or (.verbs[]? == "watch"))
+            )' | wc -l)
+
+          if [ "$count" -gt 0 ]; then
+            echo "SECRETS_ACCESS_FOUND"
+          fi
         tests:
           test_items:
-            - flag: secrets
-              path: '{.rules[*].resources}'
-              set: true
-              compare:
-                op: eq
-                value: "secrets"
-            - flag: get
-              path: '{.rules[*].verbs}'
-              set: true
-              compare:
-                op: contains
-                value: "get"
-            - flag: list
-              path: '{.rules[*].verbs}'
-              set: true
-              compare:
-                op: contains
-                value: "list"
-            - flag: watch
-              path: '{.rules[*].verbs}'
-              set: true
-              compare:
-                op: contains
-                value: "watch"
+            - flag: "SECRETS_ACCESS_FOUND"
+              set: false
         remediation: |
-          Where possible, remove get, list and watch access to secret objects in the cluster.
-        scored: false
+          Identify all roles that grant access to secrets via get/list/watch verbs.
+          Use `kubectl edit role -n <namespace> <name>` to remove these permissions.
+          Alternatively, create a new least-privileged role that excludes secret access.
+        scored: true

       - id: 4.1.3
         text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
-        audit: "kubectl get roles --all-namespaces -o yaml | grep '*'"
-        audit_config: "kubectl get clusterroles -o yaml | grep '*'"
+        audit: |
+          wildcards=$(kubectl get roles --all-namespaces -o json | jq '
+            .items[] | select(
+              .rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
+            )' | wc -l)
+
+          wildcards_clusterroles=$(kubectl get clusterroles -o json | jq '
+            .items[] | select(
+              .rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
+            )' | wc -l)
+
+          total=$((wildcards + wildcards_clusterroles))
+
+          if [ "$total" -gt 0 ]; then
+            echo "wildcards_present"
+          fi
         tests:
           test_items:
-            - flag: wildcard
-              path: '{.rules[*].verbs}'
-              compare:
-                op: notcontains
-                value: "*"
+            - flag: wildcards_present
+              set: false
         remediation: |
-          Where possible, replace any use of wildcards in clusterroles and roles with specific objects or actions.
-          Review the roles and clusterroles across namespaces and ensure that wildcards are not used for sensitive actions.
-          Update roles by specifying individual actions or resources instead of using "*".
-        scored: false
+          Identify roles and clusterroles using wildcards (*) in 'verbs', 'resources', or 'apiGroups'.
+          Replace wildcards with specific values to enforce least privilege access.
+          Use `kubectl edit role -n <namespace> <name>` or `kubectl edit clusterrole <name>` to update.
+        scored: true


       - id: 4.1.4
         text: "Minimize access to create pods (Automated)"
-        audit: "kubectl get roles,rolebindings --all-namespaces -o=custom-columns=NAME:.metadata.name,ROLE:.rules[*].resources,SUBJECT:.subjects[*].name"
-        audit_config: "kubectl get roles --all-namespaces"
+        audit: |
+          echo "🔹 Roles and ClusterRoles with 'create' access on 'pods':"
+          access=$(kubectl get roles,clusterroles -A -o json | jq '
+            [.items[] |
+              select(
+                .rules[]? |
+                (.resources[]? == "pods" and .verbs[]? == "create")
+              )
+            ] | length')
+
+          if [ "$access" -gt 0 ]; then
+            echo "pods_create_access"
+          fi
         tests:
           test_items:
-            - flag: pods
-              path: '{.rules[*].resources}'
-              set: true
-              compare:
-                op: eq
-                value: "pods"
-            - flag: create
-              path: '{.rules[*].verbs}'
-              set: true
-              compare:
-                op: contains
-                value: "create"
+            - flag: pods_create_access
+              set: false
         remediation: |
-          Where possible, remove create access to pod objects in the cluster.
-        scored: false
+          Review all roles and clusterroles that have "create" permission on "pods".
+
+          🔒 Where possible, remove or restrict this permission to only required service accounts.
+
+          🛠 Use:
+          - `kubectl edit role -n <namespace> <role>`
+          - `kubectl edit clusterrole <name>`
+
+          ✅ Apply least privilege principle across the cluster.
+        scored: true


       - id: 4.1.5
         text: "Ensure that default service accounts are not actively used (Automated)"
-        audit: "kubectl get serviceaccounts --all-namespaces -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,TOKEN:.automountServiceAccountToken"
-        audit_config: "kubectl get serviceaccounts --all-namespaces"
+        audit: |
+          echo "🔹 Default Service Accounts with automountServiceAccountToken enabled:"
+          default_sa_count=$(kubectl get serviceaccounts --all-namespaces -o json | jq '
+            [.items[] | select(.metadata.name == "default" and (.automountServiceAccountToken != false))] | length')
+          if [ "$default_sa_count" -gt 0 ]; then
+            echo "default_sa_not_auto_mounted"
+          fi
+
+          echo "\n🔹 Pods using default ServiceAccount:"
+          pods_using_default_sa=$(kubectl get pods --all-namespaces -o json | jq '
+            [.items[] | select(.spec.serviceAccountName == "default")] | length')
+          if [ "$pods_using_default_sa" -gt 0 ]; then
+            echo "default_sa_used_in_pods"
+          fi
         tests:
           test_items:
-            - flag: default
-              path: '{.metadata.name}'
-              set: true
-              compare:
-                op: eq
-                value: "default"
-            - flag: automountServiceAccountToken
-              path: '{.automountServiceAccountToken}'
-              set: true
-              compare:
-                op: eq
-                value: "false"
+            - flag: default_sa_not_auto_mounted
+              set: false
+            - flag: default_sa_used_in_pods
+              set: false
         remediation: |
-          Create explicit service accounts wherever a Kubernetes workload requires specific access
-          to the Kubernetes API server.
-          Modify the configuration of each default service account to include this value
-          automountServiceAccountToken: false
-        scored: false
+          1. Avoid using default service accounts for workloads.
+          2. Set `automountServiceAccountToken: false` on all default SAs:
+             kubectl patch serviceaccount default -n <namespace> -p '{"automountServiceAccountToken": false}'
+          3. Use custom service accounts with only the necessary permissions.
+        scored: true


       - id: 4.1.6
         text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
-        audit: "kubectl get pods --all-namespaces -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,SERVICE_ACCOUNT:.spec.serviceAccountName,MOUNT_TOKEN:.spec.automountServiceAccountToken"
-        audit_config: "kubectl get pods --all-namespaces"
+        audit: |
+          echo "🔹 Pods with automountServiceAccountToken enabled:"
+          pods_with_token_mount=$(kubectl get pods --all-namespaces -o json | jq '
+            [.items[] | select(.spec.automountServiceAccountToken != false)] | length')
+
+          if [ "$pods_with_token_mount" -gt 0 ]; then
+            echo "automountServiceAccountToken"
+          fi
         tests:
           test_items:
             - flag: automountServiceAccountToken
-              path: '{.spec.automountServiceAccountToken}'
-              set: true
-              compare:
-                op: eq
-                value: "false"
+              set: false
         remediation: |
-          Modify the definition of pods and service accounts which do not need to mount service
-          account tokens to disable it.
-        scored: false
+          Pods that do not need access to the Kubernetes API should not mount service account tokens.
+
+          ✅ To disable token mounting in a pod definition:
+          ```yaml
+          spec:
+            automountServiceAccountToken: false
+          ```
+
+          ✅ Or patch an existing pod's spec (recommended via workload template):
+          Patch not possible for running pods — update the deployment YAML or recreate pods with updated spec.
+        scored: true


   - id: 4.2
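The rewritten 4.1.x audits all follow one pattern: count offending objects, then print a marker flag only when the count is positive, so that a `set: false` test item fails exactly when the flag appears. A minimal sketch of that pattern using a local sample and `grep -c` in place of `kubectl ... | jq` (the sample rule list is invented):

```shell
# Flag-emission pattern used by the 4.1.x audits, on sample data.
rules='get pods
create pods
list secrets'

# Count "offending" entries, as the jq pipelines do against live cluster JSON.
count=$(printf '%s\n' "$rules" | grep -c '^create pods$')

# Emit the marker only on a positive count; kube-bench's `set: false`
# test item fails iff this string shows up in the audit output.
if [ "$count" -gt 0 ]; then
  echo "pods_create_access"   # prints "pods_create_access" for this sample
fi
```

The same shape (count, compare to zero, echo a flag) applies to the secrets, wildcard, and service-account audits above.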
@@ -275,10 +310,36 @@ groups:

       - id: 4.4.2
         text: "Ensure that all Namespaces have Network Policies defined (Automated)"
-        type: "manual"
+        audit: |
+          ns_without_np=$(comm -23 \
+            <(kubectl get ns -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | sort) \
+            <(kubectl get networkpolicy --all-namespaces -o jsonpath='{.items[*].metadata.namespace}' | tr ' ' '\n' | sort))
+          if [ -z "$ns_without_np" ]; then echo "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"; else echo "MISSING_NETWORKPOLICIES"; fi
+        tests:
+          test_items:
+            - flag: "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"
+              set: true
+              compare:
+                op: eq
+                value: "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"
         remediation: |
-          Follow the documentation and create NetworkPolicy objects as you need them.
-        scored: false
+          Define at least one NetworkPolicy in each namespace to control pod-level traffic. Example:
+
+          kubectl apply -n <namespace> -f - <<EOF
+          apiVersion: networking.k8s.io/v1
+          kind: NetworkPolicy
+          metadata:
+            name: default-deny-all
+          spec:
+            podSelector: {}
+            policyTypes:
+              - Ingress
+              - Egress
+          EOF
+
+          This denies all traffic unless explicitly allowed. Review and adjust policies per namespace as needed.
+        scored: true


   - id: 4.5
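The 4.4.2 audit leans on `comm -23`, which prints lines unique to its first sorted input, to list namespaces that have no NetworkPolicy. The same set difference in isolation, with sample name lists standing in for the two kubectl jsonpath queries (the namespace names are invented):

```shell
# Set difference as used by the 4.4.2 audit: namespaces minus
# namespaces-with-a-NetworkPolicy. Both inputs to comm must be sorted.
all_ns=$(mktemp); with_np=$(mktemp)
printf 'default\nkube-system\nprod\n' | sort > "$all_ns"
printf 'prod\n' | sort > "$with_np"

ns_without_np=$(comm -23 "$all_ns" "$with_np")   # "default" and "kube-system"

if [ -z "$ns_without_np" ]; then
  echo "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"
else
  echo "MISSING_NETWORKPOLICIES"   # prints this: two namespaces lack a policy
fi
rm -f "$all_ns" "$with_np"
```

The real audit feeds `comm` via process substitution (`<(...)`), which requires bash; the temp-file form above is the POSIX-portable equivalent.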
@@ -286,11 +347,21 @@ groups:
     checks:
       - id: 4.5.1
         text: "Prefer using secrets as files over secrets as environment variables (Automated)"
-        type: "manual"
+        audit: |
+          output=$(kubectl get all --all-namespaces -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}')
+          if [ -z "$output" ]; then echo "NO_ENV_SECRET_REFERENCES"; else echo "ENV_SECRET_REFERENCES_FOUND"; fi
+        tests:
+          test_items:
+            - flag: "NO_ENV_SECRET_REFERENCES"
+              set: true
+              compare:
+                op: eq
+                value: "NO_ENV_SECRET_REFERENCES"
         remediation: |
-          If possible, rewrite application code to read secrets from mounted secret files, rather than
-          from environment variables.
-        scored: false
+          Refactor application deployments to mount secrets as files instead of passing them as environment variables.
+          Avoid using `envFrom` or `env` with `secretKeyRef` in container specs.
+        scored: true


       - id: 4.5.2
         text: "Consider external secret storage (Manual)"
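The 4.5.1 audit flags any workload that injects a secret through an environment variable by matching `secretKeyRef` in the live object specs. A sketch of the same detection with `grep` on a sample manifest fragment (the manifest, secret name, and key are invented):

```shell
# Detecting env-var secret injection, as check 4.5.1 does, on sample YAML.
manifest='env:
  - name: DB_PASS
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: password'

# grep -q exits 0 on a match; mirror the audit's two marker strings.
if printf '%s\n' "$manifest" | grep -q 'secretKeyRef'; then
  echo "ENV_SECRET_REFERENCES_FOUND"   # prints this for the sample above
else
  echo "NO_ENV_SECRET_REFERENCES"
fi
```

The remediation is to replace such `env`/`envFrom` entries with a volume mount of the Secret, so the application reads the value from a file.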
@@ -323,8 +394,26 @@ groups:

       - id: 4.6.3
         text: "The default namespace should not be used (Automated)"
-        type: "manual"
+        audit: |
+          output=$(kubectl get all -n default --no-headers 2>/dev/null | grep -v '^service\s\+kubernetes\s' || true)
+          if [ -z "$output" ]; then echo "DEFAULT_NAMESPACE_UNUSED"; else echo "DEFAULT_NAMESPACE_IN_USE"; fi
+        tests:
+          test_items:
+            - flag: "DEFAULT_NAMESPACE_UNUSED"
+              set: true
+              compare:
+                op: eq
+                value: "DEFAULT_NAMESPACE_UNUSED"
         remediation: |
-          Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-          resources and that all new resources are created in a specific namespace.
-        scored: false
+          Avoid using the default namespace for user workloads.
+          - Create separate namespaces for your applications and infrastructure components.
+          - Move any user-defined resources out of the default namespace.
+
+          Example to create a namespace:
+          kubectl create namespace my-namespace
+
+          Example to move resources:
+          kubectl get deployment my-app -n default -o yaml | sed 's/namespace: default/namespace: my-namespace/' | kubectl apply -f -
+          kubectl delete deployment my-app -n default
+        scored: true
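The 4.6.3 remediation moves a resource by rewriting its `namespace:` field with sed before re-applying it. The sed step in isolation, on a sample manifest (the resource names are invented):

```shell
# The namespace rewrite from the 4.6.3 remediation, without the kubectl ends.
manifest='apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default'

printf '%s\n' "$manifest" | sed 's/namespace: default/namespace: my-namespace/'
# last line of output: "  namespace: my-namespace"
```

Note this substitution is textual: it would also rewrite any other line containing `namespace: default` (for example in annotations), so review the output before piping it into `kubectl apply -f -`.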