
fix the mentioned issues

This commit is contained in:
LaibaBareera 2025-06-16 14:34:31 +05:00
parent ec556cd19f
commit dbe7ed14cc
3 changed files with 213 additions and 124 deletions


@@ -9,7 +9,7 @@ groups:
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
-        text: "Ensure Image Vulnerability Scanning using Microsoft Defender for Cloud (MDC) image scanning or a third party provider (Automated)"
+        text: "Ensure Image Vulnerability Scanning using Microsoft Defender for Cloud (MDC) image scanning or a third party provider (Manual)"
type: "manual"
remediation: |
Enable MDC for Container Registries by running the following Azure CLI command:
@@ -99,7 +99,7 @@ groups:
text: "Cluster Networking"
checks:
- id: 5.4.1
-        text: "Restrict Access to the Control Plane Endpoint (Automated)"
+        text: "Restrict Access to the Control Plane Endpoint (Manual)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication between your nodes and the API server stays within your VPC. You can also limit the IP addresses that can access your API server from the internet, or completely disable internet access to the API server.
@@ -110,7 +110,7 @@ groups:
scored: false
- id: 5.4.2
-        text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Automated)"
+        text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: |
To use a private endpoint, create a new private endpoint in your virtual network, then create a link between your virtual network and a new private DNS zone.
@@ -120,7 +120,7 @@ groups:
scored: false
- id: 5.4.3
-        text: "Ensure clusters are created with Private Nodes (Automated)"
+        text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: |
To create a private cluster, use the following command:
@@ -138,7 +138,7 @@ groups:
scored: false
- id: 5.4.4
-        text: "Ensure Network Policy is Enabled and set as appropriate (Automated)"
+        text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
type: "manual"
remediation: |
Utilize Calico or another network policy engine to segment and isolate your traffic.
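As a minimal sketch of one way to satisfy this, assuming the Azure CLI and placeholder resource names, Calico can be enabled when the AKS cluster is created (network policy is typically chosen at creation time):

```sh
# Sketch only: resource group and cluster name are placeholders.
az aks create \
  --resource-group my-resource-group \
  --name my-aks-cluster \
  --network-plugin azure \
  --network-policy calico
```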


@@ -21,7 +21,7 @@ groups:
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
-      scored: false
+      scored: true
- id: 3.1.2
text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
@@ -33,7 +33,7 @@ groups:
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
-      scored: false
+      scored: true
- id: 3.1.3
text: "Ensure that the azure.json file has permissions set to 644 or more restrictive (Automated)"
@@ -47,7 +47,7 @@ groups:
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
-      scored: false
+      scored: true
- id: 3.1.4
text: "Ensure that the azure.json file ownership is set to root:root (Automated)"
@@ -58,7 +58,7 @@ groups:
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
-      scored: false
+      scored: true
- id: 3.2
@@ -85,7 +85,7 @@ groups:
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
-      scored: false
+      scored: true
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
@@ -107,7 +107,7 @@ groups:
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
-      scored: false
+      scored: true
- id: 3.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
@@ -128,7 +128,7 @@ groups:
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
-      scored: false
+      scored: true
- id: 3.2.4
text: "Ensure that the --read-only-port is secured (Automated)"
@@ -151,7 +151,7 @@ groups:
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
-      scored: false
+      scored: true
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
@@ -179,7 +179,7 @@ groups:
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
-      scored: false
+      scored: true
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated) "
@ -206,7 +206,7 @@ groups:
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
scored: true
- id: 3.2.7
@@ -230,7 +230,7 @@ groups:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
-      scored: false
+      scored: true
- id: 3.2.8
@@ -259,7 +259,7 @@ groups:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
-      scored: false
+      scored: true
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
@@ -280,4 +280,4 @@ groups:
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
-      scored: false
+      scored: true
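All of the 3.2.x checks above follow the same pattern: set the corresponding kubelet flag or config-file field, then reload and restart the kubelet. A sketch of the matching KubeletConfiguration fields follows — the file path varies by distribution, and the CA path here is an assumption:

```yaml
# Sketch of a kubelet config fragment matching the checks above; the config
# location (e.g. /etc/kubernetes/kubelet/kubelet-config.json) is an assumption.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # 3.2.3 --client-ca-file
authorization:
  mode: Webhook                                # 3.2.2 not AlwaysAllow
readOnlyPort: 0                                # 3.2.4 secure the read-only port
streamingConnectionIdleTimeout: 4h             # 3.2.5 must not be 0
makeIPTablesUtilChains: true                   # 3.2.6
featureGates:
  RotateKubeletServerCertificate: true         # 3.2.9
```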


@@ -10,138 +10,173 @@ groups:
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
-      audit: "kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name"
-      audit_config: "kubectl get clusterrolebindings"
+      audit: |
+        found=$(kubectl get clusterrolebindings -o json | jq -r '
+          .items[]
+          | select(.roleRef.name == "cluster-admin")
+          | .subjects[]?
+          | select(.kind != "Group" or (.name != "system:masters" and .name != "system:nodes"))
+          | "FOUND_CLUSTER_ADMIN_BINDING"')
+        # jq exits 0 even when nothing matches, so test the output instead of the exit code.
+        if [ -z "$found" ]; then echo "NO_CLUSTER_ADMIN_BINDINGS"; else echo "$found"; fi
tests:
test_items:
-        - flag: cluster-admin
-          path: '{.roleRef.name}'
+        - flag: "NO_CLUSTER_ADMIN_BINDINGS"
+          set: true
          compare:
            op: eq
-            value: "cluster-admin"
+            value: "NO_CLUSTER_ADMIN_BINDINGS"
remediation: |
-        Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-        if they need this role or if they could use a role with fewer privileges.
-        Where possible, first bind users to a lower privileged role and then remove the
-        clusterrolebinding to the cluster-admin role :
-        kubectl delete clusterrolebinding [name]
-      scored: false
+        Identify all clusterrolebindings to the cluster-admin role using:
+        kubectl get clusterrolebindings --no-headers | grep cluster-admin
+        Review if each of them actually needs this role. If not, remove the binding:
+        kubectl delete clusterrolebinding <binding-name>
+        Where possible, assign a less privileged ClusterRole.
+      scored: true
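Where a subject genuinely needs broad access to a single namespace, a namespaced binding to the built-in `admin` ClusterRole is usually enough; a sketch with placeholder names ("my-app", "jane@example.com"):

```yaml
# Sketch: replaces a cluster-admin clusterrolebinding with a namespaced binding.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-admin
  namespace: my-app
subjects:
  - kind: User
    name: jane@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin   # built-in role; the RoleBinding scopes it to the my-app namespace
  apiGroup: rbac.authorization.k8s.io
```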
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
audit: "kubectl get roles,rolebindings --all-namespaces -o=custom-columns=NAME:.metadata.name,ROLE:.rules[*].resources,SUBJECT:.subjects[*].name"
audit_config: "kubectl get roles --all-namespaces -o json"
audit: |
count=$(kubectl get roles --all-namespaces -o json | jq '
.items[]
| select(.rules[]?
| (.resources[]? == "secrets")
and ((.verbs[]? == "get") or (.verbs[]? == "list") or (.verbs[]? == "watch"))
)' | wc -l)
if [ "$count" -gt 0 ]; then
echo "SECRETS_ACCESS_FOUND"
fi
tests:
test_items:
-        - flag: secrets
-          path: '{.rules[*].resources}'
-          set: true
-          compare:
-            op: eq
-            value: "secrets"
-        - flag: get
-          path: '{.rules[*].verbs}'
-          set: true
-          compare:
-            op: contains
-            value: "get"
-        - flag: list
-          path: '{.rules[*].verbs}'
-          set: true
-          compare:
-            op: contains
-            value: "list"
-        - flag: watch
-          path: '{.rules[*].verbs}'
-          set: true
-          compare:
-            op: contains
-            value: "watch"
+        - flag: "SECRETS_ACCESS_FOUND"
+          set: false
remediation: |
-        Where possible, remove get, list and watch access to secret objects in the cluster.
-      scored: false
+        Identify all roles that grant access to secrets via get/list/watch verbs.
+        Use `kubectl edit role -n <namespace> <name>` to remove these permissions.
+        Alternatively, create a new least-privileged role that excludes secret access.
+      scored: true
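For instance, a read-only role that deliberately omits secrets (all names are placeholders) could look like:

```yaml
# Sketch of a least-privilege Role: read access to pods and configmaps,
# with "secrets" intentionally left out of resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: my-app
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps"]
    verbs: ["get", "list", "watch"]
```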
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: "kubectl get roles --all-namespaces -o yaml | grep '*'"
audit_config: "kubectl get clusterroles -o yaml | grep '*'"
audit: |
wildcards=$(kubectl get roles --all-namespaces -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
wildcards_clusterroles=$(kubectl get clusterroles -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
total=$((wildcards + wildcards_clusterroles))
if [ "$total" -gt 0 ]; then
echo "wildcards_present"
fi
tests:
test_items:
-        - flag: wildcard
-          path: '{.rules[*].verbs}'
-          compare:
-            op: notcontains
-            value: "*"
+        - flag: wildcards_present
+          set: false
remediation: |
-        Where possible, replace any use of wildcards in clusterroles and roles with specific objects or actions.
-        Review the roles and clusterroles across namespaces and ensure that wildcards are not used for sensitive actions.
-        Update roles by specifying individual actions or resources instead of using "*".
-      scored: false
+        Identify roles and clusterroles using wildcards (*) in 'verbs', 'resources', or 'apiGroups'.
+        Replace wildcards with specific values to enforce least privilege access.
+        Use `kubectl edit role -n <namespace> <name>` or `kubectl edit clusterrole <name>` to update.
+      scored: true
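As an illustration, a wildcard rule and a least-privilege replacement (resource names are placeholders):

```yaml
# Before: matches every verb on every resource in the "apps" group.
# rules:
#   - apiGroups: ["apps"]
#     resources: ["*"]
#     verbs: ["*"]
# After (sketch): only what the workload actually needs.
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update"]
```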
- id: 4.1.4
text: "Minimize access to create pods (Automated)"
audit: "kubectl get roles,rolebindings --all-namespaces -o=custom-columns=NAME:.metadata.name,ROLE:.rules[*].resources,SUBJECT:.subjects[*].name"
audit_config: "kubectl get roles --all-namespaces"
audit: |
echo "🔹 Roles and ClusterRoles with 'create' access on 'pods':"
access=$(kubectl get roles,clusterroles -A -o json | jq '
[.items[] |
select(
.rules[]? |
(.resources[]? == "pods" and .verbs[]? == "create")
)
] | length')
if [ "$access" -gt 0 ]; then
echo "pods_create_access"
fi
tests:
test_items:
-        - flag: pods
-          path: '{.rules[*].resources}'
-          set: true
-          compare:
-            op: eq
-            value: "pods"
-        - flag: create
-          path: '{.rules[*].verbs}'
-          set: true
-          compare:
-            op: contains
-            value: "create"
+        - flag: pods_create_access
+          set: false
remediation: |
-        Where possible, remove create access to pod objects in the cluster.
-      scored: false
+        Review all roles and clusterroles that have "create" permission on "pods".
+        Where possible, remove or restrict this permission to only the service accounts that require it.
+        Use:
+          kubectl edit role -n <namespace> <role>
+          kubectl edit clusterrole <name>
+        Apply the principle of least privilege across the cluster.
+      scored: true
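To verify the effect of a change, `kubectl auth can-i` can impersonate a service account; a sketch where the namespace and account name are placeholders:

```sh
# Prints "yes" or "no" depending on whether the impersonated service
# account may create pods in the my-app namespace.
kubectl auth can-i create pods \
  --namespace my-app \
  --as system:serviceaccount:my-app:ci-runner
```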
- id: 4.1.5
text: "Ensure that default service accounts are not actively used (Automated)"
audit: "kubectl get serviceaccounts --all-namespaces -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,TOKEN:.automountServiceAccountToken"
audit_config: "kubectl get serviceaccounts --all-namespaces"
audit: |
echo "🔹 Default Service Accounts with automountServiceAccountToken enabled:"
default_sa_count=$(kubectl get serviceaccounts --all-namespaces -o json | jq '
[.items[] | select(.metadata.name == "default" and (.automountServiceAccountToken != false))] | length')
if [ "$default_sa_count" -gt 0 ]; then
echo "default_sa_not_auto_mounted"
fi
echo "\n🔹 Pods using default ServiceAccount:"
pods_using_default_sa=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.serviceAccountName == "default")] | length')
if [ "$pods_using_default_sa" -gt 0 ]; then
echo "default_sa_used_in_pods"
fi
tests:
test_items:
-        - flag: default
-          path: '{.metadata.name}'
-          set: true
-          compare:
-            op: eq
-            value: "default"
-        - flag: automountServiceAccountToken
-          path: '{.automountServiceAccountToken}'
-          set: true
-          compare:
-            op: eq
-            value: "false"
+        - flag: default_sa_automount_enabled
+          set: false
+        - flag: default_sa_used_in_pods
+          set: false
remediation: |
-        Create explicit service accounts wherever a Kubernetes workload requires specific access
-        to the Kubernetes API server.
-        Modify the configuration of each default service account to include this value
-        automountServiceAccountToken: false
-      scored: false
+        1. Avoid using default service accounts for workloads.
+        2. Set `automountServiceAccountToken: false` on all default SAs:
+           kubectl patch serviceaccount default -n <namespace> -p '{"automountServiceAccountToken": false}'
+        3. Use custom service accounts with only the necessary permissions.
+      scored: true
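A dedicated service account per workload keeps the default account unused; a sketch with placeholder names:

```yaml
# Sketch: a workload-specific service account with token automount disabled;
# pods opt in via spec.serviceAccountName and re-enable the mount only if needed.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-api
  namespace: my-app
automountServiceAccountToken: false
```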
- id: 4.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: "kubectl get pods --all-namespaces -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,SERVICE_ACCOUNT:.spec.serviceAccountName,MOUNT_TOKEN:.spec.automountServiceAccountToken"
audit_config: "kubectl get pods --all-namespaces"
audit: |
echo "🔹 Pods with automountServiceAccountToken enabled:"
pods_with_token_mount=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.automountServiceAccountToken != false)] | length')
if [ "$pods_with_token_mount" -gt 0 ]; then
echo "automountServiceAccountToken"
fi
tests:
test_items:
- flag: automountServiceAccountToken
-          path: '{.spec.automountServiceAccountToken}'
-          set: true
-          compare:
-            op: eq
-            value: "false"
+          set: false
remediation: |
-        Modify the definition of pods and service accounts which do not need to mount service
-        account tokens to disable it.
-      scored: false
+        Pods that do not need access to the Kubernetes API should not mount service account tokens.
+        To disable token mounting, set the following in the pod spec:
+          automountServiceAccountToken: false
+        This cannot be patched on a running pod; update the workload template (for example the
+        Deployment YAML) and recreate the pods with the updated spec.
+      scored: true
- id: 4.2
@@ -275,10 +310,36 @@ groups:
- id: 4.4.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
type: "manual"
+      audit: |
+        ns_without_np=$(comm -23 \
+          <(kubectl get ns -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | sort) \
+          <(kubectl get networkpolicy --all-namespaces -o jsonpath='{.items[*].metadata.namespace}' | tr ' ' '\n' | sort -u))
+        if [ -z "$ns_without_np" ]; then echo "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"; else echo "MISSING_NETWORKPOLICIES"; fi
+      tests:
+        test_items:
+          - flag: "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"
+            set: true
+            compare:
+              op: eq
+              value: "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"
remediation: |
-        Follow the documentation and create NetworkPolicy objects as you need them.
-      scored: false
+        Define at least one NetworkPolicy in each namespace to control pod-level traffic. Example:
+        kubectl apply -n <namespace> -f - <<EOF
+        apiVersion: networking.k8s.io/v1
+        kind: NetworkPolicy
+        metadata:
+          name: default-deny-all
+        spec:
+          podSelector: {}
+          policyTypes:
+            - Ingress
+            - Egress
+        EOF
+        This denies all traffic unless explicitly allowed. Review and adjust policies per namespace as needed.
+      scored: true
- id: 4.5
@ -286,11 +347,21 @@ groups:
checks:
- id: 4.5.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
type: "manual"
+      audit: |
+        output=$(kubectl get all --all-namespaces -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}')
+        if [ -z "$output" ]; then echo "NO_ENV_SECRET_REFERENCES"; else echo "ENV_SECRET_REFERENCES_FOUND"; fi
+      tests:
+        test_items:
+          - flag: "NO_ENV_SECRET_REFERENCES"
+            set: true
+            compare:
+              op: eq
+              value: "NO_ENV_SECRET_REFERENCES"
remediation: |
-        If possible, rewrite application code to read secrets from mounted secret files, rather than
-        from environment variables.
-      scored: false
+        Refactor application deployments to mount secrets as files instead of passing them as environment variables.
+        Avoid using `envFrom` or `env` with `secretKeyRef` in container specs.
+      scored: true
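A pod fragment that mounts a secret as files rather than exposing it through `env` (all names are placeholders):

```yaml
# Sketch: the secret "db-credentials" appears as files under /etc/secrets
# instead of as environment variables.
spec:
  containers:
    - name: app
      image: example/app:1.0
      volumeMounts:
        - name: db-credentials
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: db-credentials
```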
- id: 4.5.2
text: "Consider external secret storage (Manual)"
@@ -323,8 +394,26 @@ groups:
- id: 4.6.3
text: "The default namespace should not be used (Automated)"
type: "manual"
+      audit: |
+        # "kubectl get all" prints names as TYPE/NAME, so filter "service/kubernetes".
+        output=$(kubectl get all -n default --no-headers 2>/dev/null | grep -v '^service/kubernetes ' || true)
+        if [ -z "$output" ]; then echo "DEFAULT_NAMESPACE_UNUSED"; else echo "DEFAULT_NAMESPACE_IN_USE"; fi
+      tests:
+        test_items:
+          - flag: "DEFAULT_NAMESPACE_UNUSED"
+            set: true
+            compare:
+              op: eq
+              value: "DEFAULT_NAMESPACE_UNUSED"
remediation: |
-        Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-        resources and that all new resources are created in a specific namespace.
-      scored: false
+        Avoid using the default namespace for user workloads.
+        - Create separate namespaces for your applications and infrastructure components.
+        - Move any user-defined resources out of the default namespace.
+        Example to create a namespace:
+        kubectl create namespace my-namespace
+        Example to move resources:
+        kubectl get deployment my-app -n default -o yaml | sed 's/namespace: default/namespace: my-namespace/' | kubectl apply -f -
+        kubectl delete deployment my-app -n default
+      scored: true