mirror of https://github.com/aquasecurity/kube-bench.git synced 2025-06-27 02:12:41 +00:00

Add AKS-1.7 version (#1874)

* Add AKS-1.7 version

* resolve linter error

* add aks-1.7 as the default AKS platform version

* add alternative method to identify AKS specific cluster

* fix alternative method

* combine logic of label and providerId in isAKS function

* fix checks of aks-1.7

* fix the mentioned issues

* fix test cases
LaibaBareera 2025-06-17 13:43:21 +05:00 committed by GitHub
parent 4e2651fda0
commit a3a8544a1d
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
13 changed files with 972 additions and 1 deletion

cfg/aks-1.7/config.yaml (new file)

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml
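For reference, these per-version files override the defaults in cfg/config.yaml; the stanza below is a minimal hypothetical sketch (not part of this commit) of the kind of override such a file can carry, assuming the standard kube-bench `node.kubelet` keys:

```yaml
## Hypothetical override only: point node checks at AKS-specific kubelet files.
node:
  kubelet:
    confs:
      - /etc/kubernetes/kubelet/kubelet-config.json
    kubeconfig:
      - /var/lib/kubelet/kubeconfig
```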

cfg/aks-1.7/controlplane.yaml (new file)

@@ -0,0 +1,31 @@
---
controls:
version: "aks-1.7"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Logging"
checks:
- id: 2.1.1
text: "Enable audit Logs"
type: "manual"
remediation: |
Azure audit logs are enabled and managed in the Azure portal. To enable log collection for
the Kubernetes master components in your AKS cluster, open the Azure portal in a web
browser and complete the following steps:
1. Select the resource group for your AKS cluster, such as myResourceGroup. Don't
select the resource group that contains your individual AKS cluster resources, such
as MC_myResourceGroup_myAKSCluster_eastus.
2. On the left-hand side, choose Diagnostic settings.
3. Select your AKS cluster, such as myAKSCluster, then choose to Add diagnostic setting.
4. Enter a name, such as myAKSClusterLogs, then select the option to Send to Log Analytics.
5. Select an existing workspace or create a new one. If you create a workspace, provide
a workspace name, a resource group, and a location.
6. In the list of available logs, select the logs you wish to enable. For this example,
enable the kube-audit and kube-audit-admin logs. Common logs include the
kube-apiserver, kube-controller-manager, and kube-scheduler. You can return and change
the collected logs once Log Analytics workspaces are enabled.
7. When ready, select Save to enable collection of the selected logs.
scored: false

cfg/aks-1.7/managedservices.yaml (new file)

@@ -0,0 +1,169 @@
---
controls:
version: "aks-1.7"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning using Microsoft Defender for Cloud (MDC) image scanning or a third party provider (Manual)"
type: "manual"
remediation: |
Enable MDC for Container Registries by running the following Azure CLI command:
az security pricing create --name ContainerRegistry --tier Standard
Alternatively, use the following command to enable image scanning for your container registry:
az resource update --ids /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.ContainerRegistry/registries/{registry-name} --set properties.enabled=true
Replace `subscription-id`, `resource-group-name`, and `registry-name` with the correct values for your environment.
Please note that enabling MDC for Container Registries will incur additional costs, so be sure to review the pricing information provided in the Azure documentation before enabling it.
scored: false
- id: 5.1.2
text: "Minimize user access to Azure Container Registry (ACR) (Manual)"
type: "manual"
remediation: |
Azure Container Registry
If you use Azure Container Registry (ACR) as your container image store, you need to grant
permissions to the service principal for your AKS cluster to read and pull images. Currently,
the recommended configuration is to use the az aks create or az aks update command to
integrate with a registry and assign the appropriate role for the service principal. For
detailed steps, see Authenticate with Azure Container Registry from Azure Kubernetes
Service.
To avoid needing an Owner or Azure account administrator role, you can configure a
service principal manually or use an existing service principal to authenticate ACR from
AKS. For more information, see ACR authentication with service principals or Authenticate
from Kubernetes with a pull secret.
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Azure Container Registry (ACR) (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.1.4
text: "Minimize Container Registries to only those approved (Manual)"
type: "manual"
remediation: |
If you are using **Azure Container Registry**, you can restrict access using firewall rules as described in the official documentation:
https://docs.microsoft.com/en-us/azure/container-registry/container-registry-firewall-access-rules
For other non-AKS repositories, you can use **admission controllers** or **Azure Policy** to enforce registry access restrictions.
Limiting or locking down egress traffic to specific container registries is also recommended. For more information, refer to:
https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic
scored: false
- id: 5.2
text: "Access and identity options for Azure Kubernetes Service (AKS)"
checks:
- id: 5.2.1
text: "Prefer using dedicated AKS Service Accounts (Manual)"
type: "manual"
remediation: |
Azure Active Directory integration
The security of AKS clusters can be enhanced with the integration of Azure Active Directory
(AD). Built on decades of enterprise identity management, Azure AD is a multi-tenant,
cloud-based directory, and identity management service that combines core directory
services, application access management, and identity protection. With Azure AD, you can
integrate on-premises identities into AKS clusters to provide a single source for account
management and security.
Azure Active Directory integration with AKS clusters
With Azure AD-integrated AKS clusters, you can grant users or groups access to Kubernetes
resources within a namespace or across the cluster. To obtain a kubectl configuration
context, a user can run the az aks get-credentials command. When a user then interacts
with the AKS cluster with kubectl, they're prompted to sign in with their Azure AD
credentials. This approach provides a single source for user account management and
password credentials. The user can only access the resources as defined by the cluster
administrator.
Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect
is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID
Connect, see the Open ID connect documentation. From inside of the Kubernetes cluster,
Webhook Token Authentication is used to verify authentication tokens. Webhook token
authentication is configured and managed as part of the AKS cluster.
scored: false
- id: 5.3
text: "Key Management Service (KMS)"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.4
text: "Cluster Networking"
checks:
- id: 5.4.1
text: "Restrict Access to the Control Plane Endpoint (Manual)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication between your nodes and the API server stays within your VPC. You can also limit the IP addresses that can access your API server from the internet, or completely disable internet access to the API server.
With this in mind, you can update your cluster accordingly using the AKS CLI to ensure that Private Endpoint Access is enabled.
If you choose to also enable Public Endpoint Access then you should also configure a list of allowable CIDR blocks, resulting in restricted access from the internet. If you specify no CIDR blocks, then the public API server endpoint is able to receive and process requests from all IP addresses by defaulting to ['0.0.0.0/0'].
Example:
az aks update --name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --api-server-access-profile enablePrivateCluster=true --api-server-access-profile authorizedIpRanges=192.168.1.0/24
scored: false
- id: 5.4.2
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: |
To use a private endpoint, create a new private endpoint in your virtual network, then create a link between your virtual network and a new private DNS zone.
You can also restrict access to the public endpoint by enabling only specific CIDR blocks to access it. For example:
az aks update --name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --api-server-access-profile enablePublicFqdn=false
This command disables the public API endpoint for your AKS cluster.
scored: false
- id: 5.4.3
text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: |
To create a private cluster, use the following command:
az aks create \
--resource-group <private-cluster-resource-group> \
--name <private-cluster-name> \
--load-balancer-sku standard \
--enable-private-cluster \
--network-plugin azure \
--vnet-subnet-id <subnet-id> \
--docker-bridge-address <docker-bridge-address> \
--dns-service-ip <dns-service-ip> \
--service-cidr <service-cidr>
Ensure that the --enable-private-cluster flag is set to enable private nodes in your cluster.
scored: false
- id: 5.4.4
text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
type: "manual"
remediation: |
Utilize Calico or another network policy engine to segment and isolate your traffic.
Enable network policies on your AKS cluster by following the Azure documentation or using the `az aks` CLI to enable the network policy add-on.
scored: false
- id: 5.4.5
text: "Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.5
text: "Authentication and Authorization"
checks:
- id: 5.5.1
text: "Manage Kubernetes RBAC users with Azure AD (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.5.2
text: "Use Azure RBAC for Kubernetes Authorization (Manual)"
type: "manual"
remediation: "No remediation"
scored: false

cfg/aks-1.7/master.yaml (new file)

@@ -0,0 +1,6 @@
---
controls:
version: "aks-1.7"
id: 1
text: "Control Plane Components"
type: "master"

cfg/aks-1.7/node.yaml (new file)

@@ -0,0 +1,283 @@
---
controls:
version: "aks-1.7"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: true
- id: 3.1.2
text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 3.1.3
text: "Ensure that the azure.json file has permissions set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e /etc/kubernetes/azure.json; then stat -c permissions=%a /etc/kubernetes/azure.json; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 /etc/kubernetes/azure.json
scored: true
- id: 3.1.4
text: "Ensure that the azure.json file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e /etc/kubernetes/azure.json; then stat -c %U:%G /etc/kubernetes/azure.json; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root /etc/kubernetes/azure.json
scored: true
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.4
text: "Ensure that the --read-only-port is secured (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set readOnlyPort to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: true
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated) "
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: true
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.7
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set the 'eventRecordQPS' value to an appropriate level (e.g., 5).
If using executable arguments, check the Kubelet service file `$kubeletsvc` on each worker node, and add the following parameter to the `KUBELET_ARGS` variable:
--event-qps=5
Ensure that there is no conflicting `--event-qps` setting in the service file that overrides the config file.
After making the changes, restart the Kubelet service:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.8
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: true
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If modifying the Kubelet config file, edit the `kubelet-config.json` file located at `/etc/kubernetes/kubelet/kubelet-config.json` and set the following parameter to `true`:
"rotateCertificates": true
Ensure that the Kubelet service file located at `/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf` does not define the `--rotate-certificates` argument as `false`, as this would override the config file.
If using executable arguments, add the following line to the `KUBELET_CERTIFICATE_ARGS` variable:
--rotate-certificates=true
After making the necessary changes, restart the Kubelet service:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: true
compare:
op: eq
value: true
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
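The 3.2.x checks above read kubelet command-line flags and, through `audit_config`, the kubelet config file. As an illustration only (values and the CA path are assumptions, not part of this commit), a KubeletConfiguration that would satisfy these checks could look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false                       # 3.2.1
  x509:
    clientCAFile: /etc/kubernetes/certs/ca.crt   # 3.2.3 (example path)
authorization:
  mode: Webhook                          # 3.2.2
readOnlyPort: 0                          # 3.2.4
streamingConnectionIdleTimeout: 5m       # 3.2.5 (any non-zero value)
makeIPTablesUtilChains: true             # 3.2.6
eventRecordQPS: 5                        # 3.2.7 (per the remediation text)
rotateCertificates: true                 # 3.2.8
featureGates:
  RotateKubeletServerCertificate: true   # 3.2.9
```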

cfg/aks-1.7/policies.yaml (new file)

@@ -0,0 +1,417 @@
---
controls:
version: "aks-1.7"
id: 4
text: "Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
audit: |
kubectl get clusterrolebindings -o json | jq -r '
.items[]
| select(.roleRef.name == "cluster-admin")
| .subjects[]?
| select(.kind != "Group" or (.name != "system:masters" and .name != "system:nodes"))
| "FOUND_CLUSTER_ADMIN_BINDING"
' || echo "NO_CLUSTER_ADMIN_BINDINGS"
tests:
test_items:
- flag: "NO_CLUSTER_ADMIN_BINDINGS"
set: true
compare:
op: eq
value: "NO_CLUSTER_ADMIN_BINDINGS"
remediation: |
Identify all clusterrolebindings to the cluster-admin role using:
kubectl get clusterrolebindings --no-headers | grep cluster-admin
Review if each of them actually needs this role. If not, remove the binding:
kubectl delete clusterrolebinding <binding-name>
Where possible, assign a less privileged ClusterRole.
scored: true
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
audit: |
count=$(kubectl get roles --all-namespaces -o json | jq '
.items[]
| select(.rules[]?
| (.resources[]? == "secrets")
and ((.verbs[]? == "get") or (.verbs[]? == "list") or (.verbs[]? == "watch"))
)' | wc -l)
if [ "$count" -gt 0 ]; then
echo "SECRETS_ACCESS_FOUND"
fi
tests:
test_items:
- flag: "SECRETS_ACCESS_FOUND"
set: false
remediation: |
Identify all roles that grant access to secrets via get/list/watch verbs.
Use `kubectl edit role -n <namespace> <name>` to remove these permissions.
Alternatively, create a new least-privileged role that excludes secret access.
scored: true
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: |
wildcards=$(kubectl get roles --all-namespaces -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
wildcards_clusterroles=$(kubectl get clusterroles -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
total=$((wildcards + wildcards_clusterroles))
if [ "$total" -gt 0 ]; then
echo "wildcards_present"
fi
tests:
test_items:
- flag: wildcards_present
set: false
remediation: |
Identify roles and clusterroles using wildcards (*) in 'verbs', 'resources', or 'apiGroups'.
Replace wildcards with specific values to enforce least privilege access.
Use `kubectl edit role -n <namespace> <name>` or `kubectl edit clusterrole <name>` to update.
scored: true
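As an illustration of the remediations for 4.1.2 and 4.1.3, a namespaced Role scoped to explicit resources and verbs (names are hypothetical) instead of wildcards or secret access:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader          # hypothetical
  namespace: my-namespace   # hypothetical
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps"]   # no "secrets", no "*"
    verbs: ["get", "list", "watch"]     # no "*"
```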
- id: 4.1.4
text: "Minimize access to create pods (Automated)"
audit: |
echo "🔹 Roles and ClusterRoles with 'create' access on 'pods':"
access=$(kubectl get roles,clusterroles -A -o json | jq '
[.items[] |
select(
.rules[]? |
(.resources[]? == "pods" and .verbs[]? == "create")
)
] | length')
if [ "$access" -gt 0 ]; then
echo "pods_create_access"
fi
tests:
test_items:
- flag: pods_create_access
set: false
remediation: |
Review all roles and clusterroles that have "create" permission on "pods".
🔒 Where possible, remove or restrict this permission to only required service accounts.
🛠 Use:
- `kubectl edit role -n <namespace> <role>`
- `kubectl edit clusterrole <name>`
✅ Apply least privilege principle across the cluster.
scored: true
- id: 4.1.5
text: "Ensure that default service accounts are not actively used (Automated)"
audit: |
echo "🔹 Default Service Accounts with automountServiceAccountToken enabled:"
default_sa_count=$(kubectl get serviceaccounts --all-namespaces -o json | jq '
[.items[] | select(.metadata.name == "default" and (.automountServiceAccountToken != false))] | length')
if [ "$default_sa_count" -gt 0 ]; then
echo "default_sa_not_auto_mounted"
fi
echo "\n🔹 Pods using default ServiceAccount:"
pods_using_default_sa=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.serviceAccountName == "default")] | length')
if [ "$pods_using_default_sa" -gt 0 ]; then
echo "default_sa_used_in_pods"
fi
tests:
test_items:
- flag: default_sa_not_auto_mounted
set: false
- flag: default_sa_used_in_pods
set: false
remediation: |
1. Avoid using default service accounts for workloads.
2. Set `automountServiceAccountToken: false` on all default SAs:
kubectl patch serviceaccount default -n <namespace> -p '{"automountServiceAccountToken": false}'
3. Use custom service accounts with only the necessary permissions.
scored: true
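A dedicated service account, as suggested in the 4.1.5 remediation, might look like the following sketch (names are hypothetical); `automountServiceAccountToken: false` keeps its token out of pods unless a workload opts in:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa              # hypothetical
  namespace: my-namespace   # hypothetical
automountServiceAccountToken: false
```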
- id: 4.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: |
echo "🔹 Pods with automountServiceAccountToken enabled:"
pods_with_token_mount=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.automountServiceAccountToken != false)] | length')
if [ "$pods_with_token_mount" -gt 0 ]; then
echo "automountServiceAccountToken"
fi
tests:
test_items:
- flag: automountServiceAccountToken
set: false
remediation: |
Pods that do not need access to the Kubernetes API should not mount service account tokens.
✅ To disable token mounting in a pod definition:
```yaml
spec:
automountServiceAccountToken: false
```
✅ Or patch an existing pod's spec (recommended via workload template):
Patch not possible for running pods — update the deployment YAML or recreate pods with updated spec.
scored: true
- id: 4.2
text: "Pod Security Policies"
checks:
- id: 4.2.1
text: "Minimize the admission of privileged containers (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?.spec.containers[]?; .securityContext?.privileged == true) then "PRIVILEGED_FOUND" else "NO_PRIVILEGED" end'
tests:
test_items:
- flag: "NO_PRIVILEGED"
set: true
compare:
op: eq
value: "NO_PRIVILEGED"
remediation: |
Add a Pod Security Admission (PSA) policy to each namespace in the cluster to restrict the admission of privileged containers.
To enforce a restricted policy for a specific namespace, use the following command:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also enforce PSA for all namespaces:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
Additionally, review the namespaces that should be excluded (e.g., `kube-system`, `gatekeeper-system`, `azure-arc`, `azure-extensions-usage-system`) and adjust your filtering if necessary.
To enable Pod Security Policies, refer to the detailed documentation for Kubernetes and Azure integration at:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
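The Pod Security Admission labels used throughout the 4.2.x remediations can also be set declaratively on the Namespace object; an illustrative manifest (namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # hypothetical
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: baseline
```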
- id: 4.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?; .spec.hostPID == true) then "HOSTPID_FOUND" else "NO_HOSTPID" end'
tests:
test_items:
- flag: "NO_HOSTPID"
set: true
compare:
op: eq
value: "NO_HOSTPID"
remediation: |
Add a policy to each namespace in the cluster that restricts the admission of containers with hostPID. For namespaces that need it, ensure RBAC controls limit access to a specific service account.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
- id: 4.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r 'if any(.items[]?; .spec.hostIPC == true) then "HOSTIPC_FOUND" else "NO_HOSTIPC" end'
tests:
test_items:
- flag: "NO_HOSTIPC"
set: true
compare:
op: eq
value: "NO_HOSTIPC"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of hostIPC containers.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
- id: 4.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r 'if any(.items[]?; .spec.hostNetwork == true) then "HOSTNETWORK_FOUND" else "NO_HOSTNETWORK" end'
tests:
test_items:
- flag: "NO_HOSTNETWORK"
set: true
compare:
op: eq
value: "NO_HOSTNETWORK"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of hostNetwork containers.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
- id: 4.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?.spec.containers[]?; .securityContext?.allowPrivilegeEscalation == true) then "ALLOWPRIVILEGEESCALATION_FOUND" else "NO_ALLOWPRIVILEGEESCALATION" end'
tests:
test_items:
- flag: "NO_ALLOWPRIVILEGEESCALATION"
set: true
compare:
op: eq
value: "NO_ALLOWPRIVILEGEESCALATION"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers with .spec.allowPrivilegeEscalation set to true.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
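A pod that passes the 4.2.1-4.2.5 audits above avoids privileged mode, host namespaces, and privilege escalation; a minimal illustrative fragment (image and names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  hostPID: false       # 4.2.2
  hostIPC: false       # 4.2.3
  hostNetwork: false   # 4.2.4
  containers:
    - name: app
      image: registry.example.com/app:1.0
      securityContext:
        privileged: false                # 4.2.1
        allowPrivilegeEscalation: false  # 4.2.5
```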
- id: 4.3
text: "Azure Policy / OPA"
checks: []
- id: 4.4
text: "CNI Plugin"
checks:
- id: 4.4.1
text: "Ensure latest CNI version is used (Manual)"
type: "manual"
remediation: |
Review the documentation of the Azure CNI plugin, and ensure the latest CNI version is used.
scored: false
- id: 4.4.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
audit: |
ns_without_np=$(comm -23 \
<(kubectl get ns -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | sort) \
<(kubectl get networkpolicy --all-namespaces -o jsonpath='{.items[*].metadata.namespace}' | tr ' ' '\n' | sort))
if [ -z "$ns_without_np" ]; then echo "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"; else echo "MISSING_NETWORKPOLICIES"; fi
tests:
test_items:
- flag: "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"
set: true
compare:
op: eq
value: "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"
remediation: |
Define at least one NetworkPolicy in each namespace to control pod-level traffic. Example:
kubectl apply -n <namespace> -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
EOF
This denies all traffic unless explicitly allowed. Review and adjust policies per namespace as needed.
scored: true
- id: 4.5
text: "Secrets Management"
checks:
- id: 4.5.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
audit: |
output=$(kubectl get all --all-namespaces -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}')
if [ -z "$output" ]; then echo "NO_ENV_SECRET_REFERENCES"; else echo "ENV_SECRET_REFERENCES_FOUND"; fi
tests:
test_items:
- flag: "NO_ENV_SECRET_REFERENCES"
set: true
compare:
op: eq
value: "NO_ENV_SECRET_REFERENCES"
remediation: |
Refactor application deployments to mount secrets as files instead of passing them as environment variables.
Avoid using `envFrom` or `env` with `secretKeyRef` in container specs.
scored: true
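For 4.5.1, the pattern is to mount the Secret as a volume instead of referencing it via `env`/`secretKeyRef`; a sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-as-file-example
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical
      volumeMounts:
        - name: app-secret
          mountPath: /etc/app/secret
          readOnly: true
  volumes:
    - name: app-secret
      secret:
        secretName: app-secret   # hypothetical Secret name
```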
- id: 4.5.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.6
text: "General Policies"
checks:
- id: 4.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.6.2
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 4.6.3
text: "The default namespace should not be used (Automated)"
audit: |
output=$(kubectl get all -n default --no-headers 2>/dev/null | grep -v '^service\s\+kubernetes\s' || true)
if [ -z "$output" ]; then echo "DEFAULT_NAMESPACE_UNUSED"; else echo "DEFAULT_NAMESPACE_IN_USE"; fi
tests:
test_items:
- flag: "DEFAULT_NAMESPACE_UNUSED"
set: true
compare:
op: eq
value: "DEFAULT_NAMESPACE_UNUSED"
remediation: |
Avoid using the default namespace for user workloads.
- Create separate namespaces for your applications and infrastructure components.
- Move any user-defined resources out of the default namespace.
Example to create a namespace:
kubectl create namespace my-namespace
Example to move resources:
kubectl get deployment my-app -n default -o yaml | sed 's/namespace: default/namespace: my-namespace/' | kubectl apply -f -
kubectl delete deployment my-app -n default
scored: true

cfg/config.yaml

@@ -298,6 +298,7 @@ version_mapping:
"ocp-3.11": "rh-0.7"
"ocp-4.0": "rh-1.0"
"aks-1.0": "aks-1.0"
"aks-1.7": "aks-1.7"
"ack-1.0": "ack-1.0"
"cis-1.6-k3s": "cis-1.6-k3s"
"cis-1.24-microk8s": "cis-1.24-microk8s"
@@ -431,6 +432,12 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"aks-1.7":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"ack-1.0":
- "master"
- "node"

@@ -444,6 +444,12 @@ func TestValidTargets(t *testing.T) {
targets: []string{"node", "policies", "controlplane", "managedservices"},
expected: true,
},
{
name: "aks-1.7 valid",
benchmark: "aks-1.7",
targets: []string{"node", "policies", "controlplane", "managedservices"},
expected: true,
},
{
name: "eks-1.0.1 valid",
benchmark: "eks-1.0.1",

@@ -300,6 +300,7 @@ func getKubeVersion() (*KubeVersion, error) {
glog.V(3).Infof("Error fetching cluster config: %s", err)
}
isRKE := false
isAKS := false
if err == nil && kubeConfig != nil {
k8sClient, err := kubernetes.NewForConfig(kubeConfig)
if err != nil {
@@ -311,14 +312,22 @@
if err != nil {
glog.V(3).Infof("Error detecting RKE cluster: %s", err)
}
isAKS, err = IsAKS(context.Background(), k8sClient)
if err != nil {
glog.V(3).Infof("Error detecting AKS cluster: %s", err)
}
}
}
if k8sVer, err := getKubeVersionFromRESTAPI(); err == nil {
glog.V(2).Info(fmt.Sprintf("Kubernetes REST API Reported version: %s", k8sVer))
if isRKE {
k8sVer.GitVersion = k8sVer.GitVersion + "-rancher1"
}
if isAKS {
k8sVer.GitVersion = k8sVer.GitVersion + "-aks1"
}
return k8sVer, nil
}
@@ -485,11 +494,36 @@ func getPlatformInfoFromVersion(s string) Platform {
}
}
func IsAKS(ctx context.Context, k8sClient kubernetes.Interface) (bool, error) {
nodes, err := k8sClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{Limit: 1})
if err != nil {
return false, err
}
if len(nodes.Items) == 0 {
return false, nil
}
node := nodes.Items[0]
labels := node.Labels
if _, exists := labels["kubernetes.azure.com/cluster"]; exists {
return true, nil
}
if strings.HasPrefix(node.Spec.ProviderID, "azure://") {
return true, nil
}
return false, nil
}
func getPlatformBenchmarkVersion(platform Platform) string {
glog.V(3).Infof("getPlatformBenchmarkVersion platform: %s", platform)
switch platform.Name {
case "eks":
return "eks-1.5.0"
case "aks":
return "aks-1.7"
case "gke":
switch platform.Version {
case "1.15", "1.16", "1.17", "1.18", "1.19":

@@ -627,6 +627,11 @@ func Test_getPlatformNameFromKubectlOutput(t *testing.T) {
args: args{s: "v1.27.6+rke2r1"},
want: Platform{Name: "rke2r", Version: "1.27"},
},
{
name: "aks",
args: args{s: "v1.27.6+aks1"},
want: Platform{Name: "aks", Version: "1.27"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -729,6 +734,13 @@ func Test_getPlatformBenchmarkVersion(t *testing.T) {
},
want: "rke2-cis-1.7",
},
{
name: "aks",
args: args{
platform: Platform{Name: "aks", Version: "1.27"},
},
want: "aks-1.7",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {

@@ -33,6 +33,7 @@ The following table shows the valid targets based on the CIS Benchmark version.
| eks-1.5.0 | controlplane, node, policies, managedservices |
| ack-1.0 | master, controlplane, node, etcd, policies, managedservices |
| aks-1.0 | controlplane, node, policies, managedservices |
| aks-1.7 | controlplane, node, policies, managedservices |
| rh-0.7 | master,node|
| rh-1.0 | master, controlplane, node, etcd, policies |
| cis-1.6-k3s | master, controlplane, node, etcd, policies |

@@ -28,6 +28,9 @@ Some defined by other hardenening guides.
| CIS | [EKS 1.5.0](https://workbench.cisecurity.org/benchmarks/17733) | eks-1.5.0 | EKS |
| CIS | [ACK 1.0.0](https://workbench.cisecurity.org/benchmarks/6467) | ack-1.0 | ACK |
| CIS | [AKS 1.0.0](https://workbench.cisecurity.org/benchmarks/6347) | aks-1.0 | AKS |
| CIS | [AKS 1.7.0](https://workbench.cisecurity.org/benchmarks/20359) | aks-1.7 | AKS |
| RHEL | RedHat OpenShift hardening guide | rh-0.7 | OCP 3.10-3.11 |
| CIS | [OCP4 1.1.0](https://workbench.cisecurity.org/benchmarks/6778) | rh-1.0 | OCP 4.1- |
| CIS | [1.6.0-k3s](https://docs.rancher.cn/docs/k3s/security/self-assessment/_index) | cis-1.6-k3s | k3s v1.16-v1.24 |

@@ -11,7 +11,7 @@ spec:
- name: kube-bench
image: docker.io/aquasec/kube-bench:latest
command:
["kube-bench", "run", "--targets", "node", "--benchmark", "aks-1.0"]
["kube-bench", "run", "--targets", "node", "--benchmark", "aks-1.7"]
volumeMounts:
- name: var-lib-kubelet
mountPath: /var/lib/kubelet