LaibaBareera 2025-05-27 10:57:37 +00:00 committed by GitHub
commit ecc8e39bf2
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
13 changed files with 901 additions and 1 deletion

cfg/aks-1.7/config.yaml Normal file

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml


@@ -0,0 +1,31 @@
---
controls:
version: "aks-1.7"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Logging"
checks:
- id: 2.1.1
text: "Enable audit Logs"
type: "manual"
remediation: |
Azure audit logs are enabled and managed in the Azure portal. To enable log collection for
the Kubernetes master components in your AKS cluster, open the Azure portal in a web
browser and complete the following steps:
1. Select the resource group for your AKS cluster, such as myResourceGroup. Don't
select the resource group that contains your individual AKS cluster resources, such
as MC_myResourceGroup_myAKSCluster_eastus.
2. On the left-hand side, choose Diagnostic settings.
3. Select your AKS cluster, such as myAKSCluster, then choose to Add diagnostic setting.
4. Enter a name, such as myAKSClusterLogs, then select the option to Send to Log Analytics.
5. Select an existing workspace or create a new one. If you create a workspace, provide
a workspace name, a resource group, and a location.
6. In the list of available logs, select the logs you wish to enable. For this example,
enable the kube-audit and kube-audit-admin logs. Common logs include the kube-apiserver,
kube-controller-manager, and kube-scheduler. You can return and change the collected
logs once Log Analytics workspaces are enabled.
7. When ready, select Save to enable collection of the selected logs.
scored: false
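Note: the diagnostic setting described above can also be created without the portal. A minimal Azure CLI sketch; the resource group, cluster, setting name, and workspace resource ID below are illustrative placeholders:
    # Resolve the AKS cluster resource ID.
    AKS_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)
    # Route kube-audit and kube-audit-admin logs to an existing Log Analytics workspace.
    az monitor diagnostic-settings create \
      --name myAKSClusterLogs \
      --resource "$AKS_ID" \
      --workspace "<log-analytics-workspace-resource-id>" \
      --logs '[{"category":"kube-audit","enabled":true},{"category":"kube-audit-admin","enabled":true}]'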


@@ -0,0 +1,169 @@
---
controls:
version: "aks-1.7"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning using Microsoft Defender for Cloud (MDC) image scanning or a third party provider (Automated)"
type: "manual"
remediation: |
Enable MDC for Container Registries by running the following Azure CLI command:
az security pricing create --name ContainerRegistry --tier Standard
Alternatively, use the following command to enable image scanning for your container registry:
az resource update --ids /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.ContainerRegistry/registries/{registry-name} --set properties.enabled=true
Replace `subscription-id`, `resource-group-name`, and `registry-name` with the correct values for your environment.
Please note that enabling MDC for Container Registries will incur additional costs, so be sure to review the pricing information provided in the Azure documentation before enabling it.
scored: false
- id: 5.1.2
text: "Minimize user access to Azure Container Registry (ACR) (Manual)"
type: "manual"
remediation: |
Azure Container Registry
If you use Azure Container Registry (ACR) as your container image store, you need to grant
permissions to the service principal for your AKS cluster to read and pull images. Currently,
the recommended configuration is to use the az aks create or az aks update command to
integrate with a registry and assign the appropriate role for the service principal. For
detailed steps, see Authenticate with Azure Container Registry from Azure Kubernetes
Service.
To avoid needing an Owner or Azure account administrator role, you can configure a
service principal manually or use an existing service principal to authenticate ACR from
AKS. For more information, see ACR authentication with service principals or Authenticate
from Kubernetes with a pull secret.
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Azure Container Registry (ACR) (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.1.4
text: "Minimize Container Registries to only those approved (Manual)"
type: "manual"
remediation: |
If you are using **Azure Container Registry**, you can restrict access using firewall rules as described in the official documentation:
https://docs.microsoft.com/en-us/azure/container-registry/container-registry-firewall-access-rules
For other non-AKS repositories, you can use **admission controllers** or **Azure Policy** to enforce registry access restrictions.
Limiting or locking down egress traffic to specific container registries is also recommended. For more information, refer to:
https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic
scored: false
- id: 5.2
text: "Access and identity options for Azure Kubernetes Service (AKS)"
checks:
- id: 5.2.1
text: "Prefer using dedicated AKS Service Accounts (Manual)"
type: "manual"
remediation: |
Azure Active Directory integration
The security of AKS clusters can be enhanced with the integration of Azure Active Directory
(AD). Built on decades of enterprise identity management, Azure AD is a multi-tenant,
cloud-based directory, and identity management service that combines core directory
services, application access management, and identity protection. With Azure AD, you can
integrate on-premises identities into AKS clusters to provide a single source for account
management and security.
Azure Active Directory integration with AKS clusters
With Azure AD-integrated AKS clusters, you can grant users or groups access to Kubernetes
resources within a namespace or across the cluster. To obtain a kubectl configuration
context, a user can run the az aks get-credentials command. When a user then interacts
with the AKS cluster with kubectl, they're prompted to sign in with their Azure AD
credentials. This approach provides a single source for user account management and
password credentials. The user can only access the resources as defined by the cluster
administrator.
Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect
is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID
Connect, see the OpenID Connect documentation. From inside the Kubernetes cluster,
Webhook Token Authentication is used to verify authentication tokens. Webhook token
authentication is configured and managed as part of the AKS cluster.
scored: false
- id: 5.3
text: "Key Management Service (KMS)"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.4
text: "Cluster Networking"
checks:
- id: 5.4.1
text: "Restrict Access to the Control Plane Endpoint (Automated)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication between your nodes and the API server stays within your virtual network. You can also limit the IP addresses that can access your API server from the internet, or completely disable internet access to the API server.
With this in mind, you can update your cluster accordingly using the Azure CLI to ensure that Private Endpoint Access is enabled.
If you choose to also enable Public Endpoint Access then you should also configure a list of allowable CIDR blocks, resulting in restricted access from the internet. If you specify no CIDR blocks, then the public API server endpoint is able to receive and process requests from all IP addresses by defaulting to ['0.0.0.0/0'].
Example (restricting the public endpoint to an authorized IP range; private cluster access itself must be enabled with --enable-private-cluster at creation time):
az aks update --name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --api-server-authorized-ip-ranges 192.168.1.0/24
scored: false
- id: 5.4.2
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Automated)"
type: "manual"
remediation: |
To use a private endpoint, create a new private endpoint in your virtual network, then create a link between your virtual network and a new private DNS zone.
You can also restrict access to the public endpoint by allowing only specific CIDR blocks to reach it, or, for a private cluster, disable the public FQDN entirely. For example:
az aks update --name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --disable-public-fqdn
This command removes the public FQDN for the API server of a private AKS cluster.
scored: false
- id: 5.4.3
text: "Ensure clusters are created with Private Nodes (Automated)"
type: "manual"
remediation: |
To create a private cluster, use the following command:
az aks create \
--resource-group <private-cluster-resource-group> \
--name <private-cluster-name> \
--load-balancer-sku standard \
--enable-private-cluster \
--network-plugin azure \
--vnet-subnet-id <subnet-id> \
--docker-bridge-address <docker-bridge-address> \
--dns-service-ip <dns-service-ip> \
--service-cidr <service-cidr>
Ensure that the --enable-private-cluster flag is set to enable private nodes in your cluster.
scored: false
- id: 5.4.4
text: "Ensure Network Policy is Enabled and set as appropriate (Automated)"
type: "manual"
remediation: |
Utilize Calico or another network policy engine to segment and isolate your traffic.
Enable network policies on your AKS cluster by following the Azure documentation or using the `az aks` CLI to enable the network policy add-on.
scored: false
- id: 5.4.5
text: "Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.5
text: "Authentication and Authorization"
checks:
- id: 5.5.1
text: "Manage Kubernetes RBAC users with Azure AD (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.5.2
text: "Use Azure RBAC for Kubernetes Authorization (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
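The control plane endpoint checks (5.4.1-5.4.3) can be audited from the CLI by inspecting the cluster's API server access profile. A sketch, using the same placeholder resource-group and cluster variables as above; the field names reflect current az output and may vary by CLI version:
    # Show whether the cluster is private and which CIDR ranges may reach the public endpoint.
    az aks show --resource-group ${RESOURCE_GROUP} --name ${CLUSTER_NAME} \
      --query "{privateCluster: apiServerAccessProfile.enablePrivateCluster, authorizedIpRanges: apiServerAccessProfile.authorizedIpRanges}" -o json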

cfg/aks-1.7/master.yaml Normal file

@@ -0,0 +1,6 @@
---
controls:
version: "aks-1.7"
id: 1
text: "Control Plane Components"
type: "master"

cfg/aks-1.7/node.yaml Normal file

@@ -0,0 +1,283 @@
---
controls:
version: "aks-1.7"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: false
- id: 3.1.2
text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: false
- id: 3.1.3
text: "Ensure that the azure.json file has permissions set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
scored: false
- id: 3.1.4
text: "Ensure that the azure.json file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: false
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.4
text: "Ensure that the --read-only-port is secured (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set readOnlyPort to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: true
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated) "
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: true
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.7
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set the 'eventRecordQPS' value to an appropriate level (e.g., 5).
If using executable arguments, check the Kubelet service file `$kubeletsvc` on each worker node, and add the following parameter to the `KUBELET_ARGS` variable:
--event-qps=5
Ensure that there is no conflicting `--event-qps` setting in the service file that overrides the config file.
After making the changes, restart the Kubelet service:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: false
- id: 3.2.8
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: true
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If modifying the Kubelet config file, edit the `kubelet-config.json` file located at `/etc/kubernetes/kubelet/kubelet-config.json` and set the following parameter to `true`:
"rotateCertificates": true
Ensure that the Kubelet service file located at `/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf` does not define the `--rotate-certificates` argument as `false`, as this would override the config file.
If using executable arguments, add the following line to the `KUBELET_CERTIFICATE_ARGS` variable:
--rotate-certificates=true
After making the necessary changes, restart the Kubelet service:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: false
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: true
compare:
op: eq
value: true
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
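Taken together, the worker-node checks above reduce to a handful of commands that can be run by hand for spot-checking. A sketch; $kubeletkubeconfig and $kubeletconf stand for the paths kube-bench resolves from its configuration, and jq is assumed to be available on the node:
    # 3.1.1 / 3.1.2: permissions and ownership of the kubelet kubeconfig
    stat -c 'permissions=%a owner=%U:%G' "$kubeletkubeconfig"
    # 3.2.x: flags passed on the kubelet command line ...
    ps -fC kubelet
    # ... and the corresponding values in the kubelet config file
    jq '{anonymousAuth: .authentication.anonymous.enabled, authorizationMode: .authorization.mode, readOnlyPort: .readOnlyPort}' "$kubeletconf"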

cfg/aks-1.7/policies.yaml Normal file

@@ -0,0 +1,349 @@
---
controls:
version: "aks-1.7"
id: 4
text: "Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
audit: "kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name"
audit_config: "kubectl get clusterrolebindings"
tests:
test_items:
- flag: cluster-admin
path: '{.roleRef.name}'
set: true
compare:
op: eq
value: "cluster-admin"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]
scored: false
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
audit: "kubectl get roles,rolebindings --all-namespaces -o=custom-columns=NAME:.metadata.name,ROLE:.rules[*].resources,SUBJECT:.subjects[*].name"
audit_config: "kubectl get roles --all-namespaces"
tests:
test_items:
- flag: secrets
path: '{.rules[*].resources}'
set: true
compare:
op: eq
value: "secrets"
- flag: get
path: '{.rules[*].verbs}'
set: true
compare:
op: contains
value: "get"
- flag: list
path: '{.rules[*].verbs}'
set: true
compare:
op: contains
value: "list"
- flag: watch
path: '{.rules[*].verbs}'
set: true
compare:
op: contains
value: "watch"
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: false
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: "kubectl get roles --all-namespaces -o yaml | grep '*'"
audit_config: "kubectl get clusterroles -o yaml | grep '*'"
tests:
test_items:
- flag: wildcard
path: '{.rules[*].verbs}'
compare:
op: notcontains
value: "*"
remediation: |
Where possible, replace any use of wildcards in clusterroles and roles with specific objects or actions.
Review the roles and clusterroles across namespaces and ensure that wildcards are not used for sensitive actions.
Update roles by specifying individual actions or resources instead of using "*".
scored: false
- id: 4.1.4
text: "Minimize access to create pods (Automated)"
audit: "kubectl get roles,rolebindings --all-namespaces -o=custom-columns=NAME:.metadata.name,ROLE:.rules[*].resources,SUBJECT:.subjects[*].name"
audit_config: "kubectl get roles --all-namespaces"
tests:
test_items:
- flag: pods
path: '{.rules[*].resources}'
set: true
compare:
op: eq
value: "pods"
- flag: create
path: '{.rules[*].verbs}'
set: true
compare:
op: contains
value: "create"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 4.1.5
text: "Ensure that default service accounts are not actively used (Automated)"
audit: "kubectl get serviceaccounts --all-namespaces -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,TOKEN:.automountServiceAccountToken"
audit_config: "kubectl get serviceaccounts --all-namespaces"
tests:
test_items:
- flag: default
path: '{.metadata.name}'
set: true
compare:
op: eq
value: "default"
- flag: automountServiceAccountToken
path: '{.automountServiceAccountToken}'
set: true
compare:
op: eq
value: "false"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
- id: 4.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: "kubectl get pods --all-namespaces -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,SERVICE_ACCOUNT:.spec.serviceAccountName,MOUNT_TOKEN:.spec.automountServiceAccountToken"
audit_config: "kubectl get pods --all-namespaces"
tests:
test_items:
- flag: automountServiceAccountToken
path: '{.spec.automountServiceAccountToken}'
set: true
compare:
op: eq
value: "false"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 4.2
text: "Pod Security Policies"
checks:
- id: 4.2.1
text: "Minimize the admission of privileged containers (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.spec.containers[].securityContext.privileged == true) | .metadata.name'
tests:
test_items:
- flag: securityContext.privileged
path: '{.spec.containers[].securityContext.privileged}'
compare:
op: eq
value: false
remediation: |
Add a Pod Security Admission (PSA) policy to each namespace in the cluster to restrict the admission of privileged containers.
To enforce a restricted policy for a specific namespace, use the following command:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also enforce PSA for all namespaces:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
Additionally, review the namespaces that should be excluded (e.g., `kube-system`, `gatekeeper-system`, `azure-arc`, `azure-extensions-usage-system`) and adjust your filtering if necessary.
To enable Pod Security Policies, refer to the detailed documentation for Kubernetes and Azure integration at:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: false
- id: 4.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.spec.hostPID == true) | "\(.metadata.namespace)/\(.metadata.name)"'
tests:
test_items:
- flag: hostPID
path: '{.spec.hostPID}'
compare:
op: eq
value: false
remediation: |
Add a policy to each namespace in the cluster that restricts the admission of containers with hostPID. For namespaces that need it, ensure RBAC controls limit access to a specific service account.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: false
- id: 4.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.spec.hostIPC == true) | "\(.metadata.namespace)/\(.metadata.name)"'
tests:
test_items:
- flag: hostIPC
path: '{.spec.hostIPC}'
compare:
op: eq
value: false
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of hostIPC containers.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: false
- id: 4.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.spec.hostNetwork == true) | "\(.metadata.namespace)/\(.metadata.name)"'
tests:
test_items:
- flag: hostNetwork
path: '{.spec.hostNetwork}'
compare:
op: eq
value: false
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of hostNetwork containers.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: false
- id: 4.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(any(.spec.containers[]; .securityContext.allowPrivilegeEscalation == true)) | "\(.metadata.namespace)/\(.metadata.name)"'
tests:
test_items:
- flag: allowPrivilegeEscalation
path: '{.spec.containers[].securityContext.allowPrivilegeEscalation}'
compare:
op: eq
value: false
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers with .spec.allowPrivilegeEscalation set to true.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: false
- id: 4.3
text: "Azure Policy / OPA"
checks: []
- id: 4.4
text: "CNI Plugin"
checks:
- id: 4.4.1
text: "Ensure latest CNI version is used (Manual)"
type: "manual"
remediation: |
Review the documentation of the Azure CNI plugin, and ensure the latest CNI version is used.
scored: false
- id: 4.4.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
audit: "kubectl get networkpolicy --all-namespaces"
tests:
test_items:
- flag: networkPolicy
path: '{.items[*].metadata.name}'
compare:
op: exists
value: true
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
- id: 4.5
text: "Secrets Management"
checks:
- id: 4.5.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
audit: "kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {\"\\n\"}{end}' -A"
tests:
test_items:
- flag: secretKeyRef
path: '{.items[*].spec.containers[*].envFrom[*].secretRef.name}'
compare:
op: exists
value: true
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 4.5.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.6
text: "General Policies"
checks:
- id: 4.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.6.2
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 4.6.3
text: "The default namespace should not be used (Automated)"
audit: "kubectl get all -n default"
audit_config: "kubectl get all -n default"
tests:
test_items:
- flag: "namespace"
path: "{.metadata.namespace}"
set: true
compare:
op: eq
value: "default"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false
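The Pod Security Admission labels referenced throughout the 4.2.x remediations can be applied and verified directly with kubectl; NAMESPACE below is a placeholder:
    # Enforce the restricted profile on a single namespace, and warn at baseline everywhere.
    kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
    kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
    # Confirm the labels were applied.
    kubectl get ns --show-labels | grep pod-security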


@@ -298,6 +298,7 @@ version_mapping:
"ocp-3.11": "rh-0.7"
"ocp-4.0": "rh-1.0"
"aks-1.0": "aks-1.0"
"aks-1.7": "aks-1.7"
"ack-1.0": "ack-1.0"
"cis-1.6-k3s": "cis-1.6-k3s"
"cis-1.24-microk8s": "cis-1.24-microk8s"
@@ -431,6 +432,12 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"aks-1.7":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"ack-1.0":
- "master"
- "node"


@@ -444,6 +444,12 @@ func TestValidTargets(t *testing.T) {
targets: []string{"node", "policies", "controlplane", "managedservices"},
expected: true,
},
{
name: "aks-1.7 valid",
benchmark: "aks-1.7",
targets: []string{"node", "policies", "controlplane", "managedservices"},
expected: true,
},
{
name: "eks-1.0.1 valid",
benchmark: "eks-1.0.1",


@@ -300,6 +300,7 @@ func getKubeVersion() (*KubeVersion, error) {
glog.V(3).Infof("Error fetching cluster config: %s", err)
}
isRKE := false
isAKS := false // Variable to track AKS detection
if err == nil && kubeConfig != nil {
k8sClient, err := kubernetes.NewForConfig(kubeConfig)
if err != nil {
@@ -311,7 +312,12 @@
if err != nil {
glog.V(3).Infof("Error detecting RKE cluster: %s", err)
}
isAKS, err = IsAKS(context.Background(), k8sClient)
if err != nil {
glog.V(3).Infof("Error detecting AKS cluster: %s", err)
}
}
}
if k8sVer, err := getKubeVersionFromRESTAPI(); err == nil {
@@ -319,6 +325,9 @@
if isRKE {
k8sVer.GitVersion = k8sVer.GitVersion + "-rancher1"
}
if isAKS {
k8sVer.GitVersion = k8sVer.GitVersion + "-aks1" // Mark it as AKS in the version
}
return k8sVer, nil
}
@@ -485,11 +494,33 @@ func getPlatformInfoFromVersion(s string) Platform {
}
}
func IsAKS(ctx context.Context, k8sClient kubernetes.Interface) (bool, error) {
// Query the nodes for any annotations that indicate AKS (Azure Kubernetes Service)
nodes, err := k8sClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{Limit: 1})
if err != nil {
return false, err
}
// If the cluster contains nodes with specific AKS annotations, it's likely AKS
if len(nodes.Items) == 0 {
return false, nil
}
annotations := nodes.Items[0].Annotations
if _, exists := annotations["azure-identity-binding"]; exists { // "azure-identity-binding" is one possible AKS-specific annotation
return true, nil
}
return false, nil
}
func getPlatformBenchmarkVersion(platform Platform) string {
glog.V(3).Infof("getPlatformBenchmarkVersion platform: %s", platform)
switch platform.Name {
case "eks":
return "eks-1.5.0"
case "aks":
return "aks-1.7"
case "gke":
switch platform.Version {
case "1.15", "1.16", "1.17", "1.18", "1.19":


@@ -627,6 +627,11 @@ func Test_getPlatformNameFromKubectlOutput(t *testing.T) {
args: args{s: "v1.27.6+rke2r1"},
want: Platform{Name: "rke2r", Version: "1.27"},
},
{
name: "aks",
args: args{s: "v1.27.6+aks1"},
want: Platform{Name: "aks", Version: "1.27"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -729,6 +734,13 @@ func Test_getPlatformBenchmarkVersion(t *testing.T) {
},
want: "rke2-cis-1.7",
},
{
name: "aks",
args: args{
platform: Platform{Name: "aks", Version: "1.27"},
},
want: "aks-1.7",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {


@@ -33,6 +33,7 @@ The following table shows the valid targets based on the CIS Benchmark version.
| eks-1.5.0 | controlplane, node, policies, managedservices |
| ack-1.0 | master, controlplane, node, etcd, policies, managedservices |
| aks-1.0 | controlplane, node, policies, managedservices |
| aks-1.7 | controlplane, node, policies, managedservices |
| rh-0.7 | master,node|
| rh-1.0 | master, controlplane, node, etcd, policies |
| cis-1.6-k3s | master, controlplane, node, etcd, policies |


@@ -28,6 +28,9 @@ Some defined by other hardening guides.
| CIS | [EKS 1.5.0](https://workbench.cisecurity.org/benchmarks/17733) | eks-1.5.0 | EKS |
| CIS | [ACK 1.0.0](https://workbench.cisecurity.org/benchmarks/6467) | ack-1.0 | ACK |
| CIS | [AKS 1.0.0](https://workbench.cisecurity.org/benchmarks/6347) | aks-1.0 | AKS |
| CIS | [AKS 1.7.0](https://workbench.cisecurity.org/benchmarks/20359) | aks-1.7 | AKS |
| RHEL | RedHat OpenShift hardening guide | rh-0.7 | OCP 3.10-3.11 |
| CIS | [OCP4 1.1.0](https://workbench.cisecurity.org/benchmarks/6778) | rh-1.0 | OCP 4.1- |
| CIS | [1.6.0-k3s](https://docs.rancher.cn/docs/k3s/security/self-assessment/_index) | cis-1.6-k3s | k3s v1.16-v1.24 |


@@ -11,7 +11,7 @@ spec:
- name: kube-bench
image: docker.io/aquasec/kube-bench:latest
command:
["kube-bench", "run", "--targets", "node", "--benchmark", "aks-1.0"]
["kube-bench", "run", "--targets", "node", "--benchmark", "aks-1.7"]
volumeMounts:
- name: var-lib-kubelet
mountPath: /var/lib/kubelet
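Once the manifest is updated to the aks-1.7 benchmark, the job can be applied and its results read from the pod logs; a sketch, assuming the manifest is the repository's job-aks.yaml and the Job keeps its default kube-bench name:
    kubectl apply -f job-aks.yaml
    kubectl logs -l job-name=kube-bench --tail=-1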