EKS clusters should have control plane audit logging enabled
EKS control plane audit logs record every request to the Kubernetes API server, including who made the request, what resource was affected, and whether it succeeded. Without these logs, you have no way to detect unauthorized kubectl commands, privilege escalation attempts, or misconfigurations in RBAC policies. Incident response becomes guesswork.
Control plane logs are billed at standard CloudWatch Logs ingestion rates, roughly $0.50 per GB, which is modest relative to the forensic value. Disabling audit logs to save a few dollars a month leaves a gap that compliance auditors will flag immediately.
Retrofit consideration
Enabling or changing control plane log types triggers a cluster update that can take several minutes, but does not cause downtime or node disruption.
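For an existing cluster, the same change can be applied outside Terraform with the AWS CLI (the cluster name below is a placeholder), then reconciled in state later:

```shell
# Enable the audit log type on an existing cluster. The update runs
# asynchronously and takes a few minutes, with no downtime.
aws eks update-cluster-config \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["audit"],"enabled":true}]}'

# The command returns an update ID; poll it until the status is Successful.
aws eks describe-update --name my-cluster --update-id <update-id>
```

If the cluster is Terraform-managed, prefer making the change in code so the next plan does not revert it.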
Implementation
Choose the approach that matches how you manage Terraform.
Use the compliance.tf module to enforce this control by default. See get started with compliance.tf.
For PCI DSS scope:

module "eks" {
  source  = "pcidss.compliance.tf/terraform-aws-modules/eks/aws"
  version = ">=21.0.0"

  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"
}

For ACSC Essential Eight scope:

module "eks" {
  source  = "acscessentialeight.compliance.tf/terraform-aws-modules/eks/aws"
  version = ">=21.0.0"

  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"
}
If you use terraform-aws-modules/eks/aws, set the right module inputs for this control. You can later migrate to the compliance.tf module with minimal changes because it is compatible by design.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = ">=21.0.0"

  # Include "audit" explicitly rather than relying on module defaults.
  # Check the input name against your module version; older releases
  # used cluster_enabled_log_types.
  enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"
}
Use AWS provider resources directly. See docs for the resources involved: aws_eks_cluster.
resource "aws_eks_cluster" "this" {
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  encryption_config {
    provider {
      key_arn = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
    }
    resources = ["secrets"]
  }

  name     = "pofix-abc123"
  role_arn = "arn:aws:iam::123456789012:role/example-role"

  vpc_config {
    endpoint_private_access = true
    subnet_ids              = ["subnet-abc123", "subnet-def456"]
  }
}
What this control checks
This control validates that the aws_eks_cluster resource enables the "audit" control plane log type. Depending on provider version and module design, this appears as either enabled_cluster_log_types or a nested logging/cluster_logging block. Valid EKS control plane log types are "api", "audit", "authenticator", "controllerManager", and "scheduler". To pass, the configured set must contain at least "audit". It fails when logging configuration is omitted entirely, when the enabled set is empty, or when other log types are present but "audit" is not.
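The pass/fail logic described above can be sketched as a small predicate. This is a simplified illustration in Python, not the actual policy engine:

```python
# Valid EKS control plane log types, per the check description.
VALID_LOG_TYPES = {"api", "audit", "authenticator", "controllerManager", "scheduler"}


def audit_logging_enabled(enabled_cluster_log_types):
    """Return True when the configured log types include "audit".

    An omitted (None) or empty configuration fails, as does any set
    of log types that does not contain "audit".
    """
    types = set(enabled_cluster_log_types or [])
    unknown = types - VALID_LOG_TYPES
    if unknown:
        raise ValueError(f"unknown log types: {sorted(unknown)}")
    return "audit" in types


print(audit_logging_enabled(["api", "audit"]))          # True
print(audit_logging_enabled(["api", "authenticator"]))  # False: "api" is not "audit"
print(audit_logging_enabled([]))                        # False: empty set
```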
Common pitfalls
Only audit vs all log types
Setting enabled_cluster_log_types to ["api", "authenticator"] without "audit" fails this control. A common mistake: teams enable "api" thinking it captures audit events, but the api and audit streams are distinct in EKS. You need "audit" explicitly.

Provider schema differences for logging configuration

Depending on provider version and module design, EKS logging appears as either enabled_cluster_log_types or a nested logging/cluster_logging block. Make sure your configuration actually enables "audit" and that your policy engine checks the schema your code uses.

CloudWatch log retention defaults to never expire

EKS creates the log group with indefinite retention. Without a companion aws_cloudwatch_log_group resource for /aws/eks/<cluster>/cluster that sets a retention period, storage costs grow without bound. This doesn't affect the compliance check, but it will show up on your bill.

Log group must exist before cluster creation in some modules

Pre-creating the CloudWatch log group to set encryption or retention is fine, but the name must match /aws/eks/<cluster-name>/cluster exactly. A mismatch means EKS silently creates its own unmanaged log group, and your KMS and retention settings are ignored.
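A minimal sketch of pre-creating the log group with retention set, assuming a cluster named abc123 to match the earlier examples (the 90-day retention is an illustrative choice, not a requirement of this control):

```hcl
# Pre-create the control plane log group so retention (and optionally
# KMS encryption via kms_key_id) are managed in Terraform. The name
# must match /aws/eks/<cluster-name>/cluster exactly, or EKS creates
# its own unmanaged log group and these settings are ignored.
resource "aws_cloudwatch_log_group" "eks_control_plane" {
  name              = "/aws/eks/abc123/cluster"
  retention_in_days = 90
}
```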
Audit evidence
The primary evidence is the AWS Config rule eks-cluster-logging-enabled showing COMPLIANT for each cluster, or equivalent output from a CSPM tool. Supporting evidence is the CloudWatch Logs log group /aws/eks/<cluster-name>/cluster containing recent audit log streams, confirming logs are actively flowing. Console screenshots of the cluster's "Logging" tab under "Observability" can also demonstrate current state.
For periodic audits, a CloudTrail search for UpdateClusterConfig API calls with logging changes shows when logging was enabled or modified, which helps prove continuous compliance rather than point-in-time enablement.
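Both evidence points can be captured from the CLI as well (cluster name is a placeholder):

```shell
# Snapshot of which control plane log types are currently enabled.
aws eks describe-cluster --name my-cluster \
  --query 'cluster.logging.clusterLogging'

# Confirm audit streams are actively receiving events; EKS names them
# with the kube-apiserver-audit prefix.
aws logs describe-log-streams \
  --log-group-name /aws/eks/my-cluster/cluster \
  --log-stream-name-prefix kube-apiserver-audit \
  --order-by LastEventTime --descending --max-items 3
```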
Framework-specific interpretation
PCI DSS v4.0: Requirement 10 calls for logging all access to system components that may store, process, or transmit cardholder data. For EKS clusters in that scope, control plane audit logs are what examiners want to see: every API server request recorded with user identity, resource, and outcome.
Related controls
Tool mappings
Use these identifiers to cross-reference this control across tools, reports, and evidence.
Compliance.tf Control: eks_cluster_control_plane_audit_logging_enabled
AWS Config Managed Rule: EKS_CLUSTER_LOGGING_ENABLED
Checkov Check: CKV_AWS_37
Powerpipe Control: aws_compliance.control.eks_cluster_control_plane_audit_logging_enabled
Prowler Check: eks_control_plane_logging_all_types_enabled
AWS Security Hub Control: EKS.8
KICS Queries: 37304d3f-f852-40b8-ae3f-725e87a7cedf, 66f130d9-b81d-4e8e-9b08-da74b9c891df
Trivy Check: AVD-AWS-0038
Last reviewed: 2026-03-09