EKS cluster endpoints should restrict public access
The EKS API server endpoint is the control plane for your Kubernetes cluster. With public access open and no CIDR restriction, anyone on the internet can attempt authentication against it. RBAC and strong authentication reduce the risk but don't eliminate it. An exposed endpoint invites brute-force and credential-stuffing attacks regardless of what's behind the auth layer.
Disabling public access entirely and routing through the private VPC endpoint is the cleanest posture. If public access is operationally necessary, scope public_access_cidrs to corporate or VPN IP ranges. Leaving it at 0.0.0.0/0 is the same as not restricting it at all.
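When public access must remain on, the restriction lives in the vpc_config block. The following is an illustrative sketch, not a complete cluster definition: 203.0.113.0/24 stands in for your corporate or VPN egress range, and the names and IDs are placeholders.

```hcl
resource "aws_eks_cluster" "example" {
  name     = "abc123"
  role_arn = "arn:aws:iam::123456789012:role/example-role"

  vpc_config {
    subnet_ids              = ["subnet-abc123", "subnet-def456"]
    endpoint_private_access = true
    endpoint_public_access  = true
    # Scope public access to known egress ranges -- never ["0.0.0.0/0"]
    public_access_cidrs = ["203.0.113.0/24"]
  }
}
```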
Retrofit consideration
Disabling public endpoint access on a running cluster requires that private endpoint access is already enabled and that operators have network connectivity to the VPC via VPN, Direct Connect, or bastion. Cutting public access without this in place causes kubectl lockout.
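The safe ordering can be sketched as two separate applies. This is an assumed sequence under the constraints above, not an official migration procedure; the subnet IDs are placeholders.

```hcl
# Step 1: enable the private endpoint while public access stays on.
# Apply, then verify kubectl works from inside the VPC (VPN, Direct
# Connect, or bastion) before proceeding.
vpc_config {
  endpoint_private_access = true
  endpoint_public_access  = true
  subnet_ids              = ["subnet-abc123", "subnet-def456"]
}

# Step 2: once private connectivity is confirmed, cut public access.
vpc_config {
  endpoint_private_access = true
  endpoint_public_access  = false
  subnet_ids              = ["subnet-abc123", "subnet-def456"]
}
```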
Implementation
Choose the approach that matches how you manage Terraform.
Use the compliance.tf module to enforce this control by default. See get started with compliance.tf.
module "eks" {
  source                          = "pcidss.compliance.tf/terraform-aws-modules/eks/aws"
  version                         = ">=21.0.0"
  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"
}
module "eks" {
  source                          = "cisv80ig1.compliance.tf/terraform-aws-modules/eks/aws"
  version                         = ">=21.0.0"
  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"
}
module "eks" {
  source                          = "nist800171.compliance.tf/terraform-aws-modules/eks/aws"
  version                         = ">=21.0.0"
  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"
}
module "eks" {
  source                          = "nistcsfv11.compliance.tf/terraform-aws-modules/eks/aws"
  version                         = ">=21.0.0"
  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"
}
module "eks" {
  source                          = "pcidssv321.compliance.tf/terraform-aws-modules/eks/aws"
  version                         = ">=21.0.0"
  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"
}
If you use terraform-aws-modules/eks/aws, set the module inputs for this control explicitly rather than relying on defaults. You can later migrate to the compliance.tf module with minimal changes, because the two are compatible by design.
module "eks" {
  source                          = "terraform-aws-modules/eks/aws"
  version                         = ">=21.0.0"
  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"
  endpoint_public_access          = false
}
Use AWS provider resources directly. See docs for the resources involved: aws_eks_cluster.
resource "aws_eks_cluster" "this" {
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  encryption_config {
    provider {
      key_arn = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
    }
    resources = ["secrets"]
  }

  name     = "pofix-abc123"
  role_arn = "arn:aws:iam::123456789012:role/example-role"

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = false
    subnet_ids              = ["subnet-abc123", "subnet-def456"]
  }
}
What this control checks
The aws_eks_cluster resource's vpc_config block controls API endpoint exposure. Set endpoint_public_access = false to pass. If public access must stay on, set public_access_cidrs to a restricted CIDR list (not ["0.0.0.0/0"]). Also set endpoint_private_access = true so worker nodes and internal clients reach the API server over the private VPC endpoint. Any configuration with endpoint_public_access = true and public_access_cidrs at the default ["0.0.0.0/0"] fails.
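To make the pass/fail boundary concrete, the two vpc_config fragments below sketch a failing configuration (the EKS defaults) next to a passing one. This is illustrative only; the subnet IDs are placeholders.

```hcl
# Fails: with the endpoint settings omitted, EKS defaults apply --
# endpoint_public_access is true and public_access_cidrs is ["0.0.0.0/0"].
vpc_config {
  subnet_ids = ["subnet-abc123", "subnet-def456"]
}

# Passes: public access disabled, private access enabled.
vpc_config {
  subnet_ids              = ["subnet-abc123", "subnet-def456"]
  endpoint_private_access = true
  endpoint_public_access  = false
}
```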
Common pitfalls
Default endpoint_public_access is true
Omit endpoint_public_access and it defaults to true, with public_access_cidrs defaulting to ["0.0.0.0/0"]. The endpoint stays publicly reachable unless you explicitly set endpoint_public_access = false or provide a restricted CIDR list. Don't rely on EKS defaults here.
Disabling public without enabling private causes lockout
Set endpoint_public_access = false without endpoint_private_access = true and the API server goes dark. There's no graceful error; kubectl just stops responding. Enable the private endpoint first, verify connectivity from within the VPC, then disable public access.
CIDR restriction alone may not satisfy the control
Some compliance scanners treat any endpoint_public_access = true configuration as a failure, regardless of what's in public_access_cidrs. If your scanner works that way, restricting CIDRs won't move the finding. Check your evaluation logic before assuming a CIDR-scoped configuration passes.
DNS resolution changes with private endpoint
When endpoint_private_access is enabled, the EKS API DNS name resolves to private IPs from within the VPC. Developers and CI/CD runners outside the VPC only reach the API server if endpoint_public_access is still on. Sort out DNS and network routing before toggling these flags, otherwise you'll break pipelines silently.
Audit evidence
Auditors typically want AWS Config rule results showing each EKS cluster as compliant, or equivalent CSPM findings. Console evidence from the EKS cluster's Networking tab, showing public access as disabled or restricted to specific CIDRs, is a common request. aws eks describe-cluster --name <name> output with endpointPublicAccess: false or a non-wildcard publicAccessCidrs list is the CLI-level proof.
CloudTrail logs for UpdateClusterConfig events show when and by whom the endpoint configuration was last changed, useful for change-control review.
Framework-specific interpretation
PCI DSS v4.0: Requirement 1 mandates network controls that prevent unauthorized access to systems in the cardholder data environment. An open EKS API endpoint, reachable from any IP, directly contradicts this: the Kubernetes control plane governs every workload on the cluster, so it needs the same network boundaries as any other CDE component. Disabling public access or scoping public_access_cidrs to known IP ranges provides the segmentation PCI assessors look for.
Related controls
Tool mappings
Use these identifiers to cross-reference this control across tools, reports, and evidence.
Compliance.tf Control: eks_cluster_endpoint_restrict_public_access
AWS Config Managed Rule: EKS_ENDPOINT_NO_PUBLIC_ACCESS
Checkov Checks: CKV_AWS_38, CKV_AWS_39
Powerpipe Controls: aws_compliance.control.eks_cluster_endpoint_public_access_restricted, aws_compliance.control.eks_cluster_endpoint_restrict_public_access
Prowler Check: eks_cluster_not_publicly_accessible
AWS Security Hub Control: EKS.1
KICS Queries: 42f4b905-3736-4213-bfe9-c0660518cda8, 61cf9883-1752-4768-b18c-0d57f2737709
Trivy Check: AWS-0040
Last reviewed: 2026-03-09