EKS clusters endpoint public access should be restricted

An EKS cluster with a publicly accessible API server endpoint exposes the Kubernetes control plane to the internet. Even with authentication in place, this widens the attack surface considerably: credential stuffing, zero-day exploits against the API server, and reconnaissance are all possible from any IP address. Disabling public access forces all API traffic through your VPC, where security groups, NACLs, and private connectivity (VPN, Direct Connect, VPC peering) provide layered defense.

If you need to keep public access on temporarily during migration, restrict public_access_cidrs to specific IPs rather than leaving it at the default 0.0.0.0/0. Private-only access is the correct default for production clusters.
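A migration-phase configuration can pin the public endpoint to known egress IPs while private connectivity is stood up. A minimal sketch, using a hypothetical office CIDR and placeholder IDs:

```hcl
resource "aws_eks_cluster" "this" {
  name     = "example"
  role_arn = "arn:aws:iam::123456789012:role/example-role"

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = true                # temporary, during migration only
    public_access_cidrs     = ["203.0.113.0/24"]  # hypothetical office egress range
    subnet_ids              = ["subnet-abc123", "subnet-def456"]
  }
}
```

Note that this interim state still fails the control, because endpoint_public_access remains true; it only narrows exposure until you can flip the boolean to false.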

Retrofit consideration

Switching an existing cluster from public to private endpoint access requires that your CI/CD runners, developer workstations, and any external tooling can reach the API server through the VPC (via VPN, Direct Connect, or bastion). Cutting public access without this connectivity in place will lock you out of the cluster.

Implementation

Choose the approach that matches how you manage Terraform.

If you use terraform-aws-modules/eks/aws, set the right module inputs for this control. You can later migrate to the compliance.tf module with minimal changes because it is compatible by design.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = ">=21.0.0"

  include_oidc_root_ca_thumbprint = false
  name                            = "abc123"
  subnet_ids                      = ["subnet-abc123", "subnet-def456"]
  vpc_id                          = "vpc-12345678"

  # Disable the internet-facing API server endpoint; all API traffic
  # must then flow through the VPC or connected networks.
  endpoint_public_access = false
}

If you use AWS provider resources directly, see the provider docs for the resource involved: aws_eks_cluster.

resource "aws_eks_cluster" "this" {
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  encryption_config {
    provider {
      key_arn = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
    }
    resources = ["secrets"]
  }

  name     = "pofix-abc123"
  role_arn = "arn:aws:iam::123456789012:role/example-role"

  vpc_config {
    endpoint_private_access = true   # keep the API reachable from inside the VPC
    endpoint_public_access  = false  # no internet-facing endpoint
    subnet_ids              = ["subnet-abc123", "subnet-def456"]
  }
}

What this control checks

In the aws_eks_cluster resource, the vpc_config block controls API server endpoint exposure. To pass, set endpoint_public_access = false and endpoint_private_access = true within vpc_config. The default for endpoint_public_access is true, so any cluster created without setting it explicitly will fail. When endpoint_public_access is false, the Kubernetes API server is only reachable from within the VPC or through connected networks. endpoint_private_access must also be true for the cluster to remain manageable; setting both to false is an invalid configuration. If endpoint_public_access is true for any reason, the control fails regardless of what public_access_cidrs contains.
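The silent-fail default described above can be shown side by side. A minimal sketch with placeholder names and ARNs:

```hcl
# FAILS the control: endpoint_public_access defaults to true and
# endpoint_private_access defaults to false when omitted.
resource "aws_eks_cluster" "implicit" {
  name     = "example-implicit"
  role_arn = "arn:aws:iam::123456789012:role/example-role"

  vpc_config {
    subnet_ids = ["subnet-abc123", "subnet-def456"]
  }
}

# PASSES: both endpoint settings stated explicitly.
resource "aws_eks_cluster" "explicit" {
  name     = "example-explicit"
  role_arn = "arn:aws:iam::123456789012:role/example-role"

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = false
    subnet_ids              = ["subnet-abc123", "subnet-def456"]
  }
}
```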

Common pitfalls

  • Default is public

    The endpoint_public_access argument in aws_eks_cluster defaults to true. Omitting it from your vpc_config block means the cluster fails this control silently. Always set it explicitly.

  • DNS resolution requires private access enabled

    When you set endpoint_public_access = false, the cluster API endpoint DNS resolves to private VPC IPs. If endpoint_private_access is not also true, you lose all access to the cluster. Always pair the two settings.

  • kubectl access from CI/CD breaks

    External CI/CD systems (GitHub Actions, GitLab SaaS) can't reach a private-only endpoint over the public internet. You need private connectivity: a self-hosted runner inside the VPC, or access through VPN, Direct Connect, or peering. EKS access entries and aws eks update-kubeconfig handle authentication and kubeconfig generation, but they don't provide network reachability to a private endpoint.

  • public_access_cidrs does not satisfy this control

    Setting public_access_cidrs to a narrow CIDR like your office IP still fails this control because endpoint_public_access remains true. The control checks the boolean, not the CIDR list.

  • VPC CNI and node registration

    Nodes in private subnets need to reach the API server after public access is disabled. Without the required interface VPC endpoints (com.amazonaws.<region>.eks, com.amazonaws.<region>.ec2, com.amazonaws.<region>.ecr.api, com.amazonaws.<region>.sts) or a NAT gateway, node registration will silently fail.
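The interface endpoints listed in the last pitfall can be created in Terraform. A sketch with placeholder VPC, subnet, and security group IDs; the security group must allow HTTPS (443) from the node subnets:

```hcl
locals {
  # Interface endpoints nodes need once the public endpoint is gone
  # and there is no NAT gateway.
  endpoint_services = ["eks", "ec2", "ecr.api", "sts"]
}

resource "aws_vpc_endpoint" "this" {
  for_each = toset(local.endpoint_services)

  vpc_id              = "vpc-12345678"
  service_name        = "com.amazonaws.us-east-1.${each.value}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = ["subnet-abc123", "subnet-def456"]
  security_group_ids  = ["sg-abc123"]  # must allow 443 from node subnets
  private_dns_enabled = true
}
```

Container image pulls from ECR typically also require the com.amazonaws.&lt;region&gt;.ecr.dkr interface endpoint and an S3 gateway endpoint, since ECR stores image layers in S3.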

Audit evidence

Config rule evaluation results for eks-cluster-endpoint-public-access returning COMPLIANT for all in-scope clusters are the primary evidence. Console screenshots of each cluster's Networking tab showing "Public access" as Disabled and "Private access" as Enabled are supplementary. For continuous assurance, Security Hub findings filtered to this control ID and showing a passing status across all accounts and regions work well alongside Config.

CloudTrail UpdateClusterConfig events show when public access was disabled and which principal made the change, giving auditors a point-in-time record they can tie back to a change ticket.
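The Config rule that produces this evidence can itself be managed in Terraform, using the managed rule identifier from the tool mappings below. A sketch; it assumes an active AWS Config configuration recorder already exists in the account and region:

```hcl
resource "aws_config_config_rule" "eks_endpoint_no_public_access" {
  name = "eks-cluster-endpoint-public-access"

  source {
    owner             = "AWS"
    source_identifier = "EKS_ENDPOINT_NO_PUBLIC_ACCESS"
  }

  scope {
    compliance_resource_types = ["AWS::EKS::Cluster"]
  }
}
```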

Tool mappings

Use these identifiers to cross-reference this control across tools, reports, and evidence.

  • Compliance.tf Control: eks_cluster_endpoint_public_access_restricted

  • AWS Config Managed Rule: EKS_ENDPOINT_NO_PUBLIC_ACCESS

  • Checkov Checks: CKV_AWS_38, CKV_AWS_39

  • Powerpipe Controls: aws_compliance.control.eks_cluster_endpoint_public_access_restricted, aws_compliance.control.eks_cluster_endpoint_restrict_public_access

  • Prowler Checks: eks_cluster_not_publicly_accessible, eks_cluster_private_nodes_enabled

  • AWS Security Hub Control: EKS.1

  • KICS Queries: 42f4b905-3736-4213-bfe9-c0660518cda8, 61cf9883-1752-4768-b18c-0d57f2737709

  • Trivy Checks: AWS-0040, AWS-0041

Last reviewed: 2026-03-09