ELB classic load balancers should span multiple availability zones
A Classic Load Balancer deployed in a single Availability Zone becomes a single point of failure. If that AZ experiences an outage, all traffic routed through the load balancer stops, even if backend instances exist in other AZs. Multi-AZ load balancing is the minimum bar for any production workload.
Spanning two or more AZs lets the load balancer continue distributing traffic when one zone degrades. Classic Load Balancer pricing is based on load balancer-hours and data processed. Enabling multiple AZs does not add a separate ELB feature fee, but inter-AZ data transfer charges can still apply depending on traffic patterns.
Retrofit consideration
Adding AZ subnets to an existing Classic Load Balancer may require creating subnets in VPCs that lack them, and can trigger brief connectivity changes as the ELB provisions new ENIs in the additional zones.
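As a sketch of such a retrofit (the VPC reference, CIDR block, and AZ name are assumptions, not values from your environment), the usual pattern is to create the missing subnet first and then extend the load balancer's subnets list:

```hcl
# New subnet in the additional AZ; assumes a VPC named aws_vpc.main already exists
resource "aws_subnet" "az_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}
```

The new subnet ID is then appended to the existing subnets argument on the load balancer. Terraform applies this in place, but expect brief ENI provisioning activity as the ELB expands into the new zone.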
Implementation
Choose the approach that matches how you manage Terraform.
Use the compliance.tf module to enforce this control by default. See get started with compliance.tf.
module "elb" {
  source  = "rbiitfnbfc.compliance.tf/terraform-aws-modules/elb/aws"
  version = ">= 4.0.0, < 5.0.0"

  health_check = {
    healthy_threshold   = 2
    interval            = 30
    target              = "HTTP:80/"
    timeout             = 5
    unhealthy_threshold = 2
  }
  internal = true
  listener = [
    {
      instance_port     = 80
      instance_protocol = "HTTP"
      lb_port           = 80
      lb_protocol       = "HTTP"
    }
  ]
  name            = "abc123"
  security_groups = ["sg-abc12345"]
  subnets         = ["subnet-abc123", "subnet-def456"]
}
If you use terraform-aws-modules/elb/aws, set the right module inputs for this control. You can later migrate to the compliance.tf module with minimal changes because it is compatible by design.
module "elb" {
  source  = "terraform-aws-modules/elb/aws"
  version = ">= 4.0.0, < 5.0.0"

  health_check = {
    healthy_threshold   = 2
    interval            = 30
    target              = "HTTP:80/"
    timeout             = 5
    unhealthy_threshold = 2
  }
  internal = true
  listener = [
    {
      instance_port     = 80
      instance_protocol = "HTTP"
      lb_port           = 80
      lb_protocol       = "HTTP"
    }
  ]
  name            = "abc123"
  security_groups = ["sg-abc12345"]
  subnets         = ["subnet-abc123", "subnet-def456"]
}
Use AWS provider resources directly. See docs for the resources involved: aws_elb.
resource "aws_elb" "this" {
  internal = true

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  subnets = ["subnet-abc123", "subnet-def456"]
}
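Cross-zone load balancing, noted under common pitfalls below, is a separate argument on the same resource and is usually reviewed together with multi-AZ placement. A minimal sketch of enabling it (the subnet IDs are placeholders):

```hcl
resource "aws_elb" "this" {
  internal                  = true
  cross_zone_load_balancing = true # evens out traffic across the AZs below

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  subnets = ["subnet-abc123", "subnet-def456"]
}
```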
What this control checks
The aws_elb resource controls AZ placement for Classic Load Balancers. In VPC deployments, the subnets argument must reference subnets from at least two different AZs. For EC2-Classic (legacy), use the availability_zones argument with two or more zone names. The control fails if only one subnet or one AZ is specified. A passing VPC configuration looks like subnets = [aws_subnet.az_a.id, aws_subnet.az_b.id].
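A passing VPC setup can be sketched with explicit subnet resources (the VPC reference, CIDR blocks, and AZ names here are assumptions):

```hcl
resource "aws_subnet" "az_a" {
  vpc_id            = aws_vpc.main.id # assumed existing VPC
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "az_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

resource "aws_elb" "this" {
  subnets = [aws_subnet.az_a.id, aws_subnet.az_b.id] # two distinct AZs

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}
```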
Common pitfalls
Single subnet passed as a list
Setting subnets = [var.subnet_id] where the variable resolves to one subnet satisfies Terraform syntax but fails the control. Ensure the variable or data source always returns subnets across at least two AZs.
Mixing availability_zones and subnets
The aws_elb resource does not allow both availability_zones and subnets to be set simultaneously. Using the wrong argument for your network context (VPC vs. EC2-Classic) causes a plan error. In VPC deployments, always use subnets.
Cross-zone load balancing disabled
With cross_zone_load_balancing set to false, spanning multiple AZs satisfies this control but traffic distribution across zones will be uneven. It is a separate setting on aws_elb, but it is almost always flagged in the same review.
Subnet CIDR exhaustion in new AZ
Adding a second AZ requires a subnet in that zone, which requires available CIDR space in the VPC. If all blocks are allocated, add a secondary CIDR via aws_vpc_ipv4_cidr_block_association first, then create the subnet.
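The CIDR-exhaustion case can be sketched as follows (the secondary CIDR block, AZ name, and VPC reference are assumptions):

```hcl
# Attach a secondary CIDR block to the VPC, then carve the new subnet from it
resource "aws_vpc_ipv4_cidr_block_association" "secondary" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.1.0.0/16"
}

resource "aws_subnet" "az_b" {
  # referencing the association ensures the CIDR exists before the subnet is created
  vpc_id            = aws_vpc_ipv4_cidr_block_association.secondary.vpc_id
  cidr_block        = "10.1.0.0/24"
  availability_zone = "us-east-1b"
}
```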
Audit evidence
An auditor expects AWS Config rule evaluation results showing each AWS::ElasticLoadBalancing::LoadBalancer as COMPLIANT. Supporting evidence includes aws elb describe-load-balancers output where the AvailabilityZones array contains two or more entries per load balancer. Console screenshots showing multiple AZs on the load balancer description page are acceptable secondary evidence.
For continuous compliance, the managed Config rule clb-multiple-az evaluated across accounts and regions is the primary audit trail. Security Hub may surface related controls depending on enabled standards, but PASSED findings aren't guaranteed for this check in all environments.
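For teams codifying the audit trail, the managed rule itself can be deployed with Terraform. A sketch (the configuration recorder reference is an assumption; one must already exist in the account):

```hcl
resource "aws_config_config_rule" "clb_multiple_az" {
  name = "clb-multiple-az"

  source {
    owner             = "AWS"
    source_identifier = "CLB_MULTIPLE_AZ"
  }

  depends_on = [aws_config_configuration_recorder.main] # assumed existing recorder
}
```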
Tool mappings
Use these identifiers to cross-reference this control across tools, reports, and evidence.
Compliance.tf Control: elb_classic_lb_multiple_az_configured
AWS Config Managed Rule: CLB_MULTIPLE_AZ
Powerpipe Control: aws_compliance.control.elb_classic_lb_multiple_az_configured
Prowler Check: elb_is_in_multiple_az
AWS Security Hub Control: ELB.10
Last reviewed: 2026-03-09