Elasticsearch domains should have at least three data nodes
An Elasticsearch cluster with fewer than three data nodes cannot safely distribute primary and replica shards across multiple Availability Zones. If a single node or AZ fails in a two-node cluster, the domain risks split-brain scenarios, unassigned replica shards, or complete read/write unavailability. Three nodes across two or three AZs let shard replication work as designed.
Enabling zone awareness without a sufficient node count is just as dangerous. Elasticsearch assigns replica shards to a different AZ than their primary, but this requires at least one node per AZ and enough total nodes to hold both primary and replica copies. Three data nodes is the minimum at which automated AZ failover provides real protection for production workloads.
Retrofit consideration
Increasing instance_count and enabling zone_awareness_enabled on a live domain triggers a blue/green deployment. Depending on cluster size, shard count, and data volume, this can take hours and temporarily increase costs as both old and new nodes run concurrently.
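As a sketch, the in-place change that triggers the blue/green deployment touches only the `cluster_config` block; everything else on the domain stays as-is (instance type is a placeholder):

```hcl
cluster_config {
  instance_count         = 3    # was 2: raising this forces a blue/green deployment
  instance_type          = "m5.large.elasticsearch"
  zone_awareness_enabled = true # was false

  zone_awareness_config {
    availability_zone_count = 3 # requires a subnet in each AZ under vpc_options
  }
}
```

Plan the change during a low-traffic window and confirm the domain reports `Processing: false` before treating the migration as complete.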
Implementation
Choose the approach that matches how you manage Terraform.
Use AWS provider resources directly. See the Terraform Registry documentation for the resource involved: `aws_elasticsearch_domain`.
```hcl
resource "aws_elasticsearch_domain" "this" {
  advanced_security_options {
    enabled                        = true
    internal_user_database_enabled = true

    master_user_options {
      master_user_name     = "admin"
      master_user_password = "ChangeMe123!" # placeholder; source from a secrets manager
    }
  }

  # Three data nodes spread across three AZs satisfy this control.
  cluster_config {
    dedicated_master_count   = 3
    dedicated_master_enabled = true
    dedicated_master_type    = "m5.large.elasticsearch"
    instance_count           = 3
    instance_type            = "m5.large.elasticsearch"

    zone_awareness_config {
      availability_zone_count = 3
    }

    zone_awareness_enabled = true
  }

  cognito_options {
    enabled          = true
    identity_pool_id = "us-east-1:12345678-1234-1234-1234-123456789012"
    role_arn         = "arn:aws:iam::123456789012:role/example-role"
    user_pool_id     = "us-east-1_AbCdEfGhI"
  }

  domain_endpoint_options {
    enforce_https       = true
    tls_security_policy = "Policy-Min-TLS-1-2-2019-07"
  }

  domain_name = "pofix-abc123"

  ebs_options {
    ebs_enabled = true
    volume_size = 10
    volume_type = "gp3"
  }

  elasticsearch_version = "7.10"

  encrypt_at_rest {
    enabled = true
  }

  log_publishing_options {
    cloudwatch_log_group_arn = local.es_log_group_arn
    log_type                 = "AUDIT_LOGS"
  }

  log_publishing_options {
    cloudwatch_log_group_arn = local.es_log_group_arn
    log_type                 = "ES_APPLICATION_LOGS"
  }

  log_publishing_options {
    cloudwatch_log_group_arn = local.es_log_group_arn
    log_type                 = "SEARCH_SLOW_LOGS"
  }

  log_publishing_options {
    cloudwatch_log_group_arn = local.es_log_group_arn
    log_type                 = "INDEX_SLOW_LOGS"
  }

  node_to_node_encryption {
    enabled = true
  }

  vpc_options {
    security_group_ids = ["sg-12345678"]
    # One subnet per AZ; distinct placeholder IDs shown, replace with your own
    subnet_ids = ["subnet-12345678", "subnet-23456789", "subnet-34567890"]
  }
}
```
What this control checks
This control validates the cluster_config block of aws_elasticsearch_domain or aws_opensearch_domain resources. To pass, cluster_config.instance_count must be 3 or greater and cluster_config.zone_awareness_enabled must be true. When zone awareness is enabled, configure the zone_awareness_config sub-block with availability_zone_count set to 2 or 3 to match your AZ deployment strategy. A domain with instance_count of 1 or 2 fails regardless of the zone_awareness_enabled value. A domain with 3 or more nodes but zone_awareness_enabled = false also fails, because shards won't be distributed across AZs.
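As a sketch, the failing and passing shapes of the `cluster_config` block look like this (instance types are placeholders):

```hcl
# Fails: two data nodes, no zone awareness
cluster_config {
  instance_count         = 2
  instance_type          = "m5.large.elasticsearch"
  zone_awareness_enabled = false
}

# Passes: three data nodes distributed across three AZs
cluster_config {
  instance_count         = 3
  instance_type          = "m5.large.elasticsearch"
  zone_awareness_enabled = true

  zone_awareness_config {
    availability_zone_count = 3
  }
}
```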
Common pitfalls
Dedicated master nodes do not count as data nodes
Dedicated masters handle cluster management only, not data storage. Setting `dedicated_master_enabled = true` with `dedicated_master_count = 3` does nothing for this control. The value the control evaluates is `instance_count` in `cluster_config`.
Zone awareness config ignored when zone awareness is false
Terraform allows you to set `zone_awareness_config { availability_zone_count = 3 }` even when `zone_awareness_enabled = false`. AWS silently ignores the sub-block in that case. The control checks both conditions independently, so both must be set correctly.
Using aws_opensearch_domain vs aws_elasticsearch_domain
`aws_opensearch_domain` is the current resource for OpenSearch domains; `aws_elasticsearch_domain` covers legacy Elasticsearch domains. Both use `cluster_config.instance_count` and `cluster_config.zone_awareness_enabled`, but migrating between resource types requires careful state management with `terraform state mv` to avoid triggering domain recreation.
Instance count not divisible by AZ count
Setting `instance_count = 3` with `availability_zone_count = 2` creates an uneven distribution: two nodes in one AZ, one in the other. The control passes, but the failure domain is asymmetric. With `availability_zone_count = 3`, three nodes spread evenly across AZs.
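To make the first pitfall concrete, the configuration below still fails the control despite running three dedicated masters, because only `instance_count` counts as data nodes (a sketch; instance types are placeholders):

```hcl
cluster_config {
  dedicated_master_enabled = true
  dedicated_master_count   = 3   # master nodes only; not evaluated by this control
  dedicated_master_type    = "m5.large.elasticsearch"
  instance_count           = 2   # fails: fewer than three data nodes
  instance_type            = "m5.large.elasticsearch"
  zone_awareness_enabled   = true
}
```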
Audit evidence
Auditors typically ask for AWS Config rule evaluation results showing all Elasticsearch and OpenSearch domains passing a rule that checks data node count and zone awareness. Supporting API output includes DescribeElasticsearchDomainConfig or DescribeDomain responses showing ElasticsearchClusterConfig.InstanceCount >= 3, ZoneAwarenessEnabled = true, and ZoneAwarenessConfig.AvailabilityZoneCount. Console screenshots of the domain cluster configuration page showing instance count and the zone awareness setting with AZ count work as supporting evidence. Continuous compliance history from AWS Security Hub or a CSPM tool covering the audit period rounds out the package.
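Where continuous compliance history is part of the evidence package, the relevant AWS Config managed rule can itself be enabled via Terraform. A minimal sketch, assuming the managed rule identifier `OPENSEARCH_DATA_NODE_FAULT_TOLERANCE`; verify the identifier against the current AWS Config managed rules list before use:

```hcl
# Enables the AWS Config managed rule that evaluates OpenSearch data node
# count and zone awareness; requires a configuration recorder in the account.
resource "aws_config_config_rule" "opensearch_fault_tolerance" {
  name = "opensearch-data-node-fault-tolerance"

  source {
    owner             = "AWS"
    source_identifier = "OPENSEARCH_DATA_NODE_FAULT_TOLERANCE"
  }
}
```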
Framework-specific interpretation
Tool mappings
Use these identifiers to cross-reference this control across tools, reports, and evidence.
Compliance.tf Control: `es_domain_data_nodes_min_3`
Checkov Check: `CKV_AWS_318`
Powerpipe Controls: `aws_compliance.control.es_domain_data_nodes_min_3`, `aws_compliance.control.opensearch_domain_data_node_fault_tolerance`
Prowler Check: `opensearch_service_domains_fault_tolerant_data_nodes`
AWS Security Hub Controls: ES.6, Opensearch.6
Last reviewed: 2026-03-09