Elasticsearch domains should be configured with at least three dedicated master nodes
Dedicated master nodes handle cluster state management, shard allocation, and index metadata operations such as index creation and deletion. Without them, data nodes must perform these tasks alongside serving queries, which degrades query performance under load and ties cluster stability to data-node health. When a cluster loses its master, it cannot process writes or rebalance shards until a new master is elected.
Three dedicated masters let the cluster tolerate one master failure while maintaining quorum (two of three). A single dedicated master provides no fault tolerance. Two masters risk split-brain, where both nodes believe they are the active master. Three is the minimum safe count for distributed consensus.
Retrofit consideration
Adding or changing dedicated master nodes on a running domain triggers a blue/green deployment. On large clusters this can take hours and temporarily increases resource consumption. Schedule the change during a maintenance window and verify cluster health after the update completes.
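One way to verify the update from a maintenance script is to poll the domain's Processing flag, which stays true while the blue/green deployment is in flight. A minimal sketch using the AWS CLI; the domain name is a placeholder:

```shell
# Poll until the blue/green deployment finishes. DomainStatus.Processing is
# true while the configuration change is in flight, false once it completes.
# "my-domain" is a placeholder -- substitute your domain name.
while [ "$(aws opensearch describe-domain \
    --domain-name my-domain \
    --query 'DomainStatus.Processing' \
    --output text)" = "True" ]; do
  echo "configuration change still in progress..."
  sleep 60
done
echo "deployment complete"
```

After the flag clears, still check cluster health (green status, expected node count) before closing the maintenance window, since Processing only tracks the deployment itself.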
Implementation
Choose the approach that matches how you manage Terraform.
Use AWS provider resources directly. See the provider documentation for the resource involved: aws_elasticsearch_domain.
resource "aws_elasticsearch_domain" "this" {
  advanced_security_options {
    enabled                        = true
    internal_user_database_enabled = true
    master_user_options {
      master_user_name     = "admin"
      master_user_password = "ChangeMe123!" # placeholder -- source from a secrets manager, never hardcode
    }
  }
  # The settings this control checks: three dedicated masters across three AZs.
  cluster_config {
    dedicated_master_count   = 3
    dedicated_master_enabled = true
    dedicated_master_type    = "m5.large.elasticsearch"
    instance_count           = 3
    instance_type            = "m5.large.elasticsearch"
    zone_awareness_config {
      availability_zone_count = 3
    }
    zone_awareness_enabled = true
  }
  cognito_options {
    enabled          = true
    identity_pool_id = "us-east-1:12345678-1234-1234-1234-123456789012"
    role_arn         = "arn:aws:iam::123456789012:role/example-role"
    user_pool_id     = "us-east-1_AbCdEfGhI"
  }
  domain_endpoint_options {
    enforce_https       = true
    tls_security_policy = "Policy-Min-TLS-1-2-2019-07"
  }
  domain_name = "pofix-abc123"
  ebs_options {
    ebs_enabled = true
    volume_size = 10
    volume_type = "gp3"
  }
  elasticsearch_version = "7.10"
  encrypt_at_rest {
    enabled = true
  }
  log_publishing_options {
    cloudwatch_log_group_arn = local.es_log_group_arn
    log_type                 = "AUDIT_LOGS"
  }
  log_publishing_options {
    cloudwatch_log_group_arn = local.es_log_group_arn
    log_type                 = "ES_APPLICATION_LOGS"
  }
  log_publishing_options {
    cloudwatch_log_group_arn = local.es_log_group_arn
    log_type                 = "SEARCH_SLOW_LOGS"
  }
  log_publishing_options {
    cloudwatch_log_group_arn = local.es_log_group_arn
    log_type                 = "INDEX_SLOW_LOGS"
  }
  node_to_node_encryption {
    enabled = true
  }
  vpc_options {
    security_group_ids = ["sg-12345678"]
    subnet_ids         = ["subnet-12345678", "subnet-12345678", "subnet-12345678"]
  }
}
What this control checks
The relevant resources are aws_opensearch_domain and the legacy aws_elasticsearch_domain. Inside cluster_config, two arguments control this: dedicated_master_enabled must be true, and dedicated_master_count must be 3 or higher. Setting dedicated_master_enabled to false or omitting it fails the control. Setting dedicated_master_count to 1 or 2 also fails. Three passes; five passes as well but is typically unnecessary outside very large clusters with frequent cluster state changes. Set dedicated_master_type to an instance type sized for your cluster's index and shard count, for example "m6g.large.search".
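Reduced to only the arguments the control evaluates, a passing configuration on the current aws_opensearch_domain resource looks like the following sketch. The domain name and engine version are illustrative:

```hcl
resource "aws_opensearch_domain" "example" {
  domain_name    = "example-domain"  # illustrative
  engine_version = "OpenSearch_2.11" # illustrative

  cluster_config {
    dedicated_master_enabled = true               # false or omitted: FAIL
    dedicated_master_count   = 3                  # 1 or 2: FAIL; >= 3: PASS
    dedicated_master_type    = "m6g.large.search" # size for shard/index count
    instance_count           = 3
    instance_type            = "m6g.large.search"
  }
}
```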
Common pitfalls
Default dedicated_master_count is unpredictable
If you omit dedicated_master_count alongside dedicated_master_enabled = true, the actual count depends on provider or service defaults that aren't guaranteed to be three. Set it explicitly every time to avoid both ambiguity and drift detection issues.
Legacy inline cluster_config syntax
If you're migrating from aws_elasticsearch_domain, watch for deprecated instance type families like m3.medium.elasticsearch in the cluster_config block. The provider still accepts them, but migrate the resource to aws_opensearch_domain and update instance types to the .search suffix (for example, m6g.large.search).
Undersized dedicated master instances
Setting dedicated_master_count = 3 passes this control, but a small dedicated_master_type like t3.small.search can cause master instability on clusters with high shard counts. Size dedicated masters using current AWS OpenSearch guidance for shard count, index count, and cluster state size.
Multi-AZ interaction
Three dedicated masters work correctly only with zone_awareness_enabled = true and three availability zones (zone_awareness_config { availability_zone_count = 3 }). With two AZs, two of the three masters land in the same zone. Lose that zone and you lose quorum despite technically having three masters.
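The last two pitfalls can be sketched together: a migrated aws_opensearch_domain with non-burstable masters spread across three availability zones. Names, versions, and instance types are illustrative:

```hcl
resource "aws_opensearch_domain" "migrated" {
  domain_name    = "example-domain"  # illustrative
  engine_version = "OpenSearch_2.11" # illustrative

  cluster_config {
    dedicated_master_enabled = true
    dedicated_master_count   = 3
    # Avoid burstable (t3) masters on busy clusters; size for cluster state.
    dedicated_master_type = "m6g.large.search"
    instance_count        = 3
    instance_type         = "m6g.large.search"

    # Spread the three masters across three AZs so losing one zone
    # still leaves a quorum of two.
    zone_awareness_enabled = true
    zone_awareness_config {
      availability_zone_count = 3
    }
  }
}
```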
Audit evidence
AWS Config rule evaluation results should show each domain as COMPLIANT, confirming at least three dedicated master nodes are enabled. The OpenSearch Service domain detail page in the console displays the dedicated master configuration under the "Cluster configuration" tab and can be captured as a screenshot. For programmatic evidence, aws opensearch describe-domain --domain-name <name> returns DedicatedMasterEnabled, DedicatedMasterCount, and DedicatedMasterType inside the ClusterConfig object. Security Hub findings with PASSED status across all domains provide consolidated compliance evidence.
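The CLI evidence can be narrowed to just the relevant fields with a JMESPath query. A sketch; the domain name is a placeholder:

```shell
# Extract only the dedicated master settings for the audit record.
# "my-domain" is a placeholder -- substitute your domain name.
aws opensearch describe-domain \
  --domain-name my-domain \
  --query 'DomainStatus.ClusterConfig.{Enabled: DedicatedMasterEnabled, Count: DedicatedMasterCount, Type: DedicatedMasterType}' \
  --output table
```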
Tool mappings
Use these identifiers to cross-reference this control across tools, reports, and evidence.
Compliance.tf Control: es_domain_dedicated_master_nodes_min_3
Checkov Check: CKV_AWS_318
Powerpipe Control: aws_compliance.control.es_domain_dedicated_master_nodes_min_3
Prowler Check: opensearch_service_domains_fault_tolerant_master_nodes
AWS Security Hub Controls: ES.7, Opensearch.11
Last reviewed: 2026-03-09