EMR clusters should have security configuration enabled
An EMR cluster without a security configuration runs with no enforced encryption or authentication policy. Data on local disks, in EMRFS (S3), and in transit between nodes is unprotected. In multi-tenant or regulated environments, that opens a direct path to interception and unauthorized access.
Security configurations are reusable across clusters, so attaching one is a one-time setup. Skipping it means each cluster silently inherits the least-secure defaults, and that gap is easy to miss during rapid provisioning.
Retrofit consideration
Existing EMR clusters cannot have a security configuration added after creation. You must terminate and recreate the cluster with the security configuration attached, which causes downtime and requires data migration planning.
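In Terraform terms, the retrofit looks like adding one argument, but the plan will show a replacement. A sketch with hypothetical names and ARNs:

```hcl
resource "aws_emr_cluster" "legacy" {
  name          = "legacy-cluster"
  release_label = "emr-6.15.0"
  service_role  = "arn:aws:iam::123456789012:role/example-role"

  # Newly added: on a cluster created without it, this single line
  # forces replacement ("terraform plan" marks the cluster as
  # "must be replaced"), so schedule downtime and plan data migration.
  security_configuration = "example-security-config"

  master_instance_group {
    instance_type = "m5.xlarge"
  }

  ec2_attributes {
    instance_profile = "arn:aws:iam::123456789012:instance-profile/example-instance-profile"
    subnet_id        = "subnet-12345678"
  }
}
```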
Implementation
Choose the approach that matches how you manage Terraform.
If you use terraform-aws-modules/emr/aws, set the right module inputs for this control. You can later migrate to the compliance.tf module with minimal changes because it is compatible by design.
```hcl
module "emr" {
  source  = "terraform-aws-modules/emr/aws"
  version = ">= 3.0.0"

  applications             = ["Spark"]
  autoscaling_iam_role_arn = "arn:aws:iam::123456789012:role/EMR_AutoScaling_DefaultRole"

  core_instance_group = {
    instance_count = 1
    instance_type  = "m5.xlarge"
  }

  create_autoscaling_iam_role   = false
  create_iam_instance_profile   = false
  create_security_configuration = true
  create_service_iam_role       = false

  ec2_attributes = {
    instance_profile = "EMR_EC2_DefaultRole"
    subnet_id        = "subnet-12345678"
  }

  is_private_cluster = false

  kerberos_attributes = {
    kdc_admin_password = var.kdc_admin_password
    realm              = "EC2.INTERNAL"
  }

  master_instance_group = {
    instance_type = "m5.xlarge"
  }

  name          = "abc123"
  release_label = "emr-7.0.0"

  security_configuration = jsonencode({
    EncryptionConfiguration = {
      EnableInTransitEncryption = false
      EnableAtRestEncryption    = true
      AtRestEncryptionConfiguration = {
        S3EncryptionConfiguration = {
          EncryptionMode = "SSE-S3"
        }
      }
    }
    AuthenticationConfiguration = {
      KerberosConfiguration = {
        Provider = "ClusterDedicatedKdc"
        ClusterDedicatedKdcConfiguration = {
          TicketLifetimeInHours = 24
        }
      }
    }
  })

  service_iam_role_arn = "arn:aws:iam::123456789012:role/EMR_DefaultRole"

  tags = {
    "for-use-with-amazon-emr-managed-policies" = "true"
  }

  vpc_id = "vpc-12345678"
}
```
Use AWS provider resources directly. See the provider docs for the resources involved: aws_emr_cluster and aws_emr_security_configuration.
```hcl
resource "aws_emr_cluster" "this" {
  ec2_attributes {
    instance_profile = "arn:aws:iam::123456789012:instance-profile/example-instance-profile"
    subnet_id        = "subnet-12345678"
  }

  kerberos_attributes {
    kdc_admin_password = var.kdc_admin_password # never hardcode secrets in configuration
    realm              = "EC2.INTERNAL"
  }

  master_instance_group {
    instance_type = "m5.xlarge"
  }

  name                   = "pofix-abc123"
  release_label          = "emr-6.15.0"
  security_configuration = "example-security-config"
  service_role           = "arn:aws:iam::123456789012:role/example-role"
}
```
What this control checks
The aws_emr_cluster resource must have its security_configuration argument set to the name of an existing aws_emr_security_configuration resource. It fails when security_configuration is omitted or set to an empty string.
Define an aws_emr_security_configuration resource with a JSON configuration block covering your encryption and authentication requirements, then reference its name in the cluster's security_configuration argument. The control checks only that a configuration name is present, not the contents of the configuration itself.
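A minimal wiring sketch follows; the resource names, ARNs, and policy body are illustrative assumptions, and the policy shown is deliberately small rather than a recommended baseline (enabling in-transit encryption additionally requires a TLS certificate provider block):

```hcl
# Hypothetical names; satisfies the control because a configuration is attached.
resource "aws_emr_security_configuration" "example" {
  name = "example-security-config"

  configuration = jsonencode({
    EncryptionConfiguration = {
      EnableInTransitEncryption = false
      EnableAtRestEncryption    = true
      AtRestEncryptionConfiguration = {
        S3EncryptionConfiguration = {
          EncryptionMode = "SSE-S3"
        }
      }
    }
  })
}

resource "aws_emr_cluster" "example" {
  name          = "example-cluster"
  release_label = "emr-6.15.0"
  service_role  = "arn:aws:iam::123456789012:role/example-role"

  # Reference the configuration by name; this presence check is what
  # the control evaluates. It also creates an implicit dependency, so
  # Terraform creates the security configuration before the cluster.
  security_configuration = aws_emr_security_configuration.example.name

  master_instance_group {
    instance_type = "m5.xlarge"
  }

  ec2_attributes {
    instance_profile = "arn:aws:iam::123456789012:instance-profile/example-instance-profile"
    subnet_id        = "subnet-12345678"
  }
}
```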
Common pitfalls
Security configuration cannot be changed on running clusters
The security_configuration argument on aws_emr_cluster forces replacement. Changing it in Terraform destroys and recreates the cluster, so plan a maintenance window before making this change in production.

Empty security configuration still passes

An aws_emr_security_configuration with a minimal JSON body like {"EncryptionConfiguration":{"EnableInTransitEncryption":false,"EnableAtRestEncryption":false}} satisfies this control because a configuration is attached. The control does not inspect contents; a separate control or policy review is needed to confirm the configuration actually enforces encryption and authentication.

Name collisions across regions

Security configuration names are regional. Referencing a configuration name that exists in a different region than the cluster will fail cluster creation. Make sure the aws_emr_security_configuration and the aws_emr_cluster share the same provider alias and region.

Transient clusters from Step Functions or Lambda

EMR clusters launched via the RunJobFlow API, from Step Functions or Lambda for example, bypass Terraform entirely and are out of scope for this resource control. They still need the SecurityConfiguration parameter set at launch to pass runtime compliance checks.
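For the cross-region pitfall, one option is pinning both resources to a single aliased provider; a sketch, where the alias name, region, and ARNs are illustrative assumptions:

```hcl
provider "aws" {
  alias  = "emr_region"
  region = "us-east-1" # assumption: the region the cluster should live in
}

# Both resources share one provider, so the configuration name is
# guaranteed to resolve in the region the cluster is created in.
resource "aws_emr_security_configuration" "pinned" {
  provider = aws.emr_region
  name     = "pinned-security-config"

  configuration = jsonencode({
    EncryptionConfiguration = {
      EnableInTransitEncryption = false
      EnableAtRestEncryption    = true
      AtRestEncryptionConfiguration = {
        S3EncryptionConfiguration = { EncryptionMode = "SSE-S3" }
      }
    }
  })
}

resource "aws_emr_cluster" "pinned" {
  provider = aws.emr_region

  name                   = "pinned-cluster"
  release_label          = "emr-6.15.0"
  service_role           = "arn:aws:iam::123456789012:role/example-role"
  security_configuration = aws_emr_security_configuration.pinned.name

  master_instance_group {
    instance_type = "m5.xlarge"
  }

  ec2_attributes {
    instance_profile = "arn:aws:iam::123456789012:instance-profile/example-instance-profile"
    subnet_id        = "subnet-12345678"
  }
}
```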
Audit evidence
Config rule evaluations should show each EMR cluster with a non-null SecurityConfiguration property. The aws emr describe-cluster --cluster-id <id> CLI output must include a SecurityConfiguration field with a valid name. Running aws emr describe-security-configuration --name <name> returns the JSON policy confirming encryption and authentication settings are defined. Config compliance snapshots work as point-in-time evidence for all active clusters.
Related controls
Elasticsearch domain node-to-node encryption should be enabled
OpenSearch domains node-to-node encryption should be enabled
Tool mappings
Use these identifiers to cross-reference this control across tools, reports, and evidence.
Compliance.tf Control: emr_cluster_security_configuration_enabled
AWS Config Managed Rule: EMR_KERBEROS_ENABLED
Checkov Check: CKV2_AWS_55
Powerpipe Control: aws_compliance.control.emr_cluster_security_configuration_enabled
Last reviewed: 2026-03-09