ElastiCache for Redis replication groups should be encrypted with CMK
AWS-managed keys give you encryption but no control over key rotation schedules, key policies, or cross-account access grants. With a CMK, you own the key lifecycle: you can disable or delete the key to render cached data unrecoverable, enforce separation of duties through IAM and KMS key policies, and audit every decrypt call in CloudTrail. Redis replication groups often cache sensitive session tokens, PII, or query results that deserve the same key governance you apply to your primary data stores.
Losing control of the encryption key means losing the ability to provably revoke access to cached data during an incident or offboarding event.
Retrofit consideration
Setting kms_key_id or enabling at_rest_encryption_enabled on an existing replication group forces replacement: Terraform destroys and recreates the replication group, causing full cache eviction and downtime. Plan a maintenance window and pre-warm the new cluster before cutting over.
Implementation
Choose the approach that matches how you manage Terraform.
Use the compliance.tf module to enforce this control by default. See get started with compliance.tf.
module "elasticache" {
source = "registry.compliance.tf/terraform-aws-modules/elasticache/aws"
version = ">=1.0.0,<2.0.0"
description = "Redis cluster"
engine = "redis"
engine_version = "7.1"
node_type = "cache.t3.micro"
num_cache_clusters = 2
replication_group_id = "abc123"
subnet_ids = ["subnet-12345678", "subnet-87654321"]
vpc_id = "vpc-12345678"
}
This control is enforced automatically with Compliance.tf modules.
If you use terraform-aws-modules/elasticache/aws, set the right module inputs for this control. You can later migrate to the compliance.tf module with minimal changes because it is compatible by design.
module "elasticache" {
source = "terraform-aws-modules/elasticache/aws"
version = ">=1.0.0,<2.0.0"
description = "Redis cluster"
engine = "redis"
engine_version = "7.1"
node_type = "cache.t3.micro"
num_cache_clusters = 2
replication_group_id = "abc123"
subnet_ids = ["subnet-12345678", "subnet-87654321"]
vpc_id = "vpc-12345678"
at_rest_encryption_enabled = true
kms_key_id = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
}
Use AWS provider resources directly. See docs for the resources involved: aws_elasticache_replication_group.
resource "aws_elasticache_replication_group" "this" {
at_rest_encryption_enabled = true
auth_token = "PofixExampleAuthToken32CharsLng"
description = "pofix example replication group"
node_type = "cache.t3.micro"
num_cache_clusters = 2
replication_group_id = "pofix-abc123"
snapshot_retention_limit = 15
subnet_group_name = "example-subnet-group"
transit_encryption_enabled = true
kms_key_id = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
}
What this control checks
In aws_elasticache_replication_group, two arguments must be set: at_rest_encryption_enabled must be true, and kms_key_id must reference a customer-managed KMS key (a key ARN or key ID from an aws_kms_key resource). A configuration passes when both are explicitly set with valid values pointing to a CMK you own. It fails when at_rest_encryption_enabled is false or omitted, or when kms_key_id is absent, which causes ElastiCache to fall back to the default AWS-managed aws/elasticache key.
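For contrast, a configuration like the following fails the check: encryption at rest is enabled, but kms_key_id is omitted, so ElastiCache falls back to the AWS-managed aws/elasticache key. The resource name and IDs below are illustrative placeholders.

```hcl
# FAILS the control: no kms_key_id, so the default AWS-managed
# aws/elasticache key is used instead of a CMK you own.
resource "aws_elasticache_replication_group" "noncompliant" {
  replication_group_id       = "pofix-abc123"
  description                = "pofix example replication group"
  node_type                  = "cache.t3.micro"
  num_cache_clusters         = 2
  at_rest_encryption_enabled = true
}
```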
Common pitfalls
Replacement required on existing clusters
Both at_rest_encryption_enabled and kms_key_id are ForceNew arguments on aws_elasticache_replication_group. Terraform destroys and recreates the replication group, wiping the cache entirely. You can use create_before_destroy if you also rename the resource (ElastiCache group names must be unique), but you will still lose all cached data and need to pre-warm the new cluster after cutover.
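One way to sketch that migration, under the assumption that you cut over at the application layer: the resource name redis_v2, the -v2 group ID, and the key ARN below are all hypothetical placeholders.

```hcl
# Hypothetical replacement group: a new replication_group_id lets the
# encrypted group come up alongside the old one during the maintenance window.
resource "aws_elasticache_replication_group" "redis_v2" {
  replication_group_id       = "pofix-abc123-v2" # must differ from the old group's ID
  description                = "encrypted replacement for pofix-abc123"
  node_type                  = "cache.t3.micro"
  num_cache_clusters         = 2
  at_rest_encryption_enabled = true
  kms_key_id                 = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"

  lifecycle {
    create_before_destroy = true
  }
}
```

After pre-warming the new group, repoint application endpoints to it and remove the old resource from your configuration.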
KMS key policy must grant ElastiCache access
Cluster creation fails with an opaque InvalidParameterValue error if your CMK's key policy does not include kms:GenerateDataKey* and kms:Decrypt for the elasticache.amazonaws.com service principal (or the account root with appropriate IAM delegation). Validate the key policy grants before running terraform apply, not after.
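A minimal sketch of such a key policy in Terraform follows. The statement Sids, the account ID, and the exact action list beyond kms:Decrypt and kms:GenerateDataKey* are illustrative assumptions; validate them against the current AWS documentation for ElastiCache's KMS requirements.

```hcl
resource "aws_kms_key" "redis" {
  description         = "CMK for ElastiCache at-rest encryption"
  enable_key_rotation = true

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Keep the account root as key administrator so the key
        # never becomes unmanageable.
        Sid       = "AllowAccountAdmin"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::123456789012:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        # Grant the ElastiCache service principal the calls it needs
        # to encrypt and decrypt data with this key.
        Sid       = "AllowElastiCacheUseOfTheKey"
        Effect    = "Allow"
        Principal = { Service = "elasticache.amazonaws.com" }
        Action = [
          "kms:Decrypt",
          "kms:GenerateDataKey*",
          "kms:DescribeKey", # commonly required by service integrations
        ]
        Resource = "*"
      }
    ]
  })
}
```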
Alias strings can pass plan but fail at apply
Passing a bare alias like alias/my-key to kms_key_id may pass terraform plan without error but cause a creation failure at apply time. Reference aws_kms_key.example.arn directly to ensure you are passing a key ARN that ElastiCache can resolve without ambiguity.
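Assuming the CMK is managed in the same configuration (here as a hypothetical aws_kms_key.redis defined elsewhere), pass its ARN attribute instead of an alias string:

```hcl
# Prefer the key's ARN attribute over a bare "alias/my-key" string.
resource "aws_elasticache_replication_group" "this" {
  replication_group_id       = "pofix-abc123"
  description                = "pofix example replication group"
  node_type                  = "cache.t3.micro"
  num_cache_clusters         = 2
  at_rest_encryption_enabled = true
  kms_key_id                 = aws_kms_key.redis.arn # resolves to a full key ARN
}
```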
Pending deletion KMS key breaks the cluster
If the referenced CMK enters the PendingDeletion state, ElastiCache loses the ability to decrypt data and the replication group becomes unusable. Add lifecycle { prevent_destroy = true } to your aws_kms_key resource and alarm on scheduled deletions (for example, via a CloudTrail metric filter on ScheduleKeyDeletion events) so you catch a scheduled deletion before it takes effect.
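The Terraform-side guard is a one-line lifecycle setting on the key, shown here on a hypothetical aws_kms_key.redis resource:

```hcl
resource "aws_kms_key" "redis" {
  description         = "CMK for ElastiCache at-rest encryption"
  enable_key_rotation = true

  # Terraform errors out on any plan that would destroy this key,
  # guarding against an accidental ScheduleKeyDeletion via "terraform destroy".
  lifecycle {
    prevent_destroy = true
  }
}
```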
Audit evidence
Config rule evaluation results should show each AWS::ElastiCache::ReplicationGroup resource as compliant, confirming both encryption at rest and a CMK ARN. The DescribeReplicationGroups API response should show AtRestEncryptionEnabled: true and a KmsKeyId value that maps to a customer-managed key. Verify via aws kms describe-key --key-id <arn> that KeyManager is CUSTOMER.
CloudTrail logs showing kms:Decrypt and kms:GenerateDataKey events associated with ElastiCache activity serve as ongoing proof the CMK is in active use. Compliance dashboard exports covering this control complete the evidence package.
Related controls
Tool mappings
Use these identifiers to cross-reference this control across tools, reports, and evidence.
- Compliance.tf Control: elasticache_replication_group_encryption_at_rest_enabled_with_kms_cmk
- AWS Config Managed Rule: ELASTICACHE_REPL_GRP_ENCRYPTED_AT_REST
- Checkov Check: CKV_AWS_191
- Powerpipe Control: aws_compliance.control.elasticache_replication_group_encryption_at_rest_enabled_with_kms_cmk
- Prowler Check: elasticache_redis_cluster_rest_encryption_enabled
- AWS Security Hub Control: ElastiCache.4
- Trivy Check: AWS-0045
Last reviewed: 2026-03-09