Redshift clusters should have audit logging enabled

Redshift audit logs capture connection attempts, user activity, and query execution against your data warehouse. Without them, you lose visibility into who accessed sensitive data, when, and what they ran. That gap makes breach investigation slow and unreliable.

Redshift often holds large volumes of business-critical data. Logging to a dedicated S3 bucket gives your security team a durable, tamper-resistant record that survives cluster deletion or modification. Centralizing those logs also feeds SIEM tools and anomaly detection pipelines that depend on complete coverage across all data stores.

Retrofit consideration

Enabling audit logging on an existing cluster does not cause downtime, but the target S3 bucket must exist with the correct bucket policy granting Redshift write access before you apply the change.

Implementation

Choose the approach that matches how you manage Terraform.

Use the compliance.tf module to enforce this control by default; the registry prefix in the module source selects the compliance framework profile (for example, soc2.compliance.tf for SOC 2 or pcidssv321.compliance.tf for PCI DSS v3.2.1). See get started with compliance.tf.

module "redshift" {
  source  = "soc2.compliance.tf/terraform-aws-modules/redshift/aws"
  version = ">=7.0.0,<8.0.0"

  automated_snapshot_retention_period = 7
  cluster_identifier                  = "abc123"
  database_name                       = "mydb"
  logging = {
    # S3 destination keeps a durable record that survives cluster deletion.
    # Bucket name is a placeholder; the bucket must already exist with a
    # policy granting Redshift write access.
    log_destination_type = "s3"
    bucket_name          = "my-redshift-audit-logs"
    s3_key_prefix        = "redshift-audit/"
  }
  master_password_wo     = "change-me-in-production"
  master_username        = "admin"
  node_type              = "ra3.xlplus"
  number_of_nodes        = 2
  subnet_ids             = ["subnet-12345678", "subnet-23456789", "subnet-34567890"]
  vpc_id                 = "vpc-12345678"
  vpc_security_group_ids = ["sg-12345678"]
}

If you use terraform-aws-modules/redshift/aws, configure the logging input so audit logs are delivered to S3, which satisfies this control. You can later migrate to the compliance.tf module with minimal changes because it is compatible by design.

module "redshift" {
  source  = "terraform-aws-modules/redshift/aws"
  version = ">=7.0.0,<8.0.0"

  automated_snapshot_retention_period = 7
  cluster_identifier                  = "abc123"
  database_name                       = "mydb"
  logging = {
    # S3 destination keeps a durable record that survives cluster deletion.
    # Bucket name is a placeholder; the bucket must already exist with a
    # policy granting Redshift write access.
    log_destination_type = "s3"
    bucket_name          = "my-redshift-audit-logs"
    s3_key_prefix        = "redshift-audit/"
  }
  master_password_wo     = "change-me-in-production"
  master_username        = "admin"
  node_type              = "ra3.xlplus"
  number_of_nodes        = 2
  subnet_ids             = ["subnet-12345678", "subnet-23456789", "subnet-34567890"]
  vpc_id                 = "vpc-12345678"
  vpc_security_group_ids = ["sg-12345678"]
}

Use AWS provider resources directly. See docs for the resources involved: aws_redshift_cluster and aws_redshift_logging.

resource "aws_redshift_cluster" "this" {
  automated_snapshot_retention_period = 7
  cluster_identifier                  = "pofix-abc123"
  cluster_subnet_group_name           = "example-redshift-subnet-group"
  master_password                     = "ChangeMe123!"
  master_username                     = "admin"
  node_type                           = "ra3.large"
  skip_final_snapshot                 = true
}

# The inline logging block on aws_redshift_cluster is deprecated; use the
# standalone aws_redshift_logging resource to deliver audit logs to S3.
resource "aws_redshift_logging" "this" {
  cluster_identifier   = aws_redshift_cluster.this.cluster_identifier
  log_destination_type = "s3"
  bucket_name          = "my-redshift-audit-logs" # placeholder; must exist with a policy granting Redshift write access
  s3_key_prefix        = "redshift-audit/"
}

What this control checks

Each aws_redshift_cluster passes when paired with an aws_redshift_logging resource that sets cluster_identifier to the cluster and log_destination_type to "s3". With S3 as the destination, bucket_name must reference an existing bucket; s3_key_prefix is optional and organizes log files within it. A cluster fails when no aws_redshift_logging resource targets it, or when the control specifies an expected bucket name and the configured bucket_name does not match.

Common pitfalls

  • Deprecated inline logging block

    The logging block inside aws_redshift_cluster is deprecated in current provider versions. Use the standalone aws_redshift_logging resource instead.

  • S3 bucket policy missing Redshift service principal

    The destination bucket needs a policy granting s3:PutObject to the Redshift service principal for your region (the account ID varies by region; check the AWS documentation). Without it, log delivery fails and the cluster shows as non-compliant on inspection.

  • Bucket name mismatch when parameter is specified

    When the Config rule or compliance check specifies a bucketNames parameter, the bucket_name on aws_redshift_logging must match exactly. Logging to a different bucket still counts as non-compliant even though logs are being captured.

  • CloudWatch destination does not satisfy bucket checks

    log_destination_type set to "cloudwatch" enables audit logging but produces no S3 output. Any check that validates bucket name matching will fail, even though connection and query events are being logged.
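
The bucket policy described in the second pitfall can be sketched in Terraform. This is a sketch, not a definitive policy: the bucket name is a placeholder, and it uses the redshift.amazonaws.com service principal; some regions or older configurations instead require a region-specific log-delivery account ID, so check the AWS documentation for your region.

```hcl
# Grant the Redshift service principal write access to the audit log bucket.
data "aws_iam_policy_document" "redshift_logs" {
  statement {
    sid       = "RedshiftPutAuditLogs"
    actions   = ["s3:PutObject"]
    resources = ["arn:aws:s3:::my-redshift-audit-logs/*"] # placeholder bucket
    principals {
      type        = "Service"
      identifiers = ["redshift.amazonaws.com"]
    }
  }
  statement {
    sid       = "RedshiftGetBucketAcl"
    actions   = ["s3:GetBucketAcl"]
    resources = ["arn:aws:s3:::my-redshift-audit-logs"]
    principals {
      type        = "Service"
      identifiers = ["redshift.amazonaws.com"]
    }
  }
}

resource "aws_s3_bucket_policy" "redshift_logs" {
  bucket = "my-redshift-audit-logs"
  policy = data.aws_iam_policy_document.redshift_logs.json
}
```

Apply this policy before enabling logging; otherwise delivery fails silently and the cluster remains non-compliant.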

Audit evidence

Auditors expect Config evaluation results for the redshift-audit-logging-enabled managed rule (source identifier REDSHIFT_AUDIT_LOGGING_ENABLED) showing all clusters COMPLIANT. Supporting evidence includes the Redshift console's cluster properties page showing audit logging status and the target bucket name. CloudTrail events for EnableLogging and DisableLogging show when logging was configured or changed, and by whom.

For deeper validation, auditors may request sample log files from the designated S3 bucket, confirming connection logs, user logs, and user activity logs are being delivered. S3 server access logs on the destination bucket can further confirm the integrity of the audit trail.
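
One way to produce that Config evidence continuously is to deploy the managed rule with Terraform. This is a sketch: it assumes an AWS Config recorder is already running in the account, and the bucket name passed to the optional bucketNames parameter is a placeholder.

```hcl
resource "aws_config_config_rule" "redshift_audit_logging" {
  name = "redshift-cluster-audit-logging-enabled"

  source {
    owner             = "AWS"
    source_identifier = "REDSHIFT_AUDIT_LOGGING_ENABLED"
  }

  # Optional: also fail clusters that log to any bucket other than the
  # expected one (see the bucket name mismatch pitfall above).
  input_parameters = jsonencode({
    bucketNames = "my-redshift-audit-logs" # placeholder
  })
}
```

Omit input_parameters if any destination bucket is acceptable; the rule then checks only that audit logging is enabled.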

Framework-specific interpretation

SOC 2: CC7.2 and CC7.3 call for monitoring system components and responding to detected events. Redshift audit logs supply the raw data for spotting unauthorized queries or suspicious access patterns in data warehouse environments.

PCI DSS v4.0: Requirement 10 mandates logging all access to system components and cardholder data. Connection attempts, user activity, and query records from Redshift audit logging are what examiners ask to see when cardholder data touches your data warehouse.

Tool mappings

Use these identifiers to cross-reference this control across tools, reports, and evidence.

  • Compliance.tf Control: redshift_cluster_audit_logging_enabled

  • AWS Config Managed Rule: REDSHIFT_AUDIT_LOGGING_ENABLED

  • Checkov Check: CKV_AWS_71

  • Powerpipe Controls: aws_compliance.control.redshift_cluster_audit_logging_enabled, aws_compliance.control.redshift_cluster_encryption_logging_enabled

  • Prowler Check: redshift_cluster_audit_logging

  • AWS Security Hub Control: Redshift.4

  • KICS Query: 15ffbacc-fa42-4f6f-a57d-2feac7365caa

Last reviewed: 2026-03-09