S3 buckets should have logging enabled
S3 server access logs capture requests made to a bucket, including the requester, bucket name, request time, action, response status, and error codes, on a best-effort basis. Without them, visibility into who accessed or modified objects drops sharply, making breach investigation and unauthorized access detection significantly harder.
Access logs also feed into SIEM tools for anomaly detection. A single unlogged bucket in an otherwise monitored environment creates a blind spot that attackers may exploit.
Retrofit consideration
Enabling logging on an existing bucket causes no downtime, but you need a target bucket whose ACL or bucket policy grants the S3 logging service principal (logging.s3.amazonaws.com) write access. Logs are not generated retroactively for activity that predates enablement.
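The target bucket must accept writes from the S3 log delivery service before any logs arrive. A minimal sketch of that bucket policy in Terraform, assuming a dedicated log bucket named example-access-logs and a logs/ prefix (both hypothetical); the aws:SourceAccount condition follows AWS's recommendation for limiting confused-deputy writes:

```hcl
data "aws_caller_identity" "current" {}

# Dedicated bucket that receives server access logs (name is an example)
resource "aws_s3_bucket" "logs" {
  bucket = "example-access-logs"
}

# Grant the S3 log delivery service permission to write log objects
resource "aws_s3_bucket_policy" "logs" {
  bucket = aws_s3_bucket.logs.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "S3ServerAccessLogsPolicy"
      Effect    = "Allow"
      Principal = { Service = "logging.s3.amazonaws.com" }
      Action    = "s3:PutObject"
      Resource  = "${aws_s3_bucket.logs.arn}/logs/*"
      Condition = {
        StringEquals = {
          "aws:SourceAccount" = data.aws_caller_identity.current.account_id
        }
      }
    }]
  })
}
```

With this in place, any source bucket in the same account can point its target_bucket at example-access-logs.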
Implementation
Choose the approach that matches how you manage Terraform.
Use the compliance.tf module to enforce this control by default. See get started with compliance.tf.
Pick the registry endpoint that matches your framework; the module usage is otherwise identical. For example, for SOC 2:

module "s3_bucket" {
  source  = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"
  version = ">=5.0.0"
  bucket  = "abc123"
}

For other frameworks, substitute the subdomain in source: pcidss, hipaa, nist80053, nistcsf, fedrampmoderate, cisv80ig1, nist800171, cisacyberessentials, nydfs23, cisv140, ffiec, acscessentialeight, acscism2023, cfrpart11, rbicybersecurity, rbiitfnbfc, fedramplow, cisv130, cisv71ig1, hipaasecurity2003, nistcsfv11, nist80053rev4, or pcidssv321.
If you use terraform-aws-modules/s3-bucket/aws, set the right module inputs for this control. You can later migrate to the compliance.tf module with minimal changes because it is compatible by design.
module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = ">=5.0.0"
  bucket  = "abc123"

  logging = {
    target_bucket = "abc123-logs" # example name for a dedicated log bucket
    target_prefix = "logs/"
  }
}
Use AWS provider resources directly. See docs for the resources involved: aws_s3_bucket_logging.
resource "aws_s3_bucket" "this" {
  bucket        = "pofix-abc123"
  force_destroy = true
}

# Deliver logs to a dedicated bucket, never to the source bucket itself
resource "aws_s3_bucket" "logs" {
  bucket = "pofix-abc123-logs"
}

resource "aws_s3_bucket_logging" "this" {
  bucket        = aws_s3_bucket.this.id
  target_bucket = aws_s3_bucket.logs.id
  target_prefix = "logs/"
}
What this control checks
This control validates that an aws_s3_bucket_logging resource exists for each aws_s3_bucket. It must reference the source bucket via bucket and specify a target_bucket for log delivery. target_prefix is optional but recommended for organizing logs by source bucket. Any bucket without a corresponding aws_s3_bucket_logging resource fails. The inline logging block on aws_s3_bucket was deprecated in provider v4 and removed in v5; use the standalone resource. The target bucket needs write permissions for the S3 log delivery service, typically via an aws_s3_bucket_policy allowing logging.s3.amazonaws.com to call s3:PutObject.
Common pitfalls
Target bucket permissions missing
Log delivery silently fails if the target bucket doesn't grant write access to logging.s3.amazonaws.com. The S3 console will show logging as enabled, the Terraform resource will exist, and the policy check will pass, but no logs will arrive. Fix it with a bucket policy allowing s3:PutObject for the logging.s3.amazonaws.com principal, or an ACL granting log-delivery-write.
Logging loops from self-targeting
Use a dedicated logging bucket. Pointing target_bucket at the source bucket creates a feedback loop: access logs generate new objects, which generate new access log entries, inflating costs and burying real traffic in noise.
Deprecated inline logging block
The inline logging {} block on aws_s3_bucket was deprecated in provider v4 and removed in v5. If your configuration still uses it, Terraform will either ignore it silently or error on plan. Migrate to the standalone aws_s3_bucket_logging resource.
Log delivery delay
S3 access logs are delivered on a best-effort basis, typically within a few hours, not in real time. If your compliance requirement needs near-real-time visibility, supplement with CloudTrail data events for S3, which can deliver to CloudWatch Logs with much lower latency.
Terraform import drift for pre-existing buckets
If logging was enabled outside Terraform before the bucket was imported, terraform plan will show a missing aws_s3_bucket_logging resource and may try to create a duplicate. Run terraform import aws_s3_bucket_logging.<name> <bucket> to bring the existing configuration under state management before applying.
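Where near-real-time visibility is required, CloudTrail data events for S3 can supplement server access logs. A sketch of a trail capturing object-level events, with hypothetical names throughout (s3-data-events, example-trail-bucket) and the source bucket from the example above:

```hcl
resource "aws_cloudtrail" "s3_data_events" {
  name           = "s3-data-events"       # example trail name
  s3_bucket_name = "example-trail-bucket" # assumed pre-existing trail bucket

  event_selector {
    read_write_type           = "All"
    include_management_events = false

    # Log object-level (data plane) events for the monitored bucket;
    # the trailing slash scopes the selector to all objects in it
    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3:::pofix-abc123/"]
    }
  }
}
```

Note that the trail bucket itself needs a bucket policy allowing CloudTrail to write to it, and that data events are billed separately from management events.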
Audit evidence
Config rule evaluation results for s3-bucket-logging-enabled showing all buckets as compliant are the primary evidence. Console screenshots of the bucket properties page (the "Server access logging" section with a target bucket configured) also work. For CLI output, aws s3api get-bucket-logging --bucket <name> should return a LoggingEnabled object with TargetBucket and TargetPrefix values.
Auditors may also ask for sample log files from the target bucket to confirm delivery is active and that records include fields like requester ARN, operation type, and HTTP status.
Framework-specific interpretation
SOC 2: CC7 criteria cover system monitoring and anomaly detection. S3 access logs give your SIEM the raw material to flag unusual access patterns, which is what CC7 evidence looks like in practice.
PCI DSS v4.0: Requirement 10.2 calls for logging access to system components and cardholder data. For S3 buckets in the CDE, access logging contributes that evidence, though it should be read alongside CloudTrail data events since each captures different information.
HIPAA Omnibus Rule 2013: 45 CFR 164.312(b) requires covered entities to implement mechanisms to record and examine activity on systems containing ePHI. For S3 buckets that store protected health information, access logs are the activity record OCR investigators ask for first.
NIST SP 800-53 Rev 5: Server access logs touch AU-2 (Event Logging), AU-3 (Content of Audit Records), and AU-12 (Audit Record Generation). AU-3 specifies what a record needs to contain: who, what, when, where, and outcome. S3 access log entries cover all of those fields. The logs also feed SI-4 (System Monitoring) when forwarded to a SIEM.
NIST Cybersecurity Framework v2.0: S3 server access logs feed directly into DE.CM monitoring for anything stored in S3. Without them, you have no visibility into object-level access patterns. ID.AM coverage is a secondary benefit: the logs help maintain awareness of how data assets are being used.
FedRAMP Moderate Baseline Rev 4: AU-2 and AU-12 are the primary controls here. AU-2 calls for defined audit events; AU-12 requires the system to actually generate them. At the Moderate baseline, storage access events fall within the authorization boundary and need to be captured. Server access logging is how you cover that for S3.
Related controls
Tool mappings
Use these identifiers to cross-reference this control across tools, reports, and evidence.
Compliance.tf Control: s3_bucket_logging_enabled
AWS Config Managed Rule: S3_BUCKET_LOGGING_ENABLED
Checkov Check: CKV_AWS_18
Powerpipe Control: aws_compliance.control.s3_bucket_logging_enabled
Prowler Check: s3_bucket_server_access_logging_enabled
AWS Security Hub Controls: CloudTrail.7, S3.9
KICS Query: f861041c-8c9f-4156-acfc-5e6e524f5884
Trivy Check: AWS-0089
Last reviewed: 2026-03-09