S3 buckets should have object logging enabled
S3 management events in CloudTrail only record bucket-level operations like CreateBucket or PutBucketPolicy. They don't capture who read, uploaded, or deleted individual objects. Without object-level logging, you have no forensic trail when sensitive files are exfiltrated or tampered with.
Enabling S3 data events in CloudTrail closes this gap. Each object access generates a JSON record with the caller identity, source IP, timestamp, and the exact key accessed. That record is what incident responders need and what auditors ask for when reviewing data access controls.
Retrofit consideration
Enabling S3 data events on high-traffic buckets can generate significant CloudTrail log volume fast. At $0.10 per 100,000 events, a bucket serving 10 million GETs per day incurs about $10 per day, roughly $300 per month, before you've noticed. Estimate costs before enabling data events across all buckets, and use advanced event selectors to filter by specific buckets, prefixes, or event names.
Implementation
Choose the approach that matches how you manage Terraform.
Use AWS provider resources directly. See the Terraform AWS provider documentation for the resource involved: aws_cloudtrail.
resource "aws_cloudtrail" "this" {
  name           = "pofix-abc123"
  s3_bucket_name = "example-bucket-abc123"

  cloud_watch_logs_group_arn = local.cloudtrail_log_group_arn
  cloud_watch_logs_role_arn  = "arn:aws:iam::123456789012:role/example-role"

  advanced_event_selector {
    name = "Log all S3 data events"

    field_selector {
      field  = "eventCategory"
      equals = ["Data"]
    }

    field_selector {
      field  = "resources.type"
      equals = ["AWS::S3::Object"]
    }
  }
}
What this control checks
This control validates that an aws_cloudtrail trail captures S3 object-level data events. The aws_cloudtrail resource must include an event_selector block (or advanced_event_selector) with a data_resource entry where type = "AWS::S3::Object". The values list should contain specific bucket ARNs (e.g., "arn:aws:s3:::my-bucket/") or "arn:aws:s3" to cover all buckets. Set read_write_type to "All" to capture both reads and writes, though "WriteOnly" or "ReadOnly" may satisfy narrower requirements. A trail with only include_management_events = true and no S3 data resource selectors fails. With advanced_event_selector, the field selector must include eventCategory = "Data" and resources.type = "AWS::S3::Object". The trail must also be active: enable_logging must not be false.
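The legacy event_selector form described above can be sketched as follows. This is a minimal illustration, not the control's canonical configuration; the trail name and log bucket are placeholders.

resource "aws_cloudtrail" "legacy_style" {
  name           = "example-trail"        # placeholder trail name
  s3_bucket_name = "example-log-bucket"   # placeholder log destination

  event_selector {
    read_write_type           = "All"   # capture both reads and writes
    include_management_events = true

    data_resource {
      type = "AWS::S3::Object"
      # Trailing slash scopes the selector to objects in this bucket;
      # "arn:aws:s3" instead would cover every bucket in the account.
      values = ["arn:aws:s3:::my-bucket/"]
    }
  }
}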
Common pitfalls
Management events do not cover object access
The include_management_events = true flag only records bucket-level API calls like CreateBucket and PutBucketPolicy. It captures nothing about who accessed or modified individual objects. You need an explicit data_resource block with type = "AWS::S3::Object" inside an event_selector to log GetObject, PutObject, and DeleteObject calls.
Trailing slash matters in bucket ARNs
When specifying individual bucket ARNs in the data_resource values list, include a trailing slash: "arn:aws:s3:::my-bucket/". Omitting it can silently prevent object events from matching for that bucket.
Cost explosion on high-throughput buckets
S3 data events are billed at $0.10 per 100,000 events. A bucket serving millions of GET requests per day can generate hundreds of dollars in CloudTrail charges monthly. Use advanced_event_selector with field_selector blocks to filter by eventName (e.g., only PutObject and DeleteObject) or target specific bucket prefixes to control costs.
Multiple trails can create duplicate events
If you have both an organization trail and an account-level trail with S3 data events enabled, CloudTrail logs each event twice to separate destinations. This doubles storage and query costs. Audit your trails with aws cloudtrail describe-trails and consolidate data event selectors onto a single trail where possible.
Legacy event_selector vs advanced_event_selector
Terraform supports both event_selector and advanced_event_selector on aws_cloudtrail. The event_selector block remains valid, but advanced_event_selector lets you filter by eventName, readOnly, and resource ARN patterns. Switching to advanced selectors can cut log volume and cost without losing compliance coverage.
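The cost-control advice above can be sketched as an advanced selector that records only object writes. The trail name and log bucket are placeholders.

resource "aws_cloudtrail" "writes_only" {
  name           = "s3-write-events"      # placeholder trail name
  s3_bucket_name = "example-log-bucket"   # placeholder log destination

  advanced_event_selector {
    name = "Log only S3 object writes"

    field_selector {
      field  = "eventCategory"
      equals = ["Data"]
    }

    field_selector {
      field  = "resources.type"
      equals = ["AWS::S3::Object"]
    }

    # Drop high-volume GetObject traffic; keep only mutations.
    field_selector {
      field  = "eventName"
      equals = ["PutObject", "DeleteObject"]
    }
  }
}

Note that this sketch satisfies a write-only logging requirement; a control or auditor expecting read coverage as well would need the eventName filter removed or broadened.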
Audit evidence
An auditor expects to see a trail in the CloudTrail console with data events configured for S3 objects, either across all buckets or for the specific buckets containing sensitive data. The event selectors page should show the S3 object resource type with the appropriate read/write scope. CLI output from aws cloudtrail get-event-selectors --trail-name <name> gives machine-readable confirmation: look for DataResources entries with Type: AWS::S3::Object.
Supporting evidence includes Config rule results for cloudtrail-s3-dataevents-enabled, sample log entries showing GetObject or PutObject events with caller identity and source IP, and CloudWatch metrics confirming active log delivery.
Framework-specific interpretation
NIST Cybersecurity Framework v2.0: S3 data events feed directly into DE.CM continuous monitoring and DE.AE adverse event analysis. Without object-level records, you can't detect unauthorized reads or correlate events during an incident investigation.
Tool mappings
Use these identifiers to cross-reference this control across tools, reports, and evidence.
Compliance.tf Control: s3_bucket_object_logging_enabled
AWS Config Managed Rule: CLOUDTRAIL_S3_DATAEVENTS_ENABLED
Powerpipe Controls: aws_compliance.control.cloudtrail_s3_data_events_enabled, aws_compliance.control.s3_bucket_object_logging_enabled
Prowler Checks: cloudtrail_s3_dataevents_read_enabled, cloudtrail_s3_dataevents_write_enabled
AWS Security Hub Controls: S3.22, S3.23
KICS Query: a8fc2180-b3ac-4c93-bd0d-a55b974e4b07
Trivy Checks: AWS-0171, AWS-0172
Last reviewed: 2026-03-09