S3 buckets should have object logging enabled

S3 management events in CloudTrail only record bucket-level operations like CreateBucket or PutBucketPolicy. They don't capture who read, uploaded, or deleted individual objects. Without object-level logging, you have no forensic trail when sensitive files are exfiltrated or tampered with.

Enabling S3 data events in CloudTrail closes this gap. Each object access generates a JSON record with the caller identity, source IP, timestamp, and the exact key accessed. That record is what incident responders need and what auditors ask for when reviewing data access controls.
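To make that concrete, here is an abbreviated sketch of what an S3 data event record looks like. All values are invented for illustration, and real records carry additional fields; the exact shape varies by event type and record version.

```json
{
  "eventVersion": "1.09",
  "eventTime": "2026-03-09T14:12:33Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "GetObject",
  "eventCategory": "Data",
  "sourceIPAddress": "203.0.113.42",
  "userIdentity": {
    "type": "IAMUser",
    "arn": "arn:aws:iam::123456789012:user/example-user"
  },
  "requestParameters": {
    "bucketName": "example-bucket-abc123",
    "key": "reports/2026/q1-financials.pdf"
  },
  "readOnly": true
}
```

The `userIdentity`, `sourceIPAddress`, `eventTime`, and object `key` fields are exactly the forensic details described above.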

Retrofit consideration

Enabling S3 data events on high-traffic buckets generates significant CloudTrail log volume quickly. At $0.10 per 100,000 events, a bucket serving 5 million GETs a day produces roughly 150 million events a month, about $150 in data event charges for that one bucket alone. Estimate costs before enabling data events across all buckets, and use advanced event selectors to filter by specific buckets, prefixes, or event names.

Implementation

Choose the approach that matches how you manage Terraform.

Use AWS provider resources directly. See docs for the resources involved: aws_cloudtrail.

resource "aws_cloudtrail" "this" {
  # Advanced event selector that captures every S3 object-level (data) event.
  advanced_event_selector {
    field_selector {
      equals = ["Data"]
      field  = "eventCategory"
    }
    field_selector {
      equals = ["AWS::S3::Object"]
      field  = "resources.type"
    }

    name = "Log all S3 data events"
  }

  # Optionally deliver a copy of events to CloudWatch Logs for querying and alerting.
  cloud_watch_logs_group_arn = local.cloudtrail_log_group_arn
  cloud_watch_logs_role_arn  = "arn:aws:iam::123456789012:role/example-role"
  name                       = "pofix-abc123"
  s3_bucket_name             = "example-bucket-abc123"
}

What this control checks

This control validates that an aws_cloudtrail trail captures S3 object-level data events:

  • The aws_cloudtrail resource must include an event_selector block (or advanced_event_selector) with a data_resource entry where type = "AWS::S3::Object".

  • The values list should contain specific bucket ARNs (e.g., "arn:aws:s3:::my-bucket/") or "arn:aws:s3" to cover all buckets.

  • Set read_write_type to "All" to capture both reads and writes, though "WriteOnly" or "ReadOnly" may satisfy narrower requirements.

  • A trail with only include_management_events = true and no S3 data resource selectors fails.

  • With advanced_event_selector, the field selectors must include eventCategory = "Data" and resources.type = "AWS::S3::Object".

  • The trail must also be active: enable_logging must not be false.
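As a sketch of the legacy event_selector form described above (trail name, log bucket, and monitored bucket are placeholders), a configuration that satisfies the check looks like:

```hcl
resource "aws_cloudtrail" "legacy_selector" {
  name           = "example-trail"
  s3_bucket_name = "example-log-bucket"

  event_selector {
    read_write_type           = "All" # capture both reads and writes
    include_management_events = true

    data_resource {
      type = "AWS::S3::Object"
      # Trailing slash required when scoping to a single bucket;
      # use ["arn:aws:s3"] instead to cover every bucket in the account.
      values = ["arn:aws:s3:::my-bucket/"]
    }
  }
}
```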

Common pitfalls

  • Management events do not cover object access

    The include_management_events = true flag only records bucket-level API calls like CreateBucket and PutBucketPolicy. It captures nothing about who accessed or modified individual objects. You need an explicit data_resource block with type = "AWS::S3::Object" inside an event_selector to log GetObject, PutObject, and DeleteObject calls.

  • Trailing slash matters in bucket ARNs

    When specifying individual bucket ARNs in the data_resource values list, include a trailing slash: "arn:aws:s3:::my-bucket/". Omitting it can silently prevent object events from matching for that bucket.

  • Cost explosion on high-throughput buckets

    S3 data events are billed at $0.10 per 100,000 events. A bucket serving millions of GET requests per day can generate hundreds of dollars in CloudTrail charges monthly. Use advanced_event_selector with field_selector blocks to filter by eventName (e.g., only PutObject and DeleteObject) or target specific bucket prefixes to control costs.

  • Multiple trails can create duplicate events

    If you have both an organization trail and an account-level trail with S3 data events enabled, CloudTrail logs each event twice to separate destinations. This doubles storage and query costs. Audit your trails with aws cloudtrail describe-trails and consolidate data event selectors onto a single trail where possible.

  • Legacy event_selector vs advanced_event_selector

    Terraform supports both event_selector and advanced_event_selector on aws_cloudtrail. The event_selector block remains valid, but advanced_event_selector lets you filter by eventName, readOnly, and resource ARN patterns. Switching to advanced selectors can cut log volume and cost without losing compliance coverage.
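Tying the last two points together, here is a hedged sketch of an advanced selector that logs only write events for a single sensitive prefix. The trail name, buckets, and prefix are placeholders; adjust the eventName list and ARN pattern to your own requirements.

```hcl
resource "aws_cloudtrail" "scoped_data_events" {
  name           = "example-trail"
  s3_bucket_name = "example-log-bucket"

  advanced_event_selector {
    name = "Write events for the sensitive/ prefix only"

    field_selector {
      field  = "eventCategory"
      equals = ["Data"]
    }
    field_selector {
      field  = "resources.type"
      equals = ["AWS::S3::Object"]
    }
    # Drop high-volume reads; keep only object writes and deletes.
    field_selector {
      field  = "eventName"
      equals = ["PutObject", "DeleteObject"]
    }
    # Restrict logging to one prefix within the bucket.
    field_selector {
      field       = "resources.ARN"
      starts_with = ["arn:aws:s3:::my-bucket/sensitive/"]
    }
  }
}
```

Note that a narrowly filtered selector like this trades audit completeness for cost; confirm it still meets the compliance scope you are attesting to before deploying it.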

Audit evidence

An auditor expects to see a trail in the CloudTrail console with data events configured for S3 objects, either across all buckets or for the specific buckets containing sensitive data. The event selectors page should show the S3 object resource type with the appropriate read/write scope. CLI output from aws cloudtrail get-event-selectors --trail-name <name> gives machine-readable confirmation: look for DataResources entries with Type: AWS::S3::Object.
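For reference, a trimmed, illustrative shape of that CLI output (the trail ARN and account ID are placeholders), with the DataResources entry an auditor should look for:

```json
{
  "TrailARN": "arn:aws:cloudtrail:us-east-1:123456789012:trail/example-trail",
  "EventSelectors": [
    {
      "ReadWriteType": "All",
      "IncludeManagementEvents": true,
      "DataResources": [
        {
          "Type": "AWS::S3::Object",
          "Values": ["arn:aws:s3:::my-bucket/"]
        }
      ]
    }
  ]
}
```

Trails configured with advanced selectors return an AdvancedEventSelectors list instead; in that case look for the resources.type field selector equal to AWS::S3::Object.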

Supporting evidence includes Config rule results for cloudtrail-s3-dataevents-enabled, sample log entries showing GetObject or PutObject events with caller identity and source IP, and CloudWatch metrics confirming active log delivery.

Framework-specific interpretation

NIST Cybersecurity Framework v2.0: S3 data events feed directly into DE.CM continuous monitoring and DE.AE adverse event analysis. Without object-level records, you can't detect unauthorized reads or correlate events during an incident investigation.

Tool mappings

Use these identifiers to cross-reference this control across tools, reports, and evidence.

  • Compliance.tf Control: s3_bucket_object_logging_enabled

  • AWS Config Managed Rule: CLOUDTRAIL_S3_DATAEVENTS_ENABLED

  • Powerpipe Controls: aws_compliance.control.cloudtrail_s3_data_events_enabled, aws_compliance.control.s3_bucket_object_logging_enabled

  • Prowler Checks: cloudtrail_s3_dataevents_read_enabled, cloudtrail_s3_dataevents_write_enabled

  • AWS Security Hub Controls: S3.22, S3.23

  • KICS Query: a8fc2180-b3ac-4c93-bd0d-a55b974e4b07

  • Trivy Checks: AWS-0171, AWS-0172

Last reviewed: 2026-03-09