
AWS Generative AI Best Practices v2

Best for: Teams building GenAI applications on AWS using SageMaker, Bedrock, or custom training pipelines. Applies to any organization storing training data or model artifacts in S3 and running inference on AWS compute. No revenue or size threshold. Regulated industries (financial services, healthcare) should treat this as a starting point beneath their primary compliance obligations.

  • Mandatory? Voluntary — AWS best-practice framework for AI/ML workloads
  • Who validates? Self-assessment; optional AWS partner review
  • Renewal: No fixed cycle
  • Scope: Organizations developing or deploying generative AI on AWS

🏛 Amazon Web Services (AWS) · AWS Generative AI Best Practices v2

Get Started

```hcl
module "..." {
  source  = "awsgenai.compliance.tf/terraform-aws-modules/<module>/aws"
  version = "<version>"
}
```

What Compliance.tf Covers vs. What You Handle

  • What Compliance.tf automates

    • Encryption at Rest: Checks S3 bucket default encryption configuration (s3_bucket_default_encryption_enabled). Validates that SSE-S3 or SSE-KMS is applied as the default encryption method on all in-scope buckets.
    • Encryption in Transit: Validates S3 bucket policies enforce SSL/TLS for all requests (s3_bucket_enforces_ssl). Checks for the presence of the aws:SecureTransport condition in bucket policies.
    • Access Control: Checks root user MFA status via iam_root_user_mfa_enabled and iam_root_user_account_console_access_mfa_enabled. Detects root accounts without hardware or virtual MFA devices configured.
    • Data Integrity and Recovery: Validates S3 bucket versioning (s3_bucket_versioning_enabled) and MFA delete protection (s3_bucket_mfa_delete_enabled), confirming that training data and model artifacts can be recovered and are protected from unauthorized deletion.
    • AI/ML Network Isolation: Checks SageMaker notebook instance configurations for direct internet access (sagemaker_notebook_instance_direct_internet_access_disabled). Flags any notebook instance where DirectInternetAccess is set to Enabled.
    • Audit Trail: Verifies at least one CloudTrail trail meets security best practices (cloudtrail_security_trail_enabled), including multi-region coverage and log file validation.
  • What you handle

    • Encryption at Rest: Selecting and managing KMS keys for model artifact encryption, defining key policies that restrict decryption to authorized AI/ML roles, and rotating keys per your organization's schedule.
    • Encryption in Transit: Ensuring application-layer TLS configuration for SageMaker endpoints, API Gateway integrations, and any custom inference APIs that sit outside S3.
    • Access Control: Implementing least-privilege IAM policies for SageMaker execution roles, Bedrock access policies, and service-linked roles. Defining and enforcing permission boundaries for data scientists and ML engineers.
    • Data Integrity and Recovery: Defining retention policies for model versions, establishing rollback procedures for model deployments, and testing recovery of training datasets from versioned buckets.
    • AI/ML Network Isolation: Configuring VPC endpoints for SageMaker, S3, and other AWS services. Designing network architecture with private subnets, NAT gateways, and security groups appropriate for your ML pipeline.
    • Audit Trail: Configuring CloudTrail event selectors for AI/ML workloads (management events and data events where required, such as S3 object-level access), setting up CloudWatch alarms or EventBridge rules for anomalous activity, and retaining logs per your compliance obligations.
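As an illustration, the S3 checks in the left column map onto a handful of Terraform resources. A minimal sketch, assuming hypothetical bucket and key names (everything here is illustrative, not part of the framework itself):

```hcl
# Hypothetical bucket for training data; names are illustrative.
resource "aws_s3_bucket" "training_data" {
  bucket = "example-genai-training-data"
}

resource "aws_kms_key" "training_data" {
  description         = "Key for GenAI training data (illustrative)"
  enable_key_rotation = true
}

# Default encryption at rest (s3_bucket_default_encryption_enabled).
resource "aws_s3_bucket_server_side_encryption_configuration" "training_data" {
  bucket = aws_s3_bucket.training_data.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.training_data.arn
    }
  }
}

# Versioning so artifacts can be recovered (s3_bucket_versioning_enabled).
resource "aws_s3_bucket_versioning" "training_data" {
  bucket = aws_s3_bucket.training_data.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Deny any request not made over TLS (s3_bucket_enforces_ssl) via the
# aws:SecureTransport condition the checks look for.
resource "aws_s3_bucket_policy" "training_data" {
  bucket = aws_s3_bucket.training_data.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyInsecureTransport"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.training_data.arn,
        "${aws_s3_bucket.training_data.arn}/*",
      ]
      Condition = {
        Bool = { "aws:SecureTransport" = "false" }
      }
    }]
  })
}
```

Note the division of labor matches the table above: the resources encode what the checks validate, but key policy contents and rotation cadence remain yours to define.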

Controls by Category

AI/ML Compute Environment Security (2 controls)

The most common finding here: notebook instances with DirectInternetAccess set to Enabled, left over from a development phase that never got cleaned up before promotion to production. Verify both DirectInternetAccess and root access settings on every notebook instance in scope.
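Both settings live directly on the notebook instance resource in Terraform. A hedged sketch (the name, role, subnet, and security group references are placeholders assumed to exist elsewhere in your configuration):

```hcl
resource "aws_sagemaker_notebook_instance" "dev" {
  name          = "example-genai-notebook"   # illustrative name
  role_arn      = aws_iam_role.notebook.arn  # assumed defined elsewhere
  instance_type = "ml.t3.medium"

  # The two settings to verify on every in-scope instance:
  direct_internet_access = "Disabled" # traffic must route through the VPC
  root_access            = "Disabled" # no root on the underlying instance

  # Disabling direct internet access requires a VPC attachment.
  subnet_id       = aws_subnet.private.id
  security_groups = [aws_security_group.notebook.id]
}
```

With `direct_internet_access = "Disabled"`, outbound traffic only works if your VPC provides a path (NAT gateway or VPC endpoints), which is exactly the network design called out in the "What you handle" column.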

Training Data and Model Artifact Protection (3 controls)

S3 buckets holding training datasets, fine-tuned model weights, and inference logs must have encryption at rest, SSL enforcement, versioning, and MFA delete enabled. MFA delete is the most commonly missed control in this category. Auditors check bucket policies, default encryption settings, and versioning configuration across all buckets tagged for AI/ML workloads.
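MFA delete is easy to miss partly because an ordinary IAM principal cannot enable it: the API call must come from the root user presenting an MFA token. The Terraform AWS provider does expose it on the versioning resource; a sketch, assuming you supply the root MFA value at apply time (the variable name is hypothetical):

```hcl
resource "aws_s3_bucket_versioning" "model_artifacts" {
  bucket = aws_s3_bucket.model_artifacts.id # bucket assumed defined elsewhere

  # MFA delete requires the root user's MFA device; the value has the form
  # "<device-serial> <current-token>" and must be supplied at apply time.
  mfa = var.root_mfa
  versioning_configuration {
    status     = "Enabled"
    mfa_delete = "Enabled"
  }
}
```

In practice many teams enable MFA delete once as the root user via the CLI and leave Terraform managing only the versioning status; either way, the check only cares that the bucket reports MFA delete as enabled.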

Frequently Asked Questions

Does this framework apply to my organization if we only use Amazon Bedrock and never train custom models?

Yes, partially. The S3, IAM, and CloudTrail controls apply regardless of whether you train custom models or consume foundation models via Bedrock. The SageMaker notebook controls only apply if you use SageMaker for development or fine-tuning. If your GenAI usage is limited to Bedrock API calls, most controls here are still relevant to securing the data you send to and receive from those APIs.
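For the Bedrock-only case, securing that data path starts with a least-privilege invoke policy. A sketch (the policy name, region, and model ARN pattern are illustrative; confirm the action list against the Bedrock operations you actually use):

```hcl
resource "aws_iam_policy" "bedrock_invoke" {
  name = "example-bedrock-invoke-only" # illustrative name
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid    = "InvokeApprovedModelsOnly"
      Effect = "Allow"
      Action = [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
      ]
      # Scope to the specific foundation models you have approved,
      # rather than granting bedrock:* on all resources.
      Resource = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-*"
    }]
  })
}
```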

Is this framework auditable or certifiable?

No. AWS does not offer a certification or formal audit program for this framework. It is a best practices guide. You can use it internally as a compliance baseline and run automated checks via PowerPipe, but there is no third-party attestation or certification mark associated with it.

How does this relate to the AWS Machine Learning Lens in the Well-Architected Framework?

The ML Lens is broader, covering architectural decisions like model selection, data pipeline design, and cost optimization. AWS GenAI v2 distills a small set of infrastructure security controls that can be automatically validated. Think of GenAI v2 as an enforceable subset of the ML Lens security pillar.

Why are there only 9 controls? That seems insufficient for AI governance.

This framework targets infrastructure-layer security controls that can be programmatically verified. It does not cover responsible AI concerns like bias detection, explainability, content filtering, or model governance. Those require separate frameworks (such as NIST AI RMF) and tooling outside of infrastructure policy-as-code. The 9 controls are the automatable foundation, not the full governance picture.

Can I extend this framework with custom controls for our internal AI policies?

Yes. PowerPipe and Steampipe support custom benchmarks. Add controls for Bedrock model access policies, SageMaker endpoint configurations, or VPC endpoint checks by defining additional queries and referencing them in a custom benchmark that inherits from aws_genai_v2.
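A sketch of what such an extension might look like in a PowerPipe mod. The benchmark title, control name, and approved subnet IDs are hypothetical; the query assumes the `aws_sagemaker_notebook_instance` table from the Steampipe AWS plugin:

```hcl
benchmark "internal_ai_policy" {
  title = "Internal AI Policy Extensions" # hypothetical custom benchmark
  children = [
    control.notebook_in_approved_subnets,
  ]
}

control "notebook_in_approved_subnets" {
  title = "SageMaker notebooks run in approved private subnets"
  sql   = <<-EOQ
    select
      arn as resource,
      case
        when subnet_id = any (array['subnet-aaa', 'subnet-bbb']) -- placeholders
        then 'ok' else 'alarm'
      end as status,
      title || ' in ' || coalesce(subnet_id, 'no subnet') as reason
    from
      aws_sagemaker_notebook_instance;
  EOQ
}
```

Each control's SQL must return `resource`, `status`, and `reason` columns so PowerPipe can render it alongside the built-in checks.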