AWS Generative AI Best Practices v2
Best for: Teams building GenAI applications on AWS using SageMaker, Bedrock, or custom training pipelines. Applies to any organization storing training data or model artifacts in S3 and running inference on AWS compute. No revenue or size threshold. Regulated industries (financial services, healthcare) should treat this as a baseline that sits beneath their primary compliance obligations, not a substitute for them.
| Mandatory? | Voluntary; AWS best-practice framework for AI/ML workloads |
| Who validates? | Self-assessment; optional AWS partner review |
| Renewal | No fixed cycle |
| Scope | Organizations developing or deploying generative AI on AWS |
Amazon Web Services (AWS) · AWS Generative AI Best Practices v2
Get Started
```hcl
module "..." {
  source  = "awsgenai.compliance.tf/terraform-aws-modules/<module>/aws"
  version = "<version>"
}
```

What Compliance.tf Covers vs. What You Handle
What Compliance.tf automates
- Encryption at Rest: Checks S3 bucket default encryption configuration (s3_bucket_default_encryption_enabled). Validates that SSE-S3 or SSE-KMS is applied as the default encryption method on all in-scope buckets.
- Encryption in Transit: Validates S3 bucket policies enforce SSL/TLS for all requests (s3_bucket_enforces_ssl). Checks for the presence of the aws:SecureTransport condition in bucket policies.
- Access Control: Checks root user MFA status via iam_root_user_mfa_enabled and iam_root_user_account_console_access_mfa_enabled. Detects root accounts without hardware or virtual MFA devices configured.
- Data Integrity and Recovery: Validates S3 bucket versioning (s3_bucket_versioning_enabled) and MFA delete protection (s3_bucket_mfa_delete_enabled), confirming that training data and model artifacts can be recovered and are protected from unauthorized deletion.
- AI/ML Network Isolation: Checks SageMaker notebook instance configurations for direct internet access (sagemaker_notebook_instance_direct_internet_access_disabled). Flags any notebook instance where DirectInternetAccess is set to Enabled.
- Audit Trail: Verifies at least one CloudTrail trail meets security best practices (cloudtrail_security_trail_enabled), including multi-region coverage and log file validation.
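To pass the encryption-at-rest and in-transit checks above, the bucket side needs a default SSE configuration and an `aws:SecureTransport` deny statement. A minimal sketch using the AWS provider's split S3 resources; the bucket name and resource labels are hypothetical:

```hcl
# Hypothetical bucket holding training data for a GenAI pipeline.
resource "aws_s3_bucket" "training_data" {
  bucket = "example-genai-training-data"
}

# Default encryption satisfies s3_bucket_default_encryption_enabled.
resource "aws_s3_bucket_server_side_encryption_configuration" "training_data" {
  bucket = aws_s3_bucket.training_data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms" # or "AES256" for SSE-S3
    }
  }
}

# Denying non-TLS requests satisfies s3_bucket_enforces_ssl.
resource "aws_s3_bucket_policy" "ssl_only" {
  bucket = aws_s3_bucket.training_data.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyInsecureTransport"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.training_data.arn,
        "${aws_s3_bucket.training_data.arn}/*",
      ]
      Condition = { Bool = { "aws:SecureTransport" = "false" } }
    }]
  })
}
```

The deny statement matters because the check looks for the `aws:SecureTransport` condition specifically; encrypting at rest alone does not satisfy it.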
What you handle
- Encryption at Rest: Selecting and managing KMS keys for model artifact encryption, defining key policies that restrict decryption to authorized AI/ML roles, and rotating keys per your organization's schedule.
- Encryption in Transit: Ensuring application-layer TLS configuration for SageMaker endpoints, API Gateway integrations, and any custom inference APIs that sit outside S3.
- Access Control: Implementing least-privilege IAM policies for SageMaker execution roles, Bedrock access policies, and service-linked roles. Defining and enforcing permission boundaries for data scientists and ML engineers.
- Data Integrity and Recovery: Defining retention policies for model versions, establishing rollback procedures for model deployments, and testing recovery of training datasets from versioned buckets.
- AI/ML Network Isolation: Configuring VPC endpoints for SageMaker, S3, and other AWS services. Designing network architecture with private subnets, NAT gateways, and security groups appropriate for your ML pipeline.
- Audit Trail: Configuring CloudTrail event selectors for AI/ML workloads (management events and data events where required, such as S3 object-level access), setting up CloudWatch alarms or EventBridge rules for anomalous activity, and retaining logs per your compliance obligations.
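The module verifies these settings but does not create them. As one illustration of the customer-managed side, a KMS key with rotation enabled and a policy restricting decryption to an ML execution role might look like this; the role variable and statement IDs are assumptions for illustration:

```hcl
variable "ml_execution_role_arn" {
  description = "IAM role permitted to decrypt model artifacts (hypothetical)"
  type        = string
}

data "aws_caller_identity" "current" {}

# Customer-managed CMK for model artifacts. Rotation and the key
# policy are your responsibility, not the module's.
resource "aws_kms_key" "model_artifacts" {
  description         = "CMK for GenAI model artifact encryption"
  enable_key_rotation = true

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Retain root administrative access so the key is never orphaned.
        Sid       = "RootAccountAdmin"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        # Only the ML execution role may decrypt or generate data keys.
        Sid       = "AllowMLRoleDecrypt"
        Effect    = "Allow"
        Principal = { AWS = var.ml_execution_role_arn }
        Action    = ["kms:Decrypt", "kms:GenerateDataKey"]
        Resource  = "*"
      }
    ]
  })
}
```

Scoping the key policy to a named role, rather than relying on broad IAM grants, keeps decryption auditable per workload.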
Controls by Category
AI/ML Compute Environment Security (2 controls)
The most common finding here: notebook instances with DirectInternetAccess set to Enabled, left over from a development phase that never got cleaned up before promotion to production. Verify both DirectInternetAccess and root access settings on every notebook instance in scope.
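A notebook definition that passes both checks can be sketched as follows; names and variables are hypothetical, and note that `direct_internet_access = "Disabled"` requires the instance to be placed in a subnet you supply:

```hcl
variable "execution_role_arn" { type = string } # hypothetical
variable "private_subnet_id"  { type = string } # hypothetical
variable "notebook_sg_id"     { type = string } # hypothetical

resource "aws_sagemaker_notebook_instance" "secure" {
  name          = "genai-dev-notebook" # hypothetical
  role_arn      = var.execution_role_arn
  instance_type = "ml.t3.medium"

  # All traffic routes through the VPC; without a subnet_id,
  # this setting cannot be disabled.
  subnet_id              = var.private_subnet_id
  security_groups        = [var.notebook_sg_id]
  direct_internet_access = "Disabled"

  # Blocks root on the underlying instance for data scientists.
  root_access = "Disabled"
}
```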
Training Data and Model Artifact Protection (3 controls)
S3 buckets holding training datasets, fine-tuned model weights, and inference logs must have encryption at rest, SSL enforcement, versioning, and MFA delete enabled. MFA delete is the most commonly missed control in this category. Auditors check bucket policies, default encryption settings, and versioning configuration across all buckets tagged for AI/ML workloads.
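MFA delete in particular cannot be enabled from the S3 console. A hedged Terraform sketch (bucket name hypothetical): the apply must run with root-account credentials, supplying the MFA device serial and a current code via the `mfa` argument:

```hcl
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-genai-model-artifacts" # hypothetical
}

resource "aws_s3_bucket_versioning" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id

  # MFA delete can only be toggled by the root user; the value is the
  # MFA device serial, a space, then the current code (hypothetical ARN).
  mfa = "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"

  versioning_configuration {
    status     = "Enabled"
    mfa_delete = "Enabled"
  }
}
```

Many teams instead enable MFA delete once via the root user and the AWS CLI, then let Terraform manage versioning state from there; either way, both `s3_bucket_versioning_enabled` and `s3_bucket_mfa_delete_enabled` pass.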
Related Frameworks
NIST AI RMF 1.0 · ⚪ Low overlap (15%)
The NIST AI Risk Management Framework covers the full AI lifecycle including governance, bias, transparency, and accountability. AWS GenAI v2 addresses only the infrastructure security layer. Where NIST AI RMF expects documented risk assessments and impact evaluations for AI systems, AWS GenAI v2 focuses on the cloud resource configurations that support those systems.
AWS Well-Architected Framework · 🟢 High overlap (60%)
The AWS Well-Architected Framework shares significant overlap on S3 encryption, IAM best practices, and CloudTrail logging. AWS GenAI v2 adds SageMaker-specific controls that the general Well-Architected Framework treats as optional workload guidance rather than explicit controls.
NIST 800-53 Rev 5 · ⚪ Low overlap (25%)
NIST 800-53 Rev 5 control families AC (Access Control), AU (Audit and Accountability), and SC (System and Communications Protection) map to the IAM, CloudTrail, and S3 encryption controls in this framework. NIST 800-53 is far broader, covering hundreds of controls across organizational, physical, and technical domains.