Migrating to Compliance.tf
You are already using the right modules. Migration from upstream terraform-aws-modules to compliance.tf is a source URL change — typically no resource recreation, no state surgery, no workflow changes.
Some compliance controls enforce values you may not have set yet. Your terraform plan output will tell you exactly what needs attention — that is the migration working as designed, not a failure.
What Changes and What Doesn't
What changes
The module source URL changes: terraform-aws-modules/s3-bucket/aws becomes soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws. With that, compliance controls are enforced. terraform plan will fail if your current configuration does not satisfy all active controls for your chosen framework, and each error tells you exactly what to fix.
What does NOT change
Terraform tracks resources by their address (module.s3_bucket.aws_s3_bucket.this[0]), not by the module source URL. Changing the source does not recreate resources. The module interface is also identical: same arguments, same variable names, same types.
- Terraform workflow — same `init`, `plan`, `apply` cycle. No new CLI tools, no policy engines, no sidecars.
- Provider configuration — no new providers, no provider version changes.
- Your existing variable definitions and `.tfvars` files work without modification.
- No state migration, no backend reconfiguration.
Controls that require resource recreation
Most controls enforce values that Terraform can apply in-place (logging configuration, encryption settings, access policies). However, a few controls enforce values that AWS does not allow to change on existing resources:
- S3 object lock — must be enabled at bucket creation. Enabling it on an existing bucket requires creating a new bucket and migrating data. You can disable this control if you do not need object lock.
- RDS storage encryption — encrypting an existing unencrypted RDS instance requires a snapshot-restore cycle. This is an AWS limitation, not a compliance.tf limitation.
- ElastiCache encryption at rest — requires cluster recreation.
- Redshift encryption — requires cluster recreation.
If your existing resources already have these settings enabled, migration is clean. If they do not, you can disable the specific control and address the retrofit separately.
Adopt one module at a time
Compliance.tf modules and upstream terraform-aws-modules coexist in the same Terraform configuration without conflict. Migrate one module, verify with terraform plan, then proceed to the next. There is no requirement to migrate everything at once.
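A configuration mid-migration might look like the following sketch; the module names and version numbers here are illustrative, not prescriptive:

```hcl
# Already migrated: compliance controls enforced
module "s3_bucket" {
  source  = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"
  version = "5.0.0"

  bucket = "my-app-data"
}

# Not yet migrated: still pulled from the upstream registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.8.1"

  name = "main"
  cidr = "10.0.0.0/16"
}
```

Both modules resolve, download, and plan independently; migrating one has no effect on the other.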
Step-by-Step Migration
Step 1: Pick a framework
Each compliance framework maps to a registry endpoint. Your framework determines which controls are enforced.
| Framework | Endpoint identifier | Example source |
|---|---|---|
| SOC 2 | soc2 | soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws |
| PCI DSS v4.0 | pcidssv40 | pcidssv40.compliance.tf/terraform-aws-modules/s3-bucket/aws |
| HIPAA | hipaa | hipaa.compliance.tf/terraform-aws-modules/s3-bucket/aws |
| CIS AWS v1.4.0 | cisv140 | cisv140.compliance.tf/terraform-aws-modules/s3-bucket/aws |
| NIST 800-53 Rev 5 | nist-800-53-rev5 | nist-800-53-rev5.compliance.tf/terraform-aws-modules/s3-bucket/aws |
| FedRAMP Moderate Rev 4 | fedrampmoderaterev4 | fedrampmoderaterev4.compliance.tf/terraform-aws-modules/s3-bucket/aws |
Not sure which framework to pick? Start with SOC 2 — it covers the broadest set of general security controls and is the most commonly used endpoint.
See all supported frameworks and registry endpoints for the full list.
Step 2: Configure authentication
You need an access token to download modules from the compliance.tf registry. If you have already configured authentication from the Get Started guide, skip to Step 3.
```shell
terraform login soc2.compliance.tf
```
This opens a browser window where you authenticate with your compliance.tf credentials. Replace soc2 with your chosen framework endpoint.
```hcl
credentials "soc2.compliance.tf" {
  token = "ctf_YOUR_TOKEN_HERE"
}
```
Get your token from the Access Tokens page. Replace soc2 with your framework endpoint.
```text
machine soc2.compliance.tf
login anything
password ctf_YOUR_TOKEN_HERE
```
The login value can be anything — only the password (your access token) matters. Get your token from the Access Tokens page.
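In CI or other non-interactive environments, another option is available: Terraform 1.2 and later can read registry credentials from a `TF_TOKEN_<hostname>` environment variable, with the dots in the hostname replaced by underscores. A sketch (the token value is a placeholder):

```shell
# The hostname soc2.compliance.tf maps to this variable name: periods become
# underscores after the TF_TOKEN_ prefix. Store the real token in your CI
# platform's secret manager rather than in plain text.
export TF_TOKEN_soc2_compliance_tf="ctf_YOUR_TOKEN_HERE"
```

With the variable set, `terraform init` authenticates to the registry without a `.terraformrc` or `.netrc` file.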
For detailed authentication setup, see Get Started and Registry Endpoints.
Step 3: Change the source URL
Replace the upstream terraform-aws-modules source with the compliance.tf endpoint.
```diff
 module "s3_bucket" {
-  source  = "terraform-aws-modules/s3-bucket/aws"
+  source  = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"
   version = "5.0.0"

   bucket = "my-app-data"
 }
```
Alternatively, the HTTPS URL format embeds the version as a query parameter:

```diff
 module "s3_bucket" {
-  source  = "terraform-aws-modules/s3-bucket/aws"
-  version = "5.0.0"
+  source  = "https://soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws?version=5.0.0"

   bucket = "my-app-data"
 }
```
Version constraints work the same way. If you already pin versions (and you should), keep your existing version constraint.
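For example, a pessimistic version constraint carries over unchanged (the constraint value below is illustrative):

```hcl
module "s3_bucket" {
  source  = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"
  version = "~> 5.0"   # same constraint syntax as the upstream registry

  bucket = "my-app-data"
}
```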
Bulk find-and-replace
For a single module across your codebase:
```shell
# Find all files referencing the upstream S3 module
rg 'source\s*=\s*"terraform-aws-modules/s3-bucket/aws"' --type tf -l

# Replace with the compliance.tf source (macOS sed)
find . -name '*.tf' -exec sed -i '' \
  's|"terraform-aws-modules/s3-bucket/aws"|"soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"|g' {} +

# Replace with the compliance.tf source (Linux sed)
find . -name '*.tf' -exec sed -i \
  's|"terraform-aws-modules/s3-bucket/aws"|"soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"|g' {} +
```
For all terraform-aws-modules at once:
```shell
# macOS
find . -name '*.tf' -exec sed -i '' \
  's|"terraform-aws-modules/|"soc2.compliance.tf/terraform-aws-modules/|g' {} +

# Linux
find . -name '*.tf' -exec sed -i \
  's|"terraform-aws-modules/|"soc2.compliance.tf/terraform-aws-modules/|g' {} +
```
After replacing, verify the changes:
```shell
rg 'source\s*=.*compliance\.tf' --type tf
```
Step 4: Run terraform init -upgrade
```shell
terraform init -upgrade
```

The `-upgrade` flag is required. It tells Terraform to re-download modules from the new source even if a cached version exists.
Expected output:
```text
Initializing modules...
Downloading soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws 5.0.0 for s3_bucket...
- s3_bucket in .terraform/modules/s3_bucket

Initializing provider plugins...

Terraform has been successfully initialized!
```
Authentication errors
If you see a 401 Unauthorized or 403 Forbidden error, check that:
- Your token is valid and not expired — get a new one from the Access Tokens page
- The framework endpoint in your credentials matches the one in your `source` URL (e.g., `soc2.compliance.tf` in both places)
- For `.terraformrc`: the `credentials` block hostname matches the module source hostname
- For `.netrc`: the `machine` value matches the module source hostname
Step 5: Run terraform plan
```shell
terraform plan
```
One of two things will happen:
Clean plan — your existing configuration already satisfies all controls. You will see the standard Terraform plan output. For existing infrastructure, this typically shows `No changes. Your infrastructure matches the configuration.` or minor in-place updates.
Validation errors — one or more controls require values you have not set. This is expected. Each error message names the control, explains what is required, and lists the frameworks that require it:
```text
│ Error: Invalid value for variable
│
│   on main.tf line 3, in module "s3_bucket":
│    3:   source = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"
│
│ s3_bucket_logging_enabled: logging.target_bucket must be set
│ to enable S3 bucket access logging.
│
│ Frameworks requiring this control:
│   SOC 2, CIS AWS v1.4.0 (3.6), PCI DSS v4.0 (10.2.1)
```
This is the migration working as designed. The module is telling you what your infrastructure needs to be compliant. Proceed to Step 6.
Step 6: Fix validation errors
Each validation error tells you the control name and what value to provide. Most fixes are adding a single argument or block.
Example: S3 logging control
The error above requires a logging block with a target_bucket. Add it to your module configuration:
```hcl
module "s3_bucket" {
  source  = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"
  version = "5.0.0"

  bucket = "my-app-data"

  # Fix: add logging configuration
  logging = {
    target_bucket = "my-logging-bucket"
    target_prefix = "s3-access-logs/my-app-data/"
  }
}
```
Run terraform plan again. Repeat until plan succeeds.
Disabling a control you do not need
If a control does not apply to your use case (for example, object lock on a non-critical bucket), you can disable it using the HTTPS URL format:
```hcl
module "s3_bucket" {
  source = "https://soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws?version=5.0.0&disable=s3_bucket_object_lock_enabled"

  bucket = "my-app-data"

  logging = {
    target_bucket = "my-logging-bucket"
    target_prefix = "s3-access-logs/my-app-data/"
  }
}
```
See Customize modules for details on enabling and disabling controls.
Step 7: Apply
```shell
terraform apply
```
Once your plan is clean, terraform apply works exactly as before. For existing infrastructure, the apply will make in-place changes for any new attribute values you added (logging configuration, encryption settings, etc.) — it will not recreate resources.
After apply, your state reflects the compliance.tf module source. The resource addresses are unchanged.
Per-Module Migration Impact
Migration difficulty varies by module. Some modules migrate with zero plan changes. Others require additional values to satisfy compliance controls. The table below covers all 34 supported modules.
| Module | Migration Impact | Primary Fix Needed | Time Estimate |
|---|---|---|---|
| ACM | Usually clean | Minimal control enforcement | 5-10 min |
| ALB | Minor fixes | May need access logging, SSL policy | 15-30 min |
| API Gateway V2 | Minor fixes | May need access logging, TLS configuration | 15-30 min |
| AppSync | Usually clean | Minimal control enforcement | 5-10 min |
| Auto Scaling | Minor fixes | May need launch template IMDSv2, detailed monitoring | 15-30 min |
| CloudFront | Minor fixes | May need TLS policy, logging, or WAF association | 15-30 min |
| CloudWatch | Usually clean | Minimal control enforcement | 5-10 min |
| DMS | Minor fixes | May need encryption, logging configuration | 15-30 min |
| DynamoDB Table | Minor fixes | May need encryption configuration, deletion protection | 15-30 min |
| EC2 Instance | Minor fixes | May need IMDSv2 (metadata_options), detailed monitoring | 15-30 min |
| ECR | Minor fixes | May need image scanning, encryption, lifecycle policy | 15-30 min |
| ECS | Minor fixes | May need container insights, logging configuration | 10-15 min |
| EFS | Minor fixes | May need encryption at rest, backup policy | 15-30 min |
| EKS | Minor fixes | May need control plane logging, secrets encryption | 30-60 min |
| ElastiCache | Minor to significant | Encryption at rest may require cluster recreation | 30 min - hours |
| ELB (Classic) | Minor fixes | May need access logging, SSL policy | 15-30 min |
| EMR | Minor to significant | Encryption at rest may require cluster recreation | 30 min - hours |
| FSx | Minor to significant | Encryption at rest may require filesystem recreation | 30 min - hours |
| KMS | Usually clean | Minimal control enforcement | 5-10 min |
| Lambda | Usually clean | Minimal control enforcement | 5-10 min |
| MSK Kafka Cluster | Minor fixes | May need encryption in transit, logging configuration | 15-30 min |
| Network Firewall | Minor fixes | May need logging, encryption configuration | 15-30 min |
| OpenSearch | Minor fixes | May need encryption, node-to-node encryption, logging | 15-30 min |
| RDS | Minor fixes | May need encryption, backup retention, deletion protection | 15-30 min |
| RDS Aurora | Minor fixes | May need encryption, backup retention | 15-30 min |
| Redshift | Minor to significant | Encryption may require cluster recreation | 30 min - hours |
| S3 Bucket | Minor fixes | Add logging.target_bucket, verify versioning and encryption settings | 15-30 min |
| Secrets Manager | Usually clean | Minimal control enforcement | 5-10 min |
| SNS | Usually clean | Minimal control enforcement | 5-10 min |
| SQS | Usually clean | Minimal control enforcement | 5-10 min |
| SSM Parameter | Usually clean | Minimal control enforcement | 5-10 min |
| Step Functions | Usually clean | Minimal control enforcement | 5-10 min |
| VPC | Minor fixes | May need VPC flow logs configuration | 10-15 min |
| VPN Gateway | Usually clean | Minimal control enforcement | 5-10 min |
Migration impact key:
- Usually clean — `terraform plan` shows no changes or only minor in-place updates. No new arguments needed.
- Minor fixes — plan fails with 1-3 validation errors. Each fix is adding an argument or block. Typically 15-30 minutes including testing.
- Minor to significant — depends on your existing configuration. If encryption is already enabled, migration is clean. If not, the fix may require resource recreation (an AWS limitation, not a compliance.tf limitation).
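As a sketch of what a typical "minor fixes" migration involves, the EC2 row above mentions IMDSv2; on the upstream ec2-instance module that is a single `metadata_options` argument. The module version here is illustrative, and your actual plan output names the exact control and required value:

```hcl
module "ec2_instance" {
  source  = "soc2.compliance.tf/terraform-aws-modules/ec2-instance/aws"
  version = "5.6.0"

  name = "app-server"

  # Typical minor fix: require IMDSv2 tokens on the instance metadata endpoint
  metadata_options = {
    http_tokens = "required"
  }
}
```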
Migrating Multiple Modules
Start with one module
Start with S3 if you use it. It has the highest control count and will surface the most validation errors, giving you a clear picture of what compliance.tf enforces. If you do not use S3, pick any module with "Usually clean" impact from the table above.
After your first module migrates cleanly and you have applied the changes, batch the rest by migration difficulty.
Batch by difficulty
- First batch: all "Usually clean" modules (ACM, AppSync, CloudWatch, KMS, Lambda, Secrets Manager, SNS, SQS, SSM Parameter, Step Functions, VPN Gateway). These should migrate with zero or minimal plan changes.
- Second batch: "Minor fixes" modules (S3, ALB, API Gateway V2, Auto Scaling, CloudFront, DMS, DynamoDB, EC2, ECR, ECS, EFS, EKS, ELB, MSK, Network Firewall, OpenSearch, RDS, RDS Aurora, VPC). Fix validation errors one module at a time.
- Third batch: "Minor to significant" modules (ElastiCache, EMR, FSx, Redshift). Plan for potential resource recreation if encryption is not already enabled.
Run terraform plan after each batch
Do not replace all module sources at once and then run plan. Migrate in batches and verify each batch independently. This isolates validation errors and makes them easier to fix.
Phased approach for large teams
For teams managing many modules across multiple environments:
| Phase | Timeline | What to do |
|---|---|---|
| Phase 1 | Weeks 1-4 | All new resources use compliance.tf modules. Existing infrastructure stays on upstream modules. |
| Phase 2 | Months 2-3 | Migrate high-risk existing modules (S3, RDS, EKS — anything storing sensitive data). |
| Phase 3 | Month 4+ | Full adoption. Migrate remaining modules. Enable CI/CD enforcement. |
During any phase, compliance.tf modules and upstream terraform-aws-modules coexist in the same codebase without conflict. There is no requirement to migrate everything at once.
Team coordination workflow
For a team of 4–10 engineers migrating across multiple services:
Platform lead responsibilities:
- Complete the first module migration yourself (S3 or VPC) to establish the pattern
- Set up registry authentication in CI/CD — one-time setup for the team
- Create a migration tracking table (spreadsheet or issue board) listing every module, its current source, assigned engineer, and status
- Add the optional module source governance CI step in warning mode during migration, switching to blocking after Phase 3
Individual engineer responsibilities:
- Pick an unassigned module from the tracking table and mark it as in-progress
- Change the source URL, run `terraform plan`, and fix validation errors
- Open a PR with the source URL change and any configuration fixes
PR review checklist for module migrations:
- [ ] Source URL changed to `<framework>.compliance.tf/...`
- [ ] `terraform plan` output is clean (no validation errors)
- [ ] Any `?disable=` parameters have a documented justification in the PR description
- [ ] No unrelated changes bundled with the migration
Handling the transition period:
During migration, some modules will use compliance.tf and others will still use upstream sources. This is expected and safe — the modules are independent. Track progress in your migration table and aim for one batch per sprint.
Rollback
If you need to switch back to upstream modules:
- Change the source URL back to `terraform-aws-modules/...`
- Run `terraform init -upgrade`
- Run `terraform plan` to confirm
That is the complete rollback. No state surgery. No resource changes. Your deployed infrastructure is unchanged — the values you configured (logging, encryption, etc.) remain in place because they are standard AWS resource attributes, not proprietary settings.
Compliance enforcement stops after rollback. terraform plan will no longer validate controls, but the security configurations you already deployed continue to protect your resources.
Frequently Asked Questions
Does migration recreate resources?
No. Changing the module source URL does not change resource addresses in Terraform state. Terraform tracks resources by their address (for example, module.s3_bucket.aws_s3_bucket.this[0]), not by the module source. The source URL change causes terraform init to download the module from a different registry, but the resource graph remains identical.
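To verify this on your own infrastructure, you can snapshot the state addresses before and after the source change; a quick sanity check, not a required step (the file names are arbitrary):

```shell
terraform state list > before.txt   # addresses with the upstream source
# ...change the module source URLs, then:
terraform init -upgrade
terraform state list > after.txt    # addresses with the compliance.tf source
diff before.txt after.txt           # an empty diff means no addresses changed
```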
Exception: If a control enforces a value that AWS does not allow to change on an existing resource (such as S3 object lock or RDS storage encryption), that specific attribute will require resource recreation. This is an AWS API limitation. You can disable the specific control and address the retrofit separately.
Can I migrate one module at a time?
Yes. This is the recommended approach. Each module source is independent. Migrate S3 first, verify it with terraform plan and terraform apply, then proceed to the next module. Compliance.tf modules and upstream terraform-aws-modules coexist in the same Terraform configuration without conflict.
Does this work with Terraform Cloud or Terraform Enterprise?
Yes. Add the compliance.tf credential to your TFC/TFE workspace or organization. The module source URL change works the same way — TFC/TFE runs terraform init internally and downloads modules from the compliance.tf registry using the configured credential.
Does this work with Spacelift, env0, or Atlantis?
Yes. Compliance.tf works with any tool that runs terraform init. Configure the access token in your platform's secret management or environment variables. The specific configuration depends on your platform:
- Spacelift: Add the token as a mounted file (`.terraformrc`) or environment variable
- env0: Add the token in the project's credential settings
- Atlantis: Configure the token in the Atlantis server's `.terraformrc`
What about S3 object lock? That requires resource recreation.
Yes. Enabling S3 object lock on an existing bucket requires creating a new bucket and migrating data. This is an AWS limitation — object lock can only be enabled at bucket creation time, regardless of whether you use compliance.tf or configure it manually.
If you do not need object lock, disable the control:
```hcl
module "s3_bucket" {
  source = "https://soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws?version=5.0.0&disable=s3_bucket_object_lock_enabled"
  # ...
}
```
If you do need object lock, you will need to create a new bucket with object lock enabled and migrate your data. This is the same procedure you would follow without compliance.tf.
What about RDS encryption? That also requires recreation.
Same situation. Encrypting an existing unencrypted RDS instance requires a snapshot-restore cycle. This is an AWS limitation. The compliance.tf control enforces encryption, but if your RDS instance is already encrypted, migration is clean. If it is not, you can disable the encryption control and plan a separate retrofit.
Does migration affect my audit evidence?
No. Existing AWS Config rules, CloudTrail trails, and Security Hub findings are unaffected. Compliance.tf modules enforce the same AWS-native configurations that these tools monitor. After migration, your evidence trail continues uninterrupted. In fact, it may improve — controls that were previously missing are now enforced, producing the configurations that auditors verify.
There is no compliance gap during a phased migration. Modules not yet migrated continue to work as before. Migrated modules gain additional controls. At no point is compliance reduced.
Can I keep Checkov or Trivy running alongside compliance.tf?
Yes. Compliance.tf and static analysis tools like Checkov are complementary. Compliance.tf prevents non-compliant configurations at the module level (preventive — the plan fails before anything is deployed). Checkov verifies the plan output independently (detective — it scans and reports). Running both gives you defense in depth: compliance.tf ensures controls are enforced, Checkov provides an independent verification layer.
What if compliance.tf doesn't have a module I need?
Modules not yet available in the compliance.tf registry are proxied to the upstream Terraform Registry automatically. If you point a module source at soc2.compliance.tf/terraform-aws-modules/some-module/aws and that module has not been processed by compliance.tf yet, the request is forwarded to the public Terraform Registry. Your workflow continues uninterrupted. You can mix compliance.tf modules and upstream modules in the same codebase.
Important: Proxied modules are served from the upstream registry without compliance.tf controls. They behave identically to sourcing from terraform-aws-modules/... directly. If your governance requirements demand that all modules pass through compliance controls, verify that each module you use appears in the module catalog before relying on compliance.tf enforcement.
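One way to build that inventory is to list every distinct module source in your codebase and check each against the catalog. The sketch below writes a two-file demo to a temporary directory so it is self-contained; in practice, run only the final `grep` from your repo root:

```shell
# Demo fixture in a throwaway directory; your real repo already has .tf files
demo="$(mktemp -d)"
printf 'module "a" {\n  source = "terraform-aws-modules/vpc/aws"\n}\n' > "$demo/a.tf"
printf 'module "b" {\n  source = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"\n}\n' > "$demo/b.tf"

# One line per distinct module source -- check each against the module catalog
grep -rhoE 'source[[:space:]]*=[[:space:]]*"[^"]+"' --include='*.tf' "$demo" | sort -u
```

Any line still pointing at `terraform-aws-modules/...` without a compliance.tf hostname is either not yet migrated or being served via the upstream proxy.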
What's Next
- New to compliance.tf? — Get Started covers account setup, authentication, and your first compliant module
- Need to disable or enable specific controls? — Customize modules
- Want to see what controls are enforced? — Browse controls by module
- Looking for your compliance framework? — Browse supported frameworks