
Make Non-Compliant Terraform Impossible With compliance.tf¶
Your CI pipeline fails again. Checkov found 23 new findings. The sprint deadline is tomorrow.
If this sounds familiar, you're not alone. Teams across the industry spend countless hours in the same cycle: write Terraform, run an IaC scanner, get a wall of findings, fix them, run again, hope for green. This is the reality of compliance-as-code today, and it is fundamentally reactive.
Tools like Checkov and Trivy are valuable. They catch misconfigurations before they reach production. But they share a common limitation: they detect non-compliance after your code already exists. Every finding becomes a negotiation. Every PR becomes a compliance discussion. Every sprint loses time to remediation.
What if, as long as you used standard modules, non-compliant resources were effectively impossible to create in the first place?
Compliance.tf (CTF) is a private Terraform registry that turns common AWS modules into compliant-by-default building blocks, so non-compliant infrastructure is blocked by the module itself instead of discovered later by scanners.
Terraform and OpenTofu
Compliance.tf works with both Terraform and OpenTofu. This article uses "Terraform" for brevity, but all examples and concepts apply equally to OpenTofu users.
A Different Approach: Compliance at the Registry Level¶
CTF takes a different approach. Instead of scanning your code after you write it, CTF modules enforce compliance controls directly inside the module code. The goal is simple: prevent non-compliant configurations before they can ever become an accepted plan.
Here is what this looks like in practice:
```hcl
# Before: Using the official Terraform registry
module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "5.9.0"

  bucket = "my-data"
}
```

```hcl
# After: Using the compliance.tf Terraform registry with SOC 2 controls
module "s3_bucket" {
  source  = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"
  version = "5.9.0"

  bucket = "my-data"
}
```
Notice what changed: only the source URL. Your module configuration stays identical. Your workflow stays identical. But now, every resource created through this module automatically enforces SOC 2 compliance controls.
Under the hood, this works because:
- The registry serves pre-curated versions of `terraform-aws-modules` with compliance constraints baked in.
- Each registry namespace (for example `soc2.compliance.tf`, `pcidss.compliance.tf`, `hipaa.compliance.tf`) maps to a specific framework control set.
- Modules are wired with safe defaults and Terraform validations aligned to those controls.
No forked modules to maintain. No wrapper modules to build. No policy-as-code rules to write and keep updated. You use the same upstream terraform-aws-modules you already know, with compliance built in for SOC 2, PCI DSS, HIPAA, and other frameworks.
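In this model, switching frameworks is just a change of registry hostname. For example, using the `pcidss.compliance.tf` namespace mentioned above, the module call stays the same:

```hcl
# Same upstream module, enforced against PCI DSS controls instead of SOC 2
module "s3_bucket" {
  source  = "pcidss.compliance.tf/terraform-aws-modules/s3-bucket/aws"
  version = "5.9.0"

  bucket = "my-data"
}
```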
How Controls Are Enforced Inside the Modules¶
CTF applies constraints inside the modules themselves, before resources are provisioned. Each control maps directly to one or more framework requirements, whether that is SOC 2, PCI DSS, HIPAA, CIS Benchmarks, or others.
At a high level, controls are enforced through:
- **Safe defaults.** Options like encryption, logging, public access blocks, and secure transport are turned on by default and configured to compliant values.
- **Required inputs and validators.** Certain variables must be provided and must satisfy validation rules. If a caller tries to skip them or pass a non-compliant value, Terraform fails at plan time.
- **Restricted configuration surface.** Some dangerous options are not exposed at all. Others are only exposed with explicit, clearly named exception paths.
For example, a module might define a variable with validation:
```hcl
variable "minimum_tls_version" {
  description = "Minimum version of TLS"
  type        = string
  default     = "TLS1_2"

  validation {
    condition     = contains(["TLS1_2", "TLS1_3"], var.minimum_tls_version)
    error_message = "TLS version must meet compliance requirements (TLS1_2 or higher)."
  }
}
```
When you run terraform plan, Terraform evaluates these validation rules before the plan is created. If you try to pass a non-compliant value, the plan fails immediately with a clear error message. The non-compliant configuration never makes it into a plan file, so there is no finding for a scanner to catch later.
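As a sketch (the module path and caller below are hypothetical, illustrating a module that defines the `minimum_tls_version` variable above), a disallowed value is rejected before any plan exists:

```hcl
# Hypothetical caller of a module defining the minimum_tls_version
# variable shown above; "TLS1_0" fails the contains() validation.
module "storage" {
  source = "./modules/storage"

  minimum_tls_version = "TLS1_0" # rejected: not in ["TLS1_2", "TLS1_3"]
}
```

`terraform plan` stops with an `Invalid value for variable` error carrying the control's `error_message`, so the bad value never reaches a plan file.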
Why Bypass Attempts Fail¶
Let's look at a concrete example using the S3 bucket module. S3 bucket logging is a common requirement across multiple compliance frameworks. With a standard module, a developer might disable logging intentionally or accidentally:
```hcl
# Attempting to skip logging configuration
module "s3_bucket" {
  source  = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"
  version = "5.9.0"

  bucket = "my-data"

  # Trying to disable logging - this will not work
  logging = {}
}
```
With CTF, this attempt fails. The S3 bucket logging control is enforced by the module itself. The constraint is architectural, not a policy that can be silently overridden by the caller. The module ensures logging is configured regardless of input.
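One way such an architectural constraint can be expressed is a validation that refuses an unset logging target. This is a sketch of the pattern, not the actual module source:

```hcl
# Sketch: a logging input that cannot be silently emptied.
# Variable shape and names are illustrative, not the real CTF interface.
variable "logging" {
  description = "S3 access logging configuration (target bucket is mandatory)"
  type        = any

  validation {
    # try() yields null when target_bucket is absent, so `logging = {}` fails.
    condition     = try(var.logging.target_bucket, null) != null
    error_message = "logging.target_bucket must be set to enable S3 bucket access logging."
  }
}
```

Because the variable has no default and the validation inspects its shape, neither omitting the input nor passing an empty object can produce a bucket without logging.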
In practice, the user might see an error like:
```text
Control 's3_bucket_logging_enabled': logging.target_bucket must be set to enable S3 bucket access logging.
Read more: https://docs.compliance.tf/controls/aws/s3_bucket_logging_enabled
Frameworks: SOC 2, CIS AWS v1.4.0 (3.6), PCI DSS v4.0 (10.2.1)
```
This is not about catching mistakes after the fact. It is about making the mistake impossible to make for the covered controls.
Where exceptions are allowed, they are controlled by explicit flags or separate modules, so any deviation is intentional and visible to both security teams and auditors. Learn how to customize compliance-ready Terraform modules.
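An explicit exception path might look like the following. The flag name is hypothetical, chosen only to show how an opt-out stays loud and reviewable:

```hcl
# Hypothetical exception flag; the actual compliance.tf interface may differ.
module "s3_bucket" {
  source  = "soc2.compliance.tf/terraform-aws-modules/s3-bucket/aws"
  version = "5.9.0"

  bucket = "my-scratch-data"

  # A clearly named opt-out leaves an auditable trail in code review.
  compliance_exception_logging_disabled = true
}
```

Unlike silently setting `logging = {}`, an exception like this is greppable, shows up plainly in a diff, and gives auditors a single pattern to search for.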
DIY Scanning vs Built-in Prevention¶
Both approaches aim for the same outcome: compliant infrastructure. The difference is in how you get there.
In the second part of this series, we look at how to prove that these guardrails actually work, how to integrate with Powerpipe, and how to turn module behavior into audit-ready evidence.