Terraform State Backend & S3 Bucket
This guide explains the structure of a Terraform S3 state backend bucket, including the use of workspaces, key prefixes, and bucket naming. It details how the backend.tf.json file configures the S3 backend for storing Terraform state, and how DynamoDB is used for state locking and consistency checking. The document provides examples and best practices for managing and accessing the Terraform state backend.
Terraform State Backend & Locking via Amazon S3
As of Terraform 1.10.5, the S3 backend supports built-in state locking via S3, eliminating the need for DynamoDB for state management. Plan a smooth migration to the new locking mechanism:
- No Extra Setup: You don't need to deal with a separate DynamoDB table just for locking.
- Saves Money: Dropping DynamoDB means cutting down on unnecessary costs.
- Easy Migration: Terraform lets you use both DynamoDB and S3 locking at the same time, so you can switch over smoothly.
- Better Security: S3 Object Lock can enforce retention policies, giving you extra protection.
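The dual-locking migration path can be sketched as a backend block that sets both options (a sketch assuming the bucket and table names used elsewhere in this guide); Terraform then acquires both locks, so the DynamoDB table can be dropped once the S3 lock file is proven:

```hcl
terraform {
  backend "s3" {
    bucket         = "tf-state-bucket"
    key            = "state/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    use_lockfile   = true               # new S3-native lock file
    dynamodb_table = "state-lock-table" # legacy lock, kept only during migration
  }
}
```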
1 Enable S3 Object Lock
When creating your Terraform state bucket, enable Object Lock (note: this is irreversible after activation).
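As a minimal sketch (assuming the hashicorp/aws provider v4 or later; the bucket name is illustrative), a state bucket created with Object Lock and the versioning it requires could be declared as:

```hcl
resource "aws_s3_bucket" "tf_state" {
  bucket              = "tf-state-bucket"
  object_lock_enabled = true # must be set at creation; cannot be enabled later
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled" # Object Lock requires versioning
  }
}
```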
2 Modify Terraform Backend Configuration
- Using S3 Lock Directly
- Using DynamoDB for State Locking
- Terraform & S3 Best Practices
terraform {
backend "s3" {
bucket = "tf-state-bucket"
key = "state/terraform.tfstate"
region = "us-east-1"
encrypt = true
use_lockfile = true
}
}
terraform {
backend "s3" {
bucket = "tf-state-bucket"
key = "state/terraform.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "state-lock-table"
}
}
From HashiCorp
Stores the state as a given key in a given bucket on Amazon S3. This backend also supports state locking and consistency checking via DynamoDB, which can be enabled by setting the dynamodb_table field to an existing DynamoDB table name. A single DynamoDB table can be used to lock multiple remote state files. Terraform generates key names that include the values of the bucket and key variables.
- Use Remote Backend: Store your Terraform state files in a secure and centralized location like an S3 bucket with versioning enabled.
- Enable Encryption: Use server-side encryption (SSE) to protect data at rest.
- Define Policies: Use IAM policies to restrict access to your S3 bucket.
- Leverage Tags: Add tags to your resources for better organization and cost tracking.
3 Test in Dev/Staging
Deploy the updated configuration in a non-production environment to validate the changes.
4 Migrate Production
Once confident, phase out DynamoDB and rely solely on S3 Object Lock.
5 Monitor & Optimize
Use AWS CloudTrail and Terraform logs to monitor state lock behavior.
References
- S3 backend configuration documentation: https://www.terraform.io/docs/language/settings/backends/s3.html
Enterprise-Grade Terraform–AWS Accelerator
A comprehensive, enterprise-grade Terraform–AWS accelerator configuration. This step-by-step solution:
- Includes a central global_variables.tf file for provider and common variable definitions.
- Uses a centralized S3 bucket for state (with versioning and object lock enabled) and a naming convention that works across multiple AWS accounts and organizational units (e.g., management, network, backup, …).
- Organizes code into a clear folder structure with separate directories for global configurations, reusable modules, and account-specific resources.
- Provides a Taskfile for standardized automation (init, plan, apply, destroy).
- Integrates multiple modules (such as ACM, Route53, etc.) in an enterprise context.
1. Repository Folder Structure
A clear folder structure is critical in a multi-account, multi-module enterprise environment. For example:
DevOps-Terraform/
├── account/
│   ├── management/identity-center/
│   │   ├── backend.tf      ## Account-specific backend configuration
│   │   ├── main.tf         ## Resources for the management account
│   │   ├── variables.tf    ## Variables specific to management account
│   │   └── outputs.tf      ## Outputs for management account
│   ├── network/
│   │   ├── backend.tf      ## Account-specific backend configuration for network account
│   │   ├── main.tf         ## Network resources
│   │   └── variables.tf
│   ├── backup/
│   │   └── ...             ## Backup account configuration
│   └── ...
├── modules/
│   ├── terraform-aws-s3-bucket/  ## Local clone of the S3 bucket module (customized for enterprise)
│   ├── terraform-aws-acm/        ## Module for ACM certificate management
│   ├── terraform-aws-route53/    ## Module for Route53 DNS management
│   └── ...                       ## Other modules
├── global/
│   ├── global_variables.tf  ## Global provider and variable definitions
│   └── backend_setup.tf     ## (Optional) Script/module to bootstrap the central S3 state bucket
└── Taskfile.yml             ## Task automation file (using Task)
2. Global Configuration
This file provides a central definition of the Terraform version, providers, and common variables. Place it in global/global_variables.tf:
// File: global/global_variables.tf
terraform {
required_version = ">= 1.10.5"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.70"
}
}
}
provider "aws" {
region = var.region
## For multi-account or cross-region setups, consider additional provider configuration
## (e.g., assume_role, aliasing, endpoints)
}
variable "region" {
description = "Default AWS region for deployments"
default = "ap-southeast-2"
}
Notes:
- This file is sourced by every account and module that needs to adhere to the common provider version and region.
- In a multi-region or multi-account scenario, you might override these settings in account-specific variable files.
3. Provisioning the Centralized S3 State Bucket with Object Lock
Before using the bucket as your backend, you must create it with the following features:
- Versioning enabled
- Object Lock enabled (configured at creation)
- Server-side encryption
- Strict ACLs and tagging for compliance
Using your locally cloned S3 bucket module, you can create a file such as global/backend_setup.tf:
## File: global/backend_setup.tf
module "s3_state_bucket" {
source = "../modules/terraform-aws-s3-bucket"
bucket = "ams-terraform-org-state"
acl = "private"
versioning_enabled = true
# Enable object locking (ensure your module supports this parameter)
object_lock_enabled = true
server_side_encryption_configuration = {
rule = {
apply_server_side_encryption_by_default = {
sse_algorithm = "AES256"
}
}
}
tags = {
Environment = "global"
ManagedBy = "Terraform"
}
}
Important:
- Object Locking: Must be enabled when the bucket is created. Confirm that your module properly supports object lock configuration.
- Compliance: Ensure that IAM policies and bucket policies align with your enterprise security standards.
4. Account-Specific Backend Configuration
Each account (or organizational unit) must reference the centralized state bucket with a unique key. For example, in the management account, create account/management/backend.tf:
// File: account/management/backend.tf
terraform {
backend "s3" {
bucket = "ams-terraform-org-state"
key = "terraform-aws/123456789012/management/terraform.tfstate"
region = "us-east-1" # Region where the S3 state bucket is hosted
encrypt = true
use_lockfile = true # Use S3 object locking instead of a DynamoDB table
}
}
Key Points:
- Bucket: Points to the central state bucket.
- Key Naming Convention: Incorporate the AWS account number (e.g., 123456789012) and organizational unit (e.g., management). This could be parameterized as: key = "terraform-aws/${var.account_number}/${var.environment}/terraform.tfstate"
- Region & Encryption: These ensure secure and compliant state storage.
Replicate similar backend configurations for other accounts (e.g., network, backup) using the appropriate key values.
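Following that pattern, a hypothetical network-account backend (same bucket, with the key adjusted per account and organizational unit; the account number is the same placeholder used throughout) might look like:

```hcl
// File: account/network/backend.tf
terraform {
  backend "s3" {
    bucket       = "ams-terraform-org-state"
    key          = "terraform-aws/123456789012/network/terraform.tfstate" # swap in the network account's ID
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true
  }
}
```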
5. Enterprise Task Automation with Taskfile
A robust automation tool (like Task) enforces consistent operations. Here is an enhanced Taskfile.yml:
# File: Taskfile.yml
version: '3'
vars:
# Override these via environment variables or CLI parameters as needed.
TF_STATE_BUCKET: "ams-terraform-org-state"
ACCOUNT_ID: "123456789012" # Default AWS account number; update per environment.
ENVIRONMENT: "management" # Options: management, network, backup, etc.
TF_BACKEND_REGION: "us-east-1" # Region where the state bucket is hosted.
# Construct the backend key with a consistent naming convention.
BACKEND_KEY: "terraform-aws/{{.ACCOUNT_ID}}/{{.ENVIRONMENT}}/terraform.tfstate"
# Set the working directory for Terraform commands.
WORKING_DIR: "account/{{.ENVIRONMENT}}"
tasks:
init:
cmds:
- >
cd {{.WORKING_DIR}} && terraform init
-backend-config="bucket={{.TF_STATE_BUCKET}}"
-backend-config="key={{.BACKEND_KEY}}"
-backend-config="region={{.TF_BACKEND_REGION}}"
-backend-config="encrypt=true"
-backend-config="use_lockfile=true"
description: "Initialize Terraform with S3 backend and object lock"
plan:
cmds:
- cd {{.WORKING_DIR}} && terraform plan
description: "Generate and review Terraform execution plan"
apply:
cmds:
- cd {{.WORKING_DIR}} && terraform apply -auto-approve
description: "Apply Terraform configuration changes"
destroy:
cmds:
- cd {{.WORKING_DIR}} && terraform destroy -auto-approve
description: "Destroy all Terraform-managed infrastructure (use with extreme caution)"
Usage Examples:
- Initialize: task init
- Plan: task plan
- Apply: task apply
- Destroy: task destroy
This Taskfile supports dynamic configuration and can be extended with additional tasks (e.g., linting, formatting, security checks).
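As a sketch of such extensions (the task names are illustrative; terraform fmt -check and terraform validate are standard Terraform CLI subcommands), two additional entries under the existing tasks: key could be:

```yaml
# Hypothetical additions under the existing `tasks:` key in Taskfile.yml
  fmt:
    cmds:
      - cd {{.WORKING_DIR}} && terraform fmt -check -recursive
    description: "Fail if any file deviates from canonical Terraform formatting"
  validate:
    cmds:
      - cd {{.WORKING_DIR}} && terraform validate
    description: "Check configuration syntax and internal consistency"
```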
6. Integrating Multiple Modules in an Enterprise Setup
When deploying resources, you typically call various modules (ACM, Route53, etc.). For example, in account/management/main.tf:
// File: account/management/main.tf
// Reference the global variables and provider
terraform {
# No backend block here β it is in backend.tf
}
locals {
environment = "management"
account_id = "123456789012"
}
// Example: ACM module usage for certificate management.
module "acm_certificates" {
source = "../../modules/terraform-aws-acm"
domain_name = var.domain_name
validation_method = "DNS"
# Additional module-specific variables can be passed here.
}
// Example: Route53 module for DNS zone management.
module "route53_dns" {
source = "../../modules/terraform-aws-route53"
zone_name = var.dns_zone
# Include other required parameters.
}
// Additional resources can be defined here.
resource "aws_s3_bucket" "example" {
bucket = "example-bucket-${local.environment}"
acl = "private"
tags = {
Environment = local.environment
Account = local.account_id
}
}
Key Considerations:
- Modular Integration: Use relative paths (or versioned module registry sources) so that updates to modules propagate correctly.
- Variable Passing: Parameterize modules to support different accounts and environments.
- Output Usage: Use module outputs to build dependency graphs between resources as needed.
7. Best Practices & Final Enterprise Considerations
- Centralized Version Control & CI/CD: Store all Terraform configurations (global, account-specific, and modules) in a version-controlled repository (e.g., Git). Integrate with CI/CD pipelines to enforce policy checks, automated testing, and secure deployments.
- Locking & Concurrency: While the S3 backend with use_lockfile = true simplifies state locking, continuously monitor concurrent deployments. For environments with heavy parallel activity, consider a DynamoDB table for state locking or carefully manage deployment windows.
- Environment Isolation: Use workspaces or separate state files (via backend key naming) to isolate environments (e.g., staging, production, testing). This minimizes the risk of cross-environment interference.
- Security & Compliance:
  - Enforce encryption (both in transit and at rest).
  - Apply least-privilege IAM roles.
  - Implement robust bucket policies and logging.
  - Regularly audit your state and Terraform logs.
- Documentation & Training: Document repository structure, naming conventions, backend configuration, and automation practices. Provide training to teams on Terraform best practices and module usage.
Conclusion
This enterprise-grade Terraform–AWS accelerator configuration integrates:
- A centralized S3 state bucket with versioning and object lock (ensuring secure state management),
- A well-defined folder structure separating global settings, reusable modules, and account-specific configurations,
- A comprehensive global configuration file (global_variables.tf) that sets provider versions and default variables,
- A parameterized Taskfile to automate Terraform operations consistently, and
- Examples of module integration (ACM, Route53, etc.) to demonstrate a modular, scalable approach.
By following this detailed, step-by-step configuration and best practices guide, your organization can confidently deploy and manage Terraform configurations across multiple AWS accounts and organizational units, meeting the highest standards of enterprise DevOps and cloud security.
The backend.tf.json File
This file is programmatically generated by Semaphore using all the capabilities of Stacks to deep merge. Every component defines a backend.tf.json, which is what distinguishes it as a root module (as opposed to a Terraform child module). The backend tells Terraform where to access the last known deployed state of infrastructure for the given component. Since the backend is stored in S3, it's easily accessed in a distributed manner by anyone running Terraform.
An identical backend.tf.json file is used by all environments (stacks). Environments are selected using the terraform workspace command, which happens automatically when using Taskfile together with the --stack argument.
For reference, this is the anatomy of the backend configuration: (note this is just a JSON representation of HCL)
{
"terraform": {
"backend": {
"s3": {
"acl": "bucket-owner-full-control",
"bucket": "acme-ue2-root-tfstate",
"dynamodb_table": "acme-ue2-root-tfstate-lock",
"encrypt": true,
"key": "terraform.tfstate",
"profile": "acme-gbl-root-terraform",
"region": "us-east-2",
"workspace_key_prefix": "vpc"
}
}
}
}
Either profile or role_arn can be used here.
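The deep-merge generation step can be illustrated with a minimal sketch (plain Python; the variable names and the split between organization defaults and component overrides are assumptions, not Semaphore's actual data model):

```python
import json

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base, returning a new dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Organization-wide backend defaults shared by every component.
org_defaults = {
    "terraform": {"backend": {"s3": {
        "acl": "bucket-owner-full-control",
        "encrypt": True,
        "key": "terraform.tfstate",
        "region": "us-east-2",
    }}}
}

# Component-specific settings layered on top of the defaults.
vpc_overrides = {
    "terraform": {"backend": {"s3": {
        "bucket": "acme-ue2-root-tfstate",
        "dynamodb_table": "acme-ue2-root-tfstate-lock",
        "profile": "acme-gbl-root-terraform",
        "workspace_key_prefix": "vpc",
    }}}
}

# The merged result is what gets written out as backend.tf.json.
print(json.dumps(deep_merge(org_defaults, vpc_overrides), indent=2, sort_keys=True))
```

Running it emits the same backend anatomy shown in the JSON above.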
S3 Backend
The S3 bucket is created in the cold start using the tfstate-backend component provisioned in the root account.
The state format is s3://{bucket_name}/{component}/{stack}/terraform.tfstate
- The bucket name format is {namespace}-{optional tenant}-{environment}-{stage}-tfstate
- We deploy this bucket in the root account, so here are some example bucket names: acme-ue2-root-tfstate (without tenant), acme-mgmt-ue2-root-tfstate (with tenant: mgmt)
- The component name provided is used as the terraform state's workspace_key_prefix in each component's backend.tf.json. Therefore, this will be the first S3 key after the bucket name.
- The stack is where the component is provisioned and the name of the workspace created.
- Finally, terraform.tfstate is the key provided in each component's backend.tf.json.
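The path composition rules above can be sketched as two small helpers (hypothetical function names; the formats come straight from the conventions just listed):

```python
def bucket_name(namespace: str, environment: str, stage: str, tenant: str = "") -> str:
    """Compose {namespace}-{optional tenant}-{environment}-{stage}-tfstate."""
    parts = [namespace] + ([tenant] if tenant else []) + [environment, stage, "tfstate"]
    return "-".join(parts)

def state_uri(bucket: str, component: str, stack: str) -> str:
    """Compose s3://{bucket_name}/{component}/{stack}/terraform.tfstate."""
    return f"s3://{bucket}/{component}/{stack}/terraform.tfstate"

print(state_uri(bucket_name("acme", "ue2", "root"), "vpc", "ue2-prod"))
# s3://acme-ue2-root-tfstate/vpc/ue2-prod/terraform.tfstate
```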
The terraform commands run by Taskfile for the backend s3://acme-ue2-root-tfstate/vpc/ue2-prod/terraform.tfstate:
task terraform deploy vpc --stack ue2-prod
| task will create the input variables from the YAML and run the following commands
| -- terraform init
| -- terraform workspace select ue2-prod
| -- terraform plan
| -- terraform apply
To better visualize what's going on, we recommend running the commands below to explore your own state bucket. Make sure to use the correct profile for your organization (acme-gbl-root-admin is just a placeholder).
Find the bucket. It should contain tfstate in its name. In the example below, we can see the vpc component is deployed to use2-auto, use2-corp, use2-dev, use2-qa, use2-sbx01, use2-staging. As you can see, the workspace is constructed as the {environment}-{stage}. This setting is defined in the task.yaml config with the stacks.name_pattern setting (see Semaphore for all settings).
$ aws --profile acme-gbl-root-admin \
  s3 ls --recursive s3://{bucket_name}/
...
2021-11-01 19:53:48 120926 vpc/use2-auto/terraform.tfstate # workspace key prefix: vpc, workspace name is `use2-auto`
2021-11-01 19:49:12 123604 vpc/use2-corp/terraform.tfstate
2021-11-01 19:50:18 123486 vpc/use2-dev/terraform.tfstate
2021-11-01 19:48:39 123354 vpc/use2-qa/terraform.tfstate
2021-11-01 19:49:46 123735 vpc/use2-sbx01/terraform.tfstate
2021-11-01 19:50:50 124014 vpc/use2-staging/terraform.tfstate
See where all the VPC components contain state
aws --profile acme-gbl-root-admin \
s3 ls s3://{bucket_name}/vpc/
If a component is mistakenly deployed somewhere and destroyed, a leftover terraform.tfstate file will be present on your local filesystem with a small file size. So while this is a good way to search for backends, it's not the best way to determine where a component is deployed. Also, the S3 bucket has versioning enabled, ensuring we can always (manually) revert to a previous state if need be.
DynamoDB Locking
Find the table. It should contain tfstate-lock in its name.
aws --profile acme-gbl-root-admin \
dynamodb list-tables
Get a LockID
aws --profile acme-gbl-root-admin \
dynamodb get-item \
--table-name {table_name} \
--key '{"LockID": {"S": "{bucket_name}/{component}/{stack}/terraform.tfstate-md5"}}'
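The LockID used above is simply the full state path with an -md5 suffix; a tiny helper (hypothetical, mirroring the pattern shown in the get-item key) makes that explicit:

```python
def lock_id(bucket: str, component: str, stack: str) -> str:
    """Compose the DynamoDB digest key: {bucket_name}/{component}/{stack}/terraform.tfstate-md5."""
    return f"{bucket}/{component}/{stack}/terraform.tfstate-md5"

print(lock_id("acme-ue2-root-tfstate", "vpc", "ue2-prod"))
# acme-ue2-root-tfstate/vpc/ue2-prod/terraform.tfstate-md5
```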