
A Simple Way to Structure Your Terraform Code


Basic Structure

Writing Terraform code is like wiring up a network switch. I'm not a network engineer, but you get the sentiment: we've all seen how bad the cabling can get.

When you start writing Terraform, everything usually lives in the same folder. This can be fine for projects with only a few resources, or when you need to get a project off the ground quickly.

Your structure would look like this:

.
├── backend.tf
├── db-instance.tf
├── iam.tf
├── igw.tf
├── provider.tf
├── s3.tf
├── subnets.tf
├── variables.tf
└── vpc.tf

This is straightforward and works well for beginner and smaller projects, but you will slowly add more resources, and soon you'll be waiting for Terraform to refresh all 20 of them every time you change your code. That's one reason to restructure. Another is the blast radius: roughly, how much of your infrastructure a single accidental change or destroy can take out.

The last and main reason is to keep your sanity intact (I've read a few Terraform horror stories).

Writing Better Code

What you should do is split your code into logical sections. For example, say you have a "production" and a "non-production" environment. You can create two separate folders, and inside each, logically split the resources into sub-folders, along these lines:

production/
├── backend
│   ├── dynamodb.tf
│   ├── outputs.tf
│   ├── s3-backend.tf
│   ├── terraform.tf
│   ├── terraform.tfvars
│   └── variables.tf
├── cloudwatch
│   ├── main.tf
│   ├── terraform.tf
│   ├── terraform.tfvars
│   └── variables.tf
└── ecs
    ├── main.tf
    ├── outputs.tf
    ├── terraform.tf
    ├── terraform.tfvars
    └── variables.tf

The files in each folder follow a pattern: terraform.tf holds the provider and backend configuration, main.tf holds the Terraform code, and variables.tf holds the variables; you can split any of these into further files.
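As a sketch of that split, here is what a minimal variables.tf and matching terraform.tfvars might look like (the variable names here are hypothetical, for illustration only):

```hcl
# variables.tf -- declarations only (hypothetical names)
variable "environment" {
  description = "Deployment environment name"
  type        = string
}

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

# terraform.tfvars -- the concrete values for this folder:
# environment = "production"
# vpc_cidr    = "10.0.0.0/16"
```

Keeping declarations and values in separate files lets each environment folder reuse the same variable definitions with different .tfvars values.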

Terraform Structure for ECS

When it comes to Elastic Container Service (ECS), I recommend something like the following. It keeps your services separate, so a change to one service can't affect another.

├── ecs
│   ├── main.tf
│   ├── terraform.tf
│   ├── terraform.tfvars
│   └── variables.tf
└── services
    ├── service1
    ├── service2
    └── service

Now you may wonder: since Terraform only operates within a single folder, how do these split resources talk to each other? That's where the terraform_remote_state data source comes in.

To give you the gist of how this works: the outputs a folder defines in its outputs.tf are accessible to other folders via the remote backend.

This is the network/terraform.tf file:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.4.0"
    }
  }
  backend "s3" {
    key               = "network/terraform.tfstate"
    bucket            = "Leos-tf-prod"
    encrypt           = true
    profile           = "My-custom-profile"
    region            = "us-west-2"
    dynamodb_endpoint = "dynamodb.us-west-2.amazonaws.com"
    dynamodb_table    = "production-tf-state"
  }
}

Note: specifying the profile makes sure Terraform doesn't use the default AWS credentials profile.
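Because every folder carries its own backend configuration, each one is initialized and applied independently. Assuming the network folder above, a typical session might look like this (a sketch, not verbatim output):

```
$ cd network
$ terraform init    # configures the s3 backend for this folder only
$ terraform plan    # refreshes only this folder's resources
$ terraform apply   # writes only to network/terraform.tfstate
```

An apply here can never touch the state of the ecs or cloudwatch folders, which is the whole point of the split.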

Now I'll set up the resources using the AWS VPC module. If I then need to access data, such as the VPC ID, from another folder, I need to expose it using output blocks in outputs.tf.

# VPC
output "vpc_id" {
  description = "The ID of the VPC"
  value       = module.vpc.vpc_id
}

output "vpc_cidr_block" {
  description = "The CIDR block of the VPC"
  value       = module.vpc.vpc_cidr_block
}

Now if I run terraform output I can see the values:

$ terraform output

vpc_cidr_block = "20.0.118.0/17"
vpc_id = "vpc-03434c243d05599f"

Let's move forward: now I need to access the same values in my Elastic Container Service folder.

To do this, I simply define a terraform_remote_state data source, as such:

ecs/terraform.tf

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket  = "Leos-tf-prod"
    key     = "network/terraform.tfstate"
    region  = "us-west-2"
    profile = "My-custom-profile"
  }
}

Now, as you may have noticed, typing data.terraform_remote_state.network.outputs.vpc_id everywhere is verbose and forces you to remember the full path. To avoid this, you can use locals, as such:

locals {
  vpc_id          = data.terraform_remote_state.network.outputs.vpc_id
  private_subnets = data.terraform_remote_state.network.outputs.private_subnets
  public_subnets  = data.terraform_remote_state.network.outputs.public_subnets
}

Then, anywhere in that folder, you can reference the value simply as:

local.vpc_id
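Inside the ecs folder, those locals can then be consumed like any other value. A minimal hypothetical resource using them might be:

```hcl
# Hypothetical security group in the ecs folder, placed in the VPC
# created by the network folder and read via its remote state output.
resource "aws_security_group" "ecs_tasks" {
  name   = "ecs-tasks"
  vpc_id = local.vpc_id
}
```

If the network folder ever changes its VPC, the next plan in the ecs folder picks up the new ID through the remote state without any copy-pasting.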

Benefits of this approach:

  • Reduced blast radius: if you accidentally run a destroy, only a limited number of resources are affected.
  • Improved security when someone needs access to only one part of the code.
  • Splitting the code reduces the time taken to plan and apply changes.

If you would like to continue the discussion, you can always reach out to me on Twitter or LinkedIn, and if you feel like following me on GitHub, you can do that too.

That's it, thank you for reading.


Great addition to this space, Leon! Come join the Learning Terraform Discord server where you can chat with Leon and many others about Terraform or more general IaC practices. Invite code is 5fAwCK5R5W (discord [dot] gg).


That's super useful! Going to use these in my IaC. 🤗

Thanks for this article. Is there a way to avoid blast radius without using remote state file? Assuming you are developing locally.

Found this terraform doc addressing my question https://www.terraform.io/language/settings/backends/local

Leon, 4y ago

Hey Christopher K.M Etou sorry I wasn't getting any notifications directly, glad you found the answer!

You might find some value in a recent video on this exact question here: YT: watch?v=Qg8VZsbaXxA&t=1416s. I can't post links yet, so you'll just have to add that query string to the YouTube URL ;)

/dev/null, 4y ago

No mention of environments, modules, zero code reuse, and an amateur take on Terraform.

I wouldn't call it best practices


I believe the focus is on how to avoid blast radius. I do agree that having to repeat the provider details in every directory could be annoying.

/dev/null, 4y ago

Oh, I know blast radius. It doesn't do anything to help. Consider having an infra or multiple generic projects that have elevated permissions, and multiple projects with their own state. Each team has access only to their own state, with the pipeline only able to fetch (but not change) remote state from the elevated projects. Christopher K.M Etou

/dev/null I believe knowledge is a distributed system, and the more we all contribute, the more we all learn. Leon's article is the first block on how to modularize your Terraform code.

The use case you are describing is a valid one. Is there anything you found somewhere that could help complement Leon's article?

Leon, 4y ago

Hey /dev/null, thanks for your input, and yes, I'm a newbie in Terraform; my focus was on splitting your code into smaller pieces. Each folder has its own state file, so unless you're running terraform apply across all folders via something like a CI/CD platform, your code is definitely much safer in a single repo. Could you elaborate on environments? If you're pointing towards workspaces, those are being deprecated and will only be available for the Terraform Cloud backend/state; if you're referring to prod/dev environments, then this folder structure also covers that. While modules may help, they weren't the focus of my article.


An amazing read 🙌

Leon, 4y ago

Thanks!
