Following on from my first steps with Terraform post, this post covers the next steps: external state storage and splitting up the Terraform configuration.

External state

By default Terraform stores state in a file named terraform.tfstate. This is difficult to manage, both for team working (as described in the remote state documentation) and for revision control and automation, where the state must be persisted appropriately if it changes during a continuous delivery workflow. Remote state separates state from configuration, which is a very good thing in my experience.

As I am using an Azure infrastructure, the obvious backend for me to use is azurerm which stores state within an Azure Blob Container.

terraform {
    backend "azurerm" {
        resource_group_name = "RG-MY_STORAGE-001"
        storage_account_name = "storage_account_001"
        container_name = "tfstate"
        key = "prod.terraform.tfstate"
    }
}

Once the backend has been changed, terraform init has to be run again; it accepts the -migrate-state option to migrate the existing state:

terraform init -migrate-state

In order to migrate from one storage account to another (even across resource groups), I found I had to remove the backend configuration entirely (reverting to the default local state), migrate to local state, then put the new configuration in place and migrate from local state back to the (new) Azure blob.
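In shell terms, the round trip I describe above looks roughly like this (the storage account details are whatever your old and new backend blocks contain):

```shell
# 1. Delete (or comment out) the backend "azurerm" block, reverting to
#    the default local backend, then pull the state back locally:
terraform init -migrate-state

# 2. Put a backend "azurerm" block pointing at the NEW storage account
#    in place, then push the local state up to it:
terraform init -migrate-state
```

Terraform prompts for confirmation before copying the state in each direction, so it is worth reading the prompts carefully rather than scripting this blindly.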

Splitting up the configuration

Terraform reads all .tf files in the root directory (called the “root module”), so splitting up the configuration is simply a case of creating separate .tf files. Terraform has a concept of modules, which allow us to create templates and include them (including specifying that we want multiple copies of a resource defined in a module).
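As a sketch of what that looks like (the module path and inputs here are hypothetical, not part of my actual configuration), a module can be instantiated multiple times with count:

```hcl
# Hypothetical module call - "./modules/subnet" and its inputs are
# illustrative only
module "extra_subnets" {
    source = "./modules/subnet"
    count  = 2

    # each instance can vary its inputs via count.index
    name = "SNET-EXTRA-00${count.index + 1}"
}
```

Note that count on module blocks requires Terraform 0.13 or later.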

My first configuration was in a single file that contained everything, the first thing I did was to split the variables (for the tagging) that I defined into a separate tag_variables.tf file:

variable "base_tags" {
    type = map(string)
    description = "Base set of tags for all resources"
    default = {
        "Application Name" = "My Application"
        "Application Owner" = "Support Team"
        "Business Sector" = "R&D"
        "Country" = "UK"
        "Data Classification" = "Confidential"
        "Environment" = "PROD/DEV"  # Mandatory tag per company policy - set to correct value by more specific variables
        "Region" = "EMEA"
        # ... etc.
    }
}

variable "tags_dev" {
    type = map(string)
    description = "Development resources tags"
    default = {Environment = "DEV"}
}

variable "tags_live" {
    type = map(string)
    description = "Live resources tags"
    default = {Environment = "LIVE"}
}

and put the core configuration into terraform.tf:

terraform {
    required_providers {
        azurerm = {
            source = "hashicorp/azurerm"
            version = ">=2.77.0"  # 2.77.0 added 'EnableServerless' cosmosdb capability, which we use
        }
    }

    backend "azurerm" {
        resource_group_name = "RG-MY_STORAGE-001"
        storage_account_name = "storage_account_001"
        container_name = "tfstate"
        key = "prod.terraform.tfstate"
    }
}

# Using the Azure plugin
provider "azurerm" {
    features {}
}

To add the networking, I created network.tf:

resource "azurerm_resource_group" "network" {
    name = "RG-MY_VNET-001"
    location = "West Europe"

    tags = merge(var.base_tags, var.tags_live)
}

resource "azurerm_virtual_network" "vnet" {
    name = "VNET-MY_NET-001"
    address_space = ["10.0.0.0/22", "10.5.0.0/18"]
    location = azurerm_resource_group.network.location
    resource_group_name = azurerm_resource_group.network.name

    tags = merge(var.base_tags, var.tags_live)
}

resource "azurerm_subnet" "access" {
    name = "SNET-ACCESS-001"
    resource_group_name = azurerm_resource_group.network.name
    virtual_network_name = azurerm_virtual_network.vnet.name
    address_prefixes = ["10.0.0.64/26"]

    enforce_private_link_endpoint_network_policies = true
    service_endpoints = ["Microsoft.KeyVault", "Microsoft.Storage"]
}

resource "azurerm_subnet" "private" {
    name = "SNET-PRIVATE-001"
    resource_group_name = azurerm_resource_group.network.name
    virtual_network_name = azurerm_virtual_network.vnet.name
    address_prefixes = ["10.0.3.0/24"]

    enforce_private_link_endpoint_network_policies = true
}

resource "azurerm_subnet" "storage" {
    name = "SNET-STORAGE-001"
    resource_group_name = azurerm_resource_group.network.name
    virtual_network_name = azurerm_virtual_network.vnet.name
    address_prefixes = ["10.0.0.16/28"]

    enforce_private_link_endpoint_network_policies = true
    delegation {
        name = "abcdef123434567890"
        service_delegation {
            name = "Microsoft.Netapp/volumes"
            actions = [
                "Microsoft.Network/networkinterfaces/*",
                "Microsoft.Network/virtualNetworks/subnets/join/action"
            ]
        }
    }
}

# ... etc.

I then revisited the private endpoint from yesterday’s post and updated it to use the subnet from the new configuration file:

resource "azurerm_private_endpoint" "cosmos_endpoint" {
#...
    subnet_id = azurerm_subnet.private.id
#...
}