Continuing from my last Terraform post, I have split my Terraform configuration into a number of files. I am now taking this one step further and creating a module that I can use to deploy a number of identical (or near-identical) resources following a pattern.

Starting point

Since my last post, I have slightly restructured the variables: the variable definitions are now all in variables.tf, for example:

variable "base_tags" {
    type = map(string)
    description = "Base set of resource tags for all resources"
}

variable "tags_dev" {
    type = map(string)
    description = "Base tags for dev/test resources"
}

variable "tags_live" {
    type = map(string)
    description = "Base tags for live resources"
}

Rather than specifying my (environment-specific) values via the default (which is not the right way to use the default mechanism), I created a variable file that Terraform will automatically read, called terraform.tfvars:

base_tags = {
    "Application Name" = "My Application"
    "Application Owner" = "Support Team"
    "Business Sector" = "R&D"
    "Country" = "UK"
    "Data Classification" = "Confidential"
    "Environment" = "PROD/DEV"  # Mandatory tag per company policy - set to the correct value by the more specific variables
    "Region" = "EMEA"
    # ... etc.
}
tags_dev = {Environment = "Dev"}
tags_live = {Environment = "Live"}
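The "PROD/DEV" placeholder in base_tags is never deployed as-is: each environment merges its own map over the base, and Terraform's merge() takes the value from the later map when keys collide. A minimal sketch of how that resolves:

```
# merge() resolves key collisions in favour of later arguments,
# so the environment-specific map overrides the base placeholder
locals {
    dev_tags = merge(var.base_tags, var.tags_dev)  # Environment ends up as "Dev"
}
```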

The module

In this context, my module is simply a directory on the filesystem. Terraform supports many other sources of modules, but at the moment I just want to template a local piece of the deployment.

My module creates user-access VMs, which we refer to as VDIs (although they are not part of a proper VDI solution, just plain VMs).

Inside my new directory, I created 4 files (following Terraform's documented good practice for module structure):

  • README.md - As this is just to be used locally, I literally just put 1 sentence in describing its purpose.
  • main.tf - This is where I put the resources the module creates.
  • outputs.tf - My module has no outputs (currently) but I created the empty placeholder file anyway.
  • variables.tf - This is where I put the variables my module needs.

The module cannot reference resources created outside it, which is very good for encapsulation but means you may need quite a few variables.
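Although the outputs file is empty for now, it is where the module would expose values back to the calling configuration. A hypothetical example (not currently part of my module) that exposes the NIC's resource id:

```
# Hypothetical content for the module's outputs file - my module does not define this yet
output "nic_id" {
    description = "Resource id of the VDI's network interface"
    value = azurerm_network_interface.vdi-nic.id
}
```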

My variables.tf looks like this:

variable "name" {
    type = string
    description = "Name of the VDI"
}

variable "size" {
    type = string
    description = "Azure VM size of the VDI"
}

variable "ip4" {
    type = string
    description = "IPv4 address of the VDI"
}

variable "location" {
    type = string
    description = "Azure location name for the VM (and associated resources)"
}

variable "resource_group_name" {
    type = string
    description = "Name of the Azure resource group for the VM (and associated resources)"
}

variable "tags" {
    type = map(string)
    description = "Tags to apply to the resources"
}

variable "subnet" {
    type = object({name = string, id = string})
    description = "Name and id of the subnet (the name is used during generation of the NIC name)"
}

Then my main.tf uses these variables to create the VM's NIC and the VM itself. We have some standard naming conventions: VM names end in a sequential number (which, for some reason, is one digit shorter in the VM name than in the disk and NIC names in our convention), and NICs are named using an infix taken from the virtual subnet they are attached to:

resource "azurerm_network_interface" "vdi-nic" {
    name = "NIC-${regex("^SNET-(?P<name>.*)-[0-9]+$", var.subnet.name).name}-0${regex("^.*?(?P<number>[0-9]+)$", var.name).number}"
    location = var.location
    resource_group_name = var.resource_group_name

    ip_configuration {
        name = "ipconfig1"
        subnet_id = var.subnet.id
        private_ip_address_allocation = "Static"
        private_ip_address = var.ip4
    }

    tags = var.tags
}

resource "azurerm_linux_virtual_machine" "vdi" {
    name = var.name
    location = var.location
    resource_group_name = var.resource_group_name
    size = var.size
    admin_username = "azureuser"

    # Cloud init?
    #custom_data = var.cloud_init

    admin_ssh_key {
        public_key = "ssh-key-here"
        username = "azureuser"
    }

    boot_diagnostics {}

    identity {
        type = "SystemAssigned"
    }

    network_interface_ids = [azurerm_network_interface.vdi-nic.id]

    os_disk {
        caching = "ReadWrite"
        storage_account_type = "Premium_LRS"
        name = "DSK-VDI-0${regex("^.*?(?P<number>[0-9]+)$", var.name).number}"
    }

    plan {
        name = "cis-centos7-l1"
        product = "cis-centos-7-v2-1-1-l1"
        publisher = "center-for-internet-security-inc"
    }

    source_image_id = "/subscriptions/<my-sub-id>/resourceGroups/RG-SHIMG-001/providers/Microsoft.Compute/galleries/SHA_IG_001/images/IMG-VDI-LINUX-001/versions/1.0.3"

    tags = var.tags
}
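The regex() calls do the naming-convention work: each uses a named capture group, and Terraform returns the captures as a map keyed by group name. Assuming a subnet called SNET-ACCESS-001 and a VM called VDI01 (illustrative values), the pieces work out like this:

```
locals {
    # "^SNET-(?P<name>.*)-[0-9]+$" captures the infix: "SNET-ACCESS-001" -> "ACCESS"
    # "^.*?(?P<number>[0-9]+)$" captures the trailing digits: "VDI01" -> "01"
    # The NIC name becomes "NIC-ACCESS-0" + "01" = "NIC-ACCESS-001",
    # one digit longer than the VM's number, per our convention
    example_nic_name = "NIC-${regex("^SNET-(?P<name>.*)-[0-9]+$", "SNET-ACCESS-001").name}-0${regex("^.*?(?P<number>[0-9]+)$", "VDI01").number}"
}
```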

Some of these bits are hard-coded for my environment - e.g. the plan and source_image_id - but these can trivially be pulled out as variables in the future.
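As a sketch of that future refactoring (the variable names here are my own suggestion, not part of the module yet), the image reference and plan could move into the module's variables:

```
# Hypothetical additions to the module's variables.tf
variable "source_image_id" {
    type = string
    description = "Full resource id of the shared image gallery image version to deploy"
}

variable "plan" {
    type = object({name = string, product = string, publisher = string})
    description = "Marketplace plan details for the image"
}
```

main.tf would then reference var.source_image_id and the var.plan attributes instead of the literals.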

Once created, terraform init has to be re-run to “discover” the module before it can be used.

Using the module

To use the module, I first added a new variable to describe my VDI instances to my top-level variables.tf:

variable "vdis" {
    type = map(object({ip4 = string, size = string, shared = bool}))
    description = "Map of VDI names to data about those VDIs"
}

Then I added the values to terraform.tfvars, similar to how I now populate my tag data:

vdis = {
    VDI01 = {
        ip4 = ""
        size = "Standard_D4s_v3"
        shared = false
    }
}

Finally, I used the module from my top-level configuration:

resource "azurerm_resource_group" "vdi" {
    name = "RG-VDI-001"
    location = "West Europe"

    tags = merge(var.base_tags, var.tags_live)
}

module "vdis" {
    for_each = var.vdis

    source = "./vdi"

    location = azurerm_resource_group.vdi.location
    resource_group_name = azurerm_resource_group.vdi.name

    subnet = {
        # Assumed: the subnet is defined elsewhere in my configuration
        # (the resource name here is illustrative)
        name = azurerm_subnet.vdi.name
        id = azurerm_subnet.vdi.id
    }

    name = each.key
    size = each.value.size
    ip4 = each.value.ip4
    tags = merge(var.base_tags, var.tags_live, {"Use" = each.value.shared ? "shared" : "personal"})
}

As before, I then had to import the existing resources (as I have existing infrastructure that I am bringing under the control of Terraform). This time, I needed to use the name of the VDI as a subscript to the module, and some escaping was necessary to get this to work in PowerShell:

terraform import "module.vdis[\"VDI01\"].azurerm_network_interface.vdi-nic" "/subscriptions/$sub_id/resourceGroups/RG-VDI-001/providers/Microsoft.Network/networkInterfaces/NIC-ACCESS-001"
terraform import "module.vdis[\"VDI01\"].azurerm_linux_virtual_machine.vdi" "/subscriptions/$sub_id/resourceGroups/RG-VDI-001/providers/Microsoft.Compute/virtualMachines/VDI01"

To loop over existing resources (numbered 1 to 37, skipping 18-23, 25 and 27), you might want to try something like:

for($vm = 1; $vm -lt 38; $vm++) {
    terraform import "module.vdis[\""$("VDI{0:D2}" -f $vm)\""].azurerm_network_interface.vdi-nic" "/subscriptions/$sub_id/resourceGroups/RG-VDI-001/providers/Microsoft.Network/networkInterfaces/$("NIC-ACCESS-{0:D3}" -f $vm)"
    terraform import "module.vdis[\""$("VDI{0:D2}" -f $vm)\""].azurerm_linux_virtual_machine.vdi" "/subscriptions/$sub_id/resourceGroups/RG-VDI-001/providers/Microsoft.Compute/virtualMachines/$("VDI{0:D2}" -f $vm)"

    if ($vm -eq 17) { $vm = 23 } elseif ($vm -eq 24) { $vm = 25 } elseif ($vm -eq 26) { $vm = 27 }
}