Terraforming Proxmox (infra as code)

2024-12-15

So you’ve got Proxmox running and you’re tired of clicking through the UI to create VMs. The next logical step is to write some provisioning code.

This is how I did it.

This was the first thing I did after installing the hypervisor: I went looking for ways to use either Ansible or Terraform.

The Setup

First things first: Proxmox needs to know who we are and what we’re allowed to do. Let’s create a role with just enough permissions (courtesy of the Proxmox provider docs):

pveum role add TerraformProv -privs "Datastore.AllocateSpace Datastore.AllocateTemplate Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt SDN.Use"

Create a user and give them our new role:

pveum user add terraform-prov@pve --password <password>
pveum aclmod / -user terraform-prov@pve -role TerraformProv

After this, head to the Proxmox UI and create an API token. You’ll need it for Terraform to authenticate with Proxmox.
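
If you’d rather stay in the terminal, the token can also be created with pveum. The token name terraform here is just an example, and --privsep 0 makes the token inherit the user’s permissions instead of needing its own ACLs:

pveum user token add terraform-prov@pve terraform --privsep 0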

Creating a Base Template

Before we can automate VM creation, we need a base image. Download your distro of choice (I went with Ubuntu 24.04 because I like it):

# Get the tools we need
sudo apt update -y && sudo apt install libguestfs-tools -y

# Download the image and add qemu-guest-agent
# (read more about the guest agent here https://pve.proxmox.com/wiki/Qemu-guest-agent)
wget <distro-cloud-image-url>
sudo virt-customize -a noble-server-cloudimg-amd64.img --install qemu-guest-agent

# Create and configure the template VM
qm create 9000 --name ubuntu-cloud --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 9000 noble-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --agent enabled=1
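
The section is called “Creating a Base Template”, but the commands above still leave 9000 as a regular VM. To convert it into a proper template (so it shows up as one in the UI and can’t be booted by accident), there’s one more qm command:

# Turn VM 9000 into a reusable template
qm template 9000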

Pro tip: Skip setting SSH keys in the base template. We’ll handle that in Terraform to avoid cloud-init overwriting our keys on every boot.

Terraforming

Now for the fun part. First, tell Terraform about our Proxmox provider:

terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "3.0.1-rc4"
    }
  }
}

provider "proxmox" {
  pm_api_url          = var.pm_api_url          # Your Proxmox API URL
  pm_api_token_id     = var.pm_api_token_id     # API token ID from earlier
  pm_api_token_secret = var.pm_api_token_secret # The secret part
  pm_tls_insecure     = false                   # Set true if using self-signed certs
}
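
Those var.* references assume the variables are declared somewhere. A minimal variables.tf sketch (the names match the references above, plus the cipassword used by the module below; the actual values go in a terraform.tfvars file or TF_VAR_* environment variables):

variable "pm_api_url" {
  description = "Proxmox API endpoint, e.g. https://your-proxmox-host:8006/api2/json"
  type        = string
}

variable "pm_api_token_id" {
  description = "API token ID in the form user@realm!tokenname"
  type        = string
}

variable "pm_api_token_secret" {
  description = "Secret half of the API token"
  type        = string
  sensitive   = true
}

variable "cipassword" {
  description = "Cloud-init password for the default user"
  type        = string
  sensitive   = true
}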

I created a reusable module for this; you can find it at terraform-proxmox-qemu. Here’s an example (check the repo for more details):

module "cp" {
  source = "[email protected]:mrdvince/terraform-proxmox-qemu.git"
  count         = 3
  vmname        = "controlplane-${count.index + 1}"
  template_name = "ubuntu-cloud"
  os_type       = "cloud_init"
  target_node   = "node01"
  vmid = "${count.index + 1 + 600}"
  ipconfig0     = "ip=192.168.50.13${count.index + 1}/24,gw=192.168.50.1"
  network = {
    bridge    = "vmbr0"
    firewall  = false
    link_down = false
    model     = "virtio"
  }
  cipassword = var.cipassword
  vm_config_map = {
    bios                   = "ovmf"
    boot                   = "c"
    bootdisk               = "scsi0"
    ciupgrade              = true
    ciuser                 = "ubuntu"
    cores                  = 4
    define_connection_info = true
    machine                = "q35"
    memory                 = 8192
    onboot                 = true
    scsihw                 = "virtio-scsi-pci"
    balloon                = 4096
  }

  disks = {
    storage    = "local-lvm"
    backup     = true
    discard    = false
    emulatessd = false
    format     = "raw"
    iothread   = false
    readonly   = false
    replicate  = false
    size       = "128G"
  }
  serial = {
    id   = 0
    type = "socket"
  }
  sshkeys = file("~/.ssh/devkey.pub")

  efidisk = {
    efitype = "4m"
    storage = "local-lvm"
  }
}

This configuration gives you a VM with:

  • UEFI boot (thanks to OVMF)
  • Cloud-init for initial setup
  • Pre-configured networking
  • SSH key access
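
Since the IPs are computed from count, it’s also handy to surface them as an output instead of digging through the Proxmox UI afterwards. A small sketch based on the ipconfig0 pattern above (the 3 matches the count):

output "controlplane_ips" {
  description = "Static IPs assigned to the control plane VMs"
  value       = [for i in range(3) : "192.168.50.13${i + 1}"]
}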

Run terraform init (or tofu init), then terraform plan to see what’s going to happen, and finally terraform apply.
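
In practice that looks something like this. The secrets are passed as TF_VAR_* environment variables so they don’t end up in a .tfvars file in git; the values here are obviously placeholders:

export TF_VAR_pm_api_url='https://your-proxmox-host:8006/api2/json'
export TF_VAR_pm_api_token_id='terraform-prov@pve!terraform'
export TF_VAR_pm_api_token_secret='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
export TF_VAR_cipassword='changeme'

terraform init
terraform plan -out tfplan
terraform apply tfplan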

Notes and Gotchas

  1. UEFI boot requires an EFI disk - the module handles this, but it’s good to know.

  2. The VM template ID (9000 in our case) needs to be unique on the Proxmox host.

  3. When creating multiple VMs in parallel, you might hit lock conflicts. To avoid this, explicitly set VMIDs:

    vmid = "${count.index + 1 + 600}"  # Starts from 601 and increments
    

This lets all the VMs spin up in parallel without fighting over IDs or locks.

  4. Cloud-init config persistence can be tricky. Avoid setting SSH keys in the base template and instead configure them through Terraform to prevent cloud-init from overwriting your changes on reboots.

