Terraform + Habitat
Announcing a new Terraform provisioner plugin for Chef Habitat!
Introduction
It’s been over a year since I posted any new content here, so I figured now would be a good time to share a quick update on a “new” Terraform provisioner for Chef Habitat.
With the changes HashiCorp has announced to the built-in Terraform provisioners, there is now a need for a self-contained plugin that can deploy Chef Habitat to VM instances.
The terraform-provisioner-habitat project aims to fill that gap, while also expanding the plugin to provision Chef Habitat to Windows hosts and bringing it up to date with current Habitat supervisor features.
Installation
The installation process is a bit cumbersome, since provisioner plugins can’t be downloaded from the Terraform Registry the way providers can.
- Download a pre-built binary release from the GitHub Releases page
- Make sure the filename matches terraform-provisioner-habitat_v<version>
- Place the file in the ~/.terraform.d/plugins/ directory (see the example path below)
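For example, assuming a hypothetical v0.1.0 release on a Linux or macOS workstation, the binary would end up at a path like the following (match the version suffix to the release you actually downloaded):

~/.terraform.d/plugins/terraform-provisioner-habitat_v0.1.0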
Example
In the example below, we are going to create four virtual machines: three for the dedicated supervisor ring, and one Linux host that will join the supervisor ring.
For a more comprehensive example, review the Terraform integration tests in the GitHub repo.
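The snippets below also reference a couple of input variables (var.vsphere and var.habitat) and several vSphere data sources that are declared elsewhere in the configuration. As a rough sketch, the variables might be declared something like this (the object structure here is an assumption made so the references resolve, not something the plugin requires):

//
// Input variables referenced by the examples below (illustrative sketch)
//
variable "vsphere" {
  description = "vSphere placement settings"
  type = object({
    folder = string
  })
}

variable "habitat" {
  description = "SSH credentials used by the Habitat provisioner connection blocks"
  type = object({
    hab_ssh_username = string
    hab_ssh_password = string
  })
}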
Supervisor Ring VMs
Create a supervisor-ring.tf file that looks something like this:
//
// Linux Habitat Dedicated Supervisor Ring
//
resource "vsphere_virtual_machine" "supervisor-ring" {
  count = 3

  //
  // Generic VM settings
  //
  num_cpus         = 4
  memory           = 4096
  name             = format("sup-ring-%s", count.index + 1)
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastores[count.index].id
  folder           = var.vsphere.folder
  guest_id         = data.vsphere_virtual_machine.linux-template.guest_id
  scsi_type        = data.vsphere_virtual_machine.linux-template.scsi_type
  host_system_id   = data.vsphere_host.hosts[count.index].id

  //
  // Primary VM disk
  //
  disk {
    label            = "disk0"
    eagerly_scrub    = data.vsphere_virtual_machine.linux-template.disks.0.eagerly_scrub
    thin_provisioned = data.vsphere_virtual_machine.linux-template.disks.0.thin_provisioned
    size             = data.vsphere_virtual_machine.linux-template.disks.0.size
    unit_number      = 0
  }

  //
  // NICs
  //
  network_interface {
    network_id = "..."
  }
}
//
// Provision the Habitat supervisor ring--note that in the supervisor ring case, this is done on a separate
// null_resource so we can obtain the IP addresses of each sup-ring instance and then provision Habitat to it
//
resource "null_resource" "habitat-provisioner" {
  depends_on = [vsphere_virtual_machine.supervisor-ring]
  count      = length(vsphere_virtual_machine.supervisor-ring[*])

  triggers = {
    cluster_instance_ids = join(",", vsphere_virtual_machine.supervisor-ring[*].id)
    user_toml_contents   = sha256(data.template_file.supervisor-ring-effortless_user_toml[count.index].rendered)
  }

  provisioner "habitat" {
    license          = "accept-no-persist"
    version          = "latest"
    peers            = vsphere_virtual_machine.supervisor-ring[*].default_ip_address
    use_sudo         = true
    permanent_peer   = true
    ring_key         = "my-ring-key"
    ring_key_content = file("my-ring-key.sym.key")
    ctl_secret       = "dead-beef"

    service {
      name      = "klm/effortless"
      strategy  = "at-once"
      user_toml = data.template_file.supervisor-ring-effortless_user_toml[count.index].rendered
      channel   = "stable"
    }

    connection {
      type        = "ssh"
      host        = vsphere_virtual_machine.supervisor-ring[count.index].default_ip_address
      user        = var.habitat.hab_ssh_username
      password    = var.habitat.hab_ssh_password
      private_key = file("~/.ssh/id_rsa.pem")
    }
  }
}
//
// User toml for klm/effortless (supervisor-ring)
//
data "template_file" "supervisor-ring-effortless_user_toml" {
  count    = 3
  template = file("effortless/user.tmpl.toml")

  vars = {
    machine_name   = vsphere_virtual_machine.supervisor-ring[count.index].name
    machine_domain = "klmh.co"
  }
}
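The supervisor-ring.tf file above also relies on vSphere data sources for the resource pool, datastores, hosts, and the Linux template. A minimal sketch of those lookups might look like the following; every name here is a placeholder for your own environment:

data "vsphere_datacenter" "dc" {
  name = "my-datacenter"
}

data "vsphere_resource_pool" "pool" {
  name          = "my-cluster/Resources"
  datacenter_id = data.vsphere_datacenter.dc.id
}

//
// One datastore and one ESXi host per supervisor-ring VM
//
data "vsphere_datastore" "datastores" {
  count         = 3
  name          = format("datastore-%s", count.index + 1)
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_host" "hosts" {
  count         = 3
  name          = format("esxi-%s", count.index + 1)
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "linux-template" {
  name          = "linux-template"
  datacenter_id = data.vsphere_datacenter.dc.id
}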
Linux VM
Create a linux.tf file that looks something like this:
resource "vsphere_virtual_machine" "linux" {
//
// Generic VM settings
//
num_cpus = 4
memory = 4096
name = "linux"
resource_pool_id = data.vsphere_resource_pool.pool.id
datastore_id = data.vsphere_datastore.datastores[0].id
folder = var.vsphere.folder
guest_id = data.vsphere_virtual_machine.linux-template.guest_id
scsi_type = data.vsphere_virtual_machine.linux-template.scsi_type
host_system_id = data.vsphere_host.hosts[0].id
//
// Primary VM disk
//
disk {
label = "disk0"
eagerly_scrub = data.vsphere_virtual_machine.linux-template.disks.0.eagerly_scrub
thin_provisioned = data.vsphere_virtual_machine.linux-template.disks.0.thin_provisioned
size = data.vsphere_virtual_machine.linux-template.disks.0.size
unit_number = 0
}
//
// NICs
//
network_interface {
network_id = "..."
}
  //
  // Provision Habitat
  //
  provisioner "habitat" {
    license          = "accept-no-persist"
    version          = "latest"
    peers            = vsphere_virtual_machine.supervisor-ring[*].default_ip_address
    use_sudo         = true
    permanent_peer   = false
    ring_key         = "my-ring-key"
    ring_key_content = file("my-ring-key.sym.key")
    ctl_secret       = "dead-beef"

    service {
      name      = "klm/effortless"
      strategy  = "at-once"
      user_toml = data.template_file.linux-effortless_user_toml.rendered
      channel   = "stable"
    }

    connection {
      type        = "ssh"
      host        = self.default_ip_address
      user        = var.habitat.hab_ssh_username
      password    = var.habitat.hab_ssh_password
      private_key = file("~/.ssh/id_rsa.pem")
    }
  }
}
//
// User toml for klm/effortless (linux)
//
data "template_file" "linux-effortless_user_toml" {
  template = file("effortless/user.tmpl.toml")

  vars = {
    machine_name   = vsphere_virtual_machine.linux.name
    machine_domain = "klmh.co"
  }
}
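The contents of effortless/user.tmpl.toml aren’t shown in this post; the keys it sets depend entirely on what the klm/effortless package exposes in its default.toml. Purely to illustrate how the template variables above get consumed, it might contain something like:

# effortless/user.tmpl.toml -- illustrative only; real keys come from the package's default.toml
# machine_name and machine_domain are interpolated by the template_file data source
node_name = "${machine_name}.${machine_domain}"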
Additional information
For more information, review the README.md.