Starting with Environments & Servers - Building a HashiCorp Cluster (Part 1)

Learn how to build a HashiCorp cluster in AWS using Terraform and Packer. This guide covers environment and server setup in detail.

Welcome to the first installment of our multi-part series on creating a HashiCorp cluster.

  • Starting with Environments & Servers - Building a HashiCorp Cluster (Part 1)
  • Configure Vault - Building a HashiCorp Cluster (Part 2)
  • Configure Consul - Building a HashiCorp Cluster (Part 3)
  • Configure Nomad - Building a HashiCorp Cluster (Part 4)
  • Tying it all together - Building a HashiCorp Cluster (Part 5)

Design Criteria

We are building a production cluster around the three HashiCorp products that form the cornerstone of a solid infrastructure. That means we will be enabling and configuring the security features from the start. We will use Vault to create, store, and rotate credentials for both Consul and Nomad, which means we need to build the Vault cluster first.

  • Cloud Provider: AWS - You knew we were putting this on AWS
  • VPC: 10.250.0.0/24 - In total, across all the environments, we will have between 9 and 12 servers (12 < 254)
  • Servers: 3 - It's the minimum viable 'Production'
  • OS: Debian - It's my preferred, stable distro
  • Arch: Arm64 - Cool stuff later, but cost savings now

We are going to be configuring between nine and twelve servers in total, and we will be installing the exact same software on all of them. The only difference will be the configuration on each server (and eventually each client). That is a perfect reason to use a tool that creates an operating system image for your machines: Packer. Since this is Amazon, we will be creating Debian AMIs for both the Arm and x64 architectures with all the updates and basic configs already applied. We will also revisit this image and its configuration multiple times as we add functionality.

Packer

Install and Configure

HashiCorp maintains its own software repositories, and I always opt for the vendor’s repo when available for the most reliable updates.

  1. Install HashiCorp Repo and Packer

HashiCorp released the repo and instructions on how to add it back in 2020 in this news release if you’re interested.

We’re following those instructions, then updating apt and installing Packer.

wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt install packer -y
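A note on the `$(lsb_release -cs)` substitution above: on Debian 12 it resolves to `bookworm`, so the line written to `hashicorp.list` looks like the following (the codename is hardcoded here purely for illustration):

```shell
# On Debian 12, $(lsb_release -cs) resolves to "bookworm",
# so the apt source line written above is:
codename="bookworm" # placeholder for $(lsb_release -cs)
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com ${codename} main"
```

Worth noticing because the image provisioner later in this post writes the same line with `bookworm` hardcoded.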

Create Project Directories and Repo

Let’s start by creating the necessary directories to organize our entire project effectively.

  1. Create the project directories
  2. Initialize git
  3. Create Packer directory
  4. cd and start working with Packer
mkdir ~/project/
cd ~/project/
git init
mkdir packer
cd packer

Create Configuration File cluster_image.pkr.hcl

Packer, like the other HashiCorp products, allows you to use plugins for various providers such as AWS, Azure, GCP, DigitalOcean, etc. Integrations can be found here

We need to tell Packer which provider plugin to use so it can install it on init:

packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1"
    }
  }
}

We’re going to create some local variables for our tags. Don’t skip the tags; they are really going to help us out later when we have an operating cluster and want servers to join or leave. (foreshadowing)

locals {
  timestamp = formatdate("YYYYMMDD-HHmmss", timestamp()) # For names - need to be unique
  tags = {
    project = "hashi_cluster_demo"
    contact = "SuperCoolITGuy-AskMyKids"
  }
}
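The `formatdate` call gives every AMI name a unique, sortable suffix such as `20240103-102624`. You can preview the shape of the name in the shell (illustrative only, not part of the Packer config):

```shell
# Mimic formatdate("YYYYMMDD-HHmmss", timestamp()) with date(1)
ts="$(date -u +%Y%m%d-%H%M%S)"
echo "cluster_image_arm64-${ts}"
```

Because the most significant fields come first, names built this way sort chronologically, which keeps them easy to reason about in the console.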

Now that we have a bit of the config set up and we’re ready to actually do something, let’s chat for a second about what Packer does and what we need to provide.

Packer HCL Primer
  • Packer HCL has an opening packer {} block for any required plugins
  • Source blocks follow - what are we using to create this image? An AMI on Amazon
    • We can have multiple source blocks for various cloud providers, architectures, regions, etc.
  • Build blocks define what Packer should do with the sources provided above
  • Provisioner blocks are contained within the build {} block and execute individual tasks on the image

Source(s)

We need two sources, one for each architecture we are using. We are only deploying to one region (us-west-2) for now, but we would need a source stanza for each additional region if it were required.

The format for the source stanza is source "amazon-ebs" "name", and that name is how it’s going to be addressed later in the build block.

Options:

  • ami_name: What the AMI will be named
  • instance_type: How big is it, and does it need special hardware, etc.
  • region: Where are you running
  • source_ami_filter: This block defines the filters that determine which image we’ll use
    • name = This is a simple name match. Here we’re matching for debian-12-<arch>-*, which is anything that starts with debian-12- followed by the architecture, a dash, and anything else. We know this will work because we were able to query it using the command-line:
      
    aws ec2 describe-images --region us-west-2 \
      --filters "Name=name,Values=debian-12*" "Name=state,Values=available" "Name=architecture,Values=arm64" \
      --query "Images | sort_by(@, &CreationDate) | [*].[ImageId,Name,OwnerId]" \
      --output table
    
    
    AMI ID Name Owner ID
    ami-00ec1849cf177d67c debian-12-arm64-20230531-1397 136693071363
    ami-0754bffb72d1da555 debian-12-arm64-20230531-1397-prod-nfuqytqhknlo2 679593333241
    ami-0e3fabf7e6603a437 debian-12-arm64-20230612-1409 136693071363
    ami-0947dd717ee80e607 debian-12-arm64-20230612-1409-prod-nfuqytqhknlo2 679593333241
    ami-0cf467cdd983a5859 debian-12-arm64-20230711-1438 136693071363
    ami-0cad52149dd2ac631 debian-12-arm64-daily-20240101-1613 136693071363
    ami-03e5df8427a95b8a4 debian-12-backports-arm64-daily-20240101-1613 136693071363
    ami-03f8d8c2589ed5677 debian-12-arm64-daily-20240102-1614 136693071363
    ami-09a8bb25e17c8b112 debian-12-backports-arm64-daily-20240102-1614 136693071363

    We need the images with the pattern debian-12-<arch>-<date>-<id> and the owner = 136693071363 which is Amazon. I’m opting for the Amazon image of Debian to ensure it’s at least tuned for the environment in which it will be running.

  • most_recent: We need the latest one - Patches are a good thing
  • owners: This is Amazon’s owner ID - I looked in the Console to verify
  • tags: These are going to be the tags for the images and how the cluster IDs them later.

  • root-device-type: This is the type we’re using to create our AMI - it just is…
  • virtualization-type: hvm - it’s AWS
# x64 Image for Intel/AMD processors
source "amazon-ebs" "cluster_image_x64" {
  ami_name      = "cluster_image_x64-${local.timestamp}"
  instance_type = "t2.micro"
  region        = "us-west-2"
  source_ami_filter {
    filters = {
      name                = "debian-12-amd64-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["136693071363"]
  }
  tags = local.tags
}

# Arm Image for the Amazon Graviton Processors
source "amazon-ebs" "cluster_image_arm64" {
  ami_name      = "cluster_image_arm64-${local.timestamp}"
  instance_type = "t4g.nano"
  region        = "us-west-2"
  source_ami_filter {
    filters = {
      name                = "debian-12-arm64-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["136693071363"]
  }
  tags = local.tags
}
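Before trusting the globs in those filters, you can sanity-check them locally; shell `case` patterns match the same way as the AMI name filter. The sample names below are copied from the query output earlier (this is a local illustration, nothing is queried):

```shell
# Quick local check: shell globs match like the AMI name filter
for name in \
  "debian-12-arm64-20230711-1438" \
  "debian-12-backports-arm64-daily-20240101-1613" \
  "debian-11-arm64-20230531-1397"
do
  case "$name" in
    debian-12-arm64-*) echo "match: $name" ;;
    *)                 echo "skip:  $name" ;;
  esac
done
```

Note that the backports images don’t match, but names like debian-12-arm64-daily-* do, which is worth keeping in mind when you review what image the filter actually picked.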

Build Block

At this point we simply want to make sure that we can build our images and that everything is working so far.

Simple Build

build {
  name = "cluster_image"
  sources = [
    "source.amazon-ebs.cluster_image_x64",
    "source.amazon-ebs.cluster_image_arm64"
  ]
}

Save the file and execute the following:

packer fmt . && \
packer init . && \
packer validate . && \
packer build .

This should execute two builds in parallel, one for the Arm processor and another for the amd64 system. If there is an error, this is a good place to stop, review, and make sure everything is where it should be.

Configure Systems

Now that we have the images building properly we can start configuring them. Let’s review what needs to happen in no particular order:

  • SSH Keys need to be installed
  • Additional users need to be created
  • Updates need to be executed
  • Additional software needs to be installed

Each of these jobs will be done by provisioners inside of the build block.

This Debian image expects you to use the admin user, so we will need to make sure we can SSH in to the admin account. We’ll also install some required packages, add the HashiCorp apt key, and install Vault.

Simple

  build {
    name = "cluster_image"
    sources = [
      "source.amazon-ebs.cluster_image_x64",
      "source.amazon-ebs.cluster_image_arm64"
    ]

    provisioner "shell" {
      inline = [
        "mkdir -p /home/admin/.ssh/",
        "sudo apt update",
        "sudo apt install -y gpg",
      ]
    }

    provisioner "file" {
      source      = pathexpand("~/.ssh/id_rsa.pub") # your local public key
      destination = "/home/admin/.ssh/authorized_keys"
    }

    provisioner "shell" {
      inline = [
        "wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null",
        "echo 'deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com bookworm main' | sudo tee /etc/apt/sources.list.d/hashicorp.list",
        "sudo apt update",
        "sudo apt install vault -y"
      ]
    }

  }

  • We are including the prior build stanza, but now expanding upon it
  • shell provisioners are exactly that: they execute a command or series of commands in the shell
  • file provisioners copy a file or template from the local system to the image being built

In order, the provisioners above:

  • Create the directory for the SSH authorized_keys file, update the apt cache, and install dependencies
  • Copy the public key from your local user into the authorized_keys file of the admin user
  • Install the HashiCorp repo as before, update the apt cache with the new repo, and install Vault

BUILD YOUR IMAGES!

Seriously, we’re finished with this part. Execute Packer to build the images and, upon completion, two AMIs will be in the Amazon account ready to be used. Spin one up and test it for yourself!

packer build .

...
<5m of output>
...
==> cluster_image.amazon-ebs.cluster_image_arm64: Creating AMI hashi_cluster_image_arm64-20240103-102624 from instance i-0e33fb159d0f03427
    cluster_image.amazon-ebs.cluster_image_arm64: AMI: ami-0f23be22378946cd4
==> cluster_image.amazon-ebs.cluster_image_arm64: Waiting for AMI to become ready...
==> cluster_image.amazon-ebs.cluster_image_arm64: Skipping Enable AMI deprecation...
==> cluster_image.amazon-ebs.cluster_image_arm64: Adding tags to AMI (ami-0f23be22378946cd4)...
==> cluster_image.amazon-ebs.cluster_image_arm64: Tagging snapshot: snap-08bca9c95ceea787a
==> cluster_image.amazon-ebs.cluster_image_arm64: Creating AMI tags
    cluster_image.amazon-ebs.cluster_image_arm64: Adding tag: "project": "hashi_cluster_demo"
    cluster_image.amazon-ebs.cluster_image_arm64: Adding tag: "contact": "thomas"
==> cluster_image.amazon-ebs.cluster_image_arm64: Creating snapshot tags
==> cluster_image.amazon-ebs.cluster_image_arm64: Terminating the source AWS instance...
==> cluster_image.amazon-ebs.cluster_image_arm64: Cleaning up any extra volumes...
==> cluster_image.amazon-ebs.cluster_image_arm64: No volumes to clean up, skipping
==> cluster_image.amazon-ebs.cluster_image_arm64: Deleting temporary security group...
==> cluster_image.amazon-ebs.cluster_image_arm64: Deleting temporary keypair...
Build 'cluster_image.amazon-ebs.cluster_image_arm64' finished after 4 minutes 2 seconds.
==> cluster_image.amazon-ebs.cluster_image_x64: Skipping Enable AMI deprecation...
==> cluster_image.amazon-ebs.cluster_image_x64: Adding tags to AMI (ami-096ad0dad5312dc12)...
==> cluster_image.amazon-ebs.cluster_image_x64: Tagging snapshot: snap-032b424e5e1668fa9
==> cluster_image.amazon-ebs.cluster_image_x64: Creating AMI tags
    cluster_image.amazon-ebs.cluster_image_x64: Adding tag: "project": "hashi_cluster_demo"
    cluster_image.amazon-ebs.cluster_image_x64: Adding tag: "contact": "thomas"
==> cluster_image.amazon-ebs.cluster_image_x64: Creating snapshot tags
==> cluster_image.amazon-ebs.cluster_image_x64: Terminating the source AWS instance...
==> cluster_image.amazon-ebs.cluster_image_x64: Cleaning up any extra volumes...
==> cluster_image.amazon-ebs.cluster_image_x64: No volumes to clean up, skipping
==> cluster_image.amazon-ebs.cluster_image_x64: Deleting temporary security group...
==> cluster_image.amazon-ebs.cluster_image_x64: Deleting temporary keypair...
Build 'cluster_image.amazon-ebs.cluster_image_x64' finished after 5 minutes 13 seconds.

==> Wait completed after 5 minutes 13 seconds

==> Builds finished. The artifacts of successful builds are:
--> cluster_image.amazon-ebs.cluster_image_arm64: AMIs were created:
us-west-2: ami-0f23be22378946cd4

--> cluster_image.amazon-ebs.cluster_image_x64: AMIs were created:
us-west-2: ami-096ad0dad5312dc12

Notice both images were created with tags applied, everything was cleaned up, and the AMIs are ready to go!
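If you want to smoke-test one, here is a hedged sketch of the launch command. The AMI ID is the arm64 artifact from the output above, and KEY_NAME is a placeholder for your own key pair; the command is echoed rather than executed so you can review it and add a security group before running it for real:

```shell
# Sketch: launch a test instance from the freshly built arm64 AMI.
# KEY_NAME is a placeholder - substitute your own key pair name.
ami="ami-0f23be22378946cd4"   # arm64 AMI from the build output above
cmd="aws ec2 run-instances --region us-west-2 --image-id ${ami} --instance-type t4g.nano --count 1 --key-name KEY_NAME"
echo "${cmd}"
```

Once it’s up, ssh admin@<public-ip> should work with the key we baked into the image.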

Wrapping Up

In this post, we’ve accomplished a great deal: defining and initiating our project, creating the AMIs for use, and setting up a basic configuration. Now every system we deploy with those AMIs will have our SSH key and have Vault installed. That’s a really good starting point for building a cloud infrastructure.

Until the time we Configure Vault,

  • Cheers