HashiCorp's Packer is an open source tool for creating machine images using infrastructure as code. What this allows me to do is create custom images with my application code already loaded into them.

If you remember from my GitLab CI/CD posts (part 1 and part 2), I was pulling the application code from the repo by baking the SSH keys into the AMI. Clearly not a very elegant way of getting code into my instances.

Packer lets me bake the code straight into the image instead, and it can build images not only for AWS, but for other vendors as well!

Well, for my final post of 2019, I'll show you all how to create an AMI using Packer!


Installing Packer

Download Packer from here. I will be using the macOS version, but the installation is quite similar for all platforms.
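If you prefer the terminal, the zip can also be pulled from the HashiCorp releases site; the URL below assumes the 1.5.1 macOS build used in this walkthrough.

curl -O https://releases.hashicorp.com/packer/1.5.1/packer_1.5.1_darwin_amd64.zip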

  1. Unpack the packer binary.
unzip packer_1.5.1_darwin_amd64.zip

2. Create a symlink pointing to the packer binary's location. On macOS, /usr/bin is protected by SIP, so link into /usr/local/bin (or any other directory on your PATH) instead. The path below assumes the binary was unpacked into ~/Documents.

ln -s ~/Documents/packer /usr/local/bin/packer

3. Verify packer installed correctly.

Type the following and press enter:

packer version

You should get output showing the installed Packer version.
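With the 1.5.1 binary from this walkthrough, it should look something like this:

Packer v1.5.1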


Creating an AMI

Now that Packer is installed, it's time to finally use it! Okay, where to begin? That's easy: I'll consult the official documentation. A Packer template is written in JSON. To get a minimum viable image, two sections are needed: builders and provisioners.

The builders section tells Packer details such as which region, AWS access and secret keys, AMI name, and so on to apply to the AMI that will be created.

The provisioners section will install the necessary software and configure the OS to my specifications. My goal is a new directory called /appl to hold all of my application's files; in the example below the files first land in /tmp/testdir, and I'll note afterwards how to move them into place.

  1. Variables

The first section I'll create is for the variables. This section is needed so that I can pass the AWS access and secret keys without hard-coding them into the Packer file.

{
  "variables": {
    "aws_access_key": "{{env `aws_access_key`}}",
    "aws_secret_key": "{{env `aws_secret_key`}}"
  }
}
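The {{env ...}} defaults mean the keys can also come straight from the shell environment. A minimal sketch, with placeholder values, using the lowercase variable names from the template:

export aws_access_key=<YOUR_ACCESS_KEY>
export aws_secret_key=<YOUR_SECRET_KEY>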

2. Builders

The next section I'll add is the builders section. As mentioned earlier, the builder is responsible for creating an instance and then creating an image from that instance.

I will be using the amazon-ebs builder, which creates a new AMI based on an existing AMI. The source AMI I will use is a standard ECS-optimized image in the us-east-2 region. The eight options I'll be using are:

  • access_key
  • ami_name
  • instance_type
  • region
  • secret_key
  • source_ami
  • ssh_username
  • type
{
  "variables": {
    "aws_access_key": "{{env `aws_access_key`}}",
    "aws_secret_key": "{{env `aws_secret_key`}}"
  },
  "builders": [
    {
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "ami_name": "test-ami-{{timestamp}}",
      "instance_type": "t2.micro",
      "region": "us-east-2",
      "source_ami": "ami-01a7c6aed63b6014f",
      "ssh_username": "ec2-user",
      "type": "amazon-ebs"
    }
  ]
}

The values in the builders section are understandable at a quick glance. However, I'm using a built-in function, {{timestamp}}, which appends the current Unix timestamp (UTC) to the AMI name so every build gets a unique name. Along with the timestamp function, I'm passing in variables for the access and secret keys; this is denoted by {{user `variable_name`}}.
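Even with only the variables and builders in place, the template can be sanity-checked. packer validate checks syntax and configuration without building anything, and it accepts the same -var flags as packer build. Assuming the template is saved as test-ami.json (the name used at the end of this post):

packer validate test-ami.json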

3. Provisioners

Time to set up the last section – provisioners! Provisioners modify the instance once it's created by the builders section. This will allow me to install dependencies and copy my application files to the instance.

Alright, here is how I want to deploy my app. I'll be using the file and shell provisioners. The file provisioner will upload a local directory named testdir, which holds my application files, to the instance.

The shell provisioner will allow me to execute a shell script. The shell script will download updates and install any dependency packages for my application.

{
  "variables": {
    "aws_access_key": "{{env `aws_access_key`}}",
    "aws_secret_key": "{{env `aws_secret_key`}}"
  },
  "builders": [
    {
      "access_key": "{{user `aws_access_key`}}",
      "ami_name": "test-ami-{{timestamp}}",
      "instance_type": "t2.micro",
      "region": "us-east-2",
      "secret_key": "{{user `aws_secret_key`}}",
      "source_ami": "ami-01a7c6aed63b6014f",
      "ssh_username": "ec2-user",
      "type": "amazon-ebs"
    }
  ],
  "provisioners": [
    {
      "destination": "/tmp/testdir",
      "source": "testdir",
      "type": "file"
    },
    {
      "script": "init.sh",
      "type": "shell"
    }
  ]
}
test-ami.json
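The file provisioner can only write where the SSH user has permission, which is why the files land in /tmp/testdir. To get them into the /appl directory mentioned earlier, one option (a sketch, not part of the template above) is an extra shell provisioner with inline commands:

    {
      "type": "shell",
      "inline": [
        "sudo mkdir -p /appl",
        "sudo cp -r /tmp/testdir/. /appl/"
      ]
    }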

Below is the shell script that will be executed. It creates a text file with the content "This image was created by Phil Afable 12/2019" in the /home/ec2-user directory. Next, I want to see where on the target instance the script runs, so pwd will print the working directory and ls -la will list all of its contents. After the positional awareness commands, I'll download the latest package updates. Finally, it's time to install the wget and amazon-efs-utils packages.

#!/bin/sh

# Leave a marker file in the ec2-user home directory
echo "This image was created by Phil Afable 12/2019" >> /home/ec2-user/hello.txt

# Show where the script is running and what is already there
pwd
ls -la

# Apply the latest package updates, then install application dependencies
sudo yum update -y && sudo yum upgrade -y
sudo yum install -y wget amazon-efs-utils
init.sh

Executing the Packer Build Command

Execute the packer command below, supplying the variables and the template file. The variables I'll be passing in on the command line are the AWS access and secret keys (these override the environment-variable defaults in the template).

packer build -var "aws_access_key=<YOUR_ACCESS_KEY>" -var "aws_secret_key=<YOUR_SECRET_KEY>" test-ami.json

You will get a lengthy output, but this is good for debugging if something goes wrong.

I have trimmed some lines below for brevity; yours will be much longer. As you can see, the present working directory is printed along with its contents, and then the necessary packages are installed onto the image.

amazon-ebs: output will be in this color.

==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Uploading testdir => /tmp/testdir
==> amazon-ebs: Provisioning with shell script: init.sh
    amazon-ebs: /home/ec2-user
    amazon-ebs: total 28
    amazon-ebs: drwx------ 3 ec2-user ec2-user 4096 Dec 29 22:11 .
    amazon-ebs: drwxr-xr-x 3 root     root     4096 Dec 29 22:11 ..
    amazon-ebs: -rw-r--r-- 1 ec2-user ec2-user   18 Jul 27  2018 .bash_logout
    amazon-ebs: -rw-r--r-- 1 ec2-user ec2-user  193 Jul 27  2018 .bash_profile
    amazon-ebs: -rw-r--r-- 1 ec2-user ec2-user  231 Jul 27  2018 .bashrc
    amazon-ebs: -rw-rw-r-- 1 ec2-user ec2-user   46 Dec 29 22:11 date.txt
    amazon-ebs: drwx------ 2 ec2-user ec2-user 4096 Dec 29 22:11 .ssh

    amazon-ebs: ---> Package libidn2.x86_64 0:2.3.0-1.amzn2 will be an update
    amazon-ebs: --> Finished Dependency Resolution
    amazon-ebs:
    amazon-ebs: Dependencies Resolved
    amazon-ebs:
    amazon-ebs: ================================================================================
    amazon-ebs:  Package            Arch      Version                       Repository     Size
    amazon-ebs: ================================================================================
    amazon-ebs: Updating:
    amazon-ebs:  ca-certificates    noarch    2018.2.22-70.0.amzn2.0.1      amzn2-core    392 k
    amazon-ebs:  file               x86_64    5.11-35.amzn2.0.2             amzn2-core     57 k
    amazon-ebs:  file-libs          x86_64    5.11-35.amzn2.0.2             amzn2-core    339 k
    amazon-ebs:  krb5-libs          x86_64    1.15.1-37.amzn2.2.1           amzn2-core    759 k
    amazon-ebs:  libidn2            x86_64    2.3.0-1.amzn2                 amzn2-core    140 k
    amazon-ebs:
    amazon-ebs: Transaction Summary
    amazon-ebs: ================================================================================
    amazon-ebs: Upgrade  5 Packages
    amazon-ebs:
    amazon-ebs: Total download size: 1.6 M
    amazon-ebs: Downloading packages:
    amazon-ebs: Delta RPMs disabled because /usr/bin/applydeltarpm not installed.

Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:

Once it finishes, check your AWS Console and test the AMI by spinning up a new instance!
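If you'd rather test from the command line, something like the AWS CLI call below works; the AMI ID and key pair name are placeholders you'd take from the build output and your own account.

aws ec2 run-instances --region us-east-2 --image-id <YOUR_NEW_AMI_ID> --instance-type t2.micro --key-name <YOUR_KEY_PAIR>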

That's it, and that wraps up my 2019. See you all in 2020!