Updating and building container images is a chore. Speaking from my experience managing this website on containers, I hate having to create new container images and upload them to ECR.

I wish there were a way to automatically pull down the latest container image every month... well, there is!

It is rather simple and borrows the same principles and practices as baking AMIs. In today's tutorial, I will show you how to set up an automated pipeline that builds the container image and pushes it to ECR!


Overview

image

This entire setup will utilize CodePipeline, with CloudWatch Events, CodeBuild, ECR, and ECS providing support. If you're curious about setting up your own pipeline, you can view my previous post here for instructions.


Project Files

Below are the files needed for this pipeline: a JSON task definition template needed by ECS, a Dockerfile for Docker to create the container image, and a buildspec for CodeBuild.

To create a blank template file, execute the following command: aws ecs register-task-definition --generate-cli-skeleton. Some sections of the JSON file are not strictly needed and can be omitted; however, if you omit them, ECS will fall back to its default settings.

For example, if you omit "readonlyRootFilesystem": false, the setting "readonlyRootFilesystem": true will be used instead, which will prevent you from uploading pictures from the Ghost UI.
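To get started, you can dump the skeleton straight into the td.json file that the buildspec references later and trim it down from there:

# Generate a blank task definition skeleton and save it as td.json
aws ecs register-task-definition --generate-cli-skeleton > td.json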

Task Definition JSON

{
    "family": "ghost-blog-task-definition",
    "taskRoleArn": "arn:aws:iam::999999999999:role/myECStaskRole",
    "executionRoleArn": "arn:aws:iam::999999999999:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "ghost-container",
            "image": "999999999999.dkr.ecr.us-east-2.amazonaws.com/pafableblog:3.0.5",
            "cpu": 0,
            "memory": 200,
            "portMappings": [
                {
                    "containerPort": 2368,
                    "hostPort": 0,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [
                {
                    "name": "database__client",
                    "value": "mysql"
                },
                {
                    "name": "db__conn",
                    "value": "gh_01"
                },
                {
                    "name": "db__conn__01",
                    "value": "test-1.us-east-2.rds.amazonaws.com"
                },
                {
                    "name": "NODE_ENV",
                    "value": "production"
                },
                {
                    "name": "url",
                    "value": "http://pafable.com"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "Images",
                    "containerPath": "/var/lib/ghost/content/images",
                    "readOnly": false
                }
            ],
            "secrets": [
                {
                    "name": "db__conn__pw",
                    "valueFrom": "arn:aws:ssm:eu-west-1:999999999999:parameter/pw"
                },
                {
                    "name": "db-conn-ur",
                    "valueFrom": "arn:aws:ssm:eu-west-1:999999999999:parameter/ur"
                }
            ],
            "startTimeout": 2,
            "stopTimeout": 2,
            "disableNetworking": false,
            "privileged": true,
            "readonlyRootFilesystem": false,
            "interactive": false,
            "pseudoTerminal": true
        }
    ],
    "volumes": [
        {
            "name": "Images",
            "host": {
                "sourcePath": "/efs/ghost/content/images"
            }
        }
    ],
    "requiresCompatibilities": [
        "EC2"
    ],
    "cpu": "256",
    "memory": "256",
    "tags": [
        {
            "key": "owner",
            "value": "pafable"
        },
        {
            "key": "env",
            "value": "prod"
        }
    ],
    "ipcMode": "none"
}
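If you want to sanity-check the JSON before the pipeline ever touches it, you can register it once by hand with the same command the buildspec runs later, then read back the latest revision:

# One-off manual registration of the task definition
aws ecs register-task-definition --cli-input-json file://td.json

# Confirm the latest revision for the family
aws ecs describe-task-definition --task-definition ghost-blog-task-definition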

My Dockerfile below is super simple. I don't make any modifications to the container image other than setting the maintainer.

Dockerfile

FROM ghost:alpine

# MAINTAINER is deprecated in newer Docker releases; a label is the modern equivalent
LABEL maintainer="Phil Afable"

Here is the true magic of this pipeline! The buildspec file tells CodeBuild how to build the container image, push it to ECR, and then create a new revision of the task definition. It's all standard bash commands utilizing the AWS CLI.

Buildspec.yml

version: 0.2

phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - VERSION=3.0.5
      # Pull the latest base image so the build always starts from the newest ghost:alpine
      - docker pull ghost:alpine
      # Look up the ECR repository URI, log in to the registry, and build the full image URI
      - ECR_URI=`aws ecr describe-repositories --repository-names pafableblog --region us-east-2 | jq '.repositories[].repositoryUri' | tr -d '"'`
      - DOCKER_LOGIN=`aws ecr get-login --no-include-email --region us-east-2`
      - ${DOCKER_LOGIN}
      - IMAGE_URI=${ECR_URI}:${VERSION}

  build:
    commands:
      # Build the container image and tag it with the ECR repository URI
      - docker build -t pafableblog:${VERSION} .
      - docker tag pafableblog:${VERSION} ${IMAGE_URI}

  post_build:
    commands:
      # Push the image to ECR and register a new task definition revision
      - docker push ${IMAGE_URI}
      - aws ecs register-task-definition --cli-input-json file://td.json
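One note on the login step: aws ecr get-login was removed in AWS CLI v2, so if your build environment ships v2, you would swap those two pre_build lines for the newer command (the registry host below is the same placeholder account used throughout this post):

# AWS CLI v2 equivalent of the deprecated get-login step
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 999999999999.dkr.ecr.us-east-2.amazonaws.com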

Setting up CodePipeline

Okay, so for the source I will select AWS CodeCommit as the action provider. I'll use the repository I created in CodeCommit named pafable-golden-docker-image. Lastly for this stage, I'll select CloudWatch Events under "Change detection options".

image2

Next, I'll add a manual approval stage. This stage will send an email to me, or anyone else I assign as an approver, asking to approve pushing the new container to ECR and registering a new task definition in ECS.

image2

The final stage of the pipeline will be CodeBuild.

image4


Testing Time!

Time for my favorite part of the process: the testing phase! To kick this build off, commit and push your changes.
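From the pafable-golden-docker-image repo, that just means the usual git routine (assuming your default branch is master):

# Stage the pipeline files, commit, and push to CodeCommit
git add Dockerfile td.json buildspec.yml
git commit -m "Monthly ghost image refresh"
git push origin master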

The commit should trigger the pipeline, since CloudWatch Events detects the change in the repository. You should get an email with a link to approve or reject the build.

image4

Great! It worked without any issues. Verify you see a new container image in your container repository in ECR. Mine will have a tag of 3.0.5.
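If you'd rather verify from the command line than the console, listing the image tags in the repository works just as well:

# List the tags in the pafableblog repository; 3.0.5 should appear in the output
aws ecr describe-images --repository-name pafableblog --region us-east-2 --query 'imageDetails[].imageTags'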

If you see it, then it worked correctly! If not, review the CodeBuild logs. If you haven't enabled CodeBuild logs to be sent to CloudWatch, go into the CodeBuild service and edit your project.

Select the logs option and check the box for "CloudWatch logs - optional".

codebuildlogs


Setting up a Schedule on CloudWatch Events

Head into the CloudWatch service and click on Rules under "Events". Here I'll tweak the rule for the pipeline. Some months I won't make any edits to my container image, but I still want the latest base image, so I'll set a monthly cadence on which this pipeline will be executed.

To do this, I'll be using a cron expression. You can read about the syntax here. My cron expression will execute the pipeline on the 1st of the month at 5 AM GMT, which is 12 AM EST.

Cron expression: 0 5 1 * ? *
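The console is the easy way to set this up, but the same schedule can be created from the CLI. The rule name, pipeline name, and role ARN below are placeholders; substitute whatever you named yours:

# Create a rule that fires at 5 AM GMT on the 1st of every month
aws events put-rule --name monthly-ghost-image-build --schedule-expression "cron(0 5 1 * ? *)" --region us-east-2

# Point the rule at the pipeline; the role needs permission to start the pipeline execution
aws events put-targets --rule monthly-ghost-image-build --region us-east-2 --targets "Id"="1","Arn"="arn:aws:codepipeline:us-east-2:999999999999:ghost-image-pipeline","RoleArn"="arn:aws:iam::999999999999:role/cwe-start-pipeline"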

image5


Wrapping up

Alright, so this pipeline is finally fully automated. It will kick itself off every month without any intervention from me. In the future, the final step of the process will be to apply the new task definition to my ECS service automatically; that step is still manual until I feel more comfortable updating the ECS service via the command line.
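For reference, the CLI version of that manual step would look something like this (the cluster and service names are stand-ins for my real ones):

# Point the ECS service at the newest revision of the task definition family
aws ecs update-service --cluster my-ecs-cluster --service ghost-blog-service --task-definition ghost-blog-task-definition --region us-east-2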

Eventually even the ECS service update will be automated! However, for now, this is a giant leap and saves me a few minutes of mashing my keyboard to create a container image.