I haven't been blogging as much as I wanted to last month due to a very persistent cold that sidelined me for nearly two weeks. So this blog entry will be a long one!

Time to get back in the saddle and begin my blog's migration to AWS. My blog currently runs on a single VPS from Linode, with DNS handled by Namecheap. Obviously not very redundant: if the virtual machine my blog lives on dies, no one will be able to reach it. Also, setting up a naked domain (i.e. https://pafable.com) is a pain on Namecheap.

After recently transferring my domain from Namecheap to AWS Route 53, I'd like to take it a step further and use AWS completely for my blog.

What will change with this migration? Well, hopefully nothing. The content will stay the same, except for the version of Ghost; this migration will allow me to upgrade to newer versions quickly and easily by using containers. Migrating to AWS will touch on multiple AWS services to make a very reliable and fault-tolerant blog!

The AWS services I'll be using are ECS, EFS, ECR, RDS, Systems Manager Parameter Store, and Route 53. If you haven't noticed, I'm using ECS, which means I'll be containerizing my blog (wish me luck!). You could definitely accomplish the same setup with plain EC2 instances, but using containers allows me to push updated versions of Ghost quickly by simply changing the container image.

The New Setup

The AWS diagram below illustrates the new setup. ECS will be at the core of this environment. It will use RDS running Amazon Aurora instead of a self-managed MySQL database, ECR as the container registry, Parameter Store to handle sensitive information such as database credentials, and finally EFS to hold static content such as images.

Along with those, an application load balancer will direct traffic to the ECS cluster's auto scaling group. The auto scaling group makes the infrastructure resilient: if the EC2 instance running ECS is lost, a new instance can be provisioned automatically and service can continue. DNS will be handled by Route 53. To accommodate the naked domain, an alias will be created to route pafable.com to www.pafable.com.

For this new setup I'll be using my development domain - arandomproject.net. This will allow me to keep pafable.com up and running without affecting access to the current site.

Prep Work

Before we touch any containers, we'll need to do some prep work.

First up is setting up the Parameter Store for the database's credentials. Search for Systems Manager in the AWS services and select "Parameter Store".

Next you'll want to give the parameter a name. I'll first create a parameter for the database user and name it ghostdbuser. Following that, I'll select "SecureString" for the type. This option is important so that the value we store is encrypted by KMS. Once ghostdbuser is complete, click on "Create parameter".

Go ahead and create another parameter for the database password. The name of this parameter is ghostdbpass.
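If you prefer the command line, both parameters can also be created with the AWS CLI. A minimal sketch, with placeholder values you'd swap for your real credentials:

# Create the database credentials as encrypted SecureString parameters
aws ssm put-parameter --name ghostdbuser --type SecureString --value "exampleuser"
aws ssm put-parameter --name ghostdbpass --type SecureString --value "examplepassword"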

Now that the Parameter Store is all set, it's time to create a task role so that the container has permission to read the parameters.

Head on over to IAM in the console to set up the role. Click on "Roles" and then the "Create role" button. Select AWS service for the type of trusted entity, then select Elastic Container Service for the service that will use this role. For the use case, select "Elastic Container Service Task".

On the next page, click on the "Create policy" button and paste the following JSON. This policy allows the ECS task to pull the parameters from the Parameter Store.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeParameters",
                "ssm:GetParameterHistory",
                "ssm:GetParametersByPath",
                "ssm:GetParameters",
                "ssm:GetParameter"
            ],
            "Resource": "*"
        }
    ]
}
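If you'd rather script this part, the role can also be created from the AWS CLI. A minimal sketch, assuming the policy JSON above is saved as ssm-read-policy.json and the trust policy below as trust.json; myECStaskRole matches the role name I use later on:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ecs-tasks.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}

# Create the role with the ECS tasks trust policy, then attach the SSM read policy
aws iam create-role --role-name myECStaskRole --assume-role-policy-document file://trust.json
aws iam put-role-policy --role-name myECStaskRole --policy-name ssm-parameter-read --policy-document file://ssm-read-policy.json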

Before we set aside the database credentials, let's grab their ARNs, which we'll need later for the container environment variables. The easiest way to do this is with the AWS CLI. Use the command below to get the ARN of the database user, then run it again replacing ghostdbuser with ghostdbpass.

aws ssm get-parameters --names ghostdbuser

You should get an output similar to this. Save the "ARN"; we'll need it later.

{
    "InvalidParameters": [],
    "Parameters": [
        {
            "Name": "ghostdbuser",
            "LastModifiedDate": 1573665873.817,
            "Value": "AQICAHhUC1/EG7q2Twi2WH/Zv0kvV78sQIGBg1K/7t4Gaie5gAHkJl5FmOy2U+QxSl+6PiPqAAAAYzBhBgkqhkiG9w0BBwagVDBSAgEAME0GCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM+49hcCE85sDhtO0/AgEQgCDMl9rgaTbXlWq+Q13VcsYMVHydtd/UNm/0jw6tCM7tIQ==",
            "Version": 1,
            "Type": "SecureString",
            "ARN": "arn:aws:ssm:us-east-2:159695908353:parameter/ghostdbuser"
        }
    ]
}
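Notice the Value comes back encrypted; that's the SecureString type doing its job. If you ever need to verify the plaintext, you can have KMS decrypt it on the way out:

aws ssm get-parameter --name ghostdbuser --with-decryption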

Setting up RDS

Now we can move on to the more interesting parts! Navigate over to RDS and click on the create database button.

Select or fill out the following options for your RDS.

  • Engine options: Amazon Aurora
  • Edition: Amazon Aurora with MySQL compatibility
  • Version: Aurora (MySQL)-5.6.10a
  • Database Location: Regional
  • Database features: One writer and multiple readers
  • Templates: Production or Dev/Test  

For templates I chose Dev/Test because the Production template will set up a multi-AZ configuration. Since this blog doesn't produce any income... yet, I'm comfortable with a Dev/Test environment; the Production option will increase expenses. However, if you're going to use this database for a mission-critical business application, choose the Production template!

The next section asks for the DB name, master username, and password. For the master username and password, use the same values you created in the Parameter Store.

For the DB instance size I went with a db.t2.small. If you have bigger needs you may choose something larger, but for a Ghost blog this size is plenty.
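For the CLI-inclined, the same Aurora setup can be sketched out like this. The identifiers and credentials below are placeholders; in practice you'd use the values from the Parameter Store:

# Create the Aurora (MySQL 5.6-compatible) cluster
aws rds create-db-cluster \
    --db-cluster-identifier ghost-blog-cluster \
    --engine aurora \
    --database-name ghostdb \
    --master-username exampleuser \
    --master-user-password examplepassword

# Add a db.t2.small writer instance to the cluster
aws rds create-db-instance \
    --db-instance-identifier ghost-blog-writer \
    --db-cluster-identifier ghost-blog-cluster \
    --db-instance-class db.t2.small \
    --engine aurora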

Create an EFS

The next major component to create is the EFS share. It will hold the static images for my blog. Head over to EFS and click on the "Create file system" button.

  • VPC: Select your own VPC or use the default.
  • Mount targets: Select the availability zones you want to be able to reach the EFS share.
  • Tag: Give your EFS a name, mine is named Ghostblog-EFS
  • Lifecycle policy: None
  • Throughput mode: Bursting
  • Performance mode: General purpose
  • Enable encryption: unchecked

Review the configuration and then click create.

Once it's created, you can use the commands below to mount the EFS share to the ECS instance (be patient, the ECS cluster is not created yet).

# Install the EFS mount helper, create a mount point, and mount the share
sudo yum install -y amazon-efs-utils
sudo mkdir /efs
sudo mount -t efs fs-ef526896:/ /efs
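Keep in mind a manual mount like this won't survive a reboot. If you want the share to come back on its own, an /etc/fstab entry along these lines should do it (same file system ID as above):

# /etc/fstab entry so the share remounts on boot
fs-ef526896:/ /efs efs defaults,_netdev 0 0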

Using ECR

The next step in the process is to setup AWS's Elastic Container Registry (ECR). ECR will hold the container image that will be used by ECS. You can think of ECR as a private container registry like Docker Hub.

Go to ECR and create a new repository. Mine is called ghost-blog.

To push container images to this registry from your local machine, you'll need the AWS CLI and Docker installed.

1. Retrieve the login credentials for ECR using the AWS CLI. Once logged in, these credentials are good for 12 hours.
$(aws ecr get-login --no-include-email --region us-east-2)

2. Build your image.

First, download the latest Alpine image of Ghost using docker pull on your local machine. Alpine is a stripped-down Linux distribution that holds only the components necessary to run Ghost.

docker pull ghost:alpine

The command above pulls the Ghost Alpine image from Docker Hub. From this image, I'll build my own container image and push it to my newly created ECR repository.

Create a simple Dockerfile, specifying the image name and maintainer.

Dockerfile:

FROM ghost:alpine

MAINTAINER Phil Afable

Go ahead and build the image. I like to give my images version numbers; if you don't specify one, Docker will assign the "latest" tag.

docker build -t ghost-blog:1.0 .

Next you're going to tag the image and prepare it to be pushed to the repository.

docker tag ghost-blog:1.0 473051755120.dkr.ecr.us-east-2.amazonaws.com/ghost-blog:1.0

3. Push the container image to ECR

Once you have the container image created and tagged, go ahead and push it to your ECR repository.

docker push 473051755120.dkr.ecr.us-east-2.amazonaws.com/ghost-blog:1.0
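To confirm the push made it, you can list what's in the repository:

aws ecr describe-images --repository-name ghost-blog --region us-east-2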

Configuring ECS

Now comes the fun part! ECS will be the orchestration service that will run our container. Those of you familiar with Kubernetes will find this very similar.

1. Creating an ECS Cluster

Go to the ECS service and click on "Create Cluster". For the cluster template, select "EC2 Linux + Networking" and then click on "Next step".

In step 2, enter or select the following options.

  • Cluster name: <Enter your own name>
  • Provisioning Model: On-Demand Instance
  • EC2 instance type: t2.micro (depending on your traffic, you can change this instance type)
  • Enable T2 unlimited: checked
  • Number of instances: 1
  • EC2 AMI ID: default Amazon Linux 2 AMI
  • EBS storage: 22 GiB

For the networking section, select your VPC, subnets, and security groups. Under the tags section, it's a good idea to list the owner of the cluster. After all of that, you can go ahead and click on the create button.

If you open the console for EC2 instances, you should see your ECS instance up and running.
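You can also verify from the CLI that the instance registered with the cluster; the cluster name below is a placeholder for whatever you named yours:

# registeredContainerInstancesCount should show 1 once the host joins
aws ecs describe-clusters --clusters my-ghost-cluster
aws ecs list-container-instances --cluster my-ghost-cluster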

This is a perfect time to SSH into the instance and mount the EFS share. To do this, run the commands mentioned earlier when we created the EFS share.

# Install the EFS mount helper, create a mount point, and mount the share
sudo yum install -y amazon-efs-utils
sudo mkdir /efs
sudo mount -t efs fs-ef526896:/ /efs

Now that the cluster is up and running, you may have noticed how few resources it needs; I'm using a t2.micro instance type! Like I mentioned before, if you expect far more traffic or your use case dictates more containers, I highly suggest you bump up the instance type.

2. Create a task definition

If you've made it this far without a break, I commend your dedication, but now I urge you to take one, because from here on it can get quite intense!

Within the ECS service, click on "Task Definitions" and then the "Create new Task Definition" button. Task definitions tell ECS how to run the container; you can think of them as similar to docker-compose files or Kubernetes manifests.

In the new window that opens, choose EC2 for the launch type.

In step 2, go ahead and give the task definition a name. Select the role you created in the prep work section. The role I created earlier is called myECStaskRole.

Before you add a container, scroll down to the volumes section and add the EFS directory on the EC2 instance: /efs/ghost/content/images.

Give it a name and specify the source path.

Next it's time to add a container! Give the container a name and specify the image by copying the container image's URI from ECR. For the memory limits, set a hard limit of 300 MiB. Follow that up by configuring the port mappings. Map the host's port 3306 to the container's port 3306 so the container can talk to the Aurora database, then map the host's port 80 to the container's port 2368. Port 2368 is the default port used by Ghost.

Scroll further down and you should see the options for environment variables. This is where things can get dicey if you're not using a credential/secrets management tool. In my case, I'm using Systems Manager Parameter Store to handle the database credentials so that they're not stored in plain text.

I'll be using the following environment variables:

  • database__client
  • database__connection__database
  • database__connection__host
  • NODE_ENV
  • url
  • database__connection__password
  • database__connection__user

Notice that for database__connection__password and database__connection__user I have set the value dropdown to "ValueFrom". This is important because it allows the container to grab the creds from Parameter Store. The values for these two environment variables are the ARNs we retrieved earlier in the prep work.
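For reference, here's roughly what that part of the container definition looks like in JSON. This is a sketch rather than my exact definition; the database name, Aurora endpoint, and url values are placeholders you'd swap for your own:

"environment": [
    { "name": "database__client", "value": "mysql" },
    { "name": "database__connection__database", "value": "<your-db-name>" },
    { "name": "database__connection__host", "value": "<your-aurora-endpoint>" },
    { "name": "NODE_ENV", "value": "production" },
    { "name": "url", "value": "http://arandomproject.net" }
],
"secrets": [
    { "name": "database__connection__user", "valueFrom": "arn:aws:ssm:us-east-2:159695908353:parameter/ghostdbuser" },
    { "name": "database__connection__password", "valueFrom": "arn:aws:ssm:us-east-2:159695908353:parameter/ghostdbpass" }
]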

Edit the mount points and select "Images" as the source volume. For the container path, specify /var/lib/ghost/content/images. After all of that, you can click on the create button and wrap up the task definition phase.

3. Creating the ECS Cluster Services

Click into your cluster and click on the create button.

Check off EC2 as the launch type, give your service a name, and set the number of tasks to 1. You can leave the rest as default.

As you can see from my revision count, I messed up 5 times before getting it right on the 6th revision. You should only have 1 revision.

For steps 2, 3, and 4 you can click through these accepting the default values.

Once the service is set up, wait a couple of seconds and you should see a new task being instantiated and then running in the tasks tab. As soon as the task is running, the site should be operational.

Depending on how you set up your security groups, subnets, and VPC, you may be able to access the Ghost site using the ECS instance's public IP. In my case, I block direct port 80 access to the ECS instance and only allow port 80 traffic from the load balancer.
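A quick sanity check from the CLI (the cluster and service names are placeholders for your own):

# runningCount should reach 1 once the task settles
aws ecs describe-services --cluster my-ghost-cluster --services ghost-blog-service

# If your security group allows it, Ghost answers on the instance's public IP
curl -I http://<ecs-instance-public-ip>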

Load Balancer Setup

1. Finding the auto scaling group

First we'll need to know the auto scaling group associated with our ECS cluster. Head on over to the EC2 console and look for the auto scaling group name in the tags tab. Save that name; we'll need it later when we associate the target group with the auto scaling group.

2. Creating a Target Group

In the EC2 console, click on "Target Groups" under Load Balancing in the left navigation bar. Click on the "Create Target Group" button and on the next page provide the following information:

  • Target group name
  • Target type: Instance
  • Protocol: HTTP
  • Port: 80
  • VPC: Select your VPC

You can keep the rest as default and then click the create button.

Now go to "Auto Scaling Groups" in the navigation bar on the left and search for the auto scaling group containing the ECS cluster. Go to the details tab and click on the edit button.

In the window that appears, search for "Target Groups", add the target group associated with the ECS cluster, and then click on the save button.
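The same attachment works from the CLI as well; both values below are placeholders for your own auto scaling group name and target group ARN:

aws autoscaling attach-load-balancer-target-groups \
    --auto-scaling-group-name <ecs-cluster-asg-name> \
    --target-group-arns <target-group-arn>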

Great, now that the auto scaling group and target group are squared away, let's move on and create the actual load balancer.

3. Creating the Load Balancer

On the navigation column on the left click on "Load Balancers" and then click on "Create Load Balancer".

Select the Application Load Balancer for the type.

Give your load balancer a name and select your VPC and subnets.

In step 3, select a security group that has port 80 access to the ECS instance.

Next, in step 4, select the target group you created earlier and proceed to step 5. You should see the targets register in step 5. After all that you're done!!!

Now grab the ALB's DNS name and you should be greeted by your Ghost blog.

All that's left to do is go into Route 53 and create an alias record routing your domain to the ALB's DNS name.
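If you'd rather script the DNS piece, an alias record can be created with the CLI. A sketch, with placeholders for the ALB name and hosted zone IDs; note the alias needs the ALB's canonical hosted zone ID, not your domain's:

# Grab the ALB's DNS name and its canonical hosted zone ID
aws elbv2 describe-load-balancers --names <your-alb-name> \
    --query 'LoadBalancers[0].[DNSName,CanonicalHostedZoneId]'

{
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "arandomproject.net",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "<alb-canonical-hosted-zone-id>",
                "DNSName": "<alb-dns-name>",
                "EvaluateTargetHealth": false
            }
        }
    }]
}

# Apply the change (record.json is the JSON above)
aws route53 change-resource-record-sets \
    --hosted-zone-id <your-domain-hosted-zone-id> \
    --change-batch file://record.json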

As soon as the Route 53 record is in place, you can use your site's domain name to access it. As you can see below, I've already begun importing the data from my old VPS to AWS.

Where to go from here...

Now that the minimum viable product is up and running, what can I do to further enhance my site? For one thing, I need to set up an SSL certificate so that I can enable HTTPS connections. I'll most likely use AWS Certificate Manager to create a cert and apply it to my load balancer. I'm also planning to leverage Ghost's API and create a script in AWS Lambda to take routine exports of my blog's content, which will function as a secondary backup.
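Requesting that certificate should be a one-liner when the time comes; DNS validation pairs nicely with Route 53 (shown here for my dev domain):

aws acm request-certificate \
    --domain-name arandomproject.net \
    --subject-alternative-names www.arandomproject.net \
    --validation-method DNS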

However, for now I'll enjoy http://arandomproject.net, bask in its glory, and run further load tests on the ECS instance until I cut over the DNS entry for https://pafable.com. At the time of this writing (Nov 2019), http://arandomproject.net is still active; I'll probably cut over pafable.com at the end of the month.

How much does this setup cost? As of right now it's on track to be roughly $40 a month; I'll know the exact figure at the end of the month. I'm warning you all now, this type of setup is 4 times more expensive than running a VPS on Linode, but it gives me a solid backup in case I run into irrecoverable issues. Also, for those of you running WordPress-based blogs: yes, this can be done with WordPress as well. I'm not familiar with the environment variables for WordPress, but it will be very similar.

Lastly, I'll automate the deployment of new versions of Ghost by implementing a deployment pipeline. As soon as I push to the repository, I want ECS to update the task definition so the container runs the newest image from the repository.