Alright! If you've been following along from the last post, I've set up a pipeline to build a Flask-based website and deploy it using Elastic Beanstalk (EB). EB is a great service for standing up a single EC2 instance to run my website. However, it doesn't let you tweak the underlying load balancers, autoscaling groups, etc.

Don't worry, there's a solution: CloudFormation. What is CloudFormation? It's AWS' infrastructure provisioning service, an infrastructure-as-code offering that lets an operator quickly spin up an entire environment from JSON or YAML templates.
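
To give a quick taste of the workflow before we dive in, here's a minimal sketch of launching a stack from the AWS CLI (the stack and file names here are placeholders):

# Create a stack from a local template file
aws cloudformation create-stack \
    --stack-name my-demo-stack \
    --template-body file://template.yml

# Watch its status while it builds
aws cloudformation describe-stacks \
    --stack-name my-demo-stack \
    --query 'Stacks[0].StackStatus'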

Getting Started

The first step is to make some changes to the layout of the project. I've decided to use autoscaling groups instead of a single EC2 instance. This gives the site a great deal of resiliency in a catastrophic event (i.e. an instance gets terminated), since the group will automatically replace the lost instance.

I'll also set up two environments, one for production and the other for development. Both environments will run in parallel, each with its own autoscaling group.

Creating the CloudFormation Templates

The first change is to create two CloudFormation templates: one named dev.yml and the other prod.yml.

They're virtually identical in structure and content, differing only in the names of the launch configuration, autoscaling group, and S3 bucket.

dev.yml

# version 2019.10.04
AWSTemplateFormatVersion: '2010-09-09'

Mappings:
  AwsRegionAmi:
    us-east-1:
      AMI: ami-0b69ea66ff7391e80
    us-east-2:
      AMI: ami-00c03f7f7f2ec15c3

Resources:
  myInstanceProfile: 
    Type: AWS::IAM::InstanceProfile
    Properties: 
      Roles: 
        - EC2S3RO

  myLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      LaunchConfigurationName: cf-created-standard-dev
      KeyName: mycf
      SecurityGroups: 
        - sg-05ee0a7e156531a0a
      InstanceType: t2.micro
      ImageId: !FindInMap [AwsRegionAmi, !Ref 'AWS::Region', AMI]
      BlockDeviceMappings: 
        - DeviceName: "/dev/xvda"
          Ebs:
            VolumeSize: 10
            VolumeType: gp2
      IamInstanceProfile: !Ref myInstanceProfile
      UserData:
        Fn::Base64: |
          #!/bin/bash
          # Locate today's build artifact in the pipeline's S3 bucket
          DATE=$(date +%Y-%m-%d)
          ARTIFACT=$(aws s3 ls s3://codepipeline-us-east-2-710251686107/hello-kitty-ASG/BuildArtif/ | grep ${DATE} | awk '{print $4}')
          # Update the OS and install some handy tools
          yum update -y
          yum install epel-release vim lynx tcpdump tmux -y
          echo "hello lol12345" > /tmp/lol.txt
          # Download and unpack the artifact, then launch the Flask app
          mkdir /flask
          aws s3 cp s3://codepipeline-us-east-2-710251686107/hello-kitty-ASG/BuildArtif/${ARTIFACT} /flask
          unzip /flask/${ARTIFACT} -d /flask
          python /flask/application.py
    
  myASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AutoScalingGroupName: myCfAsg-dev
      AvailabilityZones: 
        Fn::GetAZs: 
          Ref: "AWS::Region"
      LaunchConfigurationName: !Ref myLaunchConfig
      DesiredCapacity: "1"
      MinSize: "1"
      MaxSize: "3"
      Tags:
        - Key: Environment 
          Value: Dev
          PropagateAtLaunch: "true"
        - Key: Name 
          Value: DevInstance
          PropagateAtLaunch: "true"
  
  myS3bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: devs3bucket001

prod.yml

# version 2019.10.04
AWSTemplateFormatVersion: '2010-09-09'

Mappings:
  AwsRegionAmi:
    us-east-1:
      AMI: ami-0b69ea66ff7391e80
    us-east-2:
      AMI: ami-00c03f7f7f2ec15c3

Resources:
  myInstanceProfile: 
    Type: AWS::IAM::InstanceProfile
    Properties: 
      Roles: 
        - EC2S3RO

  myLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      LaunchConfigurationName: cf-created-standard-prod
      KeyName: mycf
      SecurityGroups: 
        - sg-05ee0a7e156531a0a
      InstanceType: t2.micro
      ImageId: !FindInMap [AwsRegionAmi, !Ref 'AWS::Region', AMI]
      BlockDeviceMappings: 
        - DeviceName: "/dev/xvda"
          Ebs:
            VolumeSize: 10
            VolumeType: gp2
      IamInstanceProfile: !Ref myInstanceProfile
      UserData:
        Fn::Base64: |
          #!/bin/bash
          # Locate today's build artifact in the pipeline's S3 bucket
          DATE=$(date +%Y-%m-%d)
          ARTIFACT=$(aws s3 ls s3://codepipeline-us-east-2-710251686107/hello-kitty-ASG/BuildArtif/ | grep ${DATE} | awk '{print $4}')
          # Update the OS and install some handy tools
          yum update -y
          yum install epel-release vim lynx tcpdump tmux -y
          echo "hello lol12345" > /tmp/lol.txt
          # Download and unpack the artifact, then launch the Flask app
          mkdir /flask
          aws s3 cp s3://codepipeline-us-east-2-710251686107/hello-kitty-ASG/BuildArtif/${ARTIFACT} /flask
          unzip /flask/${ARTIFACT} -d /flask
          python /flask/application.py
    
  myASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AutoScalingGroupName: myCfAsg-prod
      AvailabilityZones: 
        Fn::GetAZs: 
          Ref: "AWS::Region"
      LaunchConfigurationName: !Ref myLaunchConfig
      DesiredCapacity: "2"
      MinSize: "1"
      MaxSize: "3"
      Tags:
        - Key: Environment 
          Value: Prod
          PropagateAtLaunch: "true"
        - Key: Name 
          Value: ProdInstance
          PropagateAtLaunch: "true"
  
  myS3bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: prods3bucket001

As you can see in the two templates above, I created a mapping that selects which AMI to use based on the AWS region being deployed to. (For this tutorial I'm only deploying to us-east-2.)

Next, each template creates the following resources: an instance profile, a launch configuration, an autoscaling group, and an S3 bucket.

Notice that in the dev environment's autoscaling group I've set the DesiredCapacity to "1", while in prod I've set it to "2". This is intentional: production will receive more traffic than development, so two instances are needed to handle the load.
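
Before handing the templates to the pipeline, it's worth catching syntax errors locally. The AWS CLI can validate a template for you (assuming your credentials and default region are already configured):

# Catch template syntax errors before committing
aws cloudformation validate-template --template-body file://dev.yml
aws cloudformation validate-template --template-body file://prod.yml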

Editing the CodePipeline

Now it's time to make some edits in CodePipeline!

Delete the current deploy stage that uses EB as the action provider. Then create two new stages, named Deploy-Dev and Deploy-Prod respectively.

Give the 'Action name' a name and select AWS CloudFormation as the 'Action provider'. Select the region you want to deploy to (I'm using us-east-2 (Ohio)), and for the input artifacts select BuildArtifact.

In the 'Action mode' section, select Create or update stack. Next, give your stack a name; mine is called dev-stack-21 for the development environment and prod-stack-21 for the production environment. (The '21' denotes the number of attempts it took me to create this stack; you may choose not to include it.)

In the template section, I'll use the CloudFormation template in the BuildArtifact and specify the filename depending on the environment (prod.yml or dev.yml). One note: since both templates create an IAM instance profile, you'll likely need to acknowledge the CAPABILITY_IAM capability in the action's settings.
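
If you want to double-check the edits from the command line, you can pull the pipeline definition and list its stages. (I'm inferring the pipeline name from the artifact path above; substitute your own.)

# List the pipeline's stages to confirm Deploy-Dev and Deploy-Prod exist
aws codepipeline get-pipeline \
    --name hello-kitty-ASG \
    --query 'pipeline.stages[].name'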

Kicking off the Pipeline

Before kicking off the pipeline, navigate to your CodePipeline project's S3 bucket. Once you're inside, go to your project and open the BuildArtif folder, then delete everything in it. Nothing from previous builds should remain; the user data script greps the folder for today's date, so more than one matching artifact would break the download.
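
Emptying the folder can also be done from the CLI, using the bucket and prefix from the templates above. Double-check the path before running, since this permanently deletes objects:

# Remove all previous build artifacts from the pipeline's bucket
aws s3 rm s3://codepipeline-us-east-2-710251686107/hello-kitty-ASG/BuildArtif/ --recursive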

Now it's time to kick off the build! Go ahead and commit and push the changes to your repository with git. The build should run like before, except it will first create the development environment and then the production environment with two EC2 instances.
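
For example (the branch name is an assumption), with an optional peek at the stack's progress while CloudFormation works:

git add dev.yml prod.yml
git commit -m "Replace EB deploy with CloudFormation stacks"
git push origin master

# Optionally, follow along as the dev stack comes up
aws cloudformation describe-stack-events \
    --stack-name dev-stack-21 \
    --query 'StackEvents[0:5].[ResourceStatus,LogicalResourceId]'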

Looking good! As you can see, the instances and buckets have been created. Let's check whether the code made it onto the EC2 instances.

SSH into the instances and confirm the /flask directory was created.
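
For example, assuming the mycf key pair from the launch configuration and the instance's public IP from the EC2 console:

# Connect using the key pair referenced in the launch configuration
ssh -i ~/.ssh/mycf.pem ec2-user@<instance-public-ip>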

Prod Instance:

[ec2-user@ip-172-31-19-230 ~]$ ls /flask
application.py  deploy.sh  prod.yml          templates
asg.yml         dev.yml    requirements.txt  user_data.txt
buildspec.yml   env        singleec2.yml     xrX8NG5

Dev Instance:

[ec2-user@ip-172-31-5-122 ~]$ ls /flask
application.py  deploy.sh  prod.yml          templates
asg.yml         dev.yml    requirements.txt  user_data.txt
buildspec.yml   env        singleec2.yml     xrX8NG5

Awesome! The pipeline is working as intended and both environments are running in parallel.

Where to go from here? Depending on your organization, you may want to gate the production deployment, which means adding a manual approval action at the start of 'Deploy-Prod'. This lets everyone review the development environment and confirm the changes are as expected before proceeding to production.
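
Once an approval action is in place, it can be approved from the console or from the CLI. A sketch of the CLI route (the pipeline, stage, and action names below are assumptions, and the token comes from get-pipeline-state):

# Fetch the approval token for the pending action
aws codepipeline get-pipeline-state --name hello-kitty-ASG

# Approve the gated production deployment
aws codepipeline put-approval-result \
    --pipeline-name hello-kitty-ASG \
    --stage-name Deploy-Prod \
    --action-name ManualApproval \
    --result summary="Dev looks good",status=Approved \
    --token <token-from-get-pipeline-state>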