Hey there, it has been a while since my last post! No, I have not contracted COVID-19. I have been busy studying for the AWS DevOps Professional certification! Now that I have passed the exam, I can get back to entertaining you all.

Let's do a quick recap: in a previous blog post, I created a pipeline using CodePipeline and set up an approval process via Slack. Now it's time to complete the pipeline and have it update the ECS service with the new task definition and stop the containers running an older version of Ghost.


Overview

I will implement another step in the pipeline that invokes a Lambda function to update the ECS service and then delete the out-of-date containers.


My code will be written in Python 3.8 and will use the AWS SDK for Python (Boto3).

The order of operations I would like Python to execute goes as follows:

  • List available clusters
  • List available ECS services
  • List available task definitions
  • List available ECS tasks
  • Update the ECS service
  • Delete the old ECS tasks (Docker containers)

Writing the Code

Time to get to the good stuff! If you're a master with Python, the following explanation will be trivial for you, so go ahead and skip to the bottom to read the full code. To see my thought process for the code, continue reading.

If you are wondering how I figured out the proper syntax for Boto3, I am following this documentation.

First, I will import the modules I will use. The two modules I need are Boto3 and json. Simply import these modules into your code to use them.

import boto3
import json

Next I'll instantiate Boto3 clients for ECS and CodePipeline. My variables will be named client and pipe_client respectively.

client = boto3.client('ecs')
pipe_client = boto3.client('codepipeline')

After the instantiation, it is time to create four functions to find the ECS cluster, service, task definition, and tasks. These functions will pinpoint the ECS resources I need to interact with. I only want to target my Ghost blog's ECS cluster.

The list functions will show the following and use the listed methods:

  • Available clusters – list_clusters()
  • ECS services – list_services()
  • Task definitions – list_task_definitions()
  • ECS tasks – list_tasks()

# Shows available clusters
def list_cluster():
    x = client.list_clusters()
    cluster = x['clusterArns'][0]
    return cluster


# Shows available ecs services
def list_ecs_service(arn):
    y = client.list_services(
        cluster = arn
    )
    service = y['serviceArns'][0]
    return service


# Shows available task definitions
def task_definition():
    td = client.list_task_definitions()['taskDefinitionArns'][-1]
    return td


# Shows available ecs tasks
def tasks(cluster):
    instance = client.list_container_instances(
        cluster = cluster
    )

    w = client.list_tasks(
        cluster = cluster,
        containerInstance = instance['containerInstanceArns'][0]
    )
    return w
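Side note: list_task_definitions returns ARNs in ascending order by default, which is why the last element is the newest revision. The API also accepts a sort parameter if you'd rather ask ECS for the newest revision directly. Here is a hedged sketch of that alternative; the injected ecs_client stands in for Boto3's ECS client so the function is easy to exercise with a stub:

```python
# Hypothetical alternative to the post's task_definition() function.
def latest_task_definition(ecs_client):
    # sort='DESC' returns the newest revisions first, so the
    # first ARN in the response is the latest task definition
    resp = ecs_client.list_task_definitions(sort='DESC', maxResults=1)
    return resp['taskDefinitionArns'][0]
```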

Cool, now the code is able to gather information from ECS. Next I'll create two more functions: one to update the service with the new task definition and one to delete old tasks on the cluster.

The update and delete function will use these methods:

  • Update ECS service – update_service()
  • Delete old tasks – stop_task()

# Update the ecs service
def update_ecs_service(cluster, service, td):
    z = client.update_service(
        cluster = cluster,
        service = service,
        desiredCount = 2,
        taskDefinition = td
    )
    return z


# Deletes ecs tasks (docker containers)
def delete_old_task(cluster, task):
    client.stop_task(
        cluster = cluster,
        task = task
    )
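One small extra worth knowing: stop_task also accepts an optional reason string, which shows up in the ECS console and service events and makes later debugging easier. A hedged sketch of that variant (named stop_old_task to avoid clashing with the function above, with the client injected for easy testing):

```python
# Hypothetical variant of delete_old_task that records why the task was stopped.
def stop_old_task(ecs_client, cluster, task, reason='Replaced by a new task definition'):
    # The reason string appears in the ECS service event log
    ecs_client.stop_task(cluster=cluster, task=task, reason=reason)
```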

Almost there! Next I'm going to create the main function that will call the other functions and supply them with the necessary arguments.

Just in case I need to troubleshoot, I will add four print statements to check the cluster, service, latest task definition, and number of tasks. When Lambda prints these, they are logged to CloudWatch Logs, which I can review to troubleshoot issues.

I will then call the update_ecs_service function and pass in three arguments: the cluster, service, and task definition. These arguments are supplied by the list_cluster, list_ecs_service, and task_definition functions.

Once the service is updated with the newest task definition, I'll delete the running tasks so that ECS can create new containers from the new task definition. To do this, I will loop through the task ARNs returned by the tasks function and stop each old task.

The last function is lambda_handler, which Lambda requires as the entry point for my code. In this function I'll call the main function so that it executes its instructions. I'll also send a success message back to CodePipeline using the put_job_success_result() method from pipe_client. This informs CodePipeline that the Lambda function executed successfully.

def main():
    cluster = list_cluster()
    service = list_ecs_service(cluster)
    td = task_definition()

    print('cluster: ' + cluster)
    print('service: ' + service)
    print('latest task definition: ' + td)
    print('# of tasks: ' + str(len(tasks(cluster)['taskArns'])))

    update_ecs_service(cluster, service, td)

    for task in tasks(cluster)['taskArns']:
        print('deleting task: ' + task)
        delete_old_task(cluster, task)


def lambda_handler(event, context):
    main()
    job_id = event['CodePipeline.job']['id']
    pipe_client.put_job_success_result(jobId=job_id)
    return {
        'statusCode': 200
    }
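One caveat with this handler: if main() throws an exception, no result is ever reported back, and the pipeline stage will hang until it times out. A more defensive handler could catch the exception and call put_job_failure_result instead. This is only a sketch, not part of the pipeline as built; the main() placeholder stands in for the real main function above:

```python
def main():
    # Placeholder: the post's real main() goes here
    pass


def get_job_id(event):
    # CodePipeline passes its job id to Lambda under this key
    return event['CodePipeline.job']['id']


def lambda_handler(event, context):
    # Deferred import so get_job_id stays testable without boto3 installed
    import boto3
    pipe_client = boto3.client('codepipeline')
    job_id = get_job_id(event)
    try:
        main()
        pipe_client.put_job_success_result(jobId=job_id)
    except Exception as err:
        # failureDetails requires a type and a message; 'JobFailed' is a valid type
        pipe_client.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(err)}
        )
        raise
    return {'statusCode': 200}
```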

That completes the code! The full code written out is below.

Full Code

import boto3
import json


client = boto3.client('ecs')
pipe_client = boto3.client('codepipeline')

# Shows available clusters
def list_cluster():
    x = client.list_clusters()
    cluster = x['clusterArns'][0]
    return cluster


# Shows available ecs services
def list_ecs_service(arn):
    y = client.list_services(
        cluster = arn
    )
    service = y['serviceArns'][0]
    return service


# Shows available task definitions
def task_definition():
    td = client.list_task_definitions()['taskDefinitionArns'][-1]
    return td


# Shows available ecs tasks
def tasks(cluster):
    instance = client.list_container_instances(
        cluster = cluster
    )

    w = client.list_tasks(
        cluster = cluster,
        containerInstance = instance['containerInstanceArns'][0]
    )
    return w


# Update the ecs service 
def update_ecs_service(cluster, service, td):
    z = client.update_service(
        cluster = cluster,
        service = service,
        desiredCount = 2,
        taskDefinition = td
    )
    return z
    
    
# Deletes ecs tasks (docker containers)
def delete_old_task(cluster, task):
    client.stop_task(
        cluster = cluster,
        task = task
    )


def main():
    cluster = list_cluster()
    service = list_ecs_service(cluster)
    td = task_definition()

    print('cluster: ' + cluster)
    print('service: ' + service)
    print('latest task definition: ' + td)
    print('# of tasks: ' + str(len(tasks(cluster)['taskArns'])))

    update_ecs_service(cluster, service, td)

    for task in tasks(cluster)['taskArns']:
        print('deleting task: ' + task)
        delete_old_task(cluster, task)


def lambda_handler(event, context):
    main()
    print(event)
    job_id = event['CodePipeline.job']['id']
    pipe_client.put_job_success_result(jobId=job_id)
    return {
        'statusCode': 200
    }

Creating the Lambda Function

Now head into the AWS console to set up the Lambda function. I will name my function updateEcs, select Python 3.8 as the runtime, and create a new role for Lambda.


In the next window, paste your code into the Lambda code editor and click the Save button.


In the Permissions tab, click on the role assigned to the Lambda function. It should take you to the IAM console. You will need to add a policy that allows Lambda to write results back to CodePipeline. If you skip this, CodePipeline will invoke the Lambda function, but the stage will never complete.

Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "codepipeline:PutJobFailureResult",
                "codepipeline:PutJobSuccessResult",
                "logs:*"
            ],
            "Resource": "*"
        }
    ]
}

Save the policy and make sure it is attached to the role associated with the Lambda function.
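If you'd rather script this step, the same inline policy can be attached with Boto3's IAM client. A hedged sketch follows; the role and policy names are placeholders, and the IAM client is passed in so the helper can be tested with a stub:

```python
import json


# Hypothetical helper for attaching the inline policy shown above.
def attach_inline_policy(iam_client, role_name, policy_name, policy_doc):
    # put_role_policy creates or updates an inline policy on the role
    iam_client.put_role_policy(
        RoleName=role_name,
        PolicyName=policy_name,
        PolicyDocument=json.dumps(policy_doc)
    )
```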


Finalizing the Pipeline

Navigate to CodePipeline and add a fourth stage. Next, add an action and select AWS Lambda as the action provider. Give it a name and choose the Lambda function you created earlier.


Once the fourth stage is added, the only thing left to do is test! Go ahead and release the change.


Success! As you can see, the Lambda stage executed without a hitch. That wraps up this pipeline.

To recap: this pipeline grabs code from CodeCommit, builds a container image and task definition using CodeBuild, and uploads the image to ECR. The last stage updates the ECS service with the new task definition and stops the old running tasks (containers) so that ECS can create new tasks from the new task definition.