With the rise of multiple cloud vendors, spinning up environments programmatically can be a huge challenge: an operator needs to understand different services and syntax for each cloud provider.

It is extremely taxing to have teams learn multiple tools. How can we solve this problem? Well, we're in luck: there is a single tool, Terraform, that is cloud-provider agnostic, which means it works with AWS, Azure, GCP, and more!

Today I'll show you how to create an EC2 instance and an S3 bucket using Terraform.


Installing Terraform

Terraform can be installed on Windows, Mac, and Linux. I'll be installing it on a Linux machine. Go ahead and download the version you want from the HashiCorp releases page.

1. Download Terraform.

curl -O https://releases.hashicorp.com/terraform/0.12.5/terraform_0.12.5_linux_amd64.zip

2. Unzip the file and move the binary to /usr/local/bin.

unzip terraform_0.12.5_linux_amd64.zip -d /usr/local/bin/
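
Depending on your user's permissions, you may need to prefix this command with sudo.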

3. Test that Terraform is installed correctly by checking its version.

terraform -version
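
You should see output along these lines:

Terraform v0.12.5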

Terraform Time

Time to dive in and create our environment!

1. The first task is to create a directory to store the Terraform files, then change into it.

mkdir /terra-proj
cd /terra-proj

2. Create the terraform file.

touch ec2s3.tf

3. Create the EC2 instance.

In the first half of the file, we'll configure the AWS provider and pass in the region where everything will be deployed. Then we'll have Terraform create the EC2 instance.

provider "aws" {

        region = "us-east-1"
}

# Create an EC2 instance
resource "aws_instance" "ec2Deploy" {

        ami = "ami-0b898040803850657"
        instance_type = "t2.micro"
		key_name = "YOUR_SSH_KEY_PAIR"
        tags = {
                Name = "hellokitty"
        }
}
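
If you'd like Terraform to print the instance's public IP once it's up, you can optionally add an output block. A minimal sketch (the output name instance_public_ip is just my own label):

# Optional: print the instance's public IP after apply
output "instance_public_ip" {
  value = aws_instance.ec2Deploy.public_ip
}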

4. Create an S3 bucket.

Next, add the S3 bucket resources to the same file. S3 bucket names have to be globally unique, so I'll append a random ID to the bucket name. Also make sure you set force_destroy = true; this allows Terraform to destroy the S3 bucket even if it still has files in it.

provider "aws" {

        region = "us-east-1"
}

# Create an EC2 instance
resource "aws_instance" "ec2Deploy" {

        ami = "ami-0b898040803850657"
        instance_type = "t2.micro"
        tags = {
                Name = "hellokitty"
        }
}

# Create an S3 bucket
resource "random_id" "bcktId" {
  byte_length = 2
}

resource "aws_s3_bucket" "bucket" {
  bucket        = "bucket-${random_id.bcktId.dec}"
  acl           = "private"
  force_destroy = true

  tags = {
    Name = "bckt-tf"
  }
}
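
Since the final bucket name includes a random ID, it can also be handy to have Terraform echo the name back. Another small optional sketch (bucket_name is my own label):

# Optional: print the generated bucket name after apply
output "bucket_name" {
  value = aws_s3_bucket.bucket.id
}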

5. Supply the AWS Secret and Access Keys.

Now, we could have hard-coded these into the file, but that is highly insecure: if you push the file to GitHub, your keys are out in the open for anyone to see. So I'll export them as environment variables instead.

export AWS_SECRET_ACCESS_KEY="<secret_key>"
export AWS_ACCESS_KEY_ID="<access_key>"
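
As an alternative, if you keep your credentials in a shared credentials file (~/.aws/credentials), the AWS provider can read a named profile instead. A minimal sketch, assuming a profile called default:

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}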

Launch the New Environment

Now that the Terraform files are in place, it's time to launch the new resources.

1. Initialize the Terraform directory.

terraform init
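
terraform init downloads the plugins for the providers referenced in the configuration; in this case that's the aws provider and, because of the random_id resource, the random provider.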

2. Validate the Terraform files.

terraform validate

You should see output confirming that the configuration is valid.

3. Plan the deployment.

terraform plan generates the plan for Terraform to execute. The -out option writes the plan to a file called ec2s3plan, which Terraform will use later to create the instance and bucket.

terraform plan -out=ec2s3plan
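
At the end of the plan output you should see a summary along the lines of Plan: 3 to add, 0 to change, 0 to destroy. (the random ID, the EC2 instance, and the S3 bucket).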

4. Execute the plan.

After executing the plan below, hop onto the AWS console and check that your S3 bucket and EC2 instance are being created.

terraform apply ec2s3plan
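
If you added the optional output blocks earlier, their values (the instance's public IP and the bucket name) will be printed once the apply completes.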

5. Destroy the Environment.

Finally, when you are done with the environment, you will want to destroy it. To do so, issue the terraform destroy command. Supply the -auto-approve option to bypass the confirmation prompt.

terraform destroy -auto-approve

Now, if you're paranoid like me, double-check the AWS console and make sure you have no EC2 instances or S3 buckets lying around.