Hello friends! It's time to ring in the new year with some cool container orchestration on EKS! Tonight I'll be showing you all how to set up an EKS cluster on AWS. What is EKS, you ask? EKS stands for Elastic Kubernetes Service, a managed Kubernetes service offered by AWS! It is very similar to Azure's AKS, which I wrote about here.

With EKS, AWS takes care of the Kubernetes control plane (the master nodes) for you, and all you manage are the worker nodes.


Prerequisites

For this tutorial you will need the AWS CLI. If you do not have it installed, follow this guide from AWS.
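Before moving on, it's worth confirming the CLI is installed and configured with working credentials. A quick sanity check (assuming you've already run aws configure):

```shell
# Print the CLI version to confirm the install worked.
aws --version

# Ask STS who you are; if this prints your account ID and ARN,
# your credentials are configured correctly.
aws sts get-caller-identity
```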


Installing eksctl

Eksctl is a CLI tool for creating EKS clusters, and it's the official command line tool for EKS, as stated by AWS in their blog. So I'll be using it to create my cluster.

1. Download and install the eksctl binary.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

2. Verify eksctl installed properly.

eksctl version

You should get an output similar to mine:

$ eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.11.1"}

Installing kubectl

If you remember from my previous blogs, kubectl is a command line tool for managing Kubernetes clusters.

1. Download the kubectl binary.
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

2. Make the binary executable.

chmod u+x ./kubectl

3. Move the binary to your environment PATH.

mv ./kubectl /usr/local/bin/kubectl

4. Verify kubectl installed properly.

kubectl version

You should get an output similar to this:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:11:03Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
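One note here: since there is no cluster configured yet, a plain kubectl version will also try (and fail) to contact an API server. To check just the client binary, you can pass a flag:

```shell
# Only print the client version; this skips the API server lookup,
# which would fail before a cluster exists.
kubectl version --client
```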

Installing the AWS IAM Authenticator

To configure authentication to the EKS cluster, you will need the AWS IAM Authenticator.

1. Download the AWS IAM Authenticator binary.
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator

2. Make the binary executable.

chmod u+x ./aws-iam-authenticator

3. Copy the binary to your environment PATH.

cp aws-iam-authenticator /usr/local/bin
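As with the other tools, it doesn't hurt to verify the binary is on your PATH and runs:

```shell
# Print the built-in help to confirm the binary is executable
# and reachable from your PATH.
aws-iam-authenticator help
```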

Creating the EKS Cluster

Alright, let's finally create the cluster!

1. Write the config file.

Okay, open up your favorite text editor and we'll create a config file for eksctl to use. This config file will tell eksctl how many worker nodes to create, which instance type to use, which region to deploy to, and so on.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-test-cluster
  region: us-east-2

nodeGroups:
  - name: test-nodegroup
    instanceType: t2.micro
    desiredCapacity: 3
example config file

As you can see above, the config file is written in YAML. It will create a cluster named my-test-cluster in us-east-2. The worker nodes will go into a node group called test-nodegroup using the t2.micro instance type, and eksctl will spin up 3 of them for this cluster.

2. Execute eksctl.

Supply the example config filename when executing the eksctl command.

eksctl create cluster -f config.yaml

Let this process run; it may take 10 to 15 minutes to complete. During this process, eksctl will create a new VPC and subnets to deploy the cluster into. If you do not want eksctl to do this, you will need to supply your own VPC values in the config file. See this documentation.
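For example, reusing existing subnets looks roughly like the fragment below, added to the same ClusterConfig. This is just a sketch; the subnet IDs are placeholders, and the eksctl documentation covers the full schema.

```yaml
# Hypothetical fragment: point eksctl at an existing VPC's subnets
# instead of letting it create a new VPC. Subnet IDs are placeholders.
vpc:
  subnets:
    private:
      us-east-2a: { id: subnet-0aaaaaaaaaaaaaaaa }
      us-east-2b: { id: subnet-0bbbbbbbbbbbbbbbb }
```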

You will get an output similar to this:

[ℹ]  eksctl version 0.11.1
[ℹ]  using region us-east-2
[ℹ]  setting availability zones to [us-east-2c us-east-2a us-east-2b]
[ℹ]  subnets for us-east-2c - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for us-east-2a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for us-east-2b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "test-nodegroup" will use "ami-082bb518441d3954c" [AmazonLinux2/1.14]
[ℹ]  using Kubernetes version 1.14
[ℹ]  creating EKS cluster "my-test-cluster" in "us-east-2" region with un-managed nodes
[ℹ]  1 nodegroup (test-nodegroup) was included (based on the include/exclude rules)
[ℹ]  will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
[ℹ]  will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=my-test-cluster'
[ℹ]  CloudWatch logging will not be enabled for cluster "my-test-cluster" in "us-east-2"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=us-east-2 --cluster=my-test-cluster'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "my-test-cluster" in "us-east-2"
[ℹ]  2 sequential tasks: { create cluster control plane "my-test-cluster", create nodegroup "test-nodegroup" }
[ℹ]  building cluster stack "eksctl-my-test-cluster-cluster"
[ℹ]  deploying stack "eksctl-my-test-cluster-cluster"
[ℹ]  building nodegroup stack "eksctl-my-test-cluster-nodegroup-test-nodegroup"
[ℹ]  --nodes-min=3 was set automatically for nodegroup test-nodegroup
[ℹ]  --nodes-max=3 was set automatically for nodegroup test-nodegroup
[ℹ]  deploying stack "eksctl-my-test-cluster-nodegroup-test-nodegroup"
[✔]  all EKS cluster resources for "my-test-cluster" have been created
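At this point eksctl should have written the new cluster's credentials into your kubeconfig automatically. If kubectl can't see the cluster for some reason, you can regenerate the entry with the AWS CLI:

```shell
# Write/update the kubeconfig entry for the new cluster.
aws eks update-kubeconfig --region us-east-2 --name my-test-cluster
```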

3. Verify all 3 nodes are up and ready.

kubectl get nodes

The output should be something like this:

NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-10-52.us-east-2.compute.internal    Ready    <none>   80s   v1.14.7-eks-1861c5
ip-192-168-49-165.us-east-2.compute.internal   Ready    <none>   82s   v1.14.7-eks-1861c5
ip-192-168-94-93.us-east-2.compute.internal    Ready    <none>   84s   v1.14.7-eks-1861c5
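With the nodes Ready, a quick smoke test confirms the cluster actually schedules pods. This is just a throwaway nginx deployment, not part of the cluster setup:

```shell
# Create a throwaway deployment and watch the pod come up.
kubectl create deployment hello-eks --image=nginx
kubectl get pods

# Remove the test deployment when you're done.
kubectl delete deployment hello-eks
```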

Also, if you check the EKS and EC2 services within the AWS console, you will see the cluster and instances all ready to go for your deployments!
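One last tip: EKS bills for the control plane by the hour, on top of the EC2 worker nodes, so if this cluster was just for practice, tear it down when you're finished:

```shell
# Delete the cluster and everything eksctl created for it,
# using the same config file we created the cluster with.
eksctl delete cluster -f config.yaml
```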