Kubernetes simplified with AWS EKS

Vishal Gorai
5 min read · Jul 11, 2020

Let’s get into container orchestration with AWS.

Hello readers! I hope you are all doing fine in your domains. Well, mine is cloud, and I’ll take you deep into it. Cloud is everywhere, and AWS dominates the cloud domain. Needless to say, it has revolutionised the cloud space and keeps moving ahead with its continuously expanding services. In this article, we will explore one of those services, i.e., Amazon Elastic Kubernetes Service (EKS), and simultaneously build a small Kubernetes (K8s) application.

EKS


EKS is a fully managed Kubernetes service provided by AWS to provision and manage your K8s cluster. EKS manages the availability and scalability of the Kubernetes control plane, and it automatically detects and replaces unhealthy control-plane nodes for each cluster, so your customers never get a 404.

EKS has a huge fanbase, from big investment banks like HSBC to the social media app Snapchat. It is a go-to solution for just about any container implementation on the cloud you can think of.

There are two ways to go ahead with EKS:

  1. The EC2 way
  2. The serverless way

The EC2 way:

This gives you complete control over your K8s cluster, from choosing EC2 instances with different configurations to monitoring them from inside the pods. It is a great option if your organisation has a very good DevOps team to handle such granular control. Along with greater control, you get better cost optimisation.

The serverless way:

This employs AWS Fargate, so you get rid of server management altogether and focus only on your pods, or more specifically, your business. It spares you the headache (control) of your node machines. Fargate provisions the right amount of compute, eliminating the need to choose instances and scale your cluster yourself, and you pay for resources per application. This is the go-to solution for orgs with little or no expertise in server management. It will obviously be a little more expensive than the EC2 way, since AWS manages the worker infrastructure for you.
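As a rough sketch, assuming eksctl is installed (we set it up below), a Fargate-backed cluster can be spun up with a single command; the cluster name and region here are just placeholders:

$ eksctl create cluster --name my-fargate-cluster --region ap-southeast-1 --fargate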

Both these mechanisms have trade-offs, and which one to embrace depends solely on your needs.

Overall

The master in EKS is configured across multiple Availability Zones (AZs) by AWS. This makes your ecosystem fault tolerant, even in the case of natural disasters. You don’t have any access to the master node, since it is fully managed and configured by AWS, which means you can’t SSH into it. I guess that itself reduces much of the stress ;)

EKS integrates tightly with other AWS services like EFS, ELB, CloudWatch and many more. Here, we will be using a few of them in our very basic cluster.

The internal EKS architecture:

[Image: EKS internal architecture diagram]

Jumping into the Cluster

1. Get your system ready:

Install awscli (for handling AWS from the terminal)

Install eksctl (for creating and managing K8s clusters on AWS)

Install kubectl (for interacting with the K8s cluster)

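As a minimal sketch, on a Linux machine the setup looks something like this (check the official docs for the latest steps and for other operating systems):

# awscli: download and install AWS CLI v2
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip && sudo ./aws/install

# eksctl: download the latest release binary
$ curl -sL "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin/

# kubectl: download the latest stable binary
$ curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -m 0755 kubectl /usr/local/bin/kubectl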

2. Configure your awscli with a user that has permission to access EKS and the other services; I configured it with an admin user. Also note that if you’re using AWS Fargate, choose your region appropriately. Fargate is not available in all AWS regions.

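The configuration itself is one interactive command; you’ll need the access key pair of that IAM user (the values below are placeholders):

$ aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: ap-southeast-1
Default output format [None]: json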

3. Initially there won’t be any cluster in your AWS account. Check with the following command:

$ eksctl get cluster

4. We will provision our first cluster with a very basic configuration. We will write the cluster configuration in a yaml file and execute it with eksctl from the cli.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-kube-config
  region: ap-southeast-1

nodeGroups:
  - name: node-group-1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: <your-keypair-name>
  - name: node-group-2
    desiredCapacity: 1
    instanceType: t2.micro
    ssh:
      publicKeyName: <your-keypair-name>
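Assuming the config above is saved as cluster.yaml (the filename is arbitrary), create the cluster with:

$ eksctl create cluster -f cluster.yaml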

If you look closely into the logs, you’ll see that eksctl is using the CloudFormation service to launch the worker nodes.

The cluster should be up and running after 10–15 minutes.

5. Configuring kubectl locally. Initially your kubectl won’t be configured to talk to the new cluster.

Run the following command to configure the local kubeconfig file:

aws eks update-kubeconfig --name <cluster-name>

This makes your Kubernetes ecosystem ready. On top of this, you can run your K8s apps.
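You can verify that kubectl is talking to the cluster by listing the worker nodes:

$ kubectl get nodes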

6. Without diving deep into K8s, here we will deploy the most basic K8s application on our EKS cluster.

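As a minimal sketch, here is a deployment running a stock nginx image (the name my-web and the image are just illustrative choices):

# create a deployment running an nginx container
$ kubectl create deployment my-web --image=nginx

# check that the pod has come up
$ kubectl get pods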

In K8s, by default, a Pod is only accessible by its internal IP address within the Kubernetes cluster. To make it accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes Service.

7. Exposing the port to the outer world.

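A sketch of exposing the illustrative my-web deployment from the previous step on port 80 through a LoadBalancer service:

# on EKS, this provisions an AWS load balancer for the service
$ kubectl expose deployment my-web --type=LoadBalancer --port=80

# the EXTERNAL-IP column shows the load balancer's DNS name
$ kubectl get services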

Here in EKS, the default load balancer type is the Classic ELB, pre-defined by AWS. You could change it and use one of the many available load balancers.

Managing the pod from inside.

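For example, to get an interactive shell inside a running pod (substitute a real pod name from kubectl get pods):

$ kubectl exec -it <pod-name> -- /bin/bash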

8. Finally, cleaning the mess up from the pods:

kubectl delete deployment --all

9. Releasing the EKS cluster:

eksctl delete cluster --name <cluster-name>

EKS Pricing

You pay $0.10 per hour for each Amazon EKS cluster that you create. You can use a single Amazon EKS cluster to run multiple applications by taking advantage of Kubernetes namespaces and IAM security policies.
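At that rate, a cluster left running round the clock works out to roughly $0.10 × 24 × 30 ≈ $72 a month for the control plane alone; the EC2 instances or Fargate resources running your workloads are billed separately on top of that.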

Thank you all :)

That’s it for stepping into EKS. If you’ve come this far, I am sure you are definitely into the dev and K8s stuff. Give it a clap and share it with your cluster. Consider connecting with me on LinkedIn :)
