
How to get an EKS cluster running + Terraform code in 2 min of work

TL;DR:
AWS EKS + Terraform + CloudSkiff do the job


In this article we’ll explain how to spin up an AWS EKS cluster in 2 min of work and get Terraform code out of it for reproducibility and easy cleanup, using CloudSkiff, a CI/CD for infrastructure as code.

Setting up new environments in EKS is a little tedious, and requires a lot of point-and-click work if you do it through the console.


Plus, if something messes up, or you just want to shut it all down, you end up with a shitload of work cleaning up your AWS account and getting rid of now-useless services. And AWS didn’t make that simple (who designed that CLI again? And no, you can’t delete your VPC, there’s a NAT gateway attached to it. And no again, there is no automated cleanup function).

The AWS team doesn’t really want to add easy cleanup functions :) 

Enter Terraform. Describe everything as Terraform code and you get a really easy way to deploy your new dev environment, one that is reproducible and easy to clean up. It also makes it simpler to do things cleanly, with your environment neatly set up in a VPC for isolation.
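
To make that concrete, this is the generic Terraform workflow that gives you the easy deploy and the easy cleanup. These are plain Terraform commands, nothing CloudSkiff-specific; the .tf files are whatever describes your environment:

# in a folder containing your .tf files
terraform init      # download providers, set up the working directory
terraform plan      # preview what will be created or changed
terraform apply     # create the environment (VPC, cluster, ...)
terraform destroy   # tear it all down again when you are done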


Writing, optimizing and running Terraform code is a little tricky, and if you have your infra described as code, you might as well manage it in a CI/CD system like any other code. Right?
 

That’s why CloudSkiff is building a CI/CD for infrastructure as code that:

  • Day 1: makes getting started with infrastructure as code more approachable.

  • Day 2+: streamlines versioning, acts as the central place for automation, and enables collaboration around your templates and deployments.

 

We’re talking about AWS here, but CloudSkiff connects to other cloud providers too.

So let’s dive into it. Start the timer, and let’s see how we launch a small dev cluster in 2 min of work. CloudSkiff will also generate basic but clean Terraform code for you that you can then reuse and upgrade to evolve your environment.


1. Create a CloudSkiff account
Easy, it’s here.

 


2. Create a CloudSkiff IAM user in your AWS account
Sign in to the AWS Management Console, then create a new AWS IAM user for CloudSkiff. I called mine cloudskiff.

Hit Add user, then select Programmatic access.

We will create a new set of policies for this user to keep permissions tight.

CloudSkiff needs access to EKS, EC2 and IAM. I created an easy, copy-paste-friendly permission set right there.

EKS EC2 IAM permission policy

You don’t need to add tags.


Let’s create this user now. Once you’ve created your user, save your access key and secret key; we’ll need them soon.
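
If you prefer the CLI over the console, the same user can be set up roughly like this. It is a sketch assuming a deliberately broad policy covering EKS, EC2 and IAM; the permission set linked above is the reference, so trim this down to match it:

# create the user
aws iam create-user --user-name cloudskiff

# create a broad EKS/EC2/IAM policy (illustrative only; prefer the narrower permission set linked above)
aws iam create-policy --policy-name cloudskiff-eks \
  --policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["eks:*", "ec2:*", "iam:*"], "Resource": "*"}]}'

# attach it to the user (replace <account-id> with your AWS account id)
aws iam attach-user-policy --user-name cloudskiff \
  --policy-arn arn:aws:iam::<account-id>:policy/cloudskiff-eks

# generate the programmatic access key and secret
aws iam create-access-key --user-name cloudskiff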

3. Add AWS to CloudSkiff


Great! We’ve created a new cloudskiff IAM user. Now let’s grant the CloudSkiff platform access using that user.
 

  1. Open the CloudSkiff app

  2. Navigate to the Integrations tab on the left

  3. Select AWS

  4. Enter your credentials, select your favorite AWS region

  5. Save. Keep the keys handy, we’ll need them later to configure our local AWS profile.

4. Add permissions on a new infra-as-code GitHub repo


CloudSkiff will generate Terraform code for your infrastructure and save it in your repo. So we need to create a GitHub repo for it to push to.

  1. Create a new private GitHub repository. Let’s call it cloudskiff-dev-eks

  2. Go to CloudSkiff’s Integrations tab and select GitHub

  3. Connect your GitHub account

Note: CloudSkiff only needs access to the specific repo where your Terraform code will be stored.
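
If you’d rather create the repo from a terminal, the GitHub CLI can do it in one line (assuming you have gh installed and authenticated):

# creates a private repo named cloudskiff-dev-eks under your account
gh repo create cloudskiff-dev-eks --private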

5. Cool. Let’s deploy an EKS cluster

The setup is complete. You’ll only have to do steps 1, 2, 3 and 4 once.

Now let’s see how we can launch an EKS cluster. Move to the CloudSkiff dashboard. That’s where you will monitor all your clusters, and launch new ones.

 

Hit New Project.

  1. Select Templates. Templates are preconfigured EKS clusters that help you get started. You still have access to the Terraform behind them.

  2. Pick a name for your project

  3. Select AWS as the provider

  4. Select your usual region

  5. We’ll deploy a small cluster of t3.nano instances scaling between 1 and 3 machines. You can always come back to it and launch something more serious afterwards :)

  6. Enter your SSH public key (cat ~/.ssh/id_rsa.pub to get it real quick on most systems; if you don’t have a key yet, see the sketch right after this list). It should look like ssh-rsa BLABLABLA.

  7. Select your brand new cloudskiff-dev-eks repo

  8. Hit Save

You should land back on your dashboard, and tadaaaam: our project is there.

  9. Hit Deploy! Our project will start, and we can monitor the progress in the Logs.
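
If you don’t have an SSH key pair to paste in step 6, generating one takes a few seconds. This is the standard ssh-keygen invocation; the email at the end is just a comment label:

# generates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
ssh-keygen -t rsa -b 4096 -C "you@example.com"
cat ~/.ssh/id_rsa.pub    # this is the value to paste into the form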

6. Relax and check out the Terraform code we’ve generated

See that GitHub logo? Hit it and you’ll land on your cloudskiff-dev-eks repository. The Terraform code that is executing right now has been stored in that repo. That means it is versioned and traceable, and if there is trouble you can roll back to older versions. GitOps becomes easier.

Some basic, but clean, Terraform code has been generated
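
If you want to browse that code locally rather than on GitHub, cloning the repo is enough (swap in your own GitHub username; the exact file layout may differ from what you expect):

git clone git@github.com:<your-github-user>/cloudskiff-dev-eks.git
cd cloudskiff-dev-eks
ls    # the generated Terraform files live here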

Meanwhile, AWS is doing its thing, spinning up the EKS cluster, VPC and autoscaling groups described in this Terraform code.

I am guessing you are already using AWS routinely and have the AWS CLI set up. Let’s take a look at that.

7. Set up your local environment

 

All you need to do is:

  1. Create a CloudSkiff profile in your AWS credentials file (usually ~/.aws/credentials), so that you can access your machines with your cloudskiff IAM user.

# you probably already have something like this
[default]
aws_access_key_id = ..
aws_secret_access_key = ..

# create this
[cloudskiff]
aws_access_key_id = ..
aws_secret_access_key = ..    # told you we'd need that later
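
If you’d rather not edit the file by hand, the AWS CLI can write that profile for you; it will prompt for the access key, secret key, default region and output format:

aws configure --profile cloudskiff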

2. Set your local environment variables $KUBECONFIG and $AWS_PROFILE.

KUBECONFIG should contain the path to your kubeconfig file. We will download it from CloudSkiff later, so let’s just prepare a folder to save it, for example $HOME/code/cloudskiff/config/aws, and define KUBECONFIG to point to the file.

export AWS_PROFILE=cloudskiff

# the path to where you will store your kubeconfig file
export KUBECONFIG=$HOME/code/cloudskiff/config/aws/kubeconfig-dev-cluster
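
If that folder doesn’t exist yet, create it now (same example path as above; use whatever location you picked):

mkdir -p $HOME/code/cloudskiff/config/aws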

8. Connect

 

Wait a few minutes (10–15) for AWS to assign resources. At some point, your project will be deployed (you will see a green ball in the UI).

1. Get the kubeconfig from the CloudSkiff dashboard

2. Rename it to kubeconfig-dev-cluster. Then move it to ~/code/cloudskiff/config/aws/kubeconfig-dev-cluster or the place of your choosing, as long as it matches your $KUBECONFIG.

3. Check: echo $AWS_PROFILE; echo $KUBECONFIG. It should output something like this:

cloudskiff
~/code/cloudskiff/config/aws/kubeconfig-dev-cluster

4. Now run kubectl get nodes, or k9s if you prefer. You’re in! Cluster deployed.
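
A couple of optional sanity checks from the AWS side, using the cloudskiff profile (replace <your-region> with the region you selected earlier):

aws eks list-clusters --profile cloudskiff --region <your-region>    # your new cluster should be listed
kubectl get nodes -o wide                                            # shows node IPs, OS image and kubelet version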

(9. Destroy)

 

To destroy your cluster and clean up everything, well: just hit the Destroy button on CloudSkiff. Everything will be cleaned up automatically and ready for a re-deploy!
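
If you want to double-check that nothing was left behind, a couple of plain AWS CLI calls (again with <your-region> as the region you used) should come back empty, or at least without leftover resources:

aws eks list-clusters --profile cloudskiff --region <your-region>            # the cluster should be gone from the list
aws ec2 describe-nat-gateways --profile cloudskiff --region <your-region>    # deleted NAT gateways may linger in the output for a while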

Debrief

Reading this, you might think I took more than 2 min, because I sprayed screenshots everywhere.

Thinking about it, most of the things I did were just one-off setup tasks:

  1. (only once) Create a CloudSkiff account

  2. (only once) Create an IAM user

  3. (only once) Add it to CloudSkiff

  4. (only once) Add permissions on a new infra-as-code GitHub repo

  5. Select 5 options

  6. Press a Deploy button

  7. (only once) Make sure my local AWS profile was configured

  8. Get a kubeconfig

  9. Connect!

 

Only 5, 6, 8 and 9 are steps you need to do for each deployment, and they are mostly buttons to press or single command lines.

 

I hope you liked that! We haven’t looked at the Terraform code in detail together, so I will keep that for an upcoming post.

 

Don’t hesitate to reach out about this tutorial!

I am at <my-weird-first-name>@cloudskiff.com

Malo Marrec

CloudSkiff is a CI/CD for Terraform, on steroids.
