Creating a Multi-Node Kubernetes Cluster Using AWS Instances

Akhilesh Jain
12 min read · Apr 16, 2021

So, in this article, I have created a multi-node Kubernetes cluster on AWS. I have used 2 tools, Terraform and Ansible, to build the complete cluster on a CentOS image.
→ I have used Terraform to create the instances on AWS, along with a security group that allows the required ports and the ICMP protocol, so that connectivity can be checked.
→ I have used Ansible to configure the cluster, i.e. to configure the master and slave nodes.

In Ansible, I have used 2 major concepts to configure the cluster:
- Dynamic Inventory:
With this, Ansible can automatically fetch the hosts running on the AWS cloud. Some Python scripts do all of this behind the scenes; we just have to provide the IAM credentials (a fragment of the credentials file is shown below).
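For reference, the part of ‘hosts/ec2.ini’ where the credentials go looks roughly like this (a fragment only; the key values below are placeholders, not real credentials):

[ec2]
# limit the inventory scan to the region used in this task
regions = ap-south-1

[credentials]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx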

I have written earlier stories describing how to install Terraform and the AWS CLI, and how to create an IAM user.

- Roles: These are easy-to-manage packages of Ansible code. Each role has separate directories holding its variables, tasks, and so on, which are wired together internally. We just have to create a playbook that calls the roles implicitly; a minimal sketch of such a playbook follows.
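The playbook that calls the roles is ‘configure_cluster.yml’, which we run later in this article; since it is not reproduced there, here is a minimal sketch of what such a playbook can look like. The group names ‘tag_Name_Master’ and ‘tag_Name_Slaves’ are an assumption: they are the groups the classic ec2.py inventory typically builds from the Name tags we set in Terraform.

# configure_cluster.yml -- minimal sketch; group names assumed from
# ec2.py's tag-based groups
- hosts: tag_Name_Master
  roles:
    - master

- hosts: tag_Name_Slaves
  roles:
    - slave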

→ First we will create the instances and then configure them. With the tools installed, we can now create our resources using Terraform.

To create the instances, I have used the ‘module’ concept in Terraform, which creates the resources for us.
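The ‘main.tf’ that drives this is described below but not reproduced in full; a module call like this is all it takes (a minimal sketch, assuming the module code sits in the ./ec2 folder as described next):

# main.tf -- minimal sketch of the module call
module "ec2" {
  source = "./ec2"
}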

→ So let's start building this:

Following is the structure of my Terraform task directory:

Here, the folder Task 19 contains 2 files and 1 directory, ‘ec2’:

  • main.tf: This file consists of the module block that calls the file ‘ec2.tf’ in the ec2 folder (sketched above).
  • key123.pem: This is the private key of the key pair (‘key123’) attached to our instances for authentication.
  • ./ec2/ec2.tf: This file creates the security group and the master and slave instances. Following is the code for this:
provider "aws" {
region = "ap-south-1"
profile = "akhil"
}
resource "aws_default_vpc" "default" {
tags = {
Name = "Default VPC"
}
}
resource "aws_default_subnet" "default_az1" {
availability_zone = "ap-south-1a"
tags = {
Name = "Default subnet"
}
}
resource "aws_security_group" "secure" {
name = "secure"
description = "Allow HTTP, SSH inbound traffic"
vpc_id = aws_default_vpc.default.id
ingress {
description = "http"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "ssh"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "k8s"
from_port = 6443
to_port = 6443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "k8s-1"
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "ping"
from_port = 0
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "security-wizard"
}
}
resource "aws_instance" "task19m" {
ami = "ami-026f33d38b6410e30"
instance_type = "t2.micro"
key_name = "key123"
user_data = "fd50d738924f03fa08bbabe03058dcdcd20d0986"
security_groups = [ "${aws_security_group.secure.name}" ]
tags = {
Name = "Master"
}
root_block_device {
delete_on_termination = "true"
}
}
resource "aws_instance" "task19s" {
ami = "ami-026f33d38b6410e30"
instance_type = "t2.micro"
key_name = "key123"
count = 3
user_data = "fd50d738924f03fa08bbabe03058dcdcd20d0986"
security_groups = [ "${aws_security_group.secure.name}" ]
tags = {
Name = "Slaves"
}
root_block_device {
delete_on_termination = "true"
}
}

→ Files other than these are created by Terraform itself when we create the resources.

→ To install the plugins, i.e. to initialize Terraform, run the command:

# terraform init

→ To start the creation of the resources:

# terraform apply --auto-approve

→ Now the instances will be created, and we can configure the Kubernetes cluster on top of them.

Following are the resources created on AWS:

  1.) Instances:
  • Master instance
  • Slave instances (3 of them, as set by count = 3)

  2.) The security group

  3.) The key pair, which was pre-created

  4.) The volumes attached to the instances

→ So, now that our instances have been created, we can configure our master and slaves. We will now move to the VM on which Ansible is pre-installed.

→ Following is the structure of my folder, in which the playbooks and roles live along with some other files.

  • ‘ansible.cfg’ — This is the configuration file Ansible uses to find the hosts, the remote user and its privileges, the location of roles, keys, etc.
[defaults]
inventory = ./hosts
host_key_checking = false
remote_user = centos
private_key_file = ./key123.pem
ask_pass = false
timeout = 60
[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false
  • ‘key123.pem’ — This is the private key attached to our instances.
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAhxiL2lp7c/uGAWk6X733iaT2soDRPvVQQVFStI1tfax+msZg
LUS9VKip5SygH5dJsN62EaqEbWqoR6ES0tPkiSs6HSydYs2LD54yjCJxCVkJhSrc
cG52X6dpDcSEhQPeJt2hMqnYvW2L7lfI6SOjSaNwXHSvhtmnckEx1xYg8JSP/L76
w9zhuoiWqXX9g4EcSPkWmMperVIli4mYJzD5/MbgWGtbnVJlZ3MlPa9vrRqNf25A
MPkAAKQ8V/qCZ3ZlQI+T64yDKEg5NAkFW2pMrbcmlkfQjKRGrtv7rfo1P0on5KNL
Pw2M+BsXdnDdS+bkNVM8225PzbfFZZOc5gHOEwIDAQABAoIBADx7BMFwmKxIAqpH
DdcnGNcKf1dSzFq/QHq9iaVDW61TuCpafVxG1ew8xjLPU7BQ7rC8RA6MpFTH1yaa
Oe8g5cNzEsVU3/EHzCXl0QNjt+9TaSuxEJdVHLGeJS4AuMNEBASqXCxuVZYYoPjH
XC7jwYqKHReHNb3NW2WPQlzkj0Kk1WAEIRMkaOJZNzXhqSv2AjEKmNccLZXF1VbT
+se/05DkOhYpx1Ibb/25zK7DJy6szyLfEqZzIKlu9+5xMSPKrbOKivYg3iA8+hyU
HfGDKIVfbmaZOkV/drKF2LMrZiy/zFgWCQnX7HPCQv2IJzg7+SWSv4cOUTuwptbl
aDfNGmECgYEA6CxLL1DVEMhjXcIxMYNMX5f0WbPuC2NGAkJ6M/uoaJL9JkmO/eid
nwo7MPep5fP6dQgsxHzEm4q+YJZ82399ERw0dd1povrgNnEQhvZQ99bociZ4mlQF
Hv4EWdWLULTolmYINwldPfMTXyegACf/jEc2XkXBt4ALMyOBvadhy78CgYEAlPXR
fqxTRbN6wFzX1/4efHtMem1ilYO+fZGknVcsDQ/QiptE861aU+DQsg+lpcSJwq58
LHcK1TimSp/fB6TBgjoh0qDB6wWgddogJ4L/59aQRMUuT1OD7jD1sFr/Xy6S6Lar
0lov2jVPRzrcIV5ew2CW6MKTbb3Lip5hmBIUYq0CgYAPp6TuLNIhDpH8qXJttz+4
FmPohIRhijEXR+o7hRWG75pYMY+NuVifd64kEB8JnVje+U0jdpI/Nqy9kIgcuMzz
EWbMJ8DOt4HUyezmXMd63qfPwp5RMacivtgGQqrhJ0Gjmn+lTmFWIwTEXsSgHhJS
IB8fXi7As8aNjTBbXGTwuwKBgH+1XJ2oql/4p0Xik17nzEVXBFN2Em4zHA7V3fbT
NL4iD921juEHf4ioFuSCC7daD+2r4GPSz6PMRK138SPBefHnWvYUwwx2r4I6txSI
+FNQnjGHh9OUu2hr60f+TDDTYjpH2nmmvp3q1IQyD2ZAXShOWDNIFlOgw6+dZ/iT
j4ylAoGBALNSARg4uzeT+GxVRPIdSsMRYKBxTs+lbx5L7v7SrCw5MEXB57GQcILj
D6weJP6+INGz6c3mX/WILCvECEu0sLQUa6tW3gGG3eTy/lsr0eNF24RrXv0XpCvF
I3BFRKmBAxQRjsk3uwjBPa30JlJ/oin6rhA1cddNx7+ZpWR7+bLc
-----END RSA PRIVATE KEY-----
  • ‘vars.yml’ — This file stores the join token generated on the master; the slave_token playbook then uses it so that the slaves get attached to the cluster.
  • ‘slave_token.yml’ — This playbook runs the token on the slaves after vars.yml has been updated (a sketch of it follows this list).
  • ‘hosts/ec2.py’ — This Python script fetches the instances running on AWS and builds the dynamic inventory from them.
  • ‘hosts/ec2.ini’ — This file stores the credentials of the IAM user created in AWS, as shown earlier.
  • ‘pod.yml’ — This playbook creates a pod in the cluster, just to give better proof of whether our cluster is running or not.
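Since ‘slave_token.yml’ itself is not reproduced in this article, here is a minimal sketch of how such a playbook can work, assuming vars.yml stores the full join command under the variable ‘token’ and the slaves sit in the ec2.py group ‘tag_Name_Slaves’:

# slave_token.yml -- minimal sketch
- hosts: tag_Name_Slaves
  vars_files:
    - vars.yml
  tasks:
    - name: "Running the join command on the slaves"
      # the full 'kubeadm join ...' line written into vars.yml by the master role
      command: "{{ token }}"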

→ In the roles directory, I have 2 roles. Each role keeps its code in a tasks file, a vars file, and a files directory, which are as follows:

  • roles/master/tasks/main.yml — This is the main file of the master role, in which we write everything that Ansible will do on the hosts when we run this role.
---
# tasks file for master
- name: "Making Selinux Disabled"
  selinux:
    policy: targeted
    state: disabled

- name: "Creating Docker Repo"
  yum_repository:
    name: docker
    baseurl: "{{ docker_repo_url }}"
    description: "This is Docker Repo"
    gpgcheck: no

- name: "Giving --no-best Option if OS is Redhat"
  replace:
    path: "/etc/dnf/dnf.conf"
    regexp: "True"
    after: "best="
    replace: "False"
  when: ansible_distribution=='RedHat'

- name: "Installing Docker"
  yum:
    name: "{{ docker_pkg }}"
    state: present
    allow_downgrade: True

- name: "Starting Docker"
  service:
    name: "{{ docker_svc }}"
    state: started
    enabled: yes

- name: "Creating Kubernetes Repo for Redhat"
  yum_repository:
    name: "kubernetes"
    description: "Kubernetes repo"
    baseurl: "{{ k8s_repo_url }}"
    gpgkey: "{{ k8s_gpg_url }}"
    enabled: True
    gpgcheck: True
    repo_gpgcheck: True
  register: repo
  when: ansible_distribution=='RedHat'

- name: "Creating Kubernetes Repo for Centos"
  yum_repository:
    name: "kubernetes"
    description: "Kubernetes repo"
    baseurl: "{{ k8s_repo_url }}"
    enabled: True
    gpgcheck: False
    repo_gpgcheck: False
  register: repo

- name: "Installing kubeadm, kubelet, kubectl"
  yum:
    name: "{{ k8s_repo_pkg }}"
    state: present
  register: kubeadm

- name: "Starting kubelet service"
  service:
    name: kubelet
    state: started
    enabled: True

- name: "Pulling Images"
  command: "kubeadm config images pull"

# Resolving Warning-1 Part-1
- name: "Creating file in docker, which will be edited in the next task"
  file:
    path: "{{ docker_driver_path }}"
    state: touch

# Resolving Warning-1 Part-2
- name: "Changing docker driver to systemd"
  copy:
    src: "./files/daemon.json"
    dest: "{{ docker_driver_path }}"
    force: yes

- name: "Restarting Docker Service"
  service:
    name: "{{ docker_svc }}"
    state: restarted

# Resolving Warning-2
- name: "Installing {{ warning2_pkg }} package"
  yum:
    name: "{{ warning2_pkg }}"
    state: present
  when: ansible_distribution=='RedHat'

# Resolving SWAP Error
- name: "Doing swap off"
  command: "swapoff -a"
  register: s
  when: ansible_distribution=='RedHat'

- name: "Copying k8s.conf file"
  copy:
    src: "./files/k8s.conf"
    dest: "{{ k8s_conf_path }}"
    force: yes

- name: "Running sysctl command"
  command: "sysctl --system"

# Creating Containers and Configuring Master
- name: "Start the Cluster"
  command: "kubeadm init --pod-network-cidr={{ cidr_range }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
  register: master_config

- name: debug
  debug:
    msg: "{{ master_config }}"

- name: "Configuring Credentials"
  file:
    path: "{{ k8s_dir_path }}"
    state: directory
    recurse: yes

- name: "Copying admin.conf file"
  copy:
    src: "/etc/kubernetes/admin.conf"
    dest: "{{ k8s_dir_path }}/config"
    remote_src: yes

- name: "Creating Directory to store flannel yaml file"
  file:
    path: "/k8s"
    state: directory

- name: "Downloading Flannel"
  get_url:
    url: "{{ flannel_url }}"
    dest: "/k8s/"

- name: "Installing Flannel"
  command: "kubectl apply -f kube-flannel.yml"
  args:
    chdir: /k8s/

- name: "Creating Token for Slaves"
  command: "kubeadm token create --print-join-command"
  register: token_result

- name: "Adding token to a File"
  lineinfile:
    regexp: "^token: "
    line: "token: {{ token_result.stdout }}"
    dest: "{{ file_dest }}"
    state: present
  delegate_to: localhost

- name: "Fetching Credentials File"
  fetch:
    src: "{{ k8s_dir_path }}/config"
    dest: "./creds/"
    flat: yes
  • roles/slave/tasks/main.yml — This is the main file of the slave role, written the same way as the master's.
---
# tasks file for slave
- name: "Making Selinux Disabled"
  selinux:
    policy: targeted
    state: disabled

- name: "Creating Docker Repo"
  yum_repository:
    name: docker
    baseurl: "{{ docker_repo_url }}"
    description: "This is Docker Repo"
    gpgcheck: no

- name: "Giving --no-best Option for Redhat"
  replace:
    path: "/etc/dnf/dnf.conf"
    regexp: "True"
    after: "best="
    replace: "False"
  when: ansible_distribution=='RedHat'

- name: "Installing Docker"
  yum:
    name: "{{ docker_pkg }}"
    state: present
    allow_downgrade: True

- name: "Starting Docker"
  service:
    name: "{{ docker_svc }}"
    state: started
    enabled: yes

- name: "Creating Kubernetes Repo"
  yum_repository:
    name: "Kubernetes"
    description: "This repo will install kubeadm, kubelet, kubectl"
    baseurl: "{{ k8s_repo_url }}"
    enabled: True
    gpgcheck: False

- name: "Installing kubeadm, kubelet, kubectl"
  yum:
    name: "{{ k8s_repo_pkg }}"
    state: present

- name: "Starting kubelet service"
  service:
    name: kubelet
    state: started
    enabled: True

# Resolving Warning-1 Part-1
- name: "Creating file in docker, which will be edited in the next task"
  file:
    path: "{{ docker_driver_path }}"
    state: touch

# Resolving Warning-1 Part-2
- name: "Changing docker driver to systemd"
  copy:
    src: "./files/daemon.json"
    dest: "{{ docker_driver_path }}"
    force: yes

- name: "Restarting Docker Service"
  service:
    name: "{{ docker_svc }}"
    state: restarted

# 'command' cannot handle pipes, so 'shell' is used here
- name: "Checking Systemd Driver"
  shell: "docker info | grep systemd"
  register: docker_driver

- name: "Printing Output of Docker Driver"
  debug:
    msg: "{{ docker_driver }}"

# Resolving Warning-2
- name: "Installing {{ warning2_pkg }} package"
  yum:
    name: "{{ warning2_pkg }}"
    state: present
  when: ansible_distribution=='RedHat'

# Resolving Warning-3 (SWAP)
- name: "Doing swap off"
  command: "swapoff -a"
  when: ansible_distribution=='RedHat'

- name: "Copying k8s.conf file"
  copy:
    src: "./files/k8s.conf"
    dest: "{{ k8s_conf_path }}"
    force: yes

- name: "Running sysctl command"
  command: "sysctl --system"
  • roles/master/vars/main.yml — This file contains the values of the variables used in the tasks file.
---
# vars file for master
docker_repo_url: https://download.docker.com/linux/centos/7/x86_64/stable/
docker_pkg: docker-ce-19.03.8-3.el7.x86_64
docker_svc: docker
k8s_repo_url: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
k8s_gpg_url: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
k8s_repo_pkg:
  - kubeadm
  - kubectl
  - kubelet
docker_driver_path: /etc/docker/daemon.json
warning2_pkg: iproute-tc
cidr_range: 10.240.0.0/16
k8s_conf_path: /etc/sysctl.d/k8s.conf
k8s_dir_path: $HOME/.kube
flannel_url: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
file_dest: ../../../vars.yml
  • roles/slave/vars/main.yml — This file contains the values of the variables used by main.yml in the tasks folder of the role.
---
# vars file for slave
docker_repo_url: https://download.docker.com/linux/centos/7/x86_64/stable/
docker_pkg: docker-ce-19.03.8-3.el7.x86_64
docker_svc: docker
k8s_repo_url: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
k8s_gpg_url: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
k8s_repo_pkg:
  - kubeadm
  - kubectl
  - kubelet
docker_driver_path: /etc/docker/daemon.json
warning2_pkg: iproute-tc
cidr_range: 10.240.0.0/16
k8s_conf_path: /etc/sysctl.d/k8s.conf
k8s_dir_path: $HOME/.kube
flannel_url: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
file_dest: ../../../vars.yml
  • roles/master/files/ — This directory has 2 files which I copy to the master. They resolve the kubeadm warnings: the Docker cgroup driver must be changed to systemd, and some kernel variables need to be set to 1. Following are the files (their typical contents are sketched after the names):

— daemon.json

— k8s.conf
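The exact contents of these two files are not shown in the article; typical contents that match what the tasks use them for would be the following (an assumption, not necessarily the author's exact files). daemon.json switches Docker's cgroup driver to systemd:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

and k8s.conf sets the bridge variables to 1 so that bridged pod traffic passes through iptables:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1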

→ The slave role also has the same two files, as they are.

→ I have edited the README.md file in both roles to describe the variables and features of each role.

→ Let us run the playbook ‘configure_cluster.yml’:

# ansible-playbook configure_cluster.yml

→ Remember the last digits of the token, i.e. ‘f0b2b’; we will use them to verify the token present in vars.yml.

Now the slaves are being configured:

→ As our master and slaves are configured, we just have to run the token on our slaves, for which we have the playbook ‘slave_token.yml’. But first we verify that the token has been updated in the vars.yml file, by matching its last few characters against the output above.

# cat vars.yml
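The file should now hold a single ‘token’ line containing the full join command, along these lines (the IP, token, and hash below are placeholders):

token: kubeadm join 172.31.x.x:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:...f0b2b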

→ As we can see, the token is the same, so now we can run it on the slaves.

# ansible-playbook slave_token.yml

— Actually, I re-installed the cluster in between, so the token value and the IP have changed; if you run everything in one go, this won't happen.

Now our cluster has been configured.

→ So now we create a pod from the master, which gives us further proof that our cluster is configured correctly and is able to deploy applications.

# ansible-playbook pod.yml
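The pod.yml playbook is also not reproduced here; a minimal sketch that would achieve the same result by running kubectl on the master (the pod name ‘testpod’ and the image are illustrative, not the author's exact choices):

# pod.yml -- minimal sketch
- hosts: tag_Name_Master
  tasks:
    - name: "Creating a test pod"
      command: "kubectl run testpod --image=httpd"
    - name: "Checking pod status"
      command: "kubectl get pods"
      register: pods
    - name: "Showing pod status"
      debug:
        msg: "{{ pods.stdout_lines }}"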

→ We can see that our pod has been created.

OUTPUTS SECTION:

→ So now we will SSH into our master and look at the Kubernetes cluster directly.
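For example (the master's public IP is a placeholder here):

# ssh -i key123.pem centos@<master-public-ip>
# kubectl get nodes
# kubectl get pods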

→ I have done one more thing: the playbook also fetches my credentials file, so anyone who has that file can enter my Kubernetes cluster and deploy applications. The credentials of this cluster are stored in the ‘creds’ directory, in the file named ‘config’.
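So, from any machine that has kubectl installed and network access to the cluster's API server, a quick check could be:

# kubectl --kubeconfig ./creds/config get nodes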

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EUXhOVEV5TVRBek5Gb1hEVE14TURReE16RXlNVEF6TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS1NOCk40ODU3YU00R0gwY29FdmtYaEk3a3Y0SnB5OUZlMThIclZRRU9HYlZMcmJaVU01cmQ4Z3FSb1BCWHhwYXlYMGMKU0FXWWdoQjl4VU54OEMxd2JBalpTWjVucTkvMVhYcDFsSUFsMllDOE9wai9IUnRYMHozVFdUMDc4Y3U3S1FHOQpDSk5OOEQ5ZVFnb0NKVkczNHQydFhvekZ1UmV2S2FvTFZrK2E0RHRodEQ0SmJOanhvYmRBMCs1d3pKa2hvakwxCkR4TTRGZGh3cmdqWCsyNXJJUGNidHJMTmFoLzUxZ3dicC94dVdUdmxyMnUvQlluWFJab3pmc2pUaEQvSDQ5NGwKSHFTKzFaaUtWV1NRd1haUFFDSGFpSGJNS0M5N2UwcmZaYXpmTWcyYWt5bzY4NmRjOWpmbjNINk5QSzdEdk1CcQpLTWxZRnhQaktvcUtyK0tIU0FVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKTXZpU3pXbUQ5RDNlaU5iSGJ5UGNNT0dGUjBNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCdW9BWENKY1RBSjlhbyt3WXVxY01wY1lXMmVYdk96T0crU2tVTnRpQWd0aHNWZlIveQplOTZYbW1vQmJrbzlQaHgvRDRoMGZjVkszVEo2US9SQlN0SGZLWVNKUFgwVERPYUVmbTY4VE9YMWNLODVleENCClh3V3N2dVFOZkwyUjBZemphaU5nV1QvdzJwV0R3bU54UTN3ZnJqbFRzSzQ0NFRGdEJiYXcxVldlRzhqRzJWK2IKYkNGQ2JwZ2JheWs1R1JwZVJzL2p3TTY5RmZpVzVDd2s5a1l2MzhJbEgvd3l3OWlUZm9xL2E1c2V6NW82bE1ZRwp5dDZSYjBkcksyUzYxRE01MmE3UTF5ZDVqZWg3ZU9MdWpaVk1IUi9pNSttd0pRS0l1aWptZEljK2liN3M2YUUvClNQTTdScU1iTG9lbnpkeDNxTnhtUC9QL215THVHV1EzSDAxTAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://172.31.7.190:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYmVtNG5kWFBySmt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBME1UVXhNakV3TXpSYUZ3MHlNakEwTVRVeE1qRXdNemhhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXVuTUdFUDFTbHd6ZHhMWUcKczM5UVhoeE9BWC9TUTVvdmtiOUR5VFM3RFk1eXA2eDFYMVRkR2tkTTNibGMvY0l1bmc1Wi9UNDVkM1dpbUJOQwpZZmU4QUdOKzc3VVphZlVWMlJQS1lTamZLK3JMU3dQRFpFVzBmQVJtcFZUUWtqVThkcnc4N2llQUZjd1FxQTdxCkhhUDRwNG1tY0FIc2x5VE4wUldvVnhIcnR3VEptM3NYdm1tZlVaZ3Y2RmhaU09KQ0U0VWlkS2hKbVhnMGlpVG0KaTZYYUxnNy9XRFRKNHE2WVpuZ1FFQk1QNnJQRVQ0MjZrZ21RbUV6NG1PcWo2NDIyRzhvSlRhaFVIdFZ0clRJRQo4NGlQb2NlU2lpZ1dWUFVNYmgxdGFBTUxQRVNsSmxCWGd0clcwemR1YXlLdEZvaERWM3BsdElWZElBdEhRbmdiCmluR1BrUUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JTVEw0a3MxcGcvUTkzb2pXeDI4ajNERGhoVQpkREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBUHhINDNHWEVHd3doa2ZSNGx3aWJyTWhseVRHNFlDbnBZREt2CnJsTTZIaitDb1V4RjhxTHU1MDNXR3J4Q2ZsemVwbWNIMTRFSXR3dkliZmZQNHhwa1V4N2kxNGxabEk3Q0FTb0gKN0IvVE9aUGhFcEZHQ1Q5cVRpelRLTkVUWk1JOG4yU1YwUmQxSk1pSUdqRVVTc25MS2E3dkpvMnl2NDI0K0ZxRQpCVzZDbWlDZ2NYcjN1WFNGVWJXZ01lRSt1VUhKakRmdWFzbzQ1VHBxbCszNDZaV1Rad1hYUkhJLzljb2VocmxxCnkrcjRNZUkxbG05MElCNjQ1dE9xQmN2QVJlMlloUWNPRWd2aDhWRkl1U09pQVZ4UkRBbVM3TndVbXMwRmFQR2kKOGZNSWZnVWVDNzY5aVFHQUY4Mm5lSk0yNXpobW9TRGd0SElaelU5eHNBdlZ2YnNxQ3c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdW5NR0VQMVNsd3pkeExZR3MzOVFYaHhPQVgvU1E1b3ZrYjlEeVRTN0RZNXlwNngxClgxVGRHa2RNM2JsYy9jSXVuZzVaL1Q0NWQzV2ltQk5DWWZlOEFHTis3N1VaYWZVVjJSUEtZU2pmSytyTFN3UEQKWkVXMGZBUm1wVlRRa2pVOGRydzg3aWVBRmN3UXFBN3FIYVA0cDRtbWNBSHNseVROMFJXb1Z4SHJ0d1RKbTNzWAp2bW1mVVpndjZGaFpTT0pDRTRVaWRLaEptWGcwaWlUbWk2WGFMZzcvV0RUSjRxNllabmdRRUJNUDZyUEVUNDI2CmtnbVFtRXo0bU9xajY0MjJHOG9KVGFoVUh0VnRyVElFODRpUG9jZVNpaWdXVlBVTWJoMXRhQU1MUEVTbEpsQlgKZ3RyVzB6ZHVheUt0Rm9oRFYzcGx0SVZkSUF0SFFuZ2JpbkdQa1FJREFRQUJBb0lCQUNrS2pMbE1xZE5xRjU3bgpXbzVFWmhKeE5KS0w2bUxMRzlGL1FwS1ZzdDhIRGlIdWlsK1R0Si9HTmh0UVpESFBmcWQ0RFVMN1lYYjBRL2dwCnRTRVBnU2lzdmhKUjBPaEw5S1UyQUFSbkZNajhCQWZkS2pOMlRJWklDYmcyOVRwWjBaZHBWQmd3UmJlR2xkd0kKZkd4TjNid3pScG05TXJFS2Z1dVpVdGJuc1BVMkRod21kVE9Mcy9Cd1VwQ0N1Z0NZQTBxM1RyblNwWmZMRVY2SQpMY2hSTnN1TUhkMmNkemtabUJxV3VsMEZvQW1pTlBQbFRHcm96QXZlaFJiZ1FuUUJwWk5xUitQaXVEbVJ6Si9CCkFldGZaRTB6NlM1VWl5VW5kc1lnaTh3RXR1WUlXdHZjcWdQQSs2bFZkQWxtN20zNnFEbTMzVTV3Z0Y0WnRsVW4KbUhaYXFJRUNnWUVBeW1ZdDIybGsrV0tIa250RHZPTnZuWldZckxCR29GeXBaa1ZENWVOTkdaV29VendrU2RLTQpCQ1BhZEpySXc5ZVV5cnZxR1V5dEdrVEoydXBuY1lPSTF1aGJiVzhnVEwzQXRlMkwrNjBkU0hOOUt5aWpEWmI3CjhjZXgwQkpscnJaQW9IMitLcWJmTTlhR2JBODBON2R0ajVwR3dXcWRHeGFuMFdQYm4zUlA4MWtDZ1lFQTY5T0QKT3M1UDNSN2VCWlFyVVAwc2NVUEhLeHZGeXZ0cnVualBteTI5ei9NWjRSRlhFL2ExYStjMTJVbnJld3NHVEVVZwpxUzZOMlhFc1NXcjcyQ1dUeEpsVXBEVHhPVDFYTUpyaXphMmE0TDNydmNOTnRsWVVKbDZaQW5DTExHdFczLzN2CnU2SlR2S1JNUXBWZi82cGwzcElUOXhXOCtzSlZTcWs3dmVOM0R2a0NnWUE1U1FSUHBwdnFyY1Y3WXlILzgwdGoKQ2JWRm96cktKby9YbjJFaFR0MUNrWVlyME1qZ2tCUUxFKzYvdEJPQXdxS2RZdVJXTnNxRHRkWi8vSG84dWFMZwpXTEdQM3JVQW8zQkl6YXdpRnBSRUxsUE9CRmxwL2tMZTRzdGovZUVEdXhlOWxQbGU3dzRiaU90UTZGaTZNRk4xCklwQkdMQnU5VUFNOWs0clVyY0gzT1FLQmdRRFhQVzRCTmx6ZVRWWUhjWDAzcGx2eDVSTGIzYlZoMXFnMHdoOTYKV3YxcjEwNC9oandjRklqeHUwNEN6TjBJcUw5T3phbEp3UnZtNHN5eEZkeFhJN1VETTQ5MWNIemE2WW15NnlzbQozdFVGVzFMWEdITE5nVE5TOGZSbHJhTFpILzlpNGJyMVh1dGV1ZmFBcnlXM2pDYitSZ1hDOXl4TkV5SVZ5dkNrCkdBQ0t3UUtCZ0VOQUZWMmhNREp4dytvNUZhMnJ4Qld3U3ZYNTdab0VZcWpWWXdwZlNWQUNpL0pDUzdzdkV2cjMKQmk3V1ZMaWF3RkYwUmFTQnR1Zko1OW9vNU5Zdm9WZVJJbW94MUpSbG5JSit4TXZQZzh6L1FuSWZWM3h2cVFregprUG5nY0tqSEdaUGlKc0VKaWQ2UGJaUDlzSEIwS2RZWFpRZHkxY3NuMGVnWVFPUlpFbW4yCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

The config file looks something like this.

→ So, this was all about this article. I have also created videos that demonstrate this task in more detail.

Video:

→ To create the cluster using roles:

→ To create the cluster using playbooks:

GitHub URL:

→ There are 3 folders inside this repository:

  • terraform_code: This folder contains the Terraform code, i.e. the code used to create the resources on AWS. It is used with both variants, ‘using_Roles’ and ‘playbooks’.
  • using_Roles: This folder contains the code used in this article, i.e. doing the task with roles.
  • playbooks: This folder contains code that does the same task, but using plain playbooks instead of roles. I will attach a video tutorial showing the usage of this code as well.

I have learned these tools under the mentorship of Mr. Vimal Daga Sir during the ARTH training.

I hope this article is informative and explanatory. Hope you like it!!!

Please give some claps if you liked this article!!!

For any suggestions, or if any reader finds any flaw in this article, please email me at “akhileshjain9221@gmail.com”.

Thank you, readers, for viewing this!!!
