Creating an HAProxy Load Balancer Cluster on AWS Using Ansible as the Automation Tool

In this article, I will create a load balancer cluster using the HAProxy server. This cluster will be built entirely on AWS (public cloud). To create this deployment, I will use Ansible as the automation tool.

What is Ansible?

Ansible is a software tool that provides simple but powerful automation for cross-platform computer support. It is primarily intended for IT professionals, who use it for application deployment, updates on workstations and servers, cloud provisioning, configuration management, intra-service orchestration, and nearly anything a systems administrator does on a weekly or daily basis. Ansible doesn’t depend on agent software and has no additional security infrastructure, so it’s easy to deploy.

What is HAProxy?

HAProxy is free, open-source software that provides a high-availability load balancer and proxy server for TCP- and HTTP-based applications, spreading requests across multiple servers. It is written in C and has a reputation for being fast and efficient in terms of processor and memory usage.
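As a conceptual sketch (not HAProxy's actual code), the round-robin strategy our backend will later use simply hands each new request to the next server in circular order; the addresses below are made up for illustration:

```python
from itertools import cycle

# Round-robin in miniature: requests are assigned to servers in
# circular order, so the load spreads evenly. IPs are illustrative.
backends = ["10.0.1.11:80", "10.0.1.12:80", "10.0.1.13:80"]
pick = cycle(backends)

for request_id in range(6):
    print(f"request {request_id} -> {next(pick)}")
```

After all three servers have received a request, the cycle starts again from the first one.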

Prerequisites for this setup:

  • An IAM role in AWS

You can refer to the link below to create an IAM role:

  • Ansible pre-installed on a system to run the playbooks and related tooling –

— Link to install Ansible on CentOS/Red Hat systems:

→ I have used the commands for Python 3 (you can use either version).

— Here, in this task, I am using the concept of a dynamic inventory: an inventory that automatically fetches whatever we need from AWS. The dynamic inventory consists of scripts that retrieve this data using our AWS credentials.

— Also, I will be creating roles, which are a better way of organizing work in Ansible. A role is a predefined folder structure containing different directories for different functions, like:

  • the tasks directory holds the main file/playbook.
  • the vars directory stores variables.
  • the templates directory stores templates written in Jinja format.
    There are other folders as well, each with its own use.
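As a sketch, a freshly created role has a layout like this (the annotations are mine):

```
webserver/
├── defaults/     # default variable values
├── files/        # static files to copy
├── handlers/     # handlers (e.g. service restarts)
├── meta/         # role metadata
├── tasks/        # main.yml with the tasks
├── templates/    # Jinja2 templates
├── tests/        # test playbook
└── vars/         # role variables
```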

→ So, let's create our roles, playbooks, Ansible configuration file, etc.:

First, we will create our dynamic inventory. The location I am using is /etc/ansible/hosts/ .

Link for ec2.py →
https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py

Link for ec2.ini →
https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini

Now, to make the inventory script executable, change the file mode with the following commands (strictly speaking, only ec2.py needs the execute bit; ec2.ini only needs to be readable):

# chmod +x ec2.py
# chmod +x ec2.ini

We have to change the first line of ec2.py (the shebang) so that the script runs under Python 3, e.g. #!/usr/bin/python3 :

Now, we have to add the region, access key, and secret access key of the IAM role to the ec2.ini file. You can add them like this:
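As a sketch, the relevant lines look like this (the section and key names come from the stock ec2.ini; the values are placeholders you must replace with your own):

```ini
[ec2]
# limit the inventory to our region instead of scanning all of them
regions = ap-south-1

[credentials]
# placeholders; use your IAM user's real keys
aws_access_key_id = <YOUR_ACCESS_KEY>
aws_secret_access_key = <YOUR_SECRET_KEY>
```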


Now the dynamic inventory is set up; we can check our hosts using the following command:

# ./ec2.py --list

In my case, the following error appeared, so to avoid it I commented out the mentioned line 172 in ec2.py.

The second line shown will be commented out.

If this error does not appear in your case, don't change anything; your inventory is ready to work.

We can use one more command to check the hosts:

# ansible all --list-hosts

→ Now we will create a folder that will contain our roles, playbook, and .cfg file.

For this, first create a key pair on AWS, which will be attached to our VMs, and copy this key pair into the folder, named “task3”.
— Change the permissions of this key pair; we will add its path to our .cfg file.

# chmod 400 ansibletask3key.pem

— Now we will create our configuration file, which is the main file: whenever a playbook runs, the settings written in ansible.cfg are checked and applied.

In the defaults section, we point to our inventory (in this case the dynamic inventory), set roles_path to where our roles live, set the remote user (by default the user on our VMs will be ec2-user), and give the path of our private key. Ansible works over SSH, so without this key, authentication will fail.
In the privilege escalation block, we tell Ansible to become root before doing its work.
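A minimal sketch of that ansible.cfg, matching the description above (the paths are assumptions based on this article's setup; adjust them for your system):

```ini
[defaults]
inventory         = /etc/ansible/hosts
roles_path        = /root/task3
remote_user       = ec2-user
private_key_file  = /root/task3/ansibletask3key.pem
host_key_checking = False

[privilege_escalation]
become          = true
become_method   = sudo
become_user     = root
become_ask_pass = false
```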

— Now we will create two roles: one to set up the HAProxy load balancer on 1 VM, and another to set up webservers on the other 3 VMs.
To create a role, the following command is used; here ‘lb’ and ‘webserver’ are the names of our role folders.

# ansible-galaxy init lb
# ansible-galaxy init webserver

We can check how many roles are in our directory by using the following command:

# ansible-galaxy role list

— Now we will configure our roles:

  • First, I am configuring the webserver role; the following edits go into the described directories:
tasks/main.yml

---
# tasks file for webserver
- name: install httpd
  package:
    name: "httpd"
    state: present

- name: copy content
  copy:
    content: "Task3 Successfully completed\n, hello from {{ ansible_hostname }} "
    dest: /var/www/html/index.html
  notify: restart service

- name: service httpd start
  service:
    name: "httpd"
    state: started
    enabled: yes

handlers/main.yml

---
# handlers file for webserver
- name: "restart service"
  service:
    name: "httpd"
    state: restarted

→ This will install the httpd webserver, configure it by writing this content to the destination file, and then start its service. The handler is created because whenever we change the content or the file, the service must restart for the changes to take effect.

  • Now we will configure the “lb” role; the following edits go into the described directories:
tasks/main.yml

---
# tasks file for lb
- name: "installing haproxy"
  package:
    name: "haproxy"
    state: present

- template:
    src: "haproxy.j2"
    dest: "/etc/haproxy/haproxy.cfg"
  notify: "restart service"

- service:
    name: "haproxy"
    state: started
    enabled: yes

handlers/main.yml

---
# handlers file for lb
- name: "restart service"
  service:
    name: "haproxy"
    state: restarted

templates/haproxy.j2

— The template file “haproxy.j2” is as follows:

#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# https://www.haproxy.org/download/1.8/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    # utilize system-wide crypto-policies
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main
    bind *:2629
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    use_backend static          if url_static
    default_backend             app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin
{% for webserver in groups["tag_Name_Ansible_WebServer"] %}
    server app{{ loop.index }} {{ webserver }}:80 check
{% endfor %}

→ This is the complete haproxy.cfg. Here I have copied the full file to the load balancer VM. We could instead copy the other VMs' IPs to the LB VM manually, but it is better to automate this with code.
I have only modified the end of the file, where the backends have to be added.

I have used a for loop over the group “tag_Name_Ansible_WebServer”, which contains the IPs of our webservers; the loop writes those IPs into the backend block. In the frontend block, I have given port number 2629, our custom port, which we will use when viewing the output in the browser.
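For illustration, with three webservers the rendered end of haproxy.cfg would look roughly like this (the IPs are made up, and each server line carries a unique name, e.g. via Jinja's loop.index):

```
backend app
    balance     roundrobin
    server app1 65.0.1.10:80 check
    server app2 65.0.1.11:80 check
    server app3 65.0.1.12:80 check
```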

— Now we will create a vault that will store our access key and secret access key. The file name is “vault.yml”; the steps to create the vault are simple:

# vi vault.yml
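The vault file simply holds the two variables that the ec2.yml playbook later references; a sketch with placeholder values:

```yaml
ak: "AKIAXXXXXXXXXXXXXXXX"    # your IAM access key
sak: "xxxxxxxxxxxxxxxxxxxx"   # your IAM secret access key
```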

Now encrypt your vault using the following command:

# ansible-vault encrypt vault.yml

— Now we will create a playbook, “ec2.yml”, which will create the VMs on AWS Cloud.

- hosts: localhost
  gather_facts: false
  vars_files:
    - vault.yml
  tasks:
    - name: "installing boto"
      pip:
        name: "boto"
        executable: pip3

    - name: "installing boto3"
      pip:
        name: "boto3"
        executable: pip3

    - name: "creating security group"
      ec2_group:
        aws_access_key: "{{ ak }}"
        aws_secret_key: "{{ sak }}"
        name: 'launch-wizard-1'
        description: 'sg with rule descriptions'
        vpc_id: 'vpc-224d514a'
        tags:
          Name: "task3-sg"
        region: "ap-south-1"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
            rule_desc: allow all on port 22 for ssh
          - proto: tcp
            cidr_ip: 0.0.0.0/0
            ports:
              - 80
            rule_desc: allow all on port 80 for webserver
          - proto: tcp
            cidr_ip: 0.0.0.0/0
            ports:
              - 2629
            rule_desc: allow all on port 2629 for loadbalancer
        rules_egress:
          - proto: all
            from_port: 0
            to_port: 0
            cidr_ip: 0.0.0.0/0

    - name: "LoadBalancer"
      ec2:
        count: 1
        image: "ami-0ebc1ac48dfd14136"
        instance_type: t2.micro
        region: "ap-south-1"
        wait: yes
        instance_tags:
          Name: Ansible_LoadBalancer
        group: "launch-wizard-1"
        key_name: "ansibletask3key"
        state: present
        aws_access_key: "{{ ak }}"
        aws_secret_key: "{{ sak }}"

    - name: "webserver"
      ec2:
        count: 3
        image: "ami-0ebc1ac48dfd14136"
        instance_type: t2.micro
        region: "ap-south-1"
        wait: yes
        instance_tags:
          Name: Ansible_WebServer
        group: "launch-wizard-1"
        key_name: "ansibletask3key"
        state: present
        aws_access_key: "{{ ak }}"
        aws_secret_key: "{{ sak }}"

In this file, we create a security group and 4 VMs (1 for the load balancer and 3 for webservers); the security group is attached to all 4 VMs.

→ One thing I have added in the security group is port “2629”, our load balancer port, which opens a gateway to our load balancer.

To know more, you can refer to my blog, in which I have explained this AWS setup in more detail, although there I did not use the dynamic inventory concept.

— Now we will create a playbook that runs these two roles.
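A sketch of what this lbcluster.yml looks like, assuming the tag-based group names that ec2.py generates from our instance tags:

```yaml
- hosts: tag_Name_Ansible_WebServer
  roles:
    - webserver

- hosts: tag_Name_Ansible_LoadBalancer
  roles:
    - lb
```

The webserver play comes first so the backends exist before HAProxy is configured to point at them.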

→ You might be wondering how I know these host group names will be there; I will show you this once we run ec2.yml, as the inventory is updated only then.

Note: We will run ec2.yml first and then lbcluster.yml, since the VMs must be created before we can configure them.

→ As our setup is complete, we can now run our playbook:

# ansible-playbook ec2.yml --ask-vault-pass

— As we can see, our instances are created; now we can configure them by running the lbcluster.yml playbook.

# ansible-playbook lbcluster.yml

— As discussed above, to find out the host group names, use the following command in the dynamic inventory folder:

# ./ec2.py --list
→ In the output of this command, you can see two groups named “tag_Name_Ansible_WebServer” and “tag_Name_Ansible_LoadBalancer”, which contain our VMs' IPs.
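For reference, the relevant part of the JSON that ./ec2.py --list prints looks roughly like this (the addresses are placeholders, and the real output also contains many other groups plus a "_meta" block of host variables):

```json
{
  "tag_Name_Ansible_LoadBalancer": ["13.233.x.x"],
  "tag_Name_Ansible_WebServer": ["13.126.x.x", "3.108.x.x", "65.0.x.x"]
}
```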

— So now we can open the load balancer's IP in the browser with our given port number:

— As we can see, our webservers are accessible using the load balancer's IP and the custom port number.

Github URL :

“I have practiced and gained all knowledge of this project(task) under the mentorship of Mr. VIMAL DAGA Sir during the Ansible Training by Linux World India.”

I hope this article is informative and explanatory. Hope you like it, and give it some claps!

For any suggestions, or if any reader finds a flaw in this article, please email me at “akhileshjain9221@gmail.com”.

Thank You Readers!!!

I am a student pursuing an undergraduate degree in computer science and engineering.