Creating an EC2 Instance using Terraform

“Core Services used for this automated environment”

DESCRIPTION OF THIS PROJECT -

In this project we write code for Terraform, an infrastructure-as-code tool by HashiCorp. Using a single code file, we create a key pair, a security group for the instance, and an EBS volume that is mounted to the instance for storage. Using the S3 service, we create a bucket that will store an image, and we set up CloudFront as a CDN and integrate it with S3. Below is the image which shows the workflow of what I have done in this project :-

GITHUB LINK FOR TERRAFORM CODE USED IN MY PROJECT :-

PROCESS AND EXPLANATION STEPWISE :-

  1. Install Terraform from the official site “terraform.io”, unzip it, and save it in a folder of your choice. Then add the path of this folder to the Path variable. For this :-

go to the search bar → search “env” → click on Environment Variables → edit the Path variable → New → paste the folder’s path

  • Then, in the CLI, use the command “terraform version” to check whether Terraform is installed and working.
  • Then create a folder for the Terraform project with any name, and inside it create a file with any name but with the .tf extension. We will be typing the code for our project in this file.

2. Next we need to configure the AWS CLI so that Terraform can use its credentials. Open cmd and execute the following command:

aws configure   [ in the CLI of Windows, in my case ]

Then enter the access key ID, the secret access key, and the default region name.
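For reference, the prompts look like this (the key values below are placeholders, not real credentials):

```
C:\> aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-south-1
Default output format [None]: json
```

Since the provider block in this project uses the profile “akhil”, the same values can be stored under that named profile with “aws configure --profile akhil”.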

3.

```hcl
provider "aws" {
  region  = "ap-south-1"
  profile = "akhil"
}
```

  • The provider block comes first because it tells Terraform which plugin to install. My profile name is included here so Terraform picks up the credentials configured above, and the region is required.
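As an optional extra (a sketch, not part of the original file), it is good practice to also pin the provider version so the code keeps working as new releases come out; the version constraint below is only an example:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.0"  # example constraint; pick the version you tested with
    }
  }
}
```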

4.

```hcl
resource "tls_private_key" "keypair" {
  algorithm = "RSA"
}

resource "local_file" "privatekey" {
  content  = tls_private_key.keypair.private_key_pem
  filename = "key1.pem"
}

resource "aws_key_pair" "deployer" {
  key_name   = "key1.pem"
  public_key = tls_private_key.keypair.public_key_openssh
}
```

  • This block creates a private key, saves it locally as key1.pem, and registers the matching public key in AWS as a key pair that will be used to log in to the instance.

5.

```hcl
resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}
```

  • This block adopts the default VPC so that it can be referenced when creating the security group in the next block.

6.

```hcl
resource "aws_security_group" "secure-task" {
  name        = "secure-task"
  description = "Allow HTTP, SSH inbound traffic"
  vpc_id      = aws_default_vpc.default.id

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "security-wizard-webserver"
  }
}
```

  • This block creates a firewall for our instance which allows SSH for remote login and HTTP for the webserver.

7.

```hcl
resource "aws_instance" "job1" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.deployer.key_name
  security_groups = ["secure-task"]

  tags = {
    Name = "job1"
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.job1.public_ip
    private_key = tls_private_key.keypair.private_key_pem
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }
}
```

  • This block launches the instance. Note that key_name must be the name of the key pair, not the private key itself, so it references aws_key_pair.deployer.key_name. The connection block defines how to connect over SSH, and the remote-exec provisioner runs the listed commands once the connection is established, installing and enabling the Apache webserver.
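To see the instance’s public IP after an apply without digging through the state file, a small output block can optionally be added (the output name here is my own choice):

```hcl
output "webserver_public_ip" {
  value = aws_instance.job1.public_ip
}
```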

8.

```hcl
resource "aws_ebs_volume" "mountesb" {
  availability_zone = aws_instance.job1.availability_zone
  size              = 5

  tags = {
    Name = "mountesb"
  }
}
```

  • This block creates a 5 GiB EBS volume in the same availability zone as the instance, so that it can be attached and mounted in the next step.

9.

```hcl
resource "aws_volume_attachment" "ebs_attach" {
  device_name  = "/dev/xvdh"
  volume_id    = aws_ebs_volume.mountesb.id
  instance_id  = aws_instance.job1.id
  force_detach = true
}

resource "null_resource" "nulllocal2" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.job1.public_ip} > publicip.txt"
  }
}

resource "null_resource" "nullremote3" {
  depends_on = [
    aws_volume_attachment.ebs_attach,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.keypair.private_key_pem
    host        = aws_instance.job1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/akhilesh-jain1729/integrated-webserver-using-terraform.git /var/www/html",
    ]
  }
}
```

  • This block attaches the volume to the instance. A connection block again opens a remote SSH session; once connected, the remote-exec commands format the volume with an ext4 file system, mount it on /var/www/html, and clone the webserver code from GitHub into that directory. The local-exec provisioner also saves the instance’s public IP into publicip.txt on the local machine.

10.

```hcl
resource "aws_s3_bucket" "bucket-first-task" {
  bucket = "bucket-for-webserver"
  acl    = "public-read"
}
```

  • This block creates an S3 bucket with the public-read ACL.
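One caveat for readers on newer versions of the AWS provider (v4 and later): the acl argument moved out of aws_s3_bucket into its own resource, so the equivalent there looks roughly like this:

```hcl
resource "aws_s3_bucket" "bucket-first-task" {
  bucket = "bucket-for-webserver"
}

resource "aws_s3_bucket_acl" "bucket-acl" {
  bucket = aws_s3_bucket.bucket-first-task.id
  acl    = "public-read"
}
```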

11.

```hcl
resource "aws_s3_bucket_object" "image-upload" {
  bucket = aws_s3_bucket.bucket-first-task.bucket
  key    = "first-img.jpeg"
  source = "F:/STUDY/PRACTICE/terraform/job1/skillset.jpg"
  etag   = filemd5("F:/STUDY/PRACTICE/terraform/job1/skillset.jpg")
  acl    = "public-read"
}
```

  • This block uploads the image from my PC to the bucket, where it is saved as “first-img.jpeg” with the public-read ACL. Note that source must be the path to the file itself; filemd5() returns a hash, so it belongs in the etag argument, which lets Terraform detect when the local file changes.

12.

```hcl
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.bucket-first-task.bucket_regional_domain_name
    origin_id   = "S3-${aws_s3_bucket.bucket-first-task.bucket}"

    s3_origin_config {
      # a reference expression, not a string in quotes
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-${aws_s3_bucket.bucket-first-task.bucket}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullremote3,
  ]

  provisioner "local-exec" {
    command = "chrome ${aws_instance.job1.public_ip}"
  }
}
```

  • This block creates the CloudFront distribution and integrates it with S3. Note that origin_access_identity must be a reference expression rather than a quoted string, and it assumes an aws_cloudfront_origin_access_identity resource named origin_access_identity is defined in the file. The local-exec provisioner then opens Chrome with the webserver’s IP automatically once everything is deployed.
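With the whole file in place, the standard Terraform workflow builds (and later tears down) the environment; run these from the folder containing the .tf file:

```
terraform init      # downloads the aws, tls, local and null provider plugins
terraform plan      # previews everything that will be created
terraform apply     # creates it all (type "yes" to confirm)
terraform destroy   # tears the environment down when finished
```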

“I did this project under the mentorship of Mr. Vimal Daga Sir during the Hybrid Multi Cloud Training by Linux World India.”

Thank You Readers!!! Hope you liked it.

I am a student pursuing an undergraduate degree in computer science and engineering.