Creating VPCs and peering them, and integrating GKE with GCP’s SQL storage, all using “Terraform” code
This article is about accessing GCP services and creating them using Terraform code files. It contains several tasks; we will build each one, observe its output, and end up with a good infrastructure.
Prerequisites for this Project :
- A GCP account; you can use your Google account to create this.
Once created, I made 2 projects in it, named “Development” (with id ’development-task-29’) and “Production” (with id ’production-task-29’).
- We have to install the Google Cloud SDK, to use the gcloud CLI. Use the link below and follow the procedure to install it:
Quickstart: Getting started with Cloud SDK | Cloud SDK Documentation
- Use the command below to enable your laptop to create resources on GCP. After running it, a URL will appear in the CMD; copy the URL into a browser and then click Allow from your GCP account.
# gcloud auth login
— We have to enable some APIs in both projects:
- Compute Engine API
- Kubernetes Engine API and Cloud SQL Admin API, which the GKE and database steps later in this article need
- The Terraform tool by HashiCorp. We have to install this tool to run the Terraform code. Check the link below to install Terraform for Windows:
Download Terraform - Terraform by HashiCorp
- As I am using VS Code, I have also installed the Terraform extension.
Services Created/Used in this Article :
- GCP (Google Cloud Platform):
This is one of the public clouds, and the one I will be using for our task:
→ Creating a VPC in each of the 2 projects, peering the VPCs so they are connected, and creating resources in the different projects that can exchange data after peering.
→ Deploying a Kubernetes cluster and creating pods on it, with a MySQL database deployed as the backend storage for our WordPress pods.
I have used the module approach in Terraform, as it is a better way of handling and managing resources.
→ So, let’s start the building process:
This is the structure of the project:
There are 3 modules, “development”, “production”, and “devk8s”, each with its own Terraform files.
main.tf → This is our main file, which calls all the modules to apply the infrastructure and also assigns values to the variables used in the modules. It is the only route between the modules and the only place variables are passed into them.
variables.tf → This file stores the variables that are set in the main file and then passed into the modules. They can be called global variables, as they can be accessed anywhere in the project.
In this file, I have created 7 variables, which act as global variables and can be used in the modules:
- devvpcid — the project ID of the development project
- prodvpcid — the project ID of the production project
- dbname — the name of the database
- dbuser — the user of the database
- dbpass — the password for the database
- username — used by the GKE cluster for authentication
- password — the password for the above username, used by the GKE cluster
→ Let’s see the development module’s dev.tf file:
This is the provider block used for authentication; every resource we create will depend on the details we give here.
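The original code is shown as a screenshot; a minimal sketch of what such a provider block might look like (the credentials file name and region are my assumptions, not the author’s exact code):

```hcl
# Provider for the development project; every resource in this module
# is created in this project and region.
provider "google" {
  credentials = file("credentials.json") # service-account key (assumed name)
  project     = var.devvpcid
  region      = "us-central1"
}
```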
— This will create the VPC in the development project, giving us a complete network.
This will create the subnet in the VPC. Subnets are like labs in a VPC (the building). This subnet will have the IP range 10.0.1.0 – 10.0.1.255 (10.0.1.0/24) in the region “us-central1”.
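A sketch of the VPC and subnet resources described above (resource names are hypothetical):

```hcl
# VPC without auto-created subnets, so we control the ranges ourselves.
resource "google_compute_network" "dev_vpc" {
  name                    = "dev-vpc"
  project                 = var.devvpcid
  auto_create_subnetworks = false
}

# Subnet covering 10.0.1.0 - 10.0.1.255 in us-central1.
resource "google_compute_subnetwork" "dev_subnet" {
  name          = "dev-subnet"
  project       = var.devvpcid
  region        = "us-central1"
  network       = google_compute_network.dev_vpc.id
  ip_cidr_range = "10.0.1.0/24"
}
```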
— This is the firewall I am creating, which allows SSH and ping from other instances. The firewall will be associated with the instances, so we create it in this VPC only.
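Such a firewall rule might be sketched as follows (names assumed):

```hcl
# Allow SSH (tcp/22) and ping (ICMP) into instances of this VPC.
resource "google_compute_firewall" "dev_allow" {
  name    = "dev-allow-ssh-icmp"
  project = var.devvpcid
  network = google_compute_network.dev_vpc.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  allow {
    protocol = "icmp"
  }

  source_ranges = ["0.0.0.0/0"]
}
```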
— Now, we can do peering with the VPC in the other project. We have used a depends_on block, meaning the peering is done only after the VPCs of both projects have been created.
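A sketch of the peering resource on the development side (the production VPC name “prod-vpc” is an assumption):

```hcl
# Peering from the development VPC to the production VPC.
# depends_on delays peering until the network exists.
resource "google_compute_network_peering" "dev_to_prod" {
  name         = "dev-to-prod"
  network      = google_compute_network.dev_vpc.id
  peer_network = "projects/production-task-29/global/networks/prod-vpc"

  depends_on = [google_compute_network.dev_vpc]
}
```

Note that GCP peering is one-directional per resource: a mirror peering from the production VPC back to the development VPC must be defined in prod.tf as well.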
This resource block will create an instance in the given VPC and subnet. I have given “RHEL 8” as the image with a 20 GB disk. This instance will have 2 GB RAM and 2 vCPUs.
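A sketch of such an instance (the zone and machine type are assumptions; e2-small matches the 2 vCPUs / 2 GB RAM mentioned above):

```hcl
# RHEL 8 instance with a 20 GB boot disk.
resource "google_compute_instance" "dev_vm" {
  name         = "dev-instance"
  project      = var.devvpcid
  zone         = "us-central1-a"
  machine_type = "e2-small" # 2 vCPUs, 2 GB RAM

  boot_disk {
    initialize_params {
      image = "rhel-cloud/rhel-8"
      size  = 20
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.dev_subnet.id
    access_config {} # ephemeral public IP, needed for browser SSH
  }
}
```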
NOTE: We have to do the same steps up to here in prod.tf in the production module.
Up to here, both files have the same configuration. Now we can check whether our VPCs are peered, verifying connectivity with the ping command.
!!! We will be check the outputs of this task, in the end together !!!
→ Now, we can create the “GKE cluster”, the load balancer, and the WordPress pods in the development project.
GKE Cluster — This will create a Kubernetes cluster with master and worker nodes. The master node is not revealed to us, but the worker nodes are visible under VM Instances. Kubernetes is installed on the cluster; the master controls all worker nodes and creates the pods on them. To manage the traffic to the pods, we will create a load balancer on top.
→ This will create the Google Kubernetes Engine (GKE) cluster. We give the name, location, project ID, VPC, and subnetwork, which are straightforward, but some arguments need a little understanding:
- We can add some security to our cluster with a username and password, so in the variables file we have set the values of username and password and used them in the code, in the master_auth block.
- remove_default_node_pool — This is set because GKE creates its own node pool by default, but for simplicity we remove it and create our own.
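A sketch of such a cluster resource (names assumed; basic auth via master_auth worked in provider versions of this era but has since been removed from GKE):

```hcl
# GKE cluster; the default node pool is removed and replaced
# by the custom node pool defined separately.
resource "google_container_cluster" "dev_gke" {
  name                     = "dev-cluster"
  project                  = var.devvpcid
  location                 = "us-central1"
  network                  = google_compute_network.dev_vpc.name
  subnetwork               = google_compute_subnetwork.dev_subnet.name
  remove_default_node_pool = true
  initial_node_count       = 1

  master_auth {
    username = var.username
    password = var.password
  }
}
```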
NODE POOL — A group of nodes that form our GKE worker nodes. Here we can configure the nodes as we like; below is the code for this.
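A sketch of the node pool (machine type and disk size are assumptions). With a regional location, node_count applies per zone, which is why three VM instances appear in the outputs later:

```hcl
resource "google_container_node_pool" "dev_pool" {
  name       = "dev-node-pool"
  project    = var.devvpcid
  location   = "us-central1"
  cluster    = google_container_cluster.dev_gke.name
  node_count = 1 # per zone; us-central1 has 3 zones, so 3 nodes total

  node_config {
    machine_type = "e2-medium"
    disk_size_gb = 20
  }
}
```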
— I am creating a null resource that uses a local-exec provisioner, which runs a command locally on our system. To use the GKE cluster from our CLI, we have to download the cluster’s credentials, so the null resource block is defined below:
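A sketch of that null resource (resource name assumed):

```hcl
# Fetch kubeconfig credentials locally once the cluster exists.
resource "null_resource" "get_credentials" {
  depends_on = [google_container_cluster.dev_gke]

  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.dev_gke.name} --region us-central1 --project ${var.devvpcid}"
  }
}
```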
— To send the values of variables to other modules, we have to expose them using output blocks. We want to send the cluster’s credentials and hostname to the Kubernetes module, as that module needs these variables for authentication.
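Such outputs might look like this (output names assumed; main.tf would wire them into the devk8s module):

```hcl
# Exported from the development module for Kubernetes provider auth.
output "host" {
  value = google_container_cluster.dev_gke.endpoint
}

output "client_certificate" {
  value = google_container_cluster.dev_gke.master_auth[0].client_certificate
}

output "client_key" {
  value = google_container_cluster.dev_gke.master_auth[0].client_key
}

output "cluster_ca_certificate" {
  value = google_container_cluster.dev_gke.master_auth[0].cluster_ca_certificate
}
```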
→ Now that our cluster is created by the code above, we can create the database service for it, using GCP’s Cloud SQL service.
We create a database instance from this service, then a database inside it, and then a user that can access this database.
- Name, root password, project ID, and region are easy to give.
- The settings block gives the configuration of this instance: RAM and CPU (which can be given in a custom format), disk size and type, availability type (“zonal” or “regional”), and whether it is publicly accessible. If it is, we can restrict which IP ranges clients may connect from; in our case, we have allowed clients from anywhere.
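A sketch of such a database instance (instance name, tier, and disk values are assumptions):

```hcl
# MySQL instance in the production project, reachable from any IP.
resource "google_sql_database_instance" "db_instance" {
  name             = "prod-db-instance"
  project          = var.prodvpcid
  region           = "us-central1"
  database_version = "MYSQL_5_7"
  root_password    = var.dbpass

  settings {
    tier              = "db-n1-standard-1" # CPU/RAM; custom tiers also possible
    availability_type = "ZONAL"
    disk_size         = 10
    disk_type         = "PD_SSD"

    ip_configuration {
      ipv4_enabled = true

      authorized_networks {
        name  = "allow-all" # clients may connect from anywhere
        value = "0.0.0.0/0"
      }
    }
  }
}
```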
→ After this, we have to create a database for our pods:
- We just give a name and the database instance’s ID, so that the database gets created inside it.
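A sketch of the database resource (resource name assumed; var.dbname is “mysqldb” in this article):

```hcl
resource "google_sql_database" "wp_db" {
  name     = var.dbname
  project  = var.prodvpcid
  instance = google_sql_database_instance.db_instance.name
}
```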
→ Now, we have to create a database user :
It is also simple: we just give a name, a password, and the database instance’s name.
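A sketch of the user resource (resource name assumed):

```hcl
resource "google_sql_user" "wp_user" {
  name     = var.dbuser
  password = var.dbpass
  project  = var.prodvpcid
  instance = google_sql_database_instance.db_instance.name
}
```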
→ Now, we will create our replication controller, in the “devk8s” module. The file name is “rc.tf”.
— For this, as in the earlier 2 modules, we have to define empty variables first:
- username, password, host, client_certificate, client_key, cluster_ca_certificate: the credentials of the GKE cluster, given in the Kubernetes provider section so that whatever we create, Terraform can authenticate against the cluster and deploy there.
- dbip, dbpass, dbuser, dbname: passed as environment variables in the container spec; they contain the details of the database instance and the database.
- nodepool: used to make pod creation depend on the creation of the GKE cluster’s node pool, as it makes no sense to create pods before the nodes exist.
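The empty declarations might look like this (main.tf supplies the values, mostly from the development module’s outputs):

```hcl
variable "username" {}
variable "password" {}
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "dbip" {}
variable "dbuser" {}
variable "dbpass" {}
variable "dbname" {}
variable "nodepool" {}
```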
— Now we will use some of these variables in the provider block:
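A sketch of the Kubernetes provider block under these assumptions (GKE returns the certificates base64-encoded, hence the base64decode calls):

```hcl
provider "kubernetes" {
  host                   = var.host
  username               = var.username
  password               = var.password
  client_certificate     = base64decode(var.client_certificate)
  client_key             = base64decode(var.client_key)
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
  load_config_file       = false # ignore any local kubeconfig
}
```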
— Now, we will create a load balancer on top of these nodes:
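Such a LoadBalancer service might be sketched like this (service name and label assumed):

```hcl
# LoadBalancer service exposing port 80 of the WordPress pods.
resource "kubernetes_service" "wp_lb" {
  metadata {
    name = "wordpress-lb"
  }

  spec {
    type = "LoadBalancer"

    selector = {
      app = "wordpress"
    }

    port {
      port        = 80
      target_port = 80
    }
  }
}
```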
— Now, we will create the replication controller, which will create the replicas (pods) on these nodes. The code for it is below:
- I have made this replication controller depend on the GKE cluster’s node pool, i.e. the pods must be created only once the node pool is ready. We do this by referencing the node pool’s “id” attribute, which only resolves once the node pool is created and ready; any attribute would work.
- Here, I have set 2 replicas, i.e. 2 pods are created initially; we can increase the number of replicas (pods) later. The pods run the WordPress image, and I am using the latest version of it.
- To give the pods access to the database, we pass environment variables that supply the values WordPress needs to install itself on the pod; after installation, its tables are created in the given database.
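A sketch of the replication controller under these assumptions. Since depends_on cannot reference a variable in Terraform 0.12, the sketch creates the implicit dependency by referencing var.nodepool (the node pool’s id) in an annotation; names and labels are hypothetical:

```hcl
resource "kubernetes_replication_controller" "wp_rc" {
  metadata {
    name = "wordpress-rc"

    labels = {
      app = "wordpress"
    }

    annotations = {
      # Referencing the node pool's id makes pod creation wait
      # for the node pool to exist.
      nodepool-id = var.nodepool
    }
  }

  spec {
    replicas = 2

    selector = {
      app = "wordpress"
    }

    template {
      metadata {
        labels = {
          app = "wordpress"
        }
      }

      spec {
        container {
          name  = "wordpress"
          image = "wordpress:latest"

          env {
            name  = "WORDPRESS_DB_HOST"
            value = var.dbip
          }
          env {
            name  = "WORDPRESS_DB_USER"
            value = var.dbuser
          }
          env {
            name  = "WORDPRESS_DB_PASSWORD"
            value = var.dbpass
          }
          env {
            name  = "WORDPRESS_DB_NAME"
            value = var.dbname
          }
        }
      }
    }
  }
}
```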
— We could enter these details manually, but since our goal is automation, I am using this feature of the WordPress image, which it provides out of the box.
Variables description :
- “WORDPRESS_DB_HOST” : the public IP address of the Cloud SQL instance, i.e. the IP of our database.
- “WORDPRESS_DB_USER” : the name of the user we created. We could also give root, but in GCP’s Cloud SQL both users have the same privileges.
- “WORDPRESS_DB_PASSWORD” : the password for the above user.
- “WORDPRESS_DB_NAME” : the name of the database in which WordPress should install its tables and store its data.
— If you don’t want to use these variables, then when you install WordPress from the browser, the following page comes up after selecting the language:
— Fill the variables like this :
Now everything is defined and the infrastructure is ready to deploy. For this we have to run:
# terraform init
This will install the provider plugins and module dependencies of Terraform.
# terraform apply --auto-approve
→ This will take around 15–20 minutes, so be patient.
Outputs Section :
Here we verify that the resources we were creating are actually created, checking the following:
VPC Peering —
1. We will check whether the instances can ping each other.
2. We will create a post on the WordPress pod and then check, inside the database, whether the post actually used the Cloud SQL storage.
→ VPC Created in Development Project :
→ VPC Created in Production Project :
→ Subnets created within the VPCs in the respective projects :
→ VPC Peering is created :
→ Firewall created in respective Projects :
→ Instance created in development project :
→ Instance created in production project :
— Now, we will SSH to our instance from the browser and ping the other one, to check connectivity :
— I have created a Video for you guys, do check it :
— Now it’s confirmed that, after peering the VPCs, data can be exchanged between the projects.
So the Kubernetes pods in the development project can use the backend database, which is in the production project.
→ In the Development Project, we can see that :
1. GKE cluster is created :
2. Node pool for this Kubernetes cluster is created :
3. The node pool creates an instance group, which is also created :
4. Instances (3 of them) are created :
→ We can check the nodes, pods, and services from our system’s CMD :
— Pods are also created :
— LoadBalancer Service is also created (from CLI) :
→ Load Balancer is also created(from Console) :
— Our Desired IP is “18.104.22.168:80”
→ In the Production Project, we can see that :
1. Database instance is created :
2. Database inside this instance is created :
3. User for this database is created :
→ I will log in to this database instance from my system’s CMD using the user we created, and check whether the database is created and our post is there.
— Our desired DataBase is “mysqldb”
— Currently, this database is empty, as we haven’t installed WordPress on our pod. To do that, we open the load balancer’s IP in the browser; installing WordPress creates its tables.
— We can see that tables are also created :
— Now, we will create a post in this wordpress pod :
As you can see, whatever we wrote there is present in our database; our desired task is hence accomplished.
— The user we created on WordPress (on the 2nd page of the installation) is also stored here, in the “wp_users” table.
→ To destroy the infrastructure, a single command is enough:
# terraform destroy --auto-approve
All resources will be deleted by this. It can take 10–20 minutes.
In this Section, Github URL and some reference docs are provided :
→ GITHUB URL :
→ I have done a similar project with AWS’s RDS (Relational Database Service) and Kubernetes, with pods running on my local system. Check it out:
Integration of AWS-RDS and Wordpress using Kubernetes pods.
→ Docs Links for more reference :
— Google Provider
Provider: Google Cloud Platform - Terraform by HashiCorp
— VPC Network
Google: google_compute_network - Terraform by HashiCorp
— VPC Peering
Google: google_compute_network_peering - Terraform by HashiCorp
Google: google_compute_firewall - Terraform by HashiCorp
— Google compute Instance (VM Instances)
Google: google_compute_instance - Terraform by HashiCorp
Google: google_compute_subnetwork - Terraform by HashiCorp
— GKE Cluster
Google: google_container_cluster - Terraform by HashiCorp
— GKE NodePool
Google: google_container_node_pool - Terraform by HashiCorp
— Database Instance
Google: google_sql_database_instance - Terraform by HashiCorp
Google: google_sql_database - Terraform by HashiCorp
— Database User
Google: google_sql_user - Terraform by HashiCorp
— For Wordpress Image and their Variables :
wordpress - Docker Hub
— Kubernetes Service
Service | Kubernetes Documentation
— Kubernetes Replication Controller
ReplicationController | Kubernetes Documentation
I have learned about Google Cloud Platform and its various services under the mentorship of Mr. Vimal Daga Sir in the GCP Workshop.
I hope this article is Informative and Explanatory. Hope you like it !!!
Please give some claps, if you liked this article !!!
For any suggestions, or if any reader finds a flaw in this article, please email me at “firstname.lastname@example.org”.
Thank You Readers, for viewing this !!!