Creating VPCs and Peering Them, and Integrating GKE with Cloud SQL on GCP, All Using Terraform Code

This is the type of infrastructure we will be building.

This article is about accessing GCP services and creating them using Terraform code files. It is organized as a series of tasks; we will build each one, observe its output, and end up with a good working infrastructure.

Prerequisites for this Project:

  1. A GCP account; you can use your Google account to create one.
    Once it was created, I created two projects in it, named “Development (id = ‘development-task-29’)” and “Production (id = ‘production-task-29’)”.
  • We have to install the Google Cloud SDK to use the gcloud CLI. Use the link below and follow the procedure to install it:
  • Use the command below to enable your laptop to create resources on GCP. After running it, a URL will appear in the terminal; copy the URL into a browser and then click “Allow” from your respective GCP account.
# gcloud auth login

— We have to enable some APIs in both projects:

Compute Engine API:

Enable this API.

Enable more APIs, such as:

2. Terraform, the tool by HashiCorp. We have to install it to run Terraform code. Check the link below to install Terraform on Windows:

  • As I am using VS Code, I also installed the Terraform extension.

Services Created/Used in this Article :

  1. GCP (Google Cloud Platform):
    This is the public cloud I will be using for our task:
    → Creating a VPC in each of two projects and then peering the VPCs so they are connected, and creating resources in the two projects that can exchange data after peering.
    → Deploying a Kubernetes cluster and creating pods on it, and deploying a MySQL database that will serve as the backend storage for our WordPress pods.
  2. Terraform
    I have used the module approach in Terraform, as it is a better way to organize and manage resources.

→ So, let’s start the building process:

This is my structure of the project :

Folder Structure

There are three modules, “development”, “production”, and “devk8s”, each with its own Terraform files. → This is our main file, which calls all the modules to apply the infrastructure and is also used to assign values to the variables used in the modules. This file is the only route between the different modules and the only way to pass variables into them. → This file is used to declare variables, which are set in the main file and then used in the modules. These can be called global variables, as they can be accessed anywhere in the whole project.

In this file, I have created 7 variables, which act as global variables and can be used in the modules.

  • devvpcid — the project ID of the development project
  • prodvpcid — the project ID of the production project
  • dbname — the name of the database
  • dbuser — the user of the database
  • dbpass — the password for the database
  • username — used in the GKE cluster for security
  • password — the password for this admin username in the GKE cluster
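As a minimal sketch, the seven declarations in the variables file might look like this (the article only gives the names; types and comments are mine):

```hcl
variable "devvpcid" {}   # project ID of the development project
variable "prodvpcid" {}  # project ID of the production project
variable "dbname" {}     # database name
variable "dbuser" {}     # database user
variable "dbpass" {}     # database password
variable "username" {}   # GKE master-auth username
variable "password" {}   # GKE master-auth password
```

Values for these are then assigned in the main file when calling each module.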

→ Let’s see the development module’s — file:

This is the provider block used for authentication; whatever resources we create will depend on the details we give in this provider block.
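A sketch of such a provider block might look like this (the region here is an assumption; the article's screenshots were lost):

```hcl
provider "google" {
  project = var.devvpcid   # development project ID, passed in from the main file
  region  = "us-central1"  # region used elsewhere in the article
}
```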

— This will create the VPC in the development project, and the complete network will be created.

This will create a subnet in the VPC. Subnets are like labs in a building (the VPC). This subnet will have an IP range of to in the region “us-central1”.
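The VPC and subnet resources might be sketched as follows (resource names and the CIDR range are illustrative; the article's exact range was not recoverable):

```hcl
# Custom-mode VPC: we create subnets ourselves instead of auto-creating them
resource "google_compute_network" "dev_vpc" {
  name                    = "dev-vpc"
  project                 = var.devvpcid
  auto_create_subnetworks = false
}

# One subnet in us-central1 inside that VPC
resource "google_compute_subnetwork" "dev_subnet" {
  name          = "dev-subnet"
  project       = var.devvpcid
  region        = "us-central1"
  network       = google_compute_network.dev_vpc.id
  ip_cidr_range = "10.0.1.0/24"   # placeholder range
}
```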

— This is the firewall I am creating, which allows SSH and ping from other instances. As this firewall will be associated with the instance, we create it in this VPC only.
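A sketch of a firewall rule allowing SSH and ICMP (ping), attached to the development VPC:

```hcl
resource "google_compute_firewall" "dev_fw" {
  name    = "dev-allow-ssh-icmp"
  project = var.devvpcid
  network = google_compute_network.dev_vpc.name

  allow {
    protocol = "icmp"        # allows ping from other instances
  }
  allow {
    protocol = "tcp"
    ports    = ["22"]        # allows SSH
  }

  source_ranges = ["0.0.0.0/0"]
}
```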

— Now we can peer with the VPC in the other project. We have used the depends_on block, which means peering is done only once the VPCs of both projects have been created.
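One side of the peering might look like the sketch below; note that GCP peering is bidirectional, so a matching resource pointing back is needed in the production module (the peer network name here is an assumption):

```hcl
resource "google_compute_network_peering" "dev_to_prod" {
  name         = "dev-to-prod"
  network      = google_compute_network.dev_vpc.id
  # Self-link of the production VPC in the other project
  peer_network = "projects/${var.prodvpcid}/global/networks/prod-vpc"

  # Ensures this VPC exists before peering; the article also waits for the
  # production VPC via its module's outputs
  depends_on = [google_compute_network.dev_vpc]
}
```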

This resource block will create an instance in the given VPC and subnet. I have given “RHEL 8” as the image, with a 20 GB disk. This instance will have 2 GB RAM and 2 vCPUs.
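A sketch of that instance, using a custom machine type for the 2 vCPU / 2 GB shape the article describes (instance name and zone are assumptions):

```hcl
resource "google_compute_instance" "dev_vm" {
  name         = "dev-instance"
  project      = var.devvpcid
  zone         = "us-central1-a"
  machine_type = "custom-2-2048"   # 2 vCPUs, 2 GB RAM

  boot_disk {
    initialize_params {
      image = "rhel-cloud/rhel-8"  # RHEL 8 image family
      size  = 20                   # 20 GB boot disk
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.dev_subnet.id
    access_config {}               # gives the VM an ephemeral public IP
  }
}
```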

NOTE: We have to do the same steps up to here in the production module.

Up to here, the configuration is the same in both files. Now we can check whether our VPCs are peered or not; we can test connectivity using the ping command.

!!! We will check the outputs of this task together at the end !!!

→ Now we can create the GKE cluster, a load balancer, and WordPress pods in the development project.

GKE Cluster — This will create a Kubernetes cluster with master and worker nodes. The master node is not visible to us, but the worker nodes appear under VM Instances. Kubernetes is installed in this cluster; the master controls all the worker nodes and creates pods on them, and to manage the traffic to those pods we will create a LoadBalancer on top.

→ This will create the Google Kubernetes Engine cluster (known as GKE). We give the name, location, project ID, VPC, and subnetwork, which are the usual arguments, but some arguments need a little more understanding:

  • We can add some security to our cluster with a username and password, so we have written their values in the variables file and used them in the code, in the master_auth block.
  • remove_default_node_pool — this is given because GKE creates its own node pool by default, but for simplicity we are creating our own node pool.
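The cluster resource might be sketched as below. Note the basic-auth master_auth block shown here matches older Google provider versions, as in the article; newer GKE versions have removed basic authentication:

```hcl
resource "google_container_cluster" "dev_gke" {
  name       = "dev-cluster"
  project    = var.devvpcid
  location   = "us-central1"
  network    = google_compute_network.dev_vpc.id
  subnetwork = google_compute_subnetwork.dev_subnet.id

  # We manage our own node pool, so drop the default one
  remove_default_node_pool = true
  initial_node_count       = 1

  # Basic auth for the cluster, from the global variables file
  master_auth {
    username = var.username
    password = var.password
  }
}
```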

NODE POOL — This is a group of nodes that will be our GKE worker nodes. Here we can configure the nodes as we like; the code for this is below.
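A sketch of such a node pool (node count and machine type are assumptions; the article's screenshots show three nodes):

```hcl
resource "google_container_node_pool" "dev_nodepool" {
  name       = "dev-node-pool"
  project    = var.devvpcid
  location   = "us-central1"
  cluster    = google_container_cluster.dev_gke.name
  node_count = 1   # per zone; a regional cluster multiplies this across zones

  node_config {
    machine_type = "e2-medium"
    oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
  }
}
```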

— I am creating a null resource that uses a local-exec provisioner, which runs a command locally on our system. To use GKE from our CLI, we have to download the GKE cluster’s credentials, so the null resource block is defined below:
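A sketch of that null resource, fetching kubeconfig credentials for the cluster once it exists:

```hcl
resource "null_resource" "get_credentials" {
  # Runs locally; requires the gcloud CLI to be installed and authenticated
  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.dev_gke.name} --region us-central1 --project ${var.devvpcid}"
  }

  depends_on = [google_container_cluster.dev_gke]
}
```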

— To send the values of variables to the other modules, we have to expose them using output blocks. We want to send the cluster’s credentials and hostname to the Kubernetes module, as that module requires these variables for authentication.
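For example, the outputs might be sketched like this (output names are illustrative):

```hcl
output "cluster_host" {
  value = google_container_cluster.dev_gke.endpoint
}

output "client_certificate" {
  value = google_container_cluster.dev_gke.master_auth[0].client_certificate
}

output "client_key" {
  value = google_container_cluster.dev_gke.master_auth[0].client_key
}

output "cluster_ca_certificate" {
  value = google_container_cluster.dev_gke.master_auth[0].cluster_ca_certificate
}
```

The main file then wires these outputs into the “devk8s” module's input variables.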

→ Now, since our cluster will be created by the code above, we can create a database service for it, using the Cloud SQL service of Google Cloud Platform.
We have to create a database instance from this service; then we will create a database in it, and then a user that can access this database.

  • We can give the name, root password, project ID, and region, which are straightforward.
  • The settings block gives the configuration of this instance: RAM and CPU (which can be given in a custom format), disk size and type, the type of availability (“zonal” or “regional”), and whether it is publicly accessible. If it is, we can give the IP range clients may connect from; in our case, we have allowed clients to connect from anywhere.
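A sketch of the database instance (instance name, MySQL version, and tier are assumptions; a custom CPU/RAM shape can be given as db-custom-&lt;vCPUs&gt;-&lt;RAM MB&gt;):

```hcl
resource "google_sql_database_instance" "prod_db" {
  name             = "prod-db-instance"
  project          = var.prodvpcid
  region           = "us-central1"
  database_version = "MYSQL_5_7"

  settings {
    tier              = "db-n1-standard-1"  # or e.g. "db-custom-1-3840"
    availability_type = "ZONAL"
    disk_size         = 10
    disk_type         = "PD_SSD"

    ip_configuration {
      ipv4_enabled = true
      authorized_networks {
        name  = "allow-all"
        value = "0.0.0.0/0"   # the article allows clients from anywhere
      }
    }
  }
}
```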

→After this, we have to create a database for our pods :

  • For this, we just have to give a name and the database instance ID, so that the database gets created inside it.

→ Now, we have to create a database user :

This is also simple to create: we just have to give a name, its password, and the database instance’s name.
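The database and its user might be sketched as follows, using the global variables from earlier:

```hcl
# The database WordPress will install its tables into
resource "google_sql_database" "wp_db" {
  name     = var.dbname
  project  = var.prodvpcid
  instance = google_sql_database_instance.prod_db.name
}

# A user that can access that database
resource "google_sql_user" "wp_user" {
  name     = var.dbuser
  password = var.dbpass
  project  = var.prodvpcid
  instance = google_sql_database_instance.prod_db.name
}
```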

→ Now we will create our replication controller, which is created in the “devk8s” module. The file name is “”.

— For this, as in the earlier two modules, we have to define empty variables first:

  • The variables username, password, host, client_certificate, client_key, and cluster_ca_certificate are the credentials of the GKE cluster. They are given in the Kubernetes provider section, so that whatever we create, Terraform will check the credentials and deploy there.
  • The variables dbip, dbpass, dbuser, and dbname are given as environment variables in the spec part of the container; they contain the details of the database instance and the database.
  • The variable nodepool is used to make the creation of pods depend on the creation of the GKE cluster’s node pool, as it doesn’t make sense to create pods without nodes.

— Now we will use some of these variables in the provider block:

Provider Block
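A sketch of the Kubernetes provider wired to those variables (the base64decode calls assume the certificates arrive base64-encoded, as GKE returns them; older provider versions also accept the load_config_file flag shown):

```hcl
provider "kubernetes" {
  load_config_file = false   # use these credentials, not the local kubeconfig

  host     = var.host
  username = var.username
  password = var.password

  client_certificate     = base64decode(var.client_certificate)
  client_key             = base64decode(var.client_key)
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
}
```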

— Now we will create a load balancer, which will be created on top of these nodes:

The LoadBalancer, which is a Kubernetes Service
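A sketch of that Service of type LoadBalancer, selecting the WordPress pods by label (names and labels are assumptions):

```hcl
resource "kubernetes_service" "wp_lb" {
  metadata {
    name = "wordpress-lb"
  }

  spec {
    selector = {
      app = "wordpress"   # must match the pod labels in the replication controller
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"  # GKE provisions an external IP for this service
  }
}
```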

— Now we will create the replication controller, which will create replicas (pods) on these nodes:

— This is the code for creating Replication Controller :

  • I have made this replication controller depend on the GKE cluster’s node pool, i.e., we want pods to be created only once the cluster’s node pool has been created. So we make the replication controller reference the node pool’s “id” variable, which only has a value when the node pool is created and ready. Any variable of the node pool could be used.
  • Here I have set 2 replicas, i.e., 2 pods will be created initially; we can increase the number of replicas (pods). For the pods, we have to give the WordPress image, and I will be using the latest version of this image.
  • To give database access to our pods, we set environment variables here, which directly supply the arguments WordPress requires to install itself on the pod; after installation, the tables in the given database will be created.
    — We could give these details manually as well, but since our task is automation, I am using this feature of the WordPress image, which is provided by its maintainers.
Wordpress Image
Wordpress Variables
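The replication controller might be sketched as below. Since depends_on cannot reference an input variable, the node-pool id passed in as var.nodepool is referenced in an annotation to create the implicit ordering the article describes (names and labels are assumptions):

```hcl
resource "kubernetes_replication_controller" "wp_rc" {
  metadata {
    name = "wordpress-rc"
    annotations = {
      # Referencing the node-pool id delays creation until the pool is ready
      nodepool = var.nodepool
    }
  }

  spec {
    replicas = 2
    selector = {
      app = "wordpress"
    }

    template {
      metadata {
        labels = {
          app = "wordpress"   # matched by the LoadBalancer service's selector
        }
      }
      spec {
        container {
          name  = "wordpress"
          image = "wordpress:latest"

          # WordPress reads its database settings from these variables
          env {
            name  = "WORDPRESS_DB_HOST"
            value = var.dbip
          }
          env {
            name  = "WORDPRESS_DB_USER"
            value = var.dbuser
          }
          env {
            name  = "WORDPRESS_DB_PASSWORD"
            value = var.dbpass
          }
          env {
            name  = "WORDPRESS_DB_NAME"
            value = var.dbname
          }
        }
      }
    }
  }
}
```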

Variable descriptions:

  • “WORDPRESS_DB_HOST” : here we give the public IP address of the Cloud SQL instance, i.e., the IP of our database
  • “WORDPRESS_DB_USER” : here we give the name of the user we created; we could give either this username or root, but in Cloud SQL on GCP both users have the same privileges
  • “WORDPRESS_DB_PASSWORD” : here we give the password for the above user
  • “WORDPRESS_DB_NAME” : here we give the name of the database in which WordPress should install its tables and store its data

— If you don’t want to use these variables, then when you install WordPress from the browser, the following page will come up after selecting the language:

— Fill the variables like this :

I have annotated the image above to show what you should enter if you don’t provide the environment variables and want to do it manually.

Now everything has been defined by us, and the infrastructure is ready to deploy. For this we have to run a command:

# terraform init

This will install Terraform’s plugins and dependencies.

Initialized Terraform
# terraform apply --auto-approve
Resources creating…..
Resources created

→ This will take around 15–20 minutes to create, so be patient.

Outputs Section :

Here we check whether the resources we were creating have been created, and then we will verify the following:

VPC Peering —

1. We will see whether the instances can ping each other.

2. We will create a post on the WordPress pod, and then check whether our post has used the Cloud SQL storage: we will go into the database and check there.

VPC Created in Development Project :

→ VPC Created in Production Project :

Subnets created within the VPCs in their respective projects:

VPC Peering is created :

Firewall created in respective Projects :

Instance created in development project :

Instance created in production project :

— Now we will SSH into our instance from the browser and run ping to check connectivity:

— I have created a video for you; do check it out:

— Now it’s confirmed that, after peering the VPCs, data can be exchanged between the projects.

So it is also possible for Kubernetes pods in the development project to use the backend database, which is in the production project.

→ In Development Project, we can see that :

  1. GKE Cluster is created :
Description of Cluster
Description of Cluster

2. Nodepool for this Kubernetes Cluster is created :


3. The node pool creates an instance group, which is also created:

Instance Group

4. Instances (3 of them) are created:

All Instances in Development project

— VM1

NodePool’s VM 1

— VM2

NodePool’s VM 2

— VM3

NodePool’s VM 3

→ We can check the nodes, pods, and NodePort from our system’s CMD:

All services running in Cluster

Pods are also created :


LoadBalancer Service is also created (from CLI) :

Load Balancer
Description of Load Balancer

Load Balancer is also created(from Console) :

Load Balancer is created
Description of Load Balancer

— Our Desired IP is “”

→ In Production Project, we can see that :

  1. Database Instance is created :
SQL Storage Database Instance
Description of SQL Storage Database Instance
Description of SQL Storage Database Instance
Description of SQL Storage Database Instance

2. Database in this Instance is created :

Database In Instance

3. User for this database is created :

User for authentication for instance

→ I will log in to this database instance from my system’s CMD using the user we created, and then check whether the database is created and whether our post is there.

login using this command
Number of databases in our Instance

— Our desired DataBase is “mysqldb”

Now Accessing desired database

— Currently this database is empty, as we haven’t installed WordPress on our pod. To do this, we open the LoadBalancer IP in the browser; then we can install WordPress, and its tables will be created.

Click on Install Wordpress, remember the username and password
Wordpress is Successfully installed and Login

— We can see that tables are also created :

Wordpress installed and then these tables are created
Login to the wordpress
Dashboard after logging in

— Now, we will create a post in this wordpress pod :

Click on “add new”
This is the post/page, I have created
Click on “Publish” on the top right corner
This is outlay of our /postpage, we have created
Displaying content of this table, as this table contains our post/page

As you can see, whatever we wrote there is now in our database, so our desired task is accomplished.

— We can also see that the user we created on WordPress, on the second page of the installation, is stored here in the “wp_users” table.

→ To destroy the infrastructure, a single command is enough:

# terraform destroy --auto-approve

This will delete all the resources. It can take 10–20 minutes.

In this section, the GitHub URL and some reference docs are provided:


→ I have done a similar project with AWS’s RDS (Relational Database Service) and Kubernetes (with the pods running on my local system); check it out:

→ Docs Links for more reference :

Google Provider

— Kubernetes Provider


— VPC Peering

— Firewall

— Google compute Instance (VM Instances)

— Subnets

— GKE Cluster

— GKE NodePool

— Database Instance

— Database

— Database User

For Wordpress Image and their Variables :

— Kubernetes Service

— Kubernetes Replication Controller

I have learned Google Cloud Platform and its various services under the mentorship of Mr. Vimal Daga Sir in the GCP Workshop.

I hope this article is Informative and Explanatory. Hope you like it !!!

Please give some claps, if you liked this article !!!

For any suggestions, or if any reader finds a flaw in this article, please email me at “”.

Thank You Readers, for viewing this !!!




I am a student pursuing an undergraduate degree in computer science and engineering.

Akhilesh Jain

