Create/Launch An Application on AWS using Terraform (through EFS)

Nischal Vooda
6 min read · Jul 20, 2020

Introduction to Terraform:

As we know, we can access cloud providers in three ways: WebUI, CLI, and SDKs. There are two types of cloud providers:

Public: AWS, GCP, Azure

Private: OpenStack

No single provider has all the advantages; every cloud provider has its own strengths and weaknesses. As users, we want only the benefits of each provider. Now think about it: is it possible to take specific services from multiple cloud providers and organize them together?

The answer is yes.

We can achieve this with multi-cloud computing. But each cloud provider has its own API, so it is hard for a developer to learn the syntax and commands of every provider and to manage them all.

Here the role of Terraform comes into play.

Terraform is a tool to manage the cloud.

It gives us Infrastructure as Code, which provides standardization. Terraform uses the HashiCorp Configuration Language (HCL); HCL is simple and easy to apply.
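To get a feel for the language, here is a minimal, self-contained HCL snippet (a sketch only; the AMI ID is the one used later in this article):

# The provider block tells Terraform which cloud to talk to
provider "aws" {
  region = "ap-south-1"
}

# A resource block declaratively describes one piece of infrastructure
resource "aws_instance" "example" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
}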

Now, let’s have a look at the objectives:

  • Create an AWS Instance — EC2
  • Install required dependencies, modules, and software
  • Create an EFS for persistent storage
  • Attach, Format and Mount it in a folder in the instance
  • Clone the code sent by a developer on GitHub in the folder
  • Create an S3 Bucket for storage of static data
  • This will be sent to all the edge locations using CloudFront
  • Finally loading the webpage on your favourite browser automatically

For this particular setup you should have the following things installed on your machine:

  1. Git
  2. AWS Command Line Interface
  3. Terraform

Here are some of the basic Terraform commands:

terraform init     - To install the required plugins
terraform validate - To check the code
terraform plan     - To create an execution plan
terraform apply    - To build the resources

terraform destroy - To destroy all the resources in one go

CODE:

  • Log in with the CLI, run the aws configure command, and supply the access key and secret key from your credentials file along with the region name (see the example session after this list).
  • Create a directory, then create a file inside it with the “.tf” extension.
  • Now we will write the code in the Terraform file we created.
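For reference, an aws configure session looks roughly like this (the key values here are placeholders; use the ones from your own credentials file):

$ aws configure
AWS Access Key ID [None]: AKIA****************
AWS Secret Access Key [None]: ****************************
Default region name [None]: ap-south-1
Default output format [None]: json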

Give the name of the provider (AWS) that Terraform will contact:

provider "aws" {region = "ap-south-1"profile = "nischal"}

Create a security group allowing port 80 for HTTP and port 22 for SSH:

resource "aws_security_group" "securitygroup" {name        = "task2-securitygroup"description = "Allow TLS inbound traffic"vpc_id      = "vpc-33b9a45b"ingress {description = "SSH"from_port   = 22to_port     = 22protocol    = "tcp"cidr_blocks = [ "0.0.0.0/0" ]}ingress {description = "HTTP"from_port   = 80to_port     = 80protocol    = "tcp"cidr_blocks = [ "0.0.0.0/0" ]}egress {from_port   = 0to_port     = 0protocol    = "-1"cidr_blocks = ["0.0.0.0/0"]}tags = {Name = "task2-securitygroup"}}

Next, launch an instance with the key pair and security group created above. To connect to the instance we specify the path of the private key and the instance's public IP, and we install httpd and git to deploy the webpage:

resource "aws_instance" "web_server" {ami = "ami-0447a12f28fddb066"instance_type = "t2.micro"root_block_device {volume_type = "gp2"delete_on_termination = true}key_name = "nischal"security_groups = [ "${aws_security_group.securitygroup.name}" ]connection {type     = "ssh"user     = "ec2-user"private_key = file("C:/Users/91986/Downloads/nischal.pem")host     = aws_instance.web_server.public_ip}provisioner "remote-exec" {inline = ["sudo yum install httpd git -y","sudo systemctl restart httpd","sudo systemctl enable httpd",]}tags = {Name = "task2_os"}}

Now we create our EFS. EFS requires a VPC behind the scenes; since we haven't specified one, the default VPC is used. Once the file system is created we create a mount target, mount the EFS on the /var/www/html directory, and clone all the required data from GitHub into it.

resource "aws_efs_file_system" "efs" {
creation_token = "efs"
performance_mode = "generalPurpose"
throughput_mode = "bursting"
encrypted = "true"
tags = {
Name = "Efs"
}
}
resource "aws_efs_mount_target" "efs-mount" {
depends_on = [
aws_instance.web_server,
aws_security_group.securitygroup,
aws_efs_file_system.efs,
]

file_system_id = aws_efs_file_system.efs.id
subnet_id = aws_instance.web_server.subnet_id
security_groups = ["${aws_security_group.securitygroup.id}"]


connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/91986/Downloads/nischal.pem")
host = aws_instance.web_server.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo mount ${aws_efs_file_system.efs.id}:/ /var/www/html",
"sudo echo '${aws_efs_file_system.efs.id}:/ /var/www/html efs defaults,_netdev 0 0' >> /etc/fstab",
"sudo rm -rf https://github.com/NischalRam/aws_efs.git /var/www/html/*",
"sudo git clone /var/www/html/"
]
}
}
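As an optional convenience (not part of the original code), exposing the EFS DNS name as an output makes it easy to verify the mount by hand after apply:

output "efs_dns_name" {
  value = aws_efs_file_system.efs.dns_name
}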

Now we create an S3 bucket in the same region and upload an image to it:

resource "aws_s3_bucket" "s3bucket" {
bucket = "nischal666600"
acl = "public-read"
region = "ap-south-1"
tags = {
Name = "nischal666600"
}
}
# -- Uploading files in S3 bucket
resource "aws_s3_bucket_object" "file_upload" {
  depends_on = [
    aws_s3_bucket.s3bucket,
  ]

  bucket = "nischal666600"
  key    = "efs.jpg"
  source = "C:/Users/91986/Desktop/terraform-x-aws-1.png"
  acl    = "public-read"
}

In the last step, we create a CloudFront distribution that picks up the data from the S3 bucket and serves it to clients from the nearest edge location whenever anyone hits the site:

resource "aws_cloudfront_distribution" "s3_distribution" {
depends_on = [
aws_efs_mount_target.efs-mount,
aws_s3_bucket_object.file_upload,
]
origin {
domain_name = "${aws_s3_bucket.s3bucket.bucket}.s3.amazonaws.com"
origin_id = "ak"
}
enabled = true
is_ipv6_enabled = true
default_root_object = "index.html"
restrictions {
geo_restriction {
restriction_type = "none"
}
}
default_cache_behavior {
allowed_methods = ["HEAD", "GET"]
cached_methods = ["HEAD", "GET"]
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
default_ttl = 3600
max_ttl = 86400
min_ttl = 0
target_origin_id = "ak"
viewer_protocol_policy = "allow-all"
}
price_class = "PriceClass_All"viewer_certificate {
cloudfront_default_certificate = true
}
}
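Optionally, you can also print the instance IP and the CloudFront domain after apply with output blocks (the output names here are my own choice):

output "instance_public_ip" {
  value = aws_instance.web_server.public_ip
}

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}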
resource "null_resource" "nullremote3" {
depends_on = [
aws_cloudfront_distribution.s3_distribution,
]

Now we connect to the instance and write the CloudFront URL of the S3 image into /var/www/html/index.html, after which the page opens automatically in the Chrome browser:

resource "null_resource" "nullremote3"  {
depends_on = [
aws_cloudfront_distribution.s3_distribution,
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/91986/Downloads/nischal.pem")
host = aws_instance.web_server.public_ip
}

provisioner "remote-exec" {
inline = [
"sudo su <<END",
"echo \"<img src='http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.file_upload.key}' height='800' width='500'>\" >> /var/www/html/index.html",
"END",
]
}
}
# -- Starting chrome for output
resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullremote3,
  ]

  provisioner "local-exec" {
    command = "start chrome ${aws_instance.web_server.public_ip}"
  }
}
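Note that start chrome is a Windows-specific command. On other systems the equivalent local-exec command would differ; for example (a sketch, adjust for your OS and browser):

# Windows
command = "start chrome ${aws_instance.web_server.public_ip}"
# Linux
command = "xdg-open http://${aws_instance.web_server.public_ip}"
# macOS
command = "open http://${aws_instance.web_server.public_ip}"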

Now we are done with all the required steps. To bring the setup up, save the complete code and run the following commands; the entire setup will then be ready:

terraform init
terraform plan
terraform apply -auto-approve
(Screenshots: terraform validate reports the code is valid, terraform apply completes successfully, and the webpage output is shown.)

Now let’s delete the whole setup by using the following command:

terraform destroy -auto-approve

Connect with me on LinkedIn as well.
