WebServer — EC2, S3, and CloudFront provisioned using Terraform + GitHub

Nischal Vooda
7 min read · Jun 16, 2020


Introduction to Terraform:

As we know, we can access cloud providers in three ways: WebUI, CLI, and SDKs. There are two types of cloud providers:

Public: AWS, GCP, Azure

Private: OpenStack

No single provider offers all the benefits; every cloud provider has its advantages and disadvantages. As users, we want to take only the benefits from each provider. Now think about it: is it possible to take specific services from multiple cloud providers, and can we organize things by doing this?

The answer is yes.

We can achieve this by using MULTI-CLOUD COMPUTING. But each cloud provider has its own API, so it is hard for a developer to learn the syntax and commands of every cloud provider and to manage them all.

Here the role of Terraform comes into play.

TERRAFORM is a tool to manage the cloud.

It gives us Infrastructure as Code, which provides standardization. Here the HashiCorp Configuration Language (HCL) is used; HCL is simple and easy to apply.
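To give a feel for the syntax before we start, here is a minimal, purely illustrative HCL snippet (the names and AMI ID below are hypothetical, not part of this project). Every piece of infrastructure is declared as a resource block with a type, a local name, and arguments:

# hypothetical example: "aws_instance" is the resource type,
# "example" is the local name we choose to refer to it by
resource "aws_instance" "example" {
  ami           = "ami-12345678"  # illustrative AMI ID
  instance_type = "t2.micro"
}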

Task: We have to launch/create the following infrastructure using Terraform:

1. Create the key and security group which allow the required ports, like:

a. SSH: port 22

b. HTTP: port 80

2. Launch an EC2 instance.

3. In the EC2 instance, use the key and security group which we created in step 1.

4. Launch one Volume (EBS) and mount that volume into /var/www/html.

5. Import the code uploaded by the developer from the GitHub repository.

6. Import the GitHub repository code into /var/www/html.

7. Create an S3 bucket, deploy the images from the GitHub repository or from a local file into the S3 bucket, and change the permission to public-readable (so that everyone can access them).

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

SOLUTION:

NOTE: Terraform has been used on a Windows operating system as the host. The AWS CLI and Git should be installed on the machine where Terraform is running.

Resources used in this project:

aws_vpc

aws_internet_gateway

aws_subnet

aws_route_table

aws_route_table_association

aws_network_acl

aws_eip

aws_security_group

aws_instance

aws_ebs_volume

aws_volume_attachment

aws_s3_bucket

aws_s3_bucket_object

aws_cloudfront_distribution

CODE:

Step-1: Create a configuration file so Terraform can access your AWS account, and then use this named profile for safety instead of providing your access and secret keys directly in the code. (Running aws configure --profile nischal on the host creates such a profile.)

# provider
provider "aws" {
  profile = "nischal"
  region  = "ap-south-1"
}
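If Terraform does not pick up the profile automatically, the provider can also be pointed at the credentials file explicitly. This is a minimal sketch of an alternative form of the same block (use one or the other, not both); the path below is a hypothetical Windows location, so adjust it to wherever aws configure wrote your credentials:

# alternative to the provider block above, not in addition to it
provider "aws" {
  profile                 = "nischal"
  region                  = "ap-south-1"
  shared_credentials_file = "C:/Users/91986/.aws/credentials"  # hypothetical path
}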

Step-2: Create a VPC, an internet gateway, and a subnet which you will be using for the instance.

# vpc
resource "aws_vpc" "aws_vpc_1" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "aws_vpc_1"
  }
}

# internet gateway
resource "aws_internet_gateway" "aws_internet_gateway_1" {
  vpc_id = aws_vpc.aws_vpc_1.id
}

# aws subnet
resource "aws_subnet" "aws_subnet_1" {
  vpc_id     = aws_vpc.aws_vpc_1.id
  cidr_block = "10.0.1.0/24"
  # instances in this subnet need a public IP so the SSH provisioners below can reach them
  map_public_ip_on_launch = true
}

Step-3: Creating a route table and a route table association.

A route table association creates an association between a route table and a subnet, or between a route table and an internet gateway or virtual private gateway.

# aws route table
resource "aws_route_table" "aws_route_table_1" {
  vpc_id = aws_vpc.aws_vpc_1.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.aws_internet_gateway_1.id
  }

  depends_on = [aws_internet_gateway.aws_internet_gateway_1]
}

# aws route table association
resource "aws_route_table_association" "aws_route_table_association_1" {
  subnet_id      = aws_subnet.aws_subnet_1.id
  route_table_id = aws_route_table.aws_route_table_1.id
}

Step-4: Creating the network ACL resource.

You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

resource "aws_network_acl" "aws_network_acl_1" {vpc_id = aws_vpc.aws_vpc_1.idegress {protocol   = "-1"rule_no    = 100action     = "allow"cidr_block = "0.0.0.0/0"from_port  = 0to_port    = 0}ingress {protocol   = "-1"rule_no    = 200action     = "allow"cidr_block = "0.0.0.0/0"from_port  = 0to_port    = 0}tags = {Name = "aws_network_acl_1"}}#aws eipresource "aws_eip" "aws_eip_1" {instance = aws_instance.aws_instance_1.idvpc      = truedepends_on=["aws_internet_gateway.aws_internet_gateway_1"]}

Step-5: Create a security group which allows ingress traffic on port numbers 80 and 22. Here we have to provide the VPC ID.

# creating security group
resource "aws_security_group" "aws_security_group_1" {
  name        = "allow_traffic"
  description = "Allow TLS inbound traffic"
  # reference the VPC ID directly; quoting it would pass a literal string
  vpc_id = aws_vpc.aws_vpc_1.id

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ping"
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_traffic"
  }
}
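The task list also calls for creating a key. This walkthrough uses a pre-made key pair named newkey; if you would rather have Terraform generate the key as well, a minimal sketch using the hashicorp/tls provider could look like this (the resource names here are illustrative, not part of the original code):

# hypothetical alternative: generate the key pair in Terraform itself
resource "tls_private_key" "webserver_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "webserver_key_pair" {
  key_name   = "newkey"
  public_key = tls_private_key.webserver_key.public_key_openssh
}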

Step-6: Now we create an EC2 instance for the deployment of the web server, using the key pair and security group we created. Terraform resolves these references dynamically, so we don't need to hard-code values repeatedly.

Then we will connect to our instance via SSH from Terraform itself and install the httpd server and Git.

# ec2 instance launch
resource "aws_instance" "aws_instance_1" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  # inside a custom VPC, security groups must be referenced by ID
  vpc_security_group_ids = [aws_security_group.aws_security_group_1.id]
  key_name  = "newkey"
  subnet_id = aws_subnet.aws_subnet_1.id

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/91986/Desktop/aws/newkey.pem")
    # self refers to this instance; naming the resource here would create a cycle
    host = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "aws_instance_1"
  }
}

Step-7: Now we create our own EBS (Elastic Block Store) volume in the same availability zone as our EC2 instance and attach it. We then form a connection via SSH, format and mount the disk, and copy the HTML files from our repository on GitHub.

# create volume
resource "aws_ebs_volume" "aws_ebs_volume_1" {
  availability_zone = aws_instance.aws_instance_1.availability_zone
  size              = 1

  tags = {
    Name = "aws_ebs_volume_1"
  }
}

# attach volume
resource "aws_volume_attachment" "aws_volume_attachment_1" {
  depends_on = [
    aws_ebs_volume.aws_ebs_volume_1,
  ]
  device_name  = "/dev/xvdf"
  volume_id    = aws_ebs_volume.aws_ebs_volume_1.id
  instance_id  = aws_instance.aws_instance_1.id
  force_detach = true

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/91986/Desktop/aws/newkey.pem")
    host        = aws_instance.aws_instance_1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",
      "sudo mount /dev/xvdf /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/NischalRam/aws_task-1.git /var/www/html/",
    ]
  }
}

Step-8: Creating an S3 bucket with public read access. We then clone our image repo on the host PC and upload the image to the bucket with a public-read ACL. (Note that S3 bucket names must be globally unique.)

# s3 bucket
resource "aws_s3_bucket" "aws_s3_bucket_1" {
  bucket = "123mywebbucket321220"
  acl    = "public-read"
  region = "ap-south-1"

  tags = {
    Name = "123mywebbucket321220"
  }
}

# adding object to s3
resource "aws_s3_bucket_object" "aws_s3_bucket_object_1" {
  depends_on = [
    aws_s3_bucket.aws_s3_bucket_1,
  ]
  bucket = aws_s3_bucket.aws_s3_bucket_1.bucket
  key    = "logo.jpg"
  source = "C:/Users/91986/Desktop/aws_logo_smile_1200x630.png"
  acl    = "public-read"
}

output "bucketid" {
  value = aws_s3_bucket.aws_s3_bucket_1.bucket
}

output "myos_ip" {
  value = aws_instance.aws_instance_1.public_ip
}
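On newer AWS accounts, the account- or bucket-level Block Public Access settings can override the public-read ACL used above. If the uploaded object comes back as 403 Forbidden, a sketch of loosening those settings for this one bucket could look like this (an addition of mine, not part of the original walkthrough; only do this for genuinely public assets):

# assumption: only needed if Block Public Access rejects the public-read ACL
resource "aws_s3_bucket_public_access_block" "aws_s3_bucket_1_pab" {
  bucket = aws_s3_bucket.aws_s3_bucket_1.id

  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}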

Step-9: Creating an Amazon CloudFront distribution.

Info about Amazon CloudFront: Amazon CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

# cloud front
variable "oid" {
  type    = string
  default = "S3-"
}

locals {
  s3_origin_id = "${var.oid}${aws_s3_bucket.aws_s3_bucket_1.id}"
}

resource "aws_cloudfront_distribution" "aws_cloudfront_distribution_1" {
  depends_on = [
    aws_s3_bucket_object.aws_s3_bucket_object_1,
  ]

  origin {
    domain_name = aws_s3_bucket.aws_s3_bucket_1.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/91986/Desktop/aws/newkey.pem")
    host        = aws_instance.aws_instance_1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su <<END",
      # self refers to this distribution; naming the resource here would create a cycle
      "echo \"<img src='http://${self.domain_name}/${aws_s3_bucket_object.aws_s3_bucket_object_1.key}' height='400' width='400'>\" >> /var/www/html/file.html",
      "END",
    ]
  }
}

Here we have downloaded an image from GitHub and uploaded it to the S3 bucket. Then we created a CloudFront distribution for the same bucket.
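It can also help to print the CloudFront domain name after apply, so you can check the distribution directly. This small output block is an addition of mine, not part of the original code:

# optional: print the CloudFront domain name after apply
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.aws_cloudfront_distribution_1.domain_name
}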

Step-10: To see our webpage, we can use a local resource so that Terraform will open the web page for us.

resource "null_resource" "openwebsite"  {depends_on = [aws_cloudfront_distribution.aws_cloudfront_distribution_1, aws_volume_attachment.aws_volume_attachment_1]provisioner "local-exec" {command = "start chrome  http://${aws_instance.aws_instance_1.public_ip}/file.html"}}

Here we are simply telling Terraform to take the instance IP and run the Chrome command via local-exec. (The start chrome command is Windows-specific; on Linux you could use xdg-open instead.)

Here is the complete code. Run terraform init once to download the providers, then terraform apply to build the whole infrastructure (and terraform destroy to tear it down).

OUTPUT:

Connect with me on LinkedIn as well.
