
Deploy AWS Launch Configuration & AutoScaling Group Using Terraform

Welcome back to the series Deploying On AWS Cloud Using Terraform 👨🏻‍💻. Throughout this series, we focus on core Terraform concepts by launching essential AWS services from scratch, taking your infra-as-code journey from beginner to advanced with real-life use cases and YouTube tutorials.

If you are new to Terraform and want to start your journey towards becoming an infra-as-code developer as part of your DevOps role, buckle up 🚴‍♂️, let's get started, and understand core Terraform concepts by implementing them…🎬

🔎Basic Terraform Configurations🔍

As part of the basic configuration we are going to set up three Terraform files:

1. Providers File:- Terraform relies on plugins called “providers” to interact with cloud providers, SaaS providers, and other APIs.

Providers are distributed separately from Terraform itself, and each provider has its own release cadence and version numbers.
The Terraform Registry is the main directory of publicly available Terraform providers, and hosts providers for most major infrastructure platforms. Each provider has its own documentation, describing its resource types and their arguments.
We will be using the AWS provider throughout this Terraform series. Make sure to refer to the Terraform AWS provider documentation for up-to-date information.
Provider documentation in the Registry is versioned; you can use the version menu in the header to change which version you’re viewing.

 provider "aws" { 
 region = "var.AWS_REGION" 
 shared_credentials_file = "" 
 }
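
If you prefer not to reference a credentials file at all, the AWS provider can also read credentials from the standard environment variables or from a named profile. A minimal sketch, assuming a profile called terraform-demo exists in ~/.aws/credentials (the profile name is hypothetical):

provider "aws" {
  region  = var.AWS_REGION
  profile = "terraform-demo"   # hypothetical named profile from ~/.aws/credentials
}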

2. Variables File:- Terraform variables let us customize aspects of Terraform modules without altering the module's own source code. This allows us to share modules across different Terraform configurations, reusing the same data in multiple places.

When you declare variables in the root module of your configuration, you can set their values using CLI options and environment variables (see the override example after the variables file below). When you declare them in child modules, the calling module should pass values in the module block.

variable "AWS_REGION" {
  default = "us-east-1"
}
#-------------------------Fetch VPC ID---------------------------------
data "aws_vpc" "GetVPC" {
filter {
    name   = "tag:Name"
    values = ["CustomVPC"]
          }
}
#-------------------------Variables For Autoscaling---------------------
variable "instance_type" {
  type = string
  default = "t2.micro"
}
variable "autoscaling_group_min_size" {
  type = number
  default = 2
}
variable "autoscaling_group_max_size" {
  type = number
  default = 3
}
variable "aws_key_pair" {
  type = string
  default = 
}
#-------------------------Fetch Public Subnets List----------------------
data "aws_subnet_ids" "GetSubnet_Ids" {
  vpc_id = data.aws_vpc.GetVPC.id
  filter {
    name   = "tag:Type"
    values = ["Public"]
  }
}
data "aws_subnet" "GetSubnet" {
  count = "${length(data.aws_subnet_ids.GetSubnet_Ids.ids)}"
  id    = "${tolist(data.aws_subnet_ids.GetSubnet_Ids.ids)[count.index]}"
}
#-------------------------Fetch Target Group ARN----------------------
variable "elb_tg_arn" {
  type        = string
  description = "ARN of the existing load balancer target group the ASG will register with"
}
data "aws_lb_target_group" "elb_tg" {
  arn = var.elb_tg_arn
}
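
Once these variables are declared in the root module, their defaults can be overridden at plan/apply time without editing the file, either with a CLI flag or with a TF_VAR_ environment variable (the values below are only illustrative):

terraform apply -var="instance_type=t2.small"
export TF_VAR_AWS_REGION="us-east-1"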

3. Versions File:- It is always a best practice to maintain a versions file where you specify the Terraform version against which your stack has been tested and is running in production.

terraform {
  required_version = ">= 0.12"
}
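
You can also pin the provider version in the same block so that every run installs a known-good AWS provider release. A minimal sketch, where the version constraint is only an example; use whatever constraint you have actually tested against:

terraform {
  required_version = ">= 0.12"

  # example provider pin - adjust the constraint to the version you have tested
  required_providers {
    aws = ">= 3.0"
  }
}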


A security group acts as a virtual firewall that controls the inbound and outbound traffic flowing in and out of your instances.

✦ aws_security_group:- This resource defines inbound and outbound traffic rules at the instance level.

✦ name:- This is an optional argument to define the name of the security group.
✦ description:- This is an optional argument to mention details about the security group that we are creating.
✦ vpc_id:- This is a mandatory argument and refers to the id of a VPC to which it would be associated.
✦ tags:- One of the most important properties used in all resources. Always make sure to attach tags to all your resources.
The egress and ingress arguments are processed in attribute-as-blocks mode (see the sketch after the resource below).

resource "aws_security_group" "asg_sg" {
  name        = "ASG_Allow_Traffic"
  description = "Allow all inbound traffic for asg"
  vpc_id      = data.aws_vpc.GetVPC.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 8   # for ICMP, from_port is the ICMP type (8 = echo request, i.e. ping)
    to_port     = 0   # and to_port is the ICMP code
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    cidr_blocks     = ["0.0.0.0/0"]
  }
  tags = {
    Name = "terraform-asg-security-group"
  }
}
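
Because ingress and egress are attribute-as-blocks arguments, they can also be assigned as a plain list instead of repeated blocks. A minimal sketch (the resource and group names are hypothetical, for illustration only); assigning an empty list explicitly declares that the group has no inbound rules, which simply omitting the blocks would not do:

resource "aws_security_group" "no_inbound_example" {
  name    = "no-inbound-example"   # hypothetical name, illustration only
  vpc_id  = data.aws_vpc.GetVPC.id
  ingress = []                     # attribute form: explicitly no ingress rules
}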

Before creating the Auto Scaling group, let's first define the launch configuration for the EC2 instances that autoscaling will use later.

✦ aws_launch_configuration:- This resource creates a launch configuration for EC2 instances that we are going to deploy as part of our autoscaling group.

✦ name_prefix:- This is an optional argument to create a unique name beginning with the specified prefix.
✦ image_id:- This is a mandatory argument specifying the AMI ID from which the EC2 instances will be launched.
✦ instance_type:- This is a mandatory argument specifying the instance type for the EC2 instances, e.g. t2.micro, t2.small.
✦ key_name:- This is an optional argument naming the key pair used for SSH access to the EC2 instances.
✦ security_groups:- This is an optional argument listing the security groups that control inbound and outbound traffic to your EC2 instances inside a subnet.
✦ user_data:- This is an optional argument providing commands or scripts to be executed when the EC2 instance launches.
✦ lifecycle:- lifecycle is a nested block that can appear within a resource block.
create_before_destroy:- by default, when Terraform must change a resource argument that cannot be updated in place due to remote API limitations, it destroys the existing object and then creates a replacement with the new arguments. Setting create_before_destroy = true reverses this order, so the replacement is created before the old object is destroyed.

resource "aws_launch_configuration" "launch_config_dev" {
  name_prefix                 = "webteir_dev"
  image_id                    = "ami-0742b4e673072066f"
  instance_type               = "${var.instance_type}"
  key_name                    = "${var.aws_key_pair}"
  security_groups             = ["${aws_security_group.asg_sg.id}"]
  associate_public_ip_address = true
  user_data                   = <Deployed EC2 Using ASG" | sudo tee /var/www/html/index.html
    EOF
  lifecycle {
    create_before_destroy = true
  }
}

✦ aws_autoscaling_group:- This resource creates the Auto Scaling group itself, the collection of EC2 instances that can be associated with load balancers.

✦ launch_configuration:- This is an optional argument naming the launch configuration to use.
✦ min_size:- This is a mandatory argument defining the minimum size of the Auto Scaling group.
✦ max_size:- This is a mandatory argument defining the maximum size of the Auto Scaling group.
✦ target_group_arns:- This is an optional argument listing the target group ARNs with which the EC2 instances register.
✦ vpc_zone_identifier:- This is an optional argument defining the list of subnet IDs to launch resources in.
✦ tag:- One of the most important properties used in all resources. Always make sure to attach tags to all your resources.

resource "aws_autoscaling_group" "autoscaling_group_dev" {
  launch_configuration = "${aws_launch_configuration.launch_config_dev.id}"
  min_size             = "${var.autoscaling_group_min_size}"
  max_size             = "${var.autoscaling_group_max_size}"
  target_group_arns    = ["${data.aws_lb_target_group.elb_tg.arn}"]
  vpc_zone_identifier  = "${data.aws_subnet.GetSubnet.*.id}"

  tag {
    key                 = "Name"
    value               = "autoscaling-group-dev"
    propagate_at_launch = true
  }
}

Output values make information about your infrastructure available on the command line, and can expose information for other Terraform configurations to use. Output values are similar to return values in programming languages.

output "asg_sg" {
  value       = aws_security_group.asg_sg.id
  description = "This is Security Group for autoscaling launch configuration."
}
output "aws_launch_configuration" {
  value       = aws_launch_configuration.launch_config_dev.id
  description = "This is ASG Launch Configuration ID."
}
output "autoscaling_group_dev" {
  value       = aws_autoscaling_group.autoscaling_group_dev.id
  description = "This is ASG ID."
}
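
After terraform apply completes, these values can be read back on the command line with terraform output (for example, terraform output asg_sg). Another Terraform configuration can also consume them through the terraform_remote_state data source. A minimal sketch, assuming this stack keeps its state in a local file at the path shown (the data source name and the path are hypothetical):

data "terraform_remote_state" "asg_stack" {
  backend = "local"
  config = {
    path = "../autoscaling/terraform.tfstate"   # hypothetical path to this stack's state file
  }
}

# e.g. reuse the exported security group ID in another resource:
# security_groups = [data.terraform_remote_state.asg_stack.outputs.asg_sg]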

🔊To view the entire GitHub code click here

  1️⃣ The terraform fmt command is used to rewrite Terraform configuration files to a canonical format and style👨‍💻.

terraform fmt

2️⃣ Initialize the working directory by running the command below. The initialization includes installing the plugins and providers necessary to work with resources. 👨‍💻

terraform init

3️⃣ Create an execution plan based on your Terraform configurations. 👨‍💻

terraform plan

4️⃣ Execute the execution plan that the terraform plan command proposed. 👨‍💻

terraform apply --auto-approve

⛔️ Hashicorp Terraform
⛔️ AWS CLI
⛔️ Hashicorp Terraform Extension Guide
⛔️ Terraform Autocomplete Extension Guide
⛔️ AWS Security Group
⛔️ AWS Launch Configuration
⛔️ AWS Autoscaling Group
⛔️ Lifecycle Meta-Argument

In this blog, we have configured the below resources:
✦ AWS Security Group for the ASG Launch Configuration.
✦ AWS Launch Configuration.
✦ AWS Autoscaling Group.
I have also referenced the arguments and documentation used at each step so that, while writing the code, you can easily follow the official Terraform documentation. Stay with me for the next blog.

📢 Stay tuned for my next blog…..

So, did you find my content helpful? If you did or like my other content, feel free to buy me a coffee. Thanks.


Author - Dheeraj Choudhary

I am an IT professional with 11+ years of experience specializing in DevOps, build and release engineering, and software configuration management, automating build, deploy, and release processes. I blog about AWS and DevOps on my YouTube channel, which focuses on content such as AWS, DevOps, open source, AI/ML, and AWS community activities.

