
CI/CD Infrastructure Deployment Using an Immutable Approach

CI/CD and Its Benefits

Continuous Integration (CI) and Continuous Deployment (CD) are practices used in software development to streamline the process of building, testing, and deploying software changes.

Continuous Integration is the practice of frequently integrating code changes into a shared repository, where automated build and testing tools are used to verify the correctness of the changes. By doing so, it allows for the early detection of integration issues and conflicts, which ultimately leads to less time spent on fixing bugs and more time spent on developing new features.

Continuous Deployment is the practice of automatically deploying code changes to production once they have passed through the testing stage. This means that as soon as a new code change is pushed to the shared repository, it goes through automated testing and if the tests pass, it is automatically deployed to production.


The benefits of CI/CD include:

  • Faster feedback loop: developers get feedback on their code changes faster, reducing the time spent on debugging and allowing for more rapid development and deployment of new features.

  • Consistent and reliable deployments: automated testing ensures that code changes are deployed only when they meet the required quality criteria, leading to more reliable and consistent deployments.

  • Improved collaboration: CI/CD encourages collaboration between development, testing, and operations teams, leading to better communication and faster problem resolution.

  • Faster time-to-market: CI/CD enables organizations to deliver software changes more frequently and with more confidence, ultimately leading to faster time-to-market and competitive advantage.

Immutable Infrastructure and Its Advantages


Immutable infrastructure is an approach to managing IT infrastructure where infrastructure components such as virtual machines, containers, and network configurations are treated as immutable, meaning that they are never modified once deployed. Instead, new instances are created and deployed to replace the existing instances whenever there is a change or an update.

The advantages of immutable infrastructure include:

  1. Consistency: Since all infrastructure components are identical and are never modified, there are fewer variables that can cause unexpected behavior or errors. This makes it easier to maintain a consistent and predictable environment.

  2. Security: Immutable infrastructure is more secure because any changes made to the system will not be persistent. Any malicious activity can be detected and remediated by simply replacing the affected instances.

  3. Faster and more reliable deployment: Immutable infrastructure enables fast and reliable deployment of updates since the entire infrastructure is replaced with new instances. This eliminates the need for complex update scripts or patching and reduces the risk of deployment failures.

  4. Scalability: Immutable infrastructure is highly scalable, since new instances can easily be created and deployed to handle increasing workloads. This enables organizations to adapt quickly and easily to changes in demand.

  5. Cost savings: By treating infrastructure components as disposable and only paying for what is needed, organizations can reduce infrastructure costs and optimize resource usage.

Overall, immutable infrastructure provides a more reliable, secure, and scalable approach to managing IT infrastructure. It allows organizations to focus on building and deploying new applications and services, rather than maintaining and troubleshooting existing infrastructure.
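The replace-rather-than-modify idea can be illustrated with a toy shell sketch (the names `app-v1` and `app-v2` and the `deploy`/`retire` functions are placeholders for illustration, not real AWS operations): an update never patches the running instance; it launches a replacement and then retires the old one.

```shell
#!/bin/sh
# Toy illustration of an immutable rollout: "instances" are just labels here.
deploy() { echo "launched $1"; }    # stand-in for launching from a new AMI
retire() { echo "terminated $1"; }  # stand-in for terminating the old VM

current="app-v1"
new="app-v2"

deploy "$new"      # bring up the replacement first
retire "$current"  # then retire the old instance -- never patch it in place
current="$new"
echo "serving: $current"
```

In a real AWS rollout the same pattern appears as: build a new AMI with Packer, point the launch configuration at it, and let fresh instances replace the old ones.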


What Are We Going to Deploy?


We are going to deploy CI/CD infrastructure within a multi-tier network. The CI/CD infrastructure consists of one Jenkins build server, one Nexus Repository Manager server, and one SonarQube server, along with SonarQube's persistence layer, a PostgreSQL server.

We will deploy a VPC with four subnets: two public and two private. We will deploy five EC2 instances: the Jenkins, Nexus, SonarQube, and PostgreSQL servers in the private subnets, and one bastion host/jump box in a public subnet. A NAT Gateway is deployed in one public subnet, and its routes are registered in the private route table to give the EC2 instances in the private subnets outbound access to the internet.

All these EC2 instances are deployed from custom AMIs created by the immutable infra-deployment workflows in GitHub Actions. After a successful deployment of the infrastructure stack, we can access the Jenkins, Nexus, and SonarQube user interfaces using the load balancer DNS URL with the appropriate port.


Jenkins URL => http://LB-URL:8080

Nexus URL => http://LB-URL:8081

SonarQube URL => http://LB-URL


NOTE: Use the URL of the load balancer deployed in your AWS account in place of “LB-URL”.

Terraform and GitHub Actions will be used to configure the whole deployment.


Machine Image Creation using Packer and GitHub Actions

Creating machine images using Packer and GitHub Actions is a powerful way to automate the process of building and deploying infrastructure in the cloud. Here are the basic steps to get started:

  1. Install Packer: Packer is a tool for creating machine images for various cloud providers. Install it on your local machine or in the GitHub Actions runner.

  2. Create a Packer template: This is an HCL file that defines the image we want to create. It can include information about the base image, any software to be installed, and any configuration that needs to be done.

  3. Store the Packer template in your repository: Commit the Packer template to your GitHub repository so that it can be used by GitHub Actions.

  4. Create a GitHub Actions workflow: This is a YAML file that defines the steps to be taken when the workflow is triggered. In this case, we will define the steps to build the machine image using Packer and upload it to your cloud provider.

  5. Set up environment variables: Store any sensitive information such as cloud provider credentials or API keys as GitHub secrets or in your repository's environment variables.

  6. Trigger the GitHub Actions workflow: When we are ready to create the machine image, trigger the GitHub Actions workflow. The workflow will use the Packer template to build the machine image and upload it to your cloud provider.

  7. Verify the machine image: After the workflow completes, verify that the machine image was created successfully and that it is working as expected.

  8. Use the machine image: Use the machine image to deploy infrastructure in the cloud, such as virtual machines, containers, or serverless functions.

Using Packer and GitHub Actions together can save time and improve consistency in the process of creating machine images. It also enables collaboration with other developers and makes it easier to version control and audit changes to the machine image.


The full code is available here.

Code Structure of Machine Image Creation Workflows


The above figure shows files for GitHub Actions (Workflow Code), immutable configuration code (Packer Configuration Code), and Linux shell scripts (Shell Scripts) for building the configuration inside an immutable infrastructure.


Jenkins Workflow

The above figure shows the GitHub Actions workflow that builds the Jenkins machine image. Installing OpenJDK 11 and Jenkins are the main highlights of the build.


Code Explained

path=>packer/aws/jenkins

jenkins-ubuntu.pkr.hcl

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "ubuntu" {
  ami_name      = "jenkins-server-${local.timestamp}"
  instance_type = var.instance_type
  region        = var.region
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20221206"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }    
    owners = ["099720109477"]
  }

  ssh_username = "ubuntu"
  tags         = var.tags
  aws_polling {
    delay_seconds = 30
    max_attempts  = 300
  }

}

build {
  name        = "jenkins-server"
  description = <<EOF
  This build creates ubuntu images for ubuntu versions :
  * 22.04
  For the following builders :
  * amazon-ebs
  EOF
  sources     = ["source.amazon-ebs.ubuntu"]

  provisioner "shell" {
    script = "./scripts/jenkins.sh"
  }
}
jenkins.auto.pkrvars.hcl

region        = "us-east-1"
instance_type = "t2.small"
tags = {
  "Name"        = "JenkinsImage"
  "Environment" = "Development"
  "OS_Version"  = "Ubuntu 22.04"
  "Release"     = "Latest"
  "Created-by"  = "Packer-200244692886"
}
variables.pkr.hcl

variable "region" {
  type        = string
  description = "AWS Region"
}

variable "instance_type" {
  type        = string
  description = "EC2 Instance type"
}

variable "tags" {
  type        = map(string)
  description = "Tags"
  default     = {}
}

path=>scripts/jenkins.sh

jenkins.sh

#!/bin/bash
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get autoremove -y
sudo apt-get install openjdk-11-jdk -y
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y
sudo apt-get update
sudo apt-get install -y dotnet-sdk-6.0
###

The above files implement the immutable infrastructure for the Jenkins server.
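The `ami_name` gains a unique suffix from `regex_replace(timestamp(), "[- TZ:]", "")`, which strips the separator characters out of an RFC 3339 UTC timestamp. A quick shell equivalent (illustrative only, not part of the build) shows the resulting name format:

```shell
#!/bin/sh
# Equivalent of Packer's regex_replace(timestamp(), "[- TZ:]", ""):
# delete '-', ' ', 'T', 'Z' and ':' from a UTC RFC 3339 timestamp.
ts=$(date -u +%Y-%m-%dT%H:%M:%SZ | tr -d -- '- TZ:')
echo "jenkins-server-${ts}"  # e.g. jenkins-server-20230501123045
```

Because the suffix changes on every run, each workflow run produces a new AMI instead of mutating an existing one, which is exactly the immutable-image property we want.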


path=>.github/workflows

jenkins.yml

name: jenkins
# on: 
#  push:
#    branches:
#      - "master"
#    paths:
#      - "packer/aws/jenkins/**"
#      - "scripts/jenkins.sh"
#      - ".github/workflows/jenkins.yml"
#      - ".github/composite_actions/packer/**"
on: workflow_dispatch

jobs:
  ami-creation:
   name: Jenkins AMI creation #AWS CLI Setup 
   runs-on: ubuntu-22.04
   steps:
      - name: Checkout
        uses: actions/checkout@v2 
      
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: us-east-1

      - name: Test configuration of AWS CLI
        run: |
         aws --version
         aws configure list | grep region | awk '{print $2}'
     
      - name: Setup `packer`
        uses: hashicorp/setup-packer@main
        id: setup
        with:
          version: "1.8.5" # or `latest`

      - name: Run `packer init`
        id: init
        run: "packer init ./packer/aws/jenkins/"

      - name: Run `packer validate`
        id: validate
        run: "packer validate ./packer/aws/jenkins/"

      - name: Run `packer build`
        id: build
        run: "packer build ./packer/aws/jenkins/"

The above code file implements the GitHub Actions workflow for Jenkins server image creation.


Nexus Workflow

The above figure shows the GitHub Actions workflow that builds the Nexus Repository Manager machine image. Installing Java 1.8.0 OpenJDK, installing the Nexus Repository Manager, and configuring the Nexus service are the main highlights of the build.


Code Explained

path=>packer/aws/nexus

nexus-awslinux.pkr.hcl

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}
# source
source "amazon-ebs" "aws_linux" {
  ami_name      = "nexus-server-${local.timestamp}"
  instance_type = var.instance_type
  region        = var.region
  source_ami_filter {
    filters = {
      name                = "amzn2-ami-kernel-5.10-hvm-2.0.20221210.1-x86_64-gp2"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    owners = ["137112412989"]
  }

  ssh_username = "ec2-user"
  tags         = var.tags
  aws_polling {
    delay_seconds = 30
    max_attempts  = 300
  }

}

build {
  name        = "nexus-server"
  description = <<EOF
  This build creates aws linux images for nexus server
  EOF
  sources     = ["source.amazon-ebs.aws_linux"]

  provisioner "shell" {
    script = "./scripts/nexus.sh"
  }
}
nexus.auto.pkrvars.hcl

region        = "us-east-1"
instance_type = "t2.medium"
tags = {
  "Name"        = "NexusImage"
  "Environment" = "Development"
  "OS_Version"  = "Amazon Linux 2 Kernel 5.10"
  "Release"     = "Latest"
  "Created-by"  = "Packer-200244692886"
}
variables.pkr.hcl

variable "region" {
  type        = string
  description = "AWS Region"
}

variable "instance_type" {
  type        = string
  description = "EC2 Instance type"
}

variable "tags" {
  type        = map(string)
  description = "Tags"
  default     = {}
}

path=>scripts/nexus.sh


#!/bin/bash
sudo yum install java-1.8.0-openjdk.x86_64 wget -y
sudo mkdir -p /opt/nexus/
sudo mkdir -p /tmp/nexus/
cd /tmp/nexus/
NEXUSURL="https://download.sonatype.com/nexus/3/nexus-3.45.0-01-unix.tar.gz"
sudo wget $NEXUSURL -O nexus.tar.gz
EXTOUT=`sudo tar -xzvf nexus.tar.gz`
NEXUSDIR=`echo $EXTOUT | cut -d '/' -f1`
sudo rm -rf /tmp/nexus/nexus.tar.gz
sudo rsync -avzh /tmp/nexus/ /opt/nexus/
sudo useradd nexus
sudo chown -R nexus:nexus /opt/nexus
sudo touch /etc/systemd/system/nexus.service
sudo chmod 777 /etc/systemd/system/nexus.service
cat <<EOT>> /etc/systemd/system/nexus.service
[Unit]
Description=nexus service
After=network.target

[Service]
Type=forking
LimitNOFILE=65536
ExecStart=/opt/nexus/$NEXUSDIR/bin/nexus start
ExecStop=/opt/nexus/$NEXUSDIR/bin/nexus stop
User=nexus
Restart=on-abort

[Install]
WantedBy=multi-user.target
EOT
sudo chmod 777 /opt/nexus/$NEXUSDIR/bin/nexus.rc
echo 'run_as_user="nexus"' > /opt/nexus/$NEXUSDIR/bin/nexus.rc
sudo systemctl daemon-reload
sudo systemctl start nexus
sudo systemctl enable nexus

The above files implement the immutable infrastructure for the Nexus server.
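One subtlety in `nexus.sh` worth noting: the `<<EOT` delimiter of the systemd-unit heredoc is deliberately left unquoted so that `$NEXUSDIR` expands to the extracted directory name when the unit file is written. A minimal demonstration:

```shell
#!/bin/sh
# Unquoted heredoc delimiter => shell variables expand as the text is written.
NEXUSDIR="nexus-3.45.0-01"   # in nexus.sh this value comes from the tar output
cat <<EOT
ExecStart=/opt/nexus/$NEXUSDIR/bin/nexus start
EOT
```

This prints `ExecStart=/opt/nexus/nexus-3.45.0-01/bin/nexus start`; quoting the delimiter (`<<'EOT'`) would instead write the literal string `$NEXUSDIR` into the unit file and break the service.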

path=>.github/workflows

nexus.yml

name: nexus
# on: 
#  push:
#    branches:
#      - "master"
#    paths:
#      - "packer/aws/nexus/**"
#      - "scripts/nexus.sh"
on: workflow_dispatch

jobs:
  ami-creation:
   name: Nexus AMI creation #AWS CLI Setup 
   runs-on: ubuntu-22.04
   steps:
     - name: Configure AWS Credentials
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: us-east-1

     - name: Test configuration of AWS CLI
       run: |
         aws --version
         aws configure list | grep region | awk '{print $2}'
     - name: Checkout
       uses: actions/checkout@v2 

     - name: Setup `packer`
       uses: hashicorp/setup-packer@main
       id: setup
       with:
          version: "1.8.5" # or `latest`

     - name: Run `packer init`
       id: init
       run: "packer init ./packer/aws/nexus/"

     - name: Run `packer validate`
       id: validate
       run: "packer validate ./packer/aws/nexus/"

     - name: Run `packer build`
       id: build
       run: "packer build ./packer/aws/nexus/"

The above code file implements the GitHub Actions workflow for Nexus server image creation.


SonarQube Workflow

The above figure shows the GitHub Actions workflow that builds the SonarQube machine image. Configuring sysctl and user limits, installing OpenJDK, setting JAVA_HOME, installing SonarQube, and configuring the SonarQube properties and service are the main highlights of the build.


Code Explained

path=>packer/aws/sonarqube

sonarqube-ubuntu.pkr.hcl

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "ubuntu" {
  ami_name      = "sonarqube-server-${local.timestamp}"
  instance_type = var.instance_type
  region        = var.region
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230208"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    owners = ["099720109477"]
  }

  ssh_username = "ubuntu"
  tags         = var.tags
  aws_polling {
    delay_seconds = 30
    max_attempts  = 300
  }

}

build {
  name        = "sonarqube-server"
  description = <<EOF
  This build creates ubuntu images for the sonarqube server
  EOF
  sources     = ["source.amazon-ebs.ubuntu"]

  provisioner "shell" {
    script = "./scripts/sonarqube.sh"
  }
}
sonarqube.auto.pkrvars.hcl

region        = "us-east-1"
instance_type = "t2.medium"
tags = {
  "Name"        = "SonarqubeImage"
  "Environment" = "Development"
  "OS_Version"  = "Ubuntu 22.04"
  "Release"     = "Latest"
  "Created-by"  = "Packer-200244692886"
}
variables.pkr.hcl

variable "region" {
  type        = string
  description = "AWS Region"
}

variable "instance_type" {
  type        = string
  description = "EC2 Instance type"
}

variable "tags" {
  type        = map(string)
  description = "Tags"
  default     = {}
}

path=>scripts/sonarqube.sh


#!/bin/bash
sudo cp /etc/sysctl.conf /root/sysctl.conf_backup
sudo chmod 777 /etc/sysctl.conf

cat <<EOT>> /etc/sysctl.conf
vm.max_map_count=262144
fs.file-max=65536
EOT

sudo cp /etc/security/limits.conf /root/sec_limit.conf_backup
sudo chmod 777 /etc/security/limits.conf

cat <<EOT> /etc/security/limits.conf
sonarqube   -   nofile   65536
sonarqube   -   nproc    4096
EOT

sudo apt-get update -y
sudo wget https://builds.openlogic.com/downloadJDK/openlogic-openjdk/11.0.18+10/openlogic-openjdk-11.0.18+10-linux-x64.tar.gz
tar -xvf openlogic-openjdk-11.0.18+10-linux-x64.tar.gz
sudo mkdir -p /usr/lib/jvm/openjdk-11.0.18/
sudo mv openlogic-openjdk-11.0.18+10-linux-x64/* /usr/lib/jvm/openjdk-11.0.18/
sudo chmod 777 /etc/environment

cat <<EOT>> /etc/environment
JAVA_HOME="/usr/lib/jvm/openjdk-11.0.18"
EOT

export JAVA_HOME=/usr/lib/jvm/openjdk-11.0.18

. /etc/environment
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/openjdk-11.0.18/bin/java" 0
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/openjdk-11.0.18/bin/javac" 0
sudo update-alternatives --set java /usr/lib/jvm/openjdk-11.0.18/bin/java
sudo update-alternatives --set javac /usr/lib/jvm/openjdk-11.0.18/bin/javac
java -version

sudo apt-get update -y

sudo mkdir -p /sonarqube/
cd /sonarqube/
sudo curl -O https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.3.0.34182.zip
sudo apt-get install zip -y
sudo unzip -o sonarqube-8.3.0.34182.zip -d /opt/
sudo mv /opt/sonarqube-8.3.0.34182/ /opt/sonarqube
sudo groupadd sonar
sudo useradd -c "SonarQube - User" -d /opt/sonarqube/ -g sonar sonar
sudo chown sonar:sonar /opt/sonarqube/ -R
sudo cp /opt/sonarqube/conf/sonar.properties /root/sonar.properties_backup
sudo chmod 777 /opt/sonarqube/conf/sonar.properties

cat <<EOT> /opt/sonarqube/conf/sonar.properties
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar.admin@2023
sonar.jdbc.url=jdbc:postgresql://localhost:5432/sonarqube
sonar.search.javaOpts=-Xmx512m -Xms512m -XX:+HeapDumpOnOutOfMemoryError
EOT

sudo touch /etc/systemd/system/sonarqube.service
sudo chmod 777 /etc/systemd/system/sonarqube.service

cat <<EOT> /etc/systemd/system/sonarqube.service
[Unit]
Description=SonarQube service
After=syslog.target network.target
[Service]
Type=forking
ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
User=sonar
Group=sonar
Restart=always
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target
EOT

sudo systemctl daemon-reload
sudo systemctl enable sonarqube.service
sudo apt-get update -y

sudo apt-get install nginx -y
sudo rm -rf /etc/nginx/sites-enabled/default
sudo rm -rf /etc/nginx/sites-available/default
sudo touch /etc/nginx/sites-enabled/sonarqube.conf
sudo chmod 777 /etc/nginx/sites-enabled/sonarqube.conf
cat <<EOT> /etc/nginx/sites-enabled/sonarqube.conf
server{
    listen      80;    
    access_log  /var/log/nginx/sonar.access.log;
    error_log   /var/log/nginx/sonar.error.log;
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;
    location / {
        proxy_pass  http://127.0.0.1:9000;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
              
        proxy_set_header    Host            \$host;
        proxy_set_header    X-Real-IP       \$remote_addr;
        proxy_set_header    X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto http;
    }
}
EOT
sudo ln -s /etc/nginx/sites-enabled/sonarqube.conf /etc/nginx/sites-available/sonarqube.conf
sudo systemctl enable nginx.service
sudo ufw allow 80,9000,9001/tcp

The above files implement the immutable infrastructure for the SonarQube server.
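In the nginx configuration heredoc in `sonarqube.sh`, variables such as `$host` and `$remote_addr` belong to nginx, not to the shell, so they are escaped as `\$host`; otherwise the (empty) shell value would be substituted when the file is written. A quick check of what the escape produces:

```shell
#!/bin/sh
host="should-not-appear"   # deliberately set to prove the shell ignores it
# The backslash keeps "$host" literal inside the unquoted heredoc,
# leaving it for nginx to resolve at request time.
cat <<EOT
proxy_set_header Host \$host;
EOT
```

The output is the literal line `proxy_set_header Host $host;`, which is exactly what the nginx config file needs to contain.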

path=>.github/workflows

sonarqube.yml

name: sonarqube
# on: 
#  push:
#    branches:
#      - "master"
#    paths:
#      - "packer/aws/sonarqube/**"
#      - "scripts/sonarqube.sh"
on: workflow_dispatch
 
jobs:

  ami-creation:
   name: Sonarqube AMI creation #AWS CLI Setup 
   runs-on: ubuntu-22.04
   steps:
     - name: Configure AWS Credentials
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: us-east-1

     - name: Test configuration of AWS CLI
       run: |
         aws --version
         aws configure list | grep region | awk '{print $2}'
     - name: Checkout
       uses: actions/checkout@v2 

     - name: Setup `packer`
       uses: hashicorp/setup-packer@main
       id: setup
       with:
          version: "1.8.5" # or `latest`

     - name: Run `packer init`
       id: init
       run: "packer init ./packer/aws/sonarqube/"

     - name: Run `packer validate`
       id: validate
       run: "packer validate ./packer/aws/sonarqube/"

     - name: Run `packer build`
       id: build
       run: "packer build ./packer/aws/sonarqube/"

The above code file implements the GitHub Actions workflow for SonarQube server image creation.


PostgreSQL Workflow

The above figure shows the GitHub Actions workflow that builds the PostgreSQL machine image. Configuring sysctl and user limits, installing OpenJDK, setting JAVA_HOME, installing PostgreSQL, and creating the database objects are the main highlights of the build.


Code Explained

path=>packer/aws/postgres

postgres-ubuntu.pkr.hcl

locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "ubuntu" {
  ami_name      = "postgres-sonardb-server-${local.timestamp}"
  instance_type = var.instance_type
  region        = var.region
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230208"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    owners = ["099720109477"]
  }

  ssh_username = "ubuntu"
  tags         = var.tags
  aws_polling {
    delay_seconds = 30
    max_attempts  = 300
  }

}

build {
  name        = "postgres-sonardb-server"
  description = <<EOF
  This build creates ubuntu images for the postgres server
  EOF
  sources     = ["source.amazon-ebs.ubuntu"]

  provisioner "shell" {
    script = "./scripts/postgres.sh"
  }
}
postgres.auto.pkrvars.hcl

region        = "us-east-1"
instance_type = "t2.medium"
tags = {
  "Name"        = "PostgresImage"
  "Environment" = "Development"
  "OS_Version"  = "Ubuntu 22.04"
  "Release"     = "Latest"
  "Created-by"  = "Packer-200244692886"
}
variables.pkr.hcl

variable "region" {
  type        = string
  description = "AWS Region"
}

variable "instance_type" {
  type        = string
  description = "EC2 Instance type"
}

variable "tags" {
  type        = map(string)
  description = "Tags"
  default     = {}
}

path=>scripts/postgres.sh


#!/bin/bash
sudo cp /etc/sysctl.conf /root/sysctl.conf_backup
sudo chmod 777 /etc/sysctl.conf

cat <<EOT>> /etc/sysctl.conf
vm.max_map_count=262144
fs.file-max=65536
EOT

sudo cp /etc/security/limits.conf /root/sec_limit.conf_backup
sudo chmod 777 /etc/security/limits.conf

cat <<EOT> /etc/security/limits.conf
sonarqube   -   nofile   65536
sonarqube   -   nproc    4096
EOT

sudo apt-get update -y
sudo wget https://builds.openlogic.com/downloadJDK/openlogic-openjdk/11.0.18+10/openlogic-openjdk-11.0.18+10-linux-x64.tar.gz
tar -xvf openlogic-openjdk-11.0.18+10-linux-x64.tar.gz
sudo mkdir -p /usr/lib/jvm/openjdk-11.0.18/
sudo mv openlogic-openjdk-11.0.18+10-linux-x64/* /usr/lib/jvm/openjdk-11.0.18/
sudo chmod 777 /etc/environment

cat <<EOT>> /etc/environment
JAVA_HOME="/usr/lib/jvm/openjdk-11.0.18"
EOT

export JAVA_HOME=/usr/lib/jvm/openjdk-11.0.18

. /etc/environment
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/openjdk-11.0.18/bin/java" 0
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/openjdk-11.0.18/bin/javac" 0
sudo update-alternatives --set java /usr/lib/jvm/openjdk-11.0.18/bin/java
sudo update-alternatives --set javac /usr/lib/jvm/openjdk-11.0.18/bin/javac
java -version

sudo apt-get update -y
sudo wget -q https://www.postgresql.org/media/keys/ACCC4CF8.asc -O - | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sudo apt-get install postgresql postgresql-contrib -y
sudo chmod 777 /etc/postgresql/14/main/postgresql.conf
sudo sed -i '1,/#listen_addresses/s/#listen_addresses/listen_addresses/g' /etc/postgresql/14/main/postgresql.conf

sudo sed -i '1,/localhost/s/localhost/*/g' /etc/postgresql/14/main/postgresql.conf

sudo systemctl enable postgresql.service
sudo systemctl start  postgresql.service

echo "postgres:admin@2023" | sudo chpasswd
sudo runuser -l postgres -c "createuser sonar"
sudo -i -u postgres psql -c "ALTER USER sonar WITH ENCRYPTED PASSWORD 'sonar.admin@2023';"
sudo -i -u postgres psql -c "CREATE DATABASE sonarqube OWNER sonar;"
sudo -i -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE sonarqube to sonar;"
sudo systemctl restart  postgresql

The above files implement the immutable infrastructure for the PostgreSQL server.
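The two `sed` range expressions in `postgres.sh` only touch the first match in `postgresql.conf`: `1,/#listen_addresses/` uncomments the directive, and `1,/localhost/` then widens it to `*` so the SonarQube server can reach the database over the network. The effect can be reproduced on a throwaway sample line (assuming GNU sed):

```shell
#!/bin/sh
# Reproduce the postgres.sh edits on a sample copy of the directive.
tmp=$(mktemp)
echo "#listen_addresses = 'localhost'" > "$tmp"
# Range 1,/re/ limits each substitution to the first matching region.
sed -i '1,/#listen_addresses/s/#listen_addresses/listen_addresses/g' "$tmp"
sed -i '1,/localhost/s/localhost/*/g' "$tmp"
cat "$tmp"   # listen_addresses = '*'
rm -f "$tmp"
```

Note that listening on `*` only opens the socket; in a real deployment `pg_hba.conf` still governs which clients may authenticate.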

path=>.github/workflows

postgres.yml

name: postgres
# on: 
#  push:
#    branches:
#      - "master"
#    paths:
#      - "packer/aws/postgres/**"
#      - "scripts/postgres.sh"
on: workflow_dispatch
 
jobs:
  ami-creation:
   name: Postgres AMI creation #AWS CLI Setup 
   runs-on: ubuntu-22.04
   steps:
     - name: Configure AWS Credentials
       uses: aws-actions/configure-aws-credentials@v1
       with:
         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         aws-region: us-east-1

     - name: Test configuration of AWS CLI
       run: |
         aws --version
         aws configure list | grep region | awk '{print $2}'
     - name: Checkout
       uses: actions/checkout@v2 

     - name: Setup `packer`
       uses: hashicorp/setup-packer@main
       id: setup
       with:
          version: "1.8.5" # or `latest`

     - name: Run `packer init`
       id: init
       run: "packer init ./packer/aws/postgres/"

     - name: Run `packer validate`
       id: validate
       run: "packer validate ./packer/aws/postgres/"

     - name: Run `packer build`
       id: build
       run: "packer build ./packer/aws/postgres/"

The above code file implements the GitHub Actions workflow for PostgreSQL server image creation.


Infrastructure deployment using Terraform and GitHub Actions

Terraform is an open-source infrastructure as code (IaC) tool that allows you to automate the deployment of infrastructure resources across multiple cloud platforms, including AWS, Azure, and Google Cloud. GitHub Actions is an automation tool that enables you to automate various tasks, including infrastructure deployment.


By combining Terraform and GitHub Actions, you can create a deployment pipeline that automates the deployment of infrastructure resources to your cloud platform. Here's how to deploy infrastructure using Terraform and GitHub Actions:

  • Set up your Terraform environment

    1. Install Terraform on your local machine or a CI/CD server

    2. Create a Terraform configuration file (e.g., main.tf) that defines the infrastructure resources you want to deploy

    3. Add a Terraform backend configuration to store the Terraform state in a remote location (e.g., an S3 bucket)

  • Set up your GitHub Actions workflow

    1. Create a new GitHub repository or use an existing one

    2. Create a .github/workflows directory and add a new workflow file (e.g., terraform.yml) that defines your deployment pipeline

    3. Configure your GitHub Actions workflow to use your Terraform backend configuration to store the Terraform state

  • Add Terraform commands to your GitHub Actions workflow

    1. Use the terraform init command to initialize the Terraform backend and download any necessary provider plugins

    2. Use the terraform plan command to generate an execution plan for the infrastructure resources you want to deploy

    3. Use the terraform apply command to apply the changes to your cloud platform

  • Set up environment variables and secrets

    1. Use GitHub Actions secrets to store sensitive information like API keys and access tokens

    2. Use GitHub Actions environment variables to store non-sensitive information like the region and environment name

With these steps, we can automate the deployment of infrastructure resources using Terraform and GitHub Actions. Once we've set up our deployment pipeline, any changes to our Terraform configuration file will trigger a new deployment to our cloud platform, ensuring that our infrastructure is always up-to-date and consistent.


Code Structure of Infrastructure Deployment Workflow


The above figure shows files for building the configuration to deploy CI/CD infrastructure using Terraform code. This code uses the machine image created by the machine image creation workflow.


Plan and Execute Workflow

When a pull request is opened against the Terraform codebase, the infrastructure deployment workflow is triggered and the Terraform plan is shown in the pull request. Once we merge the pull request, the terraform apply action is triggered.

Around 37 resources should be created.

Terraform code goes here
Code Explained

path=>.github/workflows

stackdeployment.yml

name: "Infrastructure stack deployment"
on:
 push:
   branches:
    - "master"
   paths:
    - "terraform/**"
 pull_request:
   branches:
    - "master"
   paths:
    - "terraform/**"
# on: workflow_dispatch
permissions:
  pull-requests: write
  
env:
 # verbosity setting for Terraform logs--test
 TF_LOG: INFO
 # Credentials for deployment to AWS
 aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
 aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
 # S3 bucket for the Terraform state
 BUCKET_TF_STATE: ${{ secrets.BUCKET_TF_STATE }} 

jobs:
 terraform:
   name: "CI/CD Infrastructure Deployment Workflow"
   runs-on: ubuntu-22.04
   defaults:
     run:
       shell: bash
       # We keep Terraform files in the terraform directory.
       working-directory: ./terraform

   steps:
     - name: Checkout the repository to the runner
       uses: actions/checkout@v2

     - name: Configure AWS Credentials
       uses: aws-actions/configure-aws-credentials@v1
       with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1

     - name: Test configuration of AWS CLI
       run: |
        aws --version
        aws configure list | grep region | awk '{print $2}'
 
     - name: Setup Terraform with specified version on the runner
       uses: hashicorp/setup-terraform@v2
       with:
         terraform_version: 1.3.5

     - name: Terraform init
       id: init
       run: terraform init -backend-config="bucket=$BUCKET_TF_STATE"

     - name: Terraform format
       id: fmt
       run: terraform fmt -check

     - name: Terraform validate
       id: validate
       run: terraform validate

     - name: Terraform plan
       id: plan
       if: github.event_name == 'pull_request'
       run: terraform plan -no-color -input=false
       continue-on-error: true

    

      - uses: actions/github-script@v6
        if: github.event_name == 'pull_request'
        env:
          PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
        with:
          script: |
            const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
            #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
            #### Terraform Validation 🤖\`${{ steps.validate.outcome }}\`
            #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`

            <details><summary>Show Plan</summary>

            \`\`\`\n
            ${process.env.PLAN}
            \`\`\`

            </details>

            *Pushed by: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })

      - name: Terraform Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1

      - name: Terraform Apply
        if: github.ref == 'refs/heads/master' && github.event_name == 'push'
        run: terraform apply -auto-approve -input=false

The above code file implements the GitHub Actions workflow for CI/CD infrastructure deployment.


Testing deployed infrastructure

We can test the deployed infrastructure by accessing the user interfaces of all the deployed components. We can browse the various screens of Jenkins, Nexus, and SonarQube before releasing the infrastructure to the development team. Visually inspecting the cloud infrastructure in the AWS portal is another manual way of verifying the deployment.
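A quick way to script these checks is a small smoke test against the load balancer; a minimal sketch, assuming the placeholder hostname my-lb.example.com and the service ports used in this deployment (Jenkins 8080, Nexus 8081, SonarQube 80):

```shell
#!/usr/bin/env bash
# Smoke-test sketch: pass the real load balancer DNS name as the first
# argument; my-lb.example.com is only a placeholder default.
LB_HOST="${1:-my-lb.example.com}"

JENKINS_URL="http://${LB_HOST}:8080"
NEXUS_URL="http://${LB_HOST}:8081"
SONARQUBE_URL="http://${LB_HOST}"

for url in "$JENKINS_URL" "$NEXUS_URL" "$SONARQUBE_URL"; do
  echo "checking ${url}"
  # The curl probe is commented out so the script can run before the
  # infrastructure exists; uncomment once the stack is live.
  # curl -fsS -o /dev/null "$url" && echo "  -> reachable"
done
```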


Jenkins Deployment Testing


The Jenkins screen can be accessed at "http://load-balancer-URL:8080".


The above figure shows the initial Jenkins screen.


The above figure shows the initial Jenkins configuration screen.


The above figure shows the initial plugin configuration screen.


The above figure shows the Jenkins instance URL.


The above figure shows the admin user creation screen.


Nexus Deployment Testing


The Nexus screen can be accessed at "http://load-balancer-URL:8081".

The above figure shows the initial screen of Nexus Repository Manager.


The above figure shows the Nexus administrator panel.


SonarQube and PostgreSQL Deployment Testing


The SonarQube screen can be accessed at "http://load-balancer-URL". SonarQube uses the PostgreSQL database to store its configuration and analysis reports.
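For reference, the SonarQube-to-PostgreSQL connection is configured through sonar.properties; an illustrative fragment, in which the host, database, and user names are placeholder assumptions (the real values are baked into the machine image):

```properties
# Fragment of /opt/sonarqube/conf/sonar.properties (illustrative only)
sonar.jdbc.username=sonar
sonar.jdbc.password=<stored-secret>
# Placeholder host and database name; point this at the private IP of
# the PostgreSQL EC2 instance.
sonar.jdbc.url=jdbc:postgresql://<postgresql-private-ip>:5432/sonarqube
```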


The above figure shows the initial webpage of SonarQube.


The above figure shows the admin panel of SonarQube after logging in as an Admin user.


AWS Portal Assessment


The above figure shows the machine images generated by the Jenkins, Nexus, SonarQube, and PostgreSQL GitHub Actions workflows for the immutable configuration.


The above figure shows deployed EC2 instances from the above-mentioned AMIs (machine images).


The above figure shows deployed Load Balancer.


The above figure shows Load Balancer listeners.


The above figure shows target groups attached to Load Balancer.


Security Group Traffic Assessment

We can access Jenkins, Nexus, and SonarQube via the Load Balancer on different ports. Except for the Bastion Host, all other EC2 instances are deployed in private subnets, so to access the private EC2 machines via SSH we have to log in to the Bastion Host first and then perform the required activities. Direct access to the private EC2 machines is not allowed.
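One convenient way to set up that two-hop SSH access is a client-side configuration with ProxyJump; a sketch with placeholder hostnames, IPs, and key path (all assumptions):

```text
# ~/.ssh/config (illustrative fragment)
Host bastion
    HostName <bastion-public-ip>
    User ec2-user
    IdentityFile ~/.ssh/cicd-key.pem

Host jenkins-private
    HostName <jenkins-private-ip>
    User ec2-user
    IdentityFile ~/.ssh/cicd-key.pem
    # Route the connection through the Bastion Host automatically
    ProxyJump bastion
```

With this in place, running "ssh jenkins-private" tunnels through the Bastion Host transparently instead of requiring two manual logins.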


The above figure shows that traffic from the Load balancer and Bastion Host is allowed to Jenkins Server.


The above figure shows that traffic from the Load balancer, Jenkins machine, and Bastion Host is allowed to Nexus Server.


The above figure shows that traffic from the Load balancer, Jenkins machine, and Bastion Host is allowed to SonarQube Server.


The above figure shows that traffic from Bastion Host and SonarQube machine is allowed to PostgreSQL Server.


The above figure shows the Load Balancer security group configuration.


Clean-Up Workflow


The above figure shows the GitHub Actions workflow which deletes the deployed infrastructure.


Code Explained

path=>.github/workflows

stackdestroy.yml

name: "Infrastructure stack delete"
on: workflow_dispatch

env:
 # verbosity setting for Terraform logs--test
 TF_LOG: INFO
 # Credentials for deployment to AWS
 aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
 aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
 # S3 bucket for the Terraform state
 BUCKET_TF_STATE: ${{ secrets.BUCKET_TF_STATE }} 

jobs:
 terraform:
   name: "CI/CD Infrastructure Delete Workflow"
   runs-on: ubuntu-22.04
   defaults:
     run:
       shell: bash
       # We keep Terraform files in the terraform directory.
       working-directory: ./terraform

   steps:
     - name: Checkout the repository to the runner
       uses: actions/checkout@v2

     - name: Configure AWS Credentials
       uses: aws-actions/configure-aws-credentials@v1
       with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1

     - name: Test configuration of AWS CLI
       run: |
        aws --version
        aws configure list | grep region | awk '{print $2}'
 
     - name: Setup Terraform with specified version on the runner
       uses: hashicorp/setup-terraform@v2
       with:
         terraform_version: 1.3.5

     - name: Terraform init
       id: init
       run: terraform init -backend-config="bucket=$BUCKET_TF_STATE"    

     - name: Terraform destroy
       id: destroy       
       run: terraform destroy -auto-approve -no-color -input=false
       continue-on-error: true

The above code file implements the GitHub Actions workflow for deleting CI/CD infrastructure.


Summary


Just to summarize the post 😊

We went through CI/CD and its benefits, along with immutable infrastructure and its advantages. Then we created 4 GitHub Actions workflows for creating machine images for the CI/CD infrastructure using Packer. Next, we deployed a multi-tier network using a Terraform configuration and used the custom machine images to deploy the CI/CD infrastructure. The infrastructure deployment was carried out by a GitHub Actions workflow. Finally, we destroyed and cleaned up the deployment using the destroy workflow.

That’s it for the day, do let me know your feedback in the comment section.

I will be back with some other topic, till then Bye!

The full code is available here