Setting up Immutable Infrastructure Using HashiCorp Terraform and Jenkins


Infrastructure as Code has gained a lot of popularity because it is easy to implement and makes it possible to build clean infrastructure with a declarative programming model. This article covers various approaches to building and maintaining your infrastructure with Terraform and a Jenkins server.

DevOps methodologies and practices have transformed the complexities of IT infrastructure management into code that manages the entire IT infrastructure with very little maintenance. There are plenty of configuration management and orchestration tools for tailoring IT Infrastructure as Code (IaC), but selecting the right one depends on numerous factors. One needs to analyse the pros and cons of a tool, understand how it fits a particular use case, and check that its source code is fully open sourced so that there is no vendor lock-in. There should also be clear official documentation, good community support, easy integration with the platform, and interoperability with different cloud solutions and third party software.

Figure 1: Mutable infrastructure
Figure 2: Immutable infrastructure

Common scenarios in IT infrastructure management

Provisioning and de-provisioning resources in a cloud environment is common practice when testing and releasing a software product without bugs. In conjunction with continuous integration (CI) and continuous deployment (CD) tools, we may need both orchestration tools and configuration management solutions. In any cloud environment, orchestration tools such as Terraform, CloudFormation, Heat and Azure Resource Manager are responsible for provisioning infrastructure. Configuration management tools such as Chef, Ansible, Puppet and SaltStack take care of installing software packages on the server, configuring services, and deploying applications on top of them. Today, however, configuration management tools offer some support for provisioning cloud resources, and provisioning tools offer some support for installing and configuring software on newly created resources, which balances the complexity between the two. Even so, it is difficult to achieve everything with a single tool, so the recommended way is to use both provisioning and configuration tools to manage infrastructure at scale.
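To make this division of labour concrete, here is a minimal sketch in HCL (Terraform 0.11 syntax, matching the version used later in this article): Terraform provisions the server, and a remote-exec provisioner hands off the kind of package installation that a configuration management tool would normally perform. The AMI ID, key pair and SSH details are hypothetical.

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type = "t2.micro"
  key_name      = "deployer-key"          # hypothetical key pair name

  # Configuration management step: install and start a web server
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y nginx",
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = "${file(pathexpand("~/.ssh/deployer-key.pem"))}"
    }
  }
}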

The need for immutable infrastructure

Even though we manage infrastructure with configuration management tools, frequent changes applied to a server can still cause ‘configuration drift’. To avoid this, we should not change the configuration of a running server at all, whether manually or through configuration management tools. Maintaining immutable infrastructure is a best practice that eliminates configuration drift.

Immutable infrastructure is now becoming a popular term across the DevOps community. It is the practice of provisioning a new server for every config change and de-provisioning old ones. Provisioning tools like Terraform and CloudFormation support the creation of immutable infrastructure to a great extent. For every software configuration change, these tools help to create new infrastructure and deploy the new configuration before deleting the old one. This helps to manage large infrastructure, as we do not need to worry about the configuration changes and their impact over a period of time.
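As a minimal sketch of what this looks like in Terraform (the variable name and instance details are illustrative), the create_before_destroy lifecycle rule tells Terraform to provision the replacement server before de-provisioning the old one whenever a change forces a new resource:

variable "app_ami_id" {
  description = "AMI baked with the new application release"
}

resource "aws_instance" "app" {
  ami           = "${var.app_ami_id}" # changing the AMI forces a new server
  instance_type = "t2.micro"

  # Build the new server first, then destroy the old one,
  # so no running server is ever modified in place
  lifecycle {
    create_before_destroy = true
  }
}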

In a production environment, DevOps practitioners often follow Blue-Green deployment to avoid unexpected issues that lead to downtime in production. Rollback is possible in such a deployment, and the application can go back into the previous state without any difficulties because no changes have been made to the existing environment.

Figure 3: Terraform project structure
Figure 4: Terraform provider and back-end configuration
Figure 5: Terraform variables

Infrastructure management with HashiCorp Terraform

HashiCorp Terraform enables the user to safely and predictably create, change and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed and versioned. Terraform supports storing the state of the infrastructure, which helps to prevent configuration drift; the state can be kept in the local environment or in remote key-value or object stores. The syntax of Terraform configuration is called HashiCorp Configuration Language (HCL).

Terraform can also read configurations written in JSON. It supports multiple providers for orchestration, and the majority of its code is written in Go. Terraform follows a clear syntax to define resources, and supports the most common data structures, such as list, map and string, for defining variables. The code is simple to organise, and credentials can be read from environment variables instead of being defined inside the Terraform configuration file. Many open source IDEs support the development of Terraform modules. Terraform’s functionality can be extended by writing custom plugins and provisioners to run scripts from Bash, Ruby, Chef, etc. Reusable Terraform modules for various providers are available in the Terraform Registry, and Terraform Enterprise offers a Web interface to manage Terraform and its state.
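A minimal sketch of these conventions (the region, zone and tag values are illustrative): the provider picks up credentials from the standard AWS environment variables, and variables are declared with string, list and map types.

# Credentials are read from the environment (AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY), so none are stored in the configuration
provider "aws" {
  region = "${var.region}"
}

variable "region" {
  type    = "string"
  default = "us-east-1"
}

variable "availability_zones" {
  type    = "list"
  default = ["us-east-1a", "us-east-1b"]
}

variable "instance_tags" {
  type = "map"

  default = {
    Project = "terraform-ci"
    Owner   = "devops"
  }
}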

Benefits of using Terraform

  1. It defines Infrastructure as Code to increase operator productivity and transparency.
  2. Terraform configuration can be stored in version control, shared and collaborated on by teams of operators.
  3. It can track the complete history of infrastructure versions. The Terraform state can be stored on the local disk as well as on any one of the supported remote back-ends such as AWS S3, OpenStack Swift, Azure Blob, Consul, etc.
  4. Terraform provides an elegant user experience for operators to safely and predictably make changes to infrastructure.
  5. It builds a dependency graph from the configurations and uses this graph to generate plans, refresh state and more.
  6. It separates plans and applies, and reduces mistakes and uncertainty at scale. Plans show operators what will happen, and applies execute the changes.
  7. Terraform can be used to create resources across all major infrastructure providers (AWS, GCP, Azure, OpenStack, VMware and more) and third party tools such as GitHub, Bitbucket, New Relic, Consul, Docker, etc.
  8. Terraform lets operators easily use the same configuration in multiple places to reduce mistakes and save time.
  9. We can use the same Terraform configuration to provision identical staging, QA and production environments.
  10. Common Terraform configurations can be packaged as modules, and used across teams and organisations.
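For instance (point 10, and in the spirit of Figure 7), a root configuration can call a reusable module from the Terraform Registry in just a few lines; the module source, version and inputs shown here are illustrative.

# Calling a reusable VPC module published in the Terraform Registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "1.53.0" # pin module versions for repeatable builds

  name = "staging-vpc"
  cidr = "10.0.0.0/16"
}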
Figure 6: Blue-Green deployment
Figure 7: Calling Terraform modules from the workspace

CI/CD pipeline workflow

The CI/CD pipeline workflow for applying changes to the infrastructure using Terraform is given below.

  1. A developer or operations engineer changes the Terraform configuration file on a local machine and commits the code to Bitbucket.
  2. A Bitbucket webhook triggers a continuous integration job on Jenkins.
  3. Jenkins pulls the latest code from the configured repo, which contains the Terraform files, into its workspace.
  4. It reads the Terraform configuration and then initialises the remote Consul back-end.
  5. Terraform generates a plan of the changes that have to be applied to the infrastructure.
  6. Jenkins sends notifications to a Slack channel about the changes, for manual approval.
  7. Here, the user can approve or disapprove the Terraform plan.
  8. The user input is sent to the Jenkins server for further action.
  9. Once the changes are approved by an operator, Jenkins executes terraform apply to propagate the changes to the infrastructure.
  10. Terraform will create a report about the resources and the dependencies created while executing the plan.
  11. Terraform will provision the resources in the provider environment.
  12. Jenkins again sends a notification to the Slack channel about the status of the infrastructure after the changes have been applied. Once the job has run, the pipeline job cleans up the workspace it created.
Figure 8: CI/CD using HashiCorp Terraform and Jenkins
Figure 9: Jenkins log

Setting up the deployment environment

  1. Create a repo in an SCM tool like GitLab or Bitbucket, and commit the Terraform configuration and its dependency modules to it. If you use any third party remote module as a dependency, it will be downloaded automatically during execution.
  2. If you do not have a Jenkins server, just pull a Jenkins Docker image and run it on your local machine. If you are setting it up in a cloud environment, check the Jenkins AMI (Amazon Machine Image) in the marketplace to set up the environment, and configure the required plugins.
  3. Create a webhook in your Bitbucket repo settings to make an HTTP call to the Jenkins callback URL for triggering a continuous integration job.
  4. If you have an existing Jenkins server, ensure that the pipeline plugin is installed on it. Otherwise, go to ‘Manage plugins’ and install the pipeline plugin.
  5. In this project, Consul is used as the remote back-end for state storage and state locking. Local state is not recommended when multiple people are involved in a project, or for production deployments. It is better to use a remote back-end that provides highly available storage with state-locking functionality, so that the state cannot be written by multiple users at the same time.
  6. If you do not have a Consul key-value store in your environment, just pull a Consul Docker image and set up a single node cluster. If it is a production deployment, set up a distributed key-value store.
  7. Create an application in Slack and note down the Slack integration details for configuring it in a Jenkins file.
  8. Configure your provider and back-end details in the main Terraform configuration file, either through environment variables or by persisting them in the repo (a minimal sketch is given after this list). In my case, I am going to provision a resource in AWS and my CI server is hosted in AWS, so I am assigning an IAM role with sufficient privileges to the server.
  9. Create a new project in Jenkins by using the pipeline plugin.
  10. Add the Jenkins file where the pipeline stages are defined. Save the job and trigger it manually for testing. Then apply a change to the configuration, commit it to Bitbucket and ensure the job is triggered automatically. Check the Jenkins log for more details about the job.
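For step 8, the provider and back-end configuration in the main Terraform file might look like the following sketch; the Consul address, key path and region are assumptions for this particular setup. Note that no AWS credentials appear here, since the CI server’s IAM role supplies them. The Jenkins file from step 10 is given below.

# Remote back-end: state is stored and locked in Consul
terraform {
  backend "consul" {
    address = "127.0.0.1:8500"     # assumed local Consul agent
    path    = "terraform/ci-state" # assumed key path for the state
    lock    = true
  }
}

# No access keys here: the instance's IAM role provides credentials
provider "aws" {
  region = "ap-northeast-2" # illustrative region
}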
//Jenkins File

import groovy.json.JsonOutput

// Git environment variables
env.git_url = 'https://user@bitbucket.org/user/terraform-ci.git'
env.git_branch = 'master'
env.credentials_id = '1'

// Slack environment variables
env.slack_url = 'https://hooks.slack.com/services/SDKJSDKS/SDSDJSDK/SDKJSDKDS23434SDSDLCMLC'
env.notification_channel = 'my-slack-channel'

// Jenkins environment variables
env.jenkins_server_url = 'https://52.79.46.98'
env.jenkins_node_custom_workspace_path = "/opt/bitnami/apps/jenkins/jenkins_home/${JOB_NAME}/workspace"
env.jenkins_node_label = 'master'
env.terraform_version = '0.11.10'

// Post a message to the Slack channel through the incoming WebHook
def notifySlack(text, channel, attachments) {
    def payload = JsonOutput.toJson([text: text,
                                     channel: channel,
                                     username: 'Jenkins',
                                     attachments: attachments
    ])
    sh "export PATH=/opt/bitnami/common/bin:$PATH && curl -X POST --data-urlencode \'payload=${payload}\' ${slack_url}"
}

pipeline {
    agent {
        node {
            customWorkspace "$jenkins_node_custom_workspace_path"
            label "$jenkins_node_label"
        }
    }

    stages {
        stage('fetch_latest_code') {
            steps {
                git branch: "$git_branch",
                    credentialsId: "$credentials_id",
                    url: "$git_url"
            }
        }

        stage('install_deps') {
            steps {
                sh "sudo apt install wget zip python-pip -y"
                // Each sh step runs in its own shell, so the download,
                // unzip and install commands are chained in a single step
                sh "cd /tmp && curl -o terraform.zip https://releases.hashicorp.com/terraform/${terraform_version}/terraform_${terraform_version}_linux_amd64.zip && unzip terraform.zip && sudo mv terraform /usr/bin && rm -rf terraform.zip"
            }
        }

        stage('init_and_plan') {
            steps {
                sh "sudo terraform init $jenkins_node_custom_workspace_path/workspace"
                sh "sudo terraform plan $jenkins_node_custom_workspace_path/workspace"
                notifySlack("Build completed! Build logs from jenkins server $jenkins_server_url/jenkins/job/$JOB_NAME/$BUILD_NUMBER/console", notification_channel, [])
            }
        }

        stage('approve') {
            steps {
                notifySlack("Do you approve deployment? $jenkins_server_url/jenkins/job/$JOB_NAME", notification_channel, [])
                input 'Do you approve deployment?'
            }
        }

        stage('apply_changes') {
            steps {
                // -auto-approve skips the interactive confirmation prompt
                sh "sudo terraform apply -auto-approve $jenkins_node_custom_workspace_path/workspace"
                notifySlack("Deployment logs from jenkins server $jenkins_server_url/jenkins/job/$JOB_NAME/$BUILD_NUMBER/console", notification_channel, [])
            }
        }
    }

    post {
        always {
            // Clean up the checked-out workspace after every run
            cleanWs()
        }
    }
}

It is recommended that you use reusable modules in Terraform, either by writing your own or by using modules from the Terraform Registry. We can also use a Docker build agent as the Jenkins slave, and preserve the workspace by attaching a persistent volume to the Jenkins server from the Docker host. Readers are also encouraged to encrypt sensitive values in the Consul key-value store with HashiCorp Vault, a reliable secrets management service that can be accessed through HTTP calls.
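As a sketch of how Vault can keep secrets out of the Terraform configuration and the Consul KV store (the secret path and field name are assumptions, and the Vault provider expects its address and token in the VAULT_ADDR and VAULT_TOKEN environment variables):

# Read a secret from Vault at plan/apply time instead of
# committing it to the repo or the Consul key-value store
data "vault_generic_secret" "db" {
  path = "secret/production/database" # assumed secret path
}

# The secret can then be interpolated into other resources;
# marking the output sensitive keeps it out of the CLI display
output "db_username" {
  value     = "${data.vault_generic_secret.db.data["username"]}"
  sensitive = true
}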

Figure 10: Build history
Figure 11: CI/CD using HashiCorp Terraform and AWS code pipeline

Right now, all the major cloud providers offer their own CI tools. AWS offers CodePipeline, with which we can use CodeCommit for SCM, CodeBuild for the build environment (where we can apply Terraform configurations), and SNS to send notifications for manual approval. Azure offers the Azure DevOps tools for creating the CI/CD pipeline, whereby the user can commit the code to Azure TFS or any SCM through VSTS and trigger the CI job. We can set up the pipeline job based on the cloud platform being used; Jenkins itself can be used in the cloud as well as in on-premises infrastructure.
