How to use GitHub Actions and AWS CodeDeploy for automated CI/CD builds and deployment

Introduction

I recently migrated a client to a new AWS-based infrastructure, fully automated and managed via IaC (primarily Packer, Ansible and Terraform). However, a somewhat clunky old build/deploy system was still in use, so it was also time to migrate that to a new automated CI/CD (continuous integration/continuous delivery) system for builds and deployments. Keeping costs as low as possible was a priority, so I ruled out Jenkins, since that would have meant paying to keep an additional instance running for extended periods.

Since GitHub was already in use, GitHub Actions was an obvious choice: the virtual instances (known as “runners”) used for code builds exist only for as long as it takes to run the build commands, which keeps costs to a minimum. And since the infrastructure was already running on Amazon Web Services, AWS CodeDeploy made sense as an integrated solution for deploying the code. The challenge, therefore, was to get the builds working on GitHub Actions, then to connect GitHub Actions to AWS CodeDeploy for full CI/CD deployments.
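
To give a flavour of how the two connect, here's a minimal sketch of a GitHub Actions workflow that builds the app, pushes the bundle to S3 and triggers a CodeDeploy deployment via the AWS CLI. The bucket, application and deployment group names, the region and the build command are placeholders rather than the client's actual setup.

```yaml
name: build-and-deploy

on:
  push:
    branches: [ main ]

jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      # Check out the repo onto the runner
      - uses: actions/checkout@v4

      # Placeholder for the project's real build commands
      - name: Build the application
        run: ./build.sh

      # Authenticate to AWS using repository secrets
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1

      # Bundle the build output and push it to S3 as a CodeDeploy revision
      - name: Push revision to S3
        run: |
          aws deploy push \
            --application-name my-webapp \
            --s3-location s3://my-deploy-bucket/my-webapp.zip \
            --source .

      # Tell CodeDeploy to deploy that revision to the target instances
      - name: Trigger CodeDeploy deployment
        run: |
          aws deploy create-deployment \
            --application-name my-webapp \
            --deployment-group-name my-webapp-production \
            --s3-location bucket=my-deploy-bucket,key=my-webapp.zip,bundleType=zip
```

This sketch assumes the CodeDeploy application and deployment group already exist on the AWS side, and that the access keys are stored as repository secrets in GitHub.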

This simple diagram shows the desired CI/CD architecture with GitHub Actions and AWS CodeDeploy:

Continue reading “How to use GitHub Actions and AWS CodeDeploy for automated CI/CD builds and deployment”

How to provision an ECS cluster and deploy a webapp on it with load-balanced Docker containers, using Ansible

I wrote a suite of Ansible playbooks to provision an ECS (Elastic Container Service) cluster on AWS, running a webapp deployed on Docker containers in the cluster and load balanced by an ALB (Application Load Balancer), with the Docker image for the app pulled from an ECR (Elastic Container Registry) repository.

This is a follow-up to my project/article “How to use Ansible to provision an EC2 instance with an app running in a Docker container”, which explains how to get a containerised Docker app running on a regular EC2 instance, using Docker Hub as the image repo. That can work well as a simple Staging environment, but for Production it’s desirable to cluster and scale the containers easily behind a load balancer, so I came up with this solution for provisioning/deploying on ECS, which is well suited to that kind of flexibility. (To quote AWS: “Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications”.) This solution also uses Amazon’s own ECR for Docker images, rather than Docker Hub.
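
As a rough sketch of what the ECS part of the playbooks involves (not a verbatim excerpt from the repo), the core tasks look something like this, using the community.aws collection. The cluster, task and service names, the ECR image URI and the alb_target_group_arn variable are placeholders.

```yaml
# Illustrative only: names, the ECR image URI and alb_target_group_arn
# are placeholders, not values from the actual playbooks.
- name: Create the ECS cluster
  community.aws.ecs_cluster:
    name: webapp-cluster
    state: present

- name: Register a task definition for the webapp container
  community.aws.ecs_taskdefinition:
    family: webapp
    state: present
    containers:
      - name: webapp
        image: "123456789012.dkr.ecr.eu-west-1.amazonaws.com/webapp:latest"  # placeholder ECR image
        memory: 512
        portMappings:
          - containerPort: 80
            hostPort: 0   # dynamic host port so the ALB target group can register it

- name: Create the load-balanced ECS service on the cluster
  community.aws.ecs_service:
    name: webapp-service
    cluster: webapp-cluster
    task_definition: webapp
    desired_count: 2
    load_balancers:
      - targetGroupArn: "{{ alb_target_group_arn }}"   # assumed to come from an earlier ALB task
        containerName: webapp
        containerPort: 80
    state: present
```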

Continue reading “How to provision an ECS cluster and deploy a webapp on it with load-balanced Docker containers, using Ansible”

How to use Ansible for automated AWS provisioning

I’ve recently produced a series of articles aimed at startups, entrepreneurial solo developers and others wanting to take their first steps with Amazon Web Services (AWS) setups for app deployment:

I then wanted to move on from manual setup via the AWS web console’s GUI to DevOps-style, command-line programmatic setup for automated provisioning of an AWS infrastructure for app deployment, i.e. infrastructure as code (IaC). I have therefore created a suite of Ansible playbooks to provision an entire AWS infrastructure, with a Staging instance and an auto-scaled, load-balanced Production environment, and to deploy a webapp on it. The resulting set of Ansible AWS provisioning playbooks and associated files can be found in a repository on my GitHub, so grab it from there if you want to try them out. Keep reading for information on how to set up and use the playbooks (you can also refer to the README in the repo folder, which contains much of the same information).

With these playbooks, the EC2 SSH key and Security Groups are created first, then a Staging instance is provisioned and the webapp is deployed to it from GitHub; an image is then taken from the Staging instance and used to provision the Production environment, which runs auto-scaled EC2 instances behind a load balancer. Finally, DNS entries are added for the Production and Staging environments.
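
To illustrate the first few of those steps (again, a sketch rather than an exact excerpt), the key, security group and Staging instance tasks might look like this with the amazon.aws collection. The names, key path and AMI ID are placeholders.

```yaml
# Illustrative only: names, key path and AMI ID are placeholders rather than
# the values used in the actual playbooks.
- name: Create the EC2 SSH key pair from a local public key
  amazon.aws.ec2_key:
    name: webapp-key
    key_material: "{{ lookup('file', '~/.ssh/webapp-key.pub') }}"

- name: Create a security group allowing web traffic in
  amazon.aws.ec2_security_group:
    name: webapp-web-sg
    description: Allow HTTP and HTTPS from anywhere
    rules:
      - proto: tcp
        ports: [80, 443]
        cidr_ip: 0.0.0.0/0

- name: Provision the Staging instance
  amazon.aws.ec2_instance:
    name: webapp-staging
    key_name: webapp-key
    instance_type: t3.micro
    image_id: ami-0123456789abcdef0   # placeholder AMI ID
    security_group: webapp-web-sg
    wait: true
    tags:
      Environment: staging
```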

Continue reading “How to use Ansible for automated AWS provisioning”

How to migrate an application to AWS with auto-scaled EC2 instances

For entrepreneurs, startups, and established companies trying out new projects, there hopefully comes a time when interest in the app grows to the point where incoming traffic starts to rise significantly. This will likely necessitate various improvements to the infrastructure running the application, to make it more robust, reliable and scalable.

The application may previously have been running on a cheap hosted server on a service like DigitalOcean, Linode or OVHcloud, or possibly even a single EC2 (Elastic Compute Cloud) instance on AWS (Amazon Web Services), and the goal now is to move the application to a dynamically auto-scaled EC2 environment so that it can handle the increasing traffic without resource problems or site downtime.

To achieve this, it will also be necessary to set up the database on AWS, and the most practical solution is Amazon’s RDS (Relational Database Service); I’ve recently covered this process in my article Migrating a MySQL database to AWS (with specific focus on RDS). You’ll also need to set up a load balancer, most likely an ELB (Elastic Load Balancer), to spread the incoming traffic across the auto-scaled EC2 application instances; I’ve covered that topic too, in my article Choosing and setting up a load balancer in AWS. So have a read through both of those articles to begin with, and below I’ll cover the rest of the process, i.e. auto-scaling the application instances on EC2.
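
As a taste of the auto-scaling step, here's a hedged Ansible sketch (not taken from the article itself) using the community.aws collection: a launch configuration built from an image of the configured application instance, and an auto scaling group attached to the load balancer's target group. The AMI ID, security group, subnet IDs and target group ARN are placeholders you'd replace with values from your own environment.

```yaml
# Illustrative only: AMI ID, security group, subnets and target group ARN
# are placeholders for values from your own AWS environment.
- name: Create a launch configuration from the application image
  community.aws.autoscaling_launch_config:
    name: webapp-lc
    image_id: ami-0123456789abcdef0       # AMI baked from the configured app instance
    instance_type: t3.small
    security_groups: [webapp-web-sg]

- name: Create the auto scaling group behind the load balancer
  community.aws.autoscaling_group:
    name: webapp-asg
    launch_config_name: webapp-lc
    min_size: 2
    max_size: 6
    desired_capacity: 2
    vpc_zone_identifier: [subnet-aaaa1111, subnet-bbbb2222]
    target_group_arns: ["{{ alb_target_group_arn }}"]   # assumed load balancer target group
    health_check_type: ELB
    state: present
```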

Continue reading “How to migrate an application to AWS with auto-scaled EC2 instances”