If you’ve been running a web application on just one or two servers containing your web server, application framework and database, there will likely come a time when you need to scale to cope with increased incoming traffic. Whilst the web/application side of things can usually be scaled horizontally across multiple servers without too much trouble, a MySQL database is not so straightforward.
You may be using one or two EC2 instances, or your infrastructure may be hosted somewhere other than AWS. We’ll assume here that you’re migrating the whole app to AWS (if it’s not there already) and that you need to put the database somewhere within AWS as part of the scaling-up process. There are basically two options: install MySQL on one or more EC2 instances and administer it yourself; or use Amazon’s RDS (Relational Database Service) as a simpler way of hosting and managing your database.
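If you take the RDS route, an instance can be created programmatically as well as through the console. Below is a minimal sketch using boto3; the identifier, instance class, storage size and credentials are illustrative assumptions, not recommendations.

```python
# Sketch of creating an RDS MySQL instance with boto3.
# All names, sizes and credentials are illustrative assumptions.

def rds_instance_params(identifier, username, password):
    """Build the keyword arguments for create_db_instance."""
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": "mysql",
        "DBInstanceClass": "db.m5.large",   # size to your workload
        "AllocatedStorage": 100,            # GiB
        "MasterUsername": username,
        "MasterUserPassword": password,
        "MultiAZ": True,                    # standby replica for failover
        "BackupRetentionPeriod": 7,         # days of automated backups
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials to be configured
    rds = boto3.client("rds")
    rds.create_db_instance(**rds_instance_params(
        "myapp-db", "admin", "change-me"))
```

Multi-AZ deployment and automated backups are worth enabling from the start for a production database, since retrofitting them later means extra planned maintenance.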
Continue reading “Migrating a MySQL database to AWS (with specific focus on RDS)”
When you want to run your web application on more than one EC2 instance for scaling and redundancy purposes, you will probably require some form of load balancer to distribute incoming requests evenly across the instances. There are various possible solutions for this.
One option is to launch another EC2 instance and install a load balancer on it yourself. There are quite a few open source load balancing options, though I would tend to recommend HAProxy as it’s fast, efficient, secure, and very flexible. This option involves setting up your Linux instance, installing and configuring your chosen load balancer, installing your SSL certificates, and so on. Additionally, you would need to estimate the instance size required to run the load balancing software without it becoming overloaded and slowing the site down (bearing in mind that SSL termination can be particularly CPU-intensive), and then monitor it accordingly.
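As a rough illustration of what the self-managed route involves, a minimal haproxy.cfg fragment for balancing HTTP traffic across two backend instances, with SSL terminated at the load balancer, might look like the following (the certificate path, IP addresses and health-check endpoint are all assumptions):

```
frontend https_in
    mode http
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem  # SSL termination
    default_backend web_servers

backend web_servers
    mode http
    balance roundrobin            # distribute requests evenly
    option httpchk GET /health    # assumed health-check endpoint
    server web1 10.0.1.10:80 check
    server web2 10.0.1.11:80 check
```

Even a sketch this small hints at the ongoing responsibility: every certificate renewal, backend change and timeout tweak is yours to manage.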
Unless there’s a particular reason to take the approach of installing a load balancer on an EC2 instance, a simpler and more effective option – especially for companies taking their first steps into scaling their application across multiple instances – is likely to be Amazon’s own ELB (Elastic Load Balancer). This doesn’t require setting up an EC2 instance with Linux, installing and configuring software, and so on. It provides a simple interface with easy SSL termination, and it scales itself automatically as needed, so little planning and monitoring is required on your part.
Continue reading “Choosing and setting up a load balancer in AWS”
This article describes the process of creating a Slack notification from a CloudWatch alarm generated from an undesirable state in an SQS (Simple Queue Service) queue, via SNS (Simple Notification Service) and Lambda.
You can of course modify any of these steps to suit your own requirements. For example, the trigger could be a different SQS state, the source could be some other AWS service rather than SQS, or you may want to send the notification somewhere other than Slack (in which case a different Lambda function may be required).
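To give a flavour of the final link in the chain, here is a sketch of a Lambda handler that unwraps the CloudWatch alarm JSON delivered by SNS and posts a message to a Slack incoming webhook. The webhook URL is a placeholder and the message format is an assumption; adapt both to your workspace.

```python
import json
import urllib.request

# Placeholder: replace with your own Slack incoming-webhook URL.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def build_slack_payload(alarm):
    """Turn a parsed CloudWatch alarm message into a Slack payload."""
    return {
        "text": "Alarm *{}* is now {}: {}".format(
            alarm["AlarmName"],
            alarm["NewStateValue"],
            alarm["NewStateReason"],
        )
    }

def lambda_handler(event, context):
    # SNS wraps the CloudWatch alarm JSON in Records[0].Sns.Message
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    payload = build_slack_payload(alarm)
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # deliver the notification to Slack
```

Keeping the payload construction in its own function makes it straightforward to change the destination later – swapping Slack for another service mainly means replacing `build_slack_payload` and the HTTP call.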
Continue reading “Creating a Slack notification from a CloudWatch alarm for an SQS queue, via SNS and Lambda”
It was necessary to build an updated mail system for a client which would handle all incoming and outgoing email, and which could reliably send out an average of one million emails per day. This was based on Postfix, which is known for its reliability, robustness, security, and relative ease of administration. Building a Postfix mail system capable of handling so many emails is a significant undertaking at a time when establishing a positive reputation for independent mail servers delivering high volumes of email is itself a challenging goal.
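At this kind of volume, a handful of main.cf parameters become important for throughput and queue hygiene. The fragment below is purely illustrative – the values are assumptions and would need tuning per destination and per workload, not settings taken from the system described:

```
# Illustrative main.cf tuning for high-volume sending (assumed values)
default_process_limit = 200              # allow more parallel delivery processes
smtp_destination_concurrency_limit = 10  # simultaneous deliveries per destination
smtp_destination_rate_delay = 0s         # pacing per destination (raise if throttled)
maximal_queue_lifetime = 1d              # give up on undeliverable mail sooner
bounce_queue_lifetime = 1d               # and on bounce notifications likewise
```

Concurrency and rate limits are best set conservatively at first, since large receivers penalise senders that ramp up volume too aggressively.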
Continue reading “Building a Postfix-based mail system for incoming and outgoing email, capable of successfully sending one million emails per day”