How to automate provisioning and deployment of RabbitMQ with cert-manager on a Kubernetes cluster in GKE within GCP

A startup brought me in to set up their core infrastructure so that it worked as required and could be provisioned and deployed safely, efficiently and automatically. The key requirement was that RabbitMQ should accept only secure certificate-based connections – the AMQPS protocol rather than plain AMQP – for security and compliance purposes. This needed to run in a Kubernetes cluster, for persistent storage and shared state via StatefulSets, ease of scaling and deployment, and general flexibility. It also had to be built on GCP (Google Cloud Platform), which the startup already used and didn’t want to move away from at this stage, so GKE (Google Kubernetes Engine) was the natural choice for the Kubernetes cluster.

Obtaining certificates for use with RabbitMQ within Kubernetes required setting up cert-manager for certificate management, which in turn needed ingress-nginx to accept the incoming connections that Let’s Encrypt uses to verify domain ownership (the HTTP-01 challenge) before issuing certificates.
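As a rough sketch of how those pieces fit together, here is a minimal cert-manager configuration: a ClusterIssuer pointing at Let’s Encrypt with an HTTP-01 solver served by ingress-nginx, and a Certificate that stores the issued TLS key pair in a Secret that the RabbitMQ StatefulSet can mount. All the names and hostnames here (`letsencrypt-prod`, `rabbitmq-tls`, the `rabbitmq` namespace, `rabbitmq.example.com`) are illustrative placeholders, not the exact values from my repository.

```yaml
# Hypothetical example -- names, email and hostnames are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # replace with a real contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # ACME account key stored here
    solvers:
    - http01:
        ingress:
          class: nginx                    # challenge requests served via ingress-nginx
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: rabbitmq-tls
  namespace: rabbitmq
spec:
  secretName: rabbitmq-tls               # Secret mounted by the RabbitMQ StatefulSet
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - rabbitmq.example.com
```

With this in place, cert-manager keeps the Secret renewed automatically, and RabbitMQ only needs to be pointed at the certificate and key files for its AMQPS listener.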

I successfully solved these problems and fulfilled the requirements, although it’s still a “work in progress” to some extent. Some of the config is a little “rough and ready” and could be improved with more modularisation and better use of variables and secrets. Also, while the initial cluster provisioning is fully automated with Terraform, the rest is currently only semi-automated, so there is room for further improvement.

All the code and documentation are available in my GitHub repository. Below I will explain the whole process from start to finish.


Building a Postfix-based mail system for incoming and outgoing email, capable of successfully sending one million emails per day

A client needed an updated mail system that would handle all incoming and outgoing email and could successfully send an average of one million emails per day. I based it on Postfix, which is known for reliability, robustness, security, and relative ease of administration. Building a Postfix system capable of that volume is a significant undertaking at a time when establishing a positive sending reputation for independent mail servers delivering high volumes of email is a real challenge.


How to harden CentOS 7, Red Hat Enterprise Linux 7 & Amazon Linux for better security

A few years ago I wrote a fairly popular post on security hardening for Ubuntu 14.04, and here’s a new version for CentOS 7 and RHEL 7. Much of it should also apply to CentOS/RHEL versions 6 and 8, with some tweaks required here and there, and it should largely work with Amazon Linux and Amazon Linux 2 too, although again some tweaks will be needed for those.


How to reclaim storage space on two-node MongoDB replica sets

At mongodb.org they seem to assume we can build MongoDB replica sets from unlimited numbers of instances with infinite amounts of storage. In practice, however, we often need to run replica sets with only two data-bearing nodes (plus an arbiter) and limited storage. The problem is that MongoDB tends to use vast amounts of disk space without reclaiming the space freed by dropped data, so it consumes ever-increasing amounts of storage – and a two-node replica set offers limited options for dealing with this.

A solution is to clear all the data from each node in turn, which forces MongoDB to rebuild its data via an initial sync, using only the disk space it actually needs. Performed on a regular basis, this stops MongoDB’s storage usage from constantly growing at an unacceptable rate.
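The per-node procedure can be sketched roughly as follows. The data path and service name are the defaults on a typical Red Hat-type install and the replica set member names are hypothetical, so treat this as an outline rather than a script to run verbatim.

```shell
# On ONE secondary at a time -- never on the primary, and never on two
# nodes at once, or the replica set loses its data.

# 1. Stop mongod on this node.
sudo systemctl stop mongod

# 2. Clear the data directory (default dbPath shown; adjust to yours).
sudo rm -rf /var/lib/mongo/*

# 3. Start mongod again; it rejoins the set and performs an initial sync
#    from the other node, rebuilding its data files at their minimal size.
sudo systemctl start mongod

# 4. Wait until this node reports SECONDARY again before touching the other:
mongo --eval 'rs.status().members.forEach(function (m) { print(m.name, m.stateStr); })'

# 5. To compact the current primary, first make it step down so it becomes
#    a secondary, then repeat steps 1-4 on it:
mongo --eval 'rs.stepDown()'
```

The arbiter holds no data, so it never needs this treatment; its only role is to keep a voting majority while one data-bearing node is resyncing.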


PostgreSQL tuning: ensuring that as many sorts as possible are done in memory and not on disk

(This post assumes a PostgreSQL installation located at /var/lib/pgsql on a Red Hat-type Linux system such as Red Hat Enterprise Linux or CentOS. If your system differs from this, you may need to modify some of the paths accordingly.)

In PostgreSQL, any sort larger than the work_mem setting gets performed on disk instead of in memory, which makes it much slower. Ideally all sorts should be done in memory (except for the ones that are genuinely too big to fit into your available RAM, because swapping to virtual memory should be avoided at all costs).
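To see whether a particular query is spilling its sort to disk, EXPLAIN ANALYZE reports the sort method used. The table and query below are hypothetical, and the 64MB value is purely illustrative – the right figure depends on your RAM and on how many sorts may run concurrently:

```sql
-- Hypothetical query: check how its sort is being performed.
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY created_at;
-- A plan line like "Sort Method: external merge  Disk: ..." means the sort
-- spilled to disk; "Sort Method: quicksort  Memory: ..." means it ran in RAM.

-- Raise work_mem for the current session and re-run the EXPLAIN to compare:
SET work_mem = '64MB';
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY created_at;

-- To make the change permanent, set it in postgresql.conf
-- (e.g. /var/lib/pgsql/data/postgresql.conf) and reload PostgreSQL:
--   work_mem = 64MB
```

Note that work_mem is allocated per sort operation, not per connection, so a busy server running many concurrent sorts can use many multiples of this value.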
