STICKY: Useful Unix commands

For a long time I’ve maintained a memory aid in the form of a list of useful commands which can be used on the command line for Linux, OS X, BSD, Solaris, etc., so I thought I’d list them in a blog post in case they come in useful for others. Most of these will run on any Unix-type operating system, though I’ve usually indicated where a command is OS-specific.

  • For Debian/Ubuntu distributions, many of these commands are available via the APT package manager.
  • For Red Hat/CentOS/Fedora, many of these commands are available via the yum package manager, though particularly in the case of CentOS I’d recommend adding the EPEL repository to increase the availability of useful tools which are otherwise missing.
  • For OS X, install the excellent Homebrew package manager, then many of these commands will be available for install.

This is a fairly arbitrary list which I add to when I forget to use something old, or when I come across something new which is useful. Some of these commands will probably be familiar to you, and some probably won’t. I’ve added links where applicable. Please feel free to throw amendments or additional suggestions my way.

The list of commands

  • ab – website benchmarking; usually comes with Apache HTTP server
  • afconvert – powerful audio file converter; OS X only
  • cpulimit – provides a simple way of limiting CPU usage for specific processes
  • curl – powerful URL transfer tool for testing web pages and other services
  • dc – CLI-based calculator
  • ditaa – converts ASCII art to PNG
  • dmidecode – reports system hardware as described in the BIOS
  • exiftool – for manipulating Exif data on image files
  • fio – IO benchmarking tool
  • – Bash script to generate static web galleries; no server-side programs required
  • generate-md – converts Markdown files to HTML, with over a dozen builtin themes (requires Node.js)
  • goaccess – simple and powerful web log analyser and interactive viewer
  • – converts HTML to Markdown; this is not the html2text which comes with e.g. Homebrew
  • htop – like top but nicer and more informative
  • http-server – start a web server in the current directory on port 8001, using Node.js; package needs to have been installed using: npm install http-server -g
  • httping – ping a host using HTTP instead of ICMP
  • iftop – like top but for network traffic
  • ike-scan – find and probe IPSec VPNs
  • iotop – like top but for disk IO
  • jp2a – converts JPEGs to ASCII art
  • lshw – simple and powerful way of getting hardware info; Linux only
  • lsof – for finding which processes are using which files and network ports, amongst other things
  • mitmproxy/mitmdump – nice HTTP sniffer proxy; usually installed via pip
  • mtr – handy graphical combination of ping and traceroute
  • mountpoint – check whether a directory is a mount point
  • multitail – tail multiple log files in separate panes in the same window
  • ncat – like nc/netcat but newer and with extra options; comes with nmap
  • nethogs – quick real-time display of how much bandwidth individual processes are using; Linux only
  • ngrep – for intelligently sniffing HTTP and other protocols
  • nl – add line numbers to input from file or stdin
  • nmap – comprehensive port/vulnerability scanner
  • nping – advanced ping tool for TCP/HTTP pinging; comes with nmap
  • opensnoop – watch file accesses in real time; OS X only
  • parallel – like xargs but better
  • paste – merge multi-line output into single line with optional delimiters
  • pen – simple but effective command line load balancer
  • pgrep/pkill – easy grepping for/killing of processes using various criteria
  • photorec – recover lost documents, videos, photos, etc. from storage media
  • pidstat – flexible tool for obtaining statistics on processes, very useful for understanding resource usage for particular processes
  • psk-crack – for cracking VPN preshared keys obtained with ike-scan; comes with ike-scan
  • printf – for reformatting; very useful for things like zero padding numbers in bash
  • pstree – shows a tree of running processes
  • pv – provides a progress bar for piped processes
  • python -m SimpleHTTPServer – start a web server in the current directory on port 8000, using Python
  • qlmanage -p – Quick Look from the command line; OS X only
  • s3cmd – CLI tool for Amazon S3 administration
  • scutil – for changing system settings including various forms of hostname; OS X only
  • seq – generates a sequence of numbers
  • siege – website benchmarking; more options than ab
  • sslscan – see which ciphers and SSL/TLS versions are supported on secure sites
  • stress – to artificially add load to a system for testing purposes
  • subnetcalc – IPv4/IPv6 subnet calculator
  • tcptraceroute – like traceroute but TCP
  • tee – for directing output to a file whilst watching it at the same time
  • time – gives info on how long a command took to run
  • timeout – run a command with a time limit, i.e. kill it if it’s still running after a certain time
  • tmutil – get more control over Time Machine from the command line; OS X only
  • tree – file listing in tree format
  • trickle – simple but effective bandwidth throttling
  • trimforce – turns on TRIM support for third-party SSDs; OS X only
  • watch – prepend to a command line to see continuously updating output
  • wget – nice client for web downloads and archiving websites
  • xmlstarlet – powerful tool for manipulating and reformatting XML
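As a quick flavour of a few of the simpler tools in the list, here’s how seq, paste and printf combine on the command line:

```shell
# seq generates a sequence, paste -s merges lines with a delimiter,
# and printf zero-pads numbers (as mentioned for bash above)
seq 1 5 | paste -s -d, -    # 1,2,3,4,5
printf "%03d\n" 7           # 007
```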

COVID-19 update

CETRE SysAdmin Services continue to operate as normal during the Coronavirus pandemic and lockdown. Matt’s services are available for organisations feeling the strain during this difficult period, who are looking to keep their infrastructure up and running as efficiently as possible with minimal costs and resource usage. Matt’s services are also available for organisations taking advantage of this slower period to make improvements or launch projects requiring new infrastructure and devops engineering.

Security hardening on Amazon Linux, CentOS 6 and Red Hat 6

A few years ago I wrote a quite popular post for security hardening on Ubuntu 14.04, and since I’ve just had to do a similar series of operations on Amazon Linux I thought I’d update the post accordingly. This should also apply to CentOS 6 and Red Hat 6, since those are very similar distributions to Amazon Linux.

Assume that all these operations need to be performed as root, which you can do by logging in as root with sudo -i (or you can issue an endless series of sudo commands if you prefer).

Harden SSH

I generally regard it as a sensible idea to disable root login over SSH, so in /etc/ssh/sshd_config you could change PermitRootLogin to no. The default nowadays seems to be forced-commands-only which is probably an acceptable compromise, so you could probably leave it like that without worrying further.

If SSH on your servers is open to the world then I also advise running SSH on a non-standard port in order to avoid incoming SSH hacking attempts. To do that, in /etc/ssh/sshd_config change Port from 22 to another port of your choice, e.g. 1022. Note that you’ll need to update your firewall or EC2 security rules accordingly.
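Putting both changes together, the relevant lines in /etc/ssh/sshd_config would look something like this (1022 is just the example port from above; pick your own):

```
Port 1022
PermitRootLogin no
```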

After making changes to SSH, reload the OpenSSH server:

service sshd reload

Improve IP security

Add the following lines to /etc/sysctl.d/10-network-security.conf to improve IP security:

# Ignore ICMP broadcast requests
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Disable source packet routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0 
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# Ignore send redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Block SYN attacks
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5

# Log Martians
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0 
net.ipv6.conf.default.accept_redirects = 0

# Ignore Directed pings
net.ipv4.icmp_echo_ignore_all = 1

Load the new rules:

sysctl --system

PHP hardening

If you’re using PHP, these are changes worth making in /etc/php.ini in order to improve the security of PHP:

  1. Add exec, system, shell_exec, and passthru to disable_functions.
  2. Change expose_php to Off.
  3. Ensure that display_errors, track_errors and html_errors are set to Off.
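In php.ini syntax, those three changes amount to something like the following (the disable_functions list shows just the four functions mentioned above; extend it as needed):

```
; Hardening changes described above, in /etc/php.ini
disable_functions = exec,system,shell_exec,passthru
expose_php = Off
display_errors = Off
track_errors = Off
html_errors = Off
```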

Apache hardening

If you’re using Apache web server, it’s worth making sure you have the following parameters set in the config to make sure Apache is suitably hardened:

ServerTokens Prod
ServerSignature Off
TraceEnable Off
Header unset ETag
FileETag None

You may find TraceEnable Off already set in /etc/httpd/conf.d/notrace.conf. I find the simplest thing is to place whichever of these parameters are not already set into a new file /etc/httpd/conf.d/security.conf.

Then restart Apache:

service httpd restart

Install and configure ModSecurity

If you’re using Apache, the web application firewall ModSecurity is a great way to harden your web server so that it’s much less vulnerable to probes and attacks. Firstly, install the necessary packages:

yum install mod24_security

(That’s if you’re using Apache 2.4, otherwise it’s probably just yum install mod_security)

Next, install the Open Web Application Security Project ModSecurity Core Rule Set:

cd /tmp
cp -r owasp-modsecurity-crs-3.3-dev /etc/httpd
mv /etc/httpd/owasp-modsecurity-crs-3.3-dev/crs-setup.conf.example /etc/httpd/owasp-modsecurity-crs-3.3-dev/crs-setup.conf

To add the rules to Apache, edit /etc/httpd/conf.d/mod_security.conf and add the following lines near the end, just before </IfModule>:

Include owasp-modsecurity-crs-3.3-dev/crs-setup.conf
Include owasp-modsecurity-crs-3.3-dev/rules/*.conf

Restart Apache to activate the new security rules:

service httpd restart

Install and configure mod_evasive

If you’re using Apache then it’s a good idea to install mod_evasive to help protect against denial of service attacks. For Apache 2.4 there’s no package available so we need to do it using apxs. (If you’re using Apache 2.2 then you should be able to install with yum, or you can do it this way but change the module name accordingly for the Apache version you’re using.) You may firstly need to install relevant development tools:

yum install httpd24-devel
yum groupinstall "Development Tools"

Then grab mod_evasive and install:

cd /tmp
wget -c
cd mod_evasive-master
apxs -i -a -c mod_evasive24.c

I would then comment out this line in /etc/httpd/conf/httpd.conf:

LoadModule evasive20_module /usr/lib64/httpd/modules/

Then create /etc/httpd/conf.d/mod_evasive.conf and put all the relevant bits in there:

LoadModule evasive20_module   /usr/lib64/httpd/modules/

    DOSHashTableSize    3097
    DOSPageCount        5
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   10
    DOSEmailNotify      root

Restart Apache to activate it:

service httpd restart

Install and configure rootkit checkers

It’s highly desirable to get alerted if any rootkits are found on your server, so let’s install a couple of rootkit checkers:

yum install rkhunter chkrootkit

Let’s run rkhunter weekly instead of daily, because daily is too annoying:

mv /etc/cron.daily/rkhunter /etc/cron.weekly

We also need to add a weekly cronjob for chkrootkit:

echo "5 5 * * Sun root /usr/sbin/chkrootkit" > /etc/cron.d/chkrootkit

Install Logwatch

Logwatch is a great tool which provides regular reports nicely summarising what’s been going on in the server logs. Install it like this:

yum install logwatch

Make it run weekly instead of daily, otherwise it gets too annoying:

mv /etc/cron.daily/0logwatch /etc/cron.weekly

Make it show output from the last week by editing /etc/cron.weekly/0logwatch and adding --range 'between -7 days and -1 days' to the end of the logwatch command (so that it says logwatch --range 'between -7 days and -1 days').

Enable process accounting

Linux process accounting keeps track of all sorts of details about which commands have been run on the server, who ran them, when, etc. It’s a very sensible thing to enable on a server where security is a priority, so let’s activate it:

chkconfig psacct on
service psacct start

To show users’ connect times, run ac. To show information about commands previously run by users, run sa. To see the last commands run, run lastcomm. Those are a few commands to give you an idea of what’s possible; just read the manpages to get more details if you need to.

I threw together a quick Bash script to send a weekly email with a summary of user activity, login information and commands run. To get the same report yourself, create a file called /etc/cron.weekly/pacct-report containing the following (don’t forget to make this file executable) (you can grab this from GitHub if you prefer):


#!/bin/bash

echo ""

ac -d -p

echo ""
echo ""

users=$(awk -F ':' '{print $1}' /etc/passwd | sort)

for user in $users ; do
  comm=$(lastcomm --user $user | awk '{print $1}' | sort | uniq -c | sort -nr)
  if [ "$comm" ] ; then
    echo "$user:"
    echo "$comm"
  fi
done

echo ""
echo ""

sa | awk '{print $1, $5}' | sort -n | head -n -1 | sort -nr

Things I haven’t covered

There are some additional issues you might want to consider which I haven’t covered here for various reasons:

  1. This guide assumes your server is on a network behind a firewall of some kind, whether that’s a hardware firewall of your own, EC2 security rules on Amazon Web Services, or whatever; and that the firewall is properly configured to only allow through the necessary traffic. However, if that’s not the case then you’ll need to install and configure a firewall on the server itself. The recommended software for this would probably be iptables.
  2. If you’re running an SSH server then you’re often told that you must install a tool such as fail2ban immediately if you don’t want your server to be hacked to death within seconds. However, I’ve maintained servers with publicly-accessible SSH servers for many years, and I’ve found that simply moving SSH to a different port solves this problem far more elegantly. I monitor logs in order to identify incoming hacking attempts, and I haven’t seen a single one in the many years I’ve been doing this. However, using this “security by obscurity” method doesn’t mean that such an attack can’t happen, and if you don’t watch your logs regularly and respond quickly to them as I do, then you would be well advised to install fail2ban or similar as a precaution, in addition to moving your SSH server to another port as described above.
  3. Once you’ve hardened your server, you’re advised to run some vulnerability scans and penetration tests against it in order to check that it’s actually as invincible as you’re now hoping it is. This is a topic which requires a post all of its own so I won’t be covering it in any detail here, but a good starting point if you’re not already familiar with it is the excellent Nmap security scanner.

Script to detect MAC addresses of new devices connecting to local network

I wanted to get notified of any new machines connecting to my local network so that I could be reasonably sure there would be no unauthorised devices connecting wirelessly to use my network for unknown and potentially malicious purposes.

I therefore wrote a simple script to detect new MAC addresses appearing on the network and notify me accordingly. The script requires nmap to be installed and should ideally be run from cron with the output going to a valid email account. The script can be obtained from my GitHub.
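To give an idea of the approach, here’s a hedged sketch of how such a script might work; this is not the actual script from GitHub, and the subnet and known-MAC list location are assumptions to adjust for your own network:

```shell
#!/bin/bash
# Sketch only: scan the LAN with nmap, extract MAC addresses, and
# report any not seen before. Run from cron so stdout becomes email.
known=/var/lib/known-macs.txt   # assumed location for the known-MAC list
scan=$(mktemp)

# nmap needs root to report MAC addresses; -sn does a ping scan only
nmap -sn 192.168.1.0/24 | awk '/MAC Address:/ {print $3}' | sort -u > "$scan"

touch "$known"
sort -u -o "$known" "$known"

# MACs present in the scan but absent from the known list
# (comm requires both inputs to be sorted)
new=$(comm -23 "$scan" "$known")

if [ -n "$new" ]; then
  echo "New MAC addresses detected:"
  echo "$new"
  echo "$new" >> "$known"
fi
rm -f "$scan"
```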

Elastic Stack installation on CentOS 6

Fairly recently I made some notes for a setup of Elastic Stack on a network of CentOS 6 machines. I found it relatively involved so thought it was worth sharing.

On the main log processing server

Oracle Java 8 needs to be installed.

Import RPM key:

rpm --import

In /etc/yum.repos.d/elasticsearch.repo:

name=Elasticsearch repository for 5.x packages

Install Elasticsearch:

yum install elasticsearch

Add the following to /etc/init.d/elasticsearch:

# Configure Java environment
JAVA_HOME=/usr/local/java [or /usr/local/jdk8 if needed]

Start the service:

service elasticsearch start
chkconfig elasticsearch on

In /etc/yum.repos.d/kibana.repo:

name=Kibana repository for 5.x packages

Install Kibana:

yum install kibana

Configure Kibana:

In /etc/kibana/kibana.yml: ""

Start the service:

service kibana start
chkconfig kibana on

In /etc/yum.repos.d/logstash.repo:

name=Elastic repository for 5.x packages

Install Logstash:

yum install logstash

If needed, change Java executable path in /etc/logstash/startup.options then run /usr/share/logstash/bin/system-install.

Add to /etc/init/logstash.conf:

env JAVA_HOME=/usr/local/java
env PATH=/usr/local/java/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

Add syslog source in /etc/logstash/conf.d/syslog.conf:

input {
  file {
    path => [ "/var/log/messages" ]
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
Add filter in /etc/logstash/conf.d/filebeat.conf:

input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][log_type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
If necessary, fix permissions on /var/log/messages to make it readable as needed, then start logstash.
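One simple way of doing that permissions fix is shown below. Note that making /var/log/messages world-readable is a blunt approach; a setfacl rule granting read access to just the logstash user would be tighter.

```shell
# Make /var/log/messages readable before starting logstash
chmod o+r /var/log/messages
```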

Results should be visible at this URL (insert name/IP as appropriate for your network): http://NAME OR IP OF LOG PROCESSING SERVER:5601

To tail logs for information and problems:

cd /var/log
tail -F messages elasticsearch/*.log kibana/* logstash/logstash-plain.log

On other servers sending log data to the main server

Import RPM key:

rpm --import

In /etc/yum.repos.d/elastic.repo:

name=Elastic repository for 5.x packages

Install Filebeat:

yum install filebeat

Modify /etc/filebeat/filebeat.yml as follows:

- input_type: log
  paths:
    - /var/log/messages
  fields:
    log_type: syslog

Start service:

chkconfig --add filebeat
service filebeat start

To tail logs for information and problems:

tail -F /var/log/filebeat/filebeat

No Man’s Sky – musings and wallpapers

It’s unusual for me to take much interest in anything with so much hype, but I thought I must make an exception for No Man’s Sky, a game which brings together two things I’ve treasured during my life: firstly, the concept of space exploration in a computer game, as originally explored in the form of the glorious game Elite which I spent a great deal of time playing as a teenager; and secondly, graphics inspired by wonderful 70s sci-fi artwork such as that produced by Chris Foss, which I’ve enjoyed in books since my early childhood. I was so keen to play this game that I even went so far as to buy a new PS4 so I could play it.

The controversy surrounding the game is curious, though it’s sad that so much of it has been so negative. The amount of shocking abuse that’s been hurled at Sean Murray, the founder of Hello Games, is simply appalling. As a game it’s not exactly adrenaline-fueled and it certainly has its flaws, so I can see why some are disappointed and frustrated with it. It’s easy to see the places in the game where development was cut short, and I hope that Hello Games will continue to work on the game to realise the full extent of their original vision. Even if that doesn’t happen, I won’t regret for a second the time I’ve spent immersed in this endlessly gorgeous universe.

For me, indeed, this game is a beautiful, almost Zen-like work of art which – to an intriguing extent – explores notions of, and raises questions about, the purpose of our existence and the nature of the cosmos. The visuals in the game are remarkably reminiscent of the beauty we find within nature in the real world, and the need to find our own reasons for playing the game – without being handed a story on a plate like most other games – echoes the existential dilemma we all face when it comes to trying to find meaning within our lives. The mind-bogglingly huge, procedurally generated universe in the game reflects our increasing suspicion that our own universe could actually be a virtual simulation, generated by machines located in some other reality, as explored in films such as The Matrix and World on a Wire.

Oh, one other thing: last but certainly not least, the epic and atmospheric soundtrack by 65daysofstatic is an excellent accompaniment to the graphics and gameplay in No Man’s Sky.

This is a video I took within the game just after I’d first started playing it, quite a while ago now. I still think it’s a great example of the visuals and atmosphere within the game, and of course this is just some of the stuff that happens in space. For planetary artwork, check out the desktop wallpapers I created below as a result of adopting the role of “photographer” within the game, just as I do in real life.

Desktop wallpapers

These are all taken directly from the PS4 version of the game. They’re not modified or edited in any way, apart from the HUD details which I’ve photoshopped out.

Monitoring HP ProLiant DL360 hardware in CentOS, with Nagios (optional)

My original post for monitoring HP storage hardware in CentOS is now out of date, so I decided to write an updated post for monitoring all hardware, not just storage hardware, and for optionally including this hardware monitoring in Nagios.

This is written primarily for CentOS 6. It should be largely fine for CentOS 5 and CentOS 7 too, although one or two modifications may be needed. It should also work with some other HP ProLiant servers such as the DL380.

smartd for (supposedly) predicting drive failure

Before we get onto the HP software, it’s worth taking a minute to install smartd, which you can obtain by installing the smartmontools package in CentOS. This software uses the SMART system to attempt to predict when drives are going to fail. It’s easy to configure so that smartd supposedly emails you as soon as problems are detected with drives.

Here’s an older example of an /etc/smartd.conf file on a server which has two SAS disks arranged into a single RAID partition:

/dev/cciss/c0d0 -d cciss,0 -a -m
/dev/cciss/c0d0 -d cciss,1 -a -m

Here’s a more recent example of an /etc/smartd.conf file on a server which has two SSDs configured as RAID 1:

/dev/sda -a -m

However, I’ve never found smartd to be very useful. It starts up fine and indicates via syslog that it’s monitoring the disks, but I’ve never had smartd give a warning before a drive failure even though I’m quite sure it’s configured correctly.

HP software for hardware monitoring

So, onto the really useful stuff. If you try to do this using the official methods as advised by HP, you’ll probably end up installing a whole bunch of awful bloated software that you don’t need, taking up resources on your servers. In fact there are only two or three fairly small components which you actually need.

Previously it was necessary to get the first two of these from the HP Service Pack For ProLiant, but HP have recently changed everything once again, so now it’s necessary to get the Management Component Pack for CentOS 6 (also known as hp-mcp) from CentOS 6 Downloads on the Support section of the HP website; this provides the hp-health (previously known as hpasm) and hpssacli (previously known as hpacucli) components that you’ll need.

If you have SSDs installed, you’ll also want to get the HP Smart Storage Administrator Diagnostic Utility (also known as HP SSADU or hpssaducli, previously known as hpadu) from the Software – System Management section in Red Hat Enterprise Linux 6 Server (x86-64) Downloads on the Support section of the HP website.

Sorry if that all seems a bit longwinded, but HP do have a way of making things complicated.

When you extract the hp-mcp tarball after downloading the Management Component Pack for CentOS 6, you’ll find a subdirectory called something like mcp/CentOS/6/x86_64/10.10 in which there are a bunch of RPM files. Upload the hp-health and hpssacli RPMs to your servers, along with the hpssaducli RPM you got from the HP Smart Storage Administrator Diagnostic Utility if you have SSDs. Then install them the usual way, with rpm -i ... etc.

Checking server hardware with hpasmcli

Once these are installed you can check server hardware by running hpasmcli. Once in, if you type show then you’ll see what things you can check. For example, show powersupply gives you up to date information on – unsurprisingly – the power supplies:

Power supply #1
        Present  : Yes
        Redundant: Yes
        Condition: Ok
        Hotplug  : Supported
        Power    : 40 Watts
Power supply #2
        Present  : Yes
        Redundant: Yes
        Condition: Ok
        Hotplug  : Supported
        Power    : 30 Watts

Type help to get more information.

Checking storage hardware with hpssacli

Next, to check the RAID controller and installed drives, use a command like the following:

hpssacli ctrl all show status ; hpssacli ctrl slot=0 ld all show status ; 
hpssacli ctrl slot=0 pd all show status

That command should show something like this:

Smart Array P440ar in Slot 0 (Embedded)
   Controller Status: OK
   Cache Status: Not Configured
   Battery/Capacitor Status: OK

   logicaldrive 1 (111.8 GB, 1): OK

   physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 120 GB): OK
   physicaldrive 1I:1:2 (port 1I:box 1:bay 2, 120 GB): OK

Type hpssacli help to get more information on how to use it.

Checking SSDs with hpssaducli

If you have SSDs and you installed hpssaducli, you can also check SSD status with this command:

hpssaducli -ssd -txt -f /tmp/ssd.txt ; cat /tmp/ssd.txt

That should show you a bunch of information about wear on the SSDs, e.g:

Smart Array P440ar in Embedded Slot : Internal Drive Cage at Port 1I : Box 1 : Physical Drive (120 GB SATA SSD) 1I:1:1 : SmartSSD Wear Gauge

   Status                               OK
   Supported                            TRUE
   Log Full                             FALSE
   Utilization                          0.000000
   Power On Hours                       47
   Has Smart Trip SSD Wearout           FALSE

Integrating HP hardware monitoring with Nagios

If you’re not using Nagios then obviously you can stop reading now!

Server hardware

I’ve always used the check_hpasm plugin for checking server hardware, and it’s worked well for me. Just follow their instructions to install it, then you can integrate it into your Nagios configuration as needed.

Note that you’ll need to add the following line to your /etc/sudoers so that it has permission to run:

nrpe              ALL=NOPASSWD: /sbin/hpasmcli

Storage hardware

I’ve always used the check_hparray plugin for checking storage hardware, and it’s always worked perfectly for me, notifying me every time there’s been a drive failure. However, I see that it apparently hasn’t worked for some people, and it’s not clear why not, so use at your own risk.

Note that it does need to be modified now that HP have changed the name of their software, so just replace all instances of “hpacucli” in the script with “hpssacli” then it should work fine. Put the script in your Nagios plugins folder, then you can integrate it into your Nagios configuration as needed.
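That rename can be done with a single sed command; this assumes the plugin has been saved as check_hparray in the current directory:

```shell
# Replace all instances of the old tool name with the new one, in place
sed -i 's/hpacucli/hpssacli/g' check_hparray
```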

Note that you’ll need to add the following line to your /etc/sudoers so that it has permission to run:

nrpe              ALL=NOPASSWD: /sbin/hpssacli

SSD wear status

To check the wear status of SSDs, I wrote a simple Nagios plugin which you can obtain from my GitHub repository. You’ll need to install the dos2unix command if it’s not already installed (with yum -y install dos2unix). Just install the plugin in your Nagios plugins directory, then you can integrate it into your Nagios configuration as needed.

Reclaiming storage space on two-node MongoDB replica sets

The MongoDB documentation seems to assume we can create MongoDB replica sets using unlimited numbers of instances which have infinite amounts of storage. In practice, however, we often need to use replica sets with only two nodes (plus arbiter) which have limited storage. The problem then is that MongoDB tends to use vast amounts of disk space without reclaiming the space from dropped data, so it consumes ever-increasing amounts of storage. It’s then hard to deal with this storage problem given the limited options available in a two-node replica set.

A solution to this is clearing all the data from each node in turn, which forces MongoDB to rebuild its data using only the disk space it needs. When performed on a regular basis, this stops the amount of storage which MongoDB is using from constantly increasing at an unacceptable rate.

To achieve this, I wrote the following script which can be run on the primary node via cron as the mongod user on a regular basis (e.g. once a week, or even once a day, depending on the seriousness of the problem). The script firstly clears then rebuilds data on the secondary, then temporarily promotes the secondary to primary whilst clearing and rebuilding data on the primary, then puts everything back to normal again.

N.B. Whilst I’ve built a lot of safety checks and backups into this script, be aware that it deletes all data on your MongoDB nodes so there is high potential for serious problems such as complete data loss if you’re not careful. So, read through the following points very carefully, and deal with these issues before you even think about running the script:

  • Only run this on a properly functioning, problem-free two-node system where you have an arbiter configured on a third machine.
  • Follow the instructions in the comments at the top and ensure that you have the mongod user, SSH and sudo set up properly before commencing.
  • For the latest version of the script you’ll need the timeout Unix command installed, so make sure that’s available on your systems before you start.
  • Get this working properly and safely in test environments before considering deployment in any production environments.
  • Before adding this to cron, run it manually so you can see what it’s doing and stop it if necessary to fix issues.
  • Always make sure you have recent data backups before running it, so that you can restore all your data in the event of a disaster.
  • I’ve run this in various environments with CentOS 5 and CentOS 6, but I haven’t tested it on Debian or Ubuntu, so you may need to make some changes to run it on those distributions.

If you choose to use this then you do so at your own risk, and after all those warnings I’m not going to take any responsibility if you lose data as a result!

2016-01-11: I’ve modified the script to use the timeout command in various places. This adds a level of safety to the script to stop it from unexpectedly doing dangerous things if it doesn’t run properly for some reason.

Change your environments and hostnames in the script as needed. You can get the script from GitHub or copy and paste it below:


#!/bin/bash
#
# Force MongoDB to only use as much storage as it needs
# instead of taking up more and more space without reclaiming it

# Make sure of the following:
# 1. The mongod user has its shell set to /bin/bash on both machines
# 2. The mongod user has SSH keys set up such that it can SSH from 
#    the primary to the secondary without prompt
# 3. The mongod user has the following permissions in /etc/sudoers:
#    mongod ALL=NOPASSWD: /sbin/service mongod status, /sbin/service mongod stop, /sbin/service mongod start
#    (modify accordingly if not using Red Hat/CentOS)
# 4. Make sure the requiretty option is off in /etc/sudoers

# Only run as mongod user
if [ "$(whoami)" != "mongod" ] ; then echo "Not mongod user" ; exit 1 ; fi

# Determine environment - change these hostnames as needed
# (the example.com names below are placeholders for your own primary/secondary pairs)
case "$(hostname)" in
  prod-db1.example.com  ) primary="prod-db1.example.com"  ; secondary="prod-db2.example.com"  ;;
  stage-db1.example.com ) primary="stage-db1.example.com" ; secondary="stage-db2.example.com" ;;
  test-db1.example.com  ) primary="test-db1.example.com"  ; secondary="test-db2.example.com"  ;;
  * ) echo "Unknown environment" ; exit 1 ;;
esac

# Check sudo and SSH
if ! sudo -n /sbin/service mongod status > /dev/null ; then
  echo "Problem with sudo on $primary" ; exit 1
elif ! ssh -q $secondary "sudo -n /sbin/service mongod status > /dev/null" ; then
  echo "Problem with SSH and/or sudo on $secondary" ; exit 1
fi

# Take backup on primary
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Taking backup /tmp/dump on $primary..."
cd /tmp ; rm -rf dump ; mongodump > /dev/null
if [ "$?" != "0" ] ; then echo " Problem taking backup on $primary" ; exit 1 ; fi
echo " done"

# Clear data on secondary
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Clearing data on $secondary..."
timeout 300 ssh -q $secondary "sudo -n /sbin/service mongod stop > /dev/null"
if [ "$?" != "0" ] ; then echo " Problem stopping mongod on $secondary" ; exit 1 ; fi
timeout 300 ssh -q $secondary "rm -rf /var/lib/mongo/*"
if [ "$?" != "0" ] ; then echo " Problem clearing /var/lib/mongo on $secondary" ; exit 1 ; fi
timeout 300 ssh -q $secondary "sudo -n /sbin/service mongod start > /dev/null"
if [ "$?" != "0" ] ; then echo " Problem starting mongod on $secondary" ; exit 1 ; fi
echo " done"

# Wait for secondary to come back up
issecondary=$(timeout 300 ssh -q $secondary "echo 'db.isMaster()' | mongo" | grep secondary | awk -F '[ ,]' '{print $3}')
if [ "$?" != "0" ] ; then echo " Problem getting isMaster status on $secondary" ; exit 1 ; fi
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Waiting for $secondary to come up..."
until [ "$issecondary" == "true" ] ; do
  sleep 5
  echo -n "."
  issecondary=$(timeout 300 ssh -q $secondary "echo 'db.isMaster()' | mongo" | grep secondary | awk -F '[ ,]' '{print $3}')
  if [ "$?" != "0" ] ; then echo " Problem getting isMaster status on $secondary" ; exit 1 ; fi
done
echo " done"

# Demote primary so secondary is master
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Demoting $primary..."
echo 'rs.stepDown()' | mongo --quiet > /dev/null
if [ "$?" != "0" ] ; then echo " Problem demoting $primary" ; exit 1 ; fi
echo " done"

# Wait for secondary to take over as master
issecondary=$(echo 'db.isMaster()' | mongo | grep secondary | awk -F '[ ,]' '{print $3}')
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Waiting for $secondary to become master..."
until [ "$issecondary" == "true" ] ; do
  sleep 5
  echo -n "."
  issecondary=$(echo 'db.isMaster()' | mongo | grep secondary | awk -F '[ ,]' '{print $3}')
done
echo " done"

# Clear data on primary
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Clearing data on $primary..."
sudo -n /sbin/service mongod stop > /dev/null
if [ "$?" != "0" ] ; then echo " Problem stopping mongod on $primary" ; exit 1 ; fi
rm -rf /var/lib/mongo/*
if [ "$?" != "0" ] ; then echo " Problem clearing /var/lib/mongo on $primary" ; exit 1 ; fi
sudo -n /sbin/service mongod start > /dev/null
if [ "$?" != "0" ] ; then echo " Problem starting mongod on $primary" ; exit 1 ; fi
echo " done"

# Wait for primary to come up
issecondary=$(echo 'db.isMaster()' | mongo | grep secondary | awk -F '[ ,]' '{print $3}')
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Waiting for $primary to come up..."
until [ "$issecondary" == "true" ] ; do
  sleep 5
  echo -n "."
  issecondary=$(echo 'db.isMaster()' | mongo | grep secondary | awk -F '[ ,]' '{print $3}')
done
echo " done"

# Demote secondary so primary is master
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Demoting $secondary..."
timeout 300 ssh -q $secondary "echo 'rs.stepDown()' | mongo --quiet > /dev/null"
if [ "$?" != "0" ] ; then echo " Problem demoting $secondary" ; exit 1 ; fi
echo " done"

# Wait for primary to take over as master
isprimary=$(echo 'db.isMaster()' | mongo | grep ismaster | awk -F '[ ,]' '{print $3}')
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Waiting for $primary to become master..."
until [ "$isprimary" == "true" ] ; do
  sleep 5
  echo -n "."
  isprimary=$(echo 'db.isMaster()' | mongo | grep ismaster | awk -F '[ ,]' '{print $3}')
done
echo " done"
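The polling loops in the script rely on grep/awk parsing of the mongo shell's db.isMaster() output. To show what that pipeline actually extracts, here's the same extraction run against a canned fragment of typical shell output (the sample document is a simplified assumption; real output contains many more fields):

```shell
# A canned fragment of typical `echo 'db.isMaster()' | mongo` output
sample='	"ismaster" : false,
	"secondary" : true,'

# Same extraction the script uses: keep the "secondary" line,
# split on spaces/commas, and take the third field (the value)
issecondary=$(echo "$sample" | grep secondary | awk -F '[ ,]' '{print $3}')
echo "$issecondary"    # prints: true
```

Note that the leading tab stays glued to field 1 because the -F separator only lists space and comma, which is why the value lands in field 3.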

Setting up an IPsec VPN on pfSense 2.1 for mobile OS X and iOS clients

I recently had to configure the open-source firewall pfSense to allow VPN access for mobile clients, particularly those using OS X on Macs and iOS on iPhones and iPads.

I haven’t found too many examples out there from people who have set this up successfully, so I thought it might be helpful to share this information for others who are trying to set up a similar VPN configuration.

N.B. This works for pfSense 2.1. In pfSense 2.2 they completely changed the IPsec backend, so things are a little different at the frontend.

pfSense configuration

In System -> User Manager set up a suitable user as needed, and under Effective Privileges add User – VPN – IPsec xauth Dialin for that user.

Then go to VPN -> IPsec and set up the mobile IPsec client configuration as follows.

VPN: IPsec

Tunnels: Phase 1 (Mobile Client)

General information

  • Disabled off
  • Internet Protocol IPv4
  • Interface WAN
  • Description Remote access VPN [modify as needed]

Phase 1 proposal (Authentication)

  • Authentication method Mutual PSK + Xauth
  • Negotiation mode aggressive
  • My identifier My IP address
  • Peer identifier Distinguished name MyIdentifier [modify as needed]
  • Pre-Shared Key MyPresharedKey [modify as needed]
  • Policy Generation Default
  • Proposal Checking Default
  • Encryption algorithm 3DES
  • Hash algorithm SHA1
  • DH key group 2 (1024 bit)
  • Lifetime 28800

Advanced Options

  • NAT Traversal Force
  • Dead Peer Detection on 10 seconds 5 retries

Tunnels: Phase 2 (Mobile Client)

  • Disabled off
  • Mode Tunnel IPv4
  • Local Network LAN subnet (NAT/BINAT None)
  • Description [empty]

Phase 2 proposal (SA/Key Exchange)

  • Protocol ESP
  • Encryption algorithms AES auto, Blowfish auto, 3DES, CAST128
  • Hash algorithms MD5, SHA1
  • PFS key group off
  • Lifetime 3600

Advanced Options

  • Automatically ping host [empty]

Mobile clients

  • IKE Extensions on

Extended Authentication (Xauth)

  • User Authentication LocalDatabase
  • Group Authentication none

Client Configuration (mode-cfg)

  • Virtual Address Pool on Network: / 24 [modify as needed]
  • Network List off
  • Save Xauth Password off
  • DNS Default Domain on [modify as needed]
  • Split DNS off
  • DNS Servers on Server #1: [modify as needed]
  • WINS Servers off
  • Phase2 PFS Group off
  • Login Banner on Warning: don't be naughty! [modify as needed]

Pre-Shared Keys

  • Identifier MyIdentifier [modify as needed, should match Peer identifier above]
  • Pre-Shared Key MyPresharedKey [modify as needed, should match Pre-Shared Key above]

Firewall: Rules

In Firewall -> Rules, go to the IPsec tab and make sure there’s a rule to allow all IPv4 traffic from anywhere to anywhere.
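Under the hood, pfSense 2.1 translates the GUI settings above into a racoon configuration. Purely as an illustrative sketch of what the Phase 1 settings correspond to (this is not the exact file pfSense generates, and the values simply mirror the GUI choices above):

```
remote anonymous {
        exchange_mode aggressive;
        proposal_check obey;
        nat_traversal force;
        dpd_delay 10;
        proposal {
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method xauth_psk_server;
                dh_group 2;
                lifetime time 28800 sec;
        }
}
```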

OS X configuration

In System Preferences -> Network, add a new interface of type VPN, VPN Type Cisco IPSec, and Service Name of your choice.

Server Address is the public IP of your firewall. Account Name is the pfSense user you set up earlier.

In Authentication Settings, Shared Secret is the pre-shared key you created on pfSense earlier, and Group Name is the identifier you created on pfSense earlier.

iOS configuration

In Settings -> VPN, add a new VPN configuration of type IPSec.

Description is up to you. Server is the public IP of your firewall. Account is the pfSense user you set up earlier. Group Name is the identifier you created on pfSense earlier. Secret is the pre-shared key you created on pfSense earlier.

Soundtrack, sound effects and ident audio design for Propsplanet game demo

I was recently asked to start providing audio for Propsplanet, a maker of 3D models for game developers. I started with a video called Fantasy Journey, a game demo which gives examples of how their 3D models could be used.

I wrote the soundtrack music to accompany the video; I created all the in-game sound effects; and I also designed the audio for the Propsplanet ident at the start.

Here’s the end result: