Archives For October 2012

Sometimes it is necessary to block access from a certain ip-address. This can be done easily using route:

route add -host 1.2.3.4 reject

While this works, it does not provide the best user experience: from 1.2.3.4 the website now appears to be down, while it isn’t. A better way is to display an error message instead of the website requested.

I’m using load balancing to distribute the load to different web servers. The software in use is Keepalived. To block a given ip-address, I have the firewall tag it and then make Keepalived forward it to another web server instead. It goes like this:

iptables -t mangle -A PREROUTING -i eth0 \
-p tcp -s 1.2.3.4 --dport http -j MARK --set-mark 2000

This iptables rule just sets a mark ‘2000’ (it can be any integer) on traffic from 1.2.3.4 to port ‘http’. In keepalived.conf we set up how to handle this fwmark:

virtual_server fwmark 2000 {
    delay_loop 6
    lb_algo wlc
    lb_kind NAT
    persistence_timeout 0
    protocol TCP

    real_server 10.10.10.1 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}

As you can see, Keepalived will send the request to ‘10.10.10.1’, which could be, for example, an extra server that serves a static page with an error message explaining what’s going on. You can add more capacity by adding another ‘real_server’, if you wish; Keepalived will then also distribute the load between the real_servers.
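For example, adding a second error-page server is just another ‘real_server’ block inside the same ‘virtual_server’ (the 10.10.10.2 address is hypothetical):

```
real_server 10.10.10.2 80 {
    weight 1
    TCP_CHECK {
        connect_port 80
        connect_timeout 3
    }
}
```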

Now, when you block an ip-address, you display an error message instead of the website appearing ‘down’. Add your phone number or e-mail address to that page so the blocked user can get in touch to fix the problem. In my experience, this approach works better and prevents urgent ‘website down’ calls.

To extend this even further, you can have a script add the fwmark rule above automatically when you detect some sort of abuse you want to block. It’s just as easy as using ‘route’!
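A minimal sketch of such a script, assuming the interface and fwmark from above (the ‘block_ip’ helper and the example addresses are hypothetical; it only prints the rule so you can review it first):

```shell
#!/bin/sh
# Sketch: have an abuse-detection script tag an address with the fwmark,
# so Keepalived routes it to the error-page server.
FWMARK=2000
IFACE=eth0

block_ip() {
    # Prints the rule; drop the leading 'echo' to apply it (needs root).
    echo iptables -t mangle -A PREROUTING -i "$IFACE" \
        -p tcp -s "$1" --dport http -j MARK --set-mark "$FWMARK"
}

block_ip 1.2.3.4
```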

Here are three ways to backup your MySQL database(s) and compress the backup on the fly:

1. Dump database ‘dbname’ into a compressed file

mysqldump \
-u user \
-ppassword \
--host=mysql-server \
dbname | \
gzip > dbname_backup.sql.gz

To dump multiple databases, you can either write a ‘for loop’ over the result of a ‘SHOW DATABASES’ query, or use the next command to back them all up to a single file.
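A sketch of that loop (‘user’, ‘password’, ‘mysql-server’ and the database names are placeholders; it only prints the commands, so you can review them and pipe the output to ‘sh’ once you’re happy):

```shell
# Hypothetical sketch: build one mysqldump command per database.
# In practice, fill the list from the server itself:
#   databases=$(mysql -u user -ppassword --host=mysql-server -N -e 'SHOW DATABASES')
databases="dbname1 dbname2 dbname3"

commands=$(for db in $databases; do
    echo "mysqldump -u user -ppassword --host=mysql-server $db | gzip > ${db}_backup.sql.gz"
done)
echo "$commands"
```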

2. Dump all databases into a compressed file

mysqldump \
--all-databases \
-u user \
-ppassword \
--host=mysql-server | \
gzip > all_databases_backup.sql.gz

3. Dump database ‘dbname’ except for some tables into a compressed file

mysqldump \
--ignore-table=dbname.ignoredtable1 \
--ignore-table=dbname.ignoredtable2 \
--ignore-table=dbname.ignoredtable3 \
-u user \
-ppassword \
--host=mysql-server \
dbname | \
gzip > dbname_backup.sql.gz

This allows you to skip tables that are unimportant (for example cached data) or static (imported from somewhere else). You can skip as many tables as you like; just repeat the ‘--ignore-table’ option for each table you want to ignore.

The pipe to gzip in the last line of each example makes sure the backup is compressed before it is written to disk. You can safely do so to save some disk space. Just remove it to get an uncompressed backup.
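As a side note, you can check a compressed backup without unpacking it to disk: ‘gzip -t’ tests the archive’s integrity and ‘gunzip -c’ streams the contents to stdout. A self-contained example (the tiny stand-in file is created just for the demonstration):

```shell
# Create a tiny stand-in for dbname_backup.sql.gz so the commands run anywhere.
echo 'CREATE TABLE t (id INT);' | gzip > dbname_backup.sql.gz

# Test the archive's integrity; exits non-zero if the file is corrupt.
gzip -t dbname_backup.sql.gz && echo 'archive is intact'

# Peek at the first lines of the dump without unpacking it to disk.
gunzip -c dbname_backup.sql.gz | head -n 5
```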

When building new computer infrastructure, I find it important to label and document everything properly. This will help you in an emergency, especially if you are not on site and need to instruct someone over the phone. Apart from that, today’s infrastructure is quite complex and one cannot remember everything by heart. This blog post describes what we used when building our Cloud infrastructure.

First of all, use a separate cable color for each type of connection. We used, for example, different colors for ‘storage’, ‘management’, ‘serial’ and ‘public’ traffic types. This gives a quick overview of the different cables that are in use. Make sure the colors are unique; for example, don’t use black network cables if your power cables are black.

The color of the cable is the first distinction; to be even more specific, we label every cable as well. Most sysadmins know that labeling cables in a data center is not as easy as it seems: the heat and ventilation will eventually make the labels fall off. With this in mind, we went looking for a solution… Brady BMP21 to the rescue!

The BMP21 has a keyboard and lets you enter any description you want. When printing, it produces a label that you can wrap around the cable. This way, the label is attached very solidly to the cable and will not fall off.

As you can see in the image, it prints the two lines entered on the Brady multiple times. You can wrap the label around the cable and still read what’s on it very easily. This also means you won’t have to pull out the cable to be able to read its label. Smart thinking!

An example of cables we labelled with the BMP21

As you can see, it works very well and looks pretty good too. I’m really satisfied with the device and the labels it produces!

Changes to production systems should be tested on a development system and then be deployed using configuration management (such as Puppet), if you ask me. Sometimes I first run commands by hand (on a test system) to find out the right ones, the right order, etc. In those cases I find it useful to automatically document what I type and be able to ‘replay’ it later on.

Why? Because when I write the Puppet configuration needed to deploy the change, I want to be sure all manual commands I ran make it to the Puppet manifest. Being able to replay what I did allows me to do so with ease.

Here’s how to record your key strokes using the ‘script’ Linux utility:

script -t 2> change1234.timing change1234.script

The ‘script’ utility logs everything you enter to the file ‘change1234.script’. Furthermore, it saves the timing data to a file called ‘change1234.timing’. Beware that everything is saved, including errors and typos.

The timing data allows you to replay the script using the same timing as when the session was recorded. It gives a good representation of what happened. To replay, simply run:

scriptreplay change1234.timing change1234.script

You can replay it as many times as you like.

The .script and .timing files are just plain text files. This means you can ‘grep’ them to quickly find commands. For example, to display the ‘sed’ commands you used:

grep sed change1234.script

As you can see, saving (and documenting) your commands is easy and offers some nice features. It even allows you to forget what exactly you did… 😉

It just came to my attention that a vulnerability in Apache CloudStack was discovered, as John Kinsella writes in his post to the Apache CloudStack dev-mailinglist.

A malicious user could, for example, delete all VMs in the system. Addressing this issue is especially important for anybody using CloudStack in a public environment.

The vulnerability report has an easy work-around that I will mention here as well:

mysql -p -u cloud -h mgt-server-ip
update cloud.user set password=RAND() where id=1;
\q

Hugo Trippaers of Schuberg Philis discovered this issue. Thanks for sharing!

One-liners are always fun. Linux has pipes, streams and redirects which you can combine to do many things in just one line. Today I had to restore a MySQL backup that was gzipped. Instead of typing:

gunzip db-backup.sql.gz
mysql -p -h db.example.com dbname < db-backup.sql

You can type this instead:

gunzip < db-backup.sql.gz | mysql -p -h db.example.com dbname

The .sql.gz file is input to gunzip, the resulting (unzipped) data is piped to mysql and is used as input there.

It preserves the .gz file and saves disk space, as the unzipped file is never written to disk. With big databases, this is also a time saver: with two commands you have to wait for the file to unzip before starting the actual import, while the one-liner needs no intervention at all.
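The same pattern works for any consumer of SQL text. As a self-contained illustration (using ‘wc -l’ as a stand-in for the ‘mysql’ client, with a tiny generated .gz file):

```shell
# Create a small compressed 'backup' just for the demonstration.
printf 'CREATE TABLE t (id INT);\nINSERT INTO t VALUES (1);\n' \
    | gzip > db-backup.sql.gz

# gunzip reads the .gz and the unzipped SQL is piped straight on;
# with a real database the consumer would be 'mysql -p -h db.example.com dbname'.
gunzip < db-backup.sql.gz | wc -l

# The original .gz file is untouched.
ls db-backup.sql.gz
```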