Archives For backup

Here are three ways to back up your MySQL database(s) and compress the backup on the fly:

1. Dump database ‘dbname’ into a compressed file

mysqldump \
-u user \
-ppassword \
--host=mysql-server \
dbname | \
gzip > dbname_backup.sql.gz

To dump multiple databases, you can either loop over the result of a ‘SHOW DATABASES’ query or use the next command to back up everything to a single file.
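Such a loop could look like this. This is a sketch: the user, password and host are placeholders for your own credentials, and MySQL's internal schemas are filtered out.

```shell
# Hypothetical credentials -- replace user, password and mysql-server.
# Dump every database (except MySQL's internal schemas) into its own file.
for db in $(mysql -u user -ppassword --host=mysql-server -N -e 'SHOW DATABASES' \
    | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
    mysqldump -u user -ppassword --host=mysql-server "$db" \
        | gzip > "${db}_backup.sql.gz"
done
```

The `-N` flag suppresses the column header, so the loop only sees database names.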

2. Dump all databases into a compressed file

mysqldump \
--all-databases \
-u user \
-ppassword \
--host=mysql-server | \
gzip > all_databases_backup.sql.gz

3. Dump database ‘dbname’ except for some tables into a compressed file

mysqldump \
--ignore-table=dbname.ignoredtable1 \
--ignore-table=dbname.ignoredtable2 \
--ignore-table=dbname.ignoredtable3 \
-ppassword \
-u user \
--host=mysql-server \
dbname | \
gzip > dbname_backup.sql.gz

This allows you to skip tables that are unimportant (for example, cached data) or static (imported from somewhere else). You can skip as many tables as you like; just repeat the --ignore-table line for each table you want to ignore.

The pipe to gzip in the last line of each example makes sure the backup is compressed before it is written to disk. You can safely do so to save some disk space; just remove the pipe to get an uncompressed backup.
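If you want to be sure a compressed backup is not corrupt before relying on it, gzip can test the archive without unpacking it (shown here on the example filename from above):

```shell
# Test the integrity of the compressed dump; exits non-zero if the file is damaged.
gzip -t dbname_backup.sql.gz && echo "backup OK"
```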

One-liners are always fun. Linux has pipes, streams and redirects, which you can combine to do many things in just one line. Today I had to restore a MySQL backup that was gzipped. Instead of typing:

gunzip db-backup.sql.gz
mysql -p dbname < db-backup.sql

You can type this instead:

gunzip < db-backup.sql.gz | mysql -p dbname

The .sql.gz file is input to gunzip; the resulting (unzipped) data is piped to mysql, which reads it as input.

It also preserves the .gz file and saves disk space, as the unzipped file is never written to disk. With big databases this is a time saver too: with two commands you have to wait for the file to unzip before starting the actual import, while the one-liner needs no intervention in between.

In a previous post I described how to restore an OpenLDAP server from backup. But how do you back up OpenLDAP?

The backups I make consist of two parts:

1. First, back up the LDAP database itself using a program called ‘slapcat’. Slapcat is used to generate LDAP Directory Interchange Format (LDIF) output based upon the contents of a given LDAP database. This is a text version of your database which can be imported later. Think of it as an SQL backup for relational databases. Anyway, here’s how to run slapcat on the OpenLDAP server:

slapcat -l backup.ldif

This backs up the whole database into a file called ‘backup.ldif’. You can later use this file to restore an OpenLDAP server with slapadd. Be sure to run this from a backup script in crontab, so you have a backup at least once per day.
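A minimal backup script for crontab could look like this. The paths and schedule are assumptions; adjust them to your own setup.

```shell
#!/bin/sh
# Nightly OpenLDAP backup: one dated LDIF file per day.
BACKUP_DIR=/var/backups/ldap            # hypothetical location
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"
slapcat -l "$BACKUP_DIR/backup-$DATE.ldif"
```

Saved as, for example, /usr/local/sbin/ldap-backup.sh, a crontab entry like ‘30 2 * * * /usr/local/sbin/ldap-backup.sh’ runs it every night at 02:30.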

2. Second, I back up the config of the OpenLDAP server. This config is usually in /etc/ldap. Back it up using tar, or with a technique like rsnapshot.
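With tar, that could be a single command (the config path and target directory may differ on your system):

```shell
# Archive the OpenLDAP configuration directory into a dated tarball.
tar czf "/var/backups/ldap-config-$(date +%Y%m%d).tar.gz" /etc/ldap
```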

When you have this in place (and store the backups in a different place), you’ll be able to rebuild an OpenLDAP server without problems.

As a sysadmin I have many things to take care of. One of the most important is backups. As websites and mail archives become larger and larger, it is an ongoing challenge to fit as many backups as possible in the available backup space.

In the early days we were backing up using rsync, tar and gzip. The biggest drawback was that it took a lot of space. On the bright side, it’s plain simple and just always works. All you have to do is untar an archive and everything is there again (i.e.: happy customer!). It helped me on many occasions, so I kept this old method for a long time and looked around for alternatives.

I’ve experimented with tools like rdiff-backup, but didn’t feel comfortable with it. Rdiff-backup had just disappointed me too many times. The version of client and server needs to be exactly the same, so during an upgrade from, say, Debian Lenny to Debian Squeeze, you either have no backups of the freshly upgraded machines or, once you’ve upgraded the backup server too, no more backups of the not-yet-upgraded machines. That may be no problem for a few servers, but I’m managing many servers and this just doesn’t work. Another problem was that the rdiff-backup repository would get corrupted in some cases; then only the last backup was usable and the others were gone. So the rdiff-backup experiment didn’t work out.

Last week, while googling about ‘snapshots’ for another project, I ran into rsnapshot. Wow, that looked cool and simple! And since our backup server was suffering from low available disk space, which takes a lot of time to resolve each time, I decided to implement rsnapshot and see if it would work for my environment.

Installation is simple:

aptitude install rsnapshot

Then edit /etc/rsnapshot.conf and tell the program what to back up, how many snapshots to keep, what to include/exclude and some more details. I found it very simple and powerful. The only things you need to know are that values are separated by tabs (not spaces) and that paths must have a trailing slash.
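A minimal /etc/rsnapshot.conf could contain lines like these. The values are examples only; the whitespace between fields must be real tabs, and on older rsnapshot versions the ‘retain’ keyword is called ‘interval’:

```
snapshot_root	/backup/rsnapshot/
retain	daily	6
backup	/var/www/	localhost/
backup	/etc/	localhost/
```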

The magical thing rsnapshot uses is called ‘hardlinks’. When rsnapshot finds that a file is the same in two backups (i.e.: unchanged), it just makes a hardlink instead of saving two copies. This saves a lot of backup space!
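You can see the mechanism with plain ln: hardlinked files share one inode, so the data is stored only once, no matter how many names point at it (the file names here are made up for the demonstration):

```shell
# Create a file and a hardlink to it; both names point at the same inode.
echo "unchanged content" > original.txt
ln original.txt snapshot-copy.txt
ls -i original.txt snapshot-copy.txt   # both lines show the same inode number
```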

This is how it looks after rsnapshot has been running for some time:

215M    /backup/rsnapshot/daily.0/
820K    /backup/rsnapshot/daily.1/
820K    /backup/rsnapshot/daily.2/
820K    /backup/rsnapshot/daily.3/
820K    /backup/rsnapshot/daily.4/
816K    /backup/rsnapshot/daily.5/
219M    total

This website is 215MB. Saving 6 backups would normally cost 6 × 215MB = 1290MB, almost 1.3GB. When using rsnapshot, only the changed (added, deleted, updated) files are saved; the rest are hardlinks. That turns out to be a great idea, since the backups now use only 219MB instead of almost 1.3GB!

Using less space per backup means we’re able to save more backups for our customers 🙂