At times it can be very handy to use a local MySQL socket while the real MySQL server is in fact a remote one. I once used this technique to split a dozen LAMP servers into dedicated (clustered) web servers and MySQL servers. Without having all clients update their scripts, that is.

Today I ran into another use case while migrating an old 32-bit MySQL Windows server to a modern Linux VM. We wanted to get some statistics from it. The famous ‘mysqltuner.pl’ script is designed to run from localhost. Well, let’s pretend MySQL is on localhost then 😉

For these tricks to work you need a tool called ‘socat‘, which can be installed like this:

apt-get install socat

This is how to create a socket at ‘/var/lib/mysql/mysql.sock’, owned by user and group ‘mysql’, and forward connections to ‘mysql-server’ on port 3306:

socat UNIX-LISTEN:/var/lib/mysql/mysql.sock,fork,\
reuseaddr,unlink-early,user=mysql,group=mysql,mode=777 \
TCP:mysql-server:3306 &

You can now connect:

mysql -u user -p

No need to supply a hostname, MySQL connects to the local socket by default and socat takes care of forwarding the connection to the real MySQL server.
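
If your client expects the socket at a different path, you can also point it at the socket socat created explicitly:

mysql --socket=/var/lib/mysql/mysql.sock -u user -p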

Note that this only works for as long as socat is running. You could use a tool called runit to keep socat running at all times; I’ll explain more about runit in another post.
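
As a minimal sketch (assuming runit is installed and the common /etc/service directory is used), such a run script could look like this:

#!/bin/sh
# Hypothetical runit run script, e.g. /etc/service/mysql-socat/run.
# runit supervises this process and restarts socat whenever it exits,
# so the command runs in the foreground (no trailing '&').
exec socat UNIX-LISTEN:/var/lib/mysql/mysql.sock,fork,\
reuseaddr,unlink-early,user=mysql,group=mysql,mode=777 \
TCP:mysql-server:3306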

This trick works with other TCP-based protocols as well.
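
For example, the same pattern can expose a remote PostgreSQL server through its default local socket path (the host ‘pg-server’ and the local ‘postgres’ user/group are assumptions):

socat UNIX-LISTEN:/var/run/postgresql/.s.PGSQL.5432,fork,\
reuseaddr,unlink-early,user=postgres,group=postgres,mode=777 \
TCP:pg-server:5432 &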

Sometimes a MySQL slave may get corrupted, or its data may otherwise be unreliable. Usually I clone the data from a slave that is still OK and fix it from there. However, today I ran into an issue that made me doubt the data on any of the slaves. To be absolutely sure the data is consistent on both master and slaves, I decided to deploy a new slave with a clone of the master and then redeploy the other slaves from the newly created slave, like I normally do with a script.

This blog post describes both methods of restoring replication.

Restoring data directly from the master
We will create a dump from the master server and use it on a slave. To be sure nothing changes during the dump, we issue a ‘read lock’ on the database. Reads will still work; writes will wait until we release the lock, so choose the right time for this maintenance.

To lock all tables run:

FLUSH TABLES WITH READ LOCK;

Now that we have the lock, record the position of the master and write it down. We need it later to instruct the slaves where to continue reading updates from the master.

SHOW MASTER STATUS\G

Example output:

File: bin-log.002402
Position: 20699406

Time to create an SQL dump of the current databases. Do this in another session and keep the first one open; this makes sure you keep the lock while dumping the database.

mysqldump -ppassword -u username --add-drop-database --databases databasename > masterdump.sql

After the dump is complete, go back to the first session and release the lock:

UNLOCK TABLES;

This is all we need to do on the master.
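
As an aside, mysqldump can record the master position for you via the --master-data option, which saves you the separate lock session. A sketch (check your version’s documentation for the exact behavior):

mysqldump -ppassword -u username --master-data=2 --single-transaction \
--databases databasename > masterdump.sql

Here --master-data=2 writes the matching CHANGE MASTER TO statement as a comment at the top of the dump, and --single-transaction takes a consistent snapshot of InnoDB tables without holding the read lock for the duration of the dump.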

Restoring from an already running slave
As an alternative to creating a dump from the master, you can also use a slave’s data. This has the advantage of not having locks on the master database and thus not interrupting service. On the other hand, you will have to be sure this slave’s data is correct.

First, stop the slave:

STOP SLAVE;

And verify it has stopped:

SHOW SLAVE STATUS\G

Output:

Slave_IO_Running: No
Slave_SQL_Running: No
Relay_Master_Log_File: bin-log.002402
Exec_Master_Log_Pos: 20699406

Record the ‘Relay_Master_Log_File’ and ‘Exec_Master_Log_Pos’. This is the position this slave is at. We will need it later to instruct the new slave.

Create an SQL dump of the slave’s data:

/usr/bin/mysqldump --add-drop-database -ppassword -u user -h mysqlserver --databases databasename > masterdump.sql

Now that we have a dump, we can start the slave again.

START SLAVE;

In the period between the slave ‘stop’ and ‘start’, everything still works, except that updates from the master are not processed. As soon as you start the slave again, it catches up with the master.

This method has the advantage that it is easily scriptable. Whenever there’s a problem, you can run a script with the above commands and have everything fixed in a matter of seconds. That’s a real time saver!
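
A minimal sketch of such a script (host, credentials and database name are placeholders; error handling omitted):

#!/bin/sh
# Hypothetical redeploy helper: record a healthy slave's position, dump its data.
mysql -u user -ppassword -h mysqlserver -e 'STOP SLAVE;'
mysql -u user -ppassword -h mysqlserver -e 'SHOW SLAVE STATUS\G' \
| grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos' > position.txt
mysqldump --add-drop-database -ppassword -u user -h mysqlserver \
--databases databasename > masterdump.sql
mysql -u user -ppassword -h mysqlserver -e 'START SLAVE;'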

Setting up the new slave
Use scp to securely copy the SQL dump we just created to the new slave. Alternatively, you may run the ‘mysqldump’ commands directly from the slave as well. Then log in and run these commands:

STOP SLAVE;
RESET SLAVE;

Restore the sql dump:

mysql -ppassword -u user databasename < masterdump.sql

You now have a slave with up-to-date data. Next, we have to instruct the slave where to start reading updates. Use the result from the ‘master status’ or ‘slave status’ query above, depending on the method you chose.

CHANGE MASTER TO
 master_host='mysqlmaster',
 master_user='replicate_user',
 master_password='replicate_password',
 master_log_file='bin-log.002402',
 master_log_pos=20699406;

Then start the slave:

START SLAVE;

And check the status after a few seconds:

SHOW SLAVE STATUS\G

Output:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

The slave now runs again with up-to-date data!

Here are three ways to back up your MySQL database(s) and compress the backup on the fly:

1. Dump database ‘dbname’ into a compressed file

mysqldump \
-u user \
-ppassword \
--host=mysql-server \
dbname | \
gzip > dbname_backup.sql.gz

To dump multiple databases, you can either write a ‘for loop’ over a ‘SHOW DATABASES’ query result (sketched below) or use the next command to back up everything to a single file.
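
A minimal sketch of such a loop (credentials and host are placeholders; it skips MySQL’s internal schemas):

for db in $(mysql -u user -ppassword --host=mysql-server -N -e 'SHOW DATABASES;' \
    | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
  mysqldump -u user -ppassword --host=mysql-server "$db" | gzip > "${db}_backup.sql.gz"
done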

2. Dump all databases into a compressed file

mysqldump \
--all-databases \
-u user \
-ppassword \
--host=mysql-server | \
gzip > all_databases_backup.sql.gz

3. Dump database ‘dbname’ except for some tables into a compressed file

mysqldump \
--ignore-table=dbname.ignoredtable1 \
--ignore-table=dbname.ignoredtable2 \
--ignore-table=dbname.ignoredtable3 \
-ppassword \
-u user \
--host=mysql-server \
dbname | \
gzip > dbname_backup.sql.gz

This allows you to skip certain tables that are unimportant (for example cached data) or static (imported from somewhere else). You can skip as many tables as you like. Just repeat the line for each table you want to ignore.

The pipe to gzip in the last line of all examples makes sure the backup is compressed before it is written to disk. You can safely do so to save some disk space; just remove the pipe to get an uncompressed backup.

One-liners are always fun. Linux has pipes, streams and redirects which you can combine to do many things in just one line. Today I had to restore a MySQL backup that was gzipped. Instead of typing:

gunzip db-backup.sql.gz
mysql -p -h db.example.com dbname < db-backup.sql

You can type this instead:

gunzip < db-backup.sql.gz | mysql -p -h db.example.com dbname

The .sql.gz file is input to gunzip, the resulting (unzipped) data is piped to mysql and is used as input there.

This preserves the .gz file and saves disk space, as the unzipped file is never written to disk. With big databases it is also a time saver: with two commands you have to wait for the file to unzip before starting the actual import, while the one-liner needs no intervention at all.
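
By the way, on Linux ‘zcat’ (shipped with gzip) does the same as ‘gunzip <’, so this is equivalent:

zcat db-backup.sql.gz | mysql -p -h db.example.com dbname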

Today I ran into an issue with MySQL replication that prevented updates from the master from appearing on the slaves. When checking the slave status, MySQL reported:

Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: No
Last_Error: Query caused different errors on master and slave.

Apparently, a query caused an error on the master and failed on the slave as well. But since the error messages differed slightly, replication got stuck on this query. Manual intervention is required to tell MySQL what to do next.

This is what I used to fix it:

mysql -p
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;

I logged into MySQL (line 1), stopped the slave thread (line 2), skipped the one faulty query (line 3) and started the slave thread again (final line). The status now reports both a running IO and SQL thread.

mysql> SHOW SLAVE STATUS\G

Output:

 Slave_IO_State: Waiting for master to send event
 Slave_IO_Running: Yes
 Slave_SQL_Running: Yes
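
To keep an eye on the slave afterwards, you can grep the relevant fields from the status output (‘Seconds_Behind_Master’ shows how far the slave still lags):

mysql -p -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'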