Archives For 31 October 2012

I’m used to my Mac, where the direction of scrolling was changed to a more ‘natural’ way starting with OS X Lion. It’s the same way scrolling works on mobile devices. When working on Ubuntu, I want the same behaviour because that’s what I’m used to.

To do so, you need to edit (or create) the .Xmodmap file in your homedir.

This is the default:

cat ~/.Xmodmap
pointer = 1 2 3 4 5

I changed it to:

vim ~/.Xmodmap
pointer = 1 2 3 5 4

This essentially makes the last two (scroll wheel) buttons work the other way around, and that is exactly the behaviour I want.

To enable this setting, you have to either log out of (and back into) your X environment, or run this one-liner (thanks, James!):

xmodmap -e "pointer = 1 2 3 5 4"

By both editing the .Xmodmap file and running the one-liner above, the new setting works immediately and also keeps working when you (for example) reboot your machine.
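To check which mapping is currently active (for instance after a reboot), you can print the pointer map. With the swap in place, physical buttons 4 and 5 should map to button codes 5 and 4 respectively:

xmodmap -pp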

Sometimes a MySQL slave may get corrupted, or its data may otherwise be unreliable. Usually I clone the data from a slave that is still ok and fix it from there. However, today I ran into an issue that made me doubt the data on any slave. To be absolutely sure the data is consistent on both master and slaves, I decided to deploy a new slave with a clone of the master and then redeploy the other slaves from the newly created slave, like I normally do with a script.

This blog post describes both methods of restoring replication.

Restoring data directly from the master
We will create a dump from the master server and use it on a slave. To be sure nothing changes during the dump, we issue a ‘read lock’ on the database. Reads will still work, but writes will wait until we release the lock, so please choose the right time to do this maintenance.

To lock all tables run:

FLUSH TABLES WITH READ LOCK;

Now that we have the lock, record the position of the master. We need it later to instruct the slaves where to continue reading updates from the master.

SHOW MASTER STATUS\G

Example output:

File: bin-log.002402
Position: 20699406

Time to create an SQL dump of the current databases. Do this in another session and keep the first one open; this makes sure you keep your lock while dumping the database.

mysqldump -ppassword -u username --add-drop-database databasename table1 table2 > masterdump.sql

After the dump is complete, go back to the first session and release the lock:

UNLOCK TABLES;

This is all we need to do on the master.
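As a side note, mysqldump can also take the read lock and record the binlog position by itself via its --master-data option (with value 2 the CHANGE MASTER TO statement is written as a comment into the dump). A minimal sketch, reusing the placeholder credentials and names from above:

mysqldump -ppassword -u username --master-data=2 databasename table1 table2 > masterdump.sql

If you use this, you can skip the manual FLUSH TABLES WITH READ LOCK and SHOW MASTER STATUS steps.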

Restoring from an already running slave
As an alternative to creating a dump from the master, you can also use a slave’s data. This has the advantage of not having locks on the master database and thus not interrupting service. On the other hand, you will have to be sure this slave’s data is correct.

First, stop the slave:

STOP SLAVE;

And verify it has stopped:

SHOW SLAVE STATUS\G

Output:

Slave_IO_Running: No
Slave_SQL_Running: No
Master_Log_File: bin-log.002402
Read_Master_Log_Pos: 20699406
Relay_Master_Log_File: bin-log.002402
Exec_Master_Log_Pos: 20699406

Record the ‘Relay_Master_Log_File’ and ‘Exec_Master_Log_Pos’. This is the position this slave is at. We will need it later to instruct the new slave.

Create an SQL dump of the slave’s data:

/usr/bin/mysqldump --add-drop-database -ppassword -u user -h mysqlserver --databases databasename > slavedump.sql

Now that we have a dump, we can start the slave again:

START SLAVE;

In the period between the slave ‘stop’ and ‘start’, everything still works, except that updates from the master are not processed. As soon as you start the slave again, it catches up with the master.

This method has the advantage that it is easily scriptable. Whenever there’s a problem, you’d run a script with the above commands and have everything fixed in a matter of seconds. That’s a real time saver!
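For illustration, a minimal sketch of such a script, with host name, credentials, database and file names as placeholders:

#!/bin/bash
# Sketch: dump a healthy slave while recording its replication position.
SLAVE_AUTH="-u user -ppassword -h mysqlserver"

mysql $SLAVE_AUTH -e "STOP SLAVE;"
# Record the position this slave has executed up to; needed for CHANGE MASTER TO later
mysql $SLAVE_AUTH -e "SHOW SLAVE STATUS\G" | egrep 'Relay_Master_Log_File|Exec_Master_Log_Pos' > replication_position.txt
mysqldump $SLAVE_AUTH --add-drop-database --databases databasename > slavedump.sql
mysql $SLAVE_AUTH -e "START SLAVE;"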

Setting up the new slave
Use scp to securely copy the SQL dump we just created above to the new slave. Alternatively, you may run the ‘mysqldump’ commands directly from the slave as well. Then log in and run these commands:

STOP SLAVE;
RESET SLAVE;

Restore the SQL dump (masterdump.sql or slavedump.sql, depending on the method you chose):

mysql -ppassword -u user databasename < masterdump.sql

You now have a slave with up-to-date data. We’ll have to instruct the slave where to start updating. Use the result from the ‘master status’ or ‘slave status’ query above, depending on the method you chose.

CHANGE MASTER TO
 master_host='mysqlmaster',
 master_user='replicate_user',
 master_password='replicate_password',
 master_log_file='bin-log.002402',
 master_log_pos=20699406;

Then start the slave:

START SLAVE;

And check the status after a few seconds:

SHOW SLAVE STATUS\G

Output:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

The slave now runs again with up-to-date data!
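If you want to keep an eye on how far the new slave still is behind, the Seconds_Behind_Master field in the slave status is handy; for example:

mysql -ppassword -u user -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master

It should drop to 0 once the slave has fully caught up.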

DRBD (Distributed Replicated Block Device) is an open source storage solution that is best compared to a RAID-1 (mirror) between two servers. I’ve implemented this for both our cloud storage and our cloud management servers.

We’re in the process of replacing both cloud storage nodes (that is: everything except the disks and their RAID array), and of course no downtime is allowed. Although DRBD is built for redundancy (one node can be offline without impact), completely replacing a node is a bit tricky.

Preventing a ‘split brain’ situation
The most important thing to remember is that only one storage node is allowed to be the active node at any time. If this is violated, a so-called ‘split brain’ happens. DRBD has methods for surviving such a state, but it is best to prevent it from happening in the first place.

When discussing this project in our team, it was suggested to boot the replaced storage node without any networking cables attached. Usually this is a safe way to prevent the node from interacting with others. In this case, it is not such a good idea: since the new secondary server has the same disks and configuration as the old one, unexpected things may happen. When booting without networking cables attached, the nodes cannot find one another and the newly booted secondary may decide to become primary itself. A split brain situation will then occur: both nodes will be primary at the same time and access the data. You won’t be able to recover from this unless you manually decide which node is the master (and lose the changes on the other node). In this case that decision would be easy to make, but it’s a lot of unnecessary trouble.

Instead, boot the replaced node with the network cables connected, so the replication network is up and both nodes immediately see each other. In our case, this means connecting the 10Gbps link between the nodes. This connection is used by DRBD for syncing and by Heartbeat for sending the heartbeats. This prevents entering the ‘split brain’ state and immediately starts syncing.

Note: If you want to replace everything including the disks, you’ll have to manually join the cluster with the new secondary node and then sync the data. In this case it doesn’t matter whether the networking cables are connected or not, since this new node won’t be able to become primary anyway.

The procedure
Back to our case: replacing all hardware except for the disks. We managed to successfully replace both nodes using this procedure:

  1. shut down the secondary node
  2. replace the hardware, install the existing disks and 10Gbps card
  3. boot the node with at least the 10Gbps connection active
  4. the node should sync with the primary
  5. when syncing finishes, redundancy is restored
  6. make sure all other networking connections are working. Since the main board was replaced, some MAC addresses changed; update UDEV accordingly
  7. when all is fine, check that DRBD and Heartbeat are running without errors on both nodes (see the status check sketch after this list)
  8. then stop Heartbeat on the primary node. A fail-over to the new secondary node will occur
  9. if all went well, you can now safely shut down the old primary
  10. replace the hardware, install the existing disks and 10Gbps card
  11. boot the node with at least the 10Gbps connection active
  12. the node should sync with the primary
  13. when syncing finishes, redundancy is restored
  14. make sure all other networking connections are working. Since the main board was replaced, some MAC addresses changed; update UDEV accordingly
  15. the old primary is now secondary
  16. if you want, initiate another fail-over (in our case we didn’t fail over again, since both nodes are equally powerful)
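A quick way to do the checks in steps 4, 7, 12 and 13 is to look at DRBD’s status directly on the nodes; a healthy, fully synced pair reports a Connected state and UpToDate/UpToDate disks (resource names and exact output depend on your configuration):

cat /proc/drbd
# or per resource:
drbdadm cstate all
drbdadm dstate all
drbdadm role all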

Congratulations: the cluster is redundant again with the new hardware!

Using the above procedure, we replaced both nodes of our DRBD storage cluster without any downtime.

I’ve been working a lot with CloudStack Advanced networking and find it very flexible. Recently, I had another opportunity to test its flexibility when a customer called: “We want VM’s in your CloudStack cloud, but these VM’s are only allowed to be reachable from our office, and not from the public internet”. Firewalling? No, they required us to use their VPN solution.

Is CloudStack flexible enough for this to work? Yes, it is. In this blog I’ll tell you how we did it. And it doesn’t even matter what VPN device you use. This will work regardless of brand and features, as long as it supports a public ip-address to connect over the internet to another VPN device, and has a private network behind it. All VPN devices I know of support these basic features.

VPN (Virtual Private Networking)
The client’s office is connected to the internet and has a VPN device. We received another device as well, to host in our data center; the two talk to each other over the public internet in a secure way, probably speaking IPsec or similar, but that is beyond the scope of this blog.

The VPN device in the data center has a public ip-address on its WAN port but also has some ports for the internal network. We configured it to use the same network CIDR as we did in the CloudStack network we created for this customer. Let’s use 10.10.16.0/24 as an example in this blog. And now the problem: this cloud network is a tagged network and the VPN device we received is not VLAN-capable.

VLANs in the Cloud
CloudStack Advanced networking relies on VLANs. Every VLAN has its own unique ID. Switches use this VLAN ID to keep the VLAN networks apart and make sure they’re isolated from each other. Most switches let you configure VLANs per port, and that’s where we’ll find the solution to this problem.

Configuring the switch
We connected the VPN device to our switch and set its port to UNTAGGED for the VLAN ID the CloudStack network uses. In other words, devices connected to this port do not need to know about the VLAN; the switch adds and strips the tag as traffic flows. This means the VPN device can use an ip-address in the 10.10.16.0/24 range and is able to communicate with the VM’s in the same network. The CloudStack compute nodes have their switch ports set to TAGGED, and the switch makes communication between them possible.
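As an illustration of what ‘TAGGED’ means on the compute node side: traffic for this network travels over an 802.1Q sub-interface there. On Linux that would look roughly like the sketch below (the VLAN ID 216 is a made-up example, and CloudStack normally creates these interfaces for you). The VPN device on the UNTAGGED port needs none of this; it simply sees plain, untagged traffic.

# hypothetical example: an 802.1Q sub-interface for VLAN ID 216 on eth0
ip link add link eth0 name eth0.216 type vlan id 216
ip link set eth0.216 up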

Overview of ip-addresses:

  • 10.10.16.1 – the internal VPN device ip-address in the data center
  • 10.10.16.11 – the first VM’s ip-address
  • 10.10.16.12 – the second VM’s ip-address

The VM’s have their default gateway set to the VPN device’s 10.10.16.1 address. Also, the office needs to be configured so that it knows the 10.10.16.0/24 network is reachable through the VPN device located there. Users in the office will then be able to access the VM’s on the 10.10.16.0/24 network.
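Inside a VM this could, for example, be a static configuration along these lines (a Debian-style /etc/network/interfaces sketch, using the addresses from the overview above):

# /etc/network/interfaces fragment on the first VM
auto eth0
iface eth0 inet static
    address 10.10.16.11
    netmask 255.255.255.0
    gateway 10.10.16.1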

Conclusion
While the VM’s are hosted on our CloudStack cloud on the internet, they do not have public ip-addresses and thus are not reachable from the public internet. The only public ip-address for this customer is the one configured on the VPN device in the data center. This provides the same level of security as you’d have with physical servers, but adds the power of a cloud solution.

Thanks to the flexibility of CloudStack Advanced Networking, this could be done!

When migrating an ip-address to another server, you will notice that it takes anywhere between 1 and 15 minutes for the ip-address to work on the new server. This is caused by the ARP cache of the switch or gateway on the network. But don’t worry: you don’t just have to wait for it to expire.

Why it happens
ARP (Address Resolution Protocol) provides a translation between ip-addresses and mac-addresses. Since the new server has another mac-address and the old one stays in the cache for some time, connections will not work yet. A cache entry usually only lives for a few minutes; it prevents asking for the mac-address of a certain ip-address over and over again.

One solution to this problem is to send an unsolicited ARP from the new server, so the gateway updates its cached mac-address. You need the ‘arping’ utility for this.

Installing arping
There are two packages in Debian that contain arping:

arping - sends IP and/or ARP pings (to the mac address)
iputils-arping - Tool to send ICMP echo requests to an ARP address

I’ve had the best results with the ‘iputils’ one, so I recommend installing that one. This is mainly because the other package’s command does not implement the required -U flag.

aptitude install iputils-arping

I haven’t installed arping on CentOS yet, but was told the package is in the RPMForge repository.

Using arping
The command looks like this:

arping -s ip_address -c1 -U ip_address_of_gateway

Explanation:
-s is the source ip-address, the one you want to update the mac-address of
-c1 sends just one arping
-U means unsolicited ARP mode, used to update the neighbours’ ARP caches
This is followed by the ip-address of the gateway you want to update. In most cases this is your default gateway for this network.

Example: if you moved 192.168.0.100 to a new server and your gateway is 192.168.0.254, you’d run:

arping -s 192.168.0.100 -c1 -U 192.168.0.254

After you’ve sent the arping, the gateway will update the mac-address it has cached for this ip-address, and traffic will start flowing to the new server.
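Putting it together, migrating an address to a new server could look like this (assuming eth0 and a /24 network):

# on the new server: bring up the migrated ip-address, then update the gateway's arp cache
ip addr add 192.168.0.100/24 dev eth0
arping -s 192.168.0.100 -c1 -U 192.168.0.254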

Bottom line: whenever you migrate an ip-address to another server, use arping to minimize downtime.