Due to the Ghost bug, aka CVE-2015-0235, we had to upgrade 500+ system VMs. We’re running CloudStack 4.4.2. The systemvm template it shipped with was version 4.4.1, so we created a 4.4.2 template and used that instead. It was quite some work to get it done, so we thought it was worth sharing how we did it in this blog. I did this work together with my Schuberg Philis colleague Daan Hoogland.

1. Build new CloudStack RPMs with MinVRVersion set to 4.4.2
Basically this was a single digit change in a single file (api/src/com/cloud/network/VirtualNetworkApplianceService.java).

[Screenshot: the MinVRVersion change in VirtualNetworkApplianceService.java]
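
If you build from source, the change and rebuild look roughly like this (a sketch; the packaging script location and options may differ per branch):

grep -n "MinVRVersion" api/src/com/cloud/network/VirtualNetworkApplianceService.java
# change the value from 4.4.1 to 4.4.2 in that file, then rebuild the RPMs, e.g.:
./packaging/centos63/package.sh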

2. Build new systemvm with latest patches
Obviously we had to build a new systemvm template with this same version. We used the latest Debian 7 release:

[Screenshot: the systemvm template definition, based on the latest Debian 7 release]

And also set the version to 4.4.2:

[Screenshot: the postinstall script with the version set to 4.4.2]
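
For reference, the template itself can be built from the CloudStack source tree; a rough sketch, assuming the veewee-based appliance build that ships in the 4.4 branch:

cd tools/appliance
# build the systemvm appliance image (use systemvm64template for a 64-bit build)
./build.sh systemvmtemplate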

3. Upload the template to CloudStack
Upload the template as the admin user. We couldn’t use systemvm-xenserver-4.4 as the name, because that name already existed. So we gave it a temporary name: systemvm-xenserver-4.4.2.
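
This can be done in the UI, or via the API with CloudMonkey. A sketch (the URL, zone id and OS type id below are placeholders for your own values):

register template name=systemvm-xenserver-4.4.2 displaytext=systemvm-xenserver-4.4.2 format=VHD hypervisor=XenServer ostypeid=<os-type-uuid> zoneid=<zone-uuid> url=http://example.com/systemvm64template-4.4.2-xen.vhd.bz2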

Wait until the templates are in the READY state.

4. Stop CloudStack management servers
Unfortunately, you need to stop CloudStack: first because we’re going to upgrade the RPMs, and second to get the new template registered.

Also, since we will hack the SQL database in the next step (or should I say: we used the SQL API), it’s better to do this while CloudStack is not running.

5. Hack the SQL database
Once the templates were downloaded, we made the following changes:
– renamed systemvm-xenserver-4.4 to systemvm-xenserver-4.4-old
– renamed systemvm-xenserver-4.4.2 to systemvm-xenserver-4.4
– set the type of the new template to SYSTEM

Get an overview with this query:

SELECT * FROM cloud.vm_template where type='SYSTEM';

[Screenshot: the SYSTEM templates in the cloud.vm_template table]

Example of update query:

UPDATE `cloud`.`vm_template` SET `name`='systemvm-xenserver-4.4', `type`='SYSTEM' WHERE `id`='2152';
UPDATE `cloud`.`vm_template` SET `name`='systemvm-vmware-4.4', `type`='SYSTEM' WHERE `id`='2153';

As you can see, in our case the old and new template IDs were as follows:

1952 (old) -> 2152 (new)
1953 (old) -> 2153 (new)

Finally, you need to update the vm_template_id of the running system VMs from the old to the new ID, as in this example:

UPDATE `cloud`.`vm_instance` SET `vm_template_id`='2152' WHERE `vm_template_id`='1952' and removed is NULL;
UPDATE `cloud`.`vm_instance` SET `vm_template_id`='2153' WHERE `vm_template_id`='1953' and removed is NULL;
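
A quick sanity check afterwards, to verify that all running system VMs now reference the new template IDs (a sketch; the type column and its values are assumptions based on the 4.4 schema, adjust credentials to your setup):

mysql -u cloud -p cloud -e "SELECT vm_template_id, type, COUNT(*) FROM vm_instance WHERE removed IS NULL AND type IN ('DomainRouter','SecondaryStorageVm','ConsoleProxy') GROUP BY vm_template_id, type;"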

6. Install new CloudStack RPMs
While CloudStack is still down, upgrade the RPMs. This is a quick install, as there are almost no changes.
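
Something along these lines, depending on how you distribute packages (a sketch):

# on each management server, while CloudStack is stopped
yum upgrade cloudstack-management cloudstack-common
# or, when installing local RPM files:
rpm -Uvh cloudstack-common-4.4.2*.rpm cloudstack-management-4.4.2*.rpm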

7. Start the management servers
It’s time to start the management servers again. When they’re up, check the virtual routers:

[Screenshot: the virtual routers, all with the RequiresUpgrade flag set]

All RequiresUpgrade flags are set!
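
You can also check this from the API, for example with CloudMonkey (a sketch; the requiresupgrade field corresponds to the flag shown above):

list routers listall=true filter=name,version,requiresupgrade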

8. Destroy SSVMs and Console Proxies
We had to destroy the Secondary Storage VMs and Console Proxies so they would be recreated from the new template. Rebooting them did not work.

9. Reboot routers
Just reboot your routers and they get upgraded automatically!
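
For a single router, this boils down to something like this via CloudMonkey (the id is a placeholder):

reboot router id=<router-uuid>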

We used internally developed tooling to automate this. The tools send maintenance notifications to the tenants when their router is being upgraded (and when it’s finished). We’ll open source the tools in the coming months, so stay tuned!

Conclusion
I think we need an easier way to do this 😉

Happy patching!

Data security is getting more and more important these days. Imagine you work as a sysadmin, on a laptop, and it gets lost. Of course you might lose your work (but you have a backup, right?). The real problem: your sensitive data (an SSH private key, for example) is no longer under your control.

In this blog I explain how you can add an encrypted partition to Linux. As long as you also use a password-protected screensaver with a decent password (to protect a running, logged-in laptop), no one can access your data. Even rebooting into single-user mode (to bypass the login screen) won’t help: there is no access to the encrypted disk without the passphrase. Wiping the disk and reinstalling is an option, but your data is not revealed.

LUKS: Linux Unified Key Setup
Linux ships with LUKS, which is short for ‘Linux Unified Key Setup’. It’s a tool and technique to set up encrypted devices. This device can be a laptop hard disk, but also a USB pen drive or a virtual disk (when you have a virtualized server). Encryption with LUKS works on the block level, so the filesystems on top of it are not even aware of it. This is nice, because it also means you can use LVM inside a LUKS-encrypted block device. The encrypted drive is protected with a passphrase that you need to enter at boot.

Below I’ll show you how to set it up. Be aware that we are formatting partitions; in other words, you will lose all data on the partition you experiment with. For testing purposes, use a USB pen drive or a spare partition. I’m using a virtual disk called /dev/sdc for this demo.

Note: you need the kernel module dm_crypt loaded for this to work.

modprobe dm_crypt

Don’t forget to make it persistent, so the module is loaded again after a reboot.
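
For example (where to put it depends on your distribution):

# Debian/Ubuntu:
echo dm_crypt >> /etc/modules
# systemd-based distributions:
echo dm_crypt > /etc/modules-load.d/dm_crypt.conf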

To format the partition:

cryptsetup luksFormat /dev/sdc

Output:

WARNING!
========
This will overwrite data on /dev/sdc irrevocably.

Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase: 
Verify passphrase:

Now that the encrypted disk is created, we’ll open it:

cryptsetup luksOpen /dev/sdc encrypted_disk

Output:

Enter passphrase for /dev/sdc:

To unlock the disk, you need to enter the passphrase you just set. You need to do this every time you want to unlock the disk.

The ‘encrypted_disk‘ part of the command above is used to map a device name to your encrypted disk. In this case, ‘/dev/mapper/encrypted_disk’ is created. So the encrypted disk called ‘/dev/sdc’ now has an extra name that refers to its unlocked state: ‘/dev/mapper/encrypted_disk’. This device is also a block device, on which you could run ‘fdisk’ or ‘mkfs.ext4’, etc.

But wait before you do that. If you’d like to use multiple partitions, I’d suggest using LVM inside the encrypted disk. This isn’t visible from the outside; only when you’ve unlocked the disk do the LVM partitions appear. It saves you from entering a passphrase for each and every encrypted disk you create. Also, LVM is more flexible when it comes to resizing its logical volumes.

Let me show you how to setup LVM on the encrypted disk:

pvcreate /dev/mapper/encrypted_disk

Output:

Physical volume "/dev/mapper/encrypted_disk" successfully created

vgcreate encrypted /dev/mapper/encrypted_disk

Output:

Volume group "encrypted" successfully created

lvcreate encrypted --name disk1 --size 10G

Output:

Logical volume "disk1" created

lvcreate encrypted --name disk2 --size 10G

Output:

Logical volume "disk2" created

You now have two more block devices called ‘/dev/encrypted/disk1’ and ‘/dev/encrypted/disk2’. Let’s put a file system on top of them:

mkfs.ext4 -m0 /dev/encrypted/disk1
mkfs.ext4 -m0 /dev/encrypted/disk2

The two encrypted partitions are now ready to be used. Let’s mount them somewhere:

mkdir -p /mnt/disk1 /mnt/disk2
mount /dev/encrypted/disk1 /mnt/disk1
mount /dev/encrypted/disk2 /mnt/disk2

This all works pretty well already. But when you reboot, you’ll have to run:

cryptsetup luksOpen /dev/sdc encrypted_disk
vgscan
lvchange --activate y encrypted/disk1
lvchange --activate y encrypted/disk2
mount /dev/encrypted/disk1 /mnt/disk1
mount /dev/encrypted/disk2 /mnt/disk2

Entering the passphrase is required after the first command. Line 2 scans for new LVM devices (because when the encrypted device is unlocked, a new block device appears). Lines 3 and 4 activate the two logical volumes, and finally they are mounted.
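
To lock the disk again (for example before unplugging a pen drive), reverse the steps:

umount /mnt/disk1 /mnt/disk2
vgchange --activate n encrypted
cryptsetup luksClose encrypted_disk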

Automating these steps
It is possible to automate this. That is, Linux will ask for the passphrase at boot time and mount everything for you. Just think about that for a while. When booting a laptop, this is probably what you want. But if it is a server in a remote location, it might not be, as you need to enter the passphrase on the (virtual) console for the server to continue booting. There is no SSH access at that time.

Anyway, you need to do two things: first, tell Linux to unlock the encrypted device at boot time; second, mount the logical volumes.

To start, look up the UUID of the encrypted disk, /dev/sdc:

cryptsetup luksDump /dev/sdc | grep UUID

The result should be something like:

UUID: b8f60c1d-ffeb-4aaf-8368-9e5d4d29fc52

Open /etc/crypttab and enter this line:

encrypted_disk UUID=b8f60c1d-ffeb-4aaf-8368-9e5d4d29fc52 none

The first field is the name of the device mapping that is created; use the same name as you used with luksOpen above. The second field is the UUID we just found. The final field is the key file, which should be set to ‘none‘ so that you are prompted for the passphrase at boot. Putting the passphrase in this file is a bad idea, if you ask me.

The final step is to set up /etc/fstab to mount the encrypted disks automatically. Add these lines:

/dev/encrypted/disk1 /mnt/disk1 ext4 defaults 0 0
/dev/encrypted/disk2 /mnt/disk2 ext4 defaults 0 0

I’m using device names here, because LVM gives me the /dev/encrypted/disk[12] names every time. If you did not use LVM, it’s probably wise to use UUIDs in /etc/fstab instead. This makes sure the right filesystem is mounted, regardless of the device’s name.
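
You can find a filesystem’s UUID with blkid and then refer to it in /etc/fstab (the UUID value below is just a placeholder for whatever blkid prints):

blkid /dev/encrypted/disk1

UUID=<uuid-printed-by-blkid> /mnt/disk1 ext4 defaults 0 0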

Time to reboot. During this reboot, Linux will ask for the passphrase of the /dev/sdc device. On RHEL 6 it looks like:

RHEL 6 asks for the LUKS passphrase of /dev/sdc during boot

It might look different on your OS. The Ubuntu version looks a bit prettier, for example.

Enter the passphrase, hit enter, and Linux should continue booting normally. Then log in on the console (or via SSH) and verify that the two disks are mounted.
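
For example, run ‘mount’ and look for the two filesystems:

mount

Output: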

...
/dev/mapper/encrypted-disk1 on /mnt/disk1 type ext4 (rw)
/dev/mapper/encrypted-disk2 on /mnt/disk2 type ext4 (rw)
...

Ubuntu makes this very easy to set up: there is simply a checkbox during installation that says ‘encrypt disk’, and it will set up LUKS for you. But you end up with everything in one big / partition without LVM. That’s why I prefer to configure it myself, and with these instructions so can you.

Conclusion
It is cool that these security features are now mainstream and easy to use. Do yourself a favor, and setup LUKS today!

As discussed in a previous post, multi-factor authentication really makes things more secure. Let’s see how we can secure services like SSH and the Gnome desktop with multi-factor authentication.

The Google Authenticator project provides a PAM module that can be integrated with your Linux server or desktop. This PAM module is designed for home and other small environments. There is no central management of keys, and the configuration is saved in each user’s home folder. I’ve successfully deployed this on my home server (a Raspberry Pi) and on my work laptop.

When using a Debian-like OS, you can install it with a one-liner:

apt-get install libpam-google-authenticator

But note that the packaged version is old and does not support all documented options. Below I talk about the ‘nullok’ option, which is not supported in the packaged version. You would then see this error:

sshd(pam_google_authenticator)[884]: Unrecognized option "nullok"
sshd(pam_google_authenticator)[887]: Failed to read "/home/pi/.google_authenticator"

That’s why I suggest building from source, as this can be done quickly:

apt-get remove libpam-google-authenticator
apt-get install libpam0g-dev libqrencode3
cd ~
wget https://google-authenticator.googlecode.com/files/libpam-google-authenticator-1.0-source.tar.bz2
tar jvxf libpam-google-authenticator-1.0-source.tar.bz2
cd libpam-google-authenticator-1.0/
make
make install

Open a shell as the user you want to enable two-factor authentication for, and run ‘google-authenticator’ to configure it.

google-authenticator
Configure Google Authenticator

Nice!

Configure your Mobile device
Install the ‘Google Authenticator’ app and just scan the QR-code. It will automatically configure itself and start displaying verification codes.

Verification codes displayed by the app

You should notice it displays a new code every 30 seconds.

Configure SSH
Two files need to be edited in order to enable two-factor authentication in SSH.

vim /etc/pam.d/sshd

Add this line:

auth required pam_google_authenticator.so nullok

Where to put this line in the file? That depends. When you put it at the top, SSH will first ask for a verification code and then for a password. To me this seems illogical, so I placed it just below this line:

@include common-auth

The ‘nullok’ option, by the way, tells PAM that whenever no two-factor configuration is found for a user, it should just skip this module. If you want SSH logins to fail when no two-factor authentication is configured, you can remove the option. Be sure to configure it for at least one user first, or you will lock yourself out of your server.

Now tell SSH to ask for the verification code:

vim /etc/ssh/sshd_config

Edit this setting; it’s probably set to ‘no’:

ChallengeResponseAuthentication yes

Now all you need to do is restart SSH. Keep a spare SSH session logged-in, just in case.

/etc/init.d/ssh restart
SSH two-factor authentication in action

SSH will now ask for a verification code when you do an interactive login. When a certificate is used, no verification code is asked for.

Configuring Ubuntu Desktop
The Ubuntu desktop can also ask for a verification code. Just edit one file:

vim /etc/pam.d/lightdm

And add this line, just like SSH:

auth required pam_google_authenticator.so nullok

The login screen should now ask for a verification code. It looks like:

Two-factor authentication in Ubuntu

You can set up any PAM service like this; the basic principles are all the same.
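
For example, to also require a verification code for sudo (the same recipe, in a different PAM service file):

vim /etc/pam.d/sudo
auth required pam_google_authenticator.so nullok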

Skip two-factor authentication if logging in from the local network
At first this is all very cool, but soon it becomes a bit annoying, too. When I SSH from the local network, I don’t want to enter a verification code, because I trust my local network. When I SSH from a remote location, a verification code is required. One way to arrange that is to always log in with certificates. But there is another way to configure it: using the pam_access module. Try this config:

auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access-local.conf
auth required pam_google_authenticator.so nullok

The config file, /etc/security/access-local.conf looks like:

# Two-factor can be skipped on local network
 + : ALL : 10.0.0.0/24
 + : ALL : LOCAL
 - : ALL : ALL

Local login attempts from 10.0.0.0/24 will not require two-factor authentication, while all others do.

Two-factor only for users of a certain group
You can even enable two-factor authentication only for users in a certain group. Say you want only users in the ‘sudo’ group to use two-factor authentication. You could use:

auth [success=1 default=ignore] pam_succeed_if.so user notingroup sudo
auth required pam_google_authenticator.so nullok

By the way, “[success=1 default=ignore]” in PAM means: on success, skip the next 1 auth provider (two-factor authentication), on failure, just pretend it didn’t happen.

Moving the config outside the user’s home
Some people prefer not to store the authentication data in the user’s home directory. You can do that using the ‘secret’ parameter:

mkdir -p /var/lib/google-authenticator/
mv ~remibergsma/.google_authenticator /var/lib/google-authenticator/remibergsma
chown root.root /var/lib/google-authenticator/*

The PAM recipe will then become:

auth required pam_google_authenticator.so nullok user=root secret=/var/lib/google-authenticator/${USER}

As you’ve seen, PAM is very flexible. You can configure almost anything you want.

Try the Google Authenticator two-factor authentication. It’s free and adds another security layer to your home server.

The recent article on Ars Technica, Anatomy of a hack: How crackers ransack passwords like “qeadzcwrsfxv1331”, made me realize again how weak password authentication is. True, in this case the MD5 hashes were known, but it’s still only a single factor. Imagine what happens when you reuse a password on multiple sites, and one of them stores it as MD5 (or SHA-1, or even plain text) and gets compromised.

Brute-force
What’s interesting in the above article is that brute-force attacks only work up to 6 or 7 characters (depending on the hardware available) because of the so-called “exponential wall of brute-force cracking”, caused by the exponentially increasing number of guesses each additional character adds (although project Erebus v2.5 already demonstrated brute-forcing any 8-character password in just 12 hours with an 8-GPU cluster). This, however, does not mean a 9+ character password is safe. In fact, passwords of 16 characters are easily cracked when using the right techniques.

Patterns
One of the techniques used to crack the passwords is to use common patterns. These patterns were learned from past security breaches, especially the famous RockYou breach, in which millions of passwords were leaked in plain text. This revealed many passwords people use in real life and gave an interesting view on how people choose passwords. Not only have these passwords been added to the word lists used to crack passwords today (500 million words is no exception), patterns were derived from them as well, to help crack longer, more complex passwords efficiently.

A simple example of such a pattern is: Winter09 (word starting with a capital, two digits added to the end, all lowercase in the middle). Don’t think ‘W1nt3r09’ is safer, as this is also a pattern: i is replaced by 1, e with 3, etc. Huge collections of patterns can be found online.

You can now imagine why you should use random passwords instead. Random strings are the best, because they can only be brute-forced. When you use a 12-character random password, such as 45zCpJEq8bAf, you’ll be safe. The longer, the better, but W6Bf7TwFfRhM3xQ9FYckCYs3VpY38yJR is much harder to remember, of course. Fortunately, tools exist to help you.
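
If you need a random password quickly, common command-line tools can generate one for you, for example:

# 16 random base64 characters
openssl rand -base64 12
# or one secure 16-character password (pwgen needs to be installed)
pwgen -s 16 1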

Password managers
Use a secure password manager to help you remember strong passwords. I say secure, because the password managers built into browsers are not secure (at all). Services like 1Password and LastPass provide an encrypted vault in which you can store your passwords, protected by a secure, random master password. They integrate with your browser and automatically fill in your site-specific, strong password. This password can now be much longer, since you don’t have to remember it. All you have to remember is the master password.

Everything is encrypted locally before it is sent to their servers, making the data useless without your master password. It’s automatically synced across your devices (laptop, desktop, mobile, tablet, etc). Check it out, it’s great.

Multi-factor authentication
Now that you have safe passwords, and a way to manage them, I’d like to suggest another technique called ‘multi-factor authentication‘. Multi-factor authentication (also Two-factor authentication, TFA, T-FA or 2FA) is an approach to authentication which requires the presentation of two or more of the three authentication factors: a knowledge factor (“something the user knows”), a possession factor (“something the user has”), and an inherence factor (“something the user is”).

Both 1Password and LastPass support two-factor authentication. This can either be a hardware device (an RSA token, a YubiKey, etc.) or software (like Google Authenticator). Sometimes verification codes are sent as SMS messages. When two-factor authentication is enabled, you need both the master password and a verification code to access your vault. That’s great security for your passwords!

The final thing you should do is protect your e-mail account with two-factor authentication as well. Some e-mail providers offer this service; Google’s Gmail even offers it for free. Why is this important? Because most sites send a ‘password lost’ e-mail to verify a password change, so your passwords are only as secure as your e-mail password. With an added security layer, this becomes much harder to crack. You now probably also understand why you shouldn’t reuse your e-mail password on other sites.

I hope this blog brings in some inspiration to upgrade your online security!

OpenVPN is great: it allows for easy access in a secure way. But how do you keep it secure? I mean, what if someone leaves your company? Do you disable their access to the OpenVPN server? You should! In this blog I’ll show you how to do it.

OpenVPN has a feature called revoking. Revoking a certificate means invalidating a previously signed certificate so that it can no longer be used for authentication. For this to work, we need to tell the OpenVPN server which certificates are no longer valid.

All connecting clients will then have their client certificates verified against the so-called CRL (Certificate Revocation List). Any positive match will result in the connection being dropped. Your former employees will no longer have access, even if they still have their certificates.

Creating a certificate to test with
Before we start, let’s generate a dummy certificate for testing purposes:

cd /etc/openvpn/easy-rsa/2.0/
 . ./vars
./build-key unwanted-client-name

Verify you can connect to the OpenVPN server using this certificate. Refer to my earlier post for more info. Now that this works, I’ll show you how to revoke this certificate so you will no longer be able to connect.

Revoking a certificate
To revoke a certificate, we’ll use the ‘easy-rsa’ toolset.

cd /etc/openvpn/easy-rsa/2.0

If it’s not there, look at the OpenVPN examples and copy it:

cp -R /usr/share/doc/openvpn/examples/easy-rsa /etc/openvpn
cd /etc/openvpn/easy-rsa/2.0

Run this command to revoke the certificate called ‘unwanted-client-name’:

./revoke-full unwanted-client-name

You should see output similar to this:

Using configuration from /etc/openvpn/easy-rsa/2.0/openssl-1.0.0.cnf
Revoking Certificate 03.
Data Base Updated
Using configuration from /etc/openvpn/easy-rsa/2.0/openssl-1.0.0.cnf
unwanted-client-name.crt: C = NL, ST = ZH, L = City, O = Name, OU = pi.example.org, CN = unwanted-client-name, name = unwanted-client-name, emailAddress = [email protected]
error 23 at 0 depth lookup:certificate revoked

Note the “error 23” in the last line. That is what you want to see, as it indicates that a certificate verification of the revoked certificate failed.

The index.txt file in the keys directory will be updated. You’ll see an ‘R’ (for Revoked) in the first column for your user when you run this:

cat keys/index.txt

You can also examine the CRL to see what’s in there:

openssl crl -in keys/crl.pem -text

Now copy the crl.pem file to the OpenVPN config directory:

cp keys/crl.pem /etc/openvpn/

Whenever you revoke a certificate, you have to copy the updated crl.pem to the OpenVPN server again.
Note: the CRL file is not secret, and should be made world-readable so that the OpenVPN daemon can read it after root privileges have been dropped.
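
For example:

chmod 644 /etc/openvpn/crl.pem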

Enable revoking support
Before this works, we need to set up the OpenVPN server to add support for revoking certificates. You only have to do this once.

Add the following line to the OpenVPN server config:

vim /etc/openvpn/server.conf
crl-verify /etc/openvpn/crl.pem

Reload the OpenVPN server to activate the revoke setting:

/etc/init.d/openvpn reload

Testing the revoked certificate
The final test: try to login again using the ‘unwanted-client-name’ certificate. It will not work anymore!

By revoking certificates, you disallow access to your OpenVPN server for users that previously had access. This should be done as soon as a user no longer needs access, as it is an important security measure.