
At work, one of my tasks is to make sure we keep our clients’ mailboxes free of spam. Some weeks ago the number of spam mails went up massively, and we worked hard to update the filters to keep unwanted mail out. In this post I’ll describe a few of the things we did.

Using the famous SpamAssassin tool it is possible to score e-mails. One can score on the contents of the subject, body, headers, etcetera. A lot of good rules are already supplied, and it’s possible to write your own. When a new spam run came in, we used to create new rules for the spam mails that slipped through. That works, and afterwards the mails are tagged as spam.

As you can imagine, this procedure is both time consuming and a bit late: only after we see mails slipping through can we create rules to catch them. Of course this procedure will always be some sort of last resort if all else fails, but I wanted to set up something more proactive.

To start from the beginning, how is all this spam sent?
It can’t be sent from just one or a few locations, because then it would be easy to block. Instead, most spam is sent by botnets these days. Botnets usually have hundreds of thousands of PCs under control, and one of the main things they do is send spam, for example to advertise online casinos, fake banking sites or other scams. Because there are so many infected PCs, it’s not easy to block them all. Or is it?

When thinking about this, I realized most (if not all) members of these botnets are infected Windows PCs. Also, these mails are often sent directly from the PC to the final destination mailserver (instead of going through the SMTP server of the ISP).

If we could detect the OS of the client that connects to our mailserver, we could apply certain actions based on that OS. The idea here is that most ISPs use Linux, Unix or Mac servers. And if they are using Windows, it is likely to be some Server version instead of ‘Windows Vista’, ‘Windows XP’, etcetera. Interesting!

What we want to do here is known as passive OS fingerprinting. P0f, for example, is a tool that implements this. You run P0f as a daemon on the mailserver that accepts the incoming connections. Based on the traffic that flows by, P0f is able to guess the OS of the client that connects. It is passive, so the client never knows we’re doing this. Nothing sits between the client and the mailserver; P0f just analyzes the traffic. Now that P0f knows the OS of the client, we can decide what to do with this information. In our setup it works like this (sketches of the configuration follow the steps below):

1. When the OS is Windows, but not Windows Server, activate Greylisting. When another or an unknown OS is detected, start mail delivery immediately;

The idea behind this is that the spam software on infected Windows PCs is usually poorly written. It cannot handle the mailserver responding with a 400-series temporary error message, and most give up after just one attempt. This technique is called Greylisting, and on its own it already reduces the number of spam mails significantly. But Greylisting has drawbacks as well. The biggest drawback, in my opinion, is that it can delay mail by 30 minutes or more. Most customers we serve find this annoying.

2. At the time the connection is accepted and the mail is delivered, we set an ‘X-P0f-OS:’ header with the detected OS;

Combining Greylisting and P0f creates a more ideal solution: Windows PCs should not send mail directly to the recipient’s mailserver, but use their provider’s SMTP server instead. One could say that when such a PC is sending mail directly, it is at least suspicious. That is, in my opinion, enough reason to Greylist it. There must be some spam software that understands Greylisting (now or in the future) and that will eventually connect again after some time and deliver the mail. That’s why there is an action #3:

3. This header is consulted later on in the delivery process, and when Windows appears (again, not Windows Server), SpamAssassin adds some points to the spam score.

Mail that is sent directly from a Windows PC is suspicious to me, and the OS score helps reach the threshold SpamAssassin needs to tag it. The interesting thing is that this is proactive: you just don’t know what new mail spammers will send, but what you do know is that the next mail is probably sent by an infected Windows PC.
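To give an idea of what this looks like in practice, here is a minimal sketch of running P0f (version 2) as a daemon watching inbound SMTP traffic; the interface, socket path and user are assumptions for illustration:

# Watch inbound SMTP traffic, daemonize, and expose a query socket
# that the MTA (or a policy daemon) can consult per connection.
p0f -d -u p0f -i eth0 -Q /var/run/p0f.sock 'dst port 25'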
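And a hypothetical SpamAssassin rule for step 3. The header name is the one we set ourselves in step 2; the rule name and score are made up:

# local.cf: add points when the X-P0f-OS header shows a desktop
# Windows version (e.g. 'X-P0f-OS: Windows XP/2000'), but not
# when it is a Windows Server edition.
header   OS_WINDOWS_DIRECT  X-P0f-OS =~ /Windows(?!.*Server)/
describe OS_WINDOWS_DIRECT  Direct delivery from a desktop Windows machine
score    OS_WINDOWS_DIRECT  1.5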

This setup is now up and running, so I’ll let you all know what my experiences are after some time. When I find the time, I will also write some posts describing in more detail how to set up such a system.

Any other methods you use to stop spam effectively?

I had an interesting problem lately with AWStats. Due to some delay, the log files weren’t processed in the right order, and AWStats then ignored all older log lines. This resulted in some days being blank in the stats, and of course that is not something we want. Since we also have multiple web servers in our cluster, things got a bit complicated.

The log files from each of the web servers were concatenated and then split to a separate log file for each virtual host using the Apache2 split-logfile script.
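Roughly like this (the log file names are placeholders, and this assumes the combined log format starts with the virtual host name, i.e. a ‘%v’ in the LogFormat):

# Merge the access logs of both web servers, then let split-logfile
# write one <virtualhost>.log file per virtual host.
cat web1.access.log web2.access.log | split-logfile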

The logs for an example virtual host looked like this:

1.2.2.1 - - [01/Aug/2012:05:50:50 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_04"
1.2.2.1 - - [01/Aug/2012:05:50:51 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_04"
1.2.2.1 - - [28/Jul/2012:04:02:06 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_32"
1.2.2.1 - - [28/Jul/2012:04:02:06 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_32"

As you can see, AWStats processes August 1 first and then refuses the older July records. To re-sort the log file on the timestamp in the fourth field, I ran:

sort -t ' ' -k 4.9,4.12n -k 4.5,4.7M -k 4.2,4.3n -k 4.14,4.15n -k 4.17,4.18n -k 4.20,4.21n website.unsorted.log > website.log
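With a space as field separator, field 4 is ‘[01/Aug/2012:05:50:50’, so the keys sort numerically on the year (characters 9-12), then on the month name (characters 5-7, the ‘M’ flag), and then on the day, hour, minute and second.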

As an alternative, the AWStats script logresolvemerge.pl can be used as well. Since I had already concatenated and split the log files, the sort approach above was faster to implement.

Now the log file looks like this:

1.2.2.1 - - [28/Jul/2012:04:02:06 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_32"
1.2.2.1 - - [28/Jul/2012:04:02:06 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_32"
1.2.2.1 - - [01/Aug/2012:05:50:50 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_04"
1.2.2.1 - - [01/Aug/2012:05:50:51 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_04"

One last thing to solve was the AWStats history file. Since AWStats had already run on the wrongly ordered log file, the history file contained a wrong ‘LastLine’ setting. Experimenting with this showed it was best to remove the line and replace it with an empty line (so we won’t break the indexes). I used sed to fix it:

sed -i \
-e 's/^LastLine .*//' \
awstats072012.*
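Afterwards, ‘grep LastLine awstats072012.*’ should return nothing, while each history file keeps the same number of lines.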

AWStats now updates the stats correctly and everybody is happy! Thanks to my colleagues Pim, Vincent and Mischa, who all helped solve some pieces of the puzzle. Yes, it’s nice having technically skilled colleagues 🙂

 

My mother’s iPhone had issues and was replaced with a new one. Apple handled that very well 🙂 The downside of a new iPhone is, of course, that all settings and photos are lost. Fortunately, I was smart enough to enable iCloud backups last year when I set up her iPhone.

So, when the new iPhone came in today, all I did was restore the iCloud backup in a few swipes and clicks. An hour or so later (due to downloading) everything was as before. Very cool!

I recommend everybody set up iCloud backups. You’ll find the settings in the iCloud screen. It’s easy, and you never know…

 

We’ve virtualized many servers already this year. Last month we moved the MD equipment that still needs to be virtualized. Today we removed the old MN servers that are no longer needed due to our new private cloud. All our main equipment is now together in the DC-2 datacenter in Amsterdam!

I’m upgrading our MySQL master/slave setup and am moving it to new (virtual) hardware in our cloud environment. One of the things I did last night was moving the MySQL slaves to a new master that I had prepared in the new environment. This post describes how I connected the slaves to their new master in the cloud.

First, you’ll need to make sure the new master has the same data as the old one.
1. Make sure no more updates occur on the old master
2. Create a sql dump of the master using mysqldump
3. Import that dump into the new master using mysql cmd line tool

At this point both masters should have the same data.
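A rough sketch of steps 1 to 3 (host names and credentials are placeholders):

# Step 1: stop writes on the old master, e.g. by making it read-only
mysql -h old-master -u root -p -e 'SET GLOBAL read_only = ON'
# Steps 2 and 3: dump all databases and load the dump into the new master
mysqldump -h old-master -u root -p --all-databases > dump.sql
mysql -h new-master -u root -p < dump.sql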
4. Now, shut down the old master as it can be retired 😉
5. Before allowing write access to the new master, note its position by executing this query:

mysql> show master status\G
*************************** 1. row ***************************
            File: mn-bin.000005
        Position: 11365777
    Binlog_Do_DB: database_name
Binlog_Ignore_DB:
1 row in set (0.00 sec)

We’ll need this information later on when instructing the slaves to connect to their new master.

6. It’s now safe to allow write access again to the new master
7. Run the following on each slave; it will connect the slave to the new master (the slave threads must be stopped while changing the master, hence the STOP and START around it):

STOP SLAVE;
CHANGE MASTER TO
  master_host='master_hostname',
  master_user='replicate_user',
  master_password='password',
  master_log_file='mn-bin.000005',
  master_log_pos=11365777;
START SLAVE;

Note the ‘master_log_file’ and ‘master_log_pos’: their values are the ones we noted on the master in step 5. Then check whether it worked (allow a few seconds to connect):

mysql> show slave status\G

Look for these lines, they should say ‘Yes’:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

And the status should be:

Slave_IO_State: Waiting for master to send event

That’s it, the slave is now connected to a new master. Test it by updating the master, and checking whether the slave receives the update too.
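For example (the test table name is made up):

-- On the master:
mysql> CREATE TABLE database_name.repl_test (id INT);
mysql> INSERT INTO database_name.repl_test VALUES (1);

-- A moment later, on the slave:
mysql> SELECT * FROM database_name.repl_test;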