After all the hard work in the past months, it’s now time for some weeks off 🙂 One last thing to solve was the Zimbra auto responder: I had enabled it through the web interface, but while testing it I found it only replied to mails sent to my main e-mail address. There is no way to fix this in the web interface, so I had a look at the command line options and found a way to add extra e-mail addresses. SSH to the machine, become the user ‘zimbra’ and run this command: 

zmprov ma [email protected] +zimbraPrefOutOfOfficeDirectAddress [email protected]

By running this command you tell Zimbra to also reply to [email protected] instead of only to [email protected]. The + in front of the attribute name tells zmprov to add a value instead of overwriting the previous setting, so you can run the command multiple times to add more e-mail addresses, if you have them. I have added all three e-mail addresses I use, so they’ll get auto responses while I’m out of the office. Hope this is helpful to others as well.
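For completeness: zmprov can also show what is currently configured, and a - prefix removes a value again. These commands are written from memory, so verify them against your Zimbra version:

```
# show the currently configured extra addresses
zmprov ga [email protected] zimbraPrefOutOfOfficeDirectAddress

# remove one of the addresses again
zmprov ma [email protected] -zimbraPrefOutOfOfficeDirectAddress [email protected]
```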

Sometimes files may be filled up with null characters that look like ^@ when you open them in a text editor. This may happen when a disk becomes full, or when you rename a logfile while an application is still writing to it.

I ran into this problem today, and I fixed it using a command called ‘tr’. This utility translates or deletes characters: it reads from standard input and writes to standard output, so you can ‘pipe’ input to it and send the output to a new file. For example:

cat file.log | tr -d '\000'  > new_file.log 

Note: when using this in a script, you might need to escape that backslash.

What does this command do? The -d switch tells tr to delete characters, and a backslash followed by three 0’s is the octal escape for the null character. So the command deletes all null characters and writes the result to a new file. Problem solved!
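Here is a small self-contained demonstration of the same trick (the file names are just examples):

```shell
# create a file with two embedded null bytes
printf 'hello\000\000world\n' > /tmp/file.log

# delete the null bytes, writing the clean copy to a new file
tr -d '\000' < /tmp/file.log > /tmp/new_file.log

cat /tmp/new_file.log    # prints: helloworld
```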

At work, it is one of my tasks to make sure we keep the mailboxes of our clients free of spam. Some weeks ago the number of spam mails went up massively, and we worked hard to update the filters to keep unwanted mails out. In this blog I’ll describe a few of the things we did.

Using the famous SpamAssassin tool it is possible to score e-mails. One can score on the contents of the subject, body, headers etcetera. A lot of good rules are already supplied, and it’s possible to write your own. When a new spam run came in, we used to create new rules for the spam mails that slipped through. That works, and afterwards those mails are tagged as spam.
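As an illustration, such a custom rule in local.cf could look like this (the rule name, pattern and score are made up for the example):

```
# add 2.5 points to mails whose subject advertises an online casino
header   LOCAL_SUBJ_CASINO   Subject =~ /online casino/i
score    LOCAL_SUBJ_CASINO   2.5
describe LOCAL_SUBJ_CASINO   Subject mentions an online casino
```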

As you can imagine, this procedure is both time consuming and a bit late: only after we see mails slipping through can we create rules to catch them. Of course this procedure will always remain some sort of last resort if all else fails, but I wanted to set up something more proactive.

To start from the beginning, how is all this spam sent?
It can’t be sent from just one or a few locations, because then it’d be easy to block. Instead, most spam is sent by botnets these days. Botnets usually have hundreds of thousands of PCs under control, and one of the main things they do is send spam. For example to advertise online casinos, fake banking sites or other scams. Because there are so many infected PCs, it’s not easy to block them all. Or is it?

When thinking about this, I realized most (if not all) members of these botnets are infected Windows PCs. Also, these mails are often sent directly from the PC to the final destination mailserver (instead of through the SMTP server of their ISP).

If we could detect the OS of the client that connects to our mailserver, we could then apply certain actions based on the OS. The idea here is that most ISPs use Linux, Unix or Mac servers. And if they are using Windows, it is likely to be some Server version instead of ‘Windows Vista’, ‘Windows XP’, etcetera. Interesting!

What we want to do here is known as Passive OS Fingerprinting. A tool that implements this is, for example, P0f. You run P0f as a daemon on the mailserver that accepts the incoming connections. Based on the traffic that flows by, P0f is able to guess the OS of the client that connects. It is passive, so the client never knows we’re doing this. Nothing sits between the client and the mailserver; P0f is just analyzing the traffic. Now that P0f knows the OS of the client, we can decide what to do with this information. In our setup it works like this:

1. When the OS is Windows, but not Windows Server, activate Greylisting. When another or an unknown OS is detected, accept the mail immediately;

The idea behind this is that the spam software running on infected Windows PCs is usually poorly written. It cannot handle the 4xx-series temporary error message the mailserver sends, and most give up after just one attempt. This technique is called Greylisting, and it alone reduces the number of spam mails significantly. But Greylisting has drawbacks as well. The biggest drawback in my opinion is that it can delay mail by 30 minutes or more. Most customers we serve find this annoying.

2. At the time the connection is accepted and the mail is delivered, we set an ‘X-P0f-OS:’ header with the detected OS;

Combining Greylisting and P0f creates a more ideal solution: Windows PCs should not send mail directly to the recipient’s mailserver, but use the provider’s SMTP server instead. One could say that when such a PC is sending mails directly, it is at least suspicious. That is, in my opinion, enough reason to Greylist them. There must be some spam software that understands Greylisting (now or in the future) and that will eventually connect again after some time and deliver the mail. That’s why there is an action #3:

3. This header is consulted later on in the delivery process, and when Windows appears (again, not Windows Server), SpamAssassin adds some points to the spam score.

Mail that is sent directly from a Windows PC is suspicious to me, so the OS score helps reach the threshold SpamAssassin needs to tag it. The interesting thing is that this is proactive: you just don’t know what new mail spammers will send, but what you do know is that the next mail is probably sent by an infected Windows PC.
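Step 3 can be expressed as a regular SpamAssassin header rule. The rule name and score below are my own example values; the header name matches the one set in step 2, and the negative lookahead excludes Windows Server:

```
# add points when the P0f header reports a desktop Windows OS
header   LOCAL_P0F_WINDOWS   X-P0f-OS =~ /Windows(?!.*Server)/i
score    LOCAL_P0F_WINDOWS   1.5
describe LOCAL_P0F_WINDOWS   Mail came directly from a Windows desktop PC
```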

This setup is now up and running, so I’ll let you all know what my experiences are after some time. When I find the time, I will also write some blogs in more detail on how to setup such a system.

Any other methods you use to stop spam effectively?

I had an interesting problem lately regarding AWStats. Due to some delay, the log files weren’t processed in the right order, and AWStats then ignored all older logs. This resulted in some days being blank in the stats, and of course this is not something we want. Since we also have multiple web servers in our cluster, things started to get a bit complicated.

The log files from each of the web servers were concatenated and then split to a separate log file for each virtual host using the Apache2 split-logfile script.

The logs for an example virtual host looked like this:

1.2.2.1 - - [01/Aug/2012:05:50:50 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_04"
1.2.2.1 - - [01/Aug/2012:05:50:51 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_04"
1.2.2.1 - - [28/Jul/2012:04:02:06 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_32"
1.2.2.1 - - [28/Jul/2012:04:02:06 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_32"

As you can see, AWStats processes August 1 first and then refuses the older July records. To re-sort the log file, I ran:

cat website.unsorted.log | sort -t ' ' -k 4.9,4.12n -k 4.5,4.7M -k 4.2,4.3n -k 4.14,4.15n -k 4.17,4.18n -k 4.20,4.21n > website.log
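To break that command down: with -t ' ' the bracketed timestamp is field 4, and each -k option selects character positions within it: year (4.9-4.12), month name (4.5-4.7, sorted as month names by the M flag), day (4.2-4.3), then hour, minute and second. A quick test with the example lines confirms the order (LC_ALL=C makes the month-name sort use the English abbreviations):

```shell
# two log lines, newest first (the wrong order)
printf '%s\n' \
  '1.2.2.1 - - [01/Aug/2012:05:50:50 +0200] "GET / HTTP/1.1" 404 224 "-" "-"' \
  '1.2.2.1 - - [28/Jul/2012:04:02:06 +0200] "GET / HTTP/1.1" 404 224 "-" "-"' \
  > /tmp/website.unsorted.log

# sort on year, month, day, hour, minute, second of the timestamp field
LC_ALL=C sort -t ' ' -k 4.9,4.12n -k 4.5,4.7M -k 4.2,4.3n \
  -k 4.14,4.15n -k 4.17,4.18n -k 4.20,4.21n \
  /tmp/website.unsorted.log > /tmp/website.log

head -n 1 /tmp/website.log    # the July line now comes first
```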

As an alternative, the AWStats script logresolvemerge.pl can be used as well. Since I had already concatenated and split the log files, the sort option above was faster to implement.

Now the log file looks like this:

1.2.2.1 - - [28/Jul/2012:04:02:06 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_32"
1.2.2.1 - - [28/Jul/2012:04:02:06 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_32"
1.2.2.1 - - [01/Aug/2012:05:50:50 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_04"
1.2.2.1 - - [01/Aug/2012:05:50:51 +0200] "GET /nonexistent_page.html HTTP/1.1" 404 224 "-" "Java/1.6.0_04"

One last thing to solve was the AWStats history file. Since AWStats had already run, but with the wrongly ordered logfile, it had a wrong ‘LastLine’ setting. Experimenting with this showed it was best to remove the line and replace it with an empty line (so we won’t break the indexes). I used sed to fix it:

sed -i \
-e 's/^LastLine .*//' \
awstats072012.*
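A quick sanity check on a throwaway file shows why this works: the s/// replacement empties the line but keeps it, so the line count (and with it the indexes) stays intact. Note that on BSD/macOS, sed -i needs an explicit suffix argument:

```shell
# a miniature stand-in for an AWStats history file
printf '%s\n' 'BEGIN_GENERAL' 'LastLine 20120801054242 12345 0 0' 'END_GENERAL' \
  > /tmp/awstats_test.txt

# blank out the LastLine entry, keeping the line itself
sed -i -e 's/^LastLine .*//' /tmp/awstats_test.txt

wc -l < /tmp/awstats_test.txt    # still 3 lines
```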

AWStats now updates the stats correctly and everybody is happy! Thanks to my colleagues Pim, Vincent and Mischa because they all helped solving some pieces of the puzzle. Yes, it’s nice having some technically skilled colleagues 🙂


My mother’s iPhone had issues and was replaced by a new one. Apple handled that very well 🙂 The down side of a new iPhone is, of course, that all settings and photos are lost. Fortunately, I was smart enough to enable iCloud backups last year when I set up her iPhone.

So, when the new iPhone came in today, all I did was restore the iCloud backup in a few swipes and clicks. An hour or so later (due to downloading) everything was as before. Very cool!

I recommend everybody set up iCloud backups. You find the settings in the iCloud screen. It’s easy, and you never know…