Abstract

Postal is a mail server benchmark that I wrote. Its main components are postal, for testing the delivery of mail via SMTP; rabid, for testing mail download via POP; and a new program I have just written called bhm (for Black Hole Mailer), which listens on port 25 and sends all mail to /dev/null.

The new BHM program makes it possible to test the performance of mail relay systems, meaning outbound smart host gateway systems, list servers, and forwarding services. The testing method is to configure three machines: one running Postal to send the mail, the machine to be tested running a mail server or list server configuration, and the target machine running BHM.
The initial aim of this paper was to use artificial delays in the BHM program to simulate slow network performance and various anti-spam measures, and to measure how they impact a mail relay system. However, I found other issues along the way which were interesting to analyse and will be useful to other people.

Description of Postal

The first component of the benchmark suite is Postal, a program that sends mail at a controlled rate. When using it you have a list of addresses for senders and a separate list of recipients that will be used for sending random messages to a mail server. It sends the mail to a specified IP address to save the effort of configuring a test DNS server, because in the most common test scenario you have a single mail server that you want to test.
Postal sends at a fixed rate because in most MTAs an initial burst of mail will just go to the queue and will actually be delivered much more slowly. Often mail servers will take two minutes or more of sustained load to show the full performance impact, so I designed the programs in the Postal suite to display their results once per minute, allowing you to watch the performance of the system over time and track the system load.
The most important thing to observe is that the load (in all areas) is below 100%. If any system resource (CPU, network IO, or disk IO) is used to 100% capacity then the queue will grow without limit. Such unlimited queue growth leads to timeouts, which increase the queue further and cause the system to break down. An SMTP server has a continual load from the rest of the Internet, and if it goes more slowly the load will not decrease in the short term. So a server that falls behind can simply become unusable; an unmanaged mail server can easily accumulate a queue of messages as old as a week through not having the performance required to deliver them as fast as they arrive.

The second program in the Postal suite is Rabid, a benchmark for POP servers. The experiments I document in this paper do not involve Rabid.

The most recent program is BHM which is written as an SMTP sink for testing mail relays. The idea is that a mail relay machine will have mail sent to it by Postal and then send it on to a machine running BHM. There are many ways in which machines that receive mail can delay mail and thus increase the load on the server. Unfortunately I spent all the time available for this paper debugging my code and tracking down DNS server issues so I didn't discover much about the mail server itself.

Hardware

For running Postal I used my laptop. Postal does much less work than any other piece of software in the system so I'm sure that my laptop is not a performance bottleneck. It is, however, a 1700MHz Pentium-M and probably the fastest machine in my network.

For the mail relay machine (the system actually being tested) I used a Compaq Evo desktop machine with a 1.5GHz P4 CPU, 384M of RAM, and an 80G IDE disk.

For running BHM I used an identical Compaq system.

The network is 100baseT full duplex with a CableTron SmartSwitch. I don't think it impacted the performance; during the course of testing I did not notice any reports of packet loss or collisions.

All the machines in question were running the latest Fedora rawhide as of late September.

Preparation

To prepare for the testing I set up a server running BHM with 254 IP addresses to receive email (mail servers perform optimisations if they see the same IP address being used). The following script creates the interfaces:
for n in `seq 1 254`
  do ifconfig eth0:$n 10.254.0.$n netmask 255.255.255.0
done

Test 1, BIND and MTA on the Same Server

The script in Appendix 1 creates the DNS configuration for the 254 zones and the file of email addresses (one per zone) to use as destinations. I configured the server as a DNS server and a mail relay. A serious mail server will often have a DNS cache running on localhost, so for my purposes having primary zones under example.com configured on a DNS server on localhost seemed appropriate.
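
For reference, each zone generated by that script gets a stanza like the following in /etc/named.conf.postal (shown here for the first zone, exactly as the script in Appendix 1 prints it):
zone "a0.example.com" {
  type master;
  file "/var/named/data/a0.example.com";
};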

I initially tested with only a single thread of Postal connecting to the server. This means that there was no contention on the origin side and it was all on the MTA side. I tested Sendmail and Postfix with an /etc/aliases file expanding to 254 addresses (one per domain). All the messages had the same sender, and the message size was a random value from 0 to 10K.
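
As a minimal sketch of that alias setup (the alias name "test" is my own choice; the recipients come from the "users" file generated by the script in Appendix 1):
printf 'test: %s\n' "$(paste -sd, users)" >> /etc/aliases
newaliases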

The following table shows the mail server in use, the number of messages per minute sent through it, the amount of CPU time used by the server (from top output), and the load average.
MTA        Msgs/Minute   CPU Use   Load Average
Postfix    15            ~70%      9
Postfix    18            ~80%      9
Postfix    20            ~90%      11
Sendmail   10            ~50%      1
Sendmail   13            ~70%      2
Sendmail   15            ~95%      4.5
Sendmail   20            100%      *
Surprisingly the named process appeared to be using ~10% of the CPU at any given time when running Postfix and 25% of the CPU when running Sendmail (I am not sure why Sendmail does more DNS work - both MTAs were in fairly default Fedora configurations). As CPU was the bottleneck for this operation, it appears that having the named process on the same machine might not be a good optimisation.

When testing 15 and 20 messages per minute with Sendmail the CPU use was higher than with Postfix, and in my early tests with 256M of RAM the kernel started reporting "ip_conntrack: table full, dropping packet", which disrupted the tests by deferring connections.
The conntrack errors occur because the TCP connection tracking code in the kernel has a fixed number of entries, with the default chosen based on the amount of RAM in the system. With 256M of RAM in the test system the number of connections that could be tracked was just under 15,000. After upgrading the system to 384M of RAM there was support for tracking 24,568 connections and the problem went away. You can change the maximum number of connections by writing a new value to /proc/sys/net/ipv4/ip_conntrack_max, or for a permanent change edit /etc/sysctl.conf, add the line net.ipv4.ip_conntrack_max = NUMBER, and run sysctl -p to load the settings from /etc/sysctl.conf (see the example below). Note that adding more RAM will increase many system resource limits that affect the operation of the system.
My preferred solution to this problem is to add more RAM because it keeps the machine in a default configuration which decreases the chance of finding a new bug that no-one else has found. Also an extra 128M of RAM is not particularly expensive.
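
For reference, the commands look like this (the value 32768 is purely illustrative, not a recommendation for any particular system):
echo 32768 > /proc/sys/net/ipv4/ip_conntrack_max
# or, to make the change persist across reboots:
echo "net.ipv4.ip_conntrack_max = 32768" >> /etc/sysctl.conf
sysctl -p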

After performing those tests I decided that I needed to add a minimum message size option to Postal (to reduce the variance in the results).
I also decided to add an option to specify the list of sender addresses separately from the recipient addresses. When I initially wrote Postal the aim was to test a mail store system. So if you have 100,000 mail boxes then sending mail between them randomly works reasonably well. However for a mail relay system a common test scenario is having two disjoint sets of users for senders and recipients.

Test 2, Analysing DNS Performance

For the second test run I moved the DNS server to the same machine that runs the BHM process (which is lightly loaded, as the mail relay doesn't send enough mail to cause BHM to take much CPU time).
I then did a test with Sendmail to see what the performance would be for messages that have a size of exactly 10K for the body which are sent from 254 random sender addresses (one per domain). I noticed that the named process rapidly approached 100% CPU use and was a bottleneck on system performance. It seems that the DNS load for Sendmail is significant!

I then analysed the tcpdump output from the DNS server and saw the following requests:

IP sendmail.34226 > DNS.domain:  61788+ A? a0.example.com. (32)
IP sendmail.34228 > DNS.domain:  22331+ MX? a0.example.com. (32)
IP sendmail.34229 > DNS.domain:  4387+ MX? a0.example.com. (32)
IP sendmail.34229 > DNS.domain:  18834+ A? mail.a0.example.com. (37)
It seems that there are four DNS requests per recipient, giving a total of 1016 DNS requests per message. When 15 messages per minute are delivered to 254 recipients that works out to 254 DNS requests per second (15 × 1016 / 60 = 254), plus some extra requests (lookups of the sending IP address etc).
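
For anyone repeating this, output in the above form can be captured on the DNS server with something like the following (the interface name is an assumption for this network):
tcpdump -i eth0 port 53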

Also one thing I noticed is that Sendmail does a PTR query (reverse DNS lookup) on its own IP address for every delivery to a recipient. This added an extra 254 DNS queries per message to the total for Sendmail. I am sure that I could disable this through Sendmail configuration, but I expect that most people who use Sendmail in production would use the default settings in this regard.

Noticing that the A record is consulted first I wondered whether removing the MX record and having only an A record would change things. The following tcpdump output shows that the same number of requests are sent so it really makes no difference for Sendmail:

IP sendmail.34238 > DNS.domain:  26490+ A? a0.example.com. (32)
IP sendmail.34240 > DNS.domain:  16187+ MX? a0.example.com. (32)
IP sendmail.34240 > DNS.domain:  57339+ A? a0.example.com. (32)
IP sendmail.34240 > DNS.domain:  50474+ A? a0.example.com. (32)
Next I tested Postfix with the same DNS configuration (no MX record) and saw the following packets:
IP postfix.34245 > DNS.domain:  3448+ MX? a0.example.com. (32)
IP postfix.34261 > DNS.domain:  50123+ A? a0.example.com. (32)
The following is the result for testing Postfix with the MX based DNS configuration:
IP postfix.34675 > DNS.domain:  29942+ MX? a0.example.com. (32)
IP postfix.34675 > DNS.domain:  33294+ A? mail.a0.example.com. (37)
It seems that in all cases Postfix does less than half the DNS work that Sendmail does in this regard, and as BIND was the bottleneck this means that Sendmail can't be used. So I excluded Sendmail from all further tests.

Below are the results for the Exim queries for sending the same message. Exim didn't check whether IPv6 was supported before doing an IPv6 DNS query. I filed a bug report about this and was informed that there is a configuration option to disable AAAA lookups, but it was agreed that looking up an IPv6 entry when there is no IPv6 support on the system (or no support other than link-local addresses) is a bad idea.

IP exim.35992 > DNS.domain:  43702+ MX? a0.example.com. (32)
IP exim.35992 > DNS.domain:  7866+ AAAA? mail.a0.example.com. (37)
IP exim.35992 > DNS.domain:  31399+ A? mail.a0.example.com. (37)
The total number of DNS packets sent and received for each mail server was 2546 for Sendmail, 1525 for Exim, and 1020 for Postfix. Postfix clearly wins in this case for being friendly to the local DNS cache and for not sending pointless IPv6 queries to external DNS servers. For further tests I will use Postfix as I don't have time to configure a machine that is fast enough to handle the DNS needs of Sendmail.
Exim would also equal Postfix in this regard if configured correctly. However I am making a point of using configurations that are reasonably close to the Fedora defaults as that is similar to the common use on the net.
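
If the aim is only to stop the AAAA lookups on an IPv4-only host, a single line in the Exim main configuration should do it; I believe the relevant main option is disable_ipv6, though I have not confirmed that this is the option the maintainers referred to:
disable_ipv6 = true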

Test 3, Postfix Performance

To test Postfix performance I used the DNS server on a separate machine which had the MX records. I decided to test a selection of message sizes to determine the correlation between message size and system load.
Msg Size   Msgs/Minute   CPU Use   Load Average
10K        20            ~95%      11
0-1K       20            ~85%      7
100K       10            ~80%      6
In all the above tests there were a few messages that were not sent due to connections timing out. This seems unreasonably poor performance; I had expected Postfix in the most simplistic mailing list configuration to be able to handle more than 20 inbound messages per minute (20 × 254 = 5080 outbound messages per minute).

Test 4, Different Ethernet Card

If an Ethernet device driver takes an excessive amount of CPU time in interrupt context then that time will be billed to the user-space process that was running at the time, which can result in an application being reported as using a lot more CPU time than it really uses. To check whether that was the case I decided to replace the Ethernet card in the test system and see if that changed the reported CPU use.

I installed a PCI Ethernet card with an Intel Corporation 82557/8/9 chipset to use instead of the Ethernet port on the motherboard, which had an Intel Corporation 82801BA/BAM/CA/CAM chipset, and observed no performance difference. I did not have suitable supplies of spare hardware to test a non-Intel card.

Conclusion

DNS performance is more important to mail servers than I had previously thought. The choice and configuration of the mail server will affect the performance required from local DNS caches and from remote servers. Sendmail is more demanding on DNS servers and Exim needs to be carefully configured to match the requirements.
I am still considering whether it would be more appropriate for Exim to check for IPv4 addresses before checking for IPv6 addresses given that most of the Internet runs on IPv4 only. Maybe a configuration option for this would be appropriate.
Other mail servers will face the same issues as IPv6 increases in popularity.

The performance of 20 messages per minute doesn't sound very good, but when you consider the outbound traffic it's more impressive. Every inbound message gets sent to 254 domains, so 20 inbound messages per minute gives 5080 outbound messages per minute, or about 84.7 outbound messages per second on average, which is a reasonable number for a low-end machine. Surprisingly there was little disk IO load.

Future Work

The next thing to implement is BHM support for tar pits, grey-listing, randomly dropping connections, temporary deferrals, and generating bounce messages. This will significantly increase the load on the mail server. Administrators of list servers often complain about the effects of being tar-pitted; I plan to do some tests to estimate the performance overhead of this and determine what it means in terms of capacity planning for list administrators.

Another thing I plan to develop is support for arbitrary delays at various points in the SMTP protocol. This will be used for simulating some anti-spam measures, and also the effects of an overloaded server which will take a long time to return an SMTP 250 code in response to a complete message. It will be interesting to discover whether making your mail server faster can help the Internet at large.

Appendix 1

Script to create DNS configuration


#!/usr/bin/perl

# use:
# mkzones.pl 100 a%s.example.com 10.254.0 10.253.0.7
# the above command creates zones a0.example.com to a99.example.com, each with
# an A record for the mail server having an IP address of the form 10.254.0.X
# (where X is the zone number mod 254, plus 1) and an NS record
# with the IP address 10.253.0.7
#
# then put the following in your /etc/named.conf
#include "/etc/named.conf.postal";
#
# the file "users" in the current directory will have a sample user list for
# postal
#


my $inclfile = "/etc/named.conf.postal";
open(INCLUDE, ">$inclfile") or die "Can't create $inclfile";
open(USERS, ">users") or die "Can't create users";
my $zonedir = "/var/named/data";
for(my $i = 0; $i < $ARGV[0]; $i++)
{
  my $zonename = sprintf($ARGV[1], $i);
  my $filename = "$zonedir/$zonename";
  open(ZONE, ">$filename") or die "Can't create $filename";
  print INCLUDE "zone \"$zonename\" {\n  type master;\n  file \"$filename\";\n};\n\n";
  print ZONE "\$ORIGIN	$zonename.\n\$TTL 86400\n\@	SOA	localhost.	root.localhost. (\n";
# serial refresh retry expire ttl
  print ZONE "	2006092501 36000 3600 604800 86400 )\n";
  print ZONE "	IN NS ns.$zonename.\n";
  print ZONE "	IN MX 10 mail.$zonename.\n";
  my $final = $i % 254 + 1;
  print ZONE "mail	IN A $ARGV[2].$final\n";
  print ZONE "ns	IN A $ARGV[3]\n";
  close(ZONE);
  print USERS "user$final\@$zonename\n";
}
close(INCLUDE);
close(USERS);
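
For completeness, this is roughly how the script would be used on the DNS server for the 254-zone setup from the tests (arguments as in the usage comment above, but with 254 zones; the rndc reconfig step is my assumption - restarting named works just as well):
perl mkzones.pl 254 a%s.example.com 10.254.0 10.253.0.7
echo 'include "/etc/named.conf.postal";' >> /etc/named.conf
rndc reconfig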