Infopackets Server Failure

Dennis Faas's picture

All is not well in the land of infopackets.com.

During the last newsletter announcement, the infopackets web server crashed. My laptop was connected to the infopackets server when it happened, and I could see that the server's virtual memory (swap file) was almost completely depleted. In techy terms, the culprit behind this kind of gradual memory depletion is referred to as a memory leak.

How a Memory Leak causes an Operating System to crash

An operating system reserves a specific portion of system memory (RAM) for critical system operations. By definition, it is the job of the operating system to ensure that a program does not overstep its memory boundaries and compromise system stability.

A program launched through an operating system is also set aside its own memory, where it performs its tasks separately from the operating system. When a program runs amok, it may exhaust or overwrite memory the system depends on, which can result in a system lockup, freeze, or crash.

When I noticed that there was a problem with the web server, I attempted to restart the Apache web server program in hopes that this would halt whichever process was running wild. Unfortunately, the infopackets web server took a complete nosedive and froze up moments later.

To remedy the situation, I contacted my hosting company and had them manually reset the server. The hosting team was not able to revive the server on the first reboot, but managed to put the infopackets web server back online within 30 minutes of the crash.

Side note: Apache is a web server program which comes bundled with the RedHat Linux operating system. Apache is responsible for maintaining web server operations on the infopackets web server, while RedHat oversees the Apache processes.

In mid-December 2003, I mentioned that infopackets.com desperately needed a new web server. I have been keeping an eye on the server configurations available from my web hosting company over the last few weeks, and was finally able to purchase a setup to suit my needs over the weekend.

Originally, I planned to use the same web server management software on the new web server. Using this method, I was going to split the newsletter subscriber base in half and send the email newsletter using both web servers in parallel (i.e., at the same time). In theory, this would have cut newsletter delivery time in half. However, I realized that in the long run, this type of configuration would be inefficient.

A much better solution

The gist is that sendmail isn't optimized for sending email to a large mailing list. After giving it much thought, I've decided to use the new web server as a dedicated SMTP mail server -- designed to do nothing but process mail.

The new SMTP server will run a mail program called qmail, which is designed for large mailing lists. Unlike sendmail, qmail maintains 20 concurrent mail connections by default. With this configuration, the newsletter should reach 250,000 users in about 3.5 hours, rather than the ~24 hours it now takes.

Unfortunately, the qmail setup requires much customization. I'm hoping that the new mail server will be online by the end of this week. By that time, I hope to resume the normal newsletter delivery schedule (every Tuesday, Wednesday, Thursday).
