Far too many web site operators are surprised when their sites fail under heavy traffic. The options for avoiding this kind of problem are often confusing, poorly documented and in some cases don’t work. A lot of issues arise from a fundamental misunderstanding of the relationship between a web server and a web client. Other issues arise because sites are tested in environments that don’t resemble real world use cases.
Server administrators, IT departments and especially individual webmasters need good advice when it comes to testing the performance of their web sites, because if the site isn’t readily available to visitors, the quality of the product or service doesn’t matter. If you’re running a business online and your site doesn’t work properly, you might as well be invisible.
If you’re looking for ways to test your site to ensure maximum uptime and maximum accessibility by as many guests, readers, customers and search engines as possible, here are a few things to consider.
1. Understand the Software
A web server is not just hardware. Installed on your machine is an application called a “web server.” Its function is to “listen” on one or more network ports for connections from web clients, which are usually browsers. When a client requests a particular document, web address or protocol, the web server’s job is either to serve the request itself by sending the document across the network to the client, or to hand some or all of the request off to another application for processing.
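The listen-and-respond loop described above can be sketched with Python’s standard library. This is a deliberately minimal stand-in, not a production server: it binds to an ephemeral loopback port, then plays the client role itself so the round trip is visible in one script.

```python
# Minimal sketch of what a web server does: bind to a port, accept
# connections, and answer each request itself.
from http.server import HTTPServer, BaseHTTPRequestHandler
from threading import Thread
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the request directly by sending a document to the client.
        body = b"hello from a minimal web server\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for an unused ephemeral port, so the example
# never collides with a real service on the machine.
server = HTTPServer(("127.0.0.1", 0), Handler)
Thread(target=server.serve_forever, daemon=True).start()

# Now act as the web client: request a document and read the response.
url = "http://127.0.0.1:%d/" % server.server_port
response = urlopen(url).read().decode()
server.shutdown()
```

A real web server does the same dance at scale, with many simultaneous connections and the option of handing requests off to interpreters or middleware instead of answering directly.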
Because of the nature of web server software, the first step to understanding performance is to recognize that anything degrading the performance of any software on your server will also degrade the performance of the web server itself. A good example is available RAM. If your server is light on available RAM, the web server will slow down; and if you also run databases, script interpreters, middleware, spawned processes or other kinds of processes, you will experience less than optimal performance regardless of the nature of your site or how well your system is optimized outside of the web server process itself.
2. Crash It
Some of the most profound technological advances were the result of failure. The reason failure is so important in the testing process is that it instantly provides you with a long list of facts and at least one provable conclusion. It tells you “under these circumstances, our web site will fail.”
From there, it is often a much easier task to formulate protective measures to avoid future failures. Far easier, for example, than diagnosing a system where everything seems to work and you have no idea why. When in doubt, always fall back on the server administrator’s first rule: “If everything is working properly, something is broken. You just haven’t found it yet. Keep looking.” This is especially true when testing WordPress websites, according to Web Hosting Buddy.
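One way to sketch the crash-it methodology is a ramp-to-failure loop: keep doubling the load until the site breaks, then record the breaking point as a hard fact. The `CAPACITY` value and the `site_survives` function below are hypothetical stand-ins; in a real test you would drive an actual load generator against a staging copy of the site.

```python
# Sketch of a ramp-to-failure test. CAPACITY stands in for a real
# server's limit; site_survives() stands in for "run the load test
# and check for errors or timeouts".
CAPACITY = 250  # hypothetical maximum concurrent requests

def site_survives(concurrent_requests):
    # Placeholder for a real load-test run against a staging site.
    return concurrent_requests <= CAPACITY

load = 1
while site_survives(load):
    load *= 2  # double the pressure each round until something breaks

# The provable conclusion the article describes:
print("Under %d concurrent requests, our web site will fail." % load)
```

Once you have the breaking point, you can bisect between the last surviving load and the failing one to find the exact threshold worth engineering around.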
3. Simulate Failovers
When the main server crashes, your backup server has to take over instantly, and that transition is a key performance metric. The reason is simple: if your failover server doesn’t handle the transition properly, it can lead to hangups, disconnected processes and garbage-collection issues; if the timing is wrong, it can lead to a crash. For every mainline test, you should run at least one failover test for comparison.
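The timing measurement itself can be sketched as follows. The two “servers” here are stand-in functions (a real test would hit real hosts), and the 50 ms sleep is a hypothetical stand-in for detection plus backup connection time; the point is that the failover window is something you clock, not something you assume.

```python
import time

# Simulate the main server having crashed.
PRIMARY_UP = False

def primary():
    # Stand-in for a request to the main server.
    if not PRIMARY_UP:
        raise ConnectionError("primary is down")
    return "primary response"

def backup():
    # Stand-in for detection time plus the backup answering.
    time.sleep(0.05)
    return "backup response"

start = time.monotonic()
try:
    result = primary()
except ConnectionError:
    result = backup()
failover_seconds = time.monotonic() - start

print("served by %s; failover took %.3f s" % (result, failover_seconds))
```

Recording `failover_seconds` for every failover drill gives you the comparison baseline the section calls for: a transition that suddenly takes twice as long is a warning even if it still succeeds.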
4. Unplug Optional Software
Performance testing relies almost entirely on comparisons. If you can, you should test your site with no add-ons running, then add optional plug-ins, third-party modules and so forth one at a time. Once you’ve climbed the mountain, so to speak, you should perform the same test in reverse. Start with everything turned on, and then turn them off one by one.
Occasionally you will find an add-on that either causes a failure at startup, or more likely fails to shut down properly. The goal is seamless upgrades and downgrades. If an add-on isn’t behaving properly, then you may get better performance by deactivating it.
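A sketch of the comparison harness might look like this. The add-on names, their costs and `measure_response_ms` are all hypothetical stand-ins for real timing runs; the structure is what matters: establish a bare baseline, then charge each add-on for its marginal cost so the misbehaving one stands out.

```python
# Sketch of isolating a misbehaving add-on by measuring each one's
# marginal cost against a bare baseline.
def measure_response_ms(enabled_addons):
    # Stand-in for a real timing run; one hypothetical plug-in
    # ("analytics") costs far more than the rest.
    cost = {"cache": 2, "gallery": 5, "analytics": 40}
    return 100 + sum(cost[a] for a in enabled_addons)

baseline = measure_response_ms([])          # nothing enabled
addons = ["cache", "gallery", "analytics"]

# Marginal cost of each add-on relative to the bare baseline.
marginal = {a: measure_response_ms([a]) - baseline for a in addons}
worst = max(marginal, key=marginal.get)

print("slowest add-on: %s (+%d ms)" % (worst, marginal[worst]))
```

Running the same loop in reverse, starting with everything enabled and disabling one at a time, catches the other failure mode the section mentions: add-ons that misbehave on shutdown rather than startup.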
5. Local vs. Remote
If you have a standardized test suite, run it locally on the web server, then run it remotely from another machine and force the suite to transmit results across the network. Compare the two sets of numbers. You just might discover one of those nagging and hard-to-replicate network errors that will be much easier to fix on a day when the whole world isn’t trying to get to your site. From there, you can expand the same procedure to the non-web software on your server.
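The comparison step can be sketched as below. The timing samples are hypothetical; in practice they would come from your test harness. Comparing medians rather than means keeps one bad sample from hiding in an average, and the outlier check surfaces exactly the kind of hard-to-replicate network hiccup the section describes.

```python
from statistics import median

# Hypothetical samples: the same suite run on the server itself,
# then across the network from another machine.
local_ms  = [12, 11, 13, 12, 14]
remote_ms = [45, 44, 250, 46, 47]

# Typical network overhead, using medians to resist outliers.
overhead = median(remote_ms) - median(local_ms)

# Remote samples wildly above the remote median are the nagging,
# hard-to-replicate errors worth chasing on a quiet day.
outliers = [t for t in remote_ms if t > 3 * median(remote_ms)]

print("median network overhead: %d ms" % overhead)
print("suspicious remote samples: %s" % outliers)
```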
6. Test Other Services
The problem with focusing only on your web server is that you might have degraded network performance and not know it because every test you run is getting the exact same degraded result. Later, when you run the same test on a different protocol like SMTP, TLS or FTP, suddenly you might notice a spike in disk activity, a RAM bottleneck, a CPU spike, etc. While these problems may not always affect your web server right away, if the site tries to field requests for multiple protocols, those performance hits might start accumulating at the wrong time and make your site unstable.
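A quick way to start testing the other services on a host is a reachability sweep. The sketch below checks whether anything is listening on a given port; to stay self-contained it probes a throwaway local listener rather than real SMTP or FTP daemons, and a real sweep would use your actual hostnames and a service-to-port map of your own.

```python
import socket

def reachable(host, port, timeout=1.0):
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in for a real service: a throwaway listener on an ephemeral port.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
probe.listen(1)
open_port = probe.getsockname()[1]

up = reachable("127.0.0.1", open_port)    # service is listening
probe.close()
down = reachable("127.0.0.1", open_port)  # nothing listening now

print("while listening: %s, after shutdown: %s" % (up, down))
```

A sweep like this only proves a port answers; per-service load tests are still needed to spot the disk, RAM and CPU spikes the section describes.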
7. Use Every Client
If you’ve never accessed your web site through telnet, you’re in for a surprise. Can wget download from your site faster than the leading web browser? How about elinks? Or curl? Or command-line SFTP or FileZilla? If a client that can’t understand HTTPS tries to reach a URL, does your site need unusual amounts of time, RAM or CPU to field the request? If so, you just might have uncovered one of those stealth errors that doesn’t show up until game day, when the king of the geeks wants to read your site over telnet.
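Speaking HTTP by hand, exactly as you would in a telnet session, can be sketched in Python’s standard library. To stay self-contained the script stands up its own tiny local server first; against a real site you would point the raw socket at your actual host and watch how it copes with a bare-bones client.

```python
import socket
from http.server import HTTPServer, BaseHTTPRequestHandler
from threading import Thread

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Throwaway local server so the example is self-contained.
server = HTTPServer(("127.0.0.1", 0), Handler)
Thread(target=server.serve_forever, daemon=True).start()

# Compose the request by hand, telnet-style.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: 127.0.0.1\r\n"
    "Connection: close\r\n"
    "\r\n"
)
with socket.create_connection(("127.0.0.1", server.server_port)) as sock:
    sock.sendall(request.encode())
    raw = b""
    while chunk := sock.recv(4096):
        raw += chunk
server.shutdown()

status_line = raw.split(b"\r\n", 1)[0].decode()
print(status_line)
```

A client this primitive has no caching, no compression negotiation and no redirect handling, which is exactly why it exposes behavior the leading browsers quietly paper over.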
8. Automated Testing with Loadview
At least some of your testing should happen outside of whatever systems you control, if for no other reason than to get results from machines that might more closely resemble those of your future audience. A service like Loadview is a good option, since it is not only automated but runs remotely on another network.
The configuration options cover almost all possible use cases including many browsers, scripting environments and mobile clients.
9. Test Applications on all Tiers
If your site only serves text and images, this isn’t as important. If your site runs middleware, a database or other server-side applications like user-matching, lobbies or virtual servers, however, you need to test your site under conditions where all those tiers are accessed. While problems with your server-side middleware and databases may not be solvable through performance testing alone, they can certainly be identified by it.
10. Document the Process
It is vitally important that any formalized testing process capture as much data as possible, and further, that data be collated in a standardized format so it can be properly analyzed. Simply scratching out a few notes while the test suite runs isn’t going to be adequate. It is better if your test suite can produce this documentation independently, but even if it can, the people involved need to make their own notes.
Unless you have a record of what you are trying to accomplish, you’re likely to either misinterpret or lose track of key data no matter how good your testing process might be.
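One lightweight way to get a standardized, analyzable record is to have the suite emit every result as a row in a fixed schema. The field names and sample values below are illustrative stand-ins, and the sketch writes to an in-memory buffer so it is self-contained; a real harness would write one CSV file per test run.

```python
import csv
import io
from datetime import datetime, timezone

# Illustrative schema: one row per test result, always the same fields.
FIELDS = ["timestamp", "test", "client", "latency_ms", "result", "notes"]

def record(writer, test, client, latency_ms, result, notes=""):
    """Append one standardized result row."""
    writer.writerow({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test": test,
        "client": client,
        "latency_ms": latency_ms,
        "result": result,
        "notes": notes,
    })

buf = io.StringIO()  # stands in for a per-run results file
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
record(writer, "homepage-load", "curl", 42, "pass")
record(writer, "homepage-load", "wget", 55, "pass", "slower over IPv6?")

# Reading the rows back is how later analysis and comparison start.
rows = list(csv.DictReader(io.StringIO(buf.getvalue())))
print("%d results recorded" % len(rows))
```

The `notes` column is where the human observations the section insists on belong, right next to the numbers they explain, so neither gets lost.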
Performance testing is part of that elusive and highly valuable category of information called “institutional knowledge.” The more you know about your own systems and how they perform in the real world, the more information you have upon which to base your future decisions about upgrades, development and improvements.