Benchmarking

Introduction

I was interested in getting some comparative benchmark figures for the various thin clients I had. As I often use them as web servers it seemed appropriate to find (or write) a simple bit of software to fulfil this function. A quick look around with Google turned up torture.pl written by Lincoln D Stein. The core of this did exactly what I wanted - it made multiple requests for web pages from a server and recorded the response time.

To (partially) quote from Lincoln's description:

"A few years ago I wrote a small Perl script called "torture.pl" whose purpose in life is to inflict pain and suffering on hapless Web servers. It sends servers increasing amounts of random data at increasingly shorter intervals until they either crashed or slowed to the point of unusability.

...This script has two functions. First, it can be used to test the speed and responsiveness of a Web server. Second, the script can be used to test the stability and reliability of a particular Web server.

When used for performance testing, you can measure the speed and response time of your Web servers, CGI scripts, and other Web enhancements. Although torture.pl isn't rigorously normalized for cross-server comparisons the way the WebStone metric is, it's good for measuring changes on a single Web server."

I had no interest in the stability/reliability aspect, just the performance aspect. My intention was to use it to test a particular Linux+Web Server+Website build across different thin clients, or different Linux builds on the same hardware.

If you are interested in the guts of the Perl script I suggest you read Lincoln's original article. It used to be at http://stein.cshl.org/~lstein/torture/torture.html, but had vanished by Jan 2012. As an alternative, I found a copy in "The Perl Journal", issue #8, Winter 1997.

WebServerTest

My version is invoked like this:

WebServerTest.pl -[options] (URL | FILE)

Starting at the end of the command line: you must either specify a single URL or give it the name of a file. If you give it the name of a file then that file should be a simple text file containing the name (or IP address) of the website to check, followed by a list of URLs to check. For example:

 192.168.10.6
 thin/
 thin/EvoT20/Linux.shtml
 .....

In the former case, where you just gave it a single URL, the performance test is conducted by retrieving only that URL. In the latter case, the test script randomly selects web pages from the list supplied in the file. I generate such a list by 'cd'ing into the document root of my webserver and then doing:

  find thin -name "*shtml" -print >TestList.txt

...which generates a list of all the web pages in this section of my website. After a few seconds' work with an editor you have a suitable file to drive the test.
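
To make the file handling concrete, here is a minimal sketch in Perl of reading such a list - host on the first line, paths on the remaining lines - and picking a page at random. It is an illustration of the idea only, not an extract from WebServerTest.pl, and the file name and URL scheme are assumptions:

  #!/usr/bin/perl
  # Sketch only - not the actual WebServerTest.pl code.
  use strict;
  use warnings;

  my $file = shift @ARGV or die "usage: $0 TestList.txt\n";
  open my $fh, '<', $file or die "Cannot open $file: $!\n";
  chomp( my $host  = <$fh> );      # first line: host name or IP address
  chomp( my @paths = <$fh> );      # remaining lines: paths to request
  close $fh;

  # Build full URLs and pick one at random, as the test does for each request.
  my @urls = map { "http://$host/$_" } @paths;
  my $url  = $urls[ int rand @urls ];
  print "Would request: $url\n";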

The options supported by WebServerTest.pl are:

Option          Description                               Default
-t <integer>    Number of times to run the test           1
-c <integer>    Number of copies of the program to run    1
-d <integer>    Mean delay between serial accesses        0 ms
-h <name>       Name or IP address of host                (overrides any entry in URL or FILE)
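
As a rough illustration of how the options in the table might map onto code, here is a sketch using Getopt::Std with the defaults shown above. This is an assumption about the implementation, not an extract from the real script:

  use strict;
  use warnings;
  use Getopt::Std;

  my %opt;
  getopts( 't:c:d:h:', \%opt );

  my $times  = defined $opt{t} ? $opt{t} : 1;   # -t  times to run the test
  my $copies = defined $opt{c} ? $opt{c} : 1;   # -c  copies of the program
  my $delay  = defined $opt{d} ? $opt{d} : 0;   # -d  mean delay in ms
  my $host   = $opt{h};                         # -h  overrides URL/FILE host

  my $target = shift @ARGV
      or die "usage: $0 [-t n] [-c n] [-d ms] [-h host] (URL | FILE)\n";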

Taking the options in turn:

-c specifies how many copies of the script to run - the script will spawn that number of copies of itself. Use a figure of, say, 20 and it will look like there are 20 people simultaneously requesting web pages (a sketch of how this spawning might work follows these descriptions).

-t specifies the number of times to run the test. The default is that each copy of the script will only request a single page.

-d lets you ask the script to insert a delay between its requests; otherwise, once one web page has been retrieved, it immediately asks for the next one.

-h lets you override the hostname that is specified at the beginning of the file. This is useful in my scenario where I have instances of the same webserver on different hardware and my DHCP server has assigned them different IP addresses. It means I can still use the same TestList.txt file without having to edit it to change the server address.
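
The spawning behind -c (and the pause behind -d) could look something like the following sketch, using fork and LWP::Simple. It is an assumed reconstruction rather than the real code, and the host, counts and URL are only example values:

  use strict;
  use warnings;
  use LWP::Simple qw(get);
  use Time::HiRes qw(sleep);

  my ( $copies, $times, $delay_ms ) = ( 20, 50, 0 );   # as if -c 20 -t 50
  my @urls = ('http://192.168.10.12/thin/');           # would come from TestList.txt

  for ( 1 .. $copies ) {
      my $pid = fork();
      die "fork failed: $!" unless defined $pid;
      next if $pid;                    # parent: carry on spawning children
      for ( 1 .. $times ) {            # child: fetch -t pages then exit
          get( $urls[ int rand @urls ] );
          sleep( $delay_ms / 1000 ) if $delay_ms;   # -d pause between requests
      }
      exit 0;
  }
  wait() for 1 .. $copies;             # parent waits for every child to finish

With no -d delay each child requests pages back-to-back, which is why the concurrency figure in the example run below sits close to the number of copies.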

An example run of the program:

[thin@mantaray ~] ./WebServerTest -t 50 -c 20 -h 192.168.10.12 TestList.txt
** WebServerTest.pl version 1.00 starting at Tue Aug 31 09:34:06 2010
Tests run on Host: 192.168.10.12 50 times with 20 copies and average delay of 0 seconds
Transactions:           1000
Elapsed time:           8.875 sec
Bytes Transferred:      8261130 bytes
Response Time:          0.17 sec
Transaction Rate:       112.68 trans/sec
Throughput:             930880.74 bytes/sec
Concurrency:            19.6
Status Code 200:        1000
** WebServerTest.pl version 1.00 ending at Tue Aug 31 09:34:15 2010

Here we're simulating 20 users each requesting 50 random pages from the web server at 192.168.10.12.
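
The summary figures appear to hang together in the obvious way (allowing for rounding in the displayed elapsed and response times): the transaction rate is transactions divided by elapsed time, the throughput is bytes transferred divided by elapsed time, and the concurrency sits just under the 20 copies requested because, with no -d delay, each copy almost always has a request outstanding:

  Transaction Rate ≈ 1000 transactions / 8.875 s  ≈ 112.7 trans/sec
  Throughput       ≈ 8261130 bytes / 8.875 s      ≈ 0.93 Mbytes/sec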

Obviously something like this needs to be used intelligently - the host from which you are running the test should be up to the job and you may need to consider exactly what the connectivity is between it and the server(s) under test.

The Perl script can be viewed as a web page or downloaded from here.

 


Any comments? email me.    Last update August 2010