Starman vs. uWSGI: PSGI server performance comparison

This benchmark is about Perl PSGI web application servers, but the story starts with Python. I needed to deploy a Python-based web application on a server running nginx. I was not fully up to date on the current WSGI application server offerings, so I searched for comparisons. I found a comprehensive WSGI server benchmark by Nicholas Piël which was very interesting, although already slightly dated.

I noticed that uWSGI looks like a good, modern application server which integrates nicely with nginx. And not only for running Python apps: it can run Perl PSGI/Plack apps as well! That made me very interested.

Earlier PSGI server benchmarks

I searched the interwebs for PSGI/Plack server performance comparisons but did not find any that included uWSGI. It looked like the benchmark in the Starman documentation was the only one available, and it is quoted in many places. It declares Starman the fastest PSGI server:

— server: Starman (workers=10)
Requests per second: 6849.16 [#/sec] (mean)
— server: Twiggy
Requests per second: 3911.78 [#/sec] (mean)
— server: AnyEvent::HTTPD
Requests per second: 2738.49 [#/sec] (mean)
— server: HTTP::Server::PSGI
Requests per second: 2218.16 [#/sec] (mean)
— server: HTTP::Server::PSGI (workers=10)
Requests per second: 2792.99 [#/sec] (mean)
— server: HTTP::Server::Simple
Requests per second: 1435.50 [#/sec] (mean)
— server: Corona
Requests per second: 2332.00 [#/sec] (mean)
— server: POE
Requests per second: 503.59 [#/sec] (mean)

I needed to have a look at this myself and compare Starman to uWSGI.

Test machine setup

The test environment consisted of:

  • Server: QEMU/KVM-based virtual machine with 4 processors and 768 MB of memory, running the 64-bit version of Ubuntu 12.04.
  • Client/load generator: another virtual machine in the same physical box, running Debian sid.

The virtual machines were set up with virtio network drivers and interconnected with a Linux bridge on the host.

Linux kernel TCP tweaks
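
High-concurrency benchmarks quickly hit the kernel's default connection limits, so the kernel TCP settings were tuned for the tests. The tweaks were along these lines (a sketch; the specific parameters and values shown here are assumptions, not the exact settings used):

    # Illustrative sysctl settings -- not the exact values used
    # Allow a longer accept queue on listening sockets
    net.core.somaxconn = 4096
    net.ipv4.tcp_max_syn_backlog = 4096
    # Widen the ephemeral port range and reuse TIME_WAIT sockets sooner
    net.ipv4.ip_local_port_range = 1024 65535
    net.ipv4.tcp_tw_reuse = 1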

The test application

The web application used in this benchmark was the simplest possible “hello world” implementation in PSGI, essentially the canonical example:
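
    # hello.psgi -- the canonical PSGI "hello world"; the file name is an
    # assumption and the response body text is immaterial to the benchmark
    my $app = sub {
        my $env = shift;
        return [
            200,
            [ 'Content-Type' => 'text/plain' ],
            [ 'Hello World' ],
        ];
    };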

Starman configuration

An upstart job along the following lines was used to start five Starman workers (the application path, listen address and user in this sketch are assumptions):
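
    # /etc/init/starman.conf -- a sketch; the application path, listen
    # address and user are assumptions
    description "Starman PSGI application server"

    start on runlevel [2345]
    stop on runlevel [!2345]
    respawn

    setuid www-data
    exec /usr/bin/starman --listen 127.0.0.1:5000 --workers 5 /srv/www/hello.psgi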

The nginx front-end was configured to talk to Starman roughly as follows (a sketch; the upstream address matches the Starman listen address assumed above):
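
    # Sketch of the relevant nginx server block
    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:5000;
            proxy_set_header Host $host;
        }
    }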

uWSGI configuration

uWSGI was configured to match the Starman configuration, with five workers and a listen backlog of 1000 (which is the Starman default). The configuration was roughly as follows (a sketch; the socket address and file paths are assumptions):
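
    ; /etc/uwsgi/apps-enabled/hello.ini -- a sketch; the socket address
    ; and application path are assumptions
    [uwsgi]
    plugins = psgi
    psgi = /srv/www/hello.psgi
    master = true
    processes = 5
    listen = 1000
    socket = 127.0.0.1:5001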

Nginx has native support for the uwsgi binary protocol, which was used over TCP. The relevant location block looked roughly like this (a sketch; the socket address matches the uWSGI configuration above):
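
    # Sketch of the relevant nginx location block; uwsgi_modifier1 5 selects
    # the PSGI request handler
    location / {
        include uwsgi_params;
        uwsgi_modifier1 5;
        uwsgi_pass 127.0.0.1:5001;
    }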

Load generator

I used a patched version of weighttp for testing. I do not like ab (ApacheBench) because it does not recognize failed responses (nginx returns HTTP error code 500 when it is unable to speak to the upstream server). I do not like httperf because it does not scale to high concurrency (it uses select() and thus has a limit of 1024 open connections on Linux).

There is also a problem with weighttp: it includes unsuccessful requests in the “requests per second” statistics. That ruins the results when there are failures, because nginx is very fast at serving error pages. I therefore patched weighttp to exclude failed requests from the requests-per-second figure.

I used keep-alive connections between weighttp and nginx (weighttp option -k) because I did not want to introduce extra overhead from the extra TCP handshakes.
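
Each run invoked weighttp at a fixed concurrency level, roughly like this (the concurrency steps, thread count, request count and target address in this sketch are illustrative):

    # -k keep-alive, -t client threads, -c concurrent connections, -n requests;
    # the concurrency steps and target address are illustrative
    for c in 10 50 100 250 500 1000; do
        weighttp -k -t 4 -c $c -n 100000 http://192.168.122.10/
    done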

Software versions

All the server-side software was as distributed in Ubuntu 12.04 (64-bit) at the time of writing, except for nginx, which was installed from the nginx stable PPA on Launchpad. The versions are listed below:

  • uwsgi: 1.0.3+dfsg-1ubuntu0.1
  • starman: 0.2014-1
  • nginx-full: 1.2.1-1ubuntu0

Starman vs. uWSGI

Enough introduction; let's get to the point. I fired off weighttp from a script with increasing levels of concurrency and got the following results:

uWSGI vs. Starman performance (SVG graph)

uWSGI was able to serve roughly three times as many requests per second as Starman!

Another important thing to consider is the error rate. Here is a graph of the error rate with Starman:

Starman errors (SVG graph)

There is a significant number of page load errors with Starman when the load gets too high. Not too good. How about uWSGI?

uWSGI errors (SVG graph)

There were zero page load errors with uWSGI. It always served the page that the client was looking for. Much better.

Let’s look at the server average CPU utilization next, Starman first:

Starman CPU utilization (SVG graph)

And then uWSGI:

uWSGI CPU utilization (SVG graph)

Looks like there is not too much difference in the average CPU consumption.

The following table shows the memory consumption (resident set size) of the server processes after the tests were run:

Server     kB per master    kB per worker    # of workers    Total MB
Starman    9396             8916             5               52.7
uWSGI      4836             2708             5               17.9

uWSGI uses just a third of the memory compared to Starman.

Conclusions

uWSGI clearly outperforms Starman in every aspect covered in this benchmark.

But that is not all. There are some extra benefits from using uWSGI:

  • It can run Python/WSGI and PHP applications as well. There is no need to have a separate application web server for each different language.
  • uWSGI packaging in Ubuntu/Debian is very nicely done. There is no need to write init scripts or upstart jobs for bringing up servers. You just link a configuration file in /etc/uwsgi/apps-enabled/ and the app starts and stops using the normal system daemon startup mechanisms.
  • uWSGI integrates very nicely with nginx using the high-performance uwsgi binary protocol.
  • uWSGI is built to be robust: in this benchmark it served every request without errors, even under heavy load.
  • uWSGI can do many other things as well. Look at the uWSGI documentation for further details.

I can warmly recommend uWSGI for your PSGI (and other) web application serving needs!

What are your experiences with Starman and uWSGI? Are there any other modern, well-performing PSGI or multi-platform application servers that I have overlooked? Feel free to comment below.