The old filesystem benchmarks with more naive tests on different (slightly higher spec) hardware are still available here.
This page presents the results of some performance testing I have done on Linux filesystems. The tests were done with a very specific application in mind --- web caching --- and will only apply to similar applications where there are a large number of small files and considerable activity.
The test server was a Dell PowerEdge 2450 with two PIII CPUs running at 600MHz and 512Mbytes of memory. The disk used for the benchmarking was a Seagate ST318404LC. The partition was approximately 7Gbytes in size and positioned towards the end of the disk.
The system was running Gentoo Linux with kernel version 2.4.24+XFS patches. The benchmarks were wrapped up into a simple script to make retesting simpler. Two standard benchmarks were chosen ... Postmark and Bonnie (not Bonnie++) ... and two custom (and very simplistic) benchmarks were added.
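The wrapper script itself was not published, so this is only a dry-run sketch of what such a script might look like; the device name, mount point, and the `run-benchmarks.sh` helper are all assumptions.

```shell
#!/bin/sh
# Hypothetical reconstruction of the benchmark wrapper.
# DEV, MNT and run-benchmarks.sh are assumptions -- the original script
# was not published. DRYRUN=echo just prints each command; clear it to
# actually run them (requires root and a dedicated scratch partition).
DEV=/dev/sdb1
MNT=/mnt/bench
DRYRUN=echo

out=$(
for fs in ext2 ext3 jfs reiserfs xfs; do
    $DRYRUN mkfs -t "$fs" "$DEV"           # fresh filesystem each pass
    $DRYRUN mount -t "$fs" "$DEV" "$MNT"
    $DRYRUN sh run-benchmarks.sh "$MNT"    # tar/rm, Postmark, Bonnie
    $DRYRUN umount "$MNT"
done
)
printf '%s\n' "$out"
```

Re-creating the filesystem on each pass keeps the runs comparable, since every filesystem starts from an empty, unfragmented partition.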
The first custom benchmark was to measure the elapsed time taken to extract a tar archive which was 3.1Gbytes in size and contained 250,000 files (it was pulled off an active web cache). The second was the elapsed time taken to remove the produced files.
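At a small scale the two custom measurements can be sketched like this; the toy 100-file tree is only there to show the method and stands in for the real 3.1Gbyte, 250,000-file cache archive.

```shell
#!/bin/sh
# Sketch of the two custom benchmarks: elapsed time to extract a tar
# archive, then elapsed time to delete the extracted files. The toy
# tree stands in for the real 3.1Gbyte / 250,000-file cache dump.
set -e
work=$(mktemp -d)
mkdir -p "$work/tree"
i=1
while [ "$i" -le 100 ]; do
    echo "cache object $i" > "$work/tree/file$i"
    i=$((i + 1))
done
tar -C "$work" -cf "$work/tree.tar" tree
rm -rf "$work/tree"

t0=$(date +%s)
tar -C "$work" -xf "$work/tree.tar"        # benchmark 1: extract
t1=$(date +%s)
extracted=$(ls "$work/tree" | wc -l)
rm -rf "$work/tree"                        # benchmark 2: delete
t2=$(date +%s)
echo "extract: $((t1 - t0))s  delete: $((t2 - t1))s  files: $extracted"
rm -rf "$work"
```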
Postmark was run as follows :-

    set number 5000
    set transactions 5000
    run
With the default values for these parameters, the results were flat (as in the previous benchmark test). Postmark is supposedly slanted towards the demands placed on mail server queues (which are somewhat similar to the demands on web caches).
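Postmark reads its commands from standard input, so a run like the one above can be scripted non-interactively. This is a config fragment, not a runnable test: the `postmark` binary name and the `set location` line (pointing it at the mounted test partition) are assumptions.

```shell
# Config fragment -- drives a Postmark run non-interactively.
# The 'postmark' binary name and the /mnt/bench location are assumptions.
postmark <<'EOF'
set location /mnt/bench
set number 5000
set transactions 5000
run
quit
EOF
```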
The final benchmark (Bonnie) is a more conventional look at filesystem speed, and didn't differentiate between the different filesystems as well (with the exception of JFS).
|Filesystem|Extract (sec)|Delete (sec)|Postmark transactions per second|
What follows are the combined Bonnie results :-
                    ---Sequential Output (nosync)--- ---Sequential Input-- --Rnd Seek-
                    -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --04k (03)-
    Filesystem   MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
    Ext2     1*1024  7447 99.4 36104 23.9 15075 17.5  6885 89.5 31637 20.6 316.1  3.1
    Ext3     1*1024  6999 99.9 32104 41.3 14620 18.2  6826 88.9 30002 19.4 276.9  2.1
    JFS      1*1024   861 12.0 34565 27.7 16096 18.5  6783 88.4 31469 21.0 301.5  2.9
    ReiserFS   1024  6620 95.4 27330 40.4 12885 15.9  6642 87.1 20311 14.7 284.0  3.0
    XFS      1*1024  7393 99.6 36626 24.6 15592 18.3  6762 88.1 30380 20.3 293.0  1.8
The clear winner for speed, judging across all the results, is Ext2, and the obvious loser is JFS. In fact JFS is so astonishingly bad that again I'm suspicious of the results ... other people's benchmarks produce quite different results. The only explanation I can think of is that JFS is particularly bad at small files, or possibly that it journals the data as well as the metadata ... in which case it would be slow but very safe.
Where Ext2 fails (although not shown here) is for checking a filesystem (fscking) where it can be frighteningly slow on large filesystems. For this reason, I almost always discount using it in favour of a logging filesystem.
The three journalling filesystems in the running are all fairly close, with each winning a different benchmark. However, ReiserFS places 2nd in the benchmarks it doesn't win, which makes it probably the overall winner for speed (excluding Ext2).