Hello folks!

I would like some advice on choosing the right filesystem for optimum
performance in a certain class of tasks, and then on tuning the
filesystem and the cache for that task.

I'll be building a server mainly for static storage of data. The data
itself won't be accessed very often, and in any case the speed of reading
and writing it will be limited by the network bandwidth. However,
there are basically two operations that need to be as fast as possible:
"ls -lR" and "rm -fR", i.e. listing the contents of directories
including file metadata, and unlinking/removing files and directories.
There can be lots of files (i.e., the data will likely not be "a few
large files", but rather "many small(ish) files").

So, the question is: assuming that "ls -lR" and "rm -fR" are
time-critical and everything else is irrelevant, which filesystem is
best? Journalling is, of course, preferred, but if e.g. ext2 is
significantly faster at this than any journalling fs, it would probably
win anyway.
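For what it's worth, here is the kind of crude timing harness I have in
mind for comparing candidates - run once per filesystem, with TESTDIR
pointing at a mount of that filesystem. The file count and layout are
placeholders I made up, not a model of the real data:

```shell
# Hypothetical benchmark sketch; adjust counts/paths to the real workload.
TESTDIR=${TESTDIR:-/tmp}/lstest
mkdir -p "$TESTDIR"

# Populate: 100 directories x 100 small files each (10,000 files total).
for d in $(seq 1 100); do
    mkdir "$TESTDIR/d$d"
    for f in $(seq 1 100); do
        echo x > "$TESTDIR/d$d/f$f"
    done
done
sync

time ls -lR "$TESTDIR" > /dev/null   # metadata listing
time rm -rf "$TESTDIR"               # unlink everything
```

Ideally one would also drop caches (or remount) between the populate and
the "ls -lR" step, so the listing measures cold-cache metadata reads.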

Further, are there any kernel or filesystem parameters that can be tuned
for this kind of optimization? Is there a way to prefer caching of
metadata over data, for example? (Or even turn off caching of data
completely, to ensure there is always space in the cache for the metadata?)
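The closest knob I've found so far is vm.vfs_cache_pressure in 2.6,
which biases reclaim away from the dentry/inode caches. The value below
is only a guess on my part, not a tested recommendation:

```shell
# Default is 100; lower values make the 2.6 VM retain dentry/inode
# caches more aggressively relative to the page cache.  0 means "never
# reclaim them", which risks exhausting memory.  50 is an arbitrary
# starting point for experimentation.
echo 50 > /proc/sys/vm/vfs_cache_pressure
```

But I don't know whether this is the right tool, or whether something
stronger exists for deprioritizing data caching entirely.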

Is there anything new in 2.6 that might help me here?

Thanks for any advice!

Vaclav Dvorak