[Nottingham] 1GB of \0 (aka null)

Graeme Fowler graeme at graemef.net
Mon Mar 27 22:33:12 BST 2006


On Mon, 2006-03-27 at 22:04 +0100, Martin wrote:
<snip>
> I'll let someone else graph up the numbers if interested to see how bs 
> affects the system, user and elapsed time. In short, the bigger the bs 
> gives the quicker the times provided that you have the memory for it.

http://www.graemef.net/dd-timing.jpg
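
For anyone wanting to reproduce numbers like these, a loop along the
following lines does the job (a sketch, not Martin's actual commands -
the 16 MiB total, the /tmp path and the bs range are my assumptions):

```shell
#!/bin/sh
# Write the same 16 MiB of NULs at doubling block sizes; dd's own
# final stderr line reports bytes copied and the elapsed time per run.
# (Sizes and output path are assumptions, not from the original post.)
total=$((16 * 1024 * 1024))
bs=512
while [ "$bs" -le 1048576 ]; do
    printf 'bs=%d: ' "$bs"
    dd if=/dev/zero of=/tmp/dd-test.bin bs="$bs" count=$((total / bs)) 2>&1 | tail -n 1
    bs=$((bs * 2))
done
rm -f /tmp/dd-test.bin
```

Wrapping each dd in time(1) instead would give the separate user and
system figures Martin mentioned.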

Using log scales for both axes shows a nice straight line: each doubling
of the block size cuts the elapsed time by a roughly constant factor. A
straight line on log-log axes means the elapsed time is proportional to
some negative power of the block size - a power law, rather than an
exponential, if I remember my maths in enough detail.

However you hit the buffers (literally!) of your system at a block size
of around 768 (that's me interpolating the curve). You'll probably find
that was a function of your drive's throughput, PCI bus speed, CPU's
interrupt handling rate and also available memory.

Also note that unless you did a full reboot between tests, your buffers
would have been full of NUL bytes after the first run, which might affect
the test results. Not by much, but hey - consistent testing is all :)

I suspect the dip at a size of just above 10000 was more a function of
your machine stopping doing something else rather than anything sinister
or useful - unless you hit a magic block size there which the drive
could buffer nicely!

Graeme



