[Nottingham] 1GB of \0 (aka null)

Martin martin at ml1.co.uk
Mon Mar 27 23:52:29 BST 2006


Graeme Fowler wrote:
> On Mon, 2006-03-27 at 22:04 +0100, Martin wrote:
> <snip>
> 
>>I'll let someone else graph up the numbers, if interested, to see how
>>bs affects the system, user and elapsed time. In short, the bigger the
>>bs, the quicker the times, provided that you have the memory for it.
> 
> http://www.graemef.net/dd-timing.jpg
> 
> Using log scales for both axes shows a nice linear correlation: the
> elapsed time drops steadily with each doubling of the block size -
> equating to an inverse exponential curve, if I remember my maths in
> enough detail.

Yep, and thanks for that. The curve is more interesting than anticipated.


> However you hit the buffers (literally!) of your system at a block size
> of around 768 (that's me interpolating the curve). You'll probably find
> that was a function of your drive's throughput, PCI bus speed, CPU's
> interrupt handling rate and also available memory.

Well, I've just rerun the tests, and on this occasion the "sweet spot" 
for minimum time was at block sizes from 8k up to 64k bytes.

There are some BOINC projects running in the background.
(See http://boinc-wiki.ath.cx )


> Also note that unless you did a full reboot between tests, your buffers
> would have been full of NULL after the first one, which might affect the
> test results. Not by much, but hey - consistent testing is all :)
> 
> I suspect the dip at a size of just above 10000 was more a function of
> your machine stopping doing something else rather than anything sinister
> or useful - unless you hit a magic block size there which the drive
> could buffer nicely!

Another test would be to try a self-contained routine that generates the 
nulls itself, rather than reading them from /dev/zero through "dd". 
Either way, the PCI bus shouldn't be involved, should it?
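
Something like the quick C sketch below is what I have in mind: fill a 
buffer with calloc() and fwrite() it straight out, so neither /dev/zero 
nor dd is involved at all. Untested and written off the cuff - the 
"nullgen.c" name, the "null.1GB" output filename and the default 64k 
block size are just arbitrary choices of mine.

/* nullgen.c - write 1 GiB of \0 from an in-memory buffer, so neither
 * /dev/zero nor dd is involved.  Untested sketch; the output filename
 * and the default 64k block size are arbitrary.
 */
#include <stdio.h>
#include <stdlib.h>

#define TOTAL (1024L * 1024L * 1024L)      /* 1 GiB of nulls */

int main(int argc, char *argv[])
{
    /* optional first argument: block size in bytes, default 64k */
    long block = (argc > 1) ? atol(argv[1]) : 64L * 1024L;
    char *buf;
    FILE *out;
    long written;

    if (block <= 0 || TOTAL % block != 0) {
        fprintf(stderr, "block size must divide 1 GiB evenly\n");
        return 1;
    }

    buf = calloc(1, block);                /* calloc gives a zeroed buffer */
    out = fopen("null.1GB", "wb");
    if (buf == NULL || out == NULL) {
        perror("setup");
        return 1;
    }

    /* write the zeroed buffer repeatedly until 1 GiB has gone out */
    for (written = 0; written < TOTAL; written += block)
        if (fwrite(buf, 1, block, out) != (size_t)block) {
            perror("fwrite");
            return 1;
        }

    fclose(out);
    free(buf);
    return 0;
}

Compile with "gcc -O2 -o nullgen nullgen.c" and run it under time(1) 
with different block sizes on the command line, much as with the dd 
runs, to repeat the sweep on the write side only.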


Cheers,
Martin

-- 
----------------
Martin Lomas
martin at ml1.co.uk
----------------


