[Nottingham] lvm on raid & hdparm -tT looks pathetic!
Jim Moore
jmthelostpacket at googlemail.com
Sat May 10 02:58:08 BST 2008
I think the explanation lies in the setup. Using vastly oversimplified
setup examples and theoretical best-case speeds on raw partitions across
a two-disk array:
Type:         Relative speed:
JBOD:         1.0
RAID0:        2.0*
RAID1:        1.0*
RAID0+LVM:    2.0*
RAID1+LVM:    1.0*
LVM:          1.0*
*minus RAID/LVM caching overheads: softRAID and LVM both need to cache
the data before it can be written to disk, so using both systems at the
same time incurs /both/ overheads. Therefore, RAID0 on a softRAID will
give more like 1.5-1.7 relative read/write speed, and LVM on its own
slightly more or slightly less than 1.0. I think the amount of overhead
LVM adds depends more on the filesystem type (journalling or not, etc.).
softRAID overhead involves several dependent factors, not least of which
are the number of disks in the array, the speed of the /slowest/ disk in
the array, the file size, the chunk size, the filesystem type, cache
speed, controller speed...
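For anyone who wants to poke at those factors on their own box, something
along these lines gives the raw-partition vs md vs LVM comparison plus the
array geometry (run as root; /dev/sda1 and /dev/md0 are as in Martin's
mail, vg0/lv0 is just a made-up volume group and logical volume):

  # sequential read speed of the raw partition, the md array and the LV
  hdparm -tT /dev/sda1
  hdparm -tT /dev/md0
  hdparm -tT /dev/mapper/vg0-lv0

  # array geometry: RAID level, member disks, chunk size
  cat /proc/mdstat
  mdadm --detail /dev/md0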
RAID1 is pretty much JBOD but mirrored across two disks. Caching isn't an
essential feature, and the mirroring *can* be kludged together (i.e. with
a simple script). While that doesn't require a RAID controller to pull
off, having one helps, because a periodic diff can seriously affect
system performance (though it's fine as a nightly run if you don't mind
1.0 relative performance during the day**).
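By "simple script" I mean no more than something like this - a
hypothetical cron job using rsync, with /mnt/mirror standing in for
wherever the second disk is mounted:

  # /etc/cron.d/nightly-mirror  (illustrative paths only)
  # 02:00 every night: copy the working disk onto the second disk,
  # removing anything that has been deleted on the source
  0 2 * * *  root  rsync -a --delete /home/ /mnt/mirror/home/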
LVM on a softRAID1 again costs you both sets of overhead, and can knock
performance down to 1.2-1.5.
**I've got a 300GB RAID0 across two SATA notebook drives in one of my
workstations, which runs a nightly diff to a single 300GB drive on my
fileserver. By day it's just blazingly fast and spends much of its time
waiting for the processor; by night it's backing itself up to the
fileserver as fast as the network can choke it through. I /could/ run
two separate RAID0 arrays on the same system, using LVM to manage the
RAID1 layer (to mirror across both RAID0s), but (having tried it) that
would completely kill any performance advantage I'd gained by using
RAID0. So, for the purpose of the system, I found a diff during idle
time did just as well and let me enjoy the full speed advantage of
RAID0 on the working system.
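For the curious, that kind of "LVM mirroring two RAID0s" layout can be
thrown together roughly like this - device names purely illustrative,
and the sizes made up:

  # two striped (RAID0) arrays...
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1

  # ...mirrored by LVM on top
  pvcreate /dev/md0 /dev/md1
  vgcreate vg0 /dev/md0 /dev/md1
  lvcreate -m1 --mirrorlog core -L 250G -n striped_mirror vg0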
Using an offboard RAID box or a controller with its own cache, you can
improve performance where cache speed is the limit. Some of the more
expensive RAID cards have their own memory banks used for caching; this,
coupled with their own independent cache controllers (rather than the
load being borne by the CPU and system RAM), makes for a much faster
array. It will not, however, improve LVM performance - that is still
OS-software-dependent - only RAIDx performance.
Of course, that little lot could be completely wrong; someone please
correct me if it is. This is my experience and understanding of how
RAID/LVM works, having set up my own servers and HPC workstations and
found that my poxy little 100Mbit network can't keep up with streaming
HD video...
Cheers,
Jim
Martin wrote:
> Back in the land of hdd rather than mythical DD flash kaboom... ;-)
>
>
> On a certain experimental system of mine with two sata1 hdds, using
> hdparm -tT to get some read speeds I get:
>
> 1 GByte/s memory (cache) read in all cases;
>
> 18 MByte/s single partition (/dev/sda1 or /dev/sdb1);
>
> 80 MByte/s raid (/dev/md0);
>
> 18 MByte/s LVM (/dev/lvm-whatever...);
>
>
> Whereby I've set up the LVM on top of a software mirror raid across the
> sata1 hdds.
>
>
> So why is the LVM performance so slow?
>
> Or is this a measurement problem?
>
>
> Any ideas?
>
> Cheers,
> Martin
>
>