[Gllug] A SATA Raid card worth having?
Russell Howe
rhowe at siksai.co.uk
Fri Jan 27 00:38:38 UTC 2006
On Thu, Jan 26, 2006 at 11:46:59PM +0000, Ben Fitzgerald wrote:
> see what you are saying. on modern 64-bit pci:
>
> "Later versions of PCI enable true 64-bit data transfers using up to a
> 133MHz clock to enable transfer speeds of up to 1066 Mbytes/sec.)" [1]
>
> ata drives reach 133mb/s max transfer but scsi goes much higher so I
> guess this could be a bottleneck if you had a scsi controller with
> several fast units hanging off it.
I'd be astounded to see a drive that could do 133MB/s sustained!
I suspect most 7200rpm drives max out at about 40, perhaps 50MB/s? 15k
RPM should be faster, but I don't know how much faster...
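If you want to see what a given drive will actually sustain, hdparm's
read timing test gives a rough idea. A quick sketch, assuming an IDE
disk on /dev/hda (substitute whatever device you actually have):

    # time sequential reads from the disk, bypassing the page cache
    hdparm -t /dev/hda
    # time cached reads, which mostly measures the bus/memory instead
    hdparm -T /dev/hda

It's only a crude sequential-read number, but it's usually enough to
tell whether the drive or the interface is the limiting factor.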
With ATA (serial and parallel), the interface speed isn't too much of a
bottleneck, since you're quite limited in the number of devices you can
attach to an interface (although the SATA docs I linked to do mention
PMPs - port multipliers - which would increase the number of devices
per port on SATA).
With SCSI, the bus speed matters more, since you can have up to 15
devices on a single chain (wide SCSI gives you 16 IDs, one of which is
the host adapter). If your SCSI interface is 320Mbyte/s (U320) and all
of those devices are busy at once, that's only about 21Mbyte/s each.
This gets even more complicated when you add in things like Fibre
Channel and iSCSI, where bandwidth calculations have to take switches
and the rest of the fabric into account as well.
> > Also, what if your OS and bootloader are installed on a RAID1 set and one
> > of the drives dies?
>
> you can put grub onto both mirror components and use root=LABEL=/ [2]
Sure, but is the system BIOS smart enough not to just sit there saying
"Hard disk failure" or "Bootloader not found" because it's too
distracted by the failed drive?
Probably very BIOS-dependent, and needs thorough testing before
deployment.
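For what it's worth, the way I'd expect the "grub on both mirror
components" trick to be done with GRUB legacy is something like the
following - a rough sketch only, assuming the mirror members are
/dev/sda and /dev/sdb with /boot on the first partition of each:

    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit

The 'device' line makes grub treat the second disk as (hd0), so the
pointers it embeds still make sense when the BIOS falls back to that
disk after the first one dies - assuming, as above, that the BIOS
actually does fall back cleanly.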
I think this is the kind of thing you get with hardware RAID - all these
things have been (or should have been!) thought about by the people who
built it. There should be no I/O bottlenecks, the CPU in it should be
fast enough to do what's needed, and for RAID boxes, things like
cooling, hot-swappable parts, and a modular design with replaceable fan
modules (for when the fans die) can all make a difference.
To develop something this polished yourself, using software RAID as a
base, is quite a challenge, and I suspect the hardware RAID may come
out cheaper overall. It will also come with things like a warranty and
a support telephone number :)
A lot (all?) of the preassembled RAID units (the kind you just plug a
cable into the back of) do tie you to one vendor for expansion and
spare parts though, and that's where software RAID really wins out.
Like so many open source things, if you have the time and expertise to
build something yourself, you can match and even surpass a commercial
equivalent, but it's usually at the expense of features you don't think
you need right now (and which you may well never need).
Not that I'm trying to suggest software RAID is inherently open source -
it certainly isn't, but it does allow you to do a DIY job when an
off-the-shelf box might do things better/cheaper/etc.
Still, it seems that we've covered most of the pros and cons :)
Note that dmraid looks very useful for dealing with proprietary on-disk
RAID formats, should you ever need to recover data!
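A minimal sketch of that, assuming the disks carry metadata dmraid
understands (the exact set names depend on the controller's format):

    # list any proprietary RAID metadata dmraid finds on the disks
    dmraid -r
    # activate the discovered sets through device-mapper
    dmraid -ay

The activated arrays then appear under /dev/mapper/ and can be mounted
(read-only, if you're just rescuing data) like any other block device.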
--
Russell Howe | Why be just another cog in the machine,
rhowe at siksai.co.uk | when you can be the spanner in the works?