[sclug] hot drives
Bob Franklin
r.c.franklin at reading.ac.uk
Mon Jun 19 22:29:59 UTC 2006
On Tue, 20 Jun 2006, Pieter Claassen wrote:
> Simple question. Why when I configure the SATA raid card and add the two
> drives to an array do I still get presented with two SCSI devices in
> the kernel? What does this mean? I was under the impression that a
> successful hw config would present only a single raid device to the
> kernel.
Cheap RAID cards (i.e. the ones that cost about 20-40 quid) just do RAID
at the BIOS level - that's enough to boot the machine, but not enough to
run Windows or Linux once they stop using the BIOS; for those you need a
driver to run the RAID array (i.e. copy the data to both disks, stripe
across them, etc.).
So, if you have one like this (as I have - a cheapy Adaptec thing that
offers RAID 0/1 and JBOD; I bought it because I wanted SATA in a PATA-100
machine) then you need to run a driver to use it.
If you configure things as Just a Bunch Of Disks then the RAID card won't
try to do anything clever with the disks. You can then do software
mirroring (or whatever) in Linux or Windows. If you use 'md' under Linux,
for example, the created device will be /dev/md0, etc., which you can
either use directly or put LVM on top of to carve out partitions.
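Something like this, roughly (the device names /dev/sda1 and /dev/sdb1
are just placeholders - check what your controller actually presents
before running anything):

  # build a two-disk mirror out of the disks the card exposes as JBOD
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # watch it sync / check its state
  cat /proc/mdstat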
I'm always a bit wary of these cheap cards because something may step in
and try to manage the disks without realising they're RAIDed, and make a
big mess of things. At least Linux software RAID just looks like a load
of whacky partitions, which probably only the Windows installer will want
to trash without warning.
> Even more intriguing, what happens if I configure HW raid as well as SW
> raid and then have a drive fail?
If it's true hardware RAID, what happens depends on how things are
configured: if the hardware array is a stripe, the whole array will go
off-line. If it's a mirror, one half will go off-line and the software
RAID layer on top will not notice anything has happened.
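(For what it's worth, if you want to see whether md itself has noticed
anything, the usual places to look are:

  cat /proc/mdstat
  mdadm --detail /dev/md0

- but if the mirroring is being done by the card underneath, those will
carry on reporting everything as healthy.)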
Although hardware RAID looks attractive, you really need a good card that
does it properly (and transparently to the software), which is much more
expensive. You're also then depending on the card: if it goes screwy,
you'll probably lose the disk, because I assume there's no portability in
the metadata between manufacturers, possibly not even between models.
At least with software RAID, you can shove the disk in another Linux box
and read it on there.
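Roughly (again, /dev/sdb1 is just a placeholder for wherever the disk
turns up on the other machine):

  # read the md superblock off the component disk
  mdadm --examine /dev/sdb1

  # start the array from the one half of the mirror you have
  mdadm --assemble /dev/md0 /dev/sdb1 --run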
Of course, software RAID does give you a performance hit. And you can't
partition a /dev/mdX device directly; you need to use LVM on top of it,
which slows things a little more and seems unnecessarily complex to me,
but it's the way to do it (I'm told) and it's how I have my server set
up.
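Roughly what I mean by that, for anyone who hasn't done it (the volume
group and volume names here are made up - pick your own, and adjust
sizes and filesystem to taste):

  pvcreate /dev/md0              # make the md device an LVM physical volume
  vgcreate vg0 /dev/md0          # put it in a volume group
  lvcreate -L 10G -n home vg0    # carve out a 10 GB logical volume
  mkfs.ext3 /dev/vg0/home        # filesystem on the logical volume
  mount /dev/vg0/home /home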
- Bob
--
Bob Franklin <r.c.franklin at reading.ac.uk> +44 (0)118 378 7147
Systems and Communications, IT Services, The University of Reading, UK