[Gllug] SCSI more reliable than Maxline PlusII?
Russell Howe
rhowe at wiss.co.uk
Fri Sep 10 22:43:25 UTC 2004
On Sat, Sep 11, 2004 at 12:29:00AM +0100, Bryn Reeves wrote:
> 1) markets.
> IDE is mass-market, high volume technology. Look at the way DRAM price
> rockets as soon as a particular type is no longer shipping large numbers
> of units.
The main thing I heard is that SCSI drives, being generally destined for
the higher end, were individually tested, whereas IDE drives for the
mass market were batch tested (i.e. they'd make a batch and test one
drive from it).
This is all nth-hand information though, so don't rely on it at *all*.
> 3) interface hardware.
> SCSI host/device interface hardware costs *a lot* more than IDE. IDE's
> big goal was always low-cost mass implementation. SCSI chipsets implement
> much more in hardware than IDE, like s/g, command queueing and DMA (IDE
> normally uses the DMA controller on the motherboard chipset).
I always remember seeing the Adaptec 1542CF ISA SCSI HBA. It had a Z80
on it :)
I also heard that there are people making so-called 'enterprise-class'
SATA drives, which I guess are manufactured with server applications in
mind.
I do recall some IDE drives only being warranted for 10 hours' use
per day, or something silly like that.
TBH, I can't see the problem with having drives you think might not be
massively reliable. That's what RAID is for - with RAID you assume
drives are going to fail.
If you do RAID5 with a hot spare or two, use a mixture of drives (if you
buy them all at the same time and they're the same
make/model/firmware/batch etc. then there's always the chance they all
die together), put some monitoring in place so you know when drives are
on their way out (SMART!), and make sure you get alerted whenever one
dies, then I can't see a huge problem in going for the cheap-and-nasty.
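Something along these lines would do as a very rough sketch of the
monitoring side, assuming smartmontools' smartctl is installed and that
/dev/sda to /dev/sdd (names plucked out of the air) are the array
members; in practice smartd from the same package does this properly,
email alerts and all:

import subprocess

# Array members - made-up device names, adjust to taste.
DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

for dev in DEVICES:
    # 'smartctl -H' asks the drive for its overall SMART health
    # self-assessment; a healthy ATA drive reports "PASSED".
    out = subprocess.run(["smartctl", "-H", dev],
                         capture_output=True, text=True).stdout
    if "PASSED" not in out:
        # Hook your real alerting (mail, Nagios, whatever) in here.
        print("WARNING: %s is not reporting a healthy SMART status" % dev)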
I noticed an option in the kernel config for RAID6, which appeared to be
RAID5 but with two disks' worth of parity, allowing you to lose two
drives and still carry on, albeit without any redundancy left. RAID6 was
marked as experimental and likely to devour data, however.
With decent backups, a well-monitored RAID setup, and both hot and cold
spares, you should end up with a pretty solid system.
The price of SATA disks lets you do all this relatively inexpensively -
doing the same with SCSI or FC disks would probably cost so much more
that it wouldn't be worth it, regardless of the supposed reliability
claims (which I tend to think are mostly people's personal views, not
really backed up by anything but gut instinct and a wish to justify
having spent 5k on SCSI drives when 1k of SATA would've done :)
Hard drives fail. Build a system to cope with that and the reliability
of individual drives probably doesn't matter a great deal. Anyone fancy
doing some maths?
Take a RAID5 array of six disks, each with an MTBF of MTBFdisk hours, a
RAID controller with an MTBF of MTBFcontroller, and a server with an
MTBF of MTBFserver (that one's pretty tough to calculate!).
Now calculate the combined MTBF of the system, then work out what'd
happen if you halved the disk MTBFs, or if you halved the MTBF of half
the disks, etc.
But it's 1amish and I have no paper :)
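Something like this would do the back-of-the-envelope sum though,
assuming independent, exponentially-distributed failures and the usual
approximation that a RAID5 array loses data when a second disk dies
before the first has been rebuilt (all the figures below are made up):

# Rough combined-MTBF sum for the 6-disk RAID5 question above.

def raid5_mttdl(n_disks, mtbf_disk, mttr):
    """Mean time to data loss of an n-disk RAID5 array, in hours."""
    return mtbf_disk ** 2 / (n_disks * (n_disks - 1) * mttr)

def series_mtbf(*mtbfs):
    """Components in series fail when any one fails: failure rates (1/MTBF) add."""
    return 1.0 / sum(1.0 / m for m in mtbfs)

MTBF_DISK = 500000.0        # hours - made-up figure for a cheap SATA drive
MTBF_CONTROLLER = 300000.0  # hours - made up
MTBF_SERVER = 100000.0      # hours - made up, and the hardest to estimate
MTTR = 24.0                 # hours to swap the dead disk and rebuild

array = raid5_mttdl(6, MTBF_DISK, MTTR)
print("array alone:     %.3g hours" % array)
print("whole system:    %.3g hours"
      % series_mtbf(array, MTBF_CONTROLLER, MTBF_SERVER))
print("half-MTBF disks: %.3g hours"
      % series_mtbf(raid5_mttdl(6, MTBF_DISK / 2, MTTR),
                    MTBF_CONTROLLER, MTBF_SERVER))

With figures anywhere near those, the array's mean time to data loss is
so huge that the controller and server dominate the sum, and halving the
disk MTBF barely moves the result - which is rather the point.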
--
Russell Howe | Why be just another cog in the machine,
rhowe at siksai.co.uk | when you can be the spanner in the works?