[Gllug] RAID on laptop or xfs?

Russell Howe rhowe at wiss.co.uk
Thu Mar 31 11:23:55 UTC 2005


On Thu, Mar 31, 2005 at 10:49:43AM +0100, Ian Norton wrote:
> anyhoo, what are people's thoughts on either having two mirrored partitions on
> the same drive (and using ext3, Reiser or XFS) or just using XFS?

XFS does nothing to protect your data. Its journalling is all for the
metadata (file name, size, xattrs, mode, ACLs, etc).

All XFS's journalling is designed to do is keep the filesystem itself
in a consistent state after a crash; it does nothing for the data
stored on it.
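
None of this means you can't get your data onto the platters; it just
means the application has to ask for it. A minimal sketch using plain
POSIX calls (nothing XFS-specific here):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            const char *buf = "data I care about\n";
            int fd = open("data.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd < 0) { perror("open"); return 1; }
            if (write(fd, buf, strlen(buf)) < 0) { perror("write"); return 1; }

            /* Ask the kernel to push this file's data and metadata out
             * to the device. Without this, a crash can leave the
             * metadata journalled while the data blocks never made it
             * to disk. */
            if (fsync(fd) < 0) { perror("fsync"); return 1; }

            if (close(fd) < 0) { perror("close"); return 1; }
            return 0;
    }

Even then, fsync() only pushes the data as far as the drive, which is
where the write cache problem below comes in.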

See the XFS FAQ entry about files full of null bytes appearing after a
crash.

The sync codepaths were rewritten a year or so ago to improve the chance
that your data will make it to disk in a timely fashion, but it's by no
means guaranteed (by design).

Also note that many hard drives (IDE ones in particular) seem to
perform write caching by default. If you lose power, anything sitting
in that cache is lost, which means XFS believes data has been flushed
to disk when it hasn't - the result? Massive corruption.

Even worse, some drives *IGNORE* a request to disable write caching! In
that case there isn't a whole lot the filesystem can do other than
guess at the state of the drive (although I think I read something on
the linux-xfs list about write barriers, which are meant to address
exactly this...).
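
For what it's worth, hdparm -W0 /dev/hda is the usual way to ask an IDE
drive to turn its write cache off. Below is a rough sketch of the same
request made directly via the HDIO_DRIVE_CMD ioctl - assuming the usual
(command, sector, feature, count) argument layout, and bearing in mind
that, as above, the drive is free to ignore it:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/hdreg.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            /* 0x82 = SETFEATURES "disable write cache" subcommand;
             * 0x02 would re-enable it. */
            unsigned char args[4] = { WIN_SETFEATURES, 0, 0x82, 0 };
            int fd;

            if (argc != 2) {
                    fprintf(stderr, "usage: %s /dev/hdX\n", argv[0]);
                    return 1;
            }
            fd = open(argv[1], O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            if (ioctl(fd, HDIO_DRIVE_CMD, args))
                    perror("HDIO_DRIVE_CMD");

            close(fd);
            return 0;
    }

Whether the cache is actually off afterwards is another matter
entirely, of course.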

Pressing the reset button will probably be OK with such a drive (as the
drive won't lose power, and should still be able to flush its cache).
It's loss of power that's the problem, AFAIK.

Laptop drives might be OK in this regard - I suspect it's the large
drives made for desktops and for the DVD-ripping, warez-serving,
HDD-speed genital-waving Counter Strike playi^W^W^W community that have
the problem, and quite frankly the more data they lose, the better IMHO
:)

-- 
Russell Howe       | Why be just another cog in the machine,
rhowe at siksai.co.uk | when you can be the spanner in the works?