[Gllug] RAID on RAID

Russell Howe rhowe at wiss.co.uk
Thu Nov 4 11:00:08 UTC 2004


On Wed, Nov 03, 2004 at 04:28:20PM +0000, Rich Walker wrote:
> The idea that occurred to me was to allocate a ~5GB chunk of each disk
> and then do 
> hda1 + hde1 => md0, RAID1
> hdc1 + hdg1 => md1, RAID1
> md0 + md1 => md2, RAID1
> 
> and then mount md2 as /
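For reference, that layering could be assembled with mdadm along these
lines (device names taken from your scheme; this is a sketch I haven't
run, written out to a file for review first since the commands are
destructive and need root):

```shell
# Write the recipe to a script rather than running it directly -
# mdadm --create will happily destroy whatever is on those partitions.
cat > /tmp/raid-on-raid.sh <<'EOF'
#!/bin/sh
# Mirror pairs, one disk from each controller in each pair
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hde1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdc1 /dev/hdg1
# Mirror the mirrors: md2 ends up holding four copies of the data
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
EOF
```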

A few things spring to mind:

1) Why have 5G for /? If you have /{var,home,usr} as separate
filesystems, / shouldn't grow much above 100 megabytes (about the
biggest consumer would be a large /lib/modules if you have many kernels
installed). If you're an /opt kind of person, then that would probably
want its own partition too. As an example, a Debian sid system I have
looks like this:

Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/hda2      xfs    239M  120M  119M  51% /
/dev/hda1     ext2     23M   14M  9.4M  59% /boot
/dev/hda5      xfs    1.9G   12M  1.9G   1% /tmp
/dev/hda6      xfs    5.6G  3.0G  2.7G  54% /usr
/dev/hdd2      xfs    4.9G  2.4G  2.6G  49% /usr/local
/dev/hdd3      xfs    4.0G  3.9G   49M  99% /var
/dev/hdd5      xfs    2.0G  2.0G   48M  98% /home
/dev/hdc1      xfs     75G   75G  387M 100% /usr/local/media

The only reason /var is so big is because it holds the filesystem for my
desktop (NFS-root box). /home is only so full because I have too much
crap in ~rhowe. /tmp was more a case of "hm, 2G of space.. what can I do
with it?". Having hdc on its own cable would be nice, but IRQs are
limited in that box, so I don't really want to add another IDE
controller.

/ is 50% kernel modules, too:

$ du -hsc /lib/modules/*
13M     /lib/modules/2.6.1-xiao
16M     /lib/modules/2.6.3
12M     /lib/modules/2.6.5
13M     /lib/modules/2.6.5-xiao
11M     /lib/modules/2.6.9-final-xiao
63M     total
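Module trees for kernels that are long gone are easy to forget about. A
little function like this (a sketch, assuming the usual vmlinuz-<version>
naming in /boot - adjust to taste) will list the likely candidates:

```shell
# List module trees in /lib/modules whose matching kernel image is
# missing from /boot - probably safe to prune, but eyeball the output.
stale_modules() {
    moddir=${1:-/lib/modules}
    bootdir=${2:-/boot}
    for d in "$moddir"/*/; do
        [ -d "$d" ] || continue
        v=$(basename "$d")
        [ -e "$bootdir/vmlinuz-$v" ] || echo "$v"
    done
}
```

Run with no arguments to check the live system; at 11-16M per tree,
a few dead kernels adds up.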

2) You need to be careful when layering kernel subsystems like this -
especially with the recent kernel option of 4K stacks. There are parts
of the kernel which are rather heavy consumers of stack, and you can hit
the 4K limit relatively easily (you can even hit the 8K limit, if you
try). Running out of kernel stack space is something to avoid - it tends
to mean an oops or a hard crash. Things to watch out for are LVM, MD,
XFS and certain device drivers (cpqfc, for example); all are fairly
heavy consumers of stack space. Note that some distributions (notably
Fedora) ship kernels with CONFIG_4KSTACKS set.
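A quick way to check whether your running kernel was built that way is
something like this (assuming your distro drops the config in
/boot/config-<version>, which not all do):

```shell
# Look for the 4K stacks option in the running kernel's config.
# Falls back to a note if the config file isn't where we guessed.
grep 4KSTACKS "/boot/config-$(uname -r)" 2>/dev/null \
    || echo "CONFIG_4KSTACKS not found (or no config file)"
```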

> Now, clearly write will be slow :-> But write to / is rare - most writes
> go to /home, /var, /tmp and some to /big.
> 
> Reads should alternate between md0 and md1.
> 
> If any one disk controller goes down, no problem.
> If any three disks go down, no problem.

If a controller or disk goes down, it's quite likely to take Linux down
with it... I've had lockups and kernel panics caused by drives simply
getting too hot and returning IDE errors! Putting the drives in a
fan-cooled removable bay solved that one, though.

-- 
Russell Howe       | Why be just another cog in the machine,
rhowe at siksai.co.uk | when you can be the spanner in the works?
-- 
Gllug mailing list  -  Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug



