[Gllug] disk problems

Nix nix at esperi.org.uk
Thu Mar 16 00:17:45 UTC 2006


On Wed, 15 Mar 2006, Andy Smith spake:
> The stock debian kernels include the necessary support, and the
> installer will also allow you to configure this.

WARNING: The initrd that Debian creates is currently rather broken.
It constructs mdadm lines with explicit device names in them, something
like this (adapted from my initramfs, but it gives the general idea):

/sbin/mdadm --assemble /dev/md0 --auto=md --run /dev/hda1 /dev/hdb3 /dev/sda2

which is really problematic if you move disks around --- and guess what
happens when disks fail? Yep, that's it, they move around. By comparison
my initramfs now does:

# Assemble the RAID arrays.
/sbin/mdadm --examine --scan > /etc/mdadm.conf
/sbin/mdadm --assemble --scan --auto=md --run
/sbin/lvm vgscan --ignorelockingfailure --mknodes && /sbin/lvm vgchange -ay --ignorelockingfailure

which scans all your accessible disks (constructed via micro-udev from
the contents of /sys) for RAID arrays, assembles them, and then scans
all those new block devices for LVM volume groups.
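For illustration, the ARRAY lines that `mdadm --examine --scan` emits
identify each array by its UUID rather than by the device names of its
members, which is why the assembly survives disks moving around. A
sketch of what ends up in /etc/mdadm.conf (the UUIDs here are invented):

```
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6b8b4567:327b23c6:643c9869:66334873
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=74b0dc51:19495cff:2ae8944a:625558ec
```

`--assemble --scan` then reads those lines back and hunts for members by
UUID on whatever devices happen to exist.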

The only thing hardwired in there now is the name of the root filesystem,
and that's overridable from the kernel command line via root=LABEL=...
or root=/dev/some/device/here ;)
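As a sketch of what that override looks like in practice (kernel
version, device and partition names all invented), a GRUB menu.lst
entry passing root= on the kernel command line:

```
title  Debian GNU/Linux, RAID root
root   (hd0,0)
kernel /vmlinuz-2.6.15 root=/dev/vg0/root ro
initrd /initrd.img-2.6.15
```

Nothing in the initramfs itself has to change when the root device does;
you just edit (or type at the boot prompt) a different root= value.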


I just posted my initramfs to the linux-raid list (a somewhat older
version: I must repost). I'd be happy to soup it up some more.

(mdadm-2.31 also comes with a --- much simpler and less flexible ---
initramfs, complete with a construction script.)

Getting it working was *much* less hair-raising than moving all the data
from non-RAID into RAID. (Note one thing: the simple approach of making
RAID arrays, expanding an LVM volume group onto them, and pvmoving the
data in will lead to extremely inefficient filesystems. You have to
*reconstruct* ext2/ext3 filesystems with knowledge of the RAID chunk
size: see the description of the -E option in the mke2fs manpage.) I'd
recommend using separate LVM VGs for your RAIDed storage in any case, so
that the failure of a single non-RAIDed disk can't take your RAIDed VG
down. (That doesn't stop you tying more than one RAID array together
with LVM: I'd just not recommend adding PVs that aren't RAIDed to a VG
with some PVs that are on RAID.)
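To make the stride point concrete, here's a minimal sketch: the ext2/ext3
stride is the RAID chunk size divided by the filesystem block size. The
chunk size, block size and device name below are all invented for
illustration; this just computes the number and prints the mke2fs
invocation rather than running it.

```shell
# RAID chunk size in KiB (mdadm --detail /dev/mdX reports this).
chunk_kb=64
# ext3 block size in KiB (4 KiB is the usual default on big filesystems).
block_kb=4
# Stride = chunk size / block size, so mke2fs can lay out its bitmaps
# and inode tables to avoid hammering a single member disk.
stride=$((chunk_kb / block_kb))
echo "mke2fs -j -E stride=${stride} /dev/md1"
```

Run the printed command against the real device once you've checked the
chunk size of your actual array.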

-- 
`Come now, you should know that whenever you plan the duration of your
 unplanned downtime, you should add in padding for random management
 freakouts.'
-- 
Gllug mailing list  -  Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug



