[Gllug] RAID on LVM - heresy?
Ken Smith
kens at kensnet.org
Fri Sep 28 23:07:23 UTC 2007
Nix wrote:
> On 28 Sep 2007, Ken Smith told this:
>
>> But consider this. If I had a collection of disks, none of them really
>> large enough or similar enough in size to use for RAID by themselves,
>> would it be valid to group some of them together as, for example, LV1
>> and some others together as LV2 and so on, and then run soft RAID over
>> the top?
>>
>> Could you even run LVM over the resultant volume to retain the
>> flexibility of resizing file systems?
>>
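As an illustration of the layout being asked about (all device names, sizes
and VG/LV names below are invented; this is a sketch of the idea, not a
recommendation):

  # Group two small disks into one VG and carve a single LV out of it
  # (hypothetical devices and sizes throughout).
  pvcreate /dev/sdc1 /dev/sdd1
  vgcreate vg_small /dev/sdc1 /dev/sdd1
  lvcreate -L 40G -n lv1 vg_small

  # Use that LV alongside a plain partition as members of a soft-RAID mirror.
  mdadm --create /dev/md5 --level=1 --raid-devices=2 \
        /dev/sda1 /dev/vg_small/lv1

  # And, as asked, LVM again on top of the array for resizable filesystems.
  pvcreate /dev/md5
  vgcreate vg_data /dev/md5
  lvcreate -L 20G -n home vg_data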
>
> I do exactly the opposite of this.
>
> I have four disks, two IDE (an old 10Gb, a larger 45Gb) and two SCSI
> (70Gb each).
>
> These are rather inconveniently sized, so I split them into three RAID
> arrays and a lump of un-RAIDed space for stuff I don't much care about
> or can regenerate easily.
>
> Each disk starts with one component of a small, non-LVMed RAID-1 array
> containing /boot (in a primary partition on the IDE drives because of
> limitations of my machine's BIOS: it locks up before boot if all IDE
> disks don't have at least one primary partition, *sigh*):
>
> Array Size : 56064 (54.76 MiB 57.41 MB)
> Used Dev Size : 56064 (54.76 MiB 57.41 MB)
>
>     Number   Major   Minor   RaidDevice  State
>        0       8        5        0       active sync   /dev/sda5
>        1       8       21        1       active sync   /dev/sdb5
>        2       3        1        2       active sync   /dev/hda1
>        3      22        1        3       active sync   /dev/hdc1
>
> (It's RAID-1 both for robustness against failure of any disk and because
> LILO can't boot from anything more sophisticated: it's also the only
> array using a v0.90 superblock for exactly the same reason.)
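As a rough sketch, the /boot array above could be created with something like
the following; the exact invocation and filesystem choice are my assumptions,
only the partition names come from the listing:

  # RAID-1 across all four disks, with the old 0.90 superblock so LILO
  # can boot from it (partition names taken from the listing above).
  mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=0.90 \
        /dev/sda5 /dev/sdb5 /dev/hda1 /dev/hdc1
  mkfs.ext3 /dev/md0        # filesystem choice is an assumption
  mount /dev/md0 /boot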
>
> Then there are two RAID-5 arrays. There's one large one on both SCSI
> disks and the large IDE disk, sized to fill the IDE disk completely
> except for a gigabyte for swap:
>
> Array Size : 76807296 (73.25 GiB 78.65 GB)
> Used Dev Size : 76807296 (36.62 GiB 39.33 GB)
>
>     Number   Major   Minor   RaidDevice  State
>        0       8        6        0       active sync   /dev/sda6
>        1       8       22        1       active sync   /dev/sdb6
>        3      22        5        2       active sync   /dev/hdc5
>
> The other RAID-5 array is smaller, on the old 10Gb drive and the
> two SCSI drives (as they still have room left):
>
> Array Size : 19631104 (18.72 GiB 20.10 GB)
> Used Dev Size : 19631104 (9.36 GiB 10.05 GB)
>
>     Number   Major   Minor   RaidDevice  State
>        0       8       23        0       active sync   /dev/sdb7
>        1       8        7        1       active sync   /dev/sda7
>        3       3        5        2       active sync   /dev/hda5
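For reference, the two RAID-5 arrays might have been created along these
lines; the md device names and options are my guesses, the member partitions
are taken from the listings:

  # Large RAID-5 over both SCSI disks and the 45Gb IDE disk.
  mdadm --create /dev/md1 --level=5 --raid-devices=3 \
        /dev/sda6 /dev/sdb6 /dev/hdc5

  # Smaller RAID-5 over the leftover SCSI space and the old 10Gb drive.
  mdadm --create /dev/md2 --level=5 --raid-devices=3 \
        /dev/sdb7 /dev/sda7 /dev/hda5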
>
> The remaining space (20Gb or so on each SCSI disk) is not RAIDed, but
> just a single LVM VG with two PVs.
>
>
> Atop this we have two LVM volume groups, one covering the non-RAIDed
> space, one the RAIDed space. We cut the space into two regions for an
> important reason: if you lose a single PV, you're likely to lose the
> whole VG unless you're very lucky or careful with LV assignment within
> the VG. Both RAID arrays are combined into a single VG because even if
> we lose one of the SCSI disks, all that will happen is that both RAID-5
> arrays will go degraded: the VG is exactly as robust against failure as
> RAID-5 natively is.
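Combining both arrays into one VG might look roughly like this (the md
device, VG and LV names are assumptions carried over from the sketches
above):

  # Both RAID-5 arrays become PVs in a single VG; losing one SCSI disk
  # merely degrades both arrays, so the VG itself survives.
  pvcreate /dev/md1 /dev/md2
  vgcreate vg_raid /dev/md1 /dev/md2
  lvcreate -L 10G -n root vg_raid
  lvcreate -L 30G -n home vg_raid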
>
> (The second VG is nonrobust; losing either SCSI disk will kill it. I
> don't care: it contains things like my news spool, video crud, and stuff
> that's so boring I don't even back it up.)
>
>
> Booting from this requires an initramfs, of course: auto-assembly won't
> find the root filesystem when it's on a v1 superblock, especially when
> it's also inside LVM and thus a `vgscan' away from being mountable. The
> linux-raid wiki used to contain instructions for building this
> initramfs, but it seems to have gone down in a welter of wikispam (at
> least it had last I checked, a few days ago).
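The gist of what such an initramfs has to do is roughly the following; a
minimal sketch only, a real init script needs error handling, and the VG/LV
names are the assumed ones from above:

  #!/bin/sh
  # Minimal initramfs /init sketch: assemble the arrays, activate LVM,
  # then hand over to the real root filesystem.
  mdadm --assemble --scan            # assemble arrays listed in mdadm.conf
  vgscan                             # look for volume groups on the md devices
  vgchange -ay                       # activate them
  mount -o ro /dev/vg_raid/root /new_root   # VG/LV names are assumptions
  exec switch_root /new_root /sbin/init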
>
>
That sounds like a neater solution than my behemoth suggestion. Thanks..... Ken