[Gllug] Partitioning advice needed

Nix nix at esperi.org.uk
Sat Feb 17 15:03:02 UTC 2007


On 17 Feb 2007, Anthony Newman verbalised:

> Chris Bell wrote:
>> On Fri 16 Feb, Nix wrote:
>>
>>> The *right* thing to do is to ditch the lot and go LVM, one VG per
>>> physical disk, probably (and if you can afford it, get extra disks and
>>> go LVM-on-RAID across all of them at once ;) ).
>>>
>>    Does LVM alone, without RAID, make the filesystem more vulnerable?
>
> If you have more than one physical disk available, it must since you
> have multiple critical underlying devices providing a single file
> system.

Although it isn't guaranteed, VGs often seem to be able to recover
decently from losing an underlying PV: you have to run it with
--partial, is all. Obviously any LVs partially or completely located on
the failed PVs are toast.
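
Something like this does the trick (VG name hypothetical; a sketch,
not gospel):

  # Activate what's left of a VG despite a missing PV
  vgchange -ay --partial vg0

  # Once you've rescued what you can, drop the dead PV permanently
  vgreduce --removemissing vg0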

> LVM isn't the ultimate panacea despite its potential usefulness,
> especially as there are not any file systems available[0] that allow
> shrinking,

(Other people have pointed out ext3; you have to unmount before
shrinking, but that's all, and you can expand without unmounting.)
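
Shrinking goes something like this (LV name and sizes hypothetical;
note the order: shrink the filesystem *before* the LV):

  umount /srv/data
  e2fsck -f /dev/vg0/data        # resize2fs insists on a clean fsck
  resize2fs /dev/vg0/data 10G    # shrink the filesystem first
  lvreduce -L 10G vg0/data       # then the LV underneath it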

>            so the only way to provide extra space to a logical volume
> when you run out of space is to expand into space you haven't already
> allocated (and consequently was wasted), or add physical volumes,
> which should also be redundant.

Myself, I leave space I can't see a use for un-LVMed until I can see a
use for it, and then expand into the space. Expanding filesystems
with LVM is so much not a big deal that it's not true :)
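
For instance, bringing a hitherto-unused partition into the VG and
growing an LV into it, filesystem still mounted (names hypothetical):

  pvcreate /dev/sda5             # turn the spare partition into a PV
  vgextend vg0 /dev/sda5         # add it to the volume group
  lvextend -L +20G vg0/data      # grow the LV into the new space
  resize2fs /dev/vg0/data        # ext3 grows online, no unmount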

>                                  If you're running machines that
> absolutely must not go down, it can get you out of a spot, but for

If you don't like rebooting because it ditches your state, it can be
useful as well.

> people who reboot every day it's an extra hassle that you'd have to

Some people voluntarily reboot every day. Given how long it takes pretty
much every modern OS other than the OLPC to boot, this strikes me as
rather peculiar. Suspend the machine when you're not using it,
perhaps, but not reboot! (unless it's a laptop or other power-limited
system or a system which makes too much noise and suspension doesn't
work, of course. Then you have no choice.)

> evaluate on the basis of the extra work and the difficulty added to
> system recovery if you have, say, a botched kernel upgrade.

If you're using initramfs, no difficulty at all. That's one of the
things that's so nice about initramfs: because the rootfs contents are
linked into the kernel image, they never get out of synch, and if they
worked in your old working kernel, they'll still work later, even if
you've installed totally broken LVM tools in the meantime.
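
If you want the rootfs contents linked into the image, it's just a
couple of kernel config options plus a file list (paths hypothetical):

  # .config: build the initramfs into the kernel image itself
  CONFIG_BLK_DEV_INITRD=y
  CONFIG_INITRAMFS_SOURCE="/usr/src/initramfs.list"

  # initramfs.list excerpt: a static lvm binary frozen into the
  # image alongside the kernel it boots with
  # (gen_init_cpio syntax: file <name> <source> <mode> <uid> <gid>)
  file /sbin/lvm /usr/src/initramfs/sbin/lvm.static 0755 0 0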

>                                                             LVM alone
> is little better than a JBOD or RAID0 array (the latter as LVM also
> supports striping).

Um, it lets you expand LVs without rebooting? It lets you have
filesystems in discontiguous pieces of the disk? It lets you move
filesystems from disk to disk on the fly? Try doing any of that with
normal partition tables.
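
The on-the-fly move is one command (devices hypothetical; both PVs
must be in the same VG):

  # Migrate all extents off one PV onto another while every
  # filesystem on it stays mounted
  pvmove /dev/sda2 /dev/sdb1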

> The option exists now in the kernel (again experimental) to add
> devices to an existing software RAID5, which has potential use, but
> RAID5 is a bit sucky for a variety of reasons.

The safety advantages are such that it's still worth running, suckiness
and all.
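
For the curious, the reshape looks something like this (mdadm 2.x and
kernel 2.6.17+; names hypothetical):

  mdadm --add /dev/md0 /dev/sdd1   # add the new disk as a spare
  mdadm --grow /dev/md0 --raid-devices=4 \
        --backup-file=/root/md0-grow.bak   # reshape onto it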

> Compared to commercial storage options, which effectively provide
> highly flexible redundant storage in software, Linux RAID/LVM has a
> way to go yet.

I dunno. I'd say I've *got* `highly flexible redundant storage in
software'. Many of the commercial RAID options in particular come with
lovely extra features:

 - non-public RAID data formats, so if you need to recover the data
   on a RAID-1 post-software-failure, or use other RAID software,
   you can't; the underlying storage representation of LVM's
   superblocks is *plain text*, so if you have a really bad failure
   you can often fix it with no more aid than a disk sector editor
   and the man page (see the sketch after this list)
 - requirements for matched drive sizes or you waste space (if it
   works at all)
 - gross inflexibility. I have two RAID-5 arrays on three disks of
   very different sizes, with the arrays sharing one disk to maximize
   the amount of space that's RAIDed. Linux and FreeBSD's software
   RAID implementations have no trouble with that, but can anyone else
   do it? For a while, post-disk-failure, I ran RAID-5 partly atop
   the network block device to a remote system. Cluster-aware LVM
   atop the NBD is the core of the GFS SAN system, which is a pretty
   good demonstration of LVM's flexibility, too, I'd say.
 - all the inability to fix bugs and reliance on an unreliable third
   party that proprietary stuff brings you
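
To see the plain-text metadata for yourself (VG and device names
hypothetical):

  vgcfgbackup vg0                # snapshot the metadata as text...
  less /etc/lvm/backup/vg0       # ...same format as lives on disk

  # or dig it straight out of the start of the PV with nothing
  # fancier than dd
  dd if=/dev/sda2 bs=512 count=2048 2>/dev/null | strings | less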

> I'm sure many will disagree :)

You bet.

> [0] Maybe there are experimental ones, but not that I'd use for any
> data I care about

ext3?

-- 
`In the future, company names will be a 32-character hex string.'
  --- Bruce Schneier on the shortage of company names