[sclug] RAID & LVM2 - again!

Chris Aitken chris at ion-dreams.com
Thu Apr 14 09:08:56 UTC 2005


A follow-on from last night:

2 x 73 GB disks, RAID-1.

Each disk looks like this:

Name    Flags   Part Type       FS Type                 Size (MB)
sda1    Boot    Primary         Linux raid autodetect     5116.13
sda2            Primary         Linux raid autodetect     1019.94
sda5            Logical         Linux raid autodetect    57025.87
sda4            Primary         Linux raid autodetect    10240.48
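
To give /dev/sdb the identical layout, the partition table can simply be
copied across (a sketch, assuming two identical disks):

  sfdisk -d /dev/sda | sfdisk /dev/sdb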

Each partition is RAID'd with the matching partition from /dev/sdb:

Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
      995904 blocks [2/2] [UU]

md3 : active raid1 sdb4[1] sda4[0]
      10000384 blocks [2/2] [UU]

md2 : active raid1 sdb5[1] sda5[0]
      55689216 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      4996096 blocks [2/2] [UU]

unused devices: <none>
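
For reference, arrays like these can be built with mdadm (a sketch; the
originals may have been created differently, e.g. by the installer):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4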

/etc/fstab:

/dev/md0        /               ext3    errors=remount-ro       0       1
/dev/md1        none            swap    sw                      0       0
/dev/md3        /var            ext3    defaults                0       0
/dev/vg_main/tmp_lv     /tmp    ext3    defaults                0       0
/dev/vg_main/home_lv    /home   ext3    defaults,usrquota       0       0
/dev/vg_main/mail_lv    /var/mail       ext3    defaults,usrquota       0       0
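
The md1 swap array just needs initialising before the fstab entry will
work (a sketch):

  mkswap /dev/md1
  swapon /dev/md1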

Now /dev/vg_main is an LVM2 Volume Group (VG), consisting of one Physical
Volume (PV):

svs-dc1:/home/chrisa# vgdisplay
  --- Volume group ---
  VG Name               vg_main
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               53.11 GB
  PE Size               4.00 MB
  Total PE              13595
  Alloc PE / Size       4871 / 19.03 GB
  Free  PE / Size       8724 / 34.08 GB
  VG UUID               sDLF7v-x5YN-9j1A-m8o3-zU3H-A6xV-fCQg8Y

svs-dc1:/home/chrisa# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg_main
  PV Size               53.11 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              13595
  Free PE               8724
  Allocated PE          4871
  PV UUID               0zgqpJ-vywb-ZeW3-tQPU-AJ11-dLFu-1cQb1R
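
Getting to this point from a bare md2 array is straightforward (a sketch;
vgcreate's default 4 MB extent size matches the output above):

  pvcreate /dev/md2
  vgcreate vg_main /dev/md2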

It can be seen that all (53.11 GB) of the PV (/dev/md2) is used for the VG
vg_main, but only 19.03 GB of the VG has been allocated to LVs; the
remaining 34.08 GB is free for later use.
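
That free space means an LV can be grown later without repartitioning.
For example, home_lv could be extended by 5 GB like this (a sketch; older
ext3 setups may need the filesystem unmounted before resizing):

  lvextend -L +5G /dev/vg_main/home_lv
  resize2fs /dev/vg_main/home_lv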

The VG vg_main has 3 LVs within it: tmp_lv (/tmp), home_lv (/home), and
mail_lv (/var/mail):

svs-dc1:/home/chrisa# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg_main/tmp_lv
  VG Name                vg_main
  LV UUID                CHFB2A-HOkW-E0b9-GrRA-2zJa-ezcY-HiKSXJ
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg_main/home_lv
  VG Name                vg_main
  LV UUID                IFsqki-EXAF-WHL5-xF3W-G7Kz-YC2a-09z8U2
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                13.03 GB
  Current LE             3335
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/vg_main/mail_lv
  VG Name                vg_main
  LV UUID                zYy7dM-5TUz-oYm3-FM2q-VJjw-RTur-aaRcFl
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                5.00 GB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:2
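
LVs like these can be created with lvcreate and then given filesystems;
a sketch, with sizes matching the output above (specified in extents
where the size isn't a round number):

  lvcreate -L 1G -n tmp_lv vg_main
  lvcreate -l 3335 -n home_lv vg_main    # 3335 x 4 MB PEs = 13.03 GB
  lvcreate -L 5G -n mail_lv vg_main
  mkfs.ext3 /dev/vg_main/tmp_lv
  mkfs.ext3 /dev/vg_main/home_lv
  mkfs.ext3 /dev/vg_main/mail_lv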

I hope this is easy enough for people to follow.

Chris

