[Gllug] RAID on RAID

Rich Walker rw at shadow.org.uk
Thu Nov 11 19:37:04 UTC 2004


DISCLAIMER:

BY THE END OF THIS EMAIL I'M DOING BAD THINGS TO FILESYSTEMS IN USE.

READ THE DOCUMENTATION BEFORE TRYING IT.

TRY IT ON A SCRATCH MACHINE FIRST.

F'ING AROUND WITH mdadm ON PARTITIONS WITH FILESYSTEMS IS A REALLY GOOD
WAY TO GET TO TEST IF YOUR BACKUPS WORK.

That said...


Tethys <tet at createservices.com> writes:

> Rich Walker writes:
>
>>I just created two LVM volumes, and then built a raid-1 array from them.
>
> Now having never used md under Linux, I don't know for sure, but doesn't
> it blow away the contents of the devices used? I want to mirror the disk
> after it has a filesystem on it, without destroying the data, and without
> needing to go through a backup/restore cycle.

If I know I'd want to mirror the drive later, I'd do
  mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/hda7 missing

and then build the filesystem on it.

Then later I'd do
  mdadm --manage /dev/md0 --add /dev/hdb8

which would sync the existing filesystem onto both of them.
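
Roughly, the whole plan-ahead sequence would look like this - only a
sketch, using the hypothetical /dev/hda7 and /dev/hdb8 from above and a
made-up mount point:

  # create the array with its second half "missing", then put a fs on it
  mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/hda7 missing
  mke2fs /dev/md0
  mount /dev/md0 /mnt/data

  # later, add the second partition and let md copy the data across
  mdadm --manage /dev/md0 --add /dev/hdb8
  cat /proc/mdstat    # watch the recovery until it shows [UU]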

I just tried it, and it seems to work:

mdadm --create /dev/md1 --force --level 1 --raid-devices 2 /dev/mapper/bulk-lvol7 missing
mdadm: /dev/mapper/bulk-lvol7 appears to contain an ext2fs file system
    size=102336K  mtime=Thu Nov 11 19:14:02 2004
mdadm: /dev/mapper/bulk-lvol7 appears to be part of a raid array:
    level=1 devices=1 ctime=Thu Nov 11 19:13:42 2004
Continue creating array? y
mdadm: array /dev/md1 started.
gateway:~--# mount /dev/md1 /mnt/z
gateway:~--# umount /mnt/z
gateway:~--# mke2fs /dev/md1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
25584 inodes, 102336 blocks
5116 blocks (5.00%) reserved for the super user
First data block=1
13 block groups
8192 blocks per group, 8192 fragments per group
1968 inodes per group
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

Writing inode tables: done                            
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
gateway:~--# mount /dev/md1 /mnt/z
gateway:~--# df 
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md1                 99099        13     93970   1% /mnt/z
gateway:~--# mdadm --manage /dev/md1 --add /dev/mapper/bulk-lvol8 
mdadm: hot added /dev/mapper/bulk-lvol8
gateway:~--# cat /proc/mdstat 
Personalities : [raid1] [raid5] [multipath] 
read_ahead 1024 sectors
md1 : active raid1 [dev fe:08][2] [dev fe:07][0]
      102336 blocks [2/1] [U_]
      [======>..............]  recovery = 34.0% (35584/102336) finish=0.0min speed=11861K/sec
md0 : active raid1 [dev fe:06][1] [dev fe:05][0]
      102336 blocks [2/2] [UU]
      
unused devices: <none>
gateway:~--# 

Then I can mark one of the drives in md1 as failed, and remove it:


gateway:~--# mdadm --manage /dev/md1 --fail /dev/mapper/bulk-lvol7
mdadm: set /dev/mapper/bulk-lvol7 faulty in /dev/md1
gateway:~--# cat /proc/mdstat 
Personalities : [raid1] [raid5] [multipath] 
read_ahead 1024 sectors
md1 : active raid1 [dev fe:08][1] [dev fe:07][0](F)
      102336 blocks [2/1] [_U]
      
md0 : active raid1 [dev fe:06][1] [dev fe:05][0]
      102336 blocks [2/2] [UU]
      
unused devices: <none>
gateway:~--# mdadm --manage /dev/md1 --remove /dev/mapper/bulk-lvol7
mdadm: hot removed /dev/mapper/bulk-lvol7
gateway:~--# cat /proc/mdstat 
Personalities : [raid1] [raid5] [multipath] 
read_ahead 1024 sectors
md1 : active raid1 [dev fe:08][1]
      102336 blocks [2/1] [_U]
      
md0 : active raid1 [dev fe:06][1] [dev fe:05][0]
      102336 blocks [2/2] [UU]
      
unused devices: <none>
gateway:~--# 


So you can do it, but you need to have decided you're going to do it
before you put the filesystem on...


...

Actually, that's not quite true.

I just created an ext2 partition, 
copied data onto it, 
unmounted it, 
put it in a RAID-1 array with the other partition missing, 
e2fsck'd it (complained about size mismatch - but passed)
then ran resize2fs on it to fix the size error (roughly the sequence
sketched below).
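
As a sketch only - assuming the existing ext2 filesystem lives on the
hypothetical /dev/hda7, and that your backups are good - the conversion
looks something like:

  umount /dev/hda7
  # wrap the existing filesystem in a one-disk ("degraded") RAID-1;
  # mdadm will warn that the partition already contains a filesystem
  mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/hda7 missing
  # fsck complains the fs is slightly bigger than the md device...
  e2fsck -f /dev/md0
  # ...so shrink it to fit, then mount as usual
  resize2fs /dev/md0
  mount /dev/md0 /mnt/data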

I think the RAID superblock lives at the *end* of the partition - so if
you shrink the filesystem with resize2fs *before* making the partition
part of a RAID array, the whole conversion might work without the size
complaint...

But that's the sequence that might lose your data entirely.

Creating the RAID-1 array with one device listed as missing is what
stops md from overwriting one disk with the contents of the other,
which is what would normally happen as soon as the array is created.
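
For contrast, this is the creation that would normally destroy one
copy - a sketch of what NOT to run on partitions holding data you care
about, because md starts resyncing one half over the other straight away:

  # DANGEROUS on live data: with both halves present, md picks one
  # device and copies it over the other as soon as the array exists
  mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/hda7 /dev/hdb8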

<shudder>

Well, It Worked For Me.

cheers, Rich.

-- 
rich walker         |  Shadow Robot Company | rw at shadow.org.uk
technical director     251 Liverpool Road   |
need a Hand?           London  N1 1LX       | +UK 20 7700 2487
www.shadow.org.uk/products/newhand.shtml



