[Wylug-help] help with RAID

Roger Beaumont roger.bea at blueyonder.co.uk
Sun Jan 28 21:54:26 GMT 2007


I've got a pair of drives configured as raid-1.  Now read on...

They are getting full, so I got a new pair of 250gig SATA drives.  The plan 
was to configure those too as raid-1, then add them in linear mode to the 
md partition that's filling up (a plan derived from the Software-RAID 
HOWTO at unthought.net).
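
Just to make the plan concrete, here's roughly what I had in mind, as a 
sketch rather than the exact commands (the SATA device names and the new 
md number are placeholders, and the linear step is the part I'll come 
back to):
------------------
# mirror the two new SATA drives as a fresh raid-1 array
# (placeholder names -- the 'fd' partitions would be created first):
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# the HOWTO-style idea is then a linear array concatenating /dev/md8 and
# /dev/md9 into one big device -- that's the riskier bit, more of that later
------------------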

Step 1 was to look at /proc/mdstat and check that the existing pair were OK. 
They weren't.  After extensive web browsing and use of 'mdadm --detail' and 
the manufacturer's diagnostic tool (which says the drive is perfect), it 
seems the problem wasn't the drive itself, but a dodgy power lead that made 
it cut out.  However, the RAID software is now convinced the drive is duff.
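
For completeness, the checking so far has been along these lines (a sketch; 
the hda partition number is whichever one belongs to the array being checked):
------------------
cat /proc/mdstat              # overall state; a missing member shows as an
                              # underscore, e.g. [U_]
mdadm --detail /dev/md0       # per-array detail (full output below)
mdadm --examine /dev/hda1     # superblock info from a member on the "failed" disk
------------------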

I'll probably be back again for more help (the guy who set it up for me 
originally has moved), but for now I just want to know how to get the 
existing raid to re-synch, so that at least the existing stuff is all safe 
again.
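
My guess at the re-sync (please correct me if this is wrong) is simply to 
hot-add the dropped hda partitions back into their arrays and let mdadm 
rebuild them from the surviving hdc members, something like:
------------------
# partition numbers are illustrative -- each hda partition needs matching
# to the right md device first (the two disks are not partitioned identically):
mdadm /dev/md0 --add /dev/hda1
mdadm /dev/md1 --add /dev/hda2
# ...and so on for the remaining degraded arrays, then watch the rebuild:
watch cat /proc/mdstat
------------------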

What else do I need to tell you?

The version of mdadm currently installed is 1.8.1 (yum says 1.12.0-1.caos 
is available, but my FC-5 workstation has 2.3.1).

There is no /etc/raidtab file.

The only un-commented-out line in /etc/mdadm.conf is
------------------
   MAILADDR root
------------------

 >df
gives:
------------------
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0               1881568    971396    814592  55% /
/dev/md1                241036      7594    220998   4% /boot
/dev/md3               4237632   2473196   1549172  62% /usr
/dev/md4                932792     16484    868924   2% /tmp
/dev/md5                932792    175084    710324  20% /var
/dev/md6                932792    304588    580820  35% /var/cache
/dev/md7                458800     55095    380016  13% /var/log
/dev/md8             170297264 157843988   3802684  98% /space
------------------
(It is md8 to which I want to add the new drive pair in linear mode, but 
more of that later.)

 >fdisk -l /dev/hdc
   (the active disk)  gives:
------------------
Disk /dev/hdc: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot    Start       End      Blocks   Id  System
/dev/hdc1             1        32      257008+  fd  Linux raid autodetect
/dev/hdc2            33       282     2008125   fd  Linux raid autodetect
/dev/hdc3           283       402      963900   fd  Linux raid autodetect
/dev/hdc4           403     24321   192129367+   5  Extended
/dev/hdc5           403      1027     5020281   fd  Linux raid autodetect
/dev/hdc6          1028      1152     1004031   fd  Linux raid autodetect
/dev/hdc7          1153      1277     1004031   fd  Linux raid autodetect
/dev/hdc8          1278      1402     1004031   fd  Linux raid autodetect
/dev/hdc9          1403      1465      506016   fd  Linux raid autodetect
/dev/hdc10         1466     24321   183590788+  fd  Linux raid autodetect
------------------

 >fdisk -l /dev/hda
   (the inactive disk)  gives:
------------------
Disk /dev/hda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot    Start         End      Blocks   Id  System
/dev/hda1             1       238     1911703+  fd  Linux raid autodetect
/dev/hda2           239       269      249007+  fd  Linux raid autodetect
/dev/hda3           270       384      923737+  fd  Linux raid autodetect
/dev/hda4           385     30401   241111552+   5  Extended
/dev/hda5           385       920     4305388+  fd  Linux raid autodetect
/dev/hda6           921      1038      947803+  fd  Linux raid autodetect
/dev/hda7          1039      1156      947803+  fd  Linux raid autodetect
/dev/hda8          1157      1274      947803+  fd  Linux raid autodetect
/dev/hda9          1275      1333      473886   fd  Linux raid autodetect
/dev/hda10         1334     22872   173011986   fd  Linux raid autodetect
/dev/hda11        22873     30401    60476661   83  Linux
------------------

 >mdadm --detail /dev/md0
   gives
------------------
/dev/md0:
         Version : 00.90.01
   Creation Time : Sun Jul  3 20:03:50 2005
      Raid Level : raid1
      Array Size : 1911616 (1866.81 MiB 1957.49 MB)
     Device Size : 1911616 (1866.81 MiB 1957.49 MB)
    Raid Devices : 2
   Total Devices : 1
Preferred Minor : 0
     Persistence : Superblock is persistent

     Update Time : Sun Jan 28 21:48:06 2007
           State : clean, degraded
  Active Devices : 1
Working Devices : 1
  Failed Devices : 0
   Spare Devices : 0

            UUID : 6b8b4567:327b23c6:643c9869:66334873
          Events : 0.4415723

     Number   Major   Minor   RaidDevice State
        0      22        2        0      active sync   /dev/hdc2
        1       0        0        -      removed
------------------

 >mdadm --detail /dev/md1
   gives
------------------
/dev/md1:
         Version : 00.90.01
   Creation Time : Sun Jul  3 20:03:50 2005
      Raid Level : raid1
      Array Size : 248896 (243.06 MiB 254.87 MB)
     Device Size : 248896 (243.06 MiB 254.87 MB)
    Raid Devices : 2
   Total Devices : 1
Preferred Minor : 1
     Persistence : Superblock is persistent

     Update Time : Sun Jan 28 21:41:31 2007
           State : clean, degraded
  Active Devices : 1
Working Devices : 1
  Failed Devices : 0
   Spare Devices : 0

            UUID : 6b8b4567:327b23c6:643c9869:66334873
          Events : 0.3835

     Number   Major   Minor   RaidDevice State
        0      22        1        0      active sync   /dev/hdc1
        1       0        0        -      removed
------------------

 >mdadm --detail /dev/md8
   gives
------------------
/dev/md8:
         Version : 00.90.01
   Creation Time : Sun Jul  3 20:04:08 2005
      Raid Level : raid1
      Array Size : 173011904 (164.100 GiB 177.16 GB)
     Device Size : 173011904 (164.100 GiB 177.16 GB)
    Raid Devices : 2
   Total Devices : 1
Preferred Minor : 8
     Persistence : Superblock is persistent

     Update Time : Sun Jan 28 21:47:38 2007
           State : clean, degraded
  Active Devices : 1
Working Devices : 1
  Failed Devices : 0
   Spare Devices : 0

            UUID : 6b8b4567:327b23c6:643c9869:66334873
          Events : 0.3342355

     Number   Major   Minor   RaidDevice State
        0      22       10        0      active sync   /dev/hdc10
        1       0        0        -      removed
------------------

Is that enough info?  ;^)

TIA,

Roger
