[sclug] RAID 5

Peter Brewer p.w.brewer at reading.ac.uk
Thu Sep 22 12:05:09 UTC 2005


Keith Edmunds wrote:

> Peter Brewer wrote:
>
>> We don't have an /etc/raidtab! 
>
>
> /etc/raidtab is only used by the deprecated raidtools packages, so I 
> wouldn't expect that file to be there. These days RAID arrays are 
> managed with mdadm.
>
> The command you want is
>
>     mdadm --query --detail /dev/md0
>
> - but replace '/dev/md0' with the /data device name. It does sound as 
> if the RAID setup is faulty in some way. What you might want to do is 
> put the drive back and THEN try the above command, and see if it shows 
> anything amiss.
>
> Keith
>
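
For the record, there seem to be a couple of quicker ways to get an
overview of all the arrays at once (both are standard md/mdadm, so I
assume they apply on this box too):

    cat /proc/mdstat
    mdadm --detail --scan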

The plot thickens....  When I reinstate the 8th drive, reboot and run 
mdadm on md2 (the /data partition), I get:

*******************************************************
        Version : 00.90.00
  Creation Time : Wed Jan  1 00:25:32 2003
     Raid Level : raid5
     Array Size : 1367451904 (1304.10 GiB 1400.27 GB)
    Device Size : 195350272 (186.30 GiB 200.04 GB)
   Raid Devices : 8
  Total Devices : 7
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jan  1 23:59:55 2003
          State : active, degraded
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : b980a90c:fcc9a7d3:caae324a:a2b9f2df
         Events : 0.19

    Number   Major   Minor   RaidDevice State
       0       8       22        0      active sync   /dev/sdb6
       1       8       38        1      active sync   /dev/sdc6
       2       8       54        2      active sync   /dev/sdd6
       3       8       70        3      active sync   /dev/sde6
       4       0        0        4      faulty removed
       5       8      102        5      active sync   /dev/sdg6
       6       8      118        6      active sync   /dev/sdh6
       7       8       86        7      active sync   /dev/sdf6
*******************************************************
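
Incidentally, the numbers above do add up for a degraded 8-disk RAID 5: 
the usable size is (number of devices - 1) x device size, i.e.

    (8 - 1) x 195350272 KB = 1367451904 KB

which is exactly the "Array Size" reported, and "active, degraded" with 
7 of 8 devices working means the array is running on parity with no 
redundancy left.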


So it appears that sda6 is faulty.  However, if I run mdadm on md1 and 
md0 (the RAIDs that were still working when I removed a drive), I get:

*******************************************************
    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       37        1      active sync   /dev/sdc5
       2       8       53        2      active sync   /dev/sdd5
       3       0        0        3      faulty removed
       4       8        5        4      active sync   /dev/sda5
       5       8      101        5      active sync   /dev/sdg5
       6       8      117        6      active sync   /dev/sdh5
       7       8       85        7      active sync   /dev/sdf5
*******************************************************

and:

*******************************************************
    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       1       8       35        1      active sync   /dev/sdc3
       2       8       51        2      active sync   /dev/sdd3
       3       0        0        3      faulty removed
       4       8        3        4      active sync   /dev/sda3
       5       8       99        5      active sync   /dev/sdg3
       6       8      115        6      active sync   /dev/sdh3
       7       8       83        7      active sync   /dev/sdf3
*******************************************************

respectively.  These show sde5 and sde3 to be knackered.  But the disk 
I physically removed was the last one (presumably sdf*), and a RAID 5 
array can only survive the loss of one device - so if parts of sda and 
sde really are knackered, how did root and /home carry on working with 
sde bad AND sdf pulled out of the raid!??!? :-S
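
Next step, I suppose, is to ask the disks and the superblocks directly. 
mdadm's --examine reads the md superblock straight off a partition, and 
smartctl (from smartmontools, assuming it's installed - it may need a 
-d option depending on the controller) should say whether the drives 
themselves think they're healthy.  Something like:

    mdadm --examine /dev/sda6
    mdadm --examine /dev/sde5
    smartctl -a /dev/sda
    smartctl -a /dev/sde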

Surely there aren't problems with two of the brand new HDs?  It must be 
a cockup with the RAID controller?  I think we'll try FC4 next.
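
If the drives do turn out to be fine, my understanding is that the 
missing partitions can simply be re-added and md will rebuild them in 
the background.  A sketch for md2, assuming the device names haven't 
shuffled since the reboot (and the analogous --add of sde5/sde3 for 
md1/md0):

    mdadm /dev/md2 --add /dev/sda6
    watch cat /proc/mdstat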

My head hurts.

Pete



