[Wylug-help] Inactive RAID 10 Array - Need More Help Please

Dave Fisher wylug-help at davefisher.co.uk
Fri Apr 17 00:05:53 UTC 2009


On Tue, Apr 14, 2009 at 11:58:15PM +0100, Chris Davies wrote:
> If you "mdadm --examine --scan -v", and the superblocks are still 
> intact, you'll get a dump of something like this (from my RAID 1 setup):
> 
> # mdadm --examine --scan -v
> ARRAY /dev/md1 level=raid1 num-devices=2 UUID=...
>     devices=/dev/hdc1,/dev/hda1
> ARRAY /dev/md5 level=raid1 num-devices=2 UUID=...
>     devices=/dev/hdc5,/dev/hda5
> ARRAY /dev/md6 level=raid0 num-devices=2 UUID=...
>     devices=/dev/hdc6,/dev/hda6
> ARRAY /dev/md9 level=raid10 num-devices=4 UUID=...
>     devices=/dev/dm-8,/dev/dm-3,/dev/dm-2,/dev/dm-1

I get the following:

  $ sudo mdadm --examine --scan -v
  ARRAY /dev/md0 level=raid1 num-devices=2 UUID=e1023500:94537d05:cb667a5a:bd8e784b
     spares=1   devices=/dev/sde2,/dev/sdd2,/dev/sdc2,/dev/sdb2,/dev/sda2
  ARRAY /dev/md1 level=raid10 num-devices=4 UUID=f4ddbd55:206c7f81:b855f41b:37d33d37
     spares=1   devices=/dev/sde4,/dev/sdd4,/dev/sdc4,/dev/sdb4,/dev/sda4

Which seems consistent.

N.B. the drive letters are slightly different from my original post, because I
removed an unrelated drive that was not part of the RAID setup.
 
> If you see the right devices you should simply be able to restart the array:
> 
> # mdadm --assemble /dev/md1 /dev/sd{b,c,d,e,f}4

I did:

   $ sudo mdadm --assemble /dev/md1 /dev/sdb4 /dev/sdc4 /dev/sdd4 /dev/sde4

And got the following:

   mdadm: /dev/md1 assembled from 3 drives and 1 spare - need all 4 to start it (use --run to insist)

I'm not sure that I fully understand this message, so I'm not going to do
anything more than further diagnostics (see the read-only checks sketched
after this list) until:

  1. I do understand that message, and

  2. I have a better understanding of the current state of these devices.
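
For reference, these are the read-only checks I'm limiting myself to in the
meantime; as far as I know, neither of them writes anything to the disks:

   $ cat /proc/mdstat
   $ sudo mdadm --detail /dev/md1

(I'm assuming --detail will work on the partially assembled /dev/md1; if not,
/proc/mdstat should at least show whether md1 exists and is inactive.)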

Why is it saying "assembled from 3 drives"?

Both parts ("assembled" and "from 3 drives") are unclear to me.  

Hasn't mdadm actually *failed* to assemble?

Is it saying "3 drives" because it's not counting /dev/sdc4 or the spare
(/dev/sde4)?

I am hoping that the situation is not too bad, i.e. that in the worst case I can
explicitly fail and remove /dev/sdc4, then rebuild the array onto the spare
(sketched below).

The optimist/fantasist in me is hoping that it may be even simpler, e.g. that
/dev/sdc4 is fine and can be re-incorporated without the need to use the spare.
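
If I've understood the mdadm manpage correctly, the worst case would look
something like this once the array is actually running (untried, and the
device name is just my reading of the situation, so treat it as a sketch):

   $ sudo mdadm /dev/md1 --fail /dev/sdc4
   $ sudo mdadm /dev/md1 --remove /dev/sdc4

after which md should, as I understand it, start rebuilding onto the spare
(/dev/sde4) automatically.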

What, if any, are the risks involved in using the --run option?

What would --run actually do in this case?

The manpage says it:

   "will fully activate a partially assembled md array"

But I'm not sure what "fully activate" or "partially assembled" actually mean.
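
For the record, my reading of the manpage is that the invocation would be:

   $ sudo mdadm --assemble --run /dev/md1 /dev/sdb4 /dev/sdc4 /dev/sdd4 /dev/sde4

i.e. it would start the array degraded, with only the 3 drives it accepted,
rather than waiting for all 4. But I'd like that confirmed before I try it.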

Dave


------------------------------------------------------------------------------
FURTHER NOTES


As you can see from the output of mdadm -E (shown below), the disks seem to
be in the same condition described in my original post, i.e. the superblock on
the first device sees the third device (/dev/sdc4) as "faulty removed", but
none of the others do.

They all see /dev/sdc4 as "active sync".
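
One thing I did notice: the Events counters and Update Times differ between
the superblocks (221 on /dev/sda4, versus 219 on the next three and 218 on
the spare), so /dev/sda4's superblock was written after the others. A quick
way to compare them:

   $ for d in /dev/sd[a-e]4; do echo $d; sudo mdadm -E $d | grep -E 'Update Time|Events'; done

(I gather that "mdadm --assemble --force" can reconcile modestly out-of-date
superblocks, but I won't try anything like that until I understand what's
going on.)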

$ sudo mdadm -E /dev/sda4
[sudo] password for davef:
/dev/sda4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : f4ddbd55:206c7f81:b855f41b:37d33d37
  Creation Time : Tue May  6 02:06:45 2008
     Raid Level : raid10
  Used Dev Size : 965883904 (921.14 GiB 989.07 GB)
     Array Size : 1931767808 (1842.28 GiB 1978.13 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 1

    Update Time : Tue Apr 14 00:45:27 2009
          State : active
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1
       Checksum : 7a3576c1 - correct
         Events : 221

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       20        0      active sync   /dev/sdb4

   0     0       8       20        0      active sync   /dev/sdb4
   1     1       8       36        1      active sync   /dev/sdc4
   2     2       0        0        2      faulty removed
   3     3       8       68        3      active sync   /dev/sde4
   4     4       8       84        4      spare


$ sudo mdadm -E /dev/sdb4
/dev/sdb4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : f4ddbd55:206c7f81:b855f41b:37d33d37
  Creation Time : Tue May  6 02:06:45 2008
     Raid Level : raid10
  Used Dev Size : 965883904 (921.14 GiB 989.07 GB)
     Array Size : 1931767808 (1842.28 GiB 1978.13 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 1

    Update Time : Tue Apr 14 00:44:13 2009
          State : active
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 7a35767a - correct
         Events : 219

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       36        1      active sync   /dev/sdc4

   0     0       8       20        0      active sync   /dev/sdb4
   1     1       8       36        1      active sync   /dev/sdc4
   2     2       8       52        2      active sync   /dev/sdd4
   3     3       8       68        3      active sync   /dev/sde4
   4     4       8       84        4      spare


$ sudo mdadm -E /dev/sdc4
/dev/sdc4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : f4ddbd55:206c7f81:b855f41b:37d33d37
  Creation Time : Tue May  6 02:06:45 2008
     Raid Level : raid10
  Used Dev Size : 965883904 (921.14 GiB 989.07 GB)
     Array Size : 1931767808 (1842.28 GiB 1978.13 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 1

    Update Time : Tue Apr 14 00:44:13 2009
          State : active
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 7a35768c - correct
         Events : 219

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       52        2      active sync   /dev/sdd4

   0     0       8       20        0      active sync   /dev/sdb4
   1     1       8       36        1      active sync   /dev/sdc4
   2     2       8       52        2      active sync   /dev/sdd4
   3     3       8       68        3      active sync   /dev/sde4
   4     4       8       84        4      spare


$ sudo mdadm -E /dev/sdd4
/dev/sdd4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : f4ddbd55:206c7f81:b855f41b:37d33d37
  Creation Time : Tue May  6 02:06:45 2008
     Raid Level : raid10
  Used Dev Size : 965883904 (921.14 GiB 989.07 GB)
     Array Size : 1931767808 (1842.28 GiB 1978.13 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 1

    Update Time : Tue Apr 14 00:44:13 2009
          State : active
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 7a35769e - correct
         Events : 219

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       68        3      active sync   /dev/sde4

   0     0       8       20        0      active sync   /dev/sdb4
   1     1       8       36        1      active sync   /dev/sdc4
   2     2       8       52        2      active sync   /dev/sdd4
   3     3       8       68        3      active sync   /dev/sde4
   4     4       8       84        4      spare

$ sudo mdadm -E /dev/sde4
/dev/sde4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : f4ddbd55:206c7f81:b855f41b:37d33d37
  Creation Time : Tue May  6 02:06:45 2008
     Raid Level : raid10
  Used Dev Size : 965883904 (921.14 GiB 989.07 GB)
     Array Size : 1931767808 (1842.28 GiB 1978.13 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 1

    Update Time : Fri Apr 10 16:43:47 2009
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 7a31126a - correct
         Events : 218

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       84        4      spare

   0     0       8       20        0      active sync   /dev/sdb4
   1     1       8       36        1      active sync   /dev/sdc4
   2     2       8       52        2      active sync   /dev/sdd4
   3     3       8       68        3      active sync   /dev/sde4
   4     4       8       84        4      spare
