[Wylug-help] Inactive RAID 10 Array - Step 2: Diagnosis

Dave Fisher wylug-help at davefisher.co.uk
Thu Apr 16 16:22:36 UTC 2009


On Wed, Apr 15, 2009 at 08:59:58PM +0100, Chris Davies wrote:
> Fortunately the work-around is relatively straightforward: boot a 
> different kernel and the resync will happen automatically.

Could it really be that simple?

I suspect that I am worrying too much, because:

  1. I can't afford to make any mistake that might make recoverable data unrecoverable.
  
  2. None of the documentation I've seen gets close to explaining how I might
     recover the array from the state that it is currently in.
     
     - In fact, nothing but bug reports even describes anything similar.

  3. My back-ups will complete this evening and then I'll have to do something,
     e.g. install a new kernel on the copied system. 

So far, I've been unable to find out how the (S) markers, shown in the mdstat
output below, get set.

Nor have I found any documentation which lists all of the possible markers or
their definitive meanings.

I would have thought that if (S) means 'spare', a new kernel would see the (S)
and conclude that there were no non-spare active partitions from which to
resync or reassemble the array.

In other words, I would have thought that I would have to toggle these markers
to something more sensible, e.g. turn them off.

As I say, I'm probably worrying too much, but it would be reassuring to get
some advice from someone who has some in-depth knowledge of RAID ... unlike me.

Dave


####################
# cat /proc/mdstat #
####################
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdb4[0](S) sdf4[4](S) sde4[3](S) sdd4[2](S) sdc4[1](S)
      4829419520 blocks

md0 : active raid1 sdb2[0] sdf2[2](S) sdc2[1]
      9767424 blocks [2/2] [UU]

unused devices: <none>
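For what it's worth, the markers can at least be read off mechanically, even if
their meanings aren't documented anywhere obvious. A throwaway sketch (purely
illustrative, not a recovery step; it assumes the member format shown above,
i.e. name[slot] followed by an optional one-letter marker such as (S)):

```python
import re

# The md1 line from /proc/mdstat above, verbatim.
line = "md1 : inactive sdb4[0](S) sdf4[4](S) sde4[3](S) sdd4[2](S) sdc4[1](S)"

# Each array member appears as name[slot], optionally suffixed with a
# parenthesised single-letter marker, e.g. (S).
members = re.findall(r'(\w+)\[(\d+)\](\(\w\))?', line)

# Collect the members flagged (S).
spares = [name for name, slot, marker in members if marker == '(S)']
print(spares)  # all five members of md1 carry the (S) marker
```

Run against the output above, this lists every partition in md1 as (S), which
is exactly what worries me: if (S) really means 'spare', there is nothing left
for a new kernel to treat as an active member.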


