[Wylug-help] Inactive RAID 10 Array - Step 2: Diagnosis

Dave Fisher wylug-help at davefisher.co.uk
Wed Apr 15 18:58:15 UTC 2009


Hi All,

I've started to back up all the disks containing my RAID arrays, but it's
taking a lot longer than I anticipated.

It could be two more days before I can start tinkering with the copies,
and I can't really concentrate on any other major topic, so I'm hoping
to use some of the time to prepare myself for the next (diagnostic) step.

My first question is quite simply: what does 'inactive' really mean? 

Also, I don't think anyone has answered my previous question about what
the (S) flags mean in the following mdstat output.

On Tue, Apr 14, 2009 at 02:59:06PM +0100, Dave Fisher wrote:
> ####################
> # cat /proc/mdstat #
> ####################
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
> md1 : inactive sdb4[0](S) sdf4[4](S) sde4[3](S) sdd4[2](S) sdc4[1](S)
>       4829419520 blocks
>        
> md0 : active raid1 sdb2[0] sdf2[2](S) sdc2[1]
>       9767424 blocks [2/2] [UU]
>       
> unused devices: <none>

Googling turned up a couple of mailing list posts suggesting that the
(S) flag means the kernel thinks the device in question is a spare.

Can anyone confirm this?

If so, what could be causing that perception?
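
If I've understood correctly, the way to check what each partition's own
superblock claims (as opposed to what the kernel currently thinks) is
mdadm's --examine mode. Here's a sketch of what I'm planning to run on
the copies, using the same device names as in the mdstat output above:

  mdadm --examine /dev/sdb4   # likewise sdc4, sdd4, sde4 and sdf4

I'm assuming the 'State' line, the event counts, and the device table at
the end of that output will show whether each disk records itself as an
active member or a spare, but corrections are welcome.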

Obviously, the purpose of answering these questions is to answer a
broader one: what does the kernel think the current state of the array
and its component partitions is?

Perhaps someone could suggest some other diagnostics that I could run
once the copies become available?
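
For instance, I'm guessing at something like the following, but please
tell me if any of these are pointless (or risky) to run against an
inactive array:

  mdadm --detail /dev/md1            # the kernel's view of the inactive array
  mdadm --examine --scan --verbose   # what mdadm would try to assemble from the superblocks it finds
  smartctl -a /dev/sdb               # rule out plain disk trouble (likewise sdc to sdf)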

N.B. latecomers to the thread might want to read the diagnostics I
included in the original post.

Dave


