[Wylug-help] Inactive RAID 10 Array
Dave Fisher
wylug-help at davefisher.co.uk
Tue Apr 14 21:57:03 UTC 2009
On Tue, Apr 14, 2009 at 04:10:55PM +0100, John Hodrien wrote:
> On Tue, 14 Apr 2009, Dave Fisher wrote:
>
> > I suspect that the first thing I should do is dd sd{b,c,d,e}4 to some spare
> > disks.
>
> Fair enough. How come the array's inactive when you've only lost one
> partition?
No idea.
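Before touching the array I want to rehearse that dd step somewhere safe. The following exercises the same command on scratch files; with the real hardware the input would be each member partition (e.g. /dev/sdb4) and the output an image on a spare disk, so the paths here are just stand-ins.

```shell
# Rehearsal of the dd imaging step on scratch files; on the real machine
# if= would be a member partition (e.g. /dev/sdb4) and of= an image file
# on a spare disk -- the paths below are stand-ins.
SRC=$(mktemp); DST=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=64K count=4 2>/dev/null
# conv=noerror,sync carries on past read errors and pads unreadable blocks
# with zeros, so offsets in the image stay aligned with the source device.
dd if="$SRC" of="$DST" bs=64K conv=noerror,sync 2>/dev/null
cmp -s "$SRC" "$DST" && RESULT="image matches source"
echo "$RESULT"
rm -f "$SRC" "$DST"
```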
> What's happened to get you in this state?
Four events, in chronological order:
1. Monitor blew up ... bang, smoke, acrid smell, silence
2. No ssh access, no keyboard/mouse response, no screen output with new monitor, so I power-cycled the machine
3. Failed fsck on reboot
4. I tried mdadm autodetect ... this may have made things worse?
Does this give you any ideas?
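In case it helps, my plan for gathering evidence before touching anything is roughly this (untested sketch): mdadm --examine reads each member's superblock, and mismatched "Events" counts between members usually identify which device dropped out first. The guard keeps the loop harmless on a machine without these disks.

```shell
# Hedged sketch: inspect each md1 member's superblock before any repair.
# Mismatched "Events" counts usually show which member fell out first.
# The [ -b ] guard makes this safe to run where the disks don't exist.
REPORT=""
for part in /dev/sdb4 /dev/sdc4 /dev/sdd4 /dev/sde4; do
  if [ -b "$part" ]; then
    mdadm --examine "$part" | grep -E 'UUID|Events|State'
  else
    REPORT="$REPORT$part absent; "
  fi
done
echo "$REPORT"
```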
What does (S) mean in the mdstat output?
I notice that it is next to a spare partition (sdf2) in the working array md0.
####################
# cat /proc/mdstat #
####################
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdb4[0](S) sdf4[4](S) sde4[3](S) sdd4[2](S) sdc4[1](S)
4829419520 blocks
md0 : active raid1 sdb2[0] sdf2[2](S) sdc2[1]
9767424 blocks [2/2] [UU]
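From what I can tell, (S) in /proc/mdstat marks a device md is holding as a spare rather than an active member, and an inactive array whose members are all (S) is typically one md grabbed but could not assemble -- which would match md1's state. A quick awk over the output above (pasted inline as sample data) lists the spares:

```shell
# From what I can tell, (S) marks a device md holds as a spare rather than
# an active member. List spares per array from the mdstat output above,
# pasted inline here as sample data.
MDSTAT=$(mktemp)
cat > "$MDSTAT" <<'EOF'
md1 : inactive sdb4[0](S) sdf4[4](S) sde4[3](S) sdd4[2](S) sdc4[1](S)
md0 : active raid1 sdb2[0] sdf2[2](S) sdc2[1]
EOF
SPARES=$(awk '{for (i = 3; i <= NF; i++)
                 if ($i ~ /\(S\)$/) { sub(/\[.*/, "", $i); print $1 ": " $i }}' "$MDSTAT")
echo "$SPARES"
rm -f "$MDSTAT"
```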
N.B. sdf no longer exists, and should have nothing to do with RAID ... it was
just an extra PATA drive on the machine.
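For what it's worth, the recovery path I'm considering (only after imaging the partitions with dd as above) is roughly the following. This is an untested sketch, and anything beyond a plain --assemble, such as --force, I'd treat as a last resort.

```shell
# Hedged sketch of a reassembly attempt (untested): stop the half-assembled
# md1 to release the (S)-held members, then ask mdadm to reassemble it from
# the four surviving partitions (sdf is gone). The guard keeps this
# harmless on a machine without the disks.
if command -v mdadm >/dev/null 2>&1 && [ -b /dev/sdb4 ]; then
  mdadm --stop /dev/md1
  mdadm --assemble /dev/md1 /dev/sdb4 /dev/sdc4 /dev/sdd4 /dev/sde4
  STATUS="attempted"
else
  STATUS="skipped: mdadm or member devices not present"
fi
echo "$STATUS"
```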
Dave
P.S. See newly posted question about creating device files for new SATA drives.