[Wylug-help] RAID, mdadm and df

Martyn Ranyard ranyardm at gmail.com
Fri Sep 24 13:18:15 UTC 2010


Hi Roger,

  The size thing is a container-versus-contents issue.  /dev/md1 is the
container for the filesystem, and although the container has been grown
with --grow, the filesystem within it has not.  You need to use the
filesystem's own tool to resize it.  For ext2/3/4 that tool is
resize2fs; ext3 and ext4 can be grown online while mounted, but ext2
has to be resized offline (livecd).  If you were using xfs, the
equivalent is xfs_growfs, which also works online.
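A minimal sketch of the resize step, assuming the root filesystem on
/dev/md1 is ext3 or ext4 (device name taken from your mail; run as root):

```shell
# 1. Confirm the array itself is the new size (reported in 1K blocks):
cat /proc/mdstat

# 2. Grow the filesystem to fill its container.  With no explicit size
#    argument, resize2fs expands to the full size of the device:
resize2fs /dev/md1

# 3. df should now report the larger filesystem:
df -B 1K /
```

If you do take it offline anyway, run e2fsck -f /dev/md1 before the
resize; resize2fs will insist on a recent check for an unmounted
filesystem.  Note that tune2fs -l /dev/md1 reports the block count in
filesystem blocks (usually 4K), while /proc/mdstat uses 1K blocks, so
the raw numbers won't match one-for-one.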

  As to the physical connection: no, with modern Linux RAID you can
switch the physical devices around almost willy-nilly, because the RAID
superblock on each partition records which array it belongs to and
which member of that array it is.
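You can verify this from any connection point, since the superblock
travels with the partition.  A sketch, assuming /dev/sdb3 is one of
your member partitions (name taken from your mail):

```shell
# Show the RAID superblock stored on the partition itself:
# array UUID and this member's role, independent of the sdX letter.
mdadm --examine /dev/sdb3

# Show the assembled array's view of its members:
mdadm --detail /dev/md1
```

If your mdadm.conf identifies the array by UUID (ARRAY /dev/md1
UUID=...) rather than by device names, assembly is unaffected by drives
changing letters between boots.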

HTH
--
Martyn

On 24 September 2010 13:20, Roger <roger at roger-beaumont.co.uk> wrote:

> My server uses RAID 1 for robust storage, but lately I've had some problems
> with the component drives, so after a few years while it all 'just worked'
> (and I forgot what I learned) I've now had to do stuff...
>
> The RAID was 2 x 750Gig SATA II drives (Seagate).
>
> One of them developed a fault, so I plugged in 2 1TB drives (more Seagate
> SATA II), to give a spare as well as replacing the failed drive.
>
> The partitioning was very simple:
> md0 - sd?1    257008 blocks, type fd (raid autodetect) mounted as /boot
>      sd?2   2048287 blocks, type 82 (swap)
> md1 - sd?3 730266705 blocks, type fd (raid autodetect) mounted as /
>
> On the 1TB drives, partitions 1 & 2 were identical, but partition 3 was
> 974454592 blocks.
>
> Soon, a fault appeared on the other 750 Gig, so I used mdadm to --fail then
> --remove that drive.  Without the smaller drive I then used --grow to
> increase the space on md1.
>
> Now,
>  cat /proc/mdstat
> reports the full 974454592 blocks on md1, but df still reports only
> 707395896 blocks.  It isn't the discrepancy between 707395896 and 730266705
> that bothers me - I know that difference is to hold the superblock - it's
> the difference between 707395896 & 974454592; the extra 250 Gig.
>
> I priced what's now available and ordered a 1.5TB drive, planning that when
> it arrived, I'd replace the old sda drive, thinking in my ignorance, that
> that would allow the other drives to stay at their current locations, so sdb
> remained sdb and sdc ditto.
>
> Between placing that order and collecting it, one of the 1TB drives seems
> to have developed a fault (in sdc1 - the /boot partition), so I've bought
> two of the 1.5TB drives.
>
> Finally, my questions:
>
> 1.  Does it actually matter if individual drives retain their letter
> (interface connection)?  If not does the RAID software automatically detect
> any change, or do I need to do something manually?  (Should I connect the
> 1.5TB drives to the SATA0 & SATA3 connectors, or can I put them on 0 & 1,
> and move one of the 1TB drives to 3?)
>
> 2.  Why the discrepancy between /proc/mdstat and what df reports?  The
> latest drives will more than double the original capacity of md1, but I want
> that to be reliably available.  (I have rebooted since the --grow command
> seemed to succeed, with no change.)
>
> I'd like to know what went awry in the last replacement, before doing the
> next one...
>
> Thanks in advance,
>
> Roger
>
> _______________________________________________
> Wylug-help mailing list
> Wylug-help at wylug.org.uk
> https://mailman.lug.org.uk/mailman/listinfo/wylug-help
>

