Hi Roger,<div><br></div><div> The size thing is a container mindset issue. /dev/md1 is the container for the filesystem: although the container has been grown with --grow, the filesystem within it has not. You need to use the filesystem's own tool to resize it, and unless you're using xfs, I believe you'll have to do this offline (from a live CD). For ext2/3/4 the tool is resize2fs, but I haven't got the full suite available to me at the moment to check the details.</div>
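Roughly, it would go something like this (a sketch only, assuming /dev/md1 holds an ext3/ext4 filesystem; since it's your root filesystem you'd run this from the live CD with the array assembled but not mounted):

```shell
# resize2fs insists on a recent clean fsck first, so force one
# (the filesystem must be unmounted for this, hence the live CD):
e2fsck -f /dev/md1

# With no size argument, resize2fs grows the filesystem to fill
# the whole device, i.e. the enlarged /dev/md1:
resize2fs /dev/md1

# After rebooting into the system, df should report the full
# array capacity from /proc/mdstat rather than the old size:
df -h /
```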
<div><br></div><div> As to the physical connection: no, with modern Linux RAID you can switch the physical devices around almost willy-nilly, as the RAID superblock on each member partition contains all the information needed to say "this partition is part x of RAID set y".</div>
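You can see this for yourself (device names here are just examples): each member carries a superblock recording the array's UUID and that member's role, and that is what the kernel matches on at assembly time, not the sdX letter:

```shell
# Print the RAID superblock of one member partition. The
# "Array UUID" and the device's role/number identify it no
# matter which SATA port it ends up on:
mdadm --examine /dev/sdb3

# Show the assembled array and which devices currently fill
# each slot:
mdadm --detail /dev/md1
```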
<div><br></div><div>HTH</div><div>--</div><div>Martyn<br><br><div class="gmail_quote">On 24 September 2010 13:20, Roger <span dir="ltr"><<a href="mailto:roger@roger-beaumont.co.uk">roger@roger-beaumont.co.uk</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">My server uses RAID 1 for robust storage, but lately I've had some problems with the component drives, so after a few years while it all 'just worked' (and I forgot what I learned) I've now had to do stuff...<br>
<br>
The RAID was 2 x 750Gig SATA II drives (Seagate).<br>
<br>
One of them developed a fault, so I plugged in 2 1TB drives (more Seagate SATA II), to give a spare as well as replacing the failed drive.<br>
<br>
The partitioning was very simple:<br>
md0 - sd?1 257008 blocks, type fd (raid autodetect) mounted as /boot<br>
sd?2 2048287 blocks, type 82 (swap)<br>
md1 - sd?3 730266705 blocks, type fd (raid autodetect) mounted as /<br>
<br>
On the 1TB drives, partitions 1 & 2 were identical, but partition 3 was 974454592 blocks.<br>
<br>
Soon, a fault appeared on the other 750 Gig, so I used mdadm to --fail then --remove that drive. Without the smaller drive I then used --grow to increase the space on md1.<br>
<br>
Now,<br>
cat /proc/mdstat<br>
reports the full 974454592 blocks on md1, but df still reports only 707395896 blocks. It isn't the discrepancy between 707395896 and 730266705 that bothers me - I know that difference is to hold the superblock - it's the difference between 707395896 & 974454592; the extra 250 Gig.<br>
<br>
I priced what's now available and ordered a 1.5TB drive, planning that when it arrived, I'd replace the old sda drive, thinking in my ignorance, that that would allow the other drives to stay at their current locations, so sdb remained sdb and sdc ditto.<br>
<br>
Between placing that order and collecting it, one of the 1TB drives seems to have developed a fault (in sdc1 - the /boot partition), so I've bought two of the 1.5TB drives.<br>
<br>
Finally, my questions:<br>
<br>
1. Does it actually matter if individual drives retain their letter (interface connection)? If not does the RAID software automatically detect any change, or do I need to do something manually? (Should I connect the 1.5TB drives to the SATA0 & SATA3 connectors, or can I put them on 0 & 1, and move one of the 1TB drives to 3?)<br>
<br>
2. Why the discrepancy between /proc/mdstat and what df reports? The latest drives will more than double the original capacity of md1, but I want that to be reliably available. (I have rebooted since the --grow command seemed to succeed, with no change.)<br>
<br>
I'd like to know what went awry in the last replacement, before doing the next one...<br>
<br>
Thanks in advance,<br>
<br>
Roger<br>
<br>
<br>
_______________________________________________<br>
Wylug-help mailing list<br>
<a href="mailto:Wylug-help@wylug.org.uk" target="_blank">Wylug-help@wylug.org.uk</a><br>
<a href="https://mailman.lug.org.uk/mailman/listinfo/wylug-help" target="_blank">https://mailman.lug.org.uk/mailman/listinfo/wylug-help</a><br>
</blockquote></div><br></div>