<div dir="ltr"><div>Not wanting to add discs to the RAID, want to replace the current ones as they are aged out.</div><div><br></div><div>Martin suggested on Signal using BTRFS to build out a new RAID 1 and then switch, think I might go down that route.<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 18 Jan 2023 at 10:29, Brian Pickford via Nottingham <<a href="mailto:nottingham@mailman.lug.org.uk">nottingham@mailman.lug.org.uk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi,<div><br></div><div>I'm far from an expert here, but I have setup and used arrays for as long as you have. Mine tend to fail more often though :(</div><div><br></div><div>When replacing an array, I'd set up a new array with a temporary mount point say, /mnt/lib</div><div>mkdir /mnt/lib</div><div>set ownership as you need </div><div>rsync the current array data to the temp so /var/lib to /mnt/lib</div><div>use lsblk and blkid to get the UUID</div><div>boot using a liveUSB</div><div>alter my fstab to refer to the new UUID, keeping the old one either on a temp mount point or commented out</div><div>check everything is OK, leave like this for a week or so, then remove the old drives</div><div><br></div><div>Seems easier than trying to add disks of different sizes to your existing array</div><div><br></div><div>Good luck</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 17 Jan 2023 at 23:58, J I via Nottingham <<a href="mailto:nottingham@mailman.lug.org.uk" target="_blank">nottingham@mailman.lug.org.uk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hello everyone, hope you are keeping well.</div><div><br></div><div>I have an old RAID setup using drives from 2011 and whilst I've not had issues, I am getting a bit nervous about them.</div><div><br></div><div>At the moment both 2TB drives are in mdadm RAID1 (/dev/md0), and LVM sits on top of that, the actual FS is the unexciting Ext4:</div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0);background-color:rgb(255,255,255)">#lsblk</span></span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0);background-color:rgb(255,255,255)">...<br></span></span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0);background-color:rgb(255,255,255)">sda 8:0 0 1.8T 0 disk </span><br>└─md0 9:0 0 1.8T 0 raid1 <br> └─primary-varlib 253:0 0 1.8T 0 lvm /var/lib
<br>sdb 8:16 0 1.8T 0 disk <br>└─md0 9:0 0 1.8T 0 raid1 <br> └─primary-varlib 253:0 0 1.8T 0 lvm /var/lib</span></div><div><span style="font-family:monospace">... <br></span></div><div>As you can see, they are not boot drives, just data really.</div><div><br></div><div>I want to replace them both with new 4TB Seagate IronWolf drives.<br></div><div><br></div><div>Can anyone point me at some instructions on how to do that? My search-fu is failing me, and I have a feeling it's going to be more complicated than just replacing the drives one at a time with a rebuild (and then some magic to grow things to the full 4TB); I've sketched the sequence I have in mind at the end of this mail.<br></div><div><br></div><div>Do I need to be overly concerned with /etc/fstab, which seems to be using a UUID:<br></div><div><span style="font-family:monospace">/dev/disk/by-id/dm-uuid-LVM-47rf... /var/lib ext4 defaults 0 0</span></div><div><br></div><div>This is all on Ubuntu Server 22.04.</div><div><br></div><div>Cheers!</div><div><br></div><div>J.<br></div><div><br></div><div>Some more info if it helps:<br><span style="font-family:monospace">#file -sL /dev/primary/varlib <br>/dev/primary/varlib: Linux rev 1.0 ext4 filesystem data...<br><br>#mdadm --misc --detail /dev/md0<br>/dev/md0:<br> Version : 1.2<br> Creation Time : Sun May 24 15:00:15 2020<br> Raid Level : raid1<br> Array Size : 1953382464 (1862.89 GiB 2000.26 GB)<br> Used Dev Size : 1953382464 (1862.89 GiB 2000.26 GB)<br> Raid Devices : 2<br> Total Devices : 2<br> Persistence : Superblock is persistent<br><br> Intent Bitmap : Internal<br><br> Update Time : Tue Jan 17 23:22:16 2023<br> State : clean <br> Active Devices : 2<br> Working Devices : 2<br> Failed Devices : 0<br> Spare Devices : 0<br><br>Consistency Policy : bitmap<br><br> Name : ubuntu-server:0<br> UUID : 7df346bb:a3c7ee5a:aa7ea17b:09c1c352<br> Events : 88901<br><br> Number Major Minor RaidDevice State<br> 0 8 0 0 active sync /dev/sda<br> 1 8 16 1 active sync /dev/sdb</span></div></div>
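<div><br></div><div>For what it's worth, the "one drive at a time" route I have in mind looks roughly like this. It is untested, the device names are assumptions, and I'd want to confirm each step before running it:</div><div><span style="font-family:monospace"># replace one disk at a time, letting the mirror rebuild in between<br>mdadm /dev/md0 --fail /dev/sda --remove /dev/sda<br># (physically swap in a 4TB disk; I assume it comes back as /dev/sda)<br>mdadm /dev/md0 --add /dev/sda<br># wait for the resync to finish before touching the other disk<br>cat /proc/mdstat<br><br># once both disks are replaced, grow each layer in turn<br>mdadm --grow /dev/md0 --size=max<br>pvresize /dev/md0<br>lvextend -l +100%FREE /dev/primary/varlib<br>resize2fs /dev/primary/varlib</span></div>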
</blockquote></div>
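<div><br></div><div>Something like the below is what I mean. It's only a rough sketch: it assumes the new pair turn up as /dev/sdc and /dev/sdd and that the new array is /dev/md1, and it skips the LVM layer for simplicity (recreate that on top of /dev/md1 first if you want to keep it):</div><div><span style="font-family:monospace"># build the new mirror on the 4TB pair (device names assumed)<br>mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd<br><br># filesystem on top, mounted at the temporary point<br>mkfs.ext4 /dev/md1<br>mkdir /mnt/lib<br>mount /dev/md1 /mnt/lib<br><br># copy the data across, preserving hard links, ACLs and xattrs<br>rsync -aHAX /var/lib/ /mnt/lib/<br><br># UUID of the new filesystem, for the fstab edit<br>blkid /dev/md1</span></div>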
</blockquote></div>
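<div><br></div><div>For the record, the Btrfs route would look something like this. Just a sketch: /dev/sdc and /dev/sdd are assumptions for where the new discs land, and I'd rsync the data and repoint fstab the same way Brian describes above:</div><div><span style="font-family:monospace"># mirror both data and metadata across the two new discs<br>mkfs.btrfs -d raid1 -m raid1 /dev/sdc /dev/sdd<br><br># mount (either device of the pair works), copy, then repoint fstab<br>mkdir /mnt/lib<br>mount /dev/sdc /mnt/lib<br>rsync -aHAX /var/lib/ /mnt/lib/<br>blkid /dev/sdc</span></div>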