I'd say that loss of data was due to an insufficient backup strategy. Btrfs with snapshots and remote backups with btrfs-sync can be quite effective. I suggest Jason look into those.
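
Roughly, that means taking periodic read-only snapshots and shipping them to another machine with btrfs send/receive, which is roughly what tools like btrfs-sync automate. A minimal hand-rolled sketch of the idea, with a made-up subvolume path, snapshot directory and hostname:

    # btrfs subvolume snapshot -r /data /snapshots/data-2023-01-26
    # btrfs send /snapshots/data-2023-01-26 | ssh backuphost btrfs receive /backups

Once a common snapshot exists on both sides, later runs can use "btrfs send -p <previous snapshot> <new snapshot>" so only the differences cross the wire.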

--
VM

Make 1984 fiction again.


---- On Thu, 26 Jan 2023 11:05:21 +0000 J I via Nottingham <nottingham@mailman.lug.org.uk> wrote ----

This is RAID 1, I am not doing 5/6 shenanigans.

On Thu, 26 Jan 2023 at 09:32, Michael Simms via Nottingham <nottingham@mailman.lug.org.uk> wrote:

Be really careful with BTRFS RAID. Last I checked it was still "unfit for purpose", and the place I worked lost a lot of data due to RAID 5 failures.

On 26/01/2023 09:18, J I via Nottingham wrote:

Thanks, Andy.

I actually made the leap to BTRFS for the RAID, a couple of subvolumes, and it all went very smoothly.

Old (11.5 year old!) drives are out and I am now seriously considering flipping the other server.

J.

On Wed, 25 Jan 2023 at 17:08, Andy Smith via Nottingham <nottingham@mailman.lug.org.uk> wrote:

Hi,

On Tue, Jan 17, 2023 at 11:58:01PM +0000, J I via Nottingham wrote:
> Can anyone point me at some instructions on how to do that?

These will work (have done this myself many times):

https://raid.wiki.kernel.org/index.php/Growing#Extending_an_existing_RAID_array

So, being a bit more verbose on what steps are relevant to you:

1. Make sure you have backups. Growing MD RAID-1 is pretty well
   tested, but human error is a factor, especially with tasks you are
   unfamiliar with.

2. Go back to step 1 if you didn't take it seriously enough the first
   time you read it.

3. Remove one of the devices from your RAID-1 by marking it failed:

       # mdadm -f /dev/md0 /dev/sda
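
   (If you also want the failed device dropped from the array before
   you pull the disk, mdadm can do that too; something like:

       # mdadm --remove /dev/md0 /dev/sda

   though the rest of the steps work the same either way.)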

4. Physically remove sda from your computer and physically attach
   the replacement. If you can do hot swap then lucky you, you get to
   do all of this without switching your computer off. Otherwise
   clearly you'll be shutting it down to remove each device and add
   each new one. It should still boot fine and assemble the array
   (degraded).

5. I'm going to assume you have power cycled, in which case the
   drive names may have changed. Your new drive will probably now be
   sda, BUT IT MIGHT NOT BE, SO CHECK THAT SDA IS WHAT YOU EXPECT IT
   TO BE. One other likely outcome is that your previous sdb is now
   sda and the new drive is sdb. You can do "smartctl -i /dev/sda" to
   get some vitals like model and serial number. If you had hotswap
   and didn't power cycle, your new drive will likely be /dev/sdc. It
   doesn't matter what it's called; just use the new name.
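
   (Another quick way to match device names to models and serial
   numbers, assuming you have lsblk available, is:

       $ lsblk -d -o NAME,MODEL,SERIAL,SIZE

   which prints one line per whole disk.)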

6. Partition your new drive how you want it. I see you have used the
   entirety of the old drive as the MD RAID member, but current best
   practice is to create a single large partition rather than use the
   bare drive. There are various reasons for this which I won't go
   into at this stage. So let's assume you now have /dev/sda1, a
   partition on your first new drive.
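
   (For the single-large-partition case, one way to do it with
   parted, assuming the new disk really is /dev/sda, is roughly:

       # parted -s /dev/sda mklabel gpt
       # parted -s /dev/sda mkpart primary 0% 100%
       # parted -s /dev/sda set 1 raid on

   Note that mklabel wipes the existing partition table, so
   double-check the device name first.)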

7. Add your new drive to the (currently degraded) array:

       # mdadm --add /dev/md0 /dev/sda1

   The array will now be resyncing onto the new drive, though it
   will still be the same size. Check progress:

       $ cat /proc/mdstat
       $ watch -d cat /proc/mdstat
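
   (mdadm can also report the state of the rebuild directly, if you
   prefer that to /proc/mdstat:

       # mdadm --detail /dev/md0

   and it shows the recovery progress as a percentage.)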

8. Once that's done, repeat steps (3) to (7) with sdb. You should
   end up with a clean running array on both the new drives, but it
   will still be the old (small) size.

9. Tell the kernel to grow the array to as big as it can go:

       # mdadm --grow /dev/md0 --size=max

   I've never had an issue with the bitmap stuff it mentions, but if
   concerned then you might want to do as it says.
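
   (If you do want to play it safe there, the usual approach is to
   drop the internal write-intent bitmap before growing and add it
   back afterwards, roughly:

       # mdadm --grow /dev/md0 --bitmap none
       # mdadm --grow /dev/md0 --size=max
       # mdadm --grow /dev/md0 --bitmap internal

   with the bitmap only re-added once the grow has been issued.)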

   After this your array should say it is the new size, though your
   LVM setup will not yet know about that.

10. Resize the LVM PV so LVM knows you have more to allocate from:

        # pvresize /dev/md0
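
    (You can sanity-check the result with:

        $ pvs
        $ vgs

    which should now show the extra free space. If you then want the
    /var/lib filesystem itself to grow into it, that is an lvextend
    on whatever LV it lives on, for example with made-up VG/LV names:

        # lvextend -r -l +100%FREE /dev/vg0/varlib

    where -r also grows the ext4 filesystem on top.)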

That should be it. At no point should you have had to reboot into a
live environment or anything, since the RAID should have continued
working in a degraded state up until the end of step (8). If you DID
happen to encounter a hardware fault during the swap, though, you
could be in for a bad time, hence backups.

If you somehow DO end up with a non-working system and have to boot
into a live / rescue environment, don't panic. Most of them have
full mdadm tools and kernel support, so you should be able to fix
things from them. If you hit a snag, ask on the linux-raid mailing
list. Don't be tempted to experiment unless you know exactly what
effect the various commands will have.

I haven't discussed the bootloader implications since you said these
are not your boot drives. The page does a reasonable job of that.

> I have a feeling it's going to be more complicated than just
> replacing the drives one at a time with a rebuild (and then some
> magic to grow things to the full 4TB).

That's really all it is.

Note that you'll need a GPT partition table (not MBR) in order to
have a partition of ~4TB. That will be fine.

> Do I need to be overly concerned with /etc/fstab which seems to be
> using a UUID:
> /dev/disk/by-id/dm-uuid-LVM-47rf... /var/lib ext4 defaults 0 0

This is a UUID for a device-mapper (LVM) device. Nothing will change
as far as your LVM configuration is concerned, so that UUID will
remain the same.
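
(If you want to double-check that, the dm-uuid-LVM-* names are
stable symlinks that udev creates for LVM volumes; you can see which
device-mapper node that fstab entry points at, and what sits on top
of the array, with:

    $ ls -l /dev/disk/by-id/dm-uuid-LVM-*
    $ lsblk -f /dev/md0

Neither is affected by the disks underneath changing.)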

If you have other fstab entries you are concerned about, let us
know.

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting

--
Nottingham mailing list
Nottingham@mailman.lug.org.uk
https://mailman.lug.org.uk/mailman/listinfo/nottingham