[GLLUG] Link two RAIDs in one LVM?

Andy Smith andy at bitfolk.com
Tue Apr 28 16:10:47 UTC 2020


Hi,

On Tue, Apr 28, 2020 at 01:18:08PM +0200, Dr. Axel Stammler via GLLUG wrote:
> I have a 4 TB RAID system (two identical hard disks combined in a
> RAID-1, created using mdadm). Now, after a few years, this has
> reached 90% capacity, and I am thinking about first adding another
> similar 8 TB RAID system and then combining them into one 12 TB
> RAID 1+0 filesystem.

> Which hardware parameters should I look at?

You already received tips to avoid SMR. This cannot be stressed
enough. Worse still, Seagate and WD are selling SMR drives without
marking them as such.

    https://www.ixsystems.com/community/resources/list-of-known-smr-drives.141/
    https://rml527.blogspot.com/2010/09/hdd-platter-database-seagate-25.html

Do not try to use an SMR drive in a RAID array of any kind.
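
If you aren't sure what you actually have, one way is to read the
model number off each drive with smartctl (from smartmontools; the
device name here is just an example) and look it up against the
lists above:

    smartctl -i /dev/sda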

Next up, if your drives don't support the SCTERC timeout facility
then that is not ideal for a Linux RAID setup, but it can (and
should) be worked around. Here is an article I wrote many years ago
about this; it still applies:

    http://strugglers.net/~andy/blog/2015/11/09/linux-software-raid-and-drive-timeouts/

On the linux-raid list many of the requests for help from people
whose arrays won't automatically assemble after a failure are
because their drives don't support SCTERC and they didn't work
around it.
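
The workaround from that article looks roughly like this (device
name is just an example, and 180 seconds is just a suitably large
value, not a magic number):

    # preferred: tell the drive to give up on error recovery after
    # 7 seconds (the value is in tenths of a second)
    smartctl -l scterc,70,70 /dev/sda

    # fallback for drives that don't accept SCTERC: raise the
    # kernel's command timeout well above the drive's own retry time
    echo 180 > /sys/block/sda/device/timeout

Both settings are volatile, so they need reapplying on every boot.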

> Which method should I use to combine both RAID systems into one?
> 
> - linear RAID

Doable but not great because:

- Complexity of having arrays be part of an array, e.g. you'd have
  md0 and md1 as your two RAID-1s and then md2 as a linear array of
  those (sketched below).

- Not ideal performance, since all IO will go to one pair until its
  capacity is reached, after which all IO will go to the other.

- Not sure if you can continue to grow this one later by adding more
  devices.
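
If you did go this way, a rough sketch (md device numbers assumed,
untested):

    # md0 = existing 4 TB RAID-1, md1 = new 8 TB RAID-1
    mdadm --create /dev/md2 --level=linear --raid-devices=2 \
        /dev/md0 /dev/md1
    mkfs.ext4 /dev/md2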

> - RAID-0

Doable but not great because:

- Complexity of having arrays be part of an array.

- Uneven performance because one "half" is actually twice the size
  of the other "half".

- Cannot grow this setup later by adding more drives without
  rebuilding it all again.

If all your devices were the same size there would also be the
option of reshaping RAID-1 to RAID-10, which is possible with recent
mdadm. It turns the RAID-1 into a RAID-0 and then turns that into a
single RAID-10. No further grows/reshapes would be possible after that
though.

> - Large Volume Management (using pvcreate, vgcreate, lvcreate)

(LVM stands for Logical Volume Manager btw 😀)

For ease of management, this is most likely the way I would go.

You'd make your existing md device one Physical Volume, make the
new md device another PV, then make a Volume Group from both of
those PVs with an allocation mode of stripe.
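
Roughly, that looks like this (VG and LV names are just examples,
untested):

    pvcreate /dev/md0 /dev/md1
    vgcreate vg0 /dev/md0 /dev/md1

    # add "-i 2" if you want the LV striped across both PVs
    lvcreate -l 100%FREE -n data vg0

    mkfs.ext4 /dev/vg0/data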

Pros:

- Can keep adding RAID-1 pairs like this as PVs forever without
  having to move your data about again (sketch after the cons list
  below).

- Pretty simple to manage and understand what is going on.

Cons:

- Performance will still be a bit uneven, since the smaller PV will
  fill up first, after which LVM will only allocate from the larger
  PV.

- If you've never used LVM before then it's a whole set of new
  concepts.
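
Adding another RAID-1 pair later would then look roughly like this
(names assumed, untested; resize2fs assumes an ext4 filesystem):

    pvcreate /dev/md2
    vgextend vg0 /dev/md2
    lvextend -l +100%FREE /dev/vg0/data
    resize2fs /dev/vg0/data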

With all of the above options, though, you are going to have to
destroy the filesystem(s) currently on the RAID-1 and restore their
contents onto whatever setup you end up with.

If you're starting over you could consider ZFS-on-Linux.
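
For example, a pool made of two mirrored pairs (the ZFS equivalent
of RAID-10), which you can keep growing by adding further mirror
vdevs; pool and device names here are just examples:

    zpool create tank mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd

    # later, grow the pool by adding another mirrored pair
    zpool add tank mirror /dev/sde /dev/sdf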

I've been burnt by btrfs and still see showstopping data loss and
availability problems on the btrfs list on a weekly basis, so I
would not recommend it at this time. There is likely to be someone
who will say they have been using it for years without issue; if
you aren't convinced then subscribe to linux-btrfs for a month and
see what other people are still dealing with!

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting


