[Gllug] Which path is easier, Transfer swapout or swapout transfer?
Nix
nix at esperi.org.uk
Mon Aug 18 23:02:45 UTC 2008
On 18 Aug 2008, Justin Perreault outgrape:
> Unfortunately this is where I got stuck. There is an LV that spans the
> two drives. With one of the drives not connected the system refused to
> run LVM and related commands. I suspect this is because LVM could not
> see what it expected.
This is why it's generally considered unwise to have VGs that span PVs
that are likely to fail independently (so it's fine to have a VG
spanning multiple RAID-6 arrays, say, because they don't fail much, but
having it span multiple ordinary disks is risky).
> I am now exploring how to reduce a LV so that it resides wholly on the
> old 1TB drive.
If the LV is actually spanning both drives and has any significant data on
the failed drive, you're likely out of luck :(
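(You can check which PVs an LV's extents actually live on with
something like

  lvs -o +devices yourvg

or 'pvdisplay -m' on each PV for a per-extent map -- substitute your
own VG name for 'yourvg', obviously.)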
Firstly, there isn't a command-line tool that will do this: vgreduce
will vape the missing disk out of the VG, but will remove any LVs
partially on that PV. So you're reduced to editing the metadata by hand.
Fortunately this is easy as it's all textual :)
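(For the record, the command in question is something like

  vgreduce --removemissing yourvg

-- but since that throws away every LV with extents on the missing PV,
it's no use if you want to salvage part of one.)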
The way to do this is to back up the VG's metadata with vgcfgbackup,
edit the backup to remove any segments on the appropriate PV, and
restore it again with vgcfgrestore (but back up the backup file before
you change it, so that you can revert if things go wrong).
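Something like this, assuming a VG called 'yourvg' and a scratch file
of your own choosing:

  vgcfgbackup -f /root/yourvg.meta yourvg
  cp /root/yourvg.meta /root/yourvg.meta.orig   # the backup of the backup
  vi /root/yourvg.meta                          # do the surgery described below
  vgcfgrestore -f /root/yourvg.meta yourvg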
e.g. taking a slice out of one of my own config backups:
raid {
    physical_volumes {
        pv0 {
            id = "mAQXqU-TCAO-FCPr-798d-7Xa0-Syq0-4nGPWn"
            device = "/dev/md1"        # Hint only
            status = ["ALLOCATABLE"]
            dev_size = 153614592       # 73.2491 Gigabytes
            pe_start = 384
            pe_count = 18751           # 73.2461 Gigabytes
        }

        pv1 {
            id = "pFmLbU-CvX5-ipAs-eaza-ZFBi-wgyQ-QfxKFw"
            device = "/dev/md2"        # Hint only
            status = ["ALLOCATABLE"]
            dev_size = 39262208        # 18.7217 Gigabytes
            pe_start = 384
            pe_count = 4792            # 18.7188 Gigabytes
        }
    }

    logical_volumes {
        mirror {
            id = "i6XSVp-Cw2L-am9Z-9iIC-MYRC-DDLD-v50z4L"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 2

            segment1 {
                start_extent = 0
                extent_count = 256     # 1024 Megabytes
                type = "striped"
                stripe_count = 1       # linear
                stripes = [
                    "pv1", 512
                ]
            }
            segment2 {
                start_extent = 256
                extent_count = 125     # 500 Megabytes
                type = "striped"
                stripe_count = 1       # linear
                stripes = [
                    "pv0", 10034
                ]
            }
        }
    }
}
If /dev/md1 vanished and I wanted to recover a bit of this LV for some
reason, I could remove pv0 from the mirror logical_volume by deleting
segment2 and adjusting the segment_count therein:
raid {
    [...]

    logical_volumes {
        mirror {
            id = "i6XSVp-Cw2L-am9Z-9iIC-MYRC-DDLD-v50z4L"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 256     # 1024 Megabytes
                type = "striped"
                stripe_count = 1       # linear
                stripes = [
                    "pv1", 512
                ]
            }
        }
    }
}
Of course the resulting FS may well be completely wrecked.
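So check it without writing anything before you trust it, e.g.
(assuming the names in my example above and an ext2/3 filesystem):

  vgchange -ay raid
  fsck -n /dev/raid/mirror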