[Gllug] Removing faulty disk from LVM2
Magnus Nilsson
magnus at upcore.net
Fri Feb 4 10:49:16 UTC 2005
Hi,
I've got an 8-disk VG where one PV is dying.
By pvmove'ing parts of the PV I've narrowed the fault down to PE #14858
of the 38154 4MB PEs. All the other PEs on that PV have been moved off.
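For reference, the narrowing was done with PE ranges on the pvmove
command line, roughly like this (device names made up here):

  # move the extents below and above the suspect one off the disk,
  # leaving only PE 14858 behind
  pvmove /dev/sdb1:0-14857
  pvmove /dev/sdb1:14859-38153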
The machine hard-locks every time that PE is accessed, so I can't
activate the VG with 'vgchange -ay vg00'. It also seems there is an
unfinished pvmove of that last PE, which I can't abort because I can't
activate the VG.
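If I could get the VG active, I assume the leftover move could be
cleared with something like:

  # abort the in-flight pvmove (needs the VG activated first,
  # which is exactly what I can't do)
  pvmove --abort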
Following the lvm manpage, I've tried 'vgchange -P -ay vg00' after
creating /dev/ioerror with dmsetup. That lets me activate the VG
without the faulty PV plugged in, but it still won't let me mount the
LVs: mount complains that it can't find the superblock.
I did:

  echo "0 99999999999 error" > /foo
  dmsetup create ioerror /foo
  ln -s /dev/mapper/ioerror /dev/ioerror
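The stand-in device itself looks fine as far as I can tell; something
like this should show its table and status (I can't paste the actual
output, as explained below):

  # confirm the whole-device error target was created as intended
  dmsetup table ioerror
  dmsetup info ioerror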
In any case, the manpage says that activating with -P only gives me
read access, and I don't have the ~1TB of spare space needed to take a
full backup.
Is there any way to just skip that last PE and let me repair the
filesystem afterwards? Perhaps by vgcfgrestore'ing an edited metadata
backup that excludes the broken PV?
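Something like the following is what I had in mind, though I haven't
dared to run it yet, and the paths here are only my guess at the
defaults:

  # back up the current metadata, hand-edit the dead PV out of it,
  # then restore the edited copy
  vgcfgbackup vg00
  cp /etc/lvm/backup/vg00 /tmp/vg00.edited
  # (edit /tmp/vg00.edited to drop the failing PV and its segments)
  vgcfgrestore --file /tmp/vg00.edited vg00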
Alternatively, I could remove another PV from the VG, create a new VG,
and copy the data over in chunks (I've added a new disk, so there is
free space in the VG).
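Roughly what I have in mind (made-up device names again, and assuming
the new disk's extents can be freed first):

  # detach the new disk from vg00 and start a fresh VG on it
  vgreduce vg00 /dev/sdh1    # only possible once no PEs on it are in use
  vgcreate vg01 /dev/sdh1
  # then create LVs in vg01 and copy the data over a chunk at a time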
Sadly I can't provide any command output since both /usr and /var are
on the VG. I'm using Linux 2.6.10 with device-mapper 1.00.21 and LVM
2.00.33.
(I've sent this same email to the linux-lvm mailing list, hoping that
somebody here might be able to help.)
//Magnus
--
Gllug mailing list - Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug