zteifel
May 9th, 2011, 02:15 PM
Hi!
I have a volume group created with the Linux Volume Manager (LVM). It consists of 3 physical disks, 1 volume group (vg1) and 1 logical volume (lg1).
ubuntu@ubuntu:~$ sudo pvscan
PV /dev/sde VG vg1 lvm2 [931,51 GiB / 0 free]
PV /dev/sda VG vg1 lvm2 [279,48 GiB / 0 free]
PV /dev/sdb VG vg1 lvm2 [279,48 GiB / 44,00 MiB free]
Total: 3 [1,46 TiB] / in use: 3 [1,46 TiB] / in no VG: 0 [0 ]
ubuntu@ubuntu:~$ sudo lvscan
ACTIVE '/dev/vg1/lg1' [1,46 TiB] inherit
ubuntu@ubuntu:~$ sudo vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg1" using metadata type lvm2
ubuntu@ubuntu:~$
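For reference, the setup was originally created with roughly these commands (from memory, so take the exact invocations as a sketch rather than exactly what I typed back then):
sudo pvcreate /dev/sde /dev/sda /dev/sdb
sudo vgcreate vg1 /dev/sde /dev/sda /dev/sdb
sudo lvcreate -l 100%FREE -n lg1 vg1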
Everything works great with Ubuntu 9.10, but when I upgrade, or boot the 11.04 live CD, the problem begins.
I get this when I try to activate my volume group:
ubuntu@ubuntu:~$ sudo vgchange -ay
device-mapper: resume ioctl failed: Invalid argument
Unable to resume vg1-lg1 (252:0)
1 logical volume(s) in volume group "vg1" now active
If I check the log, I see this:
May 9 14:45:15 ubuntu kernel: [ 200.776060] device-mapper: table: 252:0: sda too small for target: start=384, len=586113024, dev_size=586112591
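To compare the sizes myself I checked what the kernel reports for sda against what LVM thinks the PV is (both in 512-byte sectors; the pvs fields are the ones I believe are right, going by the man page):
sudo blockdev --getsz /dev/sda
sudo pvs --units s -o pv_name,dev_size,pv_size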
In 9.10 I have both /dev/mapper/vg1-lg1 and /dev/vg1/lg1, but in 11.04 I only have /dev/mapper/vg1-lg1.
I have restored the volume group with vgcfgrestore; the restore itself succeeds, but it doesn't help.
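The restore was done roughly like this, using the automatic backup under /etc/lvm (default path, I haven't moved it):
sudo vgcfgrestore -f /etc/lvm/backup/vg1 vg1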
I read something a while ago about metadata traces from a previous RAID configuration that can cause problems with newer versions of LVM. So I ran dmraid -ay and discovered some traces on both sda and sdb.
I erased them with dmraid -rE, and nothing shows up when I run dmraid -ay anymore. Still, the problem isn't solved.
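For reference, this is roughly what I did to find and erase the old fakeraid metadata (device names as on my system; -rE wipes the RAID signature from the disk, so be careful with it):
sudo dmraid -r
sudo dmraid -rE /dev/sda
sudo dmraid -rE /dev/sdb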
(edit: I actually found a bug report saying this could be the cause: https://bugzilla.redhat.com/show_bug.cgi?id=505291)
I also tried to resize the logical volume because of the messages in the log, but that didn't help either. I'm out of ideas and have been googling like hell all weekend. I'm guessing this is probably an easy fix, but I'm stuck!
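The resize attempt was something along these lines (the exact amount is from memory, and I know shrinking an LV without shrinking the filesystem first is risky, so this is just what I tried, not a recommendation):
sudo lvreduce -L -1G /dev/vg1/lg1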
Please help!