When you mix software RAID1 (md) and LVM, in some situations you can get this message:
Found duplicate PV: using /dev/sdb1 not /dev/md0 ...
and the LVM volume group doesn't activate. The exact device names may differ, of course. But how does this happen?
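To confirm you are in this situation, a couple of quick checks help (a sketch using the device names from the example message; substitute your own):

    pvs -o pv_name,pv_uuid,vg_name   # what LVM currently sees, with the PV UUIDs
    cat /proc/mdstat                 # confirms /dev/sdb1 really is an md array member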
What happened is that at some point vgscan was run, read a partition that was later made into a RAID1 member, and saw a Physical Volume (PV) UUID on it. Since the PV UUID read through a RAID1 array is identical to the PV UUID on each of its members (RAID1 mirrors the data byte for byte), you get duplicates.
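You can see this directly, assuming an md superblock format that lives at the end of the device (0.90 or 1.0, which is when this duplication arises, since the array's data then starts at sector 0 of the member). The LVM label normally sits in sector 1 of the PV, so dumping that sector from either device shows the same LABELONE magic and UUID text:

    dd if=/dev/sdb1 bs=512 skip=1 count=1 2>/dev/null | strings | head
    dd if=/dev/md0  bs=512 skip=1 count=1 2>/dev/null | strings | head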
RAID1 members are normally not candidates for PVs: vgscan excludes such devices from consideration. However, there is a cache, /etc/lvm/cache/.cache, which may contain outdated entries. In the example above it contained an entry for /dev/sdb1, a device that should have been filtered out by virtue of being in a RAID array. The solution is simple: just run vgscan again to refresh the cache. You do have a problem, though, if the affected device is needed for booting up. If the root device is on a different partition, or you have a rescue DVD, you may be able to mount the root filesystem containing /etc read-write and refresh the cache from there.
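As a sketch, assuming /etc/lvm/cache/.cache is the cache location on your distribution and /dev/sda2 is a hypothetical non-LVM partition holding the root filesystem:

    # Normal case: refresh the cache and activate the VG.
    vgscan                          # rescans devices, rewrites /etc/lvm/cache/.cache
    vgchange -ay                    # activate the volume group(s)

    # From a rescue DVD, when the machine cannot boot on its own:
    mount /dev/sda2 /mnt            # root filesystem containing /etc
    rm /mnt/etc/lvm/cache/.cache    # safe to delete; vgscan rebuilds it at next boot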
Some articles suggest editing lvm.conf to specify a filter that excludes the RAID1 members. Try refreshing the cache first before you resort to this; it should just work.
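For reference, such a filter might look like the following in the devices section of lvm.conf; the patterns are examples for the devices above, and order matters since the first matching pattern wins:

    devices {
        # Accept md arrays, reject the known member partition, accept the rest.
        filter = [ "a|^/dev/md|", "r|^/dev/sdb1$|", "a|.*|" ]
    }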
This problem occurred in the context of converting, in situ, a filesystem on a single disk to reside in a RAID1 array.
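For context, a minimal sketch of such a conversion (disk names hypothetical; the key point is creating the array degraded with one half "missing", copying the data, then adding the original partition):

    # Create a degraded RAID1 on the new disk, with metadata at the end (1.0).
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 missing /dev/sdb1
    # ... create the filesystem or PV on /dev/md0 and copy the data over,
    # then attach the original partition as the second mirror half:
    mdadm --add /dev/md0 /dev/sda1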