Bug 117318 - raid 1 physical devices don't vgscan properly
Summary: raid 1 physical devices don't vgscan properly
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: rawhide
Hardware: i386
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Alasdair Kergon
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: FC2Target
 
Reported: 2004-03-02 20:03 UTC by Alexandre Oliva
Modified: 2007-11-30 22:10 UTC
CC List: 3 users

Fixed In Version: 2.00.14-1.1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2004-04-17 19:14:00 UTC
Type: ---
Embargoed:



Description Alexandre Oliva 2004-03-02 20:03:41 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040217

Description of problem:
Unless raid 1 members are explicitly filtered out from vgscanning in
/etc/lvm/lvm.conf, lvm vgscan fails with messages such as:

  Found duplicate PV o4Icc0yt8OloT89tnJ5xq156XCwYEczT: using /dev/sda3 not /dev/hda2
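
For reference, the workaround is to reject the raid members in the
devices section of /etc/lvm/lvm.conf.  A minimal sketch, using the two
partitions from the message above (the exact regexes are illustrative):

  devices {
      # reject the raw raid 1 members, accept everything else
      filter = [ "r|/dev/hda2|", "r|/dev/sda3|", "a|.*|" ]
  }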


Version-Release number of selected component (if applicable):
lvm2-2.00.08-4

How reproducible:
Always

Steps to Reproduce (a rough command sketch follows the list):
1. Create a raid 1 device out of two disk partitions
2. Add it to the volume group holding your root filesystem, and make
sure /etc/lvm/lvm.conf does NOT exclude the raid members
3. Run mkinitrd
4. Reboot
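
A rough command sequence for steps 1-3, with purely illustrative
device, array and volume group names (/dev/hda2, /dev/sda3, /dev/md0,
VolGroup00):

  # step 1: build a raid 1 array from two partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/sda3
  # step 2: turn the array into a PV and add it to the root VG
  pvcreate /dev/md0
  vgextend VolGroup00 /dev/md0
  # step 3: rebuild the initrd for the running kernel
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)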

Actual Results:  The machine fails to boot because vgscan fails

Expected Results:  Ideally one shouldn't have to exclude raid members
in lvm.conf.  Having to do so is very impractical, particularly for
rescue disks.

Additional info:

lvm vgscan fails the same way after the boot completes, as long as a
working lvm.conf was in place at boot time, so the failure isn't just
because /sys might not be mounted by the initrd (I haven't checked).

Comment 1 Heinz Mauelshagen 2004-03-03 09:36:26 UTC
Users are currently required to set up an appropriate device-name
filter when MD RAID 1 sits underneath device-mapper.
FYI: We're working on the lvm2 tools and device-mapper to support RAID
1 directly.

Comment 2 Alexandre Oliva 2004-03-03 14:54:33 UTC
How do you expect users to do that in a rescue CD environment?

If you're really not going to fix this, then reassign the bug to
anaconda, because anaconda is going to have to create lvm.conf at
install time.  If the root filesystem is in a logical volume, it has
to be done before mkinitrd.  And, again, this will make the rescuecd
useless for LVM systems using RAID.  Please reconsider.  Why is it so
hard to enumerate raid components and automatically skip them? 
Failing that, consider a plug-in system (i.e., vgscan runs a script)
that generates the skip list on the fly.  Then I can write a script or
program that scans /proc/mdstat and outputs a filter list of raid
members to skip.  And then you can integrate that into lvm because it
really doesn't make sense for lvm to intentionally do something that
is obviously wrong. :-/
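
Something along these lines would be enough to generate the list (an
untested sketch; it assumes active arrays, whose /proc/mdstat lines
look like "md0 : active raid1 sda3[0] hda2[1]" with the member devices
starting at the fifth field):

  #!/bin/sh
  # Print an lvm.conf-style reject entry for every raid member device.
  awk '/^md[0-9]+ : active/ {
      for (i = 5; i <= NF; i++) {   # fields 5.. are the member devices
          dev = $i
          sub(/\[.*/, "", dev)      # strip the "[n]" / "[n](F)" suffix
          printf("\"r|/dev/%s|\", ", dev)
      }
  }' /proc/mdstat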

Comment 3 Stephen Tweedie 2004-03-03 17:45:09 UTC
Agreed, we really can't simply walk away from this and decide that
rescue mode won't work on lvm-on-raid1.

Comment 4 Heinz Mauelshagen 2004-03-03 18:03:39 UTC
I don't think that it is hard to filter out underlying devices making
up an md RAID 1.
Our recommendation so far has been an 'external' setup of a filter in
lvm.conf before running vgscan.
We're reconsidering the issue.

Comment 5 Alasdair Kergon 2004-04-08 21:20:34 UTC
Firstly, the LVM2 installer and the RPM need to be more helpful and
install the default /etc/lvm/lvm.conf file if none exists.

Secondly, yes, we need to fix the md detection.


Comment 6 Alasdair Kergon 2004-04-15 13:29:51 UTC
2.00.12 installs the default lvm.conf if none exists

Comment 7 Alasdair Kergon 2004-04-15 18:17:18 UTC
I've not managed to reproduce this yet in my test environments.

LVM2 correctly ignores the constituent devices of the md array for me
when the raid device is active.

When it's inactive, I get the warning messages, but this doesn't cause
problems - vgscan still exits with success.
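
The exit status is easy to check directly; despite the duplicate-PV
warnings, this prints 0 here:

  lvm vgscan; echo $?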

There must be some other factor at play here that I'm missing.



Comment 8 Alexandre Oliva 2004-04-16 02:12:52 UTC
> 2.00.12 installs the default lvm.conf if none exists

overwriting my lvm.conf that actually worked, and replacing it with
one that fails to exclude raid 1 members, which fails next time I
update the kernel :-(

> There must be some other factor at play here that I'm missing.

This is with lvm vgscan as started within initrd.  Did you actually
set up the root filesystem on LVM on raid 1, as described in the
original bug report?  Are you using the default lvm.conf, or something
else that would exclude the raid components?

Comment 9 Stephen Tweedie 2004-04-16 10:28:39 UTC
lvm.conf was recently marked %config(noreplace), so hopefully that
particular issue won't be a problem in the future.
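
In spec file terms that amounts to a %files entry along the lines of
the following (path shown for illustration):

  %config(noreplace) /etc/lvm/lvm.conf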

Comment 10 Alasdair Kergon 2004-04-16 11:39:31 UTC
Ah - I didn't realise you actually had rootfs on raid1: I read step 2
as adding raid components to the VG *after* the rootfs was created :-(

It's probably vgchange that's failing rather than vgscan.
vgchange -ay must not be run before mdadm in the boot sequence or
there'll be problems.  And it's horrible to require that, because some
people will need it the other way around.
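
For LVM on top of raid 1, the ordering that works is roughly the
following (commands illustrative; mkinitrd's generated script may use
its own built-ins rather than calling mdadm directly):

  # assemble the md array(s) first, so LVM sees the PV on /dev/md0
  # rather than on the underlying partitions
  mdadm --assemble --scan
  # only then scan for and activate volume groups
  lvm vgscan
  lvm vgchange -ay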

OK, now I understand what's going on, I'll sort out a patch.

Comment 11 Alasdair Kergon 2004-04-16 19:01:21 UTC
Hopefully fixed now in lvm2-2.00.14-1.1 (submitted to dist-fc2-HEAD).

Comment 12 Alexandre Oliva 2004-04-17 19:14:00 UTC
Yay!  Confirmed, thanks.  For the first time, I can boot my desktop
without a custom lvm.conf.

