Bug 117318
| Summary: | raid 1 physical devices don't vgscan properly | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Alexandre Oliva <oliva> |
| Component: | lvm2 | Assignee: | Alasdair Kergon <agk> |
| Status: | CLOSED RAWHIDE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | rawhide | CC: | agk, me, sct |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i386 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | 2.00.14-1.1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2004-04-17 19:14:00 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 114963 | | |
Description
Alexandre Oliva
2004-03-02 20:03:41 UTC
Users are required to set up an appropriate device name filter in case of MD RAID 1 underneath device-mapper. FYI: We're working on the lvm2 tools and device-mapper to support RAID 1 directly.

How do you expect users to do that in a rescue CD environment? If you're really not going to fix this, then reassign the bug to anaconda, because anaconda is going to have to create lvm.conf at install time. If the root filesystem is in a logical volume, it has to be done before mkinitrd. And, again, this will make the rescue CD useless for LVM systems using RAID. Please reconsider. Why is it so hard to enumerate raid components and automatically skip them? Failing that, consider a plug-in system (i.e., vgscan runs a script) that generates the skip list on the fly. Then I can write a script or program that scans /proc/mdstat and outputs a filter list of raid members to skip.

And then you can integrate that into lvm, because it really doesn't make sense for lvm to intentionally do something that is obviously wrong. :-/ Agreed, we really can't simply walk away from this and decide that rescue mode won't work on lvm-on-raid1.

I don't think that it is hard to filter out underlying devices making up an md RAID 1. Our recommendation so far was an 'external' setup of a filter in lvm.conf before running vgscan. We're reconsidering the issue. Firstly, the LVM2 installer and the RPM need to be more helpful and install the default /etc/lvm/lvm.conf file if none exists. Secondly, yes, we need to fix the md detection.

2.00.12 installs the default lvm.conf if none exists

I've not managed to reproduce this yet in my test environments. LVM2 correctly ignores constituent md devices for me when the raid device is active. When it's inactive, I get the warning messages, but this doesn't cause problems - vgscan still exits with success. There must be some other factor at play here that I'm missing.
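The 'external' filter setup recommended above would look roughly like the following fragment of /etc/lvm/lvm.conf; the device names hda2 and hdb2 are hypothetical stand-ins for the RAID 1 member partitions on a given machine:

```
devices {
    # Reject the raw partitions that are md RAID 1 members, so that
    # vgscan only finds the PV signature on the assembled /dev/md0.
    # "r|...|" rejects a pattern; the trailing "a|.*|" accepts the rest.
    filter = [ "r|^/dev/hda2$|", "r|^/dev/hdb2$|", "a|.*|" ]
}
```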
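The plug-in idea proposed above (vgscan runs a script that scans /proc/mdstat and emits a skip list) could be quite small. This is a minimal sketch, assuming the usual "mdN : active raidX devB[1] devA[0]" layout of /proc/mdstat array lines; the function name and output format are only illustrative:

```python
import re

def mdstat_reject_filter(mdstat_text):
    """Build an lvm.conf-style filter line that rejects every device
    listed as a member of an active md array in the given mdstat text."""
    rejects = []
    for line in mdstat_text.splitlines():
        # Array lines look like: "md0 : active raid1 sdb1[1] sda1[0]"
        m = re.match(r"^md\d+\s*:\s*active\s+\S+\s+(.*)", line)
        if not m:
            continue
        for dev in re.findall(r"(\S+?)\[\d+\]", m.group(1)):
            rejects.append('"r|^/dev/%s$|"' % dev)
    # Accept everything that was not explicitly rejected.
    return "filter = [ %s ]" % ", ".join(rejects + ['"a|.*|"'])
```

In the proposed scheme, vgscan would run such a script over /proc/mdstat at startup and splice its output into the device filtering, instead of relying on a hand-maintained lvm.conf.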
> 2.00.12 installs the default lvm.conf if none exists

overwriting my lvm.conf that actually worked, and replacing it with one that fails to exclude raid 1 members, which fails next time I update the kernel :-(

> There must be some other factor at play here that I'm missing.

This is with lvm vgscan as started within initrd.

Did you actually set up the root filesystem on LVM on raid 1, as described in the original bug report? Are you using the default lvm.conf, or something else that would exclude the raid components? lvm.conf was recently marked %config(noreplace), so hopefully that particular issue won't be a problem in the future.

Ah - I didn't realise you actually had rootfs on raid1: I read step 2 as adding raid components to the VG *after* the rootfs was created :-( It's probably vgchange that's failing rather than vgscan. vgchange -ay must not be run before mdadm in the boot sequence or there'll be problems.

And it's horrible to require that, because some people will need it the other way around.

OK, now I understand what's going on; I'll sort out a patch.

Hopefully fixed now in lvm2-2.00.14-1.1 (submitted to dist-fc2-HEAD).

Yay! Confirmed, thanks. For the first time, I can boot my desktop without a custom lvm.conf.
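The ordering constraint identified above (mdadm must run before vgchange) amounts to an initrd sequence along these lines; this is only a sketch, and the volume group and logical volume names are hypothetical:

```shell
# Assemble md arrays first, then activate LVM, so that vgscan and
# vgchange see the assembled /dev/md0 rather than its member partitions.
mdadm --assemble --scan     # start all arrays described in mdadm.conf
lvm vgscan                  # scan for volume groups (finds the PV on md0)
lvm vgchange -ay            # activate all logical volumes
mount /dev/VolGroup00/LogVol00 /sysroot   # hypothetical root LV
```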