Bug 1379865 - Current Fedora 25 and Rawhide cannot detect Intel firmware RAID set
Summary: Current Fedora 25 and Rawhide cannot detect Intel firmware RAID set
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: libblockdev
Version: 25
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Vratislav Podzimek
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: AcceptedBlocker
Depends On:
Blocks: F25BetaBlocker
 
Reported: 2016-09-27 23:37 UTC by Adam Williamson
Modified: 2016-10-07 03:34 UTC
CC List: 8 users

Fixed In Version: libblockdev-1.9-3 libblockdev-1.9-4.fc25
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1380034 (view as bug list)
Environment:
Last Closed: 2016-10-07 03:34:08 UTC
Type: Bug
Embargoed:


Attachments
anaconda.log (from Fedora-Everything-netinst-x86_64-Rawhide-20160927.n.1.iso ) (deleted)
2016-09-27 23:45 UTC, Adam Williamson
storage.log (from Fedora-Everything-netinst-x86_64-Rawhide-20160927.n.1.iso ) (deleted)
2016-09-27 23:45 UTC, Adam Williamson
program.log (from Fedora-Everything-netinst-x86_64-Rawhide-20160927.n.1.iso ) (deleted)
2016-09-27 23:46 UTC, Adam Williamson
journalctl output (from Fedora-Everything-netinst-x86_64-Rawhide-20160927.n.1.iso ) (deleted)
2016-09-27 23:51 UTC, Adam Williamson
F24 Final storage.log for comparison (deleted)
2016-09-28 00:29 UTC, Adam Williamson
storage.log after applying blivet patch (with libblockdev 1.9-3) (deleted)
2016-10-04 07:50 UTC, Adam Williamson
mdadm output for all three /dev nodes with all three output formats, for firmware RAID case (deleted)
2016-10-04 16:49 UTC, Adam Williamson
mdadm output for all three /dev nodes with all three output formats, for software RAID case (deleted)
2016-10-04 17:07 UTC, Adam Williamson
storage.log from successful test with blivet fix and libblockdev 1.9-100 scratch build (deleted)
2016-10-04 20:14 UTC, Adam Williamson

Description Adam Williamson 2016-09-27 23:37:44 UTC
Current F25 and Rawhide nightlies cannot detect an Intel firmware RAID set at all. I created a fresh set out of two carefully blanked disks (wiped with lvremove/vgremove/pvremove and wipefs) on my regular test box for this stuff; the set simply does not appear on INSTALLATION DESTINATION at all. Fedora 24 Final sees it fine. This is the same on both live and netinst images.

The set is correctly assembled by the kernel and mdadm, so far as I can tell; there's a /dev/md126 and I can run fdisk on it fine, and gnome-disks also sees it fine on a live boot.

This seems to have been broken for quite a while; I tried various nightlies I happen to have lying around, and all of them back to Fedora-Server-netinst-x86_64-Rawhide-20160420.n.0.iso have the bug.

This is a clear Beta blocker per criterion "The installer must be able to detect and install to hardware or firmware RAID storage devices." Will attach logs.

Comment 1 Adam Williamson 2016-09-27 23:42:29 UTC
I note in storage.log:

23:33:33,845 INFO blivet: scanning Volume0_0 (/sys/devices/virtual/block/md126)...
23:33:33,847 DEBUG blivet:                DeviceTree.get_device_by_name: name: Volume0_0 ; incomplete: False ; hidden: False ;
23:33:33,849 DEBUG blivet:                DeviceTree.get_device_by_name returned None
23:33:33,852 DEBUG blivet:                  DeviceTree.get_device_by_name: name: Volume0_0 ; incomplete: False ; hidden: False ;
23:33:33,855 DEBUG blivet:                  DeviceTree.get_device_by_name returned None
23:33:33,858 DEBUG blivet:               DiskDevicePopulator.run: name: Volume0_0 ;
23:33:33,859 WARN blivet: device/vendor is not a valid attribute
23:33:33,860 WARN blivet: device/model is not a valid attribute
23:33:33,860 INFO blivet: Volume0_0 is a disk
23:33:33,861 DEBUG blivet: get_format('None') returning DeviceFormat instance with object id 88
23:33:33,861 DEBUG blivet: get_format('None') returning DeviceFormat instance with object id 89
23:33:33,864 DEBUG blivet:                       DiskDevice._set_format: Volume0_0 ; type: None ; current: None ;
23:33:33,868 DEBUG blivet:                     DiskDevice.update_sysfs_path: Volume0_0 ; status: False ;
23:33:33,868 ERR blivet: failed to update sysfs path for Volume0_0: [Errno 2] No such file or directory: '/dev/Volume0_0'

The correct path is /dev/md/Volume0_0, not /dev/Volume0_0.
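
For anyone reproducing this by hand: named MD arrays are exposed as symlinks under /dev/md/ that point at the kernel node (/dev/md126 here), which is why /dev/Volume0_0 does not exist. A quick check, assuming the array name from the log above:

import os

name = "Volume0_0"  # array name taken from the log above

for path in (os.path.join("/dev", name), os.path.join("/dev/md", name)):
    if os.path.exists(path):
        # e.g. "/dev/md/Volume0_0 -> /dev/md126"
        print(path, "->", os.path.realpath(path))
    else:
        # this is the branch /dev/Volume0_0 takes, hence the ENOENT in the log
        print(path, "does not exist")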

Comment 2 Adam Williamson 2016-09-27 23:45:31 UTC
Created attachment 1205340 [details]
anaconda.log (from Fedora-Everything-netinst-x86_64-Rawhide-20160927.n.1.iso )

Comment 3 Adam Williamson 2016-09-27 23:45:53 UTC
Created attachment 1205341 [details]
storage.log (from Fedora-Everything-netinst-x86_64-Rawhide-20160927.n.1.iso )

Comment 4 Adam Williamson 2016-09-27 23:46:14 UTC
Created attachment 1205342 [details]
program.log (from Fedora-Everything-netinst-x86_64-Rawhide-20160927.n.1.iso )

Comment 5 Adam Williamson 2016-09-27 23:51:40 UTC
Created attachment 1205356 [details]
journalctl output (from Fedora-Everything-netinst-x86_64-Rawhide-20160927.n.1.iso )

Comment 6 Adam Williamson 2016-09-28 00:28:47 UTC
Well, the wrong device path is kind of a symptom: basically blivet fails to figure out that the RAID set is a RAID set; it thinks it's just a disk. If it had figured out what it was and picked the right device class, it'd get the device path right. Compare the F24 and F25 logs. F24:

23:59:50,380 INFO blivet: scanning Volume0_0 (/sys/devices/virtual/block/md126)...
...
23:59:50,406 INFO blivet: got device: MDBiosRaidArrayDevice instance (0x7f8565917390) --

F25:

23:33:33,845 INFO blivet: scanning Volume0_0 (/sys/devices/virtual/block/md126)...
...
23:33:33,860 INFO blivet: Volume0_0 is a disk
...
23:33:33,869 INFO blivet: got device: DiskDevice instance (0x7feaeca07f60) --

I'm gonna leave it to dlehman from here, because he *knows* how this device detection works; there's no point in me wasting a few hours figuring it out, really.

Comment 7 Adam Williamson 2016-09-28 00:29:10 UTC
Created attachment 1205362 [details]
F24 Final storage.log for comparison

Comment 8 Stephen Gallagher 2016-09-28 12:50:01 UTC
+1 Blocker

Also, just as an aside, the use of the name Volume0_0 makes it look like it's staring at me wide-eyed and it makes me uncomfortable.

Comment 9 David Lehman 2016-09-28 14:21:47 UTC
mdadm is inconsistent in output key format across metadata formats. For v1, it uses "Raid Level :". For intel/imsm/isw, it uses "RAID Level :".

This can be worked around in libblockdev, but it should be fixed in mdadm given that the command line tool is the only API mdadm provides.
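
To illustrate the kind of workaround being suggested (the real change went into libblockdev, which is C; this is just a sketch of the idea in Python): normalize the key case while parsing, so "Raid Level" and "RAID Level" collapse into one lookup key.

def parse_examine(output):
    """Parse `mdadm --examine` output into a dict with case-normalized keys.

    Sketch of the workaround idea only, not the actual libblockdev fix
    (https://github.com/rhinstaller/libblockdev/pull/124).
    """
    data = {}
    for line in output.splitlines():
        if " : " not in line:
            continue
        key, _, value = line.partition(" : ")
        # "Raid Level" (v1.x metadata) and "RAID Level" (imsm/isw) both
        # end up under the key "raid level".
        data[key.strip().lower()] = value.strip()
    return data

# Both metadata formats then answer the same query:
#   parse_examine(v1_output)["raid level"]    -> "raid1"
#   parse_examine(imsm_output)["raid level"]  -> "0"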

Comment 10 Vratislav Podzimek 2016-09-30 12:02:09 UTC
(In reply to David Lehman from comment #9)
> mdadm is inconsistent in output key format across metadata formats. For v1,
> it uses "Raid Level :". For intel/imsm/isw, it uses "RAID Level :".
> 
> This can be worked around in libblockdev, but it should be fixed in mdadm
> given that the command line tool is the only API mdadm provides.

Workaround for libblockdev: https://github.com/rhinstaller/libblockdev/pull/124

Comment 11 Kevin Fenzi 2016-10-01 23:38:47 UTC
+1 blocker.

Comment 12 Chris Murphy 2016-10-02 19:55:14 UTC
+1 beta blocker

Comment 13 Vratislav Podzimek 2016-10-03 16:10:18 UTC
(In reply to Vratislav Podzimek from comment #10)
> (In reply to David Lehman from comment #9)
> > mdadm is inconsistent in output key format across metadata formats. For v1,
> > it uses "Raid Level :". For intel/imsm/isw, it uses "RAID Level :".
> > 
> > This can be worked around in libblockdev, but it should be fixed in mdadm
> > given that the command line tool is the only API mdadm provides.
> 
> Workaround for libblockdev:
> https://github.com/rhinstaller/libblockdev/pull/124

Merged.

Comment 14 Petr Schindler 2016-10-03 16:22:09 UTC
Discussed at 2016-10-03 blocker review meeting: [1]. 

This bug was accepted as Beta blocker: This bug violates Beta criterion "The installer must be able to detect and install to hardware or firmware RAID storage devices."

[1] https://meetbot-raw.fedoraproject.org/fedora-blocker-review/2016-10-03/

Comment 15 Adam Williamson 2016-10-03 22:47:49 UTC
Again Bodhi failed to update the bug, but there is an update for this:

https://bodhi.fedoraproject.org/updates/FEDORA-2016-9305da925f

Comment 16 Adam Williamson 2016-10-03 23:51:57 UTC
Still does not work with the updated libblockdev, I'm afraid.

As I think this may be down to further differences in mdadm output, here's the entire output of 'mdadm --examine -E /dev/sda' on my test box:

/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.0.00
    Orig Family : 47426418
         Family : 47426418
     Generation : 00000001
     Attributes : All supported
           UUID : 7fe61893:94a3e502:ce92b4f6:4c0e5884
       Checksum : 97fe4ab0 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : 5QM2XY4V
          State : active
             Id : 00000000
    Usable Size : 976768264 (465.76 GiB 500.11 GB)

[Volume0]:
           UUID : 1b60a788:db78aca8:10553d1b:b8a61316
     RAID Level : 0
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 0
     Array Size : 1953536000 (931.52 GiB 1000.21 GB)
   Per Dev Size : 976768264 (465.76 GiB 500.11 GB)
  Sector Offset : 0
    Num Stripes : 3815500
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk01 Serial : 9VM1BT6B
          State : active
             Id : 00010000
    Usable Size : 976768264 (465.76 GiB 500.11 GB)

Some other differences I note with what `get_examine_data_from_table` seems to be looking for:

* I don't see any 'Name' key/value pair at all.
* I don't see any 'Array UUID' key, only two 'UUID' keys, one in the device output and one in the RAID set output; I don't *think* any of the parsing functions adds this distinguisher
* I don't see any 'Events' key

Compare to the output of 'mdadm --examine -E /dev/vdb1', for a software RAID set that includes /dev/vdb1:

/dev/vdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f3cb52a4:e4f1abec:79e47da7:4fbe0cb6
           Name : localhost.localdomain:root  (local to host localhost.localdomain)
  Creation Time : Mon Oct  3 23:45:22 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 31438848 (14.99 GiB 16.10 GB)
     Array Size : 15719424 (14.99 GiB 16.10 GB)
    Data Offset : 16384 sectors
   Super Offset : 8 sectors
   Unused Space : before=16296 sectors, after=0 sectors
          State : active
    Device UUID : f53c5cc8:577db78e:d7bf784b:1d153ff7

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Oct  3 23:48:38 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 669ccd86 - correct
         Events : 32


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

It's clearly very different. I'll see if I can hack up a fix to speed things along.
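
A rough sketch of what the parsing has to cope with, based on the two outputs above: IMSM output is split into a container-level block plus bracketed per-volume blocks, the array UUID lives inside the volume block under plain "UUID", and "Name"/"Events" may simply not exist. This is only an illustration of the shape of the problem, not the fix that eventually landed in libblockdev:

def parse_examine_sections(output):
    """Split `mdadm --examine` output into per-section key/value dicts.

    The "" section holds the top-level (container/member) keys; bracketed
    headers like "[Volume0]:" start a new section. Illustration only.
    """
    sections = {"": {}}
    current = ""
    for line in output.splitlines():
        stripped = line.strip()
        if stripped.startswith("[") and stripped.endswith("]:"):
            current = stripped[1:-2]          # e.g. "Volume0"
            sections[current] = {}
        elif " : " in line:
            key, _, value = line.partition(" : ")
            sections[current][key.strip()] = value.strip()
    return sections

# For the IMSM output above:
#   sections["Volume0"]["UUID"]     -> the per-array UUID (no "Array UUID" key)
#   sections[""]["UUID"]            -> the container UUID
#   sections["Volume0"].get("Name") -> None; "Name" and "Events" only show up
#                                      in the v1.2 (software RAID) output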

Comment 17 Adam Williamson 2016-10-04 06:00:03 UTC
So, well. Hmm. I have a fix for that, but it still doesn't fix the device detection.

I am currently digging through the code and I have a feeling something's just really wrong somewhere - or I'm just really dumb. AFAICT, none of the subclasses of DiskDevicePopulator is ever actually going to be used, because populator/helpers/__init__.py doesn't *import* any of them; its get_device_helper method should be returning an MDBiosRaidDevicePopulator instance for md126, I think, but it isn't, because it never possibly *can*: it just never tries that class at all. That class's match() method returns something truthy if you run it on the device info for md126 (I'm poking around in a python shell working this stuff out) - it returns '/dev/md/imsm', which should satisfy an 'if' condition like the one in get_device_helper() - but in fact get_device_helper returns a DiskDevicePopulator instance for that device info object.

If you do this:

print(blivet.populator.helpers._device_helpers)

you will note that the result doesn't include MDBiosRaidDevicePopulator , or indeed FCoEDevicePopulator or iScsiDevicePopulator or DASDDevicePopulator or ZFCPDevicePopulator...

So basically I think populator/helpers/__init__.py needs a crapton more imports at the top, or it's just never going to detect things properly. But I could of course be mistaken. More news as I get it!
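
For context on why a missing import matters at all, here is a generic recreation of the pattern with made-up names (not blivet's actual code): helper classes get registered as a side effect of their module being imported, so a subclass whose module is never imported silently never becomes a candidate and everything falls through to the generic disk helper.

# Generic recreation of an import-time registry; all names are illustrative.
_device_helpers = []

def register(cls):
    _device_helpers.append(cls)
    return cls

@register
class DiskHelper:
    @classmethod
    def match(cls, info):
        return info.get("DEVTYPE") == "disk"

@register
class MDBiosRaidHelper(DiskHelper):
    @classmethod
    def match(cls, info):
        # more specific: only matches members of an MD BIOS RAID container
        return bool(info.get("MD_CONTAINER"))

def get_helper(info):
    # most specific helper registered last wins
    for cls in reversed(_device_helpers):
        if cls.match(info):
            return cls
    return None

# If the module defining MDBiosRaidHelper is never imported, its @register
# never runs, _device_helpers only holds DiskHelper, and get_helper() hands
# back DiskHelper for the RAID set - which is exactly the "Volume0_0 is a
# disk" behaviour in the logs above.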

Comment 18 Adam Williamson 2016-10-04 06:51:17 UTC
So it looks like blivet commit 368a4db6141c7fdcb31ed45fe6be207ccc08ad30 added MultipathFormatPopulator to multipath.py but did not add an import to __init__.py, and 832fe67f6e8d0cb1a38e84b86410fcf15d42a92d added all the subclasses of DiskDevicePopulator but did not add imports for them. The patch is pretty simple:

diff --git a/blivet/populator/helpers/__init__.py b/blivet/populator/helpers/__init__.py
index 0d04e3a..d000765 100644
--- a/blivet/populator/helpers/__init__.py
+++ b/blivet/populator/helpers/__init__.py
@@ -5,7 +5,7 @@ from .formatpopulator import FormatPopulator
 
 from .btrfs import BTRFSFormatPopulator
 from .boot import AppleBootFormatPopulator, EFIFormatPopulator, MacEFIFormatPopulator
-from .disk import DiskDevicePopulator
+from .disk import DiskDevicePopulator, iScsiDevicePopulator, FCoEDevicePopulator, MDBiosRaidDevicePopulator, DASDDevicePopulator, ZFCPDevicePopulator
 from .disklabel import DiskLabelFormatPopulator
 from .dm import DMDevicePopulator
 from .dmraid import DMRaidFormatPopulator
@@ -13,7 +13,7 @@ from .loop import LoopDevicePopulator
 from .luks import LUKSDevicePopulator, LUKSFormatPopulator
 from .lvm import LVMDevicePopulator, LVMFormatPopulator
 from .mdraid import MDDevicePopulator, MDFormatPopulator
-from .multipath import MultipathDevicePopulator
+from .multipath import MultipathDevicePopulator, MultipathFormatPopulator
 from .optical import OpticalDevicePopulator
 from .partition import PartitionDevicePopulator

https://www.happyassassin.net/updates/1379865.0.img includes that. So if I use that, and my fix for handling UUIDs in libblockdev, it starts crashing when it tries to call MDBiosRaidDevicePopulator.run(), because MDBiosRaidDevicePopulator._get_kwargs() returns None, which seems to be because it fails here:

            self._devicetree.handle_device(container_info)
            container = self._devicetree.get_device_by_name(parent_name)
            if not container:
                log.error("failed to scan md container %s", parent_name)
                return

it seems that 'parent_name' is 'imsm', but none of the devices in blivet's device tree has that name. The names in the device tree seem to be 'sdc', 'sda', 'Volume0', 'sdb', 'sr0', '/run/install/repo/images/install.img', 'loop0', '/LiveOS/rootfs.img', 'loop1', '/overlay (deleted)', 'loop2', 'live-rw' and 'live-base'. The storage.log shows several get_device_by_name calls with "hidden: False ; incomplete: False ; name: imsm ;" that return None, one of which is immediately followed by the 'failed to scan md container' error (there's some extra log noise from when it gets from md126 to the container - md127 - and tries to handle that device, and it doesn't really work for, I think, the same reason).
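
For reference, the udev properties dumped above are already enough to tell the container (md127, MD_LEVEL=container, MD_DEVNAME=imsm) apart from the member volume (md126). A small sketch using pyudev, which is just a convenient choice of tooling for illustration and not necessarily what blivet uses at this layer:

import pyudev  # assumption: pyudev is available in the environment

context = pyudev.Context()
for dev in context.list_devices(subsystem="block", DEVTYPE="disk"):
    level = dev.properties.get("MD_LEVEL")
    if level is None:
        continue  # not an MD device
    if level == "container":
        # md127 in this report: MD_DEVNAME=imsm, MD_METADATA=imsm
        print(dev.device_node, "is an MD container named",
              dev.properties.get("MD_DEVNAME"))
    else:
        # md126 in this report: the actual firmware RAID volume
        print(dev.device_node, "is an MD array, level", level)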

Comment 19 Adam Williamson 2016-10-04 07:42:40 UTC
So what I have so far is this blivet PR, which I'm pretty sure is right:

https://github.com/rhinstaller/blivet/pull/514

and a patch for libblockdev to make it handle the mdadm --examine output better, which I'm not so sure is correct any more, but in any case seems to be irrelevant to the point I'm stuck at now. I tested both an image with my change and one with just libblockdev-1.9-3, booting both with my updates.img with that blivet fix, and they both fail identically, as described in #c18. Here's the storage.log snippet, and I'll attach the full file:

07:38:04,353 INFO blivet: scanning Volume0_0 (/sys/devices/virtual/block/md126)...
07:38:04,355 DEBUG blivet:                DeviceTree.get_device_by_name: incomplete: False ; name: Volume0_0 ; hidden: False ;
07:38:04,357 DEBUG blivet:                DeviceTree.get_device_by_name returned None
07:38:04,359 DEBUG blivet:                  DeviceTree.get_device_by_name: incomplete: False ; name: Volume0_0 ; hidden: False ;
07:38:04,361 DEBUG blivet:                  DeviceTree.get_device_by_name returned None
07:38:04,364 DEBUG blivet:               MDBiosRaidDevicePopulator.run: name: Volume0_0 ;
07:38:04,364 WARN blivet: device/vendor is not a valid attribute
07:38:04,365 WARN blivet: device/model is not a valid attribute
07:38:04,367 DEBUG blivet:                  DeviceTree.get_device_by_name: incomplete: False ; name: imsm ; hidden: False ;
07:38:04,369 DEBUG blivet:                  DeviceTree.get_device_by_name returned None
07:38:04,373 DEBUG blivet:                  DeviceTree.handle_device: info: {'DEVLINKS': '/dev/md/imsm '
             '/dev/disk/by-id/md-uuid-7fe61893:94a3e502:ce92b4f6:4c0e5884',
 'DEVNAME': '/dev/md127',
 'DEVPATH': '/devices/virtual/block/md127',
 'DEVTYPE': 'disk',
 'DM_MULTIPATH_TIMESTAMP': '1475566670',
 'MAJOR': '9',
 'MD_DEVICES': '2',
 'MD_DEVICE_sda_DEV': '/dev/sda',
 'MD_DEVICE_sda_ROLE': 'spare',
 'MD_DEVICE_sdb_DEV': '/dev/sdb',
 'MD_DEVICE_sdb_ROLE': 'spare',
 'MD_DEVNAME': 'imsm',
 'MD_LEVEL': 'container',
 'MD_METADATA': 'imsm',
 'MD_UUID': '7fe61893:94a3e502:ce92b4f6:4c0e5884',
 'MINOR': '127',
 'MPATH_SBIN_PATH': '/sbin',
 'SUBSYSTEM': 'block',
 'SYSTEMD_READY': '0',
 'TAGS': ':systemd:',
 'UDISKS_MD_DEVICES': '2',
 'UDISKS_MD_DEVICE_sda_DEV': '/dev/sda',
 'UDISKS_MD_DEVICE_sda_ROLE': 'spare',
 'UDISKS_MD_DEVICE_sdb_DEV': '/dev/sdb',
 'UDISKS_MD_DEVICE_sdb_ROLE': 'spare',
 'UDISKS_MD_DEVNAME': 'imsm',
 'UDISKS_MD_LEVEL': 'container',
 'UDISKS_MD_METADATA': 'imsm',
 'UDISKS_MD_UUID': '7fe61893:94a3e502:ce92b4f6:4c0e5884',
 'USEC_INITIALIZED': '28823734'} ; name: imsm ;
07:38:04,373 INFO blivet: scanning imsm (/sys/devices/virtual/block/md127)...
07:38:04,375 DEBUG blivet:                    DeviceTree.get_device_by_name: incomplete: False ; name: imsm ; hidden: False ;
07:38:04,377 DEBUG blivet:                    DeviceTree.get_device_by_name returned None
07:38:04,379 DEBUG blivet:                      DeviceTree.get_device_by_name: incomplete: False ; name: imsm ; hidden: False ;
07:38:04,382 DEBUG blivet:                      DeviceTree.get_device_by_name returned None
07:38:04,385 DEBUG blivet:                   MDDevicePopulator.run: name: imsm ;
07:38:04,389 DEBUG blivet:                       DeviceTree.get_device_by_name: incomplete: False ; name: sdb ; hidden: False ;
07:38:04,391 DEBUG blivet:                       DeviceTree.get_device_by_name returned existing 465.76 GiB disk sdb (30) with existing mdmember
07:38:04,394 DEBUG blivet:                       DeviceTree.get_device_by_name: incomplete: False ; name: sda ; hidden: False ;
07:38:04,397 DEBUG blivet:                       DeviceTree.get_device_by_name returned existing 465.76 GiB disk sda (11) with existing mdmember
07:38:04,399 DEBUG blivet:                     DeviceTree.get_device_by_name: incomplete: False ; name: imsm ; hidden: False ;
07:38:04,401 DEBUG blivet:                     DeviceTree.get_device_by_name returned None
07:38:04,403 DEBUG blivet:                     DeviceTree.get_device_by_uuid: incomplete: False ; uuid: 7fe61893-94a3-e502-ce92-b4f64c0e5884 ; hidden: False ;
07:38:04,405 DEBUG blivet:                     DeviceTree.get_device_by_uuid returned None
07:38:04,406 ERR blivet: failed to scan md array imsm
07:38:04,415 ERR blivet: failed to stop broken md array imsm
07:38:04,416 DEBUG blivet: no device obtained for imsm
07:38:04,419 DEBUG blivet:                  DeviceTree.get_device_by_name: incomplete: False ; name: imsm ; hidden: False ;
07:38:04,423 DEBUG blivet:                  DeviceTree.get_device_by_name returned None
07:38:04,423 ERR blivet: failed to scan md container imsm

Comment 20 Adam Williamson 2016-10-04 07:50:26 UTC
Created attachment 1207097 [details]
storage.log after applying blivet patch (with libblockdev 1.9-3)

Here's the storage.log I get after the blivet patch. anaconda shows a crash whose backtrace just shows that MDBiosRaidDevicePopulator._get_kwargs() returned None, and from the log we can see that's because of "failed to scan md container imsm".

Comment 21 Adam Williamson 2016-10-04 16:49:35 UTC
Created attachment 1207289 [details]
mdadm output for all three /dev nodes with all three output formats, for firmware RAID case

So as discussed on IRC, this basically boils down to the bits of information we need being messily provided differently for each metadata type (along with a bunch of red herrings) in three different mdadm output formats: --examine -E, --examine --export, and --examine --brief.

To that end, here's the output of all three commands run against all three relevant /dev nodes (sda, sdb and md127) for the firmware RAID case. I'll attach the corresponding output for the software RAID case shortly.
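
Of the three, --examine --export is the easiest to consume programmatically, since it is plain KEY=VALUE lines. A small sketch (which keys actually appear still varies by metadata type, so callers have to tolerate missing ones):

import subprocess

def examine_export(device):
    """Return the KEY=VALUE pairs from `mdadm --examine --export DEVICE`.

    Sketch only; the set of MD_* keys emitted differs between IMSM members
    and v1.x members, which is the whole problem being discussed here.
    """
    out = subprocess.run(
        ["mdadm", "--examine", "--export", device],
        capture_output=True, text=True, check=True,
    ).stdout
    pairs = {}
    for line in out.splitlines():
        key, sep, value = line.partition("=")
        if sep:
            pairs[key] = value
    return pairs

# e.g. examine_export("/dev/sda") on the firmware RAID member and
# examine_export("/dev/vdb1") on the software RAID member return different
# key sets, which a robust parser has to allow for.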

Comment 22 Adam Williamson 2016-10-04 17:07:50 UTC
Created attachment 1207292 [details]
mdadm output for all three /dev nodes with all three output formats, for software RAID case

Here's the corresponding mdadm outputs for the software RAID case (run on both partitions that form a part of the array).

Comment 23 Adam Williamson 2016-10-04 20:10:56 UTC
With my blivet fix (https://www.happyassassin.net/updates/1379865.0.img) and http://koji.fedoraproject.org/koji/taskinfo?taskID=15941153 (a scratch libblockdev build with several fixes to mdadm output parsing), I get a successful install to the clean firmware RAID set. vpodzime is working on a libblockdev build now. dlehman should be able to do a blivet build in a little while (he said he was heading out to an appointment), or if worst comes to worst I can do one.

Comment 24 Adam Williamson 2016-10-04 20:14:16 UTC
Created attachment 1207341 [details]
storage.log from successful test with blivet fix and libblockdev 1.9-100 scratch build

Attaching the storage.log from the successful run, just for comparison and so vpodzime and dlehman can see if anything still looks screwy.

Comment 25 Fedora Update System 2016-10-05 01:55:06 UTC
libblockdev-1.9-3.fc25 has been pushed to the Fedora 25 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-9305da925f

Comment 26 Fedora Update System 2016-10-05 20:28:12 UTC
python-blivet-2.1.6-1.fc25 has been pushed to the Fedora 25 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-f38e22839e

Comment 27 Fedora Update System 2016-10-05 20:28:44 UTC
libblockdev-1.9-4.fc25 has been pushed to the Fedora 25 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-7428388c67

Comment 28 Adam Williamson 2016-10-06 17:10:14 UTC
Verified fixed in Beta-1.1.

Comment 29 Fedora Update System 2016-10-07 03:33:52 UTC
python-blivet-2.1.6-1.fc25 has been pushed to the Fedora 25 stable repository. If problems still persist, please make note of it in this bug report.

Comment 30 Fedora Update System 2016-10-07 03:34:08 UTC
libblockdev-1.9-4.fc25 has been pushed to the Fedora 25 stable repository. If problems still persist, please make note of it in this bug report.

