Bug 1876162
Summary: anaconda storage configuration is extremely slow with an existing btrfs filesystem containing hundreds of snapshots

Product: Fedora
Component: python-blivet
Version: 33
Hardware: x86_64
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Heðin <hmoller>
Assignee: Vojtech Trefny <vtrefny>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: anaconda-maint-list, awilliam, blivet-maint-list, bugzilla, davide, dlehman, gmarr, japokorn, jkonecny, jonathan, kellin, kparal, michel, mkolman, robatino, rvykydal, vanmeeuwen+fedora, vponcova, vtrefny, wwoods
Target Milestone: ---
Target Release: ---
Whiteboard: AcceptedFreezeException
Fixed In Version: python-blivet-3.3.0-2.fc33
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2020-09-22 07:37:06 UTC
Bug Blocks: 1766776, 1766777

Attachments:
Created attachment 1713841 [details]
/tmp/dbus.log
Created attachment 1713842 [details]
/tmp/storage.log
Created attachment 1713843 [details]
lspci && lshw -short && lsblk
Created attachment 1713844 [details]
/tmp/program.log
Created attachment 1713846 [details]
lvm.log
Clues are in storage.log.

Preexisting Btrfs with 500+ snapshots, and I think we're getting some kind of dbus hang, possibly due to all the messaging (is dbus used between anaconda and blivet?).

Heðin, do you have the traceback file in /tmp? It's the file that starts with tb- and then has random characters after it. Can you attach it?

Proposed as a Blocker for 33-beta by Fedora user chrismurphy using the blocker tracking app because:

The quotes are from the beta guided criterion; there's a custom equivalent, but by the description the crash happens before the guided/custom decision. If 500+ preexisting snapshots is a valid configuration, then the installer must "Complete an installation using any combination of disk configuration options it allows the user to select". If it's invalid, then the installer must "Reject or disallow invalid disk and volume configurations without crashing."

Pretty clear blocker. But there's a but...

Maybe there's a simple explanation and fix. But just to throw it out there: while the file system itself can handle thousands of snapshots, we may not want to support installing to such an already complex file system, for various valid reasons. (It's a bug, but is it a blocking-worthy bug, and in this cycle?) In which case, is there a cheap way to count the number of subvolumes, consider a reasonable upper limit of say 50 subvolumes, and bail out without crashing?

Another consideration is we are within 5 days of go/no-go, and thus "Last minute blocker bugs" could apply. I.e., if it's determined to be a blocker, it'd become a final blocker.
https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process#Exceptional_cases

There's no /tmp/tb-* file during the issue or after. I get the following in the terminal:

$ sudo anaconda --liveinst
Starting installer, one moment...
anaconda 33.25.2-1.fc33 for anaconda bluesky (pre-release) started.
 * installation log files are stored in /tmp during the installation
 * shell is available on TTY2 and in second TMUX pane (ctrl+b, then press 2)
 * when reporting a bug add logs from /tmp as separate text/plain attachments
** (anaconda:75537): WARNING **: 20:51:01.148: AT-SPI: Could not obtain desktop path or name
** (anaconda:75537): WARNING **: 20:51:01.403: atk-bridge: GetRegisteredEvents returned message with unknown signature
** (anaconda:75537): WARNING **: 20:51:01.403: atk-bridge: get_device_events_reply: unknown signature
** (anaconda:75537): WARNING **: 20:51:01.403: atk-bridge: get_device_events_reply: unknown signature
Internal error: Could not resolve keysym XF86FullScreen
Exception ignored in: <function _after_fork at 0x7ff5432414c0>
Traceback (most recent call last):
  File "/usr/lib64/python3.9/threading.py", line 1486, in _after_fork
    thread._reset_internal_locks(True)
  File "/usr/lib64/python3.9/threading.py", line 829, in _reset_internal_locks
    self._tstate_lock._at_fork_reinit()
AttributeError: 'NoneType' object has no attribute '_at_fork_reinit'

(In reply to Chris Murphy from comment #8)
> Pretty clear blocker. But there's a but...
>
> Maybe there's a simple explanation and fix. But just to throw it out there, while the file system itself can handle thousands of snapshots, we may not want to support installing to such an already complex file system, for various valid reasons. (It's a bug but is it a blocking worthy bug, and in this cycle?) In which case is there a cheap way to count the number of subvolumes, consider a reasonable upper limit of say 50 subvolumes, and bail out without crashing?
>
> Another consideration is we are within 5 days of go/no-go, and thus "Last minute blocker bugs" could apply. i.e. if it's determined to be a blocker, it'd become a final blocker.
> https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process#Exceptional_cases

My intention is to do a clean install, so for my particular use case it would be fine if anaconda bailed, stating something like: ">XXX snapshots are not supported. We need to wipe the drive to install; do you want to continue?"

One note though: AFAIK, SUSE has been defaulting to btrfs for some time with snapper on top. They may have chosen to set up regular snapshots like I had.

(In reply to Chris Murphy from comment #6)
> clues are in storage.log
>
> Preexisting Btrfs with 500+ snapshots and I think we're getting some kind of dbus hang, possible due to all the messaging (is dbus used between anaconda and blivet?).

I believe starting with Anaconda 33, storage is managed and manipulated as a service using udisks2.

(In reply to Heðin from comment #10)
> (In reply to Chris Murphy from comment #8)
> > Pretty clear blocker. But there's a but...
> >
> > Maybe there's a simple explanation and fix. But just to throw it out there, while the file system itself can handle thousands of snapshots, we may not want to support installing to such an already complex file system, for various valid reasons. (It's a bug but is it a blocking worthy bug, and in this cycle?) In which case is there a cheap way to count the number of subvolumes, consider a reasonable upper limit of say 50 subvolumes, and bail out without crashing?
> >
> > Another consideration is we are within 5 days of go/no-go, and thus "Last minute blocker bugs" could apply. i.e. if it's determined to be a blocker, it'd become a final blocker.
> > https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process#Exceptional_cases
>
> My intention is to do a clean-install, so for my particular use-case it would be fine if anaconda bailed, stating that
> ">XXX snapshots are not supported, We need to wipe the drive to install, do you want to continue?"
>
> One not tho, afaik. Suse have been defaulting to btrfs for some time with snapper on top. They may have choosen to setup regular snapshots like I had.

SUSE *definitely* does it this way. Its cleanup mechanism is... uhh, inconsistent, is the nicest way to put it. This problem is one of the reasons I didn't elect to set up system snapshots automatically in Fedora...

Created attachment 1713848 [details]
/mnt/storage.log letting anaconda terminate without killall -9 anaconda
I tried to mount the btrfs volume to find out how many snapshots there actually are, and I had some issues with it...
dmesg output every time I try to mount the btrfs partition
[12832.884670] BTRFS info (device sda3): disk space caching is enabled
[12832.889270] BTRFS info (device sda3): bdev /dev/sda3 errs: wr 0, rd 0, flush 0, corrupt 7159, gen 0
[12833.039543] BTRFS info (device sda3): enabling ssd optimizations
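For reference, a minimal sketch of counting the subvolumes/snapshots once the volume does mount; the device and mountpoint are illustrative, not taken from the logs:

$ sudo mount /dev/sda3 /mnt
$ sudo btrfs subvolume list /mnt | wc -l      # all subvolumes, snapshots included
$ sudo btrfs subvolume list -s /mnt | wc -l   # snapshot subvolumes only
$ sudo umount /mnt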
(In reply to Heðin from comment #10)
> My intention is to do a clean-install, so for my particular use-case it would be fine if anaconda bailed, stating that
> ">XXX snapshots are not supported, We need to wipe the drive to install, do you want to continue?"

Totally valid use case that we should make easy. I'm trying to reproduce now with a 500+ snapshot file system.

I tried to let anaconda "hang" at probing storage since the last logfile was uploaded, and about 2 minutes ago it completed the "probing storage" task... so it seems it's just dead slow instead of crashing. Does this also explain why "quit" didn't quit in an expected fashion?

Created attachment 1713849 [details]
Error checking storage configuration - output from storage.log
anaconda started with the following parameters:
$ sudo anaconda --liveinst --loglevel info --nomount
log output is from storage.log, while doing the following steps:
#1 select Installation destination
#2 select the btrfs disk
#3 press Done
#4 select Reclaim space
#5 select Delete all
#6 select Reclaim space
#7 wait for the error under Installation Destination "Error checking storage configuration"
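As the following comments work out, the error in step #7 comes from a sanity check complaining that /dev/sda2 was still mounted from the live session. A minimal sketch of checking for and clearing that condition before retrying (device names are illustrative):

$ lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda   # anything with a MOUNTPOINT is still in use
$ findmnt --source /dev/sda2                      # show where it is mounted
$ sudo umount /dev/sda2                           # unmount, then re-run anaconda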
(In reply to Heðin from comment #13)
> [12832.889270] BTRFS info (device sda3): bdev /dev/sda3 errs: wr 0, rd 0, flush 0, corrupt 7159, gen 0

This is just a counter; we have no way of knowing if it's a current problem without a complete dmesg attached. But the storage log shows it's mounted without complaint, so I'm not sure that's the issue.

Right now I'm on the hub with Installation Destination grayed out; the status under it has said "probing storage" for 5 minutes so far... Creating 500 snapshots with a bash loop took about 20 seconds, and no time at all to list them (a sketch of such a loop appears below). So I think the subvolume list/search code in the installer's helper program is just kind of expensive right now. The desktop environment is responsive, no problem there. And it takes about 8 minutes to complete the probing. Memory consumption is also OK:

MiB Mem :   2918.3 total,    190.4 free,   1039.9 used,   1687.9 buff/cache
MiB Swap:   1459.0 total,   1459.0 free,      0.0 used.   1513.0 avail Mem

I run into no further resistance using the Automatic + Reclaim Space path.

With Custom partitioning I get a brief hang, about 6 seconds. But then no further resistance, including listing all 500 snapshots.

With Advanced Custom partitioning I get a longer hang, 15 seconds, and then a dialog that the hard drive is not responding: force quit or wait. I can't get this UI to cooperate further; it never progresses but also doesn't crash.

OK, I'm wrong. I left it alone and some 10-15 minutes later, Advanced Custom also displays all the snapshots. My times need to be treated with some skepticism because this is a 3G RAM VM.

Created attachment 1713852 [details]
journal, full, monotonic time - autopartition

> I run into no further resistance using the Automatic + Reclaim Space path.

Clicking "Begin installation" yields another significant delay. The logs show individual subvolume teardown and, I guess, deletion, with a umount and mount in between each. It does eventually get there, and the installation completes and boots.

The Custom UI has a "delete all" option that will delete the two sets of 'unknown' tranches of subvolumes/snapshots. It took a few iterations, but I was able to delete everything. The expensive teardown behavior is the same as autopart. The Custom install does eventually complete, and the installation works fine.

In the Advanced Custom UI I couldn't figure out how to delete the whole btrfs volume, only each snapshot one by one. There's a long 5+ minute delay between each delete confirmation. Unworkable in this case.

(In reply to Heðin from comment #16)
> #7 wait for the error under Installation Destination "Error checking storage configuration"

This looks like a separate problem...

From the attached log:

DEBUG:anaconda.modules.storage.partitioning.validate:Found sanity error: /dev/sda2 is currently mounted and cannot be used for the installation. Please unmount it and retry.

OK, so what's sda2?

DEBUG:blivet: DeviceTree.get_device_by_name returned existing 953,87 GiB disk sda (14) with existing gpt disklabel
DEBUG:blivet: DeviceTree.get_device_by_name returned existing 16 MiB partition sda1 (24) with existing vfat filesystem
DEBUG:blivet: DeviceTree.get_device_by_name returned existing 512 MiB partition sda2 (34) with existing biosboot
DEBUG:blivet: DeviceTree.get_device_by_name returned existing 953,35 GiB partition sda3 (40) with existing btrfs filesystem

Weird. That is an unusually gigantic biosboot. And if it's really biosboot, it has no file system on it to mount.

Heðin, can you reproduce this? Was sda2 mounted before starting the installer? I wonder if it has some bogus/stale format on it, and maybe GNOME Shell/gvfs/udisks is automounting it? If you can reproduce this, please file a new bug against anaconda, cc: me on it. And separately attach anaconda.log, storage.log, storage.state, and program.log from /tmp in the booted installer image environment. And also attach 'journalctl -b -o short-monotonic --no-hostname > bugID_journal.txt'. And also paste the output of 'blkid' into a comment. Thanks.

Summary: The existing installation has many snapper snapshots (500+). Anaconda's enumeration of snapshots happens during "probing storage", before Installation Destination becomes available for user interaction. This takes 5-15 minutes (depending on the number of snapshots and hardware performance). Later, once a complete wipe of the target drive is indicated, another expensive iterative teardown happens. But eventually the installation succeeds for the Automatic and Custom paths; it's unworkable for Advanced-Custom partitioning. (See comments 20 and 21 for summaries of the partitioning paths.) Further, I'll emphasize this is not happening because of Btrfs by default; it would happen with LVM+ext4 by default too.

Is it a blocker? A reasonable user might assume the installer is stuck. This is a use case we want to better optimize for, but the scope of that work needs assessment. It's not likely getting fixed for beta, possibly not for F33 final.

How common is this configuration? Pretty common with openSUSE, where 2-3 snapshots are taken for each configuration change (add/remove/modify) via the package manager. In a few weeks, there are easily several hundred snapshots.

(In reply to Chris Murphy from comment #22)
> (In reply to Heðin from comment #16)
> > #7 wait for the error under Installation Destination "Error checking storage configuration"
>
> This looks like a separate problem...
>
> From the attached log->
> DEBUG:anaconda.modules.storage.partitioning.validate:Found sanity error: /dev/sda2 is currently mounted and cannot be used for the installation. Please unmount it and retry.
>
> OK so what's sda2?
>
> DEBUG:blivet: DeviceTree.get_device_by_name returned existing 953,87 GiB disk sda (14) with existing gpt disklabel
> DEBUG:blivet: DeviceTree.get_device_by_name returned existing 16 MiB partition sda1 (24) with existing vfat filesystem
> DEBUG:blivet: DeviceTree.get_device_by_name returned existing 512 MiB partition sda2 (34) with existing biosboot
> DEBUG:blivet: DeviceTree.get_device_by_name returned existing 953,35 GiB partition sda3 (40) with existing btrfs filesystem
>
> Weird. That is an unusually gigantic biosboot. And if it's really biosboot, it has no file system on it to mount.
>
> Heðin, can you reproduce this? Was sda2 mounted before starting the installer? I wonder if it has some bogus/stale format on it, and maybe GNOME Shell/gvfs/udisks is automounting it? If you can reproduce this, please file a new bug against anaconda, cc: me on it. And separately attach anaconda.log, storage.log, storage.state, program.log - from /tmp in the installer image booted environment. And also attach 'journalctl -b -o short-monotonic --no-hostname > bugID_journal.txt'. And also paste output of 'blkid' into a comment. Thanks.

Your assumption is correct, I mounted it prior to launching anaconda. A reboot later and, while slow, everything was working and I have managed to install F33. But if I did not know how long this use case would take, I would have assumed that the installer was broken in some way.
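For reference, a minimal sketch of the kind of bash loop mentioned above for creating 500 snapshots on a test volume; the mountpoint, source subvolume, and snapshot directory are illustrative assumptions, not taken from the reporter's system:

$ sudo mount /dev/sda3 /mnt
$ sudo mkdir -p /mnt/snapshots
$ for i in $(seq 1 500); do sudo btrfs subvolume snapshot -r /mnt/root /mnt/snapshots/snap-$i; done
$ sudo umount /mnt

Note that snapshots created this way are identical, which is the contrived case described above; real snapper snapshots accumulate differences and behave differently when deleted.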
Tried to reproduce this with LVM thinp. The thin metadata pool got to 91% at 335 snapshots and refused to continue to create more. Subsequently launching anaconda, it hangs indefinitely (I gave up at 15 minutes). Anaconda doesn't even get to the language menu. Only program.log has any length, and its last line is:

02:44:04,792 INF program: Running [2] lvm lvs --noheadings --nosuffix --nameprefixes --unquoted --units=b -a -o vg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype,origin,pool_lv,data_lv,metadata_lv,role,move_pv,data_percent,metadata_percent,copy_percent --config= devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] } ...

This has -4 Beta in the ticket - https://pagure.io/fedora-qa/blocker-review/issue/64 - so rejecting as a Beta blocker. Leaving the Final proposal intact.

Discussed during the 2020-09-08 blocker review meeting: [0]

The decision to delay the classification of this as a blocker bug was made as this seems like a reasonable blocker candidate, but we're concerned about the extent to which it can realistically be addressed in a practical timeframe for F33. We will punt for input from the installer team on this.

[0] https://meetbot.fedoraproject.org/fedora-blocker-review/2020-09-08/f33-blocker-review.2020-09-08-16.00.txt

On that note: installer/storage folks, can you please give us your assessment of this issue and what would be realistically possible in the F33 timeframe for the two cases Chris identified - lots of btrfs snapshots, and lots of LVM thinp snapshots?

I'm re-assigning this back to the default assignee as I don't think it makes sense for it to be assigned to Neal.

I was able to identify the problem in blivet (at least for btrfs snapshots, I have not tested with LVM snapshots yet). I'm still working on the fix, but I was already able to reduce the storage scanning time to about 1 minute. It's still too much, but definitely better than 10 minutes.

upstream PR: https://github.com/storaged-project/blivet/pull/897
updates image: https://vtrefny.fedorapeople.org/img/rhbz1876162.img

Still work in progress, but so far it seems to be working and reduces the storage scan time to less than 1 minute in my VM, which is not that bad for a system with 500 devices IMHO.

Wow! Awesome! Vojtech, is it a safe enough fix to take as a beta freeze exception? Or wait until after beta? I'll give the updates image a test soon.

Removing udev settles always makes me a bit nervous, because they usually got put in for a reason. But I'm not sure it's better to wait till Final... if anything it'd be good to have the change in Beta so more folks will test it out on varied hardware and we'll catch any issues caused by the change. So I'm gonna go ahead and propose this as a Beta FE.

(In reply to Adam Williamson from comment #32)
> Removing udev settles always makes me a bit nervous, because they usually got put in for a reason.

I think it should be safe to remove these. But I'm also a little nervous :-)

I can confirm the updates image improves things quite a bit. But there's a long but...

I talked to Josef Bacik about this a bit and we're not thinking udev is a factor except when devices are physically added/removed. Maybe it's to account for the user doing silly things during an installation? Like adding/removing devices? But that's not the most expensive thing in a real world scenario. In my contrived case, it's 500 snapshots of the same thing with no differences between the snapshots. Therefore the subvol deletion is pretty cheap, and it only seems like udevadm settle is the expensive thing.
In a real world case, snapshots will have differences between them. That makes them expensive to remove (compared to creating them). Deleting even 20 snapshots with many differences between them will be more expensive than deleting hundreds of snapshots with no differences. There is a btrfs-cleaner kernel thread that does this expensive task in the background, but only while the fs stays mounted. The cleaner thread can delay umount from succeeding, but once umount succeeds, the interruption of the cleaner task leaves the file system in a state that causes the next mount to be more expensive: dealing with orphaned inodes and restarting the snapshot deletion. We aren't seeing any of this in my example because my example is too simple.

Ideally:

a) if the final step of tear down will be wipefs, just skip to the wipefs. Avoid all the btrfs subvol removals. Since they aren't block devices, no individual tear down is needed.

b) if the file system isn't being wiped, do 'btrfs subvolume delete $LIST' to batch the deletions all at once (a rough sketch of this appears after this exchange).

That's probably not trivial work. The big performance hit is with the mount->umount loop each being the first mount and final umount for the file system. The first mount involves a bootstrap and the resumption of previously interrupted tasks; they were interrupted because the umount was the last umount. If the mount/umount loop aren't the first or last, they become merely pseudo-bind mounts behind the scenes - there'd be no interruption or bootstrapping cost. The way to make the existing loop much cheaper would be to mount any part of this btrfs file system somewhere at the start, and then just leave it mounted there. If the file system never does a final umount, all the anaconda umount/mount events are made much cheaper.

(In reply to Chris Murphy from comment #34)
> a) if the final step of tear down will be wipefs, just skip to the wipefs. Avoid all the btrfs subvol removals.

I think we should take this one step at a time. The fix I did was not for removing, but simply for the initial storage scan in Anaconda -- the `teardown_all` function is used just to make sure all block devices are unmounted and stopped/closed (for LUKS, MD etc.), and Anaconda runs it during the initial storage scan. So I'd say removing the snapshots is a different (but definitely related) issue. Changing the way we remove devices in blivet is possible, but it's a way bigger change than removing a few udev settles. I think we might need to completely redo snapshot management and representation in blivet to make this work, and that's definitely out of scope for F33.

(In reply to Vojtech Trefny from comment #35)
> I think we should take this one step at a time.

I agree completely. If removing some udev settles makes you even a little bit nervous, I support postponing it until the problem and solution are better understood (after Fedora 33).

I also support mounting the btrfs file system somewhere (other than on /mnt/sysimage, perhaps in /run) persistently, making no other changes. If it remains persistently mounted somewhere, the mount-umount loops become much faster, and permit snapshot cleanup to happen normally and without interruption. From a Btrfs perspective that makes things much simpler; but I can't estimate how much work it is. Therefore, I have no opinion whether it's practical for Fedora 33 or should also wait.
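To illustrate the batching idea in (b), a minimal sketch using btrfs-progs, assuming the filesystem is mounted at /mnt and the snapshots live under /mnt/snapshots (both paths are illustrative):

$ sudo btrfs subvolume delete /mnt/snapshots/*   # one invocation deletes many snapshots
$ sudo btrfs subvolume sync /mnt                 # optionally wait for the cleaner thread to finish
$ sudo umount /mnt

Keeping the filesystem mounted while the btrfs-cleaner thread works, as suggested above, avoids the orphan-recovery cost on the next mount.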
Discussed during the 2020-09-14 blocker review meeting: [0]

The decision to classify this bug as an "AcceptedFreezeException (Beta)" was made as it is a noticeable issue that cannot be fixed with an update.

[0] https://meetbot.fedoraproject.org/fedora-blocker-review/2020-09-14/f33-blocker-review.2020-09-14-16.01.txt

Discussed during the 2020-09-14 blocker review meeting: [0]

The decision to delay the classification of this as a blocker bug was made as we would like more evaluation from the anaconda team and also testing with a real-world case (not identical snapshots) to figure out what's really plausible in an F33 timeframe before voting on this.

[0] https://meetbot.fedoraproject.org/fedora-blocker-review/2020-09-14/f33-blocker-review.2020-09-14-16.01.txt

We discussed this a little bit more internally and I think removing the udev settle calls should be safe, so I'll do a new build for F33 later today or tomorrow. I did some more manual testing and also asked the Anaconda team to run their test suite with patched blivet, and so far everything looks good.

FEDORA-2020-0a1faa5d22 has been submitted as an update to Fedora 33.
https://bodhi.fedoraproject.org/updates/FEDORA-2020-0a1faa5d22

FEDORA-2020-0a1faa5d22 has been pushed to the Fedora 33 testing repository. In a short time you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2020-0a1faa5d22`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2020-0a1faa5d22

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

Discussed at the 2020-09-21 blocker review meeting: https://meetbot-raw.fedoraproject.org/fedora-blocker-review/2020-09-21/f33-blocker-review.2020-09-21-16.00.html

Again, we delayed the decision, because we still don't really know how "real world" cases will behave here. The udev settle update will go into Beta (assuming it passes openQA tests, which I just triggered manually), so we'll see how that goes.

FEDORA-2020-0a1faa5d22 has been pushed to the Fedora 33 stable repository. If the problem still persists, please make note of it in this bug report.

Related: A way to recursively delete subvolumes/snapshots:
https://github.com/kdave/btrfs-progs/blob/master/libbtrfsutil/python/subvolume.c#L415

The BTRFS_IOC_SUBVOL_CREATE ioctl can create/delete subvolumes directly without needing a path, and I think the _fd variant mentioned in the libbtrfs C API does this too; I don't see that variant in the python API. But if it would be useful we can add it.
https://github.com/kdave/btrfs-progs/tree/master/libbtrfsutil
https://github.com/kdave/btrfs-progs/blob/master/libbtrfsutil/python/subvolume.c#L323

Chris, if you're able to test a more "real world" scenario here and find that the udev settle change alone isn't enough to make it perform acceptably, please do re-open it. Thanks!

Bug fixed, commonbugs not needed. If we do find that real-world btrfs snapshot scenarios are still a problem, we can file and document that separately, I guess.
Created attachment 1713840 [details]
anaconda logs

This BZ might need to be split in 2, "quit error" and "storage probing error", but for now it will be one BZ, because I encountered the issue(s) in direct relation to each other.

Description of problem:
Was waiting for "probing storage" forever, got tired of waiting and hit the quit button... The UI dimmed/grayed and nothing happened. Tried again, starting anaconda from a terminal with:
$ anaconda --liveinst
and got an exception and traceback.

Version-Release number of selected component (if applicable):
Booted Fedora Workstation live, build 20200902.n.0

How reproducible:
Booted to live, started the installer, experienced the error, killall -9 anaconda, started anaconda from the terminal, and got the same error; that's 2 of 2.

Steps to Reproduce:
1. boot 20200902.n.0 to live
2. terminal -> anaconda --liveinst
3. System contains 1 ssd with an existing Linux deployment, lsblk output follows:

Actual results:
"Probing storage" never finishes. When I got tired of waiting, hitting the "quit" button dimmed the GUI and the terminal where I launched anaconda from throws an exception.

Expected results:
#1 "probing storage" should finish and either succeed or tell the user why it failed.
#2 pressing "quit" should quit anaconda gracefully or tell the user why anaconda can't be terminated.

Additional info: