
Bug 1876162

Summary: anaconda storage configuration is extremely slow with an existing btrfs filesystem containing hundreds of snapshots
Product: [Fedora] Fedora
Reporter: Heðin <hmoller>
Component: python-blivet
Assignee: Vojtech Trefny <vtrefny>
Status: CLOSED ERRATA
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: high
Docs Contact:
Priority: unspecified
Version: 33
CC: anaconda-maint-list, awilliam, blivet-maint-list, bugzilla, davide, dlehman, gmarr, japokorn, jkonecny, jonathan, kellin, kparal, michel, mkolman, robatino, rvykydal, vanmeeuwen+fedora, vponcova, vtrefny, wwoods
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Unspecified
Whiteboard: AcceptedFreezeException
Fixed In Version: python-blivet-3.3.0-2.fc33
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-09-22 07:37:06 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1766776, 1766777
Attachments:
Description Flags
anaconda logs
none
/tmp/dbus.log
none
/tmp/storage.log
none
lspci && lshw -short && lsblk
none
/tmp/program.log
none
lvm.log
none
/mnt/storage.log letting anaconda terminate without killall -9 anaconda
none
Error checking storage configuration - output from storage.log
none
journal, full, monotonic time - autopartition
none

Description Heðin 2020-09-05 23:08:04 UTC
Created attachment 1713840 [details]
anaconda logs

This BZ might need to be split into two ("quit error" and "storage probing error"), but for now it stays as one BZ because I encountered the issues in direct relation to each other.

Description of problem:
I was waiting at "Probing storage" forever, got tired of waiting, and hit the Quit button... the UI dimmed/grayed out and nothing happened.
I tried again, starting anaconda from a terminal with:
$ anaconda --liveinst
and got an exception and a traceback.

Version-Release number of selected component (if applicable):
Booted Fedora Workstation live, build 20200902.n.0

How reproducible:
Booted to live, started the installer, experienced the error, ran killall -9 anaconda, started anaconda from a terminal, and got the same error again (2 out of 2 attempts).

Steps to Reproduce:
1. boot 20200902.n.0 to live
2. terminal -> anaconda --liveinst
3. System contains 1 ssd with an existing Linux deployment, lsblk output follows:


Actual results:
"Probing storage" never finishes.
When I got tired of waiting, hitting the "Quit" button dimmed the GUI, and the terminal I launched anaconda from showed an exception.

Expected results:
#1 "probing storage" should finish and either succed or tell the user why it failed.
#2 pressing "quit" should quit anaconda gracefully or tell the user why anaconda can't be terminated.

Additional info:

Comment 1 Heðin 2020-09-05 23:08:37 UTC
Created attachment 1713841 [details]
/tmp/dbus.log

Comment 2 Heðin 2020-09-05 23:11:30 UTC
Created attachment 1713842 [details]
/tmp/storage.log

Comment 3 Heðin 2020-09-05 23:12:39 UTC
Created attachment 1713843 [details]
lspci && lshw -short && lsblk

Comment 4 Heðin 2020-09-05 23:14:50 UTC
Created attachment 1713844 [details]
/tmp/program.log

Comment 5 Heðin 2020-09-05 23:19:05 UTC
Created attachment 1713846 [details]
lvm.log

Comment 6 Chris Murphy 2020-09-06 00:13:22 UTC
clues are in storage.log

Preexisting Btrfs with 500+ snapshots, and I think we're getting some kind of dbus hang, possibly due to all the messaging (is dbus used between anaconda and blivet?).

Heðin, do you have the traceback file in /tmp? It's the file that starts with tb- and has random characters after it. Can you attach it?

Comment 7 Fedora Blocker Bugs Application 2020-09-06 00:20:05 UTC
Proposed as a Blocker for 33-beta by Fedora user chrismurphy using the blocker tracking app because:

 The quotes are from the Beta guided-partitioning criterion; there's a custom-partitioning equivalent, but by the description the crash happens before the guided/custom decision.

If 500+ preexisting snapshots is a valid configuration, then the installer must "Complete an installation using any combination of disk configuration options it allows the user to select"

If it's invalid, then the installer must "Reject or disallow invalid disk and volume configurations without crashing."

Comment 8 Chris Murphy 2020-09-06 00:47:31 UTC
Pretty clear blocker. But there's a but...

Maybe there's a simple explanation and fix. But just to throw it out there: while the file system itself can handle thousands of snapshots, we may not want to support installing to such an already complex file system, for various valid reasons. (It's a bug, but is it a blocker-worthy bug, and in this cycle?) In which case, is there a cheap way to count the number of subvolumes, apply a reasonable upper limit of, say, 50 subvolumes, and bail out without crashing?
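
For reference, counting subvolumes on a temporarily mounted btrfs file system is cheap with btrfs-progs; a minimal sketch, with an illustrative device and mount point rather than anything the installer actually runs:

$ sudo mount -o ro,subvolid=5 /dev/sda3 /mnt/check    # mount the top-level subvolume read-only
$ sudo btrfs subvolume list /mnt/check | wc -l        # all subvolumes, snapshots included
$ sudo btrfs subvolume list -s /mnt/check | wc -l     # snapshots only
$ sudo umount /mnt/check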

Another consideration is that we are within 5 days of go/no-go, and thus "Last minute blocker bugs" could apply; i.e., if it's determined to be a blocker, it'd become a Final blocker.
https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process#Exceptional_cases

Comment 9 Heðin 2020-09-06 00:57:22 UTC
There's no /tmp/tb-* file during the issue or after.
I get the following in the terminal:

$ sudo anaconda --liveinst
Starting installer, one moment...
anaconda 33.25.2-1.fc33 for anaconda bluesky (pre-release) started.
 * installation log files are stored in /tmp during the installation
 * shell is available on TTY2 and in second TMUX pane (ctrl+b, then press 2)
 * when reporting a bug add logs from /tmp as separate text/plain attachments

** (anaconda:75537): WARNING **: 20:51:01.148: AT-SPI: Could not obtain desktop path or name


** (anaconda:75537): WARNING **: 20:51:01.403: atk-bridge: GetRegisteredEvents returned message with unknown signature

** (anaconda:75537): WARNING **: 20:51:01.403: atk-bridge: get_device_events_reply: unknown signature

** (anaconda:75537): WARNING **: 20:51:01.403: atk-bridge: get_device_events_reply: unknown signature
Internal error:   Could not resolve keysym XF86FullScreen
Exception ignored in: <function _after_fork at 0x7ff5432414c0>
Traceback (most recent call last):
  File "/usr/lib64/python3.9/threading.py", line 1486, in _after_fork
    thread._reset_internal_locks(True)
  File "/usr/lib64/python3.9/threading.py", line 829, in _reset_internal_locks
    self._tstate_lock._at_fork_reinit()
AttributeError: 'NoneType' object has no attribute '_at_fork_reinit'

Comment 10 Heðin 2020-09-06 01:03:43 UTC
(In reply to Chris Murphy from comment #8)
> Pretty clear blocker. But there's a but...
> 
> Maybe there's a simple explanation and fix. But just to throw it out there,
> while the file system itself can handle thousands of snapshots, we may not
> want to support installing to such an already complex file system, for
> various valid reasons. (It's a bug but is it a blocking worthy bug, and in
> this cycle?) In which case is there a cheap way to count the number of
> subvolumes, consider a reasonable upper limit of say 50 subvolumes, and bail
> out without crashing? 
> 
> Another consideration is we are within 5 days of go/no-go, and thus "Last
> minute blocker bugs" could apply. i.e. if it's determined to be a blocker,
> it'd become a final blocker.
> https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process#Exceptional_cases

My intention is to do a clean install, so for my particular use case it would
be fine if anaconda bailed, stating something like:
">XXX snapshots are not supported. We need to wipe the drive to install,
do you want to continue?"

One note though: AFAIK SUSE has been defaulting to btrfs for some time with
snapper on top. They may have chosen to set up regular snapshots like I had.

Comment 11 Neal Gompa 2020-09-06 01:04:10 UTC
(In reply to Chris Murphy from comment #6)
> clues are in storage.log
> 
> Preexisting Btrfs with 500+ snapshots and I think we're getting some kind of
> dbus hang, possible due to all the messaging (is dbus used between anaconda
> and blivet?).
> 

I believe starting with Anaconda 33, storage is managed and manipulated as a service using udisks2.

Comment 12 Neal Gompa 2020-09-06 01:05:48 UTC
(In reply to Heðin from comment #10)
> (In reply to Chris Murphy from comment #8)
> > Pretty clear blocker. But there's a but...
> > 
> > Maybe there's a simple explanation and fix. But just to throw it out there,
> > while the file system itself can handle thousands of snapshots, we may not
> > want to support installing to such an already complex file system, for
> > various valid reasons. (It's a bug but is it a blocking worthy bug, and in
> > this cycle?) In which case is there a cheap way to count the number of
> > subvolumes, consider a reasonable upper limit of say 50 subvolumes, and bail
> > out without crashing? 
> > 
> > Another consideration is we are within 5 days of go/no-go, and thus "Last
> > minute blocker bugs" could apply. i.e. if it's determined to be a blocker,
> > it'd become a final blocker.
> > https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process#Exceptional_cases
> 
> My intention is to do a clean-install, so for my particular use-case it
> would 
> be fine if anaconda bailed, stating that>
> ">XXX snapshots are not supported, We need to wipe the drive to install, 
> do you want to continue?"
> 
> One not tho, afaik. Suse have been defaulting to btrfs for some time with 
> snapper on top. They may have choosen to setup regular snapshots like I had.

SUSE *definitely* does it this way. Its cleanup mechanism is... uhh, inconsistent, is the nicest way to put it.

This problem is one of the reasons I didn't elect to set up system snapshots automatically in Fedora...

Comment 13 Heðin 2020-09-06 01:16:13 UTC
Created attachment 1713848 [details]
/mnt/storage.log letting anaconda terminate without killall -9 anaconda

I tried to mount the btrfs volume to find out how many snapshots there actually are, and I had some issues with it...
dmesg output every time I try to mount the btrfs partition:
[12832.884670] BTRFS info (device sda3): disk space caching is enabled
[12832.889270] BTRFS info (device sda3): bdev /dev/sda3 errs: wr 0, rd 0, flush 0, corrupt 7159, gen 0
[12833.039543] BTRFS info (device sda3): enabling ssd optimizations

Comment 14 Chris Murphy 2020-09-06 01:28:16 UTC
(In reply to Heðin from comment #10)

> My intention is to do a clean-install, so for my particular use-case it
> would 
> be fine if anaconda bailed, stating that>
> ">XXX snapshots are not supported, We need to wipe the drive to install, 
> do you want to continue?"

Totally valid use case that we should make easy. I'm trying to reproduce now with a 500+ snapshot file system.

Comment 15 Heðin 2020-09-06 01:33:52 UTC
I let anaconda "hang" at probing storage since the last logfile was uploaded, and about 2 minutes ago it completed the "probing storage" task... so it seems it's just dead slow rather than crashing.

Does this also explain why "Quit" didn't quit in an expected fashion?

Comment 16 Heðin 2020-09-06 01:42:06 UTC
Created attachment 1713849 [details]
Error checking storage configuration - output from storage.log

anaconda started with the following parameters:
$ sudo anaconda --liveinst --loglevel info --nomount

Log output is from storage.log, while doing the following steps:
#1 select Installation Destination
#2 select the btrfs disk
#3 press Done
#4 select Reclaim space
#5 select Delete all
#6 select Reclaim space
#7 wait for the error under Installation Destination "Error checking storage configuration"

Comment 17 Chris Murphy 2020-09-06 01:45:01 UTC
(In reply to Heðin from comment #13)
> [12832.889270] BTRFS info (device sda3): bdev /dev/sda3 errs: wr 0, rd 0,
> flush 0, corrupt 7159, gen 0

This is just a counter; we have no way of knowing if it's a current problem without a complete dmesg attached. But the storage log shows it's mounted without complaint, so I'm not sure that's the issue. Right now I'm on the hub with Installation Destination grayed out; the status under it has said "probing storage" for 5 minutes so far...

Creating 500 snapshots with a bash loop took about 20 seconds, and listing them takes no time. So I think the subvolume list/search code in the installer's helper program is just kind of expensive right now. The desktop environment is responsive, no problem there. And it takes about 8 minutes to complete the probing. Memory consumption is also OK.

MiB Mem :   2918.3 total,    190.4 free,   1039.9 used,   1687.9 buff/cache
MiB Swap:   1459.0 total,   1459.0 free,      0.0 used.   1513.0 avail Mem 
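
For anyone who wants a similar test setup, a minimal sketch of the kind of bash loop referenced above (device, mount point, and subvolume names are illustrative, not the exact ones used here):

$ sudo mount /dev/sda3 /mnt/test
$ sudo btrfs subvolume create /mnt/test/base
$ for i in $(seq 1 500); do sudo btrfs subvolume snapshot -r /mnt/test/base /mnt/test/snap-$i; done
$ sudo umount /mnt/test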

I run into no further resistance using the Automatic + Reclaim Space path.

With Custom partitioning I get a brief hang of about 6 seconds, but then no further resistance, including listing all 500 snapshots.

With Advanced Custom partitioning I get a longer hang, 15 seconds, and then a dialog that the hard drive is not responding: force quit or wait. I can't get this UI to cooperate further; it never progresses but also doesn't crash.

Comment 18 Chris Murphy 2020-09-06 01:56:33 UTC
OK I'm wrong. I left it alone and some 10-15 minutes later, Advanced Custom also displays all the snapshots.

Comment 19 Chris Murphy 2020-09-06 02:00:37 UTC
My times need to be treated with some skepticism because it's a 3G RAM VM.

Comment 20 Chris Murphy 2020-09-06 03:08:33 UTC
Created attachment 1713852 [details]
journal, full, monotonic time - autopartition

>I run into no further resistance using the Automatic + Reclaim Space path.

Clicking "begin installation" yields another significant delay. The logs show individual subvolume tear down and I guess deletion, with a umount, and mount, in between each. It does eventually get there, and the installation completes and boots.

Comment 21 Chris Murphy 2020-09-06 04:01:53 UTC
Custom UI has a "delete all" option that will delete the two sets of 'unknown' tranches of subvolumes/snapshots. It took a few iterations but I was able to delete everything. The expensive teardown behavior is the same as autopart. Custom install does eventually complete, installation works fine.

In the Advanced Custom UI, I couldn't figure out how to delete the whole btrfs volume, only each snapshot one by one. There's a 5+ minute wait after each delete confirmation. Unworkable in this case.

Comment 22 Chris Murphy 2020-09-06 19:13:46 UTC
(In reply to Heðin from comment #16)
> #7 wait for the error under Installation Destination "Error checking storage
> configuration"

This looks like a separate problem...

From the attached log->
DEBUG:anaconda.modules.storage.partitioning.validate:Found sanity error: /dev/sda2 is currently mounted and cannot be used for the installation. Please unmount it and retry.

OK so what's sda2?

DEBUG:blivet:             DeviceTree.get_device_by_name returned existing 953,87 GiB disk sda (14) with existing gpt disklabel
DEBUG:blivet:             DeviceTree.get_device_by_name returned existing 16 MiB partition sda1 (24) with existing vfat filesystem
DEBUG:blivet:             DeviceTree.get_device_by_name returned existing 512 MiB partition sda2 (34) with existing biosboot
DEBUG:blivet:             DeviceTree.get_device_by_name returned existing 953,35 GiB partition sda3 (40) with existing btrfs filesystem

Weird. That is an unusually gigantic biosboot. And if it's really biosboot, it has no file system on it to mount.

Heðin, can you reproduce this? Was sda2 mounted before starting the installer? I wonder if it has some bogus/stale format on it, and maybe GNOME Shell/gvfs/udisks is automounting it? If you can reproduce this, please file a new bug against anaconda, cc: me on it. And separately attach anaconda.log, storage.log, storage.state, program.log - from /tmp in the installer image booted environment. And also attach 'journalctl -b -o short-monotonic --no-hostname > bugID_journal.txt'. And also paste output of 'blkid' into a comment. Thanks.

Comment 23 Chris Murphy 2020-09-06 19:41:12 UTC
Summary:

Existing installation has many snapper snapshots (500+). Anaconda's enumeration of snapshots happens during "probing storage", before Installation Destination becomes available for user interaction. This takes 5-15 minutes (depending on the number of snapshots and hardware performance). Later, once a complete wipe of the target drive is requested, another expensive iterative teardown happens. But eventually the installation succeeds for the Automatic and Custom paths; it's unworkable for Advanced-Custom partitioning. (See comments 20 and 21 for summaries of the partitioning paths.) Further, I'll emphasize this is not happening because of Btrfs by default; it would happen with LVM+ext4 by default too.

Is it a blocker?
A reasonable user might assume the installer is stuck. This is a use case we want to better optimize for, but the scope of that work needs assessment. It's not likely getting fixed for Beta, possibly not for F33 Final.

How common is this configuration?
Pretty common with openSUSE, where 2-3 snapshots are taken for each configuration change (add/remove/modify) via the package manager. In a few weeks, there are easily several hundred snapshots.

Comment 24 Heðin 2020-09-06 23:31:29 UTC
(In reply to Chris Murphy from comment #22)
> (In reply to Heðin from comment #16)
> > #7 wait for the error under Installation Destination "Error checking storage
> > configuration"
> 
> This looks like a separate problem...
> 
> From the attached log->
> DEBUG:anaconda.modules.storage.partitioning.validate:Found sanity error:
> /dev/sda2 is currently mounted and cannot be used for the installation.
> Please unmount it and retry.
> 
> OK so what's sda2?
> 
> DEBUG:blivet:             DeviceTree.get_device_by_name returned existing
> 953,87 GiB disk sda (14) with existing gpt disklabel
> DEBUG:blivet:             DeviceTree.get_device_by_name returned existing 16
> MiB partition sda1 (24) with existing vfat filesystem
> DEBUG:blivet:             DeviceTree.get_device_by_name returned existing
> 512 MiB partition sda2 (34) with existing biosboot
> DEBUG:blivet:             DeviceTree.get_device_by_name returned existing
> 953,35 GiB partition sda3 (40) with existing btrfs filesystem
> 
> Weird. That is an unusually gigantic biosboot. And if it's really biosboot,
> it has no file system on it to mount.
> 
> Heðin, can you reproduce this? Was sda2 mounted before starting the
> installer? I wonder if it has some bogus/stale format on it, and maybe GNOME
> Shell/gvfs/udisks is automounting it? If you can reproduce this, please file
> a new bug against anaconda, cc: me on it. And separately attach
> anaconda.log, storage.log, storage.state, program.log - from /tmp in the
> installer image booted environment. And also attach 'journalctl -b -o
> short-monotonic --no-hostname > bugID_journal.txt'. And also paste output of
> 'blkid' into a comment. Thanks.

Your assumption is correct; I mounted it prior to launching anaconda.
A reboot later and, while slow, everything was working and I have managed to install F33. But if I did not know how long this use case would take, I would have assumed that the installer was broken in some way.

Comment 25 Chris Murphy 2020-09-08 07:07:45 UTC
Tried to reproduce this with LVM thinp. The thin metadata pool got to 91% at 335 snapshots and refused to create more. Subsequently launching anaconda, it hangs indefinitely (I gave up at 15 minutes). Anaconda doesn't even get to the language menu. Only program.log has any length, and its last line is:

02:44:04,792 INF program: Running [2] lvm lvs --noheadings --nosuffix --nameprefixes --unquoted --units=b -a -o vg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype,origin,pool_lv,data_lv,metadata_lv,role,move_pv,data_percent,metadata_percent,copy_percent --config= devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] }  ...
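
For anyone trying to reproduce the LVM thinp variant, a minimal sketch of one way to build such a setup (device, VG/LV names, and sizes are illustrative, not the exact ones used here):

$ sudo pvcreate /dev/sdb
$ sudo vgcreate testvg /dev/sdb
$ sudo lvcreate --type thin-pool -L 20G -n pool testvg
$ sudo lvcreate --thin -V 10G -n root testvg/pool
$ sudo mkfs.ext4 /dev/testvg/root
$ for i in $(seq 1 400); do sudo lvcreate -s -n snap$i testvg/root; done    # thin snapshots need no size argument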

Comment 26 Adam Williamson 2020-09-08 15:59:00 UTC
This has -4 Beta in the ticket - https://pagure.io/fedora-qa/blocker-review/issue/64 - so rejecting as a Beta blocker. Leaving Final proposal intact.

Comment 27 Geoffrey Marr 2020-09-08 21:21:08 UTC
Discussed during the 2020-09-08 blocker review meeting: [0]

The decision to delay the classification of this as a blocker bug was made as this seems like a reasonable blocker candidate but we're concerned about the extent to which it can realistically be addressed in a practical timeframe for F33. We will punt for input from installer team on this.

[0] https://meetbot.fedoraproject.org/fedora-blocker-review/2020-09-08/f33-blocker-review.2020-09-08-16.00.txt

Comment 28 Adam Williamson 2020-09-08 23:53:31 UTC
On that note: installer/storage folks, can you please give us your assessment of this issue and what would be realistically possible in the F33 timeframe for the two cases Chris identified - lots of btrfs snapshots, and lots of LVM thinp snapshots?

I'm re-assigning this back to the default assignee as I don't think it makes sense for it to be assigned to Neal.

Comment 29 Vojtech Trefny 2020-09-09 11:17:34 UTC
I was able to identify the problem in blivet (at least for btrfs snapshots, I have not tested with LVM snapshots yet). I'm still working on the fix but I was already able to reduce the storage scanning time to about 1 minute. It's still too much but definitely better than 10 minutes.

Comment 30 Vojtech Trefny 2020-09-09 13:58:31 UTC
upstream PR: https://github.com/storaged-project/blivet/pull/897

updates image: https://vtrefny.fedorapeople.org/img/rhbz1876162.img

Still work in progress, but so far it seems to be working and reduces the storage scan time to less than 1 minute in my VM, which is not that bad for a system with 500 devices IMHO.

Comment 31 Chris Murphy 2020-09-09 17:44:37 UTC
Wow! Awesome! Vojtech, is it a safe enough fix to take as a Beta freeze exception? Or wait until after Beta? I'll give the updates image a test soon.

Comment 32 Adam Williamson 2020-09-09 18:31:58 UTC
Removing udev settles always makes me a bit nervous, because they usually got put in for a reason. But I'm not sure it's better to wait till Final...if anything it'd be good to have the change in Beta so more folks will test it out on varied hardware and we'll catch any issues caused by the change. So I'm gonna go ahead and propose this as a Beta FE.

Comment 33 Vojtech Trefny 2020-09-10 08:32:30 UTC
(In reply to Adam Williamson from comment #32)
> Removing udev settles always makes me a bit nervous, because they usually
> got put in for a reason.

I think it should be safe to remove these. But I'm also a little nervous :-)

Comment 34 Chris Murphy 2020-09-10 16:57:25 UTC
I can confirm the updates image improves things quite a bit. But there's a long but...

I talked to Josef Bacik about this a bit, and we don't think udev is a factor except when devices are physically added/removed. Maybe it's there to account for the user doing silly things during an installation, like adding/removing devices?

But that's not the most expensive thing in a real-world scenario. In my contrived case, it's 500 snapshots of the same thing with no differences between them. Therefore the subvolume deletion is pretty cheap, and udevadm settle looks like the only expensive thing.

In a real world case, snapshots will have differences between them. That makes them expensive to remove (compared to creating them). Deleting even 20 snapshots with many differences between them will be more expensive than deleting hundreds of snapshots with no differences.

There is a btrfs-cleaner kernel thread that does this expensive task in the background, but only while the fs stays mounted. The cleaner thread can delay umount from succeeding, but once umount succeeds, the interruption of the cleaner task leaves the file system in a state that causes the next mount to be more expensive: dealing with orphaned inodes and restarting the snapshot deletion. We aren't seeing any of this in my example because my example is too simple.

Ideally:
a) if the final step of teardown will be wipefs, just skip to the wipefs and avoid all the btrfs subvolume removals. Since subvolumes aren't block devices, no individual teardown is needed.
b) if the file system isn't being wiped, do 'btrfs subvolume delete $LIST' to batch the deletions all at once.

That's probably not trivial work.
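
A minimal sketch of what (b) could look like from a shell, since 'btrfs subvolume delete' accepts many paths in a single invocation (the mount point and the snapper-style snapshot layout are assumptions for illustration only):

$ sudo mount -o subvolid=5 /dev/sda3 /mnt/target
$ sudo btrfs subvolume delete /mnt/target/.snapshots/*/snapshot    # one command, many snapshot deletions
$ sudo umount /mnt/target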

The big performance hit is that in the mount->umount loop, each mount is a first mount and each umount is a final umount for the file system. The first mount involves a bootstrap and the resumption of previously interrupted tasks; they were interrupted because the umount was the last umount. If the mounts/umounts in the loop aren't the first or last, they become merely pseudo-bind mounts behind the scenes - there'd be no interruption or bootstrapping cost.

The way to make the existing loop much cheaper would be to mount any part of this btrfs file system somewhere at the start and then just leave it mounted there. If the file system never does a final umount, all of anaconda's umount/mount events become much cheaper.
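
A minimal sketch of that idea, assuming an out-of-the-way mount point under /run (path and device are illustrative; this is not something anaconda currently does):

$ sudo mkdir -p /run/btrfs-keepalive
$ sudo mount -o subvolid=5 /dev/sda3 /run/btrfs-keepalive
# ... run the installer; its own mount/umount cycles are now neither the first mount nor the last umount ...
$ sudo umount /run/btrfs-keepalive    # only after all storage teardown is finished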

Comment 35 Vojtech Trefny 2020-09-11 12:03:34 UTC
(In reply to Chris Murphy from comment #34)
> a) if the final step of tear down will be wipefs, just skip to the wipefs.
> Avoid all the btrfs subvol removals.
> 

I think we should take this one step at a time. The fix I did was not for removing, but simply for the initial storage scan in Anaconda -- the `teardown_all` function is used just to make sure all block devices are unmounted and stopped/closed (for LUKS, MD, etc.), and Anaconda runs it during the initial storage scan. So I'd say removing the snapshots is a different (but definitely related) issue. Changing the way we remove devices in blivet is possible, but it's a way bigger change than removing a few udev settles. I think we might need to completely redo snapshot management and representation in blivet to make this work, and that's definitely out of scope for F33.

Comment 36 Chris Murphy 2020-09-11 16:02:31 UTC
(In reply to Vojtech Trefny from comment #35)
> I think we should take this one step at a time.

I agree completely. If removing some udev settles makes you even a little bit nervous, I support postponing it until the problem and solution are better understood (after Fedora 33).

I also support mounting the btrfs file system somewhere (other than on /mnt/sysimage, perhaps in /run) persistently, making no other changes. If it remains persistently mounted somewhere, the mount/umount loops become much faster and snapshot cleanup can happen normally and without interruption. From a Btrfs perspective that makes things much simpler, but I can't estimate how much work it is. Therefore, I have no opinion on whether it's practical for Fedora 33 or should also wait.

Comment 37 Geoffrey Marr 2020-09-14 20:21:52 UTC
Discussed during the 2020-09-14 blocker review meeting: [0]

The decision to classify this bug as an "AcceptedFreezeException (Beta)" was made as it is a noticeable issue that cannot be fixed with an update.

[0] https://meetbot.fedoraproject.org/fedora-blocker-review/2020-09-14/f33-blocker-review.2020-09-14-16.01.txt

Comment 38 Geoffrey Marr 2020-09-14 20:25:23 UTC
Discussed during the 2020-09-14 blocker review meeting: [0]

The decision to delay the classification of this as a blocker bug was made as we would like more evaluation from anaconda team and also testing with a real-world case (not identical snapshots) to figure out what's really plausible in an f33 timeframe before voting on this.

[0] https://meetbot.fedoraproject.org/fedora-blocker-review/2020-09-14/f33-blocker-review.2020-09-14-16.01.txt

Comment 39 Vojtech Trefny 2020-09-15 12:08:23 UTC
We discussed this a little bit more internally and I think removing the udev settle calls should be safe so I'll do a new build for F33 later today or tomorrow. I did some more manual testing and also asked Anaconda team to run their test suite with patched blivet and so far everything looks good.

Comment 40 Fedora Update System 2020-09-16 06:57:15 UTC
FEDORA-2020-0a1faa5d22 has been submitted as an update to Fedora 33. https://bodhi.fedoraproject.org/updates/FEDORA-2020-0a1faa5d22

Comment 41 Fedora Update System 2020-09-16 18:57:52 UTC
FEDORA-2020-0a1faa5d22 has been pushed to the Fedora 33 testing repository.
In short time you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2020-0a1faa5d22`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2020-0a1faa5d22

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

Comment 42 Adam Williamson 2020-09-21 19:35:15 UTC
Discussed at 2020-09-21 blocker review meeting: https://meetbot-raw.fedoraproject.org/fedora-blocker-review/2020-09-21/f33-blocker-review.2020-09-21-16.00.html . Again, we delayed the decision, because we still don't really know how "real world" cases will behave here. The udev settle update will go into Beta (assuming it passes openQA tests, which I just triggered manually), so we'll see how that goes.

Comment 43 Fedora Update System 2020-09-22 07:37:06 UTC
FEDORA-2020-0a1faa5d22 has been pushed to the Fedora 33 stable repository.
If problem still persists, please make note of it in this bug report.

Comment 44 Chris Murphy 2020-10-04 07:16:43 UTC
Related:

A way to recursively delete subvolumes/snapshots:
https://github.com/kdave/btrfs-progs/blob/master/libbtrfsutil/python/subvolume.c#L415


The BTRFS_IOC_SUBVOL_CREATE ioctl can create/delete subvolumes directly without needing a path, and I think the _fd variant mentioned in the libbtrfs C API does this too; I don't see that variant in the Python API. But if it would be useful, we can add it.
https://github.com/kdave/btrfs-progs/tree/master/libbtrfsutil
https://github.com/kdave/btrfs-progs/blob/master/libbtrfsutil/python/subvolume.c#L323
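
For illustration, the recursive delete in those Python bindings can be driven from a shell one-liner; a sketch, assuming the libbtrfsutil Python module is available and using an illustrative subvolume path:

$ sudo python3 -c "import btrfsutil; btrfsutil.delete_subvolume('/mnt/target/.snapshots', recursive=True)"    # path is an example only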

Comment 45 Adam Williamson 2020-10-04 17:16:47 UTC
Chris, if you're able to test a more "real world" scenario here and find that the udev settle change alone isn't enough to make it perform acceptably, please do re-open it. Thanks!

Comment 46 Adam Williamson 2020-10-23 21:27:09 UTC
Bug fixed, commonbugs not needed. If we do find that real-world btrfs snapshot scenarios are still a problem, we can file and document that separately, I guess.