Bug 2232805
Summary: libvirt not immediately working after Fedora 38 to 39 upgrade
Product: Fedora
Component: libvirt
Version: 39
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: low
Priority: unspecified
Whiteboard: AcceptedFreezeException
Fixed In Version: libvirt-9.7.0-1.fc39
Reporter: Ian Laurie <nixuser>
Assignee: Cole Robinson <crobinso>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: abologna, awilliam, berrange, clalancette, crobinso, geraldo.simiao.kutz, jforbes, laine, libvirt-maint, mkletzan, virt-maint
Clones: 2236500 (view as bug list)
Bug Blocks: 2143445
Last Closed: 2023-09-12 22:34:04 UTC
Description (Ian Laurie, 2023-08-18 23:58:01 UTC)
Martin Kletzander:
Would you mind checking what versions of libvirt you upgraded from and to? I wonder if there is a connection to Bug 2210058.

Martin Kletzander:
The previous comment got sent before finishing my thoughts. Anyway, @abologna, do you see any possibility of this being connected to the bug I mentioned in the previous comment? I hope not; there were multiple changes in the spec files.

Ian Laurie:
(In reply to Martin Kletzander from comment #1)
> Would you mind checking what versions of libvirt you upgraded from and to?

Certainly. From the DNF log file:

2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-client.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-client.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-config-network.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-config-network.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-interface.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-interface.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-network.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-network.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-nodedev.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-nodedev.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-nwfilter.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-nwfilter.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-qemu.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-qemu.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-secret.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-secret.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-core.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-core.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-disk.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-disk.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-gluster.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-gluster.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-iscsi.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-iscsi.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-iscsi-direct.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-iscsi-direct.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-logical.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-logical.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-mpath.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-mpath.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-rbd.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-rbd.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-scsi.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-scsi.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-zfs.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-driver-storage-zfs.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-kvm.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon-kvm.x86_64 9.6.0-1.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-dbus.x86_64 1.4.0-7.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-dbus.x86_64 1.4.0-8.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-glib.x86_64 4.0.0-8.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-glib.x86_64 4.0.0-9.fc39 will be an upgrade
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-libs.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-libs.x86_64 9.6.0-1.fc39 will be an upgrade
. . .
2023-08-17T13:00:28+1000 DEBUG ---> Package python3-libvirt.x86_64 9.0.0-2.fc38 will be upgraded
2023-08-17T13:00:28+1000 DEBUG ---> Package python3-libvirt.x86_64 9.6.0-1.fc39 will be an upgrade

Ian Laurie:
I just did another in-place upgrade of Fedora 38 to 39 on different hardware and the problems reported above did not happen. The same versions of libvirt were involved. I was able to use virt-manager to launch VMs immediately after the upgrade.

Andrea Bolognani:
This is concerning. libvirt-9.6.0-1.fc39 already contains the fix for Bug 2210058, so the issue shouldn't manifest when upgrading to that version, unless I've messed something up while implementing the new logic. At the same time, it's very confusing that you would hit the issue in the first place, given that you were apparently using split daemons: the problem is supposed to show up primarily for people who are using the monolithic daemon instead.

Had you made any customizations to the configuration of libvirt and its services on the host where the failure showed up, or were you using the stock Fedora 38 configuration? Was that a system that had been installed with Fedora 38, or one that had been installed with a much older version of Fedora and had already gone through several upgrades?

Any chance that you had checked whether virtqemud.service and virtnetworkd.socket were actually disabled before enabling them? What is the status of libvirtd.service and the various libvirtd*.socket units on your system right now? Do you have the error message that virt-manager reported initially, when it was unable to connect, written down anywhere?

Ian Laurie:
> Had you made any customizations to the configuration of libvirt and
> its services on the host where the failure showed up, or were you
> using the stock Fedora 38 configuration?

I believe it was stock. Both systems are btrfs. My notes document the following install procedure:

sudo dnf -y install qemu-kvm virt-manager virt-viewer virt-install
sudo dnf -y install cockpit-machines
sudo systemctl enable virtqemud
sudo systemctl start virtqemud
sudo chattr -R +C /var/lib/libvirt/images

> Was that a system that had
> been installed with Fedora 38, or one that had been installed with a
> much older version of Fedora and had already gone through several
> upgrades?

The system that did not show the failure was built new with Fedora 33 and was updated during the beta branch periods to every version up to and including 39. However, I cannot remember when I set up QEMU/KVM; it may not have happened until 34 or 35.

I am less confident regarding the system that showed the failure. It goes back earlier than 33, but at some point around 33/34/35 I had to rebuild it because I lost a hard drive. It then became btrfs. I set up QEMU/KVM on this system within days of the other system. Both systems were always updated early in beta branch periods and saw every version up to 39.

> Any chance that you had checked whether virtqemud.service and
> virtnetworkd.socket were actually disabled before enabling them?

Other than them not working, as indicated by the error messages I reported, I don't know. I also cannot remember whether the failure was observed after the final boot of the upgrade or whether I rebooted again later. I noticed the error the following day when I tried to launch a QEMU/KVM VM. It may not have been rebooted since the upgrade (i.e. it may have been the first boot into 39).

> What
> is the status for libvirtd.service and the various libvirtd*.socket
> on your system right now?

Both working. I can run VMs fine following the intervention documented in the original report.

> Do you have the error message that
> virt-manager reported initially, when it was unable to connect,
> written down anywhere?

I copy/pasted what I saw into the original report verbatim. Beyond that I don't have anything recorded.

I doubt it is important, but just in case, here are some observations regarding the platforms.

System showing the problem:
Dell Precision T5610
2 x Intel(R) Xeon(R) CPU E5-2603 v2 @ 1.80GHz (8 cores total)
NVIDIA Corporation GK104 [GeForce GTX 760] (rev a1) (but using default Nouveau drivers)
16G RAM
3 x Mechanical HD

System that worked:
Intel NUC i7 NUC11PAH
1 x 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
Intel Corporation TigerLake-LP GT2 [Iris Xe Graphics] (rev 01)
32G RAM
2 x SSD

The system that did not exhibit the problem is considerably faster than the older T5610, has double the RAM, and uses SSDs; the performance difference is very significant. Both systems also run RPM Fusion's VirtualBox. On the NUC I still have IBT protection explicitly turned off due to https://bugzilla.rpmfusion.org/show_bug.cgi?id=6688 (haven't got around to re-enabling it). The older T5610 CPUs don't support IBT.
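For anyone hitting something similar, the pre-change state Andrea asks about above can be captured non-destructively before enabling anything. A minimal sketch using standard systemd queries (unit names as used in this thread):

$ systemctl is-enabled virtqemud.service virtqemud.socket virtnetworkd.socket
$ systemctl is-active virtqemud.service virtqemud.socket virtnetworkd.socket
$ systemctl list-unit-files 'virt*' 'libvirtd*'

Recording this output before running any "systemctl enable" commands preserves exactly the evidence that was missing in this report.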
Martin Kletzander:
Since you experienced this on one system and not on the other, I would guess there was some change in the setup, maybe something as small as a preset override or similar. I do not see how we could have disabled the service; the other option is that there was some failure to restart or reload.

Andrea Bolognani:
(In reply to Ian Laurie from comment #6)
> I believe it was stock. Both systems are btrfs. My notes document the
> following install procedure:
>
> sudo dnf -y install qemu-kvm virt-manager virt-viewer virt-install
> sudo dnf -y install cockpit-machines
> sudo systemctl enable virtqemud
> sudo systemctl start virtqemud
> sudo chattr -R +C /var/lib/libvirt/images

The step where you manually enable virtqemud strikes me as odd. You shouldn't have needed to do that. Do you remember whether you added that step as a reaction to it not being automatically enabled? Manually starting the service after installation, on the other hand, is perfectly fine if you want to be able to access it right away instead of after the next reboot.

> The system that did not show the failure was built new with Fedora 33 and
> was updated during the beta branch periods to every version up to and
> including 39. However, I cannot remember when I set up QEMU/KVM; it may not
> have happened until 34 or 35.
>
> I am less confident regarding the system that showed the failure. It goes
> back earlier than 33, but at some point around 33/34/35 I had to rebuild it
> because I lost a hard drive. It then became btrfs. I set up QEMU/KVM on
> this system within days of the other system.

Assuming that both systems were on the same Fedora release and fully up to date when you set up libvirt on them, this sort of rules out the only half-formed theory I had to explain the difference in behavior. You see, modular daemons have only been the default starting with Fedora 35[1], and existing installations were explicitly not migrated over; so if one of the systems had been installed with Fedora 35 or newer and the other one with Fedora 34 or older, they would use modular daemons and the monolithic daemon, respectively. But, at least under the assumption above, that shouldn't have been the case.

> > Any chance that you had checked whether virtqemud.service and
> > virtnetworkd.socket were actually disabled before enabling them?
>
> Other than them not working, as indicated by the error messages I
> reported, I don't know.

Yeah, absolutely fair enough. It unfortunately makes it harder to figure out what went wrong though :(

> I also cannot remember whether the failure was observed after the final boot
> of the upgrade or whether I rebooted again later.
>
> I noticed the error the following day when I tried to launch a QEMU/KVM VM.
> It may not have been rebooted since the upgrade (i.e. it may have been the
> first boot into 39).

Whether it was the first boot for Fedora 39 or a subsequent one shouldn't have made a difference anyway, as the configuration would have been the same.

> > What
> > is the status for libvirtd.service and the various libvirtd*.socket
> > on your system right now?
>
> Both working. I can run VMs fine following the intervention documented in
> the original report.

Note that I've asked about libvirtd and its sockets, not virtqemud and its sockets; that is, the monolithic daemon, not the modular ones. I expect that, in your deployment, the former would be disabled and the latter would be enabled. Can you please confirm that?

Unfortunately I have to admit that I have basically no idea what could have gone wrong :(

[1] https://fedoraproject.org/wiki/Changes/LibvirtModularDaemons

Ian Laurie:
> The step where you manually enable virtqemud strikes me as odd. You
> shouldn't have needed to do that. Do you remember whether you added
> that step as a reaction to it not being automatically enabled?

I may not have needed to enable it. It was probably that enabling it to be sure was easier than checking whether it was already enabled.

> I expect that, in your deployment, the former would be disabled and
> the latter would be enabled. Can you please confirm that?

Not sure how to do that. If this isn't it, let me know and I'll issue the commands you provide.

zuke$ systemctl status virtqemud.socket
● virtqemud.socket - Libvirt qemu local socket
     Loaded: loaded (/usr/lib/systemd/system/virtqemud.socket; enabled; preset: disabled)
     Active: active (running) since Wed 2023-08-23 09:20:47 AEST; 6 days ago
   Triggers: ● virtqemud.service
     Listen: /run/libvirt/virtqemud-sock (Stream)
     CGroup: /system.slice/virtqemud.socket

Aug 23 09:20:47 zuke systemd[1]: Listening on virtqemud.socket - Libvirt qemu local socket.
. . .
zuke$ systemctl status libvirtd.socket
Unit libvirtd.socket could not be found.
zuke$
. . .
zuke$ systemctl list-units | grep qemu
virtqemud.service        loaded active running Virtualization qemu daemon
virtqemud-admin.socket   loaded active running Libvirt qemu admin socket
virtqemud-ro.socket      loaded active running Libvirt qemu local read-only socket
virtqemud.socket         loaded active running Libvirt qemu local socket
zuke$ systemctl list-units | grep libvirt
zuke$
. . .
zuke$ systemctl list-units | grep libvirt
zuke$

> Unfortunately I have to admit that I have basically no idea what
> could have gone wrong :(

I think this will have to be closed as "not a bug" and we will have to assume something weird happened on one of my systems that we'll never identify. However, if you can think of a command I can issue to provide more insight, please let me know.

Andrea Bolognani:
(In reply to Ian Laurie from comment #9)
> zuke$ systemctl status libvirtd.socket
> Unit libvirtd.socket could not be found.
> zuke$

Interesting that libvirtd.socket...

> zuke$ systemctl list-units | grep libvirt
> zuke$

... and all of the other units for libvirtd are apparently no longer present on your machine. Can you please run

$ systemctl list-unit-files | grep libvirtd

and post the output here? The fact that these units are not found would seem to indicate that the libvirt-daemon package is not installed on the system. Can you please run

$ rpm -q libvirt-daemon

as well? This is particularly confusing because earlier (Comment 3) you included this bit from the dnf log

2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon.x86_64 9.0.0-3.fc38 will be upgraded
2023-08-17T12:29:37+1000 DEBUG ---> Package libvirt-daemon.x86_64 9.6.0-1.fc39 will be an upgrade

which indicates that the package was present on the system before the upgrade, and was upgraded rather than being removed as part of it, so it should still be on your system.

Can you please compare the list of packages installed on the two machines and see if there's any obvious difference? As well as dig further into the dnf log to try and understand when and why the libvirt-daemon package was removed? As of 9.6.0 the libvirt-daemon package is optional, but I still wouldn't expect it to be removed during an upgrade just because of that.

Ian Laurie:
System that did not exhibit the issue:
adama$ systemctl list-unit-files | grep libvirtd
libvirtd.service enabled disabled
libvirtd-admin.socket disabled disabled
libvirtd-ro.socket enabled disabled
libvirtd-tcp.socket disabled disabled
libvirtd-tls.socket disabled disabled
libvirtd.socket enabled disabled
adama$
adama$ rpm -q libvirt-daemon
libvirt-daemon-9.6.0-1.fc39.x86_64
adama$
System that did exhibit the issue:
zuke$ systemctl list-unit-files | grep libvirtd
zuke$
zuke$ rpm -q libvirt-daemon
package libvirt-daemon is not installed
zuke$
>which indicates that the package was present on the system before the
>upgrade, and was upgraded rather than being removed as part of it, so
>it should still be on your system.
This part of the mystery has just been solved.
After the upgrade, as part of the cleanup, I usually run "sudo dnf autoremove" in case some packages are no longer required.
It looks like I did this on zuke but not [yet] on adama. As an experiment, this is what happens if I try it on adama:
adama$ sudo dnf autoremove
[sudo] password for admin:
Last metadata expiration check: 3:04:26 ago on Wed 30 Aug 2023 03:59:03 AM AEST.
Dependencies resolved.
==================================================================================================================================
Package Architecture Version Repository Size
==================================================================================================================================
Removing:
apache-commons-collections noarch 3.2.2-30.fc39 @fedora 620 k
apache-commons-lang3 noarch 3.12.0-9.fc39 @fedora 702 k
bcache-tools x86_64 1.1-5.fc39 @fedora 156 k
clang15-libs x86_64 15.0.7-4.fc39 @fedora 106 M
clang15-resource-filesystem x86_64 15.0.7-4.fc39 @fedora 0
gdisk x86_64 1.0.9-6.fc39 @fedora 710 k
libpmemobj x86_64 1.13.1-1.fc39 @fedora 368 k
libunistring1.0 x86_64 1.0-2.fc39 @fedora 1.7 M
libvirt-daemon x86_64 9.6.0-1.fc39 @fedora 558 k
libxcrypt-compat x86_64 4.4.36-2.fc39 @fedora 193 k
llvm15-libs x86_64 15.0.7-4.fc39 @fedora 109 M
python-setuptools-wheel noarch 67.7.2-5.fc39 @fedora 725 k
python3-lazy-object-proxy x86_64 1.9.0-5.fc39 @fedora 119 k
python3-typed_ast x86_64 1.5.5-2.fc39 @fedora 668 k
python3-wrapt x86_64 1.14.1-5.fc39 @fedora 195 k
velocity noarch 1.7-41.fc39 @fedora 439 k
Transaction Summary
==================================================================================================================================
Remove 16 Packages
Freed space: 222 M
Is this ok [y/N]: n
Operation aborted.
adama$
So libvirt-daemon was almost certainly removed by the autoremove after the upgrade.
But given how I am running QEMU/KVM VMs (via virt-manager or cockpit-machines), I don't seem to actually need libvirt-daemon.
What went wrong for me seems to be that virtqemud wasn't running and virtnetworkd.socket went missing.
This is unlikely to be libvirt-related, if I am understanding things correctly?
Technically, then, I think I filed this issue against the wrong package.
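For readers in the same spot, dnf can both confirm when the package went away and pin it so a future autoremove leaves it alone. A minimal sketch, assuming dnf4 syntax; the transaction id 42 is a placeholder, not one from this report:

$ sudo dnf history list libvirt-daemon   # find the transaction that removed it
$ sudo dnf history info 42               # inspect that transaction (42 is hypothetical)
$ sudo dnf mark install libvirt-daemon   # mark as user-installed so autoremove keeps it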
Andrea Bolognani:
(In reply to Ian Laurie from comment #11)
> System that did not exhibit the issue:
>
> adama$ systemctl list-unit-files | grep libvirtd
> libvirtd.service enabled disabled
> libvirtd-admin.socket disabled disabled
> libvirtd-ro.socket enabled disabled
> libvirtd-tcp.socket disabled disabled
> libvirtd-tls.socket disabled disabled
> libvirtd.socket enabled disabled

This is very interesting! It shows that, despite my earlier understanding that you were using modular daemons, you are actually using the monolithic daemon on this machine! Presumably the same was true of the other one.

It's also possible that somehow you've ended up having both the modular daemons AND the monolithic daemon enabled on the system. I've tried such a setup locally and, as long as you make sure virtproxyd is not enabled, it seems to work, but it's definitely not considered a valid arrangement.

Can you run

$ systemctl list-unit-files | grep virt

next, please?

> > which indicates that the package was present on the system before the
> > upgrade, and was upgraded rather than being removed as part of it, so
> > it should still be on your system.
>
> This part of the mystery has just been solved.
>
> After the upgrade, as part of the cleanup, I usually run "sudo dnf
> autoremove" in case some packages are no longer required.
>
> It looks like I did this on zuke but not [yet] on adama. As an
> experiment, this is what happens if I try it on adama:
>
> adama$ sudo dnf autoremove
> Removing:
> libvirt-daemon
>
> So libvirt-daemon was almost certainly removed by the autoremove
> after the upgrade.

Okay, that explains it. And I've been able to reproduce the same scenario locally, which helps.

> But given how I am running QEMU/KVM VMs (via virt-manager or
> cockpit-machines), I don't seem to actually need libvirt-daemon.

Correct. The libvirt-daemon package, which contains libvirtd, has recently been made optional for this very reason: the monolithic daemon is no longer the default or recommended deployment mode, and it should be possible to get rid of it or not install it in the first place.

That said, while modular daemons are the default deployment mode, the monolithic daemon is still considered a valid setup, and the expectation is that such a configuration will not break on upgrade.

> What went wrong for me seems to be that virtqemud wasn't running and
> virtnetworkd.socket went missing.

If you, as mentioned above, were running a monolithic daemon setup, then it's expected that services and sockets for modular daemons such as virtqemud and virtnetworkd would have been disabled.

> This is unlikely to be libvirt-related, if I am understanding things correctly?
>
> Technically, then, I think I filed this issue against the wrong package.

No, I think you filed it against the correct one :)

The question now becomes, how can we prevent this issue from affecting users? I think the only reasonable solution might be to make all libvirt-daemon-${drv} packages Recommend (not Require!) libvirt-daemon. This should hopefully prevent the libvirt-daemon package from being marked as a candidate for autoremoval, although it would also mean that new deployments would end up with libvirtd even though they don't need it... That's not great, but if the alternative is breaking working setups on upgrade, I think it's a reasonable compromise.

I'll play with this a bit to confirm whether such a solution would work.
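For reference, a weak dependency of this kind can be checked from the client side once a build carrying it is installed. A minimal sketch, assuming dnf4/rpm tooling and the package names used in this thread:

$ rpm -q --recommends libvirt-daemon-driver-qemu   # expect libvirt-daemon among the results on a fixed build
$ sudo dnf autoremove --assumeno                   # dry-run style check: libvirt-daemon should no longer be listed

Because Recommends is a weak dependency, dnf pulls libvirt-daemon in by default and excludes it from autoremoval, yet still allows an explicit "dnf remove libvirt-daemon" to succeed.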
Andrea Bolognani:
Patch posted upstream.
https://listman.redhat.com/archives/libvir-list/2023-August/241623.html

Ian Laurie:
> Can you run
>
> $ systemctl list-unit-files | grep virt
>
> next, please?

Yes, here it is for both systems.

System that did not exhibit the issue:

adama$ systemctl list-unit-files | grep virt
libvirt-guests.service enabled disabled
libvirtd.service enabled disabled
virtinterfaced.service disabled disabled
virtlockd.service indirect disabled
virtlogd.service indirect disabled
virtnetworkd.service disabled disabled
virtnodedevd.service disabled disabled
virtnwfilterd.service disabled disabled
virtproxyd.service disabled disabled
virtqemud.service disabled enabled
virtsecretd.service disabled disabled
virtstoraged.service disabled disabled
libvirtd-admin.socket disabled disabled
libvirtd-ro.socket enabled disabled
libvirtd-tcp.socket disabled disabled
libvirtd-tls.socket disabled disabled
libvirtd.socket enabled disabled
virtinterfaced-admin.socket disabled disabled
virtinterfaced-ro.socket disabled disabled
virtinterfaced.socket disabled enabled
virtlockd-admin.socket disabled disabled
virtlockd.socket enabled disabled
virtlogd-admin.socket disabled disabled
virtlogd.socket enabled disabled
virtnetworkd-admin.socket disabled disabled
virtnetworkd-ro.socket disabled disabled
virtnetworkd.socket disabled enabled
virtnodedevd-admin.socket disabled disabled
virtnodedevd-ro.socket disabled disabled
virtnodedevd.socket disabled enabled
virtnwfilterd-admin.socket disabled disabled
virtnwfilterd-ro.socket disabled disabled
virtnwfilterd.socket disabled enabled
virtproxyd-admin.socket disabled disabled
virtproxyd-ro.socket disabled disabled
virtproxyd-tcp.socket disabled disabled
virtproxyd-tls.socket disabled disabled
virtproxyd.socket disabled enabled
virtqemud-admin.socket disabled disabled
virtqemud-ro.socket disabled disabled
virtqemud.socket disabled disabled
virtsecretd-admin.socket disabled disabled
virtsecretd-ro.socket disabled disabled
virtsecretd.socket disabled enabled
virtstoraged-admin.socket disabled disabled
virtstoraged-ro.socket disabled disabled
virtstoraged.socket disabled enabled
virt-guest-shutdown.target static -
adama$

System that did exhibit the issue:

zuke$ systemctl list-unit-files | grep virt
libvirt-guests.service disabled disabled
virtinterfaced.service disabled disabled
virtlockd.service indirect disabled
virtlogd.service indirect disabled
virtnetworkd.service disabled disabled
virtnodedevd.service disabled disabled
virtnwfilterd.service disabled disabled
virtproxyd.service disabled disabled
virtqemud.service enabled enabled
virtsecretd.service disabled disabled
virtstoraged.service disabled disabled
virtinterfaced-admin.socket disabled disabled
virtinterfaced-ro.socket disabled disabled
virtinterfaced.socket disabled enabled
virtlockd-admin.socket disabled disabled
virtlockd.socket enabled disabled
virtlogd-admin.socket disabled disabled
virtlogd.socket enabled disabled
virtnetworkd-admin.socket disabled disabled
virtnetworkd-ro.socket disabled disabled
virtnetworkd.socket enabled enabled
virtnodedevd-admin.socket disabled disabled
virtnodedevd-ro.socket disabled disabled
virtnodedevd.socket disabled enabled
virtnwfilterd-admin.socket disabled disabled
virtnwfilterd-ro.socket disabled disabled
virtnwfilterd.socket disabled enabled
virtproxyd-admin.socket disabled disabled
virtproxyd-ro.socket disabled disabled
virtproxyd-tcp.socket disabled disabled
virtproxyd-tls.socket disabled disabled
virtproxyd.socket disabled enabled
virtqemud-admin.socket enabled disabled
virtqemud-ro.socket enabled disabled
virtqemud.socket enabled disabled
virtsecretd-admin.socket disabled disabled
virtsecretd-ro.socket disabled disabled
virtsecretd.socket disabled enabled
virtstoraged-admin.socket disabled disabled
virtstoraged-ro.socket disabled disabled
virtstoraged.socket disabled enabled
virt-guest-shutdown.target static -
zuke$

Ian Laurie:
> This is very interesting! It shows that, despite my earlier
> understanding that you were using modular daemons, you are actually
> using the monolithic daemon on this machine! Presumably the same was
> true of the other one.
>
> It's also possible that somehow you've ended up having both the
> modular daemons AND the monolithic daemon enabled on the system. I've
> tried such a setup locally and, as long as you make sure virtproxyd
> is not enabled, it seems to work, but it's definitely not considered
> a valid arrangement.

On the system that didn't have the issue (adama), the one with the monolithic daemon: given it is not preferred to have both daemon sets, what should I do on that system? Would this be correct:

sudo dnf remove libvirt-daemon (or simply allow the autoremove to do it)

Then I will possibly need to do this like I did on zuke:

sudo systemctl enable virtqemud
sudo systemctl start virtqemud

sudo systemctl enable virtnetworkd.socket
sudo systemctl restart virtnetworkd.socket

Presumably that returns me to a recommended config?

Andrea Bolognani:
Fix pushed upstream.

commit aa5895cbc72bd9b4bb1ce99e231b2ac4b25db9c4
Author: Andrea Bolognani <abologna>
Date:   Wed Aug 30 17:45:47 2023 +0200

    rpm: Recommend libvirt-daemon for with_modular_daemons distros

    A default deployment on modern distros uses modular daemons but
    switching back to the monolithic daemon, while not recommended,
    is still considered a perfectly valid option.

    For a monolithic daemon deployment, the upgrade to libvirt 9.2.0
    or newer works as expected; a subsequent call to dnf autoremove,
    however, results in the libvirt-daemon package being removed and
    the deployment no longer working.

    In order to avoid that situation, mark the libvirt-daemon as
    recommended. This will unfortunately result in it being included
    in most installations despite not being necessary, but
    considering that the alternative is breaking existing setups on
    upgrade it feels like a reasonable tradeoff.

    Moreover, since the dependency on libvirt-daemon is just a weak
    one, it's still possible for people looking to minimize the
    footprint of their installation to manually remove the package
    after installation, mitigating the drawbacks of this approach.

    https://bugzilla.redhat.com/show_bug.cgi?id=2232805

    Signed-off-by: Andrea Bolognani <abologna>
    Reviewed-by: Erik Skultety <eskultet>
    Reviewed-by: Daniel P. Berrangé <berrange>

v9.7.0-rc2-2-gaa5895cbc7

Andrea Bolognani:
(In reply to Ian Laurie from comment #14)
> System that did not exhibit the issue:
>
> adama$ systemctl list-unit-files | grep virt
[...]
> libvirtd.service enabled disabled
> libvirtd-admin.socket disabled disabled
> libvirtd-ro.socket enabled disabled
> libvirtd-tcp.socket disabled disabled
> libvirtd-tls.socket disabled disabled
> libvirtd.socket enabled disabled
[...]
> virtqemud.service disabled enabled
> virtqemud-admin.socket disabled disabled
> virtqemud-ro.socket disabled disabled
> virtqemud.socket disabled disabled
[...]

Okay, this looks like a sane monolithic deployment. None of the modular daemons are enabled.

(In reply to Ian Laurie from comment #15)
> On the system that didn't have the issue (adama), the one with the
> monolithic daemon:
> given it is not preferred to have both daemon sets, what should I do
> on that system? Would this be correct:
>
> sudo dnf remove libvirt-daemon (or simply allow the autoremove to do it)
>
> Then I will possibly need to do this like I did on zuke:
>
> sudo systemctl enable virtqemud
> sudo systemctl start virtqemud
>
> sudo systemctl enable virtnetworkd.socket
> sudo systemctl restart virtnetworkd.socket
>
> Presumably that returns me to a recommended config?

You can follow the directions in https://libvirt.org/daemons.html#switching-to-modular-daemons but, since Fedora defaults to a modular deployment, another valid and probably more foolproof way to do it would be to run the following incantation:

$ systemctl list-unit-files | grep -E 'virt.*\.socket' | while read u _; do sudo systemctl preset $u; done
$ systemctl list-unit-files | grep -E 'virt.*\.service' | while read u _; do sudo systemctl preset $u; done

This will return all libvirt-related units to their distro defaults. One quick reboot later, you should have a working modular deployment.

With all that said, I would ask you NOT to alter your configuration quite yet. libvirt 9.7.0 is going to be released tomorrow, and it should show up in Fedora a few days after that. It would be great if, once that happens, you could confirm that the fix has worked correctly and that 'dnf autoremove' no longer tries to uninstall the libvirt-daemon package.

Andrea Bolognani:
Cole, I had assigned this to myself, but since it's a Fedora bug it makes more sense for you to be the assignee, so I'm moving it over. The fix has already been merged upstream, so when you rebase the Fedora package to 9.7.0, as we had previously agreed, everything will have already been taken care of. I'm handling the RHEL side of it in Bug 2236500.

Ian Laurie:
As per comment #17 I'll wait.

39 is in freeze mode but I can get it out of the testing repo if necessary.

Ian Laurie:
I upgraded to libvirt-daemon-9.7.0-1.fc39.x86_64 and now a "dnf autoremove" no longer wants to remove it.

Andrea Bolognani:
(In reply to Ian Laurie from comment #20)
> I upgraded to libvirt-daemon-9.7.0-1.fc39.x86_64 and now a "dnf autoremove"
> no longer wants to remove it.

Excellent news, thanks for confirming! Feel free to switch to modular daemons now, and even manually remove the libvirt-daemon package if you feel like it :)

Ian Laurie:
I made the switch to modular, explicitly removed libvirt-daemon, rebooted, then launched a CentOS-Stream-9 VM through virt-manager; it all worked as expected. Thanks for all your help.

Andrea Bolognani:
Thank *you* for reporting the issue, thus preventing it from hitting all Fedora 39 and RHEL 9.3 users :)

FEDORA-2023-57fd2e3393 has been submitted as an update to Fedora 39.
https://bodhi.fedoraproject.org/updates/FEDORA-2023-57fd2e3393

Proposing this as a Beta FE as it has upgrade path consequences: if this update improves the upgrade path for folks with libvirt installed, there's an argument for an FE, so folks running upgrades get the fixed experience ASAP (updates-testing is generally not enabled for upgrades).

Ok, good idea. An FE would be good for people going through the distro-upgrade process.

+3 in https://pagure.io/fedora-qa/blocker-review/issue/1302 , marking accepted FE.

FEDORA-2023-57fd2e3393 has been pushed to the Fedora 39 stable repository. If the problem still persists, please make note of it in this bug report.
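For anyone replaying the final verification from this thread after switching to modular daemons, a minimal sketch of the post-switch checks (unit and package names as used above; the qemu:///system connection URI is assumed):

$ rpm -q libvirt-daemon                                      # expect "not installed" after the manual removal
$ systemctl is-enabled virtqemud.socket virtnetworkd.socket  # expect the distro-default enablement
$ virsh -c qemu:///system list --all                         # VMs should be reachable via the modular daemons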