Bug 1269975
| Summary: | svirt very occasionally prevents parallel libvirt access to 'kernel' file | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Richard W.M. Jones <rjones> |
| Component: | libvirt | Assignee: | Libvirt Maintainers <libvirt-maint> |
| Status: | CLOSED ERRATA | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 23 | CC: | agedosier, berrange, clalancette, crobinso, dominick.grift, dwalsh, dyuan, itamar, jforbes, laine, libvirt-maint, lvrabec, mgrepl, plautrba, rjones, veillard, virt-maint |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-1.2.18.2-2.fc23 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-01-24 03:30:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 910269, 910270, 921135, 922891 | | |
Description
Richard W.M. Jones 2015-10-08 16:30:22 UTC

Is this file shared by more virtual machines? I believe so, yes, although only briefly: qemu loads the kernel file when it starts up and probably doesn't touch it at all after that. In this test we are starting up lots of qemu processes in parallel. The qemu command line would be something like the one below. The -kernel parameter points to this file (it may have varying locations, including under the $HOME directory if building libguestfs from source). The file might be shared by multiple instances of qemu, and libvirt is likely doing some labelling here too.

/usr/bin/qemu-system-x86_64 -machine accel=kvm -name guestfs-i12y68tb1oxdtfvd -S -machine pc-i440fx-2.3,accel=kvm,usb=off -cpu host -m 500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid b8ff8fa3-c153-4adb-adf3-8cee828338d9 -nographic -no-user-config -nodefaults -device sga -chardev socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib/domain-guestfs-i12y68tb1oxdtfvd/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -no-acpi -boot strict=on -kernel /var/tmp/.guestfs-1000/appliance.d/kernel -initrd /var/tmp/.guestfs-1000/appliance.d/initrd -append panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=xterm-256color -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive file=/tmp/libguestfsztxHGG/devnull1,if=none,id=drive-scsi0-0-0-0,format=raw,cache=writeback -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive file=/tmp/libguestfsztxHGG/overlay2,if=none,id=drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 -chardev socket,id=charserial0,path=/tmp/libguestfsztxHGG/console.sock -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/tmp/libguestfsztxHGG/guestfsd.sock -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on

---

Can you be more specific about how the test is launching and stopping VMs? Could it be that:

- VM1 startup requested, labels kernel virt_content_t
- VM1 qemu is launched
- VM2 startup requested, labels kernel virt_content_t
- VM1 is shutdown, resets label of kernel to user_home_t
- VM2 qemu tries to launch, hits selinux avc

Libvirt's locking may prevent that for all I know, but I didn't look closely.

---

It just runs virDomainCreateXML (in parallel).
https://github.com/libguestfs/libguestfs/blob/master/src/launch-libvirt.c#L547

---

(In reply to Cole Robinson from comment #4)
> Could it be that:
>
> - VM1 startup requested, labels kernel virt_content_t
> - VM1 qemu is launched
> - VM2 startup requested, labels kernel virt_content_t
> - VM1 is shutdown, resets label of kernel to user_home_t
> - VM2 qemu tries to launch, hits selinux avc
>
> Libvirt's locking may prevent that for all I know, but I didn't look closely

Quite probably.

---

This seems to happen even more frequently in Rawhide.

---

If you're running a debug kernel, it could exacerbate the race.

---

Upstream fix:

commit 68acc701bd449481e3206723c25b18fcd3d261b7
Author: Jiri Denemark <jdenemar>
Date:   Fri Jan 15 10:55:58 2016 +0100

    security: Do not restore kernel and initrd labels

---

*** Bug 871196 has been marked as a duplicate of this bug. ***

---

I have tested this, so it's fine to close it once it goes into Fedora. For RHEL 7.3, there is bug 921135 tracking the same problem.

---

libvirt-1.2.18.2-2.fc23 has been submitted as an update to Fedora 23.
https://bodhi.fedoraproject.org/updates/FEDORA-2016-02dc87c44e

---

libvirt-1.2.18.2-2.fc23 has been pushed to the Fedora 23 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-02dc87c44e

---

libvirt-1.2.18.2-2.fc23 has been pushed to the Fedora 23 stable repository. If problems still persist, please make note of it in this bug report.
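The interleaving suggested in the comments above can be sketched as a tiny, self-contained simulation. This is not libvirt code: `run_sequence`, the label stored in a plain variable, and the scripted ordering are all illustrative assumptions. The point is only that restoring the kernel file's original label on the first guest's shutdown breaks any still-running or still-launching guest that shares the file, while skipping the restore (what the upstream fix does for kernel and initrd) avoids that.

```python
# Hypothetical toy model of the label race described in the comments.
# The SELinux label of the shared -kernel file is reduced to a variable,
# and the interleaving is replayed in the exact order from comment 4.

def run_sequence(restore_labels):
    """Replay the interleaving; return True if VM2's qemu can still
    read the shared kernel file at the end."""
    label = {"current": "user_home_t"}   # label of the shared kernel file
    saved = label["current"]             # what libvirt would restore later

    def vm_startup():
        # "VM startup requested, labels kernel virt_content_t"
        label["current"] = "virt_content_t"

    def vm_shutdown():
        # Old behaviour: "resets label of kernel to user_home_t"
        if restore_labels:
            label["current"] = saved

    def qemu_can_read_kernel():
        # svirt policy only lets qemu read the file under virt_content_t
        return label["current"] == "virt_content_t"

    vm_startup()     # VM1 startup requested
    vm_startup()     # VM2 startup requested (relabel is a no-op here)
    vm_shutdown()    # VM1 is shut down before VM2's qemu opens -kernel
    return qemu_can_read_kernel()

print(run_sequence(restore_labels=True))    # False: SELinux AVC, the bug
print(run_sequence(restore_labels=False))   # True: behaviour after the fix
```

Because qemu only reads the kernel briefly at startup, never restoring its label loses little and removes the window entirely, which matches the shape of the upstream commit ("Do not restore kernel and initrd labels").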