Bug 912499 - Security context on image file gets reset
Summary: Security context on image file gets reset
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Fedora
Classification: Fedora
Component: libguestfs
Version: 18
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Richard W.M. Jones
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: 910270 1180769
 
Reported: 2013-02-18 20:10 UTC by Jason Tibbitts
Modified: 2016-03-31 12:23 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned To: 1180769
Environment:
Last Closed: 2013-03-01 18:17:36 UTC
Type: Bug
Embargoed:



Description Jason Tibbitts 2013-02-18 20:10:39 UTC
I've been running into a strange problem on F18 where guests could not write to their virtual storage.  After several days of debugging, and finally remembering to turn off dontaudit rules with semodule -DB, I realized that SELinux was denying writes to the image file:

type=SYSCALL msg=audit(1361213332.460:456): arch=c000003e syscall=296 success=yes exit=8257536 a0=e a1=7fd6ce61a930 a2=1d7 a3=0 items=0 ppid=1 pid=4257 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 ses=4294967295 tty=(none) comm="qemu-kvm" exe="/usr/bin/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c630,c868 key=(null)
type=AVC msg=audit(1361213332.460:456): avc:  denied  { write } for  pid=4257 comm="qemu-kvm" path="/var/lib/libvirt/images/foo.img" dev="dm-1" ino=395221 scontext=system_u:system_r:svirt_t:s0:c630,c868 tcontext=system_u:object_r:virt_content_t:s0 tclass=file
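
For reference, the dontaudit toggle mentioned above works like this (standard policycoreutils commands):

  # rebuild the policy with dontaudit rules disabled so hidden denials appear
  semodule -DB
  # ... reproduce the failure and check /var/log/audit/audit.log ...
  # rebuild the policy with dontaudit rules re-enabled
  semodule -B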

It turns out the context on the image file was virt_content_t:

-rw-------. qemu qemu system_u:object_r:virt_content_t:s0 /var/lib/libvirt/images/foo.img

After enabling libvirt verbose logging, I found that the security context was indeed getting set properly at first:

Feb 18 13:25:53 ld93 libvirtd[7897]: 2013-02-18 19:25:53.014+0000: 7902: info : virSecuritySELinuxSetFileconHelper:870 : Setting SELinux context on '/var/lib/libvirt/images/foo.img' to 'system_u:object_r:svirt_image_t:s0:c296,c808'

But then a few seconds later:

Feb 18 13:26:03 ld93 libvirtd[7897]: 2013-02-18 19:26:03.211+0000: 7898: info : virSecuritySELinuxSetFileconHelper:870 : Setting SELinux context on '/var/lib/libvirt/images/foo.img' to 'system_u:object_r:virt_content_t:s0'

So something was resetting the label on the file.  I noticed that in virt-manager, immediately after I brought up the guest, another entry named guestfs-25zbexg649zpe6bz appeared in the list.  Armed with the knowledge that the context on an image file gets reset to the default when the guest is not running, I thought that perhaps the context on the image file was getting reset when this guestfs-* VM exited.  I did 'yum erase libguestfs', restarted everything, and tried to bring up another guest.  The security context stuck around this time.

I'm happy to provide any additional information you request, but at this moment I'm not at all sure what else I can provide.  Before I removed it, I had:

1:libguestfs-tools-1.20.1-3.fc18.x86_64
1:libguestfs-tools-c-1.20.1-3.fc18.x86_64
1:python-libguestfs-1.20.1-3.fc18.x86_64
guestfs-browser-0.2.2-1.fc18.x86_64
1:libguestfs-1.20.1-3.fc18.x86_64

as well as:

libvirt-0.10.2.3-1.fc18.x86_64
virt-manager-0.9.4-4.fc18.noarch

Comment 1 Richard W.M. Jones 2013-02-18 20:17:28 UTC
One observation: Only the 'python-libguestfs' package needs to
be blocked to stop virt-manager from using libguestfs.

One test you could try in order to prove whether or not
this is caused by libvirt relabelling via libguestfs is:

(1) Install /usr/bin/virt-df; this should not pull in
python-libguestfs.

(2) Run the following command on a running guest.  It
shouldn't disturb the guest (although if it triggers the
bug then it would relabel the disk, which would definitely
disturb the guest):

  virt-df -d NameOfTheGuest

Comment 2 Jason Tibbitts 2013-02-18 20:34:37 UTC
OK, I installed /usr/bin/virt-df, then brought up a guest.  The security context was fine and the guest could write:

[root@ld93 ~]# ls -lZ /var/lib/libvirt/images/foo.img
-rw-------. qemu qemu system_u:object_r:svirt_image_t:s0:c45,c503 /var/lib/libvirt/images/foo.img

Then I ran virt-df -d foo:

[root@ld93 ~]# virt-df -d foo
libguestfs: error: could not create appliance through libvirt: internal error process exited while connecting to monitor: 2013-02-18 20:32:23.664+0000: 1819: debug : virFileClose:72 : Closed fd 26
2013-02-18 20:32:23.664+0000: 1819: debug : virFileClose:72 : Closed fd 31
2013-02-18 20:32:23.665+0000: 1819: debug : virFileClose:72 : Closed fd 3
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/cpu,cpuacct/system/libvirtd.service/libvirt/qemu/guestfs-4ftwxteo4ol3oev9/
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/cpuset/libvirt/qemu/guestfs-4ftwxteo4ol3oev9/
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/memory/libvirt/qemu/guestfs-4ftwxteo4ol3oev9/
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/devices/libvirt/qemu/guestfs-4ftwxteo4ol3oev9/
2013-02-18 20:32:23.665+0000: 1820: debug : virCgroupMakeGroup:560 : Make controller /sys/fs/cgroup/freezer/libvirt/qemu/guestfs-4ftwxteo4ol [code=1 domain=10]

After this, the security context was reset:

[root@ld93 ~]# ls -lZ /var/lib/libvirt/images/foo.img
-rw-------. qemu qemu system_u:object_r:virt_content_t:s0 /var/lib/libvirt/images/foo.img

and writes to /dev/vda in the guest fail.

Comment 3 Richard W.M. Jones 2013-02-18 20:43:52 UTC
Dave: I'm pretty sure this is actually a libvirt bug (possibly an RFE).

Comment 4 Jason Tibbitts 2013-02-18 21:59:28 UTC
Just wanted to note that this actually prevents me from installing any guests in the default setup.  I just brought up an F18 machine and installed the Virtualization group.  To be honest, I'm not sure how it works for anyone at all, unless they're disabling SELinux or installing a package set that somehow wouldn't pull in python-libguestfs.

Comment 5 Dave Allan 2013-02-19 03:27:04 UTC
OK, I can reproduce this behavior with a purely default install using virt-manager; indeed, it did not reproduce until I installed virt-df.

Comment 6 Richard W.M. Jones 2013-02-19 15:13:52 UTC
I'll note that the workaround for this is to do:

  export LIBGUESTFS_ATTACH_METHOD=appliance

which goes back to the old (pre-F18) method of direct-launching qemu
instead of using libvirt.
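
For example, using the virt-df reproducer from comment 1
(guest name illustrative), the appliance attach method keeps
libvirt, and therefore the relabelling, out of the picture:

  export LIBGUESTFS_ATTACH_METHOD=appliance
  virt-df -d NameOfTheGuest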

Comment 7 Daniel Berrangé 2013-02-19 15:22:11 UTC
If we need to run 2 VMs at once all accessing the same disk, then AFAICT, the only way to make it work from a sVirt POV is to ensure that libguestfs uses the same seclabel as the running guest. If different MCS labels are used, sVirt is always going to block either libguestfs or the real VM.

So basically look at the running guest for:

  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c24,c151</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c24,c151</imagelabel>
  </seclabel>
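
For example, one way to pull those labels from a running guest
(guest name illustrative):

  virsh dumpxml NameOfTheGuest | grep -A3 '<seclabel'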

And then change 'dynamic' to 'static', 'relabel' to 'no' and remove the <imagelabel> . So you get

  <seclabel type='static' model='selinux' relabel='no'>
    <label>system_u:system_r:svirt_t:s0:c24,c151</label>
  </seclabel>

The remaining problem is that if the original guest shuts down while libguestfs is running, the libguestfs VM will get its access to the disks revoked. There's not really anything we can do about that, other than to stop trying to run 2 VMs using the same disks.

Comment 8 Richard W.M. Jones 2013-02-27 16:04:04 UTC
(In reply to comment #7)
> If we need to run 2 VMs at once all accessing the same disk, then AFAICT,
> the only way to make it work from a sVirt POV is to ensure that libguestfs
> uses the same seclabel as the running guest. If different MCS labels are
> used, sVirt is always going to block either libguestfs or the real VM.
> 
> So basically look at the running guest for:
> 
>   <seclabel type='dynamic' model='selinux' relabel='yes'>
>     <label>system_u:system_r:svirt_t:s0:c24,c151</label>
>     <imagelabel>system_u:object_r:svirt_image_t:s0:c24,c151</imagelabel>
>   </seclabel>
> 
> And then change 'dynamic' to 'static', 'relabel' to 'no' and remove the
> <imagelabel> . So you get
> 
>   <seclabel type='static' model='selinux' relabel='no'>
>     <label>system_u:system_r:svirt_t:s0:c24,c151</label>
>   </seclabel>

I believe this doesn't work: as outlined above, libvirt will not
label the console sockets.  I got this error:

libguestfs: error: could not create appliance through libvirt: internal error process exited while connecting to monitor: qemu-system-x86_64: -chardev socket,id=charserial0,path=/home/rjones/d/libguestfs/tmp/libguestfsStzvbZ/console.sock: Failed to connect to socket: Permission denied

I fixed that by changing the global seclabel to:

 <seclabel type='static' model='selinux' relabel='yes'>
   <label>system_u:system_r:svirt_t:s0:c24,c151</label>
 </seclabel>

Of course now the problem is that it's relabelling the disks, so to get
around that I changed the disk definitions so each one had a local
<seclabel relabel="no"/> as follows:

 <disk device="disk" type="file">
   <source file="/home/rjones/d/libguestfs/tmp/libguestfsRtyvtz/snapshot2">
      <seclabel relabel="no"/>
   </source>
   <target dev="sda" bus="scsi"/>
   <driver name="qemu" type="qcow2"/>
   <address type="drive" controller="0" bus="0" target="0" unit="0"/>
 </disk>

However libvirt still relabels the disks from
system_u:object_r:svirt_image_t:s0:c678,c742 to
system_u:object_r:virt_content_t:s0.

This is possibly a bug in libvirt itself or in the documentation
of libvirt (http://libvirt.org/formatdomain.html#elementsDisks).

Dave Allan suggested using <shareable/>, although the <seclabel> above
seems closer to my intention.

(Note I'm using libvirt 1.0.2 for testing).

Comment 9 Richard W.M. Jones 2013-02-27 16:08:32 UTC
<shareable/> doesn't stop relabelling of the disks.

Comment 10 Richard W.M. Jones 2013-02-27 18:06:11 UTC
Dan points out on the libvirt mailing list that the syntax
should be:

 <disk device="disk" type="file">
   <source file="/home/rjones/d/libguestfs/tmp/libguestfsRtyvtz/snapshot2">
      <seclabel model="selinux" relabel="no"/>
 ...

and indeed that causes libvirt not to relabel the disk.
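
Spelled out in full, combining that with the disk definition from
comment 8 (paths as in that example, purely illustrative):

 <disk device="disk" type="file">
   <source file="/home/rjones/d/libguestfs/tmp/libguestfsRtyvtz/snapshot2">
      <seclabel model="selinux" relabel="no"/>
   </source>
   <target dev="sda" bus="scsi"/>
   <driver name="qemu" type="qcow2"/>
   <address type="drive" controller="0" bus="0" target="0" unit="0"/>
 </disk>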

HOWEVER that's not the end of the story.  The problem now
is that libvirt doesn't relabel the qcow2 overlay, so qemu
can't access it.  What we really want is for libvirt to
relabel the overlay but not the backing disk.

The error is:

libguestfs: error: could not create appliance through libvirt: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/home/rjones/d/libguestfs/tmp/libguestfsK3I2RN/snapshot2,if=none,id=drive-scsi0-0-0-0,format=qcow2: could not open disk image /home/rjones/d/libguestfs/tmp/libguestfsK3I2RN/snapshot2: Permission denied
 [code=1 domain=10]

I guess I could try to do the labelling from libguestfs (since
libguestfs creates the overlay).
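
A rough sketch of doing that labelling by hand, reusing the image
label and overlay path from the examples above (purely illustrative):

  # give the overlay the svirt image label the appliance will run with
  chcon system_u:object_r:svirt_image_t:s0:c24,c151 \
    /home/rjones/d/libguestfs/tmp/libguestfsK3I2RN/snapshot2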

Another solution would be for libvirt to support the
snapshot=on parameter.  The whole reason we're creating
overlays manually here is that libvirt doesn't support this
obvious feature of qemu.
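
For context, the overlay being created manually here is roughly
equivalent to (qemu-img invocation and paths illustrative):

  # create a throwaway qcow2 overlay backed by the real disk image
  qemu-img create -f qcow2 \
    -o backing_file=/var/lib/libvirt/images/foo.img,backing_fmt=raw \
    /tmp/snapshot2.qcow2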

Comment 11 Richard W.M. Jones 2013-02-28 10:58:25 UTC
Final part of patch set posted:

https://www.redhat.com/archives/libguestfs/2013-February/thread.html#00122

Comment 12 Richard W.M. Jones 2013-02-28 16:09:02 UTC
Second version of patch series posted:

https://www.redhat.com/archives/libguestfs/2013-February/thread.html#00152

Comment 13 Richard W.M. Jones 2013-03-01 18:17:36 UTC
These patches are upstream.  The actual number of commits required to
fix this is rather scary:

(in reverse order)

https://github.com/libguestfs/libguestfs/commit/e78a2c5df3c4ec79e22e03ee4994958537f2e8d8
https://github.com/libguestfs/libguestfs/commit/26df366d3bf712a84337c2402f41506f2be6f610
https://github.com/libguestfs/libguestfs/commit/b9ee8baa49afbf8b6d80a42f3a309b660c7b32a5
https://github.com/libguestfs/libguestfs/commit/617eb88c5e66247894fde2aae11bd102889eb85c
https://github.com/libguestfs/libguestfs/commit/a6a703253be9e9c590a49a149c0170f2e46a1eb2
https://github.com/libguestfs/libguestfs/commit/3f1e7f1078ac40a6736b7721cc248f8ed0614f48
https://github.com/libguestfs/libguestfs/commit/93feaa4ae83b72864e7c10e9a388219ad9960123
https://github.com/libguestfs/libguestfs/commit/1ea7752e95a90aa8016d85489c7460b881fc59b0
https://github.com/libguestfs/libguestfs/commit/b6cbd980fb2fe8e43de9e716769cba63cd8d721b
https://github.com/libguestfs/libguestfs/commit/5ff3845d280515ab623d22666c3f5013f095d32a
https://github.com/libguestfs/libguestfs/commit/fe939cf842949f0eda0b6c69cad8d2d6b5b2c3fd
https://github.com/libguestfs/libguestfs/commit/6e3aab2f0c48280e746e90050abf25947159e294
https://github.com/libguestfs/libguestfs/commit/34e77af1bf42a132589901164f29bd992b37589e
https://github.com/libguestfs/libguestfs/commit/76266be549c521e3767a94c07e9ae616826a2568
https://github.com/libguestfs/libguestfs/commit/556e109765d7b6808045965a1eefcb434294f151
https://github.com/libguestfs/libguestfs/commit/4a6c8021b599952b991725043bac5c722635b3f6

As a result, I doubt that a backport to Fedora 18 / libguestfs 1.20
is going to be possible at this time.  This is Fedora 19 material.

Note that the following workaround is available for Fedora 18
users who encounter this problem:

  export LIBGUESTFS_ATTACH_METHOD=appliance

Comment 14 Robert Brown 2013-05-12 08:14:54 UTC
I'm using Fedora 18.

This bug crashes virt-manager when it attempts to run the inspection code. In my case it happens regardless of setting LIBGUESTFS_ATTACH_METHOD.

libguestfs is also used by OpenStack Nova, specifically in /usr/lib/python2.7/site-packages/nova/virt/disk/vfs/guestfs.py. Same scenario: new instances will crash the nova compute service unless you manually move the inspection code out of the way.

The cause is not obvious at all, and if this is not going to be fixed in F18, I would recommend considering patches for both virt-manager and nova to stop them from tripping the bug by asking to inspect an image.

Comment 15 Richard W.M. Jones 2013-05-12 08:49:11 UTC
(In reply to comment #14)
> I'm using Fedora 18.
> 
> This bug crashes virt-manager when it attempts to call the code to perform
> an inspection. In my case it happens regardless of setting
> LIBGUESTFS_ATTACH_METHOD.

You must not be setting LIBGUESTFS_ATTACH_METHOD in the right
place, or else you're seeing a different bug.

With LIBGUESTFS_ATTACH_METHOD=appliance, libvirt, SELinux & sVirt are
not involved at all and you would not see this bug.

Also, even if you hit the bug, virt-manager won't crash; it'll just
fail to inspect the guest.

Comment 16 Richard W.M. Jones 2016-03-31 12:23:37 UTC
This bug wasn't fixed completely.  I have opened a new bug about that:
https://bugzilla.redhat.com/show_bug.cgi?id=1322837

