Bug 975495 - AVC Denials When Installing F19 Live Desktop to mdraid RAID1
Summary: AVC Denials When Installing F19 Live Desktop to mdraid RAID1
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: selinux-policy
Version: 19
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Miroslav Grepl
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: AcceptedFreezeException
Duplicates: 975643
Depends On:
Blocks: F19-accepted, F19FinalFreezeException
 
Reported: 2013-06-18 15:16 UTC by Tim Flink
Modified: 2013-07-08 23:56 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-06-24 11:44:30 UTC
Type: Bug
Embargoed:


Attachments
ioctl access denial details (2.46 KB, text/plain)
2013-06-18 15:18 UTC, Tim Flink
read access denial details (2.65 KB, text/plain)
2013-06-18 15:19 UTC, Tim Flink

Description Tim Flink 2013-06-18 15:16:13 UTC
I did an install from the live CD, configuring software RAID1 on 2 virtual disks. During installation, two AVC denial notifications were shown at the bottom of the screen. The live environment is in permissive mode, so the install completes successfully, but the notifications still appear.

The 2 AVC denials are:
SELinux is preventing /usr/sbin/mdadm from read access on the blk_file md126.
SELinux is preventing /usr/sbin/mdadm from ioctl access on the blk_file /dev/md126.

Details about the AVC denials are attached to the bug.

System Used:
VM with 2 15G virtual disks
Fedora 19 TC5 Desktop Live x86_64
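For reference, a roughly equivalent array can be created by hand outside the installer; this is only a sketch (the disk names /dev/vda and /dev/vdb and the md0 name are assumptions, not what anaconda actually does):

# create a RAID1 array from the two virtual disks (names assumed)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vda /dev/vdb
# watch assembly and check the SELinux label on the new block device node
cat /proc/mdstat
ls -lZ /dev/md0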

Comment 1 Tim Flink 2013-06-18 15:18:31 UTC
Created attachment 762539 [details]
ioctl access denial details

Comment 2 Tim Flink 2013-06-18 15:19:41 UTC
Created attachment 762541 [details]
read access denial details

Comment 3 Daniel Walsh 2013-06-18 15:32:36 UTC
If you run ls -lZ /dev/md126, does it show up as fixed_disk_device_t?

Comment 4 Daniel Walsh 2013-06-18 15:37:49 UTC
This is a race condition where the device is created and mdadm touches it before udev can fix the label.

Commit 8171089b41052f26fdbbcc9c16b42aaa9c735572 in git will allow this access.
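
For context, the access in question corresponds to a rule along these lines (a sketch based on the audit2allow output further down in this bug, not the literal content of that commit):

# allow mdadm to read/ioctl block device nodes still carrying the generic device_t label
allow mdadm_t device_t:blk_file { read ioctl };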

Comment 5 Tim Flink 2013-06-18 15:41:18 UTC
(In reply to Daniel Walsh from comment #3)
> If you run ls -lZ /dev/md126  Does it show them as fixed_disk_device_t?

Post-install, it shows up as:

brw-rw----. root disk system_u:object_r:fixed_disk_device_t:s0 /dev/md126

I'll run another install to see if it's the same while inside the live install environment.

I should have put this in the initial description, but the SELinux policy versions on the installed system are:

selinux-policy-targeted-3.12.1-52.fc19.noarch
selinux-policy-3.12.1-52.fc19.noarch

Comment 6 Adam Williamson 2013-06-19 18:31:43 UTC
*** Bug 975643 has been marked as a duplicate of this bug. ***

Comment 7 Adam Williamson 2013-06-19 18:33:19 UTC
Discussed (as the dupe 975643) at the 2013-06-19 freeze exception review meeting: http://meetbot.fedoraproject.org/fedora-blocker-review/2013-06-19/f19final-blocker-review-7.2013-06-19-16.01.log.txt . Accepted as a freeze exception issue: this is close to being a blocker under the 'no AVCs shown during install / first boot' criterion, but it only affects live installs to mdraid, so it's easier just to make it a freeze exception issue. We could reconsider blocker status if somehow the fix doesn't get in soon, but we surely expect it to.

Comment 8 Miroslav Grepl 2013-06-24 11:44:30 UTC
I believe it has been fixed.

Comment 9 Darren Steven 2013-07-03 06:02:30 UTC
Not fixed for me in an install based on TC6. I have LVM2 on mdraid, created after the initial install, and see an AVC denial on md127 and a subsequent failure to see the LVM volumes. If I do vgchange -ay after boot, it all appears.

The mdraid devices that existed prior to the install are all OK (and also have LVM on them).

Comment 10 Adam Williamson 2013-07-03 06:09:36 UTC
That does not sound like the same bug, then. Can you post your exact AVC? Thanks.

Comment 11 Darren Steven 2013-07-03 06:19:18 UTC
(In reply to Adam Williamson from comment #10)
Here it is


Additional Information:
Source Context                system_u:system_r:mdadm_t:s0-s0:c0.c1023
Target Context                system_u:object_r:device_t:s0
Target Objects                md127 [ blk_file ]
Source                        mdadm
Source Path                   /usr/sbin/mdadm
Port                          <Unknown>
Host                          big.home.wanacat.com
Source RPM Packages           mdadm-3.2.6-19.fc19.x86_64
Target RPM Packages           
Policy RPM                    selinux-policy-3.12.1-54.fc19.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     big.home.wanacat.com
Platform                      Linux big.home.wanacat.com 3.9.8-300.fc19.x86_64
                              #1 SMP Thu Jun 27 19:24:23 UTC 2013 x86_64 x86_64
Alert Count                   1
First Seen                    2013-07-03 16:04:44 EST
Last Seen                     2013-07-03 16:04:44 EST
Local ID                      ac464e53-bf58-4c31-84c3-beca874bcb54

Raw Audit Messages
type=AVC msg=audit(1372831484.871:27): avc:  denied  { read } for  pid=472 comm="mdadm" name="md127" dev="devtmpfs" ino=15420 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:device_t:s0 tclass=blk_file


type=SYSCALL msg=audit(1372831484.871:27): arch=x86_64 syscall=open success=no exit=EACCES a0=7fff0ee2bf0a a1=0 a2=1 a3=1 items=0 ppid=469 pid=472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm=mdadm exe=/usr/sbin/mdadm subj=system_u:system_r:mdadm_t:s0-s0:c0.c1023 key=(null)

Hash: mdadm,mdadm_t,device_t,blk_file,read
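
(If it helps for triage, the same denial can be pulled straight from the audit log and summarized; a sketch, assuming the default audit log location and an mdadm-only filter:)

# collect today's mdadm AVC denials
ausearch -m avc -c mdadm -ts today
# show which policy rule would cover them
ausearch -m avc -c mdadm -ts today | audit2allow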

Comment 12 Darren Steven 2013-07-03 06:21:37 UTC
Also, not sure if this helps (the /dev/md/ directory is different for this device):

[darren@big ~]$ ls -l /dev/md*
brw-rw----. 1 root disk 9,   0 Jul  3 16:04 /dev/md0
brw-rw----. 1 root disk 9,   1 Jul  3 16:04 /dev/md1
brw-rw----. 1 root disk 9, 127 Jul  3 16:04 /dev/md127
brw-rw----. 1 root disk 9,   2 Jul  3 16:04 /dev/md2

/dev/md:
total 0
lrwxrwxrwx. 1 root root 6 Jul  3 16:04 0 -> ../md0
lrwxrwxrwx. 1 root root 6 Jul  3 16:04 1 -> ../md1
lrwxrwxrwx. 1 root root 6 Jul  3 16:04 2 -> ../md2
lrwxrwxrwx. 1 root root 8 Jul  3 16:04 localhost.localdomain:1 -> ../md127

Comment 13 Miroslav Grepl 2013-07-03 07:49:52 UTC
# cat /tmp/log |audit2allow


#============= mdadm_t ==============

#!!!! This avc is allowed in the current policy
allow mdadm_t device_t:blk_file read;

http://koji.fedoraproject.org/koji/buildinfo?buildID=430265


Could you open it as a new bug? Then I can switch it to MODIFIED and do an update for it.
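
(As a stop-gap until the updated policy is out, the denial can be packaged into a local module; a sketch, with the module name picked arbitrarily here:)

# build a local policy module from the logged denial and load it
ausearch -m avc -c mdadm | audit2allow -M local_mdadm
semodule -i local_mdadm.pp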

Comment 14 Darren Steven 2013-07-03 09:49:24 UTC
(In reply to Miroslav Grepl from comment #13)
OK, will do. Actually it seems there is something else happening. Recreating the mdraid (as md3) made the AVC go away, but I think there is some race, similar to the above. Roughly 50% of the time, LVM fails to find ANY LVs, even though the underlying md devices appear to have been created, or are being created.

md0: detected capacity change from 0 to 2097139712

This is repeated for all mdX devices, which I think corresponds to the point where a configured md RAID device becomes generally available.

Then I see:

device-mapper: table: 253:0: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table

One set per potential LV. As before, after login, running vgchange -ay makes it all appear.
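
The manual recovery I use after login is roughly:

# activate all volume groups and confirm the LVs show up again
vgchange -ay
lvs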

Comment 15 Darren Steven 2013-07-08 23:56:33 UTC
selinux-policy-3.12.1-59.fc19 (as per bug 975649) did resolve this for me, but I needed to disable lvmetad, run pvscan --cache, and re-enable lvmetad before it had the desired effect.
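
For the record, the lvmetad cycle was along these lines (a sketch; the systemd unit names are what F19 ships, adjust if yours differ):

# stop lvmetad, rebuild its PV cache, then bring it back
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
pvscan --cache
systemctl start lvm2-lvmetad.socket lvm2-lvmetad.service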

