Bug 532011 - udevd eats 60% CPU cycles

Summary:      udevd eats 60% CPU cycles
Product:      Fedora
Component:    hal
Status:       CLOSED WONTFIX
Severity:     medium
Priority:     low
Version:      13
Hardware:     i386
OS:           Linux
Reporter:     Jochen Roth <jroth>
Assignee:     Richard Hughes <richard>
QA Contact:   Fedora Extras Quality Assurance <extras-qa>
CC:           bobg+redhat, harald, james, mivainio, richard
Doc Type:     Bug Fix
Last Closed:  2011-06-27 14:28:59 UTC
Description (Jochen Roth, 2009-10-30 09:23:59 UTC)
It seems something is polling the disk and opens it with the writable flag set. Can you identify what is polling the CD? Be sure to also update DeviceKit-disks; maybe that will cure the symptom.

(In reply to comment #1)
> It seems something is polling the disk and opens it with the writable flag set.
>
> Can you identify what is polling the cd?

There is no CD in the drive, and even when booting into single user mode the issue still happens. I can see 20% user and 20% system load after booting into single user mode. As soon as I kill udevd my CPU load is back to 99% idle. When I start udevd again in single user mode the system is still 99% idle (no matter whether with --debug or with -d). But when I kill udevd and start it again in runlevel 5, udevd will be busy again.

> Be sure to also update DeviceKit-disks. Maybe that will cure the symptom.

[root@localhost ~]# rpm -qa udev DeviceKit-disks
udev-141-7.fc11.i586
DeviceKit-disks-004-5.fc11.i586

This issue does not happen when downgrading to 141-3. So one could think that it has something to do with the changes made between -3 and -7, couldn't one?

(In reply to comment #2)
> > Be sure to also update DeviceKit-disks. Maybe that will cure the symptom.
>
> [root@localhost ~]# rpm -qa udev DeviceKit-disks
> udev-141-7.fc11.i586
> DeviceKit-disks-004-5.fc11.i586

OK, as new as it can be. :-(

> This issue does not happen when downgrading to 141-3. So one could think that
> it has something to do with the changes made in between -3 and -7, doesn't it?

Correct. But it means another application is misbehaving. So please try to identify which application is opening the cdrom device on a change (besides udev with cdrom_id and vol_id).

> Correct. But it means another application is misbehaving. So please try to
> identify which application is opening the cdrom device on a change (besides
> udev with cdrom_id and vol_id).

OK, I'll do that if you tell me how. It also happens in single user mode; there shouldn't be a lot of applications out there polling the CD.

Thanks,
Jochen
Run this:

# while : ; do fuser -v /dev/sr0; done

Created attachment 366819 [details]
result of fuser command

I ran this command for some minutes. Lots of scsi_id and cdrom_id, and hald_addon_stor:

root 29190 0.0 0.0 3516 904 ? S 16:33 0:00 hald-addon-storage: polling /dev/sr0 (every 2 sec)
/dev/sr0: root 29190 F.... hald-addon-stor...

OK, seems like hald-addon-storage is opening /dev/sr0 with the write flag, and so it will emit another change event...

Any news regarding this issue?

Seen here too on F11/x86-64. udevadm monitor spews lots of

KERNEL[1263464453.979413] change /devices/pci0000:00/0000:00:02.0/drm/card0 (drm)
UDEV  [1263464453.983914] change /devices/pci0000:00/0000:00:02.0/drm/card0 (drm)
KERNEL[1263464453.984891] change /devices/pci0000:00/0000:00:02.0/drm/card0 (drm)
UDEV  [1263464453.989209] change /devices/pci0000:00/0000:00:02.0/drm/card0 (drm)

udev-141-7.fc11.x86_64
kernel-2.6.30.10-105.fc11.x86_64

And seen here on F12/x86.

DeviceKit-disks-009-3.fc12.i686
kernel-2.6.31.12-174.2.22.fc12.i686
udev-145-15.fc12.i686

udevadm monitor repeats:

KERNEL[1266831770.900015] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sr0 (block)
UDEV  [1266831770.900046] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0 (scsi)
KERNEL[1266831770.918083] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sr0 (block)
UDEV  [1266831770.918114] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0 (scsi)
KERNEL[1266831770.935759] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sr0 (block)
UDEV  [1266831770.935790] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0 (scsi)
...

I'm also still seeing the same problem, even though it is not 60% anymore; it reduced to about 20% since F12. It looks like the problem doesn't happen as soon as there is a CD in the drive. Can someone please confirm this?

CPU usage drops from ~45% to ~12% when a CD is inserted.

I'm seeing the same problem, about 70% of a quad-core.
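A capture from the fuser loop above can be boiled down with a one-line awk tally to show which process names keep reopening the device. The log written below is a fabricated sample in the same shape as the attached fuser output, not the reporter's actual capture.

```shell
# Fabricated sample of repeated "fuser -v /dev/sr0" output (illustration only).
cat > /tmp/fuser.log <<'EOF'
/dev/sr0: root 29190 F.... hald-addon-stor
/dev/sr0: root 29411 f.... cdrom_id
/dev/sr0: root 29412 f.... scsi_id
/dev/sr0: root 29190 F.... hald-addon-stor
/dev/sr0: root 29190 F.... hald-addon-stor
EOF
# Tally the last column (the command name) and sort by frequency,
# most frequent opener first.
awk '{count[$NF]++} END {for (p in count) print count[p], p}' /tmp/fuser.log | sort -rn
```

On the sample above this prints hald-addon-stor at the top with three hits, mirroring the diagnosis in the thread that hald-addon-storage is the repeat opener.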
udevadm monitor:

UDEV [1267681377.659138] change /devices/virtual/block/md0 (block)
UDEV [1267681377.663257] change /devices/virtual/block/md1 (block)
UDEV [1267681377.714310] change /devices/virtual/block/md1 (block)
UDEV [1267681377.716608] change /devices/virtual/block/md0 (block)
UDEV [1267681377.767084] change /devices/virtual/block/md0 (block)
UDEV [1267681377.770620] change /devices/virtual/block/md1 (block)
UDEV [1267681377.818623] change /devices/virtual/block/md0 (block)
UDEV [1267681377.818657] change /devices/virtual/block/md1 (block)
UDEV [1267681377.870594] change /devices/virtual/block/md0 (block)
UDEV [1267681377.877968] change /devices/virtual/block/md1 (block)
UDEV [1267681377.924982] change /devices/virtual/block/md0 (block)
UDEV [1267681377.925209] change /devices/virtual/block/md1 (block)
UDEV [1267681377.974837] change /devices/virtual/block/md0 (block)
UDEV [1267681377.979478] change /devices/virtual/block/md1 (block)
UDEV [1267681378.031417] change /devices/virtual/block/md0 (block)
UDEV [1267681378.038039] change /devices/virtual/block/md1 (block)
UDEV [1267681378.082603] change /devices/virtual/block/md0 (block)
UDEV [1267681378.082637] change /devices/virtual/block/md1 (block)

Smolt profile: http://www.smolts.org/client/show/pub_d98859db-2a89-44d6-baec-284b6acac7f9

This message is a reminder that Fedora 11 is nearing its end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 11. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '11'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 11's end of life.
Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 11 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, please change the 'version' of this bug to the applicable version. If you are unable to change the version, please add a comment here and someone will do it for you. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping

I'm seeing the same behaviour with Fedora 12 + updates. The only workaround so far is to kill udevd. Inserting a CD causes the CPU load to decrease, but it is still higher than it should be.

(In reply to comment #15)
> I'm seeing the same behaviour with Fedora 12 + updates.
>
> The only workaround so far is to kill udevd.
> Inserting a CD causes the CPU load to decrease but it is still higher than it
> should be.

Even with https://admin.fedoraproject.org/updates/udev-145-21.fc12 ?

(In reply to comment #16)
> even with https://admin.fedoraproject.org/updates/udev-145-21.fc12 ?

It seems to be better now as long as a CD is in the drive; the idle load in my case is 98%. The combined user and system load is still ~20% when there is no CD in the drive.

It even got worse with F13. The load is at ~40% whilst udevd is running; it is at ~1% after kill -9 udevd... gvfs-gdu-volume seems to be the process with the highest load (according to top).

(In reply to comment #18)
> It even got worse with F13.
>
> The load is at ~40% whilst udevd is running.
> It is at ~1% after kill -9 udevd...
>
> gvfs-gdu-volume seems to be the process with the highest load (according to
> top).

And if you

# mv /usr/libexec/gvfs-gdu-volume-monitor /usr/libexec/gvfs-gdu-volume-monitor.bak

does the problem persist after a reboot?

(In reply to comment #19)
> # mv /usr/libexec/gvfs-gdu-volume-monitor
> /usr/libexec/gvfs-gdu-volume-monitor.bak
>
> does the problem persist after a reboot ?

Thanks, that helped a lot. The load reduced to ~15% (5% udevd, which causes 10% system load). Still... udevd should not be triggered by constant "change" events.

(In reply to comment #20)
> Thanks, that helped a lot.
> The load reduced down to ~15% (5% udevd which causes 10% system load).

What's your output of

# udevadm monitor

?

(In reply to comment #22)
> what's your output of
>
> # udevadm monitor

Loads of the following messages (repeats about every 40 ms). It might have something to do with another problem I currently have. I'll open another bug for this one, but the current kernel 2.6.33.4-95.fc13.i686 doesn't boot on my system: it loops forever when scsi_wait_scan is loaded by the initramfs. Commenting out the following line in the initramfs helped me to boot my system:

modprobe scsi_wait_scan && rmmod scsi_wait_scan

I'll post a bug report later, but I'm pretty sure that this has something to do with buggy hardware I'm using.
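The mv workaround discussed above (moving gvfs-gdu-volume-monitor aside so it stops polling, keeping a .bak copy to restore) can be sketched as a small script. Here /tmp/demo-libexec stands in for /usr/libexec so the sketch is runnable anywhere; on a real F13 box MON would be /usr/libexec/gvfs-gdu-volume-monitor, and the stand-in "binary" is created only for the demo.

```shell
# Sketch of the workaround: move the gvfs volume monitor aside so it can no
# longer be spawned, keeping a .bak copy so it can be restored later.
LIBEXEC=/tmp/demo-libexec          # stand-in for /usr/libexec (demo only)
MON="$LIBEXEC/gvfs-gdu-volume-monitor"
mkdir -p "$LIBEXEC"
touch "$MON" && chmod +x "$MON"    # stand-in binary so the demo has a file to move

if [ -x "$MON" ]; then
    mv "$MON" "$MON.bak"           # disable; takes effect at the next login/reboot
    echo "disabled; restore with: mv $MON.bak $MON"
fi
```

This is a blunt instrument (a package update will reinstall the file), but it matches the diagnostic step used in the thread to confirm the monitor was driving the change-event loop.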
#udevadm monitor
KERNEL[1274969517.470415] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0 (scsi)
UDEV  [1274969517.470445] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0 (scsi)
KERNEL[1274969517.470466] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sr0 (block)
UDEV  [1274969517.511675] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sr0 (block)

#lspci -vv
00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03) (prog-if 01 [AHCI 1.0])
	Subsystem: Acer Incorporated [ALI] Device 013c
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin B routed to IRQ 27
	Region 0: I/O ports at 1818 [size=8]
	Region 1: I/O ports at 180c [size=4]
	Region 2: I/O ports at 1810 [size=8]
	Region 3: I/O ports at 1808 [size=4]
	Region 4: I/O ports at 18e0 [size=32]
	Region 5: Memory at f4a04000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: [80] MSI: Enable+ Count=1/16 Maskable- 64bit-
		Address: fee0100c  Data: 4181
	Capabilities: [70] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [a8] SATA HBA v1.0 BAR4 Offset=00000004
	Capabilities: [b0] PCI Advanced Features
		AFCap: TP+ FLR+
		AFCtrl: FLR-
		AFStatus: TP-
	Kernel driver in use: ahci

The kernel bug which might be related to this one (buggy hardware?): https://bugzilla.redhat.com/show_bug.cgi?id=597110

I filed this bug: https://bugzilla.redhat.com/show_bug.cgi?id=600465 which may be the same issue, but includes the observation that, when the CPU is pegged, it's possible to hotkey to the textual virtual terminal and system performance immediately returns to normal. Switching back to the X session makes the load spike again.
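The repeating change events captured throughout this thread (for the drm device, the md arrays, and sr0) can be tallied per device path to see which device is generating the flood. The capture written below is a fabricated sample in the same format as the udevadm monitor output quoted in this bug.

```shell
# Fabricated "udevadm monitor" capture (illustration only).
cat > /tmp/udevmon.log <<'EOF'
KERNEL[1274969517.470415] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0 (scsi)
UDEV  [1274969517.470445] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0 (scsi)
KERNEL[1274969517.470466] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sr0 (block)
UDEV  [1274969517.511675] change /devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sr0 (block)
EOF
# Count "change" events per device path. Field positions differ between the
# two line shapes ("KERNEL[ts]" is one field, "UDEV  [ts]" is two), so scan
# for the word "change" and take the field right after it as the path.
awk '{for (i = 1; i <= NF; i++) if ($i == "change") {count[$(i+1)]++; break}}
     END {for (d in count) print count[d], d}' /tmp/udevmon.log | sort -rn
```

A live equivalent would be `udevadm monitor > /tmp/udevmon.log` for a minute or so before running the awk tally; the busiest device path ends up at the top.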
So it seems like the X server is implicated somehow... maybe...

On further exploration, this looks a lot like https://bugzilla.redhat.com/show_bug.cgi?id=528312 to me.

This message is a reminder that Fedora 13 is nearing its end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 13. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '13'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 13's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 13 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, please change the 'version' of this bug to the applicable version. If you are unable to change the version, please add a comment here and someone will do it for you. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Fedora 13 changed to end-of-life (EOL) status on 2011-06-25. Fedora 13 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version. Thank you for reporting this bug and we are sorry it could not be fixed.