
Bug 1066630

Summary: virsh capabilities has no <guest> sections (ppc64)
Product: [Community] Virtualization Tools
Component: libvirt
Reporter: Richard W.M. Jones <rjones>
Assignee: Libvirt Maintainers <libvirt-maint>
Status: CLOSED NOTABUG
Severity: unspecified
Priority: unspecified
Version: unspecified
CC: acathrow
Hardware: ppc64
OS: Unspecified
Type: Bug
Doc Type: Bug Fix
Last Closed: 2014-02-18 18:38:37 UTC
Bug Blocks: 910269, 1059428

Attachments:
libvirt daemon debugging

Description Richard W.M. Jones 2014-02-18 18:22:13 UTC
Description of problem:

This is the output of

  $ ./run ./tools/virsh capabilities

run as non-root, using libvirt compiled from git today, on ppc64 (Fedora Rawhide).

<capabilities>

  <host>
    <uuid>dfe785c8-06f9-48d6-b57b-87fb00abc31c</uuid>
    <cpu>
      <arch>ppc64</arch>
      <model>POWER7+_v2.1</model>
      <vendor>IBM</vendor>
      <topology sockets='1' cores='8' threads='4'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
      <suspend_hybrid/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>65950848</memory>
          <cpus num='32'>
            <cpu id='0'/>
            <cpu id='1'/>
            <cpu id='2'/>
            <cpu id='3'/>
            <cpu id='4'/>
            <cpu id='5'/>
            <cpu id='6'/>
            <cpu id='7'/>
            <cpu id='8'/>
            <cpu id='9'/>
            <cpu id='10'/>
            <cpu id='11'/>
            <cpu id='12'/>
            <cpu id='13'/>
            <cpu id='14'/>
            <cpu id='15'/>
            <cpu id='16'/>
            <cpu id='17'/>
            <cpu id='18'/>
            <cpu id='19'/>
            <cpu id='20'/>
            <cpu id='21'/>
            <cpu id='22'/>
            <cpu id='23'/>
            <cpu id='24'/>
            <cpu id='25'/>
            <cpu id='26'/>
            <cpu id='27'/>
            <cpu id='28'/>
            <cpu id='29'/>
            <cpu id='30'/>
            <cpu id='31'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>64435520</memory>
          <cpus num='32'>
            <cpu id='32'/>
            <cpu id='33'/>
            <cpu id='34'/>
            <cpu id='35'/>
            <cpu id='36'/>
            <cpu id='37'/>
            <cpu id='38'/>
            <cpu id='39'/>
            <cpu id='40'/>
            <cpu id='41'/>
            <cpu id='42'/>
            <cpu id='43'/>
            <cpu id='44'/>
            <cpu id='45'/>
            <cpu id='46'/>
            <cpu id='47'/>
            <cpu id='48'/>
            <cpu id='49'/>
            <cpu id='50'/>
            <cpu id='51'/>
            <cpu id='52'/>
            <cpu id='53'/>
            <cpu id='54'/>
            <cpu id='55'/>
            <cpu id='56'/>
            <cpu id='57'/>
            <cpu id='58'/>
            <cpu id='59'/>
            <cpu id='60'/>
            <cpu id='61'/>
            <cpu id='62'/>
            <cpu id='63'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>selinux</model>
      <doi>0</doi>
      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
    </secmodel>
  </host>

</capabilities>

The problem, from a libguestfs point of view, is that there are
no <guest> sections at all.
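From a client's point of view (libguestfs here), the absence of usable emulators can be detected straight from the capabilities XML. A minimal sketch in shell, with an abbreviated inline sample standing in for real `virsh capabilities` output:

```shell
# Sketch: detect missing <guest> sections in capabilities XML.
# "caps" is an abbreviated stand-in for real `virsh capabilities` output.
caps='<capabilities>
  <host>
    <cpu><arch>ppc64</arch></cpu>
  </host>
</capabilities>'

if printf '%s\n' "$caps" | grep -q '<guest'; then
    echo "guest sections present"
else
    echo "no <guest> sections: no emulator was successfully probed"
fi
```

In real use you would pipe `virsh capabilities` into the check instead of the inline sample.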

Version-Release number of selected component (if applicable):

libvirt compiled from git today

How reproducible:

100%

Steps to Reproduce:
1. On ppc64, compile libvirt from git.
2. Kill any libvirtd processes running.
3. ./run ./tools/virsh capabilities (non-root)

Actual results:

See above.

Expected results:

Expect to see some <guest> sections.

Additional info:

qemu binaries are installed:

$ qemu-system-
qemu-system-alpha         qemu-system-mips64        qemu-system-sh4
qemu-system-arm           qemu-system-mips64el      qemu-system-sh4eb
qemu-system-cris          qemu-system-mipsel        qemu-system-sparc
qemu-system-i386          qemu-system-moxie         qemu-system-sparc64
qemu-system-lm32          qemu-system-or32          qemu-system-unicore32
qemu-system-m68k          qemu-system-ppc           qemu-system-x86_64
qemu-system-microblaze    qemu-system-ppc64         qemu-system-xtensa
qemu-system-microblazeel  qemu-system-ppcemb        qemu-system-xtensaeb
qemu-system-mips          qemu-system-s390x

Comment 1 Richard W.M. Jones 2014-02-18 18:27:02 UTC
Also happens as root, with libvirt from Rawhide.

# rpm -qf `which virsh`
libvirt-client-1.2.0-1.fc21.ppc64
# systemctl restart libvirtd
# ps ax | grep libvirtd
45510 ?        Ssl    0:00 /usr/sbin/libvirtd
45616 pts/7    S+     0:00 grep --color=auto libvirtd
# virsh capabilities
<capabilities>

  <host>
    <uuid>6ac3b69a-7939-4ede-80d0-a9837723a8f1</uuid>
    <cpu>
      <arch>ppc64</arch>
      <model>POWER7+_v2.1</model>
      <vendor>IBM</vendor>
      <topology sockets='1' cores='8' threads='4'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
      <suspend_hybrid/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>65950848</memory>
          <cpus num='32'>
            <cpu id='0'/>
            <cpu id='1'/>
            <cpu id='2'/>
            <cpu id='3'/>
            <cpu id='4'/>
            <cpu id='5'/>
            <cpu id='6'/>
            <cpu id='7'/>
            <cpu id='8'/>
            <cpu id='9'/>
            <cpu id='10'/>
            <cpu id='11'/>
            <cpu id='12'/>
            <cpu id='13'/>
            <cpu id='14'/>
            <cpu id='15'/>
            <cpu id='16'/>
            <cpu id='17'/>
            <cpu id='18'/>
            <cpu id='19'/>
            <cpu id='20'/>
            <cpu id='21'/>
            <cpu id='22'/>
            <cpu id='23'/>
            <cpu id='24'/>
            <cpu id='25'/>
            <cpu id='26'/>
            <cpu id='27'/>
            <cpu id='28'/>
            <cpu id='29'/>
            <cpu id='30'/>
            <cpu id='31'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>64435520</memory>
          <cpus num='32'>
            <cpu id='32'/>
            <cpu id='33'/>
            <cpu id='34'/>
            <cpu id='35'/>
            <cpu id='36'/>
            <cpu id='37'/>
            <cpu id='38'/>
            <cpu id='39'/>
            <cpu id='40'/>
            <cpu id='41'/>
            <cpu id='42'/>
            <cpu id='43'/>
            <cpu id='44'/>
            <cpu id='45'/>
            <cpu id='46'/>
            <cpu id='47'/>
            <cpu id='48'/>
            <cpu id='49'/>
            <cpu id='50'/>
            <cpu id='51'/>
            <cpu id='52'/>
            <cpu id='53'/>
            <cpu id='54'/>
            <cpu id='55'/>
            <cpu id='56'/>
            <cpu id='57'/>
            <cpu id='58'/>
            <cpu id='59'/>
            <cpu id='60'/>
            <cpu id='61'/>
            <cpu id='62'/>
            <cpu id='63'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>selinux</model>
      <doi>0</doi>
      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
      <baselabel type='kvm'>+107:+107</baselabel>
      <baselabel type='qemu'>+107:+107</baselabel>
    </secmodel>
  </host>

</capabilities>

Comment 2 Richard W.M. Jones 2014-02-18 18:37:12 UTC
Created attachment 864677 [details]
libvirt daemon debugging

libvirtd deadlocks somehow if you enable debugging (log_level = 1).

I was only able to set log_level = 3, and attached is the result.

To cut a long story short, it turns out to be some sort of strange
qemu linking problem:

symbol lookup error: /usr/bin/qemu-system-s390x: undefined symbol: glfs_discard_async
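This class of failure can also be spotted without libvirt at all, by asking each installed emulator to print its version: a binary with an unresolved symbol fails at dynamic-link time and exits non-zero. A hedged sketch (`probe_emulator` is a made-up helper, not libvirt code):

```shell
# Sketch: probe each installed qemu system emulator the way a capability
# probe would, surfacing dynamic-linker errors instead of hiding them.
probe_emulator() {
    bin="$1"
    if out=$("$bin" -version 2>&1); then
        echo "OK: $bin"
    else
        echo "BROKEN: $bin: $out"
    fi
}

for q in /usr/bin/qemu-system-*; do
    [ -x "$q" ] && probe_emulator "$q"
done
```

On the machine from this report, such a loop would have flagged qemu-system-s390x (and any other emulator linked against the mismatched glusterfs) immediately.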

Comment 3 Richard W.M. Jones 2014-02-18 18:38:37 UTC
Fixed by updating glusterfs-devel to match the version
qemu was compiled against.

Comment 4 Richard W.M. Jones 2014-02-18 18:40:14 UTC
BTW I think the bug here is that libvirt makes this kind
of problem (which I've had several times) excessively hard
to diagnose.

It should produce a "your qemu is totally broken" message
somewhere.  Perhaps the capabilities output could include
an informative section which shows the buggy output from
broken qemu?

<info>
  Could not run qemu-system-blah:
  "symbol lookup error: /usr/bin/qemu-system-s390x: undefined symbol: glfs_discard_async"
</info>
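An <info> element like the one suggested above could be produced by whatever does the probing. A throwaway sketch (`probe_info` is hypothetical; real code would XML-escape the captured text, which this sketch does not):

```shell
# Hypothetical sketch of the <info> suggestion: run the emulator, and if
# it fails, wrap its combined output in an <info> element so the breakage
# is visible in the capabilities output rather than silently dropped.
# NOTE: no XML escaping is done here; real code would need it.
probe_info() {
    bin="$1"
    if ! err=$("$bin" -version 2>&1); then
        printf '<info>\n  Could not run %s:\n  "%s"\n</info>\n' "$bin" "$err"
    fi
}

probe_info /bin/false
```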