Bug 1372251 - libvirt wrongly converts JSON to XML when attaching JSON glusterfs backing images
Summary: libvirt wrongly converts JSON to XML when attaching JSON glusterfs backing images
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-01 09:13 UTC by Han Han
Modified: 2016-11-03 18:54 UTC
6 users

Fixed In Version: libvirt-2.0.0-8.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 18:54:00 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
The libvirtd log of comment1 (72.10 KB, text/plain)
2016-09-02 02:16 UTC, Han Han


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2016:2577 0 normal SHIPPED_LIVE Moderate: libvirt security, bug fix, and enhancement update 2016-11-03 12:07:06 UTC

Description Han Han 2016-09-01 09:13:18 UTC
Description of problem:
As in the summary: when attaching an image whose backing file is specified as a JSON pseudo-protocol string, libvirt converts the JSON to XML incorrectly.

Version-Release number of selected component (if applicable):
libvirt-2.0.0-6.el7.x86_64
qemu-kvm-rhev-2.6.0-22.el7.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Prepare two glusterfs nodes as a glusterfs cluster. Create image V in the glusterfs root directory.
2. Create a json backing image via glusterfs protocol:
# qemu-img create -f qcow2 -b 'json:{"file.driver":"gluster", "file.volume":"gluster-vol1", "file.path":"V","file.server":[ { "type":"tcp", "host":"10.66.5.50", "port":"24007"}, { "type":"tcp", "host":"10.66.4.233", "port":"24007"}]}' /var/lib/libvirt/images/gluster_multi.img
Formatting '/var/lib/libvirt/images/gluster_multi.img', fmt=qcow2 size=524288000 backing_file=json:{"file.driver":"gluster",, "file.volume":"gluster-vol1",, "file.path":"V",,"file.server":[ { "type":"tcp",, "host":"10.66.5.50",, "port":"24007"},, { "type":"tcp",, "host":"10.66.4.233",, "port":"24007"}]} encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
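For reference, the conversion libvirt is expected to perform here can be sketched in Python (an illustrative sketch only, not libvirt's actual code; the field names follow the qemu-img command above and the XML mirrors libvirt's backingStore source format):

```python
import json
import xml.etree.ElementTree as ET

def backing_json_to_xml(backing):
    """Sketch of mapping a qemu 'json:' backing spec to a libvirt-style
    <source> element (illustrative, not libvirt's implementation)."""
    spec = json.loads(backing[len("json:"):])
    # The protocol should come from file.driver ('gluster' here);
    # the bug reported here produced protocol='none' instead.
    src = ET.Element("source", protocol=spec["file.driver"],
                     name="%s/%s" % (spec["file.volume"], spec["file.path"]))
    for server in spec["file.server"]:
        if server["type"] == "tcp":
            ET.SubElement(src, "host",
                          name=server["host"], port=server["port"])
        else:  # unix socket transport
            ET.SubElement(src, "host",
                          transport="unix", socket=server["socket"])
    return ET.tostring(src, encoding="unicode")

xml = backing_json_to_xml(
    'json:{"file.driver":"gluster", "file.volume":"gluster-vol1", '
    '"file.path":"V","file.server":[{"type":"tcp", "host":"10.66.5.50", '
    '"port":"24007"}]}')
print(xml)
```

With the spec from the qemu-img command, this prints a source element with protocol="gluster", name="gluster-vol1/V" and one host element per server entry.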

3. Attach the image to a running VM:
# virsh attach-disk V  /var/lib/libvirt/images/gluster_multi.img vdb --subdriver qcow2                                                                                              
Disk attached successfully

# virsh dumpxml V|awk '/<disk/,/<\/disk/'                                                  
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/V.qcow2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/gluster_multi.img'/>
      <backingStore type='network' index='1'>
        <format type='raw'/>
        <source protocol='none' name='gluster-vol1/V'>
          <host name='10.66.5.50' port='24007'/>
          <host name='10.66.4.233' port='24007'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
    </disk>

Detach the disk:
# virsh detach-disk V  vdb              
error: Failed to detach disk
error: unsupported configuration: unknown protocol type 'none'


Actual results:
The backing store is parsed with protocol='none' (see the dumpxml output above), and detaching the disk fails.

Expected results:
The backing store has protocol='gluster' and the disk detaches successfully.

Additional info:

Comment 2 Han Han 2016-09-02 02:10:53 UTC
One more issue: unable to attach a gluster JSON backing image that uses a UNIX socket.
1. Create a glusterfs server on localhost
2. Create glusterfs json backing image and attach the image to a running VM
# qemu-img create -f qcow2 -b 'json:{"file.driver":"gluster", "file.volume":"gluster-vol1", "file.path":"V","file.server":[ { "type":"unix", "socket":"/var/run/glusterd.socket"}]}' /var/lib/libvirt/images/gluster_socket.img
Formatting '/var/lib/libvirt/images/gluster_socket.img', fmt=qcow2 size=524288000 backing_file=json:{"file.driver":"gluster",, "file.volume":"gluster-vol1",, "file.path":"V",,"file.server":[ { "type":"unix",, "socket":"/var/run/glusterd.socket"}]} encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16

# virsh attach-disk V  /var/lib/libvirt/images/gluster_socket.img vdb --subdriver qcow2
error: Failed to attach disk
error: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized

Actual results:
The attach in step 2 fails with the QEMU error shown above.

Expected results:
Expected results:
The attach succeeds and the VM runs with the following XML:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/gluster_multi.img'/>
      <backingStore type='network' index='1'>
        <format type='raw'/>
        <source protocol='gluster' name='gluster-vol1/V'>
          <host transport='unix' socket='/var/run/glusterd.socket'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vdb' bus='virtio'/>
    </disk>

By the way, the XML above can be attached and detached successfully when given to libvirt directly.

I suspect something is wrong in the virStorageSourceParseBackingJSONGluster function, so I am reporting the two issues together.

Comment 3 Han Han 2016-09-02 02:16:04 UTC
Created attachment 1197028 [details]
The libvirtd log of comment1

In the log, we can see the JSON image was parsed into the following XML:
<disk type='file'>
  <driver type='qcow2'/>
  <source file='/var/lib/libvirt/images/gluster_socket.img'/>
  <target dev='vdb'/>
</disk>

It is not correct.

Comment 4 Peter Krempa 2016-09-05 13:40:59 UTC
(In reply to Han Han from comment #2)
> One more issue: unable to attach gluster json backing image with unix socket

[...]

> # virsh attach-disk V  /var/lib/libvirt/images/gluster_socket.img vdb
> --subdriver qcow2
> error: Failed to attach disk
> error: internal error: unable to execute QEMU command
> '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be
> initialized

[...]
 
> By the way, the xml above can be attached and detached successfully in
> libvirt.
> 
> I think something wrong in virStorageSourceParseBackingJSONGluster function.
> So I put the two issue together.

Only the first issue is related to the parser. The issue described in comment 2 originates in qemu; please attach the qemu log file. As you've said, the equivalent XML works with libvirt, so the problem is most likely in qemu.


(In reply to Han Han from comment #3)
> Created attachment 1197028 [details]
> The libvirtd log of comment1
> 
> In the log, we can find the json image parsed as following xml:
>   <driver type='qcow2'/>
>   <source file='/var/lib/libvirt/images/gluster_socket.img'/>
>   <target dev='vdb'/>
> </disk>
> 
> It is not correct.

It certainly is correct. Virsh does not send any additional information when attaching the disk; the backing-chain information is added by libvirt after it attaches the disk. Since the attach failed, the only information you see is the attach XML, which is correct. (See: virsh attach-disk V /var/lib/libvirt/images/gluster_socket.img vdb --subdriver qcow2 --print-xml)

Comment 5 Han Han 2016-09-06 02:04:12 UTC
# virsh attach-disk V  /var/lib/libvirt/images/gluster_socket.img vdb --subdriver qcow2 --print-xml
<disk type='file'>
  <driver type='qcow2'/>
  <source file='/var/lib/libvirt/images/gluster_socket.img'/>
  <target dev='vdb'/>
</disk>

# virsh attach-disk V  /var/lib/libvirt/images/gluster_socket.img vdb --subdriver qcow2            
error: Failed to attach disk
error: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized

The qemu log:
[2016-09-06 01:59:41.830238] E [rpc-clnt.c:362:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7fc7e9960c32] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fc7e972b84e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fc7e972b95e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7fc7e972d2ea] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7fc7e972db18] ))))) 0-gfapi: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2016-09-06 01:59:41.829375 (xid=0x1)
[2016-09-06 01:59:41.830388] E [MSGID: 104007] [glfs-mgmt.c:637:glfs_mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch volume file (key:gluster-vol1) [Invalid argument]
[2016-09-06 01:59:41.830409] E [MSGID: 104024] [glfs-mgmt.c:738:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with remote-host: /var/run/glusterd.socket (Transport endpoint is not connected) [Transport endpoint is not connected]
Could not open backing file: Gluster connection for volume gluster-vol1, path V failed to connect

Comment 6 Peter Krempa 2016-09-06 12:06:47 UTC
As expected, that points to qemu not being able to open the socket for some reason, and it doesn't seem to be related to the original issue in any way. I'd expect that either the permissions (UNIX and SELinux) on the socket are invalid or something is wrong with the gluster daemon.

Comment 7 Peter Krempa 2016-09-06 12:07:14 UTC
Fix for the original issue was pushed upstream:

commit b7a650c97c717b1065c255a9be620fd2ba320180
Author: Peter Krempa <pkrempa>
Date:   Mon Sep 5 15:31:44 2016 +0200

    util: storage: Properly set protocol type when parsing gluster json string
    
    Commit 2ed772cd forgot to set proper protocol. This was also present in
    the test data.
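Based on the commit message ("forgot to set proper protocol"), the bug class can be illustrated with a hedged Python sketch (this is not libvirt's actual C code; names and the 'fixed' toggle are invented for illustration): the parser populated the storage-source fields except the protocol, which kept its default value and was later formatted as 'none'.

```python
from dataclasses import dataclass, field

@dataclass
class StorageSource:
    # Illustrative stand-in for libvirt's storage-source struct.
    volume: str = ""
    path: str = ""
    hosts: list = field(default_factory=list)
    protocol: str = "none"   # default value, serialized as protocol='none'

def parse_backing_json_gluster(spec, fixed=True):
    """Sketch of the gluster JSON backing parser; 'fixed' toggles
    whether the protocol is set explicitly (the one-line fix)."""
    src = StorageSource(volume=spec["file.volume"],
                        path=spec["file.path"],
                        hosts=spec["file.server"])
    if fixed:
        src.protocol = "gluster"  # the fix: set the protocol explicitly
    return src

spec = {"file.volume": "gluster-vol1", "file.path": "V",
        "file.server": [{"type": "tcp", "host": "10.66.5.50",
                         "port": "24007"}]}
print(parse_backing_json_gluster(spec, fixed=False).protocol)  # the bug
print(parse_backing_json_gluster(spec).protocol)               # after fix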

Comment 9 Han Han 2016-09-07 07:17:29 UTC
I tested on the latest upstream libvirt; the first issue is resolved:
# virsh detach-disk V vdb
Disk detached successfully

# virsh attach-disk V  /var/lib/libvirt/images/gluster_multi.img vdb --subdriver qcow2  
Disk attached successfully

# virsh dumpxml V|awk '/<disk/,/<\/disk/'                                               
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/V.qcow2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/gluster_multi.img'/>
      <backingStore type='network' index='1'>
        <format type='raw'/>
        <source protocol='gluster' name='gluster-vol1/V'>
          <host name='10.66.5.50' port='24007'/>
          <host name='10.66.4.233' port='24007'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>
# virsh detach-disk V vdb                
Disk detached successfully

For comment 2, new bug BZ1373786 has been filed to track it.

Comment 11 Yang Yang 2016-09-13 03:11:04 UTC
Verified in libvirt-2.0.0-8.el7.x86_64 

Steps are as follows:
1. Create an image using a JSON-format backing file
# qemu-img create -f qcow2 -F qcow2 -b 'json:{"file.driver":"gluster", "file.volume":"gluster-vol1", "file.path":"yy.qcow2","file.server":[ { "type":"tcp", "host":"$ip", "port":"24007"}]}' /var/lib/libvirt/images/gluster.img
Formatting '/var/lib/libvirt/images/gluster.img', fmt=qcow2 size=1073741824 backing_file=json:{"file.driver":"gluster",, "file.volume":"gluster-vol1",, "file.path":"yy.qcow2",,"file.server":[ { "type":"tcp",, "host":"$ip",, "port":"24007"}]} backing_fmt=qcow2 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
# qemu-img info /var/lib/libvirt/images/gluster.img
image: /var/lib/libvirt/images/gluster.img
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 196K
cluster_size: 65536
backing file: json:{"file.driver":"gluster", "file.volume":"gluster-vol1", "file.path":"yy.qcow2","file.server":[ { "type":"tcp", "host":"$ip", "port":"24007"}]}
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

2. attach the disk
# virsh attach-disk vm1 /var/lib/libvirt/images/gluster.img vdc --subdriver qcow2 --live --config
Disk attached successfully

# virsh dumpxml vm1 | grep vdc -a10    
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/gluster.img'/>
      <backingStore type='network' index='1'>
        <format type='qcow2'/>
        <source protocol='gluster' name='gluster-vol1/yy.qcow2'>
          <host name='$ip' port='24007'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
    </disk>

[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1  392M  0 rom  
vda           252:0    0    5G  0 disk 
├─vda1        252:1    0    1G  0 part /boot
└─vda2        252:2    0    4G  0 part 
  ├─rhel-root 253:0    0  3.5G  0 lvm  /
  └─rhel-swap 253:1    0  512M  0 lvm  [SWAP]
vdb           252:16   0    1G  0 disk 
[root@localhost ~]# mkfs.xfs /dev/vdb
meta-data=/dev/vdb               isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mount /dev/vdb /mnt
[root@localhost ~]# echo hello > /mnt/hello
[root@localhost ~]# cat /mnt/hello 
hello
[root@localhost ~]# umount /mnt

# virsh detach-disk vm1 vdc --live --config
Disk detached successfully
# virsh dumpxml vm1 | grep vdc

Comment 13 errata-xmlrpc 2016-11-03 18:54:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html

