Bug 1741094 - [Upstream] Incremental backup: QEMU coredump when exposing an active bitmap via pull mode (data plane enabled)
Summary: [Upstream] Incremental backup: QEMU coredump when exposing an active bitmap via pull mode (data plane enabled)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Assignee: Eric Blake
QA Contact: aihua liang
URL:
Whiteboard:
Depends On:
Blocks: 1741186 1758964
TreeView+ depends on / blocked
 
Reported: 2019-08-14 09:07 UTC by aihua liang
Modified: 2020-02-04 18:29 UTC (History)
7 users (show)

Fixed In Version: qemu-kvm-4.1.0-14.module+el8.1.1+4632+a8269660
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-04 18:28:48 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:0404 0 None None None 2020-02-04 18:29:57 UTC

Description aihua liang 2019-08-14 09:07:55 UTC
Description of problem:
 [Upstream] Incremental backup: QEMU coredump when exposing an active bitmap via pull mode

Version-Release number of selected component (if applicable):
 kernel version: 4.18.0-122.el8.x86_64
 qemu-kvm version: v4.1.0-rc5

How reproducible:
100%

Steps to Reproduce:
1.Start guest with qemu cmds:
   /usr/local/bin/qemu-system-x86_64 \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190602-221944-MrlxVzia,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190602-221944-MrlxVzia,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idn20piu  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190602-221944-MrlxVzia,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190602-221944-MrlxVzia,path=/var/tmp/seabios-20190602-221944-MrlxVzia,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190602-221944-MrlxVzia,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -drive driver=qcow2,id=drive_image1,file=/home/kvm_autotest_root/images/rhel810-64-virtio-scsi.qcow2,if=none,cache=none \
    -device virtio-blk-pci,id=image1,drive=drive_image1,iothread=iothread0,bus=pcie.0-root-port-3,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -blockdev driver=file,filename=/mnt/data.qcow2,node-name=file_node \
    -blockdev driver=qcow2,file=file_node,node-name=drive_data1 \
    -device virtio-blk-pci,id=data1,drive=drive_data1,bus=pcie.0-root-port-5,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -drive format=qcow2,file=/mnt/data2.qcow2,id=drive_data2,cache=none,if=none \
    -device virtio-blk-pci,id=data2,drive=drive_data2,bus=pcie.0-root-port-6,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:33:34:35:36:37,id=idj01pFr,vectors=4,netdev=idMgbx8B,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idMgbx8B,vhost=on \
    -m 4096  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Skylake-Client',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait \

2.Start NBD server:
   {"execute":"nbd-server-start","arguments":{"addr":{"type":"inet","data":{"host":"10.73.224.68","port":"10809"}}}}

3.Create scratch.img and add it.
   #qemu-img create -f qcow2 -b /mnt/data.qcow2 -F qcow2 scratch.img
   {"execute":"blockdev-add","arguments":{"driver":"qcow2","node-name":"tmp","file":{"driver":"file","filename":"scratch.img"},"backing":"drive_data1"}}

4.Do full backup with sync "none" and add a bitmap
   { "execute": "transaction", "arguments": { "actions": [ {"type": "blockdev-backup", "data": { "device": "drive_data1", "target": "tmp", "sync": "none", "job-id":"j1" } }, {"type": "block-dirty-bitmap-add", "data": { "node": "drive_data1", "name": "bitmap0" } } ] } }

5.Expose full backup node and the bitmap
  {"execute": "nbd-server-add", "arguments": { "device": "tmp", "bitmap": "bitmap0" } }
{"timestamp": {"seconds": 1565772947, "microseconds": 784292}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "j1"}}
Ncat: Connection reset by peer.
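The QMP sequence in steps 2, 4, and 5 can be sketched as payload builders. This is a minimal illustration rather than a full QMP client: the helper function names are hypothetical, and the host, port, node names, and job id are taken from the steps above.

```python
import json

def nbd_server_start(host, port):
    # QMP command to start the built-in NBD server (step 2)
    return {"execute": "nbd-server-start",
            "arguments": {"addr": {"type": "inet",
                                   "data": {"host": host, "port": str(port)}}}}

def backup_with_bitmap(device, target, job_id, bitmap):
    # Transaction combining a sync=none blockdev-backup with
    # block-dirty-bitmap-add (step 4); the bitmap starts out enabled.
    return {"execute": "transaction",
            "arguments": {"actions": [
                {"type": "blockdev-backup",
                 "data": {"device": device, "target": target,
                          "sync": "none", "job-id": job_id}},
                {"type": "block-dirty-bitmap-add",
                 "data": {"node": device, "name": bitmap}}]}}

def nbd_server_add(device, bitmap):
    # Exposing the scratch node together with the still-enabled bitmap
    # (step 5) is what triggered the crash when an iothread was attached.
    return {"execute": "nbd-server-add",
            "arguments": {"device": device, "bitmap": bitmap}}

if __name__ == "__main__":
    for cmd in (nbd_server_start("10.73.224.68", 10809),
                backup_with_bitmap("drive_data1", "tmp", "j1", "bitmap0"),
                nbd_server_add("tmp", "bitmap0")):
        print(json.dumps(cmd))
```

Each printed line matches the corresponding QMP command shown in the steps above.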

Actual results:
 Qemu coredump.


Expected results:
 Exposing an active bitmap should be forbidden.

Additional info:
 Will attach coredump info later.
 When testing with data plane disabled, this issue is not hit.
 Testing on qemu-kvm-4.0.0-6.module+el8.1.0+3736+a2aefea3.x86_64, this issue is not hit.

Comment 2 aihua liang 2019-08-14 09:34:04 UTC
Info when qemu quit:
 (qemu) qemu: qemu_mutex_unlock_impl: Operation not permitted
blk_debug.txt: line 43:  4499 Aborted                 (core dumped) /usr/local/bin/qemu-system-x86_64 -name 'avocado-vt-vm1' -machine q35 -nodefaults -device VGA,bus=pcie.0,addr=0x1 -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190602-221944-MrlxVzia,server,nowait

Comment 5 aihua liang 2019-08-27 06:09:37 UTC
Testing on qemu-kvm-4.1.0-4.module+el8.1.0+4020+16089f93.x86_64 also hits this issue.

Comment 6 Eric Blake 2019-09-18 18:50:17 UTC
Upstream patch proposed: https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg03400.html
still needs iotest coverage

Comment 7 Eric Blake 2019-09-20 22:11:11 UTC
(In reply to Eric Blake from comment #6)
> Upstream patch proposed:
> https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg03400.html
> still needs iotest coverage

iotest coverage posted as patch 2/1 in the same thread.

Comment 9 Eric Blake 2019-10-09 14:18:31 UTC
Downstream backport proposed:
http://post-office.corp.redhat.com/archives/rhvirt-patches/2019-October/msg00544.html

Comment 12 aihua liang 2019-11-13 06:23:39 UTC
Testing on qemu-kvm-4.1.0-14.module+el8.1.1+4632+a8269660 no longer hits this issue; setting bug status to "Verified".

1.Start guest with qemu cmds:
   /usr/local/bin/qemu-system-x86_64 \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190602-221944-MrlxVzia,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190602-221944-MrlxVzia,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idn20piu  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20190602-221944-MrlxVzia,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20190602-221944-MrlxVzia,path=/var/tmp/seabios-20190602-221944-MrlxVzia,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190602-221944-MrlxVzia,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -drive driver=qcow2,id=drive_image1,file=/home/kvm_autotest_root/images/rhel810-64-virtio-scsi.qcow2,if=none,cache=none \
    -device virtio-blk-pci,id=image1,drive=drive_image1,iothread=iothread0,bus=pcie.0-root-port-3,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -blockdev driver=file,filename=/mnt/data.qcow2,node-name=file_node \
    -blockdev driver=qcow2,file=file_node,node-name=drive_data1 \
    -device virtio-blk-pci,id=data1,drive=drive_data1,bus=pcie.0-root-port-5,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -drive format=qcow2,file=/mnt/data2.qcow2,id=drive_data2,cache=none,if=none \
    -device virtio-blk-pci,id=data2,drive=drive_data2,bus=pcie.0-root-port-6,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:33:34:35:36:37,id=idj01pFr,vectors=4,netdev=idMgbx8B,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idMgbx8B,vhost=on \
    -m 4096  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'Skylake-Client',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:3000,server,nowait \

2.Start NBD server:
   {"execute":"nbd-server-start","arguments":{"addr":{"type":"inet","data":{"host":"10.73.224.68","port":"10809"}}}}

3.Create scratch.img and add it.
   #qemu-img create -f qcow2 -b /mnt/data.qcow2 -F qcow2 scratch.img
   {"execute":"blockdev-add","arguments":{"driver":"qcow2","node-name":"tmp","file":{"driver":"file","filename":"scratch.img"},"backing":"drive_data1"}}

4.Do full backup with sync "none" and add a bitmap
   { "execute": "transaction", "arguments": { "actions": [ {"type": "blockdev-backup", "data": { "device": "drive_data1", "target": "tmp", "sync": "none", "job-id":"j1" } }, {"type": "block-dirty-bitmap-add", "data": { "node": "drive_data1", "name": "bitmap0" } } ] } }

5.Expose full backup node and the bitmap
  {"execute": "nbd-server-add", "arguments": { "device": "tmp", "bitmap": "bitmap0" } }
  {"error": {"class": "GenericError", "desc": "Enabled bitmap 'bitmap0' incompatible with readonly export"}}

Comment 14 errata-xmlrpc 2020-02-04 18:28:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0404

