Bug 1266930 - Command 'org.ovirt.engine.core.bll.ImportVmCommand' failed: EngineException: ImportVmCommand::MoveOrCopyAllImageGroups: Failed to copy disk! (Failed with error ENGINE and code 5001)
Keywords:
Status: CLOSED DUPLICATE of bug 1269948
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.6.1
Assignee: Arik
QA Contact:
URL:
Whiteboard: virt
Depends On:
Blocks: TRACKER-bugs-affecting-libguestfs
 
Reported: 2015-09-28 14:50 UTC by Richard W.M. Jones
Modified: 2015-10-12 11:37 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-12 11:37:23 UTC
oVirt Team: ---
Embargoed:
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
engine.log (867.27 KB, text/plain), 2015-09-28 14:50 UTC, Richard W.M. Jones

Description Richard W.M. Jones 2015-09-28 14:50:05 UTC
Created attachment 1077946 [details]
engine.log

Description of problem:

When importing a guest from virt-v2v to oVirt engine 3.6, we
see this error:

2015-09-28 04:35:18,012 ERROR [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-7-thread-15) [7c2b0e05] Command 'org.ovirt.engine.core.bll.ImportVmCommand' failed: EngineException: ImportVmCommand::MoveOrCopyAllImageGroups: Failed to copy disk! (Failed with error ENGINE and code 5001)

(Please see attached engine.log for the full context).

Unfortunately we have absolutely no idea what this error means or what
the true cause is.

Version-Release number of selected component (if applicable):

rhevm-3.6.0-0.16.master.el6.noarch
libvirt-1.2.17-9.el7.x86_64
libguestfs-1.28.1-1.55.el7.x86_64
qemu-kvm-rhev-2.3.0-24.el7.x86_64
virt-v2v-1.28.1-1.55.el7.x86_64

How reproducible:

Unknown, but happened at least twice.

Steps to Reproduce:
1. See the virt-v2v steps here:
https://bugzilla.redhat.com/show_bug.cgi?id=1260590#c20

Additional info:

engine.log from the oVirt server is attached.

Comment 1 Christopher Pereira 2015-10-09 05:53:41 UTC
Confirmed with 3.6-RC1

Richard, it seems that virt-v2v is not setting an active snapshot in the export domain for the VM, and oVirt validates its presence during import.

See this patch: https://www.mail-archive.com/engine-patches@ovirt.org/msg331204.html

Here is the code: https://github.com/halober/ovirt-engine/blob/d77f81c022bf674145b69b70b8e2f62b7f699fd8/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/ImportVmCommand.java
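
For readers following along, here is a rough Python paraphrase of that validation (the actual implementation is Java inside ImportVmCommand, linked above; the function names and the snapshot model below are simplified assumptions, not the engine's real API):

# Paraphrase of the ImportVmCommand snapshot check; illustrative only.

def has_active_snapshot(snapshots):
    """True if the OVF read from the export domain contains a snapshot
    marked ACTIVE (the VM's current disk layer)."""
    return any(s.get("type") == "ACTIVE" for s in snapshots)

def validate_import(vm_id, snapshots, log):
    if not has_active_snapshot(snapshots):
        # The engine only logs a warning here and carries on; the import
        # then fails later in CopyImageGroup (see the log excerpt below).
        log.warning("VM '%s' doesn't have active snapshot in export domain",
                    vm_id)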

I imported a VM into a gluster export domain using virt-v2v.
Then, when trying to import the VM from the export domain using the oVirt GUI, I see these errors:

2015-10-09 02:25:58,704 WARN  [org.ovirt.engine.core.bll.ImportVmCommand] (default task-11) [] VM '91182f5c-4ad3-4114-9311-c854cf9a69a0' doesn't have active snapshot in export domain
2015-10-09 02:25:58,804 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-8-thread-10) [4ca6a170] Running command: ImportVmCommand internal: false. Entities affected :  ID: d15d79f7-97a3-4737-8b85-7c34851fedde Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN,  ID: d15d79f7-97a3-4737-8b85-7c34851fedde Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN,  ID: ca268aca-bf54-441c-9e17-9bd59d3fc1f7 Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN
2015-10-09 02:25:58,828 INFO  [org.ovirt.engine.core.bll.ImagesHandler] (org.ovirt.thread.pool-8-thread-10) [4ca6a170] Disk alias retrieved from the client is null or empty, the suggested default disk alias to be used is 'test-3_Disk1'
2015-10-09 02:25:58,830 INFO  [org.ovirt.engine.core.bll.ImagesHandler] (org.ovirt.thread.pool-8-thread-10) [4ca6a170] Disk alias retrieved from the client is null or empty, the suggested default disk alias to be used is 'test-3_Disk2'
2015-10-09 02:25:58,830 WARN  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-8-thread-10) [4ca6a170] VM '91182f5c-4ad3-4114-9311-c854cf9a69a0' doesn't have active snapshot in export domain
2015-10-09 02:25:58,843 WARN  [org.ovirt.engine.core.bll.CopyImageGroupCommand] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] CanDoAction of action 'CopyImageGroup' failed for user admin@internal. Reasons: VAR__TYPE__STORAGE__DOMAIN
2015-10-09 02:25:58,843 INFO  [org.ovirt.engine.core.utils.transaction.TransactionSupport] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] transaction rolled back
2015-10-09 02:25:58,844 ERROR [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] Command 'org.ovirt.engine.core.bll.ImportVmCommand' failed: EngineException: ImportVmCommand::MoveOrCopyAllImageGroups: Failed to copy disk! (Failed with error ENGINE and code 5001)
2015-10-09 02:25:58,845 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] Command [id=b91295c3-2b48-4955-a89d-02670202fc3b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.network.VmNetworkStatistics; snapshot: f5990826-681c-4a74-a6ba-3607b51cbaed.
2015-10-09 02:25:58,845 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] Command [id=b91295c3-2b48-4955-a89d-02670202fc3b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.network.VmNetworkInterface; snapshot: f5990826-681c-4a74-a6ba-3607b51cbaed.
2015-10-09 02:25:58,846 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] Command [id=b91295c3-2b48-4955-a89d-02670202fc3b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmStatistics; snapshot: 91182f5c-4ad3-4114-9311-c854cf9a69a0.
2015-10-09 02:25:58,846 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] Command [id=b91295c3-2b48-4955-a89d-02670202fc3b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmDynamic; snapshot: 91182f5c-4ad3-4114-9311-c854cf9a69a0.
2015-10-09 02:25:58,847 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] Command [id=b91295c3-2b48-4955-a89d-02670202fc3b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmStatic; snapshot: 91182f5c-4ad3-4114-9311-c854cf9a69a0.
2015-10-09 02:25:58,905 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] Correlation ID: 4ca6a170, Job ID: 3c8e0ba6-182b-4fd5-9084-cad9eafd8ae9, Call Stack: null, Custom Event ID: -1, Message: Failed to import Vm test-3 to Data Center Default, MyCluster-H8
2015-10-09 02:25:58,927 INFO  [org.ovirt.engine.core.bll.ImportVmCommand] (org.ovirt.thread.pool-8-thread-10) [7e9dacb] Lock freed to object 'EngineLock:{exclusiveLocks='[test-3=<VM_NAME, ACTION_TYPE_FAILED_NAME_ALREADY_USED>, 91182f5c-4ad3-4114-9311-c854cf9a69a0=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName test-3>]', sharedLocks='[91182f5c-4ad3-4114-9311-c854cf9a69a0=<REMOTE_VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName test-3>]'}'

Comment 2 Christopher Pereira 2015-10-09 06:50:54 UTC
I checked the code and the "VM doesn't have active snapshot in export domain" message seems to be just a warning.
The VM I'm trying to import has no snapshots at all.

I wonder why I'm getting:
CanDoAction of action 'CopyImageGroup' failed for user admin@internal. Reasons: VAR__TYPE__STORAGE__DOMAIN

Checking the VDSM logs, I see a StorageDomainDoesNotExist exception for the destination Storage Domain, but the Storage Domain exists and is active in oVirt (a sketch of the failing lookup follows the traceback below).

Thread-16391::ERROR::2015-10-09 03:19:00,141::sdc::144::Storage.StorageDomainCache::(_findDomain) domain d15d79f7-97a3-4737-8b85-7c34851fedde not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/glusterSD.py", line 32, in findDomain
    return GlusterStorageDomain(GlusterStorageDomain.findDomainPath(sdUUID))
  File "/usr/share/vdsm/storage/glusterSD.py", line 28, in findDomainPath
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'd15d79f7-97a3-4737-8b85-7c34851fedde',)
Thread-16391::ERROR::2015-10-09 03:19:00,142::monitor::250::Storage.Monitor::(_monitorDomain) Error monitoring domain d15d79f7-97a3-4737-8b85-7c34851fedde
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 246, in _monitorDomain
    self._performDomainSelftest()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 774, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 313, in _performDomainSelftest
    self.domain.selftest()
  File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
    return getattr(self.getRealDomain(), attrName)
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 123, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/glusterSD.py", line 32, in findDomain
    return GlusterStorageDomain(GlusterStorageDomain.findDomainPath(sdUUID))
  File "/usr/share/vdsm/storage/glusterSD.py", line 28, in findDomainPath
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'd15d79f7-97a3-4737-8b85-7c34851fedde',)
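
For context, the exception above comes from VDSM's gluster domain lookup: findDomainPath() scans the mounted glusterSD paths for a directory named after the domain UUID and raises StorageDomainDoesNotExist when none matches, regardless of what the engine believes about the domain. A minimal, self-contained sketch of that lookup (approximated from the glusterSD.py frames in the traceback; the repository path and details are assumptions):

import glob
import os

# Assumption: the default VDSM storage repository mount root.
STORAGE_REPO = "/rhev/data-center/mnt"

class StorageDomainDoesNotExist(Exception):
    pass

def find_gluster_domain_path(sd_uuid):
    """Approximate glusterSD.findDomainPath(): look for a directory named
    after the storage domain UUID under each mounted gluster volume."""
    for mount_path in glob.glob(os.path.join(STORAGE_REPO, "glusterSD", "*")):
        candidate = os.path.join(mount_path, sd_uuid)
        if os.path.isdir(candidate):
            return candidate
    # Nothing on this host's gluster mounts contains the domain, so the
    # monitor logs the "domain ... not found" error seen above even though
    # the engine still shows the domain as active.
    raise StorageDomainDoesNotExist(sd_uuid)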

Comment 3 Arik 2015-10-12 11:37:23 UTC

*** This bug has been marked as a duplicate of bug 1269948 ***

