Bug 1040606
Summary: [ARM] internal error: process exited while connecting to monitor: Unable to find CPU definition

Product: Fedora
Component: libvirt
Version: 20
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Hardware: arm
OS: Linux
Reporter: Richard W.M. Jones <rjones>
Assignee: Libvirt Maintainers <libvirt-maint>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: berrange, clalancette, crobinso, dennis, ehabkost, itamar, jdenemar, jforbes, kchamart, laine, libvirt-maint, veillard, virt-maint
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-05-31 18:46:56 UTC
Bug Blocks: 910269
Description
Richard W.M. Jones
2013-12-11 17:00:01 UTC
Created attachment 835378 [details]
libguestfs-test-tool output
Created attachment 835379 [details]
guestfs-rtn3b8x5jd79wrfo.log
libvirt XML produced by libguestfs:

```xml
<?xml version="1.0"?>
<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>guestfs-rtn3b8x5jd79wrfo</name>
  <memory unit="MiB">500</memory>
  <currentMemory unit="MiB">500</currentMemory>
  <vcpu>1</vcpu>
  <clock offset="utc">
    <timer name="kvmclock" present="yes"/>
  </clock>
  <os>
    <type machine="vexpress-a15">hvm</type>
    <kernel>/home/rjones/d/libguestfs/tmp/.guestfs-1000/kernel.22734</kernel>
    <dtb>/home/rjones/d/libguestfs/tmp/.guestfs-1000/dtb.22734</dtb>
    <initrd>/home/rjones/d/libguestfs/tmp/.guestfs-1000/initrd.22734</initrd>
    <cmdline>panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=screen</cmdline>
  </os>
  <seclabel type="none"/>
  <on_reboot>destroy</on_reboot>
  <devices>
    <emulator>/home/rjones/d/qemu/qemu.wrapper</emulator>
    <controller type="scsi" index="0" model="virtio-scsi"/>
    <disk device="disk" type="file">
      <source file="/home/rjones/d/libguestfs/tmp/libguestfsbzEufb/scratch.1"/>
      <target dev="sda" bus="scsi"/>
      <driver name="qemu" type="raw" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>
    <disk type="file" device="disk">
      <source file="/home/rjones/d/libguestfs/tmp/libguestfsbzEufb/snapshot2"/>
      <target dev="sdb" bus="scsi"/>
      <driver name="qemu" type="qcow2" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="1" unit="0"/>
      <shareable/>
    </disk>
    <serial type="unix">
      <source mode="connect" path="/home/rjones/d/libguestfs/tmp/libguestfsbzEufb/console.sock"/>
      <target port="0"/>
    </serial>
    <channel type="unix">
      <source mode="connect" path="/home/rjones/d/libguestfs/tmp/libguestfsbzEufb/guestfsd.sock"/>
      <target type="virtio" name="org.libguestfs.channel.0"/>
    </channel>
  </devices>
  <qemu:commandline>
    <qemu:env name="TMPDIR" value="/home/rjones/d/libguestfs/tmp"/>
  </qemu:commandline>
</domain>
```

For comparison, when libguestfs runs qemu directly (which works) it generates this command line:

```
[01376ms] /home/rjones/d/qemu/qemu.wrapper \
    -global virtio-blk-device.scsi=off \
    -nodefconfig \
    -nodefaults \
    -nographic \
    -M vexpress-a15 \
    -machine accel=kvm:tcg \
    -m 500 \
    -no-reboot \
    -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1000/kernel.22934 \
    -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1000/dtb.22934 \
    -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1000/initrd.22934 \
    -device virtio-scsi-device,id=scsi \
    -drive file=/home/rjones/d/libguestfs/tmp/libguestfsJJNpfh/scratch.1,cache=unsafe,format=raw,id=hd0,if=none \
    -device scsi-hd,drive=hd0 \
    -drive file=/home/rjones/d/libguestfs/tmp/.guestfs-1000/root.22934,snapshot=on,id=appliance,cache=unsafe,if=none \
    -device scsi-hd,drive=appliance \
    -device virtio-serial-device \
    -serial stdio \
    -chardev socket,path=/home/rjones/d/libguestfs/tmp/libguestfsJJNpfh/guestfsd.sock,id=channel0 \
    -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
    -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=screen'
```

If I were to guess, I'd say it has to do with:

    -cpu qemu64,+kvmclock

which is surely bogus for two reasons: (1) it should be a 32-bit CPU; (2) it shouldn't be the "qemu*" CPU at all. Ideally you wouldn't pass the -cpu option at all, but if you had to, I'd prefer -cpu host.

The problem is that kvmclock and similar features advertised to a guest via CPUID cannot be used without a CPU model. Thus, when we need to emit +kvmclock, we need to come up with some default CPU model to use (unless a model is explicitly specified in the XML). Apparently the code doing this is pretty x86-specific. On the other hand, does kvmclock even make sense in the ARM world? I guess you can just remove '<timer name="kvmclock" present="yes"/>' from the domain XML and it should work.
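In XML terms, the workaround suggested above amounts to dropping the timer element from the domain definition. A minimal sketch of what the <clock> fragment shown earlier would change to (only this fragment of the full domain XML is shown):

```xml
<!-- Before: requests kvmclock explicitly, which forces libvirt
     to pick a default CPU model via an x86-specific code path. -->
<clock offset="utc">
  <timer name="kvmclock" present="yes"/>
</clock>

<!-- After (workaround): no timer element, so libvirt does not
     need to emit a -cpu option at all. -->
<clock offset="utc"/>
```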
I'm not keen to remove kvmclock permanently because we've had so many clock instability problems in the past and kvmclock is a partial solution to them. Also kvmclock is not arch-specific, and applies to KVM guests on all architectures. Nevertheless, as a test I removed kvmclock from the XML that is generated by libguestfs, and that does fix the problem.

I see; I guessed that you don't want kvmclock on arm from the QEMU command line libguestfs uses when it runs QEMU directly (comment 4). In any case, we need to check how to properly enable kvmclock for ARM, which likely means we need to know what CPU is used by QEMU on ARM if none is specified explicitly. Eduardo, do you happen to know the answer?

(In reply to Jiri Denemark from comment #8)
> I see, I guessed you don't want kvmclock on arm from the QEMU commandline
> libguestfs uses when it runs QEMU directly (comment 4).

That's a bug. We couldn't work out how to enable kvmclock without specifying a CPU either, hence ...

```c
#if defined(__i386__) || defined (__x86_64__)
          /* -cpu host only works if KVM is available. */
          if (has_kvm) {
            /* Specify the host CPU for speed, and kvmclock for stability. */
            ADD_CMDLINE ("-cpu");
            ADD_CMDLINE ("host,+kvmclock");
          } else {
            /* Specify default CPU for speed, and kvmclock for stability. */
            ADD_CMDLINE ("-cpu");
            ADD_CMDLINE_PRINTF ("qemu%d,+kvmclock", SIZEOF_LONG*8);
          }
#endif
```

But we would like to use kvmclock on ARM if there was a way to specify it.

kvmclock doesn't exist for arm AFAICT; there are only x86 references in the kernel and qemu trees. ARM CPU defaults depend on the machine type, so libvirt shouldn't get in the business of duplicating that data if it can avoid it. Also, kvmclock is enabled by default when x86 KVM is used on qemu > 0.13, and cannot be enabled when using TCG, so forcing kvmclock in the XML doesn't accomplish anything; the XML bit is only useful for disabling kvmclock. This was pointed out to me internally a few months ago, and I just looked at qemu.git to confirm.
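For illustration, the arch-gating logic in the libguestfs C snippet quoted above can be restated in Python. This is only a sketch of the decision, not libguestfs code; the function name `choose_cpu_args` and the `sizeof_long` parameter are hypothetical stand-ins for the C preprocessor conditionals and SIZEOF_LONG:

```python
def choose_cpu_args(arch, has_kvm, sizeof_long=8):
    """Sketch of the C logic: emit a -cpu option only on x86."""
    if arch not in ("i386", "x86_64"):
        # Non-x86 (e.g. arm): pass no -cpu option at all, since
        # there is no known way to request kvmclock there.
        return []
    if has_kvm:
        # Host CPU for speed, kvmclock for clock stability.
        return ["-cpu", "host,+kvmclock"]
    # TCG fallback: default qemu32/qemu64 model plus kvmclock,
    # mirroring the SIZEOF_LONG*8 trick in the C version.
    return ["-cpu", "qemu%d,+kvmclock" % (sizeof_long * 8)]
```

This makes the reported symptom easy to see: on a 64-bit build without KVM the x86 branch produces `-cpu qemu64,+kvmclock`, whereas on arm the whole option is skipped.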
I'd still like to hear from Eduardo (or Glauber!) whether kvmclock is something that is planned for ARM or whether it is unnecessary for some reason.

I have no idea if it would make sense to have something like kvmclock on ARM, and I don't know if such a thing already exists, but if/when it gets written: 1) it may have a different name than "kvmclock", depending on how the author chose to name it; 2) it may be specified in a completely different way on the QEMU command line, depending on how the specification says it should be advertised to the guest OS. kvmclock is enabled using "-cpu" flags on x86 because it is advertised using CPUID bits. On other platforms, it may be advertised in a completely different way. In other words, everything after the CPU model name in the "-cpu" option is very arch-specific. If one day "-cpu ...,+kvmclock" ended up working on both x86 and ARM, I would call it luck.

I have KVM working with a rawhide kernel, qemu and libvirt. The big issues I hit were that "-cpu host" and virtio were needed for storage. This is the virt-install command that I ran:

```
[root@cubietruck01 ~]# virt-install --cpu=host -v --machine virt \
    --network bridge=br0 \
    --disk /var/lib/libvirt/images/rawhide.qcow2,bus=virtio \
    --os-variant fedora20 \
    -l http://kojipkgs.fedoraproject.org/mash/rawhide-20140430/rawhide/armhfp/os/ \
    --vcpus 2 --memory 1024 -n rawhide \
    -x "console=ttyAMA,115200 ks=http://kojipkgs.fedoraproject.org//work/tasks/4707/6804707/koji-image-f21-build-6804707.ks"
```

Post-install I need to pull the kernel and initramfs out. We really do need a u-boot binary to use; not 100% sure which one would be best.

This message is a reminder that Fedora 20 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 20. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '20'.
Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version. Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 20 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to this bug being closed, as described in the policy above. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.

The original report isn't relevant anymore; libvirt ARM support has come a long way. So closing.