Bug 1434276 - virsh vcpupin shows the wrong cpu affinity details
Summary: virsh vcpupin shows the wrong cpu affinity details
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: ppc64le
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Daniel Henrique Barboza
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: PPCTracker
Reported: 2017-03-21 07:10 UTC by Satheesh Rajendran
Modified: 2020-08-07 13:41 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-07 13:41:45 UTC
Embargoed:




Links:
IBM Linux Technology Center 174632 (last updated 2019-08-05 08:58:32 UTC)

Description Satheesh Rajendran 2017-03-21 07:10:46 UTC
Description of problem:
Boot the VM and check the virsh vcpupin output; it shows affinity with offline host CPUs as well.

Version-Release number of selected component (if applicable):
# virsh version
Compiled against library: libvirt 3.2.0
Using library: libvirt 3.2.0
Using API: QEMU 3.2.0
Running hypervisor: QEMU 2.8.50

libvirt compiled against
commit a6d681485ff85e27859583a5c20e1630c5cf8352
Author: John Ferlan <jferlan>
Date:   Tue Mar 7 16:10:38 2017 -0500

qemu compiled against
commit ebedf0f9cd46b617df331eecc857c379d574ac62
Author: Marek Vasut <marex>
Date:   Fri Mar 17 22:06:27 2017 +0100


How reproducible:
Always

Steps to Reproduce:
1. # virsh start vm1
Domain vm1 started

2. # lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                160
On-line CPU(s) list:   0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152
Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63,65-71,73-79,81-87,89-95,97-103,105-111,113-119,121-127,129-135,137-143,145-151,153-159
Thread(s) per core:    1
Core(s) per socket:    5
Socket(s):             4
NUMA node(s):          4
Model:                 2.1 (pvr 004b 0201)
Model name:            POWER8E (raw), altivec supported
L1d cache:             64K
L1i cache:             32K
L2 cache:              512K
L3 cache:              8192K
NUMA node0 CPU(s):     0,8,16,24,32
NUMA node1 CPU(s):     40,48,56,64,72
NUMA node16 CPU(s):    80,88,96,104,112
NUMA node17 CPU(s):    120,128,136,144,152

3. # virsh vcpuinfo vm1
VCPU:           0
CPU:            48
State:          running
CPU time:       27.3s
CPU Affinity:   y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------y-------
                  -----------[OK]
# virsh vcpupin vm1
VCPU: CPU Affinity
----------------------------------
   0: 0-159----------------------------------[NOK]


Actual results:
0: 0-159

Expected results:
0: 0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152

Additional info:
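For reference, the "CPU Affinity" string printed by virsh vcpuinfo is a per-host-CPU bitmap: 'y' at position N means the vCPU may run on host CPU N, '-' means it may not. A minimal Python sketch (hypothetical helper name, not libvirt code) of how that bitmap maps to the CPU list in the expected results above:

```python
def affinity_to_cpulist(affinity):
    """Convert a vcpuinfo-style affinity string ('y' = allowed,
    '-' = not allowed) into a list of host CPU indices."""
    return [i for i, c in enumerate(affinity) if c == 'y']

# The bitmap reported above: 'y' at every 8th position, 160 CPUs total.
bitmap = ("y" + "-" * 7) * 20
print(affinity_to_cpulist(bitmap))  # 0, 8, 16, ... up to 152
```

This is why vcpuinfo's output is marked [OK] above: its bitmap only flags the online CPUs, while vcpupin's "0-159" claims all 160.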

Comment 1 Peter Krempa 2017-03-21 07:24:34 UTC
Since you did not configure any specific vcpu pinning, the vcpu threads are allowed to run on all the host vcpus from libvirt's point of view.

Returning the value expected by you would indicate that there's a pinning configured which is not true.

Comment 2 Satheesh Rajendran 2017-03-21 07:41:46 UTC
(In reply to Peter Krempa from comment #1)
> Since you did not configure any specific vcpu pinning, the vcpu threads are
> allowed to run on all the host vcpus from libvirt's point of view.
> 
> Returning the value expected by you would indicate that there's a pinning
> configured which is not true.

I partially agree, but a vCPU cannot run on an offlined CPU, so the initial value needs to be correct. The output shows an invalid initial range to the user, which is wrong, whereas the vcpuinfo API output is as expected.

#virsh vcpupin vm1 0 1
error: Invalid value '1' for 'cpuset.cpus': Invalid argument --------------OK

but the initial output range of affinity "0-159" contradicts that.
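To make the contradiction concrete: the "0-159" and "0,8,16,..." strings are just two different CPU sets run through the same range-list formatting. A sketch of that formatting (illustrative only, not libvirt's actual implementation) shows that the disagreement is entirely about which set of CPUs is fed in:

```python
def format_cpuset(cpus):
    """Collapse a sorted iterable of CPU indices into a virsh-style
    range string, e.g. [0, 1, 2, 4] -> '0-2,4'."""
    cpus = sorted(cpus)
    parts = []
    i = 0
    while i < len(cpus):
        j = i
        # Extend the run while indices are consecutive.
        while j + 1 < len(cpus) and cpus[j + 1] == cpus[j] + 1:
            j += 1
        parts.append(str(cpus[i]) if i == j else f"{cpus[i]}-{cpus[j]}")
        i = j + 1
    return ",".join(parts)

print(format_cpuset(range(160)))        # '0-159' - what vcpupin printed
print(format_cpuset(range(0, 160, 8)))  # the online-only list the reporter expected
```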

Comment 3 srwx4096 2017-03-28 13:35:21 UTC
This seems to me yet another design decision issue, of course it can be 'fixed', but do we want it fixed?

Again, I am just trying to contribute some code. 

Dan

Comment 4 Nitesh Konkar 2017-04-17 08:01:02 UTC
(In reply to srwx4096 from comment #3)
> This seems to me yet another design decision issue, of course it can be
> 'fixed', but do we want it fixed?
> 
> Again, I am just trying to contribute some code. 
> 
> Dan

Hi Dan,

I think this should be fixed because as pointed out by Viktor, CPU hotplug is very common on Linux running on z Systems and also widely used by customers.

Reference: https://www.spinics.net/linux/fedora/libvir/msg140443.html

Hence, if a host CPU is offline but virsh vcpupin/emulatorpin shows it as available for pinning, this misleads the user (or other layers) into attempting an invalid pinning that will fail.

-Nitesh
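The failure mode Nitesh describes is easy to hit programmatically: a consumer that trusts the reported "0-159" affinity will try to pin to an offline CPU and get an EINVAL from cgroups, as in the `virsh vcpupin vm1 0 1` error above. On Linux the authoritative online set comes from /sys/devices/system/cpu/online, which uses the kernel's comma/range CPU-list syntax. A small sketch (hypothetical helper name) of parsing that syntax and pre-validating a pin target:

```python
def parse_cpulist(s):
    """Parse a kernel-style CPU list string such as '0,8,16,24' or
    '1-7,9-15' (the format of /sys/devices/system/cpu/online)
    into a set of CPU indices."""
    cpus = set()
    for part in s.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# With the host from this report, pinning vCPU 0 to host CPU 1
# could be rejected up front instead of failing in cgroups:
online = parse_cpulist("0,8,16,24")
print(1 in online)  # False
```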

Comment 5 Satheesh Rajendran 2017-05-04 05:29:50 UTC
Any update on this?

Comment 6 IBM Bug Proxy 2019-07-30 19:10:21 UTC
------- Comment From scheloh.com 2019-07-30 15:08 EDT-------
https://www.redhat.com/archives/libvir-list/2019-July/msg00747.html

Comment 7 IBM Bug Proxy 2020-04-22 14:51:37 UTC
------- Comment From lagarcia.com 2020-04-22 10:42 EDT-------
It seems this one never made it upstream. Moving it back to the team backlog.

Comment 8 Laine Stump 2020-04-22 16:43:53 UTC
The best way to get a forgotten patch noticed is to rebase it to current upstream, then repost it to the mailing list with --subject-prefix="libvirt PATCH v2"

Beyond that, libvirt is deprecating the use of bugzilla for upstream bugs. In the future all upstream bug tracking will be done using gitlab's issue tracker:

  https://www.redhat.com/archives/libvir-list/2020-April/msg00782.html

Dan has been slowly going through the existing bugs in bugzilla and closing them or creating new records in the gitlab tracker as appropriate.

Comment 9 Michal Privoznik 2020-07-08 17:43:46 UTC
I've just merged patches upstream:

2020c6af8a conf, qemu: consider available CPUs in vcpupin/emulatorpin output
42036650c6 virhostcpu.c: introduce virHostCPUGetAvailableCPUsBitmap()
bc07020511 virhostcpu.c: refactor virHostCPUParseCountLinux()
9d31433483 virsh-domain.c: modernize cmdVcpuinfo()
a3a628f54c virsh-domain.c: modernize virshVcpuinfoInactive()
de6a40f01f virhostcpu.c: use g_autoptr in virHostCPUGetMap()
42bf2a7573 qemu_driver.c: use g_autoptr in qemuDomainGetEmulatorPinInfo()

v6.5.0-69-g2020c6af8a
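In spirit, the headline commit (2020c6af8a, "consider available CPUs in vcpupin/emulatorpin output") intersects the configured affinity with the CPUs actually available on the host before printing. The real implementation is C inside libvirt operating on virBitmap objects; the following is only an illustrative Python sketch of that intersection:

```python
def effective_affinity(pinned, online):
    """Report the affinity that is actually usable: the intersection
    of the configured pinning with the online host CPUs."""
    return sorted(set(pinned) & set(online))

# No explicit pinning means the domain defaults to "all host CPUs"
# (0-159 on the reporter's machine), but only every 8th CPU is online,
# so the displayed affinity shrinks to the online set.
default_pin = range(160)
online = range(0, 160, 8)
print(effective_affinity(default_pin, online))
```

With this behavior, the fixed output in comment 12 below tracks the host state: 0,8,16,24 while SMT is off, and 0-31 once all threads are online.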

Comment 10 IBM Bug Proxy 2020-08-05 11:12:15 UTC
------- Comment From lagarcia.com 2020-08-05 07:08 EDT-------
Fedora Rawhide now has libvirt 6.6, which includes these patches. Could you please verify and close this bug if everything is OK, Satheesh?

Comment 11 IBM Bug Proxy 2020-08-05 12:11:58 UTC
------- Comment From satheera.com 2020-08-05 08:05 EDT-------
(In reply to comment #15)
> Fedora Rawhide now has libvirt 6.6, which includes these patches. Could you
> please verify and close this bug if everything is OK, Satheesh?

Sure, will have it tested.

Regards,
-Satheesh

Comment 12 IBM Bug Proxy 2020-08-07 12:20:43 UTC
------- Comment From satheera.com 2020-08-07 08:11 EDT-------
Tested with the Fedora Rawhide libvirt version and found the issue is fixed; this bug can be closed.

# lscpu
Architecture:                    ppc64le
Byte Order:                      Little Endian
CPU(s):                          32
On-line CPU(s) list:             0,8,16,24
Off-line CPU(s) list:            1-7,9-15,17-23,25-31
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Model:                           2.3 (pvr 004e 1203)
Model name:                      POWER9 (architected), altivec supported
...

# virsh start f31
Domain f31 started

# virsh vcpupin f31
VCPU   CPU Affinity
----------------------
0      0,8,16,24
1      0,8,16,24
2      0,8,16,24
3      0,8,16,24
4      0,8,16,24
5      0,8,16,24
6      0,8,16,24
7      0,8,16,24

# lscpu
Architecture:                    ppc64le
Byte Order:                      Little Endian
CPU(s):                          32
On-line CPU(s) list:             0-31
Thread(s) per core:              8
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Model:                           2.3 (pvr 004e 1203)
Model name:                      POWER9 (architected), altivec supported
...

# virsh vcpupin f31
VCPU   CPU Affinity
----------------------
0      0-31
1      0-31
2      0-31
3      0-31
4      0-31
5      0-31
6      0-31
7      0-31

# rpm -qa|grep libvirt
libvirt-bash-completion-6.6.0-1.fc33.ppc64le
libvirt-libs-6.6.0-1.fc33.ppc64le
libvirt-daemon-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-core-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-network-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-nwfilter-6.6.0-1.fc33.ppc64le
libvirt-daemon-config-nwfilter-6.6.0-1.fc33.ppc64le
libvirt-daemon-config-network-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-lxc-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-disk-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-gluster-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-iscsi-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-iscsi-direct-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-mpath-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-scsi-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-sheepdog-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-zfs-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-nodedev-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-qemu-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-secret-6.6.0-1.fc33.ppc64le
python3-libvirt-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-logical-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-interface-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-rbd-6.6.0-1.fc33.ppc64le
libvirt-daemon-driver-storage-6.6.0-1.fc33.ppc64le
libvirt-client-6.6.0-1.fc33.ppc64le
libvirt-6.6.0-1.fc33.ppc64le
libvirt-daemon-kvm-6.6.0-1.fc33.ppc64le
libvirt-admin-6.6.0-1.fc33.ppc64le
libvirt-daemon-qemu-6.6.0-1.fc33.ppc64le

Regards,
-Satheesh

