Bug 1987684 - qede NIC: PVP case throughput drops to 0 with jumbo frame packet sizes
Summary: qede NIC: PVP case throughput drops to 0 with jumbo frame packet sizes
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: openvswitch2.15
Version: FDP 19.E
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Aaron Conole
QA Contact: liting
Depends On:
Reported: 2021-11-05 13:15 UTC by Timothy Redaelli
Modified: 2021-11-05 13:15 UTC
CC: 3 users

Fixed In Version: openvswitch2.15-2.15.0-133.el9fdp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed:
Target Upstream Version:

Attachments (Terms of Use)

Description Timothy Redaelli 2021-11-05 13:15:09 UTC
+++ This bug was initially created as a clone of Bug #1735638 +++

Description of problem:
qede NIC: PVP case throughput drops to 0 with jumbo frame packet sizes

Version-Release number of selected component (if applicable):
[root@dell-per730-52 vswitchperf]# rpm -qa|grep openv

[root@dell-per730-52 vswitchperf]# rpm -qa|grep dpdk

[root@dell-per730-52 vswitchperf]# ethtool -i p4p1
driver: qede
firmware-version: mfw storm
bus-info: 0000:82:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: yes
[root@dell-per730-52 vswitchperf]# lspci -s  0000:82:00.0
82:00.0 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)

How reproducible:

Steps to Reproduce:
Running the vsperf pvp_tput case with jumbo frames on dell52 gives 0 Mpps. Detailed steps:
The Dell52 qede NIC is connected directly to the Dell53 XXV NIC. Dell53 is used as the TRex sender.

1. Bind the two ports to DPDK
driverctl -v set-override 0000:82:00.0 vfio-pci
driverctl -v set-override 0000:82:00.1 vfio-pci
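Before building the OVS topology, the driver override can be sanity-checked (a quick verification sketch, not part of the original report; assumes driverctl and lspci are installed):

```shell
# List active driverctl overrides; both PCI functions should show vfio-pci
driverctl list-overrides

# lspci -k reports the kernel driver actually bound to each device
lspci -ks 0000:82:00.0
lspci -ks 0000:82:00.1
```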
2. Build the OVS topology
/usr/bin/ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
/usr/bin/ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=4096,4096
/usr/bin/ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x2
/usr/bin/ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
/usr/bin/ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x80000008000000
/usr/bin/ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:82:00.0 options:n_rxq=1 mtu_request=2000
/usr/bin/ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:82:00.1 options:n_rxq=1 mtu_request=2000
/usr/bin/ovs-vsctl add-port br0 dpdkvhostuserclient0 -- set Interface dpdkvhostuserclient0 type=dpdkvhostuserclient -- set Interface dpdkvhostuserclient0 options:vhost-server-path=/var/run/openvswitch/dpdkvhostuserclient0 mtu_request=2000
/usr/bin/ovs-vsctl add-port br0 dpdkvhostuserclient1 -- set Interface dpdkvhostuserclient1 type=dpdkvhostuserclient -- set Interface dpdkvhostuserclient1 options:vhost-server-path=/var/run/openvswitch/dpdkvhostuserclient1 mtu_request=2000
/usr/bin/ovs-ofctl -O OpenFlow13 del-flows br0 
/usr/bin/ovs-ofctl -O OpenFlow13 add-flow br0 in_port=1,idle_timeout=0,action=output:3 
/usr/bin/ovs-ofctl -O OpenFlow13 add-flow br0 in_port=3,idle_timeout=0,action=output:1
/usr/bin/ovs-ofctl -O OpenFlow13 add-flow br0 in_port=4,idle_timeout=0,action=output:2
/usr/bin/ovs-ofctl -O OpenFlow13 add-flow br0 in_port=2,idle_timeout=0,action=output:4
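The flows above assume OpenFlow port numbers 1-4 map to dpdk0, dpdk1, dpdkvhostuserclient0, and dpdkvhostuserclient1 in that order. The mapping can be verified before trusting the flow rules (a suggested check; port numbering depends on add order):

```shell
# Show the OpenFlow port number assigned to each interface on br0
ovs-ofctl -O OpenFlow13 show br0

# Or query a single interface's ofport directly
ovs-vsctl get Interface dpdk0 ofport
```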

3. Use the following command to start the guest
sudo -E taskset -c 3,5,33 /usr/libexec/qemu-kvm -m 8192 -smp 3,sockets=3,cores=1,threads=1 -cpu host,migratable=off -drive if=ide,file=rhel7.6-vsperf-1Q-noviommu.qcow2 -boot c --enable-kvm -monitor unix:/tmp/vm0monitor,server,nowait -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc -nographic -vnc :0 -name Client0 -snapshot -net none -no-reboot -chardev socket,id=char0,path=/var/run/openvswitch/dpdkvhostuserclient0,server -netdev type=vhost-user,id=net1,chardev=char0,vhostforce,queues=1 -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=net1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,rx_queue_size=1024,mq=on,vectors=4 -chardev socket,id=char1,path=/var/run/openvswitch/dpdkvhostuserclient1,server -netdev type=vhost-user,id=net2,chardev=char1,vhostforce,queues=1 -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=net2,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,rx_queue_size=1024,mq=on,vectors=4

4. Inside the guest, start testpmd to forward packets
modprobe -r vfio
modprobe -r vfio_iommu_type1
modprobe vfio enable_unsafe_noiommu_mode=Y
modprobe vfio-pci
/usr/share/dpdk/usertools/ -b vfio-pci 00:03.0 00:04.0
/usr/bin/testpmd -l 0,1,2 -n 4 --socket-mem 1024 -- --burst=64 -i --txqflags=0xf00 --rxd=512 --txd=512 --disable-hw-vlan --nb-cores=2 --txq=1 --rxq=1 --max-pkt-len=2000 --forward-mode=io  --auto-start
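Since the guest relies on vfio's unsafe no-IOMMU mode, it may be worth confirming the module parameter actually took effect before starting testpmd (a quick check, not in the original report; uses the sysfs path exposed by the vfio module):

```shell
# Should print Y when unsafe no-IOMMU mode is enabled
cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
```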

5. Use TRex to send RFC 2544 traffic. The source and destination MACs are the MAC addresses of the two TRex ports.

Actual results:
Throughput is 0.
According to the flow statistics, dpdk0 did not receive any packets.
[root@dell-per730-52 ~]# ovs-ofctl dump-flows br0
 cookie=0x0, duration=53.014s, table=0, n_packets=0, n_bytes=0, in_port=dpdk0 actions=output:3
 cookie=0x0, duration=52.982s, table=0, n_packets=0, n_bytes=0, in_port=3 actions=output:dpdk0
 cookie=0x0, duration=52.950s, table=0, n_packets=0, n_bytes=0, in_port=4 actions=output:dpdk1
 cookie=0x0, duration=52.917s, table=0, n_packets=0, n_bytes=0, in_port=dpdk1 actions=output:4
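Since the failure is specific to jumbo frames, one diagnostic worth capturing (a suggestion, not part of the original report) is the effective MTU the DPDK port ended up with, plus any port-level rx errors:

```shell
# Compare the requested MTU against the MTU actually applied
ovs-vsctl get Interface dpdk0 mtu_request mtu

# Port-level counters, including rx_errors/rx_dropped
ovs-ofctl dump-ports br0 dpdk0

# Datapath view of all ports
ovs-appctl dpctl/show
```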

It works well with FDP 18.11 (openvswitch2.10-2.10.0-10.el7fdp).
FDP 19.A (openvswitch2.11-2.11.0-0.20190129gitd3a10db.el7fdp) does not work.
FDP 18.12 (openvswitch2.10-2.10.0-28.el7fdp.x86_64) also does not work.

Running the same case on FDP 18.11 (openvswitch2.10-2.10.0-10.el7fdp), the flows are as follows and throughput is 1.2 Mpps.
[root@dell-per730-52 vswitchperf]# ovs-ofctl dump-flows br0
 cookie=0x0, duration=45.323s, table=0, n_packets=16181, n_bytes=32297276, in_port=dpdk0 actions=output:3
 cookie=0x0, duration=45.286s, table=0, n_packets=16211, n_bytes=32357156, in_port=3 actions=output:dpdk0
 cookie=0x0, duration=45.248s, table=0, n_packets=16181, n_bytes=32297276, in_port=4 actions=output:dpdk1
 cookie=0x0, duration=45.211s, table=0, n_packets=16211, n_bytes=32357156, in_port=dpdk1 actions=output:4

Expected results:
The jumbo frame case should pass with nonzero throughput.

Additional info:
