XFRM pCPU RSS
Receive Side Scaling (RSS) support for IPsec/ESP
Receive Side Scaling (RSS) steers flows to different queues. The receiving NIC should be able to steer different flows, based on SPI, into separate queues so that a single receiving CPU does not get overwhelmed. We used a Mellanox ConnectX-4 to test. Some cards initially tested did not seem to support RSS for ESP flows, only for TCP and UDP. While figuring out RSS for these cards we tried a slightly different approach: ESP-in-UDP encapsulation. Together with the ESP-in-UDP GRO patches we could see the flows getting distributed on the receiver. Later, in November 2019, kernel version 5.5 added ESP support for RSS to the Mellanox mlx5 driver.
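A rough way to check whether a given NIC hashes ESP, and to force the ESP-in-UDP fallback described above; the interface name and the libreswan connection option are assumptions for illustration, not necessarily what was used in the original tests.

# Ask the driver which fields it currently hashes for ESP flows
ethtool -n eno2 rx-flow-hash esp4

# If ESP is not supported, ESP-in-UDP lets the normal UDP hash spread the
# flows instead. With libreswan this can be forced per connection, e.g.:
#   conn esp-in-udp-test
#       encapsulation=yes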
config-ntuple Commands
Enable GRO. Ideally you should be able to run the following:
ethtool -N <nic> rx-flow-hash esp4 sdfn
Another argument is that if the NIC is protocol-agnostic, the upper 16 bits of the SPI in the ESP packet are aligned with the UDP port numbers and should provide enough entropy for the hash.
ethtool -N eno2 rx-flow-hash udp4 sdfn
RSS should support ESP4, ESP6, and ESP-in-UDP for both IPv4 and IPv6.
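A minimal sketch of the corresponding commands, covering GRO and the IPv6 variants as well; the interface name is assumed, and whether the driver accepts these flow types varies.

# Confirm and enable GRO
ethtool -k eno2 | grep generic-receive-offload
ethtool -K eno2 gro on

# IPv6 counterparts of the commands above
ethtool -N eno2 rx-flow-hash esp6 sdfn
ethtool -N eno2 rx-flow-hash udp6 sdfn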
Marvell OcteonTX2 support
Mellanox support (maybe)
The NIC could be configured to steer the flow to a specific queue:
ethtool --config-ntuple enp3s0f0 flow-type esp4 src-ip 192.168.1.1 dst-ip 192.168.1.2 spi 0xffffffff action 4
ntuple filtering of a UDP flow
ethtool --config-ntuple <interface name> flow-type udp4 src-ip 192.168.1.1 dst-ip 192.168.10.2 src-port 2000 dst-port 2001 action 2 loc 33
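To confirm that an installed filter actually steers traffic to the intended queue, something along these lines should work; the interface name, counter names and filter location are examples only.

# List the installed ntuple filters
ethtool -u enp3s0f0

# Watch per-queue RX counters; the steered flow should only increment the
# queue named in the filter's "action" (counter names vary per driver)
ethtool -S enp3s0f0 | grep -i 'rx.*queue'

# Delete a filter by its location when finished
ethtool -U enp3s0f0 delete 33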
case ESP_V4_FLOW: return MLX5E_TT_IPV4_IPSEC_ESP;
Intel X710 (yes? in IAVF)
i40e_ethtool.c:
    case ESP_V4_FLOW:
    case ESP_V6_FLOW:
        /* Default is src/dest for IP, no matter the L4 hashing */
        cmd->data |= RXH_IP_SRC | RXH_IP_DST;
        break;
AWS ENA (not yet)
    case ESP_V4_FLOW:
    case ESP_V6_FLOW:
        return -EOPNOTSUPP;
The ENA driver documentation mentions support for the RSS indirection table; maybe we can use that together with ESP-in-UDP.
The default hashing is currently Toeplitz. Starting from ena driver v2.2.1 the driver supports changing the hash key and hash function as well as the indirection table itself. The support is only for instance types that end with "n", for example C5n instances. Please note that changing the indirection table is supported on all instance types.
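A sketch of how the indirection table and hash key could be inspected and changed with ethtool, assuming an ENA interface named eth0 and 8 RX queues:

# Show the current indirection table, hash key and hash function
ethtool -x eth0

# Spread the hash evenly across the first 8 RX queues
ethtool -X eth0 equal 8

# Or weight some queues more heavily than others
ethtool -X eth0 weight 2 2 1 1

# On instance types that allow it, change the hash function as well
ethtool -X eth0 hfunc toeplitz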
VMware RSS ESP : yes
The vSphere 6.7 release includes vmxnet3 version 4, which supports some new features. "RSS for ESP – RSS for encapsulating security payloads (ESP) is now available in the vmxnet3 v4 driver. Performance testing of this feature showed a 146% improvement in receive packets per second during a test that used IPSEC and four receive queues."
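A quick way to check, inside the guest, that the vmxnet3 v4 driver is in use and that ESP hashing is exposed; the interface name is assumed.

# Driver name/version as seen by the guest
ethtool -i ens192

# Query the ESP hash fields; an error here means ESP RSS is not available
ethtool -n ens192 rx-flow-hash esp4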
Marvell octeontx2-af RSS ESP : yes
https://lore.kernel.org/r/1611378552-13288-1-git-send-email-sundeep.lkml@gmail.com
https://lore.kernel.org/netdev/1611378552-13288-1-git-send-email-sundeep.lkml@gmail.com/
ethtool -U eth0 rx-flow-hash esp4 sdfn
ethtool -U eth0 rx-flow-hash ah4 sdfn
ethtool -U eth0 rx-flow-hash esp6 sdfn
Broadcom : no?
It seems it would hash only the IP addresses of an ESP flow.
More information on RSS/XDP
- [https://github.com/xdp-project/xdp-cpumap-tc#assign-cpus-to-rx-queues XDP cpumap]
- [https://lpc.events/event/11/contributions/939/attachments/771/1551/xdp-multi-buff.pdf XDP multi-buffer Jumbo/GRO/TSO support, Netdev 0x14, 2021] After this initial patch set in Linux 5.18, driver-specific Jumbo and GRO support is coming in for more drivers, such as ice and mlx5.
- [https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=169e77764adc041b1dacba84ea90516a895d43b2 Introduce XDP multi-buffer support]
- [https://developers.redhat.com/blog/2021/05/13/receive-side-scaling-rss-with-ebpf-and-cpumap Receive Side Scaling (RSS) with eBPF and CPUMAP, Lorenzo Bianconi, May 13, 2021]
Future research/ideas
- Test with SR-IOV and virtualisation (KVM): need systems with a NIC that supports SR-IOV and RSS for ESP, or at least UDP.
- Software RSS https://www.linux-kvm.org/page/Multiqueue
- Can the IKE daemon use other flow distribution methods based on SPI? DPDK?
- Another way of flow control? https://doc.dpdk.org/dts/test_plans/link_flowctrl_test_plan.html
- RSS/RPS/RFS (a software RPS fallback is sketched after this list)
- DPDK RSS support
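As a software fallback when the NIC itself cannot hash ESP, Receive Packet Steering (RPS) can fan incoming packets out to several CPUs. A minimal sketch, assuming eth0 with a single RX queue and CPUs 0-3 as targets:

# Let CPUs 0-3 (bitmask 0xf) process packets from RX queue 0
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

# Optional RFS tuning: global socket flow table and per-queue flow count
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt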