path: root/rust/kernel/workqueue.rs
author    Paolo Abeni <pabeni@redhat.com>    2025-09-18 10:17:12 +0200
committer Paolo Abeni <pabeni@redhat.com>    2025-09-18 10:17:13 +0200
commit    ce463e4357570096bc0b348d872e0205826a04ce (patch)
tree      4d458742d2336794c02bb18ed606ac776fd5d27c /rust/kernel/workqueue.rs
parent    b127e355f1af1e4a635ed8f78cb0d11c916613cf (diff)
parent    6471658dc66c670580a7616e75f51b52917e7883 (diff)
Merge branch 'udp-increase-rx-performance-under-stress'
Eric Dumazet says:

====================
udp: increase RX performance under stress

This series is the result of careful analysis of the UDP stack, to
optimize the receive side, especially when one or several UDP sockets
are receiving a DDoS attack.

I have measured a 47% increase of throughput when using IPv6 UDP packets
with 120 bytes of payload, under DDoS, with 16 cpus receiving traffic
targeting a single socket.

Even after adding NUMA-aware drop counters, we were suffering from false
sharing between packet producers and the consumer.

1) The first four patches shrink struct ipv6_pinfo and reorganize its
   fields for a more efficient TX path. They should also benefit TCP,
   by removing one cache line miss.

2) Patches 5 & 6 change how sk->sk_rmem_alloc is read and updated. They
   reduce spinlock contention on the busylock.

3) Patches 7 & 8 change the ordering of sk_backlog (including
   sk_rmem_alloc), sk_receive_queue and sk_drop_counters for better
   data locality.

4) Patch 9 removes the hashed array of spinlocks in favor of a
   per-udp-socket one.

5) The final patch adopts skb_attempt_defer_free(), after TCP got good
   results with it.
====================

Link: https://patch.msgid.link/20250916160951.541279-1-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Diffstat (limited to 'rust/kernel/workqueue.rs')
0 files changed, 0 insertions, 0 deletions