authorJesper Dangaard Brouer <hawk@kernel.org>2025-08-14 20:24:37 +0200
committerDaniel Borkmann <daniel@iogearbox.net>2025-08-15 11:08:08 +0200
commit2b986b9e917bc88f81aa1ed386af63b26c983f1d (patch)
treee01a0e2ffb427f495abc9b3ef29535e652cd9f88 /scripts/patch-kernel
parent8f5ae30d69d7543eee0d70083daf4de8fe15d585 (diff)
bpf, cpumap: Disable page_pool direct xdp_return need larger scope
When running an XDP bpf_prog on the remote CPU in cpumap code, we must disable the direct-return optimization that xdp_return can perform for mem_type page_pool. This optimization assumes code is still executing under the RX-NAPI of the original receiving CPU, which isn't true on the remote CPU.

The cpumap code already disabled this via the helpers xdp_set_return_frame_no_direct() and xdp_clear_return_frame_no_direct(), but the scope didn't include xdp_do_flush(). When doing XDP_REDIRECT towards e.g. devmap, this causes the function bq_xmit_all() to run with the direct-return optimization enabled, which can lead to hard-to-find bugs. The issue only happens when bq_xmit_all() cannot ndo_xdp_xmit all frames and then frees them via xdp_return_frame_rx_napi().

Fix by expanding the scope to include xdp_do_flush(). This was found by Dragos Tatulea.

Fixes: 11941f8a8536 ("bpf: cpumap: Implement generic cpumap")
Reported-by: Dragos Tatulea <dtatulea@nvidia.com>
Reported-by: Chris Arges <carges@cloudflare.com>
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Chris Arges <carges@cloudflare.com>
Link: https://patch.msgid.link/175519587755.3008742.1088294435150406835.stgit@firesoul
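The scope change described above can be sketched as follows. This is a simplified illustration of the intended ordering, not the verbatim patch; the function body and locals are elided, and only the helpers named in the commit message are shown:

```c
/* Sketch: the no-direct window must now cover xdp_do_flush(), because
 * flushing a redirect bulk queue can end up in bq_xmit_all(), which may
 * free undelivered frames via xdp_return_frame_rx_napi() while we are
 * running on the remote CPU, outside the original RX-NAPI context.
 */
static void cpu_map_run_sketch(void)
{
	xdp_set_return_frame_no_direct();

	/* ... run the XDP bpf_prog on the remote CPU; XDP_REDIRECT
	 * targets (e.g. devmap) are queued here ...
	 */

	xdp_do_flush();		/* may call bq_xmit_all() */

	/* Previously the clear happened before xdp_do_flush(), leaving
	 * the flush path with direct return wrongly enabled.
	 */
	xdp_clear_return_frame_no_direct();
}
```

The key point is simply that xdp_clear_return_frame_no_direct() moves after xdp_do_flush(), so every code path that can return page_pool-backed frames sees the no-direct flag.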
Diffstat (limited to 'scripts/patch-kernel')
0 files changed, 0 insertions, 0 deletions