author:    Alexander Lobakin <aleksander.lobakin@intel.com>   2025-02-25 18:17:47 +0100
committer: Paolo Abeni <pabeni@redhat.com>                    2025-02-27 14:03:14 +0100
commit:    859d6acd94cc4ad65e9eb3fa2a9815a19e5b35cf (patch)
tree:      e5791f0f4236c0f68b4cba10094bad0f4ef3dfcb /net/unix/af_unix.c
parent:    57efe762cd3c8796f8a4b410a578af8c8e99d22f (diff)
net: skbuff: introduce napi_skb_cache_get_bulk()
Add a function to get an array of skbs from the NAPI percpu cache.
It is intended as a drop-in replacement for
kmem_cache_alloc_bulk(skbuff_head_cache, GFP_ATOMIC) and
xdp_alloc_skb_bulk(GFP_ATOMIC). The difference (apart from the
requirement to call it only from BH context) is that it uses as many
NAPI cache entries for skbs as possible and allocates new ones only
when needed.
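As a rough illustration of the intended call pattern, a hedged sketch
of such a caller follows; the function name, batch size and the exact
prototype of the new helper (destination array plus count, returning
how many skbs were obtained) are assumptions based on the description
above, not code taken from this patch:

    /* Hypothetical BH-context caller (e.g. an Rx/XDP completion path)
     * needing a batch of skb heads; everything except the two allocators
     * mentioned above is made up for illustration.
     */
    #include <linux/skbuff.h>
    #include <linux/slab.h>

    #define SKB_BATCH 16

    static void example_get_skb_batch(void)
    {
            void *skbs[SKB_BATCH];
            u32 got;

            /* Before: bulk allocation straight from the slab:
             *   got = kmem_cache_alloc_bulk(skbuff_head_cache, GFP_ATOMIC,
             *                               SKB_BATCH, skbs);
             *
             * After: prefer the per-CPU NAPI cache, falling back to the
             * slab only when the cache cannot cover the request. Must be
             * called from BH context.
             */
            got = napi_skb_cache_get_bulk(skbs, SKB_BATCH);

            /* 'got' may be smaller than SKB_BATCH under memory pressure;
             * the caller must cope with a partial batch.
             */
    }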
The logic is as follows (see the sketch after the list):
* there are enough skbs in the cache: decache them and return them to
  the caller;
* not enough: try refilling the cache first. If there are now enough
  skbs, return;
* still not enough: try allocating skbs directly into the output array
  with %GFP_ZERO; maybe we'll be able to get some. If there are now
  enough, return;
* still not enough: return as many as we were able to obtain.
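A minimal sketch of that decision flow, under the same assumptions
about the prototype; this is illustrative code written as if it lived
in net/core/skbuff.c next to the existing napi_alloc_cache and
skbuff_head_cache objects, not the implementation added by the patch
(details such as locking, KASAN handling and zeroing of the decached
heads are omitted here):

    /* Illustrative sketch only -- not the actual implementation. */
    static u32 napi_skb_cache_get_bulk_sketch(void **skbs, u32 n)
    {
            struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
            u32 from_cache, direct = 0, i;

            /* Case 2: the cache is too short for the request, top it up
             * from the slab first.
             */
            if (nc->skb_count < n)
                    nc->skb_count += kmem_cache_alloc_bulk(skbuff_head_cache,
                                    GFP_ATOMIC | __GFP_NOWARN,
                                    NAPI_SKB_CACHE_SIZE - nc->skb_count,
                                    &nc->skb_cache[nc->skb_count]);

            /* Case 3: still short, allocate the remainder straight into
             * the output array, zeroed, right after the slots the cache
             * will cover.
             */
            if (nc->skb_count < n)
                    direct = kmem_cache_alloc_bulk(skbuff_head_cache,
                                    GFP_ATOMIC | __GFP_ZERO | __GFP_NOWARN,
                                    n - nc->skb_count,
                                    &skbs[nc->skb_count]);

            /* Cases 1 and 4: hand out whatever the per-CPU cache can
             * cover, which may be all of @n or only part of it.
             */
            from_cache = min(n, nc->skb_count);
            nc->skb_count -= from_cache;
            for (i = 0; i < from_cache; i++)
                    skbs[i] = nc->skb_cache[nc->skb_count + i];

            return from_cache + direct;
    }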
Most of the time, when called from the NAPI polling loop, the first
case applies; sometimes (rarely) the second. The third and fourth
occur only under heavy memory pressure.
It can save a significant number of CPU cycles if GRO and/or Tx
completion work (anything that ends up in napi_skb_cache_put()) is
happening on this CPU.
Tested-by: Daniel Xu <dxu@dxuuu.xyz>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>