author    | Eric Dumazet <edumazet@google.com> | 2025-09-28 08:49:32 +0000
committer | Paolo Abeni <pabeni@redhat.com>    | 2025-09-30 15:45:52 +0200
commit    | 9c94ae6bb0b2895024b6e29fcc1cbec968b4776a (patch)
tree      | efd18d0b0c67ebbe5785cddf16d14a00a2f48463 /net/core/skbuff.c
parent    | 2c0592bd5cadfcd5337eafa07e3145a097cfd880 (diff)
net: make softnet_data.defer_count an atomic
This is preparation work to remove the softnet_data.defer_lock,
as it is contended on hosts with a large number of cores.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250928084934.3266948-2-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Diffstat (limited to 'net/core/skbuff.c')
-rw-r--r-- | net/core/skbuff.c | 6
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 618afd59afff..16cd357d62a6 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -7202,14 +7202,12 @@ nodefer:	kfree_skb_napi_cache(skb);
 
 	sd = &per_cpu(softnet_data, cpu);
 	defer_max = READ_ONCE(net_hotdata.sysctl_skb_defer_max);
-	if (READ_ONCE(sd->defer_count) >= defer_max)
+	if (atomic_read(&sd->defer_count) >= defer_max)
 		goto nodefer;
 
 	spin_lock_bh(&sd->defer_lock);
 	/* Send an IPI every time queue reaches half capacity. */
-	kick = sd->defer_count == (defer_max >> 1);
-	/* Paired with the READ_ONCE() few lines above */
-	WRITE_ONCE(sd->defer_count, sd->defer_count + 1);
+	kick = (atomic_inc_return(&sd->defer_count) - 1) == (defer_max >> 1);
 
 	skb->next = sd->defer_list;
 	/* Paired with READ_ONCE() in skb_defer_free_flush() */
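To see the counting trick in isolation, here is a minimal user-space C sketch of the pattern the patch introduces; it is not kernel code. C11 <stdatomic.h> stands in for the kernel's atomic_t helpers: atomic_fetch_add() returns the pre-increment value, just as atomic_inc_return(...) - 1 does in the patch, so exactly one concurrent caller observes the half-capacity threshold and sets kick. Only defer_count, defer_max and kick come from the patch; defer_one_skb() and the rest are made up for illustration.

/* Hypothetical sketch of the lock-free defer counting; names other
 * than defer_count/defer_max are illustrative, not kernel API. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int defer_count;

static bool defer_one_skb(int defer_max)
{
	/* Racy early test, like the patch's atomic_read(): a stale value
	 * only means we occasionally defer one skb too many. */
	if (atomic_load(&defer_count) >= defer_max)
		return false;		/* "goto nodefer": free it now */

	/* atomic_fetch_add() returns the old value, mirroring
	 * atomic_inc_return(&sd->defer_count) - 1, so exactly one
	 * increment matches defer_max / 2 and requests the IPI kick. */
	return atomic_fetch_add(&defer_count, 1) == (defer_max >> 1);
}

int main(void)
{
	for (int i = 0; i < 6; i++)
		printf("skb %d -> kick=%d\n", i, defer_one_skb(4));
	return 0;
}

With defer_max = 4, only the third call (old count 2 == defer_max >> 1) reports kick=1; because the read-modify-write is a single atomic operation, that remains true no matter how the calls interleave across threads, which is what lets the follow-up patch drop defer_lock around the counter update.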