author    | Jason Baron <jbaron@akamai.com>  | 2025-09-22 15:19:57 -0400
committer | Jakub Kicinski <kuba@kernel.org> | 2025-09-23 16:51:26 -0700
commit    | ca9f9cdc4de97d0221100b11224738416696163c (patch)
tree      | d9539c5bf85b24d21fcaaa25a32d23e392b4c12a
parent    | 16d93558e12a03488d59562343e944f27ff4b9f3 (diff)
net: allow alloc_skb_with_frags() to use MAX_SKB_FRAGS
Currently, alloc_skb_with_frags() will only fill (MAX_SKB_FRAGS - 1)
slots. I think it should use all MAX_SKB_FRAGS slots, as callers of
alloc_skb_with_frags() will size their allocation of frags based
on MAX_SKB_FRAGS.
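As an illustration, here is a minimal user-space sketch of the frag-filling
loop. The helper fill_frags() and the constant values are simplified
stand-ins for the kernel logic, assuming 4K pages and the default
MAX_SKB_FRAGS of 17; it only models how many bytes the old and new slot
checks let the loop place:

```c
#include <stdio.h>

#define PAGE_SIZE     4096UL   /* assumed 4K pages, as in the report below */
#define MAX_SKB_FRAGS 17UL     /* assumed default value; configs may differ */

/* Model of the frag-filling loop: returns how many bytes fit, or 0 when
 * the loop runs out of frag slots (the kernel path jumps to failure and
 * the allocation fails). */
static unsigned long fill_frags(unsigned long data_len, int order,
				unsigned long frag_limit)
{
	unsigned long placed = 0;
	unsigned long nr_frags = 0;

	while (data_len) {
		unsigned long chunk;

		if (nr_frags == frag_limit)	/* the check this patch changes */
			return 0;
		chunk = PAGE_SIZE << order;
		if (chunk > data_len)
			chunk = data_len;
		placed += chunk;
		data_len -= chunk;
		nr_frags++;
	}
	return placed;
}

int main(void)
{
	unsigned long data_len = 17 * PAGE_SIZE;	/* the 68K case described below */

	printf("old limit (MAX_SKB_FRAGS - 1): %lu bytes\n",
	       fill_frags(data_len, 0, MAX_SKB_FRAGS - 1));
	printf("new limit (MAX_SKB_FRAGS):     %lu bytes\n",
	       fill_frags(data_len, 0, MAX_SKB_FRAGS));
	return 0;
}
```

With the old limit the 68K request cannot be placed at all, which matches
the sendmsg() failures described below.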
This issue was discovered via a test patch that sets 'order' to 0
in alloc_skb_with_frags(), which effectively tests/simulates high
fragmentation. In this case sendmsg() on unix sockets will fail every
time for large allocations. If PAGE_SIZE is 4K, then data_len will
request 68K (17 pages), but in this case alloc_skb_with_frags() can
only allocate 64K (16 pages).
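Spelling out those numbers (again assuming 4K pages and the default
MAX_SKB_FRAGS of 17): the up-front size check in alloc_skb_with_frags()
admits up to MAX_SKB_FRAGS * PAGE_SIZE = 17 * 4K = 68K at order 0, while the
old loop condition gave up after filling 16 slots, i.e. 16 * 4K = 64K, so a
68K request passes the size check but then fails inside the loop.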
Fixes: 09c2c90705bb ("net: allow alloc_skb_with_frags() to allocate bigger packets")
Signed-off-by: Jason Baron <jbaron@akamai.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20250922191957.2855612-1-jbaron@akamai.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-rw-r--r-- | net/core/skbuff.c | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index ee0274417948..1c0279b9cb9f 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6667,7 +6667,7 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
 		return NULL;
 
 	while (data_len) {
-		if (nr_frags == MAX_SKB_FRAGS - 1)
+		if (nr_frags == MAX_SKB_FRAGS)
 			goto failure;
 		while (order && PAGE_ALIGN(data_len) < (PAGE_SIZE << order))
 			order--;