2025-03-06  net: airoha: Fix lan4 support in airoha_qdma_get_gdm_port()  (Lorenzo Bianconi)
EN7581 SoC supports lan{1,4} ports on MT7530 DSA switch. Fix lan4 reported value in airoha_qdma_get_gdm_port routine. Fixes: 23020f0493270 ("net: airoha: Introduce ethernet support for EN7581 SoC") Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-fix-lan4-v1-1-832417da4bb5@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  Merge branch 'increase-maximum-mtu-to-9k-for-airoha-en7581-soc'  (Jakub Kicinski)
Lorenzo Bianconi says: ==================== Increase maximum MTU to 9k for Airoha EN7581 SoC EN7581 SoC supports 9k maximum MTU. Enable the reception of Scatter-Gather (SG) frames for Airoha EN7581. Introduce airoha_dev_change_mtu callback. ==================== Link: https://patch.msgid.link/20250304-airoha-eth-rx-sg-v1-0-283ebc61120e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: airoha: Increase max mtu to 9k  (Lorenzo Bianconi)
EN7581 SoC supports 9k maximum MTU. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-rx-sg-v1-4-283ebc61120e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: airoha: Introduce airoha_dev_change_mtu callback  (Lorenzo Bianconi)
Add airoha_dev_change_mtu callback to update the MTU of a running device. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-rx-sg-v1-3-283ebc61120e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
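For context, a minimal sketch of the shape such an .ndo_change_mtu handler usually takes is below. This is not the airoha driver code: the register offset, priv layout and frame-length formula are illustrative assumptions only.

    #include <linux/etherdevice.h>
    #include <linux/io.h>
    #include <linux/netdevice.h>

    /* hypothetical register offset, not the real REG_GDM_LEN_CFG value */
    #define EXAMPLE_REG_LEN_CFG 0x0018

    struct example_priv {
        void __iomem *regs;
    };

    static int example_dev_change_mtu(struct net_device *dev, int new_mtu)
    {
        struct example_priv *priv = netdev_priv(dev);
        u32 frame_len = new_mtu + ETH_HLEN + ETH_FCS_LEN;

        /* The core has already validated new_mtu against dev->min_mtu/max_mtu.
         * Reprogram the maximum frame length the MAC will accept, then
         * publish the new MTU.
         */
        writel(frame_len, priv->regs + EXAMPLE_REG_LEN_CFG);
        WRITE_ONCE(dev->mtu, new_mtu);

        return 0;
    }

    static const struct net_device_ops example_netdev_ops = {
        .ndo_change_mtu = example_dev_change_mtu,
    };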
2025-03-06  net: airoha: Enable Rx Scatter-Gather  (Lorenzo Bianconi)
EN7581 SoC can receive 9k frames. Enable the reception of Scatter-Gather (SG) frames. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-rx-sg-v1-2-283ebc61120e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: airoha: Move min/max packet len configuration in airoha_dev_open()  (Lorenzo Bianconi)
In order to align max allowed packet size to the configured mtu, move REG_GDM_LEN_CFG configuration in airoha_dev_open routine. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250304-airoha-eth-rx-sg-v1-1-283ebc61120e@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: stmmac: simplify phylink_suspend() and phylink_resume() calls  (Russell King (Oracle))
Currently, the calls to phylink's suspend and resume functions are inside overly complex tests, and boil down to:

    if (device_may_wakeup(priv->device) && priv->plat->pmt) {
        call phylink
    } else {
        call phylink
        if (device_may_wakeup(priv->device))
            do something else
    }

This results in phylink always being called, possibly with differing arguments for phylink_suspend(). Simplify this code, noting that each site is slightly different due to the order in which phylink is called and the "something else".

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/E1tpQL1-005St4-Hn@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
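Roughly, the boiled-down logic quoted above can be read as the following sketch. It is illustrative only (the priv layout and the PHY wake-up helper are made up), not the stmmac code before or after this patch.

    #include <linux/phylink.h>
    #include <linux/pm_wakeup.h>

    struct example_priv {
        struct device *device;
        struct phylink *phylink;
        bool plat_pmt;          /* "MAC-based PMT wake-up is available" */
    };

    static void example_arm_phy_wakeup(struct example_priv *priv)
    {
        /* hypothetical stand-in for the "do something else" branch */
    }

    static void example_suspend(struct example_priv *priv)
    {
        if (device_may_wakeup(priv->device) && priv->plat_pmt) {
            /* MAC stays powered for wake-on-LAN */
            phylink_suspend(priv->phylink, true);
        } else {
            phylink_suspend(priv->phylink, false);
            if (device_may_wakeup(priv->device))
                example_arm_phy_wakeup(priv);
        }
    }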
2025-03-06  net: stmmac: avoid shadowing global buf_sz  (Russell King (Oracle))
stmmac_rx() declares a local variable named "buf_sz" but there is also a global variable for a module parameter which is called the same. To avoid confusion, rename the local variable. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Furong Xu <0x1207@gmail.com> Link: https://patch.msgid.link/E1tpswi-005U6C-Py@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  selftests: net: bpf_offload: add 'libbpf_global' to ignored maps  (Jakub Kicinski)
After installing pahole on the CI image we have a new map created by libbpf. Ignore it otherwise we see: Exception: Time out waiting for map counts to stabilize want 2, have 3 Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250304233204.1139251-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  selftests: net: fix error message in bpf_offload  (Jakub Kicinski)
We hit the following exception on timeout; nmaps is never set:

    Test bpftool bound info reporting (own ns)...
    Traceback (most recent call last):
      File "/home/virtme/testing-1/tools/testing/selftests/net/./bpf_offload.py", line 1128, in <module>
        check_dev_info(False, "")
      File "/home/virtme/testing-1/tools/testing/selftests/net/./bpf_offload.py", line 583, in check_dev_info
        maps = bpftool_map_list_wait(expected=2, ns=ns)
      File "/home/virtme/testing-1/tools/testing/selftests/net/./bpf_offload.py", line 215, in bpftool_map_list_wait
        raise Exception("Time out waiting for map counts to stabilize want %d, have %d" % (expected, nmaps))
    NameError: name 'nmaps' is not defined

Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250304233204.1139251-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  tcp: clamp window like before the cleanup  (Matthieu Baerts (NGI0))
A recent cleanup changed the behaviour of tcp_set_window_clamp(). This looks unintentional, and affects MPTCP selftests, e.g. some tests re-establishing a connection after a disconnect are now unstable.

Before the cleanup, this operation was done:

    new_rcv_ssthresh = min(tp->rcv_wnd, new_window_clamp);
    tp->rcv_ssthresh = max(new_rcv_ssthresh, tp->rcv_ssthresh);

The cleanup used the 'clamp' macro which takes 3 arguments -- value, lowest, and highest -- and returns a value between the lowest and the highest allowable values. This then assumes ...

    lowest (rcv_ssthresh) <= highest (rcv_wnd)

... which doesn't seem to be always the case here according to the MPTCP selftests, even when running them without MPTCP, but only TCP.

For example, when we have ...

    rcv_wnd < rcv_ssthresh < new_rcv_ssthresh

... before the cleanup, the rcv_ssthresh was not changed, while after the cleanup, it is lowered down to rcv_wnd (highest).

During a simple test with TCP, here are the values I observed:

    new_window_clamp (val)  rcv_ssthresh (lo)     rcv_wnd (hi)
      117760 (out)               65495        <       65536
      128512 (out)              109595        >       80256  => lo > hi
     1184975 (out)              328987        <      329088
      113664 (out)               65483        <       65536
      117760 (out)              110968        <      110976
      129024 (out)              116527        >      109696  => lo > hi

Here, we can see that it is not that rare to have rcv_ssthresh (lo) higher than rcv_wnd (hi), so the behaviour differs when the clamp() macro is used, even without MPTCP.

Note: new_window_clamp is always out of range (rcv_ssthresh < rcv_wnd) here, which seems to be generally the case in my tests with small connections.

I then suggest reverting this part, so as not to change the behaviour.

Fixes: 863a952eb79a ("tcp: tcp_set_window_clamp() cleanup")
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/551
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Tested-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20250305-net-next-fix-tcp-win-clamp-v1-1-12afb705d34e@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
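The difference is easy to reproduce outside the kernel. Below is a standalone C demo using one of the lo > hi samples from the table above; clamp_u() mirrors how the kernel macro composes min and max, it is not the kernel code itself.

    #include <stdio.h>

    static unsigned int min_u(unsigned int a, unsigned int b) { return a < b ? a : b; }
    static unsigned int max_u(unsigned int a, unsigned int b) { return a > b ? a : b; }

    /* same composition as the kernel's clamp(val, lo, hi); only meaningful when lo <= hi */
    static unsigned int clamp_u(unsigned int val, unsigned int lo, unsigned int hi)
    {
        return min_u(max_u(val, lo), hi);
    }

    int main(void)
    {
        /* one of the "lo > hi" samples from the commit message */
        unsigned int new_window_clamp = 129024;
        unsigned int rcv_ssthresh = 116527;   /* lo */
        unsigned int rcv_wnd = 109696;        /* hi */

        /* pre-cleanup behaviour: rcv_ssthresh can only grow */
        unsigned int pre = max_u(min_u(rcv_wnd, new_window_clamp), rcv_ssthresh);

        /* clamp()-based behaviour: rcv_ssthresh is pulled down to hi */
        unsigned int post = clamp_u(new_window_clamp, rcv_ssthresh, rcv_wnd);

        printf("pre-cleanup: %u, clamp-based: %u\n", pre, post);  /* 116527 vs 109696 */
        return 0;
    }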
2025-03-06  net: stmmac: mostly remove "buf_sz"  (Russell King (Oracle))
The "buf_sz" parameter is not used in the stmmac driver - there is one place where the value of buf_sz is validated, and two places where it is written. It is otherwise unused. Remove these accesses. However, leave the module parameter in place as removing it could cause module load to fail, breaking user setups. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Furong Xu <0x1207@gmail.com> Link: https://patch.msgid.link/E1tpswn-005U6I-TU@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  ptp: ocp: Remove redundant check in _signal_summary_show  (Ivan Abramov)
In the function _signal_summary_show(), there is a NULL-check for &bp->signal[nr], which cannot actually be NULL. Therefore, this redundant check can be removed. Signed-off-by: Ivan Abramov <i.abramov@mt-integration.ru> Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Link: https://patch.msgid.link/20250305092520.25817-1-i.abramov@mt-integration.ru Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  Merge branch 'net-stmmac-dwc-qos-add-fsd-eqos-support'  (Jakub Kicinski)
Swathi says:

====================
net: stmmac: dwc-qos: Add FSD EQoS support

The FSD platform has two instances of the EQoS IP: one is in the FSYS0 block and another one is in the PERIC block. This patch series adds the required DT binding and platform driver specific changes for the same.
====================

Link: https://patch.msgid.link/20250305091246.106626-1-swathi.ks@samsung.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: stmmac: dwc-qos: Add FSD EQoS support  (Swathi K S)
The FSD SoC contains two instances of the Synopsys DWC Ethernet QoS IP core. The binding that it uses is slightly different from existing ones because of the integration (clocks, resets).

Signed-off-by: Swathi K S <swathi.ks@samsung.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Link: https://patch.msgid.link/20250305091246.106626-3-swathi.ks@samsung.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  dt-bindings: net: Add FSD EQoS device tree bindings  (Swathi K S)
Add FSD Ethernet compatible in Synopsys dt-bindings document. Add FSD Ethernet YAML schema to enable the DT validation. Signed-off-by: Pankaj Dubey <pankaj.dubey@samsung.com> Signed-off-by: Ravi Patel <ravi.patel@samsung.com> Signed-off-by: Swathi K S <swathi.ks@samsung.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://patch.msgid.link/20250305091246.106626-2-swathi.ks@samsung.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  Merge branch 'tcp-even-faster-connect-under-stress'  (Jakub Kicinski)
Eric Dumazet says:

====================
tcp: even faster connect() under stress

This is a followup on the prior series, "tcp: scale connect() under pressure".

Now spinlocks are no longer in the picture, we see a very high cost of the inet6_ehashfn() function.

In this series (of 2), I change how lport contributes to inet6_ehashfn() to ensure better cache locality and call inet6_ehashfn() only once per connect() system call.

This brings an additional 229 % increase of performance for "neper/tcp_crr -6 -T 200 -F 30000" stress test, while greatly improving latency metrics.

Before:
    latency_min=0.014131929
    latency_max=17.895073144
    latency_mean=0.505675853
    latency_stddev=2.125164772
    num_samples=307884
    throughput=139866.80

After:
    latency_min=0.003041375
    latency_max=7.056589232
    latency_mean=0.141075048
    latency_stddev=0.526900516
    num_samples=312996
    throughput=320677.21
====================

Link: https://patch.msgid.link/20250305034550.879255-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  inet: call inet6_ehashfn() once from inet6_hash_connect()  (Eric Dumazet)
inet6_ehashfn() being called from __inet6_check_established() has a big impact on performance, as shown in the Tested section.

After prior patch, we can compute the hash for port 0 from inet6_hash_connect(), and derive each hash in __inet_hash_connect() from this initial hash:

    hash(saddr, lport, daddr, dport) == hash(saddr, 0, daddr, dport) + lport

Apply the same principle for __inet_check_established(), although inet_ehashfn() has a smaller cost.

Tested:

Server: ulimit -n 40000; neper/tcp_crr -T 200 -F 30000 -6 --nolog
Client: ulimit -n 40000; neper/tcp_crr -T 200 -F 30000 -6 --nolog -c -H server

Before this patch:

    utime_start=0.286131
    utime_end=4.378886
    stime_start=11.952556
    stime_end=1991.655533
    num_transactions=1446830
    latency_min=0.001061085
    latency_max=12.075275028
    latency_mean=0.376375302
    latency_stddev=1.361969596
    num_samples=306383
    throughput=151866.56

perf top:

     50.01%  [kernel]  [k] __inet6_check_established
     20.65%  [kernel]  [k] __inet_hash_connect
     15.81%  [kernel]  [k] inet6_ehashfn
      2.92%  [kernel]  [k] rcu_all_qs
      2.34%  [kernel]  [k] __cond_resched
      0.50%  [kernel]  [k] _raw_spin_lock
      0.34%  [kernel]  [k] sched_balance_trigger
      0.24%  [kernel]  [k] queued_spin_lock_slowpath

After this patch:

    utime_start=0.315047
    utime_end=9.257617
    stime_start=7.041489
    stime_end=1923.688387
    num_transactions=3057968
    latency_min=0.003041375
    latency_max=7.056589232
    latency_mean=0.141075048     # Better latency metrics
    latency_stddev=0.526900516
    num_samples=312996
    throughput=320677.21         # 111 % increase, and 229 % for the series

perf top: inet6_ehashfn is no longer seen.

     39.67%  [kernel]  [k] __inet_hash_connect
     37.06%  [kernel]  [k] __inet6_check_established
      4.79%  [kernel]  [k] rcu_all_qs
      3.82%  [kernel]  [k] __cond_resched
      1.76%  [kernel]  [k] sched_balance_domains
      0.82%  [kernel]  [k] _raw_spin_lock
      0.81%  [kernel]  [k] sched_balance_rq
      0.81%  [kernel]  [k] sched_balance_trigger
      0.76%  [kernel]  [k] queued_spin_lock_slowpath

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Tested-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Link: https://patch.msgid.link/20250305034550.879255-3-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
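The connect()-time structure this enables can be sketched in plain C. hash_base() is only a placeholder for the port-0 hash and the free-bucket check is faked, so this shows just the "compute once, derive by addition" idea, not the kernel code.

    #include <stdint.h>
    #include <stdio.h>

    /* stand-in for inet6_ehashfn(saddr, 0, daddr, dport), computed once */
    static uint32_t hash_base(void)
    {
        return 0x12345678u;    /* placeholder value for the demo */
    }

    /* fake "is this 4-tuple usable?" check: succeeds on every 7th port */
    static int bucket_is_free(uint32_t hash, uint16_t port)
    {
        (void)hash;
        return (port % 7) == 0;
    }

    int main(void)
    {
        uint32_t hash0 = hash_base();          /* one full hash per connect() */

        for (uint32_t port = 32768; port < 32832; port++) {
            uint32_t hash = hash0 + port;      /* derived: no rehash per candidate */

            if (bucket_is_free(hash, (uint16_t)port)) {
                printf("picked port %u, bucket hash %u\n", port, hash);
                break;
            }
        }
        return 0;
    }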
2025-03-06  inet: change lport contribution to inet_ehashfn() and inet6_ehashfn()  (Eric Dumazet)
In order to speedup __inet_hash_connect(), we want to ensure hash values for <source address, port X, destination address, destination port> are not randomly spread, but monotonically increasing.

Goal is to allow __inet_hash_connect() to derive the hash value of a candidate 4-tuple with a single addition in the following patch in the series.

Given:

    hash_0     = inet_ehashfn(saddr, 0, daddr, dport)
    hash_sport = inet_ehashfn(saddr, sport, daddr, dport)

Then (hash_sport == hash_0 + sport) for all sport values.

As far as I know, there is no security implication with this change.

After this patch, when __inet_hash_connect() has to try XXXX candidates, the hash table buckets are contiguous and packed, allowing a better use of cpu caches and hardware prefetchers.

Tested:

Server: ulimit -n 40000; neper/tcp_crr -T 200 -F 30000 -6 --nolog
Client: ulimit -n 40000; neper/tcp_crr -T 200 -F 30000 -6 --nolog -c -H server

Before this patch:

    utime_start=0.271607
    utime_end=3.847111
    stime_start=18.407684
    stime_end=1997.485557
    num_transactions=1350742
    latency_min=0.014131929
    latency_max=17.895073144
    latency_mean=0.505675853
    latency_stddev=2.125164772
    num_samples=307884
    throughput=139866.80

perf top on client:

     56.86%  [kernel]  [k] __inet6_check_established
     17.96%  [kernel]  [k] __inet_hash_connect
     13.88%  [kernel]  [k] inet6_ehashfn
      2.52%  [kernel]  [k] rcu_all_qs
      2.01%  [kernel]  [k] __cond_resched
      0.41%  [kernel]  [k] _raw_spin_lock

After this patch:

    utime_start=0.286131
    utime_end=4.378886
    stime_start=11.952556
    stime_end=1991.655533
    num_transactions=1446830
    latency_min=0.001061085
    latency_max=12.075275028
    latency_mean=0.376375302
    latency_stddev=1.361969596
    num_samples=306383
    throughput=151866.56

perf top:

     50.01%  [kernel]  [k] __inet6_check_established
     20.65%  [kernel]  [k] __inet_hash_connect
     15.81%  [kernel]  [k] inet6_ehashfn
      2.92%  [kernel]  [k] rcu_all_qs
      2.34%  [kernel]  [k] __cond_resched
      0.50%  [kernel]  [k] _raw_spin_lock
      0.34%  [kernel]  [k] sched_balance_trigger
      0.24%  [kernel]  [k] queued_spin_lock_slowpath

There is indeed an increase of throughput and reduction of latency.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Tested-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Link: https://patch.msgid.link/20250305034550.879255-2-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
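The property in isolation looks like the sketch below; the mixing function is a placeholder, not the kernel's secret-keyed inet6_ehashfn().

    #include <assert.h>
    #include <stdint.h>

    /* placeholder mix over the constant parts of the tuple (saddr, daddr, dport) */
    static uint32_t mix_tuple_base(uint32_t saddr_hash, uint32_t daddr_hash, uint16_t dport)
    {
        return saddr_hash * 0x9e3779b1u ^ daddr_hash * 0x85ebca77u ^ dport;
    }

    /* ehashfn(saddr, lport, daddr, dport) == ehashfn(saddr, 0, daddr, dport) + lport */
    static uint32_t example_ehashfn(uint32_t saddr_hash, uint16_t lport,
                                    uint32_t daddr_hash, uint16_t dport)
    {
        return mix_tuple_base(saddr_hash, daddr_hash, dport) + lport;
    }

    int main(void)
    {
        uint32_t h0 = example_ehashfn(1, 0, 2, 80);

        /* consecutive local ports land in consecutive hash values, hence
         * adjacent buckets, which is the cache-locality argument above
         */
        for (uint16_t lport = 1; lport < 100; lport++)
            assert(example_ehashfn(1, lport, 2, 80) == h0 + lport);

        return 0;
    }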
2025-03-06  tcp: bring back NUMA dispersion in inet_ehash_locks_alloc()  (Eric Dumazet)
We have platforms with 6 NUMA nodes and 480 cpus.

inet_ehash_locks_alloc() currently allocates a single 64KB page to hold all ehash spinlocks. This adds more pressure on a single node.

Change inet_ehash_locks_alloc() to use vmalloc() to spread the spinlocks on all online nodes, driven by NUMA policies.

At boot time, NUMA policy is interleave=all, meaning that tcp_hashinfo.ehash_locks gets hash dispersion on all nodes.

Tested:

    lack5:~# grep inet_ehash_locks_alloc /proc/vmallocinfo
    0x00000000d9aec4d1-0x00000000a828b652   69632 inet_ehash_locks_alloc+0x90/0x100 pages=16 vmalloc N0=2 N1=3 N2=3 N3=3 N4=3 N5=2

    lack5:~# echo 8192 >/proc/sys/net/ipv4/tcp_child_ehash_entries
    lack5:~# numactl --interleave=all unshare -n bash -c "grep inet_ehash_locks_alloc /proc/vmallocinfo"
    0x000000004e99d30c-0x00000000763f3279   36864 inet_ehash_locks_alloc+0x90/0x100 pages=8 vmalloc N0=1 N1=2 N2=2 N3=1 N4=1 N5=1
    0x00000000d9aec4d1-0x00000000a828b652   69632 inet_ehash_locks_alloc+0x90/0x100 pages=16 vmalloc N0=2 N1=3 N2=3 N3=3 N4=3 N5=2

    lack5:~# numactl --interleave=0,5 unshare -n bash -c "grep inet_ehash_locks_alloc /proc/vmallocinfo"
    0x00000000fd73a33e-0x0000000004b9a177   36864 inet_ehash_locks_alloc+0x90/0x100 pages=8 vmalloc N0=4 N5=4
    0x00000000d9aec4d1-0x00000000a828b652   69632 inet_ehash_locks_alloc+0x90/0x100 pages=16 vmalloc N0=2 N1=3 N2=3 N3=3 N4=3 N5=2

    lack5:~# echo 1024 >/proc/sys/net/ipv4/tcp_child_ehash_entries
    lack5:~# numactl --interleave=all unshare -n bash -c "grep inet_ehash_locks_alloc /proc/vmallocinfo"
    0x00000000db07d7a2-0x00000000ad697d29    8192 inet_ehash_locks_alloc+0x90/0x100 pages=1 vmalloc N2=1
    0x00000000d9aec4d1-0x00000000a828b652   69632 inet_ehash_locks_alloc+0x90/0x100 pages=16 vmalloc N0=2 N1=3 N2=3 N3=3 N4=3 N5=2

Signed-off-by: Eric Dumazet <edumazet@google.com>
Tested-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250305130550.1865988-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
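A sketch of the allocation pattern, not the actual inet_ehash_locks_alloc() (the real function also derives the lock count from the hash size and keeps other allocation strategies); the point is that vmalloc()'s page-by-page backing lets the NUMA interleave policy spread the lock array across nodes.

    #include <linux/overflow.h>
    #include <linux/spinlock.h>
    #include <linux/vmalloc.h>

    static spinlock_t *example_alloc_bucket_locks(unsigned int nlocks)
    {
        spinlock_t *locks;
        unsigned int i;

        /* backed by individual pages, placed according to the current
         * NUMA memory policy (interleave=all at boot time)
         */
        locks = vmalloc(array_size(nlocks, sizeof(*locks)));
        if (!locks)
            return NULL;

        for (i = 0; i < nlocks; i++)
            spin_lock_init(&locks[i]);

        return locks;
    }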
2025-03-06  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Cross-merge networking fixes after downstream PR (net-6.14-rc6).

Conflicts:

net/ethtool/cabletest.c
  2bcf4772e45a ("net: ethtool: try to protect all callback with netdev instance lock")
  637399bf7e77 ("net: ethtool: netlink: Allow NULL nlattrs when getting a phy_device")

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  Merge branch 'net-hold-netdev-instance-lock-during-ndo-operations'  (Jakub Kicinski)
Stanislav Fomichev says: ==================== net: Hold netdev instance lock during ndo operations As the gradual purging of rtnl continues, start grabbing netdev instance lock in more places so we can get to the state where most paths are working without rtnl. Start with requiring the drivers that use shaper api (and later queue mgmt api) to work with both rtnl and netdev instance lock. Eventually we might attempt to drop rtnl. This mostly affects iavf, gve, bnxt and netdev sim (as the drivers that implement shaper/queue mgmt) so those drivers are converted in the process. call_netdevice_notifiers locking is very inconsistent and might need a separate follow up. Some notified events are covered by the instance lock, some are not, which might complicate the driver expectations. Reviewed-by: Eric Dumazet <edumazet@google.com> ==================== Link: https://patch.msgid.link/20250305163732.2766420-1-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  eth: bnxt: remove most dependencies on RTNL  (Stanislav Fomichev)
Only devlink and sriov paths are grabbing rtnl explicitly. The rest is covered by netdev instance lock which the core now grabs, so there is no need to manage rtnl in most places anymore. On the core side we can now try to drop rtnl in some places (do_setlink for example) for the drivers that signal non-rtnl mode (TBD). Boot-tested and with `ethtool -L eth1 combined 24` to trigger reset. Cc: Saeed Mahameed <saeed@kernel.org> Reviewed-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250305163732.2766420-15-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  docs: net: document new locking reality  (Stanislav Fomichev)
Also clarify ndo_get_stats (that read and write paths can run concurrently) and mention only RCU. Cc: Saeed Mahameed <saeed@kernel.org> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250305163732.2766420-14-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: add option to request netdev instance lock  (Stanislav Fomichev)
Currently only the drivers that implement shaper or queue APIs are grabbing instance lock. Add an explicit opt-in for the drivers that want to grab the lock without implementing the above APIs.

There is a 3-byte hole after @up, use it:

    /* --- cacheline 47 boundary (3008 bytes) --- */
    u32           napi_defer_hard_irqs;  /* 3008     4 */
    bool          up;                    /* 3012     1 */

    /* XXX 3 bytes hole, try to pack */

    struct mutex  lock;                  /* 3016   144 */

    /* XXX last struct has 1 hole */

Cc: Saeed Mahameed <saeed@kernel.org>
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250305163732.2766420-13-sdf@fomichev.me
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: replace dev_addr_sem with netdev instance lock  (Stanislav Fomichev)
Lockdep reports possible circular dependency in [0]. Instead of fixing the ordering, replace global dev_addr_sem with netdev instance lock. Most of the paths that set/get mac are RTNL protected. Two places where it's not, convert to explicit locking:
- sysfs address_show
- dev_get_mac_address via dev_ioctl

0: https://netdev-3.bots.linux.dev/vmksft-forwarding-dbg/results/993321/24-router-bridge-1d-lag-sh/stderr

Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250305163732.2766420-12-sdf@fomichev.me
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: ethtool: try to protect all callback with netdev instance lock  (Jakub Kicinski)
Protect all ethtool callbacks and PHY related state with the netdev instance lock, for drivers which want / need to have their ops instance-locked. Basically take the lock everywhere we take rtnl_lock. It was tempting to take the lock in ethnl_ops_begin(), but turns out we actually nest those calls (when generating notifications). Tested-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Cc: Saeed Mahameed <saeed@kernel.org> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250305163732.2766420-11-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: hold netdev instance lock during ndo_bpf  (Stanislav Fomichev)
Cover the paths that come via bpf system call and XSK bind. Cc: Saeed Mahameed <saeed@kernel.org> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250305163732.2766420-10-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: hold netdev instance lock during sysfs operations  (Stanislav Fomichev)
Most of them are already covered by the converted dev_xxx APIs. Add the locking wrappers for the remaining ones. Cc: Saeed Mahameed <saeed@kernel.org> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250305163732.2766420-9-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: hold netdev instance lock during ioctl operations  (Stanislav Fomichev)
Convert all ndo_eth_ioctl invocations to dev_eth_ioctl which does the locking. Reflow some of the dev_siocxxx to drop else clause. Cc: Saeed Mahameed <saeed@kernel.org> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250305163732.2766420-8-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: hold netdev instance lock during rtnetlink operations  (Stanislav Fomichev)
To preserve the atomicity, hold the lock while applying multiple attributes. The major issue with a full conversion to the instance lock is software nesting devices (bonding/team/vrf/etc). Those devices call into the core stack for their lower (potentially real hw) devices.

To avoid explicitly wrapping all those places into instance lock/unlock, introduce new API boundaries:

- (some) existing dev_xxx calls are now considered "external" (to drivers) APIs and they transparently grab the instance lock if needed (dev_api.c)
- new netif_xxx calls are internal core stack API (naming is sketchy, I've tried netdev_xxx_locked per Jakub's suggestion, but it feels a bit verbose; but happy to get back to this naming scheme if this is the preference)

This avoids touching most of the existing ioctl/sysfs/drivers paths.

Note the special handling of ndo_xxx_slave operations: I exploit the fact that none of the drivers that call these functions need/use instance lock. At the same time, they use dev_xxx APIs, so the lower device has to be unlocked.

Changes in unregister_netdevice_many_notify (to protect dev->state with instance lock) trigger lockdep - the loop over close_list (mostly from cleanup_net) introduces spurious ordering issues. netdev_lock_cmp_fn has a justification on why it's ok to suppress for now.

Cc: Saeed Mahameed <saeed@kernel.org>
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250305163732.2766420-7-sdf@fomichev.me
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
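The split can be pictured with a hypothetical operation. dev_set_foo()/netif_set_foo() do not exist; they only illustrate the boundary, with netdev_lock_ops()/netdev_unlock_ops() standing for the conditional instance-lock helpers mentioned in the qdisc patch further down.

    #include <linux/netdevice.h>

    /* "internal" core-stack API: caller already holds the instance lock */
    int netif_set_foo(struct net_device *dev, int val);

    /* "external" (dev_api.c style) wrapper: safe to call from drivers and
     * ioctl/sysfs paths; grabs the lock only if the device requires it
     */
    int dev_set_foo(struct net_device *dev, int val)
    {
        int ret;

        netdev_lock_ops(dev);
        ret = netif_set_foo(dev, val);
        netdev_unlock_ops(dev);

        return ret;
    }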
2025-03-06  net: hold netdev instance lock during queue operations  (Stanislav Fomichev)
For the drivers that use queue management API, switch to the mode where core stack holds the netdev instance lock. This affects the following drivers:
- bnxt
- gve
- netdevsim

Originally I locked only start/stop, but switched to holding the lock over all iterations to make them look atomic to the device (feels like it should be easier to reason about).

Reviewed-by: Eric Dumazet <edumazet@google.com>
Cc: Saeed Mahameed <saeed@kernel.org>
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250305163732.2766420-6-sdf@fomichev.me
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: hold netdev instance lock during qdisc ndo_setup_tc  (Stanislav Fomichev)
Qdisc operations that can lead to ndo_setup_tc might need to have an instance lock. Add netdev_lock_ops/netdev_unlock_ops invocations for all psched_rtnl_msg_handlers operations. Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jiri Pirko <jiri@resnulli.us> Cc: Saeed Mahameed <saeed@kernel.org> Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250305163732.2766420-5-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: sched: wrap doit/dumpit methods  (Stanislav Fomichev)
In preparation for grabbing netdev instance lock around qdisc operations, introduce tc_xxx wrappers that lookup netdev and call respective __tc_xxx helper to do the actual work. No functional changes. Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jiri Pirko <jiri@resnulli.us> Cc: Saeed Mahameed <saeed@kernel.org> Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250305163732.2766420-4-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: hold netdev instance lock during nft ndo_setup_tc  (Stanislav Fomichev)
Introduce new dev_setup_tc for nft ndo_setup_tc paths. Reviewed-by: Eric Dumazet <edumazet@google.com> Cc: Saeed Mahameed <saeed@kernel.org> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250305163732.2766420-3-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  net: hold netdev instance lock during ndo_open/ndo_stop  (Stanislav Fomichev)
For the drivers that use shaper API, switch to the mode where core stack holds the netdev lock. This affects two drivers:

* iavf - already grabs netdev lock in ndo_open/ndo_stop, so mostly remove these
* netdevsim - switch to _locked APIs to avoid deadlock

iavf_close diff is a bit confusing, the existing call looks like this:

    iavf_close() {
        netdev_lock()
        ..
        netdev_unlock()
        wait_event_timeout(down_waitqueue)
    }

I change it to the following:

    netdev_lock()
    iavf_close() {
        ..
        netdev_unlock()
        wait_event_timeout(down_waitqueue)
        netdev_lock() // reusing this lock call
    }
    netdev_unlock()

Since I'm reusing the existing netdev_lock call, it looks like I only add netdev_unlock.

Cc: Saeed Mahameed <saeed@kernel.org>
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250305163732.2766420-2-sdf@fomichev.me
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-03-06  Merge branch 'xdp-metadata-support-for-tun-driver'  (Martin KaFai Lau)
Marcus Wichelmann says: ==================== XDP metadata support for tun driver Hi all, this v5 of the patch series is very similar to v4, but rebased onto the bpf-next/net branch instead of bpf-next/master. Because the commit c047e0e0e435 ("selftests/bpf: Optionally open a dedicated namespace to run test in it") is not yet included in this branch, I changed the xdp_context_tuntap test to manually create a namespace to run the test in. Not so successful pipeline: https://github.com/kernel-patches/bpf/actions/runs/13682405154 The CI pipeline failed because of veristat changes in seemingly unrelated eBPF programs. I don't think this has to do with the changes from this patch series, but if it does, please let me know what I may have to do different to make the CI pass. ==================== Link: https://patch.msgid.link/20250305213438.3863922-1-marcus.wichelmann@hetzner-cloud.de Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2025-03-06  selftests/bpf: Fix file descriptor assertion in open_tuntap helper  (Marcus Wichelmann)
The open_tuntap helper function uses open() to get a file descriptor for /dev/net/tun. The open(2) manpage writes this about its return value: On success, open(), openat(), and creat() return the new file descriptor (a nonnegative integer). On error, -1 is returned and errno is set to indicate the error. This means that the fd > 0 assertion in the open_tuntap helper is incorrect and should rather check for fd >= 0. When running the BPF selftests locally, this incorrect assertion was not an issue, but the BPF kernel-patches CI failed because of this: open_tuntap:FAIL:open(/dev/net/tun) unexpected open(/dev/net/tun): actual 0 <= expected 0 Signed-off-by: Marcus Wichelmann <marcus.wichelmann@hetzner-cloud.de> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20250305213438.3863922-7-marcus.wichelmann@hetzner-cloud.de
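A tiny userspace reproduction of why the check must accept 0, assuming nothing about the selftest harness beyond it possibly running with fd 0 closed:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        close(0);    /* free fd 0, as a test harness might */

        int fd = open("/dev/null", O_RDWR);
        if (fd < 0) {    /* correct error check: only -1 signals failure */
            perror("open");
            return 1;
        }

        printf("open() returned fd %d\n", fd);    /* prints 0 here; "fd > 0" would wrongly fail */
        close(fd);
        return 0;
    }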
2025-03-06  selftests/bpf: Add test for XDP metadata support in tun driver  (Marcus Wichelmann)
Add a selftest that creates a tap device, attaches XDP and TC programs, writes a packet with a test payload into the tap device and checks the test result. This test ensures that the XDP metadata support in the tun driver is enabled and that the metadata size is correctly passed to the skb. See the previous commit ("selftests/bpf: refactor xdp_context_functional test and bpf program") for details about the test design. The test runs in its own network namespace. This provides some extra safety against conflicting interface names. Signed-off-by: Marcus Wichelmann <marcus.wichelmann@hetzner-cloud.de> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://patch.msgid.link/20250305213438.3863922-6-marcus.wichelmann@hetzner-cloud.de
2025-03-06  selftests/bpf: Refactor xdp_context_functional test and bpf program  (Marcus Wichelmann)
The existing XDP metadata test works by creating a veth pair and attaching XDP & TC programs that drop the packet when the condition of the test isn't fulfilled. The test then pings through the veth pair and succeeds when the ping comes through. While this test works great for a veth pair, it is hard to replicate for tap devices to test the XDP metadata support of them. A similar test for the tun driver would either involve logic to reply to the ping request, or would have to capture the packet to check if it was dropped or not. To make the testing of other drivers easier while still maximizing code reuse, this commit refactors the existing xdp_context_functional test to use a test_result map. Instead of conditionally passing or dropping the packet, the TC program is changed to copy the received metadata into the value of that single-entry array map. Tests can then verify that the map value matches the expectation. This testing logic is easy to adapt to other network drivers as the only remaining requirement is that there is some way to send a custom Ethernet packet through it that triggers the XDP & TC programs. The Ethernet header of that custom packet is all-zero, because it is not required to be valid for the test to work. The zero ethertype also helps to filter out packets that are not related to the test and would otherwise interfere with it. The payload of the Ethernet packet is used as the test data that is expected to be passed as metadata from the XDP to the TC program and written to the map. It has a fixed size of 32 bytes which is a reasonable size that should be supported by both drivers. Additional packet headers are not necessary for the test and were therefore skipped to keep the testing code short. This new testing methodology no longer requires the veth interfaces to have IP addresses assigned, therefore these were removed. Signed-off-by: Marcus Wichelmann <marcus.wichelmann@hetzner-cloud.de> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20250305213438.3863922-5-marcus.wichelmann@hetzner-cloud.de
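A hedged sketch of what such a TC program can look like (BPF C). The map name and the 32-byte size come from the description above, but the program itself is illustrative, not the selftest source.

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    #define META_SIZE 32

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u8[META_SIZE]);
    } test_result SEC(".maps");

    SEC("tc")
    int copy_meta_to_map(struct __sk_buff *skb)
    {
        void *data = (void *)(long)skb->data;
        void *meta = (void *)(long)skb->data_meta;
        __u8 val[META_SIZE];
        __u32 key = 0;

        /* bounds check required by the verifier; also skips packets
         * that carry no (or too little) metadata
         */
        if (meta + META_SIZE > data)
            return TC_ACT_OK;

        __builtin_memcpy(val, meta, META_SIZE);
        bpf_map_update_elem(&test_result, &key, val, BPF_ANY);

        return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";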
2025-03-06  selftests/bpf: Move open_tuntap to network helpers  (Marcus Wichelmann)
To test the XDP metadata functionality of the tun driver, it's necessary to create a new tap device first. A helper function for this already exists in lwt_helpers.h. Move it to the common network helpers header, so it can be reused in other tests. Signed-off-by: Marcus Wichelmann <marcus.wichelmann@hetzner-cloud.de> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://patch.msgid.link/20250305213438.3863922-4-marcus.wichelmann@hetzner-cloud.de
2025-03-06  net: tun: Enable transfer of XDP metadata to skb  (Marcus Wichelmann)
When the XDP metadata area is used, it is expected that the same metadata can also be accessed from TC, as described in the documentation of the bpf_xdp_adjust_meta helper function. In the tun driver, this was not yet implemented.

To make this work, the skb that is being built on XDP_PASS should know the current size of the metadata area. This is ensured by adding calls to skb_metadata_set. For the tun_xdp_one code path, an additional check is necessary to handle the case where the externally initialized xdp_buff has no metadata support (xdp->data_meta == xdp->data + 1).

More information about this feature can be found in the commit message of commit de8f3a83b0a0 ("bpf: add meta pointer for direct access").

Signed-off-by: Marcus Wichelmann <marcus.wichelmann@hetzner-cloud.de>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Link: https://patch.msgid.link/20250305213438.3863922-3-marcus.wichelmann@hetzner-cloud.de
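The mechanism boils down to something like the sketch below (illustrative, not the tun patch itself): measure the metadata area of the xdp_buff and carry it over to the skb.

    #include <linux/skbuff.h>
    #include <net/xdp.h>

    static void example_copy_xdp_meta_to_skb(struct sk_buff *skb,
                                             const struct xdp_buff *xdp)
    {
        unsigned int metasize = 0;

        /* externally initialized buffers may have metadata disabled
         * (data_meta == data + 1); treat that as "no metadata"
         */
        if (!xdp_data_meta_unsupported(xdp))
            metasize = xdp->data - xdp->data_meta;

        if (metasize)
            skb_metadata_set(skb, metasize);
    }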
2025-03-06  net: tun: Enable XDP metadata support  (Marcus Wichelmann)
Enable the support for the bpf_xdp_adjust_meta helper function for XDP buffers initialized by the tun driver. This allows reserving a metadata area that is useful to pass any information from one XDP program to another one, for example when using tail-calls.

Whether this helper function can be used in an XDP program depends on how the xdp_buff was initialized. Most net drivers initialize the xdp_buff in a way that allows bpf_xdp_adjust_meta to be used. In case of the tun driver, this is currently not the case.

There are two code paths in the tun driver that lead to a bpf_prog_run_xdp and where metadata support should be enabled:

1. tun_build_skb, which is called by tun_get_user and is used when writing packets from userspace into the device. In this case, the xdp_buff created in tun_build_skb has no support for bpf_xdp_adjust_meta and calls of that helper function result in ENOTSUPP. For this code path, it's sufficient to set the meta_valid argument of the xdp_prepare_buff call. The reserved headroom is large enough already.

2. tun_xdp_one, which is called by tun_sendmsg which again is called by other drivers (e.g. vhost_net). When the TUN_MSG_PTR mode is used, another driver may pass a batch of xdp_buffs to the tun driver. In this case, that other driver is the one initializing the xdp_buff. See commit 043d222f93ab ("tuntap: accept an array of XDP buffs through sendmsg()") for details. For now, the vhost_net driver is the only one using TUN_MSG_PTR and it already initializes the xdp_buffs with metadata support and sufficient headroom. But the tun driver disables it again, so the xdp_set_data_meta_invalid call has to be removed.

Signed-off-by: Marcus Wichelmann <marcus.wichelmann@hetzner-cloud.de>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Jason Wang <jasowang@redhat.com>
Link: https://patch.msgid.link/20250305213438.3863922-2-marcus.wichelmann@hetzner-cloud.de
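For the first path, the key detail is the final meta_valid argument of xdp_prepare_buff(). A minimal, driver-neutral sketch (the buffer/headroom variables are made up, only the call pattern matters):

    #include <net/xdp.h>

    static void example_build_xdp_buff(struct xdp_buff *xdp,
                                       struct xdp_rxq_info *rxq,
                                       void *hard_start, unsigned int headroom,
                                       unsigned int frame_sz, unsigned int len)
    {
        xdp_init_buff(xdp, frame_sz, rxq);
        /* meta_valid = true: data_meta starts at data, so the attached
         * program may grow a metadata area with bpf_xdp_adjust_meta()
         */
        xdp_prepare_buff(xdp, hard_start, headroom, len, true);
    }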
2025-03-06  Merge tag 'net-6.14-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds)
Pull networking fixes from Paolo Abeni:
 "Including fixes from bluetooth and wireless.

  Current release - new code bugs:

   - wifi: nl80211: disable multi-link reconfiguration

  Previous releases - regressions:

   - gso: fix ownership in __udp_gso_segment

   - wifi: iwlwifi:
       - fix A-MSDU TSO preparation
       - free pages allocated when failing to build A-MSDU

   - ipv6: fix dst ref loop in ila lwtunnel

   - mptcp: fix 'scheduling while atomic' in mptcp_pm_nl_append_new_local_addr

   - bluetooth: add check for mgmt_alloc_skb() in mgmt_device_connected()

   - ethtool: allow NULL nlattrs when getting a phy_device

   - eth: be2net: fix sleeping while atomic bugs in be_ndo_bridge_getlink

  Previous releases - always broken:

   - core: support TCP GSO case for a few missing flags

   - wifi: mac80211:
       - fix vendor-specific inheritance
       - cleanup sta TXQs on flush

   - llc: do not use skb_get() before dev_queue_xmit()

   - eth: ipa: enable checksum for IPA_ENDPOINT_AP_MODEM_{RX,TX} for v4.7"

* tag 'net-6.14-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (41 commits)
  net: ipv6: fix missing dst ref drop in ila lwtunnel
  net: ipv6: fix dst ref loop in ila lwtunnel
  mctp i3c: handle NULL header address
  net: dsa: mt7530: Fix traffic flooding for MMIO devices
  net-timestamp: support TCP GSO case for a few missing flags
  vlan: enforce underlying device type
  mptcp: fix 'scheduling while atomic' in mptcp_pm_nl_append_new_local_addr
  net: ethtool: netlink: Allow NULL nlattrs when getting a phy_device
  ppp: Fix KMSAN uninit-value warning with bpf
  net: ipa: Enable checksum for IPA_ENDPOINT_AP_MODEM_{RX,TX} for v4.7
  net: ipa: Fix QSB data for v4.7
  net: ipa: Fix v4.7 resource group names
  net: hns3: make sure ptp clock is unregister and freed if hclge_ptp_get_cycle returns an error
  wifi: nl80211: disable multi-link reconfiguration
  net: dsa: rtl8366rb: don't prompt users for LED control
  be2net: fix sleeping while atomic bugs in be_ndo_bridge_getlink
  llc: do not use skb_get() before dev_queue_xmit()
  wifi: cfg80211: regulatory: improve invalid hints checking
  caif_virtio: fix wrong pointer check in cfv_probe()
  net: gso: fix ownership in __udp_gso_segment
  ...
2025-03-06  Merge tag 'v6.14-rc5-smb3-fixes' of git://git.samba.org/ksmbd  (Linus Torvalds)
Pull smb fixes from Steve French:
 "Five SMB server fixes, two related client fixes, and minor MAINTAINERS update:

  - Two SMB3 lock fixes (including use after free and bug on fix)

  - Fix to race condition that can happen in processing IPC responses

  - Four ACL related fixes: one related to endianness of num_aces, two related fixes to the checks for num_aces (for both client and server), and one fixing missing check for num_subauths which can cause memory corruption

  - And minor update to email addresses in MAINTAINERS file"

* tag 'v6.14-rc5-smb3-fixes' of git://git.samba.org/ksmbd:
  cifs: fix incorrect validation for num_aces field of smb_acl
  ksmbd: fix incorrect validation for num_aces field of smb_acl
  smb: common: change the data type of num_aces to le16
  ksmbd: fix bug on trap in smb2_lock
  ksmbd: fix use-after-free in smb2_lock
  ksmbd: fix type confusion via race condition when using ipc_msg_send_request
  ksmbd: fix out-of-bounds in parse_sec_desc()
  MAINTAINERS: update email address in cifs and ksmbd entry
2025-03-06  Merge tag 'exfat-for-6.14-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat  (Linus Torvalds)
Pull exfat fixes from Namjae Jeon:

 - Optimize new cluster allocation by correctly finding an empty entry slot

 - Add a check to prevent excessive bitmap clearing due to invalid data size of file/dir entry

 - Fix incorrect error return for zero-byte writes

* tag 'exfat-for-6.14-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat:
  exfat: add a check for invalid data size
  exfat: short-circuit zero-byte writes in exfat_file_write_iter
  exfat: fix soft lockup in exfat_clear_bitmap
  exfat: fix just enough dentries but allocate a new cluster to dir
2025-03-06  Merge tag 'vfs-6.14-rc6.fixes' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs  (Linus Torvalds)
Pull vfs fixes from Christian Brauner:

 - Fix spelling mistakes in idmappings.rst

 - Fix RCU warnings in override_creds()/revert_creds()

 - Create new pid namespaces with default limit now that pid_max is namespaced

* tag 'vfs-6.14-rc6.fixes' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs:
  pid: Do not set pid_max in new pid namespaces
  doc: correcting two prefix errors in idmappings.rst
  cred: Fix RCU warnings in override/revert_creds
2025-03-06  fs/pipe: fix pipe buffer index use in FUSE  (Linus Torvalds)
This was another case that Rasmus pointed out where the direct access to the pipe head and tail pointers broke on 32-bit configurations due to the type changes. As with the pipe FIONREAD case, fix it by using the appropriate helper functions that deal with the right pipe index sizing.

Reported-by: Rasmus Villemoes <ravi@prevas.dk>
Link: https://lore.kernel.org/all/878qpi5wz4.fsf@prevas.dk/
Fixes: 3d252160b818 ("fs/pipe: Read pipe->{head,tail} atomically outside pipe->mutex")
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Swapnil Sapkal <swapnil.sapkal@amd.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-03-06  fs/pipe: do not open-code pipe head/tail logic in FIONREAD  (Linus Torvalds)
Rasmus points out that we do indeed have other cases of breakage from the type changes that were introduced on 32-bit targets in order to read the pipe head and tail values atomically (commit 3d252160b818: "fs/pipe: Read pipe->{head,tail} atomically outside pipe->mutex").

Fix it up by using the proper helper functions that now deal with the pipe buffer index types properly. This makes the code simpler and more obvious. The compiler does the CSE and loop hoisting of the pipe ring size masking that we used to do manually, so open-coding this was never a good idea.

Reported-by: Rasmus Villemoes <ravi@prevas.dk>
Link: https://lore.kernel.org/all/87cyeu5zgk.fsf@prevas.dk/
Fixes: 3d252160b818 ("fs/pipe: Read pipe->{head,tail} atomically outside pipe->mutex")
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Swapnil Sapkal <swapnil.sapkal@amd.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-03-06  fs/pipe: express 'pipe_empty()' in terms of 'pipe_occupancy()'  (Linus Torvalds)
That's what 'pipe_full()' does, so it's more consistent. But more importantly it gets the type limits right when the pipe head and tail are no longer necessarily 'unsigned int'. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
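The relationship is easy to see in a standalone userspace sketch; pipe_index_t here is an assumed narrow type standing in for whatever the kernel now uses for the head/tail counters, and the helpers mirror the pattern described above rather than the kernel source.

    #include <stdbool.h>
    #include <stdio.h>

    typedef unsigned short pipe_index_t;    /* assumed narrow index type */

    static unsigned int pipe_occupancy(pipe_index_t head, pipe_index_t tail)
    {
        /* the cast makes the free-running counters wrap at the index width */
        return (pipe_index_t)(head - tail);
    }

    static bool pipe_empty(pipe_index_t head, pipe_index_t tail)
    {
        return pipe_occupancy(head, tail) == 0;
    }

    static bool pipe_full(pipe_index_t head, pipe_index_t tail, unsigned int limit)
    {
        return pipe_occupancy(head, tail) >= limit;
    }

    int main(void)
    {
        pipe_index_t tail = 65530, head = 3;    /* head has wrapped past tail */

        printf("occupancy=%u empty=%d full=%d\n",
               pipe_occupancy(head, tail), pipe_empty(head, tail),
               pipe_full(head, tail, 16));      /* occupancy=9 empty=0 full=0 */
        return 0;
    }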