2024-11-13  net: phy: mediatek: Move LED helper functions into mtk phy lib  [SkyLake.Huang]
This patch creates mtk-phy-lib.c & mtk-phy.h and moves mtk-ge-soc.c's LED helper functions into them, so that those helpers can be reused by other MediaTek Ethernet PHY drivers. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: SkyLake.Huang <skylake.huang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  net: phy: mediatek: Re-organize MediaTek ethernet phy drivers  [SkyLake.Huang]
Re-organize the MediaTek Ethernet phy driver files to get ready to integrate some common functions and add a new 2.5G phy driver:
- mtk-ge.c: MT7530 Gphy on MT7621 & MT7531 Gphy
- mtk-ge-soc.c: built-in Gphy on MT7981 & built-in switch Gphy on MT7988
- mtk-2p5ge.c: planned for the built-in 2.5G phy on MT7988
Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: SkyLake.Huang <skylake.huang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  Merge branch 'octeontx2-rvu-rep'  [David S. Miller]
Geetha sowjanya says:
====================
Introduce RVU representors

This series adds representor support for each RVU device. When switchdev mode is enabled, a representor netdev is registered for each RVU device. In the representor model, one NIX HW LF with multiple SQs and RQs is reserved, and each RQ and SQ of the LF is mapped to a representor. A loopback channel is reserved to support the packet path between representors and VFs. CN10K silicon supports 2 types of MACs, RPM and SDP. This patch set adds representor support for both RPM and SDP MAC interfaces.

- Patch 1: Implements the basic representor driver.
- Patch 2: Adds devlink support to create representor netdevs that can be used to manage VFs.
- Patch 3: Implements basic net_device_ops.
- Patch 4: Installs tcam rules to route packets between representor and VFs.
- Patch 5: Enables fetching VF stats via the representor interface.
- Patch 6: Adds support to sync link state between representors and VFs.
- Patch 7: Enables configuring VF MTU via representor netdevs.
- Patch 8: Adds representors for SDP MAC.
- Patch 9: Adds devlink port support.
- Patch 10: Implements offload stats.
- Patch 11: Implements tc offload support.
- Patch 12: Adds documentation for RVU port representors.

PF/VF representor netdevs created for pci/0002:1c:00.0:

Rpf1vf0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether f6:43:83:ee:26:21 brd ff:ff:ff:ff:ff:ff
Rpf1vf1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 12:b2:54:0e:24:54 brd ff:ff:ff:ff:ff:ff
Rpf1vf2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 4a:12:c4:4c:32:62 brd ff:ff:ff:ff:ff:ff
Rpf1vf3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether ca:cb:68:0e:e2:6e brd ff:ff:ff:ff:ff:ff
Rpf2vf0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 06:cc:ad:b4:f0:93 brd ff:ff:ff:ff:ff:ff

~# devlink port
pci/0002:1c:00.0/0: type eth netdev Rpf1vf0 flavour physical port 0 splittable false
pci/0002:1c:00.0/1: type eth netdev Rpf1vf1 flavour pcivf controller 0 pfnum 1 vfnum 1 external false splittable false
pci/0002:1c:00.0/2: type eth netdev Rpf1vf2 flavour pcivf controller 0 pfnum 1 vfnum 2 external false splittable false
pci/0002:1c:00.0/3: type eth netdev Rpf1vf3 flavour pcivf controller 0 pfnum 1 vfnum 3 external false splittable false
-----------
v11:v1:
- Submitted refactoring changes as a separate patch set. https://lore.kernel.org/netdev/20241023161843.15543-1-gakula@marvell.com/T/
- Moved documentation to a separate patch.
- Patch 9: Added code changes to forward the updated MAC address to the VF.
- Implemented TC offload support.
v10-v11:
- As suggested by "Jiri Pirko", adjusted the documentation.
- Added more commit description to patch 1.
v9-v10:
- Fixed build warning w.r.t documentation.
v8-v9:
- Updated the documentation.
v7-v8:
- Implemented offload stats ndo.
- Added documentation.
v6-v7:
- Rebased on top of the net-next branch.
v5-v6:
- Addressed review comments provided by "Simon Horman".
- Added review tag.
v4-v5:
- Patch 3: Removed devm_* usage in rvu_rep_create().
- Patch 3: Fixed build warnings.
v3-v4:
- Patch 2 & 3: Fixed coccinelle reported warnings.
- Patch 10: Added devlink port support.
v2-v3:
- Used extack for error messages.
- As suggested, reworked commit messages.
- Fixed sparse warning.
v1-v2:
- Fixed build warnings.
- Addressed review comments provided by "Kalesh Anakkur Purayil".
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  Documentation: octeontx2: Add Documentation for RVU representors  [Geetha sowjanya]
Adds documentation for creating and configuring RVU port representors. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-pf: Adds TC offload support  [Geetha sowjanya]
Implements tc offload support for rvu representors. Usage example:
- Add a tc rule to drop packets with vlan id 3 using the port representor (Rpf1vf0):
  # tc filter add dev Rpf1vf0 protocol 802.1Q parent ffff: flower vlan_id 3 vlan_ethtype ipv4 skip_sw action drop
- Redirect packets with vlan id 5 and IPv4 packets to eth1, after stripping the vlan header:
  # tc filter add dev Rpf1vf0 ingress protocol 802.1Q flower vlan_id 5 vlan_ethtype ipv4 skip_sw action vlan pop action mirred ingress redirect dev eth1
Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-pf: Implement offload stats ndo for representors  [Geetha sowjanya]
Implement the offload stat ndo by fetching the HW stats of rx/tx queues attached to the representor. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-pf: Add devlink port support  [Geetha sowjanya]
Register devlink port for the rvu representors. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-pf: Add representors for sdp MAC  [Geetha sowjanya]
Hardware supports different types of MACs, e.g. RPM, SDP and LBK. LBK is the internal Tx->Rx HW loopback path. RPM and SDP MACs support ingress/egress packet IO on interfaces with different sets of capabilities, such as interface modes. At netdev driver registration time, the PF seeks MAC-related information from the Admin Function driver 'drivers/net/ethernet/marvell/octeontx2/af' and sets up ingress/egress queues etc. such that packet IO on the channels of these different MACs is possible. This patch adds representors for the SDP MAC. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-pf: Configure VF mtu via representor  [Geetha sowjanya]
Adds support to manage the MTU configuration of a VF through its representor. On an update of the representor MTU, a mbox notification is sent to the VF to update its MTU. This feature is implemented based on the "Network Function Representors" kernel documentation: "Setting an MTU on the representor should cause that same MTU to be reported to the representee." Signed-off-by: Sai Krishna <saikrishnag@marvell.com> Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-pf: Add support to sync link state between representor and VFs  [Geetha sowjanya]
Implements the below requirement from the representors documentation: "The representee's link state is controlled through the representor. Setting the representor administratively UP or DOWN should cause carrier ON or OFF at the representee."
This patch enables:
- Reflecting the link state of the representor based on the VF state, and the link state of the VF based on the representor.
- On VF interface up/down, a notification is sent via mbox to the representor to update its link state. E.g. 'ip link set eth0 up/down' will turn carrier on/off on the corresponding representor (r0p1) interface.
- Representor interface up/down causes a link state update of the VF. E.g. 'ip link set r0p1 up/down' will turn carrier on/off on the corresponding representee (eth0) interface.
Signed-off-by: Harman Kalra <hkalra@marvell.com> Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
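A minimal C sketch of the carrier propagation described above; the handler and parameter names are illustrative assumptions, not code from the patch:

    #include <linux/netdevice.h>
    #include <linux/types.h>

    /* Hypothetical handler for a "representor admin state changed" mbox
     * notification: mirror the representor's admin state onto the
     * representee's (VF's) carrier, as the representor model requires.
     */
    static void example_rep_state_notify(struct net_device *vf_netdev, bool rep_is_up)
    {
            if (rep_is_up)
                    netif_carrier_on(vf_netdev);
            else
                    netif_carrier_off(vf_netdev);
    }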
2024-11-13  octeontx2-pf: Get VF stats via representor  [Geetha sowjanya]
Adds support to export VF port statistics via representor netdev. Defines new mbox "NIX_LF_STATS" to fetch VF hw stats. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-af: Add packet path between representor and VF  [Geetha sowjanya]
Current HW does not have a built-in switch which would forward packets between representee and representor. When a representor is put under a bridge and packets need to be sent to the representee, packets from the representor are sent on a HW-internal loopback channel, which punts them back to the ingress packet parser. The rules that this patch installs are MCAM filters/rules which match these packets and forward them to the representee. They implement the basic representor <=> representee path, similar to Tun/TAP between a VM and the host. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-pf: Add basic net_device_ops  [Geetha sowjanya]
Implements basic set of net_device_ops. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-pf: Create representor netdev  [Geetha sowjanya]
Adds initial devlink support to set/get the switchdev mode. Representor netdevs are created for each rvu device when the switch mode is set to 'switchdev'. These netdevs are used to control and configure VFs. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-13  octeontx2-pf: RVU representor driver  [Geetha sowjanya]
Adds a basic driver for the RVU representor. On probe, the driver does PCI-specific initialization and configures HW resources. Introduces the RVU_ESWITCH kernel config to enable/disable the driver. Representors and the NIC share code, but representor netdevs support only a subset of the NIC functionality, hence the "otx2_rep_dev" API helps skip initialization of features that are not supported by the representors. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-11-12  net: page_pool: do not count normal frag allocation in stats  [Jakub Kicinski]
Commit 0f6deac3a079 ("net: page_pool: add page allocation stats for two fast page allocate path") added increments for "fast path" allocation to page frag alloc. It mentions performance degradation analysis but the details are unclear. Could be that the author was simply surprised by the alloc stats not matching packet count.
In my experience the key metric for page pool is the recycling rate. Page return stats, however, count returned _pages_ not frags. This makes it impossible to calculate recycling rate for drivers using the frag API. Here is example output of the page-pool YNL sample for a driver allocating 1200B frags (4k pages) with nearly perfect recycling:

  $ ./page-pool
  eth0[2]  page pools: 32 (zombies: 0)
           refs: 291648 bytes: 1194590208 (refs: 0 bytes: 0)
           recycling: 33.3% (alloc: 4557:2256365862 recycle: 200476245:551541893)

The recycling rate is reported as 33.3% because we give out 4096 // 1200 = 3 frags for every recycled page.
Effectively revert the aforementioned commit. This also aligns with the stats we would see for drivers which do the fragmentation themselves, although that's not a strong reason in itself. On the (very unlikely) path where we can reuse the current page let's bump the "cached" stat. The fact that we don't put the page in the cache is just an optimization.
Acked-by: Jesper Dangaard Brouer <hawk@kernel.org> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Link: https://patch.msgid.link/20241109023303.3366500-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-12  eth: bnxt: use page pool for head frags  [Jakub Kicinski]
Testing small size RPCs (300B-400B) on a large AMD system suggests that page pool recycling is very useful even for just the head frags. With this patch (and copy break disabled) I see a 30% performance improvement (82Gbps -> 106Gbps).
Convert bnxt from normal page frags to page pool frags for head buffers. On systems with small page size we can use the same pool as for TPA pages. On systems with large pages the frag allocation logic of the page pool is already used to split a large page into TPA chunks. TPA chunks are much larger than heads (8k or 64k, AFAICT, vs 1kB) and we always allocate the same sized chunks. Mixing allocation of TPA and head pages would lead to sub-optimal memory use. Plus Taehee's work on zero-copy / devmem will need to differentiate between TPA and non-TPA page pool, anyway. Conditionally allocate a new page pool for heads.
Link: https://patch.msgid.link/20241109035119.3391864-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
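A generic C sketch of the page pool frag API that head buffers now use; this shows illustrative usage under assumed setup (a DMA-mapped pool and a precomputed buffer size), not the bnxt code itself:

    #include <linux/mm.h>
    #include <net/page_pool/helpers.h>

    /* Allocate one head buffer as a fragment of a (possibly shared) page
     * pool page, returning its CPU address and reporting its DMA address.
     */
    static void *alloc_head_frag(struct page_pool *pool, unsigned int buf_size,
                                 dma_addr_t *mapping)
    {
            unsigned int offset;
            struct page *page;

            page = page_pool_dev_alloc_frag(pool, &offset, buf_size);
            if (!page)
                    return NULL;

            /* valid because the pool is assumed to do the DMA mapping */
            *mapping = page_pool_get_dma_addr(page) + offset;
            return page_address(page) + offset;
    }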
2024-11-12  dsa: qca8k: Use nested lock to avoid splat  [Andrew Lunn]
qca8k_phy_eth_command() is used to probe the child MDIO bus while the parent MDIO bus is locked. This causes a lockdep splat reporting a possible deadlock. It is not an actual deadlock, because different locks are used. By making use of mutex_lock_nested() we can avoid this false positive. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20241110175955.3053664-1-andrew@lunn.ch Signed-off-by: Jakub Kicinski <kuba@kernel.org>
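A minimal C sketch of the lockdep-nesting pattern involved; the bus names and the SINGLE_DEPTH_NESTING subclass are assumptions for illustration, not the exact qca8k code:

    #include <linux/lockdep.h>
    #include <linux/mutex.h>
    #include <linux/phy.h>

    /* Take the child MDIO bus lock with a distinct lockdep subclass while
     * the parent bus lock is held, so the parent->child nesting is not
     * reported as a self-deadlock.
     */
    static void probe_child_bus_locked(struct mii_bus *parent, struct mii_bus *child)
    {
            mutex_lock(&parent->mdio_lock);
            mutex_lock_nested(&child->mdio_lock, SINGLE_DEPTH_NESTING);
            /* ... issue the Ethernet-encapsulated MDIO commands ... */
            mutex_unlock(&child->mdio_lock);
            mutex_unlock(&parent->mdio_lock);
    }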
2024-11-12  mptcp: fix possible integer overflow in mptcp_reset_tout_timer  [Dmitry Kandybka]
In 'mptcp_reset_tout_timer', promote 'probe_timestamp' to unsigned long to avoid possible integer overflow. Compile tested only. Found by Linux Verification Center (linuxtesting.org) with SVACE. Signed-off-by: Dmitry Kandybka <d.kandybka@gmail.com> Link: https://patch.msgid.link/20241107103657.1560536-1-d.kandybka@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-12  net: sched: cls_api: improve the error message for ID allocation failure  [Jakub Kicinski]
We run into an exhaustion problem with the kernel-allocated filter IDs. Our allocation problem can be fixed on the user space side, but the error message in this case was quite misleading: "Filter with specified priority/protocol not found" (EINVAL). Specifically, when we can't allocate a _new_ ID because the filter with the lowest ID already _exists_, saying "filter not found" is confusing. The kernel allocates IDs in the range of 0xc0000 -> 0x8000, giving out the ID one lower than the lowest existing one in that range.
The error message makes sense when tcf_chain_tp_find() gets called for GET and DEL, but for NEW we need to provide more specific error messages for all three cases:
- the user wants the ID to be auto-allocated, but a filter with ID 0x8000 already exists
- the filter already exists and can be replaced, but the user asked for a protocol change
- the filter doesn't exist
The caller of tcf_chain_tp_insert_unique() doesn't set extack today, so don't bother plumbing it in.
Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Link: https://patch.msgid.link/20241108010254.2995438-1-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-11-12  net: Implement fault injection forcing skb reallocation  [Breno Leitao]
Introduce a fault injection mechanism to force skb reallocation. The primary goal is to catch bugs related to pointer invalidation after potential skb reallocation.
The fault injection mechanism aims to identify scenarios where callers retain pointers to various headers in the skb but fail to reload these pointers after calling a function that may reallocate the data. This type of bug can lead to memory corruption or crashes if the old, now-invalid pointers are used. By forcing reallocation through fault injection, we can stress-test code paths and ensure proper pointer management after potential skb reallocations.
Add a hook for fault injection in the following functions:
* pskb_trim_rcsum()
* pskb_may_pull_reason()
* pskb_trim()
As with the other fault injection mechanisms, protect it under a debug Kconfig called CONFIG_FAIL_SKB_REALLOC. This patch was *heavily* inspired by Jakub's proposal from: https://lore.kernel.org/all/20240719174140.47a868e6@kernel.org/
CC: Akinobu Mita <akinobu.mita@gmail.com> Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Breno Leitao <leitao@debian.org> Reviewed-by: Akinobu Mita <akinobu.mita@gmail.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Acked-by: Guillaume Nault <gnault@redhat.com> Link: https://patch.msgid.link/20241107-fault_v6-v6-1-1b82cb6ecacd@debian.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
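A short C sketch of the bug class this fault injection is designed to expose (illustrative code, not part of the patch): a cached header pointer must be reloaded after any call that can reallocate skb->data.

    #include <linux/errno.h>
    #include <linux/ip.h>
    #include <linux/skbuff.h>

    static int parse_with_reload(struct sk_buff *skb)
    {
            struct iphdr *iph;

            if (!pskb_may_pull(skb, sizeof(*iph)))
                    return -EINVAL;
            iph = ip_hdr(skb);

            /* pulling the full header (with options) may reallocate skb->data */
            if (!pskb_may_pull(skb, iph->ihl * 4))
                    return -EINVAL;
            iph = ip_hdr(skb);      /* reload: the old pointer may now be stale */

            return iph->protocol;
    }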
2024-11-12  Merge branch 'net-ip-add-drop-reasons-to-input-route'  [Paolo Abeni]
Menglong Dong says:
====================
net: ip: add drop reasons to input route

In this series, we mainly add some skb drop reasons to the input path of ip routing, and we make the following functions return drop reasons:
  fib_validate_source()
  ip_route_input_mc()
  ip_mc_validate_source()
  ip_route_input_slow()
  ip_route_input_rcu()
  ip_route_input_noref()
  ip_route_input()
  ip_mkroute_input()
  __mkroute_input()
  ip_route_use_hint()
And the following new skb drop reasons are added:
  SKB_DROP_REASON_IP_LOCAL_SOURCE
  SKB_DROP_REASON_IP_INVALID_SOURCE
  SKB_DROP_REASON_IP_LOCALNET
  SKB_DROP_REASON_IP_INVALID_DEST
Changes since v4:
- in the 6th patch: remove the unneeded "else" in ip_expire()
- in the 8th patch: delete the unneeded comment in __mkroute_input()
- in the 9th patch: replace "return 0" with "return SKB_NOT_DROPPED_YET" in ip_route_use_hint()
Changes since v3:
- don't refactor fib_validate_source/__fib_validate_source, and introduce a wrapper for fib_validate_source() instead in the 1st patch.
- some small adjustments in the 4-7 patches
Changes since v2:
- refactor fib_validate_source and __fib_validate_source to make fib_validate_source return drop reasons
- add the 9th and 10th patches to make this series cover the input route code path
Changes since v1:
- make ip_route_input_noref/ip_route_input_rcu/ip_route_input_slow return drop reasons, instead of passing a local variable to their function arguments.
====================
Link: https://patch.msgid.link/20241107125601.1076814-1-dongml2@chinatelecom.cn Signed-off-by: Paolo Abeni <pabeni@redhat.com>
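A minimal C sketch of the return-value convention adopted by this series; the helpers below are hypothetical stand-ins, not the converted functions: validators return SKB_NOT_DROPPED_YET (0) on success or a specific drop reason, and callers that still must return an errno map any reason to -EINVAL.

    #include <linux/errno.h>
    #include <linux/ip.h>
    #include <linux/skbuff.h>
    #include <net/dropreason.h>

    static enum skb_drop_reason example_validate(struct sk_buff *skb)
    {
            if (!pskb_may_pull(skb, sizeof(struct iphdr)))
                    return SKB_DROP_REASON_PKT_TOO_SMALL;
            return SKB_NOT_DROPPED_YET;
    }

    static int example_rcv(struct sk_buff *skb)
    {
            enum skb_drop_reason reason = example_validate(skb);

            if (reason != SKB_NOT_DROPPED_YET) {
                    kfree_skb_reason(skb, reason);
                    return -EINVAL;
            }
            return 0;
    }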
2024-11-12  net: ip: make ip_route_use_hint() return drop reasons  [Menglong Dong]
In this commit, we make ip_route_use_hint() return drop reasons. The drop reasons that we return are similar to what we do in ip_route_input_slow(), and no drop reasons are added in this commit. Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-11-12  net: ip: make ip_mkroute_input/__mkroute_input return drop reasons  [Menglong Dong]
In this commit, we make ip_mkroute_input() and __mkroute_input() return drop reasons. The drop reason "SKB_DROP_REASON_ARP_PVLAN_DISABLE" is introduced for the case where a packet which is not IP is forwarded to the in_dev and proxy_arp_pvlan is not enabled. Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-11-12  net: ip: make ip_route_input() return drop reasons  [Menglong Dong]
In this commit, we make ip_route_input() return skb drop reasons that come from ip_route_input_noref(). Meanwhile, adjust all the calls to it. Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-11-12  net: ip: make ip_route_input_noref() return drop reasons  [Menglong Dong]
In this commit, we make ip_route_input_noref() return drop reasons, which come from ip_route_input_rcu(). We need to adjust the callers of ip_route_input_noref() to make sure its return value is used properly. The errno that ip_route_input_noref() returns comes from ip_route_input and bpf_lwt_input_reroute in the original logic, and we make them return -EINVAL on error instead. In the following patch, we will make ip_route_input() return drop reasons too. Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-11-12  net: ip: make ip_route_input_rcu() return drop reasons  [Menglong Dong]
In this commit, we make ip_route_input_rcu() return drop reasons, which come from ip_route_input_mc() and ip_route_input_slow(). The only caller of ip_route_input_rcu() is ip_route_input_noref(). We adjust it by making it return -EINVAL on error and ignore the reasons that ip_route_input_rcu() returns. In the following patch, we will make ip_route_input_noref() return the drop reasons. Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-11-12  net: ip: make ip_route_input_slow() return drop reasons  [Menglong Dong]
In this commit, we make ip_route_input_slow() return skb drop reasons, and the following new skb drop reason is added: SKB_DROP_REASON_IP_INVALID_DEST. The only caller of ip_route_input_slow() is ip_route_input_rcu(), and we adjust it by making it return -EINVAL on error. Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-11-12  net: ip: make ip_mc_validate_source() return drop reason  [Menglong Dong]
Make ip_mc_validate_source() return a drop reason, and adjust the call of it in ip_route_input_mc(). Another caller of it is ip_rcv_finish_core->udp_v4_early_demux, where the errno is not checked in detail, so we don't do more adjustment for it. The drop reason "SKB_DROP_REASON_IP_LOCALNET" is added in this commit. Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-11-12  net: ip: make ip_route_input_mc() return drop reason  [Menglong Dong]
Make ip_route_input_mc() return drop reason, and adjust the call of it in ip_route_input_rcu(). Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-11-12  net: ip: make fib_validate_source() support drop reasons  [Menglong Dong]
In this commit, we make fib_validate_source() and __fib_validate_source() return -reason instead of an errno on error.
The return value of fib_validate_source() can be -errno, 0, or 1, which makes it hard for it to return drop reasons directly: fib_validate_source() returns 1 if the scope of the source (reverse) route is HOST, and __mkroute_input() marks the skb with IPSKB_DOREDIRECT in that case (combined with some other conditions); a REDIRECT ICMP is then sent in ip_forward() if this flag is present. We can't pass this information to __mkroute_input() if we make fib_validate_source() return drop reasons. Therefore, we introduce the wrapper fib_validate_source_reason() for fib_validate_source(), which returns the drop reasons on error.
In the original logic, LINUX_MIB_IPRPFILTER is counted if fib_validate_source() returns -EXDEV. Now we need to adjust this by checking "reason == SKB_DROP_REASON_IP_RPFILTER". However, this will take effect only after the patch "net: ip: make ip_route_input_noref() return drop reasons", as we can't pass the drop reasons from fib_validate_source() to ip_rcv_finish_core() in this patch.
The following new drop reasons are added in this patch:
  SKB_DROP_REASON_IP_LOCAL_SOURCE
  SKB_DROP_REASON_IP_INVALID_SOURCE
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
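A minimal C sketch of the { -reason, 0, 1 } convention and the wrapper described above; the functions are hypothetical stand-ins for fib_validate_source() and fib_validate_source_reason(), not the actual fib code:

    #include <linux/types.h>
    #include <net/dropreason.h>

    /* Keeps the historical convention: -reason on error, 0 on success,
     * 1 when the source (reverse) route scope is HOST.
     */
    static int inner_validate(bool ok, bool host_scope)
    {
            if (!ok)
                    return -SKB_DROP_REASON_IP_RPFILTER;
            return host_scope ? 1 : 0;
    }

    /* Recovers a plain drop reason for callers that only care about the
     * error case.
     */
    static enum skb_drop_reason wrapped_validate(bool ok, bool host_scope)
    {
            int ret = inner_validate(ok, host_scope);

            return ret < 0 ? -ret : SKB_NOT_DROPPED_YET;
    }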
2024-11-11  Merge branch 'mlx5-esw-qos-refactor-and-shampo-cleanup'  [Jakub Kicinski]
Tariq Toukan says:
====================
mlx5 esw qos refactor and SHAMPO cleanup

This patchset for the mlx5 core and Eth drivers consists of 3 parts. First patch by Patrisious improves the E-switch mode change operation. The following 6 patches by Carolina introduce further refactoring for the QoS handling, to set the foundation for future extensions. In the following 5 patches by Dragos, we enhance the SHAMPO datapath flow by simplifying some logic, and cleaning up the implementation.
====================
Link: https://patch.msgid.link/20241107194357.683732-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5e: SHAMPO, Rework header allocation loop  [Dragos Tatulea]
The current loop code was based on the assumption that there can be page leftovers from previous function calls. This patch changes the allocation loop to make it clearer how pages get allocated every MLX5E_SHAMPO_WQ_HEADER_PER_PAGE headers. This change has no functional implications. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-13-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5e: SHAMPO, Drop info array  [Dragos Tatulea]
The info array is used to store a pointer to the dma address of the header and to the frag page. However, this array is not really required:
- The frag page can be calculated from the header index: frag page index = header index / headers per page.
- The dma address can be calculated through a formula: dma page address + header offset.
This series gets rid of the info array and uses the above formulas instead. The current_page_index was used in conjunction with the info array to store page fragment indices. This variable is dropped as well. There was no performance regression observed. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-12-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
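A small C sketch of the two formulas above; the constants are assumed example values for illustration, not the driver's real ones (the driver uses MLX5E_SHAMPO_WQ_HEADER_PER_PAGE and its fixed header buffer size):

    #include <linux/types.h>

    #define EXAMPLE_HDR_PER_PAGE    32      /* assumed headers per page */
    #define EXAMPLE_HDR_SIZE        128     /* assumed bytes per header buffer */

    /* frag page index = header index / headers per page */
    static u32 hdr_frag_page_index(u32 header_index)
    {
            return header_index / EXAMPLE_HDR_PER_PAGE;
    }

    /* dma address = dma page address + header offset within that page */
    static dma_addr_t hdr_dma_addr(dma_addr_t page_dma_addr, u32 header_index)
    {
            u32 offset = (header_index % EXAMPLE_HDR_PER_PAGE) * EXAMPLE_HDR_SIZE;

            return page_dma_addr + offset;
    }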
2024-11-11  net/mlx5e: SHAMPO, Change frag page setup order during allocation  [Dragos Tatulea]
Now that the UMR allocation has been simplified, it is no longer possible to have a leftover page from a previous call to mlx5e_build_shampo_hd_umr(). This patch simplifies the code by switching the order of operations: first take the frag page and then increment the index. This is more straightforward and it also paves the way for dropping the info array. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-11-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5e: SHAMPO, Fix page_index calculation inconsistency  [Dragos Tatulea]
When calculating the index for the next frag page slot, the divisor is incorrect: it should be the number of pages per queue, not the number of headers per queue. This is currently harmless because frag pages are not used directly, but are intermediated through the info array. But it needs to be fixed as an upcoming patch will get rid of the info array. This patch introduces a new pages-per-queue variable and plugs it into the formula. Now that this variable exists, additional code can be simplified in the SHAMPO initialization code. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-10-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5e: SHAMPO, Simplify UMR allocation for headers  [Dragos Tatulea]
Allocating page fragments for header data split is currently more complicated than it should be. That's because the number of KSM entries allocated is not aligned to the number of headers per page. This leads to having leftovers in the next allocation which require additional accounting and needlessly complicated code.
This patch aligns (down) the number of KSM entries in the UMR WQE to the number of headers per page by:
1) Aligning the max number of entries allocated per UMR WQE (max_ksm_entries) to MLX5E_SHAMPO_WQ_HEADER_PER_PAGE.
2) Aligning the total number of free headers to MLX5E_SHAMPO_WQ_HEADER_PER_PAGE.
... and then it drops the extra accounting code from mlx5e_build_shampo_hd_umr(). Although the number of entries allocated per UMR WQE is slightly smaller due to aligning down, no performance impact was observed.
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-9-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
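A short C sketch of the align-down step, with an assumed shape rather than the exact driver code:

    #include <linux/math.h>
    #include <linux/minmax.h>
    #include <linux/types.h>

    /* Round the per-WQE KSM entry budget down to a whole number of pages'
     * worth of headers, so a UMR WQE never ends in the middle of a page.
     */
    static u16 ksm_entries_for_umr(u16 max_ksm_entries, u16 free_headers,
                                   u16 headers_per_page)
    {
            u16 entries = min(max_ksm_entries, free_headers);

            return rounddown(entries, headers_per_page);
    }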
2024-11-11  net/mlx5: Make vport QoS enablement more flexible for future extensions  [Carolina Jubran]
Refactor esw_qos_vport_enable to support more generic configurations, allowing it to be reused for new vport node types in future patches. This refactor includes a new way to change the vport parent node by disabling the current setup and re-enabling it with the new parent. This change sets the foundation for adapting configuration based on the parent type in future patches. Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-8-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5: Integrate esw_qos_vport_enable logic into rate operations  [Carolina Jubran]
Fold the esw_qos_vport_enable function into operations for configuring maximum and minimum rates, simplifying QoS logic. This change consolidates enabling and updating the scheduling element configuration, streamlining how vport QoS is initialized and adjusted. Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-7-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5: Generalize scheduling element operations  [Carolina Jubran]
Introduce helper functions to create and destroy scheduling elements, allowing flexible configuration for different scheduling element types. The new helper functions streamline the process by centralizing error handling and logging through esw_qos_sched_elem_op_warn, which now accepts the operation type (create, destroy, or modify). The changes also adjust the esw_qos_vport_enable and mlx5_esw_qos_vport_disable functions to leverage the new generalized create/destroy helpers. The destroy functions now log errors with esw_warn without returning them. This prevents unnecessary error handling since the node was already destroyed and no further action is required from callers. Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-6-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5: Refactor scheduling element configuration bitmasks  [Carolina Jubran]
Refactor esw_qos_sched_elem_config to set bitmasks only when max_rate or bw_share values change, allowing the function to configure nodes with only one of these parameters. This enables more flexible usage for nodes where only one parameter requires configuration. Remove scattered assignments and checks to centralize them within this function, removing the now redundant esw_qos_set_node_max_rate entirely. With this refactor, also remove the assignment of the vport scheduling node max rate to the parent max rate for unlimited vports (where max rate is set to zero), as firmware already handles this behavior. Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-5-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5: Generalize max_rate and min_rate setting for nodes  [Carolina Jubran]
Refactor max_rate and min_rate setting functions to operate on mlx5_esw_sched_node, allowing for generalized handling of both vports and nodes. Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-4-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5: Simplify QoS normalization by removing error handling  [Carolina Jubran]
This change updates esw_qos_normalize_min_rate to not return errors, significantly simplifying the code. Normalization failures are software bugs, and it's unnecessary to handle them with rollback mechanisms. Instead, `esw_qos_update_sched_node_bw_share` and `esw_qos_normalize_min_rate` now return void, with any errors logged as warnings to indicate potential software issues. This approach avoids compensating for hidden bugs and removes error handling from all places that perform normalization, streamlining future patches. Signed-off-by: Carolina Jubran <cjubran@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-3-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net/mlx5: E-switch, refactor eswitch mode change  [Patrisious Haddad]
The E-switch mode was previously updated before removing and re-adding the IB device, which could cause a temporary mismatch between the E-switch mode and the IB device configuration. To prevent this discrepancy, the IB device is now removed first, then the E-switch mode is updated, and finally, the IB device is re-added. This sequence ensures consistent alignment between the E-switch mode and the IB device whenever the mode changes, regardless of the new mode value. Signed-off-by: Patrisious Haddad <phaddad@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241107194357.683732-2-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net: ipv4: Cache pmtu for all packet paths if multipath enabled  [Vladimir Vdovin]
Check the number of paths with fib_info_num_path(), and call update_or_create_fnhe() for every path. The problem is that the pmtu is cached only for the oif that has received the icmp message "need to frag"; other oifs will still try to use the "default" iface mtu.
An example topology showing the problem: host1 has dummy0 (10.179.20.18/32, mtu 9000) and two uplinks, ens17f0 (10.179.2.141/31) towards ro1 (10.179.2.140/31) and ens17f1 (10.179.2.13/31) towards ro2 (10.179.2.12/31), all with mtu 9000. Both routers reach host2 (eth0, 10.10.30.30, mtu 9000) through ro3 (10.10.10.10, mtu 1500) and some intermediate networks.
host1 has multipath enabled and sysctl net.ipv4.fib_multipath_hash_policy = 1:
  default proto static src 10.179.20.18
    nexthop via 10.179.2.12 dev ens17f1 weight 1
    nexthop via 10.179.2.140 dev ens17f0 weight 1
When host1 tries to do pmtud from 10.179.20.18/32 to host2, host1 receives on the ens17f1 iface an icmp packet from ro3 saying that ro3's mtu is 1500, and host1 caches it in the nexthop exceptions cache. The problem is that it is cached only for the iface that received the icmp, and there is no way that ro3 will send an icmp msg to host1 via another path. host1 now has these routes to host2:
  ip r g 10.10.30.30 sport 30000 dport 443
  10.10.30.30 via 10.179.2.12 dev ens17f1 src 10.179.20.18 uid 0
      cache expires 521sec mtu 1500
  ip r g 10.10.30.30 sport 30033 dport 443
  10.10.30.30 via 10.179.2.140 dev ens17f0 src 10.179.20.18 uid 0
      cache
So when host1 tries again to reach host2 with mtu > 1500, if the packet flow is lucky enough to be hashed with oif=ens17f1 it is ok; if oif=ens17f0 it blackholes and still gets icmp msgs from ro3 to ens17f1, until the lucky day when ro3 sends them through another flow to ens17f0.
Signed-off-by: Vladimir Vdovin <deliran@verdict.gg> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://patch.msgid.link/20241108093427.317942-1-deliran@verdict.gg Signed-off-by: Jakub Kicinski <kuba@kernel.org>
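A C sketch of the shape of the fix (assumed, not the exact patch): walk every nexthop of the multipath route and cache the learned PMTU on each, instead of only on the path the ICMP message arrived through.

    #include <linux/printk.h>
    #include <net/ip_fib.h>
    #include <net/nexthop.h>

    static void cache_pmtu_on_all_paths(struct fib_info *fi, u32 mtu)
    {
            unsigned int i;

            for (i = 0; i < fib_info_num_path(fi); i++) {
                    struct fib_nh_common *nhc = fib_info_nhc(fi, i);

                    /* update or create the PMTU exception (fnhe) for this
                     * nexthop here, e.g. via update_or_create_fnhe()
                     */
                    pr_debug("caching pmtu %u on path %u (%p)\n", mtu, i, nhc);
            }
    }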
2024-11-11  net: netconsole: selftests: Check if netdevsim is available  [Breno Leitao]
The netconsole selftest relies on the availability of the netdevsim module. To ensure the test can run correctly, we need to check if the netdevsim module is either loaded or built-in before proceeding. Update the netconsole selftest to check for the existence of the /sys/bus/netdevsim/new_device file before running the test. If the file is not found, the test is skipped with an explanation that the CONFIG_NETDEVSIM kernel config option may not be enabled. Signed-off-by: Breno Leitao <leitao@debian.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241108-netcon_selftest_deps-v1-1-1789cbf3adcd@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  Merge branch 'net-phylink-phylink_resolve-cleanups'  [Jakub Kicinski]
Russell King says:
====================
net: phylink: phylink_resolve() cleanups

This series does a bit of clean-up in phylink_resolve() to make the code a little easier to follow.
Patch 1 moves the manual flow control setting in two of the switch cases to after the switch().
Patch 2 changes the MLO_AN_FIXED case to be a simple if() statement, reducing its indentation.
Patch 3 changes the MLO_AN_PHY case to also be a simple if() statement, also reducing its indentation.
Patch 4 does the same for the last case.
Patch 5 reformats the code and comments for the reduced indentation, making it easier to read.
====================
Link: https://patch.msgid.link/Zy411lVWe2SikuOs@shell.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net: phylink: clean up phylink_resolve()  [Russell King (Oracle)]
Now that we have reduced the indentation level, clean up the code formatting. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/E1t9RQz-002Ff5-EA@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net: phylink: remove switch() statement in resolve handling  [Russell King (Oracle)]
The switch() statement doesn't sit very well with the preceding if() statements, so let's just convert everything to if()s. As a result of the two preceding commits, there is now only one case in the switch() statement. Remove the switch statement and reduce the code indentation. Code reformatting will be in the following commit. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/E1t9RQu-002Fez-AA@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-11-11  net: phylink: move MLO_AN_PHY resolve handling to if() statement  [Russell King (Oracle)]
The switch() statement doesn't sit very well with the preceding if() statements, and results in excessive indentation that spoils code readability. Continue cleaning this up by converting the MLO_AN_PHY case to use an if() statement. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/E1t9RQp-002Fet-5W@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>