Age  Commit message  Author
2023-08-08  MAINTAINERS: Update entry for rtl8187  (Larry Finger)
As Herton Ronaldo Krzesinski is no longer active, remove him as maintainer for rtl8187. The git tree entry is also removed. Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Link: https://lore.kernel.org/r/20230804222438.16076-2-Larry.Finger@lwfinger.net Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-08-08  wifi: brcm80211: handle params_v1 allocation failure  (Petr Tesarik)
Return -ENOMEM from brcmf_run_escan() if kzalloc() fails for v1 params. Fixes: 398ce273d6b1 ("wifi: brcmfmac: cfg80211: Add support for scan params v2") Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com> Link: https://lore.kernel.org/r/20230802163430.1656-1-petrtesarik@huaweicloud.com Signed-off-by: Johannes Berg <johannes.berg@intel.com>
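For illustration, a minimal hedged sketch of the pattern the fix applies (variable names here are illustrative, not the exact brcmfmac code):
  /* bail out with -ENOMEM when the v1 params buffer cannot be
   * allocated, instead of carrying on with a NULL pointer
   */
  params_v1 = kzalloc(params_size, GFP_KERNEL);
  if (!params_v1) {
          err = -ENOMEM;
          goto exit;
  }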
2023-08-07  net: marvell: prestera: fix handling IPv4 routes with nhid  (Jonas Gorski)
Fix handling IPv4 routes referencing a nexthop via its id by replacing calls to fib_info_nh() with fib_info_nhc(). Trying to add an IPv4 route referencing a nextop via nhid: $ ip link set up swp5 $ ip a a 10.0.0.1/24 dev swp5 $ ip nexthop add dev swp5 id 20 via 10.0.0.2 $ ip route add 10.0.1.0/24 nhid 20 triggers warnings when trying to handle the route: [ 528.805763] ------------[ cut here ]------------ [ 528.810437] WARNING: CPU: 3 PID: 53 at include/net/nexthop.h:468 __prestera_fi_is_direct+0x2c/0x68 [prestera] [ 528.820434] Modules linked in: prestera_pci act_gact act_police sch_ingress cls_u32 cls_flower prestera arm64_delta_tn48m_dn_led(O) arm64_delta_tn48m_dn_cpld(O) [last unloaded: prestera_pci] [ 528.837485] CPU: 3 PID: 53 Comm: kworker/u8:3 Tainted: G O 6.4.5 #1 [ 528.845178] Hardware name: delta,tn48m-dn (DT) [ 528.849641] Workqueue: prestera_ordered __prestera_router_fib_event_work [prestera] [ 528.857352] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 528.864347] pc : __prestera_fi_is_direct+0x2c/0x68 [prestera] [ 528.870135] lr : prestera_k_arb_fib_evt+0xb20/0xd50 [prestera] [ 528.876007] sp : ffff80000b20bc90 [ 528.879336] x29: ffff80000b20bc90 x28: 0000000000000000 x27: ffff0001374d3a48 [ 528.886510] x26: ffff000105604000 x25: ffff000134af8a28 x24: ffff0001374d3800 [ 528.893683] x23: ffff000101c89148 x22: ffff000101c89000 x21: ffff000101c89200 [ 528.900855] x20: ffff00013641fda0 x19: ffff800009d01088 x18: 0000000000000059 [ 528.908027] x17: 0000000000000277 x16: 0000000000000000 x15: 0000000000000000 [ 528.915198] x14: 0000000000000003 x13: 00000000000fe400 x12: 0000000000000000 [ 528.922371] x11: 0000000000000002 x10: 0000000000000aa0 x9 : ffff8000013d2020 [ 528.929543] x8 : 0000000000000018 x7 : 000000007b1703f8 x6 : 000000001ca72f86 [ 528.936715] x5 : 0000000033399ea7 x4 : 0000000000000000 x3 : ffff0001374d3acc [ 528.943886] x2 : 0000000000000000 x1 : ffff00010200de00 x0 : ffff000134ae3f80 [ 528.951058] Call trace: [ 528.953516] __prestera_fi_is_direct+0x2c/0x68 [prestera] [ 528.958952] __prestera_router_fib_event_work+0x100/0x158 [prestera] [ 528.965348] process_one_work+0x208/0x488 [ 528.969387] worker_thread+0x4c/0x430 [ 528.973068] kthread+0x120/0x138 [ 528.976313] ret_from_fork+0x10/0x20 [ 528.979909] ---[ end trace 0000000000000000 ]--- [ 528.984998] ------------[ cut here ]------------ [ 528.989645] WARNING: CPU: 3 PID: 53 at include/net/nexthop.h:468 __prestera_fi_is_direct+0x2c/0x68 [prestera] [ 528.999628] Modules linked in: prestera_pci act_gact act_police sch_ingress cls_u32 cls_flower prestera arm64_delta_tn48m_dn_led(O) arm64_delta_tn48m_dn_cpld(O) [last unloaded: prestera_pci] [ 529.016676] CPU: 3 PID: 53 Comm: kworker/u8:3 Tainted: G W O 6.4.5 #1 [ 529.024368] Hardware name: delta,tn48m-dn (DT) [ 529.028830] Workqueue: prestera_ordered __prestera_router_fib_event_work [prestera] [ 529.036539] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 529.043533] pc : __prestera_fi_is_direct+0x2c/0x68 [prestera] [ 529.049318] lr : __prestera_k_arb_fc_apply+0x280/0x2f8 [prestera] [ 529.055452] sp : ffff80000b20bc60 [ 529.058781] x29: ffff80000b20bc60 x28: 0000000000000000 x27: ffff0001374d3a48 [ 529.065953] x26: ffff000105604000 x25: ffff000134af8a28 x24: ffff0001374d3800 [ 529.073126] x23: ffff000101c89148 x22: ffff000101c89148 x21: ffff00013641fda0 [ 529.080299] x20: ffff000101c89000 x19: ffff000101c89020 x18: 0000000000000059 [ 529.087471] x17: 0000000000000277 x16: 0000000000000000 x15: 0000000000000000 [ 529.094642] x14: 
0000000000000003 x13: 00000000000fe400 x12: 0000000000000000 [ 529.101814] x11: 0000000000000002 x10: 0000000000000aa0 x9 : ffff8000013cee80 [ 529.108985] x8 : 0000000000000018 x7 : 000000007b1703f8 x6 : 0000000000000018 [ 529.116157] x5 : 00000000d3497eb6 x4 : ffff000105604081 x3 : 000000008e979557 [ 529.123329] x2 : 0000000000000000 x1 : ffff00010200de00 x0 : ffff000134ae3f80 [ 529.130501] Call trace: [ 529.132958] __prestera_fi_is_direct+0x2c/0x68 [prestera] [ 529.138394] prestera_k_arb_fib_evt+0x6b8/0xd50 [prestera] [ 529.143918] __prestera_router_fib_event_work+0x100/0x158 [prestera] [ 529.150313] process_one_work+0x208/0x488 [ 529.154348] worker_thread+0x4c/0x430 [ 529.158030] kthread+0x120/0x138 [ 529.161274] ret_from_fork+0x10/0x20 [ 529.164867] ---[ end trace 0000000000000000 ]--- and results in a non offloaded route: $ ip route 10.0.0.0/24 dev swp5 proto kernel scope link src 10.0.0.1 rt_trap 10.0.1.0/24 nhid 20 via 10.0.0.2 dev swp5 rt_trap When creating a route referencing a nexthop via its ID, the nexthop will be stored in a separate nh pointer instead of the array of nexthops in the fib_info struct. This causes issues since fib_info_nh() only handles the nexthops array, but not the separate nh pointer, and will loudly WARN about it. In contrast fib_info_nhc() handles both, but returns a fib_nh_common pointer instead of a fib_nh pointer. Luckily we only ever access fields from the fib_nh_common parts, so we can just replace all instances of fib_info_nh() with fib_info_nhc() and access the fields via their fib_nh_common names. This allows handling IPv4 routes with an external nexthop, and they now get offloaded as expected: $ ip route 10.0.0.0/24 dev swp5 proto kernel scope link src 10.0.0.1 rt_trap 10.0.1.0/24 nhid 20 via 10.0.0.2 dev swp5 offload rt_offload Fixes: 396b80cb5cc8 ("net: marvell: prestera: Add neighbour cache accounting") Signed-off-by: Jonas Gorski <jonas.gorski@bisdn.de> Acked-by: Elad Nachman <enachman@marvell.com> Link: https://lore.kernel.org/r/20230804101220.247515-1-jonas.gorski@bisdn.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
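A minimal hedged sketch of the substitution described above (the wrapper function below is hypothetical; only fib_info_nhc() and the nhc_* field names are kernel API):
  /* before: fib_info_nh(fi, 0) only handles the nexthop array and
   * WARNs for routes created with "nhid"
   */
  static bool prestera_nh_uses_dev_sketch(struct fib_info *fi,
                                          const struct net_device *dev)
  {
          struct fib_nh_common *nhc = fib_info_nhc(fi, 0);

          /* only the shared fib_nh_common fields are accessed */
          return nhc->nhc_dev == dev;
  }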
2023-08-07  Merge branch 'net-remove-redundant-initialization-owner'  (Jakub Kicinski)
Li Zetao says:
====================
net: Remove redundant initialization owner
This patch set removes the redundant driver.owner initialization when registering a fsl_mc_driver driver.
====================
Link: https://lore.kernel.org/r/20230804095946.99956-1-lizetao1@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  net: dpaa2-switch: Remove redundant initialization owner in dpaa2_switch_drv  (Li Zetao)
fsl_mc_driver_register() already sets driver.owner to THIS_MODULE when registering a fsl_mc_driver driver, so initializing driver.owner in the dpaa2_switch_drv definition is redundant. Remove it to keep the code clean. Signed-off-by: Li Zetao <lizetao1@huawei.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20230804095946.99956-3-lizetao1@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
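A hedged before/after sketch of the cleanup (the probe/remove members are illustrative; the .owner line is the point):
  static struct fsl_mc_driver dpaa2_switch_drv = {
          .driver = {
                  .name  = KBUILD_MODNAME,
                  /* .owner = THIS_MODULE,  -- removed: filled in by
                   * fsl_mc_driver_register(), which passes THIS_MODULE
                   * down to __fsl_mc_driver_register()
                   */
          },
          .probe  = dpaa2_switch_probe,   /* illustrative members */
          .remove = dpaa2_switch_remove,
  };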
2023-08-07  net: dpaa2-eth: Remove redundant initialization owner in dpaa2_eth_driver  (Li Zetao)
fsl_mc_driver_register() already sets driver.owner to THIS_MODULE when registering a fsl_mc_driver driver, so initializing driver.owner in the dpaa2_eth_driver definition is redundant. Remove it to keep the code clean. Signed-off-by: Li Zetao <lizetao1@huawei.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20230804095946.99956-2-lizetao1@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  Merge branch 'octeontx2-af-tc-flower-offload-changes'  (Jakub Kicinski)
Suman Ghosh says:
====================
octeontx2-af: TC flower offload changes
This patchset includes minor code restructuring of the TC flower offload for outer VLAN and adds support for TC inner VLAN offload.
Patch #1: Code restructure to handle TC flower outer VLAN offload
Patch #2: Add TC flower offload support for inner VLAN
====================
Link: https://lore.kernel.org/r/20230804045935.3010554-1-sumang@marvell.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  octeontx2-af: TC flower offload support for inner VLAN  (Suman Ghosh)
Extend the current TC flower offload support to enable filters matching inner VLAN, and support offload of those filters to hardware. Signed-off-by: Suman Ghosh <sumang@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20230804045935.3010554-3-sumang@marvell.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  octeontx2-af: Code restructure to handle TC outer VLAN offload  (Suman Ghosh)
Move the TC outer VLAN offload support to a separate function, so that all VLAN-related handling is done cleanly in one dedicated place. Signed-off-by: Suman Ghosh <sumang@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20230804045935.3010554-2-sumang@marvell.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  net: core: remove unnecessary frame_sz check in bpf_xdp_adjust_tail()  (Andrew Kanner)
Syzkaller reported the following issue:
=======================================
Too BIG xdp->frame_sz = 131072
WARNING: CPU: 0 PID: 5020 at net/core/filter.c:4121 ____bpf_xdp_adjust_tail net/core/filter.c:4121 [inline]
WARNING: CPU: 0 PID: 5020 at net/core/filter.c:4121 bpf_xdp_adjust_tail+0x466/0xa10 net/core/filter.c:4103
...
Call Trace:
 <TASK>
 bpf_prog_4add87e5301a4105+0x1a/0x1c
 __bpf_prog_run include/linux/filter.h:600 [inline]
 bpf_prog_run_xdp include/linux/filter.h:775 [inline]
 bpf_prog_run_generic_xdp+0x57e/0x11e0 net/core/dev.c:4721
 netif_receive_generic_xdp net/core/dev.c:4807 [inline]
 do_xdp_generic+0x35c/0x770 net/core/dev.c:4866
 tun_get_user+0x2340/0x3ca0 drivers/net/tun.c:1919
 tun_chr_write_iter+0xe8/0x210 drivers/net/tun.c:2043
 call_write_iter include/linux/fs.h:1871 [inline]
 new_sync_write fs/read_write.c:491 [inline]
 vfs_write+0x650/0xe40 fs/read_write.c:584
 ksys_write+0x12f/0x250 fs/read_write.c:637
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
The xdp->frame_sz > PAGE_SIZE check was introduced in commit c8741e2bfe87 ("xdp: Allow bpf_xdp_adjust_tail() to grow packet size"). But Jesper Dangaard Brouer <jbrouer@redhat.com> noted that after the introduction of xdp_init_buff(), which all XDP drivers use, it is safe to remove this check. The original intent was to catch cases where XDP drivers had not been updated to use xdp.frame_sz, but that is no longer a concern (since xdp_init_buff). Running the initial syzkaller repro it was discovered that a contiguous physical memory allocation is used for both xdp paths in tun_get_user(), i.e. tun_build_skb() and tun_alloc_skb(). Jesper Dangaard Brouer <jbrouer@redhat.com> also stated that XDP can work on higher-order pages, as long as they are contiguous physical memory (e.g. a page). Reported-and-tested-by: syzbot+f817490f5bd20541b90a@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/000000000000774b9205f1d8a80d@google.com/T/ Link: https://syzkaller.appspot.com/bug?extid=f817490f5bd20541b90a Link: https://lore.kernel.org/all/20230725155403.796-1-andrew.kanner@gmail.com/T/ Fixes: 43b5169d8355 ("net, xdp: Introduce xdp_init_buff utility routine") Signed-off-by: Andrew Kanner <andrew.kanner@gmail.com> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org> Acked-by: Jason Wang <jasowang@redhat.com> Link: https://lore.kernel.org/r/20230803190316.2380231-1-andrew.kanner@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
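For reference, a hedged sketch of the kind of check being dropped (close to, but not guaranteed to be byte-for-byte, the code in ____bpf_xdp_adjust_tail()):
  if (unlikely(xdp->frame_sz > PAGE_SIZE)) {
          WARN_ONCE(1, "Too BIG xdp->frame_sz = %d\n", xdp->frame_sz);
          return -EINVAL;
  }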
2023-08-07  drivers: net: prevent tun_build_skb() from exceeding the packet size limit  (Andrew Kanner)
Using the syzkaller repro with a reduced packet size it was discovered that XDP_PACKET_HEADROOM is not checked in tun_can_build_skb(), although pad may be incremented in tun_build_skb(). This may end up exceeding the PAGE_SIZE limit in tun_build_skb(). Jason Wang <jasowang@redhat.com> proposed to always count XDP_PACKET_HEADROOM (i.e. without checking rcu_access_pointer(tun->xdp_prog)) in tun_can_build_skb(), since there is a window during which an XDP program might be attached between tun_can_build_skb() and tun_build_skb(). Fixes: 7df13219d757 ("tun: reserve extra headroom only when XDP is set") Link: https://syzkaller.appspot.com/bug?extid=f817490f5bd20541b90a Signed-off-by: Andrew Kanner <andrew.kanner@gmail.com> Link: https://lore.kernel.org/r/20230803185947.2379988-1-andrew.kanner@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
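A hedged sketch of the idea (simplified; TUN_RX_PAD and the exact shape of the check in tun_can_build_skb() are assumptions, not a copy of tun.c):
  static bool tun_can_build_skb_sketch(int len, bool zerocopy)
  {
          /* count XDP_PACKET_HEADROOM unconditionally: an XDP program
           * may be attached after this check but before tun_build_skb()
           */
          int pad = TUN_RX_PAD + XDP_PACKET_HEADROOM;

          if (zerocopy)
                  return false;

          return SKB_DATA_ALIGN(len + pad) +
                 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) <= PAGE_SIZE;
  }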
2023-08-07  Merge tag 'xsa432-6.5-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip  (Linus Torvalds)
Pull xen netback buffer overflow fix from Juergen Gross: "The fix for XSA-423 added logic to Linux's netback driver to deal with a frontend splitting a packet in a way such that not all of the headers would come in one piece. Unfortunately the logic introduced there didn't account for the extreme case of the entire packet being split into as many pieces as permitted by the protocol, yet still being smaller than the area that's specially dealt with to keep all (possible) headers together. Such an unusual packet would therefore trigger a buffer overrun in the driver"
* tag 'xsa432-6.5-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen/netback: Fix buffer overrun triggered by unusual packet
2023-08-07  Merge tag 'gds-for-linus-2023-08-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86/gds fixes from Dave Hansen: "Mitigate Gather Data Sampling issue:
 - Add Base GDS mitigation
 - Support GDS_NO under KVM
 - Fix a documentation typo"
* tag 'gds-for-linus-2023-08-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  Documentation/x86: Fix backwards on/off logic about YMM support
  KVM: Add GDS_NO support to KVM
  x86/speculation: Add Kconfig option for GDS
  x86/speculation: Add force option to GDS mitigation
  x86/speculation: Add Gather Data Sampling mitigation
2023-08-07  Merge branch 'bpf: Support bpf_get_func_ip helper in uprobes'  (Martin KaFai Lau)
Jiri Olsa says:
====================
Adding support for the bpf_get_func_ip helper for uprobe programs to return the probed address for both uprobe and return uprobe, as suggested by Andrii in [1]. We agreed that uprobes can have a special use of the bpf_get_func_ip helper that differs from kprobes.
The kprobe bpf_get_func_ip returns:
 - the address of the function if the probe is attached on function entry, for both kprobe and return kprobe
 - 0 if the probe is not attached on function entry
The uprobe bpf_get_func_ip returns:
 - the address of the probe for both uprobe and return uprobe
The reason for this semantic change is that the kernel can't really tell if the probed user space address is a function entry.
v3 changes:
 - removed bpf_get_func_ip_uprobe helper function [Yonghong]
Also available at: https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git uprobe_get_func_ip
[1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@mail.gmail.com/
====================
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-08-07  selftests/bpf: Add bpf_get_func_ip test for uprobe inside function  (Jiri Olsa)
Add a get_func_ip test for a uprobe inside a function, validating that the get_func_ip helper returns the correct probe address value. Tested-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20230807085956.2344866-4-jolsa@kernel.org Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-08-07  selftests/bpf: Add bpf_get_func_ip tests for uprobe on function entry  (Jiri Olsa)
Add get_func_ip tests for a uprobe on function entry, validating that bpf_get_func_ip returns proper values from both the uprobe and the return uprobe. Tested-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20230807085956.2344866-3-jolsa@kernel.org Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-08-07  bpf: Add support for bpf_get_func_ip helper for uprobe program  (Jiri Olsa)
Add support for the bpf_get_func_ip helper for uprobe programs to return the probed address for both uprobe and return uprobe. We discussed this in [1] and agreed that uprobes can have a special use of the bpf_get_func_ip helper that differs from kprobes.
The kprobe bpf_get_func_ip returns:
 - the address of the function if the probe is attached on function entry, for both kprobe and return kprobe
 - 0 if the probe is not attached on function entry
The uprobe bpf_get_func_ip returns:
 - the address of the probe for both uprobe and return uprobe
The reason for this semantic change is that the kernel can't really tell if the probed user space address is a function entry. The uprobe program is actually a kprobe-type program attached as a uprobe. One of the consequences of this design is that uprobes do not have their own set of helpers, but share them with kprobes. As we need different functionality for the bpf_get_func_ip helper for uprobes, add a bool value to bpf_trace_run_ctx, so the helper can detect that it is executed in uprobe context and call the specific code. The is_uprobe bool is set to true in bpf_prog_run_array_sleepable, which is currently used only for executing bpf programs in uprobes. Rename bpf_prog_run_array_sleepable to bpf_prog_run_array_uprobe to reflect that it is only used for uprobes and that it sets run_ctx.is_uprobe, as suggested by Yafang Shao. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Tested-by: Alan Maguire <alan.maguire@oracle.com> [1] https://lore.kernel.org/bpf/CAEf4BzZ=xLVkG5eurEuvLU79wAMtwho7ReR+XJAgwhFF4M-7Cg@mail.gmail.com/ Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Viktor Malik <vmalik@redhat.com> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20230807085956.2344866-2-jolsa@kernel.org Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
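A hedged sketch of the mechanism (struct and helper names are simplified; only bpf_trace_run_ctx, is_uprobe and bpf_prog_run_array_uprobe are taken from the description above):
  /* run context flag set by bpf_prog_run_array_uprobe() */
  struct bpf_trace_run_ctx_sketch {
          unsigned long ip;   /* probed address saved by the uprobe dispatcher */
          bool is_uprobe;
  };

  static unsigned long get_func_ip_sketch(struct bpf_trace_run_ctx_sketch *run_ctx,
                                          struct pt_regs *regs)
  {
          if (run_ctx->is_uprobe)
                  return run_ctx->ip;             /* uprobe: the probe address itself */

          return kprobe_entry_ip_or_zero(regs);   /* hypothetical kprobe-side path */
  }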
2023-08-07  Merge tag 'x86_bugs_srso' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86/srso fixes from Borislav Petkov: "Add a mitigation for the speculative RAS (Return Address Stack) overflow vulnerability on AMD processors. In short, this is yet another issue where userspace poisons a microarchitectural structure which can then be used to leak privileged information through a side channel"
* tag 'x86_bugs_srso' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/srso: Tie SBPB bit setting to microcode patch detection
  x86/srso: Add a forgotten NOENDBR annotation
  x86/srso: Fix return thunks in generated code
  x86/srso: Add IBPB on VMEXIT
  x86/srso: Add IBPB
  x86/srso: Add SRSO_NO support
  x86/srso: Add IBPB_BRTYPE support
  x86/srso: Add a Speculative RAS Overflow mitigation
  x86/bugs: Increase the x86 bugs vector size to two u32s
2023-08-07  selftests/bpf: Add a movsx selftest for sign-extension of R10  (Yonghong Song)
A movsx selftest is added for sign-extension of frame pointer R10. The verification fails for both privileged and unprivileged prog runs. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20230807175726.672394-1-yonghong.song@linux.dev Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-08-07  bpf: Fix an incorrect verification success with movsx insn  (Yonghong Song)
syzbot reports a verifier bug which triggers a runtime panic. The test bpf program is:
  0: (62) *(u32 *)(r10 -8) = 553656332
  1: (bf) r1 = (s16)r10
  2: (07) r1 += -8
  3: (b7) r2 = 3
  4: (bd) if r2 <= r1 goto pc+0
  5: (85) call bpf_trace_printk#-138320
  6: (b7) r0 = 0
  7: (95) exit
At insn 1, the current implementation keeps 'r1' as a frame pointer, which caused the later bpf_trace_printk helper call to crash since the frame pointer address is not valid any more. Note that at insn 4, the 'pointer vs. scalar' comparison is allowed for a privileged prog run. To fix the problem with the above insn 1, the patch adopts a pattern similar to the existing 'R1 = (u32) R2' handling. For an unprivileged prog run, verification will fail with 'R<num> sign-extension part of pointer'. For a privileged prog run, the dst_reg 'r1' will be marked as an unknown scalar, so the later 'bpf_trace_printk' helper will complain since it expected certain pointers. Reported-by: syzbot+d61b595e9205573133b3@syzkaller.appspotmail.com Fixes: 8100928c8814 ("bpf: Support new sign-extension mov insns") Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230807175721.671696-1-yonghong.song@linux.dev Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
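A hedged pseudo-C sketch of the verifier behaviour described above (the condition names are illustrative, not the exact check_alu_op() code; verbose(), mark_reg_unknown() and allow_ptr_leaks are existing verifier facilities):
  if (is_movsx && src_is_pointer) {
          if (!env->allow_ptr_leaks) {
                  verbose(env, "R%d sign-extension part of pointer\n",
                          insn->src_reg);
                  return -EACCES;
          }
          /* privileged: degrade dst to an unknown scalar, never keep it a pointer */
          mark_reg_unknown(env, regs, insn->dst_reg);
  }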
2023-08-07  bpf, docs: Formalize type notation and function semantics in ISA standard  (Will Hawkins)
Give a single place where the shorthand for types is defined and the semantics of helper functions are described. Signed-off-by: Will Hawkins <hawkinsw@obs.cr> Acked-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20230807140651.122484-1-hawkinsw@obs.cr Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-08-07  Merge tag 'wq-for-6.5-rc5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq  (Linus Torvalds)
Pull workqueue fixes from Tejun Heo:
 - The recently added cpu_intensive auto detection and warning mechanism was spuriously triggered on slow CPUs. While not causing serious issues, it's still a nuisance and can cause unintended concurrency management behaviors. Relax the threshold on machines with lower BogoMIPS. While BogoMIPS is not an accurate measure of performance by most measures, we don't have to be accurate and it has a rough but strong enough correlation.
 - A correction in Kconfig help text
* tag 'wq-for-6.5-rc5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: Scale up wq_cpu_intensive_thresh_us if BogoMIPS is below 4000
  workqueue: Fix cpu_intensive_thresh_us name in help text
2023-08-07  Merge branch 'page_pool-a-couple-of-assorted-optimizations'  (Jakub Kicinski)
Alexander Lobakin says:
====================
page_pool: a couple of assorted optimizations
This initially was a spin-off of the IAVF PP series[0], but has grown (and shrunk) a bunch since then. In fact, it consists of three semi-independent blocks:
* #1-2: Compile-time optimization. Split page_pool.h into 2 headers to avoid bloating the consumers that don't need the complex inline helpers, and then stop including it in skbuff.h at all. The first patch is also a prereq for the whole series.
* #3: Improve cacheline locality for users of the Page Pool frag API.
* #4-6: Use direct cache recycling more aggressively, when it is safe obviously. In addition, make sure nobody wants to use the Page Pool API with disabled interrupts.
Patches #1 and #5 are authored by Yunsheng and Jakub respectively, with small modifications from my side as per ML discussions. For the perf numbers for #3-6, please see the individual commit messages. Also available on my GH with many more Page Pool goodies[1].
[0] https://lore.kernel.org/netdev/20230530150035.1943669-1-aleksander.lobakin@intel.com
[1] https://github.com/alobakin/linux/commits/iavf-pp-frag
====================
Link: https://lore.kernel.org/r/20230804180529.2483231-1-aleksander.lobakin@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  net: skbuff: always try to recycle PP pages directly when in softirq  (Alexander Lobakin)
Commit 8c48eea3adf3 ("page_pool: allow caching from safely localized NAPI") allowed direct recycling of skb pages to their PP for some cases, but unfortunately missed a couple of other major ones. For example, %XDP_DROP in skb mode. The netstack just calls kfree_skb(), which unconditionally passes `false` as @napi_safe. Thus, all pages go through the ptr_ring and locks, although most of the time we're actually inside the NAPI polling this PP is linked with, so it would be perfectly safe to recycle pages directly. Let's address that. If @napi_safe is true, we're fine, don't change anything for this path. But if it's false, check whether we are in softirq context. It will most likely be so, and then, if ->list_owner is our current CPU, we're good to use direct recycling, even though @napi_safe is false -- concurrent access is excluded. The in_softirq() protection is needed mostly because we can hit this place in process context (not in hardirq though). For the mentioned xdp-drop-skb-mode case, the improvement I got is 3-4% in Mpps. As for page_pool stats, recycle_ring is now 0 and the alloc_slow counter doesn't change most of the time, which means the MM layer is not even called to allocate any new pages. Suggested-by: Jakub Kicinski <kuba@kernel.org> # in_softirq() Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Link: https://lore.kernel.org/r/20230804180529.2483231-7-aleksander.lobakin@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
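A hedged sketch of the resulting recycling decision, condensed from the description above (pp->p.napi and ->list_owner are the fields named in these commits; the wrapper function is illustrative):
  static bool pp_allow_direct_sketch(const struct page_pool *pp, bool napi_safe)
  {
          const struct napi_struct *napi = READ_ONCE(pp->p.napi);

          /* direct recycling is safe either when the caller says so, or
           * when we run in softirq on the CPU that owns this pool's NAPI
           */
          return napi_safe ||
                 (in_softirq() && napi &&
                  READ_ONCE(napi->list_owner) == smp_processor_id());
  }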
2023-08-07  page_pool: add a lockdep check for recycling in hardirq  (Jakub Kicinski)
Page pool use in hardirq is prohibited, add debug checks to catch misuses. IIRC we previously discussed using DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns that people will have DEBUG_NET enabled in perf testing. I don't think anyone enables lockdep in perf testing, so use lockdep to avoid pushback and arguing :) Acked-by: Jesper Dangaard Brouer <hawk@kernel.org> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Link: https://lore.kernel.org/r/20230804180529.2483231-6-aleksander.lobakin@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  net: skbuff: avoid accessing page_pool if !napi_safe when returning page  (Alexander Lobakin)
Currently, pp->p.napi is always read, but the variable it gets assigned to is only used when @napi_safe is true. For the !napi_safe cases, which are still the bulk, it's an unneeded operation. Moreover, it can lead to premature or even redundant page_pool cacheline access. For example, when page_pool_is_last_frag() returns false (with the recent frag improvements). Thus, read it only when @napi_safe is true. This also allows moving @napi inside the condition block itself. Constify it while we are here, because why not. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Link: https://lore.kernel.org/r/20230804180529.2483231-5-aleksander.lobakin@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
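A hedged, simplified sketch of the change: the pp->p.napi read, and the page_pool cacheline it touches, now sit behind the @napi_safe check.
  const struct napi_struct *napi = NULL;

  if (napi_safe) {
          /* only now is the page_pool cacheline touched */
          napi = READ_ONCE(pp->p.napi);
          allow_direct = napi &&
                         READ_ONCE(napi->list_owner) == smp_processor_id();
  }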
2023-08-07  page_pool: place frag_* fields in one cacheline  (Alexander Lobakin)
On x86_64, frag_* fields of struct page_pool are scattered across two cachelines despite the summary size of 24 bytes. All three fields are used in pretty much the same places, but the last field, ::frag_users, is pushed out to the next CL, provoking unwanted false-sharing on hotpath (frags allocation code). There are some holes and cold members to move around. Move frag_* one block up, placing them right after &page_pool_params perfectly at the beginning of CL2. This doesn't do any meaningful to the second block, as those are some destroy-path cold structures, and doesn't do anything to ::alloc_stats, which still starts at 200-byte offset, 8 bytes after CL3 (still fitting into 1 cacheline). On my setup, this yields 1-2% of Mpps when using PP frags actively. When it comes to 32-bit architectures with 32-byte CL: &page_pool_params plus ::pad is 44 bytes, the block taken care of is 16 bytes within one CL, so there should be at least no regressions from the actual change. ::pages_state_hold_cnt is not related directly to that triple, but is paired currently with ::frags_offset and decoupling them would mean either two 4-byte holes or more invasive layout changes. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Link: https://lore.kernel.org/r/20230804180529.2483231-4-aleksander.lobakin@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  net: skbuff: don't include <net/page_pool/types.h> to <linux/skbuff.h>  (Alexander Lobakin)
Currently, touching <net/page_pool/types.h> triggers a rebuild of more than half of the kernel. That's because it's included in <linux/skbuff.h>. And each new include to page_pool/types.h adds more [useless] data for the toolchain to process per each source file from that pile. In commit 6a5bcd84e886 ("page_pool: Allow drivers to hint on SKB recycling"), Matteo included it to be able to call a couple of functions defined there. Then, in commit 57f05bc2ab24 ("page_pool: keep pp info as long as page pool owns the page") one of the calls was removed, so only one was left. It's the call to page_pool_return_skb_page() in napi_frag_unref(). The function is external and doesn't have any dependencies. Having very niche page_pool_types.h included only for that looks like an overkill. As %PP_SIGNATURE is not local to page_pool.c (was only in the early submissions), nothing holds this function there. Teleport page_pool_return_skb_page() to skbuff.c, just next to the main consumer, skb_pp_recycle(), and rename it to napi_pp_put_page(), as it doesn't work with skbs at all and the former name tells nothing. The #if guards here are only to not compile and have it in the vmlinux when not needed -- both call sites are already guarded. Now, touching page_pool_types.h only triggers rebuilding of the drivers using it and a couple of core networking files. Suggested-by: Jakub Kicinski <kuba@kernel.org> # make skbuff.h less heavy Suggested-by: Alexander Duyck <alexanderduyck@fb.com> # move to skbuff.c Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Link: https://lore.kernel.org/r/20230804180529.2483231-3-aleksander.lobakin@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  page_pool: split types and declarations from page_pool.h  (Yunsheng Lin)
Split types and pure function declarations from page_pool.h and add them in page_pool/types.h, so that C sources can include page_pool.h while headers should generally only include page_pool/types.h, as suggested by Jakub. Rename page_pool.h to page_pool/helpers.h to have both in one place. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Link: https://lore.kernel.org/r/20230804180529.2483231-2-aleksander.lobakin@intel.com [Jakub: change microsoft/mana, fix kdoc paths in Documentation] Signed-off-by: Jakub Kicinski <kuba@kernel.org>
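A hedged sketch of the include discipline that results from the split (the helper named in the comment is one of the existing Page Pool API calls):
  /* in a header that only needs the types, e.g. a driver-private .h: */
  #include <net/page_pool/types.h>    /* struct page_pool, page_pool_params */

  /* in a .c file that actually allocates and recycles pages: */
  #include <net/page_pool/helpers.h>  /* page_pool_dev_alloc_pages() etc. */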
2023-08-07  Merge tag 'tpmdd-v6.5-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd  (Linus Torvalds)
Pull tpm fixes from Jarkko Sakkinen: "A few more bug fixes"
* tag 'tpmdd-v6.5-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd:
  tpm/tpm_tis: Disable interrupts for Lenovo P620 devices
  tpm: Disable RNG for all AMD fTPMs
  sysctl: set variable key_sysctls storage-class-specifier to static
  tpm/tpm_tis: Disable interrupts for TUXEDO InfinityBook S 15/17 Gen7
2023-08-07  ice: clean up __ice_aq_get_set_rss_lut()  (Przemek Kitszel)
Refactor __ice_aq_get_set_rss_lut() to improve reader experience and limit misuse scenarios (undesired LUT size for given LUT type). Allow only 3 RSS LUT type+size variants: PF LUT sized 2048, GLOBAL LUT sized 512, and VSI LUT sized 64, which were used on default flows prior to this commit. Prior to the change, code was mixing the meaning of @params->lut_size and @params->lut_type, flag assigning logic was cryptic, while long defines made everything harder to follow. Fix that by extracting some code out to separate helpers. Drop some of "shift by 0" statements that originated from Intel's internal HW documentation. Drop some redundant VSI masks (since ice_is_vsi_valid() gives "valid" for up to 0x300 VSIs). After sweeping all the defines out of struct ice_aqc_get_set_rss_lut, it fits into 7 lines. Finally apply some cleanup to the callsite (use of the new enums, tmp var for lengthy bit extraction). Note that flags for 128 and 64 sized VSI LUT are the same, and 64 is used everywhere in the code (updated to new enum here), it just happened that there was 128 in flag name. __ice_aq_get_set_rss_key() uses the same VSI valid bit, make constant common for it and __ice_aq_get_set_rss_lut(). Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Tested-by: Arpana Arland <arpanax.arland@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-08-07  ice: add FW load wait  (Jan Sokolowski)
As some cards load FW from external sources, we have to wait to be sure that the FW is ready before setting the link up. Add a check and wait for FW readiness. Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-08-07  ice: Add get C827 PHY index function  (Karol Kolacinski)
Add a function to find the C827 PHY node handle and return C827 PHY index for the E810 products. In order to bring this function to full functionality, some helpers for this were written by Michal Michalik. Co-developed-by: Michal Michalik <michal.michalik@intel.com> Signed-off-by: Michal Michalik <michal.michalik@intel.com> Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com> Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-08-07  ice: Rename enum ice_pkt_flags values  (Marcin Szycik)
enum ice_pkt_flags contains values such as ICE_PKT_FLAGS_VLAN and ICE_PKT_FLAGS_TUNNEL, but actually the flags words which they refer to contain a range of unrelated values - e.g. word 0 (ICE_PKT_FLAGS_VLAN) contains fields such as from_network and ucast, which have nothing to do with VLAN. Rename each enum value to ICE_PKT_FLAGS_MDID<number>, so it's clear in which flags word does some value reside. Signed-off-by: Marcin Szycik <marcin.szycik@linux.intel.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-08-07  ice: Add direction metadata  (Marcin Szycik)
Currently it is possible to create a filter which breaks TX traffic, e.g.: tc filter add dev $PF1 ingress protocol ip prio 1 flower ip_proto udp dst_port $PORT action mirred egress redirect dev $VF1_PR This adds a rule which might match both TX and RX traffic, and in TX path the PF will actually receive the traffic, which breaks communication. To fix this, add a match on direction metadata flag when adding a tc rule. Because of the way metadata is currently handled, a duplicate lookup word would appear if VLAN metadata is also added. The lookup would still work correctly, but one word would be wasted. To prevent it, lookup 0 now always contains all metadata. When any metadata needs to be added, it is added to lookup 0 and lookup count is not incremented. This way, two flags residing in the same word will take up one word, instead of two. Note: the drop action is also affected, i.e. it will now only work in one direction. Signed-off-by: Marcin Szycik <marcin.szycik@linux.intel.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-08-07  ice: Accept LAG netdevs in bridge offloads  (Wojciech Drewek)
Allow LAG interfaces to be used in bridge offload using netif_is_lag_master. In this case, search for ice netdev in the list of LAG's lower devices. Reviewed-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com> Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-08-07  Merge branch 'wireguard-fixes-for-6-5-rc6'  (Jakub Kicinski)
Jason A. Donenfeld says: ==================== wireguard fixes for 6.5-rc6 Just one patch this time, somewhat late in the cycle: 1) Fix an off-by-one calculation for the maximum node depth size in the allowedips trie data structure, and also adjust the self-tests to hit this case so it doesn't regress again in the future. ==================== Link: https://lore.kernel.org/r/20230807132146.2191597-1-Jason@zx2c4.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  wireguard: allowedips: expand maximum node depth  (Jason A. Donenfeld)
In the allowedips self-test, nodes are inserted into the tree, but it generated an even number of nodes, while for checking the maximum node depth there is of course the root node, which makes the total number necessarily odd. With too few nodes added, it never triggered the maximum depth check like it should have. So, add 129 nodes instead of 128 nodes, and do so with a more straightforward scheme, starting with all the bits set and shifting over one each time. Then increase the maximum depth to 129, and choose a better name for that variable to make it clear that it represents depth as opposed to bits. Cc: stable@vger.kernel.org Fixes: e7096c131e51 ("net: WireGuard secure network tunnel") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Link: https://lore.kernel.org/r/20230807132146.2191597-2-Jason@zx2c4.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
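A hedged sketch of one way to read the new self-test scheme (the insert helper is hypothetical; only the 129 count and the shift-by-one idea come from the commit text):
  struct in6_addr addr;
  int i;

  memset(&addr, 0xff, sizeof(addr));              /* start with all bits set */
  for (i = 0; i <= 128; ++i)                      /* 129 insertions, not 128 */
          insert_v6_sketch(&addr, 128 - i);       /* one trie level deeper each time */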
2023-08-07  Merge tag 'linux-can-next-for-6.6-20230807' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next  (Jakub Kicinski)
Marc Kleine-Budde says:
====================
pull-request: can-next 2023-08-07
The patch is from me and reverts the addition of the CAN controller nodes in the allwinner d1 SoC.
* tag 'linux-can-next-for-6.6-20230807' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next:
  Revert "riscv: dts: allwinner: d1: Add CAN controller nodes"
====================
Link: https://lore.kernel.org/r/20230807074222.1576119-1-mkl@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  bonding: Fix incorrect deletion of ETH_P_8021AD protocol vid from slaves  (Ziyang Xuan)
BUG_ON(!vlan_info) is triggered in unregister_vlan_dev() with following testcase: # ip netns add ns1 # ip netns exec ns1 ip link add bond0 type bond mode 0 # ip netns exec ns1 ip link add bond_slave_1 type veth peer veth2 # ip netns exec ns1 ip link set bond_slave_1 master bond0 # ip netns exec ns1 ip link add link bond_slave_1 name vlan10 type vlan id 10 protocol 802.1ad # ip netns exec ns1 ip link add link bond0 name bond0_vlan10 type vlan id 10 protocol 802.1ad # ip netns exec ns1 ip link set bond_slave_1 nomaster # ip netns del ns1 The logical analysis of the problem is as follows: 1. create ETH_P_8021AD protocol vlan10 for bond_slave_1: register_vlan_dev() vlan_vid_add() vlan_info_alloc() __vlan_vid_add() // add [ETH_P_8021AD, 10] vid to bond_slave_1 2. create ETH_P_8021AD protocol bond0_vlan10 for bond0: register_vlan_dev() vlan_vid_add() __vlan_vid_add() vlan_add_rx_filter_info() if (!vlan_hw_filter_capable(dev, proto)) // condition established because bond0 without NETIF_F_HW_VLAN_STAG_FILTER return 0; if (netif_device_present(dev)) return dev->netdev_ops->ndo_vlan_rx_add_vid(dev, proto, vid); // will be never called // The slaves of bond0 will not refer to the [ETH_P_8021AD, 10] vid. 3. detach bond_slave_1 from bond0: __bond_release_one() vlan_vids_del_by_dev() list_for_each_entry(vid_info, &vlan_info->vid_list, list) vlan_vid_del(dev, vid_info->proto, vid_info->vid); // bond_slave_1 [ETH_P_8021AD, 10] vid will be deleted. // bond_slave_1->vlan_info will be assigned NULL. 4. delete vlan10 during delete ns1: default_device_exit_batch() dev->rtnl_link_ops->dellink() // unregister_vlan_dev() for vlan10 vlan_info = rtnl_dereference(real_dev->vlan_info); // real_dev of vlan10 is bond_slave_1 BUG_ON(!vlan_info); // bond_slave_1->vlan_info is NULL now, bug is triggered!!! Add S-VLAN tag related features support to bond driver. So the bond driver will always propagate the VLAN info to its slaves. Fixes: 8ad227ff89a7 ("net: vlan: add 802.1ad support") Suggested-by: Ido Schimmel <idosch@idosch.org> Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/20230802114320.4156068-1-william.xuanziyang@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
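A hedged sketch of the fix as described (the feature flags are the standard netdev feature bits; their exact placement in bond_setup() is an assumption):
  /* advertise S-VLAN (802.1ad) offload bits on the bond device, so the
   * VLAN core keeps propagating [ETH_P_8021AD, vid] entries to slaves
   */
  bond_dev->hw_features |= NETIF_F_HW_VLAN_STAG_RX |
                           NETIF_F_HW_VLAN_STAG_TX |
                           NETIF_F_HW_VLAN_STAG_FILTER;
  bond_dev->features    |= NETIF_F_HW_VLAN_STAG_RX |
                           NETIF_F_HW_VLAN_STAG_TX |
                           NETIF_F_HW_VLAN_STAG_FILTER;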
2023-08-07  Merge branch 'net-stmmac-correct-mac-propagation-delay'  (Jakub Kicinski)
Johannes Zink says: ==================== net: stmmac: correct MAC propagation delay Changes in v3: - work in Richard's review feedback. Thank you for reviewing my patch: - as some of the hardware may have no or invalid correction value registers: introduce feature switch which can be enabled in the glue code drivers depending on the actual hardware support - only enable the feature on the i.MX8MP for the time being, as the patch improves timing accuracy and is tested for this hardware - Link to v2: https://lore.kernel.org/r/20230719-stmmac_correct_mac_delay-v2-1-3366f38ee9a6@pengutronix.de Changes in v2: - fix builds for 32bit, this was found by the kernel build bot Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202307200225.B8rmKQPN-lkp@intel.com/ - while at it also fix an overflow by shifting a u32 constant from macro by 10bits by casting the constant to u64 - Link to v1: https://lore.kernel.org/r/20230719-stmmac_correct_mac_delay-v1-1-768aa4d09334@pengutronix.de Tested-by: Kurt Kanzenbach <kurt@linutronix.de> # imx8mp ==================== Link: https://lore.kernel.org/r/20230719-stmmac_correct_mac_delay-v3-0-61e63427735e@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  net: stmmac: dwmac-imx: enable MAC propagation delay correction for i.MX8MP  (Johannes Zink)
As the i.MX8MP supports reading MAC propagation delay and correcting the Hardware timestamp counter for additional delays [1], enable the feature for this SoC. This reduces phase error of the PPS output from the PTP Hardware Clock from approx 150ns to 100ns. [1] i.MX8MP Reference Manual, rev.1 Section 11.7.2.5.3 "Timestamp correction" Signed-off-by: Johannes Zink <j.zink@pengutronix.de> Link: https://lore.kernel.org/r/20230719-stmmac_correct_mac_delay-v3-2-61e63427735e@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-07  net: stmmac: correct MAC propagation delay  (Johannes Zink)
The IEEE 1588 standard specifies that the timestamps of packets must be captured when the PTP message timestamp point (leading edge of the first octet after the start of frame delimiter) crosses the boundary between the node and the network. As the MAC latches the timestamp at an internal point, the captured timestamp must be corrected for the additional data transmission latency, as described in the publicly available datasheet [1]. This patch only corrects for the MAC-internal delay, which can be read out from the MAC_Ingress_Timestamp_Latency register on DWMAC version 5, since the PHY framework currently does not support querying the PHY ingress and egress latency. The Clock Domain Crossing Circuits errors as indicated in [1] are already accounted for in the stmmac_get_tx_hwtstamp() function and are not corrected here. As the latency varies for different link speeds and MII modes of operation, the correction value needs to be updated on each link state change. As the delay also causes a phase shift in the timestamp counter compared to the rest of the network, this correction will also reduce the phase error when generating PPS outputs from the timestamp counter. Since the correction registers may be unavailable on some hardware and no feature bits are documented for dynamic detection of the MAC propagation delay readout, introduce a feature bit to explicitly enable MAC delay correction in the glue code driver. [1] i.MX8MP Reference Manual, rev.1 Section 11.7.2.5.3 "Timestamp correction" Signed-off-by: Johannes Zink <j.zink@pengutronix.de> Link: https://lore.kernel.org/r/20230719-stmmac_correct_mac_delay-v2-1-3366f38ee9a6@pengutronix.de Link: https://lore.kernel.org/r/20230719-stmmac_correct_mac_delay-v3-1-61e63427735e@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
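A hedged sketch of the correction flow (all symbol names below are illustrative, not actual stmmac symbols; only the register name and the link-change trigger come from the text above):
  static void stmmac_correct_mac_delay_sketch(struct sketch_priv *priv)
  {
          u64 ingress_ns;

          if (!priv->correct_mac_delay)   /* feature bit set by the glue driver, e.g. dwmac-imx */
                  return;

          /* the latency depends on link speed and MII mode, so re-read it
           * on every link state change (MAC_Ingress_Timestamp_Latency on DWMAC5)
           */
          ingress_ns = read_mac_ingress_latency_ns(priv);
          priv->rx_ts_correction_ns = ingress_ns;   /* applied to captured RX timestamps */
  }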
2023-08-07  net/mlx5e: Add capability check for vnic counters  (Lama Kayal)
Add the missing capability check for each of the vnic counters exposed by the devlink health reporter, and thus avoid unexpected behavior due to invalid register access. While at it, read only the exact number of bits for each counter, whether it is 32 or 64 bits. Fixes: b0bc615df488 ("net/mlx5: Add vnic devlink health reporter to PFs/VFs") Fixes: a33682e4e78e ("net/mlx5e: Expose catastrophic steering error counters") Signed-off-by: Lama Kayal <lkayal@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Rahul Rameshbabu <rrameshbabu@nvidia.com> Reviewed-by: Maher Sanalla <msanalla@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-08-07  net/mlx5: Reload auxiliary devices in pci error handlers  (Moshe Shemesh)
Handling PCI errors should fully tear down and reload the auxiliary devices, the same as is done through the mlx5 health recovery flow. Fixes: 72ed5d5624af ("net/mlx5: Suspend auxiliary devices only in case of PCI device suspend") Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-08-07  net/mlx5: Skip clock update work when device is in error state  (Moshe Shemesh)
When the device is in an error state, marked by the flag MLX5_DEVICE_STATE_INTERNAL_ERROR, the HW and PCI may not be accessible, so the clock update work should be skipped. Furthermore, such access through PCI in the error state, after calling mlx5_pci_disable_device(), can result in failing to recover from PCI errors. Fixes: ef9814deafd0 ("net/mlx5e: Add HW timestamping (TS) support") Reported-and-tested-by: Ganesh G R <ganeshgr@linux.ibm.com> Closes: https://lore.kernel.org/netdev/9bdb9b9d-140a-7a28-f0de-2e64e873c068@nvidia.com Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Aya Levin <ayal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
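A hedged sketch of the guard inside the periodic clock update work (surrounding code elided; only the device-state flag comes from the commit text):
  if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
          goto out;    /* skip the HW/PCI access, just re-arm the delayed work */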
2023-08-07  net/mlx5: LAG, Check correct bucket when modifying LAG  (Shay Drory)
The cited patch introduced buckets in hash mode, but missed updating the ports/bucket check when modifying the LAG. Fix the check. Fixes: 352899f384d4 ("net/mlx5: Lag, use buckets in hash mode") Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-08-07  net/mlx5e: Unoffload post act rule when handling FIB events  (Chris Mi)
If having the following tc rule on stack device: filter parent ffff: protocol ip pref 3 flower chain 1 filter parent ffff: protocol ip pref 3 flower chain 1 handle 0x1 dst_mac 24:25:d0:e1:00:00 src_mac 02:25:d0:25:01:02 eth_type ipv4 ct_state +trk+new in_hw in_hw_count 1 action order 1: ct commit zone 0 pipe index 2 ref 1 bind 1 installed 3807 sec used 3779 sec firstused 3800 sec Action statistics: Sent 120 bytes 2 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 used_hw_stats delayed action order 2: tunnel_key set src_ip 192.168.1.25 dst_ip 192.168.1.26 key_id 4 dst_port 4789 csum pipe index 3 ref 1 bind 1 installed 3807 sec used 3779 sec firstused 3800 sec Action statistics: Sent 120 bytes 2 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 used_hw_stats delayed action order 3: mirred (Egress Redirect to device vxlan1) stolen index 9 ref 1 bind 1 installed 3807 sec used 3779 sec firstused 3800 sec Action statistics: Sent 120 bytes 2 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 used_hw_stats delayed When handling FIB events, the rule in post act will not be deleted. And because the post act rule has packet reformat and modify header actions, also will hit the following syndromes: mlx5_core 0000:08:00.0: mlx5_cmd_out_err:829:(pid 11613): DEALLOC_MODIFY_HEADER_CONTEXT(0x941) op_mod(0x0) failed, status bad resource state(0x9), syndrome (0x1ab444), err(-22) mlx5_core 0000:08:00.0: mlx5_cmd_out_err:829:(pid 11613): DEALLOC_PACKET_REFORMAT_CONTEXT(0x93e) op_mod(0x0) failed, status bad resource state(0x9), syndrome (0x179e84), err(-22) Fix it by unoffloading post act rule when handling FIB events. Fixes: 314e1105831b ("net/mlx5e: Add post act offload/unoffload API") Signed-off-by: Chris Mi <cmi@nvidia.com> Reviewed-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-08-07  net/mlx5: Fix devlink controller number for ECVF  (Daniel Jurgens)
The controller number for ECVFs is always 0, because the ECPF must be the eswitch owner for EC VFs to be enabled. Fixes: dc13180824b7 ("net/mlx5: Enable devlink port for embedded cpu VF vports") Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-08-07  net/mlx5: Allow 0 for total host VFs  (Daniel Jurgens)
When querying eswitch functions, 0 is a valid number of host VFs. After introducing ARM SRIOV, falling through to getting the max value from PCI results in using the total VFs allowed on the ARM for the host. Fixes: 86eec50beaf3 ("net/mlx5: Support querying max VFs from device") Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>