path: root/drivers/net/ethernet
2024-08-12  eth: mvpp2: implement new RSS context API  (Jakub Kicinski)

Implement the separate create/modify/delete ops for RSS. No problems with IDs: even though RSS tables are per device, the driver already seems to allocate IDs linearly per port, and there is a translation table from per-port context ID to device context ID. mvpp2 doesn't have a key for the hash; a context defaults to an empty/previous indirection table. Note that there is no key at all, so we don't have to be concerned with reporting the wrong one (which is addressed by a patch later in the series). Compile-tested only.

Reviewed-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-12  net: ethernet: mtk_wed: fix use-after-free panic in mtk_wed_setup_tc_block_cb()  (Zheng Zhang)

When there are multiple AP interfaces on one band and WED is on, bringing an interface down causes a kernel panic on MT798X. Previously, cb_priv was freed in mtk_wed_setup_tc_block() without being set to NULL, and mtk_wed_setup_tc_block_cb() didn't check the value either. Assign NULL after freeing cb_priv in mtk_wed_setup_tc_block() and check for NULL in mtk_wed_setup_tc_block_cb().

Unable to handle kernel paging request at virtual address 0072460bca32b4f5
Call trace:
 mtk_wed_setup_tc_block_cb+0x4/0x38
 0xffffffc0794084bc
 tcf_block_playback_offloads+0x70/0x1e8
 tcf_block_unbind+0x6c/0xc8
 ...

Fixes: 799684448e3e ("net: ethernet: mtk_wed: introduce wed wo support")
Signed-off-by: Zheng Zhang <everything411@qq.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

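The fix follows a common pattern for flow-block callbacks. A minimal sketch under assumed, simplified names (example_*, not the actual mtk_wed code):

    #include <linux/slab.h>
    #include <net/flow_offload.h>
    #include <net/pkt_cls.h>

    /* teardown: free the callback context and clear the stored pointer */
    static void example_block_unbind(struct flow_block_cb *block_cb)
    {
            kfree(block_cb->cb_priv);
            block_cb->cb_priv = NULL;  /* later playback passes NULL, not a freed pointer */
    }

    /* callback: tolerate a torn-down context */
    static int example_block_cb(enum tc_setup_type type, void *type_data,
                                void *cb_priv)
    {
            struct example_priv *priv = cb_priv;

            if (!priv)              /* callback may fire after teardown */
                    return -EINVAL;

            return example_offload(priv, type_data);  /* hypothetical helper */
    }
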
2024-08-12  net: mana: Fix RX buf alloc_size alignment and atomic op panic  (Haiyang Zhang)

The MANA driver's RX buffer alloc_size is passed into napi_build_skb() to create the SKB. skb_shinfo(skb) is located at the end of the skb, and its alignment is affected by the alloc_size passed into napi_build_skb(). The size needs to be aligned properly for better performance and for atomic operations. Otherwise, on ARM64 CPUs, for certain MTU settings like 4000, atomic operations may panic on skb_shinfo(skb)->dataref due to an alignment fault.

To fix this bug, add proper alignment to the alloc_size calculation.

Sample panic info:
[  253.298819] Unable to handle kernel paging request at virtual address ffff000129ba5cce
[  253.300900] Mem abort info:
[  253.301760]   ESR = 0x0000000096000021
[  253.302825]   EC = 0x25: DABT (current EL), IL = 32 bits
[  253.304268]   SET = 0, FnV = 0
[  253.305172]   EA = 0, S1PTW = 0
[  253.306103]   FSC = 0x21: alignment fault
Call trace:
 __skb_clone+0xfc/0x198
 skb_clone+0x78/0xe0
 raw6_local_deliver+0xfc/0x228
 ip6_protocol_deliver_rcu+0x80/0x500
 ip6_input_finish+0x48/0x80
 ip6_input+0x48/0xc0
 ip6_sublist_rcv_finish+0x50/0x78
 ip6_sublist_rcv+0x1cc/0x2b8
 ipv6_list_rcv+0x100/0x150
 __netif_receive_skb_list_core+0x180/0x220
 netif_receive_skb_list_internal+0x198/0x2a8
 __napi_poll+0x138/0x250
 net_rx_action+0x148/0x330
 handle_softirqs+0x12c/0x3a0

Cc: stable@vger.kernel.org
Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Long Li <longli@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

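A sketch of the kind of alignment the fix describes, using the generic skb helpers; this is illustrative, not the exact MANA calculation:

    #include <linux/skbuff.h>
    #include <linux/cache.h>

    /* napi_build_skb() places skb_shared_info right after the data area,
     * so round the buffer size up; otherwise shinfo->dataref can land on
     * an unaligned address and atomic ops on it fault on ARM64. */
    static unsigned int example_rx_alloc_size(unsigned int headroom,
                                              unsigned int datasize)
    {
            unsigned int size;

            size = SKB_DATA_ALIGN(headroom + datasize) +
                   SKB_DATA_ALIGN(sizeof(struct skb_shared_info));

            return ALIGN(size, SMP_CACHE_BYTES);
    }
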
2024-08-12  net: sunvnet: use ethtool_sprintf/puts  (Rosen Penev)

Simpler, and avoids manual pointer arithmetic.

Signed-off-by: Rosen Penev <rosenp@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

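For reference, a sketch of what such a conversion looks like (the stat-name array is illustrative, not the sunvnet code):

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    static const char example_stat_names[][ETH_GSTRING_LEN] = {
            "rx_packets", "tx_packets",
    };

    static void example_get_strings(struct net_device *dev, u32 sset, u8 *data)
    {
            int i;

            if (sset != ETH_SS_STATS)
                    return;

            /* ethtool_puts() copies the string and advances data itself,
             * replacing manual "data += ETH_GSTRING_LEN" arithmetic */
            for (i = 0; i < ARRAY_SIZE(example_stat_names); i++)
                    ethtool_puts(&data, example_stat_names[i]);
    }
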
2024-08-12  net: axienet: Fix register defines comment description  (Radhey Shyam Pandey)

In the axiethernet header, fix the register define comment descriptions to be in line with the IP documentation. This updates the descriptions of the MAC configuration register, the MDIO configuration register, and the frame filter control register.

Fixes: 8a3b7a252dca ("drivers/net/ethernet/xilinx: added Xilinx AXI Ethernet driver")
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-11  net: mvpp2: use device_for_each_child_node() to access device child nodes  (Javier Carrasco)

The iterated nodes are direct children of the device node, and the device_for_each_child_node() macro accounts for child node availability. fwnode_for_each_available_child_node() is meant to access the child nodes of an fwnode in general, not specifically the direct child nodes of the device node. The child nodes within mvpp2_probe are not accessed outside the loops, so the scoped version of the macro can be used to automatically decrement the refcount on early exits.

Use device_for_each_child_node() and its scoped variant to make it clear that the device's direct child nodes are being accessed.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Javier Carrasco <javier.carrasco.cruz@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-11  net: mvpp2: use port_count to remove ports  (Javier Carrasco)

As discussed in [1], there is no need to iterate over child nodes to remove the list of ports. Instead, a loop up to port_count ports can be used, and it is in fact more reliable in case the child node availability changes. The suggested approach also removes the need for the fwnode and port_fwnode variables in mvpp2_remove().

Link: https://lore.kernel.org/all/ZqdRgDkK1PzoI2Pf@shell.armlinux.org.uk/ [1]
Suggested-by: Russell King <linux@armlinux.org.uk>
Signed-off-by: Javier Carrasco <javier.carrasco.cruz@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

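The remove path then reduces to a bounded loop. A sketch, with the field and helper names assumed from the driver's existing structure:

    /* tear ports down by count instead of re-walking fwnodes */
    static void example_remove_ports(struct mvpp2 *priv)
    {
            int i;

            for (i = 0; i < priv->port_count; i++)
                    mvpp2_port_remove(priv->port_list[i]);
    }
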
2024-08-11  bnxt_en: only set dev->queue_mgmt_ops if supported by FW  (David Wei)

The queue API calls bnxt_hwrm_vnic_update() to stop/start the flow of packets, which can only properly flush the pipeline if the FW indicates support. Add a macro BNXT_SUPPORTS_QUEUE_API that checks for the required flags, and only set queue_mgmt_ops if it is true.

Signed-off-by: David Wei <dw@davidwei.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-11  bnxt_en: stop packet flow during bnxt_queue_stop/start  (David Wei)

The current implementation puts a queue into an inconsistent state when it is reset while packets are flowing; some synchronisation with the FW is needed. Add calls to bnxt_hwrm_vnic_update() to set the MRU for both the default and the ntuple vnic during queue start/stop. When the MRU is set to 0, flow is stopped. Each Rx queue belongs to either the default or the ntuple vnic.

With bnxt_hwrm_vnic_update() in place, the calls to napi_enable() and napi_disable() must be removed for reset to work on a queue that has active traffic flowing, e.g. iperf3.

Co-developed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: David Wei <dw@davidwei.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-11  bnxt_en: set vnic->mru in bnxt_hwrm_vnic_cfg()  (David Wei)

Set the newly added vnic->mru field in bnxt_hwrm_vnic_cfg().

Signed-off-by: David Wei <dw@davidwei.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-11  bnxt_en: Check the FW's VNIC flush capability  (Michael Chan)

Check the HWRM_VNIC_QCAPS FW response for the receive engine flush capability. This capability indicates that we can reliably support RX ring restart when calling HWRM_VNIC_UPDATE with MRU set to 0.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David Wei <dw@davidwei.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-11  bnxt_en: Add support to call FW to update a VNIC  (Michael Chan)

Add the function bnxt_hwrm_vnic_update() to call FW to update a VNIC. This call can be used when disabling and enabling a receive ring within a VNIC. The MRU, which is the maximum receive size of packets received by the VNIC, can be updated.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David Wei <dw@davidwei.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-11  bnxt_en: Update firmware interface to 1.10.3.68  (Michael Chan)

The main changes are:
1. HWRM_VNIC_UPDATE, used to safely disable and enable an RX ring within the VNIC.
2. New flag in HWRM_VNIC_QCAPS to indicate FW will do the proper flush during HWRM_VNIC_UPDATE.
3. New flag in HWRM_FUNC_QCAPS to indicate that reservations for some resources such as VNICs can be reduced.
4. New backing store memory types, not used by the driver yet.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David Wei <dw@davidwei.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-11  Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  (David S. Miller)

Tony Nguyen says:

====================
Intel Wired LAN Driver Updates 2024-08-07 (igc)

This series contains updates to the igc driver only.

Faizal adjusts the size of the MAC internal buffer on i226 devices to resolve an errata for leaking packet transmits. He also corrects a condition in which qbv_config_change_errors are incorrectly counted. Lastly, he adjusts the conditions for resetting the adapter when changing TSN Tx mode and corrects the conditions in which the gtxoffset register is set.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-11  net: ethernet: use ip_hdrlen() instead of bit shift  (Moon Yeounsu)

`ip_hdr(skb)->ihl << 2` is the same as `ip_hdrlen(skb)`. Therefore, we should use the well-defined helper rather than a bit shift to find the header length. It also compresses two lines to a single line.

Signed-off-by: Moon Yeounsu <yyyynoom@gmail.com>
Reviewed-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>

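The equivalence in question, as a small sketch (the wrapper function is illustrative):

    #include <net/ip.h>

    /* ip_hdrlen() returns ip_hdr(skb)->ihl * 4, i.e. the IHL field
     * scaled from 32-bit words to bytes */
    static unsigned int example_l4_offset(const struct sk_buff *skb)
    {
            return skb_network_offset(skb) + ip_hdrlen(skb);
            /* was: skb_network_offset(skb) + (ip_hdr(skb)->ihl << 2) */
    }
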
2024-08-09  net/mlx5e: Fix queue stats access to non-existing channels splat  (Gal Pressman)

The queue stats API queries the queues according to real_num_[tr]x_queues. In case the device is down and the channels have not yet been created, don't try to query their statistics.

To trigger the panic, run this command before the interface is brought up:
./cli.py --spec ../../../Documentation/netlink/specs/netdev.yaml --dump qstats-get --json '{"ifindex": 4}'

BUG: kernel NULL pointer dereference, address: 0000000000000c00
PGD 0 P4D 0
Oops: Oops: 0000 [#1] SMP PTI
CPU: 3 UID: 0 PID: 977 Comm: python3 Not tainted 6.10.0+ #40
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:mlx5e_get_queue_stats_rx+0x3c/0xb0 [mlx5_core]
Code: fc 55 48 63 ee 53 48 89 d3 e8 40 3d 70 e1 85 c0 74 58 4c 89 ef e8 d4 07 04 00 84 c0 75 41 49 8b 84 24 f8 39 00 00 48 8b 04 e8 <48> 8b 90 00 0c 00 00 48 03 90 40 0a 00 00 48 89 53 08 48 8b 90 08
RSP: 0018:ffff888116be37d0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff888116be3868 RCX: 0000000000000004
RDX: ffff88810ada4000 RSI: 0000000000000000 RDI: ffff888109df09c0
RBP: 0000000000000000 R08: 0000000000000004 R09: 0000000000000004
R10: ffff88813461901c R11: ffffffffffffffff R12: ffff888109df0000
R13: ffff888109df09c0 R14: ffff888116be38d0 R15: 0000000000000000
FS:  00007f4375d5c740(0000) GS:ffff88852c980000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000c00 CR3: 0000000106ada006 CR4: 0000000000370eb0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 ? __die+0x1f/0x60
 ? page_fault_oops+0x14e/0x3d0
 ? exc_page_fault+0x73/0x130
 ? asm_exc_page_fault+0x22/0x30
 ? mlx5e_get_queue_stats_rx+0x3c/0xb0 [mlx5_core]
 netdev_nl_stats_by_netdev+0x2a6/0x4c0
 ? __rmqueue_pcplist+0x351/0x6f0
 netdev_nl_qstats_get_dumpit+0xc4/0x1b0
 genl_dumpit+0x2d/0x80
 netlink_dump+0x199/0x410
 __netlink_dump_start+0x1aa/0x2c0
 genl_family_rcv_msg_dumpit+0x94/0xf0
 ? __pfx_genl_start+0x10/0x10
 ? __pfx_genl_dumpit+0x10/0x10
 ? __pfx_genl_done+0x10/0x10
 genl_rcv_msg+0x116/0x2b0
 ? __pfx_netdev_nl_qstats_get_dumpit+0x10/0x10
 ? __pfx_genl_rcv_msg+0x10/0x10
 netlink_rcv_skb+0x54/0x100
 genl_rcv+0x24/0x40
 netlink_unicast+0x21a/0x340
 netlink_sendmsg+0x1f4/0x440
 __sys_sendto+0x1b6/0x1c0
 ? do_sock_setsockopt+0xc3/0x180
 ? __sys_setsockopt+0x60/0xb0
 __x64_sys_sendto+0x20/0x30
 do_syscall_64+0x50/0x110
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7f43757132b0
Code: c0 ff ff ff ff eb b8 0f 1f 00 f3 0f 1e fa 41 89 ca 64 8b 04 25 18 00 00 00 85 c0 75 1d 45 31 c9 45 31 c0 b8 2c 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 68 c3 0f 1f 80 00 00 00 00 41 54 48 83 ec 20
RSP: 002b:00007ffd258da048 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007ffd258da0f8 RCX: 00007f43757132b0
RDX: 000000000000001c RSI: 00007f437464b850 RDI: 0000000000000003
RBP: 00007f4375085de0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: ffffffffc4653600 R14: 0000000000000001 R15: 00007f43751a6147
 </TASK>
Modules linked in: netconsole xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter rpcsec_gss_krb5 auth_rpcgss oid_registry overlay rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm mlx5_ib ib_uverbs ib_core zram zsmalloc mlx5_core fuse [last unloaded: netconsole]
CR2: 0000000000000c00
---[ end trace 0000000000000000 ]---
RIP: 0010:mlx5e_get_queue_stats_rx+0x3c/0xb0 [mlx5_core]
Code: fc 55 48 63 ee 53 48 89 d3 e8 40 3d 70 e1 85 c0 74 58 4c 89 ef e8 d4 07 04 00 84 c0 75 41 49 8b 84 24 f8 39 00 00 48 8b 04 e8 <48> 8b 90 00 0c 00 00 48 03 90 40 0a 00 00 48 89 53 08 48 8b 90 08
RSP: 0018:ffff888116be37d0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff888116be3868 RCX: 0000000000000004
RDX: ffff88810ada4000 RSI: 0000000000000000 RDI: ffff888109df09c0
RBP: 0000000000000000 R08: 0000000000000004 R09: 0000000000000004
R10: ffff88813461901c R11: ffffffffffffffff R12: ffff888109df0000
R13: ffff888109df09c0 R14: ffff888116be38d0 R15: 0000000000000000
FS:  00007f4375d5c740(0000) GS:ffff88852c980000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000c00 CR3: 0000000106ada006 CR4: 0000000000370eb0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400

Fixes: 7b66ae536a78 ("net/mlx5e: Add per queue netdev-genl stats")
Signed-off-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Joe Damato <jdamato@fastly.com>
Link: https://patch.msgid.link/20240808144107.2095424-6-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: Correctly report errors for ethtool rx flows  (Cosmin Ratiu)

Previously, an ethtool rx flow with no attrs would not be added to the NIC, as it has no rules to configure the hw with, but it would still be reported as successful to the caller (return code 0). This is confusing for the user, as ethtool then reports "Added rule $num" even though no rule was actually added. Correct that by instead rejecting such rules with -EINVAL.

Fixes: b29c61dac3a2 ("net/mlx5e: Ethtool steering flow validation refactoring")
Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808144107.2095424-5-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: Take state lock during tx timeout reporter  (Dragos Tatulea)

mlx5e_safe_reopen_channels() requires the state lock to be taken. The referenced change in the Fixes tag removed the lock to fix another issue. This patch adds it back, but at a later point (when calling mlx5e_safe_reopen_channels()), to avoid the deadlock referenced in the Fixes tag.

Fixes: eab0da38912e ("net/mlx5e: Fix possible deadlock on mlx5e_tx_timeout_work")
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Link: https://lore.kernel.org/all/ZplpKq8FKi3vwfxv@gmail.com/T/
Reviewed-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808144107.2095424-4-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: SHAMPO, Increase timeout to improve latency  (Dragos Tatulea)

During latency tests (netperf TCP_RR), a 30% degradation of HW GRO vs SW GRO was observed. This is due to SHAMPO triggering timeout filler CQEs instead of delivering the CQE for the packet.

Having a short timeout for SHAMPO doesn't bring any benefits, as it is the driver that does the merging, not the hardware. On the contrary, it can have a negative impact: additional filler CQEs are generated due to the timeout. As there is no way to disable this timeout, this change sets it to the maximum value.

Instead of using the packet_merge.timeout parameter, which is also used for LRO, set the value directly when filling in the rest of the SHAMPO parameters in mlx5e_build_rq_param().

Fixes: 99be56171fa9 ("net/mlx5e: SHAMPO, Re-enable HW-GRO")
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808144107.2095424-3-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5: SD, Do not query MPIR register if no sd_group  (Tariq Toukan)

Unconditionally calling the MPIR query on BF separate mode yields the FW syndrome below [1]. Do not call it unless the admin has clearly specified the SD group, i.e. expressed the intention of using the multi-PF netdev feature. This fix covers cases not covered in commit fca3b4791850 ("net/mlx5: Do not query MPIR on embedded CPU function").

[1] mlx5_cmd_out_err:808:(pid 8267): ACCESS_REG(0x805) op_mod(0x1) failed, status bad system state(0x4), syndrome (0x685f19), err(-5)

Fixes: 678eb448055a ("net/mlx5: SD, Implement basic query and instantiation")
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Link: https://patch.msgid.link/20240808144107.2095424-2-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: CT: Update connection tracking steering entries  (Cosmin Ratiu)

Previously, replacing a connection tracking steering entry was done by adding a new rule (with the same tag but possibly different mod hdr actions/labels) and then removing the old rule. This approach doesn't work in hardware steering, because two steering entries with the same tag cannot coexist in a hardware steering table.

This commit prepares for that by adding a new ct_rule_update operation on the ct_fs_ops struct, which is used instead of add+delete. Implementations for both dmfs (firmware steering) and smfs (software steering) are provided, which simply add the new rule and delete the old one.

Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-12-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: CT: 'update' rules instead of 'replace'  (Cosmin Ratiu)

Offloaded rules can be updated with a new modify header action containing a changed restore cookie. This was done using the verb 'replace', while in some configurations 'update' is a better fit. This commit renames the functions used to reflect that.

Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-11-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: Use extack in get module eeprom by page callback  (Gal Pressman)

In case of errors in the get module eeprom by page callback, reflect them through extack instead of a dmesg print. While at it, make the messages more human friendly.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-10-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: Use extack in set coalesce callback  (Gal Pressman)

In case of errors in the set coalesce callback, reflect them through extack instead of a dmesg print. While at it, make the messages more human friendly.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-9-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

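The shape of such a conversion, as a sketch; the limit check and its value are made up rather than the driver's real validation:

    #include <linux/ethtool.h>
    #include <linux/netlink.h>

    #define EXAMPLE_MAX_RX_USECS 4095       /* hypothetical device limit */

    static int example_set_coalesce(struct net_device *dev,
                                    struct ethtool_coalesce *coal,
                                    struct kernel_ethtool_coalesce *kcoal,
                                    struct netlink_ext_ack *extack)
    {
            if (coal->rx_coalesce_usecs > EXAMPLE_MAX_RX_USECS) {
                    /* surfaced to the ethtool user, not buried in dmesg */
                    NL_SET_ERR_MSG_MOD(extack, "rx-usecs is out of range");
                    return -ERANGE;
            }

            return 0;
    }
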
2024-08-09  net/mlx5e: Use extack in get coalesce callback  (Gal Pressman)

In case of errors in the get coalesce callback, reflect them through extack instead of a dmesg print.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-8-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: Use extack in set ringparams callback  (Gal Pressman)

In case of errors in the set ringparams callback, reflect them through extack instead of a dmesg print. While at it, make the messages more human friendly and remove two redundant checks that are already validated by the core.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-7-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: Be consistent with bitmap handling of link modes  (Gal Pressman)

Use the bitmap operations when accessing the advertised/supported link modes, and remove places that access them as arrays of unsigned longs (the underlying implementation of the bitmap). This makes the code much more readable and clear.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Carolina Jubran <cjubran@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-6-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

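A sketch of the helper-based style (the particular mode bits are chosen arbitrarily):

    #include <linux/linkmode.h>
    #include <linux/ethtool.h>

    static void example_fill_supported(struct ethtool_link_ksettings *ks)
    {
            /* bitmap helpers instead of indexing the unsigned long array */
            linkmode_zero(ks->link_modes.supported);
            linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
                             ks->link_modes.supported);
            linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
                             ks->link_modes.supported);
    }
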
2024-08-09  net/mlx5e: TC, Offload rewrite and mirror to both internal and external dests  (Jianbo Liu)

Firmware has the limitation that it cannot offload a rule with rewrite and mirror to internal and external destinations simultaneously. This patch adds a workaround for this issue. Here the destination array is split again, just as in the previous commit, but after the action indexed by split_count - 1. An extra rule is added for the leftover destinations. Such a rule can be offloaded even when there are both internal and external destinations, because the header rewrite is left in the original FTE.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-5-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: TC, Offload rewrite and mirror on tunnel over ovs internal port  (Jianbo Liu)

To offload the encap rule when the tunnel IP is configured on an openvswitch internal port, the driver needs to overwrite the vport metadata in reg_c0 with the value assigned to the internal port, then forward packets to the root table to be processed again by the rules matching on the metadata for such an internal port.

When such a rule is combined with header rewrite and mirror, openvswitch generates a rule like the following, because it resets the mirror after packets are modified:

in_port(enp8s0f0npf0sf1),.., actions:enp8s0f0npf0sf2,set(tunnel(...)),set(ipv4(...)),vxlan_sys_4789,enp8s0f0npf0sf2

The split_count was introduced before to support rewrite and mirror: the driver splits the rule into two different hardware rules in order to offload it. But that is not enough to offload the above, more complicated rule, because of limitations in both driver and firmware.

To resolve this issue, the destination array is split again after the destination indexed by split_count. An extra rule is added for the leftover destinations (in the above example, enp8s0f0npf0sf2) and inserted into the post_act table, and an extra destination is added to the original rule to forward to the post_act table, so the extra mirror is done there.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-4-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5e: Enable remove flow for hard packet limit  (Jianbo Liu)

In commit a2a73ea14b1a ("net/mlx5e: Don't listen to remove flows event"), the remove_flow_enable event was removed, and the hard limit usually relies on the software mechanism added in commit b2f7b01d36a9 ("net/mlx5e: Simulate missing IPsec TX limits hardware functionality"). But the delayed work there is rescheduled every second, which is too slow for fast traffic. As a result, traffic may not be blocked even when it reaches the hard limit, which usually happens when the soft and hard limits are very close. In reality this won't happen, because the soft limit is much lower than the hard limit.

Still, as an optimization for RX to block traffic when the hard limit is reached, set remove_flow_enable. When remove flow is enabled, the IPsec HARD_LIFETIME ASO syndrome will be set in the metadata defined in the ASO return register when packets reach the hard lifetime threshold, and those packets are dropped immediately by the steering table.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-3-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net/mlx5: E-Switch, Increase max int port number for offload  (Chris Mi)

Currently MLX5E_TC_MAX_INT_PORT_NUM is 8. Usually an int port has one ingress rule and one egress rule, but sometimes a temporary rule can be offloaded as well, e.g.:

recirc_id(0),in_port(br-phy),eth(src=10:70:fd:87:57:c0,dst=33:33:00:00:00:16), eth_type(0x86dd),ipv6(frag=no), packets:2, bytes:180, used:0.060s, actions:enp8s0f0

If one int port device offloads 3 rules, only 2 devices can offload; other devices will hit the limit and fail to offload. That is insufficient for customers, so increase the number to 32.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20240808055927.2059700-2-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net: ag71xx: use phylink_mii_ioctl  (Rosen Penev)

Commit f1294617d2f38bd2b9f6cce516b0326858b61182 removed the custom function for ndo_eth_ioctl and used the standard phy_do_ioctl, which calls phy_mii_ioctl. However, this driver has since been ported to phylink, where it makes more sense to call phylink_mii_ioctl. Bring back a custom function that calls phylink_mii_ioctl.

Signed-off-by: Rosen Penev <rosenp@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20240807215834.33980-1-rosenp@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

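The restored handler is essentially a thin wrapper. A sketch with an assumed private struct (the real driver's field names may differ):

    #include <linux/phylink.h>
    #include <linux/netdevice.h>

    static int example_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
    {
            struct example_priv *priv = netdev_priv(dev);

            /* phylink, not phylib, owns the PHY now */
            return phylink_mii_ioctl(priv->phylink, ifr, cmd);
    }
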
2024-08-09  ibmvnic: Perform tx CSO during send scrq direct  (Nick Child)

During initialization with the vnic server, a bitstring is communicated to the client regarding header info needed during CSO (see "VNIC Capabilities" in PAPR). Most of the time, to be safe, the vnic server requests header info for CSO. When header info is needed, multiple TX descriptors are required per skb, which limits the driver to using send_subcrq_indirect instead of send_subcrq_direct.

Previously, the vnic server's request for header info was ignored. This allowed the use of send_subcrq_direct. Transmissions were successful because the bitstring returned by the vnic server is broad and overcautious. It was observed that mlx backing devices could actually transmit and handle CSO packets without the vnic server receiving header info (despite the fact that the bitstring requested it).

There was a trust issue: the bitstring was overcautious. This extra precaution (requesting header info when the backing device may not use it) comes at the cost of performance (using direct vs indirect hcalls has a 30% delta in small-packet RR transaction rate). So the vnic server team has been asked to ensure that the bitstring is more exact.

In the meantime, disable CSO when it is possible to use the send_subcrq_direct path. In other words, calculate the checksum before handing the packet to FW when the packet is not segmented and xmit_more is false. Since that code path is only possible if the skb is non-GSO and xmit_more is false, the cost of doing the checksum in the send_subcrq_direct path is minimal: any large segmented skb will have xmit_more set to true more frequently, and it is inexpensive to checksum a small skb.

The worst-case workload would be a 9000 MTU TCP_RR test with close-to-MTU-sized packets (and TSO off). This allows xmit_more to be false more frequently and opens the code path up to use send_subcrq_direct. Observing trace data (graph-time = 1) and packet rate with this workload shows minimal performance degradation:

1. NIC does checksum w headers, safely use send_subcrq_indirect:
   - Packet rate: 631k txs
   - Trace data:
     ibmvnic_xmit = 44344685.87 us / 6234576 hits = AVG 7.11 us
     skb_checksum_help = 4.07 us / 2 hits = AVG 2.04 us
       ^ Notice hits, tracing this just for reassurance
     ibmvnic_tx_scrq_flush = 33040649.69 us / 5638441 hits = AVG 5.86 us
     send_subcrq_indirect = 37438922.24 us / 6030859 hits = AVG 6.21 us

2. NIC does checksum w/o headers, dangerously use send_subcrq_direct:
   - Packet rate: 831k txs
   - Trace data:
     ibmvnic_xmit = 48940092.29 us / 8187630 hits = AVG 5.98 us
     skb_checksum_help = 2.03 us / 1 hits = AVG 2.03 us
     ibmvnic_tx_scrq_flush = 31141879.57 us / 7948960 hits = AVG 3.92 us
     send_subcrq_indirect = 8412506.03 us / 728781 hits = AVG 11.54 us
       ^ notice hits is much lower b/c send_subcrq_direct was called
       ^ wasn't traceable

3. Driver does checksum, safely use send_subcrq_direct (THIS PATCH):
   - Packet rate: 829k txs
   - Trace data:
     ibmvnic_xmit = 56696077.63 us / 8066168 hits = AVG 7.03 us
     skb_checksum_help = 8587456.16 us / 7526072 hits = AVG 1.14 us
     ibmvnic_tx_scrq_flush = 30219545.55 us / 7782409 hits = AVG 3.88 us
     send_subcrq_indirect = 8638326.44 us / 763693 hits = AVG 11.31 us

If the bitstring ever specifies that CSO does not require headers (dependent on VIOS vnic server changes), this patch should be removed and replaced with one that investigates the bitstring before using send_subcrq_direct.

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Link: https://patch.msgid.link/20240807211809.1259563-8-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  ibmvnic: Only record tx completed bytes once per handler  (Nick Child)

Byte Queue Limits depends on dql_completed() being called once per tx completion round in order to adjust its algorithm appropriately. The dql->limit value is an approximation of the amount of bytes that the NIC can consume per irq interval. If this approximation is too high then the NIC will become over-saturated; too low and the NIC will starve.

The dql->limit depends on dql->prev-* stats to calculate an optimal value. If dql_completed() is called more than once per irq handler then those prev-* values become unreliable (because they are no longer an accurate representation of the previous state of the NIC), resulting in a sub-optimal limit value. Therefore, move the call to netdev_tx_completed_queue() to the end of ibmvnic_complete_tx().

When performing 150 sessions of TCP rr (request-response 1 byte packets) workloads, one could observe:

Previously:
 - limit and inflight values hovering around 130
 - transaction rate of around 750k pps

Now:
 - limit rises and falls in response to inflight (130-900)
 - transaction rate of around 1M pps (33% improvement)

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Link: https://patch.msgid.link/20240807211809.1259563-7-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

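A sketch of the "report once per completion round" pattern; the queue walk and helper names are illustrative, not the ibmvnic code:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static void example_complete_tx(struct example_txq *txq)
    {
            unsigned int pkts = 0, bytes = 0;
            struct sk_buff *skb;

            while ((skb = example_next_completed(txq)) != NULL) {
                    pkts++;
                    bytes += skb->len;
                    dev_consume_skb_any(skb);
            }

            /* a single dql_completed() per handler keeps BQL's prev-*
             * bookkeeping, and therefore its limit estimate, accurate */
            netdev_tx_completed_queue(txq->ndq, pkts, bytes);
    }
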
2024-08-09  ibmvnic: Introduce send sub-crq direct  (Nick Child)

Firmware supports two hcalls to send a sub-crq request: H_SEND_SUB_CRQ_INDIRECT and H_SEND_SUB_CRQ. The indirect hcall allows for submission of batched messages, while the other hcall is limited to only one message. This protocol is defined in PAPR section 17.2.3.3.

Previously, the ibmvnic xmit function only used the indirect hcall. This allowed the driver to batch its skbs. A single skb can occupy a few entries per hcall depending on whether FW requires skb header information or not. The FW only needs header information if the packet is segmented. By this logic, if an skb is not GSO then it can fit in one sub-crq message and is therefore a candidate for H_SEND_SUB_CRQ. Batching skb transmission is only useful when there are more packets coming down the line (i.e. netdev_xmit_more is true).

As it turns out, H_SEND_SUB_CRQ induces less latency than H_SEND_SUB_CRQ_INDIRECT. Therefore, use H_SEND_SUB_CRQ where appropriate. Small latency gains were seen when doing TCP_RR_150 (request/response workload). Ftrace results (graph-time=1):

Previous:
 ibmvnic_xmit = 29618270.83 us / 8860058.0 hits = AVG 3.34
 ibmvnic_tx_scrq_flush = 21972231.02 us / 6553972.0 hits = AVG 3.35
Now:
 ibmvnic_xmit = 22153350.96 us / 8438942.0 hits = AVG 2.63
 ibmvnic_tx_scrq_flush = 15858922.4 us / 6244076.0 hits = AVG 2.54

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Link: https://patch.msgid.link/20240807211809.1259563-6-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

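The dispatch rule reduces to something like this sketch; the hcall wrappers and the single-descriptor check are assumptions, not the driver's exact code:

    #include <linux/skbuff.h>

    static int example_flush_scrq(struct example_txq *txq, struct sk_buff *skb,
                                  bool xmit_more)
    {
            /* one descriptor and nothing batched behind it: the cheaper
             * H_SEND_SUB_CRQ suffices */
            if (!skb_is_gso(skb) && !xmit_more && txq->num_entries == 1)
                    return example_send_subcrq_direct(txq);

            return example_send_subcrq_indirect(txq);
    }
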
2024-08-09  ibmvnic: Remove duplicate memory barriers in tx  (Nick Child)

send_subcrq_[in]direct() already has a DMA memory barrier; remove the earlier one.

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Link: https://patch.msgid.link/20240807211809.1259563-5-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  ibmvnic: Reduce memcpys in tx descriptor generation  (Nick Child)

Previously, when creating the header descriptors, the driver would:
1. allocate a temporary buffer on the stack (in build_hdr_descs_arr)
2. memcpy the header info into the temporary buffer (in build_hdr_data)
3. memcpy the temp buffer into a local variable (in create_hdr_descs)
4. copy the local variable into the return buffer (in create_hdr_descs)

Since there is no opportunity for errors during this process, the temp buffer is not needed and the work can be done on the return buffer directly. Repurpose build_hdr_data() to only calculate the header lengths, and rename it to get_hdr_lens(). Edit create_hdr_descs() to read from the skb directly and copy directly into the returned useful buffer. The process now involves fewer memory and write operations while also being more readable.

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Link: https://patch.msgid.link/20240807211809.1259563-4-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  ibmvnic: Use header len helper functions on tx  (Nick Child)

Use the header length helper functions rather than trying to calculate the lengths within the driver. Defined functions exist for the mac and network headers (skb_mac_header_len and skb_network_header_len), but no such function exists for the transport header length. Also, hdr_data was memset to all 0's during allocation, so there is no need to memset again.

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Link: https://patch.msgid.link/20240807211809.1259563-3-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  ibmvnic: Only replenish rx pool when resources are getting low  (Nick Child)

Previously, the driver would replenish the rx pool if the polling function consumed less than the budget, the logic being that the driver did not exhaust its budget, so it must not be busy and has cycles to spare for replenishing the pool. So pool replenishment happened on every poll which did not consume the budget.

This can be very costly during request-response tests. In fact, an extra ~100pps can be seen in TCP_RR_150 tests when this conditional is removed. Trace results (ftrace, graph-time=1) for the poll function are below:

Previous results:
 ibmvnic_poll = 64951846.0 us / 4167628.0 hits = AVG 15.58
 replenish_rx_pool = 17602846.0 us / 4710437.0 hits = AVG 3.74
Now:
 ibmvnic_poll = 57673941.0 us / 4791737.0 hits = AVG 12.04
 replenish_rx_pool = 3938171.6 us / 4314.0 hits = AVG 912.88

While the replenish function takes longer, it is hit less frequently, meaning the ibmvnic_poll function is faster on average. Furthermore, this change does not have a negative effect on performance bandwidth/latency measurements.

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Link: https://patch.msgid.link/20240807211809.1259563-2-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

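A sketch of the new condition in the poll loop; the pool fields and the half-full low-water mark are illustrative assumptions:

    #include <linux/netdevice.h>

    static int example_poll(struct napi_struct *napi, int budget)
    {
            struct example_rxq *rxq = container_of(napi, struct example_rxq, napi);
            int done = example_clean_rx(rxq, budget);   /* hypothetical helper */

            /* replenish only when the pool is actually running low,
             * not on every under-budget poll */
            if (rxq->pool_avail < rxq->pool_size / 2)
                    example_replenish_rx_pool(rxq);

            if (done < budget)
                    napi_complete_done(napi, done);

            return done;
    }
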
2024-08-09  net: fs_enet: Fix warning due to wrong type  (Christophe Leroy)

Building fs_enet on powerpc e500 leads to the following warning:

  CC      drivers/net/ethernet/freescale/fs_enet/mac-scc.o
In file included from ./include/linux/build_bug.h:5,
                 from ./include/linux/container_of.h:5,
                 from ./include/linux/list.h:5,
                 from ./include/linux/module.h:12,
                 from drivers/net/ethernet/freescale/fs_enet/mac-scc.c:15:
drivers/net/ethernet/freescale/fs_enet/mac-scc.c: In function 'allocate_bd':
./include/linux/err.h:28:49: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
   28 | #define IS_ERR_VALUE(x) unlikely((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)
      |                                                 ^
./include/linux/compiler.h:77:45: note: in definition of macro 'unlikely'
   77 | # define unlikely(x) __builtin_expect(!!(x), 0)
      |                                             ^
drivers/net/ethernet/freescale/fs_enet/mac-scc.c:138:13: note: in expansion of macro 'IS_ERR_VALUE'
  138 |         if (IS_ERR_VALUE(fep->ring_mem_addr))
      |             ^~~~~~~~~~~~

This is due to fep->ring_mem_addr not being a pointer but a DMA address, which is 64 bits on that platform while pointers are 32 bits, as this is a 32-bit platform with a wider physical bus. However, using fep->ring_mem_addr is just wrong, because cpm_muram_alloc() returns an offset within the muram and not a physical address directly. So use fpi->dpram_offset instead.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/ec67ea3a3bef7e58b8dc959f7c17d405af0d27e4.1723101144.git.christophe.leroy@csgroup.eu
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-09  net: stmmac: xgmac: use const char arrays for string constants  (Simon Horman)

Jiri Slaby advises that the preferred mechanism for declaring string constants is static char arrays, so use that here. This mostly reverts commit 1692b9775e74 ("net: stmmac: xgmac: use #define for string constants"), which was a fix for commit 46eba193d04f ("net: stmmac: xgmac: fix handling of DPP safety error for DMA channels"); that fix replaced const char * with #defines in order to address compilation failures observed on GCC 6 through 10.

Compile tested only. No functional change intended.

Suggested-by: Jiri Slaby <jirislaby@kernel.org>
Link: https://lore.kernel.org/netdev/485dbc5a-a04b-40c2-9481-955eaa5ce2e2@kernel.org/
Signed-off-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2024-08-08  net: ti: icssg_prueth: populate netdev of_node  (Matthias Schiffer)

Allow of_find_net_device_by_node() to find icssg_prueth ports and make the individual ports' of_nodes available in sysfs.

Signed-off-by: Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
Reviewed-by: MD Danish Anwar <danishanwar@ti.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20240807121215.3178964-1-matthias.schiffer@ew.tq-group.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-08  net: atlantic: use ethtool_sprintf  (Rosen Penev)

Allows simplifying get_strings and avoids manual pointer manipulation.

Signed-off-by: Rosen Penev <rosenp@gmail.com>
Reviewed-by: Joe Damato <jdamato@fastly.com>
Link: https://patch.msgid.link/20240807190303.6143-1-rosenp@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-08  bnx2x: Provide declaration of dmae_reg_go_c in header  (Simon Horman)

Provide a declaration of dmae_reg_go_c in a header. This symbol is defined in bnx2x_main.c and used both there and in bnx2x_stats.c. However, Sparse complains that there is no declaration of the symbol and that the symbol is not static:

.../bnx2x_main.c:291:11: warning: symbol 'dmae_reg_go_c' was not declared. Should it be static?

Address this by moving the declaration from bnx2x_stats.c to bnx2x_reg.h. No functional change intended. Compile tested only.

Signed-off-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20240806-bnx2x-dec-v1-1-ae844ec785e4@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

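The change amounts to giving the definition one shared declaration. A sketch, assuming the symbol is a u32 register-address array (the exact type and initializers live in the driver):

    /* bnx2x_reg.h: visible to every user, silences the Sparse warning */
    extern const u32 dmae_reg_go_c[];

    /* bnx2x_main.c keeps the definition */
    const u32 dmae_reg_go_c[] = {
            DMAE_REG_GO_C0, DMAE_REG_GO_C1, /* per-channel GO registers */
    };
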
2024-08-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)

Cross-merge networking fixes after downstream PR. No conflicts or adjacent changes.

Link: https://patch.msgid.link/20240808170148.3629934-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-08  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  (Jakub Kicinski)

Tony Nguyen says:

====================
Intel Wired LAN Driver Updates 2024-08-07 (ice)

This series contains updates to the ice driver only.

Grzegorz adds an IRQ synchronization call before performing reset and prevents writing to hardware when it is resetting. Mateusz swaps an incorrect assignment of FEC statistics.

* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
  ice: Fix incorrect assigns of FEC counts
  ice: Skip PTP HW writes during PTP reset procedure
  ice: Fix reset handler
====================

Link: https://patch.msgid.link/20240807224521.3819189-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-08  net: ethtool: fix off-by-one error in max RSS context IDs  (Edward Cree)

Both ethtool_ops.rxfh_max_context_id and the default value used when it's not specified are supposed to be exclusive maxima (the former is documented as such; the latter, U32_MAX, cannot be used as an ID since it equals ETH_RXFH_CONTEXT_ALLOC), but xa_alloc() expects an inclusive maximum. Subtract one from 'limit' to produce an inclusive maximum, and pass that to xa_alloc().

Increase bnxt's max by one to prevent a (very minor) regression, as BNXT_MAX_ETH_RSS_CTX is an inclusive max. This is safe since bnxt is not actually hard-limited; BNXT_MAX_ETH_RSS_CTX is just a leftover from old driver code that managed context IDs itself.

Rename rxfh_max_context_id to rxfh_max_num_contexts to make its semantics (hopefully) more obvious.

Fixes: 847a8ab18676 ("net: ethtool: let the core choose RSS context IDs")
Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Link: https://patch.msgid.link/5a2d11a599aa5b0cc6141072c01accfb7758650c.1723045898.git.ecree.xilinx@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

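The exclusive-to-inclusive conversion, sketched against the xarray API; starting the range at 1 (keeping ID 0 for the default context) is an assumption for illustration:

    #include <linux/xarray.h>

    /* 'limit' is an exclusive maximum; xa_alloc() takes an inclusive one */
    static int example_alloc_rss_ctx_id(struct xarray *xa, u32 *id, void *ctx,
                                        u32 limit)
    {
            return xa_alloc(xa, id, ctx, XA_LIMIT(1, limit - 1), GFP_KERNEL);
    }
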
2024-08-08  net: fec: Stop PPS on driver remove  (Csókás, Bence)

PPS was not stopped in fec_ptp_stop(), which is called when the adapter is removed. Consequently, the driver could not be safely reloaded with the PPS signal on.

Fixes: 32cba57ba74b ("net: fec: introduce fec_ptp_stop and use in probe fail path")
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Link: https://lore.kernel.org/netdev/CAOMZO5BzcZR8PwKKwBssQq_wAGzVgf1ffwe_nhpQJjviTdxy-w@mail.gmail.com/T/#m01dcb810bfc451a492140f6797ca77443d0cb79f
Signed-off-by: Csókás, Bence <csokas.bence@prolan.hu>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
Link: https://patch.msgid.link/20240807080956.2556602-1-csokas.bence@prolan.hu
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2024-08-08  net: bcmgenet: Properly overlay PHY and MAC Wake-on-LAN capabilities  (Florian Fainelli)

Some Wake-on-LAN modes such as WAKE_FILTER may only be supported by the MAC, while others might only be supported by the PHY. Make sure that .get_wol() returns the union of both rather than only that of the PHY if the PHY supports Wake-on-LAN.

Fixes: 7e400ff35cbe ("net: bcmgenet: Add support for PHY-based Wake-on-LAN")
Signed-off-by: Florian Fainelli <florian.fainelli@broadcom.com>
Link: https://patch.msgid.link/20240806175659.3232204-1-florian.fainelli@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

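A sketch of the union; the particular MAC-side modes OR'd in are placeholders, not the genet capability set:

    #include <linux/ethtool.h>
    #include <linux/phy.h>
    #include <linux/netdevice.h>

    static void example_get_wol(struct net_device *dev,
                                struct ethtool_wolinfo *wol)
    {
            /* start from the PHY's capabilities... */
            if (dev->phydev)
                    phy_ethtool_get_wol(dev->phydev, wol);

            /* ...then OR in what the MAC alone can do, e.g. WAKE_FILTER */
            wol->supported |= WAKE_MAGIC | WAKE_FILTER;
    }
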
2024-08-08  net: stmmac: dwmac4: fix PCS duplex mode decode  (Russell King (Oracle))

dwmac4 was decoding the duplex mode from the GMAC_PHYIF_CONTROL_STATUS register incorrectly, using GMAC_PHYIF_CTRLSTATUS_LNKMOD_MASK (value 1) rather than GMAC_PHYIF_CTRLSTATUS_LNKMOD (bit 16). Fix this.

Fixes: 70523e639bf8c ("drivers: net: stmmac: reworking the PCS code.")
Reviewed-by: Andrew Halaney <ahalaney@redhat.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Link: https://patch.msgid.link/E1sbJvd-001rGD-E3@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

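The decode fix in miniature; the wrapper function is illustrative, the two defines are the ones the commit names:

    #include <linux/ethtool.h>

    static unsigned int example_decode_duplex(u32 status)
    {
            /* test bit 16 (GMAC_PHYIF_CTRLSTATUS_LNKMOD), not the
             * field-width mask whose value is 1 */
            return (status & GMAC_PHYIF_CTRLSTATUS_LNKMOD) ?
                   DUPLEX_FULL : DUPLEX_HALF;
    }
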