path: root/drivers/net/ethernet
Age  Commit message  Author
2023-06-12  net/sched: taprio: report class offload stats per TXQ, not per TC  [Vladimir Oltean]
The taprio Qdisc creates child classes per netdev TX queue, but taprio_dump_class_stats() currently reports offload statistics per traffic class. Traffic classes are groups of TXQs sharing the same dequeue priority, so this is incorrect and we shouldn't be bundling up the TXQ stats when reporting them, as we currently do in enetc. Modify the API from taprio to drivers such that they report TXQ offload stats and not TC offload stats. There is no change in the UAPI or in the global Qdisc stats. Fixes: 6c1adb650c8d ("net/sched: taprio: add netlink reporting for offload statistics counters") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-06-12  net: mana: Add support for vlan tagging  [Haiyang Zhang]
To support vlan, use MANA_LONG_PKT_FMT if a vlan tag is present in the TX skb, then extract the vlan tag from the skb struct and save it to tx_oob for the NIC to transmit. Vlan tags carried in the packet payload are also accepted by the NIC. For RX, extract the vlan tag from the CQE and put it into the skb. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
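For illustration, a minimal sketch of the TX/RX handling this describes, using the generic kernel VLAN helpers; the out-of-band structure, its fields and the EXAMPLE_LONG_PKT_FMT value are placeholders, not the actual mana definitions:

```c
#include <linux/if_vlan.h>
#include <linux/skbuff.h>

/* Illustrative stand-ins for the driver's TX out-of-band area; the real
 * mana structures and MANA_LONG_PKT_FMT live in the driver headers. */
#define EXAMPLE_LONG_PKT_FMT	1

struct example_tx_oob {
	u8  pkt_fmt;
	u16 vlan_tci;
};

/* TX: if the skb carries an accelerated VLAN tag, select the long packet
 * format and hand the tag to the NIC through the out-of-band area. */
static void example_tx_vlan(struct sk_buff *skb, struct example_tx_oob *oob)
{
	if (skb_vlan_tag_present(skb)) {
		oob->pkt_fmt  = EXAMPLE_LONG_PKT_FMT;
		oob->vlan_tci = skb_vlan_tag_get(skb);	/* PCP/DEI/VID */
	}
}

/* RX: propagate a VLAN tag reported in the CQE into the skb so the stack
 * treats it as a hardware-stripped tag. */
static void example_rx_vlan(struct sk_buff *skb, u16 cqe_vlan_tci, bool vlan_valid)
{
	if (vlan_valid)
		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), cqe_vlan_tci);
}
```

skb_vlan_tag_present()/skb_vlan_tag_get() cover the accelerated-tag case; tags left in the payload need no driver action, matching the note above.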
2023-06-12  sfc: Add devlink dev info support for EF10  [Martin Habets]
Reuse the work done for EF100 to add devlink support for EF10. There is no devlink port support for EF10. Signed-off-by: Martin Habets <habetsm.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-06-12  ionic: add support for ethtool extended stat link_down_count  [Nitya Sunkad]
Following the example of commit 9a0f830f8026 ("ethtool: linkstate: add a statistic for PHY down events"), add support for link down events. Add the callback ionic_get_link_ext_stats to ionic_ethtool.c to support link_down_count, a netdev property that is reported exclusively on physical link down events. Run ethtool -I <devname> to display the device link down count. Signed-off-by: Nitya Sunkad <nitya.sunkad@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
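A rough sketch of what such an ethtool callback looks like; ionic's real implementation and the private field name will differ:

```c
#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* Assumed private structure; ionic keeps its own equivalent counter. */
struct example_priv {
	u64 link_down_count;	/* bumped on every physical link-down event */
};

static void example_get_link_ext_stats(struct net_device *netdev,
				       struct ethtool_link_ext_stats *stats)
{
	struct example_priv *priv = netdev_priv(netdev);

	stats->link_down_events = priv->link_down_count;
}

/* Hooked up through struct ethtool_ops:
 *	.get_link_ext_stats = example_get_link_ext_stats,
 * and read from user space with "ethtool -I <devname>". */
```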
2023-06-12  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue  [David S. Miller]
Tony Nguyen says:
====================
ice: Improve miscellaneous interrupt code

Jacob Keller says:

This series improves the driver's use of the threaded IRQ and the communication between ice_misc_intr() and the ice_misc_intr_thread_fn() which was previously introduced by commit 1229b33973c7 ("ice: Add low latency Tx timestamp read").

First, a new custom enumerated return value is used instead of a boolean for ice_ptp_process_ts(). This significantly reduces the cognitive burden when reviewing the logic for this function, as the expected action is clear from the return value name.

Second, the unconditional loop in ice_misc_intr_thread_fn() is removed, replacing it with a write to the Other Interrupt Cause register. This causes the MAC to trigger the Tx timestamp interrupt again. This makes it possible to safely use the ice_misc_intr_thread_fn() to handle other tasks beyond just the Tx timestamps. It is also easier to reason about since the thread function will exit cleanly if we do something like disable the interrupt and call synchronize_irq().

Third, refactor the handling for external timestamp events to use the miscellaneous thread function. This resolves an issue with the external time stamps getting blocked while processing the periodic work function task.

Fourth, a simplification of the ice_misc_intr() function to always return IRQ_WAKE_THREAD, and schedule the ice service task in the ice_misc_intr_thread_fn() instead.

Finally, the Other Interrupt Cause is kept disabled over the thread function processing, rather than immediately re-enabled.

Special thanks to Michal Schmidt for the careful review of the series and pointing out my misunderstandings of the kernel IRQ code. It has been determined that the race outlined as being fixed in previous series was actually introduced by this series itself, which I've since corrected.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2023-06-11  RDMA/mlx5: Fix affinity assignment  [Mark Bloch]
The cited commit aimed to ensure that Virtual Functions (VFs) assign a queue affinity to a Queue Pair (QP) to distribute traffic when the LAG master creates a hardware LAG. If the affinity was set while the hardware was not in LAG, the firmware would ignore the affinity value. However, this commit unintentionally assigned an affinity to QPs on the LAG master's VPORT even if the RDMA device was not marked as LAG-enabled. In most cases, this was not an issue because when the hardware entered hardware LAG configuration, the RDMA device of the LAG master would be destroyed and a new one would be created, marked as LAG-enabled. The problem arises when a user configures Equal-Cost Multipath (ECMP). In ECMP mode, traffic can be directed to different physical ports based on the queue affinity, which is intended for use by VPORTS other than the E-Switch manager. ECMP mode is supported only if both E-Switch managers are in switchdev mode and the appropriate route is configured via IP. In this configuration, the RDMA device is not destroyed, and we retain the RDMA device that is not marked as LAG-enabled. To ensure correct behavior, Send Queues (SQs) opened by the E-Switch manager through verbs should be assigned strict affinity. This means they will only be able to communicate through the native physical port associated with the E-Switch manager. This will prevent the firmware from assigning affinity and will not allow the SQs to be remapped in case of failover. Fixes: 802dcc7fc5ec ("RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode") Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Mark Bloch <mbloch@nvidia.com> Link: https://lore.kernel.org/r/425b05f4da840bc684b0f7e8ebf61aeb5cef09b0.1685960567.git.leon@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org>
2023-06-11  net/mlx5: Nullify qp->dbg pointer post destruction  [Patrisious Haddad]
Nullifying qp->dbg prepares for the next patches in the series, in which mlx5_core_destroy_qp() can actually fail and then be called again; since qp->dbg was not nullified in the previous call, the retry causes a kernel crash. Signed-off-by: Patrisious Haddad <phaddad@nvidia.com> Link: https://lore.kernel.org/r/1677e52bb642fd8d6062d73a5aa69083c0283dc9.1685953497.git.leon@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org>
2023-06-10  bnx2x: fix page fault following EEH recovery  [David Christensen]
In the last step of the EEH recovery process, the EEH driver calls into bnx2x_io_resume() to re-initialize the NIC hardware via the function bnx2x_nic_load(). If an error occurs during bnx2x_nic_load(), OS and hardware resources are released and an error code is returned to the caller. When called from bnx2x_io_resume(), the return code is ignored and the network interface is brought up unconditionally. Later attempts to send a packet via this interface result in a page fault due to a null pointer reference. This patch checks the return code of bnx2x_nic_load(), prints an error message if necessary, and does not enable the interface. Signed-off-by: David Christensen <drc@linux.vnet.ibm.com> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
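A minimal sketch of the corrected resume flow described above, with a placeholder init function standing in for bnx2x_nic_load(); the real fix also releases resources on failure:

```c
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

/* Placeholder for bnx2x_nic_load(); returns 0 on success. */
static int example_nic_load(struct net_device *dev)
{
	return 0;
}

static void example_io_resume(struct net_device *dev)
{
	rtnl_lock();
	if (netif_running(dev)) {
		int rc = example_nic_load(dev);

		if (rc) {
			netdev_err(dev, "NIC load failed after EEH recovery: %d\n", rc);
			rtnl_unlock();
			return;		/* leave the interface down */
		}
	}
	netif_device_attach(dev);
	rtnl_unlock();
}
```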
2023-06-10  octeontx2-af: fix lbk link credits on cn10k  [Nithin Dabilpuram]
Fix the LBK link credits on CN10K to be the same as on CN9K, i.e. 16 * MAX_LBK_DATA_RATE, instead of the current scheme of calculating them from the LBK buffer length / FIFO size. Fixes: 6e54e1c5399a ("octeontx2-af: cn10K: Add MTU configuration") Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-06-10  octeontx2-af: fixed resource availability check  [Satha Rao]
The txschq_alloc response has two different arrays to store contiguous and non-contiguous schedulers of each level. The requested count should be checked against each array separately. Fixes: 5d9b976d4480 ("octeontx2-af: Support fixed transmit scheduler topology") Signed-off-by: Satha Rao <skoteshwar@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-06-10  net: renesas: rswitch: Use hardware pause features  [Yoshihiro Shimoda]
Since this driver used the "global rate limiter" feature of GWCA, the TX performance of each port was reduced when multiple ports transmitted frames simultaneously. To improve performance, remove the use of the "global rate limiter" feature and use "hardware pause" features of the following: - "per priority pause" of GWCA - "global pause" of COMA Note that these features are not related to the ethernet PAUSE frame. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-06-10  net: renesas: rswitch: Use napi_gro_receive() in RX  [Yoshihiro Shimoda]
This hardware can receive multiple frames at once, so using napi_gro_receive() instead of netif_receive_skb() improves RX performance. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
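For context, a bare-bones NAPI poll loop using napi_gro_receive(); the descriptor handling is a placeholder, not the rswitch code:

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Placeholder for the driver's descriptor handling. */
static struct sk_buff *example_rx_pop_skb(struct napi_struct *napi)
{
	return NULL;
}

static int example_poll(struct napi_struct *napi, int budget)
{
	struct sk_buff *skb;
	int done = 0;

	while (done < budget && (skb = example_rx_pop_skb(napi))) {
		/* GRO lets the stack coalesce segments before the protocol
		 * layers see them, which is the performance win cited above. */
		napi_gro_receive(napi, skb);
		done++;
	}

	if (done < budget)
		napi_complete_done(napi, done);

	return done;
}
```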
2023-06-10  sfc: generate encap headers for TC offload  [Edward Cree]
Support constructing VxLAN and GENEVE headers, on either IPv4 or IPv6, using the neighbouring information obtained in encap->neigh to populate the Ethernet header. Note that the ef100 hardware does not insert UDP checksums when performing encap, so for IPv6 the remote endpoint will need to be configured with udp6zerocsumrx or equivalent. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-10  sfc: neighbour lookup for TC encap action offload  [Edward Cree]
For each neighbour we're interested in, create a struct efx_neigh_binder object which has a list of all the encap_actions using it. When we receive a neighbouring update (through the netevent notifier), find the corresponding efx_neigh_binder and update all its users. Since the actual generation of encap headers is still only a stub, the resulting rules still get left on fallback actions. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
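A sketch of the netevent plumbing involved; the binder lookup/update helper is hypothetical, only the notifier mechanism itself comes from the description above:

```c
#include <linux/notifier.h>
#include <net/netevent.h>
#include <net/neighbour.h>

/* Placeholder: look up the efx_neigh_binder-style object tracking this
 * neighbour and refresh the encap actions using it. */
static void example_update_users(struct neighbour *n) { }

static int example_netevent(struct notifier_block *nb, unsigned long event,
			    void *ptr)
{
	struct neighbour *n = ptr;

	if (event != NETEVENT_NEIGH_UPDATE)
		return NOTIFY_DONE;

	example_update_users(n);
	return NOTIFY_OK;
}

static struct notifier_block example_netevent_nb = {
	.notifier_call = example_netevent,
};

/* Probe: register_netevent_notifier(&example_netevent_nb);
 * remove: unregister_netevent_notifier(&example_netevent_nb); */
```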
2023-06-10  sfc: MAE functions to create/update/delete encap headers  [Edward Cree]
Besides the raw header data, also pass the tunnel type, so that the hardware knows it needs to update the IP Total Length and UDP Length fields (and corresponding checksums) for each packet. Also, populate the ENCAP_HEADER_ID field in efx_mae_alloc_action_set() with the fw_id returned from efx_mae_allocate_encap_md(). Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com> Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-10  sfc: add function to atomically update a rule in the MAE  [Edward Cree]
efx_mae_update_rule() changes the action-set-list attached to an MAE flow rule in the Action Rule Table. We will use this when neighbouring updates change encap actions. Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com> Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-10  sfc: some plumbing towards TC encap action offload  [Edward Cree]
Create software objects to manage the metadata for encap actions that can be attached to TC rules. However, since we don't yet have the neighbouring information (needed to generate the Ethernet header), all rules with encap actions are marked as "unready" and thus insert the fallback action into hardware rather than actually offloading the encapsulation action. Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com> Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-10  sfc: add fallback action-set-lists for TC offload  [Edward Cree]
When offloading a TC encap action, the action information for the hardware might not be "ready": if there's currently no neighbour entry available for the destination address, we can't construct the Ethernet header to prepend to the packet. In this case, we still offload the flow rule, but with its action-set-list ID pointing at a "fallback" action which simply delivers the packet to its default destination (as though no flow rule had matched), thus allowing software TC to handle it. Later, when we receive a neighbouring update that allows us to construct the encap header, the rule will become "ready" and we will update its action-set-list ID in hardware to point at the actual offloaded actions. This patch sets up these fallback ASLs, but does not yet use them. Reviewed-by: Pieter Jansen van Vuuren <pieter.jansen-van-vuuren@amd.com> Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-10  net: move gso declarations and functions to their own files  [Eric Dumazet]
Move declarations into include/net/gso.h and code into net/core/gso.c Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Stanislav Fomichev <sdf@google.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: David Ahern <dsahern@kernel.org> Link: https://lore.kernel.org/r/20230608191738.3947077-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
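Assuming the new layout, code that segments GSO skbs now pulls the declarations from the new header; a minimal sketch:

```c
#include <net/gso.h>

static struct sk_buff *example_segment(struct sk_buff *skb,
				       netdev_features_t features)
{
	/* skb_gso_segment() splits a GSO skb into a list of MTU-sized skbs;
	 * its declaration now lives in net/gso.h rather than netdevice.h. */
	return skb_gso_segment(skb, features);
}
```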
2023-06-10  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  [Jakub Kicinski]
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2023-06-08 (ice)

This series contains updates to ice driver only.

Simon Horman stops null pointer dereference for GNSS error path.

Kamil fixes memory leak when downing interface when XDP is enabled.

* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
  ice: Fix XDP memory leak when NIC is brought up and down
  ice: Don't dereference NULL in ice_gnss_read error path
====================

Link: https://lore.kernel.org/r/20230608200051.451752-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-10  iavf: remove mask from iavf_irq_enable_queues()  [Ahmed Zaki]
Enable more than 32 IRQs by removing the u32 bit mask in iavf_irq_enable_queues(). There is no need for the mask as there are no callers that select individual IRQs through the bitmask. Also, if the PF allocates more than 32 IRQs, this mask will prevent us from using all of them. Modify the comment in iavf_register.h to show that the maximum number allowed for the IRQ index is 63 as per the iAVF standard 1.0 [1]. link: [1] https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf Fixes: 5eae00c57f5e ("i40evf: main driver core") Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://lore.kernel.org/r/20230608200226.451861-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-09  net/mlx5e: Remove a useless function call  [Christophe JAILLET]
'handle' is known to be NULL here. There is no need to kfree() it. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Light probe local SFs  [Shay Drory]
In case a user wants to configure the SFs, for example to use only vdpa functionality, they need to fully probe a SF, configure what they want, and afterwards reload the SF. In order to save the time of the reload, local SFs will probe without any auxiliary sub-device, so that the SFs can be configured prior to their full probe. The defaults of the enable_* devlink params of these SFs are set to false.

Usage example:

Create SF:
$ devlink port add pci/0000:08:00.0 flavour pcisf pfnum 0 sfnum 11
$ devlink port function set pci/0000:08:00.0/32768 \
      hw_addr 00:00:00:00:00:11 state active

Enable ETH auxiliary device:
$ devlink dev param set auxiliary/mlx5_core.sf.1 \
      name enable_eth value true cmode driverinit

Now, in order to fully probe the SF, use devlink reload:
$ devlink dev reload auxiliary/mlx5_core.sf.1

At this point the user has an SF devlink instance with an auxiliary device for the Ethernet functionality only.

Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Move esw multiport devlink param to eswitch code  [Shay Drory]
Move the param registration and handling code into the eswitch code, as they are related to each other. There is no point in having the devlink param registration done in a separate file. Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Split function_setup() to enable and open functions  [Shay Drory]
mlx5_cmd_init_hca() takes ~0.2 seconds. A user who wants to disable some of the SF aux devices, at large scale (1K SFs, for example), will waste more than 3 minutes on mlx5_cmd_init_hca(), which isn't needed at this stage. A downstream patch will change SFs which are probed over the E-switch (local SFs) to be probed without any aux dev. In order to support this, split function_setup() to avoid executing mlx5_cmd_init_hca(). Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Set max number of embedded CPU VFs  [Daniel Jurgens]
Set the maximum number of embedded cpu VF functions available. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Update SRIOV enable/disable to handle EC/VFs  [Daniel Jurgens]
Previously, on the embedded CPU platform, SRIOV was never enabled/disabled via mlx5_core_sriov_configure; host VF updates are provided by an event handler. Now, in the disable flow, it must be known whether this is a disable due to driver unload or SRIOV detach, or whether the user updated the number of VFs. If it is due to a change in the number of VFs, only wait for the pages of the EC VFs. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: William Tu <witu@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Query correct caps for min msix vectors  [Daniel Jurgens]
The VFs on the host and the embedded CPU platform share function numbers. Set the ec_vf_function field to query the caps for the correct function. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: William Tu <witu@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Use correct vport when restoring GUIDs  [Daniel Jurgens]
Prior to enabling EC VF functionality the vport number and function ID were always the same. That's not the case now. Use the correct vport number to modify the HCA vport context. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: William Tu <witu@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Add new page type for EC VF pages  [Daniel Jurgens]
When the embedded cpu supports SRIOV it can be enabled and disabled independently from the host SRIOV. Track the pages separately so we can properly wait for returned VF pages. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: William Tu <witu@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Add/remove peer miss rules for EC VFs  [Daniel Jurgens]
Add and remove the peer miss rules for EC VFs. It's possible that there are different amounts of total VFs per function so only create rules for the minimum number of max VFs. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: William Tu <witu@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Add management of EC VF vports  [Daniel Jurgens]
Add init, load, unload, and cleanup of the EC VF vports. This includes changes in how eswitch SRIOV is managed. Previously, on an embedded CPU platform, the number of VFs provided when enabling the eswitch was always 0; host VF vports are handled in the eswitch functions change event handler. Now track the number of EC VFs as well, so they can be handled properly in the enable/disable flows. There are only 3 marks available for use in xarrays, and all 3 were already in use for this use case. EC VF vports are in a known range, so we can access them by index instead of marks. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: William Tu <witu@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Update vport caps query/set for EC VFs  [Daniel Jurgens]
These functions query/set by vport; there was an underlying assumption that the vport was equal to the function ID. That's not the case for EC VF functions. Set the ec_vf_function bit accordingly. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: William Tu <witu@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Enable devlink port for embedded cpu VF vports  [Daniel Jurgens]
Enable creation of a devlink port for EC VF vports. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: William Tu <witu@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-06-09  net/mlx5: Simplify unload all rep code  [Daniel Jurgens]
Instead of using type specific iterators which are only used in one place just traverse the xarray. It will provide suitable ordering based on the vport numbers. This will also eliminate the need for changes here when new types are added. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: William Tu <witu@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
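A sketch of the xarray-based traversal this refers to; the structure and helper names are illustrative, not the mlx5 ones:

```c
#include <linux/xarray.h>

struct example_vport;	/* opaque stand-in for the driver's vport struct */

static void example_unload_rep(struct example_vport *vport) { }

/* xa_for_each() visits entries in index (vport number) order, which gives
 * the suitable ordering mentioned above without per-type iterators. */
static void example_unload_all_reps(struct xarray *vports)
{
	struct example_vport *vport;
	unsigned long index;

	xa_for_each(vports, index, vport)
		example_unload_rep(vport);
}
```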
2023-06-09  octeontx2-af: Fix promiscuous mode  [Ratheesh Kannoth]
CN10KB silicon introduced a new exact match feature, which is used for DMAC filtering. The state of installed DMAC filters in this exact match table is getting corrupted when promiscuous mode is toggled. Fix this by not touching Exact match related config when promiscuous mode is toggled. Fixes: 2dba9459d2c9 ("octeontx2-af: Wrapper functions for MAC addr add/del/update/reset") Signed-off-by: Ratheesh Kannoth <rkannoth@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-06-09  net: renesas: rswitch: Fix timestamp feature after all descriptors are used  [Yoshihiro Shimoda]
The timestamp descriptors were intended to act cyclically: descriptors from index 0 through gq->ring_size - 1 contain actual information, and the last index (gq->ring_size) should hold a LINKFIX descriptor pointing back at the first descriptor (index 0). However, the LINKFIX value is missing, causing the timestamp feature to stop after all descriptors are used. To resolve this issue, set LINKFIX on the timestamp descriptor ring. Reported-by: Phong Hoang <phong.hoang.wz@renesas.com> Fixes: 33f5d733b589 ("net: renesas: rswitch: Improve TX timestamp accuracy") Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
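A rough sketch of the ring-closing step, with an illustrative descriptor layout; the real rswitch descriptor format and LINKFIX descriptor type live in the driver headers:

```c
#include <linux/types.h>
#include <linux/kernel.h>

/* Illustrative descriptor; the real layout and the LINKFIX type value
 * are defined by the rswitch driver. */
#define EXAMPLE_DT_LINKFIX	0x90

struct example_desc {
	u8     die_dt;		/* descriptor type field */
	__le32 dptrl;		/* descriptor pointer, low 32 bits */
} __packed;

/* Close the ring: the entry one past the last data descriptor becomes a
 * link descriptor pointing back at index 0, so the hardware wraps around
 * instead of stopping once every slot has been used. */
static void example_ts_ring_close(struct example_desc *ring, int ring_size,
				  dma_addr_t ring_dma)
{
	struct example_desc *link = &ring[ring_size];

	link->die_dt = EXAMPLE_DT_LINKFIX;
	link->dptrl  = cpu_to_le32(lower_32_bits(ring_dma));
}
```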
2023-06-08  chelsio/chtls: Use splice_eof() to flush  [David Howells]
Allow splice to end a Chelsio TLS record after prematurely ending a splice/sendfile due to getting an EOF condition (->splice_read() returned 0) after splice had called sendmsg() with MSG_MORE set when the user didn't set MSG_MORE. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/CAHk-=wh=V579PDYvkpnTobCLGczbgxpMgGmmhqiTyE34Cpi5Gg@mail.gmail.com/ Signed-off-by: David Howells <dhowells@redhat.com> cc: Ayush Sawal <ayush.sawal@chelsio.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  Merge tag 'mlx5-updates-2023-06-06' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  [Jakub Kicinski]
Saeed Mahameed says:
====================
mlx5-updates-2023-06-06

1) Support 4 ports VF LAG, part 2/2
2) Few extra trivial cleanup patches

Shay Drory Says:
================
Support 4 ports VF LAG, part 2/2

This series continues the series[1] "Support 4 ports VF LAG, part1/2". This series adds support for 4 ports VF LAG (single FDB E-Switch). This series of patches refactors LAG code that makes assumptions about VF LAG supporting only two ports and then enables 4 ports VF LAG.

Patch 1:
- Fix for ib rep code
Patches 2-5:
- Refactor LAG layer.
Patches 6-7:
- Block LAG types which don't support 4 ports.
Patch 8:
- Enable 4 ports VF LAG.

This series specifically allows HCAs with 4 ports to create a VF LAG with only 4 ports. It is not possible to create a VF LAG with 2 or 3 ports using HCAs that have 4 ports. Currently, the Merged E-Switch feature only supports HCAs with 2 ports. However, upcoming patches will introduce support for HCAs with 4 ports.

In order to activate VF LAG a user can execute:
devlink dev eswitch set pci/0000:08:00.0 mode switchdev
devlink dev eswitch set pci/0000:08:00.1 mode switchdev
devlink dev eswitch set pci/0000:08:00.2 mode switchdev
devlink dev eswitch set pci/0000:08:00.3 mode switchdev
ip link add name bond0 type bond
ip link set dev bond0 type bond mode 802.3ad
ip link set dev eth2 master bond0
ip link set dev eth3 master bond0
ip link set dev eth4 master bond0
ip link set dev eth5 master bond0

Where eth2, eth3, eth4 and eth5 are net-interfaces of pci/0000:08:00.0 pci/0000:08:00.1 pci/0000:08:00.2 pci/0000:08:00.3 respectively.

User can verify LAG state and type via debugfs:
/sys/kernel/debug/mlx5/0000\:08\:00.0/lag/state
/sys/kernel/debug/mlx5/0000\:08\:00.0/lag/type

[1] https://lore.kernel.org/netdev/20230601060118.154015-1-saeed@kernel.org/T/#mf1d2083780970ba277bfe721554d4925f03f36d1
================

* tag 'mlx5-updates-2023-06-06' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5e: simplify condition after napi budget handling change
  mlx5/core: E-Switch, Allocate ECPF vport if it's an eswitch manager
  net/mlx5: Skip inline mode check after mlx5_eswitch_enable_locked() failure
  net/mlx5e: TC, refactor access to hash key
  net/mlx5e: Remove RX page cache leftovers
  net/mlx5e: Expose catastrophic steering error counters
  net/mlx5: Enable 4 ports VF LAG
  net/mlx5: LAG, block multiport eswitch LAG in case ldev have more than 2 ports
  net/mlx5: LAG, block multipath LAG in case ldev have more than 2 ports
  net/mlx5: LAG, change mlx5_shared_fdb_supported() to static
  net/mlx5: LAG, generalize handling of shared FDB
  net/mlx5: LAG, check if all eswitches are paired for shared FDB
  {net/RDMA}/mlx5: introduce lag_for_each_peer
  RDMA/mlx5: Free second uplink ib port
====================

Link: https://lore.kernel.org/r/20230607210410.88209-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  net: fman_memac: use pcs-lynx's check for fwnode availability  [Russell King (Oracle)]
Use pcs-lynx's check rather than our own when determining if the device is available. This fixes a bug where the reference gained by of_parse_phandle() is not dropped if the device is not available. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  net: dpaa2: use pcs-lynx's check for fwnode availability  [Russell King (Oracle)]
Use pcs-lynx's check rather than our own when determining if the device is available. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  net: fman_memac: use lynx_pcs_create_fwnode()  [Russell King (Oracle)]
Use lynx_pcs_create_fwnode() to create a lynx PCS from a fwnode handle. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
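A hedged sketch of creating the PCS from a firmware node reference; the "pcs-handle" property name and surrounding code are assumptions, only lynx_pcs_create_fwnode() itself comes from the commit:

```c
#include <linux/pcs-lynx.h>
#include <linux/phylink.h>
#include <linux/property.h>

static struct phylink_pcs *example_create_pcs(struct fwnode_handle *node)
{
	struct fwnode_handle *pcs_node;
	struct phylink_pcs *pcs;

	pcs_node = fwnode_find_reference(node, "pcs-handle", 0);
	if (IS_ERR(pcs_node))
		return ERR_CAST(pcs_node);

	/* Looks up the PCS mdiodev behind the fwnode; the Lynx PCS driver
	 * then manages that mdiodev's lifetime itself. */
	pcs = lynx_pcs_create_fwnode(pcs_node);
	fwnode_handle_put(pcs_node);

	return pcs;
}
```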
2023-06-08  net: dpaa2-mac: use lynx_pcs_create_fwnode()  [Russell King (Oracle)]
Use lynx_pcs_create_fwnode() to create a lynx PCS from a fwnode handle. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  net: fman_memac: allow lynx PCS to handle mdiodev lifetime  [Russell King (Oracle)]
Put the mdiodev after lynx_pcs_create() so that the Lynx PCS driver can manage the lifetime of the mdiodev it's using. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  net: dpaa2-mac: allow lynx PCS to manage mdiodev lifetime  [Russell King (Oracle)]
Put the mdiodev after lynx_pcs_create() so that the Lynx PCS driver can manage the lifetime of the mdiodev it's using. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  net: pch_gbe: Allow build on MIPS_GENERIC kernel  [Jiaxun Yang]
The MIPS Boston board, which uses the MIPS_GENERIC kernel, has the EG20T PCH and thus needs this driver. The dependency of PCH_GBE and PTP_1588_CLOCK_PCH is also fixed for MIPS_GENERIC. Note that CONFIG_PCH_GBE has been selected in arch/mips/configs/generic/board-boston.config for a while, but somehow it was never wired up in Kconfig. Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20230607055953.34110-1-jiaxun.yang@flygoat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  igb: Fix extts capture value format for 82580/i354/i350  [Yuezhen Luan]
The 82580/i354/i350 have cycle-counter-like timestamp registers that differ from the newer i210. The EXTTS capture value in AUXTSMPx should be converted by the driver from the raw cycle counter value to a timestamp value with 1 ns resolution. This issue can be reproduced on i350 NICs by connecting a 1PPS signal to an SDP pin and running the 'ts2phc' command to read the external 1PPS timestamp value. On i210 this works fine, but on i350 the extts is not correctly converted. The i350/i354/82580 SYSTIM and other timestamp registers are 40-bit counters, covering a time range of 2^40 ns, which means these registers overflow about every 1099 s. As a result, none of these registers can be used directly, in contrast to the newer i210/i211. The igb driver needs to convert these raw register values to a valid timestamp format using the kernel timecounter APIs for the i350 family. Here, igb_extts() simply forgot to do the conversion. Fixes: 38970eac41db ("igb: support EXTTS on 82580/i354/i350") Signed-off-by: Yuezhen Luan <eggcar.luan@gmail.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20230607164116.3768175-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
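The conversion the fix calls for boils down to running the raw counter value through the adapter's timecounter; a minimal sketch (register access omitted):

```c
#include <linux/timecounter.h>

/* Convert a raw 40-bit AUXTSMP cycle value into nanoseconds.
 * timecounter_cyc2time() extends the overflowing hardware counter and
 * converts cycles to nanoseconds since the timecounter epoch. */
static u64 example_extts_to_ns(struct timecounter *tc, u64 raw_cycles)
{
	return timecounter_cyc2time(tc, raw_cycles);
}
```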
2023-06-08  mlxsw: spectrum_nve_vxlan: Fix unsupported flag regression  [Ido Schimmel]
The recently added 'VXLAN_F_LOCALBYPASS' flag is set by default on VXLAN devices and denotes a behavior that is irrelevant for the hardware data path. Add it to the lists of IPv4 and IPv6 supported flags to avoid rejecting offload of VXLAN devices which have this flag set. Fixes: 69474a8a5837 ("net: vxlan: Add nolocalbypass option to vxlan.") Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/5533e63643bf719bbe286fef60f749c9cad35005.1686139716.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
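A sketch of the kind of supported-flags check involved; the mask below is illustrative and far shorter than the real mlxsw lists:

```c
#include <net/vxlan.h>

/* Any flag set by default on VXLAN devices (such as VXLAN_F_LOCALBYPASS)
 * must appear in the supported mask, otherwise offload is refused even
 * though the flag is irrelevant to the hardware data path. */
#define EXAMPLE_SUPPORTED_VXLAN_FLAGS \
	(VXLAN_F_UDP_ZERO_CSUM_TX | VXLAN_F_LEARN | VXLAN_F_LOCALBYPASS)

static bool example_vxlan_can_offload(const struct vxlan_config *cfg)
{
	return !(cfg->flags & ~EXAMPLE_SUPPORTED_VXLAN_FLAGS);
}
```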
2023-06-08  ice: do not re-enable miscellaneous interrupt until thread_fn completes  [Jacob Keller]
The ice driver uses a threaded IRQ for managing Tx timestamps via the devm_request_threaded_irq() interface. The ice_misc_intr() handler function is responsible for processing the hard interrupt context, and can wake ice_misc_intr_thread_fn() by returning IRQ_WAKE_THREAD. The request_threaded_irq() function comment says: @handler is still called in hard interrupt context and has to check whether the interrupt originates from the device. If yes, it needs to disable the interrupt on the device and return IRQ_WAKE_THREAD which will wake up the handler thread and run the @thread_fn. We currently re-enable the Other Interrupt Cause Register (OICR) at the end of ice_misc_intr(). In practice, this seems to be ok, but it can make communicating between the handler function and the thread function difficult, because the interrupt can trigger again while the thread function is still processing. Move the OICR update to the end of the thread function, leaving the other interrupt cause disabled in hardware until we complete one pass of the thread function. This prevents the miscellaneous interrupt from firing until after we finish the thread function. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Arpana Arland <arpanax.arland@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
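A condensed sketch of the handler/thread split described above; the helper functions are stand-ins for the driver's OICR register writes and timestamp processing:

```c
#include <linux/interrupt.h>

/* Placeholders for the driver's actual work and register access. */
static void example_process_ts_and_other_work(void *data) { }
static void example_enable_misc_irq(void *data) { }

/* Hard IRQ context: the interrupt cause stays disabled in hardware;
 * just wake the thread. */
static irqreturn_t example_misc_intr(int irq, void *data)
{
	return IRQ_WAKE_THREAD;
}

/* Thread context: do the slow work, then re-enable the interrupt cause
 * only once a full pass has completed, so the device cannot re-trigger
 * mid-pass. */
static irqreturn_t example_misc_intr_thread_fn(int irq, void *data)
{
	example_process_ts_and_other_work(data);
	example_enable_misc_irq(data);
	return IRQ_HANDLED;
}

/* Registered with something like:
 *	devm_request_threaded_irq(dev, irq, example_misc_intr,
 *				  example_misc_intr_thread_fn, 0,
 *				  "example-misc", data);
 */
```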
2023-06-08  ice: trigger PFINT_OICR_TSYN_TX interrupt instead of polling  [Jacob Keller]
In ice_misc_intr_thread_fn(), if we do not complete all Tx timestamp work, the thread function will poll continuously forever. For E822 hardware, this wastes time as the return value from ice_ptp_process_ts() is accurate and always reports correctly that the PHY actually has new timestamp data. In addition, if we receive enough timestamps with the right pacing, we may never exit this polling. Should this occur, other tasks handled by the ice_misc_intr_thread_fn() will never be processed. Fix this by instead writing to PFINT_OICR, causing an emulated interrupt to be triggered immediately. This does take slightly more processing than just re-checking the timestamps. However, it allows all of the other interrupt causes a chance to be processed first in the hard IRQ function. Note that the OICR interrupt is configured to be throttled to no more than once every 124 microseconds. This gives an effective interrupt rate of ~8000 interrupts per second. This should thus not cause a significant increase in overall CPU usage when compared to sleeping. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Arpana Arland <arpanax.arland@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>