path: root/drivers/net/ethernet/intel
2025-06-16  eth: fm10k: migrate to new RXFH callbacks  (Jakub Kicinski)

Migrate to new callbacks added by commit 9bb00786fc61 ("net: ethtool: add dedicated callbacks for getting and setting rxfh fields"). The .get callback moves out of the switch, and set_rxnfc disappears, as ETHTOOL_SRXFH was the only functionality it implemented.

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Joe Damato <joe@dama.to>
Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250614180907.4167714-5-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
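For illustration, the shape of such a conversion looks roughly like the sketch below. The op and struct names follow commit 9bb00786fc61; the handler body and hashed-fields choices are hypothetical, not fm10k's actual configuration:

    /* Hedged sketch of a dedicated RXFH-fields handler */
    static int fm10k_get_rxfh_fields(struct net_device *netdev,
                                     struct ethtool_rxfh_fields *cmd)
    {
        cmd->data = RXH_IP_SRC | RXH_IP_DST;    /* fields hashed for this flow */

        switch (cmd->flow_type) {
        case TCP_V4_FLOW:
        case TCP_V6_FLOW:
            cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
            break;
        }
        return 0;
    }

    static const struct ethtool_ops fm10k_ethtool_ops = {
        .get_rxfh_fields = fm10k_get_rxfh_fields,
        /* .set_rxfh_fields replaces the old ETHTOOL_SRXFH branch */
    };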
2025-06-16  eth: ixgbe: migrate to new RXFH callbacks  (Jakub Kicinski)

Migrate to new callbacks added by commit 9bb00786fc61 ("net: ethtool: add dedicated callbacks for getting and setting rxfh fields").

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Joe Damato <joe@dama.to>
Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250614180907.4167714-4-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-16  eth: igc: migrate to new RXFH callbacks  (Jakub Kicinski)

Migrate to new callbacks added by commit 9bb00786fc61 ("net: ethtool: add dedicated callbacks for getting and setting rxfh fields").

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Joe Damato <joe@dama.to>
Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250614180907.4167714-3-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-16  eth: igb: migrate to new RXFH callbacks  (Jakub Kicinski)

Migrate to new callbacks added by commit 9bb00786fc61 ("net: ethtool: add dedicated callbacks for getting and setting rxfh fields").

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Joe Damato <joe@dama.to>
Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250614180907.4167714-2-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-16  eth: e1000e: migrate to new RXFH callbacks  (Jakub Kicinski)

Migrate to new callbacks added by commit 9bb00786fc61 ("net: ethtool: add dedicated callbacks for getting and setting rxfh fields").

This driver's RXFH config is read-only/fixed, and it's the only get_rxnfc sub-command the driver supports. So convert the get_rxnfc handler into a get_rxfh_fields handler.

Reviewed-by: Joe Damato <joe@dama.to>
Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Link: https://patch.msgid.link/20250614180638.4166766-5-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-16  libeth: xsk: add XSkFQ refill and XSk wakeup helpers  (Alexander Lobakin)

XSkFQ refill is pretty generic across drivers, minus the FQ descriptor filling, and can easily be unified with one inline callback. XSk wakeup usually is not; here, instead of the commonly used "SW interrupts", I picked firing an IPI. In most tests, it showed better performance; it also gives userspace better control over which CPU will handle the xmit, as SW interrupts honor IRQ affinity no matter which core produces the XSk xmit descriptors (while XDPSQs are associated 1:1 with the cores having the same ID).

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
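A minimal sketch of the IPI-based wakeup idea, assuming hypothetical my_* driver types (this is not the libeth_xsk implementation, just the shape of the technique):

    #include <linux/smp.h>
    #include <linux/netdevice.h>

    static void my_xsk_wakeup_ipi(void *info)
    {
        struct my_xdpsq *sq = info;

        napi_schedule(&sq->napi);    /* run xmit processing on this CPU */
    }

    static int my_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
    {
        struct my_xdpsq *sq = my_get_xdpsq(dev, qid);    /* hypothetical */

        /* XDPSQs map 1:1 to CPUs with the same ID, so target that core */
        smp_call_function_single(qid, my_xsk_wakeup_ipi, sq, false);
        return 0;
    }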
2025-06-16  libeth: xsk: add XSk Rx processing support  (Alexander Lobakin)

Add XSk counterparts for preparing an XSk &libeth_xdp_buff (adding head and frags), running the program, and handling the verdict, including XDP_PASS. Shortcuts in comparison with regular Rx: frags and all verdicts except XDP_REDIRECT are under unlikely() and out of line; there are no checks for XDP program presence, as it's always true for XSk.

Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # optimizations
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16  libeth: xsk: add XSk xmit functions  (Alexander Lobakin)

Reuse the core sending functions to send XSk xmit frames. Both metadata and no-metadata pools/drivers are supported. libeth_xdp also provides generic XSk metadata ops, currently with checksum offload only, for cases when the HW doesn't require supplying L3/L4 checksum offsets. Drivers are free to pass their own ops. &libeth_xdp_tx_bulk is not used here as it would be redundant; pool->tx_descs are accessed directly.

The fake "libeth_xsktmo" is needed to hide implementation details from drivers that want to use the generic ops: the original struct is defined in the same file where dev->xsk_tx_metadata_ops gets set, to avoid duplicating the slowpath; at the same time, the XSk xmit functions use a local "fast" copy to inline the XMO callbacks. The Tx descriptor filling loop is unrolled by 8.

Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # optimizations
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16  libeth: xsk: add XSk XDP_TX sending helpers  (Alexander Lobakin)

Add XSk counterparts for XDP_TX buffer sending and completion. The same base structures and functions are used from the libeth_xdp core, with adjustments for the fact that XSk Rx always operates on &xdp_buff_xsk for both head and frags. Unlike regular Rx, unlikely() is used for frags here, as header split gives no benefit for XSk Rx, at least for now.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16  libeth: xdp: add RSS hash hint and XDP features setup helpers  (Alexander Lobakin)

End the XDP section by adding helpers to set up XDP features, flip .ndo_xdp_xmit() support at runtime (for cases when it's not always on), and calculate the queue clean/refill threshold.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16  libeth: xdp: add XDP prog run and verdict result handling  (Alexander Lobakin)

Running a prog and handling the verdicts, up to napi_gro_receive(), is also pretty generic code, not really differing between vendors (except for Tx descriptor filling and Rx descriptor parsing). Define a couple of inlines to do that. The inline callbacks a driver needs to pass are mentioned above: Tx descriptor filling for XDP_TX, populating an skb with the descriptor data for XDP_PASS, and finalizing XDPSQs after the polling loop for XDP_TX (kicking the HW to start sending).

The populate callback receives only &libeth_xdp_buff, assuming the buff::desc pointer is enough; plus, you can always get the corresponding Rx queue structure via container_of(buff::rxq). If not, a driver can extend the buff with more fields directly on the stack without touching the libeth_xdp definitions.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
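For reference, the generic verdict flow that such helpers wrap looks roughly like the sketch below (standard kernel XDP APIs; my_xdp_tx_queue and my_xdpsq are hypothetical stand-ins for the driver callbacks, and the actual libeth_xdp helpers differ):

    static u32 my_run_xdp(struct bpf_prog *prog, struct xdp_buff *xdp,
                          struct my_xdpsq *sq, struct net_device *dev)
    {
        u32 act = bpf_prog_run_xdp(prog, xdp);

        switch (act) {
        case XDP_PASS:
            break;                      /* caller builds an skb and GROs it */
        case XDP_TX:
            my_xdp_tx_queue(sq, xdp);   /* driver fills a Tx descriptor */
            break;
        case XDP_REDIRECT:
            if (xdp_do_redirect(dev, xdp, prog))
                goto drop;
            break;
        default:
            bpf_warn_invalid_xdp_action(dev, prog, act);
            fallthrough;
        case XDP_ABORTED:
            trace_xdp_exception(dev, prog, act);
            fallthrough;
        case XDP_DROP:
    drop:
            xdp_return_buff(xdp);
            break;
        }
        return act;
    }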
2025-06-16  libeth: xdp: add helpers for preparing/processing &libeth_xdp_buff  (Alexander Lobakin)

Add convenience helpers to build an &xdp_buff. This means: general initialization before the NAPI loop, adding the head, adding frags, etc. libeth_xdp_process_buff() is the same as what everybody has in their drivers:

    dma_sync_for_cpu();

    if (!frag) {
        add_head();
        prefetch();
    } else {
        add_frag();
    }

Note that I don't use net_prefetch(), sticking to the original prefetch(). In none of my tests did prefetching 128 bytes yield better perf than 64 bytes. That might differ if the headers are huge enough, but then additional tunneling etc. overhead takes place and you won't win much either way.

&libeth_xdp_stash is for cases when you exit the polling loop without finishing building the buff. If that happens, you need to store the buffer in the queue structure until the next loop and then restore it. It makes no sense to place a full &xdp_buff there. Define a minimal structure which stores only the fields essential to restore it. I was able to pack it into 16 bytes, which is only 8 bytes bigger than `struct sk_buff *skb` on x86_64.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
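A hedged sketch of what a 16-byte stash could look like; the real &libeth_xdp_stash layout may differ, this only illustrates the "store just enough to re-create the xdp_buff next poll" idea:

    struct my_xdp_stash {
        void *data;          /* xdp_buff::data */
        u16   headroom;      /* data - data_hard_start */
        u16   len;           /* data_end - data */
        u32   frame_sz:24;   /* truesize of the head buffer */
        u32   flags:8;       /* e.g. the frags bit */
    }; /* 16 bytes on 64-bit, vs. 50+ for a full &xdp_buff */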
2025-06-16  libeth: xdp: add XDPSQ cleanup timers  (Alexander Lobakin)

When XDP Tx queues are not interrupt-driven but use lazy cleaning, i.e. clean only when there are fewer than `threshold` free descriptors left, we also need cleanup timers to avoid &xdp_buff and &xdp_frame stalling for too long, especially with Page Pool (it warns about inflight pages every 60 seconds). Say we sent 256 frames and don't need to send more, but we clean only when the number of pending items >= 384: those 256 will stall until 128 more are sent. For this, add simple helpers to run a timer which will clean the queue regardless, 1 second after the last send. The timer is armed when finalizing the queue; as long as there is regular active traffic, it doesn't fire.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
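The timer scheme itself is plain kernel timer usage; a minimal sketch with hypothetical my_* names (not the libeth_xdp helpers):

    #include <linux/timer.h>

    static void my_xdpsq_timer(struct timer_list *t)
    {
        struct my_xdpsq *sq = timer_container_of(sq, t, timer);

        my_xdpsq_clean(sq);    /* complete whatever is still pending */
    }

    static void my_xdpsq_init_timer(struct my_xdpsq *sq)
    {
        timer_setup(&sq->timer, my_xdpsq_timer, 0);
    }

    /* called when finalizing the queue after a send */
    static void my_xdpsq_kick_timer(struct my_xdpsq *sq)
    {
        /* re-arm: fires only if no new send happens for 1 second */
        mod_timer(&sq->timer, jiffies + HZ);
    }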
2025-06-16  libeth: xdp: add XDPSQ locking helpers  (Alexander Lobakin)

Unfortunately, it's not always possible to allocate max(num_rxqs, nr_cpu_ids) XDPSQs, even on high-end NICs. To mitigate this, add simple locking helpers to libeth_xdp. As long as XDPSQs are not shared, the whole functionality is gated behind a static lock. Otherwise, each bulk flush locks the queue for the time of cleaning and filling the descriptors. As long as a particular queue is used by no more than 1 CPU, the impact is minimal (a runtime boolean check, twice per 16+ descriptors).

Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # static key
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
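A minimal sketch of the static-key-gated locking pattern, with hypothetical my_* names (the actual libeth_xdp helpers differ):

    #include <linux/jump_label.h>
    #include <linux/spinlock.h>

    DEFINE_STATIC_KEY_FALSE(my_xdpsq_share);    /* enabled iff SQs are shared */

    static void my_xdpsq_lock(struct my_xdpsq *sq)
    {
        /* patched out to a NOP while no device shares XDPSQs */
        if (static_branch_unlikely(&my_xdpsq_share) && sq->share)
            spin_lock(&sq->lock);
    }

    static void my_xdpsq_unlock(struct my_xdpsq *sq)
    {
        if (static_branch_unlikely(&my_xdpsq_share) && sq->share)
            spin_unlock(&sq->lock);
    }

    static void my_xdp_flush_bulk(struct my_xdpsq *sq)
    {
        my_xdpsq_lock(sq);
        /* ... clean + fill up to 16 descriptors ... */
        my_xdpsq_unlock(sq);
    }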
2025-06-16  libeth: xdp: add XDPSQE completion helpers  (Alexander Lobakin)

Similarly to libeth_tx_complete(), add libeth_xdp_complete_tx() to handle XDP_TX and xmit buffers. Both use bulk return under the hood. Also add the out-of-line libeth_tx_complete_any(), which handles both regular and XDP frames (if libeth_xdp is loaded), for example to call on queue destroy, where we don't need inlining, just convenience.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16  libeth: xdp: add .ndo_xdp_xmit() helpers  (Alexander Lobakin)

Add helpers for implementing .ndo_xdp_xmit(). Same as for XDP_TX, accumulate up to 16 DMA-mapped frames on the stack, then flush. If DMA mapping fails for some reason, don't try mapping further frames, but still flush what was already prepared. The DMA address of a head frame is stored in its headroom, assuming it has enough room for an 8 (or 4) byte value. In addition to the @prep and @xmit driver callbacks used for XDP_TX, xmit also needs @finalize to kick the XDPSQ after filling.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
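A hedged sketch of that flow; my_xmit_flush/my_xmit_finalize are hypothetical stand-ins for the driver callbacks, and the real libeth_xdp helpers are more involved:

    static int my_ndo_xdp_xmit(struct net_device *dev, int n,
                               struct xdp_frame **frames, u32 flags)
    {
        struct device *dma_dev = dev->dev.parent;
        struct xdp_frame *bulk[16];    /* accumulated on the stack */
        int i, cnt = 0, sent = 0;

        for (i = 0; i < n; i++) {
            struct xdp_frame *xdpf = frames[i];
            dma_addr_t dma;

            dma = dma_map_single(dma_dev, xdpf->data, xdpf->len,
                                 DMA_TO_DEVICE);
            if (dma_mapping_error(dma_dev, dma))
                break;    /* don't map further, flush what we have */

            /* stash the DMA address in the head frame's headroom */
            *(dma_addr_t *)(xdpf + 1) = dma;

            bulk[cnt++] = xdpf;
            if (cnt == ARRAY_SIZE(bulk)) {
                sent += my_xmit_flush(dev, bulk, cnt);
                cnt = 0;
            }
        }

        if (cnt)
            sent += my_xmit_flush(dev, bulk, cnt);
        if (flags & XDP_XMIT_FLUSH)
            my_xmit_finalize(dev);    /* the @finalize callback: kick HW */

        return sent;
    }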
2025-06-16  libeth: xdp: add XDP_TX buffers sending  (Alexander Lobakin)

Start adding XDP-specific code to libeth, namely handling XDP_TX buffers (only sending). The idea is that we accumulate up to 16 buffers on the stack, then, when either the limit is reached or the polling is finished, flush them at once with only one XDPSQ cleaning (if needed). The main sending function is aware of the sending budget and already has all the info to send the buffers, so it can't fail. Drivers need to provide 2 inline callbacks to the main sending function: one for cleaning an XDPSQ and one for filling descriptors; the library code takes care of the rest. Note that unlike in the generic code, multi-buffer support here is not wrapped in unlikely(), to not hurt header split setups.

&libeth_xdp_buff is a simple extension over &xdp_buff which has a direct pointer to the corresponding Rx descriptor (and, luckily, is precisely 1 cacheline in size with 16-byte alignment on x86_64).

Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # xmit logic
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
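The accumulate-then-flush idea in miniature, with hypothetical my_* names for the two driver callbacks (the actual libeth_xdp structures are richer):

    struct my_tx_bulk {
        struct my_xdpsq *sq;
        u32              count;
        struct xdp_buff *bufs[16];
    };

    static void my_tx_bulk_flush(struct my_tx_bulk *bq)
    {
        /* driver callback #1: free enough descriptors, if needed */
        if (my_free_descs(bq->sq) < bq->count)
            my_clean_xdpsq(bq->sq);

        /* driver callback #2: fill one Tx descriptor per buffer */
        for (u32 i = 0; i < bq->count; i++)
            my_fill_desc(bq->sq, bq->bufs[i]);

        bq->count = 0;
    }

    static void my_tx_bulk_queue(struct my_tx_bulk *bq, struct xdp_buff *xdp)
    {
        bq->bufs[bq->count++] = xdp;

        /* flush once the on-stack limit is hit; the NAPI poll also
         * flushes the remainder once at the end of the loop
         */
        if (bq->count == ARRAY_SIZE(bq->bufs))
            my_tx_bulk_flush(bq);
    }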
2025-06-16  libeth: support native XDP and register memory model  (Alexander Lobakin)

Expand libeth's Page Pool functionality by adding native XDP support. This means picking the appropriate headroom and DMA direction. Also, register all the created &page_pools as XDP memory models. A driver can then call xdp_rxq_info_attach_page_pool() when registering its RxQ info.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
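A minimal sketch of the driver-side flow this enables; the rxq field names are illustrative, the xdp_rxq_info_* calls are the standard core API:

    static int my_rxq_reg(struct my_rxq *rxq, struct net_device *netdev,
                          u32 napi_id)
    {
        int err;

        err = xdp_rxq_info_reg(&rxq->xdp_rxq, netdev, rxq->idx, napi_id);
        if (err)
            return err;

        /* attach the libeth-created Page Pool as the memory model */
        xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, rxq->pp);
        return 0;
    }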
2025-06-16  libeth: convert to netmem  (Alexander Lobakin)

Back when the libeth Rx core was initially written, devmem was a draft and netmem_ref didn't exist in the mainline. Now that it's here, make libeth MP-agnostic before introducing any new code or new library users. When it's known that the created PP/FQ is for header buffers, use the faster "unsafe" underscored netmem <-> virt accessors, as netmem_is_net_iov() is always false in that case but consumes some cycles (a bit test + true branch).

Reviewed-by: Mina Almasry <almasrymina@google.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16  libeth, libie: clean symbol exports up a little  (Alexander Lobakin)

Change EXPORT_SYMBOL_NS_GPL(x, "LIBETH") to EXPORT_SYMBOL_GPL(x) + DEFAULT_SYMBOL_NAMESPACE "LIBETH" to make the code more compact. Also, explicitly include <linux/export.h> to satisfy new requirements from scripts/misc-check.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
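The pattern in miniature (the exported symbol name below is illustrative, not necessarily one libeth exports):

    /* before: namespace repeated at every export site */
    EXPORT_SYMBOL_NS_GPL(libeth_rx_alloc, "LIBETH");

    /* after: define the default namespace once, before the include */
    #define DEFAULT_SYMBOL_NAMESPACE "LIBETH"
    #include <linux/export.h>

    EXPORT_SYMBOL_GPL(libeth_rx_alloc);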
2025-06-13  ice: add phase offset monitor for all PPS dpll inputs  (Arkadiusz Kubalewski)

Implement a new admin command and helper function to handle and obtain CGU measurements for input pins. Add new callback operations to control the dpll device-level feature "phase offset monitor", allowing it to be enabled or disabled. If the feature is enabled, provide users with measured phase offsets and notifications. Initialize the PPS DPLL with the new callback operations if the feature is supported by the firmware.

Reviewed-by: Milena Olech <milena.olech@intel.com>
Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
Acked-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Link: https://patch.msgid.link/20250612152835.1703397-4-arkadiusz.kubalewski@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-12  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)

Cross-merge networking fixes after downstream PR (net-6.16-rc2). No conflicts or adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-12  Merge tag 'net-6.16-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds)

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bluetooth and wireless.

  Current release - regressions:
   - af_unix: allow passing cred for embryo without SO_PASSCRED/SO_PASSPIDFD

  Current release - new code bugs:
   - eth: airoha: correct enable mask for RX queues 16-31
   - veth: prevent NULL pointer dereference in veth_xdp_rcv when peer disappears under traffic
   - ipv6: move fib6_config_validate() to ip6_route_add(), prevent invalid routes

  Previous releases - regressions:
   - phy: phy_caps: don't skip better duplex match on non-exact match
   - dsa: b53: fix untagged traffic sent via cpu tagged with VID 0
   - Revert "wifi: mwifiex: Fix HT40 bandwidth issue.", it caused transient packet loss, exact reason not fully understood, yet

  Previous releases - always broken:
   - net: clear the dst when BPF is changing skb protocol (IPv4 <> IPv6)
   - sched: sfq: fix a potential crash on gso_skb handling
   - Bluetooth: intel: improve rx buffer posting to avoid causing issues in the firmware
   - eth: intel: i40e: make reset handling robust against multiple requests
   - eth: mlx5: ensure FW pages are always allocated on the local NUMA node, even when the device is configured to 'serve' another node
   - wifi: ath12k: fix GCC_GCC_PCIE_HOT_RST definition for WCN7850, prevent kernel crashes
   - wifi: ath11k: avoid burning CPU in ath11k_debugfs_fw_stats_request() for 3 sec if fw_stats_done is not set"

* tag 'net-6.16-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (70 commits)
  selftests: drv-net: rss_ctx: Add test for ntuple rules targeting default RSS context
  net: ethtool: Don't check if RSS context exists in case of context 0
  af_unix: Allow passing cred for embryo without SO_PASSCRED/SO_PASSPIDFD.
  ipv6: Move fib6_config_validate() to ip6_route_add().
  net: drv: netdevsim: don't napi_complete() from netpoll
  net/mlx5: HWS, Add error checking to hws_bwc_rule_complex_hash_node_get()
  veth: prevent NULL pointer dereference in veth_xdp_rcv
  net_sched: remove qdisc_tree_flush_backlog()
  net_sched: ets: fix a race in ets_qdisc_change()
  net_sched: tbf: fix a race in tbf_change()
  net_sched: red: fix a race in __red_change()
  net_sched: prio: fix a race in prio_tune()
  net_sched: sch_sfq: reject invalid perturb period
  net: phy: phy_caps: Don't skip better duplex macth on non-exact match
  MAINTAINERS: Update Kuniyuki Iwashima's email address.
  selftests: net: add test case for NAT46 looping back dst
  net: clear the dst when changing skb protocol
  net/mlx5e: Fix number of lanes to UNKNOWN when using data_rate_oper
  net/mlx5e: Fix leak of Geneve TLV option object
  net/mlx5: HWS, make sure the uplink is the last destination
  ...
2025-06-11  igc: add preemptible queue support in mqprio  (Faizal Rahim)

igc already supports enabling MAC Merge for FPE. This patch adds support for preemptible queues in mqprio.

Tested preemption with mqprio by:

1. Enable FPE:

    ethtool --set-mm enp1s0 pmac-enabled on tx-enabled on verify-enabled on

2. Enable preemptible queue in mqprio:

    mqprio num_tc 4 map 0 1 2 3 0 0 0 0 0 0 0 0 0 0 0 0 \
           queues 1@0 1@1 1@2 1@3 \
           fp P P P E

Signed-off-by: Faizal Rahim <faizal.abdul.rahim@linux.intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Mor Bar-Gabay <morx.bar.gabay@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-11  igc: add preemptible queue support in taprio  (Faizal Rahim)

Changes:

1. Introduce a tx_enabled flag to control the preemptible queue. tx_enabled is set via the mmsv module based on multiple factors, including link up/down status, to determine if FPE is active or inactive.

2. Add a priority field to TXDCTL for the express queue to improve data fetch performance.

3. Block preemptible queue setup in taprio unless the reverse-tsn-txq-prio private flag is set. This encourages adoption of the standard queue priority scheme for new features.

4. Hardware-padded frames from preemptible queues result in incorrect mCRC values, as padding bytes are excluded from the computation. Pad frames to at least 60 bytes using skb_padto() before transmission to ensure the hardware includes padding in the mCRC calculation.

Tested preemption with taprio by:

1. Enable FPE:

    ethtool --set-mm enp1s0 pmac-enabled on tx-enabled on verify-enabled on

2. Enable the private flag to reverse TX queue priority:

    ethtool --set-priv-flags enp1s0 reverse-tsn-txq-prio on

3. Enable preemptible queue in taprio:

    taprio num_tc 4 map 0 1 2 3 0 0 0 0 0 0 0 0 0 0 0 0 \
           queues 1@0 1@1 1@2 1@3 \
           fp P P P E

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Co-developed-by: Chwee-Lin Choong <chwee.lin.choong@intel.com>
Signed-off-by: Chwee-Lin Choong <chwee.lin.choong@intel.com>
Signed-off-by: Faizal Rahim <faizal.abdul.rahim@linux.intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Mor Bar-Gabay <morx.bar.gabay@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-11  igc: add private flag to reverse TX queue priority in TSN mode  (Faizal Rahim)

By default, igc assigns TX hw queue 0 the highest priority and queue 3 the lowest. This is the opposite of most NICs, where TX hw queue 3 has the highest priority and queue 0 the lowest.

mqprio in igc already uses TX arbitration unconditionally to reverse TX queue priority when mqprio is enabled. The TX arbitration logic does not require a private flag, because mqprio was added recently and no known users depend on the default queue ordering, which differs from the typical convention.

taprio does not use TX arbitration, so it inherits the default igc TX queue priority order. This causes tc command inconsistencies when configuring frame preemption with taprio compared to mqprio in igc. Other tc command inconsistencies and configuration issues already exist when using taprio on igc compared to other network controllers. These issues are described in a later section.

To harmonize TX queue priority behavior between taprio and mqprio, and to fix these issues without breaking long-standing taprio use cases, this patch adds a new private flag, called reverse-tsn-txq-prio, to reverse the TX queue priority. It makes queue 3 the highest and queue 0 the lowest, reusing the TX arbitration logic already used by mqprio.

Users must set the private flag when enabling frame preemption with taprio to follow the standard convention. Doing so promotes adoption of the correct priority model for new features while preserving compatibility with legacy configurations.

This new private flag addresses:

1. Non-standard socket -> tc -> TX hw queue mapping for taprio in igc

   Without the private flag:
   - taprio maps (socket -> tc -> TX hardware queue) differently on igc compared to other network controllers
   - on igc, mqprio maps tc differently from taprio, since mqprio already uses TX arbitration

   The following examples compare taprio configuration on igc and other network controllers:

   a) On other NICs (TX hw queue 3 is highest priority):

          taprio num_tc 4 map 0 1 2 3 .... \
                 queues 1@0 1@1 1@2 1@3

      Mapping translates to:
          socket 0 -> tc 0 -> queue 0
          socket 3 -> tc 3 -> queue 3

      This is the normal mapping that respects the standard convention:
          higher socket number -> higher tc -> higher priority TX hw queue

   b) On igc (TX hw queue 0 is highest priority by default):

          taprio num_tc 4 map 3 2 1 0 .... \
                 queues 1@0 1@1 1@2 1@3

      Mapping translates to:
          socket 0 -> tc 3 -> queue 3
          socket 3 -> tc 0 -> queue 0

      This igc tc mapping example is based on Intel's TSN validation test case, where a higher socket priority maps to a higher priority queue. It respects the mapping:
          higher socket number -> higher priority TX hw queue
      but breaks the expected ordering:
          higher tc -> higher priority TX hw queue
      as defined in [Ref1]. This custom mapping complicates common taprio setup across NICs.

2. Non-standard frame preemption mapping for taprio in igc

   Without the private flag:
   - compared to other network controllers, taprio on igc must flip the expected fp sequence, since express traffic is expected to map to the highest priority queue and preemptible traffic to lower ones
   - on igc, frame preemption configuration for mqprio differs from taprio, since mqprio already uses TX arbitration

   The following examples compare taprio frame preemption configuration on igc and other network controllers:

   a) On other NICs (TX hw queue 3 is highest priority):

          taprio num_tc 4 map ..... \
                 queues 1@0 1@1 1@2 1@3 \
                 fp P P P E

      Mapping translates to:
          tc0, tc1, tc2 -> preemptible -> queue 0, 1, 2
          tc3 -> express -> queue 3

      This is the normal mapping that respects the standard convention:
          higher tc -> express traffic -> higher priority TX hw queue
          lower tc -> preemptible traffic -> lower priority TX hw queue

   b) On igc (TX hw queue 0 is highest priority by default):

          taprio num_tc 4 map ...... \
                 queues 1@0 1@1 1@2 1@3 \
                 fp E P P P

      Mapping translates to:
          tc0 -> express -> queue 0
          tc1, tc2, tc3 -> preemptible -> queue 1, 2, 3

      This inversion respects the mapping of:
          express traffic -> higher priority TX hw queue
      but breaks the expected ordering:
          higher tc -> express traffic
      as defined in [Ref1], where a higher tc indicates higher priority. In this case, the lower tc0 is assigned to express traffic. This custom mapping further complicates common preemption setup across NICs.

Tests were performed on taprio with the following combinations, where two apps send traffic simultaneously on different queues:

    Private Flag    Traffic Sent By            Traffic Sent By
    ----------------------------------------------------------------
    enabled         iperf3 (queue 3)           iperf3 (queue 0)
    disabled        iperf3 (queue 0)           iperf3 (queue 3)
    enabled         iperf3 (queue 3)           real-time app (queue 0)
    disabled        iperf3 (queue 0)           real-time app (queue 3)
    enabled         real-time app (queue 3)    iperf3 (queue 0)
    disabled        real-time app (queue 0)    iperf3 (queue 3)
    enabled         real-time app (queue 3)    real-time app (queue 0)
    disabled        real-time app (queue 0)    real-time app (queue 3)

The private flag is controlled with:

    ethtool --set-priv-flags enp1s0 reverse-tsn-txq-prio <on|off>

[Ref1] IEEE 802.1Q clause 8.6.8 Transmission selection: "For a given Port and traffic class, frames are selected from the corresponding queue for transmission if and only if: ... b) For each queue corresponding to a numerically higher value of traffic class supported by the Port, the operation of the transmission selection algorithm supported by that queue determines that there is no frame available for transmission."

Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Faizal Rahim <faizal.abdul.rahim@linux.intel.com>
Tested-by: Mor Bar-Gabay <morx.bar.gabay@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-11  igc: assign highest TX queue number as highest priority in mqprio  (Faizal Rahim)

Previously, TX arbitration prioritized queues based on the TC they were mapped to: a queue mapped to TC 3 had higher priority than one mapped to TC 0. To improve code reuse for upcoming patches and align with typical NIC behavior, this patch updates the logic to prioritize higher queue numbers when mqprio is used. As a result, queue 0 becomes the lowest priority and queue 3 becomes the highest.

This patch also introduces igc_tsn_is_tc_to_queue_priority_ordered() to preserve the original TC-based priority rule and reject configurations where a higher TC maps to a lower queue offset.

Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Faizal Rahim <faizal.abdul.rahim@linux.intel.com>
Tested-by: Mor Bar-Gabay <morx.bar.gabay@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-11  igc: refactor TXDCTL macros to use FIELD_PREP and GENMASK  (Faizal Rahim)

Refactor TXDCTL macro handling to use the FIELD_PREP and GENMASK macros. This prepares the code for adding a new TXDCTL priority field in an upcoming patch. Verified that the macro values remain unchanged before and after the refactoring.

Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Faizal Rahim <faizal.abdul.rahim@linux.intel.com>
Tested-by: Mor Bar-Gabay <morx.bar.gabay@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
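A before/after illustration of the refactoring style (the field name and bit range below are an example, not necessarily igc's actual TXDCTL layout):

    #include <linux/bitfield.h>
    #include <linux/bits.h>

    /* before: open-coded, pre-shifted value */
    #define IGC_TXDCTL_WTHRESH_4     0x00040000

    /* after: mask + FIELD_PREP, ready for new fields such as priority */
    #define IGC_TXDCTL_WTHRESH_MASK  GENMASK(20, 16)
    #define IGC_TXDCTL_WTHRESH(x)    FIELD_PREP(IGC_TXDCTL_WTHRESH_MASK, (x))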
2025-06-11  igc: add DCTL prefix to related macros  (Faizal Rahim)

Rename macros to use the DCTL prefix for consistency with existing macros that reference the same register. This prepares for an upcoming patch that adds new fields to TXDCTL.

Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Faizal Rahim <faizal.abdul.rahim@linux.intel.com>
Tested-by: Mor Bar-Gabay <morx.bar.gabay@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-11  igc: move TXDCTL and RXDCTL related macros  (Faizal Rahim)

Move and consolidate the TXDCTL and RXDCTL macros in preparation for upcoming TXDCTL changes. This improves organization and readability.

Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Faizal Rahim <faizal.abdul.rahim@linux.intel.com>
Tested-by: Mor Bar-Gabay <morx.bar.gabay@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-10  e1000: Move cancel_work_sync to avoid deadlock  (Joe Damato)

Previously, e1000_down called cancel_work_sync for the e1000 reset task (via e1000_down_and_stop), which takes RTNL. As reported by users and syzbot, a deadlock is possible in the following scenario:

    CPU 0:
     - RTNL is held
     - e1000_close
       - e1000_down
         - cancel_work_sync (cancel / wait for e1000_reset_task())

    CPU 1:
     - process_one_work
       - e1000_reset_task
         - take RTNL

To remedy this, avoid calling cancel_work_sync from e1000_down (e1000_reset_task does nothing if the device is down anyway). Instead, call cancel_work_sync for e1000_reset_task when the device is being removed.

Fixes: e400c7444d84 ("e1000: Hold RTNL when e1000_down can be called")
Reported-by: syzbot+846bb38dc67fe62cc733@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/683837bf.a00a0220.52848.0003.GAE@google.com/
Reported-by: John <john.cs.hey@gmail.com>
Closes: https://lore.kernel.org/netdev/CAP=Rh=OEsn4y_2LvkO3UtDWurKcGPnZ_NPSXK=FbgygNXL37Sw@mail.gmail.com/
Signed-off-by: Joe Damato <jdamato@fastly.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
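A hedged sketch of the relocation (simplified, not the full diff; the surrounding teardown code is elided):

    static void e1000_down_and_stop(struct e1000_adapter *adapter)
    {
        /* cancel_work_sync(&adapter->reset_task) no longer done here:
         * reset_task takes RTNL, and this path runs under RTNL
         */
        cancel_delayed_work_sync(&adapter->watchdog_task);
    }

    static void e1000_remove(struct pci_dev *pdev)
    {
        struct net_device *netdev = pci_get_drvdata(pdev);
        struct e1000_adapter *adapter = netdev_priv(netdev);

        /* safe to wait here: remove does not hold RTNL */
        cancel_work_sync(&adapter->reset_task);
        /* ... rest of teardown ... */
    }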
2025-06-10  ice/ptp: fix crosstimestamp reporting  (Anton Nadezhdin)

Set use_nsecs=true, as the timestamp is reported in ns. The lack of this results in a smaller timestamp error window, which causes an error during phc2sys execution on E825 NICs:

    phc2sys[1768.256]: ioctl PTP_SYS_OFFSET_PRECISE: Invalid argument

This problem was introduced in the cited commit, which omitted setting use_nsecs to true when converting the ice driver to use convert_base_to_cs().

Testing hints (ethX is the PF netdev):

    phc2sys -s ethX -c CLOCK_REALTIME -O 37 -m
    phc2sys[1769.256]: CLOCK_REALTIME phc offset -5 s0 freq -0 delay 0

Fixes: d4bea547ebb57 ("ice/ptp: Remove convert_art_to_tsc()")
Signed-off-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
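A hedged sketch of the shape of such a fix, in the callback a driver passes to get_device_system_crosststamp(); my_read_device_and_art() and the exact field use are illustrative, not the ice code:

    static int my_get_time_fn(ktime_t *device_time,
                              struct system_counterval_t *scv, void *ctx)
    {
        u64 dev_ns, art_ns;

        my_read_device_and_art(ctx, &dev_ns, &art_ns);    /* hypothetical */

        *device_time = ns_to_ktime(dev_ns);
        *scv = (struct system_counterval_t){
            .cycles    = art_ns,
            .cs_id     = CSID_X86_ART,
            .use_nsecs = true,    /* the fix: value is ns, not raw cycles */
        };
        return 0;
    }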
2025-06-10  iavf: fix reset_task for early reset event  (Ahmed Zaki)

If a reset event is received from the PF early in the init cycle, the state machine hangs for about 25 seconds.

Reproducer:

    echo 1 > /sys/class/net/$PF0/device/sriov_numvfs
    ip link set dev $PF0 vf 0 mac $NEW_MAC

The log shows:

    [792.620416] ice 0000:5e:00.0: Enabling 1 VFs
    [792.738812] iavf 0000:5e:01.0: enabling device (0000 -> 0002)
    [792.744182] ice 0000:5e:00.0: Enabling 1 VFs with 17 vectors and 16 queues per VF
    [792.839964] ice 0000:5e:00.0: Setting MAC 52:54:00:00:00:11 on VF 0. VF driver will be reinitialized
    [813.389684] iavf 0000:5e:01.0: Failed to communicate with PF; waiting before retry
    [818.635918] iavf 0000:5e:01.0: Hardware came out of reset. Attempting reinit.
    [818.766273] iavf 0000:5e:01.0: Multiqueue Enabled: Queue pair count = 16

Fix it by scheduling the reset task and making the reset task capable of resetting early in the init cycle.

Fixes: ef8693eb90ae3 ("i40evf: refactor reset handling")
Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com>
Tested-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Marcin Szycik <marcin.szycik@linux.intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-10  i40e: retry VFLR handling if there is ongoing VF reset  (Robert Malz)

When a VFLR interrupt is received during a VF reset initiated from a different source, the VFLR may not be fully handled. This can leave the VF in an undefined state. To address this, set the I40E_VFLR_EVENT_PENDING bit again during VFLR handling if the reset is not yet complete. This ensures the driver will properly complete the VF reset in such scenarios.

Fixes: 52424f974bc5 ("i40e: Fix VF hang when reset is triggered on another VF")
Signed-off-by: Robert Malz <robert.malz@canonical.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-10  i40e: return false from i40e_reset_vf if reset is in progress  (Robert Malz)

The function i40e_vc_reset_vf attempts, up to 20 times, to handle a VF reset request, using the return value of i40e_reset_vf as an indicator of whether the reset was successfully triggered. Currently, i40e_reset_vf always returns true, which causes new reset requests to be ignored if a different VF reset is already in progress.

This patch updates the return value of i40e_reset_vf to reflect when another VF reset is in progress, allowing the caller to properly use the retry mechanism.

Fixes: 52424f974bc5 ("i40e: Fix VF hang when reset is triggered on another VF")
Signed-off-by: Robert Malz <robert.malz@canonical.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
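A hedged, simplified sketch of the change's shape (the in-progress check is assumed to be a pf->state bit test; the real function does much more):

    bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
    {
        struct i40e_pf *pf = vf->pf;

        /* another VF reset is already in progress: report failure so
         * the caller's 20-attempt retry loop can try again, instead of
         * pretending the reset was triggered
         */
        if (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
            return false;

        /* ... perform the reset ... */

        clear_bit(__I40E_VF_DISABLE, pf->state);
        return true;
    }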
2025-06-09  ixgbe: Fix typos and clarify comments in X550 driver code  (Alok Tiwari)

Corrected spelling errors such as "simular" -> "similar", "excepted" -> "accepted", and "Determime" -> "Determine". Fixed incorrect word usage ("to MAC" -> "two MAC") and improved awkward phrasing. Aligned function header descriptions with their actual functionality (e.g., "Writes a value" -> "Reads a value"). Corrected a typo in an error code from -ENIVAL to -EINVAL. Improved overall clarity and consistency of comments across various functions.

These changes improve the maintainability and readability of the code without affecting functionality.

Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-09  iavf: convert to NAPI IRQ affinity API  (Ahmed Zaki)

Commit bd7c00605ee0 ("net: move aRFS rmap management and CPU affinity to core") allows drivers to delegate IRQ affinity to the NAPI instance. However, the driver needs to use a persistent NAPI config and explicitly set/unset the NAPI<->IRQ association. Convert to the new IRQ affinity API.

Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
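A minimal sketch of the new API usage; the adapter/vector field names are hypothetical, the netif_napi_* calls are the core API:

    static void my_setup_vector(struct my_adapter *ad, int v_idx, int irq)
    {
        struct napi_struct *napi = &ad->q_vectors[v_idx].napi;

        /* persistent per-queue NAPI config keeps state across resets */
        netif_napi_add_config(ad->netdev, napi, my_napi_poll, v_idx);

        /* let the core manage the IRQ affinity (and aRFS rmap) */
        netif_napi_set_irq(napi, irq);
    }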
2025-06-09  ice: add a separate Rx handler for flow director commands  (Michal Kubiak)

The "ice" driver implementation uses the control VSI to handle the flow director configuration for PFs and VFs. Unfortunately, although a separate VSI type was created to handle flow director queues, the Rx queue handler was shared between the flow director and a standard NAPI Rx handler. Such a design approach was not very flexible. First, it mixed hotpath and slowpath code, blocking their further optimization. It also created a huge overkill for the flow director command processing, which is descriptor-based only, so there is no need to allocate Rx data buffers.

For the above reasons, implement a separate Rx handler for the control VSI. Also, remove from the NAPI handler the code dedicated to configuring flow director rules on VFs. Do not allocate Rx data buffers for the flow director queues because their processing is descriptor-based only. Finally, allow Rx data queues to be allocated only for VSIs that have a netdev assigned to them.

This handler splitting approach is the first step in converting the driver to use the Page Pool (which can only be used for data queues).

Test hints:

1. Create a VF for any PF managed by the ice driver.
2. In a loop, add and delete flow director rules for the VF, e.g.:

    for i in {1..128}; do
        q=$(( i % 16 ))
        ethtool -N ens802f0v0 flow-type tcp4 dst-port "$i" action "$q"
    done

    for i in {0..127}; do
        ethtool -N ens802f0v0 delete "$i"
    done

Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Suggested-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-09  ice: change SMA pins to SDP in PTP API  (Karol Kolacinski)

This change aligns E810 PTP pin control to all other products. Currently, SMA/U.FL port expanders are controlled together with SDP pins connected to the 1588 clock. To align this, separate this control by exposing only the SDP20..23 pins in the PTP API on adapters with DPLL.

Clear the error for all E810 on an absent NVM pin section or other errors, to allow proper initialization on SMA E810 without an NVM section. Use ARRAY_SIZE for the pin array instead of an internal definition.

Reviewed-by: Milena Olech <milena.olech@intel.com>
Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-09  ice: redesign dpll sma/u.fl pins control  (Arkadiusz Kubalewski)

The DPLL-enabled E810 NIC driver provides the user with a list of input and output pins. The hardware's internal design impacts user control over the SMA and U.FL pins. Currently the end-user view on those dpll pins doesn't provide any layer of abstraction. On the hardware level, SMA and U.FL pins are tied together due to the existence of direction control logic for each pair:

 - SMA1 (bi-directional) and U.FL1 (only output)
 - SMA2 (bi-directional) and U.FL2 (only input)

User activity on each pin of the pair may impact the state of the other. Previously all the pins were provided to the user as-is, without control over the SMA pins' direction.

Introduce a software-controlled layer of abstraction over the external board pins, instead of providing the user with access to raw pins connected to the dpll:

 - new software-controlled SMA and U.FL pins,
 - callback operations directing user requests to the corresponding hardware pins according to the runtime configuration,
 - the ability to control SMA pin direction.

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-09  ixgbe: add link_down_events statistic  (Martyna Szapar-Mudlaw)

Introduce a link_down_events counter to the ixgbe driver, incremented each time the link transitions from up to down. This counter can help diagnose issues related to link stability, such as port flapping or unexpected link drops. The value is exposed via ethtool's get_link_ext_stats() interface.

Reviewed-by: Kory Maincent <kory.maincent@bootlin.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
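Exposing such a counter uses the standard ethtool op; a minimal sketch with hypothetical driver field names (the same pattern applies to the i40e and ice entries below):

    static void my_get_link_ext_stats(struct net_device *netdev,
                                      struct ethtool_link_ext_stats *stats)
    {
        struct my_adapter *adapter = netdev_priv(netdev);

        stats->link_down_events = adapter->link_down_events;
    }

    static const struct ethtool_ops my_ethtool_ops = {
        .get_link_ext_stats = my_get_link_ext_stats,
        /* ... */
    };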
2025-06-09  i40e: add link_down_events statistic  (Dawid Osuchowski)

Introduce a link_down_events counter to the i40e driver, incremented each time the link transitions from up to down. This counter can help diagnose issues related to link stability, such as port flapping or unexpected link drops. The value is exposed via ethtool's get_link_ext_stats() interface.

Co-developed-by: Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com>
Signed-off-by: Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Signed-off-by: Dawid Osuchowski <dawid.osuchowski@linux.intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-09  ice: add link_down_events statistic  (Martyna Szapar-Mudlaw)

Introduce a link_down_events counter to the ice driver, incremented each time the link transitions from up to down. This counter can help diagnose issues related to link stability, such as port flapping or unexpected link drops. The value is exposed via ethtool's get_link_ext_stats() interface.

Reviewed-by: Kory Maincent <kory.maincent@bootlin.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-09  net: intel: move RSS packet classifier types to libie  (Jacob Keller)

The Intel i40e, iavf, and ice drivers all include a definition of the packet classifier filter types used to program RSS hash enable bits. For i40e, these bits are used for both the PF and VF to configure the PFQF_HENA and VFQF_HENA registers. For ice and iAVF, these bits are used to communicate the desired hash enable filter over virtchnl via its struct virtchnl_rss_hashena. The virtchnl.h header makes no mention of where the bit definitions reside.

Maintaining a separate copy of these bits across three drivers is cumbersome. Move the definition to libie as a new pctype.h header file. Each driver can include this, and drop its own definition.

The ice implementation also defined an ICE_AVF_FLOW_FIELD_INVALID, intending to use this to indicate when there were no hash enable bits set. This is confusing, since the enumeration uses bit positions: a value of 0 *should* indicate the first bit. Instead, rewrite the code that uses ICE_AVF_FLOW_FIELD_INVALID to just check if the avf_hash is zero. From context it should be clear that we're checking if none of the bits are set.

The values are kept as bit positions instead of encoding BIT_ULL directly into their value. While most users will simply use BIT_ULL immediately, i40e uses the macros both with BIT_ULL and with test_bit/set_bit calls.

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
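A hedged sketch of the bit-position design; the enum/value names below follow the X710 pctype convention but are illustrative of the new header's shape, not an exact copy of it:

    enum libie_filter_pctype {
        LIBIE_FILTER_PCTYPE_NONF_IPV4_UDP = 31,    /* bit positions, */
        LIBIE_FILTER_PCTYPE_NONF_IPV4_TCP = 33,    /* not BIT_ULL()  */
        /* ... */
    };

    static u64 my_build_hashcfg(unsigned long *flow_types)
    {
        u64 hashcfg = 0;

        /* positions work with both BIT_ULL() and the bitmap API: */
        hashcfg |= BIT_ULL(LIBIE_FILTER_PCTYPE_NONF_IPV4_TCP);
        set_bit(LIBIE_FILTER_PCTYPE_NONF_IPV4_UDP, flow_types);

        return hashcfg;
    }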
2025-06-09  net: intel: rename 'hena' to 'hashcfg' for clarity  (Jacob Keller)

i40e, ice, and iAVF all use 'hena' as a shorthand for the "hash enable" configuration. This comes originally from the X710 datasheet's 'xxQF_HENA' registers. In the context of the registers the meaning is fairly clear. However, on its own, hena is a weird name that can be difficult to understand. This is especially true in ice: the E810 hardware doesn't even have registers with HENA in the name.

Replace the shorthand 'hena' with 'hashcfg'. This makes it clear the variables deal with the hash configuration, not just a single boolean on/off for all hashing.

Do not update the register names. These come directly from the datasheets for X710 and X722, and it is more important that the names can be searched.

Suggested-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-08  treewide, timers: Rename from_timer() to timer_container_of()  (Ingo Molnar)

Move this API to the canonical timer_*() namespace.

[ tglx: Redone against pre rc1 ]

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/aB2X0jCKQO56WdMt@gmail.com
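The rename in practice, on a generic timer callback (struct and member names here are an example):

    static void my_timer_fn(struct timer_list *t)
    {
        /* before: struct my_ctx *ctx = from_timer(ctx, t, timer); */
        struct my_ctx *ctx = timer_container_of(ctx, t, timer);

        /* ... */
    }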
2025-06-03  iavf: get rid of the crit lock  (Przemek Kitszel)

Get rid of the crit lock. That frees us from the error-prone logic of try_locks. Thanks to netdev_lock() by Jakub it is now easy, and in most cases we were protected by it already; replace the crit lock with the netdev lock where that was not the case.

Lockdep reports that we should cancel the work under crit_lock [splat1], and that was the scheme we have mostly followed since [1] by Slawomir. But when that is done, we still get into deadlocks [splat2]. So instead we should look at the bigger problem, namely the "weird locking/scheduling" of iavf. The first step to fix that is to remove the crit lock. I will follow up with a -next series that simplifies scheduling/tasks.

Cancel the work without the netdev lock (a weird unlock+lock scheme), to fix [splat2] (which would be totally ugly if we had kept the crit lock). Extend the protected part of iavf_watchdog_task() to include scheduling more work.

Note that the removed comment in iavf_reset_task() was misplaced; it belonged inside the removed if condition, so it's gone now.

[splat1] - w/o this patch - the deadlock during VF removal:

    WARNING: possible circular locking dependency detected
    sh/3825 is trying to acquire lock:
    ((work_completion)(&(&adapter->watchdog_task)->work)){+.+.}-{0:0}, at: start_flush_work+0x1a1/0x470
    but task is already holding lock:
    (&adapter->crit_lock){+.+.}-{4:4}, at: iavf_remove+0xd1/0x690 [iavf]
    which lock already depends on the new lock.

[splat2] - when cancelling work under crit lock, w/o this series, see [2] for the band-aid attempt:

    WARNING: possible circular locking dependency detected
    sh/3550 is trying to acquire lock:
    ((wq_completion)iavf){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x26/0x90
    but task is already holding lock:
    (&dev->lock){+.+.}-{4:4}, at: iavf_remove+0xa6/0x6e0 [iavf]
    which lock already depends on the new lock.

[1] fc2e6b3b132a ("iavf: Rework mutexes for better synchronisation")
[2] https://github.com/pkitszel/linux/commit/52dddbfc2bb60294083f5711a158a

Fixes: d1639a17319b ("iavf: fix a deadlock caused by rtnl and driver's lock circular dependencies")
Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
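A hedged sketch of the replacement pattern (generic shape, not iavf's exact code; netdev_lock()/netdev_unlock() are the core instance-lock API):

    static void my_watchdog_task(struct work_struct *work)
    {
        struct my_adapter *adapter = container_of(to_delayed_work(work),
                                                  struct my_adapter,
                                                  watchdog_task);
        struct net_device *netdev = adapter->netdev;

        /* instance lock instead of a driver-private crit_lock */
        netdev_lock(netdev);
        /* ... state machine, incl. scheduling more work ... */
        netdev_unlock(netdev);
    }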
2025-06-03  iavf: sprinkle netdev_assert_locked() annotations  (Przemek Kitszel)

Lockdep annotations help in general, but here they are extra good, as the next commit will remove the crit lock.

Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-03  iavf: extract iavf_watchdog_step() out of iavf_watchdog_task()  (Przemek Kitszel)

Finish up the easy refactor of watchdog_task; the total for this and the previous two commits is: 1 file changed, 47 insertions(+), 82 deletions(-).

Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-03  iavf: simplify watchdog_task in terms of adminq task scheduling  (Przemek Kitszel)

Simplify the decision whether to schedule the adminq task. The condition is the same, but it is executed in more scenarios. Note that the movement of the watchdog_done label makes this commit a bit surprising (hence not squashing it into anything bigger).

Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>