|
Make phy_package a separate module, so that this code is only loaded
if needed.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://patch.msgid.link/66bb4cce-b6a3-421e-9a7b-5d4a0c75290e@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Move both functions to phy_package.c, so that phy_core.c no longer
has a dependency on phy_package.c (phy_package_address).
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://patch.msgid.link/8956fa53-3eda-4079-8203-a8fddcc17bf3@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
It appears that the GMAC_ANE_ADV and GMAC_ANE_LPA registers are only
available for TBI and RTBI PHY interfaces. In commit 482b3c3ba757
("net: stmmac: Drop TBI/RTBI PCS flags") support for these was dropped,
and thus it no longer makes sense to access these registers.
Remove the *_get_adv_lp() functions, and the now redundant struct
rgmii_adv and STMMAC_PCS_* definitions.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/E1uPkbT-004EyG-OQ@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
XSk FQ refill is pretty generic across the drivers, minus the FQ
descriptor filling, and can easily be unified with one inline callback.
XSk wakeup usually is not, but here, instead of the commonly used
"SW interrupts", I picked firing an IPI. In most tests, it showed better
performance; it also gives userspace better control over which CPU
will handle the xmit, as SW interrupts honor IRQ affinity no matter
which core produces XSk xmit descs (while XDPSQs are associated 1:1
with cores having the same ID).
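A minimal sketch of what firing such an IPI could look like, assuming a
per-XDPSQ call_single_data and a NAPI to reschedule on the owning CPU;
the "my_" names are hypothetical and this is not the actual driver code:

	#include <linux/netdevice.h>
	#include <linux/smp.h>

	/* Hypothetical per-XDPSQ state; only the CSD part matters here. */
	struct my_xsk_sq {
		call_single_data_t	csd;
		struct napi_struct	*napi;
		int			cpu;	/* CPU owning this XDPSQ 1:1 */
	};

	static void my_xsk_ipi(void *info)
	{
		struct my_xsk_sq *sq = info;

		napi_schedule(sq->napi);	/* runs on sq->cpu, in IRQ context */
	}

	static void my_xsk_sq_init_wakeup(struct my_xsk_sq *sq)
	{
		INIT_CSD(&sq->csd, my_xsk_ipi, sq);
	}

	static void my_xsk_wakeup(struct my_xsk_sq *sq)
	{
		/* Real code must not re-fire while the previous CSD is in flight. */
		smp_call_function_single_async(sq->cpu, &sq->csd);
	}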
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Add XSk counterparts for preparing an XSk &libeth_xdp_buff (adding head
and frags), running the program, and handling the verdict, including
XDP_PASS.
Shortcuts in comparison with regular Rx: frags and all verdicts except
XDP_REDIRECT are under unlikely() and out of line; no checks for XDP
program presence as it's always true for XSk.
Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # optimizations
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Reuse the core sending functions to send XSk xmit frames.
Both metadata and no-metadata pools/drivers are supported. libeth_xdp
also provides generic XSk metadata ops, currently with the checksum
offload only and for cases when the HW doesn't require supplying L3/L4
checksum offsets. Drivers are free to pass their own ops.
&libeth_xdp_tx_bulk is not used here as it would be redundant;
pool->tx_descs are accessed directly.
A fake "libeth_xsktmo" is needed to hide implementation details from the
drivers when they want to use the generic ops: the original struct is
defined in the same file where dev->xsk_tx_metadata_ops gets set, to
avoid duplicating the slow path; at the same time, the XSk xmit
functions use a local "fast" copy to inline the XMO callbacks.
The Tx descriptor filling loop is unrolled by 8.
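For illustration only, checksum-only XSk Tx metadata ops for HW that
needs no L3/L4 offsets could look roughly like this; the "my_" names are
hypothetical and this is not the libeth definition:

	#include <linux/netdevice.h>

	/* Hypothetical per-descriptor Tx context filled while building the desc. */
	struct my_xsk_tx_ctx {
		bool csum;
	};

	static void my_xsk_request_csum(u16 csum_start, u16 csum_offset, void *priv)
	{
		struct my_xsk_tx_ctx *ctx = priv;

		/* HW computes the checksum itself, so the offsets are not stored. */
		ctx->csum = true;
	}

	static const struct xsk_tx_metadata_ops my_xsktmo = {
		.tmo_request_checksum	= my_xsk_request_csum,
	};

	/* At probe time: netdev->xsk_tx_metadata_ops = &my_xsktmo; */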
Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # optimizations
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Add XSk counterparts for XDP_TX buffer sending and completion.
The same base structures and functions from the libeth_xdp core are
used, with the adjustment that XSk Rx always operates on &xdp_buff_xsk
for both head and frags. And unlike regular Rx, unlikely() is used here
for frags, as header split gives no benefit for XSk Rx, at least
for now.
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
End the XDP section by adding helpers to set up XDP features, flip
.ndo_xdp_xmit() support at runtime (for cases when it's not always on),
and calculate the queue clean/refill threshold.
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Running a prog and handling the verdicts, up to napi_gro_receive(),
is also pretty generic code that doesn't really differ between vendors
(except for Tx descriptor filling and Rx descriptor parsing).
Define a couple of inlines to do that. The inline callbacks a driver
needs to pass are mentioned above: Tx descriptor filling for XDP_TX,
populating an skb with the descriptor data for XDP_PASS, and finalizing
XDPSQs after the polling loop for XDP_TX (kicking the HW to start
sending).
The populate callback receives only &libeth_xdp_buff, assuming the
buff::desc pointer is enough, plus you can always get the corresponding
Rx queue structure via container_of(buff::rxq). If not, a driver can
extend the buff with more fields directly on the stack without touching
the libeth_xdp definitions.
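For illustration, a hedged sketch of getting back to a driver's own Rx
queue from the generic rxq pointer; the "my_" structures are assumptions,
not the actual libeth types:

	#include <linux/container_of.h>
	#include <net/xdp.h>

	/* Hypothetical driver Rx queue embedding the generic XDP RxQ info. */
	struct my_rx_queue {
		struct xdp_rxq_info	xdp_rxq;	/* what xdp_buff::rxq points at */
		/* driver-private fields follow */
	};

	static inline struct my_rx_queue *my_rxq_from_xdp(struct xdp_buff *xdp)
	{
		return container_of(xdp->rxq, struct my_rx_queue, xdp_rxq);
	}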
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Add convenience helpers to build an &xdp_buff. This means: general
initialization before the NAPI loop, adding the head, adding frags,
etc. libeth_xdp_process_buff() is the same as what everybody has in
their drivers:
	dma_sync_for_cpu();

	if (!frag) {
		add_head();
		prefetch();
	} else {
		add_frag();
	}
Note that I don't use net_prefetch(), sticking to the original
prefetch(). In none of my tests did prefetching 128 bytes yield better
perf than 64 bytes. That might differ if the headers are huge enough,
but then additional tunneling etc. overhead comes into play, so you
won't win a lot either way.
&libeth_xdp_stash is for cases when you exit the polling loop without
finishing building the buff. If that happens, you need to store the
buffer in the queue structure until the next loop and then restore it.
It makes no sense to place a whole &xdp_buff there. Define a minimal
structure, which stores only the fields essential to restore it.
I was able to pack it into 16 bytes, which is only 8 bytes bigger
than `struct sk_buff *skb` on x64.
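A hedged sketch of what such a 16-byte stash could look like on x86_64;
the field choice and names are illustrative, not the actual libeth
definition:

	#include <linux/types.h>

	/* Enough to rebuild an xdp_buff head on the next poll: where the data
	 * starts, how much headroom/length it had, and the frame/flag bits.
	 */
	struct my_xdp_buff_stash {
		void	*data;
		u16	headroom;
		u16	len;
		u32	frame_sz:24;
		u32	flags:8;
	};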
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
When XDP Tx queues are not interrupt-driven but use lazy cleaning,
i.e. clean only when there are fewer than `threshold` free descriptors
left, we also need cleanup timers to avoid &xdp_buff and &xdp_frame
stalling for too long, especially with Page Pool (it warns about
inflight pages every 60 seconds).
Let's say we sent 256 frames and don't need to send more, but we clean
only when the number of pending items >= 384. In that case, those 256
will stall until 128 more are sent. For this, add simple helpers to
run a timer which will clean the queue regardless, 1 second after the
last send.
The timer is armed when finalizing the queue. As long as there is
regular active traffic, the timer doesn't fire.
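A hedged sketch of the arm/expire pattern, with hypothetical "my_" names
and a driver-specific completion routine:

	#include <linux/timer.h>
	#include <linux/jiffies.h>

	struct my_xdpsq {
		struct timer_list	clean_timer;
		/* descriptors, counters, lock, ... */
	};

	/* Driver-specific completion routine (body not shown). */
	static void my_xdpsq_clean(struct my_xdpsq *sq) { }

	static void my_xdpsq_clean_timer(struct timer_list *t)
	{
		struct my_xdpsq *sq = from_timer(sq, t, clean_timer);

		/* Complete whatever is pending even if below the lazy threshold. */
		my_xdpsq_clean(sq);
	}

	/* Called when finalizing the queue after a send burst. */
	static void my_xdpsq_arm_timer(struct my_xdpsq *sq)
	{
		mod_timer(&sq->clean_timer, jiffies + HZ);	/* 1 second */
	}

	/* At queue init: timer_setup(&sq->clean_timer, my_xdpsq_clean_timer, 0); */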
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Unfortunately, it's not always possible to allocate
max(num_rxqs, nr_cpu_ids) XDPSQs even on high-end NICs.
To mitigate this, add simple locking helpers to libeth_xdp.
As long as XDPSQs are not shared, the whole functionality is gated
behind a static lock. Otherwise, each bulk flush locks the queue for
the time of cleaning and filling the descriptors.
As long as a particular queue is not used by more than 1 CPU,
the impact is minimal (a runtime boolean check twice per 16+
descriptors).
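As a hedged sketch of that gating (hypothetical names; the real libeth
helpers differ), assuming a static key flipped on once any XDPSQ becomes
shared:

	#include <linux/jump_label.h>
	#include <linux/spinlock.h>

	static DEFINE_STATIC_KEY_FALSE(my_xdpsq_shared);

	struct my_xdpsq {
		spinlock_t	lock;
		bool		share;	/* this particular queue is shared */
	};

	static inline void my_xdpsq_lock(struct my_xdpsq *sq)
	{
		if (static_branch_unlikely(&my_xdpsq_shared) && sq->share)
			spin_lock(&sq->lock);
	}

	static inline void my_xdpsq_unlock(struct my_xdpsq *sq)
	{
		if (static_branch_unlikely(&my_xdpsq_shared) && sq->share)
			spin_unlock(&sq->lock);
	}

	/* static_branch_inc(&my_xdpsq_shared) once a queue starts being shared. */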
Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # static key
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Similarly to libeth_tx_complete(), add libeth_xdp_complete_tx() to
handle XDP_TX and xmit buffers. Both use bulk return under the hood.
Also add out of line libeth_tx_complete_any() which handles both
regular and XDP frames (if libeth_xdp is loaded), for example,
to call on queue destroy, where we don't need inlining but
convenience.
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Add helpers for implementing .ndo_xdp_xmit().
Same as for XDP_TX, accumulate up to 16 DMA-mapped frames on the stack,
then flush. If DMA mapping fails for some reason, don't try mapping
further frames, but still flush what was already prepared.
The DMA address of a head frame is stored in its headroom, assuming it
has enough room for an 8 (or 4) byte value.
In addition to the @prep and @xmit driver callbacks used for XDP_TX,
xmit also needs @finalize to kick the XDPSQ after filling.
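A hedged sketch of stashing the head DMA address in the frame's headroom
(assuming enough headroom is present, as stated above; the helpers are
hypothetical):

	#include <net/xdp.h>

	static inline void my_xdp_xmit_stash_dma(struct xdp_frame *xdpf,
						 dma_addr_t dma)
	{
		/* The area right below xdpf->data is headroom and is ours to use. */
		*(dma_addr_t *)(xdpf->data - sizeof(dma_addr_t)) = dma;
	}

	static inline dma_addr_t my_xdp_xmit_fetch_dma(const struct xdp_frame *xdpf)
	{
		return *(const dma_addr_t *)(xdpf->data - sizeof(dma_addr_t));
	}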
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Start adding XDP-specific code to libeth, namely handling XDP_TX buffers
(only sending).
The idea is that we accumulate up to 16 buffers on the stack, then, if
either the limit is reached or the polling is finished, flush them at
once with only one XDPSQ cleaning (if needed). The main sending function
is aware of the sending budget and already has all the info to send the
buffers, so it can't fail.
Drivers need to provide 2 inline callbacks to the main sending function:
one for cleaning an XDPSQ and one for filling descriptors; the library
code takes care of the rest.
Note that unlike the generic code, multi-buffer support is not wrapped
here with unlikely() so as not to hurt header split setups.
&libeth_xdp_buff is a simple extension over &xdp_buff which has a direct
pointer to the corresponding Rx descriptor (and, luckily, is precisely
1 CL in size and 16-byte aligned on x86_64).
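A hedged sketch of the on-stack bulking idea; all "my_" names are
hypothetical and the real libeth types and callbacks differ:

	#include <linux/types.h>
	#include <net/xdp.h>

	#define MY_XDP_TX_BULK	16

	struct my_xdpsq;

	struct my_xdp_tx_bulk {
		struct my_xdpsq		*sq;
		u32			count;
		struct xdp_buff		*bufs[MY_XDP_TX_BULK];
	};

	/* Driver-provided inline callbacks: clean the SQ, fill one descriptor. */
	static void my_xdpsq_clean(struct my_xdpsq *sq) { }
	static void my_xdp_fill_desc(struct my_xdpsq *sq, struct xdp_buff *xdp) { }

	static void my_xdp_tx_flush(struct my_xdp_tx_bulk *bq)
	{
		u32 i;

		my_xdpsq_clean(bq->sq);		/* free descriptor space if needed */

		for (i = 0; i < bq->count; i++)
			my_xdp_fill_desc(bq->sq, bq->bufs[i]);

		bq->count = 0;
	}

	/* Called per XDP_TX verdict; flush on the 16th buffer or at poll end. */
	static void my_xdp_tx_queue(struct my_xdp_tx_bulk *bq, struct xdp_buff *xdp)
	{
		bq->bufs[bq->count++] = xdp;

		if (bq->count == MY_XDP_TX_BULK)
			my_xdp_tx_flush(bq);
	}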
Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # xmit logic
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Expand libeth's Page Pool functionality by adding native XDP support.
This means picking the appropriate headroom and DMA direction.
Also, register all the created &page_pools as XDP memory models.
A driver can then call xdp_rxq_info_attach_page_pool() when registering
its RxQ info.
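For illustration, the kind of Page Pool parameters native XDP implies;
the values shown are the conventional ones and the helper is
hypothetical, not necessarily exactly what libeth picks:

	#include <linux/bpf.h>
	#include <linux/err.h>
	#include <linux/numa.h>
	#include <net/page_pool/helpers.h>
	#include <net/xdp.h>

	static struct page_pool *my_create_xdp_pool(struct device *dev,
						    struct xdp_rxq_info *xdp_rxq,
						    u32 size)
	{
		struct page_pool_params pp = {
			.order		= 0,
			.pool_size	= size,
			.nid		= NUMA_NO_NODE,
			.dev		= dev,
			.dma_dir	= DMA_BIDIRECTIONAL,	/* XDP_TX writes back */
			.offset		= XDP_PACKET_HEADROOM,
			.max_len	= PAGE_SIZE - XDP_PACKET_HEADROOM,
			.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		};
		struct page_pool *pool = page_pool_create(&pp);

		if (!IS_ERR(pool))
			xdp_rxq_info_attach_page_pool(xdp_rxq, pool);

		return pool;
	}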
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Back when the libeth Rx core was initially written, devmem was a draft
and netmem_ref didn't exist in the mainline. Now that it's here, make
libeth MP-agnostic before introducing any new code or any new library
users.
When it's known that the created PP/FQ is for header buffers, use the
faster "unsafe" underscored netmem <--> virt accessors, as
netmem_is_net_iov() is always false in that case but still consumes
some cycles (a bit test + true branch).
Reviewed-by: Mina Almasry <almasrymina@google.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Change EXPORT_SYMBOL_NS_GPL(x, "LIBETH") to EXPORT_SYMBOL_GPL(x) +
DEFAULT_SYMBOL_NAMESPACE "LIBETH" to make the code more compact.
Also, explicitly include <linux/export.h> to satisfy new
requirements from scripts/misc-check.
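Illustratively, with a hypothetical symbol name, the conversion looks
like this; DEFAULT_SYMBOL_NAMESPACE has to be defined before
<linux/export.h> is pulled in:

	#define DEFAULT_SYMBOL_NAMESPACE	"LIBETH"

	#include <linux/export.h>

	void libeth_example_func(void) { }	/* hypothetical symbol */
	/* was: EXPORT_SYMBOL_NS_GPL(libeth_example_func, "LIBETH"); */
	EXPORT_SYMBOL_GPL(libeth_example_func);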
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Previously, many issues occurred because the grant control signal could
not be set in time, especially around WiFi power-save enter/leave. To
control the signal more accurately, offload the control to firmware.
Signed-off-by: Ching-Te Ku <ku920601@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-11-pkshih@realtek.com
|
|
The WiFi 7 generation has 2 MACs; the PTA should bind its input/output
to the correct MAC to do the packet arbitration as expected.
Signed-off-by: Ching-Te Ku <ku920601@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-10-pkshih@realtek.com
|
|
BTG means a path that works for both Bluetooth and Wi-Fi 2.4 GHz. To get
better coexistence performance, some RF settings are needed for the BTG
path. The WiFi 7 generation offloads the feature to firmware, to get
more accurate control and to decrease driver I/O.
Signed-off-by: Ching-Te Ku <ku920601@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-9-pkshih@realtek.com
|
|
Pre-AGC is the Wi-Fi automatic Rx gain control. The mechanism needs to
switch very fast, especially while Wi-Fi is in a 2 GHz/5 GHz multi-port
scenario. To get more accurate and sensitive gain control, in WiFi 7 and
later firmware the Pre-AGC mechanism has been offloaded to firmware.
Signed-off-by: Ching-Te Ku <ku920601@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-8-pkshih@realtek.com
|
|
In order to reduce driver I/O and to allow some fine-grained, immediate
hardware control, some of the necessary APIs are offloaded to Wi-Fi
firmware. Collect the reference parameters to let firmware make the
decisions.
Signed-off-by: Ching-Te Ku <ku920601@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-7-pkshih@realtek.com
|
|
Fix an unexpected line wrap. Collect the firmware report format version
and the driver-supported report format version code to check for
unexpected C2H report exceptions.
Signed-off-by: Ching-Te Ku <ku920601@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-6-pkshih@realtek.com
|
|
Because the WiFi 7 generation has dual MACs, the logic needs to assign
and save the information to the correct index. Update the related logic.
Signed-off-by: Ching-Te Ku <ku920601@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-5-pkshih@realtek.com
|
|
To make the logic work well with both WiFi 7 and earlier generations,
extend and add logic for WiFi 7.
Signed-off-by: Ching-Te Ku <ku920601@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-4-pkshih@realtek.com
|
|
Some driver APIs have been offloaded to firmware; to recognize the
feature, add a version tag for it.
Signed-off-by: Ching-Te Ku <ku920601@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-3-pkshih@realtek.com
|
|
Add Wi-Fi 7 MLO related multi-role (MR) chanctx descriptors and a query
function. They are designed for other components, e.g. coex, which are
interested in the following info:
* whether an MLD exists and how many active links it has
* the number of AP-mode and station-mode interfaces respectively
* how many chanctx there are and the number on 2/5/6 GHz respectively
Signed-off-by: Zong-Zhe Yang <kevin_yang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611035523.36432-2-pkshih@realtek.com
|
|
If a scan happens during start_ap, the register which controls TX might
be turned off during the scan. Additionally, if set_channel occurs
during the scan, it will back up this register and set it to firmware
after set_channel is done. When the scan completes, firmware will also
set TX by this register, causing TX to be disabled so beacons can't be
TXed. Therefore, in assign/unassign_vif, call scan abort before
set_channel to avoid the scan racing with set_channel.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-13-pkshih@realtek.com
|
|
The auth retry only continues for 40 ms, but the GO might switch to the
STA role for 50 ms when in MCC. Therefore, enlarge the TX retry count
from 32 to 60 to let the GC TX time overlap with the GO timeslot.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-12-pkshih@realtek.com
|
|
When the beacon offset is less than the minimum of the auxiliary tob
(aux->duration - aux->limit.max_toa), the upper bound of the reference
toa might be negative and lower than the lower bound, which causes the
auxiliary result to exceed the NoA limit. Therefore, in this case, the
anchor pattern is used for the calculation.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-11-pkshih@realtek.com
|
|
Clear the NoA setting before MCC starts. Otherwise, nulldata TX will be
blocked because firmware uses the normal-flow NoA to calculate timing.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-10-pkshih@realtek.com
|
|
In the original scan, the scan time is only 45 ms. The GO in MCC mode
only stays for 50 ms and then switches to the STA role for 50 ms, which
might mean the GC can't scan the GO. Therefore, enlarge the scan time to
105 ms to ensure the GC's time overlaps with the GO.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-9-pkshih@realtek.com
|
|
Adjust the TX nulldata early time to give nulldata more contention time
to TX. Otherwise, it is hard for the AP to receive nulldata 1, which
causes throughput tests to fail due to packet drops.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-8-pkshih@realtek.com
|
|
The HW scan misses TXing nulldata 0 to the AP after the scan has
completed, which is what allows the AP to start TXing packets to us
again. Therefore, have the driver TX nulldata 0 after the scan
completes.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-7-pkshih@realtek.com
|
|
Stop TX during the MCC configuration period to prevent packet leakage.
The stop time is defined as 'start_tsf - tsf', which means the duration
from when MCC configuration begins until MCC starts.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-6-pkshih@realtek.com
|
|
MCC needs to wait at most 300 ms to start. Additionally, if scanning
happens before MCC starts, some beacons will be missed, which might
cause beacon loss. Therefore, we reset the beacon filter when MCC starts
to let hardware reset the beacon loss counter.
Additionally, the GO being forbidden to enter courtesy mode might cause
STA beacon loss. Therefore, disable the beacon filter for GO+STA.
However, in WiFi 7 chips, even when GC+STA enables courtesy mode,
beacons might be lost because switching to the courtesy timeslot
disables TX/RX. If the TOB (time offset behind) or TOA (time offset
ahead) is too close to the edge of the timeslot, the beacon might not be
received. Therefore, disable the beacon filter for GC+STA on WiFi 7
chips.
Disabling the beacon filter might prevent disconnection when the AP
powers off without sending a deauth. Therefore, the driver TXes QoS
nulldata periodically to detect the AP status, and the connection is
terminated if no ACK is received for 6 seconds.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-5-pkshih@realtek.com
|
|
The frequency from the PPDU status is set to the center channel during
MCC, but we need to report the primary channel to mac80211. Therefore,
we use the chanctx information in software instead.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-4-pkshih@realtek.com
|
|
The RF-notify MCC H2C command format of the 8852C differs from other
chips, therefore add a v0 format to handle it.
Signed-off-by: Chih-Kang Chang <gary.chang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-3-pkshih@realtek.com
|
|
The HW scan flow already considers the timing of when to get back to the
op channel for the scanning interface. But with concurrency, there are
two connected interfaces, and the OP channel of the other one was
originally not returned to. That easily leads to connection loss when
scanning during concurrency. So, the HW scan flow is extended to deal
with the second OP channel, and the H2C command is also extended to fill
in the second MAC ID.
The changes mentioned above are done for WiFi 6 chips first. HW scan has
a different handling architecture, covering both FW and driver, on
WiFi 7 chips.
Signed-off-by: Zong-Zhe Yang <kevin_yang@realtek.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610130034.14692-2-pkshih@realtek.com
|
|
When `dma_mapping_error()` is true, if a new `skb` has been allocated,
then it must be de-allocated.
Compile tested only.
Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250613074014.69856-2-fourier.thomas@gmail.com
|
|
Don't populate the read-only array params on the stack at run time,
instead make it static const.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250611135521.172521-1-colin.i.king@gmail.com
|
|
txpower_info_{2g,5g} are too big to fit on the stack, but in most of the
rtlwifi variants this stays below the warning limit for stack frames.
In rtl8192ee and a few others, I see a case where clang decides to fully
inline this into rtl92ee_read_eeprom_info, triggering this warning:
drivers/net/wireless/realtek/rtlwifi/rtl8192ee/hw.c:2178:6: error: stack frame size (1312) exceeds limit (1280) in 'rtl92ee_read_eeprom_info' [-Werror,-Wframe-larger-than]
Mark _rtl92ee_read_txpower_info_from_hwpg() as noinline_for_stack and
mark _rtl92ee_get_chnl_group() as __always_inline to make clang behave
the same way as gcc. Inlining _rtl92ee_get_chnl_group() helps let the
compiler see that the index is always in range. The same change appears
to be necessary in all rtlwifi variants.
A more thorough approach would be to avoid the use of the two structures
on the stack entirely and combine them with the struct rtl_efuse
data that is dynamically allocated and holds the same information.
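A hedged sketch of the annotation pattern with hypothetical names and
trivialized bodies (the real rtl8192ee functions differ):

	#include <linux/compiler.h>
	#include <linux/types.h>

	/* Keep the big on-stack txpower structures out of the caller's frame. */
	static noinline_for_stack void my_read_txpower_from_hwpg(const u8 *hwinfo)
	{
		/* struct txpower_info_2g pwrinfo24g; struct txpower_info_5g pwrinfo5g; */
	}

	/* Force inlining so the compiler can prove the channel index is in range. */
	static __always_inline u8 my_get_chnl_group(u8 chnl)
	{
		return chnl <= 14 ? 0 : 1;	/* simplified grouping */
	}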
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
Link: https://patch.msgid.link/20250610092240.2639751-1-arnd@kernel.org
|
|
Add ethqos_pcs_set_inband() to improve readability, and to allow future
changes when phylink PCS support is properly merged.
Reviewed-by: Andrew Halaney <ahalaney@redhat.com>
Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sa8775p-ride-r3
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/E1uPkbO-004EyA-EU@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Refactor the way firmware names are handled for the ICSSG PRUETH driver.
Instead of using hardcoded firmware name arrays for different modes (EMAC,
SWITCH, HSR), the driver now reads the firmware names from the device tree
property "firmware-name". Only the EMAC firmware names are specified in the
device tree property. The firmware names for all other supported modes are
generated dynamically based on the EMAC firmware names by replacing
substrings (e.g., "eth" with "sw" or "hsr") as appropriate.
Example: Below are the firmware files currently used for the PRU0 core:
EMAC: ti-pruss/am65x-sr2-pru0-prueth-fw.elf
SW : ti-pruss/am65x-sr2-pru0-prusw-fw.elf
HSR : ti-pruss/am65x-sr2-pru0-pruhsr-fw.elf
All three firmware names are the same except for the operating mode.
In general, for the PRU0 core, the firmware name is
ti-pruss/am65x-sr2-pru0-pru<mode>-fw.elf
Since the EMAC firmware names are defined in DT, the driver will read
those directly and, for the other modes, swap the mode name, i.e.
eth -> sw or eth -> hsr.
This preserves backwards compatibility as ICSSG driver is supported only
by AM65x and AM64x. Both of these have "firmware-name" property
populated in their device tree.
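A hedged sketch of such a substring swap (hypothetical helper, not the
actual ICSSG code):

	#include <linux/kernel.h>
	#include <linux/string.h>
	#include <linux/errno.h>

	/* Build "<prefix><mode><suffix>" from an EMAC name containing "eth". */
	static int my_build_fw_name(char *dst, size_t len,
				    const char *emac_name, const char *mode)
	{
		const char *eth = strstr(emac_name, "eth");
		int ret;

		if (!eth)
			return -EINVAL;

		ret = snprintf(dst, len, "%.*s%s%s",
			       (int)(eth - emac_name), emac_name, mode,
			       eth + strlen("eth"));

		return ret < 0 || ret >= len ? -ENOSPC : 0;
	}

	/* my_build_fw_name(buf, sizeof(buf),
	 *		    "ti-pruss/am65x-sr2-pru0-prueth-fw.elf", "sw")
	 * yields "ti-pruss/am65x-sr2-pru0-prusw-fw.elf".
	 */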
Signed-off-by: MD Danish Anwar <danishanwar@ti.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250613064547.44394-1-danishanwar@ti.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Since secs_to_jiffies() (commit b35108a51cf7) has been introduced, we
can use it to avoid scaling the time to msecs.
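The conversion is of this shape (illustrative variable names):

	#include <linux/jiffies.h>

	static unsigned long my_timeout_jiffies(unsigned int interval_secs)
	{
		/* was: return msecs_to_jiffies(interval_secs * 1000); */
		return secs_to_jiffies(interval_secs);
	}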
Signed-off-by: Yuesong Li <liyuesong@vivo.com>
Reviewed-by: Joe Damato <joe@dama.to>
Reviewed-by: Taehee Yoo <ap420073@gmail.com>
Link: https://patch.msgid.link/20250613102014.3070898-1-liyuesong@vivo.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Now that no SoC implements the .set_*_speed() methods, we can get rid
of these methods and the now unused code in rk_set_clk_tx_rate().
Arrange for the function to return an error when the .set_speed()
method is not implemented.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/E1uPk3O-004CFx-Ir@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Convert px30_set_rmii_speed() to use the common .set_speed() method,
which eliminates another user of the older .set_*_speed() methods.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/E1uPk3J-004CFr-FE@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
px30_set_rmii_speed() doesn't need to be as verbose as it is - it
merely needs the values for the register and clock rate which depend
on the speed, and then call the appropriate functions. Rewrite the
function to make it so.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/E1uPk3E-004CFl-BZ@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
As a result of the previous patches, many of the .set_rgmii_speed()
and .set_rmii_speed() implementations are identical apart from the
interface mode. Add a new .set_speed() function which takes the
interface mode in addition to the speed, and use it to combine the
separate implementations, calling the common rk_set_reg_speed()
function.
Also convert rk_set_clk_mac_speed() to be called by this new method
pointer, rather than having these implementations called from both
.set_*_speed() methods.
Remove all the error messages from the .set_speed() methods, as these
return an error code which is propagated up to stmmac_mac_link_up()
which will print the error.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/E1uPk39-004CFf-7a@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|