|
First, test that bpf qdisc can be set as default qdisc. Then, attach
an mq qdisc to see if bpf qdisc can be successfully created and grafted.
The test is a sequential test as net.core.default_qdisc is global.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
Allow .init to proceed if qdisc_lookup() returns NULL, as this only
happens when .init is called by qdisc_create_dflt() from mq/mqprio_init
and the parent qdisc has not been added to qdisc_hash yet. In the
qdisc_create() path, the caller, __tc_modify_qdisc(), has already made
sure the parent qdisc exists.
In addition, call qdisc_watchdog_init() whether .init succeeds or not,
to prevent a null-pointer dereference. In qdisc_create() and
qdisc_create_dflt(), if .init fails, .destroy will be called. As a
result, the destroy epilogue could call qdisc_watchdog_cancel() with an
uninitialized timer, causing a null-pointer dereference in
hrtimer_cancel().
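A hedged sketch of the resulting ordering (struct and callback names are
illustrative, not the actual bpf qdisc infrastructure):

#include <net/pkt_sched.h>

/* Sketch only: initialize the watchdog before the BPF-implemented .init
 * runs, so a failing .init followed by .destroy can never hand an
 * uninitialized timer to qdisc_watchdog_cancel()/hrtimer_cancel().
 */
struct example_sched_data {
        struct qdisc_watchdog watchdog;
};

static int example_qdisc_init(struct Qdisc *sch, struct nlattr *opt,
                              struct netlink_ext_ack *extack)
{
        struct example_sched_data *q = qdisc_priv(sch);

        /* Always arm the watchdog state first ... */
        qdisc_watchdog_init(&q->watchdog, sch);

        /* ... then run the BPF-implemented .init program here; if it
         * fails, qdisc_create()/qdisc_create_dflt() call .destroy, which
         * can now cancel the watchdog safely.
         */
        return 0;
}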
Fixes: c8240344956e ("bpf: net_sched: Support implementation of Qdisc_ops in bpf")
Signed-off-by: Amery Hung <ameryhung@gmail.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
Jordan Rife says:
====================
bpf: udp: Exactly-once socket iteration
Both UDP and TCP socket iterators use iter->offset to track progress
through a bucket, which is a measure of the number of matching sockets
from the current bucket that have been seen or processed by the
iterator. On subsequent iterations, if the current bucket has
unprocessed items, we skip at least iter->offset matching items in the
bucket before adding any remaining items to the next batch. However,
iter->offset isn't always an accurate measure of "things already seen"
when the underlying bucket changes between reads which can lead to
repeated or skipped sockets. Instead, this series remembers the cookies
of the sockets we haven't seen yet in the current bucket and resumes
from the first cookie in that list that we can find on the next
iteration. This series focuses on UDP socket iterators, but a later
series will apply a similar approach to TCP socket iterators.
To be more specific, this series replaces struct sock **batch inside
struct bpf_udp_iter_state with union bpf_udp_iter_batch_item *batch,
where union bpf_udp_iter_batch_item can contain either a pointer to a
socket or a socket cookie. During reads, batch contains pointers to all
sockets in the current batch while between reads batch contains all the
cookies of the sockets in the current bucket that have yet to be
processed. On subsequent reads, when iteration resumes,
bpf_iter_udp_batch finds the first saved cookie that matches a socket in
the bucket's socket list and picks up from there to construct the next
batch. On average, assuming it's rare that the next socket disappears
before the next read occurs, we should only need to scan as much as we
did with the offset-based approach to find the starting point. In the
case that the next socket is no longer there, we keep scanning through
the saved cookies list until we find a match. The worst case is when
none of the sockets from last time exist anymore, but again, this should
be rare.
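A minimal sketch of that layout, assuming the names used in this cover
letter (the final code may differ in detail):

#include <linux/types.h>

/* Sketch only: one slot either holds a socket pointer (while a read is
 * in progress) or a socket cookie (saved between reads for sockets not
 * yet shown to userspace).
 */
union bpf_udp_iter_batch_item {
        struct sock *sk;
        __u64 cookie;
};

struct bpf_udp_iter_state {
        /* ... other iterator state ... */
        unsigned int cur_sk;                    /* next entry to report */
        unsigned int end_sk;                    /* entries in the batch */
        unsigned int max_sk;                    /* allocated capacity   */
        union bpf_udp_iter_batch_item *batch;   /* was: struct sock **batch */
};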
CHANGES
=======
v6 -> v7:
* Move initialization of iter->state.bucket to -1 from patch five ("bpf:
udp: Avoid socket skips and repeats during iteration") to patch three
("bpf: udp: Get rid of st_bucket_done") to avoid skipping the first
bucket in the patch three and four (Martin).
* Rename sock to sk in bpf_iter_batch_item (Martin).
* Use ASSERT_OK_PTR in do_resume_test to check if counts is NULL
(Martin).
* goto done in do_resume_test when calloc or sock_iter_batch__open fails
to make sure things are cleaned up properly, and initialize pointers
to NULL explicitly to silence warnings from llvm 20 in CI.
v5 -> v6:
* Rework the logic in patch two ("bpf: udp: Make sure iter->batch
always contains a full bucket snapshot") again to simplify it:
* Only try realloc with GFP_USER one time instead of two (Alexei).
* v5 introduced a second call to bpf_iter_udp_realloc_batch inside the
loop to handle the GFP_ATOMIC case. In v6, move the GFP_USER case
inside the loop as well, so it's all in once place. This, I feel,
makes it a bit easier to understand the control flow. Consequently,
it also simplifies the logic outside the loop.
* Use GFP_NOWAIT instead of GFP_ATOMIC to avoid depleting memory
reserves, since iterators are not critical operation (Alexei). Alexei
suggested using __GFP_NOWARN as well with GFP_NOWAIT, but this is
already set inside bpf_iter_udp_realloc_batch, so no change was needed
there.
* Introduce patch three ("bpf: udp: Get rid of st_bucket_done") to
simplify things further, since with patch two, st_bucket_done == true
is equivalent to iter->cur_sk == iter->end_sk.
* In patch five ("bpf: udp: Avoid socket skips and repeats during
iteration"), initialize iter->state.bucket to -1 so that on the first
call to bpf_iter_udp_batch, the resume_bucket condition is not hit.
This avoids adding a special case to the condition around
bpf_iter_udp_resume for bucket zero.
v4 -> v5:
* Rework the logic from patch two ("bpf: udp: Make sure iter->batch
always contains a full bucket snapshot") to move the handling of the
GFP_ATOMIC case inside the main loop and get rid of the extra lock
variable. This makes the logic clearer and makes it clearer that the
bucket lock is always released (Martin).
* Introduce udp_portaddr_for_each_entry_from in patch two instead of
patch four ("bpf: udp: Avoid socket skips and repeats during
iteration"), since patch two now needs to be able to resume list
iteration from an arbitrary point in the GFP_ATOMIC case.
* Similarly, introduce the memcpy inside bpf_iter_udp_realloc_batch in
patch two instead of patch four, since in the GFP_ATOMIC case the new
batch needs to remember the sockets from the old batch.
* Use sock_gen_cookie instead of __sock_gen_cookie inside
bpf_iter_udp_put_batch, since it can be called from a preemptible
context (Martin).
v3 -> v4:
* Explicitly assign sk = NULL on !iter->end_sk exit condition
(Kuniyuki).
* Reword the commit message of patch two ("bpf: udp: Make sure
iter->batch always contains a full bucket snapshot") to make the
reasoning for GFP_ATOMIC more clear.
v2 -> v3:
* Guarantee that iter->batch is always a full snapshot of a bucket to
prevent socket repeat scenarios [3]. This supersedes the patch from v2
that simply propagated ENOMEM up from bpf_iter_udp_batch and covers
the scenario where the batch size is still too small after a realloc.
* Fix up self tests (Martin)
* ASSERT_EQ(nread, sizeof(out), "nread") instead of
ASSERT_GE(nread, 1, "nread") in read_n.
* Use ASSERT_OK and ASSERT_OK_FD in several places.
* Add missing free(counts) to do_resume_test.
* Move int local_port declaration to the top of do_resume_test.
* Remove unnecessary guards before close and free.
v1 -> v2:
* Drop WARN_ON_ONCE from bpf_iter_udp_realloc_batch (Kuniyuki).
* Fixed memcpy size parameter in bpf_iter_udp_realloc_batch; before it
was missing sizeof(elem) * (Kuniyuki).
* Move "bpf: udp: Propagate ENOMEM up from bpf_iter_udp_batch" to patch
two in the series (Kuniyuki).
rfc [1] -> v1:
* Use hlist_entry_safe directly to retrieve the first socket in the
current bucket's linked list instead of immediately breaking from
udp_portaddr_for_each_entry (Martin).
* Cancel iteration if bpf_iter_udp_realloc_batch() can't grab enough
memory to contain a full snapshot of the current bucket to prevent
unwanted skips or repeats [2].
[1]: https://lore.kernel.org/bpf/20250404220221.1665428-1-jordan@jrife.io/
[2]: https://lore.kernel.org/bpf/CABi4-ogUtMrH8-NVB6W8Xg_F_KDLq=yy-yu-tKr2udXE2Mu1Lg@mail.gmail.com/
[3]: https://lore.kernel.org/bpf/d323d417-3e8b-48af-ae94-bc28469ac0c1@linux.dev/
====================
Link: https://patch.msgid.link/20250502161528.264630-1-jordan@jrife.io
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
Introduce a set of tests that exercise various bucket resume scenarios:
* remove_seen resumes iteration after removing a socket from the bucket
that we've already processed. Before, with the offset-based approach,
this test would have skipped an unseen socket after resuming
iteration. With the cookie-based approach, we now see all sockets
exactly once.
* remove_unseen exercises the condition where the next socket that we
would have seen is removed from the bucket before we resume iteration.
This tests the scenario where we need to scan past the first cookie in
our remembered cookies list to find the socket from which to resume
iteration.
* remove_all exercises the condition where all sockets we remembered
were removed from the bucket to make sure iteration terminates and
returns no more results.
* add_some exercises the condition where a few sockets, but not enough
to trigger a realloc, are added to the head of the current bucket
between reads. Before, with the offset-based approach, this test would
have repeated sockets we've already seen. With the cookie-based
approach, we now see all sockets exactly once.
* force_realloc exercises the condition that we need to realloc the
batch on a subsequent read, since more sockets than can be held in the
current batch array were added to the current bucket. This exercises
the logic inside bpf_iter_udp_realloc_batch that copies cookies into
the new batch to make sure nothing is skipped or repeated.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
Extend the iter_udp_soreuse and iter_tcp_soreuse programs to write the
cookie of the current socket, so that we can track the identity of the
sockets that the iterator has seen so far. Update the existing do_test
function to account for this change to the iterator program output. At
the same time, teach both programs to work with AF_INET as well.
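A hedged sketch of what such an iterator program can look like (the real
selftest emits a richer record; names here are illustrative):

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

SEC("iter/udp")
int dump_udp_cookie(struct bpf_iter__udp *ctx)
{
        struct sock *sk = (struct sock *)ctx->udp_sk;
        __u64 cookie;

        if (!sk)
                return 0;

        /* Emit the socket cookie so userspace can tell exactly which
         * sockets the iterator visited, and how many times.
         */
        cookie = bpf_get_socket_cookie(sk);
        bpf_seq_write(ctx->meta->seq, &cookie, sizeof(cookie));
        return 0;
}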
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
Replace the offset-based approach for tracking progress through a bucket
in the UDP table with one based on socket cookies. Remember the cookies
of unprocessed sockets from the last batch and use this list to
pick up where we left off or, in the case that the next socket
disappears between reads, find the first socket after that point that
still exists in the bucket and resume from there.
This approach guarantees that all sockets that existed when iteration
began and continue to exist throughout will be visited exactly once.
Sockets that are added to the table during iteration may or may not be
seen, but if they are they will be seen exactly once.
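A simplified sketch of the resume scan (the helper name is invented;
__sock_gen_cookie() and udp_portaddr_for_each_entry() are existing
kernel primitives, and the union is the one introduced by this series):

#include <linux/udp.h>
#include <linux/sock_diag.h>

/* Sketch only: walk the remembered cookies in order and return the
 * first socket that still exists in the bucket; iteration resumes from
 * that point.
 */
static struct sock *example_find_resume_sk(struct hlist_head *bucket_head,
                                           const union bpf_udp_iter_batch_item *cookies,
                                           unsigned int ncookies)
{
        struct sock *sk;
        unsigned int i;

        for (i = 0; i < ncookies; i++) {
                udp_portaddr_for_each_entry(sk, bucket_head) {
                        if (__sock_gen_cookie(sk) == cookies[i].cookie)
                                return sk;
                }
        }
        /* None of the remembered sockets remain: the bucket is done. */
        return NULL;
}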
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
Prepare for the next patch that tracks cookies between iterations by
converting struct sock **batch to union bpf_udp_iter_batch_item *batch
inside struct bpf_udp_iter_state.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
|
|
Get rid of the st_bucket_done field to simplify UDP iterator state and
logic. Before, st_bucket_done could be false if bpf_iter_udp_batch
returned a partial batch; however, with the last patch ("bpf: udp: Make
sure iter->batch always contains a full bucket snapshot"),
st_bucket_done == true is equivalent to iter->cur_sk == iter->end_sk.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
Require that iter->batch always contains a full bucket snapshot. This
invariant is important to avoid skipping or repeating sockets during
iteration when combined with the next few patches. Before, there were
two cases where a call to bpf_iter_udp_batch may only capture part of a
bucket:
1. When bpf_iter_udp_realloc_batch() returns -ENOMEM [1].
2. When more sockets are added to the bucket while calling
bpf_iter_udp_realloc_batch(), making the updated batch size
insufficient [2].
In cases where the batch size only covers part of a bucket, it is
possible to forget which sockets were already visited, especially if we
have to process a bucket in more than two batches. This forces us to
choose between repeating or skipping sockets, so don't allow this:
1. Stop iteration and propagate -ENOMEM up to userspace if reallocation
fails instead of continuing with a partial batch.
2. Try bpf_iter_udp_realloc_batch() with GFP_USER just as before, but if
we still aren't able to capture the full bucket, call
bpf_iter_udp_realloc_batch() again while holding the bucket lock to
guarantee the bucket does not change. On the second attempt use
GFP_NOWAIT since we hold onto the spin lock.
Introduce the udp_portaddr_for_each_entry_from macro and use it instead
of udp_portaddr_for_each_entry to make it possible to continue iteration
from an arbitrary socket. This is required for this patch in the
GFP_NOWAIT case to allow us to fill the rest of a batch starting from
the middle of a bucket and the later patch which skips sockets that were
already seen.
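A condensed sketch of that control flow (simplified;
example_realloc_batch() stands in for bpf_iter_udp_realloc_batch() and
the batching itself is elided):

/* Stand-in for bpf_iter_udp_realloc_batch() in this sketch. */
static int example_realloc_batch(struct bpf_udp_iter_state *iter,
                                 unsigned int new_sz, gfp_t gfp);

static int example_fill_batch(struct bpf_udp_iter_state *iter,
                              struct udp_hslot *hslot)
{
        bool resized = false;
        int err;

again:
        spin_lock_bh(&hslot->lock);
        if (hslot->count > iter->max_sk) {
                if (!resized) {
                        /* First attempt: drop the lock, grow with GFP_USER. */
                        spin_unlock_bh(&hslot->lock);
                        err = example_realloc_batch(iter, hslot->count, GFP_USER);
                        if (err)
                                return err;     /* -ENOMEM reaches userspace */
                        resized = true;
                        goto again;
                }
                /* Bucket grew while unlocked: resize without sleeping,
                 * keeping the sockets batched so far (memcpy of old entries).
                 */
                err = example_realloc_batch(iter, hslot->count, GFP_NOWAIT);
                if (err) {
                        spin_unlock_bh(&hslot->lock);
                        return err;
                }
        }
        /* ... walk the bucket with udp_portaddr_for_each_entry_from()
         * and record every remaining socket into iter->batch ...
         */
        spin_unlock_bh(&hslot->lock);
        return 0;
}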
Testing all scenarios directly is a bit difficult, but I did some manual
testing to exercise the code paths where GFP_NOWAIT is used and where
ERR_PTR(err) is returned. I used the realloc test case included later
in this series to trigger a scenario where a realloc happens inside
bpf_iter_udp_batch and made a small code tweak to force the first
realloc attempt to allocate a too-small batch, thus requiring
another attempt with GFP_NOWAIT. Some printks showed both reallocs with
the tests passing:
Apr 25 23:16:24 crow kernel: go again GFP_USER
Apr 25 23:16:24 crow kernel: go again GFP_NOWAIT
With this setup, I also forced each of the bpf_iter_udp_realloc_batch
calls to return -ENOMEM to ensure that iteration ends and that the
read() in userspace fails.
[1]: https://lore.kernel.org/bpf/CABi4-ogUtMrH8-NVB6W8Xg_F_KDLq=yy-yu-tKr2udXE2Mu1Lg@mail.gmail.com/
[2]: https://lore.kernel.org/bpf/7ed28273-a716-4638-912d-f86f965e54bb@linux.dev/
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
Prepare for the next patch which needs to be able to choose either
GFP_USER or GFP_NOWAIT for calls to bpf_iter_udp_realloc_batch.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
|
|
Jakub Kicinski says:
====================
tools: ynl-gen: additional C types and classic netlink handling
This series is a bit of a random grab bag adding things we need
to generate code for rt-link.
First two patches are pretty random code cleanups.
Patch 3 adds default values if the spec is missing them.
Patch 4 adds support for setting Netlink request flags
(NLM_F_CREATE, NLM_F_REPLACE etc.). Classic netlink uses those
quite a bit.
Patches 5 and 6 extend the notification handling for variations
used in classic netlink. Patch 6 adds support for when notification
ID is the same as the ID of the response message to GET.
Next 4 patches add support for handling a couple of complex types.
These are supported by the schema and Python but C code gen wasn't
there.
Patch 11 is a bit of a hack: it skips code related to kernel
policy generation, since we don't need it for classic netlink.
Patch 12 adds support for having different fixed headers per op,
something we could avoid in previous rtnetlink specs but which some
specs do mix.
v2: https://lore.kernel.org/20250425024311.1589323-1-kuba@kernel.org
v1: https://lore.kernel.org/20250424021207.1167791-1-kuba@kernel.org
====================
Link: https://patch.msgid.link/20250429154704.2613851-1-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
rtnetlink has a variety of ops with different fixed headers.
Detect when an op's fixed header is not the same as the family's,
and use sizeof() directly. For reverse parsing we need to pass the
fixed header length along with the policy (in the socket state).
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-13-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
rt-link has a vlan-protocols enum with:
name: 8021q value: 33024
name: 8021ad value: 34984
It's nice to have, since it converts the values to strings in Python.
For C, however, the codegen is trying to use enums to generate strict
policy checks. Parsing such sparse enums is not possible via policies.
Since we don't support kernel codegen and policy generation for
classic netlink, skip the auto-generation of checks from enums.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-12-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
IPv6 addresses are expressed as binary arrays since we don't have u128.
Since they are not variable length, however, they are relatively
easy to represent as an array of known size.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-11-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
C codegen supports ArrayNest AKA indexed-array carrying scalars,
but only for the netlink -> struct parsing. Support rendering
from struct to netlink.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-10-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Binary types with a struct are fixed size and relatively easy to
handle as multi-attr. Declare the member as a pointer.
Count the members, allocate an array, copy in the data.
Allow the netlink attr to be smaller or larger than our view
of the struct in case the build headers are newer or older
than the running kernel.
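Roughly, the copy has to behave like this hedged sketch (struct and
helper names invented for illustration):

#include <string.h>
#include <linux/types.h>

struct example_fixed_struct {
        __u32 a;
        __u32 b;
};

/* Sketch only: copy min(attribute payload, local struct size) so that
 * a kernel newer or older than the build headers still parses safely.
 */
static void example_parse_one(struct example_fixed_struct *dst,
                              const void *payload, unsigned int payload_len)
{
        unsigned int len = payload_len;

        memset(dst, 0, sizeof(*dst));   /* zero fields the kernel didn't send */
        if (len > sizeof(*dst))
                len = sizeof(*dst);     /* ignore fields we don't know about */
        memcpy(dst, payload, len);
}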
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-9-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Add support for multi attr strings (needed for link alt_names).
We record the length of individual strings in a len member; to do
the same for multi-attr, create a struct ynl_string in ynl.h
and use it as a layer holding both the string and its length.
Since strings may be of arbitrary length, dynamically allocate each
individual one.
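A minimal sketch of that helper type, with field names assumed from this
description:

/* Sketch only: length-prefixed wrapper so generated code can hand back
 * arbitrary-length multi-attr strings (e.g. link alt names) as an array
 * of individually allocated entries.
 */
struct ynl_string {
        unsigned int len;       /* length of str, not counting the NUL */
        char str[];             /* NUL-terminated payload */
};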
Adjust arg_member and struct member to avoid spacing out the double
pointers, so we get "type **name;" rather than "type * *name;".
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-8-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Allow CRUD-style notification where the notification is more
like the response to the request, which can optionally be
looped back onto the requesting socket. Since the notification
and request are different ops in the spec, for example:
-
name: delrule
doc: Remove an existing FIB rule
attribute-set: fib-rule-attrs
do:
request:
value: 33
attributes: *fib-rule-all
-
name: delrule-ntf
doc: Notify a rule deletion
value: 33
notify: getrule
We need to find the request by ID. Ideally we'd detect this model
from the spec properties, rather than assume that it's what all
classic netlink families do. But maybe that'd cause this model
to spread, and it's easy to get wrong. For now assume CRUD == classic.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-7-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Classic Netlink has GET callbacks with no doit support, just dumps.
Support using their responses in notifications. If a notification points
at a type which only has a dump, use the dump's type.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-6-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Classic netlink makes extensive use of flags. Support specifying
them the same way as attributes are specified (using a helper),
for example:
rt_link_newlink_req_set_nlflags(req, NLM_F_CREATE | NLM_F_ECHO);
Wrap the code up in a RenderInfo predicate. I think that some
genetlink families may want this, too. It should be easy to
add a spec property later.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-5-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
The C codegen refers to op attribute lists all over the place,
without checking if they are present, even though an attribute list
is technically an optional property. Add them automatically
at init if missing, so that we don't have to make specs longer.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-4-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Instead of walking the entries in the codegen, add a method
on the Struct class that returns whether any of the members need
an iterator.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-3-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
The dict stores struct objects (of class Struct), not just
a trivial set with directions.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://patch.msgid.link/20250429154704.2613851-2-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Rewrite the textual description for the VIA Rhine platform Ethernet
controller as a YAML schema, and switch the filename to follow the
compatible string. These controllers are used in several VIA/WonderMedia
SoCs.
Signed-off-by: Alexey Charkov <alchark@gmail.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://patch.msgid.link/20250430-rhine-binding-v2-1-4290156c0f57@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Recent updates to the locking mechanism that protects IPv6 routing tables
[1] have affected the SRv6 networking subsystem. Such changes cause
problems with some SRv6 endpoint behaviors, like End.B6.Encaps, and also
impact SRv6 counters.
Starting from commit 169fd62799e8 ("ipv6: Get rid of RTNL for SIOCADDRT and
RTM_NEWROUTE."), the inet6_rtm_newroute() function no longer needs to
acquire the RTNL lock for creating and configuring IPv6 routes and
setting up lwtunnels.
The RTNL lock can be avoided because the ip6_route_add() function
finishes setting up a new route in a section protected by RCU.
This makes sure that no dev/nexthops can disappear during the operation.
Because of this, the steps for setting up lwtunnels - i.e., calling
lwtunnel_build_state() - are now done inside an RCU read-side critical
section and not under the RTNL lock anymore.
However, creating and configuring a lwtunnel instance in an
RCU-protected section can be problematic when that tunnel needs to
allocate memory using the GFP_KERNEL flag.
For example, the following trace shows what happens when an SRv6
End.B6.Encaps behavior is instantiated after commit 169fd62799e8 ("ipv6:
Get rid of RTNL for SIOCADDRT and RTM_NEWROUTE."):
[ 3061.219696] BUG: sleeping function called from invalid context at ./include/linux/sched/mm.h:321
[ 3061.226136] in_atomic(): 0, irqs_disabled(): 0, non_block: 0, pid: 445, name: ip
[ 3061.232101] preempt_count: 0, expected: 0
[ 3061.235414] RCU nest depth: 1, expected: 0
[ 3061.238622] 1 lock held by ip/445:
[ 3061.241458] #0: ffffffff83ec64a0 (rcu_read_lock){....}-{1:3}, at: ip6_route_add+0x41/0x1e0
[ 3061.248520] CPU: 1 UID: 0 PID: 445 Comm: ip Not tainted 6.15.0-rc3-micro-vm-dev-00590-ge527e891492d #2058 PREEMPT(full)
[ 3061.248532] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
[ 3061.248549] Call Trace:
[ 3061.248620] <TASK>
[ 3061.248633] dump_stack_lvl+0xa9/0xc0
[ 3061.248846] __might_resched+0x218/0x360
[ 3061.248871] __kmalloc_node_track_caller_noprof+0x332/0x4e0
[ 3061.248889] ? rcu_is_watching+0x3a/0x70
[ 3061.248902] ? parse_nla_srh+0x56/0xa0
[ 3061.248938] kmemdup_noprof+0x1c/0x40
[ 3061.248952] parse_nla_srh+0x56/0xa0
[ 3061.248969] seg6_local_build_state+0x2e0/0x580
[ 3061.248992] ? __lock_acquire+0xaff/0x1cd0
[ 3061.249013] ? do_raw_spin_lock+0x111/0x1d0
[ 3061.249027] ? __pfx_seg6_local_build_state+0x10/0x10
[ 3061.249068] ? lwtunnel_build_state+0xe1/0x3a0
[ 3061.249274] lwtunnel_build_state+0x10d/0x3a0
[ 3061.249303] fib_nh_common_init+0xce/0x1e0
[ 3061.249337] ? __pfx_fib_nh_common_init+0x10/0x10
[ 3061.249352] ? in6_dev_get+0xaf/0x1f0
[ 3061.249369] ? __rcu_read_unlock+0x64/0x2e0
[ 3061.249392] fib6_nh_init+0x290/0xc30
[ 3061.249422] ? __pfx_fib6_nh_init+0x10/0x10
[ 3061.249447] ? __lock_acquire+0xaff/0x1cd0
[ 3061.249459] ? _raw_spin_unlock_irqrestore+0x22/0x70
[ 3061.249624] ? ip6_route_info_create+0x423/0x520
[ 3061.249641] ? rcu_is_watching+0x3a/0x70
[ 3061.249683] ip6_route_info_create_nh+0x190/0x390
[ 3061.249715] ip6_route_add+0x71/0x1e0
[ 3061.249730] ? __pfx_inet6_rtm_newroute+0x10/0x10
[ 3061.249743] inet6_rtm_newroute+0x426/0xc50
[ 3061.249764] ? avc_has_perm_noaudit+0x13d/0x360
[ 3061.249853] ? __pfx_inet6_rtm_newroute+0x10/0x10
[ 3061.249905] ? __lock_acquire+0xaff/0x1cd0
[ 3061.249962] ? rtnetlink_rcv_msg+0x52f/0x890
[ 3061.249996] ? __pfx_inet6_rtm_newroute+0x10/0x10
[ 3061.250012] rtnetlink_rcv_msg+0x551/0x890
[ 3061.250040] ? __pfx_rtnetlink_rcv_msg+0x10/0x10
[ 3061.250065] ? __lock_acquire+0xaff/0x1cd0
[ 3061.250092] netlink_rcv_skb+0xbd/0x1f0
[ 3061.250108] ? __pfx_rtnetlink_rcv_msg+0x10/0x10
[ 3061.250124] ? __pfx_netlink_rcv_skb+0x10/0x10
[ 3061.250179] ? netlink_deliver_tap+0x10b/0x700
[ 3061.250210] netlink_unicast+0x2e7/0x410
[ 3061.250232] ? __pfx_netlink_unicast+0x10/0x10
[ 3061.250241] ? __lock_acquire+0xaff/0x1cd0
[ 3061.250280] netlink_sendmsg+0x366/0x670
[ 3061.250306] ? __pfx_netlink_sendmsg+0x10/0x10
[ 3061.250313] ? find_held_lock+0x2d/0xa0
[ 3061.250344] ? import_ubuf+0xbc/0xf0
[ 3061.250370] ? __pfx_netlink_sendmsg+0x10/0x10
[ 3061.250381] __sock_sendmsg+0x13e/0x150
[ 3061.250420] ____sys_sendmsg+0x33d/0x450
[ 3061.250442] ? __pfx_____sys_sendmsg+0x10/0x10
[ 3061.250453] ? __pfx_copy_msghdr_from_user+0x10/0x10
[ 3061.250489] ? __pfx_slab_free_after_rcu_debug+0x10/0x10
[ 3061.250514] ___sys_sendmsg+0xe5/0x160
[ 3061.250530] ? __pfx____sys_sendmsg+0x10/0x10
[ 3061.250568] ? __lock_acquire+0xaff/0x1cd0
[ 3061.250617] ? find_held_lock+0x2d/0xa0
[ 3061.250678] ? __virt_addr_valid+0x199/0x340
[ 3061.250704] ? preempt_count_sub+0xf/0xc0
[ 3061.250736] __sys_sendmsg+0xca/0x140
[ 3061.250750] ? __pfx___sys_sendmsg+0x10/0x10
[ 3061.250786] ? syscall_exit_to_user_mode+0xa2/0x1e0
[ 3061.250825] do_syscall_64+0x62/0x140
[ 3061.250844] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 3061.250855] RIP: 0033:0x7f0b042ef914
[ 3061.250868] Code: 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b5 0f 1f 80 00 00 00 00 48 8d 05 e9 5d 0c 00 8b 00 85 c0 75 13 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 41 54 41 89 d4 55 48 89 f5 53
[ 3061.250876] RSP: 002b:00007ffc2d113ef8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
[ 3061.250885] RAX: ffffffffffffffda RBX: 00000000680f93fa RCX: 00007f0b042ef914
[ 3061.250891] RDX: 0000000000000000 RSI: 00007ffc2d113f60 RDI: 0000000000000003
[ 3061.250897] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000008
[ 3061.250902] R10: fffffffffffff26d R11: 0000000000000246 R12: 0000000000000001
[ 3061.250907] R13: 000055a961f8a520 R14: 000055a961f63eae R15: 00007ffc2d115270
[ 3061.250952] </TASK>
To solve this issue, we replace the GFP_KERNEL flag with the GFP_ATOMIC
one in those SRv6 Endpoints that need to allocate memory during the
setup phase. This change makes sure that memory allocations are handled
in a way that works with RCU critical sections.
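As a hedged illustration (helper name invented for this sketch), the SRH
duplication during lwtunnel setup now has to use a non-sleeping
allocation:

#include <linux/slab.h>
#include <linux/seg6.h>

/* Sketch only: duplicate the SRH supplied via netlink while inside the
 * RCU read-side section, so the allocation must not sleep.
 */
static struct ipv6_sr_hdr *example_dup_srh(const struct ipv6_sr_hdr *srh)
{
        size_t len = (srh->hdrlen + 1) << 3;    /* SRH length is in 8-octet units */

        return kmemdup(srh, len, GFP_ATOMIC);   /* was GFP_KERNEL */
}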
[1] - https://lore.kernel.org/all/20250418000443.43734-1-kuniyu@amazon.com/
Fixes: 169fd62799e8 ("ipv6: Get rid of RTNL for SIOCADDRT and RTM_NEWROUTE.")
Signed-off-by: Andrea Mayer <andrea.mayer@uniroma2.it>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://patch.msgid.link/20250429132453.31605-1-andrea.mayer@uniroma2.it
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
After 52358dd63e34 ("net: phy: remove function stubs") there's a
problem if CONFIG_MDIO_BUS is set, but CONFIG_PHYLIB is not.
mdiobus_scan() uses phylib functions like get_phy_device().
Bringing back the stub wouldn't make much sense, because it would
allow mdiobus_scan() to compile, but the function would be unusable.
The stub returned NULL, and we have the following in mdiobus_scan():

	phydev = get_phy_device(bus, addr, c45);
	if (IS_ERR(phydev))
		return phydev;

So calling mdiobus_scan() without CONFIG_PHYLIB would cause a crash
later in mdiobus_scan(). In general the PHYLIB functionality isn't
optional here.
Consequently, MDIO bus providers depend on PHYLIB.
Therefore factor it out and build it together with the libphy core
modules. In addition make all MDIO bus providers under /drivers/net/mdio
depend on PHYLIB. Same applies to enetc MDIO bus provider. Note that
PHYLIB selects MDIO_DEVRES, therefore we can omit this here.
Fixes: 52358dd63e34 ("net: phy: remove function stubs")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504270639.mT0lh2o1-lkp@intel.com/
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://patch.msgid.link/c74772a9-dab6-44bf-a657-389df89d85c2@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The MediaTek MT7988 SoC comes with a single built-in Ethernet PHY for
2500Base-T/1000Base-T/100Base-TX/10Base-T link partners in addition to
the built-in 1GE switch. The built-in PHY only supports full duplex.
Add muxes allowing selection of the GMAC2->2.5G PHY path and add basic
support for XGMAC, as the built-in 2.5G PHY is internally connected via
XGMII.
The XGMAC features will also be used by 5GBase-R, 10GBase-R and USXGMII
SerDes modes which are going to be added once support for standalone PCS
drivers is in place.
In order to make use of the built-in 2.5G PHY, the appropriate PHY
driver as well as the (proprietary) PHY firmware has to be present.
Signed-off-by: Daniel Golle <daniel@makrotopia.org>
Link: https://patch.msgid.link/9072cefbff6db969720672ec98ed5cef65e8218c.1745715380.git.daniel@makrotopia.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
This user of SHA-256 does not support any other algorithm, so the
crypto_shash abstraction provides no value. Just use the SHA-256
library API instead, which is much simpler and easier to use.
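For reference, the library call pattern is a one-shot helper (hedged
sketch, not the actual diff):

#include <crypto/sha2.h>

/* Sketch only: the one-shot library helper needs no tfm allocation and
 * cannot fail, unlike the crypto_shash descriptor-based path.
 */
static void example_sha256_digest(const u8 *data, unsigned int len,
                                  u8 out[SHA256_DIGEST_SIZE])
{
        sha256(data, len, out);
}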
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://patch.msgid.link/20250428191606.856198-1-ebiggers@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The RTL8211F supports multiple WOL modes. This patch adds support for
magic packets.
The PHY notifies the system via the INTB/PMEB pin when a WOL event
occurs.
Signed-off-by: Daniel Braunwarth <daniel.braunwarth@kuka.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20250429-realtek_wol-v2-1-8f84def1ef2c@kuka.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Ensure the following prerequisites are met before executing the test:
1. 'socat' is installed on the remote host.
2. Python version supports socket.SO_INCOMING_CPU (available since v3.11).
Skip the test if either prerequisite is not met.
Reviewed-by: Nimrod Oren <noren@nvidia.com>
Signed-off-by: Gal Pressman <gal@nvidia.com>
Link: https://patch.msgid.link/20250430054801.750646-1-gal@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The Allwinner A523 SoC variant (A527/T527) contains an "EMAC0" Ethernet
MAC compatible with the A64 version.
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Yixun Lan <dlan@gentoo.org>
Link: https://patch.msgid.link/20250430-01-sun55i-emac0-v3-2-6fc000bbccbd@gentoo.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2025-04-29 (igb, igc, ixgbe, idpf)
For igb:
Kurt Kanzenbach adds linking of IRQs and queues to NAPI instances and
adds persistent NAPI config. Lastly, he removes undesired IRQs that
occur while busy polling.
For igc:
Kurt Kanzenbach switches the Tx mode for MQPRIO offload to harmonize the
current implementation with TAPRIO.
For ixgbe:
Jedrzej adds separate ethtool ops for E610 devices to account for device
differences.
Slawomir adds devlink region support for E610 devices.
For idpf:
Mateusz assigns and utilizes the ptype field out of libeth_rqe_info.
Michal removes unreachable code.
* '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
idpf: remove unreachable code from setting mailbox
idpf: assign extracted ptype to struct libeth_rqe_info field
ixgbe: devlink: add devlink region support for E610
ixgbe: add E610 .set_phys_id() callback implementation
ixgbe: apply different rules for setting FC on E610
ixgbe: add support for ACPI WOL for E610
ixgbe: create E610 specific ethtool_ops structure
igc: Change Tx mode for MQPRIO offloading
igc: Limit netdev_tc calls to MQPRIO
igb: Get rid of spurious interrupts
igb: Add support for persistent NAPI config
igb: Link queues to NAPI instances
igb: Link IRQs to NAPI instances
====================
Link: https://patch.msgid.link/20250429234651.3982025-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Cross-merge networking fixes after downstream PR (net-6.15-rc5).
No conflicts or adjacent changes.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Happy May Day.
Things have calmed down on our end (knock on wood), no outstanding
investigations. Including fixes from Bluetooth and WiFi.
Current release - fix to a fix:
- igc: fix lock order in igc_ptp_reset
Current release - new code bugs:
- Revert "wifi: iwlwifi: make no_160 more generic", fixes regression
to Killer line of devices reported by a number of people
- Revert "wifi: iwlwifi: add support for BE213", initial FW is too
buggy
- number of fixes for mld, the new Intel WiFi subdriver
Previous releases - regressions:
- wifi: mac80211: restore monitor for outgoing frames
- drv: vmxnet3: fix malformed packet sizing in vmxnet3_process_xdp
- eth: bnxt_en: fix timestamping FIFO getting out of sync on reset,
delivering stale timestamps
- use sock_gen_put() in the TCP fraglist GRO heuristic, don't assume
every socket is a full socket
Previous releases - always broken:
- sched: adapt qdiscs for reentrant enqueue cases, fix list
corruptions
- xsk: fix race condition in AF_XDP generic RX path, shared UMEM
can't be protected by a per-socket lock
- eth: mtk-star-emac: fix spinlock recursion issues on rx/tx poll
- btusb: avoid NULL pointer dereference in skb_dequeue()
- dsa: felix: fix broken taprio gate states after clock jump"
* tag 'net-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (83 commits)
net: vertexcom: mse102x: Fix RX error handling
net: vertexcom: mse102x: Add range check for CMD_RTS
net: vertexcom: mse102x: Fix LEN_MASK
net: vertexcom: mse102x: Fix possible stuck of SPI interrupt
net: hns3: defer calling ptp_clock_register()
net: hns3: fixed debugfs tm_qset size
net: hns3: fix an interrupt residual problem
net: hns3: store rx VLAN tag offload state for VF
octeon_ep: Fix host hang issue during device reboot
net: fec: ERR007885 Workaround for conventional TX
net: lan743x: Fix memleak issue when GSO enabled
ptp: ocp: Fix NULL dereference in Adva board SMA sysfs operations
net: use sock_gen_put() when sk_state is TCP_TIME_WAIT
bnxt_en: fix module unload sequence
bnxt_en: Fix ethtool -d byte order for 32-bit values
bnxt_en: Fix out-of-bound memcpy() during ethtool -w
bnxt_en: Fix coredump logic to free allocated buffer
bnxt_en: delay pci_alloc_irq_vectors() in the AER path
bnxt_en: call pci_alloc_irq_vectors() after bnxt_reserve_rings()
bnxt_en: Add missing skb_mark_for_recycle() in bnxt_rx_vlan()
...
|
|
Stefan Wahren says:
====================
net: vertexcom: mse102x: Fix RX handling
This series is the first part of two series for the Vertexcom driver.
It contains substantial fixes for the RX handling of the Vertexcom MSE102x.
====================
Link: https://patch.msgid.link/20250430133043.7722-1-wahrenst@gmx.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
In case the CMD_RTS got corrupted by interference, the MSE102x
doesn't allow a retransmission of the command. Instead the Ethernet
frame must be shifted out of the SPI FIFO. Since the actual length is
unknown, assume the maximum possible value.
Fixes: 2f207cbf0dd4 ("net: vertexcom: Add MSE102x SPI support")
Signed-off-by: Stefan Wahren <wahrenst@gmx.net>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20250430133043.7722-5-wahrenst@gmx.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Since there is no protection in the SPI protocol against electrical
interference, the driver shouldn't blindly trust the length field
of CMD_RTS. So introduce a bounds check for incoming frames.
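A hedged sketch of such a check (the limits are illustrative, not
necessarily the driver's exact constants):

#include <linux/if_ether.h>
#include <linux/if_vlan.h>

/* Sketch only: reject CMD_RTS lengths that cannot describe a valid
 * Ethernet frame, since the value may have been corrupted on the wire.
 */
static bool example_rts_len_valid(unsigned int len)
{
        return len >= ETH_ZLEN && len <= VLAN_ETH_FRAME_LEN + ETH_FCS_LEN;
}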
Fixes: 2f207cbf0dd4 ("net: vertexcom: Add MSE102x SPI support")
Signed-off-by: Stefan Wahren <wahrenst@gmx.net>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20250430133043.7722-4-wahrenst@gmx.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The LEN_MASK for CMD_RTS doesn't cover the whole parameter mask.
Bit 11 is reserved, so adjust LEN_MASK accordingly.
Fixes: 2f207cbf0dd4 ("net: vertexcom: Add MSE102x SPI support")
Signed-off-by: Stefan Wahren <wahrenst@gmx.net>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20250430133043.7722-3-wahrenst@gmx.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The MSE102x doesn't provide any SPI commands for interrupt handling.
So in case the interrupt fires before the driver has requested the IRQ,
the interrupt will never fire again. In order to fix this, always poll
for pending packets after opening the interface.
Fixes: 2f207cbf0dd4 ("net: vertexcom: Add MSE102x SPI support")
Signed-off-by: Stefan Wahren <wahrenst@gmx.net>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20250430133043.7722-2-wahrenst@gmx.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Jijie Shao says:
====================
This series contains some bugfixes for the HNS3 ethernet driver
====================
Link: https://patch.msgid.link/20250430093052.2400464-1-shaojijie@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Currently ptp_clock_register() is called before the related PTP
resources are ready. This may cause unexpected results if the upper
layer calls the PTP API during that time window. Fix it by moving
ptp_clock_register() to the end of the function.
Fixes: 0bf5eb788512 ("net: hns3: add support for PTP")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Jijie Shao <shaojijie@huawei.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Link: https://patch.msgid.link/20250430093052.2400464-5-shaojijie@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The size of the tm_qset file of debugfs is limited to 64 KB,
which is too small in the scenario with 1280 qsets.
The size needs to be expanded to 1 MB.
Fixes: 5e69ea7ee2a6 ("net: hns3: refactor the debugfs process")
Signed-off-by: Hao Lan <lanhao@huawei.com>
Signed-off-by: Peiyang Wang <wangpeiyang1@huawei.com>
Signed-off-by: Jijie Shao <shaojijie@huawei.com>
Link: https://patch.msgid.link/20250430093052.2400464-4-shaojijie@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
When a VF is passed through to a VM and the VM is killed, a reported
interrupt may not have been handled; it will remain pending and won't
be cleared by the NIC engine even with an FLR or TQP reset. When the VM
restarts, the interrupt of the first vector may be dropped by the second
enable_irq in vfio; see the issue below:
https://gitlab.com/qemu-project/qemu/-/issues/2884#note_2423361621
We note that vfio has always behaved this way, and the interrupt is a
residue of the NIC engine, so fix the problem by moving the vector
enable process out of the enable_irq loop.
Fixes: 08a100689d4b ("net: hns3: re-organize vector handle")
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Jijie Shao <shaojijie@huawei.com>
Link: https://patch.msgid.link/20250430093052.2400464-3-shaojijie@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The VF driver fails to store the rx VLAN tag strip state when the
user changes the rx VLAN tag offload state, and it defaults to
enabling rx VLAN tag stripping when the VF device is re-initialized
after a reset. So if the user disables rx VLAN tag offload and then
triggers a reset, the HW will still strip the VLAN tag from the packet
and fill it into the RX BD, but the VF driver will ignore it because
rx VLAN tag offload is disabled. This may cause the rx VLAN tag to be
dropped.
Fixes: b2641e2ad456 ("net: hns3: Add support of hardware rx-vlan-offload to HNS3 VF driver")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Jijie Shao <shaojijie@huawei.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250430093052.2400464-2-shaojijie@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2025-04-29 (idpf, igc)
For idpf:
Michal fixes error path handling to remove memory leak.
Larysa prevents reset from being called during shutdown.
For igc:
Jake adjusts locking order to resolve sleeping in atomic context.
* '200GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
igc: fix lock order in igc_ptp_reset
idpf: protect shutdown from reset
idpf: fix potential memory leak on kcalloc() failure
====================
Link: https://patch.msgid.link/20250429221034.3909139-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
When the host loses heartbeat messages from the device,
the driver calls the device-specific ndo_stop function,
which frees the resources. If the driver is unloaded in
this scenario, it calls ndo_stop again, attempting to free
resources that have already been freed, leading to a host
hang issue. To resolve this, dev_close should be called
instead of the device-specific stop function. dev_close
internally calls ndo_stop to stop the network interface
and performs additional cleanup tasks. During the driver
unload process, if the device is already down, ndo_stop
is not called.
Fixes: 5cb96c29aa0e ("octeon_ep: add heartbeat monitor")
Signed-off-by: Sathesh B Edara <sedara@marvell.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250429114624.19104-1-sedara@marvell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Activate TX hang workaround also in
fec_enet_txq_submit_skb() when TSO is not enabled.
Errata: ERR007885
Symptoms: NETDEV WATCHDOG: eth0 (fec): transmit queue 0 timed out
commit 37d6017b84f7 ("net: fec: Workaround for imx6sx enet tx hang when enable three queues")
There is a TDAR race condition for multiQ when the software sets TDAR
and the UDMA clears TDAR simultaneously or in a small window (2-4 cycles).
This will cause the udma_tx and udma_tx_arbiter state machines to hang.
So the workaround is to check the TDAR status four times: if TDAR has
been cleared by hardware, then write TDAR; otherwise don't set TDAR.
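For reference, the existing TSO-path workaround this extends looks
roughly like the following sketch (assuming the driver's quirk flag and
descriptor-activation register fields):

#include <linux/io.h>
#include "fec.h"        /* driver-private types, sketch assumption */

/* Sketch only: only trigger the descriptor-active (TDAR) register if
 * one of four consecutive reads sees it already cleared; otherwise the
 * UDMA is still busy and writing TDAR could hang the tx state machines.
 */
static void example_fec_kick_tx(struct fec_enet_private *fep,
                                struct fec_enet_priv_tx_q *txq)
{
        if (!(fep->quirks & FEC_QUIRK_ERR007885) ||
            !readl(txq->bd.reg_desc_active) ||
            !readl(txq->bd.reg_desc_active) ||
            !readl(txq->bd.reg_desc_active) ||
            !readl(txq->bd.reg_desc_active))
                writel(0, txq->bd.reg_desc_active);
}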
Fixes: 53bb20d1faba ("net: fec: add variable reg_desc_active to speed things up")
Signed-off-by: Mattias Barthel <mattias.barthel@atlascopco.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20250429090826.3101258-1-mattiasbarthel@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Always map the `skb` to the LS descriptor. Previously the skb was
mapped to the EXT descriptor when the number of fragments was zero
with GSO enabled. Mapping the skb to the EXT descriptor prevents it
from being freed, leading to a memory leak.
Fixes: 23f0703c125b ("lan743x: Add main source files for new lan743x driver")
Signed-off-by: Thangaraj Samynathan <thangaraj.s@microchip.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250429052527.10031-1-thangaraj.s@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
On Adva boards, SMA sysfs store/get operations can call
__handle_signal_outputs() or __handle_signal_inputs() while the `irig`
and `dcf` pointers are uninitialized, leading to a NULL pointer
dereference in __handle_signal() and causing a kernel crash. Adva boards
don't use `irig` or `dcf` functionality, so add Adva-specific callbacks
`ptp_ocp_sma_adva_set_outputs()` and `ptp_ocp_sma_adva_set_inputs()` that
avoid invoking `irig` or `dcf` input/output routines.
Fixes: ef61f5528fca ("ptp: ocp: add Adva timecard support")
Signed-off-by: Sagi Maimon <maimon.sagi@gmail.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Link: https://patch.msgid.link/20250429073320.33277-1-maimon.sagi@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
It is possible for a pointer of type struct inet_timewait_sock to be
returned from the functions __inet_lookup_established() and
__inet6_lookup_established(). This can cause a crash when the
returned pointer is of type struct inet_timewait_sock and
sock_put() is called on it. The following is a crash call stack that
shows sk->sk_wmem_alloc being accessed in sk_free() during the call to
sock_put() on a struct inet_timewait_sock pointer. To avoid this issue,
use sock_gen_put() instead of sock_put() when sk->sk_state
is TCP_TIME_WAIT.
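A hedged sketch of the resulting release pattern (the wrapper name is
invented here):

#include <net/inet_hashtables.h>
#include <net/tcp_states.h>

/* Sketch only: a timewait sock is not a full socket, so sock_put()
 * would touch fields such as sk_wmem_alloc that it does not have;
 * sock_gen_put() handles full, timewait and request sockets alike.
 */
static void example_release_sk(struct sock *sk)
{
        if (sk->sk_state == TCP_TIME_WAIT)
                sock_gen_put(sk);
        else
                sock_put(sk);
}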
mrdump.ko ipanic() + 120
vmlinux notifier_call_chain(nr_to_call=-1, nr_calls=0) + 132
vmlinux atomic_notifier_call_chain(val=0) + 56
vmlinux panic() + 344
vmlinux add_taint() + 164
vmlinux end_report() + 136
vmlinux kasan_report(size=0) + 236
vmlinux report_tag_fault() + 16
vmlinux do_tag_recovery() + 16
vmlinux __do_kernel_fault() + 88
vmlinux do_bad_area() + 28
vmlinux do_tag_check_fault() + 60
vmlinux do_mem_abort() + 80
vmlinux el1_abort() + 56
vmlinux el1h_64_sync_handler() + 124
vmlinux > 0xFFFFFFC080011294()
vmlinux __lse_atomic_fetch_add_release(v=0xF2FFFF82A896087C)
vmlinux __lse_atomic_fetch_sub_release(v=0xF2FFFF82A896087C)
vmlinux arch_atomic_fetch_sub_release(i=1, v=0xF2FFFF82A896087C) + 8
vmlinux raw_atomic_fetch_sub_release(i=1, v=0xF2FFFF82A896087C) + 8
vmlinux atomic_fetch_sub_release(i=1, v=0xF2FFFF82A896087C) + 8
vmlinux __refcount_sub_and_test(i=1, r=0xF2FFFF82A896087C, oldp=0) + 8
vmlinux __refcount_dec_and_test(r=0xF2FFFF82A896087C, oldp=0) + 8
vmlinux refcount_dec_and_test(r=0xF2FFFF82A896087C) + 8
vmlinux sk_free(sk=0xF2FFFF82A8960700) + 28
vmlinux sock_put() + 48
vmlinux tcp6_check_fraglist_gro() + 236
vmlinux tcp6_gro_receive() + 624
vmlinux ipv6_gro_receive() + 912
vmlinux dev_gro_receive() + 1116
vmlinux napi_gro_receive() + 196
ccmni.ko ccmni_rx_callback() + 208
ccmni.ko ccmni_queue_recv_skb() + 388
ccci_dpmaif.ko dpmaif_rxq_push_thread() + 1088
vmlinux kthread() + 268
vmlinux 0xFFFFFFC08001F30C()
Fixes: c9d1d23e5239 ("net: add heuristic for enabling TCP fraglist GRO")
Signed-off-by: Jibin Zhang <jibin.zhang@mediatek.com>
Signed-off-by: Shiming Cheng <shiming.cheng@mediatek.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250429020412.14163-1-shiming.cheng@mediatek.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|