|
Just in case we are still handling the QP receive completion while the
rds_ibdev is released, drop the connection instead of crashing the kernel.
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
|
|
Similar to what we did with receive CQ completion handling, we split
the transmit completion handler so that we can implement batched
work completion handling.
We re-use the cq_poll routine and make use of RDS_IB_SEND_OP to
distinguish send completions from receive completions when invoking
the event handler.
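As a rough sketch of the shared dispatch (the handler names below are
assumptions, not necessarily the real rds_ib symbols):

	static void rds_ib_cqe_handler(struct rds_ib_connection *ic,
				       struct ib_wc *wc)
	{
		if (wc->wr_id & RDS_IB_SEND_OP)		/* send completion */
			rds_ib_send_cqe_handler(ic, wc);
		else					/* receive completion */
			rds_ib_recv_cqe_handler(ic, wc);
	}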
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
|
|
For better performance, we split the receive completion IRQ handler. That
lets us acknowledge several WCE events in one call. We also limit the WCs
to a maximum of 32 per call to bound latency. Acknowledging several
completions in one call instead of one call each provides better
performance, since fewer mutual-exclusion locks are taken.
In the next patch, send completion is also split; it re-uses poll_cq()
and hence the code is moved to ib_cm.c.
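A hedged sketch of the batching idea (constant and handler names
assumed):

	#define RDS_IB_WC_MAX 32	/* cap per poll to bound latency */

	static void poll_cq(struct rds_ib_connection *ic, struct ib_cq *cq,
			    struct ib_wc *wcs)
	{
		int nr, i;

		/* one ib_poll_cq() call fetches up to 32 completions,
		 * amortising the lock traffic over the whole batch */
		while ((nr = ib_poll_cq(cq, RDS_IB_WC_MAX, wcs)) > 0)
			for (i = 0; i < nr; i++)
				rds_ib_recv_cqe_handler(ic, &wcs[i]);
	}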
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
|
|
In the transport-independent rds_sendmsg(), we shouldn't make decisions
based on RDS_LL_SEND_FULL, which is used to manage the ring for RDMA-based
transports. We can safely issue rds_send_xmit() and then use its
return value to decide on deferred work. This also fixes
the scenario where at times we see connections stuck with
the LL_SEND_FULL bit set and never cleared.
We kick krdsd any time we see -ENOMEM or -EAGAIN from the
ring allocation code.
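A minimal sketch of the idea (work-queue and field names assumed):

	ret = rds_send_xmit(conn);
	if (ret == -ENOMEM || ret == -EAGAIN)
		/* ring is out of space or memory: kick krdsd */
		queue_delayed_work(rds_wq, &conn->c_send_w, 1);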
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
|
|
The current process gives up if its send work is over the batch limit.
The work queue then gets kicked to finish off any other requests.
This fixes the remaining condition from commit 443be0e5affe ("RDS: make
sure not to loop forever inside rds_send_xmit").
The restart condition is now only for the case where we reached the
over_batch code for some other reason, so we just retry once more
before giving up.
While at it, make sure we use the already available 'send_batch_count'
parameter instead of a magic value. The batch count threshold value
of 1024 came via commit 443be0e5affe ("RDS: make sure not to loop
forever inside rds_send_xmit"). The idea is to process as big a
batch as we can, but at the same time not hold up other processes
waiting to send. Hence back off after the send_batch_count
limit (1024) to avoid soft lockups.
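Roughly, as a sketch:

	/* back off once the configurable limit is hit, instead of the
	 * magic 1024, and let the send worker pick up the remainder */
	if (batch_count >= send_batch_count)
		goto over_batch;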
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
|
|
The function always returns non-negative values.
The problem has been detected using proposed semantic patch
scripts/coccinelle/tests/assign_signed_to_unsigned.cocci [1].
[1]: http://permalink.gmane.org/gmane.linux.kernel/2046107
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Major changes in ath10k:
* add spectral scan support for 10.4 firmware
* add qca6164 support
* implement mesh support using firmware raw mode
|
|
Commit ea317b267e9d ("bpf: Add new bpf map type to store the pointer
to struct perf_event") added perf_event.h to the main eBPF header, so
it gets included for all users. perf_event.h is actually only needed
from the array map side, so let's sanitize this a bit.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For ARMv7 with UDIV instruction support, generate an UDIV instruction
followed by an MLS instruction.
For other ARM variants, generate code calling a C wrapper similar to
the jit_udiv() function used for BPF_ALU | BPF_DIV instructions.
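As a sketch, the two-instruction lowering and its C equivalent:

	/* ARMv7 with UDIV:
	 *	udiv	r3, r0, r1	@ r3 = r0 / r1
	 *	mls	r0, r3, r1, r0	@ r0 = r0 - (r3 * r1) == r0 % r1
	 */
	static u32 jit_mod(u32 num, u32 den)
	{
		u32 q = num / den;
		return num - q * den;	/* the remainder */
	}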
Some performance numbers reported by the test_bpf module (the duration
per filter run is reported in nanoseconds, between "jitted:<x>" and
"PASS":
ARMv7 QEMU nojit: test_bpf: #3 DIV_MOD_KX jited:0 2196 PASS
ARMv7 QEMU jit: test_bpf: #3 DIV_MOD_KX jited:1 104 PASS
ARMv5 QEMU nojit: test_bpf: #3 DIV_MOD_KX jited:0 2176 PASS
ARMv5 QEMU jit: test_bpf: #3 DIV_MOD_KX jited:1 1104 PASS
ARMv5 kirkwood nojit: test_bpf: #3 DIV_MOD_KX jited:0 1103 PASS
ARMv5 kirkwood jit: test_bpf: #3 DIV_MOD_KX jited:1 311 PASS
Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Mark Craske says:
====================
Improve ASIX RX memory allocation error handling
The ASIX RX handler algorithm is weak on error handling.
There is a design flaw in the ASIX RX handler algorithm because the
implementation for handling RX Ethernet frames for the DUB-E100 C1 can
have Ethernet frames spanning multiple URBs. This means that payload data
from more than 1 URB is sometimes needed to fill the socket buffer with a
complete Ethernet frame. When the URB with the start of an Ethernet frame
is received then an attempt is made to allocate a socket buffer. If the
memory allocation fails then the algorithm sets the buffer pointer member
to NULL and the function exits (no crash yet). Subsequently, the RX handler
is called again to process the next URB; it assumes a socket buffer is
available, and the kernel crashes when there is no buffer.
This patchset implements an improvement to the RX handling algorithm to
avoid a crash when no memory is available for the socket buffer.
The patchset will apply cleanly to the net-next master branch but the
created kernel has not been tested. The driver was tested on ARM kernels
v3.8 and v3.14 for a commercial product.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Avoid a loss of synchronisation of the Ethernet Data header 32-bit
word due to a failure to get a netdev socket buffer.
The ASIX RX handling algorithm returned 0 upon a failure to get
an allocation of a netdev socket buffer. This causes the URB
processing to stop which potentially causes a loss of synchronisation
with the Ethernet Data header 32-bit word. Therefore, subsequent
processing of URBs may be rejected due to a loss of synchronisation.
This may cause additional good Ethernet frames to be discarded
along with outputting of synchronisation error messages.
Implement a solution which checks whether a netdev socket buffer
has been allocated before trying to copy the Ethernet frame into
it, but which continues to process the URB so that synchronisation
is maintained. Therefore, only a single Ethernet frame is discarded
when no netdev socket buffer is available.
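A hedged sketch of the check (field and variable names assumed):

	/* only copy when the allocation succeeded... */
	if (rx->ax_skb)
		memcpy(skb_put(rx->ax_skb, copy_length), data, copy_length);

	/* ...but always advance through the URB so the Data header
	 * stream stays in sync; at most this one frame is lost */
	offset += (copy_length + 1) & 0xfffe;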
Signed-off-by: Dean Jenkins <Dean_Jenkins@mentor.com>
Signed-off-by: Mark Craske <Mark_Craske@mentor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When RX Ethernet frames span multiple URB socket buffers,
the data stream may suffer a discontinuity which will cause
the current Ethernet frame in the netdev socket buffer
to be incomplete. This frame needs to be discarded instead
of appending unrelated data from the current URB socket buffer
to the Ethernet frame in the netdev socket buffer. This avoids
creating a corrupted Ethernet frame in the netdev socket buffer.
A discontinuity can occur when the previous URB socket buffer
held an incomplete Ethernet frame due to truncation or a
URB socket buffer containing the end of the Ethernet frame
was missing.
Therefore, add a sanity test for when an Ethernet frame
spans multiple URB socket buffers to check that the remaining
bytes of the currently received Ethernet frame point to
a good Data header 32-bit word of the next Ethernet
frame. Upon error, reset the remaining bytes variable to
zero and discard the current netdev socket buffer.
Assume that the Data header is located at the start of
the current socket buffer and attempt to process the next
Ethernet frame from there. This avoids unnecessarily
discarding a good URB socket buffer that contains a new
Ethernet frame.
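A hedged sketch of the sanity test (names and offset handling
simplified):

	/* peek where the next Data header should start and validate it:
	 * the size field must match its inverted copy in the high half */
	u32 header = get_unaligned_le32(skb->data + rx->remaining);

	if ((header & 0x7ff) != ((~header >> 16) & 0x7ff)) {
		/* discontinuity: drop the partial netdev skb and resync
		 * assuming a Data header at the start of this buffer */
		kfree_skb(rx->ax_skb);
		rx->ax_skb = NULL;
		rx->remaining = 0;
	}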
Signed-off-by: Dean Jenkins <Dean_Jenkins@mentor.com>
Signed-off-by: Mark Craske <Mark_Craske@mentor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The code is checking that the Ethernet frame will fit into a
netdev allocated socket buffer within the constraints of MTU size,
Ethernet header length plus VLAN header length.
The original code checked rx->remaining on each iteration of the while
loop that processes multiple Ethernet frames per URB and/or Ethernet
frames that span across URBs. rx->remaining decreases per iteration,
so there is no point in repeatedly checking that the remaining part of
the Ethernet frame will fit into the netdev socket buffer.
The modification checks that the size of the Ethernet frame will fit
the netdev socket buffer before allocating the netdev socket buffer.
This avoids grabbing memory and then deciding that the Ethernet frame
is too big and then freeing the memory.
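Roughly (a sketch; names follow the driver but are not verbatim):

	/* reject oversized frames before grabbing memory for them */
	if (size > dev->net->mtu + ETH_HLEN + VLAN_HLEN) {
		netdev_err(dev->net, "asix_rx_fixup() Bad RX Length %d\n",
			   size);
		return 0;
	}
	rx->ax_skb = netdev_alloc_skb_ip_align(dev->net, size);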
Signed-off-by: Dean Jenkins <Dean_Jenkins@mentor.com>
Signed-off-by: Mark Craske <Mark_Craske@mentor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Tidy-up the Data header 32-bit word synchronisation logic in
asix_rx_fixup_internal() by removing redundant logic tests.
The code is looking at the following cases of the Data header
32-bit word that is present before each Ethernet frame:
a) all 32 bits of the Data header word are in the URB socket buffer
b) first 16 bits of the Data header word are at the end of the URB
socket buffer
c) last 16 bits of the Data header word are at the start of the URB
socket buffer, e.g. split_head = true
Note that the lifetime of rx->split_head exists outside of the
function call and is accessed per processing of each URB. Therefore,
split_head being true acts on the next URB to be processed.
To check for b), the offset will be 16 bits (2 bytes) from the end of
the buffer; then indicate split_head is true.
To check for c), split_head must be true because the first 16 bits
have been found.
To check for a), use the else branch of the check for c).
Note that the || logic of the old code included the state
(skb->len - offset == sizeof(u16) && rx->split_head), which is not
possible because split_head cannot be true whilst checking for b).
This is because split_head indicates that the first 16 bits have
already been found, and that cannot be the case whilst we are still
looking for those first 16 bits. Therefore, simplify the logic.
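The three cases, flattened into one chain for illustration (the real
function arranges them around its URB-walking loop):

	if (rx->split_head) {
		/* c) the second 16 bits open this URB */
		rx->header |= get_unaligned_le16(skb->data + offset) << 16;
		rx->split_head = false;
		offset += sizeof(u16);
	} else if (skb->len - offset == sizeof(u16)) {
		/* b) only the first 16 bits fit at the end of this URB */
		rx->header = get_unaligned_le16(skb->data + offset);
		rx->split_head = true;
		offset += sizeof(u16);
	} else {
		/* a) the whole 32-bit Data header word is available */
		rx->header = get_unaligned_le32(skb->data + offset);
		offset += sizeof(u32);
	}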
Signed-off-by: Dean Jenkins <Dean_Jenkins@mentor.com>
Signed-off-by: Mark Craske <Mark_Craske@mentor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The Data header synchronisation is easier to understand
if the variables "remaining" and "size" are renamed.
The lifetime of the "remaining" variable extends beyond
asix_rx_fixup_internal(); it indicates any remaining pending bytes of
the Ethernet frame that need to be obtained from the next socket
buffer. This allows an Ethernet frame to span across multiple socket
buffers.
"size" is now local to asix_rx_fixup_internal() and contains
the size read from the Data header 32-bit word.
Add "copy_length" to hold the number of the Ethernet frame
bytes (maybe a part of a full frame) that are to be copied
out of the socket buffer.
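As a sketch of the resulting naming:

	/* rx->remaining persists across calls: bytes of the current
	 * Ethernet frame still owed by the next URB socket buffer */
	u16 size;		/* frame length from the Data header word */
	u32 copy_length;	/* frame bytes to copy out of this URB */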
Signed-off-by: Dean Jenkins <Dean_Jenkins@mentor.com>
Signed-off-by: Mark Craske <Mark_Craske@mentor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The current ongoing effort to dump existing cBPF seccomp filters back
to user space requires holding the pre-transformed instructions, as
we do in the case of socket filters from the sk_attach_filter() side, so they
can be reloaded in original form at a later point in time by utilities
such as criu.
To prepare for this, simply extend the bpf_prog_create_from_user()
API to hold a flag that tells whether we should store the original
or not. Also, fanout filters could make use of that in future for
things like diag. While fanout filters already use bpf_prog_destroy(),
move seccomp over to it as well so that original programs are handled
when present.
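After the change, the creation API carries the flag roughly as follows
(a sketch; see the tree for the exact prototype):

	int bpf_prog_create_from_user(struct bpf_prog **pfp,
				      struct sock_fprog *fprog,
				      bpf_aux_classic_check_t trans,
				      bool save_orig);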
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Tycho Andersen <tycho.andersen@canonical.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Tested-by: Tycho Andersen <tycho.andersen@canonical.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This fixes:
tried to remove device ip6gre0 from (null)
------------[ cut here ]------------
kernel BUG at net/core/dev.c:5219!
invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
CPU: 3 PID: 161 Comm: kworker/u8:2 Not tainted 4.3.0-rc2+ #1142
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
Workqueue: netns cleanup_net
task: ffff8800d784a9c0 ti: ffff8800d74a4000 task.ti: ffff8800d74a4000
RIP: 0010:[<ffffffff817f0797>] [<ffffffff817f0797>] __netdev_adjacent_dev_remove+0x40/0xec
RSP: 0018:ffff8800d74a7a98 EFLAGS: 00010282
RAX: 000000000000002a RBX: 0000000000000000 RCX: 0000000000000000
RDX: ffff88011adcf701 RSI: ffff88011adccbf8 RDI: ffff88011adccbf8
RBP: ffff8800d74a7ab8 R08: 0000000000000001 R09: 0000000000000000
R10: ffffffff81d190ff R11: 00000000ffffffff R12: ffff8800d599e7c0
R13: 0000000000000000 R14: ffff8800d599e890 R15: ffffffff82385e00
FS: 0000000000000000(0000) GS:ffff88011ac00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007ffd6f003000 CR3: 000000000220c000 CR4: 00000000000006e0
Stack:
0000000000000000 ffff8800d599e7c0 0000000000000b00 ffff8800d599e8a0
ffff8800d74a7ad8 ffffffff817f0861 0000000000000000 ffff8800d599e7c0
ffff8800d74a7af8 ffffffff817f088f 0000000000000000 ffff8800d599e7c0
Call Trace:
[<ffffffff817f0861>] __netdev_adjacent_dev_unlink+0x1e/0x35
[<ffffffff817f088f>] __netdev_adjacent_dev_unlink_neighbour+0x17/0x41
[<ffffffff817f56e6>] netdev_upper_dev_unlink+0x6c/0x13d
[<ffffffff81674a3d>] vrf_del_slave+0x26/0x7d
[<ffffffff81674ac3>] vrf_device_event+0x2f/0x34
[<ffffffff81098c40>] notifier_call_chain+0x75/0x9c
[<ffffffff81098fa2>] raw_notifier_call_chain+0x14/0x16
[<ffffffff817ee129>] call_netdevice_notifiers_info+0x52/0x59
[<ffffffff817f179d>] call_netdevice_notifiers+0x13/0x15
[<ffffffff817f6f18>] rollback_registered_many+0x14f/0x24f
[<ffffffff817f70f2>] unregister_netdevice_many+0x19/0x64
[<ffffffff819a2455>] ip6gre_exit_net+0x163/0x177
[<ffffffff817eb019>] ops_exit_list+0x44/0x55
[<ffffffff817ebcb7>] cleanup_net+0x193/0x226
[<ffffffff81091e1c>] process_one_work+0x26c/0x4d8
[<ffffffff81091d20>] ? process_one_work+0x170/0x4d8
[<ffffffff81092296>] worker_thread+0x1df/0x2c2
[<ffffffff810920b7>] ? process_scheduled_works+0x2f/0x2f
[<ffffffff810920b7>] ? process_scheduled_works+0x2f/0x2f
[<ffffffff81097a20>] kthread+0xd4/0xdc
[<ffffffff810bc523>] ? trace_hardirqs_on_caller+0x17d/0x199
[<ffffffff8109794c>] ? __kthread_parkme+0x83/0x83
[<ffffffff81a5240f>] ret_from_fork+0x3f/0x70
[<ffffffff8109794c>] ? __kthread_parkme+0x83/0x83
Fixes: 93a7e7e837af ("net: Remove the now unused vrf_ptr")
Cc: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since the new multi-queue capability depends on a new firmware API,
we can already add some code for it. If the new API is present, a
new opmode ops struct is used that handles the new rx_rss method.
For now, only restructure the RX handling to distinguish between
the two. Future patches will convert the new infrastructure to
actually use the new RX descriptor layout.
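A hedged sketch of the split (member and handler names assumed):

	/* selected only when the firmware advertises the new API */
	static const struct iwl_op_mode_ops iwl_mvm_ops_mq = {
		.rx = iwl_mvm_rx_mq,		/* new RX path */
		.rx_rss = iwl_mvm_rx_mq_rss,	/* per-queue deliveries */
		/* ... */
	};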
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Instead of relying on a hard-coded constant of a maximum of 64 API and
capability bits, add a new enum value after the others that will then
always track the number of used bits in the API/capabilities. We thus
no longer need to maintain the maximum number, and on 32-bit platforms
this even (currently) reduces the number of bits kept in memory.
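Roughly, as a sketch:

	enum iwl_ucode_tlv_capa {
		IWL_UCODE_TLV_CAPA_D0I3_SUPPORT	= 0,
		/* ... more capability bits ... */

		/* always last: tracks the number of used bits itself */
		NUM_IWL_UCODE_TLV_CAPA
	};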
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
There's no need to have negative threshold temperatures, so make
them unsigned to avoid signedness warnings in debugfs.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Add the only NVM section that was not yet captured in debugfs.
Signed-off-by: Moshe Harel <moshe.harel@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Using an int * instead of u32 * as the kstrtou32() output argument
obviously results in signedness warnings; change that.
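That is, roughly:

	u32 value;	/* matches kstrtou32()'s u32 * output argument */
	int ret = kstrtou32(buf, 0, &value);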
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
This is a tweak which has been shown to improve performance when
moving away from the AP while working in 80MHz.
When RS decides to go down to 80MHz SISO MCS0, instead switch to 20MHz MCS4.
Go back to 80MHz MCS1 if RS can sustain 20MHz MCS5.
Signed-off-by: Eyal Shapira <eyalx.shapira@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Clean up variable initialisation slightly.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
As the 3165 device uses the same firmware as 7265-D, and currently
all 7000 series devices (including 3160/3165) use the same API versions,
remove IWL3165_UCODE_API_OK and _MIN. We might have to put them
back if firmware support ever splits, but in that case might also
have to add a different MODULE_FIRMWARE statement.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
success_ratio is actually 128 * SR in percentage while
IWL_MVM_RS_SR_NO_DECREASE is 85%. Fix this by using RS_PERCENT().
This bug caused the if branch to be always executed. This in turn
led to always selecting a rate, following a column switch, in which
the expected throughput would exceed the best expected current throughput.
In some scenarios where the success ratio isn't >85%, such a rate
could be too aggressive, leading us to avoid the new column.
This has the potential of causing suboptimal performance.
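A sketch of the fix (macro shown for context):

	#define RS_PERCENT(x)	(128 * (x))

	/* success_ratio carries 128 * SR%, so scale the threshold the
	 * same way before comparing */
	if (success_ratio > RS_PERCENT(IWL_MVM_RS_SR_NO_DECREASE)) {
		/* ... */
	}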
Reported-by: Moshe Harel <moshe.harel@intel.com>
Signed-off-by: Eyal Shapira <eyalx.shapira@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Indentation was off a bit. Fix it.
Signed-off-by: Eyal Shapira <eyalx.shapira@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
This message isn't very useful and creates clutter.
Remove it.
Signed-off-by: Eyal Shapira <eyalx.shapira@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Pretty print the rate full details to ease debugging.
Signed-off-by: Eyal Shapira <eyalx.shapira@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
The firmware has always treated these two bits to mean that
powersave is enabled when POWER_SAVE_ENA is set and CAM is
clear; it doesn't use them in any non-combined way.
Therefore, it's pointless to send it two bits, and the API
should be cleaned up. Prepare the driver by removing the CAM
bit and using only POWER_SAVE_ENA to indicate whether PS is
enabled or not.
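Roughly (a sketch, assuming the existing flag name):

	/* one bit now carries the whole meaning; no CAM bit is sent */
	if (ps_enabled)
		cmd->flags |= cpu_to_le16(POWER_FLAGS_POWER_SAVE_ENA_MSK);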
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
When enabling a queue, the default SSN is 0.
Allow determining what that SSN should be, if required. This
can happen, for example, if a queue gets reconfigured.
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
"DQA" is shorthand for "dynamic queue allocation", with the
idea of allocating queues per-RA/TID on-demand rather than
using shared queues statically allocated per vif. The goal
of this is to enable future features (like GO PM) and to
improve performance measurements of TX traffic.
When RA/TID streams can't be neatly sorted into different AC
queues, DQA allows sharing queues for the same RA. This means
DQA allows different ACs to reach the same HW queue.
Update the code to allow such queue sharing by having a mapping
between the HW queue and the mac80211 queues using it (as this
could be more than one queue).
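A hypothetical sketch of the mapping's shape (not the actual struct
layout):

	/* each HW queue remembers every mac80211 queue mapped onto it,
	 * since the relation is no longer 1:1 */
	unsigned long hw_queue_to_mac80211[IWL_MAX_HW_QUEUES];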
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
As the transport will decide how many queues (and MSI-X vectors)
to allocate, add a field to indicate that to the op-mode so it
can size/allocate its own data structures appropriately.
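Roughly (a sketch of the new field):

	struct iwl_trans {
		/* ... */
		u8 num_rx_queues;	/* filled in by the transport */
	};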
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Upcoming hardware will have the ability to do L3 hashing for RSS,
directing data packets (and perhaps some associated metadata and
management notifications) to different MSI-X vectors.
In this case, it makes no sense to go through the full RX dispatch
since it's already known that only a subset of the possibilities
can come in, requiring a new receive method. In addition this must
know which queue the packet was received on.
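A hedged sketch of the new receive method's shape:

	void (*rx_rss)(struct iwl_op_mode *op_mode, struct napi_struct *napi,
		       struct iwl_rx_cmd_buffer *rxb, unsigned int queue);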
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Treat PHY RX specially, since it's actually pretty frequent,
doesn't need all the notification etc. code, and will have a
different handler in future hardware.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Return a proper error when wrong parameters are passed to debugfs
tof_range_request.
Signed-off-by: Assaf Krauss <assaf.krauss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
The command needs to have the AP interface's BSSID (which corresponds
to its address).
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Make some input formats more natural, e.g. bandwidth and periods
are more natural in decimal than in hexadecimal.
Signed-off-by: Assaf Krauss <assaf.krauss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This Ethernet driver supports the Microchip enc424j600/624j600 Ethernet
controller over an SPI bus interface. This driver makes use of the regmap API to
optimize access to registers by caching registers where possible.
Datasheet:
http://ww1.microchip.com/downloads/en/DeviceDoc/39935b.pdf
Signed-off-by: Jon Ringle <jringle@gridpoint.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This commit allows installing a custom reg_update_bits function for cases where
the hardware provides a mechanism to set or clear register bits without a
read/modify/write cycle. Such is the case with the Microchip ENCX24J600.
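A hedged sketch of the hook's shape (handler name assumed):

	static int encx24j600_reg_update_bits(void *context, unsigned int reg,
					      unsigned int mask,
					      unsigned int val)
	{
		/* drive the chip's bit-set/bit-clear registers directly,
		 * avoiding regmap's read/modify/write fallback */
		/* ... SPI command issue elided ... */
		return 0;
	}

	static const struct regmap_bus encx24j600_regmap_bus = {
		.reg_update_bits = encx24j600_reg_update_bits,
		/* ... read/write ops ... */
	};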
Signed-off-by: Jon Ringle <jringle@gridpoint.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The current code invokes hang reset in case of an error interrupt. We should
hang reset only in case of a tx timeout. This is because of the way hang reset
is implemented in firmware: hang reset takes more firmware resources than
soft reset. The adapter does not generate an error interrupt in case of a tx
timeout.
Hang reset only in case of tx timeout, in .ndo_tx_timeout. Do soft reset
otherwise. Introduce deferred work, enic_tx_hang_reset, to do hang reset.
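A minimal sketch (field name assumed):

	/* the heavyweight hang reset moves to deferred work that only
	 * .ndo_tx_timeout schedules */
	static void enic_tx_timeout(struct net_device *netdev)
	{
		struct enic *enic = netdev_priv(netdev);

		schedule_work(&enic->tx_hang_reset);
	}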
Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Some of the enic adapters are known to generate spurious interrupts. When
an error interrupt is generated, the driver just resets the device. This patch
resets the device only when an error has actually occurred.
Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Hariprasad Shenai says:
====================
cxgb4: Trivial fixes for cxgb4
Fixes the following issues:
Don't read non-existent T4/T5/T6 adapter registers for the ethtool dump.
For T4, don't read the mailbox control registers. Adds a new devlog facility
and reports the correct link speed for unsupported ones.
This patch series has been created against the net-next tree and includes
patches on cxgb4 driver.
We have included all the maintainers of respective drivers. Kindly review
the change and let us know in case of any review comments.
====================
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When we get garbage from the firmware with weird Port Speeds,
etc. we should emit a warning regarding unsupported speeds rather than
use the bogus default of "10Mbps" which isn't even an option in the
firmware Port Information message.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The firmware team added a new Device Log Facility FW_DEVLOG_FACILITY_CF,
but the driver has been decoding Device Log messages with that Facility as
"(NULL)". Fix that.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
T4 doesn't have a shadow copy of the register which we can read without
side effects, so don't read the mailbox control register on T4 adapters.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Update the T4/T5/T6 adapter register ranges so that we don't read
non-existent registers when dumping them via ethtool.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/net-next
Eric W. Biederman says:
====================
net: Pass net through ip fragmention
This is the next installment of my work to pass struct net through the
output path so the code does not need to guess how to figure out which
network namespace it is in, and ultimately routes can have output
devices in another network namespace.
This round focuses on passing net through ip fragmentation which we seem
to call from about everywhere. That is the main ip output paths, the
bridge netfilter code, and openvswitch. This has to happen at once
across the tree, as function pointers are involved.
First some prep work is done, then ipv4 and ipv6 are converted and then
temporary helper functions are removed.
====================
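A hedged sketch of the post-conversion shape, with struct net threaded
through fragmentation and its output callback:

	int ip_do_fragment(struct net *net, struct sock *sk,
			   struct sk_buff *skb,
			   int (*output)(struct net *, struct sock *,
					 struct sk_buff *));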
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Sowmini Varadhan says:
====================
RDS: RDS-TCP perf enhancements
A 3-part patchset that (a) improves current RDS-TCP perf
by 2X-3X and (b) refactors earlier robustness code for
better observability/scaling.
Patch 1 is an enhancement of earlier robustness fixes
that had used separate sockets for client and server endpoints to
resolve race conditions. It is possible to have an equivalent
solution that does not use 2 sockets. The benefit of a
single socket solution is that it results in more predictable
and observable behavior for the underlying TCP pipe of an
RDS connection.
Patches 2 and 3 are simple, straightforward perf bug fixes
that align the RDS TCP socket with other parts of the kernel stack.
v2: fix kbuild-test-robot warnings; address comments from Sergei
Shtylyov and Santosh Shilimkar.
====================
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|