|
This patch adds support for setting VF trust by the host. If the
specified VF is trusted, it can enable promiscuous mode (including
allmulti mode). If a trusted VF has enabled promiscuous mode and is
then made untrusted, the host will disable promiscuous mode for
that VF.
Since the VF now updates its promiscuous mode from set_rx_mode, it is
no longer necessary to set broadcast promiscuous mode at
initialization or reset time.
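A minimal sketch of how such a handler can look (illustrative only; the
helper and field names below are hypothetical, not the actual hns3 code):
static int hclge_set_vf_trust(struct net_device *netdev, int vf, bool enable)
{
	struct hclge_vport *vport = hclge_get_vf_vport(netdev, vf); /* hypothetical */

	if (!vport)
		return -EINVAL;

	vport->vf_info.trusted = enable;
	/* revoking trust turns promiscuous (and allmulti) mode off */
	if (!enable && vport->vf_info.promisc_enable)
		hclge_set_vport_promisc_mode(vport, false, false); /* hypothetical */

	return 0;
}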
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
|
|
This patch adds support for spoof check configuration for VFs.
When it is enabled, "spoof checking" is done for both the MAC address
and VLAN. For each VF, the HW ensures that the source MAC address
(or VLAN) of every outgoing packet exists in the MAC list (or
VLAN list) configured for RX filtering for that VF. If not,
the packet is dropped.
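For reference, spoof checking is typically toggled per VF from user space
with a command of this form:
'ip link set <pf> vf <vf_id> spoofchk <on|off>'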
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
|
|
This patch adds support for configuring VF link properties.
The options are auto, enable, and disable. Even if the PF
is down, communication between VFs works normally
as long as the VFs are set to enable. The commands are as follows:
'ip link set <pf> vf <vf_id> state <auto|enable|disable>'
change the VF status
'ip link show'
show the setting status
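At the driver level, the three options map onto the standard VF link
state constants; a hedged sketch of the handler shape (only the
IFLA_VF_LINK_STATE_* constants are the real uapi values, the function
body is illustrative):
#include <linux/if_link.h>

static int hclge_set_vf_link_state(struct net_device *netdev, int vf,
				   int link_state)
{
	switch (link_state) {
	case IFLA_VF_LINK_STATE_AUTO:    /* VF link follows the PF link */
	case IFLA_VF_LINK_STATE_ENABLE:  /* VF link forced up */
	case IFLA_VF_LINK_STATE_DISABLE: /* VF link forced down */
		/* program the firmware accordingly (driver-specific) */
		return 0;
	default:
		return -EINVAL;
	}
}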
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
|
|
Andrii Nakryiko says:
====================
This patch set makes bpf_helpers.h and bpf_endian.h a part of libbpf itself
for consumption by user BPF programs, not just selftests. It also splits off
tracing helpers into bpf_tracing.h, which also becomes part of libbpf. Some of
the legacy stuff (BPF_ANNOTATE_KV_PAIR, load_{byte,half,word}, bpf_map_def
with unsupported fields, etc.) is extracted into a selftests-only bpf_legacy.h.
All the selftests and samples are switched to use libbpf's headers and
selftests' ones are removed.
As part of this patch set we also add the BPF_CORE_READ family of variadic
macros, which simplify BPF CO-RE reads, especially ones that have to follow
a few pointers. E.g., what in the non-BPF world (and when using BCC) would be:
int x = s->a->b.c->d; /* s, a, and b.c are pointers */
Today would have to be written using explicit bpf_probe_read() calls as:
void *t;
int x;
bpf_probe_read(&t, sizeof(t), s->a);
bpf_probe_read(&t, sizeof(t), ((struct b *)t)->b.c);
bpf_probe_read(&x, sizeof(x), ((struct c *)t)->d);
This is super inconvenient and distracts from program logic a lot. Now, with
added BPF_CORE_READ() macros, you can write the above as:
int x = BPF_CORE_READ(s, a, b.c, d);
Up to 9 levels of pointer chasing are supported, which should be enough for
any practical purpose, hopefully, without adding too many boilerplate macro
definitions (though there are admittedly some, given how variadic and
recursive C macros have to be implemented).
There is also a BPF_CORE_READ_INTO() variant, which relies on the caller to
allocate space for the result:
int x;
BPF_CORE_READ_INTO(&x, s, a, b.c, d);
The result of the last bpf_probe_read() call in the chain is the result of
BPF_CORE_READ_INTO(). If any intermediate bpf_probe_read() call fails, then
all the subsequent ones will fail too, so this is sufficient to know whether
the overall "operation" succeeded or not. No short-circuiting of
bpf_probe_read()s is done, though.
BPF_CORE_READ_STR_INTO() is added as well; it differs from
BPF_CORE_READ_INTO() only in that the last bpf_probe_read() call (reading the
final field after chasing pointers) is replaced with bpf_probe_read_str().
The result of bpf_probe_read_str() is returned as the result of the
BPF_CORE_READ_STR_INTO() macro itself, so that applications can track the
return code and/or the length of the read string.
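For example (a sketch; it assumes the usual task_struct layout where comm
is a fixed-size string):
char comm[16];
int len = BPF_CORE_READ_STR_INTO(&comm, task, group_leader, comm);
if (len < 0)
	return 0; /* some bpf_probe_read() in the chain failed */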
Patch set outline:
- patch #1 undoes previously added GCC-specific bpf-helpers.h include;
- patch #2 splits off legacy stuff we don't want to carry over;
- patch #3 adjusts CO-RE reloc tests to avoid subsequent naming conflict with
BPF_CORE_READ;
- patch #4 splits off bpf_tracing.h;
- patch #5 moves bpf_{helpers,endian,tracing}.h and bpf_helper_defs.h
generation into libbpf and adjusts Makefiles to include libbpf for header
search;
- patch #6 adds variadic BPF_CORE_READ() macro family, as described above;
- patch #7 adds tests to verify all possible levels of pointer nestedness for
BPF_CORE_READ(), as well as correctness test for BPF_CORE_READ_STR_INTO().
v4->v5:
- move BPF_CORE_READ() stuff into bpf_core_read.h header (Alexei);
v3->v4:
- rebase on latest bpf-next master;
- bpf_helper_defs.h generation is moved into libbpf's Makefile;
v2->v3:
- small formatting fixes and macro () fixes (Song);
v1->v2:
- fix CO-RE reloc tests before bpf_helpers.h move (Song);
- split off legacy stuff we don't want to carry over (Daniel, Toke);
- split off bpf_tracing.h (Daniel);
- fix samples/bpf build (assuming other fixes are applied);
- switch remaining maps either to bpf_map_def_legacy or BTF-defined maps;
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
Validate BPF_CORE_READ correctness and handling of up to 9 levels of
nestedness using cyclic task->(group_leader->)*->tgid chains.
Also add a test of the maximum-depth BPF_CORE_READ_STR_INTO() macro.
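For illustration, the deepest variant of such a chain looks roughly like
this (a sketch, not the literal test source; the group_leader of a group
leader points back to itself, so the result is the same tgid):
pid_t tgid = BPF_CORE_READ(task,
			   group_leader, group_leader, group_leader,
			   group_leader, group_leader, group_leader,
			   group_leader, group_leader, tgid);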
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-8-andriin@fb.com
|
|
Add a few macros simplifying BCC-like multi-level probe reads, while also
emitting CO-RE relocations for each read.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-7-andriin@fb.com
|
|
Move bpf_helpers.h, bpf_tracing.h, and bpf_endian.h into libbpf. Move
bpf_helper_defs.h generation into libbpf's Makefile. Ensure all those
headers are installed along the other libbpf headers. Also, adjust
selftests and samples include path to include libbpf now.
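After this change, BPF programs are expected to include the headers via
the installed libbpf path, e.g.:
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_tracing.h>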
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-6-andriin@fb.com
|
|
Split off the PT_REGS-related helpers into the bpf_tracing.h header. Adjust
selftests and samples to include it where necessary.
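A minimal example of the kind of program that now needs bpf_tracing.h
(a sketch; the probed function and the use of the argument are arbitrary):
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/do_sys_open")
int trace_open(struct pt_regs *ctx)
{
	/* PT_REGS_PARM2 is provided by bpf_tracing.h */
	const char *filename = (const char *)PT_REGS_PARM2(ctx);

	return filename != NULL;
}

char _license[] SEC("license") = "GPL";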
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-5-andriin@fb.com
|
|
To allow adding a variadic BPF_CORE_READ macro with slightly different
syntax and semantics, define CORE_READ in the CO-RE reloc tests. It is
a thin wrapper around the low-level bpf_core_read() macro, which in turn is
just a wrapper around bpf_probe_read().
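The wrapper is of roughly this shape (a sketch of the idea):
#define CORE_READ(dst, src) bpf_core_read(dst, sizeof(*(dst)), src)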
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-4-andriin@fb.com
|
|
Split off a few legacy things from bpf_helpers.h into a separate
bpf_legacy.h file:
- load_{byte|half|word};
- remove extra inner_idx and numa_node fields from bpf_map_def and
introduce bpf_map_def_legacy for use in samples;
- move BPF_ANNOTATE_KV_PAIR into bpf_legacy.h.
Adjust samples and selftests accordingly by either including
bpf_legacy.h and using bpf_map_def_legacy, or switching to BTF-defined
maps altogether.
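For reference, the legacy map definition keeps the two extra fields;
roughly (a sketch, see bpf_legacy.h for the authoritative layout):
struct bpf_map_def_legacy {
	unsigned int type;
	unsigned int key_size;
	unsigned int value_size;
	unsigned int max_entries;
	unsigned int map_flags;
	unsigned int inner_map_idx;
	unsigned int numa_node;
};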
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-3-andriin@fb.com
|
|
Having GCC provide its own bpf-helper.h is not the right approach and is
going to be changed. Undo the bpf_helpers.h change before moving
bpf_helpers.h into libbpf.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-2-andriin@fb.com
|
|
syzbot reported a warning [1] that triggered after a recent patch from
Jiri. This exposes a bug that we have already hit in the past (see commit
ff244c6b29b1 ("tun: handle register_netdevice() failures properly")
for details).
tun uses priv->destructor without an ndo_init() method.
register_netdevice() can return an error, but in some cases will
not call priv->destructor(). Jiri's recent patch added one more
such case.
A long-term fix would be to move the initialization of everything
we destroy in ->destructor() into ndo_init(), but this looks a bit
risky given the complexity of the tun driver.
A simpler fix is to detect, after a failed register_netdevice(),
whether the tun_free_netdev() function was already called.
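The shape of that fix, as a hedged sketch (the 'already freed' check
below is hypothetical; the actual patch keys off state that
tun_free_netdev() clears):
err = register_netdevice(tun->dev);
if (err < 0) {
	/* register_netdevice() may already have invoked the
	 * priv->destructor (tun_free_netdev()) on failure;
	 * only free here if it did not. */
	if (!tun_netdev_already_freed(tun))	/* hypothetical */
		goto err_free_dev;
	goto err_detach;	/* skip the second free */
}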
[1]
ODEBUG: free active (active state 0) object type: timer_list hint: tun_flow_cleanup+0x0/0x280 drivers/net/tun.c:457
WARNING: CPU: 0 PID: 8653 at lib/debugobjects.c:481 debug_print_object+0x168/0x250 lib/debugobjects.c:481
Kernel panic - not syncing: panic_on_warn set ...
CPU: 0 PID: 8653 Comm: syz-executor976 Not tainted 5.4.0-rc1-next-20191004 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x172/0x1f0 lib/dump_stack.c:113
panic+0x2dc/0x755 kernel/panic.c:220
__warn.cold+0x2f/0x3c kernel/panic.c:581
report_bug+0x289/0x300 lib/bug.c:195
fixup_bug arch/x86/kernel/traps.c:174 [inline]
fixup_bug arch/x86/kernel/traps.c:169 [inline]
do_error_trap+0x11b/0x200 arch/x86/kernel/traps.c:267
do_invalid_op+0x37/0x50 arch/x86/kernel/traps.c:286
invalid_op+0x23/0x30 arch/x86/entry/entry_64.S:1028
RIP: 0010:debug_print_object+0x168/0x250 lib/debugobjects.c:481
Code: dd 80 b9 e6 87 48 89 fa 48 c1 ea 03 80 3c 02 00 0f 85 b5 00 00 00 48 8b 14 dd 80 b9 e6 87 48 c7 c7 e0 ae e6 87 e8 80 84 ff fd <0f> 0b 83 05 e3 ee 80 06 01 48 83 c4 20 5b 41 5c 41 5d 41 5e 5d c3
RSP: 0018:ffff888095997a28 EFLAGS: 00010082
RAX: 0000000000000000 RBX: 0000000000000003 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffff815cb526 RDI: ffffed1012b32f37
RBP: ffff888095997a68 R08: ffff8880a92ac580 R09: ffffed1015d04101
R10: ffffed1015d04100 R11: ffff8880ae820807 R12: 0000000000000001
R13: ffffffff88fb5340 R14: ffffffff81627110 R15: ffff8880aa41eab8
__debug_check_no_obj_freed lib/debugobjects.c:963 [inline]
debug_check_no_obj_freed+0x2d4/0x43f lib/debugobjects.c:994
kfree+0xf8/0x2c0 mm/slab.c:3755
kvfree+0x61/0x70 mm/util.c:593
netdev_freemem net/core/dev.c:9384 [inline]
free_netdev+0x39d/0x450 net/core/dev.c:9533
tun_set_iff drivers/net/tun.c:2871 [inline]
__tun_chr_ioctl+0x317b/0x3f30 drivers/net/tun.c:3075
tun_chr_ioctl+0x2b/0x40 drivers/net/tun.c:3355
vfs_ioctl fs/ioctl.c:47 [inline]
file_ioctl fs/ioctl.c:539 [inline]
do_vfs_ioctl+0xdb6/0x13e0 fs/ioctl.c:726
ksys_ioctl+0xab/0xd0 fs/ioctl.c:743
__do_sys_ioctl fs/ioctl.c:750 [inline]
__se_sys_ioctl fs/ioctl.c:748 [inline]
__x64_sys_ioctl+0x73/0xb0 fs/ioctl.c:748
do_syscall_64+0xfa/0x760 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x441439
Code: e8 9c ae 02 00 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 3b 0a fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fff61c37438 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000000441439
RDX: 0000000020000400 RSI: 00000000400454ca RDI: 0000000000000004
RBP: 00007fff61c37470 R08: 0000000000000001 R09: 0000000100000000
R10: 0000000000000000 R11: 0000000000000246 R12: ffffffffffffffff
R13: 0000000000000005 R14: 0000000000000000 R15: 0000000000000000
Kernel Offset: disabled
Rebooting in 86400 seconds..
Fixes: ff92741270bf ("net: introduce name_node struct to be used in hashlist")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jiri Pirko <jiri@mellanox.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
|
|
There is a spelling mistake in a NL_SET_ERR_MSG_MOD message. Fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
|
|
Don't populate const arrays on the stack but instead make them
static. Makes the object code smaller by 1058 bytes.
Before:
text data bss dec hex filename
29879 6144 0 36023 8cb7 drivers/net/phy/mscc.o
After:
text data bss dec hex filename
28437 6528 0 34965 8895 drivers/net/phy/mscc.o
(gcc version 9.2.1, amd64)
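The pattern in general form (illustrative values, not the driver's
actual tables):
/* before: populated on the stack on every call */
const u16 init_seq[] = { 0x1f, 0x10, 0x12, 0x14 };

/* after: lives in .rodata, no per-call stack population */
static const u16 init_seq[] = { 0x1f, 0x10, 0x12, 0x14 };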
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
|
|
Don't populate the array exp_mask on the stack but instead make it
static. Makes the object code smaller by 224 bytes.
Before:
text data bss dec hex filename
77832 2290 0 80122 138fa ethernet/netronome/nfp/bpf/jit.o
After:
text data bss dec hex filename
77544 2354 0 79898 1381a ethernet/netronome/nfp/bpf/jit.o
(gcc version 9.2.1, amd64)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
|
|
SPI is one of the interfaces used to access devices which have a POSIX
clock driver (real time clocks, 1588 timers, etc). The main problem is
not that the SPI bus is slow, but that drivers don't take a constant
amount of time to transfer data over SPI. When there is a high delay
in the readout of time, there will be uncertainty in the value that
has been read out of the peripheral.
When that delay is constant, the uncertainty can at least be
approximated with a certain accuracy which is fine more often than not.
Timing jitter occurs all over in the kernel code, and is mainly caused
by having to let go of the CPU for various reasons such as preemption,
servicing interrupts, going to sleep, etc. Another major reason is CPU
dynamic frequency scaling.
It turns out that the problem of retrieving time from a SPI peripheral
with high accuracy can be solved by the use of "PTP system
timestamping" - a mechanism to correlate the time when the device has
snapshotted its internal time counter with the Linux system time at that
same moment. This is sufficient for having a precise time measurement -
it is not necessary for the whole SPI transfer to be transmitted "as
fast as possible", or "as low-jitter as possible". The system has to be
low-jitter for a very short amount of time to be effective.
This patch introduces a PTP system timestamping mechanism in struct
spi_transfer. This is to be used by SPI device drivers when they need to
know the exact time at which the underlying device's time was
snapshotted. More often than not, SPI peripherals have a very exact
timing for when their SPI-to-interconnect bridge issues a transaction
for snapshotting and reading the time register, and that will be
dependent on when the SPI-to-interconnect bridge figures out that this
is what it should do, i.e. as soon as it sees byte N of the SPI transfer.
Since spi_device drivers are the ones who'd know best how the peripheral
behaves in this regard, expose a mechanism in spi_transfer which allows
them to specify which word (or word range) from the transfer should be
timestamped.
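Concretely, this adds per-transfer fields along these lines (a hedged
sketch; consult the actual struct spi_transfer definition for the
authoritative names):
struct spi_transfer {
	/* ... existing fields ... */

	/* PTP system timestamp for the requested word range */
	struct ptp_system_timestamp *ptp_sts;
	unsigned int ptp_sts_word_pre;	/* snapshot before this word */
	unsigned int ptp_sts_word_post;	/* snapshot after this word */
};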
Add a default implementation of the PTP system timestamping in the SPI
core. This is not going to be satisfactory performance-wise, but should
at least increase the likelihood that SPI device drivers will use PTP
system timestamping in the future.
There are 3 entry points from the core towards the SPI controller
drivers:
- transfer_one: The driver is passed individual spi_transfers to
execute. This is the easiest to timestamp.
- transfer_one_message: The core passes the driver an entire spi_message
(a potential batch of spi_transfers). The core applies the same pre-
and post-transfer timestamps to all transfers within a message. This is not ideal,
but nothing better can be done by default anyway, since the core has
no insight into how the driver batches the transfers.
- transfer: Like transfer_one_message, but for unqueued drivers (i.e.
the driver implements its own queue scheduling).
Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Link: https://lore.kernel.org/r/20190905010114.26718-3-olteanv@gmail.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Currently, in xdp_adjust_tail_kern.c, MAX_PCKT_SIZE is limited
to 600. To make this size flexible, the static global variable
'max_pcktsz' is added.
By updating the packet size from user space, xdp_adjust_tail_kern.o
will use that value as the new maximum packet size.
This static global variable is accessible from the .data section with
bpf_object__find_map* from user space, since it is treated as an
internal map (accessible via the .bss/.data/.rodata suffix).
If no '-P <MAX_PCKT_SIZE>' option is given, the maximum packet size
defaults to 600.
For clarity, the helper used to fetch the map is changed from
'bpf_map__next' to 'bpf_object__find_map_fd_by_name'. Also, the way
prog_fd and map_fd are tested is changed from '!= 0' to '< 0', since
an fd could be 0 when stdin is closed.
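The resulting pattern, sketched (the internal map name is derived from
the object file name and is shown here as an assumption):
/* BPF side: a static global ends up in the .data internal map */
static volatile __u32 max_pcktsz = 600; /* default when -P is not given */

/* user side: fetch the map by name and check fds with < 0 */
int map_fd = bpf_object__find_map_fd_by_name(obj, "xdp_adju.data");
if (map_fd < 0)
	return 1;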
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191007172117.3916-1-danieltimlee@gmail.com
|
|
Stanislav Fomichev says:
====================
While having per-net-ns flow dissector programs is convenient for
testing, security-wise it's better to have only one vetted global
flow dissector implementation.
Let's have a convention that when a BPF flow dissector is installed
in the root namespace, child namespaces can't override it.
The intended use-case is to attach a global BPF flow dissector
early from the init scripts/systemd. Attaching the global dissector
is prohibited if some non-root namespace already has a flow dissector
attached. Also, attaching to a non-root namespace is prohibited
when there is a flow dissector attached to the root namespace.
v3:
* drop extra check and empty line (Andrii Nakryiko)
v2:
* EPERM -> EEXIST (Song Liu)
* Make sure we don't have dissector attached to non-root namespaces
when attaching the global one (Andrii Nakryiko)
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Make sure non-root namespaces get an error if the root flow dissector
is attached.
Cc: Petar Penkov <ppenkov@google.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Always use the init_net flow dissector BPF program if one is attached,
and fall back to the per-net-namespace one otherwise. Also, deny
installing new programs if there is already one attached to the root
namespace.
Users can still detach their BPF programs, but can't attach any
new ones (-EEXIST).
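The precedence check boils down to something like the following hedged
sketch (the helper that scans child namespaces is hypothetical):
if (net == &init_net) {
	/* refuse a global attach if any child netns already
	 * has a dissector program attached */
	if (any_netns_has_flow_dissector())	/* hypothetical */
		return -EEXIST;
} else if (rcu_access_pointer(init_net.flow_dissector_prog)) {
	/* a root dissector is installed; children can't override it */
	return -EEXIST;
}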
Cc: Petar Penkov <ppenkov@google.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Fix spelling mistake.
Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191007082636.14686-1-anton.ivanov@cambridgegreys.com
|
|
In commit 5e61f2707029 ("libbpf: stop enforcing kern_version,
populate it for users"), the non-LIBBPF_API __bpf_object__open_xattr()
function was removed from the libbpf.h header. This broke bpftool, which
relied on that function. This patch fixes the build by switching to the
newly added bpf_object__open_file(), which provides the same
capabilities but is an official and future-proof API.
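The replacement call is along these lines (a sketch using the opts-based
API):
DECLARE_LIBBPF_OPTS(bpf_object_open_opts, open_opts);

struct bpf_object *obj = bpf_object__open_file(path, &open_opts);
if (libbpf_get_error(obj))
	return -1;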
v1->v2:
- fix prog_type shadowing (Stanislav).
Fixes: 5e61f2707029 ("libbpf: stop enforcing kern_version, populate it for users")
Reported-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20191007225604.2006146-1-andriin@fb.com
|
|
The current Makefile dependency chain is not strict enough and allows
test_attach_probe.o to be built before test_progs's
prog_tests/attach_probe.o, which leads to the assembler complaining
about a missing included binary.
This patch is a minimal fix: it enforces that test_attach_probe.o (the
BPF object file) is built before prog_tests/attach_probe.c is
compiled.
Fixes: 928ca75e59d7 ("selftests/bpf: switch tests to new bpf_object__open_{file, mem}() APIs")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191007204149.1575990-1-andriin@fb.com
|
|
Ioana Ciornei says:
====================
dpaa2-eth: misc cleanup
This patch set consists of some cleanup patches ranging from removing dead
code to fixing a minor issue in ethtool stats. Also, unbounded while loops
are removed from the driver by adding a maximum number of retries for DPIO
portal commands.
Changes in v2:
- return -ETIMEDOUT where possible if the number of retries is hit
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Throughout the driver there are several places where we wait
indefinitely for DPIO portal commands to be executed, while
the portal returns a busy response code.
Even though in theory we are guaranteed the portals become
available eventually, in practice the QBMan hardware module
may become unresponsive in various corner cases.
Make sure we can never get stuck in an infinite while loop
by adding a retry counter for all portal commands.
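The bounded-retry pattern looks roughly like this (illustrative; the
portal call and the retry limit are stand-ins for the driver's own):
int err, retries = 0;

do {
	err = dpio_portal_cmd(priv);	/* hypothetical portal command */
	if (err != -EBUSY)
		break;
	cpu_relax();
} while (++retries < DPIO_PORTAL_BUSY_RETRIES);	/* hypothetical bound */

if (err == -EBUSY)
	err = -ETIMEDOUT;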
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Don't print error message for a successful return value.
Fixes: d84c3a4ded96 ("dpaa2-eth: Add new DPNI statistics counters")
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove one function call whose result was not used anywhere.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Don't populate the array tick_array on the stack but instead make it
static. Makes the object code smaller by 29 bytes.
Before:
text data bss dec hex filename
19191 432 0 19623 4ca7 hisilicon/hns3/hns3pf/hclge_tm.o
After:
text data bss dec hex filename
19098 496 0 19594 4c8a hisilicon/hns3/hns3pf/hclge_tm.o
(gcc version 9.2.1, amd64)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Don't populate the arrays port_map and sl_map on the stack but
instead make them static. Makes the object code smaller by 64 bytes.
Before:
text data bss dec hex filename
49575 6872 64 56511 dcbf hisilicon/hns/hns_dsaf_main.o
After:
text data bss dec hex filename
49350 7032 64 56446 dc7e hisilicon/hns/hns_dsaf_main.o
(gcc version 9.2.1, amd64)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Jakub Kicinski says:
====================
net/tls: minor micro optimizations
This set brings a number of minor code changes from my tree which
don't have a noticeable impact on performance but seem reasonable
nonetheless.
First, the sk_msg_sg copy array is converted to a bitmap; zeroing that
structure takes a lot of time, hence we should try to keep it
small.
Next, two conditions are marked as unlikely; GCC seemed to have
a little trouble reasoning correctly about those.
Patch 4 adds parameters to tls_device_decrypted() to avoid
walking structures, as all callers already have the relevant
pointers.
Lastly, two boolean members of TLS context structures are
converted to a bitfield.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use a single bit instead of a boolean to remember whether a packet
was already decrypted.
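Schematically, the bool-to-bitfield conversion (a sketch covering both
this and the adjacent async_capable change, not the exact TLS structs):
struct tls_rx_ctx_sketch {
	/* before: bool decrypted; int async_capable; */
	u8 decrypted:1;
	u8 async_capable:1;
};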
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Store async_capable on a single bit instead of a full integer
to save space.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Avoid unnecessary pointer chasing and calculations, callers already
have most of the state tls_device_decrypted() needs.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Make sure GCC realizes it's unlikely that allocations will fail.
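The idiom, in generic form:
mem = kmalloc(size, GFP_KERNEL);
if (unlikely(!mem))
	return -ENOMEM;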
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Tell GCC sk->err is not likely to be set.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Don't use a bool array in struct sk_msg_sg; this saves 12 bytes.
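Schematically (a sketch; the real field is sized from MAX_MSG_FRAGS):
struct sk_msg_sg_sketch {
	/* before: bool copy[MAX_MSG_FRAGS + 2]; */
	unsigned long copy;	/* one bit per sg entry */
};
Accesses then become set_bit()/test_bit()/clear_bit() operations on that
word.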
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use helper skb_ensure_writable in two more places to simplify the code.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Consistent with how pskb_may_pull() also now does so.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This function de-facto returns a bool, so let's change the return type
accordingly.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Sameeh Jubran says:
====================
ena: Support ethtool set_channels
Difference from v2:
* ethtool's set/get channels: Switched to using combined instead of
separate rx/tx
* Fixed error handling in set_channels
* Fixed indentation and cosmetic issues as requested by Jakub Kicinski
Difference from v1:
* Dropped the print from patch 0002 - "net: ena: multiple queue creation
related cleanups" as requested by David Miller
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The set_channels callback enables the user to change the number of queues
used by the driver via ethtool. We decided to currently support only an
equal number of rx and tx queues; this might change in the future.
Also rename dev_up to dev_was_up in ena_update_queue_count() to make
it clearer.
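A sketch of the callback shape (illustrative; the real driver validates
against its own limits, and the bounds check here is an assumption):
static int ena_set_channels(struct net_device *netdev,
			    struct ethtool_channels *channels)
{
	struct ena_adapter *adapter = netdev_priv(netdev);
	u32 count = channels->combined_count;

	/* only combined channels: one IRQ/NAPI services both rx and tx */
	if (channels->rx_count || channels->tx_count ||
	    count > adapter->max_num_io_queues)
		return -EINVAL;

	return ena_update_queue_count(adapter, count);
}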
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The number of queues can be derived using ethtool, so there is no need
to print it in ena_probe().
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
- Update ena_ethtool:ena_get_channels() to return adapter->max_io_queues
so that ethtool -l returns the correct maximum queue number.
- Change the name of ena_calc_io_queue_num() to
ena_calc_max_io_queue_num(), as it returns the maximum number of io
queues; the actual number of queues can be smaller if changed
by ethtool -L, which is implemented in a later commit.
- Change the variable name from io_queue_num to max_num_io_queues in
ena_calc_max_io_queue_num() and ena_probe().
- Make all variables that convey the number and size of queues u32,
for consistency with the API between the driver and the
device.
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since we use the same IRQ and NAPI to service RX and TX, we need to
use a combined channel instead of separate rx and tx channels.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
- Rename ena_calc_queue_size() to ena_calc_io_queue_size() for clarity
and consistency.
- Remove the redundant number-of-io-queues parameter from
ena_enable_msix() and ena_enable_msix_and_set_admin_interrupts(),
which already get the adapter parameter; use adapter->num_io_queues
inside the functions instead.
- Use the local variable ena_dev instead of ctx->ena_dev in
ena_calc_io_queue_size().
- Fix multi-row comment alignments.
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Most places in the code refer to the IO queues as io_queues and not
simply queues, e.g. max_io_queues_per_vf, ENA_MAX_NUM_IO_QUEUES,
ena_destroy_all_io_queues(), etc.
We are also adding the new max_num_io_queues field to struct ena_adapter
in the following commit.
The changes included in this commit are:
struct ena_adapter->num_queues => struct ena_adapter->num_io_queues
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Daniel T. Lee says:
====================
samples: pktgen: allow specifying a destination IP range
Currently, the pktgen scripts support specifying a destination port range.
To further extend their capabilities, this patch set allows specifying a
destination IP range in CIDR notation when running the pktgen scripts.
Specifying a destination IP range is useful in various situations, such as
testing RSS/RPS with a randomized n-tuple.
This patch set fixes a problem with checking the command result in
proc_cmd, and adds the feature of allowing a destination IP range.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently, kernel pktgen has the ability to specify a destination
address range for sending packets (e.g. pgset "dst_min/dst_max"),
but the sample pktgen scripts have no option to make use of it.
This commit adds the ability to specify the destination address range
in CIDR notation.
-d : ($DEST_IP) destination IP. CIDR (e.g. 198.18.0.0/15) is also allowed
# ./pktgen_sample01_simple.sh -6 -d fe80::20/126 -p 3000 -n 4
# tcpdump ip6 and udp
05:14:18.082285 IP6 fe80::99.71 > fe80::23.3000: UDP, length 16
05:14:18.082564 IP6 fe80::99.43 > fe80::23.3000: UDP, length 16
05:14:18.083366 IP6 fe80::99.107 > fe80::22.3000: UDP, length 16
05:14:18.083585 IP6 fe80::99.97 > fe80::21.3000: UDP, length 16
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This commit adds CIDR parsing and IP validation helper functions to parse
a single IP or a range of IPs given in CIDR notation (e.g. 198.18.0.0/15).
The address is validated before it is parsed.
These helpers will be used to set the target address in samples/pktgen.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently, proc_cmd is used to dispatch commands to 'pg_ctrl', 'pg_thread',
and 'pg_set'. proc_cmd is designed to check the command result by grepping
for "Result:", but this can fail since that string is only shown by
'pg_thread' and 'pg_set'.
This commit fixes the logic by grepping for the "Result:" string only when
the command is not for 'pg_ctrl'.
For clarity of the execution flow, the 'errexit' flag has been set.
To clean up pktgen on exit, a trap has been added for the EXIT signal.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|