Free btf_dedup if btf_ensure_modifiable() returns an error.
Fixes: 919d2b1dbb07 ("libbpf: Allow modification of BTF and add btf__add_str API")
Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022202035.48868-1-mauricio@kinvolk.io
|
|
Recent change to use tp/syscalls/sys_enter_nanosleep for perf_buffer
selftests causes this selftest to fail on 4.9 kernel in libbpf CI ([0]):
libbpf: prog 'handle_sys_enter': failed to attach to perf_event FD 6: Invalid argument
libbpf: prog 'handle_sys_enter': failed to attach to tracepoint 'syscalls/sys_enter_nanosleep': Invalid argument
It's not exactly clear why, because the perf_event itself is created for
this tracepoint, but I can't even compile a 4.9 kernel locally, so it's
hard to figure this out. If anyone has better luck and would like to
help investigate this, I'd really appreciate it.
For now, unblock CI by switching back to raw_syscalls/sys_enter, but reduce
the amount of unnecessary samples emitted by filtering by process ID. Use
an explicit ARRAY map for that to make it work on 4.9 as well, because
global data isn't yet supported there.
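For reference, a minimal sketch of that approach (not the exact selftest
code; map and program names are illustrative):
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, int);
} my_pid_map SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
} perf_buf_map SEC(".maps");

SEC("tp/raw_syscalls/sys_enter")
int handle_sys_enter(void *ctx)
{
	int zero = 0, sample = 0;
	int cur_pid = bpf_get_current_pid_tgid() >> 32;
	int *exp_pid = bpf_map_lookup_elem(&my_pid_map, &zero);

	if (!exp_pid || *exp_pid != cur_pid)
		return 0; /* drop samples from unrelated processes */
	bpf_perf_event_output(ctx, &perf_buf_map, BPF_F_CURRENT_CPU,
			      &sample, sizeof(sample));
	return 0;
}

char _license[] SEC("license") = "GPL";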
Fixes: aa274f98b269 ("selftests/bpf: Fix possible/online index mismatch in perf_buffer test")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022201342.3490692-1-andrii@kernel.org
|
|
When building libbpf sources out of the kernel tree (in the Github repo) we
run into a compilation error due to the unknown __aligned attribute. It must
be coming from some kernel header, which is not available to the Github
sources. Use an explicit __attribute__((aligned(16))) instead.
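For illustration, the portable spelling versus the kernel-only shorthand
(a minimal sketch):
/* kernel-internal shorthand, unavailable when building out of tree: */
/*   __int128 x __aligned(16);                                        */
/* portable spelling that works with both gcc and clang:              */
__int128 x __attribute__((aligned(16)));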
Fixes: 961632d54163 ("libbpf: Fix dumping non-aligned __int128")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022192502.2975553-1-andrii@kernel.org
|
|
On my box I see a bunch of ping/nettest processes hanging
around after fcnal-test.sh is done.
Clean those up before netns deletion.
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20211021140247.29691-1-fw@strlen.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Though gcc conveniently compiles a simple memset to "rep stos," clang
prefers to call the libc version of memset. If a test is dynamically
linked, the libc memset isn't available in L1 (nor is the PLT or the
GOT, for that matter). Even if the test is statically linked, the libc
memset may choose to use some CPU features, like AVX, which may not be
enabled in L1. Note that __builtin_memset doesn't solve the problem,
because (a) the compiler is free to call memset anyway, and (b)
__builtin_memset may also choose to use features like AVX, which may
not be available in L1.
To avoid a myriad of problems, use an explicit "rep stos" to clear the
VMCB in generic_svm_setup(), which is called both from L0 and L1.
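A minimal sketch of the idea (the helper name is illustrative, not the
exact selftest code):
#include <stddef.h>
#include <stdint.h>

/* Clear a buffer with an explicit "rep stosb"; no call into libc memset,
 * which may not be mapped or usable when running in L1. */
static inline void clear_buf(void *p, size_t len)
{
	asm volatile("rep stosb"
		     : "+D"(p), "+c"(len)
		     : "a"(0)
		     : "memory");
}

/* e.g. in generic_svm_setup(): clear_buf(svm->vmcb, sizeof(*svm->vmcb)); */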
Reported-by: Ricardo Koller <ricarkol@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Fixes: 20ba262f8631a ("selftests: KVM: AMD Nested test infrastructure")
Message-Id: <20210930003649.4026553-1-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Lots of simple overlapping additions.
With a build fix from Stephen Rothwell.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Recent kernels have checks to ensure the GPA values in special-purpose
registers like CR3 are within the maximum physical address range and
don't overlap with anything in the upper/reserved range. In the case of
SEV kselftest guests booting directly into 64-bit mode, CR3 needs to be
initialized to the GPA of the page table root, with the encryption bit
set. The kernel accounts for this encryption bit by removing it from
reserved bit range when the guest advertises the bit position via
KVM_SET_CPUID*, but kselftests currently call KVM_SET_SREGS as part of
vm_vcpu_add_default(), before KVM_SET_CPUID*.
As a result, KVM_SET_SREGS will return an error in these cases.
Address this by moving vcpu_set_cpuid() (which calls KVM_SET_CPUID*)
ahead of vcpu_setup() (which calls KVM_SET_SREGS).
While there, address a typo in the assertion that triggers when
KVM_SET_SREGS fails.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20211006203617.13045-1-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Nathan Tempelman <natet@google.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Including fixes from netfilter, and can.
We'll have one more fix for a socket accounting regression, it's still
getting polished. Otherwise things look fine.
Current release - regressions:
- revert "vrf: reset skb conntrack connection on VRF rcv", there are
valid uses for previous behavior
- can: m_can: fix iomap_read_fifo() and iomap_write_fifo()
Current release - new code bugs:
- mlx5: e-switch, return correct error code on group creation failure
Previous releases - regressions:
- sctp: fix transport encap_port update in sctp_vtag_verify
- stmmac: fix E2E delay mechanism (in PTP timestamping)
Previous releases - always broken:
- netfilter: ip6t_rt: fix out-of-bounds read of ipv6_rt_hdr
- netfilter: xt_IDLETIMER: fix out-of-bound read caused by lack of
init
- netfilter: ipvs: make global sysctl read-only in non-init netns
- tcp: md5: fix selection between vrf and non-vrf keys
- ipv6: count rx stats on the orig netdev when forwarding
- bridge: mcast: use multicast_membership_interval for IGMPv3
- can:
  - j1939: fix UAF for rx_kref of j1939_priv; abort sessions on
    receiving bad messages
  - isotp: fix TX buffer concurrent access in isotp_sendmsg(); fix
    return error on FC timeout on TX path
- ice: fix re-init of RDMA Tx queues and crash if RDMA was not inited
- hns3: schedule the polling again when allocation fails, prevent
stalls
- drivers: add missing of_node_put() when aborting
for_each_available_child_of_node()
- ptp: fix possible memory leak and UAF in ptp_clock_register()
- e1000e: fix packet loss in burst mode on Tiger Lake and later
- mlx5e: ipsec: fix more checksum offload issues"
* tag 'net-5.15-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (75 commits)
usbnet: sanity check for maxpacket
net: enetc: make sure all traffic classes can send large frames
net: enetc: fix ethtool counter name for PM0_TERR
ptp: free 'vclock_index' in ptp_clock_release()
sfc: Don't use netif_info before net_device setup
sfc: Export fibre-specific supported link modes
net/mlx5e: IPsec: Fix work queue entry ethernet segment checksum flags
net/mlx5e: IPsec: Fix a misuse of the software parser's fields
net/mlx5e: Fix vlan data lost during suspend flow
net/mlx5: E-switch, Return correct error code on group creation failure
net/mlx5: Lag, change multipath and bonding to be mutually exclusive
ice: Add missing E810 device ids
igc: Update I226_K device ID
e1000e: Fix packet loss on Tiger Lake and later
e1000e: Separate TGP board type from SPT
ptp: Fix possible memory leak in ptp_clock_register()
net: stmmac: Fix E2E delay mechanism
nfc: st95hf: Make spi remove() callback return zero
net: hns3: disable sriov before unload hclge layer
net: hns3: fix vf reset workqueue cannot exit
...
|
|
Utilize libbpf's feature that allows looking up internal maps by their
ELF section names. No need to guess or calculate the exact truncated
prefix taken from the object name.
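For example (a hedged sketch; 'obj' is an already-opened struct bpf_object):
#include <stdio.h>
#include <bpf/libbpf.h>

struct bpf_map *rodata = bpf_object__find_map_by_name(obj, ".rodata");

if (rodata)
	printf(".rodata map fd: %d\n", bpf_map__fd(rodata));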
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-11-andrii@kernel.org
|
|
The map name assigned to internal maps (.rodata, .data, .bss, etc.)
consists of a small prefix of the bpf_object's name and the ELF section name
as a suffix. This makes it hard for users to "guess" the name to use when
looking up a map by name with the bpf_object__find_map_by_name() API.
One proposal was to drop the object name prefix from the map name and just
use ".rodata", ".data", etc. as names. One downside called out was that
when multiple BPF applications are active on the host, it would be hard
to distinguish between multiple instances of .rodata and know which BPF
object (app) they belong to. Having the first few characters, while quite
limiting, can still give a bit of a clue, in general.
Note, though, that btf_value_type_id for such global data maps (ARRAY)
points to DATASEC type, which encodes full ELF name, so tools like
bpftool can take advantage of this fact to "recover" full original name
of the map. This is also the reason why for custom .data.* and .rodata.*
maps libbpf uses only their ELF names and doesn't prepend object name at
all.
Another downside of such an approach is that it is not backwards compatible:
besides breaking direct uses of the bpf_object__find_map_by_name() API, it
would break any BPF skeleton generated using a bpftool that was compiled
with an older libbpf version.
Instead of causing all this pain, libbpf will still generate map name
using a combination of object name and ELF section name, but it will
allow looking such maps up by their natural names, which correspond to
their respective ELF section names. This means non-truncated ELF section
names longer than 15 characters are going to be expected and supported.
With such a setup, we get the best of both worlds: we leave small bits of
a clue about the BPF application that instantiated such maps, while also
making it easy for user apps to look up such maps at runtime. In this
sense it closes the corresponding libbpf 1.0 issue ([0]).
BPF skeletons will continue using full names for lookups.
[0] Closes: https://github.com/libbpf/libbpf/issues/275
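A hedged sketch of how a tool could recover the full original name from the
map's BTF (assuming 'btf' was loaded from the kernel via the map's btf_id):
struct bpf_map_info info = {};
__u32 info_len = sizeof(info);

if (!bpf_obj_get_info_by_fd(map_fd, &info, &info_len) && info.btf_value_type_id) {
	const struct btf_type *t = btf__type_by_id(btf, info.btf_value_type_id);
	/* DATASEC name encodes the full ELF section name, e.g. ".data.my_sec" */
	const char *full_name = btf__name_by_offset(btf, t->name_off);
}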
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-10-andrii@kernel.org
|
|
Enhance existing selftests to demonstrate the use of custom
.data/.rodata sections.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-9-andrii@kernel.org
|
|
Add support for having multiple .rodata and .data data sections ([0]).
.rodata/.data are supported as usual, but now .rodata.<whatever> and
.data.<whatever> are also supported. Each such section will get its own
backing BPF_MAP_TYPE_ARRAY, just like .rodata and .data.
Multiple .bss maps are not supported, as the whole '.bss' name is
confusing and might be deprecated soon; besides, the user would need to
specify a custom ELF section with the SEC() attribute anyway, so they might
as well stick to just the .data.* and .rodata.* convention.
The user-visible map name for such new maps is going to be just their ELF
section name.
[0] https://github.com/libbpf/libbpf/issues/274
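A minimal sketch of what this enables on the BPF program side (variable and
section names are illustrative):
/* each custom section below gets its own backing ARRAY map, named after
 * the ELF section itself */
int scratch_counter SEC(".data.counters");
const volatile int debug_level SEC(".rodata.config") = 1;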
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-8-andrii@kernel.org
|
|
It can happen that some data sections (e.g., .rodata.cst16, containing
compiler populated string constants) won't have a corresponding BTF
DATASEC type. Now that libbpf supports .rodata.* and .data.* sections,
a situation like that would cause an invalid BPF skeleton to be generated
that won't compile successfully, as some parts of the skeleton would assume
memory-mapped struct definitions for each special data section.
Fix this by generating empty struct definitions for such data sections.
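A hedged sketch of the generated placeholder (the exact layout is up to
bpftool; names here are illustrative):
/* section ".rodata.cst16" has no DATASEC, so the skeleton gets an empty
 * struct plus the usual pointer field, keeping the generated code valid C */
struct my_obj_bpf__rodata_cst16 {
} *rodata_cst16;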
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-7-andrii@kernel.org
|
|
Remove the assumption that there is only a single instance of each of the
.rodata and .data internal maps. Nothing changes for '.rodata' and '.data'
maps, but a new '.rodata.something' map will get a 'rodata_something' section
in the BPF skeleton (as well as a struct bpf_map * field with the same name
in the maps section).
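A hedged sketch of the user-space side (skeleton and variable names are
illustrative):
struct my_prog_bpf *skel = my_prog_bpf__open();

/* memory-mapped view of the '.rodata.something' section */
skel->rodata_something->some_config = 42;   /* set before load */
my_prog_bpf__load(skel);

/* the corresponding struct bpf_map * is also available: */
int fd = bpf_map__fd(skel->maps.rodata_something);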
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-6-andrii@kernel.org
|
|
Remove internal libbpf assumption that there can be only one .rodata,
.data, and .bss map per BPF object. To achieve that, extend and
generalize the scheme that was used for keeping track of relocation ELF
sections. Now each ELF section has a temporary extra index that keeps
track of logical type of ELF section (relocations, data, read-only data,
BSS). Switch relocation to this scheme, as well as .rodata/.data/.bss
handling.
We don't yet allow multiple .rodata, .data, and .bss sections, but no
libbpf internal code makes an assumption that there can be only one of
each and thus they can be explicitly referenced by a single index. Next
patches will actually allow multiple .rodata and .data sections.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-5-andrii@kernel.org
|
|
Minimize the usage of class-agnostic gelf_xxx() APIs from libelf. These
APIs require copying ELF data structures into local GElf_xxx structs and
have a more cumbersome API. A BPF ELF file is defined to always be a 64-bit
ELF object, even when intended to be run on 32-bit host architectures,
so there is no need to do class-agnostic conversions everywhere. The BPF
static linker implementation within libbpf has been using Elf64-specific
types since its initial implementation.
Add two simple helpers, elf_sym_by_idx() and elf_rel_by_idx(), for more
succinct direct access to ELF symbol and relocation records within ELF
data itself and switch all the GElf_xxx usage into Elf64_xxx
equivalents. The only remaining place within libbpf.c that's still using
gelf API is gelf_getclass(), as there doesn't seem to be a direct way to
get underlying ELF bitness.
No functional changes intended.
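A simplified sketch of what such a direct-access helper boils down to (the
real helpers operate on libbpf's internal object state):
#include <gelf.h>

/* BPF ELF files are always ELFCLASS64, so typed access is safe; compare
 * with gelf_getsym(), which copies the record into a local GElf_Sym. */
static Elf64_Sym *elf_sym_by_idx(Elf_Data *symbols, size_t idx)
{
	return (Elf64_Sym *)symbols->d_buf + idx;
}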
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-4-andrii@kernel.org
|
|
Name the currently anonymous internal struct that keeps ELF-related state
for a bpf_object. Just a bit of clean up, no functional changes.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-3-andrii@kernel.org
|
|
There isn't a good use case where anyone but libbpf itself needs to call
btf__finalize_data(). It was implemented for internal use and it's not
clear why it was made into public API in the first place. To function, it
requires active ELF data, which is stored inside bpf_object for the
duration of opening phase only. But the only BTF that needs bpf_object's
ELF is that bpf_object's BTF itself, which libbpf fixes up automatically
during bpf_object__open() operation anyways. There is no need for any
additional fix up and no reasonable scenario where it's useful and
appropriate.
Thus, btf__finalize_data() is just an API atavism and is better removed.
So this patch marks it as deprecated immediately (v0.6+) and moves the
code from btf.c into libbpf.c where it's used in the context of
bpf_object opening phase. Such code co-location allows making the code
structure more straightforward and removing the bpf_object__section_size()
and bpf_object__variable_offset() internal helpers from libbpf_internal.h,
making them static. Their naming is also adjusted to not create
a wrong illusion that they are some sort of method of bpf_object. They
are internal helpers and are called appropriately.
This is part of libbpf 1.0 effort ([0]).
[0] Closes: https://github.com/libbpf/libbpf/issues/276
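A hedged sketch of the deprecation annotation in btf.h (the message text is
illustrative):
LIBBPF_DEPRECATED_SINCE(0, 6, "btf__finalize_data() is not necessary anymore")
LIBBPF_API int btf__finalize_data(void *priv, struct btf *btf);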
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-2-andrii@kernel.org
|
|
The perf buffer test triggers tracing with the nanosleep syscall,
but monitors all syscalls, which results in a lot of data in the
buffer and makes it harder to debug. Let's lower the trace
traffic and monitor just the nanosleep syscall.
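The narrower attach point, sketched (handler body elided; the program name
matches the existing test):
SEC("tp/syscalls/sys_enter_nanosleep")
int handle_sys_enter(void *ctx)
{
	/* emit one perf buffer sample per nanosleep entry */
	return 0;
}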
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211021114132.8196-4-jolsa@kernel.org
|
|
The perf_buffer test fails on systems with offline CPUs:
# test_progs -t perf_buffer
serial_test_perf_buffer:PASS:nr_cpus 0 nsec
serial_test_perf_buffer:PASS:nr_on_cpus 0 nsec
serial_test_perf_buffer:PASS:skel_load 0 nsec
serial_test_perf_buffer:PASS:attach_kprobe 0 nsec
serial_test_perf_buffer:PASS:perf_buf__new 0 nsec
serial_test_perf_buffer:PASS:epoll_fd 0 nsec
skipping offline CPU #4
serial_test_perf_buffer:PASS:perf_buffer__poll 0 nsec
serial_test_perf_buffer:PASS:seen_cpu_cnt 0 nsec
serial_test_perf_buffer:PASS:buf_cnt 0 nsec
...
serial_test_perf_buffer:PASS:fd_check 0 nsec
serial_test_perf_buffer:PASS:drain_buf 0 nsec
serial_test_perf_buffer:PASS:consume_buf 0 nsec
serial_test_perf_buffer:FAIL:cpu_seen cpu 5 not seen
#88 perf_buffer:FAIL
Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED
If the offline CPU is in the middle of the possible set,
we get a mismatch between possible and online CPU buffers.
The perf buffer test calls perf_buffer__consume_buffer for
all 'possible' CPUs, but the library holds only 'online'
CPU buffers and perf_buffer__consume_buffer returns them
based on index.
Add an extra (online) index to keep track of online buffers;
we still need the original (possible) index to trigger tracing
on the proper CPU.
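A hedged sketch of consuming only the online buffers via libbpf's accessors
('pb' is an existing perf_buffer):
size_t i, n = perf_buffer__buffer_cnt(pb);   /* online buffers only */

for (i = 0; i < n; i++) {
	int err = perf_buffer__consume_buffer(pb, i);

	if (err)
		break;
}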
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211021114132.8196-3-jolsa@kernel.org
|
|
The perf_buffer test fails on systems with offline CPUs:
# test_progs -t perf_buffer
test_perf_buffer:PASS:nr_cpus 0 nsec
test_perf_buffer:PASS:nr_on_cpus 0 nsec
test_perf_buffer:PASS:skel_load 0 nsec
test_perf_buffer:PASS:attach_kprobe 0 nsec
test_perf_buffer:PASS:perf_buf__new 0 nsec
test_perf_buffer:PASS:epoll_fd 0 nsec
skipping offline CPU #24
skipping offline CPU #25
skipping offline CPU #26
skipping offline CPU #27
skipping offline CPU #28
skipping offline CPU #29
skipping offline CPU #30
skipping offline CPU #31
test_perf_buffer:PASS:perf_buffer__poll 0 nsec
test_perf_buffer:PASS:seen_cpu_cnt 0 nsec
test_perf_buffer:FAIL:buf_cnt got 24, expected 32
Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED
Change the test to check online CPUs instead of possible ones.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211021114132.8196-2-jolsa@kernel.org
|
|
The verified_insns field was added to the response of the bpf_obj_get_info_by_fd
call on a prog. Confirm that it's being populated by loading a simple
program and asking for its info.
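A minimal sketch of the check (not the exact selftest code):
#include <stdio.h>
#include <bpf/bpf.h>

static void print_verified_insns(int prog_fd)
{
	struct bpf_prog_info info = {};
	__u32 info_len = sizeof(info);

	if (!bpf_obj_get_info_by_fd(prog_fd, &info, &info_len))
		printf("verified_insns: %u\n", info.verified_insns);
}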
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211020074818.1017682-3-davemarchevsky@fb.com
|
|
This stat is currently printed in the verifier log and not stored
anywhere. To ease consumption of this data, add a field to bpf_prog_aux
so it can be exposed via BPF_OBJ_GET_INFO_BY_FD and fdinfo.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211020074818.1017682-2-davemarchevsky@fb.com
|
|
Currently ptr_is_aligned() takes the size, and not the alignment, as a
parameter, which may be overly pessimistic, e.g. for __i128 on s390,
which must be only 8-byte aligned. Fix by using btf__align_of().
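A sketch of the corrected check, assuming the same helper shape as in
btf_dump.c:
#include <stdbool.h>
#include <stdint.h>
#include <linux/types.h>
#include <bpf/btf.h>

static bool ptr_is_aligned(const struct btf *btf, __u32 type_id, const void *data)
{
	int alignment = btf__align_of(btf, type_id);

	if (alignment == 0)
		return false;   /* treat lookup failure as unaligned */
	return ((uintptr_t)data) % alignment == 0;
}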
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211021104658.624944-2-iii@linux.ibm.com
|
|
Add a new test which triggers the unix_listen kernel function
to test the bpf_skc_to_unix_sock helper.
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021134752.1223426-3-hengqi.chen@gmail.com
|
|
The helper is used in tracing programs to cast a socket
pointer to a unix_sock pointer.
The return value could be NULL if the casting is illegal.
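A hedged sketch of typical usage from a tracing program (attach point and
field access are illustrative):
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("fentry/unix_listen")
int BPF_PROG(on_unix_listen, struct socket *sock, int backlog)
{
	struct unix_sock *unix_sk = bpf_skc_to_unix_sock(sock->sk);

	if (!unix_sk)           /* cast fails for non-unix sockets */
		return 0;
	/* ... inspect unix_sk, e.g. its bound address ... */
	return 0;
}

char _license[] SEC("license") = "GPL";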
Suggested-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021134752.1223426-2-hengqi.chen@gmail.com
|
|
get_warnings_count() does fclose() on a FILE * returned from popen().
Fix it to call pclose() as it should.
tools/testing/selftests/kvm/x86_64/mmio_warning_test
x86_64/mmio_warning_test.c: In function ‘get_warnings_count’:
x86_64/mmio_warning_test.c:87:9: warning: ‘fclose’ called on pointer returned from a mismatched allocation function [-Wmismatched-dealloc]
87 | fclose(f);
| ^~~~~~~~~
x86_64/mmio_warning_test.c:84:13: note: returned from ‘popen’
84 | f = popen("dmesg | grep \"WARNING:\" | wc -l", "r");
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
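A sketch of the corrected helper (simplified):
#include <stdio.h>

static int get_warnings_count(void)
{
	int warnings = 0;
	FILE *f = popen("dmesg | grep \"WARNING:\" | wc -l", "r");

	if (!f)
		return -1;
	if (fscanf(f, "%d", &warnings) != 1)
		warnings = -1;
	pclose(f);	/* matches popen(); fclose() here was the bug */
	return warnings;
}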
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|
|
Pablo Neira Ayuso says:
====================
Netfilter/IPVS fixes for net
The following patchset contains Netfilter fixes for net:
1) Crash due to missing initialization of timer data in
xt_IDLETIMER, from Juhee Kang.
2) NF_CONNTRACK_SECMARK should be bool in Kconfig, from Vegard Nossum.
3) Skip netdev events on netns removal, from Florian Westphal.
4) Add testcase to show port shadowing via UDP, also from Florian.
5) Remove pr_debug() code in ip6t_rt, this fixes a crash due to
unsafe access to non-linear skbuff, from Xin Long.
6) Make net/ipv4/vs/debug_level read-only from non-init netns,
from Antoine Tenart.
7) Remove bogus invocation to bash in selftests/netfilter/nft_flowtable.sh
also from Florian.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
* kvm/selftests/memslot:
: .
: Enable KVM memslot selftests on arm64, making them less
: x86 specific.
: .
KVM: selftests: Build the memslot tests for arm64
KVM: selftests: Make memslot_perf_test arch independent
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
|
Add memslot_perf_test and memslot_modification_stress_test to the list
of aarch64 selftests.
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Oliver Upton <oupton@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210907180957.609966-3-ricarkol@google.com
|
|
memslot_perf_test uses ucalls for synchronization between guest and
host. Ucalls API is architecture independent: tests do not need to know
details like what kind of exit they generate on a specific arch. More
specifically, there is no need to check whether an exit is KVM_EXIT_IO
in x86 for the host to know that the exit is ucall related, as
get_ucall() already makes that check.
Change memslot_perf_test to not require specifying what exit a
ucall generates. Also add a missing ucall_init.
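A hedged sketch of the arch-independent pattern (VCPU_ID and the run loop
are assumed from the test harness):
struct ucall uc;

ucall_init(vm, NULL);	/* the missing init mentioned above */
vcpu_run(vm, VCPU_ID);

switch (get_ucall(vm, VCPU_ID, &uc)) {
case UCALL_SYNC:
	/* guest reached a sync point */
	break;
case UCALL_DONE:
	/* guest finished */
	break;
default:
	break;
}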
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Oliver Upton <oupton@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210907180957.609966-2-ricarkol@google.com
|
|
The various floating point test programs written in assembly have a bunch
of helper functions and macros which are cut'n'pasted between them. Factor
them out into a separate source file which is linked into all of them.
We don't include memcmp() since it isn't as generic as it should be and
directly branches to report an error in the programs.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211019181851.3341232-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Some new verifier tests that hit some important gaps in the parameter
space for atomic ops.
There are already exhaustive tests for the JIT part in
lib/test_bpf.c, but these exercise the verifier too.
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211015093318.1273686-1-jackmanb@google.com
|
|
Non-aligned integers are dumped as bitfields, which is supported for at
most 64-bit integers. Fix by using the same trick as
btf_dump_float_data(): copy non-aligned values to the local buffer.
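A sketch of the trick, assuming the surrounding btf_dump code provides
'data' and 'sz':
union {
	__int128 as_int128;
} buf;	/* suitably aligned scratch space, large enough for any integer */

if (((uintptr_t)data) % __alignof__(__int128) != 0) {
	memcpy(&buf, data, sz);	/* sz <= 16 for BTF integer types */
	data = &buf;
}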
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211013160902.428340-4-iii@linux.ibm.com
|
|
On big-endian arches not only bytes, but also bits are numbered in
reverse order (see e.g. the S/390 ELF ABI Supplement; this is also true
for other big-endian arches).
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211013160902.428340-3-iii@linux.ibm.com
|
|
cpu_number exists only on Intel and aarch64, so skip the test involving
it on other arches. An alternative would be to replace it with an
exported non-ifdefed primitive-typed percpu variable from the common
code, but there appears to be none.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211013160902.428340-2-iii@linux.ibm.com
|
|
The header is no longer needed since the event_pipe implementation
was updated to rely on libbpf's perf_buffer. This makes bpftool free of
dependencies to perf files, and we can update the Makefile accordingly.
Fixes: 9b190f185d2f ("tools/bpftool: switch map event_pipe to libbpf's perf_buffer")
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211020094826.16046-1-quentin@isovalent.com
|
|
Fix following checkincludes.pl warning:
./scripts/checkincludes.pl tools/testing/selftests/bpf/cgroup_helpers.c
tools/testing/selftests/bpf/cgroup_helpers.c: unistd.h is included more
than once.
Signed-off-by: Wan Jiabing <wanjiabing@vivo.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211012023231.19911-1-wanjiabing@vivo.com
|
|
In preparation for bpf_program__get_prog_info_linear deprecation, move
the single use in libbpf to call bpf_obj_get_info_by_fd directly.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211011082031.4148337-2-davemarchevsky@fb.com
|
|
Pull kvm fixes from Paolo Bonzini:
"Tools:
- kvm_stat: do not show halt_wait_ns since it is not a cumulative statistic
x86:
- clean ups and fixes for bus lock vmexit and lazy allocation of rmaps
- two fixes for SEV-ES (one more coming as soon as I get reviews)
- fix for static_key underflow
ARM:
- Properly refcount pages used as a concatenated stage-2 PGD
- Fix missing unlock when detecting the use of MTE+VM_SHARED"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: SEV-ES: reduce ghcb_sa_len to 32 bits
KVM: VMX: Remove redundant handling of bus lock vmexit
KVM: kvm_stat: do not show halt_wait_ns
KVM: x86: WARN if APIC HW/SW disable static keys are non-zero on unload
Revert "KVM: x86: Open code necessary bits of kvm_lapic_set_base() at vCPU RESET"
KVM: SEV-ES: Set guest_state_protected after VMSA update
KVM: X86: fix lazy allocation of rmaps
KVM: SEV-ES: fix length of string I/O
KVM: arm64: Release mmap_lock when using VM_SHARED with MTE
KVM: arm64: Report corrupted refcount at EL2
KVM: arm64: Fix host stage-2 PGD refcount
KVM: s390: Function documentation fixes
|
|
One of bpftool's object files depends on zlib. To make sure we do not
attempt to build that object when the library is not available, commit
d66fa3c70e59 ("tools: bpftool: add feature check for zlib") introduced a
feature check to detect whether zlib is present.
This check comes as a rule for which the target ("zdep") is a
nonexistent file (phony target), which means that the Makefile always
attempts to rebuild it. It is mostly harmless. However, one side effect
is that, on running again once bpftool is already built, make considers
that "something" (the recipe for zdep) was executed, and does not print
the usual message "make: Nothing to be done for 'all'", which is a
user-friendly indicator that the build went fine.
Before, with some level of debugging information:
$ make --debug=m
[...]
Reading makefiles...
Auto-detecting system features:
... libbfd: [ on ]
... disassembler-four-args: [ on ]
... zlib: [ on ]
... libcap: [ on ]
... clang-bpf-co-re: [ on ]
Updating makefiles....
Updating goal targets....
File 'all' does not exist.
File 'zdep' does not exist.
Must remake target 'zdep'.
File 'all' does not exist.
Must remake target 'all'.
Successfully remade target file 'all'.
After the patch:
$ make --debug=m
[...]
Auto-detecting system features:
... libbfd: [ on ]
... disassembler-four-args: [ on ]
... zlib: [ on ]
... libcap: [ on ]
... clang-bpf-co-re: [ on ]
Updating makefiles....
Updating goal targets....
File 'all' does not exist.
Must remake target 'all'.
Successfully remade target file 'all'.
make: Nothing to be done for 'all'.
(Note the last line, which is not part of make's debug information.)
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211009210341.6291-4-quentin@isovalent.com
|
|
In bpftool's Makefile, libbpf has a FORCE dependency, to make sure we
rebuild it in case its source files changed. Let's instead make the
rebuild depend on the source files directly, through a call to the
"$(wildcard ...)" function. This avoids descending into libbpf's
directory if there is nothing to update.
Do the same for the bootstrap libbpf version.
This results in a slightly faster operation and less verbose output when
running make a second time in bpftool's directory.
Before:
Auto-detecting system features:
... libbfd: [ on ]
... disassembler-four-args: [ on ]
... zlib: [ on ]
... libcap: [ on ]
... clang-bpf-co-re: [ on ]
make[1]: Entering directory '/root/dev/linux/tools/lib/bpf'
make[1]: Entering directory '/root/dev/linux/tools/lib/bpf'
make[1]: Nothing to be done for 'install_headers'.
make[1]: Leaving directory '/root/dev/linux/tools/lib/bpf'
make[1]: Leaving directory '/root/dev/linux/tools/lib/bpf'
After:
Auto-detecting system features:
... libbfd: [ on ]
... disassembler-four-args: [ on ]
... zlib: [ on ]
... libcap: [ on ]
... clang-bpf-co-re: [ on ]
Other ways to clean up the output could be to pass the "-s" option, or
to redirect the output to >/dev/null, when calling make recursively to
descend into libbpf's directory. However, this would suppress some
useful output if something goes wrong during the build. A better
alternative would be to pass "--no-print-directory" to the recursive
make, but that would still leave us with some noise for
"install_headers". Skipping the descent into libbpf's directory if no
source file has changed works best, and seems the most logical option
overall.
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211009210341.6291-3-quentin@isovalent.com
|
|
We recently updated bpftool's Makefile to make it install the headers
from libbpf, instead of pulling them directly from libbpf's directory.
There is also an additional header, internal to libbpf, that needs be
installed. The way that bpftool's Makefile installs that particular
header is currently correct, but would break if we were to modify
$(LIBBPF_INTERNAL_HDRS) to make it point to more than one header.
Use a static pattern rule instead, so that the Makefile can withstand
the addition of other headers to install.
The objective is simply to make the Makefile more robust. It should
_not_ be read as an invitation to import more internal headers from
libbpf into bpftool.
Fixes: f012ade10b34 ("bpftool: Install libbpf headers instead of including the dir")
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211009210341.6291-2-quentin@isovalent.com
|
|
Although relying on some definitions from the netlink.h and if_link.h
headers copied into tools/include/uapi/linux/, libbpf does not need
those headers to stay entirely up-to-date with their original versions,
and the warnings emitted by the Makefile when it detects a difference
are usually just noise. Let's remove those warnings.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211010002528.9772-1-quentin@isovalent.com
|
|
Update to kunit_parser to improve compatibility with KTAP
specification including arbitrarily nested tests. Patch accomplishes
three major changes:
- Use a general Test object to represent all tests rather than TestCase
and TestSuite objects. This allows for easier implementation of arbitrary
levels of nested tests and promotes the idea that both test suites and test
cases are tests.
- Print errors incrementally rather than all at once after the
parsing finishes, to maximize the information given to the user in the
case where the parser is given invalid input, and to increase the helpfulness
of the timestamps given during printing. Note that kunit.py parse does
not print incrementally yet. However, this fix brings us closer to
that feature.
- Increase compatibility with different formats of input. Arbitrary levels
of nested tests are supported. Also, test cases and test suites can now
be present at the same level of testing.
This patch now implements the draft KTAP specification here:
https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqaJk+r-K1YJzPggFDQ@mail.gmail.com/
We'll update the parser as the spec evolves.
This patch adjusts the kunit_tool_test.py file to check for
the correct outputs from the new parser and adds a new test to check
the parsing for a KTAP result log with correct format for multiple nested
subtests (test_is_test_passed-all_passed_nested.log).
This patch also alters the kunit_json.py file to allow for arbitrarily
nested tests.
Signed-off-by: Rae Moar <rmoar@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|
|
Currently, `run_kernel()` dumps all the kernel output to a file
(.kunit/test.log) and then opens the file and yields it to callers.
This made it easier to respect the requested timeout, if any.
But it means that we can't yield the results in real time, either to the
parser or to stdout (if --raw_output is set).
This change spins up a background thread to enforce the timeout, which
allows us to yield the kernel output in real time, while also copying it
to the .kunit/test.log file.
It's also careful to ensure that the .kunit/test.log file is complete,
even if the kunit_parser throws an exception or otherwise doesn't consume
every line; see the new `finally` block and unit test.
For example:
$ ./tools/testing/kunit/kunit.py run --arch=x86_64 --raw_output
<configure + build steps>
...
<can now see output from QEMU in real time>
This does not currently have a visible effect when --raw_output is not
passed, as kunit_parser.py currently only outputs everything at the end.
But that could change, and this patch is a necessary step towards
showing parsed test results in real time.
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|
|
The new --run_isolated flag makes the tool boot the kernel once per
suite or test, preventing leftover state from one suite from impacting
another. This can be useful as a starting point for debugging test
hermeticity issues.
Note: it takes a lot longer, so people should not use it normally.
Consider the following very simplified example:
bool disable_something_for_test = false;
void function_being_tested() {
...
if (disable_something_for_test) return;
...
}
static void test_before(struct kunit *test)
{
disable_something_for_test = true;
function_being_tested();
/* oops, we forgot to reset it back to false */
}
static void test_after(struct kunit *test)
{
/* oops, now "fixing" test_before can cause test_after to fail! */
function_being_tested();
}
Presented like this, the issues are obvious, but it gets a lot more
complicated to track down as the amount of test setup and helper
functions increases.
Another use case is memory corruption. It might not be surfaced as a
failure/crash in the test case or suite that caused it. I've noticed in
kunit's own unit tests, the 3rd suite after might be the one to finally
crash after an out-of-bounds write, for example.
Example usage:
Per suite:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite
...
Starting KUnit Kernel (1/7)...
============================================================
======== [PASSED] kunit_executor_test ========
....
Testing complete. 5 tests run. 0 failed. 0 crashed. 0 skipped.
Starting KUnit Kernel (2/7)...
============================================================
======== [PASSED] kunit-try-catch-test ========
...
Per test:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=test
Starting KUnit Kernel (1/23)...
============================================================
======== [PASSED] kunit_executor_test ========
[PASSED] parse_filter_test
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed. 0 skipped.
Starting KUnit Kernel (2/23)...
============================================================
======== [PASSED] kunit_executor_test ========
[PASSED] filter_subsuite_test
...
It works with filters as well:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite example
...
Starting KUnit Kernel (1/1)...
============================================================
======== [PASSED] example ========
...
It also handles test filters, '*.*skip*' runs these 3 tests:
kunit_status.kunit_status_mark_skipped_test
example.example_skip_test
example.example_mark_skipped_test
Fixed up merge conflict between:
d8c23ead708b ("kunit: tool: better handling of quasi-bool args (--json, --raw_output)") and
6710951ee039 ("kunit: tool: support running each suite/test separately")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|
|
This is a long standing bug in kunit tool.
Since these files were added, run_kernel() has always yielded lines.
That means, the call to run_kernel() returns before the kernel finishes
executing tests, potentially before a single line of output is even
produced.
So code like this
time_start = time.time()
result = linux.run_kernel(...)
time_end = time.time()
would only measure the time taken for python to give back the generator
object.
From a caller's perspective, the only way to know the kernel has exited
is for us to consume all the output from the `result` generator object.
Alternatively, we could change run_kernel() to try and do its own book
keeping and return the total time, but that doesn't seem worth it.
This change makes us record `time_end` after we're done parsing all the
output (which should mean we've consumed all of it, or errored out).
That means we're including the parsing time as well, but that should
be quite small, and it's better than claiming it took 0s to run tests.
Let's use this as an example:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit example
Before:
Elapsed time: 7.684s total, 0.001s configuring, 4.692s building, 0.000s running
After:
Elapsed time: 6.283s total, 0.001s configuring, 3.202s building, 3.079s running
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|
|
Currently this code is copy-pasted between the normal "run" subcommand
and the "exec" subcommand.
Given we don't have any interest in just executing the tests without
giving the user any indication of what happened (i.e. parsing the output),
make a function that does both these things and can be reused.
This will be useful when we allow more complicated ways of running
tests, e.g. invoking the kernel multiple times instead of just once,
etc.
We remove input_data from the ParseRequest so the callers don't have to
pass in a dummy value for this field. Named tuples are also immutable,
so if they did pass in a dummy, exec_tests() would need to make a copy
to call parse_tests().
Removing it also makes KunitParseRequest match the other *Request types,
as they only contain user arguments/flags, not data.
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Acked-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|
|
Consider this attempt to run KUnit in QEMU:
$ ./tools/testing/kunit/kunit.py run --arch=x86
Before you'd get this error message:
kunit_kernel.ConfigError: x86 is not a valid arch
After:
kunit_kernel.ConfigError: x86 is not a valid arch, options are ['alpha', 'arm', 'arm64', 'i386', 'powerpc', 'riscv', 's390', 'sparc', 'x86_64']
This should make it a bit easier for people to notice when they make
typos, etc. Currently, one would have to dive into the python code to
figure out what the valid set is.
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|