|
The selftests build four kernel modules which use copy-pasted Makefile
targets. This is a bit messy, and doesn't scale so well when we add more
modules, so let's consolidate these rules into a single rule generated
for each module name, and move the module sources into a single
directory.
To avoid parallel builds of the different modules stepping on each
other's toes during the 'modpost' phase of the Kbuild 'make modules',
the module files should really be a grouped target. However, make only
added explicit support for grouped targets in version 4.3, which is
newer than the minimum version supported by the kernel. Fortunately, make
implicitly treats pattern matching rules with multiple targets as a
grouped target, so we can work around this by turning the rule into a
pattern matching target. We do this by replacing '.ko' with '%ko' in the
targets with subst().
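As a rough illustration of the trick (a made-up standalone Makefile, not the
selftests' actual rules; module names and variables are hypothetical, and the
kbuild side with obj-m and sources is omitted):

KDIR ?= /lib/modules/$(shell uname -r)/build
MODULES := foo.ko bar.ko

all: $(MODULES)

# A plain 'foo.ko bar.ko:' rule would run its recipe once per target in a
# parallel build, letting two 'modpost' phases race. Rewriting the targets
# as a pattern ('%ko' instead of '.ko') makes make treat the rule as an
# implicitly grouped target, so the recipe runs exactly once for both
# modules.
$(subst .ko,%ko,$(MODULES)):
	$(MAKE) -C $(KDIR) M=$(CURDIR) modules
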
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Viktor Malik <vmalik@redhat.com>
Link: https://lore.kernel.org/bpf/20241204-bpf-selftests-mod-compile-v5-1-b96231134a49@redhat.com
|
|
Test stashing both a referenced kptr and a local kptr into local kptrs, then
test unstashing them.
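A minimal sketch of the local-kptr half of that (names invented, not the
actual selftest; assumes the selftests' bpf_experimental.h for bpf_obj_new()
and bpf_obj_drop()):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

struct inner {
	long data;
};

struct outer {
	struct inner __kptr *stashed;	/* kptr field inside a local kptr */
};

SEC("tc")
int stash_into_local_kptr(void *ctx)
{
	struct outer *out = bpf_obj_new(typeof(*out));
	struct inner *in;

	if (!out)
		return 0;
	in = bpf_obj_new(typeof(*in));
	if (!in) {
		bpf_obj_drop(out);
		return 0;
	}
	/* Stash the inner local kptr into a field of another local kptr;
	 * whatever was stashed there before comes back and must be released. */
	in = bpf_kptr_xchg(&out->stashed, in);
	if (in)
		bpf_obj_drop(in);
	/* Unstashing would be bpf_kptr_xchg(&out->stashed, NULL); here we
	 * simply drop the outer object, which also frees what it stashes. */
	bpf_obj_drop(out);
	return 0;
}

char _license[] SEC("license") = "GPL";
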
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Amery Hung <amery.hung@bytedance.com>
Link: https://lore.kernel.org/r/20240813212424.2871455-6-amery.hung@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
There was some confusion amongst Meta sched_ext folks regarding whether
stashing bpf_rb_root - the tree itself, rather than a single node - was
supported. This patch adds a small test which demonstrates this
functionality: a local kptr with rb_root is created, a node is created
and added to the tree, then the tree is kptr_xchg'd into a mapval.
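Roughly, the added test has this shape (a hedged sketch with invented names;
assumes the selftests' bpf_experimental.h for bpf_obj_new(), bpf_rbtree_add()
and bpf_obj_drop()):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

struct node_data {
	long key;
	struct bpf_rb_node node;
};

/* Local kptr carrying a whole tree plus the lock protecting it. */
struct local_with_root {
	struct bpf_spin_lock lock;
	struct bpf_rb_root root __contains(node_data, node);
};

struct map_value {
	struct local_with_root __kptr *tree;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__type(key, int);
	__type(value, struct map_value);
	__uint(max_entries, 1);
} stash SEC(".maps");

static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
{
	return true;	/* insertion order does not matter for this sketch */
}

SEC("tc")
int stash_rb_root(void *ctx)
{
	struct local_with_root *t = bpf_obj_new(typeof(*t));
	struct map_value *v;
	struct node_data *n;
	int idx = 0;

	if (!t)
		return 0;
	n = bpf_obj_new(typeof(*n));
	if (!n) {
		bpf_obj_drop(t);
		return 0;
	}
	bpf_spin_lock(&t->lock);
	bpf_rbtree_add(&t->root, &n->node, less);
	bpf_spin_unlock(&t->lock);

	v = bpf_map_lookup_elem(&stash, &idx);
	if (!v) {
		bpf_obj_drop(t);
		return 0;
	}
	/* Stash the tree itself, not just a node. */
	t = bpf_kptr_xchg(&v->tree, t);
	if (t)
		bpf_obj_drop(t);
	return 0;
}

char _license[] SEC("license") = "GPL";
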
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20231204211722.571346-1-davemarchevsky@fb.com
|
|
This patch demonstrates that verifier changes earlier in this series
result in bpf_refcount_acquire(mapval->stashed_kptr) passing
verification. The added test additionally validates that stashing a kptr
in mapval and - in a separate BPF program - refcount_acquiring the kptr
without unstashing works as expected at runtime.
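In outline (a hedged sketch, identifiers invented; assumes the selftests'
bpf_experimental.h for bpf_obj_new(), bpf_refcount_acquire() and
bpf_obj_drop()):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

struct refcounted_node {
	long data;
	struct bpf_rb_node node;
	struct bpf_refcount ref;
};

struct map_value {
	struct refcounted_node __kptr *stashed;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__type(key, int);
	__type(value, struct map_value);
	__uint(max_entries, 1);
} stash SEC(".maps");

SEC("tc")
int stash_node(void *ctx)
{
	struct refcounted_node *n = bpf_obj_new(typeof(*n));
	struct map_value *v;
	int idx = 0;

	if (!n)
		return 0;
	v = bpf_map_lookup_elem(&stash, &idx);
	if (!v) {
		bpf_obj_drop(n);
		return 0;
	}
	n = bpf_kptr_xchg(&v->stashed, n);
	if (n)
		bpf_obj_drop(n);
	return 0;
}

SEC("tc")
int acquire_without_unstash(void *ctx)
{
	struct refcounted_node *stashed, *n;
	struct map_value *v;
	int idx = 0;

	v = bpf_map_lookup_elem(&stash, &idx);
	if (!v)
		return 0;
	stashed = v->stashed;
	if (!stashed)
		return 0;
	/* Take a new reference directly on the stashed pointer, without
	 * bpf_kptr_xchg()ing it out of the map first. */
	n = bpf_refcount_acquire(stashed);
	if (n)
		bpf_obj_drop(n);
	return 0;
}

char _license[] SEC("license") = "GPL";
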
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/r/20231107085639.3016113-7-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Add a local kptr test with no special fields in the struct. Without the
previous patch, the following warning is triggered:
[ 44.683877] WARNING: CPU: 3 PID: 485 at kernel/bpf/syscall.c:660 bpf_obj_free_fields+0x220/0x240
[ 44.684640] Modules linked in: bpf_testmod(OE)
[ 44.685044] CPU: 3 PID: 485 Comm: kworker/u8:5 Tainted: G OE 6.5.0-rc5-01703-g260d855e9b90 #248
[ 44.685827] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 44.686693] Workqueue: events_unbound bpf_map_free_deferred
[ 44.687297] RIP: 0010:bpf_obj_free_fields+0x220/0x240
[ 44.687775] Code: e8 55 17 1f 00 49 8b 74 24 08 4c 89 ef e8 e8 14 05 00 e8 a3 da e2 ff e9 55 fe ff ff 0f 0b e9 4e fe ff
ff 0f 0b e9 47 fe ff ff <0f> 0b e8 d9 d9 e2 ff 31 f6 eb d5 48 83 c4 10 5b 41 5c e
[ 44.689353] RSP: 0018:ffff888106467cb8 EFLAGS: 00010246
[ 44.689806] RAX: 0000000000000000 RBX: ffff888112b3a200 RCX: 0000000000000001
[ 44.690433] RDX: 0000000000000000 RSI: dffffc0000000000 RDI: ffff8881128ad988
[ 44.691094] RBP: 0000000000000002 R08: ffffffff81370bd0 R09: 1ffff110216231a5
[ 44.691643] R10: dffffc0000000000 R11: ffffed10216231a6 R12: ffff88810d68a488
[ 44.692245] R13: ffff88810767c288 R14: ffff88810d68a400 R15: ffff88810d68a418
[ 44.692829] FS: 0000000000000000(0000) GS:ffff8881f7580000(0000) knlGS:0000000000000000
[ 44.693484] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 44.693964] CR2: 000055c7f2afce28 CR3: 000000010fee4002 CR4: 0000000000370ee0
[ 44.694513] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 44.695102] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 44.695747] Call Trace:
[ 44.696001] <TASK>
[ 44.696183] ? __warn+0xfe/0x270
[ 44.696447] ? bpf_obj_free_fields+0x220/0x240
[ 44.696817] ? report_bug+0x220/0x2d0
[ 44.697180] ? handle_bug+0x3d/0x70
[ 44.697507] ? exc_invalid_op+0x1a/0x50
[ 44.697887] ? asm_exc_invalid_op+0x1a/0x20
[ 44.698282] ? btf_find_struct_meta+0xd0/0xd0
[ 44.698634] ? bpf_obj_free_fields+0x220/0x240
[ 44.699027] ? bpf_obj_free_fields+0x1e2/0x240
[ 44.699414] array_map_free+0x1a3/0x260
[ 44.699763] bpf_map_free_deferred+0x7b/0xe0
[ 44.700154] process_one_work+0x46d/0x750
[ 44.700523] worker_thread+0x49e/0x900
[ 44.700892] ? pr_cont_work+0x270/0x270
[ 44.701224] kthread+0x1ae/0x1d0
[ 44.701516] ? kthread_blkcg+0x50/0x50
[ 44.701860] ret_from_fork+0x34/0x50
[ 44.702178] ? kthread_blkcg+0x50/0x50
[ 44.702508] ret_from_fork_asm+0x11/0x20
[ 44.702880] </TASK>
With the previous patch, there are no such warnings.
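The added test is roughly of this shape (a hedged sketch with invented names,
not the actual selftest; assumes the selftests' bpf_experimental.h for
bpf_obj_new() and bpf_obj_drop()):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

/* No bpf_rb_node, bpf_list_node, bpf_refcount or other special fields. */
struct plain {
	long key;
	long data;
};

struct map_value {
	struct plain __kptr *stashed;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__type(key, int);
	__type(value, struct map_value);
	__uint(max_entries, 1);
} stash SEC(".maps");

SEC("tc")
int stash_plain(void *ctx)
{
	struct plain *p = bpf_obj_new(typeof(*p));
	struct map_value *v;
	int idx = 0;

	if (!p)
		return 0;
	v = bpf_map_lookup_elem(&stash, &idx);
	if (!v) {
		bpf_obj_drop(p);
		return 0;
	}
	/* Leave the object stashed; freeing the map must release it without
	 * hitting the bpf_obj_free_fields() warning above. */
	p = bpf_kptr_xchg(&v->stashed, p);
	if (p)
		bpf_obj_drop(p);
	return 0;
}

char _license[] SEC("license") = "GPL";
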
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20230824063422.203097-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Move all kfunc exports into a separate bpf_testmod_kfunc.h header file
and include it in the tests that need it.
We will move all test kfuncs into bpf_testmod in a following change,
so it's convenient to have the declarations in a single place.
The bpf_testmod_kfunc.h header is included by both bpf_testmod and the
BPF programs that use test kfuncs.
As suggested by David, bpf_testmod_kfunc.h includes vmlinux.h
and bpf/bpf_helpers.h for the BPF program build, so the declarations
carry the proper __ksym attribute and all the structs can be resolved.
Note that in kfunc_call_test_subprog.c we can no longer use the sk_state
define from bpf_tcp_helpers.h (because it clashes with vmlinux.h)
and need to access the __sk_common.skc_state field directly.
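For illustration, such a header roughly takes the following shape (a sketch
only; the guard name and the two declarations are examples, not the verbatim
file):

/* Shared by the bpf_testmod module build and the BPF programs that call
 * the test kfuncs. */
#ifndef _BPF_TESTMOD_KFUNC_H
#define _BPF_TESTMOD_KFUNC_H

#ifndef __KERNEL__
/* BPF program build: vmlinux.h supplies the struct definitions and
 * bpf/bpf_helpers.h supplies __ksym, so the declarations resolve against
 * kernel BTF. */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#else
/* Kernel/module build: __ksym has no meaning here. */
#define __ksym
struct prog_test_ref_kfunc;
#endif

struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr) __ksym;
void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;

#endif /* _BPF_TESTMOD_KFUNC_H */
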
Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-3-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Add a new selftest, local_kptr_stash, which uses bpf_kptr_xchg to stash
a bpf_obj_new-allocated object in a map. Test the following scenarios:
* Stash two rb_nodes in an arraymap, don't unstash them, rely on map
free to destruct them
* Stash two rb_nodes in an arraymap, unstash the second one in a
separate program, rely on map free to destruct first
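The stash/unstash pattern looks roughly like this (a hedged sketch with
invented names; assumes the selftests' bpf_experimental.h for bpf_obj_new()
and bpf_obj_drop()):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

struct node_data {
	long key;
	struct bpf_rb_node node;
};

struct map_value {
	struct node_data __kptr *node;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__type(key, int);
	__type(value, struct map_value);
	__uint(max_entries, 2);
} some_nodes SEC(".maps");

static int stash_at(int idx)
{
	struct node_data *n = bpf_obj_new(typeof(*n));
	struct map_value *v;

	if (!n)
		return 0;
	v = bpf_map_lookup_elem(&some_nodes, &idx);
	if (!v) {
		bpf_obj_drop(n);
		return 0;
	}
	n = bpf_kptr_xchg(&v->node, n);
	if (n)
		bpf_obj_drop(n);
	return 0;
}

SEC("tc")
int stash_rb_nodes(void *ctx)
{
	/* Stash two nodes; whatever is still stashed when the map is freed
	 * must be destructed by the map free path. */
	stash_at(0);
	return stash_at(1);
}

SEC("tc")
int unstash_rb_node(void *ctx)
{
	struct map_value *v;
	struct node_data *n;
	int idx = 1;

	v = bpf_map_lookup_elem(&some_nodes, &idx);
	if (!v)
		return 0;
	/* Take ownership back from the map and free the node ourselves. */
	n = bpf_kptr_xchg(&v->node, NULL);
	if (n)
		bpf_obj_drop(n);
	return 0;
}

char _license[] SEC("license") = "GPL";
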
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/r/20230310230743.2320707-4-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|