author		Andrii Nakryiko <andrii@kernel.org>	2024-03-11 15:37:26 -0700
committer	Andrii Nakryiko <andrii@kernel.org>	2024-03-11 15:43:43 -0700
commit		08701e306e480c56b68c1fa35f2c5b27204083e2 (patch)
tree		0adc349c9b30acf84420c5e52584264a0b048f4f /kernel/bpf/core.c
parent		365c2b32792e692bad6e3761ad19ac3f8f52c0fe (diff)
parent		8df839ae23b8c581bdac4b6970d029d65a415852 (diff)
Merge branch 'bpf-introduce-bpf-arena'
Alexei Starovoitov says:
====================
bpf: Introduce BPF arena.
From: Alexei Starovoitov <ast@kernel.org>
v2->v3:
- contains bpf bits only, but cc-ing past audience for continuity
- since the prerequisite patches landed, this series focuses on the main
  functionality of bpf_arena.
- adopted Andrii's approach to support arena in libbpf.
- simplified LLVM support. Instead of two instructions it's now only one.
- switched to cond_break (instead of open coded iters) in selftests
- implemented several follow-ups that will be sent after this set
  . remember the first IP and bpf insn that faulted in the arena and
    report them to user space via bpftool
  . copy, paste, and tweak glob_match() aka mini-regex as a selftest in selftests/bpf
- see patch 1 for detailed description of bpf_arena
v1->v2:
- Improved commit log with reasons for using vmap_pages_range() in arena.
Thanks to Johannes
- Added support for __arena global variables in bpf programs
- Fixed race conditions spotted by Barret
- Fixed wrap32 issue spotted by Barret
- Fixed bpf_map_mmap_sz() the way Andrii suggested
The work on bpf_arena was inspired by Barret's work:
https://github.com/google/ghost-userspace/blob/main/lib/queue.bpf.h
which implements queues, lists, and AVL trees entirely as bpf programs,
using a giant bpf array map and integer indices instead of pointers.
bpf_arena is a sparse array that allows normal C pointers to be used to
build such data structures. The last few patches implement a page_frag
allocator, a linked list, and a hash table as bpf programs.
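The index-based approach that bpf_arena replaces can be sketched in plain
user-space C (a hypothetical illustration, not the ghost-userspace code): a
linked list built over a flat array, with integer indices standing in for
pointers because the bpf program and user space could not share real pointers.

```c
/* Hypothetical sketch of the pre-arena approach: a linked list over a
 * flat array pool, chasing integer indices instead of pointers.
 */
#include <assert.h>

#define POOL_SIZE 16
#define NIL (-1)

struct node {
	int value;
	int next;	/* index into the pool; NIL terminates the list */
};

static struct node pool[POOL_SIZE];

/* Push a value onto the front of the list; returns the new head index. */
static int push(int head, int slot, int value)
{
	pool[slot].value = value;
	pool[slot].next = head;
	return slot;
}

/* Sum the list by following indices, the way queue.bpf.h follows them. */
static int list_sum(int head)
{
	int sum = 0;

	for (int i = head; i != NIL; i = pool[i].next)
		sum += pool[i].value;
	return sum;
}
```

With bpf_arena the `next` field can instead be a real `struct node *`, since
the arena maps the same pages at stable addresses for both the bpf program
and user space.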
v1:
bpf programs have multiple options to communicate with user space:
- Various ring buffers (perf, ftrace, bpf): The data is streamed
unidirectionally from bpf to user space.
- Hash map: The bpf program populates elements, and user space consumes
them via bpf syscall.
- mmap()-ed array map: Libbpf creates an array map that is directly
accessed by the bpf program and mmap()-ed into user space. It's the
fastest way, but its disadvantage is that memory for the whole array is
reserved at the start.
====================
Link: https://lore.kernel.org/r/20240308010812.89848-1-alexei.starovoitov@gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Diffstat (limited to 'kernel/bpf/core.c')
-rw-r--r--	kernel/bpf/core.c	16
1 file changed, 16 insertions(+), 0 deletions(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 134b7979f537..bdbdc75cdcd5 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2932,6 +2932,11 @@ bool __weak bpf_jit_supports_far_kfunc_call(void)
 	return false;
 }
 
+bool __weak bpf_jit_supports_arena(void)
+{
+	return false;
+}
+
 /* Return TRUE if the JIT backend satisfies the following two conditions:
  * 1) JIT backend supports atomic_xchg() on pointer-sized words.
  * 2) Under the specific arch, the implementation of xchg() is the same
@@ -2976,6 +2981,17 @@ void __weak arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp,
 {
 }
 
+/* for configs without MMU or 32-bit */
+__weak const struct bpf_map_ops arena_map_ops;
+__weak u64 bpf_arena_get_user_vm_start(struct bpf_arena *arena)
+{
+	return 0;
+}
+__weak u64 bpf_arena_get_kern_vm_start(struct bpf_arena *arena)
+{
+	return 0;
+}
+
 #ifdef CONFIG_BPF_SYSCALL
 static int __init bpf_global_ma_init(void)
 {
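The stubs in this patch use __weak so that arches and configs without arena
support fall back to safe defaults while an arch JIT can supply a strong
definition. A minimal user-space illustration of weak-symbol linkage (the
function name here is hypothetical, and this uses a weak declaration that
resolves to NULL when nothing defines it, rather than the kernel's
weak-definition pattern):

```c
/* Weak declaration: if no other object file provides a strong definition
 * (as an arch JIT would in the kernel case), the symbol resolves to NULL
 * instead of causing a link error.
 */
#include <assert.h>
#include <stdbool.h>

__attribute__((weak)) bool jit_supports_arena(void);

/* Fall back to "not supported" when no strong definition was linked in. */
static bool arena_supported(void)
{
	return jit_supports_arena ? jit_supports_arena() : false;
}
```

Linking in a translation unit that defines jit_supports_arena() would make
arena_supported() call it, mirroring how an arch JIT overrides the __weak
defaults added above in kernel/bpf/core.c.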