path: root/tools/lib/bpf/libbpf.c
2025-05-14  libbpf: Check bpf_map_skeleton link for NULL  (Mykyta Yatsenko)

Avoid dereferencing bpf_map_skeleton's link field if it is NULL. If a BPF map
skeleton is created with a size that indicates it contains the link field, but
the field was never initialized with a valid bpf_link pointer, libbpf crashes.
This may happen when using a libbpf-rs skeleton. Skeleton loading may still
progress, but the user needs to attach the struct_ops map separately.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250514113220.219095-1-mykyta.yatsenko5@gmail.com

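A minimal sketch of the kind of guard this implies when walking skeleton maps; the surrounding loop and field usage are approximations, not the verbatim upstream diff:

```
/* Approximate sketch: a skeleton may advertise (via its recorded struct
 * size) that it has a link slot, yet never fill it in. Skip such maps. */
struct bpf_link **link = map_skel->link;

if (!link || !*link)
        continue;       /* nothing attached; user attaches struct_ops manually */

bpf_link__destroy(*link);
*link = NULL;
```
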
2025-04-25  Use thread-safe function pointer in libbpf_print  (Jonathan Wiepert)

This patch fixes a thread-safety bug where libbpf_print uses the global
variable storing the print function pointer, rather than the local variable
into which the print function had been read via __atomic_load_n.

Fixes: f1cb927cdb62 ("libbpf: Ensure print callback usage is thread-safe")
Signed-off-by: Jonathan Wiepert <jonathan.wiepert@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Mykyta Yatsenko <mykyta.yatsenko5@gmail.com>
Link: https://lore.kernel.org/bpf/20250424221457.793068-1-jonathan.wiepert@gmail.com

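The pattern at issue, sketched approximately (errno save/restore and other details of the real function are omitted):

```
/* Approximate sketch of the corrected pattern: read the callback once
 * atomically, then call ONLY through the local copy. The bug was calling
 * through the global pointer after the atomic load. */
void libbpf_print(enum libbpf_print_level level, const char *format, ...)
{
        va_list args;
        libbpf_print_fn_t print_fn;

        print_fn = __atomic_load_n(&__libbpf_pr, __ATOMIC_RELAXED);
        if (!print_fn)
                return;

        va_start(args, format);
        print_fn(level, format, args);  /* not __libbpf_pr(...) */
        va_end(args);
}
```
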
2025-04-25  libbpf: Remove sample_period init in perf_buffer  (Tao Chen)

sample_period is not used by the perf buffer; only wakeup_events is
meaningful for enabling event aggregation for wakeup notification. Remove
the sample_period setting code to avoid confusion.

Fixes: fb84b8224655 ("libbpf: add perf buffer API")
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/bpf/20250423163901.2983689-1-chen.dylane@linux.dev

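For context, the perf event attribute each ring is created with looks roughly like this after the change (a sketch based on the commit, not the verbatim source):

```
/* Approximate post-fix setup in perf_buffer__new(): wakeup_events alone
 * controls notification; sample_period had no effect here. */
struct perf_event_attr attr = {
        .config        = PERF_COUNT_SW_BPF_OUTPUT,
        .type          = PERF_TYPE_SOFTWARE,
        .sample_type   = PERF_SAMPLE_RAW,
        .wakeup_events = 1,     /* wake up on every event */
};
```
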
2025-04-22  libbpf: Fix event name too long error  (Feng Yang)

When the binary path is excessively long, the probe_name generated by libbpf
exceeds the kernel's MAX_EVENT_NAME_LEN limit (64 bytes). This causes legacy
uprobe event attachment to fail with error code -22.

The fix reorders the fields to place the unique ID before the name. This
ensures that even if truncation occurs via snprintf, the unique ID remains
intact, preserving event name uniqueness. Additionally, explicit checks
against MAX_EVENT_NAME_LEN are added to enforce the length constraint.

Before the fix:

  ./test_progs -t attach_probe/kprobe-long_name
  ......
  libbpf: failed to add legacy kprobe event for 'bpf_testmod_looooooooooooooooooooooooooooooong_name+0x0': -EINVAL
  libbpf: prog 'handle_kprobe': failed to create kprobe 'bpf_testmod_looooooooooooooooooooooooooooooong_name+0x0' perf event: -EINVAL
  test_attach_kprobe_long_event_name:FAIL:attach_kprobe_long_event_name unexpected error: -22
  test_attach_probe:PASS:uprobe_ref_ctr_cleanup 0 nsec
  #13/11   attach_probe/kprobe-long_name:FAIL
  #13      attach_probe:FAIL

  ./test_progs -t attach_probe/uprobe-long_name
  ......
  libbpf: failed to add legacy uprobe event for /root/linux-bpf/bpf-next/tools/testing/selftests/bpf/test_progs:0x13efd9: -EINVAL
  libbpf: prog 'handle_uprobe': failed to create uprobe '/root/linux-bpf/bpf-next/tools/testing/selftests/bpf/test_progs:0x13efd9' perf event: -EINVAL
  test_attach_uprobe_long_event_name:FAIL:attach_uprobe_long_event_name unexpected error: -22
  #13/10   attach_probe/uprobe-long_name:FAIL
  #13      attach_probe:FAIL

After the fix:

  ./test_progs -t attach_probe/uprobe-long_name
  #13/10   attach_probe/uprobe-long_name:OK
  #13      attach_probe:OK
  Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED

  ./test_progs -t attach_probe/kprobe-long_name
  #13/11   attach_probe/kprobe-long_name:OK
  #13      attach_probe:OK
  Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED

Fixes: 46ed5fc33db9 ("libbpf: Refactor and simplify legacy kprobe code")
Fixes: cc10623c6810 ("libbpf: Add legacy uprobe attaching support")
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Feng Yang <yangfeng@kylinos.cn>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250417014848.59321-2-yangfeng59949@163.com

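A hedged illustration of the reordering idea; the helper name and format string here are invented for this sketch and are not the actual patch:

```
#define MAX_EVENT_NAME_LEN 64   /* kernel-side limit the commit refers to */

/* Illustrative only: with the unique id placed before the (possibly very
 * long) sanitized path, snprintf() truncation can only eat the tail of the
 * path, never the id that guarantees uniqueness. */
static void gen_probe_name(char *buf, size_t buf_sz,
                           const char *sanitized_path, size_t offset)
{
        static int index;

        snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%zx",
                 getpid(), __sync_fetch_and_add(&index, 1),
                 sanitized_path, offset);
}
```
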
2025-04-15  libbpf: Fix buffer overflow in bpf_object__init_prog  (Viktor Malik)

As shown in [1], it is possible to corrupt a BPF ELF file such that arbitrary
BPF instructions are loaded by libbpf. This can be done by setting a symbol
(BPF program) section offset to a large (unsigned) number such that
<section start + symbol offset> overflows and points before the section data
in memory. Consider the situation below, where:

  - prog_start = sec_start + symbol_offset    <-- size_t overflow here
  - prog_end   = prog_start + prog_size

     prog_start   sec_start       prog_end          sec_end
        |           |               |                |
        v           v               v                v
    ................|################################|............

The report in [1] also provides a corrupted BPF ELF which can be used as a
reproducer:

  $ readelf -S crash
  Section Headers:
    [Nr] Name              Type             Address           Offset
         Size              EntSize          Flags  Link  Info  Align
    ...
    [ 2] uretprobe.mu[...] PROGBITS         0000000000000000  00000040
         0000000000000068  0000000000000000  AX       0     0     8

  $ readelf -s crash
  Symbol table '.symtab' contains 8 entries:
     Num:    Value          Size Type    Bind   Vis      Ndx Name
     ...
       6: ffffffffffffffb8   104 FUNC    GLOBAL DEFAULT    2 handle_tp

Here, the handle_tp prog has section offset ffffffffffffffb8, i.e. it will
point before the actual memory where section 2 is allocated. This is also
reported by AddressSanitizer:

  =================================================================
  ==1232==ERROR: AddressSanitizer: heap-buffer-overflow on address
  0x7c7302fe0000 at pc 0x7fc3046e4b77 bp 0x7ffe64677cd0 sp 0x7ffe64677490
  READ of size 104 at 0x7c7302fe0000 thread T0
      #0 0x7fc3046e4b76 in memcpy (/lib64/libasan.so.8+0xe4b76)
      #1 0x00000040df3e in bpf_object__init_prog /src/libbpf/src/libbpf.c:856
      #2 0x00000040df3e in bpf_object__add_programs /src/libbpf/src/libbpf.c:928
      #3 0x00000040df3e in bpf_object__elf_collect /src/libbpf/src/libbpf.c:3930
      #4 0x00000040df3e in bpf_object_open /src/libbpf/src/libbpf.c:8067
      #5 0x00000040f176 in bpf_object__open_file /src/libbpf/src/libbpf.c:8090
      #6 0x000000400c16 in main /poc/poc.c:8
      #7 0x7fc3043d25b4 in __libc_start_call_main (/lib64/libc.so.6+0x35b4)
      #8 0x7fc3043d2667 in __libc_start_main@@GLIBC_2.34 (/lib64/libc.so.6+0x3667)
      #9 0x000000400b34 in _start (/poc/poc+0x400b34)

  0x7c7302fe0000 is located 64 bytes before 104-byte region
  [0x7c7302fe0040,0x7c7302fe00a8) allocated by thread T0 here:
      #0 0x7fc3046e716b in malloc (/lib64/libasan.so.8+0xe716b)
      #1 0x7fc3045ee600 in __libelf_set_rawdata_wrlock (/lib64/libelf.so.1+0xb600)
      #2 0x7fc3045ef018 in __elf_getdata_rdlock (/lib64/libelf.so.1+0xc018)
      #3 0x00000040642f in elf_sec_data /src/libbpf/src/libbpf.c:3740

The problem is that libbpf currently only checks that the program end is
within the section bounds. There used to be a check
`while (sec_off < sec_sz)` in bpf_object__add_programs; however, it was
removed by commit 6245947c1b3c ("libbpf: Allow gaps in BPF program sections
to support overriden weak functions").

Add a check for detecting the overflow of `sec_off + prog_sz` to
bpf_object__init_prog to fix this issue.

[1] https://github.com/lmarch2/poc/blob/main/libbpf/libbpf.md

Fixes: 6245947c1b3c ("libbpf: Allow gaps in BPF program sections to support overriden weak functions")
Reported-by: lmarch2 <2524158037@qq.com>
Signed-off-by: Viktor Malik <vmalik@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Link: https://github.com/lmarch2/poc/blob/main/libbpf/libbpf.md
Link: https://lore.kernel.org/bpf/20250415155014.397603-1-vmalik@redhat.com

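The shape of such a guard, sketched from the commit description (variable names follow the message; this is not the verbatim patch):

```
/* Sketch: in addition to the existing upper-bound check, detect size_t
 * wraparound of sec_off + prog_sz, which would place prog_start before
 * the section data. */
if (sec_off + prog_sz > sec_sz || sec_off + prog_sz < sec_off) {
        pr_warn("sec '%s': program at offset %zu crosses section boundary\n",
                sec_name, sec_off);
        return -LIBBPF_ERRNO__FORMAT;
}
```
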
2025-04-09  libbpf: Add getters for BTF.ext func and line info  (Mykyta Yatsenko)

Introduce new libbpf API getters for BTF.ext func and line info, namely:

  bpf_program__func_info
  bpf_program__func_info_cnt
  bpf_program__line_info
  bpf_program__line_info_cnt

This change enables scenarios where the user needs to load a bpf_program
directly using `bpf_prog_load`, instead of the higher-level
`bpf_object__load`. Line and func info are required for checking BTF info in
the verifier; verification may fail without these fields if, for example, the
program calls `bpf_obj_new`.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250408234417.452565-2-mykyta.yatsenko5@gmail.com

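A hedged usage sketch of these getters feeding a direct `bpf_prog_load()` call; error handling and the program's BTF fd wiring are elided, and `prog` is assumed to come from an opened `bpf_object`:

```
LIBBPF_OPTS(bpf_prog_load_opts, opts,
        .func_info          = bpf_program__func_info(prog),
        .func_info_cnt      = bpf_program__func_info_cnt(prog),
        .func_info_rec_size = sizeof(struct bpf_func_info),
        .line_info          = bpf_program__line_info(prog),
        .line_info_cnt      = bpf_program__line_info_cnt(prog),
        .line_info_rec_size = sizeof(struct bpf_line_info),
        /* .prog_btf_fd would also need to be set for BTF-powered checks */
);

int fd = bpf_prog_load(bpf_program__type(prog), bpf_program__name(prog),
                       "GPL", bpf_program__insns(prog),
                       bpf_program__insn_cnt(prog), &opts);
```
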
2025-04-04  libbpf: Fix implicit memfd_create() for bionic  (Carlos Llamas)

Since memfd_create() is not consistently available across different bionic
libc implementations, using memfd_create() directly can break some Android
builds:

  tools/lib/bpf/linker.c:576:7: error: implicit declaration of function 'memfd_create' [-Werror,-Wimplicit-function-declaration]
    576 |         fd = memfd_create(filename, 0);
        |              ^

To fix this, relocate and inline the sys_memfd_create() helper so that it can
be used in "linker.c". Similar issues were previously fixed by commit
9fa5e1a180aa ("libbpf: Call memfd_create() syscall directly").

Fixes: 6d5e5e5d7ce1 ("libbpf: Extend linker API to support in-memory ELF files")
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250330211325.530677-1-cmllamas@google.com

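The helper in question is essentially a raw syscall wrapper; a sketch of what such an inlined helper looks like (placement and exact form per the commit, shown approximately):

```
#include <sys/syscall.h>
#include <unistd.h>

/* Bypass libc entirely, so what matters is availability of the syscall,
 * not of the libc wrapper. */
static inline int sys_memfd_create(const char *name, unsigned int flags)
{
        return syscall(__NR_memfd_create, name, flags);
}
```
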
2025-03-17  libbpf: Pass BPF token from find_prog_btf_id to BPF_BTF_GET_FD_BY_ID  (Mykyta Yatsenko)

Pass the BPF token from bpf_program__set_attach_target to the
BPF_BTF_GET_FD_BY_ID bpf command. When an freplace program attaches to a
target program, it needs to look up the BTF of the target; this may require a
BPF token if, for example, running from a user namespace.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20250317174039.161275-4-mykyta.yatsenko5@gmail.com

2025-03-15  libbpf: Split bpf object load into prepare/load  (Mykyta Yatsenko)

Introduce the bpf_object__prepare API: an additional intermediate preparation
step that performs ELF processing, relocations, prepares the final state of
BPF program instructions (accessible with bpf_program__insns()), creates and
(potentially) pins maps, and stops short of loading BPF programs.

We anticipate a few use cases for this API, such as:

  * Use prepare to initialize bpf_token, without loading freplace programs,
    unlocking the possibility to look up BTF of other programs.
  * Execute prepare to obtain finalized BPF program instructions without
    loading programs, enabling tools like veristat to process one program at
    a time, without incurring the cost of ELF parsing and processing.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250303135752.158343-4-mykyta.yatsenko5@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

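A veristat-style usage sketch of the two-step flow (file name hypothetical, error handling trimmed):

```
int analyze_programs(void)
{
        struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", NULL);
        struct bpf_program *prog;
        int err = -1;

        if (!obj)
                return -1;
        if (bpf_object__prepare(obj))   /* ELF processing, relocs, map creation */
                goto out;

        bpf_object__for_each_program(prog, obj) {
                /* finalized instructions, nothing loaded into the kernel */
                const struct bpf_insn *insns = bpf_program__insns(prog);
                size_t cnt = bpf_program__insn_cnt(prog);

                /* ... process insns/cnt one program at a time ... */
        }
        err = 0;
out:
        bpf_object__close(obj);
        return err;
}
```
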
2025-03-15  libbpf: Introduce more granular state for bpf_object  (Mykyta Yatsenko)

We are going to split bpf_object loading into two stages: preparation and
loading. This will increase flexibility when working with bpf_object and
unlock some optimizations and use cases. This patch replaces the boolean
flag (loaded) with a more finely-grained state for bpf_object.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250303135752.158343-3-mykyta.yatsenko5@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2025-03-15  libbpf: Use map_is_created helper in map setters  (Mykyta Yatsenko)

Refactoring: use the map_is_created helper in map setters that need to check
the state of the map. This helps to reduce the number of places that depend
explicitly on the loaded flag, simplifying refactoring in the next patch of
this set.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250303135752.158343-2-mykyta.yatsenko5@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2025-02-24  libbpf: Fix out-of-bound read  (Nandakumar Edamana)

In `set_kcfg_value_str`, an untrusted string is accessed with the assumption
that it will be at least two characters long, due to the presence of checks
for opening and closing quotes. But the check for the closing quote
(value[len - 1] != '"') misses the fact that it could be checking the opening
quote itself, in the case of an invalid input that consists of just the
opening quote. This commit adds an explicit check to make sure the string is
at least two characters long.

Signed-off-by: Nandakumar Edamana <nandakumar@nandakumar.co.in>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250221210110.3182084-1-nandakumar@nandakumar.co.in

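The resulting validation, sketched (warning text approximate):

```
/* Sketch: with len >= 2 enforced first, value[len - 1] can no longer
 * alias the opening quote for the degenerate input consisting of '"'. */
len = strlen(value);
if (len < 2 || value[0] != '"' || value[len - 1] != '"') {
        pr_warn("extern (kcfg) '%s': invalid string config '%s'\n",
                ext->name, value);
        return -EINVAL;
}
```
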
2025-02-19  libbpf: Wrap libbpf API direct err with libbpf_err  (Tao Chen)

Just wrap the direct err with libbpf_err, to keep consistency with other
APIs.

Signed-off-by: Tao Chen <chen.dylane@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/20250219153711.29651-1-chen.dylane@linux.dev

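The convention this enforces, illustratively:

```
/* libbpf_err() records the error in errno and returns it, so callers can
 * rely on errno consistently across all libbpf APIs. */
if (!prog)
        return libbpf_err(-EINVAL);     /* instead of: return -EINVAL; */
```
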
2025-01-17  libbpf: Work around kernel inconsistently stripping '.llvm.' suffix  (Andrii Nakryiko)

Some versions of the kernel were stripping the '.llvm.<hash>' suffix
(produced by Clang LTO compilation) from kernel symbols' function names
reported in available_filter_functions, while kallsyms reported the full
original name. This confuses libbpf's multi-kprobe logic of finding all
matching kernel functions for a specified user glob pattern by joining
available_filter_functions and kallsyms contents, because joining by full
symbol name won't work for symbols containing the '.llvm.<hash>' suffix.

This was eventually fixed by [0] in the kernel, but we'd like to not regress
the multi-kprobe experience, so add a workaround for this bug on the libbpf
side: strip the kallsyms name if it matches the user pattern and contains the
'.llvm.' suffix.

[0] fb6a421fb615 ("kallsyms: Match symbols exactly with CONFIG_LTO_CLANG")

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20250117003957.179331-1-andrii@kernel.org

2025-01-10  libbpf: Add unique_match option for multi kprobe  (Yonghong Song)

Jordan reported an issue in the Meta production environment where the
function try_to_wake_up() is renamed to try_to_wake_up.llvm.<hash>() by the
clang compiler in LTO mode. The original 'kprobe/try_to_wake_up' does not
work any more, since try_to_wake_up() does not match the actual function name
in /proc/kallsyms.

There are a couple of ways to resolve this issue. For example, in
attach_kprobe(), we could do a lookup in /proc/kallsyms so try_to_wake_up()
can be replaced by try_to_wake_up.llvm.<hash>(). Or we can force users to use
bpf_program__attach_kprobe(), where they need to look up /proc/kallsyms to
find try_to_wake_up.llvm.<hash>(). But these two approaches require extra
work by either libbpf or the user.

Luckily, as suggested by Andrii, multi kprobe already supports the wildcard
('*') for symbol matching. In the above example, 'try_to_wake_up*' can match
try_to_wake_up() or try_to_wake_up.llvm.<hash>(), and this allows the BPF
program to work across kernels, as some kernels may have try_to_wake_up()
and others may have try_to_wake_up.llvm.<hash>().

The original intention is to kprobe try_to_wake_up() only, so an optional
field unique_match is added to struct bpf_kprobe_multi_opts. If the field is
set to true, the number of matched functions must be one; otherwise, the
attachment will fail. In the above case, multi kprobe with 'try_to_wake_up*'
and unique_match preserves the user's intent.

Reported-by: Jordan Rome <linux@jordanrome.com>
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250109174023.3368432-1-yonghong.song@linux.dev

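Hedged usage sketch:

```
/* Attach to exactly one function even if LTO renamed it; the attach fails
 * if the glob matches zero or more than one symbol. */
LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
        .unique_match = true,
);
struct bpf_link *link;

link = bpf_program__attach_kprobe_multi_opts(prog, "try_to_wake_up*", &opts);
if (!link)
        fprintf(stderr, "attach failed: %d\n", -errno);
```
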
2024-12-30  libbpf: Set MFD_NOEXEC_SEAL when creating memfd  (Daniel Xu)

Starting from 105ff5339f49 ("mm/memfd: add MFD_NOEXEC_SEAL and MFD_EXEC")
and until 1717449b4417 ("memfd: drop warning for missing exec-related
flags"), the kernel would print a warning if neither MFD_NOEXEC_SEAL nor
MFD_EXEC is set in memfd_create(). If libbpf runs on a kernel between these
two commits (e.g. on an improperly backported system), it'll trigger this
warning.

To avoid this warning (and also be more secure), explicitly set
MFD_NOEXEC_SEAL. But since libbpf can be run on potentially very old kernels,
leave a fallback for kernels without MFD_NOEXEC_SEAL support.

Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/6e62c2421ad7eb1da49cbf16da95aaaa7f94d394.1735594195.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

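A sketch of the set-then-fallback pattern described; the retry condition is an assumption based on typical flag-probing code:

```
#ifndef MFD_NOEXEC_SEAL
#define MFD_NOEXEC_SEAL 0x0008U /* from linux/memfd.h on newer kernels */
#endif

fd = sys_memfd_create(name, MFD_CLOEXEC | MFD_NOEXEC_SEAL);
if (fd < 0 && errno == EINVAL) {
        /* assumed fallback: old kernel rejecting unknown memfd flags */
        fd = sys_memfd_create(name, MFD_CLOEXEC);
}
```
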
2024-11-15  libbpf: Fix memory leak in bpf_program__attach_uprobe_multi  (Jiri Olsa)

Andrii reported a memory leak detected by Coverity on an error path in
bpf_program__attach_uprobe_multi. Fix that by moving the check earlier,
before the offsets allocations.

Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241115115843.694337-1-jolsa@kernel.org

2024-11-11  libbpf: Stringify errno in log messages in libbpf.c  (Mykyta Yatsenko)

Convert numeric error codes into their string representations in log
messages in libbpf.c.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241111212919.368971-3-mykyta.yatsenko5@gmail.com

2024-11-11  libbpf: Add support for uprobe multi session attach  (Jiri Olsa)

Add support for attaching a program in uprobe session mode with the
bpf_program__attach_uprobe_multi function, by adding a session bool to the
bpf_uprobe_multi_opts struct that makes the attachment create a uprobe multi
session.

Also add a new program loader section that allows:

  SEC("uprobe.session/bpf_fentry_test*")

and loads/attaches the uprobe program as a uprobe session. A sleepable hook
(uprobe.session.s) is added as well.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-6-jolsa@kernel.org

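Hedged attach sketch using the opts route (binary path and function pattern are made up):

```
/* One program attached as both entry and return uprobe via session mode */
LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
        .session = true,
);
struct bpf_link *link;

link = bpf_program__attach_uprobe_multi(prog, -1 /* any pid */,
                                        "/usr/bin/myapp", "my_func*", &opts);
```
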
2024-11-11  bpf: Add support for uprobe multi session attach  (Jiri Olsa)

Add support for attaching a BPF program to both the entry and return probe of
the same function. This is a common use case which at the moment requires
creating two uprobe multi links.

Add a new BPF_TRACE_UPROBE_SESSION attach type that instructs the kernel to
attach a single link program to both entry and exit probes. It's possible to
control execution of the BPF program on the return probe simply by returning
zero or non-zero from the entry BPF program execution, to execute or skip
the BPF program on the return probe, respectively.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20241108134544.480660-4-jolsa@kernel.org

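On the BPF side, a session program could look roughly like this; `bpf_session_is_return()` is the session kfunc, while the SEC target and `want_return_probe()` are hypothetical for this sketch:

```
extern bool bpf_session_is_return(void) __ksym;

SEC("uprobe.session//usr/bin/myapp:my_func")
int handle_session(struct pt_regs *ctx)
{
        if (bpf_session_is_return())
                return 0;               /* return-probe leg */

        /* entry leg: non-zero return skips the return probe for this call */
        return want_return_probe(ctx) ? 0 : 1;
}
```
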
2024-10-23  libbpf: move global data mmap()'ing into bpf_object__load()  (Andrii Nakryiko)

Since BPF skeleton inception, libbpf has been doing the mmap()'ing of global
data ARRAY maps in the bpf_object__load_skeleton() API, which is used by
code-generated .skel.h files (i.e., by BPF skeletons only).

This is wrong because if a BPF object is loaded through the generic
bpf_object__load() API, global data maps won't be re-mmap()'ed after the load
step, and memory pointers returned from bpf_map__initial_value() would be
wrong and won't reflect the actual memory shared between the BPF program and
user space.

bpf_map__initial_value()'s return result is rarely used after load, so this
went unnoticed for a really long time, until the bpftrace project attempted
to load a BPF object through the generic bpf_object__load() API and then used
a BPF subskeleton instantiated from such a bpf_object. It turned out that
.data/.rodata/.bss data updates through such a subskeleton were "blackholed",
all because libbpf wouldn't re-mmap() those maps during the
bpf_object__load() phase.

Long story short, this step should be done by libbpf regardless of BPF
skeleton usage, right after the BPF map is created in the kernel. This patch
moves this functionality into bpf_object__populate_internal_map() to achieve
that. And bpf_object__load_skeleton() is now simple and almost trivial, only
propagating these mmap()'ed pointers into user-supplied skeleton structs.

We also make trivial adjustments to error reporting inside
bpf_object__populate_internal_map() for consistency with the rest of libbpf's
map-handling code.

Reported-by: Alastair Robertson <ajor@meta.com>
Reported-by: Jonathan Wiepert <jwiepert@meta.com>
Fixes: d66562fba1ce ("libbpf: Add BPF object skeleton support")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20241023043908.3834423-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-10-11  libbpf: never interpret subprogs in .text as entry programs  (Andrii Nakryiko)

Libbpf pre-1.0 had legacy logic allowing a singular non-annotated (i.e.,
lacking an explicit SEC() annotation) function to be treated as the sole
entry BPF program (unless there were other explicit entry programs). This
behavior was dropped during the libbpf 1.0 transition period (unless the
LIBBPF_STRICT_SEC_NAME flag was unset in libbpf_mode). When 1.0 was released
and all the legacy behavior was removed, this bug slipped through, leaving
the legacy behavior around.

Fix this for good, as it actually causes very confusing behavior if a BPF
object file only has subprograms, but no entry programs.

Fixes: bd054102a8c7 ("libbpf: enforce strict libbpf 1.0 behaviors")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20241010211731.4121837-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-10-09  libbpf: fix sym_is_subprog() logic for weak global subprogs  (Andrii Nakryiko)

sym_is_subprog() is incorrectly rejecting relocations against *weak* global
subprogs. Fix that by recognizing that STB_WEAK is also a global function.

While it seems like the verifier doesn't support taking an address of a
non-static subprog right now, it's still best to fix support for it on the
libbpf side; otherwise users will get a very confusing error during BPF
skeleton generation or static linking due to a misinterpreted relocation:

  libbpf: prog 'handle_tp': bad map relo against 'foo' in section '.text'
  Error: failed to open BPF object file: Relocation failed

It's clearly not a map relocation, but it is treated and reported as such
without this fix.

Fixes: 53eddb5e04ac ("libbpf: Support subprog address relocation")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20241009011554.880168-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-10-03  libbpf: Support creating light skeleton of either endianness  (Tony Ambardar)

Track target endianness in 'struct bpf_gen' and process in-memory data in
native byte order, but on finalization convert the embedded loader BPF insns
to target endianness.

The light skeleton also includes a target-accessed data blob, which is
heterogeneous and thus difficult to convert to target byte order on
finalization. Add support functions to convert data to target endianness as
it is added to the blob.

Also add additional debug logging for data blob structure details and
skeleton loading.

Signed-off-by: Tony Ambardar <tony.ambardar@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/569562e1d5bf1cce80a1f1a3882461ee2da1ffd5.1726475448.git.tony.ambardar@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-10-03  libbpf: Support opening bpf objects of either endianness  (Tony Ambardar)

Allow bpf_object__open() to access files of either endianness, and convert
included BPF programs to native byte order in-memory for introspection.
Loading BPF objects of non-native byte order is still disallowed, however.

Signed-off-by: Tony Ambardar <tony.ambardar@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/26353c1a1887a54400e1acd6c138fa90c99cdd40.1726475448.git.tony.ambardar@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-10-03  libbpf: Improve log message formatting  (Tony Ambardar)

Fix missing newlines and extraneous trailing spaces in messages.

Signed-off-by: Tony Ambardar <tony.ambardar@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/086884b7cbf87e524d584f9bf87f7a580e378b2b.1726475448.git.tony.ambardar@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-10-03  libbpf: Fix expected_attach_type set handling in program load callback  (Tao Chen)

The referenced commit broke the logic of resetting expected_attach_type to
zero for allowed program types if the kernel doesn't yet support such a
field. We do need to overwrite and preserve expected_attach_type for
multi-uprobe, though, but that can be done explicitly in
libbpf_prepare_prog_load().

Fixes: 5902da6d8a52 ("libbpf: Add uprobe multi link support to bpf_program__attach_usdt")
Suggested-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Tao Chen <chen.dylane@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240925153012.212866-1-chen.dylane@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-10-03  libbpf: Change log level of BTF loading error message  (Ihor Solodrai)

Reduce the log level of the BTF loading error to INFO if BTF is not required.

Andrii says:

  Nowadays the expectation is that the BPF program will have a valid .BTF
  section, so even though .BTF is "optional", I think it's fine to emit a
  warning for that case (any reasonably recent Clang will produce valid BTF).

  Ihor's patch is fixing the situation with an outdated host kernel that
  doesn't understand BTF. libbpf will try to "upload" the program's BTF, but
  if that fails and the BPF object doesn't use any features that require
  having BTF uploaded, then it's just an informational message to the user,
  and otherwise can be ignored.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Ihor Solodrai <ihor.solodrai@pm.me>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-09-12  libbpf: Add bpf_object__token_fd accessor  (Ihor Solodrai)

Add a LIBBPF_API function to retrieve the token_fd from a bpf_object.

Without this accessor, if a user needs a token FD they have to create it
manually via bpf_token_create, even though a token might have already been
created by bpf_object__load.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Ihor Solodrai <ihor.solodrai@pm.me>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240913001858.3345583-1-ihor.solodrai@pm.me

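Hedged usage sketch; the no-token return value is an assumption, not confirmed by this log entry:

```
/* After load, reuse whatever token the object created */
if (!bpf_object__load(obj)) {
        int token_fd = bpf_object__token_fd(obj);

        if (token_fd >= 0)      /* assumed: negative means no token was set */
                use_token(token_fd);    /* hypothetical consumer */
}
```
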
2024-09-10  libbpf: Fix uretprobe.multi.s programs auto attachment  (Jiri Olsa)

As reported by Andrii, we don't currently recognize uretprobe.multi.s
programs as return probes, due to using the (wrong) strcmp function. Use
str_has_pfx() instead to match the uretprobe.multi prefix.

Tests were passing because the return program was executed as an entry
program and all counts were incremented properly.

Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240910125336.3056271-1-jolsa@kernel.org

2024-09-06  libbpf: Workaround (another) -Wmaybe-uninitialized false positive  (Sam James)

We get this with GCC 15 -O3 (at least):

```
libbpf.c: In function ‘bpf_map__init_kern_struct_ops’:
libbpf.c:1109:18: error: ‘mod_btf’ may be used uninitialized [-Werror=maybe-uninitialized]
 1109 |         kern_btf = mod_btf ? mod_btf->btf : obj->btf_vmlinux;
      |                  ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
libbpf.c:1094:28: note: ‘mod_btf’ was declared here
 1094 |         struct module_btf *mod_btf;
      |                            ^~~~~~~
In function ‘find_struct_ops_kern_types’,
    inlined from ‘bpf_map__init_kern_struct_ops’ at libbpf.c:1102:8:
libbpf.c:982:21: error: ‘btf’ may be used uninitialized [-Werror=maybe-uninitialized]
  982 |         kern_type = btf__type_by_id(btf, kern_type_id);
      |                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
libbpf.c: In function ‘bpf_map__init_kern_struct_ops’:
libbpf.c:967:21: note: ‘btf’ was declared here
  967 |         struct btf *btf;
      |                     ^~~
```

This is similar to the other libbpf fix from a few weeks ago for the same
errno-modelling issue (fab45b962749184e1a1a57c7c583782b78fad539).

Signed-off-by: Sam James <sam@gentoo.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://bugs.gentoo.org/939106
Link: https://lore.kernel.org/bpf/f6962729197ae7cdf4f6d1512625bd92f2322d31.1725630494.git.sam@gentoo.org

2024-09-05  libbpf: fix some typos in libbpf  (Lin Yikai)

Fix some spelling errors in libbpf; the details are as follows:

  - in the code comments:
      termintaing -> terminating
      architecutre -> architecture
      requring -> requiring
      recored -> recoded
      sanitise -> sanities
      allowd -> allowed
      abover -> above
      see bpf_udst_arg() -> see bpf_usdt_arg()

Signed-off-by: Lin Yikai <yikai.lin@vivo.com>
Link: https://lore.kernel.org/r/20240905110354.3274546-3-yikai.lin@vivo.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-08-29  libbpf: Fix bpf_object__open_skeleton()'s mishandling of options  (Andrii Nakryiko)

We do an ugly copying of options in bpf_object__open_skeleton() just to be
able to set the object name from the skeleton's recorded name (while still
allowing the user to override it through opts->object_name).

This is not just ugly, but it is also broken due to a memcpy() that doesn't
take into account potential differences between skel_opts' and user-provided
opts' sizes due to backward and forward compatibility. This leads to copying
over extra bytes and then failing to validate options properly. It could,
technically, also lead to SIGSEGV, if we are unlucky.

So just get rid of that memory copy completely and instead pass the default
object name into bpf_object_open() directly, simplifying all this
significantly. The rule now is that obj_name should be non-NULL for
bpf_object_open() when called with an in-memory buffer, so validate that
explicitly as well.

We adapt bpf_object__open_mem() to this as well and generate a default name
(based on buffer memory address and size) outside of bpf_object_open().

Fixes: d66562fba1ce ("libbpf: Add BPF object skeleton support")
Reported-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Daniel Müller <deso@posteo.net>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/20240827203721.1145494-1-andrii@kernel.org

2024-07-29  libbpf: Don't take direct pointers into BTF data from st_ops  (David Vernet)

In struct bpf_struct_ops, we take a pointer to a BTF type name, and a struct
btf_type. This was presumably done for convenience, but it can actually
result in subtle and confusing bugs, given that BTF data can be invalidated
before a program is loaded. For example, in sched_ext, we may sometimes
resize a data section after a skeleton has been opened, but before the
struct_ops scheduler map has been loaded. This may cause the BTF data to be
realloc'd, which can then cause a UAF when loading the program, because the
struct_ops map has pointers directly into the BTF data.

We're already storing the BTF type_id in struct bpf_struct_ops. Because
type_id is stable, we can therefore just update the places that were looking
at those pointers to instead do the lookups they need from the type_id.

Fixes: 590a00888250 ("bpf: libbpf: Add STRUCT_OPS support")
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240724171459.281234-1-void@manifault.com

2024-07-09  libbpf: improve old BPF skeleton handling for map auto-attach  (Andrii Nakryiko)

Improve how we handle old BPF skeletons when it comes to BPF map
auto-attachment. Emit one warn-level message per struct_ops map that could
have been auto-attached, if the user provided a recent enough BPF skeleton
version. Don't spam the log if there are no relevant struct_ops maps, though.

This should help users realize that they probably need to regenerate the BPF
skeleton header with a more recent bpftool/libbpf-cargo (or whatever other
means of BPF skeleton generation).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240708204540.4188946-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2024-07-09  libbpf: fix BPF skeleton forward/backward compat handling  (Andrii Nakryiko)

BPF skeleton was designed from day one to be extensible. Generated BPF
skeleton code specifies actual sizes of map/prog/variable skeletons for that
reason, and libbpf is supposed to work correctly with newer/older versions.

Unfortunately, it was missed that we implicitly embed hard-coded, most
up-to-date (according to the version of libbpf's libbpf.h header used to
compile the BPF skeleton header) sizes of those structs, which can differ
from the actual sizes at runtime when libbpf is used as a shared library. We
have a few places where we just index the array of maps/progs/vars, which
implicitly uses these potentially invalid struct sizes.

This patch aims to fix this problem going forward. Once this lands, we'll
backport these changes in the Github repo to create patched releases for
older libbpfs.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Fixes: d66562fba1ce ("libbpf: Add BPF object skeleton support")
Fixes: 430025e5dca5 ("libbpf: Add subskeleton scaffolding")
Fixes: 08ac454e258e ("libbpf: Auto-attach struct_ops BPF maps in BPF skeleton")
Co-developed-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240708204540.4188946-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

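The indexing fix amounts to stepping by the skeleton-recorded size rather than by the compile-time sizeof(); approximately:

```
/* Instead of s->maps[i], which bakes in the compile-time struct size,
 * step by the size the skeleton itself recorded at generation time. */
struct bpf_map_skeleton *map_skel =
        (void *)((char *)s->maps + i * s->map_skel_sz);
```
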
2024-07-08  libbpf: Add NULL checks to bpf_object__{prev_map,next_map}  (Andreas Ziegler)

In the current state, an erroneous call to
bpf_object__find_map_by_name(NULL, ...) leads to a segmentation fault through
the following call chain:

  bpf_object__find_map_by_name(obj = NULL, ...)
  -> bpf_object__for_each_map(pos, obj = NULL)
  -> bpf_object__next_map((obj = NULL), NULL)
  -> return (obj = NULL)->maps

While calling bpf_object__find_map_by_name with obj = NULL is obviously
incorrect, this should not lead to a segmentation fault but rather be handled
gracefully. As __bpf_map__iter already handles this situation correctly, we
can delegate the check for the regular case there and only add a check in
case the prev or next parameter is NULL.

Signed-off-by: Andreas Ziegler <ziegler.andreas@siemens.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240703083436.505124-1-ziegler.andreas@siemens.com

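Approximate shape of the guarded iterator (a sketch, not the verbatim patch):

```
struct bpf_map *bpf_object__next_map(const struct bpf_object *obj,
                                     const struct bpf_map *prev)
{
        if (prev == NULL && obj != NULL)        /* guard the first-step shortcut */
                return obj->maps;

        return __bpf_map__iter(prev, obj, 1);   /* handles NULL obj itself */
}
```
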
2024-06-06  libbpf: Auto-attach struct_ops BPF maps in BPF skeleton  (Mykyta Yatsenko)

Similarly to `bpf_program`, support `bpf_map` automatic attachment in
`bpf_object__attach_skeleton`. Currently only struct_ops maps can be
attached.

On the bpftool side, code-generate links in the skeleton struct for
struct_ops maps, and, similarly to `bpf_program_skeleton`, set links in
`bpf_map_skeleton`.

On the libbpf side, extend `bpf_map` with a new `autoattach` field to support
enabling or disabling autoattach functionality, introducing a getter/setter
for this field. `bpf_object__(attach|detach)_skeleton` is extended with
struct_ops map attaching/detaching logic.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240605175135.117127-1-yatsenko@meta.com

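Hedged usage of the new getter/setter (skeleton and map names invented):

```
/* Opt a specific struct_ops map out of skeleton auto-attach */
bpf_map__set_autoattach(skel->maps.my_sched_ops, false);

if (bpf_object__attach_skeleton(skel->skeleton))
        goto err;

/* later, attach it manually when ready */
struct bpf_link *link = bpf_map__attach_struct_ops(skel->maps.my_sched_ops);
```
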
2024-05-28  libbpf: Configure log verbosity with env variable  (Mykyta Yatsenko)

Configure logging verbosity by setting the LIBBPF_LOG_LEVEL environment
variable, which is applied only to the default logger. Once the user sets
their custom logging callback, it is up to them to handle filtering.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240524131840.114289-1-yatsenko@meta.com

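Illustrative use; the accepted values ("warn", "info", "debug") are an assumption based on libbpf's print levels:

```
/* From the process itself, before the first libbpf call... */
setenv("LIBBPF_LOG_LEVEL", "debug", 1); /* assumed values: warn|info|debug */

/* ...or equivalently from the shell (shown as a comment to stay in C):
 *   LIBBPF_LOG_LEVEL=debug ./my_bpf_app
 */
```
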
2024-05-07  libbpf: improve early detection of doomed-to-fail BPF program loading  (Andrii Nakryiko)

Extend libbpf's pre-load checks for BPF programs, detecting more typical
conditions that are destined to cause BPF program load failure. This is an
opportunity to provide a more helpful and actionable error message to users,
instead of a potentially very confusing BPF verifier log and/or error.

In this case, we detect a struct_ops BPF program that is not referenced
anywhere, but is still attempted to be loaded (according to libbpf logic).
Suggest that the program might need to be used in some struct_ops variable.
The user will get a message of the following kind:

  libbpf: prog 'test_1_forgotten': SEC("struct_ops") program isn't referenced anywhere, did you forget to use it?

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240507001335.1445325-6-andrii@kernel.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

2024-05-07  libbpf: handle yet another corner case of nulling out struct_ops program  (Andrii Nakryiko)

There is yet another corner case where the user can set a STRUCT_OPS program
reference in a STRUCT_OPS map to NULL, but libbpf will fail to disable
autoload for such a BPF program. This time it's the case of a "new" kernel
which has type information about the callback field, but the user explicitly
nulled out the program reference from user space after opening the BPF
object.

Fix, hopefully, the last remaining unhandled case.

Fixes: 0737df6de946 ("libbpf: better fix for handling nulled-out struct_ops program")
Fixes: f973fccd43d3 ("libbpf: handle nulled-out program in struct_ops correctly")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240507001335.1445325-3-andrii@kernel.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

2024-05-07  libbpf: remove unnecessary struct_ops prog validity check  (Andrii Nakryiko)

libbpf ensures that BPF program references set in map->st_ops->progs[i]
during the open phase are always valid STRUCT_OPS programs. This is done in
bpf_object__collect_st_ops_relos(), so there is no need to double-check that
in bpf_map__init_kern_struct_ops(). Simplify the code by removing the
unnecessary check.

Also, avoid using a local prog variable, to keep the code similar to the
upcoming fix, which adds similar logic in another part of
bpf_map__init_kern_struct_ops().

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240507001335.1445325-2-andrii@kernel.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

2024-05-02  libbpf: Fix error message in attach_kprobe_multi  (Jiri Olsa)

We just failed to retrieve the pattern, so we need to print spec instead.

Fixes: ddc6b04989eb ("libbpf: Add bpf_program__attach_kprobe_multi_opts function")
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240502075541.1425761-2-jolsa@kernel.org

2024-05-02  libbpf: Fix error message in attach_kprobe_session  (Jiri Olsa)

We just failed to retrieve the pattern, so we need to print spec instead.

Fixes: 2ca178f02b2f ("libbpf: Add support for kprobe session attach")
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240502075541.1425761-1-jolsa@kernel.org

2024-05-01  libbpf: better fix for handling nulled-out struct_ops program  (Andrii Nakryiko)

The previous attempt to fix the handling of a nulled-out (from the skeleton)
struct_ops program works well only if the struct_ops program is defined as
non-autoloaded by default (i.e., has a SEC("?struct_ops") annotation, with a
question mark).

Unfortunately, that fix is incomplete due to how
bpf_object_adjust_struct_ops_autoload() marks referenced or non-referenced
struct_ops programs as autoloaded (or not). Because
bpf_object_adjust_struct_ops_autoload() is run after the
bpf_map__init_kern_struct_ops() step, which sets the program slot to NULL,
such programs won't be considered "referenced", and so their autoload
property won't be changed.

This all sounds convoluted, and it is, but the desire is to have behavior
that is as natural as possible (as far as struct_ops usage is concerned).
This fix redoes the original fix but makes it work for autoloaded-by-default
struct_ops programs as well. We achieve this by forcing prog->autoload to
false if prog was declaratively set for some struct_ops map, but then
nulled out from the skeleton (programmatically). This achieves the desired
effect of not autoloading it. If such a program is still referenced somewhere
else (a different struct_ops map or a different callback field), it will get
its autoload property adjusted by bpf_object_adjust_struct_ops_autoload()
later.

We also fix a selftest which accidentally used the SEC("?struct_ops")
annotation. It was meant to use an autoload-by-default program from the very
beginning.

Fixes: f973fccd43d3 ("libbpf: handle nulled-out program in struct_ops correctly")
Cc: Kui-Feng Lee <thinker.li@gmail.com>
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240501041706.3712608-1-andrii@kernel.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

2024-05-01libbpf: support "module: Function" syntax for tracing programsViktor Malik
In some situations, it is useful to explicitly specify a kernel module to search for a tracing program target (e.g. when a function of the same name exists in multiple modules or in vmlinux). This patch enables that by allowing the "module:function" syntax for the find_kernel_btf_id function. Thanks to this, the syntax can be used both from a SEC macro (i.e. `SEC(fentry/module:function)`) and via the bpf_program__set_attach_target API call. Signed-off-by: Viktor Malik <vmalik@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/9085a8cb9a552de98e554deb22ff7e977d025440.1714469650.git.vmalik@redhat.com
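Hedged example of both routes (module and function names invented):

```
/* Declaratively, in BPF code: */
SEC("fentry/nf_nat:nf_nat_manip_pkt")
int BPF_PROG(handle_manip) { return 0; }

/* Or programmatically, from user space, before load: */
bpf_program__set_attach_target(prog, 0 /* no target prog fd */,
                               "nf_nat:nf_nat_manip_pkt");
```
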
2024-04-30  libbpf: Add kprobe session attach type name to attach_type_name  (Jiri Olsa)

Add the kprobe session attach type name to attach_type_name, so
libbpf_bpf_attach_type_str returns a proper string name for the
BPF_TRACE_KPROBE_SESSION attach type.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240430112830.1184228-6-jolsa@kernel.org

2024-04-30  libbpf: Add support for kprobe session attach  (Jiri Olsa)

Add support for attaching a program in kprobe session mode with the
bpf_program__attach_kprobe_multi_opts function, by adding a session bool to
the bpf_kprobe_multi_opts struct that makes the attachment create a kprobe
multi session.

Also add a new program loader section that allows:

  SEC("kprobe.session/bpf_fentry_test*")

and loads/attaches the kprobe program as a kprobe session.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240430112830.1184228-5-jolsa@kernel.org

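Hedged sketch of the declarative form; the entry leg's return value convention (0 = also run on return) mirrors the session design described above:

```
SEC("kprobe.session/bpf_fentry_test*")
int handle_kprobe_session(struct pt_regs *ctx)
{
        /* entry leg decides whether the return leg runs for this call */
        return 0;       /* 0 = also invoke on return; non-zero = skip */
}
```
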
2024-04-29  libbpf: handle nulled-out program in struct_ops correctly  (Andrii Nakryiko)

If a struct_ops has one of its program callbacks set declaratively and the
host kernel is old and doesn't support this callback, libbpf will allow
loading such a struct_ops, as long as that callback was explicitly nulled out
(presumably through the skeleton). This all works correctly, except that we
won't reset the corresponding program slot to NULL before bailing out, which
will lead to libbpf not detecting that the BPF program must not be
auto-loaded. Fix this by unconditionally resetting the corresponding program
slot to NULL.

Fixes: c911fc61a7ce ("libbpf: Skip zeroed or null fields if not found in the kernel type.")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240428030954.3918764-1-andrii@kernel.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

2024-04-10  libbpf: Add bpf_link support for BPF_PROG_TYPE_SOCKMAP  (Yonghong Song)

Introduce a libbpf API function bpf_program__attach_sockmap() which allows
the user to get a bpf_link for their corresponding programs.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240410043532.3737722-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

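Hedged usage sketch (skeleton field names invented):

```
struct bpf_link *link;

link = bpf_program__attach_sockmap(skel->progs.msg_verdict,
                                   bpf_map__fd(skel->maps.sock_map));
if (!link)
        fprintf(stderr, "sockmap attach failed: %d\n", -errno);
```
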