Instead of duplicating string literals, keep them in one place so they stay consistent.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191214014710.3449601-2-andriin@fb.com
|
|
Add bash completions for the gen sub-command.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Cc: Quentin Monnet <quentin.monnet@netronome.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-18-andriin@fb.com
|
|
Add a simple selftest validating datasection-to-struct layout dumping. Global
variables are constructed in such a way as to cause both natural and
artificial padding (through custom alignment requirements).
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-17-andriin@fb.com
|
|
Convert a few more selftests to use generated BPF skeletons as a demonstration
of how to use them.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-16-andriin@fb.com
|
|
Add BPF skeleton generation to selftests/bpf's Makefile. Convert attach_probe.c
to use a skeleton.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-15-andriin@fb.com
|
|
Add a `bpftool gen skeleton` command, which takes a compiled BPF .o object file
and dumps a BPF skeleton struct and related code to work with that skeleton.
The skeleton itself is tailored to the specific structure of the provided BPF
object file, containing accessors (just plain struct fields) for every map and
program, as well as dedicated space for bpf_links. If the BPF program uses
global variables, corresponding structure definitions with a compatible memory
layout are emitted as well, making it possible to initialize and subsequently
read/update global variable values using simple and clear C syntax for
accessing fields. This skeleton greatly improves the usability of
opening/loading/attaching a BPF object, as well as interacting with it
throughout the lifetime of the loaded BPF object.
Generated skeleton struct has the following structure:

    struct <object-name> {
        /* used by libbpf's skeleton API */
        struct bpf_object_skeleton *skeleton;
        /* bpf_object for libbpf APIs */
        struct bpf_object *obj;
        struct {
            /* for every defined map in BPF object: */
            struct bpf_map *<map-name>;
        } maps;
        struct {
            /* for every program in BPF object: */
            struct bpf_program *<program-name>;
        } progs;
        struct {
            /* for every program in BPF object: */
            struct bpf_link *<program-name>;
        } links;
        /* for every present global data section: */
        struct <object-name>__<one of bss, data, or rodata> {
            /* memory layout of corresponding data section,
             * with every defined variable represented as a struct field
             * with exactly the same type, but without const/volatile
             * modifiers, e.g.:
             */
            int *my_var_1;
            ...
        } *<one of bss, data, or rodata>;
    };
This provides great usability improvements:
- no need to look up maps and programs by name; instead, just
my_obj->maps.my_map or my_obj->progs.my_prog give the necessary
bpf_map/bpf_program pointers, which the user can pass to existing libbpf APIs;
- pre-defined places for bpf_links, which will be automatically populated for
program types that libbpf knows how to attach automatically (currently
tracepoints, kprobe/kretprobe, raw tracepoints, and tracing programs). On
tearing down the skeleton, all active bpf_links will be destroyed (meaning BPF
programs will be detached, if they are attached). For cases in which libbpf
doesn't know how to auto-attach a BPF program, the user can manually create a
link after loading the skeleton, and it will be auto-detached on skeleton
destruction:

    my_obj->links.my_fancy_prog = bpf_program__attach_cgroup_whatever(
            my_obj->progs.my_fancy_prog, <whatever extra param);

- it's extremely easy and convenient to work with global data from userspace
now. For both read-only and read/write variables, it's possible to
pre-initialize them before the skeleton is loaded:

    skel = my_obj__open(raw_embed_data);
    skel->rodata->my_var = 123;
    my_obj__load(skel); /* 123 will be the initialization value for my_var */

After load, if the kernel supports mmap() for BPF arrays, the user can still
read (and, for .bss and .data, write) variable values, but at that point the
struct will be directly mmap()-ed to the BPF array backing the global
variables. This allows data to be exchanged seamlessly with the BPF side. From
the userspace program's point of view, all the pointers and memory contents
stay the same, but the mapped kernel memory changes to point to the created map.
If the kernel doesn't yet support mmap() for BPF arrays, it's still possible to
use those data section structs to pre-initialize .bss, .data, and .rodata,
but after load their pointers will be reset to NULL, allowing user code to
gracefully handle this condition, if necessary.
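For illustration, a minimal end-to-end sketch, assuming the generated skeleton
also provides my_obj__attach() and my_obj__destroy() helpers (the my_obj naming
follows the examples above):

    struct my_obj *skel;
    int err;

    skel = my_obj__open(raw_embed_data);
    if (!skel)
        return -1;
    skel->rodata->my_var = 123;        /* pre-initialize read-only data */
    err = my_obj__load(skel);          /* create maps, load programs */
    if (!err)
        err = my_obj__attach(skel);    /* auto-attach supported prog types */
    /* ... work with skel->maps.*, skel->progs.*, skel->bss/data/rodata ... */
    my_obj__destroy(skel);             /* detach links, unload, free */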
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-14-andriin@fb.com
|
|
Add a new set of APIs allowing one to open/load/attach a BPF object through a
BPF object skeleton, generated by bpftool for a specific BPF object file. All
the xxx_skeleton() APIs wrap up the corresponding bpf_object_xxx() APIs, but
additionally also automate map/program lookups by name, global data
initialization and mmap()-ing, etc. All this greatly improves and simplifies
the userspace usability of working with BPF programs. See the follow-up
patches for examples.
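A rough sketch of how generated skeleton code is expected to drive these APIs
(assuming s points at a bpf_object_skeleton pre-filled by the generated code):

    int err;

    err = bpf_object__open_skeleton(s, NULL);    /* open, resolve maps/progs */
    if (!err)
        err = bpf_object__load_skeleton(s);      /* load + mmap() global data */
    if (!err)
        err = bpf_object__attach_skeleton(s);    /* auto-attach, fill links */
    /* ... */
    bpf_object__destroy_skeleton(s);             /* detach, close, free */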
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-13-andriin@fb.com
|
|
It's quite spammy. And now that bpf_object__open() is trying to determine the
program type from its section name, we are getting these verbose messages all
the time. Reduce their log level to DEBUG.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-12-andriin@fb.com
|
|
Move BTF ID determination for BPF_PROG_TYPE_TRACING programs to the load phase.
Performing it at the open step is inconvenient, because it prevents BPF skeleton
generation on an older host kernel, which doesn't contain BTF_KIND_FUNC
information in vmlinux BTF. This is a common setup, though, when, e.g.,
selftests are compiled on an older host kernel, but the test program itself is
executed in a qemu VM with a bleeding edge kernel. Having this BTF search
performed at load time allows bpf_object__open() to be used successfully for
codegen and inspection of the BPF object file.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-11-andriin@fb.com
|
|
Refactor global data map initialization to use anonymous mmap()-ed memory
instead of malloc()-ed memory. This allows transparent re-mmap()-ing of an
already existing memory address to point to the BPF map's memory after the
bpf_object__load() step (done in a follow-up patch). This choreographed setup
provides a nice and unsurprising way for the user to pre-initialize read-only
(and r/w as well) maps and, after BPF map creation, keep working with the
mmap()-ed contents of the map, all in a way that doesn't require user code to
update any pointers: the illusion of working with memory contents is preserved
before and after actual BPF map instantiation.
Selftests and the runqslower example demonstrate this feature in follow-up patches.
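A minimal sketch of the mmap() choreography described above, assuming sz is the
page-aligned data section size and map_fd is the fd of a BPF_F_MMAPABLE array
created during load:

    /* at open time: back the data section with anonymous memory */
    void *mem = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    /* after load: map the BPF array's contents over the very same
     * address, so any pointers user code holds into this region
     * stay valid
     */
    void *remapped = mmap(mem, sz, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_FIXED, map_fd, 0);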
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-10-andriin@fb.com
|
|
Add APIs to get a BPF program's function name, as opposed to
bpf_program__title(), which returns the BPF program's section name. The
function name has the benefit of being a valid C identifier and uniquely
identifies a specific BPF program, while a section name can be shared across
multiple independent BPF programs.
Also add bpf_object__find_program_by_name(), similar to
bpf_object__find_program_by_title(), to facilitate looking up BPF programs by
their C function names.
Convert one of the selftests to the new lookup API.
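For illustration (the object and program names here are hypothetical):

    struct bpf_program *prog;

    prog = bpf_object__find_program_by_name(obj, "handle_sys_enter");
    if (prog)
        printf("prog name: %s\n", bpf_program__name(prog));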
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-9-andriin@fb.com
|
|
Expose an API that emits a type declaration and field/variable definition
(if an optional field name is specified) in valid C syntax for any provided BTF
type. This is going to be used by bpftool when emitting a data section layout
as a struct. As part of making this API useful in a stand-alone fashion, move
initialization of some of the internal btf_dump state to an earlier phase.
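A usage sketch, assuming d is an initialized btf_dump, type_id identifies an
int type, and output goes through the btf_dump's printf callback:

    DECLARE_LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts,
        .field_name = "my_var",
        .indent_level = 1,
    );

    /* emits something like "int my_var" */
    err = btf_dump__emit_type_decl(d, type_id, &opts);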
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-8-andriin@fb.com
|
|
Expose a BTF API that calculates type alignment requirements.
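A quick usage sketch (btf and type_id assumed to be set up elsewhere; the
return value is the alignment in bytes, or a negative error):

    int align = btf__align_of(btf, type_id);

    if (align > 0)
        printf("type #%u: %d-byte alignment\n", type_id, align);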
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-7-andriin@fb.com
|
|
LIBBPF_API and DECLARE_LIBBPF_OPTS are needed in many public libbpf API
headers. Extract them into libbpf_common.h to avoid unnecessary
interdependencies between btf.h, libbpf.h, and bpf.h, as well as code duplication.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-6-andriin@fb.com
|
|
Add a convenience macro BPF_EMBED_OBJ, which allows embedding other files
(typically BPF .o files) into a hosting userspace program. To the C program it
is exposed as a struct bpf_embed_data, containing a pointer to the raw data and
its size in bytes.
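A usage sketch, assuming the macro declares a <name>_embed variable of type
struct bpf_embed_data:

    BPF_EMBED_OBJ(probe, "attach_probe.prog.o");

    /* elsewhere: open the embedded object straight from memory */
    struct bpf_object *obj;

    obj = bpf_object__open_mem(probe_embed.data, probe_embed.size, NULL);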
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-5-andriin@fb.com
|
|
A few libbpf APIs are not public but are currently exposed through libbpf.h for
use by bpftool. Move them to libbpf_internal.h, where the intent of being
non-stable and non-public is much more obvious.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-4-andriin@fb.com
|
|
Generalize BPF program attaching and allow libbpf to auto-detect the program
type (and extra parameters, where applicable) and attach supported BPF program
types based on their program sections. Currently this is supported for:
- kprobe/kretprobe;
- tracepoint;
- raw tracepoint;
- tracing programs (typed raw TP/fentry/fexit).
Support for more types can be trivially added within this framework.
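The generic entry point then looks roughly like this (prog assumed to come
from a loaded bpf_object):

    struct bpf_link *link;

    link = bpf_program__attach(prog);   /* method picked via section name */
    if (libbpf_get_error(link))
        /* unsupported type: fall back to bpf_program__attach_<kind>() */
        link = NULL;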
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-3-andriin@fb.com
|
|
Reorganize the bpf_object__open and bpf_object__load steps such that
bpf_object__open doesn't need root access. Feature probing and BTF sanitization
were previously done on open; they don't have to happen there, though, so move
all those steps into the load phase.
This is important because it makes it possible for tools like bpftool to just
open a BPF object file and inspect its contents: programs, maps, BTF, etc. For
such operations, requiring root access is prohibitive. On the other hand, there
is a lot of custom libbpf logic in those steps, so it's best to avoid having
tools reimplement all of that on their own.
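A sketch of the unprivileged inspection this enables (file name hypothetical):

    struct bpf_object *obj;
    struct bpf_program *prog;

    obj = bpf_object__open("prog.o");           /* no root needed anymore */
    bpf_object__for_each_program(prog, obj)
        printf("section: %s\n", bpf_program__title(prog, false));
    bpf_object__close(obj);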
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191214014341.3442258-2-andriin@fb.com
|
|
Fedora's binutils has been patched to show "other info" for a symbol at the
end of the line. This was done in order to support unmaintained scripts
that would break with the extra info. [1]
[1] https://src.fedoraproject.org/rpms/binutils/c/b8265c46f7ddae23a792ee8306fbaaeacba83bf8
This in turn was done to fix the build of ruby, because of checksec.
[2] Thanks to Michael Ellerman for the pointer.
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1479302
As the libbpf Makefile is not unmaintained, we can simply deal with either
output format by just removing the "other info" field, as it always comes
inside brackets.
Fixes: 3464afdf11f9 (libbpf: Fix readelf output parsing on powerpc with recent binutils)
Reported-by: Justin Forbes <jmforbes@linuxtx.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Link: https://lore.kernel.org/bpf/20191213101114.GA3986@calabresa
|
|
This patch implements lookup by name for maps and changes the behavior of
lookups by tag to be consistent with prog subcommands. Similarly to
program subcommands, the show and dump commands will return all maps with
the given name (or tag), whereas other commands will error out if several
maps have the same name (resp. tag).
When a map has BTF info, it is dumped in JSON with the available BTF info.
This patch requires that all matched maps have BTF info before switching
the output format to JSON.
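For example, with a hypothetical map name (show behaves analogously):

    $ ./bpftool map dump name stats_map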
Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/8de1c9f273860b3ea1680502928f4da2336b853e.1576263640.git.paul.chaignon@gmail.com
|
|
When working with frequently modified BPF programs, both the ID and the
tag may change. bpftool currently doesn't provide a "stable" way to match
such programs.
This patch implements lookup by name for programs. The show and dump
commands will return all programs with the given name, whereas other
commands will error out if several programs have the same name.
Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Link: https://lore.kernel.org/bpf/b5fc1a5dcfaeb5f16fc80295cdaa606dd2d91534.1576263640.git.paul.chaignon@gmail.com
|
|
When several BPF programs have the same tag, bpftool matches only the
first (in ID order). This patch changes that behavior such that the dump and
show commands return all matched programs. Commands that require a single
program (e.g., pin and attach) will error out if given a tag that matches
several. bpftool prog dump will also error out if file or visual is
given and several programs have the given tag.
In the case of the dump command, a program header is added before each
dump only if the tag matches several programs; this patch doesn't change
the output if a single program matches. The output when several
programs match thus looks as follows:

    $ ./bpftool prog dump xlated tag 6deef7357e7b4530
    3: cgroup_skb  tag 6deef7357e7b4530  gpl
       0: (bf) r6 = r1
       [...]
       7: (95) exit

    4: cgroup_skb  tag 6deef7357e7b4530  gpl
       0: (bf) r6 = r1
       [...]
       7: (95) exit
Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/fb1fe943202659a69cd21dd5b907c205af1e1e22.1576263640.git.paul.chaignon@gmail.com
|
|
This test only works when [1] is applied, but that patch was rejected.
Basically, the errors are reported and cleared; in this particular case of
TLS sockets, subsequent reads will block.
The test case was originally submitted with the rejected patch, but was then
included as part of a different patchset, possibly by mistake.
[1] https://lore.kernel.org/netdev/20191007035323.4360-2-jakub.kicinski@netronome.com/#t
Thanks to Paolo Pisati for pointing out the original patchset where this
appeared.
Fixes: 65190f77424d (selftests/tls: add a test for fragmented messages)
Reported-by: Paolo Pisati <paolo.pisati@canonical.com>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
|
|
The SO_TXTIME test depends on accurate timers. In some virtualized
environments the test has been reported to be flaky. This is easily
reproduced by disabling kvm acceleration in Qemu.
Allow greater variance in a run and retry to further reduce flakiness.
Observed errors are one of two kinds: either the packet arrives too
early or late at recv(), or it was dropped in the qdisc itself and the
recv() call times out.
In the latter case, the qdisc queues a notification to the error
queue of the send socket. Also explicitly report this cause.
Link: https://lore.kernel.org/netdev/CA+FuTSdYOnJCsGuj43xwV1jxvYsaoa_LzHQF9qMyhrkLrivxKw@mail.gmail.com
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
|
|
Make sure we can pass arbitrary data in wire_len/gso_segs.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191213223028.161282-2-sdf@google.com
|
|
The xdp_perf is a dummy XDP test, only used to measure the cost of
jumping into a naive XDP program one million times.
To build and run the program:
$ cd tools/testing/selftests/bpf
$ make
$ ./test_progs -v -t xdp_perf
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191213175112.30208-6-bjorn.topel@gmail.com
|
|
Fix up perf_buffer.c selftest to take into account offline/missing CPUs.
Fixes: ee5cf82ce04a ("selftests/bpf: test perf buffer API")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212013621.1691858-1-andriin@fb.com
|
|
It's quite common on some systems to have more CPUs enlisted as "possible"
than there are (or could ever be) present/online CPUs. In such cases,
perf_buffer creation will fail due to the inability to create a perf event on a
missing CPU, with an error like this:
libbpf: failed to open perf buffer event on cpu #16: No such device
This patch fixes the logic of perf_buffer__new() to ignore CPUs that are
missing or currently offline. In rare cases where the user explicitly listed
specific CPUs to connect to, the behavior is unchanged: libbpf will try to open
the perf event buffer on the specified CPU(s) anyway.
Fixes: fb84b8224655 ("libbpf: add perf buffer API")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212013609.1691168-1-andriin@fb.com
|
|
Add a bunch of tests validating CPU mask parsing logic and error handling.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212013559.1690898-1-andriin@fb.com
|
|
This logic is re-used for parsing a set of online CPUs. Having it as an
isolated piece of code working on an input string makes it convenient to test
this logic as well. While refactoring, also improve the robustness of the
original implementation.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212013548.1690564-1-andriin@fb.com
|
|
The tests were originally written in abort-on-error style. With the switch
to test_progs we can no longer do that. So at the risk of not cleaning up
some resource on failure, we now return to the caller on error.
That said, failure inside one test should not affect others because we run
setup/cleanup before/after every test.
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-11-jakub@cloudflare.com
|
|
Do a pure move to show the actual work needed to adapt the tests in the
subsequent patch, at the cost of breaking the test_progs build for the moment.
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-10-jakub@cloudflare.com
|
|
Again, prepare for switching reuseport tests to test_progs framework.
test_progs framework will print the subtest name for us if we set it.
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-9-jakub@cloudflare.com
|
|
Prepare for switching reuseport tests to test_progs framework, where we
don't have the luxury to terminate the process on failure.
Modify setup helpers to signal failure via the return value with the help
of a macro similar to the one currently in use by the tests.
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-8-jakub@cloudflare.com
|
|
Prepare for switching reuseport tests to test_progs framework. Loop over
the tests and perform setup/cleanup for each test separately, remembering
that with test_progs we can select tests to run.
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-7-jakub@cloudflare.com
|
|
Prepare for iterating over individual tests without introducing another
nested loop in the main test function.
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-6-jakub@cloudflare.com
|
|
Having string arrays to map socket family & type to a name prevents us from
unrolling the test runner loop in the subsequent patch. Introduce helpers
that do the same thing.
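A sketch of what such helpers might look like (names illustrative):

    static const char *family_str(int family)
    {
        switch (family) {
        case AF_INET:  return "IPv4";
        case AF_INET6: return "IPv6";
        default:       return "unknown";
        }
    }

    static const char *sotype_str(int sotype)
    {
        switch (sotype) {
        case SOCK_STREAM: return "TCP";
        case SOCK_DGRAM:  return "UDP";
        default:          return "unknown";
        }
    }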
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-5-jakub@cloudflare.com
|
|
Update the only function that is not using sa_family_t in this source file.
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-4-jakub@cloudflare.com
|
|
Now that libbpf can recognize SK_REUSEPORT programs, we no longer have to
pass a prog_type hint before loading the object file.
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-3-jakub@cloudflare.com
|
|
Allow loading BPF object files that contain SK_REUSEPORT programs without
having to manually set the program type before load, if the section name
is set to "sk_reuseport".
This makes the user-space code needed to load an SK_REUSEPORT BPF program more concise.
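On the BPF side, annotating the program's section is then enough for libbpf to
pick BPF_PROG_TYPE_SK_REUSEPORT automatically; a minimal sketch:

    SEC("sk_reuseport")
    int select_socket(struct sk_reuseport_md *reuse_md)
    {
        return SK_PASS;
    }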
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191212102259.418536-2-jakub@cloudflare.com
|
|
On ppc64le, __u64 and __s64 are defined as unsigned long int and long int,
respectively. This causes the compiler to emit warnings when %llu/%lld are used
to print 64-bit numbers. Fix this by casting to size_t/ssize_t and using the
%zu and %zd format specifiers, respectively.
v1->v2:
- use size_t/ssize_t instead of custom typedefs (Martin).
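A minimal illustration of the pattern:

    __u64 insn_cnt = 42;

    /* compiles cleanly whether __u64 is 'unsigned long' or
     * 'unsigned long long' on the target architecture
     */
    printf("insn count: %zu\n", (size_t)insn_cnt);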
Fixes: 1f8e2bcb2cd5 ("libbpf: Refactor relocation handling")
Fixes: abd29c931459 ("libbpf: allow specifying map definitions using BTF")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191212171918.638010-1-andriin@fb.com
|
|
After commit d092a8707326 "arch: rely on asm-generic/io.h for default
ioremap_* definitions" the ioremap_nocache() symbol has been replaced
with ioremap(). Update the mocked symbol list for nvdimm testing.
Link: https://lore.kernel.org/r/157369090817.2974548.10148423996292973088.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: d092a8707326 ("arch: rely on asm-generic/io.h for default ioremap_* definitions")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
Add a simple test script to execute the function graph tracer while the BPF
trampoline attaches to and detaches from the functions being graph-traced.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191209000114.1876138-4-ast@kernel.org
|
|
On an old perl such as v5.10.1, `kselftest/prefix.pl` gives the error
message below:

    Can't locate object method "autoflush" via package "IO::Handle" at kselftest/prefix.pl line 10.
This commit fixes the error by explicitly specifying the use of the
`IO::Handle` package.
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|
|
If a timeout failure occurs, kselftest kills the test process and prints
the timeout log. If the test process is killed while printing a log that
doesn't yet end with a newline, the timeout log can be printed in the middle
of the test process output, so that it looks like a comment, as below:

    # test_process_log not ok 3 selftests: timers: nsleep-lat # TIMEOUT

This commit avoids the problem by printing one more newline before the
TIMEOUT failure log.
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|
|
Commit c78fd76f2b67 ("selftests: Move kselftest_module.sh into
kselftest/") moved kselftest_module.sh but missed updating a few
references to the path in documentation.
Fixes: c78fd76f2b67 ("selftests: Move kselftest_module.sh into kselftest/")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
|
|
Before this patch, perf expected that there might be at most NPROC*4 unique
cache entries; however, it also expected that some of them would be shared
and/or of the same size, so the final number of entries would end up lower
than NPROC*4. In case the number of entries hadn't been reduced (was
NPROC*4), a warning was printed.
However, some systems might have unusual cache topology, such as the
following two-processor KVM guest:
    cpu  level  shared_cpu_list  size
    0    1      0                32K
    0    1      0                64K
    0    2      0                512K
    0    3      0                8192K
    1    1      1                32K
    1    1      1                64K
    1    2      1                512K
    1    3      1                8192K
This KVM guest has 8 (NPROC*4) unique cache entries, which used to make
perf print the message, although there actually aren't "way too many
cpu caches".
v2: Removing unused argument.
v3: Unifying the way we obtain number of cpus.
v4: Removed '& UINT_MAX' construct which is redundant.
Signed-off-by: Michael Petlan <mpetlan@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
LPU-Reference: 20191208162056.20772-1-mpetlan@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Commit f01642e4912b ("perf metricgroup: Support multiple events for
metricgroup") introduced support for multiple events in a metric group.
But with the current upstream, metric event names are not printed
properly.
In power9 platform:
command:# ./perf stat --metric-only -M translation -C 0 -I 1000 sleep 2
1.000208486
2.000368863
2.001400558
Similarly in skylake platform:
command:./perf stat --metric-only -M Power -I 1000
1.000579994
2.002189493
The issue is with the event name comparison logic in find_evsel_group().
The current logic compares the events belonging to a metric group to the
events in perf_evlist. Since a break statement is missing in the loop used
for the comparison between metric group and perf_evlist events, the loop
continues to execute even after getting a pattern match, and ends up
discarding the matches.
In the case of a single metric event belonging to a metric group, it works
fine, because with a single event, by the time all events have been
compared, the end of perf_evlist has been reached.
Example for single metric event in power9 platform:
command:# ./perf stat --metric-only -M branches_per_inst -I 1000 sleep 1
1.000094653 0.2
1.001337059 0.0
This patch fixes the issue by adding a condition that breaks out of the
loop in find_evsel_group() once all events belonging to the metric group
have been matched.
With this patch:
In power9 platform:
command:# ./perf stat --metric-only -M translation -C 0 -I 1000 sleep 2
result:#
time derat_4k_miss_rate_percent derat_4k_miss_ratio derat_miss_ratio derat_64k_miss_rate_percent derat_64k_miss_ratio dslb_miss_rate_percent islb_miss_rate_percent
1.000135672 0.0 0.3 1.0 0.0 0.2 0.0 0.0
2.000380617 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Similarly in skylake platform:
command:# ./perf stat --metric-only -M Power -I 1000
result:#
time Turbo_Utilization C3_Core_Residency C6_Core_Residency C7_Core_Residency C2_Pkg_Residency C3_Pkg_Residency C6_Pkg_Residency C7_Pkg_Residency
1.000563580 0.3 0.0 2.6 44.2 21.9 0.0 0.0 0.0
2.002235027 0.4 0.0 2.7 43.0 20.7 0.0 0.0 0.0
Committer testing:
Before:
[root@seventh ~]# perf stat --metric-only -M Power -I 1000
# time
1.000383223
2.001168182
3.001968545
4.002741200
5.003442022
^C 5.777687244
[root@seventh ~]#
After the patch:
[root@seventh ~]# perf stat --metric-only -M Power -I 1000
# time Turbo_Utilization C3_Core_Residency C6_Core_Residency C7_Core_Residency C2_Pkg_Residency C3_Pkg_Residency C6_Pkg_Residency C7_Pkg_Residency
1.000406577 0.4 0.1 1.4 97.0 0.0 0.0 0.0 0.0
2.001481572 0.3 0.0 0.6 97.9 0.0 0.0 0.0 0.0
3.002332585 0.2 0.0 1.0 97.5 0.0 0.0 0.0 0.0
4.003196624 0.2 0.0 0.3 98.6 0.0 0.0 0.0 0.0
5.004063851 0.3 0.0 0.7 97.7 0.0 0.0 0.0 0.0
^C 5.471260276 0.2 0.0 0.5 49.3 0.0 0.0 0.0 0.0
[root@seventh ~]#
[root@seventh ~]# dmesg | grep -i skylake
[ 0.187807] Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
[root@seventh ~]#
Fixes: f01642e4912b ("perf metricgroup: Support multiple events for metricgroup")
Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
Reviewed-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20191120084059.24458-1-kjain@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Kernel Utilization should divide ref cycles spent in the kernel by total
ref cycles.
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Haiyan Song <haiyanx.song@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lore.kernel.org/lkml/20191204162121.29998-1-ravi.bangoria@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
'perf top' stopped working on hardware architectures that do not provide a
get_cpuid() implementation and thus fall back to the weak get_cpuid()
default function.
This happens because at annotation time we may need the cpuid in the
arch-specific annotation init routine, but that is only used by arches
that do provide a get_cpuid() implementation:
$ find tools/ -name "*.[ch]" | xargs grep 'evlist->env'
tools/perf/builtin-top.c: top.evlist->env = &perf_env;
tools/perf/util/evsel.c: return evsel->evlist->env;
tools/perf/util/s390-cpumsf.c: sf->machine_type = s390_cpumsf_get_type(session->evlist->env->cpuid);
tools/perf/util/header.c: session->evlist->env = &header->env;
tools/perf/util/sample-raw.c: const char *arch_pf = perf_env__arch(evlist->env);
$
$ find tools/perf/arch -name "*.[ch]" | xargs grep -w get_cpuid
tools/perf/arch/x86/util/auxtrace.c: ret = get_cpuid(buffer, sizeof(buffer));
tools/perf/arch/x86/util/header.c:get_cpuid(char *buffer, size_t sz)
tools/perf/arch/powerpc/util/header.c:get_cpuid(char *buffer, size_t sz)
tools/perf/arch/s390/util/header.c: * Implementation of get_cpuid().
tools/perf/arch/s390/util/header.c:int get_cpuid(char *buffer, size_t sz)
tools/perf/arch/s390/util/header.c: if (buf && get_cpuid(buf, 128))
$
For 'report' or 'script', i.e. tools working on perf.data files, that is
set up while reading the header; it's just 'perf top' that needs to
explicitly read it at tool start.
Fixes: 608127f73779 ("perf top: Initialize perf_env->cpuid, needed by the per arch annotation init routine")
Reported-by: John Garry <john.garry@huawei.com>
Analysed-by: Jiri Olsa <jolsa@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: John Garry <john.garry@huawei.com> # arm64
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/n/tip-lxwjr0cd2eggzx04a780ffrv@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|