path: root/tools
Age  Commit message  Author
2019-11-25  Merge tag 'for-5.5/block-20191121' of git://git.kernel.dk/linux-block  (Linus Torvalds)
Pull core block updates from Jens Axboe:
 "Due to more granular branches, this one is small and will be followed with
  other core branches that add specific features. I meant to just have a core
  and drivers branch, but due to external dependencies we ended up adding a few
  more that are also core.

  The changes are:

   - Fixes and improvements for the zoned device support (Ajay, Damien)

   - sed-opal table writing and datastore UID (Revanth)

   - blk-cgroup (and bfq) blk-cgroup stat fixes (Tejun)

   - Improvements to the block stats tracking (Pavel)

   - Fix for overrunning sysfs buffer for large number of CPUs (Ming)

   - Optimization for small IO (Ming, Christoph)

   - Fix typo in RWH lifetime hint (Eugene)

   - Dead code removal and documentation (Bart)

   - Reduction in memory usage for queue and tag set (Bart)

   - Kerneldoc header documentation (André)

   - Device/partition revalidation fixes (Jan)

   - Stats tracking for flush requests (Konstantin)

   - Various other little fixes here and there (et al)"

* tag 'for-5.5/block-20191121' of git://git.kernel.dk/linux-block: (48 commits)
  Revert "block: split bio if the only bvec's length is > SZ_4K"
  block: add iostat counters for flush requests
  block,bfq: Skip tracing hooks if possible
  block: sed-opal: Introduce SUM_SET_LIST parameter and append it using 'add_token_u64'
  blk-cgroup: cgroup_rstat_updated() shouldn't be called on cgroup1
  block: Don't disable interrupts in trigger_softirq()
  sbitmap: Delete sbitmap_any_bit_clear()
  blk-mq: Delete blk_mq_has_free_tags() and blk_mq_can_queue()
  block: split bio if the only bvec's length is > SZ_4K
  block: still try to split bio if the bvec crosses pages
  blk-cgroup: separate out blkg_rwstat under CONFIG_BLK_CGROUP_RWSTAT
  blk-cgroup: reimplement basic IO stats using cgroup rstat
  blk-cgroup: remove now unused blkg_print_stat_{bytes|ios}_recursive()
  blk-throtl: stop using blkg->stat_bytes and ->stat_ios
  bfq-iosched: stop using blkg->stat_bytes and ->stat_ios
  bfq-iosched: relocate bfqg_*rwstat*() helpers
  block: add zone open, close and finish ioctl support
  block: add zone open, close and finish operations
  block: Simplify REQ_OP_ZONE_RESET_ALL handling
  block: Remove REQ_OP_ZONE_RESET plugging
  ...
2019-11-25  Merge branch 'for-5.5/system-state' into for-linus  (Petr Mladek)
2019-11-25  Merge branch 'x86/core' into perf/core, to resolve conflicts and to pick up completed topic tree  (Ingo Molnar)
Conflicts:
	tools/perf/check-headers.sh

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-11-25  Merge branch 'perf/urgent' into perf/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-11-24  bpf: Introduce BPF_TRACE_x helper for the tracing tests  (Martin KaFai Lau)
For BPF_PROG_TYPE_TRACING, the bpf_prog's ctx is an array of u64. This patch borrows the idea from BPF_CALL_x in filter.h to convert a u64 to the arg type of the traced function. The new BPF_TRACE_x has an arg to specify the return type of a bpf_prog. It will be used in the future TCP-ops bpf_prog that may return "void". The new macros are defined in the new header file "bpf_trace_helpers.h". It is under selftests/bpf/ for now. It could be moved to libbpf later after seeing more upcoming non-tracing use cases. The tests are changed to use these new macros also. Hence, the k[s]u8/16/32/64 types are no longer needed and they are removed from bpf_helpers.h.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191123202504.1502696-1-kafai@fb.com
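To make the mechanism concrete, here is a simplified, hypothetical sketch of the idea (a fixed-arity variant; the real BPF_TRACE_x in bpf_trace_helpers.h is variadic and its details may differ, and the fentry target below is only illustrative):

  /* Simplified sketch: a tracing prog's ctx is an array of u64, and the
   * macro casts each slot to the traced function's argument types before
   * calling a typed body, BPF_CALL_x style. Assumes bpf_helpers.h for
   * SEC() and __always_inline.
   */
  #define BPF_TRACE_2(name, ret_type, t1, a1, t2, a2)             \
  static __always_inline ret_type ____##name(t1 a1, t2 a2);       \
  ret_type name(__u64 *ctx)                                        \
  {                                                                \
          return ____##name((t1)ctx[0], (t2)ctx[1]);               \
  }                                                                \
  static __always_inline ret_type ____##name(t1 a1, t2 a2)

  SEC("fentry/__set_task_comm")
  BPF_TRACE_2(on_set_task_comm, int, struct task_struct *, tsk, const char *, buf)
  {
          return 0;
  }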
2019-11-24  bpf, testing: Add various tail call test cases  (Daniel Borkmann)
Add several BPF kselftest cases for tail calls which test the various patch directions, and that multiple locations are patched in same and different programs.

  # ./test_progs -n 45
  #45/1 tailcall_1:OK
  #45/2 tailcall_2:OK
  #45/3 tailcall_3:OK
  #45/4 tailcall_4:OK
  #45/5 tailcall_5:OK
  #45 tailcalls:OK
  Summary: 1/5 PASSED, 0 SKIPPED, 0 FAILED

I've also verified the JITed dump after each of the rewrite cases that it matches expectations. Also regular test_verifier suite passes fine which contains further tail call tests:

  # ./test_verifier
  [...]
  Summary: 1563 PASSED, 0 SKIPPED, 0 FAILED

Checked under JIT, interpreter and JIT + hardening.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/3d6cbecbeb171117dccfe153306e479798fb608d.1574452833.git.daniel@iogearbox.net
2019-11-24  selftests/bpf: Add BPF trampoline performance test  (Alexei Starovoitov)
Add a test that benchmarks different ways of attaching BPF program to a kernel function. Here are the results for 2.4Ghz x86 cpu on a kernel without mitigations:

  $ ./test_progs -n 49 -v|grep events
  task_rename base        2743K events per sec
  task_rename kprobe      2419K events per sec
  task_rename kretprobe   1876K events per sec
  task_rename raw_tp      2578K events per sec
  task_rename fentry      2710K events per sec
  task_rename fexit       2685K events per sec

On a kernel with retpoline:

  $ ./test_progs -n 49 -v|grep events
  task_rename base        2401K events per sec
  task_rename kprobe      1930K events per sec
  task_rename kretprobe   1485K events per sec
  task_rename raw_tp      2053K events per sec
  task_rename fentry      2351K events per sec
  task_rename fexit       2185K events per sec

All 5 approaches:
 - kprobe/kretprobe in __set_task_comm()
 - raw tracepoint in trace_task_rename()
 - fentry/fexit in __set_task_comm()
are roughly equivalent.

__set_task_comm() by itself is quite fast, so any extra instructions add up. Until BPF trampoline was introduced the fastest mechanism was raw tracepoint. kprobe via ftrace was second best. kretprobe is slow due to trap. New fentry/fexit methods via BPF trampoline are clearly the fastest and the difference is more pronounced with retpoline on, since BPF trampoline doesn't use indirect jumps.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191122011515.255371-1-ast@kernel.org
2019-11-24  selftests/bpf: Add verifier tests for better jmp32 register bounds  (Yonghong Song)
Three test cases are added.

Test 1: jmp32 'reg op imm'.
Test 2: jmp32 'reg op reg' where dst 'reg' has unknown constant and src 'reg' has known constant.
Test 3: jmp32 'reg op reg' where dst 'reg' has known constant and src 'reg' has unknown constant.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191121170651.449096-1-yhs@fb.com
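For reference, a hedged sketch of what a jmp32 'reg op imm' sequence looks like when written with the standard BPF insn macros from include/linux/filter.h (illustrative fragment only, not one of the three cases added here):

  /* A 32-bit compare against an immediate, from which the verifier can
   * derive 32-bit bounds for w0. Assumes the BPF_* insn macros from
   * include/linux/filter.h.
   */
  struct bpf_insn insns[] = {
          BPF_MOV64_IMM(BPF_REG_0, 0),
          BPF_JMP32_IMM(BPF_JLT, BPF_REG_0, 10, 1),  /* if w0 < 10 goto +1 */
          BPF_MOV64_IMM(BPF_REG_0, 1),
          BPF_EXIT_INSN(),
  };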
2019-11-24  selftests/bpf: Ensure core_reloc_kernel is reading test_progs's data only  (Andrii Nakryiko)
test_core_reloc_kernel.c selftest is the only CO-RE test that reads and returns the calling thread's information (pid, tgid, comm) for validation. Thus it has to make sure that only test_progs' own invocations are honored.

Fixes: df36e621418b ("selftests/bpf: add CO-RE relocs testing setup")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191121175900.3486133-1-andriin@fb.com
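One common way to enforce that (a hedged sketch; the selftest's actual plumbing and names may differ) is for the BPF side to compare the current pid against a pid that test_progs fills in before attaching:

  /* Illustrative BPF C sketch, assumes bpf_helpers.h; 'my_pid' is a made-up
   * global that userspace (test_progs) sets to its own pid before loading.
   */
  const volatile int my_pid = 0;

  SEC("raw_tracepoint/sys_enter")
  int handle_sys_enter(void *ctx)
  {
          int pid = bpf_get_current_pid_tgid() >> 32;

          if (pid != my_pid)
                  return 0;       /* ignore other processes' invocations */

          /* ... record pid/tgid/comm for the test to validate ... */
          return 0;
  }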
2019-11-24  libbpf: Fix bpf_object name determination for bpf_object__open_file()  (Andrii Nakryiko)
If bpf_object__open_file() gets a path like "some/dir/obj.o", it should derive the BPF object's name as "obj" (unless overridden through opts->object_name). Instead, due to using `path` as a fallback value for opts->obj_name, the path is used as is for the object name, so for the above example the BPF object's name will be verbatim "some/dir/obj", which leads to all sorts of troubles, especially where internal maps are concerned (they use up to 8 characters of the object name). Fix that by ensuring object_name stays NULL, unless overridden.

Fixes: 291ee02b5e40 ("libbpf: Refactor bpf_object__open APIs to use common opts")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191122003527.551556-1-andriin@fb.com
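For illustration, the intended derivation ("some/dir/obj.o" -> "obj") is just a basename plus extension strip; a hedged standalone sketch, not libbpf's actual helper:

  #include <libgen.h>
  #include <string.h>

  /* Illustrative only: copy "obj" out of "some/dir/obj.o". */
  static void object_name_from_path(const char *path, char *name, size_t sz)
  {
          char buf[256];
          char *base, *dot;

          strncpy(buf, path, sizeof(buf) - 1);
          buf[sizeof(buf) - 1] = '\0';
          base = basename(buf);           /* "obj.o" (basename may modify buf) */

          strncpy(name, base, sz - 1);
          name[sz - 1] = '\0';
          dot = strrchr(name, '.');
          if (dot)
                  *dot = '\0';            /* drop the ".o" suffix */
  }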
2019-11-24  libbpf: Support initialized global variables  (Andrii Nakryiko)
Initialized global variables are no different in ELF from static variables, and don't require any extra support from libbpf. But they match the semantics of global data (backed by BPF maps) more closely, preventing LLVM/Clang from aggressively inlining constant values and not requiring volatile incantations to prevent that. This patch enables global variables. It still disallows uninitialized variables, which would be put into the special COM (common) ELF section, because BPF doesn't allow uninitialized data to be accessed.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191121070743.1309473-5-andriin@fb.com
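A minimal, hypothetical BPF C example of what this change permits (not taken from the patch):

  /* Assumes bpf_helpers.h for SEC(). Initialized globals now behave like
   * static globals: both live in global-data sections backed by BPF maps.
   */
  int enable_counting = 1;        /* initialized global: accepted after this patch */
  static long counter = 0;        /* static global: already supported */
  /* long scratch; */             /* uninitialized global: still rejected (COM section) */

  SEC("tracepoint/syscalls/sys_enter_write")
  int count_writes(void *ctx)
  {
          if (enable_counting)
                  counter++;
          return 0;
  }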
2019-11-24  libbpf: Fix various errors and warning reported by checkpatch.pl  (Andrii Nakryiko)
Fix a bunch of warnings and errors reported by checkpatch.pl, to make it easier to spot new problems. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191121070743.1309473-4-andriin@fb.com
2019-11-24  libbpf: Refactor relocation handling  (Andrii Nakryiko)
Relocation handling code is convoluted and unnecessarily deeply nested. Split out per-relocation logic into a separate function. Also refactor the logic to be more a sequence of per-relocation type checks and processing steps, making it simpler to follow the control flow. This makes it easier to further extend it to new kinds of relocations (e.g., support for extern variables).

This patch also makes relocation's section verification more robust. Previously, relocations against not yet supported externs were silently ignored because obj->efile.text_shndx was zero when all BPF programs had custom section names and there was no .text section. Also, invalid LDIMM64 relocations against non-map sections were passed through if they were pointing to a .text section (or 0, which is an invalid section). All these bugs are fixed within this refactoring and the checks are made more appropriate for each type of relocation.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191121070743.1309473-3-andriin@fb.com
2019-11-24  selftests/bpf: Ensure no DWARF relocations for BPF object files  (Andrii Nakryiko)
Add -mattr=dwarfris attribute to llc to avoid having relocations against DWARF data. These relocations make it impossible to inspect DWARF contents: all strings are invalid. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191121070743.1309473-2-andriin@fb.com
2019-11-24  selftests/bpf: Integrate verbose verifier log into test_progs  (Andrii Nakryiko)
Add an extra level of verbosity, activated by the -vvv argument. When -vv is specified, verbose libbpf and verifier log (level 1) is output, even for successful tests. With -vvv, the verifier log goes to level 2. This is extremely useful to debug verifier failures, as well as to just see the state and flow of verification. Before this, you'd have to go and modify load_program()'s source code inside libbpf to specify extra log_level flags, which is suboptimal to say the least. Currently -vv and -vvv triggering verifier output is integrated into test_stub's bpf_prog_load as well as the bpf_verif_scale.c tests.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191120003548.4159797-1-andriin@fb.com
2019-11-24  selftests, bpftool: Skip the build test if not in tree  (Jakub Kicinski)
If selftests are copied over to another machine/location for execution the build test of bpftool will obviously not work, since the sources are not copied. Skip it if we can't find bpftool's Makefile. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191119105010.19189-3-quentin.monnet@netronome.com
2019-11-24  selftests, bpftool: Set EXIT trap after usage function  (Quentin Monnet)
The trap on EXIT is used to clean up any temporary directory left by the build attempts. It is not needed when the user simply calls the script with its --help option, and may not be needed either if we add checks (e.g. on the availability of bpftool files) before the build attempts. Let's move this trap and related variables lower down in the code, so that we don't accidentally change the value returned from the script on early exits at pre-checks. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Link: https://lore.kernel.org/bpf/20191119105010.19189-2-quentin.monnet@netronome.com
2019-11-24  tools, bpf: Fix build for 'make -s tools/bpf O=<dir>'  (Quentin Monnet)
Building selftests with 'make TARGETS=bpf kselftest' was fixed in commit 55d554f5d140 ("tools: bpf: Use !building_out_of_srctree to determine srctree"). However, by updating $(srctree) in tools/bpf/Makefile for in-tree builds only, we leave out the case where we pass an output directory to build BPF tools, but $(srctree) is not set. This typically happens for:

  $ make -s tools/bpf O=/tmp/foo
  Makefile:40: /tools/build/Makefile.feature: No such file or directory

Fix it by updating $(srctree) in the Makefile not only for out-of-tree builds, but also if $(srctree) is empty. Detected with test_bpftool_build.sh.

Fixes: 55d554f5d140 ("tools: bpf: Use !building_out_of_srctree to determine srctree")
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Link: https://lore.kernel.org/bpf/20191119105626.21453-1-quentin.monnet@netronome.com
2019-11-24  tools, bpftool: Fix warning on ignored return value for 'read'  (Quentin Monnet)
When building bpftool, a warning was introduced by commit a94364603610 ("bpftool: Allow to read btf as raw data"), because the return value from a call to 'read()' is ignored. Let's address it. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191119111706.22440-1-quentin.monnet@netronome.com
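The warning in question is the usual unused-result complaint on read(2); a hedged sketch of the kind of check that addresses it (illustrative, not the exact bpftool code):

  #include <stdio.h>
  #include <unistd.h>

  /* Illustrative: consume read()'s return value and handle errors/short reads. */
  static int read_exact(int fd, void *buf, size_t nbytes)
  {
          size_t off = 0;

          while (off < nbytes) {
                  ssize_t n = read(fd, (char *)buf + off, nbytes - off);

                  if (n < 0) {
                          perror("read");
                          return -1;
                  }
                  if (n == 0)
                          break;          /* unexpected end of file */
                  off += n;
          }
          return off == nbytes ? 0 : -1;
  }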
2019-11-22  Merge branch 'next/seccomp' into for-next  (Paul Walmsley)
2019-11-22  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Minor conflict in drivers/s390/net/qeth_l2_main.c, kept the lock from commit c8183f548902 ("s390/qeth: fix potential deadlock on workqueue flush"), removed the code which was removed by commit 9897d583b015 ("s390/qeth: consolidate some duplicated HW cmd code"). Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
2019-11-22  perf parse: Fix potential memory leak when handling tracepoint errors  (Ian Rogers)
An error may already be in place when tracepoint_error is called; use parse_events__handle_error to avoid a memory leak and to capture the first and last error. Error detected by LLVM's libFuzzer using the following event:

  $ perf stat -e 'msr/event/,f:e'
  event syntax error: 'msr/event/,f:e'
                                  \___ can't access trace events

  Error: No permissions to read /sys/kernel/debug/tracing/events/f/e
  Hint:  Try 'sudo mount -o remount,mode=755 /sys/kernel/debug/tracing/'

  Initial error:
  event syntax error: 'msr/event/,f:e'
                                  \___ no value assigned for term
  Run 'perf list' for a list of valid events

   Usage: perf stat [<options>] [<command>]

      -e, --event <event>   event selector. use 'perf list' to list available events

Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: clang-built-linux@googlegroups.com
Link: http://lore.kernel.org/lkml/20191120180925.21787-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf probe: Fix spelling mistake "addrees" -> "address"  (Colin Ian King)
There is a spelling mistake in a pr_warning message. Fix it. Signed-off-by: Colin King <colin.king@canonical.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: kernel-janitors@vger.kernel.org Link: http://lore.kernel.org/lkml/20191121092623.374896-1-colin.king@canonical.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  libtraceevent: Fix memory leakage in copy_filter_type  (Hewenliang)
It is necessary to free the memory that we have allocated when an error occurs.

Fixes: ef3072cd1d5c ("tools lib traceevent: Get rid of die in add_filter_type()")
Signed-off-by: Hewenliang <hewenliang4@huawei.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Tzvetomir Stoyanov <tstoyanov@vmware.com>
Link: http://lore.kernel.org/lkml/20191119014415.57210-1-hewenliang4@huawei.com
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
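The shape of the fix is the classic error-path cleanup; a small, hedged illustration with made-up identifiers (not the actual libtraceevent code):

  #include <errno.h>
  #include <stdlib.h>
  #include <string.h>

  /* Illustrative only: memory allocated earlier in the function is freed on
   * the error path instead of being leaked.
   */
  static int store_filter_string(char **slot, const char *str)
  {
          char *copy = strdup(str);

          if (!copy)
                  return -ENOMEM;
          if (copy[0] == '\0') {          /* stand-in for "error occurs" */
                  free(copy);             /* <-- the free this kind of fix adds */
                  return -EINVAL;
          }
          *slot = copy;
          return 0;
  }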
2019-11-22  libtraceevent: Fix header installation  (Sudip Mukherjee)
When we passed some location in DESTDIR, install_headers called do_install with DESTDIR as part of the second argument. But do_install is again using '$(DESTDIR_SQ)$2', so as a result the headers were installed in a location $DESTDIR/$DESTDIR. In my testing I passed DESTDIR=/home/sudip/test and the headers were installed in: /home/sudip/test/home/sudip/test/usr/include/traceevent. Let's remove DESTDIR from the second argument of do_install so that the headers are installed in the correct location.

Signed-off-by: Sudipm Mukherjee <sudipm.mukherjee@gmail.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Sudipm Mukherjee <sudipm.mukherjee@gmail.com>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lore.kernel.org/lkml/20191114133719.309-1-sudipm.mukherjee@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf intel-bts: Does not support AUX area sampling  (Adrian Hunter)
Add an error message because Intel BTS does not support AUX area sampling. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-16-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf intel-pt: Add support for decoding AUX area samples  (Adrian Hunter)
Add support for dumping, queuing and decoding AUX area samples. Decoding samples is the same as regular decoding, except in the case where there are no timestamps, in which case buffers are decoded immediately before the sample event. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-15-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf intel-pt: Add support for recording AUX area samples  (Adrian Hunter)
Set up the default number of mmap pages, default sample size and default psb_period for AUX area sampling. Add documentation also. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-14-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf pmu: When using default config, record which bits of config were changed by the user  (Adrian Hunter)
Default config for a PMU is defined before selected events are parsed. That allows the user-entered config to override the default config. However that does not allow for changing the default config based on other options. For example, if the user chooses AUX area sampling mode, in the case of Intel PT, the psb_period needs to be small for sampling, so there is a need to set the default psb_period to 0 (2 KiB) in that case. However that should not override a value set by the user. To allow for that, when using default config, record which bits of config were changed by the user.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20191115124225.5247-13-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
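Conceptually this boils down to keeping a mask of config bits the user touched and letting option-driven defaults fill in only the rest; a hedged sketch of that idea (illustrative names, not perf's actual code):

  #include <stdint.h>

  /* Illustrative: remember which attr.config bits came from the user so that
   * later, option-driven defaults never clobber them.
   */
  struct pmu_cfg {
          uint64_t config;
          uint64_t user_set_mask;         /* bits explicitly set on the command line */
  };

  static void apply_user_term(struct pmu_cfg *c, uint64_t val, uint64_t mask)
  {
          c->config = (c->config & ~mask) | (val & mask);
          c->user_set_mask |= mask;
  }

  static void apply_default(struct pmu_cfg *c, uint64_t val, uint64_t mask)
  {
          mask &= ~c->user_set_mask;      /* only touch bits the user didn't set */
          c->config = (c->config & ~mask) | (val & mask);
  }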
2019-11-22  perf auxtrace: Add support for queuing AUX area samples  (Adrian Hunter)
Add functions to queue AUX area samples in advance (auxtrace_queue_data()) or individually (auxtrace_queues__add_sample()) or find out what queue a sample belongs on (auxtrace_queues__sample_queue()). auxtrace_queue_data() can also queue snapshot data which keeps snapshots and samples ordered with respect to each other in case support for that is desired. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-12-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf session: Add facility to peek at all events  (Adrian Hunter)
AUX area samples are not limited in how far back in time the sample could start. Consequently samples must be queued in advance to allow for time-ordered processing. To achieve that, add perf_session__peek_events() that walks and peeks at all the events. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-11-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf auxtrace: Add support for dumping AUX area samples  (Adrian Hunter)
Add support for dumping AUX area samples i.e. via the perf script/report -D (--dump-raw-trace) option. Committer notes: Add __maybe_unused to the two args for auxtrace__dump_auxtrace_sample() for when we don't HAVE_AUXTRACE_SUPPORT. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-10-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf inject: Cut AUX area samples  (Adrian Hunter)
After decoding AUX area samples, the AUX area data is no longer needed (having been replaced by synthesized events) so cut it out. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-9-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf record: Add aux-sample-size config term  (Adrian Hunter)
To allow individual events to be selected for AUX area sampling, add aux-sample-size config term. attr.aux_sample_size is updated by auxtrace_parse_sample_options() so that the existing validation will see the value. Any event that has a non-zero aux_sample_size will cause AUX area sampling to be configured, irrespective of the --aux-sample option. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-8-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf record: Add support for AUX area sampling  (Adrian Hunter)
Add a 'perf record' option '--aux-sample' to request AUX area sampling. AUX area sampling uses an overwriting buffer much like snapshot mode, so adjust the AUX buffer mmapping accordingly. To make it easy to queue samples for decoding, synthesize an ID index. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-7-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf auxtrace: Add support for AUX area sample recording  (Adrian Hunter)
Add support for parsing and validating AUX area sample options. At present, the only option is the sample size, but it is also necessary to ensure that events are in a group with an AUX area event as the leader. Committer note: Add missing 'static inline' in front of auxtrace_parse_sample_options() for when we don't HAVE_AUXTRACE_SUPPORT. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-6-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf auxtrace: Move perf_evsel__find_pmu()  (Adrian Hunter)
Move perf_evsel__find_pmu() so it can be used without forward declaration. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191115124225.5247-5-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-22  perf record: Add a function to test for kernel support for AUX area sampling  (Adrian Hunter)
Architectures are expected to know if AUX area sampling is supported by the hardware. Add a function perf_can_aux_sample() which will determine whether the kernel supports it.

Committer notes:

I reported that this message was taking place on a kernel without the required bits:

  # perf record --aux-sample -e '{intel_pt//u,branch-misses:u}'
  Error:
  The sys_perf_event_open() syscall returned with 7 (Argument list too long) for event (branch-misses:u).
  /bin/dmesg | grep -i perf may provide additional information.

Adrian sent a patch addressing it, with this explanation:

----
perf_can_aux_sample_size() always returned true because it did not pass the attribute size to sys_perf_event_open, nor correctly check the return value and errno.
----

After applying it I get, later in the series, when --aux-sample is added:

  # perf record --aux-sample -e '{intel_pt//u,branch-misses:u}'
  AUX area sampling is not supported by kernel

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20191115124225.5247-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
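Following the explanation above, a working probe has to pass the real attribute size and treat E2BIG as "kernel too old"; a hedged standalone sketch (assumes a linux/perf_event.h that already defines aux_sample_size; perf's real helper may differ in details):

  #include <errno.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>

  /* Illustrative probe: an old kernel sees an attr larger than it knows,
   * with non-zero trailing bytes (aux_sample_size), and fails with E2BIG.
   */
  static int can_aux_sample(void)
  {
          struct perf_event_attr attr;
          int fd;

          memset(&attr, 0, sizeof(attr));
          attr.size = sizeof(attr);       /* must pass the real size */
          attr.exclude_kernel = 1;
          attr.aux_sample_size = 1;       /* the new field being probed */

          fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
          if (fd < 0 && errno == E2BIG)
                  return 0;               /* kernel doesn't know the field */
          if (fd >= 0)
                  close(fd);
          return 1;
  }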
2019-11-21  tools: hv: add vmbus testing tool  (Branden Bonaby)
This is a userspace tool to drive the testing. Currently it supports introducing user specified delay in the host to guest communication path on a per-channel basis. Signed-off-by: Branden Bonaby <brandonbonaby94@gmail.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-11-21  selftests/x86/sigreturn/32: Invalidate DS and ES when abusing the kernel  (Andy Lutomirski)
If the kernel accidentally uses DS or ES while the user values are loaded, it will work fine for sane userspace. In the interest of simulating maximally insane userspace, make sigreturn_32 zero out DS and ES for the nasty parts so that inadvertent use of these segments will crash. Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@kernel.org
2019-11-21  selftests/x86/mov_ss_trap: Fix the SYSENTER test  (Andy Lutomirski)
For reasons that I haven't quite fully diagnosed, running mov_ss_trap_32 on a 32-bit kernel results in an infinite loop in userspace. This appears to be because the hacky SYSENTER test doesn't segfault as desired; instead it corrupts the program state such that it loops forever. Fix it by explicitly clearing EBP before doing SYSENTER. This will give a more reliable segfault.

Fixes: 59c2a7226fc5 ("x86/selftests: Add mov_to_ss test")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@kernel.org
2019-11-21  Merge tag 'gpio-v5.4-5' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio  (Linus Torvalds)
Pull GPIO fixes from Linus Walleij:
 "A last set of small fixes for GPIO, this cycle was quite busy.

  - Fix debounce delays on the MAX77620 GPIO expander

  - Use the correct unit for debounce times on the BD70528 GPIO expander

  - Get proper deps for parallel builds of the GPIO tools

  - Add a specific ACPI quirk for the Terra Pad 1061"

* tag 'gpio-v5.4-5' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
  gpiolib: acpi: Add Terra Pad 1061 to the run_edge_events_on_boot_blacklist
  tools: gpio: Correctly add make dependencies for gpio_utils
  gpio: bd70528: Use correct unit for debounce times
  gpio: max77620: Fixup debounce delays
2019-11-21  perf tools: Add kernel AUX area sampling definitions  (Adrian Hunter)
Add kernel AUX area sampling definitions, which brings perf_event.h into line with the kernel version. New sample type PERF_SAMPLE_AUX requests a sample of the AUX area buffer. New perf_event_attr member 'aux_sample_size' specifies the desired size of the sample. Also add support for parsing samples containing AUX area data i.e. PERF_SAMPLE_AUX.

Committer notes:

I squashed the first two patches in this series to avoid breaking automatic bisection, i.e. after applying only the original first patch in this series we would have:

  # perf test -v parsing
  26: Sample parsing :
  --- start ---
  test child forked, pid 17018
  sample format has changed, some new PERF_SAMPLE_ bit was introduced - test needs updating
  test child finished with -1
  ---- end ----
  Sample parsing: FAILED!
  #

With the two patches combined:

  # perf test parsing
  26: Sample parsing : Ok
  #

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20191115124225.5247-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
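To make the new definitions concrete, this is roughly how a consumer would ask for AUX area data in each sample (a hedged sketch; assumes a linux/perf_event.h new enough to define PERF_SAMPLE_AUX and aux_sample_size, and the values shown are illustrative):

  #include <string.h>
  #include <linux/perf_event.h>

  /* Sketch: request that every sample carries up to 8 KiB of AUX area data.
   * The event still has to be grouped with an AUX area event (e.g. intel_pt)
   * acting as group leader.
   */
  static void setup_aux_sampling(struct perf_event_attr *attr)
  {
          memset(attr, 0, sizeof(*attr));
          attr->size = sizeof(*attr);
          attr->sample_type = PERF_SAMPLE_TID | PERF_SAMPLE_TIME | PERF_SAMPLE_AUX;
          attr->aux_sample_size = 8192;   /* desired size of the AUX snapshot */
          attr->sample_period = 100000;   /* illustrative period */
  }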
2019-11-21  tools/power/x86/intel-speed-select: Display TRL buckets for just base config level  (Srinivas Pandruvada)
When only the base config level is present, this tool displays TRL (Turbo-ratio-limits) by reading the legacy MSR. In this case, also present the core count for TRL by reading MSR 0x1AE.

Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
2019-11-21  tools/power/x86/intel-speed-select: Ignore missing config level  (Srinivas Pandruvada)
It is possible that certain config levels are not available, even if the max level includes them. There can be missing levels on some platforms. So ignore a missing level when dumping information for all levels, and fail only if the user specifically asks for the missing level. The change here is to continue reading information about other levels even if we fail to get information for the current level, but use the "processed" flag to indicate the failure. When the "processed" flag is not set, don't dump information about that level.

Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
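The control flow described above amounts to "keep going on error, remember what succeeded"; a hedged sketch with made-up names (read_level_info()/print_level_info() are illustrative stand-ins, not the tool's functions):

  /* Illustrative sketch of the "processed" flag logic. */
  struct level_info {
          int level;
          int processed;                  /* 1 if this level's info was read OK */
          /* ... per-level data ... */
  };

  int read_level_info(struct level_info *li);          /* hypothetical */
  void print_level_info(const struct level_info *li);  /* hypothetical */

  static void dump_all_levels(struct level_info *levels, int max_level)
  {
          for (int i = 0; i <= max_level; i++) {
                  /* keep reading the remaining levels even if one fails */
                  levels[i].processed = (read_level_info(&levels[i]) == 0);
          }
          for (int i = 0; i <= max_level; i++) {
                  if (!levels[i].processed)
                          continue;       /* don't dump a level we failed to read */
                  print_level_info(&levels[i]);
          }
  }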
2019-11-21  Merge branch 'kvm-tsx-ctrl' into HEAD  (Paolo Bonzini)
Conflicts: arch/x86/kvm/vmx/vmx.c
2019-11-21  selftests/powerpc: spectre_v2 test must be built 64-bit  (Michael Ellerman)
The spectre_v2 test must be built 64-bit, it includes hand-written asm that is 64-bit only, and segfaults if built 32-bit. Fixes: c790c3d2b0ec ("selftests/powerpc: Add a test of spectre_v2 mitigations") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191120023924.13130-1-mpe@ellerman.id.au
2019-11-20  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Daniel Borkmann says:

====================
pull-request: bpf-next 2019-11-20

The following pull-request contains BPF updates for your *net-next* tree.

We've added 81 non-merge commits during the last 17 day(s) which contain a total of 120 files changed, 4958 insertions(+), 1081 deletions(-).

There are 3 trivial conflicts, resolve them by always taking the chunk from 196e8ca74886c433:

<<<<<<< HEAD
=======
void *bpf_map_area_mmapable_alloc(u64 size, int numa_node);
>>>>>>> 196e8ca74886c433dcfc64a809707074b936aaf5

<<<<<<< HEAD
void *bpf_map_area_alloc(u64 size, int numa_node)
=======
static void *__bpf_map_area_alloc(u64 size, int numa_node, bool mmapable)
>>>>>>> 196e8ca74886c433dcfc64a809707074b936aaf5

<<<<<<< HEAD
        if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
=======
        /* kmalloc()'ed memory can't be mmap()'ed */
        if (!mmapable && size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
>>>>>>> 196e8ca74886c433dcfc64a809707074b936aaf5

The main changes are:

1) Addition of BPF trampoline which works as a bridge between kernel functions, BPF programs and other BPF programs along with two new use cases: i) fentry/fexit BPF programs for tracing with practically zero overhead to call into BPF (as opposed to k[ret]probes) and ii) attachment of the former to networking related programs to see input/output of networking programs (covering xdpdump use case), from Alexei Starovoitov.

2) BPF array map mmap support and use in libbpf for global data maps; also a big batch of libbpf improvements, among others, support for reading bitfields in a relocatable manner (via libbpf's CO-RE helper API), from Andrii Nakryiko.

3) Extend s390x JIT with usage of relative long jumps and loads in order to lift the current 64/512k size limits on JITed BPF programs there, from Ilya Leoshkevich.

4) Add BPF audit support and emit messages upon successful prog load and unload in order to have a timeline of events, from Daniel Borkmann and Jiri Olsa.

5) Extension to libbpf and xdpsock sample programs to demo the shared umem mode (XDP_SHARED_UMEM) as well as RX-only and TX-only sockets, from Magnus Karlsson.

6) Several follow-up bug fixes for libbpf's auto-pinning code and a new API call named bpf_get_link_xdp_info() for retrieving the full set of prog IDs attached to XDP, from Toke Høiland-Jørgensen.

7) Add BTF support for array of int, array of struct and multidimensional arrays and enable it for skb->cb[] access in kfree_skb test, from Martin KaFai Lau.

8) Fix AF_XDP by using the correct number of channels from ethtool, from Luigi Rizzo.

9) Two fixes for BPF selftest to get rid of a hang in test_tc_tunnel and to avoid xdping to be run as standalone, from Jiri Benc.

10) Various BPF selftest fixes when run with latest LLVM trunk, from Yonghong Song.

11) Fix a memory leak in BPF fentry test run data, from Colin Ian King.

12) Various smaller misc cleanups and improvements mostly all over BPF selftests and samples, from Daniel T. Lee, Andre Guedes, Anders Roxell, Mao Wenan, Yue Haibing.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-19  selftests/bpf: Enforce no-ALU32 for test_progs-no_alu32  (Andrii Nakryiko)
With the most recent Clang, alu32 is enabled by default if -mcpu=probe or -mcpu=v3 is specified. Use a separate build rule with -mcpu=v2 to enforce no ALU32 mode. Suggested-by: Yonghong Song <yhs@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191120002510.4130605-1-andriin@fb.com
2019-11-19  libbpf: Fix call relocation offset calculation bug  (Andrii Nakryiko)
When relocating a subprogram call, libbpf doesn't take into account relo->text_off, which comes from the symbol's value. This generally works fine for subprograms implemented as static functions, but breaks for global functions. Taking a simplified test_pkt_access.c as an example:

  __attribute__ ((noinline))
  static int test_pkt_access_subprog1(volatile struct __sk_buff *skb)
  {
          return skb->len * 2;
  }

  __attribute__ ((noinline))
  static int test_pkt_access_subprog2(int val, volatile struct __sk_buff *skb)
  {
          return skb->len + val;
  }

  SEC("classifier/test_pkt_access")
  int test_pkt_access(struct __sk_buff *skb)
  {
          if (test_pkt_access_subprog1(skb) != skb->len * 2)
                  return TC_ACT_SHOT;
          if (test_pkt_access_subprog2(2, skb) != skb->len + 2)
                  return TC_ACT_SHOT;
          return TC_ACT_UNSPEC;
  }

When compiled, we get two relocations, pointing to the '.text' symbol. .text has st_value set to 0 (it points to the beginning of the .text section):

  0000000000000008  000000050000000a R_BPF_64_32  0000000000000000 .text
  0000000000000040  000000050000000a R_BPF_64_32  0000000000000000 .text

test_pkt_access_subprog1 and test_pkt_access_subprog2 offsets (targets of the two calls) are encoded within the call instruction's imm32 part as -1 and 2, respectively:

  0000000000000000 test_pkt_access_subprog1:
         0:       61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0)
         1:       64 00 00 00 01 00 00 00 w0 <<= 1
         2:       95 00 00 00 00 00 00 00 exit
  0000000000000018 test_pkt_access_subprog2:
         3:       61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0)
         4:       04 00 00 00 02 00 00 00 w0 += 2
         5:       95 00 00 00 00 00 00 00 exit
  0000000000000000 test_pkt_access:
         0:       bf 16 00 00 00 00 00 00 r6 = r1
  ===>   1:       85 10 00 00 ff ff ff ff call -1
         2:       bc 01 00 00 00 00 00 00 w1 = w0
         3:       b4 00 00 00 02 00 00 00 w0 = 2
         4:       61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0)
         5:       64 02 00 00 01 00 00 00 w2 <<= 1
         6:       5e 21 08 00 00 00 00 00 if w1 != w2 goto +8 <LBB0_3>
         7:       bf 61 00 00 00 00 00 00 r1 = r6
  ===>   8:       85 10 00 00 02 00 00 00 call 2
         9:       bc 01 00 00 00 00 00 00 w1 = w0
        10:       61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0)
        11:       04 02 00 00 02 00 00 00 w2 += 2
        12:       b4 00 00 00 ff ff ff ff w0 = -1
        13:       1e 21 01 00 00 00 00 00 if w1 == w2 goto +1 <LBB0_3>
        14:       b4 00 00 00 02 00 00 00 w0 = 2
  0000000000000078 LBB0_3:
        15:       95 00 00 00 00 00 00 00 exit

Now, if we compile the example with global functions, the setup changes. Relocations are now specifically against the test_pkt_access_subprog1 and test_pkt_access_subprog2 symbols, with test_pkt_access_subprog2 pointing 24 bytes into its respective section (.text), i.e., 3 instructions in:

  0000000000000008  000000070000000a R_BPF_64_32  0000000000000000 test_pkt_access_subprog1
  0000000000000048  000000080000000a R_BPF_64_32  0000000000000018 test_pkt_access_subprog2

Call instructions now encode offsets relative to the function symbols and are both set to -1:

  0000000000000000 test_pkt_access_subprog1:
         0:       61 10 00 00 00 00 00 00 r0 = *(u32 *)(r1 + 0)
         1:       64 00 00 00 01 00 00 00 w0 <<= 1
         2:       95 00 00 00 00 00 00 00 exit
  0000000000000018 test_pkt_access_subprog2:
         3:       61 20 00 00 00 00 00 00 r0 = *(u32 *)(r2 + 0)
         4:       0c 10 00 00 00 00 00 00 w0 += w1
         5:       95 00 00 00 00 00 00 00 exit
  0000000000000000 test_pkt_access:
         0:       bf 16 00 00 00 00 00 00 r6 = r1
  ===>   1:       85 10 00 00 ff ff ff ff call -1
         2:       bc 01 00 00 00 00 00 00 w1 = w0
         3:       b4 00 00 00 02 00 00 00 w0 = 2
         4:       61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0)
         5:       64 02 00 00 01 00 00 00 w2 <<= 1
         6:       5e 21 09 00 00 00 00 00 if w1 != w2 goto +9 <LBB2_3>
         7:       b4 01 00 00 02 00 00 00 w1 = 2
         8:       bf 62 00 00 00 00 00 00 r2 = r6
  ===>   9:       85 10 00 00 ff ff ff ff call -1
        10:       bc 01 00 00 00 00 00 00 w1 = w0
        11:       61 62 00 00 00 00 00 00 r2 = *(u32 *)(r6 + 0)
        12:       04 02 00 00 02 00 00 00 w2 += 2
        13:       b4 00 00 00 ff ff ff ff w0 = -1
        14:       1e 21 01 00 00 00 00 00 if w1 == w2 goto +1 <LBB2_3>
        15:       b4 00 00 00 02 00 00 00 w0 = 2
  0000000000000080 LBB2_3:
        16:       95 00 00 00 00 00 00 00 exit

Thus the right formula to calculate the target call offset after relocation should take into account the relocation's target symbol value (offset within the section), the call instruction's imm32 offset, and (subtracting, to get a relative instruction offset) the instruction index of the call instruction itself. All that is shifted by the number of instructions in the main program, given all sub-programs are copied over after the main program.

Convert a few selftests relying on bpf-to-bpf calls to use global functions instead of static ones.

Fixes: 48cca7e44f9f ("libbpf: add support for bpf_call")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191119224447.3781271-1-andriin@fb.com
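Expressed as code, the corrected computation described above looks roughly like this (a hedged sketch; variable names are illustrative rather than libbpf's):

  /* Sketch of the fixed call-relocation math:
   *   sym_off  - relocation symbol's value, i.e. byte offset of the target
   *              function within its section (.text)
   *   insn_imm - imm32 currently encoded in the call instruction
   *   call_idx - instruction index of the call inside the main program
   *   main_cnt - number of instructions in the main program (sub-programs
   *              are appended after it)
   */
  static int relocated_call_imm(unsigned long sym_off, int insn_imm,
                                int call_idx, int main_cnt)
  {
          int callee_idx = sym_off / 8 + insn_imm + 1;    /* insn index inside .text */

          /* call imm is relative to the instruction following the call */
          return main_cnt + callee_idx - (call_idx + 1);
  }

By this formula, the second call in the global-function listing above (sym_off 0x18, imm -1, call at instruction 9, 17 main-program instructions) ends up with imm 10, i.e. it targets instruction 20, which is where test_pkt_access_subprog2 lands once .text is appended after the main program.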