path: root/tools
2020-06-01 selftests/bpf: Add BPF ringbuf selftests (Andrii Nakryiko)
Both the singleton BPF ringbuf and BPF ringbuf with map-in-map use cases are tested. The reserve+submit/discard and output variants of the API are also validated. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200529075424.3139988-4-andriin@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 libbpf: Add BPF ring buffer support (Andrii Nakryiko)
Declaring and instantiating a BPF ring buffer doesn't require any changes to libbpf, as it's just another type of map. So using the existing BTF-defined maps syntax with __uint(type, BPF_MAP_TYPE_RINGBUF) and __uint(max_entries, <size-of-ring-buf>) is all that's necessary to create and use a BPF ring buffer. This patch adds a BPF ring buffer consumer to libbpf. It is very similar to the perf_buffer implementation in terms of API, but also attempts to fix some minor problems and inconveniences of the existing perf_buffer API. ring_buffer supports both the single ring buffer use case (just using ring_buffer__new()) and adding more ring buffers, each with its own callback and context. This allows efficiently polling and consuming multiple, potentially completely independent, ring buffers, using a single epoll instance. The latter is actually a problem in practice for applications that use multiple sets of perf buffers. They have to create multiple instances of struct perf_buffer and poll them independently or in a loop, each approach having its own problems (e.g., inability to use a common poll timeout). struct ring_buffer eliminates this problem by aggregating many independent ring buffer instances under a single "ring buffer manager". Second, perf_buffer's callback can't return an error, so applications that need to stop polling due to an error in the data, or data signalling the end, have to use extra mechanisms to signal that polling has to stop. ring_buffer's callback can return an error, which will be passed back through to user code and can be acted upon appropriately. Two APIs allow consuming ring buffer data: - ring_buffer__poll(), which will wait for a data availability notification and will consume data only from the reported ring buffer(s); this API allows efficient use of resources by reading data only when it becomes available; - ring_buffer__consume(), which will attempt to read new records regardless of the data availability notification sub-system. This API is useful for cases when the lowest latency is required, at the expense of burning CPU resources. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200529075424.3139988-3-andriin@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
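[Editor's illustration] A minimal consumer sketch using the API described above; the map name "events" and the struct event layout are illustrative assumptions, not part of this patch:

  #include <stdio.h>
  #include <bpf/libbpf.h>

  struct event { int pid; char comm[16]; };   /* app-defined record layout */

  static int handle_event(void *ctx, void *data, size_t size)
  {
          const struct event *e = data;

          printf("pid=%d comm=%s\n", e->pid, e->comm);
          return 0;   /* a negative return value aborts polling */
  }

  int consume_events(struct bpf_object *obj)
  {
          int map_fd = bpf_object__find_map_fd_by_name(obj, "events");
          struct ring_buffer *rb;

          rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
          if (!rb)
                  return -1;
          /* more ring buffers could be attached via ring_buffer__add() */
          while (ring_buffer__poll(rb, 100 /* timeout, ms */) >= 0)
                  ;
          ring_buffer__free(rb);
          return 0;
  }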
2020-06-01 bpf: Implement BPF ring buffer and verifier support for it (Andrii Nakryiko)
This commit adds a new MPSC ring buffer implementation to the BPF ecosystem, which allows multiple CPUs to submit data to a single shared ring buffer. On the consumption side, only a single consumer is assumed.

Motivation
----------
There are two distinctive motivators for this work, which the existing perf buffer does not satisfy and which prompted the creation of a new ring buffer implementation:
- more efficient memory utilization by sharing the ring buffer across CPUs;
- preserving the ordering of events that happen sequentially in time, even across multiple CPUs (e.g., fork/exec/exit events for a task).

These two problems are independent, but perf buffer satisfies neither. Both are a result of the choice to have a per-CPU perf ring buffer. Both can also be solved by an MPSC ring buffer implementation. The ordering problem could technically be solved for perf buffer with some in-kernel counting, but given that the first problem requires an MPSC buffer anyway, the same solution would solve the second problem automatically.

Semantics and APIs
------------------
A single ring buffer is presented to BPF programs as an instance of a BPF map of type BPF_MAP_TYPE_RINGBUF. Two other alternatives were considered, but ultimately rejected.

One way would be, similar to BPF_MAP_TYPE_PERF_EVENT_ARRAY, to let BPF_MAP_TYPE_RINGBUF represent an array of ring buffers without enforcing the "same CPU only" rule. This would be a more familiar interface, compatible with existing perf buffer use in BPF, but it would fail if an application needed more advanced logic to look up a ring buffer by an arbitrary key. HASH_OF_MAPS addresses this with the current approach. Additionally, given the performance of BPF ringbuf, many use cases would just opt into a simple single ring buffer shared among all CPUs, for which the current approach would be overkill.

Another approach could introduce a new concept, alongside BPF maps, to represent a generic "container" object, which doesn't necessarily have a key/value interface with lookup/update/delete operations. This approach would add a lot of extra infrastructure that would have to be built for observability and verifier support. It would also add another concept for BPF developers to familiarize themselves with, new syntax in libbpf, etc. And yet it would provide no real additional benefit over the approach of using a map. BPF_MAP_TYPE_RINGBUF doesn't support lookup/update/delete operations, but neither do a few other map types (e.g., queue and stack; array doesn't support delete, etc.).

The chosen approach has the advantages of re-using existing BPF map infrastructure (introspection APIs in the kernel, libbpf support, etc.), being a familiar concept (no need to teach users a new type of object in BPF programs), and utilizing existing tooling (bpftool). For the common scenario of using a single ring buffer for all CPUs, it's as simple and straightforward as a dedicated "container" object would be. On the other hand, by being a map, it can be combined with ARRAY_OF_MAPS and HASH_OF_MAPS map-in-maps to implement a wide variety of topologies, from one ring buffer per CPU (e.g., as a replacement for perf buffer use cases), to complicated application-level hashing/sharding of ring buffers (e.g., a small pool of ring buffers with a hash of the task's tgid as the lookup key, to preserve ordering while reducing contention). Key and value sizes are enforced to be zero. max_entries is used to specify the size of the ring buffer and has to be a power of 2.
There are a bunch of similarities between perf buffer (BPF_MAP_TYPE_PERF_EVENT_ARRAY) and the new BPF ring buffer semantics:
- variable-length records;
- if there is no more space left in the ring buffer, reservation fails; there is no blocking;
- memory-mappable data area for user-space applications, for ease of consumption and high performance;
- epoll notifications for new incoming data;
- but still the ability to busy-poll for new data to achieve the lowest latency, if necessary.

BPF ringbuf provides two sets of APIs to BPF programs:
- bpf_ringbuf_output() allows *copying* data from one place into the ring buffer, similar to bpf_perf_event_output();
- the bpf_ringbuf_reserve()/bpf_ringbuf_submit()/bpf_ringbuf_discard() APIs split the process into two steps. First, a fixed amount of space is reserved. If successful, a pointer to data inside the ring buffer data area is returned, which BPF programs can use similarly to data inside array/hash maps. Once ready, this piece of memory is either submitted or discarded. Discard is similar to submit, but makes the consumer ignore the record.

bpf_ringbuf_output() has the disadvantage of incurring an extra memory copy, because the record has to be prepared somewhere else first. But it allows submitting records of a length that isn't known to the verifier beforehand. It also closely matches bpf_perf_event_output(), so it will simplify migration significantly. bpf_ringbuf_reserve() avoids the extra copy by returning a pointer directly into ring buffer memory. In a lot of cases records are larger than BPF stack space allows, so many programs have had to use an extra per-CPU array as a temporary heap for preparing a sample; bpf_ringbuf_reserve() avoids this need completely. In exchange, it only allows reserving a known, constant amount of memory, so that the verifier can check that the BPF program can't access memory outside its reserved record space. bpf_ringbuf_output(), while slightly slower due to the extra memory copy, covers some use cases that are not suitable for bpf_ringbuf_reserve().

The difference between submit and discard is very small. Discard just marks a record as discarded, and such records are supposed to be ignored by consumer code. Discard is useful for some advanced use cases, such as ensuring all-or-nothing multi-record submission, or emulating temporary malloc()/free() within a single BPF program invocation. Each reserved record is tracked by the verifier through the existing reference-tracking logic, similar to socket ref-tracking. It is thus impossible to reserve a record but forget to submit (or discard) it.

The bpf_ringbuf_query() helper allows querying various properties of the ring buffer. Currently 4 are supported:
- BPF_RB_AVAIL_DATA returns the amount of unconsumed data in the ring buffer;
- BPF_RB_RING_SIZE returns the size of the ring buffer;
- BPF_RB_CONS_POS/BPF_RB_PROD_POS return the current logical position of the consumer/producer, respectively.

The returned values are momentary snapshots of the ring buffer state and could be stale by the time the helper returns, so this should be used only for debugging/reporting, or for implementing heuristics that take the highly-changeable nature of some of those characteristics into account. One such heuristic might involve more fine-grained control over poll/epoll notifications about new data availability in the ring buffer. Together with the BPF_RB_NO_WAKEUP/BPF_RB_FORCE_WAKEUP flags for the output/submit/discard helpers, it allows a BPF program a high degree of control and, e.g., more efficient batched notifications. The default self-balancing strategy, though, should be adequate for most applications and will already work reliably and efficiently.
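[Editor's illustration] A hedged BPF-side sketch of the two API styles and the wakeup flags described above; the map name "rb", the event layout, and the tracepoint section are illustrative assumptions:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          __uint(max_entries, 256 * 1024);   /* size, must be a power of 2 */
  } rb SEC(".maps");

  struct event { int pid; };

  SEC("tp/sched/sched_process_exec")
  int handle_exec(void *ctx)
  {
          struct event *e;

          /* reserve/submit: zero-copy, size must be a verifier-known constant */
          e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
          if (!e)
                  return 0;   /* buffer full (or spinlock missed in NMI) */
          e->pid = bpf_get_current_pid_tgid() >> 32;
          if (e->pid)
                  bpf_ringbuf_submit(e, BPF_RB_NO_WAKEUP);  /* batch, no wakeup */
          else
                  bpf_ringbuf_discard(e, 0);  /* consumer will skip this record */

          /* output: extra copy, but variable-length data would be allowed */
          struct event copy = { .pid = 0 };
          bpf_ringbuf_output(&rb, &copy, sizeof(copy), BPF_RB_FORCE_WAKEUP);
          return 0;
  }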
Design and implementation
-------------------------
This reserve/submit schema allows a natural way for multiple producers, either on different CPUs or even on the same CPU/in the same BPF program, to reserve independent records and work with them without blocking other producers. This means that if a BPF program was interrupted by another BPF program sharing the same ring buffer, both will get a record reserved (provided there is enough space left) and can work with it and submit it independently. This applies to NMI context as well, except that, because a spinlock is used during reservation, bpf_ringbuf_reserve() in NMI context might fail to get the lock, in which case the reservation fails even if the ring buffer is not full.

The ring buffer itself is internally implemented as a power-of-2 sized circular buffer, with two logical and ever-increasing counters (which might wrap around on 32-bit architectures; that's not a problem):
- the consumer counter shows up to which logical position the consumer has consumed the data;
- the producer counter denotes the amount of data reserved by all producers.

Each time a record is reserved, the producer that "owns" the record successfully advances the producer counter. At that point, the data is still not ready to be consumed, though. Each record has an 8-byte header, which contains the length of the reserved record as well as two extra bits: a busy bit, denoting that the record is still being worked on, and a discard bit, which might be set at commit time if the record is discarded. In the latter case, the consumer is supposed to skip the record and move on to the next one. The record header also encodes the record's relative offset from the beginning of the ring buffer data area (in pages). This allows bpf_ringbuf_submit()/bpf_ringbuf_discard() to accept only the pointer to the record itself, without also requiring a pointer to the ring buffer; the ring buffer memory location is restored from the record metadata header. This significantly simplifies the verifier, as well as improving API usability.

Producer counter increments are serialized under a spinlock, so there is strict ordering between reservations. Commits, on the other hand, are completely lockless and independent. All records become available to the consumer in the order of reservation, but only after all previous records have been committed. It is thus possible for slow producers to temporarily hold back records that were reserved later. The reservation/commit/consumer protocol is verified by litmus tests in Documentation/litmus-tests/bpf-rb.

One interesting implementation detail, which significantly simplifies (and thus also speeds up) both producers and consumers, is that the data area is mapped twice, contiguously back-to-back, in virtual memory. This makes it unnecessary to take any special measures for samples that wrap around at the end of the circular buffer data area, because the next page after the last data page is the first data page again, so the sample still appears completely contiguous in virtual memory. See the comment and a simple ASCII diagram showing this visually in bpf_ringbuf_area_alloc().
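[Editor's illustration] A hedged consumer-side sketch of the record protocol just described, in the spirit of libbpf's ring_buffer implementation; variable names and the atomic wrappers are ours, while BPF_RINGBUF_BUSY_BIT, BPF_RINGBUF_DISCARD_BIT and BPF_RINGBUF_HDR_SZ come from the UAPI header:

  #include <linux/bpf.h>   /* BPF_RINGBUF_* constants */

  /* consumer_pos/producer_pos point into the mmap()'ed control pages;
   * data points at the (double-mapped) data area of size ring_size. */
  static void consume_ring(unsigned long *consumer_pos,
                           unsigned long *producer_pos,
                           void *data, unsigned long ring_size,
                           void (*handle)(void *sample, int len))
  {
          unsigned long cons = __atomic_load_n(consumer_pos, __ATOMIC_ACQUIRE);

          while (cons < __atomic_load_n(producer_pos, __ATOMIC_ACQUIRE)) {
                  int *hdr = data + (cons & (ring_size - 1));
                  int len = __atomic_load_n(hdr, __ATOMIC_ACQUIRE);

                  if (len & BPF_RINGBUF_BUSY_BIT)
                          break;   /* reserved, but not yet committed */

                  /* step over header + record, rounded up to 8 bytes */
                  cons += BPF_RINGBUF_HDR_SZ +
                          (((len & ~BPF_RINGBUF_DISCARD_BIT) + 7) / 8) * 8;
                  if (!(len & BPF_RINGBUF_DISCARD_BIT))
                          handle((void *)hdr + BPF_RINGBUF_HDR_SZ, len);
                  __atomic_store_n(consumer_pos, cons, __ATOMIC_RELEASE);
          }
  }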
Another feature that distinguishes BPF ringbuf from the perf ring buffer is self-pacing notification of new data availability. bpf_ringbuf_commit() (the internal step behind submit/discard) sends a notification of a new record being available only if the consumer has already caught up to the record being committed. If not, the consumer still has to catch up and will thus see the new data anyway, without needing an extra poll notification. Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c) show that this makes it possible to achieve very high throughput without having to resort to tricks like "notify only every Nth sample", which are necessary with perf buffer. For extreme cases, when a BPF program wants more manual control of notifications, the submit/discard/output helpers accept BPF_RB_NO_WAKEUP and BPF_RB_FORCE_WAKEUP flags, which give full control over notifications of data availability, but require extra caution and diligence in using the API.

Comparison to alternatives
--------------------------
Before considering implementing the BPF ring buffer from scratch, existing in-kernel alternatives were evaluated, but they didn't seem to meet the needs. They largely fell into a few categories:
- per-CPU buffers (perf, ftrace, etc.), which don't satisfy the two motivations outlined above (ordering and memory consumption);
- linked list-based implementations; while some were multi-producer designs, consuming them from user-space would be very complicated and most probably not performant; memory-mapping a contiguous piece of memory is simpler and more performant for user-space consumers;
- io_uring is SPSC and also requires fixed-sized elements. Naively turning an SPSC queue into MPSC with a lock would have subpar performance compared to locked reserve + lockless commit, as in the BPF ring buffer. Fixed-sized elements would be too limiting for BPF programs, given that existing BPF programs already rely heavily on the variable-sized perf buffer;
- specialized implementations (like the new printk ring buffer, [0]) with lots of printk-specific limitations and implications that didn't seem to fit the intended use with BPF programs well.

[0] https://lwn.net/Articles/779550/ Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200529075424.3139988-2-andriin@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 selftests/bpf: Add tests for write-only stacks/queues (Anton Protopopov)
For write-only stacks and queues, bpf_map_update_elem should be allowed, but bpf_map_lookup_elem and bpf_map_lookup_and_delete_elem should fail with EPERM. Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200527185700.14658-6-a.s.protopopov@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
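[Editor's illustration] A hedged sketch of the expected behaviour, using libbpf's low-level map wrappers; creating the map with bpf_create_map() and BPF_F_WRONLY is one way to obtain such a map, and error handling is elided:

  #include <errno.h>
  #include <assert.h>
  #include <bpf/bpf.h>

  void check_wronly_queue(void)
  {
          int fd = bpf_create_map(BPF_MAP_TYPE_QUEUE, 0, sizeof(int), 16,
                                  BPF_F_WRONLY);
          int val = 42, out;

          assert(fd >= 0);
          /* pushing to a write-only queue is allowed ... */
          assert(bpf_map_update_elem(fd, NULL, &val, 0) == 0);
          /* ... but reading back must fail with EPERM */
          assert(bpf_map_lookup_elem(fd, NULL, &out) < 0 && errno == EPERM);
          assert(bpf_map_lookup_and_delete_elem(fd, NULL, &out) < 0 &&
                 errno == EPERM);
  }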
2020-06-01 selftests/bpf: Cleanup comments in test_maps (Anton Protopopov)
Make the comments inside the test_map_rdonly and test_map_wronly tests consistent with the logic. Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200527185700.14658-4-a.s.protopopov@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 selftests/bpf: Cleanup some file descriptors in test_maps (Anton Protopopov)
The test_map_rdonly and test_map_wronly tests should close file descriptors which they open. Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200527185700.14658-3-a.s.protopopov@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 selftests/bpf: Fix a typo in test_maps (Anton Protopopov)
Trivial fix to a typo in the test_map_wronly test: "read" -> "write" Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200527185700.14658-2-a.s.protopopov@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 libbpf: Fix perf_buffer__free() API for sparse allocs (Eelco Chaudron)
In case the cpu_bufs are sparsely allocated, they are not all freed. This change fixes that. Fixes: fb84b8224655 ("libbpf: add perf buffer API") Signed-off-by: Eelco Chaudron <echaudro@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/159056888305.330763.9684536967379110349.stgit@ebuild Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 bpf, selftests: Test probe_* helpers from SCHED_CLS (John Fastabend)
Let's test using probe_* helpers in SCHED_CLS network programs as well, just to be sure these keep working. It's cheap to add the extra test, and it provides a second context to test outside of sk_msg after we generalized the probe_* helpers to all networking types. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/159033911685.12355.15951980509828906214.stgit@john-Precision-5820-Tower Signed-off-by: Alexei Starovoitov <ast@kernel.org>
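[Editor's illustration] A hedged sketch of the pattern being exercised: a SCHED_CLS program calling a probe_* helper. The "vmlinux.h" include and the program/section names are illustrative assumptions:

  #include "vmlinux.h"            /* for struct task_struct, __sk_buff */
  #include <bpf/bpf_helpers.h>

  #define TC_ACT_OK 0

  SEC("classifier")
  int probe_from_tc(struct __sk_buff *skb)
  {
          struct task_struct *task = (void *)bpf_get_current_task();
          char comm[16] = {};

          /* probe_* helpers are now callable from networking programs too */
          bpf_probe_read_kernel_str(comm, sizeof(comm), task->comm);
          return TC_ACT_OK;
  }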
2020-06-01 bpf, selftests: Add sk_msg helpers load and attach test (John Fastabend)
The test itself is not particularly useful, but it encodes a common pattern we have. Namely, do a sk storage lookup, then, depending on the data, decide if we need to do more work or alternatively allow the packet to PASS. Then, if we need to do more work, consult task_struct for more information about the running task. Finally, based on this additional information, drop or pass the data. In this case the suspicious check is not so realistic, but it encodes the general pattern and uses the helpers, so we test the workflow. This is a load test to ensure the verifier correctly handles this case. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/159033909665.12355.6166415847337547879.stgit@john-Precision-5820-Tower Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 bpf, sk_msg: Add get socket storage helpers (John Fastabend)
Add helpers to use local socket storage. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/159033907577.12355.14740125020572756560.stgit@john-Precision-5820-Tower Signed-off-by: Alexei Starovoitov <ast@kernel.org>
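[Editor's illustration] A hedged sketch of what the new helpers enable from a sk_msg program; the map name and the per-socket counter semantics are illustrative assumptions:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_SK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, __u64);
  } sk_stg SEC(".maps");

  SEC("sk_msg")
  int msg_verdict(struct sk_msg_md *msg)
  {
          __u64 *cnt = bpf_sk_storage_get(&sk_stg, msg->sk, 0,
                                          BPF_SK_STORAGE_GET_F_CREATE);

          if (!cnt)
                  return SK_DROP;
          (*cnt)++;               /* per-socket message counter */
          return SK_PASS;
  }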
2020-06-01 libbpf: Use .so dynamic symbols for abi check (Yauheni Kaliuta)
Since dynamic symbols are used for dynamic linking, it makes sense to use them (readelf --dyn-syms) for the ABI check. Found with some configurations on powerpc where the linker puts local *.plt_call.* symbols into the .so. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200525061846.16524-1-yauheni.kaliuta@redhat.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 libbpf: Install headers as part of make install (Nikolay Borisov)
Currently, 'make install' results in only the pkg-config file and library binaries being installed. For consistency, also install headers as part of "make install". Signed-off-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200526174612.5447-1-nborisov@suse.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 libbpf: Add API to consume the perf ring buffer content (Eelco Chaudron)
This new API, perf_buffer__consume, can be used as follows: - When you have a perf ring where wakeup_events is higher than 1, and you have remaining data in the rings you would like to pull out on exit (or maybe based on a timeout). - For low latency cases where you burn a CPU that constantly polls the queues. Signed-off-by: Eelco Chaudron <echaudro@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/159048487929.89441.7465713173442594608.stgit@ebuild Signed-off-by: Alexei Starovoitov <ast@kernel.org>
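[Editor's illustration] A hedged usage sketch for the new API; "pb" is an already-set-up struct perf_buffer, and the exit path shown is illustrative:

  #include <stdio.h>
  #include <bpf/libbpf.h>

  void drain_and_exit(struct perf_buffer *pb)
  {
          /* read whatever is still queued, regardless of wakeup_events */
          int err = perf_buffer__consume(pb);

          if (err)
                  fprintf(stderr, "perf_buffer__consume: %d\n", err);
          perf_buffer__free(pb);
  }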
2020-06-01 tools, bpftool: Print correct error message when failing to load BTF (Tobias Klauser)
btf__parse_raw and btf__parse_elf return negative error numbers wrapped in an ERR_PTR, so the extracted value needs to be negated before passing it to strerror, which expects a positive error number. Before: Error: failed to load BTF from .../vmlinux: Unknown error -2 After: Error: failed to load BTF from .../vmlinux: No such file or directory Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200525135421.4154-1-tklauser@distanz.ch Signed-off-by: Alexei Starovoitov <ast@kernel.org>
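[Editor's illustration] A hedged sketch of the pattern after the fix; p_err() is bpftool's error printer, and IS_ERR()/PTR_ERR() come from tools/include/linux/err.h:

  struct btf *btf = btf__parse_elf(path, NULL);

  if (IS_ERR(btf)) {
          int err = PTR_ERR(btf);         /* negative errno, e.g. -ENOENT */

          p_err("failed to load BTF from %s: %s", path, strerror(-err));
          return -1;
  }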
2020-06-01 tools, bpftool: Make capability check account for new BPF caps (Quentin Monnet)
Following the introduction of CAP_BPF, and the switch from CAP_SYS_ADMIN to other capabilities for various BPF features, update the capability checks (and potentially, drops) in bpftool for feature probes. Because bpftool and/or the system might not know of CAP_BPF yet, some caution is necessary: - If compiled and run on a system with CAP_BPF, check CAP_BPF, CAP_SYS_ADMIN, CAP_PERFMON, CAP_NET_ADMIN. - Guard against CAP_BPF being undefined, to allow compiling bpftool from latest sources on older systems. If the system where feature probes are run does not know of CAP_BPF, stop checking after CAP_SYS_ADMIN, as this should be the only capability required for all the BPF probing. - If compiled from latest sources on a system without CAP_BPF, but later executed on a newer system with CAP_BPF knowledge, then we only test CAP_SYS_ADMIN. Some probes may fail if the bpftool process has CAP_SYS_ADMIN but misses the other capabilities. The alternative would be to redefine the value for CAP_BPF in bpftool, but this does not look clean, and the case sounds relatively rare anyway. Note that libcap offers a cap_to_name() function to retrieve the name of a given capability (e.g. "cap_sys_admin"). We do not use it because deriving the names from the macros looks simpler than using cap_to_name() (doing a strdup() on the string) + cap_free() + handling the case of failed allocations, when we just want to use the name of the capability in an error message. The checks when compiling without libcap (i.e. root versus non-root) are unchanged. v2: - Do not allocate cap_list dynamically. - Drop BPF-related capabilities when running with "unprivileged", even if we didn't have the full set in the first place (in v1, we would skip dropping them in that case). - Keep track of what capabilities we have, print the names of the missing ones for privileged probing. - Attempt to drop only the capabilities we actually have. - Rename a couple variables. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200523010247.20654-1-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 tools, bpftool: Clean subcommand help messages (Quentin Monnet)
This is a clean-up for the formatting of the do_help functions for bpftool's subcommands. The following fixes are included: - Do not use argv[-2] for "iter" help message, as the help is shown by default if no "iter" action is selected, resulting in messages looking like "./bpftool bpftool pin...". - Do not print unused HELP_SPEC_PROGRAM in help message for "bpftool link". - Andrii used argument indexing to avoid having multiple occurrences of bin_name and argv[-2] in the fprintf() for the help message, for "bpftool gen" and "bpftool link". Let's reuse this for all other help functions. We can remove up to thirty arguments for the "bpftool map" help message. - Harmonise all functions, e.g. use ending quotes-comma on a separate line. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200523010751.23465-1-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-06-01 Merge tag 'smp-core-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds)
Pull SMP updates from Ingo Molnar: "Misc cleanups in the SMP hotplug and cross-call code" * tag 'smp-core-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: cpu/hotplug: Remove __freeze_secondary_cpus() cpu/hotplug: Remove disable_nonboot_cpus() cpu/hotplug: Fix a typo in comment "broadacasted"->"broadcasted" smp: Use smp_call_func_t in on_each_cpu()
2020-06-01 Merge tag 'perf-core-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds)
Pull perf updates from Ingo Molnar: "Kernel side changes: - Add AMD Fam17h RAPL support - Introduce CAP_PERFMON to kernel and user space - Add Zhaoxin CPU support - Misc fixes and cleanups Tooling changes: - perf record: Introduce '--switch-output-event' to use arbitrary events to be setup and read from a side band thread and, when they take place, a signal is sent to the main 'perf record' thread, reusing the core for '--switch-output' to take perf.data snapshots from the ring buffer used for '--overwrite', e.g.: # perf record --overwrite -e sched:* \ --switch-output-event syscalls:*connect* \ workload will take perf.data.YYYYMMDDHHMMSS snapshots up to around the connect syscalls. Add '--num-synthesize-threads' option to control degree of parallelism of the synthesize_mmap() code which is scanning /proc/PID/task/PID/maps and can be time consuming. This mimics pre-existing behaviour in 'perf top'. - perf bench: Add a multi-threaded synthesize benchmark and kallsyms parsing benchmark. - Intel PT support: Stitch LBR records from multiple samples to get deeper backtraces, there are caveats, see the csets for details. Allow using Intel PT to synthesize callchains for regular events. Add support for synthesizing branch stacks for regular events (cycles, instructions, etc) from Intel PT data. Misc changes: - Updated perf vendor events for power9 and Coresight. - Add flamegraph.py script via 'perf flamegraph' - Misc other changes, fixes and cleanups - see the Git log for details Also, since over the last couple of years perf tooling has matured and decoupled from the kernel perf changes to a large degree, going forward Arnaldo is going to send perf tooling changes via direct pull requests" * tag 'perf-core-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (163 commits) perf/x86/rapl: Add AMD Fam17h RAPL support perf/x86/rapl: Make perf_probe_msr() more robust and flexible perf/x86/rapl: Flip logic on default events visibility perf/x86/rapl: Refactor to share the RAPL code between Intel and AMD CPUs perf/x86/rapl: Move RAPL support to common x86 code perf/core: Replace zero-length array with flexible-array perf/x86: Replace zero-length array with flexible-array perf/x86/intel: Add more available bits for OFFCORE_RESPONSE of Intel Tremont perf/x86/rapl: Add Ice Lake RAPL support perf flamegraph: Use /bin/bash for report and record scripts perf cs-etm: Move definition of 'traceid_list' global variable from header file libsymbols kallsyms: Move hex2u64 out of header libsymbols kallsyms: Parse using io api perf bench: Add kallsyms parsing perf: cs-etm: Update to build with latest opencsd version. perf symbol: Fix kernel symbol address display perf inject: Rename perf_evsel__*() operating on 'struct evsel *' to evsel__*() perf annotate: Rename perf_evsel__*() operating on 'struct evsel *' to evsel__*() perf trace: Rename perf_evsel__*() operating on 'struct evsel *' to evsel__*() perf script: Rename perf_evsel__*() operating on 'struct evsel *' to evsel__*() ...
2020-06-01 kunit: Fix TabError, remove defconfig code and handle when there is no kunitconfig (Vitor Massaru Iha)
The indentation before this code (`if not os.path.exists(cli_args.build_dir):`) used spaces instead of tabs after fixed-up merge conflicts; this commit reverts the spaces to tabs:

  [iha@bbking linux]$ tools/testing/kunit/kunit.py run
  File "tools/testing/kunit/kunit.py", line 247
  if not linux:
  ^
  TabError: inconsistent use of tabs and spaces in indentation

  [iha@bbking linux]$ tools/testing/kunit/kunit.py run
  Traceback (most recent call last):
  File "tools/testing/kunit/kunit.py", line 338, in <module>
  main(sys.argv[1:])
  File "tools/testing/kunit/kunit.py", line 215, in main
  add_config_opts(config_parser)

  [iha@bbking linux]$ tools/testing/kunit/kunit.py run
  Traceback (most recent call last):
  File "tools/testing/kunit/kunit.py", line 337, in <module>
  main(sys.argv[1:])
  File "tools/testing/kunit/kunit.py", line 255, in main
  result = run_tests(linux, request)
  File "tools/testing/kunit/kunit.py", line 133, in run_tests
  request.defconfig,
  AttributeError: 'KunitRequest' object has no attribute 'defconfig'

Handle the case when there is no .kunitconfig; the error is due to merge conflicts between the following: commit 9bdf64b35117 ("kunit: use KUnit defconfig by default") commit 45ba7a893ad8 ("kunit: kunit_tool: Separate out config/build/exec/parse")

  [iha@bbking linux]$ tools/testing/kunit/kunit.py run
  Traceback (most recent call last):
  File "tools/testing/kunit/kunit.py", line 335, in <module>
  main(sys.argv[1:])
  File "tools/testing/kunit/kunit.py", line 246, in main
  linux = kunit_kernel.LinuxSourceTree()
  File "../tools/testing/kunit/kunit_kernel.py", line 109, in __init__
  self._kconfig.read_from_file(kunitconfig_path)
  File "../tools/testing/kunit/kunit_config.py", line 88, in read_from_file
  with open(path, 'r') as f:
  FileNotFoundError: [Errno 2] No such file or directory: '.kunit/.kunitconfig'

Signed-off-by: Vitor Massaru Iha <vitor@massaru.org> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2020-06-01 Merge tag 'objtool-core-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds)
Pull objtool updates from Ingo Molnar: "There are a lot of objtool changes in this cycle, all across the map: - Speed up objtool significantly, especially when there are large number of sections - Improve objtool's understanding of special instructions such as IRET, to reduce the number of annotations required - Implement 'noinstr' validation - Do baby steps for non-x86 objtool use - Simplify/fix retpoline decoding - Add vmlinux validation - Improve documentation - Fix various bugs and apply smaller cleanups" * tag 'objtool-core-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (54 commits) objtool: Enable compilation of objtool for all architectures objtool: Move struct objtool_file into arch-independent header objtool: Exit successfully when requesting help objtool: Add check_kcov_mode() to the uaccess safelist samples/ftrace: Fix asm function ELF annotations objtool: optimize add_dead_ends for split sections objtool: use gelf_getsymshndx to handle >64k sections objtool: Allow no-op CFI ops in alternatives x86/retpoline: Fix retpoline unwind x86: Change {JMP,CALL}_NOSPEC argument x86: Simplify retpoline declaration x86/speculation: Change FILL_RETURN_BUFFER to work with objtool objtool: Add support for intra-function calls objtool: Move the IRET hack into the arch decoder objtool: Remove INSN_STACK objtool: Make handle_insn_ops() unconditional objtool: Rework allocating stack_ops on decode objtool: UNWIND_HINT_RET_OFFSET should not check registers objtool: is_fentry_call() crashes if call has no destination x86,smap: Fix smap_{save,restore}() alternatives ...
2020-06-01 Merge tag 'core-rcu-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds)
Pull RCU updates from Ingo Molnar: "The RCU updates for this cycle were: - RCU-tasks update, including addition of RCU Tasks Trace for BPF use and TASKS_RUDE_RCU - kfree_rcu() updates. - Remove scheduler locking restriction - RCU CPU stall warning updates. - Torture-test updates. - Miscellaneous fixes and other updates" * tag 'core-rcu-2020-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (103 commits) rcu: Allow for smp_call_function() running callbacks from idle rcu: Provide rcu_irq_exit_check_preempt() rcu: Abstract out rcu_irq_enter_check_tick() from rcu_nmi_enter() rcu: Provide __rcu_is_watching() rcu: Provide rcu_irq_exit_preempt() rcu: Make RCU IRQ enter/exit functions rely on in_nmi() rcu/tree: Mark the idle relevant functions noinstr x86: Replace ist_enter() with nmi_enter() x86/mce: Send #MC singal from task work x86/entry: Get rid of ist_begin/end_non_atomic() sched,rcu,tracing: Avoid tracing before in_nmi() is correct sh/ftrace: Move arch_ftrace_nmi_{enter,exit} into nmi exception lockdep: Always inline lockdep_{off,on}() hardirq/nmi: Allow nested nmi_enter() arm64: Prepare arch_nmi_enter() for recursion printk: Disallow instrumenting print_nmi_enter() printk: Prepare for nested printk_nmi_enter() rcutorture: Convert ULONG_CMP_LT() to time_before() torture: Add a --kasan argument torture: Save a few lines by using config_override_param initially ...
2020-06-01 Merge tag 'pstore-v5.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux (Linus Torvalds)
Pull pstore updates from Kees Cook: "Fixes and new features for pstore. This is a pretty big set of changes (relative to past pstore pulls), but it has been in -next for a while. The biggest change here is the ability to support a block device as a pstore backend, which has been desired for a while. A lot of additional fixes and refactorings are also included, mostly in support of the new features. - refactor pstore locking for safer module unloading (Kees Cook) - remove orphaned records from pstorefs when backend unloaded (Kees Cook) - refactor dump_oops parameter into max_reason (Pavel Tatashin) - introduce pstore/zone for common code for contiguous storage (WeiXiong Liao) - introduce pstore/blk for block device backend (WeiXiong Liao) - introduce mtd backend (WeiXiong Liao)" * tag 'pstore-v5.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (35 commits) mtd: Support kmsg dumper based on pstore/blk pstore/blk: Introduce "best_effort" mode pstore/blk: Support non-block storage devices pstore/blk: Provide way to query pstore configuration pstore/zone: Provide way to skip "broken" zone for MTD devices Documentation: Add details for pstore/blk pstore/zone,blk: Add ftrace frontend support pstore/zone,blk: Add console frontend support pstore/zone,blk: Add support for pmsg frontend pstore/blk: Introduce backend for block devices pstore/zone: Introduce common layer to manage storage zones ramoops: Add "max-reason" optional field to ramoops DT node pstore/ram: Introduce max_reason and convert dump_oops pstore/platform: Pass max_reason to kmesg dump printk: Introduce kmsg_dump_reason_str() printk: honor the max_reason field in kmsg_dumper printk: Collapse shutdown types into a single dump reason pstore/ftrace: Provide ftrace log merging routine pstore/ram: Refactor ftrace buffer merging pstore/ram: Refactor DT size parsing ...
2020-06-01 selftests: mlxsw: Add test for control packets (Ido Schimmel)
Generate packets matching the various control traps and check that the traps' stats increase accordingly. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-06-01 sh: remove sh5 support (Arnd Bergmann)
sh5 never became a product and has probably never really worked. Remove it by recursively deleting all associated Kconfig options and all corresponding files. Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rich Felker <dalias@libc.org>
2020-06-01 Merge tag 'spi-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi (Linus Torvalds)
Pull spi updates from Mark Brown: "This has been a very active release for the DesignWare driver in particular - after a long period of inactivity we have had a lot of people actively working on it for unrelated reasons this cycle with some of that work still not landed. Otherwise it's been fairly quiet for the subsystem. Highlights include: - Lots of performance improvements and fixes for the DesignWare driver from Serge Semin, Andy Shevchenko, Wan Ahmad Zainie, Clement Leger, Dinh Nguyen and Jarkko Nikula. - Support for octal mode transfers in spidev. - Slave mode support for the Rockchip drivers. - Support for AMD controllers, Broadcom mspi and Raspberry Pi 4, and Intel Elkhart Lake" * tag 'spi-v5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (125 commits) spi: spi-fsl-dspi: fix native data copy spi: Convert DW SPI binding to DT schema spi: dw: Refactor mid_spi_dma_setup() to separate DMA and IRQ config spi: dw: Make DMA request line assignments explicit for Intel Medfield spi: bcm2835: Remove shared interrupt support dt-bindings: snps,dw-apb-ssi: add optional reset property spi: dw: add reset control spi: bcm2835: Enable shared interrupt support spi: bcm2835: Implement shutdown callback spi: dw: Use regset32 DebugFS method to create regdump file spi: dw: Add DMA support to the DW SPI MMIO driver spi: dw: Cleanup generic DW DMA code namings spi: dw: Add DW SPI DMA/PCI/MMIO dependency on the DW SPI core spi: dw: Remove DW DMA code dependency from DW_DMAC_PCI spi: dw: Move Non-DMA code to the DW PCIe-SPI driver spi: dw: Add core suffix to the DW APB SSI core source file spi: dw: Fix Rx-only DMA transfers spi: dw: Use DMA max burst to set the request thresholds spi: dw: Parameterize the DMA Rx/Tx burst length spi: dw: Add SPI Rx-done wait method to DMA-based transfer ...
2020-06-01 KVM: selftests: fix rdtsc() for vmx_tsc_adjust_test (Vitaly Kuznetsov)
vmx_tsc_adjust_test fails with: IA32_TSC_ADJUST is -4294969448 (-1 * TSC_ADJUST_VALUE + -2152). IA32_TSC_ADJUST is -4294969448 (-1 * TSC_ADJUST_VALUE + -2152). IA32_TSC_ADJUST is 281470681738540 (65534 * TSC_ADJUST_VALUE + 4294962476). ==== Test Assertion Failure ==== x86_64/vmx_tsc_adjust_test.c:153: false pid=19738 tid=19738 - Interrupted system call 1 0x0000000000401192: main at vmx_tsc_adjust_test.c:153 2 0x00007fe1ef8583d4: ?? ??:0 3 0x0000000000401201: _start at ??:? Failed guest assert: (adjust <= max) The problem is that 'tsc_val' should be u64, not u32, or the reading gets truncated. Fixes: 8d7fbf01f9afc ("KVM: selftests: VMX preemption timer migration test") Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20200601154726.261868-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
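[Editor's illustration] A hedged sketch of a correct 64-bit rdtsc() reader; the selftest's actual helper may differ in style:

  #include <stdint.h>

  static inline uint64_t rdtsc(void)
  {
          uint32_t eax, edx;

          __asm__ __volatile__("rdtsc" : "=a"(eax), "=d"(edx));
          /* accumulate into a u64; a u32 would truncate the high half */
          return ((uint64_t)edx << 32) | eax;
  }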
2020-06-01 perf libdw: Fix off-by-1 relative directory includes (Ian Rogers)
This is currently working due to extra include paths in the build. Before: $ cd tools/perf/arch/arm64/util $ ls -la ../../util/unwind-libdw.h ls: cannot access '../../util/unwind-libdw.h': No such file or directory After: $ ls -la ../../../util/unwind-libdw.h -rw-r----- 1 irogers irogers 553 Apr 17 14:31 ../../../util/unwind-libdw.h Signed-off-by: Ian Rogers <irogers@google.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20200529225232.207532-1-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-06-01 perf arm-spe: Support synthetic events (Tan Xiaojun)
Since commit ffd3d18c20b8 ("perf tools: Add ARM Statistical Profiling Extensions (SPE) support") was merged, perf supports outputting raw data with the option "--dump-raw-trace". However, it lacks support for synthetic events and so cannot output any statistical info. This patch improves the "perf report" support for ARM SPE with four types of synthetic events:
First level cache synthetic events, including L1 data cache accessing and missing events;
Last level cache synthetic events, including last level cache accessing and missing events;
TLB synthetic events, including TLB accessing and missing events;
Remote access events, which are used to account for load/store operations targeting another socket.

Example usage:

  $ perf record -c 1024 -e arm_spe_0/branch_filter=1,ts_enable=1,pct_enable=1,pa_enable=1,load_filter=1,jitter=1,store_filter=1,min_latency=0/ dd if=/dev/zero of=/dev/null count=10000
  $ perf report --stdio

  # Samples: 59 of event 'l1d-miss'
  # Event count (approx.): 59
  #
  # Children Self Command Shared Object Symbol
  # ........ ........ ....... ................. ..................................
  #
  23.73% 23.73% dd [kernel.kallsyms] [k] perf_iterate_ctx.constprop.135
  20.34% 20.34% dd [kernel.kallsyms] [k] filemap_map_pages
  5.08% 5.08% dd [kernel.kallsyms] [k] perf_event_mmap
  5.08% 5.08% dd [kernel.kallsyms] [k] unlock_page_memcg
  5.08% 5.08% dd [kernel.kallsyms] [k] unmap_page_range
  3.39% 3.39% dd [kernel.kallsyms] [k] PageHuge
  3.39% 3.39% dd [kernel.kallsyms] [k] release_pages
  3.39% 3.39% dd ld-2.28.so [.] 0x0000000000008b5c
  1.69% 1.69% dd [kernel.kallsyms] [k] __alloc_fd
  [...]

  # Samples: 3K of event 'l1d-access'
  # Event count (approx.): 3980
  #
  # Children Self Command Shared Object Symbol
  # ........ ........ ....... ................. ......................................
  #
  26.98% 26.98% dd [kernel.kallsyms] [k] ret_to_user
  10.53% 10.53% dd [kernel.kallsyms] [k] fsnotify
  7.51% 7.51% dd [kernel.kallsyms] [k] new_sync_read
  4.57% 4.57% dd [kernel.kallsyms] [k] vfs_read
  4.35% 4.35% dd [kernel.kallsyms] [k] vfs_write
  3.69% 3.69% dd [kernel.kallsyms] [k] __fget_light
  3.69% 3.69% dd [kernel.kallsyms] [k] rw_verify_area
  3.44% 3.44% dd [kernel.kallsyms] [k] security_file_permission
  2.76% 2.76% dd [kernel.kallsyms] [k] __fsnotify_parent
  2.44% 2.44% dd [kernel.kallsyms] [k] ksys_write
  2.24% 2.24% dd [kernel.kallsyms] [k] iov_iter_zero
  2.19% 2.19% dd [kernel.kallsyms] [k] read_iter_zero
  1.81% 1.81% dd dd [.] 0x0000000000002960
  1.78% 1.78% dd dd [.] 0x0000000000002980
  [...]

  # Samples: 35 of event 'llc-miss'
  # Event count (approx.): 35
  #
  # Children Self Command Shared Object Symbol
  # ........ ........ ....... ................. ...........................
  #
  34.29% 34.29% dd [kernel.kallsyms] [k] filemap_map_pages
  8.57% 8.57% dd [kernel.kallsyms] [k] unlock_page_memcg
  8.57% 8.57% dd [kernel.kallsyms] [k] unmap_page_range
  5.71% 5.71% dd [kernel.kallsyms] [k] PageHuge
  5.71% 5.71% dd [kernel.kallsyms] [k] release_pages
  5.71% 5.71% dd ld-2.28.so [.] 0x0000000000008b5c
  2.86% 2.86% dd [kernel.kallsyms] [k] __queue_work
  2.86% 2.86% dd [kernel.kallsyms] [k] __radix_tree_lookup
  2.86% 2.86% dd [kernel.kallsyms] [k] copy_page
  [...]

  # Samples: 2 of event 'llc-access'
  # Event count (approx.): 2
  #
  # Children Self Command Shared Object Symbol
  # ........ ........ ....... ................. .............
  #
  50.00% 50.00% dd [kernel.kallsyms] [k] copy_page
  50.00% 50.00% dd libc-2.28.so [.] _dl_addr

  # Samples: 48 of event 'tlb-miss'
  # Event count (approx.): 48
  #
  # Children Self Command Shared Object Symbol
  # ........ ........ ....... ................. ..................................
  #
  20.83% 20.83% dd [kernel.kallsyms] [k] perf_iterate_ctx.constprop.135
  12.50% 12.50% dd [kernel.kallsyms] [k] __arch_clear_user
  10.42% 10.42% dd [kernel.kallsyms] [k] clear_page
  4.17% 4.17% dd [kernel.kallsyms] [k] copy_page
  4.17% 4.17% dd [kernel.kallsyms] [k] filemap_map_pages
  2.08% 2.08% dd [kernel.kallsyms] [k] __alloc_fd
  2.08% 2.08% dd [kernel.kallsyms] [k] __mod_memcg_state.part.70
  2.08% 2.08% dd [kernel.kallsyms] [k] __queue_work
  2.08% 2.08% dd [kernel.kallsyms] [k] __rcu_read_unlock
  2.08% 2.08% dd [kernel.kallsyms] [k] d_path
  2.08% 2.08% dd [kernel.kallsyms] [k] destroy_inode
  2.08% 2.08% dd [kernel.kallsyms] [k] do_dentry_open
  [...]

  # Samples: 9K of event 'tlb-access'
  # Event count (approx.): 9573
  #
  # Children Self Command Shared Object Symbol
  # ........ ........ ....... ................. ......................................
  #
  25.79% 25.79% dd [kernel.kallsyms] [k] __arch_clear_user
  11.22% 11.22% dd [kernel.kallsyms] [k] ret_to_user
  8.56% 8.56% dd [kernel.kallsyms] [k] fsnotify
  4.06% 4.06% dd [kernel.kallsyms] [k] new_sync_read
  3.67% 3.67% dd [kernel.kallsyms] [k] el0_svc_common.constprop.2
  3.04% 3.04% dd [kernel.kallsyms] [k] __fsnotify_parent
  2.90% 2.90% dd [kernel.kallsyms] [k] vfs_write
  2.82% 2.82% dd [kernel.kallsyms] [k] vfs_read
  2.52% 2.52% dd libc-2.28.so [.] write
  2.26% 2.26% dd [kernel.kallsyms] [k] security_file_permission
  2.08% 2.08% dd [kernel.kallsyms] [k] ksys_write
  1.96% 1.96% dd [kernel.kallsyms] [k] rw_verify_area
  1.95% 1.95% dd [kernel.kallsyms] [k] read_iter_zero
  [...]

  # Samples: 9 of event 'branch-miss'
  # Event count (approx.): 9
  #
  # Children Self Command Shared Object Symbol
  # ........ ........ ....... ................. .........................
  #
  22.22% 22.22% dd libc-2.28.so [.] _dl_addr
  11.11% 11.11% dd [kernel.kallsyms] [k] __arch_clear_user
  11.11% 11.11% dd [kernel.kallsyms] [k] __arch_copy_from_user
  11.11% 11.11% dd [kernel.kallsyms] [k] __dentry_kill
  11.11% 11.11% dd [kernel.kallsyms] [k] __efistub_memcpy
  11.11% 11.11% dd ld-2.28.so [.] 0x0000000000012b7c
  11.11% 11.11% dd libc-2.28.so [.] 0x000000000002a980
  11.11% 11.11% dd libc-2.28.so [.] 0x0000000000083340

  # Samples: 29 of event 'remote-access'
  # Event count (approx.): 29
  #
  # Children Self Command Shared Object Symbol
  # ........ ........ ....... ................. ...........................
  #
  41.38% 41.38% dd [kernel.kallsyms] [k] filemap_map_pages
  10.34% 10.34% dd [kernel.kallsyms] [k] unlock_page_memcg
  10.34% 10.34% dd [kernel.kallsyms] [k] unmap_page_range
  6.90% 6.90% dd [kernel.kallsyms] [k] release_pages
  3.45% 3.45% dd [kernel.kallsyms] [k] PageHuge
  3.45% 3.45% dd [kernel.kallsyms] [k] __queue_work
  3.45% 3.45% dd [kernel.kallsyms] [k] page_add_file_rmap
  3.45% 3.45% dd [kernel.kallsyms] [k] page_counter_try_charge
  3.45% 3.45% dd [kernel.kallsyms] [k] page_remove_rmap
  3.45% 3.45% dd [kernel.kallsyms] [k] xas_start
  3.45% 3.45% dd ld-2.28.so [.] 0x0000000000002a1c
  3.45% 3.45% dd ld-2.28.so [.] 0x0000000000008b5c
  3.45% 3.45% dd ld-2.28.so [.] 0x00000000000093cc

Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com> Tested-by: James Clark <james.clark@arm.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Al Grant <al.grant@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: http://lore.kernel.org/lkml/20200530122442.490-4-leo.yan@linaro.org Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-06-01 perf auxtrace: Add four itrace options (Tan Xiaojun)
This patch adds four options to synthesize events, described below: 'f': synthesize first level cache events 'm': synthesize last level cache events 't': synthesize TLB events 'a': synthesize remote access events These four options will be used by ARM SPE as their first consumer. Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com> Tested-by: James Clark <james.clark@arm.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Al Grant <al.grant@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: http://lore.kernel.org/lkml/20200530122442.490-3-leo.yan@linaro.org Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-06-01 perf tools: Move arm-spe-pkt-decoder.h/c to the new dir (Tan Xiaojun)
Create a new arm-spe-decoder directory for subsequent extensions and move arm-spe-pkt-decoder.h/c to this directory. No code changes. Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com> Tested-by: James Clark <james.clark@arm.com> Tested-by: Qi Liu <liuqi115@hisilicon.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Al Grant <al.grant@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: http://lore.kernel.org/lkml/20200530122442.490-2-leo.yan@linaro.org Signed-off-by: James Clark <james.clark@arm.com> Signed-off-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-06-01 perf test: Initialize memory in dwarf-unwind (Ian Rogers)
Avoid a false positive caused by assembly code in arch/x86. In tests, zero the perf_event to avoid uninitialized memory uses. Warnings were caught using clang with -fsanitize=memory. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Quentin Monnet <quentin@isovalent.com> Cc: Stephane Eranian <eranian@google.com> Cc: clang-built-linux@googlegroups.com Link: http://lore.kernel.org/lkml/20200530082015.39162-4-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-06-01 perf tests: Don't tail call optimize in unwind test (Ian Rogers)
The tail call optimization can unexpectedly make the stack smaller and cause the test to fail. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: clang-built-linux@googlegroups.com Cc: Jakub Kicinski <kuba@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Quentin Monnet <quentin@isovalent.com> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20200530082015.39162-3-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-06-01 tools compiler.h: Add attribute to disable tail calls (Ian Rogers)
Tail call optimizations can remove stack frames that are used in unwinding tests. Add an attribute that can be used to disable the tail call optimization. Tested on clang and GCC. Committer notes: Old versions of clang don't like that __attribute__((optimize)), so add an ifdef to make it go away. Signed-off-by: Ian Rogers <irogers@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: clang-built-linux@googlegroups.com Cc: Jakub Kicinski <kuba@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Quentin Monnet <quentin@isovalent.com> Cc: Stephane Eranian <eranian@google.com> Link: http://lore.kernel.org/lkml/20200530082015.39162-2-irogers@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
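[Editor's illustration] A hedged sketch of such an attribute and its use; the macro name follows the patch description, the guard is simplified, and do_unwind() is a hypothetical callee:

  #ifdef __GNUC__
  #define __no_tail_call __attribute__((optimize("no-optimize-sibling-calls")))
  #else
  #define __no_tail_call
  #endif

  /* keep this frame alive so the unwinder can find it */
  static int __no_tail_call test_frame(void)
  {
          return do_unwind();     /* would otherwise become a tail call */
  }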
2020-06-01 selftests/ftrace: Distinguish between hist and synthetic event checks (Tom Zanussi)
With synthetic events now a separate config item as a result of 'tracing: Move synthetic events to a separate file', tests that use both need to explicitly check for hist trigger support rather than relying on hist triggers to pull in synthetic events. Add an additional hist trigger check to all the trigger tests that now require it, otherwise they'll fail if synthetic events but not hist triggers are enabled. Link: http://lkml.kernel.org/r/af36c539006ef2768114b4ed38e6b054f7c7a3bd.1590693308.git.zanussi@kernel.org Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Tom Zanussi <zanussi@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-06-01 KVM: selftests: update hyperv_cpuid with SynDBG tests (Vitaly Kuznetsov)
Update tests to reflect new CPUID capabilities with SYNDBG. Check that we get the right number of entries and that 0x40000000.EAX always returns the correct max leaf. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Jon Doron <arilou@gmail.com> Message-Id: <20200529134543.1127440-7-arilou@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-01 KVM: selftests: VMX preemption timer migration test (Makarand Sonare)
When a nested VM with a VMX-preemption timer is migrated, verify that the nested VM and its parent VM observe the VMX-preemption timer exit close to the original expiration deadline. Signed-off-by: Makarand Sonare <makarandsonare@google.com> Reviewed-by: Jim Mattson <jmattson@google.com> Message-Id: <20200526215107.205814-3-makarandsonare@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-01 selftests: kvm: fix smm test on SVM (Vitaly Kuznetsov)
KVM_CAP_NESTED_STATE is now supported for AMD too, but the smm test acts as if it were still Intel-only. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20200529130407.57176-2-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-01 selftests: kvm: add a SVM version of state-test (Paolo Bonzini)
The test is similar to the existing one for VMX, but simpler because we don't have to test shadow VMCS or vmptrld/vmptrst/vmclear. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-06-01 selftests: kvm: introduce cpu_has_svm() check (Vitaly Kuznetsov)
Many tests will want to check whether the CPU is Intel or AMD in guest code; add cpu_has_svm() as a static inline in svm_util.h. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20200529130407.57176-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
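[Editor's illustration] A hedged sketch of such a check (CPUID leaf 0x80000001, ECX bit 2 advertises SVM on AMD); the in-tree version may differ in detail:

  #include <stdbool.h>
  #include <stdint.h>

  static inline bool cpu_has_svm(void)
  {
          uint32_t eax = 0x80000001, ecx;

          __asm__ __volatile__("cpuid"
                               : "+a"(eax), "=c"(ecx)
                               :
                               : "ebx", "edx");
          return ecx & (1u << 2);   /* CPUID.80000001H:ECX.SVM[bit 2] */
  }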
2020-05-31 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (David S. Miller)
xdp_umem.c had overlapping changes between the 64-bit math fix for the calculation of npgs and the removal of the zerocopy memory type which got rid of the chunk_size_nohdr member. The mlx5 Kconfig conflict is a case where we just take the net-next copy of the Kconfig entry dependency as it takes on the ESWITCH dependency by one level of indirection which is what the 'net' conflicting change is trying to ensure. Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-31 Merge tag 'x86-urgent-2020-05-31' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds)
Pull x86 fixes from Thomas Gleixner: "A pile of x86 fixes: - Prevent a memory leak in ioperm which was caused by the stupid assumption that the exit cleanup is always called for current, which is not the case when fork fails after taking a reference on the ioperm bitmap. - Fix an arithmetic overflow in the DMA code on 32bit systems - Fill gaps in the xstate copy with defaults instead of leaving them uninitialized - Revert: "Make __X32_SYSCALL_BIT be unsigned long" as it turned out that existing user space fails to build" * tag 'x86-urgent-2020-05-31' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/ioperm: Prevent a memory leak when fork fails x86/dma: Fix max PFN arithmetic overflow on 32 bit systems copy_xstate_to_kernel(): don't leave parts of destination uninitialized x86/syscalls: Revert "x86/syscalls: Make __X32_SYSCALL_BIT be unsigned long"
2020-05-30selftests: forwarding: pedit_dsfield: Check counter valuePetr Machata
The previously missing stats_update callback was recently added to act_pedit. Now that iproute2 supports JSON dumping for pedit, extend the pedit_dsfield selftest with a check that would have caught the fact that the callback was missing. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-30selftests: forwarding: mirror_lib: Use mausezahnPetr Machata
Using ping in tests is error-prone, because ping is too smart. On a flaky system (notably in a simulator), when packets don't come quickly enough, more pings are sent, and that throws off counters. Instead, use mausezahn to generate ICMP echo request packets. That also allows us to send them in quicker succession, because ping was made slow in the first place precisely to make the tests work on simulated systems. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-30pstore/platform: Use backend name for console registrationKees Cook
If the pstore backend changes, there's no indication in the logs of which backend the console is using (it always says "pstore"). Instead, pass through the active backend's name. (Also adjust the selftest to match.) Link: https://lore.kernel.org/lkml/20200510202436.63222-5-keescook@chromium.org/ Link: https://lore.kernel.org/lkml/20200526135429.GQ12456@shao2-debian Signed-off-by: Kees Cook <keescook@chromium.org>
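Conceptually, a sketch (not the literal pstore diff) of what naming the console after the backend amounts to; field and function names follow the kernel console API, the pstore glue is paraphrased:

  #include <linux/console.h>
  #include <linux/string.h>

  static void pstore_console_write(struct console *con, const char *s,
                                   unsigned int count)
  {
          /* hand the record to the active pstore backend (elided) */
  }

  static struct console pstore_console = {
          .write = pstore_console_write,
          .index = -1,
  };

  /* backend_name would be psinfo->name in pstore, e.g. "ramoops" or
   * "efi"; struct console::name is a fixed-size array, so copy into it
   * rather than point at the backend's string. */
  static void pstore_register_console(const char *backend_name)
  {
          strscpy(pstore_console.name, backend_name,
                  sizeof(pstore_console.name));
          pstore_console.flags = CON_PRINTBUFFER | CON_ENABLED | CON_ANYTIME;
          register_console(&pstore_console);
  }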
2020-05-30Merge remote-tracking branch 'spi/for-5.8' into spi-nextMark Brown
2020-05-29Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpfDavid S. Miller
Alexei Starovoitov says:

====================
pull-request: bpf 2020-05-29

The following pull-request contains BPF updates for your *net* tree.

We've added 6 non-merge commits during the last 7 day(s) which contain
a total of 4 files changed, 55 insertions(+), 34 deletions(-).

The main changes are:

1) minor verifier fix for fmod_ret progs, from Alexei.

2) af_xdp overflow check, from Bjorn.

3) minor verifier fix for 32bit assignment, from John.

4) powerpc has non-overlapping addr space, from Petr.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-29bpf, selftests: Add a verifier test for assigning 32bit reg states to 64bit onesJohn Fastabend
Add a verifier test for assigning 32bit reg states to 64bit ones, where the 32bit reg holds a constant value of 0. Without the previous kernel verifier.c fix, the test in this patch fails. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/159077335867.6014.2075350327073125374.stgit@john-Precision-5820-Tower
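A hedged sketch of the shape such an entry takes in test_verifier's tests[] array (the name, instructions, and expectations here are illustrative, not the actual test source):

  {
          "sketch: 32bit zero assigned into 64bit reg state",
          .insns = {
          /* w1 = 0: the 32-bit subregister state is a known constant 0 */
          BPF_MOV32_IMM(BPF_REG_1, 0),
          /* w2 = w1: assigns the 32-bit reg state across registers */
          BPF_MOV32_REG(BPF_REG_2, BPF_REG_1),
          /* r0 = r2: 64-bit use; the verifier must know the upper 32
           * bits are zero, i.e. r0 is a known constant 0 */
          BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
          BPF_EXIT_INSN(),
          },
          .result = ACCEPT,
  },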
2020-05-29bpf, selftests: Verifier bounds tests need to be updatedJohn Fastabend
After the previous fix for zero extension, test_verifier tests #65 and #66 now fail. Before the fix we can see the alu32 mov op at insn 10:

  10: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0) R1_w=invP(id=0,
      smin_value=4294967168,smax_value=4294967423,
      umin_value=4294967168,umax_value=4294967423,
      var_off=(0x0; 0x1ffffffff),
      s32_min_value=-2147483648,s32_max_value=2147483647,
      u32_min_value=0,u32_max_value=-1) R10=fp0 fp-8_w=mmmmmmmm
  10: (bc) w1 = w1
  11: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0) R1_w=invP(id=0,
      smin_value=0,smax_value=2147483647,
      umin_value=0,umax_value=4294967295,
      var_off=(0x0; 0xffffffff),
      s32_min_value=-2147483648,s32_max_value=2147483647,
      u32_min_value=0,u32_max_value=-1) R10=fp0 fp-8_w=mmmmmmmm

After the fix, because we have 's32_min_value < 0', step 11 now has 'smax_value=U32_MAX', where before we pulled the s32_max_value bound into smax_value, as seen above at insn 11 with smax_value=2147483647:

  10: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0) R1_w=inv(id=0,
      smin_value=4294967168,smax_value=4294967423,
      umin_value=4294967168,umax_value=4294967423,
      var_off=(0x0; 0x1ffffffff),
      s32_min_value=-2147483648,s32_max_value=2147483647,
      u32_min_value=0,u32_max_value=-1) R10=fp0 fp-8_w=mmmmmmmm
  10: (bc) w1 = w1
  11: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0) R1_w=inv(id=0,
      smin_value=0,smax_value=4294967295,
      umin_value=0,umax_value=4294967295,
      var_off=(0x0; 0xffffffff),
      s32_min_value=-2147483648,s32_max_value=2147483647,
      u32_min_value=0,u32_max_value=-1) R10=fp0 fp-8_w=mmmmmmmm

The fallout of this is that by the time we get to the failing instruction at step 14, where previously we had:

  14: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0) R1_w=inv(id=0,
      smin_value=72057594021150720,smax_value=72057594029539328,
      umin_value=72057594021150720,umax_value=72057594029539328,
      var_off=(0xffffffff000000; 0xffffff),
      s32_min_value=-16777216,s32_max_value=-1,
      u32_min_value=-16777216,u32_max_value=-1) R10=fp0 fp-8_w=mmmmmmmm
  14: (0f) r0 += r1

we now have:

  14: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0) R1_w=inv(id=0,
      smin_value=0,smax_value=72057594037927935,
      umin_value=0,umax_value=72057594037927935,
      var_off=(0x0; 0xffffffffffffff),
      s32_min_value=-2147483648,s32_max_value=2147483647,
      u32_min_value=0,u32_max_value=-1) R10=fp0 fp-8_w=mmmmmmmm
  14: (0f) r0 += r1

In the original step 14, 'smin_value=72057594021150720' trips the logic in the verifier function check_reg_sane_offset():

  if (smin >= BPF_MAX_VAR_OFF || smin <= -BPF_MAX_VAR_OFF) {
          verbose(env, "value %lld makes %s pointer be out of bounds\n",
                  smin, reg_type_str[type]);
          return false;
  }

Specifically, the 'smin <= -BPF_MAX_VAR_OFF' check. But with the fix, at step 14 we have bounds 'smin_value=0', so the above check is not tripped, because BPF_MAX_VAR_OFF=1<<29. We have smin_value=0 here because at step 10 the smaller smin_value=0 means the subtractions at steps 11 and 12 bring smin_value negative:

  11: (17) r1 -= 2147483584
  12: (17) r1 -= 2147483584
  13: (77) r1 >>= 8

Then the shift clears the top bit and smin_value is set to 0. Note that we still have the smax_value in the fixed code, so any reads will fail. An alternative would be to have check_reg_sane_offset() do both the smin and smax value tests.

To fix the test we can omit the 'r1 >>= 8' at line 13. This will change the error string, but keeps the intention of the test as suggested by its title, "check after truncation of boundary-crossing range". If the verifier logic changes, a different value is likely to be thrown in the error, or the error will no longer be thrown, forcing this test to be examined. With this change we see the new state at step 13:

  13: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0) R1_w=invP(id=0,
      smin_value=-4294967168,smax_value=127,
      umin_value=0,umax_value=18446744073709551615,
      s32_min_value=-2147483648,s32_max_value=2147483647,
      u32_min_value=0,u32_max_value=-1) R10=fp0 fp-8_w=mmmmmmmm

This gives the expected out-of-bounds error, "value -4294967168 makes map_value pointer be out of bounds". However, for the unpriv case we now see a different error because of the mixed signed bounds pointer arithmetic. This seems OK, so I've only added the unpriv_errstr for this. Another option would have been to do addition on r1 instead of subtraction, but I slightly favor the approach above. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/159077333942.6014.14004320043595756079.stgit@john-Precision-5820-Tower
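For readers reconstructing the sequence, a hedged sketch of the instruction pattern the walkthrough above refers to, written with the BPF insn macros test_verifier uses; register choices are illustrative and this is not the literal test #65/#66 source:

  #include <linux/filter.h>  /* BPF_MOV32_REG, BPF_ALU64_IMM, BPF_EXIT_INSN */

  /* Sketch of the boundary-crossing sequence discussed above: the alu32
   * mov at insn 10, the two 64-bit subtractions at insns 11-12, and the
   * pointer add at insn 14. Per the fix described above, the shift
   * '13: (77) r1 >>= 8' is omitted so the out-of-bounds error is kept. */
  struct bpf_insn boundary_cross_sketch[] = {
          BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),           /* 10: (bc) w1 = w1 */
          BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 2147483584), /* 11: (17) r1 -= 2147483584 */
          BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 2147483584), /* 12: (17) r1 -= 2147483584 */
          /* 13: (77) r1 >>= 8 -- dropped, see commit message above */
          BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),  /* 14: (0f) r0 += r1 */
          BPF_EXIT_INSN(),
  };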
2020-05-29perf build: Add a LIBPFM4=1 build test entryArnaldo Carvalho de Melo
So that when one runs:

  $ make -C tools/perf build-test

we make sure that recent changes don't break that opt-in build. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrii Nakryiko <andriin@fb.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ian Rogers <irogers@google.com> Cc: Igor Lubashev <ilubashe@akamai.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Jiwei Sun <jiwei.sun@windriver.com> Cc: John Garry <john.garry@huawei.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Martin KaFai Lau <kafai@fb.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Yonghong Song <yhs@fb.com> Cc: yuzhoujian <yuzhoujian@didichuxing.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>