path: root/tools
2019-07-04  bonding: add an option to specify a delay between peer notifications  (Vincent Bernat)
Currently, gratuitous ARP/ND packets are sent every `miimon' milliseconds. This commit allows a user to specify a custom delay through a new option, `peer_notif_delay'. Like for `updelay' and `downdelay', this delay should be a multiple of `miimon' to avoid managing an additional work queue. The configuration logic is copied from `updelay' and `downdelay'. However, the default value cannot be set using a module parameter: Netlink or sysfs should be used to configure this feature. When setting `miimon' to 100 and `peer_notif_delay' to 500, we can observe the 500 ms delay is respected:
20:30:19.354693 ARP, Request who-has 203.0.113.10 tell 203.0.113.10, length 28
20:30:19.874892 ARP, Request who-has 203.0.113.10 tell 203.0.113.10, length 28
20:30:20.394919 ARP, Request who-has 203.0.113.10 tell 203.0.113.10, length 28
20:30:20.914963 ARP, Request who-has 203.0.113.10 tell 203.0.113.10, length 28
In bond_mii_monitor(), I have tried to keep the lock logic readable. The change is due to the fact we cannot rely on a notification to lower the value of `bond->send_peer_notif' as `NETDEV_NOTIFY_PEERS' is only triggered once every N times, while we need to decrement the counter each time. iproute2 also needs to be updated to be able to specify this new attribute through `ip link'. Signed-off-by: Vincent Bernat <vincent@bernat.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-04  powerpc/pseries: Move mm/book3s64/vphn.c under platforms/pseries/  (Naveen N. Rao)
hcall_vphn() is specific to pseries and will be used in a subsequent patch. So, move it to a more appropriate place under arch/powerpc/platforms/pseries. Also merge vphn.h into lppaca.h and update vphn selftest to use the new files. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-07-04  Merge branch 'x86/cpu' into perf/core, to pick up revert  (Ingo Molnar)
perf/core has an earlier version of the x86/cpu tree merged, to avoid conflicts, and due to this we want to pick up this ABI impacting revert as well: 049331f277fe: ("x86/fsgsbase: Revert FSGSBASE support") Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-07-03  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller)
Daniel Borkmann says:
====================
pull-request: bpf 2019-07-03
The following pull-request contains BPF updates for your *net* tree. The main changes are:
1) Fix the interpreter to properly handle BPF_ALU32 | BPF_ARSH on BE architectures, from Jiong.
2) Fix several bugs in the x32 BPF JIT for handling shifts by 0, from Luke and Xi.
3) Fix NULL pointer deref in btf_type_is_resolve_source_only(), from Stanislav.
4) Properly handle the check that forwarding is enabled on the device in bpf_ipv6_fib_lookup() helper code, from Anton.
5) Fix UAPI bpf_prog_info fields alignment for archs that have 16 bit alignment such as m68k, from Baruch.
6) Fix kernel hanging in unregister_netdevice loop while unregistering device bound to XDP socket, from Ilya.
7) Properly terminate tail update in xskq_produce_flush_desc(), from Nathan.
8) Fix broken always_inline handling in test_lwt_seg6local, from Jiri.
9) Fix bpftool to use correct argument in cgroup errors, from Jakub.
10) Fix detaching dummy prog in XDP redirect sample code, from Prashant.
11) Add Jonathan to AF_XDP reviewers, from Björn.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-03  selftests/bpf: test BPF_SOCK_OPS_RTT_CB  (Stanislav Fomichev)
Make sure the callback is invoked for syn-ack and data packet. Cc: Eric Dumazet <edumazet@google.com> Cc: Priyaranjan Jha <priyarjha@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-03  bpf/tools: sync bpf.h  (Stanislav Fomichev)
Sync new bpf_tcp_sock fields and new BPF_PROG_TYPE_SOCK_OPS RTT callback. Cc: Eric Dumazet <edumazet@google.com> Cc: Priyaranjan Jha <priyarjha@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-03  selftests/x86/fsgsbase: Fix some test case bugs  (Andy Lutomirski)
This refactors do_unexpected_base() to clean up some code. It also fixes the following bugs in test_ptrace_write_gsbase(): - Incorrect printf() format string caused crashes. - Hardcoded 0x7 for the gs selector was not reliably correct. It also documents the fact that the test is expected to fail on old kernels. Fixes: a87730cc3acc ("selftests/x86/fsgsbase: Test ptracer-induced GSBASE write with FSGSBASE") Fixes: 1b6858d5a2eb ("selftests/x86/fsgsbase: Test ptracer-induced GSBASE write") Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: "BaeChang Seok" <chang.seok.bae@intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: "BaeChang Seok" <chang.seok.bae@intel.com> Link: https://lkml.kernel.org/r/bab29c84f2475e2c30ddb00f1b877fcd7f4f96a8.1562125333.git.luto@kernel.org
2019-07-03  selftests: bpf: fix inlines in test_lwt_seg6local  (Jiri Benc)
Selftests are reporting this failure in test_lwt_seg6local.sh:
+ ip netns exec ns2 ip -6 route add fb00::6 encap bpf in obj test_lwt_seg6local.o sec encap_srh dev veth2
Error fetching program/map!
Failed to parse eBPF program: Operation not permitted
The problem is that __attribute__((always_inline)) alone is not enough to prevent clang from placing those functions in .text. In that case, .text is not marked as relocatable. See the output of objdump -h test_lwt_seg6local.o:
Idx Name          Size      VMA               LMA               File off  Algn
  0 .text         00003530  0000000000000000  0000000000000000  00000040  2**3
                  CONTENTS, ALLOC, LOAD, READONLY, CODE
This causes the iproute bpf loader to fail in bpf_fetch_prog_sec: bpf_has_call_data returns true but bpf_fetch_prog_relo fails as there is no relocatable .text section in the file. To fix this, convert to 'static __always_inline'.
v2: Use 'static __always_inline' instead of 'static inline __attribute__((always_inline))'
Fixes: c99a84eac026 ("selftests/bpf: test for seg6local End.BPF action") Signed-off-by: Jiri Benc <jbenc@redhat.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
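As an illustration, here is a minimal sketch of the two spellings; the add_one_*() helpers are hypothetical stand-ins, not the seg6local functions the patch touches:

```c
#include <stdio.h>

#ifndef __always_inline
#define __always_inline inline __attribute__((always_inline))
#endif

/* Before: the attribute alone. Clang may still emit an out-of-line
 * copy of the function in .text, which is what broke the loader. */
__attribute__((always_inline)) int add_one_old(int x)
{
	return x + 1;
}

/* After: the form the selftests standardize on; being static and
 * always_inline, no separate .text copy has to survive. */
static __always_inline int add_one_new(int x)
{
	return x + 1;
}

int main(void)
{
	printf("%d %d\n", add_one_old(1), add_one_new(1));
	return 0;
}
```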
2019-07-03  selftests: bpf: standardize to static __always_inline  (Jiri Benc)
The progs for bpf selftests use several different notations to force function inlining. Standardize to what most of them use, static __always_inline. Suggested-by: Song Liu <liu.song.a23@gmail.com> Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-07-03  tools/power/x86: A tool to validate Intel Speed Select commands  (Srinivas Pandruvada)
The Intel(R) Speed Select technologies contain four features.
Performance profile: A non-architectural mechanism that allows multiple optimized performance profiles per system via static and/or dynamic adjustment of core count, workload, Tjmax, TDP, etc. aka ISS in the documentation.
Base frequency: Enables users to increase the guaranteed base frequency on certain cores (high priority cores) in exchange for a lower base frequency on the remaining cores (low priority cores). aka PBF in the documentation.
Turbo frequency: Enables the ability to set different turbo ratio limits on cores based on priority. aka FACT in the documentation.
Core power: An interface that allows the user to define per core/tile priority.
There is multi-level help for commands and options. It can be used to check the required arguments for each feature and for that feature's commands. To start navigating the features, start with:
$ sudo intel-speed-select --help
For help on a specific feature, for example:
$ sudo intel-speed-select perf-profile --help
To get help for a command of a feature, for example:
$ sudo intel-speed-select perf-profile get-lock-status --help
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Acked-by: Len Brown <len.brown@intel.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
2019-07-03  bpf, libbpf, smatch: Fix potential NULL pointer dereference  (Leo Yan)
Based on the following report from Smatch, fix the potential NULL pointer dereference:
tools/lib/bpf/libbpf.c:3493 bpf_prog_load_xattr() warn: variable dereferenced before check 'attr' (see line 3483)
3479 int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
3480                         struct bpf_object **pobj, int *prog_fd)
3481 {
3482         struct bpf_object_open_attr open_attr = {
3483                 .file = attr->file,
3484                 .prog_type = attr->prog_type,
                                  ^^^^^^
3485         };
At the head of the function, it directly accesses 'attr' without checking whether it is a NULL pointer. This patch moves the value assignments after validating 'attr' and 'attr->file'. Signed-off-by: Leo Yan <leo.yan@linaro.org> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
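A hedged sketch of the fix pattern, with mock types and an illustrative helper name (the real code lives in tools/lib/bpf/libbpf.c): validate the pointer before any member access, then populate the open attributes.

```c
#include <errno.h>
#include <stddef.h>

/* Minimal stand-in types so the pattern compiles on its own. */
struct bpf_prog_load_attr_sketch { const char *file; int prog_type; };
struct bpf_object_open_attr_sketch { const char *file; int prog_type; };

/* Check 'attr' (and 'attr->file') first instead of dereferencing it in
 * an initializer, which is exactly what Smatch complained about. */
static int prog_load_xattr_sketch(const struct bpf_prog_load_attr_sketch *attr,
				  struct bpf_object_open_attr_sketch *open_attr)
{
	if (!attr || !attr->file)
		return -EINVAL;

	/* Only touch 'attr' after it has been validated. */
	open_attr->file = attr->file;
	open_attr->prog_type = attr->prog_type;
	return 0;
}

int main(void)
{
	struct bpf_object_open_attr_sketch open_attr;

	/* A NULL 'attr' must be rejected instead of dereferenced. */
	return prog_load_xattr_sketch(NULL, &open_attr) == -EINVAL ? 0 : 1;
}
```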
2019-07-03  libbpf: fix GCC8 warning for strncpy  (Andrii Nakryiko)
GCC 8 started emitting a warning about using strncpy with a byte count exactly equal to the destination size, which is generally unsafe, as it can lead to a non-zero-terminated string being copied. Use IFNAMSIZ - 1 as the number of bytes to ensure the name is always zero-terminated. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Cc: Magnus Karlsson <magnus.karlsson@intel.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
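For reference, a small sketch of the safe-copy pattern described above; the explicit terminator store is a defensive extra shown for completeness, not necessarily part of the libbpf change itself:

```c
#include <net/if.h>	/* IFNAMSIZ */
#include <stdio.h>
#include <string.h>

/* Copy at most IFNAMSIZ - 1 bytes so the last byte can hold the
 * terminator; this avoids GCC 8's "bound equals destination size"
 * complaint from -Wstringop-truncation. */
static void copy_ifname(char dst[IFNAMSIZ], const char *src)
{
	strncpy(dst, src, IFNAMSIZ - 1);
	dst[IFNAMSIZ - 1] = '\0';
}

int main(void)
{
	char name[IFNAMSIZ];

	copy_ifname(name, "a-very-long-interface-name-that-gets-cut");
	printf("%s\n", name);
	return 0;
}
```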
2019-07-03  selftests: bpf: add tests for shifts by zero  (Luke Nelson)
There are currently no tests for ALU64 shift operations when the shift amount is 0. This adds 6 new tests to make sure they are equivalent to a no-op. The x32 JIT had such bugs that could have been caught by these tests. Cc: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
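Illustrative only, in plain C rather than the BPF selftest format: the property the new tests assert is that a 64-bit shift by zero leaves the value untouched, which a JIT lowering 64-bit shifts to pairs of 32-bit register operations must special-case.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t v = 0x8000000000000001ULL;

	/* LSH, RSH and ARSH by 0 must all behave as a no-op. */
	assert((v << 0) == v);
	assert((v >> 0) == v);
	assert((uint64_t)((int64_t)v >> 0) == v);

	puts("shift-by-zero is a no-op");
	return 0;
}
```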
2019-07-03  xfrm: policy: fix bydst hlist corruption on hash rebuild  (Florian Westphal)
syzbot reported following spat: BUG: KASAN: use-after-free in __write_once_size include/linux/compiler.h:221 BUG: KASAN: use-after-free in hlist_del_rcu include/linux/rculist.h:455 BUG: KASAN: use-after-free in xfrm_hash_rebuild+0xa0d/0x1000 net/xfrm/xfrm_policy.c:1318 Write of size 8 at addr ffff888095e79c00 by task kworker/1:3/8066 Workqueue: events xfrm_hash_rebuild Call Trace: __write_once_size include/linux/compiler.h:221 [inline] hlist_del_rcu include/linux/rculist.h:455 [inline] xfrm_hash_rebuild+0xa0d/0x1000 net/xfrm/xfrm_policy.c:1318 process_one_work+0x814/0x1130 kernel/workqueue.c:2269 Allocated by task 8064: __kmalloc+0x23c/0x310 mm/slab.c:3669 kzalloc include/linux/slab.h:742 [inline] xfrm_hash_alloc+0x38/0xe0 net/xfrm/xfrm_hash.c:21 xfrm_policy_init net/xfrm/xfrm_policy.c:4036 [inline] xfrm_net_init+0x269/0xd60 net/xfrm/xfrm_policy.c:4120 ops_init+0x336/0x420 net/core/net_namespace.c:130 setup_net+0x212/0x690 net/core/net_namespace.c:316 The faulting address is the address of the old chain head, free'd by xfrm_hash_resize(). In xfrm_hash_rehash(), chain heads get re-initialized without any hlist_del_rcu: for (i = hmask; i >= 0; i--) INIT_HLIST_HEAD(odst + i); Then, hlist_del_rcu() gets called on the about to-be-reinserted policy when iterating the per-net list of policies. hlist_del_rcu() will then make chain->first be nonzero again: static inline void __hlist_del(struct hlist_node *n) { struct hlist_node *next = n->next; // address of next element in list struct hlist_node **pprev = n->pprev;// location of previous elem, this // can point at chain->first WRITE_ONCE(*pprev, next); // chain->first points to next elem if (next) next->pprev = pprev; Then, when we walk chainlist to find insertion point, we may find a non-empty list even though we're supposedly reinserting the first policy to an empty chain. To fix this first unlink all exact and inexact policies instead of zeroing the list heads. Add the commands equivalent to the syzbot reproducer to xfrm_policy.sh, without fix KASAN catches the corruption as it happens, SLUB poisoning detects it a bit later. Reported-by: syzbot+0165480d4ef07360eeda@syzkaller.appspotmail.com Fixes: 1548bc4e0512 ("xfrm: policy: delete inexact policies from inexact list on hash rebuild") Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2019-07-03  Merge branch 'timers/vdso' into timers/core  (Thomas Gleixner)
so the hyper-v clocksource update can be applied.
2019-07-03  selftests/powerpc: Add missing newline at end of file  (Geert Uytterhoeven)
"git diff" says: \ No newline at end of file after modifying the file. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> [mpe: Rebase since addition of another test] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-07-03  perf script: Allow specifying the files to process guest samples  (Arnaldo Carvalho de Melo)
The 'perf kvm' command set up things so that we can record, report, top, etc, but not 'script', so make 'perf script' be able to process samples by allowing to pass guest kallsyms, vmlinux, modules, etc, and if at least one of those is provided, set perf_guest to true so that guest samples get properly resolved. Testing it: # perf kvm --guest --guestkallsyms /wb/rhel6.kallsyms --guestmodules /wb/rhel6.modules record -e cycles:Gk ^C[ perf record: Woken up 7 times to write data ] [ perf record: Captured and wrote 3.602 MB perf.data.guest (10492 samples) ] # # perf evlist -i perf.data.guest cycles:Gk # perf evlist -v -i perf.data.guest cycles:Gk: size: 112, { sample_period, sample_freq }: 4000, sample_type: IP|TID|TIME|CPU|PERIOD, read_format: ID, disabled: 1, inherit: 1, exclude_user: 1, exclude_hv: 1, mmap: 1, comm: 1, freq: 1, task: 1, sample_id_all: 1, exclude_host: 1, mmap2: 1, comm_exec: 1, ksymbol: 1, bpf_event: 1 # # perf kvm --guestkallsyms /wb/rhel6.kallsyms --guestmodules /wb/rhel6.modules report --stdio -s sym | head -30 # To display the perf.data header info, please use --header/--header-only options. # # # Total Lost Samples: 0 # # Samples: 10K of event 'cycles:Gk' # Event count (approx.): 2434201408 # # Overhead Symbol # ........ .............................................. # 11.93% [g] avtab_search_node 3.95% [g] sidtab_context_to_sid 2.41% [g] n_tty_write 2.20% [g] _spin_unlock_irqrestore 1.37% [g] _aesni_dec4 1.33% [g] kmem_cache_alloc 1.07% [g] native_write_cr0 0.99% [g] kfree 0.95% [g] _spin_lock 0.91% [g] __memset 0.87% [g] schedule 0.83% [g] _spin_lock_irqsave 0.76% [g] __kmalloc 0.67% [g] avc_has_perm_noaudit 0.66% [g] kmem_cache_free 0.65% [g] glue_xts_crypt_128bit 0.59% [g] __d_lookup 0.59% [g] __audit_syscall_exit 0.56% [g] __memcpy # Then, when trying to use perf script to generate a python script and then process the events after adding a python hook for non-tracepoint events: # perf script -i perf.data.guest -g python generated Python script: perf-script.py # vim perf-script.py # tail -2 perf-script.py def process_event(param_dict): print(param_dict["symbol"]) # # perf script -i perf.data.guest -s perf-script.py | head in trace_begin vmx_vmexit vmx_vmexit vmx_vmexit vmx_vmexit vmx_vmexit vmx_vmexit vmx_vmexit vmx_vmexit vmx_vmexit 231 # We'd see just the vmx_vmexit, i.e. the samples from the guest don't show up. After this patch: # perf script --guestkallsyms /wb/rhel6.kallsyms --guestmodules /wb/rhel6.modules -i perf.data.guest -s perf-script.py 2> /dev/null | head -30 in trace_begin apic_timer_interrupt apic_timer_interrupt apic_timer_interrupt apic_timer_interrupt apic_timer_interrupt save_args do_timer drain_array inode_permission avc_has_perm_noaudit run_timer_softirq apic_timer_interrupt apic_timer_interrupt apic_timer_interrupt apic_timer_interrupt apic_timer_interrupt kvm_guest_apic_eoi_write run_posix_cpu_timers _spin_lock handle_pte_fault rcu_irq_enter delay_tsc delay_tsc native_read_tsc apic_timer_interrupt sys_open internal_add_timer list_del rcu_exit_nohz # Jiri Olsa noticed we need to set 'perf_guest' to true if we want to process guest samples and I made it be set if one of the guest files settings get set via the command line options added in this patch, that match those present in the 'perf kvm' command. We probably want to have 'perf record', 'perf report' etc to notice that there are guest samples and do the right thing, which is to look for files with some suffix that make it be associated with the guest used to collect the samples, i.e. 
if a vmlinux file is passed, we can get the build-id from it, if not some other identifier or simply looking for "kallsyms.guest", for instance, in the current directory. Reported-by: Mariano Pache <npache@redhat.com> Tested-by: Mariano Pache <npache@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Yarygin <yarygin@linux.vnet.ibm.com> Cc: Ali Raza <alirazabhutta.10@gmail.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Joe Mario <jmario@redhat.com> Cc: Larry Woodman <lwoodman@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Orran Krieger <okrieger@redhat.com> Cc: Ramkumar Ramachandra <artagnon@gmail.com> Cc: Yunlong Song <yunlong.song@huawei.com> Link: https://lkml.kernel.org/n/tip-d54gj64rerlxcqsrod05biwn@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  selftests/net: skip psock_tpacket test if KALLSYMS was not enabled  (Po-Hsu Lin)
The psock_tpacket test needs to access /proc/kallsyms, which requires the kernel config CONFIG_KALLSYMS to be enabled. Apart from adding CONFIG_KALLSYMS to the net/config file here, checking for the file's existence before running the test helps avoid a false-positive result when running it directly, with the following command, against a kernel that has CONFIG_KALLSYMS disabled:
make -C tools/testing/selftests TARGETS=net run_tests
Signed-off-by: Po-Hsu Lin <po-hsu.lin@canonical.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
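A minimal sketch of the guard described above; the actual selftest code differs, and the exit code 4 follows the kselftest skip convention:

```c
#include <stdio.h>
#include <unistd.h>

#define KSFT_SKIP 4	/* kselftest "skipped" exit code */

int main(void)
{
	/* Skip, rather than fail, when kallsyms is not available. */
	if (access("/proc/kallsyms", R_OK)) {
		fprintf(stderr,
			"/proc/kallsyms not available (CONFIG_KALLSYMS disabled?), skipping\n");
		return KSFT_SKIP;
	}

	/* ... the real test would set up its packet sockets here ... */
	return 0;
}
```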
2019-07-02  kselftests: cgroup: remove duplicated include from test_freezer.c  (YueHaibing)
Remove duplicated include. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2019-07-02  perf tools metric: Don't include duration_time in group  (Andi Kleen)
The Memory_BW metric generates groups including duration_time, which maps to a software event. For some reason this makes the group always not count. Always put duration_time outside a group when generating metrics. It's always the same time, so no need to group it. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Link: http://lkml.kernel.org/r/20190628220737.13259-3-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf list: Avoid extra : for --raw metrics  (Andi Kleen)
When printing the metrics raw, don't print : after the metricgroups. This helps the command line completion to complete those too. Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Link: http://lkml.kernel.org/r/20190628220737.13259-2-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf vendor events intel: Metric fixes for SKX/CLX  (Andi Kleen)
- Add a missing filter for the DRAM_Latency / DRAM_Parallel_Reads metrics
- Remove the useless PMM_* metrics from Skylake
Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Link: http://lkml.kernel.org/r/20190628220737.13259-1-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf tools: Fix typos / broken sentences  (Andi Kleen)
- Fix a typo in the man page
- Fix a tip that doesn't make any sense
Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Link: http://lkml.kernel.org/r/20190628220900.13741-1-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf jevents: Add support for Hisi hip08 L3C PMU aliasing  (John Garry)
Add support for Hisi hip08 L3C PMU aliasing. The kernel driver is in drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c Signed-off-by: John Garry <john.garry@huawei.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Hendrik Brueckner <brueckner@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Thomas Richter <tmricht@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linuxarm@huawei.com Link: http://lkml.kernel.org/r/1561732552-143038-5-git-send-email-john.garry@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf jevents: Add support for Hisi hip08 HHA PMU aliasing  (John Garry)
Add support for Hisi hip08 HHA PMU aliasing. The kernel driver is in drivers/perf/hisilicon/hisi_uncore_hha_pmu.c Signed-off-by: John Garry <john.garry@huawei.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Hendrik Brueckner <brueckner@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Thomas Richter <tmricht@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linuxarm@huawei.com Link: http://lkml.kernel.org/r/1561732552-143038-4-git-send-email-john.garry@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf jevents: Add support for Hisi hip08 DDRC PMU aliasing  (John Garry)
Add support for Hisi hip08 DDRC PMU aliasing. We can now do something like this: $perf list [snip] uncore ddrc: uncore_hisi_ddrc.act_cmd [DDRC active commands. Unit: hisi_sccl,ddrc] uncore_hisi_ddrc.flux_rcmd [DDRC read commands. Unit: hisi_sccl,ddrc] uncore_hisi_ddrc.flux_wcmd [DDRC write commands. Unit: hisi_sccl,ddrc] uncore_hisi_ddrc.flux_wr [DDRC precharge commands. Unit: hisi_sccl,ddrc] uncore_hisi_ddrc.rnk_chg [DDRC rank commands. Unit: hisi_sccl,ddrc] uncore_hisi_ddrc.rw_chg [DDRC read and write changes. Unit: hisi_sccl,ddrc] Performance counter stats for 'system wide': 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl1_ddrc0] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl3_ddrc1] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl5_ddrc2] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl7_ddrc3] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl5_ddrc0] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl7_ddrc1] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl1_ddrc3] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl1_ddrc1] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl3_ddrc2] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl5_ddrc3] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl3_ddrc0] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl5_ddrc1] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl7_ddrc2] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl7_ddrc0] 20,421 uncore_hisi_ddrc.flux_rcmd [hisi_sccl1_ddrc2] 0 uncore_hisi_ddrc.flux_rcmd [hisi_sccl3_ddrc3] 1.001559011 seconds time elapsed The kernel driver is in drivers/perf/hisilicon/hisi_uncore_ddrc_pmu.c Signed-off-by: John Garry <john.garry@huawei.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Hendrik Brueckner <brueckner@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Thomas Richter <tmricht@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linuxarm@huawei.com Link: http://lkml.kernel.org/r/1561732552-143038-3-git-send-email-john.garry@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf pmu: Support more complex PMU event aliasing  (John Garry)
The jevent "Unit" field is used for uncore PMU alias definition. The form uncore_pmu_example_X is supported, where "X" is a wildcard, to support multiple instances of the same PMU in a system. Unfortunately this format not suitable for all uncore PMUs; take the Hisi DDRC uncore PMU for example, where the name is in the form hisi_scclX_ddrcY. For for current jevent parsing, we would be required to hardcode an uncore alias translation for each possible value of X. This is not scalable. Instead, add support for "Unit" field in the form "hisi_sccl,ddrc", where we can match by hisi_scclX and ddrcY. Tokens in Unit field are delimited by ','. Signed-off-by: John Garry <john.garry@huawei.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Hendrik Brueckner <brueckner@linux.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Thomas Richter <tmricht@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linuxarm@huawei.com Link: http://lkml.kernel.org/r/1561732552-143038-2-git-send-email-john.garry@huawei.com [ Shut up older gcc complianing about the last arg to strtok_r() being uninitialized, set that tmp to NULL ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  memremap: provide an optional internal refcount in struct dev_pagemap  (Christoph Hellwig)
Provide an internal refcounting logic if no ->ref field is provided in the pagemap passed into devm_memremap_pages so that callers don't have to reinvent it poorly. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Tested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-02  memremap: pass a struct dev_pagemap to ->kill and ->cleanup  (Christoph Hellwig)
Passing the actual typed structure leads to more understandable code vs just passing the ref member. Reported-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Reviewed-by: Jason Gunthorpe <jgg@mellanox.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Tested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-02  memremap: move dev_pagemap callbacks into a separate structure  (Christoph Hellwig)
The dev_pagemap structure is growing too many callbacks. Move them into a separate ops structure so that they are not duplicated for multiple instances, and so that an attacker can't easily overwrite them. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Reviewed-by: Jason Gunthorpe <jgg@mellanox.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Tested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
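Roughly, the shape of the refactor is sketched below; the names carry a _sketch suffix because this is an illustration of the idea, not the kernel's actual definitions.

```c
/* Per-instance function pointers collapse into one shared, const ops
 * table that every pagemap instance points at, so the callbacks are
 * not duplicated and can live in read-only memory. */
struct page;
struct dev_pagemap_sketch;

struct dev_pagemap_ops_sketch {
	void (*page_free)(struct page *page);
	void (*kill)(struct dev_pagemap_sketch *pgmap);
	void (*cleanup)(struct dev_pagemap_sketch *pgmap);
};

struct dev_pagemap_sketch {
	const struct dev_pagemap_ops_sketch *ops;	/* shared, read-only */
	/* ... resource, refcount and the rest of the per-instance state ... */
};
```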
2019-07-02  KVM: nVMX: Change KVM_STATE_NESTED_EVMCS to signal vmcs12 is copied from eVMCS  (Liran Alon)
Currently KVM_STATE_NESTED_EVMCS is used to signal that the eVMCS capability is enabled on the vCPU, as indicated by vmx->nested.enlightened_vmcs_enabled. This is quite bizarre, as the userspace VMM should make sure to expose the same vCPU with the same CPUID values on both source and destination. If a vCPU is exposed with eVMCS support in CPUID, it is also expected to enable the KVM_CAP_HYPERV_ENLIGHTENED_VMCS capability. Therefore, KVM_STATE_NESTED_EVMCS is redundant. KVM_STATE_NESTED_EVMCS is currently used on the restore path (vmx_set_nested_state()) only to enable the eVMCS capability in KVM and to signal need_vmcs12_sync, such that on the next VMEntry to the guest nested_sync_from_vmcs12() will be called to sync the vmcs12 content into the eVMCS in guest memory. However, because restoring nested state is rare enough, we could have just modified vmx_set_nested_state() to always signal need_vmcs12_sync. From all the above, it seems that we could have simply removed the usage of KVM_STATE_NESTED_EVMCS. However, in order to preserve backwards migration compatibility, we cannot do that (vmx_get_nested_state() needs to signal the flag when migrating from a new kernel to an old kernel). Returning KVM_STATE_NESTED_EVMCS when the vCPU merely has eVMCS enabled has the bad side-effect of the userspace VMM having to send nested state from source to destination as part of the migration stream, even if the guest has never used eVMCS because it doesn't even run a nested hypervisor workload. This requires the destination userspace VMM and KVM to support setting nested state, which makes it more difficult to migrate from a new host to an older host. To avoid this, change KVM_STATE_NESTED_EVMCS to signal that eVMCS is not only enabled but also active, i.e. the guest has made some eVMCS active via an enlightened VMEntry, i.e. vmcs12 is copied from the eVMCS and therefore should be restored into the eVMCS resident in memory (by copy_vmcs12_to_enlightened()). Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-02  perf diff: Documentation -c cycles option  (Jin Yao)
Document the new computation selection 'cycles'.
v4: Change the column 'Block cycles diff [start:end]' to '[Program Block Range] Cycles Diff'
Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1561713784-30533-8-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf diff: Print the basic block cycles diff  (Jin Yao)
$ perf record -b ./div $ perf record -b ./div Following is the default perf diff output $ perf diff # Event 'cycles' # # Baseline Delta Abs Shared Object Symbol # ........ ......... ................ .................................. # 48.75% +0.33% div [.] main 8.21% -0.20% div [.] compute_flag 19.02% -0.12% libc-2.23.so [.] __random_r 16.17% -0.09% libc-2.23.so [.] __random 2.27% -0.03% div [.] rand@plt +0.02% [i915] [k] gen8_irq_handler 5.52% +0.02% libc-2.23.so [.] rand This patch creates a new computation selection 'cycles'. $ perf diff -c cycles # Event 'cycles' # # Baseline [Program Block Range] Cycles Diff Shared Object Symbol # ........ ....................................... ......................................... # 48.75% [div.c:42 -> div.c:45] 147 div [.] main 48.75% [div.c:31 -> div.c:40] 4 div [.] main 48.75% [div.c:40 -> div.c:40] 0 div [.] main 48.75% [div.c:42 -> div.c:42] 0 div [.] main 48.75% [div.c:42 -> div.c:44] 0 div [.] main 19.02% [random_r.c:357 -> random_r.c:360] 0 libc-2.23.so [.] __random_r 19.02% [random_r.c:357 -> random_r.c:373] 0 libc-2.23.so [.] __random_r 19.02% [random_r.c:357 -> random_r.c:376] 0 libc-2.23.so [.] __random_r 19.02% [random_r.c:357 -> random_r.c:380] 0 libc-2.23.so [.] __random_r 19.02% [random_r.c:357 -> random_r.c:392] 0 libc-2.23.so [.] __random_r 16.17% [random.c:288 -> random.c:291] 0 libc-2.23.so [.] __random 16.17% [random.c:288 -> random.c:291] 0 libc-2.23.so [.] __random 16.17% [random.c:288 -> random.c:295] 0 libc-2.23.so [.] __random 16.17% [random.c:288 -> random.c:297] 0 libc-2.23.so [.] __random 16.17% [random.c:291 -> random.c:291] 0 libc-2.23.so [.] __random 16.17% [random.c:293 -> random.c:293] 0 libc-2.23.so [.] __random 8.21% [div.c:22 -> div.c:22] 148 div [.] compute_flag 8.21% [div.c:22 -> div.c:25] 0 div [.] compute_flag 8.21% [div.c:27 -> div.c:28] 0 div [.] compute_flag 5.52% [rand.c:26 -> rand.c:27] 0 libc-2.23.so [.] rand 5.52% [rand.c:26 -> rand.c:28] 0 libc-2.23.so [.] rand 2.27% [rand@plt+0 -> rand@plt+0] 0 div [.] rand@plt 0.01% [entry_64.S:694 -> entry_64.S:694] 16 [vmlinux] [k] native_irq_return_iret 0.00% [fair.c:7676 -> fair.c:7665] 162 [vmlinux] [k] update_blocked_averages "[Program Block Range]" indicates the range of program basic block (start -> end). If we can find the source line it prints the source line otherwise it prints the symbol+offset instead. v4: --- Use source lines or symbol+offset to indicate the basic block. It should be easier to understand. v3: --- Cast 'struct hist_entry' to 'struct block_hist' in hist_entry__block_fprintf. Use symbol_conf.report_block to check if executing hist_entry__block_fprintf. v2: --- Keep standard perf diff format and display the 'Baseline' and 'Shared Object'. The output is sorted by "Baseline" and the basic blocks in the same function are sorted by cycles diff. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1561713784-30533-7-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf diff: Link same basic blocks among different data  (Jin Yao)
The target is to compare the performance difference (cycles diff) of the same basic blocks in different data files. The same basic block means the same function, the same start address and the same end address. This patch finds the same basic blocks from different data files, links them together and resorts them by the cycles diff.
v3: The block stuff is maintained by the new structure 'block_hist', so this patch is updated accordingly.
v2: Since the basic block hists are now kept per symbol, the patch only links the basic block hists for the same symbol in different data files.
Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1561713784-30533-6-git-send-email-yao.jin@linux.intel.com [ sym->name is an array, not a pointer, so no need to check it for NULL, fixes the build in some distros ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf diff: Use hists to manage basic blocks per symbol  (Jin Yao)
The hist__account_cycles() can account cycles per basic block. The basic block information is saved in cycles_hist structure. This patch processes each symbol, get basic blocks from cycles_hist and add the basic block entries to a new hists (in 'struct block_hist'). Using a hists is because we need to compare, sort and print the basic blocks later. v6: --- Since 'ops' argument is removed from hists__add_entry_block, update the code accordingly. No functional change. v5: --- Since now we still carry block_info in 'struct hist_entry' we don't need to use our own new/free ops for hist entries. And the block_info is released in hist_entry__delete. v3: --- 1. In v2, we put block stuffs in 'struct hist_entry', but it's not a good design. In v3, we create a new 'struct block_hist' and cast the 'struct hist_entry' to 'struct block_hist' in some places, which can avoid adding new stuffs in 'struct hist_entry'. 2. abs() -> labs(), in block_cycles_diff_cmp(). v2: --- v1 adds the basic block entries to per data-file hists but v2 adds the basic block entries to per symbol hists. That is to keep current perf-diff format. Will show the result in next patches. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1561713784-30533-5-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf diff: Check if all data files with branch stacks  (Jin Yao)
We will expand perf diff to support diffing cycles of individual program blocks, which requires all data files to have branch stacks. This patch checks HEADER_BRANCH_STACK in the header, and only sets the has_br_stack flag when HEADER_BRANCH_STACK is set in all data files.
v2: Move check_file_brstack() from __cmd_diff() to cmd_diff(), because a later patch will check the 'has_br_stack' flag before ui_init().
Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1561713784-30533-4-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf hists: Add block_info in hist_entry  (Jin Yao)
The block_info contains the program basic block information, i.e. the start address and the end address of this basic block and how many cycles it takes. We need to compare, sort and even print out the basic blocks in some order, e.g. sorted by cycles. For this purpose, we add a block_info field to hist_entry. In order not to impact the current interface, we create a new function hists__add_entry_block.
v6: Remove the 'ops' argument in hists__add_entry_block
Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1561713784-30533-3-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  perf symbol: Create block_info structure  (Jin Yao)
'perf diff' currently can only diff symbols (functions). We should expand it to diff cycles of individual program blocks as reported by timed LBR. This would allow identifying changes in specific code accurately. We need a new structure to maintain the basic block information, such as the symbol (function), the start/end addresses of this block, and cycles. This patch creates this structure along with some ops. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1561713784-30533-2-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
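As a rough idea of what such a structure might carry (field names are guesses for illustration; the authoritative definition lives in the perf sources):

```c
#include <stdint.h>

struct symbol;	/* the owning function's symbol, defined elsewhere in perf */

struct block_info_sketch {
	struct symbol	*sym;		/* function this block belongs to */
	uint64_t	 start;		/* first address of the block */
	uint64_t	 end;		/* last address of the block */
	uint64_t	 cycles;	/* cycles spent in the block (timed LBR) */
	uint64_t	 cycles_aggr;	/* cycles aggregated across samples */
	int		 num;		/* number of times the block was hit */
	int		 refcount;	/* the block can be shared by hist entries */
};
```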
2019-07-02  objtool: Fix build by linking against tools/lib/ctype.o sources  (Jiri Olsa)
Fix the objtool build, which now depends on _ctype via the isspace() call added by the skip_spaces() patch. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: André Goddard Rosa <andre.goddard@gmail.com> Cc: Clark Williams <williams@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: 7bd330de43fd ("tools lib: Adopt skip_spaces() from the kernel sources") Link: http://lkml.kernel.org/r/20190702121240.GB12694@krava Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-02  selftests/x86: Test SYSCALL and SYSENTER manually with TF set  (Andy Lutomirski)
Make sure that both variants of the nasty TF-in-compat-syscall case are exercised regardless of which vendor's CPU is running the tests. Also change the intentional signal after SYSCALL to use ud2, which is a lot more comprehensible. This crashes the kernel due to an FSGSBASE bug right now. This test *also* detects a bug in KVM when run on an Intel host. KVM people, feel free to use it to help debug. There's a bunch of code in this test to warn instead of going into an infinite loop when the bug gets triggered. Reported-by: Vegard Nossum <vegard.nossum@oracle.com> Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: "BaeChang Seok" <chang.seok.bae@intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: kvm@vger.kernel.org Cc: "Bae, Chang Seok" <chang.seok.bae@intel.com> Link: https://lkml.kernel.org/r/5f5de10441ab2e3005538b4c33be9b1965d1bb63.1562035429.git.luto@kernel.org
2019-07-01  blackhole_dev: add a selftest  (Mahesh Bandewar)
Since this is not really a device with all capabilities, this test ensures that it has *enough* to make it through the data path without causing unwanted side-effects (read crash!). Signed-off-by: Mahesh Bandewar <maheshb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-01  tc-testing: added tdc tests for prio qdisc  (Roman Mashak)
Signed-off-by: Roman Mashak <mrv@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-01  tc-testing: updated mirred action tests with batch create/delete  (Roman Mashak)
Signed-off-by: Roman Mashak <mrv@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-01  selftests: add a test case for cls_lower handle overflow  (Davide Caratti)
Reported-by: Li Shuang <shuali@redhat.com> Signed-off-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-01  perf jevents: Use nonlocal include statements in pmu-events.c  (Luke Mujica)
Change pmu-events.c to not use local include statements. The code that creates the include statements for pmu-events.c is in jevents.c. pmu-events.c is a generated file, and for build systems that put generated files in a separate directory, include statements with local pathing cannot find non-generated files. Signed-off-by: Luke Mujica <lukemujica@google.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Numfor Mbiziwo-Tiapo <nums@google.com> Cc: Stephane Eranian <eranian@google.com> Link: https://lkml.kernel.org/n/tip-prgnwmaoo1pv9zz4vnv1bjaj@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-01  perf annotate: Add csky support  (Mao Han)
This patch add basic arch initialization and instruction associate support for the csky CPU architecture. E.g.: $ perf annotate --stdio2 Samples: 161 of event 'cpu-clock:pppH', 4000 Hz, Event count (approx.): 40250000, [percent: local period] test_4() /usr/lib/perf-test/callchain_test Percent Disassembly of section .text: 00008420 <test_4>: test_4(): subi sp, sp, 4 st.w r8, (sp, 0x0) mov r8, sp subi sp, sp, 8 subi r3, r8, 4 movi r2, 0 st.w r2, (r3, 0x0) ↓ br 2e 100.00 14: subi r3, r8, 4 ld.w r2, (r3, 0x0) subi r3, r8, 8 st.w r2, (r3, 0x0) subi r3, r8, 4 ld.w r3, (r3, 0x0) addi r2, r3, 1 subi r3, r8, 4 st.w r2, (r3, 0x0) 2e: subi r3, r8, 4 ld.w r2, (r3, 0x0) lrw r3, 0x98967f // 8598 <main+0x28> cmplt r3, r2 ↑ bf 14 mov r0, r0 mov r0, r0 mov sp, r8 ld.w r8, (sp, 0x0) addi sp, sp, 4 ← rts Signed-off-by: Mao Han <han_mao@c-sky.com> Acked-by: Guo Ren <guoren@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-csky@vger.kernel.org Link: http://lkml.kernel.org/r/d874d7782d9acdad5d98f2f5c4a6fb26fbe41c5d.1561531557.git.han_mao@c-sky.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-01  perf stat: Fix metrics with --no-merge  (Andi Kleen)
Since commit 8c5421c016a4 ("perf pmu: Display pmu name when printing unmerged events in stat"), using --no-merge adds the PMU name to the evsel name. This breaks the metric value lookup because the parser doesn't know about it. Remove the extra postfixes for the metric evaluation. Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Agustin Vega-Frias <agustinv@codeaurora.org> Cc: Kan Liang <kan.liang@linux.intel.com> Fixes: 8c5421c016a4 ("perf pmu: Display pmu name when printing unmerged events in stat") Link: http://lkml.kernel.org/r/20190624193711.35241-5-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-01  perf stat: Fix group lookup for metric group  (Andi Kleen)
The metric group code tries to find a group it added earlier in the evlist. Fix the lookup to correctly handle groups that partially overlap: when a sub-string match fails and we reset the match, we have to compare the first element again. I also renamed the find_evsel function to find_evsel_group to make its purpose clearer. With the earlier changes this fixes:
Before:
% perf stat -M UPI,IPC sleep 1
...
1,032,922 uops_retired.retire_slots # 1.1 UPI
1,896,096 inst_retired.any
1,896,096 inst_retired.any
1,177,254 cpu_clk_unhalted.thread
After:
% perf stat -M UPI,IPC sleep 1
...
1,013,193 uops_retired.retire_slots # 1.1 UPI
932,033 inst_retired.any
932,033 inst_retired.any # 0.9 IPC
1,091,245 cpu_clk_unhalted.thread
Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Fixes: b18f3e365019 ("perf stat: Support JSON metrics in perf stat") Link: http://lkml.kernel.org/r/20190624193711.35241-4-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
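The restart requirement is the classic naive-substring-search detail; a generic hedged sketch (names and data are illustrative, not the perf code) of a lookup that re-tests the current element after a reset:

```c
#include <stdio.h>
#include <string.h>

/* Find where 'group' occurs as a contiguous run inside 'list'.
 * On a mismatch after a partial match, back up and compare the
 * candidate element again as a possible new start, otherwise
 * partially overlapping groups are silently skipped. */
static int find_group_start(const char **list, int nlist,
			    const char **group, int ngroup)
{
	int i = 0, matched = 0;

	while (i < nlist) {
		if (!strcmp(list[i], group[matched])) {
			matched++;
			i++;
			if (matched == ngroup)
				return i - ngroup;
		} else {
			i = i - matched + 1;	/* back up; re-test this element */
			matched = 0;
		}
	}
	return -1;
}

int main(void)
{
	const char *evlist[] = { "a", "a", "b", "c" };
	const char *grp[]    = { "a", "b", "c" };

	/* A version that never backs up would miss this and return -1. */
	printf("%d\n", find_group_start(evlist, 4, grp, 3));	/* prints 1 */
	return 0;
}
```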
2019-07-01  perf stat: Don't merge events in the same PMU  (Andi Kleen)
Event merging is mainly to collapse similar events across lots of duplicated PMUs. It can break metric displaying: it's possible for two metrics to share an event, and when the two events happen in a row the second wouldn't be displayed, which would also hide the second metric. To avoid this, don't merge events in the same PMU. This makes sense: if we have multiple events in the same PMU there is likely some reason for it (e.g. using multiple groups) and we had better not merge them. While in theory it would be possible to construct metrics that have events with the same name in different PMUs, no current metrics have this problem. This is the fix for 'perf stat -M UPI,IPC' (it also needs another bug fix to work completely). Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Fixes: 430daf2dc7af ("perf stat: Collapse identically named events") Link: http://lkml.kernel.org/r/20190624193711.35241-3-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-07-01  perf stat: Make metric event lookup more robust  (Andi Kleen)
After setting up metric groups through the event parser, the metricgroup code looks them up again in the event list. Make sure we only look up events that haven't been used by some other metric. The data structures currently cannot handle more than one metric per event. This avoids problems with multiple events partially overlapping. Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@linux.intel.com> Link: http://lkml.kernel.org/r/20190624193711.35241-2-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>