|
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-3h6fa866w6ao0wsbyqz9nrm8@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Rename the 'i' variable to 'nr_used' and use 'nr_allocated' from the
start of this function, leaving the final assignment of the longer
named trace->ev_qualifier_ids.nr state to 'nr_used' at the end of the
function.
No change in behaviour intended.
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lkml.kernel.org/n/tip-kpgyn8xjdjgt0timrrnniquv@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
We were just skipping the syscalls not available in a particular
architecture without reflecting this in the number of entries in the
ev_qualifier_ids.nr variable, fix it.
This was done in the most minimalistic way, reusing the index variable
'i'; a followup patch will further clean this up by renaming 'i' to
'nr_used' and using 'nr_allocated' in a few more places.
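After this fix plus the follow-up rename, the pattern is roughly the
following (an illustrative sketch; 'names' and 'lookup_syscall_id' are
hypothetical stand-ins for the real per-arch lookup):

	int i, nr_used = 0;

	for (i = 0; i < nr_allocated; i++) {
		int id = lookup_syscall_id(names[i]); /* hypothetical helper */

		if (id < 0)
			continue; /* syscall not available on this arch */

		trace->ev_qualifier_ids.entries[nr_used++] = id;
	}

	/* count only the entries that were actually filled in */
	trace->ev_qualifier_ids.nr = nr_used;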
Reported-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Leo Yan <leo.yan@linaro.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Fixes: 04c41bcb862b ("perf trace: Skip unknown syscalls when expanding strace like syscall groups")
Link: https://lkml.kernel.org/r/20190613181514.GC1402@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Laura reported that the perf build failed in fedora when we got a glibc
that provides gettid(), which I reproduced using fedora rawhide with the
glibc-devel-2.29.9000-26.fc31.x86_64 package.
Add a feature check to avoid providing a gettid() helper on such
systems.
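The feature test boils down to a tiny program along these lines (a
sketch of the idea; the actual test-gettid.c may differ in detail):

	#define _GNU_SOURCE
	#include <unistd.h>

	int main(void)
	{
		/* only links against a glibc that provides gettid() */
		return gettid();
	}

If it compiles and links, the feature is present and perf's fallback
gettid() helper is not defined.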
On a fedora rawhide system with this patch applied we now get:
[root@7a5f55352234 perf]# grep gettid /tmp/build/perf/FEATURE-DUMP
feature-gettid=1
[root@7a5f55352234 perf]# cat /tmp/build/perf/feature/test-gettid.make.output
[root@7a5f55352234 perf]# ldd /tmp/build/perf/feature/test-gettid.bin
linux-vdso.so.1 (0x00007ffc6b1f6000)
libc.so.6 => /lib64/libc.so.6 (0x00007f04e0a74000)
/lib64/ld-linux-x86-64.so.2 (0x00007f04e0c47000)
[root@7a5f55352234 perf]# nm /tmp/build/perf/feature/test-gettid.bin | grep -w gettid
U gettid@@GLIBC_2.30
[root@7a5f55352234 perf]#
While on a fedora:29 system:
[acme@quaco perf]$ grep gettid /tmp/build/perf/FEATURE-DUMP
feature-gettid=0
[acme@quaco perf]$ cat /tmp/build/perf/feature/test-gettid.make.output
test-gettid.c: In function ‘main’:
test-gettid.c:8:9: error: implicit declaration of function ‘gettid’; did you mean ‘getgid’? [-Werror=implicit-function-declaration]
return gettid();
^~~~~~
getgid
cc1: all warnings being treated as errors
[acme@quaco perf]$
Reported-by: Laura Abbott <labbott@redhat.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: https://lkml.kernel.org/n/tip-yfy3ch53agmklwu9o7rlgf9c@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Like other synthesized events, if there is also an Intel PT branch
trace, then a call stack can also be synthesized. Add that.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-12-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add memory information from PEBS data in the Intel PT trace to the
synthesized PEBS sample. This provides sample types PERF_SAMPLE_ADDR,
PERF_SAMPLE_WEIGHT, and PERF_SAMPLE_TRANSACTION, but not
PERF_SAMPLE_DATA_SRC.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-11-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add LBR information from PEBS data in the Intel PT trace to the
synthesized PEBS sample.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-10-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add XMM register information from PEBS data in the Intel PT trace to the
synthesized PEBS sample.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-9-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add general purpose register information from PEBS data in the Intel PT
trace to the synthesized PEBS sample.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-8-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Synthesize a PEBS sample using basic information (ip, timestamp) only.
Other PEBS information will be added in later patches.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-7-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Factor out common sample preparation for re-use when synthesizing PEBS
samples.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-6-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add infrastructure to prepare for synthesizing PEBS samples but leave
the actual synthesis to later patches.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-5-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
PEBS data is encoded in Block Item Packets (BIP). Populate a new structure
intel_pt_blk_items with the values and, upon a Block End Packet (BEP),
report them as a new Intel PT sample type INTEL_PT_BLK_ITEMS.
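Conceptually, the state accumulated between BBP and BEP looks
something like this (an illustrative sketch, not the exact layout of
the decoder's structure):

	/* values gathered from BIP packets between BBP and BEP */
	struct intel_pt_blk_items {
		uint64_t ip;			/* e.g. the PEBS-recorded IP */
		uint64_t mem_access_address;
		uint64_t tsx_aux_info;
		bool is_32_bit;			/* BBP said 4-byte items */
	};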
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add Intel PT packet decoder test. This test feeds byte sequences to the
Intel PT packet decoder and checks the results. Changes to the packet
context are also checked.
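The test is table driven: each entry pairs an encoded byte sequence
with the packet it is expected to decode to, roughly like this (a
sketch of the idea, not the exact structures used by the test):

	struct test_data {
		uint8_t bytes[16];	/* encoded packet bytes */
		size_t len;		/* number of valid bytes */
		const char *expected;	/* e.g. "TNT N (1)" */
	};

	static const struct test_data data[] = {
		{ { 0x00 },             1, "PAD"       },
		{ { 0x04 },             1, "TNT N (1)" },
		{ { 0x2d, 0x01, 0x02 }, 3, "TIP 0x201" },
	};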
Committer testing:
# perf test "Intel PT"
65: Intel PT packet decoder : Ok
# perf test -v "Intel PT"
65: Intel PT packet decoder :
--- start ---
test child forked, pid 6360
Decoded ok: 00 PAD
Decoded ok: 04 TNT N (1)
Decoded ok: 06 TNT T (1)
Decoded ok: 80 TNT NNNNNN (6)
Decoded ok: fe TNT TTTTTT (6)
Decoded ok: 02 a3 02 00 00 00 00 00 TNT N (1)
Decoded ok: 02 a3 03 00 00 00 00 00 TNT T (1)
Decoded ok: 02 a3 00 00 00 00 00 80 TNT NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN (47)
Decoded ok: 02 a3 ff ff ff ff ff ff TNT TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT (47)
Decoded ok: 0d TIP no ip
Decoded ok: 2d 01 02 TIP 0x201
Decoded ok: 4d 01 02 03 04 TIP 0x4030201
Decoded ok: 6d 01 02 03 04 05 06 TIP 0x60504030201
Decoded ok: 8d 01 02 03 04 05 06 TIP 0x60504030201
Decoded ok: cd 01 02 03 04 05 06 07 08 TIP 0x807060504030201
Decoded ok: 11 TIP.PGE no ip
Decoded ok: 31 01 02 TIP.PGE 0x201
Decoded ok: 51 01 02 03 04 TIP.PGE 0x4030201
Decoded ok: 71 01 02 03 04 05 06 TIP.PGE 0x60504030201
Decoded ok: 91 01 02 03 04 05 06 TIP.PGE 0x60504030201
Decoded ok: d1 01 02 03 04 05 06 07 08 TIP.PGE 0x807060504030201
Decoded ok: 01 TIP.PGD no ip
Decoded ok: 21 01 02 TIP.PGD 0x201
Decoded ok: 41 01 02 03 04 TIP.PGD 0x4030201
Decoded ok: 61 01 02 03 04 05 06 TIP.PGD 0x60504030201
Decoded ok: 81 01 02 03 04 05 06 TIP.PGD 0x60504030201
Decoded ok: c1 01 02 03 04 05 06 07 08 TIP.PGD 0x807060504030201
Decoded ok: 1d FUP no ip
Decoded ok: 3d 01 02 FUP 0x201
Decoded ok: 5d 01 02 03 04 FUP 0x4030201
Decoded ok: 7d 01 02 03 04 05 06 FUP 0x60504030201
Decoded ok: 9d 01 02 03 04 05 06 FUP 0x60504030201
Decoded ok: dd 01 02 03 04 05 06 07 08 FUP 0x807060504030201
Decoded ok: 02 43 02 04 06 08 0a 0c PIP 0x60504030201 (NR=0)
Decoded ok: 02 43 03 04 06 08 0a 0c PIP 0x60504030201 (NR=1)
Decoded ok: 99 00 MODE.Exec 16
Decoded ok: 99 01 MODE.Exec 64
Decoded ok: 99 02 MODE.Exec 32
Decoded ok: 99 20 MODE.TSX TXAbort:0 InTX:0
Decoded ok: 99 21 MODE.TSX TXAbort:0 InTX:1
Decoded ok: 99 22 MODE.TSX TXAbort:1 InTX:0
Decoded ok: 02 83 TraceSTOP
Decoded ok: 02 03 12 00 CBR 0x12
Decoded ok: 19 01 02 03 04 05 06 07 TSC 0x7060504030201
Decoded ok: 59 12 MTC 0x12
Decoded ok: 02 73 00 00 00 00 00 TMA CTC 0x0 FC 0x0
Decoded ok: 02 73 01 02 00 00 00 TMA CTC 0x201 FC 0x0
Decoded ok: 02 73 00 00 00 ff 01 TMA CTC 0x0 FC 0x1ff
Decoded ok: 02 73 80 c0 00 ff 01 TMA CTC 0xc080 FC 0x1ff
Decoded ok: 03 CYC 0x0
Decoded ok: 0b CYC 0x1
Decoded ok: fb CYC 0x1f
Decoded ok: 07 02 CYC 0x20
Decoded ok: ff fe CYC 0xfff
Decoded ok: 07 01 02 CYC 0x1000
Decoded ok: ff ff fe CYC 0x7ffff
Decoded ok: 07 01 01 02 CYC 0x80000
Decoded ok: ff ff ff fe CYC 0x3ffffff
Decoded ok: 07 01 01 01 02 CYC 0x4000000
Decoded ok: ff ff ff ff fe CYC 0x1ffffffff
Decoded ok: 07 01 01 01 01 02 CYC 0x200000000
Decoded ok: ff ff ff ff ff fe CYC 0xffffffffff
Decoded ok: 07 01 01 01 01 01 02 CYC 0x10000000000
Decoded ok: ff ff ff ff ff ff fe CYC 0x7fffffffffff
Decoded ok: 07 01 01 01 01 01 01 02 CYC 0x800000000000
Decoded ok: ff ff ff ff ff ff ff fe CYC 0x3fffffffffffff
Decoded ok: 07 01 01 01 01 01 01 01 02 CYC 0x40000000000000
Decoded ok: ff ff ff ff ff ff ff ff fe CYC 0x1fffffffffffffff
Decoded ok: 07 01 01 01 01 01 01 01 01 02 CYC 0x2000000000000000
Decoded ok: ff ff ff ff ff ff ff ff ff 0e CYC 0xffffffffffffffff
Decoded ok: 02 c8 01 02 03 04 05 VMCS 0x504030201
Decoded ok: 02 f3 OVF
Decoded ok: 02 f3 OVF
Decoded ok: 02 f3 OVF
Decoded ok: 02 82 02 82 02 82 02 82 02 82 02 82 02 82 02 82 PSB
Decoded ok: 02 82 02 82 02 82 02 82 02 82 02 82 02 82 02 82 PSB
Decoded ok: 02 82 02 82 02 82 02 82 02 82 02 82 02 82 02 82 PSB
Decoded ok: 02 23 PSBEND
Decoded ok: 02 c3 88 01 02 03 04 05 06 07 00 MNT 0x7060504030201
Decoded ok: 02 12 01 02 03 04 PTWRITE 0x4030201 IP:0
Decoded ok: 02 32 01 02 03 04 05 06 07 08 PTWRITE 0x807060504030201 IP:0
Decoded ok: 02 92 01 02 03 04 PTWRITE 0x4030201 IP:1
Decoded ok: 02 b2 01 02 03 04 05 06 07 08 PTWRITE 0x807060504030201 IP:1
Decoded ok: 02 62 EXSTOP IP:0
Decoded ok: 02 e2 EXSTOP IP:1
Decoded ok: 02 c2 00 00 00 00 00 00 00 00 MWAIT 0x0 Hints 0x0 Extensions 0x0
Decoded ok: 02 c2 01 02 03 04 05 06 07 08 MWAIT 0x807060504030201 Hints 0x1 Extensions 0x1
Decoded ok: 02 c2 ff 02 03 04 07 06 07 08 MWAIT 0x8070607040302ff Hints 0xff Extensions 0x3
Decoded ok: 02 22 00 00 PWRE 0x0 HW:0 CState:0 Sub-CState:0
Decoded ok: 02 22 01 02 PWRE 0x201 HW:0 CState:0 Sub-CState:2
Decoded ok: 02 22 80 34 PWRE 0x3480 HW:1 CState:3 Sub-CState:4
Decoded ok: 02 22 00 56 PWRE 0x5600 HW:0 CState:5 Sub-CState:6
Decoded ok: 02 a2 00 00 00 00 00 PWRX 0x0 Last CState:0 Deepest CState:0 Wake Reason 0x0
Decoded ok: 02 a2 01 02 03 04 05 PWRX 0x504030201 Last CState:0 Deepest CState:1 Wake Reason 0x2
Decoded ok: 02 a2 ff ff ff ff ff PWRX 0xffffffffff Last CState:15 Deepest CState:15 Wake Reason 0xf
Decoded ok: 02 63 00 BBP SZ 8-byte Type 0x0
Decoded ok: 02 63 80 BBP SZ 4-byte Type 0x0
Decoded ok: 02 63 1f BBP SZ 8-byte Type 0x1f
Decoded ok: 02 63 9f BBP SZ 4-byte Type 0x1f
Decoded ok: 04 00 00 00 00 BIP ID 0x00 Value 0x0
Decoded ok: fc 00 00 00 00 BIP ID 0x1f Value 0x0
Decoded ok: 04 01 02 03 04 BIP ID 0x00 Value 0x4030201
Decoded ok: fc 01 02 03 04 BIP ID 0x1f Value 0x4030201
Decoded ok: 04 00 00 00 00 00 00 00 00 BIP ID 0x00 Value 0x0
Decoded ok: fc 00 00 00 00 00 00 00 00 BIP ID 0x1f Value 0x0
Decoded ok: 04 01 02 03 04 05 06 07 08 BIP ID 0x00 Value 0x807060504030201
Decoded ok: fc 01 02 03 04 05 06 07 08 BIP ID 0x1f Value 0x807060504030201
Decoded ok: 02 33 BEP IP:0
Decoded ok: 02 b3 BEP IP:1
Decoded ok: 02 33 BEP IP:0
Decoded ok: 02 b3 BEP IP:1
test child finished with 0
---- end ----
Intel PT packet decoder: Ok
#
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add 3 new packets to support PEBS via PT, namely Block Begin Packet
(BBP), Block Item Packet (BIP) and Block End Packet (BEP). PEBS data is
encoded into multiple BIP packets that come between BBP and BEP. The BEP
packet might be associated with a FUP packet. That is indicated by using
a separate packet type (INTEL_PT_BEP_IP), similar to other packet types
with the _IP suffix.
Refer to the Intel SDM for more information about PEBS via PT:
https://software.intel.com/en-us/articles/intel-sdm
May 2019 version: Vol. 3B 18.5.5.2 PEBS output to Intel® Processor Trace
Decoding of BIP packets conflicts with single-byte TNT packets. Since
BIP packets only occur in the context of a block (i.e. between BBP and
BEP), that context must be recorded and passed to the packet decoder.
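The block context handed to the packet decoder can be summarized with
an enum along these lines (an illustrative sketch; the real names may
differ):

	enum intel_pt_pkt_ctx {
		INTEL_PT_NO_CTX,	/* regular packet decoding */
		INTEL_PT_BLK_4_CTX,	/* between BBP and BEP, 4-byte items */
		INTEL_PT_BLK_8_CTX,	/* between BBP and BEP, 8-byte items */
	};

Inside a block context, byte values that would otherwise decode as
single-byte TNT packets are interpreted as BIP packets instead.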
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20190610072803.10456-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Call cs_etm_set_option() once with all relevant options set, rather
than multiple times, to avoid going through the list of CPUs more than
once.
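In other words, instead of one walk over the CPU list per option, the
options are OR'ed together and applied in a single call, roughly (a
sketch; the exact flag names used here are an assumption):

	/* before: two passes over the CPU list */
	cs_etm_set_option(itr, evsel, ETM_OPT_CTXTID);
	cs_etm_set_option(itr, evsel, ETM_OPT_TS);

	/* after: a single pass with both options set */
	cs_etm_set_option(itr, evsel, ETM_OPT_CTXTID | ETM_OPT_TS);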
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20190611204528.20093-1-mathieu.poirier@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
In order to subsequently add more tests for the arm64 architecture, we
now always compile the arm64 tests directory.
Further explanation provided by Mark Rutland:
Given prior questions regarding this commit, it's probably worth
spelling things out more explicitly, e.g.
Currently we only build the arm64/tests directory if
CONFIG_DWARF_UNWIND is selected, which is fine as the only test we
have is arm64/tests/dwarf-unwind.o.
So that we can add more tests to the test directory, let's
unconditionally build the directory, but conditionally build
dwarf-unwind.o depending on CONFIG_DWARF_UNWIND.
There should be no functional change as a result of this patch.
Signed-off-by: Raphael Gault <raphael.gault@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20190611125315.18736-2-raphael.gault@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
perf record:
Alexey Budankov:
- Allow mixing --user-regs with --call-graph=dwarf, making sure that
the minimal set of registers for DWARF unwinding is present in the
set of user registers requested to be present in each sample, while
warning the user that this may make callchains unreliable if more
than the minimal set of registers is needed to unwind.
yuzhoujian:
- Add support to collect callchains from kernel or user space only,
IOW allow setting the perf_event_attr.exclude_callchain_{kernel,user}
bits from the command line.
perf trace:
Arnaldo Carvalho de Melo:
- Remove x86_64 specific syscall numbers from the augmented_raw_syscalls
BPF in-kernel collector of augmented raw_syscalls:sys_{enter,exit}
payloads, use instead the syscall numbers obtained either by the
arch specific syscalltbl generators or from audit-libs.
- Allow 'perf trace' to ask for the number of bytes to collect for
string arguments, for now ask for PATH_MAX, i.e. the whole
pathnames, which ends up being just a way to specify which syscall
args are pathnames and thus should be read using bpf_probe_read_str().
- Skip unknown syscalls when expanding strace like syscall groups.
This helps the 'string' group of syscalls work on arm64, where some
of the syscalls present in x86_64 that deal with strings, for instance
'access', are deprecated and thus should not be asked for when
tracing.
Leo Yan:
- Exit when failing to build eBPF program.
perf config:
Arnaldo Carvalho de Melo:
- Bail out when a handler returns failure for a key-value pair. This
helps with cases where processing a key-value pair is not just a
matter of setting some tool-specific knob, involving, for instance,
building a BPF program to then attach to the list of events 'perf
trace' will use, e.g. augmented_raw_syscalls.c.
perf.data:
Kan Liang:
- Read and store die ID information available in new Intel processors
in CPUID.1F in the CPU topology written in the perf.data header.
perf stat:
Kan Liang:
- Support per-die aggregation.
Documentation:
Arnaldo Carvalho de Melo:
- Update perf.data documentation about the CPU_TOPOLOGY, MEM_TOPOLOGY,
CLOCKID and DIR_FORMAT headers.
Song Liu:
- Add description of headers HEADER_BPF_PROG_INFO and HEADER_BPF_BTF.
Leo Yan:
- Update default value for llvm.clang-bpf-cmd-template in 'man perf-config'.
JVMTI:
Jiri Olsa:
- Address gcc string overflow warning for strncpy()
core:
- Remove superfluous nthreads system_wide setup in perf_evsel__alloc_fd().
Intel PT:
Adrian Hunter:
- Add support for samples to contain IPC ratio, collecting cycles
information from CYC packets, showing the IPC info periodically, because
Intel PT does not update the cycle count on every branch or instruction,
the incremental values will often be zero. When there are values, they
will be the number of instructions and number of cycles since the last
update, and thus represent the average IPC since the last IPC value.
E.g.:
# perf record --cpu 1 -m200000 -a -e intel_pt/cyc/u sleep 0.0001
rounding mmap pages size to 1024M (262144 pages)
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 2.208 MB perf.data ]
# perf script --insn-trace --xed -F+ipc,-dso,-cpu,-tid
#
<SNIP + add line numbering to make sense of IPC counts e.g.: (18/3)>
1 cc1 63501.650479626: 7f5219ac27bf _int_free+0x3f jnz 0x7f5219ac2af0 IPC: 0.81 (36/44)
2 cc1 63501.650479626: 7f5219ac27c5 _int_free+0x45 cmp $0x1f, %rbp
3 cc1 63501.650479626: 7f5219ac27c9 _int_free+0x49 jbe 0x7f5219ac2b00
4 cc1 63501.650479626: 7f5219ac27cf _int_free+0x4f test $0x8, %al
5 cc1 63501.650479626: 7f5219ac27d1 _int_free+0x51 jnz 0x7f5219ac2b00
6 cc1 63501.650479626: 7f5219ac27d7 _int_free+0x57 movq 0x13c58a(%rip), %rcx
7 cc1 63501.650479626: 7f5219ac27de _int_free+0x5e mov %rdi, %r12
8 cc1 63501.650479626: 7f5219ac27e1 _int_free+0x61 movq %fs:(%rcx), %rax
9 cc1 63501.650479626: 7f5219ac27e5 _int_free+0x65 test %rax, %rax
10 cc1 63501.650479626: 7f5219ac27e8 _int_free+0x68 jz 0x7f5219ac2821
11 cc1 63501.650479626: 7f5219ac27ea _int_free+0x6a leaq -0x11(%rbp), %rdi
12 cc1 63501.650479626: 7f5219ac27ee _int_free+0x6e mov %rdi, %rsi
13 cc1 63501.650479626: 7f5219ac27f1 _int_free+0x71 shr $0x4, %rsi
14 cc1 63501.650479626: 7f5219ac27f5 _int_free+0x75 cmpq %rsi, 0x13caf4(%rip)
15 cc1 63501.650479626: 7f5219ac27fc _int_free+0x7c jbe 0x7f5219ac2821
16 cc1 63501.650479626: 7f5219ac2821 _int_free+0xa1 cmpq 0x13f138(%rip), %rbp
17 cc1 63501.650479626: 7f5219ac2828 _int_free+0xa8 jnbe 0x7f5219ac28d8
18 cc1 63501.650479626: 7f5219ac28d8 _int_free+0x158 testb $0x2, 0x8(%rbx)
19 cc1 63501.650479628: 7f5219ac28dc _int_free+0x15c jnz 0x7f5219ac2ab0 IPC: 6.00 (18/3)
<SNIP>
- Allow using time ranges with Intel PT, i.e. these features, already
present but not optimally usable with Intel PT, should now work:
Select the second 10% time slice:
$ perf script --time 10%/2
Select from 0% to 10% time slice:
$ perf script --time 0%-10%
Select the first and second 10% time slices:
$ perf script --time 10%/1,10%/2
Select from 0% to 10% and 30% to 40% slices:
$ perf script --time 0%-10%,30%-40%
cs-etm (ARM):
Mathieu Poirier:
- Add support for CPU-wide trace scenarios.
s390:
Thomas Richter:
- Fix missing kvm module load for s390.
- Fix OOM error in TUI mode on s390
- Support s390 diag event display when doing analysis on !s390
architectures.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
One warning each on signedness, unused variable and return type.
Fixes: 10fbcdd12aa2 ("selftests/net: add TFO key rotation selftest")
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
- Added index upper bound test case
- Added mark upper bound test case
- Re-worded descriptions of a few cases for clarity
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The $IP macro will be used in upcoming tc tests, which require
creating interfaces, etc.
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Alexei Starovoitov says:
====================
pull-request: bpf 2019-06-15
The following pull-request contains BPF updates for your *net* tree.
The main changes are:
1) fix stack layout of JITed x64 bpf code, from Alexei.
2) fix out of bounds memory access in bpf_sk_storage, from Arthur.
3) fix lpm trie walk, from Jonathan.
4) fix nested bpf_perf_event_output, from Matt.
5) and several other fixes.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This config option makes only a couple of lines optional:
two small helpers and an int in a couple of cls structs.
Remove the config option and always compile this in.
This saves the user from unexpected surprises when adding
a filter with an ingress device match, which is silently ignored
if the config option is not set.
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This lets us test that both BPF_PROG_TYPE_CGROUP_SOCK_ADDR and
BPF_PROG_TYPE_SOCK_OPS can access underlying bpf_sock.
Cc: Martin Lau <kafai@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
Add sk to struct bpf_sock_addr and struct bpf_sock_ops.
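Roughly, both UAPI context structures gain a socket pointer field
along these lines (a sketch of the shape, not the verbatim header
change):

	struct bpf_sock_addr {
		/* existing fields unchanged */
		__bpf_md_ptr(struct bpf_sock *, sk);
	};

	struct bpf_sock_ops {
		/* existing fields unchanged */
		__bpf_md_ptr(struct bpf_sock *, sk);
	};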
Cc: Martin Lau <kafai@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
This patch adds a test for the new sockopt SO_DETACH_REUSEPORT_BPF.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
SO_DETACH_REUSEPORT_BPF is needed for the test in the next patch.
It is defined in socket.h.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
The kernel internally checks that either a key or a value type ID is
specified before using btf_fd. Do the same in libbpf's map creation code for
determining when to retry map creation w/o BTF.
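A sketch of the resulting check (simplified; names here are
illustrative rather than the exact libbpf code):

	/* btf_fd alone is not enough: a key or value type ID must be set */
	static bool map_uses_btf(__u32 btf_key_type_id, __u32 btf_value_type_id)
	{
		return btf_key_type_id || btf_value_type_id;
	}

Only when this is true does it make sense to retry the failed map
creation without BTF.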
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: fba01a0689a9 ("libbpf: use negative fd to specify missing BTF")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
The "len" variable needs to be signed for the error handling to work
properly.
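The problem boils down to storing a possibly-negative return value in
an unsigned type (a hypothetical snippet, not the exact test code):

	/* before: the error path can never trigger */
	size_t len = read(fd, buf, sizeof(buf));

	if (len < 0)
		return -1;

and

	/* after: a signed type lets the check detect read() failures */
	ssize_t len = read(fd, buf, sizeof(buf));

	if (len < 0)
		return -1;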
Fixes: 596092ef8bea ("selftests/bpf: enable all available cgroup v2 controllers")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
Convert the cgroup-v1 files to ReST format, in order to
allow a later addition to the admin-guide.
The conversion is actually:
- add blank lines and indentation in order to identify paragraphs;
- fix tables markups;
- add some lists markups;
- mark literal blocks;
- adjust title markups.
At its new index.rst, let's add a :orphan: while this is not linked to
the main index.rst file, in order to avoid build warnings.
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
|
|
The conversion is actually:
- add blank lines and indentation in order to identify paragraphs;
- fix tables markups;
- add some lists markups;
- mark literal blocks;
- adjust title markups.
At its new index.rst, let's add a :orphan: while this is not linked to
the main index.rst file, in order to avoid build warnings.
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Acked-by: Federico Vaga <federico.vaga@vaga.pv.it>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
We need to pick up post-rc1 changes to various document files so they don't
get lost in Mauro's massive RST conversion push.
|
|
Test the PTP Physical Hardware Clock functionality using the "phc_ctl" (a
part of "linuxptp").
The test contains three sub-tests:
* "settime" test
* "adjtime" test
* "adjfreq" test
"settime" test:
* set the PHC time to 0 seconds.
* wait for 120.5 seconds.
* check if the PHC time equals 120.XX seconds.
"adjtime" test:
* set the PHC time to 0 seconds.
* adjust the time by 10 seconds.
* check if the PHC time equals 10.XX seconds.
"adjfreq" test:
* adjust the PHC frequency to be 1% faster.
* set the PHC time to 0 seconds.
* wait for 100.5 seconds.
* check if the PHC time equals 101.XX seconds.
Usage:
$ ./phc.sh /dev/ptp<X>
It is possible to run a subset of the tests, for example:
* To run only the "settime" test:
$ TESTS="settime" ./phc.sh /dev/ptp<X>
Signed-off-by: Shalom Toledo <shalomt@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Tested-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Extended fw TDC tests with use cases where actions are pre-created and
attached to a filter by reference, i.e. by action index.
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Logan noticed that devm_memremap_pages_release() kills the percpu_ref,
drops all the page references that were acquired at init, and then
immediately proceeds to unplug, via arch_remove_memory(), the backing pages
for the pagemap. If for some reason device shutdown actually collides
with a busy / elevated-ref-count page then arch_remove_memory() should
be deferred until after that reference is dropped.
As it stands the "wait for last page ref drop" happens *after*
devm_memremap_pages_release() returns, which is obviously too late and
can lead to crashes.
Fix this situation by assigning the responsibility to wait for the
percpu_ref to go idle to devm_memremap_pages() with a new ->cleanup()
callback. Implement the new cleanup callback for all
devm_memremap_pages() users: pmem, devdax, hmm, and p2pdma.
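The shape of the change, as described above, is roughly (an
illustrative sketch; the exact callback signature is an assumption):

	struct dev_pagemap {
		struct percpu_ref *ref;
		/* existing: drop the initial reference */
		void (*kill)(struct percpu_ref *ref);
		/* new: wait until the last page reference is gone, so
		 * arch_remove_memory() only runs on an idle pagemap */
		void (*cleanup)(struct percpu_ref *ref);
	};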
Link: http://lkml.kernel.org/r/155727339156.292046.5432007428235387859.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: 41e94a851304 ("add devm_memremap_pages")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reported-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Quoting Paul [1]:
"Given that a quick (and perhaps error-prone) search of the uses
of rcu_assign_pointer() in v5.1 didn't find a single use of the
return value, let's please instead change the documentation and
implementation to eliminate the return value."
[1] https://lkml.kernel.org/r/20190523135013.GL28207@linux.ibm.com
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: "Paul E. McKenney" <paulmck@linux.ibm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: rcu@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Sasha Levin <sashal@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
|
|
If the result of the division is LLONG_MIN, current tests do not detect
the error since the return value is truncated to a 32-bit value and ends
up being 0.
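Concretely (illustrative arithmetic):

	#include <limits.h>

	long long res = LLONG_MIN;	/* 0x8000000000000000 */
	int ret = (int)res;		/* keeps only the low 32 bits -> 0 */

so a test that only checks the truncated 32-bit value sees 0 and
cannot tell the overflowed result apart from a correct result of 0.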
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
Sync the changes to the flags made in "bpf: simplify definition of
BPF_FIB_LOOKUP related flags" with the BPF UAPI headers.
Doing this in a separate commit to ease syncing of github/libbpf.
Signed-off-by: Martynas Pumputis <m@lambda.lt>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
When the ntb_msi_test module is available, the test code will trigger
each of the interrupts and ensure the corresponding occurrences files
get incremented.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Allen Hubbe <allenbh@gmail.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
|
|
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux
Pull cpupower utility updates from Shuah Khan:
"This cpupower update consists of a fix and a minor spelling correction."
* tag 'linux-cpupower-5.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux:
cpupower : frequency-set -r option misses the last cpu in related cpu list
cpupower: correct spelling of interval
|
|
This merges a fix for a bug in our context id handling on 64-bit hash
CPUs.
The fix was written against v5.1 to ease backporting to stable
releases. Here we are merging it up to a v5.2-rc2 base, which involves
a bit of manual resolution.
It also adds a test case for the bug.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
This tests that when a process with a mapping above 512TB forks, we
correctly separate the parent and child address spaces. This exercises
the bug in the context id handling fixed in the previous commit.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Test that IPv4 and IPv6 nexthops are correctly marked with offload
indication in response to neighbour events.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This test checks that route exceptions can be successfully listed and
flushed using ip -6 route {list,flush} cache.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If the leftmost parent node of the tree does not have a child
on the left side, then trie_get_next_key (and bpftool map dump) will
not look at the child on the right. This leads to the traversal
missing elements.
Lookup is not affected.
Update selftest to handle this case.
Reproducer:
bpftool map create /sys/fs/bpf/lpm type lpm_trie key 6 \
value 1 entries 256 name test_lpm flags 1
bpftool map update pinned /sys/fs/bpf/lpm key 8 0 0 0 0 0 value 1
bpftool map update pinned /sys/fs/bpf/lpm key 16 0 0 0 0 128 value 2
bpftool map dump pinned /sys/fs/bpf/lpm
Returns only 1 element. (2 expected)
Fixes: b471f2f1de8b ("bpf: implement MAP_GET_NEXT_KEY command for LPM_TRIE")
Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
Use the newly added bpf_num_possible_cpus() in bpftool and selftests
and remove duplicate implementations.
Signed-off-by: Hechao Li <hechaol@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
Though currently there is no problem including bpf_util.h in kernel
space BPF C programs, in the next patch in this stack I will reuse
libbpf_num_possible_cpus() in bpf_util.h and thus include libbpf.h in
it, which would cause BPF C programs to fail to compile. Therefore I
will first remove bpf_util.h from all test BPF programs.
This can also make it clear that bpf_util.h is a user-space utility
while bpf_helpers.h is a kernel space utility.
Signed-off-by: Hechao Li <hechaol@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|