Diffstat (limited to 'Documentation/trace')
-rw-r--r--   Documentation/trace/events-kmem.txt                              | 12
-rw-r--r--   Documentation/trace/events.txt                                   |  2
-rw-r--r--   Documentation/trace/postprocess/trace-pagealloc-postprocess.pl   | 20
-rw-r--r--   Documentation/trace/tracepoint-analysis.txt                      | 40
4 files changed, 36 insertions(+), 38 deletions(-)
diff --git a/Documentation/trace/events-kmem.txt b/Documentation/trace/events-kmem.txt
index aa82ee4a5a8..19480041006 100644
--- a/Documentation/trace/events-kmem.txt
+++ b/Documentation/trace/events-kmem.txt
@@ -40,8 +40,8 @@ but the call_site can usually be used to extrapolate that information.
==================
mm_page_alloc page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s
mm_page_alloc_zone_locked page=%p pfn=%lu order=%u migratetype=%d cpu=%d percpu_refill=%d
-mm_page_free_direct page=%p pfn=%lu order=%d
-mm_pagevec_free page=%p pfn=%lu order=%d cold=%d
+mm_page_free page=%p pfn=%lu order=%d
+mm_page_free_batched page=%p pfn=%lu order=%d cold=%d
These four events deal with page allocation and freeing. mm_page_alloc is
a simple indicator of page allocator activity. Pages may be allocated from
@@ -53,13 +53,13 @@ amounts of activity imply high activity on the zone->lock. Taking this lock
impairs performance by disabling interrupts, dirtying cache lines between
CPUs and serialising many CPUs.
-When a page is freed directly by the caller, the mm_page_free_direct event
+When a page is freed directly by the caller, only the mm_page_free event
is triggered. Significant amounts of activity here could indicate that the
callers should be batching their activities.
-When pages are freed using a pagevec, the mm_pagevec_free is
-triggered. Broadly speaking, pages are taken off the LRU lock in bulk and
-freed in batch with a pagevec. Significant amounts of activity here could
+When pages are freed in batch, the mm_page_free_batched event is also
+triggered. Broadly speaking, pages are taken off the LRU list in bulk and
+freed in batch with a page list. Significant amounts of activity here could
indicate that the system is under memory pressure and can also indicate
contention on the zone->lru_lock.
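As a quick way to see the renamed events in action, here is a minimal sketch
using the ftrace event interface; it assumes debugfs is mounted at
/sys/kernel/debug and that event tracing is configured into the kernel:

  $ echo 1 > /sys/kernel/debug/tracing/events/kmem/mm_page_free/enable
  $ echo 1 > /sys/kernel/debug/tracing/events/kmem/mm_page_free_batched/enable
  $ cat /sys/kernel/debug/tracing/trace_pipe
  $ echo 0 > /sys/kernel/debug/tracing/events/kmem/mm_page_free/enable

A stream dominated by mm_page_free rather than mm_page_free_batched is the
pattern described above: callers freeing pages one at a time instead of in
batches.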
diff --git a/Documentation/trace/events.txt b/Documentation/trace/events.txt
index b510564aac7..bb24c2a0e87 100644
--- a/Documentation/trace/events.txt
+++ b/Documentation/trace/events.txt
@@ -191,8 +191,6 @@ And for string fields they are:
Currently, only exact string matches are supported.
-Currently, the maximum number of predicates in a filter is 16.
-
5.2 Setting filters
-------------------
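A minimal sketch of what setting such a filter looks like, assuming the
kmem:mm_page_alloc event and its 'order' field as shown earlier in this
patch:

  $ cd /sys/kernel/debug/tracing/events/kmem/mm_page_alloc
  $ cat format                  # lists the fields usable in predicates
  $ echo 'order > 0' > filter   # keep only higher-order allocations
  $ echo 0 > filter             # writing 0 clears the filter again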
diff --git a/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl b/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
index 7df50e8cf4d..0a120aae33c 100644
--- a/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
+++ b/Documentation/trace/postprocess/trace-pagealloc-postprocess.pl
@@ -17,8 +17,8 @@ use Getopt::Long;
# Tracepoint events
use constant MM_PAGE_ALLOC => 1;
-use constant MM_PAGE_FREE_DIRECT => 2;
-use constant MM_PAGEVEC_FREE => 3;
+use constant MM_PAGE_FREE => 2;
+use constant MM_PAGE_FREE_BATCHED => 3;
use constant MM_PAGE_PCPU_DRAIN => 4;
use constant MM_PAGE_ALLOC_ZONE_LOCKED => 5;
use constant MM_PAGE_ALLOC_EXTFRAG => 6;
@@ -223,10 +223,10 @@ EVENT_PROCESS:
# Perl Switch() sucks majorly
if ($tracepoint eq "mm_page_alloc") {
$perprocesspid{$process_pid}->{MM_PAGE_ALLOC}++;
- } elsif ($tracepoint eq "mm_page_free_direct") {
- $perprocesspid{$process_pid}->{MM_PAGE_FREE_DIRECT}++;
- } elsif ($tracepoint eq "mm_pagevec_free") {
- $perprocesspid{$process_pid}->{MM_PAGEVEC_FREE}++;
+ } elsif ($tracepoint eq "mm_page_free") {
+ $perprocesspid{$process_pid}->{MM_PAGE_FREE}++;
+ } elsif ($tracepoint eq "mm_page_free_batched") {
+ $perprocesspid{$process_pid}->{MM_PAGE_FREE_BATCHED}++;
} elsif ($tracepoint eq "mm_page_pcpu_drain") {
$perprocesspid{$process_pid}->{MM_PAGE_PCPU_DRAIN}++;
$perprocesspid{$process_pid}->{STATE_PCPU_PAGES_DRAINED}++;
@@ -336,8 +336,8 @@ sub dump_stats {
$process_pid,
$stats{$process_pid}->{MM_PAGE_ALLOC},
$stats{$process_pid}->{MM_PAGE_ALLOC_ZONE_LOCKED},
- $stats{$process_pid}->{MM_PAGE_FREE_DIRECT},
- $stats{$process_pid}->{MM_PAGEVEC_FREE},
+ $stats{$process_pid}->{MM_PAGE_FREE},
+ $stats{$process_pid}->{MM_PAGE_FREE_BATCHED},
$stats{$process_pid}->{MM_PAGE_PCPU_DRAIN},
$stats{$process_pid}->{HIGH_PCPU_DRAINS},
$stats{$process_pid}->{HIGH_PCPU_REFILLS},
@@ -364,8 +364,8 @@ sub aggregate_perprocesspid() {
$perprocess{$process}->{MM_PAGE_ALLOC} += $perprocesspid{$process_pid}->{MM_PAGE_ALLOC};
$perprocess{$process}->{MM_PAGE_ALLOC_ZONE_LOCKED} += $perprocesspid{$process_pid}->{MM_PAGE_ALLOC_ZONE_LOCKED};
- $perprocess{$process}->{MM_PAGE_FREE_DIRECT} += $perprocesspid{$process_pid}->{MM_PAGE_FREE_DIRECT};
- $perprocess{$process}->{MM_PAGEVEC_FREE} += $perprocesspid{$process_pid}->{MM_PAGEVEC_FREE};
+ $perprocess{$process}->{MM_PAGE_FREE} += $perprocesspid{$process_pid}->{MM_PAGE_FREE};
+ $perprocess{$process}->{MM_PAGE_FREE_BATCHED} += $perprocesspid{$process_pid}->{MM_PAGE_FREE_BATCHED};
$perprocess{$process}->{MM_PAGE_PCPU_DRAIN} += $perprocesspid{$process_pid}->{MM_PAGE_PCPU_DRAIN};
$perprocess{$process}->{HIGH_PCPU_DRAINS} += $perprocesspid{$process_pid}->{HIGH_PCPU_DRAINS};
$perprocess{$process}->{HIGH_PCPU_REFILLS} += $perprocesspid{$process_pid}->{HIGH_PCPU_REFILLS};
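For context, a plausible way to drive the script, assuming (based on the
EVENT_PROCESS loop above) that it reads the raw ftrace event log on standard
input:

  $ echo 1 > /sys/kernel/debug/tracing/events/kmem/enable
  $ cat /sys/kernel/debug/tracing/trace_pipe > trace.log &
  $ ./hackbench 10
  $ kill %1
  $ perl Documentation/trace/postprocess/trace-pagealloc-postprocess.pl \
        < trace.log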
diff --git a/Documentation/trace/tracepoint-analysis.txt b/Documentation/trace/tracepoint-analysis.txt
index 87bee3c129b..058cc6c9dc5 100644
--- a/Documentation/trace/tracepoint-analysis.txt
+++ b/Documentation/trace/tracepoint-analysis.txt
@@ -93,14 +93,14 @@ By specifying the -a switch and analysing sleep, the system-wide events
for a duration of time can be examined.
$ perf stat -a \
- -e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
- -e kmem:mm_pagevec_free \
+ -e kmem:mm_page_alloc -e kmem:mm_page_free \
+ -e kmem:mm_page_free_batched \
sleep 10
Performance counter stats for 'sleep 10':
9630 kmem:mm_page_alloc
- 2143 kmem:mm_page_free_direct
- 7424 kmem:mm_pagevec_free
+ 2143 kmem:mm_page_free
+ 7424 kmem:mm_page_free_batched
10.002577764 seconds time elapsed
@@ -119,15 +119,15 @@ basis using set_ftrace_pid.
Events can be activated and tracked for the duration of a process on a local
basis using PCL such as follows.
- $ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
- -e kmem:mm_pagevec_free ./hackbench 10
+ $ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free \
+ -e kmem:mm_page_free_batched ./hackbench 10
Time: 0.909
Performance counter stats for './hackbench 10':
17803 kmem:mm_page_alloc
- 12398 kmem:mm_page_free_direct
- 4827 kmem:mm_pagevec_free
+ 12398 kmem:mm_page_free
+ 4827 kmem:mm_page_free_batched
0.973913387 seconds time elapsed
@@ -146,8 +146,8 @@ to know what the standard deviation is. By and large, this is left to the
performance analyst to do it by hand. In the event that the discrete event
occurrences are useful to the performance analyst, then perf can be used.
- $ perf stat --repeat 5 -e kmem:mm_page_alloc -e kmem:mm_page_free_direct
- -e kmem:mm_pagevec_free ./hackbench 10
+ $ perf stat --repeat 5 -e kmem:mm_page_alloc -e kmem:mm_page_free \
+ -e kmem:mm_page_free_batched ./hackbench 10
Time: 0.890
Time: 0.895
Time: 0.915
@@ -157,8 +157,8 @@ occurrences are useful to the performance analyst, then perf can be used.
Performance counter stats for './hackbench 10' (5 runs):
16630 kmem:mm_page_alloc ( +- 3.542% )
- 11486 kmem:mm_page_free_direct ( +- 4.771% )
- 4730 kmem:mm_pagevec_free ( +- 2.325% )
+ 11486 kmem:mm_page_free ( +- 4.771% )
+ 4730 kmem:mm_page_free_batched ( +- 2.325% )
0.982653002 seconds time elapsed ( +- 1.448% )
@@ -168,15 +168,15 @@ aggregation of discrete events, then a script would need to be developed.
Using --repeat, it is also possible to view how events are fluctuating over
time on a system-wide basis using -a and sleep.
- $ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
- -e kmem:mm_pagevec_free \
+ $ perf stat -e kmem:mm_page_alloc -e kmem:mm_page_free \
+ -e kmem:mm_page_free_batched \
-a --repeat 10 \
sleep 1
Performance counter stats for 'sleep 1' (10 runs):
1066 kmem:mm_page_alloc ( +- 26.148% )
- 182 kmem:mm_page_free_direct ( +- 5.464% )
- 890 kmem:mm_pagevec_free ( +- 30.079% )
+ 182 kmem:mm_page_free ( +- 5.464% )
+ 890 kmem:mm_page_free_batched ( +- 30.079% )
1.002251757 seconds time elapsed ( +- 0.005% )
@@ -220,8 +220,8 @@ were generating events within the kernel. To begin this sort of analysis, the
data must be recorded. At the time of writing, this required root:
$ perf record -c 1 \
- -e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
- -e kmem:mm_pagevec_free \
+ -e kmem:mm_page_alloc -e kmem:mm_page_free \
+ -e kmem:mm_page_free_batched \
./hackbench 10
Time: 0.894
[ perf record: Captured and wrote 0.733 MB perf.data (~32010 samples) ]
@@ -260,8 +260,8 @@ noticed that X was generating an insane amount of page allocations so let's look
at it:
$ perf record -c 1 -f \
- -e kmem:mm_page_alloc -e kmem:mm_page_free_direct \
- -e kmem:mm_pagevec_free \
+ -e kmem:mm_page_alloc -e kmem:mm_page_free \
+ -e kmem:mm_page_free_batched \
-p `pidof X`
This was interrupted after a few seconds and