author		Thomas Gleixner <tglx@linutronix.de>	2025-08-02 12:48:55 +0200
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2025-08-15 16:39:30 +0200
commit		a9025f73c88d9d6e125743a43afc569da3ce5328 (patch)
tree		79cdeb2d5c0639b9382ff9749510a9d78625d0b6
parent		7ff8521f30c4c2fcd4e88bd7640486602bf8a650 (diff)
perf/core: Handle buffer mapping fail correctly in perf_mmap()
commit f74b9f4ba63ffdf597aaaa6cad7e284cb8e04820 upstream.

After successful allocation of a buffer or a successful attachment to an
existing buffer, perf_mmap() tries to map the buffer read only into the
page table. If that fails, the already set up page table entries are
zapped, but the other perf specific side effects of that failure are not
handled. The calling code just cleans up the VMA and does not invoke
perf_mmap_close().

This leaks reference counts, corrupts user->vm accounting and also
results in an unbalanced invocation of event::event_mapped().

Cure this by moving the event::event_mapped() invocation before the
map_range() call so that on map_range() failure perf_mmap_close() can be
invoked without causing an unbalanced event::event_unmapped() call.
perf_mmap_close() undoes the reference counts and eventually frees
buffers.

Fixes: b709eb872e19 ("perf/core: map pages in advance")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
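The fix hinges on an ordering rule: every side effect that perf_mmap_close()
undoes must already be in place before the fallible map_range() call, so a
single teardown path can run on failure. The user-space sketch below
illustrates that pattern; it is a minimal analogue, not kernel code, and all
names in it (mmap_setup(), mmap_close(), map_range_stub(), notify_mapped(),
notify_unmapped()) are hypothetical stand-ins for perf_mmap(),
perf_mmap_close(), map_range() and the event::event_mapped()/
event::event_unmapped() callbacks.

	/*
	 * Hypothetical user-space analogue of the fixed ordering in
	 * perf_mmap(); none of these names are real kernel APIs.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	struct event {
		int refcount;	/* stands in for rb/user reference counts */
		bool mapped;	/* tracks mapped/unmapped callback balance */
	};

	/* Side effect that must be balanced by notify_unmapped() exactly once. */
	static void notify_mapped(struct event *ev)   { ev->mapped = true;  }
	static void notify_unmapped(struct event *ev) { ev->mapped = false; }

	/* The fallible final step, analogous to map_range(). */
	static int map_range_stub(bool fail) { return fail ? -1 : 0; }

	/* Analogue of perf_mmap_close(): undoes refcount and mapped callback. */
	static void mmap_close(struct event *ev)
	{
		if (ev->mapped)
			notify_unmapped(ev);
		ev->refcount--;
	}

	static int mmap_setup(struct event *ev, bool make_map_fail)
	{
		int ret;

		ev->refcount++;		/* take references first */
		notify_mapped(ev);	/* invoked *before* the fallible step */

		ret = map_range_stub(make_map_fail);
		if (ret)
			mmap_close(ev);	/* one teardown path undoes everything */

		return ret;
	}

	int main(void)
	{
		struct event ev = { 0 };

		if (mmap_setup(&ev, true) != 0)
			printf("map failed: refcount=%d mapped=%d\n",
			       ev.refcount, ev.mapped);
		/* prints refcount=0 mapped=0: nothing leaked, callbacks balanced */
		return 0;
	}

With the pre-fix ordering, a map_range() failure returned to a caller that
only tore down the VMA, so the refcount and the mapped callback in this
sketch would stay at 1/true; routing every failure through the same close
path is what keeps them balanced.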
-rw-r--r--	kernel/events/core.c	12
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index a2e3591175c6..4563bd864bbc 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7148,12 +7148,20 @@ aux_unlock:
 	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &perf_mmap_vmops;
 
-	ret = map_range(rb, vma);
-
 	mapped = get_mapped(event, event_mapped);
 	if (mapped)
 		mapped(event, vma->vm_mm);
 
+	/*
+	 * Try to map it into the page table. On fail, invoke
+	 * perf_mmap_close() to undo the above, as the callsite expects
+	 * full cleanup in this case and therefore does not invoke
+	 * vmops::close().
+	 */
+	ret = map_range(rb, vma);
+	if (ret)
+		perf_mmap_close(vma);
+
 	return ret;
 }
 