author    Mike Kravetz <mike.kravetz@oracle.com>    2020-07-03 15:15:18 -0700
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>    2020-07-09 09:39:40 +0200
commit    faf8c29a190142169ca0f48ac2ed77e6530c4bee (patch)
tree      6b96cb96f31a08b5ea2d7f34fba69b4a5a35540a
parent    639b9de2d735841fe32a2878c93cbc2867044cb0 (diff)
mm/hugetlb.c: fix pages per hugetlb calculation
commit 1139d336fff425f9a20374945cdd28eb44d09fa8 upstream.

The routine hpage_nr_pages() was incorrectly used to calculate the number
of base pages in a hugetlb page.  hpage_nr_pages is designed to be called
for THP pages and will return HPAGE_PMD_NR for hugetlb pages of any size.

Due to the context in which hpage_nr_pages was called, it is unlikely to
produce a user visible error.  The routine with the incorrect call is only
exercised in the case of hugetlb memory error or migration.  In addition,
this would need to be on an architecture which supports huge page sizes
less than PMD_SIZE.  And, the vma containing the huge page would also need
to be smaller than PMD_SIZE.

Fixes: c0d0381ade79 ("hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization")
Reported-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: <stable@vger.kernel.org>
Link: http://lkml.kernel.org/r/20200629185003.97202-1-mike.kravetz@oracle.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
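[Editorial note, not part of the patch: a minimal user-space sketch of the arithmetic the fix corrects. The base page size, PMD size, 64 KiB hugetlb page, and helper names below are illustrative assumptions, not the kernel's definitions; they only show how a THP-style count of HPAGE_PMD_NR overstates pgoff_end for a hugetlb page smaller than PMD_SIZE.]

/*
 * Standalone sketch (not kernel code).  Assumes 4 KiB base pages, a
 * 2 MiB PMD, and a hypothetical 64 KiB hugetlb page.
 */
#include <stdio.h>

#define BASE_PAGE_SIZE	(4UL * 1024)		/* assumed base page size */
#define PMD_SIZE	(2UL * 1024 * 1024)	/* assumed PMD-sized huge page */
#define HPAGE_PMD_NR	(PMD_SIZE / BASE_PAGE_SIZE)

/* THP-style helper: always reports a PMD's worth of base pages. */
static unsigned long thp_style_nr_pages(void)
{
	return HPAGE_PMD_NR;			/* 512, regardless of hugetlb size */
}

/* hugetlb-style helper: derives the count from the huge page's own size. */
static unsigned long hugetlb_nr_pages(unsigned long huge_page_size)
{
	return huge_page_size / BASE_PAGE_SIZE;
}

int main(void)
{
	unsigned long huge_page_size = 64UL * 1024;	/* 64 KiB hugetlb page */
	unsigned long pgoff_start = 0;

	/* The THP-style count makes the interval-tree range far too wide. */
	printf("THP-style helper:     pgoff_end = %lu\n",
	       pgoff_start + thp_style_nr_pages() - 1);		/* 511 */
	printf("hugetlb-style helper: pgoff_end = %lu\n",
	       pgoff_start + hugetlb_nr_pages(huge_page_size) - 1);	/* 15 */
	return 0;
}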
-rw-r--r--    mm/hugetlb.c    2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bcabbe02192b..4f7cdc55fbe4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1594,7 +1594,7 @@ static struct address_space *_get_hugetlb_page_mapping(struct page *hpage)
 	/* Use first found vma */
 	pgoff_start = page_to_pgoff(hpage);
-	pgoff_end = pgoff_start + hpage_nr_pages(hpage) - 1;
+	pgoff_end = pgoff_start + pages_per_huge_page(page_hstate(hpage)) - 1;
 	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
 					pgoff_start, pgoff_end) {
 		struct vm_area_struct *vma = avc->vma;