author		Fan Ni <fan.ni@samsung.com>	2025-05-05 11:22:41 -0700
committer	Andrew Morton <akpm@linux-foundation.org>	2025-05-27 19:38:26 -0700
commit		b0752f1a709740fd6ae518e969a25f90ec6a100f
tree		4ef86b8a432cc9c0fa02a4203e32525f6a72e297 /mm
parent		200577f69f29a58c90c67c83a0df6d12850e1d09
mm/hugetlb: pass folio instead of page to unmap_ref_private()
Patch series "Let unmap_hugepage_range() and several related functions
take folio instead of page", v4.
This patch (of 4):
unmap_ref_private() has only a single user, which passes in &folio->page.
Let it take the folio directly.
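For illustration only, here is a minimal stand-alone C sketch of the same shape of refactor. It is not kernel code: struct page and struct folio are reduced to hypothetical stubs, and unmap_ref_private_old()/unmap_ref_private_new() are illustrative names. It shows the caller handing over the folio and the &folio->page conversion moving into the callee.

	#include <stdio.h>

	/* Simplified stand-ins for the kernel's struct page / struct folio. */
	struct page { unsigned long flags; };
	struct folio { struct page page; };	/* head page embedded, as in the kernel */

	/* Before: callee took the head page, so the caller had to pass &folio->page. */
	static void unmap_ref_private_old(struct page *page, unsigned long address)
	{
		printf("old: page=%p addr=%#lx\n", (void *)page, address);
	}

	/* After: callee takes the folio; the page pointer is derived internally
	 * only where a struct page is still required. */
	static void unmap_ref_private_new(struct folio *folio, unsigned long address)
	{
		struct page *page = &folio->page;	/* conversion pushed into the callee */

		printf("new: page=%p addr=%#lx\n", (void *)page, address);
	}

	int main(void)
	{
		struct folio old_folio = { { 0 } };
		unsigned long address = 0x1000;

		unmap_ref_private_old(&old_folio.page, address);	/* old call site */
		unmap_ref_private_new(&old_folio, address);		/* new call site */
		return 0;
	}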
Link: https://lkml.kernel.org/r/20250505182345.506888-2-nifan.cxl@gmail.com
Link: https://lkml.kernel.org/r/20250505182345.506888-3-nifan.cxl@gmail.com
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/hugetlb.c	8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0057d1f1dc9a..0c2b264a7ab8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6071,7 +6071,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
  * same region.
  */
 static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
-			      struct page *page, unsigned long address)
+			      struct folio *folio, unsigned long address)
 {
 	struct hstate *h = hstate_vma(vma);
 	struct vm_area_struct *iter_vma;
@@ -6115,7 +6115,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
 		 */
 		if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
 			unmap_hugepage_range(iter_vma, address,
-					     address + huge_page_size(h), page, 0);
+					     address + huge_page_size(h),
+					     &folio->page, 0);
 	}
 	i_mmap_unlock_write(mapping);
 }
@@ -6238,8 +6239,7 @@ retry_avoidcopy:
 		hugetlb_vma_unlock_read(vma);
 		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 
-		unmap_ref_private(mm, vma, &old_folio->page,
-				  vmf->address);
+		unmap_ref_private(mm, vma, old_folio, vmf->address);
 
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
 		hugetlb_vma_lock_read(vma);