author	Linus Torvalds <torvalds@linux-foundation.org>	2024-07-21 17:15:46 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2024-07-21 17:15:46 -0700
commit	fbc90c042cd1dc7258ebfebe6d226017e5b5ac8c (patch)
tree	45513ac12ade12a80ca6b306722f201802b0a190 /mm/memory-failure.c
parent	7846b618e0a4c3e08888099d1d4512722b39ca99 (diff)
parent	30d77b7eef019fa4422980806e8b7cdc8674493e (diff)
Merge tag 'mm-stable-2024-07-21-14-50' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:

 - In the series "mm: Avoid possible overflows in dirty throttling" Jan Kara addresses a couple of issues in the writeback throttling code. These fixes are also targeted at -stable kernels.
 - Ryusuke Konishi's series "nilfs2: fix potential issues related to reserved inodes" does that. This should actually be in the mm-nonmm-stable tree, along with the many other nilfs2 patches. My bad.
 - More folio conversions from Kefeng Wang in the series "mm: convert to folio_alloc_mpol()".
 - Kemeng Shi has sent some cleanups to the writeback code in the series "Add helper functions to remove repeated code and improve readability of cgroup writeback".
 - Kairui Song has made the swap code a little smaller and a little faster in the series "mm/swap: clean up and optimize swap cache index".
 - In the series "mm/memory: cleanly support zeropage in vm_insert_page*(), vm_map_pages*() and vmf_insert_mixed()" David Hildenbrand has reworked the rather sketchy handling of the use of the zeropage in MAP_SHARED mappings. I don't see any runtime effects here - more a cleanup/understandability/maintainability thing.
 - Dev Jain has improved selftests/mm/va_high_addr_switch.c's handling of higher addresses, for aarch64. The (poorly named) series is "Restructure va_high_addr_switch".
 - The core TLB handling code gets some cleanups and possible slight optimizations in Bang Li's series "Add update_mmu_tlb_range() to simplify code".
 - Jane Chu has improved the handling of our fake-an-unrecoverable-memory-error testing feature MADV_HWPOISON in the series "Enhance soft hwpoison handling and injection"; a hedged injection sketch follows this list.
 - Jeff Johnson has sent a billion patches everywhere to add MODULE_DESCRIPTION() to everything. Some landed in this pull.
 - In the series "mm: cleanup MIGRATE_SYNC_NO_COPY mode", Kefeng Wang has simplified migration's use of hardware-offload memory copying.
 - Yosry Ahmed performs more folio API conversions in his series "mm: zswap: trivial folio conversions".
 - In the series "large folios swap-in: handle refault cases first", Chuanhua Han inches us forward in the handling of large pages in the swap code. This is a cleanup and optimization, working toward the end objective of full support of large folio swapin/out.
 - In the series "mm,swap: cleanup VMA based swap readahead window calculation", Huang Ying has contributed some cleanups and a possible fixlet to his VMA based swap readahead code.
 - In the series "add mTHP support for anonymous shmem" Baolin Wang has taught anonymous shmem mappings to use multisize THP. By default this is a no-op - users must opt in via sysfs controls. Dramatic improvements in pagefault latency are realized.
 - David Hildenbrand has some cleanups to our remaining use of page_mapcount() in the series "fs/proc: move page_mapcount() to fs/proc/internal.h".
 - David also has some highmem accounting cleanups in the series "mm/highmem: don't track highmem pages manually".
 - Build-time fixes and cleanups from John Hubbard in the series "cleanups, fixes, and progress towards avoiding "make headers"".
 - Cleanups and consolidation of the core pagemap handling from Barry Song in the series "mm: introduce pmd|pte_needs_soft_dirty_wp helpers and utilize them".
 - Lance Yang's series "Reclaim lazyfree THP without splitting" has reduced the latency of the reclaim of pmd-mapped THPs under fairly common circumstances. A 10x speedup is seen in a microbenchmark. It does this by punting to another CPU but I guess that's a win unless all CPUs are pegged.
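For context on that injection interface, here is a minimal, hedged sketch of driving MADV_HWPOISON from a test program. It assumes a kernel built with CONFIG_MEMORY_FAILURE and a caller holding CAP_SYS_ADMIN; it illustrates the existing madvise() interface that the series exercises, and is not code from this pull.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	/* One private anonymous page, written to so it is actually backed. */
	char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0xaa, page);

	/* Ask the kernel to treat the page as hardware-corrupted; the
	 * memory-failure handler then runs much as it would for a real MCE. */
	if (madvise(buf, page, MADV_HWPOISON) != 0) {
		perror("madvise(MADV_HWPOISON)");
		return 1;
	}
	/* A later access to buf is expected to raise SIGBUS. */
	printf("page at %p marked hwpoisoned\n", (void *)buf);
	return 0;
}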
 - hugetlb_cgroup cleanups from Xiu Jianfeng in the series "mm/hugetlb_cgroup: rework on cftypes".
 - Miaohe Lin's series "Some cleanups for memory-failure" does just that thing.
 - Someone other than SeongJae has developed a DAMON feature in Honggyu Kim's series "DAMON based tiered memory management for CXL memory". This adds DAMON features which may be used to help determine the efficiency of our placement of CXL/PCIe attached DRAM.
 - DAMON user API centralization and simplification work in SeongJae Park's series "mm/damon: introduce DAMON parameters online commit function".
 - In the series "mm: page_type, zsmalloc and page_mapcount_reset()" David Hildenbrand does some maintenance work on zsmalloc - partially modernizing its use of pageframe fields.
 - Kefeng Wang provides more folio conversions in the series "mm: remove page_maybe_dma_pinned() and page_mkclean()".
 - More cleanup from David Hildenbrand, this time in the series "mm/memory_hotplug: use PageOffline() instead of PageReserved() for !ZONE_DEVICE". It "enlightens memory hotplug more about PageOffline() pages" and permits the removal of some virtio-mem hacks.
 - Barry Song's series "mm: clarify folio_add_new_anon_rmap() and __folio_add_anon_rmap()" is a cleanup to the anon folio handling in preparation for mTHP (multisize THP) swapin.
 - Kefeng Wang's series "mm: improve clear and copy user folio" implements more folio conversions, this time in the area of large folio userspace copying.
 - The series "Docs/mm/damon/maintaier-profile: document a mailing tool and community meetup series" tells people how to get better involved with other DAMON developers. From SeongJae Park.
 - A large series ("kmsan: Enable on s390") from Ilya Leoshkevich does that.
 - David Hildenbrand sends along more cleanups, this time against the migration code. The series is "mm/migrate: move NUMA hinting fault folio isolation + checks under PTL".
 - Jan Kara has found quite a lot of strangenesses and minor errors in the readahead code. He addresses this in the series "mm: Fix various readahead quirks".
 - SeongJae Park's series "selftests/damon: test DAMOS tried regions and {min,max}_nr_regions" adds features and addresses errors in DAMON's self testing code.
 - Gavin Shan has found a userspace-triggerable WARN in the pagecache code. The series "mm/filemap: Limit page cache size to that supported by xarray" addresses this. The series is marked cc:stable.
 - Chengming Zhou's series "mm/ksm: cmp_and_merge_page() optimizations and cleanup" cleans up and slightly optimizes KSM.
 - Roman Gushchin has separated the memcg-v1 and memcg-v2 code - lots of code motion. The two series (which also make the memcg-v1 code Kconfigurable) are "mm: memcg: separate legacy cgroup v1 code and put under config option" and "mm: memcg: put cgroup v1-specific memcg data under CONFIG_MEMCG_V1".
 - Dan Schatzberg's series "Add swappiness argument to memory.reclaim" adds an additional feature to this cgroup-v2 control file.
 - The series "Userspace controls soft-offline pages" from Jiaqi Yan permits userspace to stop the kernel's automatic treatment of excessive correctable memory errors, so that userspace can monitor and handle this situation itself; a hedged sketch of the new sysctl follows this list.
 - Kefeng Wang's series "mm: migrate: support poison recover from migrate folio" teaches the kernel to appropriately handle migration from poisoned source folios rather than simply panicking.
 - SeongJae Park's series "Docs/damon: minor fixups and improvements" does those things.
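To make that userspace control concrete: the diff below adds /proc/sys/vm/enable_soft_offline, and clearing it makes soft-offline requests fail with EOPNOTSUPP. A hedged sketch, assuming root and CONFIG_MEMORY_FAILURE:

#define _DEFAULT_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* 0 = refuse soft-offline requests, 1 = default behaviour. */
	int fd = open("/proc/sys/vm/enable_soft_offline", O_WRONLY);
	if (fd < 0 || write(fd, "0", 1) != 1) {
		perror("enable_soft_offline");
		return 1;
	}
	close(fd);

	long page = sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0, page);

	/* With the sysctl cleared, soft_offline_page() bails out early. */
	if (madvise(buf, page, MADV_SOFT_OFFLINE) != 0 && errno == EOPNOTSUPP)
		puts("soft offline disabled, got EOPNOTSUPP as expected");
	return 0;
}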
- In the series "mm/zsmalloc: change back to per-size_class lock" Chengming Zhou improves zsmalloc's scalability and memory utilization. - Vivek Kasireddy's series "mm/gup: Introduce memfd_pin_folios() for pinning memfd folios" makes the GUP code use FOLL_PIN rather than bare refcount increments. So these paes can first be moved aside if they reside in the movable zone or a CMA block. - Andrii Nakryiko has added a binary ioctl()-based API to /proc/pid/maps for much faster reading of vma information. The series is "query VMAs from /proc/<pid>/maps". - In the series "mm: introduce per-order mTHP split counters" Lance Yang improves the kernel's presentation of developer information related to multisize THP splitting. - Michael Ellerman has developed the series "Reimplement huge pages without hugepd on powerpc (8xx, e500, book3s/64)". This permits userspace to use all available huge page sizes. - In the series "revert unconditional slab and page allocator fault injection calls" Vlastimil Babka removes a performance-affecting and not very useful feature from slab fault injection. * tag 'mm-stable-2024-07-21-14-50' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (411 commits) mm/mglru: fix ineffective protection calculation mm/zswap: fix a white space issue mm/hugetlb: fix kernel NULL pointer dereference when migrating hugetlb folio mm/hugetlb: fix possible recursive locking detected warning mm/gup: clear the LRU flag of a page before adding to LRU batch mm/numa_balancing: teach mpol_to_str about the balancing mode mm: memcg1: convert charge move flags to unsigned long long alloc_tag: fix page_ext_get/page_ext_put sequence during page splitting lib: reuse page_ext_data() to obtain codetag_ref lib: add missing newline character in the warning message mm/mglru: fix overshooting shrinker memory mm/mglru: fix div-by-zero in vmpressure_calc_level() mm/kmemleak: replace strncpy() with strscpy() mm, page_alloc: put should_fail_alloc_page() back behing CONFIG_FAIL_PAGE_ALLOC mm, slab: put should_failslab() back behind CONFIG_SHOULD_FAILSLAB mm: ignore data-race in __swap_writepage hugetlbfs: ensure generic_hugetlb_get_unmapped_area() returns higher address than mmap_min_addr mm: shmem: rename mTHP shmem counters mm: swap_state: use folio_alloc_mpol() in __read_swap_cache_async() mm/migrate: putback split folios when numa hint migration fails ...
Diffstat (limited to 'mm/memory-failure.c')
-rw-r--r--	mm/memory-failure.c	259
1 file changed, 138 insertions(+), 121 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index d3c830e817e35..581d3e5c91175 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -68,6 +68,8 @@ static int sysctl_memory_failure_early_kill __read_mostly;
static int sysctl_memory_failure_recovery __read_mostly = 1;
+static int sysctl_enable_soft_offline __read_mostly = 1;
+
atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
static bool hw_memory_failure __read_mostly = false;
@@ -141,6 +143,15 @@ static struct ctl_table memory_failure_table[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
},
+ {
+ .procname = "enable_soft_offline",
+ .data = &sysctl_enable_soft_offline,
+ .maxlen = sizeof(sysctl_enable_soft_offline),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
+ }
};
/*
@@ -294,6 +305,7 @@ int hwpoison_filter(struct page *p)
return 0;
}
+EXPORT_SYMBOL_GPL(hwpoison_filter);
#else
int hwpoison_filter(struct page *p)
{
@@ -301,8 +313,6 @@ int hwpoison_filter(struct page *p)
}
#endif
-EXPORT_SYMBOL_GPL(hwpoison_filter);
-
/*
* Kill all processes that have a poisoned page mapped and then isolate
* the page.
@@ -344,7 +354,7 @@ static int kill_proc(struct to_kill *tk, unsigned long pfn, int flags)
int ret = 0;
pr_err("%#lx: Sending SIGBUS to %s:%d due to hardware memory corruption\n",
- pfn, t->comm, t->pid);
+ pfn, t->comm, task_pid_nr(t));
if ((flags & MF_ACTION_REQUIRED) && (t == current))
ret = force_sig_mceerr(BUS_MCEERR_AR,
@@ -355,14 +365,12 @@ static int kill_proc(struct to_kill *tk, unsigned long pfn, int flags)
* PF_MCE_EARLY set.
* Don't use force here, it's convenient if the signal
* can be temporarily blocked.
- * This could cause a loop when the user sets SIGBUS
- * to SIG_IGN, but hopefully no one will do that?
*/
ret = send_sig_mceerr(BUS_MCEERR_AO, (void __user *)tk->addr,
addr_lsb, t);
if (ret < 0)
pr_info("Error sending signal to %s:%d: %d\n",
- t->comm, t->pid, ret);
+ t->comm, task_pid_nr(t), ret);
return ret;
}
@@ -514,24 +522,17 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
*
* Only do anything when FORCEKILL is set, otherwise just free the
* list (this is used for clean pages which do not need killing)
- * Also when FAIL is set do a force kill because something went
- * wrong earlier.
*/
-static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
+static void kill_procs(struct list_head *to_kill, int forcekill,
unsigned long pfn, int flags)
{
struct to_kill *tk, *next;
list_for_each_entry_safe(tk, next, to_kill, nd) {
if (forcekill) {
- /*
- * In case something went wrong with munmapping
- * make sure the process doesn't catch the
- * signal and then access the memory. Just kill it.
- */
- if (fail || tk->addr == -EFAULT) {
+ if (tk->addr == -EFAULT) {
pr_err("%#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",
- pfn, tk->tsk->comm, tk->tsk->pid);
+ pfn, tk->tsk->comm, task_pid_nr(tk->tsk));
do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
tk->tsk, PIDTYPE_PID);
}
@@ -544,7 +545,7 @@ static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
*/
else if (kill_proc(tk, pfn, flags) < 0)
pr_err("%#lx: Cannot send advisory machine check signal to %s:%d\n",
- pfn, tk->tsk->comm, tk->tsk->pid);
+ pfn, tk->tsk->comm, task_pid_nr(tk->tsk));
}
list_del(&tk->nd);
put_task_struct(tk->tsk);
@@ -834,7 +835,7 @@ static int hwpoison_hugetlb_range(pte_t *ptep, unsigned long hmask,
struct mm_walk *walk)
{
struct hwpoison_walk *hwp = walk->private;
- pte_t pte = huge_ptep_get(ptep);
+ pte_t pte = huge_ptep_get(walk->mm, addr, ptep);
struct hstate *h = hstate_vma(walk->vma);
return check_hwpoisoned_entry(pte, addr, huge_page_shift(h),
@@ -886,6 +887,28 @@ static int kill_accessing_process(struct task_struct *p, unsigned long pfn,
return ret > 0 ? -EHWPOISON : -EFAULT;
}
+/*
+ * MF_IGNORED - The m-f() handler marks the page as PG_hwpoisoned.
+ * But it could not do more to isolate the page from being accessed again,
+ * nor does it kill the process. This is extremely rare and one of the
+ * potential causes is that the page state has been changed due to an
+ * underlying race condition. This is the most severe outcome.
+ *
+ * MF_FAILED - The m-f() handler marks the page as PG_hwpoisoned.
+ * It should have killed the process, but it can't isolate the page,
+ * due to conditions such as an extra pin, unmap failure, etc. Accessing
+ * the page again may trigger another MCE and the process will be killed
+ * by the m-f() handler immediately.
+ *
+ * MF_DELAYED - The m-f() handler marks the page as PG_hwpoisoned.
+ * The page is unmapped, and is removed from the LRU or file mapping.
+ * An attempt to access the page again will trigger a page fault and the
+ * PF handler will kill the process.
+ *
+ * MF_RECOVERED - The m-f() handler marks the page as PG_hwpoisoned.
+ * The page has been completely isolated, that is, unmapped, taken out of
+ * the buddy system, or hole-punched out of the file mapping.
+ */
static const char *action_name[] = {
[MF_IGNORED] = "Ignored",
[MF_FAILED] = "Failed",
@@ -896,10 +919,9 @@ static const char *action_name[] = {
static const char * const action_page_types[] = {
[MF_MSG_KERNEL] = "reserved kernel page",
[MF_MSG_KERNEL_HIGH_ORDER] = "high-order kernel page",
- [MF_MSG_SLAB] = "kernel slab page",
- [MF_MSG_DIFFERENT_COMPOUND] = "different compound page after locking",
[MF_MSG_HUGE] = "huge page",
[MF_MSG_FREE_HUGE] = "free huge page",
+ [MF_MSG_GET_HWPOISON] = "get hwpoison page",
[MF_MSG_UNMAP_FAILED] = "unmapping failed page",
[MF_MSG_DIRTY_SWAPCACHE] = "dirty swapcache page",
[MF_MSG_CLEAN_SWAPCACHE] = "clean swapcache page",
@@ -913,6 +935,7 @@ static const char * const action_page_types[] = {
[MF_MSG_BUDDY] = "free buddy page",
[MF_MSG_DAX] = "dax page",
[MF_MSG_UNSPLIT_THP] = "unsplit thp",
+ [MF_MSG_ALREADY_POISONED] = "already poisoned",
[MF_MSG_UNKNOWN] = "unknown page",
};
@@ -1020,12 +1043,13 @@ static int me_kernel(struct page_state *ps, struct page *p)
/*
* Page in unknown state. Do nothing.
+ * This is a catch-all in case we fail to make sense of the page state.
*/
static int me_unknown(struct page_state *ps, struct page *p)
{
pr_err("%#lx: Unknown page state\n", page_to_pfn(p));
unlock_page(p);
- return MF_FAILED;
+ return MF_IGNORED;
}
/*
@@ -1094,7 +1118,6 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p)
struct folio *folio = page_folio(p);
struct address_space *mapping = folio_mapping(folio);
- SetPageError(p);
/* TBD: print more information about the file. */
if (mapping) {
/*
@@ -1102,34 +1125,6 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p)
* who check the mapping.
* This way the application knows that something went
* wrong with its dirty file data.
- *
- * There's one open issue:
- *
- * The EIO will be only reported on the next IO
- * operation and then cleared through the IO map.
- * Normally Linux has two mechanisms to pass IO error
- * first through the AS_EIO flag in the address space
- * and then through the PageError flag in the page.
- * Since we drop pages on memory failure handling the
- * only mechanism open to use is through AS_AIO.
- *
- * This has the disadvantage that it gets cleared on
- * the first operation that returns an error, while
- * the PageError bit is more sticky and only cleared
- * when the page is reread or dropped. If an
- * application assumes it will always get error on
- * fsync, but does other operations on the fd before
- * and the page is dropped between then the error
- * will not be properly reported.
- *
- * This can already happen even without hwpoisoned
- * pages: first on metadata IO errors (which only
- * report through AS_EIO) or when the page is dropped
- * at the wrong time.
- *
- * So right now we assume that the application DTRT on
- * the first EIO, but we're not worse than other parts
- * of the kernel.
*/
mapping_set_error(mapping, -EIO);
}
@@ -1141,7 +1136,7 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p)
* Clean and dirty swap cache.
*
* Dirty swap cache page is tricky to handle. The page could live both in page
- * cache and swap cache(ie. page is freshly swapped in). So it could be
+ * table and swap cache(ie. page is freshly swapped in). So it could be
* referenced concurrently by 2 types of PTEs:
* normal PTEs and swap PTEs. We try to handle them consistently by calling
* try_to_unmap(!TTU_HWPOISON) to convert the normal PTEs to swap PTEs,
@@ -1429,6 +1424,8 @@ static int __get_hwpoison_page(struct page *page, unsigned long flags)
return 0;
}
+#define GET_PAGE_MAX_RETRY_NUM 3
+
static int get_any_page(struct page *p, unsigned long flags)
{
int ret = 0, pass = 0;
@@ -1443,12 +1440,12 @@ try_again:
if (!ret) {
if (page_count(p)) {
/* We raced with an allocation, retry. */
- if (pass++ < 3)
+ if (pass++ < GET_PAGE_MAX_RETRY_NUM)
goto try_again;
ret = -EBUSY;
} else if (!PageHuge(p) && !is_free_buddy_page(p)) {
/* We raced with put_page, retry. */
- if (pass++ < 3)
+ if (pass++ < GET_PAGE_MAX_RETRY_NUM)
goto try_again;
ret = -EIO;
}
@@ -1474,7 +1471,7 @@ try_again:
* A page we cannot handle. Check whether we can turn
* it into something we can handle.
*/
- if (pass++ < 3) {
+ if (pass++ < GET_PAGE_MAX_RETRY_NUM) {
put_page(p);
shake_page(p);
count_increased = false;
@@ -1536,7 +1533,7 @@ static int __get_unpoison_page(struct page *page)
* the given page has PG_hwpoison. So it's never reused for other page
* allocations, and __get_unpoison_page() never races with them.
*
- * Return: 0 on failure,
+ * Return: 0 on failure or free buddy (hugetlb) page,
* 1 on success for in-use pages in a well-defined state,
* -EIO for pages on which we can not handle memory errors,
* -EBUSY when get_hwpoison_page() has raced with page lifecycle
@@ -1585,7 +1582,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
* This check implies we don't kill processes if their pages
* are in the swap cache early. Those are always late kills.
*/
- if (!page_mapped(p))
+ if (!folio_mapped(folio))
return true;
if (folio_test_swapcache(folio)) {
@@ -1636,10 +1633,10 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
try_to_unmap(folio, ttu);
}
- unmap_success = !page_mapped(p);
+ unmap_success = !folio_mapped(folio);
if (!unmap_success)
pr_err("%#lx: failed to unmap page (folio mapcount=%d)\n",
- pfn, folio_mapcount(page_folio(p)));
+ pfn, folio_mapcount(folio));
/*
* try_to_unmap() might put mlocked page in lru cache, so call
@@ -1660,7 +1657,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
*/
forcekill = folio_test_dirty(folio) || (flags & MF_MUST_KILL) ||
!unmap_success;
- kill_procs(&tokill, forcekill, !unmap_success, pfn, flags);
+ kill_procs(&tokill, forcekill, pfn, flags);
return unmap_success;
}
@@ -1688,7 +1685,12 @@ static int identify_page_state(unsigned long pfn, struct page *p,
return page_action(ps, p, pfn);
}
-static int try_to_split_thp_page(struct page *page)
+/*
+ * When 'release' is 'false', it means that if thp split has failed,
+ * there is still more to do, hence the page refcount we took earlier
+ * is still needed.
+ */
+static int try_to_split_thp_page(struct page *page, bool release)
{
int ret;
@@ -1696,7 +1698,7 @@ static int try_to_split_thp_page(struct page *page)
ret = split_huge_page(page);
unlock_page(page);
- if (unlikely(ret))
+ if (ret && release)
put_page(page);
return ret;
@@ -1724,7 +1726,7 @@ static void unmap_and_kill(struct list_head *to_kill, unsigned long pfn,
unmap_mapping_range(mapping, start, size, 0);
}
- kill_procs(to_kill, flags & MF_MUST_KILL, false, pfn, flags);
+ kill_procs(to_kill, flags & MF_MUST_KILL, pfn, flags);
}
/*
@@ -1912,7 +1914,7 @@ static int folio_set_hugetlb_hwpoison(struct folio *folio, struct page *page)
{
struct llist_head *head;
struct raw_hwp_page *raw_hwp;
- struct raw_hwp_page *p, *next;
+ struct raw_hwp_page *p;
int ret = folio_test_set_hwpoison(folio) ? -EHWPOISON : 0;
/*
@@ -1923,7 +1925,7 @@ static int folio_set_hugetlb_hwpoison(struct folio *folio, struct page *page)
if (folio_test_hugetlb_raw_hwp_unreliable(folio))
return -EHWPOISON;
head = raw_hwp_list_head(folio);
- llist_for_each_entry_safe(p, next, head->first, node) {
+ llist_for_each_entry(p, head->first, node) {
if (p->page == page)
return -EHWPOISON;
}
@@ -2062,6 +2064,7 @@ retry:
if (flags & MF_ACTION_REQUIRED) {
folio = page_folio(p);
res = kill_accessing_process(current, folio_pfn(folio), flags);
+ action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
}
return res;
} else if (res == -EBUSY) {
@@ -2069,7 +2072,7 @@ retry:
flags |= MF_NO_RETRY;
goto retry;
}
- return action_result(pfn, MF_MSG_UNKNOWN, MF_IGNORED);
+ return action_result(pfn, MF_MSG_GET_HWPOISON, MF_IGNORED);
}
folio = page_folio(p);
@@ -2104,7 +2107,7 @@ retry:
if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
folio_unlock(folio);
- return action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
+ return action_result(pfn, MF_MSG_UNMAP_FAILED, MF_FAILED);
}
return identify_page_state(pfn, p, page_flags);
@@ -2125,14 +2128,10 @@ static inline unsigned long folio_free_raw_hwp(struct folio *folio, bool flag)
/* Drop the extra refcount in case we come from madvise() */
static void put_ref_page(unsigned long pfn, int flags)
{
- struct page *page;
-
if (!(flags & MF_COUNT_INCREASED))
return;
- page = pfn_to_page(pfn);
- if (page)
- put_page(page);
+ put_page(pfn_to_page(pfn));
}
static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
@@ -2167,6 +2166,22 @@ out:
return rc;
}
+/*
+ * The calling condition is as such: thp split failed, page might have
+ * been RDMA pinned, not much can be done for recovery.
+ * But a SIGBUS should be delivered with vaddr provided so that the user
+ * application has a chance to recover. Also, application processes'
+ * election for MCE early kill will be honored.
+ */
+static void kill_procs_now(struct page *p, unsigned long pfn, int flags,
+ struct folio *folio)
+{
+ LIST_HEAD(tokill);
+
+ collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
+ kill_procs(&tokill, true, pfn, flags);
+}
+
/**
* memory_failure - Handle memory failure of a page.
* @pfn: Page Number of the corrupted page
@@ -2238,6 +2253,7 @@ try_again:
res = kill_accessing_process(current, pfn, flags);
if (flags & MF_COUNT_INCREASED)
put_page(p);
+ action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
goto unlock_mutex;
}
@@ -2274,12 +2290,24 @@ try_again:
}
goto unlock_mutex;
} else if (res < 0) {
- res = action_result(pfn, MF_MSG_UNKNOWN, MF_IGNORED);
+ res = action_result(pfn, MF_MSG_GET_HWPOISON, MF_IGNORED);
goto unlock_mutex;
}
}
folio = page_folio(p);
+
+ /* filter pages that are protected from hwpoison test by users */
+ folio_lock(folio);
+ if (hwpoison_filter(p)) {
+ ClearPageHWPoison(p);
+ folio_unlock(folio);
+ folio_put(folio);
+ res = -EOPNOTSUPP;
+ goto unlock_mutex;
+ }
+ folio_unlock(folio);
+
if (folio_test_large(folio)) {
/*
* The flag must be set after the refcount is bumped
@@ -2295,8 +2323,11 @@ try_again:
* page is a valid handlable page.
*/
folio_set_has_hwpoisoned(folio);
- if (try_to_split_thp_page(p) < 0) {
- res = action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
+ if (try_to_split_thp_page(p, false) < 0) {
+ res = -EHWPOISON;
+ kill_procs_now(p, pfn, flags, folio);
+ put_page(p);
+ action_result(pfn, MF_MSG_UNSPLIT_THP, MF_FAILED);
goto unlock_mutex;
}
VM_BUG_ON_PAGE(!page_count(p), p);
@@ -2317,22 +2348,10 @@ try_again:
/*
* We're only intended to deal with the non-Compound page here.
- * However, the page could have changed compound pages due to
- * race window. If this happens, we could try again to hopefully
- * handle the page next round.
+ * The page cannot become a compound page again, as the folio has been
+ * split and an extra refcount is held.
*/
- if (folio_test_large(folio)) {
- if (retry) {
- ClearPageHWPoison(p);
- folio_unlock(folio);
- folio_put(folio);
- flags &= ~MF_COUNT_INCREASED;
- retry = false;
- goto try_again;
- }
- res = action_result(pfn, MF_MSG_DIFFERENT_COMPOUND, MF_IGNORED);
- goto unlock_page;
- }
+ WARN_ON(folio_test_large(folio));
/*
* We use page flags to determine what action should be taken, but
@@ -2343,14 +2362,6 @@ try_again:
*/
page_flags = folio->flags;
- if (hwpoison_filter(p)) {
- ClearPageHWPoison(p);
- folio_unlock(folio);
- folio_put(folio);
- res = -EOPNOTSUPP;
- goto unlock_mutex;
- }
-
/*
* __munlock_folio() may clear a writeback folio's LRU flag without
* the folio lock. We need to wait for writeback completion for this
@@ -2370,7 +2381,7 @@ try_again:
* Abort on fail: __filemap_remove_folio() assumes unmapped page.
*/
if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
- res = action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
+ res = action_result(pfn, MF_MSG_UNMAP_FAILED, MF_FAILED);
goto unlock_page;
}
@@ -2502,7 +2513,7 @@ static int __init memory_failure_init(void)
core_initcall(memory_failure_init);
#undef pr_fmt
-#define pr_fmt(fmt) "" fmt
+#define pr_fmt(fmt) "Unpoison: " fmt
#define unpoison_pr_info(fmt, pfn, rs) \
({ \
if (__ratelimit(rs)) \
@@ -2526,7 +2537,7 @@ int unpoison_memory(unsigned long pfn)
struct folio *folio;
struct page *p;
int ret = -EBUSY, ghp;
- unsigned long count = 1;
+ unsigned long count;
bool huge = false;
static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
DEFAULT_RATELIMIT_BURST);
@@ -2540,27 +2551,27 @@ int unpoison_memory(unsigned long pfn)
mutex_lock(&mf_mutex);
if (hw_memory_failure) {
- unpoison_pr_info("Unpoison: Disabled after HW memory failure %#lx\n",
+ unpoison_pr_info("%#lx: disabled after HW memory failure\n",
pfn, &unpoison_rs);
ret = -EOPNOTSUPP;
goto unlock_mutex;
}
if (is_huge_zero_folio(folio)) {
- unpoison_pr_info("Unpoison: huge zero page is not supported %#lx\n",
+ unpoison_pr_info("%#lx: huge zero page is not supported\n",
pfn, &unpoison_rs);
ret = -EOPNOTSUPP;
goto unlock_mutex;
}
if (!PageHWPoison(p)) {
- unpoison_pr_info("Unpoison: Page was already unpoisoned %#lx\n",
+ unpoison_pr_info("%#lx: page was already unpoisoned\n",
pfn, &unpoison_rs);
goto unlock_mutex;
}
if (folio_ref_count(folio) > 1) {
- unpoison_pr_info("Unpoison: Someone grabs the hwpoison page %#lx\n",
+ unpoison_pr_info("%#lx: someone grabs the hwpoison page\n",
pfn, &unpoison_rs);
goto unlock_mutex;
}
@@ -2569,18 +2580,14 @@ int unpoison_memory(unsigned long pfn)
folio_test_reserved(folio) || folio_test_offline(folio))
goto unlock_mutex;
- /*
- * Note that folio->_mapcount is overloaded in SLAB, so the simple test
- * in folio_mapped() has to be done after folio_test_slab() is checked.
- */
if (folio_mapped(folio)) {
- unpoison_pr_info("Unpoison: Someone maps the hwpoison page %#lx\n",
+ unpoison_pr_info("%#lx: someone maps the hwpoison page\n",
pfn, &unpoison_rs);
goto unlock_mutex;
}
if (folio_mapping(folio)) {
- unpoison_pr_info("Unpoison: the hwpoison page has non-NULL mapping %#lx\n",
+ unpoison_pr_info("%#lx: the hwpoison page has non-NULL mapping\n",
pfn, &unpoison_rs);
goto unlock_mutex;
}
@@ -2599,7 +2606,7 @@ int unpoison_memory(unsigned long pfn)
ret = put_page_back_buddy(p) ? 0 : -EBUSY;
} else {
ret = ghp;
- unpoison_pr_info("Unpoison: failed to grab page %#lx\n",
+ unpoison_pr_info("%#lx: failed to grab page\n",
pfn, &unpoison_rs);
}
} else {
@@ -2624,13 +2631,16 @@ unlock_mutex:
if (!ret) {
if (!huge)
num_poisoned_pages_sub(pfn, 1);
- unpoison_pr_info("Unpoison: Software-unpoisoned page %#lx\n",
+ unpoison_pr_info("%#lx: software-unpoisoned page\n",
page_to_pfn(p), &unpoison_rs);
}
return ret;
}
EXPORT_SYMBOL(unpoison_memory);
+#undef pr_fmt
+#define pr_fmt(fmt) "Soft offline: " fmt
+
static bool mf_isolate_folio(struct folio *folio, struct list_head *pagelist)
{
bool isolated = false;
@@ -2685,8 +2695,8 @@ static int soft_offline_in_use_page(struct page *page)
};
if (!huge && folio_test_large(folio)) {
- if (try_to_split_thp_page(page)) {
- pr_info("soft offline: %#lx: thp split failed\n", pfn);
+ if (try_to_split_thp_page(page, true)) {
+ pr_info("%#lx: thp split failed\n", pfn);
return -EBUSY;
}
folio = page_folio(page);
@@ -2698,7 +2708,7 @@ static int soft_offline_in_use_page(struct page *page)
if (PageHWPoison(page)) {
folio_unlock(folio);
folio_put(folio);
- pr_info("soft offline: %#lx page already poisoned\n", pfn);
+ pr_info("%#lx: page already poisoned\n", pfn);
return 0;
}
@@ -2711,7 +2721,7 @@ static int soft_offline_in_use_page(struct page *page)
folio_unlock(folio);
if (ret) {
- pr_info("soft_offline: %#lx: invalidated\n", pfn);
+ pr_info("%#lx: invalidated\n", pfn);
page_handle_poison(page, false, true);
return 0;
}
@@ -2728,13 +2738,13 @@ static int soft_offline_in_use_page(struct page *page)
if (!list_empty(&pagelist))
putback_movable_pages(&pagelist);
- pr_info("soft offline: %#lx: %s migration failed %ld, type %pGp\n",
+ pr_info("%#lx: %s migration failed %ld, type %pGp\n",
pfn, msg_page[huge], ret, &page->flags);
if (ret > 0)
ret = -EBUSY;
}
} else {
- pr_info("soft offline: %#lx: %s isolation failed, page count %d, type %pGp\n",
+ pr_info("%#lx: %s isolation failed, page count %d, type %pGp\n",
pfn, msg_page[huge], page_count(page), &page->flags);
ret = -EBUSY;
}
@@ -2746,8 +2756,9 @@ static int soft_offline_in_use_page(struct page *page)
* @pfn: pfn to soft-offline
* @flags: flags. Same as memory_failure().
*
- * Returns 0 on success
- * -EOPNOTSUPP for hwpoison_filter() filtered the error event
+ * Returns 0 on success,
+ * -EOPNOTSUPP for hwpoison_filter() filtered the error event, or
+ * disabled by /proc/sys/vm/enable_soft_offline,
* < 0 otherwise negated errno.
*
* Soft offline a page, by migration or invalidation,
@@ -2783,10 +2794,16 @@ int soft_offline_page(unsigned long pfn, int flags)
return -EIO;
}
+ if (!sysctl_enable_soft_offline) {
+ pr_info_once("disabled by /proc/sys/vm/enable_soft_offline\n");
+ put_ref_page(pfn, flags);
+ return -EOPNOTSUPP;
+ }
+
mutex_lock(&mf_mutex);
if (PageHWPoison(page)) {
- pr_info("%s: %#lx page already poisoned\n", __func__, pfn);
+ pr_info("%#lx: page already poisoned\n", pfn);
put_ref_page(pfn, flags);
mutex_unlock(&mf_mutex);
return 0;