author	Yosry Ahmed <yosryahmed@google.com>	2024-01-25 08:51:27 +0000
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2024-03-01 13:26:39 +0100
commit	14f1992430ef9e647b02aa8ca12c5bcb9a1dffea (patch)
tree	0af2b484c955c7f65f031324d9705bf1f550881f
parent	18f614369def2a11a52f569fe0f910b199d13487 (diff)
mm: zswap: fix missing folio cleanup in writeback race path
commit e3b63e966cac0bf78aaa1efede1827a252815a1d upstream.

In zswap_writeback_entry(), after we get a folio from __read_swap_cache_async(), we grab the tree lock again to check that the swap entry was not invalidated and recycled. If it was, we delete the folio we just added to the swap cache and exit. However, __read_swap_cache_async() returns the folio locked when it is newly allocated, which is always true for this path, and the folio is ref'd. Make sure to unlock and put the folio before returning.

This was discovered by code inspection, probably because this path handles a race condition that should not happen often. The bug would not crash the system; it would only strand the folio indefinitely.

Link: https://lkml.kernel.org/r/20240125085127.1327013-1-yosryahmed@google.com
Fixes: 04fc7816089c ("mm: fix zswap writeback race condition")
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Cc: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r--	mm/zswap.c	2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index b3829ada4a41..b7cb126797f9 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1013,6 +1013,8 @@ static int zswap_writeback_entry(struct zpool *pool, unsigned long handle)
if (zswap_rb_search(&tree->rbroot, entry->offset) != entry) {
spin_unlock(&tree->lock);
delete_from_swap_cache(page_folio(page));
+ unlock_page(page);
+ put_page(page);
ret = -ENOMEM;
goto fail;
}