path: root/drivers/md
Age  Commit message  Author

2023-03-30  dm: add dm_num_hash_locks()  (Mike Snitzer)

    Add a simple helper for use when DM core code needs to size its
    lock-splitting data structures based on num_online_cpus().
    dm_num_hash_locks() rounds num_online_cpus() up to the next power of
    2, but caps the return value at DM_HASH_LOCKS_MAX (64).

    This heuristic may evolve as warranted, but as-is it serves as a more
    informed basis for sizing the sharded lock structs in dm-bufio's
    dm_buffer_cache (buffer_trees) and dm-bio-prison-v1's dm_bio_prison
    (prison_regions).

    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

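    A minimal sketch of the heuristic as described (DM_HASH_LOCKS_MAX is
    taken from the text; the exact upstream helper may differ slightly):

        #define DM_HASH_LOCKS_MAX 64

        /* Round num_online_cpus() up to the next power of 2, capped at
         * DM_HASH_LOCKS_MAX, to size sharded-lock data structures. */
        static inline unsigned int dm_num_hash_locks(void)
        {
                unsigned int num_locks = roundup_pow_of_two(num_online_cpus());

                return min_t(unsigned int, num_locks, DM_HASH_LOCKS_MAX);
        }
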
2023-03-30dm bio prison v1: add dm_cell_key_has_valid_rangeMike Snitzer
Don't have bio_detain() BUG_ON if a dm_cell_key is beyond BIO_PRISON_MAX_RANGE or spans a boundary. Update dm-thin.c:build_key() to use dm_cell_key_has_valid_range() which will do this checking without using BUG_ON. Also update process_discard_bio() to check the discard bio that DM core passes in (having first imposed max_discard_granularity based splitting). dm_cell_key_has_valid_range() will merely WARN_ON_ONCE if it returns false because if it does: it is programmer error that should be caught with proper testing. So relax the BUG_ONs to be WARN_ON_ONCE. Signed-off-by: Mike Snitzer <snitzer@kernel.org>
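    A sketch of the check, assuming BIO_PRISON_MAX_RANGE is a power of
    two with BIO_PRISON_MAX_RANGE_SHIFT as its log2 (names beyond those
    in the text are assumptions):

        static bool dm_cell_key_has_valid_range(struct dm_cell_key *key)
        {
                /* range must not exceed BIO_PRISON_MAX_RANGE... */
                if (WARN_ON_ONCE(key->block_end - key->block_begin > BIO_PRISON_MAX_RANGE))
                        return false;
                /* ...and must not span a BIO_PRISON_MAX_RANGE boundary */
                if (WARN_ON_ONCE((key->block_begin >> BIO_PRISON_MAX_RANGE_SHIFT) !=
                                 ((key->block_end - 1) >> BIO_PRISON_MAX_RANGE_SHIFT)))
                        return false;

                return true;
        }
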
2023-03-30  dm bio prison v1: improve concurrent IO performance  (Joe Thornber)

    Split the bio prison into multiple regions, with a separate rbtree
    and associated lock for each region.

    To get fast bio prison locking without hurting discard performance
    too much, the bio prison now stipulates that discards should not
    cross a BIO_PRISON_MAX_RANGE boundary.

    Because the range of a key (block_end - block_begin) must not exceed
    BIO_PRISON_MAX_RANGE: break_up_discard_bio() now ensures the data
    range reflected in the PHYSICAL key doesn't exceed
    BIO_PRISON_MAX_RANGE. And splitting the thin target's discards
    (handled with a VIRTUAL key) is achieved by updating dm-thin.c to set
    limits->max_discard_sectors in terms of BIO_PRISON_MAX_RANGE _and_
    setting the thin and thin-pool targets' max_discard_granularity to
    true.

    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

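    Because a valid key now fits entirely within one
    BIO_PRISON_MAX_RANGE-sized span, each key maps to exactly one
    region's lock and rbtree. A sketch of the idea (struct and field
    names are assumptions based on the description):

        struct prison_region {
                spinlock_t lock;
                struct rb_root cells;
        } ____cacheline_aligned_in_smp;

        struct dm_bio_prison {
                unsigned int num_locks;          /* sized via dm_num_hash_locks() */
                struct prison_region regions[];
        };

        static struct prison_region *key_to_region(struct dm_bio_prison *prison,
                                                   struct dm_cell_key *key)
        {
                /* all blocks of a valid key share the same region index;
                 * num_locks is a power of 2 */
                unsigned int idx = (key->block_begin >> BIO_PRISON_MAX_RANGE_SHIFT) &
                                   (prison->num_locks - 1);

                return &prison->regions[idx];
        }
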
2023-03-30  dm: split discards further if target sets max_discard_granularity  (Mike Snitzer)

    The block core (bio_split_discard) will already split discards based
    on the 'discard_granularity' and 'max_discard_sectors' queue_limits.
    But the DM thin target also needs to ensure that it doesn't receive a
    discard that spans a 'max_discard_sectors' boundary.

    Introduce a dm_target 'max_discard_granularity' flag that, if set,
    causes DM core to split discard bios relative to
    'max_discard_sectors'. This treats 'discard_granularity' as a
    "min_discard_granularity" and 'max_discard_sectors' as a
    "max_discard_granularity".

    Requested-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

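    The splitting rule amounts to trimming each discard at the next
    max_discard_sectors-aligned boundary; a simplified sketch (the helper
    name is hypothetical, and 32-bit kernels would need sector_div()
    rather than '%'):

        /* How many sectors may be discarded from 'sector' without
         * crossing the next max_discard_sectors-aligned boundary. */
        static sector_t max_discard_len(sector_t sector, sector_t len,
                                        sector_t max_discard_sectors)
        {
                sector_t remaining = max_discard_sectors -
                                     (sector % max_discard_sectors);

                return min(len, remaining);
        }
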
2023-03-30  dm thin: speed up cell_defer_no_holder()  (Joe Thornber)

    Reduce the time that a spinlock is held in cell_defer_no_holder().

    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-30  dm bufio: use multi-page bio vector  (Mikulas Patocka)

    The kernel supports multi-page bio vector entries, so use them in
    dm-bufio as an optimization.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-30  dm bufio: use waitqueue_active in __free_buffer_wake  (Mikulas Patocka)

    Save one spinlock acquisition per wakeup by using waitqueue_active().
    We hold the bufio lock at this point, so no one can add entries to
    the waitqueue meanwhile.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

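    The pattern in miniature: wake_up() otherwise takes the waitqueue
    spinlock unconditionally, and the unlocked waitqueue_active() check
    is safe only because the held bufio lock prevents new waiters from
    arriving (field name per the dm-bufio client context):

        /* in __free_buffer_wake(), with the client lock held: */
        if (waitqueue_active(&c->free_buffer_wait))
                wake_up(&c->free_buffer_wait);
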
2023-03-30  dm bufio: move dm_bufio_client members to avoid spanning cachelines  (Mike Snitzer)

    The movement also consolidates holes in the dm_bufio_client struct;
    the overall size of the struct is unchanged.

    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-30  dm bufio: add lock_history optimization for cache iterators  (Joe Thornber)

    Sometimes it is beneficial to repeatedly get and drop locks as part
    of an iteration. Introduce a lock_history struct to help avoid
    redundant drops and gets of the same lock. This optimizes
    cache_iterate, cache_mark_many and cache_evict.

    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

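    A sketch of the idea: remember which shard lock is held and only
    cycle locks when the iteration crosses into a different shard (field
    and helper names here are assumptions):

        #define NO_SHARD UINT_MAX

        struct lock_history {
                struct dm_buffer_cache *cache;
                bool write;              /* take locks shared or exclusive? */
                unsigned int previous;   /* index of held shard lock, or NO_SHARD */
        };

        static void lh_next(struct lock_history *lh, sector_t block)
        {
                /* num_locks is a power of 2 */
                unsigned int idx = block & (lh->cache->num_locks - 1);

                if (idx == lh->previous)
                        return;          /* same shard: keep the lock we hold */

                if (lh->previous != NO_SHARD)
                        cache_unlock(lh->cache, lh->previous, lh->write);
                cache_lock(lh->cache, idx, lh->write);
                lh->previous = idx;
        }
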
2023-03-30  dm bufio: improve concurrent IO performance  (Joe Thornber)

    When multiple threads perform IO to a thin device, the underlying
    dm_bufio object can become a bottleneck, slowing down access to the
    btree nodes that store the thin metadata. Prior to this commit, each
    bufio instance had a single mutex that was taken for every bufio
    operation.

    This commit concentrates on improving the common case: a user of
    dm_bufio wishes to access, but not modify, a buffer which is already
    within the dm_bufio cache.

    Implementation:

    The code has been refactored, pulling out an 'lru' abstraction and a
    'buffer cache' abstraction (see the 2 previous commits). This commit
    updates the higher-level bufio code (which performs allocation of
    buffers, IO and eviction/cache sizing) to leverage both abstractions.
    It also deals with the delicate locking requirements of both
    abstractions to provide finer-grained locking. The result is
    significantly better concurrent IO performance.

    Before this commit, bufio had a global lru list used to evict the
    oldest, clean buffers from _all_ clients. With the new locking we
    don't want different ways to access the same buffer, so instead
    do_global_cleanup() loops over the clients asking them to free
    buffers older than a certain time.

    This commit also converts many old BUG_ONs to WARN_ON_ONCE; see the
    lru_evict and cache_evict code in particular. They will return
    ER_DONT_EVICT if a given buffer somehow reaches a state that the
    invariants say should _never_ happen.

    [Aside from revising this commit's header and fixing coding style and
    whitespace nits, this switch to WARN_ON_ONCE is Mike Snitzer's lone
    contribution to this commit.]

    Testing:

    Some of the low-level functions have been unit tested using dm-unit:
    https://github.com/jthornber/dm-unit/blob/main/src/tests/bufio.rs

    Higher-level concurrency and IO is tested via a test-only target
    found here:
    https://github.com/jthornber/linux/blob/2023-03-24-thin-concurrency-9/drivers/md/dm-bufio-test.c

    The associated userland side of these tests is here:
    https://github.com/jthornber/dmtest-python/blob/main/src/dmtest/bufio/bufio_tests.py

    In addition the full dmtest suite of tests (dm-thin, dm-cache, etc)
    has been run (~450 tests).

    Performance:

    Most bufio operations have unchanged performance. But if multiple
    threads are attempting to get buffers concurrently, and these buffers
    are already in the cache, there's a big speed up. E.g., one test has
    16 'hotspot' threads simulating btree lookups while another thread
    dirties the whole device. In this case the hotspot threads acquire
    the buffers about 25 times faster.

    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-30  dm bufio: add dm_buffer_cache abstraction  (Joe Thornber)

    The buffer cache is responsible for managing the holder count,
    tracking clean/dirty state, and choosing buffers via predicates.
    Higher-level code is responsible for allocation of buffers, IO and
    eviction/cache sizing.

    The buffer cache has thread-safe methods for acquiring a reference to
    an existing buffer. All other methods in the buffer cache are _not_
    threadsafe, and only contain enough locking to guarantee the safe
    methods.

    Rather than a single mutex, sharded rw_semaphores are used to allow
    concurrent threads to 'get' buffers. Each rw_semaphore protects its
    own rbtree of buffer entries.

    Code that uses this new dm_buffer_cache abstraction will be
    introduced in a following commit. This commit moves the dm_buffer
    struct so that the finer-grained dm_buffer changes in the next commit
    are more easily seen. It also introduces temporary dm_buffer struct
    members to allow compilation of this intermediate commit (they will
    be elided in the next commit). This commit will cause "defined but
    not used" compiler warnings that will be resolved by the next commit.

    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

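    A sketch of the sharding described (struct, field and helper names
    are assumptions): each shard pairs an rw_semaphore with its own
    rbtree, and a buffer's block number selects the shard, so concurrent
    getters of different blocks rarely contend:

        struct buffer_tree {
                struct rw_semaphore lock;
                struct rb_root root;
        } ____cacheline_aligned_in_smp;

        struct dm_buffer_cache {
                unsigned int num_locks;   /* power of 2, per dm_num_hash_locks() */
                struct buffer_tree trees[];
        };

        /* thread-safe 'get': shared lock on one shard only */
        static struct dm_buffer *cache_get(struct dm_buffer_cache *bc, sector_t block)
        {
                struct buffer_tree *t = &bc->trees[block & (bc->num_locks - 1)];
                struct dm_buffer *b;

                down_read(&t->lock);
                b = __cache_find(t, block);     /* rbtree walk; hypothetical helper */
                if (b)
                        atomic_inc(&b->hold_count);  /* holder count, as described */
                up_read(&t->lock);

                return b;
        }
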
2023-03-30  dm bufio: add LRU abstraction  (Joe Thornber)

    A CLOCK algorithm is used in this LRU abstraction. This avoids
    relinking list nodes, which would require a write lock protecting
    them.

    None of the LRU methods are threadsafe; locking must be done at a
    higher level.

    Code that uses this new LRU will be introduced in the next 2 commits.
    As such, this commit will cause "defined but not used" compiler
    warnings that will be resolved by the next 2 commits.

    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

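    A minimal model of the CLOCK scheme described (names are
    assumptions): access just sets a reference bit, so no relinking and
    no write lock for readers; eviction sweeps the list, giving
    referenced entries a second chance:

        struct lru_entry {
                struct list_head list;
                atomic_t referenced;    /* set on access instead of relinking */
        };

        static inline void lru_reference(struct lru_entry *le)
        {
                atomic_set(&le->referenced, 1);
        }

        /* not threadsafe: caller provides higher-level locking */
        static struct lru_entry *lru_evict_candidate(struct list_head *clock)
        {
                struct lru_entry *le;

                list_for_each_entry(le, clock, list) {
                        if (atomic_cmpxchg(&le->referenced, 1, 0))
                                continue;   /* second chance */
                        return le;          /* unreferenced: evict this one */
                }
                return NULL;
        }
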
2023-03-30  dm bufio: don't bug for clear developer oversight  (Mike Snitzer)

    It is reasonable to relax these to WARN_ON: the conditions are easily
    avoided, yet the warnings still offer some assurance that future
    coding mistakes will be caught (if changes are tested properly).

    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-30  dm bufio: never crash if dm_bufio_in_request()  (Mike Snitzer)

    All these instances are entirely avoidable given that they speak to
    coding mistakes that result in inappropriate use. Proper testing
    during development will catch any such coding bug, so it's best to
    relax all of these from BUG_ON to WARN_ON_ONCE.

    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-30  dm bufio: use WARN_ON in dm_bufio_client_destroy and dm_bufio_exit  (Mike Snitzer)

    Using BUG_ON when tearing down is excessive. Relax these to WARN_ONs.

    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-30  dm bufio: remove unused dm_bufio_release_move interface  (Joe Thornber)

    It was used by the multi-snapshot DM target, which never went
    upstream.

    Signed-off-by: Joe Thornber <ejt@redhat.com>
    Acked-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-30  dm: fix __send_duplicate_bios() to always allow for splitting IO  (Mike Snitzer)

    Commit 7dd76d1feec70 ("dm: improve bio splitting and associated IO
    accounting") only called setup_split_accounting() from
    __send_duplicate_bios() if a single bio was being issued. But the
    case where duplicate bios are issued must call it too. Otherwise the
    bio won't be split and resubmitted (via recursion through block core
    back to DM) to submit the later portions of a bio (which may map to
    an entirely different target).

    For example, when discarding an entire DM striped device with the
    following DM table:

        vg-lvol0: 0 159744 striped 2 128 7:0 2048 7:1 2048
        vg-lvol0: 159744 45056 striped 2 128 7:2 2048 7:3 2048

    Before (broken, discards the first striped target's devices twice):

        device-mapper: striped: target_stripe=0, bdev=7:0, start=2048 len=79872
        device-mapper: striped: target_stripe=1, bdev=7:1, start=2048 len=79872
        device-mapper: striped: target_stripe=0, bdev=7:0, start=2049 len=22528
        device-mapper: striped: target_stripe=1, bdev=7:1, start=2048 len=22528

    After (works as expected):

        device-mapper: striped: target_stripe=0, bdev=7:0, start=2048 len=79872
        device-mapper: striped: target_stripe=1, bdev=7:1, start=2048 len=79872
        device-mapper: striped: target_stripe=0, bdev=7:2, start=2048 len=22528
        device-mapper: striped: target_stripe=1, bdev=7:3, start=2048 len=22528

    Fixes: 7dd76d1feec70 ("dm: improve bio splitting and associated IO accounting")
    Cc: stable@vger.kernel.org
    Reported-by: Orange Kao <orange@aiven.io>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-30  dm: fix improper splitting for abnormal bios  (Mike Snitzer)

    "Abnormal" bios include discards, write zeroes and secure erase. By
    no longer passing the calculated 'len' pointer, commit 7dd06a2548b2
    ("dm: allow dm_accept_partial_bio() for dm_io without duplicate
    bios") took a senseless approach to disallowing
    dm_accept_partial_bio() from working for duplicate bios processed
    using __send_duplicate_bios(). It inadvertently and incorrectly
    stopped the use of 'len' when initializing a target's io (in
    alloc_tio). As such, the resulting tio could address more area of a
    device than it should.

    For example, when discarding an entire DM striped device with the
    following DM table:

        vg-lvol0: 0 159744 striped 2 128 7:0 2048 7:1 2048
        vg-lvol0: 159744 45056 striped 2 128 7:2 2048 7:3 2048

    Before this fix:

        device-mapper: striped: target_stripe=0, bdev=7:0, start=2048 len=102400
        blkdiscard: attempt to access beyond end of device
        loop0: rw=2051, sector=2048, nr_sectors = 102400 limit=81920
        device-mapper: striped: target_stripe=1, bdev=7:1, start=2048 len=102400
        blkdiscard: attempt to access beyond end of device
        loop1: rw=2051, sector=2048, nr_sectors = 102400 limit=81920

    After this fix:

        device-mapper: striped: target_stripe=0, bdev=7:0, start=2048 len=79872
        device-mapper: striped: target_stripe=1, bdev=7:1, start=2048 len=79872

    Fixes: 7dd06a2548b2 ("dm: allow dm_accept_partial_bio() for dm_io without duplicate bios")
    Cc: stable@vger.kernel.org
    Reported-by: Orange Kao <orange@aiven.io>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-29  md: fix regression for null-ptr-dereference in __md_stop()  (Yu Kuai)

    Commit 3e453522593d ("md: Free resources in __md_stop") tried to fix
    a null-ptr-dereference for 'active_io' by moving percpu_ref_exit() to
    __md_stop(); however, the commit also moved 'writes_pending' to
    __md_stop(), and this broke the mdadm tests:

        BUG: kernel NULL pointer dereference, address: 0000000000000038
        Oops: 0000 [#1] PREEMPT SMP
        CPU: 15 PID: 17830 Comm: mdadm Not tainted 6.3.0-rc3-next-20230324-00009-g520d37
        RIP: 0010:free_percpu+0x465/0x670
        Call Trace:
         <TASK>
         __percpu_ref_exit+0x48/0x70
         percpu_ref_exit+0x1a/0x90
         __md_stop+0xe9/0x170
         do_md_stop+0x1e1/0x7b0
         md_ioctl+0x90c/0x1aa0
         blkdev_ioctl+0x19b/0x400
         vfs_ioctl+0x20/0x50
         __x64_sys_ioctl+0xba/0xe0
         do_syscall_64+0x6c/0xe0
         entry_SYSCALL_64_after_hwframe+0x63/0xcd

    The problem can be reproduced 100% by the following test:

        mdadm -CR /dev/md0 -l1 -n1 /dev/sda --force
        echo inactive > /sys/block/md0/md/array_state
        echo read-auto > /sys/block/md0/md/array_state
        echo inactive > /sys/block/md0/md/array_state

    Root cause:

        // start raid
        raid1_run
         mddev_init_writes_pending
          percpu_ref_init

        // inactive raid
        array_state_store
         do_md_stop
          __md_stop
           percpu_ref_exit

        // start raid again
        array_state_store
         do_md_run
          raid1_run
           mddev_init_writes_pending
            if (mddev->writes_pending.percpu_count_ptr)  // won't reinit

        // inactive raid again
        ...
        percpu_ref_exit -> null-ptr-dereference

    Before the commit, 'writes_pending' was exited when the mddev was
    freed, and it was safe to restart the raid because
    mddev_init_writes_pending() already made sure that 'writes_pending'
    would only be initialized once.

    Fix the problem by moving 'writes_pending' back. It is a little
    harder to see the relationship between where the memory is allocated
    and where it is freed, but the code change is much smaller and we
    have lived with this arrangement for a long time already.

    Fixes: 3e453522593d ("md: Free resources in __md_stop")
    Signed-off-by: Yu Kuai <yukuai3@huawei.com>
    Reviewed-by: Xiao Ni <xni@redhat.com>
    Signed-off-by: Song Liu <song@kernel.org>
    Link: https://lore.kernel.org/r/20230328094400.1448955-1-yukuai1@huaweicloud.com

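    The guard at the center of the root cause, sketched approximately
    (the release callback is a placeholder): because
    mddev_init_writes_pending() skips re-initialization whenever
    percpu_count_ptr is non-zero, 'writes_pending' must only be exited
    where the mddev is finally freed, not in __md_stop():

        static void no_op(struct percpu_ref *r) { }  /* placeholder release cb */

        int mddev_init_writes_pending(struct mddev *mddev)
        {
                if (mddev->writes_pending.percpu_count_ptr)
                        return 0;   /* already (or previously) initialized: skip */

                if (percpu_ref_init(&mddev->writes_pending, no_op,
                                    PERCPU_REF_ALLOW_REINIT, GFP_KERNEL) < 0)
                        return -ENOMEM;

                /* start with the refcount at zero */
                percpu_ref_put(&mddev->writes_pending);
                return 0;
        }
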
2023-03-28  mm: shrinkers: convert shrinker_rwsem to mutex  (Qi Zheng)

    Now that there are no readers of shrinker_rwsem, we can simply
    replace it with a mutex.

    Link: https://lkml.kernel.org/r/20230313112819.38938-9-zhengqi.arch@bytedance.com
    Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Kirill Tkhai <tkhai@ya.ru>
    Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
    Cc: Christian König <christian.koenig@amd.com>
    Cc: David Hildenbrand <david@redhat.com>
    Cc: Davidlohr Bueso <dave@stgolabs.net>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Michal Hocko <mhocko@kernel.org>
    Cc: Muchun Song <muchun.song@linux.dev>
    Cc: Paul E. McKenney <paulmck@kernel.org>
    Cc: Shakeel Butt <shakeelb@google.com>
    Cc: Sultan Alsawaf <sultan@kerneltoast.com>
    Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
    Cc: Yang Shi <shy828301@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2023-03-24  Merge tag 'for-6.3/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm  (Linus Torvalds)

    Pull device mapper fixes from Mike Snitzer:

     - Fix DM thin to work as a swap device by using the
       'limit_swap_bios' DM target flag (initially added to allow swap to
       dm-crypt) to throttle the amount of outstanding swap bios.

     - Fix DM crypt soft lockup warnings by calling cond_resched() from
       the cpu-intensive loop in dmcrypt_write().

     - Fix DM crypt to not access an uninitialized tasklet. This fix
       allows for consistent handling of IO completion, by _not_
       needlessly punting to a workqueue when tasklets are not needed.

     - Fix DM core's alloc_dev() initialization for DM stats to check for
       and propagate alloc_percpu() failure.

    * tag 'for-6.3/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
      dm stats: check for and propagate alloc_percpu failure
      dm crypt: avoid accessing uninitialized tasklet
      dm crypt: add cond_resched() to dmcrypt_write()
      dm thin: fix deadlock when swapping to thin device

2023-03-16  dm stats: check for and propagate alloc_percpu failure  (Jiasheng Jiang)

    Check alloc_percpu()'s return value and return an error from
    dm_stats_init() if it fails. Update alloc_dev() to fail if
    dm_stats_init() does. Otherwise, a NULL pointer dereference will
    occur in dm_stats_cleanup() even if dm-stats isn't being actively
    used.

    Fixes: fd2ed4d25270 ("dm: add statistics support")
    Cc: stable@vger.kernel.org
    Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

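    The shape of the fix, per the description above (member names are
    taken from dm-stats context and may differ slightly):

        int dm_stats_init(struct dm_stats *stats)
        {
                mutex_init(&stats->mutex);
                INIT_LIST_HEAD(&stats->list);

                stats->last = alloc_percpu(struct dm_stats_last_position);
                if (!stats->last)
                        return -ENOMEM;   /* previously ignored */

                return 0;
        }

        /* ...and in alloc_dev(): */
        r = dm_stats_init(&md->stats);
        if (r < 0)
                goto bad;
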
2023-03-16  blk-crypto: make blk_crypto_evict_key() return void  (Eric Biggers)

    blk_crypto_evict_key() is only called in contexts such as inode
    eviction where failure is not an option. So there is nothing the
    caller can do with errors except log them. (dm-table.c does "use" the
    error code, but only to pass on to upper layers, so it doesn't really
    count.) Just make blk_crypto_evict_key() return void and log errors
    itself.

    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20230315183907.53675-2-ebiggers@kernel.org
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

2023-03-15  Merge branch 'md-fixes' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into block-6.3  (Jens Axboe)

    Pull MD fixes from Song:

    "This set contains two fixes for old issues (by Neil) and one fix for
    6.3 (by Xiao)."

    * 'md-fixes' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
      md: select BLOCK_LEGACY_AUTOLOAD
      md: avoid signed overflow in slot_store()
      md: Free resources in __md_stop

2023-03-15  md: select BLOCK_LEGACY_AUTOLOAD  (NeilBrown)

    When BLOCK_LEGACY_AUTOLOAD is not enabled, mdadm is not able to
    activate new arrays unless "CREATE names=yes" appears in mdadm.conf.

    As this is a regression, we need to always enable
    BLOCK_LEGACY_AUTOLOAD when MD is selected - at least until mdadm is
    updated and the updates are widely available.

    Cc: stable@vger.kernel.org # v5.18+
    Fixes: fbdee71bb5d8 ("block: deprecate autoloading based on dev_t")
    Signed-off-by: NeilBrown <neilb@suse.de>
    Signed-off-by: Song Liu <song@kernel.org>

2023-03-15  block: count 'ios' and 'sectors' when io is done for bio-based device  (Yu Kuai)

    While using iostat for raid, I occasionally observed very strange
    'await' values. It turns out this is because 'ios' and 'sectors' are
    counted in bdev_start_io_acct(), while 'nsecs' is counted in
    bdev_end_io_acct(). I'm not sure why they were counted like that, but
    this behaviour is clearly wrong: users get incorrect disk stats.

    Fix the problem by counting 'ios' and 'sectors' when io is done, like
    rq-based devices do.

    Fixes: 394ffa503bc4 ("blk: introduce generic io stat accounting help function")
    Signed-off-by: Yu Kuai <yukuai3@huawei.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20230223091226.1135678-1-yukuai1@huaweicloud.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

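    A sketch of the corrected completion-side accounting (illustrative;
    the exact upstream signatures and helpers may differ): ios and
    sectors are now accounted at completion alongside nsecs, as rq-based
    devices do:

        void bdev_end_io_acct(struct block_device *bdev, enum req_op op,
                              unsigned int sectors, unsigned long start_time)
        {
                const int sgrp = op_stat_group(op);
                unsigned long now = READ_ONCE(jiffies);

                part_stat_lock();
                update_io_ticks(bdev, now, true);
                part_stat_inc(bdev, ios[sgrp]);              /* moved here from start */
                part_stat_add(bdev, sectors[sgrp], sectors); /* moved here from start */
                part_stat_add(bdev, nsecs[sgrp], jiffies_to_nsecs(now - start_time));
                part_stat_local_dec(bdev, in_flight[op_is_write(op)]);
                part_stat_unlock();
        }
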
2023-03-13  md: avoid signed overflow in slot_store()  (NeilBrown)

    slot_store() uses kstrtouint() to get a slot number, but stores the
    result in an "int" variable (by casting a pointer). This can result
    in a negative slot number if the unsigned int value is very large.

    A negative number means that the slot is empty, but setting a
    negative slot number this way will not remove the device from the
    array. I don't think this is a serious problem, but it could cause
    confusion and it is best to fix it.

    Reported-by: Dan Carpenter <error27@gmail.com>
    Signed-off-by: NeilBrown <neilb@suse.de>
    Signed-off-by: Song Liu <song@kernel.org>

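    The bug pattern, reduced (the helper and errno are hypothetical):
    kstrtouint() fills an unsigned value that is then stored through an
    int, silently going negative for large inputs; validate the range
    before narrowing:

        static int parse_slot(const char *buf, int *slot)
        {
                unsigned int v;
                int err = kstrtouint(buf, 10, &v);

                if (err < 0)
                        return err;
                if (v > INT_MAX)        /* would wrap negative via (int) cast */
                        return -EINVAL;

                *slot = v;
                return 0;
        }
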
2023-03-13  md: Free resources in __md_stop  (Xiao Ni)

    If md_run() fails after ->active_io is initialized, percpu_ref_exit()
    is called in the error path. However, md_free_disk() will later call
    percpu_ref_exit() again, which leads to a panic because of a null
    pointer dereference. The bug can also be triggered when resources are
    initialized but freed in the error path, only to be freed again in
    md_free_disk():

        BUG: kernel NULL pointer dereference, address: 0000000000000038
        Oops: 0000 [#1] PREEMPT SMP
        Workqueue: md_misc mddev_delayed_delete
        RIP: 0010:free_percpu+0x110/0x630
        Call Trace:
         <TASK>
         __percpu_ref_exit+0x44/0x70
         percpu_ref_exit+0x16/0x90
         md_free_disk+0x2f/0x80
         disk_release+0x101/0x180
         device_release+0x84/0x110
         kobject_put+0x12a/0x380
         kobject_put+0x160/0x380
         mddev_delayed_delete+0x19/0x30
         process_one_work+0x269/0x680
         worker_thread+0x266/0x640
         kthread+0x151/0x1b0
         ret_from_fork+0x1f/0x30

    For creating a raid device, md raid calls do_md_run->md_run; dm raid
    calls md_run directly. We allocate those resources in md_run. For
    stopping a raid device, md raid calls do_md_stop->__md_stop; dm raid
    calls md_stop->__md_stop. So we can free those resources in
    __md_stop.

    Fixes: 72adae23a72c ("md: Change active_io to percpu")
    Reported-and-tested-by: Yu Kuai <yukuai3@huawei.com>
    Signed-off-by: Xiao Ni <xni@redhat.com>
    Signed-off-by: Song Liu <song@kernel.org>

2023-03-09  dm crypt: avoid accessing uninitialized tasklet  (Mike Snitzer)

    When neither "no_read_workqueue" nor "no_write_workqueue" are
    enabled, tasklet_trylock() in crypt_dec_pending() may still return
    false due to an uninitialized state, and dm-crypt will unnecessarily
    do io completion in the io_queue workqueue instead of the current
    context.

    Fix this by adding an 'in_tasklet' flag to the dm_crypt_io struct and
    initializing it to false in crypt_io_init(). Set this flag to true in
    kcryptd_queue_crypt() before calling tasklet_schedule(). If set,
    crypt_dec_pending() will punt io completion to a workqueue. This also
    nicely avoids the tasklet_trylock/unlock hack when tasklets aren't in
    use.

    Fixes: 8e14f610159d ("dm crypt: do not call bio_endio() from the dm-crypt tasklet")
    Cc: stable@vger.kernel.org
    Reported-by: Hou Tao <houtao1@huawei.com>
    Suggested-by: Ignat Korchagin <ignat@cloudflare.com>
    Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

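    A sketch of the flag-based fix as described (fragments; surrounding
    function and field names follow the description and are assumptions
    here):

        /* in crypt_io_init() */
        io->in_tasklet = false;

        /* in kcryptd_queue_crypt(), only on the tasklet path */
        io->in_tasklet = true;
        tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work);
        tasklet_schedule(&io->tasklet);

        /* in crypt_dec_pending() */
        if (io->in_tasklet) {
                /* a tasklet is (or was) live: punt completion to io_queue */
                INIT_WORK(&io->work, kcryptd_io_bio_endio);
                queue_work(cc->io_queue, &io->work);
                return;
        }
        bio_endio(base_bio);    /* otherwise complete in the current context */
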
2023-03-06  dm crypt: add cond_resched() to dmcrypt_write()  (Mikulas Patocka)

    The loop in dmcrypt_write() may run for an unbounded amount of time,
    thus we need cond_resched() in it. This commit fixes the following
    warning:

        [ 3391.153255][  C12] watchdog: BUG: soft lockup - CPU#12 stuck for 23s! [dmcrypt_write/2:2897]
        ...
        [ 3391.387210][  C12] Call trace:
        [ 3391.390338][  C12]  blk_attempt_bio_merge.part.6+0x38/0x158
        [ 3391.395970][  C12]  blk_attempt_plug_merge+0xc0/0x1b0
        [ 3391.401085][  C12]  blk_mq_submit_bio+0x398/0x550
        [ 3391.405856][  C12]  submit_bio_noacct+0x308/0x380
        [ 3391.410630][  C12]  dmcrypt_write+0x1e4/0x208 [dm_crypt]
        [ 3391.416005][  C12]  kthread+0x130/0x138
        [ 3391.419911][  C12]  ret_from_fork+0x10/0x18

    Reported-by: yangerkun <yangerkun@huawei.com>
    Fixes: dc2676210c42 ("dm crypt: offload writes to thread")
    Cc: stable@vger.kernel.org
    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-03-06  dm thin: fix deadlock when swapping to thin device  (Coly Li)

    It is an already known issue that a dm-thin volume cannot be used as
    swap: a deadlock may happen when dm-thin's internal memory demand
    triggers swap I/O on the dm-thin volume itself. But thanks to commit
    a666e5c05e7c ("dm: fix deadlock when swapping to encrypted device"),
    the limit_swap_bios target flag can also be used for dm-thin to avoid
    the recursive I/O when it is used as swap.

    The fix is simply to set ti->limit_swap_bios to true in both
    pool_ctr() and thin_ctr().

    In my test, I create a dm-thin volume /dev/vg/swap and use it as a
    swap device. Then I run fio on another dm-thin volume /dev/vg/main
    and use a large --blocksize to trigger swap I/O onto /dev/vg/swap.
    The following fio command line is used in my test:

        fio --name recursive-swap-io --lockmem 1 --iodepth 128 \
            --ioengine libaio --filename /dev/vg/main --rw randrw \
            --blocksize 1M --numjobs 32 --time_based --runtime=12h

    Without this fix, the whole system can be locked up within 15
    seconds. With this fix, no deadlock or hung task is observed after 2
    hours of running fio.

    Furthermore, if blocksize is changed from 1M to 128M, after around 30
    seconds fio has no visible I/O and the out-of-memory killer message
    shows up in the kernel log. After around 20 minutes all fio processes
    are killed and the whole system is back to being alive. This is
    exactly what is expected when recursive I/O happens on a dm-thin
    volume used as swap.

    Depends-on: a666e5c05e7c ("dm: fix deadlock when swapping to encrypted device")
    Cc: stable@vger.kernel.org
    Signed-off-by: Coly Li <colyli@suse.de>
    Acked-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-25  Merge tag 'flex-array-transformations-6.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux  (Linus Torvalds)

    Pull flexible-array updates from Gustavo Silva:

    "Transform zero-length arrays, in unions, into flexible arrays"

    * tag 'flex-array-transformations-6.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux:
      bcache: Replace zero-length arrays with DECLARE_FLEX_ARRAY() helper
      mm/memremap: Replace zero-length array with DECLARE_FLEX_ARRAY() helper
      exportfs: Replace zero-length array with DECLARE_FLEX_ARRAY() helper

2023-02-22  Merge tag 'for-6.3/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm  (Linus Torvalds)

    Pull device mapper updates from Mike Snitzer:

     - Fix DM cache target to free background tracker work items,
       otherwise a slab BUG will occur when kmem_cache_destroy() is
       called.

     - Improve 2 of DM's shrinker names to reflect their use.

     - Fix the DM flakey target to not corrupt the zero page. Fix
       dm-flakey on 32-bit highmem systems by using bvec_kmap_local
       instead of page_address. Also, fix logic used when imposing the
       "corrupt_bio_byte" feature.

     - Stop using WQ_UNBOUND for DM verity target's verify_wq because it
       causes significant Android latencies on ARM64 (and doesn't show
       real benefit on other architectures).

     - Add negative check to catch the simple case of a DM table
       referencing itself. More complex scenarios that use intermediate
       devices to self-reference still need to be avoided/handled in
       userspace.

     - Fix DM core's resize to only send one uevent instead of two. This
       fixes a race with udev, that if udev wins, will cause udev to miss
       uevents (which caused premature unmount attempts by systemd).

     - Add cond_resched() to workqueue functions in DM core, dm-thin and
       dm-cache so that their loops aren't the cause of unintended cpu
       scheduling fairness issues.

     - Fix all of DM's checkpatch errors and warnings (famous last
       words). Various other small cleanups.

    * tag 'for-6.3/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (62 commits)
      dm: remove unnecessary (void*) conversion in event_callback()
      dm ioctl: remove unnecessary check when using dm_get_mdptr()
      dm ioctl: assert _hash_lock is held in __hash_remove
      dm cache: add cond_resched() to various workqueue loops
      dm thin: add cond_resched() to various workqueue loops
      dm: add cond_resched() to dm_wq_requeue_work()
      dm: add cond_resched() to dm_wq_work()
      dm sysfs: make kobj_type structure constant
      dm: update targets using system workqueues to use a local workqueue
      dm: remove flush_scheduled_work() during local_exit()
      dm clone: prefer kvmalloc_array()
      dm: declare variables static when sensible
      dm: fix suspect indent whitespace
      dm ioctl: prefer strscpy() instead of strlcpy()
      dm: avoid void function return statements
      dm integrity: change macros min/max() -> min_t/max_t where appropriate
      dm: fix use of sizeof() macro
      dm: avoid 'do {} while(0)' loop in single statement macros
      dm log: avoid multiple line dereference
      dm log: avoid trailing semicolon in macro
      ...

2023-02-21  Merge tag 'v6.3-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)

    Pull crypto update from Herbert Xu:

    "API:
      - Use kmap_local instead of kmap_atomic
      - Change request callback to take void pointer
      - Print FIPS status in /proc/crypto (when enabled)

    Algorithms:
      - Add rfc4106/gcm support on arm64
      - Add ARIA AVX2/512 support on x86

    Drivers:
      - Add TRNG driver for StarFive SoC
      - Delete ux500/hash driver (subsumed by stm32/hash)
      - Add zlib support in qat
      - Add RSA support in aspeed"

    * tag 'v6.3-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (156 commits)
      crypto: x86/aria-avx - Do not use avx2 instructions
      crypto: aspeed - Fix modular aspeed-acry
      crypto: hisilicon/qm - fix coding style issues
      crypto: hisilicon/qm - update comments to match function
      crypto: hisilicon/qm - change function names
      crypto: hisilicon/qm - use min() instead of min_t()
      crypto: hisilicon/qm - remove some unused defines
      crypto: proc - Print fips status
      crypto: crypto4xx - Call dma_unmap_page when done
      crypto: octeontx2 - Fix objects shared between several modules
      crypto: nx - Fix sparse warnings
      crypto: ecc - Silence sparse warning
      tls: Pass rec instead of aead_req into tls_encrypt_done
      crypto: api - Remove completion function scaffolding
      tls: Remove completion function scaffolding
      tipc: Remove completion function scaffolding
      net: ipv6: Remove completion function scaffolding
      net: ipv4: Remove completion function scaffolding
      net: macsec: Remove completion function scaffolding
      dm: Remove completion function scaffolding
      ...

2023-02-21  Merge tag 'rcu.2023.02.10a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu  (Linus Torvalds)

    Pull RCU updates from Paul McKenney:

     - Documentation updates

     - Miscellaneous fixes, perhaps most notably:
        - Throttling callback invocation based on the number of callbacks
          that are now ready to invoke instead of on the total number of
          callbacks
        - Several patches that suppress false-positive boot-time
          diagnostics, for example, due to lockdep not yet being
          initialized
        - Make expedited RCU CPU stall warnings dump stacks of any tasks
          that are blocking the stalled grace period. (Normal RCU CPU
          stall warnings have done this for many years)
        - Lazy-callback fixes to avoid delays during boot, suspend, and
          resume. (Note that lazy callbacks must be explicitly enabled,
          so this should not (yet) affect production use cases)

     - Make kfree_rcu() and friends take advantage of polled grace
       periods, thus reducing memory footprint by almost two orders of
       magnitude, admittedly on a microbenchmark. This also begins the
       transition from kfree_rcu(p) to kfree_rcu_mightsleep(p). This
       transition was motivated by bugs where kfree_rcu(p), which can
       block, was typed instead of the intended kfree_rcu(p, rh)

     - SRCU updates, perhaps most notably fixing a bug that causes SRCU
       to fail when booted on a system with a non-zero boot CPU. This
       surprising situation actually happens for kdump kernels on the
       powerpc architecture. This also adds srcu_down_read() and
       srcu_up_read(), which act like srcu_read_lock() and
       srcu_read_unlock(), but allow an SRCU read-side critical section
       to be handed off from one task to another

     - Clean up the now-useless SRCU Kconfig option. There are a few more
       commits that are not yet acked or pulled into maintainer trees,
       and these will be in a pull request for a later merge window

     - RCU-tasks updates, perhaps most notably these fixes:
        - A strange interaction between PID-namespace unshare and the
          RCU-tasks grace period that results in a low-probability but
          very real hang
        - A race between an RCU tasks rude grace period on a single-CPU
          system and CPU-hotplug addition of the second CPU that can
          result in a too-short grace period
        - A race between shrinking RCU tasks down to a single callback
          list and queuing a new callback to some other CPU, but where
          that queuing is delayed for more than an RCU grace period. This
          can result in that callback being stranded on the non-boot CPU

     - Torture-test updates and fixes

     - Torture-test scripting updates and fixes

     - Provide additional RCU CPU stall-warning information in kernels
       built with CONFIG_RCU_CPU_STALL_CPUTIME=y, and restore the full
       five-minute timeout limit for expedited RCU CPU stall warnings

    * tag 'rcu.2023.02.10a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (80 commits)
      rcu/kvfree: Add kvfree_rcu_mightsleep() and kfree_rcu_mightsleep()
      kernel/notifier: Remove CONFIG_SRCU
      init: Remove "select SRCU"
      fs/quota: Remove "select SRCU"
      fs/notify: Remove "select SRCU"
      fs/btrfs: Remove "select SRCU"
      fs: Remove CONFIG_SRCU
      drivers/pci/controller: Remove "select SRCU"
      drivers/net: Remove "select SRCU"
      drivers/md: Remove "select SRCU"
      drivers/hwtracing/stm: Remove "select SRCU"
      drivers/dax: Remove "select SRCU"
      drivers/base: Remove CONFIG_SRCU
      rcu: Disable laziness if lazy-tracking says so
      rcu: Track laziness during boot and suspend
      rcu: Remove redundant call to rcu_boost_kthread_setaffinity()
      rcu: Allow up to five minutes expedited RCU CPU stall-warning timeouts
      rcu: Align the output of RCU CPU stall warning messages
      rcu: Add RCU stall diagnosis information
      sched: Add helper nr_context_switches_cpu()
      ...

2023-02-20  dm: remove unnecessary (void*) conversion in event_callback()  (XU pengfei)

    Pointer variables of void * type do not require a type cast.

    Signed-off-by: XU pengfei <xupengfei@nfschina.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-17  dm ioctl: remove unnecessary check when using dm_get_mdptr()  (Hou Tao)

    __hash_remove() removes the hash_cell with _hash_lock locked, so
    holding _hash_lock guarantees that a non-NULL hc returned from
    dm_get_mdptr() has not been removed and that hc->md is still md.

    __hash_remove() also acquires dm_hash_cells_mutex before setting
    mdptr to NULL. So in dm_copy_name_and_uuid(), after acquiring
    dm_hash_cells_mutex and ensuring the returned hc is not NULL, the
    returned hc must still be alive and hc->md must still be md.

    Remove the unnecessary hc->md != md checks when using dm_get_mdptr()
    with _hash_lock or dm_hash_cells_mutex acquired.

    Signed-off-by: Hou Tao <houtao1@huawei.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-17  dm ioctl: assert _hash_lock is held in __hash_remove  (Mike Snitzer)

    Also update dm_early_create() to take _hash_lock when calling both
    __get_name_cell and __hash_remove. Given dm_early_create()'s
    early-boot use case this locking isn't about correctness, but it
    allows lockdep_assert_held() to be added to __hash_remove.

    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-17  dm cache: add cond_resched() to various workqueue loops  (Mike Snitzer)

    Otherwise, on resource-constrained systems, these workqueues may be
    too greedy.

    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-17  dm thin: add cond_resched() to various workqueue loops  (Mike Snitzer)

    Otherwise, on resource-constrained systems, these workqueues may be
    too greedy.

    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-16  dm: add cond_resched() to dm_wq_requeue_work()  (Mike Snitzer)

    Otherwise the while() loop in dm_wq_requeue_work() can result in a
    "dead loop" on systems that have preemption disabled. This is
    particularly problematic on single cpu systems.

    Fixes: 8b211aaccb915 ("dm: add two stage requeue mechanism")
    Cc: stable@vger.kernel.org
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-16  dm: add cond_resched() to dm_wq_work()  (Pingfan Liu)

    Otherwise the while() loop in dm_wq_work() can result in a "dead
    loop" on systems that have preemption disabled. This is particularly
    problematic on single cpu systems.

    Cc: stable@vger.kernel.org
    Signed-off-by: Pingfan Liu <piliu@redhat.com>
    Acked-by: Ming Lei <ming.lei@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-14  dm sysfs: make kobj_type structure constant  (Thomas Weißschuh)

    Since commit ee6d3dd4ed48 ("driver core: make kobj_type constant.")
    the driver core allows the usage of const struct kobj_type. Take
    advantage of this to constify the structure definition to prevent
    modification at runtime.

    Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-14  dm: update targets using system workqueues to use a local workqueue  (Tetsuo Handa)

    Flushing system-wide workqueues is dangerous and will be forbidden.
    Use a local workqueue in dm-mpath.c, dm-raid1.c, and dm-stripe.c.

    Link: https://lkml.kernel.org/r/49925af7-78a8-a3dd-bce6-cfc02e1a9236@I-love.SAKURA.ne.jp
    Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-14  dm: remove flush_scheduled_work() during local_exit()  (Mike Snitzer)

    Commit acfe0ad74d2e1 ("dm: allocate a special workqueue for deferred
    device removal") switched from using the system workqueue to a single
    workqueue local to DM. But it didn't eliminate the call to
    flush_scheduled_work() that was introduced purely for the benefit of
    deferred device removal with commit 2c140a246dc ("dm: allow remove to
    be deferred").

    Since DM core uses its own workqueue (and queue_work) there is no
    need to call flush_scheduled_work() from local_exit(). local_exit()'s
    destroy_workqueue(deferred_remove_workqueue) handles flushing work
    started with queue_work().

    Fixes: acfe0ad74d2e1 ("dm: allocate a special workqueue for deferred device removal")
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-14  dm clone: prefer kvmalloc_array()  (Heinz Mauelshagen)

    Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-14  dm: declare variables static when sensible  (Heinz Mauelshagen)

    Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-14  dm: fix suspect indent whitespace  (Heinz Mauelshagen)

    Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-14  dm ioctl: prefer strscpy() instead of strlcpy()  (Heinz Mauelshagen)

    Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>

2023-02-14  dm: avoid void function return statements  (Heinz Mauelshagen)

    Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
    Signed-off-by: Mike Snitzer <snitzer@kernel.org>