path: root/fs/btrfs

2024-11-14  btrfs: fix incorrect comparison for delayed refs  (Josef Bacik)

When I reworked the delayed ref comparison in cf4f04325b2b ("btrfs: move ->parent and ->ref_root into btrfs_delayed_ref_node"), I made a mistake and returned -1 for the case where ref1->ref_root was greater than ref2->ref_root. This is a subtle bug that can result in an improper delayed ref running order, which can result in transaction aborts.

Fixes: cf4f04325b2b ("btrfs: move ->parent and ->ref_root into btrfs_delayed_ref_node")
CC: stable@vger.kernel.org # 6.10+
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

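A minimal sketch of the corrected comparison, assuming the ->ref_root and ->parent fields named in the Fixes reference (the actual helper in delayed-ref.c may differ in shape):

  /* Three-way compare: each "greater than" case must return 1, not -1. */
  static int cmp_refs_sketch(const struct btrfs_delayed_ref_node *ref1,
                             const struct btrfs_delayed_ref_node *ref2)
  {
          if (ref1->ref_root < ref2->ref_root)
                  return -1;
          if (ref1->ref_root > ref2->ref_root)
                  return 1;       /* the bug: this case wrongly returned -1 */
          if (ref1->parent < ref2->parent)
                  return -1;
          if (ref1->parent > ref2->parent)
                  return 1;
          return 0;
  }
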
2024-11-13  btrfs: validate queue limits  (Christoph Hellwig)

Call blk_validate_limits() on the queue limits used for zone append splitting so that calculated values get filled in and any stacking conflicts get caught. Without this there isn't a max_zone_append_sectors limit as of commit 559218d43ec9 ("block: pre-calculate max_zone_append_sectors").

Fixes: 559218d43ec9 ("block: pre-calculate max_zone_append_sectors")
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20241113084541.34315-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>

2024-11-11  btrfs: send: check for read-only send root under critical section  (Filipe Manana)

We're checking if the send root is read-only without being under the protection of the root's root_item_lock spinlock, which is what protects the root's flags when clearing the read-only flag, done at btrfs_ioctl_subvol_setflags(). Furthermore, it should be done in the same critical section that increments the root's send_in_progress counter, as btrfs_ioctl_subvol_setflags() clears the read-only flag in the same critical section that checks the counter's value.

So fix this by moving the read-only check under the critical section delimited by the root's root_item_lock which also increments the root's send_in_progress counter.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: send: check for dead send root under critical section  (Filipe Manana)

We're checking if the send root is dead without the protection of the root's root_item_lock spinlock, which is what protects the root's flags. The inverse, setting the dead flag on a root, is done under the protection of that lock, at btrfs_delete_subvolume(). Also checking and updating the root's send_in_progress counter is supposed to be done in the same critical section as checking for or setting the root dead flag, so that these operations are done atomically as a single step (which is correctly done by btrfs_delete_subvolume()).

So fix this by checking if the send root is dead in the same critical section that updates the send_in_progress counter, which is protected by the root's root_item_lock spinlock.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

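Taken together with the previous patch, the fixed code has roughly the following shape, as a sketch assuming the btrfs_root_dead() and btrfs_root_readonly() helpers and the send_root naming (the actual btrfs_ioctl_send() may differ in detail):

  spin_lock(&send_root->root_item_lock);
  /* Both flag checks and the counter update now happen atomically. */
  if (btrfs_root_dead(send_root) || !btrfs_root_readonly(send_root)) {
          spin_unlock(&send_root->root_item_lock);
          return -EPERM;
  }
  send_root->send_in_progress++;
  spin_unlock(&send_root->root_item_lock);
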
2024-11-11  btrfs: remove check for NULL fs_info at btrfs_folio_end_lock_bitmap()  (Filipe Manana)

Smatch complains about possibly dereferencing a NULL fs_info at btrfs_folio_end_lock_bitmap():

  fs/btrfs/subpage.c:332 btrfs_folio_end_lock_bitmap()
  warn: variable dereferenced before check 'fs_info' (see line 326)

because we access fs_info to set the 'start_bit' variable before doing the check for a NULL fs_info. However fs_info is never NULL, since the only caller of btrfs_folio_end_lock_bitmap() is extent_writepage(), where we have an inode which always has a non-NULL fs_info.

So remove the check for a NULL fs_info at btrfs_folio_end_lock_bitmap().

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: fix warning on PTR_ERR() against NULL device at btrfs_control_ioctl()  (Filipe Manana)

Smatch complains about calling PTR_ERR() against a NULL pointer:

  fs/btrfs/super.c:2272 btrfs_control_ioctl()
  warn: passing zero to 'PTR_ERR'

Fix this by calling PTR_ERR() against the device pointer only if it contains an error.

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

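The general shape of such a fix, as a sketch; lookup_device() is a hypothetical stand-in for the actual device scan call, which can return NULL without that being an error:

  struct btrfs_device *device;
  int ret = 0;

  device = lookup_device(vol->name);  /* may return NULL or ERR_PTR */
  if (IS_ERR(device))
          ret = PTR_ERR(device);      /* only decode an actual error */
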
2024-11-11  btrfs: fix a typo in btrfs_use_zone_append  (Christoph Hellwig)

REQ_OP_ZONE_APPNED -> REQ_OP_ZONE_APPEND.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: avoid superfluous calls to free_extent_map() in btrfs_encoded_read()  (Mark Harmstone)

Change the control flow of btrfs_encoded_read() so that it doesn't call free_extent_map() when we know that this has already been done.

Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Mark Harmstone <maharmstone@fb.com>
Suggested-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: simplify logic to decrement snapshot counter at btrfs_mksnapshot()  (Filipe Manana)

There's no point in having a 'snapshot_force_cow' variable to track if we need to decrement the root->snapshot_force_cow counter, as we never jump to the 'out' label after incrementing the counter. Simplify this by removing the variable and always decrementing the counter before the 'out' label, right after the call to btrfs_mksubvol().

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove hole from struct btrfs_delayed_node  (Filipe Manana)

On x86_64 and a release kernel, there's a 4 bytes hole in the structure after the ref count field:

  struct btrfs_delayed_node {
          u64                     inode_id;              /*     0     8 */
          u64                     bytes_reserved;        /*     8     8 */
          struct btrfs_root *     root;                  /*    16     8 */
          struct list_head        n_list;                /*    24    16 */
          struct list_head        p_list;                /*    40    16 */
          struct rb_root_cached   ins_root;              /*    56    16 */
          /* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
          struct rb_root_cached   del_root;              /*    72    16 */
          struct mutex            mutex;                 /*    88    32 */
          struct btrfs_inode_item inode_item;            /*   120   160 */
          /* --- cacheline 4 boundary (256 bytes) was 24 bytes ago --- */
          refcount_t              refs;                  /*   280     4 */

          /* XXX 4 bytes hole, try to pack */

          u64                     index_cnt;             /*   288     8 */
          long unsigned int       flags;                 /*   296     8 */
          int                     count;                 /*   304     4 */
          u32                     curr_index_batch_size; /*   308     4 */
          u32                     index_item_leaves;     /*   312     4 */

          /* size: 320, cachelines: 5, members: 15 */
          /* sum members: 312, holes: 1, sum holes: 4 */
          /* padding: 4 */
  };

Move the 'count' field, which is 4 bytes long, to just below the ref count field, so we eliminate the hole and reduce the structure size from 320 bytes down to 312 bytes:

  struct btrfs_delayed_node {
          u64                     inode_id;              /*     0     8 */
          u64                     bytes_reserved;        /*     8     8 */
          struct btrfs_root *     root;                  /*    16     8 */
          struct list_head        n_list;                /*    24    16 */
          struct list_head        p_list;                /*    40    16 */
          struct rb_root_cached   ins_root;              /*    56    16 */
          /* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
          struct rb_root_cached   del_root;              /*    72    16 */
          struct mutex            mutex;                 /*    88    32 */
          struct btrfs_inode_item inode_item;            /*   120   160 */
          /* --- cacheline 4 boundary (256 bytes) was 24 bytes ago --- */
          refcount_t              refs;                  /*   280     4 */
          int                     count;                 /*   284     4 */
          u64                     index_cnt;             /*   288     8 */
          long unsigned int       flags;                 /*   296     8 */
          u32                     curr_index_batch_size; /*   304     4 */
          u32                     index_item_leaves;     /*   308     4 */

          /* size: 312, cachelines: 5, members: 15 */
          /* last cacheline: 56 bytes */
  };

This now allows us to fit 13 delayed nodes per 4K page instead of 12.

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: update stale comment for struct btrfs_delayed_ref_node::add_list  (Filipe Manana)

The comment refers to a list in the respective delayed ref head that no longer exists (ref_list), it was replaced with a rbtree (ref_tree) in commit 0e0adbcfdc90 ("btrfs: track refs in a rb_tree instead of a list"). So update the stale comment to refer to the rbtree instead of the old list.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: add new ioctl to wait for cleaned subvolumes  (David Sterba)

Add a new unprivileged ioctl that will let the command 'btrfs subvolume sync' work without the (privileged) SEARCH_TREE ioctl.

There are several modes of operation, where the most common ones are to wait on a specific subvolume or all currently queued for cleaning. This is utilized e.g. in backup applications that delete subvolumes and wait until they're cleaned to check for remaining space.

The other modes are for flexibility, e.g. for monitoring or checkpoints in the queue of deleted subvolumes, again without the need to use SEARCH_TREE.

Notes:

- waiting is interruptible, the timeout is set to 1 second and is not configurable
- repeated calls to the ioctl see a different state, so this is inherently racy when using e.g. the count or peek next/last

Use cases:

- a subvolume A was deleted, wait for cleaning (WAIT_FOR_ONE)
- a bunch of subvolumes were deleted, wait for all (WAIT_FOR_QUEUED or PEEK_LAST + WAIT_FOR_ONE)
- count how many are queued (not blocking), for monitoring purposes
- report progress (PEEK_NEXT), may miss some if cleaning is quick
- own waiting in user space (PEEK_LAST until it's 0)

Signed-off-by: David Sterba <dsterba@suse.com>

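A hypothetical user space sketch of the WAIT_FOR_QUEUED mode follows; the ioctl number, structure and constant names are assumptions based on this commit message and have not been verified against the merged UAPI header:

  #include <sys/ioctl.h>
  #include <linux/btrfs.h>

  /* Wait until all currently queued deleted subvolumes are cleaned. */
  struct btrfs_ioctl_subvol_wait args = {
          .mode = BTRFS_SUBVOL_SYNC_WAIT_FOR_QUEUED,
  };
  int ret = ioctl(fd, BTRFS_IOC_SUBVOL_SYNC_WAIT, &args);
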
2024-11-11  btrfs: simplify range tracking in cow_file_range()  (Haisu Wang)

Simplify tracking of the processed range by using cur_alloc_size only to store the size of the reserved extent whose allocation may fail. Remove ram_size as well, since it is always equal to cur_alloc_size in this context. Advance 'start' in the normal path until extent allocation succeeds, and keep 'start' unchanged in the error handling path.

Passed the fstests generic/475 test a hundred times with quota enabled, and a modified generic/475 (with the sleep time removed) another hundred times. About one tenth of the runs entered the error handling path due to a failure to reserve an extent.

Suggested-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Haisu Wang <haisuwang@tencent.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove conditional path allocation in btrfs_read_locked_inode()  (Leo Martins)

Remove conditional path allocation from btrfs_read_locked_inode(). Add an ASSERT(path) to indicate it should never be called with a NULL path.

Call btrfs_read_locked_inode() directly from btrfs_iget(). This causes code duplication between btrfs_iget() and btrfs_iget_path(), but I think this is justifiable as it removes the need for conditionally allocating the path inside of btrfs_read_locked_inode(). This makes the code easier to reason about and makes it clear who has the responsibility of allocating and freeing the path.

Signed-off-by: Leo Martins <loemra.dev@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: push cleanup into btrfs_read_locked_inode()  (Leo Martins)

Move btrfs_add_inode_to_root() so it can be called from btrfs_read_locked_inode(); no changes were made to the function.

Move cleanup code from btrfs_iget_path() to btrfs_read_locked_inode(). This improves readability and fixes a leaky abstraction. Previously btrfs_iget_path() had to handle a positive error case as a result of a call to btrfs_search_slot(), but it makes more sense to handle this closer to the source of the call.

Signed-off-by: Leo Martins <loemra.dev@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: add struct io_btrfs_cmd as type for io_uring_cmd_to_pdu()  (Mark Harmstone)

Add struct io_btrfs_cmd as a wrapper type for io_uring_cmd_to_pdu(), rather than using a raw pointer.

Suggested-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Mark Harmstone <maharmstone@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: add io_uring command for encoded reads (ENCODED_READ ioctl)  (Mark Harmstone)

Add an io_uring command for encoded reads, using the same interface as the existing BTRFS_IOC_ENCODED_READ ioctl.

btrfs_uring_encoded_read() is an io_uring version of btrfs_ioctl_encoded_read(), which validates the user input and calls btrfs_encoded_read() to read the appropriate metadata. If we determine that we need to read an extent from disk, we call btrfs_encoded_read_regular_fill_pages() through btrfs_uring_read_extent() to prepare the bio.

The existing btrfs_encoded_read_regular_fill_pages() is changed so that if it is passed a valid uring_ctx, rather than waking up any waiting threads it calls btrfs_uring_read_extent_endio(). This in turn copies the read data back to userspace, and calls io_uring_cmd_done() to complete the io_uring command.

Because we're potentially doing a non-blocking read, btrfs_uring_read_extent() doesn't clean up after itself if it returns -EIOCBQUEUED. Instead, it allocates a priv struct, populates the fields there that we will need to unlock the inode and free our allocations, and defers this to the btrfs_uring_read_finished() that gets called when the bio completes.

Signed-off-by: Mark Harmstone <maharmstone@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: move priv off stack in btrfs_encoded_read_regular_fill_pages()  (Mark Harmstone)

Change btrfs_encoded_read_regular_fill_pages() so that the priv struct is allocated rather than stored on the stack, in preparation for adding an asynchronous mode to the function.

Signed-off-by: Mark Harmstone <maharmstone@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: don't sleep in btrfs_encoded_read() if IOCB_NOWAIT is set  (Mark Harmstone)

Change btrfs_encoded_read() so that it returns -EAGAIN rather than sleeps if IOCB_NOWAIT is set in iocb->ki_flags. The conditions that require sleeping are: inode lock, writeback, extent lock, ordered range.

Signed-off-by: Mark Harmstone <maharmstone@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

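The usual kernel pattern for honoring IOCB_NOWAIT at a sleeping lock, shown here for the inode lock as a sketch; the same trylock-or-EAGAIN shape applies to the other three conditions:

  if (iocb->ki_flags & IOCB_NOWAIT) {
          if (!inode_trylock_shared(inode))
                  return -EAGAIN;   /* would sleep, let the caller retry */
  } else {
          inode_lock_shared(inode);
  }
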
2024-11-11  btrfs: change btrfs_encoded_read() so that reading of extent is done by caller  (Mark Harmstone)

Change the behaviour of btrfs_encoded_read() so that if it needs to read an extent from disk, it leaves the extent and inode locked and returns -EIOCBQUEUED. The caller is then responsible for doing the I/O via btrfs_encoded_read_regular() and unlocking the extent and inode.

Signed-off-by: Mark Harmstone <maharmstone@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove pointless iocb::ki_pos addition in btrfs_encoded_read()  (Mark Harmstone)

iocb->ki_pos isn't used after this function, so there's no point in changing its value.

Signed-off-by: Mark Harmstone <maharmstone@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove no longer used delayed ref head search functionality  (Filipe Manana)

After the previous patch, which converted the rb-tree used to track delayed ref heads into an xarray, the find_ref_head() function is now used only by one caller which always passes false to the 'return_bigger' argument. So remove the 'return_bigger' logic, simplifying the function, and move all the function code to the single caller.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: track delayed ref heads in an xarray  (Filipe Manana)

Currently we use a red black tree (rb-tree) to track the delayed ref heads (in struct btrfs_delayed_ref_root::href_root). This however is not very efficient when the number of delayed ref heads is large (and it's very common to be at least in the order of thousands) since rb-trees are binary trees. For example for 10K delayed ref heads, the tree has a depth of 13. Besides that, inserting into the tree requires navigating through it and pulling useless cache lines in the process since the red black tree nodes are embedded within the delayed ref head structure. On the other hand, by being embedded, it requires no extra memory allocations.

We can improve this by using an xarray instead, which has a much higher branching factor than a red black tree (binary balanced tree) and is more cache friendly and behaves like a resizable array, with a much better search and insertion complexity than a red black tree. This only has one small disadvantage, which is that insertion will sometimes require allocating memory for the xarray (which may fail, though not that often since it uses a kmem_cache). On the other hand we can reduce the delayed ref head structure size by 24 bytes (from 152 down to 128 bytes) after removing the embedded red black tree node, meaning that we can now fit 32 delayed ref heads per 4K page instead of 26, and that gain compensates for the occasional memory allocations needed for the xarray nodes. We also end up using only 2 cache lines instead of 3 per delayed ref head.

Running the following fs_mark test showed some improvements:

  $ cat test.sh
  #!/bin/bash

  DEV=/dev/nullb0
  MNT=/mnt/nullb0
  MOUNT_OPTIONS="-o ssd"
  FILES=100000
  THREADS=$(nproc --all)

  echo "performance" | \
      tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

  mkfs.btrfs -f $DEV
  mount $MOUNT_OPTIONS $DEV $MNT

  OPTS="-S 0 -L 5 -n $FILES -s 0 -t $THREADS -k"
  for ((i = 1; i <= $THREADS; i++)); do
      OPTS="$OPTS -d $MNT/d$i"
  done

  fs_mark $OPTS

  umount $MNT

Before this patch:

  FSUse%        Count         Size    Files/sec     App Overhead
      10      1200000            0     171845.7         12253839
      16      2400000            0     230898.7         12308254
      23      3600000            0     212292.9         12467768
      30      4800000            0     195737.8         12627554
      46      6000000            0     171055.2         12783329

After this patch:

  FSUse%        Count         Size    Files/sec     App Overhead
      10      1200000            0     173835.0         12246131
      16      2400000            0     233537.8         12271746
      23      3600000            0     220398.7         12307737
      30      4800000            0     204483.6         12392318
      40      6000000            0     182923.3         12771843

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

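A minimal sketch of the xarray operations this conversion relies on, assuming the heads are indexed by a value derived from their bytenr (the exact index derivation is a detail of the real patch):

  #include <linux/xarray.h>

  struct xarray head_refs;
  struct btrfs_delayed_ref_head *head;
  unsigned long index;
  int ret;

  xa_init(&head_refs);

  /* Insertion may allocate xarray nodes and can therefore fail. */
  ret = xa_insert(&head_refs, index, head, GFP_ATOMIC);

  /* Lookup and iteration need no allocations. */
  head = xa_load(&head_refs, index);
  xa_for_each(&head_refs, index, head) {
          /* process each delayed ref head */
  }
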
2024-11-11  btrfs: add comments regarding locking to struct btrfs_delayed_ref_root  (Filipe Manana)

Add some comments to struct btrfs_delayed_ref_root's fields to mention what its spinlock protects.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: assert delayed refs lock is held at add_delayed_ref_head()  (Filipe Manana)

The delayed refs lock must be held when calling add_delayed_ref_head(), so assert that it's being held.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: assert delayed refs lock is held at find_first_ref_head()  (Filipe Manana)

The delayed refs lock must be held when calling find_first_ref_head(), so assert that it's being held.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: assert delayed refs lock is held at find_ref_head()  (Filipe Manana)

We have 3 callers of find_ref_head(), so assert at find_ref_head() that we have the delayed refs lock held, removing the assertion from one of its callers (btrfs_find_delayed_ref_head()).

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

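All three assertion patches above follow the standard lockdep pattern, roughly as in this sketch (assuming the spinlock is the 'lock' member of struct btrfs_delayed_ref_root):

  static struct btrfs_delayed_ref_head *find_ref_head(
                  struct btrfs_delayed_ref_root *delayed_refs, u64 bytenr)
  {
          /* Compiles away without lockdep; complains loudly if unheld. */
          lockdep_assert_held(&delayed_refs->lock);

          /* ... the actual head search ... */
          return NULL;
  }
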
2024-11-11  btrfs: pass fs_info to btrfs_delete_ref_head()  (Filipe Manana)

One of the following patches in the series will need to access fs_info at btrfs_delete_ref_head(), so pass a fs_info argument to it.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: pass fs_info to functions that search for delayed ref heads  (Filipe Manana)

One of the following patches in the series will need to access fs_info in the function find_ref_head(), so pass a fs_info argument to it as well as to the functions btrfs_select_ref_head() and btrfs_find_delayed_ref_head() which call find_ref_head().

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: move delayed ref head unselection to delayed-ref.c  (Filipe Manana)

The unselect_delayed_ref_head() at extent-tree.c doesn't really belong in that file, as it's a delayed refs specific detail and therefore should be at delayed-ref.c. Furthermore, its inverse, btrfs_select_ref_head(), is at delayed-ref.c, so it only makes sense to have it there too.

So move unselect_delayed_ref_head() into delayed-ref.c and rename it to btrfs_unselect_ref_head() so that its name closely matches its inverse (btrfs_select_ref_head()).

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: simplify obtaining a delayed ref head  (Filipe Manana)

Instead of doing it in two steps outside of delayed-ref.c, leaking low level details such as locking, move the logic entirely to delayed-ref.c under btrfs_select_ref_head(), reducing code and making things simpler for the caller.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: change return type of btrfs_delayed_ref_lock() to boolean  (Filipe Manana)

The function only returns 0, meaning it was able to lock the delayed ref head, or -EAGAIN in case it wasn't able to lock it. So simplify this and use a boolean return type instead, returning true if it was able to lock and false otherwise.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove num_entries atomic counter from delayed ref root  (Filipe Manana)

The atomic counter 'num_entries' is not used anymore, we increment it and decrement it but then we don't ever read it to use for any logic. Its last use was removed with commit 61a56a992fcf ("btrfs: delayed refs pre-flushing should only run the heads we have"). So remove it.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: use helper to find first ref head at btrfs_destroy_delayed_refs()  (Filipe Manana)

Instead of open coding it, use the find_first_ref_head() helper at btrfs_destroy_delayed_refs(). This avoids duplicating the logic, especially with the upcoming changes in subsequent patches.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove duplicated code to drop delayed ref during transaction abort  (Filipe Manana)

When destroying delayed refs during a transaction abort, we have open coded the removal of a delayed ref, which is also done by the static helper function drop_delayed_ref(). So remove that duplicated code and use drop_delayed_ref() instead.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove fs_info parameter from btrfs_cleanup_one_transaction()  (Filipe Manana)

The fs_info parameter is redundant because it can be extracted from the transaction given as another parameter. So remove it and use the fs_info accessible from the transaction.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove fs_info parameter from btrfs_destroy_delayed_refs()  (Filipe Manana)

The fs_info parameter is redundant because it can be extracted from the transaction given as another parameter. So remove it and use the fs_info accessible from the transaction.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

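For both this and the previous patch the change boils down to the following sketch, relying on the fs_info pointer stored in the transaction:

  /* before */
  void btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
                                  struct btrfs_fs_info *fs_info);

  /* after: derive it instead of passing it */
  void btrfs_destroy_delayed_refs(struct btrfs_transaction *trans)
  {
          struct btrfs_fs_info *fs_info = trans->fs_info;
          /* ... */
  }
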
2024-11-11  btrfs: move btrfs_destroy_delayed_refs() to delayed-ref.c  (Filipe Manana)

It's better suited at delayed-ref.c since it's about delayed refs and contains logic to iterate over them (using the red black tree, doing all the locking, freeing, etc), so move it from disk-io.c, which is pretty big, into delayed-ref.c, hiding implementation details of how delayed refs are tracked and managed. This also facilitates the next patches in the series.

This change moves the code between files but also does the following simple cleanups:

1) Rename the 'cache' variable to 'bg', since it's a block group (the 'cache' logic comes from old days where the block group structure was named 'btrfs_block_group_cache');

2) Move the 'ref' variable declaration to the scope of the inner while loop, since it's not used outside that loop.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove BUG_ON() at btrfs_destroy_delayed_refs()  (Filipe Manana)

At btrfs_destroy_delayed_refs() it's unexpected to not find the block group to which a delayed reference's extent belongs, so we have this BUG_ON(), not just because it's highly unexpected but also because we don't know what to do there. Since we are in the transaction abort path, there's nothing we can do other than proceed and cleanup all the resources we can. So remove the BUG_ON() and deal with a missing block group by logging an error message and continuing to cleanup all we can related to the current delayed ref head, then moving on to other delayed refs.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: reduce extent tree lock contention when searching for inline backref  (Robbie Ko)

When inserting an extent backref, in order to check whether refs other than inline refs are used, we always search the tree with path keep locks enabled, which increases lock contention on the extent tree. We do not need the parent node every time to determine whether normal refs are used; it is only needed when the extent item is the last item in a leaf.

Therefore, change it to first search with keep_locks=0. If the extent item happens to be the last item in its leaf, we then search again with keep_locks=1, reducing lock contention in the common case.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Robbie Ko <robbieko@synology.com>
Signed-off-by: David Sterba <dsterba@suse.com>

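A sketch of the two-pass search described above; last_item_in_leaf() is a hypothetical stand-in for the real "extent item is the last item in its leaf" check, and the other calls follow the usual btrfs_search_slot()/btrfs_release_path() pattern:

  path->keep_locks = 0;
  ret = btrfs_search_slot(trans, root, &key, path, extra_size, 1);
  if (ret < 0)
          return ret;

  if (last_item_in_leaf(path)) {
          /* Rare case: the parent must stay locked, so search again. */
          btrfs_release_path(path);
          path->keep_locks = 1;
          ret = btrfs_search_slot(trans, root, &key, path, extra_size, 1);
  }
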
2024-11-11  btrfs: tests: implement case for partial RAID stripe-tree delete  (Johannes Thumshirn)

Implement self-tests for partial deletion of RAID stripe-tree entries. These two new tests cover both the deletion of the front of a RAID stripe-tree stripe extent as well as truncation of an item to make it smaller.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: implement partial deletion of RAID stripe extents  (Johannes Thumshirn)

In our CI system, the RAID stripe tree configuration sometimes fails with the following ASSERT():

  assertion failed: found_start >= start && found_end <= end, in fs/btrfs/raid-stripe-tree.c:64

This ASSERT()ion triggers because, for the initial design of RAID stripe-tree, I had the "one ordered-extent equals one bio" rule of zoned btrfs in mind. But for a RAID stripe-tree based system hosted on a regular (non-zoned) device, this rule doesn't apply.

So in case the range we want to delete starts in the middle of the previous item, grab the item and "truncate" its length. That is, clone the item, subtract the deleted portion from the key's offset, delete the old item and insert the new one.

In case the range to delete ends in the middle of an item, we have to adjust both the item's key as well as the stripe extents and then re-insert the modified clone into the tree after deleting the old stripe extent.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: use filemap_get_folio() helper  (Anand Jain)

When fgp_flags and gfp_flags are zero, use filemap_get_folio(A, B) instead of __filemap_get_folio(A, B, 0, 0), avoiding the two redundant zero arguments.

Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

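The change amounts to the following; note that filemap_get_folio() reports a cache miss as an ERR_PTR rather than NULL:

  /* before */
  folio = __filemap_get_folio(mapping, index, 0, 0);

  /* after */
  folio = filemap_get_folio(mapping, index);
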
2024-11-11  btrfs: convert btrfs_buffered_write() to use folios  (Qu Wenruo)

The buffered write path is still heavily utilizing the page interface. Since we have converted it to do a page-by-page copy, it's much easier to convert all involved functions to the folio interface. This involves:

- btrfs_copy_from_user()
- btrfs_drop_folio()
- prepare_uptodate_page()
- prepare_one_page()
- lock_and_cleanup_extent_if_need()
- btrfs_dirty_page()

All functions are changed to accept a folio parameter, and if the word "page" is in the function name, change that to "folio" too.

The function btrfs_dirty_page() is exported for the v1 space cache; convert the v1 cache call site to convert its page to a folio for the new interface.

And there is a small enhancement for prepare_one_folio(): instead of manually waiting for the page writeback, let __filemap_get_folio() handle that by using FGP_WRITEBEGIN, which implies (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE).

Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

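A sketch of what prepare_one_folio() gains from FGP_WRITEBEGIN, assuming the surrounding mapping, index, and gfp context:

  struct folio *folio;

  folio = __filemap_get_folio(inode->i_mapping, index, FGP_WRITEBEGIN,
                              mapping_gfp_mask(inode->i_mapping));
  if (IS_ERR(folio))
          return PTR_ERR(folio);
  /* The folio is now locked, created if missing, and past writeback
   * (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE), so no manual
   * writeback wait is needed. */
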
2024-11-11  btrfs: make buffered write to copy one page a time  (Qu Wenruo)

Currently btrfs_buffered_write() is preparing multiple pages at a time, allowing better performance. But the current trend is to support larger folios as an optimization, instead of implementing our own multi-page optimization.

This is inspired by generic_perform_write(), which copies one folio at a time. Such a change prepares us to migrate to implementing the write_begin() and write_end() callbacks, and makes every involved function a little easier.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: fix wrong sizeof in btrfs_do_encoded_write()  (Mark Harmstone)

btrfs_do_encoded_write() was converted to use folios in commit 400b172b8cdc, but we're still allocating based on sizeof(struct page *) rather than sizeof(struct folio *). There's no functional change.

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Mark Harmstone <maharmstone@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

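The fix is a one-liner of the following shape (the variable names here are assumed; both pointer types have the same size, hence no functional change):

  /* before: sized by the old page pointer type */
  folios = kcalloc(nr_folios, sizeof(struct page *), GFP_NOFS);

  /* after: matches what the array actually stores */
  folios = kcalloc(nr_folios, sizeof(struct folio *), GFP_NOFS);
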
2024-11-11  btrfs: use str_yes_no() helper function in btrfs_dump_free_space()  (Thorsten Blum)

Remove hard-coded strings by using the str_yes_no() and str_no_yes() helper functions.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

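The helpers come from linux/string_choices.h; a generic sketch of the substitution:

  #include <linux/string_choices.h>

  /* before */
  pr_info("ro %s\n", ro ? "yes" : "no");

  /* after */
  pr_info("ro %s\n", str_yes_no(ro));
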
2024-11-11  btrfs: rename btrfs_folio_(set|start|end)_writer_lock()  (Qu Wenruo)

Since there is no user of reader locks, rename the writer locks into a more generic name, by removing the "_writer" part from the name. And also rename btrfs_subpage::writer into btrfs_subpage::locked.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: unify to use writer locks for subpage locking  (Qu Wenruo)

Since commit d7172f52e993 ("btrfs: use per-buffer locking for extent_buffer reading"), metadata read no longer relies on the subpage reader locking.

This means we do not need to maintain a different metadata/data split for locking, so we can convert the existing reader lock users by:

- add_ra_bio_pages()
  Convert to btrfs_folio_set_writer_lock()

- end_folio_read()
  Convert to btrfs_folio_end_writer_lock()

- begin_folio_read()
  Convert to btrfs_folio_set_writer_lock()

- folio_range_has_eb()
  Remove the subpage->readers checks, since it is always 0.

- Remove btrfs_subpage_start_reader() and btrfs_subpage_end_reader()

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2024-11-11  btrfs: remove unused btrfs_folio_start_writer_lock()  (Qu Wenruo)

This function is not really suitable for locking a folio, as it lacks the proper mapping checks, thus the locked folio may not even belong to btrfs. For that reason its last user, inside lock_delalloc_folios(), has already been removed, so we can remove this function as well.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>