path: root/fs/bcachefs/six.c
2024-09-29Merge tag 'bcachefs-2024-09-28' of git://evilpiepirate.org/bcachefsLinus Torvalds
Pull more bcachefs updates from Kent Overstreet:

 "Assorted minor syzbot fixes, and for bigger stuff:

  Fix two disk accounting rewrite bugs:

   - Disk accounting keys use the version field of bkey so that journal replay can tell which updates have been applied to the btree. This is set in the transaction commit path, after we've gotten our journal reservation (and our time ordering), but the BCH_TRANS_COMMIT_skip_accounting_apply flag that journal replay uses was incorrectly skipping this for new updates generated prior to journal replay. This fixes the underlying cause of an assertion pop in disk_accounting_read.

   - A couple of fixes for disk accounting + device removal. Checking if accounting replicas entries were marked in the superblock was being done at the wrong point, when deltas in the journal could still zero them out, and then additionally we'd try to add a missing replicas entry to the superblock without checking if it referred to an invalid (removed) device.

  A whole slew of repair fixes:

   - fix infinite loop in propagate_key_to_snapshot_leaves(); this fixes an infinite loop when repairing a filesystem with many snapshots
   - fix incorrect transaction restart handling leading to occasional "fsck counted ..." warnings
   - fix warning in __bch2_fsck_err() for bkey fsck errors
   - check_inode() in fsck now correctly checks if the filesystem was clean
   - there shouldn't be pending logged ops if the fs was clean; we now check for this
   - remove_backpointer() doesn't remove a dirent that doesn't actually point to the inode
   - many more fsck errors are AUTOFIX"

* tag 'bcachefs-2024-09-28' of git://evilpiepirate.org/bcachefs: (35 commits)
  bcachefs: check_subvol_path() now prints subvol root inode
  bcachefs: remove_backpointer() now checks if dirent points to inode
  bcachefs: dirent_points_to_inode() now warns on mismatch
  bcachefs: Fix lost wake up
  bcachefs: Check for logged ops when clean
  bcachefs: BCH_FS_clean_recovery
  bcachefs: Convert disk accounting BUG_ON() to WARN_ON()
  bcachefs: Fix BCH_TRANS_COMMIT_skip_accounting_apply
  bcachefs: Check for accounting keys with bversion=0
  bcachefs: rename version -> bversion
  bcachefs: Don't delete unlinked inodes before logged op resume
  bcachefs: Fix BCH_SB_ERRS() so we can reorder
  bcachefs: Fix fsck warnings from bkey validation
  bcachefs: Move transaction commit path validation to as late as possible
  bcachefs: Fix disk accounting attempting to mark invalid replicas entry
  bcachefs: Fix unlocked access to c->disk_sb.sb in bch2_replicas_entry_validate()
  bcachefs: Fix accounting read + device removal
  bcachefs: bch_accounting_mode
  bcachefs: fix transaction restart handling in check_extents(), check_dirents()
  bcachefs: kill inode_walker_entry.seen_this_pos
  ...
2024-09-27bcachefs: Fix lost wake upAlan Huang
If the reader acquires the read lock and the writer then enters the slow path while the reader proceeds to the unlock path, the following scenario can occur without this change:

  writer: pcpu_read_count(lock) returns 1 (so __do_six_trylock() will return 0)
  reader: this_cpu_dec(*lock->readers)
  reader: smp_mb()
  reader: state = atomic_read(&lock->state) (there is no waiting flag set)
  writer: six_set_bitmask()

The writer will then sleep forever.

Signed-off-by: Alan Huang <mmpgouride@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
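The general shape of the fix is the classic "publish the wait bit, then re-check" ordering on the writer side, mirroring the reader's decrement / smp_mb() / check-for-waiters sequence. A minimal kernel-style sketch of that pattern; the helper names pcpu_read_count(), six_set_bitmask() and __do_six_trylock() come from the trace above, while the flag name and surrounding control flow are illustrative assumptions, not the literal patch:

  /* writer slow path, percpu reader mode */
  six_set_bitmask(lock, SIX_LOCK_WAITING_write);  /* publish that a writer is waiting */
  smp_mb__after_atomic();                         /* flag store ordered before the count re-read */

  /*
   * Re-check after publishing the flag: if the last reader dropped its
   * count before it could see the flag, its wakeup was never sent, so
   * sleeping on the strength of the earlier failed check would hang.
   */
  if (!pcpu_read_count(lock) &&
      __do_six_trylock(lock, SIX_LOCK_write, current, false))
          return;                                 /* got the lock, don't sleep */

  /* otherwise it is now safe to fall through to the wait loop */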
2024-08-07sched/rt: Rename realtime_{prio, task}() to rt_or_dl_{prio, task}()Qais Yousef
Some find the name realtime overloaded. Use rt_or_dl() as an alternative, hopefully better, name. Suggested-by: Daniel Bristot de Oliveira <bristot@redhat.com> Signed-off-by: Qais Yousef <qyousef@layalina.io> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20240610192018.1567075-4-qyousef@layalina.io
2024-08-07sched/rt: Clean up usage of rt_task()Qais Yousef
rt_task() checks if a task has RT priority. But depending on your dictionary, this could mean it belongs to the RT class, or that it is a 'realtime' task, which includes the RT and DL classes.

Since this has already caused some confusion in discussion [1], a cleanup seemed due. Define rt_task() to mean tasks that belong to the RT class: make sure it returns true only for the RT class, audit the users, and replace the ones that required the old behavior with the new realtime_task(), which returns true for the RT and DL classes. Introduce a similar realtime_prio() to create the same distinction relative to rt_prio(), and update the users that required the old behavior to use the new function.

Move MAX_DL_PRIO to prio.h so it can be used in the new definitions. Document the functions to make the difference between them more obvious; PI-boosted tasks are a factor that must be taken into account when choosing which function to use.

Rename task_is_realtime() to realtime_task_policy() as the old name is confusing next to the new realtime_task().

No functional changes were intended.

[1] https://lore.kernel.org/lkml/20240506100509.GL40213@noisy.programming.kicks-ass.net/

Signed-off-by: Qais Yousef <qyousef@layalina.io> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Phil Auld <pauld@redhat.com> Reviewed-by: "Steven Rostedt (Google)" <rostedt@goodmis.org> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://lore.kernel.org/r/20240610192018.1567075-2-qyousef@layalina.io
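The resulting distinction boils down to which priority range each helper accepts. A rough sketch of the intended semantics, assuming the usual kernel priority layout where deadline tasks sit below MAX_DL_PRIO and RT tasks occupy [MAX_DL_PRIO, MAX_RT_PRIO); the real definitions live in the scheduler headers and may differ in detail:

  static inline bool dl_prio(int prio)
  {
          return prio < MAX_DL_PRIO;                        /* deadline class only */
  }

  static inline bool rt_prio(int prio)
  {
          return prio < MAX_RT_PRIO && prio >= MAX_DL_PRIO; /* RT class only, per this cleanup */
  }

  static inline bool realtime_prio(int prio)                /* later renamed rt_or_dl_prio() */
  {
          return prio < MAX_RT_PRIO;                        /* RT or deadline */
  }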
2024-01-01bcachefs: six locks: Simplify optimistic spinningKent Overstreet
osq lock maintainers don't want it to be used outside of kernel/locking/ - but we can do better. Since we have lock handoff signalled via waitlist entries, there's no reason for optimistic spinning to have to look at the lock at all, aside from checking the lock owner; we can just spin watching our own waitlist entry. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
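In other words, the spin loop only needs to watch two things: whether the current owner is still running on a CPU, and whether our own waitlist entry has been handed the lock. A simplified sketch of that idea; wait->lock_acquired is the handoff flag referenced in the lock-handoff commits further down this log, while the function name and remaining details are illustrative:

  static bool six_optimistic_spin(struct six_lock *lock, struct six_lock_waiter *wait)
  {
          while (!READ_ONCE(wait->lock_acquired)) {        /* the waker flips this for us */
                  struct task_struct *owner = READ_ONCE(lock->owner);

                  if (owner && !owner_on_cpu(owner))
                          return false;                    /* owner went to sleep: stop spinning */
                  if (need_resched())
                          return false;                    /* we should yield the CPU ourselves */
                  cpu_relax();
          }

          return true;                                     /* lock was handed to us while spinning */
  }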
2023-11-14bcachefs: six locks: Fix lost wakeupKent Overstreet
In percpu reader mode, trylock() for read had a lost wakeup: on failure to get the lock, we may have caused a writer to fail to get the lock, because we temporarily elevated the reader count. We need to check for waiters after decrementing the read count - not before. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
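The ordering requirement is the usual store-buffering one: the failed trylock must make its reader-count decrement visible before it looks for waiters to wake, so the barrier has to sit between the two. A condensed sketch of the failure path in percpu reader mode; six_lock_wakeup() is the helper named elsewhere in this log, while the function name, state bit and exact arguments here are illustrative assumptions:

  static bool six_trylock_read_pcpu(struct six_lock *lock)
  {
          u32 old;

          this_cpu_inc(*lock->readers);            /* optimistically take a read reference */
          smp_mb();
          old = atomic_read(&lock->state);

          if (!(old & SIX_LOCK_HELD_write))
                  return true;                     /* no writer: we hold the read lock */

          /* a writer holds the lock: back out, then check for anyone we blocked */
          this_cpu_dec(*lock->readers);
          smp_mb();                                /* decrement visible before reading waiter bits */
          six_lock_wakeup(lock, atomic_read(&lock->state), SIX_LOCK_write);
          return false;
  }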
2023-10-30six locks: Lock contended tracepointsKent Overstreet
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22bcachefs: Fix W=12 build errorsKent Overstreet
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22bcachefs: six locks: Guard against wakee exiting in __six_lock_wakeup()Kent Overstreet
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22bcachefs: six locks: Fix missing barrier on wait->lock_acquiredKent Overstreet
Six locks do lock handoff via the wakeup path: the thread doing the wakeup also takes the lock on behalf of the waiter, which means the waiter only has to look at its waitlist entry, and doesn't have to touch the lock cacheline while another thread is using it. Linus noticed that this needs a real barrier, which this patch fixes. Also add a comment for the should_sleep_fn() error path. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: linux-bcachefs@vger.kernel.org Cc: linux-kernel@vger.kernel.org
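Because the waiter never touches lock->state itself, the only thing ordering its critical section against the waker's work is the store/load of wait->lock_acquired, so that pair needs release/acquire semantics (or an equivalent barrier). One way to express the required ordering, simplified: the trylock helper name follows the trace in the 2024 lost-wakeup entry above, the waiter-entry field names are assumptions, and the wait-list handling and task refcounting from the related commits are omitted:

  /* waker side, in the wakeup path: take the lock on the waiter's behalf */
  if (__do_six_trylock(lock, w->lock_want, w->task, false)) {
          struct task_struct *task = w->task;

          /*
           * Publish the handoff: every store done while acquiring the
           * lock above must be visible before the waiter observes
           * lock_acquired == true.
           */
          smp_store_release(&w->lock_acquired, true);
          wake_up_process(task);
  }

  /* waiter side, in the slowpath */
  for (;;) {
          set_current_state(TASK_UNINTERRUPTIBLE);
          if (smp_load_acquire(&wait->lock_acquired))
                  break;
          schedule();
  }
  __set_current_state(TASK_RUNNING);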
2023-10-22six locks: Disable percpu read lock mode in userspaceKent Overstreet
When running in userspace, we currently don't have a real percpu implementation available - at least in bcachefs-tools, which is where this code is currently used in userspace. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Use atomic_try_cmpxchg_acquire()Kent Overstreet
This switches to a newer cmpxchg variant which updates @old for us on failure, simplifying the cmpxchg loops a bit and supposedly generating better code. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
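The difference is purely in how the retry loop is written: atomic_try_cmpxchg_acquire() returns a bool and, on failure, writes the value it observed back through @old, so the explicit re-read and compare disappear. A generic illustration, not the actual six.c loop; the bit name is only for the example:

  u32 old, new, v;

  /* before: classic cmpxchg loop */
  old = atomic_read(&lock->state);
  for (;;) {
          new = old | SIX_LOCK_HELD_intent;
          v = atomic_cmpxchg_acquire(&lock->state, old, new);
          if (v == old)
                  break;          /* succeeded */
          old = v;                /* raced: retry with the value we actually saw */
  }

  /* after: @old is updated for us on failure */
  old = atomic_read(&lock->state);
  do {
          new = old | SIX_LOCK_HELD_intent;
  } while (!atomic_try_cmpxchg_acquire(&lock->state, &old, new));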
2023-10-22six locks: Fix an uninitialized varKent Overstreet
In the conversion to atomic_t, six_lock_slowpath() ended up calling six_lock_wakeup() in the failure path with a state variable that was never initialized - whoops. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Delete redundant commentKent Overstreet
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Tiny bit more tidyingKent Overstreet
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Seq now only incremented on unlockKent Overstreet
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Split out seq, use atomic_t instead of atomic64_tKent Overstreet
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Single instance of six_lock_valsKent Overstreet
Since we're not generating different versions of the lock functions for each lock type, the constant propagation we were trying to do before is no longer useful - this is now a small code size decrease. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six_locks: Kill test_bit()/set_bit() usageKent Overstreet
This deletes the crazy cast-atomic-to-unsigned-long, and replaces them with atomic_and() and atomic_or(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
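That is, instead of casting the atomic counter to an unsigned long * so that set_bit()/clear_bit()/test_bit() can poke at it, the flag updates go through the atomic API directly. Roughly, with an illustrative mask name (and note that at this point in the series the counter was still 64-bit, so the atomic64_*() variants would apply equally):

  atomic_or(SIX_LOCK_WAITING_read, &lock->state);          /* set a waiting flag */
  atomic_and(~SIX_LOCK_WAITING_read, &lock->state);        /* clear it again */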
2023-10-22six locks: lock->state.seq no longer used for write lock heldKent Overstreet
lock->state.seq is shortly being moved out of lock->state, to kill the dependency on atomic64; in preparation for that, the write-lock bit now indicates "write locked" directly, instead of that being inferred from the sequence number. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Simplify six_relock()Kent Overstreet
The next patch is going to move lock->seq out of lock->state. This replaces six_relock() with a much simpler implementation based on trylock. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
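With the sequence number decoupled from the rest of the state, relock reduces to "trylock, then confirm the sequence number didn't change underneath us". A sketch of that shape, using the _ip variants introduced earlier in this series; treat it as the idea rather than the exact code:

  bool six_relock_ip(struct six_lock *lock, enum six_lock_type type,
                     unsigned seq, unsigned long ip)
  {
          if (lock->seq != seq || !six_trylock_ip(lock, type, ip))
                  return false;

          if (lock->seq != seq) {          /* raced with a write lock between the checks */
                  six_unlock_ip(lock, type, ip);
                  return false;
          }

          return true;
  }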
2023-10-22six locks: Improve spurious wakeup handling in pcpu reader modeKent Overstreet
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Documentation, renamingKent Overstreet
- Expanded and revamped overview documentation in six.h, giving an overview of all features
- docbook-comments for all external interfaces
- Rename some functions for simplicity, i.e. six_lock_ip_type() -> six_lock_ip()

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Kill six_lock_state unionKent Overstreet
As suggested by Linus, this drops the six_lock_state union in favor of raw bitmasks.

On the one hand, bitfields give more type-level structure to the code. However, a significant amount of the code was working with six_lock_state as a u64/atomic64_t, and the conversions from the bitfields to the u64 were deemed a bit too out-there.

More significantly, because bitfield order is poorly defined (#ifdef __LITTLE_ENDIAN_BITFIELD can be used, but is gross), incrementing the sequence number would overflow into the rest of the bitfield if the compiler didn't put the sequence number at the high end of the word.

The new code is a bit saner when we're on an architecture without real atomic64_t support - all accesses to lock->state now go through atomic64_*() operations. On architectures with real atomic64_t support, we additionally use atomic bit ops for setting/clearing individual bits.

Text size: 7467 bytes -> 4649 bytes - compilers still suck at bitfields.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Simplify dispatchKent Overstreet
Originally, we used inlining/flattening to cause the compiler to generate different versions of lock/trylock/relock/unlock for each lock type - read, intent, and write. This made the individual functions smaller and let the compiler eliminate table lookups: however, as the code has gotten more complicated these optimizations have gotten less worthwhile, and all the tricky inlining and dispatching made the code less readable. Text size: 11015 bytes -> 7467 bytes, and benchmarks show no loss of performance. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Centralize setting of waiting bitKent Overstreet
Originally, the waiting bit was always set by trylock() on failure: however, it's now set by __six_lock_type_slowpath(), with wait_lock held - which is the more correct place to do it. That made setting the waiting bit in trylock redundant, so this patch deletes that. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Remove hacks for percpu mode lost wakeupKent Overstreet
The lost wakeup bug hasn't been observed in a while, and we're trying to provoke it and determine if it still exists. This patch removes some defenses that were added to attempt to track it down; if it still exists, this should make it easier to see it. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Kill six_lock_pcpu_(alloc|free)Kent Overstreet
six_lock_pcpu_alloc() is an unsafe interface: it's not safe to allocate or free the percpu reader count on an existing lock that's in use; the only safe time to allocate percpu readers is when the lock is first being initialized. This patch adds a flags parameter to six_lock_init(), and instead of six_lock_pcpu_free() we now expose six_lock_exit(), which does the same thing but is less likely to be misused. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
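Usage-wise this means percpu reader mode is chosen once, at initialization time, and torn down together with the lock. A small usage sketch; the SIX_LOCK_INIT_PCPU flag name is an assumption about the eventual interface and error handling is elided:

  struct six_lock l;

  six_lock_init(&l, SIX_LOCK_INIT_PCPU);   /* percpu reader counts allocated up front */
  /* ... use the lock ... */
  six_lock_exit(&l);                       /* frees the percpu reader counts, if any */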
2023-10-22six locks: six_lock_readers_add()Kent Overstreet
This moves a helper out of the bcachefs code that shouldn't have been there, since it touches six lock internals. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: be more careful about lost wakeupsKent Overstreet
This is a workaround for a lost wakeup bug we've been seeing - we still need to discover the actual bug. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Simplify six_lock_counts()Kent Overstreet
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Improved optimistic spinningKent Overstreet
This adds a threshold for the maximum spin time, similar to the rwsem code, and a flag to the lock itself indicating when we've spun too long so other threads also refrain from spinning. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Expose tracepoint IPKent Overstreet
This adds _ip variations of the various lock functions that allow an IP to be passed in, which is used by lockstat. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Wakeup now takes lock on behalf of waiterKent Overstreet
This brings back an important optimization, to avoid touching the wait lists an extra time, while preserving the property that a thread is on a lock waitlist iff it is waiting - it is never removed from the waitlist until it has the lock. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Fix a lost wakeupKent Overstreet
There was a lost wakeup between a read unlock in percpu mode and a write lock. The unlock path unlocks, then executes a barrier, then checks for waiters; correspondingly, the lock side should set the wait bit and execute a barrier, then attempt to take the lock. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Enable lockdepKent Overstreet
Now that we have lockdep_set_no_check_recursion(), we can enable lockdep checking. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Add start_time to six_lock_waiterKent Overstreet
This is needed by the cycle detector in bcachefs - we need a way to iterate over waitlist entries while dropping and retaking the waitlist lock. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: six_lock_waiter()Kent Overstreet
This allows passing in the wait list entry - to be used for a deadlock cycle detector. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Simplify wait listsKent Overstreet
This switches to a single list of waiters, instead of separate lists for read and intent, and switches write locks to also use the wait lists instead of being handled differently. Also, removal from the wait list is now done by the process waiting on the lock, not the process doing the wakeup. This is needed for the new deadlock cycle detector - we need tasks to stay on the waitlist until they've successfully acquired the lock. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
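Concretely, a waiter entry just records who is waiting and for what, and it stays on the single lock->wait_list until the waiter actually holds the lock. A rough sketch of what such an entry carries, pieced together from the fields referenced across these commits (lock_acquired, start_time); the exact layout is in six.h and may differ:

  struct six_lock_waiter {
          struct list_head        list;           /* one list for read, intent and write waiters */
          struct task_struct      *task;          /* who to wake */
          enum six_lock_type      lock_want;      /* which lock type is being waited for */
          bool                    lock_acquired;  /* set by the waker doing the handoff */
          u64                     start_time;     /* added by the start_time commit above */
  };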
2023-10-22six locks: Delete six_lock_pcpu_free_rcu()Kent Overstreet
Didn't have any users, and wasn't a good idea to begin with - delete it. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-22six locks: Improve six_lock_countKent Overstreet
six_lock_count now also counts whether a write lock is held, and this patch additionally makes it correctly count six_lock->intent_lock_recurse. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-22bcachefs: Initial commitKent Overstreet
Initially forked from drivers/md/bcache, bcachefs is a new copy-on-write filesystem with every feature you could possibly want. Website: https://bcachefs.org Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>