|
As a minor clean-up to commit 1fc05c8d8426 ("gfs2: cancel timed-out
glock requests"), when a demote request is in progress in
finish_xmote(), there is no point in waking up the glock holder at the
head of the queue because the reply from dlm cannot be on behalf of that
glock holder.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Andrew Price <anprice@redhat.com>
|
|
As a follow-up to commit a431d49243a0 ("gfs2: Fix request cancelation
bug"), it turns out that any call to finish_xmote() is always followed
by a call to run_queue(), either
* directly when glock_work_func() calls finish_xmote() before calling
run_queue(), or
* indirectly when do_xmote() calls finish_xmote() before calling
gfs2_glock_queue_work(), which queues a call to glock_work_func() in
work queue context,
so remove the code in finish_xmote() that duplicates the functionality
of run_queue().
In addition, the code this commit removes is missing a check for the
GLF_DEMOTE flag, which indicates that no further promotes should be
performed; if that code weren't being removed, that check would have to
be added.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Andrew Price <anprice@redhat.com>
|
|
When gdlm_ast() is called with a non-zero status code, this means that
the requested operation did not succeed and the current lock state
didn't change. Turn that into a non-zero LM_OUT_* status code (one with
(ret & ~LM_OUT_ST_MASK) != 0) instead of pretending that dlm returned
the current lock state.
That way, we can easily change finish_xmote() to only update
gl->gl_state when the state has actually changed.
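For illustration, a caller can tell the two cases apart roughly as in
the following self-contained sketch (not part of the patch; the LM_OUT_*
values are copied from fs/gfs2/lm_interface.h, and the helper name is
made up):
	#include <stdbool.h>
	/* The two low bits encode a lock state; higher bits encode
	 * errors and cancelation (per fs/gfs2/lm_interface.h). */
	#define LM_OUT_ST_MASK	0x00000003
	#define LM_OUT_CANCELED	0x00000008
	#define LM_OUT_ERROR	0x00000004
	/* Hypothetical helper: the reply carries a new lock state only
	 * if no bits outside the state mask are set. */
	static bool reply_is_new_state(unsigned int ret)
	{
		return (ret & ~LM_OUT_ST_MASK) == 0;
	}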
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Andrew Price <anprice@redhat.com>
|
|
Commit 6cb3b1c2df87 changed how finish_xmote() clears the GLF_LOCK flag,
but it failed to adjust the equivalent code in do_xmote(). Fix that.
Fixes: 6cb3b1c2df87 ("gfs2: Fix additional unlikely request cancelation race")
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Currently, sdp->sd_aspace and the per-inode metadata address spaces use
sb->s_bdev->bd_mapping->host as their ->host; folios in those address
spaces will thus appear to be on bdev rather than on gfs2 filesystems.
This is a problem because gfs2 doesn't support cgroup writeback
(SB_I_CGROUPWB), but bdev does.
Fix that by using a "dummy" gfs2 inode as ->host in those address
spaces. When coming from a folio, folio->mapping->host->i_sb will then
be a gfs2 super block and the SB_I_CGROUPWB flag will not be set in
sb->s_iflags.
Based on a previous version from Bob Peterson from several years ago.
Thanks to Tetsuo Handa, Jan Kara, and Rafael Aquini for helping figure
this out.
Fixes: aaa2cacf8184 ("writeback: add lockdep annotation to inode_to_wb()")
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2
Pull gfs2 updates from Andreas Gruenbacher:
- Fix two bugs related to locking request cancelation (locking request
being retried instead of canceled; canceling the wrong locking
request)
- Prevent a race between inode creation and deferred delete analogous
to commit ffd1cf0443a2 from 6.13. This now allows us to further simplify
gfs2_evict_inode() without introducing mysterious problems
- When an inode delete should be verified / retried "later" but that
isn't possible, skip the delete instead of carrying it out
immediately. This broke in 6.13
- More folio conversions from Matthew Wilcox (plus a fix from Dan
Carpenter)
- Various minor fixes and cleanups
* tag 'gfs2-for-6.15' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2: (22 commits)
gfs2: some comment clarifications
gfs2: Fix a NULL vs IS_ERR() bug in gfs2_find_jhead()
gfs2: Convert gfs2_meta_read_endio() to use a folio
gfs2: Convert gfs2_end_log_write_bh() to work on a folio
gfs2: Convert gfs2_find_jhead() to use a folio
gfs2: Convert gfs2_jhead_pg_srch() to gfs2_jhead_folio_search()
gfs2: Use b_folio in gfs2_check_magic()
gfs2: Use b_folio in gfs2_submit_bhs()
gfs2: Use b_folio in gfs2_trans_add_meta()
gfs2: Use b_folio in gfs2_log_write_bh()
gfs2: skip if we cannot defer delete
gfs2: remove redundant warnings
gfs2: minor evict fix
gfs2: Prevent inode creation race (2)
gfs2: Fix additional unlikely request cancelation race
gfs2: Fix request cancelation bug
gfs2: Check for empty queue in run_queue
gfs2: Remove more dead code in add_to_queue
gfs2: Replace GIF_DEFER_DELETE with GLF_DEFER_DELETE
gfs2: glock holder GL_NOPID fix
...
|
|
In glock_set_object() and glock_clear_object(), there is no need to
print the glock type and number when we dump the entire glock, anyway.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
In gfs2_try_evict(), we try grabbing the inode to evict, we try to evict
it, and then we try grabbing it again to see if it still exists. There
is no guarantee that we will end up with the same inode both times; the
inode validity check that commit ffd1cf0443a2 ("gfs2: Prevent inode
creation race") added to the first grab is actually needed both times.
(To avoid code duplication, add a grab_existing_inode() helper.)
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
In gfs2_glock_dq(), we must drop the glock spin lock before calling
->lm_cancel, but this means that in the meantime, the operation we are
trying to cancel could complete. If the operation completes
unsuccessfully, another holder can end up at the head of the queue and
another ->lm_lock operation can get started. In this case, we would end
up canceling that second operation by accident.
To prevent that, introduce a new GLF_CANCELING flag. Set that flag in
gfs2_glock_dq() when trying to cancel an operation. When seeing that
flag, finish_xmote() will then keep the GLF_LOCK flag set to prevent
other glock operations from taking place. gfs2_glock_dq() then
completes the cancelation attempt by clearing GLF_LOCK and
GLF_CANCELING.
In addition, add a missing GLF_DEMOTE_IN_PROGRESS check in
gfs2_glock_dq() to make sure that we won't accidentally cancel a demote
request.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
In finish_xmote(), when a locking request is canceled, the corresponding
holder is moved to the tail of the holders list instead of being
dequeued immediately. When there is only a single holder, the canceled
locking request is then immediately repeated. This makes no sense; it
looks like another remnant of LM_FLAG_PRIORITY support.
Instead, dequeue canceled holders and proceed with the next holder in
finish_xmote(). We can then easily detect in gfs2_glock_dq() when a
holder has been canceled.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
In run_queue(), check if the queue of pending requests is empty instead
of blindly assuming that it won't be.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Remove some more dead code in add_to_queue() that commit 0b93bac2271e
("gfs2: Remove LM_FLAG_PRIORITY flag") has rendered obsolete. This is a
continuation of commit 3302764610057 ("gfs2: remove dead code in
add_to_queue"); no functional change.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Having this flag attached to the iopen glock instead of the inode is
much simpler; it eliminates a potential weird race in gfs2_try_evict().
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Glocks are always actively acquired by processes, but as indicated by
the GL_NOPID holder flag, some of them are then associated with objects
like cached inodes rather than the process that acquired them. As such,
for those glock holders, it makes little sense to dump which processes
originally acquired them.
Therefore, gfs2 is trying to hide the identity of the processes that
acquired those glocks. The code for doing that is incorrect though, so
fix it.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Introduce a new GLF_PENDING_REPLY flag to indicate that a reply from DLM
is expected. Include that flag in glock dumps to show more clearly
what's going on. (When the GLF_PENDING_REPLY flag is set, the GLF_LOCK
flag will also be set but the GLF_LOCK flag alone isn't sufficient to
tell that we are waiting for a DLM reply.)
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
All users of lockref_init() now initialize the count to 1, so hardcode
that and remove the count argument.
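After this change, the helper boils down to something like the following
sketch, shown only for illustration (mirroring what
include/linux/lockref.h is expected to contain):
	static inline void lockref_init(struct lockref *lockref)
	{
		spin_lock_init(&lockref->lock);
		lockref->count = 1;	/* count is now always 1 */
	}
Call sites then simply drop the count argument, e.g.
lockref_init(&gl->gl_lockref) instead of lockref_init(&gl->gl_lockref, 1).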
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Link: https://lore.kernel.org/r/20250130135624.1899988-4-agruenba@redhat.com
Signed-off-by: Christian Brauner <brauner@kernel.org>
|
|
Move the initialization of gl_lockref from gfs2_init_glock_once() to
gfs2_glock_get(). This allows lockref_init() to be used there.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Link: https://lore.kernel.org/r/20250130135624.1899988-2-agruenba@redhat.com
Signed-off-by: Christian Brauner <brauner@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2
Pull gfs2 updates from Andreas Gruenbacher:
- Fix the code that cleans up left-over unlinked files.
Various fixes and minor improvements in deleting files cached or held
open remotely.
- Simplify the use of dlm's DLM_LKF_QUECVT flag.
- A few other minor cleanups.
* tag 'gfs2-for-6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2: (21 commits)
gfs2: Prevent inode creation race
gfs2: Only defer deletes when we have an iopen glock
gfs2: Simplify DLM_LKF_QUECVT use
gfs2: gfs2_evict_inode clarification
gfs2: Make gfs2_inode_refresh static
gfs2: Use get_random_u32 in gfs2_orlov_skip
gfs2: Randomize GLF_VERIFY_DELETE work delay
gfs2: Use mod_delayed_work in gfs2_queue_try_to_evict
gfs2: Update to the evict / remote delete documentation
gfs2: Call gfs2_queue_verify_delete from gfs2_evict_inode
gfs2: Clean up delete work processing
gfs2: Minor delete_work_func cleanup
gfs2: Return enum evict_behavior from gfs2_upgrade_iopen_glock
gfs2: Rename dinode_demise to evict_behavior
gfs2: Rename GIF_{DEFERRED -> DEFER}_DELETE
gfs2: Faster gfs2_upgrade_iopen_glock wakeups
KMSAN: uninit-value in inode_go_dump (5)
gfs2: Fix unlinked inode cleanup
gfs2: Allow immediate GLF_VERIFY_DELETE work
gfs2: Initialize gl_no_formal_ino earlier
...
|
|
When a request to evict an inode comes in over the network, we are
trying to grab an inode reference via the iopen glock's gl_object
pointer. There is a very small probability that by the time such a
request comes in, inode creation hasn't completed and the I_NEW flag is
still set. To deal with that, wait for the inode and then check if
inode creation was successful.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Randomize the delay of GLF_VERIFY_DELETE work. This avoids thundering
herd problems when multiple nodes schedule that kind of work in response
to an inode being unlinked remotely.
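A sketch of the general idea (the base delay and the amount of jitter
here are made up, not necessarily what the patch uses):
	/* wq: whichever workqueue the delete work is queued on */
	unsigned long delay = 5 * HZ + get_random_u32_below(5 * HZ);
	queue_delayed_work(wq, &gl->gl_delete, delay);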
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
In the unlikely case that we're trying to queue GLF_TRY_TO_EVICT work
for an inode that already has GLF_VERIFY_DELETE work queued, we want to
make sure that the GLF_TRY_TO_EVICT work gets scheduled immediately
instead of waiting for the delayed work timer to expire.
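mod_delayed_work() fits here because it re-arms a delayed work item that
is already queued; roughly (a sketch, not the exact patch):
	/* Promote already-queued delete work to run right away;
	 * wq again stands for the workqueue the work runs on. */
	mod_delayed_work(wq, &gl->gl_delete, 0);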
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Try to be a bit more clear and remove some duplications. It turns out
that we cannot actually get rid of the verification step eventually, so
remove the comment saying that we can.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Move calls to gfs2_queue_verify_delete() into gfs2_evict_inode().
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Function delete_work_func() previously assumed that the
GLF_TRY_TO_EVICT and GLF_VERIFY_DELETE flags would never both be set at
the same time, but there probably are races in which that can happen,
so handle that case correctly.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Move those definitions into the scope in which they are used.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
The GIF_DEFERRED_DELETE flag indicates an action that gfs2_evict_inode()
should take, so rename the flag to GIF_DEFER_DELETE to clarify.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Move function needs_demote() to glock.h and rename it to
glock_needs_demote(). In handle_callback(), wake up the glock when
setting the GLF_PENDING_DEMOTE flag as well. (Setting the GLF_DEMOTE
flag already triggered a wake-up.)
With that, check for glock_needs_demote() in gfs2_upgrade_iopen_glock()
to wake up when either of those flags is set for the inode glock: the
faster we can react to contention, the better.
The GLF_PENDING_DEMOTE flag is only used for inode glocks (see
gfs2_glock_cb()) so it's okay to only check for the GLF_DEMOTE flag in
gfs2_drop_inode(). Still, using glock_needs_demote() there as well
makes the code a little easier to read.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Once upon a time, predecessors of those used to do file lookup
without bumping a refcount, provided that caller held rcu_read_lock()
across the lookup and whatever it wanted to read from the struct
file found. When struct file allocation switched to SLAB_TYPESAFE_BY_RCU,
that stopped being feasible and these primitives started to bump the
file refcount for lookup result, requiring the caller to call fput()
afterwards.
But that turned them pointless - e.g.
	rcu_read_lock();
	file = lookup_fdget_rcu(fd);
	rcu_read_unlock();
is equivalent to
	file = fget_raw(fd);
and all callers of lookup_fdget_rcu() are of that form. Similarly,
task_lookup_fdget_rcu() calls can be replaced with calling fget_task().
task_lookup_next_fdget_rcu() doesn't have direct counterparts, but
its callers would be happier if we replaced it with an analogue that
deals with RCU internally.
Reviewed-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Before commit f0e56edc2ec7 ("gfs2: Split the two kinds of glock "delete"
work"), function delete_work_func() was used to trigger the eviction of
in-memory inodes from remote as well as deleting unlinked inodes at a
later point. These two kinds of work were then split into two kinds of
work, and the two places in the code were deferred deletion of inodes is
required accidentally ended up queuing the wrong kind of work. This
caused unlinked inodes to be left behind, which could in the worst case
fill up filesystems and require a filesystem check to recover.
Fix that by queuing the right kind of work in try_rgrp_unlink() and
gfs2_drop_inode().
Fixes: f0e56edc2ec7 ("gfs2: Split the two kinds of glock "delete" work")
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Add an argument to gfs2_queue_verify_delete() that allows it to queue
GLF_VERIFY_DELETE work for immediate execution. This is used in the
next patch.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Set gl_no_formal_ino of the iopen glock to the generation of the
associated inode (ip->i_no_formal_ino) as soon as that value is known.
This saves us from setting it later, possibly repeatedly, when queuing
GLF_VERIFY_DELETE work.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Rename the GLF_VERIFY_EVICT flag to GLF_VERIFY_DELETE: that flag
indicates that we want to delete an inode / verify that it has been
deleted.
To match, rename gfs2_queue_verify_evict() to
gfs2_queue_verify_delete().
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
When gfs2_fill_super() fails, destroy_workqueue() is called within
gfs2_gl_hash_clear(), and the subsequent code path calls
destroy_workqueue() on the same work queue again.
This issue can be fixed by setting the work queue pointer to NULL after
the first destroy_workqueue() call and checking for a NULL pointer
before attempting to destroy the work queue again.
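The resulting pattern looks roughly like this (sd_glock_wq is assumed
here as the name of the per-filesystem work queue pointer):
	/* first teardown path (gfs2_gl_hash_clear): */
	destroy_workqueue(sdp->sd_glock_wq);
	sdp->sd_glock_wq = NULL;
	/* later teardown path: */
	if (sdp->sd_glock_wq)
		destroy_workqueue(sdp->sd_glock_wq);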
Reported-by: syzbot+d34c2a269ed512c531b0@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=d34c2a269ed512c531b0
Fixes: 30e388d57367 ("gfs2: Switch to a per-filesystem glock workqueue")
Cc: stable@vger.kernel.org
Signed-off-by: Julian Sun <sunjunchao2870@gmail.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
In gfs2_glock_cb(), we only need to calculate the glock hold time for
inode glocks; the value is unused otherwise.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
The logic for determining when to demote a glock in glock_work_func(),
introduced in commit 7cf8dcd3b68a ("GFS2: Automatically adjust glock min
hold time"), doesn't make sense: inode glocks have a minimum hold time
that delays demotion, while all other glocks are expected to be demoted
immediately. However, instead of demoting non-inode glocks immediately,
glock_work_func() schedules further glock work to demote them.
Get rid of that unnecessary indirection.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
The demote_ok glock operation is only still used to prevent the inode
glocks of the "jindex" and "rindex" directories from getting recycled
while they are still referenced by sdp->sd_jindex and sdp->sd_rindex.
However, the LRU walking code will no longer recycle glocks which are
referenced, so the demote_ok glock operation is obsolete and can be
removed.
Each of a glock's holders in the gl_holders list is holding a reference
on the glock, so when the list of holders isn't empty in demote_ok(),
the existing reference count check will already prevent the glock from
getting released. This means that demote_ok() is obsolete as well.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
This reverts commit e7ccaf5fe1590667b3fa2f8df5c5ec9ba0dc5b85.
Before commit e7ccaf5fe159, every time a resource group glock was
dequeued by gfs2_glock_dq(), it was added to the glock LRU list even
though the glock was still referenced by the resource group and could
never be evicted, anyway. Commit e7ccaf5fe159 added a GLOF_LRU hack to
avoid that overhead for resource group glocks, and that hack was since
adopted for some other types of glocks as well.
We now no longer add glocks to the glock LRU list while they are still
referenced. This solves the underlying problem, and obsoletes the
GLOF_LRU hack.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
(cherry picked from commit 3e5257c810cba91e274d07f3db5cf013c7c830be)
|
|
In the current glock reference counting model, a bias of one is added to
a glock's refcount when it is locked (gl->gl_state != LM_ST_UNLOCKED).
A glock is removed from the lru_list when it is enqueued, and added back
when it is dequeued. This isn't a very appropriate model because most
glocks are held for long periods of time (for example, the inode "owns"
references to its inode and iopen glocks as long as the inode is cached
even when the glock state changes to LM_ST_UNLOCKED), and they can only
be freed when they are no longer referenced, anyway.
Fix this by getting rid of the refcount bias for locked glocks. That
way, we can use lockref_put_or_lock() to efficiently drop all but the
last glock reference, and put the glock onto the lru_list when the last
reference is dropped. When find_insert_glock() returns a reference to a
cached glock, it removes the glock from the lru_list.
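The put side then follows the usual lockref pattern; roughly (a sketch
of the pattern, not the actual gfs2 code):
	if (lockref_put_or_lock(&gl->gl_lockref))
		return;			/* not the last reference */
	/* Last reference: gl_lockref.lock is now held. */
	gl->gl_lockref.count--;
	/* ... either put the glock on the lru_list (if it is still
	 * locked) or free it (if it is unlocked) ... */
	spin_unlock(&gl->gl_lockref.lock);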
Dumping the "glocks" and "glstats" debugfs files also takes glock
references, but instead of removing the glocks from the lru_list in that
case as well, we leave them on the list. This ensures that dumping
those files won't perturb the order of the glocks on the lru_list.
In addition, when the last reference to an *unlocked* glock is dropped,
we immediately free it; this preserves the preexisting behavior. If it
later turns out that caching unlocked glocks is useful in some
situations, we can change the caching strategy.
It is currently unclear if a glock that has no active references can
have the GLF_LFLUSH flag set. To make sure that such a glock won't
accidentally be evicted due to memory pressure, we add a GLF_LFLUSH
check to gfs2_dispose_glock_lru().
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Switch to a per-filesystem glock workqueue. Additional workqueues are
cheap nowadays, and keeping separate workqueues allows the work of each
filesystem to be flushed without affecting the others.
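For instance, mount-time setup could look roughly like this (the queue
name, flags, and the sd_glock_wq field name are illustrative, not
necessarily what the patch uses):
	sdp->sd_glock_wq = alloc_workqueue("gfs2-glock/%s",
			WQ_MEM_RECLAIM | WQ_HIGHPRI, 0, sdp->sd_fsname);
	if (!sdp->sd_glock_wq)
		return -ENOMEM;
Unmount can then flush_workqueue() and destroy_workqueue() just that
filesystem's queue.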
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
When glocks cannot be freed for a long time, avoid the "task blocked for
more than N seconds" messages and report how many glocks are still
outstanding, instead.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Clean up the messy code in gfs2_glock_get(). No change in
functionality.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Invert the meaning of the GLF_INITIAL flag: right now, when GLF_INITIAL
is set, a DLM lock exists and we have a valid identifier for it; when
GLF_INITIAL is cleared, no DLM lock exists (yet). This is confusing.
In addition, it makes more sense to highlight the exceptional case
(i.e., no DLM lock exists yet) in glock dumps and trace points than to
highlight the common case.
To avoid confusion between the "old" and the "new" meaning of the flag,
use 'a' instead of 'I' to represent the flag.
For improved code consistency, check if the GLF_INITIAL flag is cleared
to determine whether a DLM lock exists instead of checking if the lock
identifier is non-zero.
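With the inverted meaning, such a check reads roughly like this (a
sketch for illustration only):
	if (test_bit(GLF_INITIAL, &gl->gl_flags)) {
		/* no DLM lock exists yet; request a new lock */
	} else {
		/* a DLM lock already exists; convert it */
	}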
Document what the flag is used for.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Function handle_callback() is used to request a glock demote. This
often happens in response to a conflicting remote locking request and
subsequent bast callback from DLM, but there are other reasons for
triggering a demote request as well, such as when trying to release a
glock in response to memory pressure. To clarify that, rename the
function to request_demote().
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
The GLF_FROZEN flag indicates that a reply to a DLM locking request has
been received, but should not be processed at this time. To clarify
that meaning, rename the flag to GLF_HAVE_FROZEN_REPLY.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
The GLF_REPLY_PENDING flag indicates to glock_work_func() that in
response to a locking request, DLM has sent a reply that needs to be
processed. A flag with that name could as well indicate that we are
waiting on a reply from DLM, however. To disambiguate these two cases,
rename the flag to GLF_HAVE_REPLY.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Rename the GLF_FREEING flag to GLF_UNLOCKED, and the ->go_free glock
operation to ->go_unlocked. This mechanism is used to wait for the
underlying DLM lock to be unlocked; being able to free the glock is a
consequence of the DLM lock being unlocked.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
The return statement at the end of run_queue() is useless.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
Function __gfs2_glock_dq() gets defined before it is used, so there is
no need for a separate function declaration.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull bdev bd_inode updates from Al Viro:
"Replacement of bdev->bd_inode with sane(r) set of primitives by me and
Yu Kuai"
* tag 'pull-bd_inode-1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
RIP ->bd_inode
dasd_format(): killing the last remaining user of ->bd_inode
nilfs_attach_log_writer(): use ->bd_mapping->host instead of ->bd_inode
block/bdev.c: use the knowledge of inode/bdev coallocation
gfs2: more obvious initializations of mapping->host
fs/buffer.c: massage the remaining users of ->bd_inode to ->bd_mapping
blk_ioctl_{discard,zeroout}(): we only want ->bd_inode->i_mapping here...
grow_dev_folio(): we only want ->bd_inode->i_mapping there
use ->bd_mapping instead of ->bd_inode->i_mapping
block_device: add a pointer to struct address_space (page cache of bdev)
missing helpers: bdev_unhash(), bdev_drop()
block: move two helpers into bdev.c
block2mtd: prevent direct access of bd_inode
dm-vdo: use bdev_nr_bytes(bdev) instead of i_size_read(bdev->bd_inode)
blkdev_write_iter(): saner way to get inode and bdev
bcachefs: remove dead function bdev_sectors()
ext4: remove block_device_ejected()
erofs_buf: store address_space instead of inode
erofs: switch erofs_bread() to passing offset instead of block number
|