path: root/fs
Age  Commit message  Author
2024-08-03  f2fs: fix start segno of large section  Sheng Yong
[ Upstream commit 8c409989678e92e4a737e7cd2bb04f3efb81071a ] get_ckpt_valid_blocks() checks valid ckpt blocks in current section. It counts all vblocks from the first to the last segment in the large section. However, START_SEGNO() is used to get the first segno in an SIT block. This patch fixes that to get the correct start segno. Fixes: 61461fc921b7 ("f2fs: fix to avoid touching checkpointed data in get_victim()") Signed-off-by: Sheng Yong <shengyong@oppo.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
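The fix described above boils down to deriving the section's first segment instead of the SIT block's first segment; a rough sketch of the hunk in get_ckpt_valid_blocks() (the macros are real f2fs segment.h macros, the surrounding code is abridged):

    unsigned int start_segno;

    /* before (wrong): first segno of the SIT block that contains segno */
    start_segno = START_SEGNO(segno);

    /* after (per the description above): first segno of the large section */
    start_segno = GET_SEG_FROM_SEC(sbi, GET_SEC_FROM_SEG(sbi, segno));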
2024-08-03  jfs: Fix array-index-out-of-bounds in diFree  Jeongjun Park
[ Upstream commit f73f969b2eb39ad8056f6c7f3a295fa2f85e313a ] Reported-by: syzbot+241c815bda521982cb49@syzkaller.appspotmail.com Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Jeongjun Park <aha310510@gmail.com> Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  f2fs: fix to truncate preallocated blocks in f2fs_file_open()  Chao Yu
[ Upstream commit 298b1e4182d657c3e388adcc29477904e9600ed5 ] chenyuwen reports a f2fs bug as below: Unable to handle kernel NULL pointer dereference at virtual address 0000000000000011 fscrypt_set_bio_crypt_ctx+0x78/0x1e8 f2fs_grab_read_bio+0x78/0x208 f2fs_submit_page_read+0x44/0x154 f2fs_get_read_data_page+0x288/0x5f4 f2fs_get_lock_data_page+0x60/0x190 truncate_partial_data_page+0x108/0x4fc f2fs_do_truncate_blocks+0x344/0x5f0 f2fs_truncate_blocks+0x6c/0x134 f2fs_truncate+0xd8/0x200 f2fs_iget+0x20c/0x5ac do_garbage_collect+0x5d0/0xf6c f2fs_gc+0x22c/0x6a4 f2fs_disable_checkpoint+0xc8/0x310 f2fs_fill_super+0x14bc/0x1764 mount_bdev+0x1b4/0x21c f2fs_mount+0x20/0x30 legacy_get_tree+0x50/0xbc vfs_get_tree+0x5c/0x1b0 do_new_mount+0x298/0x4cc path_mount+0x33c/0x5fc __arm64_sys_mount+0xcc/0x15c invoke_syscall+0x60/0x150 el0_svc_common+0xb8/0xf8 do_el0_svc+0x28/0xa0 el0_svc+0x24/0x84 el0t_64_sync_handler+0x88/0xec It is because inode.i_crypt_info is not initialized during below path: - mount - f2fs_fill_super - f2fs_disable_checkpoint - f2fs_gc - f2fs_iget - f2fs_truncate So, let's relocate truncation of preallocated blocks to f2fs_file_open(), after fscrypt_file_open(). Fixes: d4dd19ec1ea0 ("f2fs: do not expose unwritten blocks to user by DIO") Reported-by: chenyuwen <yuwen.chen@xjmz.com> Closes: https://lore.kernel.org/linux-kernel/20240517085327.1188515-1-yuwen.chen@xjmz.com Signed-off-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  nilfs2: handle inconsistent state in nilfs_btnode_create_block()  Ryusuke Konishi
commit 4811f7af6090e8f5a398fbdd766f903ef6c0d787 upstream. Syzbot reported that a buffer state inconsistency was detected in nilfs_btnode_create_block(), triggering a kernel bug. It is not appropriate to treat this inconsistency as a bug; it can occur if the argument block address (the buffer index of the newly created block) is a virtual block number and has been reallocated due to corruption of the bitmap used to manage its allocation state. So, modify nilfs_btnode_create_block() and its callers to treat it as a possible filesystem error, rather than triggering a kernel bug. Link: https://lkml.kernel.org/r/20240725052007.4562-1-konishi.ryusuke@gmail.com Fixes: a60be987d45d ("nilfs2: B-tree node cache") Signed-off-by: Ryusuke Konishi <konishi.ryusuke@gmail.com> Reported-by: syzbot+89cc4f2324ed37988b60@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=89cc4f2324ed37988b60 Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  f2fs: use meta inode for GC of COW file  Sunmin Jeong
commit f18d0076933689775fe7faeeb10ee93ff01be6ab upstream. In case of the COW file, new updates and GC writes are already separated to page caches of the atomic file and COW file. As some cases that use the meta inode for GC, there are some race issues between a foreground thread and GC thread. To handle them, we need to take care when to invalidate and wait writeback of GC pages in COW files as the case of using the meta inode. Also, a pointer from the COW inode to the original inode is required to check the state of original pages. For the former, we can solve the problem by using the meta inode for GC of COW files. Then let's get a page from the original inode in move_data_block when GCing the COW file to avoid race condition. Fixes: 3db1de0e582c ("f2fs: change the current atomic write way") Cc: stable@vger.kernel.org #v5.19+ Reviewed-by: Sungjong Seo <sj1557.seo@samsung.com> Reviewed-by: Yeongjin Gil <youngjin.gil@samsung.com> Signed-off-by: Sunmin Jeong <s_min.jeong@samsung.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  f2fs: use meta inode for GC of atomic file  Sunmin Jeong
commit b40a2b00370931b0c50148681dd7364573e52e6b upstream. The page cache of the atomic file keeps new data pages which will be stored in the COW file. It can also keep old data pages when GCing the atomic file. In this case, new data can be overwritten by old data if a GC thread sets the old data page as dirty after new data page was evicted. Also, since all writes to the atomic file are redirected to COW inodes, GC for the atomic file is not working well as below. f2fs_gc(gc_type=FG_GC) - select A as a victim segment do_garbage_collect - iget atomic file's inode for block B move_data_page f2fs_do_write_data_page - use dn of cow inode - set fio->old_blkaddr from cow inode - seg_freed is 0 since block B is still valid - goto gc_more and A is selected as victim again To solve the problem, let's separate GC writes and updates in the atomic file by using the meta inode for GC writes. Fixes: 3db1de0e582c ("f2fs: change the current atomic write way") Cc: stable@vger.kernel.org #v5.19+ Reviewed-by: Sungjong Seo <sj1557.seo@samsung.com> Reviewed-by: Yeongjin Gil <youngjin.gil@samsung.com> Signed-off-by: Sunmin Jeong <s_min.jeong@samsung.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  f2fs: fix return value of f2fs_convert_inline_inode()  Chao Yu
commit a8eb3de28e7a365690c61161e7a07a4fc7c60bbf upstream. If device is readonly, make f2fs_convert_inline_inode() return EROFS instead of zero, otherwise it may trigger panic during writeback of inline inode's dirty page as below: f2fs_write_single_data_page+0xbb6/0x1e90 fs/f2fs/data.c:2888 f2fs_write_cache_pages fs/f2fs/data.c:3187 [inline] __f2fs_write_data_pages fs/f2fs/data.c:3342 [inline] f2fs_write_data_pages+0x1efe/0x3a90 fs/f2fs/data.c:3369 do_writepages+0x359/0x870 mm/page-writeback.c:2634 filemap_fdatawrite_wbc+0x125/0x180 mm/filemap.c:397 __filemap_fdatawrite_range mm/filemap.c:430 [inline] file_write_and_wait_range+0x1aa/0x290 mm/filemap.c:788 f2fs_do_sync_file+0x68a/0x1ae0 fs/f2fs/file.c:276 generic_write_sync include/linux/fs.h:2806 [inline] f2fs_file_write_iter+0x7bd/0x24e0 fs/f2fs/file.c:4977 call_write_iter include/linux/fs.h:2114 [inline] new_sync_write fs/read_write.c:497 [inline] vfs_write+0xa72/0xc90 fs/read_write.c:590 ksys_write+0x1a0/0x2c0 fs/read_write.c:643 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f Cc: stable@vger.kernel.org Reported-by: syzbot+848062ba19c8782ca5c8@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-f2fs-devel/000000000000d103ce06174d7ec3@google.com Signed-off-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
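A sketch of the kind of check the commit adds near the start of f2fs_convert_inline_inode() (hedged; the exact placement and helpers used upstream may differ slightly):

    /* conversion needs to allocate and write blocks, which a read-only
     * image cannot do, so fail with EROFS rather than returning 0 and
     * letting writeback crash on the still-inline dirty page later */
    if (f2fs_hw_is_readonly(sbi) || f2fs_readonly(sbi->sb))
        return -EROFS;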
2024-08-03  f2fs: fix to don't dirty inode for readonly filesystem  Chao Yu
commit 192b8fb8d1c8ca3c87366ebbef599fa80bb626b8 upstream. syzbot reports f2fs bug as below: kernel BUG at fs/f2fs/inode.c:933! RIP: 0010:f2fs_evict_inode+0x1576/0x1590 fs/f2fs/inode.c:933 Call Trace: evict+0x2a4/0x620 fs/inode.c:664 dispose_list fs/inode.c:697 [inline] evict_inodes+0x5f8/0x690 fs/inode.c:747 generic_shutdown_super+0x9d/0x2c0 fs/super.c:675 kill_block_super+0x44/0x90 fs/super.c:1667 kill_f2fs_super+0x303/0x3b0 fs/f2fs/super.c:4894 deactivate_locked_super+0xc1/0x130 fs/super.c:484 cleanup_mnt+0x426/0x4c0 fs/namespace.c:1256 task_work_run+0x24a/0x300 kernel/task_work.c:180 ptrace_notify+0x2cd/0x380 kernel/signal.c:2399 ptrace_report_syscall include/linux/ptrace.h:411 [inline] ptrace_report_syscall_exit include/linux/ptrace.h:473 [inline] syscall_exit_work kernel/entry/common.c:251 [inline] syscall_exit_to_user_mode_prepare kernel/entry/common.c:278 [inline] __syscall_exit_to_user_mode_work kernel/entry/common.c:283 [inline] syscall_exit_to_user_mode+0x15c/0x280 kernel/entry/common.c:296 do_syscall_64+0x50/0x110 arch/x86/entry/common.c:88 entry_SYSCALL_64_after_hwframe+0x63/0x6b The root cause is: - do_sys_open - f2fs_lookup - __f2fs_find_entry - f2fs_i_depth_write - f2fs_mark_inode_dirty_sync - f2fs_dirty_inode - set_inode_flag(inode, FI_DIRTY_INODE) - umount - kill_f2fs_super - kill_block_super - generic_shutdown_super - sync_filesystem : sb is readonly, skip sync_filesystem() - evict_inodes - iput - f2fs_evict_inode - f2fs_bug_on(sbi, is_inode_flag_set(inode, FI_DIRTY_INODE)) : trigger kernel panic When we try to repair i_current_depth in readonly filesystem, let's skip dirty inode to avoid panic in later f2fs_evict_inode(). Cc: stable@vger.kernel.org Reported-by: syzbot+31e4659a3fe953aec2f4@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-f2fs-devel/000000000000e890bc0609a55cff@google.com Signed-off-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  f2fs: fix to force buffered IO on inline_data inode  Chao Yu
commit 5c8764f8679e659c5cb295af7d32279002d13735 upstream. It will return all zero data when DIO reading from inline_data inode, it is because f2fs_iomap_begin() assign iomap->type w/ IOMAP_HOLE incorrectly for this case. We can let iomap framework handle inline data via assigning iomap->type and iomap->inline_data correctly, however, it will be a little bit complicated when handling race case in between direct IO and buffered IO. So, let's force to use buffered IO to fix this issue. Cc: stable@vger.kernel.org Reported-by: Barry Song <v-songbaohua@oppo.com> Signed-off-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
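The "force buffered IO" part can be sketched as one extra condition in the existing DIO policy helper; treating this as the exact upstream hunk in f2fs_force_buffered_io() is an assumption:

    /* inline_data lives inside the inode block itself; the iomap DIO
     * path would report it as a hole and return zeroes, so fall back
     * to buffered IO for such inodes */
    if (f2fs_has_inline_data(inode))
        return true;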
2024-08-03  fs/ntfs3: Update log->page_{mask,bits} if log->page_size changed  Huacai Chen
commit 2fef55d8f78383c8e6d6d4c014b9597375132696 upstream. If an NTFS file system is mounted to another system with different PAGE_SIZE from the original system, log->page_size will change in log_replay(), but log->page_{mask,bits} don't change correspondingly. This will cause a panic because "u32 bytes = log->page_size - page_off" will get a negative value in the later read_log_page(). Cc: stable@vger.kernel.org Fixes: b46acd6a6a627d876898e ("fs/ntfs3: Add NTFS journal") Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  hostfs: fix dev_t handling  Johannes Berg
commit 267ed02c2121b75e0eaaa338240453b576039e4a upstream. dev_t is a kernel type and may have different definitions in kernel and userspace. On 32-bit x86 this currently makes the stat structure being 4 bytes longer in the user code, causing stack corruption. However, this is (potentially) not the only problem, since dev_t is a different type on user/kernel side, so we don't know that the major/minor encoding isn't also different. Decode/encode it instead to address both problems. Cc: stable@vger.kernel.org Fixes: 74ce793bcbde ("hostfs: Fix ephemeral inodes") Link: https://patch.msgid.link/20240702092440.acc960585dd5.Id0767e12f562a69c6cd3c3262dc3d765db350cf6@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  jbd2: avoid infinite transaction commit loop  Jan Kara
commit 27ba5b67312a944576addc4df44ac3b709aabede upstream. Commit 9f356e5a4f12 ("jbd2: Account descriptor blocks into t_outstanding_credits") started to account descriptor blocks into transactions outstanding credits. However it didn't appropriately decrease the maximum amount of credits available to userspace. Thus if the filesystem requests a transaction smaller than j_max_transaction_buffers but large enough that when descriptor blocks are added the size exceeds j_max_transaction_buffers, we confuse add_transaction_credits() into thinking previous handles have grown the transaction too much and enter infinite journal commit loop in start_this_handle() -> add_transaction_credits() trying to create transaction with enough credits available. Fix the problem by properly accounting for transaction space reserved for descriptor blocks when verifying requested transaction handle size. CC: stable@vger.kernel.org Fixes: 9f356e5a4f12 ("jbd2: Account descriptor blocks into t_outstanding_credits") Reported-by: Alexander Coffin <alex.coffin@maticrobots.com> Link: https://lore.kernel.org/all/CA+hUFcuGs04JHZ_WzA1zGN57+ehL2qmHOt5a7RMpo+rv6Vyxtw@mail.gmail.com Signed-off-by: Jan Kara <jack@suse.cz> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Link: https://patch.msgid.link/20240624170127.3253-3-jack@suse.cz Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
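The failure mode can be illustrated with a small standalone sketch (illustrative numbers only, not jbd2 code):

    /* A handle that fits the advertised per-transaction limit can still be
     * unsatisfiable once descriptor blocks are charged against that same
     * limit, which is what made add_transaction_credits() spin forever. */
    static int handle_can_ever_fit(unsigned int handle_credits,
                                   unsigned int descriptor_blocks,
                                   unsigned int max_txn_bufs)
    {
        return handle_credits + descriptor_blocks <= max_txn_bufs;
    }

    /* handle_can_ever_fit(1020, 8, 1024) == 0: the handle was accepted
     * because 1020 <= 1024, yet no amount of committing ever frees the
     * extra demand, so start_this_handle() retries indefinitely. */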
2024-08-03  jbd2: precompute number of transaction descriptor blocks  Jan Kara
commit e3a00a23781c1f2fcda98a7aecaac515558e7a35 upstream. Instead of computing the number of descriptor blocks a transaction can have each time we need it (which is currently when starting each transaction but will become more frequent later) precompute the number once during journal initialization together with maximum transaction size. We perform the precomputation whenever journal feature set is updated similarly as for computation of journal->j_revoke_records_per_block. CC: stable@vger.kernel.org Signed-off-by: Jan Kara <jack@suse.cz> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Link: https://patch.msgid.link/20240624170127.3253-2-jack@suse.cz Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  jbd2: make jbd2_journal_get_max_txn_bufs() internal  Jan Kara
commit 4aa99c71e42ad60178c1154ec24e3df9c684fb67 upstream. There's no reason to have jbd2_journal_get_max_txn_bufs() public function. Currently all users are internal and can use journal->j_max_transaction_buffers instead. This saves some unnecessary recomputations of the limit as a bonus which becomes important as this function gets more complex in the following patch. CC: stable@vger.kernel.org Signed-off-by: Jan Kara <jack@suse.cz> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Link: https://patch.msgid.link/20240624170127.3253-1-jack@suse.cz Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  ext4: make sure the first directory block is not a hole  Baokun Li
commit f9ca51596bbfd0f9c386dd1c613c394c78d9e5e6 upstream. The syzbot constructs a directory that has no dirblock but is non-inline, i.e. the first directory block is a hole. And no errors are reported when creating files in this directory in the following flow. ext4_mknod ... ext4_add_entry // Read block 0 ext4_read_dirblock(dir, block, DIRENT) bh = ext4_bread(NULL, inode, block, 0) if (!bh && (type == INDEX || type == DIRENT_HTREE)) // The first directory block is a hole // But type == DIRENT, so no error is reported. After that, we get a directory block without '.' and '..' but with a valid dentry. This may cause some code that relies on dot or dotdot (such as make_indexed_dir()) to crash. Therefore when ext4_read_dirblock() finds that the first directory block is a hole report that the filesystem is corrupted and return an error to avoid loading corrupted data from disk causing something bad. Reported-by: syzbot+ae688d469e36fb5138d0@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=ae688d469e36fb5138d0 Fixes: 4e19d6b65fb4 ("ext4: allow directory holes") Cc: stable@kernel.org Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Link: https://patch.msgid.link/20240702132349.2600605-3-libaokun@huaweicloud.com Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
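A hedged sketch of the shape of the fix in __ext4_read_dirblock() (the real error message and reporting details differ; func and line are the helper's existing parameters):

    bh = ext4_bread(NULL, inode, block, 0);
    if (bh == NULL) {
        /* before the fix only INDEX/DIRENT_HTREE reads took this error
         * path, so a hole as directory block 0 was silently accepted */
        ext4_error_inode(inode, func, line, block, "Directory hole found");
        return ERR_PTR(-EFSCORRUPTED);
    }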
2024-08-03  ext4: check dot and dotdot of dx_root before making dir indexed  Baokun Li
commit 50ea741def587a64e08879ce6c6a30131f7111e7 upstream. Syzbot reports a issue as follows: ============================================ BUG: unable to handle page fault for address: ffffed11022e24fe PGD 23ffee067 P4D 23ffee067 PUD 0 Oops: Oops: 0000 [#1] PREEMPT SMP KASAN PTI CPU: 0 PID: 5079 Comm: syz-executor306 Not tainted 6.10.0-rc5-g55027e689933 #0 Call Trace: <TASK> make_indexed_dir+0xdaf/0x13c0 fs/ext4/namei.c:2341 ext4_add_entry+0x222a/0x25d0 fs/ext4/namei.c:2451 ext4_rename fs/ext4/namei.c:3936 [inline] ext4_rename2+0x26e5/0x4370 fs/ext4/namei.c:4214 [...] ============================================ The immediate cause of this problem is that there is only one valid dentry for the block to be split during do_split, so split==0 results in out of bounds accesses to the map triggering the issue. do_split unsigned split dx_make_map count = 1 split = count/2 = 0; continued = hash2 == map[split - 1].hash; ---> map[4294967295] The maximum length of a filename is 255 and the minimum block size is 1024, so it is always guaranteed that the number of entries is greater than or equal to 2 when do_split() is called. But syzbot's crafted image has no dot and dotdot in dir, and the dentry distribution in dirblock is as follows: bus dentry1 hole dentry2 free |xx--|xx-------------|...............|xx-------------|...............| 0 12 (8+248)=256 268 256 524 (8+256)=264 788 236 1024 So when renaming dentry1 increases its name_len length by 1, neither hole nor free is sufficient to hold the new dentry, and make_indexed_dir() is called. In make_indexed_dir() it is assumed that the first two entries of the dirblock must be dot and dotdot, so bus and dentry1 are left in dx_root because they are treated as dot and dotdot, and only dentry2 is moved to the new leaf block. That's why count is equal to 1. Therefore add the ext4_check_dx_root() helper function to add more sanity checks to dot and dotdot before starting the conversion to avoid the above issue. Reported-by: syzbot+ae688d469e36fb5138d0@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=ae688d469e36fb5138d0 Fixes: ac27a0ec112a ("[PATCH] ext4: initial copy of files from ext3") Cc: stable@kernel.org Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Link: https://patch.msgid.link/20240702132349.2600605-2-libaokun@huaweicloud.com Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
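A simplified sketch of the kind of sanity check the new ext4_check_dx_root() helper performs; the real helper also validates rec_len and the overall entry layout, so this only shows the dot/dotdot part:

    static bool dx_root_has_dot_dotdot(struct ext4_dir_entry_2 *de,
                                       unsigned long blocksize)
    {
        /* make_indexed_dir() assumes the first two entries of block 0
         * really are "." and ".." before it moves the rest */
        if (de->name_len != 1 || memcmp(de->name, ".", 1) != 0)
            return false;
        de = ext4_next_entry(de, blocksize);
        return de->name_len == 2 && memcmp(de->name, "..", 2) == 0;
    }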
2024-08-03  udf: Avoid using corrupted block bitmap buffer  Jan Kara
commit a90d4471146de21745980cba51ce88e7926bcc4f upstream. When the filesystem block bitmap is corrupted, we detect the corruption while loading the bitmap and fail the allocation with error. However the next allocation from the same bitmap will notice the bitmap buffer is already loaded and tries to allocate from the bitmap with mixed results (depending on the exact nature of the bitmap corruption). Fix the problem by using BH_verified bit to indicate whether the bitmap is valid or not. Reported-by: syzbot+5f682cd029581f9edfd1@syzkaller.appspotmail.com CC: stable@vger.kernel.org Link: https://patch.msgid.link/20240617154201.29512-2-jack@suse.cz Fixes: 1e0d4adf17e7 ("udf: Check consistency of Space Bitmap Descriptor") Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  cifs: mount with "unix" mount option for SMB1 incorrectly handled  Steve French
commit 0e314e452687ce0ec5874e42cdb993a34325d3d2 upstream. Although by default we negotiate CIFS Unix Extensions for SMB1 mounts to Samba (and they work if the user does not specify "unix" or "posix" or "linux" on mount), and we do properly handle when a user turns them off with "nounix" mount parm. But with the changes to the mount API we broke cases where the user explicitly specifies the "unix" option (or equivalently "linux" or "posix") on mount with vers=1.0 to Samba or other servers which support the CIFS Unix Extensions. "mount error(95): Operation not supported" and logged: "CIFS: VFS: Check vers= mount option. SMB3.11 disabled but required for POSIX extensions" even though CIFS Unix Extensions are supported for vers=1.0 This patch fixes the case where the user specifies both "unix" (or equivalently "posix" or "linux") and "vers=1.0" on mount to a server which supports the CIFS Unix Extensions. Cc: stable@vger.kernel.org Reviewed-by: David Howells <dhowell@redhat.com> Reviewed-by: Paulo Alcantara (Red Hat) <pc@manguebit.com> Signed-off-by: Steve French <stfrench@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  cifs: fix reconnect with SMB1 UNIX Extensions  Steve French
commit a214384ce26b6111ea8c8d58fa82a1ca63996c38 upstream. When mounting with the SMB1 Unix Extensions (e.g. mounts to Samba with vers=1.0), reconnects no longer reset the Unix Extensions (SetFSInfo SET_FILE_UNIX_BASIC) after tcon so most operations (e.g. stat, ls, open, statfs) will fail continuously with: "Operation not supported" if the connection ever resets (e.g. due to brief network disconnect) Cc: stable@vger.kernel.org Reviewed-by: Paulo Alcantara (Red Hat) <pc@manguebit.com> Signed-off-by: Steve French <stfrench@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  cifs: fix potential null pointer use in destroy_workqueue in init_cifs error path  Steve French
commit 193cc89ea0ca1da311877d2b4bb5e9f03bcc82a2 upstream. Dan Carpenter reported a Smatch static checker warning: fs/smb/client/cifsfs.c:1981 init_cifs() error: we previously assumed 'serverclose_wq' could be null (see line 1895) The patch which introduced the serverclose workqueue used the wrong ordering in the error paths of init_cifs() when freeing it on errors. Fixes: 173217bd7336 ("smb3: retrying on failed server close") Cc: stable@vger.kernel.org Cc: Ritvik Budhiraja <rbudhiraja@microsoft.com> Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: David Howells <dhowell@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  ext2: Verify bitmap and itable block numbers before using them  Jan Kara
commit 322a6aff03937aa1ece33b4e46c298eafaf9ac41 upstream. Verify bitmap block numbers and inode table blocks are sane before using them for checking bits in the block bitmap. CC: stable@vger.kernel.org Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  hfs: fix to initialize fields of hfs_inode_info after hfs_alloc_inode()  Chao Yu
commit 26a2ed107929a855155429b11e1293b83e6b2a8b upstream. Syzbot reports uninitialized value access issue as below: loop0: detected capacity change from 0 to 64 ===================================================== BUG: KMSAN: uninit-value in hfs_revalidate_dentry+0x307/0x3f0 fs/hfs/sysdep.c:30 hfs_revalidate_dentry+0x307/0x3f0 fs/hfs/sysdep.c:30 d_revalidate fs/namei.c:862 [inline] lookup_fast+0x89e/0x8e0 fs/namei.c:1649 walk_component fs/namei.c:2001 [inline] link_path_walk+0x817/0x1480 fs/namei.c:2332 path_lookupat+0xd9/0x6f0 fs/namei.c:2485 filename_lookup+0x22e/0x740 fs/namei.c:2515 user_path_at_empty+0x8b/0x390 fs/namei.c:2924 user_path_at include/linux/namei.h:57 [inline] do_mount fs/namespace.c:3689 [inline] __do_sys_mount fs/namespace.c:3898 [inline] __se_sys_mount+0x66b/0x810 fs/namespace.c:3875 __x64_sys_mount+0xe4/0x140 fs/namespace.c:3875 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xcf/0x1e0 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x63/0x6b BUG: KMSAN: uninit-value in hfs_ext_read_extent fs/hfs/extent.c:196 [inline] BUG: KMSAN: uninit-value in hfs_get_block+0x92d/0x1620 fs/hfs/extent.c:366 hfs_ext_read_extent fs/hfs/extent.c:196 [inline] hfs_get_block+0x92d/0x1620 fs/hfs/extent.c:366 block_read_full_folio+0x4ff/0x11b0 fs/buffer.c:2271 hfs_read_folio+0x55/0x60 fs/hfs/inode.c:39 filemap_read_folio+0x148/0x4f0 mm/filemap.c:2426 do_read_cache_folio+0x7c8/0xd90 mm/filemap.c:3553 do_read_cache_page mm/filemap.c:3595 [inline] read_cache_page+0xfb/0x2f0 mm/filemap.c:3604 read_mapping_page include/linux/pagemap.h:755 [inline] hfs_btree_open+0x928/0x1ae0 fs/hfs/btree.c:78 hfs_mdb_get+0x260c/0x3000 fs/hfs/mdb.c:204 hfs_fill_super+0x1fb1/0x2790 fs/hfs/super.c:406 mount_bdev+0x628/0x920 fs/super.c:1359 hfs_mount+0xcd/0xe0 fs/hfs/super.c:456 legacy_get_tree+0x167/0x2e0 fs/fs_context.c:610 vfs_get_tree+0xdc/0x5d0 fs/super.c:1489 do_new_mount+0x7a9/0x16f0 fs/namespace.c:3145 path_mount+0xf98/0x26a0 fs/namespace.c:3475 do_mount fs/namespace.c:3488 [inline] __do_sys_mount fs/namespace.c:3697 [inline] __se_sys_mount+0x919/0x9e0 fs/namespace.c:3674 __ia32_sys_mount+0x15b/0x1b0 fs/namespace.c:3674 do_syscall_32_irqs_on arch/x86/entry/common.c:112 [inline] __do_fast_syscall_32+0xa2/0x100 arch/x86/entry/common.c:178 do_fast_syscall_32+0x37/0x80 arch/x86/entry/common.c:203 do_SYSENTER_32+0x1f/0x30 arch/x86/entry/common.c:246 entry_SYSENTER_compat_after_hwframe+0x70/0x82 Uninit was created at: __alloc_pages+0x9a6/0xe00 mm/page_alloc.c:4590 __alloc_pages_node include/linux/gfp.h:238 [inline] alloc_pages_node include/linux/gfp.h:261 [inline] alloc_slab_page mm/slub.c:2190 [inline] allocate_slab mm/slub.c:2354 [inline] new_slab+0x2d7/0x1400 mm/slub.c:2407 ___slab_alloc+0x16b5/0x3970 mm/slub.c:3540 __slab_alloc mm/slub.c:3625 [inline] __slab_alloc_node mm/slub.c:3678 [inline] slab_alloc_node mm/slub.c:3850 [inline] kmem_cache_alloc_lru+0x64d/0xb30 mm/slub.c:3879 alloc_inode_sb include/linux/fs.h:3018 [inline] hfs_alloc_inode+0x5a/0xc0 fs/hfs/super.c:165 alloc_inode+0x83/0x440 fs/inode.c:260 new_inode_pseudo fs/inode.c:1005 [inline] new_inode+0x38/0x4f0 fs/inode.c:1031 hfs_new_inode+0x61/0x1010 fs/hfs/inode.c:186 hfs_mkdir+0x54/0x250 fs/hfs/dir.c:228 vfs_mkdir+0x49a/0x700 fs/namei.c:4126 do_mkdirat+0x529/0x810 fs/namei.c:4149 __do_sys_mkdirat fs/namei.c:4164 [inline] __se_sys_mkdirat fs/namei.c:4162 [inline] __x64_sys_mkdirat+0xc8/0x120 fs/namei.c:4162 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xcf/0x1e0 arch/x86/entry/common.c:83 
entry_SYSCALL_64_after_hwframe+0x63/0x6b It missed to initialize .tz_secondswest, .cached_start and .cached_blocks fields in struct hfs_inode_info after hfs_alloc_inode(), fix it. Cc: stable@vger.kernel.org Reported-by: syzbot+3ae6be33a50b5aae4dab@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-fsdevel/0000000000005ad04005ee48897f@google.com Signed-off-by: Chao Yu <chao@kernel.org> Link: https://lore.kernel.org/r/20240616013841.2217-1-chao@kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  fuse: verify {g,u}id mount options correctly  Eric Sandeen
commit 525bd65aa759ec320af1dc06e114ed69733e9e23 upstream. As was done in 0200679fc795 ("tmpfs: verify {g,u}id mount options correctly") we need to validate that the requested uid and/or gid is representable in the filesystem's idmapping. Cribbing from the above commit log, The contract for {g,u}id mount options and {g,u}id values in general set from userspace has always been that they are translated according to the caller's idmapping. In so far, fuse has been doing the correct thing. But since fuse is mountable in unprivileged contexts it is also necessary to verify that the resulting {k,g}uid is representable in the namespace of the superblock. Fixes: c30da2e981a7 ("fuse: convert to use the new mount API") Cc: stable@vger.kernel.org # 5.4+ Signed-off-by: Eric Sandeen <sandeen@redhat.com> Link: https://lore.kernel.org/r/8f07d45d-c806-484d-a2e3-7a2199df1cd2@redhat.com Reviewed-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
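Following the tmpfs pattern cited above, the added validation can be sketched roughly as below for the user_id option (a sketch, not the exact fuse hunk; fc is the fs_context and result the parsed option value):

    kuid_t kuid = make_kuid(current_user_ns(), result.uint_32);

    /* the value must be valid in the caller's idmapping ... */
    if (!uid_valid(kuid))
        return invalfc(fc, "Invalid user_id");

    /* ... and representable in the namespace the superblock will use */
    if (!kuid_has_mapping(fc->user_ns, kuid))
        return invalfc(fc, "Invalid user_id");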
2024-08-03  NFSD: Support write delegations in LAYOUTGET  Chuck Lever
commit abc02e5602f7bf9bbae1e8999570a2ad5114578c upstream. I noticed LAYOUTGET(LAYOUTIOMODE4_RW) returning NFS4ERR_ACCESS unexpectedly. The NFS client had created a file with mode 0444, and the server had returned a write delegation on the OPEN(CREATE). The client was requesting a RW layout using the write delegation stateid so that it could flush file modifications. Creating a read-only file does not seem to be problematic for NFSv4.1 without pNFS, so I began looking at NFSD's implementation of LAYOUTGET. The failure was because fh_verify() was doing a permission check as part of verifying the FH presented during the LAYOUTGET. It uses the loga_iomode value to specify the @accmode argument to fh_verify(). fh_verify(MAY_WRITE) on a file whose mode is 0444 fails with -EACCES. To permit LAYOUT* operations in this case, add OWNER_OVERRIDE when checking the access permission of the incoming file handle for LAYOUTGET and LAYOUTCOMMIT. Cc: Christoph Hellwig <hch@lst.de> Cc: stable@vger.kernel.org # v6.6+ Message-Id: 4E9C0D74-A06D-4DC3-A48A-73034DC40395@oracle.com Reviewed-by: Jeff Layton <jlayton@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  btrfs: fix extent map use-after-free when adding pages to compressed bio  Filipe Manana
commit 8e7860543a94784d744c7ce34b78a2e11beefa5c upstream. At add_ra_bio_pages() we are accessing the extent map to calculate 'add_size' after we dropped our reference on the extent map, resulting in a use-after-free. Fix this by computing 'add_size' before dropping our extent map reference. Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com Link: https://lore.kernel.org/linux-btrfs/000000000000038144061c6d18f2@google.com/ Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible") CC: stable@vger.kernel.org # 6.1+ Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-03  exfat: fix potential deadlock on __exfat_get_dentry_set  Sungjong Seo
commit 89fc548767a2155231128cb98726d6d2ea1256c9 upstream. When accessing a file with more entries than ES_MAX_ENTRY_NUM, the bh-array is allocated in __exfat_get_entry_set. The problem is that the bh-array is allocated with GFP_KERNEL. It does not make sense. In the following cases, a deadlock for sbi->s_lock between the two processes may occur. CPU0 CPU1 ---- ---- kswapd balance_pgdat lock(fs_reclaim) exfat_iterate lock(&sbi->s_lock) exfat_readdir exfat_get_uniname_from_ext_entry exfat_get_dentry_set __exfat_get_dentry_set kmalloc_array ... lock(fs_reclaim) ... evict exfat_evict_inode lock(&sbi->s_lock) To fix this, let's allocate bh-array with GFP_NOFS. Fixes: a3ff29a95fde ("exfat: support dynamic allocate bh for exfat_entry_set_cache") Cc: stable@vger.kernel.org # v6.2+ Reported-by: syzbot+412a392a2cd4a65e71db@syzkaller.appspotmail.com Closes: https://lore.kernel.org/lkml/000000000000fef47e0618c0327f@google.com Signed-off-by: Sungjong Seo <sj1557.seo@samsung.com> Signed-off-by: Namjae Jeon <linkinjeon@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
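The fix itself is a one-line change of the allocation flags in __exfat_get_dentry_set(); roughly (variable names are a sketch):

    /* before: GFP_KERNEL may recurse into fs reclaim while sbi->s_lock
     * is held, giving the lock inversion shown above */
    bh = kmalloc_array(num_bh, sizeof(*bh), GFP_KERNEL);

    /* after: GFP_NOFS forbids entering filesystem reclaim here */
    bh = kmalloc_array(num_bh, sizeof(*bh), GFP_NOFS);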
2024-08-03  fs/ntfs3: Keep runs for $MFT::$ATTR_DATA and $MFT::$ATTR_BITMAP  Konstantin Komarov
[ Upstream commit eb95678ee930d67d79fc83f0a700245ae7230455 ] We skip the run_truncate_head call also for $MFT::$ATTR_BITMAP. Otherwise wnd_map()/run_lookup_entry will not find the disk position for the bitmap parts. Fixes: 0e5b044cbf3a ("fs/ntfs3: Refactoring attr_set_size to restore after errors") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Missed error return  Konstantin Komarov
[ Upstream commit 2cbbd96820255fff4f0ad1533197370c9ccc570b ] Fixes: 3f3b442b5ad2 ("fs/ntfs3: Add bitmap") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Fix the format of the "nocase" mount option  Konstantin Komarov
[ Upstream commit d392e85fd1e8d58e460c17ca7d0d5c157848d9c1 ] The 'nocase' option was mistakenly added as fsparam_flag_no with the 'no' prefix, causing the case-insensitive mode to require the 'nonocase' option to be enabled. Fixes: a3a956c78efa ("fs/ntfs3: Add option "nocase"") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
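In fs_parser terms the fix is a change of the flag declaration; fsparam_flag_no() also registers a negated "no"-prefixed variant, which is what inverted the option:

    /* before: declared in the "no"-prefixed class, so enabling
     * case-insensitive mode effectively required "nonocase" */
    fsparam_flag_no("nocase", Opt_nocase),

    /* after: a plain flag that simply enables case-insensitive mode */
    fsparam_flag("nocase", Opt_nocase),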
2024-08-03  nilfs2: avoid undefined behavior in nilfs_cnt32_ge macro  Ryusuke Konishi
[ Upstream commit 0f3819e8c483771a59cf9d3190cd68a7a990083c ] According to the C standard 3.4.3p3, the result of signed integer overflow is undefined. The macro nilfs_cnt32_ge(), which compares two sequence numbers, uses signed integer subtraction that can overflow, and therefore the result of the calculation may differ from what is expected due to undefined behavior in different environments. Similar to an earlier change to the jiffies-related comparison macros in commit 5a581b367b5d ("jiffies: Avoid undefined behavior from signed overflow"), avoid this potential issue by changing the definition of the macro to perform the subtraction as unsigned integers, then cast the result to a signed integer for comparison. Link: https://lkml.kernel.org/r/20130727225828.GA11864@linux.vnet.ibm.com Link: https://lkml.kernel.org/r/20240702183512.6390-1-konishi.ryusuke@gmail.com Fixes: 9ff05123e3bf ("nilfs2: segment constructor") Signed-off-by: Ryusuke Konishi <konishi.ryusuke@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
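The before/after shape of the macro, following the description above (simplified; the real macro also wraps its arguments in typecheck(__u32, ...)):

    /* before: signed subtraction of two 32-bit sequence numbers can
     * overflow, which is undefined behavior */
    #define nilfs_cnt32_ge_old(a, b)    ((__s32)(a) - (__s32)(b) >= 0)

    /* after: subtract as unsigned (well defined modulo 2^32), then cast
     * the result to signed for the comparison, as the jiffies macros do */
    #define nilfs_cnt32_ge_new(a, b)    ((__s32)((a) - (b)) >= 0)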
2024-08-03  fs/proc/task_mmu: properly detect PM_MMAP_EXCLUSIVE per page of PMD-mapped THPs  David Hildenbrand
[ Upstream commit 2c1f057e5be63e890f2dd89e4c25ab5eef084a91 ] We added PM_MMAP_EXCLUSIVE in 2015 via commit 77bb499bb60f ("pagemap: add mmap-exclusive bit for marking pages mapped only here"), when THPs could not be partially mapped and page_mapcount() returned something that was true for all pages of the THP. In 2016, we added support for partially mapping THPs via commit 53f9263baba6 ("mm: rework mapcount accounting to enable 4k mapping of THPs") but missed to determine PM_MMAP_EXCLUSIVE as well per page. Checking page_mapcount() on the head page does not tell the whole story. We should check each individual page. In a future without per-page mapcounts it will be different, but we'll change that to be consistent with PTE-mapped THPs once we deal with that. Link: https://lkml.kernel.org/r/20240607122357.115423-4-david@redhat.com Fixes: 53f9263baba6 ("mm: rework mapcount accounting to enable 4k mapping of THPs") Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Oscar Salvador <osalvador@suse.de> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Lance Yang <ioworker0@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/proc/task_mmu: don't indicate PM_MMAP_EXCLUSIVE without PM_PRESENT  David Hildenbrand
[ Upstream commit da7f31ed0f4df8f61e8195e527aa83dd54896ba3 ] Relying on the mapcount for non-present PTEs that reference pages doesn't make any sense: they are not accounted in the mapcount, so page_mapcount() == 1 won't return the result we actually want to know. While we don't check the mapcount for migration entries already, we could end up checking it for swap, hwpoison, device exclusive, ... entries, which we really shouldn't. There is one exception: device private entries, which we consider fake-present (e.g., incremented the mapcount). But we won't care about that for now for PM_MMAP_EXCLUSIVE, because indicating PM_SWAP for them although they are fake-present already sounds suspiciously wrong. Let's never indicate PM_MMAP_EXCLUSIVE without PM_PRESENT. Link: https://lkml.kernel.org/r/20240607122357.115423-3-david@redhat.com Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Oscar Salvador <osalvador@suse.de> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Lance Yang <ioworker0@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: 2c1f057e5be6 ("fs/proc/task_mmu: properly detect PM_MMAP_EXCLUSIVE per page of PMD-mapped THPs") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/proc/task_mmu.c: add_to_pagemap: remove useless parameter addr  Hui Zhu
[ Upstream commit cabbb6d51e2af4fc2f3c763f58a12c628f228987 ] Function parameter addr of add_to_pagemap() is useless. Remove it. Link: https://lkml.kernel.org/r/20240111084533.40038-1-teawaterz@linux.alibaba.com Signed-off-by: Hui Zhu <teawater@antgroup.com> Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Tested-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Andrei Vagin <avagin@google.com> Cc: David Hildenbrand <david@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: 2c1f057e5be6 ("fs/proc/task_mmu: properly detect PM_MMAP_EXCLUSIVE per page of PMD-mapped THPs") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/proc/task_mmu: indicate PM_FILE for PMD-mapped file THP  David Hildenbrand
[ Upstream commit 3f9f022e975d930709848a86a1c79775b0585202 ] Patch series "fs/proc: move page_mapcount() to fs/proc/internal.h". With all other page_mapcount() users in the tree gone, move page_mapcount() to fs/proc/internal.h, rename it and extend the documentation to prevent future (ab)use. ... of course, I find some issues while working on that code that I sort first ;) We'll now only end up calling page_mapcount() [now folio_precise_page_mapcount()] on pages mapped via present page table entries. Except for /proc/kpagecount, that still does questionable things, but we'll leave that legacy interface as is for now. Did a quick sanity check. Likely we would want some better selftests for /proc/$/pagemap + smaps. I'll see if I can find some time to write some more. This patch (of 6): Looks like we never taught pagemap_pmd_range() about the existence of PMD-mapped file THPs. Seems to date back to the times when we first added support for non-anon THPs in the form of shmem THP. Link: https://lkml.kernel.org/r/20240607122357.115423-1-david@redhat.com Link: https://lkml.kernel.org/r/20240607122357.115423-2-david@redhat.com Signed-off-by: David Hildenbrand <david@redhat.com> Fixes: 800d8c63b2e9 ("shmem: add huge pages support") Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: Lance Yang <ioworker0@gmail.com> Reviewed-by: Oscar Salvador <osalvador@suse.de> Cc: David Hildenbrand <david@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Fix field-spanning write in INDEX_HDR  Konstantin Komarov
[ Upstream commit 2f3e176fee66ac86ae387787bf06457b101d9f7a ] The fields flags and res[3] are replaced with a single 4-byte flags field. Fixes: 4534a70b7056 ("fs/ntfs3: Add headers and misc files") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Drop stray '\' (backslash) in formatting string  Andy Shevchenko
[ Upstream commit b366809dd151e8abb29decda02fd6a78b498831f ] CHECK /home/andy/prj/linux-topic-uart/fs/ntfs3/super.c fs/ntfs3/super.c:471:23: warning: unknown escape sequence: '\%' Drop stray '\' (backslash) in formatting string. Fixes: d27e202b9ac4 ("fs/ntfs3: Add more info into /proc/fs/ntfs3/<dev>/volinfo") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Correct undo if ntfs_create_inode failed  Konstantin Komarov
[ Upstream commit f28d0866d8ff798aa497971f93d0cc58f442d946 ] Clusters allocated for Extended Attributes must be freed when rolling back inode creation. Fixes: 82cae269cfa95 ("fs/ntfs3: Add initialization of super block") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Replace inode_trylock with inode_lock  Konstantin Komarov
[ Upstream commit 69505fe98f198ee813898cbcaf6770949636430b ] The issue was detected due to xfstest 465 failing. Fixes: 4342306f0f0d ("fs/ntfs3: Add file operations and implementation") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Add missing .dirty_folio in address_space_operations  Konstantin Komarov
[ Upstream commit 0f9579d9e0331b6255132ac06bdf2c0a01cceb90 ] After switching from pages to folio [1], it became evident that the initialization of .dirty_folio for page cache operations was missed for compressed files. [1] https://lore.kernel.org/ntfs3/20240422193203.3534108-1-willy@infradead.org Fixes: 82cae269cfa95 ("fs/ntfs3: Add initialization of super block") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Fix getting file type  Konstantin Komarov
[ Upstream commit 24c5100aceedcd47af89aaa404d4c96cd2837523 ] An additional condition now causes the MFT record to be read from disk so that the file type (dt_type) can be determined. Fixes: 22457c047ed97 ("fs/ntfs3: Modified fix directory element type detection") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Missed NI_FLAG_UPDATE_PARENT setting  Konstantin Komarov
[ Upstream commit 1c308ace1fd6de93bd0b7e1a5e8963ab27e2c016 ] Fixes: be71b5cba2e64 ("fs/ntfs3: Add attrib operations") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Deny getting attr data block in compressed frame  Konstantin Komarov
[ Upstream commit 69943484b95267c94331cba41e9e64ba7b24f136 ] Attempts to retrieve an attribute data block inside a compressed frame are now ignored. Fixes: be71b5cba2e64 ("fs/ntfs3: Add attrib operations") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Fix transform resident to nonresident for compressed files  Konstantin Komarov
[ Upstream commit 25610ff98d4a34e6a85cbe4fd8671be6b0829f8f ] Corrected the calculation of the required space length (in clusters) for attribute data storage in the case of compression. Fixes: be71b5cba2e64 ("fs/ntfs3: Add attrib operations") Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  fs/ntfs3: Merge synonym COMPRESSION_UNIT and NTFS_LZNT_CUNIT  Konstantin Komarov
[ Upstream commit 487f8d482a7e51a640b8f955a398f906a4f83951 ] COMPRESSION_UNIT and NTFS_LZNT_CUNIT mean the same thing; (1u << NTFS_LZNT_CUNIT) determines the size used for compression (in clusters). COMPRESS_MAX_CLUSTER is not used in the code. Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com> Stable-dep-of: 25610ff98d4a ("fs/ntfs3: Fix transform resident to nonresident for compressed files") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  ext4: avoid writing unitialized memory to disk in EA inodes  Jan Kara
[ Upstream commit 65121eff3e4c8c90f8126debf3c369228691c591 ] If the extended attribute size is not a multiple of block size, the last block in the EA inode will have uninitialized tail which will get written to disk. We will never expose the data to userspace but still this is not a good practice so just zero out the tail of the block as it isn't going to cause a noticeable performance overhead. Fixes: e50e5129f384 ("ext4: xattr-in-inode support") Reported-by: syzbot+9c1fe13fcb51574b249b@syzkaller.appspotmail.com Reported-by: Hugh Dickins <hughd@google.com> Signed-off-by: Jan Kara <jack@suse.cz> Link: https://patch.msgid.link/20240613150234.25176-1-jack@suse.cz Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Sasha Levin <sashal@kernel.org>
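The fix amounts to zeroing the unused tail of the last EA-inode block before it is written; a sketch of the write loop in ext4_xattr_inode_write() (csize stands for the number of valid bytes in the current block, the other names are hedged):

    csize = (bufsize - wsize) > blocksize ? blocksize : bufsize - wsize;
    memcpy(bh->b_data, buf, csize);
    if (csize < blocksize)
        memset(bh->b_data + csize, 0, blocksize - csize);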
2024-08-03  ext4: don't track ranges in fast_commit if inode has inlined data  Luis Henriques (SUSE)
[ Upstream commit 7882b0187bbeb647967a7b5998ce4ad26ef68a9a ] When fast-commit needs to track ranges, it has to handle inodes that have inlined data in a different way because ext4_fc_write_inode_data(), in the actual commit path, will attempt to map the required blocks for the range. However, inodes that have inlined data will have their data stored in inode->i_block and, eventually, in the extended attribute space. Unfortunately, because fast commit doesn't currently support extended attributes, the solution is to mark this commit as ineligible. Link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1039883 Signed-off-by: Luis Henriques (SUSE) <luis.henriques@linux.dev> Tested-by: Ben Hutchings <benh@debian.org> Fixes: 9725958bb75c ("ext4: fast commit may miss tracking unwritten range during ftruncate") Link: https://patch.msgid.link/20240618144312.17786-1-luis.henriques@linux.dev Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  NFSv4.1 another fix for EXCHGID4_FLAG_USE_PNFS_DS for DS server  Olga Kornievskaia
[ Upstream commit 4840c00003a2275668a13b82c9f5b1aed80183aa ] Previously in order to mark the communication with the DS server, we tried to use NFS_CS_DS in cl_flags. However, this flag would only be saved for the DS server and in case where DS equals MDS, the client would not find a matching nfs_client in nfs_match_client that represents the MDS (but is also a DS). Instead, don't rely on the NFS_CS_DS but instead use NFS_CS_PNFS. Fixes: 379e4adfddd6 ("NFSv4.1: fixup use EXCHGID4_FLAG_USE_PNFS_DS for DS server") Signed-off-by: Olga Kornievskaia <kolga@netapp.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  ext4: fix infinite loop when replaying fast_commit  Luis Henriques (SUSE)
[ Upstream commit 907c3fe532253a6ef4eb9c4d67efb71fab58c706 ] When doing fast_commit replay an infinite loop may occur due to an uninitialized extent_status struct. ext4_ext_determine_insert_hole() does not detect the replay and calls ext4_es_find_extent_range(), which will return immediately without initializing the 'es' variable. Because 'es' contains garbage, an integer overflow may happen causing an infinite loop in this function, easily reproducible using fstest generic/039. This commit fixes this issue by unconditionally initializing the structure in function ext4_es_find_extent_range(). Thanks to Zhang Yi, for figuring out the real problem! Fixes: 8016e29f4362 ("ext4: fast commit recovery path") Signed-off-by: Luis Henriques (SUSE) <luis.henriques@linux.dev> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Link: https://patch.msgid.link/20240515082857.32730-1-luis.henriques@linux.dev Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Sasha Levin <sashal@kernel.org>
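The fix can be sketched as initializing the output extent_status before the early return taken during fast-commit replay (field names from fs/ext4/extents_status.h, exact placement hedged):

    /* report "no extent found" instead of leaving stack garbage in *es
     * when bailing out early during fast-commit replay */
    es->es_lblk = es->es_len = es->es_pblk = 0;

    if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
        return;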
2024-08-03  udf: Fix bogus checksum computation in udf_rename()  Jan Kara
[ Upstream commit 27ab33854873e6fb958cb074681a0107cc2ecc4c ] Syzbot reports uninitialized memory access in udf_rename() when updating checksum of '..' directory entry of a moved directory. This is indeed true as we pass on-stack diriter.fi to the udf_update_tag() and because that has only struct fileIdentDesc included in it and not the impUse or name fields, the checksumming function is going to checksum random stack contents beyond the end of the structure. This is actually harmless because the following udf_fiiter_write_fi() will recompute the checksum from on-disk buffers where everything is properly included. So all that is needed is just removing the bogus calculation. Fixes: e9109a92d2a9 ("udf: Convert udf_rename() to new directory iteration code") Link: https://lore.kernel.org/all/000000000000cf405f060d8f75a9@google.com/T/ Link: https://patch.msgid.link/20240617154201.29512-1-jack@suse.cz Reported-by: syzbot+d31185aa54170f7fc1f5@syzkaller.appspotmail.com Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-03  udf: Fix lock ordering in udf_evict_inode()  Jan Kara
[ Upstream commit 8832fc1e502687869606bb0a7b79848ed3bf036f ] udf_evict_inode() calls udf_setsize() to truncate a deleted inode. However, inode deletion through udf_evict_inode() can happen from inode reclaim context, and udf_setsize() grabs mapping->invalidate_lock, which isn't generally safe to acquire from fs reclaim context since we allocate pages under mapping->invalidate_lock, for example in a page fault path. This is not a real deadlock possibility, as by the time udf_evict_inode() is called nobody can be accessing the inode, much less working with its page cache, so this is just lockdep triggering a false positive. Fix the problem by moving the mapping->invalidate_lock locking outside of udf_setsize() into udf_setattr(), as grabbing mapping->invalidate_lock from udf_evict_inode() is pointless. Reported-by: syzbot+0333a6f4b88bcd68a62f@syzkaller.appspotmail.com Fixes: b9a861fd527a ("udf: Protect truncate and file type conversion with invalidate_lock") Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Sasha Levin <sashal@kernel.org>
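The resulting shape of the size-change path in udf_setattr(), with the lock taken only on the userspace-reachable path (a sketch under the assumptions above; the evict path then calls udf_setsize() without the lock, since nothing else can touch the inode by that point):

    if (attr->ia_valid & ATTR_SIZE && attr->ia_size != i_size_read(inode)) {
        filemap_invalidate_lock(inode->i_mapping);
        error = udf_setsize(inode, attr->ia_size);
        filemap_invalidate_unlock(inode->i_mapping);
        if (error)
            return error;
    }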