Age | Commit message | Author
11 days | mm: remove arch_flush_tlb_batched_pending() arch helper | Ryan Roberts
Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale TLB entry remains valid"), all arches that use tlbbatch for reclaim (arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a flush_tlb_mm(). So let's simplify by removing the unnecessary abstraction and doing the flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively reverts commit db6c1f6f236d ("mm/tlbbatch: introduce arch_flush_tlb_batched_pending()"). Link: https://lkml.kernel.org/r/20250609103132.447370-1-ryan.roberts@arm.com Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Suggested-by: Will Deacon <will@kernel.org> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Will Deacon <will@kernel.org> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexandre Ghiti <alex@ghiti.fr> Cc: Borislav Betkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Rik van Riel <riel@surriel.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
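For readers who want the resulting shape of the code, below is a minimal sketch of mm/rmap.c's helper after the change, reconstructed from the description above; treat the field and macro names as illustrative of the existing tlbbatch code rather than the literal upstream hunk:

    void flush_tlb_batched_pending(struct mm_struct *mm)
    {
        int batch = atomic_read(&mm->tlb_flush_batched);
        int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
        int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

        if (pending != flushed) {
            /* was: arch_flush_tlb_batched_pending(mm); */
            flush_tlb_mm(mm);
            /* record that the pending batched flushes have been done */
            atomic_cmpxchg(&mm->tlb_flush_batched, batch,
                           pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
        }
    }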
11 days | mm: drop hugetlb_free_pgd_range() | Anthony Yznaga
There are no longer any callers of hugetlb_free_pgd_range(). Link: https://lkml.kernel.org/r/20250716012611.10369-4-anthony.yznaga@oracle.com Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Acked-by: Oscar Salvador <osalvador@suse.de> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Andreas Larsson <andreas@gaisler.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David Hildenbrand <david@redhat.com> Cc: David S. Miller <davem@davemloft.net> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Muchun Song <muchun.song@linux.dev> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm: remove call to hugetlb_free_pgd_range() | Anthony Yznaga
With the removal of the last arch-specific implementation of hugetlb_free_pgd_range(), hugetlb VMAs no longer need special handling when freeing page tables. Link: https://lkml.kernel.org/r/20250716012611.10369-3-anthony.yznaga@oracle.com Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Acked-by: Oscar Salvador <osalvador@suse.de> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Andreas Larsson <andreas@gaisler.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David Hildenbrand <david@redhat.com> Cc: David S. Miller <davem@davemloft.net> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Muchun Song <muchun.song@linux.dev> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | sparc64: remove hugetlb_free_pgd_range() | Anthony Yznaga
Patch series "drop hugetlb_free_pgd_range()". For all architectures that support hugetlb except for sparc, hugetlb_free_pgd_range() just calls free_pgd_range(). It turns out the sparc implementation is essentially identical to free_pgd_range() and can be removed. Remove it and update free_pgtables() to treat hugetlb VMAs the same as others. This patch (of 3): The sparc implementation of hugetlb_free_pgd_range() is identical to free_pgd_range() with the exception of checking for and skipping possible leaf entries at the PUD and PMD levels. These checks are unnecessary because any huge pages have been freed and their PTEs cleared by the time page tables needed to map them are freed. While some huge page sizes do populate the page table with multiple PTEs, they are correctly cleared by huge_ptep_get_and_clear(). To verify this, libhugetlbfs tests were run for 64K, 8M, and 256M page sizes with an instrumented kernel on a qemu guest modified to support the 256M page size. The same tests were used to verify no regressions after applying this patch and were also run on x86 for both 2M and 1G page sizes. Link: https://lkml.kernel.org/r/20250716012611.10369-1-anthony.yznaga@oracle.com Link: https://lkml.kernel.org/r/20250716012611.10369-2-anthony.yznaga@oracle.com Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Acked-by: Oscar Salvador <osalvador@suse.de> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Andreas Larsson <andreas@gaisler.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David Hildenbrand <david@redhat.com> Cc: David S. Miller <davem@davemloft.net> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Muchun Song <muchun.song@linux.dev> Cc: Oscar Salvador <osalvador@suse.de> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/shmem: writeout free swap if swap_writeout() reactivates | Hugh Dickins
If swap_writeout() returns AOP_WRITEPAGE_ACTIVATE (for example, because zswap cannot compress and memcg disables writeback), there is no virtue in keeping that folio in swap cache and holding the swap allocation: shmem_writeout() switch it back to shmem page cache before returning. Folio lock is held, and folio->memcg_data remains set throughout, so there is no need to get into any memcg or memsw charge complications: swap_free_nr() and delete_from_swap_cache() do as much as is needed (but beware the race with shmem_free_swap() when inode truncated or evicted). Doing the same for an anonymous folio is harder, since it will usually have been unmapped, with references to the swap left in the page tables. Adding a function to remap the folio would be fun, but not worthwhile unless it has other uses, or an urgent bug with anon is demonstrated. [hughd@google.com: use shmem_recalc_inode() rather than open coding, per Baolin] Link: https://lkml.kernel.org/r/101a7d89-290c-545d-8a6d-b1174ed8b1e5@google.com Link: https://lkml.kernel.org/r/5c911f7a-af7a-5029-1dd4-2e00b66d565c@google.com Signed-off-by: Hugh Dickins <hughd@google.com> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com> Tested-by: David Rientjes <rientjes@google.com> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <21cnbao@gmail.com> Cc: Chris Li <chrisl@kernel.org> Cc: Kairui Song <ryncsn@gmail.com> Cc: Kemeng Shi <shikemeng@huaweicloud.com> Cc: Shakeel Butt <shakeel.butt@linux.dev> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/shmem: hold shmem_swaplist spinlock (not mutex) much less | Hugh Dickins
A flamegraph (from an MGLRU load) showed shmem_writeout()'s use of the global shmem_swaplist_mutex worryingly hot: improvement is long overdue. 3.1 commit 6922c0c7abd3 ("tmpfs: convert shmem_writepage and enable swap") apologized for extending shmem_swaplist_mutex across add_to_swap_cache(), and hoped to find another way: yes, there may be lots of work to allocate radix tree nodes in there. Then 6.15 commit b487a2da3575 ("mm, swap: simplify folio swap allocation") will have made it worse, by moving shmem_writeout()'s swap allocation under that mutex too (but the worrying flamegraph was observed even before that change). There's a useful comment about pagelock no longer protecting from eviction once moved to swap cache: but it's good till shmem_delete_from_page_cache() replaces page pointer by swap entry, so move the swaplist add between them. We would much prefer to take the global lock once per inode than once per page: given the possible races with shmem_unuse() pruning when !swapped (and other tasks racing to swap other pages out or in), try the swaplist add whenever swapped was incremented from 0 (but inode may already be on the list - only unuse and evict bother to remove it). This technique is more subtle than it looks (we're avoiding the very lock which would make it easy), but works: whereas an unlocked list_empty() check runs a risk of the inode being unqueued and left off the swaplist forever, swapoff only completing when the page is faulted in or removed. The need for a sleepable mutex went away in 5.1 commit b56a2d8af914 ("mm: rid swapoff of quadratic complexity"): a spinlock works better now. This commit is certain to take shmem_swaplist_mutex out of contention, and has been seen to make a practical improvement (but there is likely to have been an underlying issue which made its contention so visible). Link: https://lkml.kernel.org/r/87beaec6-a3b0-ce7a-c892-1e1e5bd57aa3@google.com Signed-off-by: Hugh Dickins <hughd@google.com> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com> Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com> Tested-by: David Rientjes <rientjes@google.com> Reviewed-by: Kairui Song <kasong@tencent.com> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <21cnbao@gmail.com> Cc: Chris Li <chrisl@kernel.org> Cc: Kemeng Shi <shikemeng@huaweicloud.com> Cc: Shakeel Butt <shakeel.butt@linux.dev> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | tools/testing/selftests: extend mremap_test to test multi-VMA mremap | Lorenzo Stoakes
Now that we have added the ability to move multiple VMAs at once, assert that this functions correctly, both overwriting VMAs and moving backwards and forwards with merge and VMA invalidation. Additionally assert that page tables are correctly propagated by setting random data and reading it back. Link: https://lkml.kernel.org/r/139074a24a011ca4ed52498a7fa2080024b43917.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/mremap: permit mremap() move of multiple VMAs | Lorenzo Stoakes
Historically we've made it a uAPI requirement that mremap() may only operate on a single VMA at a time. For instances where VMAs need to be resized, this makes sense, as it becomes very difficult to determine what a user actually wants should they indicate a desire to expand or shrink the size of multiple VMAs (truncate? Adjust sizes individually? Some other strategy?). However, in instances where a user is moving VMAs, it is restrictive to disallow this. This is especially the case when anonymous mapping remap may or may not be mergeable depending on whether VMAs have or have not been faulted due to anon_vma assignment and folio index alignment with vma->vm_pgoff. Often this can result in surprising impact where a moved region is faulted, then moved back and a user fails to observe a merge from otherwise compatible, adjacent VMAs. This change allows such cases to work without the user having to be cognizant of whether a prior mremap() move or other VMA operations has resulted in VMA fragmentation. We only permit this for mremap() operations that do NOT change the size of the VMA and DO specify MREMAP_MAYMOVE | MREMAP_FIXED. Should no VMA exist in the range, -EFAULT is returned as usual. If a VMA move spans a single VMA - then there is no functional change. Otherwise, we place additional requirements upon VMAs: * They must not have a userfaultfd context associated with them - this requires dropping the lock to notify users, and we want to perform the operation with the mmap write lock held throughout. * If file-backed, they cannot have a custom get_unmapped_area handler - this might result in MREMAP_FIXED not being honoured, which could result in unexpected positioning of VMAs in the moved region. There may be gaps in the range of VMAs that are moved: X Y X Y <---> <-> <---> <-> |-------| |-----| |-----| |-------| |-----| |-----| | A | | B | | C | ---> | A' | | B' | | C' | |-------| |-----| |-----| |-------| |-----| |-----| addr new_addr The move will preserve the gaps between each VMA. Note that any failures encountered will result in a partial move. Since an mremap() can fail at any time, this might result in only some of the VMAs being moved. Note that failures are very rare and typically require an out of a memory condition or a mapping limit condition to be hit, assuming the VMAs being moved are valid. We don't try to assess ahead of time whether VMAs are valid according to the multi VMA rules, as it would be rather unusual for a user to mix uffd-enabled VMAs and/or VMAs which map unusual driver mappings that specify custom get_unmapped_area() handlers in an aggregate operation. So we optimise for the far, far more likely case of the operation being entirely permissible. In the case of the move of a single VMA, the above conditions are permitted. This makes the behaviour identical for a single VMA as before. Link: https://lkml.kernel.org/r/8cab2f2c202c4208bdfdb562635748bea6eb37bf.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
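To make the new semantics concrete, here is a small userspace sketch (not part of the patch; it assumes a kernel carrying this series) that builds two adjacent VMAs by splitting a mapping with mprotect() and then moves the whole span with a single mremap(MREMAP_MAYMOVE | MREMAP_FIXED) call; on kernels without the series the same call fails with EFAULT:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t pg = (size_t)sysconf(_SC_PAGESIZE);
        /* Source span: one RW anonymous mapping, split into two VMAs below. */
        char *src = mmap(NULL, 4 * pg, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        /* Reserve a destination window so MREMAP_FIXED has a known target. */
        char *dst = mmap(NULL, 4 * pg, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (src == MAP_FAILED || dst == MAP_FAILED)
            return 1;
        memset(src, 'x', 4 * pg);
        /* Changing protection on the tail splits the region into two VMAs. */
        mprotect(src + 2 * pg, 2 * pg, PROT_READ);

        /* Same size, MREMAP_MAYMOVE | MREMAP_FIXED: the whole two-VMA span
         * moves in one call; older kernels return MAP_FAILED (EFAULT). */
        void *moved = mremap(src, 4 * pg, 4 * pg,
                             MREMAP_MAYMOVE | MREMAP_FIXED, dst);
        printf("multi-VMA mremap: %s\n",
               moved == MAP_FAILED ? "failed" : "moved");
        return 0;
    }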
11 days | mm/mremap: clean up mlock populate behaviour | Lorenzo Stoakes
When an mlock()'d VMA is expanded, we need to populate the expanded region to maintain the contract that all mlock()'d memory is present (albeit - with some period after mmap unlock where the expanded part of the mapping remains unfaulted). The current implementation is very unclear, so make it absolutely explicit under what circumstances we do this. Link: https://lkml.kernel.org/r/2358b0006baa9cab83db4259817794f16fe1992e.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/mremap: move remap_is_valid() into check_prep_vma() | Lorenzo Stoakes
Group parameter check logic together, moving check_mremap_params() next to it. This puts all such checks into a single place, and invokes them early so we can simply bail out as soon as we are aware that a condition is not met. No functional change intended. Link: https://lkml.kernel.org/r/4d0669c23531629d8ead42aa701c6237bd6bf012.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/mremap: check remap conditions earlier | Lorenzo Stoakes
When we expand or move a VMA, this requires a number of additional checks to be performed. Make it really obvious under what circumstances these checks must be performed and aggregate all the checks in one place by invoking this in check_prep_vma(). We have to adjust the checks to account for shrink + move operations by checking new_len <= old_len rather than new_len == old_len. No functional change intended. [lorenzo.stoakes@oracle.com: allow undocumented mremap() shrink behaviour] Link: https://lkml.kernel.org/r/8fc92a38-c636-465e-9a2f-2c6ac9cb49b8@lucifer.local Link: https://lkml.kernel.org/r/8b4161ce074901e00602a446d81f182db92b0430.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/mremap: use an explicit uffd failure path for mremap | Lorenzo Stoakes
Right now it appears that the code is relying upon the returned destination address having bits outside PAGE_MASK to indicate whether an error value is specified, and decrementing the increased refcount on the uffd ctx if so. This is not a safe means of determining an error value, so instead, be specific. It makes far more sense to do so in a dedicated error path, so add mremap_userfaultfd_fail() for this purpose and use this when an error arises. A vm_userfaultfd_ctx is not established until we are at the point where mremap_userfaultfd_prep() is invoked in copy_vma_and_data(), so this is a no-op until this happens. That is - uffd remap notification only occurs if the VMA is actually moved - at which point a UFFD_EVENT_REMAP event is raised. No errors can occur after this point currently, though it's certainly not guaranteed this will always remain the case, and we mustn't rely on this. However, the reason for needing to handle this case is that, when an error arises on a VMA move at the point of adjusting page tables, we revert this operation, and propagate the error. At this point, it is not correct to raise a uffd remap event, and we must handle it. This refactoring makes it abundantly clear what we are doing. We assume vrm->new_addr is always valid, which a prior change made the case even for mremap() invocations which don't move the VMA, however given no uffd context would be set up in this case it's immaterial to this change anyway. No functional change intended. Link: https://lkml.kernel.org/r/a70e8a1f7bce9f43d1431065b414e0f212297297.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/mremap: cleanup post-processing stage of mremap | Lorenzo Stoakes
Separate out the uffd bits so it's clear what's happening.

Don't bother setting vrm->mmap_locked after unlocking, because after this we are done anyway.

The only time we drop the mmap lock is on VMA shrink, at which point vrm->new_len will be < vrm->old_len and the operation will not be performed anyway, so move this code out of the if (vrm->mmap_locked) block.

All addresses returned by mremap() are page-aligned, so the offset_in_page() check on ret seems only to be incorrectly trying to detect whether an error occurred - explicitly check for this.

No functional change intended.

Link: https://lkml.kernel.org/r/ebb8f29650b8e343fe98fefc67b3a61a24d1e0f1.1752770784.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/mremap: put VMA check and prep logic into helper function | Lorenzo Stoakes
Rather than lumping everything together in do_mremap(), add a new helper function, check_prep_vma(), to do the work relating to each VMA. This further lays groundwork for subsequent patches which will allow for batched VMA mremap(). Additionally, if we set vrm->new_addr == vrm->addr when prepping the VMA, this avoids us needing to do so in the expand VMA mlocked case. No functional change intended. Link: https://lkml.kernel.org/r/15efa3c57935f7f8894094b94c1803c2f322c511.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/mremap: refactor initial parameter sanity checks | Lorenzo Stoakes
We are currently checking some things later, and some things immediately. Aggregate the checks and avoid ones that need not be made. Simplify things by aligning lengths immediately. Defer setting the delta parameter until later, which removes some duplicate code in the hugetlb case. We can safely perform the checks moved from mremap_to() to check_mremap_params() because: * If we set a new address via vrm_set_new_addr(), then this is guaranteed to not overlap nor to position the new VMA past TASK_SIZE, so there's no need to check these later. * We can simply page align lengths immediately. We do not need to check for overlap nor TASK_SIZE sanity after hugetlb alignment as this asserts addresses are huge-aligned, then huge-aligns lengths, rounding down. This means any existing overlap would have already been caught. Moving things around like this lays the groundwork for subsequent changes to permit operations on batches of VMAs. No functional change intended. Link: https://lkml.kernel.org/r/c862d625c98b1abd861c406f2bfad8baf3287f83.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/mremap: perform some simple cleanups | Lorenzo Stoakes
Patch series "mm/mremap: permit mremap() move of multiple VMAs", v4. Historically we've made it a uAPI requirement that mremap() may only operate on a single VMA at a time. For instances where VMAs need to be resized, this makes sense, as it becomes very difficult to determine what a user actually wants should they indicate a desire to expand or shrink the size of multiple VMAs (truncate? Adjust sizes individually? Some other strategy?). However, in instances where a user is moving VMAs, it is restrictive to disallow this. This is especially the case when anonymous mapping remap may or may not be mergeable depending on whether VMAs have or have not been faulted due to anon_vma assignment and folio index alignment with vma->vm_pgoff. Often this can result in surprising impact where a moved region is faulted, then moved back and a user fails to observe a merge from otherwise compatible, adjacent VMAs. This change allows such cases to work without the user having to be cognizant of whether a prior mremap() move or other VMA operations has resulted in VMA fragmentation. In order to do this, this series performs a large amount of refactoring, most pertinently - grouping sanity checks together, separately those that check input parameters and those relating to VMAs. we also simplify the post-mmap lock drop processing for uffd and mlock()'d VMAs. With this done, we can then fairly straightforwardly implement this functionality. This works exclusively for mremap() invocations which specify MREMAP_FIXED. It is not compatible with VMAs which use userfaultfd, as the notification of the userland fault handler would require us to drop the mmap lock. It is also not compatible with file-backed mappings with customised get_unmapped_area() handlers as these may not honour MREMAP_FIXED. The input and output addresses ranges must not overlap. We carefully account for moves which would result in VMA iterator invalidation. While there can be gaps between VMAs in the input range, there can be no gap before the first VMA in the range. This patch (of 10): We const-ify the vrm flags parameter to indicate this will never change. We rename resize_is_valid() to remap_is_valid(), as this function does not only apply to cases where we resize, so it's simply confusing to refer to that here. We remove the BUG() from mremap_at(), as we should not BUG() unless we are certain it'll result in system instability. We rename vrm_charge() to vrm_calc_charge() to make it clear this simply calculates the charged number of pages rather than actually adjusting any state. We update the comment for vrm_implies_new_addr() to explain that MREMAP_DONTUNMAP does not require a set address, but will always be moved. Additionally consistently use 'res' rather than 'ret' for result values. No functional change intended. Link: https://lkml.kernel.org/r/cover.1752770784.git.lorenzo.stoakes@oracle.com Link: https://lkml.kernel.org/r/d35ad8ce6b2c33b2f2f4ef7ec415f04a35cba34f.1752770784.git.lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jann Horn <jannh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/vma: refactor vma_modify_flags_name() to vma_modify_name() | Lorenzo Stoakes
The single instance in which we use this function doesn't actually need to change VMA flags, so remove this parameter and update the caller accordingly. [lorenzo.stoakes@oracle.com: correct comment] Link: https://lkml.kernel.org/r/77f45b2e-a748-4635-9381-a5051091087f@lucifer.local Link: https://lkml.kernel.org/r/20250714135839.178032-1-lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Jann Horn <jannh@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm: optimize lru_note_cost() by adding lru_note_cost_unlock_irq() | Hugh Dickins
Dropping a lock, just to demand it again for an afterthought, cannot be good if contended: convert lru_note_cost() to lru_note_cost_unlock_irq(). [hughd@google.com: delete unneeded comment] Link: https://lkml.kernel.org/r/dbf9352a-1ed9-a021-c0c7-9309ac73e174@google.com Link: https://lkml.kernel.org/r/21100102-51b6-79d5-03db-1bb7f97fa94c@google.com Signed-off-by: Hugh Dickins <hughd@google.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Tested-by: Roman Gushchin <roman.gushchin@linux.dev> Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev> Cc: David Hildenbrand <david@redhat.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | mm/mglru: stop try_to_inc_min_seq() if min_seq[type] has not increased | Hao Jia
In try_to_inc_min_seq(), if min_seq[type] has not increased - in other words, if min_seq[type] == lrugen->min_seq[type] - we should return directly to avoid unnecessary overhead later.

Corollary: if min_seq[type] of neither anonymous nor file has increased, try_to_inc_min_seq() will fail.

Proof: it is known that min_seq[type] has not increased, that is, min_seq[type] is equal to lrugen->min_seq[type]. Then:

case 1: min_seq[type] has not been reassigned before the min_seq[type] <= lrugen->min_seq[type] check. The subsequent min_seq[type] <= lrugen->min_seq[type] check will then always be true.

case 2: min_seq[type] is reassigned to seq before the min_seq[type] <= lrugen->min_seq[type] check. For that reassignment to happen, min_seq[type] > seq must hold, i.e. lrugen->min_seq[type] > seq before min_seq[type] = seq. The following min_seq[type] (== seq) <= lrugen->min_seq[type] check is then always true.

Therefore, in try_to_inc_min_seq(), if min_seq[type] of neither anonymous nor file has increased, we can return false directly to avoid unnecessary overhead.

Link: https://lkml.kernel.org/r/20250703023946.65315-1-jiahao.kernel@gmail.com
Signed-off-by: Hao Jia <jiahao1@lixiang.com>
Suggested-by: Yuanchu Xie <yuanchu@google.com>
Reviewed-by: Axel Rasmussen <axelrasmussen@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kinsey Ho <kinseyho@google.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
11 days | Merge branch 'selftests-drv-net-tso-fix-issues-with-tso-selftest' | Jakub Kicinski
Daniel Zahka says: ==================== selftests: drv-net: tso: fix issues with tso selftest There are a couple issues with the tso selftest. - Features required for test cases are detected by searching the set of active features at test start, so if a feature is supported by hw, but disabled, the test will report that the feature under test is not available and fail. - The vxlan test cases do not use the correct ip link flags based on the gso feature under test - The non-tunneled tso6 test case is showing up with the wrong name. With all patches applied test output is: # Detected qstat for LSO wire-packets TAP version 13 1..14 ok 1 tso.ipv4 # Testing with mangleid enabled ok 2 tso.vxlan4_ipv4 ok 3 tso.vxlan4_ipv6 # Testing with mangleid enabled ok 4 tso.vxlan_csum4_ipv4 ok 5 tso.vxlan_csum4_ipv6 # Testing with mangleid enabled ok 6 tso.gre4_ipv4 ok 7 tso.gre4_ipv6 ok 8 tso.ipv6 # Testing with mangleid enabled ok 9 tso.vxlan6_ipv4 ok 10 tso.vxlan6_ipv6 # Testing with mangleid enabled ok 11 tso.vxlan_csum6_ipv4 ok 12 tso.vxlan_csum6_ipv6 # Testing with mangleid enabled ok 13 tso.gre6_ipv4 ok 14 tso.gre6_ipv6 # Totals: pass:14 fail:0 xfail:0 xpass:0 skip:0 error:0 ==================== Link: https://patch.msgid.link/20250723184740.4075410-1-daniel.zahka@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | selftests: drv-net: tso: fix non-tunneled tso6 test case name | Daniel Zahka
The non-tunneled tso6 test case was showing up as: ok 8 tso.ipv4 This is because of the way test_builder() uses the inner_ipver arg in test naming, and how test_info is iterated over in main(). Given that some tunnels not supported yet, e.g. ipip or sit, only support ipv4 or ipv6 as the inner network protocol, I think the best fix here is to call test_builder() in separate branches for tunneled and non-tunneled tests, and to make supported inner l3 types an explicit attribute of tunnel test cases. # Detected qstat for LSO wire-packets TAP version 13 1..14 ok 1 tso.ipv4 # Testing with mangleid enabled ok 2 tso.vxlan4_ipv4 ok 3 tso.vxlan4_ipv6 # Testing with mangleid enabled ok 4 tso.vxlan_csum4_ipv4 ok 5 tso.vxlan_csum4_ipv6 # Testing with mangleid enabled ok 6 tso.gre4_ipv4 ok 7 tso.gre4_ipv6 ok 8 tso.ipv6 # Testing with mangleid enabled ok 9 tso.vxlan6_ipv4 ok 10 tso.vxlan6_ipv6 # Testing with mangleid enabled ok 11 tso.vxlan_csum6_ipv4 ok 12 tso.vxlan_csum6_ipv6 # Testing with mangleid enabled ok 13 tso.gre6_ipv4 ok 14 tso.gre6_ipv6 # Totals: pass:14 fail:0 xfail:0 xpass:0 skip:0 error:0 Fixes: 0d0f4174f6c8 ("selftests: drv-net: add a simple TSO test") Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com> Link: https://patch.msgid.link/20250723184740.4075410-4-daniel.zahka@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | selftests: drv-net: tso: fix vxlan tunnel flags to get correct gso_type | Daniel Zahka
When vxlan is used with ipv6 as the outer network header, the correct ip link parameters for achieving the SKB_GSO_UDP_TUNNEL gso type are "udp6zerocsumtx udp6zerocsumrx". Otherwise the gso type will be SKB_GSO_UDP_TUNNEL_CSUM.

This bug was the reason for the second of the three possible invocations of run_one_stream(), so that invocation can be deleted as well. We only need to test with the feature off and on.

Fixes: 0d0f4174f6c8 ("selftests: drv-net: add a simple TSO test")
Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com>
Link: https://patch.msgid.link/20250723184740.4075410-3-daniel.zahka@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | selftests: drv-net: tso: enable test cases based on hw_features | Daniel Zahka
tso.py uses the active features at the time of test execution as the set of available gso features to test. This means if a gso feature is supported but toggled off at test start, the test will be skipped with a "Device does not support {feature}" message. Instead, we can enumerate the set of toggleable features by capturing the driver's hw_features bitmap. To avoid configuration side-effects from running the test, we also snapshot the wanted_features flag set before making any feature changes, and then attempt to restore the same set of wanted_features before test exit. Fixes: 0d0f4174f6c8 ("selftests: drv-net: add a simple TSO test") Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com> Link: https://patch.msgid.link/20250723184740.4075410-2-daniel.zahka@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | Merge branch 'selftests-drv-net-fix-and-improve-command-requirement-checking' | Jakub Kicinski
Gal Pressman says: ==================== selftests: drv-net: Fix and improve command requirement checking This series fixes remote command checking and cleans up command requirement calls across tests. The first patch fixes require_cmd() incorrectly checking commands locally even when remote=True was specified due to a missing host parameter. The second patch makes require_cmd() usage explicit about local/remote requirements, avoiding unnecessary test failures and consolidating duplicate calls. ==================== Link: https://patch.msgid.link/20250723135454.649342-1-gal@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | selftests: drv-net: Make command requirements explicit | Gal Pressman
Make require_cmd() calls explicit about whether commands are needed locally, remotely, or both. Since require_cmd() defaults to local=True, tests should explicitly set local=False when commands are only needed remotely. - socat: Set local=False since it's only needed on remote hosts. - iperf3: Use single call with both local=True and remote=True since it's needed on both hosts. This avoids unnecessary test failures when commands are missing locally but available remotely where actually needed, and consolidates a duplicate require_cmd() call into single call that checks both hosts. Fixes: 0d0f4174f6c8 ("selftests: drv-net: add a simple TSO test") Fixes: f1e68a1a4a40 ("selftests: drv-net: add require_XYZ() helpers for validating env") Fixes: c76bab22e920 ("selftests: drv-net: rss_input_xfrm: Check test prerequisites before running") Reviewed-by: Nimrod Oren <noren@nvidia.com> Signed-off-by: Gal Pressman <gal@nvidia.com> Link: https://patch.msgid.link/20250723135454.649342-3-gal@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | selftests: drv-net: Fix remote command checking in require_cmd() | Gal Pressman
The require_cmd() method was checking for command availability locally even when remote=True was specified, due to a missing host parameter. Fix by passing host=self.remote when checking remote command availability, ensuring commands are verified on the correct host. Fixes: f1e68a1a4a40 ("selftests: drv-net: add require_XYZ() helpers for validating env") Reviewed-by: Nimrod Oren <noren@nvidia.com> Signed-off-by: Gal Pressman <gal@nvidia.com> Link: https://patch.msgid.link/20250723135454.649342-2-gal@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | net/mlx5: Fix build -Wframe-larger-than warnings | Zhu Yanjun
When building, the following warnings appear:

    pci_irq.c: In function ‘mlx5_ctrl_irq_request’:
    pci_irq.c:494:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
    pci_irq.c: In function ‘mlx5_irq_request_vector’:
    pci_irq.c:561:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
    eq.c: In function ‘comp_irq_request_sf’:
    eq.c:897:1: warning: the frame size of 1080 bytes is larger than 1024 bytes [-Wframe-larger-than=]
    irq_affinity.c: In function ‘irq_pool_request_irq’:
    irq_affinity.c:74:1: warning: the frame size of 1048 bytes is larger than 1024 bytes [-Wframe-larger-than=]

These warnings indicate that the stack frame size exceeds 1024 bytes in these functions. To resolve this, instead of allocating large buffers on the stack, use kvzalloc to allocate the memory dynamically on the heap. This reduces stack usage and eliminates the frame size warnings.

Acked-by: Junxian Huang <huangjunxian6@hisilicon.com>
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20250722212023.244296-1-yanjun.zhu@linux.dev
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
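The general shape of the fix, sketched from the description above (the variable and error-path details here are illustrative, not the exact mlx5 diff):

    struct irq_affinity_desc *af_desc;

    /* was: struct irq_affinity_desc af_desc; on the stack (~1 KiB) */
    af_desc = kvzalloc(sizeof(*af_desc), GFP_KERNEL);
    if (!af_desc)
        return ERR_PTR(-ENOMEM);

    /* ... build the affinity mask and request the IRQ as before ... */

    kvfree(af_desc);

kvzalloc() tries a regular kmalloc first and falls back to vmalloc for larger sizes, so the common case stays a cheap slab allocation while the stack frame shrinks below the warning threshold.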
11 days | Merge branch 'use-enum-to-represent-the-napi-threaded-state' | Jakub Kicinski
Samiullah Khawaja says: ==================== Use enum to represent the NAPI threaded state Instead of using 0/1 to represent the NAPI threaded states use enum (disabled/enabled) to represent the NAPI threaded states. This patch series is a subset of patches from the following patch series: https://lore.kernel.org/20250718232052.1266188-1-skhawaja@google.com The first 3 patches are being sent separately as per the feedback to replace the usage of 0/1 as NAPI threaded states with enum. See: https://lore.kernel.org/20250721164856.1d2208e4@kernel.org ==================== Link: https://patch.msgid.link/20250723013031.2911384-1-skhawaja@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | net: define an enum for the napi threaded state | Samiullah Khawaja
Instead of using '0' and '1' for napi threaded state use an enum with 'disabled' and 'enabled' states.

Tested:
    ./tools/testing/selftests/net/nl_netdev.py
    TAP version 13
    1..7
    ok 1 nl_netdev.empty_check
    ok 2 nl_netdev.lo_check
    ok 3 nl_netdev.page_pool_check
    ok 4 nl_netdev.napi_list_check
    ok 5 nl_netdev.dev_set_threaded
    ok 6 nl_netdev.napi_set_threaded
    ok 7 nl_netdev.nsim_rxq_reset_down
    # Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0

Signed-off-by: Samiullah Khawaja <skhawaja@google.com>
Link: https://patch.msgid.link/20250723013031.2911384-4-skhawaja@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
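A sketch of the enum shape this introduces, with names assumed from the 'disabled'/'enabled' wording above (see include/uapi/linux/netdev.h in the series for the authoritative definition):

    enum netdev_napi_threaded {
        NETDEV_NAPI_THREADED_DISABLED,
        NETDEV_NAPI_THREADED_ENABLED,
    };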
11 days | net: Use netif_threaded_enable instead of netif_set_threaded in drivers | Samiullah Khawaja
Prepare for adding an enum type for NAPI threaded states by adding netif_threaded_enable API. De-export the existing netif_set_threaded API and only use it internally. Update existing drivers to use netif_threaded_enable instead of the de-exported netif_set_threaded. Note that dev_set_threaded used by mt76 debugfs file is unchanged. Signed-off-by: Samiullah Khawaja <skhawaja@google.com> Link: https://patch.msgid.link/20250723013031.2911384-3-skhawaja@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | net: Create separate gro_flush_normal function | Samiullah Khawaja
Move the multiple copies of the same code snippet doing `gro_flush` and `gro_normal_list` into a separate helper function.

Signed-off-by: Samiullah Khawaja <skhawaja@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20250723013031.2911384-2-skhawaja@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
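A plausible shape for the consolidated helper, based purely on the description above (the exact name and signature in include/net/gro.h may differ):

    static inline void gro_flush_normal(struct gro_node *gro, bool flush_old)
    {
        /* flush held GRO packets, then deliver the normal list */
        gro_flush(gro, flush_old);
        gro_normal_list(gro);
    }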
11 days | tracing: Call trace_ftrace_test_filter() for the event | Steven Rostedt
The trace event filter bootup self test tests a bunch of filter logic against the ftrace_test_filter event, but does not actually call the event. Work is being done to cause a warning if an event is defined but not used. To quiet the warning call the trace event under an if statement where it is disabled so it doesn't get optimized out. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nicolas Schier <nicolas.schier@linux.dev> Cc: Nick Desaulniers <nick.desaulniers+lkml@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/20250723194212.274458858@kernel.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
11 days | Merge tag 'for-net-next-2025-07-23' of ... | Jakub Kicinski
git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next Luiz Augusto von Dentz says: ==================== bluetooth-next pull request for net-next: core: - hci_sync: fix double free in 'hci_discovery_filter_clear()' - hci_event: Mask data status from LE ext adv reports - hci_devcd_dump: fix out-of-bounds via dev_coredumpv - ISO: add socket option to report packet seqnum via CMSG - hci_event: Add support for handling LE BIG Sync Lost event - ISO: Support SCM_TIMESTAMPING for ISO TS - hci_core: Add PA_LINK to distinguish BIG sync and PA sync connections - hci_sock: Reset cookie to zero in hci_sock_free_cookie() drivers: - btusb: Add new VID/PID 0489/e14e for MT7925 - btusb: Add a new VID/PID 2c7c/7009 for MT7925 - btusb: Add RTL8852BE device 0x13d3:0x3618 - btusb: Add support for variant of RTL8851BE (USB ID 13d3:3601) - btusb: Add USB ID 3625:010b for TP-LINK Archer TX10UB Nano - btusb: QCA: Support downloading custom-made firmwares - btusb: Add one more ID 0x28de:0x1401 for Qualcomm WCN6855 - nxp: add support for supply and reset - btnxpuart: Add support for 4M baudrate - btnxpuart: Correct the Independent Reset handling after FW dump - btnxpuart: Add uevents for FW dump and FW download complete - btintel: Define a macro for Intel Reset vendor command - btintel_pcie: Support Function level reset - btintel_pcie: Add support for device 0x4d76 - btintel_pcie: Make driver wait for alive interrupt - btintel_pcie: Fix Alive Context State Handling - hci_qca: Enable ISO data packet RX * tag 'for-net-next-2025-07-23' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next: (42 commits) Bluetooth: Add PA_LINK to distinguish BIG sync and PA sync connections Bluetooth: hci_event: Mask data status from LE ext adv reports Bluetooth: btintel_pcie: Fix Alive Context State Handling Bluetooth: btintel_pcie: Make driver wait for alive interrupt Bluetooth: hci_devcd_dump: fix out-of-bounds via dev_coredumpv Bluetooth: hci_sync: fix double free in 'hci_discovery_filter_clear()' Bluetooth: btusb: Add one more ID 0x28de:0x1401 for Qualcomm WCN6855 Bluetooth: btusb: Sort WCN6855 device IDs by VID and PID Bluetooth: btusb: QCA: Support downloading custom-made firmwares Bluetooth: btnxpuart: Add uevents for FW dump and FW download complete Bluetooth: btnxpuart: Correct the Independent Reset handling after FW dump Bluetooth: ISO: Support SCM_TIMESTAMPING for ISO TS Bluetooth: ISO: add socket option to report packet seqnum via CMSG Bluetooth: btintel: Define a macro for Intel Reset vendor command Bluetooth: Fix typos in comments Bluetooth: RFCOMM: Fix typos in comments Bluetooth: aosp: Fix typo in comment Bluetooth: hci_bcm4377: Fix typo in comment Bluetooth: btrtl: Fix typo in comment Bluetooth: btmtk: Fix typo in log string ... ==================== Link: https://patch.msgid.link/20250723190233.166823-1-luiz.dentz@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | Merge tag 'drm-xe-fixes-2025-07-24' of https://gitlab.freedesktop.org/drm/xe/kernel into drm-fixes | Dave Airlie
Driver Changes:
- Fix build without debugfs (Lucas)

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
Link: https://lore.kernel.org/r/aIKWC2RPlbRxZc5o@fedora
11 days | Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | Jakub Kicinski
Martin KaFai Lau says:

====================
pull-request: bpf-next 2025-07-24

We've added 3 non-merge commits during the last 3 day(s) which contain a total of 4 files changed, 40 insertions(+), 15 deletions(-).

The main changes are:

1) Improved verifier error message for incorrect narrower load from pointer field in ctx, from Paul Chaignon.

2) Disabled migration in nf_hook_run_bpf to address a syzbot report, from Kuniyuki Iwashima.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next:
  selftests/bpf: Test invalid narrower ctx load
  bpf: Reject narrower access to pointer ctx fields
  bpf: Disable migration in nf_hook_run_bpf().
====================

Link: https://patch.msgid.link/20250724173306.3578483-1-martin.lau@linux.dev
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | sprintf.h requires stdarg.h | Stephen Rothwell
    In file included from drivers/crypto/intel/qat/qat_common/adf_pm_dbgfs_utils.c:4:
    include/linux/sprintf.h:11:54: error: unknown type name 'va_list'
       11 | __printf(2, 0) int vsprintf(char *buf, const char *, va_list);
          |                                                      ^~~~~~~
    include/linux/sprintf.h:1:1: note: 'va_list' is defined in header '<stdarg.h>'; this is probably fixable by adding '#include <stdarg.h>'

Link: https://lkml.kernel.org/r/20250721173754.42865913@canb.auug.org.au
Fixes: 39ced19b9e60 ("lib/vsprintf: split out sprintf() and friends")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Andriy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
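One plausible fix, as the compiler note itself suggests, is a single include so the header is self-contained; a sketch of what include/linux/sprintf.h needs (the kernel provides its own <linux/stdarg.h> wrapper rather than using the libc header):

    #include <linux/stdarg.h>   /* defines va_list for the prototypes below */

    __printf(2, 0) int vsprintf(char *buf, const char *fmt, va_list args);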
11 days | resource: fix false warning in __request_region() | Akinobu Mita
A warning is raised when __request_region() detects a conflict with a resource whose resource.desc is IORES_DESC_DEVICE_PRIVATE_MEMORY. But this warning is only valid for iomem_resources. The hmem device resource uses resource.desc as the numa node id, which can cause spurious warnings. This warning appeared on a machine with multiple cxl memory expanders. One of the NUMA node id is 6, which is the same as the value of IORES_DESC_DEVICE_PRIVATE_MEMORY. In this environment it was just a spurious warning, but when I saw the warning I suspected a real problem so it's better to fix it. This change fixes this by restricting the warning to only iomem_resource. This also adds a missing new line to the warning message. Link: https://lkml.kernel.org/r/20250719112604.25500-1-akinobu.mita@gmail.com Fixes: 7dab174e2e27 ("dax/hmem: Move hmem device registration to dax_hmem.ko") Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
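An illustrative sketch of the narrowed check described above (paraphrased from the description, not quoted from kernel/resource.c; the exact message text and surrounding context may differ): only iomem_resource uses resource.desc as an IORES_DESC_* value, so only warn there, and terminate the message with the previously missing newline.

    if (parent == &iomem_resource &&
        conflict->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY)
        pr_warn("Unaddressable device %s %pR conflicts with %pR\n",
                conflict->name, conflict, res);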
11 days | mm/damon/core: commit damos_quota_goal->nid | SeongJae Park
DAMOS quota goal uses 'nid' field when the metric is DAMOS_QUOTA_NODE_MEM_{USED,FREE}_BP. But the goal commit function is not updating the goal's nid field. Fix it. Link: https://lkml.kernel.org/r/20250719181932.72944-1-sj@kernel.org Fixes: 0e1c773b501f ("mm/damon/core: introduce damos quota goal metrics for memory node utilization") [6.16.x] Signed-off-by: SeongJae Park <sj@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
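An illustrative sketch of the nature of the fix (field names follow the description above; the surrounding goal-commit code is paraphrased, not quoted):

    /* when committing a quota goal, also carry over the node id used by
     * the DAMOS_QUOTA_NODE_MEM_{USED,FREE}_BP metrics */
    dst->metric = src->metric;
    dst->target_value = src->target_value;
    dst->nid = src->nid;    /* previously left stale */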
11 days | Merge tag 'drm-intel-fixes-2025-07-24' of https://gitlab.freedesktop.org/drm/i915/kernel into drm-fixes | Dave Airlie
- Fix DP 2.7 Gbps DP_LINK_BW value on g4x (Ville)
- Fix return value on intel_atomic_commit_fence_wait (Aakash)

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://lore.kernel.org/r/aIJE9F-PcCe35PFb@intel.com
11 days | Merge branch 'tools-ynl-gen-print-setters-for-multi-val-attrs' | Jakub Kicinski
Jakub Kicinski says: ==================== tools: ynl-gen: print setters for multi-val attrs ncdevmem seems to manually prepare the queue attributes. This is not ideal, YNL should be providing helpers for this. Make YNL output allocation and setter helpers for multi-val attrs. v1: https://lore.kernel.org/20250722161927.3489203-1-kuba@kernel.org ==================== Link: https://patch.msgid.link/20250723171046.4027470-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | selftests: drv-net: devmem: use new mattr ynl helpers | Jakub Kicinski
Use the just-added YNL helpers instead of manually setting "_present" bits in the queue attrs. Compile tested only. Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Acked-by: Mina Almasry <almasrymina@google.com> Link: https://patch.msgid.link/20250723171046.4027470-6-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | tools: ynl-gen: print setters for multi-val attrs | Jakub Kicinski
For basic types we "flatten" setters. If a request "a" has a simple nest "b" with value "val" we print helpers like:

    req_set_a_b(struct a *req, int val)
    {
        req->_present.a = 1;
        req->b._present.val = 1;
        req->b.val = ...
    }

This is not possible for multi-attr because they have to be allocated dynamically by the user. Print "object level" setters so that the user preparing the object doesn't have to futz with the presence bits and other YNL internals.

Add the ability to pass in the variable name to generated setters. Using "req" here doesn't feel right; while the attr is part of a request it's not the request itself, so it seems cleaner to call it "obj".

Example:

    static inline void
    netdev_queue_id_set_id(struct netdev_queue_id *obj, __u32 id)
    {
        obj->_present.id = 1;
        obj->id = id;
    }

Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250723171046.4027470-5-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
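As a usage sketch (not part of the patch): combined with the alloc helper from the companion patch below, a caller such as ncdevmem can build a queue array without touching YNL internals. The set_type helper and the NETDEV_QUEUE_TYPE_RX constant are assumed to come from the same generated netdev-user.h and uAPI headers:

    struct netdev_queue_id *queues;

    queues = netdev_queue_id_alloc(2);      /* from the alloc-helper patch below */
    if (!queues)
        return -ENOMEM;
    netdev_queue_id_set_id(&queues[0], 0);
    netdev_queue_id_set_type(&queues[0], NETDEV_QUEUE_TYPE_RX);
    netdev_queue_id_set_id(&queues[1], 1);
    netdev_queue_id_set_type(&queues[1], NETDEV_QUEUE_TYPE_RX);
    /* hand 'queues' off to the request, e.g. the bind-rx queue list */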
11 days | tools: ynl-gen: print alloc helper for multi-val attrs | Jakub Kicinski
In general YNL provides allocation and free helpers for types. For pure nested structs which are used as multi-attr (and therefore have to be allocated dynamically) we already print a free helper as it's needed by free of the containing struct. Add printing of the alloc helper for consistency. The helper takes the number of entries to allocate as an argument, e.g.:

    static inline struct netdev_queue_id *
    netdev_queue_id_alloc(unsigned int n)
    {
        return calloc(n, sizeof(struct netdev_queue_id));
    }

Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250723171046.4027470-4-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | tools: ynl-gen: move free printing to the print_type_full() helper | Jakub Kicinski
Just to avoid making the main function even more enormous, before adding more things to print move the free printing to a helper which already prints the type. Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250723171046.4027470-3-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | tools: ynl-gen: don't add suffix for pure types | Jakub Kicinski
Don't add _req to helper names for pure types. We don't currently print those so it makes no difference to existing codegen. Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250723171046.4027470-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | Merge tag 'wireless-next-2025-07-24' of ... | Jakub Kicinski
https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next Johannes Berg says: ==================== Another wireless update: - rtw89: - STA+P2P concurrency - support for USB devices RTL8851BU/RTL8852BU - ath9k: OF support - ath12k: - more EHT/Wi-Fi 7 features - encapsulation/decapsulation offload - iwlwifi: some FIPS interoperability - brcm80211: support SDIO 43751 device - rt2x00: better DT/OF support - cfg80211/mac80211: - improved S1G support - beacon monitor for MLO * tag 'wireless-next-2025-07-24' of https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next: (199 commits) ssb: use new GPIO line value setter callbacks for the second GPIO chip wifi: Fix typos wifi: brcmsmac: Use str_true_false() helper wifi: brcmfmac: fix EXTSAE WPA3 connection failure due to AUTH TX failure wifi: brcm80211: Remove yet more unused functions wifi: brcm80211: Remove more unused functions wifi: brcm80211: Remove unused functions wifi: iwlwifi: Revert "wifi: iwlwifi: remove support of several iwl_ppag_table_cmd versions" wifi: iwlwifi: check validity of the FW API range wifi: iwlwifi: don't export symbols that we shouldn't wifi: iwlwifi: mld: use spec link id and not FW link id wifi: iwlwifi: mld: decode EOF bit for AMPDUs wifi: iwlwifi: Remove support for rx OMI bandwidth reduction wifi: iwlwifi: stop supporting iwl_omi_send_status_notif ver 1 wifi: iwlwifi: remove SC2F firmware support wifi: iwlwifi: mvm: Remove NAN support wifi: iwlwifi: mld: avoid outdated reorder buffer head_sn wifi: iwlwifi: mvm: avoid outdated reorder buffer head_sn wifi: iwlwifi: disable certain features for fips_enabled wifi: iwlwifi: mld: support channel survey collection for ACS scans ... ==================== Link: https://patch.msgid.link/20250724100349.21564-3-johannes@sipsolutions.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
11 days | x86: Handle KCOV __init vs inline mismatches | Kees Cook
GCC appears to have kind of fragile inlining heuristics, in the sense that it can change whether or not it inlines something based on optimizations. It looks like the kcov instrumentation being added (or in this case, removed) from a function changes the optimization results, and some functions marked "inline" are _not_ inlined. In that case, we end up with __init code calling a function not marked __init, and we get the build warnings I'm trying to eliminate in the coming patch that adds __no_sanitize_coverage to __init functions: WARNING: modpost: vmlinux: section mismatch in reference: xbc_exit+0x8 (section: .text.unlikely) -> _xbc_exit (section: .init.text) WARNING: modpost: vmlinux: section mismatch in reference: real_mode_size_needed+0x15 (section: .text.unlikely) -> real_mode_blob_end (section: .init.data) WARNING: modpost: vmlinux: section mismatch in reference: __set_percpu_decrypted+0x16 (section: .text.unlikely) -> early_set_memory_decrypted (section: .init.text) WARNING: modpost: vmlinux: section mismatch in reference: memblock_alloc_from+0x26 (section: .text.unlikely) -> memblock_alloc_try_nid (section: .init.text) WARNING: modpost: vmlinux: section mismatch in reference: acpi_arch_set_root_pointer+0xc (section: .text.unlikely) -> x86_init (section: .init.data) WARNING: modpost: vmlinux: section mismatch in reference: acpi_arch_get_root_pointer+0x8 (section: .text.unlikely) -> x86_init (section: .init.data) WARNING: modpost: vmlinux: section mismatch in reference: efi_config_table_is_usable+0x16 (section: .text.unlikely) -> xen_efi_config_table_is_usable (section: .init.text) This problem is somewhat fragile (though using either __always_inline or __init will deterministically solve it), but we've tripped over this before with GCC and the solution has usually been to just use __always_inline and move on. For x86 this means forcing several functions to be inline with __always_inline. Link: https://lore.kernel.org/r/20250724055029.3623499-2-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org>
11 days | arm64: Handle KCOV __init vs inline mismatches | Kees Cook
GCC appears to have kind of fragile inlining heuristics, in the sense that it can change whether or not it inlines something based on optimizations. It looks like the kcov instrumentation being added (or in this case, removed) from a function changes the optimization results, and some functions marked "inline" are _not_ inlined. In that case, we end up with __init code calling a function not marked __init, and we get the build warnings I'm trying to eliminate in the coming patch that adds __no_sanitize_coverage to __init functions: WARNING: modpost: vmlinux: section mismatch in reference: acpi_get_enable_method+0x1c (section: .text.unlikely) -> acpi_psci_present (section: .init.text) This problem is somewhat fragile (though using either __always_inline or __init will deterministically solve it), but we've tripped over this before with GCC and the solution has usually been to just use __always_inline and move on. For arm64 this requires forcing one ACPI function to be inlined with __always_inline. Link: https://lore.kernel.org/r/20250724055029.3623499-1-kees@kernel.org Signed-off-by: Kees Cook <kees@kernel.org>
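A sketch of the arm64 change this describes, using the helper named in the warning above (the function body is paraphrased from mainline and may not match the patch hunk exactly):

    /* was: static inline bool acpi_psci_present(void) */
    static __always_inline bool acpi_psci_present(void)
    {
        return acpi_gbl_FADT.arm_boot_flags & ACPI_FADT_PSCI_COMPLIANT;
    }

With __always_inline the compiler can no longer emit an out-of-line copy in .text.unlikely, so the caller in .init.text never references a non-__init symbol and modpost stays quiet.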
11 days | Merge tag 'pci-v6.16-fixes-4' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci | Linus Torvalds
Pull pci fix from Bjorn Helgaas:

 - Create pwrctrl devices only when we need them, i.e., when
   CONFIG_PCI_PWRCTRL is enabled. This allows brcmstb to work around a
   pwrctrl regression by disabling CONFIG_PCI_PWRCTRL (Manivannan
   Sadhasivam)

* tag 'pci-v6.16-fixes-4' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci:
  PCI/pwrctrl: Create pwrctrl devices only when CONFIG_PCI_PWRCTRL is enabled
11 days | clk: tegra: periph: Make tegra_clk_periph_ops static | Pei Xiao
Reduce symbol visibility by converting tegra_clk_periph_ops to static. Removed the extern declaration from clk.h as the symbol is now locally scoped to clk-periph.c. Signed-off-by: Pei Xiao <xiaopei01@kylinos.cn> Link: https://lore.kernel.org/r/bda59ad46afae6e7484edf8e2f7bf23ceafe51e9.1752046270.git.xiaopei01@kylinos.cn Acked-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org>