* for-next/kpti:
arm64: correct the effect of mitigations off on kpti
arm64: entry: simplify trampoline data page
arm64: mm: install KPTI nG mappings with MMU enabled
arm64: kpti-ng: simplify page table traversal logic
|
|
* for-next/kcsan:
arm64: kcsan: Support detecting more missing memory barriers
asm-generic: Add memory barrier dma_mb()
|
|
* for-next/irqflags-nmi:
arm64: select TRACE_IRQFLAGS_NMI_SUPPORT
arch: make TRACE_IRQFLAGS_NMI_SUPPORT generic
|
|
* for-next/ioremap:
arm64: Add HAVE_IOREMAP_PROT support
arm64: mm: Convert to GENERIC_IOREMAP
mm: ioremap: Add ioremap/iounmap_allowed()
mm: ioremap: Setup phys_addr of struct vm_struct
mm: ioremap: Use more sensible name in ioremap_prot()
ARM: mm: kill unused runtime hook arch_iounmap()
|
|
* for-next/extable:
arm64: extable: cleanup redundant extable type EX_TYPE_FIXUP
arm64: extable: move _cond_extable to _cond_uaccess_extable
arm64: extable: make uaccess helper use extable type EX_TYPE_UACCESS_ERR_ZERO
arm64: asm-extable: add asm uaccess helpers
arm64: asm-extable: move data fields
arm64: extable: add new extable type EX_TYPE_KACCESS_ERR_ZERO support
|
|
* for-next/errata:
arm64: errata: Remove AES hwcap for COMPAT tasks
arm64: errata: Add Cortex-A510 to the repeat tlbi list
|
|
* for-next/docs:
Documentation/arm64: update memory layout table.
|
|
* for-next/cpuidle:
arm64: cpuidle: remove generic cpuidle support
cpuidle: cpuidle-arm: remove arm64 support
|
|
Cortex-A57 and Cortex-A72 have an erratum where an interrupt that
occurs between a pair of AES instructions in aarch32 mode may corrupt
the ELR. The task will subsequently produce the wrong AES result.
The AES instructions are part of the cryptographic extensions, which are
optional. User-space software will detect the support for these
instructions from the hwcaps. If the platform doesn't support these
instructions a software implementation should be used.
Remove the hwcap bits on affected parts to indicate user-space should
not use the AES instructions.
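For illustration, a minimal user-space sketch of this detect-then-fall-back
pattern on 32-bit Arm (the aes_encrypt_hw()/aes_encrypt_soft() helpers are
hypothetical; getauxval() and HWCAP2_AES are the standard interfaces):
  #include <stdint.h>
  #include <stdbool.h>
  #include <sys/auxv.h>
  #include <asm/hwcap.h>           /* HWCAP2_AES on 32-bit Arm */

  /* Hypothetical implementations provided elsewhere. */
  extern void aes_encrypt_hw(const uint8_t key[16], const uint8_t in[16],
                             uint8_t out[16]);
  extern void aes_encrypt_soft(const uint8_t key[16], const uint8_t in[16],
                               uint8_t out[16]);

  static bool have_aes_insns(void)
  {
          /* On affected parts the kernel no longer advertises this bit. */
          return getauxval(AT_HWCAP2) & HWCAP2_AES;
  }

  void aes_encrypt(const uint8_t key[16], const uint8_t in[16],
                   uint8_t out[16])
  {
          if (have_aes_insns())
                  aes_encrypt_hw(key, in, out);
          else
                  aes_encrypt_soft(key, in, out);
  }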
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20220714161523.279570-3-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Cortex-A510 is affected by an erratum where in rare circumstances the
CPUs may not handle a race between a break-before-make sequence on one
CPU, and another CPU accessing the same page. This could allow a store
to a page that has been unmapped.
Work around this by adding the affected CPUs to the list that needs
TLB sequences to be done twice.
Signed-off-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20220704155732.21216-1-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
If KASLR is enabled, then kpti will be forced on even if mitigations are
switched off, so we need to adjust the description of this parameter.
Signed-off-by: Liu Song <liusong@linux.alibaba.com>
Link: https://lore.kernel.org/r/1656033648-84181-1-git-send-email-liusong@linux.alibaba.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Currently, the extable type EX_TYPE_FIXUP has no users, so we can safely
remove it.
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-7-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Currently, we use _cond_extable for the cache maintenance uaccess helper
caches_clean_inval_user_pou(), so this should be moved over to
EX_TYPE_UACCESS_ERR_ZERO, and _cond_extable renamed to
_cond_uaccess_extable for clarity.
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-6-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Currently, the extable type used by __arch_copy_from/to_user() is
EX_TYPE_FIXUP. It is clearer to use the more meaningful
EX_TYPE_UACCESS_* instead.
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-5-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
In subsequent patches we want to explicitly annotate uaccess fixups in
assembly files.
We have existing helpers for this for inline assembly, but due to
differing stringification requirements it's not possible to have a
single definition that we can use for both inline asm and plain asm
files. So as with other cases (e.g. gpr-num.h), we must provide
separate helpers for plain asm and inline asm.
So that we can do so, this patch adds helpers to define
EX_TYPE_UACCESS_ERR_ZERO fixups in plain assembly. These correspond 1-1
with the inline assembly versions except for the absence of
stringification. No plain assembly helpers are added for
EX_TYPE_LOAD_UNALIGNED_ZEROPAD fixups as these only exist for a single C
function.
For copy_{to,from}_user() we'll need fixups with regs and err, so I've
added _ASM_EXTABLE_UACCESS(insn, fixup), where both the error and zero
registers are WZR.
For clarity, the existing `_asm_extable` assembly macro is now defined
in terms of the _ASM_EXTABLE() CPP macro, making the CPP macros
canonical in all cases.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-4-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
In subsequent patches we'll need to fill in extable data fields in
regular assembly files. In preparation for this, move the definitions of
the extable data fields earlier in asm-extable.h so that they are
defined for both assembly and C files.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-3-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Currently, the extable type EX_TYPE_UACCESS_ERR_ZERO is used by
__get/put_kernel_nofault(), but those helpers are not uaccess helpers, so
add a new extable type EX_TYPE_KACCESS_ERR_ZERO which can be used by
__get/put_kernel_nofault() instead.
This also prepares for distinguishing the two types in the machine check
safe handling.
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-2-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
With ioremap_prot() now provided by the generic ioremap code, also move
pte_pgprot() from hugetlbpage.c into pgtable.h so that arm64 can select
HAVE_IOREMAP_PROT. This enables the generic_access_phys() code, which is
useful for debugging, e.g. with gdb.
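For illustration, a driver that mmap()s MMIO could point vm_ops->access at
generic_access_phys() so that ptrace()/gdb can read the mapping; a minimal
sketch (the driver name is hypothetical):
  #include <linux/mm.h>

  /* Hypothetical driver: let ptrace()/gdb access its MMIO mappings. */
  static const struct vm_operations_struct foo_mmio_vm_ops = {
  #ifdef CONFIG_HAVE_IOREMAP_PROT
          .access = generic_access_phys,
  #endif
  };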
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220607125027.44946-7-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Add a hook for arm64's special handling in ioremap(), then convert
ioremap_wc/np/cache to use ioremap_prot() from GENERIC_IOREMAP, update
the copyright, and drop the now-unused includes.
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220607125027.44946-6-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Add special hooks for the architecture to verify the addr, size or prot
when doing ioremap() or iounmap(), which makes the generic ioremap more
useful.
ioremap_allowed() returns a bool:
- true means continue to remap
- false means skip the remap and return directly
iounmap_allowed() returns a bool:
- true means continue to vunmap
- false means skip the vunmap and return directly
Meanwhile, only vunmap the address when it lies in the vmalloc area, as
the generic ioremap only returns vmalloc addresses.
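A minimal sketch of the arch-side usage under the interface described
above (arch_addr_is_forbidden() is a hypothetical stand-in for whatever
check the architecture wants to perform):
  #include <linux/types.h>

  #define ioremap_allowed ioremap_allowed
  static inline bool ioremap_allowed(phys_addr_t phys_addr, size_t size,
                                     unsigned long prot)
  {
          /* Hypothetical arch-specific policy check. */
          if (arch_addr_is_forbidden(phys_addr, size))
                  return false;  /* skip the remap; ioremap() returns NULL */

          return true;           /* continue with the generic remap */
  }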
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Baoquan He <bhe@redhat.com>
Link: https://lore.kernel.org/r/20220607125027.44946-5-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Show physical address of each ioremap in /proc/vmallocinfo.
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220607125027.44946-4-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Use the more meaningful and sensible name phys_addr instead of addr in
ioremap_prot().
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220610092255.32445-1-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Since the following commits,
v5.4:  commit 59d3ae9a5bf6 ("ARM: remove Intel iop33x and iop13xx support")
v5.11: commit 3e3f354bc383 ("ARM: remove ebsa110 platform")
the runtime hook arch_iounmap() on ARM has been useless, so kill
arch_iounmap() and __iounmap().
Cc: Russell King <linux@armlinux.org.uk>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/20220607125027.44946-2-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Get rid of some clunky open-coded arithmetic on section addresses, by
emitting the trampoline data variables into a separate, dedicated r/o
data section, and putting it at the next page boundary. This way, we can
access the literals via a single LDR instruction.
While at it, get rid of other, implicit literals, and use ADRP/ADD or
MOVZ/MOVK sequences, as appropriate. Note that the latter are only
supported for CONFIG_RELOCATABLE=n (which is usually the case if
CONFIG_RANDOMIZE_BASE=n), so update the CPP conditionals to reflect
this.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220622161010.3845775-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Commit b89ddf4cca43 ("arm64/bpf: Remove 128MB limit for BPF JIT programs")
removed the BPF JIT region from the memory layout of the AArch64
architecture. However, it forgot to update the documentation accordingly.
- Remove the bpf jit region.
- Fix the Start and End addresses of the modules region.
- Fix the Start address of the vmalloc region.
Signed-off-by: Andre Mueller <am@emlix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220621081651.61755-1-am@emlix.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
As "kcsan: Support detecting a subset of missing memory barriers"[1]
introduced KCSAN_STRICT/KCSAN_WEAK_MEMORY which make kcsan detects
more missing memory barrier, but arm64 don't have KCSAN instrumentation
for barriers, so the new selftest test_barrier() and test cases for
memory barrier instrumentation in kcsan_test module will fail, even
panic on selftest.
Let's prefix all barriers with __ on arm64, as asm-generic/barriers.h
defined the final instrumented version of these barriers, which will
fix the above issues.
Note, barrier instrumentation that can be disabled via __no_kcsan with
appropriate compiler-support (and not just with objtool help), see
commit bd3d5bd1a0ad ("kcsan: Support WEAK_MEMORY with Clang where no
objtool support exists"), it adds disable_sanitizer_instrumentation to
__no_kcsan attribute which will remove all sanitizer instrumentation fully
(with Clang 14.0). Meanwhile, GCC does the same thing with no_sanitize.
[1] https://lore.kernel.org/linux-mm/20211130114433.2580590-1-elver@google.com/
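A simplified sketch of the wrapping pattern this relies on (illustrative,
not a verbatim copy of the headers): the architecture supplies the raw
barrier under a __-prefixed name, and asm-generic/barrier.h turns it into
the final, KCSAN-instrumented barrier:
  /* arch/arm64/include/asm/barrier.h style definition: raw barrier */
  #define __dma_rmb()     dsb(oshld)

  /* include/asm-generic/barrier.h style wrapper: instrumented barrier */
  #define dma_rmb()       do { kcsan_rmb(); __dma_rmb(); } while (0)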
Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220523113126.171714-3-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
The memory barrier dma_mb() is introduced by commit a76a37777f2c
("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
which is used to ensure that prior (both reads and writes) accesses
to memory by a CPU are ordered w.r.t. a subsequent MMIO write.
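To make the intended ordering concrete, a minimal driver-side sketch (the
device, descriptor layout and doorbell register are hypothetical): fill a
descriptor in normal memory, then use dma_mb() so those accesses are
ordered before the MMIO doorbell write that hands the descriptor to the
device:
  #include <linux/io.h>
  #include <linux/types.h>
  #include <asm/barrier.h>

  /* Hypothetical in-memory descriptor shared with a DMA-capable device. */
  struct foo_desc {
          u64 addr;
          u32 len;
          u32 valid;
  };

  static void foo_submit(struct foo_desc *desc, void __iomem *doorbell,
                         dma_addr_t buf, u32 len)
  {
          desc->addr  = buf;
          desc->len   = len;
          desc->valid = 1;

          /*
           * Order the descriptor accesses above (reads and writes) before
           * the MMIO write below, so the device never observes a stale or
           * half-written descriptor.
           */
          dma_mb();

          writel_relaxed(1, doorbell);    /* notify the device */
  }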
Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20220523113126.171714-2-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
In cases where we unmap the kernel while running in user space, we rely
on ASIDs to distinguish the minimal trampoline from the full kernel
mapping, and this means we must use non-global attributes for those
mappings, to ensure they are scoped by ASID and will not hit in the TLB
inadvertently.
We only do this when needed, as this is generally more costly in terms
of TLB pressure, and so we boot without these non-global attributes, and
apply them to all existing kernel mappings once all CPUs are up and we
know whether or not the non-global attributes are needed. At this point,
we cannot simply unmap and remap the entire address space, so we have to
update all existing block and page descriptors in place.
Currently, we go through a lot of trouble to perform these updates with
the MMU and caches off, to avoid violating break before make (BBM) rules
imposed by the architecture. Since we make changes to page tables that
are not covered by the ID map, we gain access to those descriptors by
disabling translations altogether. This means that the stores to memory
are issued with device attributes, and require extra care in terms of
coherency, which is costly. We also rely on the ID map to access a
shared flag, which requires the ID map to be executable and writable at
the same time, which is another thing we'd prefer to avoid.
So let's switch to an approach where we replace the kernel mapping with
a minimal mapping of a few pages that can be used for a minimal, ad-hoc
fixmap that we can use to map each page table in turn as we traverse the
hierarchy.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220609174320.4035379-3-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Simplify the KPTI G-to-nG asm helper code by:
- pulling the 'table bit' test into the get/put macros so we can combine
them and incorporate the entire loop;
- moving the 'table bit' test after the update of bit #11 so we no
longer need separate next_xxx and skip_xxx labels;
- redefining the pmd/pud register aliases and the next_pmd/next_pud
labels instead of branching to them if the number of configured page
table levels is less than 3 or 4, respectively.
No functional change intended, except for the fact that we now descend
into a next level table after setting bit #11 on its descriptor but this
should make no difference in practice.
While at it, switch to .L prefixed local labels so they don't clutter up
the symbol tables, kallsyms, etc, and clean up the indentation for
legibility.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220609174320.4035379-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Due to an oversight, on arm64 lockdep IRQ state tracking doesn't work as
intended in NMI context. This demonstrably results in bogus warnings
from lockdep, and in theory could mask a variety of issues.
On arm64, we've consistently tracked IRQ flag state for NMIs (and
saved/restored the state of the interrupted context) since commit:
f0cd5ac1e4c53cb6 ("arm64: entry: fix NMI {user, kernel}->kernel transitions")
That commit fixed most lockdep issues with NMI by virtue of the
save/restore of the lockdep state of the interrupted context. However,
for lockdep IRQ state tracking to consistently take effect in NMI
context it has been necessary to select TRACE_IRQFLAGS_NMI_SUPPORT since
commit:
ed00495333ccc80f ("locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs")
As arm64 does not select TRACE_IRQFLAGS_NMI_SUPPORT, this means that the
lockdep state can be stale in NMI context, and some uses of that state
can consume stale data.
When an NMI is taken arm64 entry code will call arm64_enter_nmi(). This
will enter NMI context via __nmi_enter() before calling
lockdep_hardirqs_off() to inform lockdep that IRQs have been masked.
Where TRACE_IRQFLAGS_NMI_SUPPORT is not selected, lockdep_hardirqs_off()
will not update lockdep state if called in NMI context. Thus if IRQs
were enabled in the original context, lockdep will continue to believe
that IRQs are enabled despite the call to lockdep_hardirqs_off().
However, the lockdep_assert_*() checks do take effect in NMI context,
and will consume the stale lockdep state. If an NMI is taken from a
context which had IRQs enabled, and during the handling of the NMI
something calls lockdep_assert_irqs_disabled(), this will result in a
spurious warning based upon the stale lockdep state.
This can be seen when using perf with GICv3 pseudo-NMIs. Within the perf
NMI handler we may attempt a uaccess to record the userspace callchain,
and if this faults the el1_abort() call in the nested context will call
exit_to_kernel_mode() when returning, which has a
lockdep_assert_irqs_disabled() assertion:
| # ./perf record -a -g sh
| ------------[ cut here ]------------
| WARNING: CPU: 0 PID: 164 at arch/arm64/kernel/entry-common.c:73 exit_to_kernel_mode+0x118/0x1ac
| Modules linked in:
| CPU: 0 PID: 164 Comm: perf Not tainted 5.18.0-rc5 #1
| Hardware name: linux,dummy-virt (DT)
| pstate: 004003c5 (nzcv DAIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : exit_to_kernel_mode+0x118/0x1ac
| lr : el1_abort+0x80/0xbc
| sp : ffff8000080039f0
| pmr_save: 000000f0
| x29: ffff8000080039f0 x28: ffff6831054e4980 x27: ffff683103adb400
| x26: 0000000000000000 x25: 0000000000000001 x24: 0000000000000001
| x23: 00000000804000c5 x22: 00000000000000c0 x21: 0000000000000001
| x20: ffffbd51e635ec44 x19: ffff800008003a60 x18: 0000000000000000
| x17: ffffaadf98d23000 x16: ffff800008004000 x15: 0000ffffd14f25c0
| x14: 0000000000000000 x13: 00000000000018eb x12: 0000000000000040
| x11: 000000000000001e x10: 000000002b820020 x9 : 0000000100110000
| x8 : 000000000045cac0 x7 : 0000ffffd14f25c0 x6 : ffffbd51e639b000
| x5 : 00000000000003e5 x4 : ffffbd51e58543b0 x3 : 0000000000000001
| x2 : ffffaadf98d23000 x1 : ffff6831054e4980 x0 : 0000000100110000
| Call trace:
| exit_to_kernel_mode+0x118/0x1ac
| el1_abort+0x80/0xbc
| el1h_64_sync_handler+0xa4/0xd0
| el1h_64_sync+0x74/0x78
| __arch_copy_from_user+0xa4/0x230
| get_perf_callchain+0x134/0x1e4
| perf_callchain+0x7c/0xa0
| perf_prepare_sample+0x414/0x660
| perf_event_output_forward+0x80/0x180
| __perf_event_overflow+0x70/0x13c
| perf_event_overflow+0x1c/0x30
| armv8pmu_handle_irq+0xe8/0x160
| armpmu_dispatch_irq+0x2c/0x70
| handle_percpu_devid_fasteoi_nmi+0x7c/0xbc
| generic_handle_domain_nmi+0x3c/0x60
| gic_handle_irq+0x1dc/0x310
| call_on_irq_stack+0x2c/0x54
| do_interrupt_handler+0x80/0x94
| el1_interrupt+0xb0/0xe4
| el1h_64_irq_handler+0x18/0x24
| el1h_64_irq+0x74/0x78
| lockdep_hardirqs_off+0x50/0x120
| trace_hardirqs_off+0x38/0x214
| _raw_spin_lock_irq+0x98/0xa0
| pipe_read+0x1f8/0x404
| new_sync_read+0x140/0x150
| vfs_read+0x190/0x1dc
| ksys_read+0xdc/0xfc
| __arm64_sys_read+0x20/0x30
| invoke_syscall+0x48/0x114
| el0_svc_common.constprop.0+0x158/0x17c
| do_el0_svc+0x28/0x90
| el0_svc+0x60/0x150
| el0t_64_sync_handler+0xa4/0x130
| el0t_64_sync+0x19c/0x1a0
| irq event stamp: 483
| hardirqs last enabled at (483): [<ffffbd51e636aa24>] _raw_spin_unlock_irqrestore+0xa4/0xb0
| hardirqs last disabled at (482): [<ffffbd51e636acd0>] _raw_spin_lock_irqsave+0xb0/0xb4
| softirqs last enabled at (468): [<ffffbd51e5216f58>] put_cpu_fpsimd_context+0x28/0x70
| softirqs last disabled at (466): [<ffffbd51e5216ed4>] get_cpu_fpsimd_context+0x0/0x5c
| ---[ end trace 0000000000000000 ]---
Note that as lockdep_assert_irqs_disabled() uses WARN_ON_ONCE(), and
this uses a BRK, the warning is logged with the real PSTATE at the time
of the warning, which clearly has DAIF.I set, meaning IRQs (and
pseudo-NMIs) were definitely masked and the warning is spurious.
Fix this by selecting TRACE_IRQFLAGS_NMI_SUPPORT such that the existing
entry tracking takes effect, as we had originally intended when the
arm64 entry code was fixed for transitions to/from NMI.
Arguably the lockdep_assert_*() functions should have the same NMI
checks as the rest of the code to prevent spurious warnings when
TRACE_IRQFLAGS_NMI_SUPPORT is not selected, but the real fix for any
architecture is to explicitly handle the transitions to/from NMI in the
entry code.
Fixes: f0cd5ac1e4c5 ("arm64: entry: fix NMI {user, kernel}->kernel transitions")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220511131733.4074499-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
On most architectures, IRQ flag tracing is disabled in NMI context, and
architectures need to define and select TRACE_IRQFLAGS_NMI_SUPPORT in
order to enable this.
Commit:
859d069ee1ddd878 ("lockdep: Prepare for NMI IRQ state tracking")
Permitted IRQ flag tracing in NMI context, allowing lockdep to work in
NMI context where an architecture had suitable entry logic. At the time,
most architectures did not have such suitable entry logic, and this broke
lockdep on such architectures. Thus, this was partially disabled in
commit:
ed00495333ccc80f ("locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs")
... with architectures needing to select TRACE_IRQFLAGS_NMI_SUPPORT to
enable IRQ flag tracing in NMI context.
Currently TRACE_IRQFLAGS_NMI_SUPPORT is defined under
arch/x86/Kconfig.debug. Move it to arch/Kconfig so architectures can
select it without having to provide their own definition.
Since the regular TRACE_IRQFLAGS_SUPPORT is selected by
arch/x86/Kconfig, the select of TRACE_IRQFLAGS_NMI_SUPPORT is moved
there too.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220511131733.4074499-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
The arm64 support of the generic ARM cpuidle driver was removed. This
lets us remove all of its support code.
Signed-off-by: Michael Walle <michael@walle.cc>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20220529181329.2345722-3-michael@walle.cc
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Since commit 788961462f34 ("ARM: psci: cpuidle: Enable PSCI CPUidle
driver") the generic ARM cpuidle driver doesn't probe anymore because
arm_cpuidle_init() will always return -EOPNOTSUPP. That is because the
mentioned commit removes the only .cpu_suspend and .cpu_init_idle
provider.
Signed-off-by: Michael Walle <michael@walle.cc>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20220529181329.2345722-2-michael@walle.cc
Signed-off-by: Will Deacon <will@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:
- Make RESERVE_BRK() work again with older binutils. The recent
'simplification' broke that.
- Make early #VE handling increment RIP when successful.
- Make the #VE code consistent vs. the RIP adjustments and add
comments.
- Handle load_unaligned_zeropad() across page boundaries correctly in
#VE when the second page is shared.
* tag 'x86-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/tdx: Handle load_unaligned_zeropad() page-cross to a shared page
x86/tdx: Clarify RIP adjustments in #VE handler
x86/tdx: Fix early #VE handling
x86/mm: Fix RESERVE_BRK() for older binutils
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull build tooling updates from Thomas Gleixner:
- Remove obsolete CONFIG_X86_SMAP reference from objtool
- Fix overlapping text section failures in faddr2line for real
- Remove OBJECT_FILES_NON_STANDARD usage from x86 ftrace and replace it
with finegrained annotations so objtool can validate that code
correctly.
* tag 'objtool-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/ftrace: Remove OBJECT_FILES_NON_STANDARD usage
faddr2line: Fix overlapping text section failures, the sequel
objtool: Fix obsolete reference to CONFIG_X86_SMAP
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler fix from Thomas Gleixner:
"A single scheduler fix plugging a race between sched_setscheduler()
and balance_push().
sched_setscheduler() spliced the balance callbacks across a lock
break which makes it possible for an interleaving schedule() to
observe an empty list"
* tag 'sched-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched: Fix balance_push() vs __sched_setscheduler()
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull lockdep fix from Thomas Gleixner:
"A RT fix for lockdep.
lockdep invokes prandom_u32() to create cookies. This worked until
prandom_u32() was switched to the real random generator, which takes a
spinlock for extraction, which does not work on RT when invoked from
atomic contexts.
lockdep has no requirement for real random numbers and it turns out
sched_clock() is good enough to create the cookie. That works
everywhere and is faster"
* tag 'locking-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
locking/lockdep: Use sched_clock() for random numbers
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fixes from Thomas Gleixner:
"A set of interrupt subsystem updates:
Core:
- Ensure runtime power management for chained interrupts
Drivers:
- A collection of OF node refcount fixes
- Unbreak MIPS uniprocessor builds
- Fix xilinx interrupt controller Kconfig dependencies
- Add a missing compatible string to the Uniphier driver"
* tag 'irq-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/loongson-liointc: Use architecture register to get coreid
irqchip/uniphier-aidet: Add compatible string for NX1 SoC
dt-bindings: interrupt-controller/uniphier-aidet: Add bindings for NX1 SoC
irqchip/realtek-rtl: Fix refcount leak in map_interrupts
irqchip/gic-v3: Fix refcount leak in gic_populate_ppi_partitions
irqchip/gic-v3: Fix error handling in gic_populate_ppi_partitions
irqchip/apple-aic: Fix refcount leak in aic_of_ic_init
irqchip/apple-aic: Fix refcount leak in build_fiq_affinity
irqchip/gic/realview: Fix refcount leak in realview_gic_of_init
irqchip/xilinx: Remove microblaze+zynq dependency
genirq: PM: Use runtime PM for chained interrupts
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Pull char/misc driver fixes for real from Greg KH:
"Let's tag the proper branch this time...
Here are some small char/misc driver fixes for 5.19-rc3 that resolve
some reported issues.
They include:
- mei driver fixes
- comedi driver fix
- rtsx build warning fix
- fsl-mc-bus driver fix
All of these have been in linux-next for a while with no reported
issues"
This is what the merge in commit f0ec9c65a8d6 _should_ have merged, but
Greg fat-fingered the pull request and I got some small changes from
linux-next instead there. Credit to Nathan Chancellor for eagle-eyes.
Link: https://lore.kernel.org/all/Yqywy+Md2AfGDu8v@dev-arch.thelio-3990X/
* tag 'char-misc-5.19-rc3-take2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
bus: fsl-mc-bus: fix KASAN use-after-free in fsl_mc_bus_remove()
mei: me: add raptor lake point S DID
mei: hbm: drop capability response on early shutdown
mei: me: set internal pg flag to off on hardware reset
misc: rtsx: Fix clang -Wsometimes-uninitialized in rts5261_init_from_hw()
comedi: vmk80xx: fix expression for tx buffer size
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
Pull i2c fixes from Wolfram Sang:
"MAINTAINERS rectifications and a few minor driver fixes"
* tag 'i2c-for-5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: mediatek: Fix an error handling path in mtk_i2c_probe()
i2c: designware: Use standard optional ref clock implementation
MAINTAINERS: core DT include belongs to core
MAINTAINERS: add include/dt-bindings/i2c to I2C SUBSYSTEM HOST DRIVERS
i2c: npcm7xx: Add check for platform_driver_register
MAINTAINERS: Update Synopsys DesignWare I2C to Supported
|
|
Pull xfs fixes from Darrick Wong:
"There's not a whole lot this time around (I'm still on vacation) but
here are some important fixes for new features merged in -rc1:
- Fix a bug where inode flag changes would accidentally drop nrext64
- Fix a race condition when toggling LARP mode"
* tag 'xfs-5.19-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
xfs: preserve DIFLAG2_NREXT64 when setting other inode attributes
xfs: fix variable state usage
xfs: fix TOCTOU race involving the new logged xattrs control knob
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull ext4 fixes from Ted Ts'o:
"Fix a variety of bugs, many of which were found by folks using fuzzing
or error injection.
Also fix up how test_dummy_encryption mount option is handled for the
new mount API.
Finally, fix/cleanup a number of comments and ext4 Documentation
files"
* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
ext4: fix a doubled word "need" in a comment
ext4: add reserved GDT blocks check
ext4: make variable "count" signed
ext4: correct the judgment of BUG in ext4_mb_normalize_request
ext4: fix bug_on ext4_mb_use_inode_pa
ext4: fix up test_dummy_encryption handling for new mount API
ext4: use kmemdup() to replace kmalloc + memcpy
ext4: fix super block checksum incorrect after mount
ext4: improve write performance with disabled delalloc
ext4: fix warning when submitting superblock in ext4_commit_super()
ext4, doc: remove unnecessary escaping
ext4: fix incorrect comment in ext4_bio_write_page()
fs: fix jbd2_journal_try_to_free_buffers() kernel-doc comment
|
|
Pull cifs client fixes from Steve French:
"Two cifs debugging improvements - one found to deal with debugging a
multichannel problem and one for a recent fallocate issue
This does include the two larger multichannel reconnect (dynamically
adjusting interfaces on reconnect) patches, because we recently found
an additional problem with multichannel to one server type that I want
to include at the same time"
* tag '5.19-rc2-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6:
cifs: when a channel is not found for server, log its connection id
smb3: add trace point for SMB2_set_eof
|
|
Signed-off-by: Xiang wangx <wangxiang@cdjrlc.com>
Link: https://lore.kernel.org/r/20220605091503.12513-1-wangxiang@cdjrlc.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
|
|
We hit a NULL pointer dereference when resizing a corrupt ext4 image
whose resize_inode feature has been freshly cleared (without running the
requested e2fsck). It can be reproduced simply by the following steps.
The problem is that once the resize_inode feature is cleared,
ext4_resize_fs() will convert the filesystem to meta_bg mode, but
es->s_reserved_gdt_blocks was not reduced to zero, so we could mistakenly
call reserve_backup_gdb() and pass an uninitialized resize_inode to it
when adding new group descriptors.
mkfs.ext4 /dev/sda 3G
tune2fs -O ^resize_inode /dev/sda #forget to run requested e2fsck
mount /dev/sda /mnt
resize2fs /dev/sda 8G
========
BUG: kernel NULL pointer dereference, address: 0000000000000028
CPU: 19 PID: 3243 Comm: resize2fs Not tainted 5.18.0-rc7-00001-gfde086c5ebfd #748
...
RIP: 0010:ext4_flex_group_add+0xe08/0x2570
...
Call Trace:
<TASK>
ext4_resize_fs+0xbec/0x1660
__ext4_ioctl+0x1749/0x24e0
ext4_ioctl+0x12/0x20
__x64_sys_ioctl+0xa6/0x110
do_syscall_64+0x3b/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f2dd739617b
========
The fix is simple: add a check in ext4_resize_begin() to make sure that
es->s_reserved_gdt_blocks is zero when the resize_inode feature is
disabled.
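A minimal sketch of the kind of check described, written as a helper that
ext4_resize_begin() could call (the helper name and the error message are
illustrative, not the literal patch):
  /* In fs/ext4/resize.c, with "ext4.h" already included. */
  static int ext4_check_reserved_gdt_blocks(struct super_block *sb)
  {
          struct ext4_super_block *es = EXT4_SB(sb)->s_es;

          /* Without resize_inode there must be no reserved GDT blocks. */
          if (!ext4_has_feature_resize_inode(sb) &&
              le16_to_cpu(es->s_reserved_gdt_blocks)) {
                  ext4_error(sb, "resize_inode disabled but reserved GDT blocks present");
                  return -EFSCORRUPTED;
          }
          return 0;
  }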
Cc: stable@kernel.org
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Reviewed-by: Ritesh Harjani <ritesh.list@gmail.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20220601092717.763694-1-yi.zhang@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
|
|
Since dx_make_map() may now return -EFSCORRUPTED, change "count" to be a
signed integer so we can correctly check for an error code returned by
dx_make_map().
Fixes: 46c116b920eb ("ext4: verify dir block before splitting it")
Cc: stable@kernel.org
Signed-off-by: Ding Xiang <dingxiang@cmss.chinamobile.com>
Link: https://lore.kernel.org/r/20220530100047.537598-1-dingxiang@cmss.chinamobile.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
|
|
ext4_mb_normalize_request() can move the logical start of the allocated
blocks to reduce fragmentation and better utilize preallocation. However,
the logical block requested as the start of the allocation
(ac->ac_o_ex.fe_logical) should always be covered by the allocated
blocks, so we should check for that by changing the 'and' to an 'or' in
the assertion.
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Ritesh Harjani <ritesh.list@gmail.com>
Link: https://lore.kernel.org/r/20220528110017.354175-3-libaokun1@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
|
|
Hulk Robot reported a BUG_ON:
==================================================================
kernel BUG at fs/ext4/mballoc.c:3211!
[...]
RIP: 0010:ext4_mb_mark_diskspace_used.cold+0x85/0x136f
[...]
Call Trace:
ext4_mb_new_blocks+0x9df/0x5d30
ext4_ext_map_blocks+0x1803/0x4d80
ext4_map_blocks+0x3a4/0x1a10
ext4_writepages+0x126d/0x2c30
do_writepages+0x7f/0x1b0
__filemap_fdatawrite_range+0x285/0x3b0
file_write_and_wait_range+0xb1/0x140
ext4_sync_file+0x1aa/0xca0
vfs_fsync_range+0xfb/0x260
do_fsync+0x48/0xa0
[...]
==================================================================
The above issue may happen as follows:
-------------------------------------
do_fsync
vfs_fsync_range
ext4_sync_file
file_write_and_wait_range
__filemap_fdatawrite_range
do_writepages
ext4_writepages
mpage_map_and_submit_extent
mpage_map_one_extent
ext4_map_blocks
ext4_mb_new_blocks
ext4_mb_normalize_request
>>> start + size <= ac->ac_o_ex.fe_logical
ext4_mb_regular_allocator
ext4_mb_simple_scan_group
ext4_mb_use_best_found
ext4_mb_new_preallocation
ext4_mb_new_inode_pa
ext4_mb_use_inode_pa
>>> set ac->ac_b_ex.fe_len <= 0
ext4_mb_mark_diskspace_used
>>> BUG_ON(ac->ac_b_ex.fe_len <= 0);
We can easily reproduce this problem with the following commands:
`fallocate -l100M disk`
`mkfs.ext4 -b 1024 -g 256 disk`
`mount disk /mnt`
`fsstress -d /mnt -l 0 -n 1000 -p 1`
The size must be smaller than or equal to EXT4_BLOCKS_PER_GROUP.
Therefore, "start + size <= ac->ac_o_ex.fe_logical" may occur
when the size is truncated. So start should be the start position of
the group where ac_o_ex.fe_logical is located after alignment.
In addition, when the value of fe_logical or EXT4_BLOCKS_PER_GROUP
is very large, the value calculated by start_off is more accurate.
Cc: stable@kernel.org
Fixes: cd648b8a8fd5 ("ext4: trim allocation requests to group size")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Ritesh Harjani <ritesh.list@gmail.com>
Link: https://lore.kernel.org/r/20220528110017.354175-2-libaokun1@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
|
|
Since ext4 was converted to the new mount API, the test_dummy_encryption
mount option isn't being handled entirely correctly, because the needed
fscrypt_set_test_dummy_encryption() helper function combines
parsing/checking/applying into one function. That doesn't work well
with the new mount API, which splits these into separate steps.
This was sort of okay anyway, due to the parsing logic that was copied
from fscrypt_set_test_dummy_encryption() into ext4_parse_param(),
combined with an additional check in ext4_check_test_dummy_encryption().
However, these overlooked the case of changing the value of
test_dummy_encryption on remount, which isn't allowed but which ext4
wasn't detecting until ext4_apply_options(), when it's too late to fail.
Another bug is that if test_dummy_encryption was specified multiple
times with an argument, memory was leaked.
Fix this up properly by using the new helper functions that allow
splitting up the parse/check/apply steps for test_dummy_encryption.
Fixes: cebe85d570cf ("ext4: switch to the new mount api")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://lore.kernel.org/r/20220526040412.173025-1-ebiggers@kernel.org
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
|