2021-06-11  drivers/perf: Simplify EVENT ATTR macro in qcom_l2_pmu.c  (Qi Liu)
Use common macro PMU_EVENT_ATTR_ID to simplify L2CACHE_EVENT_ATTR. Cc: Andy Gross <agross@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Qi Liu <liuqi115@huawei.com> Link: https://lore.kernel.org/r/1623220863-58233-4-git-send-email-liuqi115@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-11  drivers/perf: Simplify EVENT ATTR macro in SMMU PMU driver  (Qi Liu)
Use common macro PMU_EVENT_ATTR_ID to simplify SMMU_EVENT_ATTR. Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Qi Liu <liuqi115@huawei.com> Link: https://lore.kernel.org/r/1623220863-58233-3-git-send-email-liuqi115@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-11  perf: Add EVENT_ATTR_ID to simplify event attributes  (Qi Liu)
Similar EVENT_ATTR macros are defined in many PMU drivers, like Arm PMU driver, Arm SMMU PMU driver. So add a generic macro to simplify code. Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Qi Liu <liuqi115@huawei.com> Link: https://lore.kernel.org/r/1623220863-58233-2-git-send-email-liuqi115@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
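For reference, a minimal sketch of how such a generic helper can be shaped and used. This is illustrative only, not a quote of the merged code; the smmu_pmu_events array and smmu_pmu_event_show routine below are placeholder names.

    /* Wrap the event id and show routine in an anonymous compound literal so
     * each driver no longer needs its own SMMU_EVENT_ATTR-style macro. */
    #define PMU_EVENT_ATTR_ID(_name, _show, _id)                        \
            (&((struct perf_pmu_events_attr[]) {                        \
                    { .attr = __ATTR(_name, 0444, _show, NULL),         \
                      .id = _id, },                                     \
            })[0].attr.attr)

    /* A driver's event list then reduces to direct uses of the macro: */
    static struct attribute *smmu_pmu_events[] = {
            PMU_EVENT_ATTR_ID(cycles,      smmu_pmu_event_show, 0x0),
            PMU_EVENT_ATTR_ID(transaction, smmu_pmu_event_show, 0x1),
            NULL,
    };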
2021-06-11  perf/smmuv3: Don't trample existing events with global filter  (Robin Murphy)
With global filtering, we only allow an event to be scheduled if its filter settings exactly match those of any existing events, therefore it is pointless to reapply the filter in that case. Much worse, though, is that in doing that we trample the event type of counter 0 if it's already active, and never touch the appropriate PMEVTYPERn so the new event is likely not counting the right thing either. Don't do that. CC: stable@vger.kernel.org Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/32c80c0e46237f49ad8da0c9f8864e13c4a803aa.1623153312.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-08  arm64: mm: decode xFSC in mem_abort_decode()  (Mark Rutland)
It would be helpful if mem_abort_decode() could decode the DFSC/IFSC, as this can make it easier to identify common bugs (e.g. accesses which trigger alignment faults) without having to manually decode the xFSC value. Decode the xFSC in mem_abort_decode(). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210608123742.11921-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
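A hedged sketch of the idea, covering only a couple of the architecturally defined fault status codes (the real patch covers the whole xFSC space; ESR_ELx_FSC, ESR_ELx_FSC_ALIGN and ESR_ELx_FSC_EXTABT are existing definitions in asm/esr.h, while the helper name is illustrative):

    /* Map the low six ESR bits (the DFSC/IFSC field) to a readable string so
     * mem_abort_decode() can print it alongside the raw value. */
    static const char *esr_fsc_string(unsigned long esr)
    {
            switch (esr & ESR_ELx_FSC) {
            case ESR_ELx_FSC_ALIGN:
                    return "alignment fault";
            case ESR_ELx_FSC_EXTABT:
                    return "synchronous external abort";
            default:
                    return "unknown/reserved";
            }
    }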
2021-06-08  arm64: smccc: Support SMCCC v1.3 SVE register saving hint  (Mark Brown)
SMCCC v1.2 requires that all SVE state be preserved over SMC calls, which introduces substantial overhead in the common case where there is no SVE state in the registers. To avoid this, SMCCC v1.3 introduces a flag which allows the caller to say that there is no state that needs to be preserved in the registers. Make use of this flag, setting it if the SMCCC version indicates support for it and the TIF_ flags indicate that there is no live SVE state in the registers; this avoids placing any constraints on when SMCCC calls can be done or triggering extra saving and reloading of SVE register state in the kernel. This would be straightforward enough except for the rather entertaining inline assembly we use to do SMCCC v1.1 calls to allow us to take advantage of the limited number of registers it clobbers. Deal with this by having a function which we call immediately before issuing the SMCCC call to make our checks and set the flag. Using alternatives, the overhead when SVE is supported but not detected at runtime can be reduced to a single NOP. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210603184118.15090-1-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
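The mechanism, roughly, rendered as plain C (the kernel actually does this with a pre-call assembly helper and alternatives; the wrapper name below is hypothetical and the ARM_SMCCC_VERSION_1_3 constant is assumed to come from the v1.3 support):

    /* If the firmware speaks SMCCC v1.3+ and the current task has no live SVE
     * state, set the hint bit so the callee may skip SVE save/restore. */
    static inline u32 smccc_apply_sve_hint(u32 fn_id)
    {
            if (arm_smccc_get_version() >= ARM_SMCCC_VERSION_1_3 &&
                !test_thread_flag(TIF_SVE))
                    fn_id |= ARM_SMCCC_1_3_SVE_HINT;    /* bit 16 of the function ID */
            return fn_id;
    }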
2021-06-08  Makefile: fix GDB warning with CONFIG_RELR  (Nick Desaulniers)
GDB produces the following warning when debugging kernels built with CONFIG_RELR: BFD: /android0/linux-next/vmlinux: unknown type [0x13] section `.relr.dyn' when loading a kernel built with CONFIG_RELR into GDB. It can also prevent debugging symbols using such relocations. Peter suggests: [That flag] means that lld will use dynamic tags and section type numbers in the OS-specific range rather than the generic range. The kernel itself doesn't care about these numbers; it determines the location of the RELR section using symbols defined by a linker script. Link: https://github.com/ClangBuiltLinux/linux/issues/1057 Suggested-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/r/20210522012626.2811297-1-ndesaulniers@google.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-08  perf/hisi: Constify static attribute_group structs  (Rikard Falkeborn)
These are only put in an array of pointers to const attribute_group structs. Make them const like the other static attribute_group structs to allow the compiler to put them in read-only memory. Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com> Link: https://lore.kernel.org/r/20210605221514.73449-1-rikard.falkeborn@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
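The change pattern, sketched with placeholder HiSilicon-style names (the attribute array name is illustrative; the struct attribute_group fields are the standard ones):

    /* Before: a writable object that the kernel never actually modifies. */
    static struct attribute_group hisi_pmu_events_group = {
            .name  = "events",
            .attrs = hisi_pmu_events_attr,
    };

    /* After: const, so the compiler can place the object in read-only memory. */
    static const struct attribute_group hisi_pmu_events_group = {
            .name  = "events",
            .attrs = hisi_pmu_events_attr,
    };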
2021-06-08  perf: qcom: Remove redundant dev_err call in qcom_l3_cache_pmu_probe()  (ChenXiaoSong)
There is an error message within devm_ioremap_resource() already, so remove the dev_err call to avoid a redundant error message. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: ChenXiaoSong <chenxiaosong2@huawei.com> Link: https://lore.kernel.org/r/20210608084816.1046485-1-chenxiaosong2@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
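The before/after shape, approximately (identifier names here are illustrative rather than quoted from the driver):

    /* Before: devm_ioremap_resource() already prints its own error. */
    l3pmu->regs = devm_ioremap_resource(&pdev->dev, memrc);
    if (IS_ERR(l3pmu->regs)) {
            dev_err(&pdev->dev, "Can't map PMU @%pa\n", &memrc->start);
            return PTR_ERR(l3pmu->regs);
    }

    /* After: just propagate the error; the duplicate message is gone. */
    l3pmu->regs = devm_ioremap_resource(&pdev->dev, memrc);
    if (IS_ERR(l3pmu->regs))
            return PTR_ERR(l3pmu->regs);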
2021-06-07  arm64: idle: don't instrument idle code with KCOV  (Mark Rutland)
The low-level idle code in arch_cpu_idle() and its callees runs at a time when portions of the kernel environment aren't available. For example, RCU may not be watching, and lockdep state may be out-of-sync with the hardware. Due to this, it is not sound to instrument this code. We generally avoid instrumentation by marking the entry functions as `noinstr`, but currently this doesn't inhibit KCOV instrumentation. Prevent this by factoring these functions into a new idle.c so that we can disable KCOV for the entire compilation unit, as is done for the core idle code in kernel/sched/idle.c. We'd like to keep instrumentation of the rest of process.c, and for the existing code in cpuidle.c, so a new compilation unit is preferable. The arch_cpu_idle_dead() function in process.c is a cpu hotplug function that is safe to instrument, so it is left as-is in process.c. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-21-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: don't instrument entry code with KCOV  (Mark Rutland)
The code in entry-common.c runs at exception entry and return boundaries, where portions of the kernel environment aren't available. For example, RCU may not be watching, and lockdep state may be out-of-sync with the hardware. Due to this, it is not sound to instrument this code. We generally avoid instrumentation by marking the entry functions as `noinstr`, but currently this doesn't inhibit KCOV instrumentation. Prevent this by disabling KCOV for the entire compilation unit. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-20-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: make NMI entry/exit functions static  (Mark Rutland)
Now that we only call arm64_enter_nmi() and arm64_exit_nmi() from within entry-common.c, let's make these static to ensure this remains the case. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-19-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: split SDEI entry  (Mark Rutland)
We'd like to keep all the entry sequencing in entry-common.c, as this will allow us to ensure this is consistent, and free from any unsound instrumentation. Currently __sdei_handler() performs the NMI entry/exit sequences in sdei.c. Let's split the low-level entry sequence from the event handling, moving the former to entry-common.c and keeping the latter in sdei.c. The event handling function is renamed to do_sdei_event(), matching the do_${FOO}() pattern used for other exception handlers. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-18-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: split bad stack entry  (Mark Rutland)
We'd like to keep all the entry sequencing in entry-common.c, as this will allow us to ensure this is consistent, and free from any unsound instrumentation. Currently handle_bad_stack() performs the NMI entry sequence in traps.c. Let's split the low-level entry sequence from the reporting, moving the former to entry-common.c and keeping the latter in traps.c. To make it clear that the reporting function never returns, it is renamed to panic_bad_stack(). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-17-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: fold el1_inv() into el1h_64_sync_handler()  (Mark Rutland)
An unexpected synchronous exception from EL1h could happen at any time, and for robustness we should treat this as an NMI, making minimal assumptions about the context the exception was taken from. Currently el1_inv() assumes we can use enter_from_kernel_mode(), and also assumes that we should inherit the original DAIF value. Neither of these are desirable when we take an unexpected exception. Further, after el1_inv() calls __panic_unhandled(), the remainder of the function is unreachable, and therefore superfluous. Let's address this and simplify things by having el1h_64_sync_handler() call __panic_unhandled() directly, without any of the redundant logic. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reported-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-16-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: handle all vectors with C  (Mark Rutland)
We have 16 architectural exception vectors, and depending on kernel configuration we handle 8 or 12 of these with C code, with the remaining 8 or 4 of these handled as special cases in the entry assembly. It would be nicer if the entry assembly were uniform for all exceptions, and we deferred any specific handling of the exceptions to C code. This way the entry assembly can be more easily templated without ifdeffery or special cases, and it's easier to modify the handling of these cases in future (e.g. to dump additional registers or other context). This patch reworks the entry code so that we always have a C handler for every architectural exception vector, with the entry assembly being completely uniform. We now have to handle exceptions from EL1t and EL1h, and also have to handle exceptions from AArch32 even when the kernel is built without CONFIG_COMPAT. To make this clear and to simplify templating, we rename the top-level exception handlers with a consistent naming scheme: asm: <el+sp>_<regsize>_<type> c: <el+sp>_<regsize>_<type>_handler ... where: <el+sp> is `el1t`, `el1h`, or `el0t` <regsize> is `64` or `32` <type> is `sync`, `irq`, `fiq`, or `error` ... e.g. asm: el1h_64_sync c: el1h_64_sync_handler ... with lower-level handlers simply using "el1" and "compat" as today. For unexpected exceptions, this information is passed to __panic_unhandled(), so it can report the specific vector an unexpected exception was taken from, e.g. | Unhandled 64-bit el1t sync exception For vectors we never expect to enter legitimately, the C code is generated using a macro to avoid code duplication. The exceptions are handled via __panic_unhandled(), replacing bad_mode() (which is removed). The `kernel_ventry` and `entry_handler` assembly macros are updated to handle the new naming scheme. In theory it should be possible to generate the entry functions at the same time as the vectors using a single table, but this will require reworking the linker script to split the two into separate sections, so for now we have separate tables. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-15-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
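A sketch of how the never-expected vectors can share a single C body; this is close in spirit to, though not necessarily identical to, the macro the patch adds:

    /* Stamp out one tiny handler per vector; each just names the vector and
     * panics via __panic_unhandled(). */
    #define UNHANDLED(el, regsize, vector)                                              \
    asmlinkage void noinstr el##_##regsize##_##vector##_handler(struct pt_regs *regs)   \
    {                                                                                   \
            const char *desc = #regsize "-bit " #el " " #vector;                        \
            __panic_unhandled(regs, desc, read_sysreg(esr_el1));                        \
    }

    UNHANDLED(el1t, 64, sync)
    UNHANDLED(el1t, 64, irq)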
2021-06-07  arm64: entry: template the entry asm functions  (Mark Rutland)
Now that the majority of the exception triage logic has been converted to C, the entry assembly functions all have a uniform structure. Let's generate them all with an assembly macro to reduce the amount of code and to ensure they all remain in sync if we make changes in future. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-14-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: improve bad_mode()  (Mark Rutland)
Our use of bad_mode() has a few rough edges: * AArch64 doesn't use the term "mode", and refers to "Execution states", "Exception levels", and "Selected stack pointer". * We log the exception type (SYNC/IRQ/FIQ/SError), but not the actual "mode" (though this can be decoded from the SPSR value). * We use bad_mode() as a second-level handler for unexpected synchronous exceptions, where the "mode" is legitimate, but the specific exception is not. * We dump the ESR value, but call this "code", and so it's not clear to all readers that this is the ESR. ... and all of this can be somewhat opaque to those who aren't extremely familiar with the code. Let's make this a bit clearer by having bad_mode() log "Unhandled ${TYPE} exception" rather than "Bad mode in ${TYPE} handler", using "ESR" rather than "code", and having the final panic() log "Unhandled exception" rather than "Bad mode". In future we'd like to log the specific architectural vector rather than just the type of exception, so we also split the core of bad_mode() out into a helper called __panic_unhandled(), which takes the vector as a string argument. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-13-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: move bad_mode() to entry-common.c  (Mark Rutland)
In subsequent patches we'll rework the way bad_mode() is called by exception entry code. In preparation for this, let's move bad_mode() itself into entry-common.c. Let's also mark it as noinstr (e.g. to prevent it being kprobed), and let's also make the `handler` array a local variable, as this is only used by bad_mode(), and will be removed entirely in a subsequent patch. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-12-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: consolidate EL1 exception returns  (Mark Rutland)
Following the example of ret_to_user, let's consolidate all the EL1 return paths with a ret_to_kernel helper, rather than each entry point having its own copy of the return code. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-11-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: organise entry vectors consistently  (Mark Rutland)
In subsequent patches we'll rename the entry handlers based on their original EL, register width, and exception class. To do so, we need to make all 3 arguments to the `kernel_ventry` macro mandatory, and distinguish EL1h from EL1t. In preparation for this, let's make the current set of arguments mandatory, and move the `regsize` column before the branch label suffix, making the vectors easier to read column-wise. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-10-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: organise entry handlers consistently  (Mark Rutland)
In entry.S we have two comments which distinguish EL0 and EL1 exception handlers, but the code isn't actually laid out to match, and there are a few other inconsistencies that would be good to clear up. This patch organizes the entry handlers consistently: * The handlers are laid out in order of the vectors, to make them easier to navigate. * The inconsistently-applied alignment is removed. * The handlers are consistently marked with SYM_CODE_START_LOCAL() rather than SYM_CODE_START_LOCAL_NOALIGN(), giving them the same default alignment as other assembly code snippets. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-9-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: convert IRQ+FIQ handlers to C  (Mark Rutland)
For various reasons we'd like to convert the bulk of arm64's exception triage logic to C. As a step towards that, this patch converts the EL1 and EL0 IRQ+FIQ triage logic to C. Separate C functions are added for the native and compat cases so that in subsequent patches we can handle native/compat differences in C. Since the triage functions can now call arm64_apply_bp_hardening() directly, the do_el0_irq_bp_hardening() wrapper function is removed. Since the user_exit_irqoff macro is now unused, it is removed. The user_enter_irqoff macro is still used by the ret_to_user code, and cannot be removed at this time. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-8-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: add a call_on_irq_stack helper  (Mark Rutland)
When handling IRQ/FIQ exceptions the entry assembly may transition from a task's stack to a CPU's IRQ stack (and IRQ shadow call stack). In subsequent patches we want to migrate the IRQ/FIQ triage logic to C, and as we want to perform some actions on the task stack (e.g. EL1 preemption), we need to switch stacks within the C handler. So that we can do so, this patch adds a helper to call a function on a CPU's IRQ stack (and shadow stack as appropriate). Subsequent patches will make use of the new helper function. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-7-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
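A sketch of how the later C triage code can use the helper; the wrapper name below is illustrative, while on_thread_stack() and the handler pointer follow existing kernel conventions:

    /* Run the IRQ/FIQ handler on the per-CPU IRQ stack only if we are
     * currently on the task stack; otherwise we are already on the IRQ
     * stack (re-entrant interrupt) and can call the handler directly. */
    static void do_interrupt_handler(struct pt_regs *regs,
                                     void (*handler)(struct pt_regs *))
    {
            if (on_thread_stack())
                    call_on_irq_stack(regs, handler);
            else
                    handler(regs);
    }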
2021-06-07  arm64: entry: move NMI preempt logic to C  (Mark Rutland)
Currently portions of our preempt logic are written in C while other parts are written in assembly. Let's clean this up a little bit by moving the NMI preempt checks to C. For now, the preempt count (and need_resched) checking is left in assembly, and will be converted with the body of the IRQ handler in subsequent patches. Other than the increased lockdep coverage there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-6-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: move arm64_preempt_schedule_irq to entry-common.c  (Mark Rutland)
Subsequent patches will pull more of the IRQ entry handling into C. To keep this in one place, let's move arm64_preempt_schedule_irq() into entry-common.c along with the other entry management functions. We no longer need to include <linux/lockdep.h> in process.c, so the include directive is removed. There should be no functional change as a result of this patch. Reviewed-by: Joey Gouly <joey.gouly@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-5-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: convert SError handlers to C  (Mark Rutland)
For various reasons we'd like to convert the bulk of arm64's exception triage logic to C. As a step towards that, this patch converts the EL1 and EL0 SError triage logic to C. Separate C functions are added for the native and compat cases so that in subsequent patches we can handle native/compat differences in C. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-4-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-07  arm64: entry: unmask IRQ+FIQ after EL0 handling  (Mark Rutland)
For non-fatal exceptions taken from EL0, we expect that at some point during exception handling it is possible to return to a regular process context with all exceptions unmasked (e.g. as we do in do_notify_resume()), and we generally aim to unmask exceptions wherever possible. While handling SError and debug exceptions from EL0, we need to leave some exceptions masked during handling. Handling SError requires us to mask SError (which also requires masking IRQ+FIQ), and handling debug exceptions requires us to mask debug (which also requires masking SError+IRQ+FIQ). Once do_serror() or do_debug_exception() has returned, we no longer need to mask exceptions, and can unmask them all, which is what we did prior to commit: 9034f6251572a474 ("arm64: Do not enable IRQs for ct_user_exit") ... where we had to mask IRQs as context_tracking_user_exit() expected IRQs to be masked. Since then, we realised that our context tracking wasn't entirely correct, and reworked the entry code to fix this. As of commit: 23529049c6842382 ("arm64: entry: fix non-NMI user<->kernel transitions") ... we replaced the call to context_tracking_user_exit() with a call to user_exit_irqoff() as part of enter_from_user_mode(), which occurs earlier, before we run the body of the handler and unmask exceptions in DAIF. When we return to userspace, we go via ret_to_user(), which masks exceptions in DAIF prior to calling user_enter_irqoff() as part of exit_to_user_mode(). Thus, there's no longer a reason to leave IRQs or FIQs masked at the end of the EL0 debug or error handlers, as neither the user exit context tracking nor the user entry context tracking requires this. Let's bring these into line with other EL0 exception handlers and ensure that IRQ and FIQ are unmasked in DAIF at some point during the handler. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
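The resulting shape of the EL0 debug path, sketched (simplified; this mirrors the entry-common.c style rather than quoting it exactly):

    static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
    {
            /* Debug (and SError/IRQ/FIQ) stay masked across the handler... */
            unsigned long far = read_sysreg(far_el1);

            enter_from_user_mode();
            do_debug_exception(far, esr, regs);
            /* ...then restore the normal process-context DAIF value, which
             * unmasks IRQ and FIQ (previously DAIF_PROCCTX_NOIRQ was used). */
            local_daif_restore(DAIF_PROCCTX);
    }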
2021-06-07  arm64: remove redundant local_daif_mask() in bad_mode()  (Mark Rutland)
Upon taking an exception, the CPU sets all the DAIF bits. We never clear any of these bits prior to calling bad_mode(), and bad_mode() itself never clears any of these bits, so there's no need to call local_daif_mask(). This patch removes the redundant call. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-04  kasan: disable freed user page poisoning with HW tags  (Peter Collingbourne)
Poisoning freed pages protects against kernel use-after-free. The likelihood of such a bug involving kernel pages is significantly higher than that for user pages. At the same time, poisoning freed pages can impose a significant performance cost, which cannot always be justified for user pages given the lower probability of finding a bug. Therefore, disable freed user page poisoning when using HW tags. We identify "user" pages via the flag set GFP_HIGHUSER_MOVABLE, which indicates a strong likelihood of not being directly accessible to the kernel. Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://linux-review.googlesource.com/id/I716846e2de8ef179f44e835770df7e6307be96c9 Link: https://lore.kernel.org/r/20210602235230.3928842-5-pcc@google.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-04  arm64: mte: handle tags zeroing at page allocation time  (Peter Collingbourne)
Currently, on an anonymous page fault, the kernel allocates a zeroed page and maps it in user space. If the mapping is tagged (PROT_MTE), set_pte_at() additionally clears the tags. It is, however, more efficient to clear the tags at the same time as zeroing the data on allocation. To avoid clearing the tags on any page (which may not be mapped as tagged), only do this if the vma flags contain VM_MTE. This requires introducing a new GFP flag that is used to determine whether to clear the tags. The DC GZVA instruction with a 0 top byte (and 0 tag) requires top-byte-ignore. Set the TCR_EL1.{TBI1,TBID1} bits irrespective of whether KASAN_HW is enabled. Signed-off-by: Peter Collingbourne <pcc@google.com> Co-developed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://linux-review.googlesource.com/id/Id46dc94e30fe11474f7e54f5d65e7658dbdddb26 Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://lore.kernel.org/r/20210602235230.3928842-4-pcc@google.com Signed-off-by: Will Deacon <will@kernel.org>
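A hedged sketch of the allocation-side idea: when the faulting VMA is tagged, ask the allocator to zero the MTE tags along with the data via the new GFP flag (__GFP_ZEROTAGS in this series); the function shown is the arm64 hook introduced by the patch below in this series, simplified:

    struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
                                                    unsigned long vaddr)
    {
            gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;

            /* Only tagged (PROT_MTE) mappings need their tags cleared too. */
            if (vma->vm_flags & VM_MTE)
                    flags |= __GFP_ZEROTAGS;

            return alloc_page_vma(flags, vma, vaddr);
    }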
2021-06-04  kasan: use separate (un)poison implementation for integrated init  (Peter Collingbourne)
Currently with integrated init page_alloc.c needs to know whether kasan_alloc_pages() will zero initialize memory, but this will start becoming more complicated once we start adding tag initialization support for user pages. To avoid page_alloc.c needing to know more details of what integrated init will do, move the unpoisoning logic for integrated init into the HW tags implementation. Currently the logic is identical but it will diverge in subsequent patches. For symmetry do the same for poisoning although this logic will be unaffected by subsequent patches. Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://linux-review.googlesource.com/id/I2c550234c6c4a893c48c18ff0c6ce658c7c67056 Link: https://lore.kernel.org/r/20210602235230.3928842-3-pcc@google.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-04  mm: arch: remove indirection level in alloc_zeroed_user_highpage_movable()  (Peter Collingbourne)
In an upcoming change we would like to add a flag to GFP_HIGHUSER_MOVABLE so that it would no longer be an OR of GFP_HIGHUSER and __GFP_MOVABLE. This poses a problem for alloc_zeroed_user_highpage_movable() which passes __GFP_MOVABLE into an arch-specific __alloc_zeroed_user_highpage() hook which ORs in GFP_HIGHUSER. Since __alloc_zeroed_user_highpage() is only ever called from alloc_zeroed_user_highpage_movable(), we can remove one level of indirection here. Remove __alloc_zeroed_user_highpage(), make alloc_zeroed_user_highpage_movable() the hook, and use GFP_HIGHUSER_MOVABLE in the hook implementations so that they will pick up the new flag that we are going to add. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Ic6361c657b2cdcd896adbe0cf7cb5a7fbb1ed7bf Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210602235230.3928842-2-pcc@google.com Signed-off-by: Will Deacon <will@kernel.org>
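The refactoring pattern, sketched (the "before" generic wrapper and the "after" inline are illustrative of the shape, not verbatim):

    /* Before: the generic wrapper ORs __GFP_MOVABLE into an arch hook that in
     * turn ORs in GFP_HIGHUSER, splitting the flags across two layers. */
    #define alloc_zeroed_user_highpage_movable(vma, vaddr) \
            __alloc_zeroed_user_highpage(__GFP_MOVABLE, vma, vaddr)

    /* After: the movable variant is the hook itself, so it can use
     * GFP_HIGHUSER_MOVABLE directly and pick up any flag later added to it. */
    static inline struct page *
    alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
                                       unsigned long vaddr)
    {
            return alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr);
    }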
2021-06-04  drivers/perf: hisi: Fix data source control  (Shaokun Zhang)
'Data source' is a new feature of the HHA PMU, but its config/clear interface was implemented incorrectly. The 'HHA_DATSRC_CTRL' register is mainly used for data source configuration: if the driver leaves bit 0 enabled, the event keeps being counted, and this was not checked carefully. Fix the issue so the interface behaves as originally intended. Fixes: 932f6a99f9b0 ("drivers/perf: hisi: Add new functions for HHA PMU") Reported-by: kernel test robot <lkp@intel.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Link: https://lore.kernel.org/r/1622709291-37996-1-git-send-email-zhangshaokun@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-03  arm64: mm: Add is_el1_data_abort() helper  (Kefeng Wang)
We already have is_el1_instruction_abort(); add an is_el1_data_abort() helper and use it. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210603120239.169018-1-wangkefeng.wang@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-03  arm64: perf: Add more support on caps under sysfs  (Shaokun Zhang)
Armv8.7 has introduced BUS_SLOTS and BUS_WIDTH fields in the PMMIR_EL1 register; add two caps entries, bus_slots and bus_width, under sysfs. They report the real slots and width values when the information is available, and 0 otherwise. Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Link: https://lore.kernel.org/r/1622704502-63951-1-git-send-email-zhangshaokun@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-02  arm64: update string routine copyrights and URLs  (Mark Rutland)
To make future archaeology easier, let's have the string routine comment blocks encode the specific upstream commit ID they were imported from. These are the same commit IDs as listed in the commits importing the code, expanded to 16 characters. Note that the routines have different commit IDs, each representing the latest upstream commit which changed the particular routine. At the same time, let's consistently include 2021 in the copyright dates. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210602151358.35571-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-02  perf: qcom_l2_pmu: move to use request_irq by IRQF_NO_AUTOEN flag  (Tian Tao)
A request_irq() call made after setting IRQ_NOAUTOEN, as below: irq_set_status_flags(irq, IRQ_NOAUTOEN); request_irq(dev, irq...); can be replaced by a single request_irq() call with the IRQF_NO_AUTOEN flag. This patch is based on "add IRQF_NO_AUTOEN for request_irq", which is being merged: https://lore.kernel.org/patchwork/patch/1388765/ Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1622595642-61678-3-git-send-email-tiantao6@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-02  arm_pmu: move to use request_irq by IRQF_NO_AUTOEN flag  (Tian Tao)
A request_irq() call made after setting IRQ_NOAUTOEN, as below: irq_set_status_flags(irq, IRQ_NOAUTOEN); request_irq(dev, irq...); can be replaced by a single request_irq() call with the IRQF_NO_AUTOEN flag. This patch is based on "add IRQF_NO_AUTOEN for request_irq", which is being merged: https://lore.kernel.org/patchwork/patch/1388765/ Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1622595642-61678-2-git-send-email-tiantao6@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
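The replacement pattern in full, with an illustrative handler and device name (arguments follow the standard request_irq() signature):

    /* Before: two steps, with a window in which the IRQ could be enabled
     * before the driver is ready for it. */
    irq_set_status_flags(irq, IRQ_NOAUTOEN);
    err = request_irq(irq, pmu_irq_handler, 0, "pmu-irq", dev);

    /* After: a single call; the IRQ stays disabled until enable_irq(). */
    err = request_irq(irq, pmu_irq_handler, IRQF_NO_AUTOEN, "pmu-irq", dev);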
2021-06-01  arm64: cache: Lower ARCH_DMA_MINALIGN to 64 (L1_CACHE_BYTES)  (Will Deacon)
Back in 97303480753e ("arm64: Increase the max granular size"), ARCH_DMA_MINALIGN was effectively increased to 128 bytes thanks to an increase in L1_CACHE_BYTES due to an unsubstantiated performance claim on the now obsolete ThunderX-1. Although this was reverted in d93277b9839b, ARCH_DMA_MINALIGN was kept at 128 bytes by ebc7e21e0fa2 ("arm64: Increase ARCH_DMA_MINALIGN to 128"). During discussion of the original patch, it was reported that the change also prevented a warning during boot on (again, now obsolete) Qualcomm server hardware where the cache writeback granule was larger than 64 bytes. The reason for this warning was because non-coherent DMA could lead to data corruption due to unexpected writeback from the CPU where a cacheline is shared with other allocations. Since then, systems have appeared with larger cachelines still, and so commit 8f5c9037a55b ("arm64/mm: Correct the cache line size warning with non coherent device") reworked the warning so that it only appears on systems where non-coherent DMA is actually required and taints the kernel with TAINT_CPU_OUT_OF_SPEC. We are not aware of any systems, even including the aforementioned obsolete machines, which have a CWG larger than 64 bytes and require non-coherent DMA. More recently, it has been reported that a ARCH_DMA_MINALIGN of 128 bytes wastes considerable memory (~6% immediately after boot on one system). Reduce ARCH_DMA_MINALIGN to 64 bytes and allow the warning/taint to indicate if there are machines that unknowingly rely on this. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Link: https://lore.kernel.org/linux-arm-kernel/1442944788-17254-1-git-send-email-rric@kernel.org/ Link: https://lore.kernel.org/linux-arm-kernel/CAOZdJXUiRMAguDV+HEJqPg57MyBNqEcTyaH+ya=U93NHb-pdJA@mail.gmail.com/ Link: https://lore.kernel.org/linux-arm-kernel/20190614131141.4428-1-msys.mizuma@gmail.com/ Link: https://lore.kernel.org/r/20210517074332.28280-1-vincent.whitchurch@axis.com Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210527124356.22367-1-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-06-01  arm64: mm: Remove unused support for Normal-WT memory type  (Will Deacon)
The Normal-WT memory type is unused, so remove it and reclaim a MAIR. Cc: Christoph Hellwig <hch@lst.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210527110319.22157-4-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-06-01  arm64: acpi: Map EFI_MEMORY_WT memory as Normal-NC  (Will Deacon)
The only user we have of Normal Write-Through memory is in the ACPI code when mapping memory regions advertised as EFI_MEMORY_WT. Since most (all?) CPUs treat write-through as non-cacheable under the hood, don't bother with the extra memory type here and just treat EFI_MEMORY_WT the same way as EFI_MEMORY_WC by mapping it to the Normal-NC memory type instead and emitting a warning if we have failed to find an alternative EFI memory type. Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Christoph Hellwig <hch@lst.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210527110319.22157-3-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-06-01  arm64: mm: Remove unused support for Device-GRE memory type  (Will Deacon)
The Device-GRE memory type is unused, so remove it and reclaim a MAIR. Cc: Christoph Hellwig <hch@lst.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210505180228.GA3874@arm.com Link: https://lore.kernel.org/r/20210527110319.22157-2-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-06-01  arm64: mm: Use better bitmap_zalloc()  (Kefeng Wang)
Use the purpose-built bitmap_zalloc() helper to allocate the bitmap. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Link: https://lore.kernel.org/r/20210529111510.186355-1-wangkefeng.wang@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
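The typical conversion, sketched with a placeholder bitmap name:

    /* Before: open-coded size calculation plus zeroed allocation. */
    map = kcalloc(BITS_TO_LONGS(nbits), sizeof(unsigned long), GFP_KERNEL);

    /* After: the helper does the sizing and zeroing, and documents intent;
     * pair it with bitmap_free() rather than kfree() when releasing. */
    map = bitmap_zalloc(nbits, GFP_KERNEL);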
2021-06-01  arm64: Rewrite __arch_clear_user()  (Robin Murphy)
Now that we're always using STTR variants rather than abstracting two different addressing modes, the user_ldst macro here is frankly more obfuscating than helpful. Rewrite __arch_clear_user() with regular USER() annotations so that it's clearer what's going on, and take the opportunity to minimise the branchiness in the most common paths, while also allowing the exception fixup to return an accurate result. Apparently some folks examine large reads from /dev/zero closely enough to notice the loop being hot, so align it per the other critical loops (presumably around a typical instruction fetch granularity). Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/1cbd78b12c076a8ad4656a345811cfb9425df0b3.1622128527.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-01  arm64: Better optimised memchr()  (Robin Murphy)
Although we implement our own assembly version of memchr(), it turns out to be barely any better than what GCC can generate for the generic C version (and would go wrong if the size_t argument were ever large enough to be interpreted as negative). Unfortunately we can't import the tuned implementation from the Arm optimized-routines library, since that has some Advanced SIMD parts which are not really viable for general kernel library code. What we can do, however, is pep things up with some relatively straightforward word-at-a-time logic for larger calls. Adding some timing to optimized-routines' memchr() test for a simple benchmark, overall this version comes in around half as fast as the SIMD code, but still nearly 4x faster than our existing implementation. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/58471b42f9287e039dafa9e5e7035077152438fd.1622128527.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-01  arm64: Import latest memcpy()/memmove() implementation  (Robin Murphy)
Import the latest implementation of memcpy(), based on the upstream code of string/aarch64/memcpy.S at commit afd6244 from https://github.com/ARM-software/optimized-routines, and subsuming memmove() in the process. Note that for simplicity Arm have chosen to contribute this code to Linux under GPLv2 rather than the original MIT license. Note also that the needs of the usercopy routines vs. regular memcpy() have now diverged so far that we abandon the shared template idea and the damage which that incurred to the tuning of LDP/STP loops. We'll be back to tackle those routines separately in future. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/3c953af43506581b2422f61952261e76949ba711.1622128527.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-01  arm64: Add assembly annotations for weak-PI-alias madness  (Robin Murphy)
Add yet another set of assembly symbol annotations, this time for the borderline-absurd situation of a function aliasing to a weak symbol which itself also wants a position-independent alias. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/75545b3c4129b20b887474bb58a9cf302bf2132b.1622128527.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-01  arm64: Import latest version of Cortex Strings' strncmp  (Sam Tebbs)
Import the latest version of the former Cortex Strings - now Arm Optimized Routines - strncmp function based on the upstream code of string/aarch64/strncmp.S at commit e823e3a from https://github.com/ARM-software/optimized-routines Note that for simplicity Arm have chosen to contribute this code to Linux under GPLv2 rather than the original MIT license. Signed-off-by: Sam Tebbs <sam.tebbs@arm.com> [ rm: update attribution and commit message ] Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/26110bee02ad360596c9a7536af7eaaf6890d0e8.1622128527.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-01  arm64: Import updated version of Cortex Strings' strlen  (Sam Tebbs)
Import an updated version of the former Cortex Strings - now Arm Optimized Routines - strlen function. The latest version introduces Advanced SIMD usage which rules it out for our purposes, but we can still pick an intermediate improvement from the previous version, namely string/aarch64/strlen.S at commit 98e4d6a from https://github.com/ARM-software/optimized-routines Note that for simplicity Arm have chosen to contribute this code to Linux under GPLv2 rather than the original MIT license. Signed-off-by: Sam Tebbs <sam.tebbs@arm.com> [ rm: update attribution and commit message ] Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/32e3489398a24b23ae6e996935ac4818f8fd9dfd.1622128527.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>