path: root/arch/arm64/include/asm/word-at-a-time.h
2024-06-19  arm64: word-at-a-time: improve byte count calculations for LE  (Linus Torvalds)
Do the same optimization as x86-64: do __ffs() on the intermediate value that found whether there is a zero byte, before we've actually computed the final byte mask.

The logic is:

has_zero():
    Check if the word has a zero byte in it, which indicates the end of the loop, and prepare a value to be used for the rest of the sequence.

    The standard LE implementation just creates a word that has the high bit set in each byte of the word that was zero.

    Example: 0xaa00bbccdd00eeff -> 0x0080000000800000

prep_zero_mask():
    Possibly do more prep to then clean up the initial fast result from has_zero, so that it can be combined with another zero mask with a simple logical "or" to create a final mask.

    This is only used on big-endian machines that use a different algorithm, and is a no-op here.

create_zero_mask():
    This is "step 1" of creating the count and the mask, and is meant for any common operations between the two.

    In the old implementation, this actually created the zero mask, that was then used for masking and for counting the number of bits in the mask. In the new implementation, this is a no-op.

count_zero():
    This takes the mask bits, and counts the number of bytes before the first zero byte.

    In the old implementation, it counted the number of bits in the final byte mask (which was the same as the C standard "find last set bit" that uses the silly "starts at one" counting) and shifted the value down by three.

    In the new implementation, we know the intermediate mask isn't zero, and it just does "find first set" with the sane semantics without any off-by-one issues, and again shifts by three (which also masks off the bit offset in the zero byte itself).

    Example: 0x0080000000800000 -> 2

zero_bytemask():
    This takes the mask bits, and turns it into an actual byte mask of the bytes preceding the first zero byte.

    In the old implementation, this was a no-op, because the work had already been done by create_zero_mask(). In the new implementation, this does what create_zero_mask() used to do.

    Example: 0x0080000000800000 -> 0x000000000000ffff

The difference between the old and the new implementation is that "count_zero()" ends up scheduling better because it is being done on a value that is available earlier (before the final mask). But more importantly, it can be implemented without the insane semantics of the standard bit finding helpers that have the off-by-one issue and have to special-case the zero mask situation.

On arm64, the new "count_zero()" ends up just "rbit + clz" plus the shift right that then ends up being subsumed by the "add to final length".

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
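As an illustration of the sequence described above, a self-contained userspace sketch of the little-endian helpers (illustrative names and constants, not the kernel's exact code; __builtin_ctzll stands in for __ffs()):

    #include <stdio.h>
    #include <stdint.h>

    #define ONE_BITS  0x0101010101010101UL   /* 0x01 in every byte */
    #define HIGH_BITS 0x8080808080808080UL   /* 0x80 in every byte */

    /* has_zero(): set the high bit in each byte of 'a' that was zero */
    static uint64_t has_zero(uint64_t a)
    {
        return ((a - ONE_BITS) & ~a) & HIGH_BITS;
    }

    /* count_zero(): number of bytes before the first zero byte */
    static unsigned count_zero(uint64_t bits)
    {
        return __builtin_ctzll(bits) >> 3;
    }

    /* zero_bytemask(): all-ones mask over the bytes before the first zero byte */
    static uint64_t zero_bytemask(uint64_t bits)
    {
        bits = (bits - 1) & ~bits;
        return bits >> 7;
    }

    int main(void)
    {
        uint64_t word = 0xaa00bbccdd00eeffUL;
        uint64_t bits = has_zero(word);

        /* prints: bits 0080000000800000, count 2, mask 000000000000ffff */
        printf("bits %016llx, count %u, mask %016llx\n",
               (unsigned long long)bits, count_zero(bits),
               (unsigned long long)zero_bytemask(bits));
        return 0;
    }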
2024-02-01  kernel.h: removed REPEAT_BYTE from kernel.h  (Tanzir Hasan)
This patch creates wordpart.h and includes it in asm/word-at-a-time.h for all architectures. WORD_AT_A_TIME_CONSTANTS depends on kernel.h because of REPEAT_BYTE. Moving this to another header and including it where necessary allows us to not include the bloated kernel.h. Making this implicit dependency on REPEAT_BYTE explicit allows for later improvements in the lib/string.c inclusion list.

Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Tanzir Hasan <tanzirh@google.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Link: https://lore.kernel.org/r/20231226-libstringheader-v6-1-80aa08c7652c@google.com
Signed-off-by: Kees Cook <keescook@chromium.org>
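For context, REPEAT_BYTE() just broadcasts a byte value into every byte of an unsigned long; a minimal sketch of the idiom (see the wordpart.h header created by this patch for the authoritative definition):

    /* Broadcast a byte into every byte of an unsigned long, e.g. on 64-bit
     * REPEAT_BYTE(0x01) == 0x0101010101010101.  Sketch of the idiom only. */
    #define REPEAT_BYTE(x)  ((~0ul / 0xff) * (x))

The word-at-a-time constants can then be built as, roughly, { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }, which is why the constants header was the only piece that still needed kernel.h.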
2023-04-05  arm64: mte: rename TCO routines  (Vincenzo Frascino)
The TCO related routines are used in uaccess methods and load_unaligned_zeropad() but are unrelated to both even if the naming suggests otherwise. Improve the readability of the code by moving them away from uaccess.h and pre-pending them with "mte".

[andreyknvl@google.com: drop __ from mte_disable/enable_tco names]
Link: https://lkml.kernel.org/r/74d26337b2360733956114069e96ff11c296a944.1680114854.git.andreyknvl@google.com
Link: https://lkml.kernel.org/r/a48e7adce1248c0f9603a457776d59daa0ef734b.1678491668.git.andreyknvl@google.com
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Weizhao Ouyang <ouyangweizhao@zeku.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2021-10-21  arm64: extable: add load_unaligned_zeropad() handler  (Mark Rutland)
For inline assembly, we place exception fixups out-of-line in the `.fixup` section such that these are out of the way of the fast path.

This has a few drawbacks:

* Since the fixup code is anonymous, backtraces will symbolize fixups as offsets from the nearest prior symbol, currently `__entry_tramp_text_end`. This is confusing, and painful to debug without access to the relevant vmlinux.

* Since the exception handler adjusts the PC to execute the fixup, and the fixup uses a direct branch back into the function it fixes, backtraces of fixups miss the original function. This is confusing, and violates requirements for RELIABLE_STACKTRACE (and therefore LIVEPATCH).

* Inline assembly and associated fixups are generated from templates, and we have many copies of logically identical fixups which only differ in which specific registers are written to and which address is branched to at the end of the fixup. This is potentially wasteful of I-cache resources, and makes it hard to add additional logic to fixups without significant bloat.

* In the case of load_unaligned_zeropad(), the logic in the fixup requires a temporary register that we must allocate even in the fast-path where it will not be used.

This patch addresses all four concerns for load_unaligned_zeropad() fixups by adding a dedicated exception handler which performs the fixup logic in exception context and subsequently returns back after the faulting instruction.

For the moment, the fixup logic is identical to the old assembly fixup logic, but in future we could enhance this by taking the ESR and FAR into account to constrain the faults we try to fix up, or to specialize fixups for MTE tag check faults.

Other than backtracing, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-13-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
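For readers unfamiliar with what this fixup does, a rough C rendering of the load_unaligned_zeropad() recovery logic on a little-endian kernel (pseudocode for illustration only; the real handler operates on the faulting instruction's registers from exception context):

    /* The faulting load was a word-sized read from 'addr' that ran off the
     * end of a mapped page.  Recover by re-reading from the aligned-down
     * address (which lies entirely in the valid page) and shifting the
     * valid bytes down, so the bytes past the page read back as zero. */
    static unsigned long zeropad_fixup(unsigned long addr)
    {
        unsigned long aligned = addr & ~(sizeof(long) - 1);   /* the BIC step */
        unsigned long data    = *(unsigned long *)aligned;    /* cannot fault */
        unsigned long shift   = (addr & (sizeof(long) - 1)) * 8;

        return data >> shift;   /* little-endian: low bytes are the valid ones */
    }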
2021-04-26  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 updates from Catalin Marinas:

- MTE asynchronous support for KASan. Previously only synchronous (slower) mode was supported. Asynchronous is faster but does not allow precise identification of the illegal access.

- Run kernel mode SIMD with softirqs disabled. This allows using NEON in softirq context for crypto performance improvements. The conditional yield support is modified to take softirqs into account and reduce the latency.

- Preparatory patches for Apple M1: handle CPUs that only have the VHE mode available (host kernel running at EL2), add FIQ support.

- arm64 perf updates: support for HiSilicon PA and SLLC PMU drivers, new functions for the HiSilicon HHA and L3C PMU, cleanups.

- Re-introduce support for execute-only user permissions but only when the EPAN (Enhanced Privileged Access Never) architecture feature is available.

- Disable fine-grained traps at boot and improve the documented boot requirements.

- Support CONFIG_KASAN_VMALLOC on arm64 (only with KASAN_GENERIC).

- Add hierarchical eXecute Never permissions for all page tables.

- Add arm64 prctl(PR_PAC_{SET,GET}_ENABLED_KEYS) allowing user programs to control which PAC keys are enabled in a particular task.

- arm64 kselftests for BTI and some improvements to the MTE tests.

- Minor improvements to the compat vdso and sigpage.

- Miscellaneous cleanups.

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (86 commits)
  arm64/sve: Add compile time checks for SVE hooks in generic functions
  arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.
  arm64: pac: Optimize kernel entry/exit key installation code paths
  arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  arm64: mte: make the per-task SCTLR_EL1 field usable elsewhere
  arm64/sve: Remove redundant system_supports_sve() tests
  arm64: fpsimd: run kernel mode NEON with softirqs disabled
  arm64: assembler: introduce wxN aliases for wN registers
  arm64: assembler: remove conditional NEON yield macros
  kasan, arm64: tests supports for HW_TAGS async mode
  arm64: mte: Report async tag faults before suspend
  arm64: mte: Enable async tag check fault
  arm64: mte: Conditionally compile mte_enable_kernel_*()
  arm64: mte: Enable TCO in functions that can read beyond buffer limits
  kasan: Add report for async mode
  arm64: mte: Drop arch_enable_tagging()
  kasan: Add KASAN mode kernel parameter
  arm64: mte: Add asynchronous mode support
  arm64: Get rid of CONFIG_ARM64_VHE
  arm64: Cope with CPUs stuck in VHE mode
  ...
2021-04-11  arm64: mte: Enable TCO in functions that can read beyond buffer limits  (Vincenzo Frascino)
load_unaligned_zeropad() and __get/put_kernel_nofault() functions can read past some buffer limits, which may include some MTE granule with a different tag. When MTE async mode is enabled, if the load operation crosses the boundaries and the next granule has a different tag, the PE sets the TFSR_EL1.TF1 bit as if an asynchronous tag fault had happened.

Enable Tag Check Override (TCO) in these functions before the load and disable it afterwards to prevent this from happening.

Note: The same condition can be hit in MTE sync mode but we deal with it through the exception handling. In the current implementation, the mte_async_mode flag is set only at boot time, but in the future kasan might acquire runtime features that change the mode dynamically; hence we disable it when sync mode is selected, to be future proof.

Cc: Will Deacon <will@kernel.org>
Reported-by: Branislav Rankov <Branislav.Rankov@arm.com>
Tested-by: Branislav Rankov <Branislav.Rankov@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-6-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
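A minimal sketch of the wrapping pattern described above (illustrative helper names; the real routines set and clear the PSTATE.TCO bit around the load that may run past the end of the buffer, and therefore past its MTE granule):

    static inline unsigned long load_unaligned_zeropad(const void *addr)
    {
        unsigned long ret;

        enable_tco();                     /* PSTATE.TCO = 1: tag checks off   */
        ret = wide_load_with_fixup(addr); /* may read beyond the buffer       */
        disable_tco();                    /* PSTATE.TCO = 0: tag checks back  */

        return ret;
    }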
2021-04-01  arm64: fix inline asm in load_unaligned_zeropad()  (Peter Collingbourne)
The inline asm's addr operand is marked as input-only, however in the case where an exception is taken it may be modified by the BIC instruction on the exception path. Fix the problem by using a temporary register as the destination register for the BIC instruction.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Cc: stable@vger.kernel.org
Link: https://linux-review.googlesource.com/id/I84538c8a2307d567b4f45bb20b715451005f9617
Link: https://lore.kernel.org/r/20210401165110.3952103-1-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
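In inline-asm terms, the rule being restored here is that an "r" (input-only) operand must never be written by the asm body; if the asm needs a modified value, it should either use a "+r" (read-write) constraint or write into a separate output register. A generic illustration of the temporary-register approach (not the actual load_unaligned_zeropad() asm):

    static inline unsigned long align_down_8(unsigned long addr)
    {
        unsigned long tmp;

        asm ("bic %0, %1, #7"   /* clear the low three bits...            */
             : "=r" (tmp)       /* ...writing a temporary output register */
             : "r" (addr));     /* so the input operand is left untouched */
        return tmp;
    }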
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner)
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation this program is distributed
  in the hope that it will be useful but without any warranty without
  even the implied warranty of merchantability or fitness for a
  particular purpose see the gnu general public license for more details
  you should have received a copy of the gnu general public license
  along with this program if not see http www gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-12-24  Replace <asm/uaccess.h> with <linux/uaccess.h> globally  (Linus Torvalds)
This was entirely automated, using the script by Al:

    PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
    sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
        $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.

Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-24  arm64: switch to relative exception tables  (Ard Biesheuvel)
Instead of using absolute addresses for both the exception location and the fixup, use offsets relative to the exception table entry values. Not only does this cut the size of the exception table in half, it is also a prerequisite for KASLR, since absolute exception table entries are subject to dynamic relocation, which is incompatible with the sorting of the exception table that occurs at build time.

This patch also introduces the _ASM_EXTABLE preprocessor macro (which exists on x86 as well) and its _asm_extable assembly counterpart, as shorthands to emit exception table entries.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
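A rough sketch of the layout change described above (illustrative struct and macro names):

    /* Absolute entries: two pointers, 16 bytes per entry on arm64, and the
     * stored addresses need relocation when the kernel is moved (KASLR). */
    struct extable_entry_abs {
        unsigned long insn;    /* address of the faulting instruction */
        unsigned long fixup;   /* address of the fixup code           */
    };

    /* Relative entries: two 32-bit offsets from the field's own address,
     * 8 bytes per entry and position independent.                       */
    struct extable_entry_rel {
        int insn;
        int fixup;
    };

    /* Recovering the absolute addresses from a relative entry: */
    #define ex_insn(e)   ((unsigned long)&(e)->insn  + (e)->insn)
    #define ex_fixup(e)  ((unsigned long)&(e)->fixup + (e)->fixup)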
2013-12-19  arm64: dcache: select DCACHE_WORD_ACCESS for little-endian CPUs  (Will Deacon)
DCACHE_WORD_ACCESS uses the word-at-a-time API for optimised string comparisons in the vfs layer.

This patch implements support for load_unaligned_zeropad in much the same way as has been done for ARM, although big-endian systems are also supported.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19  arm64: use generic strnlen_user and strncpy_from_user functions  (Will Deacon)
This patch implements the word-at-a-time interface for arm64 using the same algorithm as ARM. We use the fls64 macro, which expands to a clz instruction via a compiler builtin. Big-endian configurations make use of the implementation from asm-generic.

With this implemented, we can replace our byte-at-a-time strnlen_user and strncpy_from_user functions with the optimised generic versions.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
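To see how callers consume this interface, here is a self-contained little-endian userspace sketch of a word-at-a-time string-length loop (illustrative only; the generic kernel routines additionally handle alignment, user access and page boundaries, and the big-endian variant counts from the other end):

    #include <stddef.h>
    #include <stdint.h>

    #define ONES  0x0101010101010101UL
    #define HIGHS 0x8080808080808080UL

    /* Scan 8 bytes per iteration; assumes 's' is 8-byte aligned. */
    static size_t wordwise_strlen(const char *s)
    {
        const uint64_t *p = (const uint64_t *)s;
        size_t len = 0;

        for (;; p++, len += 8) {
            uint64_t bits = ((*p - ONES) & ~*p) & HIGHS;   /* zero-byte detect */
            if (bits)
                return len + (__builtin_ctzll(bits) >> 3); /* bytes before it  */
        }
    }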