path: root/arch/arm/crypto/aes-neonbs-glue.c
Age    Commit message    Author
2025-06-23  crypto: arm/aes-neonbs - work around gcc-15 warning  (Arnd Bergmann)

I get a very rare -Wstringop-overread warning with gcc-15 for one function
in aesbs_ctr_encrypt():

    arch/arm/crypto/aes-neonbs-glue.c: In function 'ctr_encrypt':
    arch/arm/crypto/aes-neonbs-glue.c:212:1446: error: '__builtin_memcpy' offset [17, 2147483647] is out of the bounds [0, 16] of object 'buf' with type 'u8[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds=]
      212 |   src = dst = memcpy(buf + sizeof(buf) - bytes,
    arch/arm/crypto/aes-neonbs-glue.c:218:17: error: 'aesbs_ctr_encrypt' reading 1 byte from a region of size 0 [-Werror=stringop-overread]
      218 |                 aesbs_ctr_encrypt(dst, src, ctx->rk, ctx->rounds, bytes, walk.iv);
          |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    arch/arm/crypto/aes-neonbs-glue.c:218:17: note: referencing argument 2 of type 'const u8[0]' {aka 'const unsigned char[]'}
    arch/arm/crypto/aes-neonbs-glue.c:218:17: note: referencing argument 3 of type 'const u8[0]' {aka 'const unsigned char[]'}
    arch/arm/crypto/aes-neonbs-glue.c:218:17: note: referencing argument 6 of type 'u8[0]' {aka 'unsigned char[]'}
    arch/arm/crypto/aes-neonbs-glue.c:36:17: note: in a call to function 'aesbs_ctr_encrypt'
       36 | asmlinkage void aesbs_ctr_encrypt(u8 out[], u8 const in[], u8 const rk[],

This could happen in theory if walk.nbytes is larger than INT_MAX and gets
converted to a negative local variable. Keep the type unsigned, like the
original nbytes, to be sure there is no integer overflow.

Fixes: c8bf850e991a ("crypto: arm/aes-neonbs-ctr - deal with non-multiples of AES block size")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
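The shape of the fix, as a hedged sketch against the ctr_encrypt() local
quoted in the warning above (the exact diff may differ):

    -	int bytes = walk.nbytes;
    +	unsigned int bytes = walk.nbytes;	/* same unsigned type as walk.nbytes */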
2025-05-05  Revert "crypto: run initcalls for generic implementations earlier"  (Herbert Xu)

This reverts commit c4741b23059794bd99beef0f700103b0d983b3fd.

Crypto API self-tests no longer run at registration time and now occur
either at late_initcall or upon first use. Therefore the premise of the
above commit no longer exists. Revert it, along with the subsequent
additions of subsys_initcall and arch_initcall.

Note that lib/crypto calls will stay at subsys_initcall (or rather be
downgraded from arch_initcall) because they may need to occur before
Crypto API registration.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
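For reference, a minimal sketch of what moving a module between initcall
levels looks like (module and function names are illustrative, not from
this commit):

    #include <linux/module.h>

    static int __init demo_mod_init(void)
    {
    	/* register algorithms with the crypto API */
    	return 0;
    }

    /* subsys_initcall(demo_mod_init);  <- pre-revert: run early */
    module_init(demo_mod_init);		/* post-revert: default level suffices */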
2025-04-07  crypto: arm/aes-neonbs - stop using the SIMD helper  (Ard Biesheuvel)

Now that ARM permits use of the NEON unit in softirq context as well as
task context, there is no longer a need to rely on the SIMD helper module
to construct async skciphers wrapping the sync ones, as the latter can
always be called directly.

So remove these wrappers and the dependency on the SIMD helper. This
permits the use of these algorithms by callers that only support
synchronous use.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
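A hedged sketch of the resulting registration path, assuming an existing
aes_algs[] array of synchronous skciphers (no simd_skcipher wrappers
involved):

    #include <crypto/internal/skcipher.h>

    static int __init aes_mod_init(void)
    {
    	/* the sync NEON algs are registered as-is and may also be
    	 * used directly by purely synchronous callers */
    	return crypto_register_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
    }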
2024-08-24  crypto: simd - Do not call crypto_alloc_tfm during registration  (Herbert Xu)

Algorithm registration is usually carried out during module init, where
as little work as possible should be done. The SIMD code violated this
rule by allocating a tfm; that then triggers a full test of the
algorithm, which may dead-lock in certain cases.

SIMD was only allocating the tfm to get at the alg object, which is in
fact already available: it is what we are registering. Use that directly
and remove the crypto_alloc_tfm call.

Also remove some obsolete and unused SIMD API.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-08-17  crypto: arm/aes-neonbs - go back to using aes-arm directly  (Eric Biggers)

In aes-neonbs, instead of going through the crypto API for the parts that
the bit-sliced AES code doesn't handle, namely AES-CBC encryption and
single-block AES, just call the ARM scalar AES cipher directly. This
basically goes back to the original approach that was used before commit
b56f5cbc7e08 ("crypto: arm/aes-neonbs - resolve fallback cipher at
runtime").

Calling the ARM scalar AES cipher directly is faster, simpler, and avoids
any chance of bugs specific to the use of fallback ciphers, such as module
loading deadlocks, which have happened twice. The deadlocks turned out to
be fixable in other ways, but there's no need to rely on anything so
fragile in the first place.

The rationale for the above-mentioned commit was to allow people to choose
to use a time-invariant AES implementation for the fallback cipher. There
are a couple of problems with that rationale, though:

- In practice the ARM scalar AES cipher (aes-arm) was used anyway, since
  it has a higher priority than aes-fixed-time. Users *could* go out of
  their way to disable or blacklist aes-arm, or to lower its priority
  using NETLINK_CRYPTO, but very few users customize the crypto API to
  this extent. Systems with the ARMv8 Crypto Extensions used aes-ce, but
  the bit-sliced algorithms are irrelevant on such systems anyway.

- Since commit 913a3aa07d16 ("crypto: arm/aes - add some hardening
  against cache-timing attacks"), the ARM scalar AES cipher is partially
  hardened against cache-timing attacks. It actually works like
  aes-fixed-time, in that it disables interrupts and prefetches its
  lookup table. It does use a larger table than aes-fixed-time, but even
  so, it is not clear that aes-fixed-time is meaningfully more
  time-invariant than aes-arm.

And of course, the real solution for time-invariant AES is to use a CPU
that supports AES instructions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-06-21  crypto: arm - add missing MODULE_DESCRIPTION() macros  (Jeff Johnson)

With ARCH=arm, make allmodconfig && make W=1 C=1 reports:

    WARNING: modpost: missing MODULE_DESCRIPTION() in arch/arm/crypto/aes-arm-bs.o
    WARNING: modpost: missing MODULE_DESCRIPTION() in arch/arm/crypto/crc32-arm-ce.o

Add the missing invocation of the MODULE_DESCRIPTION() macro to all files
which have a MODULE_LICENSE(). This includes crct10dif-ce-glue.c and
curve25519-glue.c which, although they did not produce a warning with the
arm allmodconfig configuration, may cause this warning with other
configurations.

Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
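The kind of one-liner this adds next to each MODULE_LICENSE() (the exact
description strings are assumptions):

    MODULE_DESCRIPTION("Bit sliced AES using NEON instructions");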
2022-02-05  crypto: arm/aes-neonbs-ctr - deal with non-multiples of AES block size  (Ard Biesheuvel)

Instead of falling back to C code to deal with the final bit of input
that is not a round multiple of the block size, handle this in the asm
code, permitting us to use overlapping loads and stores for performance,
and implement the 16-byte wide XOR using a single NEON instruction.

Since NEON loads and stores have a natural width of 16 bytes, we need to
handle inputs of less than 16 bytes in a special way, but this rarely
occurs in practice so it does not impact performance. All other input
sizes can be consumed directly by the NEON asm code, although it should
be noted that the core AES transform can still only process 128 bytes
(8 AES blocks) at a time.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
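A hedged sketch of the sub-16-byte handling in the glue code, simplified
from the ctr_encrypt() logic quoted in the 2025-06-23 entry above (the
surrounding variables are assumed from that context):

    u8 buf[AES_BLOCK_SIZE];

    /* right-align a short tail in a stack buffer so the asm can keep
     * using full-width 16-byte NEON loads and stores */
    if (unlikely(bytes < AES_BLOCK_SIZE))
    	src = dst = memcpy(buf + sizeof(buf) - bytes, src, bytes);

    aesbs_ctr_encrypt(dst, src, ctx->rk, ctx->rounds, bytes, walk.iv);

    if (unlikely(bytes < AES_BLOCK_SIZE))
    	memcpy(walk.dst.virt.addr, buf + sizeof(buf) - bytes, bytes);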
2021-01-03  crypto: remove cipher routines from public crypto API  (Ard Biesheuvel)

The cipher routines in the crypto API are mostly intended for templates
implementing skcipher modes generically in software, and shouldn't be
used outside of the crypto subsystem. So move the prototypes and all
related definitions to a new header file under include/crypto/internal.

Also, let's use the new module namespace feature to move the symbol
exports into a new namespace CRYPTO_INTERNAL.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
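What a legitimate in-kernel user of the bare cipher API looks like after
this change, as a hedged sketch:

    #include <crypto/internal/cipher.h>	/* prototypes now live here */

    MODULE_IMPORT_NS(CRYPTO_INTERNAL);	/* exports moved to this namespace */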
2020-11-06  crypto: arm/aes-neonbs - fix usage of cbc(aes) fallback  (Horia Geantă)

Loading the module deadlocks, since:

- the local cbc(aes) implementation needs a fallback, and
- the crypto API tries to find one, but the request_module() resolves
  back to the same module.

Fix this by changing the module alias for cbc(aes) and using the
NEED_FALLBACK flag when requesting the fallback algorithm.

Fixes: 00b99ad2bac2 ("crypto: arm/aes-neonbs - Use generic cbc encryption path")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
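A hedged sketch of the fallback allocation: passing CRYPTO_ALG_NEED_FALLBACK
in the mask makes the lookup skip algorithms that themselves need a
fallback, so request_module() can no longer resolve back to this module:

    struct crypto_skcipher *fb;

    fb = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_NEED_FALLBACK);
    if (IS_ERR(fb))
    	return PTR_ERR(fb);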
2020-09-25  crypto: arm/aes-neonbs - use typed init/exit routines for XTS  (Ard Biesheuvel)

Use the typed skcipher init/exit routines instead of the generic
cra_init/_exit routines when instantiating/releasing the XTS skciphers.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
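A hedged sketch of the typed hooks (function bodies are placeholders):
they receive the crypto_skcipher directly and are wired up through the
.init/.exit members of struct skcipher_alg rather than
.base.cra_init/.base.cra_exit:

    static int xts_init(struct crypto_skcipher *tfm)
    {
    	/* allocate the per-tfm tweak cipher here */
    	return 0;
    }

    static void xts_exit(struct crypto_skcipher *tfm)
    {
    	/* and release it here */
    }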
2020-09-11  crypto: arm/aes-neonbs - Use generic cbc encryption path  (Herbert Xu)

Since commit b56f5cbc7e08ec7d31c42fc41e5247677f20b143 ("crypto:
arm/aes-neonbs - resolve fallback cipher at runtime"), the CBC encryption
path in aes-neonbs is now identical to that obtained through the cbc
template. This means that it can simply call the generic cbc template
instead of doing its own thing.

This patch removes the custom encryption path and simply invokes the
generic cbc template.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-03-20  crypto: arm/neon - memzero_explicit aes-cbc key  (Torsten Duwe)

At function exit, do not leave the expanded key in the rk struct, which
was allocated on the stack.

Signed-off-by: Torsten Duwe <duwe@suse.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
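The scrub itself is a one-liner; a hedged sketch, with rk standing for
the stack-allocated expanded key:

    #include <linux/string.h>	/* memzero_explicit() */

    memzero_explicit(&rk, sizeof(rk));	/* cannot be optimized away */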
2019-09-09  crypto: arm/aes-neonbs - implement ciphertext stealing for XTS  (Ard Biesheuvel)

Update the AES-XTS implementation based on NEON instructions so that it
can deal with inputs whose size is not a multiple of the cipher block
size. This is part of the original XTS specification, but was never
implemented before in the Linux kernel.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-09  crypto: arm/aes-ce - yield the SIMD unit between scatterwalk steps  (Ard Biesheuvel)

Reduce the scope of the kernel_neon_begin/end regions so that the SIMD
unit is released (and thus preemption re-enabled) if the crypto operation
cannot be completed in a single scatterwalk step. This avoids scheduling
blackouts due to preemption being disabled for unbounded periods,
resulting in a more responsive system.

After this change, we can also permit the cipher_walk infrastructure to
sleep, so set the 'atomic' parameter to skcipher_walk_virt() to false as
well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
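A hedged sketch of the narrowed NEON region (walk, req and err are
assumed from the usual skcipher-walk boilerplate):

    err = skcipher_walk_virt(&walk, req, false);	/* atomic=false: may sleep */

    while (walk.nbytes) {
    	kernel_neon_begin();
    	/* process this step's walk.nbytes / AES_BLOCK_SIZE blocks */
    	kernel_neon_end();	/* preemption possible between steps */

    	err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
    }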
2019-07-26  crypto: arm/aes-neonbs - provide a synchronous version of ctr(aes)  (Ard Biesheuvel)

AES in CTR mode is used by modes such as GCM and CCM, which are often
used in contexts where only synchronous ciphers are permitted. So provide
a synchronous version of ctr(aes) based on the existing code.

This requires a non-SIMD fallback to deal with invocations occurring from
a context where SIMD instructions may not be used. We have a helper for
this now in the AES library, so wire that up.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
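A hedged sketch of the fallback path (ctx->fallback is an assumed
struct crypto_aes_ctx field, not necessarily the real name):

    #include <crypto/aes.h>			/* aes_encrypt() */
    #include <crypto/internal/simd.h>	/* crypto_simd_usable() */

    if (!crypto_simd_usable()) {
    	/* e.g. called from hardirq context: scalar AES library code */
    	aes_encrypt(&ctx->fallback, dst, src);
    } else {
    	kernel_neon_begin();
    	/* bit-sliced NEON path */
    	kernel_neon_end();
    }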
2019-07-26  crypto: arm/aes-neonbs - switch to library version of key expansion routine  (Ard Biesheuvel)

Switch to the new AES library that also provides an implementation of the
AES key expansion routine. This removes the dependency on the generic AES
cipher, allowing it to be omitted entirely in the future.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
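A hedged sketch of a setkey path using the library routine (in_key and
key_len assumed from the usual setkey prototype):

    #include <crypto/aes.h>	/* struct crypto_aes_ctx, aes_expandkey() */

    struct crypto_aes_ctx rk;
    int err = aes_expandkey(&rk, in_key, key_len);	/* no generic cipher needed */
    if (err)
    	return err;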
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500  (Thomas Gleixner)

Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation #

extracted by the scancode license scanner, the SPDX license identifier

    GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
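The resulting first line of the affected C sources:

    // SPDX-License-Identifier: GPL-2.0-only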
2019-04-18  crypto: arm/aes-neonbs - don't access already-freed walk.iv  (Eric Biggers)

If the user-provided IV needs to be aligned to the algorithm's alignmask,
then skcipher_walk_virt() copies the IV into a new aligned buffer,
walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the
caller unconditionally accesses walk.iv, it's a use-after-free.

arm32 xts-aes-neonbs doesn't set an alignmask, so currently it isn't
affected by this despite unconditionally accessing walk.iv. However this
is more subtle than desired, and it was actually broken prior to the
alignmask being removed by commit cc477bf64573 ("crypto: arm/aes -
replace bit-sliced OpenSSL NEON code"). Thus, update xts-aes-neonbs to
start checking the return value of skcipher_walk_virt().

Fixes: e4e7f10bfc40 ("ARM: add support for bit sliced AES using NEON instructions")
Cc: <stable@vger.kernel.org> # v3.13+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
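A hedged sketch of the fix (the tweak-encryption line and ctx->tweak_tfm
are illustrative, not the exact code):

    err = skcipher_walk_virt(&walk, req, true);
    if (err)
    	return err;	/* on failure, walk.iv may already be freed */

    /* only now is walk.iv known to be valid */
    crypto_cipher_encrypt_one(ctx->tweak_tfm, walk.iv, walk.iv);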
2017-12-11  crypto: arm/aes-neonbs - Use PTR_ERR_OR_ZERO()  (Vasyl Gomonovych)

Fix ptr_ret.cocci warnings:

    arch/arm/crypto/aes-neonbs-glue.c:184:1-3: WARNING: PTR_ERR_OR_ZERO can be used
    arch/arm/crypto/aes-neonbs-glue.c:261:1-3: WARNING: PTR_ERR_OR_ZERO can be used

Use PTR_ERR_OR_ZERO rather than if (IS_ERR(...)) + PTR_ERR.

Generated by: scripts/coccinelle/api/ptr_ret.cocci

Signed-off-by: Vasyl Gomonovych <gomonovych@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
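The transformation the coccinelle script suggests, sketched against a
hypothetical enc_tfm field:

    -	if (IS_ERR(ctx->enc_tfm))
    -		return PTR_ERR(ctx->enc_tfm);
    -	return 0;
    +	return PTR_ERR_OR_ZERO(ctx->enc_tfm);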
2017-08-04  crypto: algapi - make crypto_xor() take separate dst and src arguments  (Ard Biesheuvel)

There are quite a number of occurrences in the kernel of the pattern

    if (dst != src)
    	memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
    crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

or

    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

where crypto_xor() is preceded or followed by a memcpy() invocation that
is only there because crypto_xor() uses its output parameter as one of
the inputs.

To avoid having to add new instances of this pattern in the arm64 code,
which will be refactored to implement non-SIMD fallbacks, add an
alternative implementation called crypto_xor_cpy(), taking separate input
and output arguments. This removes the need for the separate memcpy().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
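The new helper writes src1 ^ src2 to a separate destination, so the
second pattern above collapses to a single call; a hedged sketch:

    #include <crypto/algapi.h>	/* crypto_xor_cpy() */

    crypto_xor_cpy(dst, keystream, src, nbytes);	/* dst = keystream ^ src */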
2017-03-09  crypto: arm/aes-neonbs - resolve fallback cipher at runtime  (Ard Biesheuvel)

Currently, the bit sliced NEON AES code for ARM has a link time dependency
on the scalar ARM asm implementation, which it uses as a fallback to
perform CBC encryption and the encryption of the initial XTS tweak.

The bit sliced NEON code is both fast and time invariant, which makes it
a reasonable default on hardware that supports it. However, the ARM asm
code it pulls in is not time invariant, and due to the way it is linked
in, cannot be overridden by the new generic time invariant driver. In
fact, it will not be used at all, given that the ARM asm code registers
itself as a cipher with a priority that exceeds the priority of the fixed
time cipher.

So remove the link time dependency, and allocate the fallback cipher via
the crypto API. Note that this requires this driver's module_init call to
be replaced with late_initcall, so that the (possibly generic) fallback
cipher is guaranteed to be available when the builtin test is performed
at registration time.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
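A hedged sketch of the two pieces described (the hook, struct, and field
names are assumptions):

    static int cbc_init(struct crypto_skcipher *tfm)	/* hypothetical init hook */
    {
    	struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);

    	/* resolve the fallback via the crypto API, not at link time */
    	ctx->enc_tfm = crypto_alloc_cipher("aes", 0, 0);
    	return PTR_ERR_OR_ZERO(ctx->enc_tfm);
    }

    /* register late so a (possibly generic) "aes" is already available
     * when the built-in self-test runs at registration time */
    late_initcall(aes_mod_init);	/* was module_init(aes_mod_init) */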
2017-02-03  crypto: arm/aes - don't use IV buffer to return final keystream block  (Ard Biesheuvel)

The ARM bit sliced AES core code uses the IV buffer to pass the final
keystream block back to the glue code if the input is not a multiple of
the block size, so that the asm code does not have to deal with anything
except 16 byte blocks. This is done under the assumption that the
outgoing IV is meaningless anyway in this case, given that chaining is no
longer possible under these circumstances.

However, as it turns out, the CCM driver does expect the IV to retain a
value that is equal to the original IV except for the counter value, and
even interprets byte zero as a length indicator, which may result in
memory corruption if the IV is overwritten with something else.

So use a separate buffer to return the final keystream block.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-01-13  crypto: arm/aes - replace bit-sliced OpenSSL NEON code  (Ard Biesheuvel)

This replaces the unwieldy generated implementation of bit-sliced AES in
CBC/CTR/XTS modes that originated in the OpenSSL project with a new
version that is heavily based on the OpenSSL implementation, but has a
number of advantages over the old version:

- it does not rely on the scalar AES cipher that also originated in the
  OpenSSL project and contains redundant lookup tables and key schedule
  generation routines (which we already have in crypto/aes_generic)
- it uses the same expanded key schedule for encryption and decryption,
  reducing the size of the per-key data structure by 1696 bytes
- it adds an implementation of AES in ECB mode, which can be wrapped by
  other generic chaining mode implementations
- it moves the handling of corner cases that are non-critical to
  performance to the glue layer written in C
- it was written directly in assembler rather than generated from a Perl
  script

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>