path: root/include/crypto/internal/hash.h
Age | Commit message | Author
2025-06-26 | crypto: ahash - Add crypto_ahash_tested() helper function | Harald Freudenberger
Add a little inline helper function crypto_ahash_tested() to the internal/hash.h header file to retrieve the tested status (that is the CRYPTO_ALG_TESTED bit in the cra_flags). Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
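A minimal sketch of what such a one-line helper might look like, assuming the usual crypto_ahash_tfm()/__crt_alg plumbing (the exact body is not quoted in this log):

    static inline bool crypto_ahash_tested(struct crypto_ahash *tfm)
    {
            struct crypto_tfm *base = crypto_ahash_tfm(tfm);

            return base->__crt_alg->cra_flags & CRYPTO_ALG_TESTED;
    }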
2025-06-26 | crypto: ahash - make hash walk functions from ahash.c public | Harald Freudenberger
Make the hash walk functions crypto_hash_walk_done(), crypto_hash_walk_first() and crypto_hash_walk_last() public again. These functions had been removed from the header file include/crypto/internal/hash.h with commit 7fa481734016 ("crypto: ahash - make hash walk functions private to ahash.c") as there was no crypto algorithm code using them. With the upcoming crypto implementation for s390 phmac these functions will be used and thus need to be public within the kernel again. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Acked-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-06-23 | crypto: ahash - Stop legacy tfms from using the set_virt fallback path | Herbert Xu
Ensure that drivers that have not been converted to the ahash API do not use the ahash_request_set_virt fallback path as they cannot use the software fallback. Reported-by: Eric Biggers <ebiggers@kernel.org> Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-23 | Revert "crypto: testmgr - Add hash export format testing" | Herbert Xu
This reverts commit 18c438b228558e05ede7dccf947a6547516fc0c7. The s390 hmac and sha3 algorithms are failing the test. Revert the change until they have been fixed. Reported-by: Ingo Franzki <ifranzki@linux.ibm.com> Link: https://lore.kernel.org/all/623a7fcb-b4cb-48e6-9833-57ad2b32a252@linux.ibm.com/ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-19 | crypto: testmgr - Add hash export format testing | Herbert Xu
Ensure that the hash state can be exported to and imported from the generic algorithm. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-19 | crypto: hmac - Add ahash support | Herbert Xu
Add ahash support to hmac so that drivers that can't do hmac in hardware do not have to implement duplicate copies of hmac. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-19 | crypto: hash - Add export_core and import_core hooks | Herbert Xu
Add export_core and import_core hooks. These are intended to be used by algorithms which are wrappers around block-only algorithms, but are not themselves block-only, e.g., hmac. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-19 | crypto: hash - Move core export and import into internal/hash.h | Herbert Xu
The core export and import functions are targeted at implementors so move them into internal/hash.h. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 | crypto: ahash - Add HASH_REQUEST_ZERO | Herbert Xu
Add a helper to zero hash stack requests that were never cloned off the stack. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 | crypto: api - Rename CRYPTO_ALG_REQ_CHAIN to CRYPTO_ALG_REQ_VIRT | Herbert Xu
As chaining has been removed, all that remains of REQ_CHAIN is just virtual address support. Rename it before the reintroduction of batching creates confusion. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-28 | crypto: api - Add crypto_stack_request_init and initialise flags fully | Herbert Xu
Add a helper to initialise crypto stack requests and use it for ahash and acomp. Make sure that the flags field is initialised fully in the helper to silence false-positive warnings from the compiler. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202504250751.mdy28Ibr-lkp@intel.com/ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-28 | crypto: api - Add crypto_request_clone and fb | Herbert Xu
Add a helper to clone crypto requests and eliminate code duplication. Use kmemdup in the helper. Also add an fb field to crypto_tfm. This also happens to fix the existing implementations which were buggy. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202504230118.1CxUaUoX-lkp@intel.com/ Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202504230004.c7mrY0C6-lkp@intel.com/ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 | crypto: shash - Handle partial blocks in API | Herbert Xu
Provide an option to handle the partial blocks in the shash API. Almost every hash algorithm has a block size and is only able to hash partial blocks on finalisation. Rather than duplicating the partial block handling many times, add this functionality to the shash API. It is optional (e.g., hmac would never need this, as it relies on the partial block handling of the underlying hash), and to enable it set the bit CRYPTO_AHASH_ALG_BLOCK_ONLY.

The export format is always that of the underlying hash export, plus the partial block buffer, followed by a single byte for the partial block length. Set the bit CRYPTO_AHASH_ALG_FINAL_NONZERO to withhold an extra byte in the partial block. This will come in handy when this is extended to ahash, where hardware often can't deal with a zero-length final. It will also be used for algorithms requiring an extra block for finalisation (e.g., cmac).

As an optimisation, set the bit CRYPTO_AHASH_ALG_FINUP_MAX if the algorithm wishes to get as much data as possible instead of just the last partial block. The descriptor will be zeroed after finalisation.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
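As a rough illustration of the export layout described above (this struct is illustrative only; BASE_EXPORT_SIZE and BLOCK_SIZE stand in for the underlying hash's statesize and block size and are not kernel symbols):

    struct block_only_export_example {
            u8 base_state[BASE_EXPORT_SIZE]; /* export of the underlying hash    */
            u8 partial_block[BLOCK_SIZE];    /* buffered partial block           */
            u8 partial_len;                  /* bytes currently in partial_block */
    };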
2025-04-16 | crypto: hash - Add HASH_REQUEST_ON_STACK | Herbert Xu
Allow any ahash to be used with a stack request, with optional dynamic allocation when async is needed. The intended usage is:

    HASH_REQUEST_ON_STACK(req, tfm);
    ...
    err = crypto_ahash_digest(req);
    /* The request cannot complete synchronously. */
    if (err == -EAGAIN) {
        /* This will not fail. */
        req = HASH_REQUEST_CLONE(req, gfp);

        /* Redo operation. */
        err = crypto_ahash_digest(req);
    }

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 | crypto: ahash - Remove request chaining | Herbert Xu
Request chaining requires the user to do too much bookkeeping. Remove it from ahash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-12 | crypto: ahash - Disable request chaining | Herbert Xu
Disable hash request chaining in case a driver that copies an ahash_request object by hand accidentally triggers chaining. Reported-by: Manorit Chawdhry <m-chawdhry@ti.com> Fixes: f2ffe5a9183d ("crypto: hash - Add request chaining API") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Manorit Chawdhry <m-chawdhry@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 | crypto: ahash - Add virtual address support | Herbert Xu
This patch adds virtual address support to ahash. Virtual addresses were previously only supported through shash. The user may choose to use virtual addresses with ahash by calling ahash_request_set_virt instead of ahash_request_set_crypt. The API will take care of translating this to an SG list if necessary, unless the algorithm declares that it supports chaining. Therefore in order for an ahash algorithm to support chaining, it must also support virtual addresses directly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
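A hedged usage sketch, assuming ahash_request_set_virt() mirrors ahash_request_set_crypt() but takes a linear buffer instead of a scatterlist (buf, out and len are caller-provided names here):

    ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
    ahash_request_set_virt(req, buf, out, len); /* instead of ahash_request_set_crypt() */
    err = crypto_ahash_digest(req);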
2025-02-22 | crypto: hash - Add request chaining API | Herbert Xu
This adds request chaining to the ahash interface. Request chaining allows multiple requests to be submitted in one shot. An algorithm can elect to receive chained requests by setting the flag CRYPTO_ALG_REQ_CHAIN. If this bit is not set, the API will break up chained requests and submit them one-by-one. A new err field is added to struct crypto_async_request to record the return value for each individual request. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-04 | crypto: ahash - make hash walk functions private to ahash.c | Eric Biggers
Due to the removal of the Niagara2 SPU driver, crypto_hash_walk_first(), crypto_hash_walk_done(), crypto_hash_walk_last(), and struct crypto_hash_walk are now only used in crypto/ahash.c. Therefore, make them all private to crypto/ahash.c. I.e. un-export the two functions that were exported, make the functions static, and move the struct definition to the .c file. As part of this, move the functions to earlier in the file to avoid needing to add forward declarations. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-02-02 | crypto: ahash - unexport crypto_hash_alg_has_setkey() | Eric Biggers
Since crypto_hash_alg_has_setkey() is only called from ahash.c itself, make it a static function. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: ahash - remove support for nonzero alignmask | Eric Biggers
Currently, the ahash API checks the alignment of all key and result buffers against the algorithm's declared alignmask, and for any unaligned buffers it falls back to manually aligned temporary buffers.

This is virtually useless, however. First, since it does not apply to the message, its effect is much more limited than e.g. is the case for the alignmask for "skcipher". Second, the key and result buffers are given as virtual addresses and cannot (in general) be DMA'ed into, so drivers end up having to copy to/from them in software anyway. As a result it's easy to use memcpy() or the unaligned access helpers.

The crypto_hash_walk_*() helper functions do use the alignmask to align the message. But with one exception those are only used for shash algorithms being exposed via the ahash API, not for native ahashes, and aligning the message is not required in this case, especially now that alignmask support has been removed from shash. The exception is the n2_core driver, which doesn't set an alignmask.

In any case, no ahash algorithms actually set a nonzero alignmask anymore. Therefore, remove support for it from ahash. The benefit is that all the code to handle "misaligned" buffers in the ahash API goes away, reducing the overhead of the ahash API. This follows the same change that was made to shash.

Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27 | crypto: shash - remove crypto_shash_ctx_aligned() | Eric Biggers
crypto_shash_ctx_aligned() is no longer used, and it is useless now that shash algorithms don't support nonzero alignmasks, so remove it. Also remove crypto_tfm_ctx_aligned() which was only called by crypto_shash_ctx_aligned(). It's unlikely to be useful again, since it seems inappropriate to use cra_alignmask to represent alignment for the tfm context when it already means alignment for inputs/outputs. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-05-12 | crypto: hash - Make crypto_ahash_alg helper available | Herbert Xu
Move the crypto_ahash_alg helper into include/crypto/internal so that drivers can use it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-05-12 | crypto: hash - Add statesize to crypto_ahash | Herbert Xu
As ahash drivers may need to use fallbacks, their state size is variable. Deal with this by making it an attribute of crypto_ahash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-20 | crypto: hash - Add crypto_clone_ahash/shash | Herbert Xu
This patch adds the helpers crypto_clone_ahash and crypto_clone_shash. They are the hash-specific counterparts of crypto_clone_tfm. This allows code paths that cannot otherwise allocate a hash tfm object to do so. Once a new tfm has been obtained, its key can then be changed without impacting other users. Note that only algorithms that implement clone_tfm can be cloned. However, all keyless hashes can be cloned by simply reusing the tfm object. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
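A short usage sketch of the cloning helper described above (error handling abbreviated; newkey/newkeylen are only to show that the clone can be re-keyed without affecting the original tfm):

    struct crypto_ahash *clone;

    clone = crypto_clone_ahash(tfm);
    if (IS_ERR(clone))
            return PTR_ERR(clone);

    /* The clone may now be keyed independently of 'tfm'. */
    err = crypto_ahash_setkey(clone, newkey, newkeylen);
    ...
    crypto_free_ahash(clone);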
2023-02-13 | crypto: hash - Use crypto_request_complete | Herbert Xu
Use the crypto_request_complete helper instead of calling the completion function directly. This patch also removes the voodoo programming previously used for unaligned ahash operations and replaces it with a sub-request. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 | crypto: hash - Add ctx helpers with DMA alignment | Herbert Xu
This patch adds helpers to access the ahash context structure and request context structure with an added alignment for DMA access. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-25 | Revert "crypto: shash - avoid comparing pointers to exported functions under CFI" | Eric Biggers
This reverts commit 22ca9f4aaf431a9413dcc115dd590123307f274f because CFI no longer breaks cross-module function address equality, so crypto_shash_alg_has_setkey() can now be an inline function like before. This commit should not be backported to kernels that don't have the new CFI implementation. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-06-17 | crypto: shash - avoid comparing pointers to exported functions under CFI | Ard Biesheuvel
crypto_shash_alg_has_setkey() is implemented by testing whether the .setkey() member of a struct shash_alg points to the default version, called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static inline, this requires shash_no_setkey() to be exported to modules. Unfortunately, when building with CFI, function pointers are routed via CFI stubs which are private to each module (or to the kernel proper) and so this function pointer comparison may fail spuriously. Let's fix this by turning crypto_shash_alg_has_setkey() into an out of line function. Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
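For context, the pointer comparison described above looked roughly like this as a static inline (reconstructed from the description, not copied from the header):

    static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
    {
            return alg->setkey != shash_no_setkey;
    }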
2020-08-28 | crypto: ahash - Add ahash_alg_instance | Herbert Xu
This patch adds the helper ahash_alg_instance which is used to convert a crypto_ahash object into its corresponding ahash_instance. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-08-21 | crypto: hash - Remove unused async iterators | Ira Weiny
Revert "crypto: hash - Add real ahash walk interface" This reverts commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4. The callers of the functions in this commit were removed in ab8085c130ed Remove these unused calls. Fixes: ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd") Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: shash - convert shash_free_instance() to new style | Eric Biggers
Convert shash_free_instance() and its users to the new way of freeing instances, where a ->free() method is installed to the instance struct itself. This replaces the weakly-typed method crypto_template::free(). This will allow removing support for the old way of freeing instances. Also give shash_free_instance() a more descriptive name to reflect that it's only for instances with a single spawn, not for any instance. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: hash - add support for new way of freeing instances | Eric Biggers
Add support to shash and ahash for the new way of freeing instances (already used for skcipher, aead, and akcipher) where a ->free() method is installed to the instance struct itself. These methods are more strongly-typed than crypto_template::free(), which they replace. This will allow removing support for the old way of freeing instances. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: ahash - unexport crypto_ahash_type | Eric Biggers
Now that all the templates that need ahash spawns have been converted to use crypto_grab_ahash() rather than look up the algorithm directly, crypto_ahash_type is no longer used outside of ahash.c. Make it static. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: algapi - remove obsoleted instance creation helpers | Eric Biggers
Remove lots of helper functions that were previously used for instantiating crypto templates, but are now unused:

- crypto_get_attr_alg() and similar functions looked up an inner algorithm directly from a template parameter. These were replaced with getting the algorithm's name, then calling crypto_grab_*().

- crypto_init_spawn2() and similar functions initialized a spawn, given an algorithm. Similarly, these were replaced with crypto_grab_*().

- crypto_alloc_instance() and similar functions allocated an instance with a single spawn, given the inner algorithm. These aren't useful anymore since crypto_grab_*() need the instance allocated first.

Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: ahash - introduce crypto_grab_ahash() | Eric Biggers
Currently, ahash spawns are initialized by using ahash_attr_alg() or crypto_find_alg() to look up the ahash algorithm, then calling crypto_init_ahash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity. The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now. So, let's introduce crypto_grab_ahash() so that we can convert all templates to the same way of initializing their spawns. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: shash - introduce crypto_grab_shash() | Eric Biggers
Currently, shash spawns are initialized by using shash_attr_alg() or crypto_alg_mod_lookup() to look up the shash algorithm, then calling crypto_init_shash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity. The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now. So, let's introduce crypto_grab_shash() so that we can convert all templates to the same way of initializing their spawns. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: ahash - make struct ahash_instance be the full size | Eric Biggers
Define struct ahash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end. This is needed to allow allocating ahash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere. Also take advantage of the addition of the base instance to struct ahash_instance by simplifying the ahash_crypto_instance() and ahash_instance() functions. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: shash - make struct shash_instance be the full size | Eric Biggers
Define struct shash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end. This is needed to allow allocating shash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere. Also take advantage of the addition of the base instance to struct shash_instance by simplifying the shash_crypto_instance() and shash_instance() functions. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-20 | crypto: algapi - make unregistration functions return void | Eric Biggers
Some of the algorithm unregistration functions return -ENOENT when asked to unregister a non-registered algorithm, while others always return 0 or always return void. But no users check the return value, except for two of the bulk unregistration functions which print a message on error but still always return 0 to their caller, and crypto_del_alg() which calls crypto_unregister_instance() which always returns 0. Since unregistering a non-registered algorithm is always a kernel bug but there isn't anything callers should do to handle this situation at runtime, let's simplify things by making all the unregistration functions return void, and moving the error message into crypto_unregister_alg() and upgrading it to a WARN(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-11 | crypto: hmac - Use init_tfm/exit_tfm interface | Herbert Xu
This patch switches hmac over to the new init_tfm/exit_tfm interface as opposed to cra_init/cra_exit. This way the shash API can make sure that descsize does not exceed the maximum. This patch also adds the API helper shash_alg_instance. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-11 | crypto: shash - allow essiv and hmac to use OPTIONAL_KEY algorithms | Eric Biggers
The essiv and hmac templates refuse to use any hash algorithm that has a ->setkey() function, which includes not just algorithms that always need a key, but also algorithms that optionally take a key. Previously the only optionally-keyed hash algorithms in the crypto API were non-cryptographic algorithms like crc32, so this didn't really matter. But that's changed with BLAKE2 support being added. BLAKE2 should work with essiv and hmac, just like any other cryptographic hash. Fix this by allowing the use of both algorithms without a ->setkey() function and algorithms that have the OPTIONAL_KEY flag set. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
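A hedged sketch of the kind of acceptance check this implies when a template picks its underlying shash (the exact test in hmac/essiv may differ; CRYPTO_ALG_OPTIONAL_KEY is the flag referred to above, and salg is the candidate shash_alg):

    /* Accept hashes that are unkeyed or only optionally keyed. */
    if (crypto_shash_alg_has_setkey(salg) &&
        !(salg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
            return -EINVAL;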
2019-07-08 | Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 | Linus Torvalds
Pull crypto updates from Herbert Xu:
 "Here is the crypto update for 5.3:

  API:
   - Test shash interface directly in testmgr
   - cra_driver_name is now mandatory

  Algorithms:
   - Replace arc4 crypto_cipher with library helper
   - Implement 5 way interleave for ECB, CBC and CTR on arm64
   - Add xxhash
   - Add continuous self-test on noise source to drbg
   - Update jitter RNG

  Drivers:
   - Add support for SHA204A random number generator
   - Add support for 7211 in iproc-rng200
   - Fix fuzz test failures in inside-secure
   - Fix fuzz test failures in talitos
   - Fix fuzz test failures in qat"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
  crypto: stm32/hash - remove interruptible condition for dma
  crypto: stm32/hash - Fix hmac issue more than 256 bytes
  crypto: stm32/crc32 - rename driver file
  crypto: amcc - remove memset after dma_alloc_coherent
  crypto: ccp - Switch to SPDX license identifiers
  crypto: ccp - Validate the the error value used to index error messages
  crypto: doc - Fix formatting of new crypto engine content
  crypto: doc - Add parameter documentation
  crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
  crypto: arm64/aes-ce - add 5 way interleave routines
  crypto: talitos - drop icv_ool
  crypto: talitos - fix hash on SEC1.
  crypto: talitos - move struct talitos_edesc into talitos.h
  lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
  crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
  crypto: asymmetric_keys - select CRYPTO_HASH where needed
  crypto: serpent - mark __serpent_setkey_sbox noinline
  crypto: testmgr - dynamically allocate crypto_shash
  crypto: testmgr - dynamically allocate testvec_config
  crypto: talitos - eliminate unneeded 'done' functions at build time
  ...
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 | Thomas Gleixner
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-30 | crypto: algapi - remove crypto_tfm_in_queue() | Eric Biggers
Remove the crypto_tfm_in_queue() function, which is unused. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-11 | crypto: algapi - remove crypto_alloc_instance() | Eric Biggers
Now that all "blkcipher" templates have been converted to "skcipher", crypto_alloc_instance() is no longer used. And it's not useful any longer as it creates an old-style weakly typed instance rather than a new-style strongly typed instance. So remove it, and now that the name is freed up rename crypto_alloc_instance2() to crypto_alloc_instance(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-15 | crypto: mcryptd - remove pointless wrapper functions | Eric Biggers
There is no need for ahash_mcryptd_{update,final,finup,digest}(); we should just call crypto_ahash_*() directly. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12 | crypto: hash - introduce crypto_hash_alg_has_setkey() | Eric Biggers
Templates that use an shash spawn can use crypto_shash_alg_has_setkey() to determine whether the underlying algorithm requires a key or not. But there was no corresponding function for ahash spawns. Add it. Note that the new function actually has to support both shash and ahash algorithms, since the ahash API can be used with either. Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
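A hedged usage sketch for the new helper inside an ahash-based template, assuming it takes the hash_alg_common of the spawned algorithm ('hash' stands for a crypto_ahash obtained by the template):

    struct hash_alg_common *halg = crypto_hash_alg_common(hash);

    /* e.g. refuse keyed underlying hashes, as the hmac template does */
    if (crypto_hash_alg_has_setkey(halg))
            return -EINVAL;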
2017-11-29 | crypto: hmac - require that the underlying hash algorithm is unkeyed | Eric Biggers
Because the HMAC template didn't check that its underlying hash algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))" through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC being used without having been keyed, resulting in sha3_update() being called without sha3_init(), causing a stack buffer overflow.

This is a very old bug, but it seems to have only started causing real problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3) because the innermost hash's state is ->import()ed from a zeroed buffer, and it just so happens that other hash algorithms are fine with that, but SHA-3 is not. However, there could be arch or hardware-dependent hash algorithms also affected; I couldn't test everything.

Fix the bug by introducing a function crypto_shash_alg_has_setkey() which tests whether a shash algorithm is keyed. Then update the HMAC template to require that its underlying hash algorithm is unkeyed.

Here is a reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>

    int main()
    {
        int algfd;
        struct sockaddr_alg addr = {
            .salg_type = "hash",
            .salg_name = "hmac(hmac(sha3-512-generic))",
        };
        char key[4096] = { 0 };

        algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(algfd, (const struct sockaddr *)&addr, sizeof(addr));
        setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
    }

Here was the KASAN report from syzbot:

    BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:341 [inline]
    BUG: KASAN: stack-out-of-bounds in sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
    Write of size 4096 at addr ffff8801cca07c40 by task syzkaller076574/3044

    CPU: 1 PID: 3044 Comm: syzkaller076574 Not tainted 4.14.0-mm1+ #25
    Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
    Call Trace:
     __dump_stack lib/dump_stack.c:17 [inline]
     dump_stack+0x194/0x257 lib/dump_stack.c:53
     print_address_description+0x73/0x250 mm/kasan/report.c:252
     kasan_report_error mm/kasan/report.c:351 [inline]
     kasan_report+0x25b/0x340 mm/kasan/report.c:409
     check_memory_region_inline mm/kasan/kasan.c:260 [inline]
     check_memory_region+0x137/0x190 mm/kasan/kasan.c:267
     memcpy+0x37/0x50 mm/kasan/kasan.c:303
     memcpy include/linux/string.h:341 [inline]
     sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
     crypto_shash_update+0xcb/0x220 crypto/shash.c:109
     shash_finup_unaligned+0x2a/0x60 crypto/shash.c:151
     crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
     hmac_finup+0x182/0x330 crypto/hmac.c:152
     crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
     shash_digest_unaligned+0x9e/0xd0 crypto/shash.c:172
     crypto_shash_digest+0xc4/0x120 crypto/shash.c:186
     hmac_setkey+0x36a/0x690 crypto/hmac.c:66
     crypto_shash_setkey+0xad/0x190 crypto/shash.c:64
     shash_async_setkey+0x47/0x60 crypto/shash.c:207
     crypto_ahash_setkey+0xaf/0x180 crypto/ahash.c:200
     hash_setkey+0x40/0x90 crypto/algif_hash.c:446
     alg_setkey crypto/af_alg.c:221 [inline]
     alg_setsockopt+0x2a1/0x350 crypto/af_alg.c:254
     SYSC_setsockopt net/socket.c:1851 [inline]
     SyS_setsockopt+0x189/0x360 net/socket.c:1830
     entry_SYSCALL_64_fastpath+0x1f/0x96

Reported-by: syzbot <syzkaller@googlegroups.com> Cc: <stable@vger.kernel.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-08-22 | crypto: hash - add crypto_(un)register_ahashes() | Rabin Vincent
There are already helpers to (un)register multiple normal and AEAD algos. Add one for ahashes too. Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Rabin Vincent <rabinv@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
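A short usage sketch of the bulk helpers added here, assuming a driver-local array of ahash_alg definitions (driver_ahash_algs is an illustrative name):

    static struct ahash_alg driver_ahash_algs[] = {
            /* ... ahash_alg definitions ... */
    };

    err = crypto_register_ahashes(driver_ahash_algs, ARRAY_SIZE(driver_ahash_algs));
    if (err)
            return err;
    ...
    crypto_unregister_ahashes(driver_ahash_algs, ARRAY_SIZE(driver_ahash_algs));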