path: root/include/crypto/internal
2022-12-02  crypto: hash - Add ctx helpers with DMA alignment  (Herbert Xu; 1 file, +22/-0)
This patch adds helpers to access the ahash context structure and request context structure with an added alignment for DMA access.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
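For illustration, a minimal sketch of the accessor pattern such helpers follow, assuming a crypto_dma_align() query alongside the existing ahash_request_ctx() accessor (the exact identifiers are assumptions; the aead helpers in the next entry are analogous):

  /* Return the request context, aligned for DMA if the platform
   * requires stricter alignment than the tfm context already has. */
  static inline void *ahash_request_ctx_dma(struct ahash_request *req)
  {
          unsigned int align = crypto_dma_align();

          if (align <= crypto_tfm_ctx_alignment())
                  align = 1;

          return PTR_ALIGN(ahash_request_ctx(req), align);
  }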
2022-12-02  crypto: aead - Add ctx helpers with DMA alignment  (Herbert Xu; 1 file, +22/-0)
This patch adds helpers to access the aead context structure and request context structure with an added alignment for DMA access.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02  crypto: Prepare to move crypto_tfm_ctx  (Herbert Xu; 2 files, +4/-1)
The helper crypto_tfm_ctx is only used by the Crypto API algorithm code and should really be in algapi.h. However, for historical reasons many files relied on it being in crypto.h. This patch changes those files to use algapi.h instead, in preparation for the move.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02  crypto: kpp - Move reqsize into tfm  (Herbert Xu; 1 file, +1/-1)
The value of reqsize cannot be determined in case of fallbacks. Therefore it must be stored in the tfm and not the alg object.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02  crypto: akcipher - Move reqsize into tfm  (Herbert Xu; 1 file, +1/-1)
The value of reqsize cannot be determined in case of fallbacks. Therefore it must be stored in the tfm and not the alg object.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02  crypto: kpp - Add helper to set reqsize  (Herbert Xu; 1 file, +6/-0)
The value of reqsize should only be changed through a helper, so this patch first adds that helper.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
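The helper itself amounts to a one-line setter on the tfm, roughly (a sketch mirroring the akcipher equivalent; the exact name is an assumption):

  static inline void crypto_kpp_set_reqsize(struct crypto_kpp *kpp,
                                            unsigned int reqsize)
  {
          kpp->reqsize = reqsize;
  }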
2022-11-25  Revert "crypto: shash - avoid comparing pointers to exported functions under CFI"  (Eric Biggers; 1 file, +7/-1)
This reverts commit 22ca9f4aaf431a9413dcc115dd590123307f274f because CFI no longer breaks cross-module function address equality, so crypto_shash_alg_has_setkey() can now be an inline function like before.

This commit should not be backported to kernels that don't have the new CFI implementation.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
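The restored inline boils down to a pointer comparison against the default setkey stub, roughly (see the 2021-06-17 entry below for the version being reverted; assumes shash_no_setkey is exported again):

  static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
  {
          return alg->setkey != shash_no_setkey;
  }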
2022-11-18  crypto: skcipher - Allow sync algorithms with large request contexts  (Herbert Xu; 1 file, +8/-0)
Some sync algorithms may require a large amount of temporary space during their operations. There is no reason why they should be limited just because some legacy users want to place all temporary data on the stack. Such algorithms can now set a flag to indicate that they need extra request context, which will cause them to be invisible to users that go through the sync_skcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
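The stack limit being sidestepped comes from how sync users allocate requests; include/crypto/skcipher.h bounds on-stack request contexts along these lines (illustrative; the new opt-out flag itself is not shown):

  /* Sync users embed the request plus its context on the stack,
   * so the context size must have a hard upper bound. */
  #define MAX_SYNC_SKCIPHER_REQSIZE      384

  #define SYNC_SKCIPHER_REQUEST_ON_STACK(name, tfm) \
          char __##name##_desc[sizeof(struct skcipher_request) + \
                               MAX_SYNC_SKCIPHER_REQSIZE] CRYPTO_MINALIGN_ATTR; \
          struct skcipher_request *name = (void *)__##name##_desc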
2022-09-30  crypto: aead - Remove unused inline functions from aead  (Gaosheng Cui; 1 file, +0/-25)
The aead_enqueue_request, aead_dequeue_request and aead_get_backlog helpers have been unused since commit 04a4616e6a21 ("crypto: omap-aes-gcm - convert to use crypto engine"); their functionality has been replaced by the crypto engine, so remove them.

Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-06-10  crypto: blake2s - remove shash module  (Jason A. Donenfeld; 1 file, +0/-108)
BLAKE2s has no currently known use as an shash. Just remove all of this unnecessary plumbing. Removing this shash was something we talked about back when we were making BLAKE2s a built-in, but I simply never got around to doing it. So this completes that project.

Importantly, this fixes a bug in which the lib code depends on crypto_simd_disabled_for_test, causing linker errors.

Also add more alignment tests to the selftests and compare SIMD and non-SIMD compression functions, to make up for what we lose from testmgr.c.

Reported-by: gaochao <gaochao49@huawei.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: stable@vger.kernel.org
Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-22  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds; 1 file, +158/-0)
Pull crypto updates from Herbert Xu:

 "API:
   - hwrng core now credits for low-quality RNG devices.

  Algorithms:
   - Optimisations for neon aes on arm/arm64.
   - Add accelerated crc32_be on arm64.
   - Add ffdheXYZ(dh) templates.
   - Disallow hmac keys < 112 bits in FIPS mode.
   - Add AVX assembly implementation for sm3 on x86.

  Drivers:
   - Add missing local_bh_disable calls for crypto_engine callback.
   - Ensure BH is disabled in crypto_engine callback path.
   - Fix zero length DMA mappings in ccree.
   - Add synchronization between mailbox accesses in octeontx2.
   - Add Xilinx SHA3 driver.
   - Add support for the TDES IP available on sama7g5 SoC in atmel"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (137 commits)
  crypto: xilinx - Turn SHA into a tristate and allow COMPILE_TEST
  MAINTAINERS: update HPRE/SEC2/TRNG driver maintainers list
  crypto: dh - Remove the unused function dh_safe_prime_dh_alg()
  hwrng: nomadik - Change clk_disable to clk_disable_unprepare
  crypto: arm64 - cleanup comments
  crypto: qat - fix initialization of pfvf rts_map_msg structures
  crypto: qat - fix initialization of pfvf cap_msg structures
  crypto: qat - remove unneeded assignment
  crypto: qat - disable registration of algorithms
  crypto: hisilicon/qm - fix memset during queues clearing
  crypto: xilinx: prevent probing on non-xilinx hardware
  crypto: marvell/octeontx - Use swap() instead of open coding it
  crypto: ccree - Fix use after free in cc_cipher_exit()
  crypto: ccp - ccp_dmaengine_unregister release dma channels
  crypto: octeontx2 - fix missing unlock
  hwrng: cavium - fix NULL but dereferenced coccicheck error
  crypto: cavium/nitrox - don't cast parameter in bit operations
  crypto: vmx - add missing dependencies
  MAINTAINERS: Add maintainer for Xilinx ZynqMP SHA3 driver
  crypto: xilinx - Add Xilinx SHA3 driver
  ...
2022-03-03  crypto: kpp - provide support for KPP spawns  (Nicolai Stange; 1 file, +75/-0)
The upcoming support for the RFC 7919 ffdhe group parameters will be made available in the form of templates like "ffdhe2048(dh)", "ffdhe3072(dh)" and so on. Template instantiations thereof would wrap the inner "dh" kpp_alg and also provide kpp_alg services to the outside again. The primitives needed for providing kpp_alg services from template instances have been introduced with the previous patch.

Continue this work now and implement everything needed for enabling template instances to make use of inner KPP algorithms like "dh". More specifically, define a struct crypto_kpp_spawn in close analogy to crypto_skcipher_spawn, crypto_shash_spawn and alike. Implement a crypto_grab_kpp() and crypto_drop_kpp() pair for binding such a spawn to some inner kpp_alg and for releasing it respectively. Template implementations can instantiate transforms from the underlying kpp_alg by means of the new crypto_spawn_kpp(). Finally, provide the crypto_spawn_kpp_alg() helper for accessing a spawn's underlying kpp_alg during template instantiation.

Annotate everything with proper kernel-doc comments, even though include/crypto/internal/kpp.h is not considered for the generated docs.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
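Reconstructed from the description above, the new API surface looks like this (prototypes only; include/crypto/internal/kpp.h is authoritative):

  int crypto_grab_kpp(struct crypto_kpp_spawn *spawn,
                      struct crypto_instance *inst,
                      const char *name, u32 type, u32 mask);
  void crypto_drop_kpp(struct crypto_kpp_spawn *spawn);
  struct crypto_kpp *crypto_spawn_kpp(struct crypto_kpp_spawn *spawn);
  struct kpp_alg *crypto_spawn_kpp_alg(struct crypto_kpp_spawn *spawn);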
2022-03-03  crypto: kpp - provide support for KPP template instances  (Nicolai Stange; 1 file, +83/-0)
The upcoming support for the RFC 7919 ffdhe group parameters will be made available in the form of templates like "ffdhe2048(dh)", "ffdhe3072(dh)" and so on. Template instantiations thereof would wrap the inner "dh" kpp_alg and also provide kpp_alg services to the outside again. Furthermore, it might perhaps be desirable to provide KDF templates in the future, which would similarly wrap an inner kpp_alg and present themselves to the outside as another kpp_alg, transforming the shared secret on its way out.

Introduce the bits needed for supporting KPP template instances. Everything related to inner kpp_alg spawns potentially being held by such template instances will be deferred to a subsequent patch in order to facilitate review.

Define struct kpp_instance in close analogy to the already existing skcipher_instance, shash_instance and alike, but wrapping a struct kpp_alg. Implement the new kpp_register_instance() template instance registration primitive. Provide some helper functions for:
 - going back and forth between a generic struct crypto_instance and the new struct kpp_instance,
 - obtaining the instantiating kpp_instance from a crypto_kpp transform, and
 - accessing a given kpp_instance's implementation specific context data.

Annotate everything with proper kernel-doc comments, even though include/crypto/internal/kpp.h is not considered for the generated docs.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
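Putting the two patches together, a template's ->create() might look roughly like this (a sketch under the helper names stated above; "example" is a hypothetical template):

  static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
  {
          struct kpp_instance *inst;
          struct crypto_kpp_spawn *spawn;
          int err;

          inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
          if (!inst)
                  return -ENOMEM;
          spawn = kpp_instance_ctx(inst);

          /* Bind the spawn to the inner kpp_alg, e.g. "dh". */
          err = crypto_grab_kpp(spawn, kpp_crypto_instance(inst),
                                crypto_attr_alg_name(tb[1]), 0, 0);
          if (err) {
                  kfree(inst);
                  return err;
          }

          /* ... fill in inst->alg based on crypto_spawn_kpp_alg(spawn) ... */

          err = kpp_register_instance(tmpl, inst);
          if (err) {
                  crypto_drop_kpp(spawn);
                  kfree(inst);
          }
          return err;
  }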
2022-02-04  lib/crypto: blake2s: avoid indirect calls to compression function for Clang CFI  (Jason A. Donenfeld; 1 file, +25/-15)
blake2s_compress_generic is weakly aliased by blake2s_compress. The current harness for function selection uses a function pointer, which is ordinarily inlined and resolved at compile time. But when Clang's CFI is enabled, CFI still triggers when making an indirect call via a weak symbol. This seems like a bug in Clang's CFI, as though it's bucketing weak symbols and strong symbols differently. It also only seems to trigger when "full LTO" mode is used, rather than "thin LTO".

  [ 0.000000][ T0] Kernel panic - not syncing: CFI failure (target: blake2s_compress_generic+0x0/0x1444)
  [ 0.000000][ T0] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.16.0-mainline-06981-g076c855b846e #1
  [ 0.000000][ T0] Hardware name: MT6873 (DT)
  [ 0.000000][ T0] Call trace:
  [ 0.000000][ T0]  dump_backtrace+0xfc/0x1dc
  [ 0.000000][ T0]  dump_stack_lvl+0xa8/0x11c
  [ 0.000000][ T0]  panic+0x194/0x464
  [ 0.000000][ T0]  __cfi_check_fail+0x54/0x58
  [ 0.000000][ T0]  __cfi_slowpath_diag+0x354/0x4b0
  [ 0.000000][ T0]  blake2s_update+0x14c/0x178
  [ 0.000000][ T0]  _extract_entropy+0xf4/0x29c
  [ 0.000000][ T0]  crng_initialize_primary+0x24/0x94
  [ 0.000000][ T0]  rand_initialize+0x2c/0x6c
  [ 0.000000][ T0]  start_kernel+0x2f8/0x65c
  [ 0.000000][ T0]  __primary_switched+0xc4/0x7be4
  [ 0.000000][ T0] Rebooting in 5 seconds..

Nonetheless, the function pointer method isn't so terrific anyway, so this patch replaces it with a simple boolean, which also gets inlined away. This successfully works around the Clang bug.

In general, I'm not too keen on all of the indirection involved here; it clearly does more harm than good. Hopefully the whole thing can get cleaned up down the road when lib/crypto is overhauled more comprehensively. But for now, we go with a simple bandaid.

Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in")
Link: https://github.com/ClangBuiltLinux/linux/issues/1567
Reported-by: Miles Chen <miles.chen@mediatek.com>
Tested-by: Miles Chen <miles.chen@mediatek.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
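The shape of the fix, as a sketch (blake2s_compress_dispatch is a hypothetical name; the real logic lives inside __blake2s_update() in include/crypto/internal/blake2s.h):

  /* A bool chooses the implementation. Callers pass a compile-time
   * constant, so the branch folds away and no indirect call through
   * the weak blake2s_compress symbol is ever emitted. */
  static inline void blake2s_compress_dispatch(struct blake2s_state *state,
                                               const u8 *block, size_t nblocks,
                                               u32 inc, bool force_generic)
  {
          if (force_generic)
                  blake2s_compress_generic(state, block, nblocks, inc);
          else
                  blake2s_compress(state, block, nblocks, inc);
  }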
2022-01-11  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds; 1 file, +71/-0)
Pull crypto updates from Herbert Xu:

 "Algorithms:
   - Drop alignment requirement for data in aesni
   - Use synchronous seeding from the /dev/random in DRBG
   - Reseed nopr DRBGs every 5 minutes from /dev/random
   - Add KDF algorithms currently used by security/DH
   - Fix lack of entropy on some AMD CPUs with jitter RNG

  Drivers:
   - Add support for the D1 variant in sun8i-ce
   - Add SEV_INIT_EX support in ccp
   - PFVF support for GEN4 host driver in qat
   - Compression support for GEN4 devices in qat
   - Add cn10k random number generator support"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (145 commits)
  crypto: af_alg - rewrite NULL pointer check
  lib/mpi: Add the return value check of kcalloc()
  crypto: qat - fix definition of ring reset results
  crypto: hisilicon - cleanup warning in qm_get_qos_value()
  crypto: kdf - select SHA-256 required for self-test
  crypto: x86/aesni - don't require alignment of data
  crypto: ccp - remove unneeded semicolon
  crypto: stm32/crc32 - Fix kernel BUG triggered in probe()
  crypto: s390/sha512 - Use macros instead of direct IV numbers
  crypto: sparc/sha - remove duplicate hash init function
  crypto: powerpc/sha - remove duplicate hash init function
  crypto: mips/sha - remove duplicate hash init function
  crypto: sha256 - remove duplicate generic hash init function
  crypto: jitter - add oversampling of noise source
  MAINTAINERS: update SEC2 driver maintainers list
  crypto: ux500 - Use platform_get_irq() to get the interrupt
  crypto: hisilicon/qm - disable qm clock-gating
  crypto: omap-aes - Fix broken pm_runtime_and_get() usage
  MAINTAINERS: update caam crypto driver maintainers list
  crypto: octeontx2 - prevent underflow in get_cores_bmap()
  ...
2022-01-07  lib/crypto: blake2s: include as built-in  (Jason A. Donenfeld; 1 file, +3/-3)
In preparation for using blake2s in the RNG, we change the way that it is wired-in to the build system. Instead of using ifdefs to select the right symbol, we use weak symbols. And because ARM doesn't need the generic implementation, we make the generic one default only if an arch library doesn't need it already, and then have arch libraries that do need it opt-in. So that the arch libraries can remain tristate rather than bool, we then split the shash part from the glue code.

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: linux-kbuild@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-11-26  crypto: kdf - Add key derivation self-test support code  (Stephan Müller; 1 file, +71/-0)
As a preparation for adding the key derivation implementations, the self-test data structure definition and the common test code are made available. The test framework follows the testing applied by the NIST CAVP test approach.

The structure of the test code follows the implementations found in crypto/testmgr.c|h. In case the KDF implementations are made available via kernel crypto API templates, the test code is intended to be merged into testmgr.c|h.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-10-29  crypto: ecc - Export additional helper functions  (Daniele Alessandrelli; 1 file, +36/-0)
Export the following additional ECC helper functions:
 - ecc_alloc_point()
 - ecc_free_point()
 - vli_num_bits()
 - ecc_point_is_zero()

This is done to allow future ECC device drivers to re-use existing code, thus simplifying their implementation.

Functions are exported using EXPORT_SYMBOL() (instead of EXPORT_SYMBOL_GPL()) to be consistent with the functions already exported by crypto/ecc.c. Exported functions are documented in include/crypto/internal/ecc.h.

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
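A driver-side sketch using the newly exported helpers (example_check_pubkey is a hypothetical caller; signatures per include/crypto/internal/ecc.h):

  static int example_check_pubkey(const struct ecc_point *pk,
                                  unsigned int ndigits)
  {
          struct ecc_point *tmp;

          /* The point at infinity is not a valid public key. */
          if (ecc_point_is_zero(pk))
                  return -EINVAL;

          tmp = ecc_alloc_point(ndigits);
          if (!tmp)
                  return -ENOMEM;

          /* ... point arithmetic using tmp ... */

          ecc_free_point(tmp);
          return 0;
  }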
2021-10-29  crypto: ecc - Move ecc.h to include/crypto/internal  (Daniele Alessandrelli; 1 file, +245/-0)
Move the ecc.h header file to 'include/crypto/internal' so that it can be easily imported from everywhere in the kernel tree. This change is done to allow crypto device drivers to re-use the symbols exported by 'crypto/ecc.c', thus avoiding code duplication.

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-06-17  crypto: shash - avoid comparing pointers to exported functions under CFI  (Ard Biesheuvel; 1 file, +1/-7)
crypto_shash_alg_has_setkey() is implemented by testing whether the .setkey() member of a struct shash_alg points to the default version, called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static inline, this requires shash_no_setkey() to be exported to modules.

Unfortunately, when building with CFI, function pointers are routed via CFI stubs which are private to each module (or to the kernel proper), and so this function pointer comparison may fail spuriously.

Let's fix this by turning crypto_shash_alg_has_setkey() into an out of line function.

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-04-02  crypto: poly1305 - fix poly1305_core_setkey() declaration  (Arnd Bergmann; 1 file, +2/-1)
gcc-11 points out a mismatch between the declaration and the definition of poly1305_core_setkey():

  lib/crypto/poly1305-donna32.c:13:67: error: argument 2 of type ‘const u8[16]’ {aka ‘const unsigned char[16]’} with mismatched bound [-Werror=array-parameter=]
     13 | void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
        |                                                          ~~~~~~~~~^~~~~~~~~~~
  In file included from lib/crypto/poly1305-donna32.c:11:
  include/crypto/internal/poly1305.h:21:68: note: previously declared as ‘const u8 *’ {aka ‘const unsigned char *’}
     21 | void poly1305_core_setkey(struct poly1305_core_key *key, const u8 *raw_key);

This is harmless in principle, as the calling conventions are the same, but the more specific prototype allows better type checking in the caller. Change the declaration to match the actual function definition.

The poly1305_simd_init() is a bit suspicious here, as it previously had a 32-byte argument type, but looks like it needs to take the 16-byte POLY1305_BLOCK_SIZE array instead.

Fixes: 1c08a104360f ("crypto: poly1305 - add new 32 and 64-bit generic versions")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
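The corrected declaration simply mirrors the definition's array bound (POLY1305_BLOCK_SIZE is 16):

  void poly1305_core_setkey(struct poly1305_core_key *key,
                            const u8 raw_key[POLY1305_BLOCK_SIZE]);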
2021-01-14  crypto: x86 - remove glue helper module  (Ard Biesheuvel; 1 file, +0/-1)
All dependencies on the x86 glue helper module have been replaced by local instantiations of the new ECB/CBC preprocessor helper macros, so the glue helper module can be retired.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2b - sync with blake2s implementation  (Eric Biggers; 1 file, +115/-0)
Sync the BLAKE2b code with the BLAKE2s code as much as possible:

 - Move a lot of code into new headers <crypto/blake2b.h> and <crypto/internal/blake2b.h>, and adjust it to be like the corresponding BLAKE2s code, i.e. like <crypto/blake2s.h> and <crypto/internal/blake2s.h>.
 - Rename constants, e.g. BLAKE2B_*_DIGEST_SIZE => BLAKE2B_*_HASH_SIZE.
 - Use a macro BLAKE2B_ALG() to define the shash_alg structs.
 - Export blake2b_compress_generic() for use as a fallback.

This makes it much easier to add optimized implementations of BLAKE2b, as optimized implementations can use the helper functions crypto_blake2b_{setkey,init,update,final}() and blake2b_compress_generic(). The ARM implementation will use these.

But this change is also helpful because it eliminates unnecessary differences between the BLAKE2b and BLAKE2s code, so that the same improvements can easily be made to both. (The two algorithms are basically identical, except for the word size and constants.) It also makes it straightforward to add a library API for BLAKE2b in the future if/when it's needed.

This change does make the BLAKE2b code slightly more complicated than it needs to be, as it doesn't actually provide a library API yet. For example, __blake2b_update() doesn't really need to exist yet; it could just be inlined into crypto_blake2b_update(). But I believe this is outweighed by the benefits of keeping the code in sync.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2s - adjust include guard naming  (Eric Biggers; 1 file, +3/-3)
Use the full path in the include guards for the BLAKE2s headers to avoid ambiguity and to match the convention for most files in include/crypto/.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2s - optimize blake2s initialization  (Eric Biggers; 1 file, +1/-4)
If no key was provided, then don't waste time initializing the block buffer, as its initial contents won't be used.

Also, make crypto_blake2s_init() and blake2s() call a single internal function __blake2s_init() which treats the key as optional, rather than conditionally calling blake2s_init() or blake2s_init_key(). This reduces the compiled code size, as previously both blake2s_init() and blake2s_init_key() were being inlined into these two callers, except when the key size passed to blake2s() was a compile-time constant.

These optimizations aren't that significant for BLAKE2s. However, the equivalent optimizations will be more significant for BLAKE2b, as everything is twice as big in BLAKE2b. And it's good to keep things consistent rather than making optimizations for BLAKE2b but not BLAKE2s.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
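A sketch of the key-optional internal init (the IV/parameter-block setup is abbreviated; field names per struct blake2s_state):

  static inline void __blake2s_init(struct blake2s_state *state, size_t outlen,
                                    const void *key, size_t keylen)
  {
          /* ... derive state->h[] from the IV, outlen and keylen ... */
          state->t[0] = state->t[1] = 0;
          state->f[0] = state->f[1] = 0;
          state->buflen = 0;
          state->outlen = outlen;
          if (key) {
                  /* Only keyed init pays for priming the block buffer. */
                  memcpy(state->buf, key, keylen);
                  memset(&state->buf[keylen], 0,
                         BLAKE2S_BLOCK_SIZE - keylen);
                  state->buflen = BLAKE2S_BLOCK_SIZE;
          }
  }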
2021-01-03  crypto: blake2s - share the "shash" API boilerplate code  (Eric Biggers; 1 file, +60/-5)
Add helper functions for shash implementations of BLAKE2s to include/crypto/internal/blake2s.h, taking advantage of __blake2s_update() and __blake2s_final() that were added by the previous patch to share more code between the library and shash implementations.

crypto_blake2s_setkey() and crypto_blake2s_init() are usable as shash_alg::setkey and shash_alg::init directly, while crypto_blake2s_update() and crypto_blake2s_final() take an extra 'blake2s_compress_t' function pointer parameter. This allows the implementation of the compression function to be overridden, which is the only part that optimized implementations really care about.

The new functions are inline functions (similar to those in sha1_base.h, sha256_base.h, and sm3_base.h) because this avoids needing to add a new module blake2s_helpers.ko, they aren't *too* long, and this avoids indirect calls which are expensive these days. Note that they can't go in blake2s_generic.ko, as that would require selecting CRYPTO_BLAKE2S from CRYPTO_BLAKE2S_X86, which would cause a recursive dependency.

Finally, use these new helper functions in the x86 implementation of BLAKE2s. (This part should be a separate patch, but unfortunately the x86 implementation used the exact same function names like "crypto_blake2s_update()", so it had to be updated at the same time.)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
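The override hook is an extra function-pointer parameter on the shared helpers, roughly (a sketch; blake2s_compress_t names the compression-function type):

  typedef void (*blake2s_compress_t)(struct blake2s_state *state,
                                     const u8 *block, size_t nblocks, u32 inc);

  static inline int crypto_blake2s_update(struct shash_desc *desc,
                                          const u8 *in, unsigned int inlen,
                                          blake2s_compress_t compress)
  {
          struct blake2s_state *state = shash_desc_ctx(desc);

          __blake2s_update(state, in, inlen, compress);
          return 0;
  }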
2021-01-03  crypto: blake2s - move update and final logic to internal/blake2s.h  (Eric Biggers; 1 file, +41/-0)
Move most of blake2s_update() and blake2s_final() into new inline functions __blake2s_update() and __blake2s_final() in include/crypto/internal/blake2s.h so that this logic can be shared by the shash helper functions. This will avoid duplicating this logic between the library and shash implementations.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: remove cipher routines from public crypto API  (Ard Biesheuvel; 2 files, +219/-0)
The cipher routines in the crypto API are mostly intended for templates implementing skcipher modes generically in software, and shouldn't be used outside of the crypto subsystem. So move the prototypes and all related definitions to a new header file under include/crypto/internal.

Also, let's use the new module namespace feature to move the symbol exports into a new namespace CRYPTO_INTERNAL.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
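Both halves of the namespace mechanism, sketched (crypto_cipher_encrypt_one stands in for any of the moved symbols):

  /* In crypto/cipher.c: export into the restricted namespace. */
  EXPORT_SYMBOL_NS_GPL(crypto_cipher_encrypt_one, CRYPTO_INTERNAL);

  /* In a legitimate in-subsystem user: import the namespace,
   * or modpost rejects the reference at build time. */
  MODULE_IMPORT_NS(CRYPTO_INTERNAL);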
2020-12-04  crypto: lib/blake2s - Move selftest prototype into header file  (Herbert Xu; 1 file, +2/-0)
This patch fixes a missing prototype warning on blake2s_selftest.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-08-28  crypto: ahash - Add ahash_alg_instance  (Herbert Xu; 1 file, +6/-0)
This patch adds the helper ahash_alg_instance which is used to convert a crypto_ahash object into its corresponding ahash_instance.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-08-21  crypto: hash - Remove unused async iterators  (Ira Weiny; 1 file, +0/-13)
Revert "crypto: hash - Add real ahash walk interface". This reverts commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4.

The callers of the functions in that commit were removed in commit ab8085c130ed. Remove these now-unused functions.

Fixes: ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd")
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-08-07  mm, treewide: rename kzfree() to kfree_sensitive()  (Waiman Long; 1 file, +1/-1)
As said by Linus:

  A symmetric naming is only helpful if it implies symmetries in use. Otherwise it's actively misleading.

  In "kzalloc()", the z is meaningful and an important part of what the caller wants.

  In "kzfree()", the z is actively detrimental, because maybe in the future we really _might_ want to use that "memfill(0xdeadbeef)" or something. The "zero" part of the interface isn't even _relevant_.

The main reason that kzfree() exists is to clear sensitive information that should not be leaked to other future users of the same memory objects.

Rename kzfree() to kfree_sensitive() to follow the example of the recently added kvfree_sensitive() and make the intention of the API more explicit. In addition, memzero_explicit() is used to clear the memory to make sure that it won't get optimized away by the compiler.

The renaming is done by using the command sequence:

  git grep -w --name-only kzfree |\
  xargs sed -i 's/kzfree/kfree_sensitive/'

followed by some editing of the kfree_sensitive() kerneldoc and adding a kzfree backward compatibility macro in slab.h.

[akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
[akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Joe Perches <joe@perches.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
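The semantics amount to a guaranteed wipe before the free; the mm/slab_common.c implementation is approximately:

  void kfree_sensitive(const void *p)
  {
          size_t ks;
          void *mem = (void *)p;

          ks = ksize(mem);
          if (ks)
                  memzero_explicit(mem, ks);
          kfree(mem);
  }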
2020-07-16  crypto: geniv - remove unneeded arguments from aead_geniv_alloc()  (Eric Biggers; 1 file, +1/-1)
The type and mask arguments to aead_geniv_alloc() are always 0, so remove them.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-16  crypto: poly1305 - add new 32 and 64-bit generic versions  (Jason A. Donenfeld; 1 file, +10/-35)
These two C implementations from Zinc -- a 32x32 one and a 64x64 one, depending on the platform -- come from Andrew Moon's public domain poly1305-donna portable code, modified for usage in the kernel. The precomputation in the 32-bit version and the use of 64x64 multiplies in the 64-bit version make these perform better than the code it replaces. Moon's code is also very widespread and has received many eyeballs of scrutiny.

There's a bit of interference between the x86 implementation, which relies on internal details of the old scalar implementation. In the next commit, the x86 implementation will be replaced with a faster one that doesn't rely on this, so none of this matters much. But for now, to keep this passing the tests, we inline the bits of the old implementation that the x86 implementation relied on. Also, since we now support a slightly larger key space, via the union, some offsets had to be fixed up.

Nonce calculation was folded in with the emit function, to take advantage of 64x64 arithmetic. However, Adiantum appeared to rely on no nonce handling in emit, so this path was conditionalized. We also introduced a new struct, poly1305_core_key, to represent the precise amount of space that particular implementation uses.

Testing with kbench9000, depending on the CPU, the update function for the 32x32 version has been improved by 4%-7%, and for the 64x64 by 19%-30%. The 32x32 gains are small, but I think there's great value in having a parallel implementation to the 64x64 one so that the two can be compared side-by-side as nice stand-alone units.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: shash - convert shash_free_instance() to new style  (Eric Biggers; 1 file, +1/-1)
Convert shash_free_instance() and its users to the new way of freeing instances, where a ->free() method is installed to the instance struct itself. This replaces the weakly-typed method crypto_template::free(). This will allow removing support for the old way of freeing instances.

Also give shash_free_instance() a more descriptive name to reflect that it's only for instances with a single spawn, not for any instance.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
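Under the new style, a template's ->create() wires the destructor onto the instance itself, along these lines (a sketch; the renamed single-spawn helper is assumed to be shash_free_singlespawn_instance()):

  /* Freeing now travels with the instance, not the template. */
  inst->free = shash_free_singlespawn_instance;

  err = shash_register_instance(tmpl, inst);
  if (err)
          shash_free_singlespawn_instance(inst);
  return err;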
2020-01-09  crypto: geniv - convert to new way of freeing instances  (Eric Biggers; 1 file, +0/-1)
Convert the "seqiv" template to the new way of freeing instances where a ->free() method is installed to the instance struct itself. Also remove the unused implementation of the old way of freeing instances from the "echainiv" template, since it's already using the new way too.

In doing this, also simplify the code by making the helper function aead_geniv_alloc() install the ->free() method, instead of making seqiv and echainiv do this themselves. This is analogous to how skcipher_alloc_instance_simple() works.

This will allow removing support for the old way of freeing instances.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: hash - add support for new way of freeing instances  (Eric Biggers; 1 file, +2/-0)
Add support to shash and ahash for the new way of freeing instances (already used for skcipher, aead, and akcipher) where a ->free() method is installed to the instance struct itself. These methods are more strongly-typed than crypto_template::free(), which they replace.

This will allow removing support for the old way of freeing instances.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: ahash - unexport crypto_ahash_type  (Eric Biggers; 1 file, +0/-2)
Now that all the templates that need ahash spawns have been converted to use crypto_grab_ahash() rather than look up the algorithm directly, crypto_ahash_type is no longer used outside of ahash.c. Make it static.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: algapi - remove obsoleted instance creation helpers  (Eric Biggers; 1 file, +0/-31)
Remove lots of helper functions that were previously used for instantiating crypto templates, but are now unused:

 - crypto_get_attr_alg() and similar functions looked up an inner algorithm directly from a template parameter. These were replaced with getting the algorithm's name, then calling crypto_grab_*().

 - crypto_init_spawn2() and similar functions initialized a spawn, given an algorithm. Similarly, these were replaced with crypto_grab_*().

 - crypto_alloc_instance() and similar functions allocated an instance with a single spawn, given the inner algorithm. These aren't useful anymore since crypto_grab_*() need the instance allocated first.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: skcipher - use crypto_grab_cipher() and simplify error paths  (Eric Biggers; 1 file, +2/-2)
Make skcipher_alloc_instance_simple() use the new function crypto_grab_cipher() to initialize its cipher spawn. This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*() now accepting (as a no-op) spawns that haven't been initialized yet, and by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: ahash - introduce crypto_grab_ahash()  (Eric Biggers; 1 file, +10/-0)
Currently, ahash spawns are initialized by using ahash_attr_alg() or crypto_find_alg() to look up the ahash algorithm, then calling crypto_init_ahash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now.

So, let's introduce crypto_grab_ahash() so that we can convert all templates to the same way of initializing their spawns.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: shash - introduce crypto_grab_shash()  (Eric Biggers; 1 file, +10/-0)
Currently, shash spawns are initialized by using shash_attr_alg() or crypto_alg_mod_lookup() to look up the shash algorithm, then calling crypto_init_shash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity.

The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now.

So, let's introduce crypto_grab_shash() so that we can convert all templates to the same way of initializing their spawns.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: akcipher - pass instance to crypto_grab_akcipher()  (Eric Biggers; 1 file, +3/-9)
Initializing a crypto_akcipher_spawn currently requires:

 1. Set spawn->base.inst to point to the instance.
 2. Call crypto_grab_akcipher().

But there's no reason for these steps to be separate, and in fact this unneeded complication has caused at least one bug, the one fixed by commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_akcipher() take the instance as an argument. To keep the function call from getting too unwieldy due to this extra argument, also introduce a 'mask' variable into pkcs1pad_create().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: aead - pass instance to crypto_grab_aead()  (Eric Biggers; 1 file, +3/-8)
Initializing a crypto_aead_spawn currently requires:

 1. Set spawn->base.inst to point to the instance.
 2. Call crypto_grab_aead().

But there's no reason for these steps to be separate, and in fact this unneeded complication has caused at least one bug, the one fixed by commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_aead() take the instance as an argument. To keep the function calls from getting too unwieldy due to this extra argument, also introduce a 'mask' variable into the affected places which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: skcipher - pass instance to crypto_grab_skcipher()  (Eric Biggers; 1 file, +3/-8)
Initializing a crypto_skcipher_spawn currently requires:

 1. Set spawn->base.inst to point to the instance.
 2. Call crypto_grab_skcipher().

But there's no reason for these steps to be separate, and in fact this unneeded complication has caused at least one bug, the one fixed by commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_skcipher() take the instance as an argument. To keep the function calls from getting too unwieldy due to this extra argument, also introduce a 'mask' variable into the affected places which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
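After this series all three grab helpers share one shape; the skcipher variant and its typical call pattern, sketched from the commits above:

  int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn,
                           struct crypto_instance *inst,
                           const char *name, u32 type, u32 mask);

  /* In a template's ->create(): */
  err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                             crypto_attr_alg_name(tb[1]), 0, mask);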
2020-01-09  crypto: ahash - make struct ahash_instance be the full size  (Eric Biggers; 1 file, +9/-3)
Define struct ahash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end.

This is needed to allow allocating ahash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere.

Also take advantage of the addition of the base instance to struct ahash_instance by simplifying the ahash_crypto_instance() and ahash_instance() functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: shash - make struct shash_instance be the full size  (Eric Biggers; 1 file, +9/-4)
Define struct shash_instance in a way analogous to struct skcipher_instance, struct aead_instance, and struct akcipher_instance, where the struct is defined to include both the algorithm structure at the beginning and the additional crypto_instance fields at the end.

This is needed to allow allocating shash instances directly using kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher, aead, and akcipher instances. In turn, that's needed to make spawns be initialized in a consistent way everywhere.

Also take advantage of the addition of the base instance to struct shash_instance by simplifying the shash_crypto_instance() and shash_instance() functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
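The resulting layout overlays the crypto_instance fields onto the tail of the algorithm struct, roughly (a sketch of the definition in include/crypto/internal/hash.h):

  struct shash_instance {
          void (*free)(struct shash_instance *inst);
          union {
                  struct {
                          char head[offsetof(struct shash_alg, base)];
                          struct crypto_instance base;
                  } s;
                  struct shash_alg alg;
          };
  };

With this, kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL) covers the instance and its context in a single allocation, as described above.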
2020-01-09  crypto: remove CRYPTO_TFM_RES_WEAK_KEY  (Eric Biggers; 1 file, +3/-12)
The CRYPTO_TFM_RES_WEAK_KEY flag was apparently meant as a way to make the ->setkey() functions provide more information about errors. However, no one actually checks for this flag, which makes it pointless. There are also no tests that verify that all algorithms actually set (or don't set) it correctly.

This is also the last remaining CRYPTO_TFM_RES_* flag, which means that it's the only thing still needing all the boilerplate code which propagates these flags around from child => parent tfms.

And if someone ever needs to distinguish this error in the future (which is somewhat unlikely, as it's been unneeded for a long time), it would be much better to just define a new return value like -EKEYREJECTED. That would be much simpler, less error-prone, and easier to test.

So just remove this flag.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09  crypto: remove CRYPTO_TFM_RES_BAD_KEY_LEN  (Eric Biggers; 1 file, +2/-6)
The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to make the ->setkey() functions provide more information about errors. However, no one actually checks for this flag, which makes it pointless.

Also, many algorithms fail to set this flag when given a bad length key. Reviewing just the generic implementations, this is the case for aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309, rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably many more in arch/*/crypto/ and drivers/crypto/.

Some algorithms can even set this flag when the key is the correct length. For example, authenc and authencesn set it when the key payload is malformed in any way (not just a bad length), the atmel-sha and ccree drivers can set it if a memory allocation fails, and the chelsio driver sets it for bad auth tag lengths, not just bad key lengths.

So even if someone actually wanted to start checking this flag (which seems unlikely, since it's been unused for a long time), there would be a lot of work needed to get it working correctly. But it would probably be much better to go back to the drawing board and just define different return values, like -EINVAL if the key is invalid for the algorithm vs. -EKEYREJECTED if the key was rejected by a policy like "no weak keys". That would be much simpler, less error-prone, and easier to test.

So just remove this flag.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
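Post-removal, a ->setkey() reports a bad length purely through its return value; an illustrative sketch (example_setkey and struct example_ctx are hypothetical):

  static int example_setkey(struct crypto_skcipher *tfm, const u8 *key,
                            unsigned int keylen)
  {
          struct example_ctx *ctx = crypto_skcipher_ctx(tfm);

          /* No CRYPTO_TFM_RES_* flag to set: just return the error. */
          if (keylen != AES_KEYSIZE_256)
                  return -EINVAL;

          memcpy(ctx->key, key, keylen);
          return 0;
  }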
2020-01-09  crypto: skcipher - remove skcipher_walk_aead()  (Eric Biggers; 1 file, +0/-2)
skcipher_walk_aead() is unused and is identical to skcipher_walk_aead_encrypt(), so remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>