2021-02-10  crypto: fcrypt - drop unneeded alignmask  (Ard Biesheuvel; 1 file, -1/+0)
The fcrypt implementation uses memcpy() to access the input and output buffers so there is no need to set an alignmask. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-10  crypto: cast6 - use unaligned accessors instead of alignmask  (Ard Biesheuvel; 1 file, -22/+17)
Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the CAST6 input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
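For context, a minimal sketch of the pattern these conversions follow; the helper names are made up, and the same idea applies to the cast5, camellia, blowfish and serpent conversions below (each cipher differs in word order and endianness):

#include <asm/unaligned.h>
#include <linux/types.h>

/*
 * Before: the cipher set cra_alignmask = 3 and assumed 32-bit aligned
 * buffers, so mode drivers had to bounce misaligned data through a
 * temporary buffer.  After: each word is read/written with the unaligned
 * accessors, which compile down to plain loads and stores on
 * architectures that tolerate misaligned access.
 */
static void load_block(u32 block[4], const u8 *src)
{
        block[0] = get_unaligned_be32(src);
        block[1] = get_unaligned_be32(src + 4);
        block[2] = get_unaligned_be32(src + 8);
        block[3] = get_unaligned_be32(src + 12);
}

static void store_block(u8 *dst, const u32 block[4])
{
        put_unaligned_be32(block[0], dst);
        put_unaligned_be32(block[1], dst + 4);
        put_unaligned_be32(block[2], dst + 8);
        put_unaligned_be32(block[3], dst + 12);
}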
2021-02-10  crypto: cast5 - use unaligned accessors instead of alignmask  (Ard Biesheuvel; 1 file, -14/+9)
Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the CAST5 input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-10  crypto: camellia - use unaligned accessors instead of alignmask  (Ard Biesheuvel; 1 file, -29/+16)
Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the Camellia input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-10  crypto: blowfish - use unaligned accessors instead of alignmask  (Ard Biesheuvel; 1 file, -14/+9)
Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the Blowfish input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-10  crypto: serpent - use unaligned accessors instead of alignmask  (Ard Biesheuvel; 1 file, -27/+17)
Instead of using an alignmask of 0x3 to ensure 32-bit alignment of the Serpent input and output blocks, which propagates to mode drivers, and results in pointless copying on architectures that don't care about alignment, use the unaligned accessors, which will do the right thing on each respective architecture, avoiding the need for double buffering. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-10  crypto: serpent - get rid of obsolete tnepres variant  (Ard Biesheuvel; 5 files, -169/+7)
It is not trivial to trace back why exactly the tnepres variant of serpent was added ~17 years ago - Google searches come up mostly empty, but it seems to be related to the 'kerneli' version, which was based on an incorrect interpretation of the serpent spec. In other words, nobody is likely to care anymore today, so let's get rid of it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-10  crypto: michael_mic - fix broken misalignment handling  (Ard Biesheuvel; 1 file, -19/+12)
The Michael MIC driver uses the cra_alignmask to ensure that pointers presented to its update and finup/final methods are 32-bit aligned. However, due to the way the shash API works, this is no guarantee that the 32-bit reads occurring in the update method are also aligned, as the size of the buffer presented to update may be of uneven length. For instance, an update() of 3 bytes followed by a misaligned update() of 4 or more bytes will result in a misaligned access using an accessor that is not suitable for this. On most architectures, this does not matter, and so setting the cra_alignmask is pointless. On architectures where this does matter, setting the cra_alignmask does not actually solve the problem. So let's get rid of the cra_alignmask, and use unaligned accessors instead, where appropriate. Cc: <stable@vger.kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
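A hedged illustration of the approach (not the driver's exact code): once the words are read with the unaligned accessors, the pointer handed to update() may have any alignment, regardless of how earlier partial updates split the data.

#include <asm/unaligned.h>
#include <linux/types.h>

/*
 * Hypothetical helper: fold 'len' bytes into a word-oriented running
 * state without assuming 'data' is 32-bit aligned.
 */
static u32 mix_words(u32 state, const u8 *data, unsigned int len)
{
        while (len >= 4) {
                state ^= get_unaligned_le32(data);      /* safe on any alignment */
                data += 4;
                len -= 4;
        }
        return state;
}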
2021-02-10  hwrng: timeriomem - Fix cooldown period calculation  (Jan Henrik Weinstock; 1 file, -1/+1)
Ensure cooldown period tolerance of 1% is actually accounted for. Fixes: ca3bff70ab32 ("hwrng: timeriomem - Improve performance...") Signed-off-by: Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-10  crypto: marvell - CRYPTO_DEV_OCTEONTX2_CPT should depend on ARCH_THUNDER2  (Geert Uytterhoeven; 1 file, -1/+1)
The Marvell OcteonTX2 CPT physical function PCI device is present only on OcteonTx2 SoC, and not available as an independent PCIe endpoint. Hence add a dependency on ARCH_THUNDER2, to prevent asking the user about this driver when configuring a kernel without OcteonTx2 platform support. Fixes: 5e8ce8334734c5f2 ("crypto: marvell - add Marvell OcteonTX2 CPT PF driver") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-10  Merge git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux for-next/crypto  (Herbert Xu; 1 file, -0/+16)
Pull change from arm64 tree that's needed for crypto arm changes.
2021-02-05  crypto: crypto4xx - Avoid linking failure with HW_RANDOM=m  (Florian Fainelli; 1 file, -1/+1)
It is currently possible to build CONFIG_HW_RANDOM_PPC4XX=y with CONFIG_HW_RANDOM=m which would lead to the inability of linking with devm_hwrng_{register,unregister}. We cannot have the framework modular and the consumer of that framework built-in, so make that dependency explicit. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-05  crypto: octeontx2 - Add dependency on NET_VENDOR_MARVELL  (Herbert Xu; 1 file, -0/+1)
The crypto octeontx2 driver depends on the mbox code in the network tree. It tries to select the MBOX Kconfig option, but that option itself depends on many other options which are not selected, e.g., CONFIG_NET_VENDOR_MARVELL. It would be inappropriate to select them all, as randomly prompting the user for network options which would otherwise be disabled just because a crypto driver has been enabled makes no sense. This patch fixes this by adding a dependency on NET_VENDOR_MARVELL. This makes the crypto driver invisible if the network option is off. If the crypto driver must be visible even without the network stack then the shared mbox code should be moved out of drivers/net. Reported-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: kernel test robot <lkp@intel.com> Fixes: 5e8ce8334734 ("crypto: marvell - add Marvell OcteonTX2 CPT...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-05  crypto: octeontx2 - fix signedness bug in cptvf_register_interrupts()  (Dan Carpenter; 1 file, -1/+1)
The "num_vec" has to be signed for the error handling to work. Fixes: 19d8e8c7be15 ("crypto: octeontx2 - add virtual function driver support") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
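A minimal sketch of the bug class being fixed, using a hypothetical helper rather than the driver's actual function: if the vector count were unsigned, the negative errno from pci_msix_vec_count() could never be detected.

#include <linux/pci.h>

/*
 * 'num_vec' must be signed, because pci_msix_vec_count() returns a
 * negative errno on failure.  With a u32 the error check below could
 * never trigger.
 */
static int request_vectors(struct pci_dev *pdev)
{
        int num_vec = pci_msix_vec_count(pdev);

        if (num_vec < 0)
                return num_vec;

        return pci_alloc_irq_vectors(pdev, num_vec, num_vec, PCI_IRQ_MSIX);
}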
2021-02-05  crypto: ccree - fix spelling typo of allocated  (dingsenjie; 1 file, -1/+1)
allocted -> allocated Signed-off-by: dingsenjie <dingsenjie@yulong.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-03  arm64: assembler: add cond_yield macro  (Ard Biesheuvel; 1 file, -0/+16)
Add a macro cond_yield that branches to a specified label when called if the TIF_NEED_RESCHED flag is set and decreasing the preempt count would make the task preemptible again, resulting in a schedule to occur. This can be used by kernel mode SIMD code that keeps a lot of state in SIMD registers, which would make chunking the input in order to perform the cond_resched() check from C code disproportionately costly. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210203113626.220151-2-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-01-29  crypto: salsa20 - remove Salsa20 stream cipher algorithm  (Ard Biesheuvel; 7 files, -1405/+3)
Salsa20 is not used anywhere in the kernel, is not suitable for disk encryption, and widely considered to have been superseded by ChaCha20. So let's remove it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-29  crypto: tgr192 - remove Tiger 128/160/192 hash algorithms  (Ard Biesheuvel; 6 files, -876/+0)
Tiger is never referenced anywhere in the kernel, and unlikely to be depended upon by userspace via AF_ALG. So let's remove it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-29  crypto: rmd320 - remove RIPE-MD 320 hash algorithm  (Ard Biesheuvel; 6 files, -494/+1)
RIPE-MD 320 is never referenced anywhere in the kernel, and unlikely to be depended upon by userspace via AF_ALG. So let's remove it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-29  crypto: rmd256 - remove RIPE-MD 256 hash algorithm  (Ard Biesheuvel; 7 files, -441/+1)
RIPE-MD 256 is never referenced anywhere in the kernel, and unlikely to be depended upon by userspace via AF_ALG. So let's remove it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-29  crypto: rmd128 - remove RIPE-MD 128 hash algorithm  (Ard Biesheuvel; 7 files, -506/+1)
RIPE-MD 128 is never referenced anywhere in the kernel, and unlikely to be depended upon by userspace via AF_ALG. So let's remove it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-29  crypto: marvell/cesa - Fix use of sg_pcopy on iomem pointer  (Herbert Xu; 5 files, -36/+148)
The cesa driver mixes use of iomem pointers and normal kernel pointers. Sometimes it uses memcpy_toio/memcpy_fromio on both while other times it would use straight memcpy on both, through the sg_pcopy_* helpers. This patch fixes this by adding a new field sram_pool to the engine for the normal pointer case which then allows us to use the right interface depending on the value of engine->pool. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
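A hedged sketch of the resulting split; the struct and function names here are illustrative, and only the sram_pool field is taken from the commit message:

#include <linux/io.h>
#include <linux/string.h>
#include <linux/types.h>

struct engine_sketch {
        void __iomem *sram;     /* device SRAM mapped as iomem */
        void *sram_pool;        /* normal kernel pointer, when a pool is used */
};

static void copy_to_sram(struct engine_sketch *e, unsigned int off,
                         const void *buf, size_t len)
{
        if (e->sram_pool)
                memcpy(e->sram_pool + off, buf, len);   /* plain memory copy */
        else
                memcpy_toio(e->sram + off, buf, len);   /* iomem accessor */
}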
2021-01-29  crypto: talitos - Fix ctr(aes) on SEC1  (Christophe Leroy; 1 file, -0/+22)
While ctr(aes) requires the use of a special descriptor on SEC2 (see commit 70d355ccea89 ("crypto: talitos - fix ctr-aes-talitos")), that special descriptor doesn't work on SEC1, see commit e738c5f15562 ("powerpc/8xx: Add DT node for using the SEC engine of the MPC885"). However, the common nonsnoop descriptor works properly on SEC1 for ctr(aes). Add a second template for ctr(aes) that will be registered only on SEC1. Fixes: 70d355ccea89 ("crypto: talitos - fix ctr-aes-talitos") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-29  crypto: talitos - Work around SEC6 ERRATA (AES-CTR mode data size error)  (Christophe Leroy; 2 files, -12/+17)
Talitos Security Engine AESU considers any input data size that is not a multiple of 16 bytes to be an error. This is not a problem in general, except for Counter mode, which is a stream cipher and can have an input of any size. The test manager for ctr(aes) fails on the 4th test vector, which has a length of 499, while all previous vectors, whose lengths are multiples of 16 bytes, succeed. As suggested by Freescale, round up the input data length to the nearest 16 bytes. Fixes: 5e75ae1b3cef ("crypto: talitos - add new crypto modes") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
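The arithmetic of the workaround, as an illustrative sketch (the function name is made up):

#include <crypto/aes.h>         /* AES_BLOCK_SIZE == 16 */
#include <linux/kernel.h>       /* ALIGN() */

/*
 * Round the CTR input length programmed into the descriptor up to a
 * 16-byte multiple so the AESU does not flag a data size error.
 */
static unsigned int aesu_data_len(unsigned int cryptlen)
{
        return ALIGN(cryptlen, AES_BLOCK_SIZE);
}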
2021-01-29  crypto: hisilicon/hpre - add ecc algorithm inquiry for uacce device  (Hui Tang; 1 file, -1/+4)
The uacce sysfs interface now supports inquiry of more algorithms, such as 'ecdh/ecdsa/sm2/x25519/x448'. Signed-off-by: Hui Tang <tanghui20@huawei.com> Reviewed-by: Zaibo Xu <xuzaibo@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-29  crypto: hisilicon/hpre - add two RAS correctable errors processing  (Hui Tang; 1 file, -2/+6)
1. One CE error is a detected timeout while generating a random number. 2. The other is a detected timeout while prefetching an SVA address. Signed-off-by: Hui Tang <tanghui20@huawei.com> Reviewed-by: Zaibo Xu <xuzaibo@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-29  crypto: hisilicon/hpre - delete ECC 1bit error reported threshold  (Hui Tang; 1 file, -2/+0)
Delete 'HPRE_RAS_ECC1BIT_TH' register setting of hpre, since register 'QM_RAS_CE_THRESHOLD' of qm has done this work. Signed-off-by: Hui Tang <tanghui20@huawei.com> Reviewed-by: Zaibo Xu <xuzaibo@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: aesni - release FPU during skcipher walk API calls  (Ard Biesheuvel; 1 file, -41/+32)
Taking ownership of the FPU in kernel mode disables preemption, and this may result in excessive scheduling blackouts if the size of the data being processed on the FPU is unbounded. Given that taking and releasing the FPU is cheap these days on x86, we can limit the impact of this issue easily for skcipher implementations, by moving the FPU begin/end calls inside the skcipher walk processing loop. Considering that skcipher walks operate on at most one page at a time, doing so fully mitigates this issue. This also permits the skcipher walk logic to use non-atomic kmalloc() calls etc so we can change the 'atomic' bool argument in the calls to skcipher_walk_virt() to false as well. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
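The shape of the change, as a simplified sketch rather than the actual aesni-intel code; the asm helper is a stand-in, and real mode implementations also handle partial tail blocks per mode:

#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>
#include <asm/fpu/api.h>

/* Stand-in for the real AES-NI asm routine (assumption, not the actual symbol). */
static void do_asm_ecb_enc(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src,
                           unsigned int len)
{
}

static int ecb_encrypt_sketch(struct skcipher_request *req)
{
        struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
        struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
        struct skcipher_walk walk;
        unsigned int nbytes;
        int err;

        err = skcipher_walk_virt(&walk, req, false);    /* atomic=false: may sleep */

        while ((nbytes = walk.nbytes)) {
                kernel_fpu_begin();     /* preemption disabled only for this chunk */
                do_asm_ecb_enc(ctx, walk.dst.virt.addr, walk.src.virt.addr,
                               nbytes & ~(AES_BLOCK_SIZE - 1));
                kernel_fpu_end();       /* released before moving to the next page */
                err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
        }
        return err;
}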
2021-01-22  crypto: aesni - replace CTR function pointer with static call  (Ard Biesheuvel; 1 file, -6/+7)
Indirect calls are very expensive on x86, so use a static call to set the system-wide AES-NI/CTR asm helper. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
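A hedged sketch of the static call pattern; the symbol names are illustrative, not the ones used by the driver:

#include <linux/static_call.h>
#include <linux/types.h>

/* Two hypothetical CTR helpers with the same signature. */
static void ctr_enc_basic(const void *ctx, u8 *dst, const u8 *src, unsigned int len) { }
static void ctr_enc_avx(const void *ctx, u8 *dst, const u8 *src, unsigned int len) { }

/* Default target, chosen at build time. */
DEFINE_STATIC_CALL(my_ctr_enc, ctr_enc_basic);

static void select_ctr_impl(bool have_avx)
{
        if (have_avx)
                static_call_update(my_ctr_enc, ctr_enc_avx);    /* patches the call site */
}

static void do_ctr(const void *ctx, u8 *dst, const u8 *src, unsigned int len)
{
        static_call(my_ctr_enc)(ctx, dst, src, len);    /* direct call, no indirect branch */
}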
2021-01-22  crypto: keembay - use 64-bit arithmetic for computing bit_len  (Ovidiu Panait; 1 file, -2/+2)
src_size and aad_size are defined as u32, so the following expressions are currently being evaluated using 32-bit arithmetic: bit_len = src_size * 8; ... bit_len = aad_size * 8; However, bit_len is used afterwards in a context that expects a valid 64-bit value (the lower and upper 32-bit words of bit_len are extracted and written to hw). In order to make sure the correct bit length is generated and the 32-bit multiplication does not wrap around, cast src_size and aad_size to u64. Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com> Acked-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
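The fix in essence, with illustrative helper names: promote the operand before the multiply so the full 64-bit product is kept.

#include <linux/types.h>

/* Evaluated in 32 bits; wraps for sizes of 512 MiB or more. */
static u64 bit_len_buggy(u32 src_size)
{
        u64 bit_len = src_size * 8;

        return bit_len;
}

/* Cast first, so the multiplication happens in 64 bits. */
static u64 bit_len_fixed(u32 src_size)
{
        u64 bit_len = (u64)src_size * 8;

        return bit_len;
}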
2021-01-22  crypto: lib/chacha20poly1305 - define empty module exit function  (Jason A. Donenfeld; 1 file, -0/+5)
With no mod_exit function, users are unable to unload the module after use. I'm not aware of any reason why module unloading should be prohibited for this one, so this commit simply adds an empty exit function. Reported-and-tested-by: John Donnelly <john.p.donnelly@oracle.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
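The change amounts to an empty exit hook so the module can be unloaded again; a minimal sketch (function names are illustrative):

#include <linux/module.h>

static int __init chacha20poly1305_init_sketch(void)
{
        return 0;
}

static void __exit chacha20poly1305_exit_sketch(void)
{
        /* nothing to tear down; exists only so rmmod is permitted */
}

module_init(chacha20poly1305_init_sketch);
module_exit(chacha20poly1305_exit_sketch);
MODULE_LICENSE("GPL v2");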
2021-01-22  crypto: octeontx2 - register with linux crypto framework  (Srujana Challa; 7 files, -2/+1961)
The CPT offload module utilises the linux crypto framework to offload crypto processing. This patch registers supported algorithms by calling registration functions provided by the kernel crypto API. The module currently supports:
- AES block cipher in CBC, ECB and XTS modes.
- 3DES block cipher in CBC and ECB modes.
- AEAD algorithms: authenc(hmac(sha1),cbc(aes)), authenc(hmac(sha256),cbc(aes)), authenc(hmac(sha384),cbc(aes)), authenc(hmac(sha512),cbc(aes)), authenc(hmac(sha1),ecb(cipher_null)), authenc(hmac(sha256),ecb(cipher_null)), authenc(hmac(sha384),ecb(cipher_null)), authenc(hmac(sha512),ecb(cipher_null)), rfc4106(gcm(aes)).
Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: octeontx2 - add support to process the crypto request  (Srujana Challa; 11 files, -1/+1034)
Attach LFs to CPT VF to process the crypto requests and register LF interrupts. Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: octeontx2 - add virtual function driver support  (Srujana Challa; 6 files, -1/+373)
Add support for the Marvell OcteonTX2 CPT virtual function driver. This patch includes probe, PCI specific initialization and interrupt handling. Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: octeontx2 - add support to get engine capabilities  (Srujana Challa; 8 files, -0/+350)
Adds support to get engine capabilities and adds a new mailbox to share capabilities with VF driver. Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: octeontx2 - add LF framework  (Srujana Challa; 7 files, -1/+783)
CPT RVU Local Functions (LFs) need to be attached to the PF/VF to submit instructions to CPT. This patch adds the interface to initialize and attach the LFs. It also adds an interface to register the LFs' interrupts. Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: octeontx2 - load microcode and create engine groups  (Srujana Challa; 8 files, -2/+1655)
CPT includes microcoded GigaCypher symmetric engines (SEs), IPsec symmetric engines (IEs), and asymmetric engines (AEs). Each engine receives CPT instructions from the engine groups it has subscribed to. This patch loads microcode, configures three engine groups (one for SEs, one for IEs and one for AEs), and configures all engines. Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: octeontx2 - enable SR-IOV and mailbox communication with VF  (Srujana Challa; 4 files, -2/+583)
Adds 'sriov_configure' to enable/disable virtual functions (VFs). It also initializes VF<=>PF mailbox IRQs and registers handlers for processing these mailbox messages. The admin function (AF) handles resource allocation and configuration for PFs and their VFs. PFs request the AF directly, via mailboxes. Unlike PFs, VFs cannot send a mailbox request directly. A VF sends mailbox messages to its parent PF, with which it shares a mailbox region. The PF then forwards these messages to the AF. After handling the request, the AF sends a response back to the VF, through the PF. This patch adds support for this 'VF <=> PF <=> AF' mailbox communication. Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: octeontx2 - add mailbox communication with AF  (Srujana Challa; 6 files, -2/+236)
In the resource virtualization unit (RVU), the PF and AF (admin function) share a 64KB reserved memory region for communication. This patch initializes the PF <=> AF mailbox IRQs and registers handlers for processing these communication messages. Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: marvell - add Marvell OcteonTX2 CPT PF driver  (Srujana Challa; 7 files, -0/+633)
Adds skeleton for the Marvell OcteonTX2 CPT physical function driver which includes probe, PCI specific initialization and hardware register defines. RVU defines are present in AF driver (drivers/net/ethernet/marvell/octeontx2/af), header files from AF driver are included here to avoid duplication. Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto: arm64/sha - add missing module aliases  (Ard Biesheuvel; 4 files, -0/+9)
The accelerated, instruction based implementations of SHA1, SHA2 and SHA3 are autoloaded based on CPU capabilities, given that the code is modest in size, and widely used, which means that resolving the algo name, loading all compatible modules and picking the one with the highest priority is taken to be suboptimal. However, if these algorithms are requested before this CPU feature based matching and autoloading occurs, these modules are not even considered, and we end up with suboptimal performance. So add the missing module aliases for the various SHA implementations. Cc: <stable@vger.kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
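A hedged example of the kind of alias being added; the exact alias strings depend on the individual driver:

#include <linux/module.h>
#include <linux/crypto.h>

/*
 * With only CPU-feature based autoloading, an early request for
 * "sha256" would not find this module.  The crypto alias lets the
 * algorithm name itself trigger module loading too.
 */
MODULE_ALIAS_CRYPTO("sha256");
MODULE_ALIAS_CRYPTO("sha256-ce");       /* driver-specific name; illustrative */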
2021-01-22  crypto: bcm - Fix sparse warnings  (Herbert Xu; 7 files, -40/+48)
This patch fixes a number of sparse warnings in the bcm driver. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-22  crypto - shash: reduce minimum alignment of shash_desc structure  (Ard Biesheuvel; 2 files, -7/+10)
Unlike many other structure types defined in the crypto API, the 'shash_desc' structure is permitted to live on the stack, which implies its contents may not be accessed by DMA masters. (This is due to the fact that the stack may be located in the vmalloc area, which requires a different virtual-to-physical translation than the one implemented by the DMA subsystem) Our definition of CRYPTO_MINALIGN_ATTR is based on ARCH_KMALLOC_MINALIGN, which may take DMA constraints into account on architectures that support non-cache coherent DMA such as ARM and arm64. In this case, the value is chosen to reflect the largest cacheline size in the system, in order to ensure that explicit cache maintenance as required by non-coherent DMA masters does not affect adjacent, unrelated slab allocations. On arm64, this value is currently set at 128 bytes. This means that applying CRYPTO_MINALIGN_ATTR to struct shash_desc is both unnecessary (as it is never used for DMA), and undesirable, given that it wastes stack space (on arm64, performing the alignment costs 112 bytes in the worst case, and the hole between the 'tfm' and '__ctx' members takes up another 120 bytes, resulting in an increased stack footprint of up to 232 bytes.) So instead, let's switch to the minimum SLAB alignment, which does not take DMA constraints into account. Note that this is a no-op for x86. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
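For reference, a sketch of the resulting layout, close to but not necessarily identical to the final definition:

#include <linux/slab.h>         /* ARCH_SLAB_MINALIGN */

struct crypto_shash;

/*
 * The context only needs the minimum slab alignment, since a
 * stack-allocated descriptor is never handed to a DMA master.
 */
struct shash_desc_sketch {
        struct crypto_shash *tfm;
        void *__ctx[] __aligned(ARCH_SLAB_MINALIGN);
};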
2021-01-14  crypto: keembay-ocs-hcu - Add dependency on HAS_IOMEM and ARCH_KEEMBAY  (Daniele Alessandrelli; 1 file, -0/+2)
Add the following additional dependencies for CRYPTO_DEV_KEEMBAY_OCS_HCU: - HAS_IOMEM to prevent build failures - ARCH_KEEMBAY to prevent asking the user about this driver when configuring a kernel without Intel Keem Bay platform support. Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-14  crypto: keembay-ocs-hcu - Fix a WARN() message  (Dan Carpenter; 1 file, -1/+1)
The first argument to WARN() is a condition and the message string is the second argument, so this WARN() will only display the __func__ part of the message. Fixes: ae832e329a8d ("crypto: keembay-ocs-hcu - Add HMAC support") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
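The general form of the mistake and its fix, with a made-up message and condition:

#include <linux/bug.h>
#include <linux/types.h>

/*
 * WARN()'s first argument is the condition; the format string comes
 * second.  Writing WARN("%s: ...", __func__) makes the string the
 * condition and __func__ the format, so only the function name is
 * printed.
 */
static void check_state(bool bad)
{
        WARN(bad, "%s: unexpected state\n", __func__);  /* correct usage */
}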
2021-01-14  crypto: x86 - use local headers for x86 specific shared declarations  (Ard Biesheuvel; 12 files, -8/+8)
The Camellia, Serpent and Twofish related header files only contain declarations that are shared between different implementations of the respective algorithms residing under arch/x86/crypto, and none of their contents should be used elsewhere. So move the header files into the same location, and use local #includes instead. Acked-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-14  crypto: x86 - remove glue helper module  (Ard Biesheuvel; 6 files, -243/+0)
All dependencies on the x86 glue helper module have been replaced by local instantiations of the new ECB/CBC preprocessor helper macros, so the glue helper module can be retired. Acked-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-14  crypto: x86/twofish - drop dependency on glue helper  (Ard Biesheuvel; 3 files, -111/+44)
Replace the glue helper dependency with implementations of ECB and CBC based on the new CPP macros, which avoid the need for indirect calls. Acked-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-14  crypto: x86/cast6 - drop dependency on glue helper  (Ard Biesheuvel; 2 files, -45/+17)
Replace the glue helper dependency with implementations of ECB and CBC based on the new CPP macros, which avoid the need for indirect calls. Acked-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-14  crypto: x86/cast5 - drop dependency on glue helper  (Ard Biesheuvel; 1 file, -167/+17)
Replace the glue helper dependency with implementations of ECB and CBC based on the new CPP macros, which avoid the need for indirect calls. Acked-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>