2022-02-11  crypto: hmac - disallow keys < 112 bits in FIPS mode  [Stephan Müller, 2 files, -0/+13]
FIPS 140 requires a minimum security strength of 112 bits. This implies that the HMAC key must not be smaller than 112 bits in FIPS mode. This restriction in turn means that the HMAC test vectors with a key smaller than 112 bits must be disabled when FIPS support is compiled in. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-11  crypto: hmac - add fips_skip support  [Stephan Müller, 2 files, -0/+5]
With support for the fips_skip flag, hash/HMAC test vectors may be marked as not applicable in FIPS mode. Such vectors are silently skipped in FIPS mode. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: sl3516 - remove redundant initializations of pointers in_sg and out_sg  [Colin Ian King, 1 file, -2/+2]
Pointers in_sg and out_sg are being initialized with values that are never read; they are re-assigned the same values later on. The initializations are redundant, so remove them in favour of the later assignments that are closer to where the pointers are used. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: marvell/octeontx - remove redundant initialization of variable c_size  [Colin Ian King, 1 file, -1/+0]
Variable c_size is being initialized with a value that is never read; it is re-assigned with a different value later on. The initialization is redundant and can be removed. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: octeontx2 - remove CONFIG_DM_CRYPT check  [Shijith Thotton, 1 file, -10/+7]
No issues were found while using the driver with dm-crypt enabled, so the CONFIG_DM_CRYPT check in the driver can be removed. This also fixes the NULL pointer dereference in driver release if CONFIG_DM_CRYPT is enabled.
...
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000008
...
Call trace:
 crypto_unregister_alg+0x68/0xfc
 crypto_unregister_skciphers+0x44/0x60
 otx2_cpt_crypto_exit+0x100/0x1a0
 otx2_cptvf_remove+0xf8/0x200
 pci_device_remove+0x3c/0xd4
 __device_release_driver+0x188/0x234
 device_release_driver+0x2c/0x4c
...
Fixes: 6f03f0e8b6c8 ("crypto: octeontx2 - register with linux crypto framework") Signed-off-by: Shijith Thotton <sthotton@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: tcrypt - remove all multibuffer ahash tests  [Tianjia Zhang, 1 file, -224/+0]
The multibuffer algorithms were already removed in 2018, so the leftover test code in tcrypt needs to be cleared out as well. Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: atmel - add support for AES and SHA IPs available on lan966x SoC  [Kavyasree Kotagiri, 2 files, -0/+2]
This patch adds support for hardware version of AES and SHA IPs available on lan966x SoC. Signed-off-by: Kavyasree Kotagiri <kavyasree.kotagiri@microchip.com> Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com> Tested-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  hwrng: core - credit entropy for low quality sources of randomness  [Dominik Brodowski, 1 file, -1/+10]
In case the entropy quality is low, there may be less than one bit to credit in the call to add_hwgenerator_randomness(): The number of bytes returned by rng_get_data() multiplied by the current quality (in entropy bits per 1024 bits of input) must be larger than 128 to credit at least one bit. However, imx-rngc.c sets the quality to 19, but may return less than 32 bytes; hid_u2fzero.c sets the quality to 1; and users may override the quality setting manually. In case there is less than one bit to credit, keep track of it and add that credit to the next iteration. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
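A minimal sketch of the carry arithmetic described above (variable names are illustrative rather than the exact drivers/char/hw_random/core.c code):

    /* quality is in entropy bits per 1024 bits (i.e. per 128 bytes) of input;
     * rc is the number of bytes just read; entropy_credit carries the
     * sub-bit remainder (in 1/1024-bit units) over to the next iteration. */
    entropy = rc * current_quality * 8 + entropy_credit;
    entropy_credit = entropy & 1023;   /* remember the fractional part */
    credited_bits = entropy >> 10;     /* whole bits credited this round */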
2022-02-05  crypto: arm64/aes-neonbs-xts - use plain NEON for non-power-of-2 input sizes  [Ard Biesheuvel, 2 files, -108/+57]
Even though the kernel's implementations of AES-XTS were updated to implement ciphertext stealing and can operate on inputs of any size larger than or equal to the AES block size, this feature is rarely used in practice. In fact, in the kernel, AES-XTS is only used to operate on 4096 or 512 byte blocks, which means that not only is the ciphertext stealing effectively dead code, but the logic in the bit-sliced NEON implementation to deal with fewer than 8 blocks at a time is also never used. Since the bit-sliced NEON driver already depends on the plain NEON version, which is slower but can operate on smaller data quantities more straightforwardly, let's fall back to the plain NEON implementation of XTS for any residual inputs that are not multiples of 128 bytes. This allows us to remove a lot of complicated logic that rarely gets exercised in practice. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: arm64/aes-neonbs-ctr - fallback to plain NEON for final chunk  [Ard Biesheuvel, 3 files, -142/+55]
Instead of processing the entire input with the 8-way bit sliced algorithm, which is sub-optimal for inputs that are not a multiple of 128 bytes in size, invoke the plain NEON version of CTR for the remainder of the input after processing the bulk using 128 byte strides. This allows us to greatly simplify the asm code that implements CTR, and get rid of all the branches and special code paths. It also gains us a couple of percent of performance. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: arm/aes-neonbs-ctr - deal with non-multiples of AES block size  [Ard Biesheuvel, 2 files, -63/+77]
Instead of falling back to C code to deal with the final bit of input that is not a round multiple of the block size, handle this in the asm code, permitting us to use overlapping loads and stores for performance, and implement the 16-byte wide XOR using a single NEON instruction. Since NEON loads and stores have a natural width of 16 bytes, we need to handle inputs of less than 16 bytes in a special way, but this rarely occurs in practice so it does not impact performance. All other input sizes can be consumed directly by the NEON asm code, although it should be noted that the core AES transform can still only process 128 bytes (8 AES blocks) at a time. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: arm64/aes-neon-ctr - improve handling of single tail block  [Ard Biesheuvel, 2 files, -19/+20]
Instead of falling back to C code to do a memcpy of the output of the last block, handle this in the asm code directly if possible, which is the case if the entire input is longer than 16 bytes. Cc: Nathan Huckleberry <nhuck@google.com> Cc: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: octeontx2 - increase CPT HW instruction queue length  [Srujana Challa, 1 file, -4/+15]
LDWB is used incorrectly by the HW when CPT_AF_LF()_PTR_CTL[IQB_LDWB]=1 and the CPT instruction queue has fewer than 320 free entries. So, increase the HW instruction queue size by 320 and give 320 fewer entries to SW/NIX RX as a SW workaround. Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Shijith Thotton <sthotton@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: octeontx2 - disable DMA black hole on a DMA fault  [Srujana Challa, 2 files, -0/+14]
When CPT_AF_DIAG[FLT_DIS] = 0 and a CPT engine access to LLC/DRAM encounters a fault/poison, a rare case may result in unpredictable data being delivered to a CPT engine. So, this patch adds code to set FLT_DIS as a workaround. Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Shijith Thotton <sthotton@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  crypto: octeontx2 - CN10K CPT to RNM workaround  [Srujana Challa, 1 file, -1/+42]
When software sets CPT_AF_CTL[RNM_REQ_EN]=1 and RNM is not producing entropy (i.e., RNM_ENTROPY_STATUS[NORMAL_CNT] < 0x40), the first cycle of the response may be lost due to a conditional clocking issue. Due to this, the subsequent random number stream will be corrupted. So, this patch adds support to ensure RNM_ENTROPY_STATUS[NORMAL_CNT] = 0x40 before writing CPT_AF_CTL[RNM_REQ_EN] = 1, as a workaround. Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Shijith Thotton <sthotton@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  hwrng: core - break out of hwrng_fillfn if current rng is not trusted  [Dominik Brodowski, 1 file, -0/+3]
For two reasons, current_quality may become zero within the rngd kernel thread: (1) The user lowers current_quality to 0 by writing to the sysfs module parameter file (note that increasing the quality from zero is without effect at the moment), or (2) there are two or more hwrng devices registered, and those which provide quality>0 are unregistered, but one with quality==0 remains. If current_quality is 0, the randomness is not trusted and cannot help to increase the entropy count. That will lead to continuous calls to the hwrngd thread and continuous stirring of the input pool with untrusted bits. Cc: Matt Mackall <mpm@selenic.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
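A minimal sketch of the added check, assuming it lives in the hwrng_fillfn() kthread loop (surrounding code omitted):

    /* An untrusted source (quality 0) cannot add entropy credit, so stop
     * the filler thread instead of stirring the pool in a busy loop. */
    if (!current_quality)
            break;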
2022-02-05  hwrng: core - only set cur_rng_set_by_user if it is working  [Dominik Brodowski, 1 file, -1/+2]
In case the user-specified rng device is not working, it is not used; therefore cur_rng_set_by_user must not be set to 1. Cc: Matt Mackall <mpm@selenic.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  hwrng: core - use rng_fillbuf in add_early_randomness()  [Dominik Brodowski, 1 file, -2/+2]
Using rng_buffer in add_early_randomness() may race with rng_dev_read(). Use rng_fillbuf instead, as it is otherwise only used within the kernel by hwrng_fillfn() and therefore never exposed to userspace. Cc: Matt Mackall <mpm@selenic.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-05  hwrng: core - read() callback must be called for size of 32 or more bytes  [Dominik Brodowski, 1 file, -2/+1]
According to <linux/hw_random.h>, the @max parameter of the ->read callback "is a multiple of 4 and >= 32 bytes". That promise was not kept by add_early_randomness(), which only asked for 16 bytes. As rng_buffer_size() is at least 32, we can simply ask for 32 bytes. Cc: Matt Mackall <mpm@selenic.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
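Illustrative sketch of the fix (the exact add_early_randomness() call is assumed, not quoted): the request size is raised to the minimum the ->read contract guarantees to be valid.

    /* ->read requires max to be a multiple of 4 and >= 32 bytes, and
     * rng_buffer_size() is at least 32, so a fixed 32 always fits. */
    bytes_read = rng_get_data(rng, rng_fillbuf, 32, 0);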
2022-02-05  hwrng: core - explicit ordering of initcalls  [Dominik Brodowski, 1 file, -1/+1]
hw-random device drivers depend on the hw-random core being initialized. Make this ordering explicit, also for the case these drivers are built-in. As the core itself depends on misc_register() which is set up at subsys_initcall time, advance the initialization of the core (only) to the fs_initcall() level. Cc: Matt Mackall <mpm@selenic.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
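A sketch of the ordering change, assuming the core keeps its existing init function name:

    /* was: module_init(hwrng_modinit); -- the same initcall level as the
     * drivers.  fs_initcall() runs after subsys_initcall() (where the misc
     * device infrastructure is set up) but before device/module initcalls,
     * so the core is ready before any built-in hw_random driver probes. */
    fs_initcall(hwrng_modinit);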
2022-01-31  padata: replace cpumask_weight with cpumask_empty in padata.c  [Yury Norov, 1 file, -1/+1]
padata_do_parallel() calls cpumask_weight() to check if any bit of a given cpumask is set. We can do it more efficiently with cpumask_empty() because cpumask_empty() stops traversing the cpumask as soon as it finds first set bit, while cpumask_weight() counts all bits unconditionally. Signed-off-by: Yury Norov <yury.norov@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
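Illustrative before/after of the check (the field names in padata_do_parallel() are assumed here):

    /* before: counts every set bit only to learn whether any bit is set */
    if (!cpumask_weight(pd->cpumask.cbcpu))
            goto out;

    /* after: returns as soon as the first set bit is found */
    if (cpumask_empty(pd->cpumask.cbcpu))
            goto out;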
2022-01-31  crypto: mxs-dcp - Fix scatterlist processing  [Tomas Paukrt, 1 file, -1/+1]
This patch fixes a bug in scatterlist processing that may cause incorrect AES block encryption/decryption. Fixes: 2e6d793e1bf0 ("crypto: mxs-dcp - Use sg_mapping_iter to copy data") Signed-off-by: Tomas Paukrt <tomaspaukrt@email.cz> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: hisilicon/qm - cleanup warning in qm_vf_read_qos  [Kai Ye, 1 file, -1/+1]
The kernel test robot reported this warning: "Uninitialized variable: ret". The code flow may return the value of ret directly, and that value is an uninitialized variable; fix it here. Signed-off-by: Kai Ye <yekai13@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: hisilicon/sec - use the correct print format  [Kai Ye, 1 file, -1/+1]
Use the correct print format. Printing an unsigned int value should use %u instead of %d. Signed-off-by: Kai Ye <yekai13@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
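Illustrative only (not the driver's actual message): an unsigned value paired with the matching specifier.

    unsigned int depth = 256;   /* example value */
    /* was: dev_err(dev, "invalid queue depth %d\n", depth); */
    dev_err(dev, "invalid queue depth %u\n", depth);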
2022-01-31  crypto: hisilicon/sec - fix the CTR mode BD configuration  [Kai Ye, 2 files, -2/+10]
The CTR counter on the BD defaults to 32-bit rollover, but the NIST standard specifies 128-bit rollover. This causes the tests to fail, so the BD configuration needs to be fixed. Signed-off-by: Kai Ye <yekai13@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: hisilicon/sec - fix the max length of AAD for the CCM mode  [Kai Ye, 1 file, -0/+5]
Fix the maximum length of AAD for the CCM mode due to a hardware limitation. Signed-off-by: Kai Ye <yekai13@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: hisilicon/sec - add some comments for soft fallback  [Kai Ye, 1 file, -6/+6]
Modify a printed message that might otherwise mislead users. Currently only XTS mode needs the fallback tfm, when a 192-bit key is used; the other algorithms do not need a soft fallback tfm and can return directly. Signed-off-by: Kai Ye <yekai13@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: hisilicon/sec - fixup icv checking enabled on Kunpeng 930  [Kai Ye, 1 file, -1/+1]
Fix the ICV (integrity check value) checking being enabled incorrectly on Kunpeng 930. Signed-off-by: Kai Ye <yekai13@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: octeontx2 - select CONFIG_NET_DEVLINK  [Shijith Thotton, 1 file, -0/+1]
The OcteonTX2 CPT driver will fail to link without devlink support:
aarch64-linux-gnu-ld: otx2_cpt_devlink.o: in function `otx2_cpt_dl_egrp_delete':
otx2_cpt_devlink.c:18: undefined reference to `devlink_priv'
aarch64-linux-gnu-ld: otx2_cpt_devlink.o: in function `otx2_cpt_dl_egrp_create':
otx2_cpt_devlink.c:9: undefined reference to `devlink_priv'
aarch64-linux-gnu-ld: otx2_cpt_devlink.o: in function `otx2_cpt_dl_uc_info':
otx2_cpt_devlink.c:27: undefined reference to `devlink_priv'
Fixes: fed8f4d5f946 ("crypto: octeontx2 - parameters for custom engine groups") Signed-off-by: Shijith Thotton <sthotton@marvell.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: memneq - avoid implicit unaligned accesses  [Ard Biesheuvel, 1 file, -7/+15]
The C standard does not support dereferencing pointers that are not aligned with respect to the pointed-to type, and doing so is technically undefined behavior, even if the underlying hardware supports it. This means that conditionally dereferencing such pointers based on whether CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y is not the right thing to do, and actually results in alignment faults on ARM, which are fixed up on a slow path. Instead, we should use the unaligned accessors in such cases: on architectures that don't care about alignment, they will result in identical codegen whereas, e.g., codegen on ARM will avoid doubleword loads and stores but use ordinary ones, which are able to tolerate misalignment. Link: https://lore.kernel.org/linux-crypto/CAHk-=wiKkdYLY0bv+nXrcJz3NH9mAqPAafX7PpW5EwVtxsEu7Q@mail.gmail.com/ Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
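A minimal sketch of the approach (not the exact crypto_memneq code): compare word-sized chunks through the unaligned accessors instead of dereferencing possibly-unaligned unsigned long pointers.

    #include <asm/unaligned.h>

    /* get_unaligned() compiles to a plain load on architectures that handle
     * misalignment natively, and to safe byte-wise accesses elsewhere. */
    static inline unsigned long neq_word(const void *a, const void *b)
    {
            return get_unaligned((const unsigned long *)a) ^
                   get_unaligned((const unsigned long *)b);
    }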
2022-01-31  crypto: authenc - Fix sleep in atomic context in decrypt_tail  [Herbert Xu, 1 file, -1/+1]
The function crypto_authenc_decrypt_tail discards its flags argument and always relies on the flags from the original request when starting its sub-request. This is clearly wrong as it may cause the SLEEPABLE flag to be set when it shouldn't. Fixes: 92d95ba91772 ("crypto: authenc - Convert to new AEAD interface") Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: rsa-pkcs1pad - use clearer variable names  [Eric Biggers, 1 file, -15/+16]
The new convention for akcipher_alg::verify makes it unclear which values are the lengths of the signature and digest. Add local variables to make it clearer what is going on. Also rename the digest_size variable in pkcs1pad_sign(), as it is actually the digest *info* size, not the digest size which is different. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: rsa-pkcs1pad - fix buffer overread in pkcs1pad_verify_complete()  [Eric Biggers, 1 file, -0/+2]
Before checking whether the expected digest_info is present, we need to check that there are enough bytes remaining. Fixes: a49de377e051 ("crypto: Add hash param to pkcs1pad") Cc: <stable@vger.kernel.org> # v4.6+ Cc: Tadeusz Struk <tadeusz.struk@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: rsa-pkcs1pad - restore signature length check  [Eric Biggers, 1 file, -1/+1]
RSA PKCS#1 v1.5 signatures are required to be the same length as the RSA key size. RFC8017 specifically requires the verifier to check this (https://datatracker.ietf.org/doc/html/rfc8017#section-8.2.2). Commit a49de377e051 ("crypto: Add hash param to pkcs1pad") changed the kernel to allow longer signatures, but didn't explain this part of the change; it seems to be unrelated to the rest of the commit. Revert this change, since it doesn't appear to be correct. We can be pretty sure that no one is relying on overly-long signatures (which would have to be front-padded with zeroes) being supported, given that they would have been broken since commit c7381b012872 ("crypto: akcipher - new verify API for public key algorithms"). Fixes: a49de377e051 ("crypto: Add hash param to pkcs1pad") Cc: <stable@vger.kernel.org> # v4.6+ Cc: Tadeusz Struk <tadeusz.struk@linaro.org> Suggested-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: rsa-pkcs1pad - correctly get hash from source scatterlist  [Eric Biggers, 1 file, -1/+1]
Commit c7381b012872 ("crypto: akcipher - new verify API for public key algorithms") changed akcipher_alg::verify to take in both the signature and the actual hash and do the signature verification, rather than just return the hash expected by the signature as was the case before. To do this, it implemented a hack where the signature and hash are concatenated with each other in one scatterlist. Obviously, for this to work correctly, akcipher_alg::verify needs to correctly extract the two items from the scatterlist it is given. Unfortunately, it doesn't correctly extract the hash in the case where the signature is longer than the RSA key size, as it assumes that the signature's length is equal to the RSA key size. This causes a prefix of the hash, or even the entire hash, to be taken from the *signature*. (Note, the case of a signature longer than the RSA key size should not be allowed in the first place; a separate patch will fix that.) It is unclear whether the resulting scheme has any useful security properties. Fix this by correctly extracting the hash from the scatterlist. Fixes: c7381b012872 ("crypto: akcipher - new verify API for public key algorithms") Cc: <stable@vger.kernel.org> # v5.2+ Reviewed-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: rsa-pkcs1pad - only allow with rsa  [Eric Biggers, 1 file, -0/+5]
The pkcs1pad template can be instantiated with an arbitrary akcipher algorithm, which doesn't make sense; it is specifically an RSA padding scheme. Make it check that the underlying algorithm really is RSA. Fixes: 3d5b1ecdea6f ("crypto: rsa - RSA padding algorithm") Cc: <stable@vger.kernel.org> # v4.5+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: qat - fix access to PFVF interrupt registers for GEN4  [Giovanni Cabiddu, 1 file, -33/+9]
The logic that detects, enables and disables pfvf interrupts was expecting a single CSR per VF. Instead, the source and mask register are two registers with a bit per VF. Due to this, the driver is reading and setting reserved CSRs and not masking the correct source of interrupts. Fix the access to the source and mask register for QAT GEN4 devices by removing the outer loop in adf_gen4_get_vf2pf_sources(), adf_gen4_enable_vf2pf_interrupts() and adf_gen4_disable_vf2pf_interrupts() and changing the helper macros ADF_4XXX_VM2PF_SOU and ADF_4XXX_VM2PF_MSK. Fixes: a9dc0d966605 ("crypto: qat - add PFVF support to the GEN4 host driver") Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Co-developed-by: Siming Wan <siming.wan@intel.com> Signed-off-by: Siming Wan <siming.wan@intel.com> Reviewed-by: Xin Zeng <xin.zeng@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Marco Chiappero <marco.chiappero@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  arm64: lib: accelerate crc32_be  [Kevin Bracey, 1 file, -14/+73]
It makes no sense to leave crc32_be using the generic code while we only accelerate the little-endian ops. Even though the big-endian form doesn't fit as smoothly into the arm64, we can speed it up and avoid hitting the D cache. Tested on Cortex-A53.
Without acceleration:
crc32: CRC_LE_BITS = 64, CRC_BE BITS = 64
crc32: self tests passed, processed 225944 bytes in 192240 nsec
crc32c: CRC_LE_BITS = 64
crc32c: self tests passed, processed 112972 bytes in 21360 nsec
With acceleration:
crc32: CRC_LE_BITS = 64, CRC_BE BITS = 64
crc32: self tests passed, processed 225944 bytes in 53480 nsec
crc32c: CRC_LE_BITS = 64
crc32c: self tests passed, processed 112972 bytes in 21480 nsec
Signed-off-by: Kevin Bracey <kevin@bracey.fi> Tested-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  lib/crc32test: correct printed bytes count  [Kevin Bracey, 1 file, -1/+1]
crc32c_le self test had a stray multiply by two inherited from the crc32_le+crc32_be test loop. Signed-off-by: Kevin Bracey <kevin@bracey.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  lib/crc32: Make crc32_be weak for arch override  [Kevin Bracey, 1 file, -2/+3]
crc32_le and __crc32c_le can be overridden - extend this to crc32_be. Signed-off-by: Kevin Bracey <kevin@bracey.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
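A hedged sketch of the weak-default pattern (the helper name is illustrative, not the exact lib/crc32.c symbol):

    /* The generic definition is weak, so an architecture can supply a
     * stronger (optimized) definition that wins at link time. */
    u32 __pure __weak crc32_be(u32 crc, unsigned char const *p, size_t len)
    {
            return crc32_be_generic(crc, p, len);   /* table-driven fallback */
    }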
2022-01-31  lib/crc32: remove unneeded casts  [Kevin Bracey, 1 file, -6/+3]
Casts were added in commit 8f243af42ade ("sections: fix const sections for crc32 table") to cope with the tables not being const. They are no longer required since commit f5e38b9284e1 ("lib: crc32: constify crc32 lookup table"). Signed-off-by: Kevin Bracey <kevin@bracey.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: kdf - Select hmac in addition to sha256  [Herbert Xu, 1 file, -0/+1]
In addition to sha256 we must also enable hmac for the kdf self-test to work. Reported-by: kernel test robot <oliver.sang@intel.com> Fixes: 304b4acee2f0 ("crypto: kdf - select SHA-256 required...") Fixes: 026a733e6659 ("crypto: kdf - add SP800-108 counter key...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: sun8i-ss - really disable hash on A80  [Corentin Labbe, 1 file, -0/+2]
When adding hash support to sun8i-ss, I added the hashes only on A83T. But I forgot that 0 is a valid algorithm ID, so the hashes end up enabled on A80, only with an incorrect ID. Anyway, even with correct IDs, hashes do not work on A80 and I cannot find why. So let's disable all of them on A80. Fixes: d9b45418a917 ("crypto: sun8i-ss - support hash algorithms") Signed-off-by: Corentin Labbe <clabbe@baylibre.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: x86 - Convert to SPDX identifier  [Nathan Huckleberry, 1 file, -53/+10]
Use SPDX-License-Identifier instead of a verbose license text and update external link. Cc: James Guilford <james.guilford@intel.com> Cc: Sean Gulley <sean.m.gulley@intel.com> Cc: Chandramouli Narayanan <mouli@linux.intel.com> Signed-off-by: Nathan Huckleberry <nhuck@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: testmgr - Move crypto_simd_disabled_for_test out  [Herbert Xu, 2 files, -3/+6]
As testmgr is part of cryptomgr which was designed to be unloadable as a module, it shouldn't export any symbols for other crypto modules to use as that would prevent it from being unloaded. All its functionality is meant to be accessed through notifiers. The symbol crypto_simd_disabled_for_test was added to testmgr which caused it to be pinned as a module if its users were also loaded. This patch moves it out of testmgr and into crypto/algapi.c so cryptomgr can again be unloaded and replaced on demand. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  hwrng: cavium - HW_RANDOM_CAVIUM should depend on ARCH_THUNDER  [Geert Uytterhoeven, 1 file, -1/+1]
The Cavium ThunderX Random Number Generator is only present on Cavium ThunderX SoCs, and not available as an independent PCIe endpoint. Hence add a dependency on ARCH_THUNDER, to prevent asking the user about this driver when configuring a kernel without Cavium Thunder SoC support. Fixes: cc2f1908c6b8f625 ("hwrng: cavium - Add Cavium HWRNG driver for ThunderX SoC.") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: ccp - remove redundant ret variable  [Minghao Chi, 1 file, -4/+1]
Return the value from ccp_crypto_enqueue_request() directly instead of storing it in a redundant variable first. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn> Signed-off-by: CGEL ZTE <cgel.zte@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
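The shape of the cleanup (surrounding function body assumed):

    /* before */
    ret = ccp_crypto_enqueue_request(&req->base, &rctx->cmd);
    return ret;

    /* after: no intermediate variable needed */
    return ccp_crypto_enqueue_request(&req->base, &rctx->cmd);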
2022-01-28  crypto: qat - fix a signedness bug in get_service_enabled()  [Dan Carpenter, 1 file, -1/+1]
The "ret" variable needs to be signed, otherwise an error message will not be printed correctly. Fixes: 0cec19c761e5 ("crypto: qat - add support for compression for 4xxx") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-28  crypto: ccp - Ensure psp_ret is always init'd in __sev_platform_init_locked()  [Peter Gonda, 1 file, -1/+1]
Initialize psp_ret inside of __sev_platform_init_locked(), because there are many failure paths in PSP initialization that do not reach __sev_do_cmd_locked(), which is what sets psp_ret. Fixes: e423b9d75e77 ("crypto: ccp - Move SEV_INIT retry for corrupted data") Signed-off-by: Peter Gonda <pgonda@google.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Brijesh Singh <brijesh.singh@amd.com> Cc: Marc Orr <marcorr@google.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: John Allen <john.allen@amd.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-28  crypto: tcrypt - add asynchronous speed test for SM3  [Tianjia Zhang, 1 file, -5/+9]
tcrypt supports testing of SM3 hash algorithms that use AVX instruction acceleration. In order to add the SM3 asynchronous test at the appropriate position, shift the multibuffer test case sequence numbers backward so that they start from 450. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>