2024-04-12crypto: ecc - Use ECC_CURVE_NIST_P192/256/384_DIGITS where possibleStefan Berger1-6/+6
Replace hard-coded numbers with ECC_CURVE_NIST_P192/256/384_DIGITS where possible. Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
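For illustration, the change has roughly this shape (a sketch, not an actual hunk from the patch):

    /* Before: a magic number for the digit count of NIST P-256 */
    .ndigits = 4,

    /* After: the named constant (4 x 64-bit digits = 256 bits) */
    .ndigits = ECC_CURVE_NIST_P256_DIGITS,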
2024-04-12crypto: tegra - Add Tegra Security Engine driverAkhil R9-0/+4171
Add support for Tegra Security Engine which can accelerate various crypto algorithms. The engine has two separate instances within it, one for AES and one for HASH algorithms. The driver registers two crypto engines - one for AES and another for HASH algorithms - which operate independently, and both use the host1x bus. Additionally, it provides hardware-assisted key protection for up to 15 symmetric keys which it can use for the cipher operations. Signed-off-by: Akhil R <akhilrajeev@nvidia.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
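A minimal sketch of the two-engine setup described above, assuming the usual crypto_engine API; the tegra_se structure and function names here are illustrative, not the driver's actual code:

    #include <linux/device.h>
    #include <crypto/engine.h>

    struct tegra_se {
        struct crypto_engine *aes_engine;  /* serves AES algorithms */
        struct crypto_engine *hash_engine; /* serves HASH algorithms */
    };

    static int tegra_se_engines_init(struct device *dev, struct tegra_se *se)
    {
        /* One crypto engine per hardware instance; both run independently */
        se->aes_engine = crypto_engine_alloc_init(dev, true);
        se->hash_engine = crypto_engine_alloc_init(dev, true);
        if (!se->aes_engine || !se->hash_engine)
            return -ENOMEM;

        return crypto_engine_start(se->aes_engine) ?:
               crypto_engine_start(se->hash_engine);
    }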
2024-04-12gpu: host1x: Add Tegra SE to SID tableAkhil R1-0/+24
Add Tegra Security Engine details to the SID table in host1x driver. These entries are required to be in place to configure the stream ID for SE. Register writes to stream ID registers fail otherwise. Signed-off-by: Akhil R <akhilrajeev@nvidia.com> Acked-by: Mikko Perttunen <mperttunen@nvidia.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
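For context, a host1x SID table entry pairs a client's stream ID register offsets with its MMIO base; a hedged sketch with placeholder values (the real offsets are in the patch):

    /* struct host1x_sid_entry is the existing host1x type */
    static const struct host1x_sid_entry tegra234_sid_table[] = {
        {
            /* SE AES engine - base/offset/limit values are placeholders */
            .base = 0x1d80,
            .offset = 0x30,
            .limit = 0x34,
        },
    };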
2024-04-12dt-bindings: crypto: Add Tegra Security EngineAkhil R2-0/+104
Add DT binding document for Tegra Security Engine. The AES and HASH algorithms are handled independently by separate engines within the Security Engine. These engines are registered as two separate crypto engine drivers. Signed-off-by: Akhil R <akhilrajeev@nvidia.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12padata: Disable BH when taking works lock on MT pathHerbert Xu1-4/+4
As the old padata code can execute in softirq context, disable softirqs for the new padata_do_multithreaded code too as otherwise lockdep will get antsy. Reported-by: syzbot+0cb5bb0f4bf9e79db3b3@syzkaller.appspotmail.com Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: ccp - drop platform ifdef checksArnd Bergmann1-12/+2
When both ACPI and OF are disabled, the dev_vdata variable is unused: drivers/crypto/ccp/sp-platform.c:33:34: error: unused variable 'dev_vdata' [-Werror,-Wunused-const-variable] This is not a useful configuration, and there is not much point in saving a few bytes when only one of the two is enabled, so just remove all these ifdef checks and rely on of_match_node() and acpi_match_device() returning NULL when these subsystems are disabled. Fixes: 6c5063434098 ("crypto: ccp - Add ACPI support") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: qat - Fix spelling mistake "Invalide" -> "Invalid"Colin Ian King1-1/+1
There is a spelling mistake in a dev_err message. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Acked-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: algboss - remove NULL check in cryptomgr_schedule_probe()Roman Smirnov1-3/+0
The for loop will be executed at least once, so i > 0. If the loop is interrupted before i is incremented (e.g., when checking len for NULL), i will not be checked. Found by Linux Verification Center (linuxtesting.org) with Svace. Signed-off-by: Roman Smirnov <r.smirnov@omp.ru> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: ecc - remove checks in crypto_ecdh_shared_secret() and ecc_make_pub_key()Roman Smirnov1-2/+2
With the current state of the code, ecc_get_curve() cannot return NULL in crypto_ecdh_shared_secret() and ecc_make_pub_key(). This is conditioned by the fact that they are only called from ecdh_compute_value(), which implements the kpp_alg::{generate_public_key,compute_shared_secret}() methods. Also ecdh implements the kpp_alg::init() method, which is called before the other methods and sets ecdh_ctx::curve_id to a valid value. Signed-off-by: Roman Smirnov <r.smirnov@omp.ru> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-05crypto: jitter - Replace http with httpsThorsten Blum1-1/+1
The PDF is also available via https. Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-05crypto: jitter - Remove duplicate word in commentThorsten Blum1-1/+1
s/in// Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-05crypto: x86/aes-xts - wire up VAES + AVX10/512 implementationEric Biggers2-0/+41
Add an AES-XTS implementation "xts-aes-vaes-avx10_512" for x86_64 CPUs with the VAES, VPCLMULQDQ, and either AVX10/512 or AVX512BW + AVX512VL extensions. This implementation uses zmm registers to operate on four AES blocks at a time. The assembly code is instantiated using a macro so that most of the source code is shared with other implementations. To avoid downclocking on older Intel CPU models, an exclusion list is used to prevent this 512-bit implementation from being used by default on some CPU models. They will use xts-aes-vaes-avx10_256 instead. For now, this exclusion list is simply coded into aesni-intel_glue.c. It may make sense to eventually move it into a more central location. xts-aes-vaes-avx10_512 is slightly faster than xts-aes-vaes-avx10_256 on some current CPUs. E.g., on AMD Zen 4, AES-256-XTS decryption throughput increases by 13% with 4096-byte inputs, or 14% with 512-byte inputs. On Intel Sapphire Rapids, AES-256-XTS decryption throughput increases by 2% with 4096-byte inputs, or 3% with 512-byte inputs. Future CPUs may provide stronger 512-bit support, in which case a larger benefit should be seen. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
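The exclusion-list idea can be sketched as follows; x86_match_cpu() is the standard helper, but the models listed here are illustrative rather than the exact list in aesni-intel_glue.c:

    #include <asm/cpu_device_id.h>

    static const struct x86_cpu_id zmm_exclusion_list[] = {
        /* CPU models known to downclock when 512-bit vectors are used */
        X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, NULL),
        X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, NULL),
        { /* sentinel */ }
    };

    static bool may_use_zmm(void)
    {
        /* Excluded models fall back to xts-aes-vaes-avx10_256 */
        return !x86_match_cpu(zmm_exclusion_list);
    }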
2024-04-05crypto: x86/aes-xts - wire up VAES + AVX10/256 implementationEric Biggers2-0/+25
Add an AES-XTS implementation "xts-aes-vaes-avx10_256" for x86_64 CPUs with the VAES, VPCLMULQDQ, and either AVX10/256 or AVX512BW + AVX512VL extensions. This implementation avoids using zmm registers, instead using ymm registers to operate on two AES blocks at a time. The assembly code is instantiated using a macro so that most of the source code is shared with other implementations. This is the optimal implementation on CPUs that support VAES and AVX512 but where the zmm registers should not be used due to downclocking effects, for example Intel's Ice Lake. It should also be the optimal implementation on future CPUs that support AVX10/256 but not AVX10/512. The performance is slightly better than that of xts-aes-vaes-avx2, which uses the same 256-bit vector length, due to factors such as being able to use ymm16-ymm31 to cache the AES round keys, and being able to use the vpternlogd instruction to do XORs more efficiently. For example, on Ice Lake, the throughput of decrypting 4096-byte messages with AES-256-XTS is 6.6% higher with xts-aes-vaes-avx10_256 than with xts-aes-vaes-avx2. While this is a small improvement, it is straightforward to provide this implementation (xts-aes-vaes-avx10_256) as long as we are providing xts-aes-vaes-avx2 and xts-aes-vaes-avx10_512 anyway, due to the way the _aes_xts_crypt macro is structured. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-05crypto: x86/aes-xts - wire up VAES + AVX2 implementationEric Biggers2-0/+31
Add an AES-XTS implementation "xts-aes-vaes-avx2" for x86_64 CPUs with the VAES, VPCLMULQDQ, and AVX2 extensions, but not AVX512 or AVX10. This implementation uses ymm registers to operate on two AES blocks at a time. The assembly code is instantiated using a macro so that most of the source code is shared with other implementations. This is the optimal implementation on AMD Zen 3. It should also be the optimal implementation on Intel Alder Lake, which similarly supports VAES but not AVX512. Comparing to xts-aes-aesni-avx on Zen 3, xts-aes-vaes-avx2 provides 70% higher AES-256-XTS decryption throughput with 4096-byte messages, or 23% higher with 512-byte messages. A large improvement is also seen with CPUs that do support AVX512 (e.g., 98% higher AES-256-XTS decryption throughput on Ice Lake with 4096-byte messages), though the following patches add AVX512 optimized implementations to get a bit more performance on those CPUs. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-05crypto: x86/aes-xts - wire up AESNI + AVX implementationEric Biggers2-2/+209
Add an AES-XTS implementation "xts-aes-aesni-avx" for x86_64 CPUs that have the AES-NI and AVX extensions but not VAES. It's similar to the existing xts-aes-aesni in that it uses xmm registers to operate on one AES block at a time. It differs from xts-aes-aesni in the following ways:

- It uses the VEX-coded (non-destructive) instructions from AVX. This improves performance slightly.
- It incorporates some additional optimizations such as interleaving the tweak computation with AES en/decryption, handling single-page messages more efficiently, and caching the first round key.
- It supports only 64-bit (x86_64).
- It's generated by an assembly macro that will also be used to generate VAES-based implementations.

The performance improvement over xts-aes-aesni varies from small to large, depending on the CPU and other factors such as the size of the messages en/decrypted. For example, the following increases in AES-256-XTS decryption throughput are seen on the following CPUs:

                          | 4096-byte messages | 512-byte messages |
    ----------------------+--------------------+-------------------+
    Intel Skylake         |         6%         |        31%        |
    Intel Cascade Lake    |         4%         |        26%        |
    AMD Zen 1             |        61%         |        73%        |
    AMD Zen 2             |        36%         |        59%        |

(The above CPUs don't support VAES, so they can't use VAES instead.) While this isn't as large an improvement as what VAES provides, this still seems worthwhile. This implementation is fairly easy to provide based on the assembly macro that's needed for VAES anyway, and it will be the best implementation on a large number of CPUs (very roughly, the CPUs launched by Intel and AMD from 2011 to 2018). This makes the existing xts-aes-aesni *mostly* obsolete. For now, leave it in place to support 32-bit kernels and also CPUs like Intel Westmere that support AES-NI but not AVX. (We could potentially remove it anyway and just rely on the indirect acceleration via ecb-aes-aesni in those cases, but that change will need to be considered separately.) Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-05crypto: x86/aes-xts - add AES-XTS assembly macro for modern CPUsEric Biggers2-1/+802
Add an assembly file aes-xts-avx-x86_64.S which contains a macro that expands into AES-XTS implementations for x86_64 CPUs that support at least AES-NI and AVX, optionally also taking advantage of VAES, VPCLMULQDQ, and AVX512 or AVX10. This patch doesn't expand the macro at all. Later patches will do so, adding each implementation individually so that the motivation and use case for each individual implementation can be fully presented. The file also provides a function aes_xts_encrypt_iv() which handles the encryption of the IV (tweak), using AES-NI and AVX. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
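From the glue code's point of view, the macro expansions become ordinary asm entry points; a hedged sketch of the declarations (only aes_xts_encrypt_iv() is named in the commit, the per-implementation prototype is illustrative):

    #include <linux/linkage.h>
    #include <crypto/aes.h>

    /* Encrypts the XTS IV (tweak) with the tweak key, using AES-NI + AVX */
    asmlinkage void aes_xts_encrypt_iv(const struct crypto_aes_ctx *tweak_key,
                                       u8 iv[AES_BLOCK_SIZE]);

    /* One such entry point is expanded from the macro per implementation */
    asmlinkage void aes_xts_encrypt_aesni_avx(const struct crypto_aes_ctx *key,
                                              const u8 *src, u8 *dst, int len,
                                              u8 tweak[AES_BLOCK_SIZE]);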
2024-04-05x86: add kconfig symbols for assembler VAES and VPCLMULQDQ supportEric Biggers1-0/+10
Add config symbols AS_VAES and AS_VPCLMULQDQ that expose whether the assembler supports the vector AES and carryless multiplication cryptographic extensions. Reviewed-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
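Code that depends on these symbols can then be compiled out cleanly; a minimal sketch, assuming the usual IS_ENABLED() pattern:

    #include <linux/kconfig.h>

    static inline bool vaes_asm_supported(void)
    {
        /* False when the assembler lacks VAES/VPCLMULQDQ support */
        return IS_ENABLED(CONFIG_AS_VAES) &&
               IS_ENABLED(CONFIG_AS_VPCLMULQDQ);
    }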
2024-04-05crypto: ecdh - explicitly zeroize private_keyJoachim Vandersmissen1-0/+2
private_key is overwritten with the key parameter passed in by the caller (if present), or alternatively a newly generated private key. However, it is possible that the caller provides a key (or the newly generated key) which is shorter than the previous key. In that scenario, some key material from the previous key would not be overwritten. The easiest solution is to explicitly zeroize the entire private_key array first. Note that this patch slightly changes the behavior of this function: previously, if ecc_gen_privkey() failed, the old private_key would remain. Now, the private_key is always zeroized. This behavior is consistent with the case where params.key is set and ecc_is_key_valid() fails. Signed-off-by: Joachim Vandersmissen <git@jvdsn.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
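The fix's shape, sketched from the description above (see crypto/ecdh.c for the real hunk):

    /* Inside ecdh_set_secret(): wipe the whole fixed-size buffer first,
     * so a shorter new key cannot leave stale key material behind. */
    memzero_explicit(ctx->private_key, sizeof(ctx->private_key));

    if (!params.key || !params.key_size)
        return ecc_gen_privkey(ctx->curve_id, ctx->ndigits,
                               ctx->private_key);

    memcpy(ctx->private_key, params.key, params.key_size);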
2024-04-05crypto: fips - Remove the now superfluous sentinel element from ctl_table arrayJoel Granados1-1/+0
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels), which will reduce the overall build-time size of the kernel and run-time memory bloat by ~64 bytes per sentinel (for further information, see: https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/). Remove the sentinel from crypto_sysctl_table. Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
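The resulting array shape, sketched with one representative entry (the point is only that no empty terminator follows):

    static struct ctl_table crypto_sysctl_table[] = {
        {
            .procname     = "fips_enabled",
            .data         = &fips_enabled,
            .maxlen       = sizeof(int),
            .mode         = 0444,
            .proc_handler = proc_dointvec,
        },
        /* no {} sentinel entry anymore */
    };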
2024-04-05crypto: jitter - Use kvfree_sensitive() to fix Coccinelle warningThorsten Blum1-2/+1
Replace memzero_explicit() and kvfree() with kvfree_sensitive() to fix the following Coccinelle/coccicheck warning reported by kfree_sensitive.cocci: WARNING opportunity for kfree_sensitive/kvfree_sensitive Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
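The transformation the Coccinelle rule suggests, sketched with a generic pointer rather than the jitter entropy code's actual variable:

    /* Before: zeroize, then free */
    memzero_explicit(ptr, size);
    kvfree(ptr);

    /* After: a single call that zeroizes and frees */
    kvfree_sensitive(ptr, size);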
2024-04-05dt-bindings: crypto: ti,omap-sham: Convert to dtschemaAnimesh Agarwal2-28/+56
Convert the OMAP SoC SHA crypto Module bindings to DT Schema. Signed-off-by: Animesh Agarwal <animeshagarwal28@gmail.com> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-05crypto: qat - Avoid -Wflex-array-member-not-at-end warningsGustavo A. R. Silva2-6/+11
-Wflex-array-member-not-at-end is coming in GCC-14, and we are getting ready to enable it globally. Use the `__struct_group()` helper to separate the flexible array from the rest of the members in flexible `struct qat_alg_buf_list`, through tagged `struct qat_alg_buf_list_hdr`, and avoid embedding the flexible-array member in the middle of `struct qat_alg_fixed_buf_list`. Also, use `container_of()` whenever we need to retrieve a pointer to the flexible structure. So, with these changes, fix the following warning (reported 8 times):

    drivers/crypto/intel/qat/qat_common/qat_bl.h:25:33: warning: structure containing a flexible array member is not at the end of another structure [-Wflex-array-member-not-at-end]

Link: https://github.com/KSPP/linux/issues/202 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Acked-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
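The pattern, sketched with simplified members (the real layout is in qat_bl.h):

    /* Header members live inside the group, so the tagged struct
     * qat_alg_buf_list_hdr can be embedded elsewhere without
     * dragging the flexible array along. */
    struct qat_alg_buf_list {
        __struct_group(qat_alg_buf_list_hdr, hdr, __packed,
            u64 resrvd;
            u32 num_bufs;
        );
        struct qat_alg_buf buffers[];
    } __packed;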
2024-04-02hwrng: mxc-rnga - Drop usage of platform_driver_probe()Uwe Kleine-König1-4/+5
There are considerations to drop platform_driver_probe() as a concept that isn't relevant any more today. It comes with an added complexity that makes many users hold it wrong. (E.g. this driver should have marked the driver struct with __refdata.) Convert the driver to the more usual module_platform_driver(). This fixes a W=1 build warning:

    WARNING: modpost: drivers/char/hw_random/mxc-rnga: section mismatch in reference: mxc_rnga_driver+0x10 (section: .data) -> mxc_rnga_remove (section: .exit.text)

with CONFIG_HW_RANDOM_MXC_RNGA=m. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
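A minimal sketch of the conversion (callback bodies elided; see the driver for the real code):

    #include <linux/module.h>
    #include <linux/platform_device.h>

    static int mxc_rnga_probe(struct platform_device *pdev)
    {
        return 0; /* real probe sets up and registers the RNG */
    }

    static struct platform_driver mxc_rnga_driver = {
        .driver = {
            .name = "mxc_rnga",
        },
        /* probe lives in the struct now, so no section mismatch */
        .probe = mxc_rnga_probe,
    };
    module_platform_driver(mxc_rnga_driver);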
2024-04-02crypto: x86/aesni - Update aesni_set_key() to return voidChang S. Bae2-7/+6
The aesni_set_key() implementation has no error case, yet its prototype specifies to return an error code. Modify the function prototype to return void and adjust the related code. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: linux-crypto@vger.kernel.org Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: x86/aesni - Rearrange AES key size checkChang S. Bae1-10/+8
aes_expandkey() already includes an AES key size check. If AES-NI is unusable, invoke the function without the size check. Also, use aes_check_keylen() instead of open-coding the check. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: linux-crypto@vger.kernel.org Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
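A minimal sketch of the reordering, assuming the usual helper names (aes_check_keylen(), crypto_simd_usable() and aes_expandkey() are existing APIs):

    if (!crypto_simd_usable()) {
        /* aes_expandkey() performs its own key length check */
        return aes_expandkey(ctx, in_key, key_len);
    }

    err = aes_check_keylen(key_len);
    if (err)
        return err;

    kernel_fpu_begin();
    aesni_set_key(ctx, in_key, key_len);
    kernel_fpu_end();
    return 0;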
2024-04-02crypto: bcm - Fix pointer arithmeticAleksandr Mishin1-1/+1
In spu2_dump_omd(), the value of ptr is increased by ciph_key_len instead of hash_iv_len, which could lead to going beyond the buffer boundaries. Fix this bug by changing ciph_key_len to hash_iv_len. Found by Linux Verification Center (linuxtesting.org) with SVACE. Fixes: 9d12ba86f818 ("crypto: brcm - Add Broadcom SPU driver") Signed-off-by: Aleksandr Mishin <amishin@t-argos.ru> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
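The fix, sketched:

    /* spu2_dump_omd(): advance past the hash IV that was just dumped */
    packet_dump("  hash IV: ", ptr, hash_iv_len);
    ptr += hash_iv_len;    /* was: ptr += ciph_key_len */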
2024-04-02crypto: iaa - Fix some errors in IAA documentationJerry Snitselaar1-6/+16
This cleans up the following issues I ran into when trying to use the scripts and commands in the iaa-crypto.rst document:

- Fix incorrect arguments being passed to accel-config config-wq.
- Replace --device_name with --driver-name.
- Replace --driver_name with --driver-name.
- Replace --size with --wq-size.
- Add missing --priority argument.
- Add missing accel-config config-engine command after the config-wq commands.
- Fix wq name passed to accel-config config-wq.
- Add rmmod/modprobe of iaa_crypto to the script that disables, then enables all devices and workqueues, to avoid enable-wq failing with -EEXIST when trying to register the compression algorithm.
- Fix device name in cases where iaa was used instead of iax.

Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-crypto@vger.kernel.org Cc: linux-doc@vger.kernel.org Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: ecdsa - Fix module auto-load on add-keyStefan Berger1-0/+3
Add module alias with the algorithm cra_name similar to what we have for RSA-related and other algorithms. The kernel attempts to modprobe asymmetric algorithms using the names "crypto-$cra_name" and "crypto-$cra_name-all". However, since these aliases are currently missing, the modules are not loaded. For instance, when using the `add_key` function, the hash algorithm is typically loaded automatically, but the asymmetric algorithm is not.

Steps to test:

1. Create certificate:

    openssl req -x509 -sha256 -newkey ec \
        -pkeyopt "ec_paramgen_curve:secp384r1" -keyout key.pem -days 365 \
        -subj '/CN=test' -nodes -outform der -out nist-p384.der

2. Optionally, trace module requests with:

    trace-cmd stream -e module &

3. Trigger add_key call for the cert:

    # keyctl padd asymmetric "" @u < nist-p384.der
    641069229
    # lsmod | head -2
    Module                  Size  Used by
    ecdsa_generic          16384  0

Fixes: c12d448ba939 ("crypto: ecdsa - Register NIST P384 and extend test suite") Cc: stable@vger.kernel.org Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Reviewed-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
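The fix most likely has this shape: module aliases matching the cra_name values, so that "crypto-ecdsa-nist-p384"-style requests resolve (exact strings are in crypto/ecdsa.c; the same pattern applies to the ecrdsa fix below):

    MODULE_ALIAS_CRYPTO("ecdsa-nist-p192");
    MODULE_ALIAS_CRYPTO("ecdsa-nist-p256");
    MODULE_ALIAS_CRYPTO("ecdsa-nist-p384");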
2024-04-02crypto: ecc - update ecc_gen_privkey for FIPS 186-5Joachim Vandersmissen1-12/+17
FIPS 186-5 [1] was released approximately 1 year ago. The most interesting change for ecc_gen_privkey is the removal of curves with order < 224 bits. This minimum is now checked in step 1. It is unlikely that there is still any benefit in generating private keys for curves with n < 224, as those curves provide less than 112 bits of security strength and are therefore unsafe for any modern usage. This patch also updates the documentation for __ecc_is_key_valid and ecc_gen_privkey to clarify which FIPS 186-5 method is being used to generate private keys. Previous documentation mentioned that "extra random bits" was used. However, this did not match the code. Instead, the code currently uses (and always has used) the "rejection sampling" ("testing candidates" in FIPS 186-4) method. [1]: https://doi.org/10.6028/NIST.FIPS.186-5 Signed-off-by: Joachim Vandersmissen <git@jvdsn.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
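For reference, "rejection sampling" can be sketched as below; the helper names follow crypto/ecc.c conventions, but the loop is simplified (the real code draws from the crypto RNG and works on u64 digit arrays):

    /* Draw candidates until one lands in the valid range [1, n-1] */
    do {
        get_random_bytes(priv, nbytes);
    } while (vli_is_zero(priv, ndigits) ||
             vli_cmp(priv, curve->n, ndigits) >= 0);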
2024-04-02crypto: ecrdsa - Fix module auto-load on add_keyVitaly Chikunov1-0/+1
Add module alias with the algorithm cra_name similar to what we have for RSA-related and other algorithms. The kernel attempts to modprobe asymmetric algorithms using the names "crypto-$cra_name" and "crypto-$cra_name-all". However, since these aliases are currently missing, the modules are not loaded. For instance, when using the `add_key` function, the hash algorithm is typically loaded automatically, but the asymmetric algorithm is not.

Steps to test:

1. Cert is generated using the ima-evm-utils test suite with `gen-keys.sh`; an example cert is provided below:

    $ base64 -d >test-gost2012_512-A.cer <<EOF
    MIIB/DCCAWagAwIBAgIUK8+whWevr3FFkSdU9GLDAM7ure8wDAYIKoUDBwEBAwMFADARMQ8wDQYD
    VQQDDAZDQSBLZXkwIBcNMjIwMjAxMjIwOTQxWhgPMjA4MjEyMDUyMjA5NDFaMBExDzANBgNVBAMM
    BkNBIEtleTCBoDAXBggqhQMHAQEBAjALBgkqhQMHAQIBAgEDgYQABIGALXNrTJGgeErBUOov3Cfo
    IrHF9fcj8UjzwGeKCkbCcINzVUbdPmCopeJRHDJEvQBX1CQUPtlwDv6ANjTTRoq5nCk9L5PPFP1H
    z73JIXHT0eRBDVoWy0cWDRz1mmQlCnN2HThMtEloaQI81nTlKZOcEYDtDpi5WODmjEeRNQJMdqCj
    UDBOMAwGA1UdEwQFMAMBAf8wHQYDVR0OBBYEFCwfOITMbE9VisW1i2TYeu1tAo5QMB8GA1UdIwQY
    MBaAFCwfOITMbE9VisW1i2TYeu1tAo5QMAwGCCqFAwcBAQMDBQADgYEAmBfJCMTdC0/NSjz4BBiQ
    qDIEjomO7FEHYlkX5NGulcF8FaJW2jeyyXXtbpnub1IQ8af1KFIpwoS2e93LaaofxpWlpQLlju6m
    KYLOcO4xK3Whwa2hBAz9YbpUSFjvxnkS2/jpH2MsOSXuUEeCruG/RkHHB3ACef9umG6HCNQuAPY=
    EOF

2. Optionally, trace module requests with:

    trace-cmd stream -e module &

3. Trigger add_key call for the cert:

    # keyctl padd asymmetric "" @u <test-gost2012_512-A.cer
    939910969
    # lsmod | head -3
    Module                  Size  Used by
    ecrdsa_generic         16384  0
    streebog_generic       28672  0

Reported-by: Paul Wolneykien <manowar@altlinux.org> Cc: stable@vger.kernel.org Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Tested-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02hwrng: core - Convert sprintf/snprintf to sysfs_emitLi Zhijian1-1/+1
Per filesystems/sysfs.rst, show() should only use sysfs_emit() or sysfs_emit_at() when formatting the value to be returned to user space. Coccinelle complains that there are still a couple of functions that use snprintf(). Convert them to sysfs_emit(); sprintf() calls are converted as well where present. Generally, this patch is generated by:

    make coccicheck M=<path/to/file> MODE=patch \
        COCCI=scripts/coccinelle/api/device_attr_show.cocci

No functional change intended. CC: Olivia Mackall <olivia@selenic.com> CC: Herbert Xu <herbert@gondor.apana.org.au> CC: linux-crypto@vger.kernel.org Signed-off-by: Li Zhijian <lizhijian@fujitsu.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
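The conversion's shape (a hypothetical show() body, not one of the patched functions):

    /* Before: caller must know the sysfs buffer is PAGE_SIZE bytes */
    return snprintf(buf, PAGE_SIZE, "%u\n", rng->quality);

    /* After: sysfs_emit() enforces the PAGE_SIZE bound itself */
    return sysfs_emit(buf, "%u\n", rng->quality);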
2024-04-02dt-bindings: crypto: ice: Document sc7280 inline crypto engineLuca Weiss1-0/+1
Document the compatible used for the inline crypto engine found on SC7280. Signed-off-by: Luca Weiss <luca.weiss@fairphone.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: remove CONFIG_CRYPTO_STATSEric Biggers32-1096/+71
Remove support for the "Crypto usage statistics" feature (CONFIG_CRYPTO_STATS). This feature does not appear to have ever been used, and it is harmful because it significantly reduces performance and is a large maintenance burden. Covering each of these points in detail:

1. Feature is not being used

Since these generic crypto statistics are only readable using netlink, it's fairly straightforward to look for programs that use them. I'm unable to find any evidence that any such programs exist. For example, Debian Code Search returns no hits except the kernel header and kernel code itself and translations of the kernel header: https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1

The patch series that added this feature in 2018 (https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/) said "The goal is to have an ifconfig for crypto device." This doesn't appear to have happened. It's not clear that there is real demand for crypto statistics. Just because the kernel provides other types of statistics such as I/O and networking statistics and some people find those useful does not mean that crypto statistics are useful too.

Further evidence that programs are not using CONFIG_CRYPTO_STATS is that it was able to be disabled in RHEL and Fedora as a bug fix (https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947). Even further evidence comes from the fact that there are and have been bugs in how the stats work, but they were never reported. For example, before Linux v6.7 hash stats were double-counted in most cases. There has also never been any documentation for this feature, so it might be hard to use even if someone wanted to.

2. CONFIG_CRYPTO_STATS significantly reduces performance

Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of the crypto API, even if no program ever retrieves the statistics. This primarily affects systems with a large number of CPUs. For example, https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported that Lustre client encryption performance improved from 21.7GB/s to 48.2GB/s by disabling CONFIG_CRYPTO_STATS.

It can be argued that this means that CONFIG_CRYPTO_STATS should be optimized with per-cpu counters similar to many of the networking counters. But no one has done this in 5+ years. This is consistent with the fact that the feature appears to be unused, so there seems to be little interest in improving it as opposed to just disabling it.

It can be argued that because CONFIG_CRYPTO_STATS is off by default, performance doesn't matter. But Linux distros tend to err on the side of enabling options. The option is enabled in Ubuntu and Arch Linux, and until recently was enabled in RHEL and Fedora (see above). So, even just having the option available is harmful to users.

3. CONFIG_CRYPTO_STATS is a large maintenance burden

There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS, spread among 32 files. It significantly complicates much of the implementation of the crypto API. After the initial submission, many fixes and refactorings have consumed effort of multiple people to keep this feature "working". We should be spending this effort elsewhere.

Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Corentin Labbe <clabbe@baylibre.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: nx - Avoid -Wflex-array-member-not-at-end warningGustavo A. R. Silva2-6/+10
-Wflex-array-member-not-at-end is coming in GCC-14, and we are getting ready to enable it globally. So, we are deprecating flexible-array members in the middle of another structure. There is currently an object (`header`) in `struct nx842_crypto_ctx` that contains a flexible structure (`struct nx842_crypto_header`):

    struct nx842_crypto_ctx {
        ...
        struct nx842_crypto_header header;
        struct nx842_crypto_header_group group[NX842_CRYPTO_GROUP_MAX];
        ...
    };

So, in order to avoid ending up with a flexible-array member in the middle of another struct, we use the `struct_group_tagged()` helper to separate the flexible array from the rest of the members in the flexible structure:

    struct nx842_crypto_header {
        struct_group_tagged(nx842_crypto_header_hdr, hdr,
            ... the rest of the members
        );
        struct nx842_crypto_header_group group[];
    } __packed;

With the change described above, we can now declare an object of the type of the tagged struct, without embedding the flexible array in the middle of another struct:

    struct nx842_crypto_ctx {
        ...
        struct nx842_crypto_header_hdr header;
        struct nx842_crypto_header_group group[NX842_CRYPTO_GROUP_MAX];
        ...
    } __packed;

We also use `container_of()` whenever we need to retrieve a pointer to the flexible structure, through which we can access the flexible array if needed. So, with these changes, fix the following warning:

    In file included from drivers/crypto/nx/nx-842.c:55:
    drivers/crypto/nx/nx-842.h:174:36: warning: structure containing a flexible array member is not at the end of another structure [-Wflex-array-member-not-at-end]
      174 |         struct nx842_crypto_header header;
          |                                    ^~~~~~

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: starfive - Use dma for aes requestsJia Jie Ho4-238/+395
Convert the AES module to use DMA for data transfers, to reduce CPU load and be compatible with future variants. Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: starfive - Skip unneeded key freeJia Jie Ho1-0/+3
Skip the unneeded kfree_sensitive() if the RSA module is using a fallback algorithm. Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: starfive - Update hash dma usageJia Jie Ho3-176/+112
The hash code currently uses a software fallback for non-word-aligned input scatterlists. Add support for unaligned cases by utilizing the data valid mask for DMA. Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02dt-bindings: crypto: starfive: Add jh8100 supportJia Jie Ho1-2/+28
Add compatible string and additional interrupt for StarFive JH8100 crypto engine. Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com> Acked-by: Conor Dooley <conor.dooley@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: iaa - Change iaa statistics to atomic64_tTom Zanussi2-64/+77
Change all the iaa statistics to use atomic64_t instead of the current u64, to avoid potentially inconsistent counts. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
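The shape of the conversion; the counter and helper names are illustrative of the iaa statistics rather than copied from the driver:

    #include <linux/atomic.h>

    static atomic64_t total_comp_calls;

    static inline void update_total_comp_calls(void)
    {
        atomic64_inc(&total_comp_calls);    /* was: total_comp_calls++ */
    }

    static inline u64 read_total_comp_calls(void)
    {
        return atomic64_read(&total_comp_calls);
    }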
2024-04-02crypto: iaa - Add global_stats file and remove individual stat filesTom Zanussi2-45/+61
Currently, the wq_stats output also includes the global stats, while the individual global stats are also available as separate debugfs files. Since these are all read-only, there's really no reason to have them as separate files, especially since we already display them as global stats in the wq_stats. It makes more sense to just add a separate global_stats file to display those, and remove them from the wq_stats, as well as removing the individual stats files. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: iaa - Remove comp/decomp delay statisticsTom Zanussi3-45/+0
As part of the simplification/cleanup of the iaa statistics, remove the comp/decomp delay statistics. They're actually not really useful and can be/are being more flexibly generated using standard kernel tracing infrastructure. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: iaa - fix decomp_bytes_in statsTom Zanussi1-2/+2
Decomp stats should use slen, not dlen. Change both the global and per-wq stats to use the correct value. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: qat - implement interface for live migrationXin Zeng9-1/+1445
Add logic to implement the interface for live migration defined in qat/qat_mig_dev.h. This is specific to QAT GEN4 Virtual Functions (VFs). This introduces a migration data manager which is used to handle the device state during migration. The manager ensures that the device state is stored in a format that can be restored in the destination node. The VF state is organized into a hierarchical structure that includes a preamble, a general state section, a MISC bar section and an ETR bar section. The latter contains the state of the 4 ring pairs contained on a VF. Here is a graphical representation of the state:

    preamble | general state section | leaf state
             | MISC bar state section| leaf state
             | ETR bar state section | bank0 state section | leaf state
                                     | bank1 state section | leaf state
                                     | bank2 state section | leaf state
                                     | bank3 state section | leaf state

In addition to the implementation of the qat_migdev_ops interface and the state manager framework, add a mutex in pfvf to avoid pf2vf messages during migration. Signed-off-by: Xin Zeng <xin.zeng@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: qat - add interface for live migrationXin Zeng5-1/+189
Extend the driver with a new interface to be used for VF live migration. This allows creating and destroying a qat_mig_dev object that contains a set of methods for saving and restoring the state of a QAT VF. This interface will be used by the qat-vfio-pci module. Signed-off-by: Xin Zeng <xin.zeng@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: qat - add bank save and restore flowsSiming Wan4-0/+338
Add logic to save, restore, quiesce and drain a ring bank for QAT GEN4 devices. This allows the state of a Virtual Function (VF) to be saved and restored, and will be used to implement VM live migration. Signed-off-by: Siming Wan <siming.wan@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Xin Zeng <xin.zeng@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: qat - expand CSR operations for QAT GEN4 devicesSiming Wan3-1/+249
Extend the CSR operations for QAT GEN4 devices to allow saving and restoring the rings state. The new operations will be used as a building block for implementing the state save and restore of Virtual Functions necessary for VM live migration. This adds the following operations:

- read ring status register
- read ring underflow/overflow status register
- read ring nearly empty status register
- read ring nearly full status register
- read ring full status register
- read ring complete status register
- read ring exception status register
- read/write ring exception interrupt mask register
- read ring configuration register
- read ring base register
- read/write ring interrupt enable register
- read ring interrupt flag register
- read/write ring interrupt source select register
- read ring coalesced interrupt enable register
- read ring coalesced interrupt control register
- read ring flag and coalesced interrupt enable register
- read ring service arbiter enable register
- get ring coalesced interrupt control enable mask

Signed-off-by: Siming Wan <siming.wan@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Xin Zeng <xin.zeng@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
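The new operations extend the existing per-generation CSR ops table; a hedged sketch of the pattern (the struct name adf_hw_csr_ops is real, the member names here are illustrative):

    struct adf_hw_csr_ops {
        /* ... existing write-side ring CSR ops ... */
        u32 (*read_csr_stat)(void __iomem *csr_base_addr, u32 bank);
        u32 (*read_csr_ring_base)(void __iomem *csr_base_addr, u32 bank,
                                  u32 ring);
        void (*write_csr_int_srcsel)(void __iomem *csr_base_addr, u32 bank,
                                     u32 value);
    };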
2024-04-02crypto: qat - rename get_sla_arr_of_type()Siming Wan2-5/+7
The function get_sla_arr_of_type() returns a pointer to an SLA type specific array. Rename it and expose it as it will be used externally to this module. This does not introduce any functional change. Signed-off-by: Siming Wan <siming.wan@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Damian Muszynski <damian.muszynski@intel.com> Signed-off-by: Xin Zeng <xin.zeng@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: qat - relocate CSR access codeGiovanni Cabiddu17-362/+397
As the common hw_data files are growing and the adf_hw_csr_ops is going to be extended with new operations, move all logic related to ring CSRs to the newly created adf_gen[2|4]_hw_csr_data.[c|h] files. This does not introduce any functional change. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Xin Zeng <xin.zeng@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: qat - move PFVF compat checker to a functionXin Zeng2-7/+12
Move the code that implements VF version compatibility on the PF side to a separate function so that it can be reused when doing VM live migration. This does not introduce any functional change. Signed-off-by: Xin Zeng <xin.zeng@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-02crypto: qat - relocate and rename 4xxx PF2VM definitionsXin Zeng2-5/+7
Move and rename ADF_4XXX_PF2VM_OFFSET and ADF_4XXX_VM2PF_OFFSET to ADF_GEN4_PF2VM_OFFSET and ADF_GEN4_VM2PF_OFFSET respectively. These definitions are moved from adf_gen4_pfvf.c to adf_gen4_hw_data.h as they are specific to GEN4 and not just to qat_4xxx. This change is made in anticipation of their use in live migration. This does not introduce any functional change. Signed-off-by: Xin Zeng <xin.zeng@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>