path: root/crypto
2019-07-26  crypto: chacha20poly1305 - fix atomic sleep when using async algorithm  (Eric Biggers, 1 file changed, -11/+19)
commit 7545b6c2087f4ef0287c8c9b7eba6a728c67ff8e upstream.

Clear the CRYPTO_TFM_REQ_MAY_SLEEP flag when the chacha20poly1305 operation is being continued from an async completion callback, since sleeping may not be allowed in that context.

This is basically the same bug that was recently fixed in the xts and lrw templates. But, it's always been broken in chacha20poly1305 too. This was found using syzkaller in combination with the updated crypto self-tests which actually test the MAY_SLEEP flag now.

Reproducer:

    python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind( ("aead", "rfc7539(cryptd(chacha20-generic),poly1305-generic)"))'

Kernel output:

    BUG: sleeping function called from invalid context at include/crypto/algapi.h:426
    in_atomic(): 1, irqs_disabled(): 0, pid: 1001, name: kworker/2:2
    [...]
    CPU: 2 PID: 1001 Comm: kworker/2:2 Not tainted 5.2.0-rc2 #5
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
    Workqueue: crypto cryptd_queue_worker
    Call Trace:
     __dump_stack lib/dump_stack.c:77 [inline]
     dump_stack+0x4d/0x6a lib/dump_stack.c:113
     ___might_sleep kernel/sched/core.c:6138 [inline]
     ___might_sleep.cold.19+0x8e/0x9f kernel/sched/core.c:6095
     crypto_yield include/crypto/algapi.h:426 [inline]
     crypto_hash_walk_done+0xd6/0x100 crypto/ahash.c:113
     shash_ahash_update+0x41/0x60 crypto/shash.c:251
     shash_async_update+0xd/0x10 crypto/shash.c:260
     crypto_ahash_update include/crypto/hash.h:539 [inline]
     poly_setkey+0xf6/0x130 crypto/chacha20poly1305.c:337
     poly_init+0x51/0x60 crypto/chacha20poly1305.c:364
     async_done_continue crypto/chacha20poly1305.c:78 [inline]
     poly_genkey_done+0x15/0x30 crypto/chacha20poly1305.c:369
     cryptd_skcipher_complete+0x29/0x70 crypto/cryptd.c:279
     cryptd_skcipher_decrypt+0xcd/0x110 crypto/cryptd.c:339
     cryptd_queue_worker+0x70/0xa0 crypto/cryptd.c:184
     process_one_work+0x1ed/0x420 kernel/workqueue.c:2269
     worker_thread+0x3e/0x3a0 kernel/workqueue.c:2415
     kthread+0x11f/0x140 kernel/kthread.c:255
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:352

Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Cc: <stable@vger.kernel.org> # v4.2+
Cc: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
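The fix amounts to dropping MAY_SLEEP from the request flags once execution resumes from an async completion callback. A minimal sketch of that idea, assuming a per-request flags field in the chacha20poly1305 request context (illustrative, not the exact upstream diff):

        static void async_done_continue(struct aead_request *req, int err,
                                        int (*cont)(struct aead_request *))
        {
                if (!err) {
                        struct chachapoly_req_ctx *rctx = aead_request_ctx(req);

                        /* We are in a completion callback; sleeping may not be
                         * allowed here, so clear MAY_SLEEP for the remainder
                         * of this request. */
                        rctx->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
                        err = cont(req);
                }

                if (err != -EINPROGRESS && err != -EBUSY)
                        aead_request_complete(req, err);
        }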
2019-07-26  crypto: ghash - fix unaligned memory access in ghash_setkey()  (Eric Biggers, 1 file changed, -1/+7)
commit 5c6bc4dfa515738149998bb0db2481a4fdead979 upstream. Changing ghash_mod_init() to be subsys_initcall made it start running before the alignment fault handler has been installed on ARM. In kernel builds where the keys in the ghash test vectors happened to be misaligned in the kernel image, this exposed the longstanding bug that ghash_setkey() is incorrectly casting the key buffer (which can have any alignment) to be128 for passing to gf128mul_init_4k_lle(). Fix this by memcpy()ing the key to a temporary buffer. Don't fix it by setting an alignmask on the algorithm instead because that would unnecessarily force alignment of the data too. Fixes: 2cdc6899a88e ("crypto: ghash - Add GHASH digest algorithm for GCM") Reported-by: Peter Robinson <pbrobinson@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Tested-by: Peter Robinson <pbrobinson@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
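A sketch of the described fix: copy the possibly unaligned key into a properly aligned stack be128 before passing it to gf128mul_init_4k_lle(), then wipe the copy. Function and type names follow the commit text; treat the body as an illustration rather than the exact upstream patch:

        static int ghash_setkey(struct crypto_shash *tfm,
                                const u8 *key, unsigned int keylen)
        {
                struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
                be128 k;

                if (keylen != GHASH_BLOCK_SIZE)
                        return -EINVAL;

                if (ctx->gf128)
                        gf128mul_free_4k(ctx->gf128);

                BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE);
                memcpy(&k, key, GHASH_BLOCK_SIZE); /* key may have any alignment */
                ctx->gf128 = gf128mul_init_4k_lle(&k);
                memzero_explicit(&k, GHASH_BLOCK_SIZE);

                return ctx->gf128 ? 0 : -ENOMEM;
        }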
2019-07-26  crypto: asymmetric_keys - select CRYPTO_HASH where needed  (Arnd Bergmann, 1 file changed, -0/+3)
[ Upstream commit 90acc0653d2bee203174e66d519fbaaa513502de ]

Build testing with some core crypto options disabled revealed a few modules that are missing CRYPTO_HASH:

    crypto/asymmetric_keys/x509_public_key.o: In function `x509_get_sig_params':
    x509_public_key.c:(.text+0x4c7): undefined reference to `crypto_alloc_shash'
    x509_public_key.c:(.text+0x5e5): undefined reference to `crypto_shash_digest'
    crypto/asymmetric_keys/pkcs7_verify.o: In function `pkcs7_digest.isra.0':
    pkcs7_verify.c:(.text+0xab): undefined reference to `crypto_alloc_shash'
    pkcs7_verify.c:(.text+0x1b2): undefined reference to `crypto_shash_digest'
    pkcs7_verify.c:(.text+0x3c1): undefined reference to `crypto_shash_update'
    pkcs7_verify.c:(.text+0x411): undefined reference to `crypto_shash_finup'

This normally doesn't show up in randconfig tests because there is a large number of other options that select CRYPTO_HASH.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-07-26  crypto: serpent - mark __serpent_setkey_sbox noinline  (Arnd Bergmann, 1 file changed, -1/+7)
[ Upstream commit 473971187d6727609951858c63bf12b0307ef015 ] The same bug that gcc hit in the past is apparently now showing up with clang, which decides to inline __serpent_setkey_sbox: crypto/serpent_generic.c:268:5: error: stack frame size of 2112 bytes in function '__serpent_setkey' [-Werror,-Wframe-larger-than=] Marking it 'noinline' reduces the stack usage from 2112 bytes to 192 and 96 bytes, respectively, and seems to generate more useful object code. Fixes: c871c10e4ea7 ("crypto: serpent - improve __serpent_setkey with UBSAN") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-07-26  crypto: testmgr - add some more preemption points  (Eric Biggers, 1 file changed, -0/+6)
[ Upstream commit e63e1b0dd0003dc31f73d875907432be3a2abe5d ] Call cond_resched() after each fuzz test iteration. This avoids stall warnings if fuzz_iterations is set very high for testing purposes. While we're at it, also call cond_resched() after finishing testing each test vector. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
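The change is essentially extra cond_resched() calls in the long-running loops. A hypothetical sketch of the fuzz loop (helper and variable names are assumptions based on the surrounding testmgr patches):

        for (i = 0; i < fuzz_iterations; i++) {
                generate_random_testvec_config(&cfg, cfg_name, sizeof(cfg_name));
                err = test_hash_vec_cfg(driver, vec, vec_name, &cfg,
                                        req, desc, tsgl, hashstate);
                if (err)
                        return err;
                cond_resched();         /* avoid stall warnings on huge runs */
        }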
2019-07-14  crypto: lrw - use correct alignmask  (Eric Biggers, 1 file changed, -1/+1)
commit 20a0f9761343fba9b25ea46bd3a3e5e533d974f8 upstream. Commit c778f96bf347 ("crypto: lrw - Optimize tweak computation") incorrectly reduced the alignmask of LRW instances from '__alignof__(u64) - 1' to '__alignof__(__be32) - 1'. However, xor_tweak() and setkey() assume that the data and key, respectively, are aligned to 'be128', which has u64 alignment. Fix the alignmask to be at least '__alignof__(be128) - 1'. Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation") Cc: <stable@vger.kernel.org> # v4.20+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
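The change itself is a one-liner in the template's create() function; roughly (a sketch, surrounding variable names assumed):

        /* xor_tweak() and setkey() operate on be128 words, so require at
         * least u64 alignment rather than only __be32 alignment */
        inst->alg.base.cra_alignmask = alg->base.cra_alignmask |
                                       (__alignof__(be128) - 1);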
2019-07-10  crypto: cryptd - Fix skcipher instance memory leak  (Vincent Whitchurch, 1 file changed, -0/+1)
commit 1a0fad630e0b7cff38e7691b28b0517cfbb0633f upstream. cryptd_skcipher_free() fails to free the struct skcipher_instance allocated in cryptd_create_skcipher(), leading to a memory leak. This is detected by kmemleak on bootup on ARM64 platforms: unreferenced object 0xffff80003377b180 (size 1024): comm "cryptomgr_probe", pid 822, jiffies 4294894830 (age 52.760s) backtrace: kmem_cache_alloc_trace+0x270/0x2d0 cryptd_create+0x990/0x124c cryptomgr_probe+0x5c/0x1e8 kthread+0x258/0x318 ret_from_fork+0x10/0x1c Fixes: 4e0958d19bd8 ("crypto: cryptd - Add support for skcipher") Cc: <stable@vger.kernel.org> Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
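A sketch of the fix: the instance's free() callback has to release the instance memory itself, not only drop the skcipher spawn (names follow crypto/cryptd.c, but this is an illustration):

        static void cryptd_skcipher_free(struct skcipher_instance *inst)
        {
                struct skcipherd_instance_ctx *ctx = skcipher_instance_ctx(inst);

                crypto_drop_skcipher(&ctx->spawn);
                kfree(inst);    /* previously missing; the instance leaked */
        }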
2019-07-10  crypto: user - prevent operating on larval algorithms  (Eric Biggers, 1 file changed, -0/+3)
commit 21d4120ec6f5b5992b01b96ac484701163917b63 upstream. Michal Suchanek reported [1] that running the pcrypt_aead01 test from LTP [2] in a loop and holding Ctrl-C causes a NULL dereference of alg->cra_users.next in crypto_remove_spawns(), via crypto_del_alg(). The test repeatedly uses CRYPTO_MSG_NEWALG and CRYPTO_MSG_DELALG. The crash occurs when the instance that CRYPTO_MSG_DELALG is trying to unregister isn't a real registered algorithm, but rather is a "test larval", which is a special "algorithm" added to the algorithms list while the real algorithm is still being tested. Larvals don't have initialized cra_users, so that causes the crash. Normally pcrypt_aead01 doesn't trigger this because CRYPTO_MSG_NEWALG waits for the algorithm to be tested; however, CRYPTO_MSG_NEWALG returns early when interrupted. Everything else in the "crypto user configuration" API has this same bug too, i.e. it inappropriately allows operating on larval algorithms (though it doesn't look like the other cases can cause a crash). Fix this by making crypto_alg_match() exclude larval algorithms. [1] https://lkml.kernel.org/r/20190625071624.27039-1-msuchanek@suse.de [2] https://github.com/linux-test-project/ltp/blob/20190517/testcases/kernel/crypto/pcrypt_aead01.c Reported-by: Michal Suchanek <msuchanek@suse.de> Fixes: a38f7907b926 ("crypto: Add userspace configuration API") Cc: <stable@vger.kernel.org> # v3.2+ Cc: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
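The fix boils down to skipping larvals while scanning the algorithm list in crypto_alg_match(). A sketch of the loop, with the existing matching logic elided:

        down_read(&crypto_alg_sem);

        list_for_each_entry(q, &crypto_alg_list, cra_list) {
                /* Skip "test larvals": placeholder entries for algorithms that
                 * are still being tested. They lack initialized fields such as
                 * cra_users, so operating on them can crash. */
                if (crypto_is_larval(q))
                        continue;

                /* ... existing cru_driver_name / cru_name matching ... */
        }

        up_read(&crypto_alg_sem);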
2019-06-25  crypto: hmac - fix memory leak in hmac_init_tfm()  (Eric Biggers, 1 file changed, -1/+3)
[ Upstream commit 7829a0c1cb9c80debfb4fdb49b4d90019f2ea1ac ] When I added the sanity check of 'descsize', I missed that the child hash tfm needs to be freed if the sanity check fails. Of course this should never happen, hence the use of WARN_ON(), but it should be fixed. Fixes: e1354400b25d ("crypto: hash - fix incorrect HASH_MAX_DESCSIZE") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
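A sketch of the fix: when the WARN_ON() sanity check on the computed descsize trips, the already-allocated child shash must be released before returning (assumed shape, based on the commit description):

        parent->descsize = sizeof(struct shash_desc) +
                           crypto_shash_descsize(hash);
        if (WARN_ON(parent->descsize > HASH_MAX_DESCSIZE)) {
                crypto_free_shash(hash);        /* was missing: child tfm leaked */
                return -EINVAL;
        }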
2019-05-31  crypto: hash - fix incorrect HASH_MAX_DESCSIZE  (Eric Biggers, 1 file changed, -0/+2)
commit e1354400b25da645c4764ed6844d12f1582c3b66 upstream.

The "hmac(sha3-224-generic)" algorithm has a descsize of 368 bytes, which is greater than HASH_MAX_DESCSIZE (360) which is only enough for sha3-224-generic. The check in shash_prepare_alg() doesn't catch this because the HMAC template doesn't set descsize on the algorithms, but rather sets it on each individual HMAC transform. This causes a stack buffer overflow when SHASH_DESC_ON_STACK() is used with hmac(sha3-224-generic).

Fix it by increasing HASH_MAX_DESCSIZE to the real maximum. Also add a sanity check to hmac_init().

This was detected by the improved crypto self-tests in v5.2, by loading the tcrypt module with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y enabled. I didn't notice this bug when I ran the self-tests by requesting the algorithms via AF_ALG (i.e., not using tcrypt), probably because the stack layout differs in the two cases and that made a difference here.

KASAN report:

    BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:359 [inline]
    BUG: KASAN: stack-out-of-bounds in shash_default_import+0x52/0x80 crypto/shash.c:223
    Write of size 360 at addr ffff8880651defc8 by task insmod/3689
    CPU: 2 PID: 3689 Comm: insmod Tainted: G E 5.1.0-10741-g35c99ffa20edd #11
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
    Call Trace:
     __dump_stack lib/dump_stack.c:77 [inline]
     dump_stack+0x86/0xc5 lib/dump_stack.c:113
     print_address_description+0x7f/0x260 mm/kasan/report.c:188
     __kasan_report+0x144/0x187 mm/kasan/report.c:317
     kasan_report+0x12/0x20 mm/kasan/common.c:614
     check_memory_region_inline mm/kasan/generic.c:185 [inline]
     check_memory_region+0x137/0x190 mm/kasan/generic.c:191
     memcpy+0x37/0x50 mm/kasan/common.c:125
     memcpy include/linux/string.h:359 [inline]
     shash_default_import+0x52/0x80 crypto/shash.c:223
     crypto_shash_import include/crypto/hash.h:880 [inline]
     hmac_import+0x184/0x240 crypto/hmac.c:102
     hmac_init+0x96/0xc0 crypto/hmac.c:107
     crypto_shash_init include/crypto/hash.h:902 [inline]
     shash_digest_unaligned+0x9f/0xf0 crypto/shash.c:194
     crypto_shash_digest+0xe9/0x1b0 crypto/shash.c:211
     generate_random_hash_testvec.constprop.11+0x1ec/0x5b0 crypto/testmgr.c:1331
     test_hash_vs_generic_impl+0x3f7/0x5c0 crypto/testmgr.c:1420
     __alg_test_hash+0x26d/0x340 crypto/testmgr.c:1502
     alg_test_hash+0x22e/0x330 crypto/testmgr.c:1552
     alg_test.part.7+0x132/0x610 crypto/testmgr.c:4931
     alg_test+0x1f/0x40 crypto/testmgr.c:4952

Fixes: b68a7ec1e9a3 ("crypto: hash - Remove VLA usage")
Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: gcm - fix incompatibility between "gcm" and "gcm_base"  (Eric Biggers, 1 file changed, -23/+11)
commit f699594d436960160f6d5ba84ed4a222f20d11cd upstream. GCM instances can be created by either the "gcm" template, which only allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows choosing the ctr and ghash implementations, e.g. "gcm_base(ctr(aes-generic),ghash-generic)". However, a "gcm_base" instance prevents a "gcm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "gcm". This can be used as a denial of service. Moreover, "gcm_base" instances are never tested by the crypto self-tests, even if there are compatible "gcm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "gcm_base" instances set the same cra_name as "gcm" instances, e.g. "gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)". This requires extracting the block cipher name from the name of the ctr algorithm. It also requires starting to verify that the algorithms are really ctr and ghash, not something else entirely. But it would be bizarre if anyone were actually using non-gcm-compatible algorithms with gcm_base, so this shouldn't break anyone in practice. Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: crct10dif-generic - fix use via crypto_shash_digest()  (Eric Biggers, 1 file changed, -7/+4)
commit 307508d1072979f4435416f87936f87eaeb82054 upstream. The ->digest() method of crct10dif-generic reads the current CRC value from the shash_desc context. But this value is uninitialized, causing crypto_shash_digest() to compute the wrong result. Fix it. Probably this wasn't noticed before because lib/crc-t10dif.c only uses crypto_shash_update(), not crypto_shash_digest(). Likewise, crypto_shash_digest() is not yet tested by the crypto self-tests because those only test the ahash API which only uses shash init/update/final. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: 2d31e518a428 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework") Cc: <stable@vger.kernel.org> # v3.11+ Cc: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
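A sketch of the fix: ->digest() must start from the initial CRC value instead of reading the uninitialized per-request context (signatures assumed to match crypto/crct10dif_generic.c):

        static int chksum_digest(struct shash_desc *desc, const u8 *data,
                                 unsigned int length, u8 *out)
        {
                /* The desc context is uninitialized here; begin at CRC 0. */
                *(__u16 *)out = crc_t10dif_generic(0, data, length);
                return 0;
        }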
2019-05-22  crypto: skcipher - don't WARN on unprocessed data after slow walk step  (Eric Biggers, 1 file changed, -2/+7)
commit dcaca01a42cc2c425154a13412b4124293a6e11e upstream. skcipher_walk_done() assumes it's a bug if, after the "slow" path is executed where the next chunk of data is processed via a bounce buffer, the algorithm says it didn't process all bytes. Thus it WARNs on this. However, this can happen legitimately when the message needs to be evenly divisible into "blocks" but isn't, and the algorithm has a 'walksize' greater than the block size. For example, ecb-aes-neonbs sets 'walksize' to 128 bytes and only supports messages evenly divisible into 16-byte blocks. If, say, 17 message bytes remain but they straddle scatterlist elements, the skcipher_walk code will take the "slow" path and pass the algorithm all 17 bytes in the bounce buffer. But the algorithm will only be able to process 16 bytes, triggering the WARN. Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code already does, is the right behavior. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: ccm - fix incompatibility between "ccm" and "ccm_base"  (Eric Biggers, 1 file changed, -26/+18)
commit 6a1faa4a43f5fabf9cbeaa742d916e7b5e73120f upstream. CCM instances can be created by either the "ccm" template, which only allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base", which allows choosing the ctr and cbcmac implementations, e.g. "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". However, a "ccm_base" instance prevents a "ccm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "ccm". This can be used as a denial of service. Moreover, "ccm_base" instances are never tested by the crypto self-tests, even if there are compatible "ccm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "ccm_base" instances set the same cra_name as "ccm" instances, e.g. "ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". This requires extracting the block cipher name from the name of the ctr and cbcmac algorithms. It also requires starting to verify that the algorithms are really ctr and cbcmac using the same block cipher, not something else entirely. But it would be bizarre if anyone were actually using non-ccm-compatible algorithms with ccm_base, so this shouldn't break anyone in practice. Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: chacha20poly1305 - set cra_name correctly  (Eric Biggers, 1 file changed, -2/+2)
commit 5e27f38f1f3f45a0c938299c3a34a2d2db77165a upstream. If the rfc7539 template is instantiated with specific implementations, e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than "rfc7539(chacha20,poly1305)", then the implementation names end up included in the instance's cra_name. This is incorrect because it then prevents all users from allocating "rfc7539(chacha20,poly1305)", if the highest priority implementations of chacha20 and poly1305 were selected. Also, the self-tests aren't run on an instance allocated in this way. Fix it by setting the instance's cra_name from the underlying algorithms' actual cra_names, rather than from the requested names. This matches what other templates do. Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539") Cc: <stable@vger.kernel.org> # v4.2+ Cc: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: chacha-generic - fix use as arm64 no-NEON fallback  (Eric Biggers, 1 file changed, -1/+1)
commit 7aceaaef04eaaf6019ca159bc354d800559bba1d upstream.

The arm64 implementations of ChaCha and XChaCha are failing the extra crypto self-tests following my patches to test the !may_use_simd() code paths, which previously were untested. The problem is as follows:

When !may_use_simd(), the arm64 NEON implementations fall back to the generic implementation, which uses the skcipher_walk API to iterate through the src/dst scatterlists. Due to how the skcipher_walk API works, walk.stride is set from the skcipher_alg actually being used, which in this case is the arm64 NEON algorithm. Thus walk.stride is 5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE.

This unnecessarily large stride shouldn't cause an actual problem. However, the generic implementation computes round_down(nbytes, walk.stride). round_down() assumes the round amount is a power of 2, which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result.

This causes the following case in skcipher_walk_done() to be hit, causing a WARN() and failing the encryption operation:

        if (WARN_ON(err)) {
                /* unexpected case; didn't process all bytes */
                err = -EINVAL;
                goto finish;
        }

Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride. (Or we could replace round_down() with rounddown(), but that would add a slow division operation every time, which I think we should avoid.)

Fixes: 2fe55987b262 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed")
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
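The fix rounds partial walk chunks down to the cipher's own block size. A sketch of the corrected generic loop (helper names such as chacha_docrypt() are illustrative):

        while (walk.nbytes > 0) {
                unsigned int nbytes = walk.nbytes;

                if (nbytes < walk.total)
                        /* round to CHACHA_BLOCK_SIZE, not to walk.stride,
                         * which may not be a power of 2 here */
                        nbytes = round_down(nbytes, CHACHA_BLOCK_SIZE);

                chacha_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr,
                               nbytes, ctx->nrounds);
                err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
        }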
2019-05-22  crypto: lrw - don't access already-freed walk.iv  (Eric Biggers, 1 file changed, -1/+3)
commit aec286cd36eacfd797e3d5dab8d5d23c15d1bb5e upstream.

If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free.

Fix this in the LRW template by checking the return value of skcipher_walk_virt().

This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. When the extra self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free splat occurred during lrw(aes) testing.

Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
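The shape of the fix, per the description above (a sketch, not the literal diff):

        err = skcipher_walk_virt(&w, req, false);
        if (err)
                return err;     /* w.iv is not valid if the walk setup failed */

        iv = (__be32 *)w.iv;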
2019-05-22  crypto: salsa20 - don't access already-freed walk.iv  (Eric Biggers, 1 file changed, -1/+1)
commit edaf28e996af69222b2cb40455dbb5459c2b875a upstream. If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. salsa20-generic doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However this is more subtle than desired, and it was actually broken prior to the alignmask being removed by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup and convert to skcipher API"). Since salsa20-generic does not update the IV and does not need any IV alignment, update it to use req->iv instead of walk.iv. Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
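A sketch of the change described above (the salsa20_init() helper name is an assumption based on crypto/salsa20_generic.c):

        err = skcipher_walk_virt(&walk, req, false);

        /* Use the caller's IV: salsa20 never updates the IV, and walk.iv may
         * already be freed if skcipher_walk_virt() failed. */
        salsa20_init(state, ctx, req->iv);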
2019-04-18  crypto: lrw - Fix atomic sleep when walking skcipher  (Herbert Xu, 1 file changed, -1/+5)
When we perform a walk in the completion function, we need to ensure that it is atomic. Fixes: ac3c8f36c31d ("crypto: lrw - Do not use auxiliary buffer") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18  crypto: xts - Fix atomic sleep when walking skcipher  (Herbert Xu, 1 file changed, -1/+5)
When we perform a walk in the completion function, we need to ensure that it is atomic. Reported-by: syzbot+6f72c20560060c98b566@syzkaller.appspotmail.com Fixes: 78105c7e769b ("crypto: xts - Drop use of auxiliary buffer") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08  crypto: x86/poly1305 - fix overflow during partial reduction  (Eric Biggers, 1 file changed, -1/+43)
The x86_64 implementation of Poly1305 produces the wrong result on some inputs because poly1305_4block_avx2() incorrectly assumes that when partially reducing the accumulator, the bits carried from limb 'd4' to limb 'h0' fit in a 32-bit integer. This is true for poly1305-generic which processes only one block at a time. However, it's not true for the AVX2 implementation, which processes 4 blocks at a time and therefore can produce intermediate limbs about 4x larger. Fix it by making the relevant calculations use 64-bit arithmetic rather than 32-bit. Note that most of the carries already used 64-bit arithmetic, but the d4 -> h0 carry was different for some reason. To be safe I also made the same change to the corresponding SSE2 code, though that only operates on 1 or 2 blocks at a time. I don't think it's really needed for poly1305_block_sse2(), but it doesn't hurt because it's already x86_64 code. It *might* be needed for poly1305_2block_sse2(), but overflows aren't easy to reproduce there. This bug was originally detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. But also add a test vector which reproduces it directly (in the AVX2 case). Fixes: b1ccc8f4b631 ("crypto: poly1305 - Add a four block AVX2 variant for x86_64") Fixes: c70f4abef07a ("crypto: poly1305 - Add a SSE2 SIMD variant for x86_64") Cc: <stable@vger.kernel.org> # v4.3+ Cc: Martin Willi <martin@strongswan.org> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-08  lib/lzo: separate lzo-rle from lzo  (Dave Rodgman, 3 files changed, -3/+178)
To prevent any issues with persistent data, separate lzo-rle from lzo so that it is treated as a separate algorithm, and lzo is still available. Link: http://lkml.kernel.org/r/20190205155944.16007-3-dave.rodgman@arm.com Signed-off-by: Dave Rodgman <dave.rodgman@arm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Matt Sealey <matt.sealey@arm.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <nitingupta910@gmail.com> Cc: Richard Purdie <rpurdie@openedhand.com> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Sonny Rao <sonnyrao@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-03-05  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds, 33 files changed, -8975/+6499)
Pull crypto update from Herbert Xu:

 "API:
   - Add helper for simple skcipher modes.
   - Add helper to register multiple templates.
   - Set CRYPTO_TFM_NEED_KEY when setkey fails.
   - Require neither or both of export/import in shash.
   - AEAD decryption test vectors are now generated from encryption ones.
   - New option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS that includes random fuzzing.

  Algorithms:
   - Conversions to skcipher and helper for many templates.
   - Add more test vectors for nhpoly1305 and adiantum.

  Drivers:
   - Add crypto4xx prng support.
   - Add xcbc/cmac/ecb support in caam.
   - Add AES support for Exynos5433 in s5p.
   - Remove sha384/sha512 from artpec7 as hardware cannot do partial hash"

[ There is a merge of the Freescale SoC tree in order to pull in changes required by patches to the caam/qi2 driver. ]

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (174 commits)
  crypto: s5p - add AES support for Exynos5433
  dt-bindings: crypto: document Exynos5433 SlimSSS
  crypto: crypto4xx - add missing of_node_put after of_device_is_available
  crypto: cavium/zip - fix collision with generic cra_driver_name
  crypto: af_alg - use struct_size() in sock_kfree_s()
  crypto: caam - remove redundant likely/unlikely annotation
  crypto: s5p - update iv after AES-CBC op end
  crypto: x86/poly1305 - Clear key material from stack in SSE2 variant
  crypto: caam - generate hash keys in-place
  crypto: caam - fix DMA mapping xcbc key twice
  crypto: caam - fix hash context DMA unmap size
  hwrng: bcm2835 - fix probe as platform device
  crypto: s5p-sss - Use AES_BLOCK_SIZE define instead of number
  crypto: stm32 - drop pointless static qualifier in stm32_hash_remove()
  crypto: chelsio - Fixed Traffic Stall
  crypto: marvell - Remove set but not used variable 'ivsize'
  crypto: ccp - Update driver messages to remove some confusion
  crypto: adiantum - add 1536 and 4096-byte test vectors
  crypto: nhpoly1305 - add a test vector with len % 16 != 0
  crypto: arm/aes-ce - update IV after partial final CTR block
  ...
2019-02-28  crypto: af_alg - use struct_size() in sock_kfree_s()  (Gustavo A. R. Silva, 1 file changed, -2/+1)
Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes, in particular in the context in which this code is being used. So, change the following form:

    sizeof(*sgl) + sizeof(sgl->sg[0]) * (MAX_SGL_ENTS + 1)

to:

    struct_size(sgl, sg, MAX_SGL_ENTS + 1)

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22  crypto: adiantum - add 1536 and 4096-byte test vectors  (Eric Biggers, 1 file changed, -0/+2860)
Add 1536 and 4096-byte Adiantum test vectors so that the case where there are multiple NH hashes is tested. This is already tested by the nhpoly1305 test vectors, but it should be tested at the Adiantum level too. Moreover the 4096-byte case is especially important. As with the other Adiantum test vectors, these were generated by the reference Python implementation at https://github.com/google/adiantum and then automatically formatted for testmgr by a script. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22  crypto: nhpoly1305 - add a test vector with len % 16 != 0  (Eric Biggers, 1 file changed, -0/+144)
This is needed to test that the end of the message is zero-padded when the length is not a multiple of 16 (NH_MESSAGE_UNIT). It's already tested indirectly by the 31-byte Adiantum test vector, but it should be tested directly at the nhpoly1305 level too. As with the other nhpoly1305 test vectors, this was generated by the reference Python implementation at https://github.com/google/adiantum and then automatically formatted for testmgr by a script. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22  crypto: testmgr - add iv_out to all CTR test vectors  (Eric Biggers, 1 file changed, -0/+45)
Test that all CTR implementations update the IV buffer to contain the next counter block, aka the IV to continue the encryption/decryption of a larger message. When the length processed is a multiple of the block size, users may rely on this for chaining. When the length processed is *not* a multiple of the block size, simple chaining doesn't work. However, as noted in commit 88a3f582bea9 ("crypto: arm64/aes - don't use IV buffer to return final keystream block"), the generic CCM implementation assumes that the CTR IV is handled in some sane way, not e.g. overwritten with part of the keystream. Since this was gotten wrong once already, it's desirable to test for it. And, the most straightforward way to do this is to enforce that all CTR implementations have the same behavior as the generic implementation, which returns the *next* counter following the final partial block. This behavior also has the advantage that if someone does misuse this case for chaining, then the keystream won't be repeated. Thus, this patch makes the tests expect this behavior. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22  crypto: testmgr - add iv_out to all CBC test vectors  (Eric Biggers, 1 file changed, -0/+48)
Test that all CBC implementations update the IV buffer to contain the last ciphertext block, aka the IV to continue the encryption/decryption of a larger message. Users may rely on this for chaining. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22  crypto: testmgr - support checking skcipher output IV  (Eric Biggers, 2 files changed, -7/+11)
Allow skcipher test vectors to declare the value the IV buffer should be updated to at the end of the encryption or decryption operation. (This check actually used to be supported in testmgr, but it was never used and therefore got removed except for the AES-Keywrap special case. But it will be used by CBC and CTR now, so re-add it.) Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22  crypto: testmgr - remove extra bytes from 3DES-CTR IVs  (Eric Biggers, 1 file changed, -4/+2)
3DES only has an 8-byte block size, but the 3DES-CTR test vectors use 16-byte IVs. Remove the unused 8 bytes from the ends of the IVs. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-18  net: crypto set sk to NULL when af_alg_release.  (Mao Wenan, 1 file changed, -1/+3)
KASAN has found use-after-free in sockfs_setattr. The existed commit 6d8c50dcb029 ("socket: close race condition between sock_close() and sockfs_setattr()") is to fix this simillar issue, but it seems to ignore that crypto module forgets to set the sk to NULL after af_alg_release. KASAN report details as below: BUG: KASAN: use-after-free in sockfs_setattr+0x120/0x150 Write of size 4 at addr ffff88837b956128 by task syz-executor0/4186 CPU: 2 PID: 4186 Comm: syz-executor0 Not tainted xxx + #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014 Call Trace: dump_stack+0xca/0x13e print_address_description+0x79/0x330 ? vprintk_func+0x5e/0xf0 kasan_report+0x18a/0x2e0 ? sockfs_setattr+0x120/0x150 sockfs_setattr+0x120/0x150 ? sock_register+0x2d0/0x2d0 notify_change+0x90c/0xd40 ? chown_common+0x2ef/0x510 chown_common+0x2ef/0x510 ? chmod_common+0x3b0/0x3b0 ? __lock_is_held+0xbc/0x160 ? __sb_start_write+0x13d/0x2b0 ? __mnt_want_write+0x19a/0x250 do_fchownat+0x15c/0x190 ? __ia32_sys_chmod+0x80/0x80 ? trace_hardirqs_on_thunk+0x1a/0x1c __x64_sys_fchownat+0xbf/0x160 ? lockdep_hardirqs_on+0x39a/0x5e0 do_syscall_64+0xc8/0x580 entry_SYSCALL_64_after_hwframe+0x49/0xbe RIP: 0033:0x462589 Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007fb4b2c83c58 EFLAGS: 00000246 ORIG_RAX: 0000000000000104 RAX: ffffffffffffffda RBX: 000000000072bfa0 RCX: 0000000000462589 RDX: 0000000000000000 RSI: 00000000200000c0 RDI: 0000000000000007 RBP: 0000000000000005 R08: 0000000000001000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb4b2c846bc R13: 00000000004bc733 R14: 00000000006f5138 R15: 00000000ffffffff Allocated by task 4185: kasan_kmalloc+0xa0/0xd0 __kmalloc+0x14a/0x350 sk_prot_alloc+0xf6/0x290 sk_alloc+0x3d/0xc00 af_alg_accept+0x9e/0x670 hash_accept+0x4a3/0x650 __sys_accept4+0x306/0x5c0 __x64_sys_accept4+0x98/0x100 do_syscall_64+0xc8/0x580 entry_SYSCALL_64_after_hwframe+0x49/0xbe Freed by task 4184: __kasan_slab_free+0x12e/0x180 kfree+0xeb/0x2f0 __sk_destruct+0x4e6/0x6a0 sk_destruct+0x48/0x70 __sk_free+0xa9/0x270 sk_free+0x2a/0x30 af_alg_release+0x5c/0x70 __sock_release+0xd3/0x280 sock_close+0x1a/0x20 __fput+0x27f/0x7f0 task_work_run+0x136/0x1b0 exit_to_usermode_loop+0x1a7/0x1d0 do_syscall_64+0x461/0x580 entry_SYSCALL_64_after_hwframe+0x49/0xbe Syzkaller reproducer: r0 = perf_event_open(&(0x7f0000000000)={0x0, 0x70, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, @perf_config_ext}, 0x0, 0x0, 0xffffffffffffffff, 0x0) r1 = socket$alg(0x26, 0x5, 0x0) getrusage(0x0, 0x0) bind(r1, &(0x7f00000001c0)=@alg={0x26, 'hash\x00', 0x0, 0x0, 'sha256-ssse3\x00'}, 0x80) r2 = accept(r1, 0x0, 0x0) r3 = accept4$unix(r2, 0x0, 0x0, 0x0) r4 = dup3(r3, r0, 0x0) fchownat(r4, &(0x7f00000000c0)='\x00', 0x0, 0x0, 0x1000) Fixes: 6d8c50dcb029 ("socket: close race condition between sock_close() and sockfs_setattr()") Signed-off-by: Mao Wenan <maowenan@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
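The fix described above is small; a sketch of it (closely following the commit message, but still an illustration):

        static int af_alg_release(struct socket *sock)
        {
                if (sock->sk) {
                        sock_put(sock->sk);
                        /* keep sockfs_setattr() and friends from reaching the
                         * freed sock through sock->sk */
                        sock->sk = NULL;
                }
                return 0;
        }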
2019-02-15  crypto: export arc4 defines  (Iuliana Prodan, 1 file changed, -4/+1)
Some arc4 cipher algorithm defines show up in two places: crypto/arc4.c and drivers/crypto/bcm/cipher.h. Let's export them in a common header and update their users. Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: testmgr - check for aead_request corruption  (Eric Biggers, 1 file changed, -0/+44)
Check that algorithms do not change the aead_request structure, as users may rely on submitting the request again (e.g. after copying new data into the same source buffer) without reinitializing everything. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: testmgr - check for skcipher_request corruption  (Eric Biggers, 1 file changed, -0/+41)
Check that algorithms do not change the skcipher_request structure, as users may rely on submitting the request again (e.g. after copying new data into the same source buffer) without reinitializing everything. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: testmgr - convert hash testing to use testvec_configs  (Eric Biggers, 2 files changed, -550/+352)
Convert alg_test_hash() to use the new test framework, adding a list of testvec_configs to test by default. When the extra self-tests are enabled, randomly generated testvec_configs are tested as well. This improves hash test coverage mainly because now all algorithms have a variety of data layouts tested, whereas before each algorithm was responsible for declaring its own chunked test cases which were often missing or provided poor test coverage. The new code also tests both the MAY_SLEEP and !MAY_SLEEP cases and buffers that cross pages. This already found bugs in the hash walk code and in the arm32 and arm64 implementations of crct10dif. I removed the hash chunked test vectors that were the same as non-chunked ones, but left the ones that were unique. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: testmgr - convert aead testing to use testvec_configs  (Eric Biggers, 2 files changed, -475/+185)
Convert alg_test_aead() to use the new test framework, using the same list of testvec_configs that skcipher testing uses. This significantly improves AEAD test coverage mainly because previously there was only very limited test coverage of the possible data layouts. Now the data layouts to test are listed in one place for all algorithms and optionally are also randomly generated. In fact, only one AEAD algorithm (AES-GCM) even had a chunked test case before. This already found bugs in all the AEGIS and MORUS implementations, the x86 AES-GCM implementation, and the arm64 AES-CCM implementation. I removed the AEAD chunked test vectors that were the same as non-chunked ones, but left the ones that were unique. Note: the rewritten test code allocates an aead_request just once per algorithm rather than once per encryption/decryption, but some AEAD algorithms incorrectly change the tfm pointer in the request. It's nontrivial to fix these, so to move forward I'm temporarily working around it by resetting the tfm pointer. But they'll need to be fixed. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: testmgr - convert skcipher testing to use testvec_configs  (Eric Biggers, 2 files changed, -523/+245)
Convert alg_test_skcipher() to use the new test framework, adding a list of testvec_configs to test by default. When the extra self-tests are enabled, randomly generated testvec_configs are tested as well. This improves skcipher test coverage mainly because now all algorithms have a variety of data layouts tested, whereas before each algorithm was responsible for declaring its own chunked test cases which were often missing or provided poor test coverage. The new code also tests both the MAY_SLEEP and !MAY_SLEEP cases, different IV alignments, and buffers that cross pages. This has already found a bug in the arm64 ctr-aes-neonbs algorithm. It would have easily found many past bugs. I removed the skcipher chunked test vectors that were the same as non-chunked ones, but left the ones that were unique. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: testmgr - implement random testvec_config generation  (Eric Biggers, 1 file changed, -0/+117)
Add functions that generate a random testvec_config, in preparation for using it for randomized fuzz tests. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS  (Eric Biggers, 2 files changed, -0/+24)
To achieve more comprehensive crypto test coverage, I'd like to add fuzz tests that use random data layouts and request flags. To be most effective these tests should be part of testmgr, so they automatically run on every algorithm registered with the crypto API. However, they will take much longer to run than the current tests and therefore will only really be intended to be run by developers, whereas the current tests have a wider audience. Therefore, add a new kconfig option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS that can be set by developers to enable these extra, expensive tests. Similar to the regular tests, also add a module parameter cryptomgr.noextratests to support disabling the tests. Finally, another module parameter cryptomgr.fuzz_iterations is added to control how many iterations the fuzz tests do. Note: for now setting this to 0 will be equivalent to cryptomgr.noextratests=1. But I opted for separate parameters to provide more flexibility to add other types of tests under the "extra tests" category in the future. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
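Illustrative declarations for the two module parameters mentioned above (the default value of fuzz_iterations is an assumption):

        static bool noextratests;
        module_param(noextratests, bool, 0644);
        MODULE_PARM_DESC(noextratests, "disable expensive crypto self-tests");

        static unsigned int fuzz_iterations = 100;
        module_param(fuzz_iterations, uint, 0644);
        MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations");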
2019-02-08  crypto: testmgr - add testvec_config struct and helper functions  (Eric Biggers, 1 file changed, -15/+437)
Crypto algorithms must produce the same output for the same input regardless of data layout, i.e. how the src and dst scatterlists are divided into chunks and how each chunk is aligned. Request flags such as CRYPTO_TFM_REQ_MAY_SLEEP must not affect the result either. However, testing of this currently has many gaps. For example, individual algorithms are responsible for providing their own chunked test vectors. But many don't bother to do this or test only one or two cases, providing poor test coverage. Also, other things such as misaligned IVs and CRYPTO_TFM_REQ_MAY_SLEEP are never tested at all. Test code is also duplicated between the chunked and non-chunked cases, making it difficult to make other improvements. To improve the situation, this patch series basically moves the chunk descriptions into the testmgr itself so that they are shared by all algorithms. However, it's done in an extensible way via a new struct 'testvec_config', which describes not just the scaled chunk lengths but also all other aspects of the crypto operation besides the data itself such as the buffer alignments, the request flags, whether the operation is in-place or not, the IV alignment, and for hash algorithms when to do each update() and when to use finup() vs. final() vs. digest(). Then, this patch series makes skcipher, aead, and hash algorithms be tested against a list of default testvec_configs, replacing the current test code. This improves overall test coverage, without reducing test performance too much. Note that the test vectors themselves are not changed, except for removing the chunk lists. This series also adds randomized fuzz tests, enabled by a new kconfig option intended for developer use only, where skcipher, aead, and hash algorithms are tested against many randomly generated testvec_configs. This provides much more comprehensive test coverage. These improved tests have already exposed many bugs. To start it off, this initial patch adds the testvec_config and various helper functions that will be used by the skcipher, aead, and hash test code that will be converted to use the new testvec_config framework. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: ahash - fix another early termination in hash walk  (Eric Biggers, 1 file changed, -7/+7)
Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and "michael_mic", fail the improved hash tests because they sometimes produce the wrong digest. The bug is that in the case where a scatterlist element crosses pages, not all the data is actually hashed because the scatterlist walk terminates too early. This happens because the 'nbytes' variable in crypto_hash_walk_done() is assigned the number of bytes remaining in the page, then later interpreted as the number of bytes remaining in the scatterlist element. Fix it. Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: morus - fix handling chunked inputs  (Eric Biggers, 2 files changed, -12/+14)
The generic MORUS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them. Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08  crypto: aegis - fix handling chunked inputs  (Eric Biggers, 3 files changed, -21/+21)
The generic AEGIS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them. Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
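The corrected walk loop only treats a short chunk as final when no more data remains, otherwise rounding it down to the stride rather than assuming end-of-message. A sketch of the pattern (crypt_chunk() stands in for the per-algorithm helper):

        while (walk.nbytes) {
                unsigned int nbytes = walk.nbytes;

                if (nbytes < walk.total)
                        nbytes = round_down(nbytes, walk.stride);

                crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
                            nbytes);

                skcipher_walk_done(&walk, walk.nbytes - nbytes);
        }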
2019-02-08  crypto: testmgr - use kmemdup  (Christopher Diaz Riveros, 1 file changed, -6/+3)
Fixes Coccinelle alerts:

    /crypto/testmgr.c:2112:13-20: WARNING opportunity for kmemdup
    /crypto/testmgr.c:2130:13-20: WARNING opportunity for kmemdup
    /crypto/testmgr.c:2152:9-16: WARNING opportunity for kmemdup

Signed-off-by: Christopher Diaz Riveros <chrisadr@gentoo.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-01  crypto: testmgr - mark crc32 checksum as FIPS allowed  (Milan Broz, 1 file changed, -0/+1)
The CRC32 is not a cryptographic hash algorithm, so the FIPS restrictions should not apply to it. (The CRC32C variant is already allowed.)

This CRC32 variant is used in the dm-crypt legacy TrueCrypt IV implementation (tcw); detected by a cryptsetup test suite failure in FIPS mode.

Signed-off-by: Milan Broz <gmazyland@gmail.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-01  crypto: testmgr - skip crc32c context test for ahash algorithms  (Eric Biggers, 1 file changed, -4/+10)
Instantiating "cryptd(crc32c)" causes a crypto self-test failure because the crypto_alloc_shash() in alg_test_crc32c() fails. This is because cryptd(crc32c) is an ahash algorithm, not a shash algorithm; so it can only be accessed through the ahash API, unlike shash algorithms which can be accessed through both the ahash and shash APIs. As the test is testing the shash descriptor format which is only applicable to shash algorithms, skip it for ahash algorithms. (Note that it's still important to fix crypto self-test failures even for weird algorithm instantiations like cryptd(crc32c) that no one would really use; in fips_enabled mode unprivileged users can use them to panic the kernel, and also they prevent treating a crypto self-test failure as a bug when fuzzing the kernel.) Fixes: 8e3ee85e68c5 ("crypto: crc32c - Test descriptor context format") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
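A sketch of the skip logic in alg_test_crc32c() (assumed shape; treating -ENOENT from crypto_alloc_shash() as "only an ahash implementation exists" is an assumption):

        tfm = crypto_alloc_shash(driver, type, mask);
        if (IS_ERR(tfm)) {
                if (PTR_ERR(tfm) == -ENOENT) {
                        /* This crc32c implementation is only available via the
                         * ahash API; the shash descriptor-format test below
                         * does not apply to it. */
                        return 0;
                }
                pr_err("alg: crc32c: Failed to load transform for %s: %ld\n",
                       driver, PTR_ERR(tfm));
                return PTR_ERR(tfm);
        }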
2019-02-01  crypto: seqiv - Use kmemdup in seqiv_aead_encrypt()  (YueHaibing, 1 file changed, -4/+3)
Use kmemdup rather than duplicating its implementation Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-25  crypto: clarify name of WEAK_KEY request flag  (Eric Biggers, 3 files changed, -11/+11)
CRYPTO_TFM_REQ_WEAK_KEY confuses newcomers to the crypto API because it sounds like it is requesting a weak key. Actually, it is requesting that weak keys be forbidden (for algorithms that have the notion of "weak keys"; currently only DES and XTS do). Also it is only one letter away from CRYPTO_TFM_RES_WEAK_KEY, with which it can be easily confused. (This in fact happened in the UX500 driver, though just in some debugging messages.) Therefore, make the intent clear by renaming it to CRYPTO_TFM_REQ_FORBID_WEAK_KEYS. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-25  crypto: chacha20poly1305 - use template array registering API to simplify the code  (Xiongfeng Wang, 1 file changed, -23/+14)
Use crypto template array registering API to simplify the code. Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-25  crypto: ctr - use template array registering API to simplify the code  (Xiongfeng Wang, 1 file changed, -28/+14)
Use crypto template array registering API to simplify the code. Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
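For reference, a sketch of what the template-array registration pattern looks like for the ctr module (the create-function names are assumptions):

        static struct crypto_template crypto_ctr_tmpls[] = {
                {
                        .name = "ctr",
                        .create = crypto_ctr_create,
                        .module = THIS_MODULE,
                }, {
                        .name = "rfc3686",
                        .create = crypto_rfc3686_create,
                        .module = THIS_MODULE,
                },
        };

        static int __init crypto_ctr_module_init(void)
        {
                return crypto_register_templates(crypto_ctr_tmpls,
                                                 ARRAY_SIZE(crypto_ctr_tmpls));
        }

        static void __exit crypto_ctr_module_exit(void)
        {
                crypto_unregister_templates(crypto_ctr_tmpls,
                                            ARRAY_SIZE(crypto_ctr_tmpls));
        }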