path: root/crypto
2018-07-19  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds; 1 file changed, -1/+3)
Pull crypto fix from Herbert Xu:
 "This fixes an allocation error-path bug in af_alg discovered by syzkaller"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: af_alg - Initialize sg_num_bytes in error code path
2018-07-13  crypto: af_alg - Initialize sg_num_bytes in error code path  (Stephan Mueller; 1 file changed, -1/+3)
The RX SGL currently being processed is already registered with the RX SGL tracking list to support proper cleanup. The cleanup code path uses the sg_num_bytes variable, which must therefore always be initialized, even in the error code path. Signed-off-by: Stephan Mueller <smueller@chronox.de> Reported-by: syzbot+9c251bdd09f83b92ba95@syzkaller.appspotmail.com #syz test: https://github.com/google/kmsan.git master CC: <stable@vger.kernel.org> #4.14 Fixes: e870456d8e7c ("crypto: algif_skcipher - overhaul memory management") Fixes: d887c52d6ae4 ("crypto: algif_aead - overhaul memory management") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
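A hedged sketch of the pattern this entry describes, based on the af_alg recvmsg path it names; the surrounding code is simplified and only the sg_num_bytes handling reflects the described change:

    /* The RX SGL was already linked into the tracking list, so the cleanup
     * code will read sg_num_bytes even if this iteration bails out early. */
    err = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, seglen);
    if (err < 0) {
            rsgl->sg_num_bytes = 0;   /* keep the field valid for cleanup */
            return err;
    }
    rsgl->sg_num_bytes = err;         /* normal path: bytes actually mapped */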
2018-06-28  Revert changes to convert to ->poll_mask() and aio IOCB_CMD_POLL  (Linus Torvalds; 3 files changed, -7/+14)
The poll() changes were not well thought out, and completely unexplained. They also caused a huge performance regression, because "->poll()" was no longer a trivial file operation that just called down to the underlying file operations, but instead did at least two indirect calls. Indirect calls are sadly slow now with the Spectre mitigation, but the performance problem could at least be largely mitigated by changing the "->get_poll_head()" operation to just have a per-file-descriptor pointer to the poll head instead. That gets rid of one of the new indirections. But that doesn't fix the new complexity that is completely unwarranted for the regular case. The (undocumented) reason for the poll() changes was some alleged AIO poll race fixing, but we don't make the common case slower and more complex for some uncommon special case, so this all really needs way more explanations and most likely a fundamental redesign. [ This revert is a revert of about 30 different commits, not reverted individually because that would just be unnecessarily messy - Linus ] Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-06-26  Merge branch 'fixes-v4.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security  (Linus Torvalds; 1 file changed, -0/+9)
Pull security subsystem fixes from James Morris:

 - Smack: fix a regression caused by 1bbc55131e5
 - X.509: fix a (usually un-seen) bug in RSA signature parsing

* 'fixes-v4.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
  X.509: unpack RSA signatureValue field from BIT STRING
  Smack: Mark inode instant in smack_task_to_inode
2018-06-25  X.509: unpack RSA signatureValue field from BIT STRING  (Maciej S. Szmigiero; 1 file changed, -0/+9)
The signatureValue field of an X.509 certificate is encoded as a BIT STRING. For RSA signatures this BIT STRING is of the so-called primitive subtype, which contains a u8 prefix indicating the count of unused bits in the encoding. We have to strip this prefix from the signature data, just as we already do for key data in the x509_extract_key_data() function. This wasn't noticed earlier because the prefix byte is zero for RSA key sizes divisible by 8. Since BIT STRING is a big-endian encoding, adding zero prefixes has no bearing on its value. The signature length, however, was incorrect, which is a problem for RSA implementations that need it to be exactly correct (like AMD CCP). Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name> Fixes: c26fd69fa009 ("X.509: Add a crypto key parser for binary (DER) X.509 certificates") Cc: stable@vger.kernel.org Signed-off-by: James Morris <james.morris@microsoft.com>
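A rough illustration of the parsing rule this entry describes; the function name and error handling are hypothetical, not the actual x509_cert_parser code:

    /* A DER-encoded BIT STRING value starts with one byte giving the number
     * of unused bits. For an RSA signature that byte must be 0, and the
     * signature proper is everything after it, so both the data pointer and
     * the reported length have to be adjusted. */
    static int unpack_rsa_sig(const u8 *value, size_t vlen,
                              const u8 **sig, size_t *siglen)
    {
            if (vlen < 1 || value[0] != 0)
                    return -EBADMSG;      /* RSA signatures are whole bytes */
            *sig = value + 1;             /* skip the unused-bits prefix */
            *siglen = vlen - 1;           /* exclude the prefix from the length */
            return 0;
    }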
2018-06-24  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds; 2 files changed, -2/+3)
Pull crypto fixes from Herbert Xu:

 - Fix use after free in chtls
 - Fix RBP breakage in sha3
 - Fix use after free in hwrng_unregister
 - Fix overread in morus640
 - Move sleep out of kernel_neon in arm64/aes-blk

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  hwrng: core - Always drop the RNG in hwrng_unregister()
  crypto: morus640 - Fix out-of-bounds access
  crypto: don't optimize keccakf()
  crypto: arm64/aes-blk - fix and move skcipher_walk_done out of kernel_neon_begin, _end
  crypto: chtls - use after free in chtls_pt_recvmsg()
2018-06-16  docs: Fix some broken references  (Mauro Carvalho Chehab; 2 files changed, -2/+2)
As we move stuff around, some doc references are broken. Fix some of them via this script: ./scripts/documentation-file-ref-check --fix Manually checked if the produced result is valid, removing a few false-positives. Acked-by: Takashi Iwai <tiwai@suse.de> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Stephen Boyd <sboyd@kernel.org> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Coly Li <colyli@suse.de> Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Acked-by: Jonathan Corbet <corbet@lwn.net>
2018-06-15  crypto: morus640 - Fix out-of-bounds access  (Ondrej Mosnáček; 1 file changed, -1/+2)
We must load the block from the temporary variable here, not directly from the input. Also add forgotten zeroing-out of the uninitialized part of the temporary block (as is done correctly in morus1280.c). Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations") Reported-by: syzbot+1fafa9c4cf42df33f716@syzkaller.appspotmail.com Reported-by: syzbot+d82643ba80bf6937cd44@syzkaller.appspotmail.com Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
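A hedged sketch of the tail-block handling this entry describes; the state type, block size constant, and process_block() helper are placeholders, only the memset/memcpy ordering and the "operate on the temporary" rule reflect the described change:

    static void crypt_tail(struct morus_state *state,      /* placeholder type */
                           const u8 *src, unsigned int len) /* len < block size */
    {
            u8 tmp[MORUS_BLOCK_SIZE];                       /* placeholder size */

            memset(tmp, 0, sizeof(tmp));   /* zero the unused tail of the block */
            memcpy(tmp, src, len);         /* only 'len' input bytes are valid  */
            process_block(state, tmp);     /* read from the temporary, never
                                            * directly from the short 'src'     */
    }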
2018-06-15  crypto: don't optimize keccakf()  (Dmitry Vyukov; 1 file changed, -1/+1)
keccakf() is the only function in the kernel that uses the __optimize() macro. __optimize() breaks the frame pointer unwinder because the optimized code uses RBP, and, amusingly, it always led to degraded performance as well: gcc does not inline across different optimization levels, so keccakf() wasn't inlined into its callers and keccakf_round() wasn't inlined into keccakf(). Drop __optimize() to resolve both problems. Signed-off-by: Dmitry Vyukov <dvyukov@google.com> Fixes: 83dee2ce1ae7 ("crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize") Reported-by: syzbot+37035ccfa9a0a017ffcf@syzkaller.appspotmail.com Reported-by: syzbot+e073e4740cfbb3ae200b@syzkaller.appspotmail.com Cc: linux-crypto@vger.kernel.org Cc: "David S. Miller" <davem@davemloft.net> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-06-13  treewide: Use array_size() in sock_kmalloc()  (Kees Cook; 2 files changed, -3/+4)
The sock_kmalloc() function has no 2-factor argument form, so multiplication factors need to be wrapped in array_size(). This patch replaces cases of: sock_kmalloc(handle, a * b, gfp) with: sock_kmalloc(handle, array_size(a, b), gfp) as well as handling cases of: sock_kmalloc(handle, a * b * c, gfp) with: sock_kmalloc(handle, array3_size(a, b, c), gfp) This does, however, attempt to ignore constant size factors like: sock_kmalloc(handle, 4 * 1024, gfp) though any constants defined via macros get caught up in the conversion. Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant. The Coccinelle script used for this was: // Fix redundant parens around sizeof(). @@ expression HANDLE; type TYPE; expression THING, E; @@ ( sock_kmalloc(HANDLE, - (sizeof(TYPE)) * E + sizeof(TYPE) * E , ...) | sock_kmalloc(HANDLE, - (sizeof(THING)) * E + sizeof(THING) * E , ...) ) // Drop single-byte sizes and redundant parens. @@ expression HANDLE; expression COUNT; typedef u8; typedef __u8; @@ ( sock_kmalloc(HANDLE, - sizeof(u8) * (COUNT) + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(__u8) * (COUNT) + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(char) * (COUNT) + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(unsigned char) * (COUNT) + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(u8) * COUNT + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(__u8) * COUNT + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(char) * COUNT + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(unsigned char) * COUNT + COUNT , ...) ) // 2-factor product with sizeof(type/expression) and identifier or constant. @@ expression HANDLE; type TYPE; expression THING; identifier COUNT_ID; constant COUNT_CONST; @@ ( sock_kmalloc(HANDLE, - sizeof(TYPE) * (COUNT_ID) + array_size(COUNT_ID, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * COUNT_ID + array_size(COUNT_ID, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * (COUNT_CONST) + array_size(COUNT_CONST, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * COUNT_CONST + array_size(COUNT_CONST, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * (COUNT_ID) + array_size(COUNT_ID, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * COUNT_ID + array_size(COUNT_ID, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * (COUNT_CONST) + array_size(COUNT_CONST, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * COUNT_CONST + array_size(COUNT_CONST, sizeof(THING)) , ...) ) // 2-factor product, only identifiers. @@ expression HANDLE; identifier SIZE, COUNT; @@ sock_kmalloc(HANDLE, - SIZE * COUNT + array_size(COUNT, SIZE) , ...) // 3-factor product with 1 sizeof(type) or sizeof(expression), with // redundant parens removed. @@ expression HANDLE; expression THING; identifier STRIDE, COUNT; type TYPE; @@ ( sock_kmalloc(HANDLE, - sizeof(TYPE) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) 
| sock_kmalloc(HANDLE, - sizeof(THING) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) ) // 3-factor product with 2 sizeof(variable), with redundant parens removed. @@ expression HANDLE; expression THING1, THING2; identifier COUNT; type TYPE1, TYPE2; @@ ( sock_kmalloc(HANDLE, - sizeof(TYPE1) * sizeof(TYPE2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) ) // 3-factor product, only identifiers, with redundant parens removed. @@ expression HANDLE; identifier STRIDE, SIZE, COUNT; @@ ( sock_kmalloc(HANDLE, - (COUNT) * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - COUNT * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - COUNT * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - (COUNT) * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - COUNT * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - (COUNT) * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - (COUNT) * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - COUNT * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) ) // Any remaining multi-factor products, first at least 3-factor products // when they're not all constants... @@ expression HANDLE; expression E1, E2, E3; constant C1, C2, C3; @@ ( sock_kmalloc(HANDLE, C1 * C2 * C3, ...) | sock_kmalloc(HANDLE, - E1 * E2 * E3 + array3_size(E1, E2, E3) , ...) ) // And then all remaining 2 factors products when they're not all constants. @@ expression HANDLE; expression E1, E2; constant C1, C2; @@ ( sock_kmalloc(HANDLE, C1 * C2, ...) | sock_kmalloc(HANDLE, - E1 * E2 + array_size(E1, E2) , ...) ) Signed-off-by: Kees Cook <keescook@chromium.org>
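The basic shape of the conversion this entry performs, using a hypothetical element count and type; array_size() saturates rather than wrapping, so an overflow turns into a failed allocation instead of a short buffer:

    /* before: the open-coded multiply can wrap around on 32-bit size_t math */
    ptr = sock_kmalloc(sk, n * sizeof(struct foo), GFP_KERNEL);

    /* after: array_size() from <linux/overflow.h> saturates to SIZE_MAX on
     * overflow, so sock_kmalloc() fails instead of returning an undersized
     * allocation */
    ptr = sock_kmalloc(sk, array_size(n, sizeof(struct foo)), GFP_KERNEL);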
2018-06-13  treewide: kmalloc() -> kmalloc_array()  (Kees Cook; 1 file changed, -1/+2)
The kmalloc() function has a 2-factor argument form, kmalloc_array(). This patch replaces cases of: kmalloc(a * b, gfp) with: kmalloc_array(a * b, gfp) as well as handling cases of: kmalloc(a * b * c, gfp) with: kmalloc(array3_size(a, b, c), gfp) as it's slightly less ugly than: kmalloc_array(array_size(a, b), c, gfp) This does, however, attempt to ignore constant size factors like: kmalloc(4 * 1024, gfp) though any constants defined via macros get caught up in the conversion. Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant. The tools/ directory was manually excluded, since it has its own implementation of kmalloc(). The Coccinelle script used for this was: // Fix redundant parens around sizeof(). @@ type TYPE; expression THING, E; @@ ( kmalloc( - (sizeof(TYPE)) * E + sizeof(TYPE) * E , ...) | kmalloc( - (sizeof(THING)) * E + sizeof(THING) * E , ...) ) // Drop single-byte sizes and redundant parens. @@ expression COUNT; typedef u8; typedef __u8; @@ ( kmalloc( - sizeof(u8) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(__u8) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(char) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(unsigned char) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(u8) * COUNT + COUNT , ...) | kmalloc( - sizeof(__u8) * COUNT + COUNT , ...) | kmalloc( - sizeof(char) * COUNT + COUNT , ...) | kmalloc( - sizeof(unsigned char) * COUNT + COUNT , ...) ) // 2-factor product with sizeof(type/expression) and identifier or constant. @@ type TYPE; expression THING; identifier COUNT_ID; constant COUNT_CONST; @@ ( - kmalloc + kmalloc_array ( - sizeof(TYPE) * (COUNT_ID) + COUNT_ID, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * COUNT_ID + COUNT_ID, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * (COUNT_CONST) + COUNT_CONST, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * COUNT_CONST + COUNT_CONST, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * (COUNT_ID) + COUNT_ID, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * COUNT_ID + COUNT_ID, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * (COUNT_CONST) + COUNT_CONST, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * COUNT_CONST + COUNT_CONST, sizeof(THING) , ...) ) // 2-factor product, only identifiers. @@ identifier SIZE, COUNT; @@ - kmalloc + kmalloc_array ( - SIZE * COUNT + COUNT, SIZE , ...) // 3-factor product with 1 sizeof(type) or sizeof(expression), with // redundant parens removed. @@ expression THING; identifier STRIDE, COUNT; type TYPE; @@ ( kmalloc( - sizeof(TYPE) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(TYPE) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(TYPE) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(TYPE) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(THING) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kmalloc( - sizeof(THING) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kmalloc( - sizeof(THING) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kmalloc( - sizeof(THING) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) ) // 3-factor product with 2 sizeof(variable), with redundant parens removed. 
@@ expression THING1, THING2; identifier COUNT; type TYPE1, TYPE2; @@ ( kmalloc( - sizeof(TYPE1) * sizeof(TYPE2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | kmalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | kmalloc( - sizeof(THING1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kmalloc( - sizeof(THING1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kmalloc( - sizeof(TYPE1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) | kmalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) ) // 3-factor product, only identifiers, with redundant parens removed. @@ identifier STRIDE, SIZE, COUNT; @@ ( kmalloc( - (COUNT) * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - (COUNT) * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - (COUNT) * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - (COUNT) * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) ) // Any remaining multi-factor products, first at least 3-factor products, // when they're not all constants... @@ expression E1, E2, E3; constant C1, C2, C3; @@ ( kmalloc(C1 * C2 * C3, ...) | kmalloc( - (E1) * E2 * E3 + array3_size(E1, E2, E3) , ...) | kmalloc( - (E1) * (E2) * E3 + array3_size(E1, E2, E3) , ...) | kmalloc( - (E1) * (E2) * (E3) + array3_size(E1, E2, E3) , ...) | kmalloc( - E1 * E2 * E3 + array3_size(E1, E2, E3) , ...) ) // And then all remaining 2 factors products when they're not all constants, // keeping sizeof() as the second factor argument. @@ expression THING, E1, E2; type TYPE; constant C1, C2, C3; @@ ( kmalloc(sizeof(THING) * C2, ...) | kmalloc(sizeof(TYPE) * C2, ...) | kmalloc(C1 * C2 * C3, ...) | kmalloc(C1 * C2, ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * (E2) + E2, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * E2 + E2, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * (E2) + E2, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * E2 + E2, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - (E1) * E2 + E1, E2 , ...) | - kmalloc + kmalloc_array ( - (E1) * (E2) + E1, E2 , ...) | - kmalloc + kmalloc_array ( - E1 * E2 + E1, E2 , ...) ) Signed-off-by: Kees Cook <keescook@chromium.org>
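The before/after shape of this conversion, with a hypothetical count and buffer; the two-factor form moves to kmalloc_array(), while three factors keep kmalloc() with the explicit multiplication wrapped in array3_size():

    /* before: count * sizeof(*buf) can overflow and silently under-allocate */
    buf = kmalloc(count * sizeof(*buf), GFP_KERNEL);

    /* after: kmalloc_array() returns NULL if the multiplication would overflow */
    buf = kmalloc_array(count, sizeof(*buf), GFP_KERNEL);

    /* three factors: keep kmalloc(), but compute the size with array3_size() */
    grid = kmalloc(array3_size(rows, cols, sizeof(*grid)), GFP_KERNEL);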
2018-06-07  Merge tag 'overflow-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux  (Linus Torvalds; 1 file changed, -2/+2)
Pull overflow updates from Kees Cook:
 "This adds the new overflow checking helpers and adds them to the 2-factor argument allocators. And this adds the saturating size helpers and does a treewide replacement for the struct_size() usage. Additionally this adds the overflow testing modules to make sure everything works.

  I'm still working on the treewide replacements for allocators with "simple" multiplied arguments: *alloc(a * b, ...) -> *alloc_array(a, b, ...) and *zalloc(a * b, ...) -> *calloc(a, b, ...) as well as the more complex cases, but that's separable from this portion of the series. I expect to have the rest sent before -rc1 closes; there are a lot of messy cases to clean up.

  Summary:
   - Introduce arithmetic overflow test helper functions (Rasmus)
   - Use overflow helpers in 2-factor allocators (Kees, Rasmus)
   - Introduce overflow test module (Rasmus, Kees)
   - Introduce saturating size helper functions (Matthew, Kees)
   - Treewide use of struct_size() for allocators (Kees)"

* tag 'overflow-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  treewide: Use struct_size() for devm_kmalloc() and friends
  treewide: Use struct_size() for vmalloc()-family
  treewide: Use struct_size() for kmalloc()-family
  device: Use overflow helpers for devm_kmalloc()
  mm: Use overflow helpers in kvmalloc()
  mm: Use overflow helpers in kmalloc_array*()
  test_overflow: Add memory allocation overflow tests
  overflow.h: Add allocation size calculation helpers
  test_overflow: Report test failures
  test_overflow: macrofy some more, do more tests for free
  lib: add runtime test of check_*_overflow functions
  compiler.h: enable builtin overflow checkers and add fallback code
2018-06-06  treewide: Use struct_size() for devm_kmalloc() and friends  (Kees Cook; 1 file changed, -2/+2)
Replaces open-coded struct size calculations with struct_size() for devm_*, f2fs_*, and sock_* allocations. Automatically generated (and manually adjusted) from the following Coccinelle script: // Direct reference to struct field. @@ identifier alloc =~ "devm_kmalloc|devm_kzalloc|sock_kmalloc|f2fs_kmalloc|f2fs_kzalloc"; expression HANDLE; expression GFP; identifier VAR, ELEMENT; expression COUNT; @@ - alloc(HANDLE, sizeof(*VAR) + COUNT * sizeof(*VAR->ELEMENT), GFP) + alloc(HANDLE, struct_size(VAR, ELEMENT, COUNT), GFP) // mr = kzalloc(sizeof(*mr) + m * sizeof(mr->map[0]), GFP_KERNEL); @@ identifier alloc =~ "devm_kmalloc|devm_kzalloc|sock_kmalloc|f2fs_kmalloc|f2fs_kzalloc"; expression HANDLE; expression GFP; identifier VAR, ELEMENT; expression COUNT; @@ - alloc(HANDLE, sizeof(*VAR) + COUNT * sizeof(VAR->ELEMENT[0]), GFP) + alloc(HANDLE, struct_size(VAR, ELEMENT, COUNT), GFP) // Same pattern, but can't trivially locate the trailing element name, // or variable name. @@ identifier alloc =~ "devm_kmalloc|devm_kzalloc|sock_kmalloc|f2fs_kmalloc|f2fs_kzalloc"; expression HANDLE; expression GFP; expression SOMETHING, COUNT, ELEMENT; @@ - alloc(HANDLE, sizeof(SOMETHING) + COUNT * sizeof(ELEMENT), GFP) + alloc(HANDLE, CHECKME_struct_size(&SOMETHING, ELEMENT, COUNT), GFP) Signed-off-by: Kees Cook <keescook@chromium.org>
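The kzalloc example from this entry's own Coccinelle comment, rewritten the same way for a devm_* allocation; the struct and variable names are the illustrative ones from that example, not a specific call site:

    struct mr_table {
            int            count;
            struct entry   map[];    /* trailing flexible array member */
    };

    /* before: open-coded "header plus m trailing elements" size calculation */
    mr = devm_kzalloc(dev, sizeof(*mr) + m * sizeof(mr->map[0]), GFP_KERNEL);

    /* after: struct_size() computes the same size with overflow checking */
    mr = devm_kzalloc(dev, struct_size(mr, map, m), GFP_KERNEL);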
2018-06-06  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds; 28 files changed, -11945/+10639)
Pull crypto updates from Herbert Xu:

 "API:
   - Decryption test vectors are now automatically generated from encryption test vectors.

  Algorithms:
   - Fix unaligned access issues in crc32/crc32c.
   - Add zstd compression algorithm.
   - Add AEGIS.
   - Add MORUS.

  Drivers:
   - Add accelerated AEGIS/MORUS on x86.
   - Add accelerated SM4 on arm64.
   - Removed x86 assembly salsa implementation as it is slower than C.
   - Add authenc(hmac(sha*), cbc(aes)) support in inside-secure.
   - Add ctr(aes) support in crypto4xx.
   - Add hardware key support in ccree.
   - Add support for new Centaur CPU in via-rng"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (112 commits)
  crypto: chtls - free beyond end rspq_skb_cache
  crypto: chtls - kbuild warnings
  crypto: chtls - dereference null variable
  crypto: chtls - wait for memory sendmsg, sendpage
  crypto: chtls - key len correction
  crypto: salsa20 - Revert "crypto: salsa20 - export generic helpers"
  crypto: x86/salsa20 - remove x86 salsa20 implementations
  crypto: ccp - Add GET_ID SEV command
  crypto: ccp - Add DOWNLOAD_FIRMWARE SEV command
  crypto: qat - Add MODULE_FIRMWARE for all qat drivers
  crypto: ccree - silence debug prints
  crypto: ccree - better clock handling
  crypto: ccree - correct host regs offset
  crypto: chelsio - Remove separate buffer used for DMA map B0 block in CCM
  crypt: chelsio - Send IV as Immediate for cipher algo
  crypto: chelsio - Return -ENOSPC for transient busy indication.
  crypto: caam/qi - fix warning in init_cgr()
  crypto: caam - fix rfc4543 descriptors
  crypto: caam - fix MC firmware detection
  crypto: clarify licensing of OpenSSL asm code
  ...
2018-06-04  Merge branch 'work.aio-1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 5 files changed, -18/+7)
Pull aio updates from Al Viro:
 "Majority of AIO stuff this cycle. aio-fsync and aio-poll, mostly.

  The only thing I'm holding back for a day or so is Adam's aio ioprio - his last-minute fixup is trivial (missing stub in !CONFIG_BLOCK case), but let it sit in -next for decency sake..."

* 'work.aio-1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (46 commits)
  aio: sanitize the limit checking in io_submit(2)
  aio: fold do_io_submit() into callers
  aio: shift copyin of iocb into io_submit_one()
  aio_read_events_ring(): make a bit more readable
  aio: all callers of aio_{read,write,fsync,poll} treat 0 and -EIOCBQUEUED the same way
  aio: take list removal to (some) callers of aio_complete()
  aio: add missing break for the IOCB_CMD_FDSYNC case
  random: convert to ->poll_mask
  timerfd: convert to ->poll_mask
  eventfd: switch to ->poll_mask
  pipe: convert to ->poll_mask
  crypto: af_alg: convert to ->poll_mask
  net/rxrpc: convert to ->poll_mask
  net/iucv: convert to ->poll_mask
  net/phonet: convert to ->poll_mask
  net/nfc: convert to ->poll_mask
  net/caif: convert to ->poll_mask
  net/bluetooth: convert to ->poll_mask
  net/sctp: convert to ->poll_mask
  net/tipc: convert to ->poll_mask
  ...
2018-05-30  crypto: salsa20 - Revert "crypto: salsa20 - export generic helpers"  (Eric Biggers; 1 file changed, -7/+13)
This reverts commit eb772f37ae8163a89e28a435f6a18742ae06653b, as now the x86 Salsa20 implementation has been removed and the generic helpers are no longer needed outside of salsa20_generic.c. We could keep this just in case someone else wants to add a new optimized Salsa20 implementation. But given that we have ChaCha20 now too, I think it's unlikely. And this can always be reverted back. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: x86/salsa20 - remove x86 salsa20 implementations  (Eric Biggers; 1 file changed, -28/+0)
The x86 assembly implementations of Salsa20 use the frame base pointer register (%ebp or %rbp), which breaks frame pointer convention and breaks stack traces when unwinding from an interrupt in the crypto code. Recent (v4.10+) kernels will warn about this, e.g. WARNING: kernel stack regs at 00000000a8291e69 in syzkaller047086:4677 has bad 'bp' value 000000001077994c [...] But after looking into it, I believe there's very little reason to still retain the x86 Salsa20 code. First, these are *not* vectorized (SSE2/SSSE3/AVX2) implementations, which would be needed to get anywhere close to the best Salsa20 performance on any remotely modern x86 processor; they're just regular x86 assembly. Second, it's still unclear that anyone is actually using the kernel's Salsa20 at all, especially given that now ChaCha20 is supported too, and with much more efficient SSSE3 and AVX2 implementations. Finally, in benchmarks I did on both Intel and AMD processors with both gcc 8.1.0 and gcc 4.9.4, the x86_64 salsa20-asm is actually slightly *slower* than salsa20-generic (~3% slower on Skylake, ~10% slower on Zen), while the i686 salsa20-asm is only slightly faster than salsa20-generic (~15% faster on Skylake, ~20% faster on Zen). The gcc version made little difference. So, the x86_64 salsa20-asm is pretty clearly useless. That leaves just the i686 salsa20-asm, which based on my tests provides a 15-20% speed boost. But that's without updating the code to not use %ebp. And given the maintenance cost, the small speed difference vs. salsa20-generic, the fact that few people still use i686 kernels, the doubt that anyone is even using the kernel's Salsa20 at all, and the fact that a SSE2 implementation would almost certainly be much faster on any remotely modern x86 processor yet no one has cared enough to add one yet, I don't think it's worthwhile to keep. Thus, just remove both the x86_64 and i686 salsa20-asm implementations. Reported-by: syzbot+ffa3a158337bbc01ff09@syzkaller.appspotmail.com Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: morus - Mark MORUS SIMD glue as x86-specific  (Ondrej Mosnacek; 4 files changed, -604/+4)
Commit 56e8e57fc3a7 ("crypto: morus - Add common SIMD glue code for MORUS") accidentally considered the glue code to be usable by different architectures, but it seems to be usable only on x86. This patch moves it under arch/x86/crypto and adds 'depends on X86' to the Kconfig options, and also removes the prompt to hide these internal options from the user. Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: testmgr - eliminate redundant decryption test vectors  (Eric Biggers; 2 files changed, -11723/+919)
Currently testmgr has separate encryption and decryption test vectors for symmetric ciphers. That's massively redundant, since with few exceptions (mostly mistakes, apparently), all decryption tests are identical to the encryption tests, just with the input/result flipped. Therefore, eliminate the redundancy by removing the decryption test vectors and updating testmgr to test both encryption and decryption using what used to be the encryption test vectors. Naming is adjusted accordingly: each cipher_testvec now has a 'ptext' (plaintext), 'ctext' (ciphertext), and 'len' instead of an 'input', 'result', 'ilen', and 'rlen'. Note that it was always the case that 'ilen == rlen'. AES keywrap ("kw(aes)") is special because its IV is generated by the encryption. Previously this was handled by specifying 'iv_out' for encryption and 'iv' for decryption. To make it work cleanly with only one set of test vectors, put the IV in 'iv', remove 'iv_out', and add a boolean that indicates that the IV is generated by the encryption. In total, this removes over 10000 lines from testmgr.h, with no reduction in test coverage since prior patches already copied the few unique decryption test vectors into the encryption test vectors. This covers all algorithms that used 'struct cipher_testvec', e.g. any block cipher in the ECB, CBC, CTR, XTS, LRW, CTS-CBC, PCBC, OFB, or keywrap modes, and Salsa20 and ChaCha20. No change is made to AEAD tests, though we probably can eliminate a similar redundancy there too. The testmgr.h portion of this patch was automatically generated using the following awk script, with some slight manual fixups on top (updated 'struct cipher_testvec' definition, updated a few comments, and fixed up the AES keywrap test vectors): BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER } /^static const struct cipher_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC } /^static const struct cipher_testvec.*_dec_/ { mode = DECVEC } mode == ENCVEC && !/\.ilen[[:space:]]*=/ { sub(/\.input[[:space:]]*=$/, ".ptext =") sub(/\.input[[:space:]]*=/, ".ptext\t=") sub(/\.result[[:space:]]*=$/, ".ctext =") sub(/\.result[[:space:]]*=/, ".ctext\t=") sub(/\.rlen[[:space:]]*=/, ".len\t=") print } mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER } mode == OTHER { print } mode == ENCVEC && /^};/ { mode = OTHER } mode == DECVEC && /^};/ { mode = DECVEC_TAIL } Note that git's default diff algorithm gets confused by the testmgr.h portion of this patch, and reports too many lines added and removed. It's better viewed with 'git diff --minimal' (or 'git show --minimal'), which reports "2 files changed, 919 insertions(+), 11723 deletions(-)". Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
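A simplified sketch of the reworked vector layout this entry describes; the ptext/ctext/len names and the encryption-generates-IV flag come from the text above, while the remaining fields and the flag's exact name are assumptions for illustration:

    struct cipher_testvec {
            const char *key;
            const char *iv;
            const char *ptext;        /* plaintext, was 'input'              */
            const char *ctext;        /* ciphertext, was 'result'            */
            unsigned short klen;
            unsigned short len;       /* replaces ilen/rlen, which were equal */
            bool generates_iv;        /* kw(aes): encryption produces the IV  */
    };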
2018-05-30  crypto: testmgr - add extra kw(aes) encryption test vector  (Eric Biggers; 1 file changed, -0/+13)
One "kw(aes)" decryption test vector doesn't exactly match an encryption test vector with input and result swapped. In preparation for removing the decryption test vectors, add this test vector to the encryption test vectors, so we don't lose any test coverage. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: testmgr - add extra ecb(tnepres) encryption test vectors  (Eric Biggers; 1 file changed, -1/+39)
None of the four "ecb(tnepres)" decryption test vectors exactly match an encryption test vector with input and result swapped. In preparation for removing the decryption test vectors, add these to the encryption test vectors, so we don't lose any test coverage. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: testmgr - make an cbc(des) encryption test vector chunked  (Eric Biggers; 1 file changed, -0/+3)
One "cbc(des)" decryption test vector doesn't exactly match an encryption test vector with input and result swapped. It's *almost* the same as one, but the decryption version is "chunked" while the encryption version is "unchunked". In preparation for removing the decryption test vectors, make the encryption one both chunked and unchunked, so we don't lose any test coverage. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: testmgr - add extra ecb(des) encryption test vectors  (Eric Biggers; 1 file changed, -0/+22)
Two "ecb(des)" decryption test vectors don't exactly match any of the encryption test vectors with input and result swapped. In preparation for removing the decryption test vectors, add these to the encryption test vectors, so we don't lose any test coverage. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-26  crypto: testmgr - add more unkeyed crc32 and crc32c test vectors  (Eric Biggers; 1 file changed, -0/+14)
crc32c has an unkeyed test vector but crc32 did not. Add the crc32c one (which uses an empty input) to crc32 too, and also add a new one to both that uses a nonempty input. These test vectors verify that crc32 and crc32c implementations use the correct default initial state. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-26  crypto: testmgr - fix testing OPTIONAL_KEY hash algorithms  (Eric Biggers; 1 file changed, -7/+43)
Since testmgr uses a single tfm for all tests of each hash algorithm, once a key is set the tfm won't be unkeyed anymore. But with crc32 and crc32c, the key is really the "default initial state" and is optional; those algorithms should have both keyed and unkeyed test vectors, to verify that implementations use the correct default key. Simply listing the unkeyed test vectors first isn't guaranteed to work yet because testmgr makes multiple passes through the test vectors. crc32c does have an unkeyed test vector listed first currently, but it only works by chance because the last crc32c test vector happens to use a key that is the same as the default key. Therefore, teach testmgr to split hash test vectors into unkeyed and keyed sections, and do all the unkeyed ones before the keyed ones. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-26  crypto: testmgr - remove bfin_crc "hmac(crc32)" test vectors  (Eric Biggers; 3 files changed, -98/+0)
The Blackfin CRC driver was removed by commit 9678a8dc53c1 ("crypto: bfin_crc - remove blackfin CRC driver"), but removing the corresponding "hmac(crc32)" test vectors was forgotten. I see no point in keeping them since nothing else appears to implement or use "hmac(crc32)", which isn't an algorithm that makes sense anyway because HMAC is meant to be used with a cryptographically secure hash function, which CRCs are not. Thus, remove the unneeded test vectors. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-26  crypto: crc32-generic - remove __crc32_le()  (Eric Biggers; 1 file changed, -8/+2)
The __crc32_le() wrapper function is pointless. Just call crc32_le() directly instead. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-26  crypto: crc32c-generic - remove cra_alignmask  (Eric Biggers; 1 file changed, -4/+4)
crc32c-generic sets an alignmask, but actually its ->update() works with any alignment; only its ->setkey() and outputting the final digest assume an alignment. To prevent the buffer from having to be aligned by the crypto API for just these cases, switch these cases over to the unaligned access macros and remove the cra_alignmask. Note that this also makes crc32c-generic more consistent with crc32-generic. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
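A hedged sketch of what the digest-output side looks like without an alignmask, based on this entry's description; the context struct and function names follow the crc32c-generic style but are not copied verbatim from the patch:

    #include <asm/unaligned.h>

    static int chksum_final(struct shash_desc *desc, u8 *out)
    {
            struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);

            /* put_unaligned_le32() is safe for any alignment of 'out', so no
             * cra_alignmask is needed just to write out the digest */
            put_unaligned_le32(~ctx->crc, out);
            return 0;
    }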
2018-05-26  crypto: crc32-generic - use unaligned access macros when needed  (Eric Biggers; 1 file changed, -3/+4)
crc32-generic doesn't have a cra_alignmask set, which is desired as its ->update() works with any alignment. However, it incorrectly assumes 4-byte alignment in ->setkey() and when outputting the final digest. Fix this by using the unaligned access macros in those cases. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-26  crypto: af_alg: convert to ->poll_mask  (Christoph Hellwig; 3 files changed, -14/+7)
Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-26  net: remove sock_no_poll  (Christoph Hellwig; 3 files changed, -4/+0)
Now that sock_poll handles a NULL ->poll or ->poll_mask there is no need for a stub. Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-18  crypto: x86 - Add optimized MORUS implementations  (Ondrej Mosnacek; 1 file changed, -0/+26)
This patch adds optimized implementations of MORUS-640 and MORUS-1280, utilizing the SSE2 and AVX2 x86 extensions. For MORUS-1280 (which operates on 256-bit blocks) we provide both AVX2 and SSE2 implementations. Although SSE2 MORUS-1280 is slower than AVX2 MORUS-1280, it is comparable in speed to the SSE2 MORUS-640. Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-18  crypto: morus - Add common SIMD glue code for MORUS  (Ondrej Mosnacek; 4 files changed, -0/+618)
This patch adds a common glue code for optimized implementations of MORUS AEAD algorithms. Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-18  crypto: testmgr - Add test vectors for MORUS  (Ondrej Mosnacek; 2 files changed, -0/+3418)
This patch adds test vectors for MORUS-640 and MORUS-1280. The test vectors were generated using the reference implementation from SUPERCOP (see code comments for more details). Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-18  crypto: morus - Add generic MORUS AEAD implementations  (Ondrej Mosnacek; 4 files changed, -0/+1107)
This patch adds the generic implementation of the MORUS family of AEAD algorithms (MORUS-640 and MORUS-1280). The original authors of MORUS are Hongjun Wu and Tao Huang. At the time of writing, MORUS is one of the finalists in CAESAR, an open competition intended to select a portfolio of alternatives to the problematic AES-GCM: https://competitions.cr.yp.to/caesar-submissions.html https://competitions.cr.yp.to/round3/morusv2.pdf Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-18  crypto: x86 - Add optimized AEGIS implementations  (Ondrej Mosnacek; 1 file changed, -0/+24)
This patch adds optimized implementations of AEGIS-128, AEGIS-128L, and AEGIS-256, utilizing the AES-NI and SSE2 x86 extensions. Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-18  crypto: testmgr - Add test vectors for AEGIS  (Ondrej Mosnacek; 2 files changed, -0/+2862)
This patch adds test vectors for the AEGIS family of AEAD algorithms (AEGIS-128, AEGIS-128L, and AEGIS-256). The test vectors were generated using the reference implementation from SUPERCOP (see code comments for more details). Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-18  crypto: aegis - Add generic AEGIS AEAD implementations  (Ondrej Mosnacek; 6 files changed, -0/+1572)
This patch adds the generic implementation of the AEGIS family of AEAD algorithms (AEGIS-128, AEGIS-128L, and AEGIS-256). The original authors of AEGIS are Hongjun Wu and Bart Preneel. At the time of writing, AEGIS is one of the finalists in CAESAR, an open competition intended to select a portfolio of alternatives to the problematic AES-GCM: https://competitions.cr.yp.to/caesar-submissions.html https://competitions.cr.yp.to/round3/aegisv11.pdf Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-18  crypto: testmgr - reorder paes test lexicographically  (Gilad Ben-Yossef; 1 file changed, -22/+22)
Due to a snafu, the "paes" testmgr tests were not ordered lexicographically, which led to boot time warnings. Reorder the tests as needed. Fixes: a794d8d ("crypto: ccree - enable support for hardware keys") Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Tested-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-16  proc: introduce proc_create_seq{,_data}  (Christoph Hellwig; 1 file changed, -13/+1)
Variants of proc_create{,_data} that directly take a struct seq_operations argument and drastically reduces the boilerplate code in the callers. All trivial callers converted over. Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-05  crypto: tcrypt - Remove VLA usage  (Kees Cook; 1 file changed, -39/+79)
In the quest to remove all stack VLA usage from the kernel[1], this allocates the return code buffers before starting the jiffie timers, rather than using stack space for the array. It additionally cleans up some exit paths and makes sure that the num_mb module_param() is used only once per execution, to avoid possible races from the value changing. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
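The general shape of the change this entry describes, with hypothetical buffer names; the point is that the allocation happens before any timing starts and failure is handled on a normal exit path rather than via a stack VLA:

    int *rc;
    int ret = 0, i;

    rc = kmalloc_array(num_mb, sizeof(*rc), GFP_KERNEL);  /* was a stack VLA */
    if (!rc)
            return -ENOMEM;

    /* ... start the jiffies/cycles measurement and run the multibuffer
     *     tests, recording each request's result in rc[i] ... */

    for (i = 0; i < num_mb; i++)
            if (rc[i])
                    pr_err("request %d returned %d\n", i, rc[i]);

    kfree(rc);
    return ret;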
2018-05-05  crypto: sm4 - export encrypt/decrypt routines to other drivers  (Ard Biesheuvel; 1 file changed, -4/+6)
In preparation for adding support for the SIMD based arm64 implementation of SM4, which requires a fallback to non-SIMD code when invoked in certain contexts, expose the generic SM4 encrypt and decrypt routines to other drivers. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-05  crypto: ccree - enable support for hardware keys  (Gilad Ben-Yossef; 1 file changed, -0/+43)
Enable CryptoCell support for hardware keys. Hardware keys are regular AES keys loaded into CryptoCell internal memory via firmware, often from secure boot ROM or hardware fuses at boot time. As such, they can be used for enc/dec purposes like any other key, but cannot (read: it is extremely hard to) be extracted, since they are not available anywhere in RAM during runtime. The mechanism has some similarities to s390 secure keys, although the keys are not wrapped or sealed, but simply loaded offline. The interface was therefore modeled on the s390 secure keys support. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-04-20  crypto: rsa - Remove unneeded error assignment  (Fabio Estevam; 1 file changed, -1/+0)
There is no need to assign an error value to 'ret' prior to calling mpi_read_raw_from_sgl() because in the case of error the 'ret' variable will be assigned to the error code inside the if block. In the case of non failure, 'ret' will be overwritten immediately after, so remove the unneeded assignment. Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-04-20  crypto: testmgr - Allow different compression results  (Mahipal Challa; 1 file changed, -13/+37)
The following error is triggered by the ThunderX ZIP driver if the testmanager is enabled: [ 199.069437] ThunderX-ZIP 0000:03:00.0: Found ZIP device 0 177d:a01a on Node 0 [ 199.073573] alg: comp: Compression test 1 failed for deflate-generic: output len = 37 The reason for this error is the verification of the compression results. Verifying the compression result only works if all algorithm parameters are identical, in this case to the software implementation. Different compression engines like the ThunderX ZIP coprocessor might yield different compression results by tuning the algorithm parameters. In our case the compressed result is shorter than the test vector. We should not forbid different compression results but only check that compression -> decompression yields the same result. This is done already in the acomp test. Do something similar for test_comp(). Signed-off-by: Mahipal Challa <mchalla@cavium.com> Signed-off-by: Balakrishna Bhamidipati <bbhamidipati@cavium.com> [jglauber@cavium.com: removed unrelated printk changes, rewrote commit msg, fixed whitespace and unneeded initialization] Signed-off-by: Jan Glauber <jglauber@cavium.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
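A hedged sketch of the round-trip check this entry describes, using the synchronous crypto_comp interface; the buffer size constant and function name are placeholders, not the actual test_comp() code:

    static int check_comp_roundtrip(struct crypto_comp *tfm,
                                    const u8 *in, unsigned int inlen)
    {
            u8 comp[COMP_BUF_SIZE], decomp[COMP_BUF_SIZE];
            unsigned int clen = sizeof(comp), dlen = sizeof(decomp);
            int ret;

            ret = crypto_comp_compress(tfm, in, inlen, comp, &clen);
            if (ret)
                    return ret;

            ret = crypto_comp_decompress(tfm, comp, clen, decomp, &dlen);
            if (ret)
                    return ret;

            /* the compressed length may legitimately differ between engines;
             * only the decompressed result has to match the original input */
            if (dlen != inlen || memcmp(decomp, in, inlen))
                    return -EINVAL;
            return 0;
    }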
2018-04-20  crypto: remove several VLAs  (Salvatore Mesoraca; 5 files changed, -11/+13)
We avoid various VLAs[1] by using constant expressions for block size and alignment mask. [1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-04-20  crypto: api - laying defines and checks for statically allocated buffers  (Salvatore Mesoraca; 1 file changed, -0/+10)
In preparation for the removal of VLAs[1] from crypto code, we create two new compile-time constants: all ciphers implemented in Linux have a block size less than or equal to 16 bytes, and the most demanding hardware requires 16-byte alignment for the block buffer. We also enforce these limits in crypto_check_alg when a new cipher is registered. [1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
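A minimal sketch of what such compile-time limits and their registration-time check look like; the macro names, values, and helper are written from the description above, not copied from the patch:

    #define MAX_CIPHER_BLOCKSIZE    16   /* largest block size of any cipher   */
    #define MAX_CIPHER_ALIGNMASK    15   /* strictest alignment requirement    */

    /* checked when an algorithm registers, so callers may later use fixed
     * MAX_CIPHER_BLOCKSIZE-sized stack buffers instead of VLAs */
    static int check_cipher_limits(const struct crypto_alg *alg)
    {
            if (alg->cra_blocksize > MAX_CIPHER_BLOCKSIZE)
                    return -EINVAL;
            if (alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
                    return -EINVAL;
            return 0;
    }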
2018-04-20  crypto: authencesn - don't leak pointers to authenc keys  (Tudor-Dan Ambarus; 1 file changed, -0/+1)
In crypto_authenc_esn_setkey we save pointers to the authenc keys in a local variable of type struct crypto_authenc_keys and we don't zeroize it after use. Fix this and don't leak pointers to the authenc keys. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-04-20  crypto: authenc - don't leak pointers to authenc keys  (Tudor-Dan Ambarus; 1 file changed, -0/+1)
In crypto_authenc_setkey we save pointers to the authenc keys in a local variable of type struct crypto_authenc_keys and we don't zeroize it after use. Fix this and don't leak pointers to the authenc keys. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
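A hedged sketch of the pattern both of these authenc fixes apply in setkey: the parsed key material sits in an on-stack struct, so it is wiped on every exit path. The function name is hypothetical and the body of the keying step is elided:

    static int example_setkey(struct crypto_aead *tfm, const u8 *key,
                              unsigned int keylen)
    {
            struct crypto_authenc_keys keys;
            int err;

            err = crypto_authenc_extractkeys(&keys, key, keylen);
            if (err)
                    goto out;

            /* ... key the inner authentication and encryption transforms ... */

    out:
            memzero_explicit(&keys, sizeof(keys));  /* don't leak the pointers */
            return err;
    }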
2018-04-20  crypto: zstd - Add zstd support  (Nick Terrell; 5 files changed, -0/+356)
Adds zstd support to crypto and scompress. Only supports the default level. Previously we held off on this patch, since there weren't any users. Now zram is ready for zstd support, but depends on CONFIG_CRYPTO_ZSTD, which isn't defined until this patch is in. I also see a patch adding zstd to pstore [0], which depends on crypto zstd. [0] lkml.kernel.org/r/9c9416b2dff19f05fb4c35879aaa83d11ff72c92.1521626182.git.geliangtang@gmail.com Signed-off-by: Nick Terrell <terrelln@fb.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
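Once CONFIG_CRYPTO_ZSTD is enabled, the algorithm is reachable through the ordinary kernel compression API; a hedged usage sketch (function name is illustrative, not part of the patch):

    static int zstd_compress_buf(const u8 *src, unsigned int slen,
                                 u8 *dst, unsigned int *dlen)
    {
            struct crypto_comp *tfm;
            int ret;

            tfm = crypto_alloc_comp("zstd", 0, 0);   /* provided by this patch */
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            ret = crypto_comp_compress(tfm, src, slen, dst, dlen);

            crypto_free_comp(tfm);
            return ret;
    }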