path: root/crypto/testmgr.h
2018-12-13  crypto: xchacha20 - fix comments for test vectors  (Eric Biggers)  [1 file, -8/+6]
The kernel's ChaCha20 uses the RFC7539 convention of the nonce being 12 bytes rather than 8, so actually I only appended 12 random bytes (not 16) to its test vectors to form 24-byte nonces for the XChaCha20 test vectors. The other 4 bytes were just from zero-padding the stream position to 8 bytes. Fix the comments above the test vectors. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-12-13  crypto: xchacha - add test vector from XChaCha20 draft RFC  (Eric Biggers)  [1 file, -2/+176]
There is a draft specification for XChaCha20 being worked on. Add the XChaCha20 test vector from the appendix so that we can be extra sure the kernel's implementation is compatible. I also recomputed the ciphertext with XChaCha12 and added it there too, to keep the tests for XChaCha20 and XChaCha12 in sync. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-20  crypto: adiantum - add Adiantum support  (Eric Biggers)  [1 file, -0/+461]
Add support for the Adiantum encryption mode. Adiantum was designed by Paul Crowley and is specified by our paper: "Adiantum: length-preserving encryption for entry-level processors" (https://eprint.iacr.org/2018/720.pdf). See the paper for full details; this patch only provides an overview.

Adiantum is a tweakable, length-preserving encryption mode designed for fast and secure disk encryption, especially on CPUs without dedicated crypto instructions. Adiantum encrypts each sector using the XChaCha12 stream cipher, two passes of an ε-almost-∆-universal (εA∆U) hash function, and an invocation of the AES-256 block cipher on a single 16-byte block. On CPUs without AES instructions, Adiantum is much faster than AES-XTS; for example, on ARM Cortex-A7 with 4096-byte sectors, Adiantum encryption is about 4 times faster than AES-256-XTS encryption, and decryption about 5 times faster.

Adiantum is a specialization of the more general HBSH construction. Our earlier proposal, HPolyC, was also an HBSH specialization, but it used a different εA∆U hash function, one based on Poly1305 only. Adiantum's εA∆U hash function, which is based primarily on the "NH" hash function like that used in UMAC (RFC 4418), is about twice as fast as HPolyC's; consequently, Adiantum is about 20% faster than HPolyC. This speed comes with no loss of security: Adiantum is provably just as secure as HPolyC, in fact slightly *more* secure. Like HPolyC, Adiantum's security is reducible to that of XChaCha12 and AES-256, subject to a security bound; XChaCha12 itself has a security reduction to ChaCha12. Therefore, one need not "trust" Adiantum; one need only trust ChaCha12 and AES-256. Note that the εA∆U hash function is only used for its proven combinatorial properties, so it cannot be "broken".

Adiantum is also a true wide-block encryption mode, so flipping any plaintext bit in the sector scrambles the entire ciphertext, and vice versa. No other such mode is currently available in the kernel; doing the same with XTS scrambles only 16 bytes. Adiantum also supports arbitrary-length tweaks and naturally supports any input length >= 16 bytes without needing "ciphertext stealing".

For the stream cipher, Adiantum uses XChaCha12 rather than XChaCha20 in order to make encryption feasible on the widest range of devices. Although the 20-round variant is quite popular, the best known attacks on ChaCha are on only 7 rounds, so ChaCha12 still has a substantial security margin; in fact, a larger one than AES-256's. 12-round Salsa20 is also the eSTREAM recommendation.

For the block cipher, Adiantum uses AES-256, despite its lower security margin than XChaCha12 and its need for table lookups, because AES's extensive adoption and analysis make it the obvious first choice. Nevertheless, for flexibility this patch also permits the "adiantum" template to be instantiated with XChaCha20 and/or with an alternate block cipher.

We need Adiantum support in the kernel for use in dm-crypt and fscrypt, where currently the only other suitable options are block cipher modes such as AES-XTS. A big problem with this is that many low-end mobile devices (e.g. Android Go phones sold primarily in developing countries, as well as some smartwatches) still have CPUs that lack AES instructions, e.g. ARM Cortex-A7. Sadly, AES-XTS encryption is much too slow to be viable on these devices. We did find that some "lightweight" block ciphers are fast enough, but these suffer from problems such as having had little cryptanalysis or being too controversial.

The ChaCha stream cipher has excellent performance but is insecure to use directly for disk encryption, since each sector's IV is reused each time it is overwritten. Even restricting the threat model to offline attacks only isn't enough, since modern flash storage devices don't guarantee that "overwrites" are really overwrites, due to wear-leveling. Adiantum avoids this problem by constructing a "tweakable super-pseudorandom permutation"; this is the strongest possible security model for length-preserving encryption.

Of course, storing random nonces along with the ciphertext would be the ideal solution. But doing that with existing hardware and filesystems runs into major practical problems; in most cases it would require data journaling (like dm-integrity), which severely degrades performance. Thus, for now, length-preserving encryption is still needed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
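For reference, the HBSH structure that Adiantum instantiates can be summarized in a few equations. This is a paraphrase of the construction in the paper rather than anything taken from the patch itself: H is the NH/Poly1305-based εA∆U hash under a derived hash key, E is AES-256, S is the XChaCha12 keystream, T is the tweak, and the plaintext is split as P = P_L || P_R with P_R the final 16 bytes:

    P_M = P_R + H(T, P_L)         (addition mod 2^128)
    C_M = E(P_M)                  (one AES-256 block encryption)
    C_L = P_L xor S(C_M)          (XChaCha12 keystream, with C_M as the nonce)
    C_R = C_M - H(T, C_L)         (subtraction mod 2^128)
    ciphertext = C_L || C_R

Since C_M depends on every plaintext bit (via the hash of P_L) and every ciphertext bit depends on C_M, flipping any input bit changes the whole sector, which is the wide-block property described above.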
2018-11-20  crypto: nhpoly1305 - add NHPoly1305 support  (Eric Biggers)  [1 file, -4/+1236]
Add a generic implementation of NHPoly1305, an ε-almost-∆-universal hash function used in the Adiantum encryption mode. CONFIG_NHPOLY1305 is not selectable by itself since there won't be any real reason to enable it without also enabling Adiantum support. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-20  crypto: chacha - add XChaCha12 support  (Eric Biggers)  [1 file, -0/+578]
Now that the generic implementation of ChaCha20 has been refactored to allow varying the number of rounds, add support for XChaCha12, which is the XSalsa construction applied to ChaCha12. ChaCha12 is one of the three ciphers specified by the original ChaCha paper (https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than ChaCha20 but has a lower, but still large, security margin.

We need XChaCha12 support so that it can be used in the Adiantum encryption mode, which enables disk/file encryption on low-end mobile devices where AES-XTS is too slow as the CPUs lack AES instructions. We'd prefer XChaCha20 (the more popular variant), but it's too slow on some of our target devices, so at least in some cases we do need the XChaCha12-based version.

In more detail, the problem is that Adiantum is still much slower than we're happy with, and encryption still has a quite noticeable effect on the feel of low-end devices. Users and vendors push back hard against encryption that degrades the user experience, which always risks encryption being disabled entirely. So we need to choose the fastest option that gives us a solid margin of security, and here that's XChaCha12. The best known attack on ChaCha breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's security margin is still better than AES-256's. Much has been learned about cryptanalysis of ARX ciphers since Salsa20 was originally designed in 2005, and it now seems we can be comfortable with a smaller number of rounds. The eSTREAM project also suggests the 12-round version of Salsa20 as providing the best balance among the different variants: combining very good performance with a "comfortable margin of security".

Note that it would be trivial to add vanilla ChaCha12 in addition to XChaCha12. However, it's unneeded for now and therefore is omitted.

As discussed in the patch that introduced XChaCha20 support, I considered splitting the code into separate chacha-common, chacha20, xchacha20, and xchacha12 modules, so that these algorithms could be enabled/disabled independently. However, since nearly all the code is shared anyway, I ultimately decided there would have been little benefit to the added complexity.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-20  crypto: chacha20-generic - add XChaCha20 support  (Eric Biggers)  [1 file, -0/+577]
Add support for the XChaCha20 stream cipher. XChaCha20 is the application of the XSalsa20 construction (https://cr.yp.to/snuffle/xsalsa-20081128.pdf) to ChaCha20 rather than to Salsa20. XChaCha20 extends ChaCha20's nonce length from 64 bits (or 96 bits, depending on convention) to 192 bits, while provably retaining ChaCha20's security. XChaCha20 uses the ChaCha20 permutation to map the key and first 128 nonce bits to a 256-bit subkey. Then, it does the ChaCha20 stream cipher with the subkey and remaining 64 bits of nonce.

We need XChaCha support in order to add support for the Adiantum encryption mode. Note that to meet our performance requirements, we actually plan to primarily use the variant XChaCha12. But we believe it's wise to first add XChaCha20 as a baseline with a higher security margin, in case there are any situations where it can be used. Supporting both variants is straightforward.

Since XChaCha20's subkey differs for each request, XChaCha20 can't be a template that wraps ChaCha20; that would require re-keying the underlying ChaCha20 for every request, which wouldn't be thread-safe. Instead, we make XChaCha20 its own top-level algorithm which calls the ChaCha20 streaming implementation internally.

Similar to the existing ChaCha20 implementation, we define the IV to be the nonce and stream position concatenated together. This allows users to seek to any position in the stream.

I considered splitting the code into separate chacha20-common, chacha20, and xchacha20 modules, so that chacha20 and xchacha20 could be enabled/disabled independently. However, since nearly all the code is shared anyway, I ultimately decided there would have been little benefit to the added complexity of separate modules.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
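The subkey derivation step described above is small enough to sketch in full. The following is an illustrative userspace implementation of that step (often called HChaCha20), written for this log rather than taken from the kernel patch: load the usual ChaCha constants, the 256-bit key, and the first 16 nonce bytes into the 4x4 state, run the 20-round permutation without the final feed-forward addition, and take words 0-3 and 12-15 as the subkey.

    #include <stdint.h>
    #include <string.h>

    static uint32_t rotl32(uint32_t v, int n) { return (v << n) | (v >> (32 - n)); }

    static uint32_t get_le32(const uint8_t *p)
    {
            return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
                   ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    #define QR(a, b, c, d) do {                        \
            a += b; d ^= a; d = rotl32(d, 16);         \
            c += d; b ^= c; b = rotl32(b, 12);         \
            a += b; d ^= a; d = rotl32(d, 8);          \
            c += d; b ^= c; b = rotl32(b, 7);          \
    } while (0)

    /*
     * HChaCha20: map a 256-bit key and the first 128 bits of the 192-bit
     * XChaCha20 nonce to a 256-bit subkey.  Unlike ChaCha20 itself, the
     * initial state is not added back at the end; the subkey is simply
     * words 0..3 and 12..15 of the permuted state (little-endian words).
     */
    static void hchacha20(const uint8_t key[32], const uint8_t nonce[16],
                          uint32_t subkey[8])
    {
            uint32_t x[16];
            int i;

            x[0] = 0x61707865; x[1] = 0x3320646e;      /* "expand 32-byte k" */
            x[2] = 0x79622d32; x[3] = 0x6b206574;
            for (i = 0; i < 8; i++)
                    x[4 + i] = get_le32(key + 4 * i);
            for (i = 0; i < 4; i++)
                    x[12 + i] = get_le32(nonce + 4 * i);

            for (i = 0; i < 20; i += 2) {              /* 10 double rounds */
                    QR(x[0], x[4], x[8],  x[12]);      /* column rounds */
                    QR(x[1], x[5], x[9],  x[13]);
                    QR(x[2], x[6], x[10], x[14]);
                    QR(x[3], x[7], x[11], x[15]);
                    QR(x[0], x[5], x[10], x[15]);      /* diagonal rounds */
                    QR(x[1], x[6], x[11], x[12]);
                    QR(x[2], x[7], x[8],  x[13]);
                    QR(x[3], x[4], x[9],  x[14]);
            }

            memcpy(subkey,     &x[0],  16);
            memcpy(subkey + 4, &x[12], 16);
    }

XChaCha20 then runs ordinary ChaCha20 with this subkey and the remaining 8 nonce bytes, padded out per the nonce/stream-position convention described in the commits above.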
2018-11-16  crypto: streebog - add Streebog test vectors  (Vitaly Chikunov)  [1 file, -0/+116]
Add testmgr and tcrypt tests and vectors for the Streebog hash function from RFC 6986 and GOST R 34.11-2012; the HMAC-Streebog vectors are from RFC 7836 and R 50.1.113-2016.
Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09  crypto: testmgr - add AES-CFB tests  (Dmitry Eremin-Solenikov)  [1 file, -0/+76]
Add AES-128/192/256-CFB test vectors from NIST SP 800-38A.
Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28  crypto: testmgr - update sm4 test vectors  (Gilad Ben-Yossef)  [1 file, -7/+115]
Add additional test vectors from "The SM4 Blockcipher Algorithm And Its Modes Of Operations" draft-ribose-cfrg-sm4-10 and register cipher speed tests for sm4. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28  crypto: tcrypt - remove remnants of pcomp-based zlib  (Horia Geantă)  [1 file, -2/+0]
Commit 110492183c4b ("crypto: compress - remove unused pcomp interface") removed pcomp interface but missed cleaning up tcrypt. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-21  crypto: testmgr - Add test for LRW counter wrap-around  (Ondrej Mosnacek)  [1 file, -0/+21]
This patch adds a test vector for lrw(aes) that triggers wrap-around of the counter, which is a tricky corner case. Suggested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
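For context (a summary of the LRW construction, not taken from the patch): LRW encrypts the i-th 16-byte block of a request as

    C_i = E_K1(P_i xor T_i) xor T_i,    where T_i = K2 * i in GF(2^128)

with the 128-bit block counter i initialised from the IV. Implementations usually derive T_(i+1) from T_i incrementally rather than performing a full field multiplication per block, and the carry into higher-order bytes when the low part of the counter wraps around is exactly the corner case this vector exercises.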
2018-09-04  crypto: speck - remove Speck  (Jason A. Donenfeld)  [1 file, -738/+0]
These are unused, undesired, and have never actually been used by anybody. The original authors of this code have changed their mind about its inclusion. While originally proposed for disk encryption on low-end devices, the idea was discarded [1] in favor of something else before that could really get going. Therefore, this patch removes Speck. [1] https://marc.info/?l=linux-crypto-vger&m=153359499015659 Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Acked-by: Eric Biggers <ebiggers@google.com> Cc: stable@vger.kernel.org Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: dh - fix calculating encoded key size  (Eric Biggers)  [1 file, -6/+6]
DH_KPP_SECRET_MIN_SIZE was not increased to include 'q_size', causing an out-of-bounds write of 4 bytes in crypto_dh_encode_key() and an out-of-bounds read of 4 bytes in crypto_dh_decode_key(). Fix it, and fix the lengths of the test vectors to match.
Reported-by: syzbot+6d38d558c25b53b8f4ed@syzkaller.appspotmail.com
Fixes: e3fe0ae12962 ("crypto: dh - add public key verification test")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-20  crypto: dh - update test for public key verification  (Stephan Mueller)  [1 file, -0/+4]
By adding a zero byte-length for the DH parameter Q value, the public key verification test is disabled for the given test. Reported-by: Eric Biggers <ebiggers3@gmail.com> Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-01  crypto: vmac - remove insecure version with hardcoded nonce  (Eric Biggers)  [1 file, -102/+0]
Remove the original version of the VMAC template that had the nonce hardcoded to 0 and produced a digest with the wrong endianness. I'm unsure whether this had users or not (there are no explicit in-kernel references to it), but given that the hardcoded nonce made it wildly insecure unless a unique key was used for each message, let's try removing it and see if anyone complains. Leave the new "vmac64" template that requires the nonce to be explicitly specified as the first 16 bytes of data and uses the correct endianness for the digest. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-01  crypto: vmac - add nonced version with big endian digest  (Eric Biggers)  [1 file, -0/+155]
Currently the VMAC template uses a "nonce" hardcoded to 0, which makes it insecure unless a unique key is set for every message. Also, the endianness of the final digest is wrong: the implementation uses little endian, but the VMAC specification has it as big endian, as do other VMAC implementations such as the one in Crypto++. Add a new VMAC template where the nonce is passed as the first 16 bytes of data (similar to what is done for Poly1305's nonce), and the digest is big endian. Call it "vmac64", since the old name of simply "vmac" didn't clarify whether the implementation is of VMAC-64 or of VMAC-128 (which produce 64-bit and 128-bit digests respectively); so we fix the naming ambiguity too. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: testmgr - eliminate redundant decryption test vectors  (Eric Biggers)  [1 file, -11424/+809]
Currently testmgr has separate encryption and decryption test vectors for symmetric ciphers. That's massively redundant, since with few exceptions (mostly mistakes, apparently), all decryption tests are identical to the encryption tests, just with the input/result flipped.

Therefore, eliminate the redundancy by removing the decryption test vectors and updating testmgr to test both encryption and decryption using what used to be the encryption test vectors. Naming is adjusted accordingly: each cipher_testvec now has a 'ptext' (plaintext), 'ctext' (ciphertext), and 'len' instead of an 'input', 'result', 'ilen', and 'rlen'. Note that it was always the case that 'ilen == rlen'.

AES keywrap ("kw(aes)") is special because its IV is generated by the encryption. Previously this was handled by specifying 'iv_out' for encryption and 'iv' for decryption. To make it work cleanly with only one set of test vectors, put the IV in 'iv', remove 'iv_out', and add a boolean that indicates that the IV is generated by the encryption.

In total, this removes over 10000 lines from testmgr.h, with no reduction in test coverage since prior patches already copied the few unique decryption test vectors into the encryption test vectors.

This covers all algorithms that used 'struct cipher_testvec', e.g. any block cipher in the ECB, CBC, CTR, XTS, LRW, CTS-CBC, PCBC, OFB, or keywrap modes, and Salsa20 and ChaCha20. No change is made to AEAD tests, though we probably can eliminate a similar redundancy there too.

The testmgr.h portion of this patch was automatically generated using the following awk script, with some slight manual fixups on top (updated 'struct cipher_testvec' definition, updated a few comments, and fixed up the AES keywrap test vectors):

    BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER }

    /^static const struct cipher_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC }
    /^static const struct cipher_testvec.*_dec_/ { mode = DECVEC }

    mode == ENCVEC && !/\.ilen[[:space:]]*=/ {
        sub(/\.input[[:space:]]*=$/,  ".ptext =")
        sub(/\.input[[:space:]]*=/,   ".ptext\t=")
        sub(/\.result[[:space:]]*=$/, ".ctext =")
        sub(/\.result[[:space:]]*=/,  ".ctext\t=")
        sub(/\.rlen[[:space:]]*=/,    ".len\t=")
        print
    }
    mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER }
    mode == OTHER { print }
    mode == ENCVEC && /^};/ { mode = OTHER }
    mode == DECVEC && /^};/ { mode = DECVEC_TAIL }

Note that git's default diff algorithm gets confused by the testmgr.h portion of this patch, and reports too many lines added and removed. It's better viewed with 'git diff --minimal' (or 'git show --minimal'), which reports "2 files changed, 919 insertions(+), 11723 deletions(-)".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
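As an illustration of the renaming (with made-up bytes and a hypothetical template name, not an actual vector from the file), a pair of old-style templates collapses into a single entry along these lines:

    /* Before: separate encryption and decryption templates */
    static const struct cipher_testvec alg_enc_tv_template[] = { {
            .key    = "\x00\x01...",
            .klen   = 16,
            .input  = "\xaa\xbb...",        /* plaintext */
            .ilen   = 16,
            .result = "\xcc\xdd...",        /* ciphertext */
            .rlen   = 16,
    } };
    /* ...plus a matching _dec_ template with input/result swapped. */

    /* After: one vector, tested in both directions by testmgr */
    static const struct cipher_testvec alg_tv_template[] = { {
            .key    = "\x00\x01...",
            .klen   = 16,
            .ptext  = "\xaa\xbb...",
            .ctext  = "\xcc\xdd...",
            .len    = 16,
    } };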
2018-05-30  crypto: testmgr - add extra kw(aes) encryption test vector  (Eric Biggers)  [1 file, -0/+13]
One "kw(aes)" decryption test vector doesn't exactly match an encryption test vector with input and result swapped. In preparation for removing the decryption test vectors, add this test vector to the encryption test vectors, so we don't lose any test coverage. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: testmgr - add extra ecb(tnepres) encryption test vectors  (Eric Biggers)  [1 file, -1/+39]
None of the four "ecb(tnepres)" decryption test vectors exactly match an encryption test vector with input and result swapped. In preparation for removing the decryption test vectors, add these to the encryption test vectors, so we don't lose any test coverage. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: testmgr - make a cbc(des) encryption test vector chunked  (Eric Biggers)  [1 file, -0/+3]
One "cbc(des)" decryption test vector doesn't exactly match an encryption test vector with input and result swapped. It's *almost* the same as one, but the decryption version is "chunked" while the encryption version is "unchunked". In preparation for removing the decryption test vectors, make the encryption one both chunked and unchunked, so we don't lose any test coverage. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-30  crypto: testmgr - add extra ecb(des) encryption test vectors  (Eric Biggers)  [1 file, -0/+22]
Two "ecb(des)" decryption test vectors don't exactly match any of the encryption test vectors with input and result swapped. In preparation for removing the decryption test vectors, add these to the encryption test vectors, so we don't lose any test coverage. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-26  crypto: testmgr - add more unkeyed crc32 and crc32c test vectors  (Eric Biggers)  [1 file, -0/+14]
crc32c has an unkeyed test vector but crc32 did not. Add the crc32c one (which uses an empty input) to crc32 too, and also add a new one to both that uses a nonempty input. These test vectors verify that crc32 and crc32c implementations use the correct default initial state. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-26  crypto: testmgr - remove bfin_crc "hmac(crc32)" test vectors  (Eric Biggers)  [1 file, -88/+0]
The Blackfin CRC driver was removed by commit 9678a8dc53c1 ("crypto: bfin_crc - remove blackfin CRC driver"), but the corresponding "hmac(crc32)" test vectors were left behind. I see no point in keeping them, since nothing else appears to implement or use "hmac(crc32)"; it isn't an algorithm that makes sense anyway, because HMAC is meant to be used with a cryptographically secure hash function, which CRCs are not. Thus, remove the unneeded test vectors.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-18  crypto: testmgr - Add test vectors for MORUS  (Ondrej Mosnacek)  [1 file, -0/+3400]
This patch adds test vectors for MORUS-640 and MORUS-1280. The test vectors were generated using the reference implementation from SUPERCOP (see code comments for more details). Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-05-18  crypto: testmgr - Add test vectors for AEGIS  (Ondrej Mosnacek)  [1 file, -0/+2835]
This patch adds test vectors for the AEGIS family of AEAD algorithms (AEGIS-128, AEGIS-128L, and AEGIS-256). The test vectors were generated using the reference implementation from SUPERCOP (see code comments for more details). Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-04-20  crypto: zstd - Add zstd support  (Nick Terrell)  [1 file, -0/+71]
Adds zstd support to crypto and scompress. Only supports the default level. Previously we held off on this patch, since there weren't any users. Now zram is ready for zstd support, but depends on CONFIG_CRYPTO_ZSTD, which isn't defined until this patch is in. I also see a patch adding zstd to pstore [0], which depends on crypto zstd. [0] lkml.kernel.org/r/9c9416b2dff19f05fb4c35879aaa83d11ff72c92.1521626182.git.geliangtang@gmail.com Signed-off-by: Nick Terrell <terrelln@fb.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-16  crypto: testmgr - add a new test case for CRC-T10DIF  (Ard Biesheuvel)  [1 file, -0/+259]
In order to be able to test yield support under preempt, add a test vector for CRC-T10DIF that is long enough to take multiple iterations (and thus possible preemption between them) of the primary loop of the accelerated x86 and arm64 implementations. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-16  crypto: testmgr - introduce SM4 tests  (Gilad Ben-Yossef)  [1 file, -0/+131]
Add testmgr tests for the newly introduced SM4 ECB symmetric cipher. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-22  crypto: speck - add test vectors for Speck64-XTS  (Eric Biggers)  [1 file, -0/+671]
Add test vectors for Speck64-XTS, generated in userspace using C code. The inputs were borrowed from the AES-XTS test vectors, with key lengths adjusted. xts-speck64-neon passes these tests. However, they aren't currently applicable for the generic XTS template, as that only supports a 128-bit block size. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-22  crypto: speck - add test vectors for Speck128-XTS  (Eric Biggers)  [1 file, -0/+687]
Add test vectors for Speck128-XTS, generated in userspace using C code. The inputs were borrowed from the AES-XTS test vectors. Both xts(speck128-generic) and xts-speck128-neon pass these tests. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-22  crypto: speck - add support for the Speck block cipher  (Eric Biggers)  [1 file, -0/+128]
Add a generic implementation of Speck, including the Speck128 and Speck64 variants. Speck is a lightweight block cipher that can be much faster than AES on processors that don't have AES instructions.

We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an option for dm-crypt and fscrypt on Android, for low-end mobile devices with older CPUs such as ARMv7 which don't have the Cryptography Extensions. Currently, such devices are unencrypted because AES is not fast enough, even when the NEON bit-sliced implementation of AES is used. Other AES alternatives such as Twofish, Threefish, Camellia, CAST6, and Serpent aren't fast enough either; it seems that only a modern ARX cipher can provide sufficient performance on these devices.

This is a replacement for our original proposal (https://patchwork.kernel.org/patch/10101451/) which was to offer ChaCha20 for these devices. However, the use of a stream cipher for disk/file encryption with no space to store nonces would have been much more insecure than we thought initially, given that it would be used on top of flash storage as well as potentially on top of F2FS, neither of which is guaranteed to overwrite data in-place.

Speck has been somewhat controversial due to its origin. Nevertheless, it has a straightforward design (it's an ARX cipher), and it appears to be the leading software-optimized lightweight block cipher currently, with the most cryptanalysis. It's also easy to implement without side channels, unlike AES. Moreover, we only intend Speck to be used when the status quo is no encryption, due to AES not being fast enough.

We've also considered a novel length-preserving encryption mode based on ChaCha20 and Poly1305. While theoretically attractive, such a mode would be a brand new crypto construction and would be more complicated and difficult to implement efficiently in comparison to Speck-XTS.

There is confusion about the byte and word orders of Speck, since the original paper doesn't specify them. But we have implemented it using the orders the authors recommended in a correspondence with them. The test vectors are taken from the original paper but were mapped to byte arrays using the recommended byte and word orders.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
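For reference, the ARX structure mentioned above is tiny. Below is an illustrative userspace sketch of the Speck128 round function, written for this log from the published description of the cipher rather than taken from the kernel patch; Speck128 works on two 64-bit words with rotation amounts 8 and 3 (the key schedule, which reuses the same round function, is omitted here).

    #include <stdint.h>

    static uint64_t ror64(uint64_t x, int n) { return (x >> n) | (x << (64 - n)); }
    static uint64_t rol64(uint64_t x, int n) { return (x << n) | (x >> (64 - n)); }

    /* One Speck128 round: mix the two 64-bit halves with one round key. */
    static void speck128_round(uint64_t *x, uint64_t *y, uint64_t k)
    {
            *x = ror64(*x, 8);
            *x += *y;
            *x ^= k;
            *y = rol64(*y, 3);
            *y ^= *x;
    }

    /* Encrypt one block given an already-expanded round key schedule. */
    static void speck128_encrypt(uint64_t *x, uint64_t *y,
                                 const uint64_t *round_keys, int nrounds)
    {
            int i;

            for (i = 0; i < nrounds; i++)
                    speck128_round(x, y, round_keys[i]);
    }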
2018-02-22  crypto: testmgr - Fix incorrect values in PKCS#1 test vector  (Conor McLoughlin)  [1 file, -3/+3]
An RSA private key in the first form should have its version, prime1, prime2, exponent1, exponent2, and coefficient values set to 0. With non-zero values for prime1, prime2, exponent1, exponent2, and coefficient, the Intel QAT driver will assume that values are provided for the second form of the private key. This results in signature verification failures for modules signed with rsa,sha256 when a QAT device is present.
Cc: <stable@vger.kernel.org>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Conor McLoughlin <conor.mcloughlin@intel.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-25  crypto: testmgr - add new testcases for sha3  (Ard Biesheuvel)  [1 file, -0/+550]
All current SHA3 test cases are smaller than the SHA3 block size, which means not all code paths are being exercised. So add a new test case to each variant, and make one of the existing test cases chunked. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-09-22  crypto: sm3 - add SM3 test vectors  (Gilad Ben-Yossef)  [1 file, -0/+67]
Add testmgr and tcrypt tests and vectors for SM3 secure hash. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-08-22  crypto: testmgr - add chunked test cases for chacha20  (Ard Biesheuvel)  [1 file, -0/+7]
We failed to catch a bug in the chacha20 code after porting it to the skcipher API. We would have caught it if any chunked tests had been defined, so define some now so we will catch future regressions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-06-20  crypto: testmgr - add testvector for pkcs1pad(rsa)  (Stephan Mueller)  [1 file, -0/+96]
The PKCS#1 RSA implementation is provided with a self test with RSA 2048 and SHA-256. This self test implicitly covers other RSA keys and other hashes. Also, this self test implies that the pkcs1pad(rsa) is FIPS 140-2 compliant. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-06-10  crypto: testmgr - add genkey kpp test  (Tudor-Dan Ambarus)  [1 file, -0/+47]
The test considers a party that already has a private-public key pair and a party that provides a NULL key. The kernel will generate the private-public key pair for the latter, computes the shared secret on both ends and verifies if it's the same. The explicit private-public key pair was copied from the previous test vector. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
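A rough sketch of the flow being tested, using the in-kernel kpp API (an illustration written for this log, with error paths, buffer sizing, and the second half of the exchange elided; it is not code from the patch):

    #include <crypto/kpp.h>
    #include <crypto/dh.h>
    #include <linux/err.h>
    #include <linux/scatterlist.h>

    /*
     * Party B supplies parameters but no private key (a zero-length key in
     * the packed 'struct dh'), so the kernel generates one; the public key
     * is then read out, and a second request with the peer's public key as
     * input would call crypto_kpp_compute_shared_secret().
     */
    static int dh_genkey_demo(const void *packed_params, unsigned int params_len,
                              u8 *public_key, unsigned int outlen)
    {
            struct crypto_kpp *tfm;
            struct kpp_request *req;
            DECLARE_CRYPTO_WAIT(wait);
            struct scatterlist dst;
            int err;

            tfm = crypto_alloc_kpp("dh", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            /* packed_params comes from crypto_dh_encode_key() with key_size == 0 */
            err = crypto_kpp_set_secret(tfm, packed_params, params_len);
            if (err)
                    goto out_free_tfm;

            req = kpp_request_alloc(tfm, GFP_KERNEL);
            if (!req) {
                    err = -ENOMEM;
                    goto out_free_tfm;
            }

            /* outlen should be at least crypto_kpp_maxsize(tfm) */
            sg_init_one(&dst, public_key, outlen);
            kpp_request_set_input(req, NULL, 0);
            kpp_request_set_output(req, &dst, outlen);
            kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
                                     crypto_req_done, &wait);
            err = crypto_wait_req(crypto_kpp_generate_public_key(req), &wait);

            kpp_request_free(req);
    out_free_tfm:
            crypto_free_kpp(tfm);
            return err;
    }

The test passes if the shared secret computed from the generated key pair matches the one computed by the party that already holds its key pair.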
2017-04-24  crypto: scomp - add support for deflate rfc1950 (zlib)  (Giovanni Cabiddu)  [1 file, -0/+75]
Add scomp backend for zlib-deflate compression algorithm. This backend outputs data using the format defined in rfc1950 (raw deflate surrounded by zlib header and footer). Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
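For reference (general RFC 1950 facts, not details from the patch): the zlib framing is a 2-byte header, the raw deflate stream, and a 4-byte big-endian Adler-32 checksum of the uncompressed data. The two header bytes, read as a big-endian 16-bit value, are always divisible by 31, and the low nibble of the first byte is 8 for deflate. A minimal sanity check might look like:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Return true if buf starts with a plausible zlib (RFC 1950) header. */
    static bool looks_like_zlib(const uint8_t *buf, size_t len)
    {
            if (len < 2)
                    return false;
            if ((buf[0] & 0x0f) != 8)       /* CM must be 8: deflate */
                    return false;
            return ((buf[0] << 8) | buf[1]) % 31 == 0;      /* FCHECK property */
    }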
2017-03-09  crypto: testmgr - constify all test vectors  (Eric Biggers)  [1 file, -256/+256]
Cryptographic test vectors should never be modified, so constify them to enforce this at both compile-time and run-time. This moves a significant amount of data from .data to .rodata when the crypto tests are enabled. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
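The change itself is mechanical; each template definition gains a const qualifier, roughly like this illustrative hunk:

    -static struct hash_testvec sha1_tv_template[] = {
    +static const struct hash_testvec sha1_tv_template[] = {

With the templates const, the tables land in .rodata, so an accidental write at runtime faults instead of silently corrupting a reference vector.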
2017-03-04  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)  [1 file, -1/+1]
Pull crypto fixes from Herbert Xu:

 - vmalloc stack regression in CCM
 - Build problem in CRC32 on ARM
 - Memory leak in cavium
 - Missing Kconfig dependencies in atmel and mediatek
 - XTS Regression on some platforms (s390 and ppc)
 - Memory overrun in CCM test vector

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: vmx - Use skcipher for xts fallback
  crypto: vmx - Use skcipher for cbc fallback
  crypto: testmgr - Pad aes_ccm_enc_tv_template vector
  crypto: arm/crc32 - add build time test for CRC instruction support
  crypto: arm/crc32 - fix build error with outdated binutils
  crypto: ccm - move cbcmac input off the stack
  crypto: xts - Propagate NEED_FALLBACK bit
  crypto: api - Add crypto_requires_off helper
  crypto: atmel - CRYPTO_DEV_MEDIATEK should depend on HAS_DMA
  crypto: atmel - CRYPTO_DEV_ATMEL_TDES and CRYPTO_DEV_ATMEL_SHA should depend on HAS_DMA
  crypto: cavium - fix leak on curr if curr->head fails to be allocated
  crypto: cavium - Fix couple of static checker errors
2017-03-01  crypto: testmgr - Pad aes_ccm_enc_tv_template vector  (Laura Abbott)  [1 file, -1/+1]
Running with KASAN and crypto tests currently gives

    BUG: KASAN: global-out-of-bounds in __test_aead+0x9d9/0x2200 at addr ffffffff8212fca0
    Read of size 16 by task cryptomgr_test/1107
    Address belongs to variable 0xffffffff8212fca0
    CPU: 0 PID: 1107 Comm: cryptomgr_test Not tainted 4.10.0+ #45
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.1-1.fc24 04/01/2014
    Call Trace:
     dump_stack+0x63/0x8a
     kasan_report.part.1+0x4a7/0x4e0
     ? __test_aead+0x9d9/0x2200
     ? crypto_ccm_init_crypt+0x218/0x3c0 [ccm]
     kasan_report+0x20/0x30
     check_memory_region+0x13c/0x1a0
     memcpy+0x23/0x50
     __test_aead+0x9d9/0x2200
     ? kasan_unpoison_shadow+0x35/0x50
     ? alg_test_akcipher+0xf0/0xf0
     ? crypto_skcipher_init_tfm+0x2e3/0x310
     ? crypto_spawn_tfm2+0x37/0x60
     ? crypto_ccm_init_tfm+0xa9/0xd0 [ccm]
     ? crypto_aead_init_tfm+0x7b/0x90
     ? crypto_alloc_tfm+0xc4/0x190
     test_aead+0x28/0xc0
     alg_test_aead+0x54/0xd0
     alg_test+0x1eb/0x3d0
     ? alg_find_test+0x90/0x90
     ? __sched_text_start+0x8/0x8
     ? __wake_up_common+0x70/0xb0
     cryptomgr_test+0x4d/0x60
     kthread+0x173/0x1c0
     ? crypto_acomp_scomp_free_ctx+0x60/0x60
     ? kthread_create_on_node+0xa0/0xa0
     ret_from_fork+0x2c/0x40
    Memory state around the buggy address:
     ffffffff8212fb80: 00 00 00 00 01 fa fa fa fa fa fa fa 00 00 00 00
     ffffffff8212fc00: 00 01 fa fa fa fa fa fa 00 00 00 00 01 fa fa fa
    >ffffffff8212fc80: fa fa fa fa 00 05 fa fa fa fa fa fa 00 00 00 00
                       ^
     ffffffff8212fd00: 01 fa fa fa fa fa fa fa 00 00 00 00 01 fa fa fa
     ffffffff8212fd80: fa fa fa fa 00 00 00 00 00 05 fa fa fa fa fa fa

This always happens on the same IV which is less than 16 bytes. Per Ard, "CCM IVs are 16 bytes, but due to the way they are constructed internally, the final couple of bytes of input IV are dont-cares. Apparently, we do read all 16 bytes, which triggers the KASAN errors."

Fix this by padding the IV with null bytes to be at least 16 bytes.

Cc: stable@vger.kernel.org
Fixes: 0bc5a6c5c79a ("crypto: testmgr - Disable rfc4309 test and convert test vectors")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-02-25  crypto: change LZ4 modules to work with new LZ4 module version  (Sven Schmidt)  [1 file, -40/+102]
Update the crypto modules using LZ4 compression as well as the test cases in testmgr.h to work with the new LZ4 module version. Link: http://lkml.kernel.org/r/1486321748-19085-4-git-send-email-4sschmid@informatik.uni-hamburg.de Signed-off-by: Sven Schmidt <4sschmid@informatik.uni-hamburg.de> Cc: Bongkyu Kim <bongkyu.kim@lge.com> Cc: Rui Salvaterra <rsalvaterra@gmail.com> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: David S. Miller <davem@davemloft.net> Cc: Anton Vorontsov <anton@enomsg.org> Cc: Colin Cross <ccross@android.com> Cc: Kees Cook <keescook@chromium.org> Cc: Tony Luck <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-02-11  crypto: testmgr - add test cases for cbcmac(aes)  (Ard Biesheuvel)  [1 file, -0/+60]
In preparation of splitting off the CBC-MAC transform in the CCM driver into a separate algorithm, define some test cases for the AES incarnation of cbcmac. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-01-13  crypto: testmgr - use calculated count for number of test vectors  (Ard Biesheuvel)  [1 file, -270/+2]
When working on AES in CCM mode for ARM, my code passed the internal tcrypt test before I had even bothered to implement the AES-192 and AES-256 code paths, which is strange because tcrypt does contain AES-192 and AES-256 test vectors for CCM. As it turned out, the define AES_CCM_ENC_TEST_VECTORS was out of sync with the actual number of test vectors, causing only the AES-128 ones to be executed.

So get rid of the defines, and wrap the test vector references in a macro that calculates the number of vectors automatically.

The following test vector counts were out of sync with the respective defines:

    BF_CTR_ENC_TEST_VECTORS          2 -> 3
    BF_CTR_DEC_TEST_VECTORS          2 -> 3
    TF_CTR_ENC_TEST_VECTORS          2 -> 3
    TF_CTR_DEC_TEST_VECTORS          2 -> 3
    SERPENT_CTR_ENC_TEST_VECTORS     2 -> 3
    SERPENT_CTR_DEC_TEST_VECTORS     2 -> 3
    AES_CCM_ENC_TEST_VECTORS         8 -> 14
    AES_CCM_DEC_TEST_VECTORS         7 -> 17
    AES_CCM_4309_ENC_TEST_VECTORS    7 -> 23
    AES_CCM_4309_DEC_TEST_VECTORS   10 -> 23
    CAMELLIA_CTR_ENC_TEST_VECTORS    2 -> 3
    CAMELLIA_CTR_DEC_TEST_VECTORS    2 -> 3

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
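The resulting pattern in testmgr.c looks roughly like the following sketch (illustrative, not necessarily the exact macro spelling used by the patch):

    /* Derive the vector count from the array itself. */
    #define __VECS(tv)      { .vecs = tv, .count = ARRAY_SIZE(tv) }

    /* Before: the count is a hand-maintained #define that can go stale. */
    .enc = {
            .vecs  = aes_ccm_enc_tv_template,
            .count = AES_CCM_ENC_TEST_VECTORS,
    }

    /* After: the count can never disagree with the array. */
    .enc = __VECS(aes_ccm_enc_tv_template)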
2016-12-07  crypto: testmgr - add/enhance test cases for CRC-T10DIF  (Ard Biesheuvel)  [1 file, -28/+42]
The existing test cases only exercise a small slice of the various possible code paths through the x86 SSE/PCLMULQDQ implementation, and the upcoming ports of it for arm64. So add one that exceeds 256 bytes in size, and convert another to a chunked test. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-08-31  crypto: FIPS - allow tests to be disabled in FIPS mode  (Stephan Mueller)  [1 file, -0/+4]
In FIPS mode, additional restrictions may apply. If these restrictions are violated, the kernel will panic(). This patch allows test vectors for symmetric ciphers to be marked as to be skipped in FIPS mode. Together with the patch, the XTS test vectors where the AES key is identical to the tweak key are disabled in FIPS mode. These test vectors violate the FIPS requirement that both keys must be different.
Reported-by: Tapas Sarangi <TSarangi@trustwave.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
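In testmgr.h this shows up as a per-vector flag, roughly like the following sketch (field name assumed from the description above, key bytes elided):

    {       /* XTS vector whose AES key equals the tweak key */
            .key        = "\x00...",        /* remaining fields elided */
            .klen       = 32,
            .fips_skip  = 1,        /* do not run this vector when fips_enabled is set */
    },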
2016-07-05  crypto: testmgr - Add 4K private key to RSA testvector  (Salvatore Benedetto)  [1 file, -1/+199]
The key was generated with OpenSSL. It also contains all fields required for testing CRT mode.
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: sha3 - Add HMAC-SHA3 test modes and test vectors  (raveendra padasalagi)  [1 file, -0/+388]
This patch adds HMAC-SHA3 test modes in tcrypt module and related test vectors. Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23  crypto: ecdh - Add ECDH software support  (Salvatore Benedetto)  [1 file, -0/+93]
* Implement ECDH under the kpp API
* Provide ECC software support for curves P-192 and P-256
* Add a kpp test for ECDH with data generated by OpenSSL
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-06-23  crypto: dh - Add DH software implementation  (Salvatore Benedetto)  [1 file, -0/+230]
* Implement MPI-based Diffie-Hellman under the kpp API
* The test provided uses data generated by OpenSSL
Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>