path: root/crypto/ahash.c
Age | Commit message | Author | Files/Lines
2023-02-13 | crypto: api - Use data directly in completion function | Herbert Xu | 1 file, -6/+6
This patch does the final flag day conversion of all completion functions which are now all contained in the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
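For illustration, a minimal sketch of what a completion callback looks like after this conversion; the struct and function names (my_ctx, my_hash_done) are placeholders, not part of the commit. The callback now receives the caller's opaque data pointer directly instead of a struct crypto_async_request.

    #include <linux/completion.h>

    /* Placeholder context; not from the commit. */
    struct my_ctx {
        struct completion done;
        int err;
    };

    /* New-style completion callback: 'data' is the pointer the caller
     * registered, no container_of()/req->data dance is needed anymore. */
    static void my_hash_done(void *data, int err)
    {
        struct my_ctx *ctx = data;

        if (err == -EINPROGRESS)
            return;    /* backlog notification, the request is still running */

        ctx->err = err;
        complete(&ctx->done);
    }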
2023-02-13 | crypto: hash - Use crypto_request_complete | Herbert Xu | 1 file, -105/+74
Use the crypto_request_complete helper instead of calling the completion function directly. This patch also removes the voodoo programming previously used for unaligned ahash operations and replaces it with a sub-request. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
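As a hedged sketch of the helper's use (the surrounding wrapper function is invented for illustration): instead of dereferencing the completion pointer by hand, code now calls crypto_request_complete() on the embedded crypto_async_request.

    #include <crypto/algapi.h>
    #include <crypto/hash.h>

    /* Illustrative only: finish an ahash request via the helper rather
     * than calling req->base.complete(req->base.data, err) directly. */
    static void example_hash_finish(struct ahash_request *req, int err)
    {
        crypto_request_complete(&req->base, err);
    }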
2022-12-30 | crypto: scatterwalk - use kmap_local() not kmap_atomic() | Ard Biesheuvel | 1 file, -2/+2
kmap_atomic() is used to create short-lived mappings of pages that may not be accessible via the kernel direct map. This is only needed on 32-bit architectures that implement CONFIG_HIGHMEM, but it can be used on 64-bit architectures too, where the returned mapping is simply the kernel direct address of the page. However, kmap_atomic() does not support migration on CONFIG_HIGHMEM configurations, due to the use of per-CPU kmap slots, and so it disables preemption on all architectures, not just the 32-bit ones. This implies that all scatterwalk based crypto routines essentially execute with preemption disabled all the time, which is less than ideal. So let's switch scatterwalk_map/_unmap and the shash/ahash routines to kmap_local() instead, which serves a similar purpose, but without the resulting impact on preemption on architectures that have no need for CONFIG_HIGHMEM. Cc: Eric Biggers <ebiggers@kernel.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "Elliott, Robert (Servers)" <elliott@hpe.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
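A minimal sketch of the mapping pattern the commit moves to, assuming a driver-side helper of this shape (hash_one_page() is illustrative, not a real function):

    #include <linux/highmem.h>

    /* Illustrative helper: hash 'len' bytes starting at 'offset' in 'page'.
     * kmap_local_page() keeps preemption enabled where HIGHMEM is not
     * needed, unlike the kmap_atomic() it replaces. */
    static void hash_one_page(struct page *page, unsigned int offset,
                              unsigned int len)
    {
        void *vaddr = kmap_local_page(page);

        /* ... feed vaddr + offset, len bytes, into the hash ... */

        kunmap_local(vaddr);
    }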
2020-08-28 | crypto: ahash - Add init_tfm/exit_tfm | Herbert Xu | 1 file, -1/+12
This patch adds the type-safe init_tfm/exit_tfm functions to the ahash interface. This is meant to replace the unsafe cra_init and cra_exit interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
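A hedged sketch of the new hooks in use; the algorithm, context and function names are illustrative, not an existing driver:

    #include <crypto/internal/hash.h>
    #include <linux/slab.h>

    struct example_ahash_ctx {
        void *hw_state;    /* per-tfm driver state, illustrative */
    };

    /* Type-safe replacement for cra_init: receives a crypto_ahash directly. */
    static int example_init_tfm(struct crypto_ahash *tfm)
    {
        struct example_ahash_ctx *ctx = crypto_ahash_ctx(tfm);

        ctx->hw_state = NULL;    /* allocate/attach hardware state here */
        return 0;
    }

    /* Type-safe replacement for cra_exit. */
    static void example_exit_tfm(struct crypto_ahash *tfm)
    {
        struct example_ahash_ctx *ctx = crypto_ahash_ctx(tfm);

        kfree(ctx->hw_state);
    }

    static struct ahash_alg example_tfm_alg = {
        .init_tfm = example_init_tfm,
        .exit_tfm = example_exit_tfm,
        /* .init, .update, .final, .digest, .halg, ... */
    };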
2020-08-21 | crypto: hash - Remove unused async iterators | Ira Weiny | 1 file, -37/+4
Revert "crypto: hash - Add real ahash walk interface" This reverts commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4. The callers of the functions in this commit were removed in ab8085c130ed Remove these unused calls. Fixes: ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd") Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-08-07 | mm, treewide: rename kzfree() to kfree_sensitive() | Waiman Long | 1 file, -2/+2
As said by Linus: A symmetric naming is only helpful if it implies symmetries in use. Otherwise it's actively misleading. In "kzalloc()", the z is meaningful and an important part of what the caller wants. In "kzfree()", the z is actively detrimental, because maybe in the future we really _might_ want to use that "memfill(0xdeadbeef)" or something. The "zero" part of the interface isn't even _relevant_. The main reason that kzfree() exists is to clear sensitive information that should not be leaked to other future users of the same memory objects. Rename kzfree() to kfree_sensitive() to follow the example of the recently added kvfree_sensitive() and make the intention of the API more explicit. In addition, memzero_explicit() is used to clear the memory to make sure that it won't get optimized away by the compiler. The renaming is done by using the command sequence: git grep -w --name-only kzfree |\ xargs sed -i 's/kzfree/kfree_sensitive/' followed by some editing of the kfree_sensitive() kerneldoc and adding a kzfree backward compatibility macro in slab.h. [akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h] [akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more] Suggested-by: Joe Perches <joe@perches.com> Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: David Howells <dhowells@redhat.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Cc: James Morris <jmorris@namei.org> Cc: "Serge E. Hallyn" <serge@hallyn.com> Cc: Joe Perches <joe@perches.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: David Rientjes <rientjes@google.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Cc: "Jason A . Donenfeld" <Jason@zx2c4.com> Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
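For illustration, a small sketch of the renamed API in a caller (use_secret() and its arguments are placeholders): kfree_sensitive() zeroes the object via memzero_explicit() before freeing it, so key material cannot linger in freed memory.

    #include <linux/slab.h>
    #include <linux/string.h>

    static int use_secret(const u8 *key, unsigned int keylen)
    {
        u8 *copy = kmemdup(key, keylen, GFP_KERNEL);

        if (!copy)
            return -ENOMEM;

        /* ... use the copied key material ... */

        kfree_sensitive(copy);    /* formerly kzfree(); zeroes, then frees */
        return 0;
    }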
2020-01-09 | crypto: algapi - enforce that all instances have a ->free() method | Eric Biggers | 1 file, -0/+3
All instances need to have a ->free() method, but people could forget to set it and then not notice if the instance is never unregistered. To help detect this bug earlier, don't allow an instance without a ->free() method to be registered, and complain loudly if someone tries to do it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: algapi - remove crypto_template::{alloc,free}() | Eric Biggers | 1 file, -5/+0
Now that all templates provide a ->create() method which creates an instance, installs a strongly-typed ->free() method directly to it, and registers it, the older ->alloc() and ->free() methods in 'struct crypto_template' are no longer used. Remove them. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: hash - add support for new way of freeing instances | Eric Biggers | 1 file, -0/+13
Add support to shash and ahash for the new way of freeing instances (already used for skcipher, aead, and akcipher) where a ->free() method is installed to the instance struct itself. These methods are more strongly-typed than crypto_template::free(), which they replace. This will allow removing support for the old way of freeing instances. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: ahash - unexport crypto_ahash_type | Eric Biggers | 1 file, -2/+3
Now that all the templates that need ahash spawns have been converted to use crypto_grab_ahash() rather than look up the algorithm directly, crypto_ahash_type is no longer used outside of ahash.c. Make it static. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: algapi - remove obsoleted instance creation helpers | Eric Biggers | 1 file, -25/+0
Remove lots of helper functions that were previously used for instantiating crypto templates, but are now unused: - crypto_get_attr_alg() and similar functions looked up an inner algorithm directly from a template parameter. These were replaced with getting the algorithm's name, then calling crypto_grab_*(). - crypto_init_spawn2() and similar functions initialized a spawn, given an algorithm. Similarly, these were replaced with crypto_grab_*(). - crypto_alloc_instance() and similar functions allocated an instance with a single spawn, given the inner algorithm. These aren't useful anymore since crypto_grab_*() need the instance allocated first. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 | crypto: ahash - introduce crypto_grab_ahash() | Eric Biggers | 1 file, -0/+9
Currently, ahash spawns are initialized by using ahash_attr_alg() or crypto_find_alg() to look up the ahash algorithm, then calling crypto_init_ahash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity. The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now. So, let's introduce crypto_grab_ahash() so that we can convert all templates to the same way of initializing their spawns. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
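A hedged sketch, under the assumption of a single-spawn template, of how a ->create() method initializes its ahash spawn with the new helper; error handling and the rest of the instance setup are omitted and the names are illustrative:

    #include <crypto/internal/hash.h>
    #include <linux/err.h>
    #include <linux/slab.h>

    static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
    {
        struct crypto_ahash_spawn *spawn;
        struct ahash_instance *inst;
        const char *name;
        int err;

        name = crypto_attr_alg_name(tb[1]);
        if (IS_ERR(name))
            return PTR_ERR(name);

        inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
        if (!inst)
            return -ENOMEM;
        spawn = ahash_instance_ctx(inst);

        /* Looks up the ahash by name and ties its lifetime to the instance. */
        err = crypto_grab_ahash(spawn, ahash_crypto_instance(inst),
                                name, 0, 0);
        if (err)
            kfree(inst);
        /* ... fill in inst->alg, set inst->free, register the instance ... */
        return err;
    }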
2019-12-20 | crypto: algapi - make unregistration functions return void | Eric Biggers | 1 file, -2/+2
Some of the algorithm unregistration functions return -ENOENT when asked to unregister a non-registered algorithm, while others always return 0 or always return void. But no users check the return value, except for two of the bulk unregistration functions which print a message on error but still always return 0 to their caller, and crypto_del_alg() which calls crypto_unregister_instance() which always returns 0. Since unregistering a non-registered algorithm is always a kernel bug but there isn't anything callers should do to handle this situation at runtime, let's simplify things by making all the unregistration functions return void, and moving the error message into crypto_unregister_alg() and upgrading it to a WARN(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 | Thomas Gleixner | 1 file, -6/+1
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-08 | crypto: ahash - fix another early termination in hash walk | Eric Biggers | 1 file, -7/+7
Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and "michael_mic", fail the improved hash tests because they sometimes produce the wrong digest. The bug is that in the case where a scatterlist element crosses pages, not all the data is actually hashed because the scatterlist walk terminates too early. This happens because the 'nbytes' variable in crypto_hash_walk_done() is assigned the number of bytes remaining in the page, then later interpreted as the number of bytes remaining in the scatterlist element. Fix it. Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-18 | crypto: hash - set CRYPTO_TFM_NEED_KEY if ->setkey() fails | Eric Biggers | 1 file, -9/+19
Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key. It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a key, to prevent the tfm from being used until a new key is set. Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so ->setkey() for those must nevertheless be atomic. That's fine for now since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's not intended that OPTIONAL_KEY be used much. [Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.] Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-12-07 | crypto: user - fix use_after_free of struct xxx_request | Corentin Labbe | 1 file, -3/+14
All crypto_stats functions use the struct xxx_request for feeding stats, but in some cases this structure could already have been freed. To fix this, the needed parameters (len and alg) are now stored before the request is executed. Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics") Reported-by: syzbot <syzbot+6939a606a5305e9e9799@syzkaller.appspotmail.com> Signed-off-by: Corentin Labbe <clabbe@baylibre.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 | crypto: user - clean up report structure copying | Eric Biggers | 1 file, -8/+4
There have been a pretty ridiculous number of issues with initializing the report structures that are copied to userspace by NETLINK_CRYPTO. Commit 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME expansion") replaced some strncpy()s with strlcpy()s, thereby introducing information leaks. Later two other people tried to replace other strncpy()s with strlcpy() too, which would have introduced even more information leaks: - https://lore.kernel.org/patchwork/patch/954991/ - https://patchwork.kernel.org/patch/10434351/ Commit cac5818c25d0 ("crypto: user - Implement a generic crypto statistics") also uses the buggy strlcpy() approach and therefore leaks uninitialized memory to userspace. A fix was proposed, but it was originally incomplete. Seeing as how apparently no one can get this right with the current approach, change all the reporting functions to: - Start by memsetting the report structure to 0. This guarantees it's always initialized, regardless of what happens later. - Initialize all strings using strscpy(). This is safe after the memset, ensures null termination of long strings, avoids unnecessary work, and avoids the -Wstringop-truncation warnings from gcc. - Use sizeof(var) instead of sizeof(type). This is more robust against copy+paste errors. For simplicity, also reuse the -EMSGSIZE return value from nla_put(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
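A hedged sketch of the resulting pattern, modeled on the ahash report path but simplified; the function name is illustrative:

    #include <linux/cryptouser.h>
    #include <linux/string.h>
    #include <net/netlink.h>
    #include <crypto/hash.h>

    static int example_hash_report(struct sk_buff *skb, struct crypto_alg *alg)
    {
        struct crypto_report_hash rhash;

        memset(&rhash, 0, sizeof(rhash));    /* nothing uninitialized can leak */

        strscpy(rhash.type, "ahash", sizeof(rhash.type));
        rhash.blocksize = alg->cra_blocksize;
        rhash.digestsize = __crypto_hash_alg_common(alg)->digestsize;

        /* -EMSGSIZE from nla_put() is passed back to the caller as-is. */
        return nla_put(skb, CRYPTOCFGA_REPORT_HASH, sizeof(rhash), &rhash);
    }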
2018-09-28 | crypto: user - Implement a generic crypto statistics | Corentin Labbe | 1 file, -5/+16
This patch implements a generic way to get statistics about all crypto usage. Signed-off-by: Corentin Labbe <clabbe@baylibre.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 | crypto: hash - Remove VLA usage | Kees Cook | 1 file, -2/+2
In the quest to remove all stack VLA usage from the kernel[1], this removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize()) by using the maximum allowable size (which is now more clearly captured in a macro), along with a few other cases. Similar limits are turned into macros as well. A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the largest digest size and that sizeof(struct sha3_state) (360) is the largest descriptor size. The corresponding maximums are reduced. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
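For illustration, a minimal user of the now fixed-size on-stack descriptor (demo_shash_digest() is a placeholder): SHASH_DESC_ON_STACK no longer expands to a VLA keyed off crypto_shash_descsize().

    #include <crypto/hash.h>

    static int demo_shash_digest(struct crypto_shash *tfm,
                                 const u8 *data, unsigned int len, u8 *out)
    {
        SHASH_DESC_ON_STACK(desc, tfm);    /* bounded, no stack VLA */
        int err;

        desc->tfm = tfm;
        err = crypto_shash_digest(desc, data, len, out);
        shash_desc_zero(desc);
        return err;
    }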
2018-03-30 | crypto: ahash - Fix early termination in hash walk | Herbert Xu | 1 file, -3/+4
When we have an unaligned SG list entry where there is no leftover aligned data, the hash walk code will incorrectly return zero as if the entire SG list has been processed. This patch fixes it by moving onto the next page instead. Reported-by: Eli Cooper <elicooper@gmx.com> Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-15 | crypto: hash - Require export/import in ahash | Kamil Konieczny | 1 file, -16/+2
Export and import are mandatory in async hash. As the drivers have been rewritten, drop the empty wrappers and correct the init of the ahash transformation. Signed-off-by: Kamil Konieczny <k.konieczny@partner.samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12 | crypto: hash - prevent using keyed hashes without setting key | Eric Biggers | 1 file, -5/+17
Currently, almost none of the keyed hash algorithms check whether a key has been set before proceeding. Some algorithms are okay with this and will effectively just use a key of all 0's or some other bogus default. However, others will severely break, as demonstrated using "hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash via a (potentially exploitable) stack buffer overflow. A while ago, this problem was solved for AF_ALG by pairing each hash transform with a 'has_key' bool. However, there are still other places in the kernel where userspace can specify an arbitrary hash algorithm by name, and the kernel uses it as unkeyed hash without checking whether it is really unkeyed. Examples of this include: - KEYCTL_DH_COMPUTE, via the KDF extension - dm-verity - dm-crypt, via the ESSIV support - dm-integrity, via the "internal hash" mode with no key given - drbd (Distributed Replicated Block Device) This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no privileges to call. Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the ->crt_flags of each hash transform that indicates whether the transform still needs to be keyed or not. Then, make the hash init, import, and digest functions return -ENOKEY if the key is still needed. The new flag also replaces the 'has_key' bool which algif_hash was previously using, thereby simplifying the algif_hash implementation. Reported-by: syzbot <syzkaller@googlegroups.com> Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
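A hedged sketch of the behaviour the flag enforces for an in-kernel user (the algorithm choice and key are placeholders): until crypto_ahash_setkey() succeeds, init/import/digest on the tfm now fail with -ENOKEY.

    #include <crypto/hash.h>
    #include <linux/err.h>

    static int demo_keyed_hash_setup(void)
    {
        static const u8 key[32] = { 0x01, 0x02 };    /* placeholder key */
        struct crypto_ahash *tfm;
        int err;

        tfm = crypto_alloc_ahash("hmac(sha256)", 0, 0);
        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        /* CRYPTO_TFM_NEED_KEY is set here; digesting now would give -ENOKEY. */
        err = crypto_ahash_setkey(tfm, key, sizeof(key));

        /* ... on success, build an ahash_request and digest as usual ... */

        crypto_free_ahash(tfm);
        return err;
    }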
2018-01-12 | crypto: hash - introduce crypto_hash_alg_has_setkey() | Eric Biggers | 1 file, -0/+11
Templates that use an shash spawn can use crypto_shash_alg_has_setkey() to determine whether the underlying algorithm requires a key or not. But there was no corresponding function for ahash spawns. Add it. Note that the new function actually has to support both shash and ahash algorithms, since the ahash API can be used with either. Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-03 | crypto: remove redundant backlog checks on EBUSY | Gilad Ben-Yossef | 1 file, -9/+3
Now that -EBUSY return code only indicates backlog queueing we can safely remove the now redundant check for the CRYPTO_TFM_REQ_MAY_BACKLOG flag when -EBUSY is returned. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
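A hedged sketch of the calling convention this change relies on: with CRYPTO_TFM_REQ_MAY_BACKLOG set, -EBUSY only means the request was queued on the backlog, and synchronous callers can let the generic crypto_wait_req() helper absorb both -EBUSY and -EINPROGRESS. The wrapper function is illustrative.

    #include <crypto/hash.h>
    #include <linux/crypto.h>

    static int demo_digest_and_wait(struct ahash_request *req)
    {
        DECLARE_CRYPTO_WAIT(wait);

        ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                   crypto_req_done, &wait);

        /* Returns only once the request has actually completed. */
        return crypto_wait_req(crypto_ahash_digest(req), &wait);
    }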
2017-08-22 | crypto: hash - add crypto_(un)register_ahashes() | Rabin Vincent | 1 file, -0/+29
There are already helpers to (un)register multiple normal and AEAD algos. Add one for ahashes too. Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Rabin Vincent <rabinv@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
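For illustration, the bulk helpers in a driver's module init/exit (the array and function names are placeholders):

    #include <crypto/internal/hash.h>
    #include <linux/module.h>

    static struct ahash_alg example_algs[2];    /* filled in elsewhere */

    static int __init example_hash_mod_init(void)
    {
        return crypto_register_ahashes(example_algs, ARRAY_SIZE(example_algs));
    }

    static void __exit example_hash_mod_exit(void)
    {
        crypto_unregister_ahashes(example_algs, ARRAY_SIZE(example_algs));
    }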
2017-04-10 | crypto: ahash - Fix EINPROGRESS notification callback | Herbert Xu | 1 file, -29/+50
The ahash API modifies the request's callback function in order to clean up after itself in some corner cases (unaligned final and missing finup). When the request is complete ahash will restore the original callback and everything is fine. However, when the request gets an EBUSY on a full queue, an EINPROGRESS callback is made while the request is still ongoing. In this case the ahash API will incorrectly call its own callback. This patch fixes the problem by creating a temporary request object on the stack which is used to relay EINPROGRESS back to the original completion function. This patch also adds code to preserve the original flags value. Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...") Cc: <stable@vger.kernel.org> Reported-by: Sabrina Dubroca <sd@queasysnail.net> Tested-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-01-12 | crypto: Replaced gcc specific attributes with macros from compiler.h | Gideon Israel Dsouza | 1 file, -1/+2
Continuing from this commit: 52f5684c8e1e ("kernel: use macros from compiler.h instead of __attribute__((...))") I submitted 4 total patches. They are part of task I've taken up to increase compiler portability in the kernel. I've cleaned up the subsystems under /kernel /mm /block and /security, this patch targets /crypto. There is <linux/compiler.h> which provides macros for various gcc specific constructs. Eg: __weak for __attribute__((weak)). I've cleaned all instances of gcc specific attributes with the right macros for the crypto subsystem. I had to make one additional change into compiler-gcc.h for the case when one wants to use this: __attribute__((aligned) and not specify an alignment factor. From the gcc docs, this will result in the largest alignment for that data type on the target machine so I've named the macro __aligned_largest. Please advise if another name is more appropriate. Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01 | crypto: ahash - Add padding in crypto_ahash_extsize | Herbert Xu | 1 file, -3/+3
The function crypto_ahash_extsize did not include padding when computing the tfm context size. This patch fixes this by using the generic crypto_alg_extsize helper. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-05-05 | crypto: hash - Fix page length clamping in hash walk | Herbert Xu | 1 file, -1/+2
The crypto hash walk code is broken when supplied with an offset greater than or equal to PAGE_SIZE. This patch fixes it by adjusting walk->pg and walk->offset when this happens. Cc: <stable@vger.kernel.org> Reported-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-02-06 | crypto: hash - Remove crypto_hash interface | Herbert Xu | 1 file, -18/+0
This patch removes all traces of the crypto_hash interface, now that everyone has switched over to shash or ahash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-01-25 | crypto: hash - Add crypto_has_ahash helper | Herbert Xu | 1 file, -0/+6
This patch adds the helper crypto_has_ahash which should replace crypto_has_hash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-01-18 | crypto: hash - Add crypto_ahash_has_setkey | Herbert Xu | 1 file, -1/+4
This patch adds a way for ahash users to determine whether a key is required by a crypto_ahash transform. Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-13 | crypto: ahash - ensure statesize is non-zero | Russell King | 1 file, -1/+2
Unlike shash algorithms, ahash drivers must implement export and import as their descriptors may contain hardware state and cannot be exported as is. Unfortunately some ahash drivers did not provide them and end up causing crashes with algif_hash. This patch adds a check to prevent these drivers from registering ahash algorithms until they are fixed. Cc: stable@vger.kernel.org Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
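A hedged skeleton of what the check requires from a driver: export/import callbacks plus a non-zero statesize, so algif_hash (and anything else that snapshots a partial hash) can work. All names are illustrative.

    #include <crypto/internal/hash.h>

    struct example_hash_export_state {
        u64 byte_count;
        u8 partial[64];
    };

    static int example_export(struct ahash_request *req, void *out)
    {
        /* serialize the driver's in-flight state into 'out' */
        return 0;
    }

    static int example_import(struct ahash_request *req, const void *in)
    {
        /* rebuild the driver's in-flight state from 'in' */
        return 0;
    }

    static struct ahash_alg example_drv_alg = {
        .export = example_export,
        .import = example_import,
        .halg.statesize = sizeof(struct example_hash_export_state), /* > 0 */
        /* .init, .update, .final, .digest, .halg.digestsize, ... */
    };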
2015-01-26 | crypto: replace scatterwalk_sg_next with sg_next | Cristian Stoica | 1 file, -1/+1
Modify crypto drivers to use the generic SG helper since both of them are equivalent and the one from crypto is redundant. See also: 468577abe37ff7b453a9ac613e0ea155349203ae reverted in b2ab4a57b018aafbba35bff088218f5cc3d2142e Signed-off-by: Cristian Stoica <cristian.stoica@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-12-22 | crypto: ahash - fixed style error in ahash.c | Joshua I. James | 1 file, -0/+1
Fixed style error identified by checkpatch. WARNING: Missing a blank line after declarations + unsigned int unaligned = alignmask + 1 - (offset & alignmask); + if (nbytes > unaligned) Signed-off-by: Joshua I. James <joshua@cybercrimetech.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-08-25 | crypto: hash - initialize entry len for null input in crypto hash sg list walk | Tim Chen | 1 file, -3/+9
For the special case when we have a null input string, we want to initialize the entry len to 0 for the hash/ahash walk, so crypto_hash_walk_last will return the correct result indicating that we have completed the scatter list walk. Otherwise we may keep walking the sg list and access a bogus memory address. Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-05-21 | crypto: hash - Add real ahash walk interface | Herbert Xu | 1 file, -5/+36
Although the existing hash walk interface has already been used by a number of ahash crypto drivers, it turns out that none of them were really asynchronous. They were all essentially polling for completion. That's why nobody has noticed until now that the walk interface couldn't work with a real asynchronous driver since the memory is mapped using kmap_atomic. As we now have a use-case for a real ahash implementation on x86, this patch creates a minimal ahash walk interface. Basically it just calls kmap instead of kmap_atomic and does away with the crypto_yield call. Real ahash crypto drivers don't need to yield since by definition they won't be hogging the CPU. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-21 | crypto: hash - Simplify the ahash_finup implementation | Marek Vasut | 1 file, -27/+9
The ahash_def_finup() can make use of the request save/restore functions, thus make it so. This simplifies the code a little and unifies the code paths. Note that the same remark about free()ing the req->priv applies here: the req->priv can only be free()'d after the original request was restored. Finally, squash a bug in the invocation of completion in the ASYNC path. In both ahash_def_finup_done{1,2}, the function areq->base.complete(X, err); was called with X=areq->base.data. This is incorrect, as X=&areq->base is the correct value. By analysis of the data structures, we see that areq is of type 'struct ahash_request', areq->base is of type 'struct crypto_async_request' and areq->base.complete is of type crypto_completion_t, which is defined in include/linux/crypto.h as: typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err); This is one lead that X should be &areq->base. Next up, we can inspect other code which calls the completion callback to get a kind-of statistical idea of how this callback is used. We can try: $ git grep base\.complete\( drivers/crypto/ Finally, by inspecting the ahash_request_set_callback() implementation defined in include/crypto/hash.h, we observe that the .data entry of 'struct crypto_async_request' is intended for arbitrary data, not for the completion argument. Signed-off-by: Marek Vasut <marex@denx.de> Cc: David S. Miller <davem@davemloft.net> Cc: Fabio Estevam <fabio.estevam@freescale.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-21 | crypto: hash - Pull out the functions to save/restore request | Marek Vasut | 1 file, -45/+62
The function to save the original request within a newly adjusted request and its counterpart to restore the original request can be re-used by more code in the crypto/ahash.c file. Pull these functions out from the code so they're available. Signed-off-by: Marek Vasut <marex@denx.de> Cc: David S. Miller <davem@davemloft.net> Cc: Fabio Estevam <fabio.estevam@freescale.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-21 | crypto: hash - Fix the pointer voodoo in unaligned ahash | Marek Vasut | 1 file, -7/+49
Add documentation for the pointer voodoo that is happening in crypto/ahash.c in ahash_op_unaligned(). This code is quite confusing, so add a beefy chunk of documentation. Moreover, make sure the mangled request is completely restored after finishing this unaligned operation. This means restoring all of .result, .base.data and .base.complete . Also, remove the crypto_completion_t complete = ... line present in the ahash_op_unaligned_done() function. This type actually declares a function pointer, which is very confusing. Finally, yet very important nonetheless, make sure the req->priv is free()'d only after the original request is restored in ahash_op_unaligned_done(). The req->priv data must not be free()'d before that in ahash_op_unaligned_finish(), since we would be accessing previously free()'d data in ahash_op_unaligned_done() and cause corruption. Signed-off-by: Marek Vasut <marex@denx.de> Cc: David S. Miller <davem@davemloft.net> Cc: Fabio Estevam <fabio.estevam@freescale.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-01-05 | crypto: ahash - Fully restore ahash request before completing | Marek Vasut | 1 file, -1/+4
When finishing the ahash request, the ahash_op_unaligned_done() will call complete() on the request. Yet, this will not call the correct complete callback. The correct complete callback was previously stored in the requests' private data, as seen in ahash_op_unaligned(). This patch restores the correct complete callback and .data field of the request before calling complete() on it. Signed-off-by: Marek Vasut <marex@denx.de> Cc: David S. Miller <davem@davemloft.net> Cc: Fabio Estevam <fabio.estevam@freescale.com> Cc: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-02-19 | crypto: user - fix info leaks in report API | Mathias Krause | 1 file, -1/+1
Three errors resulting in kernel memory disclosure: 1/ The structures used for the netlink based crypto algorithm report API are located on the stack. As snprintf() does not fill the remainder of the buffer with null bytes, those stack bytes will be disclosed to users of the API. Switch to strncpy() to fix this. 2/ crypto_report_one() does not initialize all field of struct crypto_user_alg. Fix this to fix the heap info leak. 3/ For the module name we should copy only as many bytes as module_name() returns -- not as much as the destination buffer could hold. But the current code does not and therefore copies random data from behind the end of the module name, as the module name is always shorter than CRYPTO_MAX_ALG_NAME. Also switch to use strncpy() to copy the algorithm's name and driver_name. They are strings, after all. Signed-off-by: Mathias Krause <minipli@googlemail.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-04-02 | crypto: Stop using NLA_PUT*(). | David S. Miller | 1 file, -3/+3
These macros contain a hidden goto, and are thus extremely error prone and make code hard to audit. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-03-20 | crypto: remove the second argument of k[un]map_atomic() | Cong Wang | 1 file, -2/+2
Signed-off-by: Cong Wang <amwang@redhat.com>
2011-11-11 | crypto: algapi - Fix build problem with NET disabled | Herbert Xu | 1 file, -0/+7
The report functions use NLA_PUT so we need to ensure that NET is enabled. Reported-by: Luis Henriques <henrix@camandro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-10-21 | crypto: Add userspace report for ahash type algorithms | Steffen Klassert | 1 file, -0/+21
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2010-08-06 | crypto: hash - Fix handling of small unaligned buffers | Szilveszter Ördög | 1 file, -2/+5
If a scatterwalk chain contains an entry with an unaligned offset then hash_walk_next() will cut off the next step at the next alignment point. However, if the entry ends before the next alignment point then we end up in a loop, which leads to a kernel oops. Fix this by checking whether the next alignment point is before the end of the current entry. Signed-off-by: Szilveszter Ördög <slipszi@gmail.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2010-03-03 | crypto: hash - Fix handling of unaligned buffers | Szilveszter Ördög | 1 file, -1/+0
The correct way to calculate the start of the aligned part of an unaligned buffer is: offset = ALIGN(offset, alignmask + 1); However, crypto_hash_walk_done() has: offset += alignmask - 1; offset = ALIGN(offset, alignmask + 1); which actually skips a whole block unless offset % (alignmask + 1) == 1. This patch fixes the problem. Signed-off-by: Szilveszter Ördög <slipszi@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
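A worked example with illustrative numbers (alignmask = 3, i.e. 4-byte alignment) showing why the extra addition skipped data:

    #include <linux/kernel.h>

    /* Correct: the aligned start of an unaligned buffer. */
    static inline unsigned int aligned_start(unsigned int offset,
                                             unsigned int alignmask)
    {
        return ALIGN(offset, alignmask + 1);    /* offset 4 stays 4 */
    }

    /*
     * The old code first did "offset += alignmask - 1".  With alignmask = 3,
     * an already-aligned offset of 4 became ALIGN(6, 4) == 8, silently
     * skipping a whole 4-byte block.
     */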
2009-07-24 | crypto: ahash - Use GFP_KERNEL on allocation if the request can sleep | Steffen Klassert | 1 file, -2/+2
ahash_op_unaligned() and ahash_def_finup() allocate memory atomically, regardless of whether the request can sleep or not. This patch changes this to use GFP_KERNEL if the request can sleep. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
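For illustration, the allocation-context choice the commit describes, as a small sketch (the helper name is invented):

    #include <crypto/hash.h>
    #include <linux/slab.h>

    static void *alloc_request_priv(struct ahash_request *req, size_t size)
    {
        gfp_t gfp = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
                    GFP_KERNEL : GFP_ATOMIC;

        return kmalloc(size, gfp);
    }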