path: root/crypto
2023-06-25  sock: Remove ->sendpage*() in favour of sendmsg(MSG_SPLICE_PAGES)  (David Howells; 4 files, -62/+4)
Remove ->sendpage() and ->sendpage_locked(). sendmsg() with MSG_SPLICE_PAGES should be used instead. This allows multiple pages and multipage folios to be passed through. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> # for net/can cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-afs@lists.infradead.org cc: mptcp@lists.linux.dev cc: rds-devel@oss.oracle.com cc: tipc-discussion@lists.sourceforge.net cc: virtualization@lists.linux-foundation.org Link: https://lore.kernel.org/r/20230623225513.2732256-16-dhowells@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
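For readers unfamiliar with the replacement interface, the conversion pattern this series moves callers to looks roughly like the sketch below. This is a hedged illustration rather than code from the patch; the wrapper function name is invented, while bvec_set_page(), iov_iter_bvec() and sock_sendmsg() are the existing kernel helpers being relied on.

    #include <linux/bvec.h>
    #include <linux/net.h>
    #include <linux/socket.h>
    #include <linux/uio.h>

    /* Sketch: send one page with sendmsg(MSG_SPLICE_PAGES) instead of ->sendpage(). */
    static int example_splice_page(struct socket *sock, struct page *page,
                                   unsigned int offset, unsigned int len,
                                   unsigned int extra_flags)
    {
            struct bio_vec bvec;
            struct msghdr msg = {
                    .msg_flags = extra_flags | MSG_SPLICE_PAGES,
            };

            bvec_set_page(&bvec, page, len, offset);
            iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);
            return sock_sendmsg(sock, &msg);    /* pages are spliced, not copied */
    }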
2023-06-20  crypto: af_alg/hash: Fix recvmsg() after sendmsg(MSG_MORE)  (David Howells; 1 file, -13/+25)
If an AF_ALG socket bound to a hashing algorithm is sent a zero-length message with MSG_MORE set, and recvmsg() is then called without first sending another message without MSG_MORE set to end the operation, an oops occurs. The crypto context and result are no longer set up in advance because hash_sendmsg() now defers that as long as possible in the hope that it can use crypto_ahash_digest(); since the message is zero-length, the data-wrangling loop is skipped and the setup never happens.

Fix this by handling zero-length sends at the top of hash_sendmsg(). If we're not continuing a previous sendmsg(), just ignore the send (hash_recvmsg() will invent something when called); if we are continuing, finalise the request at this point if MSG_MORE is not set so that any error is reported here, otherwise the send has no effect and can be ignored.

Whilst we're at it, remove the code that created a kvmalloc'd scatterlist when more than ALG_MAX_PAGES were needed - this shouldn't happen.

Fixes: c662b043cdca ("crypto: af_alg/hash: Support MSG_SPLICE_PAGES")
Reported-by: syzbot+13a08c0bf4d212766c3c@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/000000000000b928f705fdeb873a@google.com/
Reported-by: syzbot+14234ccf6d0ef629ec1a@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/000000000000c047db05fdeb8790@google.com/
Reported-by: syzbot+4e2e47f32607d0f72d43@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/000000000000bcca3205fdeb87fb@google.com/
Reported-by: syzbot+472626bb5e7c59fb768f@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/000000000000b55d8805fdeb8385@google.com/
Signed-off-by: David Howells <dhowells@redhat.com>
Reported-and-tested-by: syzbot+6efc50cc1f8d718d6cb7@syzkaller.appspotmail.com
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://lore.kernel.org/r/427646.1686913832@warthog.procyon.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
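The failing sequence is easy to drive from userspace through AF_ALG. A minimal sketch of such a sequence (assuming the generic "sha256" hash is available; error handling omitted for brevity):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    int main(void)
    {
            struct sockaddr_alg sa = {
                    .salg_family = AF_ALG,
                    .salg_type   = "hash",
                    .salg_name   = "sha256",
            };
            unsigned char digest[32];
            int tfmfd, opfd, i;

            tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
            opfd = accept(tfmfd, NULL, 0);

            /* Zero-length send with MSG_MORE, then a read with no finalising
             * send in between - the case the fix has to handle. */
            send(opfd, NULL, 0, MSG_MORE);
            read(opfd, digest, sizeof(digest));

            for (i = 0; i < (int)sizeof(digest); i++)
                    printf("%02x", digest[i]);
            printf("\n");

            close(opfd);
            close(tfmfd);
            return 0;
    }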
2023-06-18  crypto: Fix af_alg_sendmsg(MSG_SPLICE_PAGES) sglist limit  (David Howells; 1 file, -1/+1)
When af_alg_sendmsg() calls extract_iter_to_sg(), it passes MAX_SGL_ENTS as the maximum number of elements that may be written to, but some of the elements may already have been used (as recorded in sgl->cur), so extract_iter_to_sg() may end up overrunning the scatterlist.

Fix this to limit the number of elements to "MAX_SGL_ENTS - sgl->cur".

Note: It probably makes sense in future to alter the behaviour of extract_iter_to_sg() to stop if "sgtable->nents >= sg_max" instead, but this is a smaller fix for now.

The bug causes errors looking something like:

    BUG: KASAN: slab-out-of-bounds in sg_assign_page include/linux/scatterlist.h:109 [inline]
    BUG: KASAN: slab-out-of-bounds in sg_set_page include/linux/scatterlist.h:139 [inline]
    BUG: KASAN: slab-out-of-bounds in extract_bvec_to_sg lib/scatterlist.c:1183 [inline]
    BUG: KASAN: slab-out-of-bounds in extract_iter_to_sg lib/scatterlist.c:1352 [inline]
    BUG: KASAN: slab-out-of-bounds in extract_iter_to_sg+0x17a6/0x1960 lib/scatterlist.c:1339

Fixes: bf63e250c4b1 ("crypto: af_alg: Support MSG_SPLICE_PAGES")
Reported-by: syzbot+6efc50cc1f8d718d6cb7@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/000000000000b2585a05fdeb8379@google.com/
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: syzbot+6efc50cc1f8d718d6cb7@syzkaller.appspotmail.com
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
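In other words, the capacity handed to the extraction helper has to account for slots already consumed. A hedged fragment, not the literal patch (af_alg_tsgl and MAX_SGL_ENTS are the names the commit text refers to; the helper itself is invented for illustration):

    /* Slots still free in a partially filled AF_ALG scatterlist. */
    static unsigned int af_alg_sg_slots_left(const struct af_alg_tsgl *sgl)
    {
            return MAX_SGL_ENTS - sgl->cur;
    }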
2023-06-13  algif: Remove hash_sendpage*()  (David Howells; 1 file, -66/+0)
Remove hash_sendpage*() as nothing should now call it since the rewrite of splice_to_socket()[1]. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> Link: https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=2dc334f1a63a8839b88483a3e73c0f27c9c1791c [1] Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski; 1 file, -17/+21)
Cross-merge networking fixes after downstream PR.

Conflicts:

net/sched/sch_taprio.c
  d636fc5dd692 ("net: sched: add rcu annotations around qdisc->qdisc_sleeping")
  dced11ef84fb ("net/sched: taprio: don't overwrite "sch" variable in taprio_dump_class_stats()")

net/ipv4/sysctl_net_ipv4.c
  e209fee4118f ("net/ipv4: ping_group_range: allow GID from 2147483648 to 4294967294")
  ccce324dabfe ("tcp: make the first N SYN RTO backoffs linear")
  https://lore.kernel.org/all/20230605100816.08d41a7b@canb.auug.org.au/

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-06-08  crypto: af_alg/hash: Support MSG_SPLICE_PAGES  (David Howells; 2 files, -41/+70)
Make AF_ALG sendmsg() support MSG_SPLICE_PAGES in the hashing code. This causes pages to be spliced from the source iterator if possible. This allows ->sendpage() to be replaced by something that can handle multiple multipage folios in a single transaction. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-06-08  crypto: af_alg: Convert af_alg_sendpage() to use MSG_SPLICE_PAGES  (David Howells; 1 file, -44/+8)
Convert af_alg_sendpage() to use sendmsg() with MSG_SPLICE_PAGES rather than directly splicing in the pages itself. This allows ->sendpage() to be replaced by something that can handle multiple multipage folios in a single transaction. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-06-08  crypto: af_alg: Support MSG_SPLICE_PAGES  (David Howells; 3 files, -17/+37)
Make AF_ALG sendmsg() support MSG_SPLICE_PAGES. This causes pages to be spliced from the source iterator. This allows ->sendpage() to be replaced by something that can handle multiple multipage folios in a single transaction. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-06-08  crypto: af_alg: Indent the loop in af_alg_sendmsg()  (David Howells; 1 file, -24/+27)
Put the loop in af_alg_sendmsg() into an if-statement to indent it to make the next patch easier to review as that will add another branch to handle MSG_SPLICE_PAGES to the if-statement. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-06-08  crypto: af_alg: Use extract_iter_to_sg() to create scatterlists  (David Howells; 4 files, -55/+38)
Use extract_iter_to_sg() to decant the destination iterator into a scatterlist in af_alg_get_rsgl(). af_alg_make_sg() can then be removed. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-06-08  crypto: af_alg: Pin pages rather than ref'ing if appropriate  (David Howells; 1 file, -3/+7)
Convert AF_ALG to use iov_iter_extract_pages() instead of iov_iter_get_pages(). This will pin pages or leave them unaltered rather than getting a ref on them as appropriate to the iterator. The pages need to be pinned for DIO-read rather than having refs taken on them to prevent VM copy-on-write from malfunctioning during a concurrent fork() (the result of the I/O would otherwise end up only visible to the child process and not the parent). Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-crypto@vger.kernel.org cc: netdev@vger.kernel.org Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-06-02  KEYS: asymmetric: Copy sig and digest in public_key_verify_signature()  (Roberto Sassu; 1 file, -17/+21)
Commit ac4e97abce9b8 ("scatterlist: sg_set_buf() argument must be in linear mapping") checks that both the signature and the digest reside in the linear mapping area. However, more recently commit ba14a194a434c ("fork: Add generic vmalloced stack support") made it possible to move the stack in the vmalloc area, which is not contiguous, and thus not suitable for sg_set_buf() which needs adjacent pages. Always make a copy of the signature and digest in the same buffer used to store the key and its parameters, and pass them to sg_init_one(). Prefer it to conditionally doing the copy if necessary, to keep the code simple. The buffer allocated with kmalloc() is in the linear mapping area. Cc: stable@vger.kernel.org # 4.9.x Fixes: ba14a194a434 ("fork: Add generic vmalloced stack support") Link: https://lore.kernel.org/linux-integrity/Y4pIpxbjBdajymBJ@sol.localdomain/ Suggested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Tested-by: Stefan Berger <stefanb@linux.ibm.com>
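The general shape of the workaround is to bounce the data through a kmalloc'd buffer, which is always in the linear mapping, before describing it with a scatterlist. A hedged sketch, not the actual patch (the function name is illustrative):

    #include <linux/scatterlist.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    /* Data that may live on a vmalloc'd stack is copied into a kmalloc'd
     * buffer, which is guaranteed to be in the linear mapping, before
     * being described by a scatterlist. */
    static int sketch_sg_from_stack_data(struct scatterlist *sg,
                                         const void *data, size_t len)
    {
            void *buf = kmalloc(len, GFP_KERNEL);

            if (!buf)
                    return -ENOMEM;
            memcpy(buf, data, len);         /* safe: buf is linearly mapped */
            sg_init_one(sg, buf, len);
            return 0;                       /* caller kfree()s buf after the op */
    }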
2023-05-07  Merge tag 'v6.4-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds; 11 files, -12/+15)
Pull crypto fixes from Herbert Xu:

 - A long-standing bug in crypto_engine

 - A buggy but harmless check in the sun8i-ss driver

 - A regression in the CRYPTO_USER interface

* tag 'v6.4-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: api - Fix CRYPTO_USER checks for report function
  crypto: engine - fix crypto_queue backlog handling
  crypto: sun8i-ss - Fix a test in sun8i_ss_setup_ivs()
2023-05-04  Merge tag 'loongarch-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson  (Linus Torvalds; 1 file, -0/+3)
Pull LoongArch updates from Huacai Chen:

 - Better backtraces for humanization
 - Relay BCE exceptions to userland as SIGSEGV
 - Provide kernel fpu functions
 - Optimize memory ops (memset/memcpy/memmove)
 - Optimize checksum and crc32(c) calculation
 - Add ARCH_HAS_FORTIFY_SOURCE selection
 - Add function error injection support
 - Add ftrace with direct call support
 - Add basic perf tools support

* tag 'loongarch-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson: (24 commits)
  tools/perf: Add basic support for LoongArch
  LoongArch: ftrace: Add direct call trampoline samples support
  LoongArch: ftrace: Add direct call support
  LoongArch: ftrace: Implement ftrace_find_callable_addr() to simplify code
  LoongArch: ftrace: Fix build error if DYNAMIC_FTRACE_WITH_REGS is not set
  LoongArch: ftrace: Abstract DYNAMIC_FTRACE_WITH_ARGS accesses
  LoongArch: Add support for function error injection
  LoongArch: Add ARCH_HAS_FORTIFY_SOURCE selection
  LoongArch: crypto: Add crc32 and crc32c hw acceleration
  LoongArch: Add checksum optimization for 64-bit system
  LoongArch: Optimize memory ops (memset/memcpy/memmove)
  LoongArch: Provide kernel fpu functions
  LoongArch: Relay BCE exceptions to userland as SIGSEGV with si_code=SEGV_BNDERR
  LoongArch: Tweak the BADV and CPUCFG.PRID lines in show_regs()
  LoongArch: Humanize the ESTAT line when showing registers
  LoongArch: Humanize the ECFG line when showing registers
  LoongArch: Humanize the EUEN line when showing registers
  LoongArch: Humanize the PRMD line when showing registers
  LoongArch: Humanize the CRMD line when showing registers
  LoongArch: Fix format of CSR lines during show_regs()
  ...
2023-05-02  crypto: api - Fix CRYPTO_USER checks for report function  (Ondrej Mosnacek; 9 files, -9/+9)
Checking the config via ifdef incorrectly compiles out the report functions when CRYPTO_USER is set to =m. Fix it by using IS_ENABLED() instead. Fixes: c0f9e01dd266 ("crypto: api - Check CRYPTO_USER instead of NET for report") Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-05-01  LoongArch: crypto: Add crc32 and crc32c hw acceleration  (Min Zhou; 1 file, -0/+3)
With a blatant copy of some MIPS bits we introduce the crc32 and crc32c hw accelerated module to LoongArch.

LoongArch provides these instructions to calculate crc32 and crc32c:

    crc.w.b.w    crcc.w.b.w
    crc.w.h.w    crcc.w.h.w
    crc.w.w.w    crcc.w.w.w
    crc.w.d.w    crcc.w.d.w

So we can make use of these instructions to improve the performance of crc32(c) checksum calculation. As can be seen from the following test results, the crc32(c) instructions improve performance by about 58%.

                  Software implementation   Hardware acceleration
    Buffer size   time cost (seconds)       time cost (seconds)     Accel.
    100 KB        0.000845                  0.000534                59.1%
    1 MB          0.007758                  0.004836                59.4%
    10 MB         0.076593                  0.047682                59.4%
    100 MB        0.756734                  0.479126                58.5%
    1000 MB       7.563841                  4.778266                58.5%

Signed-off-by: Min Zhou <zhoumin@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2023-04-28  crypto: engine - fix crypto_queue backlog handling  (Olivier Bacon; 2 files, -3/+6)
CRYPTO_TFM_REQ_MAY_BACKLOG tells the crypto driver that it should internally backlog requests until the crypto hw's queue becomes full. At that point, crypto_engine backlogs the request and returns -EBUSY. A calling driver such as dm-crypt then waits until the complete() function is called with a status of -EINPROGRESS before sending a new request.

The problem lies in the call to complete() with a value of -EINPROGRESS that is made when a backlog item is present on the queue. The call is done before the successful execution of the crypto request. In the case that do_one_request() returns < 0 and retry support is available, the request is put back in the queue. This leads upper drivers to send a new request even though the queue is still full.

The problem can be reproduced by doing a large dd into a crypto dm-crypt device. This is pretty easy to see when using the Freescale CAAM crypto driver and SWIOTLB dma. Since the actual number of requests that can be held in the queue is unlimited, we get I/O errors and DMA allocation failures.

The fix is to call complete() with a value of -EINPROGRESS only if the request is not enqueued back in crypto_queue. This is done by calling complete() later in the code. In order to delay the decision, crypto_queue is modified to correctly set the backlog pointer when a request is enqueued back.

Fixes: 6a89f492f8e5 ("crypto: engine - support for parallel requests based on retry mechanism")
Co-developed-by: Sylvain Ouellet <souellet@genetec.com>
Signed-off-by: Sylvain Ouellet <souellet@genetec.com>
Signed-off-by: Olivier Bacon <obacon@genetec.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
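For reference, the caller-side contract that this fix preserves is the usual MAY_BACKLOG pattern sketched below (a hedged example built on the stock crypto_wait_req()/crypto_req_done() helpers, not code from the patch): -EBUSY means the request was backlogged, and the completion callback fires first with -EINPROGRESS once it has moved onto the hardware queue.

    #include <crypto/skcipher.h>
    #include <linux/crypto.h>

    /* Submit one request and wait for it, letting the wait helpers absorb
     * the -EBUSY (backlogged) and -EINPROGRESS (dequeued) notifications. */
    static int sketch_encrypt(struct skcipher_request *req)
    {
            DECLARE_CRYPTO_WAIT(wait);

            skcipher_request_set_callback(req,
                            CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
                            crypto_req_done, &wait);
            return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
    }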
2023-04-28  Merge tag 'modules-6.4-rc1'  (Linus Torvalds; 1 file, -1/+0) of
git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux Pull module updates from Luis Chamberlain: "The summary of the changes for this pull requests is: - Song Liu's new struct module_memory replacement - Nick Alcock's MODULE_LICENSE() removal for non-modules - My cleanups and enhancements to reduce the areas where we vmalloc module memory for duplicates, and the respective debug code which proves the remaining vmalloc pressure comes from userspace. Most of the changes have been in linux-next for quite some time except the minor fixes I made to check if a module was already loaded prior to allocating the final module memory with vmalloc and the respective debug code it introduces to help clarify the issue. Although the functional change is small it is rather safe as it can only *help* reduce vmalloc space for duplicates and is confirmed to fix a bootup issue with over 400 CPUs with KASAN enabled. I don't expect stable kernels to pick up that fix as the cleanups would have also had to have been picked up. Folks on larger CPU systems with modules will want to just upgrade if vmalloc space has been an issue on bootup. Given the size of this request, here's some more elaborate details: The functional change change in this pull request is the very first patch from Song Liu which replaces the 'struct module_layout' with a new 'struct module_memory'. The old data structure tried to put together all types of supported module memory types in one data structure, the new one abstracts the differences in memory types in a module to allow each one to provide their own set of details. This paves the way in the future so we can deal with them in a cleaner way. If you look at changes they also provide a nice cleanup of how we handle these different memory areas in a module. This change has been in linux-next since before the merge window opened for v6.3 so to provide more than a full kernel cycle of testing. It's a good thing as quite a bit of fixes have been found for it. Jason Baron then made dynamic debug a first class citizen module user by using module notifier callbacks to allocate / remove module specific dynamic debug information. Nick Alcock has done quite a bit of work cross-tree to remove module license tags from things which cannot possibly be module at my request so to: a) help him with his longer term tooling goals which require a deterministic evaluation if a piece a symbol code could ever be part of a module or not. But quite recently it is has been made clear that tooling is not the only one that would benefit. Disambiguating symbols also helps efforts such as live patching, kprobes and BPF, but for other reasons and R&D on this area is active with no clear solution in sight. b) help us inch closer to the now generally accepted long term goal of automating all the MODULE_LICENSE() tags from SPDX license tags In so far as a) is concerned, although module license tags are a no-op for non-modules, tools which would want create a mapping of possible modules can only rely on the module license tag after the commit 8b41fc4454e ("kbuild: create modules.builtin without Makefile.modbuiltin or tristate.conf"). Nick has been working on this *for years* and AFAICT I was the only one to suggest two alternatives to this approach for tooling. 
The complexity in one of my suggested approaches lies in that we'd need a possible-obj-m and a could-be-module which would check if the object being built is part of any kconfig build which could ever lead to it being part of a module, and if so define a new define -DPOSSIBLE_MODULE [0]. A more obvious yet theoretical approach I've suggested would be to have a tristate in kconfig imply the same new -DPOSSIBLE_MODULE as well but that means getting kconfig symbol names mapping to modules always, and I don't think that's the case today. I am not aware of Nick or anyone exploring either of these options. Quite recently Josh Poimboeuf has pointed out that live patching, kprobes and BPF would benefit from resolving some part of the disambiguation as well but for other reasons. The function granularity KASLR (fgkaslr) patches were mentioned but Joe Lawrence has clarified this effort has been dropped with no clear solution in sight [1]. In the meantime removing module license tags from code which could never be modules is welcomed for both objectives mentioned above. Some developers have also welcomed these changes as it has helped clarify when a module was never possible and they forgot to clean this up, and so you'll see quite a bit of Nick's patches in other pull requests for this merge window. I just picked up the stragglers after rc3. LWN has good coverage on the motivation behind this work [2] and the typical cross-tree issues he ran into along the way. The only concrete blocker issue he ran into was that we should not remove the MODULE_LICENSE() tags from files which have no SPDX tags yet, even if they can never be modules. Nick ended up giving up on his efforts due to having to do this vetting and backlash he ran into from folks who really did *not understand* the core of the issue nor were providing any alternative / guidance. I've gone through his changes and dropped the patches which dropped the module license tags where an SPDX license tag was missing, it only consisted of 11 drivers. To see if a pull request deals with a file which lacks SPDX tags you can just use: ./scripts/spdxcheck.py -f \ $(git diff --name-only commid-id | xargs echo) You'll see a core module file in this pull request for the above, but that's not related to his changes. WE just need to add the SPDX license tag for the kernel/module/kmod.c file in the future but it demonstrates the effectiveness of the script. Most of Nick's changes were spread out through different trees, and I just picked up the slack after rc3 for the last kernel was out. Those changes have been in linux-next for over two weeks. The cleanups, debug code I added and final fix I added for modules were motivated by David Hildenbrand's report of boot failing on a systems with over 400 CPUs when KASAN was enabled due to running out of virtual memory space. Although the functional change only consists of 3 lines in the patch "module: avoid allocation if module is already present and ready", proving that this was the best we can do on the modules side took quite a bit of effort and new debug code. The initial cleanups I did on the modules side of things has been in linux-next since around rc3 of the last kernel, the actual final fix for and debug code however have only been in linux-next for about a week or so but I think it is worth getting that code in for this merge window as it does help fix / prove / evaluate the issues reported with larger number of CPUs. 
Userspace is not yet fixed as it is taking a bit of time for folks to understand the crux of the issue and find a proper resolution. Worst come to worst, I have a kludge-of-concept [3] of how to make kernel_read*() calls for modules unique / converge them, but I'm currently inclined to just see if userspace can fix this instead"

Link: https://lore.kernel.org/all/Y/kXDqW+7d71C4wz@bombadil.infradead.org/ [0]
Link: https://lkml.kernel.org/r/025f2151-ce7c-5630-9b90-98742c97ac65@redhat.com [1]
Link: https://lwn.net/Articles/927569/ [2]
Link: https://lkml.kernel.org/r/20230414052840.1994456-3-mcgrof@kernel.org [3]

* tag 'modules-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux: (121 commits)
  module: add debugging auto-load duplicate module support
  module: stats: fix invalid_mod_bytes typo
  module: remove use of uninitialized variable len
  module: fix building stats for 32-bit targets
  module: stats: include uapi/linux/module.h
  module: avoid allocation if module is already present and ready
  module: add debug stats to help identify memory pressure
  module: extract patient module check into helper
  modules/kmod: replace implementation with a semaphore
  Change DEFINE_SEMAPHORE() to take a number argument
  module: fix kmemleak annotations for non init ELF sections
  module: Ignore L0 and rename is_arm_mapping_symbol()
  module: Move is_arm_mapping_symbol() to module_symbol.h
  module: Sync code of is_arm_mapping_symbol()
  scripts/gdb: use mem instead of core_layout to get the module address
  interconnect: remove module-related code
  interconnect: remove MODULE_LICENSE in non-modules
  zswap: remove MODULE_LICENSE in non-modules
  zpool: remove MODULE_LICENSE in non-modules
  x86/mm/dump_pagetables: remove MODULE_LICENSE in non-modules
  ...
2023-04-26  Merge tag 'v6.4-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds; 28 files, -851/+1137)
Pull crypto updates from Herbert Xu:
 "API:
   - Total usage stats now include all that returned errors (instead of just some)
   - Remove maximum hash statesize limit
   - Add cloning support for hmac and unkeyed hashes
   - Demote BUG_ON in crypto_unregister_alg to a WARN_ON

  Algorithms:
   - Use RIP-relative addressing on x86 to prepare for PIE build
   - Add accelerated AES/GCM stitched implementation on powerpc P10
   - Add some test vectors for cmac(camellia)
   - Remove failure case where jent is unavailable outside of FIPS mode in drbg
   - Add permanent and intermittent health error checks in jitter RNG

  Drivers:
   - Add support for 402xx devices in qat
   - Add support for HiSTB TRNG
   - Fix hash concurrency issues in stm32
   - Add OP-TEE firmware support in caam"

* tag 'v6.4-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (139 commits)
  i2c: designware: Add doorbell support for Mendocino
  i2c: designware: Use PCI PSP driver for communication
  powerpc: Move Power10 feature PPC_MODULE_FEATURE_P10
  crypto: p10-aes-gcm - Remove POWER10_CPU dependency
  crypto: testmgr - Add some test vectors for cmac(camellia)
  crypto: cryptd - Add support for cloning hashes
  crypto: cryptd - Convert hash to use modern init_tfm/exit_tfm
  crypto: hmac - Add support for cloning
  crypto: hash - Add crypto_clone_ahash/shash
  crypto: api - Add crypto_clone_tfm
  crypto: api - Add crypto_tfm_get
  crypto: x86/sha - Use local .L symbols for code
  crypto: x86/crc32 - Use local .L symbols for code
  crypto: x86/aesni - Use local .L symbols for code
  crypto: x86/sha256 - Use RIP-relative addressing
  crypto: x86/ghash - Use RIP-relative addressing
  crypto: x86/des3 - Use RIP-relative addressing
  crypto: x86/crc32c - Use RIP-relative addressing
  crypto: x86/cast6 - Use RIP-relative addressing
  crypto: x86/cast5 - Use RIP-relative addressing
  ...
2023-04-24  integrity: machine keyring CA configuration  (Eric Snowberg; 1 file, -0/+2)
Add machine keyring CA restriction options to control the type of keys that may be added to it. The motivation is separation of certificate-signing keys from code-signing keys. Subsequent work will limit certificates being loaded into the IMA keyring to code-signing keys used for signature verification.

When no restrictions are selected, all Machine Owner Keys (MOK) are added to the machine keyring.

When CONFIG_INTEGRITY_CA_MACHINE_KEYRING is selected, the CA bit must be true and the key usage must contain keyCertSign; any other usage field may be set as well.

When CONFIG_INTEGRITY_CA_MACHINE_KEYRING_MAX is selected, the CA bit must be true and the key usage must contain keyCertSign, but the digitalSignature usage may not be set.

Signed-off-by: Eric Snowberg <eric.snowberg@oracle.com>
Acked-by: Mimi Zohar <zohar@linux.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Tested-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
2023-04-24  KEYS: CA link restriction  (Eric Snowberg; 1 file, -0/+38)
Add a new link restriction. Restrict the addition of keys in a keyring based on the key to be added being a CA. Signed-off-by: Eric Snowberg <eric.snowberg@oracle.com> Reviewed-by: Mimi Zohar <zohar@linux.ibm.com> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Tested-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
2023-04-24  KEYS: X.509: Parse Key Usage  (Eric Snowberg; 1 file, -0/+28)
Parse the X.509 Key Usage. The key usage extension defines the purpose of the key contained in the certificate.

    id-ce-keyUsage OBJECT IDENTIFIER ::=  { id-ce 15 }

    KeyUsage ::= BIT STRING {
        digitalSignature  (0),
        contentCommitment (1),
        keyEncipherment   (2),
        dataEncipherment  (3),
        keyAgreement      (4),
        keyCertSign       (5),
        cRLSign           (6),
        encipherOnly      (7),
        decipherOnly      (8) }

If keyCertSign or digitalSignature is set, store it in the public_key structure. Having the purpose of the key stored during parsing allows enforcement on the usage field in the future. This will be used in a follow-on patch that requires knowing the certificate key usage type.

Link: https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.3
Signed-off-by: Eric Snowberg <eric.snowberg@oracle.com>
Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Tested-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
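As a hedged aside on the encoding (this is not the kernel's parser, and the struct and function names below are invented): in the DER-encoded BIT STRING, the first content octet is the unused-bit count and bit 0 (digitalSignature) is the most significant bit of the following octet, so keyCertSign (bit 5) corresponds to mask 0x04.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct key_usage_bits {
            bool digital_signature;
            bool key_cert_sign;
    };

    /* Extract the two usage bits the commit cares about from the BIT
     * STRING contents (v points just past the DER tag and length). */
    static struct key_usage_bits parse_key_usage(const uint8_t *v, size_t vlen)
    {
            struct key_usage_bits u = { false, false };

            if (vlen >= 2) {                        /* v[0] = unused-bit count */
                    u.digital_signature = v[1] & 0x80;   /* bit 0 */
                    u.key_cert_sign     = v[1] & 0x04;   /* bit 5 */
            }
            return u;
    }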
2023-04-24  KEYS: X.509: Parse Basic Constraints for CA  (Eric Snowberg; 1 file, -0/+22)
Parse the X.509 Basic Constraints. The basic constraints extension identifies whether the subject of the certificate is a CA.

    BasicConstraints ::= SEQUENCE {
        cA                      BOOLEAN DEFAULT FALSE,
        pathLenConstraint       INTEGER (0..MAX) OPTIONAL }

If the CA is true, store it in the public_key. This will be used in a follow on patch that requires knowing if the public key is a CA.

Link: https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.9
Signed-off-by: Eric Snowberg <eric.snowberg@oracle.com>
Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Tested-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
2023-04-20  crypto: testmgr - Add some test vectors for cmac(camellia)  (David Howells; 2 files, -0/+53)
Add some test vectors for 128-bit cmac(camellia) as found in draft-kato-ipsec-camellia-cmac96and128-01 section 6.2. The document also shows vectors for camellia-cmac-96, and for VK with a length greater than 16, but I'm not sure how to express those in testmgr. This also leaves cts(cbc(camellia)) untested, but I can't seem to find any tests for that that I could put into testmgr. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: Chuck Lever <chuck.lever@oracle.com> cc: Scott Mayhew <smayhew@redhat.com> cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org Link: https://datatracker.ietf.org/doc/pdf/draft-kato-ipsec-camellia-cmac96and128-01 Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
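Outside of testmgr, the algorithm covered by these vectors can also be exercised from userspace through AF_ALG. A minimal hedged sketch (placeholder all-zero key, no error handling):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    int main(void)
    {
            struct sockaddr_alg sa = {
                    .salg_family = AF_ALG,
                    .salg_type   = "hash",
                    .salg_name   = "cmac(camellia)",
            };
            const unsigned char key[16] = { 0 };     /* placeholder key */
            unsigned char mac[16];
            int tfmfd, opfd, i;

            tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
            setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
            opfd = accept(tfmfd, NULL, 0);

            send(opfd, "message", 7, 0);
            read(opfd, mac, sizeof(mac));            /* 16-byte CMAC tag */

            for (i = 0; i < (int)sizeof(mac); i++)
                    printf("%02x", mac[i]);
            printf("\n");
            return 0;
    }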
2023-04-20  crypto: cryptd - Add support for cloning hashes  (Herbert Xu; 1 file, -0/+16)
Allow cryptd hashes to be cloned. The underlying hash will be cloned. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-20  crypto: cryptd - Convert hash to use modern init_tfm/exit_tfm  (Herbert Xu; 1 file, -9/+9)
The cryptd hash template was still using the obsolete cra_init/cra_exit interface. Make it use the modern ahash init_tfm/exit_tfm instead. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-20  crypto: hmac - Add support for cloning  (Herbert Xu; 1 file, -0/+15)
Allow hmac to be cloned. The underlying hash can be used directly with a reference count. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-20  crypto: hash - Add crypto_clone_ahash/shash  (Herbert Xu; 3 files, -0/+107)
This patch adds the helpers crypto_clone_ahash and crypto_clone_shash. They are the hash-specific counterparts of crypto_clone_tfm. This allows code paths that cannot otherwise allocate a hash tfm object to do so. Once a new tfm has been obtained its key could then be changed without impacting other users. Note that only algorithms that implement clone_tfm can be cloned. However, all keyless hashes can be cloned by simply reusing the tfm object. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
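A hedged usage sketch (the wrapper below is invented; crypto_clone_ahash(), crypto_ahash_setkey() and crypto_free_ahash() are the interfaces relied on): clone an existing tfm, then rekey the clone without affecting other users of the original.

    #include <crypto/hash.h>
    #include <linux/err.h>

    /* Clone an ahash tfm and give the clone its own key. */
    static struct crypto_ahash *sketch_clone_and_rekey(struct crypto_ahash *tfm,
                                                       const u8 *key,
                                                       unsigned int keylen)
    {
            struct crypto_ahash *clone = crypto_clone_ahash(tfm);

            if (IS_ERR(clone))
                    return clone;
            if (crypto_ahash_setkey(clone, key, keylen)) {
                    crypto_free_ahash(clone);
                    return ERR_PTR(-EINVAL);
            }
            return clone;                   /* original tfm is untouched */
    }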
2023-04-20  crypto: api - Add crypto_clone_tfm  (Herbert Xu; 2 files, -9/+52)
This patch adds the helper crypto_clone_tfm. The purpose is to allocate a tfm object with GFP_ATOMIC. As we cannot sleep, the object has to be cloned from an existing tfm object. This allows code paths that cannot otherwise allocate a crypto_tfm object to do so. Once a new tfm has been obtained its key could then be changed without impacting other users. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-20  crypto: api - Add crypto_tfm_get  (Herbert Xu; 2 files, -0/+10)
Add a crypto_tfm_get interface to allow tfm objects to be shared. They can still be freed in the usual way. This should only be done with tfm objects with no keys. You must also not modify the tfm flags in any way once it becomes shared. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-14  crypto: api - Move low-level functions into algapi.h  (Herbert Xu; 2 files, -4/+9)
A number of low-level functions were exposed in crypto.h. Move them into algapi.h (and internal.h). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-13  KEYS: remove MODULE_LICENSE in non-modules  (Nick Alcock; 1 file, -1/+0)
Since commit 8b41fc4454e ("kbuild: create modules.builtin without Makefile.modbuiltin or tristate.conf"), MODULE_LICENSE declarations are used to identify modules. As a consequence, uses of the macro in non-modules will cause modprobe to misidentify their containing object file as a module when it is not (false positives), and modprobe might succeed rather than failing with a suitable error message. So remove it in the files in this commit, none of which can be built as modules. Signed-off-by: Nick Alcock <nick.alcock@oracle.com> Suggested-by: Luis Chamberlain <mcgrof@kernel.org> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: linux-modules@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: Hitomi Hasegawa <hasegawa-hitomi@fujitsu.com> Cc: David Howells <dhowells@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: keyrings@vger.kernel.org Cc: linux-crypto@vger.kernel.org Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2023-04-06  crypto: hash - Remove maximum statesize limit  (Herbert Xu; 1 file, -2/+1)
Remove the HASH_MAX_STATESIZE limit now that it is unused. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-06  crypto: algif_hash - Allocate hash state with kmalloc  (Herbert Xu; 1 file, -4/+15)
Allocating the hash state on the stack limits its size. Change this to use kmalloc so the limit can be removed for new drivers. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-06  crypto: drbg - Only fail when jent is unavailable in FIPS mode  (Herbert Xu; 1 file, -1/+1)
When jent initialisation fails for any reason other than ENOENT, the entire drbg fails to initialise, even when we're not in FIPS mode. This is wrong because we can still use the kernel RNG when we're not in FIPS mode. Change it so that it only fails when we are in FIPS mode. Fixes: 57225e679788 ("crypto: drbg - Use callback API for random readiness") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-06  crypto: jitter - permanent and intermittent health errors  (Stephan Müller; 3 files, -120/+76)
According to SP800-90B, two health failures are allowed: the intermittent and the permanent failure. So far, only the intermittent failure was implemented. The permanent failure was achieved by resetting the entire entropy source, including its health test state, and waiting for two or more back-to-back health errors. This approach is appropriate for RCT, but not for APT, as APT has a non-linear cutoff value. Thus, this patch implements 2 cutoff values for both RCT/APT. This implies that the health state is left untouched when an intermittent failure occurs. The noise source is reset and a new APT power-up self-test is performed. Yet, with the unchanged health test state, the counting of failures continues until a permanent failure is reached. Any non-failing raw entropy value causes the health tests to reset.

The intermittent error has an unchanged significance level of 2^-30. The permanent error has a significance level of 2^-60. Considering that this level also indicates a false-positive rate (see SP800-90B section 4.2), a false positive must only be incurred with a low probability when considering a fleet of Linux kernels as a whole. Hitting the permanent error may cause a panic(), so the following calculation applies: assuming that a fleet of 10^9 Linux kernels run concurrently with this patch in FIPS mode and on each kernel 2 health tests are performed every minute for one year, the chance of a false positive is about 1:1000 based on the binomial distribution.

In addition, any power-up health test errors triggered with jent_entropy_init are treated as permanent errors.

A permanent failure causes the entire entropy source to permanently return an error. This implies that a caller can only remedy the situation by re-allocating a new instance of the Jitter RNG. In a subsequent patch, a transparent re-allocation will be provided which also changes the implied heuristic entropy assessment.

In addition, when the kernel is booted with fips=1, the Jitter RNG is defined to be part of a FIPS module. The permanent error of the Jitter RNG is translated as a FIPS module error. In this case, the entire FIPS module must cease operation. This is implemented in the kernel by invoking panic().

The patch also fixes an off-by-one in the RCT cutoff value, which is now set to 30 instead of 31. This is because the counting of the values starts with 0.

Reviewed-by: Vladis Dronov <vdronov@redhat.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Reviewed-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
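The quoted 1:1000 figure is roughly consistent with a back-of-the-envelope check (assuming independent tests, so the binomial tail is well approximated by n·p):

    n \approx 10^{9} \times 2 \times 60 \times 24 \times 365 \approx 1.05 \times 10^{15} \ \text{tests per year}
    p = 2^{-60} \approx 8.7 \times 10^{-19}
    P(\ge 1\ \text{false positive}) \approx n \cdot p \approx 9.1 \times 10^{-4} \approx 1{:}1100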
2023-03-24  async_tx: fix kernel-doc notation warnings  (Randy Dunlap; 2 files, -7/+7)
Fix kernel-doc warnings by adding "struct" keyword or "enum" keyword. Also fix 2 function parameter descriptions. Change some functions and structs from kernel-doc /** notation to regular /* comment notation.

    async_pq.c:18: warning: cannot understand function prototype: 'struct page *pq_scribble_page; '
    async_pq.c:18: error: Cannot parse struct or union!
    async_pq.c:40: warning: No description found for return value of 'do_async_gen_syndrome'
    async_pq.c:109: warning: Function parameter or member 'blocks' not described in 'do_sync_gen_syndrome'
    async_pq.c:109: warning: Function parameter or member 'offsets' not described in 'do_sync_gen_syndrome'
    async_pq.c:109: warning: Function parameter or member 'disks' not described in 'do_sync_gen_syndrome'
    async_pq.c:109: warning: Function parameter or member 'len' not described in 'do_sync_gen_syndrome'
    async_pq.c:109: warning: Function parameter or member 'submit' not described in 'do_sync_gen_syndrome'
    async_tx.c:136: warning: cannot understand function prototype: 'enum submit_disposition '
    async_tx.c:264: warning: Function parameter or member 'tx' not described in 'async_tx_quiesce'

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-24  crypto: api - Demote BUG_ON() in crypto_unregister_alg() to a WARN_ON()  (Toke Høiland-Jørgensen; 1 file, -1/+3)
The crypto_unregister_alg() function expects callers to ensure that any algorithm that is unregistered has a refcnt of exactly 1, and issues a BUG_ON() if this is not the case. However, there are in fact drivers that will call crypto_unregister_alg() without ensuring that the refcnt has been lowered first, most notably on system shutdown. This causes the BUG_ON() to trigger, which prevents a clean shutdown and hangs the system. To avoid such hangs on shutdown, demote the BUG_ON() in crypto_unregister_alg() to a WARN_ON() with early return. Cc stable because this problem was observed on a 6.2 kernel, cf the link below. Link: https://lore.kernel.org/r/87r0tyq8ph.fsf@toke.dk Cc: stable@vger.kernel.org Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
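The resulting pattern is roughly the one below (a hedged sketch of the shape of the change, not the literal diff; the helper name is invented):

    #include <linux/bug.h>
    #include <linux/refcount.h>

    /* Warn and bail out instead of BUG()ing when the refcount is not the
     * expected 1 at unregistration time, so shutdown can proceed. */
    static bool sketch_ok_to_unregister(refcount_t *ref)
    {
            if (WARN_ON(refcount_read(ref) != 1))
                    return false;   /* caller returns early; no BUG(), no hang */
            return true;
    }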
2023-03-21  asymmetric_keys: log on fatal failures in PE/pkcs7  (Robbie Harwood; 2 files, -17/+17)
These particular errors can be encountered while trying to kexec when secureboot lockdown is in place. Without this change, even with a signed debug build, one still needs to reboot the machine to add the appropriate dyndbg parameters (since lockdown blocks debugfs). Accordingly, upgrade all pr_debug() before fatal error into pr_warn(). Signed-off-by: Robbie Harwood <rharwood@redhat.com> Signed-off-by: David Howells <dhowells@redhat.com> cc: Jarkko Sakkinen <jarkko@kernel.org> cc: Eric Biederman <ebiederm@xmission.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: keyrings@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: kexec@lists.infradead.org Link: https://lore.kernel.org/r/20230220171254.592347-3-rharwood@redhat.com/ # v2
2023-03-21  verify_pefile: relax wrapper length check  (Robbie Harwood; 1 file, -4/+8)
The PE Format Specification (section "The Attribute Certificate Table (Image Only)") states that `dwLength` is to be rounded up to 8-byte alignment when used for traversal. Therefore, the field is not required to be an 8-byte multiple in the first place. Accordingly, pesign has not performed this alignment since version 0.110. This causes kexec failure on pesign'd binaries with "PEFILE: Signature wrapper len wrong". Update the comment and relax the check. Signed-off-by: Robbie Harwood <rharwood@redhat.com> Signed-off-by: David Howells <dhowells@redhat.com> cc: Jarkko Sakkinen <jarkko@kernel.org> cc: Eric Biederman <ebiederm@xmission.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: keyrings@vger.kernel.org cc: linux-crypto@vger.kernel.org cc: kexec@lists.infradead.org Link: https://learn.microsoft.com/en-us/windows/win32/debug/pe-format#the-attribute-certificate-table-image-only Link: https://github.com/rhboot/pesign Link: https://lore.kernel.org/r/20230220171254.592347-2-rharwood@redhat.com/ # v2
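Conceptually, the traversal does the rounding itself rather than demanding an already-aligned length; a hedged sketch (helper name and types are illustrative, not the kernel's code):

    #include <linux/math.h>
    #include <linux/types.h>

    /* Step to the next attribute certificate entry: dwLength is rounded
     * up to 8 bytes for traversal, but need not be a multiple of 8. */
    static size_t sketch_next_cert_entry(size_t cur_offset, u32 dwlen)
    {
            return cur_offset + round_up((size_t)dwlen, 8);
    }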
2023-03-17  crypto: fips - simplify one-level sysctl registration for crypto_sysctl_table  (Luis Chamberlain; 1 file, -10/+1)
There is no need to declare an extra table just to create a directory; this can easily be done by passing a prefix path to register_sysctl(). Simplify this registration.

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
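A hedged sketch of what the simplification amounts to (the table pointer is assumed to be the existing crypto_sysctl_table; the wrapper is invented):

    #include <linux/sysctl.h>

    /* One call creates the "crypto" directory and registers the leaf
     * entries under it; no dedicated parent table is needed. */
    static struct ctl_table_header *sketch_register(struct ctl_table *crypto_sysctl_table)
    {
            return register_sysctl("crypto", crypto_sysctl_table);
    }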
2023-03-14  crypto: testmgr - fix RNG performance in fuzz tests  (Eric Biggers; 1 file, -97/+169)
The performance of the crypto fuzz tests has greatly regressed since v5.18. When booting a kernel on an arm64 dev board with all software crypto algorithms and CONFIG_CRYPTO_MANAGER_EXTRA_TESTS enabled, the fuzz tests now take about 200 seconds to run, or about 325 seconds with lockdep enabled, compared to about 5 seconds before. The root cause is that the random number generation has become much slower due to commit d4150779e60f ("random32: use real rng for non-deterministic randomness"). On my same arm64 dev board, at the time the fuzz tests are run, get_random_u8() is about 345x slower than prandom_u32_state(), or about 469x if lockdep is enabled. Lockdep makes a big difference, but much of the rest comes from the get_random_*() functions taking a *very* slow path when the CRNG is not yet initialized. Since the crypto self-tests run early during boot, even having a hardware RNG driver enabled (CONFIG_CRYPTO_DEV_QCOM_RNG in my case) doesn't prevent this. x86 systems don't have this issue, but they still see a significant regression if lockdep is enabled. Converting the "Fully random bytes" case in generate_random_bytes() to use get_random_bytes() helps significantly, improving the test time to about 27 seconds. But that's still over 5x slower than before. This is all a bit silly, though, since the fuzz tests don't actually need cryptographically secure random numbers. So let's just make them use a non-cryptographically-secure RNG as they did before. The original prandom_u32() is gone now, so let's use prandom_u32_state() instead, with an explicitly managed state, like various other self-tests in the kernel source tree (rbtree_test.c, test_scanf.c, etc.) already do. This also has the benefit that no locking is required anymore, so performance should be even better than the original version that used prandom_u32(). Fixes: d4150779e60f ("random32: use real rng for non-deterministic randomness") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
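The mechanism the commit describes is the standard explicitly-seeded kernel PRNG; a hedged sketch of the pattern (function name invented):

    #include <linux/prandom.h>

    /* Deterministic, lock-free randomness for the fuzzer: seed an
     * explicit state once, then draw values from it. */
    static u32 fuzz_example(u64 seed)
    {
            struct rnd_state rng;

            prandom_seed_state(&rng, seed);
            return prandom_u32_state(&rng);
    }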
2023-03-14  crypto: api - Check CRYPTO_USER instead of NET for report  (Herbert Xu; 9 files, -72/+36)
The report function is currently conditionalised on CONFIG_NET. As it's only used by CONFIG_CRYPTO_USER, conditionalising on that instead of CONFIG_NET makes more sense. This gets rid of a rarely used code-path. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14  crypto: rng - Count error stats differently  (Herbert Xu; 3 files, -77/+48)
Move all stat code specific to rng into the rng code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14  crypto: skcipher - Count error stats differently  (Herbert Xu; 3 files, -55/+87)
Move all stat code specific to skcipher into the skcipher code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14  crypto: kpp - Count error stats differently  (Herbert Xu; 3 files, -58/+34)
Move all stat code specific to kpp into the kpp code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14  crypto: acomp - Count error stats differently  (Herbert Xu; 5 files, -74/+101)
Move all stat code specific to acomp into the acomp code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14  crypto: hash - Count error stats differently  (Herbert Xu; 5 files, -117/+176)
Move all stat code specific to hash into the hash code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14  crypto: akcipher - Count error stats differently  (Herbert Xu; 3 files, -76/+34)
Move all stat code specific to akcipher into the akcipher code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14  crypto: aead - Count error stats differently  (Herbert Xu; 3 files, -60/+73)
Move all stat code specific to aead into the aead code. While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>