path: root/lib/siphash.c
Commit history (newest first). Each entry lists the date, subject, author, and the files/lines changed in lib/siphash.c.
2022-06-03  Merge tag 'spdx-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx  (Linus Torvalds; 1 file changed, -3/+2)

Pull SPDX updates from Greg KH:
 "Here are some SPDX license marker changes. The SPDX-labeling effort has started to pick up again, so here are some changes for various parts of the tree that are related to this effort. Included in here are:

   - freevxfs license updates
   - siphash.c license cleanups
   - spdxcheck script updates to make things easier to work with going forward

  All of the license updates came from the original authors/copyright holders of the code involved. All of these have been in linux-next for weeks with no reported issues"

* tag 'spdx-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx:
  siphash: add SPDX tags as sole licensing authority
  scripts/spdxcheck: Exclude top-level README
  scripts/spdxcheck: Exclude MAINTAINERS/CREDITS
  scripts/spdxcheck: Exclude config directories
  scripts/spdxcheck: Put excluded files and directories into a separate file
  scripts/spdxcheck: Add option to display files without SPDX
  scripts/spdxcheck: Add [sub]directory statistics
  scripts/spdxcheck: Add directory statistics
  scripts/spdxcheck: Add percentage to statistics
  freevxfs: relicense to GPLv2 only
2022-05-19  siphash: add SPDX tags as sole licensing authority  (Jason A. Donenfeld; 1 file changed, -3/+2)
The text "dual BSD/GPLv2 license" is somewhat ambiguous, and moving this over to SPDX is overdue. This commit adds SPDX tags to the relevant files and clarifies that it's GPLv2 only and 3-clause BSD. It also removes the old text, so that the SPDX tags are the only source of the information. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-05-18  siphash: use one source of truth for siphash permutations  (Jason A. Donenfeld; 1 file changed, -22/+10)
The SipHash family of permutations is currently used in three places:

 - siphash.c itself, used in the ordinary way it was intended.
 - random32.c, in a construction from an anonymous contributor.
 - random.c, as part of its fast_mix function.

Each one of these places reinvents the wheel with the same C code, same rotation constants, and same symmetry-breaking constants. This commit tidies things up a bit by placing macros for the permutations and constants into siphash.h, where each of the three .c users can access them. It also leaves a note dissuading more users of them from emerging.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
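As a sketch of the idea, assuming illustrative macro names (the real names in siphash.h may differ), the shared round permutation and symmetry-breaking constants look roughly like this; the rotation amounts and constants are the published SipHash values:

    #include <linux/bitops.h>  /* rol64() */

    /*
     * One SipHash round over the four 64-bit lanes, shared so that
     * siphash.c, random32.c and random.c all expand the same code
     * instead of each carrying its own copy.
     */
    #define SIPHASH_PERMUTE(a, b, c, d) ( \
        (a) += (b), (b) = rol64((b), 13), (b) ^= (a), (a) = rol64((a), 32), \
        (c) += (d), (d) = rol64((d), 16), (d) ^= (c), \
        (a) += (d), (d) = rol64((d), 21), (d) ^= (a), \
        (c) += (b), (b) = rol64((b), 17), (b) ^= (c), (c) = rol64((c), 32))

    /* Symmetry-breaking initial state: "somepseudorandomlygeneratedbytes". */
    #define SIPHASH_CONST_0 0x736f6d6570736575ULL
    #define SIPHASH_CONST_1 0x646f72616e646f6dULL
    #define SIPHASH_CONST_2 0x6c7967656e657261ULL
    #define SIPHASH_CONST_3 0x7465646279746573ULL

Keeping a single definition means the rotation and initialization constants can never drift apart between the three users.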
2021-11-30  siphash: use _unaligned version by default  (Arnd Bergmann; 1 file changed, -6/+6)
On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS because the ordinary load/store instructions (ldr, ldrh, ldrb) can tolerate any misalignment of the memory address. However, load/store double and load/store multiple instructions (ldrd, ldm) may still only be used on memory addresses that are 32-bit aligned, and so we have to use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we may end up with a severe performance hit due to alignment traps that require fixups by the kernel. Testing shows that this currently happens with clang-13 but not gcc-11. In theory, any compiler version can produce this bug or other problems, as we are dealing with undefined behavior in C99 even on architectures that support this in hardware; see also https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100363.

Fortunately, the get_unaligned() accessors do the right thing: when building for ARMv6 or later, the compiler will emit unaligned accesses using the ordinary load/store instructions (but avoid the ones that require 32-bit alignment). When building for older ARM, those accessors will emit the appropriate sequence of ldrb/mov/orr instructions. And on architectures that can truly tolerate any kind of misalignment, the get_unaligned() accessors resolve to the leXX_to_cpup accessors that operate on aligned addresses.

Since the compiler will in fact emit ldrd or ldm instructions when building this code for ARM v6 or later, the solution is to use the unaligned accessors unconditionally on architectures where this is known to be fast. The _aligned version of the hash function is however still needed to get the best performance on architectures that cannot do any unaligned access in hardware. This new version avoids the undefined behavior and should produce the fastest hash on all architectures we support.

Link: https://lore.kernel.org/linux-arm-kernel/20181008211554.5355-4-ard.biesheuvel@linaro.org/
Link: https://lore.kernel.org/linux-crypto/CAK8P3a2KfmmGDbVHULWevB0hv71P2oi2ZCHEAqT=8dQfa0=cqQ@mail.gmail.com/
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: 2c956a60778c ("siphash: add cryptographically secure PRF")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
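A minimal sketch of the resulting load path, with an illustrative helper name standing in for the hash's internal word loop:

    #include <linux/types.h>
    #include <asm/unaligned.h>  /* get_unaligned_le64() */

    /*
     * Illustrative word loop: get_unaligned_le64() makes no alignment
     * assumption, so on ARMv6+ the compiler emits ordinary ldr/ldrb
     * loads and never the 32-bit-aligned-only ldrd/ldm forms.
     */
    static void example_feed_words(const u8 *data, size_t len,
                                   u64 *v0, u64 *v1, u64 *v2, u64 *v3)
    {
        const u8 *end = data + len - (len % sizeof(u64));

        for (; data != end; data += sizeof(u64)) {
            u64 m = get_unaligned_le64(data);

            *v3 ^= m;
            /* ... the SipHash rounds would run here ... */
            *v0 ^= m;
        }
    }

A separate _aligned variant can still take the aligned le64_to_cpup() path on architectures with no hardware support for misaligned loads, which is the trade-off described above.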
2020-11-18  Revert "lib: Revert use of fallthrough pseudo-keyword in lib/"  (Nick Desaulniers; 1 file changed, -18/+18)
This reverts commit 6a9dc5fd6170 ("lib: Revert use of fallthrough pseudo-keyword in lib/"). Now that we can build arch/powerpc/boot/ free of -Wimplicit-fallthrough, re-enable these fixes for lib/.

Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Link: https://github.com/ClangBuiltLinux/linux/issues/236
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
2020-08-25  lib: Revert use of fallthrough pseudo-keyword in lib/  (Gustavo A. R. Silva; 1 file changed, -18/+18)
The following build error for powerpc64 was reported by Nathan Chancellor:

 "$ scripts/config --file arch/powerpc/configs/powernv_defconfig -e KERNEL_XZ
  $ make -skj"$(nproc)" ARCH=powerpc CROSS_COMPILE=powerpc64le-linux- distclean powernv_defconfig zImage
  ...
  In file included from arch/powerpc/boot/../../../lib/decompress_unxz.c:234,
                   from arch/powerpc/boot/decompress.c:38:
  arch/powerpc/boot/../../../lib/xz/xz_dec_stream.c: In function 'dec_main':
  arch/powerpc/boot/../../../lib/xz/xz_dec_stream.c:586:4: error: 'fallthrough' undeclared (first use in this function)
    586 |    fallthrough;
        |    ^~~~~~~~~~~

  This will end up affecting distribution configurations such as Debian and OpenSUSE according to my testing. I am not sure what the solution is; the PowerPC wrapper does not set -D__KERNEL__, so I am not sure that compiler_attributes.h can be safely included."

In order to avoid these sorts of problems, it seems that the best solution is to use /* fall through */ comments instead of the fallthrough pseudo-keyword macro in lib/, for now.

Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Fixes: df561f6688fe ("treewide: Use fallthrough pseudo-keyword")
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-and-tested-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
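For context, the pseudo-keyword boils down to a macro along these lines (a simplified sketch, not the verbatim definition in include/linux/compiler_attributes.h), which is why code built without the usual kernel headers, such as the powerpc boot wrapper, fails to see it:

    /* Simplified sketch of the fallthrough pseudo-keyword. */
    #ifndef __has_attribute
    # define __has_attribute(x) 0   /* very old compilers: assume absent */
    #endif

    #if __has_attribute(__fallthrough__)
    # define fallthrough    __attribute__((__fallthrough__))
    #else
    # define fallthrough    do {} while (0)  /* fallthrough */
    #endif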
2020-08-24  treewide: Use fallthrough pseudo-keyword  (Gustavo A. R. Silva; 1 file changed, -18/+18)
Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough [1]. Also, remove fall-through markings where they are unnecessary.

[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
2019-04-25  lib/siphash.c: mark expected switch fall-throughs  (Stephen Rothwell; 1 file changed, -18/+18)
In preparation for enabling -Wimplicit-fallthrough, mark the switch cases where we expect to fall through. This patch aims to suppress up to 18 missing-break-in-switch false positives on some architectures.

Cc: Gustavo A. R. Silva <gustavo@embeddedor.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
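The markings cover tail-byte handling of the kind sketched below (a simplified byte-wise version, not the exact code in siphash.c), where every case is meant to run into the one after it:

    #include <linux/types.h>

    /*
     * Fold the trailing (len & 7) input bytes into b. Each case
     * deliberately continues into the next, so every arm carries a
     * fall-through marking for -Wimplicit-fallthrough.
     */
    static u64 example_fold_tail(const u8 *end, size_t len)
    {
        u64 b = ((u64)len) << 56;

        switch (len & 7) {
        case 7: b |= ((u64)end[6]) << 48;   /* fall through */
        case 6: b |= ((u64)end[5]) << 40;   /* fall through */
        case 5: b |= ((u64)end[4]) << 32;   /* fall through */
        case 4: b |= ((u64)end[3]) << 24;   /* fall through */
        case 3: b |= ((u64)end[2]) << 16;   /* fall through */
        case 2: b |= ((u64)end[1]) << 8;    /* fall through */
        case 1: b |= ((u64)end[0]);
        }
        return b;
    }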
2017-01-09  siphash: implement HalfSipHash1-3 for hash tables  (Jason A. Donenfeld; 1 file changed, -1/+320)
HalfSipHash, or hsiphash, is a shortened version of SipHash, which generates 32-bit outputs using a weaker 64-bit key. It has *much* lower security margins, and shouldn't be used for anything too sensitive, but it could be used as a hashtable key function replacement, if the output is never exposed, and if the security requirement is not too high.

The goal is to make this something that performance-critical jhash users would be willing to use. On 64-bit machines, HalfSipHash1-3 is slower than SipHash1-3, so we alias SipHash1-3 to HalfSipHash1-3 on those systems.

64-bit x86_64:
 [    0.509409] test_siphash: SipHash2-4 cycles: 4049181
 [    0.510650] test_siphash: SipHash1-3 cycles: 2512884
 [    0.512205] test_siphash: HalfSipHash1-3 cycles: 3429920
 [    0.512904] test_siphash: JenkinsHash cycles: 978267
So, we map hsiphash() -> SipHash1-3

32-bit x86:
 [    0.509868] test_siphash: SipHash2-4 cycles: 14812892
 [    0.513601] test_siphash: SipHash1-3 cycles: 9510710
 [    0.515263] test_siphash: HalfSipHash1-3 cycles: 3856157
 [    0.515952] test_siphash: JenkinsHash cycles: 1148567
So, we map hsiphash() -> HalfSipHash1-3

hsiphash() is roughly 3 times slower than jhash(), but comes with a considerable security improvement.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
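As a usage sketch of the intended jhash replacement (the helper names and the per-table key are made up for illustration; hsiphash() is the interface this commit adds):

    #include <linux/siphash.h>
    #include <linux/random.h>

    /* Per-table secret key; generated once and never exposed to users. */
    static hsiphash_key_t table_key;

    static void example_table_init(void)
    {
        get_random_bytes(&table_key, sizeof(table_key));
    }

    /* 32-bit bucket index for an opaque lookup key of 'len' bytes. */
    static u32 example_bucket(const void *lookup_key, size_t len, u32 nbuckets)
    {
        return hsiphash(lookup_key, len, &table_key) % nbuckets;
    }

Because of the reduced security margin, this only makes sense when the hash value never leaves the kernel, as the commit message stresses.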
2017-01-09  siphash: add cryptographically secure PRF  (Jason A. Donenfeld; 1 file changed, -0/+232)
SipHash is a 64-bit keyed hash function that is actually a cryptographically secure PRF, like HMAC. Except SipHash is super fast, and is meant to be used as a hashtable keyed lookup function, or as a general PRF for short input use cases, such as sequence numbers or RNG chaining.

For the first usage:

There are a variety of attacks known as "hashtable poisoning" in which an attacker forms some data such that the hash of that data will be the same, and then proceeds to fill up all entries of a hashbucket. This is a realistic and well-known denial-of-service vector. Currently hashtables use jhash, which is fast but not secure, and some kind of rotating key scheme (or none at all, which isn't good). SipHash is meant as a replacement for jhash in these cases.

There are a number of places in the kernel that are vulnerable to hashtable poisoning attacks, either via userspace vectors or network vectors, and there's not a reliable mechanism inside the kernel at the moment to fix it. The first step toward fixing these issues is actually getting a secure primitive into the kernel for developers to use. Then we can, bit by bit, port things over to it as deemed appropriate.

While SipHash is extremely fast for a cryptographically secure function, it is likely a bit slower than the insecure jhash, and so replacements will be evaluated on a case-by-case basis based on whether or not the difference in speed is negligible and whether or not the current jhash usage poses a real security risk.

For the second usage:

A few places in the kernel are using MD5 or SHA1 for creating secure sequence numbers, syn cookies, port numbers, or fast random numbers. SipHash is a faster, more fitting, and more secure replacement for MD5 in those situations. Replacing MD5 and SHA1 with SipHash for these uses is obvious and straightforward, and so is submitted along with this patch series. There shouldn't be much of a debate over its efficacy.

Dozens of languages are already using this internally for their hash tables and PRFs. Some of the BSDs already use this in their kernels. SipHash is a widely known high-speed solution to a widely known set of problems, and it's time we catch up.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
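A minimal sketch of the second usage, with illustrative names and tuple packing (not necessarily how the networking code ended up doing it):

    #include <linux/types.h>
    #include <linux/siphash.h>
    #include <linux/random.h>

    /* Boot-time secret; the PRF is only as strong as this key's secrecy. */
    static siphash_key_t seq_secret;

    static void example_seq_init(void)
    {
        get_random_bytes(&seq_secret, sizeof(seq_secret));
    }

    /* Keyed, attacker-unpredictable 32-bit value for a connection tuple. */
    static u32 example_secure_seq(__be32 saddr, __be32 daddr,
                                  __be16 sport, __be16 dport)
    {
        return (u32)siphash_3u32((__force u32)saddr, (__force u32)daddr,
                                 ((u32)(__force u16)sport << 16) |
                                 (u32)(__force u16)dport,
                                 &seq_secret);
    }

The fixed-width helpers such as siphash_3u32() sidestep length handling entirely for short, constant-size inputs like this tuple.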