path: root/lib
Age         Commit message  (Author, files changed, -deleted/+added lines)
2022-03-07  lib: add rocksoft model crc64  (Keith Busch, 2 files, -11/+68)
The NVM Express specification extended data integrity fields to 64 bits using the Rocksoft parameters. Add the poly to the crc64 table generation, and provide a generic library routine implementing the algorithm.

The Rocksoft 64-bit CRC model parameters are as follows:

    Poly: 0xAD93D23594C93659
    Initial value: 0xFFFFFFFFFFFFFFFF
    Reflected Input: True
    Reflected Output: True
    Xor Final: 0xFFFFFFFFFFFFFFFF

Since this model uses reflected bits, the implementation generates the reflected table so the result is ordered consistently.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20220303201312.3255347-6-kbusch@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
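For readers who want the shape of such a routine: with a reflected model like this one, the byte-at-a-time table-driven form consumes the low byte of the CRC first. A minimal sketch, assuming a pregenerated 256-entry table (the table and function names here are illustrative, not the names the patch adds):

    /* Sketch: reflected (LSB-first) CRC64; assumes a pregenerated
     * crc64_rocksoft_table[256] built from the polynomial above. */
    static u64 crc64_rocksoft_sketch(const unsigned char *p, size_t len)
    {
            u64 crc = 0xFFFFFFFFFFFFFFFFULL;        /* Initial value */

            while (len--)   /* reflected model: low byte first */
                    crc = (crc >> 8) ^
                          crc64_rocksoft_table[(crc ^ *p++) & 0xff];

            return crc ^ 0xFFFFFFFFFFFFFFFFULL;     /* Xor Final */
    }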
2022-03-07  Merge branch 'for-5.18/block' into for-5.18/64bit-pi  (Jens Axboe, 1 file, -23/+17)
* for-5.18/block: (96 commits)
  block: remove bio_devname
  ext4: stop using bio_devname
  raid5-ppl: stop using bio_devname
  raid1: stop using bio_devname
  md-multipath: stop using bio_devname
  dm-integrity: stop using bio_devname
  dm-crypt: stop using bio_devname
  pktcdvd: remove a pointless debug check in pkt_submit_bio
  block: remove handle_bad_sector
  block: fix and cleanup bio_check_ro
  bfq: fix use-after-free in bfq_dispatch_request
  blk-crypto: show crypto capabilities in sysfs
  block: don't delete queue kobject before its children
  block: simplify calling convention of elv_unregister_queue()
  block: remove redundant semicolon
  block: default BLOCK_LEGACY_AUTOLOAD to y
  block: update io_ticks when io hang
  block, bfq: don't move oom_bfqq
  block, bfq: avoid moving bfqq to it's parent bfqg
  block, bfq: cleanup bfq_bfqq_to_bfqg()
  ...
2022-03-05  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski, 1 file, -0/+10)
Daniel Borkmann says:

====================
pull-request: bpf-next 2022-03-04

We've added 32 non-merge commits during the last 14 day(s) which contain a total of 59 files changed, 1038 insertions(+), 473 deletions(-).

The main changes are:

1) Optimize BPF stackmap's build_id retrieval by caching last valid build_id, as consecutive stack frames are likely to be in the same VMA and therefore have the same build id, from Hao Luo.

2) Several improvements to arm64 BPF JIT, that is, support for JITing the atomic[64]_fetch_add, atomic[64]_[fetch_]{and,or,xor} and lastly atomic[64]_{xchg|cmpxchg}. Also fix the BTF line info dump for JITed programs, from Hou Tao.

3) Optimize generic BPF map batch deletion by only enforcing synchronize_rcu() barrier once upon return to user space, from Eric Dumazet.

4) For kernel build parse DWARF and generate BTF through pahole with enabled multithreading, from Kui-Feng Lee.

5) BPF verifier usability improvements by making log info more concise and replacing inv with scalar type name, from Mykola Lysenko.

6) Two follow-up fixes for BPF prog JIT pack allocator, from Song Liu.

7) Add a new Kconfig to allow for loading kernel modules with non-matching BTF type info; their BTF info is then removed on load, from Connor O'Brien.

8) Remove reallocarray() usage from bpftool and switch to libbpf_reallocarray() in order to fix compilation errors for older glibc, from Mauricio Vásquez.

9) Fix libbpf to error on conflicting name in BTF when type declaration appears before the definition, from Xu Kuohai.

10) Fix issue in BPF preload for in-kernel light skeleton where loaded BPF program fds prevent init process from setting up fd 0-2, from Yucong Sun.

11) Fix libbpf reuse of pinned perf RB map when max_entries is auto-determined by libbpf, from Stijn Tintel.

12) Several cleanups for libbpf and a fix to enforce perf RB map #pages to be non-zero, from Yuntao Wang.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (32 commits)
  bpf: Small BPF verifier log improvements
  libbpf: Add a check to ensure that page_cnt is non-zero
  bpf, x86: Set header->size properly before freeing it
  x86: Disable HAVE_ARCH_HUGE_VMALLOC on 32-bit x86
  bpf, test_run: Fix overflow in XDP frags bpf_test_finish
  selftests/bpf: Update btf_dump case for conflicting names
  libbpf: Skip forward declaration when counting duplicated type names
  bpf: Add some description about BPF_JIT_ALWAYS_ON in Kconfig
  bpf, docs: Add a missing colon in verifier.rst
  bpf: Cache the last valid build_id
  libbpf: Fix BPF_MAP_TYPE_PERF_EVENT_ARRAY auto-pinning
  bpf, selftests: Use raw_tp program for atomic test
  bpf, arm64: Support more atomic operations
  bpftool: Remove redundant slashes
  bpf: Add config to allow loading modules with BTF mismatches
  bpf, arm64: Feed byte-offset into bpf line info
  bpf, arm64: Call build_prologue() first in first JIT pass
  bpf: Fix issue with bpf preload module taking over stdout/stdin of kernel.
  bpftool: Bpf skeletons assert type sizes
  bpf: Cleanup comments
  ...
====================

Link: https://lore.kernel.org/r/20220304164313.31675-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-03  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 2 files, -2/+4)
net/batman-adv/hard-interface.c
  commit 690bb6fb64f5 ("batman-adv: Request iflink once in batadv-on-batadv check")
  commit 6ee3c393eeb7 ("batman-adv: Demote batadv-on-batadv skip error message")
  https://lore.kernel.org/all/20220302163049.101957-1-sw@simonwunderlich.de/

net/smc/af_smc.c
  commit 4d08b7b57ece ("net/smc: Fix cleanup when register ULP fails")
  commit 462791bbfa35 ("net/smc: add sysctl interface for SMC")
  https://lore.kernel.org/all/20220302112209.355def40@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-03  mm: remove the extra ZONE_DEVICE struct page refcount  (Christoph Hellwig, 1 file, -1/+0)
ZONE_DEVICE struct pages have an extra reference count that complicates the code for put_page() and several places in the kernel that need to check the reference count to see that a page is not being used (gup, compaction, migration, etc.). Clean up the code so the reference count doesn't need to be treated specially for ZONE_DEVICE pages.

Note that this excludes the special idle page wakeup for fsdax pages, which still happens at refcount 1. This is a separate issue and will be sorted out later. Given that only fsdax pages require the notification when the refcount hits 1 now, the PAGEMAP_OPS Kconfig symbol can go away and be replaced with a FS_DAX check for this hook in the put_page fastpath.

Based on an earlier patch from Ralph Campbell <rcampbell@nvidia.com>.

Link: https://lkml.kernel.org/r/20220210072828.2930359-8-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03  mm: don't include <linux/memremap.h> in <linux/mm.h>  (Christoph Hellwig, 1 file, -0/+1)
Move the check for the actual pgmap types that need the free at refcount one behavior into the out of line helper, and thus avoid the need to pull memremap.h into mm.h. Link: https://lkml.kernel.org/r/20220210072828.2930359-7-hch@lst.de Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com> Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Ben Skeggs <bskeggs@redhat.com> Cc: Chaitanya Kulkarni <kch@nvidia.com> Cc: Karol Herbst <kherbst@redhat.com> Cc: Lyude Paul <lyude@redhat.com> Cc: Miaohe Lin <linmiaohe@huawei.com> Cc: Muchun Song <songmuchun@bytedance.com> Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com> Cc: Ralph Campbell <rcampbell@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03  mm: remove pointless includes from <linux/hmm.h>  (Christoph Hellwig, 1 file, -0/+2)
hmm.h pulls in the world for no good reason at all. Remove the includes and push a few ones into the users instead.

Link: https://lkml.kernel.org/r/20220210072828.2930359-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
2022-03-03  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds, 1 file, -1/+0)
Pull ARM fixes from Russell King:

 - Fix kgdb breakpoint for Thumb2
 - Fix dependency for BITREVERSE kconfig
 - Fix nommu early_params and __setup returns

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 9182/1: mmu: fix returns from early_param() and __setup() functions
  ARM: 9178/1: fix unmet dependency on BITREVERSE for HAVE_ARCH_BITREVERSE
  ARM: Fix kgdb breakpoint for Thumb2
2022-03-03  lib/mpi: export mpi_rshift  (Nicolai Stange, 1 file, -0/+1)
A subsequent patch will make crypto/dh's dh_is_pubkey_valid() calculate a safe-prime group's Q parameter from P: Q = (P - 1) / 2. For implementing this, mpi_rshift() will be needed. Export it so that it's accessible from crypto/dh.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-02-28  bpf: Add config to allow loading modules with BTF mismatches  (Connor O'Brien, 1 file, -0/+10)
BTF mismatch can occur for a separately-built module even when the ABI is otherwise compatible and nothing else would prevent successfully loading. Add a new Kconfig to control how mismatches are handled. By default, preserve the current behavior of refusing to load the module. If MODULE_ALLOW_BTF_MISMATCH is enabled, load the module but ignore its BTF information. Suggested-by: Yonghong Song <yhs@fb.com> Suggested-by: Michal Suchánek <msuchanek@suse.de> Signed-off-by: Connor O'Brien <connoro@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/CAADnVQJ+OVPnBz8z3vNu8gKXX42jCUqfuvhWAyCQDu8N_yqqwQ@mail.gmail.com Link: https://lore.kernel.org/bpf/20220223012814.1898677-1-connoro@google.com
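A sketch of the policy this Kconfig selects, to make the control flow concrete (the helper and message are illustrative, not the actual module-loader code):

    /* Illustrative sketch of the intended load-time policy. */
    static int handle_module_btf_mismatch(struct module *mod, int btf_err)
    {
            if (!btf_err)
                    return 0;               /* BTF validated fine */

            if (!IS_ENABLED(CONFIG_MODULE_ALLOW_BTF_MISMATCH))
                    return btf_err;         /* default: refuse to load */

            pr_warn("%s: BTF mismatch, ignoring module BTF\n", mod->name);
            return 0;                       /* load, but drop its BTF */
    }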
2022-02-27  lib: overflow: Convert to Kunit  (Kees Cook, 3 files, -309/+263)
Convert overflow unit tests to KUnit, for better integration into the kernel self test framework. Includes a rename of test_overflow.c to overflow_kunit.c, and CONFIG_TEST_OVERFLOW to CONFIG_OVERFLOW_KUNIT_TEST.

  $ ./tools/testing/kunit/kunit.py run overflow
  ...
  [14:33:51] Starting KUnit Kernel (1/1)...
  [14:33:51] ============================================================
  [14:33:51] ================== overflow (11 subtests) ==================
  [14:33:51] [PASSED] u8_overflow_test
  [14:33:51] [PASSED] s8_overflow_test
  [14:33:51] [PASSED] u16_overflow_test
  [14:33:51] [PASSED] s16_overflow_test
  [14:33:51] [PASSED] u32_overflow_test
  [14:33:51] [PASSED] s32_overflow_test
  [14:33:51] [PASSED] u64_overflow_test
  [14:33:51] [PASSED] s64_overflow_test
  [14:33:51] [PASSED] overflow_shift_test
  [14:33:51] [PASSED] overflow_allocation_test
  [14:33:51] [PASSED] overflow_size_helpers_test
  [14:33:51] ==================== [PASSED] overflow =====================
  [14:33:51] ============================================================
  [14:33:51] Testing complete. Passed: 11, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0
  [14:33:51] Elapsed time: 12.525s total, 0.001s configuring, 12.402s building, 0.101s running

Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Co-developed-by: Vitor Massaru Iha <vitor@massaru.org>
Signed-off-by: Vitor Massaru Iha <vitor@massaru.org>
Link: https://lore.kernel.org/lkml/20200720224418.200495-1-vitor@massaru.org/
Co-developed-by: Daniel Latypov <dlatypov@google.com>
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Link: https://lore.kernel.org/linux-kselftest/20210503211536.1384578-1-dlatypov@google.com/
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/lkml/CAKwvOdm62iA1dNiC6Q11UJ-MnTqtc4kXkm-ubPaFMK824_k0nw@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Gow <davidgow@google.com>
Link: https://lore.kernel.org/lkml/CABVgOS=TWVh649_Vjo3wnMu9gZnq66gkV-LtGgsksAWMqc+MSA@mail.gmail.com
2022-02-26  kasan: test: prevent cache merging in kmem_cache_double_destroy  (Andrey Konovalov, 1 file, -1/+4)
With HW_TAGS KASAN and kasan.stacktrace=off, the cache created in the kmem_cache_double_destroy() test might get merged with an existing one. Thus, the first kmem_cache_destroy() call won't actually destroy it but will only decrease the refcount. This causes the test to fail. Provide an empty constructor for the created cache to prevent the cache from getting merged. Link: https://lkml.kernel.org/r/b597bd434c49591d8af00ee3993a42c609dc9a59.1644346040.git.andreyknvl@google.com Fixes: f98f966cd750 ("kasan: test: add test case for double-kmem_cache_destroy()") Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Marco Elver <elver@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
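The slab allocator never merges caches that have a constructor, so the fix is a one-liner at cache creation. A sketch of the pattern (the cache name and object size are illustrative):

    static void empty_ctor(void *object) { }

    /* Passing any non-NULL ctor makes the cache unmergeable: */
    struct kmem_cache *cache =
            kmem_cache_create("test_cache", 200, 0, 0, empty_ctor);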
2022-02-25  list: test: Add a test for list_entry_is_head()  (David Gow, 1 file, -0/+21)
The list_entry_is_head() macro was added[1] after the list KUnit tests, so wasn't tested. Add a new KUnit test to complete the set. [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e130816164e244b692921de49771eeb28205152d Signed-off-by: David Gow <davidgow@google.com> Acked-by: Daniel Latypov <dlatypov@google.com> Acked-by: Brendan Higgins <brendanhiggins@google.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
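list_entry_is_head(pos, head, member) reports whether a cursor has run off the end of the list and now aliases the head sentinel. A sketch of what such a test can look like (simplified; the actual lib/list-test.c test may differ in detail):

    struct list_test_struct {
            int data;
            struct list_head list;
    };

    static void list_test_list_entry_is_head(struct kunit *test)
    {
            struct list_test_struct a, b, *pos;
            LIST_HEAD(list);

            list_add_tail(&a.list, &list);
            list_add_tail(&b.list, &list);

            list_for_each_entry(pos, &list, list)
                    ;       /* walk to the end of the list */

            /* The loop leaves pos aliasing the head sentinel: */
            KUNIT_EXPECT_TRUE(test, list_entry_is_head(pos, &list, list));
            KUNIT_EXPECT_FALSE(test, list_entry_is_head(&a, &list, list));
    }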
2022-02-25  list: test: Add a test for list_is_head()  (David Gow, 1 file, -0/+19)
list_is_head() was added recently[1], and didn't have a KUnit test. The implementation is trivial, so it's not a particularly exciting test, but it'd be nice to get back to full coverage of the list functions. [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/include/linux/list.h?id=0425473037db40d9e322631f2d4dc6ef51f97e88 Signed-off-by: David Gow <davidgow@google.com> Acked-by: Daniel Latypov <dlatypov@google.com> Acked-by: Brendan Higgins <brendanhiggins@google.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-02-25  list: test: Add test for list_del_init_careful()  (David Gow, 1 file, -0/+21)
The list_del_init_careful() function was added[1] after the list KUnit test. Add a very basic test to cover it. Note that this test only covers the single-threaded behaviour (which matches list_del_init()), as is already the case with the test for list_empty_careful(). [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c6fe44d96fc1536af5b11cd859686453d1b7bfd1 Signed-off-by: David Gow <davidgow@google.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-02-25  uaccess: remove CONFIG_SET_FS  (Arnd Bergmann, 2 files, -2/+2)
There are no remaining callers of set_fs(), so CONFIG_SET_FS can be removed globally, along with the thread_info field and any references to it. This turns access_ok() into a cheaper check against TASK_SIZE_MAX. As CONFIG_SET_FS is now gone, drop all remaining references to set_fs()/get_fs(), mm_segment_t, user_addr_max() and uaccess_kernel(). Acked-by: Sam Ravnborg <sam@ravnborg.org> # for sparc32 changes Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Tested-by: Sergey Matyukevich <sergey.matyukevich@synopsys.com> # for arc changes Acked-by: Stafford Horne <shorne@gmail.com> # [openrisc, asm-generic] Acked-by: Dinh Nguyen <dinguyen@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-02-25  lib/test_lockup: fix kernel pointer check for separate address spaces  (Arnd Bergmann, 1 file, -3/+8)
test_kernel_ptr() uses access_ok() to figure out if a given address points to user space instead of kernel space. However on architectures that set CONFIG_ALTERNATE_USER_ADDRESS_SPACE, a pointer can be valid for both, and the check always fails because access_ok() returns true. Make the check for user space pointers conditional on the type of address space layout. Signed-off-by: Arnd Bergmann <arnd@arndb.de>
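The gist of the fix, heavily simplified (illustrative only; see lib/test_lockup.c for the real check):

    /* Sketch: user-space detection only works when user and kernel
     * share one address space; with separate spaces, access_ok()
     * passes for everything and cannot distinguish the two. */
    static bool ptr_is_user_space(const void *ptr, int size)
    {
    #ifndef CONFIG_ALTERNATE_USER_ADDRESS_SPACE
            return access_ok((const void __user *)ptr, size);
    #else
            return false;
    #endif
    }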
2022-02-25  uaccess: fix type mismatch warnings from access_ok()  (Arnd Bergmann, 1 file, -2/+2)
On some architectures, access_ok() does not do any argument type checking, so replacing the definition with a generic one causes a few warnings for harmless issues that were never caught before. Fix the ones that I found either through my own test builds or that were reported by the 0-day bot. Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Dinh Nguyen <dinguyen@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-02-25  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 1 file, -0/+2)
tools/testing/selftests/net/mptcp/mptcp_join.sh
  34aa6e3bccd8 ("selftests: mptcp: add ip mptcp wrappers")
  857898eb4b28 ("selftests: mptcp: add missing join check")
  6ef84b1517e0 ("selftests: mptcp: more robust signal race test")
  https://lore.kernel.org/all/20220221131842.468893-1-broonie@kernel.org/

drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/ct.c
  fb7e76ea3f3b6 ("net/mlx5e: TC, Skip redundant ct clear actions")
  c63741b426e11 ("net/mlx5e: Fix MPLSoUDP encap to use MPLS action information")
  09bf97923224f ("net/mlx5e: TC, Move pedit_headers_action to parse_attr")
  84ba8062e383 ("net/mlx5e: Test CT and SAMPLE on flow attr")
  efe6f961cd2e ("net/mlx5e: CT, Don't set flow flag CT for ct clear flow")
  3b49a7edec1d ("net/mlx5e: TC, Reject rules with multiple CT actions")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-24  vsprintf: Fix %pK with kptr_restrict == 0  (Christophe Leroy, 1 file, -15/+21)
Although kptr_restrict is set to 0 and the kernel is booted with the no_hash_pointers parameter, the content of /proc/vmallocinfo is lacking the real addresses.

  / # cat /proc/vmallocinfo
  0x(ptrval)-0x(ptrval)    8192 load_module+0xc0c/0x2c0c pages=1 vmalloc
  0x(ptrval)-0x(ptrval)   12288 start_kernel+0x4e0/0x690 pages=2 vmalloc
  0x(ptrval)-0x(ptrval)   12288 start_kernel+0x4e0/0x690 pages=2 vmalloc
  0x(ptrval)-0x(ptrval)    8192 _mpic_map_mmio.constprop.0+0x20/0x44 phys=0x80041000 ioremap
  0x(ptrval)-0x(ptrval)   12288 _mpic_map_mmio.constprop.0+0x20/0x44 phys=0x80041000 ioremap
  ...

According to the documentation for /proc/sys/kernel/, %pK is equivalent to %p when kptr_restrict is set to 0.

Fixes: 5ead723a20e0 ("lib/vsprintf: no_hash_pointers prints all addresses as unhashed")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/107476128e59bff11a309b5bf7579a1753a41aca.1645087605.git.christophe.leroy@csgroup.eu
2022-02-22  Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds, 1 file, -0/+2)
Pull ITER_PIPE fix from Al Viro:
 "Fix for old sloppiness in pipe_buffer reuse"

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  lib/iov_iter: initialize "flags" in new pipe_buffer
2022-02-21  random: remove unused tracepoints  (Jason A. Donenfeld, 1 file, -2/+0)
These explicit tracepoints aren't really used and show sign of aging. It's work to keep these up to date, and before I attempted to keep them up to date, they weren't up to date, which indicates that they're not really used. These days there are better ways of introspecting anyway. Cc: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-02-21  lib/iov_iter: initialize "flags" in new pipe_buffer  (Max Kellermann, 1 file, -0/+2)
The functions copy_page_to_iter_pipe() and push_pipe() can both allocate a new pipe_buffer, but the "flags" member initializer is missing. Fixes: 241699cd72a8 ("new iov_iter flavour: pipe-backed") To: Alexander Viro <viro@zeniv.linux.org.uk> To: linux-fsdevel@vger.kernel.org To: linux-kernel@vger.kernel.org Cc: stable@vger.kernel.org Signed-off-by: Max Kellermann <max.kellermann@ionos.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
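The fix amounts to zeroing the new member along with the rest when a fresh buffer is handed out; roughly (the helper name is illustrative, the fields are those of struct pipe_buffer):

    static void pipe_buf_init(struct pipe_buffer *buf, struct page *page,
                              const struct pipe_buf_operations *ops)
    {
            buf->page = page;
            buf->offset = 0;
            buf->len = 0;
            buf->ops = ops;
            buf->flags = 0;         /* previously left uninitialized */
            buf->private = 0;
    }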
2022-02-21  ARM: 9178/1: fix unmet dependency on BITREVERSE for HAVE_ARCH_BITREVERSE  (Julian Braha, 1 file, -1/+0)
Resending this to properly add it to the patch tracker - thanks for letting me know, Arnd :)

When ARM is enabled, and BITREVERSE is disabled, Kbuild gives the following warning:

  WARNING: unmet direct dependencies detected for HAVE_ARCH_BITREVERSE
    Depends on [n]: BITREVERSE [=n]
    Selected by [y]:
    - ARM [=y] && (CPU_32v7M [=n] || CPU_32v7 [=y]) && !CPU_32v6 [=n]

This is because ARM selects HAVE_ARCH_BITREVERSE without selecting BITREVERSE, despite HAVE_ARCH_BITREVERSE depending on BITREVERSE. This unmet dependency bug was found by Kismet, a static analysis tool for Kconfig. Please advise if this is not the appropriate solution.

Signed-off-by: Julian Braha <julianbraha@gmail.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-02-17  overflow: Provide constant expression struct_size  (Kees Cook, 1 file, -9/+17)
There have been cases where struct_size() (or flex_array_size()) needs to be calculated for an initializer, which requires it be a constant expression. This is possible when the "count" argument is a constant expression, so provide this ability for the helpers. Cc: Gustavo A. R. Silva <gustavoars@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org> Tested-by: Gustavo A. R. Silva <gustavoars@kernel.org> Link: https://lore.kernel.org/lkml/20220210010407.GA701603@embeddedor
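With the constant-expression form, struct_size() becomes usable in places C requires compile-time constants, for instance sizing a static buffer for a flexible-array struct. A sketch (the struct is made up; one common idiom is passing a casted null pointer, since the macro only inspects types and never dereferences its argument):

    struct report {
            u32 len;
            u8 data[];      /* flexible array member */
    };

    /* A file-scope array bound must be a constant expression: */
    static u8 report_buf[struct_size((struct report *)NULL, data, 64)];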
2022-02-17  overflow: Implement size_t saturating arithmetic helpers  (Kees Cook, 1 file, -0/+98)
In order to perform more open-coded replacements of common allocation size arithmetic, the kernel needs saturating (SIZE_MAX) helpers for multiplication, addition, and subtraction. For example, it is common in allocators, especially on realloc, to add to an existing size:

    p = krealloc(map->patch,
                 sizeof(struct reg_sequence) * (map->patch_regs + num_regs),
                 GFP_KERNEL);

There is no existing saturating replacement for this calculation, and just leaving the addition open coded inside array_size() could potentially overflow as well. For example, an overflow in an expression for a size_t argument might wrap to zero:

    array_size(anything, something_at_size_max + 1) == 0

Introduce size_mul(), size_add(), and size_sub() helpers that implicitly promote arguments to size_t and perform saturated calculations for use in allocations. With these helpers it is also possible to redefine array_size(), array3_size(), flex_array_size(), and struct_size() in terms of the new helpers.

As with the check_*_overflow() helpers, the new helpers use __must_check, though what is really desired is a way to make sure that assignment is only to a size_t lvalue. Without this, it's still possible to introduce overflow/underflow via type conversion (i.e. from size_t to int). Enforcing this will currently need to be left to static analysis or future use of -Wconversion.

Additionally update the overflow unit tests to force runtime evaluation for the pathological cases.

Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Len Baker <len.baker@gmx.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
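Rewritten with the new helpers, the krealloc() example above becomes the following; on overflow the computed size saturates to SIZE_MAX, so the allocation fails cleanly instead of being undersized:

    p = krealloc(map->patch,
                 size_mul(sizeof(struct reg_sequence),
                          size_add(map->patch_regs, num_regs)),
                 GFP_KERNEL);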
2022-02-14  fortify: Detect struct member overflows in memset() at compile-time  (Kees Cook, 1 file, -0/+5)
As done for memcpy(), also update memset() to use the same tightened compile-time bounds checking under CONFIG_FORTIFY_SOURCE. Signed-off-by: Kees Cook <keescook@chromium.org>
2022-02-14  fortify: Detect struct member overflows in memmove() at compile-time  (Kees Cook, 2 files, -0/+10)
As done for memcpy(), also update memmove() to use the same tightened compile-time checks under CONFIG_FORTIFY_SOURCE. Signed-off-by: Kees Cook <keescook@chromium.org>
2022-02-14  fortify: Detect struct member overflows in memcpy() at compile-time  (Kees Cook, 4 files, -1/+18)
memcpy() is dead; long live memcpy()

tl;dr: In order to eliminate a large class of common buffer overflow flaws that continue to persist in the kernel, have memcpy() (under CONFIG_FORTIFY_SOURCE) perform bounds checking of the destination struct member when they have a known size. This would have caught all of the memcpy()-related buffer write overflow flaws identified in at least the last three years.

Background and analysis:

While stack-based buffer overflow flaws are largely mitigated by stack canaries (and similar) features, heap-based buffer overflow flaws continue to regularly appear in the kernel. Many classes of heap buffer overflows are mitigated by FORTIFY_SOURCE when using the strcpy() family of functions, but a significant number remain exposed through the memcpy() family of functions.

At its core, FORTIFY_SOURCE uses the compiler's __builtin_object_size() internal[0] to determine the available size at a target address based on the compile-time known structure layout details. It operates in two modes: outer bounds (0) and inner bounds (1). In mode 0, the size of the enclosing structure is used. In mode 1, the size of the specific field is used. For example:

    struct object {
            u16 scalar1;    /* 2 bytes */
            char array[6];  /* 6 bytes */
            u64 scalar2;    /* 8 bytes */
            u32 scalar3;    /* 4 bytes */
            u32 scalar4;    /* 4 bytes */
    } instance;

__builtin_object_size(instance.array, 0) == 22, since the remaining size of the enclosing structure starting from "array" is 22 bytes (6 + 8 + 4 + 4).

__builtin_object_size(instance.array, 1) == 6, since the remaining size of the specific field "array" is 6 bytes.

The initial implementation of FORTIFY_SOURCE used mode 0 because there were many cases of both strcpy() and memcpy() functions being used to write (or read) across multiple fields in a structure. For example, it would catch this, which is writing 2 bytes beyond the end of "instance":

    memcpy(&instance.array, data, 25);

While this didn't protect against overwriting adjacent fields in a given structure, it would at least stop overflows from reaching beyond the end of the structure into neighboring memory, and provided a meaningful mitigation of a subset of buffer overflow flaws. However, many desirable targets remain within the enclosing structure (for example function pointers).

As it happened, there were very few cases of strcpy() family functions intentionally writing beyond the end of a string buffer. Once all known cases were removed from the kernel, the strcpy() family was tightened[1] to use mode 1, providing greater mitigation coverage.

What remains is switching memcpy() to mode 1 as well, but making the switch is much more difficult because of how frustrating it can be to find existing "normal" uses of memcpy() that expect to write (or read) across multiple fields. The root cause of the problem is that the C language lacks a common pattern to indicate the intent of an author's use of memcpy(), and is further complicated by the available compile-time and run-time mitigation behaviors.

The FORTIFY_SOURCE mitigation comes in two halves: the compile-time half, when both the buffer size _and_ the length of the copy is known, and the run-time half, when only the buffer size is known. If neither size is known, there is no bounds checking possible. At compile-time when the compiler sees that a length will always exceed a known buffer size, a warning can be deterministically emitted. For the run-time half, the length is tested against the known size of the buffer, and the overflowing operation is detected. (The performance overhead for these tests is virtually zero.)

It is relatively easy to find compile-time false-positives since a warning is always generated. Fixing the false positives, however, can be very time-consuming as there are hundreds of instances. While it's possible some over-read conditions could lead to kernel memory exposures, the bulk of the risk comes from the run-time flaws where the length of a write may end up being attacker-controlled and lead to an overflow.

Many of the compile-time false-positives take a form similar to this:

    memcpy(&instance.scalar2, data, sizeof(instance.scalar2) +
                                    sizeof(instance.scalar3));

and the run-time ones are similar, but lack a constant expression for the size of the copy:

    memcpy(instance.array, data, length);

The former is meant to cover multiple fields (though its style has been frowned upon more recently), but has been technically legal. Both lack any expressivity in the C language about the author's _intent_ in a way that a compiler can check when the length isn't known at compile time. A comment doesn't work well because what's needed is something a compiler can directly reason about. Is a given memcpy() call expected to overflow into neighbors? Is it not? By using the new struct_group() macro, this intent can be much more easily encoded.

It is not as easy to find the run-time false-positives since the code path to exercise a seemingly out-of-bounds condition that is actually expected may not be trivially reachable. Tightening the restrictions to block an operation for a false positive will either potentially create a greater flaw (if a copy is truncated by the mitigation), or destabilize the kernel (e.g. with a BUG()), making things completely useless for the end user. As a result, tightening the memcpy() restriction (when there is a reasonable level of uncertainty of the number of false positives), needs to first WARN() with no truncation. (Though any sufficiently paranoid end-user can always opt to set the panic_on_warn=1 sysctl.) Once enough development time has passed, the mitigation can be further intensified. (Note that this patch is only the compile-time checking step, which is a prerequisite to doing run-time checking, which will come in future patches.)

Given the potential frustrations of weeding out all the false positives when tightening the run-time checks, it is reasonable to wonder if these changes would actually add meaningful protection. Looking at just the last three years, there are 23 identified flaws with a CVE that mention "buffer overflow", and 11 are memcpy()-related buffer overflows. (For the remaining 12: 7 are array index overflows that would be mitigated by systems built with CONFIG_UBSAN_BOUNDS=y: CVE-2019-0145, CVE-2019-14835, CVE-2019-14896, CVE-2019-14897, CVE-2019-14901, CVE-2019-17666, CVE-2021-28952. 2 are miscalculated allocation sizes which could be mitigated with memory tagging: CVE-2019-16746, CVE-2019-2181. 1 is an iovec buffer bug maybe mitigated by memory tagging: CVE-2020-10742. 1 is a type confusion bug mitigated by stack canaries: CVE-2020-10942. 1 is a string handling logic bug with no mitigation I'm aware of: CVE-2021-28972.)

At my last count on an x86_64 allmodconfig build, there are 35,294 calls to memcpy(). With callers instrumented to report all places where the buffer size is known but the length remains unknown (i.e. a run-time bounds check is added), we can count how many new run-time bounds checks are added when the destination and source arguments of memcpy() are changed to use "mode 1" bounds checking: 1,276. This means for the future run-time checking, there is a worst-case upper bounds of 3.6% false positives to fix. In addition, there were around 150 new compile-time warnings to evaluate and fix (which have now been fixed).

With this instrumentation it's also possible to compare the places where the known 11 memcpy() flaw overflows manifested against the resulting list of potential new run-time bounds checks, as a measure of potential efficacy of the tightened mitigation. Much to my surprise, horror, and delight, all 11 flaws would have been detected by the newly added run-time bounds checks, making this a distinctly clear mitigation improvement: 100% coverage for known memcpy() flaws, with a possible 2 orders of magnitude gain in coverage over existing but undiscovered run-time dynamic length flaws (i.e. 1265 newly covered sites in addition to the 11 known), against only <4% of all memcpy() callers maybe gaining a false positive run-time check, with only about 150 new compile-time instances needing evaluation.

Specifically these would have been mitigated:

    CVE-2020-24490 https://git.kernel.org/linus/a2ec905d1e160a33b2e210e45ad30445ef26ce0e
    CVE-2020-12654 https://git.kernel.org/linus/3a9b153c5591548612c3955c9600a98150c81875
    CVE-2020-12653 https://git.kernel.org/linus/b70261a288ea4d2f4ac7cd04be08a9f0f2de4f4d
    CVE-2019-14895 https://git.kernel.org/linus/3d94a4a8373bf5f45cf5f939e88b8354dbf2311b
    CVE-2019-14816 https://git.kernel.org/linus/7caac62ed598a196d6ddf8d9c121e12e082cac3a
    CVE-2019-14815 https://git.kernel.org/linus/7caac62ed598a196d6ddf8d9c121e12e082cac3a
    CVE-2019-14814 https://git.kernel.org/linus/7caac62ed598a196d6ddf8d9c121e12e082cac3a
    CVE-2019-10126 https://git.kernel.org/linus/69ae4f6aac1578575126319d3f55550e7e440449
    CVE-2019-9500  https://git.kernel.org/linus/1b5e2423164b3670e8bc9174e4762d297990deff
    no-CVE-yet     https://git.kernel.org/linus/130f634da1af649205f4a3dd86cbe5c126b57914
    no-CVE-yet     https://git.kernel.org/linus/d10a87a3535cce2b890897914f5d0d83df669c63

To accelerate the review of potential run-time false positives, it's also worth noting that it is possible to partially automate checking by examining the memcpy() buffer argument to check for the destination struct member having a neighboring array member. It is reasonable to expect that the vast majority of run-time false positives would look like the already evaluated and fixed compile-time false positives, where the most common pattern is neighboring arrays. (And, FWIW, many of the compile-time fixes were actual bugs, so it is reasonable to assume we'll have similar cases of actual bugs getting fixed for run-time checks.)

Implementation:

Tighten the memcpy() destination buffer size checking to use the actual ("mode 1") target buffer size as the bounds check instead of their enclosing structure's ("mode 0") size. Use a common inline for memcpy() (and memmove() in a following patch), since all the tests are the same. All new cross-field memcpy() uses must use the struct_group() macro or similar to target a specific range of fields, so that FORTIFY_SOURCE can reason about the size and safety of the copy.

For now, cross-member "mode 1" _read_ detection at compile-time will be limited to W=1 builds, since it is, unfortunately, very common. As the priority is solving write overflows, read overflows will be part of a future phase (and can be fixed in parallel, for anyone wanting to look at W=1 build output).

For run-time, the "mode 0" size checking and mitigation is left unchanged, with "mode 1" to be added in stages. In this patch, no new run-time checks are added. Future patches will first bounds-check writes, and only perform a WARN() for now. This way any missed run-time false positives can be flushed out over the coming several development cycles, but system builders who have tested their workloads to be WARN()-free can enable the panic_on_warn=1 sysctl to immediately gain a mitigation against this class of buffer overflows. Once that is under way, run-time bounds-checking of reads can be similarly enabled.

Related classes of flaws that will remain unmitigated:

- memcpy() with flexible array structures, as the compiler does not currently have visibility into the size of the trailing flexible array. These can be fixed in the future by refactoring such cases to use a new set of flexible array structure helpers to perform the common serialization/deserialization code patterns doing allocation and/or copying.

- memcpy() with raw pointers (e.g. void *, char *, etc), or otherwise having their buffer size unknown at compile time, have no good mitigation beyond memory tagging (and even that would only protect against inter-object overflow, not intra-object neighboring field overflows), or refactoring. Some kind of "fat pointer" solution is likely needed to gain proper size-of-buffer awareness. (e.g. see struct membuf)

- type confusion where a higher level type's allocation size does not match the resulting cast type eventually passed to a deeper memcpy() call where the compiler cannot see the true type. In theory, greater static analysis could catch these, and the use of -Warray-bounds will help find some of these.

[0] https://gcc.gnu.org/onlinedocs/gcc/Object-Size-Checking.html
[1] https://git.kernel.org/linus/6a39e62abbafd1d58d1722f40c7d26ef379c6a2f

Signed-off-by: Kees Cook <keescook@chromium.org>
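To make the intent encoding concrete: an intentional cross-field copy is wrapped in struct_group() so that mode-1 bounds checking sees one named destination of known size. A brief sketch reusing the example struct from above (the group and member names are illustrative):

    struct object {
            u16 scalar1;
            struct_group(burst,     /* named span for cross-field copies */
                    char array[6];
                    u64 scalar2;
            );
            u32 scalar3;
    } instance;

    /* FORTIFY_SOURCE can bound this intentional multi-field write: */
    memcpy(&instance.burst, data, sizeof(instance.burst));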
2022-02-11  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 1 file, -2/+2)
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-10  vsprintf: Move space out of string literals in fourcc_string()  (Andy Shevchenko, 1 file, -1/+2)
The literals "big-endian" and "little-endian" may be potentially occurred in other places. Dropping space allows linker to merge them by using only a single copy. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20220127181233.72910-2-andriy.shevchenko@linux.intel.com
2022-02-10  vsprintf: Fix potential unaligned access  (Andy Shevchenko, 1 file, -5/+7)
The %p4cc specifier in some cases might get an unaligned pointer. Due to this we need to make copy to local variable once to avoid potential crashes on some architectures due to improper access. Fixes: af612e43de6d ("lib/vsprintf: Add support for printing V4L2 and DRM fourccs") Cc: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20220127181233.72910-1-andriy.shevchenko@linux.intel.com
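The standard pattern for this kind of fix is to read the possibly-unaligned value once through an alignment-safe accessor and format from the local copy; something like this (a sketch only, not the literal patch):

    #include <asm/unaligned.h>

    static void fourcc_to_chars(char out[4], const u32 *fourcc)
    {
            u32 val = get_unaligned(fourcc);  /* single alignment-safe read */
            int i;

            for (i = 0; i < 4; i++)
                    out[i] = (char)(val >> (8 * i));
    }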
2022-02-10  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski, 1 file, -2/+10)
Daniel Borkmann says:

====================
pull-request: bpf-next 2022-02-09

We've added 126 non-merge commits during the last 16 day(s) which contain a total of 201 files changed, 4049 insertions(+), 2215 deletions(-).

The main changes are:

1) Add custom BPF allocator for JITs that pack multiple programs into a huge page to reduce iTLB pressure, from Song Liu.

2) Add __user tagging support in vmlinux BTF and utilize it from BPF verifier when generating loads, from Yonghong Song.

3) Add per-socket fast path check guarding from cgroup/BPF overhead when used by only some sockets, from Pavel Begunkov.

4) Continued libbpf deprecation work of APIs/features and removal of their usage from samples, selftests, libbpf & bpftool, from Andrii Nakryiko and various others.

5) Improve BPF instruction set documentation by adding byte swap instructions and cleaning up load/store section, from Christoph Hellwig.

6) Switch BPF preload infra to light skeleton and remove libbpf dependency from it, from Alexei Starovoitov.

7) Fix architecture-agnostic macros in libbpf for accessing syscall arguments from BPF progs for non-x86 architectures, from Ilya Leoshkevich.

8) Rework port members in struct bpf_sk_lookup and struct bpf_sock to be of 16-bit field with anonymous zero padding, from Jakub Sitnicki.

9) Add new bpf_copy_from_user_task() helper to read memory from a different task than current. Add ability to create sleepable BPF iterator progs, from Kenny Yu.

10) Implement XSK batching for ice's zero-copy driver used by AF_XDP and utilize TX batching API from XSK buffer pool, from Maciej Fijalkowski.

11) Generate temporary netns names for BPF selftests to avoid naming collisions, from Hangbin Liu.

12) Implement bpf_core_types_are_compat() with limited recursion for in-kernel usage, from Matteo Croce.

13) Simplify pahole version detection and finally enable CONFIG_DEBUG_INFO_DWARF5 to be selected with CONFIG_DEBUG_INFO_BTF, from Nathan Chancellor.

14) Misc minor fixes to libbpf and selftests from various folks.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (126 commits)
  selftests/bpf: Cover 4-byte load from remote_port in bpf_sk_lookup
  bpf: Make remote_port field in struct bpf_sk_lookup 16-bit wide
  libbpf: Fix compilation warning due to mismatched printf format
  selftests/bpf: Test BPF_KPROBE_SYSCALL macro
  libbpf: Add BPF_KPROBE_SYSCALL macro
  libbpf: Fix accessing the first syscall argument on s390
  libbpf: Fix accessing the first syscall argument on arm64
  libbpf: Allow overriding PT_REGS_PARM1{_CORE}_SYSCALL
  selftests/bpf: Skip test_bpf_syscall_macro's syscall_arg1 on arm64 and s390
  libbpf: Fix accessing syscall arguments on riscv
  libbpf: Fix riscv register names
  libbpf: Fix accessing syscall arguments on powerpc
  selftests/bpf: Use PT_REGS_SYSCALL_REGS in bpf_syscall_macro
  libbpf: Add PT_REGS_SYSCALL_REGS macro
  selftests/bpf: Fix an endianness issue in bpf_syscall_macro test
  bpf: Fix bpf_prog_pack build HPAGE_PMD_SIZE
  bpf: Fix leftover header->pages in sparc and powerpc code.
  libbpf: Fix signedness bug in btf_dump_array_data()
  selftests/bpf: Do not export subtest as standalone test
  bpf, x86_64: Fail gracefully on bpf_jit_binary_pack_finalize failures
  ...
====================

Link: https://lore.kernel.org/r/20220209210050.8425-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-10  test_overflow: Regularize test reporting output  (Kees Cook, 1 file, -24/+30)
Report test run summaries more regularly, so it's easier to understand the output:

- Remove noisy "ok" reports for shift and allocator tests.
- Reorganize per-type output to the end of each type's tests.
- Replace redundant vmalloc tests with __vmalloc so that __GFP_NO_WARN can be used to keep the expected failure warnings out of dmesg, similar to commit 8e060c21ae2c ("lib/test_overflow.c: avoid tainting the kernel and fix wrap size")

Resulting output:

  test_overflow: 18 u8 arithmetic tests finished
  test_overflow: 19 s8 arithmetic tests finished
  test_overflow: 17 u16 arithmetic tests finished
  test_overflow: 17 s16 arithmetic tests finished
  test_overflow: 17 u32 arithmetic tests finished
  test_overflow: 17 s32 arithmetic tests finished
  test_overflow: 17 u64 arithmetic tests finished
  test_overflow: 21 s64 arithmetic tests finished
  test_overflow: 113 shift tests finished
  test_overflow: 17 overflow size helper tests finished
  test_overflow: 11 allocation overflow tests finished
  test_overflow: all tests passed

Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Link: https://lore.kernel.org/all/eb6d02ae-e2ed-e7bd-c700-8a6d004d84ce@rasmusvillemoes.dk/
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/all/CAKwvOdnYYa+72VhtJ4ug=SJVFn7w+n7Th+hKYE87BRDt4hvqOg@mail.gmail.com/
Signed-off-by: Kees Cook <keescook@chromium.org>
2022-02-09  cxl: Prove CXL locking  (Dan Williams, 1 file, -0/+23)
When CONFIG_PROVE_LOCKING is enabled the 'struct device' definition gets an additional mutex that is not clobbered by lockdep_set_novalidate_class() like the typical device_lock(). This allows for local annotation of subsystem locks with mutex_lock_nested() per the subsystem's object/lock hierarchy. For CXL, this primarily needs the ability to lock ports by depth and child objects of ports by their parent port's lock.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/164365853422.99383.1052399160445197427.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-02-08  sbitmap: Delete old sbitmap_queue_get_shallow()  (John Garry, 1 file, -3/+3)
Since sbitmap_queue_get_shallow() was introduced in commit c05e66733788 ("sbitmap: add sbitmap_get_shallow() operation"), it has not been used. Delete the old sbitmap_queue_get_shallow() and rename public __sbitmap_queue_get_shallow() -> sbitmap_queue_get_shallow() as it is odd to have public __foo but no foo at all.

Signed-off-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/1644322024-105340-1-git-send-email-john.garry@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-02-08  lib/sbitmap: kill 'depth' from sbitmap_word  (Ming Lei, 1 file, -20/+14)
Only the last sbitmap_word can have different depth, and all the others must have same depth of 1U << sb->shift, so not necessary to store it in sbitmap_word, and it can be retrieved easily and efficiently by adding one internal helper of __map_depth(sb, index). Remove 'depth' field from sbitmap_word, then the annotation of ____cacheline_aligned_in_smp for 'word' isn't needed any more. Not see performance effect when running high parallel IOPS test on null_blk. This way saves us one cacheline(usually 64 words) per each sbitmap_word. Cc: Martin Wilck <martin.wilck@suse.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Martin Wilck <mwilck@suse.com> Reviewed-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/20220110072945.347535-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
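The helper described above is cheap because only the final word can be partial; a sketch consistent with the description:

    static inline unsigned int __map_depth(const struct sbitmap *sb,
                                           int index)
    {
            /* All words are full-depth except possibly the last one. */
            if (index == sb->map_nr - 1)
                    return sb->depth - (index << sb->shift);

            return 1U << sb->shift;
    }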
2022-02-06  ref_tracker: remove filter_irq_stacks() call  (Eric Dumazet, 1 file, -2/+0)
After commit e94006608949 ("lib/stackdepot: always do filter_irq_stacks() in stack_depot_save()") it became unnecessary to filter the stack before calling stack_depot_save(). Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Marco Elver <elver@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-05  ref_tracker: add a count of untracked references  (Eric Dumazet, 1 file, -1/+11)
We are still chasing a netdev refcount imbalance, and we suspect we have one rogue dev_put() that is consuming a reference taken from a dev_hold_track() To detect this case, allow ref_tracker_alloc() and ref_tracker_free() to be called with a NULL @trackerp parameter, and use a dedicated refcount_t just for them. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-05  ref_tracker: implement use-after-free detection  (Eric Dumazet, 1 file, -0/+5)
Whenever ref_tracker_dir_exit() is called, mark the struct ref_tracker_dir as dead. Test the dead status from ref_tracker_alloc() and ref_tracker_free(). This should detect buggy dev_put()/dev_hold() happening too late in the netdevice dismantle process.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
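The scheme is tiny; a sketch of its moving parts (simplified from lib/ref_tracker.c, details may differ):

    void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
    {
            /* ... release any outstanding trackers ... */
            dir->dead = true;       /* a later alloc/free is a bug */
    }

    int ref_tracker_alloc(struct ref_tracker_dir *dir,
                          struct ref_tracker **trackerp, gfp_t gfp)
    {
            WARN_ON_ONCE(dir->dead);
            /* ... normal reference tracking ... */
            return 0;
    }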
2022-02-04  lib/crypto: blake2s: avoid indirect calls to compression function for Clang CFI  (Jason A. Donenfeld, 1 file, -2/+2)
blake2s_compress_generic is weakly aliased by blake2s_compress. The current harness for function selection uses a function pointer, which is ordinarily inlined and resolved at compile time. But when Clang's CFI is enabled, CFI still triggers when making an indirect call via a weak symbol. This seems like a bug in Clang's CFI, as though it's bucketing weak symbols and strong symbols differently. It also only seems to trigger when "full LTO" mode is used, rather than "thin LTO".

  [ 0.000000][ T0] Kernel panic - not syncing: CFI failure (target: blake2s_compress_generic+0x0/0x1444)
  [ 0.000000][ T0] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.16.0-mainline-06981-g076c855b846e #1
  [ 0.000000][ T0] Hardware name: MT6873 (DT)
  [ 0.000000][ T0] Call trace:
  [ 0.000000][ T0]  dump_backtrace+0xfc/0x1dc
  [ 0.000000][ T0]  dump_stack_lvl+0xa8/0x11c
  [ 0.000000][ T0]  panic+0x194/0x464
  [ 0.000000][ T0]  __cfi_check_fail+0x54/0x58
  [ 0.000000][ T0]  __cfi_slowpath_diag+0x354/0x4b0
  [ 0.000000][ T0]  blake2s_update+0x14c/0x178
  [ 0.000000][ T0]  _extract_entropy+0xf4/0x29c
  [ 0.000000][ T0]  crng_initialize_primary+0x24/0x94
  [ 0.000000][ T0]  rand_initialize+0x2c/0x6c
  [ 0.000000][ T0]  start_kernel+0x2f8/0x65c
  [ 0.000000][ T0]  __primary_switched+0xc4/0x7be4
  [ 0.000000][ T0] Rebooting in 5 seconds..

Nonetheless, the function pointer method isn't so terrific anyway, so this patch replaces it with a simple boolean, which also gets inlined away. This successfully works around the Clang bug. In general, I'm not too keen on all of the indirection involved here; it clearly does more harm than good. Hopefully the whole thing can get cleaned up down the road when lib/crypto is overhauled more comprehensively. But for now, we go with a simple bandaid.

Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in")
Link: https://github.com/ClangBuiltLinux/linux/issues/1567
Reported-by: Miles Chen <miles.chen@mediatek.com>
Tested-by: Miles Chen <miles.chen@mediatek.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
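The workaround replaces the pointer-based dispatch with a boolean that the compiler folds away, so both arms are direct calls and CFI has nothing to reject. Roughly (a sketch; the real lib/crypto/blake2s.c helper threads the flag through blake2s_update()):

    static inline void blake2s_compress_blocks(struct blake2s_state *state,
                                               const u8 *block, size_t nblocks,
                                               u32 inc, bool force_generic)
    {
            /* Both branches are direct calls, inlined away at build time. */
            if (force_generic)
                    blake2s_compress_generic(state, block, nblocks, inc);
            else
                    blake2s_compress(state, block, nblocks, inc);
    }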
2022-02-02  lib/Kconfig.debug: Allow BTF + DWARF5 with pahole 1.21+  (Nathan Chancellor, 1 file, -1/+1)
Commit 98cd6f521f10 ("Kconfig: allow explicit opt in to DWARF v5") prevented CONFIG_DEBUG_INFO_DWARF5 from being selected when CONFIG_DEBUG_INFO_BTF is enabled because pahole had issues with clang's DWARF5 info. This was resolved by [1], which is in pahole v1.21. Allow DEBUG_INFO_DWARF5 to be selected with DEBUG_INFO_BTF when using pahole v1.21 or newer. [1]: https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=7d8e829f636f47aba2e1b6eda57e74d8e31f733c Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-6-nathan@kernel.org
2022-02-02  lib/Kconfig.debug: Use CONFIG_PAHOLE_VERSION  (Nathan Chancellor, 1 file, -2/+2)
Now that CONFIG_PAHOLE_VERSION exists, use it in the definition of CONFIG_PAHOLE_HAS_SPLIT_BTF and CONFIG_PAHOLE_HAS_BTF_TAG to reduce the amount of duplication across the tree. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-5-nathan@kernel.org
2022-01-31  kunit: factor out str constants from binary assertion structs  (Daniel Latypov, 1 file, -19/+19)
If the compiler doesn't optimize them away, each kunit assertion (use of KUNIT_EXPECT_EQ, etc.) can use 88 bytes of stack space in the worst and most common case. This has led to compiler warnings and a suggestion from Linus to move data from the structs into static const's where possible [1].

This builds upon [2] which did so for the base struct kunit_assert type. That only reduced sizeof(struct kunit_binary_assert) from 88 to 64. Given these are by far the most commonly used asserts, this patch factors out the textual representations of the operands and comparator into another static const, saving 16 more bytes.

In detail, KUNIT_EXPECT_EQ(test, 2 + 2, 5) yields the following struct

    (struct kunit_binary_assert) {
            .assert = <struct kunit_assert>,
            .operation = "==",
            .left_text = "2 + 2",
            .left_value = 4,
            .right_text = "5",
            .right_value = 5,
    }

After this change

    static const struct kunit_binary_assert_text __text = {
            .operation = "==",
            .left_text = "2 + 2",
            .right_text = "5",
    };

    (struct kunit_binary_assert) {
            .assert = <struct kunit_assert>,
            .text = &__text,
            .left_value = 4,
            .right_value = 5,
    }

This also DRYs the code a bit more since these str fields were repeated for the string and pointer versions of kunit_binary_assert.

Note: we could name the kunit_binary_assert_text fields left/right instead of left_text/right_text. But that would require changing the macros a bit since they have args called "left" and "right" which would be substituted in `.left = #left` as `.2 + 2 = \"2 + 2\"`.

[1] https://groups.google.com/g/kunit-dev/c/i3fZXgvBrfA/m/VULQg1z6BAAJ
[2] https://lore.kernel.org/linux-kselftest/20220113165931.451305-6-dlatypov@google.com/

Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-01-31  kunit: remove va_format from kunit_assert  (Daniel Latypov, 2 files, -16/+23)
The concern is that having a lot of redundant fields in kunit_assert can blow up stack usage if the compiler doesn't optimize them away [1]. The comment on this field implies that it was meant to be initialized when the expect/assert was declared, but this only happens when we run kunit_do_failed_assertion(). We don't need to access it outside of that function, so move it out of the struct and make it a local variable there. This change also takes the chance to reduce the number of macros by inlining the now simplified KUNIT_INIT_ASSERT_STRUCT() macro. [1] https://groups.google.com/g/kunit-dev/c/i3fZXgvBrfA/m/VULQg1z6BAAJ Signed-off-by: Daniel Latypov <dlatypov@google.com> Reviewed-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2022-01-31  lib/crc32test: correct printed bytes count  (Kevin Bracey, 1 file, -1/+1)
crc32c_le self test had a stray multiply by two inherited from the crc32_le+crc32_be test loop. Signed-off-by: Kevin Bracey <kevin@bracey.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  lib/crc32: Make crc32_be weak for arch override  (Kevin Bracey, 1 file, -2/+3)
crc32_le and __crc32c_le can be overridden - extend this to crc32_be. Signed-off-by: Kevin Bracey <kevin@bracey.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  lib/crc32: remove unneeded casts  (Kevin Bracey, 1 file, -6/+3)
Casts were added in commit 8f243af42ade ("sections: fix const sections for crc32 table") to cope with the tables not being const. They are no longer required since commit f5e38b9284e1 ("lib: crc32: constify crc32 lookup table"). Signed-off-by: Kevin Bracey <kevin@bracey.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-30  kasan: test: fix compatibility with FORTIFY_SOURCE  (Marco Elver, 1 file, -0/+5)
With CONFIG_FORTIFY_SOURCE enabled, string functions will also perform dynamic checks using __builtin_object_size(ptr), which when failed will panic the kernel. Because the KASAN test deliberately performs out-of-bounds operations, the kernel panics with FORTIFY_SOURCE, for example:

  | kernel BUG at lib/string_helpers.c:910!
  | invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI
  | CPU: 1 PID: 137 Comm: kunit_try_catch Tainted: G B 5.16.0-rc3+ #3
  | Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
  | RIP: 0010:fortify_panic+0x19/0x1b
  | ...
  | Call Trace:
  |  kmalloc_oob_in_memset.cold+0x16/0x16
  | ...

Fix it by also hiding `ptr` from the optimizer, which will ensure that __builtin_object_size() does not return a valid size, preventing fortified string functions from panicking.

Link: https://lkml.kernel.org/r/20220124160744.1244685-1-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Reported-by: Nico Pache <npache@redhat.com>
Reviewed-by: Nico Pache <npache@redhat.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-01-28  crypto: sm3 - create SM3 stand-alone library  (Tianjia Zhang, 3 files, -0/+252)
Stand-alone implementation of the SM3 algorithm. It is designed to have as little dependencies as possible. In other cases you should generally use the hash APIs from include/crypto/hash.h. Especially when hashing large amounts of data as those APIs may be hw-accelerated. In the new SM3 stand-alone library, sm3_transform() has also been optimized, instead of simply using the code in sm3_generic. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Reviewed-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>