path: root/arch
Age | Commit message | Author | Files | Lines
2024-06-12 | powerpc/fsl-soc: hide unused const variable | Arnd Bergmann | 1 | -0/+2
[ Upstream commit 01acaf3aa75e1641442cc23d8fe0a7bb4226efb1 ] vmpic_msi_feature is only used conditionally, which triggers a rare -Werror=unused-const-variable= warning with gcc: arch/powerpc/sysdev/fsl_msi.c:567:37: error: 'vmpic_msi_feature' defined but not used [-Werror=unused-const-variable=] 567 | static const struct fsl_msi_feature vmpic_msi_feature = Hide this one in the same #ifdef as the reference so we can turn on the warning by default. Fixes: 305bcf26128e ("powerpc/fsl-soc: use CONFIG_EPAPR_PARAVIRT for hcalls") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240403080702.3509288-2-arnd@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
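A hedged sketch of the shape of the fix (the initializer fields shown are illustrative, not necessarily the driver's exact ones):
```
#ifdef CONFIG_EPAPR_PARAVIRT
/* Now defined under the same guard as its only user, so
 * -Werror=unused-const-variable= stays quiet when the hcall
 * support is configured out. */
static const struct fsl_msi_feature vmpic_msi_feature = {
	.fsl_pic_ip = FSL_PIC_IP_VMPIC,
	.msiir_offset = 0,
};
#endif
```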
2024-06-12 | riscv, bpf: make some atomic operations fully ordered | Puranjay Mohan | 1 | -10/+10
[ Upstream commit 20a759df3bba35bf5c3ddec0c02ad69b603b584c ] The BPF atomic operations with the BPF_FETCH modifier along with BPF_XCHG and BPF_CMPXCHG are fully ordered but the RISC-V JIT implements all atomic operations except BPF_CMPXCHG with relaxed ordering. Section 8.1 of the "The RISC-V Instruction Set Manual Volume I: Unprivileged ISA" [1], titled, "Specifying Ordering of Atomic Instructions" says: | To provide more efficient support for release consistency [5], each | atomic instruction has two bits, aq and rl, used to specify additional | memory ordering constraints as viewed by other RISC-V harts. and | If only the aq bit is set, the atomic memory operation is treated as | an acquire access. | If only the rl bit is set, the atomic memory operation is treated as a | release access. | | If both the aq and rl bits are set, the atomic memory operation is | sequentially consistent. Fix this by setting both aq and rl bits as 1 for operations with BPF_FETCH and BPF_XCHG. [1] https://riscv.org/wp-content/uploads/2017/05/riscv-spec-v2.2.pdf Fixes: dd642ccb45ec ("riscv, bpf: Implement more atomic operations for RV64") Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Reviewed-by: Pu Lehui <pulehui@huawei.com> Link: https://lore.kernel.org/r/20240505201633.123115-1-puranjay@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
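A hedged sketch of where the aq/rl bits live in the encoding; the helper below is illustrative, not the kernel's exact emitter:
```
#include <linux/types.h>

/* RISC-V AMO instructions are R-type with opcode 0x2f; funct7 packs
 * funct5 plus the aq and rl bits. Setting aq = rl = 1 makes the
 * operation sequentially consistent, as BPF_FETCH/BPF_XCHG require. */
static inline u32 rv_amo_insn(u32 funct5, u32 aq, u32 rl,
			      u32 rs2, u32 rs1, u32 funct3, u32 rd)
{
	u32 funct7 = (funct5 << 2) | (aq << 1) | rl;

	return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) |
	       (funct3 << 12) | (rd << 7) | 0x2f;
}
```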
2024-06-12 | s390/bpf: Emit a barrier for BPF_FETCH instructions | Ilya Leoshkevich | 1 | -2/+6
[ Upstream commit 68378982f0b21de02ac3c6a11e2420badefcb4bc ] BPF_ATOMIC_OP() macro documentation states that "BPF_ADD | BPF_FETCH" should be the same as atomic_fetch_add(), which is currently not the case on s390x: the serialization instruction "bcr 14,0" is missing. This applies to "and", "or" and "xor" variants too. s390x is allowed to reorder stores with subsequent fetches from different addresses, so code relying on BPF_FETCH acting as a barrier, for example: stw [%r0], 1 afadd [%r1], %r2 ldxw %r3, [%r4] may be broken. Fix it by emitting "bcr 14,0". Note that a separate serialization instruction is not needed for BPF_XCHG and BPF_CMPXCHG, because COMPARE AND SWAP performs serialization itself. Fixes: ba3b86b9cef0 ("s390/bpf: Implement new atomic ops") Reported-by: Puranjay Mohan <puranjay12@gmail.com> Closes: https://lore.kernel.org/bpf/mb61p34qvq3wf.fsf@kernel.org/ Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Puranjay Mohan <puranjay@kernel.org> Link: https://lore.kernel.org/r/20240507000557.12048-1-iii@linux.ibm.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
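For reference, "bcr 14,0" encodes as the halfword 0x07e0; a hedged sketch of emitting it (the emit helper is illustrative, not the JIT's real _EMIT2 macro):
```
#include <linux/types.h>

/* "bcr 14,0": branch-on-condition to register 0 with mask 14 never
 * branches, but acts as a serialization point on s390x, ordering the
 * preceding interlocked update against subsequent fetches. */
static inline void emit_serialization(u16 *image, int *idx)
{
	image[(*idx)++] = 0x07e0;
}
```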
2024-06-12 | m68k: mac: Fix reboot hang on Mac IIci | Finn Thain | 1 | -18/+18
[ Upstream commit 265a3b322df9a973ff1fc63da70af456ab6ae1d6 ] Calling mac_reset() on a Mac IIci does reset the system, but what follows is a POST failure that requires a manual reset to resolve. Avoid that by using the 68030 asm implementation instead of the C implementation. Apparently the SE/30 has a similar problem as it has used the asm implementation since before git. This patch extends that solution to other systems with a similar ROM. After this patch, the only systems still using the C implementation are 68040 systems where adb_type is either MAC_ADB_IOP or MAC_ADB_II. This implies a 1 MiB Quadra ROM. This now includes the Quadra 900/950, which previously fell through to the "should never get here" catch-all. Reported-and-tested-by: Stan Johnson <userm57@yahoo.com> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Finn Thain <fthain@linux-m68k.org> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Link: https://lore.kernel.org/r/480ebd1249d229c6dc1f3f1c6d599b8505483fd8.1714797072.git.fthain@linux-m68k.org Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | m68k: Fix spinlock race in kernel thread creation | Michael Schmitz | 1 | -1/+3
[ Upstream commit da89ce46f02470ef08f0f580755d14d547da59ed ] Context switching does take care to retain the correct lock owner across the switch from 'prev' to 'next' tasks. This does rely on interrupts remaining disabled for the entire duration of the switch. This condition is guaranteed for normal process creation and context switching between already running processes, because both 'prev' and 'next' already have interrupts disabled in their saved copies of the status register. The situation is different for newly created kernel threads. The status register is set to PS_S in copy_thread(), which does leave the IPL at 0. Upon restoring the 'next' thread's status register in switch_to() aka resume(), interrupts then become enabled prematurely. resume() then returns via ret_from_kernel_thread() and schedule_tail() where the run queue lock is released (see finish_task_switch() and finish_lock_switch()). A timer interrupt calling scheduler_tick() before the lock is released in finish_task_switch() will find the lock already taken, with the current task as lock owner. This causes a spinlock recursion warning as reported by Guenter Roeck. As far as I can ascertain, this race was opened in commit 533e6903bea0 ("m68k: split ret_from_fork(), simplify kernel_thread()") but I haven't done a detailed study of kernel history so it may well predate that commit. Interrupts cannot be disabled in the saved status register copy for kernel threads (init will complain about interrupts disabled when finally starting user space). Disable interrupts temporarily when switching the tasks' register sets in resume(). Note that a simple oriw 0x700,%sr after restoring sr is not enough here - this leaves enough of a race for the 'spinlock recursion' warning to still be observed. Tested on ARAnyM and qemu (Quadra 800 emulation). Fixes: 533e6903bea0 ("m68k: split ret_from_fork(), simplify kernel_thread()") Reported-by: Guenter Roeck <linux@roeck-us.net> Closes: https://lore.kernel.org/all/07811b26-677c-4d05-aeb4-996cd880b789@roeck-us.net Signed-off-by: Michael Schmitz <schmitzmic@gmail.com> Tested-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Link: https://lore.kernel.org/r/20240411033631.16335-1-schmitzmic@gmail.com Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | x86/numa: Fix SRAT lookup of CFMWS ranges with numa_fill_memblks() | Robert Richter | 2 | -4/+2
[ Upstream commit f9f67e5adc8dc2e1cc51ab2d3d6382fa97f074d4 ] For configurations that have the kconfig option NUMA_KEEP_MEMINFO disabled, numa_fill_memblks() only returns with NUMA_NO_MEMBLK (-1). SRAT lookup fails then because an existing SRAT memory range cannot be found for a CFMWS address range. This causes the addition of a duplicate numa_memblk with a different node id and a subsequent page fault and kernel crash during boot. Fix this by making numa_fill_memblks() always available regardless of NUMA_KEEP_MEMINFO. As Dan suggested, the fix is implemented to remove numa_fill_memblks() from sparsemem.h and also to use __weak for the function. Note that the issue was initially introduced with [1]. But since phys_to_target_node() was originally used that returned the valid node 0, an additional numa_memblk was not added. The node id was still wrong, though, and a message is seen in the logs: kernel/numa.c: pr_info_once("Unknown target node for memory at 0x%llx, assuming node 0\n", [1] commit fd49f99c1809 ("ACPI: NUMA: Add a node and memblk for each CFMWS not in SRAT") Suggested-by: Dan Williams <dan.j.williams@intel.com> Link: https://lore.kernel.org/all/66271b0072317_69102944c@dwillia2-xfh.jf.intel.com.notmuch/ Fixes: 8f1004679987 ("ACPI/NUMA: Apply SRAT proximity domain to entire CFMWS window") Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Alison Schofield <alison.schofield@intel.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Robert Richter <rrichter@amd.com> Acked-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
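The resulting arrangement, as a minimal sketch (this mirrors the weak-default pattern the commit describes; placement details are assumed):
```
/* Weak default, always available to the SRAT/CFMWS code: report that
 * no matching memblk exists. An architecture with NUMA_KEEP_MEMINFO
 * support (x86) overrides this with a real range lookup. */
int __weak numa_fill_memblks(u64 start, u64 end)
{
	return NUMA_NO_MEMBLK;
}
```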
2024-06-12 | Revert "sh: Handle calling csum_partial with misaligned data" | Guenter Roeck | 1 | -49/+18
[ Upstream commit b5319c96292ff877f6b58d349acf0a9dc8d3b454 ] This reverts commit cadc4e1a2b4d20d0cc0e81f2c6ba0588775e54e5. Commit cadc4e1a2b4d ("sh: Handle calling csum_partial with misaligned data") causes bad checksum calculations on unaligned data. Reverting it fixes the problem.
    # Subtest: checksum
    # module: checksum_kunit
    1..5
    # test_csum_fixed_random_inputs: ASSERTION FAILED at lib/checksum_kunit.c:500
    Expected ( u64)result == ( u64)expec, but
        ( u64)result == 53378 (0xd082)
        ( u64)expec == 33488 (0x82d0)
    # test_csum_fixed_random_inputs: pass:0 fail:1 skip:0 total:1
    not ok 1 test_csum_fixed_random_inputs
    # test_csum_all_carry_inputs: ASSERTION FAILED at lib/checksum_kunit.c:525
    Expected ( u64)result == ( u64)expec, but
        ( u64)result == 65281 (0xff01)
        ( u64)expec == 65280 (0xff00)
    # test_csum_all_carry_inputs: pass:0 fail:1 skip:0 total:1
    not ok 2 test_csum_all_carry_inputs
    # test_csum_no_carry_inputs: ASSERTION FAILED at lib/checksum_kunit.c:573
    Expected ( u64)result == ( u64)expec, but
        ( u64)result == 65535 (0xffff)
        ( u64)expec == 65534 (0xfffe)
    # test_csum_no_carry_inputs: pass:0 fail:1 skip:0 total:1
    not ok 3 test_csum_no_carry_inputs
    # test_ip_fast_csum: pass:1 fail:0 skip:0 total:1
    ok 4 test_ip_fast_csum
    # test_csum_ipv6_magic: pass:1 fail:0 skip:0 total:1
    ok 5 test_csum_ipv6_magic
    # checksum: pass:2 fail:3 skip:0 total:5
    # Totals: pass:2 fail:3 skip:0 total:5
    not ok 22 checksum
Fixes: cadc4e1a2b4d ("sh: Handle calling csum_partial with misaligned data") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Link: https://lore.kernel.org/r/20240324231804.841099-1-linux@roeck-us.net Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | sh: kprobes: Merge arch_copy_kprobe() into arch_prepare_kprobe() | Geert Uytterhoeven | 1 | -6/+1
[ Upstream commit 1422ae080b66134fe192082d9b721ab7bd93fcc5 ] arch/sh/kernel/kprobes.c:52:16: warning: no previous prototype for 'arch_copy_kprobe' [-Wmissing-prototypes] Although SH kprobes support was only merged in v2.6.28, it missed the earlier removal of the arch_copy_kprobe() callback in v2.6.15. Based on the powerpc part of commit 49a2a1b83ba6fa40 ("[PATCH] kprobes: changed from using spinlock to mutex"). Fixes: d39f5450146ff39f ("sh: Add kprobes support.") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Link: https://lore.kernel.org/r/717d47a19689cc944fae6e981a1ad7cae1642c89.1709326528.git.geert+renesas@glider.be Signed-off-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | x86/purgatory: Switch to the position-independent small code model | Ard Biesheuvel | 1 | -1/+2
[ Upstream commit cba786af84a0f9716204e09f518ce3b7ada8555e ] On x86, the ordinary, position dependent small and kernel code models only support placement of the executable in 32-bit addressable memory, due to the use of 32-bit signed immediates to generate references to global variables. For the kernel, this implies that all global variables must reside in the top 2 GiB of the kernel virtual address space, where the implicit address bits 63:32 are equal to sign bit 31. This means the kernel code model is not suitable for other bare metal executables such as the kexec purgatory, which can be placed arbitrarily in the physical address space, where its address may no longer be representable as a sign extended 32-bit quantity. For this reason, commit e16c2983fba0 ("x86/purgatory: Change compiler flags from -mcmodel=kernel to -mcmodel=large to fix kexec relocation errors") switched to the large code model, which uses 64-bit immediates for all symbol references, including function calls, in order to avoid relying on any assumptions regarding proximity of symbols in the final executable. The large code model is rarely used, clunky and the least likely to operate in a similar fashion when comparing GCC and Clang, so it is best avoided. This is especially true now that Clang 18 has started to emit executable code in two separate sections (.text and .ltext), which triggers an issue in the kexec loading code at runtime. The SUSE bugzilla fixes tag points to gcc 13 having issues with the large model too and that perhaps the large model should simply not be used at all. Instead, use the position independent small code model, which makes no assumptions about placement but only about proximity, where all referenced symbols must be within -/+ 2 GiB, i.e., in range for a RIP-relative reference. Use hidden visibility to suppress the use of a GOT, which carries absolute addresses that are not covered by static ELF relocations, and is therefore incompatible with the kexec loader's relocation logic. [ bp: Massage commit message. ] Fixes: e16c2983fba0 ("x86/purgatory: Change compiler flags from -mcmodel=kernel to -mcmodel=large to fix kexec relocation errors") Fixes: https://bugzilla.suse.com/show_bug.cgi?id=1211853 Closes: https://github.com/ClangBuiltLinux/linux/issues/2016 Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Fangrui Song <maskray@google.com> Acked-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/all/20240417-x86-fix-kexec-with-llvm-18-v1-0-5383121e8fb7@kernel.org/ Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | openrisc: traps: Don't send signals to kernel mode threads | Stafford Horne | 1 | -19/+29
[ Upstream commit c88cfb5cea5f8f9868ef02cc9ce9183a26dcf20f ] OpenRISC exception handling sends signals to user processes on floating point exceptions and trap instructions (for debugging), among others. There is a bug where the trap handling logic may send signals to kernel threads; we should not send these signals to kernel threads, and if that happens we should treat it as an error. This patch adds conditions to die if the kernel receives these exceptions in kernel mode code. Fixes: 27267655c531 ("openrisc: Support floating point user api") Signed-off-by: Stafford Horne <shorne@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
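A hedged sketch of the added pattern in the trap handlers (the signal code and die() message are illustrative):
```
if (user_mode(regs)) {
	/* Debug/FP exceptions from user space: deliver a signal. */
	force_sig_fault(SIGFPE, FPE_FLTINV, (void __user *)regs->pc);
} else {
	/* The same exception in kernel mode code is a fatal error. */
	die("FP exception in kernel mode", regs, SIGFPE);
}
```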
2024-06-12 | x86/pat: Fix W^X violation false-positives when running as Xen PV guest | Juergen Gross | 1 | -8/+18
[ Upstream commit 5bc8b0f5dac04cd4ebe47f8090a5942f2f2647ef ] When running as Xen PV guest in some cases W^X violation WARN()s have been observed. Those WARN()s are produced by verify_rwx(), which looks into the PTE to verify that writable kernel pages have the NX bit set in order to avoid code modifications of the kernel by rogue code. As the NX bits of all levels of translation entries are or-ed and the RW bits of all levels are and-ed, looking just into the PTE isn't enough for the decision that a writable page is executable, too. When running as a Xen PV guest, the direct map PMDs and kernel high map PMDs share the same set of PTEs. Xen kernel initialization will set the NX bit in the direct map PMD entries, and not the shared PTEs. Fixes: 652c5bf380ad ("x86/mm: Refuse W^X violations") Reported-by: Jason Andryuk <jandryuk@gmail.com> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20240412151258.9171-5-jgross@suse.com Signed-off-by: Sasha Levin <sashal@kernel.org>
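A hedged sketch of the accumulation rule verify_rwx() must follow to match the hardware (flag helpers abbreviated; the two preparatory patches listed next in this log add the plumbing):
```
/* Effective NX is the OR, and effective RW the AND, of the bits at
 * every level walked: a writable-looking PTE is still non-executable
 * if a covering PMD has NX set, as in the Xen PV layout described. */
bool nx = false, rw = true;

nx |= !!(pgd_flags(*pgd) & _PAGE_NX);
rw &= !!(pgd_flags(*pgd) & _PAGE_RW);
/* ... repeated for the P4D, PUD, PMD and PTE levels ... */
```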
2024-06-12 | x86/pat: Restructure _lookup_address_cpa() | Juergen Gross | 1 | -4/+7
[ Upstream commit 02eac06b820c3eae73e5736ae62f986d37fed991 ] Modify _lookup_address_cpa() to no longer use lookup_address(), but only lookup_address_in_pgd(). This is done in preparation for using lookup_address_in_pgd_attr(). No functional change intended. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20240412151258.9171-4-jgross@suse.com Stable-dep-of: 5bc8b0f5dac0 ("x86/pat: Fix W^X violation false-positives when running as Xen PV guest") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | x86/pat: Introduce lookup_address_in_pgd_attr() | Juergen Gross | 2 | -3/+32
[ Upstream commit ceb647b4b529fdeca9021cd34486f5a170746bda ] Add lookup_address_in_pgd_attr() doing the same as the already existing lookup_address_in_pgd(), but returning the effective settings of the NX and RW bits of all walked page table levels, too. This will be needed in order to match hardware behavior when looking for effective access rights, especially for detecting writable code pages. In order to avoid code duplication, let lookup_address_in_pgd() call lookup_address_in_pgd_attr() with dummy parameters. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20240412151258.9171-2-jgross@suse.com Stable-dep-of: 5bc8b0f5dac0 ("x86/pat: Fix W^X violation false-positives when running as Xen PV guest") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | x86/boot/64: Clear most of CR4 in startup_64(), except PAE, MCE and LA57 | Ard Biesheuvel | 1 | -0/+5
[ Upstream commit a0025f587c685e5ff842fb0194036f2ca0b6eaf4 ] The early 64-bit boot code must be entered with a 1:1 mapping of the bootable image, but it cannot operate without a 1:1 mapping of all the assets in memory that it accesses, and therefore, it creates such mappings for all known assets upfront, and additional ones on demand when a page fault happens on a memory address. These mappings are created with the global bit G set, as the flags used to create page table descriptors are based on __PAGE_KERNEL_LARGE_EXEC defined by the core kernel, even though the context where these mappings are used is very different. This means that the TLB maintenance carried out by the decompressor is not sufficient if it is entered with CR4.PGE enabled, which has been observed to happen with the stage0 bootloader of project Oak. While this is a dubious practice if no global mappings are being used to begin with, the decompressor is clearly at fault here for creating global mappings and not performing the appropriate TLB maintenance. Since commit: f97b67a773cd84b ("x86/decompressor: Only call the trampoline when changing paging levels") CR4 is no longer modified by the decompressor if no change in the number of paging levels is needed. Before that, CR4 would always be set to a consistent value with PGE cleared. So let's reinstate a simplified version of the original logic to put CR4 into a known state, and preserve the PAE, MCE and LA57 bits, none of which can be modified freely at this point (PAE and LA57 cannot be changed while running in long mode, and MCE cannot be cleared when running under some hypervisors). This effectively clears PGE and works around the project Oak bug. Fixes: f97b67a773cd84b ("x86/decompressor: Only call the trampoline when ...") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20240410151354.506098-2-ardb+git@google.com Signed-off-by: Sasha Levin <sashal@kernel.org>
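A hedged C rendering of the reinstated logic (the real change is assembly in startup_64; the accessors are the kernel's standard CR4 helpers):
```
/* Put CR4 into a known state: keep only the bits that must not be
 * toggled here (PAE/LA57 are fixed in long mode; MCE may be pinned by
 * a hypervisor). Everything else, notably PGE, is cleared, which also
 * flushes any stale global TLB entries left behind by a bootloader. */
unsigned long cr4 = native_read_cr4();

cr4 &= X86_CR4_PAE | X86_CR4_MCE | X86_CR4_LA57;
native_write_cr4(cr4);
```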
2024-06-12 | locking/atomic/x86: Correct the definition of __arch_try_cmpxchg128() | Uros Bizjak | 1 | -1/+1
[ Upstream commit 929ad065ba2967be238dfdc0895b79fda62c7f16 ] Correct the definition of __arch_try_cmpxchg128(), introduced by: b23e139d0b66 ("arch: Introduce arch_{,try_}_cmpxchg128{,_local}()") Fixes: b23e139d0b66 ("arch: Introduce arch_{,try_}_cmpxchg128{,_local}()") Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20240408091547.90111-2-ubizjak@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | x86/boot: Ignore relocations in .notes sections in walk_relocs() too | Guixiong Wei | 1 | -0/+9
[ Upstream commit 76e9762d66373354b45c33b60e9a53ef2a3c5ff2 ] Commit: aaa8736370db ("x86, relocs: Ignore relocations in .notes section") ... only started ignoring the .notes sections in print_absolute_relocs(), but the same logic should also be applied in walk_relocs() to avoid such relocations. [ mingo: Fixed various typos in the changelog, removed extra curly braces from the code. ] Fixes: aaa8736370db ("x86, relocs: Ignore relocations in .notes section") Fixes: 5ead97c84fa7 ("xen: Core Xen implementation") Fixes: da1a679cde9b ("Add /sys/kernel/notes") Signed-off-by: Guixiong Wei <weiguixiong@bytedance.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20240317150547.24910-1-weiguixiong@bytedance.com Signed-off-by: Sasha Levin <sashal@kernel.org>
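A hedged sketch of the added filter (helper names follow relocs.c conventions but are assumptions here):
```
/* Relocation sections name their target section in sh_info; skip the
 * walk entirely when the target is .notes, matching what
 * print_absolute_relocs() already does. */
if (!strcmp(sec_name(sec->shdr.sh_info), ".notes"))
	continue;
```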
2024-06-12 | s390/mm: Re-enable the shared zeropage for !PV and !skeys KVM guests | David Hildenbrand | 6 | -47/+146
[ Upstream commit 06201e00ee3e4beacac48aab2b83eff64ebf0bc0 ] commit fa41ba0d08de ("s390/mm: avoid empty zero pages for KVM guests to avoid postcopy hangs") introduced an undesired side effect when combined with memory ballooning and VM migration: memory part of the inflated memory balloon will consume memory. Assuming we have a 100GiB VM and inflated the balloon to 40GiB. Our VM will consume ~60GiB of memory. If we now trigger a VM migration, hypervisors like QEMU will read all VM memory. As s390x does not support the shared zeropage, we'll end up allocating for all previously-inflated memory part of the memory balloon: 40 GiB. So we might easily (unexpectedly) crash the VM on the migration source. Even worse, hypervisors like QEMU optimize for zeropage migration to not consume memory on the migration destination: when migrating a "page full of zeroes", on the migration destination they check whether the target memory is already zero (by reading the destination memory) and avoid writing to the memory to not allocate memory: however, s390x will also allocate memory here, implying that also on the migration destination, we will end up allocating all previously-inflated memory part of the memory balloon. This is especially bad if actual memory overcommit was not desired, when memory ballooning is used for dynamic VM memory resizing, setting aside some memory during boot that can be added later on demand. Alternatives like virtio-mem that would avoid this issue are not yet available on s390x. There could be ways to optimize some cases in user space: before reading memory in an anonymous private mapping on the migration source, check via /proc/self/pagemap if anything is already populated. Similarly check on the migration destination before reading. While that would avoid populating tables full of shared zeropages on all architectures, it's harder to get right and performant, and requires user space changes. Further, with postcopy live migration we must place a page, so there, "avoid touching memory to avoid allocating memory" is not really possible. (Note that previously we would have falsely inserted shared zeropages into processes using UFFDIO_ZEROPAGE where mm_forbids_zeropage() would have actually forbidden it.) PV is currently incompatible with memory ballooning, and in the common case, KVM guests don't make use of storage keys. Instead of zapping zeropages when enabling storage keys / PV, which turned out to be problematic in the past, let's do exactly what we do with KSM pages: trigger unsharing faults to replace the shared zeropages by proper anonymous folios. What about added latency when enabling storage keys? Having a lot of zeropages in applicable environments (PV, legacy guests, unittests) is unexpected. Further, KSM could already unshare the zeropages today, and unmerging KSM pages when enabling storage keys would unshare the KSM-placed zeropages in the same way, resulting in the same latency. [ agordeev: Fixed sparse and checkpatch complaints and error handling ] Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com> Tested-by: Christian Borntraeger <borntraeger@linux.ibm.com> Fixes: fa41ba0d08de ("s390/mm: avoid empty zero pages for KVM guests to avoid postcopy hangs") Signed-off-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20240411161441.910170-3-david@redhat.com Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | ARM: configs: sunxi: Enable DRM_DW_HDMI | Maxime Ripard | 1 | -0/+1
[ Upstream commit deff401b14e2d832b25b55862ad6c73378fe034e ] Commit 4fc8cb47fcfd ("drm/display: Move HDMI helpers into display-helper module") turned the DRM_DW_HDMI dependency of DRM_SUN8I_DW_HDMI into a 'depends on', which ended up disabling the driver in the defconfig. Make sure it's still enabled. Fixes: 4fc8cb47fcfd ("drm/display: Move HDMI helpers into display-helper module") Reported-by: Mark Brown <broonie@kernel.org> Reported-by: Alexander Stein <alexander.stein@ew.tq-group.com> Signed-off-by: Maxime Ripard <mripard@kernel.org> Acked-by: Jernej Skrabec <jernej.skrabec@gmail.com> Link: https://lore.kernel.org/r/20240403-fix-dw-hdmi-kconfig-v1-5-afbc4a835c38@kernel.org Signed-off-by: Jernej Skrabec <jernej.skrabec@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | crypto: x86/sha512-avx2 - add missing vzeroupper | Eric Biggers | 1 | -0/+1
[ Upstream commit 6a24fdfe1edbafacdacd53516654d99068f20eec ] Since sha512_transform_rorx() uses ymm registers, execute vzeroupper before returning from it. This is necessary to avoid reducing the performance of SSE code. Fixes: e01d69cb0195 ("crypto: sha512 - Optimized SHA512 x86_64 assembly routine using AVX instructions.") Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
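For context, the same rule compilers apply around AVX intrinsics, shown as a hedged C illustration; hand-written asm like sha512_transform_rorx() (and the sha256 and NHPoly1305 routines in the next two entries) must emit the instruction itself:
```
#include <immintrin.h>

/* After touching ymm registers, VZEROUPPER clears their upper halves
 * so subsequent SSE code avoids costly AVX->SSE transition penalties. */
static void add8(float *dst, const float *a, const float *b)
{
	__m256 v = _mm256_add_ps(_mm256_loadu_ps(a), _mm256_loadu_ps(b));

	_mm256_storeu_ps(dst, v);
	_mm256_zeroupper();	/* the instruction the patch adds */
}
```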
2024-06-12 | crypto: x86/sha256-avx2 - add missing vzeroupper | Eric Biggers | 1 | -0/+1
[ Upstream commit 57ce8a4e162599cf9adafef1f29763160a8e5564 ] Since sha256_transform_rorx() uses ymm registers, execute vzeroupper before returning from it. This is necessary to avoid reducing the performance of SSE code. Fixes: d34a460092d8 ("crypto: sha256 - Optimized sha256 x86_64 routine using AVX2's RORX instructions") Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | crypto: x86/nh-avx2 - add missing vzeroupper | Eric Biggers | 1 | -0/+1
[ Upstream commit 4ad096cca942959871d8ff73826d30f81f856f6e ] Since nh_avx2() uses ymm registers, execute vzeroupper before returning from it. This is necessary to avoid reducing the performance of SSE code. Fixes: 0f961f9f670e ("crypto: x86/nhpoly1305 - add AVX2 accelerated NHPoly1305") Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | parisc: add missing export of __cmpxchg_u8() | Al Viro | 1 | -0/+1
[ Upstream commit c57e5dccb06decf3cb6c272ab138c033727149b5 ] __cmpxchg_u8() had been added (initially) for the sake of drivers/phy/ti/phy-tusb1210.c; the thing is, that driver is modular, so we need an export. Fixes: b344d6a83d01 "parisc: add support for cmpxchg on u8 pointers" Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
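The fix is essentially one line; a sketch of it (parisc, like other architectures, collects such exports centrally):
```
EXPORT_SYMBOL(__cmpxchg_u8);	/* needed by modular users of cmpxchg() on u8 */
```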
2024-06-12 | x86/mm: Remove broken vsyscall emulation code from the page fault code | Linus Torvalds | 3 | -59/+3
[ Upstream commit 02b670c1f88e78f42a6c5aee155c7b26960ca054 ] The syzbot-reported stack trace from hell in this discussion thread actually has three nested page faults: https://lore.kernel.org/r/000000000000d5f4fc0616e816d4@google.com ... and I think that's actually the important thing here: - the first page fault is from user space, and triggers the vsyscall emulation. - the second page fault is from __do_sys_gettimeofday(), and that should just have caused the exception that then sets the return value to -EFAULT - the third nested page fault is due to _raw_spin_unlock_irqrestore() -> preempt_schedule() -> trace_sched_switch(), which then causes a BPF trace program to run, which does that bpf_probe_read_compat(), which causes that page fault under pagefault_disable(). It's quite the nasty backtrace, and there's a lot going on. The problem is literally the vsyscall emulation, which sets current->thread.sig_on_uaccess_err = 1; and that causes the fixup_exception() code to send the signal *despite* the exception being caught. And I think that is in fact completely bogus. It's completely bogus exactly because it sends that signal even when it *shouldn't* be sent - like for the BPF user mode trace gathering. In other words, I think the whole "sig_on_uaccess_err" thing is entirely broken, because it makes any nested page-faults do all the wrong things. Now, arguably, I don't think anybody should enable vsyscall emulation any more, but this test case clearly does. I think we should just make the "send SIGSEGV" be something that the vsyscall emulation does on its own, not this broken per-thread state for something that isn't actually per thread. The x86 page fault code actually tried to deal with the "incorrect nesting" by having that: if (in_interrupt()) return; which ignores the sig_on_uaccess_err case when it happens in interrupts, but as shown by this example, these nested page faults do not need to be about interrupts at all. IOW, I think the only right thing is to remove that horrendously broken code. The attached patch looks like the ObviouslyCorrect(tm) thing to do. NOTE! This broken code goes back to this commit in 2011: 4fc3490114bb ("x86-64: Set siginfo and context on vsyscall emulation faults") ... and back then the reason was to get all the siginfo details right. Honestly, I do not for a moment believe that it's worth getting the siginfo details right here, but part of the commit says: This fixes issues with UML when vsyscall=emulate. ... and so my patch to remove this garbage will probably break UML in this situation. I do not believe that anybody should be running with vsyscall=emulate in 2024 in the first place, much less if you are doing things like UML. But let's see if somebody screams. Reported-and-tested-by: syzbot+83e7f982ca045ab4405c@syzkaller.appspotmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Andy Lutomirski <luto@kernel.org> Link: https://lore.kernel.org/r/CAHk-=wh9D6f7HUkDgZHKmDCHUQmp+Co89GP+b8+z+G56BKeyNg@mail.gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | bpf, x86: Fix PROBE_MEM runtime load check | Puranjay Mohan | 1 | -32/+25
[ Upstream commit b599d7d26d6ad1fc9975218574bc2ca6d0293cfd ] When a load is marked PROBE_MEM - e.g. due to PTR_UNTRUSTED access - the address being loaded from is not necessarily valid. The BPF jit sets up exception handlers for each such load which catch page faults and 0 out the destination register. If the address for the load is outside kernel address space, the load will escape the exception handling and crash the kernel. To prevent this from happening, the JIT emits some instructions to verify that addr is > end of userspace addresses. x86 has a legacy vsyscall ABI where a page at address 0xffffffffff600000 is mapped with user accessible permissions. The addresses in this page are considered userspace addresses by the fault handler. Therefore, a BPF program accessing this page will crash the kernel. This patch fixes the runtime checks to also check that the PROBE_MEM address is below VSYSCALL_ADDR. Example BPF program:
    SEC("fentry/tcp_v4_connect")
    int BPF_PROG(fentry_tcp_v4_connect, struct sock *sk)
    {
        *(volatile unsigned long *)&sk->sk_tsq_flags;
        return 0;
    }
BPF Assembly:
    0: (79) r1 = *(u64 *)(r1 +0)
    1: (79) r1 = *(u64 *)(r1 +344)
    2: (b7) r0 = 0
    3: (95) exit
x86-64 JIT
==========
         BEFORE                                AFTER
         ------                                -----
     0: nopl 0x0(%rax,%rax,1)              0: nopl 0x0(%rax,%rax,1)
     5: xchg %ax,%ax                       5: xchg %ax,%ax
     7: push %rbp                          7: push %rbp
     8: mov %rsp,%rbp                      8: mov %rsp,%rbp
     b: mov 0x0(%rdi),%rdi                 b: mov 0x0(%rdi),%rdi
    -------------------------------------------------------------------------
     f: movabs $0x100000000000000,%r11     f: movabs $0xffffffffff600000,%r10
    19: add $0x2a0,%rdi                   19: mov %rdi,%r11
    20: cmp %r11,%rdi                     1c: add $0x2a0,%r11
    23: jae 0x0000000000000029            23: sub %r10,%r11
    25: xor %edi,%edi                     26: movabs $0x100000000a00000,%r10
    27: jmp 0x000000000000002d            30: cmp %r10,%r11
    29: mov 0x0(%rdi),%rdi                33: ja 0x0000000000000039
    --------------------------------\     35: xor %edi,%edi
    2d: xor %eax,%eax                \    37: jmp 0x0000000000000040
    2f: leave                         \   39: mov 0x2a0(%rdi),%rdi
    30: ret                            \- 40: xor %eax,%eax
                                          42: leave
                                          43: ret
Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Link: https://lore.kernel.org/r/20240424100210.11982-3-puranjay@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | LoongArch: Lately init pmu after smp is online | Bibo Mao | 1 | -1/+1
[ Upstream commit f3334ebb8a2a1841c2824594dd992e66de19deb2 ] There is an smp function call named reset_counters() to init the PMU registers of every CPU in the PMU initialization state. It requires that all CPUs are online. However there is an early_initcall() wrapper for the PMU init function init_hw_perf_events(), so the PMU init function is called in do_pre_smp_initcalls(), which runs before smp_init(). Function reset_counters() cannot work on the other CPUs since they haven't booted up yet. Replace the early_initcall() wrapper with pure_initcall(), so that the PMU init function is called after every CPU is online. Signed-off-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Sasha Levin <sashal@kernel.org>
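A sketch of the one-line change (the initcall names are real; the comment summarizes the ordering):
```
/* pure_initcall() runs from do_initcalls() after smp_init(), so all
 * CPUs are online when reset_counters() is broadcast; the previous
 * early_initcall() ran in do_pre_smp_initcalls(), too early. */
pure_initcall(init_hw_perf_events);	/* was: early_initcall(init_hw_perf_events); */
```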
2024-06-12 | cpu: Ignore "mitigations" kernel parameter if CPU_MITIGATIONS=n | Sean Christopherson | 1 | -2/+6
[ Upstream commit ce0abef6a1d540acef85068e0e82bdf1fbeeb0e9 ] Explicitly disallow enabling mitigations at runtime for kernels that were built with CONFIG_CPU_MITIGATIONS=n, as some architectures may omit code entirely if mitigations are disabled at compile time. E.g. on x86, a large pile of Kconfigs are buried behind CPU_MITIGATIONS, and trying to provide sane behavior for retroactively enabling mitigations is extremely difficult, bordering on impossible. E.g. page table isolation and call depth tracking require build-time support, BHI mitigations will still be off without additional kernel parameters, etc. [ bp: Touchups. ] Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240420000556.2645001-3-seanjc@google.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-06-12 | x86/tsc: Trust initial offset in architectural TSC-adjust MSRs | Daniel J Blueman | 1 | -4/+2
commit 455f9075f14484f358b3c1d6845b4a438de198a7 upstream. When the BIOS configures the architectural TSC-adjust MSRs on secondary sockets to correct a constant inter-chassis offset, after Linux brings the cores online, the TSC sync check later resets the core-local MSR to 0, triggering HPET fallback and leading to performance loss. Fix this by unconditionally using the initial adjust values read from the MSRs. Trusting the initial offsets in this architectural mechanism is a better approach than special-casing workarounds for specific platforms. Signed-off-by: Daniel J Blueman <daniel@quora.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steffen Persvold <sp@numascale.com> Reviewed-by: James Cleverdon <james.cleverdon.external@eviden.com> Reviewed-by: Dimitri Sivanich <sivanich@hpe.com> Reviewed-by: Prarit Bhargava <prarit@redhat.com> Link: https://lore.kernel.org/r/20240419085146.175665-1-daniel@quora.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-05-17 | x86/apic: Don't access the APIC when disabling x2APIC | Thomas Gleixner | 1 | -5/+11
commit 720a22fd6c1cdadf691281909950c0cbc5cdf17e upstream. With 'iommu=off' on the kernel command line and x2APIC enabled by the BIOS the code which disables the x2APIC triggers an unchecked MSR access error: RDMSR from 0x802 at rIP: 0xffffffff94079992 (native_apic_msr_read+0x12/0x50) This happens because default_acpi_madt_oem_check() selects an x2APIC driver before the x2APIC is disabled. When the x2APIC is disabled because interrupt remapping cannot be enabled due to 'iommu=off' on the command line, x2apic_disable() invokes apic_set_fixmap() which in turn tries to read the APIC ID. This triggers the MSR warning because x2APIC is disabled, but the APIC driver is still x2APIC based. Prevent that by adding an argument to apic_set_fixmap() which makes the APIC ID read out conditional and set it to false from the x2APIC disable path. That's correct as the APIC ID has already been read out during early discovery. Fixes: d10a904435fa ("x86/apic: Consolidate boot_cpu_physical_apicid initialization sites") Reported-by: Adrian Huang <ahuang12@lenovo.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Adrian Huang <ahuang12@lenovo.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/875xw5t6r7.ffs@tglx Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
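A hedged sketch of the reworked helper (field and helper names are approximations of the apic code, not verified signatures):
```
/* Map the APIC fixmap unconditionally, but only read the APIC ID when
 * the caller asks for it; the x2APIC disable path passes false, since
 * the ID was already read out during early discovery. */
static __init void apic_set_fixmap(bool read_apic)
{
	set_fixmap_nocache(FIX_APIC_BASE, mp_lapic_addr);
	apic_mmio_base = APIC_BASE;
	if (read_apic)
		boot_cpu_physical_apicid = read_apic_id();
}
```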
2024-05-17 | arm64: dts: qcom: sa8155p-adp: fix SDHC2 CD pin configuration | Volodymyr Babchuk | 1 | -17/+13
commit 819fe8c96a5172dfd960e5945e8f00f8fed32953 upstream. There are two issues with SDHC2 configuration for SA8155P-ADP, which prevent use of SDHC2 and causes issues with ethernet: - Card Detect pin for SHDC2 on SA8155P-ADP is connected to gpio4 of PMM8155AU_1, not to SoC itself. SoC's gpio4 is used for DWMAC TX. If sdhc driver probes after dwmac driver, it reconfigures gpio4 and this breaks Ethernet MAC. - pinctrl configuration mentions gpio96 as CD pin. It seems it was copied from some SM8150 example, because as mentioned above, correct CD pin is gpio4 on PMM8155AU_1. This patch fixes both mentioned issues by providing correct pin handle and pinctrl configuration. Fixes: 0deb2624e2d0 ("arm64: dts: qcom: sa8155p-adp: Add support for uSD card") Cc: stable@vger.kernel.org Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com> Reviewed-by: Stephan Gerhold <stephan@gerhold.net> Link: https://lore.kernel.org/r/20240412190310.1647893-1-volodymyr_babchuk@epam.com Signed-off-by: Bjorn Andersson <andersson@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-05-17 | xtensa: fix MAKE_PC_FROM_RA second argument | Max Filippov | 4 | -8/+10
commit 0e60f0b75884677fb9f4f2ad40d52b43451564d5 upstream. Xtensa has two-argument MAKE_PC_FROM_RA macro to convert a0 to an actual return address because when windowed ABI is used call{,x}{4,8,12} opcodes stuff encoded window size into the top 2 bits of the register that becomes a return address in the called function. Second argument of that macro is supposed to be an address having these 2 topmost bits set correctly, but the comment suggested that that could be the stack address. However the stack doesn't have to be in the same 1GByte region as the code, especially in noMMU XIP configurations. Fix the comment and use either _text or regs->pc as the second argument for the MAKE_PC_FROM_RA macro. Cc: stable@vger.kernel.org Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
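A hedged sketch of the macro after the fix (the masks follow the windowed-ABI layout the commit describes):
```
/* call{,x}{4,8,12} stuff the window size into bits 31:30 of a0, so the
 * real PC is the low 30 bits of RA combined with the top 2 bits of an
 * address known to be in the code region: _text or regs->pc, which,
 * unlike the stack, is guaranteed to share the code's 1 GiB region. */
#define MAKE_PC_FROM_RA(ra, text) \
	(((ra) & 0x3fffffff) | ((unsigned long)(text) & 0xc0000000))
```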
2024-05-17 | ARM: 9381/1: kasan: clear stale stack poison | Boy.Wu | 1 | -0/+4
[ Upstream commit c4238686f9093b98bd6245a348bcf059cdce23af ] We found the below OOB crash:
    [ 33.452494] ==================================================================
    [ 33.453513] BUG: KASAN: stack-out-of-bounds in refresh_cpu_vm_stats.constprop.0+0xcc/0x2ec
    [ 33.454660] Write of size 164 at addr c1d03d30 by task swapper/0/0
    [ 33.455515]
    [ 33.455767] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G O 6.1.25-mainline #1
    [ 33.456880] Hardware name: Generic DT based system
    [ 33.457555] unwind_backtrace from show_stack+0x18/0x1c
    [ 33.458326] show_stack from dump_stack_lvl+0x40/0x4c
    [ 33.459072] dump_stack_lvl from print_report+0x158/0x4a4
    [ 33.459863] print_report from kasan_report+0x9c/0x148
    [ 33.460616] kasan_report from kasan_check_range+0x94/0x1a0
    [ 33.461424] kasan_check_range from memset+0x20/0x3c
    [ 33.462157] memset from refresh_cpu_vm_stats.constprop.0+0xcc/0x2ec
    [ 33.463064] refresh_cpu_vm_stats.constprop.0 from tick_nohz_idle_stop_tick+0x180/0x53c
    [ 33.464181] tick_nohz_idle_stop_tick from do_idle+0x264/0x354
    [ 33.465029] do_idle from cpu_startup_entry+0x20/0x24
    [ 33.465769] cpu_startup_entry from rest_init+0xf0/0xf4
    [ 33.466528] rest_init from arch_post_acpi_subsys_init+0x0/0x18
    [ 33.467397]
    [ 33.467644] The buggy address belongs to stack of task swapper/0/0
    [ 33.468493] and is located at offset 112 in frame:
    [ 33.469172] refresh_cpu_vm_stats.constprop.0+0x0/0x2ec
    [ 33.469917]
    [ 33.470165] This frame has 2 objects:
    [ 33.470696] [32, 76) 'global_zone_diff'
    [ 33.470729] [112, 276) 'global_node_diff'
    [ 33.471294]
    [ 33.472095] The buggy address belongs to the physical page:
    [ 33.472862] page:3cd72da8 refcount:1 mapcount:0 mapping:00000000 index:0x0 pfn:0x41d03
    [ 33.473944] flags: 0x1000(reserved|zone=0)
    [ 33.474565] raw: 00001000 ed741470 ed741470 00000000 00000000 00000000 ffffffff 00000001
    [ 33.475656] raw: 00000000
    [ 33.476050] page dumped because: kasan: bad access detected
    [ 33.476816]
    [ 33.477061] Memory state around the buggy address:
    [ 33.477732] c1d03c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    [ 33.478630] c1d03c80: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00 00 00 00
    [ 33.479526] >c1d03d00: 00 04 f2 f2 f2 f2 00 00 00 00 00 00 f1 f1 f1 f1
    [ 33.480415] ^
    [ 33.481195] c1d03d80: 00 00 00 00 00 00 00 00 00 00 04 f3 f3 f3 f3 f3
    [ 33.482088] c1d03e00: f3 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
    [ 33.482978] ==================================================================
We found that the root cause of this OOB is that arm does not clear stale stack poison in the case of cpuidle. This patch refers to arch/arm64/kernel/sleep.S to resolve this issue. Cited commit [1] explains the problem: Functions which the compiler has instrumented for KASAN place poison on the stack shadow upon entry and remove this poison prior to returning. In the case of cpuidle, CPUs exit the kernel a number of levels deep in C code. Any instrumented functions on this critical path will leave portions of the stack shadow poisoned. If CPUs lose context and return to the kernel via a cold path, we restore a prior context saved in __cpu_suspend_enter; the poison placed in the stack shadow area by function calls between that point and the actual exit of the kernel is forgotten and never removed. Thus, (depending on stackframe layout) subsequent calls to instrumented functions may hit this stale poison, resulting in (spurious) KASAN splats to the console. To avoid this, clear any stale poison from the idle thread for a CPU prior to bringing a CPU online. Cited commit [2] extends this to also check for CONFIG_KASAN_STACK. [1] commit 0d97e6d8024c ("arm64: kasan: clear stale stack poison") [2] commit d56a9ef84bd0 ("kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK") Signed-off-by: Boy Wu <boy.wu@mediatek.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Fixes: 5615f69bc209 ("ARM: 9016/2: Initialize the mapping of KASan shadow memory") Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-05-17 | powerpc/crypto/chacha-p10: Fix failure on non Power10 | Michael Ellerman | 1 | -1/+7
[ Upstream commit 69630926011c1f7170a465b7b5c228deb66e9372 ] The chacha-p10-crypto module provides optimised chacha routines for Power10. It also selects CRYPTO_ARCH_HAVE_LIB_CHACHA which says it provides chacha_crypt_arch() to generic code. Notably the module needs to provide chacha_crypt_arch() regardless of whether it is loaded on Power10 or an older CPU. The implementation of chacha_crypt_arch() already has a fallback to chacha_crypt_generic(), however the module as a whole fails to load on pre-Power10, because of the use of module_cpu_feature_match(). This breaks for example loading wireguard: jostaberry-1:~ # modprobe -v wireguard insmod /lib/modules/6.8.0-lp155.8.g7e0e887-default/kernel/arch/powerpc/crypto/chacha-p10-crypto.ko.zst modprobe: ERROR: could not insert 'wireguard': No such device Fix it by removing module_cpu_feature_match(), and instead check the CPU feature manually. If the CPU feature is not found, the module still loads successfully, but doesn't register the Power10 specific algorithms. That allows chacha_crypt_generic() to remain available for use, fixing the problem. [root@fedora ~]# modprobe -v wireguard insmod /lib/modules/6.8.0-00001-g786a790c4d79/kernel/net/ipv4/udp_tunnel.ko insmod /lib/modules/6.8.0-00001-g786a790c4d79/kernel/net/ipv6/ip6_udp_tunnel.ko insmod /lib/modules/6.8.0-00001-g786a790c4d79/kernel/lib/crypto/libchacha.ko insmod /lib/modules/6.8.0-00001-g786a790c4d79/kernel/arch/powerpc/crypto/chacha-p10-crypto.ko insmod /lib/modules/6.8.0-00001-g786a790c4d79/kernel/lib/crypto/libchacha20poly1305.ko insmod /lib/modules/6.8.0-00001-g786a790c4d79/kernel/drivers/net/wireguard/wireguard.ko [ 18.910452][ T721] wireguard: allowedips self-tests: pass [ 18.914999][ T721] wireguard: nonce counter self-tests: pass [ 19.029066][ T721] wireguard: ratelimiter self-tests: pass [ 19.029257][ T721] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. [ 19.029361][ T721] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Reported-by: Michal Suchánek <msuchanek@suse.de> Closes: https://lore.kernel.org/all/20240315122005.GG20665@kitsune.suse.cz/ Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240328130200.3041687-1-mpe@ellerman.id.au Signed-off-by: Sasha Levin <sashal@kernel.org>
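A hedged sketch of the fixed module init, close to the pattern the commit describes (symbol names like have_p10 and algs are assumptions):
```
/* Always let the module load; only register the Power10 algorithms
 * when the CPU implements ISA v3.1, so chacha_crypt_arch() can fall
 * back to chacha_crypt_generic() everywhere else. */
static int __init chacha_p10_init(void)
{
	if (!cpu_has_feature(CPU_FTR_ARCH_31))
		return 0;

	static_branch_enable(&have_p10);
	return crypto_register_skciphers(algs, ARRAY_SIZE(algs));
}
module_init(chacha_p10_init);	/* replaces module_cpu_feature_match() */
```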
2024-05-17 | MIPS: scall: Save thread_info.syscall unconditionally on entry | Jiaxun Yang | 7 | -38/+42
[ Upstream commit 4370b673ccf240bf7587b0cb8e6726a5ccaf1f17 ] thread_info.syscall is used by syscall_get_nr() to supply the syscall nr for a thread stack frame. Previously, thread_info.syscall was only saved in syscall_trace_enter() when syscall tracing was enabled. However, the rest of the kernel does expect syscall_get_nr() to be available without syscall tracing. The previous design breaks collect_syscall(). Move the saving to syscall entry to fix it. Reported-by: Xi Ruoyao <xry111@xry111.site> Link: https://github.com/util-linux/util-linux/issues/2867 Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
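For context, a hedged sketch of the accessor that must now always work (the generic shape of syscall_get_nr() on MIPS):
```
static inline long syscall_get_nr(struct task_struct *task,
				  struct pt_regs *regs)
{
	/* Only valid if thread_info.syscall is saved on every entry,
	 * not just when syscall tracing is enabled. */
	return task_thread_info(task)->syscall;
}
```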
2024-05-17 | KVM: arm64: vgic-v2: Check for non-NULL vCPU in vgic_v2_parse_attr() | Oliver Upton | 1 | -4/+4
[ Upstream commit 6ddb4f372fc63210034b903d96ebbeb3c7195adb ] vgic_v2_parse_attr() is responsible for finding the vCPU that matches the user-provided CPUID, which (of course) may not be valid. If the ID is invalid, kvm_get_vcpu_by_id() returns NULL, which isn't handled gracefully. Similar to the GICv3 uaccess flow, check that kvm_get_vcpu_by_id() actually returns something and fail the ioctl if not. Cc: stable@vger.kernel.org Fixes: 7d450e282171 ("KVM: arm/arm64: vgic-new: Add userland access to VGIC dist registers") Reported-by: Alexander Potapenko <glider@google.com> Tested-by: Alexander Potapenko <glider@google.com> Reviewed-by: Alexander Potapenko <glider@google.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240424173959.3776798-2-oliver.upton@linux.dev Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Sasha Levin <sashal@kernel.org>
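A hedged sketch of the fixed lookup, mirroring the GICv3 uaccess flow the commit refers to:
```
vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid);
if (!vcpu)	/* the user-provided CPUID matched no vCPU */
	return -EINVAL;

reg_attr->vcpu = vcpu;
```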
2024-05-17 | KVM: arm64: vgic-v2: Use cpuid from userspace as vcpu_id | Marc Zyngier | 1 | -6/+2
[ Upstream commit 4e7728c81a54b17bd33be402ac140bc11bb0c4f4 ] When parsing a GICv2 attribute that contains a cpuid, handle this as the vcpu_id, not a vcpu_idx, as userspace cannot really know the mapping between the two. For this, use kvm_get_vcpu_by_id() instead of kvm_get_vcpu(). Take this opportunity to get rid of the pointless check against online_vcpus, which doesn't make much sense either, and switch to FIELD_GET as a way to extract the vcpu_id. Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230927090911.3355209-5-maz@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Stable-dep-of: 6ddb4f372fc6 ("KVM: arm64: vgic-v2: Check for non-NULL vCPU in vgic_v2_parse_attr()") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-05-17 | powerpc/pseries/iommu: LPAR panics during boot up with a frozen PE | Gaurav Batra | 1 | -0/+8
[ Upstream commit 49a940dbdc3107fecd5e6d3063dc07128177e058 ] At the time of LPAR boot up, partition firmware provides the Open Firmware property ibm,dma-window for the PE. This property is provided on the PCI bus the PE is attached to. There are exceptions where the partition firmware might not provide this property for the PE at the time of LPAR boot up. One such scenario is where the firmware has frozen the PE due to some error condition. This PE is frozen for 24 hours or unless the whole system is reinitialized. Within this time frame, if the LPAR is booted, the frozen PE will be presented to the LPAR but the ibm,dma-window property could be missing. Today, under these circumstances, the LPAR oopses with a NULL pointer dereference when configuring the PCI bus the PE is attached to.
    BUG: Kernel NULL pointer dereference on read at 0x000000c8
    Faulting instruction address: 0xc0000000001024c0
    Oops: Kernel access of bad area, sig: 7 [#1]
    LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA pSeries
    Modules linked in:
    Supported: Yes
    CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.4.0-150600.9-default #1
    Hardware name: IBM,9043-MRX POWER10 (raw) 0x800200 0xf000006 of:IBM,FW1060.00 (NM1060_023) hv:phyp pSeries
    NIP: c0000000001024c0 LR: c0000000001024b0 CTR: c000000000102450
    REGS: c0000000037db5c0 TRAP: 0300 Not tainted (6.4.0-150600.9-default)
    MSR: 8000000002009033 <SF,VEC,EE,ME,IR,DR,RI,LE> CR: 28000822 XER: 00000000
    CFAR: c00000000010254c DAR: 00000000000000c8 DSISR: 00080000 IRQMASK: 0
    ...
    NIP [c0000000001024c0] pci_dma_bus_setup_pSeriesLP+0x70/0x2a0
    LR [c0000000001024b0] pci_dma_bus_setup_pSeriesLP+0x60/0x2a0
    Call Trace:
    pci_dma_bus_setup_pSeriesLP+0x60/0x2a0 (unreliable)
    pcibios_setup_bus_self+0x1c0/0x370
    __of_scan_bus+0x2f8/0x330
    pcibios_scan_phb+0x280/0x3d0
    pcibios_init+0x88/0x12c
    do_one_initcall+0x60/0x320
    kernel_init_freeable+0x344/0x3e4
    kernel_init+0x34/0x1d0
    ret_from_kernel_user_thread+0x14/0x1c
Fixes: b1fc44eaa9ba ("pseries/iommu/ddw: Fix kdump to work in absence of ibm,dma-window") Signed-off-by: Gaurav Batra <gbatra@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240422205141.10662-1-gbatra@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-05-17 | powerpc/pseries: make max polling consistent for longer H_CALLs | Nayna Jain | 2 | -8/+7
[ Upstream commit 784354349d2c988590c63a5a001ca37b2a6d4da1 ] Currently, the plpks_confirm_object_flushed() function polls for 5 msec in total instead of 5 sec. Keep the max polling time consistent at 5 sec for all the H_CALLs that take longer than expected. Also, make use of fsleep() everywhere to insert the delay. Reported-by: Nageswara R Sastry <rnsastry@linux.ibm.com> Fixes: 2454a7af0f2a ("powerpc/pseries: define driver for Platform KeyStore") Signed-off-by: Nayna Jain <nayna@linux.ibm.com> Tested-by: Nageswara R Sastry <rnsastry@linux.ibm.com> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240418031230.170954-1-nayna@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
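A minimal sketch of the polling shape (the constants and the busy-check helper are hypothetical; the point is the 5 s cap and fsleep()):
```
#define PLPKS_FLUSH_SLEEP_US	10000			/* 10 ms per poll */
#define PLPKS_MAX_TIMEOUT_US	(5 * USEC_PER_SEC)	/* 5 s total */

static int wait_until_flushed(struct plpks_object *obj)
{
	u64 waited = 0;

	while (object_still_flushing(obj)) {	/* hypothetical helper */
		if (waited >= PLPKS_MAX_TIMEOUT_US)
			return -ETIMEDOUT;
		fsleep(PLPKS_FLUSH_SLEEP_US);	/* picks udelay/usleep/msleep */
		waited += PLPKS_FLUSH_SLEEP_US;
	}
	return 0;
}
```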
2024-05-17 | s390/vdso: Add CFI for RA register to asm macro vdso_func | Jens Remus | 2 | -0/+3
[ Upstream commit b961ec10b9f9719987470236feb50c967db5a652 ] The return-address (RA) register r14 is specified as volatile in the s390x ELF ABI [1]. Nevertheless proper CFI directives must be provided for an unwinder to restore the return address, if the RA register value is changed from its value at function entry, as is the case here. [1]: s390x ELF ABI, https://github.com/IBM/s390x-abi/releases Fixes: 4bff8cb54502 ("s390: convert to GENERIC_VDSO") Signed-off-by: Jens Remus <jremus@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-05-17 | s390/mm: Fix clearing storage keys for huge pages | Claudio Imbrenda | 1 | -1/+1
[ Upstream commit 412050af2ea39407fe43324b0be4ab641530ce88 ] The function __storage_key_init_range() expects the end address to be the first byte outside the range to be initialized. I.e. end - start should be the size of the area to be initialized. The current code works because __storage_key_init_range() will still loop over every page in the range, but it is slower than using sske_frame(). Fixes: 3afdfca69870 ("s390/mm: Clear skeys for newly mapped huge guest pmds") Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Link: https://lore.kernel.org/r/20240416114220.28489-3-imbrenda@linux.ibm.com Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
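The off-by-one as a hedged sketch (the same one-line fix applies to this and the following commit):
```
/* "end" is exclusive: the first byte after the range. Passing
 * paddr + size (not paddr + size - 1) keeps whole frames eligible
 * for the sske_frame() fast path instead of the per-page loop. */
__storage_key_init_range(paddr, paddr + size);
```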
2024-05-17 | s390/mm: Fix storage key clearing for guest huge pages | Claudio Imbrenda | 1 | -1/+1
[ Upstream commit 843c3280686fc1a83d89ee1e0b5599c9f6b09d0c ] The function __storage_key_init_range() expects the end address to be the first byte outside the range to be initialized. I.e. end - start should be the size of the area to be initialized. The current code works because __storage_key_init_range() will still loop over every page in the range, but it is slower than using sske_frame(). Fixes: 964c2c05c9f3 ("s390/mm: Clear huge page storage keys on enable_skey") Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Link: https://lore.kernel.org/r/20240416114220.28489-2-imbrenda@linux.ibm.com Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-05-17 | riscv, bpf: Fix incorrect runtime stats | Xu Kuohai | 1 | -3/+3
[ Upstream commit 10541b374aa05c8118cc6a529a615882e53f261b ] When __bpf_prog_enter() returns zero, the s1 register is not set to zero, resulting in incorrect runtime stats. Fix it by setting s1 immediately upon the return of __bpf_prog_enter(). Fixes: 49b5e77ae3e2 ("riscv, bpf: Add bpf trampoline support for RV64") Signed-off-by: Xu Kuohai <xukuohai@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Pu Lehui <pulehui@huawei.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20240416064208.2919073-3-xukuohai@huaweicloud.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-05-17 | bpf, arm64: Fix incorrect runtime stats | Xu Kuohai | 1 | -3/+3
[ Upstream commit dc7d7447b56bcc9cf79a9c22e4edad200a298e4c ] When __bpf_prog_enter() returns zero, the arm64 register x20 that stores prog start time is not assigned to zero, causing incorrect runtime stats. To fix it, assign the return value of bpf_prog_enter() to x20 register immediately upon its return. Fixes: efc9909fdce0 ("bpf, arm64: Add bpf trampoline for arm64") Reported-by: Ivan Babrou <ivan@cloudflare.com> Signed-off-by: Xu Kuohai <xukuohai@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Ivan Babrou <ivan@cloudflare.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240416064208.2919073-2-xukuohai@huaweicloud.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-05-02Revert "riscv: kdump: fix crashkernel reserving problem on RISC-V"Mingzheng Xing1-0/+13
This reverts commit 1d6cd2146c2b58bc91266db1d5d6a5f9632e14c0, which was mistakenly added to v6.6.y; the commit referenced by its 'Fixes:' tag is invalid. For more information, see link [1]. The mistakenly added commit results in the loss of the Crashkernel data in /proc/iomem, and kdump fails:

```
Memory for crashkernel is not reserved
Please reserve memory by passing"crashkernel=Y@X" parameter to kernel
Then try to loading kdump kernel
```

After the revert, kdump works fine. Tested on QEMU riscv. Link: https://lore.kernel.org/linux-riscv/ZSiQRDGLZk7lpakE@MiWiFi-R3L-srv [1] Cc: Baoquan He <bhe@redhat.com> Cc: Chen Jiahao <chenjiahao16@huawei.com> Signed-off-by: Mingzheng Xing <xingmingzheng@iscas.ac.cn> Acked-by: Baoquan He <bhe@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-05-02riscv: Fix loading 64-bit NOMMU kernels past the start of RAMSamuel Holland2-2/+2
[ Upstream commit aea702dde7e9876fb00571a2602f25130847bf0f ] commit 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for the linear mapping") added logic to allow using RAM below the kernel load address. However, this does not work for NOMMU, where PAGE_OFFSET is fixed to the kernel load address. Since that range of memory corresponds to PFNs below ARCH_PFN_OFFSET, mm initialization runs off the beginning of mem_map and corrupts adjacent kernel memory. Fix this by restoring the previous behavior for NOMMU kernels. Fixes: 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for the linear mapping") Signed-off-by: Samuel Holland <samuel.holland@sifive.com> Link: https://lore.kernel.org/r/20240227003630.3634533-3-samuel.holland@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
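The corruption mechanism is visible from the FLATMEM page lookup, sketched below (simplified from include/asm-generic/memory_model.h and the riscv page.h definitions):

```c
/* On NOMMU, PAGE_OFFSET is pinned to the kernel load address, so the
 * first PFN the kernel expects to manage is the kernel's own. */
#define ARCH_PFN_OFFSET   PFN_DOWN(PAGE_OFFSET)

/* FLATMEM indexes mem_map relative to ARCH_PFN_OFFSET: */
#define pfn_to_page(pfn)  (mem_map + ((pfn) - ARCH_PFN_OFFSET))

/* If memblock hands out RAM below the kernel image, pfn is smaller
 * than ARCH_PFN_OFFSET, the index underflows mem_map, and mm init
 * writes into whatever happens to sit in front of the array. */
```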
2024-05-02riscv: Fix TASK_SIZE on 64-bit NOMMUSamuel Holland1-1/+1
[ Upstream commit 6065e736f82c817c9a597a31ee67f0ce4628e948 ] On NOMMU, userspace memory can come from anywhere in physical RAM. The current definition of TASK_SIZE is wrong if any RAM exists above 4G, causing spurious failures in the userspace access routines. Fixes: 6bd33e1ece52 ("riscv: add nommu support") Fixes: c3f896dcf1e4 ("mm: switch the test_vmalloc module to use __vmalloc_node") Signed-off-by: Samuel Holland <samuel.holland@sifive.com> Reviewed-by: Jisheng Zhang <jszhang@kernel.org> Reviewed-by: Bo Gan <ganboing@gmail.com> Link: https://lore.kernel.org/r/20240227003630.3634533-2-samuel.holland@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
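A toy illustration of the failure mode (a simplified model of the usual access_ok() range check, with a hypothetical 4 GiB cap; not the literal riscv code):

```c
#include <stdbool.h>
#include <stddef.h>

#define TASK_SIZE_CAPPED 0xffffffffUL   /* hypothetical 4 GiB limit */

static bool access_ok_sketch(unsigned long addr, size_t size)
{
        /* the whole [addr, addr + size) range must lie below TASK_SIZE */
        return size <= TASK_SIZE_CAPPED && addr <= TASK_SIZE_CAPPED - size;
}

/* On NOMMU, user buffers live at physical addresses. With RAM above
 * 4 GiB, perfectly valid buffers fail this check and copy_to_user()/
 * copy_from_user() report spurious faults. */
```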
2024-05-02riscv: fix VMALLOC_START definitionBaoquan He1-1/+1
[ Upstream commit ac88ff6b9d7dea9f0907c86bdae204dde7d5c0e6 ] When the config items below are set, the compiler complains:

```
CONFIG_CRASH_CORE=y
CONFIG_KEXEC_CORE=y
CONFIG_CRASH_DUMP=y
......
```

```
arch/riscv/kernel/crash_core.c: In function 'arch_crash_save_vmcoreinfo':
arch/riscv/kernel/crash_core.c:11:58: warning: format '%lx' expects argument of type 'long unsigned int', but argument 2 has type 'int' [-Wformat=]
   11 |   vmcoreinfo_append_str("NUMBER(VMALLOC_START)=0x%lx\n", VMALLOC_START);
      |                                                  ~~^
      |                                                    |
      |                                                    long unsigned int
      |                                                  %x
```

This is because on riscv the macro VMALLOC_START has a different type depending on whether CONFIG_MMU is set or unset (see arch/riscv/include/asm/pgtable.h). Changing it to _AC(0, UL) in the CONFIG_MMU=n case fixes the warning. Link: https://lkml.kernel.org/r/ZW7OsX4zQRA3mO4+@MiWiFi-R3L-srv Signed-off-by: Baoquan He <bhe@redhat.com> Reported-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: Randy Dunlap <rdunlap@infradead.org> Tested-by: Randy Dunlap <rdunlap@infradead.org> # build-tested Cc: Eric DeVolder <eric_devolder@yahoo.com> Cc: Ignat Korchagin <ignat@cloudflare.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: 6065e736f82c ("riscv: Fix TASK_SIZE on 64-bit NOMMU") Signed-off-by: Sasha Levin <sashal@kernel.org>
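For context, _AC() exists precisely so a constant can carry a C type suffix while staying bare in assembly; a sketch of the resulting definition (the surrounding #ifdef structure is paraphrased):

```c
#include <linux/const.h>   /* _AC(x, suffix): x##suffix in C, x in .S files */

#ifndef CONFIG_MMU
#define VMALLOC_START _AC(0, UL)   /* was a plain 0, i.e. a signed int */
#endif
```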
2024-05-02x86/tdx: Preserve shared bit on mprotect()Kirill A. Shutemov2-2/+6
commit a0a8d15a798be4b8f20aca2ba91bf6b688c6a640 upstream. The TDX guest platform takes one bit from the physical address to indicate if the page is shared (accessible by VMM). This bit is not part of the physical_mask and is not preserved during mprotect(). As a result, the 'shared' bit is lost during mprotect() on shared mappings. _COMMON_PAGE_CHG_MASK specifies which PTE bits need to be preserved during modification. AMD includes 'sme_me_mask' in the define to preserve the 'encrypt' bit. To cover both Intel and AMD cases, include 'cc_mask' in _COMMON_PAGE_CHG_MASK instead of 'sme_me_mask'. Reported-and-tested-by: Chris Oo <cho@microsoft.com> Fixes: 41394e33f3a0 ("x86/tdx: Extend the confidential computing API to support TDX guests") Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/all/20240424082035.4092071-1-kirill.shutemov%40linux.intel.com Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
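A toy model of why the preserve mask matters during mprotect() (bit positions and names here are illustrative, not the real x86 PTE layout):

```c
#include <stdint.h>
#include <stdio.h>

#define PFN_MASK    0x000ffffffffff000ULL
#define SHARED_BIT  (1ULL << 51)              /* stand-in for the TDX bit */
#define CHG_MASK    (PFN_MASK | SHARED_BIT)   /* bits mprotect() must keep */

/* pte_modify()-style update: new protection bits in, preserved bits kept */
static uint64_t modify_prot(uint64_t pte, uint64_t newprot)
{
        return (pte & CHG_MASK) | (newprot & ~CHG_MASK);
}

int main(void)
{
        uint64_t pte = 0xabc000ULL | SHARED_BIT | 0x1;   /* present+shared */
        uint64_t out = modify_prot(pte, 0x3);            /* e.g. make RW */
        /* without SHARED_BIT in CHG_MASK, the bit would be dropped here */
        printf("shared bit preserved: %d\n", !!(out & SHARED_BIT));
        return 0;
}
```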
2024-05-02arm64: dts: rockchip: enable internal pull-up for Q7_THRM# on RK3399 PumaIskander Amara1-0/+10
commit 0ac417b8f124427c90ec8c2ef4f632b821d924cc upstream. The Q7_THRM# pin is connected to a diode on the module which is used as a level shifter, and the pin has a pull-down enabled by default. We need to configure it with an internal pull-up; otherwise, whenever the pin is configured as INPUT and we try to control it externally, the value will always remain zero. Signed-off-by: Iskander Amara <iskander.amara@theobroma-systems.com> Fixes: 2c66fc34e945 ("arm64: dts: rockchip: add RK3399-Q7 (Puma) SoM") Reviewed-by: Quentin Schulz <quentin.schulz@theobroma-systems.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20240308085243.69903-1-iskander.amara@theobroma-systems.com Signed-off-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-05-02arm64: dts: qcom: sm8450: Fix the msi-map entriesManivannan Sadhasivam1-12/+4
commit ecc3ac293ed15ac2536e9fde2810154486f84010 upstream. While adding GIC ITS MSI support, it was found that the msi-map entries needed to be swapped to receive MSIs from the endpoint. It was later identified that the swapping was only needed because of a bug in the Qualcomm PCIe controller driver. Since that bug is now fixed with commit bf79e33cdd89 ("PCI: qcom: Enable BDF to SID translation properly"), fix the msi-map entries as well so that they reflect the actual mapping in the hardware. Cc: stable@vger.kernel.org # 6.3: bf79e33cdd89 ("PCI: qcom: Enable BDF to SID translation properly") Fixes: ff384ab56f16 ("arm64: dts: qcom: sm8450: Use GIC-ITS for PCIe0 and PCIe1") Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org> Link: https://lore.kernel.org/r/20240318-pci-bdf-sid-fix-v1-1-acca6c5d9cf1@linaro.org Signed-off-by: Bjorn Andersson <andersson@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-05-02arm64: dts: qcom: sc8280xp: add missing PCIe minimum OPPJohan Hovold1-0/+5
commit 8b8ec83a1d7d3b6605d9163d2e306971295a4ce8 upstream. Add the missing PCIe CX performance level votes to avoid relying on other drivers (e.g. USB or UFS) to maintain the nominal performance level required for Gen3 speeds. Fixes: 813e83157001 ("arm64: dts: qcom: sc8280xp/sa8540p: add PCIe2-4 nodes") Cc: stable@vger.kernel.org # 6.2 Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Johan Hovold <johan+linaro@kernel.org> Link: https://lore.kernel.org/r/20240306095651.4551-5-johan+linaro@kernel.org Signed-off-by: Bjorn Andersson <andersson@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>