path: root/arch
2023-06-03  KVM: x86: Account fastpath-only VM-Exits in vCPU stats  (Sean Christopherson; 1 file, -0/+3)

Increment vcpu->stat.exits when handling a fastpath VM-Exit without going through any part of the "slow" path. Not bumping the exits stat can result in wildly misleading exit counts, e.g. if the primary reason the guest is exiting is to program the TSC deadline timer.

Fixes: 404d5d7bff0d ("KVM: X86: Introduce more exit_fastpath_completion enum values")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230602011920.787844-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
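A minimal sketch of the idea, assuming the fastpath completion handling in arch/x86/kvm/x86.c and the EXIT_FASTPATH_* enum from the Fixes: commit (the exact call site is illustrative, not the verbatim patch):

    /*
     * Sketch: fastpath-only exits never reach the slow path where
     * vcpu->stat.exits is normally incremented, so account them here.
     */
    if (exit_fastpath != EXIT_FASTPATH_NONE)
            ++vcpu->stat.exits;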
2023-06-03  KVM: SVM: vNMI pending bit is V_NMI_PENDING_MASK not V_NMI_BLOCKING_MASK  (Maciej S. Szmigiero; 1 file, -1/+1)

While testing Hyper-V enabled Windows Server 2019 guests on Zen4 hardware I noticed that with a vCPU count large enough (> 16) they sometimes froze at boot. With a vCPU count of 64 they never booted successfully, suggesting some kind of race condition.

Since adding the "vnmi=0" module parameter made these guests boot successfully, it was clear that the problem is most likely (v)NMI-related.

Running kvm-unit-tests quickly showed failing NMI-related test cases, like "multiple nmi" and "pending nmi" from the apic-split, x2apic and xapic tests, and the NMI parts of the eventinj test.

The issue was that once one NMI was being serviced no other NMI was allowed to be set pending (NMI limit = 0), which was traced to svm_is_vnmi_pending() wrongly testing for the "NMI blocked" flag rather than for the "NMI pending" flag.

Fix this by testing for the right flag in svm_is_vnmi_pending(). Once this is done, the NMI-related kvm-unit-tests pass successfully and the Windows guest no longer freezes at boot.

Fixes: fa4c027a7956 ("KVM: x86: Add support for SVM's Virtual NMI")
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/be4ca192eb0c1e69a210db3009ca984e6a54ae69.1684495380.git.maciej.szmigiero@oracle.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
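A short sketch of the corrected check, assuming the helper names from the subject line (is_vnmi_enabled() and the int_ctl masks); illustrative rather than a verbatim copy of the patch:

    static bool svm_is_vnmi_pending(struct kvm_vcpu *vcpu)
    {
            struct vcpu_svm *svm = to_svm(vcpu);

            if (!is_vnmi_enabled(svm))
                    return false;

            /* Was V_NMI_BLOCKING_MASK; a *pending* vNMI is what matters here. */
            return !!(svm->vmcb->control.int_ctl & V_NMI_PENDING_MASK);
    }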
2023-06-03  KVM: x86/mmu: Grab memslot for correct address space in NX recovery worker  (Sean Christopherson; 1 file, -1/+4)

Factor in the address space (non-SMM vs. SMM) of the target shadow page when recovering potential NX huge pages, otherwise KVM will retrieve the wrong memslot when zapping shadow pages that were created for SMM. The bug most visibly manifests as a WARN on the memslot being non-NULL, but the worst case scenario is that KVM could unaccount the shadow page without ensuring KVM won't install a huge page, i.e. if the non-SMM slot is being dirty logged, but the SMM slot is not.

 ------------[ cut here ]------------
 WARNING: CPU: 1 PID: 3911 at arch/x86/kvm/mmu/mmu.c:7015 kvm_nx_huge_page_recovery_worker+0x38c/0x3d0 [kvm]
 CPU: 1 PID: 3911 Comm: kvm-nx-lpage-re
 RIP: 0010:kvm_nx_huge_page_recovery_worker+0x38c/0x3d0 [kvm]
 RSP: 0018:ffff99b284f0be68 EFLAGS: 00010246
 RAX: 0000000000000000 RBX: ffff99b284edd000 RCX: 0000000000000000
 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
 RBP: ffff9271397024e0 R08: 0000000000000000 R09: ffff927139702450
 R10: 0000000000000000 R11: 0000000000000001 R12: ffff99b284f0be98
 R13: 0000000000000000 R14: ffff9270991fcd80 R15: 0000000000000003
 FS:  0000000000000000(0000) GS:ffff927f9f640000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 00007f0aacad3ae0 CR3: 000000088fc2c005 CR4: 00000000003726e0
 Call Trace:
  <TASK>
  __pfx_kvm_nx_huge_page_recovery_worker+0x10/0x10 [kvm]
  kvm_vm_worker_thread+0x106/0x1c0 [kvm]
  kthread+0xd9/0x100
  ret_from_fork+0x2c/0x50
  </TASK>
 ---[ end trace 0000000000000000 ]---

This bug was exposed by commit edbdb43fc96b ("KVM: x86: Preserve TDP MMU roots until they are explicitly invalidated"), which allowed KVM to retain SMM TDP MMU roots effectively indefinitely. Before commit edbdb43fc96b, KVM would zap all SMM TDP MMU roots and thus all SMM TDP MMU shadow pages once all vCPUs exited SMM, which made the window where this bug (recovering an SMM NX huge page) could be encountered quite tiny. To hit the bug, the NX recovery thread would have to run while at least one vCPU was in SMM. Most VMs typically only use SMM during boot, and so the problematic shadow pages were gone by the time the NX recovery thread ran.

Now that KVM preserves TDP MMU roots until they are explicitly invalidated (e.g. by a memslot deletion), the window to trigger the bug is effectively never closed because most VMMs don't delete memslots after boot (except for a handful of special scenarios).

Fixes: eb298605705a ("KVM: x86/mmu: Do not recover dirty-tracked NX Huge Pages")
Reported-by: Fabio Coatti <fabio.coatti@gmail.com>
Closes: https://lore.kernel.org/all/CADpTngX9LESCdHVu_2mQkNGena_Ng2CphWNwsRGSMxzDsTjU2A@mail.gmail.com
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230602010137.784664-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
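A sketch of the corrected lookup, assuming KVM's kvm_memslots_for_spte_role() helper, which selects the SMM vs. non-SMM address space from the shadow page's role rather than from the vCPU's current mode:

    struct kvm_memslots *slots;
    struct kvm_memory_slot *slot;

    /*
     * Use the address space the shadow page was created for, not the
     * vCPU's current (possibly non-SMM) address space.
     */
    slots = kvm_memslots_for_spte_role(kvm, sp->role);
    slot = __gfn_to_memslot(slots, sp->gfn);
    WARN_ON_ONCE(!slot);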
2023-05-21  KVM: VMX: add MSR_IA32_TSX_CTRL into msrs_to_save  (Mingwei Zhang; 1 file, -1/+5)

Add MSR_IA32_TSX_CTRL into msrs_to_save[] to explicitly tell userspace to save/restore the register value during migration. Missing this may cause userspace that relies on the KVM ioctl(KVM_GET_MSR_INDEX_LIST) to fail to port the value to the target VM.

In addition, there is no need to add MSR_IA32_TSX_CTRL when ARCH_CAP_TSX_CTRL_MSR is not supported in kvm_get_arch_capabilities(). So add the check in kvm_probe_msr_to_save().

Fixes: c11f83e0626b ("KVM: vmx: implement MSR_IA32_TSX_CTRL disable RTM functionality")
Reported-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Message-Id: <20230509032348.1153070-1-mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
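A sketch of the added probe-time check, assuming the per-MSR switch in kvm_probe_msr_to_save() in arch/x86/kvm/x86.c:

    case MSR_IA32_TSX_CTRL:
            /* Only report TSX_CTRL when the host actually has it. */
            if (!(kvm_get_arch_capabilities() & ARCH_CAP_TSX_CTRL_MSR))
                    return;
            break;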
2023-05-21  KVM: x86: Don't adjust guest's CPUID.0x12.1 (allowed SGX enclave XFRM)  (Sean Christopherson; 1 file, -16/+0)

Drop KVM's manipulation of guest's CPUID.0x12.1 ECX and EDX, i.e. the allowed XFRM of SGX enclaves, now that KVM explicitly checks the guest's allowed XCR0 when emulating ECREATE.

Note, this could theoretically break a setup where userspace advertises a "bad" XFRM and relies on KVM to provide a sane CPUID model, but QEMU is the only known user of KVM SGX, and QEMU explicitly sets the SGX CPUID XFRM subleaf based on the guest's XCR0.

Reviewed-by: Kai Huang <kai.huang@intel.com>
Tested-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230503160838.3412617-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-05-21  KVM: VMX: Don't rely _only_ on CPUID to enforce XCR0 restrictions for ECREATE  (Sean Christopherson; 1 file, -2/+9)

Explicitly check the vCPU's supported XCR0 when determining whether or not the XFRM for ECREATE is valid. Checking CPUID works because KVM updates guest CPUID.0x12.1 to restrict the leaf to a subset of the guest's allowed XCR0, but that is rather subtle and KVM should not modify guest CPUID except for modeling true runtime behavior (allowed XFRM is most definitely not "runtime" behavior).

Reviewed-by: Kai Huang <kai.huang@intel.com>
Tested-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230503160838.3412617-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
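A simplified sketch of the explicit check, assuming ECREATE emulation in arch/x86/kvm/vmx/sgx.c and the guest_supported_xcr0 field (x87/SSE state is always architecturally legal in XFRM); not the exact patch:

    /* An enclave's XFRM must be a subset of the guest's allowed XCR0. */
    if (xfrm & ~(vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FPSSE)) {
            kvm_inject_gp(vcpu, 0);
            return 1;
    }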
2023-05-19  KVM: VMX: Fix header file dependency of asm/vmx.h  (Jacob Xu; 1 file, -0/+2)

Include a definition of WARN_ON_ONCE() before using it.

Fixes: bb1fcc70d98f ("KVM: nVMX: Allow L1 to use 5-level page walks for nested EPT")
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: Jacob Xu <jacobhxu@google.com>
[reworded commit message; changed <asm/bug.h> to <linux/bug.h>]
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220225012959.1554168-1-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
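The fix itself is just an include; a sketch of what asm/vmx.h needs near its top, per the bracketed note (<linux/bug.h> rather than <asm/bug.h>):

    #include <linux/bug.h>   /* WARN_ON_ONCE() is used later in this header */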
2023-05-17  Merge tag 'kvmarm-fixes-6.4-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  (Paolo Bonzini; 8 files, -25/+76)

KVM/arm64 fixes for 6.4, take #1

- Plug a race in the stage-2 mapping code where the IPA and the PA would end up being out of sync
- Make better use of the bitmap API (bitmap_zero, bitmap_zalloc...)
- FP/SVE/SME documentation update, in the hope that this field becomes clearer...
- Add workaround for the usual Apple SEIS brokenness
- Random comment fixes
2023-05-14  Merge tag 'parisc-for-6.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux  (Linus Torvalds; 2 files, -4/+6)

Pull parisc architecture fixes from Helge Deller:

- Fix encoding of swp_entry due to added SWP_EXCLUSIVE flag
- Include reboot.h to avoid gcc-12 compiler warning

* tag 'parisc-for-6.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: Fix encoding of swp_entry due to added SWP_EXCLUSIVE flag
  parisc: kexec: include reboot.h
2023-05-14  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds; 4 files, -6/+37)

Pull ARM fixes from Russell King:

- fix unwinder for uleb128 case
- fix kernel-doc warnings for HP Jornada 7xx
- fix unbalanced stack on vfp success path

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 9297/1: vfp: avoid unbalanced stack on 'success' return path
  ARM: 9296/1: HP Jornada 7XX: fix kernel-doc warnings
  ARM: 9295/1: unwind:fix unwind abort for uleb128 case
2023-05-14  Merge tag 'perf_urgent_for_v6.4_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 3 files, -28/+37)

Pull perf fixes from Borislav Petkov:

- Make sure the PEBS buffer is flushed before reprogramming the hardware so that the correct record sizes are used

- Update the sample size for AMD BRS events

- Fix a confusion with using the same on-stack struct with different events in the event processing path

* tag 'perf_urgent_for_v6.4_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/intel/ds: Flush PEBS DS when changing PEBS_DATA_CFG
  perf/x86: Fix missing sample size update on AMD BRS
  perf/core: Fix perf_sample_data not properly initialized for different swevents in perf_tp_event()
2023-05-14  Merge tag 'x86_urgent_for_v6.4_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -0/+2)

Pull x86 fix from Borislav Petkov:

- Add the required PCI IDs so that the generic SMN accesses provided by amd_nb.c work for drivers which switch to them. Add a PCI device ID to k10temp's table so that the latter is loaded on such systems too

* tag 'x86_urgent_for_v6.4_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  hwmon: (k10temp) Add PCI ID for family 19, model 78h
  x86/amd_nb: Add PCI ID for family 19h model 78h
2023-05-14  parisc: Fix encoding of swp_entry due to added SWP_EXCLUSIVE flag  (Helge Deller; 1 file, -4/+4)

Fix the __swp_offset() and __swp_entry() macros due to commit 6d239fc78c0b ("parisc/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE") which introduced the SWP_EXCLUSIVE flag by reusing the _PAGE_ACCESSED flag.

Reported-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
Tested-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: 6d239fc78c0b ("parisc/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE")
Cc: <stable@vger.kernel.org> # v6.3+
2023-05-13  x86/retbleed: Fix return thunk alignment  (Borislav Petkov (AMD); 1 file, -2/+2)

SYM_FUNC_START_LOCAL_NOALIGN() adds an endbr leading to this layout (leaving only the last 2 bytes of the address):

  3bff <zen_untrain_ret>:
  3bff:  f3 0f 1e fa    endbr64
  3c03:  f6             test   $0xcc,%bl

  3c04 <__x86_return_thunk>:
  3c04:  c3             ret
  3c05:  cc             int3
  3c06:  0f ae e8       lfence

However, "the RET at __x86_return_thunk must be on a 64 byte boundary, for alignment within the BTB." Use SYM_START instead.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-05-11  Merge branch kvm-arm64/pgtable-fixes-6.4 into kvmarm-master/fixes  (Marc Zyngier; 2 files, -9/+33)

* kvm-arm64/pgtable-fixes-6.4:
  : .
  : Fixes for concurrent S2 mapping race from Oliver:
  :
  : "So it appears that there is a race between two parallel stage-2 map walkers that could lead to mapping the incorrect PA for a given IPA, as the IPA -> PA relationship picks up an unintended offset. This series eliminates the problem by using the current IPA of the walk as the source-of-truth regarding where we are in a map operation."
  : .
  KVM: arm64: Constify start/end/phys fields of the pgtable walker data
  KVM: arm64: Infer PA offset from VA in hyp map walker
  KVM: arm64: Infer the PA offset from IPA in stage-2 map walker

Signed-off-by: Marc Zyngier <maz@kernel.org>
2023-05-11  Merge branch kvm-arm64/misc-6.4 into kvmarm-master/fixes  (Marc Zyngier; 6 files, -16/+43)

* kvm-arm64/misc-6.4:
  : .
  : Minor changes for 6.4:
  :
  : - Make better use of the bitmap API (bitmap_zero, bitmap_zalloc...)
  :
  : - FP/SVE/SME documentation update, in the hope that this field becomes clearer...
  :
  : - Add workaround for the usual Apple SEIS brokenness
  :
  : - Random comment fixes
  : .
  KVM: arm64: vgic: Add Apple M2 PRO/MAX cpus to the list of broken SEIS implementations
  KVM: arm64: Clarify host SME state management
  KVM: arm64: Restructure check for SVE support in FP trap handler
  KVM: arm64: Document check for TIF_FOREIGN_FPSTATE
  KVM: arm64: Fix repeated words in comments
  KVM: arm64: Use the bitmap API to allocate bitmaps
  KVM: arm64: Slightly optimize flush_context()

Signed-off-by: Marc Zyngier <maz@kernel.org>
2023-05-11  KVM: arm64: vgic: Add Apple M2 PRO/MAX cpus to the list of broken SEIS implementations  (Marc Zyngier; 2 files, -0/+12)

Unsurprisingly, the M2 PRO is also affected by the SEIS bug, so add it to the naughty list. And since the M2 MAX is likely to be of the same ilk, flag it as well.

Tested on an M2 PRO mini machine.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20230501182141.39770-1-maz@kernel.org
2023-05-10  ARM: 9297/1: vfp: avoid unbalanced stack on 'success' return path  (Ard Biesheuvel; 2 files, -4/+9)

Commit c76c6c4ecbec0deb5 ("ARM: 9294/2: vfp: Fix broken softirq handling with instrumentation enabled") updated the VFP exception entry logic to go via a C function, so that we get the compiler's version of local_bh_disable(), which may be instrumented, and isn't generally callable from assembler.

However, this assumes that passing an alternative 'success' return address works in C as it does in asm, and this is only the case if the C calls in question are tail calls, as otherwise, the stack will need some unwinding as well.

I have already sent patches to the list that replace most of the asm logic with C code, and so it is preferable to have a minimal fix that addresses the issue and can be backported along with the commit that it fixes to v6.3 from v6.4. Hopefully, we can land the C conversion for v6.5.

So instead of passing the 'success' return address as a function argument, pass the stack address from where to pop it so that both LR and SP have the expected value.

Fixes: c76c6c4ecbec0deb5 ("ARM: 9294/2: vfp: Fix broken softirq handling with ...")
Reported-by: syzbot+d4b00edc2d0c910d4bf4@syzkaller.appspotmail.com
Tested-by: syzbot+d4b00edc2d0c910d4bf4@syzkaller.appspotmail.com
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2023-05-10  riscv: Fix orphan section warnings caused by kernel/pi  (Alexandre Ghiti; 2 files, -6/+3)

kernel/pi gives rise to a lot of new sections that end up orphans: the first attempt to fix that tried to enumerate them all in the linker script, but kernel test robot with a random config keeps finding more of them.

So prefix all those sections with .init.pi instead of only .init in order to be able to easily catch them all in the linker script.

Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/oe-kbuild-all/202304301606.Cgp113Ha-lkp@intel.com/
Fixes: 26e7aacb83df ("riscv: Allow to downgrade paging mode from the command line")
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20230504120759.18730-1-alexghiti@rivosinc.com
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-05-09  parisc: kexec: include reboot.h  (Simon Horman; 1 file, -0/+2)

Include reboot.h in machine_kexec.c for declaration of machine_crash_shutdown and machine_shutdown.

gcc-12 with W=1 reports:

 arch/parisc/kernel/kexec.c:57:6: warning: no previous prototype for 'machine_crash_shutdown' [-Wmissing-prototypes]
    57 | void machine_crash_shutdown(struct pt_regs *regs)
       |      ^~~~~~~~~~~~~~~~~~~~~~
 arch/parisc/kernel/kexec.c:61:6: warning: no previous prototype for 'machine_shutdown' [-Wmissing-prototypes]
    61 | void machine_shutdown(void)
       |      ^~~~~~~~~~~~~~~~

No functional changes intended. Compile tested only.

Signed-off-by: Simon Horman <horms@kernel.org>
Acked-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
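The fix is the one include below, added to the file the warnings point at:

    #include <linux/reboot.h>   /* machine_shutdown(), machine_crash_shutdown() */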
2023-05-08  x86/amd_nb: Add PCI ID for family 19h model 78h  (Mario Limonciello; 1 file, -0/+2)

Commit 310e782a99c7 ("platform/x86/amd: pmc: Utilize SMN index 0 for driver probe") switched to using amd_smn_read() which relies upon the misc PCI ID used by DF function 3 being included in a table. The ID for model 78h is missing in that table, so amd_smn_read() doesn't work.

Add the missing ID into amd_nb, restoring s2idle on this system.

[ bp: Simplify commit message. ]

Fixes: 310e782a99c7 ("platform/x86/amd: pmc: Utilize SMN index 0 for driver probe")
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com> # pci_ids.h
Acked-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20230427053338.16653-2-mario.limonciello@amd.com
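A sketch of the kind of change involved, assuming the amd_nb misc ID table; the device ID value here is a placeholder, not the real one from the patch (which lives in pci_ids.h):

    /* Hypothetical ID value for illustration only. */
    #define PCI_DEVICE_ID_AMD_19H_M78H_DF_F3 0xabcd

    static const struct pci_device_id amd_nb_misc_ids[] = {
            /* ... existing DF function 3 entries ... */
            { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_M78H_DF_F3) },
            {}
    };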
2023-05-08  perf/x86/intel/ds: Flush PEBS DS when changing PEBS_DATA_CFG  (Kan Liang; 2 files, -24/+35)

Several similar kernel warnings can be triggered,

  [56605.607840] CPU0 PEBS record size 0, expected 32, config 0 cpuc->record_size=208

when the below commands are running in parallel for a while on SPR.

  while true;
  do
    perf record --no-buildid -a --intr-regs=AX \
                -e cpu/event=0xd0,umask=0x81/pp \
                -c 10003 -o /dev/null ./triad;
  done &

  while true;
  do
    perf record -o /tmp/out -W -d \
                -e '{ld_blocks.store_forward:period=1000000, \
                     MEM_TRANS_RETIRED.LOAD_LATENCY:u:precise=2:ldlat=4}' \
                -c 1037 ./triad;
  done

The triad program is just the generation of loads/stores.

The warnings are triggered when an unexpected PEBS record (with a different config and size) is found.

A system-wide PEBS event with the large PEBS config may be enabled during a context switch. Some PEBS records for the system-wide PEBS may be generated while the old task is sched out but the new one hasn't been sched in yet. When the new task is sched in, the cpuc->pebs_record_size may be updated for the per-task PEBS events. So the existing system-wide PEBS records have a different size from the later PEBS records.

The PEBS buffer should be flushed right before the hardware is reprogrammed. The new size and threshold should be updated after the old buffer has been flushed.

Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230421184529.3320912-1-kan.liang@linux.intel.com
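A simplified sketch of the ordering the fix enforces, assuming intel_pmu_drain_pebs_buffer() in arch/x86/events/intel/ds.c and the cpuc field named in the warning; not the verbatim patch:

    /* Flush old-format records *before* reprogramming PEBS_DATA_CFG. */
    if (cpuc->pebs_data_cfg != pebs_data_cfg) {
            intel_pmu_drain_pebs_buffer();
            cpuc->pebs_data_cfg = pebs_data_cfg;
            /* ... only now update record size/threshold and the MSR */
    }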
2023-05-08  perf/x86: Fix missing sample size update on AMD BRS  (Namhyung Kim; 1 file, -4/+2)

The conversion to the new perf_sample_save_brstack() helper missed one PERF_SAMPLE_BRANCH_STACK user, so its dyn_size was not updated. This affects AMD Zen3 machines with the branch-brs event.

Fixes: eb55b455ef9c ("perf/core: Add perf_sample_save_brstack() helper")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20230427030527.580841-1-namhyung@kernel.org
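A sketch of the missed conversion, assuming the x86 core sampling path and the 6.4-era three-argument helper; the old open-coded assignment left data.dyn_size stale:

    if (has_branch_stack(event))
            /* was: data.br_stack = &cpuc->lbr_stack;  (no dyn_size update) */
            perf_sample_save_brstack(&data, event, &cpuc->lbr_stack);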
2023-05-06  s390: remove the unneeded select GCC12_NO_ARRAY_BOUNDS  (Lukas Bulwahn; 1 file, -1/+0)

Commit 0da6e5fd6c37 ("gcc: disable '-Warray-bounds' for gcc-13 too") extends config GCC11_NO_ARRAY_BOUNDS to disable -Warray-bounds for any gcc version 11 and upwards, and with that, removes the GCC12_NO_ARRAY_BOUNDS config as it is now covered by the semantics of GCC11_NO_ARRAY_BOUNDS.

As GCC11_NO_ARRAY_BOUNDS is yes by default, there is no need for the s390 architecture to explicitly select GCC11_NO_ARRAY_BOUNDS. Hence, the select GCC12_NO_ARRAY_BOUNDS in arch/s390/Kconfig can simply be dropped.

Remove the unneeded "select GCC12_NO_ARRAY_BOUNDS".

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-05-05  Merge tag 'locking-core-2023-05-05' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 26 files, -62/+114)

Pull locking updates from Ingo Molnar:

- Introduce local{,64}_try_cmpxchg() - a slightly more optimal primitive, which will be used in perf events ring-buffer code

- Simplify/modify rwsems on PREEMPT_RT, to address writer starvation

- Misc cleanups/fixes

* tag 'locking-core-2023-05-05' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/atomic: Correct (cmp)xchg() instrumentation
  locking/x86: Define arch_try_cmpxchg_local()
  locking/arch: Wire up local_try_cmpxchg()
  locking/generic: Wire up local{,64}_try_cmpxchg()
  locking/atomic: Add generic try_cmpxchg{,64}_local() support
  locking/rwbase: Mitigate indefinite writer starvation
  locking/arch: Rename all internal __xchg() names to __arch_xchg()
2023-05-05  Merge branch 'x86-uaccess-cleanup': x86 uaccess header cleanups  (Linus Torvalds; 4 files, -94/+122)

Merge my x86 uaccess updates branch.

The LAM ("Linear Address Masking") updates in this release made me unhappy about how "access_ok()" was done, and it actually turned out to have a couple of small bugs in it too. This is my cleanup of the code:

- use the sign bit of the __user pointer rather than masking the address and checking it against the TASK_SIZE range.

  We already did this part for the get/put_user() side, but 'access_ok()' did the naïve "mask and range check" thing, which not only generates nasty code, but also ended up meaning that __access_ok itself didn't do a good job, and so copy_from_user_nmi() didn't get the check right.

- move all the code that is 64-bit only into the 64-bit version of the header file, so that we don't unnecessarily pollute the shared x86 code and make it look like LAM might work in 32-bit too.

- fix a bug in the address masking (that doesn't end up mattering: in this case the fix was to just remove the buggy code entirely).

- a couple of trivial cleanups and added commentary about the access_ok() rules.

* x86-uaccess-cleanup:
  x86-64: mm: clarify the 'positive addresses' user address rules
  x86: mm: remove 'sign' games from LAM untagged_addr*() macros
  x86: uaccess: move 32-bit and 64-bit parts into proper <asm/uaccess_N.h> header
  x86: mm: remove architecture-specific 'access_ok()' define
  x86-64: make access_ok() independent of LAM
2023-05-05  Merge tag 'riscv-for-linus-6.4-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  (Linus Torvalds; 19 files, -66/+674)

Pull more RISC-V updates from Palmer Dabbelt:

- Support for hibernation

- The .rela.dyn section has been moved to the init area

- A fix for the SBI probing to allow for implementation-defined behavior

- Various other fixes and cleanups throughout the tree

* tag 'riscv-for-linus-6.4-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  RISC-V: include cpufeature.h in cpufeature.c
  riscv: Move .rela.dyn to the init sections
  dt-bindings: riscv: explicitly mention assumption of Zicsr & Zifencei support
  riscv: compat_syscall_table: Fixup compile warning
  RISC-V: fixup in-flight collision with ARCH_WANT_OPTIMIZE_VMEMMAP rename
  RISC-V: fix sifive and thead section mismatches in errata
  RISC-V: Align SBI probe implementation with spec
  riscv: mm: remove redundant parameter of create_fdt_early_page_table
  riscv: Adjust dependencies of HAVE_DYNAMIC_FTRACE selection
  RISC-V: Add arch functions to support hibernation/suspend-to-disk
  RISC-V: mm: Enable huge page support to kernel_page_present() function
  RISC-V: Factor out common code of __cpu_resume_enter()
  RISC-V: Change suspend_save_csrs and suspend_restore_csrs to public function
2023-05-05  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds; 23 files, -177/+1208)

Pull more kvm updates from Paolo Bonzini:
"This includes the 6.4 changes for RISC-V, and a few bugfix patches for other architectures. For x86, this closes a longstanding performance issue in the newer and (usually) more scalable page table management code.

 RISC-V:
 - ONE_REG interface to enable/disable SBI extensions
 - Zbb extension for Guest/VM
 - AIA CSR virtualization

 x86:
 - Fix a long-standing TDP MMU flaw, where unloading roots on a vCPU can result in the root being freed even though the root is completely valid and can be reused as-is (with a TLB flush).

 s390:
 - A couple of bugfixes"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: s390: fix race in gmap_make_secure()
  KVM: s390: pv: fix asynchronous teardown for small VMs
  KVM: x86: Preserve TDP MMU roots until they are explicitly invalidated
  RISC-V: KVM: Virtualize per-HART AIA CSRs
  RISC-V: KVM: Use bitmap for irqs_pending and irqs_pending_mask
  RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  RISC-V: KVM: Implement subtype for CSR ONE_REG interface
  RISC-V: KVM: Initial skeletal support for AIA
  RISC-V: KVM: Drop the _MASK suffix from hgatp.VMID mask defines
  RISC-V: Detect AIA CSRs from ISA string
  RISC-V: Add AIA related CSR defines
  RISC-V: KVM: Allow Zbb extension for Guest/VM
  RISC-V: KVM: Add ONE_REG interface to enable/disable SBI extensions
  RISC-V: KVM: Alphabetize selects
  KVM: RISC-V: Retry fault if vma_lookup() results become invalid
2023-05-05  Merge tag 'kvm-s390-next-6.4-2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD  (Paolo Bonzini; 3 files, -21/+23)

For 6.4
2023-05-05  Merge tag 'kvm-x86-mmu-6.4-2' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini; 1 file, -65/+56)

Fix a long-standing flaw in x86's TDP MMU where unloading roots on a vCPU can result in the root being freed even though the root is completely valid and can be reused as-is (with a TLB flush).
2023-05-05  Merge tag 'kvm-riscv-6.4-1' of https://github.com/kvm-riscv/linux into HEAD  (Paolo Bonzini; 19 files, -91/+1129)

KVM/riscv changes for 6.4

- ONE_REG interface to enable/disable SBI extensions
- Zbb extension for Guest/VM
- AIA CSR virtualization
2023-05-05  ARM: 9296/1: HP Jornada 7XX: fix kernel-doc warnings  (Randy Dunlap; 1 file, -1/+4)

Fix kernel-doc warnings from the kernel test robot:

 jornada720_ssp.c:24: warning: Function parameter or member 'jornada_ssp_lock' not described in 'DEFINE_SPINLOCK'
 jornada720_ssp.c:24: warning: expecting prototype for arch/arm/mac(). Prototype was for DEFINE_SPINLOCK() instead
 jornada720_ssp.c:34: warning: Function parameter or member 'byte' not described in 'jornada_ssp_reverse'
 jornada720_ssp.c:57: warning: Function parameter or member 'byte' not described in 'jornada_ssp_byte'
 jornada720_ssp.c:85: warning: Function parameter or member 'byte' not described in 'jornada_ssp_inout'

Link: lore.kernel.org/r/202304210535.tWby3jWF-lkp@intel.com
Fixes: 69ebb22277a5 ("[ARM] 4506/1: HP Jornada 7XX: Addition of SSP Platform Driver")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kristoffer Ericson <Kristoffer.ericson@gmail.com>
Cc: patches@armlinux.org.uk
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2023-05-05  ARM: 9295/1: unwind:fix unwind abort for uleb128 case  (Haibo Li; 1 file, -1/+24)

When the unwind instruction is 0xb2, the subsequent instructions are uleb128 bytes. For now, the code uses only the first uleb128 byte.

For vsp increments of 0x204~0x400, one uleb128 byte is used, like below:

  0xc06a00e4 <unwind_test_work>: 0x80b27fac
    Compact model index: 0
    0xb2 0x7f   vsp = vsp + 1024
    0xac        pop {r4, r5, r6, r7, r8, r14}

For vsp increments larger than 0x400, two uleb128 bytes are used, like below:

  0xc06a00e4 <unwind_test_work>: @0xc0cc9e0c
    Compact model index: 1
    0xb2 0x81 0x01   vsp = vsp + 1032
    0xac             pop {r4, r5, r6, r7, r8, r14}

The unwind works well since the decoded uleb128 byte is also 0x81.

For vsp increments larger than 0x600, two uleb128 bytes are used, like below:

  0xc06a00e4 <unwind_test_work>: @0xc0cc9e0c
    Compact model index: 1
    0xb2 0x81 0x02   vsp = vsp + 1544
    0xac             pop {r4, r5, r6, r7, r8, r14}

In this case, the decoded uleb128 result is 0x101 (vsp = 0x204 + (0x101 << 2)), while the uleb128 used in the code is 0x81 (vsp = 0x204 + (0x81 << 2)). The unwind aborts at this frame since it gets an incorrect vsp.

To fix this, add a uleb128 decode that covers all the above cases.

Signed-off-by: Haibo Li <haibo.li@mediatek.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Alexandre Mergnat <amergnat@baylibre.com>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
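A sketch of a full uleb128 decode in C, assuming a byte-fetch helper like the ARM unwinder's unwind_get_byte(); this replaces the single-byte read:

    /* Decode an unsigned LEB128 value: 7 bits per byte, LSB first,
     * high bit set on all but the final byte.
     */
    unsigned long uleb = 0, byte;
    int shift = 0;

    do {
            byte = unwind_get_byte(ctrl);
            uleb |= (byte & 0x7f) << shift;
            shift += 7;
    } while (byte & 0x80);

    vsp += 0x204 + (uleb << 2);   /* 0xb2: vsp = vsp + 0x204 + (uleb128 << 2) */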
2023-05-04  Merge tag 'mm-stable-2023-05-03-16-22' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds; 1 file, -19/+1)

Pull more MM updates from Andrew Morton:

- Some DAMON cleanups from Kefeng Wang

- Some KSM work from David Hildenbrand, to make the PR_SET_MEMORY_MERGE ioctl's behavior more similar to KSM's behavior.

  [ Andrew called these "final", but I suspect we'll have a series fixing up the fact that the last commit in the dmapools series in the previous pull seems to have unintentionally just reverted all the other commits in the same series..  - Linus ]

* tag 'mm-stable-2023-05-03-16-22' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  mm: hwpoison: coredump: support recovery from dump_user_range()
  mm/page_alloc: add some comments to explain the possible hole in __pageblock_pfn_to_page()
  mm/ksm: move disabling KSM from s390/gmap code to KSM code
  selftests/ksm: ksm_functional_tests: add prctl unmerge test
  mm/ksm: unmerge and clear VM_MERGEABLE when setting PR_SET_MEMORY_MERGE=0
  mm/damon/paddr: fix missing folio_sz update in damon_pa_young()
  mm/damon/paddr: minor refactor of damon_pa_mark_accessed_or_deactivate()
  mm/damon/paddr: minor refactor of damon_pa_pageout()
2023-05-04  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds; 6 files, -23/+22)

Pull arm64 fixes from Will Deacon:
"A few arm64 fixes that came in during the merge window for -rc1. The main thing is restoring the pointer authentication hwcaps, which disappeared during some recent refactoring.

 - Fix regression in CPU erratum workaround when disabling the MMU

 - Fix detection of pointer authentication hwcaps

 - Avoid writeable, executable ELF sections in vmlinux"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: lds: move .got section out of .text
  arm64: kernel: remove SHF_WRITE|SHF_EXECINSTR from .idmap.text
  arm64: cpufeature: Fix pointer auth hwcaps
  arm64: Fix label placement in record_mmu_state()
2023-05-04  Merge tag 'loongarch-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson  (Linus Torvalds; 28 files, -295/+1666)

Pull LoongArch updates from Huacai Chen:

- Better backtraces for humanization

- Relay BCE exceptions to userland as SIGSEGV

- Provide kernel fpu functions

- Optimize memory ops (memset/memcpy/memmove)

- Optimize checksum and crc32(c) calculation

- Add ARCH_HAS_FORTIFY_SOURCE selection

- Add function error injection support

- Add ftrace with direct call support

- Add basic perf tools support

* tag 'loongarch-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson: (24 commits)
  tools/perf: Add basic support for LoongArch
  LoongArch: ftrace: Add direct call trampoline samples support
  LoongArch: ftrace: Add direct call support
  LoongArch: ftrace: Implement ftrace_find_callable_addr() to simplify code
  LoongArch: ftrace: Fix build error if DYNAMIC_FTRACE_WITH_REGS is not set
  LoongArch: ftrace: Abstract DYNAMIC_FTRACE_WITH_ARGS accesses
  LoongArch: Add support for function error injection
  LoongArch: Add ARCH_HAS_FORTIFY_SOURCE selection
  LoongArch: crypto: Add crc32 and crc32c hw acceleration
  LoongArch: Add checksum optimization for 64-bit system
  LoongArch: Optimize memory ops (memset/memcpy/memmove)
  LoongArch: Provide kernel fpu functions
  LoongArch: Relay BCE exceptions to userland as SIGSEGV with si_code=SEGV_BNDERR
  LoongArch: Tweak the BADV and CPUCFG.PRID lines in show_regs()
  LoongArch: Humanize the ESTAT line when showing registers
  LoongArch: Humanize the ECFG line when showing registers
  LoongArch: Humanize the EUEN line when showing registers
  LoongArch: Humanize the PRMD line when showing registers
  LoongArch: Humanize the CRMD line when showing registers
  LoongArch: Fix format of CSR lines during show_regs()
  ...
2023-05-04  Merge tag 'csky-for-linus-6.4' of https://github.com/c-sky/csky-linux  (Linus Torvalds; 3 files, -5/+6)

Pull arch/csky updates from Guo Ren:

- Remove CPU_TLB_SIZE config

- Prevent spurious page faults

* tag 'csky-for-linus-6.4' of https://github.com/c-sky/csky-linux:
  csky: mmu: Prevent spurious page faults
  csky: remove obsolete config CPU_TLB_SIZE
2023-05-04  KVM: s390: fix race in gmap_make_secure()  (Claudio Imbrenda; 1 file, -21/+11)

Fix a potential race in gmap_make_secure() and remove the last user of follow_page() without FOLL_GET.

The old code is locking something it doesn't have a reference to, and as explained by Jason and David in this discussion:
https://lore.kernel.org/linux-mm/Y9J4P%2FRNvY1Ztn0Q@nvidia.com/
it can lead to all kinds of bad things, including the page getting unmapped (MADV_DONTNEED), freed, reallocated as a larger folio, and the unlock_page() targeting the wrong bit.

There is also another race with the FOLL_WRITE, which could race between the follow_page() and the get_locked_pte().

The main point is to remove the last use of follow_page() without FOLL_GET or FOLL_PIN; removing the races can be considered a nice bonus.

Link: https://lore.kernel.org/linux-mm/Y9J4P%2FRNvY1Ztn0Q@nvidia.com/
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Fixes: 214d9bbcd3a6 ("s390/mm: provide memory management functions for protected KVM guests")
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20230428092753.27913-2-imbrenda@linux.ibm.com>
2023-05-04  KVM: s390: pv: fix asynchronous teardown for small VMs  (Claudio Imbrenda; 2 files, -0/+12)

On machines without the Destroy Secure Configuration Fast UVC, the topmost level of page tables is set aside and freed asynchronously as last step of the asynchronous teardown.

Each gmap has a host_to_guest radix tree mapping host (userspace) addresses (with 1M granularity) to gmap segment table entries (pmds).

If a guest is smaller than 2GB, the topmost level of page tables is the segment table (i.e. there are only 2 levels). Replacing it means that the pointers in the host_to_guest mapping would become stale and cause all kinds of nasty issues.

This patch fixes the issue by disallowing asynchronous teardown for guests with only 2 levels of page tables. Userspace should (and already does) try using the normal destroy if the asynchronous one fails.

Update s390_replace_asce so it refuses to replace segment type ASCEs. This is still needed in case the normal destroy VM fails.

Fixes: fb491d5500a7 ("KVM: s390: pv: asynchronous destroy for reboot")
Reviewed-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20230421085036.52511-2-imbrenda@linux.ibm.com>
2023-05-04  Merge tag 'parisc-for-6.4-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux  (Linus Torvalds; 7 files, -148/+93)

Pull parisc updates from Helge Deller:
"Two important fixes in here:

 - The argument pointer register was wrong when calling 64-bit firmware functions, which may cause random memory corruption or crashes.

 - Ensure page alignment in cache flush functions, otherwise not all memory might get flushed.

 The rest are cleanups (mmap implementation, panic path) and usual smaller updates.

 Summary:

 - Calculate correct argument pointer in real64_call_asm()

 - Cleanup mmap implementation regarding color alignment (John David Anglin)

 - Spinlock fixes in panic path (Guilherme G. Piccoli)

 - build doc update for parisc64 (Randy Dunlap)

 - Ensure page alignment in flush functions"

* tag 'parisc-for-6.4-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: Fix argument pointer in real64_call_asm()
  parisc: Cleanup mmap implementation regarding color alignment
  parisc: Drop HP-UX constants and structs from grfioctl.h
  parisc: Ensure page alignment in flush functions
  parisc: Replace regular spinlock with spin_trylock on panic path
  parisc: update kbuild doc. aliases for parisc64
  parisc: Limit amount of kgdb breakpoints on parisc
2023-05-04  Merge tag 'uml-for-linus-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/uml/linux  (Linus Torvalds; 10 files, -111/+45)

Pull uml updates from Richard Weinberger:

- Make stub data pages configurable

- Make it harder to mix user and kernel code by accident

* tag 'uml-for-linus-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/uml/linux:
  um: make stub data pages size tweakable
  um: prevent user code in modules
  um: further clean up user_syms
  um: don't export printf()
  um: hostfs: define our own API boundary
  um: add __weak for exported functions
2023-05-03  Merge tag 'for-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/pateldipen1984/linux  (Linus Torvalds; 1 file, -0/+17)

Pull hardware timestamp engine updates from Dipen Patel:
"The changes for the hte subsystem include:

 - Add Tegra234 HTE provider and relevant DT bindings

 - Update MAINTAINERS file for the HTE subsystem"

* tag 'for-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/pateldipen1984/linux:
  hte: tegra-194: Use proper includes
  hte: Use device_match_of_node()
  hte: tegra-194: Fix off by one in tegra_hte_map_to_line_id()
  hte: tegra: fix 'struct of_device_id' build error
  hte: Use of_property_present() for testing DT property presence
  gpio: tegra186: Add Tegra234 hte support
  hte: handle nvidia,gpio-controller property
  hte: Deprecate nvidia,slices property
  hte: Add Tegra234 provider
  hte: Re-phrase tegra API document
  arm64: tegra: Add Tegra234 GTE nodes
  dt-bindings: timestamp: Deprecate nvidia,slices property
  dt-bindings: timestamp: Add Tegra234 support
  MAINTAINERS: Add HTE/timestamp subsystem details
2023-05-03  x86-64: mm: clarify the 'positive addresses' user address rules  (Linus Torvalds; 2 files, -15/+33)

Dave Hansen found the "(long) addr >= 0" code in the x86-64 access_ok checks somewhat confusing, and suggested using a helper to clarify what the code is doing.

So this does exactly that: clarifying what the sign bit check is all about, by adding a helper macro that makes it clear what it is testing.

This also adds some explicit comments talking about how even with LAM enabled, any addresses with the sign bit will still GP-fault in the non-canonical region just above the sign bit. This is what allows us to do the user address checks with just the sign bit, and furthermore be a bit cavalier about accesses that might be done with an additional offset even past that point.

(And yes, this talks about 'positive' even though zero is also a valid user address and so technically we should call them 'non-negative'. But I don't think using 'non-negative' ends up being more understandable).

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
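A sketch of the helper this commit describes, using the "(long) addr >= 0" test it quotes (the upstream name for the macro is assumed here):

    /* A "positive" (sign bit clear) address is on the user side. */
    #define valid_user_address(x) ((long)(x) >= 0)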
2023-05-03  x86: mm: remove 'sign' games from LAM untagged_addr*() macros  (Linus Torvalds; 1 file, -15/+3)

The intent of the sign games was to not modify kernel addresses when untagging them. However, that had two issues:

 (a) it didn't actually work as intended, since the mask was calculated as 'addr >> 63' on an _unsigned_ address. So instead of getting a mask of all ones for kernel addresses, you just got '1'.

 (b) untagging a kernel address isn't actually a valid operation anyway.

Now, (a) had originally been true for both 'untagged_addr()' and the remote version of it, but had accidentally been fixed for the regular version of untagged_addr() by commit e0bddc19ba95 ("x86/mm: Reduce untagged_addr() overhead for systems without LAM"). That one rewrote the shift to be part of the alternative asm code, and in the process changed the unsigned shift into a signed 'sar' instruction.

And while it is true that we don't want to turn what looks like a kernel address into a user address by masking off the high bit, that doesn't need these sign masking games - all it needs is that the mm context 'untag_mask' value has the high bit set. Which it always does.

So simplify the code by just removing the superfluous (and in the case of untagged_addr_remote(), still buggy) sign bit games in the address masking.

Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
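A simplified sketch of the shape after the fix, not the exact kernel macro (which uses runtime patching); current_untag_mask() here is a hypothetical stand-in for reading the per-mm untag_mask:

    /*
     * Sketch: untagging is a plain mask, no sign-extension tricks.
     * current_untag_mask() stands in for mm->context.untag_mask,
     * which always has the high bit set.
     */
    #define untagged_addr(addr) \
            ((unsigned long)(addr) & current_untag_mask())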
2023-05-03  x86: uaccess: move 32-bit and 64-bit parts into proper <asm/uaccess_N.h> header  (Linus Torvalds; 3 files, -85/+82)

The x86 <asm/uaccess.h> file has grown features that are specific to x86-64 like LAM support and the related access_ok() changes. They really should be in the <asm/uaccess_64.h> file and not pollute the generic x86 header.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-05-03  x86: mm: remove architecture-specific 'access_ok()' define  (Linus Torvalds; 1 file, -34/+0)

There's already a generic definition of 'access_ok()' in the asm-generic/access_ok.h header file, and the only difference between that and the x86-specific one is the added check for WARN_ON_IN_IRQ().

And it turns out that the reason for that check is long gone: it used to use a "user_addr_max()" inline function that depended on the current thread, and caused problems in non-thread contexts.

For details, see commits 7c4788950ba5 ("x86/uaccess, sched/preempt: Verify access_ok() context") and in particular commit ae31fe51a3cc ("perf/x86: Restore TASK_SIZE check on frame pointer") about how and why this came to be.

But that "current task" issue was removed in the big set_fs() removal by Christoph Hellwig in commit 47058bb54b57 ("x86: remove address space overrides using set_fs()").

So the reason for the test and the architecture-specific access_ok() define no longer exists, and is actually harmful these days. For example, it led to various 'copy_from_user_nmi()' games (eg using __range_not_ok() instead, and then later converted to __access_ok() when that became ok). And that in turn meant that LAM was broken for the frame following before this series, because __access_ok() used to not do the address untagging.

Accessing user state still needs care in many contexts, but access_ok() is not the place for this test.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-05-03  x86-64: make access_ok() independent of LAM  (Linus Torvalds; 2 files, -10/+69)

The linear address masking (LAM) code made access_ok() more complicated, in that it now needs to untag the address in order to verify the access range. See commit 74c228d20a51 ("x86/uaccess: Provide untagged_addr() and remove tags before address check").

We were able to avoid that overhead in the get_user/put_user code paths by simply using the sign bit for the address check, and depending on the GP fault if the address was non-canonical, which made it all independent of LAM.

And we can do the same thing for access_ok(): simply check that the user pointer range has the high bit clear. No need to bother with any address bit masking.

In fact, we can go a bit further, and just check the starting address for known small accesses ranges: any accesses that overflow will still be in the non-canonical area and will still GP fault.

To still make syzkaller catch any potentially unchecked user addresses, we'll continue to warn about GP faults that are caused by accesses in the non-canonical range. But we'll limit that to purely "high bit set and past the one-page 'slop' area".

We could probably just do that "check only starting address" for any arbitrary range size: realistically all kernel accesses to user space will be done starting at the low address. But let's leave that kind of optimization for later. As it is, this already allows us to generate simpler code and not worry about any tag bits in the address.

The one thing to look out for is the GUP address check: instead of actually copying data in the virtual address range (and thus bad addresses being caught by the GP fault), GUP will look up the page tables manually. As a result, the page table limits need to be checked, and that was previously implicitly done by the access_ok().

With the relaxed access_ok() check, we need to just do an explicit check for TASK_SIZE_MAX in the GUP code instead. The GUP code already needs to do the tag bit unmasking anyway, so there this is all very straightforward, and there are no LAM issues.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
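A simplified sketch of the resulting check, under the assumptions laid out above (sign-bit test, no address masking, overflow lands in the non-canonical hole and GP-faults); not the verbatim kernel code:

    static inline bool __access_ok(const void __user *ptr, unsigned long size)
    {
            unsigned long sum;

            /* Known-small accesses: checking the start address suffices. */
            if (__builtin_constant_p(size) && size <= PAGE_SIZE)
                    return (long)ptr >= 0;

            /* Otherwise check the whole range, guarding against wraparound. */
            sum = size + (unsigned long)ptr;
            return (long)sum >= 0 && sum >= (unsigned long)ptr;
    }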
2023-05-03  parisc: Fix argument pointer in real64_call_asm()  (Helge Deller; 1 file, -3/+2)

Fix the argument pointer (ap) to point to real-mode memory instead of virtual memory.

It's interesting that this issue hasn't shown up earlier, as this could have happened with any 64-bit PDC ROM code. I just noticed it because I suddenly faced a HPMC while trying to execute the 64-bit STI ROM code of a Visualize-FXe graphics card for the STI text console.

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: <stable@vger.kernel.org>
2023-05-03  parisc: Cleanup mmap implementation regarding color alignment  (John David Anglin; 1 file, -103/+63)

This change simplifies the randomization of file mapping regions. It reworks the code to remove duplication. The flow is now similar to that for mips. Finally, we consistently use the do_color_align variable to determine when color alignment is needed.

Tested on rp3440.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
2023-05-03  parisc: Drop HP-UX constants and structs from grfioctl.h  (Helge Deller; 1 file, -38/+0)

Signed-off-by: Helge Deller <deller@gmx.de>