path: root/arch/arm64/include/asm/cpucaps.h
2020-12-09  Merge remote-tracking branch 'origin/kvm-arm64/psci-relay' into kvmarm-master/next  (Marc Zyngier; 1 file, -3/+3)
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-12-04  KVM: arm64: Add ARM64_KVM_PROTECTED_MODE CPU capability  (David Brazdil; 1 file, -1/+2)
Expose a boolean indicating whether the system is running with KVM in protected mode (nVHE + kernel param). A CPU capability was chosen over a global variable so that the check can be used in alternatives. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-3-dbrazdil@google.com
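A minimal sketch of how such a capability is consumed, assuming the generic cpus_have_final_cap() helper from <asm/cpufeature.h> (illustrative, not the patch's literal code):

    /* Sketch: query the cpucap once system capabilities are finalized. */
    static inline bool is_protected_kvm_enabled(void)
    {
            /*
             * Compiles down to a patched branch rather than a load of a
             * global variable, which is also what makes the check usable
             * from alternatives.
             */
            return cpus_have_final_cap(ARM64_KVM_PROTECTED_MODE);
    }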
2020-12-02  arm64: uaccess: remove vestigial UAO support  (Mark Rutland; 1 file, -1/+0)
Now that arm64 no longer uses UAO, remove the vestigial feature detection code and Kconfig text. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-13-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-12-02  arm64: uaccess: remove redundant PAN toggling  (Mark Rutland; 1 file, -1/+0)
Some code (e.g. futex) needs to make privileged accesses to userspace memory, and uses uaccess_{enable,disable}_privileged() in order to permit this. All other uaccess primitives use LDTR/STTR, and never need to toggle PAN. Remove the redundant PAN toggling. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-12-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
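A hand-written sketch of the distinction (not the patch's code): the unprivileged LDTR/STTR forms honour EL0 permissions even at EL1, so regular uaccess can leave PAN set, and only the *_privileged() helpers need to touch it:

    /* Sketch: an unprivileged load that works with PAN enabled. */
    static inline unsigned long load_unpriv(const unsigned long __user *ptr)
    {
            unsigned long val;

            /* LDTR checks EL0 permissions even when executed at EL1 */
            asm volatile("ldtr %0, [%1]" : "=r" (val) : "r" (ptr));
            return val;
    }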
2020-11-16  arm64: spectre: Rename ARM64_HARDEN_EL2_VECTORS to ARM64_SPECTRE_V3A  (Will Deacon; 1 file, -1/+1)
Since ARM64_HARDEN_EL2_VECTORS is really a mitigation for Spectre-v3a, rename it accordingly for consistency with the v2 and v4 mitigations. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20201113113847.21619-9-will@kernel.org
2020-11-10  arm64: cpufeatures: Add capability for LDAPR instruction  (Will Deacon; 1 file, -1/+2)
Armv8.3 introduced the LDAPR instruction, which provides weaker memory ordering semantics than LDAR (RCpc vs RCsc). Generally, we provide an RCsc implementation when implementing the Linux memory model, but LDAPR can be used as a useful alternative to dependency ordering, particularly when the compiler is capable of breaking the dependencies. Since LDAPR is not available on all CPUs, add a cpufeature to detect it at runtime and allow the instruction to be used with alternative code patching. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
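A sketch of the kind of patching this enables, using the ALTERNATIVE() asm macro from <asm/alternative.h> (simplified from what the kernel actually emits):

    /* Sketch: RCsc LDAR by default, patched to RCpc LDAPR when available. */
    static inline unsigned long load_acquire(const unsigned long *ptr)
    {
            unsigned long val;

            asm volatile(ALTERNATIVE("ldar %0, [%1]",
                                     ".arch_extension rcpc\n"
                                     "ldapr %0, [%1]",
                                     ARM64_HAS_LDAPR)
                         : "=r" (val) : "r" (ptr) : "memory");
            return val;
    }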
2020-10-29  arm64: Add workaround for Arm Cortex-A77 erratum 1508412  (Rob Herring; 1 file, -1/+2)
On Cortex-A77 r0p0 and r1p0, a sequence of a non-cacheable or device load and a store exclusive or PAR_EL1 read can cause a deadlock. The workaround requires a DMB SY before and after a PAR_EL1 register read. In addition, it's possible an interrupt (doing a device read) or KVM guest exit could be taken between the DMB and PAR read, so we also need a DMB before returning from interrupt and before returning to a guest. A deadlock is still possible with the workaround as KVM guests must also have the workaround. IOW, a malicious guest can deadlock an affected system. This workaround also depends on a firmware counterpart to enable the h/w to insert DMB SY after load and store exclusive instructions. See the errata document SDEN-1152370 v10 [1] for more information. [1] https://static.docs.arm.com/101992/0010/Arm_Cortex_A77_MP074_Software_Developer_Errata_Notice_v10.pdf Signed-off-by: Rob Herring <robh@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: kvmarm@lists.cs.columbia.edu Link: https://lore.kernel.org/r/20201028182839.166037-2-robh@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
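The shape of the workaround can be sketched as an ALTERNATIVE()-patched barrier pair keyed off the new cpucap (the kernel's actual helper for this is read_sysreg_par(); the sketch below assumes <asm/alternative.h> and <asm/sysreg.h>):

    /* Sketch: PAR_EL1 read bracketed by DMB SY on affected parts only. */
    static inline u64 read_par_el1(void)
    {
            u64 par;

            asm volatile(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));
            par = read_sysreg(par_el1);
            asm volatile(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412));
            return par;
    }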
2020-10-02  Merge branch 'for-next/mte' into for-next/core  (Will Deacon; 1 file, -1/+2)
Add userspace support for the Memory Tagging Extension introduced by Armv8.5. (Catalin Marinas and others)
* for-next/mte: (30 commits)
  arm64: mte: Fix typo in memory tagging ABI documentation
  arm64: mte: Add Memory Tagging Extension documentation
  arm64: mte: Kconfig entry
  arm64: mte: Save tags when hibernating
  arm64: mte: Enable swap of tagged pages
  mm: Add arch hooks for saving/restoring tags
  fs: Handle intra-page faults in copy_mount_options()
  arm64: mte: ptrace: Add NT_ARM_TAGGED_ADDR_CTRL regset
  arm64: mte: ptrace: Add PTRACE_{PEEK,POKE}MTETAGS support
  arm64: mte: Allow {set,get}_tagged_addr_ctrl() on non-current tasks
  arm64: mte: Restore the GCR_EL1 register after a suspend
  arm64: mte: Allow user control of the generated random tags via prctl()
  arm64: mte: Allow user control of the tag check mode via prctl()
  mm: Allow arm64 mmap(PROT_MTE) on RAM-based files
  arm64: mte: Validate the PROT_MTE request via arch_validate_flags()
  mm: Introduce arch_validate_flags()
  arm64: mte: Add PROT_MTE support to mmap() and mprotect()
  mm: Introduce arch_calc_vm_flag_bits()
  arm64: mte: Tags-aware aware memcmp_pages() implementation
  arm64: Avoid unnecessary clear_user_page() indirection
  ...
2020-09-29  arm64: Rename ARM64_SSBD to ARM64_SPECTRE_V4  (Will Deacon; 1 file, -1/+1)
In a similar manner to the renaming of ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2, rename ARM64_SSBD to ARM64_SPECTRE_V4. This isn't _entirely_ accurate, as we also need to take into account the interaction with SSBS, but that will be taken care of in subsequent patches. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2  (Will Deacon; 1 file, -1/+1)
For better or worse, the world knows about "Spectre" and not about "Branch predictor hardening". Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2 as part of moving all of the Spectre mitigations into their own little corner. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-03  arm64: mte: CPU feature detection and initial sysreg configuration  (Vincenzo Frascino; 1 file, -1/+2)
Add the cpufeature and hwcap entries to detect the presence of MTE. If the feature is detected on the boot CPU, any secondary CPU not supporting it will be parked. Add the minimum SCTLR_EL1 and HCR_EL2 bits for enabling MTE. The Normal Tagged memory type is configured in MAIR_EL1 before the MMU is enabled in order to avoid disrupting other CPUs in the CnP domain. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
2020-07-15  arm64: tlb: Detect the ARMv8.4 TLBI RANGE feature  (Zhenyu Ye; 1 file, -1/+2)
ARMv8.4-TLBI provides TLBI invalidation instructions that apply to a range of input addresses. This patch detects the feature. Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com> Link: https://lore.kernel.org/r/20200715071945.897-2-yezhenyu2@huawei.com [catalin.marinas@arm.com: some renaming for consistency] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-07  arm64: Detect the ARMv8.4 TTL feature  (Marc Zyngier; 1 file, -1/+2)
In order to reduce the cost of TLB invalidation, the ARMv8.4 TTL feature allows TLB invalidation instructions to be issued with the level of the translation table walk, allowing for quicker invalidation. Let's detect the feature for now. Further patches will implement its actual usage. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-05-28  Merge branch 'for-next/kvm/errata' into for-next/core  (Will Deacon; 1 file, -10/+9)
KVM CPU errata rework (Andrew Scull and Marc Zyngier)
* for-next/kvm/errata:
  KVM: arm64: Move __load_guest_stage2 to kvm_mmu.h
  arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}
2020-05-28  Merge branch 'for-next/bti' into for-next/core  (Will Deacon; 1 file, -1/+2)
Support for Branch Target Identification (BTI) in user and kernel (Mark Brown and others)
* for-next/bti: (39 commits)
  arm64: vdso: Fix CFI directives in sigreturn trampoline
  arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction
  arm64: bti: Fix support for userspace only BTI
  arm64: kconfig: Update and comment GCC version check for kernel BTI
  arm64: vdso: Map the vDSO text with guarded pages when built for BTI
  arm64: vdso: Force the vDSO to be linked as BTI when built for BTI
  arm64: vdso: Annotate for BTI
  arm64: asm: Provide a mechanism for generating ELF note for BTI
  arm64: bti: Provide Kconfig for kernel mode BTI
  arm64: mm: Mark executable text as guarded pages
  arm64: bpf: Annotate JITed code for BTI
  arm64: Set GP bit in kernel page tables to enable BTI for the kernel
  arm64: asm: Override SYM_FUNC_START when building the kernel with BTI
  arm64: bti: Support building kernel C code using BTI
  arm64: Document why we enable PAC support for leaf functions
  arm64: insn: Report PAC and BTI instructions as skippable
  arm64: insn: Don't assume unrecognized HINTs are skippable
  arm64: insn: Provide a better name for aarch64_insn_is_nop()
  arm64: insn: Add constants for new HINT instruction decode
  arm64: Disable old style assembly annotations
  ...
2020-05-05  Merge branch 'for-next/bti-user' into for-next/bti  (Will Deacon; 1 file, -1/+2)
Merge in user support for Branch Target Identification, which narrowly missed the cut for 5.7 after a late ABI concern.
* for-next/bti-user:
  arm64: bti: Document behaviour for dynamically linked binaries
  arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTY
  arm64: BTI: Add Kconfig entry for userspace BTI
  mm: smaps: Report arm64 guarded pages in smaps
  arm64: mm: Display guarded pages in ptdump
  KVM: arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: traps: Shuffle code to eliminate forward declarations
  arm64: unify native/compat instruction skipping
  arm64: BTI: Decode BYTPE bits when printing PSTATE
  arm64: elf: Enable BTI at exec based on ELF program properties
  elf: Allow arch to tweak initial mmap prot flags
  arm64: Basic Branch Target Identification support
  ELF: Add ELF program property parsing support
  ELF: UAPI and Kconfig additions for ELF program properties
2020-05-04  arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}  (Andrew Scull; 1 file, -8/+7)
Errata 1165522, 1319367 and 1530923 each allow TLB entries to be allocated as a result of a speculative AT instruction. In order to avoid mandating VHE on certain affected CPUs, apply the workaround to both the nVHE and the VHE case for all affected CPUs. Signed-off-by: Andrew Scull <ascull@google.com> Acked-by: Will Deacon <will@kernel.org> CC: Marc Zyngier <maz@kernel.org> CC: James Morse <james.morse@arm.com> CC: Suzuki K Poulose <suzuki.poulose@arm.com> CC: Will Deacon <will@kernel.org> CC: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/20200504094858.108917-1-ascull@google.com Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  arm64: cpufeature: Add CPU capability for AArch32 EL1 support  (Will Deacon; 1 file, -1/+2)
Although we emit a "SANITY CHECK" warning and taint the kernel if we detect a CPU mismatch for AArch32 support at EL1, we still online the CPU with disastrous consequences for any running 32-bit VMs. Introduce a capability for AArch32 support at EL1 so that late onlining of incompatible CPUs is forbidden. Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200421142922.18950-4-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-03-25  Merge branch 'for-next/kernel-ptrauth' into for-next/core  (Catalin Marinas; 1 file, -1/+3)
* for-next/kernel-ptrauth:
  : Return address signing - in-kernel support
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: install user ptrauth keys at kernel exit time
  arm64: rename ptrauth key structures to be user-specific
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: cpufeature: Fix meta-capability cpufeature check
2020-03-18  arm64: cpufeature: add pointer auth meta-capabilities  (Kristina Martsenko; 1 file, -1/+3)
To enable pointer auth for the kernel, we're going to need to check for the presence of address auth and generic auth using alternative_if. We currently have two cpucaps for each, but alternative_if needs to check a single cpucap. So define meta-capabilities that are present when either of the current two capabilities is present. Leave the existing four cpucaps in place, as they are still needed to check for mismatched systems where one CPU has the architected algorithm but another has the IMP DEF algorithm. Note, the meta-capabilities were present before but were removed in commit a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4") and commit 1e013d06120c ("arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches"), as they were not needed then. Note, unlike before, the current patch checks the cpucap values directly, instead of reading the CPU ID register value. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> [Amit: commit message and macro rebase, use __system_matches_cap] Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
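Given the commit's note that the cpucap values are checked directly via __system_matches_cap(), the meta-capability matcher is roughly of this shape (a sketch, not the literal patch):

    /* Sketch: the meta-capability is present if either variant matched. */
    static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
                                 int scope)
    {
            return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
                   __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
    }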
2020-03-16  arm64: Basic Branch Target Identification support  (Dave Martin; 1 file, -1/+2)
This patch adds the bare minimum required to expose the ARMv8.5 Branch Target Identification feature to userspace. By itself, this does _not_ automatically enable BTI for any initial executable pages mapped by execve(). This will come later, but for now it should be possible to enable BTI manually on those pages by using mprotect() from within the target process. Other arches using the generic mman.h already use 0x10 for arch-specific prot flags, so we use that for PROT_BTI here. For consistency, signal handler entry points in BTI guarded pages are required to be annotated as such, just like any other function. This blocks a relatively minor attack vector, but conforming userspace will have the annotations anyway, so we may as well enforce them. Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-06  arm64: add support for the AMU extension v1  (Ionela Voinescu; 1 file, -1/+2)
The activity monitors extension is an optional feature introduced by the ARMv8.4 CPU architecture. This implements basic support for version 1 of the activity monitors architecture, AMUv1. This support includes:
  - Extension detection on each CPU (boot, secondary, hotplugged)
  - Register interface for AMU aarch64 registers
Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-01-22  Merge branch 'for-next/rng' into for-next/core  (Will Deacon; 1 file, -1/+2)
* for-next/rng: (2 commits)
  arm64: Use v8.5-RNG entropy for KASLR seed
  ...
2020-01-22  Merge branch 'for-next/errata' into for-next/core  (Will Deacon; 1 file, -2/+2)
* for-next/errata: (3 commits)
  arm64: Workaround for Cortex-A55 erratum 1530923
  ...
2020-01-22  arm64: Implement archrandom.h for ARMv8.5-RNG  (Richard Henderson; 1 file, -1/+2)
Expose the ID_AA64ISAR0.RNDR field to userspace, as the RNG system registers are always available at EL0. Implement arch_get_random_seed_long using RNDR. Given that the TRNG is likely to be a shared resource between cores and VMs, do not explicitly force re-seeding with RNDRRS. In order to avoid code complexity and potential issues with heterogeneous systems, only provide values after cpufeature has finalized the system capabilities. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> [Modified to only function after cpufeature has finalized the system capabilities and move all the code into the header -- broonie] Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> [will: Advertise HWCAP via /proc/cpuinfo] Signed-off-by: Will Deacon <will@kernel.org>
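A sketch of the probing read (RNDR has system-register encoding S3_3_C2_C4_0; on failure the instruction sets PSTATE.NZCV to 0b0100, i.e. Z set):

    /* Sketch: read RNDR, report success via the Z flag. */
    static inline bool arm64_rndr(unsigned long *v)
    {
            bool ok;

            /* Z clear ("ne") => a valid random value was returned */
            asm volatile("mrs %0, s3_3_c2_c4_0\n"
                         "cset %w1, ne\n"
                         : "=r" (*v), "=r" (ok)
                         :
                         : "cc");
            return ok;
    }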
2020-01-16  arm64: Rename WORKAROUND_1319367 to SPECULATIVE_AT_NVHE  (Steven Price; 1 file, -1/+1)
To match SPECULATIVE_AT_VHE let's also have a generic name for the NVHE variant. Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-16  arm64: Rename WORKAROUND_1165522 to SPECULATIVE_AT_VHE  (Steven Price; 1 file, -1/+1)
Cortex-A55 is affected by a similar erratum, so rename the existing workaround for erratum 1165522 so that it can be used for both errata. Acked-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-15  arm64: Add initial support for E0PD  (Mark Brown; 1 file, -1/+2)
Kernel Page Table Isolation (KPTI) is used to mitigate some speculation-based security issues by ensuring that the kernel is not mapped when userspace is running, but this approach is expensive and is incompatible with SPE. E0PD, introduced in the ARMv8.5 extensions, provides an alternative which ensures that accesses from userspace to the kernel's half of the memory map always fault in constant time, preventing timing attacks without requiring constant unmapping and remapping or preventing legitimate accesses. Currently this feature will only be enabled if all CPUs in the system support E0PD; if some CPUs do not support the feature at boot time then the feature will not be enabled, and in the unlikely event that a late CPU is the first CPU to lack the feature then we will reject that CPU. This initial patch does not yet integrate with KPTI; this will be dealt with in followup patches. Ideally we could ensure that by default we don't use KPTI on CPUs where E0PD is present. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> [will: Fixed typo in Kconfig text] Signed-off-by: Will Deacon <will@kernel.org>
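Enabling the feature amounts to setting TCR_EL1.E0PD1 for the TTBR1 (kernel) half of the address space; a sketch of a per-CPU enable hook, assuming the sysreg_clear_set() helper and the TCR_E0PD1 field definition:

    /* Sketch: make EL0 accesses to the TTBR1 half fault in constant time. */
    static void cpu_enable_e0pd(const struct arm64_cpu_capabilities *cap)
    {
            sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
            isb();  /* make the TCR_EL1 update take effect locally */
    }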
2019-10-28  Merge branch 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core  (Catalin Marinas; 1 file, -1/+2)
Similarly to erratum 1165522, which affects Cortex-A76, A57 and A72 respectively suffer from errata 1319537 and 1319367, potentially resulting in TLB corruption if the CPU speculates an AT instruction while switching guests. The fix is slightly more involved since we don't have VHE to help us here, but the idea is the same: when switching a guest in, we must prevent any speculated AT from being able to parse the page tables until S2 is up and running. Only at this stage can we allow AT to take place. For this, we always restore the guest sysregs first, except for its SCTLR and TCR registers, which must be set with SCTLR.M=1 and TCR.EPD{0,1} = {1, 1}, effectively disabling the PTW and TLB allocation. Once S2 is setup, we restore the guest's SCTLR and TCR. Similar things must be done on TLB invalidation...
* 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
  arm64: Enable and document ARM errata 1319367 and 1319537
  arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context
  arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs
  arm64: KVM: Reorder system register restoration and stage-2 activation
  arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions
2019-10-28  Merge branch 'for-next/neoverse-n1-stale-instr' into for-next/core  (Catalin Marinas; 1 file, -1/+2)
Neoverse-N1 cores with the 'COHERENT_ICACHE' feature may fetch stale instructions when software depends on prefetch-speculation-protection instead of explicit synchronization. [0] The workaround is to trap I-Cache maintenance and issue an inner-shareable TLBI. The affected cores have a Coherent I-Cache, so the I-Cache maintenance isn't necessary. The core tells user-space it can skip it with CTR_EL0.DIC. We also have to trap this register to hide the bit, forcing DIC-aware user-space to perform the maintenance. To avoid trapping all cache-maintenance, this workaround depends on a firmware component that only traps I-cache maintenance from EL0 and performs the workaround. For user-space, the kernel's work is to trap CTR_EL0 to hide DIC, and produce a fake IminLine. EL3 traps the now-necessary I-Cache maintenance and performs the inner-shareable-TLBI that makes everything better. [0] https://developer.arm.com/docs/sden885747/latest/arm-neoverse-n1-mp050-software-developer-errata-notice
* for-next/neoverse-n1-stale-instr:
  arm64: Silence clang warning on mismatched value/register sizes
  arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space
  arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419
  arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419
2019-10-25  arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419  (James Morse; 1 file, -1/+2)
Cores affected by Neoverse-N1 #1542419 could execute a stale instruction when a branch is updated to point to freshly generated instructions. To workaround this issue we need user-space to issue unnecessary icache maintenance that we can trap. Start by hiding CTR_EL0.DIC. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-10-18  arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions  (Marc Zyngier; 1 file, -1/+2)
Rework the EL2 vector hardening that is only selected for A57 and A72 so that the table can also be used for ARM64_WORKAROUND_1319367. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-10-08  arm64: Avoid Cavium TX2 erratum 219 when switching TTBR  (Marc Zyngier; 1 file, -1/+2)
As a PRFM instruction racing against a TTBR update can have undesirable effects on TX2, NOP-out such PRFM on cores that are affected by the TX2-219 erratum. Cc: <stable@vger.kernel.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-10-08  arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set  (Marc Zyngier; 1 file, -1/+2)
In order to workaround the TX2-219 erratum, it is necessary to trap TTBRx_EL1 accesses to EL2. This is done by setting HCR_EL2.TVM on guest entry, which has the side effect of trapping all the other VM-related sysregs as well. To minimize the overhead, a fast path is used so that we don't have to go all the way back to the main sysreg handling code, unless the rest of the hypervisor expects to see these accesses. Cc: <stable@vger.kernel.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner; 1 file, -12/+1)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Enrico Weigelt <info@metux.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-23  arm64: Handle erratum 1418040 as a superset of erratum 1188873  (Marc Zyngier; 1 file, -1/+1)
We already mitigate erratum 1188873 affecting Cortex-A76 and Neoverse-N1 r0p0 to r2p0. It turns out that revisions r0p0 to r3p1 of the same cores are affected by erratum 1418040, which has the same workaround as 1188873. Let's expand the range of affected revisions to match 1418040, and repaint all occurrences of 1188873 to 1418040. Whilst we're there, do a bit of reformatting in silicon-errata.txt and drop a now unnecessary dependency on ARM_ARCH_TIMER_OOL_WORKAROUND. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-05-23  arm64: errata: Add workaround for Cortex-A76 erratum #1463225  (Will Deacon; 1 file, -1/+2)
Revisions of the Cortex-A76 CPU prior to r4p0 are affected by an erratum that can prevent interrupts from being taken when single-stepping. This patch implements a software workaround to prevent userspace from effectively being able to disable interrupts. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-04-16  arm64: Advertise ARM64_HAS_DCPODP cpu feature  (Andrew Murray; 1 file, -1/+2)
Advertise ARM64_HAS_DCPODP when both DC CVAP and DC CVADP are supported. Even though we don't use this feature now, we provide it for consistency with DCPOP and anticipate it being used in the future. Signed-off-by: Andrew Murray <andrew.murray@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-02-06  arm64: cpufeature: Add cpufeature for IRQ priority masking  (Julien Thierry; 1 file, -1/+2)
Add a cpufeature indicating whether a cpu supports masking interrupts by priority. The feature will be properly enabled in a later patch. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-12-13  arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4  (Will Deacon; 1 file, -5/+3)
We can easily avoid defining the two meta-capabilities for the address and generic keys, so remove them and instead just check both of the architected and impdef capabilities when determining the level of system support. Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-12-13  arm64/cpufeature: detect pointer authentication  (Mark Rutland; 1 file, -1/+7)
So that we can dynamically handle the presence of pointer authentication functionality, wire up probing code in cpufeature.c. From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now has four fields describing the presence of pointer authentication functionality: * APA - address authentication present, using an architected algorithm * API - address authentication present, using an IMP DEF algorithm * GPA - generic authentication present, using an architected algorithm * GPI - generic authentication present, using an IMP DEF algorithm This patch checks for both address and generic authentication, separately. It is assumed that if all CPUs support an IMP DEF algorithm, the same algorithm is used across all CPUs. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
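The probing boils down to cpufeature table entries keyed off the new ID_AA64ISAR1_EL1 fields; one entry might look roughly like this (field names follow the arm64 cpufeature convention, exact values illustrative):

    /* Sketch: detect architected address auth via ID_AA64ISAR1_EL1.APA. */
    {
            .desc = "Address authentication (architected algorithm)",
            .capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
            .type = ARM64_CPUCAP_SYSTEM_FEATURE,
            .sys_reg = SYS_ID_AA64ISAR1_EL1,
            .sign = FTR_UNSIGNED,
            .field_pos = ID_AA64ISAR1_APA_SHIFT,
            .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
            .matches = has_cpuid_feature,
    },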
2018-12-10  Merge branch 'kvm/cortex-a76-erratum-1165522' into aarch64/for-next/core  (Will Deacon; 1 file, -1/+2)
Pull in KVM workaround for A76 erratum #1165522.
Conflicts:
  arch/arm64/include/asm/cpucaps.h
2018-12-10  arm64: KVM: Force VHE for systems affected by erratum 1165522  (Marc Zyngier; 1 file, -1/+2)
In order to easily mitigate ARM erratum 1165522, we need to force affected CPUs to run in VHE mode if using KVM. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-12-06  arm64: Add support for SB barrier and patch in over DSB; ISB sequences  (Will Deacon; 1 file, -1/+2)
We currently use a DSB; ISB sequence to inhibit speculation in set_fs(). Whilst this works for current CPUs, future CPUs may implement a new SB barrier instruction which acts as an architected speculation barrier. On CPUs that support it, patch in an SB; NOP sequence over the DSB; ISB sequence and advertise the presence of the new instruction to userspace. Signed-off-by: Will Deacon <will.deacon@arm.com>
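The patched sequence can be sketched as an ALTERNATIVE() site, roughly the shape of the kernel's spec_bar() macro (here SB_BARRIER_INSN stands for the raw hint encoding of SB, since assemblers predating Armv8.5 lack the mnemonic):

    /* Sketch: DSB; ISB by default, patched to SB; NOP where supported. */
    #define spec_bar()      asm volatile(ALTERNATIVE("dsb nsh\nisb\n",        \
                                                     SB_BARRIER_INSN "nop\n", \
                                                     ARM64_HAS_SB))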
2018-10-01  arm64: arch_timer: Add workaround for ARM erratum 1188873  (Marc Zyngier; 1 file, -1/+2)
When running on Cortex-A76, a timer access from an AArch32 EL0 task may end up with a corrupted value or register. The workaround for this is to trap these accesses at EL1/EL2 and execute them there. This only affects versions r0p0, r1p0 and r2p0 of the CPU. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-19  arm64: cpu_errata: Remove ARM64_MISMATCHED_CACHE_LINE_SIZE  (Will Deacon; 1 file, -3/+2)
There's no need to treat mismatched cache-line sizes reported by CTR_EL0 differently to any other mismatched fields that we treat as "STRICT" in the cpufeature code. In both cases we need to trap and emulate EL0 accesses to the register, so drop ARM64_MISMATCHED_CACHE_LINE_SIZE and rely on ARM64_MISMATCHED_CACHE_TYPE instead. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [catalin.marinas@arm.com: move ARM64_HAS_CNP in the empty cpucaps.h slot] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-09-18  arm64: mm: Support Common Not Private translations  (Vladimir Murzin; 1 file, -1/+2)
Common Not Private (CNP) is a feature of the ARMv8.2 extension which allows translation table entries to be shared between different PEs in the same inner shareable domain, so the hardware can use this fact to optimise the caching of such entries in the TLB. CNP occupies one bit in TTBRx_ELy and VTTBR_EL2, which advertises to the hardware that the translation table entries pointed to by this TTBR are the same as on every PE in the same inner shareable domain for which the equivalent TTBR also has the CNP bit set. In case the CNP bit is set but the TTBR does not point at the same translation table entries for a given ASID and VMID, the system is misconfigured and the results of translations are UNPREDICTABLE. For the kernel we postpone setting CNP till all cpus are up and rely on the cpufeature framework to 1) patch the code which is sensitive to CNP and 2) update TTBR1_EL1 with the CNP bit set. TTBR1_EL1 can be reprogrammed as a result of hibernation or cpuidle (via __enable_mmu); for these two cases we restore the CnP bit via __cpu_suspend_exit(). There are a few cases of changes to TTBR0_EL1 we need to take care of:
  - a switch to idmap
  - software emulated PAN
We rule out the latter via Kconfig options, and for the former we make sure that CNP is set for non-zero ASIDs only. Reviewed-by: James Morse <james.morse@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> [catalin.marinas@arm.com: default y for CONFIG_ARM64_CNP] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
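CnP is bit 0 of the TTBR; folding it into the base address before programming the register can be sketched as follows (assuming a system_supports_cnp() predicate gated on the new capability):

    /* Sketch: advertise shared translation tables via TTBR bit 0. */
    #define TTBR_CNP_BIT    (1UL << 0)

    static inline phys_addr_t ttbr_with_cnp(phys_addr_t ttbr)
    {
            if (system_supports_cnp())
                    ttbr |= TTBR_CNP_BIT;
            return ttbr;
    }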
2018-09-14  arm64: cpufeature: Detect SSBS and advertise to userspace  (Will Deacon; 1 file, -1/+2)
Armv8.5 introduces a new PSTATE bit known as Speculative Store Bypass Safe (SSBS) which can be used as a mitigation against Spectre variant 4. Additionally, a CPU may provide instructions to manipulate PSTATE.SSBS directly, so that userspace can toggle the SSBS control without trapping to the kernel. This patch probes for the existence of SSBS and advertise the new instructions to userspace if they exist. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
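With the optional instructions present, EL0 can flip the PSTATE bit directly; a sketch of the idea (the MSR-immediate form needs an SSBS-aware assembler, and the kernel encodes the instruction manually instead):

    /* Sketch: toggle PSTATE.SSBS from userspace without trapping. */
    static inline void set_ssbs(int allow_bypass)
    {
            if (allow_bypass)
                    asm volatile("msr ssbs, #1" ::: "memory"); /* bypass permitted */
            else
                    asm volatile("msr ssbs, #0" ::: "memory"); /* mitigated */
    }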
2018-09-10  arm64: cpufeature: add feature for CRC32 instructions  (Ard Biesheuvel; 1 file, -1/+2)
Add a CRC32 feature bit and wire it up to the CPU id register so we will be able to use alternatives patching for CRC32 operations. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
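Once the cap is set, CRC32 call sites can be dispatched or patched at runtime; a hand-written sketch (crc32c_base() is a hypothetical table-based fallback, not a kernel API):

    /* Sketch: use the CRC32C instruction when the capability is present. */
    static u32 crc32c_u64(u32 crc, u64 data)
    {
            if (cpus_have_const_cap(ARM64_HAS_CRC32)) {
                    asm(".arch_extension crc\n"
                        "crc32cx %w0, %w0, %x1" : "+r" (crc) : "r" (data));
                    return crc;
            }
            return crc32c_base(crc, data); /* hypothetical software fallback */
    }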
2018-08-22  Merge tag 'kvmarm-for-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  (Paolo Bonzini; 1 file, -1/+2)
KVM/arm updates for 4.19
- Support for Group0 interrupts in guests
- Cache management optimizations for ARMv8.4 systems
- Userspace interface for RAS, allowing error retrieval and injection
- Fault path optimization
- Emulated physical timer fixes
- Random cleanups