path: root/arch/mips/kvm
Age | Commit message | Author | Files | Lines
2020-09-23 | KVM: MIPS: Change the definition of kvm type | Huacai Chen | 1 | -0/+2
[ Upstream commit 15e9e35cd1dec2bc138464de6bf8ef828df19235 ] MIPS defines two kvm types:

    #define KVM_VM_MIPS_TE    0
    #define KVM_VM_MIPS_VZ    1

Documentation/virt/kvm/api.rst says "You probably want to use 0 as machine type", which implies that type 0 is the "automatic" or "default" type. And in user space, libvirt uses the null machine (with type 0) to detect the kvm capability, which returns "KVM not supported" on a VZ platform. I tried to fix this in QEMU, but the result is ugly: https://lists.nongnu.org/archive/html/qemu-devel/2020-08/msg05629.html Thomas Huth then suggested changing the definition of the kvm type instead: https://lists.nongnu.org/archive/html/qemu-devel/2020-09/msg03281.html So the types are now defined like this:

    #define KVM_VM_MIPS_AUTO  0
    #define KVM_VM_MIPS_VZ    1
    #define KVM_VM_MIPS_TE    2

Since VZ and TE cannot co-exist, using type 0 on a TE platform will still return success (so old user-space tools have no problems on new kernels); the advantage is that using type 0 on a VZ platform will no longer return failure. The only remaining problem is "new user-space tools use type 2 on old kernels", but if we treat this as a kernel bug, we can backport this patch to old stable kernels. Signed-off-by: Huacai Chen <chenhc@lemote.com> Message-Id: <1599734031-28746-1-git-send-email-chenhc@lemote.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
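As a hedged illustration of what this means for userspace (a hypothetical snippet, not part of the patch): with type 0 defined as KVM_VM_MIPS_AUTO, a single KVM_CREATE_VM call works on both TE and VZ hosts.

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    #ifndef KVM_VM_MIPS_AUTO
    #define KVM_VM_MIPS_AUTO 0        /* value introduced by this patch */
    #endif

    int create_mips_vm(void)
    {
            int kvm = open("/dev/kvm", O_RDWR);

            if (kvm < 0)
                    return -1;
            /* Type 0 now means "whichever of TE/VZ this kernel supports";
             * the /dev/kvm fd is intentionally kept open for the VM's lifetime. */
            return ioctl(kvm, KVM_CREATE_VM, KVM_VM_MIPS_AUTO);
    }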
2020-08-26 | KVM: Pass MMU notifier range flags to kvm_unmap_hva_range() | Will Deacon | 1 | -1/+2
commit fdfe7cbd58806522e799e2a50a15aee7f2cbb7b6 upstream. The 'flags' field of 'struct mmu_notifier_range' is used to indicate whether invalidate_range_{start,end}() are permitted to block. In the case of kvm_mmu_notifier_invalidate_range_start(), this field is not forwarded on to the architecture-specific implementation of kvm_unmap_hva_range() and therefore the backend cannot sensibly decide whether or not to block. Add an extra 'flags' parameter to kvm_unmap_hva_range() so that architectures are aware as to whether or not they are permitted to block. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will@kernel.org> Message-Id: <20200811102725.7121-2-will@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [will: Backport to 4.19; use 'blockable' instead of non-existent range flags] Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
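A minimal sketch of the resulting arch hook, assuming the mainline form of the change (the 4.19 backport passes a 'blockable' bool instead of the flags word); the body is elided:

    int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start,
                            unsigned long end, unsigned int flags)
    {
            bool may_block = flags & MMU_NOTIFIER_RANGE_BLOCKABLE;

            /* ... drop the GPA mappings backing [start, end), sleeping
             * during the walk only if may_block is set ... */
            return 0;
    }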
2019-06-09 | KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID | Thomas Huth | 1 | -0/+3
commit a86cb413f4bf273a9d341a3ab2c2ca44e12eb317 upstream. KVM_CAP_MAX_VCPU_ID is currently always reporting KVM_MAX_VCPU_ID on all architectures. However, on s390x, the number of usable CPUs is determined at runtime - it depends on the features of the machine the code is running on. Since we are using the vcpu_id as an index into the SCA structures that are defined by the hardware (see e.g. the sca_add_vcpu() function), it is not only the number of CPUs that is limited by the hardware, but also the range of IDs that we can use. Thus KVM_CAP_MAX_VCPU_ID must be determined at runtime on s390x, too. So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common code into the architecture-specific code, and on s390x we have to return the same value here as for KVM_CAP_MAX_VCPUS. This problem was discovered with the kvm_create_max_vcpus selftest. With this change applied, the selftest now passes on s390x, too. Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20190523164309.13345-9-thuth@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
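On architectures without a runtime limit, the moved handling can be as simple as the following sketch (illustrative, not the exact per-arch diffs):

    int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
    {
            switch (ext) {
            case KVM_CAP_MAX_VCPU_ID:
                    /* s390x instead returns its runtime KVM_CAP_MAX_VCPUS value */
                    return KVM_MAX_VCPU_ID;
            /* ... other capabilities ... */
            default:
                    return 0;
            }
    }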
2018-09-07 | KVM: Remove obsolete kvm_unmap_hva notifier backend | Marc Zyngier | 1 | -10/+0
kvm_unmap_hva is long gone, and we only have kvm_unmap_hva_range to deal with. Drop the now obsolete code. Fixes: fb1522e099f0 ("KVM: update to new mmu_notifier semantic v2") Cc: James Hogan <jhogan@kernel.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
2018-06-20 | sched/swait: Rename to exclusive | Peter Zijlstra | 1 | -2/+2
Since swait basically implemented exclusive waits only, make sure the API reflects that. $ git grep -l -e "\<swake_up\>" -e "\<swait_event[^ (]*" -e "\<prepare_to_swait\>" | while read file; do sed -i -e 's/\<swake_up\>/&_one/g' -e 's/\<swait_event[^ (]*/&_exclusive/g' -e 's/\<prepare_to_swait\>/&_exclusive/g' $file; done With a few manual touch-ups. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: bigeasy@linutronix.de Cc: oleg@redhat.com Cc: paulmck@linux.vnet.ibm.com Cc: pbonzini@redhat.com Link: https://lkml.kernel.org/r/20180612083909.261946548@infradead.org
2018-06-12 | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm | Linus Torvalds | 1 | -1/+1
Pull KVM updates from Paolo Bonzini: "Small update for KVM: ARM: - lazy context-switching of FPSIMD registers on arm64 - "split" regions for vGIC redistributor s390: - cleanups for nested - clock handling - crypto - storage keys - control register bits x86: - many bugfixes - implement more Hyper-V super powers - implement lapic_timer_advance_ns even when the LAPIC timer is emulated using the processor's VMX preemption timer. - two security-related bugfixes at the top of the branch" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (79 commits) kvm: fix typo in flag name kvm: x86: use correct privilege level for sgdt/sidt/fxsave/fxrstor access KVM: x86: pass kvm_vcpu to kvm_read_guest_virt and kvm_write_guest_virt_system KVM: x86: introduce linear_{read,write}_system kvm: nVMX: Enforce cpl=0 for VMX instructions kvm: nVMX: Add support for "VMWRITE to any supported field" kvm: nVMX: Restrict VMX capability MSR changes KVM: VMX: Optimize tscdeadline timer latency KVM: docs: nVMX: Remove known limitations as they do not exist now KVM: docs: mmu: KVM support exposing SLAT to guests kvm: no need to check return value of debugfs_create functions kvm: Make VM ioctl do valloc for some archs kvm: Change return type to vm_fault_t KVM: docs: mmu: Fix link to NPT presentation from KVM Forum 2008 kvm: x86: Amend the KVM_GET_SUPPORTED_CPUID API documentation KVM: x86: hyperv: declare KVM_CAP_HYPERV_TLBFLUSH capability KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE}_EX implementation KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} implementation KVM: introduce kvm_make_vcpus_request_mask() API KVM: x86: hyperv: do rep check for each hypercall separately ...
2018-06-01 | kvm: Change return type to vm_fault_t | Souptick Joarder | 1 | -1/+1
Use new return type vm_fault_t for fault handler. For now, this is just documenting that the function returns a VM_FAULT value rather than an errno. Once all instances are converted, vm_fault_t will become a distinct type. commit 1c8f422059ae ("mm: change return type to vm_fault_t") Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com> Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
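On MIPS the conversion is essentially a type change on the stub fault handler; a hedged sketch of its shape:

    vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
    {
            /* No extra vcpu pages to mmap on MIPS, so any other offset faults. */
            return VM_FAULT_SIGBUS;
    }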
2018-05-15 | KVM: Fix spelling mistake: "cop_unsuable" -> "cop_unusable" | Colin Ian King | 1 | -1/+1
Trivial fix to spelling mistake in debugfs_entries text. Fixes: 669e846e6c4e ("KVM/MIPS32: MIPS arch specific APIs for KVM") Signed-off-by: Colin Ian King <colin.king@canonical.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kernel-janitors@vger.kernel.org Cc: <stable@vger.kernel.org> # 3.10+ Signed-off-by: James Hogan <jhogan@kernel.org>
2017-12-14 | KVM: introduce kvm_arch_vcpu_async_ioctl | Paolo Bonzini | 2 | -3/+13
After the vcpu_load/vcpu_put pushdown, the handling of asynchronous VCPU ioctl is already much clearer in that it is obvious that they bypass vcpu_load and vcpu_put. However, it is still not perfect in that the different state of the VCPU mutex is still hidden in the caller. Separate those ioctls into a new function kvm_arch_vcpu_async_ioctl that returns -ENOIOCTLCMD for more "traditional" synchronous ioctls. Cc: James Hogan <jhogan@kernel.org> Cc: Paul Mackerras <paulus@ozlabs.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Suggested-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
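A sketch of the resulting split (illustrative, not the exact upstream code): the generic vcpu ioctl path offers the command to the new hook first, and only falls back to the locked, vcpu_load()-protected path when the hook declines.

    /* generic code */
    r = kvm_arch_vcpu_async_ioctl(filp, ioctl, arg);
    if (r != -ENOIOCTLCMD)
            return r;
    /* ... continue with the normal, synchronous ioctl handling ... */

    /* an architecture with no asynchronous ioctls of its own simply declines: */
    long kvm_arch_vcpu_async_ioctl(struct file *filp, unsigned int ioctl,
                                   unsigned long arg)
    {
            return -ENOIOCTLCMD;
    }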
2017-12-14 | KVM: Move vcpu_load to arch-specific kvm_arch_vcpu_ioctl | Christoffer Dall | 1 | -20/+29
Move the calls to vcpu_load() and vcpu_put() in to the architecture specific implementations of kvm_arch_vcpu_ioctl() which dispatches further architecture-specific ioctls on to other functions. Some architectures support asynchronous vcpu ioctls which cannot call vcpu_load() or take the vcpu->mutex, because that would prevent concurrent execution with a running VCPU, which is the intended purpose of these ioctls, for example because they inject interrupts. We repeat the separate checks for these specifics in the architecture code for MIPS, S390 and PPC, and avoid taking the vcpu->mutex and calling vcpu_load for these ioctls. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-12-14 | KVM: Move vcpu_load to arch-specific kvm_arch_vcpu_ioctl_set_regs | Christoffer Dall | 1 | -0/+3
Move vcpu_load() and vcpu_put() into the architecture specific implementations of kvm_arch_vcpu_ioctl_set_regs(). Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-12-14 | KVM: Move vcpu_load to arch-specific kvm_arch_vcpu_ioctl_get_regs | Christoffer Dall | 1 | -0/+3
Move vcpu_load() and vcpu_put() into the architecture specific implementations of kvm_arch_vcpu_ioctl_get_regs(). Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-12-14 | KVM: Move vcpu_load to arch-specific kvm_arch_vcpu_ioctl_run | Christoffer Dall | 1 | -0/+3
Move vcpu_load() and vcpu_put() into the architecture specific implementations of kvm_arch_vcpu_ioctl_run(). Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> # s390 parts Reviewed-by: Cornelia Huck <cohuck@redhat.com> [Rebased. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
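The pushed-down pattern, as a hedged sketch of what each architecture now does itself (the entry loop is elided):

    int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            int r = -EINTR;

            vcpu_load(vcpu);
            /* ... architecture-specific guest entry loop runs here and sets r ... */
            vcpu_put(vcpu);
            return r;
    }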
2017-11-27 | KVM: Let KVM_SET_SIGNAL_MASK work as advertised | Jan H. Schönherr | 1 | -5/+2
KVM API says for the signal mask you set via KVM_SET_SIGNAL_MASK, that "any unblocked signal received [...] will cause KVM_RUN to return with -EINTR" and that "the signal will only be delivered if not blocked by the original signal mask". This, however, is only true, when the calling task has a signal handler registered for a signal. If not, signal evaluation is short-circuited for SIG_IGN and SIG_DFL, and the signal is either ignored without KVM_RUN returning or the whole process is terminated. Make KVM_SET_SIGNAL_MASK behave as advertised by utilizing logic similar to that in do_sigtimedwait() to avoid short-circuiting of signals. Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-11-02 | License cleanup: add SPDX GPL-2.0 license identifier to files with no license | Greg Kroah-Hartman | 2 | -0/+2
Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license. By default all files without license information are under the default license of the kernel, which is GPL version 2. Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text. This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne. How this work was done: Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases: - file had no licensing information it it. - file was a */uapi/* one with no licensing information in it, - file was a */uapi/* one with existing licensing information, Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords. The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side by side results from of the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files. The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file by file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation. Criteria used to select files for SPDX license identifier tagging was: - Files considered eligible had to be source code files. - Make and config files were included as candidates if they contained >5 lines of source - File already had some variant of a license header in it (even if <5 lines). All documentation files were explicitly excluded. The following heuristics were used to determine which SPDX license identifiers to apply. - when both scanners couldn't find any license traces, file was considered to have no license information in it, and the top level COPYING file license applied. For non */uapi/* files that summary was: SPDX license identifier # files ---------------------------------------------------|------- GPL-2.0 11139 and resulted in the first patch in this series. If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was: SPDX license identifier # files ---------------------------------------------------|------- GPL-2.0 WITH Linux-syscall-note 930 and resulted in the second patch in this series. - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). 
Results summary: SPDX license identifier # files ---------------------------------------------------|------ GPL-2.0 WITH Linux-syscall-note 270 GPL-2.0+ WITH Linux-syscall-note 169 ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21 ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17 LGPL-2.1+ WITH Linux-syscall-note 15 GPL-1.0+ WITH Linux-syscall-note 14 ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5 LGPL-2.0+ WITH Linux-syscall-note 4 LGPL-2.1 WITH Linux-syscall-note 3 ((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3 ((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1 and that resulted in the third patch in this series. - when the two scanners agreed on the detected license(s), that became the concluded license(s). - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred. - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics). - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation. - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time. In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there was new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related. Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files. In initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier. Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with: - a full scancode scan run, collecting the matched texts, detected license ids and scores - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types.) Finally Greg ran the script using the .csv files to generate the patches. 
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
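The tags themselves are one line per file; per the kernel's licensing conventions, C sources take the C++-style comment on line 1 while headers and uapi files use a block comment (the WITH clause shown is the uapi form described above):

    // SPDX-License-Identifier: GPL-2.0

    /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */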
2017-09-15 | kvm,mips: Fix potential swait_active() races | Davidlohr Bueso | 1 | -2/+2
For example, the following could occur, making us miss a wakeup:

    CPU0                                        CPU1
    kvm_vcpu_block                              kvm_mips_comparecount_func
                                                [L] swait_active(&vcpu->wq)
    [S] prepare_to_swait(&vcpu->wq)
    [L] if (!kvm_vcpu_has_pending_timer(vcpu))
            schedule()                          [S] queue_timer_int(vcpu)

Ensure that the swait_active() check is not hoisted over the interrupt. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-08-08 | KVM: add spinlock optimization framework | Longpeng(Mike) | 1 | -0/+5
If a vcpu exits due to request a user mode spinlock, then the spinlock-holder may be preempted in user mode or kernel mode. (Note that not all architectures trap spin loops in user mode, only AMD x86 and ARM/ARM64 currently do). But if a vcpu exits in kernel mode, then the holder must be preempted in kernel mode, so we should choose a vcpu in kernel mode as a more likely candidate for the lock holder. This introduces kvm_arch_vcpu_in_kernel() to decide whether the vcpu is in kernel-mode when it's preempted. kvm_vcpu_on_spin's new argument says the same of the spinning VCPU. Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
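For an architecture that does not trap spin loops from user mode, the new hook can be a minimal stub; a hedged sketch, likely close to what the few added MIPS lines amount to:

    bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
    {
            /* Without a cheap way to sample the preempted vcpu's mode here,
             * be conservative and report "not in kernel mode". */
            return false;
    }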
2017-07-07 | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm | Linus Torvalds | 2 | -2/+2
Pull KVM updates from Paolo Bonzini: "PPC: - Better machine check handling for HV KVM - Ability to support guests with threads=2, 4 or 8 on POWER9 - Fix for a race that could cause delayed recognition of signals - Fix for a bug where POWER9 guests could sleep with interrupts pending. ARM: - VCPU request overhaul - allow timer and PMU to have their interrupt number selected from userspace - workaround for Cavium erratum 30115 - handling of memory poisonning - the usual crop of fixes and cleanups s390: - initial machine check forwarding - migration support for the CMMA page hinting information - cleanups and fixes x86: - nested VMX bugfixes and improvements - more reliable NMI window detection on AMD - APIC timer optimizations Generic: - VCPU request overhaul + documentation of common code patterns - kvm_stat improvements" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (124 commits) Update my email address kvm: vmx: allow host to access guest MSR_IA32_BNDCFGS x86: kvm: mmu: use ept a/d in vmcs02 iff used in vmcs12 kvm: x86: mmu: allow A/D bits to be disabled in an mmu x86: kvm: mmu: make spte mmio mask more explicit x86: kvm: mmu: dead code thanks to access tracking KVM: PPC: Book3S: Fix typo in XICS-on-XIVE state saving code KVM: PPC: Book3S HV: Close race with testing for signals on guest entry KVM: PPC: Book3S HV: Simplify dynamic micro-threading code KVM: x86: remove ignored type attribute KVM: LAPIC: Fix lapic timer injection delay KVM: lapic: reorganize restart_apic_timer KVM: lapic: reorganize start_hv_timer kvm: nVMX: Check memory operand to INVVPID KVM: s390: Inject machine check into the nested guest KVM: s390: Inject machine check into the guest tools/kvm_stat: add new interactive command 'b' tools/kvm_stat: add new command line switch '-i' tools/kvm_stat: fix error on interactive command 'g' KVM: SVM: suppress unnecessary NMI singlestep on GIF=0 and nested exit ...
2017-06-20 | KVM: MIPS: Fix maybe-uninitialized build failure | James Cowgill | 1 | -1/+5
This commit fixes a "maybe-uninitialized" build failure in arch/mips/kvm/tlb.c when KVM, DYNAMIC_DEBUG and JUMP_LABEL are all enabled. The failure is: In file included from ./include/linux/printk.h:329:0, from ./include/linux/kernel.h:13, from ./include/asm-generic/bug.h:15, from ./arch/mips/include/asm/bug.h:41, from ./include/linux/bug.h:4, from ./include/linux/thread_info.h:11, from ./include/asm-generic/current.h:4, from ./arch/mips/include/generated/asm/current.h:1, from ./include/linux/sched.h:11, from arch/mips/kvm/tlb.c:13: arch/mips/kvm/tlb.c: In function ‘kvm_mips_host_tlb_inv’: ./include/linux/dynamic_debug.h:126:3: error: ‘idx_kernel’ may be used uninitialized in this function [-Werror=maybe-uninitialized] __dynamic_pr_debug(&descriptor, pr_fmt(fmt), \ ^~~~~~~~~~~~~~~~~~ arch/mips/kvm/tlb.c:169:16: note: ‘idx_kernel’ was declared here int idx_user, idx_kernel; ^~~~~~~~~~ There is a similar error relating to "idx_user". Both errors were observed with GCC 6. As far as I can tell, it is impossible for either idx_user or idx_kernel to be uninitialized when they are later read in the calls to kvm_debug, but to satisfy the compiler, add zero initializers to both variables. Signed-off-by: James Cowgill <James.Cowgill@imgtec.com> Fixes: 57e3869cfaae ("KVM: MIPS/TLB: Generalise host TLB invalidate to kernel ASID") Cc: <stable@vger.kernel.org> # 4.11+ Acked-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-06-04 | KVM: add kvm_request_pending | Radim Krčmář | 2 | -2/+2
A first step in vcpu->requests encapsulation. Additionally, we now use READ_ONCE() when accessing vcpu->requests, which ensures we always load vcpu->requests when it's accessed. This is important as other threads can change it any time. Also, READ_ONCE() documents that vcpu->requests is used with other threads, likely requiring memory barriers, which it does. Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> [ Documented the new use of READ_ONCE() and converted another check in arch/mips/kvm/vz.c ] Signed-off-by: Andrew Jones <drjones@redhat.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
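The new helper is essentially a READ_ONCE() wrapper (sketch):

    static inline bool kvm_request_pending(struct kvm_vcpu *vcpu)
    {
            /* vcpu->requests is written by other threads; force a reload. */
            return READ_ONCE(vcpu->requests);
    }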
2017-04-27 | KVM: add kvm_{test,clear}_request to replace {test,clear}_bit | Radim Krčmář | 1 | -1/+1
Users were expected to use kvm_check_request() for testing and clearing, but request have expanded their use since then and some users want to only test or do a faster clear. Make sure that requests are not directly accessed with bit operations. Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
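A simplified sketch of what such helpers look like (the kernel's versions may also mask the request number; kvm_check_request() remains the combined test-and-clear):

    static inline bool kvm_test_request(int req, struct kvm_vcpu *vcpu)
    {
            return test_bit(req, (void *)&vcpu->requests);
    }

    static inline void kvm_clear_request(int req, struct kvm_vcpu *vcpu)
    {
            clear_bit(req, (void *)&vcpu->requests);
    }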
2017-04-07 | kvm: make KVM_CAP_COALESCED_MMIO architecture agnostic | Paolo Bonzini | 1 | -3/+0
Remove code from architecture files that can be moved to virt/kvm, since there is already common code for coalesced MMIO. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> [Removed a pointless 'break' after 'return'.] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-03-28 | KVM: MIPS/Emulate: Properly implement TLBR for T&E | James Hogan | 1 | -46/+53
Properly implement emulation of the TLBR instruction for Trap & Emulate. This instruction reads the TLB entry pointed at by the CP0_Index register into the other TLB registers, which may have the side effect of changing the current ASID. Therefore abstract the CP0_EntryHi and ASID changing code into a common function in the process. A comment indicated that Linux doesn't use TLBR, which is true during normal use, however dumping of the TLB does use it (for example with the relatively recent 'x' magic sysrq key), as does a wired TLB entries test case in my KVM tests. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: Handle Octeon III guest.PRid register | James Hogan | 1 | -2/+17
Octeon III implements a read-only guest CP0_PRid register, so add cases to the KVM register access API for Octeon to ensure the correct value is read and writes are ignored. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Daney <david.daney@cavium.com> Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: Emulate hit CACHE ops for Octeon III | James Hogan | 1 | -0/+11
Octeon III doesn't implement the optional GuestCtl0.CG bit to allow guest mode to execute virtual address based CACHE instructions, so implement emulation of a few important ones specifically for Octeon III in response to a GPSI exception. Currently the main reason to perform these operations is for icache synchronisation, so they are implemented as a simple icache flush with local_flush_icache_range(). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Daney <david.daney@cavium.com> Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: VZ hardware setup for Octeon III | James Hogan | 2 | -28/+108
Set up hardware virtualisation on Octeon III cores, configuring guest interrupt routing and carving out half of the root TLB for guest use, restoring it back again afterwards. We need to be careful to inhibit TLB shutdown machine check exceptions while invalidating guest TLB entries, since TLB invalidation is not available so guest entries must be invalidated by setting them to unique unmapped addresses, which could conflict with mappings set by the guest or root if recently repartitioned. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Daney <david.daney@cavium.com> Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/T&E: Report correct dcache line size | James Hogan | 1 | -0/+8
Octeon CPUs don't report the correct dcache line size in CP0_Config1.DL, so encode the correct value for the guest CP0_Config1.DL based on cpu_dcache_line_size(). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Daney <david.daney@cavium.com> Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
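A hedged sketch of the idea, assuming the architectural CP0_Config1.DL encoding where a non-zero DL means a line size of 2^(DL+1) bytes (the helper name is illustrative):

    static unsigned int kvm_fixup_guest_config1_dl(unsigned int config1)
    {
            unsigned int dl = ilog2(cpu_dcache_line_size()) - 1;

            config1 &= ~MIPS_CONF1_DL;
            return config1 | (dl << MIPS_CONF1_DL_SHF);
    }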
2017-03-28 | KVM: MIPS/TLB: Handle virtually tagged icaches | James Hogan | 1 | -0/+14
When TLB entries are invalidated in the presence of a virtually tagged icache, such as that found on Octeon CPUs, flush the icache so that we don't get a reserved instruction exception even though the TLB mapping is removed. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Daney <david.daney@cavium.com> Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
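The shape of the fix, as a hedged sketch of the extra step in the host TLB invalidation path:

    /* The old mapping's instructions may still sit in a virtually tagged
     * icache even after the TLB entry is gone, so flush it. */
    if (cpu_has_vtag_icache)
            flush_icache_all();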
2017-03-28 | KVM: MIPS/Emulate: Adapt T&E CACHE emulation for Octeon | James Hogan | 1 | -3/+27
Cache management is implemented separately for Cavium Octeon CPUs, so r4k_blast_[id]cache aren't available. Instead for Octeon perform a local icache flush using local_flush_icache_range(), and for other platforms which don't use c-r4k.c use __flush_cache_all() / flush_icache_all(). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Daney <david.daney@cavium.com> Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: Trace guest mode changes | James Hogan | 3 | -2/+69
Create a trace event for guest mode changes, and enable VZ's GuestCtl0.MC bit after the trace event is enabled to trap all guest mode changes. The MC bit causes Guest Hardware Field Change (GHFC) exceptions whenever a guest mode change occurs (such as an exception entry or return from exception), so we need to handle this exception now. The MC bit is only enabled when restoring register state, so enabling the trace event won't take immediate effect. Tracing guest mode changes can be particularly handy when trying to work out what a guest OS gets up to before something goes wrong, especially if the problem occurs as a result of some previous guest userland exception which would otherwise be invisible in the trace. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: Support hardware guest timer | James Hogan | 3 | -11/+273
Transfer timer state to the VZ guest context (CP0_GTOffset & guest CP0_Count) when entering guest mode, enabling direct guest access to it, and transfer back to soft timer when saving guest register state. This usually allows guest code to directly read CP0_Count (via MFC0 and RDHWR) and read/write CP0_Compare, without trapping to the hypervisor for it to emulate the guest timer. Writing to CP0_Count or CP0_Cause.DC is much less common and still triggers a hypervisor GPSI exception, in which case the timer state is transferred back to an hrtimer before emulating the write. We are careful to prevent small amounts of drift from building up due to undeterministic time intervals between reading of the ktime and reading of CP0_Count. Some drift is expected however, since the system clocksource may use a different timer to the local CP0_Count timer used by VZ. This is permitted to prevent guest CP0_Count from appearing to go backwards. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
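The hand-over relies on the architectural relationship Guest.Count = Root.Count + CP0_GTOffset; a hedged sketch (the function name is illustrative) of pointing the hardware timer at the software timer's notion of CP0_Count:

    static void kvm_vz_sync_hw_count(struct kvm_vcpu *vcpu, u32 guest_count)
    {
            /* Make the running root counter appear as guest_count to the guest. */
            write_c0_gtoffset(guest_count - read_c0_count());
    }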
2017-03-28 | KVM: MIPS/VZ: Emulate MAARs when necessary | James Hogan | 2 | -2/+110
Add emulation of Memory Accessibility Attribute Registers (MAARs) when necessary. We can't actually do anything with whatever the guest provides, but it may not be possible to clear Guest.Config5.MRP so we have to emulate at least a pair of MAARs. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: linux-doc@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: Support guest load-linked bit | James Hogan | 1 | -0/+24
When restoring guest state after another VCPU has run, be sure to clear CP0_LLAddr.LLB in order to break any interrupted atomic critical section. Without this SMP guest atomics don't work when LLB is present as one guest can complete the atomic section started by another guest. MIPS VZ guest read of CP0_LLAddr causes Guest Privileged Sensitive Instruction (GPSI) exception due to the address being root physical. Handle this by reporting only the LLB bit, which contains the bit for whether a ll/sc atomic is in progress without any reason for failure. Similarly on P5600 a guest write to CP0_LLAddr also causes a GPSI exception. Handle this also by clearing the guest LLB bit from root mode. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: Support guest hardware page table walker | James Hogan | 1 | -0/+80
Add support for VZ guest CP0_PWBase, CP0_PWField, CP0_PWSize, and CP0_PWCtl registers for controlling the guest hardware page table walker (HTW) present on P5600 and P6600 cores. These guest registers need initialising on R6, context switching, and exposing via the KVM ioctl API when they are present. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: linux-doc@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: Support guest segmentation control | James Hogan | 1 | -1/+241
Add support for VZ guest CP0_SegCtl0, CP0_SegCtl1, and CP0_SegCtl2 registers, as found on P5600 and P6600 cores. These guest registers need initialising, context switching, and exposing via the KVM ioctl API when they are present. They also require the GVA -> GPA translation code for handling a GVA root exception to be updated to interpret the segmentation registers and decode the faulting instruction enough to detect EVA memory access instructions. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: linux-doc@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: Support guest CP0_[X]ContextConfig | James Hogan | 1 | -2/+60
Add support for VZ guest CP0_ContextConfig and CP0_XContextConfig (MIPS64 only) registers, as found on P5600 and P6600 cores. These guest registers need initialising, context switching, and exposing via the KVM ioctl API when they are present. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: linux-doc@vger.kernel.org
2017-03-28 | KVM: MIPS/VZ: Support guest CP0_BadInstr[P] | James Hogan | 1 | -0/+46
Add support for VZ guest CP0_BadInstr and CP0_BadInstrP registers, as found on most VZ capable cores. These guest registers need context switching, and exposing via the KVM ioctl API when they are present. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: linux-doc@vger.kernel.org
2017-03-28 | KVM: MIPS: Add VZ support to build system | James Hogan | 2 | -3/+32
Add support for the MIPS Virtualization (VZ) ASE to the MIPS KVM build system. For now KVM can only be configured for T&E or VZ and not both, but the design of the user facing APIs support the possibility of having both available, so this could change in future. Note that support for various optional guest features (some of which can't be turned off) are implemented in immediately following commits, so although it should now be possible to build VZ support, it may not work yet on your hardware. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS: Implement VZ support | James Hogan | 6 | -0/+2433
Add the main support for the MIPS Virtualization ASE (A.K.A. VZ) to MIPS KVM. The bulk of this work is in vz.c, with various new state and definitions elsewhere. Enough is implemented to be able to run on a minimal VZ core. Further patches will fill out support for guest features which are optional or can be disabled. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: linux-doc@vger.kernel.org
2017-03-28 | KVM: MIPS: Update exit handler for VZ | James Hogan | 1 | -13/+18
The general guest exit handler needs a few tweaks for VZ compared to trap & emulate, which for now are made directly depending on CONFIG_KVM_MIPS_VZ: - There is no need to re-enable the hardware page table walker (HTW), as it can be left enabled during guest mode operation with VZ. - There is no need to perform a privilege check, as any guest privilege violations should have already been detected by the hardware and triggered the appropriate guest exception. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/Emulate: Drop CACHE emulation for VZ | James Hogan | 1 | -0/+2
Ifdef out the trap & emulate CACHE instruction emulation functions for VZ. We will provide separate CACHE instruction emulation in vz.c, and we need to avoid linker errors due to the use of T&E specific MMU helpers. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/Emulate: Update CP0_Compare emulation for VZ | James Hogan | 1 | -1/+42
Update emulation of guest writes to CP0_Compare for VZ. There are two main differences compared to trap & emulate: - Writing to CP0_Compare in the VZ hardware guest context acks any pending timer, clearing CP0_Cause.TI. If we don't want an ack to take place we must carefully restore the TI bit if it was previously set. - Even with guest timer access disabled in CP0_GuestCtl0.GT, if the guest CP0_Count reaches the guest CP0_Compare the timer interrupt will assert. To prevent this we must set CP0_GTOffset to move the guest CP0_Count out of the way of the new guest CP0_Compare, either before or after depending on whether it is a forwards or backwards change. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/TLB: Add VZ TLB management | James Hogan | 1 | -0/+404
Add functions for MIPS VZ TLB management to tlb.c. kvm_vz_host_tlb_inv() will be used for invalidating root TLB entries after GPA page tables have been modified due to a KVM page fault. It arranges for a root GPA mapping to be flushed from the TLB, using the gpa_mm ASID or the current GuestID to do the probe. kvm_vz_local_flush_roottlb_all_guests() and kvm_vz_local_flush_guesttlb_all() flush all TLB entries in the corresponding TLB for guest mappings (GPA->RPA for root TLB with GuestID, and all entries for guest TLB). They will be used when starting a new GuestID cycle, when VZ hardware is enabled/disabled, and also when switching to a guest when the guest TLB contents may be stale or belong to a different VM. kvm_vz_guest_tlb_lookup() converts a guest virtual address to a guest physical address using the guest TLB. This will be used to decode guest virtual addresses which are sometimes provided by VZ hardware in CP0_BadVAddr for certain exceptions when the guest physical address is unavailable. kvm_vz_save_guesttlb() and kvm_vz_load_guesttlb() will be used to preserve wired guest VTLB entries while a guest isn't running. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS/Entry: Update entry code to support VZ | James Hogan | 2 | -10/+126
Update MIPS KVM entry code to support VZ: - We need to set GuestCtl0.GM while in guest mode. - For cores supporting GuestID, we need to set the root GuestID to match the main GuestID while in guest mode so that the root TLB refill handler writes the correct GuestID into the TLB. - For cores without GuestID where the root ASID dealiases RVA/GPA mappings, we need to load that ASID from the gpa_mm rather than the per-VCPU guest_kernel_mm or guest_user_mm, since the root TLB maps guest physical addresses. We also need to restore the normal process ASID on exit. - The normal linux process pgd needs restoring on exit, as we can't leave the GPA mappings active for kernel code. - GuestCtl0 needs saving on exit for the GExcCode field, as it may be clobbered if a preemption occurs. We also need to move the TLB refill handler to the XTLB vector at offset 0x80 on 64-bit VZ kernels, as hardware will use Root.Status.KX to determine whether a TLB refill or XTLB Refill exception is to be taken on a root TLB miss from guest mode, and KX needs to be set for kernel code to be able to access the 64-bit segments. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS: Abstract guest CP0 register access for VZ | James Hogan | 3 | -7/+9
Abstract the MIPS KVM guest CP0 register access macros into inline functions which are generated by macros. This allows them to be generated differently for VZ, where they will usually need to access the hardware guest CP0 context rather than the saved values in RAM. Accessors for each individual register are generated using these macros: - __BUILD_KVM_*_SW() for registers which are not present in the VZ hardware guest context, so kvm_{read,write}_c0_guest_##name() will access the saved value in RAM regardless of whether VZ is enabled. - __BUILD_KVM_*_HW() for registers which are present in the VZ hardware guest context, so kvm_{read,write}_c0_guest_##name() will access the hardware register when VZ is enabled. These build the underlying accessors using further macros: - __BUILD_KVM_*_SAVED() builds e.g. kvm_{read,write}_sw_gc0_##name() functions for accessing the saved versions of the registers in RAM. This is used for implementing the common kvm_{read,write}_c0_guest_##name() accessors with T&E where registers are always stored in RAM, but are also available with VZ HW registers to allow them to be accessed while saved. - __BUILD_KVM_*_VZ() builds e.g. kvm_{read,write}_vz_gc0_##name() functions for accessing the VZ hardware guest context registers directly. This is used for implementing the common kvm_{read,write}_c0_guest_##name() accessors with VZ. - __BUILD_KVM_*_WRAP() builds wrappers with different names, which allows the common kvm_{read,write}_c0_guest_##name() functions to be implemented using the VZ accessors while still having the SAVED accessors available too. - __BUILD_KVM_SAVE_VZ() builds functions for saving and restoring VZ hardware guest context register state to RAM, improving conciseness of VZ context saving and restoring. Similar macros exist for generating modifiers (set, clear, change), either with a normal unlocked read/modify/write, or using atomic LL/SC sequences. These changes change the types of 32-bit registers to u32 instead of unsigned long, which requires some changes to printk() functions in MIPS KVM. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
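A heavily simplified illustration of the two accessor flavours described above (not the kernel's exact macros, which also take a select field and provide set/clear/change variants):

    #define __BUILD_KVM_RW_SW(name, type, _reg)                              \
    static inline type kvm_read_c0_guest_##name(struct mips_coproc *cop0)    \
    {                                                                         \
            return cop0->reg[_reg][0];          /* saved copy in RAM */      \
    }                                                                         \
    static inline void kvm_write_c0_guest_##name(struct mips_coproc *cop0,   \
                                                 type val)                    \
    {                                                                         \
            cop0->reg[_reg][0] = val;                                         \
    }

    #define __BUILD_KVM_RW_HW(name, type, _reg)                              \
    static inline type kvm_read_c0_guest_##name(struct mips_coproc *cop0)    \
    {                                                                         \
            return read_gc0_##name();           /* VZ hardware guest CP0 */  \
    }                                                                         \
    static inline void kvm_write_c0_guest_##name(struct mips_coproc *cop0,   \
                                                 type val)                    \
    {                                                                         \
            write_gc0_##name(val);                                            \
    }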
2017-03-28 | KVM: MIPS: Add guest exit exception callback | James Hogan | 3 | -0/+31
Add a callback for MIPS KVM implementations to handle the VZ guest exit exception. Currently the trap & emulate implementation contains a stub which reports an internal error, but the callback will be used properly by the VZ implementation. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS: Add hardware_{enable,disable} callback | James Hogan | 2 | -1/+17
Add an implementation callback for the kvm_arch_hardware_enable() and kvm_arch_hardware_disable() architecture functions, with simple stubs for trap & emulate. This is in preparation for VZ which will make use of them. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
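A hedged sketch of the trap & emulate side (names follow the existing kvm_mips_callbacks convention; the VZ implementation does real per-CPU setup here):

    static int kvm_trap_emul_hardware_enable(void)
    {
            return 0;
    }

    static void kvm_trap_emul_hardware_disable(void)
    {
    }

    /* wired up alongside the other implementation callbacks: */
    .hardware_enable = kvm_trap_emul_hardware_enable,
    .hardware_disable = kvm_trap_emul_hardware_disable,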
2017-03-28 | KVM: MIPS: Add callback to check extension | James Hogan | 2 | -2/+18
Add an implementation callback for checking presence of KVM extensions. This allows implementation specific extensions to be provided without ifdefs in mips.c. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
2017-03-28 | KVM: MIPS: Init timer frequency from callback | James Hogan | 3 | -10/+9
Currently the software emulated timer is initialised to a frequency of 100MHz by kvm_mips_init_count(), but this isn't suitable for VZ where the frequency of the guest timer matches that of the host. Add a count_hz argument so the caller can specify the default frequency, and move the call from kvm_arch_vcpu_create() to the implementation specific vcpu_setup() callback, so that VZ can specify a different frequency. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
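A sketch of the changed call site, assuming the trap & emulate vcpu_setup() keeps the previous 100 MHz default while VZ passes the host timer frequency instead:

    static int kvm_trap_emul_vcpu_setup(struct kvm_vcpu *vcpu)
    {
            /* ... other guest CP0 initialisation ... */
            kvm_mips_init_count(vcpu, 100 * 1000 * 1000);   /* 100 MHz, as before */
            return 0;
    }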
2017-03-28 | KVM: MIPS: Add VZ & TE capabilities | James Hogan | 1 | -0/+9
Add new KVM_CAP_MIPS_VZ and KVM_CAP_MIPS_TE capabilities, and in order to allow MIPS KVM to support VZ without confusing old users (which expect the trap & emulate implementation), define and start checking KVM_CREATE_VM type codes. The codes available are: - KVM_VM_MIPS_TE = 0 This is the current value expected from the user, and will create a VM using trap & emulate in user mode, confined to the user mode address space. This may in future become unavailable if the kernel is only configured to support VZ, in which case the EINVAL error will be returned and KVM_CAP_MIPS_TE won't be available even though KVM_CAP_MIPS_VZ is. - KVM_VM_MIPS_VZ = 1 This can be provided when the KVM_CAP_MIPS_VZ capability is available to create a VM using VZ, with a fully virtualized guest virtual address space. If VZ support is unavailable in the kernel, the EINVAL error will be returned (although old kernels without the KVM_CAP_MIPS_VZ capability may well succeed and create a trap & emulate VM). This is designed to allow the desired implementation (T&E vs VZ) to be potentially chosen at runtime rather than being fixed in the kernel configuration. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: linux-doc@vger.kernel.org
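A hypothetical userspace sketch of the intended runtime selection: prefer VZ when the capability is advertised, otherwise fall back to the trap & emulate VM type.

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int create_vm_prefer_vz(int kvm_fd)
    {
            int type = KVM_VM_MIPS_TE;

            /* KVM_CHECK_EXTENSION on /dev/kvm reports whether VZ is available. */
            if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MIPS_VZ) > 0)
                    type = KVM_VM_MIPS_VZ;

            return ioctl(kvm_fd, KVM_CREATE_VM, type);
    }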