author		Brijesh Singh <brijesh.singh@amd.com>	2017-08-07 22:11:30 +0300
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2017-09-14 00:17:30 +0300
commit		2bce0fe7d0cd0d45d78815487706b52e290abb33 (patch)
tree		79963a9c5a80fe2c922af95e4fba64e0be9b5872
parent		9d6412aa06ce75e44fbaf1bfa15454150fd6a803 (diff)
download	linux-2bce0fe7d0cd0d45d78815487706b52e290abb33.tar.xz
KVM: SVM: Limit PFERR_NESTED_GUEST_PAGE error_code check to L1 guest
commit 64531a3b70b17c8d3e77f2e49e5e1bb70f571266 upstream.

Commit 147277540bbc ("kvm: svm: Add support for additional SVM NPF error codes", 2016-11-23) added a new error code to aid nested page fault handling. The commit unprotects (kvm_mmu_unprotect_page) the page when we get an NPF due to a guest page table walk where the page was marked RO.

However, an L0->L2 shadow nested page table can also be marked read-only when a page is read only in L1's nested page table. If such a page is accessed by L2 while walking page tables, it causes a nested page fault (page table walks are write accesses). But after kvm_mmu_unprotect_page we may get another page fault, and then another, in an endless stream.

To cover this use case, qualify the new error_code check with vcpu->arch.mmu.direct_map so that the check runs only for the L1 guest, and not for the L2 guest. This avoids hitting the above scenario.

Fixes: 147277540bbc ("kvm: svm: Add support for additional SVM NPF error codes")
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Thomas Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
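For reference, PFERR_NESTED_GUEST_PAGE is a composite error-code mask introduced by 147277540bbc. Below is a sketch of its definition reconstructed from that commit; the actual source lives in arch/x86/include/asm/kvm_host.h and may differ in minor details.

/*
 * Page-fault error-code bits per arch/x86/include/asm/kvm_host.h.
 * Reconstructed sketch, not verbatim kernel source.
 */
#define PFERR_PRESENT_MASK	(1U << 0)	/* fault on a present page */
#define PFERR_WRITE_MASK	(1U << 1)	/* fault was a write access */
#define PFERR_USER_MASK		(1U << 2)	/* fault while in user mode */
#define PFERR_GUEST_PAGE_MASK	(1ULL << 33)	/* fault during guest page-table walk */

/*
 * A fault with exactly this error code is a present, write, user-mode
 * fault taken while the CPU walked the guest's page tables under nested
 * paging -- the case the unprotect-and-retry logic targets.
 */
#define PFERR_NESTED_GUEST_PAGE	(PFERR_GUEST_PAGE_MASK |	\
				 PFERR_USER_MASK |		\
				 PFERR_WRITE_MASK |		\
				 PFERR_PRESENT_MASK)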
-rw-r--r--	arch/x86/kvm/mmu.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index cb8225969255..97fc5f18b0a8 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4759,7 +4759,8 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
* Note: AMD only (since it supports the PFERR_GUEST_PAGE_MASK used
* in PFERR_NEXT_GUEST_PAGE)
*/
- if (error_code == PFERR_NESTED_GUEST_PAGE) {
+ if (vcpu->arch.mmu.direct_map &&
+ error_code == PFERR_NESTED_GUEST_PAGE) {
kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2));
return 1;
}
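For context, the check after this patch reads roughly as below; the comments on direct_map semantics are editorial annotation, not kernel source.

/* Sketch of the patched excerpt of kvm_mmu_page_fault(); annotations added. */
	/*
	 * vcpu->arch.mmu.direct_map is true when the active MMU maps guest
	 * physical addresses directly through NPT, i.e. when the fault comes
	 * from the L1 guest. When KVM shadows L1's nested page tables for an
	 * L2 guest, direct_map is false, so L2 faults skip this path and the
	 * unprotect/refault loop described above cannot occur.
	 */
	if (vcpu->arch.mmu.direct_map &&
	    error_code == PFERR_NESTED_GUEST_PAGE) {
		kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2));
		return 1;
	}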