author	David Woodhouse <dwmw@amazon.co.uk>	2023-01-11 21:06:50 +0300
committer	Paolo Bonzini <pbonzini@redhat.com>	2023-01-11 21:32:21 +0300
commit	42a90008f890afc41837dfeec1f0b1e7bcecf94a (patch)
tree	52d9bef1911c441968650403c0a4519513e9781e /virt
parent	bbe17c625d6843e9cdf14d81fbece1b0f0c3fb2f (diff)
download	linux-42a90008f890afc41837dfeec1f0b1e7bcecf94a.tar.xz
KVM: Ensure lockdep knows about kvm->lock vs. vcpu->mutex ordering rule
Documentation/virt/kvm/locking.rst tells us that kvm->lock is taken outside vcpu->mutex. But that doesn't actually happen very often; it's only in some esoteric cases like migration with AMD SEV. This means that lockdep usually doesn't notice, and doesn't do its job of keeping us honest.

Ensure that lockdep *always* knows about the ordering of these two locks, by briefly taking vcpu->mutex in kvm_vm_ioctl_create_vcpu() while kvm->lock is held.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20230111180651.14394-3-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
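Outside the patch itself, a minimal sketch of the same pattern may make the trick clearer. The module and lock names below ("outer", "inner", lock_order_demo_*) are hypothetical stand-ins, not part of KVM: "outer" plays the role of kvm->lock and "inner" plays the role of vcpu->mutex. Briefly taking the inner lock while the outer one is held teaches lockdep the nesting order even if no real nested path is ever exercised.

/*
 * Hypothetical demo module (not from the patch): register a lock
 * ordering with lockdep by briefly nesting the locks once.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(outer);	/* stands in for kvm->lock */
static DEFINE_MUTEX(inner);	/* stands in for vcpu->mutex */

static int __init lock_order_demo_init(void)
{
	mutex_lock(&outer);
#ifdef CONFIG_LOCKDEP
	/* Tell lockdep: inner is taken *inside* outer. */
	mutex_lock(&inner);
	mutex_unlock(&inner);
#endif
	mutex_unlock(&outer);
	return 0;
}

static void __exit lock_order_demo_exit(void)
{
}

module_init(lock_order_demo_init);
module_exit(lock_order_demo_exit);
MODULE_LICENSE("GPL");

Once lockdep has recorded that order, any later attempt to take the outer lock while holding the inner one is flagged as a possible circular locking dependency right away, rather than only on the rare paths (such as migration with AMD SEV) that exercise the real nesting.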
Diffstat (limited to 'virt')
-rw-r--r--	virt/kvm/kvm_main.c	7
1 file changed, 7 insertions, 0 deletions
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 13e88297f999..9c60384b5ae0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3954,6 +3954,13 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 	}
 
 	mutex_lock(&kvm->lock);
+
+#ifdef CONFIG_LOCKDEP
+	/* Ensure that lockdep knows vcpu->mutex is taken *inside* kvm->lock */
+	mutex_lock(&vcpu->mutex);
+	mutex_unlock(&vcpu->mutex);
+#endif
+
 	if (kvm_get_vcpu_by_id(kvm, id)) {
 		r = -EEXIST;
 		goto unlock_vcpu_destroy;