author: Nicolai Stange <nstange@suse.de> | 2018-07-18 20:07:38 +0300
committer: Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2018-08-15 19:14:51 +0300
commit: 587d499c8bd203f6158779b5782a07fe7a5bcea8
tree: 9a15bfe0dcb322fce7937ccdee3d2645be75cf7d /Documentation
parent: 93aed2469df1fdef8ed97d6cbb6dd042181fe46e
x86/KVM/VMX: Initialize the vmx_l1d_flush_pages' content
commit 288d152c23dcf3c09da46c5c481903ca10ebfef7 upstream
The slow path in vmx_l1d_flush() reads from vmx_l1d_flush_pages in order
to evict the L1d cache.
However, these pages are never cleared and, in theory, their data could be
leaked.
More importantly, KSM could merge a nested hypervisor's vmx_l1d_flush_pages
to fewer than 1 << L1D_CACHE_ORDER host physical pages and this would break
the L1d flushing algorithm: L1D on x86_64 is tagged by physical addresses.
Fix this by initializing the individual vmx_l1d_flush_pages with a
different pattern each.
Rename the "empty_zp" asm constraint identifier in vmx_l1d_flush() to
"flush_pages" to reflect this change.
Fixes: a47dd5f06714 ("x86/KVM/VMX: Add L1D flush algorithm")
Signed-off-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'Documentation'): 0 files changed, 0 insertions, 0 deletions