path: root/arch/s390/include/asm/tlbflush.h
author      Martin Schwidefsky <schwidefsky@de.ibm.com>   2017-08-17 09:15:16 +0300
committer   Martin Schwidefsky <schwidefsky@de.ibm.com>   2017-09-06 10:24:42 +0300
commit      60f07c8ec5fae06c23e9fd7bab67dabce92b3414 (patch)
tree        3c189bb7d158caba68c36b467603f94b243eea8f /arch/s390/include/asm/tlbflush.h
parent      b3e5dc45fd1ec2aa1de6b80008f9295eb17e0659 (diff)
download    linux-60f07c8ec5fae06c23e9fd7bab67dabce92b3414.tar.xz
s390/mm: fix race on mm->context.flush_mm
The order in __tlb_flush_mm_lazy is to flush the TLB first and then clear the mm->context.flush_mm bit. This can lead to missed flushes, as the bit can be set at any time; the order needs to be the other way around.

But this leads to a different race: __tlb_flush_mm_lazy may be called on two CPUs concurrently. If mm->context.flush_mm is cleared first, another CPU can bypass __tlb_flush_mm_lazy although the first CPU has not done the flush yet. In a virtualized environment the time until the flush is finally completed can be arbitrarily long.

Add a spinlock to serialize __tlb_flush_mm_lazy and use the function in finish_arch_post_lock_switch as well.

Cc: <stable@vger.kernel.org>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
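For reference, a sketch of how __tlb_flush_mm_lazy reads with this patch applied, with comments spelling out the two races that the ordering and the lock close (the spinlock field in mm->context is assumed to be declared with the rest of mm_context_t in arch/s390/include/asm/mmu.h; only the names used in the diff below are taken from the source):

static inline void __tlb_flush_mm_lazy(struct mm_struct *mm)
{
	/* Serialize concurrent callers: without the lock a second CPU
	 * could see flush_mm already cleared and return while the first
	 * CPU is still in the middle of its (possibly slow) TLB flush. */
	spin_lock(&mm->context.lock);
	if (mm->context.flush_mm) {
		/* Clear the bit before flushing: if it were cleared after
		 * the flush, a flush request set in the meantime would be
		 * wiped out and that flush would be missed. */
		mm->context.flush_mm = 0;
		__tlb_flush_mm(mm);
	}
	spin_unlock(&mm->context.lock);
}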
Diffstat (limited to 'arch/s390/include/asm/tlbflush.h')
-rw-r--r-- arch/s390/include/asm/tlbflush.h | 4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/s390/include/asm/tlbflush.h b/arch/s390/include/asm/tlbflush.h
index 16fe2a3d9a03..b08d5bc2666e 100644
--- a/arch/s390/include/asm/tlbflush.h
+++ b/arch/s390/include/asm/tlbflush.h
@@ -101,10 +101,12 @@ static inline void __tlb_flush_kernel(void)
 
 static inline void __tlb_flush_mm_lazy(struct mm_struct * mm)
 {
+	spin_lock(&mm->context.lock);
 	if (mm->context.flush_mm) {
-		__tlb_flush_mm(mm);
 		mm->context.flush_mm = 0;
+		__tlb_flush_mm(mm);
 	}
+	spin_unlock(&mm->context.lock);
 }
 
 /*
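The companion change mentioned in the commit message is not visible on this page because the diffstat is limited to tlbflush.h: finish_arch_post_lock_switch in arch/s390/include/asm/mmu_context.h drops its open-coded test of mm->context.flush_mm in favour of the now-serialized helper. A rough sketch of that hunk, with the exact surrounding lines assumed rather than taken from this page, would look like:

	/* in finish_arch_post_lock_switch(), after this CPU has been
	 * added to mm_cpumask(mm) */
-	if (mm->context.flush_mm)
-		__tlb_flush_mm(mm);
+	__tlb_flush_mm_lazy(mm);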