author    Jason Gunthorpe <jgg@mellanox.com>  2019-06-07 18:10:33 +0300
committer Jason Gunthorpe <jgg@mellanox.com>  2019-06-27 19:05:02 +0300
commit    5a136b4ae327e7f6be9c984a010df8d7ea5a4f83 (patch)
tree      f4de2700081df24f82818149ecc4c1cd14709785 /include/linux/hmm.h
parent    14331726a3c47bb1649dab155a84610f509d414e (diff)
mm/hmm: Fix error flows in hmm_invalidate_range_start
If the trylock on the hmm->mirrors_sem fails the function will return
without decrementing the notifiers that were previously incremented. Since
the caller will not call invalidate_range_end() on EAGAIN this will result
in notifiers becoming permanently incremented and deadlock.

If the sync_cpu_device_pagetables() required blocking the function will
not return EAGAIN even though the device continues to touch the pages.
This is a violation of the mmu notifier contract.

Switch, and rename, the ranges_lock to a spin lock so we can reliably
obtain it without blocking during error unwind.

The error unwind is necessary since the notifiers count must be held
incremented across the call to sync_cpu_device_pagetables() as we cannot
allow the range to become marked valid by a parallel invalidate_start/end()
pair while doing sync_cpu_device_pagetables().

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Philip Yang <Philip.Yang@amd.com>
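The header diff below only shows the lock type change; the behavioral fix
lands in mm/hmm.c. The following is a simplified sketch of the fixed
hmm_invalidate_range_start() flow the message describes: the control flow
and comments follow the commit message, but the exact body, the irqsave
variant, and the notifiers field name are assumptions, not the verbatim
upstream code.

static int hmm_invalidate_range_start(struct mmu_notifier *mn,
				      const struct mmu_notifier_range *nrange)
{
	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
	unsigned long flags;
	int ret = 0;

	/*
	 * Count this notifier and invalidate overlapping ranges. The
	 * spinlock (formerly a mutex) can always be taken here, even
	 * when the notifier is not allowed to block.
	 */
	spin_lock_irqsave(&hmm->ranges_lock, flags);
	hmm->notifiers++;
	/* ... mark each overlapping struct hmm_range as not valid ... */
	spin_unlock_irqrestore(&hmm->ranges_lock, flags);

	if (mmu_notifier_range_blockable(nrange))
		down_read(&hmm->mirrors_sem);
	else if (!down_read_trylock(&hmm->mirrors_sem)) {
		ret = -EAGAIN;
		goto out;
	}

	/*
	 * The notifiers count stays incremented across this call so a
	 * parallel invalidate_start/end() pair cannot mark a range
	 * valid again while the device page tables are being synced.
	 */
	/* ... call sync_cpu_device_pagetables() on each mirror; a
	 * failure in the non-blockable case also sets ret = -EAGAIN ... */
	up_read(&hmm->mirrors_sem);

out:
	/*
	 * Error unwind: the caller never calls invalidate_range_end()
	 * after -EAGAIN, so the count incremented above must be dropped
	 * here, or it stays elevated forever and later waiters deadlock.
	 */
	if (ret) {
		spin_lock_irqsave(&hmm->ranges_lock, flags);
		hmm->notifiers--;
		spin_unlock_irqrestore(&hmm->ranges_lock, flags);
	}
	return ret;
}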
Diffstat (limited to 'include/linux/hmm.h')
-rw-r--r--  include/linux/hmm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index bf013e965257..0fa8ea34ccef 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -86,7 +86,7 @@
 struct hmm {
 	struct mm_struct	*mm;
 	struct kref		kref;
-	struct mutex		lock;
+	spinlock_t		ranges_lock;
 	struct list_head	ranges;
 	struct list_head	mirrors;
 	struct mmu_notifier	mmu_notifier;
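
Callers that previously serialized access to hmm->ranges with the mutex
now use the spinlock. A minimal sketch of what such a critical section
looks like after the rename; hmm_range_unregister() exists in mm/hmm.c of
this era, but this body is an illustrative assumption, not quoted from
the commit.

void hmm_range_unregister(struct hmm_range *range)
{
	struct hmm *hmm = range->hmm;
	unsigned long flags;

	/* was: mutex_lock(&hmm->lock); */
	spin_lock_irqsave(&hmm->ranges_lock, flags);
	list_del_init(&range->list);
	spin_unlock_irqrestore(&hmm->ranges_lock, flags);
	/* was: mutex_unlock(&hmm->lock); */
}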