author     Hugh Dickins <hughd@google.com>                  2021-07-07 23:08:53 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>   2021-07-12 01:05:15 +0300
commit     d9770fcc1c0c5b3e77dfac83b47defa3981fa7cd (patch)
tree       b7524387a93ff6fe6a7da20e6da6a9aeea5cde5c
parent     64b586d1922384710de2ce3c8c67e7ea0b6ffb57 (diff)
download   linux-d9770fcc1c0c5b3e77dfac83b47defa3981fa7cd.tar.xz
mm/rmap: fix old bug: munlocking THP missed other mlocks
The kernel recovers in due course from missing Mlocked pages: but there
was no point in calling page_mlock() (formerly known as try_to_munlock())
on a THP, because nothing got done even when it was found to be mapped in
another VM_LOCKED vma.

It's true that we need to be careful: Mlocked accounting of pte-mapped
THPs is too difficult (so consistently avoided); but Mlocked accounting
of only-pmd-mapped THPs is supposed to work, even when multiple mappings
are mlocked and munlocked or munmapped.  Refine the tests.

There is already a VM_BUG_ON_PAGE(PageDoubleMap) in page_mlock(), so
page_mlock_one() does not even have to worry about that complication.

(I said the kernel recovers: but would page reclaim be likely to split
THP before rediscovering that it's VM_LOCKED?  I've not followed that up.)

Fixes: 9a73f61bdb8a ("thp, mlock: do not mlock PTE-mapped file huge pages")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: https://lore.kernel.org/lkml/cfa154c-d595-406-eb7d-eb9df730f944@google.com/
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
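The heart of the patch is the refined test for when a page may be counted
as Mlocked: a small page always qualifies, while a compound (THP) page
qualifies only via its head page, and only while it is not also PTE-mapped
(PageDoubleMap).  A minimal userspace sketch of that predicate follows;
may_count_mlocked() and its flag parameters are illustrative stand-ins for
the kernel's page-flag tests, not kernel APIs.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative userspace model only -- not kernel code.  It mirrors the
 * condition the patch adds in try_to_unmap_one(): count a page as
 * Mlocked if it is a small page, or if it is the head of a THP that is
 * not PTE-mapped anywhere (not "DoubleMap").
 */
static bool may_count_mlocked(bool trans_compound, bool page_head, bool double_map)
{
	return !trans_compound || (page_head && !double_map);
}

int main(void)
{
	printf("small page:          %d\n", may_count_mlocked(false, false, false));
	printf("pmd-only THP head:   %d\n", may_count_mlocked(true, true, false));
	printf("pte-mapped THP head: %d\n", may_count_mlocked(true, true, true));
	printf("THP tail page:       %d\n", may_count_mlocked(true, false, false));
	return 0;
}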
-rw-r--r--  mm/rmap.c | 13
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 746013e282c3..0e83c3be8568 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1442,8 +1442,9 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
*/
if (!(flags & TTU_IGNORE_MLOCK)) {
if (vma->vm_flags & VM_LOCKED) {
- /* PTE-mapped THP are never mlocked */
- if (!PageTransCompound(page)) {
+ /* PTE-mapped THP are never marked as mlocked */
+ if (!PageTransCompound(page) ||
+ (PageHead(page) && !PageDoubleMap(page))) {
/*
* Holding pte lock, we do *not* need
* mmap_lock here
@@ -1984,9 +1985,11 @@ static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
* munlock_vma_pages_range().
*/
if (vma->vm_flags & VM_LOCKED) {
- /* PTE-mapped THP are never mlocked */
- if (!PageTransCompound(page))
- mlock_vma_page(page);
+ /*
+ * PTE-mapped THP are never marked as mlocked, but
+ * this function is never called when PageDoubleMap().
+ */
+ mlock_vma_page(page);
page_vma_mapped_walk_done(&pvmw);
}