author     Muchun Song <songmuchun@bytedance.com>    2022-04-29 09:16:10 +0300
committer  akpm <akpm@linux-foundation.org>          2022-04-29 09:16:10 +0300
commit     6a8e0596f00469c15ec556b9f3624acd2e9a04f9 (patch)
tree       6d25a184335a5b2e2dd2e84fdc4b7e1cc989573f /include/linux/rmap.h
parent     e583b5c472bd23d450e06f148dc1f37be74f7666 (diff)
download   linux-6a8e0596f00469c15ec556b9f3624acd2e9a04f9.tar.xz
mm: rmap: introduce pfn_mkclean_range() to clean PTEs
page_mkclean_one() is supposed to be used with a pfn that has an associated struct page, but not all pfns (e.g. DAX) have one.  Introduce a new function, pfn_mkclean_range(), to clean the PTEs (including PMDs) mapped over a range of pfns that have no struct page associated with them.  This helper will be used by the DAX code in the next patch to make pfns clean.

Link: https://lkml.kernel.org/r/20220403053957.10770-4-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Ross Zwisler <zwisler@kernel.org>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Cc: Xiyu Yang <xiyuyang19@fudan.edu.cn>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'include/linux/rmap.h')
-rw-r--r--  include/linux/rmap.h | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 17230c458341..8573aae50d96 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -261,6 +261,9 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
  */
 int folio_mkclean(struct folio *);
 
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		       struct vm_area_struct *vma);
+
 void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
 
 /*
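
A minimal sketch of how a caller without struct pages (such as the fsdax code targeted by the next patch in the series) might use the new helper. Only the pfn_mkclean_range() declaration above comes from this patch; the wrapper name example_dax_mkclean(), its arguments, and the surrounding locking are illustrative assumptions, not part of the patch.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/rmap.h>
#include <linux/sched.h>

/*
 * Illustrative only: clean every mapping of a pfn range that backs the
 * file pages starting at 'index' in 'mapping'.  The pfns have no
 * struct page behind them (e.g. DAX).
 */
static void example_dax_mkclean(struct address_space *mapping, pgoff_t index,
				unsigned long pfn, unsigned long nr_pages)
{
	struct vm_area_struct *vma;

	/* Walk every VMA that maps this file offset range. */
	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, index,
				  index + nr_pages - 1) {
		/* Only shared mappings write back through these pfns. */
		if (!(vma->vm_flags & VM_SHARED))
			continue;

		/*
		 * Write-protect and clear dirty on the PTEs/PMDs covering
		 * the pfn range as mapped by this VMA.
		 */
		pfn_mkclean_range(pfn, nr_pages, index, vma);
		cond_resched();
	}
	i_mmap_unlock_read(mapping);
}

The per-VMA signature mirrors folio_mkclean(), which likewise walks mapping->i_mmap and cleans one VMA at a time; the caller has to supply pgoff explicitly because there is no struct page from which to derive the file offset.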