author		Matthew Wilcox (Oracle) <willy@infradead.org>	2023-01-11 17:29:13 +0300
committer	Andrew Morton <akpm@linux-foundation.org>	2023-02-03 09:33:00 +0300
commit		f158ed6195ef949060811fd85086928470651944 (patch)
tree		46ce5a8ac80dc4c81a5dfad2ae06a600c53b5be0 /Documentation/mm
parent		f8baa6be0368b5d21be34e8bf071b563b0f77584 (diff)
mm: convert deferred_split_huge_page() to deferred_split_folio()
Now that both callers use a folio, pass the folio in and save a call to
compound_head().

Link: https://lkml.kernel.org/r/20230111142915.1001531-28-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
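The mm-side code change is outside this Documentation-only diffstat. As a
rough C sketch of the conversion the subject line describes (only the two
function names come from the commit; the caller body is hypothetical):

	/* Sketch, not the actual mm/huge_memory.c hunk. */
	struct page;
	struct folio;

	/* Before: the entry point took a page and had to look up its folio. */
	void deferred_split_huge_page(struct page *page);

	/* After: the caller already holds a folio, so pass it straight through. */
	void deferred_split_folio(struct folio *folio);

	/*
	 * Hypothetical call site: previously a caller holding a folio handed
	 * over a page and the callee paid for a compound_head() to recover
	 * the folio; passing the folio directly removes that round trip.
	 */
	static void on_partial_unmap(struct folio *folio)
	{
		deferred_split_folio(folio);
	}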
Diffstat (limited to 'Documentation/mm')
-rw-r--r--	Documentation/mm/transhuge.rst	6
1 file changed, 3 insertions, 3 deletions
diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst
index 03bbd0a19041..a9608fe51649 100644
--- a/Documentation/mm/transhuge.rst
+++ b/Documentation/mm/transhuge.rst
@@ -153,8 +153,8 @@ clear where references should go after split: it will stay on the head page.
Note that split_huge_pmd() doesn't have any limitations on refcounting:
pmd can be split at any point and never fails.
-Partial unmap and deferred_split_huge_page()
-============================================
+Partial unmap and deferred_split_folio()
+========================================
Unmapping part of THP (with munmap() or other way) is not going to free
memory immediately. Instead, we detect that a subpage of THP is not in use
@@ -166,6 +166,6 @@ the place where we can detect partial unmap. It also might be
counterproductive since in many cases partial unmap happens during exit(2) if
a THP crosses a VMA boundary.
-The function deferred_split_huge_page() is used to queue a page for splitting.
+The function deferred_split_folio() is used to queue a folio for splitting.
The splitting itself will happen when we get memory pressure via shrinker
interface.
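
As a conceptual sketch of the queue-then-split pattern the updated text
describes (all names below are illustrative and simplified; the real queue
lives in mm/huge_memory.c, is kept per node and per memcg, and is drained
from a shrinker's scan callback):

	#include <linux/list.h>
	#include <linux/spinlock.h>

	/* Illustrative only: a minimal deferred-split queue. */
	struct example_split_queue {
		spinlock_t lock;
		struct list_head folios;	/* partially unmapped folios */
		unsigned long len;
	};

	/* deferred_split_folio() role: remember the folio, split nothing yet. */
	static void example_queue_folio(struct example_split_queue *q,
					struct list_head *entry)
	{
		spin_lock(&q->lock);
		if (list_empty(entry)) {	/* not already queued */
			list_add_tail(entry, &q->folios);
			q->len++;
		}
		spin_unlock(&q->lock);
	}

	/* Shrinker scan role: split queued folios under memory pressure. */
	static unsigned long example_scan(struct example_split_queue *q)
	{
		unsigned long split = 0;
		/* walk q->folios, attempt to split each folio, count successes */
		return split;
	}

The design point the documentation makes is visible in the split of
responsibilities: the unmap path only enqueues (cheap, never fails), while
the actual splitting is deferred until the shrinker runs under memory
pressure.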