author:    Hugh Dickins <hughd@google.com> 2022-02-15 05:33:17 +0300
committer: Matthew Wilcox (Oracle) <willy@infradead.org> 2022-02-17 19:57:06 +0300
commit:    c3096e6782b733158bf34f6bbb4567808d4e0740 (patch)
tree:      a28708da7662fc586a0ad8df19d29ccc162ecb12 /mm/swap.c
parent:    34b6792380ce4f4b41018351cd67c9c26f4a7a0d (diff)
download:  linux-c3096e6782b733158bf34f6bbb4567808d4e0740.tar.xz
mm/migrate: __unmap_and_move() push good newpage to LRU
Compaction, NUMA page movement, THP collapse/split, and memory failure do isolate unevictable pages from their "LRU", losing the record of mlock_count in doing so (isolators are likely to use page->lru for their own private lists, so mlock_count has to be presumed lost).

That's unfortunate, and we should put in some work to correct that: one can imagine a function to build up the mlock_count again - but it would require i_mmap_rwsem for read, so be careful where it's called. Or page_referenced_one() and try_to_unmap_one() might do that extra work.

But one place that can very easily be improved is page migration's __unmap_and_move(): a small adjustment to where the successful new page is put back on LRU, and its mlock_count (if any) is built back up by remove_migration_ptes().

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Diffstat (limited to 'mm/swap.c')
0 files changed, 0 insertions, 0 deletions