From e3fe8e555dd05cf74168d18555c44320ed50a0e1 Mon Sep 17 00:00:00 2001
From: "Yang, Philip"
Date: Thu, 15 Aug 2019 20:52:56 +0000
Subject: mm/hmm: fix hmm_range_fault()'s handling of swapped out pages
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

hmm_range_fault() may return NULL pages because some of the pfns are equal
to HMM_PFN_NONE. This happens randomly under memory pressure. The reason is
that on the swapped-out page pte path, hmm_vma_handle_pte() does not update
the fault variable from cpu_flags, so it fails to call hmm_vma_do_fault()
to swap the page in.

The fix is to call hmm_pte_need_fault() to update the fault variable.

Fixes: 74eee180b935 ("mm/hmm/mirror: device page fault handler")
Link: https://lore.kernel.org/r/20190815205227.7949-1-Philip.Yang@amd.com
Signed-off-by: Philip Yang
Reviewed-by: "Jérôme Glisse"
Signed-off-by: Jason Gunthorpe
---
 mm/hmm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/hmm.c b/mm/hmm.c
index 49eace16f9f8..fc05c8fe78b4 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -469,6 +469,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		swp_entry_t entry = pte_to_swp_entry(pte);
 
 		if (!non_swap_entry(entry)) {
+			cpu_flags = pte_to_hmm_pfn_flags(range, pte);
+			hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
+					   &fault, &write_fault);
 			if (fault || write_fault)
 				goto fault;
 			return 0;
--
cgit v1.2.3
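
Editor's note: the sketch below is a minimal, user-space model of why the old
code silently returned HMM_PFN_NONE for a swapped-out page, and why updating
the fault variable fixes it. It is NOT kernel code: the flag constants and the
need_fault()/handle_swapped_pte() helpers are illustrative stand-ins for the
kernel's range->flags[], pte_to_hmm_pfn_flags(), hmm_pte_need_fault() and
hmm_vma_handle_pte(); write-fault and device-memory handling are omitted.

/*
 * Simplified model of the bug fixed above -- not the kernel's code.
 * Build with: cc -Wall model.c && ./a.out
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PFN_VALID (1ULL << 0)   /* stand-in for range->flags[HMM_PFN_VALID] */
#define PFN_NONE  0ULL          /* stand-in for HMM_PFN_NONE */

/* Mirrors the core idea of hmm_pte_need_fault(): a fault is needed when the
 * caller asked for a valid pfn but the CPU pte does not provide one. */
static void need_fault(uint64_t orig_pfn, uint64_t cpu_flags, bool *fault)
{
	if (!(orig_pfn & PFN_VALID))
		return;                 /* caller did not ask for this page */
	if (!(cpu_flags & PFN_VALID))
		*fault = true;          /* pte not present: must fault it in */
}

/* Models the swap-entry branch of hmm_vma_handle_pte() for a swapped-out,
 * non-device page. 'fixed' toggles whether the need_fault() check is made. */
static uint64_t handle_swapped_pte(uint64_t orig_pfn, bool fixed)
{
	bool fault = false;
	uint64_t cpu_flags = 0;         /* swapped-out pte: no valid flag */

	if (fixed)
		need_fault(orig_pfn, cpu_flags, &fault);

	if (fault)
		return PFN_VALID;       /* would "goto fault": swap page in */
	return PFN_NONE;                /* old behaviour: return NONE */
}

int main(void)
{
	uint64_t req = PFN_VALID;       /* caller requested a valid pfn */

	printf("before fix: pfn = %llu (HMM_PFN_NONE)\n",
	       (unsigned long long)handle_swapped_pte(req, false));
	printf("after fix:  pfn = %llu (page faulted in)\n",
	       (unsigned long long)handle_swapped_pte(req, true));
	return 0;
}

In the real function, cpu_flags computed from a swap pte has no valid bit set,
so once hmm_pte_need_fault() is called the fault flag becomes true and the
walker takes the fault path instead of filling the output pfn with
HMM_PFN_NONE.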