From db6c1f6f236dbcd271d51d37675bbccfcea7c7be Mon Sep 17 00:00:00 2001
From: Yicong Yang
Date: Mon, 17 Jul 2023 21:10:03 +0800
Subject: mm/tlbbatch: introduce arch_flush_tlb_batched_pending()

Currently we flush the mm in flush_tlb_batched_pending() to avoid a race
between reclaim, which unmaps pages via a batched TLB flush, and
mprotect/munmap/etc.  Other architectures such as arm64 may only need a
synchronization barrier (dsb) here rather than a full mm flush.  So add
arch_flush_tlb_batched_pending() to allow an arch-specific implementation.
This intends no functional change on x86, which still performs a full mm
flush.

Link: https://lkml.kernel.org/r/20230717131004.12662-4-yangyicong@huawei.com
Signed-off-by: Yicong Yang
Reviewed-by: Catalin Marinas
Cc: Anshuman Khandual
Cc: Anshuman Khandual
Cc: Arnd Bergmann
Cc: Barry Song
Cc: Barry Song
Cc: Darren Hart
Cc: Jonathan Cameron
Cc: Jonathan Corbet
Cc: Kefeng Wang
Cc: lipeifeng
Cc: Mark Rutland
Cc: Mel Gorman
Cc: Nadav Amit
Cc: Peter Zijlstra
Cc: Punit Agrawal
Cc: Ryan Roberts
Cc: Steven Miao
Cc: Will Deacon
Cc: Xin Hao
Cc: Zeng Tao
Signed-off-by: Andrew Morton
---
 mm/rmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index f6fb821d56a8..5717517e4040 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -717,7 +717,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
 	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
 
 	if (pending != flushed) {
-		flush_tlb_mm(mm);
+		arch_flush_tlb_batched_pending(mm);
 		/*
 		 * If the new TLB flushing is pending during flushing, leave
		 * mm->tlb_flush_batched as is, to avoid losing flushing.
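[ Note: the per-arch hooks themselves are not part of this mm/rmap.c diff.
  The sketch below only illustrates what the arch side of
  arch_flush_tlb_batched_pending() could look like, assuming the hook is
  defined in each architecture's tlbflush.h; the placement and the exact
  barrier are illustrative, not taken from this patch. ]

/*
 * x86-style fallback (assumed placement): keep the existing behaviour by
 * doing a full mm flush, so this patch is a no-op there.
 */
static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
{
	flush_tlb_mm(mm);
}

/*
 * arm64-style variant (assumed placement): reclaim has already issued the
 * TLB invalidations for the batched unmaps, so waiting for them to complete
 * with a DSB ISH is sufficient before mprotect/munmap proceeds.
 */
static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
{
	dsb(ish);
}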