path: root/arch/x86/mm
author    Liam R. Howlett <Liam.Howlett@oracle.com>  2023-07-24 21:31:57 +0300
committer Andrew Morton <akpm@linux-foundation.org>  2023-08-18 20:12:50 +0300
commit    6935e052557caaa8e1ee0a7d85faeb55853d2e0e (patch)
tree      69cd4f023ce1d6887b7cb9eca4976a338edd3d40 /arch/x86/mm
parent    fec29364348fec535c55708b1f4025b321aba572 (diff)
download  linux-6935e052557caaa8e1ee0a7d85faeb55853d2e0e.tar.xz
mm/mmap: change vma iteration order in do_vmi_align_munmap()
By delaying the setting of the prev/next VMAs until after the write of NULL, the probability of the prev/next VMA already being in the CPU cache is significantly increased, especially for larger munmap operations. It also means that prev/next will be loaded closer to when they are used.

This requires changing the loop type when gathering the VMAs that will be freed. Since prev will be set later in the function, it is better to reverse the splitting direction of the start VMA (modify the new_below argument to __split_vma).

Using vma_iter_prev_range() to walk back to the correct location in the tree will, for the most part, mean walking within the CPU cache. Usually, this is two steps vs a node reset and a tree re-walk.

Link: https://lkml.kernel.org/r/20230724183157.3939892-16-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
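The reordering is easiest to see as a before/after sketch. The fragment below is an illustration only, not the actual mm/mmap.c code: it omits locking, accounting, error handling and the detached-VMA bookkeeping, and helper names other than __split_vma(), its new_below argument and vma_iter_prev_range() (which the message itself names) are assumed rather than taken from the patch.

    /*
     * Simplified sketch of the iteration-order change in
     * do_vmi_align_munmap(); pseudocode-style, not compilable on its own.
     */

    /* Old ordering: neighbours fetched up front, long before use. */
    prev = vma_prev(&vmi);             /* tree walk now; may be cold later */
    __split_vma(&vmi, vma, start, new_below);
    /* ... gather the VMAs being unmapped ... */
    vma_iter_clear(&vmi);              /* write NULL over the range        */
    /* ... prev/next finally consumed here ...                             */

    /*
     * New ordering: do the NULL write first, then load prev/next right
     * before they are used.  The start VMA is split in the opposite
     * direction (new_below reversed) so the iterator ends up where a
     * short walk back suffices.
     */
    __split_vma(&vmi, vma, start, !new_below);
    /* ... gather the VMAs being unmapped ... */
    vma_iter_clear(&vmi);              /* write NULL over the range        */
    prev = vma_iter_prev_range(&vmi);  /* usually ~2 steps through nodes   */
    next = vma_next(&vmi);             /* that are already cache-hot       */

The point of the sketch is only the relative ordering: the maple-tree write happens before the neighbour lookups, so those lookups both benefit from and warm the same cached nodes instead of forcing a reset and full re-walk of the tree.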
Diffstat (limited to 'arch/x86/mm')
0 files changed, 0 insertions, 0 deletions