author		Nadav Amit <namit@vmware.com>	2022-06-06 21:01:23 +0300
committer	Dave Hansen <dave.hansen@linux.intel.com>	2022-06-07 18:48:03 +0300
commit		aa44284960d550eb4d8614afdffebc68a432a9b4 (patch)
tree		a33218a6c83a849a64c50b01b3d42ee53db2276d /Documentation/core-api/protection-keys.rst
parent		e19d11267f0e6c8aff2d15d2dfed12365b4c9184 (diff)
download	linux-aa44284960d550eb4d8614afdffebc68a432a9b4.tar.xz
x86/mm/tlb: Avoid reading mm_tlb_gen when possible
During extreme TLB shootdown storms, the mm's tlb_gen cacheline is highly
contended and reading it should (arguably) be avoided as much as possible.

Currently, flush_tlb_func() reads the mm's tlb_gen unconditionally, even
when it is not necessary (e.g., the mm was already switched). This is
wasteful.

Moreover, one of the existing optimizations is to read the mm's tlb_gen to
see if there are additional in-flight TLB invalidations and flush the
entire TLB in such a case. However, if the request's tlb_gen was already
flushed, the benefit of checking the mm's tlb_gen is likely to be offset
by the overhead of the check itself.

Running will-it-scale with tlb_flush1_threads shows a considerable benefit
on a 56-core Skylake (up to +24%):

  threads	Baseline (v5.17+)	+Patch
  1		159960			160202
  5		310808			308378 (-0.7%)
  10		479110			490728
  15		526771			562528
  20		534495			587316
  25		547462			628296
  30		579616			666313
  35		594134			701814
  40		612288			732967
  45		617517			749727
  50		637476			735497
  55		614363			778913 (+24%)

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/20220606180123.2485171-1-namit@vmware.com
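The gist of the optimization can be modeled as a small self-contained
userspace sketch. The struct names, the flush_needed() helper, and the
req_tlb_gen parameter below are hypothetical illustrations (they are not
the kernel's flush_tlb_func() code); what the sketch shows is the
ordering: the shared, contended generation counter is only read after a
cheap per-CPU check has established that a flush is actually required.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical model of the mm-wide TLB generation (shared, contended). */
    struct mm_model {
            _Atomic uint64_t tlb_gen;
    };

    /* Hypothetical model of the per-CPU state: how far this CPU has flushed. */
    struct cpu_model {
            uint64_t local_tlb_gen;
    };

    /*
     * Return true if a TLB flush is needed for a request tagged req_tlb_gen.
     * The key point: when the request is already covered by a previous
     * flush, return early WITHOUT touching mm->tlb_gen, so the contended
     * cacheline is not read at all.
     */
    static bool flush_needed(struct cpu_model *cpu, struct mm_model *mm,
                             uint64_t req_tlb_gen)
    {
            if (req_tlb_gen <= cpu->local_tlb_gen)
                    return false;                   /* already flushed locally */

            /* Only now read the shared generation, to batch in-flight requests. */
            uint64_t mm_tlb_gen = atomic_load(&mm->tlb_gen);

            cpu->local_tlb_gen = mm_tlb_gen;        /* flush up to the latest gen */
            return true;
    }

    int main(void)
    {
            struct mm_model mm = { .tlb_gen = 3 };
            struct cpu_model cpu = { .local_tlb_gen = 3 };

            /* Request for gen 2: already covered, mm.tlb_gen is never read. */
            printf("flush needed: %d\n", flush_needed(&cpu, &mm, 2));

            /* Request for gen 4: needs a flush, mm.tlb_gen is read once. */
            atomic_store(&mm.tlb_gen, 4);
            printf("flush needed: %d\n", flush_needed(&cpu, &mm, 4));
            return 0;
    }

The same ordering is what the patch applies inside flush_tlb_func(): skip
reading the mm-wide tlb_gen entirely when the request's generation has
already been flushed on the local CPU.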
Diffstat (limited to 'Documentation/core-api/protection-keys.rst')
0 files changed, 0 insertions, 0 deletions