author	Andrew Murray <andrew.murray@arm.com>	2019-08-28 20:50:07 +0300
committer	Will Deacon <will@kernel.org>	2019-08-29 17:53:42 +0300
commit	addfc38672c73efd5c4e559a2e455b086e3e20c5 (patch)
tree	e3bc9622b3b208a628d081e37be257ab5ad99a84	/arch/arm64/include/asm/cmpxchg.h
parent	580fa1b874711d633f9b145b7777b0e83ebf3787 (diff)
download	linux-addfc38672c73efd5c4e559a2e455b086e3e20c5.tar.xz
arm64: atomics: avoid out-of-line ll/sc atomics
When building for LSE atomics (CONFIG_ARM64_LSE_ATOMICS), if the hardware or toolchain doesn't support them the existing code falls back to LL/SC atomics. It achieves this by branching from inline assembly to a function that is built with special compile flags. Further, this clobbers registers even when the fallback isn't used, increasing register pressure.

Improve this by providing inline implementations of both the LSE and LL/SC atomics and using a static key to select between them, which allows the compiler to generate better atomics code. Put the LL/SC fallback atomics in their own subsection to improve icache performance.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
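The selection idiom described above can be illustrated with a small userspace C analogue. This is a minimal sketch, not the kernel's code: have_lse_atomics, fetch_add_lse, fetch_add_llsc and fetch_add are hypothetical names, a plain boolean stands in for the static key (which the kernel patches into the instruction stream once CPU features are known), and compiler atomic builtins stand in for the LSE and LL/SC assembly.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the static key; the kernel flips this once, at boot. */
static bool have_lse_atomics;

/* Hypothetical "LSE"-style path: a single atomic read-modify-write. */
static inline int fetch_add_lse(int *p, int i)
{
	return __atomic_fetch_add(p, i, __ATOMIC_RELAXED);
}

/* Hypothetical "LL/SC"-style fallback: a compare-and-swap retry loop. */
static inline int fetch_add_llsc(int *p, int i)
{
	int old = __atomic_load_n(p, __ATOMIC_RELAXED);

	while (!__atomic_compare_exchange_n(p, &old, old + i, true,
					    __ATOMIC_RELAXED, __ATOMIC_RELAXED))
		;	/* 'old' is refreshed on failure; retry */
	return old;
}

/*
 * Both implementations are inline, so the compiler sees straight-line
 * code on whichever branch is taken: no out-of-line call and no extra
 * register clobbers when the fallback is unused.
 */
static inline int fetch_add(int *p, int i)
{
	return have_lse_atomics ? fetch_add_lse(p, i)
				: fetch_add_llsc(p, i);
}

int main(void)
{
	int v = 40;
	int old;

	have_lse_atomics = false;	/* pretend the CPU lacks LSE */
	old = fetch_add(&v, 2);
	printf("old=%d new=%d\n", old, v);
	return 0;
}

In the kernel itself the test is a static key (jump label), so after boot the selection costs a patched branch rather than a load-and-compare, while the inline LL/SC fallback sits in its own subsection to keep it out of the hot icache footprint.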
Diffstat (limited to 'arch/arm64/include/asm/cmpxchg.h')
-rw-r--r--	arch/arm64/include/asm/cmpxchg.h	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
index 7a299a20f6dc..e5fff8cd4904 100644
--- a/arch/arm64/include/asm/cmpxchg.h
+++ b/arch/arm64/include/asm/cmpxchg.h
@@ -10,7 +10,7 @@
 #include <linux/build_bug.h>
 #include <linux/compiler.h>
 
-#include <asm/atomic.h>
+#include <asm/atomic_arch.h>
 #include <asm/barrier.h>
 #include <asm/lse.h>
 