path: root/scripts/atomic/fallbacks/inc_not_zero
author		Marco Elver <elver@google.com>	2019-11-26 17:04:05 +0300
committer	Thomas Gleixner <tglx@linutronix.de>	2020-06-11 09:03:24 +0300
commit		765dcd209947e7b3666c08fb109ab8b879f7a471 (patch)
tree		0dbe7fe72d9bd74804abfb90453138f8a6e997c1 /scripts/atomic/fallbacks/inc_not_zero
parent		b29482fde649c72441d5478a4ea2c52c56d97a5e (diff)
download	linux-765dcd209947e7b3666c08fb109ab8b879f7a471.tar.xz
asm-generic/atomic: Use __always_inline for fallback wrappers
Use __always_inline for atomic fallback wrappers. When building for size
(CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to inline
even relatively small static inline functions that are assumed to be
inlinable, such as atomic ops. This can cause problems, for example in
UACCESS regions.

While the fallback wrappers aren't pure wrappers, they are trivial
nonetheless, and the function they wrap should determine the final
inlining policy.

For x86 tinyconfig we observe:
- vmlinux baseline:   1315988
- vmlinux with patch: 1315928 (-60 bytes)

[ tglx: Cherry-picked from KCSAN ]

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
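As context for the attribute itself, here is a minimal, self-contained C sketch
(not taken from the kernel tree; the names hinted_inc, forced_inc and caller are
made up for illustration) of why __always_inline matters under -Os: a plain
static inline is only a hint the compiler may ignore, while the always_inline
attribute forces the body into every caller, which is what the generated
fallback wrappers now rely on.

/* Roughly how the kernel spells __always_inline (GCC/Clang attribute). */
#define __always_inline inline __attribute__((__always_inline__))

static inline int hinted_inc(int v)
{
	/* Only a hint: under -Os the compiler may keep this out of line. */
	return v + 1;
}

static __always_inline int forced_inc(int v)
{
	/* Forced: the body is folded into the caller regardless of -Os. */
	return v + 1;
}

int caller(int v)
{
	return hinted_inc(v) + forced_inc(v);
}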
Diffstat (limited to 'scripts/atomic/fallbacks/inc_not_zero')
-rwxr-xr-x	scripts/atomic/fallbacks/inc_not_zero	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index a7c45c8d107c..d9f7b97aab42 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -6,7 +6,7 @@ cat <<EOF
* Atomically increments @v by 1, if @v is non-zero.
* Returns true if the increment was done.
*/
-static inline bool
+static __always_inline bool
${atomic}_inc_not_zero(${atomic}_t *v)
{
return ${atomic}_add_unless(v, 1, 0);
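For orientation, the shell template above is expanded once per atomic type. A
sketch of the C the updated template would emit for the plain atomic_t case,
substituting ${atomic} with "atomic" (illustrative only; the real output is
produced by the scripts/atomic tooling into the generated fallback header):

/*
 * Sketch of the wrapper emitted by the template once ${atomic} is
 * expanded to "atomic".
 */
static __always_inline bool
atomic_inc_not_zero(atomic_t *v)
{
	return atomic_add_unless(v, 1, 0);
}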