author | Guo Ren <guoren@linux.alibaba.com> | 2023-09-08 18:43:39 +0300 |
---|---|---|
committer | Ingo Molnar <mingo@kernel.org> | 2023-09-21 11:17:00 +0300 |
commit | c6f4a90022524d06f6d9de323b1757031dcf0c26 (patch) | |
tree | 99086c83676e7efb478369ea9b240f395f2a442c /include/asm-generic/spinlock.h | |
parent | fbeb558b0dd0d6348e0872bbbbe96e30c65867b7 (diff) | |
download | linux-c6f4a90022524d06f6d9de323b1757031dcf0c26.tar.xz | |
asm-generic: ticket-lock: Optimize arch_spin_value_unlocked()
The ticket-lock implementation of arch_spin_value_unlocked() caused the
compiler to generate inefficient asm code on the RISC-V architecture,
because of an unnecessary memory access to the contended value.
Before the patch:
void lockref_get(struct lockref *lockref)
{
78: fd010113 add sp,sp,-48
7c: 02813023 sd s0,32(sp)
80: 02113423 sd ra,40(sp)
84: 03010413 add s0,sp,48
0000000000000088 <.LBB296>:
CMPXCHG_LOOP(
88: 00053783 ld a5,0(a0)
After the patch:
void lockref_get(struct lockref *lockref)
{
CMPXCHG_LOOP(
78: 00053783 ld a5,0(a0)
After the patch, lockref_get() can enter the fast path directly instead of
first executing the function's prologue. This is because the ticket lock's
complex logic limited compiler optimization of the spinlock fast path,
whereas qspinlock does not.
Callers of arch_spin_value_unlocked() benefit from this change; currently,
the only caller is lockref.
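For context, lockref's fast path (the CMPXCHG_LOOP macro in lib/lockref.c) snapshots the combined lock/count word and only attempts a 64-bit cmpxchg while the embedded spinlock value looks unlocked, which is exactly where a cheaper arch_spin_value_unlocked() pays off. Below is a minimal user-space sketch of that pattern; it uses C11 atomics instead of the kernel's try_cmpxchg64_relaxed(), and the names and the lock-in-low-bits layout are illustrative assumptions, not the kernel's exact definitions:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Ticket-lock value: owner in the low 16 bits, next ticket in the high 16. */
static inline int spin_value_unlocked(uint32_t val)
{
	return (val >> 16) == (val & 0xffff);
}

struct lockref_demo {
	/* Assumed layout: spinlock in the low 32 bits, count in the high 32. */
	_Atomic uint64_t lock_count;
};

/* Fast-path increment, mirroring the shape of lockref's CMPXCHG_LOOP:
 * retry the cmpxchg only while the snapshot of the lock looks unlocked. */
static int lockref_get_fast(struct lockref_demo *ref)
{
	uint64_t old = atomic_load_explicit(&ref->lock_count,
					    memory_order_relaxed);
	int retry = 100;

	while (spin_value_unlocked((uint32_t)old)) {
		uint64_t new = old + (1ULL << 32);	/* bump the count */

		if (atomic_compare_exchange_weak_explicit(&ref->lock_count,
				&old, new, memory_order_relaxed,
				memory_order_relaxed))
			return 1;	/* fast path succeeded */
		if (!--retry)
			break;		/* give up; caller would take the lock */
	}
	return 0;
}

int main(void)
{
	struct lockref_demo ref = { .lock_count = 0 };

	printf("fast path taken: %d, count: %llu\n",
	       lockref_get_fast(&ref),
	       (unsigned long long)(ref.lock_count >> 32));
	return 0;
}
```

The effect of the patch is visible in this shape: spin_value_unlocked() operates on a plain value already held in a register, so the compiler never has to spill the snapshot to the stack just to take its address.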
Signed-off-by: Guo Ren <guoren@kernel.org>
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Waiman Long <longman@redhat.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230908154339.3250567-1-guoren@kernel.org
Diffstat (limited to 'include/asm-generic/spinlock.h')
-rw-r--r-- | include/asm-generic/spinlock.h | 16 |
1 file changed, 9 insertions, 7 deletions
diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index fdfebcb050f4..90803a826ba0 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -68,11 +68,18 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 	smp_store_release(ptr, (u16)val + 1);
 }
 
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+	u32 val = lock.counter;
+
+	return ((val >> 16) == (val & 0xffff));
+}
+
 static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	u32 val = atomic_read(lock);
+	arch_spinlock_t val = READ_ONCE(*lock);
 
-	return ((val >> 16) != (val & 0xffff));
+	return !arch_spin_value_unlocked(val);
 }
 
 static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
@@ -82,11 +89,6 @@ static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
 	return (s16)((val >> 16) - (val & 0xffff)) > 1;
 }
 
-static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
-{
-	return !arch_spin_is_locked(&lock);
-}
-
 #include <asm/qrwlock.h>
 
 #endif /* __ASM_GENERIC_SPINLOCK_H */
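The new helper reads the by-value copy's counter field directly, so the unlocked test can be evaluated entirely in registers; the old version passed the copy's address to arch_spin_is_locked(), whose atomic_read() forced the value out to the stack. The predicate itself encodes the ticket-lock convention that the lock is free when the next ticket equals the current owner. A stand-alone sketch of that check, assuming the generic layout of next ticket in the upper 16 bits and owner in the lower 16 bits:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the new arch_spin_value_unlocked(): unlocked iff next == owner. */
static int value_unlocked(uint32_t val)
{
	return (val >> 16) == (val & 0xffff);
}

int main(void)
{
	assert(value_unlocked(0x00000000));	/* fresh lock: next == owner == 0 */
	assert(!value_unlocked(0x00010000));	/* held: next = 1, owner = 0 */
	assert(value_unlocked(0x00010001));	/* released: owner caught up */
	assert(!value_unlocked(0x00030001));	/* held, with a waiter queued */
	return 0;
}
```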