author		Nick Piggin <npiggin@suse.de>	2008-01-30 15:31:20 +0300
committer	Ingo Molnar <mingo@elte.hu>	2008-01-30 15:31:20 +0300
commit		95c354fe9f7d6decc08a92aa26eb233ecc2155bf (patch)
tree		ec9267032ea875e84216cfb20acb2cfc7c62149f /include/linux/spinlock_up.h
parent		a95d67f87e1a5f1b4429be3ba3bf7b4051657908 (diff)
download	linux-95c354fe9f7d6decc08a92aa26eb233ecc2155bf.tar.xz
spinlock: lockbreak cleanup
The break_lock data structure and code for spinlocks is quite nasty.
Not only does it double the size of a spinlock but it changes locking to
a potentially less optimal trylock.

Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
__raw_spin_is_contended that uses the lock data itself to determine whether
there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
not set.

Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
decouple it from the spinlock implementation, and make it typesafe
(rwlocks do not have any need_lockbreak sites -- why do they even get
bloated up with that break_lock then?).

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
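For context, the companion changes described above land in other files of
this commit (include/linux/spinlock.h and include/linux/sched.h), which are
not part of the hunk shown below. A sketch reconstructed from the commit
message, not the verbatim patch:

	/* spin_is_contended (sketch): either read the old break_lock flag,
	 * or ask the lock implementation itself when GENERIC_LOCKBREAK is
	 * not set. */
	#ifdef CONFIG_GENERIC_LOCKBREAK
	#define spin_is_contended(lock)	((lock)->break_lock)
	#else
	#define spin_is_contended(lock)	__raw_spin_is_contended(&(lock)->raw_lock)
	#endif

	/* spin_needbreak (sketch): replaces need_lockbreak and is typesafe,
	 * taking only a spinlock_t. Without preemption there is no point in
	 * breaking out of a critical section early. */
	static inline int spin_needbreak(spinlock_t *lock)
	{
	#ifdef CONFIG_PREEMPT
		return spin_is_contended(lock);
	#else
		return 0;
	#endif
	}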
Diffstat (limited to 'include/linux/spinlock_up.h')
-rw-r--r--	include/linux/spinlock_up.h | 2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/include/linux/spinlock_up.h b/include/linux/spinlock_up.h
index ea54c4c9a4ec..938234c4a996 100644
--- a/include/linux/spinlock_up.h
+++ b/include/linux/spinlock_up.h
@@ -64,6 +64,8 @@ static inline void __raw_spin_unlock(raw_spinlock_t *lock)
 # define __raw_spin_trylock(lock)	({ (void)(lock); 1; })
 #endif /* DEBUG_SPINLOCK */
 
+#define __raw_spin_is_contended(lock)	(((void)(lock), 0))
+
 #define __raw_read_can_lock(lock)	(((void)(lock), 1))
 #define __raw_write_can_lock(lock)	(((void)(lock), 1))
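On UP the new hook can only ever report "not contended", since no second
CPU can be spinning on the lock; hence the constant 0 above. On SMP the
lock word itself can encode waiters, which is what lets the non-lockbreak
path avoid a separate break_lock field. A hypothetical ticket-style
illustration (not taken from this patch; a real implementation would need
an atomic read of the lock word):

	typedef struct {
		unsigned short head;	/* ticket currently being served */
		unsigned short tail;	/* next ticket to hand out */
	} ticket_lock_t;

	/* Contended when more than one ticket is outstanding: the holder
	 * accounts for one, so anything beyond that is a waiter spinning. */
	static inline int ticket_is_contended(ticket_lock_t *lock)
	{
		return (unsigned short)(lock->tail - lock->head) > 1;
	}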