author     Will Deacon <will.deacon@arm.com>  2019-02-22 15:48:44 +0300
committer  Will Deacon <will.deacon@arm.com>  2019-04-08 13:59:39 +0300
commit     d1be6a28b13ce6d1bc42bf9b6a9454c65839225b (patch)
tree       320fd6244998529491991e7012ebfc5dd9709d4a /kernel/locking
parent     4614bbdee35706ed77c130d23f12e21479b670ff (diff)
download   linux-d1be6a28b13ce6d1bc42bf9b6a9454c65839225b.tar.xz
asm-generic/mmiowb: Add generic implementation of mmiowb() tracking
In preparation for removing all explicit mmiowb() calls from driver
code, implement a tracking system in asm-generic based loosely on the
PowerPC implementation. This allows architectures with a non-empty
mmiowb() definition to have the barrier automatically inserted in
spin_unlock() following a critical section containing an I/O write.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
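To make the mechanism concrete, here is a minimal, single-CPU sketch in plain C of the tracking logic described above. The field and helper names follow the commit description but are not guaranteed to match the asm-generic code verbatim; the real implementation keeps one mmiowb_state per CPU, which is what the hunk below instantiates.

/*
 * Illustrative sketch only, not the kernel header.
 */
struct mmiowb_state {
	unsigned int nesting_count;	/* spinlocks currently held */
	unsigned int mmiowb_pending;	/* non-zero: I/O write done under a lock */
};

/* The real code keeps one of these per CPU; a single instance suffices here. */
static struct mmiowb_state __mmiowb_state;

static void mmiowb(void)
{
	/* Arch-specific barrier ordering MMIO writes against the unlock. */
}

/* Called from the MMIO write accessors (e.g. writel()). */
static void mmiowb_set_pending(void)
{
	/* Only counts as pending if at least one lock is currently held. */
	__mmiowb_state.mmiowb_pending = __mmiowb_state.nesting_count;
}

/* Called when a spinlock is acquired. */
static void mmiowb_spin_lock(void)
{
	__mmiowb_state.nesting_count++;
}

/* Called when a spinlock is released: flush a pending I/O write, if any. */
static void mmiowb_spin_unlock(void)
{
	if (__mmiowb_state.mmiowb_pending) {
		__mmiowb_state.mmiowb_pending = 0;
		mmiowb();
	}
	__mmiowb_state.nesting_count--;
}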
Diffstat (limited to 'kernel/locking')
-rw-r--r--  kernel/locking/spinlock.c  |  7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index 936f3d14dd6b..0ff08380f531 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -22,6 +22,13 @@
 #include <linux/debug_locks.h>
 #include <linux/export.h>
 
+#ifdef CONFIG_MMIOWB
+#ifndef arch_mmiowb_state
+DEFINE_PER_CPU(struct mmiowb_state, __mmiowb_state);
+EXPORT_PER_CPU_SYMBOL(__mmiowb_state);
+#endif
+#endif
+
 /*
  * If lockdep is enabled then we use the non-preemption spin-ops
  * even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
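For completeness, a hypothetical wrapper (building on the helpers sketched after the commit message above; none of the names below are kernel API) showing where the hooks would sit so that the unlock path issues the barrier only when an I/O write actually happened inside the critical section:

/* Hypothetical lock type and wrappers, for illustration only. */
struct example_lock {
	int taken;			/* stand-in for the real lock word */
};

static void example_spin_lock(struct example_lock *lock)
{
	lock->taken = 1;		/* stand-in for the real acquisition */
	mmiowb_spin_lock();		/* nesting_count++ */
}

static void example_spin_unlock(struct example_lock *lock)
{
	mmiowb_spin_unlock();		/* issues mmiowb() if a write is pending */
	lock->taken = 0;		/* stand-in for the real release */
}

/* Driver-style critical section: the I/O write marks the state pending and
 * the unlock path flushes it automatically, with no explicit mmiowb(). */
static void example_critical_section(struct example_lock *lock)
{
	example_spin_lock(lock);
	mmiowb_set_pending();		/* would be called from writel() etc. */
	example_spin_unlock(lock);	/* mmiowb() is issued here */
}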