path: root/Documentation/hwspinlock.txt
author     Fabien Dessenne <fabien.dessenne@st.com>          2019-03-07 18:58:23 +0300
committer  Bjorn Andersson <bjorn.andersson@linaro.org>      2019-06-30 07:08:14 +0300
commit     360aa640a59f269b784848c0b2d6d462952750d9 (patch)
tree       cf5d41daeadb6f14a8c41637b7ae29ee05e6747a /Documentation/hwspinlock.txt
parent     bce6f5221374ba451a337d0a3773e6eb99dad3e8 (diff)
download   linux-360aa640a59f269b784848c0b2d6d462952750d9.tar.xz
hwspinlock: add the 'in_atomic' API
Add the 'in_atomic' mode, which can be called from an atomic context. This mode relies on the existing 'raw' mode (no lock, no preemption/irq disabling), with the difference that the timeout is not based on jiffies (jiffies won't increase when irqs are disabled) but is handled with busy-waiting udelay() calls.

Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
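For orientation only (not part of this patch): the busy-waiting timeout described above can be pictured roughly as below, using only the public hwspin_trylock_raw() and udelay() calls. The helper name and polling interval are made up, and the in-kernel code differs in detail. ::

  /* Illustrative sketch: poll in raw mode and time out with udelay(),
   * since jiffies may not advance while interrupts are disabled. */
  #include <linux/delay.h>
  #include <linux/errno.h>
  #include <linux/hwspinlock.h>

  #define POLL_DELAY_US 100                       /* assumed polling interval */

  static int busy_wait_lock(struct hwspinlock *hwlock, unsigned int to_ms)
  {
          unsigned int waited_us = 0;
          int ret;

          for (;;) {
                  ret = hwspin_trylock_raw(hwlock);
                  if (ret != -EBUSY)
                          return ret;             /* taken, or a hard error */
                  if (waited_us >= to_ms * 1000)
                          return -ETIMEDOUT;
                  udelay(POLL_DELAY_US);
                  waited_us += POLL_DELAY_US;
          }
  }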
Diffstat (limited to 'Documentation/hwspinlock.txt')
-rw-r--r--   Documentation/hwspinlock.txt   39
1 file changed, 39 insertions(+), 0 deletions(-)
diff --git a/Documentation/hwspinlock.txt b/Documentation/hwspinlock.txt
index c3403f9ae27a..6f03713b7003 100644
--- a/Documentation/hwspinlock.txt
+++ b/Documentation/hwspinlock.txt
@@ -153,6 +153,22 @@ The function will never sleep.
::
+ int hwspin_lock_timeout_in_atomic(struct hwspinlock *hwlock, unsigned int to);
+
+Lock a previously-assigned hwspinlock with a timeout limit (specified in
+msecs). If the hwspinlock is already taken, the function will busy loop
+waiting for it to be released, but give up when the timeout elapses.
+
+This function shall be called only from an atomic context and the timeout
+value shall not exceed a few msecs.
+
+Returns 0 when successful and an appropriate error code otherwise (most
+notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
+
+The function will never sleep.
+
+::
+
int hwspin_trylock(struct hwspinlock *hwlock);
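As a usage illustration for the hunk above (not part of the patch): hwspin_lock_timeout_in_atomic() is meant for contexts such as interrupt handlers, where sleeping is forbidden and the busy-wait must stay short. The handler below is hypothetical. ::

  #include <linux/hwspinlock.h>
  #include <linux/interrupt.h>

  static irqreturn_t my_irq_handler(int irq, void *data)
  {
          struct hwspinlock *hwlock = data;       /* assumed to be passed as dev_id */

          /* Busy-wait for at most 1 msec; never sleeps. */
          if (hwspin_lock_timeout_in_atomic(hwlock, 1))
                  return IRQ_NONE;                /* -ETIMEDOUT or another error */

          /* ... access the resource shared with the remote processor ... */

          hwspin_unlock_in_atomic(hwlock);
          return IRQ_HANDLED;
  }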
@@ -218,6 +234,19 @@ The function will never sleep.
::
+ int hwspin_trylock_in_atomic(struct hwspinlock *hwlock);
+
+Attempt to lock a previously-assigned hwspinlock, but immediately fail if
+it is already taken.
+
+This function shall be called only from an atomic context.
+
+Returns 0 on success and an appropriate error code otherwise (most
+notably -EBUSY if the hwspinlock was already taken).
+The function will never sleep.
+
+::
+
void hwspin_unlock(struct hwspinlock *hwlock);
Unlock a previously-locked hwspinlock. Always succeed, and can be called
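A sketch of how the trylock variant added above might be used: from atomic context, back off instead of spinning when the remote side already holds the lock. All names below are illustrative. ::

  #include <linux/hwspinlock.h>
  #include <linux/io.h>
  #include <linux/types.h>

  static bool try_bump_shared_counter(struct hwspinlock *hwlock, void __iomem *reg)
  {
          if (hwspin_trylock_in_atomic(hwlock))
                  return false;                   /* -EBUSY: retry later, do not spin */

          writel(readl(reg) + 1, reg);            /* register shared with the remote core */

          hwspin_unlock_in_atomic(hwlock);
          return true;
  }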
@@ -264,6 +293,16 @@ This function will never sleep.
::
+ void hwspin_unlock_in_atomic(struct hwspinlock *hwlock);
+
+Unlock a previously-locked hwspinlock.
+
+The caller should **never** unlock an hwspinlock which is already unlocked.
+Doing so is considered a bug (there is no protection against this).
+This function will never sleep.
+
+::
+
int hwspin_lock_get_id(struct hwspinlock *hwlock);
Retrieve id number of a given hwspinlock. This is needed when an
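One more illustrative note on the unlock call added above (not from the patch): hwspin_unlock_in_atomic() must be issued exactly once per successful lock, and never on a lock that was not taken. ::

  #include <linux/hwspinlock.h>
  #include <linux/io.h>

  static void signal_remote(struct hwspinlock *hwlock, void __iomem *mbox)
  {
          if (hwspin_lock_timeout_in_atomic(hwlock, 2))
                  return;                         /* lock not taken: do NOT unlock */

          writel(0x1, mbox);                      /* critical section */

          hwspin_unlock_in_atomic(hwlock);        /* one unlock per successful lock */
  }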