author    Linus Torvalds <torvalds@linux-foundation.org>  2019-03-30 00:43:07 +0300
committer Linus Torvalds <torvalds@linux-foundation.org>  2019-03-30 00:43:07 +0300
commit    ffb8e45cf33e14d9a565491aec7abe039bebcfce (patch)
tree      4181a995d8fb1de48ed6af9c4211ac6a27b54b0a /lib
parent    7376e39ad96583545faefa2e7798bcb6a2a212a7 (diff)
parent    7bca889ee9297c3e208dee7c41aed7a56a880400 (diff)
Merge tag 'for-linus-20190329' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
 "Small set of fixes that should go into this series. This contains:

   - compat signal mask fix for io_uring (Arnd)

   - EAGAIN corner case for direct vs buffered writes for io_uring
     (Roman)

   - NVMe pull request from Christoph with various little fixes

   - sbitmap ws_active fix, which caused a perf regression for shared
     tags (me)

   - sbitmap bit ordering fix (Ming)

   - libata on-stack DMA fix (Raymond)"

* tag 'for-linus-20190329' of git://git.kernel.dk/linux-block:
  nvmet: fix error flow during ns enable
  nvmet: fix building bvec from sg list
  nvme-multipath: relax ANA state check
  nvme-tcp: fix an endianess miss-annotation
  libata: fix using DMA buffers on stack
  io_uring: offload write to async worker in case of -EAGAIN
  sbitmap: order READ/WRITE freed instance and setting clear bit
  blk-mq: fix sbitmap ws_active for shared tags
  io_uring: fix big-endian compat signal mask handling
  blk-mq: update comment for blk_mq_hctx_has_pending()
  blk-mq: use blk_mq_put_driver_tag() to put tag
Diffstat (limited to 'lib')
-rw-r--r--   lib/sbitmap.c   11
1 file changed, 11 insertions, 0 deletions
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 5b382c1244ed..155fe38756ec 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -591,6 +591,17 @@ EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);
void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
			 unsigned int cpu)
{
+	/*
+	 * Once the clear bit is set, the bit may be allocated out.
+	 *
+	 * Orders READ/WRITE on the associated instance (such as the
+	 * blk_mq request) with setting this bit, to avoid racing with
+	 * re-allocation; its pair is the memory barrier implied in
+	 * __sbitmap_get_word.
+	 *
+	 * One invariant is that the clear bit has to be zero when the bit
+	 * is in use.
+	 */
+	smp_mb__before_atomic();
	sbitmap_deferred_clear_bit(&sbq->sb, nr);
	/*
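
The comment being added pairs the smp_mb__before_atomic() on the freeing side with the barrier implied by the test-and-set in __sbitmap_get_word() on the allocating side. The userspace program below is only an analogy to illustrate that pairing, not kernel code: C11 release/acquire atomics stand in for the two barriers, and the request/bit_free names are illustrative assumptions.

/*
 * Userspace analogy (a sketch, not the kernel implementation) of the
 * ordering described above.  The "freeing" thread writes to the object
 * and then publishes the free bit with release semantics, standing in
 * for smp_mb__before_atomic() + sbitmap_deferred_clear_bit(); the
 * "allocating" thread observes the bit with acquire semantics, standing
 * in for the barrier implied by the test-and-set in __sbitmap_get_word().
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct request { int result; };

static struct request rq;
static atomic_int bit_free;	/* stands in for the deferred-clear bit */

static void *free_side(void *arg)
{
	(void)arg;
	rq.result = 42;					/* writes to the freed instance */
	atomic_store_explicit(&bit_free, 1,
			      memory_order_release);	/* publish the bit afterwards */
	return NULL;
}

static void *alloc_side(void *arg)
{
	(void)arg;
	while (!atomic_load_explicit(&bit_free, memory_order_acquire))
		;					/* wait until the bit is visible */
	assert(rq.result == 42);			/* earlier writes are now visible */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, free_side, NULL);
	pthread_create(&b, NULL, alloc_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("result=%d\n", rq.result);
	return 0;
}

The design point mirrors the comment in the patch: the allocating side already gets its barrier from the test-and-set implied in __sbitmap_get_word(), so only the freeing path in sbitmap_queue_clear() needs the explicit smp_mb__before_atomic() before publishing the clear bit.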