author    Ming Lei <ming.lei@redhat.com>  2024-05-15 04:31:57 +0300
committer Jens Axboe <axboe@kernel.dk>    2024-05-16 05:14:20 +0300
commit    d0aac2363549e12cc79b8e285f13d5a9f42fd08e (patch)
tree      454260b00912e2dbb2c92e7214affd3d48975d28 /block
parent    6da6680632792709cecf2b006f2fe3ca7857e791 (diff)
blk-cgroup: fix list corruption from reorder of WRITE ->lqueued
__blkcg_rstat_flush() can run at any time, including while blk_cgroup_bio_start() is executing. If the WRITE of `->lqueued` is reordered with the READ of `bisc->lnode.next` in the loop of __blkcg_rstat_flush(), `next_bisc` can be assigned a stat instance that is concurrently being added in blk_cgroup_bio_start(), and the local list in __blkcg_rstat_flush() can be corrupted. Fix the issue by adding a memory barrier.

Cc: Tejun Heo <tj@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20240515013157.443672-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
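For context, a simplified sketch of the flush side that the fix targets. Names follow the patch, but the detach step and stat handling are paraphrased, so treat this as an illustration rather than the upstream code:

	/* Consumer: __blkcg_rstat_flush(), simplified */
	struct llist_node *lnode = llist_del_all(lhead);  /* detach into a local list */
	struct blkg_iostat_set *bisc, *next_bisc;

	llist_for_each_entry_safe(bisc, next_bisc, lnode, lnode) {
		/*
		 * The macro reads bisc->lnode.next into next_bisc at the
		 * top of each iteration.  If the store below becomes
		 * visible before that read executes, blk_cgroup_bio_start()
		 * can observe ->lqueued == false, re-add bisc to the
		 * per-cpu llist and rewrite bisc->lnode.next; next_bisc
		 * then picks up the new global-list pointer instead of the
		 * local-list one, and the traversal is corrupted.
		 */
		WRITE_ONCE(bisc->lqueued, false);
		/* ... read and propagate bisc's per-cpu counters ... */
	}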
Diffstat (limited to 'block')
-rw-r--r--  block/blk-cgroup.c | 10
1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 8699f193cf31..52367a4501d0 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1035,6 +1035,16 @@ static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
struct blkg_iostat cur;
unsigned int seq;
+ /*
+  * Order the READ of `bisc->lnode.next` (the assignment of
+  * `next_bisc` in llist_for_each_entry_safe) before the WRITE
+  * that clears `bisc->lqueued`.  Otherwise the WRITE may become
+  * visible first, blk_cgroup_bio_start() can then re-add this
+  * entry and rewrite its next pointer, and `next_bisc` would be
+  * assigned the new pointer, corrupting the local list.
+  *
+  * The paired barrier is implied by llist_add() in
+  * blk_cgroup_bio_start().
+  */
+ smp_mb();
+
WRITE_ONCE(bisc->lqueued, false);
/* fetch the current per-cpu values */
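On the producer side no explicit barrier is needed: llist_add() is built on a cmpxchg() loop, and a successful value-returning atomic is fully ordered, which is the pairing the new comment refers to. A hedged sketch of that path, simplified from blk_cgroup_bio_start() as introduced by 3b8cc6298724 (stat accounting and the per-cpu lhead lookup are elided; producer_sketch is a hypothetical name):

	static void producer_sketch(struct blkg_iostat_set *bis,
				    struct llist_head *lhead)
	{
		if (!READ_ONCE(bis->lqueued)) {
			/*
			 * llist_add() cmpxchg()es the list head; the full
			 * ordering implied by the successful atomic is the
			 * barrier pairing with the smp_mb() added in
			 * __blkcg_rstat_flush() above, so the rewrite of
			 * bis->lnode.next cannot overtake the flusher's
			 * read of the old next pointer.
			 */
			llist_add(&bis->lnode, lhead);
			WRITE_ONCE(bis->lqueued, true);
		}
	}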