From e94f68527a35271131cdf9d3fb4eb3c2513dc3d0 Mon Sep 17 00:00:00 2001
From: Pavel Begunkov
Date: Thu, 21 Oct 2021 14:30:52 +0100
Subject: block: kill extra rcu lock/unlock in queue enter

blk_try_enter_queue() already takes rcu_read_lock/unlock, so we can
avoid the second pair in percpu_ref_tryget_live() by using the newly
added percpu_ref_tryget_live_rcu().

As rcu_read_lock/unlock imply barrier()s, it's pretty noticeable,
especially for !CONFIG_PREEMPT_RCU (the default for some distributions),
where __rcu_read_lock/unlock() are not inlined.

Before:
    3.20%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
    3.05%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

After:
    2.52%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
    2.28%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

Signed-off-by: Pavel Begunkov
Link: https://lore.kernel.org/r/6b11c67ea495ed9d44f067622d852de4a510ce65.1634822969.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe
---
 block/blk-core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index dfa199312c2f..fd389a16013c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -389,7 +389,7 @@ EXPORT_SYMBOL(blk_cleanup_queue);
 static bool blk_try_enter_queue(struct request_queue *q, bool pm)
 {
 	rcu_read_lock();
-	if (!percpu_ref_tryget_live(&q->q_usage_counter))
+	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
 		goto fail;

 	/*
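For context, a simplified sketch of the relationship the patch relies on,
paraphrased from include/linux/percpu-refcount.h (field layout and debug
annotations are abbreviated, so treat this as an illustration rather than
the verbatim kernel source): percpu_ref_tryget_live() is just
percpu_ref_tryget_live_rcu() wrapped in its own RCU read-side critical
section, so a caller that already holds rcu_read_lock(), like
blk_try_enter_queue(), can call the _rcu variant directly and drop the
redundant lock/unlock pair.

/* Sketch only: abbreviated from include/linux/percpu-refcount.h. */
static inline bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;
	bool ret = false;

	/* Caller is expected to already be inside an RCU read-side section. */
	WARN_ON_ONCE(!rcu_read_lock_held());

	if (likely(__ref_is_percpu(ref, &percpu_count))) {
		/* Fast path: ref is in percpu mode, bump the local counter. */
		this_cpu_inc(*percpu_count);
		ret = true;
	} else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
		/* Slow path: ref switched to atomic mode but is still live. */
		ret = atomic_long_inc_not_zero(&ref->data->count);
	}
	return ret;
}

static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
{
	bool ret;

	/*
	 * The non-_rcu variant supplies its own RCU critical section; this
	 * is the pair blk_try_enter_queue() no longer needs to pay for.
	 */
	rcu_read_lock();
	ret = percpu_ref_tryget_live_rcu(ref);
	rcu_read_unlock();
	return ret;
}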