path: root/fs/io-wq.h
author		Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>	2020-06-10 14:41:19 +0300
committer	Jens Axboe <axboe@kernel.dk>	2020-06-11 02:58:46 +0300
commit		7cdaf587de7c6f494b8433fded19f7728e70e1ef (patch)
tree		14b4686517cdc4f0a7fa4ef2faf6b4658b198572 /fs/io-wq.h
parent		c5b856255cbc3b664d686a83fa9397a835e063de (diff)
download	linux-7cdaf587de7c6f494b8433fded19f7728e70e1ef.tar.xz
io_uring: avoid whole io_wq_work copy for requests completed inline
If requests can be submitted and completed inline, we don't need to initialize the whole io_wq_work in io_init_req(), which is an expensive operation. Add a new flag 'REQ_F_WORK_INITIALIZED' to track whether io_wq_work has been initialized, and add a helper io_req_init_async(); users must call io_req_init_async() before touching any members of io_wq_work for the first time.

I use /dev/nullb0 to evaluate the performance improvement on my physical machine:

	modprobe null_blk nr_devices=1 completion_nsec=0
	sudo taskset -c 60 fio -name=fiotest -filename=/dev/nullb0 -iodepth=128 -thread -rw=read -ioengine=io_uring -direct=1 -bs=4k -size=100G -numjobs=1 -time_based -runtime=120

Before this patch:

Run status group 0 (all jobs):
	READ: bw=724MiB/s (759MB/s), 724MiB/s-724MiB/s (759MB/s-759MB/s), io=84.8GiB (91.1GB), run=120001-120001msec

With this patch:

Run status group 0 (all jobs):
	READ: bw=761MiB/s (798MB/s), 761MiB/s-761MiB/s (798MB/s-798MB/s), io=89.2GiB (95.8GB), run=120001-120001msec

About 5% improvement.

Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
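As a minimal sketch of the contract the message describes, assuming the helper sits in fs/io_uring.c next to the REQ_F_* flags (it is not part of the fs/io-wq.h hunk shown below):

	static inline void io_req_init_async(struct io_kiocb *req)
	{
		/* Requests submitted and completed inline never pay for the memset */
		if (req->flags & REQ_F_WORK_INITIALIZED)
			return;

		memset(&req->work, 0, sizeof(req->work));
		req->flags |= REQ_F_WORK_INITIALIZED;
	}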
Diffstat (limited to 'fs/io-wq.h')
-rw-r--r--	fs/io-wq.h	5
1 file changed, 0 insertions(+), 5 deletions(-)
diff --git a/fs/io-wq.h b/fs/io-wq.h
index 2db24d31fbc5..8e138fa88b9f 100644
--- a/fs/io-wq.h
+++ b/fs/io-wq.h
@@ -93,11 +93,6 @@ struct io_wq_work {
 	pid_t task_pid;
 };
 
-#define INIT_IO_WORK(work)				\
-	do {						\
-		*(work) = (struct io_wq_work){};	\
-	} while (0)					\
-
 static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
 {
 	if (!work->list.next)
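For illustration only, a hypothetical call site would then initialize the work item lazily before writing any of its members (task_pid is one such member, visible in the hunk above):

	/*
	 * Hypothetical call site: members of req->work may only be
	 * touched after io_req_init_async() has run for this request.
	 */
	io_req_init_async(req);
	req->work.task_pid = task_pid_vnr(current);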