author    Jens Axboe <axboe@kernel.dk>    2022-09-22 20:41:51 +0300
committer Jens Axboe <axboe@kernel.dk>    2022-09-30 16:49:11 +0300
commit    851eb780decb7180bcf09fad0035cba9aae669df (patch)
tree      d089afb600e24f9fdca9c0e2bbe2d2871b832b12 /io_uring
parent    c0a7ba77e81b8440d10f38559a5e1d219ff7e87c (diff)
download  linux-851eb780decb7180bcf09fad0035cba9aae669df.tar.xz
nvme: enable batched completions of passthrough IO
Now that the normal passthrough end_io path doesn't need the request
anymore, we can kill the explicit blk_mq_free_request() and just pass
back RQ_END_IO_FREE instead. This lets the batched completion path free
requests in batches as well.

This brings passthrough IO performance at least on par with bdev based
O_DIRECT with io_uring. With this and batched allocations, peak
performance goes from 110M IOPS to 122M IOPS. For IRQ based completions,
passthrough is now also about 10% faster than before, going from ~61M
to ~67M IOPS.

Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Co-developed-by: Stefan Roesch <shr@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
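For context, the shape of the change is in the request's end_io callback:
instead of freeing the request itself, the driver returns RQ_END_IO_FREE
and lets the block layer free it, which is what allows freeing to happen
in batches on the completion side. The following is a minimal sketch of
that before/after pattern; the handler name and elided body are
illustrative stand-ins, not the verbatim nvme diff.

    #include <linux/blk-mq.h>

    static enum rq_end_io_ret example_passthru_end_io(struct request *req,
                                                      blk_status_t err)
    {
            /* ... record the command result, punt completion to task ... */

            /*
             * Old pattern: free the request explicitly and tell the
             * block layer there is nothing left to do:
             *
             *      blk_mq_free_request(req);
             *      return RQ_END_IO_NONE;
             *
             * New pattern: hand ownership back to the block layer,
             * which can now free requests in batches.
             */
            return RQ_END_IO_FREE;
    }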
Diffstat (limited to 'io_uring')
0 files changed, 0 insertions, 0 deletions