author    Jens Axboe <axboe@kernel.dk>  2024-01-29 06:11:55 +0300
committer Jens Axboe <axboe@kernel.dk>  2024-02-08 23:27:06 +0300
commit    521223d7c229f83915619f888c99e952f24dc39f (patch)
tree      b19885749949ac462562ebc474b8964edc85331f /io_uring/io_uring.c
parent    4bcb982cce74e18155fba0d97394ca9634e0d8f0 (diff)
download  linux-521223d7c229f83915619f888c99e952f24dc39f.tar.xz
io_uring/cancel: don't default to setting req->work.cancel_seq
Just leave it unset by default, avoiding dipping into the last
cacheline (which is otherwise untouched) for the fast path of using
poll to drive networked traffic. Add a flag that tells us if the
sequence is valid or not, and then we can defer actually assigning
the flag and sequence until someone runs cancelations.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
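The pattern the message describes, a per-request validity flag plus lazy
assignment of the sequence at cancelation time, can be modeled in a small
standalone sketch. The identifiers below (REQ_F_CANCEL_SEQ,
match_cancel_seq, struct ctx/req) are illustrative assumptions, not
necessarily the names used in the actual patch:

	/*
	 * Illustrative sketch only: the fast path leaves cancel_seq
	 * untouched; the canceler assigns it on first visit and uses the
	 * flag to detect requests it has already seen in this pass.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define REQ_F_CANCEL_SEQ (1U << 0)  /* cancel_seq holds a valid value */

	struct ctx {
		atomic_int cancel_seq;      /* bumped per cancelation pass */
	};

	struct req {
		unsigned int flags;
		int cancel_seq;             /* meaningful only with flag set */
	};

	/*
	 * Called from the cancelation path, not from request setup. On
	 * first sight of a request it records the current sequence and
	 * reports "not seen yet"; on a later check with the same sequence
	 * it reports a match, so the canceler can skip the request.
	 */
	static bool match_cancel_seq(struct req *req, int sequence)
	{
		if ((req->flags & REQ_F_CANCEL_SEQ) && sequence == req->cancel_seq)
			return true;

		req->flags |= REQ_F_CANCEL_SEQ;
		req->cancel_seq = sequence;
		return false;
	}

	int main(void)
	{
		struct ctx ctx = { .cancel_seq = 0 };
		struct req req = { .flags = 0 };  /* fast path: seq left unset */
		int seq = atomic_load(&ctx.cancel_seq);

		printf("first visit:  %d\n", match_cancel_seq(&req, seq)); /* 0 */
		printf("second visit: %d\n", match_cancel_seq(&req, seq)); /* 1 */
		return 0;
	}

The design point is that the atomic_read() cost and the write to the
request's last cacheline are paid only by requests that a cancelation pass
actually touches, instead of by every request on the submission fast path.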
Diffstat (limited to 'io_uring/io_uring.c')
-rw-r--r--	io_uring/io_uring.c	1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index b8ca907b77eb..fd552b260eef 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -463,7 +463,6 @@ static void io_prep_async_work(struct io_kiocb *req)
 
 	req->work.list.next = NULL;
 	req->work.flags = 0;
-	req->work.cancel_seq = atomic_read(&ctx->cancel_seq);
 	if (req->flags & REQ_F_FORCE_ASYNC)
 		req->work.flags |= IO_WQ_WORK_CONCURRENT;
 