author     Ming Lei <ming.lei@redhat.com>                     2020-09-10 10:50:56 +0300
committer  Martin K. Petersen <martin.petersen@oracle.com>    2020-09-16 05:20:11 +0300
commit     ed5dd6a67d5eac5fb8873697b55dc1699752a9f3 (patch)
tree       f2b12578e83f2b256c0dbf532741ee1f41d110d3 /include/scsi/scsi_device.h
parent     f97e6e1eabbfed0ec3ccce7562df26a5b21d0d99 (diff)
download   linux-ed5dd6a67d5eac5fb8873697b55dc1699752a9f3.tar.xz
scsi: core: Only re-run queue in scsi_end_request() if device queue is busy
The request queue is currently run unconditionally in scsi_end_request() if both the target queue and the host queue are ready. Recently Long Li reported that the cost of a queue run can be very heavy at high queue depth. Improve this situation by only running the request queue when this LUN is busy.

Link: https://lore.kernel.org/r/20200910075056.36509-1-ming.lei@redhat.com
Reported-by: Long Li <longli@microsoft.com>
Tested-by: Long Li <longli@microsoft.com>
Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ewan D. Milne <emilne@redhat.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Diffstat (limited to 'include/scsi/scsi_device.h')
-rw-r--r--  include/scsi/scsi_device.h  1
1 file changed, 1 insertion, 0 deletions
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index bc5909033d13..1a5c9a3df6d6 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -109,6 +109,7 @@ struct scsi_device {
 	atomic_t device_busy;		/* commands actually active on LLDD */
 	atomic_t device_blocked;	/* Device returned QUEUE_FULL. */
+	atomic_t restarts;
 	spinlock_t list_lock;
 	struct list_head starved_entry;
 	unsigned short queue_depth;	/* How deep of a queue we want */
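
This header change only adds the counter; the matching scsi_lib.c logic is not part of this diff. As a rough sketch of the idea described in the commit message (not the exact patch code; the function names below are hypothetical and the memory-ordering details of the real patch are omitted), the submission path bumps sdev->restarts when it fails to get device budget, and the completion path re-runs the hardware queues only when such contention was actually recorded:

/*
 * Sketch only: approximates the scheme described in the commit message.
 * The real scsi_lib.c changes are not shown here, so function names,
 * call sites, and barrier details may differ.
 */
#include <linux/atomic.h>
#include <linux/blk-mq.h>
#include <scsi/scsi_device.h>

/* Submission side (hypothetical helper): record that no device budget
 * (device_busy slot) was available for this LUN. */
static void scsi_note_budget_contention(struct scsi_device *sdev)
{
	atomic_inc(&sdev->restarts);
}

/* Completion side (hypothetical helper): re-run the queue only if
 * contention was recorded since the last re-run. */
static void scsi_rerun_if_busy(struct scsi_device *sdev)
{
	int old = atomic_read(&sdev->restarts);

	/*
	 * Clear the counter with cmpxchg so only one completion performs
	 * the (potentially expensive) queue run per contention window;
	 * losers, and the common uncontended case, skip it entirely.
	 */
	if (old && atomic_cmpxchg(&sdev->restarts, old, 0) == old)
		blk_mq_run_hw_queues(sdev->request_queue, true);
}

Compared with unconditionally running the queue from scsi_end_request(), this keeps the completion fast path free of queue-run overhead at high queue depth while still guaranteeing forward progress whenever budget contention occurs.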