From 46e906144c3f4b0a7b6dcc7713fafad65b1859e0 Mon Sep 17 00:00:00 2001
From: André Almeida
Date: Fri, 19 Jun 2020 21:20:36 -0300
Subject: docs: block: Create blk-mq documentation
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Create documentation providing background and an explanation of the
operation of the Multi-Queue Block IO Queueing Mechanism (blk-mq). The
references for writing this documentation were the source code and "Linux
Block IO: Introducing Multi-queue SSD Access on Multi-core Systems", by
Axboe et al.

Signed-off-by: André Almeida
Reviewed-by: Jens Axboe
Acked-by: Randy Dunlap
Link: https://lore.kernel.org/r/20200620002036.113000-1-andrealmeid@collabora.com
Signed-off-by: Jonathan Corbet
---
 Documentation/block/blk-mq.rst | 153 +++++++++++++++++++++++++++++++++++++++++
 Documentation/block/index.rst  |   1 +
 2 files changed, 154 insertions(+)
 create mode 100644 Documentation/block/blk-mq.rst

(limited to 'Documentation/block')

diff --git a/Documentation/block/blk-mq.rst b/Documentation/block/blk-mq.rst
new file mode 100644
index 000000000000..88c56afcb070
--- /dev/null
+++ b/Documentation/block/blk-mq.rst
@@ -0,0 +1,153 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================================================
+Multi-Queue Block IO Queueing Mechanism (blk-mq)
+================================================
+
+The Multi-Queue Block IO Queueing Mechanism is an API to enable fast storage
+devices to achieve a huge number of input/output operations per second (IOPS)
+by queueing and submitting IO requests to block devices simultaneously,
+benefiting from the parallelism offered by modern storage devices.
+
+Introduction
+============
+
+Background
+----------
+
+Magnetic hard disks have been the de facto standard since the beginning of
+kernel development. The Block IO subsystem aimed to achieve the best possible
+performance for those devices, which pay a high penalty for random access;
+the bottleneck was the mechanical moving parts, far slower than any other
+layer in the storage stack. One example of such an optimization technique is
+ordering read/write requests according to the current position of the hard
+disk head.
+
+However, with the development of Solid State Drives and Non-Volatile
+Memories, which have neither mechanical parts nor a random-access penalty
+and are capable of highly parallel access, the bottleneck of the stack moved
+from the storage device to the operating system. In order to take advantage
+of the parallelism in those devices' design, the multi-queue mechanism was
+introduced.
+
+The former design had a single queue to store block IO requests, protected
+by a single lock. That did not scale well on SMP systems due to dirty data
+in caches and the bottleneck of having a single lock shared by multiple
+processors. This setup also suffered from congestion when different
+processes (or the same process, moving between CPUs) wanted to perform block
+IO. Instead of this, the blk-mq API spawns multiple queues with individual
+entry points local to the CPU, removing the need for a global lock. A deeper
+explanation of how this works is covered in the following section
+(`Operation`_).
+
+Operation
+---------
+
+When userspace performs IO to a block device (reading or writing a file, for
+instance), blk-mq takes action: it will store and manage IO requests to the
+block device, acting as middleware between userspace (and a file system, if
+present) and the block device driver.
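+
+As a rough illustration, a driver plugs into blk-mq by filling in a struct
+:c:type:`blk_mq_tag_set` and implementing a ``queue_rq`` callback. The sketch
+below is illustrative only: the ``my_dev`` structure and all ``my_*`` names
+are hypothetical, and error handling is kept to a minimum.
+
+.. code-block:: c
+
+   #include <linux/blk-mq.h>
+
+   /* Hypothetical per-device state; not a real in-tree driver. */
+   struct my_dev {
+           struct blk_mq_tag_set tag_set;
+           struct request_queue *queue;
+   };
+
+   /* Called by blk-mq when a request reaches a hardware dispatch queue. */
+   static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
+                                   const struct blk_mq_queue_data *bd)
+   {
+           struct request *rq = bd->rq;
+
+           blk_mq_start_request(rq);
+           /* ... hand the request to the hardware here ... */
+           blk_mq_end_request(rq, BLK_STS_OK);
+           return BLK_STS_OK;
+   }
+
+   static const struct blk_mq_ops my_mq_ops = {
+           .queue_rq = my_queue_rq,
+   };
+
+   static int my_dev_init_queue(struct my_dev *dev)
+   {
+           int ret;
+
+           dev->tag_set.ops = &my_mq_ops;
+           dev->tag_set.nr_hw_queues = 1;       /* hardware dispatch queues */
+           dev->tag_set.queue_depth = 128;      /* tags per hardware queue */
+           dev->tag_set.numa_node = NUMA_NO_NODE;
+           dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+
+           ret = blk_mq_alloc_tag_set(&dev->tag_set);
+           if (ret)
+                   return ret;
+
+           dev->queue = blk_mq_init_queue(&dev->tag_set);
+           if (IS_ERR(dev->queue)) {
+                   blk_mq_free_tag_set(&dev->tag_set);
+                   return PTR_ERR(dev->queue);
+           }
+           return 0;
+   }
+
+Once the queue is set up, every BIO submitted to the device ends up, either
+directly or through the staging queues described below, as a struct
+:c:type:`request` passed to ``my_queue_rq()``.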
+
+blk-mq has two groups of queues: software staging queues and hardware
+dispatch queues. When the request arrives at the block layer, it will try
+the shortest path possible: send it directly to the hardware queue. However,
+there are two cases in which it might not do that: if there's an IO
+scheduler attached to the layer or if we want to try to merge requests. In
+both cases, requests will be sent to the software queue.
+
+Then, after the requests are processed by software queues, they will be
+placed at the hardware queue, a second stage queue where the hardware has
+direct access to process those requests. However, if the hardware does not
+have enough resources to accept more requests, blk-mq will place requests on
+a temporary queue, to be sent in the future, when the hardware is able.
+
+Software staging queues
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The block IO subsystem adds requests to the software staging queues
+(represented by struct :c:type:`blk_mq_ctx`) in case they weren't sent
+directly to the driver. A request is one or more BIOs, which arrive at the
+block layer through the data structure struct :c:type:`bio`. The block layer
+will then build a new structure from it, the struct :c:type:`request`, that
+will be used to communicate with the device driver. Each queue has its own
+lock and the number of queues is defined on a per-CPU or per-node basis.
+
+The staging queue can be used to merge requests for adjacent sectors. For
+instance, requests for sectors 3-6, 6-7, and 7-9 can become one request for
+3-9. Even though random access to SSDs and NVMs has response times
+comparable to sequential access, grouped requests for sequential access
+decrease the number of individual requests. This technique of merging
+requests is called plugging.
+
+Along with that, the requests can be reordered by an IO scheduler to ensure
+fairness of system resources (e.g. to ensure that no application suffers
+from starvation) and/or to improve IO performance.
+
+IO Schedulers
+^^^^^^^^^^^^^
+
+There are several schedulers implemented by the block layer, each one
+following a heuristic to improve the IO performance. They are "pluggable"
+(as in plug and play), in the sense that they can be selected at run time
+using sysfs. You can read more about Linux's IO schedulers `here
+`_. The scheduling happens only between requests in the same queue, so it is
+not possible to merge requests from different queues; otherwise there would
+be cache thrashing and a need to have a lock for each queue. After the
+scheduling, the requests are eligible to be sent to the hardware. One of the
+possible schedulers to be selected is the NONE scheduler, the most
+straightforward one. It will just place requests on whatever software queue
+the process is running on, without any reordering. When the device starts
+processing requests in the hardware queue (a.k.a. running the hardware
+queue), the software queues mapped to that hardware queue will be drained in
+sequence according to their mapping.
+
+Hardware dispatch queues
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The hardware queue (represented by struct :c:type:`blk_mq_hw_ctx`) is a
+struct used by device drivers to map the device submission queues (or device
+DMA ring buffer), and is the last step of the block layer submission code
+before the low level device driver takes ownership of the request. To run
+this queue, the block layer removes requests from the associated software
+queues and tries to dispatch them to the hardware.
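+
+If the hardware cannot take more work at the moment, the driver reports that
+from its ``queue_rq`` callback and blk-mq will retry the request later, as
+described next. Revisiting the earlier sketch (``my_hw_can_queue()`` and
+``my_hw_submit()`` are hypothetical stand-ins for a real driver's
+hardware-resource check and submission path):
+
+.. code-block:: c
+
+   static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
+                                   const struct blk_mq_queue_data *bd)
+   {
+           struct request *rq = bd->rq;
+
+           /* No free hardware submission slots: ask blk-mq to retry. */
+           if (!my_hw_can_queue(hctx->driver_data))
+                   return BLK_STS_RESOURCE;
+
+           blk_mq_start_request(rq);
+           my_hw_submit(hctx->driver_data, rq);
+           return BLK_STS_OK;
+   }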
+
+If it's not possible to send the requests directly to hardware, they will be
+added to a linked list (:c:type:`hctx->dispatch`) of requests. Then, the
+next time the block layer runs a queue, it will send the requests lying in
+the :c:type:`dispatch` list first, to ensure fair dispatch with those
+requests that were ready to be sent first. The number of hardware queues
+depends on the number of hardware contexts supported by the hardware and its
+device driver, but it will not be more than the number of cores in the
+system. There is no reordering at this stage, and each software queue has a
+set of hardware queues to send requests to.
+
+.. note::
+
+   Neither the block layer nor the device protocols guarantee
+   the order of completion of requests. This must be handled by
+   higher layers, like the filesystem.
+
+Tag-based completion
+~~~~~~~~~~~~~~~~~~~~
+
+In order to indicate which request has been completed, every request is
+identified by an integer, ranging from 0 to the dispatch queue size. This
+tag is generated by the block layer and later reused by the device driver,
+removing the need to create a redundant identifier. When a request is
+completed in the drive, the tag is sent back to the block layer to notify it
+of the completion. This removes the need to do a linear search to find out
+which IO has been completed.
+
+Further reading
+---------------
+
+- `Linux Block IO: Introducing Multi-queue SSD Access on Multi-core Systems `_
+
+- `NOOP scheduler `_
+
+- `Null block device driver `_
+
+Source code documentation
+=========================
+
+.. kernel-doc:: include/linux/blk-mq.h
+
+.. kernel-doc:: block/blk-mq.c
diff --git a/Documentation/block/index.rst b/Documentation/block/index.rst
index 026addfc69bc..86dcf7159f99 100644
--- a/Documentation/block/index.rst
+++ b/Documentation/block/index.rst
@@ -10,6 +10,7 @@ Block
    bfq-iosched
    biodoc
    biovecs
+   blk-mq
    capability
    cmdline-partition
    data-integrity
--
cgit v1.2.3

From 985098a05eee6bf5caca7e997e02a5b15242cfa0 Mon Sep 17 00:00:00 2001
From: Mauro Carvalho Chehab
Date: Tue, 23 Jun 2020 09:09:10 +0200
Subject: docs: fix references for DMA*.txt files

As we moved those files to core-api, fix references to point to their
newer locations.

Signed-off-by: Mauro Carvalho Chehab
Link: https://lore.kernel.org/r/37b2fd159fbc7655dbf33b3eb1215396a25f6344.1592895969.git.mchehab+huawei@kernel.org
Signed-off-by: Jonathan Corbet
---
 Documentation/PCI/pci.rst                            |  6 +++---
 Documentation/block/biodoc.rst                       |  2 +-
 Documentation/bus-virt-phys-mapping.txt              |  2 +-
 Documentation/core-api/dma-api.rst                   |  6 +++---
 Documentation/core-api/dma-isa-lpc.rst               |  2 +-
 Documentation/driver-api/usb/dma.rst                 |  6 +++---
 Documentation/translations/ko_KR/memory-barriers.txt |  6 +++---
 arch/ia64/hp/common/sba_iommu.c                      | 12 ++++++------
 arch/parisc/kernel/pci-dma.c                         |  2 +-
 arch/x86/include/asm/dma-mapping.h                   |  4 ++--
 arch/x86/kernel/amd_gart_64.c                        |  2 +-
 drivers/parisc/sba_iommu.c                           | 14 +++++++-------
 include/linux/dma-mapping.h                          |  2 +-
 include/media/videobuf-dma-sg.h                      |  2 +-
 kernel/dma/debug.c                                   |  2 +-
 15 files changed, 35 insertions(+), 35 deletions(-)

(limited to 'Documentation/block')

diff --git a/Documentation/PCI/pci.rst b/Documentation/PCI/pci.rst
index 8c016d8c9862..d10d3fe604c5 100644
--- a/Documentation/PCI/pci.rst
+++ b/Documentation/PCI/pci.rst
@@ -265,7 +265,7 @@ Set the DMA mask size
 ---------------------
 .. note:: If anything below doesn't make sense, please refer to
-   Documentation/DMA-API.txt. This section is just a reminder that
+   :doc:`/core-api/dma-api`. This section is just a reminder that
    drivers need to indicate DMA capabilities of the device and is not
    an authoritative source for DMA interfaces.
@@ -291,7 +291,7 @@ Many 64-bit "PCI" devices (before PCI-X) and some PCI-X devices are
 Setup shared control data
 -------------------------
 Once the DMA masks are set, the driver can allocate "consistent" (a.k.a. shared)
-memory.  See Documentation/DMA-API.txt for a full description of
+memory.  See :doc:`/core-api/dma-api` for a full description of
 the DMA APIs.  This section is just a reminder that it needs to be done
 before enabling DMA on the device.
@@ -421,7 +421,7 @@ owners if there is one.
 Then clean up "consistent" buffers which contain the control data.
-See Documentation/DMA-API.txt for details on unmapping interfaces.
+See :doc:`/core-api/dma-api` for details on unmapping interfaces.
 Unregister from other subsystems
diff --git a/Documentation/block/biodoc.rst b/Documentation/block/biodoc.rst
index b964796ec9c7..ba7f45d0271c 100644
--- a/Documentation/block/biodoc.rst
+++ b/Documentation/block/biodoc.rst
@@ -196,7 +196,7 @@ a virtual address mapping (unlike the earlier scheme of virtual address
 do not have a corresponding kernel virtual address space mapping) and
 low-memory pages.
-Note: Please refer to Documentation/DMA-API-HOWTO.txt for a discussion
+Note: Please refer to :doc:`/core-api/dma-api-howto` for a discussion
 on PCI high mem DMA aspects and mapping of scatter gather lists, and support
 for 64 bit PCI.
diff --git a/Documentation/bus-virt-phys-mapping.txt b/Documentation/bus-virt-phys-mapping.txt
index 4bb07c2f3e7d..c7bc99cd2e21 100644
--- a/Documentation/bus-virt-phys-mapping.txt
+++ b/Documentation/bus-virt-phys-mapping.txt
@@ -8,7 +8,7 @@ How to access I/O mapped memory from within device drivers
 The virt_to_bus() and bus_to_virt() functions have been superseded by the
 functionality provided by the PCI DMA interface
-(see Documentation/DMA-API-HOWTO.txt).  They continue
+(see :doc:`/core-api/dma-api-howto`).  They continue
 to be documented below for historical purposes, but new code
 must not use them. --davidm 00/12/12
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 2d8d2fed7317..63b4a2f20867 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -5,7 +5,7 @@ Dynamic DMA mapping using the generic device
 :Author: James E.J. Bottomley
 This document describes the DMA API.  For a more gentle introduction
-of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.
+of the API (and actual examples), see :doc:`/core-api/dma-api-howto`.
 This API is split into two pieces.  Part I describes the basic API.
 Part II describes extensions for supporting non-consistent memory
@@ -471,7 +471,7 @@ without the _attrs suffixes, except that they pass an optional
 dma_attrs.
 The interpretation of DMA attributes is architecture-specific, and
-each attribute should be documented in Documentation/DMA-attributes.txt.
+each attribute should be documented in :doc:`/core-api/dma-attributes`.
 If dma_attrs are 0, the semantics of each of these functions
 is identical to those of the corresponding function
@@ -484,7 +484,7 @@ for DMA::
 	#include <linux/dma-mapping.h>
 	/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
-	* documented in Documentation/DMA-attributes.txt */
+	* documented in Documentation/core-api/dma-attributes.rst */
 	...
 	unsigned long attr;
diff --git a/Documentation/core-api/dma-isa-lpc.rst b/Documentation/core-api/dma-isa-lpc.rst
index b1ec7b16c21f..e59a3d35a93d 100644
--- a/Documentation/core-api/dma-isa-lpc.rst
+++ b/Documentation/core-api/dma-isa-lpc.rst
@@ -17,7 +17,7 @@ To do ISA style DMA you need to include two headers::
 	#include <linux/dma-mapping.h>
 	#include <asm/dma.h>
 The first is the generic DMA API used to convert virtual addresses to
-bus addresses (see Documentation/DMA-API.txt for details).
+bus addresses (see :doc:`/core-api/dma-api` for details).
 The second contains the routines specific to ISA DMA transfers. Since this
 is not present on all platforms make sure you construct your
diff --git a/Documentation/driver-api/usb/dma.rst b/Documentation/driver-api/usb/dma.rst
index 59d5aee89e37..2b3dbd3265b4 100644
--- a/Documentation/driver-api/usb/dma.rst
+++ b/Documentation/driver-api/usb/dma.rst
@@ -10,7 +10,7 @@ API overview
 The big picture is that USB drivers can continue to ignore most DMA issues,
 though they still must provide DMA-ready buffers (see
-``Documentation/DMA-API-HOWTO.txt``).  That's how they've worked through
+:doc:`/core-api/dma-api-howto`).  That's how they've worked through
 the 2.4 (and earlier) kernels, or they can now be DMA-aware.
 DMA-aware usb drivers:
@@ -60,7 +60,7 @@ and effects like cache-trashing can impose subtle penalties.
   force a consistent memory access ordering by using memory barriers.  It's
   not using a streaming DMA mapping, so it's good for small transfers on
   systems where the I/O would otherwise thrash an IOMMU mapping.  (See
-  ``Documentation/DMA-API-HOWTO.txt`` for definitions of "coherent" and
+  :doc:`/core-api/dma-api-howto` for definitions of "coherent" and
   "streaming" DMA mappings.)
 Asking for 1/Nth of a page (as well as asking for N pages) is reasonably
@@ -91,7 +91,7 @@ Working with existing buffers
 Existing buffers aren't usable for DMA without first being mapped into the
 DMA address space of the device.  However, most buffers passed to your
 driver can safely be used with such DMA mapping.  (See the first section
-of Documentation/DMA-API-HOWTO.txt, titled "What memory is DMA-able?")
+of :doc:`/core-api/dma-api-howto`, titled "What memory is DMA-able?")
 - When you're using scatterlists, you can map everything at once.  On some
   systems, this kicks in an IOMMU and turns the scatterlists into single
diff --git a/Documentation/translations/ko_KR/memory-barriers.txt b/Documentation/translations/ko_KR/memory-barriers.txt
index 34d041d68f78..604cee350e53 100644
--- a/Documentation/translations/ko_KR/memory-barriers.txt
+++ b/Documentation/translations/ko_KR/memory-barriers.txt
@@ -570,8 +570,8 @@ ACQUIRE 는 해당 오퍼레이션의 로드 부분에만 적용되고 RELEASE
 	[*] 버스 마스터링 DMA 와 일관성에 대해서는 다음을 참고하시기 바랍니다:
 	    Documentation/driver-api/pci/pci.rst
-	    Documentation/DMA-API-HOWTO.txt
-	    Documentation/DMA-API.txt
+	    Documentation/core-api/dma-api-howto.rst
+	    Documentation/core-api/dma-api.rst
 데이터 의존성 배리어 (역사적)
@@ -1907,7 +1907,7 @@ Mandatory 배리어들은 SMP 시스템에서도 UP 시스템에서도 SMP 효
 	writel_relaxed() 와 같은 완화된 I/O 접근자들에 대한 자세한 내용을 위해서는
 	"커널 I/O 배리어의 효과" 섹션을, consistent memory 에 대한 자세한 내용을
-	위해선 Documentation/DMA-API.txt 문서를 참고하세요.
+	위해선 Documentation/core-api/dma-api.rst 문서를 참고하세요.
 =========================
diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index a806227c1fad..656a4888c300 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -907,7 +907,7 @@ sba_mark_invalid(struct ioc *ioc, dma_addr_t iova, size_t byte_cnt)
  * @dir: dma direction
  * @attrs: optional dma attributes
  *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
  */
 static dma_addr_t sba_map_page(struct device *dev, struct page *page,
 			       unsigned long poff, size_t size,
@@ -1028,7 +1028,7 @@ sba_mark_clean(struct ioc *ioc, dma_addr_t iova, size_t size)
  * @dir: R/W or both.
  * @attrs: optional dma attributes
  *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
  */
 static void sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
 			   enum dma_data_direction dir, unsigned long attrs)
@@ -1105,7 +1105,7 @@ static void sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
  * @size: number of bytes mapped in driver buffer.
  * @dma_handle: IOVA of new buffer.
  *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
  */
 static void *
 sba_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
@@ -1162,7 +1162,7 @@ sba_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
  * @vaddr: virtual address IOVA of "consistent" buffer.
  * @dma_handler: IO virtual address of "consistent" buffer.
  *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
  */
 static void sba_free_coherent(struct device *dev, size_t size, void *vaddr,
 			      dma_addr_t dma_handle, unsigned long attrs)
@@ -1425,7 +1425,7 @@ static void sba_unmap_sg_attrs(struct device *dev, struct scatterlist *sglist,
  * @dir: R/W or both.
  * @attrs: optional dma attributes
  *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
  */
 static int sba_map_sg_attrs(struct device *dev, struct scatterlist *sglist,
 			    int nents, enum dma_data_direction dir,
@@ -1524,7 +1524,7 @@ static int sba_map_sg_attrs(struct device *dev, struct scatterlist *sglist,
  * @dir: R/W or both.
  * @attrs: optional dma attributes
  *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
  */
 static void sba_unmap_sg_attrs(struct device *dev, struct scatterlist *sglist,
 			       int nents, enum dma_data_direction dir,
diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
index 70cd24bdcfec..4f1596bb1936 100644
--- a/arch/parisc/kernel/pci-dma.c
+++ b/arch/parisc/kernel/pci-dma.c
@@ -3,7 +3,7 @@
 ** PARISC 1.1 Dynamic DMA mapping support.
 ** This implementation is for PA-RISC platforms that do not support
 ** I/O TLBs (aka DMA address translation hardware).
-** See Documentation/DMA-API-HOWTO.txt for interface definitions.
+** See Documentation/core-api/dma-api-howto.rst for interface definitions.
 **
 ** (c) Copyright 1999,2000 Hewlett-Packard Company
 ** (c) Copyright 2000 Grant Grundler
diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 6b15a24930e0..fed67eafcacc 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -3,8 +3,8 @@
 #define _ASM_X86_DMA_MAPPING_H
 /*
- * IOMMU interface. See Documentation/DMA-API-HOWTO.txt and
- * Documentation/DMA-API.txt for documentation.
+ * IOMMU interface. See Documentation/core-api/dma-api-howto.rst and
+ * Documentation/core-api/dma-api.rst for documentation.
  */
 #include
diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index 17cb5b933dcf..e89031e9c847 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -6,7 +6,7 @@
 * This allows to use PCI devices that only support 32bit addresses on systems
 * with more than 4GB.
 *
- * See Documentation/DMA-API-HOWTO.txt for the interface specification.
+ * See Documentation/core-api/dma-api-howto.rst for the interface specification.
 *
 * Copyright 2002 Andi Kleen, SuSE Labs.
 */
diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
index 7e112829d250..5368452eb5a6 100644
--- a/drivers/parisc/sba_iommu.c
+++ b/drivers/parisc/sba_iommu.c
@@ -666,7 +666,7 @@ sba_mark_invalid(struct ioc *ioc, dma_addr_t iova, size_t byte_cnt)
 * @dev: instance of PCI owned by the driver that's asking
 * @mask: number of address bits this PCI device can handle
 *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
 */
 static int sba_dma_supported( struct device *dev, u64 mask)
 {
@@ -698,7 +698,7 @@ static int sba_dma_supported( struct device *dev, u64 mask)
 * @size: number of bytes to map in driver buffer.
 * @direction: R/W or both.
 *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
 */
 static dma_addr_t
 sba_map_single(struct device *dev, void *addr, size_t size,
@@ -788,7 +788,7 @@ sba_map_page(struct device *dev, struct page *page, unsigned long offset,
 * @size: number of bytes mapped in driver buffer.
 * @direction: R/W or both.
 *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
 */
 static void
 sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
@@ -867,7 +867,7 @@ sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
 * @size: number of bytes mapped in driver buffer.
 * @dma_handle: IOVA of new buffer.
 *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
 */
 static void *sba_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle,
 		       gfp_t gfp, unsigned long attrs)
@@ -898,7 +898,7 @@ static void *sba_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle
 * @vaddr: virtual address IOVA of "consistent" buffer.
 * @dma_handler: IO virtual address of "consistent" buffer.
 *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
 */
 static void
 sba_free(struct device *hwdev, size_t size, void *vaddr,
@@ -933,7 +933,7 @@ int dump_run_sg = 0;
 * @nents: number of entries in list
 * @direction: R/W or both.
 *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
 */
 static int
 sba_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
@@ -1017,7 +1017,7 @@ sba_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 * @nents: number of entries in list
 * @direction: R/W or both.
 *
- * See Documentation/DMA-API-HOWTO.txt
+ * See Documentation/core-api/dma-api-howto.rst
 */
 static void
 sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 78f677cf45ab..ef2b153ddbd9 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -14,7 +14,7 @@
 /**
 * List of possible attributes associated with a DMA mapping. The semantics
- * of each attribute should be defined in Documentation/DMA-attributes.txt.
+ * of each attribute should be defined in Documentation/core-api/dma-attributes.rst.
 */
 /*
diff --git a/include/media/videobuf-dma-sg.h b/include/media/videobuf-dma-sg.h
index b89d5e31f172..34450f7ad510 100644
--- a/include/media/videobuf-dma-sg.h
+++ b/include/media/videobuf-dma-sg.h
@@ -31,7 +31,7 @@
 * does memory allocation too using vmalloc_32().
 *
 * videobuf_dma_*()
- *	see Documentation/DMA-API-HOWTO.txt, these functions to
+ *	see Documentation/core-api/dma-api-howto.rst, these functions to
 *	basically the same.  The map function does also build a
 *	scatterlist for the buffer (and unmap frees it ...)
 *
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 36c962a86bf2..f97f088ace7e 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -1071,7 +1071,7 @@ static void check_unmap(struct dma_debug_entry *ref)
 	/*
 	 * Drivers should use dma_mapping_error() to check the returned
 	 * addresses of dma_map_single() and dma_map_page().
-	 * If not, print this warning message. See Documentation/DMA-API.txt.
+	 * If not, print this warning message. See Documentation/core-api/dma-api.rst.
 	 */
 	if (entry->map_err_type == MAP_ERR_NOT_CHECKED) {
 		err_printk(ref->dev, entry,
--
cgit v1.2.3

From 6fa9a5a23026e6839c2f490325c8fba16d33f23c Mon Sep 17 00:00:00 2001
From: Randy Dunlap
Date: Tue, 7 Jul 2020 11:03:56 -0700
Subject: Documentation: block: eliminate duplicated word

Change the doubled word "the" to "to the".

Signed-off-by: Randy Dunlap
Cc: Jonathan Corbet
Cc: linux-doc@vger.kernel.org
Cc: Jens Axboe
Cc: linux-block@vger.kernel.org
Link: https://lore.kernel.org/r/20200707180414.10467-3-rdunlap@infradead.org
Signed-off-by: Jonathan Corbet
---
 Documentation/block/pr.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'Documentation/block')

diff --git a/Documentation/block/pr.rst b/Documentation/block/pr.rst
index 30ea1c2e39eb..c893d6da8e04 100644
--- a/Documentation/block/pr.rst
+++ b/Documentation/block/pr.rst
@@ -9,7 +9,7 @@ access to block devices to specific initiators in a shared
 storage setup.
 This document gives a general overview of the support ioctl commands.
-For a more detailed reference please refer the the SCSI Primary
+For a more detailed reference please refer to the SCSI Primary
 Commands standard, specifically the section on Reservations and the
 "PERSISTENT RESERVE IN" and "PERSISTENT RESERVE OUT" commands.
--
cgit v1.2.3