path: root/block/bsg-lib.c
author    David Hildenbrand <david@redhat.com>    2024-02-26 17:13:23 +0300
committer Andrew Morton <akpm@linux-foundation.org>    2024-03-05 04:01:21 +0300
commit    b4d02baa9f3ed9fc4311e0bd69966861a2f5eb83 (patch)
tree      e27959dd8d293736e022b600f3066b79b90cfa73 /block/bsg-lib.c
parent    fc4d182316bd5309b4066fd9ef21529ea397a7d4 (diff)
download  linux-b4d02baa9f3ed9fc4311e0bd69966861a2f5eb83.tar.xz
mm/memfd: refactor memfd_tag_pins() and memfd_wait_for_pins()
Patch series "mm: remove total_mapcount()", v2.

Let's remove the remaining user from mm/memfd.c so we can get rid of total_mapcount().

This patch (of 2):

Both functions are the remaining users of total_mapcount(). Let's get rid of the calls by converting the code to folios. As it turns out, the code is unnecessarily complicated, especially:

1) We can query the number of pagecache references for a folio simply via folio_nr_pages(). This will handle other folio sizes in the future correctly.

2) The xas_set(xas, page->index + cache_count) call to increment the iterator for large folios is not required. Remove it.

Further, simplify the XA_CHECK_SCHED check, counting each entry exactly once. Memfd pages can be swapped out when using shmem; leave xa_is_value() checks in place.

Link: https://lkml.kernel.org/r/20240226141324.278526-1-david@redhat.com
Link: https://lkml.kernel.org/r/20240226141324.278526-2-david@redhat.com
Co-developed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
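As a rough sketch of what the refactored tagging path looks like after this conversion (reconstructed from the description above rather than quoted from the tree; the helper name memfd_folio_has_extra_refs() and the MEMFD_TAG_PINNED / XA_CHECK_SCHED definitions are assumed to be file-local to mm/memfd.c):

#include <linux/pagemap.h>
#include <linux/swap.h>
#include <linux/xarray.h>

/*
 * A folio is "pinned" (e.g. by GUP) if it holds references beyond the
 * page cache and the page tables. With folios, the number of pagecache
 * references is simply folio_nr_pages(); no special-casing of PMD-sized
 * THPs via HPAGE_PMD_NR is needed, and total_mapcount() goes away.
 */
static bool memfd_folio_has_extra_refs(struct folio *folio)
{
	return folio_ref_count(folio) - folio_mapcount(folio) !=
	       folio_nr_pages(folio);
}

static void memfd_tag_pins(struct xa_state *xas)
{
	struct folio *folio;
	int latency = 0;

	lru_add_drain();

	xas_lock_irq(xas);
	xas_for_each(xas, folio, ULONG_MAX) {
		/* Swap entries (shmem) are values, not folios; skip them. */
		if (!xa_is_value(folio) &&
		    memfd_folio_has_extra_refs(folio))
			xas_set_mark(xas, MEMFD_TAG_PINNED);

		/*
		 * Count each XArray entry exactly once. There is no need
		 * to advance the iterator past large folios by hand via
		 * xas_set(); xas_for_each() already steps over the whole
		 * multi-index entry.
		 */
		if (++latency < XA_CHECK_SCHED)
			continue;
		latency = 0;

		xas_pause(xas);
		xas_unlock_irq(xas);
		cond_resched();
		xas_lock_irq(xas);
	}
	xas_unlock_irq(xas);
}

The same pattern (one loop iteration per XArray entry, folio_nr_pages() as the expected pagecache reference count) applies to memfd_wait_for_pins(); see the commit itself for the authoritative version of both functions.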
Diffstat (limited to 'block/bsg-lib.c')
0 files changed, 0 insertions, 0 deletions