path: root/fs/gfs2/file.c
Age  Commit message  (Author; files changed, lines -/+)
2022-12-06  gfs2: Make gfs2_glock_hold return its glock argument  (Andreas Gruenbacher; 1 file, -2/+1)
This allows code like 'gl = gfs2_glock_hold(...)'. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
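A minimal before/after sketch of the calling pattern this enables; gfs2_glock_hold() and the gh_gl field are real names, while using a holder as the destination here is only an illustration:

    /* Before: taking the reference and recording the pointer are two steps. */
    gfs2_glock_hold(gl);
    gh->gh_gl = gl;

    /* After: gfs2_glock_hold() returns its argument, so callers can write: */
    gh->gh_gl = gfs2_glock_hold(gl);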
2022-10-09  gfs2: Merge branch 'for-next.nopid' into for-next  (Andreas Gruenbacher; 1 file, -4/+25)
Resolves a conflict in gfs2_inode_lookup() between the following commits:
  gfs2: Use TRY lock in gfs2_inode_lookup for UNLINKED inodes
  gfs2: Mark the remaining process-independent glock holders as GL_NOPID
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-08-09  Merge tag 'pull-work.iov_iter-rebased' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 1 file, -1/+1)
Pull more iov_iter updates from Al Viro:
 - more new_sync_{read,write}() speedups
 - ITER_UBUF introduction
 - ITER_PIPE cleanups
 - unification of iov_iter_get_pages/iov_iter_get_pages_alloc and switching them to advancing semantics
 - making ITER_PIPE take high-order pages without splitting them
 - handling copy_page_from_iter() for high-order pages properly
* tag 'pull-work.iov_iter-rebased' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (32 commits)
  fix copy_page_from_iter() for compound destinations
  hugetlbfs: copy_page_to_iter() can deal with compound pages
  copy_page_to_iter(): don't split high-order page in case of ITER_PIPE
  expand those iov_iter_advance()...
  pipe_get_pages(): switch to append_pipe()
  get rid of non-advancing variants
  ceph: switch the last caller of iov_iter_get_pages_alloc()
  9p: convert to advancing variant of iov_iter_get_pages_alloc()
  af_alg_make_sg(): switch to advancing variant of iov_iter_get_pages()
  iter_to_pipe(): switch to advancing variant of iov_iter_get_pages()
  block: convert to advancing variants of iov_iter_get_pages{,_alloc}()
  iov_iter: advancing variants of iov_iter_get_pages{,_alloc}()
  iov_iter: saner helper for page array allocation
  fold __pipe_get_pages() into pipe_get_pages()
  ITER_XARRAY: don't open-code DIV_ROUND_UP()
  unify the rest of iov_iter_get_pages()/iov_iter_get_pages_alloc() guts
  unify xarray_get_pages() and xarray_get_pages_alloc()
  unify pipe_get_pages() and pipe_get_pages_alloc()
  iov_iter_get_pages(): sanity-check arguments
  iov_iter_get_pages_alloc(): lift freeing pages array on failure exits into wrapper
  ...
2022-08-09  new iov_iter flavour - ITER_UBUF  (Al Viro; 1 file, -1/+1)
Equivalent of single-segment iovec. Initialized by iov_iter_ubuf(), checked for by iter_is_ubuf(), otherwise behaves like ITER_IOVEC ones. We are going to expose the things like ->write_iter() et al. to those in subsequent commits.
New predicate (user_backed_iter()) that is true for ITER_IOVEC and ITER_UBUF; places like direct-IO handling should use that for checking that pages we modify after getting them from iov_iter_get_pages() would need to be dirtied.
DO NOT assume that replacing iter_is_iovec() with user_backed_iter() will solve all problems - there's code that uses iter_is_iovec() to decide how to poke around in iov_iter guts, and for that the predicate replacement obviously won't suffice.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2022-06-29  gfs2: Mark flock glock holders as GL_NOPID  (Andreas Gruenbacher; 1 file, -2/+5)
Add the GL_NOPID flag for flock glock holders. Clean up the flag setting code in do_flock. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-06-29  gfs2: Add flocks to glockfd debugfs file  (Andreas Gruenbacher; 1 file, -2/+20)
Include flock glocks in the "glockfd" debugfs file. Those are similar to the iopen glocks; while an open file is holding an flock, it is holding the file's flock glock. We cannot take f_fl_mutex in gfs2_glockfd_seq_show_flock() or else dumping the "glockfd" file would block on flock operations. Instead, use the file->f_lock spin lock to protect the f_fl_gh.gh_gl glock pointer. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
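A rough sketch of the locking rule described above; file->f_lock, f_fl_gh.gh_gl, gfs2_glock_hold() and gfs2_glock_put() are real names, while dump_one_glock() is a hypothetical stand-in for the actual seq_file output code:

    /* Read the flock glock pointer under file->f_lock rather than
     * f_fl_mutex, so dumping "glockfd" never blocks behind flock ops. */
    spin_lock(&file->f_lock);
    gl = fp->f_fl_gh.gh_gl;
    if (gl)
            gfs2_glock_hold(gl);            /* keep it alive while dumping */
    spin_unlock(&file->f_lock);

    if (gl) {
            dump_one_glock(seq, gl);        /* hypothetical helper */
            gfs2_glock_put(gl);
    }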
2022-06-03  gfs2: Remove redundant NULL check before kfree  (Minghao Chi; 1 file, -2/+1)
kfree on NULL pointer is a no-op. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
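Schematically, the cleanup looks like this (the pointer name is made up):

    /* Before: an explicit NULL check guards the kfree. */
    if (buf)
            kfree(buf);

    /* After: kfree(NULL) is a no-op, so the check can go. */
    kfree(buf);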
2022-05-25  Merge tag 'gfs2-v5.18-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2  (Linus Torvalds; 1 file, -0/+4)
Pull gfs2 updates from Andreas Gruenbacher:
 - Clean up the allocation of glocks that have an address space attached
 - Quota locking fix and quota iomap conversion
 - Fix the FITRIM error reporting
 - Some list iterator cleanups
* tag 'gfs2-v5.18-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
  gfs2: Convert function bh_get to use iomap
  gfs2: use i_lock spin_lock for inode qadata
  gfs2: Return more useful errors from gfs2_rgrp_send_discards()
  gfs2: Use container_of() for gfs2_glock(aspace)
  gfs2: Explain some direct I/O oddities
  gfs2: replace 'found' with dedicated list iterator variable
2022-05-24  gfs2: Explain some direct I/O oddities  (Andreas Gruenbacher; 1 file, -0/+4)
Add some comments explaining the oddities of partial direct I/O reads and writes. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-05-16  iomap: add per-iomap_iter private data  (Christoph Hellwig; 1 file, -2/+2)
Allow the file system to keep state for all iterations. For now only wire it up for direct I/O as there is an immediate need for it there. Reviewed-by: Darrick J. Wong <djwong@kernel.org> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-05-13  gfs2: Stop using glock holder auto-demotion for now  (Andreas Gruenbacher; 1 file, -32/+14)
We're having unresolved issues with the glock holder auto-demotion mechanism introduced in commit dc732906c245. This mechanism was assumed to be essential for avoiding frequent short reads and writes until commit 296abc0d91d8 ("gfs2: No short reads or writes upon glock contention"). Since then, when the inode glock is lost, it is simply re-acquired and the operation is resumed. This means that apart from the performance penalty, we might as well drop the inode glock before faulting in pages, and re-acquire it afterwards. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-05-13  gfs2: buffered write prefaulting  (Andreas Gruenbacher; 1 file, -12/+16)
In gfs2_file_buffered_write, to increase the likelihood that all the user memory we're trying to write will be resident in memory, carry out the write in chunks and fault in each chunk of user memory before trying to write it. Otherwise, some workloads will trigger frequent short "internal" writes, causing filesystem blocks to be allocated and then partially deallocated again when writing into holes, which is wasteful and breaks reservations. Neither the chunked writes nor any of the short "internal" writes are user visible. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
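A simplified sketch of the chunked loop described above; fault_in_iov_iter_readable() and the iov_iter helpers are real, while write_one_chunk() and window_size stand in for the actual gfs2 code:

    /* Prefault one chunk of the user buffer, then write only that chunk. */
    while (iov_iter_count(from)) {
            size_t chunk = min(iov_iter_count(from), window_size);

            /* fault_in_iov_iter_readable() returns the number of bytes
             * that could NOT be faulted in; equal to chunk means none. */
            if (fault_in_iov_iter_readable(from, chunk) == chunk) {
                    ret = -EFAULT;
                    break;
            }

            ret = write_one_chunk(iocb, from, chunk);  /* hypothetical */
            if (ret <= 0)
                    break;
            written += ret;
    }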
2022-05-13  gfs2: Align read and write chunks to the page cache  (Andreas Gruenbacher; 1 file, -8/+7)
Align the chunks that reads and writes are carried out in to the page cache rather than the user buffers. This will be more efficient in general, especially for allocating writes. Optimizing the case that the user buffer is gfs2 backed isn't very useful; we only need to make sure we won't deadlock. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-05-13  gfs2: Pull return value test out of should_fault_in_pages  (Andreas Gruenbacher; 1 file, -12/+22)
Pull the return value test of the previous read or write operation out of should_fault_in_pages(). In a following patch, we'll fault in pages before the I/O and there will be no return value to check. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-05-13  gfs2: Clean up use of fault_in_iov_iter_{read,write}able  (Andreas Gruenbacher; 1 file, -17/+9)
No need to store the return value of the fault_in functions in separate variables. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-05-13  gfs2: Variable rename  (Andreas Gruenbacher; 1 file, -17/+17)
Instead of counting the number of bytes read from the filesystem, functions gfs2_file_direct_read and gfs2_file_read_iter count the number of bytes written into the user buffer. Conversely, functions gfs2_file_direct_write and gfs2_file_buffered_write count the number of bytes read from the user buffer. This is nothing but confusing, so change the read functions to count how many bytes they have read, and the write functions to count how many bytes they have written. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-04-28  gfs2: No short reads or writes upon glock contention  (Andreas Gruenbacher; 1 file, -4/+0)
Commit 00bfe02f4796 ("gfs2: Fix mmap + page fault deadlocks for buffered I/O") changed gfs2_file_read_iter() and gfs2_file_buffered_write() to allow dropping the inode glock while faulting in user buffers. When the lock was dropped, a short result was returned to indicate that the operation was interrupted. As pointed out by Linus (see the link below), this behavior is broken and the operations should always re-acquire the inode glock and resume the operation instead. Link: https://lore.kernel.org/lkml/CAHk-=whaz-g_nOOoo8RRiWNjnv2R+h6_xk2F1J4TuSRxk1MtLw@mail.gmail.com/ Fixes: 00bfe02f4796 ("gfs2: Fix mmap + page fault deadlocks for buffered I/O") Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-04-26  gfs2: Don't re-check for write past EOF unnecessarily  (Andreas Gruenbacher; 1 file, -1/+1)
Only re-check for direct I/O writes past the end of the file after re-acquiring the inode glock. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-03-25  gfs2: Make sure not to return short direct writes  (Andreas Gruenbacher; 1 file, -1/+1)
When direct writes fail with -ENOTBLK because we're writing into a hole (gfs2_iomap_begin()) or because of a page invalidation failure (iomap_dio_rw()), we're falling back to buffered writes. In that case, when we lose the inode glock in gfs2_file_buffered_write(), we want to re-acquire it instead of returning a short write. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-03-25  gfs2: Remove dead code in gfs2_file_read_iter  (Andreas Gruenbacher; 1 file, -6/+3)
Function iomap_dio_rw() only returns -ENOTBLK for write requests and gfs2_file_direct_read() no longer returns -ENOTBLK since commit 1d45bb7f9d2a5 ("gfs2: Use iomap for stuffed direct I/O reads"), so there is no need to check for -ENOTBLK in gfs2_file_read_iter() anymore. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-03-25  gfs2: Fix gfs2_file_buffered_write endless loop workaround  (Andreas Gruenbacher; 1 file, -0/+1)
Since commit 554c577cee95b, gfs2_file_buffered_write() can accidentally return a truncated iov_iter, which might confuse callers. Fix that. Fixes: 554c577cee95b ("gfs2: Prevent endless loops in gfs2_file_buffered_write") Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-03-23  gfs2: Minor retry logic cleanup  (Andreas Gruenbacher; 1 file, -18/+16)
Clean up the retry logic in the read and write functions somewhat. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-03-23  gfs2: Disable page faults during lockless buffered reads  (Andreas Gruenbacher; 1 file, -1/+3)
During lockless buffered reads, filemap_read() holds page cache page references while trying to copy data to the user-space buffer. The calling process isn't holding the inode glock, but the page references it holds prevent those pages from being removed from the page cache, and that prevents the underlying inode glock from being moved to another node. Thus, we can end up in the same kinds of distributed deadlock situations as with normal (non-lockless) buffered reads. Fix that by disabling page faults during lockless reads as well. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-03-23  gfs2: Fix should_fault_in_pages() logic  (Andreas Gruenbacher; 1 file, -8/+9)
Fix the fault-in window size logic:
 * Use a maximum window size of 1 MiB instead of BIO_MAX_VECS * PAGE_SIZE. The previous window size was always one page because the pages variable was accidentally being defined and then redefined in should_fault_in_pages().
 * The nr_dirtied heuristic for guessing when there might be memory pressure often results in very small window sizes. Don't let nr_dirtied drop below 8 pages (as btrfs does).
 * Compute the window size in units of bytes, not pages.
 * Account for page overlap (unaligned iterators).
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
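Very roughly, the corrected window computation looks like this; nr_dirtied, nr_dirtied_pause, SZ_1M, PAGE_SHIFT and offset_in_page() are real kernel names, but the exact arithmetic is only illustrative:

    /* Window size in bytes, capped at 1 MiB, never below 8 pages. */
    size_t pages = max(current->nr_dirtied_pause - current->nr_dirtied, 8);
    size_t window = min_t(size_t, SZ_1M, pages << PAGE_SHIFT);

    /* Unaligned iterators overlap a partial page at each end, so reduce
     * the usable window by the offset into the first page. */
    window -= offset_in_page(iocb->ki_pos);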
2022-02-15  gfs2: Initialize gh_error in gfs2_glock_nq  (Andreas Gruenbacher; 1 file, -1/+0)
The gh_error field of a glock holder is initialized to zero in gfs2_holder_init(). When a locking operation fails, gh_error is set to an error code; when it succeeds, the gh_error value is left unchanged. The field isn't initialized in gfs2_holder_reinit(), which is a problem. Instead of fixing that directly, initialize gh_error in gfs2_glock_nq(). That also obsoletes the assignment in do_flock(). Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-02-15  gfs2: gfs2_setattr_size error path fix  (Andreas Gruenbacher; 1 file, -1/+1)
When gfs2_setattr_size() fails, it calls gfs2_rs_delete(ip, NULL) to get rid of any reservations the inode may have. Instead, it should pass in the inode's write count as the second parameter to allow gfs2_rs_delete() to figure out if the inode has any writers left. In a next step, there are two instances of gfs2_rs_delete(ip, NULL) left where we know that there can be no other users of the inode. Replace those with gfs2_rs_deltree(&ip->i_res) to avoid the unnecessary write count check. With that, gfs2_rs_delete() is only called with the inode's actual write count, so get rid of the second parameter. Fixes: a097dc7e24cb ("GFS2: Make rgrp reservations part of the gfs2_inode structure") Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-02-11  gfs2: Fix gfs2_release for non-writers regression  (Bob Peterson; 1 file, -3/+4)
When a file is opened for writing, the vfs code (do_dentry_open) calls get_write_access for the inode, thus incrementing the inode's write count. That writer normally then creates a multi-block reservation for the inode (i_res) that can be re-used by other writers, which speeds up writes for applications that stupidly loop on open/write/close. When the writes are all done, the multi-block reservation should be deleted when the file is closed by the last "writer." Commit 0ec9b9ea4f83 broke that concept when it moved the call to gfs2_rs_delete before the check for FMODE_WRITE. Non-writers have no business removing the multi-block reservations of writers. In fact, if someone opens and closes the file for RO while a writer has a multi-block reservation, the RO closer will delete the reservation midway through the write, and this results in: kernel BUG at fs/gfs2/rgrp.c:677! (or thereabouts) which is: BUG_ON(rs->rs_requested); from function gfs2_rs_deltree. This patch moves the check back inside the check for FMODE_WRITE. Fixes: 0ec9b9ea4f83 ("gfs2: Check for active reservation in gfs2_release") Cc: stable@vger.kernel.org # v5.12+ Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-11-10  gfs2: Prevent endless loops in gfs2_file_buffered_write  (Andreas Gruenbacher; 1 file, -0/+3)
Currently, instead of performing a short write, iomap_file_buffered_write will fail when part of its iov iterator cannot be read. In contrast, gfs2_file_buffered_write will loop around if it can read part of the iov iterator, so we can end up in an endless loop. This should be fixed in iomap_file_buffered_write (and also generic_perform_write), but this comes a bit late in the 5.16 development cycle, so work around it in the filesystem by trimming the iov iterator to the known-good size for now. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-11-03  gfs2: Only dereference i->iov when iter_is_iovec(i)  (Andreas Gruenbacher; 1 file, -3/+3)
Only dereference i->iov after establishing that i is of type ITER_IOVEC. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-11-02  Merge tag 'gfs2-v5.15-rc5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2  (Linus Torvalds; 1 file, -9/+2)
Pull gfs2 updates from Andreas Gruenbacher:
 - Fix a locking order inversion between the inode and iopen glocks in gfs2_inode_lookup.
 - Implement proper queuing of glock holders for glocks that require instantiation (like reading an inode or bitmap blocks from disk). Before, multiple glock holders could race with each other and half-initialized objects could be exposed; the GL_SKIP flag further exacerbated this problem.
 - Fix a rare deadlock between inode lookup / creation and remote delete work.
 - Fix a rare scheduling-while-atomic bug in dlm during glock hash table walks.
 - Various other minor fixes and cleanups.
* tag 'gfs2-v5.15-rc5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2: (21 commits)
  gfs2: Fix unused value warning in do_gfs2_set_flags()
  gfs2: check context in gfs2_glock_put
  gfs2: Fix glock_hash_walk bugs
  gfs2: Cancel remote delete work asynchronously
  gfs2: set glock object after nq
  gfs2: remove RDF_UPTODATE flag
  gfs2: Eliminate GIF_INVALID flag
  gfs2: fix GL_SKIP node_scope problems
  gfs2: split glock instantiation off from do_promote
  gfs2: further simplify do_promote
  gfs2: re-factor function do_promote
  gfs2: Remove 'first' trace_gfs2_promote argument
  gfs2: change go_lock to go_instantiate
  gfs2: dump glocks from gfs2_consist_OBJ_i
  gfs2: dequeue iopen holder in gfs2_inode_lookup error
  gfs2: Save ip from gfs2_glock_nq_init
  gfs2: Allow append and immutable bits to coexist
  gfs2: Switch some BUG_ON to GLOCK_BUG_ON for debug
  gfs2: move GL_SKIP check from glops to do_promote
  gfs2: Add GL_SKIP holder flag to dump_holder
  ...
2021-11-02  Merge tag 'gfs2-v5.15-rc5-mmap-fault' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2  (Linus Torvalds; 1 file, -23/+229)
Pull gfs2 mmap + page fault deadlocks fixes from Andreas Gruenbacher:
"Functions gfs2_file_read_iter and gfs2_file_write_iter are both accessing the user buffer to write to or read from while holding the inode glock. In the most basic deadlock scenario, that buffer will not be resident and it will be mapped to the same file. Accessing the buffer will trigger a page fault, and gfs2 will deadlock trying to take the same inode glock again while trying to handle that fault.
Fix that and similar, more complex scenarios by disabling page faults while accessing user buffers. To make this work, introduce a small amount of new infrastructure and fix some bugs that didn't trigger so far, with page faults enabled"
* tag 'gfs2-v5.15-rc5-mmap-fault' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
  gfs2: Fix mmap + page fault deadlocks for direct I/O
  iov_iter: Introduce nofault flag to disable page faults
  gup: Introduce FOLL_NOFAULT flag to disable page faults
  iomap: Add done_before argument to iomap_dio_rw
  iomap: Support partial direct I/O on user copy failures
  iomap: Fix iomap_dio_rw return value for user copies
  gfs2: Fix mmap + page fault deadlocks for buffered I/O
  gfs2: Eliminate ip->i_gh
  gfs2: Move the inode glock locking to gfs2_file_buffered_write
  gfs2: Introduce flag for glock holder auto-demotion
  gfs2: Clean up function may_grant
  gfs2: Add wrapper for iomap_file_buffered_write
  iov_iter: Introduce fault_in_iov_iter_writeable
  iov_iter: Turn iov_iter_fault_in_readable into fault_in_iov_iter_readable
  gup: Turn fault_in_pages_{readable,writeable} into fault_in_{readable,writeable}
  powerpc/kvm: Fix kvm_use_magic_page
  iov_iter: Fix iov_iter_get_pages{,_alloc} page fault return value
2021-11-01  Merge tag 'for-5.16/block-2021-10-29' of git://git.kernel.dk/linux-block  (Linus Torvalds; 1 file, -2/+2)
Pull block updates from Jens Axboe:
 - mq-deadline accounting improvements (Bart)
 - blk-wbt timer fix (Andrea)
 - Untangle the block layer includes (Christoph)
 - Rework the poll support to be bio based, which will enable adding support for polling for bio based drivers (Christoph)
 - Block layer core support for multi-actuator drives (Damien)
 - blk-crypto improvements (Eric)
 - Batched tag allocation support (me)
 - Request completion batching support (me)
 - Plugging improvements (me)
 - Shared tag set improvements (John)
 - Concurrent queue quiesce support (Ming)
 - Cache bdev in ->private_data for block devices (Pavel)
 - bdev dio improvements (Pavel)
 - Block device invalidation and block size improvements (Xie)
 - Various cleanups, fixes, and improvements (Christoph, Jackie, Masahira, Tejun, Yu, Pavel, Zheng, me)
* tag 'for-5.16/block-2021-10-29' of git://git.kernel.dk/linux-block: (174 commits)
  blk-mq-debugfs: Show active requests per queue for shared tags
  block: improve readability of blk_mq_end_request_batch()
  virtio-blk: Use blk_validate_block_size() to validate block size
  loop: Use blk_validate_block_size() to validate block size
  nbd: Use blk_validate_block_size() to validate block size
  block: Add a helper to validate the block size
  block: re-flow blk_mq_rq_ctx_init()
  block: prefetch request to be initialized
  block: pass in blk_mq_tags to blk_mq_rq_ctx_init()
  block: add rq_flags to struct blk_mq_alloc_data
  block: add async version of bio_set_polled
  block: kill DIO_MULTI_BIO
  block: kill unused polling bits in __blkdev_direct_IO()
  block: avoid extra iter advance with async iocb
  block: Add independent access ranges support
  blk-mq: don't issue request directly in case that current is to be blocked
  sbitmap: silence data race warning
  blk-cgroup: synchronize blkg creation against policy deactivation
  block: refactor bio_iov_bvec_set()
  block: add single bio async direct IO helper
  ...
2021-10-25  gfs2: Fix unused value warning in do_gfs2_set_flags()  (Tim Gardner; 1 file, -1/+0)
Coverity complains of an unused value:
  CID 119623 (#1 of 1): Unused value (UNUSED_VALUE)
  assigned_value: Assigning value -1 to error here, but that stored value is overwritten before it can be used.
  237        error = -EPERM;
Fix it by removing the assignment.
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-25  gfs2: Allow append and immutable bits to coexist  (Bob Peterson; 1 file, -8/+2)
Before this patch, function do_gfs2_set_flags checked if the append and immutable flags were being set while already set. If so, error -EPERM was given. There's no reason why these two flags should be mutually exclusive, and if you set them separately, you will, in essence, set one while it is already set. For example:
  chattr +a /mnt/gfs2/file1
  chattr +i /mnt/gfs2/file1
The first command sets the append-only flag. Since they are additive, the second command sets the immutable flag AND append-only flag, since they both coexist in i_diskflags. So the second command should not return an error. This bug caused xfstests generic/545 to fail.
This patch simply removes the invalid checks. I also eliminated an unused parm from do_gfs2_set_flags.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-25  gfs2: Fix mmap + page fault deadlocks for direct I/O  (Andreas Gruenbacher; 1 file, -12/+87)
Also disable page faults during direct I/O requests and implement a similar kind of retry logic as in the buffered I/O case. The retry logic in the direct I/O case differs from the buffered I/O case in the following way: direct I/O doesn't provide the kinds of consistency guarantees between concurrent reads and writes that buffered I/O provides, so once we lose the inode glock while faulting in user pages, we always resume the operation. We never need to return a partial read or write. This locking problem was originally reported by Jan Kara. Linus came up with the idea of disabling page faults. Many thanks to Al Viro and Matthew Wilcox for their feedback. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-24  iomap: Add done_before argument to iomap_dio_rw  (Andreas Gruenbacher; 1 file, -2/+2)
Add a done_before argument to iomap_dio_rw that indicates how much of the request has already been transferred. When the request succeeds, we report that done_before additional bytes were transferred. This is useful for finishing a request asynchronously when part of the request has already been completed synchronously. We'll use that to allow iomap_dio_rw to be used with page faults disabled: when a page fault occurs while submitting a request, we synchronously complete the part of the request that has already been submitted. The caller can then take care of the page fault and call iomap_dio_rw again for the rest of the request, passing in the number of bytes already transferred. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com> Reviewed-by: Darrick J. Wong <djwong@kernel.org>
2021-10-24  gfs2: Fix mmap + page fault deadlocks for buffered I/O  (Andreas Gruenbacher; 1 file, -5/+94)
In the .read_iter and .write_iter file operations, we're accessing user-space memory while holding the inode glock. There is a possibility that the memory is mapped to the same file, in which case we'd recurse on the same glock. We could detect and work around this simple case of recursive locking, but more complex scenarios exist that involve multiple glocks, processes, and cluster nodes, and working around all of those cases isn't practical or even possible. Avoid these kinds of problems by disabling page faults while holding the inode glock. If a page fault would occur, we either end up with a partial read or write or with -EFAULT if nothing could be read or written. In either case, we know that we're not done with the operation, so we indicate that we're willing to give up the inode glock and then we fault in the missing pages. If that made us lose the inode glock, we return a partial read or write. Otherwise, we resume the operation. This locking problem was originally reported by Jan Kara. Linus came up with the idea of disabling page faults. Many thanks to Al Viro and Matthew Wilcox for their feedback. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
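Reduced to a sketch for the read side, the retry pattern looks roughly like this. pagefault_disable/enable, fault_in_iov_iter_writeable, generic_file_read_iter and the glock helpers are real; the loop structure is simplified, chunk is an illustrative fault-in size, and byte accounting across retries is omitted:

    gfs2_holder_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
    for (;;) {
            ret = gfs2_glock_nq(&gh);
            if (ret)
                    break;

            pagefault_disable();
            ret = generic_file_read_iter(iocb, to);   /* stops early on a fault */
            pagefault_enable();

            gfs2_glock_dq(&gh);

            if (ret != -EFAULT && (ret <= 0 || !iov_iter_count(to)))
                    break;          /* done, EOF, or a hard error */

            /* A page fault occurred: fault the user pages in without the
             * glock held, then resume rather than returning short. */
            if (fault_in_iov_iter_writeable(to, chunk) == chunk)
                    break;          /* no pages could be faulted in */
    }
    gfs2_holder_uninit(&gh);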
2021-10-20  gfs2: Eliminate ip->i_gh  (Andreas Gruenbacher; 1 file, -13/+21)
Now that gfs2_file_buffered_write is the only remaining user of ip->i_gh, we can move the glock holder to the stack (or rather, use the one we already have on the stack); there is no need for keeping the holder in the inode anymore. This is slightly complicated by the fact that we're using ip->i_gh for the statfs inode in gfs2_file_buffered_write as well. Writing to the statfs inode isn't very common, so allocate the statfs holder dynamically when needed. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-20  gfs2: Move the inode glock locking to gfs2_file_buffered_write  (Andreas Gruenbacher; 1 file, -0/+27)
So far, for buffered writes, we were taking the inode glock in gfs2_iomap_begin and dropping it in gfs2_iomap_end with the intention of not holding the inode glock while iomap_write_actor faults in user pages. It turns out that iomap_write_actor is called inside iomap_begin ... iomap_end, so the user pages were still faulted in while holding the inode glock and the locking code in iomap_begin / iomap_end was completely pointless. Move the locking into gfs2_file_buffered_write instead. We'll take care of the potential deadlocks due to faulting in user pages while holding a glock in a subsequent patch. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-20  gfs2: Add wrapper for iomap_file_buffered_write  (Andreas Gruenbacher; 1 file, -10/+17)
Add a wrapper around iomap_file_buffered_write. We'll add code for when the operation needs to be retried here later. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-10-18  block: switch polling to be bio based  (Christoph Hellwig; 1 file, -2/+2)
Replace the blk_poll interface that requires the caller to keep a queue and cookie from the submissions with polling based on the bio. Polling for the bio itself leads to a few advantages:
 - the cookie construction can be made entirely private in blk-mq.c
 - the caller does not need to remember the request_queue and cookie separately and thus sidesteps their lifetime issues
 - keeping the device and the cookie inside the bio allows to trivially support polling of BIOs remapped by stacking drivers
 - a lot of code to propagate the cookie back up the submission path can be removed entirely
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Mark Wunderlich <mark.wunderlich@intel.com>
Link: https://lore.kernel.org/r/20211012111226.760968-15-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-09-10  locks: remove LOCK_MAND flock lock support  (Jeff Layton; 1 file, -2/+0)
As best I can tell, the logic for these has been broken for a long time (at least before the move to git), such that they never conflict with anything. Also, nothing checks for these flags to prevent opens or read/write behavior on the files. They don't seem to do anything. Given that, we can rip these symbols out of the kernel, and just make flock(2) return 0 when LOCK_MAND is set in order to preserve existing behavior. Cc: Matthew Wilcox <willy@infradead.org> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Jeff Layton <jlayton@kernel.org>
2021-08-23  fs: remove mandatory file locking support  (Jeff Layton; 1 file, -3/+0)
We added CONFIG_MANDATORY_FILE_LOCKING in 2015, and soon after turned it off in Fedora and RHEL8. Several other distros have followed suit. I've heard of one problem in all that time: Someone migrated from an older distro that supported "-o mand" to one that didn't, and the host had a fstab entry with "mand" in it which broke on reboot. They didn't actually _use_ mandatory locking so they just removed the mount option and moved on. This patch rips out mandatory locking support wholesale from the kernel, along with the Kconfig option and the Documentation file. It also changes the mount code to ignore the "mand" mount option instead of erroring out, and to throw a big, ugly warning. Signed-off-by: Jeff Layton <jlayton@kernel.org>
2021-06-29  gfs2: Clean up gfs2_unstuff_dinode  (Andreas Gruenbacher; 1 file, -2/+2)
Split __gfs2_unstuff_inode off from gfs2_unstuff_dinode and clean up the code a little. All remaining callers now pass NULL as the page argument of gfs2_unstuff_dinode, so remove that argument. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-06-29  gfs2: Unstuff before locking page in gfs2_page_mkwrite  (Andreas Gruenbacher; 1 file, -10/+12)
In gfs2_page_mkwrite, unstuff inodes before locking the page. That way, we won't have to pass in the locked page to gfs2_unstuff_inode, and gfs2_unstuff_inode can look up and lock the page itself. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-06-29  gfs2: Clean up the error handling in gfs2_page_mkwrite  (Andreas Gruenbacher; 1 file, -23/+40)
We're setting an error number so that block_page_mkwrite_return translates it into the corresponding VM_FAULT_* code in several places, but this is getting confusing, so set the VM_FAULT_* codes directly instead. (No change in functionality.) Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
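Schematically, the change looks like this (before/after fragments rather than a complete function; block_page_mkwrite_return() and the VM_FAULT_* codes are real):

    /* Before: an errno is stored and translated on the way out. */
    err = -ENOMEM;
    return block_page_mkwrite_return(err);     /* maps -ENOMEM to VM_FAULT_OOM */

    /* After: the VM_FAULT_* code is set directly. */
    ret = VM_FAULT_OOM;
    return ret;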
2021-06-28  gfs2: Fix underflow in gfs2_page_mkwrite  (Andreas Gruenbacher; 1 file, -2/+2)
On filesystems with a block size smaller than PAGE_SIZE and non-empty files smaller than PAGE_SIZE, gfs2_page_mkwrite could end up allocating excess blocks beyond the end of the file, similar to fallocate. This doesn't make sense; fix it. Reported-by: Bob Peterson <rpeterso@redhat.com> Fixes: 184b4e60853d ("gfs2: Fix end-of-file handling in gfs2_page_mkwrite") Cc: stable@vger.kernel.org # v5.5+ Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-06-28  gfs2: Fix do_gfs2_set_flags description  (Andreas Gruenbacher; 1 file, -1/+1)
Commit 88b631cbfbeb ("gfs2: convert to fileattr") changed the argument list without updating the description. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-06-02Revert "gfs2: Fix mmap locking for write faults"Andreas Gruenbacher1-3/+1
This reverts commit b7f55d928e75557295c1ac280c291b738905b6fb. As explained by Linus in [*], write faults on a mmap region are reads from a filesystem point of view, so taking the inode glock exclusively on write faults is incorrect. Instead, when a page is marked writable, the .page_mkwrite vm operation will be called, which is where the exclusive lock taking needs to happen. I got this wrong because of a broken test case that made me believe .page_mkwrite isn't getting called when it actually is. [*] https://lore.kernel.org/lkml/CAHk-=wj8EWr_D65i4oRSj2FTbrc6RdNydNNCGxeabRnwtoU=3Q@mail.gmail.com/ Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-05-21  gfs2: Fix mmap locking for write faults  (Andreas Gruenbacher; 1 file, -1/+3)
When a write fault occurs, we need to take the inode glock of the underlying inode in exclusive mode. Otherwise, there's no guarantee that the dirty page will be written back to disk. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>