path: root/fs/xfs/libxfs
Age  Commit message  Author  [files changed, lines -/+]
2020-09-05  Merge tag 'xfs-5.9-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux  (Linus Torvalds)  [1 file, -1/+1]

Pull xfs fix from Darrick Wong:
 "Fix a broken metadata verifier that would incorrectly validate attr fork extents of a realtime file against the realtime volume"

* tag 'xfs-5.9-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
  xfs: fix xfs_bmap_validate_extent_raw when checking attr fork of rt files
2020-09-03  xfs: fix xfs_bmap_validate_extent_raw when checking attr fork of rt files  (Darrick J. Wong)  [1 file, -1/+1]

The realtime flag only applies to the data fork, so don't use the realtime block number checks on the attr fork of a realtime file.

Fixes: 30b0984d9117 ("xfs: refactor bmap record validation")
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
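A minimal sketch of the check described above (the helper names are hypothetical placeholders, not the actual kernel functions): realtime block-number validation is applied only to data fork extents of realtime files, while attr fork extents always use the regular filesystem block checks.

    /* Sketch only; rt_blocks_valid()/fsb_blocks_valid() are placeholders. */
    static bool bmap_extent_blocks_valid(struct xfs_mount *mp, bool rtfile,
                                         int whichfork, xfs_fsblock_t bno,
                                         xfs_filblks_t len)
    {
            /* rt block numbers are only meaningful for the data fork */
            if (rtfile && whichfork == XFS_DATA_FORK)
                    return rt_blocks_valid(mp, bno, len);
            return fsb_blocks_valid(mp, bno, len);
    }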
2020-09-02  Merge tag 'xfs-5.9-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux  (Linus Torvalds)  [3 files, -6/+8]

Pull xfs fixes from Darrick Wong:
 "Various small corruption fixes that have come in during the past month:

  - Avoid a log recovery failure for an insert range operation by rolling deferred ops incrementally instead of at the end.

  - Fix an off-by-one error when calculating log space reservations for anything involving an inode allocation or free.

  - Fix a broken shortform xattr verifier.

  - Ensure that the shortform xattr header padding is always initialized to zero"

* tag 'xfs-5.9-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
  xfs: initialize the shortform attr header padding entry
  xfs: fix boundary test in xfs_attr_shortform_verify
  xfs: fix off-by-one in inode alloc block reservation calculation
  xfs: finish dfops on every insert range shift iteration
2020-08-28  Merge tag 'writeback_for_v5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs  (Linus Torvalds)  [1 file, -2/+2]

Pull writeback fixes from Jan Kara:
 "Fixes for writeback code occasionally skipping writeback of some inodes or livelocking sync(2)"

* tag 'writeback_for_v5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
  writeback: Drop I_DIRTY_TIME_EXPIRE
  writeback: Fix sync livelock due to b_dirty_time processing
  writeback: Avoid skipping inode writeback
  writeback: Protect inode->i_io_list with inode->i_lock
2020-08-27  xfs: initialize the shortform attr header padding entry  (Darrick J. Wong)  [1 file, -2/+2]

Don't leak kernel memory contents into the shortform attr fork.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
2020-08-27  xfs: fix boundary test in xfs_attr_shortform_verify  (Eric Sandeen)  [1 file, -1/+3]

The boundary test for the fixed-offset parts of xfs_attr_sf_entry in xfs_attr_shortform_verify is off by one, because the variable array at the end is defined as nameval[1] not nameval[]. Hence we need to subtract 1 from the calculation.

This can be shown by:

  # touch file
  # setfattr -n root.a file

and verifications will fail when it's written to disk.

This only matters for a last attribute which has a single-byte name and no value, otherwise the combination of namelen & valuelen will push endp further out and this test won't fail.

Fixes: 1e1bbd8e7ee06 ("xfs: create structure verifier function for shortform xattrs")
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
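To make the off-by-one concrete, here is a small standalone sketch (illustrative struct and helper, not the actual kernel code): because the trailing array is declared nameval[1], sizeof() already counts one byte of the variable-length region, so a boundary check on the fixed-offset part has to subtract that byte.

    /* Illustrative sketch only; not the real xfs_attr_sf_entry checks. */
    #include <stdbool.h>

    struct sf_entry {
            unsigned char namelen;
            unsigned char valuelen;
            unsigned char flags;
            unsigned char nameval[1];       /* name (and value) start here */
    };

    /*
     * sizeof(struct sf_entry) includes one byte of nameval[], so the check
     * must subtract 1 or a final entry with a one-byte name and no value
     * is falsely rejected.
     */
    static bool fixed_part_fits(const char *entry, const char *endp)
    {
            return entry + sizeof(struct sf_entry) - 1 <= endp;
    }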
2020-08-27  xfs: fix off-by-one in inode alloc block reservation calculation  (Brian Foster)  [2 files, -3/+3]

The inode chunk allocation transaction reserves inobt_maxlevels-1 blocks to accommodate a full split of the inode btree. A full split requires an allocation for every existing level and a new root block, which means inobt_maxlevels is the worst case block requirement for a transaction that inserts to the inobt. This can lead to a transaction block reservation overrun when tmpfile creation allocates an inode chunk and expands the inobt to its maximum depth. This problem has been observed in conjunction with overlayfs, which makes frequent use of tmpfiles internally.

The existing reservation code goes back as far as the Linux git repo history (v2.6.12). It was likely never observed as a problem because the traditional file/directory creation transactions also include worst case block reservation for directory modifications, which most likely is able to make up for a single block deficiency in the inode allocation portion of the calculation. tmpfile support is relatively more recent (v3.15), less heavily used, and only includes the inode allocation block reservation as tmpfiles aren't linked into the directory tree on creation.

Fix up the inode alloc block reservation macro and a couple of the block allocator minleft parameters that enforce an allocation to leave enough free blocks in the AG for a full inobt split.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2020-08-05  xfs: delete duplicated words + other fixes  (Randy Dunlap)  [1 file, -1/+1]

Delete repeated words in fs/xfs/. {we, that, the, a, to, fork}

Change "it it" to "it is" in one location.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: linux-xfs@vger.kernel.org
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2020-07-29  xfs: Lift -ENOSPC handler from xfs_attr_leaf_addname  (Allison Collins)  [1 file, -14/+11]

Lift -ENOSPC handler from xfs_attr_leaf_addname. This will help to reorganize transitions between the attr forms later.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Simplify xfs_attr_node_addname  (Allison Collins)  [1 file, -63/+59]

Invert the rename logic in xfs_attr_node_addname to simplify the delayed attr logic later.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Simplify xfs_attr_leaf_addname  (Allison Collins)  [1 file, -52/+55]

Invert the rename logic in xfs_attr_leaf_addname to simplify the delayed attr logic later.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Add helper function xfs_attr_node_removename_rmt  (Allison Collins)  [1 file, -9/+19]

This patch adds another new helper function xfs_attr_node_removename_rmt. This will also help modularize xfs_attr_node_removename when we add delay ready attributes later.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Add helper function xfs_attr_node_removename_setup  (Allison Collins)  [1 file, -13/+33]

This patch adds a new helper function xfs_attr_node_removename_setup. This will help modularize xfs_attr_node_removename when we add delay ready attributes later.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
[darrick: fix unused variable complaints by 0day robot]
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
Reported-by: kernel test robot <lkp@intel.com>
2020-07-29  xfs: Add remote block helper functions  (Allison Collins)  [1 file, -20/+30]

This patch adds two new helper functions, xfs_attr_store_rmt_blk and xfs_attr_restore_rmt_blk. These two helpers remove redundant code associated with storing and retrieving remote blocks during the attr set operations.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Reviewed-by: Amir Goldstein <amir73il@gmail.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Add helper function xfs_attr_leaf_mark_incomplete  (Allison Collins)  [1 file, -14/+27]

This patch helps to simplify xfs_attr_node_removename by modularizing the code around the transactions into helper functions. This will make the function easier to follow when we introduce delayed attributes.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Amir Goldstein <amir73il@gmail.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Add helpers xfs_attr_is_shortform and xfs_attr_set_shortform  (Allison Collins)  [1 file, -35/+72]

In this patch, we hoist code from xfs_attr_set_args into two new helpers, xfs_attr_is_shortform and xfs_attr_set_shortform. These two will help to simplify xfs_attr_set_args when we get into delayed attrs later.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Remove xfs_trans_roll in xfs_attr_node_removename  (Allison Collins)  [1 file, -4/+0]

A transaction roll is not necessary immediately after setting the INCOMPLETE flag when removing a node xattr entry with remote value blocks. The remote block invalidation that immediately follows setting the flag is an in-core only change. The next step after that is to start unmapping the remote blocks from the attr fork, but the xattr remove transaction reservation includes reservation for full tree splits of the dabtree and bmap tree. The remote block unmap code will roll the transaction as extents are unmapped and freed.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Remove unneeded xfs_trans_roll_inode calls  (Allison Collins)  [1 file, -54/+7]

Some calls to xfs_trans_roll_inode and xfs_defer_finish routines are not needed. If they are the last operations executed in these functions, and no further changes are made, then higher level routines will roll or commit the transactions.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Add helper function xfs_attr_node_shrink  (Allison Collins)  [1 file, -26/+42]

This patch adds a new helper function xfs_attr_node_shrink used to shrink an attr name into an inode if it is small enough. This helps to modularize the greater calling function xfs_attr_node_removename.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Pull up xfs_attr_rmtval_invalidate  (Allison Collins)  [2 files, -3/+12]

This patch pulls xfs_attr_rmtval_invalidate out of xfs_attr_rmtval_remove and into the calling functions. Eventually __xfs_attr_rmtval_remove will replace xfs_attr_rmtval_remove when we introduce delayed attributes. These functions are expected to return -EAGAIN when they need a new transaction. Because the invalidate does not need a new transaction, we need to separate it from the rest of the function that does. This will enable __xfs_attr_rmtval_remove to smoothly replace xfs_attr_rmtval_remove later.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
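A hedged sketch of the calling convention described above (the retry loop shape is illustrative only; it is not the actual caller code introduced by this series):

    /* Sketch only: a helper that cannot roll transactions returns -EAGAIN. */
    do {
            error = __xfs_attr_rmtval_remove(args);
            if (error != -EAGAIN)
                    break;
            /* the helper needs a fresh transaction: roll and call it again */
            error = xfs_trans_roll_inode(&args->trans, args->dp);
    } while (!error);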
2020-07-29  xfs: Refactor xfs_attr_rmtval_remove  (Allison Collins)  [2 files, -15/+38]

Refactor xfs_attr_rmtval_remove to add helper function __xfs_attr_rmtval_remove. We will use this later when we introduce delayed attributes. This function will eventually replace xfs_attr_rmtval_remove.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Pull up trans roll in xfs_attr3_leaf_clearflag  (Allison Collins)  [2 files, -4/+17]

New delayed allocation routines cannot be handling transactions so pull them out into the calling functions.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Factor out xfs_attr_rmtval_invalidate  (Allison Collins)  [2 files, -6/+22]

Because new delayed attribute routines cannot roll transactions, we carve off the parts of xfs_attr_rmtval_remove that we can use. This will help to reduce repetitive code later when we introduce delayed attributes.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Pull up trans roll from xfs_attr3_leaf_setflag  (Allison Collins)  [2 files, -4/+6]

New delayed allocation routines cannot be handling transactions so pull them up into the calling functions.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Refactor xfs_attr_try_sf_addname  (Allison Collins)  [1 file, -15/+15]

To help pre-simplify xfs_attr_set_args, we need to hoist transaction handling up, while modularizing the adjacent code down into helpers. In this patch, hoist the commit in xfs_attr_try_sf_addname up into the calling function, and also pull the attr list creation down.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Amir Goldstein <amir73il@gmail.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Split apart xfs_attr_leaf_addname  (Allison Collins)  [1 file, -34/+61]

Split out new helper function xfs_attr_leaf_try_add from xfs_attr_leaf_addname. Because new delayed attribute routines cannot roll transactions, we split off the parts of xfs_attr_leaf_addname that we can use, and move the commit into the calling function.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Pull up trans handling in xfs_attr3_leaf_flipflags  (Allison Collins)  [2 files, -6/+15]

Since delayed operations cannot roll transactions, pull up the transaction handling into the calling function.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Factor out new helper functions xfs_attr_rmtval_set  (Allison Collins)  [1 file, -57/+92]

Break xfs_attr_rmtval_set into two helper functions, xfs_attr_rmt_find_hole and xfs_attr_rmtval_set_value. xfs_attr_rmtval_set rolls the transaction between the helpers, but delayed operations cannot. We will use the helpers later when constructing new delayed attribute routines.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Check for -ENOATTR or -EEXIST  (Allison Collins)  [1 file, -0/+13]

Delayed operations cannot return error codes. So we must check for these conditions first, before starting set or remove operations.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
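A rough sketch of the up-front check described (the xfs_has_attr return convention and the flag names here are assumptions for illustration, not the actual kernel logic):

    /* Sketch only: fail early, before any delayed set/remove work is queued. */
    error = xfs_has_attr(args);    /* assumed: -EEXIST if present, -ENOATTR if not */
    if (args->value) {
            /* set: an exclusive-create request must not find an existing attr */
            if (error == -EEXIST && (args->attr_flags & XATTR_CREATE))
                    return -EEXIST;
    } else {
            /* remove: the attr has to exist before we try to remove it */
            if (error == -ENOATTR)
                    return -ENOATTR;
    }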
2020-07-29  xfs: Add xfs_has_attr and subroutines  (Allison Collins)  [4 files, -91/+197]

This patch adds a new function to check for the existence of an attribute. Subroutines are also added to handle the cases of leaf blocks, nodes or shortform. Common code that appears in existing attr add and remove functions has been factored out to help reduce the appearance of duplicated code. We will need these routines later for delayed attributes since delayed operations cannot return error codes.

Signed-off-by: Allison Collins <allison.henderson@oracle.com>
Reviewed-by: Chandan Rajendra <chandanrlinux@gmail.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
[darrick: fix a leak-on-error bug reported by Dan Carpenter]
[darrick: fix unused variable warning reported by 0day]
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
Reported-by: dan.carpenter@oracle.com
Reported-by: kernel test robot <lkp@intel.com>
2020-07-29  xfs: Refactor xfs_da_state_alloc() helper  (Carlos Maiolino)  [4 files, -30/+20]

Every call to xfs_da_state_alloc() also requires setting up state->args and state->mp. Change xfs_da_state_alloc() to receive an xfs_da_args_t as argument and return an xfs_da_state_t with both args and mp already set.

Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
[darrick: reduce struct typedef usage]
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
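A hedged before/after sketch of the calling pattern described (deriving the mount pointer via args->dp->i_mount is an assumption for illustration):

    /* Before: every caller repeated the same setup. */
    state = xfs_da_state_alloc();
    state->args = args;
    state->mp = args->dp->i_mount;

    /* After: the helper takes the args and fills in both fields. */
    state = xfs_da_state_alloc(args);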
2020-07-29  xfs: Remove kmem_zone_zalloc() usage  (Carlos Maiolino)  [8 files, -10/+13]

Use kmem_cache_zalloc() directly, with the exception of xlog_ticket_alloc(), which will be dealt with in the next patch for readability.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
2020-07-29  xfs: Remove kmem_zone_alloc() usage  (Carlos Maiolino)  [2 files, -2/+4]

Use kmem_cache_alloc() directly. All kmem_zone_alloc() users pass 0 as flags, which are translated into GFP_KERNEL | __GFP_NOWARN, and kmem_zone_alloc() loops forever until the allocation succeeds.

We can use __GFP_NOFAIL to tell the allocator to loop forever rather than doing it ourselves, and because the allocation will never fail, we do not need to use __GFP_NOWARN anymore. Hence, all callers can be converted to use GFP_KERNEL | __GFP_NOFAIL.

Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
[darrick: add a comment back in about nofail]
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
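A hedged sketch of the conversion described (xfs_example_zone is a placeholder cache name, not a real zone in the tree):

    /* Before: kmem_zone_alloc(zone, 0) looped internally until success. */
    ptr = kmem_zone_alloc(xfs_example_zone, 0);

    /* After: let the page allocator loop; with __GFP_NOFAIL the call cannot
     * fail, so __GFP_NOWARN is no longer needed. */
    ptr = kmem_cache_alloc(xfs_example_zone, GFP_KERNEL | __GFP_NOFAIL);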
2020-07-29  xfs: xfs_btree_staging.h: delete duplicated words  (Randy Dunlap)  [1 file, -3/+3]

Drop the repeated words "with" and "be" in comments.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
Cc: linux-xfs@vger.kernel.org
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2020-07-29  xfs: rename the ondisk dquot d_flags to d_type  (Darrick J. Wong)  [2 files, -4/+4]

The ondisk dquot stores the quota record type in the flags field. Rename this field to d_type to make the _type relationship between the ondisk and incore dquot more obvious.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
2020-07-29  xfs: improve ondisk dquot flags checking  (Darrick J. Wong)  [2 files, -3/+10]

Create an XFS_DQTYPE_ANY mask for ondisk dquot flags, and use that to ensure that we never accept any garbage flags when we're loading dquots. While we're at it, restructure the quota type flag checking to use the proper masking.

Note that I plan to add y2038 support soon, which will require a new xfs_dqtype_t flag for extended timestamp support, hence all the work to make the type masking work correctly.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
2020-07-29  xfs: create xfs_dqtype_t to represent quota types  (Darrick J. Wong)  [3 files, -14/+20]

Create a new type (xfs_dqtype_t) to represent the type of an incore dquot (user, group, project, or none). Rename the incore dquot's dq_flags field to q_type.

This allows us to replace all the "uint type" arguments to the quota functions with "xfs_dqtype_t type", to make it obvious when we're passing a quota type argument into a function.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
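A hedged sketch of the signature change described (the underlying integer width and the example prototype are assumptions for illustration, not the actual kernel declarations):

    typedef uint8_t xfs_dqtype_t;           /* width assumed for illustration */

    /* Before: the quota type travelled as a bare uint. */
    int xfs_example_dqget(struct xfs_mount *mp, xfs_dqid_t id, uint type);

    /* After: a dedicated type makes quota-type parameters self-documenting. */
    int xfs_example_dqget(struct xfs_mount *mp, xfs_dqid_t id, xfs_dqtype_t type);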
2020-07-29  xfs: rename XFS_DQ_{USER,GROUP,PROJ} to XFS_DQTYPE_*  (Darrick J. Wong)  [3 files, -11/+13]

We're going to split up the incore dquot state flags from the ondisk dquot flags (eventually renaming this "type"), so start by renaming the three flags and the bitmask that are going to participate in this.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
2020-07-29  xfs: drop the type parameter from xfs_dquot_verify  (Darrick J. Wong)  [2 files, -10/+6]

xfs_qm_reset_dqcounts (aka quotacheck) is the only xfs_dqblk_verify caller that actually knows the specific quota type that it's looking for. Since everything else just passes in type==0 (including the buffer verifier), drop the parameter and open-code the check like xfs_dquot_from_disk already does.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
2020-07-29  xfs: remove qcore from incore dquots  (Darrick J. Wong)  [1 file, -4/+3]

Now that we've stopped using qcore entirely, drop it from the incore dquot.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
2020-07-29  xfs: make XFS_DQUOT_CLUSTER_SIZE_FSB part of the ondisk format  (Darrick J. Wong)  [1 file, -0/+16]

Move the dquot cluster size #define to xfs_format.h. It is an important part of the ondisk format because the ondisk dquot record size is not an even power of two, which means that the buffer size we use is significant here because the kernel leaves slack space at the end of the buffer to avoid having to deal with a dquot record crossing a block boundary.

This is also an excuse to fix one of the longstanding discrepancies between kernel and userspace libxfs headers.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com>
2020-07-29  xfs: rename dquot incore state flags  (Darrick J. Wong)  [1 file, -5/+5]

Rename the existing incore dquot "dq_flags" field to "q_flags" to match everything else in the structure, then move the two actual dquot state flags to the XFS_DQFLAG_ namespace from XFS_DQ_.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com>
2020-07-29  xfs: fix inode allocation block res calculation precedence  (Brian Foster)  [1 file, -1/+1]

The block reservation calculation for inode allocation is supposed to consist of the blocks required for the inode chunk plus (maxlevels-1) of the inode btree multiplied by the number of inode btrees in the fs (2 when finobt is enabled, 1 otherwise). Instead, the macro returns (ialloc_blocks + 2) due to a precedence error in the calculation logic. This leads to block reservation overruns via generic/531 on small block filesystems with finobt enabled.

Add braces to fix the calculation and reserve the appropriate number of blocks.

Fixes: 9d43b180af67 ("xfs: update inode allocation/free transaction reservations for finobt")
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
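A standalone sketch of the precedence pitfall described (the macros below are illustrative stand-ins, not the kernel's reservation macro):

    /* Broken: the ternary swallows the multiplication, so with finobt enabled
     * this evaluates to ialloc_blks + 2 instead of ialloc_blks + 2 * (maxlevels - 1). */
    #define SPACE_RES_BROKEN(ialloc_blks, has_finobt, maxlevels) \
            ((ialloc_blks) + ((has_finobt) ? 2 : 1 * ((maxlevels) - 1)))

    /* Fixed: braces around the ternary reserve (maxlevels - 1) blocks per
     * inode btree, times the number of inode btrees. */
    #define SPACE_RES_FIXED(ialloc_blks, has_finobt, maxlevels) \
            ((ialloc_blks) + ((has_finobt) ? 2 : 1) * ((maxlevels) - 1))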
2020-07-14  xfs: get rid of unnecessary xfs_perag_{get,put} pairs  (Gao Xiang)  [7 files, -64/+23]

In the course of some operations, we look up the perag from the mount multiple times to get or change perag information. These are often very short pieces of code, so while the lookup cost is generally low, the cost of the lookup is far higher than the cost of the operation we are doing on the perag.

Since we changed buffers to hold references to the perag they are cached in, many modification contexts already hold active references to the perag that are held across these operations. This is especially true for any operation that is serialised by an allocation group header buffer.

In these cases, we can just use the buffer's reference to the perag to avoid needing to do lookups to access the perag. This means that many operations don't need to do perag lookups at all to access the perag because they've already looked up objects that own persistent references and hence can use that reference instead.

Cc: Dave Chinner <dchinner@redhat.com>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
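A hedged sketch of the pattern described (the b_pag field name is an assumption for illustration):

    /* Before: a short operation did its own perag lookup and release. */
    pag = xfs_perag_get(mp, agno);
    /* ... read or update per-AG state ... */
    xfs_perag_put(pag);

    /* After: reuse the reference already held by the AG header buffer,
     * which is pinned for the duration of the operation anyway. */
    pag = agbp->b_pag;
    /* ... read or update per-AG state; no extra get/put needed ... */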
2020-07-07  xfs: remove xfs_inobp_check()  (Dave Chinner)  [2 files, -30/+0]

This debug code is called on every xfs_iflush() call, which then checks every inode in the buffer for a non-zero unlinked list field. Hence it checks every inode in the cluster buffer every time a single inode on that cluster is flushed. This is resulting in:

  -   38.91%     5.33%  [kernel]  [k] xfs_iflush
     - 17.70% xfs_iflush
        - 9.93% xfs_inobp_check
             4.36% xfs_buf_offset

10% of the CPU time spent flushing inodes is repeatedly checking unlinked fields in the buffer. We don't need to do this.

The other place we call xfs_inobp_check() is xfs_iunlink_update_dinode(), and this is after we've done this assert for the agino we are about to write into that inode:

  ASSERT(xfs_verify_agino_or_null(mp, agno, next_agino));

which means we've already checked that the agino we are about to write is not 0 on debug kernels. The inode buffer verifiers do everything else we need, so let's just remove this debug code.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2020-07-07  xfs: attach inodes to the cluster buffer when dirtied  (Dave Chinner)  [1 file, -3/+6]

Rather than attach inodes to the cluster buffer just when we are doing IO, attach the inodes to the cluster buffer when they are dirtied. This means the buffer always carries a list of dirty inodes that reference it, and we can use that list to make more fundamental changes to inode writeback that aren't otherwise possible.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2020-07-07  xfs: pin inode backing buffer to the inode log item  (Dave Chinner)  [2 files, -7/+49]

When we dirty an inode, we are going to have to write it to disk at some point in the near future. This requires the inode cluster backing buffer to be present in memory. Unfortunately, under severe memory pressure we can reclaim the inode backing buffer while the inode is dirty in memory, resulting in stalling the AIL pushing because it has to do a read-modify-write cycle on the cluster buffer.

When we have no memory available, the read of the cluster buffer blocks the AIL pushing process, and this causes all sorts of issues for memory reclaim as it requires inode writeback to make forwards progress. Allocating a cluster buffer causes more memory pressure, and results in more cluster buffers being reclaimed, resulting in more RMW cycles to be done in the AIL context and everything then backs up on AIL progress. Only the synchronous inode cluster writeback in the inode reclaim code provides some level of forwards progress guarantees that prevent OOM-killer rampages in this situation.

Fix this by pinning the inode backing buffer to the inode log item when the inode is first dirtied (i.e. in xfs_trans_log_inode()). This may mean the first modification of an inode that has been held in cache for a long time may block on a cluster buffer read, but we can do that in transaction context and block safely until the buffer has been allocated and read.

Once we have the cluster buffer, the inode log item takes a reference to it, pinning it in memory, and attaches it to the log item for future reference. This means we can always grab the cluster buffer from the inode log item when we need it.

When the inode is finally cleaned and removed from the AIL, we can drop the reference the inode log item holds on the cluster buffer. Once all inodes on the cluster buffer are clean, the cluster buffer will be unpinned and it will be available for memory reclaim to reclaim again.

This avoids the issues with needing to do RMW cycles in the AIL pushing context, and hence allows complete non-blocking inode flushing to be performed by the AIL pushing context.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2020-07-06  xfs: add an inode item lock  (Dave Chinner)  [1 file, -26/+26]

The inode log item is kind of special in that it can be aggregating new changes in memory at the same time as existing changes are being written back to disk. This means there are fields in the log item that are accessed concurrently from contexts that don't share any locking at all.

e.g. updating ili_last_fields occurs at flush time under the ILOCK_EXCL and flush lock, under the flush lock at IO completion time, and is read under the ILOCK_EXCL when the inode is logged. Hence there is no actual serialisation between reading the field during logging of the inode in transactions vs clearing the field in IO completion.

We currently get away with this by the fact that we are only clearing fields in IO completion, and nothing bad happens if we accidentally log more of the inode than we actually modify. Worst case is we consume a tiny bit more memory and log bandwidth.

However, if we want to do more complex state manipulations on the log item that require updates at all three of these potential locations, we need to have some mechanism of serialising those operations. To do this, introduce a spinlock into the log item to serialise internal state.

This could be done via the xfs_inode i_flags_lock, but this then leads to potential lock inversion issues where inode flag updates need to occur inside locks that best nest inside the inode log item locks (e.g. marking inodes stale during inode cluster freeing). Using a separate spinlock avoids these sorts of problems and simplifies future code.

This does not touch the use of ili_fields in the item formatting code - that is entirely protected by the ILOCK_EXCL at this point in time, so it remains untouched.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
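A hedged sketch of the serialisation the new spinlock provides (the ili_lock name and the exact fields updated under it are assumptions for illustration):

    /* Sketch only: updates to shared log item state go under the item lock. */
    spin_lock(&iip->ili_lock);
    iip->ili_last_fields = iip->ili_fields;
    iip->ili_fields = 0;
    spin_unlock(&iip->ili_lock);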
2020-07-06  xfs: Don't allow logging of XFS_ISTALE inodes  (Dave Chinner)  [1 file, -0/+2]

In tracking down a problem in this patchset, I discovered we are reclaiming dirty stale inodes. This wasn't discovered until inodes were always attached to the cluster buffer and then the rcu callback that freed inodes was assert failing because the inode still had an active pointer to the cluster buffer after it had been reclaimed.

Debugging the issue indicated that this was a pre-existing issue resulting from the way the inodes are handled in xfs_inactive_ifree. When we free a cluster buffer from xfs_ifree_cluster, all the inodes in cache are marked XFS_ISTALE. Those that are clean have nothing else done to them and so eventually get cleaned up by background reclaim. i.e. it is assumed we'll never dirty/relog an inode marked XFS_ISTALE.

On journal commit, dirty stale inodes are handled by both buffer and inode log items: they run through xfs_istale_done() and are removed from the AIL (buffer log item commit), or the log item will simply unpin it because the buffer log item will clean it. What happens to any specific inode is entirely dependent on which log item wins the commit race, but the result is the same - stale inodes are clean, not attached to the cluster buffer, and not in the AIL. Hence inode reclaim can just free these inodes without further care.

However, if the stale inode is relogged, it gets dirtied again and relogged into the CIL. Most of the time this isn't an issue, because relogging simply changes the inode's location in the current checkpoint. Problems arise, however, when the CIL checkpoints between two transactions in the xfs_inactive_ifree() deferops processing. This results in the XFS_ISTALE inode being redirtied and inserted into the CIL without any of the other stale cluster buffer infrastructure being in place.

Hence on journal commit, it simply gets unpinned, so it remains dirty in memory. Everything in inode writeback avoids XFS_ISTALE inodes so it can't be written back, and it is not tracked in the AIL so there's not even a trigger to attempt to clean the inode. Hence the inode just sits dirty in memory until inode reclaim comes along, sees that it is XFS_ISTALE, and goes to reclaim it. This reclaiming of a dirty inode caused use-after-free, list corruptions and other nasty issues later in this patchset.

Hence this patch addresses a violation of the "never log XFS_ISTALE inodes" rule caused by the deferops processing rolling a transaction and relogging a stale inode in xfs_inactive_ifree. It also adds a bunch of asserts to catch this problem in debug kernels so that we don't reintroduce this problem in future.

Reproducer for this issue was generic/558 on a v4 filesystem.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2020-07-06  xfs: redesign the reflink remap loop to fix blkres depletion crash  (Darrick J. Wong)  [1 file, -4/+9]

The existing reflink remapping loop has some structural problems that need addressing:

The biggest problem is that we create one transaction for each extent in the source file without accounting for the number of mappings there are for the same range in the destination file. In other words, we don't know the number of remap operations that will be necessary and we therefore cannot guess the block reservation required. On highly fragmented filesystems (e.g. ones with active dedupe) we guess wrong, run out of block reservation, and fail.

The second problem is that we don't actually use the bmap intents to their full potential -- instead of calling bunmapi directly and having to deal with its backwards operation, we could call the deferred ops xfs_bmap_unmap_extent and xfs_refcount_decrease_extent instead. This makes the frontend loop much simpler.

Solve all of these problems by refactoring the remapping loops so that we only perform one remapping operation per transaction, and each operation only tries to remap a single extent from source to dest.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reported-by: Edwin Török <edwin@etorok.net>
Tested-by: Edwin Török <edwin@etorok.net>