path: root/block
Age  Commit message  [Author, files changed, lines -/+]
2023-12-02  Merge tag 'block-6.7-2023-12-01' of git://git.kernel.dk/linux  [Linus Torvalds, 3 files, -4/+26]
Pull block fixes from Jens Axboe:

 - NVMe pull request via Keith:
     - Invalid namespace identification error handling (Marizio Ewan, Keith)
     - Fabrics keep-alive tuning (Mark)

 - Fix for a bad error check regression in bcache (Markus)

 - Fix for a performance regression with O_DIRECT (Ming)

 - Fix for a flush related deadlock (Ming)

 - Make the read-only warn on per-partition (Yu)

* tag 'block-6.7-2023-12-01' of git://git.kernel.dk/linux:
  nvme-core: check for too small lba shift
  blk-mq: don't count completed flush data request as inflight in case of quiesce
  block: Document the role of the two attribute groups
  block: warn once for each partition in bio_check_ro()
  block: move .bd_inode into 1st cacheline of block_device
  nvme: check for valid nvme_identify_ns() before using it
  nvme-core: fix a memory leak in nvme_ns_info_from_identify()
  nvme: fine-tune sending of first keep-alive
  bcache: revert replacing IS_ERR_OR_NULL with IS_ERR
2023-12-01  blk-mq: don't count completed flush data request as inflight in case of quiesce  [Ming Lei, 1 file, -1/+13]
Request queue quiesce may interrupt a flush sequence: the original request may already have been marked COMPLETE, but it can't get finished because of the queue quiesce. That is fine from the driver's viewpoint, because the flush sequence is a block-layer concept and isn't related to the driver. However, a driver (such as dm-rq) can call blk_mq_queue_inflight() to count and drain inflight requests, and that wait and drain then never finish because the completed-but-not-finished flush request is still counted as inflight. Fix this issue by not counting a completed flush data request as inflight in case of quiesce. Cc: Mike Snitzer <snitzer@kernel.org> Cc: David Jeffery <djeffery@redhat.com> Cc: John Pittman <jpittman@redhat.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231201085605.577730-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
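For illustration, a self-contained C model of why a completed flush data request must not be counted as inflight during a drain (toy structures; not the kernel's request or blk_mq_queue_inflight() implementation):

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy request model for illustration only (not kernel structures). */
    struct req {
        bool started;    /* dispatched, i.e. potentially inflight */
        bool completed;  /* already marked COMPLETE */
        bool flush_seq;  /* data request that is part of a flush sequence */
    };

    /*
     * Count "inflight" requests the way a drain loop would.  A flush data
     * request that is already COMPLETE cannot make progress while the queue
     * is quiesced, so counting it would make the drain wait forever.
     */
    static int count_inflight(const struct req *rqs, int n)
    {
        int inflight = 0;

        for (int i = 0; i < n; i++) {
            if (!rqs[i].started)
                continue;
            if (rqs[i].flush_seq && rqs[i].completed)
                continue;    /* the fix: skip completed flush data requests */
            inflight++;
        }
        return inflight;
    }

    int main(void)
    {
        struct req rqs[] = {
            { .started = true },                      /* ordinary inflight IO */
            { .started = true, .flush_seq = true,
              .completed = true },                    /* stuck flush data req */
        };

        printf("inflight = %d\n", count_inflight(rqs, 2));  /* prints 1 */
        return 0;
    }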
2023-11-29  block: Document the role of the two attribute groups  [Bart Van Assche, 1 file, -0/+2]
It is nontrivial to derive the role of the two attribute groups in source file block/blk-sysfs.c. Hence add a comment that explains their roles. See also commit 6d85ebf95c44 ("blk-sysfs: add a new attr_group for blk_mq"). Cc: Christoph Hellwig <hch@lst.de> Cc: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Link: https://lore.kernel.org/r/20231128194019.72762-1-bvanassche@acm.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-11-28  block: warn once for each partition in bio_check_ro()  [Yu Kuai, 1 file, -3/+11]
Commit 1b0a151c10a6 ("blk-core: use pr_warn_ratelimited() in bio_check_ro()") fixed the message storm by rate-limiting the warnings; however, there can still be lots of messages over the long term. Fix it better by warning once for each partition. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20231128123027.971610-3-yukuai1@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
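A stand-alone sketch of the warn-once-per-partition idea (toy partition type and an atomic once-flag; the kernel's actual field and helper names may differ):

    #include <stdatomic.h>
    #include <stdio.h>

    /* Toy partition object; the once-flag is the point of the example. */
    struct partition {
        const char *name;
        atomic_flag ro_warned;    /* set after the first warning */
    };

    /* Warn about a write to a read-only partition, but only once per
     * partition instead of once per bio (or a rate-limited stream). */
    static void warn_write_to_ro(struct partition *p)
    {
        if (atomic_flag_test_and_set(&p->ro_warned))
            return;    /* this partition has already been reported */
        fprintf(stderr, "Trying to write to read-only device %s\n", p->name);
    }

    int main(void)
    {
        struct partition sda1 = { .name = "sda1", .ro_warned = ATOMIC_FLAG_INIT };

        for (int i = 0; i < 1000; i++)
            warn_write_to_ro(&sda1);    /* prints exactly one line */
        return 0;
    }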
2023-11-24  Merge tag 'vfs-6.7-rc3.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs  [Linus Torvalds, 1 file, -0/+2]
Pull vfs fixes from Christian Brauner:

 - Avoid calling back into LSMs from vfs_getattr_nosec() calls.

   IMA used to query inode properties accessing raw inode fields without
   dedicated helpers. That was finally fixed a few releases ago by forcing
   IMA to use vfs_getattr_nosec() helpers.

   The goal of the vfs_getattr_nosec() helper is to query for attributes
   without calling into the LSM layer, which would be quite problematic
   because incredibly IMA is called from __fput()...

     __fput()
       -> ima_file_free()

   What it does is to call back into the filesystem to update the file's
   IMA xattr. Querying the inode without using vfs_getattr_nosec() meant
   that IMA didn't handle stacking filesystems such as overlayfs
   correctly. So the switch to vfs_getattr_nosec() is quite correct. But
   the switch to vfs_getattr_nosec() revealed another bug when used on
   stacking filesystems:

     __fput()
       -> ima_file_free()
          -> vfs_getattr_nosec()
             -> i_op->getattr::ovl_getattr()
                -> vfs_getattr()
                   -> i_op->getattr::$WHATEVER_UNDERLYING_FS_getattr()
                      -> security_inode_getattr()  # calls back into LSMs

   Now, if that __fput() happens from task_work_run() of an exiting task,
   current->fs and various other pointers could already be NULL. So
   anything in the LSM layer relying on that not being NULL would be quite
   surprised.

   Fix that by passing the information that this is a security request
   through to the stacking filesystem by adding a new internal
   AT_GETATTR_NOSEC flag. Now the callchain becomes:

     __fput()
       -> ima_file_free()
          -> vfs_getattr_nosec()
             -> i_op->getattr::ovl_getattr()
                -> if (AT_GETATTR_NOSEC)
                       vfs_getattr_nosec()
                   else
                       vfs_getattr()
                -> i_op->getattr::$WHATEVER_UNDERLYING_FS_getattr()

 - Fix a bug introduced with the iov_iter rework from last cycle. This
   broke /proc/kcore by copying too much and without the correct offset.

 - Add a missing NULL check when allocating the root inode in
   autofs_fill_super().

 - Fix stable writes for multi-device filesystems (xfs, btrfs etc) and the
   block device pseudo filesystem. Stable writes used to be a superblock
   flag only, making it a per filesystem property. Add an additional
   AS_STABLE_WRITES mapping flag to allow for fine-grained control.

 - Ensure that offset_iterate_dir() returns 0 after reaching the end of a
   directory so it adheres to getdents() convention.

* tag 'vfs-6.7-rc3.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
  libfs: getdents() should return 0 after reaching EOD
  xfs: respect the stable writes flag on the RT device
  xfs: clean up FS_XFLAG_REALTIME handling in xfs_ioctl_setattr_xflags
  block: update the stable_writes flag in bdev_add
  filemap: add a per-mapping stable writes flag
  autofs: add: new_inode check in autofs_fill_super()
  iov_iter: fix copy_page_to_iter_nofault()
  fs: Pass AT_GETATTR_NOSEC flag to getattr interface function
2023-11-20  block: Remove blk_set_runtime_active()  [Damien Le Moal, 1 file, -28/+5]
The function blk_set_runtime_active() is called only from blk_post_runtime_resume(), so there is no need for that function to be exported. Open-code this function directly in blk_post_runtime_resume() and remove it. Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20231120070611.33951-1-dlemoal@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-11-20  block: update the stable_writes flag in bdev_add  [Christoph Hellwig, 1 file, -0/+2]
Propagate the per-queue stable_writes flag into each bdev inode in bdev_add. This makes sure devices that require stable writes have it set for I/O on the block device node as well. Note that this doesn't cover the case of a flag changing on a live device yet. We should handle that as well, but I plan to cover it as part of a more general rework of how changing runtime parameters on block devices works. Fixes: 1cb039f3dc16 ("bdi: replace BDI_CAP_STABLE_WRITES with a queue and a sb flag") Reported-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20231025141020.192413-3-hch@lst.de Tested-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
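A hedged sketch of the idea (not the literal diff; mapping_set_stable_writes() is the helper added by the companion filemap patch listed in the vfs pull above):

    /* In bdev_add(): if the queue needs stable pages during writeback,
     * mirror that onto the bdev inode's mapping so I/O to the block
     * device node gets the same treatment. */
    if (test_bit(QUEUE_FLAG_STABLE_WRITES, &bdev->bd_queue->queue_flags))
        mapping_set_stable_writes(bdev->bd_inode->i_mapping);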
2023-11-17  blk-cgroup: bypass blkcg_deactivate_policy after destroying  [Ming Lei, 1 file, -0/+13]
blkcg_deactivate_policy() can be called after blkg_destroy_all() returns, and it isn't necessary since blkg_destroy_all has covered policy deactivation. Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231117023527.3188627-4-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-11-17  blk-cgroup: avoid to warn !rcu_read_lock_held() in blkg_lookup()  [Ming Lei, 1 file, -2/+0]
So far, all callers either hold the spin lock or the RCU read lock explicitly, and most callers have added WARN_ON_ONCE(!rcu_read_lock_held()) or lockdep_assert_held(&disk->queue->queue_lock). Remove WARN_ON_ONCE(!rcu_read_lock_held()) from blkg_lookup() to kill the false positive warning from blkg_conf_prep(). Reported-by: Changhui Zhong <czhong@redhat.com> Fixes: 83462a6c971c ("blkcg: Drop unnecessary RCU read [un]locks from blkg_conf_prep/finish()") Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231117023527.3188627-3-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-11-17  blk-throttle: fix lockdep warning of "cgroup_mutex or RCU read lock required!"  [Ming Lei, 1 file, -0/+2]
Inside blkg_for_each_descendant_pre(), both css_for_each_descendant_pre() and blkg_lookup() require the RCU read lock, and either cgroup_assert_mutex_or_rcu_locked() or rcu_read_lock_held() is called. Fix the warning by adding the RCU read lock. Reported-by: Changhui Zhong <czhong@redhat.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231117023527.3188627-2-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-11-13  blk-mq: make sure active queue usage is held for bio_integrity_prep()  [Christoph Hellwig, 1 file, -37/+38]
blk_integrity_unregister() can run while a bio with integrity data prepared does not hold the queue usage counter, so such a request may be completed and call profile->complete_fn after the profile is gone, causing a kernel panic. Another constraint is that bio_integrity_prep() needs to be called before bio merge. Fix the issue by reliably grabbing a queue usage counter before calling bio_integrity_prep(), and by calling bio_integrity_prep() before bio merge. Fixes: 900e080752025f00 ("block: move queue enter logic into blk_mq_submit_bio()") Reported-by: Yi Zhang <yi.zhang@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Ming Lei <ming.lei@redhat.com> Tested-by: Yi Zhang <yi.zhang@redhat.com> Link: https://lore.kernel.org/r/20231113035231.2708053-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-11-07  blk-core: use pr_warn_ratelimited() in bio_check_ro()  [Yu Kuai, 1 file, -2/+2]
If one of the underlying disks of a raid or dm device is set to read-only, then each io will generate a new log message, which causes a message storm. This environment is indeed problematic; however, we can't make sure our naive customer won't do this, hence use pr_warn_ratelimited() to prevent the message storm in this case. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Fixes: 57e95e4670d1 ("block: fix and cleanup bio_check_ro") Signed-off-by: Ye Bin <yebin10@huawei.com> Link: https://lore.kernel.org/r/20231107111247.2157820-1-yukuai1@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-11-03  Merge tag 'mm-nonmm-stable-2023-11-02-14-08' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  [Linus Torvalds, 1 file, -3/+3]
Pull non-MM updates from Andrew Morton:
 "As usual, lots of singleton and doubleton patches all over the tree and
  there's little I can say which isn't in the individual changelogs.

  The lengthier patch series are

   - 'kdump: use generic functions to simplify crashkernel reservation in
     arch', from Baoquan He. This is mainly cleanups and consolidation of
     the 'crashkernel=' kernel parameter handling

   - After much discussion, David Laight's 'minmax: Relax type checks in
     min() and max()' is here. Hopefully reduces some typecasting and the
     use of min_t() and max_t()

   - A group of patches from Oleg Nesterov which clean up and slightly fix
     our handling of reads from /proc/PID/task/... and which remove
     task_struct.thread_group"

* tag 'mm-nonmm-stable-2023-11-02-14-08' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (64 commits)
  scripts/gdb/vmalloc: disable on no-MMU
  scripts/gdb: fix usage of MOD_TEXT not defined when CONFIG_MODULES=n
  .mailmap: add address mapping for Tomeu Vizoso
  mailmap: update email address for Claudiu Beznea
  tools/testing/selftests/mm/run_vmtests.sh: lower the ptrace permissions
  .mailmap: map Benjamin Poirier's address
  scripts/gdb: add lx_current support for riscv
  ocfs2: fix a spelling typo in comment
  proc: test ProtectionKey in proc-empty-vm test
  proc: fix proc-empty-vm test with vsyscall
  fs/proc/base.c: remove unneeded semicolon
  do_io_accounting: use sig->stats_lock
  do_io_accounting: use __for_each_thread()
  ocfs2: replace BUG_ON() at ocfs2_num_free_extents() with ocfs2_error()
  ocfs2: fix a typo in a comment
  scripts/show_delta: add __main__ judgement before main code
  treewide: mark stuff as __ro_after_init
  fs: ocfs2: check status values
  proc: test /proc/${pid}/statm
  compiler.h: move __is_constexpr() to compiler.h
  ...
2023-11-02  Merge tag 'for-6.7/block-2023-10-30' of git://git.kernel.dk/linux  [Linus Torvalds, 8 files, -384/+1465]
Pull block updates from Jens Axboe:

 - Improvements to the queue_rqs() support, and adding null_blk support
   for that as well (Chengming)

 - Series improving badblocks support (Coly)

 - Key store support for sed-opal (Greg)

 - IBM partition string handling improvements (Jan)

 - Make number of ublk devices supported configurable (Mike)

 - Cancelation improvements for ublk (Ming)

 - MD pull requests via Song:
     - Handle timeout in md-cluster, by Denis Plotnikov
     - Cleanup pers->prepare_suspend, by Yu Kuai
     - Rewrite mddev_suspend(), by Yu Kuai
     - Simplify md_seq_ops, by Yu Kuai
     - Reduce unnecessary locking array_state_store(), by Mariusz Tkaczyk
     - Make rdev add/remove independent from daemon thread, by Yu Kuai
     - Refactor code around quiesce() and mddev_suspend(), by Yu Kuai

 - NVMe pull request via Keith:
     - nvme-auth updates (Mark)
     - nvme-tcp tls (Hannes)
     - nvme-fc annotations (Kees)

 - Misc cleanups and improvements (Jiapeng, Joel)

* tag 'for-6.7/block-2023-10-30' of git://git.kernel.dk/linux: (95 commits)
  block: ublk_drv: Remove unused function
  md: cleanup pers->prepare_suspend()
  nvme-auth: allow mixing of secret and hash lengths
  nvme-auth: use transformed key size to create resp
  nvme-auth: alloc nvme_dhchap_key as single buffer
  nvmet-tcp: use 'spin_lock_bh' for state_lock()
  powerpc/pseries: PLPKS SED Opal keystore support
  block: sed-opal: keystore access for SED Opal keys
  block:sed-opal: SED Opal keystore
  ublk: simplify aborting request
  ublk: replace monitor with cancelable uring_cmd
  ublk: quiesce request queue when aborting queue
  ublk: rename mm_lock as lock
  ublk: move ublk_cancel_dev() out of ub->mutex
  ublk: make sure io cmd handled in submitter task context
  ublk: don't get ublk device reference in ublk_abort_queue()
  ublk: Make ublks_max configurable
  ublk: Limit dev_id/ub_number values
  md-cluster: check for timeout while a new disk adding
  nvme: rework NVME_AUTH Kconfig selection
  ...
2023-10-30  Merge tag 'vfs-6.7.super' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs  [Linus Torvalds, 6 files, -58/+142]
Pull vfs superblock updates from Christian Brauner:
 "This contains the work to make block device opening functions return a
  struct bdev_handle instead of just a struct block_device. The same
  struct bdev_handle is then also passed to block device closing
  functions.

  This allows us to propagate context from opening to closing a block
  device without having to modify all users every time.

  Sidenote, in the future we might even want to try and have block device
  opening functions return a struct file directly but that's a series on
  top of this.

  These are further preparatory changes to be able to count writable opens
  and blocking writes to mounted block devices. That's a separate piece of
  work for next cycle and for that we absolutely need the changes to btrfs
  that have been quietly dropped somehow.

  Originally the series contained a patch that removed the old blkdev_*()
  helpers. But since this would've caused needless churn in -next for
  bcachefs we ended up delaying it.

  The second piece of work addresses one of the major annoyances about the
  work last cycle, namely that we required dropping s_umount whenever we
  used the superblock and fs_holder_ops for a block device.

  The reason for that requirement had been that in some codepaths s_umount
  could've been taken under disk->open_mutex (that's always been the case,
  at least theoretically). For example, on surprise block device removal
  or media change. And opening and closing block devices required grabbing
  disk->open_mutex as well.

  So we did the work and went through the block layer and fixed all those
  places so that s_umount is never taken under disk->open_mutex. This
  means no more brittle games where we yield and reacquire s_umount during
  block device opening and closing and no more requirements where block
  devices need to be closed. Filesystems don't need to care about this.

  There's a bunch of other follow-up work such as moving block device
  freezing and thawing to holder operations which makes it work for all
  block devices and not just the main block device just as we did for
  surprise removal. But that is for next cycle.

  Tested with fstests for all major fses, blktests, LTP"

* tag 'vfs-6.7.super' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs: (37 commits)
  porting: update locking requirements
  fs: assert that open_mutex isn't held over holder ops
  block: assert that we're not holding open_mutex over blk_report_disk_dead
  block: move bdev_mark_dead out of disk_check_media_change
  block: WARN_ON_ONCE() when we remove active partitions
  block: simplify bdev_del_partition()
  fs: Avoid grabbing sb->s_umount under bdev->bd_holder_lock
  jfs: fix log->bdev_handle null ptr deref in lbmStartIO
  bcache: Fixup error handling in register_cache()
  xfs: Convert to bdev_open_by_path()
  reiserfs: Convert to bdev_open_by_dev/path()
  ocfs2: Convert to use bdev_open_by_dev()
  nfs/blocklayout: Convert to use bdev_open_by_dev/path()
  jfs: Convert to bdev_open_by_dev()
  f2fs: Convert to bdev_open_by_dev/path()
  ext4: Convert to bdev_open_by_dev()
  erofs: Convert to use bdev_open_by_path()
  btrfs: Convert to bdev_open_by_path()
  fs: Convert to bdev_open_by_dev()
  mm/swap: Convert to use bdev_open_by_dev()
  ...
2023-10-28  block: assert that we're not holding open_mutex over blk_report_disk_dead  [Christian Brauner, 1 file, -0/+7]
blk_report_disk_dead() has the following major callers: (1) del_gendisk() (2) blk_mark_disk_dead() Since del_gendisk() acquires disk->open_mutex it's clear that all callers are assumed to be called without disk->open_mutex held. In turn, blk_report_disk_dead() is called without disk->open_mutex held in del_gendisk(). All callers of blk_mark_disk_dead() call it without disk->open_mutex as well. Ensure that it is clear that blk_report_disk_dead() is called without disk->open_mutex on purpose by asserting it and a comment in the code. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20231017184823.1383356-5-hch@lst.de Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-28  block: move bdev_mark_dead out of disk_check_media_change  [Christoph Hellwig, 2 files, -16/+11]
disk_check_media_change is mostly called from ->open, where it makes little sense to mark the file system on the device as dead, as we are just opening it. So instead of calling bdev_mark_dead from disk_check_media_change, move it into the few callers that are not in an open instance. This avoids calling into bdev_mark_dead and thus taking s_umount with open_mutex held. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20231017184823.1383356-4-hch@lst.de Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-28  block: WARN_ON_ONCE() when we remove active partitions  [Christian Brauner, 1 file, -13/+17]
The logic for disk->open_partitions is:

  blkdev_get_by_*()
  -> bdev_is_partition()
     -> blkdev_get_part()
        -> blkdev_get_whole()            // bdev_whole->bd_openers++
        -> if (part->bd_openers == 0)
                disk->open_partitions++
           part->bd_openers++

In other words, when we first claim/open a partition we increment disk->open_partitions, and only when all part->bd_openers are closed will disk->open_partitions be zero. That should mean that disk->open_partitions is always > 0 as long as there's anyone that has an open partition. So the check for disk->open_partitions should mean that we can never remove an active partition that has a holder and holder ops set. Assert that in the code. The main disk isn't removed, so that check doesn't work for disk->part0, which is what we want: after all we only care about partitions, not about the main disk. Link: https://lore.kernel.org/r/20231017184823.1383356-3-hch@lst.de Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-28  block: simplify bdev_del_partition()  [Christian Brauner, 1 file, -1/+12]
BLKPG_DEL_PARTITION refuses to delete partitions that still have openers, i.e., that have an elevated @bdev->bd_openers count. If a device is claimed by setting @bdev->bd_holder and @bdev->bd_holder_ops, @bdev->bd_openers and @bdev->bd_holders are incremented. @bdev->bd_openers is effectively guaranteed to be >= @bdev->bd_holders. So as long as @bdev->bd_openers isn't zero we know that this partition is still in active use and that there might still be @bdev->bd_holder and @bdev->bd_holder_ops set. The only current example is @fs_holder_ops for filesystems. But that means bdev_mark_dead(), which calls into bdev->bd_holder_ops->mark_dead::fs_bdev_mark_dead(), is a nop. As long as there's an elevated @bdev->bd_openers count we can't delete the partition, and if there isn't an elevated @bdev->bd_openers count then there's no @bdev->bd_holder or @bdev->bd_holder_ops. So simply open-code what we need to do. This gets rid of one more instance where we acquire s_umount under @disk->open_mutex. Link: https://lore.kernel.org/r/20231016-fototermin-umriss-59f1ea6c1fe6@brauner Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20231017184823.1383356-2-hch@lst.de Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-28  fs: Avoid grabbing sb->s_umount under bdev->bd_holder_lock  [Jan Kara, 2 files, -4/+6]
The implementation of bdev holder operations such as fs_bdev_mark_dead() and fs_bdev_sync() grabs the sb->s_umount semaphore under bdev->bd_holder_lock. This is problematic because it leads to a disk->open_mutex -> sb->s_umount lock ordering, which is counterintuitive (usually we grab higher-level (e.g. filesystem) locks first and lower-level (e.g. block layer) locks later) and indeed makes lockdep complain about possible locking cycles whenever we open a block device while holding the sb->s_umount semaphore. Implement a function bdev_super_lock_shared() which safely transitions from holding bdev->bd_holder_lock to holding sb->s_umount on an alive superblock without introducing the problematic lock dependency. Use this function in fs_bdev_sync() and fs_bdev_mark_dead(). Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20231018152924.3858-1-jack@suse.cz Link: https://lore.kernel.org/r/20231017184823.1383356-1-hch@lst.de Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-28  block: Use bdev_open_by_dev() in disk_scan_partitions() and blkdev_bszset()  [Jan Kara, 2 files, -8/+10]
Convert disk_scan_partitions() and blkdev_bszset() to use bdev_open_by_dev(). Acked-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20230927093442.25915-3-jack@suse.cz Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-28  block: Use bdev_open_by_dev() in blkdev_open()  [Jan Kara, 2 files, -16/+31]
Convert blkdev_open() to use bdev_open_by_dev(). To be able to propagate handle from blkdev_open() to blkdev_release() we need to stop using existence of file->private_data to determine exclusive block device opens. Use bdev_handle->mode for this purpose since file->f_flags isn't usable for this (O_EXCL is cleared from the flags during open). Acked-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20230927093442.25915-2-jack@suse.cz Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-28  block: Provide bdev_open_* functions  [Jan Kara, 1 file, -0/+48]
Create struct bdev_handle that contains all parameters that need to be passed to blkdev_put() and provide bdev_open_* functions that return this structure instead of plain bdev pointer. This will eventually allow us to pass one more argument to blkdev_put() (renamed to bdev_release()) without too much hassle. Acked-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20230927093442.25915-1-jack@suse.cz Signed-off-by: Christian Brauner <brauner@kernel.org>
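Roughly, the shape of the new API described here (field order, exact types and the blkdev_put() rename are taken from the series description; details are assumptions, not copied from the tree):

    struct bdev_handle {
        struct block_device *bdev;
        void *holder;       /* exclusive-open holder, if any */
        blk_mode_t mode;    /* mode the device was opened with */
    };

    struct bdev_handle *bdev_open_by_dev(dev_t dev, blk_mode_t mode,
                                         void *holder,
                                         const struct blk_holder_ops *hops);
    struct bdev_handle *bdev_open_by_path(const char *path, blk_mode_t mode,
                                          void *holder,
                                          const struct blk_holder_ops *hops);
    void bdev_release(struct bdev_handle *handle);    /* replaces blkdev_put() */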
2023-10-21  blk-throttle: check for overflow in calculate_bytes_allowed  [Khazhismel Kumykov, 1 file, -0/+6]
The check is inexact — we may incorrectly reject some values that would not overflow — but the rejected values are on the order of exabytes allowed anyway. This fixes a divide error crash on x86 when bps_limit is not configured or is set too high, in the rare case that jiffy_elapsed is greater than HZ. Fixes: e8368b57c006 ("blk-throttle: use calculate_io/bytes_allowed() for throtl_trim_slice()") Fixes: 8d6bbaada2e0 ("blk-throttle: prevent overflow while calculating wait time") Signed-off-by: Khazhismel Kumykov <khazhy@google.com> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/r/20231020223617.2739774-1-khazhy@google.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
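A self-contained sketch of such a guard (names and the clamp value are illustrative; the in-kernel check is shaped differently but has the same intent):

    #include <stdint.h>
    #include <stdio.h>

    #define HZ 1000ULL

    /* bytes_allowed = bps_limit * jiffy_elapsed / HZ, but clamp instead of
     * letting the intermediate product wrap.  The check is deliberately
     * conservative: it may clamp a few values that would not quite
     * overflow, but those are in the exabyte range anyway. */
    static uint64_t calculate_bytes_allowed(uint64_t bps_limit,
                                            uint64_t jiffy_elapsed)
    {
        if (bps_limit && jiffy_elapsed > UINT64_MAX / bps_limit)
            return UINT64_MAX;    /* would overflow: treat as unlimited */
        return bps_limit * jiffy_elapsed / HZ;
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)
               calculate_bytes_allowed(UINT64_MAX / 2, 3 * HZ)); /* clamped */
        printf("%llu\n", (unsigned long long)
               calculate_bytes_allowed(1000000, HZ / 2));        /* 500000 */
        return 0;
    }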
2023-10-19  treewide: mark stuff as __ro_after_init  [Alexey Dobriyan, 1 file, -3/+3]
__read_mostly predates __ro_after_init. Many variables which are marked __read_mostly should have been __ro_after_init from day 1. Also, mark some stuff as "const" and "__init" while I'm at it. [akpm@linux-foundation.org: revert sysctl_nr_open_min, sysctl_nr_open_max changes due to arm warning] [akpm@linux-foundation.org: coding-style cleanups] Link: https://lkml.kernel.org/r/4f6bb9c0-abba-4ee4-a7aa-89265e886817@p183 Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-10-17  powerpc/pseries: PLPKS SED Opal keystore support  [Greg Joyce, 1 file, -0/+1]
Define operations for SED Opal to read/write keys from POWER LPAR Platform KeyStore(PLPKS). This allows non-volatile storage of SED Opal keys. Signed-off-by: Greg Joyce <gjoyce@linux.vnet.ibm.com> Reviewed-by: Jonathan Derrick <jonathan.derrick@linux.dev> Link: https://lore.kernel.org/r/20231004201957.1451669-4-gjoyce@linux.vnet.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-10-17  block: sed-opal: keystore access for SED Opal keys  [Greg Joyce, 1 file, -2/+16]
Allow for permanent SED authentication keys by reading/writing to the SED Opal non-volatile keystore. Signed-off-by: Greg Joyce <gjoyce@linux.vnet.ibm.com> Reviewed-by: Jonathan Derrick <jonathan.derrick@linux.dev> Link: https://lore.kernel.org/r/20231004201957.1451669-3-gjoyce@linux.vnet.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-10-13  block: Fix regression in sed-opal for a saved key.  [Milan Broz, 1 file, -4/+3]
The commit 3bfeb61256643281ac4be5b8a57e9d9da3db4335 introduced the use of the keyring for sed-opal. Unfortunately, there is also a possibility to save the Opal key used in opal_lock_unlock(). This patch switches the order of operations, so that the cached key is used instead of failing in opal_get_key(). The problem was found by the cryptsetup Opal test recently added to the cryptsetup tree. Fixes: 3bfeb6125664 ("block: sed-opal: keyring support for SED keys") Tested-by: Ondrej Kozina <okozina@redhat.com> Signed-off-by: Milan Broz <gmazyland@gmail.com> Link: https://lore.kernel.org/r/20231003100209.380037-1-gmazyland@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-10-12  block: Don't invalidate pagecache for invalid falloc modes  [Sarthak Kukreti, 1 file, -5/+16]
Only call truncate_bdev_range() if the fallocate mode is supported. This fixes a bug where data in the pagecache could be invalidated if the fallocate() was called on the block device with an invalid mode. Fixes: 25f4c41415e5 ("block: implement (some of) fallocate for block devices") Cc: stable@vger.kernel.org Reported-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Sarthak Kukreti <sarthakkukreti@chromium.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Mike Snitzer <snitzer@kernel.org> Fixes: line? I've never seen those wrapped. Link: https://lore.kernel.org/r/20231011201230.750105-1-sarthakkukreti@chromium.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
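One way to express the reordering (a hedged sketch, not the literal diff; the supported-mode list and helper names are assumptions based on the fallocate support blkdev_fallocate() provides):

    /* Validate the mode first ... */
    switch (mode) {
    case FALLOC_FL_ZERO_RANGE:
    case FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE:
    case FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE:
    case FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE | FALLOC_FL_NO_HIDE_STALE:
        break;
    default:
        return -EOPNOTSUPP;    /* unsupported mode: page cache untouched */
    }

    /* ... and only then invalidate the page cache over the range. */
    error = truncate_bdev_range(bdev, file_to_blk_mode(file), start, end);
    if (error)
        return error;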
2023-10-04  partitions/ibm: Introduce defines for magic string length values  [Jan Höppner, 1 file, -9/+13]
The length values for volume label type and volume label id are hard-coded in several places. Provide defines for those values and replace all occurrences accordingly. Note that the length is defined and used, and not the size since the volume label type string and volume label id string are not nul-terminated. Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com> Reviewed-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Link: https://lore.kernel.org/r/20230915131001.697070-4-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-10-04  partitions/ibm: Replace strncpy() and improve readability  [Jan Höppner, 1 file, -25/+61]
strncpy() is deprecated and needs to be replaced. The volume label information strings are not nul-terminated and strncpy() can simply be replaced with memcpy(). To enhance the readability of find_label() alongside this change, the following improvements are made:

 - Introduce the array dasd_vollabels[] containing all information
   necessary for the label detection.
 - Provide a helper function to obtain an index value corresponding to a
   volume label type. This allows the use of a switch statement to reduce
   indentation levels.
 - The 'temp' variable is used to check against valid volume label types.
   In the good case, this variable already contains the volume label type,
   making it unnecessary to copy the information again from e.g.
   label->vol.vollbl. Remove the 'temp' variable and the second copy as
   all the information is already provided.
 - Remove the 'found' variable and replace it with early returns.

Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com> Reviewed-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Link: https://lore.kernel.org/r/20230915131001.697070-3-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
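A stand-alone illustration of the lookup-table idea (ASCII strings instead of the EBCDIC data the real code compares, and made-up names rather than the actual dasd_vollabels[] definition):

    #include <stdio.h>
    #include <string.h>

    #define LABEL_TYPE_LEN 4    /* label type is 4 bytes and not nul-terminated */

    enum vollabel { VOL_VOL1, VOL_LNX1, VOL_CMS1, VOL_UNKNOWN };

    /* One table entry per known volume label type. */
    static const struct {
        enum vollabel idx;
        const char *type;    /* only the first LABEL_TYPE_LEN bytes matter */
    } vollabels[] = {
        { VOL_VOL1, "VOL1" },
        { VOL_LNX1, "LNX1" },
        { VOL_CMS1, "CMS1" },
    };

    /* memcmp(), not strncpy()/strcmp(): the on-disk strings carry no NUL. */
    static enum vollabel label_type_index(const char *type)
    {
        for (size_t i = 0; i < sizeof(vollabels) / sizeof(vollabels[0]); i++)
            if (!memcmp(type, vollabels[i].type, LABEL_TYPE_LEN))
                return vollabels[i].idx;
        return VOL_UNKNOWN;
    }

    int main(void)
    {
        char on_disk[LABEL_TYPE_LEN] = { 'L', 'N', 'X', '1' };    /* no NUL */

        switch (label_type_index(on_disk)) {
        case VOL_LNX1:
            puts("LNX1 label");
            break;
        default:
            puts("no/unknown label");
        }
        return 0;
    }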
2023-10-04  partitions/ibm: Remove unnecessary memset  [Jan Höppner, 1 file, -2/+0]
The data holding the volume label information is zeroed in case no valid volume label was found. Since the label information isn't used in that case, zeroing the data doesn't provide any value whatsoever. Remove the unnecessary memset() call accordingly. Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com> Reviewed-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Link: https://lore.kernel.org/r/20230915131001.697070-2-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-26  badblocks: switch to the improved badblock handling code  [Coly Li, 1 file, -305/+3]
This patch removes the old code of badblocks_set(), badblocks_clear() and badblocks_check(), and makes them wrappers that call _badblocks_set(), _badblocks_clear() and _badblocks_check(). With this change the badblocks handling switches to the improved algorithms in _badblocks_set(), _badblocks_clear() and _badblocks_check(). This patch only contains the deletion of the old code; the new code for the improved algorithms was added in previous patches. Signed-off-by: Coly Li <colyli@suse.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Geliang Tang <geliang.tang@suse.com> Cc: Hannes Reinecke <hare@suse.de> Cc: Jens Axboe <axboe@kernel.dk> Cc: NeilBrown <neilb@suse.de> Cc: Vishal L Verma <vishal.l.verma@intel.com> Cc: Xiao Ni <xni@redhat.com> Reviewed-by: Xiao Ni <xni@redhat.com> Acked-by: Geliang Tang <geliang.tang@suse.com> Link: https://lore.kernel.org/r/20230811170513.2300-7-colyli@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-26  badblocks: improve badblocks_check() for multiple ranges handling  [Coly Li, 1 file, -0/+97]
This patch rewrites badblocks_check() with a similar coding style to _badblocks_set() and _badblocks_clear(). The only difference is that bad blocks checking may now handle multiple ranges in the bad table. A checking range can cover multiple bad block ranges in the bad block table, as in the following condition (C is the checking range; E1, E2, E3 are three bad block ranges in the bad block table):

  +------------------------------------+
  |                 C                  |
  +------------------------------------+
    +----+       +----+       +----+
    | E1 |       | E2 |       | E3 |
    +----+       +----+       +----+

The improved badblocks_check() algorithm will divide checking range C into multiple parts, and handle them in 7 runs of a while-loop:

  +--+ +----+ +----+ +----+ +----+ +----+ +----+
  |C1| | C2 | | C3 | | C4 | | C5 | | C6 | | C7 |
  +--+ +----+ +----+ +----+ +----+ +----+ +----+
       +----+        +----+        +----+
       | E1 |        | E2 |        | E3 |
       +----+        +----+        +----+

The start LBA and length of range E1 will be set as first_bad and bad_sectors for the caller. The return value rule is consistent for multiple ranges. For example, with the following bad block ranges in the bad block table,

  Index No.   Start   Len   Ack
      0        400     20    1
      1        500     50    1
      2        650     20    0

the return value, first_bad and bad_sectors from calling badblocks_check() with different checking ranges can be the following values:

  Checking Start, Len   Return Value   first_bad   bad_sectors
       100, 100              0            N/A          N/A
       100, 310              1            400          10
       100, 440              1            400          10
       100, 540              1            400          10
       100, 600             -1            400          10
       100, 800             -1            400          10

In order to make code review easier, this patch names the improved bad block range checking routine _badblocks_check() and does not change the existing badblocks_check() code yet. A later patch will delete the old code of badblocks_check() and make it a wrapper that calls _badblocks_check(). Then the newly added code won't be mixed up with the old deleted code, which will be clearer and easier for code review. Signed-off-by: Coly Li <colyli@suse.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Geliang Tang <geliang.tang@suse.com> Cc: Hannes Reinecke <hare@suse.de> Cc: Jens Axboe <axboe@kernel.dk> Cc: NeilBrown <neilb@suse.de> Cc: Vishal L Verma <vishal.l.verma@intel.com> Cc: Xiao Ni <xni@redhat.com> Reviewed-by: Xiao Ni <xni@redhat.com> Acked-by: Geliang Tang <geliang.tang@suse.com> Link: https://lore.kernel.org/r/20230811170513.2300-6-colyli@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-26  badblocks: improve badblocks_clear() for multiple ranges handling  [Coly Li, 1 file, -0/+325]
With the fundamental ideas and helper routines from the badblocks_set() improvement, clearing bad blocks for multiple ranges is much simpler. With a similar idea, this patch simplifies bad block range clearing into 5 situations. No matter how complicated the clearing condition is, we just look at the head part of the clearing range together with the relative already-set bad block range from the bad block table; the rest is handled in the next run of the while-loop. Based on the existing helpers added for badblocks_set(), this patch adds two more helpers:

 - front_clear()
   Clear the bad block range from the bad block table which is front
   overlapped with the clearing range.
 - front_splitting_clear()
   Handle the condition where the clearing range hits the middle of an
   already-set bad block range in the bad block table.

As with badblocks_set(), the first part of the clearing range is handled with the relative bad block range found by prev_badblocks(). In most cases a valid hint is provided to prev_badblocks() to avoid unnecessary bad block table iteration. This patch also explains the detailed algorithm in code comments at the beginning of badblocks.c, including which five simplified situations are categorized and how all the bad block range clearing conditions are handled by these five situations. Again, in order to make the code review easier and avoid mixing the code changes together, this patch does not modify badblocks_clear() and implements another routine called _badblocks_clear() for the improvement. A later patch will delete the current code of badblocks_clear() and make it a wrapper around _badblocks_clear(), so the code change can be much clearer for review. Signed-off-by: Coly Li <colyli@suse.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Geliang Tang <geliang.tang@suse.com> Cc: Hannes Reinecke <hare@suse.de> Cc: Jens Axboe <axboe@kernel.dk> Cc: NeilBrown <neilb@suse.de> Cc: Vishal L Verma <vishal.l.verma@intel.com> Cc: Xiao Ni <xni@redhat.com> Reviewed-by: Xiao Ni <xni@redhat.com> Acked-by: Geliang Tang <geliang.tang@suse.com> Link: https://lore.kernel.org/r/20230811170513.2300-5-colyli@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-26  badblocks: improve badblocks_set() for multiple ranges handling  [Coly Li, 1 file, -20/+544]
Recently I received a bug report that the current badblocks code does not properly handle multiple ranges. For example,

  badblocks_set(bb, 32, 1, true);
  badblocks_set(bb, 34, 1, true);
  badblocks_set(bb, 36, 1, true);
  badblocks_set(bb, 32, 12, true);

Then badblocks_show() indeed reports,

  32 3
  36 1

But the expected bad blocks table should be,

  32 12

Obviously only the first 2 ranges are merged and badblocks_set() returns and ignores the rest of the setting range. This behavior is improper: if the caller of badblocks_set() wants to set a range of blocks in the bad blocks table, all of the blocks in the range should be handled even if an earlier part encounters a failure. The desired way for badblocks_set() to set a bad blocks range is:

 - Set as many blocks in the setting range as possible in the bad blocks
   table.
 - Merge the bad block ranges and occupy as few slots in the bad blocks
   table as possible.
 - Be fast.

Indeed the above proposal is complicated, especially with the following restrictions:

 - The setting bad blocks range can be acknowledged or not acknowledged.
 - The bad blocks table size is limited.
 - Memory allocation should be avoided.

The basic idea of the patch is to categorize all possible bad block range setting combinations into a much smaller set of simplified, less special conditions. Inside badblocks_set() there is an implicit loop composed by jumping between the labels 're_insert' and 'update_sectors'. No matter how large the setting bad blocks range is, in every loop iteration just a minimized range from the head is handled by a pre-defined behavior from one of the categorized conditions. The logic is simple and the code flow is manageable. The different relative layouts between the setting range and an existing bad block range are checked and handled (merge, combine, overwrite, insert) by the helpers from the previous patch. This patch makes all the helpers work together following the above idea. This patch only contains the algorithm improvement for badblocks_set(). The following patches contain the improvements for badblocks_clear() and badblocks_check(). But the algorithm in badblocks_set() is fundamental and typical; the other improvements in the clear and check routines are based on all the helpers and ideas in this patch. In order to make the change clearer for code review, this patch does not directly modify the existing badblocks_set(), and just adds a new one named _badblocks_set(). A later patch will remove the current badblocks_set() code and make it a wrapper of _badblocks_set(), so the newly added change won't be mixed with the deleted code and the code review can be easier. Signed-off-by: Coly Li <colyli@suse.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Geliang Tang <geliang.tang@suse.com> Cc: Hannes Reinecke <hare@suse.de> Cc: Jens Axboe <axboe@kernel.dk> Cc: NeilBrown <neilb@suse.de> Cc: Vishal L Verma <vishal.l.verma@intel.com> Cc: Wols Lists <antlists@youngman.org.uk> Cc: Xiao Ni <xni@redhat.com> Reviewed-by: Xiao Ni <xni@redhat.com> Acked-by: Geliang Tang <geliang.tang@suse.com> Link: https://lore.kernel.org/r/20230811170513.2300-4-colyli@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
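To make the expected semantics concrete, here is a stand-alone toy that merges ranges the way the example above requires; it is not the kernel's _badblocks_set() algorithm (no ack flag, no table-size pressure, no hint handling):

    #include <stdio.h>

    #define MAX_BAD 512

    struct bb { unsigned long long start, len; };

    static struct bb tab[MAX_BAD];
    static int nr;

    /* Append the new range, sort by start, then merge touching/overlapping
     * ranges in one pass -- so a later wide range swallows earlier ones. */
    static void bb_set(unsigned long long start, unsigned long long len)
    {
        tab[nr++] = (struct bb){ start, len };

        for (int i = 0; i < nr; i++)              /* simple selection-style sort */
            for (int j = i + 1; j < nr; j++)
                if (tab[j].start < tab[i].start) {
                    struct bb t = tab[i]; tab[i] = tab[j]; tab[j] = t;
                }

        int m = 0;
        for (int i = 1; i < nr; i++) {
            if (tab[i].start <= tab[m].start + tab[m].len) {   /* touches/overlaps */
                unsigned long long end = tab[i].start + tab[i].len;
                if (end > tab[m].start + tab[m].len)
                    tab[m].len = end - tab[m].start;
            } else {
                tab[++m] = tab[i];
            }
        }
        nr = m + 1;
    }

    int main(void)
    {
        bb_set(32, 1);
        bb_set(34, 1);
        bb_set(36, 1);
        bb_set(32, 12);    /* must swallow the three earlier ranges */

        for (int i = 0; i < nr; i++)
            printf("%llu %llu\n", tab[i].start, tab[i].len);   /* "32 12" */
        return 0;
    }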
2023-09-26  badblocks: add helper routines for badblock ranges handling  [Coly Li, 1 file, -0/+386]
This patch adds several helper routines to improve badblock ranges handling. These helper routines will be used later in the improved versions of badblocks_set()/badblocks_clear()/badblocks_check().

 - Helpers prev_by_hint() and prev_badblocks() are used to find the bad
   range in the bad table which the searching range starts at or after.

 - The following helpers decide the relative layout between the
   manipulating range and an existing bad block range from the bad table:
   - can_merge_behind()
     Return 'true' if the manipulating range can backward merge with the
     bad block range.
   - can_merge_front()
     Return 'true' if the manipulating range can forward merge with the
     bad block range.
   - can_combine_front()
     Return 'true' if two adjacent bad block ranges before the
     manipulating range can be merged.
   - overlap_front()
     Return 'true' if the manipulating range exactly overlaps with the bad
     block range in front of its range.
   - overlap_behind()
     Return 'true' if the manipulating range exactly overlaps with the bad
     block range behind its range.
   - can_front_overwrite()
     Return 'true' if the manipulating range can forward overwrite the bad
     block range in front of its range.

 - The following helpers add the manipulating range into the bad block
   table. A different routine is called for each specific relative layout
   between the manipulating range and the other bad block ranges in the
   bad block table:
   - behind_merge()
     Merge the manipulating range with the bad block range behind its
     range, and return the merged length in sectors.
   - front_merge()
     Merge the manipulating range with the bad block range in front of its
     range, and return the merged length in sectors.
   - front_combine()
     Combine the two adjacent bad block ranges before the manipulating
     range into a larger one.
   - front_overwrite()
     Overwrite part or all of the bad block range which is in front of the
     manipulating range. The overwrite may split an existing bad block
     range and generate more bad block ranges in the bad block table.
   - insert_at()
     Insert the manipulating range at a specific location in the bad block
     table.

All the above helpers are used in later patches to improve the bad block ranges handling for badblocks_set()/badblocks_clear()/badblocks_check(). Signed-off-by: Coly Li <colyli@suse.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Geliang Tang <geliang.tang@suse.com> Cc: Hannes Reinecke <hare@suse.de> Cc: Jens Axboe <axboe@kernel.dk> Cc: NeilBrown <neilb@suse.de> Cc: Vishal L Verma <vishal.l.verma@intel.com> Cc: Xiao Ni <xni@redhat.com> Reviewed-by: Xiao Ni <xni@redhat.com> Acked-by: Geliang Tang <geliang.tang@suse.com> Link: https://lore.kernel.org/r/20230811170513.2300-3-colyli@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-26  block: fix kernel-doc for disk_force_media_change()  [Randy Dunlap, 1 file, -1/+0]
Drop one function parameter's kernel-doc comment since the parameter was removed. This prevents a kernel-doc warning: block/disk-events.c:300: warning: Excess function parameter 'events' description in 'disk_force_media_change' Fixes: ab6860f62bfe ("block: simplify the disk_force_media_change interface") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: kernel test robot <lkp@intel.com> Closes: lore.kernel.org/r/202309060957.vfl0mUur-lkp@intel.com Cc: Christoph Hellwig <hch@lst.de> Cc: Jens Axboe <axboe@kernel.dk> Cc: linux-block@vger.kernel.org Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230926005232.23666-1-rdunlap@infradead.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-22  blk-mq: update driver tags request table when start request  [Chengming Zhou, 2 files, -3/+1]
Now we update the driver tags request table in blk_mq_get_driver_tag(), so drivers that support queue_rqs() have to update that inflight table by themselves. Move it to blk_mq_start_request(), which is a better place: it is where we set up the deadline for the request timeout check, and it is just where the request becomes inflight. Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20230913151616.3164338-5-chengming.zhou@linux.dev Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-22  blk-mq: support batched queue_rqs() on shared tags queue  [Chengming Zhou, 1 file, -6/+1]
Since active requests are already accounted when allocating driver tags, we can remove this limit now. Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20230913151616.3164338-4-chengming.zhou@linux.dev Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-22  blk-mq: remove RQF_MQ_INFLIGHT  [Chengming Zhou, 3 files, -14/+2]
Since the previous patch changed to only account active requests when we really allocate the driver tag, RQF_MQ_INFLIGHT can be removed without any double-accounting problem:

 1. none elevator: the flush request will use the first pending request's
    driver tag, so it won't be double accounted.
 2. other elevators: the flush request will be accounted when the driver
    tag is allocated at issue time, and unaccounted when it puts the
    driver tag.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20230913151616.3164338-3-chengming.zhou@linux.dev Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-22  blk-mq: account active requests when get driver tag  [Chengming Zhou, 2 files, -36/+53]
There is a limitation that batched queue_rqs() can't work on a shared tags queue, since the accounting of active requests can't be done there. Today we account active requests only in blk_mq_get_driver_tag(), which (with the none elevator) is not actually where we get the driver tag. To support batched queue_rqs() on shared tags queues, move the accounting of active requests to where we really get the driver tag:

 1. none elevator: blk_mq_get_tags() and blk_mq_get_tag()
 2. other elevators: __blk_mq_alloc_driver_tag()

This is clearer and matches the unaccounting side, which happens when we put the driver tag. The other good point is that we don't need the RQF_MQ_INFLIGHT trick anymore, which was used to avoid double accounting of the flush request. Now we only account when we actually get the driver tag, so all is good. We will remove RQF_MQ_INFLIGHT in the next patch. Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20230913151616.3164338-2-chengming.zhou@linux.dev Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-18  block: correct stale comment in rq_qos_wait  [Kemeng Shi, 1 file, -1/+1]
The rq_qos_wait calls common wake-up function rq_qos_wake_function to get token. Just replace stale wbt_wake_function with rq_qos_wake_function in comment. Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/r/20230914091508.36232-1-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-12  blk-mq: fix tags UAF when shrinking q->nr_hw_queues  [Chengming Zhou, 1 file, -6/+7]
When nr_hw_queues shrink, we free the excess tags before realloc'ing hw_ctxs for each queue. During that resize, we may need to access those tags, like blk_mq_tag_idle(hctx) will access queue shared tags. This can cause a slab use-after-free, as reported by KASAN. Fix it by moving the releasing of excess tags to the end. Fixes: e1dd7bc93029 ("blk-mq: fix tags leak when shrink nr_hw_queues") Reported-by: Yi Zhang <yi.zhang@redhat.com> Closes: https://lore.kernel.org/all/CAHj4cs_CK63uoDpGBGZ6DN4OCTpzkR3UaVgK=LX8Owr8ej2ieQ@mail.gmail.com/ Cc: Ming Lei <ming.lei@redhat.com> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/r/20230908005702.2183908-1-chengming.zhou@linux.dev Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-09-06  block: fix pin count management when merging same-page segments  [Christoph Hellwig, 1 file, -4/+3]
There is no need to unpin the added page when adding it to the bio fails as that is done by the loop below. Instead we want to unpin it when adding a single page to the bio more than once as bio_release_pages will only unpin it once. Fixes: d1916c86ccdc ("block: move same page handling from __bio_add_pc_page to the callers") Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Link: https://lore.kernel.org/r/20230905124731.328255-1-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-08-31  block: don't add or resize partition on the disk with GENHD_FL_NO_PART  [Li Lingfeng, 1 file, -0/+2]
Commit a33df75c6328 ("block: use an xarray for disk->part_tbl") removed disk_expand_part_tbl() in add_partition(), which means all kinds of devices will support extended dynamic `dev_t`. However, some devices with GENHD_FL_NO_PART are not expected to add or resize partitions. Fix this by checking GENHD_FL_NO_PART before adding or resizing a partition. Fixes: a33df75c6328 ("block: use an xarray for disk->part_tbl") Signed-off-by: Li Lingfeng <lilingfeng3@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230831075900.1725842-1-lilingfeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
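The added guard presumably looks something like this (a sketch; the exact placement in the partition add/resize paths and the error code are assumptions):

    if (disk->flags & GENHD_FL_NO_PART)
        return -EINVAL;    /* this disk never supports partitions */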
2023-08-31  block: remove the call to file_remove_privs in blkdev_write_iter  [Christoph Hellwig, 1 file, -4/+0]
file_remove_privs instantly returns 0 when not called for regular files, so don't bother. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20230831121911.280155-1-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-08-30  blk-throttle: consider 'carryover_ios/bytes' in throtl_trim_slice()  [Yu Kuai, 1 file, -8/+13]
Currently, 'carryover_ios/bytes' is not handled in throtl_trim_slice(); as a consequence, 'carryover_ios/bytes' will be used to throttle bios multiple times. For example:

 1) set the iops limit to 100; the slice starts at 0 and ends at 100ms;
 2) at time 0, 10 ios are dispatched; they won't be throttled and io_disp
    is 10;
 3) still at time 0, update the iops limit to 1000; carryover_ios is
    updated to (0 - 10) = -10;
 4) in this slice (0 - 100ms), io_allowed = 100 + (-10) = 90, which means
    only 90 ios can be dispatched without waiting;
 5) assume that an io is throttled in the slice (0 - 100ms), and
    throtl_trim_slice() updates the slice to (100ms - 200ms). In this case,
    'carryover_ios/bytes' is not cleared and still only 90 ios can be
    dispatched between 100ms and 200ms.

Fix this problem by updating 'carryover_ios/bytes' in throtl_trim_slice(). Fixes: a880ae93e5b5 ("blk-throttle: fix io hung due to configuration updates") Reported-by: zhuxiaohui <zhuxiaohui.400@bytedance.com> Link: https://lore.kernel.org/all/20230812072116.42321-1-zhuxiaohui.400@bytedance.com/ Signed-off-by: Yu Kuai <yukuai3@huawei.com> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/r/20230816012708.1193747-5-yukuai1@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-08-30  blk-throttle: use calculate_io/bytes_allowed() for throtl_trim_slice()  [Yu Kuai, 1 file, -45/+41]
There are no functional changes, just make the code cleaner. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/r/20230816012708.1193747-4-yukuai1@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-08-30  blk-throttle: fix wrong comparation while 'carryover_ios/bytes' is negative  [Yu Kuai, 1 file, -6/+5]
carryover_ios/bytes[] can be negative in the case that ios are dispatched in the slice in advance, and then the configuration is updated. For example:

 1) set the iops limit to 1000; the slice starts at 0 and ends at 100ms;
 2) at time 0, 100 ios are dispatched; they will not be throttled, hence
    io_disp is 100;
 3) still at time 0, update the iops limit to 100; carryover_ios is then
    (0 - 100) = -100;
 4) then, dispatch a new io at time 0. The expected result is that this io
    will wait for 1s. The calculation in tg_within_iops_limit():

      io_disp = 0;
      io_allowed = calculate_io_allowed + carryover_ios
                 = 10 + (-100) = -90;

    the io won't be throttled if (io_disp + 1 < io_allowed) passes.

Before this patch, in step 4) (io_disp + 1 < io_allowed) passes, because -90 interpreted as an unsigned value is huge, and such an io won't be throttled. Fix this problem by checking whether 'io/bytes_allowed' is negative first. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/r/20230816012708.1193747-3-yukuai1@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
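A self-contained C program reproducing the arithmetic above shows why the unsigned comparison lets the io through (variable names are made up; this is not kernel code):

    #include <stdio.h>

    int main(void)
    {
        long long io_disp = 0;       /* ios dispatched in the current slice */
        long long calculated = 10;   /* calculate_io_allowed() result */
        long long carryover = -100;  /* negative after the limit was lowered */

        /* Buggy: the sum is treated as unsigned, -90 becomes ~1.8e19 and
         * the io is never throttled. */
        unsigned long long io_allowed_u = calculated + carryover;
        printf("unsigned io_allowed = %llu -> throttle? %s\n", io_allowed_u,
               (unsigned long long)(io_disp + 1) <= io_allowed_u ? "no" : "yes");

        /* Fixed: notice the negative allowance first. */
        long long io_allowed_s = calculated + carryover;
        printf("signed   io_allowed = %lld -> throttle? %s\n", io_allowed_s,
               io_allowed_s > 0 && io_disp + 1 <= io_allowed_s ? "no" : "yes");
        return 0;
    }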