author		Ryusuke Konishi <konishi.ryusuke@gmail.com>	2024-05-11 03:29:42 +0300
committer	Andrew Morton <akpm@linux-foundation.org>	2024-05-20 00:36:21 +0300
commit		db3e24a02e29b507c24c0adb4d22914c65dab763 (patch)
tree		d2cbd7f222886dd7d159fd9713ebc392cdf7f6f3 /fs/nilfs2
parent		28d2188709d9c19a7c4601c6870edd9fa0527379 (diff)
download	linux-db3e24a02e29b507c24c0adb4d22914c65dab763.tar.xz
nilfs2: make block erasure safe in nilfs_finish_roll_forward()
The implementation of writing a zero-fill block in
nilfs_finish_roll_forward() is not safe.

The buffer is being cleared without acquiring a lock or setting the
uptodate flag, so theoretically, between the time the buffer's data is
cleared and the time it is written back to the block device using
sync_dirty_buffer(), that zero data can be undone by concurrent block
device reads.

Since this buffer points to a location that has been read from disk once,
the uptodate flag will most likely remain, but since it was obtained with
__getblk(), that is not guaranteed.

In other words, this is exceptional, and this function itself is not
normally called (only once when mounting after a specific pattern of
unclean shutdown), so it is highly unlikely that this will actually cause
a problem.

Anyway, eliminate this potential race issue by protecting the clearing of
buffer data with a buffer lock and setting the buffer's uptodate flag
within the protected section.

Link: https://lkml.kernel.org/r/20240511002942.9608-1-konishi.ryusuke@gmail.com
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'fs/nilfs2')
-rw-r--r--	fs/nilfs2/recovery.c	4
1 file changed, 4 insertions, 0 deletions
diff --git a/fs/nilfs2/recovery.c b/fs/nilfs2/recovery.c
index 020f304c600e..b638dc06df2f 100644
--- a/fs/nilfs2/recovery.c
+++ b/fs/nilfs2/recovery.c
@@ -702,8 +702,12 @@ static void nilfs_finish_roll_forward(struct the_nilfs *nilfs,
 	if (WARN_ON(!bh))
 		return; /* should never happen */
 
+	lock_buffer(bh);
 	memset(bh->b_data, 0, bh->b_size);
+	set_buffer_uptodate(bh);
 	set_buffer_dirty(bh);
+	unlock_buffer(bh);
+
 	err = sync_dirty_buffer(bh);
 	if (unlikely(err))
 		nilfs_warn(nilfs->ns_sb,
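
For context, the patched zero-fill path in nilfs_finish_roll_forward() reads
roughly as below. This is a sketch reconstructed from the hunk above and the
commit description; the __getblk() lookup, the warning text, and the trailing
brelse() are not part of the hunk itself and may differ slightly from the
actual tree at this commit.

	bh = __getblk(nilfs->ns_bdev, ri->ri_lsegs_start, nilfs->ns_blocksize);
	if (WARN_ON(!bh))
		return; /* should never happen */

	/*
	 * Clear the block and mark it uptodate while holding the buffer
	 * lock, so a concurrent block device read cannot repopulate the
	 * buffer between the memset() and the write-back below.
	 */
	lock_buffer(bh);
	memset(bh->b_data, 0, bh->b_size);
	set_buffer_uptodate(bh);
	set_buffer_dirty(bh);
	unlock_buffer(bh);

	err = sync_dirty_buffer(bh);
	if (unlikely(err))
		nilfs_warn(nilfs->ns_sb,
			   "buffer sync write failed during post-cleaning of recovery.");
	brelse(bh);

Holding the buffer lock across both the memset() and set_buffer_uptodate()
is what closes the window described in the commit message: a block device
read of the same buffer must take the same lock, so it can no longer slip in
between the clear and the write-back done by sync_dirty_buffer().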