author	Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>	2009-07-28 12:55:29 +0400
committer	Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>	2009-08-01 17:48:32 +0400
commit	a97778457f22181e8c38c4cd7d7e528378738a98 (patch)
tree	d2ee3d9491ab2b17551f099d9a6119407700dbd2 /fs
parent	ed680c4ad478d0fee9740f7d029087f181346564 (diff)
download	linux-a97778457f22181e8c38c4cd7d7e528378738a98.tar.xz
nilfs2: fix oops due to inconsistent state in page with discrete b-tree nodes
Andrea Gelmini reported that a kernel oops hit a nilfs filesystem with a 1KB block size when doing rsync.

This turned out to be caused by an inconsistency of dirty state between a page and its buffers storing b-tree node blocks: if the page had multiple buffers split over multiple logs, and those logs were written at the same time, the dirty flag remained set on the page even after every dirty flag on its buffers had been cleared.

This fixes the failure by properly dropping the dirty flag for pages holding multiple discrete b-tree node buffers.

Reported-by: Andrea Gelmini <andrea.gelmini@gmail.com>
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Tested-by: Andrea Gelmini <andrea.gelmini@gmail.com>
Cc: stable@kernel.org
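The failure mode described above is easier to see in a self-contained model. The sketch below is not kernel code: fake_page, page_buffers_clean and write_log are simplified stand-ins for struct page, nilfs_page_buffers_clean() and log writing, assuming a 4KB page backed by four 1KB buffers whose dirty flags are cleared log by log, while the page-level dirty flag may only be dropped once every buffer is clean.

#include <stdbool.h>
#include <stdio.h>

#define BUFFERS_PER_PAGE 4	/* 4KB page / 1KB block size */

/* Simplified stand-in for a page and its attached buffers. */
struct fake_page {
	bool dirty;				/* models PageDirty()      */
	bool buffer_dirty[BUFFERS_PER_PAGE];	/* models buffer_dirty(bh) */
};

/* Models what nilfs_page_buffers_clean() checks: no buffer is dirty. */
static bool page_buffers_clean(const struct fake_page *p)
{
	for (int i = 0; i < BUFFERS_PER_PAGE; i++)
		if (p->buffer_dirty[i])
			return false;
	return true;
}

/* Writing one log cleans only the buffers that belong to that log. */
static void write_log(struct fake_page *p, int first, int last)
{
	for (int i = first; i <= last; i++)
		p->buffer_dirty[i] = false;

	/*
	 * The point of the fix: once the last dirty buffer has been
	 * cleaned, the page-level dirty flag must be cancelled too,
	 * otherwise the page and its buffers disagree.
	 */
	if (p->dirty && page_buffers_clean(p))
		p->dirty = false;
}

int main(void)
{
	struct fake_page page = {
		.dirty = true,
		.buffer_dirty = { true, true, true, true },
	};

	write_log(&page, 0, 1);	/* first log covers buffers 0-1  */
	write_log(&page, 2, 3);	/* second log covers buffers 2-3 */

	printf("page dirty: %d, buffers clean: %d\n",
	       page.dirty, page_buffers_clean(&page));
	return 0;
}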
Diffstat (limited to 'fs')
-rw-r--r--	fs/nilfs2/segment.c	16
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
index 8b5e4778cf28..51ff3d0a4ee2 100644
--- a/fs/nilfs2/segment.c
+++ b/fs/nilfs2/segment.c
@@ -1859,12 +1859,26 @@ static void nilfs_end_page_io(struct page *page, int err)
 	if (!page)
 		return;
 
-	if (buffer_nilfs_node(page_buffers(page)) && !PageWriteback(page))
+	if (buffer_nilfs_node(page_buffers(page)) && !PageWriteback(page)) {
 		/*
 		 * For b-tree node pages, this function may be called twice
 		 * or more because they might be split in a segment.
 		 */
+		if (PageDirty(page)) {
+			/*
+			 * For pages holding split b-tree node buffers, dirty
+			 * flag on the buffers may be cleared discretely.
+			 * In that case, the page is once redirtied for
+			 * remaining buffers, and it must be cancelled if
+			 * all the buffers get cleaned later.
+			 */
+			lock_page(page);
+			if (nilfs_page_buffers_clean(page))
+				__nilfs_clear_page_dirty(page);
+			unlock_page(page);
+		}
 		return;
+	}
 	__nilfs_end_page_io(page, err);
 }