author    Vladimir V. Saveliev <vs@namesys.com>    2006-06-27 13:53:57 +0400
committer Linus Torvalds <torvalds@g5.osdl.org>   2006-06-28 04:32:39 +0400
commit    6527c2bdf1f833cc18e8f42bd97973d583e4aa83 (patch)
tree      737055ae276cdfa75e7b3e55a3ebdd1f88105606 /mm
parent    1c0f16e5cdff59f3b132a1b0c0d44a941f8813d2 (diff)
download  linux-6527c2bdf1f833cc18e8f42bd97973d583e4aa83.tar.xz
[PATCH] generic_file_buffered_write(): deadlock on vectored write
generic_file_buffered_write() prefaults in user pages in order to avoid a deadlock when copying from the same page that the write targets.

However, there is a problem when the write is vectored: fault_in_pages_readable() brings in the current segment, or part of it (maxlen). filemap_copy_from_user_iovec(), on the other hand, is called to copy a number of bytes (bytes) that may exceed the current segment, so it switches to the next segment, which has not been brought in yet. A page fault is generated, and that causes the deadlock when the fault is for the same page the write goes to: the page being written is locked and not up to date, so the fault handler deadlocks trying to lock the already-locked page.

[akpm@osdl.org: somewhat rewritten]
Cc: Neil Brown <neilb@suse.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
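The clamping logic the patch introduces is easy to demonstrate in isolation. The following is a minimal userspace sketch, not the kernel code itself: PAGE_SIZE stands in for PAGE_CACHE_SIZE, and copy_len() is a hypothetical helper that mirrors the two min() clamps the patch adds. It shows why the copy length must stop at the current iovec segment, so that a single prefault of `bytes` covers everything the copy will touch.

#include <stdio.h>
#include <sys/types.h>
#include <sys/uio.h>

#define PAGE_SIZE 4096UL	/* stand-in for PAGE_CACHE_SIZE */

static size_t min_size(size_t a, size_t b)
{
	return a < b ? a : b;
}

/*
 * Mirror of the patched clamping: the copy may not cross the end of
 * the page, the end of the caller's write, or the end of the current
 * iovec segment.  Because the result never crosses a segment
 * boundary, prefaulting exactly this many bytes guarantees the copy
 * itself cannot fault while the destination page is locked.
 */
static size_t copy_len(off_t pos, size_t count,
		       const struct iovec *cur_iov, size_t iov_base)
{
	size_t offset = pos & (PAGE_SIZE - 1);	/* offset within page */
	size_t bytes = PAGE_SIZE - offset;	/* room left in the page */

	bytes = min_size(bytes, count);		/* caller's write size */
	bytes = min_size(bytes, cur_iov->iov_len - iov_base); /* segment */
	return bytes;
}

int main(void)
{
	char a[100], b[100];
	struct iovec iov[2] = {
		{ .iov_base = a, .iov_len = sizeof(a) },
		{ .iov_base = b, .iov_len = sizeof(b) },
	};

	/*
	 * 150 bytes requested, 60 already consumed from the first
	 * segment: only 40 bytes remain before the segment boundary,
	 * so the copy must stop there rather than at the page
	 * boundary.  The pre-patch code would have prefaulted those
	 * 40 bytes but copied 150, faulting on iov[1] mid-copy.
	 */
	printf("next copy: %zu bytes\n",
	       copy_len(0, 150, &iov[0], 60));	/* prints 40 */
	return 0;
}

Under these assumptions the trade-off is visible: writes are split at segment boundaries, at some cost for many-segment vectors, in exchange for the guarantee that the prefault covers the entire copy.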
Diffstat (limited to 'mm')
-rw-r--r--  mm/filemap.c  18
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 9c7334bafda8..d504d6e98886 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2095,14 +2095,21 @@ generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
 	do {
 		unsigned long index;
 		unsigned long offset;
-		unsigned long maxlen;
 		size_t copied;
 
 		offset = (pos & (PAGE_CACHE_SIZE -1)); /* Within page */
 		index = pos >> PAGE_CACHE_SHIFT;
 		bytes = PAGE_CACHE_SIZE - offset;
-		if (bytes > count)
-			bytes = count;
+
+		/* Limit the size of the copy to the caller's write size */
+		bytes = min(bytes, count);
+
+		/*
+		 * Limit the size of the copy to that of the current segment,
+		 * because fault_in_pages_readable() doesn't know how to walk
+		 * segments.
+		 */
+		bytes = min(bytes, cur_iov->iov_len - iov_base);
 
 		/*
 		 * Bring in the user page that we will copy from _first_.
@@ -2110,10 +2117,7 @@ generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
 		 * same page as we're writing to, without it being marked
 		 * up-to-date.
 		 */
-		maxlen = cur_iov->iov_len - iov_base;
-		if (maxlen > bytes)
-			maxlen = bytes;
-		fault_in_pages_readable(buf, maxlen);
+		fault_in_pages_readable(buf, bytes);
 
 		page = __grab_cache_page(mapping,index,&cached_page,&lru_pvec);
 		if (!page) {