author    Liu Yuntao <liuyuntao10@huawei.com>  2021-09-25 01:43:32 +0300
committer Linus Torvalds <torvalds@linux-foundation.org>  2021-09-25 02:13:34 +0300
commit    de6ee659684b1a2b149e0780d3c5e8032f3647d6 (patch)
tree      0a7c7ffc5aecd0e884e79404de81b1fbc87cfd70 /mm
parent    867050247e295cf20fce046a92a7e6491fcfe066 (diff)
mm/shmem.c: fix judgment error in shmem_is_huge()
In the case of SHMEM_HUGE_WITHIN_SIZE, the page index is not rounded up
correctly. When the page index points to the first page in a huge page,
round_up() cannot bring it to the end of the huge page, but to the end
of the previous one.

An example: HPAGE_PMD_NR on my machine is 512 (2 MB huge page size).
After allocating a 3000 KB buffer, I access it at location 2050 KB. In
shmem_is_huge(), the corresponding index happens to be 512. After being
rounded up by HPAGE_PMD_NR, it will still be 512, which is smaller than
i_size, and shmem_is_huge() will return true. As a result, my buffer
takes an additional huge page, and that shouldn't happen when
shmem_enabled is set to within_size.

Link: https://lkml.kernel.org/r/20210909032007.18353-1-liuyuntao10@huawei.com
Fixes: f3f0e1d2150b2b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Liu Yuntao <liuyuntao10@huawei.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: wuxu.wu <wuxu.wu@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
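For reference, a minimal userspace sketch of the arithmetic described above. The round_up() macro here matches the kernel's behavior for power-of-two alignments, and PAGE_SHIFT/HPAGE_PMD_NR are hardcoded to the 4 KB page / 2 MB huge page values from the report; this is an illustration of the numbers, not kernel code:

#include <stdio.h>

/* Power-of-two rounding, same result as the kernel's round_up(). */
#define round_up(x, y)	((((x) - 1) | ((y) - 1)) + 1)

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)	/* 4 KB pages */
#define HPAGE_PMD_NR	512UL			/* 2 MB huge page = 512 pages */

int main(void)
{
	unsigned long i_size = 3000UL * 1024;	/* the 3000 KB buffer */
	unsigned long index = 512;		/* page index of the 2050 KB access */
	unsigned long old_idx = round_up(index, HPAGE_PMD_NR);		/* 512 */
	unsigned long new_idx = round_up(index + 1, HPAGE_PMD_NR);	/* 1024 */

	i_size = round_up(i_size, PAGE_SIZE);	/* 750 pages' worth */

	/* Old check: 750 >= 512 is true, so a second huge page is granted. */
	printf("old: %d\n", (i_size >> PAGE_SHIFT) >= old_idx);
	/* Fixed check: 750 >= 1024 is false, so no extra huge page. */
	printf("new: %d\n", (i_size >> PAGE_SHIFT) >= new_idx);
	return 0;
}

This also explains why the patch below can drop the explicit i_size >= HPAGE_PMD_SIZE test: round_up(index + 1, HPAGE_PMD_NR) is always at least HPAGE_PMD_NR, so an i_size that passes the comparison already spans at least one full huge page.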
Diffstat (limited to 'mm')
-rw-r--r--  mm/shmem.c | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/mm/shmem.c b/mm/shmem.c
index 88742953532c..b5860f4a2738 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -490,9 +490,9 @@ bool shmem_is_huge(struct vm_area_struct *vma,
case SHMEM_HUGE_ALWAYS:
return true;
case SHMEM_HUGE_WITHIN_SIZE:
- index = round_up(index, HPAGE_PMD_NR);
+ index = round_up(index + 1, HPAGE_PMD_NR);
i_size = round_up(i_size_read(inode), PAGE_SIZE);
- if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)
+ if (i_size >> PAGE_SHIFT >= index)
return true;
fallthrough;
case SHMEM_HUGE_ADVISE: