author     Johannes Weiner <hannes@cmpxchg.org>  2009-04-01 02:19:35 +0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2009-04-01 19:59:12 +0400
commit     9786bf841da57fac3457a1dac41acb4c1f2eced6 (patch)
tree       763cb1dd3f6a3c4faf1c31b44ad2e7bef0d206d2 /mm
parent     d979677c4c02f0a72db5a03ecd8184bd9d6695c8 (diff)
download   linux-9786bf841da57fac3457a1dac41acb4c1f2eced6.tar.xz
vmscan: clip swap_cluster_max in shrink_all_memory()
shrink_inactive_list() scans in sc->swap_cluster_max chunks until it hits
the scan limit it was passed.

	shrink_inactive_list()
	{
		do {
			isolate_pages(swap_cluster_max)
			shrink_page_list()
		} while (nr_scanned < max_scan);
	}

This assumes that swap_cluster_max is not bigger than the scan limit
because the latter is checked only after at least one iteration.

In shrink_all_memory() sc->swap_cluster_max is initialized to the overall
reclaim goal in the beginning but not decreased while reclaim is making
progress, which leads to subsequent calls to shrink_inactive_list()
reclaiming way too much in the one iteration that is done unconditionally.

Set sc->swap_cluster_max always to the proper goal before doing
shrink_all_zones() -> shrink_list() -> shrink_inactive_list().

While the current shrink_all_memory() happily reclaims more than actually
requested, this patch fixes it to never exceed the goal:

unpatched
wanted=10000 reclaimed=13356
wanted=10000 reclaimed=19711
wanted=10000 reclaimed=10289
wanted=10000 reclaimed=17306
wanted=10000 reclaimed=10700
wanted=10000 reclaimed=10004
wanted=10000 reclaimed=13301
wanted=10000 reclaimed=10976
wanted=10000 reclaimed=10605
wanted=10000 reclaimed=10088
wanted=10000 reclaimed=15000

patched
wanted=10000 reclaimed=10000
wanted=10000 reclaimed=9599
wanted=10000 reclaimed=8476
wanted=10000 reclaimed=8326
wanted=10000 reclaimed=10000
wanted=10000 reclaimed=10000
wanted=10000 reclaimed=9919
wanted=10000 reclaimed=10000
wanted=10000 reclaimed=10000
wanted=10000 reclaimed=10000
wanted=10000 reclaimed=10000
wanted=10000 reclaimed=9624
wanted=10000 reclaimed=10000
wanted=10000 reclaimed=10000
wanted=8500 reclaimed=8092
wanted=316 reclaimed=316

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: MinChan Kim <minchan.kim@gmail.com>
Acked-by: Nigel Cunningham <ncunningham@crca.org.au>
Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
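To make the overshoot mechanism concrete, here is a minimal user-space
simulation, not kernel code: reclaim_chunk() and the *_sim names are made up
for illustration, and every scanned page is assumed to be reclaimed. It
mirrors the do/while structure sketched above: one chunk is processed before
the scan limit is checked, so an oversized swap_cluster_max overshoots a
smaller remaining goal, while a clipped value cannot.

	/*
	 * Illustrative user-space simulation only -- not kernel code.
	 * reclaim_chunk() stands in for the isolate_pages()/shrink_page_list()
	 * work and optimistically reclaims every page it is asked to scan.
	 */
	#include <stdio.h>

	static unsigned long reclaim_chunk(unsigned long nr)
	{
		return nr;
	}

	/* Mirrors the do/while structure of shrink_inactive_list() above. */
	static unsigned long shrink_inactive_list_sim(unsigned long max_scan,
						      unsigned long swap_cluster_max)
	{
		unsigned long nr_scanned = 0, nr_reclaimed = 0;

		do {
			/* One chunk is always processed before the limit check. */
			nr_reclaimed += reclaim_chunk(swap_cluster_max);
			nr_scanned += swap_cluster_max;
		} while (nr_scanned < max_scan);

		return nr_reclaimed;
	}

	int main(void)
	{
		unsigned long remaining = 300;	/* pages still missing from the goal */

		/* Unclipped: swap_cluster_max stuck at the original 10000-page goal. */
		printf("unclipped: wanted=%lu reclaimed=%lu\n", remaining,
		       shrink_inactive_list_sim(remaining, 10000));

		/* Clipped: swap_cluster_max limited to the remaining goal. */
		printf("clipped:   wanted=%lu reclaimed=%lu\n", remaining,
		       shrink_inactive_list_sim(remaining, remaining));

		return 0;
	}

With the unclipped value, the single unconditional iteration already reclaims
10000 pages for a 300-page request; with the clipped value it stops exactly
at the goal.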
Diffstat (limited to 'mm')
-rw-r--r--  mm/vmscan.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b15dcbb9e174..9578063cd943 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2109,7 +2109,6 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
 		.may_unmap = 0,
-		.swap_cluster_max = nr_pages,
 		.may_writepage = 1,
 		.isolate_pages = isolate_pages_global,
 	};
@@ -2151,6 +2150,7 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
 			unsigned long nr_to_scan = nr_pages - sc.nr_reclaimed;
 
 			sc.nr_scanned = 0;
+			sc.swap_cluster_max = nr_to_scan;
 			shrink_all_zones(nr_to_scan, prio, pass, &sc);
 			if (sc.nr_reclaimed >= nr_pages)
 				goto out;
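For context, a simplified outline of the patched loop (hypothetical names;
the surrounding pass/priority loops, shrink_all_zones() itself, and most of
struct scan_control are omitted) shows how the added line makes
swap_cluster_max track the shrinking remainder of the request rather than
the original nr_pages:

	/* Simplified outline only; not the literal kernel code. */
	#include <stdio.h>

	struct scan_control_sketch {
		unsigned long nr_scanned;
		unsigned long nr_reclaimed;
		unsigned long swap_cluster_max;
	};

	static unsigned long shrink_all_memory_sketch(unsigned long nr_pages)
	{
		struct scan_control_sketch sc = { 0, 0, 0 };

		while (sc.nr_reclaimed < nr_pages) {
			/* Pages still missing; shrinks as reclaim makes progress. */
			unsigned long nr_to_scan = nr_pages - sc.nr_reclaimed;

			sc.nr_scanned = 0;
			/* The fix: chunk size never exceeds the remaining goal. */
			sc.swap_cluster_max = nr_to_scan;

			/* Stand-in for shrink_all_zones(nr_to_scan, prio, pass, &sc);
			 * pretend roughly half of each request gets reclaimed. */
			sc.nr_reclaimed += (nr_to_scan + 1) / 2;
		}

		return sc.nr_reclaimed;
	}

	int main(void)
	{
		printf("wanted=10000 reclaimed=%lu\n",
		       shrink_all_memory_sketch(10000));
		return 0;
	}

Because each pass can now reclaim at most the remaining shortfall, the total
can no longer overshoot the requested nr_pages, matching the patched numbers
quoted in the commit message.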