path: root/mm/internal.h
author     Johannes Weiner <hannes@cmpxchg.org>  2014-01-30 02:05:41 +0400
committer  Linus Torvalds <torvalds@linux-foundation.org>  2014-01-30 04:22:39 +0400
commit     a1c3bfb2f67ef766de03f1f56bdfff9c8595ab14 (patch)
tree       e06405192d674561bf2718ab03879c32103ae34e /mm/internal.h
parent     a804552b9a15c931cfc2a92a2e0aed1add8b580a (diff)
download   linux-a1c3bfb2f67ef766de03f1f56bdfff9c8595ab14.tar.xz
mm/page-writeback.c: do not count anon pages as dirtyable memory
The VM is currently heavily tuned to avoid swapping.  Whether that is
good or bad is a separate discussion, but as long as the VM won't swap
to make room for dirty cache, we cannot consider anonymous pages when
calculating the amount of dirtyable memory, the baseline to which
dirty_background_ratio and dirty_ratio are applied.

A simple workload that occupies a significant size (40+%, depending on
memory layout, storage speeds etc.) of memory with anon/tmpfs pages and
uses the remainder for a streaming writer demonstrates this problem.
In that case, the actual cache pages are a small fraction of what is
considered dirtyable overall, which results in a relatively large
portion of the cache pages being dirtied.  As kswapd starts rotating
these, random tasks enter direct reclaim and stall on IO.

Only consider free pages and file pages dirtyable.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Tejun Heo <tj@kernel.org>
Tested-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
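For context, the sketch below approximates the dirtyable-memory
calculation this patch arrives at.  The real change lands in
mm/page-writeback.c (not in the mm/internal.h hunk shown here); this
is a simplified illustration, assuming the vmstat counters and helpers
of that kernel era (global_page_state(), dirty_balance_reserve,
highmem_dirtyable_memory()), not a verbatim copy of the patched code:

	/*
	 * Simplified sketch: dirtyable memory is free pages plus
	 * file-backed LRU pages only.  NR_INACTIVE_ANON and
	 * NR_ACTIVE_ANON are deliberately not counted, because the
	 * VM will not swap anon pages out to make room for dirty
	 * page cache.
	 */
	static unsigned long global_dirtyable_memory(void)
	{
		unsigned long x;

		x = global_page_state(NR_FREE_PAGES);
		/* Keep the reserves the VM refuses to dirty out of the baseline. */
		x -= min(x, dirty_balance_reserve);

		x += global_page_state(NR_INACTIVE_FILE);
		x += global_page_state(NR_ACTIVE_FILE);

		if (!vm_highmem_is_dirtyable)
			x -= highmem_dirtyable_memory(x);

		return x + 1;	/* Ensure that we never return 0 */
	}

With anon pages out of the sum, dirty_background_ratio and dirty_ratio
are applied against memory that can actually hold page cache, so the
effective dirty limits no longer overshoot on anon/tmpfs-heavy
workloads.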
Diffstat (limited to 'mm/internal.h')
-rw-r--r--  mm/internal.h  1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/mm/internal.h b/mm/internal.h
index 612c14f5e0f5..29e1e761f9eb 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -83,7 +83,6 @@ extern unsigned long highest_memmap_pfn;
*/
extern int isolate_lru_page(struct page *page);
extern void putback_lru_page(struct page *page);
-extern unsigned long zone_reclaimable_pages(struct zone *zone);
extern bool zone_reclaimable(struct zone *zone);
/*