author		Johannes Weiner <hannes@cmpxchg.org>	2019-04-19 03:50:34 +0300
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2019-05-16 20:41:23 +0300
commit		6536de8232c8bc8dc47f6a3fcecd4dd80edf6a3e (patch)
tree		f3b9ac01c125ba442641418f300d6412a190ab96
parent		1134736869ef8dbb4fd9f608d8b7b4d103c0ff40 (diff)
download	linux-6536de8232c8bc8dc47f6a3fcecd4dd80edf6a3e.tar.xz
mm: fix inactive list balancing between NUMA nodes and cgroups
[ Upstream commit 3b991208b897f52507168374033771a984b947b1 ]

During !CONFIG_CGROUP reclaim, we expand the inactive list size if it's
thrashing on the node that is about to be reclaimed.  But when cgroups
are enabled, we suddenly ignore the node scope and use the cgroup scope
only.  The result is that pressure bleeds between NUMA nodes depending
on whether cgroups are merely compiled into Linux.  This behavioral
difference is unexpected and undesirable.

When the refault adaptivity of the inactive list was first introduced,
there were no statistics at the lruvec level - the intersection of node
and memcg - so it was better than nothing.  But now that we have that
infrastructure, use lruvec_page_state() to make the list balancing
decision always NUMA aware.

[hannes@cmpxchg.org: fix bisection hole]
Link: http://lkml.kernel.org/r/20190417155241.GB23013@cmpxchg.org
Link: http://lkml.kernel.org/r/20190412144438.2645-1-hannes@cmpxchg.org
Fixes: 2a2e48854d70 ("mm: vmscan: fix IO/refault regression in cache workingset transition")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
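As a rough illustration of the change described above (a sketch
reconstructed from the commit message, not the verbatim upstream diff),
the refault lookup in the inactive list balancing path of mm/vmscan.c
goes from two different scopes, chosen by whether cgroups are enabled,
to a single lruvec-scoped read:

	/* Before: the refault counter was read at cgroup scope when
	 * cgroups were enabled, at node scope otherwise - two different
	 * behaviors for the same heuristic. */
	if (memcg)
		refaults = memcg_page_state(memcg, WORKINGSET_ACTIVATE);
	else
		refaults = node_page_state(pgdat, WORKINGSET_ACTIVATE);

	/* After: lruvec_page_state() reads the counter at the lruvec -
	 * the intersection of node and memcg - so the balancing decision
	 * is NUMA aware whether or not cgroups are compiled in. */
	refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE);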