author     Linus Torvalds <torvalds@linux-foundation.org>  2024-07-22 03:15:46 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>  2024-07-22 03:15:46 +0300
commit     fbc90c042cd1dc7258ebfebe6d226017e5b5ac8c (patch)
tree       45513ac12ade12a80ca6b306722f201802b0a190 /mm
parent     7846b618e0a4c3e08888099d1d4512722b39ca99 (diff)
parent     30d77b7eef019fa4422980806e8b7cdc8674493e (diff)
Merge tag 'mm-stable-2024-07-21-14-50' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:

 - In the series "mm: Avoid possible overflows in dirty throttling" Jan Kara addresses a couple of issues in the writeback throttling code. These fixes are also targeted at -stable kernels.
 - Ryusuke Konishi's series "nilfs2: fix potential issues related to reserved inodes" does that. This should actually be in the mm-nonmm-stable tree, along with the many other nilfs2 patches. My bad.
 - More folio conversions from Kefeng Wang in the series "mm: convert to folio_alloc_mpol()".
 - Kemeng Shi has sent some cleanups to the writeback code in the series "Add helper functions to remove repeated code and improve readability of cgroup writeback".
 - Kairui Song has made the swap code a little smaller and a little faster in the series "mm/swap: clean up and optimize swap cache index".
 - In the series "mm/memory: cleanly support zeropage in vm_insert_page*(), vm_map_pages*() and vmf_insert_mixed()" David Hildenbrand has reworked the rather sketchy handling of the use of the zeropage in MAP_SHARED mappings. I don't see any runtime effects here - more a cleanup/understandability/maintainability thing.
 - Dev Jain has improved selftests/mm/va_high_addr_switch.c's handling of higher addresses, for aarch64. The (poorly named) series is "Restructure va_high_addr_switch".
 - The core TLB handling code gets some cleanups and possible slight optimizations in Bang Li's series "Add update_mmu_tlb_range() to simplify code".
 - Jane Chu has improved the handling of our fake-an-unrecoverable-memory-error testing feature MADV_HWPOISON in the series "Enhance soft hwpoison handling and injection".
 - Jeff Johnson has sent a billion patches everywhere to add MODULE_DESCRIPTION() to everything. Some landed in this pull.
 - In the series "mm: cleanup MIGRATE_SYNC_NO_COPY mode", Kefeng Wang has simplified migration's use of hardware-offload memory copying.
 - Yosry Ahmed performs more folio API conversions in his series "mm: zswap: trivial folio conversions".
 - In the series "large folios swap-in: handle refault cases first", Chuanhua Han inches us forward in the handling of large pages in the swap code. This is a cleanup and optimization, working toward the end objective of full support of large folio swapin/out.
 - In the series "mm,swap: cleanup VMA based swap readahead window calculation", Huang Ying has contributed some cleanups and a possible fixlet to his VMA based swap readahead code.
 - In the series "add mTHP support for anonymous shmem" Baolin Wang has taught anonymous shmem mappings to use multisize THP. By default this is a no-op - users must opt in via sysfs controls (a usage sketch appears after the shortlog below). Dramatic improvements in pagefault latency are realized.
 - David Hildenbrand has some cleanups to our remaining use of page_mapcount() in the series "fs/proc: move page_mapcount() to fs/proc/internal.h".
 - David also has some highmem accounting cleanups in the series "mm/highmem: don't track highmem pages manually".
 - Build-time fixes and cleanups from John Hubbard in the series "cleanups, fixes, and progress towards avoiding "make headers"".
 - Cleanups and consolidation of the core pagemap handling from Barry Song in the series "mm: introduce pmd|pte_needs_soft_dirty_wp helpers and utilize them".
 - Lance Yang's series "Reclaim lazyfree THP without splitting" has reduced the latency of the reclaim of pmd-mapped THPs under fairly common circumstances. A 10x speedup is seen in a microbenchmark. It does this by punting to another CPU but I guess that's a win unless all CPUs are pegged.
 - hugetlb_cgroup cleanups from Xiu Jianfeng in the series "mm/hugetlb_cgroup: rework on cftypes".
 - Miaohe Lin's series "Some cleanups for memory-failure" does just that thing.
 - Someone other than SeongJae has developed a DAMON feature in Honggyu Kim's series "DAMON based tiered memory management for CXL memory". This adds DAMON features which may be used to help determine the efficiency of our placement of CXL/PCIe attached DRAM (a sysfs sketch for the new migrate_hot/migrate_cold actions appears after the shortlog below).
 - DAMON user API centralization and simplification work in SeongJae Park's series "mm/damon: introduce DAMON parameters online commit function".
 - In the series "mm: page_type, zsmalloc and page_mapcount_reset()" David Hildenbrand does some maintenance work on zsmalloc - partially modernizing its use of pageframe fields.
 - Kefeng Wang provides more folio conversions in the series "mm: remove page_maybe_dma_pinned() and page_mkclean()".
 - More cleanup from David Hildenbrand, this time in the series "mm/memory_hotplug: use PageOffline() instead of PageReserved() for !ZONE_DEVICE". It "enlightens memory hotplug more about PageOffline() pages" and permits the removal of some virtio-mem hacks.
 - Barry Song's series "mm: clarify folio_add_new_anon_rmap() and __folio_add_anon_rmap()" is a cleanup to the anon folio handling in preparation for mTHP (multisize THP) swapin.
 - Kefeng Wang's series "mm: improve clear and copy user folio" implements more folio conversions, this time in the area of large folio userspace copying.
 - The series "Docs/mm/damon/maintaier-profile: document a mailing tool and community meetup series" tells people how to get better involved with other DAMON developers. From SeongJae Park.
 - A large series ("kmsan: Enable on s390") from Ilya Leoshkevich does that.
 - David Hildenbrand sends along more cleanups, this time against the migration code. The series is "mm/migrate: move NUMA hinting fault folio isolation + checks under PTL".
 - Jan Kara has found quite a lot of strangenesses and minor errors in the readahead code. He addresses this in the series "mm: Fix various readahead quirks".
 - SeongJae Park's series "selftests/damon: test DAMOS tried regions and {min,max}_nr_regions" adds features and addresses errors in DAMON's self testing code.
 - Gavin Shan has found a userspace-triggerable WARN in the pagecache code. The series "mm/filemap: Limit page cache size to that supported by xarray" addresses this. The series is marked cc:stable.
 - Chengming Zhou's series "mm/ksm: cmp_and_merge_page() optimizations and cleanup" cleans up and slightly optimizes KSM.
 - Roman Gushchin has separated the memcg-v1 and memcg-v2 code - lots of code motion. The two series (which also make the memcg-v1 code Kconfigurable) are "mm: memcg: separate legacy cgroup v1 code and put under config option" and "mm: memcg: put cgroup v1-specific memcg data under CONFIG_MEMCG_V1".
 - Dan Schatzberg's series "Add swappiness argument to memory.reclaim" adds an additional feature to this cgroup-v2 control file (a usage sketch appears after the shortlog below).
 - The series "Userspace controls soft-offline pages" from Jiaqi Yan permits userspace to stop the kernel's automatic treatment of excessive correctable memory errors, in order to permit userspace to monitor and handle this situation.
 - Kefeng Wang's series "mm: migrate: support poison recover from migrate folio" teaches the kernel to appropriately handle migration from poisoned source folios rather than simply panicking.
 - SeongJae Park's series "Docs/damon: minor fixups and improvements" does those things.
 - In the series "mm/zsmalloc: change back to per-size_class lock" Chengming Zhou improves zsmalloc's scalability and memory utilization.
 - Vivek Kasireddy's series "mm/gup: Introduce memfd_pin_folios() for pinning memfd folios" makes the GUP code use FOLL_PIN rather than bare refcount increments, so these pages can first be moved aside if they reside in the movable zone or a CMA block (a kernel-side sketch appears after the shortlog below).
 - Andrii Nakryiko has added a binary ioctl()-based API to /proc/pid/maps for much faster reading of vma information. The series is "query VMAs from /proc/<pid>/maps".
 - In the series "mm: introduce per-order mTHP split counters" Lance Yang improves the kernel's presentation of developer information related to multisize THP splitting.
 - Michael Ellerman has developed the series "Reimplement huge pages without hugepd on powerpc (8xx, e500, book3s/64)". This permits userspace to use all available huge page sizes.
 - In the series "revert unconditional slab and page allocator fault injection calls" Vlastimil Babka removes a performance-affecting and not very useful feature from slab fault injection.

* tag 'mm-stable-2024-07-21-14-50' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (411 commits)
  mm/mglru: fix ineffective protection calculation
  mm/zswap: fix a white space issue
  mm/hugetlb: fix kernel NULL pointer dereference when migrating hugetlb folio
  mm/hugetlb: fix possible recursive locking detected warning
  mm/gup: clear the LRU flag of a page before adding to LRU batch
  mm/numa_balancing: teach mpol_to_str about the balancing mode
  mm: memcg1: convert charge move flags to unsigned long long
  alloc_tag: fix page_ext_get/page_ext_put sequence during page splitting
  lib: reuse page_ext_data() to obtain codetag_ref
  lib: add missing newline character in the warning message
  mm/mglru: fix overshooting shrinker memory
  mm/mglru: fix div-by-zero in vmpressure_calc_level()
  mm/kmemleak: replace strncpy() with strscpy()
  mm, page_alloc: put should_fail_alloc_page() back behing CONFIG_FAIL_PAGE_ALLOC
  mm, slab: put should_failslab() back behind CONFIG_SHOULD_FAILSLAB
  mm: ignore data-race in __swap_writepage
  hugetlbfs: ensure generic_hugetlb_get_unmapped_area() returns higher address than mmap_min_addr
  mm: shmem: rename mTHP shmem counters
  mm: swap_state: use folio_alloc_mpol() in __read_swap_cache_async()
  mm/migrate: putback split folios when numa hint migration fails
  ...
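A minimal usage sketch for the anonymous shmem mTHP item above, assuming the per-size sysfs knob layout described by the series (hugepages-<size>kB/shmem_enabled under /sys/kernel/mm/transparent_hugepage/); the 64kB size and the "always" policy are illustrative choices only:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Illustrative per-size knob added by the anon shmem mTHP series. */
	const char *knob =
		"/sys/kernel/mm/transparent_hugepage/hugepages-64kB/shmem_enabled";
	int fd = open(knob, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Opt 64kB anonymous shmem mappings into mTHP allocation. */
	if (write(fd, "always", 6) != 6)
		perror("write");
	close(fd);
	return 0;
}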
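A sketch of driving the new DAMOS migrate_cold action and target_nid knob (both added in the DAMON hunks below) through the DAMON sysfs interface. It assumes a kdamond, context and scheme have already been created under /sys/kernel/mm/damon/admin/, and node 1 as the demotion target is an arbitrary example:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a string to one sysfs file; returns 0 on success, -1 on error. */
static int write_sysfs(const char *path, const char *val)
{
	ssize_t len = (ssize_t)strlen(val);
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, val, len) != len) {
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	/* Assumed pre-created scheme directory of the first kdamond/context. */
	const char *dir =
		"/sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/schemes/0";
	char path[128];

	snprintf(path, sizeof(path), "%s/action", dir);
	if (write_sysfs(path, "migrate_cold"))	/* new DAMOS action in this pull */
		return 1;
	snprintf(path, sizeof(path), "%s/target_nid", dir);
	return write_sysfs(path, "1") ? 1 : 0;	/* demote to NUMA node 1 */
}

migrate_hot is wired up the same way; as the mm/damon/paddr.c hunk below shows, it differs only in using the hot rather than the cold regions score.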
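A sketch of proactive reclaim using the new swappiness= argument to the cgroup v2 memory.reclaim file. The cgroup path, reclaim amount and swappiness value are illustrative assumptions:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Assumed cgroup v2 path; memory.reclaim is the existing control file. */
	const char *path = "/sys/fs/cgroup/workload/memory.reclaim";
	/* Try to reclaim 512M, biasing strongly toward anon/swap-backed memory. */
	const char *req = "512M swappiness=100";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0)
		perror("write");	/* e.g. EAGAIN if the target could not be met */
	close(fd);
	return 0;
}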
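A kernel-side sketch of pairing memfd_pin_folios() with the unpin_folios() helper added in the mm/gup.c hunk below. The memfd_pin_folios() prototype here (byte range, folio array, capacity, starting pgoff) and its header location are written as the series describes them and should be checked against the merged tree; the caller context is hypothetical:

/*
 * Sketch only: verify the memfd_pin_folios() declaration against
 * <linux/memfd.h> in the merged tree before relying on it.
 */
#include <linux/fs.h>
#include <linux/memfd.h>
#include <linux/mm.h>

static long pin_leading_folios(struct file *memfd, struct folio **folios,
			       unsigned int max_folios)
{
	pgoff_t start_pgoff = 0;
	long nr;

	/* Pin whatever folios back the first max_folios pages of the memfd. */
	nr = memfd_pin_folios(memfd, 0, (loff_t)max_folios * PAGE_SIZE - 1,
			      folios, max_folios, &start_pgoff);
	return nr;	/* number of folios pinned, or a negative errno */
}

static void unpin_leading_folios(struct folio **folios, long nr)
{
	if (nr > 0)
		unpin_folios(folios, nr);	/* FOLL_PIN release added in mm/gup.c */
}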
Diffstat (limited to 'mm')
-rw-r--r--  mm/Kconfig | 23
-rw-r--r--  mm/Makefile | 2
-rw-r--r--  mm/balloon_compaction.c | 8
-rw-r--r--  mm/damon/core.c | 338
-rw-r--r--  mm/damon/dbgfs.c | 2
-rw-r--r--  mm/damon/lru_sort.c | 56
-rw-r--r--  mm/damon/paddr.c | 157
-rw-r--r--  mm/damon/reclaim.c | 65
-rw-r--r--  mm/damon/sysfs-common.h | 2
-rw-r--r--  mm/damon/sysfs-schemes.c | 127
-rw-r--r--  mm/damon/sysfs-test.h | 10
-rw-r--r--  mm/damon/sysfs.c | 81
-rw-r--r--  mm/damon/vaddr.c | 6
-rw-r--r--  mm/dmapool_test.c | 1
-rw-r--r--  mm/fail_page_alloc.c | 4
-rw-r--r--  mm/failslab.c | 14
-rw-r--r--  mm/filemap.c | 8
-rw-r--r--  mm/folio-compat.c | 6
-rw-r--r--  mm/gup.c | 510
-rw-r--r--  mm/highmem.c | 21
-rw-r--r--  mm/hmm.c | 2
-rw-r--r--  mm/huge_memory.c | 193
-rw-r--r--  mm/hugetlb.c | 134
-rw-r--r--  mm/hugetlb_cgroup.c | 305
-rw-r--r--  mm/hugetlb_vmemmap.c | 17
-rw-r--r--  mm/hwpoison-inject.c | 1
-rw-r--r--  mm/internal.h | 75
-rw-r--r--  mm/kfence/core.c | 17
-rw-r--r--  mm/kfence/kfence.h | 2
-rw-r--r--  mm/kfence/kfence_test.c | 1
-rw-r--r--  mm/khugepaged.c | 40
-rw-r--r--  mm/kmemleak.c | 6
-rw-r--r--  mm/kmsan/core.c | 5
-rw-r--r--  mm/kmsan/hooks.c | 38
-rw-r--r--  mm/kmsan/init.c | 9
-rw-r--r--  mm/kmsan/instrumentation.c | 15
-rw-r--r--  mm/kmsan/kmsan.h | 39
-rw-r--r--  mm/kmsan/kmsan_test.c | 32
-rw-r--r--  mm/kmsan/report.c | 10
-rw-r--r--  mm/kmsan/shadow.c | 9
-rw-r--r--  mm/ksm.c | 261
-rw-r--r--  mm/list_lru.c | 14
-rw-r--r--  mm/madvise.c | 2
-rw-r--r--  mm/memcontrol-v1.c | 2969
-rw-r--r--  mm/memcontrol-v1.h | 147
-rw-r--r--  mm/memcontrol.c | 3359
-rw-r--r--  mm/memfd.c | 45
-rw-r--r--  mm/memory-failure.c | 259
-rw-r--r--  mm/memory-tiers.c | 54
-rw-r--r--  mm/memory.c | 376
-rw-r--r--  mm/memory_hotplug.c | 52
-rw-r--r--  mm/mempolicy.c | 38
-rw-r--r--  mm/migrate.c | 213
-rw-r--r--  mm/migrate_device.c | 24
-rw-r--r--  mm/mincore.c | 4
-rw-r--r--  mm/mlock.c | 19
-rw-r--r--  mm/mm_init.c | 96
-rw-r--r--  mm/mmap.c | 41
-rw-r--r--  mm/mmap_lock.c | 175
-rw-r--r--  mm/mprotect.c | 4
-rw-r--r--  mm/mremap.c | 2
-rw-r--r--  mm/page-writeback.c | 319
-rw-r--r--  mm/page_alloc.c | 78
-rw-r--r--  mm/page_counter.c | 173
-rw-r--r--  mm/page_ext.c | 32
-rw-r--r--  mm/page_io.c | 22
-rw-r--r--  mm/pagewalk.c | 57
-rw-r--r--  mm/percpu-internal.h | 6
-rw-r--r--  mm/percpu.c | 6
-rw-r--r--  mm/readahead.c | 276
-rw-r--r--  mm/rmap.c | 169
-rw-r--r--  mm/shmem.c | 359
-rw-r--r--  mm/slab.h | 2
-rw-r--r--  mm/slab_common.c | 10
-rw-r--r--  mm/slub.c | 51
-rw-r--r--  mm/sparse-vmemmap.c | 8
-rw-r--r--  mm/sparse.c | 28
-rw-r--r--  mm/swap.c | 51
-rw-r--r--  mm/swap.h | 30
-rw-r--r--  mm/swap_state.c | 120
-rw-r--r--  mm/swapfile.c | 75
-rw-r--r--  mm/truncate.c | 70
-rw-r--r--  mm/userfaultfd.c | 14
-rw-r--r--  mm/util.c | 17
-rw-r--r--  mm/vmalloc.c | 9
-rw-r--r--  mm/vmscan.c | 188
-rw-r--r--  mm/vmstat.c | 26
-rw-r--r--  mm/zsmalloc.c | 175
-rw-r--r--  mm/zswap.c | 126
89 files changed, 7093 insertions, 5949 deletions
diff --git a/mm/Kconfig b/mm/Kconfig
index e0dfb268717c..b72e7d040f78 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -128,7 +128,7 @@ config ZSWAP_COMPRESSOR_DEFAULT
choice
prompt "Default allocator"
depends on ZSWAP
- default ZSWAP_ZPOOL_DEFAULT_ZSMALLOC if MMU
+ default ZSWAP_ZPOOL_DEFAULT_ZSMALLOC if HAVE_ZSMALLOC
default ZSWAP_ZPOOL_DEFAULT_ZBUD
help
Selects the default allocator for the compressed cache for
@@ -154,6 +154,7 @@ config ZSWAP_ZPOOL_DEFAULT_Z3FOLD
config ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
bool "zsmalloc"
+ depends on HAVE_ZSMALLOC
select ZSMALLOC
help
Use the zsmalloc allocator as the default allocator.
@@ -186,10 +187,15 @@ config Z3FOLD
page. It is a ZBUD derivative so the simplicity and determinism are
still there.
+config HAVE_ZSMALLOC
+ def_bool y
+ depends on MMU
+ depends on PAGE_SIZE_LESS_THAN_256KB # we want <= 64 KiB
+
config ZSMALLOC
tristate
prompt "N:1 compression allocator (zsmalloc)" if ZSWAP
- depends on MMU
+ depends on HAVE_ZSMALLOC
help
zsmalloc is a slab-based memory allocator designed to store
pages of various compression levels efficiently. It achieves
@@ -731,7 +737,7 @@ config DEFAULT_MMAP_MIN_ADDR
from userspace allocation. Keeping a user from writing to low pages
can help reduce the impact of kernel NULL pointer bugs.
- For most ppc64 and x86 users with lots of address space
+ For most arm64, ppc64 and x86 users with lots of address space
a value of 65536 is reasonable and should cause no problems.
On arm and other archs it should not be higher than 32768.
Programs which use vm86 functionality or have some need to map
@@ -963,6 +969,7 @@ config DEFERRED_STRUCT_PAGE_INIT
depends on SPARSEMEM
depends on !NEED_PER_CPU_KM
depends on 64BIT
+ depends on !KMSAN
select PADATA
help
Ordinarily all struct pages are initialised during early boot in a
@@ -1136,16 +1143,6 @@ config DMAPOOL_TEST
config ARCH_HAS_PTE_SPECIAL
bool
-#
-# Some architectures require a special hugepage directory format that is
-# required to support multiple hugepage sizes. For example a4fe3ce76
-# "powerpc/mm: Allow more flexible layouts for hugepage pagetables"
-# introduced it on powerpc. This allows for a more flexible hugepage
-# pagetable layouts.
-#
-config ARCH_HAS_HUGEPD
- bool
-
config MAPPING_DIRTY_HELPERS
bool
diff --git a/mm/Makefile b/mm/Makefile
index 8fb85acda1b1..d2915f8c9dc0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -26,6 +26,7 @@ KCOV_INSTRUMENT_page_alloc.o := n
KCOV_INSTRUMENT_debug-pagealloc.o := n
KCOV_INSTRUMENT_kmemleak.o := n
KCOV_INSTRUMENT_memcontrol.o := n
+KCOV_INSTRUMENT_memcontrol-v1.o := n
KCOV_INSTRUMENT_mmzone.o := n
KCOV_INSTRUMENT_vmstat.o := n
KCOV_INSTRUMENT_failslab.o := n
@@ -95,6 +96,7 @@ obj-$(CONFIG_NUMA) += memory-tiers.o
obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o
obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o
obj-$(CONFIG_PAGE_COUNTER) += page_counter.o
+obj-$(CONFIG_MEMCG_V1) += memcontrol-v1.o
obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o
ifdef CONFIG_SWAP
obj-$(CONFIG_MEMCG) += swap_cgroup.o
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 22c96fed70b5..6597ebea8ae2 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -234,14 +234,6 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
{
struct balloon_dev_info *balloon = balloon_page_device(page);
- /*
- * We can not easily support the no copy case here so ignore it as it
- * is unlikely to be used with balloon pages. See include/linux/hmm.h
- * for a user of the MIGRATE_SYNC_NO_COPY mode.
- */
- if (mode == MIGRATE_SYNC_NO_COPY)
- return -EINVAL;
-
VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
diff --git a/mm/damon/core.c b/mm/damon/core.c
index e66823d6b10b..7a87628b76ab 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -354,7 +354,8 @@ struct damos *damon_new_scheme(struct damos_access_pattern *pattern,
enum damos_action action,
unsigned long apply_interval_us,
struct damos_quota *quota,
- struct damos_watermarks *wmarks)
+ struct damos_watermarks *wmarks,
+ int target_nid)
{
struct damos *scheme;
@@ -381,6 +382,8 @@ struct damos *damon_new_scheme(struct damos_access_pattern *pattern,
scheme->wmarks = *wmarks;
scheme->wmarks.activated = true;
+ scheme->target_nid = target_nid;
+
return scheme;
}
@@ -663,6 +666,339 @@ void damon_set_schemes(struct damon_ctx *ctx, struct damos **schemes,
damon_add_scheme(ctx, schemes[i]);
}
+static struct damos_quota_goal *damos_nth_quota_goal(
+ int n, struct damos_quota *q)
+{
+ struct damos_quota_goal *goal;
+ int i = 0;
+
+ damos_for_each_quota_goal(goal, q) {
+ if (i++ == n)
+ return goal;
+ }
+ return NULL;
+}
+
+static void damos_commit_quota_goal(
+ struct damos_quota_goal *dst, struct damos_quota_goal *src)
+{
+ dst->metric = src->metric;
+ dst->target_value = src->target_value;
+ if (dst->metric == DAMOS_QUOTA_USER_INPUT)
+ dst->current_value = src->current_value;
+ /* keep last_psi_total as is, since it will be updated in next cycle */
+}
+
+/**
+ * damos_commit_quota_goals() - Commit DAMOS quota goals to another quota.
+ * @dst: The commit destination DAMOS quota.
+ * @src: The commit source DAMOS quota.
+ *
+ * Copies user-specified parameters for quota goals from @src to @dst. Users
+ * should use this function for quota goals-level parameters update of running
+ * DAMON contexts, instead of manual in-place updates.
+ *
+ * This function should be called from parameters-update safe context, like
+ * DAMON callbacks.
+ */
+int damos_commit_quota_goals(struct damos_quota *dst, struct damos_quota *src)
+{
+ struct damos_quota_goal *dst_goal, *next, *src_goal, *new_goal;
+ int i = 0, j = 0;
+
+ damos_for_each_quota_goal_safe(dst_goal, next, dst) {
+ src_goal = damos_nth_quota_goal(i++, src);
+ if (src_goal)
+ damos_commit_quota_goal(dst_goal, src_goal);
+ else
+ damos_destroy_quota_goal(dst_goal);
+ }
+ damos_for_each_quota_goal_safe(src_goal, next, src) {
+ if (j++ < i)
+ continue;
+ new_goal = damos_new_quota_goal(
+ src_goal->metric, src_goal->target_value);
+ if (!new_goal)
+ return -ENOMEM;
+ damos_add_quota_goal(dst, new_goal);
+ }
+ return 0;
+}
+
+static int damos_commit_quota(struct damos_quota *dst, struct damos_quota *src)
+{
+ int err;
+
+ dst->reset_interval = src->reset_interval;
+ dst->ms = src->ms;
+ dst->sz = src->sz;
+ err = damos_commit_quota_goals(dst, src);
+ if (err)
+ return err;
+ dst->weight_sz = src->weight_sz;
+ dst->weight_nr_accesses = src->weight_nr_accesses;
+ dst->weight_age = src->weight_age;
+ return 0;
+}
+
+static struct damos_filter *damos_nth_filter(int n, struct damos *s)
+{
+ struct damos_filter *filter;
+ int i = 0;
+
+ damos_for_each_filter(filter, s) {
+ if (i++ == n)
+ return filter;
+ }
+ return NULL;
+}
+
+static void damos_commit_filter_arg(
+ struct damos_filter *dst, struct damos_filter *src)
+{
+ switch (dst->type) {
+ case DAMOS_FILTER_TYPE_MEMCG:
+ dst->memcg_id = src->memcg_id;
+ break;
+ case DAMOS_FILTER_TYPE_ADDR:
+ dst->addr_range = src->addr_range;
+ break;
+ case DAMOS_FILTER_TYPE_TARGET:
+ dst->target_idx = src->target_idx;
+ break;
+ default:
+ break;
+ }
+}
+
+static void damos_commit_filter(
+ struct damos_filter *dst, struct damos_filter *src)
+{
+ dst->type = src->type;
+ dst->matching = src->matching;
+ damos_commit_filter_arg(dst, src);
+}
+
+static int damos_commit_filters(struct damos *dst, struct damos *src)
+{
+ struct damos_filter *dst_filter, *next, *src_filter, *new_filter;
+ int i = 0, j = 0;
+
+ damos_for_each_filter_safe(dst_filter, next, dst) {
+ src_filter = damos_nth_filter(i++, src);
+ if (src_filter)
+ damos_commit_filter(dst_filter, src_filter);
+ else
+ damos_destroy_filter(dst_filter);
+ }
+
+ damos_for_each_filter_safe(src_filter, next, src) {
+ if (j++ < i)
+ continue;
+
+ new_filter = damos_new_filter(
+ src_filter->type, src_filter->matching);
+ if (!new_filter)
+ return -ENOMEM;
+ damos_commit_filter_arg(new_filter, src_filter);
+ damos_add_filter(dst, new_filter);
+ }
+ return 0;
+}
+
+static struct damos *damon_nth_scheme(int n, struct damon_ctx *ctx)
+{
+ struct damos *s;
+ int i = 0;
+
+ damon_for_each_scheme(s, ctx) {
+ if (i++ == n)
+ return s;
+ }
+ return NULL;
+}
+
+static int damos_commit(struct damos *dst, struct damos *src)
+{
+ int err;
+
+ dst->pattern = src->pattern;
+ dst->action = src->action;
+ dst->apply_interval_us = src->apply_interval_us;
+
+ err = damos_commit_quota(&dst->quota, &src->quota);
+ if (err)
+ return err;
+
+ dst->wmarks = src->wmarks;
+
+ err = damos_commit_filters(dst, src);
+ return err;
+}
+
+static int damon_commit_schemes(struct damon_ctx *dst, struct damon_ctx *src)
+{
+ struct damos *dst_scheme, *next, *src_scheme, *new_scheme;
+ int i = 0, j = 0, err;
+
+ damon_for_each_scheme_safe(dst_scheme, next, dst) {
+ src_scheme = damon_nth_scheme(i++, src);
+ if (src_scheme) {
+ err = damos_commit(dst_scheme, src_scheme);
+ if (err)
+ return err;
+ } else {
+ damon_destroy_scheme(dst_scheme);
+ }
+ }
+
+ damon_for_each_scheme_safe(src_scheme, next, src) {
+ if (j++ < i)
+ continue;
+ new_scheme = damon_new_scheme(&src_scheme->pattern,
+ src_scheme->action,
+ src_scheme->apply_interval_us,
+ &src_scheme->quota, &src_scheme->wmarks,
+ NUMA_NO_NODE);
+ if (!new_scheme)
+ return -ENOMEM;
+ damon_add_scheme(dst, new_scheme);
+ }
+ return 0;
+}
+
+static struct damon_target *damon_nth_target(int n, struct damon_ctx *ctx)
+{
+ struct damon_target *t;
+ int i = 0;
+
+ damon_for_each_target(t, ctx) {
+ if (i++ == n)
+ return t;
+ }
+ return NULL;
+}
+
+/*
+ * The caller should ensure the regions of @src are
+ * 1. valid (end >= src) and
+ * 2. sorted by starting address.
+ *
+ * If @src has no region, @dst keeps current regions.
+ */
+static int damon_commit_target_regions(
+ struct damon_target *dst, struct damon_target *src)
+{
+ struct damon_region *src_region;
+ struct damon_addr_range *ranges;
+ int i = 0, err;
+
+ damon_for_each_region(src_region, src)
+ i++;
+ if (!i)
+ return 0;
+
+ ranges = kmalloc_array(i, sizeof(*ranges), GFP_KERNEL | __GFP_NOWARN);
+ if (!ranges)
+ return -ENOMEM;
+ i = 0;
+ damon_for_each_region(src_region, src)
+ ranges[i++] = src_region->ar;
+ err = damon_set_regions(dst, ranges, i);
+ kfree(ranges);
+ return err;
+}
+
+static int damon_commit_target(
+ struct damon_target *dst, bool dst_has_pid,
+ struct damon_target *src, bool src_has_pid)
+{
+ int err;
+
+ err = damon_commit_target_regions(dst, src);
+ if (err)
+ return err;
+ if (dst_has_pid)
+ put_pid(dst->pid);
+ if (src_has_pid)
+ get_pid(src->pid);
+ dst->pid = src->pid;
+ return 0;
+}
+
+static int damon_commit_targets(
+ struct damon_ctx *dst, struct damon_ctx *src)
+{
+ struct damon_target *dst_target, *next, *src_target, *new_target;
+ int i = 0, j = 0, err;
+
+ damon_for_each_target_safe(dst_target, next, dst) {
+ src_target = damon_nth_target(i++, src);
+ if (src_target) {
+ err = damon_commit_target(
+ dst_target, damon_target_has_pid(dst),
+ src_target, damon_target_has_pid(src));
+ if (err)
+ return err;
+ } else {
+ if (damon_target_has_pid(dst))
+ put_pid(dst_target->pid);
+ damon_destroy_target(dst_target);
+ }
+ }
+
+ damon_for_each_target_safe(src_target, next, src) {
+ if (j++ < i)
+ continue;
+ new_target = damon_new_target();
+ if (!new_target)
+ return -ENOMEM;
+ err = damon_commit_target(new_target, false,
+ src_target, damon_target_has_pid(src));
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
+/**
+ * damon_commit_ctx() - Commit parameters of a DAMON context to another.
+ * @dst: The commit destination DAMON context.
+ * @src: The commit source DAMON context.
+ *
+ * This function copies user-specified parameters from @src to @dst and update
+ * the internal status and results accordingly. Users should use this function
+ * for context-level parameters update of running context, instead of manual
+ * in-place updates.
+ *
+ * This function should be called from parameters-update safe context, like
+ * DAMON callbacks.
+ */
+int damon_commit_ctx(struct damon_ctx *dst, struct damon_ctx *src)
+{
+ int err;
+
+ err = damon_commit_schemes(dst, src);
+ if (err)
+ return err;
+ err = damon_commit_targets(dst, src);
+ if (err)
+ return err;
+ /*
+ * schemes and targets should be updated first, since
+ * 1. damon_set_attrs() updates monitoring results of targets and
+ * next_apply_sis of schemes, and
+ * 2. ops update should be done after pid handling is done (target
+ * committing require putting pids).
+ */
+ err = damon_set_attrs(dst, &src->attrs);
+ if (err)
+ return err;
+ dst->ops = src->ops;
+
+ return 0;
+}
+
/**
* damon_nr_running_ctxs() - Return number of currently running contexts.
*/
diff --git a/mm/damon/dbgfs.c b/mm/damon/dbgfs.c
index 2461cfe2e968..51a6f1cac385 100644
--- a/mm/damon/dbgfs.c
+++ b/mm/damon/dbgfs.c
@@ -281,7 +281,7 @@ static struct damos **str_to_schemes(const char *str, ssize_t len,
pos += parsed;
scheme = damon_new_scheme(&pattern, action, 0, &quota,
- &wmarks);
+ &wmarks, NUMA_NO_NODE);
if (!scheme)
goto fail;
diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
index 3de2916a65c3..4af8fd4a390b 100644
--- a/mm/damon/lru_sort.c
+++ b/mm/damon/lru_sort.c
@@ -163,7 +163,8 @@ static struct damos *damon_lru_sort_new_scheme(
/* under the quota. */
&quota,
/* (De)activate this according to the watermarks. */
- &damon_lru_sort_wmarks);
+ &damon_lru_sort_wmarks,
+ NUMA_NO_NODE);
}
/* Create a DAMON-based operation scheme for hot memory regions */
@@ -185,61 +186,48 @@ static struct damos *damon_lru_sort_new_cold_scheme(unsigned int cold_thres)
return damon_lru_sort_new_scheme(&pattern, DAMOS_LRU_DEPRIO);
}
-static void damon_lru_sort_copy_quota_status(struct damos_quota *dst,
- struct damos_quota *src)
-{
- dst->total_charged_sz = src->total_charged_sz;
- dst->total_charged_ns = src->total_charged_ns;
- dst->charged_sz = src->charged_sz;
- dst->charged_from = src->charged_from;
- dst->charge_target_from = src->charge_target_from;
- dst->charge_addr_from = src->charge_addr_from;
-}
-
static int damon_lru_sort_apply_parameters(void)
{
- struct damos *scheme, *hot_scheme, *cold_scheme;
- struct damos *old_hot_scheme = NULL, *old_cold_scheme = NULL;
+ struct damon_ctx *param_ctx;
+ struct damon_target *param_target;
+ struct damos *hot_scheme, *cold_scheme;
unsigned int hot_thres, cold_thres;
- int err = 0;
+ int err;
- err = damon_set_attrs(ctx, &damon_lru_sort_mon_attrs);
+ err = damon_modules_new_paddr_ctx_target(&param_ctx, &param_target);
if (err)
return err;
- damon_for_each_scheme(scheme, ctx) {
- if (!old_hot_scheme) {
- old_hot_scheme = scheme;
- continue;
- }
- old_cold_scheme = scheme;
- }
+ err = damon_set_attrs(ctx, &damon_lru_sort_mon_attrs);
+ if (err)
+ goto out;
+ err = -ENOMEM;
hot_thres = damon_max_nr_accesses(&damon_lru_sort_mon_attrs) *
hot_thres_access_freq / 1000;
hot_scheme = damon_lru_sort_new_hot_scheme(hot_thres);
if (!hot_scheme)
- return -ENOMEM;
- if (old_hot_scheme)
- damon_lru_sort_copy_quota_status(&hot_scheme->quota,
- &old_hot_scheme->quota);
+ goto out;
cold_thres = cold_min_age / damon_lru_sort_mon_attrs.aggr_interval;
cold_scheme = damon_lru_sort_new_cold_scheme(cold_thres);
if (!cold_scheme) {
damon_destroy_scheme(hot_scheme);
- return -ENOMEM;
+ goto out;
}
- if (old_cold_scheme)
- damon_lru_sort_copy_quota_status(&cold_scheme->quota,
- &old_cold_scheme->quota);
- damon_set_schemes(ctx, &hot_scheme, 1);
- damon_add_scheme(ctx, cold_scheme);
+ damon_set_schemes(param_ctx, &hot_scheme, 1);
+ damon_add_scheme(param_ctx, cold_scheme);
- return damon_set_region_biggest_system_ram_default(target,
+ err = damon_set_region_biggest_system_ram_default(param_target,
&monitor_region_start,
&monitor_region_end);
+ if (err)
+ goto out;
+ err = damon_commit_ctx(ctx, param_ctx);
+out:
+ damon_destroy_ctx(param_ctx);
+ return err;
}
static int damon_lru_sort_turn(bool on)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 18797c1b419b..a9ff35341d65 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -12,6 +12,9 @@
#include <linux/pagemap.h>
#include <linux/rmap.h>
#include <linux/swap.h>
+#include <linux/memory-tiers.h>
+#include <linux/migrate.h>
+#include <linux/mm_inline.h>
#include "../internal.h"
#include "ops-common.h"
@@ -325,6 +328,153 @@ static unsigned long damon_pa_deactivate_pages(struct damon_region *r,
return damon_pa_mark_accessed_or_deactivate(r, s, false);
}
+static unsigned int __damon_pa_migrate_folio_list(
+ struct list_head *migrate_folios, struct pglist_data *pgdat,
+ int target_nid)
+{
+ unsigned int nr_succeeded = 0;
+ nodemask_t allowed_mask = NODE_MASK_NONE;
+ struct migration_target_control mtc = {
+ /*
+ * Allocate from 'node', or fail quickly and quietly.
+ * When this happens, 'page' will likely just be discarded
+ * instead of migrated.
+ */
+ .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
+ __GFP_NOWARN | __GFP_NOMEMALLOC | GFP_NOWAIT,
+ .nid = target_nid,
+ .nmask = &allowed_mask
+ };
+
+ if (pgdat->node_id == target_nid || target_nid == NUMA_NO_NODE)
+ return 0;
+
+ if (list_empty(migrate_folios))
+ return 0;
+
+ /* Migration ignores all cpuset and mempolicy settings */
+ migrate_pages(migrate_folios, alloc_migrate_folio, NULL,
+ (unsigned long)&mtc, MIGRATE_ASYNC, MR_DAMON,
+ &nr_succeeded);
+
+ return nr_succeeded;
+}
+
+static unsigned int damon_pa_migrate_folio_list(struct list_head *folio_list,
+ struct pglist_data *pgdat,
+ int target_nid)
+{
+ unsigned int nr_migrated = 0;
+ struct folio *folio;
+ LIST_HEAD(ret_folios);
+ LIST_HEAD(migrate_folios);
+
+ while (!list_empty(folio_list)) {
+ struct folio *folio;
+
+ cond_resched();
+
+ folio = lru_to_folio(folio_list);
+ list_del(&folio->lru);
+
+ if (!folio_trylock(folio))
+ goto keep;
+
+ /* Relocate its contents to another node. */
+ list_add(&folio->lru, &migrate_folios);
+ folio_unlock(folio);
+ continue;
+keep:
+ list_add(&folio->lru, &ret_folios);
+ }
+ /* 'folio_list' is always empty here */
+
+ /* Migrate folios selected for migration */
+ nr_migrated += __damon_pa_migrate_folio_list(
+ &migrate_folios, pgdat, target_nid);
+ /*
+ * Folios that could not be migrated are still in @migrate_folios. Add
+ * those back on @folio_list
+ */
+ if (!list_empty(&migrate_folios))
+ list_splice_init(&migrate_folios, folio_list);
+
+ try_to_unmap_flush();
+
+ list_splice(&ret_folios, folio_list);
+
+ while (!list_empty(folio_list)) {
+ folio = lru_to_folio(folio_list);
+ list_del(&folio->lru);
+ folio_putback_lru(folio);
+ }
+
+ return nr_migrated;
+}
+
+static unsigned long damon_pa_migrate_pages(struct list_head *folio_list,
+ int target_nid)
+{
+ int nid;
+ unsigned long nr_migrated = 0;
+ LIST_HEAD(node_folio_list);
+ unsigned int noreclaim_flag;
+
+ if (list_empty(folio_list))
+ return nr_migrated;
+
+ noreclaim_flag = memalloc_noreclaim_save();
+
+ nid = folio_nid(lru_to_folio(folio_list));
+ do {
+ struct folio *folio = lru_to_folio(folio_list);
+
+ if (nid == folio_nid(folio)) {
+ list_move(&folio->lru, &node_folio_list);
+ continue;
+ }
+
+ nr_migrated += damon_pa_migrate_folio_list(&node_folio_list,
+ NODE_DATA(nid),
+ target_nid);
+ nid = folio_nid(lru_to_folio(folio_list));
+ } while (!list_empty(folio_list));
+
+ nr_migrated += damon_pa_migrate_folio_list(&node_folio_list,
+ NODE_DATA(nid),
+ target_nid);
+
+ memalloc_noreclaim_restore(noreclaim_flag);
+
+ return nr_migrated;
+}
+
+static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s)
+{
+ unsigned long addr, applied;
+ LIST_HEAD(folio_list);
+
+ for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
+ struct folio *folio = damon_get_folio(PHYS_PFN(addr));
+
+ if (!folio)
+ continue;
+
+ if (damos_pa_filter_out(s, folio))
+ goto put_folio;
+
+ if (!folio_isolate_lru(folio))
+ goto put_folio;
+ list_add(&folio->lru, &folio_list);
+put_folio:
+ folio_put(folio);
+ }
+ applied = damon_pa_migrate_pages(&folio_list, s->target_nid);
+ cond_resched();
+ return applied * PAGE_SIZE;
+}
+
+
static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
struct damon_target *t, struct damon_region *r,
struct damos *scheme)
@@ -336,6 +486,9 @@ static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
return damon_pa_mark_accessed(r, scheme);
case DAMOS_LRU_DEPRIO:
return damon_pa_deactivate_pages(r, scheme);
+ case DAMOS_MIGRATE_HOT:
+ case DAMOS_MIGRATE_COLD:
+ return damon_pa_migrate(r, scheme);
case DAMOS_STAT:
break;
default:
@@ -356,6 +509,10 @@ static int damon_pa_scheme_score(struct damon_ctx *context,
return damon_hot_score(context, r, scheme);
case DAMOS_LRU_DEPRIO:
return damon_cold_score(context, r, scheme);
+ case DAMOS_MIGRATE_HOT:
+ return damon_hot_score(context, r, scheme);
+ case DAMOS_MIGRATE_COLD:
+ return damon_cold_score(context, r, scheme);
default:
break;
}
diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
index 9bd341d62b4c..9e0077a9404e 100644
--- a/mm/damon/reclaim.c
+++ b/mm/damon/reclaim.c
@@ -177,76 +177,65 @@ static struct damos *damon_reclaim_new_scheme(void)
/* under the quota. */
&damon_reclaim_quota,
/* (De)activate this according to the watermarks. */
- &damon_reclaim_wmarks);
-}
-
-static void damon_reclaim_copy_quota_status(struct damos_quota *dst,
- struct damos_quota *src)
-{
- dst->total_charged_sz = src->total_charged_sz;
- dst->total_charged_ns = src->total_charged_ns;
- dst->charged_sz = src->charged_sz;
- dst->charged_from = src->charged_from;
- dst->charge_target_from = src->charge_target_from;
- dst->charge_addr_from = src->charge_addr_from;
- dst->esz_bp = src->esz_bp;
+ &damon_reclaim_wmarks,
+ NUMA_NO_NODE);
}
static int damon_reclaim_apply_parameters(void)
{
- struct damos *scheme, *old_scheme;
+ struct damon_ctx *param_ctx;
+ struct damon_target *param_target;
+ struct damos *scheme;
struct damos_quota_goal *goal;
struct damos_filter *filter;
- int err = 0;
+ int err;
- err = damon_set_attrs(ctx, &damon_reclaim_mon_attrs);
+ err = damon_modules_new_paddr_ctx_target(&param_ctx, &param_target);
if (err)
return err;
- /* Will be freed by next 'damon_set_schemes()' below */
+ err = damon_set_attrs(ctx, &damon_reclaim_mon_attrs);
+ if (err)
+ goto out;
+
+ err = -ENOMEM;
scheme = damon_reclaim_new_scheme();
if (!scheme)
- return -ENOMEM;
- if (!list_empty(&ctx->schemes)) {
- damon_for_each_scheme(old_scheme, ctx)
- damon_reclaim_copy_quota_status(&scheme->quota,
- &old_scheme->quota);
- }
+ goto out;
+ damon_set_schemes(ctx, &scheme, 1);
if (quota_mem_pressure_us) {
goal = damos_new_quota_goal(DAMOS_QUOTA_SOME_MEM_PSI_US,
quota_mem_pressure_us);
- if (!goal) {
- damon_destroy_scheme(scheme);
- return -ENOMEM;
- }
+ if (!goal)
+ goto out;
damos_add_quota_goal(&scheme->quota, goal);
}
if (quota_autotune_feedback) {
goal = damos_new_quota_goal(DAMOS_QUOTA_USER_INPUT, 10000);
- if (!goal) {
- damon_destroy_scheme(scheme);
- return -ENOMEM;
- }
+ if (!goal)
+ goto out;
goal->current_value = quota_autotune_feedback;
damos_add_quota_goal(&scheme->quota, goal);
}
if (skip_anon) {
filter = damos_new_filter(DAMOS_FILTER_TYPE_ANON, true);
- if (!filter) {
- /* Will be freed by next 'damon_set_schemes()' below */
- damon_destroy_scheme(scheme);
- return -ENOMEM;
- }
+ if (!filter)
+ goto out;
damos_add_filter(scheme, filter);
}
- damon_set_schemes(ctx, &scheme, 1);
- return damon_set_region_biggest_system_ram_default(target,
+ err = damon_set_region_biggest_system_ram_default(param_target,
&monitor_region_start,
&monitor_region_end);
+ if (err)
+ goto out;
+ err = damon_commit_ctx(ctx, param_ctx);
+out:
+ damon_destroy_ctx(param_ctx);
+ return err;
}
static int damon_reclaim_turn(bool on)
diff --git a/mm/damon/sysfs-common.h b/mm/damon/sysfs-common.h
index a63f51577cff..9a18f3c535d3 100644
--- a/mm/damon/sysfs-common.h
+++ b/mm/damon/sysfs-common.h
@@ -38,7 +38,7 @@ void damon_sysfs_schemes_rm_dirs(struct damon_sysfs_schemes *schemes);
extern const struct kobj_type damon_sysfs_schemes_ktype;
-int damon_sysfs_set_schemes(struct damon_ctx *ctx,
+int damon_sysfs_add_schemes(struct damon_ctx *ctx,
struct damon_sysfs_schemes *sysfs_schemes);
void damon_sysfs_schemes_update_stats(
diff --git a/mm/damon/sysfs-schemes.c b/mm/damon/sysfs-schemes.c
index bea5bc52846a..b095457380b5 100644
--- a/mm/damon/sysfs-schemes.c
+++ b/mm/damon/sysfs-schemes.c
@@ -6,6 +6,7 @@
*/
#include <linux/slab.h>
+#include <linux/numa.h>
#include "sysfs-common.h"
@@ -1445,6 +1446,7 @@ struct damon_sysfs_scheme {
struct damon_sysfs_scheme_filters *filters;
struct damon_sysfs_stats *stats;
struct damon_sysfs_scheme_regions *tried_regions;
+ int target_nid;
};
/* This should match with enum damos_action */
@@ -1456,6 +1458,8 @@ static const char * const damon_sysfs_damos_action_strs[] = {
"nohugepage",
"lru_prio",
"lru_deprio",
+ "migrate_hot",
+ "migrate_cold",
"stat",
};
@@ -1470,6 +1474,7 @@ static struct damon_sysfs_scheme *damon_sysfs_scheme_alloc(
scheme->kobj = (struct kobject){};
scheme->action = action;
scheme->apply_interval_us = apply_interval_us;
+ scheme->target_nid = NUMA_NO_NODE;
return scheme;
}
@@ -1692,6 +1697,28 @@ static ssize_t apply_interval_us_store(struct kobject *kobj,
return err ? err : count;
}
+static ssize_t target_nid_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct damon_sysfs_scheme *scheme = container_of(kobj,
+ struct damon_sysfs_scheme, kobj);
+
+ return sysfs_emit(buf, "%d\n", scheme->target_nid);
+}
+
+static ssize_t target_nid_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ struct damon_sysfs_scheme *scheme = container_of(kobj,
+ struct damon_sysfs_scheme, kobj);
+ int err = 0;
+
+ /* TODO: error handling for target_nid range. */
+ err = kstrtoint(buf, 0, &scheme->target_nid);
+
+ return err ? err : count;
+}
+
static void damon_sysfs_scheme_release(struct kobject *kobj)
{
kfree(container_of(kobj, struct damon_sysfs_scheme, kobj));
@@ -1703,9 +1730,13 @@ static struct kobj_attribute damon_sysfs_scheme_action_attr =
static struct kobj_attribute damon_sysfs_scheme_apply_interval_us_attr =
__ATTR_RW_MODE(apply_interval_us, 0600);
+static struct kobj_attribute damon_sysfs_scheme_target_nid_attr =
+ __ATTR_RW_MODE(target_nid, 0600);
+
static struct attribute *damon_sysfs_scheme_attrs[] = {
&damon_sysfs_scheme_action_attr.attr,
&damon_sysfs_scheme_apply_interval_us_attr.attr,
+ &damon_sysfs_scheme_target_nid_attr.attr,
NULL,
};
ATTRIBUTE_GROUPS(damon_sysfs_scheme);
@@ -1877,14 +1908,10 @@ static int damon_sysfs_memcg_path_to_id(char *memcg_path, unsigned short *id)
return found ? 0 : -EINVAL;
}
-static int damon_sysfs_set_scheme_filters(struct damos *scheme,
+static int damon_sysfs_add_scheme_filters(struct damos *scheme,
struct damon_sysfs_scheme_filters *sysfs_filters)
{
int i;
- struct damos_filter *filter, *next;
-
- damos_for_each_filter_safe(filter, next, scheme)
- damos_destroy_filter(filter);
for (i = 0; i < sysfs_filters->nr; i++) {
struct damon_sysfs_scheme_filter *sysfs_filter =
@@ -1920,16 +1947,13 @@ static int damon_sysfs_set_scheme_filters(struct damos *scheme,
return 0;
}
-static int damos_sysfs_set_quota_score(
+static int damos_sysfs_add_quota_score(
struct damos_sysfs_quota_goals *sysfs_goals,
struct damos_quota *quota)
{
- struct damos_quota_goal *goal, *next;
+ struct damos_quota_goal *goal;
int i;
- damos_for_each_quota_goal_safe(goal, next, quota)
- damos_destroy_quota_goal(goal);
-
for (i = 0; i < sysfs_goals->nr; i++) {
struct damos_sysfs_quota_goal *sysfs_goal =
sysfs_goals->goals_arr[i];
@@ -1952,10 +1976,13 @@ int damos_sysfs_set_quota_scores(struct damon_sysfs_schemes *sysfs_schemes,
struct damon_ctx *ctx)
{
struct damos *scheme;
+ struct damos_quota quota = {};
int i = 0;
+ INIT_LIST_HEAD(&quota.goals);
damon_for_each_scheme(scheme, ctx) {
struct damon_sysfs_scheme *sysfs_scheme;
+ struct damos_quota_goal *g, *g_next;
int err;
/* user could have removed the scheme sysfs dir */
@@ -1963,10 +1990,17 @@ int damos_sysfs_set_quota_scores(struct damon_sysfs_schemes *sysfs_schemes,
break;
sysfs_scheme = sysfs_schemes->schemes_arr[i];
- err = damos_sysfs_set_quota_score(sysfs_scheme->quotas->goals,
- &scheme->quota);
+ err = damos_sysfs_add_quota_score(sysfs_scheme->quotas->goals,
+ &quota);
+ if (err) {
+ damos_for_each_quota_goal_safe(g, g_next, &quota)
+ damos_destroy_quota_goal(g);
+ return err;
+ }
+ err = damos_commit_quota_goals(&scheme->quota, &quota);
+ damos_for_each_quota_goal_safe(g, g_next, &quota)
+ damos_destroy_quota_goal(g);
if (err)
- /* kdamond will clean up schemes and terminated */
return err;
i++;
}
@@ -2031,17 +2065,18 @@ static struct damos *damon_sysfs_mk_scheme(
};
scheme = damon_new_scheme(&pattern, sysfs_scheme->action,
- sysfs_scheme->apply_interval_us, &quota, &wmarks);
+ sysfs_scheme->apply_interval_us, &quota, &wmarks,
+ sysfs_scheme->target_nid);
if (!scheme)
return NULL;
- err = damos_sysfs_set_quota_score(sysfs_quotas->goals, &scheme->quota);
+ err = damos_sysfs_add_quota_score(sysfs_quotas->goals, &scheme->quota);
if (err) {
damon_destroy_scheme(scheme);
return NULL;
}
- err = damon_sysfs_set_scheme_filters(scheme, sysfs_filters);
+ err = damon_sysfs_add_scheme_filters(scheme, sysfs_filters);
if (err) {
damon_destroy_scheme(scheme);
return NULL;
@@ -2049,66 +2084,12 @@ static struct damos *damon_sysfs_mk_scheme(
return scheme;
}
-static void damon_sysfs_update_scheme(struct damos *scheme,
- struct damon_sysfs_scheme *sysfs_scheme)
-{
- struct damon_sysfs_access_pattern *access_pattern =
- sysfs_scheme->access_pattern;
- struct damon_sysfs_quotas *sysfs_quotas = sysfs_scheme->quotas;
- struct damon_sysfs_weights *sysfs_weights = sysfs_quotas->weights;
- struct damon_sysfs_watermarks *sysfs_wmarks = sysfs_scheme->watermarks;
- int err;
-
- scheme->pattern.min_sz_region = access_pattern->sz->min;
- scheme->pattern.max_sz_region = access_pattern->sz->max;
- scheme->pattern.min_nr_accesses = access_pattern->nr_accesses->min;
- scheme->pattern.max_nr_accesses = access_pattern->nr_accesses->max;
- scheme->pattern.min_age_region = access_pattern->age->min;
- scheme->pattern.max_age_region = access_pattern->age->max;
-
- scheme->action = sysfs_scheme->action;
- scheme->apply_interval_us = sysfs_scheme->apply_interval_us;
-
- scheme->quota.ms = sysfs_quotas->ms;
- scheme->quota.sz = sysfs_quotas->sz;
- scheme->quota.reset_interval = sysfs_quotas->reset_interval_ms;
- scheme->quota.weight_sz = sysfs_weights->sz;
- scheme->quota.weight_nr_accesses = sysfs_weights->nr_accesses;
- scheme->quota.weight_age = sysfs_weights->age;
-
- err = damos_sysfs_set_quota_score(sysfs_quotas->goals, &scheme->quota);
- if (err) {
- damon_destroy_scheme(scheme);
- return;
- }
-
- scheme->wmarks.metric = sysfs_wmarks->metric;
- scheme->wmarks.interval = sysfs_wmarks->interval_us;
- scheme->wmarks.high = sysfs_wmarks->high;
- scheme->wmarks.mid = sysfs_wmarks->mid;
- scheme->wmarks.low = sysfs_wmarks->low;
-
- err = damon_sysfs_set_scheme_filters(scheme, sysfs_scheme->filters);
- if (err)
- damon_destroy_scheme(scheme);
-}
-
-int damon_sysfs_set_schemes(struct damon_ctx *ctx,
+int damon_sysfs_add_schemes(struct damon_ctx *ctx,
struct damon_sysfs_schemes *sysfs_schemes)
{
- struct damos *scheme, *next;
- int i = 0;
-
- damon_for_each_scheme_safe(scheme, next, ctx) {
- if (i < sysfs_schemes->nr)
- damon_sysfs_update_scheme(scheme,
- sysfs_schemes->schemes_arr[i]);
- else
- damon_destroy_scheme(scheme);
- i++;
- }
+ int i;
- for (; i < sysfs_schemes->nr; i++) {
+ for (i = 0; i < sysfs_schemes->nr; i++) {
struct damos *scheme, *next;
scheme = damon_sysfs_mk_scheme(sysfs_schemes->schemes_arr[i]);
diff --git a/mm/damon/sysfs-test.h b/mm/damon/sysfs-test.h
index 73bdce2452c1..1c9b596057a7 100644
--- a/mm/damon/sysfs-test.h
+++ b/mm/damon/sysfs-test.h
@@ -38,7 +38,7 @@ static int __damon_sysfs_test_get_any_pid(int min, int max)
return -1;
}
-static void damon_sysfs_test_set_targets(struct kunit *test)
+static void damon_sysfs_test_add_targets(struct kunit *test)
{
struct damon_sysfs_targets *sysfs_targets;
struct damon_sysfs_target *sysfs_target;
@@ -56,13 +56,13 @@ static void damon_sysfs_test_set_targets(struct kunit *test)
ctx = damon_new_ctx();
- damon_sysfs_set_targets(ctx, sysfs_targets);
+ damon_sysfs_add_targets(ctx, sysfs_targets);
KUNIT_EXPECT_EQ(test, 1u, nr_damon_targets(ctx));
sysfs_target->pid = __damon_sysfs_test_get_any_pid(
sysfs_target->pid + 1, 200);
- damon_sysfs_set_targets(ctx, sysfs_targets);
- KUNIT_EXPECT_EQ(test, 1u, nr_damon_targets(ctx));
+ damon_sysfs_add_targets(ctx, sysfs_targets);
+ KUNIT_EXPECT_EQ(test, 2u, nr_damon_targets(ctx));
damon_destroy_ctx(ctx);
kfree(sysfs_targets->targets_arr);
@@ -71,7 +71,7 @@ static void damon_sysfs_test_set_targets(struct kunit *test)
}
static struct kunit_case damon_sysfs_test_cases[] = {
- KUNIT_CASE(damon_sysfs_test_set_targets),
+ KUNIT_CASE(damon_sysfs_test_add_targets),
{},
};
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index 6fee383bc0c5..cffc755e7775 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -1162,72 +1162,16 @@ destroy_targets_out:
return err;
}
-static int damon_sysfs_update_target_pid(struct damon_target *target, int pid)
-{
- struct pid *pid_new;
-
- pid_new = find_get_pid(pid);
- if (!pid_new)
- return -EINVAL;
-
- if (pid_new == target->pid) {
- put_pid(pid_new);
- return 0;
- }
-
- put_pid(target->pid);
- target->pid = pid_new;
- return 0;
-}
-
-static int damon_sysfs_update_target(struct damon_target *target,
- struct damon_ctx *ctx,
- struct damon_sysfs_target *sys_target)
-{
- int err = 0;
-
- if (damon_target_has_pid(ctx)) {
- err = damon_sysfs_update_target_pid(target, sys_target->pid);
- if (err)
- return err;
- }
-
- /*
- * Do monitoring target region boundary update only if one or more
- * regions are set by the user. This is for keeping current monitoring
- * target results and range easier, especially for dynamic monitoring
- * target regions update ops like 'vaddr'.
- */
- if (sys_target->regions->nr)
- err = damon_sysfs_set_regions(target, sys_target->regions);
- return err;
-}
-
-static int damon_sysfs_set_targets(struct damon_ctx *ctx,
+static int damon_sysfs_add_targets(struct damon_ctx *ctx,
struct damon_sysfs_targets *sysfs_targets)
{
- struct damon_target *t, *next;
- int i = 0, err;
+ int i, err;
/* Multiple physical address space monitoring targets makes no sense */
if (ctx->ops.id == DAMON_OPS_PADDR && sysfs_targets->nr > 1)
return -EINVAL;
- damon_for_each_target_safe(t, next, ctx) {
- if (i < sysfs_targets->nr) {
- err = damon_sysfs_update_target(t, ctx,
- sysfs_targets->targets_arr[i]);
- if (err)
- return err;
- } else {
- if (damon_target_has_pid(ctx))
- put_pid(t->pid);
- damon_destroy_target(t);
- }
- i++;
- }
-
- for (; i < sysfs_targets->nr; i++) {
+ for (i = 0; i < sysfs_targets->nr; i++) {
struct damon_sysfs_target *st = sysfs_targets->targets_arr[i];
err = damon_sysfs_add_target(st, ctx);
@@ -1339,12 +1283,15 @@ static int damon_sysfs_apply_inputs(struct damon_ctx *ctx,
err = damon_sysfs_set_attrs(ctx, sys_ctx->attrs);
if (err)
return err;
- err = damon_sysfs_set_targets(ctx, sys_ctx->targets);
+ err = damon_sysfs_add_targets(ctx, sys_ctx->targets);
if (err)
return err;
- return damon_sysfs_set_schemes(ctx, sys_ctx->schemes);
+ return damon_sysfs_add_schemes(ctx, sys_ctx->schemes);
}
+static struct damon_ctx *damon_sysfs_build_ctx(
+ struct damon_sysfs_context *sys_ctx);
+
/*
* damon_sysfs_commit_input() - Commit user inputs to a running kdamond.
* @kdamond: The kobject wrapper for the associated kdamond.
@@ -1353,14 +1300,22 @@ static int damon_sysfs_apply_inputs(struct damon_ctx *ctx,
*/
static int damon_sysfs_commit_input(struct damon_sysfs_kdamond *kdamond)
{
+ struct damon_ctx *param_ctx;
+ int err;
+
if (!damon_sysfs_kdamond_running(kdamond))
return -EINVAL;
/* TODO: Support multiple contexts per kdamond */
if (kdamond->contexts->nr != 1)
return -EINVAL;
- return damon_sysfs_apply_inputs(kdamond->damon_ctx,
- kdamond->contexts->contexts_arr[0]);
+ param_ctx = damon_sysfs_build_ctx(kdamond->contexts->contexts_arr[0]);
+ if (IS_ERR(param_ctx))
+ return PTR_ERR(param_ctx);
+ err = damon_commit_ctx(kdamond->damon_ctx, param_ctx);
+ damon_sysfs_destroy_targets(param_ctx);
+ damon_destroy_ctx(param_ctx);
+ return err;
}
static int damon_sysfs_commit_schemes_quota_goals(
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 381559e4a1fa..58829baf8b5d 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -339,7 +339,7 @@ static void damon_hugetlb_mkold(pte_t *pte, struct mm_struct *mm,
struct vm_area_struct *vma, unsigned long addr)
{
bool referenced = false;
- pte_t entry = huge_ptep_get(pte);
+ pte_t entry = huge_ptep_get(mm, addr, pte);
struct folio *folio = pfn_folio(pte_pfn(entry));
unsigned long psize = huge_page_size(hstate_vma(vma));
@@ -373,7 +373,7 @@ static int damon_mkold_hugetlb_entry(pte_t *pte, unsigned long hmask,
pte_t entry;
ptl = huge_pte_lock(h, walk->mm, pte);
- entry = huge_ptep_get(pte);
+ entry = huge_ptep_get(walk->mm, addr, pte);
if (!pte_present(entry))
goto out;
@@ -509,7 +509,7 @@ static int damon_young_hugetlb_entry(pte_t *pte, unsigned long hmask,
pte_t entry;
ptl = huge_pte_lock(h, walk->mm, pte);
- entry = huge_ptep_get(pte);
+ entry = huge_ptep_get(walk->mm, addr, pte);
if (!pte_present(entry))
goto out;
diff --git a/mm/dmapool_test.c b/mm/dmapool_test.c
index 370fb9e209ef..54b1fd1ccfbb 100644
--- a/mm/dmapool_test.c
+++ b/mm/dmapool_test.c
@@ -144,4 +144,5 @@ static void dmapool_exit(void)
module_init(dmapool_checks);
module_exit(dmapool_exit);
+MODULE_DESCRIPTION("dma_pool timing test");
MODULE_LICENSE("GPL");
diff --git a/mm/fail_page_alloc.c b/mm/fail_page_alloc.c
index b1b09cce9394..532851ce5132 100644
--- a/mm/fail_page_alloc.c
+++ b/mm/fail_page_alloc.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/fault-inject.h>
+#include <linux/error-injection.h>
#include <linux/mm.h>
static struct {
@@ -21,7 +22,7 @@ static int __init setup_fail_page_alloc(char *str)
}
__setup("fail_page_alloc=", setup_fail_page_alloc);
-bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
+bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
{
int flags = 0;
@@ -41,6 +42,7 @@ bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
return should_fail_ex(&fail_page_alloc.attr, 1 << order, flags);
}
+ALLOW_ERROR_INJECTION(should_fail_alloc_page, TRUE);
#ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
diff --git a/mm/failslab.c b/mm/failslab.c
index ffc420c0e767..af16c2ed578f 100644
--- a/mm/failslab.c
+++ b/mm/failslab.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/fault-inject.h>
+#include <linux/error-injection.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include "slab.h"
@@ -14,23 +15,23 @@ static struct {
.cache_filter = false,
};
-bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags)
+int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
{
int flags = 0;
/* No fault-injection for bootstrap cache */
if (unlikely(s == kmem_cache))
- return false;
+ return 0;
if (gfpflags & __GFP_NOFAIL)
- return false;
+ return 0;
if (failslab.ignore_gfp_reclaim &&
(gfpflags & __GFP_DIRECT_RECLAIM))
- return false;
+ return 0;
if (failslab.cache_filter && !(s->flags & SLAB_FAILSLAB))
- return false;
+ return 0;
/*
* In some cases, it expects to specify __GFP_NOWARN
@@ -41,8 +42,9 @@ bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags)
if (gfpflags & __GFP_NOWARN)
flags |= FAULT_NOWARN;
- return should_fail_ex(&failslab.attr, s->object_size, flags);
+ return should_fail_ex(&failslab.attr, s->object_size, flags) ? -ENOMEM : 0;
}
+ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
static int __init setup_failslab(char *str)
{
diff --git a/mm/filemap.c b/mm/filemap.c
index ca8c8d889eef..d62150418b91 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -177,7 +177,7 @@ static void filemap_unaccount_folio(struct address_space *mapping,
* and we'd rather not leak it: if we're wrong,
* another bad page check should catch it later.
*/
- page_mapcount_reset(&folio->page);
+ atomic_set(&folio->_mapcount, -1);
folio_ref_sub(folio, mapcount);
}
}
@@ -1752,12 +1752,12 @@ pgoff_t page_cache_next_miss(struct address_space *mapping,
while (max_scan--) {
void *entry = xas_next(&xas);
if (!entry || xa_is_value(entry))
- break;
+ return xas.xa_index;
if (xas.xa_index == 0)
- break;
+ return 0;
}
- return xas.xa_index;
+ return index + max_scan;
}
EXPORT_SYMBOL(page_cache_next_miss);
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index f31e0ce65b11..f05906006b3c 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -10,12 +10,6 @@
#include <linux/swap.h>
#include "internal.h"
-struct address_space *page_mapping(struct page *page)
-{
- return folio_mapping(page_folio(page));
-}
-EXPORT_SYMBOL(page_mapping);
-
void unlock_page(struct page *page)
{
return folio_unlock(page_folio(page));
diff --git a/mm/gup.c b/mm/gup.c
index f1d6bc06eb52..54d0dc3831fb 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -5,6 +5,7 @@
#include <linux/spinlock.h>
#include <linux/mm.h>
+#include <linux/memfd.h>
#include <linux/memremap.h>
#include <linux/pagemap.h>
#include <linux/rmap.h>
@@ -17,6 +18,7 @@
#include <linux/hugetlb.h>
#include <linux/migrate.h>
#include <linux/mm_inline.h>
+#include <linux/pagevec.h>
#include <linux/sched/mm.h>
#include <linux/shmem_fs.h>
@@ -189,6 +191,19 @@ void unpin_user_page(struct page *page)
EXPORT_SYMBOL(unpin_user_page);
/**
+ * unpin_folio() - release a dma-pinned folio
+ * @folio: pointer to folio to be released
+ *
+ * Folios that were pinned via memfd_pin_folios() or other similar routines
+ * must be released either using unpin_folio() or unpin_folios().
+ */
+void unpin_folio(struct folio *folio)
+{
+ gup_put_folio(folio, 1, FOLL_PIN);
+}
+EXPORT_SYMBOL_GPL(unpin_folio);
+
+/**
* folio_add_pin - Try to get an additional pin on a pinned folio
* @folio: The folio to be pinned
*
@@ -290,7 +305,7 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
* 1) This code sees the page as already dirty, so it
* skips the call to set_page_dirty(). That could happen
* because clear_page_dirty_for_io() called
- * page_mkclean(), followed by set_page_dirty().
+ * folio_mkclean(), followed by set_page_dirty().
* However, now the page is going to get written back,
* which meets the original intention of setting it
* dirty, so all is well: clear_page_dirty_for_io() goes
@@ -400,6 +415,40 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
}
EXPORT_SYMBOL(unpin_user_pages);
+/**
+ * unpin_folios() - release an array of gup-pinned folios.
+ * @folios: array of folios to be marked dirty and released.
+ * @nfolios: number of folios in the @folios array.
+ *
+ * For each folio in the @folios array, release the folio using gup_put_folio.
+ *
+ * Please see the unpin_folio() documentation for details.
+ */
+void unpin_folios(struct folio **folios, unsigned long nfolios)
+{
+ unsigned long i = 0, j;
+
+ /*
+ * If this WARN_ON() fires, then the system *might* be leaking folios
+ * (by leaving them pinned), but probably not. More likely, gup/pup
+ * returned a hard -ERRNO error to the caller, who erroneously passed
+ * it here.
+ */
+ if (WARN_ON(IS_ERR_VALUE(nfolios)))
+ return;
+
+ while (i < nfolios) {
+ for (j = i + 1; j < nfolios; j++)
+ if (folios[i] != folios[j])
+ break;
+
+ if (folios[i])
+ gup_put_folio(folios[i], j - i, FOLL_PIN);
+ i = j;
+ }
+}
+EXPORT_SYMBOL_GPL(unpin_folios);
+
/*
* Set the MMF_HAS_PINNED if not set yet; after set it'll be there for the mm's
* lifecycle. Avoid setting the bit unless necessary, or it might cause write
@@ -413,7 +462,7 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
#ifdef CONFIG_MMU
-#if defined(CONFIG_ARCH_HAS_HUGEPD) || defined(CONFIG_HAVE_GUP_FAST)
+#ifdef CONFIG_HAVE_GUP_FAST
static int record_subpages(struct page *page, unsigned long sz,
unsigned long addr, unsigned long end,
struct page **pages)
@@ -523,154 +572,7 @@ static struct folio *try_grab_folio_fast(struct page *page, int refs,
return folio;
}
-#endif /* CONFIG_ARCH_HAS_HUGEPD || CONFIG_HAVE_GUP_FAST */
-
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
- unsigned long sz)
-{
- unsigned long __boundary = (addr + sz) & ~(sz-1);
- return (__boundary - 1 < end - 1) ? __boundary : end;
-}
-
-/*
- * Returns 1 if succeeded, 0 if failed, -EMLINK if unshare needed.
- *
- * NOTE: for the same entry, gup-fast and gup-slow can return different
- * results (0 v.s. -EMLINK) depending on whether vma is available. This is
- * the expected behavior, where we simply want gup-fast to fallback to
- * gup-slow to take the vma reference first.
- */
-static int gup_hugepte(struct vm_area_struct *vma, pte_t *ptep, unsigned long sz,
- unsigned long addr, unsigned long end, unsigned int flags,
- struct page **pages, int *nr, bool fast)
-{
- unsigned long pte_end;
- struct page *page;
- struct folio *folio;
- pte_t pte;
- int refs;
-
- pte_end = (addr + sz) & ~(sz-1);
- if (pte_end < end)
- end = pte_end;
-
- pte = huge_ptep_get(ptep);
-
- if (!pte_access_permitted(pte, flags & FOLL_WRITE))
- return 0;
-
- /* hugepages are never "special" */
- VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
-
- page = pte_page(pte);
- refs = record_subpages(page, sz, addr, end, pages + *nr);
-
- if (fast) {
- folio = try_grab_folio_fast(page, refs, flags);
- if (!folio)
- return 0;
- } else {
- folio = page_folio(page);
- if (try_grab_folio(folio, refs, flags))
- return 0;
- }
-
- if (unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
- gup_put_folio(folio, refs, flags);
- return 0;
- }
-
- if (!pte_write(pte) && gup_must_unshare(vma, flags, &folio->page)) {
- gup_put_folio(folio, refs, flags);
- return -EMLINK;
- }
-
- *nr += refs;
- folio_set_referenced(folio);
- return 1;
-}
-
-/*
- * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
- * systems on Power, which does not have issue with folio writeback against
- * GUP updates. When hugepd will be extended to support non-hugetlbfs or
- * even anonymous memory, we need to do extra check as what we do with most
- * of the other folios. See writable_file_mapping_allowed() and
- * gup_fast_folio_allowed() for more information.
- */
-static int gup_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
- unsigned long addr, unsigned int pdshift,
- unsigned long end, unsigned int flags,
- struct page **pages, int *nr, bool fast)
-{
- pte_t *ptep;
- unsigned long sz = 1UL << hugepd_shift(hugepd);
- unsigned long next;
- int ret;
-
- ptep = hugepte_offset(hugepd, addr, pdshift);
- do {
- next = hugepte_addr_end(addr, end, sz);
- ret = gup_hugepte(vma, ptep, sz, addr, end, flags, pages, nr,
- fast);
- if (ret != 1)
- return ret;
- } while (ptep++, addr = next, addr != end);
-
- return 1;
-}
-
-static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
- unsigned long addr, unsigned int pdshift,
- unsigned int flags,
- struct follow_page_context *ctx)
-{
- struct page *page;
- struct hstate *h;
- spinlock_t *ptl;
- int nr = 0, ret;
- pte_t *ptep;
-
- /* Only hugetlb supports hugepd */
- if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
- return ERR_PTR(-EFAULT);
-
- h = hstate_vma(vma);
- ptep = hugepte_offset(hugepd, addr, pdshift);
- ptl = huge_pte_lock(h, vma->vm_mm, ptep);
- ret = gup_hugepd(vma, hugepd, addr, pdshift, addr + PAGE_SIZE,
- flags, &page, &nr, false);
- spin_unlock(ptl);
-
- if (ret == 1) {
- /* GUP succeeded */
- WARN_ON_ONCE(nr != 1);
- ctx->page_mask = (1U << huge_page_order(h)) - 1;
- return page;
- }
-
- /* ret can be either 0 (translates to NULL) or negative */
- return ERR_PTR(ret);
-}
-#else /* CONFIG_ARCH_HAS_HUGEPD */
-static inline int gup_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
- unsigned long addr, unsigned int pdshift,
- unsigned long end, unsigned int flags,
- struct page **pages, int *nr, bool fast)
-{
- return 0;
-}
-
-static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
- unsigned long addr, unsigned int pdshift,
- unsigned int flags,
- struct follow_page_context *ctx)
-{
- return NULL;
-}
-#endif /* CONFIG_ARCH_HAS_HUGEPD */
-
+#endif /* CONFIG_HAVE_GUP_FAST */
static struct page *no_page_table(struct vm_area_struct *vma,
unsigned int flags, unsigned long address)
@@ -786,7 +688,7 @@ static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
return false;
/* ... and a write-fault isn't required for other reasons. */
- if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
+ if (pmd_needs_soft_dirty_wp(vma, pmd))
return false;
return !userfaultfd_huge_pmd_wp(vma, pmd);
}
@@ -907,7 +809,7 @@ static inline bool can_follow_write_pte(pte_t pte, struct page *page,
return false;
/* ... and a write-fault isn't required for other reasons. */
- if (vma_soft_dirty_enabled(vma) && !pte_soft_dirty(pte))
+ if (pte_needs_soft_dirty_wp(vma, pte))
return false;
return !userfaultfd_pte_wp(vma, pte);
}
@@ -1040,9 +942,6 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
return no_page_table(vma, flags, address);
if (!pmd_present(pmdval))
return no_page_table(vma, flags, address);
- if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
- return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
- address, PMD_SHIFT, flags, ctx);
if (pmd_devmap(pmdval)) {
ptl = pmd_lock(mm, pmd);
page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -1093,9 +992,6 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
pud = READ_ONCE(*pudp);
if (!pud_present(pud))
return no_page_table(vma, flags, address);
- if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
- return follow_hugepd(vma, __hugepd(pud_val(pud)),
- address, PUD_SHIFT, flags, ctx);
if (pud_leaf(pud)) {
ptl = pud_lock(mm, pudp);
page = follow_huge_pud(vma, address, pudp, flags, ctx);
@@ -1121,10 +1017,6 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
p4d = READ_ONCE(*p4dp);
BUILD_BUG_ON(p4d_leaf(p4d));
- if (unlikely(is_hugepd(__hugepd(p4d_val(p4d)))))
- return follow_hugepd(vma, __hugepd(p4d_val(p4d)),
- address, P4D_SHIFT, flags, ctx);
-
if (!p4d_present(p4d) || p4d_bad(p4d))
return no_page_table(vma, flags, address);
@@ -1168,10 +1060,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
ctx->page_mask = 0;
pgd = pgd_offset(mm, address);
- if (unlikely(is_hugepd(__hugepd(pgd_val(*pgd)))))
- page = follow_hugepd(vma, __hugepd(pgd_val(*pgd)),
- address, PGDIR_SHIFT, flags, ctx);
- else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+ if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
page = no_page_table(vma, flags, address);
else
page = follow_p4d_mask(vma, address, pgd, flags, ctx);
@@ -2394,19 +2283,19 @@ struct page *get_dump_page(unsigned long addr)
#ifdef CONFIG_MIGRATION
/*
- * Returns the number of collected pages. Return value is always >= 0.
+ * Returns the number of collected folios. Return value is always >= 0.
*/
-static unsigned long collect_longterm_unpinnable_pages(
- struct list_head *movable_page_list,
- unsigned long nr_pages,
- struct page **pages)
+static unsigned long collect_longterm_unpinnable_folios(
+ struct list_head *movable_folio_list,
+ unsigned long nr_folios,
+ struct folio **folios)
{
unsigned long i, collected = 0;
struct folio *prev_folio = NULL;
bool drain_allow = true;
- for (i = 0; i < nr_pages; i++) {
- struct folio *folio = page_folio(pages[i]);
+ for (i = 0; i < nr_folios; i++) {
+ struct folio *folio = folios[i];
if (folio == prev_folio)
continue;
@@ -2421,7 +2310,7 @@ static unsigned long collect_longterm_unpinnable_pages(
continue;
if (folio_test_hugetlb(folio)) {
- isolate_hugetlb(folio, movable_page_list);
+ isolate_hugetlb(folio, movable_folio_list);
continue;
}
@@ -2433,7 +2322,7 @@ static unsigned long collect_longterm_unpinnable_pages(
if (!folio_isolate_lru(folio))
continue;
- list_add_tail(&folio->lru, movable_page_list);
+ list_add_tail(&folio->lru, movable_folio_list);
node_stat_mod_folio(folio,
NR_ISOLATED_ANON + folio_is_file_lru(folio),
folio_nr_pages(folio));
@@ -2443,27 +2332,28 @@ static unsigned long collect_longterm_unpinnable_pages(
}
/*
- * Unpins all pages and migrates device coherent pages and movable_page_list.
- * Returns -EAGAIN if all pages were successfully migrated or -errno for failure
- * (or partial success).
+ * Unpins all folios and migrates device coherent folios and movable_folio_list.
+ * Returns -EAGAIN if all folios were successfully migrated or -errno for
+ * failure (or partial success).
*/
-static int migrate_longterm_unpinnable_pages(
- struct list_head *movable_page_list,
- unsigned long nr_pages,
- struct page **pages)
+static int migrate_longterm_unpinnable_folios(
+ struct list_head *movable_folio_list,
+ unsigned long nr_folios,
+ struct folio **folios)
{
int ret;
unsigned long i;
- for (i = 0; i < nr_pages; i++) {
- struct folio *folio = page_folio(pages[i]);
+ for (i = 0; i < nr_folios; i++) {
+ struct folio *folio = folios[i];
if (folio_is_device_coherent(folio)) {
/*
- * Migration will fail if the page is pinned, so convert
- * the pin on the source page to a normal reference.
+ * Migration will fail if the folio is pinned, so
+ * convert the pin on the source folio to a normal
+ * reference.
*/
- pages[i] = NULL;
+ folios[i] = NULL;
folio_get(folio);
gup_put_folio(folio, 1, FOLL_PIN);
@@ -2476,24 +2366,24 @@ static int migrate_longterm_unpinnable_pages(
}
/*
- * We can't migrate pages with unexpected references, so drop
+ * We can't migrate folios with unexpected references, so drop
* the reference obtained by __get_user_pages_locked().
- * Migrating pages have been added to movable_page_list after
+ * Migrating folios have been added to movable_folio_list after
* calling folio_isolate_lru() which takes a reference so the
- * page won't be freed if it's migrating.
+ * folio won't be freed if it's migrating.
*/
- unpin_user_page(pages[i]);
- pages[i] = NULL;
+ unpin_folio(folios[i]);
+ folios[i] = NULL;
}
- if (!list_empty(movable_page_list)) {
+ if (!list_empty(movable_folio_list)) {
struct migration_target_control mtc = {
.nid = NUMA_NO_NODE,
.gfp_mask = GFP_USER | __GFP_NOWARN,
.reason = MR_LONGTERM_PIN,
};
- if (migrate_pages(movable_page_list, alloc_migration_target,
+ if (migrate_pages(movable_folio_list, alloc_migration_target,
NULL, (unsigned long)&mtc, MIGRATE_SYNC,
MR_LONGTERM_PIN, NULL)) {
ret = -ENOMEM;
@@ -2501,48 +2391,71 @@ static int migrate_longterm_unpinnable_pages(
}
}
- putback_movable_pages(movable_page_list);
+ putback_movable_pages(movable_folio_list);
return -EAGAIN;
err:
- for (i = 0; i < nr_pages; i++)
- if (pages[i])
- unpin_user_page(pages[i]);
- putback_movable_pages(movable_page_list);
+ unpin_folios(folios, nr_folios);
+ putback_movable_pages(movable_folio_list);
return ret;
}
/*
- * Check whether all pages are *allowed* to be pinned. Rather confusingly, all
- * pages in the range are required to be pinned via FOLL_PIN, before calling
- * this routine.
+ * Check whether all folios are *allowed* to be pinned indefinitely (longterm).
+ * Rather confusingly, all folios in the range are required to be pinned via
+ * FOLL_PIN, before calling this routine.
*
- * If any pages in the range are not allowed to be pinned, then this routine
- * will migrate those pages away, unpin all the pages in the range and return
+ * If any folios in the range are not allowed to be pinned, then this routine
+ * will migrate those folios away, unpin all the folios in the range and return
* -EAGAIN. The caller should re-pin the entire range with FOLL_PIN and then
* call this routine again.
*
* If an error other than -EAGAIN occurs, this indicates a migration failure.
* The caller should give up, and propagate the error back up the call stack.
*
- * If everything is OK and all pages in the range are allowed to be pinned, then
- * this routine leaves all pages pinned and returns zero for success.
+ * If everything is OK and all folios in the range are allowed to be pinned,
+ * then this routine leaves all folios pinned and returns zero for success.
*/
-static long check_and_migrate_movable_pages(unsigned long nr_pages,
- struct page **pages)
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+ struct folio **folios)
{
unsigned long collected;
- LIST_HEAD(movable_page_list);
+ LIST_HEAD(movable_folio_list);
- collected = collect_longterm_unpinnable_pages(&movable_page_list,
- nr_pages, pages);
+ collected = collect_longterm_unpinnable_folios(&movable_folio_list,
+ nr_folios, folios);
if (!collected)
return 0;
- return migrate_longterm_unpinnable_pages(&movable_page_list, nr_pages,
- pages);
+ return migrate_longterm_unpinnable_folios(&movable_folio_list,
+ nr_folios, folios);
+}
+
+/*
+ * This routine just converts all the pages in the @pages array to folios and
+ * calls check_and_migrate_movable_folios() to do the heavy lifting.
+ *
+ * Please see the check_and_migrate_movable_folios() documentation for details.
+ */
+static long check_and_migrate_movable_pages(unsigned long nr_pages,
+ struct page **pages)
+{
+ struct folio **folios;
+ long i, ret;
+
+ folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
+ if (!folios)
+ return -ENOMEM;
+
+ for (i = 0; i < nr_pages; i++)
+ folios[i] = page_folio(pages[i]);
+
+ ret = check_and_migrate_movable_folios(nr_pages, folios);
+
+ kfree(folios);
+ return ret;
}
#else
static long check_and_migrate_movable_pages(unsigned long nr_pages,
@@ -2550,6 +2463,12 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
{
return 0;
}
+
+static long check_and_migrate_movable_folios(unsigned long nr_folios,
+ struct folio **folios)
+{
+ return 0;
+}
#endif /* CONFIG_MIGRATION */
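The -EAGAIN contract documented above implies a pin/check/retry loop on the caller's side; a hedged sketch of that shape, where pin_folios_with_foll_pin() is a hypothetical stand-in for whatever FOLL_PIN path the caller actually uses:

	long rc;

	do {
		rc = pin_folios_with_foll_pin(folios, nr_folios);	/* hypothetical helper */
		if (rc < 0)
			break;
		/* Returns 0 and leaves everything pinned, or unpins and asks for a retry. */
		rc = check_and_migrate_movable_folios(nr_folios, folios);
	} while (rc == -EAGAIN);
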
/*
@@ -3283,15 +3202,6 @@ static int gup_fast_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr,
pages, nr))
return 0;
- } else if (unlikely(is_hugepd(__hugepd(pmd_val(pmd))))) {
- /*
- * architecture have different format for hugetlbfs
- * pmd format and THP pmd format
- */
- if (gup_hugepd(NULL, __hugepd(pmd_val(pmd)), addr,
- PMD_SHIFT, next, flags, pages, nr,
- true) != 1)
- return 0;
} else if (!gup_fast_pte_range(pmd, pmdp, addr, next, flags,
pages, nr))
return 0;
@@ -3318,11 +3228,6 @@ static int gup_fast_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr,
if (!gup_fast_pud_leaf(pud, pudp, addr, next, flags,
pages, nr))
return 0;
- } else if (unlikely(is_hugepd(__hugepd(pud_val(pud))))) {
- if (gup_hugepd(NULL, __hugepd(pud_val(pud)), addr,
- PUD_SHIFT, next, flags, pages, nr,
- true) != 1)
- return 0;
} else if (!gup_fast_pmd_range(pudp, pud, addr, next, flags,
pages, nr))
return 0;
@@ -3346,13 +3251,8 @@ static int gup_fast_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr,
if (!p4d_present(p4d))
return 0;
BUILD_BUG_ON(p4d_leaf(p4d));
- if (unlikely(is_hugepd(__hugepd(p4d_val(p4d))))) {
- if (gup_hugepd(NULL, __hugepd(p4d_val(p4d)), addr,
- P4D_SHIFT, next, flags, pages, nr,
- true) != 1)
- return 0;
- } else if (!gup_fast_pud_range(p4dp, p4d, addr, next, flags,
- pages, nr))
+ if (!gup_fast_pud_range(p4dp, p4d, addr, next, flags,
+ pages, nr))
return 0;
} while (p4dp++, addr = next, addr != end);
@@ -3376,11 +3276,6 @@ static void gup_fast_pgd_range(unsigned long addr, unsigned long end,
if (!gup_fast_pgd_leaf(pgd, pgdp, addr, next, flags,
pages, nr))
return;
- } else if (unlikely(is_hugepd(__hugepd(pgd_val(pgd))))) {
- if (gup_hugepd(NULL, __hugepd(pgd_val(pgd)), addr,
- PGDIR_SHIFT, next, flags, pages, nr,
- true) != 1)
- return;
} else if (!gup_fast_p4d_range(pgdp, pgd, addr, next, flags,
pages, nr))
return;
@@ -3687,3 +3582,140 @@ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
&locked, gup_flags);
}
EXPORT_SYMBOL(pin_user_pages_unlocked);
+
+/**
+ * memfd_pin_folios() - pin folios associated with a memfd
+ * @memfd: the memfd whose folios are to be pinned
+ * @start: the first memfd offset
+ * @end: the last memfd offset (inclusive)
+ * @folios: array that receives pointers to the folios pinned
+ * @max_folios: maximum number of entries in @folios
+ * @offset: the offset into the first folio
+ *
+ * Attempt to pin folios associated with a memfd in the contiguous range
+ * [start, end]. Given that a memfd is either backed by shmem or hugetlb,
+ * the folios can either be found in the page cache or be allocated as
+ * needed. Once the folios are located, they are all pinned via FOLL_PIN,
+ * and @offset is populated with the offset into the first folio.
+ * And, eventually, these pinned folios must be released either using
+ * unpin_folios() or unpin_folio().
+ *
+ * Note that the folios may be pinned for an indefinite amount of time;
+ * in most cases, how long they stay pinned is controlled by userspace.
+ * This behavior is effectively the
+ * same as using FOLL_LONGTERM with other GUP APIs.
+ *
+ * Returns the number of folios pinned, which could be less than @max_folios
+ * as it depends on the folio sizes that cover the range [start, end].
+ * If no folios were pinned, it returns -errno.
+ */
+long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
+ struct folio **folios, unsigned int max_folios,
+ pgoff_t *offset)
+{
+ unsigned int flags, nr_folios, nr_found;
+ unsigned int i, pgshift = PAGE_SHIFT;
+ pgoff_t start_idx, end_idx, next_idx;
+ struct folio *folio = NULL;
+ struct folio_batch fbatch;
+ struct hstate *h;
+ long ret = -EINVAL;
+
+ if (start < 0 || start > end || !max_folios)
+ return -EINVAL;
+
+ if (!memfd)
+ return -EINVAL;
+
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
+ return -EINVAL;
+
+ if (end >= i_size_read(file_inode(memfd)))
+ return -EINVAL;
+
+ if (is_file_hugepages(memfd)) {
+ h = hstate_file(memfd);
+ pgshift = huge_page_shift(h);
+ }
+
+ flags = memalloc_pin_save();
+ do {
+ nr_folios = 0;
+ start_idx = start >> pgshift;
+ end_idx = end >> pgshift;
+ if (is_file_hugepages(memfd)) {
+ start_idx <<= huge_page_order(h);
+ end_idx <<= huge_page_order(h);
+ }
+
+ folio_batch_init(&fbatch);
+ while (start_idx <= end_idx && nr_folios < max_folios) {
+ /*
+ * In most cases, we should be able to find the folios
+ * in the page cache. If we cannot find them for some
+ * reason, we try to allocate them and add them to the
+ * page cache.
+ */
+ nr_found = filemap_get_folios_contig(memfd->f_mapping,
+ &start_idx,
+ end_idx,
+ &fbatch);
+ if (folio) {
+ folio_put(folio);
+ folio = NULL;
+ }
+
+ next_idx = 0;
+ for (i = 0; i < nr_found; i++) {
+ /*
+ * As there can be multiple entries for a
+ * given folio in the batch returned by
+ * filemap_get_folios_contig(), the below
+ * check is to ensure that we pin and return a
+ * unique set of folios between start and end.
+ */
+ if (next_idx &&
+ next_idx != folio_index(fbatch.folios[i]))
+ continue;
+
+ folio = page_folio(&fbatch.folios[i]->page);
+
+ if (try_grab_folio(folio, 1, FOLL_PIN)) {
+ folio_batch_release(&fbatch);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ if (nr_folios == 0)
+ *offset = offset_in_folio(folio, start);
+
+ folios[nr_folios] = folio;
+ next_idx = folio_next_index(folio);
+ if (++nr_folios == max_folios)
+ break;
+ }
+
+ folio = NULL;
+ folio_batch_release(&fbatch);
+ if (!nr_found) {
+ folio = memfd_alloc_folio(memfd, start_idx);
+ if (IS_ERR(folio)) {
+ ret = PTR_ERR(folio);
+ if (ret != -EEXIST)
+ goto err;
+ }
+ }
+ }
+
+ ret = check_and_migrate_movable_folios(nr_folios, folios);
+ } while (ret == -EAGAIN);
+
+ memalloc_pin_restore(flags);
+ return ret ? ret : nr_folios;
+err:
+ memalloc_pin_restore(flags);
+ unpin_folios(folios, nr_folios);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(memfd_pin_folios);
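A hedged usage sketch for the new export, assuming a driver wants to pin the first 2MB of a memfd that is known to be at least that large; use_folio() and the chosen sizes are illustrative, only memfd_pin_folios() and unpin_folios() come from this patch:

	struct folio *folios[16];
	pgoff_t off;
	long nr, i;

	nr = memfd_pin_folios(memfd, 0, SZ_2M - 1, folios, ARRAY_SIZE(folios), &off);
	if (nr < 0)
		return nr;			/* nothing was pinned */

	for (i = 0; i < nr; i++)
		use_folio(folios[i]);		/* illustrative consumer */

	unpin_folios(folios, nr);		/* balance every FOLL_PIN reference */
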
diff --git a/mm/highmem.c b/mm/highmem.c
index bd48ba445dd4..ef3189b36cad 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -111,13 +111,10 @@ static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)
}
#endif
-atomic_long_t _totalhigh_pages __read_mostly;
-EXPORT_SYMBOL(_totalhigh_pages);
-
-unsigned int __nr_free_highpages(void)
+unsigned long __nr_free_highpages(void)
{
+ unsigned long pages = 0;
struct zone *zone;
- unsigned int pages = 0;
for_each_populated_zone(zone) {
if (is_highmem(zone))
@@ -127,6 +124,20 @@ unsigned int __nr_free_highpages(void)
return pages;
}
+unsigned long __totalhigh_pages(void)
+{
+ unsigned long pages = 0;
+ struct zone *zone;
+
+ for_each_populated_zone(zone) {
+ if (is_highmem(zone))
+ pages += zone_managed_pages(zone);
+ }
+
+ return pages;
+}
+EXPORT_SYMBOL(__totalhigh_pages);
+
static int pkmap_count[LAST_PKMAP];
static __cacheline_aligned_in_smp DEFINE_SPINLOCK(kmap_lock);
diff --git a/mm/hmm.c b/mm/hmm.c
index 93aebd9cc130..7e0229ae4a5a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -480,7 +480,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
pte_t entry;
ptl = huge_pte_lock(hstate_vma(vma), walk->mm, pte);
- entry = huge_ptep_get(pte);
+ entry = huge_ptep_get(walk->mm, addr, pte);
i = (start - range->start) >> PAGE_SHIFT;
pfn_req_flags = range->hmm_pfns[i];
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2120f7478e55..f9696c94e211 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -20,6 +20,7 @@
#include <linux/swapops.h>
#include <linux/backing-dev.h>
#include <linux/dax.h>
+#include <linux/mm_types.h>
#include <linux/khugepaged.h>
#include <linux/freezer.h>
#include <linux/pfn_t.h>
@@ -150,10 +151,15 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
* Must be done before hugepage flags check since shmem has its
* own flags.
*/
- if (!in_pf && shmem_file(vma->vm_file))
- return shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff,
- !enforce_sysfs, vma->vm_mm, vm_flags)
- ? orders : 0;
+ if (!in_pf && shmem_file(vma->vm_file)) {
+ bool global_huge = shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff,
+ !enforce_sysfs, vma->vm_mm, vm_flags);
+
+ if (!vma_is_anon_shmem(vma))
+ return global_huge ? orders : 0;
+ return shmem_allowable_huge_orders(file_inode(vma->vm_file),
+ vma, vma->vm_pgoff, global_huge);
+ }
if (!vma_is_anonymous(vma)) {
/*
@@ -449,14 +455,6 @@ static void thpsize_release(struct kobject *kobj);
static DEFINE_SPINLOCK(huge_anon_orders_lock);
static LIST_HEAD(thpsize_list);
-struct thpsize {
- struct kobject kobj;
- struct list_head node;
- int order;
-};
-
-#define to_thpsize(kobj) container_of(kobj, struct thpsize, kobj)
-
static ssize_t thpsize_enabled_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
@@ -509,6 +507,13 @@ static ssize_t thpsize_enabled_store(struct kobject *kobj,
} else
ret = -EINVAL;
+ if (ret > 0) {
+ int err;
+
+ err = start_stop_khugepaged();
+ if (err)
+ ret = err;
+ }
return ret;
}
@@ -517,6 +522,9 @@ static struct kobj_attribute thpsize_enabled_attr =
static struct attribute *thpsize_attrs[] = {
&thpsize_enabled_attr.attr,
+#ifdef CONFIG_SHMEM
+ &thpsize_shmem_enabled_attr.attr,
+#endif
NULL,
};
@@ -560,6 +568,12 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
+DEFINE_MTHP_STAT_ATTR(shmem_alloc, MTHP_STAT_SHMEM_ALLOC);
+DEFINE_MTHP_STAT_ATTR(shmem_fallback, MTHP_STAT_SHMEM_FALLBACK);
+DEFINE_MTHP_STAT_ATTR(shmem_fallback_charge, MTHP_STAT_SHMEM_FALLBACK_CHARGE);
+DEFINE_MTHP_STAT_ATTR(split, MTHP_STAT_SPLIT);
+DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
+DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
static struct attribute *stats_attrs[] = {
&anon_fault_alloc_attr.attr,
@@ -567,6 +581,12 @@ static struct attribute *stats_attrs[] = {
&anon_fault_fallback_charge_attr.attr,
&swpout_attr.attr,
&swpout_fallback_attr.attr,
+ &shmem_alloc_attr.attr,
+ &shmem_fallback_attr.attr,
+ &shmem_fallback_charge_attr.attr,
+ &split_attr.attr,
+ &split_failed_attr.attr,
+ &split_deferred_attr.attr,
NULL,
};
@@ -942,10 +962,10 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
goto release;
}
- clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
+ folio_zero_user(folio, vmf->address);
/*
* The memory barrier inside __folio_mark_uptodate makes sure that
- * clear_huge_page writes become visible before the set_pmd_at()
+ * folio_zero_user writes become visible before the set_pmd_at()
* write.
*/
__folio_mark_uptodate(folio);
@@ -972,7 +992,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
entry = mk_huge_pmd(page, vma->vm_page_prot);
entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
- folio_add_new_anon_rmap(folio, vma, haddr);
+ folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, vma);
pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
@@ -1624,7 +1644,7 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
return false;
/* Do we need write faults for softdirty tracking? */
- if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
+ if (pmd_needs_soft_dirty_wp(vma, pmd))
return false;
/* Do we need write faults for uffd-wp tracking? */
@@ -1651,7 +1671,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
int nid = NUMA_NO_NODE;
int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
- bool migrated = false, writable = false;
+ bool writable = false;
int flags = 0;
vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
@@ -1687,16 +1707,17 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
if (node_is_toptier(nid))
last_cpupid = folio_last_cpupid(folio);
target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
- if (target_nid == NUMA_NO_NODE) {
- folio_put(folio);
+ if (target_nid == NUMA_NO_NODE)
+ goto out_map;
+ if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
+ flags |= TNF_MIGRATE_FAIL;
goto out_map;
}
-
+ /* The folio is isolated and isolation code holds a folio reference. */
spin_unlock(vmf->ptl);
writable = false;
- migrated = migrate_misplaced_folio(folio, vma, target_nid);
- if (migrated) {
+ if (!migrate_misplaced_folio(folio, vma, target_nid)) {
flags |= TNF_MIGRATED;
nid = target_nid;
} else {
@@ -2581,6 +2602,27 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
pmd_populate(mm, pmd, pgtable);
}
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, bool freeze, struct folio *folio)
+{
+ VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
+ VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
+ VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
+ VM_BUG_ON(freeze && !folio);
+
+ /*
+ * When the caller requests to set up a migration entry, we
+ * require a folio to check the PMD against. Otherwise, there
+ * is a risk of replacing the wrong folio.
+ */
+ if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+ is_pmd_migration_entry(*pmd)) {
+ if (folio && folio != pmd_folio(*pmd))
+ return;
+ __split_huge_pmd_locked(vma, pmd, address, freeze);
+ }
+}
+
void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long address, bool freeze, struct folio *folio)
{
@@ -2592,26 +2634,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
mmu_notifier_invalidate_range_start(&range);
ptl = pmd_lock(vma->vm_mm, pmd);
-
- /*
- * If caller asks to setup a migration entry, we need a folio to check
- * pmd against. Otherwise we can end up replacing wrong folio.
- */
- VM_BUG_ON(freeze && !folio);
- VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
-
- if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
- is_pmd_migration_entry(*pmd)) {
- /*
- * It's safe to call pmd_page when folio is set because it's
- * guaranteed that pmd is present.
- */
- if (folio && folio != pmd_folio(*pmd))
- goto out;
- __split_huge_pmd_locked(vma, pmd, range.start, freeze);
- }
-
-out:
+ split_huge_pmd_locked(vma, range.start, pmd, freeze, folio);
spin_unlock(ptl);
mmu_notifier_invalidate_range_end(&range);
}
@@ -2685,6 +2708,71 @@ static void unmap_folio(struct folio *folio)
try_to_unmap_flush();
}
+static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
+ unsigned long addr, pmd_t *pmdp,
+ struct folio *folio)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ int ref_count, map_count;
+ pmd_t orig_pmd = *pmdp;
+
+ if (folio_test_dirty(folio) || pmd_dirty(orig_pmd))
+ return false;
+
+ orig_pmd = pmdp_huge_clear_flush(vma, addr, pmdp);
+
+ /*
+ * Syncing against concurrent GUP-fast:
+ * - clear PMD; barrier; read refcount
+ * - inc refcount; barrier; read PMD
+ */
+ smp_mb();
+
+ ref_count = folio_ref_count(folio);
+ map_count = folio_mapcount(folio);
+
+ /*
+ * Order reads for folio refcount and dirty flag
+ * (see comments in __remove_mapping()).
+ */
+ smp_rmb();
+
+ /*
+ * If the folio or its PMD is redirtied at this point, or if there
+	 * are unexpected references, we give up on discarding this folio
+	 * and remap it instead.
+ *
+ * The only folio refs must be one from isolation plus the rmap(s).
+ */
+ if (folio_test_dirty(folio) || pmd_dirty(orig_pmd) ||
+ ref_count != map_count + 1) {
+ set_pmd_at(mm, addr, pmdp, orig_pmd);
+ return false;
+ }
+
+ folio_remove_rmap_pmd(folio, pmd_page(orig_pmd), vma);
+ zap_deposited_table(mm, pmdp);
+ add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+ if (vma->vm_flags & VM_LOCKED)
+ mlock_drain_local();
+ folio_put(folio);
+
+ return true;
+}
+
+bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+ pmd_t *pmdp, struct folio *folio)
+{
+ VM_WARN_ON_FOLIO(!folio_test_pmd_mappable(folio), folio);
+ VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+ VM_WARN_ON_ONCE(!IS_ALIGNED(addr, HPAGE_PMD_SIZE));
+
+ if (folio_test_anon(folio) && !folio_test_swapbacked(folio))
+ return __discard_anon_folio_pmd_locked(vma, addr, pmdp, folio);
+
+ return false;
+}
+
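The expected reclaim-side shape, sketched under the assumption that the caller (the rmap walker in this series) already holds the folio lock and the PMD page-table lock; addr, pmdp and the split fallback are caller context, not part of this hunk:

	/* folio is a PMD-mapped lazyfree anon folio at a PMD-aligned address. */
	if (unmap_huge_pmd_locked(vma, addr & HPAGE_PMD_MASK, pmdp, folio))
		return true;	/* discarded in place, no PMD split required */
	/* otherwise fall back to splitting the PMD and unmapping per PTE */
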
static void remap_page(struct folio *folio, unsigned long nr)
{
int i = 0;
@@ -2838,7 +2926,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
split_page_memcg(head, order, new_order);
if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
- offset = swp_offset(folio->swap);
+ offset = swap_cache_index(folio->swap);
swap_cache = swap_address_space(folio->swap);
xa_lock(&swap_cache->i_pages);
}
@@ -2998,7 +3086,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order);
struct anon_vma *anon_vma = NULL;
struct address_space *mapping = NULL;
- bool is_thp = folio_test_pmd_mappable(folio);
+ int order = folio_order(folio);
int extra_pins, ret;
pgoff_t end;
bool is_hzp;
@@ -3183,27 +3271,17 @@ out_unlock:
i_mmap_unlock_read(mapping);
out:
xas_destroy(&xas);
- if (is_thp)
+ if (order == HPAGE_PMD_ORDER)
count_vm_event(!ret ? THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED);
+ count_mthp_stat(order, !ret ? MTHP_STAT_SPLIT : MTHP_STAT_SPLIT_FAILED);
return ret;
}
-void folio_undo_large_rmappable(struct folio *folio)
+void __folio_undo_large_rmappable(struct folio *folio)
{
struct deferred_split *ds_queue;
unsigned long flags;
- if (folio_order(folio) <= 1)
- return;
-
- /*
- * At this point, there is no one trying to add the folio to
- * deferred_list. If folio is not in deferred_list, it's safe
- * to check without acquiring the split_queue_lock.
- */
- if (data_race(list_empty(&folio->_deferred_list)))
- return;
-
ds_queue = get_deferred_split_queue(folio);
spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
if (!list_empty(&folio->_deferred_list)) {
@@ -3248,6 +3326,7 @@ void deferred_split_folio(struct folio *folio)
if (list_empty(&folio->_deferred_list)) {
if (folio_test_pmd_mappable(folio))
count_vm_event(THP_DEFERRED_SPLIT_PAGE);
+ count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
ds_queue->split_queue_len++;
#ifdef CONFIG_MEMCG
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 43e1af868cfd..0858a1827207 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1355,6 +1355,10 @@ static struct folio *dequeue_hugetlb_folio_nodemask(struct hstate *h, gfp_t gfp_
struct zoneref *z;
int node = NUMA_NO_NODE;
+	/* 'nid' should not be NUMA_NO_NODE. Try to catch any misuse of it and rectify. */
+ if (nid == NUMA_NO_NODE)
+ nid = numa_node_id();
+
zonelist = node_zonelist(nid, gfp_mask);
retry_cpuset:
@@ -2257,13 +2261,11 @@ static struct folio *only_alloc_fresh_hugetlb_folio(struct hstate *h,
* pages is zero.
*/
static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
- gfp_t gfp_mask, int nid, nodemask_t *nmask,
- nodemask_t *node_alloc_noretry)
+ gfp_t gfp_mask, int nid, nodemask_t *nmask)
{
struct folio *folio;
- folio = __alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask,
- node_alloc_noretry);
+ folio = __alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
if (!folio)
return NULL;
@@ -2481,7 +2483,7 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
goto out_unlock;
spin_unlock_irq(&hugetlb_lock);
- folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
+ folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask);
if (!folio)
return NULL;
@@ -2517,7 +2519,7 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mas
if (hstate_is_gigantic(h))
return NULL;
- folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
+ folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask);
if (!folio)
return NULL;
@@ -2586,6 +2588,23 @@ struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
return alloc_migrate_hugetlb_folio(h, gfp_mask, preferred_nid, nmask);
}
+static nodemask_t *policy_mbind_nodemask(gfp_t gfp)
+{
+#ifdef CONFIG_NUMA
+ struct mempolicy *mpol = get_task_policy(current);
+
+ /*
+ * Only enforce MPOL_BIND policy which overlaps with cpuset policy
+ * (from policy_nodemask) specifically for hugetlb case
+ */
+ if (mpol->mode == MPOL_BIND &&
+ (apply_policy_zone(mpol, gfp_zone(gfp)) &&
+ cpuset_nodemask_valid_mems_allowed(&mpol->nodes)))
+ return &mpol->nodes;
+#endif
+ return NULL;
+}
+
/*
* Increase the hugetlb pool such that it can accommodate a reservation
* of size 'delta'.
@@ -2599,6 +2618,8 @@ static int gather_surplus_pages(struct hstate *h, long delta)
long i;
long needed, allocated;
bool alloc_ok = true;
+ int node;
+ nodemask_t *mbind_nodemask = policy_mbind_nodemask(htlb_alloc_mask(h));
lockdep_assert_held(&hugetlb_lock);
needed = (h->resv_huge_pages + delta) - h->free_huge_pages;
@@ -2613,8 +2634,15 @@ static int gather_surplus_pages(struct hstate *h, long delta)
retry:
spin_unlock_irq(&hugetlb_lock);
for (i = 0; i < needed; i++) {
- folio = alloc_surplus_hugetlb_folio(h, htlb_alloc_mask(h),
- NUMA_NO_NODE, NULL);
+ folio = NULL;
+ for_each_node_mask(node, cpuset_current_mems_allowed) {
+ if (!mbind_nodemask || node_isset(node, *mbind_nodemask)) {
+ folio = alloc_surplus_hugetlb_folio(h, htlb_alloc_mask(h),
+ node, NULL);
+ if (folio)
+ break;
+ }
+ }
if (!folio) {
alloc_ok = false;
break;
@@ -3439,7 +3467,7 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid,
- &node_states[N_MEMORY], NULL);
+ &node_states[N_MEMORY]);
if (!folio)
break;
free_huge_folio(folio); /* free it into the hugepage allocator */
@@ -4617,7 +4645,7 @@ void __init hugetlb_add_hstate(unsigned int order)
BUG_ON(hugetlb_max_hstate >= HUGE_MAX_HSTATE);
BUG_ON(order < order_base_2(__NR_USED_SUBPAGE));
h = &hstates[hugetlb_max_hstate++];
- mutex_init(&h->resize_lock);
+ __mutex_init(&h->resize_lock, "resize mutex", &h->resize_key);
h->order = order;
h->mask = ~(huge_page_size(h) - 1);
for (i = 0; i < MAX_NUMNODES; ++i)
@@ -4840,23 +4868,6 @@ static int __init default_hugepagesz_setup(char *s)
}
__setup("default_hugepagesz=", default_hugepagesz_setup);
-static nodemask_t *policy_mbind_nodemask(gfp_t gfp)
-{
-#ifdef CONFIG_NUMA
- struct mempolicy *mpol = get_task_policy(current);
-
- /*
- * Only enforce MPOL_BIND policy which overlaps with cpuset policy
- * (from policy_nodemask) specifically for hugetlb case
- */
- if (mpol->mode == MPOL_BIND &&
- (apply_policy_zone(mpol, gfp_zone(gfp)) &&
- cpuset_nodemask_valid_mems_allowed(&mpol->nodes)))
- return &mpol->nodes;
-#endif
- return NULL;
-}
-
static unsigned int allowed_mems_nr(struct hstate *h)
{
int node;
@@ -4875,7 +4886,7 @@ static unsigned int allowed_mems_nr(struct hstate *h)
}
#ifdef CONFIG_SYSCTL
-static int proc_hugetlb_doulongvec_minmax(struct ctl_table *table, int write,
+static int proc_hugetlb_doulongvec_minmax(const struct ctl_table *table, int write,
void *buffer, size_t *length,
loff_t *ppos, unsigned long *out)
{
@@ -4892,7 +4903,7 @@ static int proc_hugetlb_doulongvec_minmax(struct ctl_table *table, int write,
}
static int hugetlb_sysctl_handler_common(bool obey_mempolicy,
- struct ctl_table *table, int write,
+ const struct ctl_table *table, int write,
void *buffer, size_t *length, loff_t *ppos)
{
struct hstate *h = &default_hstate;
@@ -5279,7 +5290,7 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma,
{
pte_t entry;
- entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(ptep)));
+ entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(vma->vm_mm, address, ptep)));
if (huge_ptep_set_access_flags(vma, address, ptep, entry, 1))
update_mmu_cache(vma, address, ptep);
}
@@ -5387,7 +5398,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
dst_ptl = huge_pte_lock(h, dst, dst_pte);
src_ptl = huge_pte_lockptr(h, src, src_pte);
spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
- entry = huge_ptep_get(src_pte);
+ entry = huge_ptep_get(src_vma->vm_mm, addr, src_pte);
again:
if (huge_pte_none(entry)) {
/*
@@ -5425,7 +5436,7 @@ again:
set_huge_pte_at(dst, addr, dst_pte,
make_pte_marker(marker), sz);
} else {
- entry = huge_ptep_get(src_pte);
+ entry = huge_ptep_get(src_vma->vm_mm, addr, src_pte);
pte_folio = page_folio(pte_page(entry));
folio_get(pte_folio);
@@ -5454,9 +5465,8 @@ again:
ret = PTR_ERR(new_folio);
break;
}
- ret = copy_user_large_folio(new_folio,
- pte_folio,
- addr, dst_vma);
+ ret = copy_user_large_folio(new_folio, pte_folio,
+ ALIGN_DOWN(addr, sz), dst_vma);
folio_put(pte_folio);
if (ret) {
folio_put(new_folio);
@@ -5467,7 +5477,7 @@ again:
dst_ptl = huge_pte_lock(h, dst, dst_pte);
src_ptl = huge_pte_lockptr(h, src, src_pte);
spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
- entry = huge_ptep_get(src_pte);
+ entry = huge_ptep_get(src_vma->vm_mm, addr, src_pte);
if (!pte_same(src_pte_old, entry)) {
restore_reserve_on_error(h, dst_vma, addr,
new_folio);
@@ -5577,7 +5587,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
new_addr |= last_addr_mask;
continue;
}
- if (huge_pte_none(huge_ptep_get(src_pte)))
+ if (huge_pte_none(huge_ptep_get(mm, old_addr, src_pte)))
continue;
if (huge_pmd_unshare(mm, vma, old_addr, src_pte)) {
@@ -5650,7 +5660,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
continue;
}
- pte = huge_ptep_get(ptep);
+ pte = huge_ptep_get(mm, address, ptep);
if (huge_pte_none(pte)) {
spin_unlock(ptl);
continue;
@@ -5899,7 +5909,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
struct vm_area_struct *vma = vmf->vma;
struct mm_struct *mm = vma->vm_mm;
const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
- pte_t pte = huge_ptep_get(vmf->pte);
+ pte_t pte = huge_ptep_get(mm, vmf->address, vmf->pte);
struct hstate *h = hstate_vma(vma);
struct folio *old_folio;
struct folio *new_folio;
@@ -6020,7 +6030,7 @@ retry_avoidcopy:
vmf->pte = hugetlb_walk(vma, vmf->address,
huge_page_size(h));
if (likely(vmf->pte &&
- pte_same(huge_ptep_get(vmf->pte), pte)))
+ pte_same(huge_ptep_get(mm, vmf->address, vmf->pte), pte)))
goto retry_avoidcopy;
/*
* race occurs while re-acquiring page table
@@ -6058,7 +6068,7 @@ retry_avoidcopy:
*/
spin_lock(vmf->ptl);
vmf->pte = hugetlb_walk(vma, vmf->address, huge_page_size(h));
- if (likely(vmf->pte && pte_same(huge_ptep_get(vmf->pte), pte))) {
+ if (likely(vmf->pte && pte_same(huge_ptep_get(mm, vmf->address, vmf->pte), pte))) {
pte_t newpte = make_huge_pte(vma, &new_folio->page, !unshare);
/* Break COW or unshare */
@@ -6159,14 +6169,14 @@ static inline vm_fault_t hugetlb_handle_userfault(struct vm_fault *vmf,
* Recheck pte with pgtable lock. Returns true if pte didn't change, or
* false if pte changed or is changing.
*/
-static bool hugetlb_pte_stable(struct hstate *h, struct mm_struct *mm,
+static bool hugetlb_pte_stable(struct hstate *h, struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t old_pte)
{
spinlock_t *ptl;
bool same;
ptl = huge_pte_lock(h, mm, ptep);
- same = pte_same(huge_ptep_get(ptep), old_pte);
+ same = pte_same(huge_ptep_get(mm, addr, ptep), old_pte);
spin_unlock(ptl);
return same;
@@ -6227,7 +6237,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
* never happen on the page after UFFDIO_COPY has
* correctly installed the page and returned.
*/
- if (!hugetlb_pte_stable(h, mm, vmf->pte, vmf->orig_pte)) {
+ if (!hugetlb_pte_stable(h, mm, vmf->address, vmf->pte, vmf->orig_pte)) {
ret = 0;
goto out;
}
@@ -6256,14 +6266,13 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
* here. Before returning error, get ptl and make
* sure there really is no pte entry.
*/
- if (hugetlb_pte_stable(h, mm, vmf->pte, vmf->orig_pte))
+ if (hugetlb_pte_stable(h, mm, vmf->address, vmf->pte, vmf->orig_pte))
ret = vmf_error(PTR_ERR(folio));
else
ret = 0;
goto out;
}
- clear_huge_page(&folio->page, vmf->real_address,
- pages_per_huge_page(h));
+ folio_zero_user(folio, vmf->real_address);
__folio_mark_uptodate(folio);
new_folio = true;
@@ -6306,7 +6315,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
folio_unlock(folio);
folio_put(folio);
/* See comment in userfaultfd_missing() block above */
- if (!hugetlb_pte_stable(h, mm, vmf->pte, vmf->orig_pte)) {
+ if (!hugetlb_pte_stable(h, mm, vmf->address, vmf->pte, vmf->orig_pte)) {
ret = 0;
goto out;
}
@@ -6333,7 +6342,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
vmf->ptl = huge_pte_lock(h, mm, vmf->pte);
ret = 0;
/* If pte changed from under us, retry */
- if (!pte_same(huge_ptep_get(vmf->pte), vmf->orig_pte))
+ if (!pte_same(huge_ptep_get(mm, vmf->address, vmf->pte), vmf->orig_pte))
goto backout;
if (anon_rmap)
@@ -6454,7 +6463,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
return VM_FAULT_OOM;
}
- vmf.orig_pte = huge_ptep_get(vmf.pte);
+ vmf.orig_pte = huge_ptep_get(mm, vmf.address, vmf.pte);
if (huge_pte_none_mostly(vmf.orig_pte)) {
if (is_pte_marker(vmf.orig_pte)) {
pte_marker marker =
@@ -6495,7 +6504,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
* be released there.
*/
mutex_unlock(&hugetlb_fault_mutex_table[hash]);
- migration_entry_wait_huge(vma, vmf.pte);
+ migration_entry_wait_huge(vma, vmf.address, vmf.pte);
return 0;
} else if (unlikely(is_hugetlb_entry_hwpoisoned(vmf.orig_pte)))
ret = VM_FAULT_HWPOISON_LARGE |
@@ -6528,11 +6537,11 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
vmf.ptl = huge_pte_lock(h, mm, vmf.pte);
/* Check for a racing update before calling hugetlb_wp() */
- if (unlikely(!pte_same(vmf.orig_pte, huge_ptep_get(vmf.pte))))
+ if (unlikely(!pte_same(vmf.orig_pte, huge_ptep_get(mm, vmf.address, vmf.pte))))
goto out_ptl;
/* Handle userfault-wp first, before trying to lock more pages */
- if (userfaultfd_wp(vma) && huge_pte_uffd_wp(huge_ptep_get(vmf.pte)) &&
+ if (userfaultfd_wp(vma) && huge_pte_uffd_wp(huge_ptep_get(mm, vmf.address, vmf.pte)) &&
(flags & FAULT_FLAG_WRITE) && !huge_pte_write(vmf.orig_pte)) {
if (!userfaultfd_wp_async(vma)) {
spin_unlock(vmf.ptl);
@@ -6647,7 +6656,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
struct hstate *h = hstate_vma(dst_vma);
struct address_space *mapping = dst_vma->vm_file->f_mapping;
pgoff_t idx = vma_hugecache_offset(h, dst_vma, dst_addr);
- unsigned long size;
+ unsigned long size = huge_page_size(h);
int vm_shared = dst_vma->vm_flags & VM_SHARED;
pte_t _dst_pte;
spinlock_t *ptl;
@@ -6660,14 +6669,13 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
ptl = huge_pte_lock(h, dst_mm, dst_pte);
/* Don't overwrite any existing PTEs (even markers) */
- if (!huge_pte_none(huge_ptep_get(dst_pte))) {
+ if (!huge_pte_none(huge_ptep_get(dst_mm, dst_addr, dst_pte))) {
spin_unlock(ptl);
return -EEXIST;
}
_dst_pte = make_pte_marker(PTE_MARKER_POISONED);
- set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte,
- huge_page_size(h));
+ set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte, size);
/* No need to invalidate - it was non-present before */
update_mmu_cache(dst_vma, dst_addr, dst_pte);
@@ -6741,7 +6749,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
*foliop = NULL;
goto out;
}
- ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
+ ret = copy_user_large_folio(folio, *foliop,
+ ALIGN_DOWN(dst_addr, size), dst_vma);
folio_put(*foliop);
*foliop = NULL;
if (ret) {
@@ -6768,9 +6777,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
/* Add shared, newly allocated pages to the page cache. */
if (vm_shared && !is_continue) {
- size = i_size_read(mapping->host) >> huge_page_shift(h);
ret = -EFAULT;
- if (idx >= size)
+ if (idx >= (i_size_read(mapping->host) >> huge_page_shift(h)))
goto out_release_nounlock;
/*
@@ -6797,7 +6805,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
* page backing it, then access the page.
*/
ret = -EEXIST;
- if (!huge_pte_none_mostly(huge_ptep_get(dst_pte)))
+ if (!huge_pte_none_mostly(huge_ptep_get(dst_mm, dst_addr, dst_pte)))
goto out_release_unlock;
if (folio_in_pagecache)
@@ -6827,7 +6835,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
if (wp_enabled)
_dst_pte = huge_pte_mkuffd_wp(_dst_pte);
- set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte, huge_page_size(h));
+ set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte, size);
hugetlb_count_add(pages_per_huge_page(h), dst_mm);
@@ -6918,7 +6926,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
address |= last_addr_mask;
continue;
}
- pte = huge_ptep_get(ptep);
+ pte = huge_ptep_get(mm, address, ptep);
if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
/* Nothing to do. */
} else if (unlikely(is_hugetlb_entry_migration(pte))) {
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index e20339a346b9..4ff238ba1250 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -27,7 +27,17 @@
#define MEMFILE_IDX(val) (((val) >> 16) & 0xffff)
#define MEMFILE_ATTR(val) ((val) & 0xffff)
+/* Use t->m[0] to encode the offset */
+#define MEMFILE_OFFSET(t, m0) (((offsetof(t, m0) << 16) | sizeof_field(t, m0)))
+#define MEMFILE_OFFSET0(val) (((val) >> 16) & 0xffff)
+#define MEMFILE_FIELD_SIZE(val) ((val) & 0xffff)
+
+#define DFL_TMPL_SIZE ARRAY_SIZE(hugetlb_dfl_tmpl)
+#define LEGACY_TMPL_SIZE ARRAY_SIZE(hugetlb_legacy_tmpl)
+
static struct hugetlb_cgroup *root_h_cgroup __read_mostly;
+static struct cftype *dfl_files;
+static struct cftype *legacy_files;
static inline struct page_counter *
__hugetlb_cgroup_counter_from_cgroup(struct hugetlb_cgroup *h_cg, int idx,
@@ -460,7 +470,7 @@ static int hugetlb_cgroup_read_numa_stat(struct seq_file *seq, void *dummy)
int nid;
struct cftype *cft = seq_cft(seq);
int idx = MEMFILE_IDX(cft->private);
- bool legacy = MEMFILE_ATTR(cft->private);
+ bool legacy = !cgroup_subsys_on_dfl(hugetlb_cgrp_subsys);
struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(seq_css(seq));
struct cgroup_subsys_state *css;
unsigned long usage;
@@ -702,166 +712,185 @@ static int hugetlb_events_local_show(struct seq_file *seq, void *v)
return __hugetlb_events_show(seq, true);
}
-static void __init __hugetlb_cgroup_file_dfl_init(int idx)
+static struct cftype hugetlb_dfl_tmpl[] = {
+ {
+ .name = "max",
+ .private = RES_LIMIT,
+ .seq_show = hugetlb_cgroup_read_u64_max,
+ .write = hugetlb_cgroup_write_dfl,
+ .flags = CFTYPE_NOT_ON_ROOT,
+ },
+ {
+ .name = "rsvd.max",
+ .private = RES_RSVD_LIMIT,
+ .seq_show = hugetlb_cgroup_read_u64_max,
+ .write = hugetlb_cgroup_write_dfl,
+ .flags = CFTYPE_NOT_ON_ROOT,
+ },
+ {
+ .name = "current",
+ .private = RES_USAGE,
+ .seq_show = hugetlb_cgroup_read_u64_max,
+ .flags = CFTYPE_NOT_ON_ROOT,
+ },
+ {
+ .name = "rsvd.current",
+ .private = RES_RSVD_USAGE,
+ .seq_show = hugetlb_cgroup_read_u64_max,
+ .flags = CFTYPE_NOT_ON_ROOT,
+ },
+ {
+ .name = "events",
+ .seq_show = hugetlb_events_show,
+ .file_offset = MEMFILE_OFFSET(struct hugetlb_cgroup, events_file[0]),
+ .flags = CFTYPE_NOT_ON_ROOT,
+ },
+ {
+ .name = "events.local",
+ .seq_show = hugetlb_events_local_show,
+ .file_offset = MEMFILE_OFFSET(struct hugetlb_cgroup, events_local_file[0]),
+ .flags = CFTYPE_NOT_ON_ROOT,
+ },
+ {
+ .name = "numa_stat",
+ .seq_show = hugetlb_cgroup_read_numa_stat,
+ .flags = CFTYPE_NOT_ON_ROOT,
+ },
+ /* don't need terminator here */
+};
+
+static struct cftype hugetlb_legacy_tmpl[] = {
+ {
+ .name = "limit_in_bytes",
+ .private = RES_LIMIT,
+ .read_u64 = hugetlb_cgroup_read_u64,
+ .write = hugetlb_cgroup_write_legacy,
+ },
+ {
+ .name = "rsvd.limit_in_bytes",
+ .private = RES_RSVD_LIMIT,
+ .read_u64 = hugetlb_cgroup_read_u64,
+ .write = hugetlb_cgroup_write_legacy,
+ },
+ {
+ .name = "usage_in_bytes",
+ .private = RES_USAGE,
+ .read_u64 = hugetlb_cgroup_read_u64,
+ },
+ {
+ .name = "rsvd.usage_in_bytes",
+ .private = RES_RSVD_USAGE,
+ .read_u64 = hugetlb_cgroup_read_u64,
+ },
+ {
+ .name = "max_usage_in_bytes",
+ .private = RES_MAX_USAGE,
+ .write = hugetlb_cgroup_reset,
+ .read_u64 = hugetlb_cgroup_read_u64,
+ },
+ {
+ .name = "rsvd.max_usage_in_bytes",
+ .private = RES_RSVD_MAX_USAGE,
+ .write = hugetlb_cgroup_reset,
+ .read_u64 = hugetlb_cgroup_read_u64,
+ },
+ {
+ .name = "failcnt",
+ .private = RES_FAILCNT,
+ .write = hugetlb_cgroup_reset,
+ .read_u64 = hugetlb_cgroup_read_u64,
+ },
+ {
+ .name = "rsvd.failcnt",
+ .private = RES_RSVD_FAILCNT,
+ .write = hugetlb_cgroup_reset,
+ .read_u64 = hugetlb_cgroup_read_u64,
+ },
+ {
+ .name = "numa_stat",
+ .seq_show = hugetlb_cgroup_read_numa_stat,
+ },
+ /* don't need terminator here */
+};
+
+static void __init
+hugetlb_cgroup_cfttypes_init(struct hstate *h, struct cftype *cft,
+ struct cftype *tmpl, int tmpl_size)
{
char buf[32];
- struct cftype *cft;
- struct hstate *h = &hstates[idx];
+ int i, idx = hstate_index(h);
/* format the size */
mem_fmt(buf, sizeof(buf), huge_page_size(h));
- /* Add the limit file */
- cft = &h->cgroup_files_dfl[0];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.max", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_LIMIT);
- cft->seq_show = hugetlb_cgroup_read_u64_max;
- cft->write = hugetlb_cgroup_write_dfl;
- cft->flags = CFTYPE_NOT_ON_ROOT;
-
- /* Add the reservation limit file */
- cft = &h->cgroup_files_dfl[1];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.rsvd.max", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_RSVD_LIMIT);
- cft->seq_show = hugetlb_cgroup_read_u64_max;
- cft->write = hugetlb_cgroup_write_dfl;
- cft->flags = CFTYPE_NOT_ON_ROOT;
-
- /* Add the current usage file */
- cft = &h->cgroup_files_dfl[2];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.current", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_USAGE);
- cft->seq_show = hugetlb_cgroup_read_u64_max;
- cft->flags = CFTYPE_NOT_ON_ROOT;
-
- /* Add the current reservation usage file */
- cft = &h->cgroup_files_dfl[3];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.rsvd.current", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_RSVD_USAGE);
- cft->seq_show = hugetlb_cgroup_read_u64_max;
- cft->flags = CFTYPE_NOT_ON_ROOT;
-
- /* Add the events file */
- cft = &h->cgroup_files_dfl[4];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.events", buf);
- cft->private = MEMFILE_PRIVATE(idx, 0);
- cft->seq_show = hugetlb_events_show;
- cft->file_offset = offsetof(struct hugetlb_cgroup, events_file[idx]);
- cft->flags = CFTYPE_NOT_ON_ROOT;
-
- /* Add the events.local file */
- cft = &h->cgroup_files_dfl[5];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.events.local", buf);
- cft->private = MEMFILE_PRIVATE(idx, 0);
- cft->seq_show = hugetlb_events_local_show;
- cft->file_offset = offsetof(struct hugetlb_cgroup,
- events_local_file[idx]);
- cft->flags = CFTYPE_NOT_ON_ROOT;
-
- /* Add the numa stat file */
- cft = &h->cgroup_files_dfl[6];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.numa_stat", buf);
- cft->private = MEMFILE_PRIVATE(idx, 0);
- cft->seq_show = hugetlb_cgroup_read_numa_stat;
- cft->flags = CFTYPE_NOT_ON_ROOT;
-
- /* NULL terminate the last cft */
- cft = &h->cgroup_files_dfl[7];
- memset(cft, 0, sizeof(*cft));
+ for (i = 0; i < tmpl_size; cft++, tmpl++, i++) {
+ *cft = *tmpl;
+ /* rebuild the name */
+ snprintf(cft->name, MAX_CFTYPE_NAME, "%s.%s", buf, tmpl->name);
+ /* rebuild the private */
+ cft->private = MEMFILE_PRIVATE(idx, tmpl->private);
+ /* rebuild the file_offset */
+ if (tmpl->file_offset) {
+ unsigned int offset = tmpl->file_offset;
+
+ cft->file_offset = MEMFILE_OFFSET0(offset) +
+ MEMFILE_FIELD_SIZE(offset) * idx;
+ }
- WARN_ON(cgroup_add_dfl_cftypes(&hugetlb_cgrp_subsys,
- h->cgroup_files_dfl));
+ lockdep_register_key(&cft->lockdep_key);
+ }
}
-static void __init __hugetlb_cgroup_file_legacy_init(int idx)
+static void __init __hugetlb_cgroup_file_dfl_init(struct hstate *h)
{
- char buf[32];
- struct cftype *cft;
- struct hstate *h = &hstates[idx];
+ int idx = hstate_index(h);
- /* format the size */
- mem_fmt(buf, sizeof(buf), huge_page_size(h));
+ hugetlb_cgroup_cfttypes_init(h, dfl_files + idx * DFL_TMPL_SIZE,
+ hugetlb_dfl_tmpl, DFL_TMPL_SIZE);
+}
- /* Add the limit file */
- cft = &h->cgroup_files_legacy[0];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.limit_in_bytes", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_LIMIT);
- cft->read_u64 = hugetlb_cgroup_read_u64;
- cft->write = hugetlb_cgroup_write_legacy;
-
- /* Add the reservation limit file */
- cft = &h->cgroup_files_legacy[1];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.rsvd.limit_in_bytes", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_RSVD_LIMIT);
- cft->read_u64 = hugetlb_cgroup_read_u64;
- cft->write = hugetlb_cgroup_write_legacy;
-
- /* Add the usage file */
- cft = &h->cgroup_files_legacy[2];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.usage_in_bytes", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_USAGE);
- cft->read_u64 = hugetlb_cgroup_read_u64;
-
- /* Add the reservation usage file */
- cft = &h->cgroup_files_legacy[3];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.rsvd.usage_in_bytes", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_RSVD_USAGE);
- cft->read_u64 = hugetlb_cgroup_read_u64;
-
- /* Add the MAX usage file */
- cft = &h->cgroup_files_legacy[4];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.max_usage_in_bytes", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_MAX_USAGE);
- cft->write = hugetlb_cgroup_reset;
- cft->read_u64 = hugetlb_cgroup_read_u64;
-
- /* Add the MAX reservation usage file */
- cft = &h->cgroup_files_legacy[5];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.rsvd.max_usage_in_bytes", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_RSVD_MAX_USAGE);
- cft->write = hugetlb_cgroup_reset;
- cft->read_u64 = hugetlb_cgroup_read_u64;
-
- /* Add the failcntfile */
- cft = &h->cgroup_files_legacy[6];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.failcnt", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_FAILCNT);
- cft->write = hugetlb_cgroup_reset;
- cft->read_u64 = hugetlb_cgroup_read_u64;
-
- /* Add the reservation failcntfile */
- cft = &h->cgroup_files_legacy[7];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.rsvd.failcnt", buf);
- cft->private = MEMFILE_PRIVATE(idx, RES_RSVD_FAILCNT);
- cft->write = hugetlb_cgroup_reset;
- cft->read_u64 = hugetlb_cgroup_read_u64;
-
- /* Add the numa stat file */
- cft = &h->cgroup_files_legacy[8];
- snprintf(cft->name, MAX_CFTYPE_NAME, "%s.numa_stat", buf);
- cft->private = MEMFILE_PRIVATE(idx, 1);
- cft->seq_show = hugetlb_cgroup_read_numa_stat;
-
- /* NULL terminate the last cft */
- cft = &h->cgroup_files_legacy[9];
- memset(cft, 0, sizeof(*cft));
+static void __init __hugetlb_cgroup_file_legacy_init(struct hstate *h)
+{
+ int idx = hstate_index(h);
- WARN_ON(cgroup_add_legacy_cftypes(&hugetlb_cgrp_subsys,
- h->cgroup_files_legacy));
+ hugetlb_cgroup_cfttypes_init(h, legacy_files + idx * LEGACY_TMPL_SIZE,
+ hugetlb_legacy_tmpl, LEGACY_TMPL_SIZE);
+}
+
+static void __init __hugetlb_cgroup_file_init(struct hstate *h)
+{
+ __hugetlb_cgroup_file_dfl_init(h);
+ __hugetlb_cgroup_file_legacy_init(h);
}
-static void __init __hugetlb_cgroup_file_init(int idx)
+static void __init __hugetlb_cgroup_file_pre_init(void)
{
- __hugetlb_cgroup_file_dfl_init(idx);
- __hugetlb_cgroup_file_legacy_init(idx);
+ int cft_count;
+
+ cft_count = hugetlb_max_hstate * DFL_TMPL_SIZE + 1; /* add terminator */
+ dfl_files = kcalloc(cft_count, sizeof(struct cftype), GFP_KERNEL);
+ BUG_ON(!dfl_files);
+ cft_count = hugetlb_max_hstate * LEGACY_TMPL_SIZE + 1; /* add terminator */
+ legacy_files = kcalloc(cft_count, sizeof(struct cftype), GFP_KERNEL);
+ BUG_ON(!legacy_files);
+}
+
+static void __init __hugetlb_cgroup_file_post_init(void)
+{
+ WARN_ON(cgroup_add_dfl_cftypes(&hugetlb_cgrp_subsys,
+ dfl_files));
+ WARN_ON(cgroup_add_legacy_cftypes(&hugetlb_cgrp_subsys,
+ legacy_files));
}
void __init hugetlb_cgroup_file_init(void)
{
struct hstate *h;
+ __hugetlb_cgroup_file_pre_init();
for_each_hstate(h)
- __hugetlb_cgroup_file_init(hstate_index(h));
+ __hugetlb_cgroup_file_init(h);
+ __hugetlb_cgroup_file_post_init();
}
/*
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 8193906515c6..829112b0a914 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -184,10 +184,13 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
*/
static inline void free_vmemmap_page(struct page *page)
{
- if (PageReserved(page))
+ if (PageReserved(page)) {
free_bootmem_page(page);
- else
+ mod_node_page_state(page_pgdat(page), NR_MEMMAP_BOOT, -1);
+ } else {
__free_page(page);
+ mod_node_page_state(page_pgdat(page), NR_MEMMAP, -1);
+ }
}
/* Free a list of the vmemmap pages */
@@ -338,6 +341,7 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
copy_page(page_to_virt(walk.reuse_page),
(void *)walk.reuse_addr);
list_add(&walk.reuse_page->lru, vmemmap_pages);
+ mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, 1);
}
/*
@@ -384,14 +388,19 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
int nid = page_to_nid((struct page *)start);
struct page *page, *next;
+ int i;
- while (nr_pages--) {
+ for (i = 0; i < nr_pages; i++) {
page = alloc_pages_node(nid, gfp_mask, 0);
- if (!page)
+ if (!page) {
+ mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, i);
goto out;
+ }
list_add(&page->lru, list);
}
+ mod_node_page_state(NODE_DATA(nid), NR_MEMMAP, nr_pages);
+
return 0;
out:
list_for_each_entry_safe(page, next, list, lru)
diff --git a/mm/hwpoison-inject.c b/mm/hwpoison-inject.c
index c9d653f51e45..7ecaa1900137 100644
--- a/mm/hwpoison-inject.c
+++ b/mm/hwpoison-inject.c
@@ -110,4 +110,5 @@ static int __init pfn_inject_init(void)
module_init(pfn_inject_init);
module_exit(pfn_inject_exit);
+MODULE_DESCRIPTION("HWPoison pages injector");
MODULE_LICENSE("GPL");
diff --git a/mm/internal.h b/mm/internal.h
index cc2c5e07fad3..b4d86436565b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -211,18 +211,21 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
}
/**
- * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
+ * pte_move_swp_offset - Move the swap entry offset field of a swap pte
+ * forward or backward by delta
* @pte: The initial pte state; is_swap_pte(pte) must be true and
* non_swap_entry() must be false.
+ * @delta: The direction and the offset we are moving; forward if delta
+ * is positive; backward if delta is negative
*
- * Increments the swap offset, while maintaining all other fields, including
+ * Moves the swap offset, while maintaining all other fields, including
* swap type, and any swp pte bits. The resulting pte is returned.
*/
-static inline pte_t pte_next_swp_offset(pte_t pte)
+static inline pte_t pte_move_swp_offset(pte_t pte, long delta)
{
swp_entry_t entry = pte_to_swp_entry(pte);
pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
- (swp_offset(entry) + 1)));
+ (swp_offset(entry) + delta)));
if (pte_swp_soft_dirty(pte))
new = pte_swp_mksoft_dirty(new);
@@ -234,6 +237,20 @@ static inline pte_t pte_next_swp_offset(pte_t pte)
return new;
}
+
+/**
+ * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
+ * @pte: The initial pte state; is_swap_pte(pte) must be true and
+ * non_swap_entry() must be false.
+ *
+ * Increments the swap offset, while maintaining all other fields, including
+ * swap type, and any swp pte bits. The resulting pte is returned.
+ */
+static inline pte_t pte_next_swp_offset(pte_t pte)
+{
+ return pte_move_swp_offset(pte, 1);
+}
+
/**
* swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
* @start_ptep: Page table pointer for the first entry.
@@ -587,7 +604,8 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
int mt);
extern void memblock_free_pages(struct page *page, unsigned long pfn,
unsigned int order);
-extern void __free_pages_core(struct page *page, unsigned int order);
+extern void __free_pages_core(struct page *page, unsigned int order,
+ enum meminit_context context);
/*
* This will have no effect, other than possibly generating a warning, if the
@@ -604,7 +622,22 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
#endif
}
-void folio_undo_large_rmappable(struct folio *folio);
+void __folio_undo_large_rmappable(struct folio *folio);
+static inline void folio_undo_large_rmappable(struct folio *folio)
+{
+ if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
+ return;
+
+ /*
+ * At this point, there is no one trying to add the folio to
+ * deferred_list. If folio is not in deferred_list, it's safe
+ * to check without acquiring the split_queue_lock.
+ */
+ if (data_race(list_empty(&folio->_deferred_list)))
+ return;
+
+ __folio_undo_large_rmappable(folio);
+}
static inline struct folio *page_rmappable_folio(struct page *page)
{
@@ -1045,12 +1078,23 @@ extern u64 hwpoison_filter_flags_mask;
extern u64 hwpoison_filter_flags_value;
extern u64 hwpoison_filter_memcg;
extern u32 hwpoison_filter_enable;
+#define MAGIC_HWPOISON 0x48575053U /* HWPS */
+void SetPageHWPoisonTakenOff(struct page *page);
+void ClearPageHWPoisonTakenOff(struct page *page);
+bool take_page_off_buddy(struct page *page);
+bool put_page_back_buddy(struct page *page);
+struct task_struct *task_early_kill(struct task_struct *tsk, int force_early);
+void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
+ struct vm_area_struct *vma, struct list_head *to_kill,
+ unsigned long ksm_addr);
+unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
unsigned long, unsigned long,
unsigned long, unsigned long);
extern void set_pageblock_order(void);
+struct folio *alloc_migrate_folio(struct folio *src, unsigned long private);
unsigned long reclaim_pages(struct list_head *folio_list);
unsigned int reclaim_clean_pages_from_list(struct zone *zone,
struct list_head *folio_list);
@@ -1316,6 +1360,16 @@ static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)
return !(vma->vm_flags & VM_SOFTDIRTY);
}
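+/*
+ * Write protection is required for soft-dirty tracking only while the VMA
+ * has soft-dirty enabled and the entry is not yet marked soft-dirty.
+ */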
+static inline bool pmd_needs_soft_dirty_wp(struct vm_area_struct *vma, pmd_t pmd)
+{
+ return vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd);
+}
+
+static inline bool pte_needs_soft_dirty_wp(struct vm_area_struct *vma, pte_t pte)
+{
+ return vma_soft_dirty_enabled(vma) && !pte_soft_dirty(pte);
+}
+
static inline void vma_iter_config(struct vma_iterator *vmi,
unsigned long index, unsigned long last)
{
@@ -1515,4 +1569,13 @@ static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
void workingset_update_node(struct xa_node *node);
extern struct list_lru shadow_nodes;
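+/*
+ * Gathers up to 8 VMAs so the unlink_file_vma_batch_*() helpers can tear
+ * down their file mapping links in one pass instead of one VMA at a time.
+ */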
+struct unlink_vma_file_batch {
+ int count;
+ struct vm_area_struct *vmas[8];
+};
+
+void unlink_file_vma_batch_init(struct unlink_vma_file_batch *);
+void unlink_file_vma_batch_add(struct unlink_vma_file_batch *, struct vm_area_struct *);
+void unlink_file_vma_batch_final(struct unlink_vma_file_batch *);
+
#endif /* __MM_INTERNAL_H */
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 964b8482275b..c5cb54fc696d 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -305,8 +305,14 @@ metadata_update_state(struct kfence_metadata *meta, enum kfence_object_state nex
WRITE_ONCE(meta->state, next);
}
+#ifdef CONFIG_KMSAN
+#define check_canary_attributes noinline __no_kmsan_checks
+#else
+#define check_canary_attributes inline
+#endif
+
/* Check canary byte at @addr. */
-static inline bool check_canary_byte(u8 *addr)
+static check_canary_attributes bool check_canary_byte(u8 *addr)
{
struct kfence_metadata *meta;
unsigned long flags;
@@ -341,7 +347,8 @@ static inline void set_canary(const struct kfence_metadata *meta)
*((u64 *)addr) = KFENCE_CANARY_PATTERN_U64;
}
-static inline void check_canary(const struct kfence_metadata *meta)
+static check_canary_attributes void
+check_canary(const struct kfence_metadata *meta)
{
const unsigned long pageaddr = ALIGN_DOWN(meta->addr, PAGE_SIZE);
unsigned long addr = pageaddr;
@@ -595,7 +602,7 @@ static unsigned long kfence_init_pool(void)
continue;
__folio_set_slab(slab_folio(slab));
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
MEMCG_DATA_OBJEXTS;
#endif
@@ -645,7 +652,7 @@ reset_slab:
if (!i || (i % 2))
continue;
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
slab->obj_exts = 0;
#endif
__folio_clear_slab(slab_folio(slab));
@@ -1139,7 +1146,7 @@ void __kfence_free(void *addr)
{
struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
KFENCE_WARN_ON(meta->obj_exts.objcg);
#endif
/*
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 084f5f36e8e7..db87a05047bd 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -97,7 +97,7 @@ struct kfence_metadata {
struct kfence_track free_track;
/* For updating alloc_covered on frees. */
u32 alloc_stack_hash;
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
struct slabobj_ext obj_exts;
#endif
};
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index 95b2b84c296d..00fd17285285 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -852,3 +852,4 @@ kunit_test_suites(&kfence_test_suite);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>");
+MODULE_DESCRIPTION("kfence unit test suite");
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index aab471791bd9..cdd1d8655a76 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -385,10 +385,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
int __init khugepaged_init(void)
{
- mm_slot_cache = kmem_cache_create("khugepaged_mm_slot",
- sizeof(struct khugepaged_mm_slot),
- __alignof__(struct khugepaged_mm_slot),
- 0, NULL);
+ mm_slot_cache = KMEM_CACHE(khugepaged_mm_slot, 0);
if (!mm_slot_cache)
return -ENOMEM;
@@ -416,6 +413,26 @@ static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
test_bit(MMF_DISABLE_THP, &mm->flags);
}
+static bool hugepage_pmd_enabled(void)
+{
+ /*
+ * We cover both the anon and the file-backed case here; file-backed
+ * hugepages, when configured in, are determined by the global control.
+ * Anon pmd-sized hugepages are determined by the pmd-size control.
+ */
+ if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+ hugepage_global_enabled())
+ return true;
+ if (test_bit(PMD_ORDER, &huge_anon_orders_always))
+ return true;
+ if (test_bit(PMD_ORDER, &huge_anon_orders_madvise))
+ return true;
+ if (test_bit(PMD_ORDER, &huge_anon_orders_inherit) &&
+ hugepage_global_enabled())
+ return true;
+ return false;
+}
+
void __khugepaged_enter(struct mm_struct *mm)
{
struct khugepaged_mm_slot *mm_slot;
@@ -452,7 +469,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
unsigned long vm_flags)
{
if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
- hugepage_flags_enabled()) {
+ hugepage_pmd_enabled()) {
if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS,
PMD_ORDER))
__khugepaged_enter(vma->vm_mm);
@@ -1213,7 +1230,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
spin_lock(pmd_ptl);
BUG_ON(!pmd_none(*pmd));
- folio_add_new_anon_rmap(folio, vma, address);
+ folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, vma);
pgtable_trans_huge_deposit(mm, pmd, pgtable);
set_pmd_at(mm, address, pmd, _pmd);
@@ -2465,8 +2482,7 @@ breakouterloop_mmap_lock:
static int khugepaged_has_work(void)
{
- return !list_empty(&khugepaged_scan.mm_head) &&
- hugepage_flags_enabled();
+ return !list_empty(&khugepaged_scan.mm_head) && hugepage_pmd_enabled();
}
static int khugepaged_wait_event(void)
@@ -2539,7 +2555,7 @@ static void khugepaged_wait_work(void)
return;
}
- if (hugepage_flags_enabled())
+ if (hugepage_pmd_enabled())
wait_event_freezable(khugepaged_wait, khugepaged_wait_event());
}
@@ -2570,7 +2586,7 @@ static void set_recommended_min_free_kbytes(void)
int nr_zones = 0;
unsigned long recommended_min;
- if (!hugepage_flags_enabled()) {
+ if (!hugepage_pmd_enabled()) {
calculate_min_free_kbytes();
goto update_wmarks;
}
@@ -2620,7 +2636,7 @@ int start_stop_khugepaged(void)
int err = 0;
mutex_lock(&khugepaged_mutex);
- if (hugepage_flags_enabled()) {
+ if (hugepage_pmd_enabled()) {
if (!khugepaged_thread)
khugepaged_thread = kthread_run(khugepaged, NULL,
"khugepaged");
@@ -2646,7 +2662,7 @@ fail:
void khugepaged_min_free_kbytes_update(void)
{
mutex_lock(&khugepaged_mutex);
- if (hugepage_flags_enabled() && khugepaged_thread)
+ if (hugepage_pmd_enabled() && khugepaged_thread)
set_recommended_min_free_kbytes();
mutex_unlock(&khugepaged_mutex);
}
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index d5b6fba44fc9..764b08100570 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -657,10 +657,10 @@ static struct kmemleak_object *__alloc_object(gfp_t gfp)
/* task information */
if (in_hardirq()) {
object->pid = 0;
- strncpy(object->comm, "hardirq", sizeof(object->comm));
+ strscpy(object->comm, "hardirq");
} else if (in_serving_softirq()) {
object->pid = 0;
- strncpy(object->comm, "softirq", sizeof(object->comm));
+ strscpy(object->comm, "softirq");
} else {
object->pid = current->pid;
/*
@@ -669,7 +669,7 @@ static struct kmemleak_object *__alloc_object(gfp_t gfp)
* dependency issues with current->alloc_lock. In the worst
* case, the command line is not correct.
*/
- strncpy(object->comm, current->comm, sizeof(object->comm));
+ strscpy(object->comm, current->comm);
}
/* kernel backtrace */
diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
index 95f859e38c53..a495debf1436 100644
--- a/mm/kmsan/core.c
+++ b/mm/kmsan/core.c
@@ -43,7 +43,6 @@ void kmsan_internal_task_create(struct task_struct *task)
struct thread_info *info = current_thread_info();
__memset(ctx, 0, sizeof(*ctx));
- ctx->allow_reporting = true;
kmsan_internal_unpoison_memory(info, sizeof(*info), false);
}
@@ -250,8 +249,8 @@ struct page *kmsan_vmalloc_to_page_or_null(void *vaddr)
return NULL;
}
-void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr,
- int reason)
+void kmsan_internal_check_memory(void *addr, size_t size,
+ const void __user *user_addr, int reason)
{
depot_stack_handle_t cur_origin = 0, new_origin = 0;
unsigned long addr64 = (unsigned long)addr;
diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 22e8657800ef..3ea50f09311f 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -39,12 +39,10 @@ void kmsan_task_create(struct task_struct *task)
void kmsan_task_exit(struct task_struct *task)
{
- struct kmsan_ctx *ctx = &task->kmsan_ctx;
-
if (!kmsan_enabled || kmsan_in_runtime())
return;
- ctx->allow_reporting = false;
+ kmsan_disable_current();
}
void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags)
@@ -76,7 +74,7 @@ void kmsan_slab_free(struct kmem_cache *s, void *object)
return;
/* RCU slabs could be legally used after free within the RCU period */
- if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)))
+ if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU))
return;
/*
* If there's a constructor, freed memory must remain in the same state
@@ -267,7 +265,8 @@ void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy,
return;
ua_flags = user_access_save();
- if ((u64)to < TASK_SIZE) {
+ if (!IS_ENABLED(CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE) ||
+ (u64)to < TASK_SIZE) {
/* This is a user memory access, check it. */
kmsan_internal_check_memory((void *)from, to_copy - left, to,
REASON_COPY_TO_USER);
@@ -304,7 +303,8 @@ void kmsan_handle_urb(const struct urb *urb, bool is_out)
if (is_out)
kmsan_internal_check_memory(urb->transfer_buffer,
urb->transfer_buffer_length,
- /*user_addr*/ 0, REASON_SUBMIT_URB);
+ /*user_addr*/ NULL,
+ REASON_SUBMIT_URB);
else
kmsan_internal_unpoison_memory(urb->transfer_buffer,
urb->transfer_buffer_length,
@@ -317,14 +317,14 @@ static void kmsan_handle_dma_page(const void *addr, size_t size,
{
switch (dir) {
case DMA_BIDIRECTIONAL:
- kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0,
- REASON_ANY);
+ kmsan_internal_check_memory((void *)addr, size,
+ /*user_addr*/ NULL, REASON_ANY);
kmsan_internal_unpoison_memory((void *)addr, size,
/*checked*/ false);
break;
case DMA_TO_DEVICE:
- kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0,
- REASON_ANY);
+ kmsan_internal_check_memory((void *)addr, size,
+ /*user_addr*/ NULL, REASON_ANY);
break;
case DMA_FROM_DEVICE:
kmsan_internal_unpoison_memory((void *)addr, size,
@@ -419,7 +419,21 @@ void kmsan_check_memory(const void *addr, size_t size)
{
if (!kmsan_enabled)
return;
- return kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0,
- REASON_ANY);
+ return kmsan_internal_check_memory((void *)addr, size,
+ /*user_addr*/ NULL, REASON_ANY);
}
EXPORT_SYMBOL(kmsan_check_memory);
+
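+/*
+ * kmsan_ctx.depth acts as a per-task nesting counter: while it is non-zero,
+ * kmsan_report() returns early, so reports from the current task are
+ * suppressed between kmsan_disable_current() and kmsan_enable_current().
+ */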
+void kmsan_enable_current(void)
+{
+ KMSAN_WARN_ON(current->kmsan_ctx.depth == 0);
+ current->kmsan_ctx.depth--;
+}
+EXPORT_SYMBOL(kmsan_enable_current);
+
+void kmsan_disable_current(void)
+{
+ current->kmsan_ctx.depth++;
+ KMSAN_WARN_ON(current->kmsan_ctx.depth == 0);
+}
+EXPORT_SYMBOL(kmsan_disable_current);
diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
index 3ac3b8921d36..10f52c085e6c 100644
--- a/mm/kmsan/init.c
+++ b/mm/kmsan/init.c
@@ -33,7 +33,10 @@ static void __init kmsan_record_future_shadow_range(void *start, void *end)
bool merged = false;
KMSAN_WARN_ON(future_index == NUM_FUTURE_RANGES);
- KMSAN_WARN_ON((nstart >= nend) || !nstart || !nend);
+ KMSAN_WARN_ON((nstart >= nend) ||
+ /* Virtual address 0 is valid on s390. */
+ (!IS_ENABLED(CONFIG_S390) && !nstart) ||
+ !nend);
nstart = ALIGN_DOWN(nstart, PAGE_SIZE);
nend = ALIGN(nend, PAGE_SIZE);
@@ -72,7 +75,7 @@ static void __init kmsan_record_future_shadow_range(void *start, void *end)
*/
void __init kmsan_init_shadow(void)
{
- const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE);
+ const size_t nd_size = sizeof(pg_data_t);
phys_addr_t p_start, p_end;
u64 loop;
int nid;
@@ -172,7 +175,7 @@ static void do_collection(void)
shadow = smallstack_pop(&collect);
origin = smallstack_pop(&collect);
kmsan_setup_meta(page, shadow, origin, collect.order);
- __free_pages_core(page, collect.order);
+ __free_pages_core(page, collect.order, MEMINIT_EARLY);
}
}
diff --git a/mm/kmsan/instrumentation.c b/mm/kmsan/instrumentation.c
index cc3907a9c33a..02a405e55d6c 100644
--- a/mm/kmsan/instrumentation.c
+++ b/mm/kmsan/instrumentation.c
@@ -14,13 +14,15 @@
#include "kmsan.h"
#include <linux/gfp.h>
+#include <linux/kmsan.h>
#include <linux/kmsan_string.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
static inline bool is_bad_asm_addr(void *addr, uintptr_t size, bool is_store)
{
- if ((u64)addr < TASK_SIZE)
+ if (IS_ENABLED(CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE) &&
+ (u64)addr < TASK_SIZE)
return true;
if (!kmsan_get_metadata(addr, KMSAN_META_SHADOW))
return true;
@@ -110,11 +112,10 @@ void __msan_instrument_asm_store(void *addr, uintptr_t size)
ua_flags = user_access_save();
/*
- * Most of the accesses are below 32 bytes. The two exceptions so far
- * are clwb() (64 bytes) and FPU state (512 bytes).
- * It's unlikely that the assembly will touch more than 512 bytes.
+ * Most of the accesses are below 32 bytes. The exceptions so far are
+ * clwb() (64 bytes), FPU state (512 bytes) and chsc() (4096 bytes).
*/
- if (size > 512) {
+ if (size > 4096) {
WARN_ONCE(1, "assembly store size too big: %ld\n", size);
size = 8;
}
@@ -314,8 +315,8 @@ void __msan_warning(u32 origin)
if (!kmsan_enabled || kmsan_in_runtime())
return;
kmsan_enter_runtime();
- kmsan_report(origin, /*address*/ 0, /*size*/ 0,
- /*off_first*/ 0, /*off_last*/ 0, /*user_addr*/ 0,
+ kmsan_report(origin, /*address*/ NULL, /*size*/ 0,
+ /*off_first*/ 0, /*off_last*/ 0, /*user_addr*/ NULL,
REASON_ANY);
kmsan_leave_runtime();
}
diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h
index a14744205435..29555a8bc315 100644
--- a/mm/kmsan/kmsan.h
+++ b/mm/kmsan/kmsan.h
@@ -10,14 +10,15 @@
#ifndef __MM_KMSAN_KMSAN_H
#define __MM_KMSAN_KMSAN_H
-#include <asm/pgtable_64_types.h>
#include <linux/irqflags.h>
+#include <linux/kmsan.h>
+#include <linux/mm.h>
+#include <linux/nmi.h>
+#include <linux/pgtable.h>
+#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>
-#include <linux/nmi.h>
-#include <linux/mm.h>
-#include <linux/printk.h>
#define KMSAN_ALLOCA_MAGIC_ORIGIN 0xabcd0100
#define KMSAN_CHAIN_MAGIC_ORIGIN 0xabcd0200
@@ -34,29 +35,6 @@
#define KMSAN_META_SHADOW (false)
#define KMSAN_META_ORIGIN (true)
-extern bool kmsan_enabled;
-extern int panic_on_kmsan;
-
-/*
- * KMSAN performs a lot of consistency checks that are currently enabled by
- * default. BUG_ON is normally discouraged in the kernel, unless used for
- * debugging, but KMSAN itself is a debugging tool, so it makes little sense to
- * recover if something goes wrong.
- */
-#define KMSAN_WARN_ON(cond) \
- ({ \
- const bool __cond = WARN_ON(cond); \
- if (unlikely(__cond)) { \
- WRITE_ONCE(kmsan_enabled, false); \
- if (panic_on_kmsan) { \
- /* Can't call panic() here because */ \
- /* of uaccess checks. */ \
- BUG(); \
- } \
- } \
- __cond; \
- })
-
/*
* A pair of metadata pointers to be returned by the instrumentation functions.
*/
@@ -66,7 +44,6 @@ struct shadow_origin_ptr {
struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size,
bool store);
-void *kmsan_get_metadata(void *addr, bool is_origin);
void __init kmsan_init_alloc_meta_for_range(void *start, void *end);
enum kmsan_bug_reason {
@@ -96,7 +73,7 @@ void kmsan_print_origin(depot_stack_handle_t origin);
* @off_last corresponding to different @origin values.
*/
void kmsan_report(depot_stack_handle_t origin, void *address, int size,
- int off_first, int off_last, const void *user_addr,
+ int off_first, int off_last, const void __user *user_addr,
enum kmsan_bug_reason reason);
DECLARE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx);
@@ -186,8 +163,8 @@ depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id);
void kmsan_internal_task_create(struct task_struct *task);
bool kmsan_metadata_is_contiguous(void *addr, size_t size);
-void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr,
- int reason);
+void kmsan_internal_check_memory(void *addr, size_t size,
+ const void __user *user_addr, int reason);
struct page *kmsan_vmalloc_to_page_or_null(void *vaddr);
void kmsan_setup_meta(struct page *page, struct page *shadow,
diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c
index 07d3a3a5a9c5..13236d579eba 100644
--- a/mm/kmsan/kmsan_test.c
+++ b/mm/kmsan/kmsan_test.c
@@ -614,6 +614,32 @@ static void test_stackdepot_roundtrip(struct kunit *test)
KUNIT_EXPECT_TRUE(test, report_matches(&expect));
}
+/*
+ * Test case: ensure that kmsan_unpoison_memory() and the instrumentation work
+ * the same.
+ */
+static void test_unpoison_memory(struct kunit *test)
+{
+ EXPECTATION_UNINIT_VALUE_FN(expect, "test_unpoison_memory");
+ volatile char a[4], b[4];
+
+ kunit_info(
+ test,
+ "unpoisoning via the instrumentation vs. kmsan_unpoison_memory() (2 UMR reports)\n");
+
+ /* Initialize a[0] and check a[1]--a[3]. */
+ a[0] = 0;
+ kmsan_check_memory((char *)&a[1], 3);
+ KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+
+ report_reset();
+
+ /* Initialize b[0] and check b[1]--b[3]. */
+ kmsan_unpoison_memory((char *)&b[0], 1);
+ kmsan_check_memory((char *)&b[1], 3);
+ KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
static struct kunit_case kmsan_test_cases[] = {
KUNIT_CASE(test_uninit_kmalloc),
KUNIT_CASE(test_init_kmalloc),
@@ -637,6 +663,7 @@ static struct kunit_case kmsan_test_cases[] = {
KUNIT_CASE(test_memset64),
KUNIT_CASE(test_long_origin_chain),
KUNIT_CASE(test_stackdepot_roundtrip),
+ KUNIT_CASE(test_unpoison_memory),
{},
};
@@ -659,9 +686,13 @@ static void test_exit(struct kunit *test)
{
}
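+/*
+ * Saved panic_on_kmsan value: the suite runs with panicking disabled and
+ * restores the original setting in kmsan_suite_exit().
+ */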
+static int orig_panic_on_kmsan;
+
static int kmsan_suite_init(struct kunit_suite *suite)
{
register_trace_console(probe_console, NULL);
+ orig_panic_on_kmsan = panic_on_kmsan;
+ panic_on_kmsan = 0;
return 0;
}
@@ -669,6 +700,7 @@ static void kmsan_suite_exit(struct kunit_suite *suite)
{
unregister_trace_console(probe_console, NULL);
tracepoint_synchronize_unregister();
+ panic_on_kmsan = orig_panic_on_kmsan;
}
static struct kunit_suite kmsan_test_suite = {
diff --git a/mm/kmsan/report.c b/mm/kmsan/report.c
index 02736ec757f2..94a3303fb65e 100644
--- a/mm/kmsan/report.c
+++ b/mm/kmsan/report.c
@@ -8,6 +8,7 @@
*/
#include <linux/console.h>
+#include <linux/kmsan.h>
#include <linux/moduleparam.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>
@@ -20,6 +21,7 @@ static DEFINE_RAW_SPINLOCK(kmsan_report_lock);
/* Protected by kmsan_report_lock */
static char report_local_descr[DESCR_SIZE];
int panic_on_kmsan __read_mostly;
+EXPORT_SYMBOL_GPL(panic_on_kmsan);
#ifdef MODULE_PARAM_PREFIX
#undef MODULE_PARAM_PREFIX
@@ -146,7 +148,7 @@ void kmsan_print_origin(depot_stack_handle_t origin)
}
void kmsan_report(depot_stack_handle_t origin, void *address, int size,
- int off_first, int off_last, const void *user_addr,
+ int off_first, int off_last, const void __user *user_addr,
enum kmsan_bug_reason reason)
{
unsigned long stack_entries[KMSAN_STACK_DEPTH];
@@ -157,12 +159,12 @@ void kmsan_report(depot_stack_handle_t origin, void *address, int size,
if (!kmsan_enabled)
return;
- if (!current->kmsan_ctx.allow_reporting)
+ if (current->kmsan_ctx.depth)
return;
if (!origin)
return;
- current->kmsan_ctx.allow_reporting = false;
+ kmsan_disable_current();
ua_flags = user_access_save();
raw_spin_lock(&kmsan_report_lock);
pr_err("=====================================================\n");
@@ -215,5 +217,5 @@ void kmsan_report(depot_stack_handle_t origin, void *address, int size,
if (panic_on_kmsan)
panic("kmsan.panic set ...\n");
user_access_restore(ua_flags);
- current->kmsan_ctx.allow_reporting = true;
+ kmsan_enable_current();
}
diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c
index b9d05aff313e..9c58f081d84f 100644
--- a/mm/kmsan/shadow.c
+++ b/mm/kmsan/shadow.c
@@ -123,14 +123,12 @@ return_dummy:
*/
void *kmsan_get_metadata(void *address, bool is_origin)
{
- u64 addr = (u64)address, pad, off;
+ u64 addr = (u64)address, off;
struct page *page;
void *ret;
- if (is_origin && !IS_ALIGNED(addr, KMSAN_ORIGIN_SIZE)) {
- pad = addr % KMSAN_ORIGIN_SIZE;
- addr -= pad;
- }
+ if (is_origin)
+ addr = ALIGN_DOWN(addr, KMSAN_ORIGIN_SIZE);
address = (void *)addr;
if (kmsan_internal_is_vmalloc_addr(address) ||
kmsan_internal_is_module_addr(address))
@@ -243,7 +241,6 @@ int kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end,
s_pages[i] = shadow_page_for(pages[i]);
o_pages[i] = origin_page_for(pages[i]);
}
- prot = __pgprot(pgprot_val(prot) | _PAGE_NX);
prot = PAGE_KERNEL;
origin_start = vmalloc_meta((void *)start, KMSAN_META_ORIGIN);
diff --git a/mm/ksm.c b/mm/ksm.c
index 34c4820e0d3d..df6bae3a5a2c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -488,21 +488,17 @@ static DECLARE_WAIT_QUEUE_HEAD(ksm_iter_wait);
static DEFINE_MUTEX(ksm_thread_mutex);
static DEFINE_SPINLOCK(ksm_mmlist_lock);
-#define KSM_KMEM_CACHE(__struct, __flags) kmem_cache_create(#__struct,\
- sizeof(struct __struct), __alignof__(struct __struct),\
- (__flags), NULL)
-
static int __init ksm_slab_init(void)
{
- rmap_item_cache = KSM_KMEM_CACHE(ksm_rmap_item, 0);
+ rmap_item_cache = KMEM_CACHE(ksm_rmap_item, 0);
if (!rmap_item_cache)
goto out;
- stable_node_cache = KSM_KMEM_CACHE(ksm_stable_node, 0);
+ stable_node_cache = KMEM_CACHE(ksm_stable_node, 0);
if (!stable_node_cache)
goto out_free1;
- mm_slot_cache = KSM_KMEM_CACHE(ksm_mm_slot, 0);
+ mm_slot_cache = KMEM_CACHE(ksm_mm_slot, 0);
if (!mm_slot_cache)
goto out_free2;
@@ -1532,6 +1528,44 @@ out:
}
/*
+ * This function returns 0 if the pages were merged or if they are
+ * no longer merging candidates (e.g., the VMA is stale), and -EFAULT otherwise.
+ */
+static int try_to_merge_with_zero_page(struct ksm_rmap_item *rmap_item,
+ struct page *page)
+{
+ struct mm_struct *mm = rmap_item->mm;
+ int err = -EFAULT;
+
+ /*
+ * Same checksum as an empty page. We attempt to merge it with the
+ * appropriate zero page if the user enabled this via sysfs.
+ */
+ if (ksm_use_zero_pages && (rmap_item->oldchecksum == zero_checksum)) {
+ struct vm_area_struct *vma;
+
+ mmap_read_lock(mm);
+ vma = find_mergeable_vma(mm, rmap_item->address);
+ if (vma) {
+ err = try_to_merge_one_page(vma, page,
+ ZERO_PAGE(rmap_item->address));
+ trace_ksm_merge_one_page(
+ page_to_pfn(ZERO_PAGE(rmap_item->address)),
+ rmap_item, mm, err);
+ } else {
+ /*
+ * If the vma is out of date, we do not need to
+ * continue.
+ */
+ err = 0;
+ }
+ mmap_read_unlock(mm);
+ }
+
+ return err;
+}
+
+/*
* try_to_merge_with_ksm_page - like try_to_merge_two_pages,
* but no new kernel page is allocated: kpage must already be a ksm page.
*
@@ -1625,7 +1659,6 @@ static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node;
struct hlist_node *hlist_safe;
struct folio *folio, *tree_folio = NULL;
- int nr = 0;
int found_rmap_hlist_len;
if (!prune_stale_stable_nodes ||
@@ -1652,33 +1685,26 @@ static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
folio = ksm_get_folio(dup, KSM_GET_FOLIO_NOLOCK);
if (!folio)
continue;
- nr += 1;
- if (is_page_sharing_candidate(dup)) {
- if (!found ||
- dup->rmap_hlist_len > found_rmap_hlist_len) {
- if (found)
- folio_put(tree_folio);
- found = dup;
- found_rmap_hlist_len = found->rmap_hlist_len;
- tree_folio = folio;
-
- /* skip put_page for found dup */
- if (!prune_stale_stable_nodes)
- break;
- continue;
- }
+ /* Pick the best candidate if possible. */
+ if (!found || (is_page_sharing_candidate(dup) &&
+ (!is_page_sharing_candidate(found) ||
+ dup->rmap_hlist_len > found_rmap_hlist_len))) {
+ if (found)
+ folio_put(tree_folio);
+ found = dup;
+ found_rmap_hlist_len = found->rmap_hlist_len;
+ tree_folio = folio;
+ /* skip put_page for found candidate */
+ if (!prune_stale_stable_nodes &&
+ is_page_sharing_candidate(found))
+ break;
+ continue;
}
folio_put(folio);
}
if (found) {
- /*
- * nr is counting all dups in the chain only if
- * prune_stale_stable_nodes is true, otherwise we may
- * break the loop at nr == 1 even if there are
- * multiple entries.
- */
- if (prune_stale_stable_nodes && nr == 1) {
+ if (hlist_is_singular_node(&found->hlist_dup, &stable_node->hlist)) {
/*
* If there's not just one entry it would
* corrupt memory, better BUG_ON. In KSM
@@ -1730,25 +1756,15 @@ static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
hlist_add_head(&found->hlist_dup,
&stable_node->hlist);
}
+ } else {
+ /* Its hlist must be empty if no candidate was found. */
+ free_stable_node_chain(stable_node, root);
}
*_stable_node_dup = found;
return tree_folio;
}
-static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stable_node,
- struct rb_root *root)
-{
- if (!is_stable_node_chain(stable_node))
- return stable_node;
- if (hlist_empty(&stable_node->hlist)) {
- free_stable_node_chain(stable_node, root);
- return NULL;
- }
- return hlist_entry(stable_node->hlist.first,
- typeof(*stable_node), hlist_dup);
-}
-
/*
* Like for ksm_get_folio, this function can free the *_stable_node and
* *_stable_node_dup if the returned tree_page is NULL.
@@ -1769,17 +1785,10 @@ static struct folio *__stable_node_chain(struct ksm_stable_node **_stable_node_d
bool prune_stale_stable_nodes)
{
struct ksm_stable_node *stable_node = *_stable_node;
+
if (!is_stable_node_chain(stable_node)) {
- if (is_page_sharing_candidate(stable_node)) {
- *_stable_node_dup = stable_node;
- return ksm_get_folio(stable_node, KSM_GET_FOLIO_NOLOCK);
- }
- /*
- * _stable_node_dup set to NULL means the stable_node
- * reached the ksm_max_page_sharing limit.
- */
- *_stable_node_dup = NULL;
- return NULL;
+ *_stable_node_dup = stable_node;
+ return ksm_get_folio(stable_node, KSM_GET_FOLIO_NOLOCK);
}
return stable_node_dup(_stable_node_dup, _stable_node, root,
prune_stale_stable_nodes);
@@ -1793,16 +1802,10 @@ static __always_inline struct folio *chain_prune(struct ksm_stable_node **s_n_d,
}
static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d,
- struct ksm_stable_node *s_n,
+ struct ksm_stable_node **s_n,
struct rb_root *root)
{
- struct ksm_stable_node *old_stable_node = s_n;
- struct folio *tree_folio;
-
- tree_folio = __stable_node_chain(s_n_d, &s_n, root, false);
- /* not pruning dups so s_n cannot have changed */
- VM_BUG_ON(s_n != old_stable_node);
- return tree_folio;
+ return __stable_node_chain(s_n_d, s_n, root, false);
}
/*
@@ -1820,7 +1823,7 @@ static struct page *stable_tree_search(struct page *page)
struct rb_root *root;
struct rb_node **new;
struct rb_node *parent;
- struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
+ struct ksm_stable_node *stable_node, *stable_node_dup;
struct ksm_stable_node *page_node;
struct folio *folio;
@@ -1844,45 +1847,7 @@ again:
cond_resched();
stable_node = rb_entry(*new, struct ksm_stable_node, node);
- stable_node_any = NULL;
tree_folio = chain_prune(&stable_node_dup, &stable_node, root);
- /*
- * NOTE: stable_node may have been freed by
- * chain_prune() if the returned stable_node_dup is
- * not NULL. stable_node_dup may have been inserted in
- * the rbtree instead as a regular stable_node (in
- * order to collapse the stable_node chain if a single
- * stable_node dup was found in it). In such case the
- * stable_node is overwritten by the callee to point
- * to the stable_node_dup that was collapsed in the
- * stable rbtree and stable_node will be equal to
- * stable_node_dup like if the chain never existed.
- */
- if (!stable_node_dup) {
- /*
- * Either all stable_node dups were full in
- * this stable_node chain, or this chain was
- * empty and should be rb_erased.
- */
- stable_node_any = stable_node_dup_any(stable_node,
- root);
- if (!stable_node_any) {
- /* rb_erase just run */
- goto again;
- }
- /*
- * Take any of the stable_node dups page of
- * this stable_node chain to let the tree walk
- * continue. All KSM pages belonging to the
- * stable_node dups in a stable_node chain
- * have the same content and they're
- * write protected at all times. Any will work
- * fine to continue the walk.
- */
- tree_folio = ksm_get_folio(stable_node_any,
- KSM_GET_FOLIO_NOLOCK);
- }
- VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
if (!tree_folio) {
/*
* If we walked over a stale stable_node,
@@ -1920,7 +1885,7 @@ again:
goto chain_append;
}
- if (!stable_node_dup) {
+ if (!is_page_sharing_candidate(stable_node_dup)) {
/*
* If the stable_node is a chain and
* we got a payload match in memcmp
@@ -2029,9 +1994,6 @@ replace:
return &folio->page;
chain_append:
- /* stable_node_dup could be null if it reached the limit */
- if (!stable_node_dup)
- stable_node_dup = stable_node_any;
/*
* If stable_node was a chain and chain_prune collapsed it,
* stable_node has been updated to be the new regular
@@ -2076,7 +2038,7 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
struct rb_root *root;
struct rb_node **new;
struct rb_node *parent;
- struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
+ struct ksm_stable_node *stable_node, *stable_node_dup;
bool need_chain = false;
kpfn = folio_pfn(kfolio);
@@ -2092,33 +2054,7 @@ again:
cond_resched();
stable_node = rb_entry(*new, struct ksm_stable_node, node);
- stable_node_any = NULL;
- tree_folio = chain(&stable_node_dup, stable_node, root);
- if (!stable_node_dup) {
- /*
- * Either all stable_node dups were full in
- * this stable_node chain, or this chain was
- * empty and should be rb_erased.
- */
- stable_node_any = stable_node_dup_any(stable_node,
- root);
- if (!stable_node_any) {
- /* rb_erase just run */
- goto again;
- }
- /*
- * Take any of the stable_node dups page of
- * this stable_node chain to let the tree walk
- * continue. All KSM pages belonging to the
- * stable_node dups in a stable_node chain
- * have the same content and they're
- * write protected at all times. Any will work
- * fine to continue the walk.
- */
- tree_folio = ksm_get_folio(stable_node_any,
- KSM_GET_FOLIO_NOLOCK);
- }
- VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
+ tree_folio = chain(&stable_node_dup, &stable_node, root);
if (!tree_folio) {
/*
* If we walked over a stale stable_node,
@@ -2306,7 +2242,6 @@ static void stable_tree_append(struct ksm_rmap_item *rmap_item,
*/
static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_item)
{
- struct mm_struct *mm = rmap_item->mm;
struct ksm_rmap_item *tree_rmap_item;
struct page *tree_page = NULL;
struct ksm_stable_node *stable_node;
@@ -2333,6 +2268,23 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
*/
if (!is_page_sharing_candidate(stable_node))
max_page_sharing_bypass = true;
+ } else {
+ remove_rmap_item_from_tree(rmap_item);
+
+ /*
+ * If the hash value of the page has changed from the last time
+ * we calculated it, this page is changing frequently: therefore we
+ * don't want to insert it in the unstable tree, and we don't want
+ * to waste our time searching for something identical to it there.
+ */
+ checksum = calc_checksum(page);
+ if (rmap_item->oldchecksum != checksum) {
+ rmap_item->oldchecksum = checksum;
+ return;
+ }
+
+ if (!try_to_merge_with_zero_page(rmap_item, page))
+ return;
}
/* We first start with searching the page inside the stable tree */
@@ -2363,48 +2315,6 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
return;
}
- /*
- * If the hash value of the page has changed from the last time
- * we calculated it, this page is changing frequently: therefore we
- * don't want to insert it in the unstable tree, and we don't want
- * to waste our time searching for something identical to it there.
- */
- checksum = calc_checksum(page);
- if (rmap_item->oldchecksum != checksum) {
- rmap_item->oldchecksum = checksum;
- return;
- }
-
- /*
- * Same checksum as an empty page. We attempt to merge it with the
- * appropriate zero page if the user enabled this via sysfs.
- */
- if (ksm_use_zero_pages && (checksum == zero_checksum)) {
- struct vm_area_struct *vma;
-
- mmap_read_lock(mm);
- vma = find_mergeable_vma(mm, rmap_item->address);
- if (vma) {
- err = try_to_merge_one_page(vma, page,
- ZERO_PAGE(rmap_item->address));
- trace_ksm_merge_one_page(
- page_to_pfn(ZERO_PAGE(rmap_item->address)),
- rmap_item, mm, err);
- } else {
- /*
- * If the vma is out of date, we do not need to
- * continue.
- */
- err = 0;
- }
- mmap_read_unlock(mm);
- /*
- * In case of failure, the page was not really empty, so we
- * need to continue. Otherwise we're done.
- */
- if (!err)
- return;
- }
tree_rmap_item =
unstable_tree_search_insert(rmap_item, page, &tree_page);
if (tree_rmap_item) {
@@ -3088,7 +2998,6 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
if (copy_mc_user_highpage(folio_page(new_folio, 0), page,
addr, vma)) {
folio_put(new_folio);
- memory_failure_queue(folio_pfn(folio), 0);
return ERR_PTR(-EHWPOISON);
}
folio_set_dirty(new_folio);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 3fd64736bc45..a29d96929d7c 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -15,7 +15,7 @@
#include "slab.h"
#include "internal.h"
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
static LIST_HEAD(memcg_list_lrus);
static DEFINE_MUTEX(list_lrus_mutex);
@@ -83,7 +83,7 @@ list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
{
return &lru->node[nid].lru;
}
-#endif /* CONFIG_MEMCG_KMEM */
+#endif /* CONFIG_MEMCG */
bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
struct mem_cgroup *memcg)
@@ -294,7 +294,7 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid,
isolated += list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
nr_to_walk);
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) {
struct list_lru_memcg *mlru;
unsigned long index;
@@ -324,7 +324,7 @@ static void init_one_lru(struct list_lru_one *l)
l->nr_items = 0;
}
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
static struct list_lru_memcg *memcg_init_list_lru_one(gfp_t gfp)
{
int nid;
@@ -544,14 +544,14 @@ static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
static void memcg_destroy_list_lru(struct list_lru *lru)
{
}
-#endif /* CONFIG_MEMCG_KMEM */
+#endif /* CONFIG_MEMCG */
int __list_lru_init(struct list_lru *lru, bool memcg_aware,
struct lock_class_key *key, struct shrinker *shrinker)
{
int i;
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
if (shrinker)
lru->shrinker_id = shrinker->id;
else
@@ -591,7 +591,7 @@ void list_lru_destroy(struct list_lru *lru)
kfree(lru->node);
lru->node = NULL;
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
lru->shrinker_id = -1;
#endif
}
diff --git a/mm/madvise.c b/mm/madvise.c
index a77893462b92..96c026fe0c99 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1147,7 +1147,7 @@ static int madvise_inject_error(int behavior,
} else {
pr_info("Injecting memory failure for pfn %#lx at process virtual address %#lx\n",
pfn, start);
- ret = memory_failure(pfn, MF_COUNT_INCREASED | MF_SW_SIMULATED);
+ ret = memory_failure(pfn, MF_ACTION_REQUIRED | MF_COUNT_INCREASED | MF_SW_SIMULATED);
if (ret == -EOPNOTSUPP)
ret = 0;
}
diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
new file mode 100644
index 000000000000..2aeea4d8bf8e
--- /dev/null
+++ b/mm/memcontrol-v1.c
@@ -0,0 +1,2969 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <linux/memcontrol.h>
+#include <linux/swap.h>
+#include <linux/mm_inline.h>
+#include <linux/pagewalk.h>
+#include <linux/backing-dev.h>
+#include <linux/swap_cgroup.h>
+#include <linux/eventfd.h>
+#include <linux/poll.h>
+#include <linux/sort.h>
+#include <linux/file.h>
+#include <linux/seq_buf.h>
+
+#include "internal.h"
+#include "swap.h"
+#include "memcontrol-v1.h"
+
+/*
+ * Cgroups above their limits are maintained in an RB-tree, independent of
+ * their hierarchy representation.
+ */
+
+struct mem_cgroup_tree_per_node {
+ struct rb_root rb_root;
+ struct rb_node *rb_rightmost;
+ spinlock_t lock;
+};
+
+struct mem_cgroup_tree {
+ struct mem_cgroup_tree_per_node *rb_tree_per_node[MAX_NUMNODES];
+};
+
+static struct mem_cgroup_tree soft_limit_tree __read_mostly;
+
+/*
+ * Maximum loops in mem_cgroup_soft_reclaim(), used for soft
+ * limit reclaim to prevent infinite loops, if they ever occur.
+ */
+#define MEM_CGROUP_MAX_RECLAIM_LOOPS 100
+#define MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS 2
+
+/* Stuff for move charges at task migration. */
+/*
+ * Types of charges to be moved.
+ */
+#define MOVE_ANON 0x1ULL
+#define MOVE_FILE 0x2ULL
+#define MOVE_MASK (MOVE_ANON | MOVE_FILE)
+
+/* "mc" and its members are protected by cgroup_mutex */
+static struct move_charge_struct {
+ spinlock_t lock; /* for from, to */
+ struct mm_struct *mm;
+ struct mem_cgroup *from;
+ struct mem_cgroup *to;
+ unsigned long flags;
+ unsigned long precharge;
+ unsigned long moved_charge;
+ unsigned long moved_swap;
+ struct task_struct *moving_task; /* a task moving charges */
+ wait_queue_head_t waitq; /* a waitq for other context */
+} mc = {
+ .lock = __SPIN_LOCK_UNLOCKED(mc.lock),
+ .waitq = __WAIT_QUEUE_HEAD_INITIALIZER(mc.waitq),
+};
+
+/* for OOM */
+struct mem_cgroup_eventfd_list {
+ struct list_head list;
+ struct eventfd_ctx *eventfd;
+};
+
+/*
+ * cgroup_event represents events which userspace want to receive.
+ */
+struct mem_cgroup_event {
+ /*
+ * memcg which the event belongs to.
+ */
+ struct mem_cgroup *memcg;
+ /*
+ * eventfd to signal userspace about the event.
+ */
+ struct eventfd_ctx *eventfd;
+ /*
+ * Each of these is stored in a list by the cgroup.
+ */
+ struct list_head list;
+ /*
+ * register_event() callback will be used to add a new userspace
+ * waiter for changes related to this event. Use eventfd_signal()
+ * on eventfd to send notification to userspace.
+ */
+ int (*register_event)(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd, const char *args);
+ /*
+ * unregister_event() callback will be called when userspace closes
+ * the eventfd or on cgroup removal. This callback must be set
+ * if you want to provide notification functionality.
+ */
+ void (*unregister_event)(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd);
+ /*
+ * All fields below needed to unregister event when
+ * userspace closes eventfd.
+ */
+ poll_table pt;
+ wait_queue_head_t *wqh;
+ wait_queue_entry_t wait;
+ struct work_struct remove;
+};
+
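+/*
+ * cftype->private packs a type in the high 16 bits and an attribute
+ * (RES_*) in the low 16 bits.
+ */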
+#define MEMFILE_PRIVATE(x, val) ((x) << 16 | (val))
+#define MEMFILE_TYPE(val) ((val) >> 16 & 0xffff)
+#define MEMFILE_ATTR(val) ((val) & 0xffff)
+
+enum {
+ RES_USAGE,
+ RES_LIMIT,
+ RES_MAX_USAGE,
+ RES_FAILCNT,
+ RES_SOFT_LIMIT,
+};
+
+#ifdef CONFIG_LOCKDEP
+static struct lockdep_map memcg_oom_lock_dep_map = {
+ .name = "memcg_oom_lock",
+};
+#endif
+
+DEFINE_SPINLOCK(memcg_oom_lock);
+
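+/*
+ * Link @mz into the per-node soft-limit tree, ordered by usage_in_excess,
+ * and remember the rightmost (largest excess) node for fast lookup.
+ */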
+static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_node *mz,
+ struct mem_cgroup_tree_per_node *mctz,
+ unsigned long new_usage_in_excess)
+{
+ struct rb_node **p = &mctz->rb_root.rb_node;
+ struct rb_node *parent = NULL;
+ struct mem_cgroup_per_node *mz_node;
+ bool rightmost = true;
+
+ if (mz->on_tree)
+ return;
+
+ mz->usage_in_excess = new_usage_in_excess;
+ if (!mz->usage_in_excess)
+ return;
+ while (*p) {
+ parent = *p;
+ mz_node = rb_entry(parent, struct mem_cgroup_per_node,
+ tree_node);
+ if (mz->usage_in_excess < mz_node->usage_in_excess) {
+ p = &(*p)->rb_left;
+ rightmost = false;
+ } else {
+ p = &(*p)->rb_right;
+ }
+ }
+
+ if (rightmost)
+ mctz->rb_rightmost = &mz->tree_node;
+
+ rb_link_node(&mz->tree_node, parent, p);
+ rb_insert_color(&mz->tree_node, &mctz->rb_root);
+ mz->on_tree = true;
+}
+
+static void __mem_cgroup_remove_exceeded(struct mem_cgroup_per_node *mz,
+ struct mem_cgroup_tree_per_node *mctz)
+{
+ if (!mz->on_tree)
+ return;
+
+ if (&mz->tree_node == mctz->rb_rightmost)
+ mctz->rb_rightmost = rb_prev(&mz->tree_node);
+
+ rb_erase(&mz->tree_node, &mctz->rb_root);
+ mz->on_tree = false;
+}
+
+static void mem_cgroup_remove_exceeded(struct mem_cgroup_per_node *mz,
+ struct mem_cgroup_tree_per_node *mctz)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&mctz->lock, flags);
+ __mem_cgroup_remove_exceeded(mz, mctz);
+ spin_unlock_irqrestore(&mctz->lock, flags);
+}
+
+static unsigned long soft_limit_excess(struct mem_cgroup *memcg)
+{
+ unsigned long nr_pages = page_counter_read(&memcg->memory);
+ unsigned long soft_limit = READ_ONCE(memcg->soft_limit);
+ unsigned long excess = 0;
+
+ if (nr_pages > soft_limit)
+ excess = nr_pages - soft_limit;
+
+ return excess;
+}
+
+static void memcg1_update_tree(struct mem_cgroup *memcg, int nid)
+{
+ unsigned long excess;
+ struct mem_cgroup_per_node *mz;
+ struct mem_cgroup_tree_per_node *mctz;
+
+ if (lru_gen_enabled()) {
+ if (soft_limit_excess(memcg))
+ lru_gen_soft_reclaim(memcg, nid);
+ return;
+ }
+
+ mctz = soft_limit_tree.rb_tree_per_node[nid];
+ if (!mctz)
+ return;
+ /*
+ * Necessary to update all ancestors when hierarchy is used,
+ * because their event counter is not touched.
+ */
+ for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+ mz = memcg->nodeinfo[nid];
+ excess = soft_limit_excess(memcg);
+ /*
+ * We have to update the tree if mz is on the RB-tree or
+ * mem is over its soft limit.
+ */
+ if (excess || mz->on_tree) {
+ unsigned long flags;
+
+ spin_lock_irqsave(&mctz->lock, flags);
+ /* if on-tree, remove it */
+ if (mz->on_tree)
+ __mem_cgroup_remove_exceeded(mz, mctz);
+ /*
+ * Insert again. mz->usage_in_excess will be updated.
+ * If excess is 0, no tree ops.
+ */
+ __mem_cgroup_insert_exceeded(mz, mctz, excess);
+ spin_unlock_irqrestore(&mctz->lock, flags);
+ }
+ }
+}
+
+void memcg1_remove_from_trees(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup_tree_per_node *mctz;
+ struct mem_cgroup_per_node *mz;
+ int nid;
+
+ for_each_node(nid) {
+ mz = memcg->nodeinfo[nid];
+ mctz = soft_limit_tree.rb_tree_per_node[nid];
+ if (mctz)
+ mem_cgroup_remove_exceeded(mz, mctz);
+ }
+}
+
+static struct mem_cgroup_per_node *
+__mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
+{
+ struct mem_cgroup_per_node *mz;
+
+retry:
+ mz = NULL;
+ if (!mctz->rb_rightmost)
+ goto done; /* Nothing to reclaim from */
+
+ mz = rb_entry(mctz->rb_rightmost,
+ struct mem_cgroup_per_node, tree_node);
+ /*
+ * Remove the node now, but someone else can add it back;
+ * we will add it back at the end of reclaim to its correct
+ * position in the tree.
+ */
+ __mem_cgroup_remove_exceeded(mz, mctz);
+ if (!soft_limit_excess(mz->memcg) ||
+ !css_tryget(&mz->memcg->css))
+ goto retry;
+done:
+ return mz;
+}
+
+static struct mem_cgroup_per_node *
+mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
+{
+ struct mem_cgroup_per_node *mz;
+
+ spin_lock_irq(&mctz->lock);
+ mz = __mem_cgroup_largest_soft_limit_node(mctz);
+ spin_unlock_irq(&mctz->lock);
+ return mz;
+}
+
+static int mem_cgroup_soft_reclaim(struct mem_cgroup *root_memcg,
+ pg_data_t *pgdat,
+ gfp_t gfp_mask,
+ unsigned long *total_scanned)
+{
+ struct mem_cgroup *victim = NULL;
+ int total = 0;
+ int loop = 0;
+ unsigned long excess;
+ unsigned long nr_scanned;
+ struct mem_cgroup_reclaim_cookie reclaim = {
+ .pgdat = pgdat,
+ };
+
+ excess = soft_limit_excess(root_memcg);
+
+ while (1) {
+ victim = mem_cgroup_iter(root_memcg, victim, &reclaim);
+ if (!victim) {
+ loop++;
+ if (loop >= 2) {
+ /*
+ * If we have not been able to reclaim
+ * anything, it might be because there are
+ * no reclaimable pages under this hierarchy.
+ */
+ if (!total)
+ break;
+ /*
+ * We want to do more targeted reclaim.
+ * excess >> 2 is not so excessive that we
+ * reclaim too much, nor so small that we keep
+ * coming back to reclaim from this cgroup.
+ */
+ if (total >= (excess >> 2) ||
+ (loop > MEM_CGROUP_MAX_RECLAIM_LOOPS))
+ break;
+ }
+ continue;
+ }
+ total += mem_cgroup_shrink_node(victim, gfp_mask, false,
+ pgdat, &nr_scanned);
+ *total_scanned += nr_scanned;
+ if (!soft_limit_excess(root_memcg))
+ break;
+ }
+ mem_cgroup_iter_break(root_memcg, victim);
+ return total;
+}
+
+unsigned long memcg1_soft_limit_reclaim(pg_data_t *pgdat, int order,
+ gfp_t gfp_mask,
+ unsigned long *total_scanned)
+{
+ unsigned long nr_reclaimed = 0;
+ struct mem_cgroup_per_node *mz, *next_mz = NULL;
+ unsigned long reclaimed;
+ int loop = 0;
+ struct mem_cgroup_tree_per_node *mctz;
+ unsigned long excess;
+
+ if (lru_gen_enabled())
+ return 0;
+
+ if (order > 0)
+ return 0;
+
+ mctz = soft_limit_tree.rb_tree_per_node[pgdat->node_id];
+
+ /*
+ * Do not even bother to check the largest node if the root
+ * is empty. Do it locklessly to prevent lock bouncing. Races
+ * are acceptable as the soft limit is best effort anyway.
+ */
+ if (!mctz || RB_EMPTY_ROOT(&mctz->rb_root))
+ return 0;
+
+ /*
+ * This loop can run for a while, especially if mem_cgroups continuously
+ * keep exceeding their soft limit and putting the system under
+ * pressure.
+ */
+ do {
+ if (next_mz)
+ mz = next_mz;
+ else
+ mz = mem_cgroup_largest_soft_limit_node(mctz);
+ if (!mz)
+ break;
+
+ reclaimed = mem_cgroup_soft_reclaim(mz->memcg, pgdat,
+ gfp_mask, total_scanned);
+ nr_reclaimed += reclaimed;
+ spin_lock_irq(&mctz->lock);
+
+ /*
+ * If we failed to reclaim anything from this memory cgroup,
+ * it is time to move on to the next cgroup.
+ */
+ next_mz = NULL;
+ if (!reclaimed)
+ next_mz = __mem_cgroup_largest_soft_limit_node(mctz);
+
+ excess = soft_limit_excess(mz->memcg);
+ /*
+ * One school of thought says that we should not add
+ * back the node to the tree if reclaim returns 0.
+ * But our reclaim could return 0 simply because, due
+ * to the reclaim priority, we are exposing a smaller subset
+ * of memory to reclaim from. Consider this a longer
+ * term TODO.
+ */
+ /* If excess == 0, no tree ops */
+ __mem_cgroup_insert_exceeded(mz, mctz, excess);
+ spin_unlock_irq(&mctz->lock);
+ css_put(&mz->memcg->css);
+ loop++;
+ /*
+ * Could not reclaim anything and there are no more
+ * mem cgroups to try or we seem to be looping without
+ * reclaiming anything.
+ */
+ if (!nr_reclaimed &&
+ (next_mz == NULL ||
+ loop > MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS))
+ break;
+ } while (!nr_reclaimed);
+ if (next_mz)
+ css_put(&next_mz->memcg->css);
+ return nr_reclaimed;
+}
+
+/*
+ * A routine for checking whether "mem" is under move_account() or not.
+ *
+ * Checks whether a cgroup is mc.from, mc.to, or under the hierarchy of
+ * moving cgroups. This is used for waiting at high memory pressure
+ * caused by "move".
+ */
+static bool mem_cgroup_under_move(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup *from;
+ struct mem_cgroup *to;
+ bool ret = false;
+ /*
+ * Unlike the task_move routines, we access mc.to and mc.from without
+ * mutual exclusion by cgroup_mutex. Here, we take the spinlock instead.
+ */
+ spin_lock(&mc.lock);
+ from = mc.from;
+ to = mc.to;
+ if (!from)
+ goto unlock;
+
+ ret = mem_cgroup_is_descendant(from, memcg) ||
+ mem_cgroup_is_descendant(to, memcg);
+unlock:
+ spin_unlock(&mc.lock);
+ return ret;
+}
+
+bool memcg1_wait_acct_move(struct mem_cgroup *memcg)
+{
+ if (mc.moving_task && current != mc.moving_task) {
+ if (mem_cgroup_under_move(memcg)) {
+ DEFINE_WAIT(wait);
+ prepare_to_wait(&mc.waitq, &wait, TASK_INTERRUPTIBLE);
+ /* moving charge context might have finished. */
+ if (mc.moving_task)
+ schedule();
+ finish_wait(&mc.waitq, &wait);
+ return true;
+ }
+ }
+ return false;
+}
+
+/**
+ * folio_memcg_lock - Bind a folio to its memcg.
+ * @folio: The folio.
+ *
+ * This function prevents unlocked LRU folios from being moved to
+ * another cgroup.
+ *
+ * It ensures the lifetime of the bound memcg. The caller is responsible
+ * for the lifetime of the folio.
+ */
+void folio_memcg_lock(struct folio *folio)
+{
+ struct mem_cgroup *memcg;
+ unsigned long flags;
+
+ /*
+ * The RCU lock is held throughout the transaction. The fast
+ * path can get away without acquiring the memcg->move_lock
+ * because page moving starts with an RCU grace period.
+ */
+ rcu_read_lock();
+
+ if (mem_cgroup_disabled())
+ return;
+again:
+ memcg = folio_memcg(folio);
+ if (unlikely(!memcg))
+ return;
+
+#ifdef CONFIG_PROVE_LOCKING
+ local_irq_save(flags);
+ might_lock(&memcg->move_lock);
+ local_irq_restore(flags);
+#endif
+
+ if (atomic_read(&memcg->moving_account) <= 0)
+ return;
+
+ spin_lock_irqsave(&memcg->move_lock, flags);
+ if (memcg != folio_memcg(folio)) {
+ spin_unlock_irqrestore(&memcg->move_lock, flags);
+ goto again;
+ }
+
+ /*
+ * When charge migration first begins, we can have multiple
+ * critical sections holding the fast-path RCU lock and one
+ * holding the slowpath move_lock. Track the task who has the
+ * move_lock for folio_memcg_unlock().
+ */
+ memcg->move_lock_task = current;
+ memcg->move_lock_flags = flags;
+}
+
+static void __folio_memcg_unlock(struct mem_cgroup *memcg)
+{
+ if (memcg && memcg->move_lock_task == current) {
+ unsigned long flags = memcg->move_lock_flags;
+
+ memcg->move_lock_task = NULL;
+ memcg->move_lock_flags = 0;
+
+ spin_unlock_irqrestore(&memcg->move_lock, flags);
+ }
+
+ rcu_read_unlock();
+}
+
+/**
+ * folio_memcg_unlock - Release the binding between a folio and its memcg.
+ * @folio: The folio.
+ *
+ * This releases the binding created by folio_memcg_lock(). This does
+ * not change the accounting of this folio to its memcg, but it does
+ * permit others to change it.
+ */
+void folio_memcg_unlock(struct folio *folio)
+{
+ __folio_memcg_unlock(folio_memcg(folio));
+}
+
+#ifdef CONFIG_SWAP
+/**
+ * mem_cgroup_move_swap_account - move swap charge and swap_cgroup's record.
+ * @entry: swap entry to be moved
+ * @from: mem_cgroup which the entry is moved from
+ * @to: mem_cgroup which the entry is moved to
+ *
+ * It succeeds only when the swap_cgroup's record for this entry is the same
+ * as the mem_cgroup's id of @from.
+ *
+ * Returns 0 on success, -EINVAL on failure.
+ *
+ * The caller must have charged to @to, IOW, called page_counter_charge() about
+ * both res and memsw, and called css_get().
+ */
+static int mem_cgroup_move_swap_account(swp_entry_t entry,
+ struct mem_cgroup *from, struct mem_cgroup *to)
+{
+ unsigned short old_id, new_id;
+
+ old_id = mem_cgroup_id(from);
+ new_id = mem_cgroup_id(to);
+
+ if (swap_cgroup_cmpxchg(entry, old_id, new_id) == old_id) {
+ mod_memcg_state(from, MEMCG_SWAP, -1);
+ mod_memcg_state(to, MEMCG_SWAP, 1);
+ return 0;
+ }
+ return -EINVAL;
+}
+#else
+static inline int mem_cgroup_move_swap_account(swp_entry_t entry,
+ struct mem_cgroup *from, struct mem_cgroup *to)
+{
+ return -EINVAL;
+}
+#endif
+
+static u64 mem_cgroup_move_charge_read(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
+ return mem_cgroup_from_css(css)->move_charge_at_immigrate;
+}
+
+#ifdef CONFIG_MMU
+static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
+ struct cftype *cft, u64 val)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+ pr_warn_once("Cgroup memory moving (move_charge_at_immigrate) is deprecated. "
+ "Please report your usecase to linux-mm@kvack.org if you "
+ "depend on this functionality.\n");
+
+ if (val & ~MOVE_MASK)
+ return -EINVAL;
+
+ /*
+ * No kind of locking is needed in here, because ->can_attach() will
+ * check this value once at the beginning of the process, and then carry
+ * on with stale data. This means that changes to this value will only
+ * affect task migrations starting after the change.
+ */
+ memcg->move_charge_at_immigrate = val;
+ return 0;
+}
+#else
+static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
+ struct cftype *cft, u64 val)
+{
+ return -ENOSYS;
+}
+#endif
+
+#ifdef CONFIG_MMU
+/* Handlers for move charge at task migration. */
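+/*
+ * Pre-charge @count pages to mc.to so that the later page walk only has to
+ * transfer already-charged pages; leftover precharges are cancelled in
+ * __mem_cgroup_clear_mc().
+ */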
+static int mem_cgroup_do_precharge(unsigned long count)
+{
+ int ret;
+
+ /* Try a single bulk charge without reclaim first, kswapd may wake */
+ ret = try_charge(mc.to, GFP_KERNEL & ~__GFP_DIRECT_RECLAIM, count);
+ if (!ret) {
+ mc.precharge += count;
+ return ret;
+ }
+
+ /* Try charges one by one with reclaim, but do not retry */
+ while (count--) {
+ ret = try_charge(mc.to, GFP_KERNEL | __GFP_NORETRY, 1);
+ if (ret)
+ return ret;
+ mc.precharge++;
+ cond_resched();
+ }
+ return 0;
+}
+
+union mc_target {
+ struct folio *folio;
+ swp_entry_t ent;
+};
+
+enum mc_target_type {
+ MC_TARGET_NONE = 0,
+ MC_TARGET_PAGE,
+ MC_TARGET_SWAP,
+ MC_TARGET_DEVICE,
+};
+
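+/*
+ * Return the page mapped by a present pte if its type (anon vs. file)
+ * matches the requested move flags, with a reference taken on it.
+ */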
+static struct page *mc_handle_present_pte(struct vm_area_struct *vma,
+ unsigned long addr, pte_t ptent)
+{
+ struct page *page = vm_normal_page(vma, addr, ptent);
+
+ if (!page)
+ return NULL;
+ if (PageAnon(page)) {
+ if (!(mc.flags & MOVE_ANON))
+ return NULL;
+ } else {
+ if (!(mc.flags & MOVE_FILE))
+ return NULL;
+ }
+ get_page(page);
+
+ return page;
+}
+
+#if defined(CONFIG_SWAP) || defined(CONFIG_DEVICE_PRIVATE)
+static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
+ pte_t ptent, swp_entry_t *entry)
+{
+ struct page *page = NULL;
+ swp_entry_t ent = pte_to_swp_entry(ptent);
+
+ if (!(mc.flags & MOVE_ANON))
+ return NULL;
+
+ /*
+ * Handle device private pages that are not accessible by the CPU, but
+ * stored as special swap entries in the page table.
+ */
+ if (is_device_private_entry(ent)) {
+ page = pfn_swap_entry_to_page(ent);
+ if (!get_page_unless_zero(page))
+ return NULL;
+ return page;
+ }
+
+ if (non_swap_entry(ent))
+ return NULL;
+
+ /*
+ * Because swap_cache_get_folio() updates some statistics counters,
+ * we call find_get_page() with swapper_space directly.
+ */
+ page = find_get_page(swap_address_space(ent), swap_cache_index(ent));
+ entry->val = ent.val;
+
+ return page;
+}
+#else
+static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
+ pte_t ptent, swp_entry_t *entry)
+{
+ return NULL;
+}
+#endif
+
+static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
+ unsigned long addr, pte_t ptent)
+{
+ unsigned long index;
+ struct folio *folio;
+
+ if (!vma->vm_file) /* anonymous vma */
+ return NULL;
+ if (!(mc.flags & MOVE_FILE))
+ return NULL;
+
+ /* The folio is moved even if it is not RSS of this task (not page-faulted by it). */
+ /* shmem/tmpfs may report page out on swap: account for that too. */
+ index = linear_page_index(vma, addr);
+ folio = filemap_get_incore_folio(vma->vm_file->f_mapping, index);
+ if (IS_ERR(folio))
+ return NULL;
+ return folio_file_page(folio, index);
+}
+
+/**
+ * mem_cgroup_move_account - move account of the folio
+ * @folio: The folio.
+ * @compound: charge the page as compound or small page
+ * @from: mem_cgroup which the folio is moved from.
+ * @to: mem_cgroup which the folio is moved to. @from != @to.
+ *
+ * The folio must be locked and not on the LRU.
+ *
+ * This function doesn't "charge" the new cgroup and doesn't "uncharge"
+ * the old cgroup.
+ */
+static int mem_cgroup_move_account(struct folio *folio,
+ bool compound,
+ struct mem_cgroup *from,
+ struct mem_cgroup *to)
+{
+ struct lruvec *from_vec, *to_vec;
+ struct pglist_data *pgdat;
+ unsigned int nr_pages = compound ? folio_nr_pages(folio) : 1;
+ int nid, ret;
+
+ VM_BUG_ON(from == to);
+ VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+ VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
+ VM_BUG_ON(compound && !folio_test_large(folio));
+
+ ret = -EINVAL;
+ if (folio_memcg(folio) != from)
+ goto out;
+
+ pgdat = folio_pgdat(folio);
+ from_vec = mem_cgroup_lruvec(from, pgdat);
+ to_vec = mem_cgroup_lruvec(to, pgdat);
+
+ folio_memcg_lock(folio);
+
+ if (folio_test_anon(folio)) {
+ if (folio_mapped(folio)) {
+ __mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
+ __mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
+ if (folio_test_pmd_mappable(folio)) {
+ __mod_lruvec_state(from_vec, NR_ANON_THPS,
+ -nr_pages);
+ __mod_lruvec_state(to_vec, NR_ANON_THPS,
+ nr_pages);
+ }
+ }
+ } else {
+ __mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
+ __mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);
+
+ if (folio_test_swapbacked(folio)) {
+ __mod_lruvec_state(from_vec, NR_SHMEM, -nr_pages);
+ __mod_lruvec_state(to_vec, NR_SHMEM, nr_pages);
+ }
+
+ if (folio_mapped(folio)) {
+ __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
+ __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
+ }
+
+ if (folio_test_dirty(folio)) {
+ struct address_space *mapping = folio_mapping(folio);
+
+ if (mapping_can_writeback(mapping)) {
+ __mod_lruvec_state(from_vec, NR_FILE_DIRTY,
+ -nr_pages);
+ __mod_lruvec_state(to_vec, NR_FILE_DIRTY,
+ nr_pages);
+ }
+ }
+ }
+
+#ifdef CONFIG_SWAP
+ if (folio_test_swapcache(folio)) {
+ __mod_lruvec_state(from_vec, NR_SWAPCACHE, -nr_pages);
+ __mod_lruvec_state(to_vec, NR_SWAPCACHE, nr_pages);
+ }
+#endif
+ if (folio_test_writeback(folio)) {
+ __mod_lruvec_state(from_vec, NR_WRITEBACK, -nr_pages);
+ __mod_lruvec_state(to_vec, NR_WRITEBACK, nr_pages);
+ }
+
+ /*
+ * All state has been migrated, let's switch to the new memcg.
+ *
+ * It is safe to change the folio's memcg here because the folio
+ * is referenced, charged, isolated, and locked: we can't race
+ * with (un)charging, migration, LRU putback, or anything else
+ * that would rely on the folio's memory cgroup staying stable.
+ *
+ * Note that folio_memcg_lock is a memcg lock, not a page lock,
+ * to save space. As soon as we switch the folio's memory cgroup to
+ * a new memcg that isn't locked, the above state can change
+ * concurrently again. Make sure we're truly done with it.
+ */
+ smp_mb();
+
+ css_get(&to->css);
+ css_put(&from->css);
+
+ folio->memcg_data = (unsigned long)to;
+
+ __folio_memcg_unlock(from);
+
+ ret = 0;
+ nid = folio_nid(folio);
+
+ local_irq_disable();
+ mem_cgroup_charge_statistics(to, nr_pages);
+ memcg1_check_events(to, nid);
+ mem_cgroup_charge_statistics(from, -nr_pages);
+ memcg1_check_events(from, nid);
+ local_irq_enable();
+out:
+ return ret;
+}
+
+/**
+ * get_mctgt_type - get target type of moving charge
+ * @vma: the vma the pte to be checked belongs
+ * @addr: the address corresponding to the pte to be checked
+ * @ptent: the pte to be checked
+ * @target: the pointer where the target folio or swap entry is stored (can be NULL)
+ *
+ * Context: Called with pte lock held.
+ * Return:
+ * * MC_TARGET_NONE - If the pte is not a target for move charge.
+ * * MC_TARGET_PAGE - If the page corresponding to this pte is a target for
+ * move charge. If @target is not NULL, the folio is stored in target->folio
+ * with extra refcnt taken (Caller should release it).
+ * * MC_TARGET_SWAP - If the swap entry corresponding to this pte is a
+ * target for charge migration. If @target is not NULL, the entry is
+ * stored in target->ent.
+ * * MC_TARGET_DEVICE - Like MC_TARGET_PAGE but page is device memory and
+ * thus not on the lru. For now such a page is charged like a regular
+ * page would be, as it is just special memory taking the place of a
+ * regular page. See Documentation/mm/hmm.rst and include/linux/hmm.h
+ */
+static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,
+ unsigned long addr, pte_t ptent, union mc_target *target)
+{
+ struct page *page = NULL;
+ struct folio *folio;
+ enum mc_target_type ret = MC_TARGET_NONE;
+ swp_entry_t ent = { .val = 0 };
+
+ if (pte_present(ptent))
+ page = mc_handle_present_pte(vma, addr, ptent);
+ else if (pte_none_mostly(ptent))
+ /*
+ * PTE markers should be treated as a none pte here, separated
+ * from other swap handling below.
+ */
+ page = mc_handle_file_pte(vma, addr, ptent);
+ else if (is_swap_pte(ptent))
+ page = mc_handle_swap_pte(vma, ptent, &ent);
+
+ if (page)
+ folio = page_folio(page);
+ if (target && page) {
+ if (!folio_trylock(folio)) {
+ folio_put(folio);
+ return ret;
+ }
+ /*
+ * page_mapped() must be stable during the move. This
+ * pte is locked, so if it's present, the page cannot
+ * become unmapped. If it isn't, we have only partial
+ * control over the mapped state: the page lock will
+ * prevent new faults against pagecache and swapcache,
+ * so an unmapped page cannot become mapped. However,
+ * if the page is already mapped elsewhere, it can
+ * unmap, and there is nothing we can do about it.
+ * Alas, skip moving the page in this case.
+ */
+ if (!pte_present(ptent) && page_mapped(page)) {
+ folio_unlock(folio);
+ folio_put(folio);
+ return ret;
+ }
+ }
+
+ if (!page && !ent.val)
+ return ret;
+ if (page) {
+ /*
+ * Do only a loose check without serialization.
+ * mem_cgroup_move_account() re-checks the folio under
+ * LRU exclusion.
+ */
+ if (folio_memcg(folio) == mc.from) {
+ ret = MC_TARGET_PAGE;
+ if (folio_is_device_private(folio) ||
+ folio_is_device_coherent(folio))
+ ret = MC_TARGET_DEVICE;
+ if (target)
+ target->folio = folio;
+ }
+ if (!ret || !target) {
+ if (target)
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+ }
+ /*
+ * There is a swap entry and the page doesn't exist or isn't charged.
+ * But we cannot move a tail page of a THP.
+ */
+ if (ent.val && !ret && (!page || !PageTransCompound(page)) &&
+ mem_cgroup_id(mc.from) == lookup_swap_cgroup_id(ent)) {
+ ret = MC_TARGET_SWAP;
+ if (target)
+ target->ent = ent;
+ }
+ return ret;
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+/*
+ * We don't consider PMD-mapped swapping or file-mapped pages because THP
+ * does not support them for now.
+ * The caller should make sure that pmd_trans_huge(pmd) is true.
+ */
+static enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
+ unsigned long addr, pmd_t pmd, union mc_target *target)
+{
+ struct page *page = NULL;
+ struct folio *folio;
+ enum mc_target_type ret = MC_TARGET_NONE;
+
+ if (unlikely(is_swap_pmd(pmd))) {
+ VM_BUG_ON(thp_migration_supported() &&
+ !is_pmd_migration_entry(pmd));
+ return ret;
+ }
+ page = pmd_page(pmd);
+ VM_BUG_ON_PAGE(!page || !PageHead(page), page);
+ folio = page_folio(page);
+ if (!(mc.flags & MOVE_ANON))
+ return ret;
+ if (folio_memcg(folio) == mc.from) {
+ ret = MC_TARGET_PAGE;
+ if (target) {
+ folio_get(folio);
+ if (!folio_trylock(folio)) {
+ folio_put(folio);
+ return MC_TARGET_NONE;
+ }
+ target->folio = folio;
+ }
+ }
+ return ret;
+}
+#else
+static inline enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
+ unsigned long addr, pmd_t pmd, union mc_target *target)
+{
+ return MC_TARGET_NONE;
+}
+#endif
+
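+/*
+ * Count how many pages and swap entries in this range are candidates for
+ * moving; the total tells mem_cgroup_precharge_mc() how much to precharge.
+ */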
+static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
+ unsigned long addr, unsigned long end,
+ struct mm_walk *walk)
+{
+ struct vm_area_struct *vma = walk->vma;
+ pte_t *pte;
+ spinlock_t *ptl;
+
+ ptl = pmd_trans_huge_lock(pmd, vma);
+ if (ptl) {
+ /*
+ * Note there cannot be MC_TARGET_DEVICE for now, as we do not
+ * support transparent huge pages with MEMORY_DEVICE_PRIVATE, but
+ * this might change.
+ */
+ if (get_mctgt_type_thp(vma, addr, *pmd, NULL) == MC_TARGET_PAGE)
+ mc.precharge += HPAGE_PMD_NR;
+ spin_unlock(ptl);
+ return 0;
+ }
+
+ pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+ if (!pte)
+ return 0;
+ for (; addr != end; pte++, addr += PAGE_SIZE)
+ if (get_mctgt_type(vma, addr, ptep_get(pte), NULL))
+ mc.precharge++; /* increment precharge temporarily */
+ pte_unmap_unlock(pte - 1, ptl);
+ cond_resched();
+
+ return 0;
+}
+
+static const struct mm_walk_ops precharge_walk_ops = {
+ .pmd_entry = mem_cgroup_count_precharge_pte_range,
+ .walk_lock = PGWALK_RDLOCK,
+};
+
+static unsigned long mem_cgroup_count_precharge(struct mm_struct *mm)
+{
+ unsigned long precharge;
+
+ mmap_read_lock(mm);
+ walk_page_range(mm, 0, ULONG_MAX, &precharge_walk_ops, NULL);
+ mmap_read_unlock(mm);
+
+ precharge = mc.precharge;
+ mc.precharge = 0;
+
+ return precharge;
+}
+
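+/* Count all movable pages in @mm and charge that many pages to mc.to up front. */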
+static int mem_cgroup_precharge_mc(struct mm_struct *mm)
+{
+ unsigned long precharge = mem_cgroup_count_precharge(mm);
+
+ VM_BUG_ON(mc.moving_task);
+ mc.moving_task = current;
+ return mem_cgroup_do_precharge(precharge);
+}
+
+/* cancels all extra charges on mc.from and mc.to, and wakes up all waiters. */
+static void __mem_cgroup_clear_mc(void)
+{
+ struct mem_cgroup *from = mc.from;
+ struct mem_cgroup *to = mc.to;
+
+ /* we must uncharge all the leftover precharges from mc.to */
+ if (mc.precharge) {
+ mem_cgroup_cancel_charge(mc.to, mc.precharge);
+ mc.precharge = 0;
+ }
+ /*
+ * we didn't uncharge from mc.from at mem_cgroup_move_account(), so
+ * we must uncharge here.
+ */
+ if (mc.moved_charge) {
+ mem_cgroup_cancel_charge(mc.from, mc.moved_charge);
+ mc.moved_charge = 0;
+ }
+ /* we must fixup refcnts and charges */
+ if (mc.moved_swap) {
+ /* uncharge swap account from the old cgroup */
+ if (!mem_cgroup_is_root(mc.from))
+ page_counter_uncharge(&mc.from->memsw, mc.moved_swap);
+
+ mem_cgroup_id_put_many(mc.from, mc.moved_swap);
+
+ /*
+ * we charged both to->memory and to->memsw, so we
+ * should uncharge to->memory.
+ */
+ if (!mem_cgroup_is_root(mc.to))
+ page_counter_uncharge(&mc.to->memory, mc.moved_swap);
+
+ mc.moved_swap = 0;
+ }
+ memcg1_oom_recover(from);
+ memcg1_oom_recover(to);
+ wake_up_all(&mc.waitq);
+}
+
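+/* Drop any leftover charges and dissolve the move context once migration is done. */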
+static void mem_cgroup_clear_mc(void)
+{
+ struct mm_struct *mm = mc.mm;
+
+ /*
+ * we must clear moving_task before waking up waiters at the end of
+ * task migration.
+ */
+ mc.moving_task = NULL;
+ __mem_cgroup_clear_mc();
+ spin_lock(&mc.lock);
+ mc.from = NULL;
+ mc.to = NULL;
+ mc.mm = NULL;
+ spin_unlock(&mc.lock);
+
+ mmput(mm);
+}
+
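+/*
+ * If the destination memcg has move_charge_at_immigrate enabled and the
+ * migrating leader owns its mm, set up the move context (mc) and precharge
+ * the destination.
+ */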
+int memcg1_can_attach(struct cgroup_taskset *tset)
+{
+ struct cgroup_subsys_state *css;
+ struct mem_cgroup *memcg = NULL; /* unneeded init to make gcc happy */
+ struct mem_cgroup *from;
+ struct task_struct *leader, *p;
+ struct mm_struct *mm;
+ unsigned long move_flags;
+ int ret = 0;
+
+ /* charge immigration isn't supported on the default hierarchy */
+ if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
+ return 0;
+
+ /*
+ * Multi-process migrations only happen on the default hierarchy
+ * where charge immigration is not used. Perform charge
+ * immigration if @tset contains a leader and whine if there are
+ * multiple.
+ */
+ p = NULL;
+ cgroup_taskset_for_each_leader(leader, css, tset) {
+ WARN_ON_ONCE(p);
+ p = leader;
+ memcg = mem_cgroup_from_css(css);
+ }
+ if (!p)
+ return 0;
+
+ /*
+ * We are now committed to this value whatever it is. Changes in this
+ * tunable will only affect upcoming migrations, not the current one.
+ * So we need to save it, and keep it going.
+ */
+ move_flags = READ_ONCE(memcg->move_charge_at_immigrate);
+ if (!move_flags)
+ return 0;
+
+ from = mem_cgroup_from_task(p);
+
+ VM_BUG_ON(from == memcg);
+
+ mm = get_task_mm(p);
+ if (!mm)
+ return 0;
+ /* We move charges only when we move the owner of the mm */
+ if (mm->owner == p) {
+ VM_BUG_ON(mc.from);
+ VM_BUG_ON(mc.to);
+ VM_BUG_ON(mc.precharge);
+ VM_BUG_ON(mc.moved_charge);
+ VM_BUG_ON(mc.moved_swap);
+
+ spin_lock(&mc.lock);
+ mc.mm = mm;
+ mc.from = from;
+ mc.to = memcg;
+ mc.flags = move_flags;
+ spin_unlock(&mc.lock);
+ /* We set mc.moving_task later */
+
+ ret = mem_cgroup_precharge_mc(mm);
+ if (ret)
+ mem_cgroup_clear_mc();
+ } else {
+ mmput(mm);
+ }
+ return ret;
+}
+
+void memcg1_cancel_attach(struct cgroup_taskset *tset)
+{
+ if (mc.to)
+ mem_cgroup_clear_mc();
+}
+
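+/*
+ * Move the charge for every movable pte/pmd in the range from mc.from to
+ * mc.to, consuming one precharge per base page or swap entry moved.
+ */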
+static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
+ unsigned long addr, unsigned long end,
+ struct mm_walk *walk)
+{
+ int ret = 0;
+ struct vm_area_struct *vma = walk->vma;
+ pte_t *pte;
+ spinlock_t *ptl;
+ enum mc_target_type target_type;
+ union mc_target target;
+ struct folio *folio;
+
+ ptl = pmd_trans_huge_lock(pmd, vma);
+ if (ptl) {
+ if (mc.precharge < HPAGE_PMD_NR) {
+ spin_unlock(ptl);
+ return 0;
+ }
+ target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
+ if (target_type == MC_TARGET_PAGE) {
+ folio = target.folio;
+ if (folio_isolate_lru(folio)) {
+ if (!mem_cgroup_move_account(folio, true,
+ mc.from, mc.to)) {
+ mc.precharge -= HPAGE_PMD_NR;
+ mc.moved_charge += HPAGE_PMD_NR;
+ }
+ folio_putback_lru(folio);
+ }
+ folio_unlock(folio);
+ folio_put(folio);
+ } else if (target_type == MC_TARGET_DEVICE) {
+ folio = target.folio;
+ if (!mem_cgroup_move_account(folio, true,
+ mc.from, mc.to)) {
+ mc.precharge -= HPAGE_PMD_NR;
+ mc.moved_charge += HPAGE_PMD_NR;
+ }
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+ spin_unlock(ptl);
+ return 0;
+ }
+
+retry:
+ pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+ if (!pte)
+ return 0;
+ for (; addr != end; addr += PAGE_SIZE) {
+ pte_t ptent = ptep_get(pte++);
+ bool device = false;
+ swp_entry_t ent;
+
+ if (!mc.precharge)
+ break;
+
+ switch (get_mctgt_type(vma, addr, ptent, &target)) {
+ case MC_TARGET_DEVICE:
+ device = true;
+ fallthrough;
+ case MC_TARGET_PAGE:
+ folio = target.folio;
+ /*
+ * We can have a part of a split pmd here. Moving it can be
+ * done, but it would be too convoluted, so simply ignore
+ * such a partial THP and keep it in the original memcg.
+ * There should be somebody mapping the head.
+ */
+ if (folio_test_large(folio))
+ goto put;
+ if (!device && !folio_isolate_lru(folio))
+ goto put;
+ if (!mem_cgroup_move_account(folio, false,
+ mc.from, mc.to)) {
+ mc.precharge--;
+ /* we uncharge from mc.from later. */
+ mc.moved_charge++;
+ }
+ if (!device)
+ folio_putback_lru(folio);
+put: /* get_mctgt_type() gets & locks the page */
+ folio_unlock(folio);
+ folio_put(folio);
+ break;
+ case MC_TARGET_SWAP:
+ ent = target.ent;
+ if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to)) {
+ mc.precharge--;
+ mem_cgroup_id_get_many(mc.to, 1);
+ /* we fixup other refcnts and charges later. */
+ mc.moved_swap++;
+ }
+ break;
+ default:
+ break;
+ }
+ }
+ pte_unmap_unlock(pte - 1, ptl);
+ cond_resched();
+
+ if (addr != end) {
+ /*
+ * We have consumed all precharges we got in can_attach().
+ * We try to charge one by one, but don't do any additional
+ * charges to mc.to if we have already failed to charge once
+ * in the attach() phase.
+ */
+ ret = mem_cgroup_do_precharge(1);
+ if (!ret)
+ goto retry;
+ }
+
+ return ret;
+}
+
+static const struct mm_walk_ops charge_walk_ops = {
+ .pmd_entry = mem_cgroup_move_charge_pte_range,
+ .walk_lock = PGWALK_RDLOCK,
+};
+
+static void mem_cgroup_move_charge(void)
+{
+ lru_add_drain_all();
+ /*
+ * Signal folio_memcg_lock() to take the memcg's move_lock
+ * while we're moving its pages to another memcg. Then wait
+ * for already started RCU-only updates to finish.
+ */
+ atomic_inc(&mc.from->moving_account);
+ synchronize_rcu();
+retry:
+ if (unlikely(!mmap_read_trylock(mc.mm))) {
+ /*
+ * Someone who is holding the mmap_lock might be waiting on the
+ * waitq. So we cancel all extra charges, wake up all waiters,
+ * and retry. Because we cancel precharges, we might not be able
+ * to move enough charges, but moving charge is a best-effort
+ * feature anyway, so it wouldn't be a big problem.
+ */
+ __mem_cgroup_clear_mc();
+ cond_resched();
+ goto retry;
+ }
+ /*
+ * When we have consumed all precharges and fail to charge any
+ * more, the page walk just aborts.
+ */
+ walk_page_range(mc.mm, 0, ULONG_MAX, &charge_walk_ops, NULL);
+ mmap_read_unlock(mc.mm);
+ atomic_dec(&mc.from->moving_account);
+}
+
+void memcg1_move_task(void)
+{
+ if (mc.to) {
+ mem_cgroup_move_charge();
+ mem_cgroup_clear_mc();
+ }
+}
+
+#else /* !CONFIG_MMU */
+int memcg1_can_attach(struct cgroup_taskset *tset)
+{
+ return 0;
+}
+void memcg1_cancel_attach(struct cgroup_taskset *tset)
+{
+}
+void memcg1_move_task(void)
+{
+}
+#endif
+
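+/*
+ * Signal every eventfd whose threshold has been crossed (in either
+ * direction) since the last check, then update current_threshold.
+ */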
+static void __mem_cgroup_threshold(struct mem_cgroup *memcg, bool swap)
+{
+ struct mem_cgroup_threshold_ary *t;
+ unsigned long usage;
+ int i;
+
+ rcu_read_lock();
+ if (!swap)
+ t = rcu_dereference(memcg->thresholds.primary);
+ else
+ t = rcu_dereference(memcg->memsw_thresholds.primary);
+
+ if (!t)
+ goto unlock;
+
+ usage = mem_cgroup_usage(memcg, swap);
+
+ /*
+ * current_threshold points to the threshold just below or equal to
+ * usage. If that is no longer true, a threshold was crossed after
+ * the last call of __mem_cgroup_threshold().
+ */
+ i = t->current_threshold;
+
+ /*
+ * Iterate backward over array of thresholds starting from
+ * current_threshold and check if a threshold is crossed.
+ * If none of the thresholds below usage is crossed, we read
+ * only one element of the array here.
+ */
+ for (; i >= 0 && unlikely(t->entries[i].threshold > usage); i--)
+ eventfd_signal(t->entries[i].eventfd);
+
+ /* i = current_threshold + 1 */
+ i++;
+
+ /*
+ * Iterate forward over array of thresholds starting from
+ * current_threshold+1 and check if a threshold is crossed.
+ * If none of the thresholds above usage is crossed, we read
+ * only one element of the array here.
+ */
+ for (; i < t->size && unlikely(t->entries[i].threshold <= usage); i++)
+ eventfd_signal(t->entries[i].eventfd);
+
+ /* Update current_threshold */
+ t->current_threshold = i - 1;
+unlock:
+ rcu_read_unlock();
+}
+
+static void mem_cgroup_threshold(struct mem_cgroup *memcg)
+{
+ while (memcg) {
+ __mem_cgroup_threshold(memcg, false);
+ if (do_memsw_account())
+ __mem_cgroup_threshold(memcg, true);
+
+ memcg = parent_mem_cgroup(memcg);
+ }
+}
+
+/*
+ * Check events in order.
+ */
+void memcg1_check_events(struct mem_cgroup *memcg, int nid)
+{
+ if (IS_ENABLED(CONFIG_PREEMPT_RT))
+ return;
+
+ /* threshold events are triggered at a finer granularity than the soft limit */
+ if (unlikely(mem_cgroup_event_ratelimit(memcg,
+ MEM_CGROUP_TARGET_THRESH))) {
+ bool do_softlimit;
+
+ do_softlimit = mem_cgroup_event_ratelimit(memcg,
+ MEM_CGROUP_TARGET_SOFTLIMIT);
+ mem_cgroup_threshold(memcg);
+ if (unlikely(do_softlimit))
+ memcg1_update_tree(memcg, nid);
+ }
+}
+
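+/* sort() comparator: order thresholds by ascending threshold value. */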
+static int compare_thresholds(const void *a, const void *b)
+{
+ const struct mem_cgroup_threshold *_a = a;
+ const struct mem_cgroup_threshold *_b = b;
+
+ if (_a->threshold > _b->threshold)
+ return 1;
+
+ if (_a->threshold < _b->threshold)
+ return -1;
+
+ return 0;
+}
+
+static int mem_cgroup_oom_notify_cb(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup_eventfd_list *ev;
+
+ spin_lock(&memcg_oom_lock);
+
+ list_for_each_entry(ev, &memcg->oom_notify, list)
+ eventfd_signal(ev->eventfd);
+
+ spin_unlock(&memcg_oom_lock);
+ return 0;
+}
+
+static void mem_cgroup_oom_notify(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup *iter;
+
+ for_each_mem_cgroup_tree(iter, memcg)
+ mem_cgroup_oom_notify_cb(iter);
+}
+
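+/*
+ * Register an eventfd to be signalled when memory (or memsw) usage crosses
+ * the threshold given in @args; a new thresholds array is built and
+ * published via RCU.
+ */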
+static int __mem_cgroup_usage_register_event(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd, const char *args, enum res_type type)
+{
+ struct mem_cgroup_thresholds *thresholds;
+ struct mem_cgroup_threshold_ary *new;
+ unsigned long threshold;
+ unsigned long usage;
+ int i, size, ret;
+
+ ret = page_counter_memparse(args, "-1", &threshold);
+ if (ret)
+ return ret;
+
+ mutex_lock(&memcg->thresholds_lock);
+
+ if (type == _MEM) {
+ thresholds = &memcg->thresholds;
+ usage = mem_cgroup_usage(memcg, false);
+ } else if (type == _MEMSWAP) {
+ thresholds = &memcg->memsw_thresholds;
+ usage = mem_cgroup_usage(memcg, true);
+ } else
+ BUG();
+
+ /* Check if a threshold was crossed before adding a new one */
+ if (thresholds->primary)
+ __mem_cgroup_threshold(memcg, type == _MEMSWAP);
+
+ size = thresholds->primary ? thresholds->primary->size + 1 : 1;
+
+ /* Allocate memory for new array of thresholds */
+ new = kmalloc(struct_size(new, entries, size), GFP_KERNEL);
+ if (!new) {
+ ret = -ENOMEM;
+ goto unlock;
+ }
+ new->size = size;
+
+ /* Copy thresholds (if any) to new array */
+ if (thresholds->primary)
+ memcpy(new->entries, thresholds->primary->entries,
+ flex_array_size(new, entries, size - 1));
+
+ /* Add new threshold */
+ new->entries[size - 1].eventfd = eventfd;
+ new->entries[size - 1].threshold = threshold;
+
+ /* Sort thresholds. Registering of new threshold isn't time-critical */
+ sort(new->entries, size, sizeof(*new->entries),
+ compare_thresholds, NULL);
+
+ /* Find current threshold */
+ new->current_threshold = -1;
+ for (i = 0; i < size; i++) {
+ if (new->entries[i].threshold <= usage) {
+ /*
+ * new->current_threshold will not be used until
+ * rcu_assign_pointer(), so it's safe to increment
+ * it here.
+ */
+ ++new->current_threshold;
+ } else
+ break;
+ }
+
+ /* Free old spare buffer and save old primary buffer as spare */
+ kfree(thresholds->spare);
+ thresholds->spare = thresholds->primary;
+
+ rcu_assign_pointer(thresholds->primary, new);
+
+ /* To be sure that nobody uses thresholds */
+ synchronize_rcu();
+
+unlock:
+ mutex_unlock(&memcg->thresholds_lock);
+
+ return ret;
+}
+
+static int mem_cgroup_usage_register_event(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd, const char *args)
+{
+ return __mem_cgroup_usage_register_event(memcg, eventfd, args, _MEM);
+}
+
+static int memsw_cgroup_usage_register_event(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd, const char *args)
+{
+ return __mem_cgroup_usage_register_event(memcg, eventfd, args, _MEMSWAP);
+}
+
+static void __mem_cgroup_usage_unregister_event(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd, enum res_type type)
+{
+ struct mem_cgroup_thresholds *thresholds;
+ struct mem_cgroup_threshold_ary *new;
+ unsigned long usage;
+ int i, j, size, entries;
+
+ mutex_lock(&memcg->thresholds_lock);
+
+ if (type == _MEM) {
+ thresholds = &memcg->thresholds;
+ usage = mem_cgroup_usage(memcg, false);
+ } else if (type == _MEMSWAP) {
+ thresholds = &memcg->memsw_thresholds;
+ usage = mem_cgroup_usage(memcg, true);
+ } else
+ BUG();
+
+ if (!thresholds->primary)
+ goto unlock;
+
+ /* Check if a threshold was crossed before removing */
+ __mem_cgroup_threshold(memcg, type == _MEMSWAP);
+
+ /* Calculate the new number of thresholds */
+ size = entries = 0;
+ for (i = 0; i < thresholds->primary->size; i++) {
+ if (thresholds->primary->entries[i].eventfd != eventfd)
+ size++;
+ else
+ entries++;
+ }
+
+ new = thresholds->spare;
+
+ /* If no items related to eventfd have been cleared, nothing to do */
+ if (!entries)
+ goto unlock;
+
+ /* Set thresholds array to NULL if we don't have thresholds */
+ if (!size) {
+ kfree(new);
+ new = NULL;
+ goto swap_buffers;
+ }
+
+ new->size = size;
+
+ /* Copy thresholds and find current threshold */
+ new->current_threshold = -1;
+ for (i = 0, j = 0; i < thresholds->primary->size; i++) {
+ if (thresholds->primary->entries[i].eventfd == eventfd)
+ continue;
+
+ new->entries[j] = thresholds->primary->entries[i];
+ if (new->entries[j].threshold <= usage) {
+ /*
+ * new->current_threshold will not be used
+ * until rcu_assign_pointer(), so it's safe to increment
+ * it here.
+ */
+ ++new->current_threshold;
+ }
+ j++;
+ }
+
+swap_buffers:
+ /* Swap primary and spare array */
+ thresholds->spare = thresholds->primary;
+
+ rcu_assign_pointer(thresholds->primary, new);
+
+ /* To be sure that nobody uses thresholds */
+ synchronize_rcu();
+
+ /* If all events are unregistered, free the spare array */
+ if (!new) {
+ kfree(thresholds->spare);
+ thresholds->spare = NULL;
+ }
+unlock:
+ mutex_unlock(&memcg->thresholds_lock);
+}
+
+static void mem_cgroup_usage_unregister_event(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd)
+{
+ return __mem_cgroup_usage_unregister_event(memcg, eventfd, _MEM);
+}
+
+static void memsw_cgroup_usage_unregister_event(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd)
+{
+ return __mem_cgroup_usage_unregister_event(memcg, eventfd, _MEMSWAP);
+}
+
+static int mem_cgroup_oom_register_event(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd, const char *args)
+{
+ struct mem_cgroup_eventfd_list *event;
+
+ event = kmalloc(sizeof(*event), GFP_KERNEL);
+ if (!event)
+ return -ENOMEM;
+
+ spin_lock(&memcg_oom_lock);
+
+ event->eventfd = eventfd;
+ list_add(&event->list, &memcg->oom_notify);
+
+ /* already in OOM ? */
+ if (memcg->under_oom)
+ eventfd_signal(eventfd);
+ spin_unlock(&memcg_oom_lock);
+
+ return 0;
+}
+
+static void mem_cgroup_oom_unregister_event(struct mem_cgroup *memcg,
+ struct eventfd_ctx *eventfd)
+{
+ struct mem_cgroup_eventfd_list *ev, *tmp;
+
+ spin_lock(&memcg_oom_lock);
+
+ list_for_each_entry_safe(ev, tmp, &memcg->oom_notify, list) {
+ if (ev->eventfd == eventfd) {
+ list_del(&ev->list);
+ kfree(ev);
+ }
+ }
+
+ spin_unlock(&memcg_oom_lock);
+}
+
+/*
+ * DO NOT USE IN NEW FILES.
+ *
+ * "cgroup.event_control" implementation.
+ *
+ * This is way over-engineered. It tries to support fully configurable
+ * events for each user. Such a level of flexibility is completely
+ * unnecessary, especially in light of the planned unified hierarchy.
+ *
+ * Please deprecate this and replace with something simpler if at all
+ * possible.
+ */
+
+/*
+ * Unregister event and free resources.
+ *
+ * Gets called from workqueue.
+ */
+static void memcg_event_remove(struct work_struct *work)
+{
+ struct mem_cgroup_event *event =
+ container_of(work, struct mem_cgroup_event, remove);
+ struct mem_cgroup *memcg = event->memcg;
+
+ remove_wait_queue(event->wqh, &event->wait);
+
+ event->unregister_event(memcg, event->eventfd);
+
+ /* Notify userspace the event is going away. */
+ eventfd_signal(event->eventfd);
+
+ eventfd_ctx_put(event->eventfd);
+ kfree(event);
+ css_put(&memcg->css);
+}
+
+/*
+ * Gets called on EPOLLHUP on eventfd when user closes it.
+ *
+ * Called with wqh->lock held and interrupts disabled.
+ */
+static int memcg_event_wake(wait_queue_entry_t *wait, unsigned mode,
+ int sync, void *key)
+{
+ struct mem_cgroup_event *event =
+ container_of(wait, struct mem_cgroup_event, wait);
+ struct mem_cgroup *memcg = event->memcg;
+ __poll_t flags = key_to_poll(key);
+
+ if (flags & EPOLLHUP) {
+ /*
+ * If the event has been detached at cgroup removal, we
+ * can simply return knowing the other side will clean up
+ * for us.
+ *
+ * We can't race against event freeing since the other
+ * side will require wqh->lock via remove_wait_queue(),
+ * which we hold.
+ */
+ spin_lock(&memcg->event_list_lock);
+ if (!list_empty(&event->list)) {
+ list_del_init(&event->list);
+ /*
+ * We are in atomic context, but cgroup_event_remove()
+ * may sleep, so we have to call it in workqueue.
+ */
+ schedule_work(&event->remove);
+ }
+ spin_unlock(&memcg->event_list_lock);
+ }
+
+ return 0;
+}
+
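+/*
+ * poll_table callback: remember the waitqueue head so memcg_event_remove()
+ * can detach from it later, and queue our wait entry on it.
+ */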
+static void memcg_event_ptable_queue_proc(struct file *file,
+ wait_queue_head_t *wqh, poll_table *pt)
+{
+ struct mem_cgroup_event *event =
+ container_of(pt, struct mem_cgroup_event, pt);
+
+ event->wqh = wqh;
+ add_wait_queue(wqh, &event->wait);
+}
+
+/*
+ * DO NOT USE IN NEW FILES.
+ *
+ * Parse input and register new cgroup event handler.
+ *
+ * Input must be in format '<event_fd> <control_fd> <args>'.
+ * Interpretation of args is defined by control file implementation.
+ */
+static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
+ char *buf, size_t nbytes, loff_t off)
+{
+ struct cgroup_subsys_state *css = of_css(of);
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ struct mem_cgroup_event *event;
+ struct cgroup_subsys_state *cfile_css;
+ unsigned int efd, cfd;
+ struct fd efile;
+ struct fd cfile;
+ struct dentry *cdentry;
+ const char *name;
+ char *endp;
+ int ret;
+
+ if (IS_ENABLED(CONFIG_PREEMPT_RT))
+ return -EOPNOTSUPP;
+
+ buf = strstrip(buf);
+
+ efd = simple_strtoul(buf, &endp, 10);
+ if (*endp != ' ')
+ return -EINVAL;
+ buf = endp + 1;
+
+ cfd = simple_strtoul(buf, &endp, 10);
+ if ((*endp != ' ') && (*endp != '\0'))
+ return -EINVAL;
+ buf = endp + 1;
+
+ event = kzalloc(sizeof(*event), GFP_KERNEL);
+ if (!event)
+ return -ENOMEM;
+
+ event->memcg = memcg;
+ INIT_LIST_HEAD(&event->list);
+ init_poll_funcptr(&event->pt, memcg_event_ptable_queue_proc);
+ init_waitqueue_func_entry(&event->wait, memcg_event_wake);
+ INIT_WORK(&event->remove, memcg_event_remove);
+
+ efile = fdget(efd);
+ if (!efile.file) {
+ ret = -EBADF;
+ goto out_kfree;
+ }
+
+ event->eventfd = eventfd_ctx_fileget(efile.file);
+ if (IS_ERR(event->eventfd)) {
+ ret = PTR_ERR(event->eventfd);
+ goto out_put_efile;
+ }
+
+ cfile = fdget(cfd);
+ if (!cfile.file) {
+ ret = -EBADF;
+ goto out_put_eventfd;
+ }
+
+ /* the process needs read permission on the control file */
+ /* AV: shouldn't we check that it's been opened for read instead? */
+ ret = file_permission(cfile.file, MAY_READ);
+ if (ret < 0)
+ goto out_put_cfile;
+
+ /*
+ * The control file must be a regular cgroup1 file. As a regular cgroup
+ * file can't be renamed, it's safe to access its name afterwards.
+ */
+ cdentry = cfile.file->f_path.dentry;
+ if (cdentry->d_sb->s_type != &cgroup_fs_type || !d_is_reg(cdentry)) {
+ ret = -EINVAL;
+ goto out_put_cfile;
+ }
+
+ /*
+ * Determine the event callbacks and set them in @event. This used
+ * to be done via struct cftype but cgroup core no longer knows
+ * about these events. The following is crude but the whole thing
+ * is for compatibility anyway.
+ *
+ * DO NOT ADD NEW FILES.
+ */
+ name = cdentry->d_name.name;
+
+ if (!strcmp(name, "memory.usage_in_bytes")) {
+ event->register_event = mem_cgroup_usage_register_event;
+ event->unregister_event = mem_cgroup_usage_unregister_event;
+ } else if (!strcmp(name, "memory.oom_control")) {
+ event->register_event = mem_cgroup_oom_register_event;
+ event->unregister_event = mem_cgroup_oom_unregister_event;
+ } else if (!strcmp(name, "memory.pressure_level")) {
+ event->register_event = vmpressure_register_event;
+ event->unregister_event = vmpressure_unregister_event;
+ } else if (!strcmp(name, "memory.memsw.usage_in_bytes")) {
+ event->register_event = memsw_cgroup_usage_register_event;
+ event->unregister_event = memsw_cgroup_usage_unregister_event;
+ } else {
+ ret = -EINVAL;
+ goto out_put_cfile;
+ }
+
+ /*
+ * Verify that @cfile belongs to @css. Also, remaining events are
+ * automatically removed on cgroup destruction but the removal is
+ * asynchronous, so take an extra ref on @css.
+ */
+ cfile_css = css_tryget_online_from_dir(cdentry->d_parent,
+ &memory_cgrp_subsys);
+ ret = -EINVAL;
+ if (IS_ERR(cfile_css))
+ goto out_put_cfile;
+ if (cfile_css != css) {
+ css_put(cfile_css);
+ goto out_put_cfile;
+ }
+
+ ret = event->register_event(memcg, event->eventfd, buf);
+ if (ret)
+ goto out_put_css;
+
+ vfs_poll(efile.file, &event->pt);
+
+ spin_lock_irq(&memcg->event_list_lock);
+ list_add(&event->list, &memcg->event_list);
+ spin_unlock_irq(&memcg->event_list_lock);
+
+ fdput(cfile);
+ fdput(efile);
+
+ return nbytes;
+
+out_put_css:
+ css_put(css);
+out_put_cfile:
+ fdput(cfile);
+out_put_eventfd:
+ eventfd_ctx_put(event->eventfd);
+out_put_efile:
+ fdput(efile);
+out_kfree:
+ kfree(event);
+
+ return ret;
+}
+
+void memcg1_memcg_init(struct mem_cgroup *memcg)
+{
+ INIT_LIST_HEAD(&memcg->oom_notify);
+ mutex_init(&memcg->thresholds_lock);
+ spin_lock_init(&memcg->move_lock);
+ INIT_LIST_HEAD(&memcg->event_list);
+ spin_lock_init(&memcg->event_list_lock);
+}
+
+void memcg1_css_offline(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup_event *event, *tmp;
+
+ /*
+ * Unregister events and notify userspace.
+ * Notify userspace about cgroup removal only after rmdir of the cgroup
+ * directory, to avoid a race between userspace and kernelspace.
+ */
+ spin_lock_irq(&memcg->event_list_lock);
+ list_for_each_entry_safe(event, tmp, &memcg->event_list, list) {
+ list_del_init(&event->list);
+ schedule_work(&event->remove);
+ }
+ spin_unlock_irq(&memcg->event_list_lock);
+}
+
+/*
+ * Check whether the OOM killer is already running under our hierarchy.
+ * If someone is already running it, return false.
+ */
+static bool mem_cgroup_oom_trylock(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup *iter, *failed = NULL;
+
+ spin_lock(&memcg_oom_lock);
+
+ for_each_mem_cgroup_tree(iter, memcg) {
+ if (iter->oom_lock) {
+ /*
+ * this subtree of our hierarchy is already locked,
+ * so we cannot take the lock.
+ */
+ failed = iter;
+ mem_cgroup_iter_break(memcg, iter);
+ break;
+ } else
+ iter->oom_lock = true;
+ }
+
+ if (failed) {
+ /*
+ * OK, we failed to lock the whole subtree, so we have to
+ * clean up what we already set up, up to the failing memcg.
+ */
+ for_each_mem_cgroup_tree(iter, memcg) {
+ if (iter == failed) {
+ mem_cgroup_iter_break(memcg, iter);
+ break;
+ }
+ iter->oom_lock = false;
+ }
+ } else
+ mutex_acquire(&memcg_oom_lock_dep_map, 0, 1, _RET_IP_);
+
+ spin_unlock(&memcg_oom_lock);
+
+ return !failed;
+}
+
+static void mem_cgroup_oom_unlock(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup *iter;
+
+ spin_lock(&memcg_oom_lock);
+ mutex_release(&memcg_oom_lock_dep_map, _RET_IP_);
+ for_each_mem_cgroup_tree(iter, memcg)
+ iter->oom_lock = false;
+ spin_unlock(&memcg_oom_lock);
+}
+
+static void mem_cgroup_mark_under_oom(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup *iter;
+
+ spin_lock(&memcg_oom_lock);
+ for_each_mem_cgroup_tree(iter, memcg)
+ iter->under_oom++;
+ spin_unlock(&memcg_oom_lock);
+}
+
+static void mem_cgroup_unmark_under_oom(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup *iter;
+
+ /*
+ * Be careful about under_oom underflows because a child memcg
+ * could have been added after mem_cgroup_mark_under_oom.
+ */
+ spin_lock(&memcg_oom_lock);
+ for_each_mem_cgroup_tree(iter, memcg)
+ if (iter->under_oom > 0)
+ iter->under_oom--;
+ spin_unlock(&memcg_oom_lock);
+}
+
+static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq);
+
+struct oom_wait_info {
+ struct mem_cgroup *memcg;
+ wait_queue_entry_t wait;
+};
+
+static int memcg_oom_wake_function(wait_queue_entry_t *wait,
+ unsigned mode, int sync, void *arg)
+{
+ struct mem_cgroup *wake_memcg = (struct mem_cgroup *)arg;
+ struct mem_cgroup *oom_wait_memcg;
+ struct oom_wait_info *oom_wait_info;
+
+ oom_wait_info = container_of(wait, struct oom_wait_info, wait);
+ oom_wait_memcg = oom_wait_info->memcg;
+
+ if (!mem_cgroup_is_descendant(wake_memcg, oom_wait_memcg) &&
+ !mem_cgroup_is_descendant(oom_wait_memcg, wake_memcg))
+ return 0;
+ return autoremove_wake_function(wait, mode, sync, arg);
+}
+
+void memcg1_oom_recover(struct mem_cgroup *memcg)
+{
+ /*
+ * For the following lockless ->under_oom test, the only required
+ * guarantee is that it must see the state asserted by an OOM when
+ * this function is called as a result of userland actions
+ * triggered by the notification of the OOM. This is trivially
+ * achieved by invoking mem_cgroup_mark_under_oom() before
+ * triggering notification.
+ */
+ if (memcg && memcg->under_oom)
+ __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg);
+}
+
+/**
+ * mem_cgroup_oom_synchronize - complete memcg OOM handling
+ * @handle: actually kill/wait or just clean up the OOM state
+ *
+ * This has to be called at the end of a page fault if the memcg OOM
+ * handler was enabled.
+ *
+ * Memcg supports userspace OOM handling where failed allocations must
+ * sleep on a waitqueue until the userspace task resolves the
+ * situation. Sleeping directly in the charge context with all kinds
+ * of locks held is not a good idea, instead we remember an OOM state
+ * in the task and mem_cgroup_oom_synchronize() has to be called at
+ * the end of the page fault to complete the OOM handling.
+ *
+ * Returns %true if an ongoing memcg OOM situation was detected and
+ * completed, %false otherwise.
+ */
+bool mem_cgroup_oom_synchronize(bool handle)
+{
+ struct mem_cgroup *memcg = current->memcg_in_oom;
+ struct oom_wait_info owait;
+ bool locked;
+
+ /* OOM is global, do not handle */
+ if (!memcg)
+ return false;
+
+ if (!handle)
+ goto cleanup;
+
+ owait.memcg = memcg;
+ owait.wait.flags = 0;
+ owait.wait.func = memcg_oom_wake_function;
+ owait.wait.private = current;
+ INIT_LIST_HEAD(&owait.wait.entry);
+
+ prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
+ mem_cgroup_mark_under_oom(memcg);
+
+ locked = mem_cgroup_oom_trylock(memcg);
+
+ if (locked)
+ mem_cgroup_oom_notify(memcg);
+
+ schedule();
+ mem_cgroup_unmark_under_oom(memcg);
+ finish_wait(&memcg_oom_waitq, &owait.wait);
+
+ if (locked)
+ mem_cgroup_oom_unlock(memcg);
+cleanup:
+ current->memcg_in_oom = NULL;
+ css_put(&memcg->css);
+ return true;
+}
+
+
+bool memcg1_oom_prepare(struct mem_cgroup *memcg, bool *locked)
+{
+ /*
+ * We are in the middle of the charge context here, so we
+ * don't want to block when potentially sitting on a callstack
+ * that holds all kinds of filesystem and mm locks.
+ *
+ * cgroup1 allows disabling the OOM killer and waiting for outside
+ * handling until the charge can succeed; remember the context and put
+ * the task to sleep at the end of the page fault when all locks are
+ * released.
+ *
+ * On the other hand, in-kernel OOM killer allows for an async victim
+ * memory reclaim (oom_reaper) and that means that we are not solely
+ * relying on the oom victim to make forward progress, and we can
+ * invoke the oom killer here.
+ *
+ * Please note that mem_cgroup_out_of_memory might fail to find a
+ * victim and then we have to bail out from the charge path.
+ */
+ if (READ_ONCE(memcg->oom_kill_disable)) {
+ if (current->in_user_fault) {
+ css_get(&memcg->css);
+ current->memcg_in_oom = memcg;
+ }
+ return false;
+ }
+
+ mem_cgroup_mark_under_oom(memcg);
+
+ *locked = mem_cgroup_oom_trylock(memcg);
+
+ if (*locked)
+ mem_cgroup_oom_notify(memcg);
+
+ mem_cgroup_unmark_under_oom(memcg);
+
+ return true;
+}
+
+void memcg1_oom_finish(struct mem_cgroup *memcg, bool locked)
+{
+ if (locked)
+ mem_cgroup_oom_unlock(memcg);
+}
+
+static DEFINE_MUTEX(memcg_max_mutex);
+
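+/*
+ * Install a new hard limit on memory or memsw, draining stocks and
+ * reclaiming from the cgroup as needed until the new limit can be set.
+ */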
+static int mem_cgroup_resize_max(struct mem_cgroup *memcg,
+ unsigned long max, bool memsw)
+{
+ bool enlarge = false;
+ bool drained = false;
+ int ret;
+ bool limits_invariant;
+ struct page_counter *counter = memsw ? &memcg->memsw : &memcg->memory;
+
+ do {
+ if (signal_pending(current)) {
+ ret = -EINTR;
+ break;
+ }
+
+ mutex_lock(&memcg_max_mutex);
+ /*
+ * Make sure that the new limit (memsw or memory limit) doesn't
+ * break our basic invariant rule memory.max <= memsw.max.
+ */
+ limits_invariant = memsw ? max >= READ_ONCE(memcg->memory.max) :
+ max <= memcg->memsw.max;
+ if (!limits_invariant) {
+ mutex_unlock(&memcg_max_mutex);
+ ret = -EINVAL;
+ break;
+ }
+ if (max > counter->max)
+ enlarge = true;
+ ret = page_counter_set_max(counter, max);
+ mutex_unlock(&memcg_max_mutex);
+
+ if (!ret)
+ break;
+
+ if (!drained) {
+ drain_all_stock(memcg);
+ drained = true;
+ continue;
+ }
+
+ if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL,
+ memsw ? 0 : MEMCG_RECLAIM_MAY_SWAP, NULL)) {
+ ret = -EBUSY;
+ break;
+ }
+ } while (true);
+
+ if (!ret && enlarge)
+ memcg1_oom_recover(memcg);
+
+ return ret;
+}
+
+/*
+ * Reclaims as many pages from the given memcg as possible.
+ *
+ * Caller is responsible for holding css reference for memcg.
+ */
+static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
+{
+ int nr_retries = MAX_RECLAIM_RETRIES;
+
+ /* we call try-to-free pages to make this cgroup empty */
+ lru_add_drain_all();
+
+ drain_all_stock(memcg);
+
+ /* try to free all pages in this cgroup */
+ while (nr_retries && page_counter_read(&memcg->memory)) {
+ if (signal_pending(current))
+ return -EINTR;
+
+ if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL,
+ MEMCG_RECLAIM_MAY_SWAP, NULL))
+ nr_retries--;
+ }
+
+ return 0;
+}
+
+static ssize_t mem_cgroup_force_empty_write(struct kernfs_open_file *of,
+ char *buf, size_t nbytes,
+ loff_t off)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+
+ if (mem_cgroup_is_root(memcg))
+ return -EINVAL;
+ return mem_cgroup_force_empty(memcg) ?: nbytes;
+}
+
+static u64 mem_cgroup_hierarchy_read(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
+ return 1;
+}
+
+static int mem_cgroup_hierarchy_write(struct cgroup_subsys_state *css,
+ struct cftype *cft, u64 val)
+{
+ if (val == 1)
+ return 0;
+
+ pr_warn_once("Non-hierarchical mode is deprecated. "
+ "Please report your usecase to linux-mm@kvack.org if you "
+ "depend on this functionality.\n");
+
+ return -EINVAL;
+}
+
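+/*
+ * Read handler shared by the usage/limit/failcnt/soft_limit files;
+ * cft->private encodes which counter and which attribute to report.
+ */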
+static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ struct page_counter *counter;
+
+ switch (MEMFILE_TYPE(cft->private)) {
+ case _MEM:
+ counter = &memcg->memory;
+ break;
+ case _MEMSWAP:
+ counter = &memcg->memsw;
+ break;
+ case _KMEM:
+ counter = &memcg->kmem;
+ break;
+ case _TCP:
+ counter = &memcg->tcpmem;
+ break;
+ default:
+ BUG();
+ }
+
+ switch (MEMFILE_ATTR(cft->private)) {
+ case RES_USAGE:
+ if (counter == &memcg->memory)
+ return (u64)mem_cgroup_usage(memcg, false) * PAGE_SIZE;
+ if (counter == &memcg->memsw)
+ return (u64)mem_cgroup_usage(memcg, true) * PAGE_SIZE;
+ return (u64)page_counter_read(counter) * PAGE_SIZE;
+ case RES_LIMIT:
+ return (u64)counter->max * PAGE_SIZE;
+ case RES_MAX_USAGE:
+ return (u64)counter->watermark * PAGE_SIZE;
+ case RES_FAILCNT:
+ return counter->failcnt;
+ case RES_SOFT_LIMIT:
+ return (u64)READ_ONCE(memcg->soft_limit) * PAGE_SIZE;
+ default:
+ BUG();
+ }
+}
+
+/*
+ * This function doesn't do anything useful. Its only job is to provide a read
+ * handler for a file so that cgroup_file_mode() will add read permissions.
+ */
+static int mem_cgroup_dummy_seq_show(__always_unused struct seq_file *m,
+ __always_unused void *v)
+{
+ return -EINVAL;
+}
+
+static int memcg_update_tcp_max(struct mem_cgroup *memcg, unsigned long max)
+{
+ int ret;
+
+ mutex_lock(&memcg_max_mutex);
+
+ ret = page_counter_set_max(&memcg->tcpmem, max);
+ if (ret)
+ goto out;
+
+ if (!memcg->tcpmem_active) {
+ /*
+ * The active flag needs to be written after the static_key
+ * update. This is what guarantees that the socket activation
+ * function is the last one to run. See mem_cgroup_sk_alloc()
+ * for details, and note that we don't mark any socket as
+ * belonging to this memcg until that flag is up.
+ *
+ * We need to do this, because static_keys will span multiple
+ * sites, but we can't control their order. If we mark a socket
+ * as accounted, but the accounting functions are not patched in
+ * yet, we'll lose accounting.
+ *
+ * We never race with the readers in mem_cgroup_sk_alloc(),
+ * because when this value changes, the code to process it is not
+ * patched in yet.
+ */
+ static_branch_inc(&memcg_sockets_enabled_key);
+ memcg->tcpmem_active = true;
+ }
+out:
+ mutex_unlock(&memcg_max_mutex);
+ return ret;
+}
+
+/*
+ * Write handler for the RES_LIMIT and RES_SOFT_LIMIT files.
+ */
+static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
+ char *buf, size_t nbytes, loff_t off)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+ unsigned long nr_pages;
+ int ret;
+
+ buf = strstrip(buf);
+ ret = page_counter_memparse(buf, "-1", &nr_pages);
+ if (ret)
+ return ret;
+
+ switch (MEMFILE_ATTR(of_cft(of)->private)) {
+ case RES_LIMIT:
+ if (mem_cgroup_is_root(memcg)) { /* Can't set limit on root */
+ ret = -EINVAL;
+ break;
+ }
+ switch (MEMFILE_TYPE(of_cft(of)->private)) {
+ case _MEM:
+ ret = mem_cgroup_resize_max(memcg, nr_pages, false);
+ break;
+ case _MEMSWAP:
+ ret = mem_cgroup_resize_max(memcg, nr_pages, true);
+ break;
+ case _KMEM:
+ pr_warn_once("kmem.limit_in_bytes is deprecated and will be removed. "
+ "Writing any value to this file has no effect. "
+ "Please report your usecase to linux-mm@kvack.org if you "
+ "depend on this functionality.\n");
+ ret = 0;
+ break;
+ case _TCP:
+ ret = memcg_update_tcp_max(memcg, nr_pages);
+ break;
+ }
+ break;
+ case RES_SOFT_LIMIT:
+ if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+ ret = -EOPNOTSUPP;
+ } else {
+ WRITE_ONCE(memcg->soft_limit, nr_pages);
+ ret = 0;
+ }
+ break;
+ }
+ return ret ?: nbytes;
+}
+
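+/* Reset the watermark or failcnt of the counter selected by cft->private. */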
+static ssize_t mem_cgroup_reset(struct kernfs_open_file *of, char *buf,
+ size_t nbytes, loff_t off)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+ struct page_counter *counter;
+
+ switch (MEMFILE_TYPE(of_cft(of)->private)) {
+ case _MEM:
+ counter = &memcg->memory;
+ break;
+ case _MEMSWAP:
+ counter = &memcg->memsw;
+ break;
+ case _KMEM:
+ counter = &memcg->kmem;
+ break;
+ case _TCP:
+ counter = &memcg->tcpmem;
+ break;
+ default:
+ BUG();
+ }
+
+ switch (MEMFILE_ATTR(of_cft(of)->private)) {
+ case RES_MAX_USAGE:
+ page_counter_reset_watermark(counter);
+ break;
+ case RES_FAILCNT:
+ counter->failcnt = 0;
+ break;
+ default:
+ BUG();
+ }
+
+ return nbytes;
+}
+
+#ifdef CONFIG_NUMA
+
+#define LRU_ALL_FILE (BIT(LRU_INACTIVE_FILE) | BIT(LRU_ACTIVE_FILE))
+#define LRU_ALL_ANON (BIT(LRU_INACTIVE_ANON) | BIT(LRU_ACTIVE_ANON))
+#define LRU_ALL ((1 << NR_LRU_LISTS) - 1)
+
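+/*
+ * Sum the LRU lists selected by @lru_mask on node @nid, either for this
+ * memcg alone or for its whole subtree when @tree is true.
+ */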
+static unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
+ int nid, unsigned int lru_mask, bool tree)
+{
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+ unsigned long nr = 0;
+ enum lru_list lru;
+
+ VM_BUG_ON((unsigned)nid >= nr_node_ids);
+
+ for_each_lru(lru) {
+ if (!(BIT(lru) & lru_mask))
+ continue;
+ if (tree)
+ nr += lruvec_page_state(lruvec, NR_LRU_BASE + lru);
+ else
+ nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
+ }
+ return nr;
+}
+
+static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg,
+ unsigned int lru_mask,
+ bool tree)
+{
+ unsigned long nr = 0;
+ enum lru_list lru;
+
+ for_each_lru(lru) {
+ if (!(BIT(lru) & lru_mask))
+ continue;
+ if (tree)
+ nr += memcg_page_state(memcg, NR_LRU_BASE + lru);
+ else
+ nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
+ }
+ return nr;
+}
+
+static int memcg_numa_stat_show(struct seq_file *m, void *v)
+{
+ struct numa_stat {
+ const char *name;
+ unsigned int lru_mask;
+ };
+
+ static const struct numa_stat stats[] = {
+ { "total", LRU_ALL },
+ { "file", LRU_ALL_FILE },
+ { "anon", LRU_ALL_ANON },
+ { "unevictable", BIT(LRU_UNEVICTABLE) },
+ };
+ const struct numa_stat *stat;
+ int nid;
+ struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
+
+ mem_cgroup_flush_stats(memcg);
+
+ for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
+ seq_printf(m, "%s=%lu", stat->name,
+ mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
+ false));
+ for_each_node_state(nid, N_MEMORY)
+ seq_printf(m, " N%d=%lu", nid,
+ mem_cgroup_node_nr_lru_pages(memcg, nid,
+ stat->lru_mask, false));
+ seq_putc(m, '\n');
+ }
+
+ for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
+
+ seq_printf(m, "hierarchical_%s=%lu", stat->name,
+ mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
+ true));
+ for_each_node_state(nid, N_MEMORY)
+ seq_printf(m, " N%d=%lu", nid,
+ mem_cgroup_node_nr_lru_pages(memcg, nid,
+ stat->lru_mask, true));
+ seq_putc(m, '\n');
+ }
+
+ return 0;
+}
+#endif /* CONFIG_NUMA */
+
+static const unsigned int memcg1_stats[] = {
+ NR_FILE_PAGES,
+ NR_ANON_MAPPED,
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ NR_ANON_THPS,
+#endif
+ NR_SHMEM,
+ NR_FILE_MAPPED,
+ NR_FILE_DIRTY,
+ NR_WRITEBACK,
+ WORKINGSET_REFAULT_ANON,
+ WORKINGSET_REFAULT_FILE,
+#ifdef CONFIG_SWAP
+ MEMCG_SWAP,
+ NR_SWAPCACHE,
+#endif
+};
+
+static const char *const memcg1_stat_names[] = {
+ "cache",
+ "rss",
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ "rss_huge",
+#endif
+ "shmem",
+ "mapped_file",
+ "dirty",
+ "writeback",
+ "workingset_refault_anon",
+ "workingset_refault_file",
+#ifdef CONFIG_SWAP
+ "swap",
+ "swapcached",
+#endif
+};
+
+/* Universal VM events cgroup1 shows, original sort order */
+static const unsigned int memcg1_events[] = {
+ PGPGIN,
+ PGPGOUT,
+ PGFAULT,
+ PGMAJFAULT,
+};
+
+void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
+{
+ unsigned long memory, memsw;
+ struct mem_cgroup *mi;
+ unsigned int i;
+
+ BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
+
+ mem_cgroup_flush_stats(memcg);
+
+ for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
+ unsigned long nr;
+
+ nr = memcg_page_state_local_output(memcg, memcg1_stats[i]);
+ seq_buf_printf(s, "%s %lu\n", memcg1_stat_names[i], nr);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(memcg1_events); i++)
+ seq_buf_printf(s, "%s %lu\n", vm_event_name(memcg1_events[i]),
+ memcg_events_local(memcg, memcg1_events[i]));
+
+ for (i = 0; i < NR_LRU_LISTS; i++)
+ seq_buf_printf(s, "%s %lu\n", lru_list_name(i),
+ memcg_page_state_local(memcg, NR_LRU_BASE + i) *
+ PAGE_SIZE);
+
+ /* Hierarchical information */
+ memory = memsw = PAGE_COUNTER_MAX;
+ for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
+ memory = min(memory, READ_ONCE(mi->memory.max));
+ memsw = min(memsw, READ_ONCE(mi->memsw.max));
+ }
+ seq_buf_printf(s, "hierarchical_memory_limit %llu\n",
+ (u64)memory * PAGE_SIZE);
+ seq_buf_printf(s, "hierarchical_memsw_limit %llu\n",
+ (u64)memsw * PAGE_SIZE);
+
+ for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
+ unsigned long nr;
+
+ nr = memcg_page_state_output(memcg, memcg1_stats[i]);
+ seq_buf_printf(s, "total_%s %llu\n", memcg1_stat_names[i],
+ (u64)nr);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(memcg1_events); i++)
+ seq_buf_printf(s, "total_%s %llu\n",
+ vm_event_name(memcg1_events[i]),
+ (u64)memcg_events(memcg, memcg1_events[i]));
+
+ for (i = 0; i < NR_LRU_LISTS; i++)
+ seq_buf_printf(s, "total_%s %llu\n", lru_list_name(i),
+ (u64)memcg_page_state(memcg, NR_LRU_BASE + i) *
+ PAGE_SIZE);
+
+#ifdef CONFIG_DEBUG_VM
+ {
+ pg_data_t *pgdat;
+ struct mem_cgroup_per_node *mz;
+ unsigned long anon_cost = 0;
+ unsigned long file_cost = 0;
+
+ for_each_online_pgdat(pgdat) {
+ mz = memcg->nodeinfo[pgdat->node_id];
+
+ anon_cost += mz->lruvec.anon_cost;
+ file_cost += mz->lruvec.file_cost;
+ }
+ seq_buf_printf(s, "anon_cost %lu\n", anon_cost);
+ seq_buf_printf(s, "file_cost %lu\n", file_cost);
+ }
+#endif
+}
+
+static u64 mem_cgroup_swappiness_read(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+ return mem_cgroup_swappiness(memcg);
+}
+
+static int mem_cgroup_swappiness_write(struct cgroup_subsys_state *css,
+ struct cftype *cft, u64 val)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+ if (val > MAX_SWAPPINESS)
+ return -EINVAL;
+
+ if (!mem_cgroup_is_root(memcg))
+ WRITE_ONCE(memcg->swappiness, val);
+ else
+ WRITE_ONCE(vm_swappiness, val);
+
+ return 0;
+}
+
+static int mem_cgroup_oom_control_read(struct seq_file *sf, void *v)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_seq(sf);
+
+ seq_printf(sf, "oom_kill_disable %d\n", READ_ONCE(memcg->oom_kill_disable));
+ seq_printf(sf, "under_oom %d\n", (bool)memcg->under_oom);
+ seq_printf(sf, "oom_kill %lu\n",
+ atomic_long_read(&memcg->memory_events[MEMCG_OOM_KILL]));
+ return 0;
+}
+
+static int mem_cgroup_oom_control_write(struct cgroup_subsys_state *css,
+ struct cftype *cft, u64 val)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+ /* cannot set to root cgroup and only 0 and 1 are allowed */
+ if (mem_cgroup_is_root(memcg) || !((val == 0) || (val == 1)))
+ return -EINVAL;
+
+ WRITE_ONCE(memcg->oom_kill_disable, val);
+ if (!val)
+ memcg1_oom_recover(memcg);
+
+ return 0;
+}
+
+#ifdef CONFIG_SLUB_DEBUG
+static int mem_cgroup_slab_show(struct seq_file *m, void *p)
+{
+ /*
+ * Deprecated.
+ * Please take a look at tools/cgroup/memcg_slabinfo.py.
+ */
+ return 0;
+}
+#endif
+
+struct cftype mem_cgroup_legacy_files[] = {
+ {
+ .name = "usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEM, RES_USAGE),
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "max_usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEM, RES_MAX_USAGE),
+ .write = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "limit_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEM, RES_LIMIT),
+ .write = mem_cgroup_write,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "soft_limit_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEM, RES_SOFT_LIMIT),
+ .write = mem_cgroup_write,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "failcnt",
+ .private = MEMFILE_PRIVATE(_MEM, RES_FAILCNT),
+ .write = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "stat",
+ .seq_show = memory_stat_show,
+ },
+ {
+ .name = "force_empty",
+ .write = mem_cgroup_force_empty_write,
+ },
+ {
+ .name = "use_hierarchy",
+ .write_u64 = mem_cgroup_hierarchy_write,
+ .read_u64 = mem_cgroup_hierarchy_read,
+ },
+ {
+ .name = "cgroup.event_control", /* XXX: for compat */
+ .write = memcg_write_event_control,
+ .flags = CFTYPE_NO_PREFIX | CFTYPE_WORLD_WRITABLE,
+ },
+ {
+ .name = "swappiness",
+ .read_u64 = mem_cgroup_swappiness_read,
+ .write_u64 = mem_cgroup_swappiness_write,
+ },
+ {
+ .name = "move_charge_at_immigrate",
+ .read_u64 = mem_cgroup_move_charge_read,
+ .write_u64 = mem_cgroup_move_charge_write,
+ },
+ {
+ .name = "oom_control",
+ .seq_show = mem_cgroup_oom_control_read,
+ .write_u64 = mem_cgroup_oom_control_write,
+ },
+ {
+ .name = "pressure_level",
+ .seq_show = mem_cgroup_dummy_seq_show,
+ },
+#ifdef CONFIG_NUMA
+ {
+ .name = "numa_stat",
+ .seq_show = memcg_numa_stat_show,
+ },
+#endif
+ {
+ .name = "kmem.limit_in_bytes",
+ .private = MEMFILE_PRIVATE(_KMEM, RES_LIMIT),
+ .write = mem_cgroup_write,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "kmem.usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_KMEM, RES_USAGE),
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "kmem.failcnt",
+ .private = MEMFILE_PRIVATE(_KMEM, RES_FAILCNT),
+ .write = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "kmem.max_usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_KMEM, RES_MAX_USAGE),
+ .write = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+#ifdef CONFIG_SLUB_DEBUG
+ {
+ .name = "kmem.slabinfo",
+ .seq_show = mem_cgroup_slab_show,
+ },
+#endif
+ {
+ .name = "kmem.tcp.limit_in_bytes",
+ .private = MEMFILE_PRIVATE(_TCP, RES_LIMIT),
+ .write = mem_cgroup_write,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "kmem.tcp.usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_TCP, RES_USAGE),
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "kmem.tcp.failcnt",
+ .private = MEMFILE_PRIVATE(_TCP, RES_FAILCNT),
+ .write = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "kmem.tcp.max_usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_TCP, RES_MAX_USAGE),
+ .write = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ { }, /* terminate */
+};
+
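+/* Legacy memory+swap (memsw) control files. */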
+struct cftype memsw_files[] = {
+ {
+ .name = "memsw.usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEMSWAP, RES_USAGE),
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "memsw.max_usage_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEMSWAP, RES_MAX_USAGE),
+ .write = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "memsw.limit_in_bytes",
+ .private = MEMFILE_PRIVATE(_MEMSWAP, RES_LIMIT),
+ .write = mem_cgroup_write,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ {
+ .name = "memsw.failcnt",
+ .private = MEMFILE_PRIVATE(_MEMSWAP, RES_FAILCNT),
+ .write = mem_cgroup_reset,
+ .read_u64 = mem_cgroup_read_u64,
+ },
+ { }, /* terminate */
+};
+
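+/*
+ * On the legacy (v1) hierarchy, kernel memory is also tracked in the
+ * dedicated "kmem" page counter; a negative nr_pages uncharges it.
+ * On the default hierarchy this is a no-op.
+ */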
+void memcg1_account_kmem(struct mem_cgroup *memcg, int nr_pages)
+{
+ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
+ if (nr_pages > 0)
+ page_counter_charge(&memcg->kmem, nr_pages);
+ else
+ page_counter_uncharge(&memcg->kmem, -nr_pages);
+ }
+}
+
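+/*
+ * Charge @nr_pages against the v1 tcpmem counter. On success the socket
+ * memory pressure flag is cleared; on failure it is set, and the charge
+ * is still forced for __GFP_NOFAIL requests.
+ */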
+bool memcg1_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages,
+ gfp_t gfp_mask)
+{
+ struct page_counter *fail;
+
+ if (page_counter_try_charge(&memcg->tcpmem, nr_pages, &fail)) {
+ memcg->tcpmem_pressure = 0;
+ return true;
+ }
+ memcg->tcpmem_pressure = 1;
+ if (gfp_mask & __GFP_NOFAIL) {
+ page_counter_charge(&memcg->tcpmem, nr_pages);
+ return true;
+ }
+ return false;
+}
+
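+/*
+ * Allocate and initialize the per-node RB trees that back cgroup v1
+ * soft limit reclaim.
+ */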
+static int __init memcg1_init(void)
+{
+ int node;
+
+ for_each_node(node) {
+ struct mem_cgroup_tree_per_node *rtpn;
+
+ rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL, node);
+
+ rtpn->rb_root = RB_ROOT;
+ rtpn->rb_rightmost = NULL;
+ spin_lock_init(&rtpn->lock);
+ soft_limit_tree.rb_tree_per_node[node] = rtpn;
+ }
+
+ return 0;
+}
+subsys_initcall(memcg1_init);
diff --git a/mm/memcontrol-v1.h b/mm/memcontrol-v1.h
new file mode 100644
index 000000000000..56d7eaa98274
--- /dev/null
+++ b/mm/memcontrol-v1.h
@@ -0,0 +1,147 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __MM_MEMCONTROL_V1_H
+#define __MM_MEMCONTROL_V1_H
+
+#include <linux/cgroup-defs.h>
+
+/* Cgroup v1 and v2 common declarations */
+
+void mem_cgroup_charge_statistics(struct mem_cgroup *memcg, int nr_pages);
+int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ unsigned int nr_pages);
+
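+/* The root memcg is not subject to limits, so charging it can be skipped. */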
+static inline int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ unsigned int nr_pages)
+{
+ if (mem_cgroup_is_root(memcg))
+ return 0;
+
+ return try_charge_memcg(memcg, gfp_mask, nr_pages);
+}
+
+void mem_cgroup_id_get_many(struct mem_cgroup *memcg, unsigned int n);
+void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n);
+
+/*
+ * Iteration constructs for visiting all cgroups (under a tree). If a
+ * loop is exited prematurely (via break), mem_cgroup_iter_break() must
+ * be called to drop the reference held on the last visited cgroup.
+ */
+#define for_each_mem_cgroup_tree(iter, root) \
+ for (iter = mem_cgroup_iter(root, NULL, NULL); \
+ iter != NULL; \
+ iter = mem_cgroup_iter(root, iter, NULL))
+
+#define for_each_mem_cgroup(iter) \
+ for (iter = mem_cgroup_iter(NULL, NULL, NULL); \
+ iter != NULL; \
+ iter = mem_cgroup_iter(NULL, iter, NULL))
+
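+/*
+ * Example (illustrative only; should_stop() is a placeholder for the
+ * caller's own condition):
+ *
+ *	for_each_mem_cgroup_tree(iter, root) {
+ *		if (should_stop(iter)) {
+ *			mem_cgroup_iter_break(root, iter);
+ *			break;
+ *		}
+ *	}
+ */
+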
+/* Whether legacy memory+swap accounting is active */
+static bool do_memsw_account(void)
+{
+ return !cgroup_subsys_on_dfl(memory_cgrp_subsys);
+}
+
+/*
+ * The per-memcg event counter is incremented on every pagein/pageout.
+ * With THP, it is incremented by the number of pages involved. The
+ * counter is used to trigger periodic memcg events; this is simpler
+ * and cheaper than tracking jiffies for the same purpose.
+ */
+enum mem_cgroup_events_target {
+ MEM_CGROUP_TARGET_THRESH,
+ MEM_CGROUP_TARGET_SOFTLIMIT,
+ MEM_CGROUP_NTARGETS,
+};
+
+bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
+ enum mem_cgroup_events_target target);
+unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap);
+
+void drain_all_stock(struct mem_cgroup *root_memcg);
+
+unsigned long memcg_events(struct mem_cgroup *memcg, int event);
+unsigned long memcg_events_local(struct mem_cgroup *memcg, int event);
+unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx);
+unsigned long memcg_page_state_output(struct mem_cgroup *memcg, int item);
+unsigned long memcg_page_state_local_output(struct mem_cgroup *memcg, int item);
+int memory_stat_show(struct seq_file *m, void *v);
+
+/* Cgroup v1-specific declarations */
+#ifdef CONFIG_MEMCG_V1
+void memcg1_memcg_init(struct mem_cgroup *memcg);
+void memcg1_remove_from_trees(struct mem_cgroup *memcg);
+
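+/* Reset the v1 soft limit back to "unlimited" (PAGE_COUNTER_MAX). */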
+static inline void memcg1_soft_limit_reset(struct mem_cgroup *memcg)
+{
+ WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
+}
+
+bool memcg1_wait_acct_move(struct mem_cgroup *memcg);
+
+struct cgroup_taskset;
+int memcg1_can_attach(struct cgroup_taskset *tset);
+void memcg1_cancel_attach(struct cgroup_taskset *tset);
+void memcg1_move_task(void);
+void memcg1_css_offline(struct mem_cgroup *memcg);
+
+/* for encoding cft->private value on file */
+enum res_type {
+ _MEM,
+ _MEMSWAP,
+ _KMEM,
+ _TCP,
+};
+
+bool memcg1_oom_prepare(struct mem_cgroup *memcg, bool *locked);
+void memcg1_oom_finish(struct mem_cgroup *memcg, bool locked);
+void memcg1_oom_recover(struct mem_cgroup *memcg);
+
+void memcg1_check_events(struct mem_cgroup *memcg, int nid);
+
+void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s);
+
+void memcg1_account_kmem(struct mem_cgroup *memcg, int nr_pages);
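+/* Whether v1 tcp memory accounting has been enabled for this memcg. */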
+static inline bool memcg1_tcpmem_active(struct mem_cgroup *memcg)
+{
+ return memcg->tcpmem_active;
+}
+bool memcg1_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages,
+ gfp_t gfp_mask);
+static inline void memcg1_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
+{
+ page_counter_uncharge(&memcg->tcpmem, nr_pages);
+}
+
+extern struct cftype memsw_files[];
+extern struct cftype mem_cgroup_legacy_files[];
+
+#else /* CONFIG_MEMCG_V1 */
+
+static inline void memcg1_memcg_init(struct mem_cgroup *memcg) {}
+static inline void memcg1_remove_from_trees(struct mem_cgroup *memcg) {}
+static inline void memcg1_soft_limit_reset(struct mem_cgroup *memcg) {}
+static inline bool memcg1_wait_acct_move(struct mem_cgroup *memcg) { return false; }
+static inline void memcg1_css_offline(struct mem_cgroup *memcg) {}
+
+static inline bool memcg1_oom_prepare(struct mem_cgroup *memcg, bool *locked) { return true; }
+static inline void memcg1_oom_finish(struct mem_cgroup *memcg, bool locked) {}
+static inline void memcg1_oom_recover(struct mem_cgroup *memcg) {}
+
+static inline void memcg1_check_events(struct mem_cgroup *memcg, int nid) {}
+
+static inline void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s) {}
+
+static inline void memcg1_account_kmem(struct mem_cgroup *memcg, int nr_pages) {}
+static inline bool memcg1_tcpmem_active(struct mem_cgroup *memcg) { return false; }
+static inline bool memcg1_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages,
+ gfp_t gfp_mask) { return true; }
+static inline void memcg1_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages) {}
+
+extern struct cftype memsw_files[];
+extern struct cftype mem_cgroup_legacy_files[];
+#endif /* CONFIG_MEMCG_V1 */
+
+#endif /* __MM_MEMCONTROL_V1_H */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8f2f1bb18c9c..960371788687 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -28,7 +28,6 @@
#include <linux/page_counter.h>
#include <linux/memcontrol.h>
#include <linux/cgroup.h>
-#include <linux/pagewalk.h>
#include <linux/sched/mm.h>
#include <linux/shmem_fs.h>
#include <linux/hugetlb.h>
@@ -45,14 +44,11 @@
#include <linux/mutex.h>
#include <linux/rbtree.h>
#include <linux/slab.h>
-#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/spinlock.h>
-#include <linux/eventfd.h>
-#include <linux/poll.h>
-#include <linux/sort.h>
#include <linux/fs.h>
#include <linux/seq_file.h>
+#include <linux/parser.h>
#include <linux/vmpressure.h>
#include <linux/memremap.h>
#include <linux/mm_inline.h>
@@ -60,7 +56,6 @@
#include <linux/cpu.h>
#include <linux/oom.h>
#include <linux/lockdep.h>
-#include <linux/file.h>
#include <linux/resume_user_mode.h>
#include <linux/psi.h>
#include <linux/seq_buf.h>
@@ -70,7 +65,7 @@
#include <net/sock.h>
#include <net/ip.h>
#include "slab.h"
-#include "swap.h"
+#include "memcontrol-v1.h"
#include <linux/uaccess.h>
@@ -98,140 +93,9 @@ static bool cgroup_memory_nobpf __ro_after_init;
static DECLARE_WAIT_QUEUE_HEAD(memcg_cgwb_frn_waitq);
#endif
-/* Whether legacy memory+swap accounting is active */
-static bool do_memsw_account(void)
-{
- return !cgroup_subsys_on_dfl(memory_cgrp_subsys);
-}
-
#define THRESHOLDS_EVENTS_TARGET 128
#define SOFTLIMIT_EVENTS_TARGET 1024
-/*
- * Cgroups above their limits are maintained in a RB-Tree, independent of
- * their hierarchy representation
- */
-
-struct mem_cgroup_tree_per_node {
- struct rb_root rb_root;
- struct rb_node *rb_rightmost;
- spinlock_t lock;
-};
-
-struct mem_cgroup_tree {
- struct mem_cgroup_tree_per_node *rb_tree_per_node[MAX_NUMNODES];
-};
-
-static struct mem_cgroup_tree soft_limit_tree __read_mostly;
-
-/* for OOM */
-struct mem_cgroup_eventfd_list {
- struct list_head list;
- struct eventfd_ctx *eventfd;
-};
-
-/*
- * cgroup_event represents events which userspace want to receive.
- */
-struct mem_cgroup_event {
- /*
- * memcg which the event belongs to.
- */
- struct mem_cgroup *memcg;
- /*
- * eventfd to signal userspace about the event.
- */
- struct eventfd_ctx *eventfd;
- /*
- * Each of these stored in a list by the cgroup.
- */
- struct list_head list;
- /*
- * register_event() callback will be used to add new userspace
- * waiter for changes related to this event. Use eventfd_signal()
- * on eventfd to send notification to userspace.
- */
- int (*register_event)(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd, const char *args);
- /*
- * unregister_event() callback will be called when userspace closes
- * the eventfd or on cgroup removing. This callback must be set,
- * if you want provide notification functionality.
- */
- void (*unregister_event)(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd);
- /*
- * All fields below needed to unregister event when
- * userspace closes eventfd.
- */
- poll_table pt;
- wait_queue_head_t *wqh;
- wait_queue_entry_t wait;
- struct work_struct remove;
-};
-
-static void mem_cgroup_threshold(struct mem_cgroup *memcg);
-static void mem_cgroup_oom_notify(struct mem_cgroup *memcg);
-
-/* Stuffs for move charges at task migration. */
-/*
- * Types of charges to be moved.
- */
-#define MOVE_ANON 0x1U
-#define MOVE_FILE 0x2U
-#define MOVE_MASK (MOVE_ANON | MOVE_FILE)
-
-/* "mc" and its members are protected by cgroup_mutex */
-static struct move_charge_struct {
- spinlock_t lock; /* for from, to */
- struct mm_struct *mm;
- struct mem_cgroup *from;
- struct mem_cgroup *to;
- unsigned long flags;
- unsigned long precharge;
- unsigned long moved_charge;
- unsigned long moved_swap;
- struct task_struct *moving_task; /* a task moving charges */
- wait_queue_head_t waitq; /* a waitq for other context */
-} mc = {
- .lock = __SPIN_LOCK_UNLOCKED(mc.lock),
- .waitq = __WAIT_QUEUE_HEAD_INITIALIZER(mc.waitq),
-};
-
-/*
- * Maximum loops in mem_cgroup_soft_reclaim(), used for soft
- * limit reclaim to prevent infinite loops, if they ever occur.
- */
-#define MEM_CGROUP_MAX_RECLAIM_LOOPS 100
-#define MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS 2
-
-/* for encoding cft->private value on file */
-enum res_type {
- _MEM,
- _MEMSWAP,
- _KMEM,
- _TCP,
-};
-
-#define MEMFILE_PRIVATE(x, val) ((x) << 16 | (val))
-#define MEMFILE_TYPE(val) ((val) >> 16 & 0xffff)
-#define MEMFILE_ATTR(val) ((val) & 0xffff)
-
-/*
- * Iteration constructs for visiting all cgroups (under a tree). If
- * loops are exited prematurely (break), mem_cgroup_iter_break() must
- * be used for reference counting.
- */
-#define for_each_mem_cgroup_tree(iter, root) \
- for (iter = mem_cgroup_iter(root, NULL, NULL); \
- iter != NULL; \
- iter = mem_cgroup_iter(root, iter, NULL))
-
-#define for_each_mem_cgroup(iter) \
- for (iter = mem_cgroup_iter(NULL, NULL, NULL); \
- iter != NULL; \
- iter = mem_cgroup_iter(NULL, iter, NULL))
-
static inline bool task_is_dying(void)
{
return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
@@ -254,7 +118,6 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
#define CURRENT_OBJCG_UPDATE_BIT 0
#define CURRENT_OBJCG_UPDATE_FLAG (1UL << CURRENT_OBJCG_UPDATE_BIT)
-#ifdef CONFIG_MEMCG_KMEM
static DEFINE_SPINLOCK(objcg_lock);
bool mem_cgroup_kmem_disabled(void)
@@ -359,7 +222,6 @@ EXPORT_SYMBOL(memcg_kmem_online_key);
DEFINE_STATIC_KEY_FALSE(memcg_bpf_enabled_key);
EXPORT_SYMBOL(memcg_bpf_enabled_key);
-#endif
/**
* mem_cgroup_css_from_folio - css of the memcg associated with a folio
@@ -412,169 +274,6 @@ ino_t page_cgroup_ino(struct page *page)
return ino;
}
-static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_node *mz,
- struct mem_cgroup_tree_per_node *mctz,
- unsigned long new_usage_in_excess)
-{
- struct rb_node **p = &mctz->rb_root.rb_node;
- struct rb_node *parent = NULL;
- struct mem_cgroup_per_node *mz_node;
- bool rightmost = true;
-
- if (mz->on_tree)
- return;
-
- mz->usage_in_excess = new_usage_in_excess;
- if (!mz->usage_in_excess)
- return;
- while (*p) {
- parent = *p;
- mz_node = rb_entry(parent, struct mem_cgroup_per_node,
- tree_node);
- if (mz->usage_in_excess < mz_node->usage_in_excess) {
- p = &(*p)->rb_left;
- rightmost = false;
- } else {
- p = &(*p)->rb_right;
- }
- }
-
- if (rightmost)
- mctz->rb_rightmost = &mz->tree_node;
-
- rb_link_node(&mz->tree_node, parent, p);
- rb_insert_color(&mz->tree_node, &mctz->rb_root);
- mz->on_tree = true;
-}
-
-static void __mem_cgroup_remove_exceeded(struct mem_cgroup_per_node *mz,
- struct mem_cgroup_tree_per_node *mctz)
-{
- if (!mz->on_tree)
- return;
-
- if (&mz->tree_node == mctz->rb_rightmost)
- mctz->rb_rightmost = rb_prev(&mz->tree_node);
-
- rb_erase(&mz->tree_node, &mctz->rb_root);
- mz->on_tree = false;
-}
-
-static void mem_cgroup_remove_exceeded(struct mem_cgroup_per_node *mz,
- struct mem_cgroup_tree_per_node *mctz)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&mctz->lock, flags);
- __mem_cgroup_remove_exceeded(mz, mctz);
- spin_unlock_irqrestore(&mctz->lock, flags);
-}
-
-static unsigned long soft_limit_excess(struct mem_cgroup *memcg)
-{
- unsigned long nr_pages = page_counter_read(&memcg->memory);
- unsigned long soft_limit = READ_ONCE(memcg->soft_limit);
- unsigned long excess = 0;
-
- if (nr_pages > soft_limit)
- excess = nr_pages - soft_limit;
-
- return excess;
-}
-
-static void mem_cgroup_update_tree(struct mem_cgroup *memcg, int nid)
-{
- unsigned long excess;
- struct mem_cgroup_per_node *mz;
- struct mem_cgroup_tree_per_node *mctz;
-
- if (lru_gen_enabled()) {
- if (soft_limit_excess(memcg))
- lru_gen_soft_reclaim(memcg, nid);
- return;
- }
-
- mctz = soft_limit_tree.rb_tree_per_node[nid];
- if (!mctz)
- return;
- /*
- * Necessary to update all ancestors when hierarchy is used.
- * because their event counter is not touched.
- */
- for (; memcg; memcg = parent_mem_cgroup(memcg)) {
- mz = memcg->nodeinfo[nid];
- excess = soft_limit_excess(memcg);
- /*
- * We have to update the tree if mz is on RB-tree or
- * mem is over its softlimit.
- */
- if (excess || mz->on_tree) {
- unsigned long flags;
-
- spin_lock_irqsave(&mctz->lock, flags);
- /* if on-tree, remove it */
- if (mz->on_tree)
- __mem_cgroup_remove_exceeded(mz, mctz);
- /*
- * Insert again. mz->usage_in_excess will be updated.
- * If excess is 0, no tree ops.
- */
- __mem_cgroup_insert_exceeded(mz, mctz, excess);
- spin_unlock_irqrestore(&mctz->lock, flags);
- }
- }
-}
-
-static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg)
-{
- struct mem_cgroup_tree_per_node *mctz;
- struct mem_cgroup_per_node *mz;
- int nid;
-
- for_each_node(nid) {
- mz = memcg->nodeinfo[nid];
- mctz = soft_limit_tree.rb_tree_per_node[nid];
- if (mctz)
- mem_cgroup_remove_exceeded(mz, mctz);
- }
-}
-
-static struct mem_cgroup_per_node *
-__mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
-{
- struct mem_cgroup_per_node *mz;
-
-retry:
- mz = NULL;
- if (!mctz->rb_rightmost)
- goto done; /* Nothing to reclaim from */
-
- mz = rb_entry(mctz->rb_rightmost,
- struct mem_cgroup_per_node, tree_node);
- /*
- * Remove the node now but someone else can add it back,
- * we will to add it back at the end of reclaim to its correct
- * position in the tree.
- */
- __mem_cgroup_remove_exceeded(mz, mctz);
- if (!soft_limit_excess(mz->memcg) ||
- !css_tryget(&mz->memcg->css))
- goto retry;
-done:
- return mz;
-}
-
-static struct mem_cgroup_per_node *
-mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
-{
- struct mem_cgroup_per_node *mz;
-
- spin_lock_irq(&mctz->lock);
- mz = __mem_cgroup_largest_soft_limit_node(mctz);
- spin_unlock_irq(&mctz->lock);
- return mz;
-}
-
/* Subset of node_stat_item for memcg stats */
static const unsigned int memcg_node_stat_items[] = {
NR_INACTIVE_ANON,
@@ -722,7 +421,7 @@ static const unsigned int memcg_vm_event_stat[] = {
PGDEACTIVATE,
PGLAZYFREE,
PGLAZYFREED,
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+#ifdef CONFIG_ZSWAP
ZSWPIN,
ZSWPOUT,
ZSWPWB,
@@ -971,7 +670,7 @@ void __mod_memcg_state(struct mem_cgroup *memcg, enum memcg_stat_item idx,
}
/* idx can be of type enum memcg_stat_item or node_stat_item. */
-static unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx)
+unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx)
{
long x;
int i = memcg_stats_index(idx);
@@ -1120,7 +819,7 @@ void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
memcg_stats_unlock();
}
-static unsigned long memcg_events(struct mem_cgroup *memcg, int event)
+unsigned long memcg_events(struct mem_cgroup *memcg, int event)
{
int i = memcg_events_index(event);
@@ -1130,7 +829,7 @@ static unsigned long memcg_events(struct mem_cgroup *memcg, int event)
return READ_ONCE(memcg->vmstats->events[i]);
}
-static unsigned long memcg_events_local(struct mem_cgroup *memcg, int event)
+unsigned long memcg_events_local(struct mem_cgroup *memcg, int event)
{
int i = memcg_events_index(event);
@@ -1140,8 +839,7 @@ static unsigned long memcg_events_local(struct mem_cgroup *memcg, int event)
return READ_ONCE(memcg->vmstats->events_local[i]);
}
-static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
- int nr_pages)
+void mem_cgroup_charge_statistics(struct mem_cgroup *memcg, int nr_pages)
{
/* pagein of a big page is an event. So, ignore page size */
if (nr_pages > 0)
@@ -1154,8 +852,8 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
__this_cpu_add(memcg->vmstats_percpu->nr_page_events, nr_pages);
}
-static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
- enum mem_cgroup_events_target target)
+bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
+ enum mem_cgroup_events_target target)
{
unsigned long val, next;
@@ -1179,28 +877,6 @@ static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
return false;
}
-/*
- * Check events in order.
- *
- */
-static void memcg_check_events(struct mem_cgroup *memcg, int nid)
-{
- if (IS_ENABLED(CONFIG_PREEMPT_RT))
- return;
-
- /* threshold event is triggered in finer grain than soft limit */
- if (unlikely(mem_cgroup_event_ratelimit(memcg,
- MEM_CGROUP_TARGET_THRESH))) {
- bool do_softlimit;
-
- do_softlimit = mem_cgroup_event_ratelimit(memcg,
- MEM_CGROUP_TARGET_SOFTLIMIT);
- mem_cgroup_threshold(memcg);
- if (unlikely(do_softlimit))
- mem_cgroup_update_tree(memcg, nid);
- }
-}
-
struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
{
/*
@@ -1652,51 +1328,6 @@ static unsigned long mem_cgroup_margin(struct mem_cgroup *memcg)
return margin;
}
-/*
- * A routine for checking "mem" is under move_account() or not.
- *
- * Checking a cgroup is mc.from or mc.to or under hierarchy of
- * moving cgroups. This is for waiting at high-memory pressure
- * caused by "move".
- */
-static bool mem_cgroup_under_move(struct mem_cgroup *memcg)
-{
- struct mem_cgroup *from;
- struct mem_cgroup *to;
- bool ret = false;
- /*
- * Unlike task_move routines, we access mc.to, mc.from not under
- * mutual exclusion by cgroup_mutex. Here, we take spinlock instead.
- */
- spin_lock(&mc.lock);
- from = mc.from;
- to = mc.to;
- if (!from)
- goto unlock;
-
- ret = mem_cgroup_is_descendant(from, memcg) ||
- mem_cgroup_is_descendant(to, memcg);
-unlock:
- spin_unlock(&mc.lock);
- return ret;
-}
-
-static bool mem_cgroup_wait_acct_move(struct mem_cgroup *memcg)
-{
- if (mc.moving_task && current != mc.moving_task) {
- if (mem_cgroup_under_move(memcg)) {
- DEFINE_WAIT(wait);
- prepare_to_wait(&mc.waitq, &wait, TASK_INTERRUPTIBLE);
- /* moving charge context might have finished. */
- if (mc.moving_task)
- schedule();
- finish_wait(&mc.waitq, &wait);
- return true;
- }
- }
- return false;
-}
-
struct memory_stat {
const char *name;
unsigned int idx;
@@ -1713,7 +1344,7 @@ static const struct memory_stat memory_stats[] = {
{ "sock", MEMCG_SOCK },
{ "vmalloc", MEMCG_VMALLOC },
{ "shmem", NR_SHMEM },
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+#ifdef CONFIG_ZSWAP
{ "zswap", MEMCG_ZSWAP_B },
{ "zswapped", MEMCG_ZSWAPPED },
#endif
@@ -1783,15 +1414,13 @@ static int memcg_page_state_output_unit(int item)
}
}
-static inline unsigned long memcg_page_state_output(struct mem_cgroup *memcg,
- int item)
+unsigned long memcg_page_state_output(struct mem_cgroup *memcg, int item)
{
return memcg_page_state(memcg, item) *
memcg_page_state_output_unit(item);
}
-static inline unsigned long memcg_page_state_local_output(
- struct mem_cgroup *memcg, int item)
+unsigned long memcg_page_state_local_output(struct mem_cgroup *memcg, int item)
{
return memcg_page_state_local(memcg, item) *
memcg_page_state_output_unit(item);
@@ -1845,20 +1474,16 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
vm_event_name(memcg_vm_event_stat[i]),
memcg_events(memcg, memcg_vm_event_stat[i]));
}
-
- /* The above should easily fit into one page */
- WARN_ON_ONCE(seq_buf_has_overflowed(s));
}
-static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s);
-
static void memory_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
{
if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
memcg_stat_format(memcg, s);
else
memcg1_stat_format(memcg, s);
- WARN_ON_ONCE(seq_buf_has_overflowed(s));
+ if (seq_buf_has_overflowed(s))
+ pr_warn("%s: Warning, stat buffer overflow, please report\n", __func__);
}
/**
@@ -1906,6 +1531,7 @@ void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
pr_info("swap: usage %llukB, limit %llukB, failcnt %lu\n",
K((u64)page_counter_read(&memcg->swap)),
K((u64)READ_ONCE(memcg->swap.max)), memcg->swap.failcnt);
+#ifdef CONFIG_MEMCG_V1
else {
pr_info("memory+swap: usage %llukB, limit %llukB, failcnt %lu\n",
K((u64)page_counter_read(&memcg->memsw)),
@@ -1914,6 +1540,7 @@ void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
K((u64)page_counter_read(&memcg->kmem)),
K((u64)memcg->kmem.max), memcg->kmem.failcnt);
}
+#endif
pr_info("Memory cgroup stats for ");
pr_cont_cgroup_path(memcg->css.cgroup);
@@ -1979,180 +1606,6 @@ unlock:
return ret;
}
-static int mem_cgroup_soft_reclaim(struct mem_cgroup *root_memcg,
- pg_data_t *pgdat,
- gfp_t gfp_mask,
- unsigned long *total_scanned)
-{
- struct mem_cgroup *victim = NULL;
- int total = 0;
- int loop = 0;
- unsigned long excess;
- unsigned long nr_scanned;
- struct mem_cgroup_reclaim_cookie reclaim = {
- .pgdat = pgdat,
- };
-
- excess = soft_limit_excess(root_memcg);
-
- while (1) {
- victim = mem_cgroup_iter(root_memcg, victim, &reclaim);
- if (!victim) {
- loop++;
- if (loop >= 2) {
- /*
- * If we have not been able to reclaim
- * anything, it might because there are
- * no reclaimable pages under this hierarchy
- */
- if (!total)
- break;
- /*
- * We want to do more targeted reclaim.
- * excess >> 2 is not to excessive so as to
- * reclaim too much, nor too less that we keep
- * coming back to reclaim from this cgroup
- */
- if (total >= (excess >> 2) ||
- (loop > MEM_CGROUP_MAX_RECLAIM_LOOPS))
- break;
- }
- continue;
- }
- total += mem_cgroup_shrink_node(victim, gfp_mask, false,
- pgdat, &nr_scanned);
- *total_scanned += nr_scanned;
- if (!soft_limit_excess(root_memcg))
- break;
- }
- mem_cgroup_iter_break(root_memcg, victim);
- return total;
-}
-
-#ifdef CONFIG_LOCKDEP
-static struct lockdep_map memcg_oom_lock_dep_map = {
- .name = "memcg_oom_lock",
-};
-#endif
-
-static DEFINE_SPINLOCK(memcg_oom_lock);
-
-/*
- * Check OOM-Killer is already running under our hierarchy.
- * If someone is running, return false.
- */
-static bool mem_cgroup_oom_trylock(struct mem_cgroup *memcg)
-{
- struct mem_cgroup *iter, *failed = NULL;
-
- spin_lock(&memcg_oom_lock);
-
- for_each_mem_cgroup_tree(iter, memcg) {
- if (iter->oom_lock) {
- /*
- * this subtree of our hierarchy is already locked
- * so we cannot give a lock.
- */
- failed = iter;
- mem_cgroup_iter_break(memcg, iter);
- break;
- } else
- iter->oom_lock = true;
- }
-
- if (failed) {
- /*
- * OK, we failed to lock the whole subtree so we have
- * to clean up what we set up to the failing subtree
- */
- for_each_mem_cgroup_tree(iter, memcg) {
- if (iter == failed) {
- mem_cgroup_iter_break(memcg, iter);
- break;
- }
- iter->oom_lock = false;
- }
- } else
- mutex_acquire(&memcg_oom_lock_dep_map, 0, 1, _RET_IP_);
-
- spin_unlock(&memcg_oom_lock);
-
- return !failed;
-}
-
-static void mem_cgroup_oom_unlock(struct mem_cgroup *memcg)
-{
- struct mem_cgroup *iter;
-
- spin_lock(&memcg_oom_lock);
- mutex_release(&memcg_oom_lock_dep_map, _RET_IP_);
- for_each_mem_cgroup_tree(iter, memcg)
- iter->oom_lock = false;
- spin_unlock(&memcg_oom_lock);
-}
-
-static void mem_cgroup_mark_under_oom(struct mem_cgroup *memcg)
-{
- struct mem_cgroup *iter;
-
- spin_lock(&memcg_oom_lock);
- for_each_mem_cgroup_tree(iter, memcg)
- iter->under_oom++;
- spin_unlock(&memcg_oom_lock);
-}
-
-static void mem_cgroup_unmark_under_oom(struct mem_cgroup *memcg)
-{
- struct mem_cgroup *iter;
-
- /*
- * Be careful about under_oom underflows because a child memcg
- * could have been added after mem_cgroup_mark_under_oom.
- */
- spin_lock(&memcg_oom_lock);
- for_each_mem_cgroup_tree(iter, memcg)
- if (iter->under_oom > 0)
- iter->under_oom--;
- spin_unlock(&memcg_oom_lock);
-}
-
-static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq);
-
-struct oom_wait_info {
- struct mem_cgroup *memcg;
- wait_queue_entry_t wait;
-};
-
-static int memcg_oom_wake_function(wait_queue_entry_t *wait,
- unsigned mode, int sync, void *arg)
-{
- struct mem_cgroup *wake_memcg = (struct mem_cgroup *)arg;
- struct mem_cgroup *oom_wait_memcg;
- struct oom_wait_info *oom_wait_info;
-
- oom_wait_info = container_of(wait, struct oom_wait_info, wait);
- oom_wait_memcg = oom_wait_info->memcg;
-
- if (!mem_cgroup_is_descendant(wake_memcg, oom_wait_memcg) &&
- !mem_cgroup_is_descendant(oom_wait_memcg, wake_memcg))
- return 0;
- return autoremove_wake_function(wait, mode, sync, arg);
-}
-
-static void memcg_oom_recover(struct mem_cgroup *memcg)
-{
- /*
- * For the following lockless ->under_oom test, the only required
- * guarantee is that it must see the state asserted by an OOM when
- * this function is called as a result of userland actions
- * triggered by the notification of the OOM. This is trivially
- * achieved by invoking mem_cgroup_mark_under_oom() before
- * triggering notification.
- */
- if (memcg && memcg->under_oom)
- __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg);
-}
-
/*
* Returns true if successfully killed one or more processes. Though in some
* corner cases it can return true even without killing any process.
@@ -2166,105 +1619,17 @@ static bool mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order)
memcg_memory_event(memcg, MEMCG_OOM);
- /*
- * We are in the middle of the charge context here, so we
- * don't want to block when potentially sitting on a callstack
- * that holds all kinds of filesystem and mm locks.
- *
- * cgroup1 allows disabling the OOM killer and waiting for outside
- * handling until the charge can succeed; remember the context and put
- * the task to sleep at the end of the page fault when all locks are
- * released.
- *
- * On the other hand, in-kernel OOM killer allows for an async victim
- * memory reclaim (oom_reaper) and that means that we are not solely
- * relying on the oom victim to make a forward progress and we can
- * invoke the oom killer here.
- *
- * Please note that mem_cgroup_out_of_memory might fail to find a
- * victim and then we have to bail out from the charge path.
- */
- if (READ_ONCE(memcg->oom_kill_disable)) {
- if (current->in_user_fault) {
- css_get(&memcg->css);
- current->memcg_in_oom = memcg;
- }
+ if (!memcg1_oom_prepare(memcg, &locked))
return false;
- }
-
- mem_cgroup_mark_under_oom(memcg);
- locked = mem_cgroup_oom_trylock(memcg);
-
- if (locked)
- mem_cgroup_oom_notify(memcg);
-
- mem_cgroup_unmark_under_oom(memcg);
ret = mem_cgroup_out_of_memory(memcg, mask, order);
- if (locked)
- mem_cgroup_oom_unlock(memcg);
+ memcg1_oom_finish(memcg, locked);
return ret;
}
/**
- * mem_cgroup_oom_synchronize - complete memcg OOM handling
- * @handle: actually kill/wait or just clean up the OOM state
- *
- * This has to be called at the end of a page fault if the memcg OOM
- * handler was enabled.
- *
- * Memcg supports userspace OOM handling where failed allocations must
- * sleep on a waitqueue until the userspace task resolves the
- * situation. Sleeping directly in the charge context with all kinds
- * of locks held is not a good idea, instead we remember an OOM state
- * in the task and mem_cgroup_oom_synchronize() has to be called at
- * the end of the page fault to complete the OOM handling.
- *
- * Returns %true if an ongoing memcg OOM situation was detected and
- * completed, %false otherwise.
- */
-bool mem_cgroup_oom_synchronize(bool handle)
-{
- struct mem_cgroup *memcg = current->memcg_in_oom;
- struct oom_wait_info owait;
- bool locked;
-
- /* OOM is global, do not handle */
- if (!memcg)
- return false;
-
- if (!handle)
- goto cleanup;
-
- owait.memcg = memcg;
- owait.wait.flags = 0;
- owait.wait.func = memcg_oom_wake_function;
- owait.wait.private = current;
- INIT_LIST_HEAD(&owait.wait.entry);
-
- prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
- mem_cgroup_mark_under_oom(memcg);
-
- locked = mem_cgroup_oom_trylock(memcg);
-
- if (locked)
- mem_cgroup_oom_notify(memcg);
-
- schedule();
- mem_cgroup_unmark_under_oom(memcg);
- finish_wait(&memcg_oom_waitq, &owait.wait);
-
- if (locked)
- mem_cgroup_oom_unlock(memcg);
-cleanup:
- current->memcg_in_oom = NULL;
- css_put(&memcg->css);
- return true;
-}
-
-/**
* mem_cgroup_get_oom_group - get a memory cgroup to clean up after OOM
* @victim: task to be killed by the OOM killer
* @oom_domain: memcg in case of memcg OOM, NULL in case of system-wide OOM
@@ -2328,99 +1693,16 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg)
pr_cont(" are going to be killed due to memory.oom.group set\n");
}
-/**
- * folio_memcg_lock - Bind a folio to its memcg.
- * @folio: The folio.
- *
- * This function prevents unlocked LRU folios from being moved to
- * another cgroup.
- *
- * It ensures lifetime of the bound memcg. The caller is responsible
- * for the lifetime of the folio.
- */
-void folio_memcg_lock(struct folio *folio)
-{
- struct mem_cgroup *memcg;
- unsigned long flags;
-
- /*
- * The RCU lock is held throughout the transaction. The fast
- * path can get away without acquiring the memcg->move_lock
- * because page moving starts with an RCU grace period.
- */
- rcu_read_lock();
-
- if (mem_cgroup_disabled())
- return;
-again:
- memcg = folio_memcg(folio);
- if (unlikely(!memcg))
- return;
-
-#ifdef CONFIG_PROVE_LOCKING
- local_irq_save(flags);
- might_lock(&memcg->move_lock);
- local_irq_restore(flags);
-#endif
-
- if (atomic_read(&memcg->moving_account) <= 0)
- return;
-
- spin_lock_irqsave(&memcg->move_lock, flags);
- if (memcg != folio_memcg(folio)) {
- spin_unlock_irqrestore(&memcg->move_lock, flags);
- goto again;
- }
-
- /*
- * When charge migration first begins, we can have multiple
- * critical sections holding the fast-path RCU lock and one
- * holding the slowpath move_lock. Track the task who has the
- * move_lock for folio_memcg_unlock().
- */
- memcg->move_lock_task = current;
- memcg->move_lock_flags = flags;
-}
-
-static void __folio_memcg_unlock(struct mem_cgroup *memcg)
-{
- if (memcg && memcg->move_lock_task == current) {
- unsigned long flags = memcg->move_lock_flags;
-
- memcg->move_lock_task = NULL;
- memcg->move_lock_flags = 0;
-
- spin_unlock_irqrestore(&memcg->move_lock, flags);
- }
-
- rcu_read_unlock();
-}
-
-/**
- * folio_memcg_unlock - Release the binding between a folio and its memcg.
- * @folio: The folio.
- *
- * This releases the binding created by folio_memcg_lock(). This does
- * not change the accounting of this folio to its memcg, but it does
- * permit others to change it.
- */
-void folio_memcg_unlock(struct folio *folio)
-{
- __folio_memcg_unlock(folio_memcg(folio));
-}
-
struct memcg_stock_pcp {
local_lock_t stock_lock;
struct mem_cgroup *cached; /* this never be root cgroup */
unsigned int nr_pages;
-#ifdef CONFIG_MEMCG_KMEM
struct obj_cgroup *cached_objcg;
struct pglist_data *cached_pgdat;
unsigned int nr_bytes;
int nr_slab_reclaimable_b;
int nr_slab_unreclaimable_b;
-#endif
struct work_struct work;
unsigned long flags;
@@ -2431,26 +1713,9 @@ static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock) = {
};
static DEFINE_MUTEX(percpu_charge_mutex);
-#ifdef CONFIG_MEMCG_KMEM
static struct obj_cgroup *drain_obj_stock(struct memcg_stock_pcp *stock);
static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
struct mem_cgroup *root_memcg);
-static void memcg_account_kmem(struct mem_cgroup *memcg, int nr_pages);
-
-#else
-static inline struct obj_cgroup *drain_obj_stock(struct memcg_stock_pcp *stock)
-{
- return NULL;
-}
-static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
- struct mem_cgroup *root_memcg)
-{
- return false;
-}
-static void memcg_account_kmem(struct mem_cgroup *memcg, int nr_pages)
-{
-}
-#endif
/**
* consume_stock: Try to consume stocked charge on this cpu.
@@ -2567,7 +1832,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
* Drains all per-CPU charge caches for given root_memcg resp. subtree
* of the hierarchy under it.
*/
-static void drain_all_stock(struct mem_cgroup *root_memcg)
+void drain_all_stock(struct mem_cgroup *root_memcg)
{
int cpu, curcpu;
@@ -2636,7 +1901,8 @@ static unsigned long reclaim_high(struct mem_cgroup *memcg,
psi_memstall_enter(&pflags);
nr_reclaimed += try_to_free_mem_cgroup_pages(memcg, nr_pages,
gfp_mask,
- MEMCG_RECLAIM_MAY_SWAP);
+ MEMCG_RECLAIM_MAY_SWAP,
+ NULL);
psi_memstall_leave(&pflags);
} while ((memcg = parent_mem_cgroup(memcg)) &&
!mem_cgroup_is_root(memcg));
@@ -2887,8 +2153,8 @@ out:
css_put(&memcg->css);
}
-static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
- unsigned int nr_pages)
+int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ unsigned int nr_pages)
{
unsigned int batch = max(MEMCG_CHARGE_BATCH, nr_pages);
int nr_retries = MAX_RECLAIM_RETRIES;
@@ -2942,7 +2208,7 @@ retry:
psi_memstall_enter(&pflags);
nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages,
- gfp_mask, reclaim_options);
+ gfp_mask, reclaim_options, NULL);
psi_memstall_leave(&pflags);
if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
@@ -2971,7 +2237,7 @@ retry:
* At task move, charge accounts can be doubly counted. So, it's
* better to wait until the end of task_move if something is going on.
*/
- if (mem_cgroup_wait_acct_move(mem_over_limit))
+ if (memcg1_wait_acct_move(mem_over_limit))
goto retry;
if (nr_retries--)
@@ -3083,15 +2349,6 @@ done_restock:
return 0;
}
-static inline int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
- unsigned int nr_pages)
-{
- if (mem_cgroup_is_root(memcg))
- return 0;
-
- return try_charge_memcg(memcg, gfp_mask, nr_pages);
-}
-
/**
* mem_cgroup_cancel_charge() - cancel an uncommitted try_charge() call.
* @memcg: memcg previously charged.
@@ -3134,12 +2391,10 @@ void mem_cgroup_commit_charge(struct folio *folio, struct mem_cgroup *memcg)
local_irq_disable();
mem_cgroup_charge_statistics(memcg, folio_nr_pages(folio));
- memcg_check_events(memcg, folio_nid(folio));
+ memcg1_check_events(memcg, folio_nid(folio));
local_irq_enable();
}
-#ifdef CONFIG_MEMCG_KMEM
-
static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
struct pglist_data *pgdat,
enum node_stat_item idx, int nr)
@@ -3367,18 +2622,6 @@ struct obj_cgroup *get_obj_cgroup_from_folio(struct folio *folio)
return objcg;
}
-static void memcg_account_kmem(struct mem_cgroup *memcg, int nr_pages)
-{
- mod_memcg_state(memcg, MEMCG_KMEM, nr_pages);
- if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
- if (nr_pages > 0)
- page_counter_charge(&memcg->kmem, nr_pages);
- else
- page_counter_uncharge(&memcg->kmem, -nr_pages);
- }
-}
-
-
/*
* obj_cgroup_uncharge_pages: uncharge a number of kernel pages from a objcg
* @objcg: object cgroup to uncharge
@@ -3391,7 +2634,8 @@ static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
memcg = get_mem_cgroup_from_objcg(objcg);
- memcg_account_kmem(memcg, -nr_pages);
+ mod_memcg_state(memcg, MEMCG_KMEM, -nr_pages);
+ memcg1_account_kmem(memcg, -nr_pages);
refill_stock(memcg, nr_pages);
css_put(&memcg->css);
@@ -3417,7 +2661,8 @@ static int obj_cgroup_charge_pages(struct obj_cgroup *objcg, gfp_t gfp,
if (ret)
goto out;
- memcg_account_kmem(memcg, nr_pages);
+ mod_memcg_state(memcg, MEMCG_KMEM, nr_pages);
+ memcg1_account_kmem(memcg, nr_pages);
out:
css_put(&memcg->css);
@@ -3570,7 +2815,8 @@ static struct obj_cgroup *drain_obj_stock(struct memcg_stock_pcp *stock)
memcg = get_mem_cgroup_from_objcg(old);
- memcg_account_kmem(memcg, -nr_pages);
+ mod_memcg_state(memcg, MEMCG_KMEM, -nr_pages);
+ memcg1_account_kmem(memcg, -nr_pages);
__refill_stock(memcg, nr_pages);
css_put(&memcg->css);
@@ -3804,10 +3050,9 @@ void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
obj_cgroup_put(objcg);
}
}
-#endif /* CONFIG_MEMCG_KMEM */
/*
- * Because page_memcg(head) is not set on tails, set it now.
+ * Because folio_memcg(head) is not set on tails, set it now.
*/
void split_page_memcg(struct page *head, int old_order, int new_order)
{
@@ -3829,240 +3074,7 @@ void split_page_memcg(struct page *head, int old_order, int new_order)
css_get_many(&memcg->css, old_nr / new_nr - 1);
}
-#ifdef CONFIG_SWAP
-/**
- * mem_cgroup_move_swap_account - move swap charge and swap_cgroup's record.
- * @entry: swap entry to be moved
- * @from: mem_cgroup which the entry is moved from
- * @to: mem_cgroup which the entry is moved to
- *
- * It succeeds only when the swap_cgroup's record for this entry is the same
- * as the mem_cgroup's id of @from.
- *
- * Returns 0 on success, -EINVAL on failure.
- *
- * The caller must have charged to @to, IOW, called page_counter_charge() about
- * both res and memsw, and called css_get().
- */
-static int mem_cgroup_move_swap_account(swp_entry_t entry,
- struct mem_cgroup *from, struct mem_cgroup *to)
-{
- unsigned short old_id, new_id;
-
- old_id = mem_cgroup_id(from);
- new_id = mem_cgroup_id(to);
-
- if (swap_cgroup_cmpxchg(entry, old_id, new_id) == old_id) {
- mod_memcg_state(from, MEMCG_SWAP, -1);
- mod_memcg_state(to, MEMCG_SWAP, 1);
- return 0;
- }
- return -EINVAL;
-}
-#else
-static inline int mem_cgroup_move_swap_account(swp_entry_t entry,
- struct mem_cgroup *from, struct mem_cgroup *to)
-{
- return -EINVAL;
-}
-#endif
-
-static DEFINE_MUTEX(memcg_max_mutex);
-
-static int mem_cgroup_resize_max(struct mem_cgroup *memcg,
- unsigned long max, bool memsw)
-{
- bool enlarge = false;
- bool drained = false;
- int ret;
- bool limits_invariant;
- struct page_counter *counter = memsw ? &memcg->memsw : &memcg->memory;
-
- do {
- if (signal_pending(current)) {
- ret = -EINTR;
- break;
- }
-
- mutex_lock(&memcg_max_mutex);
- /*
- * Make sure that the new limit (memsw or memory limit) doesn't
- * break our basic invariant rule memory.max <= memsw.max.
- */
- limits_invariant = memsw ? max >= READ_ONCE(memcg->memory.max) :
- max <= memcg->memsw.max;
- if (!limits_invariant) {
- mutex_unlock(&memcg_max_mutex);
- ret = -EINVAL;
- break;
- }
- if (max > counter->max)
- enlarge = true;
- ret = page_counter_set_max(counter, max);
- mutex_unlock(&memcg_max_mutex);
-
- if (!ret)
- break;
-
- if (!drained) {
- drain_all_stock(memcg);
- drained = true;
- continue;
- }
-
- if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL,
- memsw ? 0 : MEMCG_RECLAIM_MAY_SWAP)) {
- ret = -EBUSY;
- break;
- }
- } while (true);
-
- if (!ret && enlarge)
- memcg_oom_recover(memcg);
-
- return ret;
-}
-
-unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
- gfp_t gfp_mask,
- unsigned long *total_scanned)
-{
- unsigned long nr_reclaimed = 0;
- struct mem_cgroup_per_node *mz, *next_mz = NULL;
- unsigned long reclaimed;
- int loop = 0;
- struct mem_cgroup_tree_per_node *mctz;
- unsigned long excess;
-
- if (lru_gen_enabled())
- return 0;
-
- if (order > 0)
- return 0;
-
- mctz = soft_limit_tree.rb_tree_per_node[pgdat->node_id];
-
- /*
- * Do not even bother to check the largest node if the root
- * is empty. Do it lockless to prevent lock bouncing. Races
- * are acceptable as soft limit is best effort anyway.
- */
- if (!mctz || RB_EMPTY_ROOT(&mctz->rb_root))
- return 0;
-
- /*
- * This loop can run a while, specially if mem_cgroup's continuously
- * keep exceeding their soft limit and putting the system under
- * pressure
- */
- do {
- if (next_mz)
- mz = next_mz;
- else
- mz = mem_cgroup_largest_soft_limit_node(mctz);
- if (!mz)
- break;
-
- reclaimed = mem_cgroup_soft_reclaim(mz->memcg, pgdat,
- gfp_mask, total_scanned);
- nr_reclaimed += reclaimed;
- spin_lock_irq(&mctz->lock);
-
- /*
- * If we failed to reclaim anything from this memory cgroup
- * it is time to move on to the next cgroup
- */
- next_mz = NULL;
- if (!reclaimed)
- next_mz = __mem_cgroup_largest_soft_limit_node(mctz);
-
- excess = soft_limit_excess(mz->memcg);
- /*
- * One school of thought says that we should not add
- * back the node to the tree if reclaim returns 0.
- * But our reclaim could return 0, simply because due
- * to priority we are exposing a smaller subset of
- * memory to reclaim from. Consider this as a longer
- * term TODO.
- */
- /* If excess == 0, no tree ops */
- __mem_cgroup_insert_exceeded(mz, mctz, excess);
- spin_unlock_irq(&mctz->lock);
- css_put(&mz->memcg->css);
- loop++;
- /*
- * Could not reclaim anything and there are no more
- * mem cgroups to try or we seem to be looping without
- * reclaiming anything.
- */
- if (!nr_reclaimed &&
- (next_mz == NULL ||
- loop > MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS))
- break;
- } while (!nr_reclaimed);
- if (next_mz)
- css_put(&next_mz->memcg->css);
- return nr_reclaimed;
-}
-
-/*
- * Reclaims as many pages from the given memcg as possible.
- *
- * Caller is responsible for holding css reference for memcg.
- */
-static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
-{
- int nr_retries = MAX_RECLAIM_RETRIES;
-
- /* we call try-to-free pages for make this cgroup empty */
- lru_add_drain_all();
-
- drain_all_stock(memcg);
-
- /* try to free all pages in this cgroup */
- while (nr_retries && page_counter_read(&memcg->memory)) {
- if (signal_pending(current))
- return -EINTR;
-
- if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL,
- MEMCG_RECLAIM_MAY_SWAP))
- nr_retries--;
- }
-
- return 0;
-}
-
-static ssize_t mem_cgroup_force_empty_write(struct kernfs_open_file *of,
- char *buf, size_t nbytes,
- loff_t off)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
-
- if (mem_cgroup_is_root(memcg))
- return -EINVAL;
- return mem_cgroup_force_empty(memcg) ?: nbytes;
-}
-
-static u64 mem_cgroup_hierarchy_read(struct cgroup_subsys_state *css,
- struct cftype *cft)
-{
- return 1;
-}
-
-static int mem_cgroup_hierarchy_write(struct cgroup_subsys_state *css,
- struct cftype *cft, u64 val)
-{
- if (val == 1)
- return 0;
-
- pr_warn_once("Non-hierarchical mode is deprecated. "
- "Please report your usecase to linux-mm@kvack.org if you "
- "depend on this functionality.\n");
-
- return -EINVAL;
-}
-
-static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
+unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
{
unsigned long val;
@@ -4084,68 +3096,6 @@ static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
return val;
}
-enum {
- RES_USAGE,
- RES_LIMIT,
- RES_MAX_USAGE,
- RES_FAILCNT,
- RES_SOFT_LIMIT,
-};
-
-static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
- struct cftype *cft)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_css(css);
- struct page_counter *counter;
-
- switch (MEMFILE_TYPE(cft->private)) {
- case _MEM:
- counter = &memcg->memory;
- break;
- case _MEMSWAP:
- counter = &memcg->memsw;
- break;
- case _KMEM:
- counter = &memcg->kmem;
- break;
- case _TCP:
- counter = &memcg->tcpmem;
- break;
- default:
- BUG();
- }
-
- switch (MEMFILE_ATTR(cft->private)) {
- case RES_USAGE:
- if (counter == &memcg->memory)
- return (u64)mem_cgroup_usage(memcg, false) * PAGE_SIZE;
- if (counter == &memcg->memsw)
- return (u64)mem_cgroup_usage(memcg, true) * PAGE_SIZE;
- return (u64)page_counter_read(counter) * PAGE_SIZE;
- case RES_LIMIT:
- return (u64)counter->max * PAGE_SIZE;
- case RES_MAX_USAGE:
- return (u64)counter->watermark * PAGE_SIZE;
- case RES_FAILCNT:
- return counter->failcnt;
- case RES_SOFT_LIMIT:
- return (u64)READ_ONCE(memcg->soft_limit) * PAGE_SIZE;
- default:
- BUG();
- }
-}
-
-/*
- * This function doesn't do anything useful. Its only job is to provide a read
- * handler for a file so that cgroup_file_mode() will add read permissions.
- */
-static int mem_cgroup_dummy_seq_show(__always_unused struct seq_file *m,
- __always_unused void *v)
-{
- return -EINVAL;
-}
-
-#ifdef CONFIG_MEMCG_KMEM
static int memcg_online_kmem(struct mem_cgroup *memcg)
{
struct obj_cgroup *objcg;
@@ -4196,760 +3146,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
*/
memcg_reparent_list_lrus(memcg, parent);
}
-#else
-static int memcg_online_kmem(struct mem_cgroup *memcg)
-{
- return 0;
-}
-static void memcg_offline_kmem(struct mem_cgroup *memcg)
-{
-}
-#endif /* CONFIG_MEMCG_KMEM */
-
-static int memcg_update_tcp_max(struct mem_cgroup *memcg, unsigned long max)
-{
- int ret;
-
- mutex_lock(&memcg_max_mutex);
-
- ret = page_counter_set_max(&memcg->tcpmem, max);
- if (ret)
- goto out;
-
- if (!memcg->tcpmem_active) {
- /*
- * The active flag needs to be written after the static_key
- * update. This is what guarantees that the socket activation
- * function is the last one to run. See mem_cgroup_sk_alloc()
- * for details, and note that we don't mark any socket as
- * belonging to this memcg until that flag is up.
- *
- * We need to do this, because static_keys will span multiple
- * sites, but we can't control their order. If we mark a socket
- * as accounted, but the accounting functions are not patched in
- * yet, we'll lose accounting.
- *
- * We never race with the readers in mem_cgroup_sk_alloc(),
- * because when this value change, the code to process it is not
- * patched in yet.
- */
- static_branch_inc(&memcg_sockets_enabled_key);
- memcg->tcpmem_active = true;
- }
-out:
- mutex_unlock(&memcg_max_mutex);
- return ret;
-}
-
-/*
- * The user of this function is...
- * RES_LIMIT.
- */
-static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
- char *buf, size_t nbytes, loff_t off)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
- unsigned long nr_pages;
- int ret;
-
- buf = strstrip(buf);
- ret = page_counter_memparse(buf, "-1", &nr_pages);
- if (ret)
- return ret;
-
- switch (MEMFILE_ATTR(of_cft(of)->private)) {
- case RES_LIMIT:
- if (mem_cgroup_is_root(memcg)) { /* Can't set limit on root */
- ret = -EINVAL;
- break;
- }
- switch (MEMFILE_TYPE(of_cft(of)->private)) {
- case _MEM:
- ret = mem_cgroup_resize_max(memcg, nr_pages, false);
- break;
- case _MEMSWAP:
- ret = mem_cgroup_resize_max(memcg, nr_pages, true);
- break;
- case _KMEM:
- pr_warn_once("kmem.limit_in_bytes is deprecated and will be removed. "
- "Writing any value to this file has no effect. "
- "Please report your usecase to linux-mm@kvack.org if you "
- "depend on this functionality.\n");
- ret = 0;
- break;
- case _TCP:
- ret = memcg_update_tcp_max(memcg, nr_pages);
- break;
- }
- break;
- case RES_SOFT_LIMIT:
- if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
- ret = -EOPNOTSUPP;
- } else {
- WRITE_ONCE(memcg->soft_limit, nr_pages);
- ret = 0;
- }
- break;
- }
- return ret ?: nbytes;
-}
-
-static ssize_t mem_cgroup_reset(struct kernfs_open_file *of, char *buf,
- size_t nbytes, loff_t off)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
- struct page_counter *counter;
-
- switch (MEMFILE_TYPE(of_cft(of)->private)) {
- case _MEM:
- counter = &memcg->memory;
- break;
- case _MEMSWAP:
- counter = &memcg->memsw;
- break;
- case _KMEM:
- counter = &memcg->kmem;
- break;
- case _TCP:
- counter = &memcg->tcpmem;
- break;
- default:
- BUG();
- }
-
- switch (MEMFILE_ATTR(of_cft(of)->private)) {
- case RES_MAX_USAGE:
- page_counter_reset_watermark(counter);
- break;
- case RES_FAILCNT:
- counter->failcnt = 0;
- break;
- default:
- BUG();
- }
-
- return nbytes;
-}
-
-static u64 mem_cgroup_move_charge_read(struct cgroup_subsys_state *css,
- struct cftype *cft)
-{
- return mem_cgroup_from_css(css)->move_charge_at_immigrate;
-}
-
-#ifdef CONFIG_MMU
-static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
- struct cftype *cft, u64 val)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_css(css);
-
- pr_warn_once("Cgroup memory moving (move_charge_at_immigrate) is deprecated. "
- "Please report your usecase to linux-mm@kvack.org if you "
- "depend on this functionality.\n");
-
- if (val & ~MOVE_MASK)
- return -EINVAL;
-
- /*
- * No kind of locking is needed in here, because ->can_attach() will
- * check this value once in the beginning of the process, and then carry
- * on with stale data. This means that changes to this value will only
- * affect task migrations starting after the change.
- */
- memcg->move_charge_at_immigrate = val;
- return 0;
-}
-#else
-static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
- struct cftype *cft, u64 val)
-{
- return -ENOSYS;
-}
-#endif
-
-#ifdef CONFIG_NUMA
-
-#define LRU_ALL_FILE (BIT(LRU_INACTIVE_FILE) | BIT(LRU_ACTIVE_FILE))
-#define LRU_ALL_ANON (BIT(LRU_INACTIVE_ANON) | BIT(LRU_ACTIVE_ANON))
-#define LRU_ALL ((1 << NR_LRU_LISTS) - 1)
-
-static unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
- int nid, unsigned int lru_mask, bool tree)
-{
- struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
- unsigned long nr = 0;
- enum lru_list lru;
-
- VM_BUG_ON((unsigned)nid >= nr_node_ids);
-
- for_each_lru(lru) {
- if (!(BIT(lru) & lru_mask))
- continue;
- if (tree)
- nr += lruvec_page_state(lruvec, NR_LRU_BASE + lru);
- else
- nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
- }
- return nr;
-}
-
-static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg,
- unsigned int lru_mask,
- bool tree)
-{
- unsigned long nr = 0;
- enum lru_list lru;
-
- for_each_lru(lru) {
- if (!(BIT(lru) & lru_mask))
- continue;
- if (tree)
- nr += memcg_page_state(memcg, NR_LRU_BASE + lru);
- else
- nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
- }
- return nr;
-}
-
-static int memcg_numa_stat_show(struct seq_file *m, void *v)
-{
- struct numa_stat {
- const char *name;
- unsigned int lru_mask;
- };
-
- static const struct numa_stat stats[] = {
- { "total", LRU_ALL },
- { "file", LRU_ALL_FILE },
- { "anon", LRU_ALL_ANON },
- { "unevictable", BIT(LRU_UNEVICTABLE) },
- };
- const struct numa_stat *stat;
- int nid;
- struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
-
- mem_cgroup_flush_stats(memcg);
-
- for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
- seq_printf(m, "%s=%lu", stat->name,
- mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
- false));
- for_each_node_state(nid, N_MEMORY)
- seq_printf(m, " N%d=%lu", nid,
- mem_cgroup_node_nr_lru_pages(memcg, nid,
- stat->lru_mask, false));
- seq_putc(m, '\n');
- }
-
- for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
-
- seq_printf(m, "hierarchical_%s=%lu", stat->name,
- mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
- true));
- for_each_node_state(nid, N_MEMORY)
- seq_printf(m, " N%d=%lu", nid,
- mem_cgroup_node_nr_lru_pages(memcg, nid,
- stat->lru_mask, true));
- seq_putc(m, '\n');
- }
-
- return 0;
-}
-#endif /* CONFIG_NUMA */
-
-static const unsigned int memcg1_stats[] = {
- NR_FILE_PAGES,
- NR_ANON_MAPPED,
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- NR_ANON_THPS,
-#endif
- NR_SHMEM,
- NR_FILE_MAPPED,
- NR_FILE_DIRTY,
- NR_WRITEBACK,
- WORKINGSET_REFAULT_ANON,
- WORKINGSET_REFAULT_FILE,
-#ifdef CONFIG_SWAP
- MEMCG_SWAP,
- NR_SWAPCACHE,
-#endif
-};
-
-static const char *const memcg1_stat_names[] = {
- "cache",
- "rss",
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- "rss_huge",
-#endif
- "shmem",
- "mapped_file",
- "dirty",
- "writeback",
- "workingset_refault_anon",
- "workingset_refault_file",
-#ifdef CONFIG_SWAP
- "swap",
- "swapcached",
-#endif
-};
-
-/* Universal VM events cgroup1 shows, original sort order */
-static const unsigned int memcg1_events[] = {
- PGPGIN,
- PGPGOUT,
- PGFAULT,
- PGMAJFAULT,
-};
-
-static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
-{
- unsigned long memory, memsw;
- struct mem_cgroup *mi;
- unsigned int i;
-
- BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
-
- mem_cgroup_flush_stats(memcg);
-
- for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
- unsigned long nr;
-
- nr = memcg_page_state_local_output(memcg, memcg1_stats[i]);
- seq_buf_printf(s, "%s %lu\n", memcg1_stat_names[i], nr);
- }
-
- for (i = 0; i < ARRAY_SIZE(memcg1_events); i++)
- seq_buf_printf(s, "%s %lu\n", vm_event_name(memcg1_events[i]),
- memcg_events_local(memcg, memcg1_events[i]));
-
- for (i = 0; i < NR_LRU_LISTS; i++)
- seq_buf_printf(s, "%s %lu\n", lru_list_name(i),
- memcg_page_state_local(memcg, NR_LRU_BASE + i) *
- PAGE_SIZE);
-
- /* Hierarchical information */
- memory = memsw = PAGE_COUNTER_MAX;
- for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
- memory = min(memory, READ_ONCE(mi->memory.max));
- memsw = min(memsw, READ_ONCE(mi->memsw.max));
- }
- seq_buf_printf(s, "hierarchical_memory_limit %llu\n",
- (u64)memory * PAGE_SIZE);
- seq_buf_printf(s, "hierarchical_memsw_limit %llu\n",
- (u64)memsw * PAGE_SIZE);
-
- for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
- unsigned long nr;
-
- nr = memcg_page_state_output(memcg, memcg1_stats[i]);
- seq_buf_printf(s, "total_%s %llu\n", memcg1_stat_names[i],
- (u64)nr);
- }
-
- for (i = 0; i < ARRAY_SIZE(memcg1_events); i++)
- seq_buf_printf(s, "total_%s %llu\n",
- vm_event_name(memcg1_events[i]),
- (u64)memcg_events(memcg, memcg1_events[i]));
-
- for (i = 0; i < NR_LRU_LISTS; i++)
- seq_buf_printf(s, "total_%s %llu\n", lru_list_name(i),
- (u64)memcg_page_state(memcg, NR_LRU_BASE + i) *
- PAGE_SIZE);
-
-#ifdef CONFIG_DEBUG_VM
- {
- pg_data_t *pgdat;
- struct mem_cgroup_per_node *mz;
- unsigned long anon_cost = 0;
- unsigned long file_cost = 0;
-
- for_each_online_pgdat(pgdat) {
- mz = memcg->nodeinfo[pgdat->node_id];
-
- anon_cost += mz->lruvec.anon_cost;
- file_cost += mz->lruvec.file_cost;
- }
- seq_buf_printf(s, "anon_cost %lu\n", anon_cost);
- seq_buf_printf(s, "file_cost %lu\n", file_cost);
- }
-#endif
-}
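For orientation, a read of the cgroup v1 memory.stat file produced by this formatter starts roughly like the following; the values are invented for illustration, and the per-LRU sizes, the hierarchical_memory_limit/hierarchical_memsw_limit lines, the total_* counterparts and (with CONFIG_DEBUG_VM) anon_cost/file_cost follow in the same order as the loops above:

cache 1065353216
rss 28672000
rss_huge 0
shmem 0
mapped_file 4096000
dirty 135168
writeback 0
workingset_refault_anon 0
workingset_refault_file 12
swap 0
swapcached 0
pgpgin 123456
pgpgout 98765
pgfault 455000
pgmajfault 12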
-
-static u64 mem_cgroup_swappiness_read(struct cgroup_subsys_state *css,
- struct cftype *cft)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_css(css);
-
- return mem_cgroup_swappiness(memcg);
-}
-
-static int mem_cgroup_swappiness_write(struct cgroup_subsys_state *css,
- struct cftype *cft, u64 val)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_css(css);
-
- if (val > 200)
- return -EINVAL;
-
- if (!mem_cgroup_is_root(memcg))
- WRITE_ONCE(memcg->swappiness, val);
- else
- WRITE_ONCE(vm_swappiness, val);
-
- return 0;
-}
-
-static void __mem_cgroup_threshold(struct mem_cgroup *memcg, bool swap)
-{
- struct mem_cgroup_threshold_ary *t;
- unsigned long usage;
- int i;
-
- rcu_read_lock();
- if (!swap)
- t = rcu_dereference(memcg->thresholds.primary);
- else
- t = rcu_dereference(memcg->memsw_thresholds.primary);
-
- if (!t)
- goto unlock;
-
- usage = mem_cgroup_usage(memcg, swap);
-
- /*
- * current_threshold points to the threshold just below or equal to usage.
- * If that is not the case, a threshold was crossed after the last
- * call of __mem_cgroup_threshold().
- */
- i = t->current_threshold;
-
- /*
- * Iterate backward over array of thresholds starting from
- * current_threshold and check if a threshold is crossed.
- * If none of the thresholds below usage is crossed, we read
- * only one element of the array here.
- */
- for (; i >= 0 && unlikely(t->entries[i].threshold > usage); i--)
- eventfd_signal(t->entries[i].eventfd);
-
- /* i = current_threshold + 1 */
- i++;
-
- /*
- * Iterate forward over array of thresholds starting from
- * current_threshold+1 and check if a threshold is crossed.
- * If none of the thresholds above usage is crossed, we read
- * only one element of the array here.
- */
- for (; i < t->size && unlikely(t->entries[i].threshold <= usage); i++)
- eventfd_signal(t->entries[i].eventfd);
-
- /* Update current_threshold */
- t->current_threshold = i - 1;
-unlock:
- rcu_read_unlock();
-}
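A minimal standalone sketch of the same two-direction scan, using hypothetical stand-in types rather than the kernel structures: the array is sorted, *cur caches the index of the threshold at or below usage from the previous call, and anything crossed since then is signalled.

struct thr {
	unsigned long threshold;
	void (*notify)(int idx);	/* stand-in for eventfd_signal() */
};

static void scan_thresholds(struct thr *t, int size, int *cur,
			    unsigned long usage)
{
	int i = *cur;

	/* usage dropped: signal thresholds that now sit above usage */
	for (; i >= 0 && t[i].threshold > usage; i--)
		t[i].notify(i);

	/* usage grew: signal thresholds that now sit at or below usage */
	for (i++; i < size && t[i].threshold <= usage; i++)
		t[i].notify(i);

	/* cache the index of the threshold just below or equal to usage */
	*cur = i - 1;
}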
-
-static void mem_cgroup_threshold(struct mem_cgroup *memcg)
-{
- while (memcg) {
- __mem_cgroup_threshold(memcg, false);
- if (do_memsw_account())
- __mem_cgroup_threshold(memcg, true);
-
- memcg = parent_mem_cgroup(memcg);
- }
-}
-
-static int compare_thresholds(const void *a, const void *b)
-{
- const struct mem_cgroup_threshold *_a = a;
- const struct mem_cgroup_threshold *_b = b;
-
- if (_a->threshold > _b->threshold)
- return 1;
-
- if (_a->threshold < _b->threshold)
- return -1;
-
- return 0;
-}
-
-static int mem_cgroup_oom_notify_cb(struct mem_cgroup *memcg)
-{
- struct mem_cgroup_eventfd_list *ev;
-
- spin_lock(&memcg_oom_lock);
-
- list_for_each_entry(ev, &memcg->oom_notify, list)
- eventfd_signal(ev->eventfd);
-
- spin_unlock(&memcg_oom_lock);
- return 0;
-}
-
-static void mem_cgroup_oom_notify(struct mem_cgroup *memcg)
-{
- struct mem_cgroup *iter;
-
- for_each_mem_cgroup_tree(iter, memcg)
- mem_cgroup_oom_notify_cb(iter);
-}
-
-static int __mem_cgroup_usage_register_event(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd, const char *args, enum res_type type)
-{
- struct mem_cgroup_thresholds *thresholds;
- struct mem_cgroup_threshold_ary *new;
- unsigned long threshold;
- unsigned long usage;
- int i, size, ret;
-
- ret = page_counter_memparse(args, "-1", &threshold);
- if (ret)
- return ret;
-
- mutex_lock(&memcg->thresholds_lock);
-
- if (type == _MEM) {
- thresholds = &memcg->thresholds;
- usage = mem_cgroup_usage(memcg, false);
- } else if (type == _MEMSWAP) {
- thresholds = &memcg->memsw_thresholds;
- usage = mem_cgroup_usage(memcg, true);
- } else
- BUG();
-
- /* Check if a threshold was crossed before adding a new one */
- if (thresholds->primary)
- __mem_cgroup_threshold(memcg, type == _MEMSWAP);
-
- size = thresholds->primary ? thresholds->primary->size + 1 : 1;
-
- /* Allocate memory for new array of thresholds */
- new = kmalloc(struct_size(new, entries, size), GFP_KERNEL);
- if (!new) {
- ret = -ENOMEM;
- goto unlock;
- }
- new->size = size;
-
- /* Copy thresholds (if any) to new array */
- if (thresholds->primary)
- memcpy(new->entries, thresholds->primary->entries,
- flex_array_size(new, entries, size - 1));
-
- /* Add new threshold */
- new->entries[size - 1].eventfd = eventfd;
- new->entries[size - 1].threshold = threshold;
-
- /* Sort thresholds. Registering a new threshold isn't time-critical */
- sort(new->entries, size, sizeof(*new->entries),
- compare_thresholds, NULL);
-
- /* Find current threshold */
- new->current_threshold = -1;
- for (i = 0; i < size; i++) {
- if (new->entries[i].threshold <= usage) {
- /*
- * new->current_threshold will not be used until
- * rcu_assign_pointer(), so it's safe to increment
- * it here.
- */
- ++new->current_threshold;
- } else
- break;
- }
-
- /* Free old spare buffer and save old primary buffer as spare */
- kfree(thresholds->spare);
- thresholds->spare = thresholds->primary;
-
- rcu_assign_pointer(thresholds->primary, new);
-
- /* To be sure that nobody uses thresholds */
- synchronize_rcu();
-
-unlock:
- mutex_unlock(&memcg->thresholds_lock);
-
- return ret;
-}
-
-static int mem_cgroup_usage_register_event(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd, const char *args)
-{
- return __mem_cgroup_usage_register_event(memcg, eventfd, args, _MEM);
-}
-
-static int memsw_cgroup_usage_register_event(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd, const char *args)
-{
- return __mem_cgroup_usage_register_event(memcg, eventfd, args, _MEMSWAP);
-}
-
-static void __mem_cgroup_usage_unregister_event(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd, enum res_type type)
-{
- struct mem_cgroup_thresholds *thresholds;
- struct mem_cgroup_threshold_ary *new;
- unsigned long usage;
- int i, j, size, entries;
-
- mutex_lock(&memcg->thresholds_lock);
-
- if (type == _MEM) {
- thresholds = &memcg->thresholds;
- usage = mem_cgroup_usage(memcg, false);
- } else if (type == _MEMSWAP) {
- thresholds = &memcg->memsw_thresholds;
- usage = mem_cgroup_usage(memcg, true);
- } else
- BUG();
-
- if (!thresholds->primary)
- goto unlock;
-
- /* Check if a threshold was crossed before removing */
- __mem_cgroup_threshold(memcg, type == _MEMSWAP);
-
- /* Calculate the new number of thresholds */
- size = entries = 0;
- for (i = 0; i < thresholds->primary->size; i++) {
- if (thresholds->primary->entries[i].eventfd != eventfd)
- size++;
- else
- entries++;
- }
-
- new = thresholds->spare;
-
- /* If no items related to eventfd have been cleared, nothing to do */
- if (!entries)
- goto unlock;
-
- /* Set thresholds array to NULL if we don't have thresholds */
- if (!size) {
- kfree(new);
- new = NULL;
- goto swap_buffers;
- }
-
- new->size = size;
-
- /* Copy thresholds and find current threshold */
- new->current_threshold = -1;
- for (i = 0, j = 0; i < thresholds->primary->size; i++) {
- if (thresholds->primary->entries[i].eventfd == eventfd)
- continue;
-
- new->entries[j] = thresholds->primary->entries[i];
- if (new->entries[j].threshold <= usage) {
- /*
- * new->current_threshold will not be used
- * until rcu_assign_pointer(), so it's safe to increment
- * it here.
- */
- ++new->current_threshold;
- }
- j++;
- }
-
-swap_buffers:
- /* Swap primary and spare array */
- thresholds->spare = thresholds->primary;
-
- rcu_assign_pointer(thresholds->primary, new);
-
- /* To be sure that nobody uses thresholds */
- synchronize_rcu();
-
- /* If all events are unregistered, free the spare array */
- if (!new) {
- kfree(thresholds->spare);
- thresholds->spare = NULL;
- }
-unlock:
- mutex_unlock(&memcg->thresholds_lock);
-}
-
-static void mem_cgroup_usage_unregister_event(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd)
-{
- return __mem_cgroup_usage_unregister_event(memcg, eventfd, _MEM);
-}
-
-static void memsw_cgroup_usage_unregister_event(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd)
-{
- return __mem_cgroup_usage_unregister_event(memcg, eventfd, _MEMSWAP);
-}
-
-static int mem_cgroup_oom_register_event(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd, const char *args)
-{
- struct mem_cgroup_eventfd_list *event;
-
- event = kmalloc(sizeof(*event), GFP_KERNEL);
- if (!event)
- return -ENOMEM;
-
- spin_lock(&memcg_oom_lock);
-
- event->eventfd = eventfd;
- list_add(&event->list, &memcg->oom_notify);
-
- /* already in OOM ? */
- if (memcg->under_oom)
- eventfd_signal(eventfd);
- spin_unlock(&memcg_oom_lock);
-
- return 0;
-}
-
-static void mem_cgroup_oom_unregister_event(struct mem_cgroup *memcg,
- struct eventfd_ctx *eventfd)
-{
- struct mem_cgroup_eventfd_list *ev, *tmp;
-
- spin_lock(&memcg_oom_lock);
-
- list_for_each_entry_safe(ev, tmp, &memcg->oom_notify, list) {
- if (ev->eventfd == eventfd) {
- list_del(&ev->list);
- kfree(ev);
- }
- }
-
- spin_unlock(&memcg_oom_lock);
-}
-
-static int mem_cgroup_oom_control_read(struct seq_file *sf, void *v)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_seq(sf);
-
- seq_printf(sf, "oom_kill_disable %d\n", READ_ONCE(memcg->oom_kill_disable));
- seq_printf(sf, "under_oom %d\n", (bool)memcg->under_oom);
- seq_printf(sf, "oom_kill %lu\n",
- atomic_long_read(&memcg->memory_events[MEMCG_OOM_KILL]));
- return 0;
-}
-
-static int mem_cgroup_oom_control_write(struct cgroup_subsys_state *css,
- struct cftype *cft, u64 val)
-{
- struct mem_cgroup *memcg = mem_cgroup_from_css(css);
-
- /* cannot set to root cgroup and only 0 and 1 are allowed */
- if (mem_cgroup_is_root(memcg) || !((val == 0) || (val == 1)))
- return -EINVAL;
-
- WRITE_ONCE(memcg->oom_kill_disable, val);
- if (!val)
- memcg_oom_recover(memcg);
-
- return 0;
-}
#ifdef CONFIG_CGROUP_WRITEBACK
@@ -5165,384 +3361,6 @@ static void memcg_wb_domain_size_changed(struct mem_cgroup *memcg)
#endif /* CONFIG_CGROUP_WRITEBACK */
/*
- * DO NOT USE IN NEW FILES.
- *
- * "cgroup.event_control" implementation.
- *
- * This is way over-engineered. It tries to support fully configurable
- * events for each user. Such a level of flexibility is completely
- * unnecessary, especially in light of the planned unified hierarchy.
- *
- * Please deprecate this and replace with something simpler if at all
- * possible.
- */
-
-/*
- * Unregister event and free resources.
- *
- * Gets called from workqueue.
- */
-static void memcg_event_remove(struct work_struct *work)
-{
- struct mem_cgroup_event *event =
- container_of(work, struct mem_cgroup_event, remove);
- struct mem_cgroup *memcg = event->memcg;
-
- remove_wait_queue(event->wqh, &event->wait);
-
- event->unregister_event(memcg, event->eventfd);
-
- /* Notify userspace the event is going away. */
- eventfd_signal(event->eventfd);
-
- eventfd_ctx_put(event->eventfd);
- kfree(event);
- css_put(&memcg->css);
-}
-
-/*
- * Gets called on EPOLLHUP on eventfd when user closes it.
- *
- * Called with wqh->lock held and interrupts disabled.
- */
-static int memcg_event_wake(wait_queue_entry_t *wait, unsigned mode,
- int sync, void *key)
-{
- struct mem_cgroup_event *event =
- container_of(wait, struct mem_cgroup_event, wait);
- struct mem_cgroup *memcg = event->memcg;
- __poll_t flags = key_to_poll(key);
-
- if (flags & EPOLLHUP) {
- /*
- * If the event has been detached at cgroup removal, we
- * can simply return knowing the other side will cleanup
- * for us.
- *
- * We can't race against event freeing since the other
- * side will require wqh->lock via remove_wait_queue(),
- * which we hold.
- */
- spin_lock(&memcg->event_list_lock);
- if (!list_empty(&event->list)) {
- list_del_init(&event->list);
- /*
- * We are in atomic context, but cgroup_event_remove()
- * may sleep, so we have to call it in workqueue.
- */
- schedule_work(&event->remove);
- }
- spin_unlock(&memcg->event_list_lock);
- }
-
- return 0;
-}
-
-static void memcg_event_ptable_queue_proc(struct file *file,
- wait_queue_head_t *wqh, poll_table *pt)
-{
- struct mem_cgroup_event *event =
- container_of(pt, struct mem_cgroup_event, pt);
-
- event->wqh = wqh;
- add_wait_queue(wqh, &event->wait);
-}
-
-/*
- * DO NOT USE IN NEW FILES.
- *
- * Parse input and register new cgroup event handler.
- *
- * Input must be in format '<event_fd> <control_fd> <args>'.
- * Interpretation of args is defined by control file implementation.
- */
-static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
- char *buf, size_t nbytes, loff_t off)
-{
- struct cgroup_subsys_state *css = of_css(of);
- struct mem_cgroup *memcg = mem_cgroup_from_css(css);
- struct mem_cgroup_event *event;
- struct cgroup_subsys_state *cfile_css;
- unsigned int efd, cfd;
- struct fd efile;
- struct fd cfile;
- struct dentry *cdentry;
- const char *name;
- char *endp;
- int ret;
-
- if (IS_ENABLED(CONFIG_PREEMPT_RT))
- return -EOPNOTSUPP;
-
- buf = strstrip(buf);
-
- efd = simple_strtoul(buf, &endp, 10);
- if (*endp != ' ')
- return -EINVAL;
- buf = endp + 1;
-
- cfd = simple_strtoul(buf, &endp, 10);
- if ((*endp != ' ') && (*endp != '\0'))
- return -EINVAL;
- buf = endp + 1;
-
- event = kzalloc(sizeof(*event), GFP_KERNEL);
- if (!event)
- return -ENOMEM;
-
- event->memcg = memcg;
- INIT_LIST_HEAD(&event->list);
- init_poll_funcptr(&event->pt, memcg_event_ptable_queue_proc);
- init_waitqueue_func_entry(&event->wait, memcg_event_wake);
- INIT_WORK(&event->remove, memcg_event_remove);
-
- efile = fdget(efd);
- if (!efile.file) {
- ret = -EBADF;
- goto out_kfree;
- }
-
- event->eventfd = eventfd_ctx_fileget(efile.file);
- if (IS_ERR(event->eventfd)) {
- ret = PTR_ERR(event->eventfd);
- goto out_put_efile;
- }
-
- cfile = fdget(cfd);
- if (!cfile.file) {
- ret = -EBADF;
- goto out_put_eventfd;
- }
-
- /* the process needs read permission on the control file */
- /* AV: shouldn't we check that it's been opened for read instead? */
- ret = file_permission(cfile.file, MAY_READ);
- if (ret < 0)
- goto out_put_cfile;
-
- /*
- * The control file must be a regular cgroup1 file. As a regular cgroup
- * file can't be renamed, it's safe to access its name afterwards.
- */
- cdentry = cfile.file->f_path.dentry;
- if (cdentry->d_sb->s_type != &cgroup_fs_type || !d_is_reg(cdentry)) {
- ret = -EINVAL;
- goto out_put_cfile;
- }
-
- /*
- * Determine the event callbacks and set them in @event. This used
- * to be done via struct cftype but cgroup core no longer knows
- * about these events. The following is crude but the whole thing
- * is for compatibility anyway.
- *
- * DO NOT ADD NEW FILES.
- */
- name = cdentry->d_name.name;
-
- if (!strcmp(name, "memory.usage_in_bytes")) {
- event->register_event = mem_cgroup_usage_register_event;
- event->unregister_event = mem_cgroup_usage_unregister_event;
- } else if (!strcmp(name, "memory.oom_control")) {
- event->register_event = mem_cgroup_oom_register_event;
- event->unregister_event = mem_cgroup_oom_unregister_event;
- } else if (!strcmp(name, "memory.pressure_level")) {
- event->register_event = vmpressure_register_event;
- event->unregister_event = vmpressure_unregister_event;
- } else if (!strcmp(name, "memory.memsw.usage_in_bytes")) {
- event->register_event = memsw_cgroup_usage_register_event;
- event->unregister_event = memsw_cgroup_usage_unregister_event;
- } else {
- ret = -EINVAL;
- goto out_put_cfile;
- }
-
- /*
- * Verify that @cfile belongs to @css. Also, remaining events are
- * automatically removed on cgroup destruction but the removal is
- * asynchronous, so take an extra ref on @css.
- */
- cfile_css = css_tryget_online_from_dir(cdentry->d_parent,
- &memory_cgrp_subsys);
- ret = -EINVAL;
- if (IS_ERR(cfile_css))
- goto out_put_cfile;
- if (cfile_css != css) {
- css_put(cfile_css);
- goto out_put_cfile;
- }
-
- ret = event->register_event(memcg, event->eventfd, buf);
- if (ret)
- goto out_put_css;
-
- vfs_poll(efile.file, &event->pt);
-
- spin_lock_irq(&memcg->event_list_lock);
- list_add(&event->list, &memcg->event_list);
- spin_unlock_irq(&memcg->event_list_lock);
-
- fdput(cfile);
- fdput(efile);
-
- return nbytes;
-
-out_put_css:
- css_put(css);
-out_put_cfile:
- fdput(cfile);
-out_put_eventfd:
- eventfd_ctx_put(event->eventfd);
-out_put_efile:
- fdput(efile);
-out_kfree:
- kfree(event);
-
- return ret;
-}
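As a hedged illustration of the interface this parser serves (cgroup v1 only; the mount path, cgroup name and 50 MiB threshold below are hypothetical, and error handling is omitted), a userspace program arms a usage threshold like this:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void)
{
	int efd = eventfd(0, 0);
	int cfd = open("/sys/fs/cgroup/memory/demo/memory.usage_in_bytes",
		       O_RDONLY);
	int ctrl = open("/sys/fs/cgroup/memory/demo/cgroup.event_control",
			O_WRONLY);
	char buf[64];
	uint64_t ticks;

	/* "<event_fd> <control_fd> <args>": arm a 50 MiB usage threshold */
	snprintf(buf, sizeof(buf), "%d %d 52428800", efd, cfd);
	write(ctrl, buf, strlen(buf));

	/* blocks until the threshold code signals the eventfd */
	read(efd, &ticks, sizeof(ticks));
	printf("threshold crossed %llu time(s)\n", (unsigned long long)ticks);
	return 0;
}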
-
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_SLUB_DEBUG)
-static int mem_cgroup_slab_show(struct seq_file *m, void *p)
-{
- /*
- * Deprecated.
- * Please, take a look at tools/cgroup/memcg_slabinfo.py .
- */
- return 0;
-}
-#endif
-
-static int memory_stat_show(struct seq_file *m, void *v);
-
-static struct cftype mem_cgroup_legacy_files[] = {
- {
- .name = "usage_in_bytes",
- .private = MEMFILE_PRIVATE(_MEM, RES_USAGE),
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "max_usage_in_bytes",
- .private = MEMFILE_PRIVATE(_MEM, RES_MAX_USAGE),
- .write = mem_cgroup_reset,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "limit_in_bytes",
- .private = MEMFILE_PRIVATE(_MEM, RES_LIMIT),
- .write = mem_cgroup_write,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "soft_limit_in_bytes",
- .private = MEMFILE_PRIVATE(_MEM, RES_SOFT_LIMIT),
- .write = mem_cgroup_write,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "failcnt",
- .private = MEMFILE_PRIVATE(_MEM, RES_FAILCNT),
- .write = mem_cgroup_reset,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "stat",
- .seq_show = memory_stat_show,
- },
- {
- .name = "force_empty",
- .write = mem_cgroup_force_empty_write,
- },
- {
- .name = "use_hierarchy",
- .write_u64 = mem_cgroup_hierarchy_write,
- .read_u64 = mem_cgroup_hierarchy_read,
- },
- {
- .name = "cgroup.event_control", /* XXX: for compat */
- .write = memcg_write_event_control,
- .flags = CFTYPE_NO_PREFIX | CFTYPE_WORLD_WRITABLE,
- },
- {
- .name = "swappiness",
- .read_u64 = mem_cgroup_swappiness_read,
- .write_u64 = mem_cgroup_swappiness_write,
- },
- {
- .name = "move_charge_at_immigrate",
- .read_u64 = mem_cgroup_move_charge_read,
- .write_u64 = mem_cgroup_move_charge_write,
- },
- {
- .name = "oom_control",
- .seq_show = mem_cgroup_oom_control_read,
- .write_u64 = mem_cgroup_oom_control_write,
- },
- {
- .name = "pressure_level",
- .seq_show = mem_cgroup_dummy_seq_show,
- },
-#ifdef CONFIG_NUMA
- {
- .name = "numa_stat",
- .seq_show = memcg_numa_stat_show,
- },
-#endif
- {
- .name = "kmem.limit_in_bytes",
- .private = MEMFILE_PRIVATE(_KMEM, RES_LIMIT),
- .write = mem_cgroup_write,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "kmem.usage_in_bytes",
- .private = MEMFILE_PRIVATE(_KMEM, RES_USAGE),
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "kmem.failcnt",
- .private = MEMFILE_PRIVATE(_KMEM, RES_FAILCNT),
- .write = mem_cgroup_reset,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "kmem.max_usage_in_bytes",
- .private = MEMFILE_PRIVATE(_KMEM, RES_MAX_USAGE),
- .write = mem_cgroup_reset,
- .read_u64 = mem_cgroup_read_u64,
- },
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_SLUB_DEBUG)
- {
- .name = "kmem.slabinfo",
- .seq_show = mem_cgroup_slab_show,
- },
-#endif
- {
- .name = "kmem.tcp.limit_in_bytes",
- .private = MEMFILE_PRIVATE(_TCP, RES_LIMIT),
- .write = mem_cgroup_write,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "kmem.tcp.usage_in_bytes",
- .private = MEMFILE_PRIVATE(_TCP, RES_USAGE),
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "kmem.tcp.failcnt",
- .private = MEMFILE_PRIVATE(_TCP, RES_FAILCNT),
- .write = mem_cgroup_reset,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "kmem.tcp.max_usage_in_bytes",
- .private = MEMFILE_PRIVATE(_TCP, RES_MAX_USAGE),
- .write = mem_cgroup_reset,
- .read_u64 = mem_cgroup_read_u64,
- },
- { }, /* terminate */
-};
-
-/*
* Private memory cgroup IDR
*
* Swap-out records and page cache shadow entries need to store memcg
@@ -5577,13 +3395,13 @@ static void mem_cgroup_id_remove(struct mem_cgroup *memcg)
}
}
-static void __maybe_unused mem_cgroup_id_get_many(struct mem_cgroup *memcg,
- unsigned int n)
+void __maybe_unused mem_cgroup_id_get_many(struct mem_cgroup *memcg,
+ unsigned int n)
{
refcount_add(n, &memcg->id.ref);
}
-static void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n)
+void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n)
{
if (refcount_sub_and_test(n, &memcg->id.ref)) {
mem_cgroup_id_remove(memcg);
@@ -5739,17 +3557,11 @@ static struct mem_cgroup *mem_cgroup_alloc(struct mem_cgroup *parent)
goto fail;
INIT_WORK(&memcg->high_work, high_work_func);
- INIT_LIST_HEAD(&memcg->oom_notify);
- mutex_init(&memcg->thresholds_lock);
- spin_lock_init(&memcg->move_lock);
vmpressure_init(&memcg->vmpressure);
- INIT_LIST_HEAD(&memcg->event_list);
- spin_lock_init(&memcg->event_list_lock);
memcg->socket_pressure = jiffies;
-#ifdef CONFIG_MEMCG_KMEM
+ memcg1_memcg_init(memcg);
memcg->kmemcg_id = -1;
INIT_LIST_HEAD(&memcg->objcg_list);
-#endif
#ifdef CONFIG_CGROUP_WRITEBACK
INIT_LIST_HEAD(&memcg->cgwb_list);
for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++)
@@ -5782,8 +3594,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
return ERR_CAST(memcg);
page_counter_set_high(&memcg->memory, PAGE_COUNTER_MAX);
- WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+ memcg1_soft_limit_reset(memcg);
+#ifdef CONFIG_ZSWAP
memcg->zswap_max = PAGE_COUNTER_MAX;
WRITE_ONCE(memcg->zswap_writeback,
!parent || READ_ONCE(parent->zswap_writeback));
@@ -5791,20 +3603,23 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
if (parent) {
WRITE_ONCE(memcg->swappiness, mem_cgroup_swappiness(parent));
- WRITE_ONCE(memcg->oom_kill_disable, READ_ONCE(parent->oom_kill_disable));
page_counter_init(&memcg->memory, &parent->memory);
page_counter_init(&memcg->swap, &parent->swap);
+#ifdef CONFIG_MEMCG_V1
+ WRITE_ONCE(memcg->oom_kill_disable, READ_ONCE(parent->oom_kill_disable));
page_counter_init(&memcg->kmem, &parent->kmem);
page_counter_init(&memcg->tcpmem, &parent->tcpmem);
+#endif
} else {
init_memcg_stats();
init_memcg_events();
page_counter_init(&memcg->memory, NULL);
page_counter_init(&memcg->swap, NULL);
+#ifdef CONFIG_MEMCG_V1
page_counter_init(&memcg->kmem, NULL);
page_counter_init(&memcg->tcpmem, NULL);
-
+#endif
root_mem_cgroup = memcg;
return &memcg->css;
}
@@ -5812,10 +3627,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
if (cgroup_subsys_on_dfl(memory_cgrp_subsys) && !cgroup_memory_nosocket)
static_branch_inc(&memcg_sockets_enabled_key);
-#if defined(CONFIG_MEMCG_KMEM)
if (!cgroup_memory_nobpf)
static_branch_inc(&memcg_bpf_enabled_key);
-#endif
return &memcg->css;
}
@@ -5867,19 +3680,8 @@ remove_id:
static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
- struct mem_cgroup_event *event, *tmp;
- /*
- * Unregister events and notify userspace.
- * Notify userspace about cgroup removing only after rmdir of cgroup
- * directory to avoid race between userspace and kernelspace.
- */
- spin_lock_irq(&memcg->event_list_lock);
- list_for_each_entry_safe(event, tmp, &memcg->event_list, list) {
- list_del_init(&event->list);
- schedule_work(&event->remove);
- }
- spin_unlock_irq(&memcg->event_list_lock);
+ memcg1_css_offline(memcg);
page_counter_set_min(&memcg->memory, 0);
page_counter_set_low(&memcg->memory, 0);
@@ -5916,17 +3718,15 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
if (cgroup_subsys_on_dfl(memory_cgrp_subsys) && !cgroup_memory_nosocket)
static_branch_dec(&memcg_sockets_enabled_key);
- if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg->tcpmem_active)
+ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg1_tcpmem_active(memcg))
static_branch_dec(&memcg_sockets_enabled_key);
-#if defined(CONFIG_MEMCG_KMEM)
if (!cgroup_memory_nobpf)
static_branch_dec(&memcg_bpf_enabled_key);
-#endif
vmpressure_cleanup(&memcg->vmpressure);
cancel_work_sync(&memcg->high_work);
- mem_cgroup_remove_from_trees(memcg);
+ memcg1_remove_from_trees(memcg);
free_shrinker_info(memcg);
mem_cgroup_free(memcg);
}
@@ -5950,12 +3750,14 @@ static void mem_cgroup_css_reset(struct cgroup_subsys_state *css)
page_counter_set_max(&memcg->memory, PAGE_COUNTER_MAX);
page_counter_set_max(&memcg->swap, PAGE_COUNTER_MAX);
+#ifdef CONFIG_MEMCG_V1
page_counter_set_max(&memcg->kmem, PAGE_COUNTER_MAX);
page_counter_set_max(&memcg->tcpmem, PAGE_COUNTER_MAX);
+#endif
page_counter_set_min(&memcg->memory, 0);
page_counter_set_low(&memcg->memory, 0);
page_counter_set_high(&memcg->memory, PAGE_COUNTER_MAX);
- WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
+ memcg1_soft_limit_reset(memcg);
page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
memcg_wb_domain_size_changed(memcg);
}
@@ -6063,758 +3865,6 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
atomic64_set(&memcg->vmstats->stats_updates, 0);
}
-#ifdef CONFIG_MMU
-/* Handlers for move charge at task migration. */
-static int mem_cgroup_do_precharge(unsigned long count)
-{
- int ret;
-
- /* Try a single bulk charge without reclaim first, kswapd may wake */
- ret = try_charge(mc.to, GFP_KERNEL & ~__GFP_DIRECT_RECLAIM, count);
- if (!ret) {
- mc.precharge += count;
- return ret;
- }
-
- /* Try charges one by one with reclaim, but do not retry */
- while (count--) {
- ret = try_charge(mc.to, GFP_KERNEL | __GFP_NORETRY, 1);
- if (ret)
- return ret;
- mc.precharge++;
- cond_resched();
- }
- return 0;
-}
-
-union mc_target {
- struct folio *folio;
- swp_entry_t ent;
-};
-
-enum mc_target_type {
- MC_TARGET_NONE = 0,
- MC_TARGET_PAGE,
- MC_TARGET_SWAP,
- MC_TARGET_DEVICE,
-};
-
-static struct page *mc_handle_present_pte(struct vm_area_struct *vma,
- unsigned long addr, pte_t ptent)
-{
- struct page *page = vm_normal_page(vma, addr, ptent);
-
- if (!page)
- return NULL;
- if (PageAnon(page)) {
- if (!(mc.flags & MOVE_ANON))
- return NULL;
- } else {
- if (!(mc.flags & MOVE_FILE))
- return NULL;
- }
- get_page(page);
-
- return page;
-}
-
-#if defined(CONFIG_SWAP) || defined(CONFIG_DEVICE_PRIVATE)
-static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
- pte_t ptent, swp_entry_t *entry)
-{
- struct page *page = NULL;
- swp_entry_t ent = pte_to_swp_entry(ptent);
-
- if (!(mc.flags & MOVE_ANON))
- return NULL;
-
- /*
- * Handle device private pages that are not accessible by the CPU, but
- * stored as special swap entries in the page table.
- */
- if (is_device_private_entry(ent)) {
- page = pfn_swap_entry_to_page(ent);
- if (!get_page_unless_zero(page))
- return NULL;
- return page;
- }
-
- if (non_swap_entry(ent))
- return NULL;
-
- /*
- * Because swap_cache_get_folio() updates some statistics counters,
- * we call find_get_page() with swapper_space directly.
- */
- page = find_get_page(swap_address_space(ent), swp_offset(ent));
- entry->val = ent.val;
-
- return page;
-}
-#else
-static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
- pte_t ptent, swp_entry_t *entry)
-{
- return NULL;
-}
-#endif
-
-static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
- unsigned long addr, pte_t ptent)
-{
- unsigned long index;
- struct folio *folio;
-
- if (!vma->vm_file) /* anonymous vma */
- return NULL;
- if (!(mc.flags & MOVE_FILE))
- return NULL;
-
- /* the folio is moved even if it's not RSS of this task (page-faulted). */
- /* shmem/tmpfs may report page out on swap: account for that too. */
- index = linear_page_index(vma, addr);
- folio = filemap_get_incore_folio(vma->vm_file->f_mapping, index);
- if (IS_ERR(folio))
- return NULL;
- return folio_file_page(folio, index);
-}
-
-/**
- * mem_cgroup_move_account - move account of the folio
- * @folio: The folio.
- * @compound: charge the page as compound or small page
- * @from: mem_cgroup which the folio is moved from.
- * @to: mem_cgroup which the folio is moved to. @from != @to.
- *
- * The folio must be locked and not on the LRU.
- *
- * This function doesn't do "charge" to the new cgroup and doesn't do
- * "uncharge" from the old cgroup.
- */
-static int mem_cgroup_move_account(struct folio *folio,
- bool compound,
- struct mem_cgroup *from,
- struct mem_cgroup *to)
-{
- struct lruvec *from_vec, *to_vec;
- struct pglist_data *pgdat;
- unsigned int nr_pages = compound ? folio_nr_pages(folio) : 1;
- int nid, ret;
-
- VM_BUG_ON(from == to);
- VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
- VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
- VM_BUG_ON(compound && !folio_test_large(folio));
-
- ret = -EINVAL;
- if (folio_memcg(folio) != from)
- goto out;
-
- pgdat = folio_pgdat(folio);
- from_vec = mem_cgroup_lruvec(from, pgdat);
- to_vec = mem_cgroup_lruvec(to, pgdat);
-
- folio_memcg_lock(folio);
-
- if (folio_test_anon(folio)) {
- if (folio_mapped(folio)) {
- __mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
- __mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
- if (folio_test_pmd_mappable(folio)) {
- __mod_lruvec_state(from_vec, NR_ANON_THPS,
- -nr_pages);
- __mod_lruvec_state(to_vec, NR_ANON_THPS,
- nr_pages);
- }
- }
- } else {
- __mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
- __mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);
-
- if (folio_test_swapbacked(folio)) {
- __mod_lruvec_state(from_vec, NR_SHMEM, -nr_pages);
- __mod_lruvec_state(to_vec, NR_SHMEM, nr_pages);
- }
-
- if (folio_mapped(folio)) {
- __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
- __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
- }
-
- if (folio_test_dirty(folio)) {
- struct address_space *mapping = folio_mapping(folio);
-
- if (mapping_can_writeback(mapping)) {
- __mod_lruvec_state(from_vec, NR_FILE_DIRTY,
- -nr_pages);
- __mod_lruvec_state(to_vec, NR_FILE_DIRTY,
- nr_pages);
- }
- }
- }
-
-#ifdef CONFIG_SWAP
- if (folio_test_swapcache(folio)) {
- __mod_lruvec_state(from_vec, NR_SWAPCACHE, -nr_pages);
- __mod_lruvec_state(to_vec, NR_SWAPCACHE, nr_pages);
- }
-#endif
- if (folio_test_writeback(folio)) {
- __mod_lruvec_state(from_vec, NR_WRITEBACK, -nr_pages);
- __mod_lruvec_state(to_vec, NR_WRITEBACK, nr_pages);
- }
-
- /*
- * All state has been migrated, let's switch to the new memcg.
- *
- * It is safe to change page's memcg here because the page
- * is referenced, charged, isolated, and locked: we can't race
- * with (un)charging, migration, LRU putback, or anything else
- * that would rely on a stable page's memory cgroup.
- *
- * Note that folio_memcg_lock is a memcg lock, not a page lock,
- * to save space. As soon as we switch page's memory cgroup to a
- * new memcg that isn't locked, the above state can change
- * concurrently again. Make sure we're truly done with it.
- */
- smp_mb();
-
- css_get(&to->css);
- css_put(&from->css);
-
- folio->memcg_data = (unsigned long)to;
-
- __folio_memcg_unlock(from);
-
- ret = 0;
- nid = folio_nid(folio);
-
- local_irq_disable();
- mem_cgroup_charge_statistics(to, nr_pages);
- memcg_check_events(to, nid);
- mem_cgroup_charge_statistics(from, -nr_pages);
- memcg_check_events(from, nid);
- local_irq_enable();
-out:
- return ret;
-}
-
-/**
- * get_mctgt_type - get target type of moving charge
- * @vma: the vma the pte to be checked belongs to
- * @addr: the address corresponding to the pte to be checked
- * @ptent: the pte to be checked
- * @target: the pointer where the target page or swap entry will be stored (can be NULL)
- *
- * Context: Called with pte lock held.
- * Return:
- * * MC_TARGET_NONE - If the pte is not a target for move charge.
- * * MC_TARGET_PAGE - If the page corresponding to this pte is a target for
- * move charge. If @target is not NULL, the folio is stored in target->folio
- * with extra refcnt taken (Caller should release it).
- * * MC_TARGET_SWAP - If the swap entry corresponding to this pte is a
- * target for charge migration. If @target is not NULL, the entry is
- * stored in target->ent.
- * * MC_TARGET_DEVICE - Like MC_TARGET_PAGE but page is device memory and
- * thus not on the LRU. For now such a page is charged like a regular page
- * would be, as it is just special memory taking the place of a regular page.
- * See Documentation/vm/hmm.txt and include/linux/hmm.h
- */
-static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,
- unsigned long addr, pte_t ptent, union mc_target *target)
-{
- struct page *page = NULL;
- struct folio *folio;
- enum mc_target_type ret = MC_TARGET_NONE;
- swp_entry_t ent = { .val = 0 };
-
- if (pte_present(ptent))
- page = mc_handle_present_pte(vma, addr, ptent);
- else if (pte_none_mostly(ptent))
- /*
- * PTE markers should be treated as a none pte here, separated
- * from other swap handling below.
- */
- page = mc_handle_file_pte(vma, addr, ptent);
- else if (is_swap_pte(ptent))
- page = mc_handle_swap_pte(vma, ptent, &ent);
-
- if (page)
- folio = page_folio(page);
- if (target && page) {
- if (!folio_trylock(folio)) {
- folio_put(folio);
- return ret;
- }
- /*
- * page_mapped() must be stable during the move. This
- * pte is locked, so if it's present, the page cannot
- * become unmapped. If it isn't, we have only partial
- * control over the mapped state: the page lock will
- * prevent new faults against pagecache and swapcache,
- * so an unmapped page cannot become mapped. However,
- * if the page is already mapped elsewhere, it can
- * unmap, and there is nothing we can do about it.
- * Alas, skip moving the page in this case.
- */
- if (!pte_present(ptent) && page_mapped(page)) {
- folio_unlock(folio);
- folio_put(folio);
- return ret;
- }
- }
-
- if (!page && !ent.val)
- return ret;
- if (page) {
- /*
- * Do only a loose check without serialization.
- * mem_cgroup_move_account() checks whether the page is
- * valid under LRU exclusion.
- */
- if (folio_memcg(folio) == mc.from) {
- ret = MC_TARGET_PAGE;
- if (folio_is_device_private(folio) ||
- folio_is_device_coherent(folio))
- ret = MC_TARGET_DEVICE;
- if (target)
- target->folio = folio;
- }
- if (!ret || !target) {
- if (target)
- folio_unlock(folio);
- folio_put(folio);
- }
- }
- /*
- * There is a swap entry and a page doesn't exist or isn't charged.
- * But we cannot move a tail-page in a THP.
- */
- if (ent.val && !ret && (!page || !PageTransCompound(page)) &&
- mem_cgroup_id(mc.from) == lookup_swap_cgroup_id(ent)) {
- ret = MC_TARGET_SWAP;
- if (target)
- target->ent = ent;
- }
- return ret;
-}
-
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-/*
- * We don't consider PMD mapped swapping or file mapped pages because THP does
- * not support them for now.
- * Caller should make sure that pmd_trans_huge(pmd) is true.
- */
-static enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
- unsigned long addr, pmd_t pmd, union mc_target *target)
-{
- struct page *page = NULL;
- struct folio *folio;
- enum mc_target_type ret = MC_TARGET_NONE;
-
- if (unlikely(is_swap_pmd(pmd))) {
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(pmd));
- return ret;
- }
- page = pmd_page(pmd);
- VM_BUG_ON_PAGE(!page || !PageHead(page), page);
- folio = page_folio(page);
- if (!(mc.flags & MOVE_ANON))
- return ret;
- if (folio_memcg(folio) == mc.from) {
- ret = MC_TARGET_PAGE;
- if (target) {
- folio_get(folio);
- if (!folio_trylock(folio)) {
- folio_put(folio);
- return MC_TARGET_NONE;
- }
- target->folio = folio;
- }
- }
- return ret;
-}
-#else
-static inline enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
- unsigned long addr, pmd_t pmd, union mc_target *target)
-{
- return MC_TARGET_NONE;
-}
-#endif
-
-static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
- unsigned long addr, unsigned long end,
- struct mm_walk *walk)
-{
- struct vm_area_struct *vma = walk->vma;
- pte_t *pte;
- spinlock_t *ptl;
-
- ptl = pmd_trans_huge_lock(pmd, vma);
- if (ptl) {
- /*
- * Note there cannot be MC_TARGET_DEVICE for now as we do not
- * support transparent huge pages with MEMORY_DEVICE_PRIVATE, but
- * this might change.
- */
- if (get_mctgt_type_thp(vma, addr, *pmd, NULL) == MC_TARGET_PAGE)
- mc.precharge += HPAGE_PMD_NR;
- spin_unlock(ptl);
- return 0;
- }
-
- pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
- if (!pte)
- return 0;
- for (; addr != end; pte++, addr += PAGE_SIZE)
- if (get_mctgt_type(vma, addr, ptep_get(pte), NULL))
- mc.precharge++; /* increment precharge temporarily */
- pte_unmap_unlock(pte - 1, ptl);
- cond_resched();
-
- return 0;
-}
-
-static const struct mm_walk_ops precharge_walk_ops = {
- .pmd_entry = mem_cgroup_count_precharge_pte_range,
- .walk_lock = PGWALK_RDLOCK,
-};
-
-static unsigned long mem_cgroup_count_precharge(struct mm_struct *mm)
-{
- unsigned long precharge;
-
- mmap_read_lock(mm);
- walk_page_range(mm, 0, ULONG_MAX, &precharge_walk_ops, NULL);
- mmap_read_unlock(mm);
-
- precharge = mc.precharge;
- mc.precharge = 0;
-
- return precharge;
-}
-
-static int mem_cgroup_precharge_mc(struct mm_struct *mm)
-{
- unsigned long precharge = mem_cgroup_count_precharge(mm);
-
- VM_BUG_ON(mc.moving_task);
- mc.moving_task = current;
- return mem_cgroup_do_precharge(precharge);
-}
-
-/* cancels all extra charges on mc.from and mc.to, and wakes up all waiters. */
-static void __mem_cgroup_clear_mc(void)
-{
- struct mem_cgroup *from = mc.from;
- struct mem_cgroup *to = mc.to;
-
- /* we must uncharge all the leftover precharges from mc.to */
- if (mc.precharge) {
- mem_cgroup_cancel_charge(mc.to, mc.precharge);
- mc.precharge = 0;
- }
- /*
- * we didn't uncharge from mc.from at mem_cgroup_move_account(), so
- * we must uncharge here.
- */
- if (mc.moved_charge) {
- mem_cgroup_cancel_charge(mc.from, mc.moved_charge);
- mc.moved_charge = 0;
- }
- /* we must fixup refcnts and charges */
- if (mc.moved_swap) {
- /* uncharge swap account from the old cgroup */
- if (!mem_cgroup_is_root(mc.from))
- page_counter_uncharge(&mc.from->memsw, mc.moved_swap);
-
- mem_cgroup_id_put_many(mc.from, mc.moved_swap);
-
- /*
- * we charged both to->memory and to->memsw, so we
- * should uncharge to->memory.
- */
- if (!mem_cgroup_is_root(mc.to))
- page_counter_uncharge(&mc.to->memory, mc.moved_swap);
-
- mc.moved_swap = 0;
- }
- memcg_oom_recover(from);
- memcg_oom_recover(to);
- wake_up_all(&mc.waitq);
-}
-
-static void mem_cgroup_clear_mc(void)
-{
- struct mm_struct *mm = mc.mm;
-
- /*
- * we must clear moving_task before waking up waiters at the end of
- * task migration.
- */
- mc.moving_task = NULL;
- __mem_cgroup_clear_mc();
- spin_lock(&mc.lock);
- mc.from = NULL;
- mc.to = NULL;
- mc.mm = NULL;
- spin_unlock(&mc.lock);
-
- mmput(mm);
-}
-
-static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
-{
- struct cgroup_subsys_state *css;
- struct mem_cgroup *memcg = NULL; /* unneeded init to make gcc happy */
- struct mem_cgroup *from;
- struct task_struct *leader, *p;
- struct mm_struct *mm;
- unsigned long move_flags;
- int ret = 0;
-
- /* charge immigration isn't supported on the default hierarchy */
- if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
- return 0;
-
- /*
- * Multi-process migrations only happen on the default hierarchy
- * where charge immigration is not used. Perform charge
- * immigration if @tset contains a leader and whine if there are
- * multiple.
- */
- p = NULL;
- cgroup_taskset_for_each_leader(leader, css, tset) {
- WARN_ON_ONCE(p);
- p = leader;
- memcg = mem_cgroup_from_css(css);
- }
- if (!p)
- return 0;
-
- /*
- * We are now committed to this value whatever it is. Changes in this
- * tunable will only affect upcoming migrations, not the current one.
- * So we need to save it, and keep it going.
- */
- move_flags = READ_ONCE(memcg->move_charge_at_immigrate);
- if (!move_flags)
- return 0;
-
- from = mem_cgroup_from_task(p);
-
- VM_BUG_ON(from == memcg);
-
- mm = get_task_mm(p);
- if (!mm)
- return 0;
- /* We move charges only when we move an owner of the mm */
- if (mm->owner == p) {
- VM_BUG_ON(mc.from);
- VM_BUG_ON(mc.to);
- VM_BUG_ON(mc.precharge);
- VM_BUG_ON(mc.moved_charge);
- VM_BUG_ON(mc.moved_swap);
-
- spin_lock(&mc.lock);
- mc.mm = mm;
- mc.from = from;
- mc.to = memcg;
- mc.flags = move_flags;
- spin_unlock(&mc.lock);
- /* We set mc.moving_task later */
-
- ret = mem_cgroup_precharge_mc(mm);
- if (ret)
- mem_cgroup_clear_mc();
- } else {
- mmput(mm);
- }
- return ret;
-}
-
-static void mem_cgroup_cancel_attach(struct cgroup_taskset *tset)
-{
- if (mc.to)
- mem_cgroup_clear_mc();
-}
-
-static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
- unsigned long addr, unsigned long end,
- struct mm_walk *walk)
-{
- int ret = 0;
- struct vm_area_struct *vma = walk->vma;
- pte_t *pte;
- spinlock_t *ptl;
- enum mc_target_type target_type;
- union mc_target target;
- struct folio *folio;
-
- ptl = pmd_trans_huge_lock(pmd, vma);
- if (ptl) {
- if (mc.precharge < HPAGE_PMD_NR) {
- spin_unlock(ptl);
- return 0;
- }
- target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
- if (target_type == MC_TARGET_PAGE) {
- folio = target.folio;
- if (folio_isolate_lru(folio)) {
- if (!mem_cgroup_move_account(folio, true,
- mc.from, mc.to)) {
- mc.precharge -= HPAGE_PMD_NR;
- mc.moved_charge += HPAGE_PMD_NR;
- }
- folio_putback_lru(folio);
- }
- folio_unlock(folio);
- folio_put(folio);
- } else if (target_type == MC_TARGET_DEVICE) {
- folio = target.folio;
- if (!mem_cgroup_move_account(folio, true,
- mc.from, mc.to)) {
- mc.precharge -= HPAGE_PMD_NR;
- mc.moved_charge += HPAGE_PMD_NR;
- }
- folio_unlock(folio);
- folio_put(folio);
- }
- spin_unlock(ptl);
- return 0;
- }
-
-retry:
- pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
- if (!pte)
- return 0;
- for (; addr != end; addr += PAGE_SIZE) {
- pte_t ptent = ptep_get(pte++);
- bool device = false;
- swp_entry_t ent;
-
- if (!mc.precharge)
- break;
-
- switch (get_mctgt_type(vma, addr, ptent, &target)) {
- case MC_TARGET_DEVICE:
- device = true;
- fallthrough;
- case MC_TARGET_PAGE:
- folio = target.folio;
- /*
- * We can have a part of the split pmd here. Moving it
- * can be done but it would be too convoluted so simply
- * ignore such a partial THP and keep it in original
- * memcg. There should be somebody mapping the head.
- */
- if (folio_test_large(folio))
- goto put;
- if (!device && !folio_isolate_lru(folio))
- goto put;
- if (!mem_cgroup_move_account(folio, false,
- mc.from, mc.to)) {
- mc.precharge--;
- /* we uncharge from mc.from later. */
- mc.moved_charge++;
- }
- if (!device)
- folio_putback_lru(folio);
-put: /* get_mctgt_type() gets & locks the page */
- folio_unlock(folio);
- folio_put(folio);
- break;
- case MC_TARGET_SWAP:
- ent = target.ent;
- if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to)) {
- mc.precharge--;
- mem_cgroup_id_get_many(mc.to, 1);
- /* we fixup other refcnts and charges later. */
- mc.moved_swap++;
- }
- break;
- default:
- break;
- }
- }
- pte_unmap_unlock(pte - 1, ptl);
- cond_resched();
-
- if (addr != end) {
- /*
- * We have consumed all the precharges we got in can_attach().
- * We try to charge one by one, but don't do any additional
- * charges to mc.to if we have failed to charge once in the
- * attach() phase.
- */
- ret = mem_cgroup_do_precharge(1);
- if (!ret)
- goto retry;
- }
-
- return ret;
-}
-
-static const struct mm_walk_ops charge_walk_ops = {
- .pmd_entry = mem_cgroup_move_charge_pte_range,
- .walk_lock = PGWALK_RDLOCK,
-};
-
-static void mem_cgroup_move_charge(void)
-{
- lru_add_drain_all();
- /*
- * Signal folio_memcg_lock() to take the memcg's move_lock
- * while we're moving its pages to another memcg. Then wait
- * for already started RCU-only updates to finish.
- */
- atomic_inc(&mc.from->moving_account);
- synchronize_rcu();
-retry:
- if (unlikely(!mmap_read_trylock(mc.mm))) {
- /*
- * Someone who is holding the mmap_lock might be waiting in
- * the waitq. So we cancel all extra charges, wake up all waiters,
- * and retry. Because we cancel precharges, we might not be able
- * to move enough charges, but moving charge is a best-effort
- * feature anyway, so it wouldn't be a big problem.
- */
- __mem_cgroup_clear_mc();
- cond_resched();
- goto retry;
- }
- /*
- * When we have consumed all precharges and failed to do an
- * additional charge, the page walk just aborts.
- */
- walk_page_range(mc.mm, 0, ULONG_MAX, &charge_walk_ops, NULL);
- mmap_read_unlock(mc.mm);
- atomic_dec(&mc.from->moving_account);
-}
-
-static void mem_cgroup_move_task(void)
-{
- if (mc.to) {
- mem_cgroup_move_charge();
- mem_cgroup_clear_mc();
- }
-}
-
-#else /* !CONFIG_MMU */
-static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
-{
- return 0;
-}
-static void mem_cgroup_cancel_attach(struct cgroup_taskset *tset)
-{
-}
-static void mem_cgroup_move_task(void)
-{
-}
-#endif
-
-#ifdef CONFIG_MEMCG_KMEM
static void mem_cgroup_fork(struct task_struct *task)
{
/*
@@ -6842,7 +3892,6 @@ static void mem_cgroup_exit(struct task_struct *task)
*/
task->objcg = NULL;
}
-#endif
#ifdef CONFIG_LRU_GEN
static void mem_cgroup_lru_gen_attach(struct cgroup_taskset *tset)
@@ -6866,7 +3915,6 @@ static void mem_cgroup_lru_gen_attach(struct cgroup_taskset *tset)
static void mem_cgroup_lru_gen_attach(struct cgroup_taskset *tset) {}
#endif /* CONFIG_LRU_GEN */
-#ifdef CONFIG_MEMCG_KMEM
static void mem_cgroup_kmem_attach(struct cgroup_taskset *tset)
{
struct task_struct *task;
@@ -6877,17 +3925,12 @@ static void mem_cgroup_kmem_attach(struct cgroup_taskset *tset)
set_bit(CURRENT_OBJCG_UPDATE_BIT, (unsigned long *)&task->objcg);
}
}
-#else
-static void mem_cgroup_kmem_attach(struct cgroup_taskset *tset) {}
-#endif /* CONFIG_MEMCG_KMEM */
-#if defined(CONFIG_LRU_GEN) || defined(CONFIG_MEMCG_KMEM)
static void mem_cgroup_attach(struct cgroup_taskset *tset)
{
mem_cgroup_lru_gen_attach(tset);
mem_cgroup_kmem_attach(tset);
}
-#endif
static int seq_puts_memcg_tunable(struct seq_file *m, unsigned long value)
{
@@ -7000,7 +4043,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
}
reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
- GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP);
+ GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP, NULL);
if (!reclaimed && !nr_retries--)
break;
@@ -7049,7 +4092,7 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
if (nr_reclaims) {
if (!try_to_free_mem_cgroup_pages(memcg, nr_pages - max,
- GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP))
+ GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP, NULL))
nr_reclaims--;
continue;
}
@@ -7095,7 +4138,7 @@ static int memory_events_local_show(struct seq_file *m, void *v)
return 0;
}
-static int memory_stat_show(struct seq_file *m, void *v)
+int memory_stat_show(struct seq_file *m, void *v)
{
struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
char *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
@@ -7179,19 +4222,50 @@ static ssize_t memory_oom_group_write(struct kernfs_open_file *of,
return nbytes;
}
+enum {
+ MEMORY_RECLAIM_SWAPPINESS = 0,
+ MEMORY_RECLAIM_NULL,
+};
+
+static const match_table_t tokens = {
+ { MEMORY_RECLAIM_SWAPPINESS, "swappiness=%d"},
+ { MEMORY_RECLAIM_NULL, NULL },
+};
+
static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
size_t nbytes, loff_t off)
{
struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
unsigned int nr_retries = MAX_RECLAIM_RETRIES;
unsigned long nr_to_reclaim, nr_reclaimed = 0;
+ int swappiness = -1;
unsigned int reclaim_options;
- int err;
+ char *old_buf, *start;
+ substring_t args[MAX_OPT_ARGS];
buf = strstrip(buf);
- err = page_counter_memparse(buf, "", &nr_to_reclaim);
- if (err)
- return err;
+
+ old_buf = buf;
+ nr_to_reclaim = memparse(buf, &buf) / PAGE_SIZE;
+ if (buf == old_buf)
+ return -EINVAL;
+
+ buf = strstrip(buf);
+
+ while ((start = strsep(&buf, " ")) != NULL) {
+ if (!strlen(start))
+ continue;
+ switch (match_token(start, tokens, args)) {
+ case MEMORY_RECLAIM_SWAPPINESS:
+ if (match_int(&args[0], &swappiness))
+ return -EINVAL;
+ if (swappiness < MIN_SWAPPINESS || swappiness > MAX_SWAPPINESS)
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+ }
reclaim_options = MEMCG_RECLAIM_MAY_SWAP | MEMCG_RECLAIM_PROACTIVE;
while (nr_reclaimed < nr_to_reclaim) {
@@ -7211,7 +4285,9 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
lru_add_drain_all();
reclaimed = try_to_free_mem_cgroup_pages(memcg,
- batch_size, GFP_KERNEL, reclaim_options);
+ batch_size, GFP_KERNEL,
+ reclaim_options,
+ swappiness == -1 ? NULL : &swappiness);
if (!reclaimed && !nr_retries--)
return -EAGAIN;
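With the new parsing, memory.reclaim takes a byte count (memparse suffixes like K/M/G work) optionally followed by a swappiness=<n> hint within the MIN_SWAPPINESS..MAX_SWAPPINESS range. A minimal userspace sketch, with a hypothetical cgroup path and error handling mostly omitted:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* ask for 1 GiB of proactive reclaim, preferring anonymous memory */
	int fd = open("/sys/fs/cgroup/demo/memory.reclaim", O_WRONLY);
	const char *req = "1G swappiness=100";

	if (fd < 0 || write(fd, req, strlen(req)) < 0)
		return 1;
	return 0;
}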
@@ -7301,137 +4377,19 @@ struct cgroup_subsys memory_cgrp_subsys = {
.css_free = mem_cgroup_css_free,
.css_reset = mem_cgroup_css_reset,
.css_rstat_flush = mem_cgroup_css_rstat_flush,
- .can_attach = mem_cgroup_can_attach,
-#if defined(CONFIG_LRU_GEN) || defined(CONFIG_MEMCG_KMEM)
.attach = mem_cgroup_attach,
-#endif
- .cancel_attach = mem_cgroup_cancel_attach,
- .post_attach = mem_cgroup_move_task,
-#ifdef CONFIG_MEMCG_KMEM
.fork = mem_cgroup_fork,
.exit = mem_cgroup_exit,
-#endif
.dfl_cftypes = memory_files,
+#ifdef CONFIG_MEMCG_V1
+ .can_attach = memcg1_can_attach,
+ .cancel_attach = memcg1_cancel_attach,
+ .post_attach = memcg1_move_task,
.legacy_cftypes = mem_cgroup_legacy_files,
+#endif
.early_init = 0,
};
-/*
- * This function calculates an individual cgroup's effective
- * protection which is derived from its own memory.min/low, its
- * parent's and siblings' settings, as well as the actual memory
- * distribution in the tree.
- *
- * The following rules apply to the effective protection values:
- *
- * 1. At the first level of reclaim, effective protection is equal to
- * the declared protection in memory.min and memory.low.
- *
- * 2. To enable safe delegation of the protection configuration, at
- * subsequent levels the effective protection is capped to the
- * parent's effective protection.
- *
- * 3. To make complex and dynamic subtrees easier to configure, the
- * user is allowed to overcommit the declared protection at a given
- * level. If that is the case, the parent's effective protection is
- * distributed to the children in proportion to how much protection
- * they have declared and how much of it they are utilizing.
- *
- * This makes distribution proportional, but also work-conserving:
- * if one cgroup claims much more protection than it uses memory,
- * the unused remainder is available to its siblings.
- *
- * 4. Conversely, when the declared protection is undercommitted at a
- * given level, the distribution of the larger parental protection
- * budget is NOT proportional. A cgroup's protection from a sibling
- * is capped to its own memory.min/low setting.
- *
- * 5. However, to allow protecting recursive subtrees from each other
- * without having to declare each individual cgroup's fixed share
- * of the ancestor's claim to protection, any unutilized -
- * "floating" - protection from up the tree is distributed in
- * proportion to each cgroup's *usage*. This makes the protection
- * neutral wrt sibling cgroups and lets them compete freely over
- * the shared parental protection budget, but it protects the
- * subtree as a whole from neighboring subtrees.
- *
- * Note that 4. and 5. are not in conflict: 4. is about protecting
- * against immediate siblings whereas 5. is about protecting against
- * neighboring subtrees.
- */
-static unsigned long effective_protection(unsigned long usage,
- unsigned long parent_usage,
- unsigned long setting,
- unsigned long parent_effective,
- unsigned long siblings_protected)
-{
- unsigned long protected;
- unsigned long ep;
-
- protected = min(usage, setting);
- /*
- * If all cgroups at this level combined claim and use more
- * protection than what the parent affords them, distribute
- * shares in proportion to utilization.
- *
- * We are using actual utilization rather than the statically
- * claimed protection in order to be work-conserving: claimed
- * but unused protection is available to siblings that would
- * otherwise get a smaller chunk than what they claimed.
- */
- if (siblings_protected > parent_effective)
- return protected * parent_effective / siblings_protected;
-
- /*
- * Ok, utilized protection of all children is within what the
- * parent affords them, so we know whatever this child claims
- * and utilizes is effectively protected.
- *
- * If there is unprotected usage beyond this value, reclaim
- * will apply pressure in proportion to that amount.
- *
- * If there is unutilized protection, the cgroup will be fully
- * shielded from reclaim, but we do return a smaller value for
- * protection than what the group could enjoy in theory. This
- * is okay. With the overcommit distribution above, effective
- * protection is always dependent on how memory is actually
- * consumed among the siblings anyway.
- */
- ep = protected;
-
- /*
- * If the children aren't claiming (all of) the protection
- * afforded to them by the parent, distribute the remainder in
- * proportion to the (unprotected) memory of each cgroup. That
- * way, cgroups that aren't explicitly prioritized wrt each
- * other compete freely over the allowance, but they are
- * collectively protected from neighboring trees.
- *
- * We're using unprotected memory for the weight so that if
- * some cgroups DO claim explicit protection, we don't protect
- * the same bytes twice.
- *
- * Check both usage and parent_usage against the respective
- * protected values. One should imply the other, but they
- * aren't read atomically - make sure the division is sane.
- */
- if (!(cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_RECURSIVE_PROT))
- return ep;
- if (parent_effective > siblings_protected &&
- parent_usage > siblings_protected &&
- usage > protected) {
- unsigned long unclaimed;
-
- unclaimed = parent_effective - siblings_protected;
- unclaimed *= usage - protected;
- unclaimed /= parent_usage - siblings_protected;
-
- ep += unclaimed;
- }
-
- return ep;
-}
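Illustrative numbers for the overcommit branch above: if parent_effective is 100 pages, the children together declare and use siblings_protected = 150 pages, and this child's protected = min(usage, setting) = 60 pages, its effective protection becomes 60 * 100 / 150 = 40 pages. When the siblings stay within the parent's budget, the child keeps its full 60 pages, and with CGRP_ROOT_MEMORY_RECURSIVE_PROT set it additionally receives a usage-proportional share of whatever protection the siblings left unclaimed.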
-
/**
* mem_cgroup_calculate_protection - check if memory consumption is in the normal range
* @root: the top ancestor of the sub-tree being checked
@@ -7443,8 +4401,8 @@ static unsigned long effective_protection(unsigned long usage,
void mem_cgroup_calculate_protection(struct mem_cgroup *root,
struct mem_cgroup *memcg)
{
- unsigned long usage, parent_usage;
- struct mem_cgroup *parent;
+ bool recursive_protection =
+ cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_RECURSIVE_PROT;
if (mem_cgroup_disabled())
return;
@@ -7452,39 +4410,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
if (!root)
root = root_mem_cgroup;
- /*
- * Effective values of the reclaim targets are ignored so they
- * can be stale. Have a look at mem_cgroup_protection for more
- * details.
- * TODO: calculation should be more robust so that we do not need
- * that special casing.
- */
- if (memcg == root)
- return;
-
- usage = page_counter_read(&memcg->memory);
- if (!usage)
- return;
-
- parent = parent_mem_cgroup(memcg);
-
- if (parent == root) {
- memcg->memory.emin = READ_ONCE(memcg->memory.min);
- memcg->memory.elow = READ_ONCE(memcg->memory.low);
- return;
- }
-
- parent_usage = page_counter_read(&parent->memory);
-
- WRITE_ONCE(memcg->memory.emin, effective_protection(usage, parent_usage,
- READ_ONCE(memcg->memory.min),
- READ_ONCE(parent->memory.emin),
- atomic_long_read(&parent->memory.children_min_usage)));
-
- WRITE_ONCE(memcg->memory.elow, effective_protection(usage, parent_usage,
- READ_ONCE(memcg->memory.low),
- READ_ONCE(parent->memory.elow),
- atomic_long_read(&parent->memory.children_low_usage)));
+ page_counter_calculate_protection(&root->memory, &memcg->memory, recursive_protection);
}
static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
@@ -7637,15 +4563,17 @@ static void uncharge_batch(const struct uncharge_gather *ug)
page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
if (do_memsw_account())
page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory);
- if (ug->nr_kmem)
- memcg_account_kmem(ug->memcg, -ug->nr_kmem);
- memcg_oom_recover(ug->memcg);
+ if (ug->nr_kmem) {
+ mod_memcg_state(ug->memcg, MEMCG_KMEM, -ug->nr_kmem);
+ memcg1_account_kmem(ug->memcg, -ug->nr_kmem);
+ }
+ memcg1_oom_recover(ug->memcg);
}
local_irq_save(flags);
__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
- memcg_check_events(ug->memcg, ug->nid);
+ memcg1_check_events(ug->memcg, ug->nid);
local_irq_restore(flags);
/* drop reference from uncharge_folio */
@@ -7784,7 +4712,7 @@ void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
local_irq_save(flags);
mem_cgroup_charge_statistics(memcg, nr_pages);
- memcg_check_events(memcg, folio_nid(new));
+ memcg1_check_events(memcg, folio_nid(new));
local_irq_restore(flags);
}
@@ -7807,6 +4735,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
VM_BUG_ON_FOLIO(!folio_test_locked(new), new);
VM_BUG_ON_FOLIO(folio_test_anon(old) != folio_test_anon(new), new);
VM_BUG_ON_FOLIO(folio_nr_pages(old) != folio_nr_pages(new), new);
+ VM_BUG_ON_FOLIO(folio_test_lru(old), old);
if (mem_cgroup_disabled())
return;
@@ -7844,7 +4773,7 @@ void mem_cgroup_sk_alloc(struct sock *sk)
memcg = mem_cgroup_from_task(current);
if (mem_cgroup_is_root(memcg))
goto out;
- if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && !memcg->tcpmem_active)
+ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && !memcg1_tcpmem_active(memcg))
goto out;
if (css_tryget(&memcg->css))
sk->sk_memcg = memcg;
@@ -7870,20 +4799,8 @@ void mem_cgroup_sk_free(struct sock *sk)
bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages,
gfp_t gfp_mask)
{
- if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
- struct page_counter *fail;
-
- if (page_counter_try_charge(&memcg->tcpmem, nr_pages, &fail)) {
- memcg->tcpmem_pressure = 0;
- return true;
- }
- memcg->tcpmem_pressure = 1;
- if (gfp_mask & __GFP_NOFAIL) {
- page_counter_charge(&memcg->tcpmem, nr_pages);
- return true;
- }
- return false;
- }
+ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+ return memcg1_charge_skmem(memcg, nr_pages, gfp_mask);
if (try_charge(memcg, gfp_mask, nr_pages) == 0) {
mod_memcg_state(memcg, MEMCG_SOCK, nr_pages);
@@ -7901,7 +4818,7 @@ bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages,
void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
{
if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
- page_counter_uncharge(&memcg->tcpmem, nr_pages);
+ memcg1_uncharge_skmem(memcg, nr_pages);
return;
}
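A hedged sketch of how a networking-side caller might pair these two entry
points; the wrapper names and the sk->sk_memcg handling are illustrative
assumptions, only mem_cgroup_charge_skmem() and mem_cgroup_uncharge_skmem()
come from the hunks above:

#include <net/sock.h>
#include <linux/memcontrol.h>

/* Charge nr_pages of socket-buffer memory to the socket's memcg, if any. */
static bool demo_charge_sk_pages(struct sock *sk, unsigned int nr_pages,
				 gfp_t gfp)
{
	if (!sk->sk_memcg)
		return true;
	return mem_cgroup_charge_skmem(sk->sk_memcg, nr_pages, gfp);
}

/* Give the pages back when the buffers are freed. */
static void demo_uncharge_sk_pages(struct sock *sk, unsigned int nr_pages)
{
	if (sk->sk_memcg)
		mem_cgroup_uncharge_skmem(sk->sk_memcg, nr_pages);
}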
@@ -7938,7 +4855,7 @@ __setup("cgroup.memory=", cgroup_memory);
*/
static int __init mem_cgroup_init(void)
{
- int cpu, node;
+ int cpu;
/*
* Currently s32 type (can refer to struct batched_lruvec_stat) is
@@ -7955,17 +4872,6 @@ static int __init mem_cgroup_init(void)
INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
drain_local_stock);
- for_each_node(node) {
- struct mem_cgroup_tree_per_node *rtpn;
-
- rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL, node);
-
- rtpn->rb_root = RB_ROOT;
- rtpn->rb_rightmost = NULL;
- spin_lock_init(&rtpn->lock);
- soft_limit_tree.rb_tree_per_node[node] = rtpn;
- }
-
return 0;
}
subsys_initcall(mem_cgroup_init);
@@ -8052,7 +4958,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
memcg_stats_lock();
mem_cgroup_charge_statistics(memcg, -nr_entries);
memcg_stats_unlock();
- memcg_check_events(memcg, folio_nid(folio));
+ memcg1_check_events(memcg, folio_nid(folio));
css_put(&memcg->css);
}
@@ -8293,34 +5199,7 @@ static struct cftype swap_files[] = {
{ } /* terminate */
};
-static struct cftype memsw_files[] = {
- {
- .name = "memsw.usage_in_bytes",
- .private = MEMFILE_PRIVATE(_MEMSWAP, RES_USAGE),
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "memsw.max_usage_in_bytes",
- .private = MEMFILE_PRIVATE(_MEMSWAP, RES_MAX_USAGE),
- .write = mem_cgroup_reset,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "memsw.limit_in_bytes",
- .private = MEMFILE_PRIVATE(_MEMSWAP, RES_LIMIT),
- .write = mem_cgroup_write,
- .read_u64 = mem_cgroup_read_u64,
- },
- {
- .name = "memsw.failcnt",
- .private = MEMFILE_PRIVATE(_MEMSWAP, RES_FAILCNT),
- .write = mem_cgroup_reset,
- .read_u64 = mem_cgroup_read_u64,
- },
- { }, /* terminate */
-};
-
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+#ifdef CONFIG_ZSWAP
/**
* obj_cgroup_may_zswap - check if this cgroup can zswap
* @objcg: the object cgroup
@@ -8423,7 +5302,7 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
bool mem_cgroup_zswap_writeback_enabled(struct mem_cgroup *memcg)
{
/* if zswap is disabled, do not block pages going to the swapping device */
- return !is_zswap_enabled() || !memcg || READ_ONCE(memcg->zswap_writeback);
+ return !zswap_is_enabled() || !memcg || READ_ONCE(memcg->zswap_writeback);
}
static u64 zswap_current_read(struct cgroup_subsys_state *css,
@@ -8502,7 +5381,7 @@ static struct cftype zswap_files[] = {
},
{ } /* terminate */
};
-#endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
+#endif /* CONFIG_ZSWAP */
static int __init mem_cgroup_swap_init(void)
{
@@ -8510,8 +5389,10 @@ static int __init mem_cgroup_swap_init(void)
return 0;
WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys, swap_files));
+#ifdef CONFIG_MEMCG_V1
WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys, memsw_files));
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+#endif
+#ifdef CONFIG_ZSWAP
WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys, zswap_files));
#endif
return 0;
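A hedged sketch of the zswap charging flow these cgroup hooks serve; the
functions below are hypothetical stand-ins for logic that lives in mm/zswap.c.
obj_cgroup_may_zswap() and obj_cgroup_uncharge_zswap() are named in the hunks
above, while obj_cgroup_charge_zswap() is assumed to exist as their charging
counterpart:

#include <linux/memcontrol.h>

/* Only keep a compressed copy if the owning cgroup may use (and is
 * charged for) zswap space. */
static bool demo_zswap_store(struct obj_cgroup *objcg, size_t compressed_len)
{
	if (!obj_cgroup_may_zswap(objcg))
		return false;		/* over the cgroup's zswap limit */
	obj_cgroup_charge_zswap(objcg, compressed_len);
	/* ... copy the compressed data into the zswap pool here ... */
	return true;
}

/* Return the charge when the entry is freed or invalidated. */
static void demo_zswap_free(struct obj_cgroup *objcg, size_t compressed_len)
{
	obj_cgroup_uncharge_zswap(objcg, compressed_len);
}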
diff --git a/mm/memfd.c b/mm/memfd.c
index 7d8d3ab3fa37..e7b7c5294d59 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -60,6 +60,51 @@ static void memfd_tag_pins(struct xa_state *xas)
}
/*
+ * This is a helper function used by memfd_pin_folios() in GUP (gup.c).
+ * It is mainly called to allocate a folio in a memfd when the caller
+ * (memfd_pin_folios()) cannot find a folio in the page cache at a given
+ * index in the mapping.
+ */
+struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
+{
+#ifdef CONFIG_HUGETLB_PAGE
+ struct folio *folio;
+ gfp_t gfp_mask;
+ int err;
+
+ if (is_file_hugepages(memfd)) {
+ /*
+ * The folio would most likely be accessed by a DMA driver,
+ * therefore, we have zone memory constraints on where we can
+ * allocate from. Also, the folio will be pinned for an indefinite
+ * amount of time, so it is not expected to be migrated away.
+ */
+ gfp_mask = htlb_alloc_mask(hstate_file(memfd));
+ gfp_mask &= ~(__GFP_HIGHMEM | __GFP_MOVABLE);
+
+ folio = alloc_hugetlb_folio_nodemask(hstate_file(memfd),
+ numa_node_id(),
+ NULL,
+ gfp_mask,
+ false);
+ if (folio && folio_try_get(folio)) {
+ err = hugetlb_add_to_page_cache(folio,
+ memfd->f_mapping,
+ idx);
+ if (err) {
+ folio_put(folio);
+ free_huge_folio(folio);
+ return ERR_PTR(err);
+ }
+ return folio;
+ }
+ return ERR_PTR(-ENOMEM);
+ }
+#endif
+ return shmem_read_folio(memfd->f_mapping, idx);
+}
+
+/*
* Setting SEAL_WRITE requires us to verify there's no pending writer. However,
* via get_user_pages(), drivers might have some pending I/O without any active
* user-space mappings (eg., direct-IO, AIO). Therefore, we look at all folios
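A hedged sketch of the lookup-then-allocate pattern memfd_alloc_folio() is
written for; the wrapper below is hypothetical (it is not the in-tree gup.c
code) and assumes filemap_get_folio() returns an ERR_PTR on a page-cache miss:

#include <linux/pagemap.h>
#include <linux/memfd.h>

static struct folio *demo_get_memfd_folio(struct file *memfd, pgoff_t idx)
{
	struct folio *folio;

	/* Fast path: the folio is already in the memfd's page cache. */
	folio = filemap_get_folio(memfd->f_mapping, idx);
	if (!IS_ERR(folio))
		return folio;

	/* Miss: allocate and insert one (hugetlb or shmem, as above). */
	return memfd_alloc_folio(memfd, idx);
}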
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index d3c830e817e3..581d3e5c9117 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -68,6 +68,8 @@ static int sysctl_memory_failure_early_kill __read_mostly;
static int sysctl_memory_failure_recovery __read_mostly = 1;
+static int sysctl_enable_soft_offline __read_mostly = 1;
+
atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
static bool hw_memory_failure __read_mostly = false;
@@ -141,6 +143,15 @@ static struct ctl_table memory_failure_table[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
},
+ {
+ .procname = "enable_soft_offline",
+ .data = &sysctl_enable_soft_offline,
+ .maxlen = sizeof(sysctl_enable_soft_offline),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
+ }
};
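The knob is a plain boolean sysctl, so it can be flipped from userspace; a
minimal C illustration (hypothetical helper, minimal error handling):

#include <fcntl.h>
#include <unistd.h>

/* Write "0" to /proc/sys/vm/enable_soft_offline so that subsequent
 * soft_offline_page() requests fail with -EOPNOTSUPP, as the
 * soft_offline_page() hunk later in this patch shows. */
static int demo_disable_soft_offline(void)
{
	int fd = open("/proc/sys/vm/enable_soft_offline", O_WRONLY);
	int ret = -1;

	if (fd < 0)
		return -1;
	if (write(fd, "0", 1) == 1)
		ret = 0;
	close(fd);
	return ret;
}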
/*
@@ -294,6 +305,7 @@ int hwpoison_filter(struct page *p)
return 0;
}
+EXPORT_SYMBOL_GPL(hwpoison_filter);
#else
int hwpoison_filter(struct page *p)
{
@@ -301,8 +313,6 @@ int hwpoison_filter(struct page *p)
}
#endif
-EXPORT_SYMBOL_GPL(hwpoison_filter);
-
/*
* Kill all processes that have a poisoned page mapped and then isolate
* the page.
@@ -344,7 +354,7 @@ static int kill_proc(struct to_kill *tk, unsigned long pfn, int flags)
int ret = 0;
pr_err("%#lx: Sending SIGBUS to %s:%d due to hardware memory corruption\n",
- pfn, t->comm, t->pid);
+ pfn, t->comm, task_pid_nr(t));
if ((flags & MF_ACTION_REQUIRED) && (t == current))
ret = force_sig_mceerr(BUS_MCEERR_AR,
@@ -355,14 +365,12 @@ static int kill_proc(struct to_kill *tk, unsigned long pfn, int flags)
* PF_MCE_EARLY set.
* Don't use force here, it's convenient if the signal
* can be temporarily blocked.
- * This could cause a loop when the user sets SIGBUS
- * to SIG_IGN, but hopefully no one will do that?
*/
ret = send_sig_mceerr(BUS_MCEERR_AO, (void __user *)tk->addr,
addr_lsb, t);
if (ret < 0)
pr_info("Error sending signal to %s:%d: %d\n",
- t->comm, t->pid, ret);
+ t->comm, task_pid_nr(t), ret);
return ret;
}
@@ -514,24 +522,17 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
*
* Only do anything when FORCEKILL is set, otherwise just free the
* list (this is used for clean pages which do not need killing)
- * Also when FAIL is set do a force kill because something went
- * wrong earlier.
*/
-static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
+static void kill_procs(struct list_head *to_kill, int forcekill,
unsigned long pfn, int flags)
{
struct to_kill *tk, *next;
list_for_each_entry_safe(tk, next, to_kill, nd) {
if (forcekill) {
- /*
- * In case something went wrong with munmapping
- * make sure the process doesn't catch the
- * signal and then access the memory. Just kill it.
- */
- if (fail || tk->addr == -EFAULT) {
+ if (tk->addr == -EFAULT) {
pr_err("%#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",
- pfn, tk->tsk->comm, tk->tsk->pid);
+ pfn, tk->tsk->comm, task_pid_nr(tk->tsk));
do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
tk->tsk, PIDTYPE_PID);
}
@@ -544,7 +545,7 @@ static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
*/
else if (kill_proc(tk, pfn, flags) < 0)
pr_err("%#lx: Cannot send advisory machine check signal to %s:%d\n",
- pfn, tk->tsk->comm, tk->tsk->pid);
+ pfn, tk->tsk->comm, task_pid_nr(tk->tsk));
}
list_del(&tk->nd);
put_task_struct(tk->tsk);
@@ -834,7 +835,7 @@ static int hwpoison_hugetlb_range(pte_t *ptep, unsigned long hmask,
struct mm_walk *walk)
{
struct hwpoison_walk *hwp = walk->private;
- pte_t pte = huge_ptep_get(ptep);
+ pte_t pte = huge_ptep_get(walk->mm, addr, ptep);
struct hstate *h = hstate_vma(walk->vma);
return check_hwpoisoned_entry(pte, addr, huge_page_shift(h),
@@ -886,6 +887,28 @@ static int kill_accessing_process(struct task_struct *p, unsigned long pfn,
return ret > 0 ? -EHWPOISON : -EFAULT;
}
+/*
+ * MF_IGNORED - The m-f() handler marks the page with PG_hwpoison.
+ * But it could not do more to isolate the page from being accessed again,
+ * nor does it kill the process. This is extremely rare and one of the
+ * potential causes is that the page state has been changed due to an
+ * underlying race condition. This is the most severe outcome.
+ *
+ * MF_FAILED - The m-f() handler marks the page with PG_hwpoison.
+ * It should have killed the process, but it can't isolate the page,
+ * due to conditions such as an extra pin, unmap failure, etc. Accessing
+ * the page again may trigger another MCE and the process will be killed
+ * by the m-f() handler immediately.
+ *
+ * MF_DELAYED - The m-f() handler marks the page with PG_hwpoison.
+ * The page is unmapped, and is removed from the LRU or file mapping.
+ * An attempt to access the page again will trigger a page fault and the
+ * PF handler will kill the process.
+ *
+ * MF_RECOVERED - The m-f() handler marks the page with PG_hwpoison.
+ * The page has been completely isolated, that is, unmapped, taken out of
+ * the buddy system, or hole-punched out of the file mapping.
+ */
static const char *action_name[] = {
[MF_IGNORED] = "Ignored",
[MF_FAILED] = "Failed",
@@ -896,10 +919,9 @@ static const char *action_name[] = {
static const char * const action_page_types[] = {
[MF_MSG_KERNEL] = "reserved kernel page",
[MF_MSG_KERNEL_HIGH_ORDER] = "high-order kernel page",
- [MF_MSG_SLAB] = "kernel slab page",
- [MF_MSG_DIFFERENT_COMPOUND] = "different compound page after locking",
[MF_MSG_HUGE] = "huge page",
[MF_MSG_FREE_HUGE] = "free huge page",
+ [MF_MSG_GET_HWPOISON] = "get hwpoison page",
[MF_MSG_UNMAP_FAILED] = "unmapping failed page",
[MF_MSG_DIRTY_SWAPCACHE] = "dirty swapcache page",
[MF_MSG_CLEAN_SWAPCACHE] = "clean swapcache page",
@@ -913,6 +935,7 @@ static const char * const action_page_types[] = {
[MF_MSG_BUDDY] = "free buddy page",
[MF_MSG_DAX] = "dax page",
[MF_MSG_UNSPLIT_THP] = "unsplit thp",
+ [MF_MSG_ALREADY_POISONED] = "already poisoned",
[MF_MSG_UNKNOWN] = "unknown page",
};
@@ -1020,12 +1043,13 @@ static int me_kernel(struct page_state *ps, struct page *p)
/*
* Page in unknown state. Do nothing.
+ * This is a catch-all in case we fail to make sense of the page state.
*/
static int me_unknown(struct page_state *ps, struct page *p)
{
pr_err("%#lx: Unknown page state\n", page_to_pfn(p));
unlock_page(p);
- return MF_FAILED;
+ return MF_IGNORED;
}
/*
@@ -1094,7 +1118,6 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p)
struct folio *folio = page_folio(p);
struct address_space *mapping = folio_mapping(folio);
- SetPageError(p);
/* TBD: print more information about the file. */
if (mapping) {
/*
@@ -1102,34 +1125,6 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p)
* who check the mapping.
* This way the application knows that something went
* wrong with its dirty file data.
- *
- * There's one open issue:
- *
- * The EIO will be only reported on the next IO
- * operation and then cleared through the IO map.
- * Normally Linux has two mechanisms to pass IO error
- * first through the AS_EIO flag in the address space
- * and then through the PageError flag in the page.
- * Since we drop pages on memory failure handling the
- * only mechanism open to use is through AS_AIO.
- *
- * This has the disadvantage that it gets cleared on
- * the first operation that returns an error, while
- * the PageError bit is more sticky and only cleared
- * when the page is reread or dropped. If an
- * application assumes it will always get error on
- * fsync, but does other operations on the fd before
- * and the page is dropped between then the error
- * will not be properly reported.
- *
- * This can already happen even without hwpoisoned
- * pages: first on metadata IO errors (which only
- * report through AS_EIO) or when the page is dropped
- * at the wrong time.
- *
- * So right now we assume that the application DTRT on
- * the first EIO, but we're not worse than other parts
- * of the kernel.
*/
mapping_set_error(mapping, -EIO);
}
@@ -1141,7 +1136,7 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p)
* Clean and dirty swap cache.
*
* Dirty swap cache page is tricky to handle. The page could live both in page
- * cache and swap cache(ie. page is freshly swapped in). So it could be
+ * table and swap cache(ie. page is freshly swapped in). So it could be
* referenced concurrently by 2 types of PTEs:
* normal PTEs and swap PTEs. We try to handle them consistently by calling
* try_to_unmap(!TTU_HWPOISON) to convert the normal PTEs to swap PTEs,
@@ -1429,6 +1424,8 @@ static int __get_hwpoison_page(struct page *page, unsigned long flags)
return 0;
}
+#define GET_PAGE_MAX_RETRY_NUM 3
+
static int get_any_page(struct page *p, unsigned long flags)
{
int ret = 0, pass = 0;
@@ -1443,12 +1440,12 @@ try_again:
if (!ret) {
if (page_count(p)) {
/* We raced with an allocation, retry. */
- if (pass++ < 3)
+ if (pass++ < GET_PAGE_MAX_RETRY_NUM)
goto try_again;
ret = -EBUSY;
} else if (!PageHuge(p) && !is_free_buddy_page(p)) {
/* We raced with put_page, retry. */
- if (pass++ < 3)
+ if (pass++ < GET_PAGE_MAX_RETRY_NUM)
goto try_again;
ret = -EIO;
}
@@ -1474,7 +1471,7 @@ try_again:
* A page we cannot handle. Check whether we can turn
* it into something we can handle.
*/
- if (pass++ < 3) {
+ if (pass++ < GET_PAGE_MAX_RETRY_NUM) {
put_page(p);
shake_page(p);
count_increased = false;
@@ -1536,7 +1533,7 @@ static int __get_unpoison_page(struct page *page)
* the given page has PG_hwpoison. So it's never reused for other page
* allocations, and __get_unpoison_page() never races with them.
*
- * Return: 0 on failure,
+ * Return: 0 on failure or if the page is a free buddy (hugetlb) page,
* 1 on success for in-use pages in a well-defined state,
* -EIO for pages on which we can not handle memory errors,
* -EBUSY when get_hwpoison_page() has raced with page lifecycle
@@ -1585,7 +1582,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
* This check implies we don't kill processes if their pages
* are in the swap cache early. Those are always late kills.
*/
- if (!page_mapped(p))
+ if (!folio_mapped(folio))
return true;
if (folio_test_swapcache(folio)) {
@@ -1636,10 +1633,10 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
try_to_unmap(folio, ttu);
}
- unmap_success = !page_mapped(p);
+ unmap_success = !folio_mapped(folio);
if (!unmap_success)
pr_err("%#lx: failed to unmap page (folio mapcount=%d)\n",
- pfn, folio_mapcount(page_folio(p)));
+ pfn, folio_mapcount(folio));
/*
* try_to_unmap() might put mlocked page in lru cache, so call
@@ -1660,7 +1657,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
*/
forcekill = folio_test_dirty(folio) || (flags & MF_MUST_KILL) ||
!unmap_success;
- kill_procs(&tokill, forcekill, !unmap_success, pfn, flags);
+ kill_procs(&tokill, forcekill, pfn, flags);
return unmap_success;
}
@@ -1688,7 +1685,12 @@ static int identify_page_state(unsigned long pfn, struct page *p,
return page_action(ps, p, pfn);
}
-static int try_to_split_thp_page(struct page *page)
+/*
+ * When 'release' is 'false', it means that if thp split has failed,
+ * there is still more to do, hence the page refcount we took earlier
+ * is still needed.
+ */
+static int try_to_split_thp_page(struct page *page, bool release)
{
int ret;
@@ -1696,7 +1698,7 @@ static int try_to_split_thp_page(struct page *page)
ret = split_huge_page(page);
unlock_page(page);
- if (unlikely(ret))
+ if (ret && release)
put_page(page);
return ret;
@@ -1724,7 +1726,7 @@ static void unmap_and_kill(struct list_head *to_kill, unsigned long pfn,
unmap_mapping_range(mapping, start, size, 0);
}
- kill_procs(to_kill, flags & MF_MUST_KILL, false, pfn, flags);
+ kill_procs(to_kill, flags & MF_MUST_KILL, pfn, flags);
}
/*
@@ -1912,7 +1914,7 @@ static int folio_set_hugetlb_hwpoison(struct folio *folio, struct page *page)
{
struct llist_head *head;
struct raw_hwp_page *raw_hwp;
- struct raw_hwp_page *p, *next;
+ struct raw_hwp_page *p;
int ret = folio_test_set_hwpoison(folio) ? -EHWPOISON : 0;
/*
@@ -1923,7 +1925,7 @@ static int folio_set_hugetlb_hwpoison(struct folio *folio, struct page *page)
if (folio_test_hugetlb_raw_hwp_unreliable(folio))
return -EHWPOISON;
head = raw_hwp_list_head(folio);
- llist_for_each_entry_safe(p, next, head->first, node) {
+ llist_for_each_entry(p, head->first, node) {
if (p->page == page)
return -EHWPOISON;
}
@@ -2062,6 +2064,7 @@ retry:
if (flags & MF_ACTION_REQUIRED) {
folio = page_folio(p);
res = kill_accessing_process(current, folio_pfn(folio), flags);
+ action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
}
return res;
} else if (res == -EBUSY) {
@@ -2069,7 +2072,7 @@ retry:
flags |= MF_NO_RETRY;
goto retry;
}
- return action_result(pfn, MF_MSG_UNKNOWN, MF_IGNORED);
+ return action_result(pfn, MF_MSG_GET_HWPOISON, MF_IGNORED);
}
folio = page_folio(p);
@@ -2104,7 +2107,7 @@ retry:
if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
folio_unlock(folio);
- return action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
+ return action_result(pfn, MF_MSG_UNMAP_FAILED, MF_FAILED);
}
return identify_page_state(pfn, p, page_flags);
@@ -2125,14 +2128,10 @@ static inline unsigned long folio_free_raw_hwp(struct folio *folio, bool flag)
/* Drop the extra refcount in case we come from madvise() */
static void put_ref_page(unsigned long pfn, int flags)
{
- struct page *page;
-
if (!(flags & MF_COUNT_INCREASED))
return;
- page = pfn_to_page(pfn);
- if (page)
- put_page(page);
+ put_page(pfn_to_page(pfn));
}
static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
@@ -2167,6 +2166,22 @@ out:
return rc;
}
+/*
+ * The calling condition is as follows: the thp split failed, the page
+ * might have been RDMA pinned, and not much can be done for recovery.
+ * But a SIGBUS should be delivered with the vaddr provided so that the
+ * user application has a chance to recover. Also, an application
+ * process's election for MCE early kill will be honored.
+ */
+static void kill_procs_now(struct page *p, unsigned long pfn, int flags,
+ struct folio *folio)
+{
+ LIST_HEAD(tokill);
+
+ collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
+ kill_procs(&tokill, true, pfn, flags);
+}
+
/**
* memory_failure - Handle memory failure of a page.
* @pfn: Page Number of the corrupted page
@@ -2238,6 +2253,7 @@ try_again:
res = kill_accessing_process(current, pfn, flags);
if (flags & MF_COUNT_INCREASED)
put_page(p);
+ action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
goto unlock_mutex;
}
@@ -2274,12 +2290,24 @@ try_again:
}
goto unlock_mutex;
} else if (res < 0) {
- res = action_result(pfn, MF_MSG_UNKNOWN, MF_IGNORED);
+ res = action_result(pfn, MF_MSG_GET_HWPOISON, MF_IGNORED);
goto unlock_mutex;
}
}
folio = page_folio(p);
+
+ /* filter pages that are protected from hwpoison test by users */
+ folio_lock(folio);
+ if (hwpoison_filter(p)) {
+ ClearPageHWPoison(p);
+ folio_unlock(folio);
+ folio_put(folio);
+ res = -EOPNOTSUPP;
+ goto unlock_mutex;
+ }
+ folio_unlock(folio);
+
if (folio_test_large(folio)) {
/*
* The flag must be set after the refcount is bumped
@@ -2295,8 +2323,11 @@ try_again:
* page is a valid handlable page.
*/
folio_set_has_hwpoisoned(folio);
- if (try_to_split_thp_page(p) < 0) {
- res = action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
+ if (try_to_split_thp_page(p, false) < 0) {
+ res = -EHWPOISON;
+ kill_procs_now(p, pfn, flags, folio);
+ put_page(p);
+ action_result(pfn, MF_MSG_UNSPLIT_THP, MF_FAILED);
goto unlock_mutex;
}
VM_BUG_ON_PAGE(!page_count(p), p);
@@ -2317,22 +2348,10 @@ try_again:
/*
* We're only intended to deal with the non-Compound page here.
- * However, the page could have changed compound pages due to
- * race window. If this happens, we could try again to hopefully
- * handle the page next round.
+ * The page cannot become a compound page again as the folio has been
+ * split and an extra refcnt is held.
*/
- if (folio_test_large(folio)) {
- if (retry) {
- ClearPageHWPoison(p);
- folio_unlock(folio);
- folio_put(folio);
- flags &= ~MF_COUNT_INCREASED;
- retry = false;
- goto try_again;
- }
- res = action_result(pfn, MF_MSG_DIFFERENT_COMPOUND, MF_IGNORED);
- goto unlock_page;
- }
+ WARN_ON(folio_test_large(folio));
/*
* We use page flags to determine what action should be taken, but
@@ -2343,14 +2362,6 @@ try_again:
*/
page_flags = folio->flags;
- if (hwpoison_filter(p)) {
- ClearPageHWPoison(p);
- folio_unlock(folio);
- folio_put(folio);
- res = -EOPNOTSUPP;
- goto unlock_mutex;
- }
-
/*
* __munlock_folio() may clear a writeback folio's LRU flag without
* the folio lock. We need to wait for writeback completion for this
@@ -2370,7 +2381,7 @@ try_again:
* Abort on fail: __filemap_remove_folio() assumes unmapped page.
*/
if (!hwpoison_user_mappings(folio, p, pfn, flags)) {
- res = action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
+ res = action_result(pfn, MF_MSG_UNMAP_FAILED, MF_FAILED);
goto unlock_page;
}
@@ -2502,7 +2513,7 @@ static int __init memory_failure_init(void)
core_initcall(memory_failure_init);
#undef pr_fmt
-#define pr_fmt(fmt) "" fmt
+#define pr_fmt(fmt) "Unpoison: " fmt
#define unpoison_pr_info(fmt, pfn, rs) \
({ \
if (__ratelimit(rs)) \
@@ -2526,7 +2537,7 @@ int unpoison_memory(unsigned long pfn)
struct folio *folio;
struct page *p;
int ret = -EBUSY, ghp;
- unsigned long count = 1;
+ unsigned long count;
bool huge = false;
static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
DEFAULT_RATELIMIT_BURST);
@@ -2540,27 +2551,27 @@ int unpoison_memory(unsigned long pfn)
mutex_lock(&mf_mutex);
if (hw_memory_failure) {
- unpoison_pr_info("Unpoison: Disabled after HW memory failure %#lx\n",
+ unpoison_pr_info("%#lx: disabled after HW memory failure\n",
pfn, &unpoison_rs);
ret = -EOPNOTSUPP;
goto unlock_mutex;
}
if (is_huge_zero_folio(folio)) {
- unpoison_pr_info("Unpoison: huge zero page is not supported %#lx\n",
+ unpoison_pr_info("%#lx: huge zero page is not supported\n",
pfn, &unpoison_rs);
ret = -EOPNOTSUPP;
goto unlock_mutex;
}
if (!PageHWPoison(p)) {
- unpoison_pr_info("Unpoison: Page was already unpoisoned %#lx\n",
+ unpoison_pr_info("%#lx: page was already unpoisoned\n",
pfn, &unpoison_rs);
goto unlock_mutex;
}
if (folio_ref_count(folio) > 1) {
- unpoison_pr_info("Unpoison: Someone grabs the hwpoison page %#lx\n",
+ unpoison_pr_info("%#lx: someone grabs the hwpoison page\n",
pfn, &unpoison_rs);
goto unlock_mutex;
}
@@ -2569,18 +2580,14 @@ int unpoison_memory(unsigned long pfn)
folio_test_reserved(folio) || folio_test_offline(folio))
goto unlock_mutex;
- /*
- * Note that folio->_mapcount is overloaded in SLAB, so the simple test
- * in folio_mapped() has to be done after folio_test_slab() is checked.
- */
if (folio_mapped(folio)) {
- unpoison_pr_info("Unpoison: Someone maps the hwpoison page %#lx\n",
+ unpoison_pr_info("%#lx: someone maps the hwpoison page\n",
pfn, &unpoison_rs);
goto unlock_mutex;
}
if (folio_mapping(folio)) {
- unpoison_pr_info("Unpoison: the hwpoison page has non-NULL mapping %#lx\n",
+ unpoison_pr_info("%#lx: the hwpoison page has non-NULL mapping\n",
pfn, &unpoison_rs);
goto unlock_mutex;
}
@@ -2599,7 +2606,7 @@ int unpoison_memory(unsigned long pfn)
ret = put_page_back_buddy(p) ? 0 : -EBUSY;
} else {
ret = ghp;
- unpoison_pr_info("Unpoison: failed to grab page %#lx\n",
+ unpoison_pr_info("%#lx: failed to grab page\n",
pfn, &unpoison_rs);
}
} else {
@@ -2624,13 +2631,16 @@ unlock_mutex:
if (!ret) {
if (!huge)
num_poisoned_pages_sub(pfn, 1);
- unpoison_pr_info("Unpoison: Software-unpoisoned page %#lx\n",
+ unpoison_pr_info("%#lx: software-unpoisoned page\n",
page_to_pfn(p), &unpoison_rs);
}
return ret;
}
EXPORT_SYMBOL(unpoison_memory);
+#undef pr_fmt
+#define pr_fmt(fmt) "Soft offline: " fmt
+
static bool mf_isolate_folio(struct folio *folio, struct list_head *pagelist)
{
bool isolated = false;
@@ -2685,8 +2695,8 @@ static int soft_offline_in_use_page(struct page *page)
};
if (!huge && folio_test_large(folio)) {
- if (try_to_split_thp_page(page)) {
- pr_info("soft offline: %#lx: thp split failed\n", pfn);
+ if (try_to_split_thp_page(page, true)) {
+ pr_info("%#lx: thp split failed\n", pfn);
return -EBUSY;
}
folio = page_folio(page);
@@ -2698,7 +2708,7 @@ static int soft_offline_in_use_page(struct page *page)
if (PageHWPoison(page)) {
folio_unlock(folio);
folio_put(folio);
- pr_info("soft offline: %#lx page already poisoned\n", pfn);
+ pr_info("%#lx: page already poisoned\n", pfn);
return 0;
}
@@ -2711,7 +2721,7 @@ static int soft_offline_in_use_page(struct page *page)
folio_unlock(folio);
if (ret) {
- pr_info("soft_offline: %#lx: invalidated\n", pfn);
+ pr_info("%#lx: invalidated\n", pfn);
page_handle_poison(page, false, true);
return 0;
}
@@ -2728,13 +2738,13 @@ static int soft_offline_in_use_page(struct page *page)
if (!list_empty(&pagelist))
putback_movable_pages(&pagelist);
- pr_info("soft offline: %#lx: %s migration failed %ld, type %pGp\n",
+ pr_info("%#lx: %s migration failed %ld, type %pGp\n",
pfn, msg_page[huge], ret, &page->flags);
if (ret > 0)
ret = -EBUSY;
}
} else {
- pr_info("soft offline: %#lx: %s isolation failed, page count %d, type %pGp\n",
+ pr_info("%#lx: %s isolation failed, page count %d, type %pGp\n",
pfn, msg_page[huge], page_count(page), &page->flags);
ret = -EBUSY;
}
@@ -2746,8 +2756,9 @@ static int soft_offline_in_use_page(struct page *page)
* @pfn: pfn to soft-offline
* @flags: flags. Same as memory_failure().
*
- * Returns 0 on success
- * -EOPNOTSUPP for hwpoison_filter() filtered the error event
+ * Returns 0 on success,
+ * -EOPNOTSUPP for hwpoison_filter() filtered the error event, or
+ * disabled by /proc/sys/vm/enable_soft_offline,
* < 0 otherwise negated errno.
*
* Soft offline a page, by migration or invalidation,
@@ -2783,10 +2794,16 @@ int soft_offline_page(unsigned long pfn, int flags)
return -EIO;
}
+ if (!sysctl_enable_soft_offline) {
+ pr_info_once("disabled by /proc/sys/vm/enable_soft_offline\n");
+ put_ref_page(pfn, flags);
+ return -EOPNOTSUPP;
+ }
+
mutex_lock(&mf_mutex);
if (PageHWPoison(page)) {
- pr_info("%s: %#lx page already poisoned\n", __func__, pfn);
+ pr_info("%#lx: page already poisoned\n", pfn);
put_ref_page(pfn, flags);
mutex_unlock(&mf_mutex);
return 0;
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index 6632102bd5c9..4775b3a3dabe 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -43,6 +43,7 @@ static LIST_HEAD(memory_tiers);
static LIST_HEAD(default_memory_types);
static struct node_memory_type_map node_memory_types[MAX_NUMNODES];
struct memory_dev_type *default_dram_type;
+nodemask_t default_dram_nodes __initdata = NODE_MASK_NONE;
static const struct bus_type memory_tier_subsys = {
.name = "memory_tiering",
@@ -671,28 +672,35 @@ EXPORT_SYMBOL_GPL(mt_put_memory_types);
/*
* This is invoked via `late_initcall()` to initialize memory tiers for
- * CPU-less memory nodes after driver initialization, which is
- * expected to provide `adistance` algorithms.
+ * memory nodes, both with and without CPUs. After the initialization of
+ * firmware and devices, adistance algorithms are expected to be provided.
*/
static int __init memory_tier_late_init(void)
{
int nid;
+ struct memory_tier *memtier;
+ get_online_mems();
guard(mutex)(&memory_tier_lock);
+
+ /* Assign each uninitialized N_MEMORY node to a memory tier. */
for_each_node_state(nid, N_MEMORY) {
/*
- * Some device drivers may have initialized memory tiers
- * between `memory_tier_init()` and `memory_tier_late_init()`,
- * potentially bringing online memory nodes and
- * configuring memory tiers. Exclude them here.
+ * Some device drivers may have already initialized
+ * memory tiers, potentially bringing memory nodes
+ * online and configuring them. Exclude such nodes here.
*/
if (node_memory_types[nid].memtype)
continue;
- set_node_memory_tier(nid);
+ memtier = set_node_memory_tier(nid);
+ if (IS_ERR(memtier))
+ continue;
}
establish_demotion_targets();
+ put_online_mems();
return 0;
}
@@ -875,8 +883,7 @@ static int __meminit memtier_hotplug_callback(struct notifier_block *self,
static int __init memory_tier_init(void)
{
- int ret, node;
- struct memory_tier *memtier;
+ int ret;
ret = subsys_virtual_register(&memory_tier_subsys, NULL);
if (ret)
@@ -887,7 +894,8 @@ static int __init memory_tier_init(void)
GFP_KERNEL);
WARN_ON(!node_demotion);
#endif
- mutex_lock(&memory_tier_lock);
+
+ guard(mutex)(&memory_tier_lock);
/*
* For now we can have 4 faster memory tiers with smaller adistance
* than default DRAM tier.
@@ -897,29 +905,9 @@ static int __init memory_tier_init(void)
if (IS_ERR(default_dram_type))
panic("%s() failed to allocate default DRAM tier\n", __func__);
- /*
- * Look at all the existing N_MEMORY nodes and add them to
- * default memory tier or to a tier if we already have memory
- * types assigned.
- */
- for_each_node_state(node, N_MEMORY) {
- if (!node_state(node, N_CPU))
- /*
- * Defer memory tier initialization on
- * CPUless numa nodes. These will be initialized
- * after firmware and devices are initialized.
- */
- continue;
-
- memtier = set_node_memory_tier(node);
- if (IS_ERR(memtier))
- /*
- * Continue with memtiers we are able to setup
- */
- break;
- }
- establish_demotion_targets();
- mutex_unlock(&memory_tier_lock);
+ /* Record nodes with memory and CPU to set default DRAM performance. */
+ nodes_and(default_dram_nodes, node_states[N_MEMORY],
+ node_states[N_CPU]);
hotplug_memory_notifier(memtier_hotplug_callback, MEMTIER_HOTPLUG_PRI);
return 0;
diff --git a/mm/memory.c b/mm/memory.c
index d10e616d7389..4bcd79619574 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -365,6 +365,8 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
struct vm_area_struct *vma, unsigned long floor,
unsigned long ceiling, bool mm_wr_locked)
{
+ struct unlink_vma_file_batch vb;
+
do {
unsigned long addr = vma->vm_start;
struct vm_area_struct *next;
@@ -384,12 +386,15 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
if (mm_wr_locked)
vma_start_write(vma);
unlink_anon_vmas(vma);
- unlink_file_vma(vma);
if (is_vm_hugetlb_page(vma)) {
+ unlink_file_vma(vma);
hugetlb_free_pgd_range(tlb, addr, vma->vm_end,
floor, next ? next->vm_start : ceiling);
} else {
+ unlink_file_vma_batch_init(&vb);
+ unlink_file_vma_batch_add(&vb, vma);
+
/*
* Optimization: gather nearby vmas into one call down
*/
@@ -402,8 +407,9 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
if (mm_wr_locked)
vma_start_write(vma);
unlink_anon_vmas(vma);
- unlink_file_vma(vma);
+ unlink_file_vma_batch_add(&vb, vma);
}
+ unlink_file_vma_batch_final(&vb);
free_pgd_range(tlb, addr, vma->vm_end,
floor, next ? next->vm_start : ceiling);
}
@@ -575,10 +581,13 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
* VM_MIXEDMAP mappings can likewise contain memory with or without "struct
* page" backing, however the difference is that _all_ pages with a struct
* page (that is, those where pfn_valid is true) are refcounted and considered
- * normal pages by the VM. The disadvantage is that pages are refcounted
- * (which can be slower and simply not an option for some PFNMAP users). The
- * advantage is that we don't have to follow the strict linearity rule of
- * PFNMAP mappings in order to support COWable mappings.
+ * normal pages by the VM. The only exception are zeropages, which are
+ * *never* refcounted.
+ *
+ * The disadvantage is that pages are refcounted (which can be slower and
+ * simply not an option for some PFNMAP users). The advantage is that we
+ * don't have to follow the strict linearity rule of PFNMAP mappings in
+ * order to support COWable mappings.
*
*/
struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
@@ -616,6 +625,8 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
if (vma->vm_flags & VM_MIXEDMAP) {
if (!pfn_valid(pfn))
return NULL;
+ if (is_zero_pfn(pfn))
+ return NULL;
goto out;
} else {
unsigned long off;
@@ -641,6 +652,7 @@ check_pfn:
* eg. VDSO mappings can cause them to exist.
*/
out:
+ VM_WARN_ON_ONCE(is_zero_pfn(pfn));
return pfn_to_page(pfn);
}
@@ -918,7 +930,7 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
*prealloc = NULL;
copy_user_highpage(&new_folio->page, page, addr, src_vma);
__folio_mark_uptodate(new_folio);
- folio_add_new_anon_rmap(new_folio, dst_vma, addr);
+ folio_add_new_anon_rmap(new_folio, dst_vma, addr, RMAP_EXCLUSIVE);
folio_add_lru_vma(new_folio, dst_vma);
rss[MM_ANONPAGES]++;
@@ -1977,10 +1989,48 @@ pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
return pte_alloc_map_lock(mm, pmd, addr, ptl);
}
-static int validate_page_before_insert(struct page *page)
+static bool vm_mixed_zeropage_allowed(struct vm_area_struct *vma)
+{
+ VM_WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP);
+ /*
+ * Whoever wants to forbid the zeropage after some zeropages
+ * might already have been mapped has to scan the page tables and
+ * bail out on any zeropages. Zeropages in COW mappings can
+ * be unshared using FAULT_FLAG_UNSHARE faults.
+ */
+ if (mm_forbids_zeropage(vma->vm_mm))
+ return false;
+ /* zeropages in COW mappings are common and unproblematic. */
+ if (is_cow_mapping(vma->vm_flags))
+ return true;
+ /* Mappings that do not allow for writable PTEs are unproblematic. */
+ if (!(vma->vm_flags & (VM_WRITE | VM_MAYWRITE)))
+ return true;
+ /*
+ * Why not allow any VMA that has vm_ops->pfn_mkwrite? GUP could
+ * find the shared zeropage and longterm-pin it, which would
+ * be problematic as soon as the zeropage gets replaced by a different
+ * page due to vma->vm_ops->pfn_mkwrite, because what's mapped would
+ * now differ to what GUP looked up. FSDAX is incompatible to
+ * FOLL_LONGTERM and VM_IO is incompatible to GUP completely (see
+ * check_vma_flags).
+ */
+ return vma->vm_ops && vma->vm_ops->pfn_mkwrite &&
+ (vma_is_fsdax(vma) || vma->vm_flags & VM_IO);
+}
+
+static int validate_page_before_insert(struct vm_area_struct *vma,
+ struct page *page)
{
struct folio *folio = page_folio(page);
+ if (!folio_ref_count(folio))
+ return -EINVAL;
+ if (unlikely(is_zero_folio(folio))) {
+ if (!vm_mixed_zeropage_allowed(vma))
+ return -EINVAL;
+ return 0;
+ }
if (folio_test_anon(folio) || folio_test_slab(folio) ||
page_has_type(page))
return -EINVAL;
@@ -1992,24 +2042,23 @@ static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
unsigned long addr, struct page *page, pgprot_t prot)
{
struct folio *folio = page_folio(page);
+ pte_t pteval;
if (!pte_none(ptep_get(pte)))
return -EBUSY;
/* Ok, finally just insert the thing.. */
- folio_get(folio);
- inc_mm_counter(vma->vm_mm, mm_counter_file(folio));
- folio_add_file_rmap_pte(folio, page, vma);
- set_pte_at(vma->vm_mm, addr, pte, mk_pte(page, prot));
+ pteval = mk_pte(page, prot);
+ if (unlikely(is_zero_folio(folio))) {
+ pteval = pte_mkspecial(pteval);
+ } else {
+ folio_get(folio);
+ inc_mm_counter(vma->vm_mm, mm_counter_file(folio));
+ folio_add_file_rmap_pte(folio, page, vma);
+ }
+ set_pte_at(vma->vm_mm, addr, pte, pteval);
return 0;
}
-/*
- * This is the old fallback for page remapping.
- *
- * For historical reasons, it only allows reserved pages. Only
- * old drivers should use this, and they needed to mark their
- * pages reserved for the old functions anyway.
- */
static int insert_page(struct vm_area_struct *vma, unsigned long addr,
struct page *page, pgprot_t prot)
{
@@ -2017,7 +2066,7 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
pte_t *pte;
spinlock_t *ptl;
- retval = validate_page_before_insert(page);
+ retval = validate_page_before_insert(vma, page);
if (retval)
goto out;
retval = -ENOMEM;
@@ -2035,9 +2084,7 @@ static int insert_page_in_batch_locked(struct vm_area_struct *vma, pte_t *pte,
{
int err;
- if (!page_count(page))
- return -EINVAL;
- err = validate_page_before_insert(page);
+ err = validate_page_before_insert(vma, page);
if (err)
return err;
return insert_page_into_pte_locked(vma, pte, addr, page, prot);
@@ -2143,7 +2190,8 @@ EXPORT_SYMBOL(vm_insert_pages);
* @page: source kernel page
*
* This allows drivers to insert individual pages they've allocated
- * into a user vma.
+ * into a user vma. The zeropage is supported in some VMAs,
+ * see vm_mixed_zeropage_allowed().
*
* The page has to be a nice clean _individual_ kernel allocation.
* If you allocate a compound page, you need to have marked it as
@@ -2170,8 +2218,6 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
{
if (addr < vma->vm_start || addr >= vma->vm_end)
return -EFAULT;
- if (!page_count(page))
- return -EINVAL;
if (!(vma->vm_flags & VM_MIXEDMAP)) {
BUG_ON(mmap_read_trylock(vma->vm_mm));
BUG_ON(vma->vm_flags & VM_PFNMAP);
@@ -2189,6 +2235,8 @@ EXPORT_SYMBOL(vm_insert_page);
* @offset: user's requested vm_pgoff
*
* This allows drivers to map range of kernel pages into a user vma.
+ * The zeropage is supported in some VMAs, see
+ * vm_mixed_zeropage_allowed().
*
* Return: 0 on success and error code otherwise.
*/
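A hedged sketch of the case the kernel-doc above points at: a driver whose
mapping can never gain writable PTEs may now hand the shared zeropage to
vm_insert_page(), which the new vm_mixed_zeropage_allowed() check permits.
The fault handler below is hypothetical:

#include <linux/mm.h>

static vm_fault_t demo_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;

	/* Only safe because this VMA cannot gain writable PTEs. */
	if (WARN_ON_ONCE(vma->vm_flags & (VM_WRITE | VM_MAYWRITE)))
		return VM_FAULT_SIGBUS;

	if (vm_insert_page(vma, vmf->address, ZERO_PAGE(vmf->address)))
		return VM_FAULT_SIGBUS;
	return VM_FAULT_NOPAGE;
}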
@@ -2404,8 +2452,11 @@ vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
}
EXPORT_SYMBOL(vmf_insert_pfn);
-static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn)
+static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn, bool mkwrite)
{
+ if (unlikely(is_zero_pfn(pfn_t_to_pfn(pfn))) &&
+ (mkwrite || !vm_mixed_zeropage_allowed(vma)))
+ return false;
/* these checks mirror the abort conditions in vm_normal_page */
if (vma->vm_flags & VM_MIXEDMAP)
return true;
@@ -2424,7 +2475,8 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
pgprot_t pgprot = vma->vm_page_prot;
int err;
- BUG_ON(!vm_mixed_ok(vma, pfn));
+ if (!vm_mixed_ok(vma, pfn, mkwrite))
+ return VM_FAULT_SIGBUS;
if (addr < vma->vm_start || addr >= vma->vm_end)
return VM_FAULT_SIGBUS;
@@ -2481,7 +2533,6 @@ vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
{
return __vm_insert_mixed(vma, addr, pfn, true);
}
-EXPORT_SYMBOL(vmf_insert_mixed_mkwrite);
/*
* maps a range of physical memory into the requested pages. the old
@@ -2970,10 +3021,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
unsigned long addr = vmf->address;
if (likely(src)) {
- if (copy_mc_user_highpage(dst, src, addr, vma)) {
- memory_failure_queue(page_to_pfn(src), 0);
+ if (copy_mc_user_highpage(dst, src, addr, vma))
return -EHWPOISON;
- }
return 0;
}
@@ -3172,6 +3221,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
pte_t entry;
VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
+ VM_WARN_ON(is_zero_pfn(pte_pfn(vmf->orig_pte)));
if (folio) {
VM_BUG_ON(folio_test_anon(folio) &&
@@ -3349,7 +3399,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
* some TLBs while the old PTE remains in others.
*/
ptep_clear_flush(vma, vmf->address, vmf->pte);
- folio_add_new_anon_rmap(new_folio, vma, vmf->address);
+ folio_add_new_anon_rmap(new_folio, vma, vmf->address, RMAP_EXCLUSIVE);
folio_add_lru_vma(new_folio, vma);
BUG_ON(unshare && pte_write(entry));
set_pte_at(mm, vmf->address, vmf->pte, entry);
@@ -3866,7 +3916,7 @@ static inline bool should_try_to_free_swap(struct folio *folio,
* reference only in case it's likely that we'll be the exlusive user.
*/
return (fault_flags & FAULT_FLAG_WRITE) && !folio_test_ksm(folio) &&
- folio_ref_count(folio) == 2;
+ folio_ref_count(folio) == (1 + folio_nr_pages(folio));
}
static vm_fault_t pte_marker_clear(struct vm_fault *vmf)
@@ -3957,6 +4007,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
pte_t pte;
vm_fault_t ret = 0;
void *shadow = NULL;
+ int nr_pages;
+ unsigned long page_idx;
+ unsigned long address;
+ pte_t *ptep;
if (!pte_unmap_same(vmf))
goto out;
@@ -4058,7 +4112,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
/* To provide entry to swap_read_folio() */
folio->swap = entry;
- swap_read_folio(folio, true, NULL);
+ swap_read_folio(folio, NULL);
folio->private = NULL;
}
} else {
@@ -4155,6 +4209,38 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
goto out_nomap;
}
+ nr_pages = 1;
+ page_idx = 0;
+ address = vmf->address;
+ ptep = vmf->pte;
+ if (folio_test_large(folio) && folio_test_swapcache(folio)) {
+ int nr = folio_nr_pages(folio);
+ unsigned long idx = folio_page_idx(folio, page);
+ unsigned long folio_start = address - idx * PAGE_SIZE;
+ unsigned long folio_end = folio_start + nr * PAGE_SIZE;
+ pte_t *folio_ptep;
+ pte_t folio_pte;
+
+ if (unlikely(folio_start < max(address & PMD_MASK, vma->vm_start)))
+ goto check_folio;
+ if (unlikely(folio_end > pmd_addr_end(address, vma->vm_end)))
+ goto check_folio;
+
+ folio_ptep = vmf->pte - idx;
+ folio_pte = ptep_get(folio_ptep);
+ if (!pte_same(folio_pte, pte_move_swp_offset(vmf->orig_pte, -idx)) ||
+ swap_pte_batch(folio_ptep, nr, folio_pte) != nr)
+ goto check_folio;
+
+ page_idx = idx;
+ address = folio_start;
+ ptep = folio_ptep;
+ nr_pages = nr;
+ entry = folio->swap;
+ page = &folio->page;
+ }
+
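	/*
	 * Worked example for the batching checks above (illustrative numbers,
	 * not from the patch): a 16-page (64KiB with 4KiB pages) swapcache
	 * folio, faulted at an address A that maps page index idx = 5 of the
	 * folio:
	 *
	 *	folio_start = A - 5 * PAGE_SIZE
	 *	folio_end   = folio_start + 16 * PAGE_SIZE
	 *
	 * The whole range must stay inside the VMA and inside the PTE table
	 * that vmf->pte points into, and all 16 swap PTEs must form one
	 * contiguous batch; otherwise only the single faulting page is
	 * mapped below.
	 */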
+check_folio:
/*
* PG_anon_exclusive reuses PG_mappedtodisk for anon pages. A swap pte
* must never point at an anonymous page in the swapcache that is
@@ -4214,13 +4300,17 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
* We're already holding a reference on the page but haven't mapped it
* yet.
*/
- swap_free(entry);
+ swap_free_nr(entry, nr_pages);
if (should_try_to_free_swap(folio, vma, vmf->flags))
folio_free_swap(folio);
- inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
- dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
+ add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
pte = mk_pte(page, vma->vm_page_prot);
+ if (pte_swp_soft_dirty(vmf->orig_pte))
+ pte = pte_mksoft_dirty(pte);
+ if (pte_swp_uffd_wp(vmf->orig_pte))
+ pte = pte_mkuffd_wp(pte);
/*
* Same logic as in do_wp_page(); however, optimize for pages that are
@@ -4230,32 +4320,43 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
*/
if (!folio_test_ksm(folio) &&
(exclusive || folio_ref_count(folio) == 1)) {
- if (vmf->flags & FAULT_FLAG_WRITE) {
- pte = maybe_mkwrite(pte_mkdirty(pte), vma);
- vmf->flags &= ~FAULT_FLAG_WRITE;
+ if ((vma->vm_flags & VM_WRITE) && !userfaultfd_pte_wp(vma, pte) &&
+ !pte_needs_soft_dirty_wp(vma, pte)) {
+ pte = pte_mkwrite(pte, vma);
+ if (vmf->flags & FAULT_FLAG_WRITE) {
+ pte = pte_mkdirty(pte);
+ vmf->flags &= ~FAULT_FLAG_WRITE;
+ }
}
rmap_flags |= RMAP_EXCLUSIVE;
}
- flush_icache_page(vma, page);
- if (pte_swp_soft_dirty(vmf->orig_pte))
- pte = pte_mksoft_dirty(pte);
- if (pte_swp_uffd_wp(vmf->orig_pte))
- pte = pte_mkuffd_wp(pte);
- vmf->orig_pte = pte;
+ folio_ref_add(folio, nr_pages - 1);
+ flush_icache_pages(vma, page, nr_pages);
+ vmf->orig_pte = pte_advance_pfn(pte, page_idx);
/* ksm created a completely new copy */
if (unlikely(folio != swapcache && swapcache)) {
- folio_add_new_anon_rmap(folio, vma, vmf->address);
+ folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, vma);
+ } else if (!folio_test_anon(folio)) {
+ /*
+ * We currently only expect small !anon folios, which are either
+ * fully exclusive or fully shared. If we ever get large folios
+ * here, we have to be careful.
+ */
+ VM_WARN_ON_ONCE(folio_test_large(folio));
+ VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+ folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
} else {
- folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
+ folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
rmap_flags);
}
VM_BUG_ON(!folio_test_anon(folio) ||
(pte_write(pte) && !PageAnonExclusive(page)));
- set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
- arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
+ set_ptes(vma->vm_mm, address, ptep, pte, nr_pages);
+ arch_do_swap_page_nr(vma->vm_mm, vma, address,
+ pte, pte, nr_pages);
folio_unlock(folio);
if (folio != swapcache && swapcache) {
@@ -4279,7 +4380,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
}
/* No need to invalidate - it was non-present before */
- update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
+ update_mmu_cache_range(vmf, vma, address, ptep, nr_pages);
unlock:
if (vmf->pte)
pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -4384,7 +4485,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
goto next;
}
folio_throttle_swaprate(folio, gfp);
- clear_huge_page(&folio->page, vmf->address, 1 << order);
+ folio_zero_user(folio, vmf->address);
return folio;
}
next:
@@ -4410,7 +4511,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
vm_fault_t ret = 0;
int nr_pages = 1;
pte_t entry;
- int i;
/* File mapping without ->vm_ops ? */
if (vma->vm_flags & VM_SHARED)
@@ -4480,8 +4580,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
update_mmu_tlb(vma, addr, vmf->pte);
goto release;
} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
- for (i = 0; i < nr_pages; i++)
- update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
+ update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
goto release;
}
@@ -4501,7 +4600,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
#endif
- folio_add_new_anon_rmap(folio, vma, addr);
+ folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, vma);
setpte:
if (vmf_orig_pte_uffd_wp(vmf))
@@ -4541,7 +4640,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
* lock_page(B)
* lock_page(B)
* pte_alloc_one
- * shrink_page_list
+ * shrink_folio_list
* wait_on_page_writeback(A)
* SetPageWriteback(B)
* unlock_page(B)
@@ -4699,7 +4798,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
/* copy-on-write page */
if (write && !(vma->vm_flags & VM_SHARED)) {
VM_BUG_ON_FOLIO(nr != 1, folio);
- folio_add_new_anon_rmap(folio, vma, addr);
+ folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, vma);
} else {
folio_add_file_rmap_ptes(folio, page, nr, vma);
@@ -4737,9 +4836,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct page *page;
+ struct folio *folio;
vm_fault_t ret;
bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
!(vma->vm_flags & VM_SHARED);
+ int type, nr_pages;
+ unsigned long addr = vmf->address;
/* Did we COW the page? */
if (is_cow)
@@ -4770,24 +4872,62 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
return VM_FAULT_OOM;
}
+ folio = page_folio(page);
+ nr_pages = folio_nr_pages(folio);
+
+ /*
+ * Use the per-page fault to maintain the uffd semantics, and the same
+ * approach also applies to non-anonymous-shmem faults to avoid
+ * inflating the RSS of the process.
+ */
+ if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+ nr_pages = 1;
+ } else if (nr_pages > 1) {
+ pgoff_t idx = folio_page_idx(folio, page);
+ /* The page offset of vmf->address within the VMA. */
+ pgoff_t vma_off = vmf->pgoff - vmf->vma->vm_pgoff;
+ /* The index of the entry in the pagetable for fault page. */
+ pgoff_t pte_off = pte_index(vmf->address);
+
+ /*
+ * Fall back to the per-page fault in case the folio in the page
+ * cache extends beyond the VMA limits or the PMD pagetable limits.
+ */
+ if (unlikely(vma_off < idx ||
+ vma_off + (nr_pages - idx) > vma_pages(vma) ||
+ pte_off < idx ||
+ pte_off + (nr_pages - idx) > PTRS_PER_PTE)) {
+ nr_pages = 1;
+ } else {
+ /* Now we can set mappings for the whole large folio. */
+ addr = vmf->address - idx * PAGE_SIZE;
+ page = &folio->page;
+ }
+ }
+
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
- vmf->address, &vmf->ptl);
+ addr, &vmf->ptl);
if (!vmf->pte)
return VM_FAULT_NOPAGE;
/* Re-check under ptl */
- if (likely(!vmf_pte_changed(vmf))) {
- struct folio *folio = page_folio(page);
- int type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
-
- set_pte_range(vmf, folio, page, 1, vmf->address);
- add_mm_counter(vma->vm_mm, type, 1);
- ret = 0;
- } else {
- update_mmu_tlb(vma, vmf->address, vmf->pte);
+ if (nr_pages == 1 && unlikely(vmf_pte_changed(vmf))) {
+ update_mmu_tlb(vma, addr, vmf->pte);
+ ret = VM_FAULT_NOPAGE;
+ goto unlock;
+ } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
+ update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
ret = VM_FAULT_NOPAGE;
+ goto unlock;
}
+ folio_ref_add(folio, nr_pages - 1);
+ set_pte_range(vmf, folio, page, nr_pages, addr);
+ type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
+ add_mm_counter(vma->vm_mm, type, nr_pages);
+ ret = 0;
+
+unlock:
pte_unmap_unlock(vmf->pte, vmf->ptl);
return ret;
}
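To make the new fallback condition in finish_fault() concrete, an
illustrative case (hypothetical numbers):

	/*
	 * A 16-page anon-shmem folio with vmf->address hitting page idx = 3:
	 * mapping the whole folio needs 3 PTE slots before the fault address
	 * and 13 from it onward, all inside the VMA and inside one PTE table,
	 * i.e. with vma_off and pte_off as computed above:
	 *
	 *	vma_off >= 3 && vma_off + 13 <= vma_pages(vma) &&
	 *	pte_off >= 3 && pte_off + 13 <= PTRS_PER_PTE
	 *
	 * If any of these fail, finish_fault() maps just the faulting page.
	 */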
@@ -5067,8 +5207,6 @@ int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
{
struct vm_area_struct *vma = vmf->vma;
- folio_get(folio);
-
/* Record the current PID acceesing VMA */
vma_set_access_pid_bit(vma);
@@ -5205,16 +5343,19 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
else
last_cpupid = folio_last_cpupid(folio);
target_nid = numa_migrate_prep(folio, vmf, vmf->address, nid, &flags);
- if (target_nid == NUMA_NO_NODE) {
- folio_put(folio);
+ if (target_nid == NUMA_NO_NODE)
+ goto out_map;
+ if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
+ flags |= TNF_MIGRATE_FAIL;
goto out_map;
}
+ /* The folio is isolated and isolation code holds a folio reference. */
pte_unmap_unlock(vmf->pte, vmf->ptl);
writable = false;
ignore_writable = true;
/* Migrate to the requested node */
- if (migrate_misplaced_folio(folio, vma, target_nid)) {
+ if (!migrate_misplaced_folio(folio, vma, target_nid)) {
nid = target_nid;
flags |= TNF_MIGRATED;
} else {
@@ -6244,23 +6385,23 @@ EXPORT_SYMBOL(__might_fault);
* cache lines hot.
*/
static inline int process_huge_page(
- unsigned long addr_hint, unsigned int pages_per_huge_page,
+ unsigned long addr_hint, unsigned int nr_pages,
int (*process_subpage)(unsigned long addr, int idx, void *arg),
void *arg)
{
int i, n, base, l, ret;
unsigned long addr = addr_hint &
- ~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
+ ~(((unsigned long)nr_pages << PAGE_SHIFT) - 1);
/* Process target subpage last to keep its cache lines hot */
might_sleep();
n = (addr_hint - addr) / PAGE_SIZE;
- if (2 * n <= pages_per_huge_page) {
+ if (2 * n <= nr_pages) {
/* If target subpage in first half of huge page */
base = 0;
l = n;
/* Process subpages at the end of huge page */
- for (i = pages_per_huge_page - 1; i >= 2 * n; i--) {
+ for (i = nr_pages - 1; i >= 2 * n; i--) {
cond_resched();
ret = process_subpage(addr + i * PAGE_SIZE, i, arg);
if (ret)
@@ -6268,8 +6409,8 @@ static inline int process_huge_page(
}
} else {
/* If target subpage in second half of huge page */
- base = pages_per_huge_page - 2 * (pages_per_huge_page - n);
- l = pages_per_huge_page - n;
+ base = nr_pages - 2 * (nr_pages - n);
+ l = nr_pages - n;
/* Process subpages at the begin of huge page */
for (i = 0; i < base; i++) {
cond_resched();
@@ -6298,102 +6439,93 @@ static inline int process_huge_page(
return 0;
}
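	/*
	 * Illustrative ordering (hypothetical numbers): for a 16-page folio
	 * whose target subpage is n = 3, the "first half" branch above is
	 * taken (2 * n <= 16), so subpages 15 down to 6 are processed first
	 * and the remaining code then works inward toward subpage 3, which
	 * is handled last so its cache lines are the hottest on return.
	 */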
-static void clear_gigantic_page(struct page *page,
- unsigned long addr,
- unsigned int pages_per_huge_page)
+static void clear_gigantic_page(struct folio *folio, unsigned long addr,
+ unsigned int nr_pages)
{
int i;
- struct page *p;
might_sleep();
- for (i = 0; i < pages_per_huge_page; i++) {
- p = nth_page(page, i);
+ for (i = 0; i < nr_pages; i++) {
cond_resched();
- clear_user_highpage(p, addr + i * PAGE_SIZE);
+ clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
}
}
static int clear_subpage(unsigned long addr, int idx, void *arg)
{
- struct page *page = arg;
+ struct folio *folio = arg;
- clear_user_highpage(nth_page(page, idx), addr);
+ clear_user_highpage(folio_page(folio, idx), addr);
return 0;
}
-void clear_huge_page(struct page *page,
- unsigned long addr_hint, unsigned int pages_per_huge_page)
+/**
+ * folio_zero_user - Zero a folio which will be mapped to userspace.
+ * @folio: The folio to zero.
+ * @addr_hint: The address that will be accessed, or the base address if unclear.
+ */
+void folio_zero_user(struct folio *folio, unsigned long addr_hint)
{
- unsigned long addr = addr_hint &
- ~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
-
- if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
- clear_gigantic_page(page, addr, pages_per_huge_page);
- return;
- }
+ unsigned int nr_pages = folio_nr_pages(folio);
- process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
+ if (unlikely(nr_pages > MAX_ORDER_NR_PAGES))
+ clear_gigantic_page(folio, addr_hint, nr_pages);
+ else
+ process_huge_page(addr_hint, nr_pages, clear_subpage, folio);
}
static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
- unsigned long addr,
- struct vm_area_struct *vma,
- unsigned int pages_per_huge_page)
+ unsigned long addr,
+ struct vm_area_struct *vma,
+ unsigned int nr_pages)
{
int i;
struct page *dst_page;
struct page *src_page;
- for (i = 0; i < pages_per_huge_page; i++) {
+ for (i = 0; i < nr_pages; i++) {
dst_page = folio_page(dst, i);
src_page = folio_page(src, i);
cond_resched();
if (copy_mc_user_highpage(dst_page, src_page,
- addr + i*PAGE_SIZE, vma)) {
- memory_failure_queue(page_to_pfn(src_page), 0);
+ addr + i*PAGE_SIZE, vma))
return -EHWPOISON;
- }
}
return 0;
}
struct copy_subpage_arg {
- struct page *dst;
- struct page *src;
+ struct folio *dst;
+ struct folio *src;
struct vm_area_struct *vma;
};
static int copy_subpage(unsigned long addr, int idx, void *arg)
{
struct copy_subpage_arg *copy_arg = arg;
- struct page *dst = nth_page(copy_arg->dst, idx);
- struct page *src = nth_page(copy_arg->src, idx);
+ struct page *dst = folio_page(copy_arg->dst, idx);
+ struct page *src = folio_page(copy_arg->src, idx);
- if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma)) {
- memory_failure_queue(page_to_pfn(src), 0);
+ if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma))
return -EHWPOISON;
- }
return 0;
}
int copy_user_large_folio(struct folio *dst, struct folio *src,
unsigned long addr_hint, struct vm_area_struct *vma)
{
- unsigned int pages_per_huge_page = folio_nr_pages(dst);
- unsigned long addr = addr_hint &
- ~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
+ unsigned int nr_pages = folio_nr_pages(dst);
struct copy_subpage_arg arg = {
- .dst = &dst->page,
- .src = &src->page,
+ .dst = dst,
+ .src = src,
.vma = vma,
};
- if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES))
- return copy_user_gigantic_page(dst, src, addr, vma,
- pages_per_huge_page);
+ if (unlikely(nr_pages > MAX_ORDER_NR_PAGES))
+ return copy_user_gigantic_page(dst, src, addr_hint, vma, nr_pages);
- return process_huge_page(addr_hint, pages_per_huge_page, copy_subpage, &arg);
+ return process_huge_page(addr_hint, nr_pages, copy_subpage, &arg);
}
long copy_folio_from_user(struct folio *dst_folio,
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 431b1f6753c0..66267c26ca1b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -628,16 +628,10 @@ int restore_online_page_callback(online_page_callback_t callback)
}
EXPORT_SYMBOL_GPL(restore_online_page_callback);
-void generic_online_page(struct page *page, unsigned int order)
+/* we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG */
+void __ref generic_online_page(struct page *page, unsigned int order)
{
- /*
- * Freeing the page with debug_pagealloc enabled will try to unmap it,
- * so we should map it first. This is better than introducing a special
- * case in page freeing fast path.
- */
- debug_pagealloc_map_pages(page, 1 << order);
- __free_pages_core(page, order);
- totalram_pages_add(1UL << order);
+ __free_pages_core(page, order, MEMINIT_HOTPLUG);
}
EXPORT_SYMBOL_GPL(generic_online_page);
@@ -741,7 +735,7 @@ static inline void section_taint_zone_device(unsigned long pfn)
/*
* Associate the pfn range with the given zone, initializing the memmaps
* and resizing the pgdat/zone data to span the added pages. After this
- * call, all affected pages are PG_reserved.
+ * call, all affected pages are PageOffline().
*
* All aligned pageblocks are initialized to the specified migratetype
* (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
@@ -846,7 +840,6 @@ static bool auto_movable_can_online_movable(int nid, struct memory_group *group,
unsigned long kernel_early_pages, movable_pages;
struct auto_movable_group_stats group_stats = {};
struct auto_movable_stats stats = {};
- pg_data_t *pgdat = NODE_DATA(nid);
struct zone *zone;
int i;
@@ -857,6 +850,8 @@ static bool auto_movable_can_online_movable(int nid, struct memory_group *group,
auto_movable_stats_account_zone(&stats, zone);
} else {
for (i = 0; i < MAX_NR_ZONES; i++) {
+ pg_data_t *pgdat = NODE_DATA(nid);
+
zone = pgdat->node_zones + i;
if (populated_zone(zone))
auto_movable_stats_account_zone(&stats, zone);
@@ -1107,8 +1102,12 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
- for (i = 0; i < nr_pages; i++)
- SetPageVmemmapSelfHosted(pfn_to_page(pfn + i));
+ for (i = 0; i < nr_pages; i++) {
+ struct page *page = pfn_to_page(pfn + i);
+
+ __ClearPageOffline(page);
+ SetPageVmemmapSelfHosted(page);
+ }
/*
* It might be that the vmemmap_pages fully span sections. If that is
@@ -1731,8 +1730,8 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
unsigned long pfn;
for (pfn = start; pfn < end; pfn++) {
- struct page *page, *head;
- unsigned long skip;
+ struct page *page;
+ struct folio *folio;
if (!pfn_valid(pfn))
continue;
@@ -1753,7 +1752,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
if (!PageHuge(page))
continue;
- head = compound_head(page);
+ folio = page_folio(page);
/*
* This test is racy as we hold no reference or lock. The
* hugetlb page could have been free'ed and head is no longer
@@ -1761,10 +1760,9 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
* cases false positives and negatives are possible. Calling
* code must deal with these scenarios.
*/
- if (HPageMigratable(head))
+ if (folio_test_hugetlb_migratable(folio))
goto found;
- skip = compound_nr(head) - (pfn - page_to_pfn(head));
- pfn += skip - 1;
+ pfn |= folio_nr_pages(folio) - 1;
}
return -ENOENT;
found:
@@ -1945,7 +1943,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
struct zone *zone, struct memory_group *group)
{
const unsigned long end_pfn = start_pfn + nr_pages;
- unsigned long pfn, system_ram_pages = 0;
+ unsigned long pfn, managed_pages, system_ram_pages = 0;
const int node = zone_to_nid(zone);
unsigned long flags;
struct memory_notify arg;
@@ -1967,9 +1965,9 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
* Don't allow to offline memory blocks that contain holes.
* Consequently, memory blocks with holes can never get onlined
* via the hotplug path - online_pages() - as hotplugged memory has
- * no holes. This way, we e.g., don't have to worry about marking
- * memory holes PG_reserved, don't need pfn_valid() checks, and can
- * avoid using walk_system_ram_range() later.
+ * no holes. This way, we don't have to worry about memory holes,
+ * don't need pfn_valid() checks, and can avoid using
+ * walk_system_ram_range() later.
*/
walk_system_ram_range(start_pfn, nr_pages, &system_ram_pages,
count_system_ram_pages_cb);
@@ -2066,7 +2064,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
} while (ret);
/* Mark all sections offline and remove free pages from the buddy. */
- __offline_isolated_pages(start_pfn, end_pfn);
+ managed_pages = __offline_isolated_pages(start_pfn, end_pfn);
pr_debug("Offlined Pages %ld\n", nr_pages);
/*
@@ -2082,7 +2080,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
zone_pcp_enable(zone);
/* removal success */
- adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
+ adjust_managed_page_count(pfn_to_page(start_pfn), -managed_pages);
adjust_present_page_count(pfn_to_page(start_pfn), group, -nr_pages);
/* reinitialise watermarks and update pcp limits */
@@ -2283,10 +2281,8 @@ static int __ref try_remove_memory(u64 start, u64 size)
remove_memory_blocks_and_altmaps(start, size);
}
- if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
- memblock_phys_free(start, size);
+ if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK))
memblock_remove(start, size);
- }
release_mem_region_adjustable(start, size);
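
A brief worked example for the scan_movable_pages() change above, which replaces the explicit skip arithmetic with pfn |= folio_nr_pages(folio) - 1 (this relies on hugetlb folios being naturally aligned to their size): for a 512-page folio starting at pfn 0x40000, any pfn inside it OR-ed with 511 becomes 0x401ff, the folio's last pfn, and the loop's pfn++ then resumes the scan at 0x40200, the first pfn past the folio.
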
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index aec756ae5637..327a19b0883d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -624,7 +624,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
pte_t entry;
ptl = huge_pte_lock(hstate_vma(walk->vma), walk->mm, pte);
- entry = huge_ptep_get(pte);
+ entry = huge_ptep_get(walk->mm, addr, pte);
if (!pte_present(entry)) {
if (unlikely(is_hugetlb_entry_migration(entry)))
qp->nr_failed++;
@@ -1211,7 +1211,6 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src,
struct migration_mpol *mmpol = (struct migration_mpol *)private;
struct mempolicy *pol = mmpol->pol;
pgoff_t ilx = mmpol->ilx;
- struct page *page;
unsigned int order;
int nid = numa_node_id();
gfp_t gfp;
@@ -1235,8 +1234,7 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src,
else
gfp = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL | __GFP_COMP;
- page = alloc_pages_mpol(gfp, order, pol, ilx, nid);
- return page_rmappable_folio(page);
+ return folio_alloc_mpol(gfp, order, pol, ilx, nid);
}
#else
@@ -2277,6 +2275,13 @@ struct page *alloc_pages_mpol_noprof(gfp_t gfp, unsigned int order,
return page;
}
+struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
+ struct mempolicy *pol, pgoff_t ilx, int nid)
+{
+ return page_rmappable_folio(alloc_pages_mpol_noprof(gfp | __GFP_COMP,
+ order, pol, ilx, nid));
+}
+
/**
* vma_alloc_folio - Allocate a folio for a VMA.
* @gfp: GFP flags.
@@ -2298,13 +2303,12 @@ struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct
{
struct mempolicy *pol;
pgoff_t ilx;
- struct page *page;
+ struct folio *folio;
pol = get_vma_policy(vma, addr, order, &ilx);
- page = alloc_pages_mpol_noprof(gfp | __GFP_COMP, order,
- pol, ilx, numa_node_id());
+ folio = folio_alloc_mpol_noprof(gfp, order, pol, ilx, numa_node_id());
mpol_cond_put(pol);
- return page_rmappable_folio(page);
+ return folio;
}
EXPORT_SYMBOL(vma_alloc_folio_noprof);
@@ -3293,8 +3297,9 @@ out:
* @pol: pointer to mempolicy to be formatted
*
* Convert @pol into a string. If @buffer is too short, truncate the string.
- * Recommend a @maxlen of at least 32 for the longest mode, "interleave", the
- * longest flag, "relative", and to display at least a few node ids.
+ * Recommend a @maxlen of at least 51 for the longest mode, "weighted
+ * interleave", plus the longest flag flags, "relative|balancing", and to
+ * display at least a few node ids.
*/
void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
{
@@ -3303,7 +3308,10 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
unsigned short mode = MPOL_DEFAULT;
unsigned short flags = 0;
- if (pol && pol != &default_policy && !(pol->flags & MPOL_F_MORON)) {
+ if (pol &&
+ pol != &default_policy &&
+ !(pol >= &preferred_node_policy[0] &&
+ pol <= &preferred_node_policy[ARRAY_SIZE(preferred_node_policy) - 1])) {
mode = pol->mode;
flags = pol->flags;
}
@@ -3331,12 +3339,18 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
p += snprintf(p, buffer + maxlen - p, "=");
/*
- * Currently, the only defined flags are mutually exclusive
+ * Static and relative are mutually exclusive.
*/
if (flags & MPOL_F_STATIC_NODES)
p += snprintf(p, buffer + maxlen - p, "static");
else if (flags & MPOL_F_RELATIVE_NODES)
p += snprintf(p, buffer + maxlen - p, "relative");
+
+ if (flags & MPOL_F_NUMA_BALANCING) {
+ if (!is_power_of_2(flags & MPOL_MODE_FLAGS))
+ p += snprintf(p, buffer + maxlen - p, "|");
+ p += snprintf(p, buffer + maxlen - p, "balancing");
+ }
}
if (!nodes_empty(nodes))
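
For illustration of the new @maxlen recommendation for mpol_to_str() (assuming the nodelist is still appended after a ':' as in the existing code), the longest mode and flag combination would format roughly as

	weighted interleave=relative|balancing:0-11

which is 43 characters plus the terminating NUL, so a 51-byte buffer leaves some headroom for a longer nodelist.
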
diff --git a/mm/migrate.c b/mm/migrate.c
index ed3aac90cf4f..e7296c0fb5d5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -338,14 +338,14 @@ out:
*
* This function will release the vma lock before returning.
*/
-void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *ptep)
+void migration_entry_wait_huge(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
{
spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, ptep);
pte_t pte;
hugetlb_vma_assert_locked(vma);
spin_lock(ptl);
- pte = huge_ptep_get(ptep);
+ pte = huge_ptep_get(vma->vm_mm, addr, ptep);
if (unlikely(!is_hugetlb_entry_migration(pte))) {
spin_unlock(ptl);
@@ -393,28 +393,23 @@ static int folio_expected_refs(struct address_space *mapping,
}
/*
- * Replace the page in the mapping.
+ * Replace the folio in the mapping.
*
* The number of remaining references must be:
- * 1 for anonymous pages without a mapping
- * 2 for pages with a mapping
- * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
+ * 1 for anonymous folios without a mapping
+ * 2 for folios with a mapping
+ * 3 for folios with a mapping and PagePrivate/PagePrivate2 set.
*/
-int folio_migrate_mapping(struct address_space *mapping,
- struct folio *newfolio, struct folio *folio, int extra_count)
+static int __folio_migrate_mapping(struct address_space *mapping,
+ struct folio *newfolio, struct folio *folio, int expected_count)
{
XA_STATE(xas, &mapping->i_pages, folio_index(folio));
struct zone *oldzone, *newzone;
int dirty;
- int expected_count = folio_expected_refs(mapping, folio) + extra_count;
long nr = folio_nr_pages(folio);
long entries, i;
if (!mapping) {
- /* Anonymous page without mapping */
- if (folio_ref_count(folio) != expected_count)
- return -EAGAIN;
-
/* Take off deferred split queue while frozen and memcg set */
if (folio_test_large(folio) &&
folio_test_large_rmappable(folio)) {
@@ -443,8 +438,7 @@ int folio_migrate_mapping(struct address_space *mapping,
}
/* Take off deferred split queue while frozen and memcg set */
- if (folio_test_large(folio) && folio_test_large_rmappable(folio))
- folio_undo_large_rmappable(folio);
+ folio_undo_large_rmappable(folio);
/*
* Now we know that no one else is looking at the folio:
@@ -465,7 +459,7 @@ int folio_migrate_mapping(struct address_space *mapping,
entries = 1;
}
- /* Move dirty while page refs frozen and newpage not yet exposed */
+ /* Move dirty while folio refs frozen and newfolio not yet exposed */
dirty = folio_test_dirty(folio);
if (dirty) {
folio_clear_dirty(folio);
@@ -479,7 +473,7 @@ int folio_migrate_mapping(struct address_space *mapping,
}
/*
- * Drop cache reference from old page by unfreezing
+ * Drop cache reference from old folio by unfreezing
* to one less reference.
* We know this isn't the last reference.
*/
@@ -490,11 +484,11 @@ int folio_migrate_mapping(struct address_space *mapping,
/*
* If moved to a different zone then also account
- * the page for that zone. Other VM counters will be
+ * the folio for that zone. Other VM counters will be
* taken care of when we establish references to the
- * new page and drop references to the old page.
+ * new folio and drop references to the old folio.
*
- * Note that anonymous pages are accounted for
+ * Note that anonymous folios are accounted for
* via NR_FILE_PAGES and NR_ANON_MAPPED if they
* are mapped to swap space.
*/
@@ -534,6 +528,17 @@ int folio_migrate_mapping(struct address_space *mapping,
return MIGRATEPAGE_SUCCESS;
}
+
+int folio_migrate_mapping(struct address_space *mapping,
+ struct folio *newfolio, struct folio *folio, int extra_count)
+{
+ int expected_count = folio_expected_refs(mapping, folio) + extra_count;
+
+ if (folio_ref_count(folio) != expected_count)
+ return -EAGAIN;
+
+ return __folio_migrate_mapping(mapping, newfolio, folio, expected_count);
+}
EXPORT_SYMBOL(folio_migrate_mapping);
/*
@@ -544,10 +549,16 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
struct folio *dst, struct folio *src)
{
XA_STATE(xas, &mapping->i_pages, folio_index(src));
- int expected_count;
+ int rc, expected_count = folio_expected_refs(mapping, src);
+
+ if (folio_ref_count(src) != expected_count)
+ return -EAGAIN;
+
+ rc = folio_mc_copy(dst, src);
+ if (unlikely(rc))
+ return rc;
xas_lock_irq(&xas);
- expected_count = folio_expected_refs(mapping, src);
if (!folio_ref_freeze(src, expected_count)) {
xas_unlock_irq(&xas);
return -EAGAIN;
@@ -660,33 +671,32 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
}
EXPORT_SYMBOL(folio_migrate_flags);
-void folio_migrate_copy(struct folio *newfolio, struct folio *folio)
-{
- folio_copy(newfolio, folio);
- folio_migrate_flags(newfolio, folio);
-}
-EXPORT_SYMBOL(folio_migrate_copy);
-
/************************************************************
* Migration functions
***********************************************************/
-int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
- struct folio *src, enum migrate_mode mode, int extra_count)
+static int __migrate_folio(struct address_space *mapping, struct folio *dst,
+ struct folio *src, void *src_private,
+ enum migrate_mode mode)
{
- int rc;
+ int rc, expected_count = folio_expected_refs(mapping, src);
- BUG_ON(folio_test_writeback(src)); /* Writeback must be complete */
+ /* Check whether src does not have extra refs before we do more work */
+ if (folio_ref_count(src) != expected_count)
+ return -EAGAIN;
- rc = folio_migrate_mapping(mapping, dst, src, extra_count);
+ rc = folio_mc_copy(dst, src);
+ if (unlikely(rc))
+ return rc;
+ rc = __folio_migrate_mapping(mapping, dst, src, expected_count);
if (rc != MIGRATEPAGE_SUCCESS)
return rc;
- if (mode != MIGRATE_SYNC_NO_COPY)
- folio_migrate_copy(dst, src);
- else
- folio_migrate_flags(dst, src);
+ if (src_private)
+ folio_attach_private(dst, folio_detach_private(src));
+
+ folio_migrate_flags(dst, src);
return MIGRATEPAGE_SUCCESS;
}
@@ -703,9 +713,10 @@ int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
* Folios are locked upon entry and exit.
*/
int migrate_folio(struct address_space *mapping, struct folio *dst,
- struct folio *src, enum migrate_mode mode)
+ struct folio *src, enum migrate_mode mode)
{
- return migrate_folio_extra(mapping, dst, src, mode, 0);
+ BUG_ON(folio_test_writeback(src)); /* Writeback must be complete */
+ return __migrate_folio(mapping, dst, src, NULL, mode);
}
EXPORT_SYMBOL(migrate_folio);
@@ -790,24 +801,16 @@ recheck_buffers:
}
}
- rc = folio_migrate_mapping(mapping, dst, src, 0);
+ rc = filemap_migrate_folio(mapping, dst, src, mode);
if (rc != MIGRATEPAGE_SUCCESS)
goto unlock_buffers;
- folio_attach_private(dst, folio_detach_private(src));
-
bh = head;
do {
folio_set_bh(bh, dst, bh_offset(bh));
bh = bh->b_this_page;
} while (bh != head);
- if (mode != MIGRATE_SYNC_NO_COPY)
- folio_migrate_copy(dst, src);
- else
- folio_migrate_flags(dst, src);
-
- rc = MIGRATEPAGE_SUCCESS;
unlock_buffers:
if (check_refs)
spin_unlock(&mapping->i_private_lock);
@@ -867,20 +870,7 @@ EXPORT_SYMBOL_GPL(buffer_migrate_folio_norefs);
int filemap_migrate_folio(struct address_space *mapping,
struct folio *dst, struct folio *src, enum migrate_mode mode)
{
- int ret;
-
- ret = folio_migrate_mapping(mapping, dst, src, 0);
- if (ret != MIGRATEPAGE_SUCCESS)
- return ret;
-
- if (folio_get_private(src))
- folio_attach_private(dst, folio_detach_private(src));
-
- if (mode != MIGRATE_SYNC_NO_COPY)
- folio_migrate_copy(dst, src);
- else
- folio_migrate_flags(dst, src);
- return MIGRATEPAGE_SUCCESS;
+ return __migrate_folio(mapping, dst, src, folio_get_private(src), mode);
}
EXPORT_SYMBOL_GPL(filemap_migrate_folio);
@@ -935,7 +925,6 @@ static int fallback_migrate_folio(struct address_space *mapping,
/* Only writeback folios in full synchronous migration */
switch (mode) {
case MIGRATE_SYNC:
- case MIGRATE_SYNC_NO_COPY:
break;
default:
return -EBUSY;
@@ -1193,7 +1182,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
*/
switch (mode) {
case MIGRATE_SYNC:
- case MIGRATE_SYNC_NO_COPY:
break;
default:
rc = -EBUSY;
@@ -1404,7 +1392,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
goto out;
switch (mode) {
case MIGRATE_SYNC:
- case MIGRATE_SYNC_NO_COPY:
break;
default:
goto out;
@@ -2557,16 +2544,44 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
return __folio_alloc_node(gfp, order, nid);
}
-static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
+/*
+ * Prepare for calling migrate_misplaced_folio() by isolating the folio if
+ * permitted. Must be called with the PTL still held.
+ */
+int migrate_misplaced_folio_prepare(struct folio *folio,
+ struct vm_area_struct *vma, int node)
{
int nr_pages = folio_nr_pages(folio);
+ pg_data_t *pgdat = NODE_DATA(node);
+
+ if (folio_is_file_lru(folio)) {
+ /*
+ * Do not migrate file folios that are mapped in multiple
+ * processes with execute permissions as they are probably
+ * shared libraries.
+ *
+ * See folio_likely_mapped_shared() on possible imprecision
+ * when we cannot easily detect if a folio is shared.
+ */
+ if ((vma->vm_flags & VM_EXEC) &&
+ folio_likely_mapped_shared(folio))
+ return -EACCES;
+
+ /*
+ * Do not migrate dirty folios as not all filesystems can move
+ * dirty folios in MIGRATE_ASYNC mode which is a waste of
+ * cycles.
+ */
+ if (folio_test_dirty(folio))
+ return -EAGAIN;
+ }
/* Avoid migrating to a node that is nearly full */
if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
int z;
if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
- return 0;
+ return -EAGAIN;
for (z = pgdat->nr_zones - 1; z >= 0; z--) {
if (managed_zone(pgdat->node_zones + z))
break;
@@ -2577,78 +2592,42 @@ static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
* further.
*/
if (z < 0)
- return 0;
+ return -EAGAIN;
wakeup_kswapd(pgdat->node_zones + z, 0,
folio_order(folio), ZONE_MOVABLE);
- return 0;
+ return -EAGAIN;
}
if (!folio_isolate_lru(folio))
- return 0;
+ return -EAGAIN;
node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
nr_pages);
-
- /*
- * Isolating the folio has taken another reference, so the
- * caller's reference can be safely dropped without the folio
- * disappearing underneath us during migration.
- */
- folio_put(folio);
- return 1;
+ return 0;
}
/*
* Attempt to migrate a misplaced folio to the specified destination
- * node. Caller is expected to have an elevated reference count on
- * the folio that will be dropped by this function before returning.
+ * node. Caller is expected to have isolated the folio by calling
+ * migrate_misplaced_folio_prepare(), which will result in an
+ * elevated reference count on the folio. This function will un-isolate the
+ * folio and drop that reference before returning.
*/
int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
int node)
{
pg_data_t *pgdat = NODE_DATA(node);
- int isolated;
int nr_remaining;
unsigned int nr_succeeded;
LIST_HEAD(migratepages);
- int nr_pages = folio_nr_pages(folio);
-
- /*
- * Don't migrate file folios that are mapped in multiple processes
- * with execute permissions as they are probably shared libraries.
- *
- * See folio_likely_mapped_shared() on possible imprecision when we
- * cannot easily detect if a folio is shared.
- */
- if (folio_likely_mapped_shared(folio) && folio_is_file_lru(folio) &&
- (vma->vm_flags & VM_EXEC))
- goto out;
-
- /*
- * Also do not migrate dirty folios as not all filesystems can move
- * dirty folios in MIGRATE_ASYNC mode which is a waste of cycles.
- */
- if (folio_is_file_lru(folio) && folio_test_dirty(folio))
- goto out;
-
- isolated = numamigrate_isolate_folio(pgdat, folio);
- if (!isolated)
- goto out;
list_add(&folio->lru, &migratepages);
nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
NULL, node, MIGRATE_ASYNC,
MR_NUMA_MISPLACED, &nr_succeeded);
- if (nr_remaining) {
- if (!list_empty(&migratepages)) {
- list_del(&folio->lru);
- node_stat_mod_folio(folio, NR_ISOLATED_ANON +
- folio_is_file_lru(folio), -nr_pages);
- folio_putback_lru(folio);
- }
- isolated = 0;
- }
+ if (nr_remaining && !list_empty(&migratepages))
+ putback_movable_pages(&migratepages);
if (nr_succeeded) {
count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
if (!node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
@@ -2656,11 +2635,7 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
nr_succeeded);
}
BUG_ON(!list_empty(&migratepages));
- return isolated;
-
-out:
- folio_put(folio);
- return 0;
+ return nr_remaining ? -EAGAIN : 0;
}
#endif /* CONFIG_NUMA_BALANCING */
#endif /* CONFIG_NUMA */
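
A hypothetical caller sketch for the two-step NUMA-migration contract introduced above (not part of this hunk; the locking details are assumptions loosely modeled on the NUMA hinting fault path). The folio is isolated via migrate_misplaced_folio_prepare() while the PTL is still held, the PTL is dropped, and migrate_misplaced_folio() then consumes the isolation reference.

static void example_numa_hint_fault(struct folio *folio,
				    struct vm_area_struct *vma,
				    pte_t *ptep, spinlock_t *ptl,
				    int target_nid)
{
	/* Isolate under the PTL so the folio cannot disappear under us. */
	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
		pte_unmap_unlock(ptep, ptl);
		return;
	}
	pte_unmap_unlock(ptep, ptl);

	/* Returns 0 on success, -EAGAIN if the folio was not migrated. */
	migrate_misplaced_folio(folio, vma, target_nid);
}
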
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index aecc71972a87..6d66dc1c6ffa 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -658,7 +658,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
goto unlock_abort;
inc_mm_counter(mm, MM_ANONPAGES);
- folio_add_new_anon_rmap(folio, vma, addr);
+ folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
if (!folio_is_zone_device(folio))
folio_add_lru_vma(folio, vma);
folio_get(folio);
@@ -692,8 +692,8 @@ static void __migrate_device_pages(unsigned long *src_pfns,
struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
struct page *page = migrate_pfn_to_page(src_pfns[i]);
struct address_space *mapping;
- struct folio *folio;
- int r;
+ struct folio *newfolio, *folio;
+ int r, extra_cnt = 0;
if (!newpage) {
src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
@@ -727,11 +727,12 @@ static void __migrate_device_pages(unsigned long *src_pfns,
continue;
}
+ newfolio = page_folio(newpage);
folio = page_folio(page);
mapping = folio_mapping(folio);
- if (is_device_private_page(newpage) ||
- is_device_coherent_page(newpage)) {
+ if (folio_is_device_private(newfolio) ||
+ folio_is_device_coherent(newfolio)) {
if (mapping) {
/*
* For now only support anonymous memory migrating to
@@ -745,7 +746,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
continue;
}
}
- } else if (is_zone_device_page(newpage)) {
+ } else if (folio_is_zone_device(newfolio)) {
/*
* Other types of ZONE_DEVICE page are not supported.
*/
@@ -753,14 +754,15 @@ static void __migrate_device_pages(unsigned long *src_pfns,
continue;
}
+ BUG_ON(folio_test_writeback(folio));
+
if (migrate && migrate->fault_page == page)
- r = migrate_folio_extra(mapping, page_folio(newpage),
- folio, MIGRATE_SYNC_NO_COPY, 1);
- else
- r = migrate_folio(mapping, page_folio(newpage),
- folio, MIGRATE_SYNC_NO_COPY);
+ extra_cnt = 1;
+ r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
if (r != MIGRATEPAGE_SUCCESS)
src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+ else
+ folio_migrate_flags(newfolio, folio);
}
if (notified)
diff --git a/mm/mincore.c b/mm/mincore.c
index dad3622cc963..d6bd19e520fc 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -33,7 +33,7 @@ static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
* Hugepages under user process are always in RAM and never
* swapped out, but theoretically it needs to be checked.
*/
- present = pte && !huge_pte_none_mostly(huge_ptep_get(pte));
+ present = pte && !huge_pte_none_mostly(huge_ptep_get(walk->mm, addr, pte));
for (; addr != end; vec++, addr += PAGE_SIZE)
*vec = present;
walk->private = vec;
@@ -139,7 +139,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
} else {
#ifdef CONFIG_SWAP
*vec = mincore_page(swap_address_space(entry),
- swp_offset(entry));
+ swap_cache_index(entry));
#else
WARN_ON(1);
*vec = 1;
diff --git a/mm/mlock.c b/mm/mlock.c
index 30b51cdea89d..52d6e401ad67 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -307,26 +307,15 @@ void munlock_folio(struct folio *folio)
static inline unsigned int folio_mlock_step(struct folio *folio,
pte_t *pte, unsigned long addr, unsigned long end)
{
- unsigned int count, i, nr = folio_nr_pages(folio);
- unsigned long pfn = folio_pfn(folio);
+ const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+ unsigned int count = (end - addr) >> PAGE_SHIFT;
pte_t ptent = ptep_get(pte);
if (!folio_test_large(folio))
return 1;
- count = pfn + nr - pte_pfn(ptent);
- count = min_t(unsigned int, count, (end - addr) >> PAGE_SHIFT);
-
- for (i = 0; i < count; i++, pte++) {
- pte_t entry = ptep_get(pte);
-
- if (!pte_present(entry))
- break;
- if (pte_pfn(entry) - pfn >= nr)
- break;
- }
-
- return i;
+ return folio_pte_batch(folio, addr, pte, ptent, count, fpb_flags, NULL,
+ NULL, NULL);
}
static inline bool allow_mlock_munlock(struct folio *folio,
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 804df0309257..75c3bd42799b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -29,6 +29,7 @@
#include <linux/cma.h>
#include <linux/crash_dump.h>
#include <linux/execmem.h>
+#include <linux/vmstat.h>
#include "internal.h"
#include "slab.h"
#include "shuffle.h"
@@ -53,7 +54,6 @@ void __init mminit_verify_zonelist(void)
struct zonelist *zonelist;
int i, listid, zoneid;
- BUILD_BUG_ON(MAX_ZONELISTS > 2);
for (i = 0; i < MAX_ZONELISTS * MAX_NR_ZONES; i++) {
/* Identify the zone and nodelist */
@@ -568,7 +568,7 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
mm_zero_struct_page(page);
set_page_links(page, zone, nid, pfn);
init_page_count(page);
- page_mapcount_reset(page);
+ atomic_set(&page->_mapcount, -1);
page_cpupid_reset_last(page);
page_kasan_tag_reset(page);
@@ -891,8 +891,14 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
page = pfn_to_page(pfn);
__init_single_page(page, pfn, zone, nid);
- if (context == MEMINIT_HOTPLUG)
- __SetPageReserved(page);
+ if (context == MEMINIT_HOTPLUG) {
+#ifdef CONFIG_ZONE_DEVICE
+ if (zone == ZONE_DEVICE)
+ __SetPageReserved(page);
+ else
+#endif
+ __SetPageOffline(page);
+ }
/*
* Usually, we want to mark the pageblock MIGRATE_MOVABLE,
@@ -1617,6 +1623,8 @@ static void __init alloc_node_mem_map(struct pglist_data *pgdat)
panic("Failed to allocate %ld bytes for node %d memory map\n",
size, pgdat->node_id);
pgdat->node_mem_map = map + offset;
+ mod_node_early_perpage_metadata(pgdat->node_id,
+ DIV_ROUND_UP(size, PAGE_SIZE));
pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n",
__func__, pgdat->node_id, (unsigned long)pgdat,
(unsigned long)pgdat->node_mem_map);
@@ -1912,8 +1920,8 @@ unsigned long __init node_map_pfn_alignment(void)
}
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
-static void __init deferred_free_range(unsigned long pfn,
- unsigned long nr_pages)
+static void __init deferred_free_pages(unsigned long pfn,
+ unsigned long nr_pages)
{
struct page *page;
unsigned long i;
@@ -1927,7 +1935,7 @@ static void __init deferred_free_range(unsigned long pfn,
if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
for (i = 0; i < nr_pages; i += pageblock_nr_pages)
set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
- __free_pages_core(page, MAX_PAGE_ORDER);
+ __free_pages_core(page, MAX_PAGE_ORDER, MEMINIT_EARLY);
return;
}
@@ -1937,7 +1945,7 @@ static void __init deferred_free_range(unsigned long pfn,
for (i = 0; i < nr_pages; i++, page++, pfn++) {
if (pageblock_aligned(pfn))
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
- __free_pages_core(page, 0);
+ __free_pages_core(page, 0, MEMINIT_EARLY);
}
}
@@ -1952,68 +1960,20 @@ static inline void __init pgdat_init_report_one_done(void)
}
/*
- * Returns true if page needs to be initialized or freed to buddy allocator.
- *
- * We check if a current MAX_PAGE_ORDER block is valid by only checking the
- * validity of the head pfn.
- */
-static inline bool __init deferred_pfn_valid(unsigned long pfn)
-{
- if (IS_MAX_ORDER_ALIGNED(pfn) && !pfn_valid(pfn))
- return false;
- return true;
-}
-
-/*
- * Free pages to buddy allocator. Try to free aligned pages in
- * MAX_ORDER_NR_PAGES sizes.
- */
-static void __init deferred_free_pages(unsigned long pfn,
- unsigned long end_pfn)
-{
- unsigned long nr_free = 0;
-
- for (; pfn < end_pfn; pfn++) {
- if (!deferred_pfn_valid(pfn)) {
- deferred_free_range(pfn - nr_free, nr_free);
- nr_free = 0;
- } else if (IS_MAX_ORDER_ALIGNED(pfn)) {
- deferred_free_range(pfn - nr_free, nr_free);
- nr_free = 1;
- } else {
- nr_free++;
- }
- }
- /* Free the last block of pages to allocator */
- deferred_free_range(pfn - nr_free, nr_free);
-}
-
-/*
* Initialize struct pages. We minimize pfn page lookups and scheduler checks
* by performing it only once every MAX_ORDER_NR_PAGES.
* Return number of pages initialized.
*/
-static unsigned long __init deferred_init_pages(struct zone *zone,
- unsigned long pfn,
- unsigned long end_pfn)
+static unsigned long __init deferred_init_pages(struct zone *zone,
+ unsigned long pfn, unsigned long end_pfn)
{
int nid = zone_to_nid(zone);
- unsigned long nr_pages = 0;
+ unsigned long nr_pages = end_pfn - pfn;
int zid = zone_idx(zone);
- struct page *page = NULL;
+ struct page *page = pfn_to_page(pfn);
- for (; pfn < end_pfn; pfn++) {
- if (!deferred_pfn_valid(pfn)) {
- page = NULL;
- continue;
- } else if (!page || IS_MAX_ORDER_ALIGNED(pfn)) {
- page = pfn_to_page(pfn);
- } else {
- page++;
- }
+ for (; pfn < end_pfn; pfn++, page++)
__init_single_page(page, pfn, zid, nid);
- nr_pages++;
- }
return nr_pages;
}
@@ -2097,7 +2057,7 @@ deferred_init_maxorder(u64 *i, struct zone *zone, unsigned long *start_pfn,
break;
t = min(mo_pfn, epfn);
- deferred_free_pages(spfn, t);
+ deferred_free_pages(spfn, t - spfn);
if (mo_pfn <= epfn)
break;
@@ -2126,11 +2086,10 @@ deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn,
}
}
-/* An arch may override for more concurrency. */
-__weak int __init
+static unsigned int __init
deferred_page_init_max_threads(const struct cpumask *node_cpumask)
{
- return 1;
+ return max(cpumask_weight(node_cpumask), 1U);
}
/* Initialise remaining memory on a node */
@@ -2315,6 +2274,7 @@ void set_zone_contiguous(struct zone *zone)
zone->contiguous = true;
}
+static void __init mem_init_print_info(void);
void __init page_alloc_init_late(void)
{
struct zone *zone;
@@ -2341,6 +2301,8 @@ void __init page_alloc_init_late(void)
files_maxfiles_init();
#endif
+ /* Accounting of total+free memory is stable at this point. */
+ mem_init_print_info();
buffer_init();
/* Discard memblock private memory */
@@ -2507,7 +2469,7 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
}
}
- __free_pages_core(page, order);
+ __free_pages_core(page, order, MEMINIT_EARLY);
}
DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
@@ -2687,6 +2649,7 @@ static void __init mem_init_print_info(void)
void __init mm_core_init(void)
{
/* Initializations relying on SMP setup */
+ BUILD_BUG_ON(MAX_ZONELISTS > 2);
build_all_zonelists(NULL);
page_alloc_init_cpuhp();
@@ -2701,7 +2664,6 @@ void __init mm_core_init(void)
kmsan_init_shadow();
stack_depot_early_init();
mem_init();
- mem_init_print_info();
kmem_cache_init();
/*
* page_owner must be initialized after buddy is ready, and also after
diff --git a/mm/mmap.c b/mm/mmap.c
index 83b4682ec85c..e42d89f98071 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -131,6 +131,47 @@ void unlink_file_vma(struct vm_area_struct *vma)
}
}
+void unlink_file_vma_batch_init(struct unlink_vma_file_batch *vb)
+{
+ vb->count = 0;
+}
+
+static void unlink_file_vma_batch_process(struct unlink_vma_file_batch *vb)
+{
+ struct address_space *mapping;
+ int i;
+
+ mapping = vb->vmas[0]->vm_file->f_mapping;
+ i_mmap_lock_write(mapping);
+ for (i = 0; i < vb->count; i++) {
+ VM_WARN_ON_ONCE(vb->vmas[i]->vm_file->f_mapping != mapping);
+ __remove_shared_vm_struct(vb->vmas[i], mapping);
+ }
+ i_mmap_unlock_write(mapping);
+
+ unlink_file_vma_batch_init(vb);
+}
+
+void unlink_file_vma_batch_add(struct unlink_vma_file_batch *vb,
+ struct vm_area_struct *vma)
+{
+ if (vma->vm_file == NULL)
+ return;
+
+ if ((vb->count > 0 && vb->vmas[0]->vm_file != vma->vm_file) ||
+ vb->count == ARRAY_SIZE(vb->vmas))
+ unlink_file_vma_batch_process(vb);
+
+ vb->vmas[vb->count] = vma;
+ vb->count++;
+}
+
+void unlink_file_vma_batch_final(struct unlink_vma_file_batch *vb)
+{
+ if (vb->count > 0)
+ unlink_file_vma_batch_process(vb);
+}
+
/*
* Close a vm structure and free it.
*/
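
A sketch of the intended call pattern for the unlink_file_vma_batch_*() helpers added above (an assumed caller, not part of this hunk): when tearing down a run of VMAs that mostly map the same file, the batch takes the file's i_mmap lock once per run instead of once per VMA, and unlink_file_vma_batch_add() flushes automatically when the file changes or the batch fills up.

static void example_unlink_vmas(struct vm_area_struct **vmas, int nr)
{
	struct unlink_vma_file_batch vb;
	int i;

	unlink_file_vma_batch_init(&vb);
	for (i = 0; i < nr; i++)
		unlink_file_vma_batch_add(&vb, vmas[i]);
	unlink_file_vma_batch_final(&vb);
}
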
diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
index 1854850b4b89..368b840e7508 100644
--- a/mm/mmap_lock.c
+++ b/mm/mmap_lock.c
@@ -19,14 +19,7 @@ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_released);
#ifdef CONFIG_MEMCG
-/*
- * Our various events all share the same buffer (because we don't want or need
- * to allocate a set of buffers *per event type*), so we need to protect against
- * concurrent _reg() and _unreg() calls, and count how many _reg() calls have
- * been made.
- */
-static DEFINE_MUTEX(reg_lock);
-static int reg_refcount; /* Protected by reg_lock. */
+static atomic_t reg_refcount;
/*
* Size of the buffer for memcg path names. Ignoring stack trace support,
@@ -34,136 +27,22 @@ static int reg_refcount; /* Protected by reg_lock. */
*/
#define MEMCG_PATH_BUF_SIZE MAX_FILTER_STR_VAL
-/*
- * How many contexts our trace events might be called in: normal, softirq, irq,
- * and NMI.
- */
-#define CONTEXT_COUNT 4
-
-struct memcg_path {
- local_lock_t lock;
- char __rcu *buf;
- local_t buf_idx;
-};
-static DEFINE_PER_CPU(struct memcg_path, memcg_paths) = {
- .lock = INIT_LOCAL_LOCK(lock),
- .buf_idx = LOCAL_INIT(0),
-};
-
-static char **tmp_bufs;
-
-/* Called with reg_lock held. */
-static void free_memcg_path_bufs(void)
-{
- struct memcg_path *memcg_path;
- int cpu;
- char **old = tmp_bufs;
-
- for_each_possible_cpu(cpu) {
- memcg_path = per_cpu_ptr(&memcg_paths, cpu);
- *(old++) = rcu_dereference_protected(memcg_path->buf,
- lockdep_is_held(&reg_lock));
- rcu_assign_pointer(memcg_path->buf, NULL);
- }
-
- /* Wait for inflight memcg_path_buf users to finish. */
- synchronize_rcu();
-
- old = tmp_bufs;
- for_each_possible_cpu(cpu) {
- kfree(*(old++));
- }
-
- kfree(tmp_bufs);
- tmp_bufs = NULL;
-}
-
int trace_mmap_lock_reg(void)
{
- int cpu;
- char *new;
-
- mutex_lock(&reg_lock);
-
- /* If the refcount is going 0->1, proceed with allocating buffers. */
- if (reg_refcount++)
- goto out;
-
- tmp_bufs = kmalloc_array(num_possible_cpus(), sizeof(*tmp_bufs),
- GFP_KERNEL);
- if (tmp_bufs == NULL)
- goto out_fail;
-
- for_each_possible_cpu(cpu) {
- new = kmalloc(MEMCG_PATH_BUF_SIZE * CONTEXT_COUNT, GFP_KERNEL);
- if (new == NULL)
- goto out_fail_free;
- rcu_assign_pointer(per_cpu_ptr(&memcg_paths, cpu)->buf, new);
- /* Don't need to wait for inflights, they'd have gotten NULL. */
- }
-
-out:
- mutex_unlock(&reg_lock);
+ atomic_inc(&reg_refcount);
return 0;
-
-out_fail_free:
- free_memcg_path_bufs();
-out_fail:
- /* Since we failed, undo the earlier ref increment. */
- --reg_refcount;
-
- mutex_unlock(&reg_lock);
- return -ENOMEM;
}
void trace_mmap_lock_unreg(void)
{
- mutex_lock(&reg_lock);
-
- /* If the refcount is going 1->0, proceed with freeing buffers. */
- if (--reg_refcount)
- goto out;
-
- free_memcg_path_bufs();
-
-out:
- mutex_unlock(&reg_lock);
-}
-
-static inline char *get_memcg_path_buf(void)
-{
- struct memcg_path *memcg_path = this_cpu_ptr(&memcg_paths);
- char *buf;
- int idx;
-
- rcu_read_lock();
- buf = rcu_dereference(memcg_path->buf);
- if (buf == NULL) {
- rcu_read_unlock();
- return NULL;
- }
- idx = local_add_return(MEMCG_PATH_BUF_SIZE, &memcg_path->buf_idx) -
- MEMCG_PATH_BUF_SIZE;
- return &buf[idx];
+ atomic_dec(&reg_refcount);
}
-static inline void put_memcg_path_buf(void)
-{
- local_sub(MEMCG_PATH_BUF_SIZE, &this_cpu_ptr(&memcg_paths)->buf_idx);
- rcu_read_unlock();
-}
-
-#define TRACE_MMAP_LOCK_EVENT(type, mm, ...) \
- do { \
- const char *memcg_path; \
- local_lock(&memcg_paths.lock); \
- memcg_path = get_mm_memcg_path(mm); \
- trace_mmap_lock_##type(mm, \
- memcg_path != NULL ? memcg_path : "", \
- ##__VA_ARGS__); \
- if (likely(memcg_path != NULL)) \
- put_memcg_path_buf(); \
- local_unlock(&memcg_paths.lock); \
+#define TRACE_MMAP_LOCK_EVENT(type, mm, ...) \
+ do { \
+ char buf[MEMCG_PATH_BUF_SIZE]; \
+ get_mm_memcg_path(mm, buf, sizeof(buf)); \
+ trace_mmap_lock_##type(mm, buf, ##__VA_ARGS__); \
} while (0)
#else /* !CONFIG_MEMCG */
@@ -185,37 +64,23 @@ void trace_mmap_lock_unreg(void)
#ifdef CONFIG_TRACING
#ifdef CONFIG_MEMCG
/*
- * Write the given mm_struct's memcg path to a percpu buffer, and return a
- * pointer to it. If the path cannot be determined, or no buffer was available
- * (because the trace event is being unregistered), NULL is returned.
- *
- * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
- * disabled by the caller before calling us, and re-enabled only after the
- * caller is done with the pointer.
- *
- * The caller must call put_memcg_path_buf() once the buffer is no longer
- * needed. This must be done while preemption is still disabled.
+ * Write the given mm_struct's memcg path to a buffer. If the path cannot be
+ * determined or the trace event is being unregistered, an empty string is written.
*/
-static const char *get_mm_memcg_path(struct mm_struct *mm)
+static void get_mm_memcg_path(struct mm_struct *mm, char *buf, size_t buflen)
{
- char *buf = NULL;
- struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
+ struct mem_cgroup *memcg;
+ buf[0] = '\0';
+ /* No need to get path if no trace event is registered. */
+ if (!atomic_read(&reg_refcount))
+ return;
+ memcg = get_mem_cgroup_from_mm(mm);
if (memcg == NULL)
- goto out;
- if (unlikely(memcg->css.cgroup == NULL))
- goto out_put;
-
- buf = get_memcg_path_buf();
- if (buf == NULL)
- goto out_put;
-
- cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
-
-out_put:
+ return;
+ if (memcg->css.cgroup)
+ cgroup_path(memcg->css.cgroup, buf, buflen);
css_put(&memcg->css);
-out:
- return buf;
}
#endif /* CONFIG_MEMCG */
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 8c6cd8825273..222ab434da54 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -53,7 +53,7 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
return false;
/* Do we need write faults for softdirty tracking? */
- if (vma_soft_dirty_enabled(vma) && !pte_soft_dirty(pte))
+ if (pte_needs_soft_dirty_wp(vma, pte))
return false;
/* Do we need write faults for uffd-wp tracking? */
@@ -71,6 +71,8 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
return page && PageAnon(page) && PageAnonExclusive(page);
}
+ VM_WARN_ON_ONCE(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte));
+
/*
* Writable MAP_SHARED mapping: "clean" might indicate that the FS still
* needs a real write-fault for writenotify
diff --git a/mm/mremap.c b/mm/mremap.c
index 5f96bc5ee918..e7ae140fc640 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -198,7 +198,7 @@ static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
* PTE.
*
* NOTE! Both old and new PTL matter: the old one
- * for racing with page_mkclean(), the new one to
+ * for racing with folio_mkclean(), the new one to
* make sure the physical page stays valid until
* the TLB entry for the old mapping has been
* flushed.
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 8a1c92090129..acff24e9fae4 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -139,6 +139,8 @@ struct dirty_throttle_control {
unsigned long wb_bg_thresh;
unsigned long pos_ratio;
+ bool freerun;
+ bool dirty_exceeded;
};
/*
@@ -859,6 +861,34 @@ static void mdtc_calc_avail(struct dirty_throttle_control *mdtc,
mdtc->avail = filepages + min(headroom, other_clean);
}
+static inline bool dtc_is_global(struct dirty_throttle_control *dtc)
+{
+ return mdtc_gdtc(dtc) == NULL;
+}
+
+/*
+ * Dirty background will ignore pages being written as we're trying to
+ * decide whether to put more under writeback.
+ */
+static void domain_dirty_avail(struct dirty_throttle_control *dtc,
+ bool include_writeback)
+{
+ if (dtc_is_global(dtc)) {
+ dtc->avail = global_dirtyable_memory();
+ dtc->dirty = global_node_page_state(NR_FILE_DIRTY);
+ if (include_writeback)
+ dtc->dirty += global_node_page_state(NR_WRITEBACK);
+ } else {
+ unsigned long filepages = 0, headroom = 0, writeback = 0;
+
+ mem_cgroup_wb_stats(dtc->wb, &filepages, &headroom, &dtc->dirty,
+ &writeback);
+ if (include_writeback)
+ dtc->dirty += writeback;
+ mdtc_calc_avail(dtc, filepages, headroom);
+ }
+}
+
/**
* __wb_calc_thresh - @wb's share of dirty threshold
* @dtc: dirty_throttle_context of interest
@@ -921,16 +951,9 @@ unsigned long cgwb_calc_thresh(struct bdi_writeback *wb)
{
struct dirty_throttle_control gdtc = { GDTC_INIT_NO_WB };
struct dirty_throttle_control mdtc = { MDTC_INIT(wb, &gdtc) };
- unsigned long filepages = 0, headroom = 0, writeback = 0;
- gdtc.avail = global_dirtyable_memory();
- gdtc.dirty = global_node_page_state(NR_FILE_DIRTY) +
- global_node_page_state(NR_WRITEBACK);
-
- mem_cgroup_wb_stats(wb, &filepages, &headroom,
- &mdtc.dirty, &writeback);
- mdtc.dirty += writeback;
- mdtc_calc_avail(&mdtc, filepages, headroom);
+ domain_dirty_avail(&gdtc, true);
+ domain_dirty_avail(&mdtc, true);
domain_dirty_limits(&mdtc);
return __wb_calc_thresh(&mdtc, mdtc.thresh);
@@ -1703,6 +1726,100 @@ static inline void wb_dirty_limits(struct dirty_throttle_control *dtc)
}
}
+static unsigned long domain_poll_intv(struct dirty_throttle_control *dtc,
+ bool strictlimit)
+{
+ unsigned long dirty, thresh;
+
+ if (strictlimit) {
+ dirty = dtc->wb_dirty;
+ thresh = dtc->wb_thresh;
+ } else {
+ dirty = dtc->dirty;
+ thresh = dtc->thresh;
+ }
+
+ return dirty_poll_interval(dirty, thresh);
+}
+
+/*
+ * Throttle it only when the background writeback cannot catch-up. This avoids
+ * (excessively) small writeouts when the wb limits are ramping up in case of
+ * !strictlimit.
+ *
+ * In strictlimit case make decision based on the wb counters and limits. Small
+ * writeouts when the wb limits are ramping up are the price we consciously pay
+ * for strictlimit-ing.
+ */
+static void domain_dirty_freerun(struct dirty_throttle_control *dtc,
+ bool strictlimit)
+{
+ unsigned long dirty, thresh, bg_thresh;
+
+ if (unlikely(strictlimit)) {
+ wb_dirty_limits(dtc);
+ dirty = dtc->wb_dirty;
+ thresh = dtc->wb_thresh;
+ bg_thresh = dtc->wb_bg_thresh;
+ } else {
+ dirty = dtc->dirty;
+ thresh = dtc->thresh;
+ bg_thresh = dtc->bg_thresh;
+ }
+ dtc->freerun = dirty <= dirty_freerun_ceiling(thresh, bg_thresh);
+}
+
+static void balance_domain_limits(struct dirty_throttle_control *dtc,
+ bool strictlimit)
+{
+ domain_dirty_avail(dtc, true);
+ domain_dirty_limits(dtc);
+ domain_dirty_freerun(dtc, strictlimit);
+}
+
+static void wb_dirty_freerun(struct dirty_throttle_control *dtc,
+ bool strictlimit)
+{
+ dtc->freerun = false;
+
+ /* was already handled in domain_dirty_freerun */
+ if (strictlimit)
+ return;
+
+ wb_dirty_limits(dtc);
+ /*
+ * LOCAL_THROTTLE tasks must not be throttled when below the per-wb
+ * freerun ceiling.
+ */
+ if (!(current->flags & PF_LOCAL_THROTTLE))
+ return;
+
+ dtc->freerun = dtc->wb_dirty <
+ dirty_freerun_ceiling(dtc->wb_thresh, dtc->wb_bg_thresh);
+}
+
+static inline void wb_dirty_exceeded(struct dirty_throttle_control *dtc,
+ bool strictlimit)
+{
+ dtc->dirty_exceeded = (dtc->wb_dirty > dtc->wb_thresh) &&
+ ((dtc->dirty > dtc->thresh) || strictlimit);
+}
+
+/*
+ * The limits fields dirty_exceeded and pos_ratio won't be updated if wb is
+ * in freerun state. Please don't use these invalid fields in freerun case.
+ */
+static void balance_wb_limits(struct dirty_throttle_control *dtc,
+ bool strictlimit)
+{
+ wb_dirty_freerun(dtc, strictlimit);
+ if (dtc->freerun)
+ return;
+
+ wb_dirty_exceeded(dtc, strictlimit);
+ wb_position_ratio(dtc);
+}
+
/*
* balance_dirty_pages() must be called by processes which are generating dirty
* data. It looks at the number of dirty pages in the machine and will force
@@ -1725,7 +1842,6 @@ static int balance_dirty_pages(struct bdi_writeback *wb,
long max_pause;
long min_pause;
int nr_dirtied_pause;
- bool dirty_exceeded = false;
unsigned long task_ratelimit;
unsigned long dirty_ratelimit;
struct backing_dev_info *bdi = wb->bdi;
@@ -1735,53 +1851,16 @@ static int balance_dirty_pages(struct bdi_writeback *wb,
for (;;) {
unsigned long now = jiffies;
- unsigned long dirty, thresh, bg_thresh;
- unsigned long m_dirty = 0; /* stop bogus uninit warnings */
- unsigned long m_thresh = 0;
- unsigned long m_bg_thresh = 0;
nr_dirty = global_node_page_state(NR_FILE_DIRTY);
- gdtc->avail = global_dirtyable_memory();
- gdtc->dirty = nr_dirty + global_node_page_state(NR_WRITEBACK);
-
- domain_dirty_limits(gdtc);
-
- if (unlikely(strictlimit)) {
- wb_dirty_limits(gdtc);
-
- dirty = gdtc->wb_dirty;
- thresh = gdtc->wb_thresh;
- bg_thresh = gdtc->wb_bg_thresh;
- } else {
- dirty = gdtc->dirty;
- thresh = gdtc->thresh;
- bg_thresh = gdtc->bg_thresh;
- }
+ balance_domain_limits(gdtc, strictlimit);
if (mdtc) {
- unsigned long filepages, headroom, writeback;
-
/*
* If @wb belongs to !root memcg, repeat the same
* basic calculations for the memcg domain.
*/
- mem_cgroup_wb_stats(wb, &filepages, &headroom,
- &mdtc->dirty, &writeback);
- mdtc->dirty += writeback;
- mdtc_calc_avail(mdtc, filepages, headroom);
-
- domain_dirty_limits(mdtc);
-
- if (unlikely(strictlimit)) {
- wb_dirty_limits(mdtc);
- m_dirty = mdtc->wb_dirty;
- m_thresh = mdtc->wb_thresh;
- m_bg_thresh = mdtc->wb_bg_thresh;
- } else {
- m_dirty = mdtc->dirty;
- m_thresh = mdtc->thresh;
- m_bg_thresh = mdtc->bg_thresh;
- }
+ balance_domain_limits(mdtc, strictlimit);
}
/*
@@ -1798,31 +1877,21 @@ static int balance_dirty_pages(struct bdi_writeback *wb,
wb_start_background_writeback(wb);
/*
- * Throttle it only when the background writeback cannot
- * catch-up. This avoids (excessively) small writeouts
- * when the wb limits are ramping up in case of !strictlimit.
- *
- * In strictlimit case make decision based on the wb counters
- * and limits. Small writeouts when the wb limits are ramping
- * up are the price we consciously pay for strictlimit-ing.
- *
* If memcg domain is in effect, @dirty should be under
* both global and memcg freerun ceilings.
*/
- if (dirty <= dirty_freerun_ceiling(thresh, bg_thresh) &&
- (!mdtc ||
- m_dirty <= dirty_freerun_ceiling(m_thresh, m_bg_thresh))) {
+ if (gdtc->freerun && (!mdtc || mdtc->freerun)) {
unsigned long intv;
unsigned long m_intv;
free_running:
- intv = dirty_poll_interval(dirty, thresh);
+ intv = domain_poll_intv(gdtc, strictlimit);
m_intv = ULONG_MAX;
current->dirty_paused_when = now;
current->nr_dirtied = 0;
if (mdtc)
- m_intv = dirty_poll_interval(m_dirty, m_thresh);
+ m_intv = domain_poll_intv(mdtc, strictlimit);
current->nr_dirtied_pause = min(intv, m_intv);
break;
}
@@ -1837,24 +1906,9 @@ free_running:
* Calculate global domain's pos_ratio and select the
* global dtc by default.
*/
- if (!strictlimit) {
- wb_dirty_limits(gdtc);
-
- if ((current->flags & PF_LOCAL_THROTTLE) &&
- gdtc->wb_dirty <
- dirty_freerun_ceiling(gdtc->wb_thresh,
- gdtc->wb_bg_thresh))
- /*
- * LOCAL_THROTTLE tasks must not be throttled
- * when below the per-wb freerun ceiling.
- */
- goto free_running;
- }
-
- dirty_exceeded = (gdtc->wb_dirty > gdtc->wb_thresh) &&
- ((gdtc->dirty > gdtc->thresh) || strictlimit);
-
- wb_position_ratio(gdtc);
+ balance_wb_limits(gdtc, strictlimit);
+ if (gdtc->freerun)
+ goto free_running;
sdtc = gdtc;
if (mdtc) {
@@ -1864,31 +1918,15 @@ free_running:
* both global and memcg domains. Choose the one
* w/ lower pos_ratio.
*/
- if (!strictlimit) {
- wb_dirty_limits(mdtc);
-
- if ((current->flags & PF_LOCAL_THROTTLE) &&
- mdtc->wb_dirty <
- dirty_freerun_ceiling(mdtc->wb_thresh,
- mdtc->wb_bg_thresh))
- /*
- * LOCAL_THROTTLE tasks must not be
- * throttled when below the per-wb
- * freerun ceiling.
- */
- goto free_running;
- }
- dirty_exceeded |= (mdtc->wb_dirty > mdtc->wb_thresh) &&
- ((mdtc->dirty > mdtc->thresh) || strictlimit);
-
- wb_position_ratio(mdtc);
+ balance_wb_limits(mdtc, strictlimit);
+ if (mdtc->freerun)
+ goto free_running;
if (mdtc->pos_ratio < gdtc->pos_ratio)
sdtc = mdtc;
}
- if (dirty_exceeded != wb->dirty_exceeded)
- wb->dirty_exceeded = dirty_exceeded;
-
+ wb->dirty_exceeded = gdtc->dirty_exceeded ||
+ (mdtc && mdtc->dirty_exceeded);
if (time_is_before_jiffies(READ_ONCE(wb->bw_time_stamp) +
BANDWIDTH_INTERVAL))
__wb_update_bandwidth(gdtc, mdtc, true);
@@ -2109,6 +2147,35 @@ void balance_dirty_pages_ratelimited(struct address_space *mapping)
}
EXPORT_SYMBOL(balance_dirty_pages_ratelimited);
+/*
+ * Similar to wb_dirty_limits, wb_bg_dirty_limits also calculates dirty
+ * and thresh, but it's for background writeback.
+ */
+static void wb_bg_dirty_limits(struct dirty_throttle_control *dtc)
+{
+ struct bdi_writeback *wb = dtc->wb;
+
+ dtc->wb_bg_thresh = __wb_calc_thresh(dtc, dtc->bg_thresh);
+ if (dtc->wb_bg_thresh < 2 * wb_stat_error())
+ dtc->wb_dirty = wb_stat_sum(wb, WB_RECLAIMABLE);
+ else
+ dtc->wb_dirty = wb_stat(wb, WB_RECLAIMABLE);
+}
+
+static bool domain_over_bg_thresh(struct dirty_throttle_control *dtc)
+{
+ domain_dirty_avail(dtc, false);
+ domain_dirty_limits(dtc);
+ if (dtc->dirty > dtc->bg_thresh)
+ return true;
+
+ wb_bg_dirty_limits(dtc);
+ if (dtc->wb_dirty > dtc->wb_bg_thresh)
+ return true;
+
+ return false;
+}
+
/**
* wb_over_bg_thresh - does @wb need to be written back?
* @wb: bdi_writeback of interest
@@ -2120,54 +2187,14 @@ EXPORT_SYMBOL(balance_dirty_pages_ratelimited);
*/
bool wb_over_bg_thresh(struct bdi_writeback *wb)
{
- struct dirty_throttle_control gdtc_stor = { GDTC_INIT(wb) };
- struct dirty_throttle_control mdtc_stor = { MDTC_INIT(wb, &gdtc_stor) };
- struct dirty_throttle_control * const gdtc = &gdtc_stor;
- struct dirty_throttle_control * const mdtc = mdtc_valid(&mdtc_stor) ?
- &mdtc_stor : NULL;
- unsigned long reclaimable;
- unsigned long thresh;
-
- /*
- * Similar to balance_dirty_pages() but ignores pages being written
- * as we're trying to decide whether to put more under writeback.
- */
- gdtc->avail = global_dirtyable_memory();
- gdtc->dirty = global_node_page_state(NR_FILE_DIRTY);
- domain_dirty_limits(gdtc);
-
- if (gdtc->dirty > gdtc->bg_thresh)
- return true;
-
- thresh = __wb_calc_thresh(gdtc, gdtc->bg_thresh);
- if (thresh < 2 * wb_stat_error())
- reclaimable = wb_stat_sum(wb, WB_RECLAIMABLE);
- else
- reclaimable = wb_stat(wb, WB_RECLAIMABLE);
+ struct dirty_throttle_control gdtc = { GDTC_INIT(wb) };
+ struct dirty_throttle_control mdtc = { MDTC_INIT(wb, &gdtc) };
- if (reclaimable > thresh)
+ if (domain_over_bg_thresh(&gdtc))
return true;
- if (mdtc) {
- unsigned long filepages, headroom, writeback;
-
- mem_cgroup_wb_stats(wb, &filepages, &headroom, &mdtc->dirty,
- &writeback);
- mdtc_calc_avail(mdtc, filepages, headroom);
- domain_dirty_limits(mdtc); /* ditto, ignore writeback */
-
- if (mdtc->dirty > mdtc->bg_thresh)
- return true;
-
- thresh = __wb_calc_thresh(mdtc, mdtc->bg_thresh);
- if (thresh < 2 * wb_stat_error())
- reclaimable = wb_stat_sum(wb, WB_RECLAIMABLE);
- else
- reclaimable = wb_stat(wb, WB_RECLAIMABLE);
-
- if (reclaimable > thresh)
- return true;
- }
+ if (mdtc_valid(&mdtc))
+ return domain_over_bg_thresh(&mdtc);
return false;
}
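
For illustration of the freerun logic consolidated above (assuming dirty_freerun_ceiling() is still the midpoint of the two thresholds): with thresh = 2000 pages and bg_thresh = 1000 pages, domain_dirty_freerun() treats the domain as free-running while dirty stays at or below 1500 pages, so balance_dirty_pages() skips the pause computation until the dirty count climbs past that midpoint.
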
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9ecf99190ea2..3398d914ed83 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -498,7 +498,8 @@ static void bad_page(struct page *page, const char *reason)
dump_stack();
out:
/* Leave bad fields for debug, except PageBuddy could make trouble */
- page_mapcount_reset(page); /* remove PageBuddy */
+ if (PageBuddy(page))
+ __ClearPageBuddy(page);
add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
}
@@ -711,12 +712,12 @@ static inline struct page *get_page_from_free_area(struct free_area *area,
}
/*
- * If this is not the largest possible page, check if the buddy
- * of the next-highest order is free. If it is, it's possible
+ * If this is less than the 2nd largest possible page, check if the buddy
+ * of the next-higher order is free. If it is, it's possible
* that pages are being freed that will coalesce soon. In case,
* that is happening, add the free page to the tail of the list
* so it's less likely to be used soon and more likely to be merged
- * as a higher order page
+ * as a 2-level higher order page
*/
static inline bool
buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
@@ -1218,7 +1219,8 @@ static void __free_pages_ok(struct page *page, unsigned int order,
__count_vm_events(PGFREE, 1 << order);
}
-void __free_pages_core(struct page *page, unsigned int order)
+void __meminit __free_pages_core(struct page *page, unsigned int order,
+ enum meminit_context context)
{
unsigned int nr_pages = 1 << order;
struct page *p = page;
@@ -1228,17 +1230,34 @@ void __free_pages_core(struct page *page, unsigned int order)
* When initializing the memmap, __init_single_page() sets the refcount
* of all pages to 1 ("allocated"/"not free"). We have to set the
* refcount of all involved pages to 0.
+ *
+ * Note that hotplugged memory pages are initialized to PageOffline().
+ * Pages freed from memblock might be marked as reserved.
*/
- prefetchw(p);
- for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
- prefetchw(p + 1);
- __ClearPageReserved(p);
- set_page_count(p, 0);
- }
- __ClearPageReserved(p);
- set_page_count(p, 0);
+ if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG) &&
+ unlikely(context == MEMINIT_HOTPLUG)) {
+ for (loop = 0; loop < nr_pages; loop++, p++) {
+ VM_WARN_ON_ONCE(PageReserved(p));
+ __ClearPageOffline(p);
+ set_page_count(p, 0);
+ }
- atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
+ /*
+ * Freeing the page with debug_pagealloc enabled will try to
+ * unmap it; some archs don't like double-unmappings, so
+ * map it first.
+ */
+ debug_pagealloc_map_pages(page, nr_pages);
+ adjust_managed_page_count(page, nr_pages);
+ } else {
+ for (loop = 0; loop < nr_pages; loop++, p++) {
+ __ClearPageReserved(p);
+ set_page_count(p, 0);
+ }
+
+ /* memblock adjusts totalram_pages() manually. */
+ atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
+ }
if (page_contains_unaccepted(page, order)) {
if (order == MAX_PAGE_ORDER && __free_unaccepted(page))
@@ -1351,7 +1370,8 @@ static void check_new_page_bad(struct page *page)
{
if (unlikely(page->flags & __PG_HWPOISON)) {
/* Don't complain about hwpoisoned pages */
- page_mapcount_reset(page); /* remove PageBuddy */
+ if (PageBuddy(page))
+ __ClearPageBuddy(page);
return;
}
@@ -2632,8 +2652,7 @@ void free_unref_folios(struct folio_batch *folios)
unsigned long pfn = folio_pfn(folio);
unsigned int order = folio_order(folio);
- if (order > 0 && folio_test_large_rmappable(folio))
- folio_undo_large_rmappable(folio);
+ folio_undo_large_rmappable(folio);
if (!free_pages_prepare(&folio->page, order))
continue;
/*
@@ -3031,12 +3050,6 @@ out:
return page;
}
-noinline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
-{
- return __should_fail_alloc_page(gfp_mask, order);
-}
-ALLOW_ERROR_INJECTION(should_fail_alloc_page, TRUE);
-
static inline long __zone_watermark_unusable_free(struct zone *z,
unsigned int order, unsigned int alloc_flags)
{
@@ -5213,7 +5226,7 @@ static void build_zonelists_in_node_order(pg_data_t *pgdat, int *node_order,
}
/*
- * Build gfp_thisnode zonelists
+ * Build __GFP_THISNODE zonelists
*/
static void build_thisnode_zonelists(pg_data_t *pgdat)
{
@@ -5738,6 +5751,7 @@ void __init setup_per_cpu_pageset(void)
for_each_online_pgdat(pgdat)
pgdat->per_cpu_nodestats =
alloc_percpu(struct per_cpu_nodestat);
+ store_early_perpage_metadata();
}
__meminit void zone_pcp_init(struct zone *zone)
@@ -5762,10 +5776,6 @@ void adjust_managed_page_count(struct page *page, long count)
{
atomic_long_add(count, &page_zone(page)->managed_pages);
totalram_pages_add(count);
-#ifdef CONFIG_HIGHMEM
- if (PageHighMem(page))
- totalhigh_pages_add(count);
-#endif
}
EXPORT_SYMBOL(adjust_managed_page_count);
@@ -6690,14 +6700,19 @@ void zone_pcp_reset(struct zone *zone)
/*
* All pages in the range must be in a single zone, must not contain holes,
* must span full sections, and must be isolated before calling this function.
+ *
+ * Returns the number of managed (non-PageOffline()) pages in the range: the
+ * number of pages for which memory offlining code must adjust managed page
+ * counters using adjust_managed_page_count().
*/
-void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
+unsigned long __offline_isolated_pages(unsigned long start_pfn,
+ unsigned long end_pfn)
{
+ unsigned long already_offline = 0, flags;
unsigned long pfn = start_pfn;
struct page *page;
struct zone *zone;
unsigned int order;
- unsigned long flags;
offline_mem_sections(pfn, end_pfn);
zone = page_zone(pfn_to_page(pfn));
@@ -6719,6 +6734,7 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
if (PageOffline(page)) {
BUG_ON(page_count(page));
BUG_ON(PageBuddy(page));
+ already_offline++;
pfn++;
continue;
}
@@ -6731,6 +6747,8 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
pfn += (1 << order);
}
spin_unlock_irqrestore(&zone->lock, flags);
+
+ return end_pfn - start_pfn - already_offline;
}
#endif
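As an illustrative example (the numbers are made up, not part of the patch): offlining a 128 MiB section of 32768 pages in which 512 pages were already PageOffline() (e.g. handed back to a hypervisor by a balloon driver) now makes __offline_isolated_pages() return 32768 - 512 = 32256, and the offlining code adjusts the managed-page counters by exactly that amount via adjust_managed_page_count() rather than by the full range.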
diff --git a/mm/page_counter.c b/mm/page_counter.c
index db20d6452b71..0153f5bb3161 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -262,3 +262,176 @@ int page_counter_memparse(const char *buf, const char *max,
return 0;
}
+
+
+/*
+ * This function calculates an individual page counter's effective
+ * protection which is derived from its own memory.min/low, its
+ * parent's and siblings' settings, as well as the actual memory
+ * distribution in the tree.
+ *
+ * The following rules apply to the effective protection values:
+ *
+ * 1. At the first level of reclaim, effective protection is equal to
+ * the declared protection in memory.min and memory.low.
+ *
+ * 2. To enable safe delegation of the protection configuration, at
+ * subsequent levels the effective protection is capped to the
+ * parent's effective protection.
+ *
+ * 3. To make complex and dynamic subtrees easier to configure, the
+ * user is allowed to overcommit the declared protection at a given
+ * level. If that is the case, the parent's effective protection is
+ * distributed to the children in proportion to how much protection
+ * they have declared and how much of it they are utilizing.
+ *
+ * This makes distribution proportional, but also work-conserving:
+ * if one counter claims much more protection than it uses memory,
+ * the unused remainder is available to its siblings.
+ *
+ * 4. Conversely, when the declared protection is undercommitted at a
+ * given level, the distribution of the larger parental protection
+ * budget is NOT proportional. A counter's protection from a sibling
+ * is capped to its own memory.min/low setting.
+ *
+ * 5. However, to allow protecting recursive subtrees from each other
+ * without having to declare each individual counter's fixed share
+ * of the ancestor's claim to protection, any unutilized -
+ * "floating" - protection from up the tree is distributed in
+ * proportion to each counter's *usage*. This makes the protection
+ * neutral wrt sibling cgroups and lets them compete freely over
+ * the shared parental protection budget, but it protects the
+ * subtree as a whole from neighboring subtrees.
+ *
+ * Note that 4. and 5. are not in conflict: 4. is about protecting
+ * against immediate siblings whereas 5. is about protecting against
+ * neighboring subtrees.
+ */
+static unsigned long effective_protection(unsigned long usage,
+ unsigned long parent_usage,
+ unsigned long setting,
+ unsigned long parent_effective,
+ unsigned long siblings_protected,
+ bool recursive_protection)
+{
+ unsigned long protected;
+ unsigned long ep;
+
+ protected = min(usage, setting);
+ /*
+ * If all cgroups at this level combined claim and use more
+ * protection than what the parent affords them, distribute
+ * shares in proportion to utilization.
+ *
+ * We are using actual utilization rather than the statically
+ * claimed protection in order to be work-conserving: claimed
+ * but unused protection is available to siblings that would
+ * otherwise get a smaller chunk than what they claimed.
+ */
+ if (siblings_protected > parent_effective)
+ return protected * parent_effective / siblings_protected;
+
+ /*
+ * Ok, utilized protection of all children is within what the
+ * parent affords them, so we know whatever this child claims
+ * and utilizes is effectively protected.
+ *
+ * If there is unprotected usage beyond this value, reclaim
+ * will apply pressure in proportion to that amount.
+ *
+ * If there is unutilized protection, the cgroup will be fully
+ * shielded from reclaim, but we do return a smaller value for
+ * protection than what the group could enjoy in theory. This
+ * is okay. With the overcommit distribution above, effective
+ * protection is always dependent on how memory is actually
+ * consumed among the siblings anyway.
+ */
+ ep = protected;
+
+ /*
+ * If the children aren't claiming (all of) the protection
+ * afforded to them by the parent, distribute the remainder in
+ * proportion to the (unprotected) memory of each cgroup. That
+ * way, cgroups that aren't explicitly prioritized wrt each
+ * other compete freely over the allowance, but they are
+ * collectively protected from neighboring trees.
+ *
+ * We're using unprotected memory for the weight so that if
+ * some cgroups DO claim explicit protection, we don't protect
+ * the same bytes twice.
+ *
+ * Check both usage and parent_usage against the respective
+ * protected values. One should imply the other, but they
+ * aren't read atomically - make sure the division is sane.
+ */
+ if (!recursive_protection)
+ return ep;
+
+ if (parent_effective > siblings_protected &&
+ parent_usage > siblings_protected &&
+ usage > protected) {
+ unsigned long unclaimed;
+
+ unclaimed = parent_effective - siblings_protected;
+ unclaimed *= usage - protected;
+ unclaimed /= parent_usage - siblings_protected;
+
+ ep += unclaimed;
+ }
+
+ return ep;
+}
+
+
+/**
+ * page_counter_calculate_protection - check if memory consumption is in the normal range
+ * @root: the top ancestor of the sub-tree being checked
+ * @counter: the page_counter to update
+ * @recursive_protection: Whether to use memory_recursiveprot behavior.
+ *
+ * Calculates elow/emin thresholds for given page_counter.
+ *
+ * WARNING: This function is not stateless! It can only be used as part
+ * of a top-down tree iteration, not for isolated queries.
+ */
+void page_counter_calculate_protection(struct page_counter *root,
+ struct page_counter *counter,
+ bool recursive_protection)
+{
+ unsigned long usage, parent_usage;
+ struct page_counter *parent = counter->parent;
+
+ /*
+ * Effective values of the reclaim targets are ignored so they
+ * can be stale. Have a look at mem_cgroup_protection for more
+ * details.
+ * TODO: calculation should be more robust so that we do not need
+ * that special casing.
+ */
+ if (root == counter)
+ return;
+
+ usage = page_counter_read(counter);
+ if (!usage)
+ return;
+
+ if (parent == root) {
+ counter->emin = READ_ONCE(counter->min);
+ counter->elow = READ_ONCE(counter->low);
+ return;
+ }
+
+ parent_usage = page_counter_read(parent);
+
+ WRITE_ONCE(counter->emin, effective_protection(usage, parent_usage,
+ READ_ONCE(counter->min),
+ READ_ONCE(parent->emin),
+ atomic_long_read(&parent->children_min_usage),
+ recursive_protection));
+
+ WRITE_ONCE(counter->elow, effective_protection(usage, parent_usage,
+ READ_ONCE(counter->low),
+ READ_ONCE(parent->elow),
+ atomic_long_read(&parent->children_low_usage),
+ recursive_protection));
+}
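The rules above are easiest to see with concrete numbers. The following standalone C program is an illustrative sketch only: it copies the arithmetic of effective_protection() into userspace, and min_ul() plus all the sample values are inventions of this example, not kernel code.

#include <stdio.h>

/* Userspace copy of the arithmetic in effective_protection() above,
 * used only to illustrate rules 3 and 5 with concrete numbers. */
static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

static unsigned long effective_protection_sketch(unsigned long usage,
		unsigned long parent_usage, unsigned long setting,
		unsigned long parent_effective, unsigned long siblings_protected,
		int recursive_protection)
{
	unsigned long protected = min_ul(usage, setting);
	unsigned long ep;

	/* Rule 3: overcommit - scale shares by utilized protection. */
	if (siblings_protected > parent_effective)
		return protected * parent_effective / siblings_protected;

	ep = protected;
	if (!recursive_protection)
		return ep;

	/* Rule 5: distribute unclaimed "floating" protection by usage. */
	if (parent_effective > siblings_protected &&
	    parent_usage > siblings_protected && usage > protected)
		ep += (parent_effective - siblings_protected) *
		      (usage - protected) / (parent_usage - siblings_protected);
	return ep;
}

int main(void)
{
	/* Overcommit: two siblings each declare 80 pages of protection under
	 * a parent budget of 100 and use 80 and 40 respectively. */
	printf("%lu %lu\n",
	       effective_protection_sketch(80, 120, 80, 100, 120, 0),
	       effective_protection_sketch(40, 120, 80, 100, 120, 0));
	/* Undercommit with memory_recursiveprot: siblings claim only 40 of
	 * the parent's 100, parent usage is 90; a child with min=10 and
	 * usage=60 also receives a share of the 60 floating pages. */
	printf("%lu\n",
	       effective_protection_sketch(60, 90, 10, 100, 40, 1));
	return 0;
}

It prints 66, 33 and 70: the overcommitted siblings are scaled in proportion to their utilized protection, and the undercommitted child picks up the floating 60 pages on top of its own declared 10.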
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 95dd8ffeaf81..c191e490c401 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -214,6 +214,8 @@ static int __init alloc_node_page_ext(int nid)
return -ENOMEM;
NODE_DATA(nid)->node_page_ext = base;
total_usage += table_size;
+ mod_node_page_state(NODE_DATA(nid), NR_MEMMAP_BOOT,
+ DIV_ROUND_UP(table_size, PAGE_SIZE));
return 0;
}
@@ -268,12 +270,15 @@ static void *__meminit alloc_page_ext(size_t size, int nid)
void *addr = NULL;
addr = alloc_pages_exact_nid(nid, size, flags);
- if (addr) {
+ if (addr)
kmemleak_alloc(addr, size, 1, flags);
- return addr;
- }
+ else
+ addr = vzalloc_node(size, nid);
- addr = vzalloc_node(size, nid);
+ if (addr) {
+ mod_node_page_state(NODE_DATA(nid), NR_MEMMAP,
+ DIV_ROUND_UP(size, PAGE_SIZE));
+ }
return addr;
}
@@ -316,18 +321,27 @@ static int __meminit init_section_page_ext(unsigned long pfn, int nid)
static void free_page_ext(void *addr)
{
+ size_t table_size;
+ struct page *page;
+ struct pglist_data *pgdat;
+
+ table_size = page_ext_size * PAGES_PER_SECTION;
+
if (is_vmalloc_addr(addr)) {
+ page = vmalloc_to_page(addr);
+ pgdat = page_pgdat(page);
vfree(addr);
} else {
- struct page *page = virt_to_page(addr);
- size_t table_size;
-
- table_size = page_ext_size * PAGES_PER_SECTION;
-
+ page = virt_to_page(addr);
+ pgdat = page_pgdat(page);
BUG_ON(PageReserved(page));
kmemleak_free(addr);
free_pages_exact(addr, table_size);
}
+
+ mod_node_page_state(pgdat, NR_MEMMAP,
+ -1L * (DIV_ROUND_UP(table_size, PAGE_SIZE)));
+
}
static void __free_page_ext(unsigned long pfn)
diff --git a/mm/page_io.c b/mm/page_io.c
index 0a150c240bf4..ff8c99ee3af7 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -196,9 +196,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
return ret;
}
if (zswap_store(folio)) {
- folio_start_writeback(folio);
folio_unlock(folio);
- folio_end_writeback(folio);
return 0;
}
if (!mem_cgroup_zswap_writeback_enabled(folio_memcg(folio))) {
@@ -280,7 +278,7 @@ static void sio_write_complete(struct kiocb *iocb, long ret)
* be temporary.
*/
pr_err_ratelimited("Write error %ld on dio swapfile (%llu)\n",
- ret, page_file_offset(page));
+ ret, swap_dev_pos(page_swap_entry(page)));
for (p = 0; p < sio->pages; p++) {
page = sio->bvec[p].bv_page;
set_page_dirty(page);
@@ -299,7 +297,7 @@ static void swap_writepage_fs(struct folio *folio, struct writeback_control *wbc
struct swap_iocb *sio = NULL;
struct swap_info_struct *sis = swp_swap_info(folio->swap);
struct file *swap_file = sis->swap_file;
- loff_t pos = folio_file_pos(folio);
+ loff_t pos = swap_dev_pos(folio->swap);
count_swpout_vm_event(folio);
folio_start_writeback(folio);
@@ -384,7 +382,12 @@ void __swap_writepage(struct folio *folio, struct writeback_control *wbc)
*/
if (data_race(sis->flags & SWP_FS_OPS))
swap_writepage_fs(folio, wbc);
- else if (sis->flags & SWP_SYNCHRONOUS_IO)
+ /*
+ * ->flags can be updated non-atomically (scan_swap_map_slots),
+ * but that will never affect SWP_SYNCHRONOUS_IO, so the data_race
+ * is safe.
+ */
+ else if (data_race(sis->flags & SWP_SYNCHRONOUS_IO))
swap_writepage_bdev_sync(folio, wbc, sis);
else
swap_writepage_bdev_async(folio, wbc, sis);
@@ -430,7 +433,7 @@ static void swap_read_folio_fs(struct folio *folio, struct swap_iocb **plug)
{
struct swap_info_struct *sis = swp_swap_info(folio->swap);
struct swap_iocb *sio = NULL;
- loff_t pos = folio_file_pos(folio);
+ loff_t pos = swap_dev_pos(folio->swap);
if (plug)
sio = *plug;
@@ -493,10 +496,10 @@ static void swap_read_folio_bdev_async(struct folio *folio,
submit_bio(bio);
}
-void swap_read_folio(struct folio *folio, bool synchronous,
- struct swap_iocb **plug)
+void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
{
struct swap_info_struct *sis = swp_swap_info(folio->swap);
+ bool synchronous = sis->flags & SWP_SYNCHRONOUS_IO;
bool workingset = folio_test_workingset(folio);
unsigned long pflags;
bool in_thrashing;
@@ -517,11 +520,10 @@ void swap_read_folio(struct folio *folio, bool synchronous,
delayacct_swapin_start();
if (zswap_load(folio)) {
- folio_mark_uptodate(folio);
folio_unlock(folio);
} else if (data_race(sis->flags & SWP_FS_OPS)) {
swap_read_folio_fs(folio, plug);
- } else if (synchronous || (sis->flags & SWP_SYNCHRONOUS_IO)) {
+ } else if (synchronous) {
swap_read_folio_bdev_sync(folio, sis);
} else {
swap_read_folio_bdev_async(folio, sis);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index f46c80b18ce4..ae2f08ce991b 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -73,45 +73,6 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
return err;
}
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-static int walk_hugepd_range(hugepd_t *phpd, unsigned long addr,
- unsigned long end, struct mm_walk *walk, int pdshift)
-{
- int err = 0;
- const struct mm_walk_ops *ops = walk->ops;
- int shift = hugepd_shift(*phpd);
- int page_size = 1 << shift;
-
- if (!ops->pte_entry)
- return 0;
-
- if (addr & (page_size - 1))
- return 0;
-
- for (;;) {
- pte_t *pte;
-
- spin_lock(&walk->mm->page_table_lock);
- pte = hugepte_offset(*phpd, addr, pdshift);
- err = ops->pte_entry(pte, addr, addr + page_size, walk);
- spin_unlock(&walk->mm->page_table_lock);
-
- if (err)
- break;
- if (addr >= end - page_size)
- break;
- addr += page_size;
- }
- return err;
-}
-#else
-static int walk_hugepd_range(hugepd_t *phpd, unsigned long addr,
- unsigned long end, struct mm_walk *walk, int pdshift)
-{
- return 0;
-}
-#endif
-
static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
struct mm_walk *walk)
{
@@ -159,10 +120,7 @@ again:
if (walk->vma)
split_huge_pmd(walk->vma, pmd, addr);
- if (is_hugepd(__hugepd(pmd_val(*pmd))))
- err = walk_hugepd_range((hugepd_t *)pmd, addr, next, walk, PMD_SHIFT);
- else
- err = walk_pte_range(pmd, addr, next, walk);
+ err = walk_pte_range(pmd, addr, next, walk);
if (err)
break;
@@ -215,10 +173,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
if (pud_none(*pud))
goto again;
- if (is_hugepd(__hugepd(pud_val(*pud))))
- err = walk_hugepd_range((hugepd_t *)pud, addr, next, walk, PUD_SHIFT);
- else
- err = walk_pmd_range(pud, addr, next, walk);
+ err = walk_pmd_range(pud, addr, next, walk);
if (err)
break;
} while (pud++, addr = next, addr != end);
@@ -250,9 +205,7 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
if (err)
break;
}
- if (is_hugepd(__hugepd(p4d_val(*p4d))))
- err = walk_hugepd_range((hugepd_t *)p4d, addr, next, walk, P4D_SHIFT);
- else if (ops->pud_entry || ops->pmd_entry || ops->pte_entry)
+ if (ops->pud_entry || ops->pmd_entry || ops->pte_entry)
err = walk_pud_range(p4d, addr, next, walk);
if (err)
break;
@@ -287,9 +240,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
if (err)
break;
}
- if (is_hugepd(__hugepd(pgd_val(*pgd))))
- err = walk_hugepd_range((hugepd_t *)pgd, addr, next, walk, PGDIR_SHIFT);
- else if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry || ops->pte_entry)
+ if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry || ops->pte_entry)
err = walk_p4d_range(pgd, addr, next, walk);
if (err)
break;
diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
index 7e42f0ca3b7b..4b3d6ec43703 100644
--- a/mm/percpu-internal.h
+++ b/mm/percpu-internal.h
@@ -33,7 +33,7 @@ struct pcpu_block_md {
};
struct pcpuobj_ext {
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
struct obj_cgroup *cgroup;
#endif
#ifdef CONFIG_MEM_ALLOC_PROFILING
@@ -41,7 +41,7 @@ struct pcpuobj_ext {
#endif
};
-#if defined(CONFIG_MEMCG_KMEM) || defined(CONFIG_MEM_ALLOC_PROFILING)
+#if defined(CONFIG_MEMCG) || defined(CONFIG_MEM_ALLOC_PROFILING)
#define NEED_PCPUOBJ_EXT
#endif
@@ -154,7 +154,7 @@ static inline size_t pcpu_obj_full_size(size_t size)
{
size_t extra_size = 0;
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
if (!mem_cgroup_kmem_disabled())
extra_size += size / PCPU_MIN_ALLOC_SIZE * sizeof(struct obj_cgroup *);
#endif
diff --git a/mm/percpu.c b/mm/percpu.c
index 474e3683b74d..20d91af8c033 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1619,7 +1619,7 @@ static struct pcpu_chunk *pcpu_chunk_addr_search(void *addr)
return pcpu_get_page_chunk(pcpu_addr_to_page(addr));
}
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
static bool pcpu_memcg_pre_alloc_hook(size_t size, gfp_t gfp,
struct obj_cgroup **objcgp)
{
@@ -1681,7 +1681,7 @@ static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
obj_cgroup_put(objcg);
}
-#else /* CONFIG_MEMCG_KMEM */
+#else /* CONFIG_MEMCG */
static bool
pcpu_memcg_pre_alloc_hook(size_t size, gfp_t gfp, struct obj_cgroup **objcgp)
{
@@ -1697,7 +1697,7 @@ static void pcpu_memcg_post_alloc_hook(struct obj_cgroup *objcg,
static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
{
}
-#endif /* CONFIG_MEMCG_KMEM */
+#endif /* CONFIG_MEMCG */
#ifdef CONFIG_MEM_ALLOC_PROFILING
static void pcpu_alloc_tag_alloc_hook(struct pcpu_chunk *chunk, int off,
diff --git a/mm/readahead.c b/mm/readahead.c
index 817b2a352d78..517c0be7ce66 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -313,7 +313,7 @@ void force_page_cache_ra(struct readahead_control *ractl,
struct address_space *mapping = ractl->mapping;
struct file_ra_state *ra = ractl->ra;
struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
- unsigned long max_pages, index;
+ unsigned long max_pages;
if (unlikely(!mapping->a_ops->read_folio && !mapping->a_ops->readahead))
return;
@@ -322,7 +322,6 @@ void force_page_cache_ra(struct readahead_control *ractl,
* If the request exceeds the readahead window, allow the read to
* be up to the optimal hardware IO size
*/
- index = readahead_index(ractl);
max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
while (nr_to_read) {
@@ -330,10 +329,8 @@ void force_page_cache_ra(struct readahead_control *ractl,
if (this_chunk > nr_to_read)
this_chunk = nr_to_read;
- ractl->_index = index;
do_page_cache_ra(ractl, this_chunk, 0);
- index += this_chunk;
nr_to_read -= this_chunk;
}
}
@@ -413,58 +410,6 @@ static unsigned long get_next_ra_size(struct file_ra_state *ra,
* it approaches max_readahead.
*/
-/*
- * Count contiguously cached pages from @index-1 to @index-@max,
- * this count is a conservative estimation of
- * - length of the sequential read sequence, or
- * - thrashing threshold in memory tight systems
- */
-static pgoff_t count_history_pages(struct address_space *mapping,
- pgoff_t index, unsigned long max)
-{
- pgoff_t head;
-
- rcu_read_lock();
- head = page_cache_prev_miss(mapping, index - 1, max);
- rcu_read_unlock();
-
- return index - 1 - head;
-}
-
-/*
- * page cache context based readahead
- */
-static int try_context_readahead(struct address_space *mapping,
- struct file_ra_state *ra,
- pgoff_t index,
- unsigned long req_size,
- unsigned long max)
-{
- pgoff_t size;
-
- size = count_history_pages(mapping, index, max);
-
- /*
- * not enough history pages:
- * it could be a random read
- */
- if (size <= req_size)
- return 0;
-
- /*
- * starts from beginning of file:
- * it is a strong indication of long-run stream (or whole-file-read)
- */
- if (size >= index)
- size *= 2;
-
- ra->start = index;
- ra->size = min(size + req_size, max);
- ra->async_size = 1;
-
- return 1;
-}
-
static inline int ra_alloc_folio(struct readahead_control *ractl, pgoff_t index,
pgoff_t mark, unsigned int order, gfp_t gfp)
{
@@ -491,7 +436,8 @@ void page_cache_ra_order(struct readahead_control *ractl,
struct file_ra_state *ra, unsigned int new_order)
{
struct address_space *mapping = ractl->mapping;
- pgoff_t index = readahead_index(ractl);
+ pgoff_t start = readahead_index(ractl);
+ pgoff_t index = start;
pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
pgoff_t mark = index + ra->size - ra->async_size;
unsigned int nofs;
@@ -527,11 +473,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
index += 1UL << order;
}
- if (index > limit) {
- ra->size += index - limit - 1;
- ra->async_size += index - limit - 1;
- }
-
read_pages(ractl);
filemap_invalidate_unlock_shared(mapping);
memalloc_nofs_restore(nofs);
@@ -544,22 +485,14 @@ void page_cache_ra_order(struct readahead_control *ractl,
if (!err)
return;
fallback:
- do_page_cache_ra(ractl, ra->size, ra->async_size);
+ do_page_cache_ra(ractl, ra->size - (index - start), ra->async_size);
}
-/*
- * A minimal readahead algorithm for trivial sequential/random reads.
- */
-static void ondemand_readahead(struct readahead_control *ractl,
- struct folio *folio, unsigned long req_size)
+static unsigned long ractl_max_pages(struct readahead_control *ractl,
+ unsigned long req_size)
{
struct backing_dev_info *bdi = inode_to_bdi(ractl->mapping->host);
- struct file_ra_state *ra = ractl->ra;
- unsigned long max_pages = ra->ra_pages;
- unsigned long add_pages;
- pgoff_t index = readahead_index(ractl);
- pgoff_t expected, prev_index;
- unsigned int order = folio ? folio_order(folio) : 0;
+ unsigned long max_pages = ractl->ra->ra_pages;
/*
* If the request exceeds the readahead window, allow the read to
@@ -567,112 +500,17 @@ static void ondemand_readahead(struct readahead_control *ractl,
*/
if (req_size > max_pages && bdi->io_pages > max_pages)
max_pages = min(req_size, bdi->io_pages);
-
- /*
- * start of file
- */
- if (!index)
- goto initial_readahead;
-
- /*
- * It's the expected callback index, assume sequential access.
- * Ramp up sizes, and push forward the readahead window.
- */
- expected = round_down(ra->start + ra->size - ra->async_size,
- 1UL << order);
- if (index == expected || index == (ra->start + ra->size)) {
- ra->start += ra->size;
- ra->size = get_next_ra_size(ra, max_pages);
- ra->async_size = ra->size;
- goto readit;
- }
-
- /*
- * Hit a marked folio without valid readahead state.
- * E.g. interleaved reads.
- * Query the pagecache for async_size, which normally equals to
- * readahead size. Ramp it up and use it as the new readahead size.
- */
- if (folio) {
- pgoff_t start;
-
- rcu_read_lock();
- start = page_cache_next_miss(ractl->mapping, index + 1,
- max_pages);
- rcu_read_unlock();
-
- if (!start || start - index > max_pages)
- return;
-
- ra->start = start;
- ra->size = start - index; /* old async_size */
- ra->size += req_size;
- ra->size = get_next_ra_size(ra, max_pages);
- ra->async_size = ra->size;
- goto readit;
- }
-
- /*
- * oversize read
- */
- if (req_size > max_pages)
- goto initial_readahead;
-
- /*
- * sequential cache miss
- * trivial case: (index - prev_index) == 1
- * unaligned reads: (index - prev_index) == 0
- */
- prev_index = (unsigned long long)ra->prev_pos >> PAGE_SHIFT;
- if (index - prev_index <= 1UL)
- goto initial_readahead;
-
- /*
- * Query the page cache and look for the traces(cached history pages)
- * that a sequential stream would leave behind.
- */
- if (try_context_readahead(ractl->mapping, ra, index, req_size,
- max_pages))
- goto readit;
-
- /*
- * standalone, small random read
- * Read as is, and do not pollute the readahead state.
- */
- do_page_cache_ra(ractl, req_size, 0);
- return;
-
-initial_readahead:
- ra->start = index;
- ra->size = get_init_ra_size(req_size, max_pages);
- ra->async_size = ra->size > req_size ? ra->size - req_size : ra->size;
-
-readit:
- /*
- * Will this read hit the readahead marker made by itself?
- * If so, trigger the readahead marker hit now, and merge
- * the resulted next readahead window into the current one.
- * Take care of maximum IO pages as above.
- */
- if (index == ra->start && ra->size == ra->async_size) {
- add_pages = get_next_ra_size(ra, max_pages);
- if (ra->size + add_pages <= max_pages) {
- ra->async_size = add_pages;
- ra->size += add_pages;
- } else {
- ra->size = max_pages;
- ra->async_size = max_pages >> 1;
- }
- }
-
- ractl->_index = ra->start;
- page_cache_ra_order(ractl, ra, order);
+ return max_pages;
}
void page_cache_sync_ra(struct readahead_control *ractl,
unsigned long req_count)
{
+ pgoff_t index = readahead_index(ractl);
bool do_forced_ra = ractl->file && (ractl->file->f_mode & FMODE_RANDOM);
+ struct file_ra_state *ra = ractl->ra;
+ unsigned long max_pages, contig_count;
+ pgoff_t prev_index, miss;
/*
* Even if readahead is disabled, issue this request as readahead
@@ -680,7 +518,7 @@ void page_cache_sync_ra(struct readahead_control *ractl,
* readahead will do the right thing and limit the read to just the
* requested range, which we'll set to 1 page for this case.
*/
- if (!ractl->ra->ra_pages || blk_cgroup_congested()) {
+ if (!ra->ra_pages || blk_cgroup_congested()) {
if (!ractl->file)
return;
req_count = 1;
@@ -693,15 +531,63 @@ void page_cache_sync_ra(struct readahead_control *ractl,
return;
}
- ondemand_readahead(ractl, NULL, req_count);
+ max_pages = ractl_max_pages(ractl, req_count);
+ prev_index = (unsigned long long)ra->prev_pos >> PAGE_SHIFT;
+ /*
+ * A start of file, oversized read, or sequential cache miss:
+ * trivial case: (index - prev_index) == 1
+ * unaligned reads: (index - prev_index) == 0
+ */
+ if (!index || req_count > max_pages || index - prev_index <= 1UL) {
+ ra->start = index;
+ ra->size = get_init_ra_size(req_count, max_pages);
+ ra->async_size = ra->size > req_count ? ra->size - req_count :
+ ra->size >> 1;
+ goto readit;
+ }
+
+ /*
+ * Query the page cache and look for the traces (cached history pages)
+ * that a sequential stream would leave behind.
+ */
+ rcu_read_lock();
+ miss = page_cache_prev_miss(ractl->mapping, index - 1, max_pages);
+ rcu_read_unlock();
+ contig_count = index - miss - 1;
+ /*
+ * Standalone, small random read. Read as is, and do not pollute the
+ * readahead state.
+ */
+ if (contig_count <= req_count) {
+ do_page_cache_ra(ractl, req_count, 0);
+ return;
+ }
+ /*
+ * File cached from the beginning:
+ * it is a strong indication of long-run stream (or whole-file-read)
+ */
+ if (miss == ULONG_MAX)
+ contig_count *= 2;
+ ra->start = index;
+ ra->size = min(contig_count + req_count, max_pages);
+ ra->async_size = 1;
+readit:
+ ractl->_index = ra->start;
+ page_cache_ra_order(ractl, ra, 0);
}
EXPORT_SYMBOL_GPL(page_cache_sync_ra);
void page_cache_async_ra(struct readahead_control *ractl,
struct folio *folio, unsigned long req_count)
{
+ unsigned long max_pages;
+ struct file_ra_state *ra = ractl->ra;
+ pgoff_t index = readahead_index(ractl);
+ pgoff_t expected, start;
+ unsigned int order = folio_order(folio);
+
/* no readahead */
- if (!ractl->ra->ra_pages)
+ if (!ra->ra_pages)
return;
/*
@@ -715,7 +601,41 @@ void page_cache_async_ra(struct readahead_control *ractl,
if (blk_cgroup_congested())
return;
- ondemand_readahead(ractl, folio, req_count);
+ max_pages = ractl_max_pages(ractl, req_count);
+ /*
+ * It's the expected callback index, assume sequential access.
+ * Ramp up sizes, and push forward the readahead window.
+ */
+ expected = round_down(ra->start + ra->size - ra->async_size,
+ 1UL << order);
+ if (index == expected) {
+ ra->start += ra->size;
+ ra->size = get_next_ra_size(ra, max_pages);
+ ra->async_size = ra->size;
+ goto readit;
+ }
+
+ /*
+ * Hit a marked folio without valid readahead state.
+ * E.g. interleaved reads.
+ * Query the pagecache for async_size, which is normally equal to the
+ * readahead size. Ramp it up and use it as the new readahead size.
+ */
+ rcu_read_lock();
+ start = page_cache_next_miss(ractl->mapping, index + 1, max_pages);
+ rcu_read_unlock();
+
+ if (!start || start - index > max_pages)
+ return;
+
+ ra->start = start;
+ ra->size = start - index; /* old async_size */
+ ra->size += req_count;
+ ra->size = get_next_ra_size(ra, max_pages);
+ ra->async_size = ra->size;
+readit:
+ ractl->_index = ra->start;
+ page_cache_ra_order(ractl, ra, order);
}
EXPORT_SYMBOL_GPL(page_cache_async_ra);
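The decision flow that replaces ondemand_readahead() on the synchronous side can be summarised with the sketch below (illustrative only, not kernel code; the enum, the function name and the precomputed contig_count parameter are invented for this example):

/*
 * Sketch of how the folded-in page_cache_sync_ra() above classifies a
 * synchronous readahead request. contig_count is the number of
 * contiguously cached pages immediately before @index.
 */
enum sync_ra_path { RA_INITIAL, RA_RANDOM, RA_CONTEXT };

static enum sync_ra_path classify_sync_ra(unsigned long index,
		unsigned long prev_index, unsigned long req_count,
		unsigned long max_pages, unsigned long contig_count)
{
	/* Start of file, oversized read, or (nearly) sequential miss:
	 * start a fresh readahead window sized by get_init_ra_size(). */
	if (!index || req_count > max_pages || index - prev_index <= 1)
		return RA_INITIAL;
	/* Too little cached history behind us: treat as a random read
	 * and do not touch the readahead state. */
	if (contig_count <= req_count)
		return RA_RANDOM;
	/* Enough history: size the window from the cached run,
	 * min(contig_count + req_count, max_pages), with async_size = 1. */
	return RA_CONTEXT;
}

The asynchronous side keeps the previous marker-hit handling: if the trigger index matches the expected readahead mark, the window is ramped with get_next_ra_size(); otherwise it is re-derived from page_cache_next_miss().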
diff --git a/mm/rmap.c b/mm/rmap.c
index e8fc5ecb59b2..8616308610b9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1269,33 +1269,42 @@ static void __page_check_anon_rmap(struct folio *folio, struct page *page,
page);
}
+static void __folio_mod_stat(struct folio *folio, int nr, int nr_pmdmapped)
+{
+ int idx;
+
+ if (nr) {
+ idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
+ __lruvec_stat_mod_folio(folio, idx, nr);
+ }
+ if (nr_pmdmapped) {
+ if (folio_test_anon(folio)) {
+ idx = NR_ANON_THPS;
+ __lruvec_stat_mod_folio(folio, idx, nr_pmdmapped);
+ } else {
+ /* NR_*_PMDMAPPED are not maintained per-memcg */
+ idx = folio_test_swapbacked(folio) ?
+ NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED;
+ __mod_node_page_state(folio_pgdat(folio), idx,
+ nr_pmdmapped);
+ }
+ }
+}
+
static __always_inline void __folio_add_anon_rmap(struct folio *folio,
struct page *page, int nr_pages, struct vm_area_struct *vma,
unsigned long address, rmap_t flags, enum rmap_level level)
{
int i, nr, nr_pmdmapped = 0;
+ VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
+
nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
- if (nr_pmdmapped)
- __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr_pmdmapped);
- if (nr)
- __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
- if (unlikely(!folio_test_anon(folio))) {
- VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
- /*
- * For a PTE-mapped large folio, we only know that the single
- * PTE is exclusive. Further, __folio_set_anon() might not get
- * folio->index right when not given the address of the head
- * page.
- */
- VM_WARN_ON_FOLIO(folio_test_large(folio) &&
- level != RMAP_LEVEL_PMD, folio);
- __folio_set_anon(folio, vma, address,
- !!(flags & RMAP_EXCLUSIVE));
- } else if (likely(!folio_test_ksm(folio))) {
+ if (likely(!folio_test_ksm(folio)))
__page_check_anon_rmap(folio, page, vma, address);
- }
+
+ __folio_mod_stat(folio, nr, nr_pmdmapped);
if (flags & RMAP_EXCLUSIVE) {
switch (level) {
@@ -1381,29 +1390,37 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
* @folio: The folio to add the mapping to.
* @vma: the vm area in which the mapping is added
* @address: the user virtual address mapped
+ * @flags: The rmap flags
*
* Like folio_add_anon_rmap_*() but must only be called on *new* folios.
* This means the inc-and-test can be bypassed.
- * The folio does not have to be locked.
+ * The folio doesn't necessarily need to be locked while it's exclusive
+ * unless two threads map it concurrently. However, the folio must be
+ * locked if it's shared.
*
- * If the folio is pmd-mappable, it is accounted as a THP. As the folio
- * is new, it's assumed to be mapped exclusively by a single process.
+ * If the folio is pmd-mappable, it is accounted as a THP.
*/
void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
- unsigned long address)
+ unsigned long address, rmap_t flags)
{
- int nr = folio_nr_pages(folio);
+ const int nr = folio_nr_pages(folio);
+ const bool exclusive = flags & RMAP_EXCLUSIVE;
+ int nr_pmdmapped = 0;
VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
+ VM_WARN_ON_FOLIO(!exclusive && !folio_test_locked(folio), folio);
VM_BUG_ON_VMA(address < vma->vm_start ||
address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
- __folio_set_swapbacked(folio);
- __folio_set_anon(folio, vma, address, true);
+
+ if (!folio_test_swapbacked(folio))
+ __folio_set_swapbacked(folio);
+ __folio_set_anon(folio, vma, address, exclusive);
if (likely(!folio_test_large(folio))) {
/* increment count (starts at -1) */
atomic_set(&folio->_mapcount, 0);
- SetPageAnonExclusive(&folio->page);
+ if (exclusive)
+ SetPageAnonExclusive(&folio->page);
} else if (!folio_test_pmd_mappable(folio)) {
int i;
@@ -1412,7 +1429,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
/* increment count (starts at -1) */
atomic_set(&page->_mapcount, 0);
- SetPageAnonExclusive(page);
+ if (exclusive)
+ SetPageAnonExclusive(page);
}
/* increment count (starts at -1) */
@@ -1424,28 +1442,24 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
/* increment count (starts at -1) */
atomic_set(&folio->_large_mapcount, 0);
atomic_set(&folio->_nr_pages_mapped, ENTIRELY_MAPPED);
- SetPageAnonExclusive(&folio->page);
- __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
+ if (exclusive)
+ SetPageAnonExclusive(&folio->page);
+ nr_pmdmapped = nr;
}
- __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
+ __folio_mod_stat(folio, nr, nr_pmdmapped);
}
static __always_inline void __folio_add_file_rmap(struct folio *folio,
struct page *page, int nr_pages, struct vm_area_struct *vma,
enum rmap_level level)
{
- pg_data_t *pgdat = folio_pgdat(folio);
int nr, nr_pmdmapped = 0;
VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
- if (nr_pmdmapped)
- __mod_node_page_state(pgdat, folio_test_swapbacked(folio) ?
- NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
- if (nr)
- __lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);
+ __folio_mod_stat(folio, nr, nr_pmdmapped);
/* See comments in folio_add_anon_rmap_*() */
if (!folio_test_large(folio))
@@ -1494,10 +1508,8 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
enum rmap_level level)
{
atomic_t *mapped = &folio->_nr_pages_mapped;
- pg_data_t *pgdat = folio_pgdat(folio);
int last, nr = 0, nr_pmdmapped = 0;
bool partially_mapped = false;
- enum node_stat_item idx;
__folio_rmap_sanity_checks(folio, page, nr_pages, level);
@@ -1541,20 +1553,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
break;
}
- if (nr_pmdmapped) {
- /* NR_{FILE/SHMEM}_PMDMAPPED are not maintained per-memcg */
- if (folio_test_anon(folio))
- __lruvec_stat_mod_folio(folio, NR_ANON_THPS, -nr_pmdmapped);
- else
- __mod_node_page_state(pgdat,
- folio_test_swapbacked(folio) ?
- NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED,
- -nr_pmdmapped);
- }
if (nr) {
- idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
- __lruvec_stat_mod_folio(folio, idx, -nr);
-
/*
* Queue anon large folio for deferred split if at least one
* page of the folio is unmapped and at least one page
@@ -1566,6 +1565,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
list_empty(&folio->_deferred_list))
deferred_split_folio(folio);
}
+ __folio_mod_stat(folio, -nr, -nr_pmdmapped);
/*
* It would be tidy to reset folio_test_anon mapping when fully
@@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
if (flags & TTU_SYNC)
pvmw.flags = PVMW_SYNC;
- if (flags & TTU_SPLIT_HUGE_PMD)
- split_huge_pmd_address(vma, address, false, folio);
-
/*
* For THP, we have to assume the worse case ie pmd for invalidation.
* For hugetlb, it could be much worse if we need to do pud
@@ -1668,9 +1665,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
mmu_notifier_invalidate_range_start(&range);
while (page_vma_mapped_walk(&pvmw)) {
- /* Unexpected PMD-mapped THP? */
- VM_BUG_ON_FOLIO(!pvmw.pte, folio);
-
/*
* If the folio is in an mlock()d vma, we must not swap it out.
*/
@@ -1679,11 +1673,30 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
/* Restore the mlock which got missed */
if (!folio_test_large(folio))
mlock_vma_folio(folio, vma);
- page_vma_mapped_walk_done(&pvmw);
- ret = false;
- break;
+ goto walk_abort;
+ }
+
+ if (!pvmw.pte) {
+ if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
+ folio))
+ goto walk_done;
+
+ if (flags & TTU_SPLIT_HUGE_PMD) {
+ /*
+ * We temporarily have to drop the PTL and
+ * restart so we can process the PTE-mapped THP.
+ */
+ split_huge_pmd_locked(vma, pvmw.address,
+ pvmw.pmd, false, folio);
+ flags &= ~TTU_SPLIT_HUGE_PMD;
+ page_vma_mapped_walk_restart(&pvmw);
+ continue;
+ }
}
+ /* Unexpected PMD-mapped THP? */
+ VM_BUG_ON_FOLIO(!pvmw.pte, folio);
+
pfn = pte_pfn(ptep_get(pvmw.pte));
subpage = folio_page(folio, pfn - folio_pfn(folio));
address = pvmw.address;
@@ -1719,11 +1732,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
*/
if (!anon) {
VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
- if (!hugetlb_vma_trylock_write(vma)) {
- page_vma_mapped_walk_done(&pvmw);
- ret = false;
- break;
- }
+ if (!hugetlb_vma_trylock_write(vma))
+ goto walk_abort;
if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
hugetlb_vma_unlock_write(vma);
flush_tlb_range(vma,
@@ -1738,8 +1748,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* actual page and drop map count
* to zero.
*/
- page_vma_mapped_walk_done(&pvmw);
- break;
+ goto walk_done;
}
hugetlb_vma_unlock_write(vma);
}
@@ -1811,9 +1820,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
if (unlikely(folio_test_swapbacked(folio) !=
folio_test_swapcache(folio))) {
WARN_ON_ONCE(1);
- ret = false;
- page_vma_mapped_walk_done(&pvmw);
- break;
+ goto walk_abort;
}
/* MADV_FREE page check */
@@ -1852,23 +1859,17 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
*/
set_pte_at(mm, address, pvmw.pte, pteval);
folio_set_swapbacked(folio);
- ret = false;
- page_vma_mapped_walk_done(&pvmw);
- break;
+ goto walk_abort;
}
if (swap_duplicate(entry) < 0) {
set_pte_at(mm, address, pvmw.pte, pteval);
- ret = false;
- page_vma_mapped_walk_done(&pvmw);
- break;
+ goto walk_abort;
}
if (arch_unmap_one(mm, vma, address, pteval) < 0) {
swap_free(entry);
set_pte_at(mm, address, pvmw.pte, pteval);
- ret = false;
- page_vma_mapped_walk_done(&pvmw);
- break;
+ goto walk_abort;
}
/* See folio_try_share_anon_rmap(): clear PTE first. */
@@ -1876,9 +1877,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
folio_try_share_anon_rmap_pte(folio, subpage)) {
swap_free(entry);
set_pte_at(mm, address, pvmw.pte, pteval);
- ret = false;
- page_vma_mapped_walk_done(&pvmw);
- break;
+ goto walk_abort;
}
if (list_empty(&mm->mmlist)) {
spin_lock(&mmlist_lock);
@@ -1918,6 +1917,12 @@ discard:
if (vma->vm_flags & VM_LOCKED)
mlock_drain_local();
folio_put(folio);
+ continue;
+walk_abort:
+ ret = false;
+walk_done:
+ page_vma_mapped_walk_done(&pvmw);
+ break;
}
mmu_notifier_invalidate_range_end(&range);
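A hedged caller-side sketch of the new folio_add_new_anon_rmap() contract (the surrounding calls and variable names are invented for illustration; only the flag and locking semantics follow the kerneldoc above):

	/* Freshly allocated folio mapped exclusively by one process:
	 * pass RMAP_EXCLUSIVE, no folio lock required. */
	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);

	/* New folio that is immediately shared: must be locked and
	 * must not claim exclusivity. */
	folio_lock(folio);
	folio_add_new_anon_rmap(folio, vma, addr, RMAP_NONE);
	folio_unlock(folio);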
diff --git a/mm/shmem.c b/mm/shmem.c
index 831b52dfd56e..2faa9daaf54b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -131,6 +131,13 @@ struct shmem_options {
#define SHMEM_SEEN_QUOTA 32
};
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static unsigned long huge_shmem_orders_always __read_mostly;
+static unsigned long huge_shmem_orders_madvise __read_mostly;
+static unsigned long huge_shmem_orders_inherit __read_mostly;
+static unsigned long huge_shmem_orders_within_size __read_mostly;
+#endif
+
#ifdef CONFIG_TMPFS
static unsigned long shmem_default_max_blocks(void)
{
@@ -1614,73 +1621,174 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
return result;
}
-static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
- struct shmem_inode_info *info, pgoff_t index)
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+unsigned long shmem_allowable_huge_orders(struct inode *inode,
+ struct vm_area_struct *vma, pgoff_t index,
+ bool global_huge)
{
- struct mempolicy *mpol;
- pgoff_t ilx;
- struct page *page;
+ unsigned long mask = READ_ONCE(huge_shmem_orders_always);
+ unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
+ unsigned long vm_flags = vma->vm_flags;
+ /*
+ * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
+ * are enabled for this vma.
+ */
+ unsigned long orders = BIT(PMD_ORDER + 1) - 1;
+ loff_t i_size;
+ int order;
- mpol = shmem_get_pgoff_policy(info, index, HPAGE_PMD_ORDER, &ilx);
- page = alloc_pages_mpol(gfp, HPAGE_PMD_ORDER, mpol, ilx, numa_node_id());
- mpol_cond_put(mpol);
+ if ((vm_flags & VM_NOHUGEPAGE) ||
+ test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+ return 0;
- return page_rmappable_folio(page);
+ /* If the hardware/firmware marked hugepage support disabled. */
+ if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
+ return 0;
+
+ /*
+ * Following the 'deny' semantics of the top level, force the huge
+ * option off from all mounts.
+ */
+ if (shmem_huge == SHMEM_HUGE_DENY)
+ return 0;
+
+ /*
+ * Only allow inherit orders if the top-level value is 'force', which
+ * means non-PMD sized THP cannot override the 'huge' mount option now.
+ */
+ if (shmem_huge == SHMEM_HUGE_FORCE)
+ return READ_ONCE(huge_shmem_orders_inherit);
+
+ /* Allow mTHP that will be fully within i_size. */
+ order = highest_order(within_size_orders);
+ while (within_size_orders) {
+ index = round_up(index + 1, order);
+ i_size = round_up(i_size_read(inode), PAGE_SIZE);
+ if (i_size >> PAGE_SHIFT >= index) {
+ mask |= within_size_orders;
+ break;
+ }
+
+ order = next_order(&within_size_orders, order);
+ }
+
+ if (vm_flags & VM_HUGEPAGE)
+ mask |= READ_ONCE(huge_shmem_orders_madvise);
+
+ if (global_huge)
+ mask |= READ_ONCE(huge_shmem_orders_inherit);
+
+ return orders & mask;
+}
+
+static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault *vmf,
+ struct address_space *mapping, pgoff_t index,
+ unsigned long orders)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ unsigned long pages;
+ int order;
+
+ orders = thp_vma_suitable_orders(vma, vmf->address, orders);
+ if (!orders)
+ return 0;
+
+ /* Find the highest order that can add into the page cache */
+ order = highest_order(orders);
+ while (orders) {
+ pages = 1UL << order;
+ index = round_down(index, pages);
+ if (!xa_find(&mapping->i_pages, &index,
+ index + pages - 1, XA_PRESENT))
+ break;
+ order = next_order(&orders, order);
+ }
+
+ return orders;
+}
+#else
+static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault *vmf,
+ struct address_space *mapping, pgoff_t index,
+ unsigned long orders)
+{
+ return 0;
}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-static struct folio *shmem_alloc_folio(gfp_t gfp,
+static struct folio *shmem_alloc_folio(gfp_t gfp, int order,
struct shmem_inode_info *info, pgoff_t index)
{
struct mempolicy *mpol;
pgoff_t ilx;
- struct page *page;
+ struct folio *folio;
- mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
- page = alloc_pages_mpol(gfp, 0, mpol, ilx, numa_node_id());
+ mpol = shmem_get_pgoff_policy(info, index, order, &ilx);
+ folio = folio_alloc_mpol(gfp, order, mpol, ilx, numa_node_id());
mpol_cond_put(mpol);
- return (struct folio *)page;
+ return folio;
}
-static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
- struct inode *inode, pgoff_t index,
- struct mm_struct *fault_mm, bool huge)
+static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
+ gfp_t gfp, struct inode *inode, pgoff_t index,
+ struct mm_struct *fault_mm, unsigned long orders)
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
- struct folio *folio;
+ struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
+ unsigned long suitable_orders = 0;
+ struct folio *folio = NULL;
long pages;
- int error;
+ int error, order;
if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
- huge = false;
+ orders = 0;
- if (huge) {
- pages = HPAGE_PMD_NR;
- index = round_down(index, HPAGE_PMD_NR);
+ if (orders > 0) {
+ if (vma && vma_is_anon_shmem(vma)) {
+ suitable_orders = shmem_suitable_orders(inode, vmf,
+ mapping, index, orders);
+ } else if (orders & BIT(HPAGE_PMD_ORDER)) {
+ pages = HPAGE_PMD_NR;
+ suitable_orders = BIT(HPAGE_PMD_ORDER);
+ index = round_down(index, HPAGE_PMD_NR);
- /*
- * Check for conflict before waiting on a huge allocation.
- * Conflict might be that a huge page has just been allocated
- * and added to page cache by a racing thread, or that there
- * is already at least one small page in the huge extent.
- * Be careful to retry when appropriate, but not forever!
- * Elsewhere -EEXIST would be the right code, but not here.
- */
- if (xa_find(&mapping->i_pages, &index,
- index + HPAGE_PMD_NR - 1, XA_PRESENT))
- return ERR_PTR(-E2BIG);
+ /*
+ * Check for conflict before waiting on a huge allocation.
+ * Conflict might be that a huge page has just been allocated
+ * and added to page cache by a racing thread, or that there
+ * is already at least one small page in the huge extent.
+ * Be careful to retry when appropriate, but not forever!
+ * Elsewhere -EEXIST would be the right code, but not here.
+ */
+ if (xa_find(&mapping->i_pages, &index,
+ index + HPAGE_PMD_NR - 1, XA_PRESENT))
+ return ERR_PTR(-E2BIG);
+ }
- folio = shmem_alloc_hugefolio(gfp, info, index);
- if (!folio)
- count_vm_event(THP_FILE_FALLBACK);
+ order = highest_order(suitable_orders);
+ while (suitable_orders) {
+ pages = 1UL << order;
+ index = round_down(index, pages);
+ folio = shmem_alloc_folio(gfp, order, info, index);
+ if (folio)
+ goto allocated;
+
+ if (pages == HPAGE_PMD_NR)
+ count_vm_event(THP_FILE_FALLBACK);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ count_mthp_stat(order, MTHP_STAT_SHMEM_FALLBACK);
+#endif
+ order = next_order(&suitable_orders, order);
+ }
} else {
pages = 1;
- folio = shmem_alloc_folio(gfp, info, index);
+ folio = shmem_alloc_folio(gfp, 0, info, index);
}
if (!folio)
return ERR_PTR(-ENOMEM);
+allocated:
__folio_set_locked(folio);
__folio_set_swapbacked(folio);
@@ -1690,9 +1798,15 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
if (xa_find(&mapping->i_pages, &index,
index + pages - 1, XA_PRESENT)) {
error = -EEXIST;
- } else if (huge) {
- count_vm_event(THP_FILE_FALLBACK);
- count_vm_event(THP_FILE_FALLBACK_CHARGE);
+ } else if (pages > 1) {
+ if (pages == HPAGE_PMD_NR) {
+ count_vm_event(THP_FILE_FALLBACK);
+ count_vm_event(THP_FILE_FALLBACK_CHARGE);
+ }
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK);
+ count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK_CHARGE);
+#endif
}
goto unlock;
}
@@ -1767,7 +1881,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
old = *foliop;
entry = old->swap;
- swap_index = swp_offset(entry);
+ swap_index = swap_cache_index(entry);
swap_mapping = swap_address_space(entry);
/*
@@ -1776,7 +1890,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
*/
gfp &= ~GFP_CONSTRAINT_MASK;
VM_BUG_ON_FOLIO(folio_test_large(old), old);
- new = shmem_alloc_folio(gfp, info, index);
+ new = shmem_alloc_folio(gfp, 0, info, index);
if (!new)
return -ENOMEM;
@@ -1975,7 +2089,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
struct mm_struct *fault_mm;
struct folio *folio;
int error;
- bool alloced;
+ bool alloced, huge;
+ unsigned long orders = 0;
if (WARN_ON_ONCE(!shmem_mapping(inode->i_mapping)))
return -EINVAL;
@@ -2047,23 +2162,34 @@ repeat:
return 0;
}
- if (shmem_is_huge(inode, index, false, fault_mm,
- vma ? vma->vm_flags : 0)) {
+ huge = shmem_is_huge(inode, index, false, fault_mm,
+ vma ? vma->vm_flags : 0);
+ /* Find hugepage orders that are allowed for anonymous shmem. */
+ if (vma && vma_is_anon_shmem(vma))
+ orders = shmem_allowable_huge_orders(inode, vma, index, huge);
+ else if (huge)
+ orders = BIT(HPAGE_PMD_ORDER);
+
+ if (orders > 0) {
gfp_t huge_gfp;
huge_gfp = vma_thp_gfp_mask(vma);
huge_gfp = limit_gfp_mask(huge_gfp, gfp);
- folio = shmem_alloc_and_add_folio(huge_gfp,
- inode, index, fault_mm, true);
+ folio = shmem_alloc_and_add_folio(vmf, huge_gfp,
+ inode, index, fault_mm, orders);
if (!IS_ERR(folio)) {
- count_vm_event(THP_FILE_ALLOC);
+ if (folio_test_pmd_mappable(folio))
+ count_vm_event(THP_FILE_ALLOC);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_ALLOC);
+#endif
goto alloced;
}
if (PTR_ERR(folio) == -EEXIST)
goto repeat;
}
- folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm, false);
+ folio = shmem_alloc_and_add_folio(vmf, gfp, inode, index, fault_mm, 0);
if (IS_ERR(folio)) {
error = PTR_ERR(folio);
if (error == -EEXIST)
@@ -2074,7 +2200,7 @@ repeat:
alloced:
alloced = true;
- if (folio_test_pmd_mappable(folio) &&
+ if (folio_test_large(folio) &&
DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE) <
folio_next_index(folio) - 1) {
struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
@@ -2283,6 +2409,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
unsigned long inflated_len;
unsigned long inflated_addr;
unsigned long inflated_offset;
+ unsigned long hpage_size;
if (len > TASK_SIZE)
return -ENOMEM;
@@ -2301,8 +2428,6 @@ unsigned long shmem_get_unmapped_area(struct file *file,
if (shmem_huge == SHMEM_HUGE_DENY)
return addr;
- if (len < HPAGE_PMD_SIZE)
- return addr;
if (flags & MAP_FIXED)
return addr;
/*
@@ -2314,8 +2439,11 @@ unsigned long shmem_get_unmapped_area(struct file *file,
if (uaddr == addr)
return addr;
+ hpage_size = HPAGE_PMD_SIZE;
if (shmem_huge != SHMEM_HUGE_FORCE) {
struct super_block *sb;
+ unsigned long __maybe_unused hpage_orders;
+ int order = 0;
if (file) {
VM_BUG_ON(file->f_op != &shmem_file_operations);
@@ -2328,18 +2456,38 @@ unsigned long shmem_get_unmapped_area(struct file *file,
if (IS_ERR(shm_mnt))
return addr;
sb = shm_mnt->mnt_sb;
+
+ /*
+ * Find the highest mTHP order used for anonymous shmem to
+ * provide a suitable alignment address.
+ */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ hpage_orders = READ_ONCE(huge_shmem_orders_always);
+ hpage_orders |= READ_ONCE(huge_shmem_orders_within_size);
+ hpage_orders |= READ_ONCE(huge_shmem_orders_madvise);
+ if (SHMEM_SB(sb)->huge != SHMEM_HUGE_NEVER)
+ hpage_orders |= READ_ONCE(huge_shmem_orders_inherit);
+
+ if (hpage_orders > 0) {
+ order = highest_order(hpage_orders);
+ hpage_size = PAGE_SIZE << order;
+ }
+#endif
}
- if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER)
+ if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER && !order)
return addr;
}
- offset = (pgoff << PAGE_SHIFT) & (HPAGE_PMD_SIZE-1);
- if (offset && offset + len < 2 * HPAGE_PMD_SIZE)
+ if (len < hpage_size)
return addr;
- if ((addr & (HPAGE_PMD_SIZE-1)) == offset)
+
+ offset = (pgoff << PAGE_SHIFT) & (hpage_size - 1);
+ if (offset && offset + len < 2 * hpage_size)
+ return addr;
+ if ((addr & (hpage_size - 1)) == offset)
return addr;
- inflated_len = len + HPAGE_PMD_SIZE - PAGE_SIZE;
+ inflated_len = len + hpage_size - PAGE_SIZE;
if (inflated_len > TASK_SIZE)
return addr;
if (inflated_len < len)
@@ -2352,10 +2500,10 @@ unsigned long shmem_get_unmapped_area(struct file *file,
if (inflated_addr & ~PAGE_MASK)
return addr;
- inflated_offset = inflated_addr & (HPAGE_PMD_SIZE-1);
+ inflated_offset = inflated_addr & (hpage_size - 1);
inflated_addr += offset - inflated_offset;
if (inflated_offset > offset)
- inflated_addr += HPAGE_PMD_SIZE;
+ inflated_addr += hpage_size;
if (inflated_addr > TASK_SIZE - len)
return addr;
@@ -2644,7 +2792,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
if (!*foliop) {
ret = -ENOMEM;
- folio = shmem_alloc_folio(gfp, info, pgoff);
+ folio = shmem_alloc_folio(gfp, 0, info, pgoff);
if (!folio)
goto out_unacct_blocks;
@@ -4695,6 +4843,12 @@ void __init shmem_init(void)
SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
else
shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
+
+ /*
+ * Default to setting PMD-sized THP to inherit the global setting and
+ * disable all other multi-size THPs.
+ */
+ huge_shmem_orders_inherit = BIT(HPAGE_PMD_ORDER);
#endif
return;
@@ -4754,6 +4908,11 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY)
return -EINVAL;
+ /* Do not override huge allocation policy with non-PMD sized mTHP */
+ if (huge == SHMEM_HUGE_FORCE &&
+ huge_shmem_orders_inherit != BIT(HPAGE_PMD_ORDER))
+ return -EINVAL;
+
shmem_huge = huge;
if (shmem_huge > SHMEM_HUGE_DENY)
SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
@@ -4761,6 +4920,84 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
}
struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
+static DEFINE_SPINLOCK(huge_shmem_orders_lock);
+
+static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ int order = to_thpsize(kobj)->order;
+ const char *output;
+
+ if (test_bit(order, &huge_shmem_orders_always))
+ output = "[always] inherit within_size advise never";
+ else if (test_bit(order, &huge_shmem_orders_inherit))
+ output = "always [inherit] within_size advise never";
+ else if (test_bit(order, &huge_shmem_orders_within_size))
+ output = "always inherit [within_size] advise never";
+ else if (test_bit(order, &huge_shmem_orders_madvise))
+ output = "always inherit within_size [advise] never";
+ else
+ output = "always inherit within_size advise [never]";
+
+ return sysfs_emit(buf, "%s\n", output);
+}
+
+static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int order = to_thpsize(kobj)->order;
+ ssize_t ret = count;
+
+ if (sysfs_streq(buf, "always")) {
+ spin_lock(&huge_shmem_orders_lock);
+ clear_bit(order, &huge_shmem_orders_inherit);
+ clear_bit(order, &huge_shmem_orders_madvise);
+ clear_bit(order, &huge_shmem_orders_within_size);
+ set_bit(order, &huge_shmem_orders_always);
+ spin_unlock(&huge_shmem_orders_lock);
+ } else if (sysfs_streq(buf, "inherit")) {
+ /* Do not override huge allocation policy with non-PMD sized mTHP */
+ if (shmem_huge == SHMEM_HUGE_FORCE &&
+ order != HPAGE_PMD_ORDER)
+ return -EINVAL;
+
+ spin_lock(&huge_shmem_orders_lock);
+ clear_bit(order, &huge_shmem_orders_always);
+ clear_bit(order, &huge_shmem_orders_madvise);
+ clear_bit(order, &huge_shmem_orders_within_size);
+ set_bit(order, &huge_shmem_orders_inherit);
+ spin_unlock(&huge_shmem_orders_lock);
+ } else if (sysfs_streq(buf, "within_size")) {
+ spin_lock(&huge_shmem_orders_lock);
+ clear_bit(order, &huge_shmem_orders_always);
+ clear_bit(order, &huge_shmem_orders_inherit);
+ clear_bit(order, &huge_shmem_orders_madvise);
+ set_bit(order, &huge_shmem_orders_within_size);
+ spin_unlock(&huge_shmem_orders_lock);
+ } else if (sysfs_streq(buf, "advise")) {
+ spin_lock(&huge_shmem_orders_lock);
+ clear_bit(order, &huge_shmem_orders_always);
+ clear_bit(order, &huge_shmem_orders_inherit);
+ clear_bit(order, &huge_shmem_orders_within_size);
+ set_bit(order, &huge_shmem_orders_madvise);
+ spin_unlock(&huge_shmem_orders_lock);
+ } else if (sysfs_streq(buf, "never")) {
+ spin_lock(&huge_shmem_orders_lock);
+ clear_bit(order, &huge_shmem_orders_always);
+ clear_bit(order, &huge_shmem_orders_inherit);
+ clear_bit(order, &huge_shmem_orders_within_size);
+ clear_bit(order, &huge_shmem_orders_madvise);
+ spin_unlock(&huge_shmem_orders_lock);
+ } else {
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+struct kobj_attribute thpsize_shmem_enabled_attr =
+ __ATTR(shmem_enabled, 0644, thpsize_shmem_enabled_show, thpsize_shmem_enabled_store);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_SYSFS */
#else /* !CONFIG_SHMEM */
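How the per-order sysfs knobs above combine can be illustrated with a small userspace-style sketch (BIT(), the PMD_ORDER value and highest_order() are all reimplemented for this example and are not the kernel definitions). It builds the same kind of order mask that shmem_allowable_huge_orders() returns and walks it largest-order first, ignoring the i_size/within_size and SHMEM_HUGE_* gating for brevity:

#include <stdio.h>

#define BIT(n)		(1UL << (n))
#define PMD_ORDER	9	/* assumption: 4K base pages, 2M PMD */

static int highest_order(unsigned long orders)
{
	return orders ? 63 - __builtin_clzl(orders) : -1;
}

int main(void)
{
	/* e.g. sysfs: hugepages-2048kB set to inherit (with mount huge=always),
	 * hugepages-64kB set to always */
	unsigned long always = BIT(4), inherit = BIT(PMD_ORDER);
	unsigned long within_size = 0, madvise = 0;
	unsigned long orders = BIT(PMD_ORDER + 1) - 1;	/* all orders <= PMD */
	unsigned long mask = always | within_size | madvise | inherit;

	mask &= orders;
	for (int order = highest_order(mask); mask;
	     mask &= ~BIT(order), order = highest_order(mask))
		printf("try order %d (%lu pages)\n", order, 1UL << order);
	return 0;
}

With these sample settings the allocator would try order 9 (512 pages) first and fall back to order 4 (16 pages), mirroring the highest_order()/next_order() loop in shmem_alloc_and_add_folio().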
diff --git a/mm/slab.h b/mm/slab.h
index ece18ef5dd04..dcdb56b8e7f5 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -577,7 +577,7 @@ static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B;
}
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
gfp_t flags, size_t size, void **p);
void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 70943a4c1c4b..40b582a014b8 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -817,7 +817,7 @@ EXPORT_SYMBOL(kmalloc_size_roundup);
#define KMALLOC_DMA_NAME(sz)
#endif
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
#define KMALLOC_CGROUP_NAME(sz) .name[KMALLOC_CGROUP] = "kmalloc-cg-" #sz,
#else
#define KMALLOC_CGROUP_NAME(sz)
@@ -959,7 +959,7 @@ new_kmalloc_cache(int idx, enum kmalloc_cache_type type)
if ((KMALLOC_RECLAIM != KMALLOC_NORMAL) && (type == KMALLOC_RECLAIM)) {
flags |= SLAB_RECLAIM_ACCOUNT;
- } else if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_CGROUP)) {
+ } else if (IS_ENABLED(CONFIG_MEMCG) && (type == KMALLOC_CGROUP)) {
if (mem_cgroup_kmem_disabled()) {
kmalloc_caches[type][idx] = kmalloc_caches[KMALLOC_NORMAL][idx];
return;
@@ -975,10 +975,10 @@ new_kmalloc_cache(int idx, enum kmalloc_cache_type type)
#endif
/*
- * If CONFIG_MEMCG_KMEM is enabled, disable cache merging for
+ * If CONFIG_MEMCG is enabled, disable cache merging for
* KMALLOC_NORMAL caches.
*/
- if (IS_ENABLED(CONFIG_MEMCG_KMEM) && (type == KMALLOC_NORMAL))
+ if (IS_ENABLED(CONFIG_MEMCG) && (type == KMALLOC_NORMAL))
flags |= SLAB_NO_MERGE;
if (minalign > ARCH_KMALLOC_MINALIGN) {
@@ -1005,7 +1005,7 @@ void __init create_kmalloc_caches(void)
enum kmalloc_cache_type type;
/*
- * Including KMALLOC_CGROUP if CONFIG_MEMCG_KMEM defined
+ * Including KMALLOC_CGROUP if CONFIG_MEMCG defined
*/
for (type = KMALLOC_NORMAL; type < NR_KMALLOC_TYPES; type++) {
/* Caches that are NOT of the two-to-the-power-of size. */
diff --git a/mm/slub.c b/mm/slub.c
index 829a1f08e8a2..3520acaf9afa 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -845,10 +845,12 @@ static int disable_higher_order_debug;
static inline void metadata_access_enable(void)
{
kasan_disable_current();
+ kmsan_disable_current();
}
static inline void metadata_access_disable(void)
{
+ kmsan_enable_current();
kasan_enable_current();
}
@@ -1153,7 +1155,13 @@ static void init_object(struct kmem_cache *s, void *object, u8 val)
unsigned int poison_size = s->object_size;
if (s->flags & SLAB_RED_ZONE) {
- memset(p - s->red_left_pad, val, s->red_left_pad);
+ /*
+ * Here and below, avoid overwriting the KMSAN shadow. Keeping
+ * the shadow makes it possible to distinguish uninit-value
+ * from use-after-free.
+ */
+ memset_no_sanitize_memory(p - s->red_left_pad, val,
+ s->red_left_pad);
if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
/*
@@ -1166,12 +1174,13 @@ static void init_object(struct kmem_cache *s, void *object, u8 val)
}
if (s->flags & __OBJECT_POISON) {
- memset(p, POISON_FREE, poison_size - 1);
- p[poison_size - 1] = POISON_END;
+ memset_no_sanitize_memory(p, POISON_FREE, poison_size - 1);
+ memset_no_sanitize_memory(p + poison_size - 1, POISON_END, 1);
}
if (s->flags & SLAB_RED_ZONE)
- memset(p + poison_size, val, s->inuse - poison_size);
+ memset_no_sanitize_memory(p + poison_size, val,
+ s->inuse - poison_size);
}
static void restore_bytes(struct kmem_cache *s, char *message, u8 data,
@@ -1181,9 +1190,16 @@ static void restore_bytes(struct kmem_cache *s, char *message, u8 data,
memset(from, data, to - from);
}
-static int check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
- u8 *object, char *what,
- u8 *start, unsigned int value, unsigned int bytes)
+#ifdef CONFIG_KMSAN
+#define pad_check_attributes noinline __no_kmsan_checks
+#else
+#define pad_check_attributes
+#endif
+
+static pad_check_attributes int
+check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
+ u8 *object, char *what,
+ u8 *start, unsigned int value, unsigned int bytes)
{
u8 *fault;
u8 *end;
@@ -1273,7 +1289,8 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
}
/* Check the pad bytes at the end of a slab page */
-static void slab_pad_check(struct kmem_cache *s, struct slab *slab)
+static pad_check_attributes void
+slab_pad_check(struct kmem_cache *s, struct slab *slab)
{
u8 *start;
u8 *fault;
@@ -2021,7 +2038,7 @@ static inline bool need_slab_obj_ext(void)
return true;
/*
- * CONFIG_MEMCG_KMEM creates vector of obj_cgroup objects conditionally
+ * CONFIG_MEMCG creates vector of obj_cgroup objects conditionally
* inside memcg_slab_post_alloc_hook. No other users for now.
*/
return false;
@@ -2126,7 +2143,7 @@ alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
#endif /* CONFIG_MEM_ALLOC_PROFILING */
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
static void memcg_alloc_abort_single(struct kmem_cache *s, void *object);
@@ -2168,7 +2185,7 @@ void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
__memcg_slab_free_hook(s, slab, p, objects, obj_exts);
}
-#else /* CONFIG_MEMCG_KMEM */
+#else /* CONFIG_MEMCG */
static inline bool memcg_slab_post_alloc_hook(struct kmem_cache *s,
struct list_lru *lru,
gfp_t flags, size_t size,
@@ -2181,7 +2198,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
void **p, int objects)
{
}
-#endif /* CONFIG_MEMCG_KMEM */
+#endif /* CONFIG_MEMCG */
/*
* Hooks for other subsystems that check memory allocations. In a typical
@@ -3914,14 +3931,6 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
0, sizeof(void *));
}
-noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
-{
- if (__should_failslab(s, gfpflags))
- return -ENOMEM;
- return 0;
-}
-ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
-
static __fastpath_inline
struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
{
@@ -4465,7 +4474,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
do_slab_free(s, slab, object, object, 1, addr);
}
-#ifdef CONFIG_MEMCG_KMEM
+#ifdef CONFIG_MEMCG
/* Do not inline the rare memcg charging failed path into the allocation path */
static noinline
void memcg_alloc_abort_single(struct kmem_cache *s, void *object)
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index a2cbe44c48e1..1dda6c53370b 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -469,5 +469,13 @@ struct page * __meminit __populate_section_memmap(unsigned long pfn,
if (r < 0)
return NULL;
+ if (system_state == SYSTEM_BOOTING) {
+ mod_node_early_perpage_metadata(nid, DIV_ROUND_UP(end - start,
+ PAGE_SIZE));
+ } else {
+ mod_node_page_state(NODE_DATA(nid), NR_MEMMAP,
+ DIV_ROUND_UP(end - start, PAGE_SIZE));
+ }
+
return pfn_to_page(pfn);
}
diff --git a/mm/sparse.c b/mm/sparse.c
index de40b2c73406..e4b830091d13 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -14,7 +14,7 @@
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/bootmem_info.h>
-
+#include <linux/vmstat.h>
#include "internal.h"
#include <asm/dma.h>
@@ -192,13 +192,10 @@ static void subsection_mask_set(unsigned long *map, unsigned long pfn,
void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
{
- int end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
- unsigned long nr, start_sec = pfn_to_section_nr(pfn);
-
- if (!nr_pages)
- return;
+ int end_sec_nr = pfn_to_section_nr(pfn + nr_pages - 1);
+ unsigned long nr, start_sec_nr = pfn_to_section_nr(pfn);
- for (nr = start_sec; nr <= end_sec; nr++) {
+ for (nr = start_sec_nr; nr <= end_sec_nr; nr++) {
struct mem_section *ms;
unsigned long pfns;
@@ -229,17 +226,17 @@ static void __init memory_present(int nid, unsigned long start, unsigned long en
start &= PAGE_SECTION_MASK;
mminit_validate_memmodel_limits(&start, &end);
for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
- unsigned long section = pfn_to_section_nr(pfn);
+ unsigned long section_nr = pfn_to_section_nr(pfn);
struct mem_section *ms;
- sparse_index_init(section, nid);
- set_section_nid(section, nid);
+ sparse_index_init(section_nr, nid);
+ set_section_nid(section_nr, nid);
- ms = __nr_to_section(section);
+ ms = __nr_to_section(section_nr);
if (!ms->section_mem_map) {
ms->section_mem_map = sparse_encode_early_nid(nid) |
SECTION_IS_ONLINE;
- __section_mark_present(ms, section);
+ __section_mark_present(ms, section_nr);
}
}
}
@@ -351,7 +348,7 @@ sparse_early_usemaps_alloc_pgdat_section(struct pglist_data *pgdat,
again:
usage = memblock_alloc_try_nid(size, SMP_CACHE_BYTES, goal, limit, nid);
if (!usage && limit) {
- limit = 0;
+ limit = MEMBLOCK_ALLOC_ACCESSIBLE;
goto again;
}
return usage;
@@ -465,6 +462,9 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
*/
sparsemap_buf = memmap_alloc(size, section_map_size(), addr, nid, true);
sparsemap_buf_end = sparsemap_buf + size;
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+ mod_node_early_perpage_metadata(nid, DIV_ROUND_UP(size, PAGE_SIZE));
+#endif
}
static void __init sparse_buffer_fini(void)
@@ -643,6 +643,8 @@ static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
unsigned long start = (unsigned long) pfn_to_page(pfn);
unsigned long end = start + nr_pages * sizeof(struct page);
+ mod_node_page_state(page_pgdat(pfn_to_page(pfn)), NR_MEMMAP,
+ -1L * (DIV_ROUND_UP(end - start, PAGE_SIZE)));
vmemmap_free(start, end, altmap);
}
static void free_map_bootmem(struct page *memmap)
diff --git a/mm/swap.c b/mm/swap.c
index 67786cb77130..9caf6b017cf0 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -123,8 +123,7 @@ void __folio_put(struct folio *folio)
}
page_cache_release(folio);
- if (folio_test_large(folio) && folio_test_large_rmappable(folio))
- folio_undo_large_rmappable(folio);
+ folio_undo_large_rmappable(folio);
mem_cgroup_uncharge(folio);
free_unref_page(&folio->page, folio_order(folio));
}
@@ -212,10 +211,6 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
for (i = 0; i < folio_batch_count(fbatch); i++) {
struct folio *folio = fbatch->folios[i];
- /* block memcg migration while the folio moves between lru */
- if (move_fn != lru_add_fn && !folio_test_clear_lru(folio))
- continue;
-
folio_lruvec_relock_irqsave(folio, &lruvec, &flags);
move_fn(lruvec, folio);
@@ -256,11 +251,16 @@ static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
void folio_rotate_reclaimable(struct folio *folio)
{
if (!folio_test_locked(folio) && !folio_test_dirty(folio) &&
- !folio_test_unevictable(folio) && folio_test_lru(folio)) {
+ !folio_test_unevictable(folio)) {
struct folio_batch *fbatch;
unsigned long flags;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock_irqsave(&lru_rotate.lock, flags);
fbatch = this_cpu_ptr(&lru_rotate.fbatch);
folio_batch_add_and_move(fbatch, folio, lru_move_tail_fn);
@@ -353,11 +353,15 @@ static void folio_activate_drain(int cpu)
void folio_activate(struct folio *folio)
{
- if (folio_test_lru(folio) && !folio_test_active(folio) &&
- !folio_test_unevictable(folio)) {
+ if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
struct folio_batch *fbatch;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock(&cpu_fbatches.lock);
fbatch = this_cpu_ptr(&cpu_fbatches.activate);
folio_batch_add_and_move(fbatch, folio, folio_activate_fn);
@@ -701,6 +705,11 @@ void deactivate_file_folio(struct folio *folio)
return;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock(&cpu_fbatches.lock);
fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate_file);
folio_batch_add_and_move(fbatch, folio, lru_deactivate_file_fn);
@@ -717,11 +726,16 @@ void deactivate_file_folio(struct folio *folio)
*/
void folio_deactivate(struct folio *folio)
{
- if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
- (folio_test_active(folio) || lru_gen_enabled())) {
+ if (!folio_test_unevictable(folio) && (folio_test_active(folio) ||
+ lru_gen_enabled())) {
struct folio_batch *fbatch;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock(&cpu_fbatches.lock);
fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate);
folio_batch_add_and_move(fbatch, folio, lru_deactivate_fn);
@@ -738,12 +752,16 @@ void folio_deactivate(struct folio *folio)
*/
void folio_mark_lazyfree(struct folio *folio)
{
- if (folio_test_lru(folio) && folio_test_anon(folio) &&
- folio_test_swapbacked(folio) && !folio_test_swapcache(folio) &&
- !folio_test_unevictable(folio)) {
+ if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
+ !folio_test_swapcache(folio) && !folio_test_unevictable(folio)) {
struct folio_batch *fbatch;
folio_get(folio);
+ if (!folio_test_clear_lru(folio)) {
+ folio_put(folio);
+ return;
+ }
+
local_lock(&cpu_fbatches.lock);
fbatch = this_cpu_ptr(&cpu_fbatches.lru_lazyfree);
folio_batch_add_and_move(fbatch, folio, lru_lazyfree_fn);
@@ -1002,10 +1020,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
free_huge_folio(folio);
continue;
}
- if (folio_test_large(folio) &&
- folio_test_large_rmappable(folio))
- folio_undo_large_rmappable(folio);
-
+ folio_undo_large_rmappable(folio);
__page_cache_release(folio, &lruvec, &flags);
if (j != i)
diff --git a/mm/swap.h b/mm/swap.h
index fc2f6ade7f80..baa1fa946b34 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -5,13 +5,13 @@
struct mempolicy;
#ifdef CONFIG_SWAP
+#include <linux/swapops.h> /* for swp_offset */
#include <linux/blk_types.h> /* for bio_end_io_t */
/* linux/mm/page_io.c */
int sio_pool_init(void);
struct swap_iocb;
-void swap_read_folio(struct folio *folio, bool do_poll,
- struct swap_iocb **plug);
+void swap_read_folio(struct folio *folio, struct swap_iocb **plug);
void __swap_read_unplug(struct swap_iocb *plug);
static inline void swap_read_unplug(struct swap_iocb *plug)
{
@@ -26,11 +26,29 @@ void __swap_writepage(struct folio *folio, struct writeback_control *wbc);
/* One swap address space for each 64M swap space */
#define SWAP_ADDRESS_SPACE_SHIFT 14
#define SWAP_ADDRESS_SPACE_PAGES (1 << SWAP_ADDRESS_SPACE_SHIFT)
+#define SWAP_ADDRESS_SPACE_MASK (SWAP_ADDRESS_SPACE_PAGES - 1)
extern struct address_space *swapper_spaces[];
#define swap_address_space(entry) \
(&swapper_spaces[swp_type(entry)][swp_offset(entry) \
>> SWAP_ADDRESS_SPACE_SHIFT])
+/*
+ * Return the swap device position of the swap entry.
+ */
+static inline loff_t swap_dev_pos(swp_entry_t entry)
+{
+ return ((loff_t)swp_offset(entry)) << PAGE_SHIFT;
+}
+
+/*
+ * Return the swap cache index of the swap entry.
+ */
+static inline pgoff_t swap_cache_index(swp_entry_t entry)
+{
+ BUILD_BUG_ON((SWP_OFFSET_MASK | SWAP_ADDRESS_SPACE_MASK) != SWP_OFFSET_MASK);
+ return swp_offset(entry) & SWAP_ADDRESS_SPACE_MASK;
+}
+
void show_swap_cache_info(void);
bool add_to_swap(struct folio *folio);
void *get_shadow_from_swap_cache(swp_entry_t entry);
@@ -64,8 +82,7 @@ static inline unsigned int folio_swap_flags(struct folio *folio)
}
#else /* CONFIG_SWAP */
struct swap_iocb;
-static inline void swap_read_folio(struct folio *folio, bool do_poll,
- struct swap_iocb **plug)
+static inline void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
{
}
static inline void swap_write_unplug(struct swap_iocb *sio)
@@ -77,6 +94,11 @@ static inline struct address_space *swap_address_space(swp_entry_t entry)
return NULL;
}
+static inline pgoff_t swap_cache_index(swp_entry_t entry)
+{
+ return 0;
+}
+
static inline void show_swap_cache_info(void)
{
}
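
A side note on the two helpers added above: with the usual 4 KiB page size (an assumption of this sketch, PAGE_SHIFT = 12), SWAP_ADDRESS_SPACE_SHIFT = 14 means each address space covers 16384 pages, i.e. the 64 MiB mentioned in the comment, and the new swap_cache_index() masks the swap offset down to its position inside that space instead of using the raw offset as the xarray index. A minimal standalone C illustration of the arithmetic (not kernel code):

/*
 * Standalone illustration of the new swap cache indexing (not kernel code).
 * Constants mirror the kernel values, assuming 4 KiB pages: one address
 * space covers 1 << 14 pages = 64 MiB of swap.
 */
#include <stdio.h>

#define PAGE_SHIFT		 12
#define SWAP_ADDRESS_SPACE_SHIFT 14
#define SWAP_ADDRESS_SPACE_PAGES (1UL << SWAP_ADDRESS_SPACE_SHIFT)
#define SWAP_ADDRESS_SPACE_MASK  (SWAP_ADDRESS_SPACE_PAGES - 1)

int main(void)
{
	unsigned long offset = 100000;	/* page offset within the swap device */

	/* which per-64M address space the entry hashes to */
	unsigned long space = offset >> SWAP_ADDRESS_SPACE_SHIFT;
	/* index inside that space: what swap_cache_index() now returns */
	unsigned long index = offset & SWAP_ADDRESS_SPACE_MASK;
	/* byte position on the swap device: what swap_dev_pos() returns */
	unsigned long long dev_pos = (unsigned long long)offset << PAGE_SHIFT;

	printf("offset %lu -> space %lu, index %lu, dev_pos %llu\n",
	       offset, space, index, dev_pos);
	return 0;
}

The BUILD_BUG_ON in the hunk checks that SWAP_ADDRESS_SPACE_MASK is a subset of SWP_OFFSET_MASK, so the masking can never touch bits outside the offset field.
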
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 642c30d8376c..a1726e49a5eb 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -28,7 +28,7 @@
/*
* swapper_space is a fiction, retained to simplify the path through
- * vmscan's shrink_page_list.
+ * vmscan's shrink_folio_list.
*/
static const struct address_space_operations swap_aops = {
.writepage = swap_writepage,
@@ -42,6 +42,8 @@ struct address_space *swapper_spaces[MAX_SWAPFILES] __read_mostly;
static unsigned int nr_swapper_spaces[MAX_SWAPFILES] __read_mostly;
static bool enable_vma_readahead __read_mostly = true;
+#define SWAP_RA_ORDER_CEILING 5
+
#define SWAP_RA_WIN_SHIFT (PAGE_SHIFT / 2)
#define SWAP_RA_HITS_MASK ((1UL << SWAP_RA_WIN_SHIFT) - 1)
#define SWAP_RA_HITS_MAX SWAP_RA_HITS_MASK
@@ -72,7 +74,7 @@ void show_swap_cache_info(void)
void *get_shadow_from_swap_cache(swp_entry_t entry)
{
struct address_space *address_space = swap_address_space(entry);
- pgoff_t idx = swp_offset(entry);
+ pgoff_t idx = swap_cache_index(entry);
void *shadow;
shadow = xa_load(&address_space->i_pages, idx);
@@ -89,7 +91,7 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
gfp_t gfp, void **shadowp)
{
struct address_space *address_space = swap_address_space(entry);
- pgoff_t idx = swp_offset(entry);
+ pgoff_t idx = swap_cache_index(entry);
XA_STATE_ORDER(xas, &address_space->i_pages, idx, folio_order(folio));
unsigned long i, nr = folio_nr_pages(folio);
void *old;
@@ -144,7 +146,7 @@ void __delete_from_swap_cache(struct folio *folio,
struct address_space *address_space = swap_address_space(entry);
int i;
long nr = folio_nr_pages(folio);
- pgoff_t idx = swp_offset(entry);
+ pgoff_t idx = swap_cache_index(entry);
XA_STATE(xas, &address_space->i_pages, idx);
xas_set_update(&xas, workingset_update_node);
@@ -253,13 +255,14 @@ void clear_shadow_from_swap_cache(int type, unsigned long begin,
for (;;) {
swp_entry_t entry = swp_entry(type, curr);
+ unsigned long index = curr & SWAP_ADDRESS_SPACE_MASK;
struct address_space *address_space = swap_address_space(entry);
- XA_STATE(xas, &address_space->i_pages, curr);
+ XA_STATE(xas, &address_space->i_pages, index);
xas_set_update(&xas, workingset_update_node);
xa_lock_irq(&address_space->i_pages);
- xas_for_each(&xas, old, end) {
+ xas_for_each(&xas, old, min(index + (end - curr), SWAP_ADDRESS_SPACE_PAGES)) {
if (!xa_is_value(old))
continue;
xas_store(&xas, NULL);
@@ -350,7 +353,7 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
{
struct folio *folio;
- folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
+ folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
if (!IS_ERR(folio)) {
bool vma_ra = swap_use_vma_readahead();
bool readahead;
@@ -420,7 +423,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
si = get_swap_device(swp);
if (!si)
return ERR_PTR(-ENOENT);
- index = swp_offset(swp);
+ index = swap_cache_index(swp);
folio = filemap_get_folio(swap_address_space(swp), index);
put_swap_device(si);
return folio;
@@ -447,7 +450,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* that would confuse statistics.
*/
folio = filemap_get_folio(swap_address_space(entry),
- swp_offset(entry));
+ swap_cache_index(entry));
if (!IS_ERR(folio))
goto got_folio;
@@ -467,8 +470,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* before marking swap_map SWAP_HAS_CACHE, when -EEXIST will
* cause any racers to loop around until we add it to cache.
*/
- folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0,
- mpol, ilx, numa_node_id());
+ folio = folio_alloc_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
if (!folio)
goto fail_put_swap;
@@ -564,7 +566,7 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
mpol_cond_put(mpol);
if (page_allocated)
- swap_read_folio(folio, false, plug);
+ swap_read_folio(folio, plug);
return folio;
}
@@ -681,7 +683,7 @@ struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
if (!folio)
continue;
if (page_allocated) {
- swap_read_folio(folio, false, &splug);
+ swap_read_folio(folio, &splug);
if (offset != entry_offset) {
folio_set_readahead(folio);
count_vm_event(SWAP_RA);
@@ -698,7 +700,7 @@ skip:
&page_allocated, false);
if (unlikely(page_allocated)) {
zswap_folio_swapin(folio);
- swap_read_folio(folio, false, NULL);
+ swap_read_folio(folio, NULL);
}
return folio;
}
@@ -738,62 +740,42 @@ void exit_swap_address_space(unsigned int type)
swapper_spaces[type] = NULL;
}
-#define SWAP_RA_ORDER_CEILING 5
-
-struct vma_swap_readahead {
- unsigned short win;
- unsigned short offset;
- unsigned short nr_pte;
-};
-
-static void swap_ra_info(struct vm_fault *vmf,
- struct vma_swap_readahead *ra_info)
+static int swap_vma_ra_win(struct vm_fault *vmf, unsigned long *start,
+ unsigned long *end)
{
struct vm_area_struct *vma = vmf->vma;
unsigned long ra_val;
- unsigned long faddr, pfn, fpfn, lpfn, rpfn;
- unsigned long start, end;
+ unsigned long faddr, prev_faddr, left, right;
unsigned int max_win, hits, prev_win, win;
- max_win = 1 << min_t(unsigned int, READ_ONCE(page_cluster),
- SWAP_RA_ORDER_CEILING);
- if (max_win == 1) {
- ra_info->win = 1;
- return;
- }
+ max_win = 1 << min(READ_ONCE(page_cluster), SWAP_RA_ORDER_CEILING);
+ if (max_win == 1)
+ return 1;
faddr = vmf->address;
- fpfn = PFN_DOWN(faddr);
ra_val = GET_SWAP_RA_VAL(vma);
- pfn = PFN_DOWN(SWAP_RA_ADDR(ra_val));
+ prev_faddr = SWAP_RA_ADDR(ra_val);
prev_win = SWAP_RA_WIN(ra_val);
hits = SWAP_RA_HITS(ra_val);
- ra_info->win = win = __swapin_nr_pages(pfn, fpfn, hits,
- max_win, prev_win);
- atomic_long_set(&vma->swap_readahead_info,
- SWAP_RA_VAL(faddr, win, 0));
+ win = __swapin_nr_pages(PFN_DOWN(prev_faddr), PFN_DOWN(faddr), hits,
+ max_win, prev_win);
+ atomic_long_set(&vma->swap_readahead_info, SWAP_RA_VAL(faddr, win, 0));
if (win == 1)
- return;
-
- if (fpfn == pfn + 1) {
- lpfn = fpfn;
- rpfn = fpfn + win;
- } else if (pfn == fpfn + 1) {
- lpfn = fpfn - win + 1;
- rpfn = fpfn + 1;
- } else {
- unsigned int left = (win - 1) / 2;
-
- lpfn = fpfn - left;
- rpfn = fpfn + win - left;
- }
- start = max3(lpfn, PFN_DOWN(vma->vm_start),
- PFN_DOWN(faddr & PMD_MASK));
- end = min3(rpfn, PFN_DOWN(vma->vm_end),
- PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
+ return 1;
- ra_info->nr_pte = end - start;
- ra_info->offset = fpfn - start;
+ if (faddr == prev_faddr + PAGE_SIZE)
+ left = faddr;
+ else if (prev_faddr == faddr + PAGE_SIZE)
+ left = faddr - (win << PAGE_SHIFT) + PAGE_SIZE;
+ else
+ left = faddr - (((win - 1) / 2) << PAGE_SHIFT);
+ right = left + (win << PAGE_SHIFT);
+ if ((long)left < 0)
+ left = 0;
+ *start = max3(left, vma->vm_start, faddr & PMD_MASK);
+ *end = min3(right, vma->vm_end, (faddr & PMD_MASK) + PMD_SIZE);
+
+ return win;
}
/**
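
The rewritten swap_vma_ra_win() above places the readahead window directly in terms of addresses rather than pfn/offset pairs. Purely as an illustration (not part of the patch), here is a standalone sketch of that placement logic, with PAGE_SIZE and PMD_SIZE assumed at common x86-64 values (4 KiB and 2 MiB) and local max3/min3 helpers; the window sizing itself (__swapin_nr_pages) is not reproduced:

/* Userspace sketch of the window placement in swap_vma_ra_win() (illustration only). */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PMD_SIZE	(1UL << 21)		/* assumed: 2 MiB PMDs */
#define PMD_MASK	(~(PMD_SIZE - 1))

static unsigned long max3(unsigned long a, unsigned long b, unsigned long c)
{
	unsigned long m = a > b ? a : b;
	return m > c ? m : c;
}

static unsigned long min3(unsigned long a, unsigned long b, unsigned long c)
{
	unsigned long m = a < b ? a : b;
	return m < c ? m : c;
}

static void ra_window(unsigned long faddr, unsigned long prev_faddr,
		      unsigned long win, unsigned long vm_start,
		      unsigned long vm_end, unsigned long *start,
		      unsigned long *end)
{
	unsigned long left, right;

	if (faddr == prev_faddr + PAGE_SIZE)		/* sequential forward */
		left = faddr;
	else if (prev_faddr == faddr + PAGE_SIZE)	/* sequential backward */
		left = faddr - (win << PAGE_SHIFT) + PAGE_SIZE;
	else						/* random: center on faddr */
		left = faddr - (((win - 1) / 2) << PAGE_SHIFT);
	right = left + (win << PAGE_SHIFT);
	if ((long)left < 0)
		left = 0;
	*start = max3(left, vm_start, faddr & PMD_MASK);
	*end = min3(right, vm_end, (faddr & PMD_MASK) + PMD_SIZE);
}

int main(void)
{
	unsigned long start, end;

	/* fault one page after the previous fault, window of 8 pages */
	ra_window(0x7f0000010000UL, 0x7f000000f000UL, 8,
		  0x7f0000000000UL, 0x7f0000200000UL, &start, &end);
	printf("readahead [%#lx, %#lx), %lu pages\n",
	       start, end, (end - start) >> PAGE_SHIFT);
	return 0;
}

Faulting one page after the previous fault counts as a forward sequential stream, so the window starts at the faulting address and extends forward, then gets clipped to the VMA and to the PMD that holds the fault.
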
@@ -819,24 +801,20 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
struct swap_iocb *splug = NULL;
struct folio *folio;
pte_t *pte = NULL, pentry;
- unsigned long addr;
+ int win;
+ unsigned long start, end, addr;
swp_entry_t entry;
pgoff_t ilx;
- unsigned int i;
bool page_allocated;
- struct vma_swap_readahead ra_info = {
- .win = 1,
- };
- swap_ra_info(vmf, &ra_info);
- if (ra_info.win == 1)
+ win = swap_vma_ra_win(vmf, &start, &end);
+ if (win == 1)
goto skip;
- addr = vmf->address - (ra_info.offset * PAGE_SIZE);
- ilx = targ_ilx - ra_info.offset;
+ ilx = targ_ilx - PFN_DOWN(vmf->address - start);
blk_start_plug(&plug);
- for (i = 0; i < ra_info.nr_pte; i++, ilx++, addr += PAGE_SIZE) {
+ for (addr = start; addr < end; ilx++, addr += PAGE_SIZE) {
if (!pte++) {
pte = pte_offset_map(vmf->pmd, addr);
if (!pte)
@@ -855,8 +833,8 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
if (!folio)
continue;
if (page_allocated) {
- swap_read_folio(folio, false, &splug);
- if (i != ra_info.offset) {
+ swap_read_folio(folio, &splug);
+ if (addr != vmf->address) {
folio_set_readahead(folio);
count_vm_event(SWAP_RA);
}
@@ -874,7 +852,7 @@ skip:
&page_allocated, false);
if (unlikely(page_allocated)) {
zswap_folio_swapin(folio);
- swap_read_folio(folio, false, NULL);
+ swap_read_folio(folio, NULL);
}
return folio;
}
diff --git a/mm/swapfile.c b/mm/swapfile.c
index b3e5e384e330..38bdc439651a 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -142,7 +142,7 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
struct folio *folio;
int ret = 0;
- folio = filemap_get_folio(swap_address_space(entry), offset);
+ folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
if (IS_ERR(folio))
return 0;
/*
@@ -1343,17 +1343,55 @@ static void swap_entry_free(struct swap_info_struct *p, swp_entry_t entry)
swap_range_free(p, offset, 1);
}
+static void cluster_swap_free_nr(struct swap_info_struct *sis,
+ unsigned long offset, int nr_pages)
+{
+ struct swap_cluster_info *ci;
+ DECLARE_BITMAP(to_free, BITS_PER_LONG) = { 0 };
+ int i, nr;
+
+ ci = lock_cluster_or_swap_info(sis, offset);
+ while (nr_pages) {
+ nr = min(BITS_PER_LONG, nr_pages);
+ for (i = 0; i < nr; i++) {
+ if (!__swap_entry_free_locked(sis, offset + i, 1))
+ bitmap_set(to_free, i, 1);
+ }
+ if (!bitmap_empty(to_free, BITS_PER_LONG)) {
+ unlock_cluster_or_swap_info(sis, ci);
+ for_each_set_bit(i, to_free, BITS_PER_LONG)
+ free_swap_slot(swp_entry(sis->type, offset + i));
+ if (nr == nr_pages)
+ return;
+ bitmap_clear(to_free, 0, BITS_PER_LONG);
+ ci = lock_cluster_or_swap_info(sis, offset);
+ }
+ offset += nr;
+ nr_pages -= nr;
+ }
+ unlock_cluster_or_swap_info(sis, ci);
+}
+
/*
* Caller has made sure that the swap device corresponding to entry
* is still around or has not been recycled.
*/
-void swap_free(swp_entry_t entry)
+void swap_free_nr(swp_entry_t entry, int nr_pages)
{
- struct swap_info_struct *p;
+ int nr;
+ struct swap_info_struct *sis;
+ unsigned long offset = swp_offset(entry);
- p = _swap_info_get(entry);
- if (p)
- __swap_entry_free(p, entry);
+ sis = _swap_info_get(entry);
+ if (!sis)
+ return;
+
+ while (nr_pages) {
+ nr = min_t(int, nr_pages, SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
+ cluster_swap_free_nr(sis, offset, nr);
+ offset += nr;
+ nr_pages -= nr;
+ }
}
/*
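
swap_free_nr() above never lets a single cluster_swap_free_nr() call cross a cluster boundary, and within a cluster the entries are handled in bitmap-sized batches so the lock taken by lock_cluster_or_swap_info() can be dropped before calling free_swap_slot(). A standalone sketch of that chunking, assuming the usual SWAPFILE_CLUSTER of 256 and 64-bit longs (both assumptions of this illustration):

/*
 * Illustration of how swap_free_nr() carves a contiguous range of swap
 * entries into chunks: never crossing a cluster boundary, and at most
 * BITS_PER_LONG entries at a time inside cluster_swap_free_nr().
 */
#include <stdio.h>

#define SWAPFILE_CLUSTER	256	/* assumed value */
#define BITS_PER_LONG		64

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned long offset = 1000;	/* first swap offset to free */
	unsigned long nr_pages = 600;	/* e.g. a large folio spanning clusters */

	while (nr_pages) {
		/* outer loop in swap_free_nr(): stay within one cluster */
		unsigned long nr = min_ul(nr_pages,
				SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
		printf("cluster %lu: offsets [%lu, %lu)\n",
		       offset / SWAPFILE_CLUSTER, offset, offset + nr);

		/* inner loop in cluster_swap_free_nr(): bitmap-sized batches */
		for (unsigned long done = 0; done < nr; done += BITS_PER_LONG)
			printf("  batch of %lu entries\n",
			       min_ul(BITS_PER_LONG, nr - done));

		offset += nr;
		nr_pages -= nr;
	}
	return 0;
}

For a 600-page range starting at offset 1000 this prints a 24-entry head chunk, two full 256-entry clusters and a 64-entry tail, each split into batches of at most 64 entries.
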
@@ -1870,10 +1908,20 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio);
if (pte_swp_exclusive(old_pte))
rmap_flags |= RMAP_EXCLUSIVE;
-
- folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
+ /*
+ * We currently only expect small !anon folios, which are either
+ * fully exclusive or fully shared. If we ever get large folios
+ * here, we have to be careful.
+ */
+ if (!folio_test_anon(folio)) {
+ VM_WARN_ON_ONCE(folio_test_large(folio));
+ VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+ folio_add_new_anon_rmap(folio, vma, addr, rmap_flags);
+ } else {
+ folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
+ }
} else { /* ksm created a completely new copy */
- folio_add_new_anon_rmap(folio, vma, addr);
+ folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, vma);
}
new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
@@ -2158,7 +2206,7 @@ retry:
(i = find_next_to_unuse(si, i)) != 0) {
entry = swp_entry(type, i);
- folio = filemap_get_folio(swap_address_space(entry), i);
+ folio = filemap_get_folio(swap_address_space(entry), swap_cache_index(entry));
if (IS_ERR(folio))
continue;
@@ -3449,12 +3497,11 @@ struct address_space *swapcache_mapping(struct folio *folio)
}
EXPORT_SYMBOL_GPL(swapcache_mapping);
-pgoff_t __page_file_index(struct page *page)
+pgoff_t __folio_swap_cache_index(struct folio *folio)
{
- swp_entry_t swap = page_swap_entry(page);
- return swp_offset(swap);
+ return swap_cache_index(folio->swap);
}
-EXPORT_SYMBOL_GPL(__page_file_index);
+EXPORT_SYMBOL_GPL(__folio_swap_cache_index);
/*
* add_swap_count_continuation - called when a swap count is duplicated
diff --git a/mm/truncate.c b/mm/truncate.c
index 581977d2356f..4d61fbdd4b2f 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -39,12 +39,25 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
xas_store(&xas, NULL);
}
-static void clear_shadow_entry(struct address_space *mapping, pgoff_t index,
- void *entry)
+static void clear_shadow_entries(struct address_space *mapping,
+ struct folio_batch *fbatch, pgoff_t *indices)
{
+ int i;
+
+ /* Handled by shmem itself, or for DAX we do nothing. */
+ if (shmem_mapping(mapping) || dax_mapping(mapping))
+ return;
+
spin_lock(&mapping->host->i_lock);
xa_lock_irq(&mapping->i_pages);
- __clear_shadow_entry(mapping, index, entry);
+
+ for (i = 0; i < folio_batch_count(fbatch); i++) {
+ struct folio *folio = fbatch->folios[i];
+
+ if (xa_is_value(folio))
+ __clear_shadow_entry(mapping, indices[i], folio);
+ }
+
xa_unlock_irq(&mapping->i_pages);
if (mapping_shrinkable(mapping))
inode_add_lru(mapping->host);
@@ -105,36 +118,6 @@ static void truncate_folio_batch_exceptionals(struct address_space *mapping,
fbatch->nr = j;
}
-/*
- * Invalidate exceptional entry if easily possible. This handles exceptional
- * entries for invalidate_inode_pages().
- */
-static int invalidate_exceptional_entry(struct address_space *mapping,
- pgoff_t index, void *entry)
-{
- /* Handled by shmem itself, or for DAX we do nothing. */
- if (shmem_mapping(mapping) || dax_mapping(mapping))
- return 1;
- clear_shadow_entry(mapping, index, entry);
- return 1;
-}
-
-/*
- * Invalidate exceptional entry if clean. This handles exceptional entries for
- * invalidate_inode_pages2() so for DAX it evicts only clean entries.
- */
-static int invalidate_exceptional_entry2(struct address_space *mapping,
- pgoff_t index, void *entry)
-{
- /* Handled by shmem itself */
- if (shmem_mapping(mapping))
- return 1;
- if (dax_mapping(mapping))
- return dax_invalidate_mapping_entry_sync(mapping, index);
- clear_shadow_entry(mapping, index, entry);
- return 1;
-}
-
/**
* folio_invalidate - Invalidate part or all of a folio.
* @folio: The folio which is affected.
@@ -495,6 +478,7 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
unsigned long ret;
unsigned long count = 0;
int i;
+ bool xa_has_values = false;
folio_batch_init(&fbatch);
while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
@@ -504,8 +488,8 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
/* We rely upon deletion not changing folio->index */
if (xa_is_value(folio)) {
- count += invalidate_exceptional_entry(mapping,
- indices[i], folio);
+ xa_has_values = true;
+ count++;
continue;
}
@@ -523,6 +507,10 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
}
count += ret;
}
+
+ if (xa_has_values)
+ clear_shadow_entries(mapping, &fbatch, indices);
+
folio_batch_remove_exceptionals(&fbatch);
folio_batch_release(&fbatch);
cond_resched();
@@ -555,7 +543,7 @@ EXPORT_SYMBOL(invalidate_mapping_pages);
* This is like mapping_evict_folio(), except it ignores the folio's
* refcount. We do this because invalidate_inode_pages2() needs stronger
* invalidation guarantees, and cannot afford to leave folios behind because
- * shrink_page_list() has a temp ref on them, or because they're transiently
+ * shrink_folio_list() has a temp ref on them, or because they're transiently
* sitting in the folio_add_lru() caches.
*/
static int invalidate_complete_folio2(struct address_space *mapping,
@@ -617,6 +605,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
int ret = 0;
int ret2 = 0;
int did_range_unmap = 0;
+ bool xa_has_values = false;
if (mapping_empty(mapping))
return 0;
@@ -630,8 +619,9 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
/* We rely upon deletion not changing folio->index */
if (xa_is_value(folio)) {
- if (!invalidate_exceptional_entry2(mapping,
- indices[i], folio))
+ xa_has_values = true;
+ if (dax_mapping(mapping) &&
+ !dax_invalidate_mapping_entry_sync(mapping, indices[i]))
ret = -EBUSY;
continue;
}
@@ -667,6 +657,10 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
ret = ret2;
folio_unlock(folio);
}
+
+ if (xa_has_values)
+ clear_shadow_entries(mapping, &fbatch, indices);
+
folio_batch_remove_exceptionals(&fbatch);
folio_batch_release(&fbatch);
cond_resched();
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index defa5109cc62..e54e5c8907fa 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -216,7 +216,7 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
folio_add_lru(folio);
folio_add_file_rmap_pte(folio, page, dst_vma);
} else {
- folio_add_new_anon_rmap(folio, dst_vma, dst_addr);
+ folio_add_new_anon_rmap(folio, dst_vma, dst_addr, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, dst_vma);
}
@@ -587,7 +587,7 @@ retry:
}
if (!uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE) &&
- !huge_pte_none_mostly(huge_ptep_get(dst_pte))) {
+ !huge_pte_none_mostly(huge_ptep_get(dst_mm, dst_addr, dst_pte))) {
err = -EEXIST;
hugetlb_vma_unlock_read(dst_vma);
mutex_unlock(&hugetlb_fault_mutex_table[hash]);
@@ -995,14 +995,8 @@ void double_pt_lock(spinlock_t *ptl1,
__acquires(ptl1)
__acquires(ptl2)
{
- spinlock_t *ptl_tmp;
-
- if (ptl1 > ptl2) {
- /* exchange ptl1 and ptl2 */
- ptl_tmp = ptl1;
- ptl1 = ptl2;
- ptl2 = ptl_tmp;
- }
+ if (ptl1 > ptl2)
+ swap(ptl1, ptl2);
/* lock in virtual address order to avoid lock inversion */
spin_lock(ptl1);
if (ptl1 != ptl2)
diff --git a/mm/util.c b/mm/util.c
index c6ad21ee6695..bc488f0121a7 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -844,6 +844,23 @@ void folio_copy(struct folio *dst, struct folio *src)
}
EXPORT_SYMBOL(folio_copy);
+int folio_mc_copy(struct folio *dst, struct folio *src)
+{
+ long nr = folio_nr_pages(src);
+ long i = 0;
+
+ for (;;) {
+ if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i)))
+ return -EHWPOISON;
+ if (++i == nr)
+ break;
+ cond_resched();
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(folio_mc_copy);
+
int sysctl_overcommit_memory __read_mostly = OVERCOMMIT_GUESS;
int sysctl_overcommit_ratio __read_mostly = 50;
unsigned long sysctl_overcommit_kbytes __read_mostly;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e34ea860153f..6b783baf12a1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1816,7 +1816,7 @@ static void free_vmap_area(struct vmap_area *va)
static inline void
preload_this_cpu_lock(spinlock_t *lock, gfp_t gfp_mask, int node)
{
- struct vmap_area *va = NULL;
+ struct vmap_area *va = NULL, *tmp;
/*
* Preload this CPU with one extra vmap_area object. It is used
@@ -1832,7 +1832,8 @@ preload_this_cpu_lock(spinlock_t *lock, gfp_t gfp_mask, int node)
spin_lock(lock);
- if (va && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, va))
+ tmp = NULL;
+ if (va && !__this_cpu_try_cmpxchg(ne_fit_preload_node, &tmp, va))
kmem_cache_free(vmap_area_cachep, va);
}
@@ -2055,8 +2056,8 @@ overflow:
}
if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit())
- pr_warn("vmap allocation for size %lu failed: use vmalloc=<size> to increase size\n",
- size);
+ pr_warn("vmalloc_node_range for size %lu failed: Address range restricted to %#lx - %#lx\n",
+ size, vstart, vend);
kmem_cache_free(vmap_area_cachep, va);
return ERR_PTR(-EBUSY);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2e34de9cd0d4..525d3ffa8451 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -92,6 +92,11 @@ struct scan_control {
unsigned long anon_cost;
unsigned long file_cost;
+#ifdef CONFIG_MEMCG
+ /* Swappiness value for proactive reclaim. Always use sc_swappiness()! */
+ int *proactive_swappiness;
+#endif
+
/* Can active folios be deactivated as part of reclaim? */
#define DEACTIVATE_ANON 1
#define DEACTIVATE_FILE 2
@@ -128,6 +133,9 @@ struct scan_control {
unsigned int memcg_low_reclaim:1;
unsigned int memcg_low_skipped:1;
+ /* Shared cgroup tree walk failed, rescan the whole tree */
+ unsigned int memcg_full_walk:1;
+
unsigned int hibernation_mode:1;
/* One of the zones is ready for compaction */
@@ -189,7 +197,7 @@ struct scan_control {
#endif
/*
- * From 0 .. 200. Higher means more swappy.
+ * From 0 .. MAX_SWAPPINESS. Higher means more swappy.
*/
int vm_swappiness = 60;
@@ -233,6 +241,13 @@ static bool writeback_throttling_sane(struct scan_control *sc)
#endif
return false;
}
+
+static int sc_swappiness(struct scan_control *sc, struct mem_cgroup *memcg)
+{
+ if (sc->proactive && sc->proactive_swappiness)
+ return *sc->proactive_swappiness;
+ return mem_cgroup_swappiness(memcg);
+}
#else
static bool cgroup_reclaim(struct scan_control *sc)
{
@@ -248,6 +263,11 @@ static bool writeback_throttling_sane(struct scan_control *sc)
{
return true;
}
+
+static int sc_swappiness(struct scan_control *sc, struct mem_cgroup *memcg)
+{
+ return READ_ONCE(vm_swappiness);
+}
#endif
static void set_task_reclaim_state(struct task_struct *task,
@@ -916,8 +936,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
mapping->a_ops->is_dirty_writeback(folio, dirty, writeback);
}
-static struct folio *alloc_demote_folio(struct folio *src,
- unsigned long private)
+struct folio *alloc_migrate_folio(struct folio *src, unsigned long private)
{
struct folio *dst;
nodemask_t *allowed_mask;
@@ -980,7 +999,7 @@ static unsigned int demote_folio_list(struct list_head *demote_folios,
node_get_allowed_targets(pgdat, &allowed_mask);
/* Demotion ignores all cpuset and mempolicy settings */
- migrate_pages(demote_folios, alloc_demote_folio, NULL,
+ migrate_pages(demote_folios, alloc_migrate_folio, NULL,
(unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
&nr_succeeded);
@@ -1272,7 +1291,7 @@ retry:
* try_to_unmap acquire PTL from the first PTE,
* eliminating the influence of temporary PTE values.
*/
- if (folio_test_large(folio) && list_empty(&folio->_deferred_list))
+ if (folio_test_large(folio))
flags |= TTU_SYNC;
try_to_unmap(folio, flags);
@@ -1437,9 +1456,7 @@ free_it:
*/
nr_reclaimed += nr_pages;
- if (folio_test_large(folio) &&
- folio_test_large_rmappable(folio))
- folio_undo_large_rmappable(folio);
+ folio_undo_large_rmappable(folio);
if (folio_batch_add(&free_folios, folio) == 0) {
mem_cgroup_uncharge_folios(&free_folios);
try_to_unmap_flush();
@@ -1846,9 +1863,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
if (unlikely(folio_put_testzero(folio))) {
__folio_clear_lru_flags(folio);
- if (folio_test_large(folio) &&
- folio_test_large_rmappable(folio))
- folio_undo_large_rmappable(folio);
+ folio_undo_large_rmappable(folio);
if (folio_batch_add(&free_folios, folio) == 0) {
spin_unlock_irq(&lruvec->lru_lock);
mem_cgroup_uncharge_folios(&free_folios);
@@ -2353,7 +2368,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
struct pglist_data *pgdat = lruvec_pgdat(lruvec);
struct mem_cgroup *memcg = lruvec_memcg(lruvec);
unsigned long anon_cost, file_cost, total_cost;
- int swappiness = mem_cgroup_swappiness(memcg);
+ int swappiness = sc_swappiness(sc, memcg);
u64 fraction[ANON_AND_FILE];
u64 denominator = 0; /* gcc */
enum scan_balance scan_balance;
@@ -2429,7 +2444,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
ap = swappiness * (total_cost + 1);
ap /= anon_cost + 1;
- fp = (200 - swappiness) * (total_cost + 1);
+ fp = (MAX_SWAPPINESS - swappiness) * (total_cost + 1);
fp /= file_cost + 1;
fraction[0] = ap;
@@ -2634,7 +2649,7 @@ static int get_swappiness(struct lruvec *lruvec, struct scan_control *sc)
mem_cgroup_get_nr_swap_pages(memcg) < MIN_LRU_BATCH)
return 0;
- return mem_cgroup_swappiness(memcg);
+ return sc_swappiness(sc, memcg);
}
static int get_nr_gens(struct lruvec *lruvec, int type)
@@ -3900,6 +3915,32 @@ done:
* working set protection
******************************************************************************/
+static void set_initial_priority(struct pglist_data *pgdat, struct scan_control *sc)
+{
+ int priority;
+ unsigned long reclaimable;
+
+ if (sc->priority != DEF_PRIORITY || sc->nr_to_reclaim < MIN_LRU_BATCH)
+ return;
+ /*
+ * Determine the initial priority based on
+ * (total >> priority) * reclaimed_to_scanned_ratio = nr_to_reclaim,
+ * where reclaimed_to_scanned_ratio = inactive / total.
+ */
+ reclaimable = node_page_state(pgdat, NR_INACTIVE_FILE);
+ if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc))
+ reclaimable += node_page_state(pgdat, NR_INACTIVE_ANON);
+
+ /* round down reclaimable and round up sc->nr_to_reclaim */
+ priority = fls_long(reclaimable) - 1 - fls_long(sc->nr_to_reclaim - 1);
+
+ /*
+ * The estimation is based on LRU pages only, so cap it to prevent
+ * overshoots of shrinker objects by large margins.
+ */
+ sc->priority = clamp(priority, DEF_PRIORITY / 2, DEF_PRIORITY);
+}
+
static bool lruvec_is_sizable(struct lruvec *lruvec, struct scan_control *sc)
{
int gen, type, zone;
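
set_initial_priority(), moved above so that lru_gen_age_node() can use it as well, estimates the starting priority from bit positions rather than an exact division. A worked example as a standalone C program, assuming 64-bit longs and mirroring the kernel's DEF_PRIORITY of 12 (fls_long is emulated with __builtin_clzl):

/*
 * Worked example of the initial-priority estimate (userspace illustration).
 * The idea: pick priority so that roughly
 *   (reclaimable >> priority) ~= nr_to_reclaim
 * using highest-set-bit positions instead of a division.
 */
#include <stdio.h>

#define DEF_PRIORITY	12

static int fls_long(unsigned long x)	/* 1-based position of the highest set bit */
{
	return x ? 64 - __builtin_clzl(x) : 0;
}

static int clamp_int(int v, int lo, int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned long reclaimable = 1UL << 20;	/* ~1M inactive pages (~4 GiB) */
	unsigned long nr_to_reclaim = 1UL << 9;	/* want ~512 pages */

	/* round reclaimable down and nr_to_reclaim up, as the comment says */
	int priority = fls_long(reclaimable) - 1 - fls_long(nr_to_reclaim - 1);

	/* the patch now caps the low end at DEF_PRIORITY / 2 instead of 0 */
	priority = clamp_int(priority, DEF_PRIORITY / 2, DEF_PRIORITY);

	printf("priority %d -> scans about %lu pages per pass\n",
	       priority, reclaimable >> priority);
	return 0;
}

With about one million reclaimable pages and a 512-page target the estimate lands on priority 11, i.e. roughly 512 pages per pass; the new clamp floor of DEF_PRIORITY / 2 keeps the LRU-only estimate from overshooting shrinker objects, per the added comment.
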
@@ -3933,19 +3974,17 @@ static bool lruvec_is_reclaimable(struct lruvec *lruvec, struct scan_control *sc
struct mem_cgroup *memcg = lruvec_memcg(lruvec);
DEFINE_MIN_SEQ(lruvec);
- /* see the comment on lru_gen_folio */
- gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]);
- birth = READ_ONCE(lruvec->lrugen.timestamps[gen]);
-
- if (time_is_after_jiffies(birth + min_ttl))
+ if (mem_cgroup_below_min(NULL, memcg))
return false;
if (!lruvec_is_sizable(lruvec, sc))
return false;
- mem_cgroup_calculate_protection(NULL, memcg);
+ /* see the comment on lru_gen_folio */
+ gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]);
+ birth = READ_ONCE(lruvec->lrugen.timestamps[gen]);
- return !mem_cgroup_below_min(NULL, memcg);
+ return time_is_before_jiffies(birth + min_ttl);
}
/* to protect the working set of the last N jiffies */
@@ -3955,23 +3994,20 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
{
struct mem_cgroup *memcg;
unsigned long min_ttl = READ_ONCE(lru_gen_min_ttl);
+ bool reclaimable = !min_ttl;
VM_WARN_ON_ONCE(!current_is_kswapd());
- /* check the order to exclude compaction-induced reclaim */
- if (!min_ttl || sc->order || sc->priority == DEF_PRIORITY)
- return;
+ set_initial_priority(pgdat, sc);
memcg = mem_cgroup_iter(NULL, NULL, NULL);
do {
struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
- if (lruvec_is_reclaimable(lruvec, sc, min_ttl)) {
- mem_cgroup_iter_break(NULL, memcg);
- return;
- }
+ mem_cgroup_calculate_protection(NULL, memcg);
- cond_resched();
+ if (!reclaimable)
+ reclaimable = lruvec_is_reclaimable(lruvec, sc, min_ttl);
} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
/*
@@ -3979,7 +4015,7 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
* younger than min_ttl. However, another possibility is all memcgs are
* either too small or below min.
*/
- if (mutex_trylock(&oom_lock)) {
+ if (!reclaimable && mutex_trylock(&oom_lock)) {
struct oom_control oc = {
.gfp_mask = sc->gfp_mask,
};
@@ -4449,7 +4485,7 @@ static int get_type_to_scan(struct lruvec *lruvec, int swappiness, int *tier_idx
{
int type, tier;
struct ctrl_pos sp, pv;
- int gain[ANON_AND_FILE] = { swappiness, 200 - swappiness };
+ int gain[ANON_AND_FILE] = { swappiness, MAX_SWAPPINESS - swappiness };
/*
* Compare the first tier of anon with that of file to determine which
@@ -4496,7 +4532,7 @@ static int isolate_folios(struct lruvec *lruvec, struct scan_control *sc, int sw
type = LRU_GEN_ANON;
else if (swappiness == 1)
type = LRU_GEN_FILE;
- else if (swappiness == 200)
+ else if (swappiness == MAX_SWAPPINESS)
type = LRU_GEN_ANON;
else if (!(sc->gfp_mask & __GFP_IO))
type = LRU_GEN_FILE;
@@ -4582,7 +4618,6 @@ retry:
/* retry folios that may have missed folio_rotate_reclaimable() */
list_move(&folio->lru, &clean);
- sc->nr_scanned -= folio_nr_pages(folio);
}
spin_lock_irq(&lruvec->lru_lock);
@@ -4772,8 +4807,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
struct mem_cgroup *memcg = lruvec_memcg(lruvec);
struct pglist_data *pgdat = lruvec_pgdat(lruvec);
- mem_cgroup_calculate_protection(NULL, memcg);
-
+ /* lru_gen_age_node() called mem_cgroup_calculate_protection() */
if (mem_cgroup_below_min(NULL, memcg))
return MEMCG_LRU_YOUNG;
@@ -4897,28 +4931,6 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
blk_finish_plug(&plug);
}
-static void set_initial_priority(struct pglist_data *pgdat, struct scan_control *sc)
-{
- int priority;
- unsigned long reclaimable;
-
- if (sc->priority != DEF_PRIORITY || sc->nr_to_reclaim < MIN_LRU_BATCH)
- return;
- /*
- * Determine the initial priority based on
- * (total >> priority) * reclaimed_to_scanned_ratio = nr_to_reclaim,
- * where reclaimed_to_scanned_ratio = inactive / total.
- */
- reclaimable = node_page_state(pgdat, NR_INACTIVE_FILE);
- if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc))
- reclaimable += node_page_state(pgdat, NR_INACTIVE_ANON);
-
- /* round down reclaimable and round up sc->nr_to_reclaim */
- priority = fls_long(reclaimable) - 1 - fls_long(sc->nr_to_reclaim - 1);
-
- sc->priority = clamp(priority, 0, DEF_PRIORITY);
-}
-
static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *sc)
{
struct blk_plug plug;
@@ -5430,9 +5442,9 @@ static int run_cmd(char cmd, int memcg_id, int nid, unsigned long seq,
lruvec = get_lruvec(memcg, nid);
- if (swappiness < 0)
+ if (swappiness < MIN_SWAPPINESS)
swappiness = get_swappiness(lruvec, sc);
- else if (swappiness > 200)
+ else if (swappiness > MAX_SWAPPINESS)
goto done;
switch (cmd) {
@@ -5845,9 +5857,25 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
{
struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
+ struct mem_cgroup_reclaim_cookie reclaim = {
+ .pgdat = pgdat,
+ };
+ struct mem_cgroup_reclaim_cookie *partial = &reclaim;
struct mem_cgroup *memcg;
- memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
+ /*
+ * In most cases, direct reclaimers can do partial walks
+ * through the cgroup tree, using an iterator state that
+ * persists across invocations. This strikes a balance between
+ * fairness and allocation latency.
+ *
+ * For kswapd, reliable forward progress is more important
+ * than a quick return to idle. Always do full walks.
+ */
+ if (current_is_kswapd() || sc->memcg_full_walk)
+ partial = NULL;
+
+ memcg = mem_cgroup_iter(target_memcg, NULL, partial);
do {
struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
unsigned long reclaimed;
@@ -5897,7 +5925,12 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
sc->nr_scanned - scanned,
sc->nr_reclaimed - reclaimed);
- } while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
+ /* If partial walks are allowed, bail once goal is reached */
+ if (partial && sc->nr_reclaimed >= sc->nr_to_reclaim) {
+ mem_cgroup_iter_break(target_memcg, memcg);
+ break;
+ }
+ } while ((memcg = mem_cgroup_iter(target_memcg, memcg, partial)));
}
static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
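
The comments above describe a policy rather than a new data structure: direct reclaimers share one iterator cookie so each walk resumes where the last one stopped and can bail once the reclaim goal is met, while kswapd (and the memcg_full_walk retry) always scans every cgroup. The following toy model only makes that round-robin behaviour concrete; shared_cursor, reclaim_from() and NR_GROUPS are made up and merely stand in for mem_cgroup_iter() state:

/*
 * Toy model (not kernel code) of the walk policy added to
 * shrink_node_memcgs(): partial walkers resume a shared cursor and stop
 * at their goal; full walkers always visit everything.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_GROUPS 8

static int shared_cursor;	/* stands in for the mem_cgroup_iter() cookie */

static unsigned long reclaim_from(int group)
{
	(void)group;
	return 16;		/* pretend every group yields 16 pages */
}

static unsigned long walk(bool partial, unsigned long goal)
{
	unsigned long reclaimed = 0;
	int i, start = partial ? shared_cursor : 0;

	for (i = 0; i < NR_GROUPS; i++) {
		int group = (start + i) % NR_GROUPS;

		reclaimed += reclaim_from(group);
		if (partial) {
			shared_cursor = (group + 1) % NR_GROUPS;
			if (reclaimed >= goal)	/* bail once goal is reached */
				break;
		}
	}
	return reclaimed;
}

int main(void)
{
	printf("direct reclaim: %lu pages (cursor now %d)\n",
	       walk(true, 32), shared_cursor);
	printf("direct reclaim: %lu pages (cursor now %d)\n",
	       walk(true, 32), shared_cursor);
	printf("kswapd-style:   %lu pages\n", walk(false, 32));
	return 0;
}

Two partial walks each stop after two groups and hand the cursor on to the next caller; the full walk visits all eight regardless.
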
@@ -6150,9 +6183,9 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
* and balancing, not for a memcg's limit.
*/
nr_soft_scanned = 0;
- nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone->zone_pgdat,
- sc->order, sc->gfp_mask,
- &nr_soft_scanned);
+ nr_soft_reclaimed = memcg1_soft_limit_reclaim(zone->zone_pgdat,
+ sc->order, sc->gfp_mask,
+ &nr_soft_scanned);
sc->nr_reclaimed += nr_soft_reclaimed;
sc->nr_scanned += nr_soft_scanned;
/* need some check for avoid more shrink_zone() */
@@ -6271,6 +6304,21 @@ retry:
return 1;
/*
+ * In most cases, direct reclaimers can do partial walks
+ * through the cgroup tree to meet the reclaim goal while
+ * keeping latency low. Since the iterator state is shared
+ * among all direct reclaim invocations (to retain fairness
+ * among cgroups), though, high concurrency can result in
+ * individual threads not seeing enough cgroups to make
+ * meaningful forward progress. Avoid false OOMs in this case.
+ */
+ if (!sc->memcg_full_walk) {
+ sc->priority = initial_priority;
+ sc->memcg_full_walk = 1;
+ goto retry;
+ }
+
+ /*
* We make inactive:active ratio decisions based on the node's
* composition of memory, but a restrictive reclaim_idx or a
* memory.low cgroup setting can exempt large amounts of
@@ -6515,12 +6563,14 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
unsigned long nr_pages,
gfp_t gfp_mask,
- unsigned int reclaim_options)
+ unsigned int reclaim_options,
+ int *swappiness)
{
unsigned long nr_reclaimed;
unsigned int noreclaim_flag;
struct scan_control sc = {
.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
+ .proactive_swappiness = swappiness,
.gfp_mask = (current_gfp_context(gfp_mask) & GFP_RECLAIM_MASK) |
(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
.reclaim_idx = MAX_NR_ZONES - 1,
@@ -6702,6 +6752,7 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
{
struct zone *zone;
int z;
+ unsigned long nr_reclaimed = sc->nr_reclaimed;
/* Reclaim a number of pages proportional to the number of zones */
sc->nr_to_reclaim = 0;
@@ -6729,7 +6780,8 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
if (sc->order && sc->nr_reclaimed >= compact_gap(sc->order))
sc->order = 0;
- return sc->nr_scanned >= sc->nr_to_reclaim;
+ /* account for progress from mm_account_reclaimed_pages() */
+ return max(sc->nr_scanned, sc->nr_reclaimed - nr_reclaimed) >= sc->nr_to_reclaim;
}
/* Page allocator PCP high watermark is lowered if reclaim is active. */
@@ -6899,8 +6951,8 @@ restart:
/* Call soft limit reclaim before calling shrink_node. */
sc.nr_scanned = 0;
nr_soft_scanned = 0;
- nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(pgdat, sc.order,
- sc.gfp_mask, &nr_soft_scanned);
+ nr_soft_reclaimed = memcg1_soft_limit_reclaim(pgdat, sc.order,
+ sc.gfp_mask, &nr_soft_scanned);
sc.nr_reclaimed += nr_soft_reclaimed;
/*
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8507c497218b..73d791d1caad 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1255,7 +1255,8 @@ const char * const vmstat_text[] = {
"pgdemote_kswapd",
"pgdemote_direct",
"pgdemote_khugepaged",
-
+ "nr_memmap",
+ "nr_memmap_boot",
/* enum writeback_stat_item counters */
"nr_dirty_threshold",
"nr_dirty_background_threshold",
@@ -2282,4 +2283,27 @@ static int __init extfrag_debug_init(void)
}
module_init(extfrag_debug_init);
+
#endif
+
+/*
+ * Page metadata size (struct page and page_ext) in pages
+ */
+static unsigned long early_perpage_metadata[MAX_NUMNODES] __meminitdata;
+
+void __meminit mod_node_early_perpage_metadata(int nid, long delta)
+{
+ early_perpage_metadata[nid] += delta;
+}
+
+void __meminit store_early_perpage_metadata(void)
+{
+ int nid;
+ struct pglist_data *pgdat;
+
+ for_each_online_pgdat(pgdat) {
+ nid = pgdat->node_id;
+ mod_node_page_state(NODE_DATA(nid), NR_MEMMAP_BOOT,
+ early_perpage_metadata[nid]);
+ }
+}
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b42d3545ca85..5d6581ab7c07 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -20,7 +20,8 @@
* page->index: links together all component pages of a zspage
* For the huge page, this is always 0, so we use this field
* to store handle.
- * page->page_type: first object offset in a subpage of zspage
+ * page->page_type: PG_zsmalloc, lower 16 bit locate the first object
+ * offset in a subpage of a zspage
*
* Usage of struct page flags:
* PG_private: identifies the first component page
@@ -33,7 +34,8 @@
/*
* lock ordering:
* page_lock
- * pool->lock
+ * pool->migrate_lock
+ * class->lock
* zspage->lock
*/
@@ -182,6 +184,7 @@ static struct dentry *zs_stat_root;
static size_t huge_class_size;
struct size_class {
+ spinlock_t lock;
struct list_head fullness_list[NR_FULLNESS_GROUPS];
/*
* Size of objects stored in this class. Must be multiple
@@ -236,7 +239,8 @@ struct zs_pool {
#ifdef CONFIG_COMPACTION
struct work_struct free_work;
#endif
- spinlock_t lock;
+ /* protect page/zspage migration */
+ rwlock_t migrate_lock;
atomic_t compaction_in_progress;
};
@@ -335,7 +339,7 @@ static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage)
kmem_cache_free(pool->zspage_cachep, zspage);
}
-/* pool->lock(which owns the handle) synchronizes races */
+/* class->lock(which owns the handle) synchronizes races */
static void record_obj(unsigned long handle, unsigned long obj)
{
*(unsigned long *)handle = obj;
@@ -430,7 +434,7 @@ static __maybe_unused int is_first_page(struct page *page)
return PagePrivate(page);
}
-/* Protected by pool->lock */
+/* Protected by class->lock */
static inline int get_zspage_inuse(struct zspage *zspage)
{
return zspage->inuse;
@@ -450,14 +454,28 @@ static inline struct page *get_first_page(struct zspage *zspage)
return first_page;
}
+#define FIRST_OBJ_PAGE_TYPE_MASK 0xffff
+
+static inline void reset_first_obj_offset(struct page *page)
+{
+ VM_WARN_ON_ONCE(!PageZsmalloc(page));
+ page->page_type |= FIRST_OBJ_PAGE_TYPE_MASK;
+}
+
static inline unsigned int get_first_obj_offset(struct page *page)
{
- return page->page_type;
+ VM_WARN_ON_ONCE(!PageZsmalloc(page));
+ return page->page_type & FIRST_OBJ_PAGE_TYPE_MASK;
}
static inline void set_first_obj_offset(struct page *page, unsigned int offset)
{
- page->page_type = offset;
+ /* With 16 bit available, we can support offsets into 64 KiB pages. */
+ BUILD_BUG_ON(PAGE_SIZE > SZ_64K);
+ VM_WARN_ON_ONCE(!PageZsmalloc(page));
+ VM_WARN_ON_ONCE(offset & ~FIRST_OBJ_PAGE_TYPE_MASK);
+ page->page_type &= ~FIRST_OBJ_PAGE_TYPE_MASK;
+ page->page_type |= offset & FIRST_OBJ_PAGE_TYPE_MASK;
}
static inline unsigned int get_freeobj(struct zspage *zspage)
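
The new helpers above pack the first object offset into the low 16 bits of page->page_type while the high bits keep the page-type flags such as PG_zsmalloc, which is why the BUILD_BUG_ON limits PAGE_SIZE to 64 KiB. A standalone sketch of the packing (FAKE_PG_ZSMALLOC is a made-up stand-in bit, not the real flag):

/*
 * Illustration of the new page->page_type encoding in zsmalloc (not kernel
 * code): upper bits hold page-type flags, lower 16 bits hold the first
 * object offset within the page.
 */
#include <stdio.h>

#define FIRST_OBJ_PAGE_TYPE_MASK 0xffffU
#define FAKE_PG_ZSMALLOC	 (1U << 24)	/* stand-in for the real flag bit */

static unsigned int page_type;

static void set_first_obj_offset(unsigned int offset)
{
	page_type &= ~FIRST_OBJ_PAGE_TYPE_MASK;	     /* keep flag bits intact */
	page_type |= offset & FIRST_OBJ_PAGE_TYPE_MASK;
}

static unsigned int get_first_obj_offset(void)
{
	return page_type & FIRST_OBJ_PAGE_TYPE_MASK;
}

int main(void)
{
	page_type = FAKE_PG_ZSMALLOC;		/* page marked as zsmalloc */
	set_first_obj_offset(0x1a40);		/* offset of the first object */

	printf("page_type %#x, offset %#x, flag still set: %d\n",
	       page_type, get_first_obj_offset(),
	       !!(page_type & FAKE_PG_ZSMALLOC));
	return 0;
}
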
@@ -494,19 +512,19 @@ static int get_size_class_index(int size)
return min_t(int, ZS_SIZE_CLASSES - 1, idx);
}
-static inline void class_stat_inc(struct size_class *class,
- int type, unsigned long cnt)
+static inline void class_stat_add(struct size_class *class, int type,
+ unsigned long cnt)
{
class->stats.objs[type] += cnt;
}
-static inline void class_stat_dec(struct size_class *class,
- int type, unsigned long cnt)
+static inline void class_stat_sub(struct size_class *class, int type,
+ unsigned long cnt)
{
class->stats.objs[type] -= cnt;
}
-static inline unsigned long zs_stat_get(struct size_class *class, int type)
+static inline unsigned long class_stat_read(struct size_class *class, int type)
{
return class->stats.objs[type];
}
@@ -554,18 +572,18 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
if (class->index != i)
continue;
- spin_lock(&pool->lock);
+ spin_lock(&class->lock);
seq_printf(s, " %5u %5u ", i, class->size);
for (fg = ZS_INUSE_RATIO_10; fg < NR_FULLNESS_GROUPS; fg++) {
- inuse_totals[fg] += zs_stat_get(class, fg);
- seq_printf(s, "%9lu ", zs_stat_get(class, fg));
+ inuse_totals[fg] += class_stat_read(class, fg);
+ seq_printf(s, "%9lu ", class_stat_read(class, fg));
}
- obj_allocated = zs_stat_get(class, ZS_OBJS_ALLOCATED);
- obj_used = zs_stat_get(class, ZS_OBJS_INUSE);
+ obj_allocated = class_stat_read(class, ZS_OBJS_ALLOCATED);
+ obj_used = class_stat_read(class, ZS_OBJS_INUSE);
freeable = zs_can_compact(class);
- spin_unlock(&pool->lock);
+ spin_unlock(&class->lock);
objs_per_zspage = class->objs_per_zspage;
pages_used = obj_allocated / objs_per_zspage *
@@ -668,7 +686,7 @@ static void insert_zspage(struct size_class *class,
struct zspage *zspage,
int fullness)
{
- class_stat_inc(class, fullness, 1);
+ class_stat_add(class, fullness, 1);
list_add(&zspage->list, &class->fullness_list[fullness]);
zspage->fullness = fullness;
}
@@ -684,7 +702,7 @@ static void remove_zspage(struct size_class *class, struct zspage *zspage)
VM_BUG_ON(list_empty(&class->fullness_list[fullness]));
list_del_init(&zspage->list);
- class_stat_dec(class, fullness, 1);
+ class_stat_sub(class, fullness, 1);
}
/*
@@ -791,8 +809,9 @@ static void reset_page(struct page *page)
__ClearPageMovable(page);
ClearPagePrivate(page);
set_page_private(page, 0);
- page_mapcount_reset(page);
page->index = 0;
+ reset_first_obj_offset(page);
+ __ClearPageZsmalloc(page);
}
static int trylock_zspage(struct zspage *zspage)
@@ -821,7 +840,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
{
struct page *page, *next;
- assert_spin_locked(&pool->lock);
+ assert_spin_locked(&class->lock);
VM_BUG_ON(get_zspage_inuse(zspage));
VM_BUG_ON(zspage->fullness != ZS_INUSE_RATIO_0);
@@ -839,7 +858,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
cache_free_zspage(pool, zspage);
- class_stat_dec(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage);
+ class_stat_sub(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage);
atomic_long_sub(class->pages_per_zspage, &pool->pages_allocated);
}
@@ -965,11 +984,13 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
if (!page) {
while (--i >= 0) {
dec_zone_page_state(pages[i], NR_ZSPAGES);
+ __ClearPageZsmalloc(pages[i]);
__free_page(pages[i]);
}
cache_free_zspage(pool, zspage);
return NULL;
}
+ __SetPageZsmalloc(page);
inc_zone_page_state(page, NR_ZSPAGES);
pages[i] = page;
@@ -1178,19 +1199,19 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
BUG_ON(in_interrupt());
/* It guarantees it can get zspage from handle safely */
- spin_lock(&pool->lock);
+ read_lock(&pool->migrate_lock);
obj = handle_to_obj(handle);
obj_to_location(obj, &page, &obj_idx);
zspage = get_zspage(page);
/*
- * migration cannot move any zpages in this zspage. Here, pool->lock
+ * migration cannot move any zpages in this zspage. Here, class->lock
* is too heavy since callers would take some time until they calls
* zs_unmap_object API so delegate the locking from class to zspage
* which is smaller granularity.
*/
migrate_read_lock(zspage);
- spin_unlock(&pool->lock);
+ read_unlock(&pool->migrate_lock);
class = zspage_class(pool, zspage);
off = offset_in_page(class->size * obj_idx);
@@ -1285,7 +1306,6 @@ static unsigned long obj_malloc(struct zs_pool *pool,
void *vaddr;
class = pool->size_class[zspage->class];
- handle |= OBJ_ALLOCATED_TAG;
obj = get_freeobj(zspage);
offset = obj * class->size;
@@ -1301,15 +1321,16 @@ static unsigned long obj_malloc(struct zs_pool *pool,
set_freeobj(zspage, link->next >> OBJ_TAG_BITS);
if (likely(!ZsHugePage(zspage)))
/* record handle in the header of allocated chunk */
- link->handle = handle;
+ link->handle = handle | OBJ_ALLOCATED_TAG;
else
/* record handle to page->index */
- zspage->first_page->index = handle;
+ zspage->first_page->index = handle | OBJ_ALLOCATED_TAG;
kunmap_atomic(vaddr);
mod_zspage_inuse(zspage, 1);
obj = location_to_obj(m_page, obj);
+ record_obj(handle, obj);
return obj;
}
@@ -1327,7 +1348,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
*/
unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
{
- unsigned long handle, obj;
+ unsigned long handle;
struct size_class *class;
int newfg;
struct zspage *zspage;
@@ -1346,20 +1367,19 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
size += ZS_HANDLE_SIZE;
class = pool->size_class[get_size_class_index(size)];
- /* pool->lock effectively protects the zpage migration */
- spin_lock(&pool->lock);
+ /* class->lock effectively protects the zpage migration */
+ spin_lock(&class->lock);
zspage = find_get_zspage(class);
if (likely(zspage)) {
- obj = obj_malloc(pool, zspage, handle);
+ obj_malloc(pool, zspage, handle);
/* Now move the zspage to another fullness group, if required */
fix_fullness_group(class, zspage);
- record_obj(handle, obj);
- class_stat_inc(class, ZS_OBJS_INUSE, 1);
+ class_stat_add(class, ZS_OBJS_INUSE, 1);
goto out;
}
- spin_unlock(&pool->lock);
+ spin_unlock(&class->lock);
zspage = alloc_zspage(pool, class, gfp);
if (!zspage) {
@@ -1367,19 +1387,18 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
return (unsigned long)ERR_PTR(-ENOMEM);
}
- spin_lock(&pool->lock);
- obj = obj_malloc(pool, zspage, handle);
+ spin_lock(&class->lock);
+ obj_malloc(pool, zspage, handle);
newfg = get_fullness_group(class, zspage);
insert_zspage(class, zspage, newfg);
- record_obj(handle, obj);
atomic_long_add(class->pages_per_zspage, &pool->pages_allocated);
- class_stat_inc(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage);
- class_stat_inc(class, ZS_OBJS_INUSE, 1);
+ class_stat_add(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage);
+ class_stat_add(class, ZS_OBJS_INUSE, 1);
/* We completely set up zspage so mark them as movable */
SetZsPageMovable(pool, zspage);
out:
- spin_unlock(&pool->lock);
+ spin_unlock(&class->lock);
return handle;
}
@@ -1424,23 +1443,25 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
return;
/*
- * The pool->lock protects the race with zpage's migration
+ * The pool->migrate_lock protects the race with zpage's migration
* so it's safe to get the page from handle.
*/
- spin_lock(&pool->lock);
+ read_lock(&pool->migrate_lock);
obj = handle_to_obj(handle);
obj_to_page(obj, &f_page);
zspage = get_zspage(f_page);
class = zspage_class(pool, zspage);
+ spin_lock(&class->lock);
+ read_unlock(&pool->migrate_lock);
- class_stat_dec(class, ZS_OBJS_INUSE, 1);
+ class_stat_sub(class, ZS_OBJS_INUSE, 1);
obj_free(class->size, obj);
fullness = fix_fullness_group(class, zspage);
if (fullness == ZS_INUSE_RATIO_0)
free_zspage(pool, class, zspage);
- spin_unlock(&pool->lock);
+ spin_unlock(&class->lock);
cache_free_handle(pool, handle);
}
EXPORT_SYMBOL_GPL(zs_free);
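
zs_free() hands off in the opposite direction: the migrate_lock read side keeps the handle-to-zspage lookup stable, and class->lock is acquired before that read lock is released, so a concurrent zs_page_migrate() (a write-side user) cannot move the zspage between the lookup and the actual free. The ordering that matters, in sketch form:

    read_lock(&pool->migrate_lock);   /* handle -> zspage lookup is stable */
    /* resolve zspage and class from the handle */
    spin_lock(&class->lock);          /* taken before the read lock is dropped */
    read_unlock(&pool->migrate_lock);
    /* free the object and fix fullness under class->lock only */
    spin_unlock(&class->lock);
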
@@ -1568,7 +1589,6 @@ static void migrate_zspage(struct zs_pool *pool, struct zspage *src_zspage,
free_obj = obj_malloc(pool, dst_zspage, handle);
zs_object_copy(class, free_obj, used_obj);
obj_idx++;
- record_obj(handle, free_obj);
obj_free(class->size, used_obj);
/* Stop if there is no more space */
@@ -1752,27 +1772,26 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
unsigned long old_obj, new_obj;
unsigned int obj_idx;
- /*
- * We cannot support the _NO_COPY case here, because copy needs to
- * happen under the zs lock, which does not work with
- * MIGRATE_SYNC_NO_COPY workflow.
- */
- if (mode == MIGRATE_SYNC_NO_COPY)
- return -EINVAL;
-
VM_BUG_ON_PAGE(!PageIsolated(page), page);
+ /* We're committed, tell the world that this is a Zsmalloc page. */
+ __SetPageZsmalloc(newpage);
+
/* The page is locked, so this pointer must remain valid */
zspage = get_zspage(page);
pool = zspage->pool;
/*
- * The pool's lock protects the race between zpage migration
+ * The pool migrate_lock protects the race between zpage migration
* and zs_free.
*/
- spin_lock(&pool->lock);
+ write_lock(&pool->migrate_lock);
class = zspage_class(pool, zspage);
+ /*
+ * the class lock protects zpage alloc/free in the zspage.
+ */
+ spin_lock(&class->lock);
/* the migrate_write_lock protects zpage access via zs_map_object */
migrate_write_lock(zspage);
@@ -1802,9 +1821,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
replace_sub_page(class, zspage, newpage, page);
/*
* Since we complete the data copy and set up new zspage structure,
- * it's okay to release the pool's lock.
+ * it's okay to release the pool's migrate_lock.
*/
- spin_unlock(&pool->lock);
+ write_unlock(&pool->migrate_lock);
+ spin_unlock(&class->lock);
migrate_write_unlock(zspage);
get_page(newpage);
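
zs_page_migrate() is the one path that takes all three locks, which makes the resulting nesting order easy to read off (a summary sketch, not text from the patch):

    /*
     * Lock nesting after this series, outermost first:
     *   pool->migrate_lock (rwlock)  - handle -> zspage resolution vs. migration;
     *                                  zs_map_object()/zs_free() are readers,
     *                                  zs_page_migrate()/__zs_compact() are writers
     *   class->lock (spinlock)       - per-class freelists, fullness lists, stats
     *   zspage migrate lock          - pins one zspage while objects in it are
     *                                  mapped or being moved
     */
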
@@ -1848,20 +1868,21 @@ static void async_free_zspage(struct work_struct *work)
if (class->index != i)
continue;
- spin_lock(&pool->lock);
+ spin_lock(&class->lock);
list_splice_init(&class->fullness_list[ZS_INUSE_RATIO_0],
&free_pages);
- spin_unlock(&pool->lock);
+ spin_unlock(&class->lock);
}
list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
list_del(&zspage->list);
lock_zspage(zspage);
- spin_lock(&pool->lock);
class = zspage_class(pool, zspage);
+ spin_lock(&class->lock);
+ class_stat_sub(class, ZS_INUSE_RATIO_0, 1);
__free_zspage(pool, class, zspage);
- spin_unlock(&pool->lock);
+ spin_unlock(&class->lock);
}
};
@@ -1902,8 +1923,8 @@ static inline void zs_flush_migration(struct zs_pool *pool) { }
static unsigned long zs_can_compact(struct size_class *class)
{
unsigned long obj_wasted;
- unsigned long obj_allocated = zs_stat_get(class, ZS_OBJS_ALLOCATED);
- unsigned long obj_used = zs_stat_get(class, ZS_OBJS_INUSE);
+ unsigned long obj_allocated = class_stat_read(class, ZS_OBJS_ALLOCATED);
+ unsigned long obj_used = class_stat_read(class, ZS_OBJS_INUSE);
if (obj_allocated <= obj_used)
return 0;
@@ -1925,7 +1946,8 @@ static unsigned long __zs_compact(struct zs_pool *pool,
* protect the race between zpage migration and zs_free
* as well as zpage allocation/free
*/
- spin_lock(&pool->lock);
+ write_lock(&pool->migrate_lock);
+ spin_lock(&class->lock);
while (zs_can_compact(class)) {
int fg;
@@ -1951,13 +1973,15 @@ static unsigned long __zs_compact(struct zs_pool *pool,
src_zspage = NULL;
if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
- || spin_is_contended(&pool->lock)) {
+ || rwlock_is_contended(&pool->migrate_lock)) {
putback_zspage(class, dst_zspage);
dst_zspage = NULL;
- spin_unlock(&pool->lock);
+ spin_unlock(&class->lock);
+ write_unlock(&pool->migrate_lock);
cond_resched();
- spin_lock(&pool->lock);
+ write_lock(&pool->migrate_lock);
+ spin_lock(&class->lock);
}
}
@@ -1967,7 +1991,8 @@ static unsigned long __zs_compact(struct zs_pool *pool,
if (dst_zspage)
putback_zspage(class, dst_zspage);
- spin_unlock(&pool->lock);
+ spin_unlock(&class->lock);
+ write_unlock(&pool->migrate_lock);
return pages_freed;
}
@@ -1979,10 +2004,10 @@ unsigned long zs_compact(struct zs_pool *pool)
unsigned long pages_freed = 0;
/*
- * Pool compaction is performed under pool->lock so it is basically
+ * Pool compaction is performed under pool->migrate_lock so it is basically
* single-threaded. Having more than one thread in __zs_compact()
- * will increase pool->lock contention, which will impact other
- * zsmalloc operations that need pool->lock.
+ * will increase pool->migrate_lock contention, which will impact other
+ * zsmalloc operations that need pool->migrate_lock.
*/
if (atomic_xchg(&pool->compaction_in_progress, 1))
return 0;
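
The comment above is backed by a try-lock style guard: only one thread enters __zs_compact() at a time, and everyone else bails out immediately rather than piling up on the rwlock. In outline (the flag is assumed to be cleared at the end of zs_compact(), outside this hunk):

    if (atomic_xchg(&pool->compaction_in_progress, 1))
        return 0;                        /* another compactor is already running */
    /* ... compact every size class under migrate_lock / class->lock ... */
    atomic_set(&pool->compaction_in_progress, 0);   /* assumed clear-on-exit */
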
@@ -2104,7 +2129,7 @@ struct zs_pool *zs_create_pool(const char *name)
return NULL;
init_deferred_free(pool);
- spin_lock_init(&pool->lock);
+ rwlock_init(&pool->migrate_lock);
atomic_set(&pool->compaction_in_progress, 0);
pool->name = kstrdup(name, GFP_KERNEL);
@@ -2176,6 +2201,7 @@ struct zs_pool *zs_create_pool(const char *name)
class->index = i;
class->pages_per_zspage = pages_per_zspage;
class->objs_per_zspage = objs_per_zspage;
+ spin_lock_init(&class->lock);
pool->size_class[i] = class;
fullness = ZS_INUSE_RATIO_0;
@@ -2276,3 +2302,4 @@ module_exit(zs_exit);
MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Nitin Gupta <ngupta@vflare.org>");
+MODULE_DESCRIPTION("zsmalloc memory allocator");
diff --git a/mm/zswap.c b/mm/zswap.c
index a50e2986cd2f..adeaf9c97fde 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -83,6 +83,7 @@ static bool zswap_pool_reached_full;
static int zswap_setup(void);
/* Enable/disable zswap */
+static DEFINE_STATIC_KEY_MAYBE(CONFIG_ZSWAP_DEFAULT_ON, zswap_ever_enabled);
static bool zswap_enabled = IS_ENABLED(CONFIG_ZSWAP_DEFAULT_ON);
static int zswap_enabled_param_set(const char *,
const struct kernel_param *);
@@ -123,19 +124,21 @@ static unsigned int zswap_accept_thr_percent = 90; /* of max pool size */
module_param_named(accept_threshold_percent, zswap_accept_thr_percent,
uint, 0644);
-/* Number of zpools in zswap_pool (empirically determined for scalability) */
-#define ZSWAP_NR_ZPOOLS 32
-
/* Enable/disable memory pressure-based shrinker. */
static bool zswap_shrinker_enabled = IS_ENABLED(
CONFIG_ZSWAP_SHRINKER_DEFAULT_ON);
module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
-bool is_zswap_enabled(void)
+bool zswap_is_enabled(void)
{
return zswap_enabled;
}
+bool zswap_never_enabled(void)
+{
+ return !static_branch_maybe(CONFIG_ZSWAP_DEFAULT_ON, &zswap_ever_enabled);
+}
+
/*********************************
* data structures
**********************************/
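
zswap_never_enabled() is driven by a static key rather than the zswap_enabled parameter: DEFINE_STATIC_KEY_MAYBE() initializes it according to CONFIG_ZSWAP_DEFAULT_ON, and it is switched on once in zswap_setup() when the first pool comes up (the static_branch_enable() at the end of this diff) and is never cleared afterwards. Callers that only care about "zswap has never been active" can therefore use a patched-out branch instead of reading a global, as zswap_load() does further down:

    /* hot-path sketch: skip all zswap work if it has never been enabled */
    if (zswap_never_enabled())
        return false;           /* fall back to reading the page from swap */
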
@@ -156,7 +159,7 @@ struct crypto_acomp_ctx {
* needs to be verified that it's still valid in the tree.
*/
struct zswap_pool {
- struct zpool *zpools[ZSWAP_NR_ZPOOLS];
+ struct zpool *zpool;
struct crypto_acomp_ctx __percpu *acomp_ctx;
struct percpu_ref ref;
struct list_head list;
@@ -238,7 +241,7 @@ static inline struct xarray *swap_zswap_tree(swp_entry_t swp)
#define zswap_pool_debug(msg, p) \
pr_debug("%s pool %s/%s\n", msg, (p)->tfm_name, \
- zpool_get_type((p)->zpools[0]))
+ zpool_get_type((p)->zpool))
/*********************************
* pool functions
@@ -247,7 +250,6 @@ static void __zswap_pool_empty(struct percpu_ref *ref);
static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
{
- int i;
struct zswap_pool *pool;
char name[38]; /* 'zswap' + 32 char (max) num + \0 */
gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
@@ -268,18 +270,14 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
if (!pool)
return NULL;
- for (i = 0; i < ZSWAP_NR_ZPOOLS; i++) {
- /* unique name for each pool specifically required by zsmalloc */
- snprintf(name, 38, "zswap%x",
- atomic_inc_return(&zswap_pools_count));
-
- pool->zpools[i] = zpool_create_pool(type, name, gfp);
- if (!pool->zpools[i]) {
- pr_err("%s zpool not available\n", type);
- goto error;
- }
+ /* unique name for each pool specifically required by zsmalloc */
+ snprintf(name, 38, "zswap%x", atomic_inc_return(&zswap_pools_count));
+ pool->zpool = zpool_create_pool(type, name, gfp);
+ if (!pool->zpool) {
+ pr_err("%s zpool not available\n", type);
+ goto error;
}
- pr_debug("using %s zpool\n", zpool_get_type(pool->zpools[0]));
+ pr_debug("using %s zpool\n", zpool_get_type(pool->zpool));
strscpy(pool->tfm_name, compressor, sizeof(pool->tfm_name));
@@ -312,8 +310,8 @@ ref_fail:
error:
if (pool->acomp_ctx)
free_percpu(pool->acomp_ctx);
- while (i--)
- zpool_destroy_pool(pool->zpools[i]);
+ if (pool->zpool)
+ zpool_destroy_pool(pool->zpool);
kfree(pool);
return NULL;
}
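
With one zpool per zswap_pool, the per-entry zpool lookup collapses from a pointer-hash into the former 32-entry array to a plain field access; the zswap_find_zpool() helper is removed further down. Side by side (sketch drawn from this diff):

    /* before: entries spread across ZSWAP_NR_ZPOOLS to reduce zpool lock contention */
    zpool = entry->pool->zpools[hash_ptr(entry, ilog2(ZSWAP_NR_ZPOOLS))];

    /* after: a single zpool per pool */
    zpool = entry->pool->zpool;
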
@@ -362,15 +360,12 @@ static struct zswap_pool *__zswap_pool_create_fallback(void)
static void zswap_pool_destroy(struct zswap_pool *pool)
{
- int i;
-
zswap_pool_debug("destroying", pool);
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
- for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
- zpool_destroy_pool(pool->zpools[i]);
+ zpool_destroy_pool(pool->zpool);
kfree(pool);
}
@@ -465,8 +460,7 @@ static struct zswap_pool *zswap_pool_find_get(char *type, char *compressor)
list_for_each_entry_rcu(pool, &zswap_pools, list) {
if (strcmp(pool->tfm_name, compressor))
continue;
- /* all zpools share the same type */
- if (strcmp(zpool_get_type(pool->zpools[0]), type))
+ if (strcmp(zpool_get_type(pool->zpool), type))
continue;
/* if we can't get it, it's about to be destroyed */
if (!zswap_pool_get(pool))
@@ -493,12 +487,8 @@ unsigned long zswap_total_pages(void)
unsigned long total = 0;
rcu_read_lock();
- list_for_each_entry_rcu(pool, &zswap_pools, list) {
- int i;
-
- for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
- total += zpool_get_total_pages(pool->zpools[i]);
- }
+ list_for_each_entry_rcu(pool, &zswap_pools, list)
+ total += zpool_get_total_pages(pool->zpool);
rcu_read_unlock();
return total;
@@ -803,11 +793,6 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
kmem_cache_free(zswap_entry_cache, entry);
}
-static struct zpool *zswap_find_zpool(struct zswap_entry *entry)
-{
- return entry->pool->zpools[hash_ptr(entry, ilog2(ZSWAP_NR_ZPOOLS))];
-}
-
/*
* Carries out the common pattern of freeing an entry's zpool allocation,
* freeing the entry itself, and decrementing the number of stored pages.
@@ -818,7 +803,7 @@ static void zswap_entry_free(struct zswap_entry *entry)
atomic_dec(&zswap_same_filled_pages);
else {
zswap_lru_del(&zswap_list_lru, entry);
- zpool_free(zswap_find_zpool(entry), entry->handle);
+ zpool_free(entry->pool->zpool, entry->handle);
zswap_pool_put(entry->pool);
}
if (entry->objcg) {
@@ -917,7 +902,7 @@ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
dst = acomp_ctx->buffer;
sg_init_table(&input, 1);
- sg_set_page(&input, &folio->page, PAGE_SIZE, 0);
+ sg_set_folio(&input, folio, PAGE_SIZE, 0);
/*
* We need PAGE_SIZE * 2 here since there may be an over-compression case,
@@ -944,7 +929,7 @@ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
if (comp_ret)
goto unlock;
- zpool = zswap_find_zpool(entry);
+ zpool = entry->pool->zpool;
gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
if (zpool_malloc_support_movable(zpool))
gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
@@ -971,9 +956,9 @@ unlock:
return comp_ret == 0 && alloc_ret == 0;
}
-static void zswap_decompress(struct zswap_entry *entry, struct page *page)
+static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
{
- struct zpool *zpool = zswap_find_zpool(entry);
+ struct zpool *zpool = entry->pool->zpool;
struct scatterlist input, output;
struct crypto_acomp_ctx *acomp_ctx;
u8 *src;
@@ -1000,7 +985,7 @@ static void zswap_decompress(struct zswap_entry *entry, struct page *page)
sg_init_one(&input, src, entry->length);
sg_init_table(&output, 1);
- sg_set_page(&output, page, PAGE_SIZE, 0);
+ sg_set_folio(&output, folio, PAGE_SIZE, 0);
acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
@@ -1073,7 +1058,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
return -ENOMEM;
}
- zswap_decompress(entry, &folio->page);
+ zswap_decompress(entry, folio);
count_vm_event(ZSWPWB);
if (entry->objcg)
@@ -1375,35 +1360,35 @@ resched:
**********************************/
static bool zswap_is_folio_same_filled(struct folio *folio, unsigned long *value)
{
- unsigned long *page;
+ unsigned long *data;
unsigned long val;
- unsigned int pos, last_pos = PAGE_SIZE / sizeof(*page) - 1;
+ unsigned int pos, last_pos = PAGE_SIZE / sizeof(*data) - 1;
bool ret = false;
- page = kmap_local_folio(folio, 0);
- val = page[0];
+ data = kmap_local_folio(folio, 0);
+ val = data[0];
- if (val != page[last_pos])
+ if (val != data[last_pos])
goto out;
for (pos = 1; pos < last_pos; pos++) {
- if (val != page[pos])
+ if (val != data[pos])
goto out;
}
*value = val;
ret = true;
out:
- kunmap_local(page);
+ kunmap_local(data);
return ret;
}
-static void zswap_fill_page(void *ptr, unsigned long value)
+static void zswap_fill_folio(struct folio *folio, unsigned long value)
{
- unsigned long *page;
+ unsigned long *data = kmap_local_folio(folio, 0);
- page = (unsigned long *)ptr;
- memset_l(page, value, PAGE_SIZE / sizeof(unsigned long));
+ memset_l(data, value, PAGE_SIZE / sizeof(unsigned long));
+ kunmap_local(data);
}
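
The same-filled helpers above implement zswap's word-pattern optimization: a page whose every machine word is identical is never handed to the compressor, only the repeating value is kept. The check is ordered for cheap rejection (first word against last word before scanning the middle), and the store side consumes it roughly like this (a sketch, assuming the zswap_entry layout used elsewhere in this file):

    unsigned long value;

    if (zswap_is_folio_same_filled(folio, &value)) {
        entry->length = 0;          /* no zpool allocation for this entry */
        entry->value  = value;      /* a single word reproduces the page */
    }
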
/*********************************
@@ -1525,7 +1510,7 @@ store_failed:
if (!entry->length)
atomic_dec(&zswap_same_filled_pages);
else {
- zpool_free(zswap_find_zpool(entry), entry->handle);
+ zpool_free(entry->pool->zpool, entry->handle);
put_pool:
zswap_pool_put(entry->pool);
}
@@ -1551,14 +1536,26 @@ bool zswap_load(struct folio *folio)
{
swp_entry_t swp = folio->swap;
pgoff_t offset = swp_offset(swp);
- struct page *page = &folio->page;
bool swapcache = folio_test_swapcache(folio);
struct xarray *tree = swap_zswap_tree(swp);
struct zswap_entry *entry;
- u8 *dst;
VM_WARN_ON_ONCE(!folio_test_locked(folio));
+ if (zswap_never_enabled())
+ return false;
+
+ /*
+ * Large folios should not be swapped in while zswap is being used, as
+ * they are not properly handled. Zswap does not properly load large
+ * folios, and a large folio may only be partially in zswap.
+ *
+ * Return true without marking the folio uptodate so that an IO error is
+ * emitted (e.g. do_swap_page() will sigbus).
+ */
+ if (WARN_ON_ONCE(folio_test_large(folio)))
+ return true;
+
/*
* When reading into the swapcache, invalidate our entry. The
* swapcache can be the authoritative owner of the page and
@@ -1580,12 +1577,9 @@ bool zswap_load(struct folio *folio)
return false;
if (entry->length)
- zswap_decompress(entry, page);
- else {
- dst = kmap_local_page(page);
- zswap_fill_page(dst, entry->value);
- kunmap_local(dst);
- }
+ zswap_decompress(entry, folio);
+ else
+ zswap_fill_folio(folio, entry->value);
count_vm_event(ZSWPIN);
if (entry->objcg)
@@ -1596,6 +1590,7 @@ bool zswap_load(struct folio *folio)
folio_mark_dirty(folio);
}
+ folio_mark_uptodate(folio);
return true;
}
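
zswap_load() now owns the uptodate transition: on success it calls folio_mark_uptodate() itself, while the large-folio guard near the top returns true without doing so, which the swap-in code treats as an I/O error. From the caller's side the contract looks roughly like this (simplified sketch of the swap read path, not the exact kernel code):

    if (zswap_load(folio)) {
        /* served by zswap; only a successful load marked it uptodate */
        folio_unlock(folio);
        return;
    }
    /* not in zswap (or zswap never enabled): read from the swap device */
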
@@ -1737,9 +1732,10 @@ static int zswap_setup(void)
pool = __zswap_pool_create_fallback();
if (pool) {
pr_info("loaded using pool %s/%s\n", pool->tfm_name,
- zpool_get_type(pool->zpools[0]));
+ zpool_get_type(pool->zpool));
list_add(&pool->list, &zswap_pools);
zswap_has_pool = true;
+ static_branch_enable(&zswap_ever_enabled);
} else {
pr_err("pool creation failed\n");
zswap_enabled = false;