author     Hou Tao <houtao1@huawei.com>              2023-10-20 16:31:59 +0300
committer  Alexei Starovoitov <ast@kernel.org>       2023-10-21 00:15:13 +0300
commit     3f2189e4f77b7a3e979d143dc4ff586488c7e8a5 (patch)
tree       d2b472bb05a3c5e94c26e69d30c09dc91139bf00 /include
parent     baa8fdecd87bb8751237b45e3bcb5a179e5a12ca (diff)
download   linux-3f2189e4f77b7a3e979d143dc4ff586488c7e8a5.tar.xz
bpf: Use pcpu_alloc_size() in bpf_mem_free{_rcu}()
For bpf_global_percpu_ma, the pointer passed to bpf_mem_free_rcu() is
allocated by kmalloc() and its size is fixed (16 bytes on x86-64), so no
matter which cache allocated the dynamic per-cpu area, on x86-64 cache[2]
will always be used to free the per-cpu area.

Fix the imbalance by checking whether the bpf memory allocator is per-cpu
or not and using pcpu_alloc_size() instead of ksize() to find the correct
cache for the per-cpu free.

Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20231020133202.4043247-5-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
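As a hedged illustration of the fix described above (a minimal sketch, not the
literal kernel/bpf/memalloc.c change from this series, whose body lies outside
this include/ diff): the free path sizes the object differently depending on
the new flag. bpf_mem_cache_idx(), unit_free() and LLIST_NODE_SZ are existing
memalloc.c internals; the exact layout of the per-cpu stub object (assumed here
to hold the per-cpu address at its start) is an assumption of this sketch.

    /* Sketch only: pick the cache index from the real allocation size.
     * For a per-cpu allocator the object is a small kmalloc'ed stub whose
     * ksize() is constant, so size the underlying per-cpu area with
     * pcpu_alloc_size() instead.
     */
    static void __bpf_mem_free_sketch(struct bpf_mem_alloc *ma, void *ptr)
    {
    	struct bpf_mem_cache *c;
    	size_t size;
    	int idx;

    	if (!ptr)
    		return;

    	if (ma->percpu)
    		/* assumed: *ptr stores the address returned by the per-cpu allocator */
    		size = pcpu_alloc_size(*(void **)ptr);
    	else
    		size = ksize(ptr - LLIST_NODE_SZ);

    	idx = bpf_mem_cache_idx(size);
    	if (idx < 0)
    		return;

    	/* return the object to the cache that matches its true size */
    	c = this_cpu_ptr(ma->caches)->cache + idx;
    	unit_free(c, ptr);
    }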
Diffstat (limited to 'include')
-rw-r--r--  include/linux/bpf_mem_alloc.h | 1 +
1 file changed, 1 insertion, 0 deletions
diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h
index d644bbb298af..bb1223b21308 100644
--- a/include/linux/bpf_mem_alloc.h
+++ b/include/linux/bpf_mem_alloc.h
@@ -11,6 +11,7 @@ struct bpf_mem_caches;
 struct bpf_mem_alloc {
 	struct bpf_mem_caches __percpu *caches;
 	struct bpf_mem_cache __percpu *cache;
+	bool percpu;
 	struct work_struct work;
 };
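The new member only records what callers already pass to bpf_mem_alloc_init();
a minimal sketch of the init-side assignment under that assumption (the real
function also allocates and initializes the per-cpu caches, elided here):

    /* Sketch only: remember at init time whether this allocator hands out
     * per-cpu objects, so the free path can choose pcpu_alloc_size() vs ksize().
     */
    int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
    {
    	ma->percpu = percpu;
    	/* ... existing cache/caches setup elided ... */
    	return 0;
    }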