author     Jakub Kicinski <kuba@kernel.org>   2023-04-14 02:11:22 +0300
committer  Jakub Kicinski <kuba@kernel.org>   2023-04-14 02:43:38 +0300
commit     c2865b1122595e69e7df52d01f7f02338b8babca (patch)
tree       e1bfb26b2d31ce0bf590cabafc8e3a0ff1bece4c /tools/lib
parent     800e68c44ffe71f9715f745b38fd1af6910b3773 (diff)
parent     8c5c2a4898e3d6bad86e29d471e023c8a19ba799 (diff)
Daniel Borkmann says:
====================
pull-request: bpf-next 2023-04-13

We've added 260 non-merge commits during the last 36 day(s) which contain
a total of 356 files changed, 21786 insertions(+), 11275 deletions(-).

The main changes are:

1) Rework BPF verifier log behavior and implement it as a rotating log
   by default with the option to retain old-style fixed log behavior,
   from Andrii Nakryiko.

2) Add support for using {FOU,GUE} encap with an ipip device operating
   in collect_md mode and add a set of BPF kfuncs for controlling encap
   params, from Christian Ehrig.

3) Allow BPF programs to detect at load time whether a particular kfunc
   exists or not, and also add support for this in light skeleton,
   from Alexei Starovoitov.

4) Optimize hashmap lookups when key size is multiple of 4,
   from Anton Protopopov.

5) Enable RCU semantics for task BPF kptrs and allow referenced kptr
   tasks to be stored in BPF maps, from David Vernet.

6) Add support for stashing local BPF kptr into a map value via
   bpf_kptr_xchg(). This is useful e.g. for rbtree node creation
   for new cgroups, from Dave Marchevsky.

7) Fix BTF handling of is_int_ptr to skip modifiers to work around
   tracing issues where a program cannot be attached, from Feng Zhou.

8) Migrate a big portion of test_verifier unit tests over to
   test_progs -a verifier_* via inline asm to ease {read,debug}ability,
   from Eduard Zingerman.

9) Several updates to the instruction-set.rst documentation which is
   subject to future IETF standardization
   (https://lwn.net/Articles/926882/), from Dave Thaler.

10) Fix BPF verifier in the __reg_bound_offset's 64->32 tnum
    sub-register known bits information propagation, from Daniel Borkmann.

11) Add skb bitfield compaction work related to BPF with the overall
    goal to make more of the sk_buff bits optional, from Jakub Kicinski.

12) BPF selftest cleanups for build id extraction which stands on its
    own from the upcoming integration work of build id into struct file
    object, from Jiri Olsa.

13) Add fixes and optimizations for xsk descriptor validation and
    several selftest improvements for xsk sockets, from Kal Conley.

14) Add BPF links for struct_ops and enable switching implementations
    of BPF TCP cong-ctls under a given name by replacing backing
    struct_ops map, from Kui-Feng Lee.

15) Remove a misleading BPF verifier env->bypass_spec_v1 check on
    variable offset stack read as earlier Spectre checks cover this,
    from Luis Gerhorst.

16) Fix issues in copy_from_user_nofault() for BPF and other tracers
    to resemble copy_from_user_nmi() from safety PoV, from Florian
    Lehner and Alexei Starovoitov.

17) Add --json-summary option to test_progs in order for CI tooling to
    ease parsing of test results, from Manu Bretelle.

18) Batch of improvements and refactoring to prep for upcoming
    bpf_local_storage conversion to bpf_mem_cache_{alloc,free}
    allocator, from Martin KaFai Lau.

19) Improve bpftool's visual program dump which produces the control
    flow graph in a DOT format by adding C source inline annotations,
    from Quentin Monnet.

20) Fix attaching fentry/fexit/fmod_ret/lsm to modules by extracting
    the module name from BTF of the target and searching kallsyms of
    the correct module, from Viktor Malik.

21) Improve BPF verifier handling of '<const> <cond> <non_const>'
    to better detect whether in particular jmp32 branches are taken,
    from Yonghong Song.

22) Allow BPF TCP cong-ctls to write app_limited of struct tcp_sock.
    A built-in cc or one from a kernel module is already able to write
    to app_limited, from Yixin Shen.
Conflicts:

Documentation/bpf/bpf_devel_QA.rst
  b7abcd9c656b ("bpf, doc: Link to submitting-patches.rst for general patch submission info")
  0f10f647f455 ("bpf, docs: Use internal linking for link to netdev subsystem doc")
https://lore.kernel.org/all/20230307095812.236eb1be@canb.auug.org.au/

include/net/ip_tunnels.h
  bc9d003dc48c3 ("ip_tunnel: Preserve pointer const in ip_tunnel_info_opts")
  ac931d4cdec3d ("ipip,ip_tunnel,sit: Add FOU support for externally controlled ipip devices")
https://lore.kernel.org/all/20230413161235.4093777-1-broonie@kernel.org/

net/bpf/test_run.c
  e5995bc7e2ba ("bpf, test_run: fix crashes due to XDP frame overwriting/corruption")
  294635a8165a ("bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES")
https://lore.kernel.org/all/20230320102619.05b80a98@canb.auug.org.au/
====================

Link: https://lore.kernel.org/r/20230413191525.7295-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'tools/lib')
-rw-r--r--  tools/lib/bpf/bpf.c               |  25
-rw-r--r--  tools/lib/bpf/bpf.h               |  25
-rw-r--r--  tools/lib/bpf/bpf_gen_internal.h  |   4
-rw-r--r--  tools/lib/bpf/bpf_helpers.h       |   5
-rw-r--r--  tools/lib/bpf/gen_loader.c        |  48
-rw-r--r--  tools/lib/bpf/libbpf.c            | 245
-rw-r--r--  tools/lib/bpf/libbpf.h            |   3
-rw-r--r--  tools/lib/bpf/libbpf.map          |   1
-rw-r--r--  tools/lib/bpf/linker.c            |  14
-rw-r--r--  tools/lib/bpf/zip.c               |   6
10 files changed, 278 insertions(+), 98 deletions(-)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index e750b6f5fcc3..128ac723c4ea 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -230,9 +230,9 @@ alloc_zero_tailing_info(const void *orecord, __u32 cnt,
int bpf_prog_load(enum bpf_prog_type prog_type,
const char *prog_name, const char *license,
const struct bpf_insn *insns, size_t insn_cnt,
- const struct bpf_prog_load_opts *opts)
+ struct bpf_prog_load_opts *opts)
{
- const size_t attr_sz = offsetofend(union bpf_attr, fd_array);
+ const size_t attr_sz = offsetofend(union bpf_attr, log_true_size);
void *finfo = NULL, *linfo = NULL;
const char *func_info, *line_info;
__u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
@@ -290,10 +290,6 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
if (!!log_buf != !!log_size)
return libbpf_err(-EINVAL);
- if (log_level > (4 | 2 | 1))
- return libbpf_err(-EINVAL);
- if (log_level && !log_buf)
- return libbpf_err(-EINVAL);
func_info_rec_size = OPTS_GET(opts, func_info_rec_size, 0);
func_info = OPTS_GET(opts, func_info, NULL);
@@ -316,6 +312,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
}
fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
+ OPTS_SET(opts, log_true_size, attr.log_true_size);
if (fd >= 0)
return fd;
@@ -356,6 +353,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
}
fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
+ OPTS_SET(opts, log_true_size, attr.log_true_size);
if (fd >= 0)
goto done;
}
@@ -370,6 +368,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
attr.log_level = 1;
fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
+ OPTS_SET(opts, log_true_size, attr.log_true_size);
}
done:
/* free() doesn't affect errno, so we don't need to restore it */
@@ -794,11 +793,17 @@ int bpf_link_update(int link_fd, int new_prog_fd,
if (!OPTS_VALID(opts, bpf_link_update_opts))
return libbpf_err(-EINVAL);
+ if (OPTS_GET(opts, old_prog_fd, 0) && OPTS_GET(opts, old_map_fd, 0))
+ return libbpf_err(-EINVAL);
+
memset(&attr, 0, attr_sz);
attr.link_update.link_fd = link_fd;
attr.link_update.new_prog_fd = new_prog_fd;
attr.link_update.flags = OPTS_GET(opts, flags, 0);
- attr.link_update.old_prog_fd = OPTS_GET(opts, old_prog_fd, 0);
+ if (OPTS_GET(opts, old_prog_fd, 0))
+ attr.link_update.old_prog_fd = OPTS_GET(opts, old_prog_fd, 0);
+ else if (OPTS_GET(opts, old_map_fd, 0))
+ attr.link_update.old_map_fd = OPTS_GET(opts, old_map_fd, 0);
ret = sys_bpf(BPF_LINK_UPDATE, &attr, attr_sz);
return libbpf_err_errno(ret);
@@ -1078,9 +1083,9 @@ int bpf_raw_tracepoint_open(const char *name, int prog_fd)
return libbpf_err_errno(fd);
}
-int bpf_btf_load(const void *btf_data, size_t btf_size, const struct bpf_btf_load_opts *opts)
+int bpf_btf_load(const void *btf_data, size_t btf_size, struct bpf_btf_load_opts *opts)
{
- const size_t attr_sz = offsetofend(union bpf_attr, btf_log_level);
+ const size_t attr_sz = offsetofend(union bpf_attr, btf_log_true_size);
union bpf_attr attr;
char *log_buf;
size_t log_size;
@@ -1123,6 +1128,8 @@ int bpf_btf_load(const void *btf_data, size_t btf_size, const struct bpf_btf_loa
attr.btf_log_level = 1;
fd = sys_bpf_fd(BPF_BTF_LOAD, &attr, attr_sz);
}
+
+ OPTS_SET(opts, log_true_size, attr.btf_log_true_size);
return libbpf_err_errno(fd);
}
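
The log_true_size plumbing above suggests a caller-side retry pattern. A minimal sketch, assuming a caller with its own instruction buffer (the program type, names, and error handling here are illustrative, not part of the patch):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

/* Load once with a small log buffer; on truncation, opts.log_true_size
 * reports the exact size the kernel needed (including the terminating
 * zero), so the retry can allocate precisely that much.
 */
static int load_with_exact_log(const struct bpf_insn *insns, size_t insn_cnt)
{
	char small_log[64];
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.log_buf = small_log,
		.log_size = sizeof(small_log),
		.log_level = 1,
	);
	int fd;

	fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "log_demo", "GPL",
			   insns, insn_cnt, &opts);
	if (fd < 0 && opts.log_true_size > sizeof(small_log)) {
		char *big_log = malloc(opts.log_true_size);

		if (!big_log)
			return -ENOMEM;
		opts.log_buf = big_log;
		opts.log_size = opts.log_true_size;
		fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "log_demo", "GPL",
				   insns, insn_cnt, &opts);
		fprintf(stderr, "%s\n", big_log);	/* full verifier log */
		free(big_log);
	}
	return fd;
}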
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index f0f786373238..a2c091389b18 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -96,13 +96,20 @@ struct bpf_prog_load_opts {
__u32 log_level;
__u32 log_size;
char *log_buf;
+ /* output: actual total log contents size (including terminating zero).
+ * It can be larger than original log_size (if log was
+ * truncated), or smaller (if log buffer wasn't filled completely).
+ * If kernel doesn't support this feature, log_true_size is left unchanged.
+ */
+ __u32 log_true_size;
+ size_t :0;
};
-#define bpf_prog_load_opts__last_field log_buf
+#define bpf_prog_load_opts__last_field log_true_size
LIBBPF_API int bpf_prog_load(enum bpf_prog_type prog_type,
const char *prog_name, const char *license,
const struct bpf_insn *insns, size_t insn_cnt,
- const struct bpf_prog_load_opts *opts);
+ struct bpf_prog_load_opts *opts);
/* Flags to direct loading requirements */
#define MAPS_RELAX_COMPAT 0x01
@@ -117,11 +124,18 @@ struct bpf_btf_load_opts {
char *log_buf;
__u32 log_level;
__u32 log_size;
+ /* output: actual total log contents size (including terminating zero).
+ * It can be larger than original log_size (if log was
+ * truncated), or smaller (if log buffer wasn't filled completely).
+ * If kernel doesn't support this feature, log_true_size is left unchanged.
+ */
+ __u32 log_true_size;
+ size_t :0;
};
-#define bpf_btf_load_opts__last_field log_size
+#define bpf_btf_load_opts__last_field log_true_size
LIBBPF_API int bpf_btf_load(const void *btf_data, size_t btf_size,
- const struct bpf_btf_load_opts *opts);
+ struct bpf_btf_load_opts *opts);
LIBBPF_API int bpf_map_update_elem(int fd, const void *key, const void *value,
__u64 flags);
@@ -336,8 +350,9 @@ struct bpf_link_update_opts {
size_t sz; /* size of this struct for forward/backward compatibility */
__u32 flags; /* extra flags */
__u32 old_prog_fd; /* expected old program FD */
+ __u32 old_map_fd; /* expected old map FD */
};
-#define bpf_link_update_opts__last_field old_prog_fd
+#define bpf_link_update_opts__last_field old_map_fd
LIBBPF_API int bpf_link_update(int link_fd, int new_prog_fd,
const struct bpf_link_update_opts *opts);
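
The new old_map_fd field supports a compare-and-swap style check when updating a struct_ops link at the syscall level. A hedged sketch of such a call; the BPF_F_REPLACE gating is an assumption about the kernel side, and the function name is invented:

#include <linux/bpf.h>
#include <bpf/bpf.h>

/* The new struct_ops map fd goes in via the second argument (the same
 * slot bpf_link__update_map() uses), while old_map_fd lets the kernel
 * reject the update if somebody else already swapped the backing map.
 */
static int replace_struct_ops_checked(int link_fd, int new_map_fd, int old_map_fd)
{
	LIBBPF_OPTS(bpf_link_update_opts, opts,
		.flags = BPF_F_REPLACE,
		.old_map_fd = old_map_fd,
	);

	return bpf_link_update(link_fd, new_map_fd, &opts);
}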
diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
index 223308931d55..fdf44403ff36 100644
--- a/tools/lib/bpf/bpf_gen_internal.h
+++ b/tools/lib/bpf/bpf_gen_internal.h
@@ -11,6 +11,7 @@ struct ksym_relo_desc {
int insn_idx;
bool is_weak;
bool is_typeless;
+ bool is_ld64;
};
struct ksym_desc {
@@ -24,6 +25,7 @@ struct ksym_desc {
bool typeless;
};
int insn;
+ bool is_ld64;
};
struct bpf_gen {
@@ -65,7 +67,7 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *value, __u
void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx);
void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *name, enum bpf_attach_type type);
void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, bool is_weak,
- bool is_typeless, int kind, int insn_idx);
+ bool is_typeless, bool is_ld64, int kind, int insn_idx);
void bpf_gen__record_relo_core(struct bpf_gen *gen, const struct bpf_core_relo *core_relo);
void bpf_gen__populate_outer_map(struct bpf_gen *gen, int outer_map_idx, int key, int inner_map_idx);
diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index 7d12d3e620cc..e7e1a8acc299 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -177,6 +177,11 @@ enum libbpf_tristate {
#define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted")))
#define __kptr __attribute__((btf_type_tag("kptr")))
+#define bpf_ksym_exists(sym) ({ \
+ _Static_assert(!__builtin_constant_p(!!sym), #sym " should be marked as __weak"); \
+ !!sym; \
+})
+
#ifndef ___bpf_concat
#define ___bpf_concat(a, b) a ## b
#endif
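
bpf_ksym_exists() statically rejects non-weak symbols: for a strong ksym the address is a compile-time constant, so __builtin_constant_p(!!sym) would be true and the runtime check meaningless. A BPF-side usage sketch; the bpf_cpumask_create()/bpf_cpumask_release() kfuncs serve only as plausible example targets:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* __weak is what bpf_ksym_exists() insists on: the symbol's address is
 * then zero on kernels that don't provide the kfunc.
 */
struct bpf_cpumask *bpf_cpumask_create(void) __weak __ksym;
void bpf_cpumask_release(struct bpf_cpumask *cpumask) __weak __ksym;

SEC("tp_btf/task_newtask")
int BPF_PROG(probe_kfunc, struct task_struct *task, u64 clone_flags)
{
	struct bpf_cpumask *mask;

	/* false on kernels without the kfunc; the verifier then
	 * dead-code-eliminates the unreachable call below
	 */
	if (!bpf_ksym_exists(bpf_cpumask_create))
		return 0;

	mask = bpf_cpumask_create();
	if (mask)
		bpf_cpumask_release(mask);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";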
diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index 23f5c46708f8..83e8e3bfd8ff 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -560,7 +560,7 @@ static void emit_find_attach_target(struct bpf_gen *gen)
}
void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, bool is_weak,
- bool is_typeless, int kind, int insn_idx)
+ bool is_typeless, bool is_ld64, int kind, int insn_idx)
{
struct ksym_relo_desc *relo;
@@ -574,6 +574,7 @@ void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, bool is_weak,
relo->name = name;
relo->is_weak = is_weak;
relo->is_typeless = is_typeless;
+ relo->is_ld64 = is_ld64;
relo->kind = kind;
relo->insn_idx = insn_idx;
gen->relo_cnt++;
@@ -586,9 +587,11 @@ static struct ksym_desc *get_ksym_desc(struct bpf_gen *gen, struct ksym_relo_des
int i;
for (i = 0; i < gen->nr_ksyms; i++) {
- if (!strcmp(gen->ksyms[i].name, relo->name)) {
- gen->ksyms[i].ref++;
- return &gen->ksyms[i];
+ kdesc = &gen->ksyms[i];
+ if (kdesc->kind == relo->kind && kdesc->is_ld64 == relo->is_ld64 &&
+ !strcmp(kdesc->name, relo->name)) {
+ kdesc->ref++;
+ return kdesc;
}
}
kdesc = libbpf_reallocarray(gen->ksyms, gen->nr_ksyms + 1, sizeof(*kdesc));
@@ -603,6 +606,7 @@ static struct ksym_desc *get_ksym_desc(struct bpf_gen *gen, struct ksym_relo_des
kdesc->ref = 1;
kdesc->off = 0;
kdesc->insn = 0;
+ kdesc->is_ld64 = relo->is_ld64;
return kdesc;
}
@@ -804,11 +808,13 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo,
return;
/* try to copy from existing ldimm64 insn */
if (kdesc->ref > 1) {
- move_blob2blob(gen, insn + offsetof(struct bpf_insn, imm), 4,
- kdesc->insn + offsetof(struct bpf_insn, imm));
move_blob2blob(gen, insn + sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm), 4,
kdesc->insn + sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm));
- /* jump over src_reg adjustment if imm is not 0, reuse BPF_REG_0 from move_blob2blob */
+ move_blob2blob(gen, insn + offsetof(struct bpf_insn, imm), 4,
+ kdesc->insn + offsetof(struct bpf_insn, imm));
+ /* jump over src_reg adjustment if imm (btf_id) is not 0, reuse BPF_REG_0 from move_blob2blob
+ * If btf_id is zero, clear BPF_PSEUDO_BTF_ID flag in src_reg of ld_imm64 insn
+ */
emit(gen, BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3));
goto clear_src_reg;
}
@@ -831,7 +837,7 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo,
emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_8, BPF_REG_7,
sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm)));
/* skip src_reg adjustment */
- emit(gen, BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0, 3));
+ emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 3));
clear_src_reg:
/* clear bpf_object__relocate_data's src_reg assignment, otherwise we get a verifier failure */
reg_mask = src_reg_mask();
@@ -862,23 +868,17 @@ static void emit_relo(struct bpf_gen *gen, struct ksym_relo_desc *relo, int insn
{
int insn;
- pr_debug("gen: emit_relo (%d): %s at %d\n", relo->kind, relo->name, relo->insn_idx);
+ pr_debug("gen: emit_relo (%d): %s at %d %s\n",
+ relo->kind, relo->name, relo->insn_idx, relo->is_ld64 ? "ld64" : "call");
insn = insns + sizeof(struct bpf_insn) * relo->insn_idx;
emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_8, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, insn));
- switch (relo->kind) {
- case BTF_KIND_VAR:
+ if (relo->is_ld64) {
if (relo->is_typeless)
emit_relo_ksym_typeless(gen, relo, insn);
else
emit_relo_ksym_btf(gen, relo, insn);
- break;
- case BTF_KIND_FUNC:
+ } else {
emit_relo_kfunc_btf(gen, relo, insn);
- break;
- default:
- pr_warn("Unknown relocation kind '%d'\n", relo->kind);
- gen->error = -EDOM;
- return;
}
}
@@ -901,18 +901,20 @@ static void cleanup_core_relo(struct bpf_gen *gen)
static void cleanup_relos(struct bpf_gen *gen, int insns)
{
+ struct ksym_desc *kdesc;
int i, insn;
for (i = 0; i < gen->nr_ksyms; i++) {
+ kdesc = &gen->ksyms[i];
/* only close fds for typed ksyms and kfuncs */
- if (gen->ksyms[i].kind == BTF_KIND_VAR && !gen->ksyms[i].typeless) {
+ if (kdesc->is_ld64 && !kdesc->typeless) {
/* close fd recorded in insn[insn_idx + 1].imm */
- insn = gen->ksyms[i].insn;
+ insn = kdesc->insn;
insn += sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm);
emit_sys_close_blob(gen, insn);
- } else if (gen->ksyms[i].kind == BTF_KIND_FUNC) {
- emit_sys_close_blob(gen, blob_fd_array_off(gen, gen->ksyms[i].off));
- if (gen->ksyms[i].off < MAX_FD_ARRAY_SZ)
+ } else if (!kdesc->is_ld64) {
+ emit_sys_close_blob(gen, blob_fd_array_off(gen, kdesc->off));
+ if (kdesc->off < MAX_FD_ARRAY_SZ)
gen->nr_fd_array--;
}
}
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index a557718401e4..49cd304ae3bc 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -116,6 +116,7 @@ static const char * const attach_type_name[] = {
[BPF_SK_REUSEPORT_SELECT_OR_MIGRATE] = "sk_reuseport_select_or_migrate",
[BPF_PERF_EVENT] = "perf_event",
[BPF_TRACE_KPROBE_MULTI] = "trace_kprobe_multi",
+ [BPF_STRUCT_OPS] = "struct_ops",
};
static const char * const link_type_name[] = {
@@ -215,9 +216,10 @@ static libbpf_print_fn_t __libbpf_pr = __base_pr;
libbpf_print_fn_t libbpf_set_print(libbpf_print_fn_t fn)
{
- libbpf_print_fn_t old_print_fn = __libbpf_pr;
+ libbpf_print_fn_t old_print_fn;
+
+ old_print_fn = __atomic_exchange_n(&__libbpf_pr, fn, __ATOMIC_RELAXED);
- __libbpf_pr = fn;
return old_print_fn;
}
@@ -226,8 +228,10 @@ void libbpf_print(enum libbpf_print_level level, const char *format, ...)
{
va_list args;
int old_errno;
+ libbpf_print_fn_t print_fn;
- if (!__libbpf_pr)
+ print_fn = __atomic_load_n(&__libbpf_pr, __ATOMIC_RELAXED);
+ if (!print_fn)
return;
old_errno = errno;
@@ -315,8 +319,8 @@ enum reloc_type {
RELO_LD64,
RELO_CALL,
RELO_DATA,
- RELO_EXTERN_VAR,
- RELO_EXTERN_FUNC,
+ RELO_EXTERN_LD64,
+ RELO_EXTERN_CALL,
RELO_SUBPROG_ADDR,
RELO_CORE,
};
@@ -467,6 +471,7 @@ struct bpf_struct_ops {
#define KCONFIG_SEC ".kconfig"
#define KSYMS_SEC ".ksyms"
#define STRUCT_OPS_SEC ".struct_ops"
+#define STRUCT_OPS_LINK_SEC ".struct_ops.link"
enum libbpf_map_type {
LIBBPF_MAP_UNSPEC,
@@ -596,6 +601,7 @@ struct elf_state {
Elf64_Ehdr *ehdr;
Elf_Data *symbols;
Elf_Data *st_ops_data;
+ Elf_Data *st_ops_link_data;
size_t shstrndx; /* section index for section name strings */
size_t strtabidx;
struct elf_sec_desc *secs;
@@ -605,6 +611,7 @@ struct elf_state {
int text_shndx;
int symbols_shndx;
int st_ops_shndx;
+ int st_ops_link_shndx;
};
struct usdt_manager;
@@ -1118,7 +1125,8 @@ static int bpf_object__init_kern_struct_ops_maps(struct bpf_object *obj)
return 0;
}
-static int bpf_object__init_struct_ops_maps(struct bpf_object *obj)
+static int init_struct_ops_maps(struct bpf_object *obj, const char *sec_name,
+ int shndx, Elf_Data *data, __u32 map_flags)
{
const struct btf_type *type, *datasec;
const struct btf_var_secinfo *vsi;
@@ -1129,15 +1137,15 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj)
struct bpf_map *map;
__u32 i;
- if (obj->efile.st_ops_shndx == -1)
+ if (shndx == -1)
return 0;
btf = obj->btf;
- datasec_id = btf__find_by_name_kind(btf, STRUCT_OPS_SEC,
+ datasec_id = btf__find_by_name_kind(btf, sec_name,
BTF_KIND_DATASEC);
if (datasec_id < 0) {
pr_warn("struct_ops init: DATASEC %s not found\n",
- STRUCT_OPS_SEC);
+ sec_name);
return -EINVAL;
}
@@ -1150,7 +1158,7 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj)
type_id = btf__resolve_type(obj->btf, vsi->type);
if (type_id < 0) {
pr_warn("struct_ops init: Cannot resolve var type_id %u in DATASEC %s\n",
- vsi->type, STRUCT_OPS_SEC);
+ vsi->type, sec_name);
return -EINVAL;
}
@@ -1169,7 +1177,7 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj)
if (IS_ERR(map))
return PTR_ERR(map);
- map->sec_idx = obj->efile.st_ops_shndx;
+ map->sec_idx = shndx;
map->sec_offset = vsi->offset;
map->name = strdup(var_name);
if (!map->name)
@@ -1179,6 +1187,7 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj)
map->def.key_size = sizeof(int);
map->def.value_size = type->size;
map->def.max_entries = 1;
+ map->def.map_flags = map_flags;
map->st_ops = calloc(1, sizeof(*map->st_ops));
if (!map->st_ops)
@@ -1191,14 +1200,14 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj)
if (!st_ops->data || !st_ops->progs || !st_ops->kern_func_off)
return -ENOMEM;
- if (vsi->offset + type->size > obj->efile.st_ops_data->d_size) {
+ if (vsi->offset + type->size > data->d_size) {
pr_warn("struct_ops init: var %s is beyond the end of DATASEC %s\n",
- var_name, STRUCT_OPS_SEC);
+ var_name, sec_name);
return -EINVAL;
}
memcpy(st_ops->data,
- obj->efile.st_ops_data->d_buf + vsi->offset,
+ data->d_buf + vsi->offset,
type->size);
st_ops->tname = tname;
st_ops->type = type;
@@ -1211,6 +1220,19 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj)
return 0;
}
+static int bpf_object_init_struct_ops(struct bpf_object *obj)
+{
+ int err;
+
+ err = init_struct_ops_maps(obj, STRUCT_OPS_SEC, obj->efile.st_ops_shndx,
+ obj->efile.st_ops_data, 0);
+ err = err ?: init_struct_ops_maps(obj, STRUCT_OPS_LINK_SEC,
+ obj->efile.st_ops_link_shndx,
+ obj->efile.st_ops_link_data,
+ BPF_F_LINK);
+ return err;
+}
+
static struct bpf_object *bpf_object__new(const char *path,
const void *obj_buf,
size_t obj_buf_sz,
@@ -1247,6 +1269,7 @@ static struct bpf_object *bpf_object__new(const char *path,
obj->efile.obj_buf_sz = obj_buf_sz;
obj->efile.btf_maps_shndx = -1;
obj->efile.st_ops_shndx = -1;
+ obj->efile.st_ops_link_shndx = -1;
obj->kconfig_map_idx = -1;
obj->kern_version = get_kernel_version();
@@ -1264,6 +1287,7 @@ static void bpf_object__elf_finish(struct bpf_object *obj)
obj->efile.elf = NULL;
obj->efile.symbols = NULL;
obj->efile.st_ops_data = NULL;
+ obj->efile.st_ops_link_data = NULL;
zfree(&obj->efile.secs);
obj->efile.sec_cnt = 0;
@@ -2618,7 +2642,7 @@ static int bpf_object__init_maps(struct bpf_object *obj,
err = bpf_object__init_user_btf_maps(obj, strict, pin_root_path);
err = err ?: bpf_object__init_global_data_maps(obj);
err = err ?: bpf_object__init_kconfig_map(obj);
- err = err ?: bpf_object__init_struct_ops_maps(obj);
+ err = err ?: bpf_object_init_struct_ops(obj);
return err;
}
@@ -2752,12 +2776,13 @@ static bool libbpf_needs_btf(const struct bpf_object *obj)
{
return obj->efile.btf_maps_shndx >= 0 ||
obj->efile.st_ops_shndx >= 0 ||
+ obj->efile.st_ops_link_shndx >= 0 ||
obj->nr_extern > 0;
}
static bool kernel_needs_btf(const struct bpf_object *obj)
{
- return obj->efile.st_ops_shndx >= 0;
+ return obj->efile.st_ops_shndx >= 0 || obj->efile.st_ops_link_shndx >= 0;
}
static int bpf_object__init_btf(struct bpf_object *obj,
@@ -3450,6 +3475,9 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
} else if (strcmp(name, STRUCT_OPS_SEC) == 0) {
obj->efile.st_ops_data = data;
obj->efile.st_ops_shndx = idx;
+ } else if (strcmp(name, STRUCT_OPS_LINK_SEC) == 0) {
+ obj->efile.st_ops_link_data = data;
+ obj->efile.st_ops_link_shndx = idx;
} else {
pr_info("elf: skipping unrecognized data section(%d) %s\n",
idx, name);
@@ -3464,6 +3492,7 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
/* Only do relo for section with exec instructions */
if (!section_have_execinstr(obj, targ_sec_idx) &&
strcmp(name, ".rel" STRUCT_OPS_SEC) &&
+ strcmp(name, ".rel" STRUCT_OPS_LINK_SEC) &&
strcmp(name, ".rel" MAPS_ELF_SEC)) {
pr_info("elf: skipping relo section(%d) %s for section(%d) %s\n",
idx, name, targ_sec_idx,
@@ -4009,9 +4038,9 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
pr_debug("prog '%s': found extern #%d '%s' (sym %d) for insn #%u\n",
prog->name, i, ext->name, ext->sym_idx, insn_idx);
if (insn->code == (BPF_JMP | BPF_CALL))
- reloc_desc->type = RELO_EXTERN_FUNC;
+ reloc_desc->type = RELO_EXTERN_CALL;
else
- reloc_desc->type = RELO_EXTERN_VAR;
+ reloc_desc->type = RELO_EXTERN_LD64;
reloc_desc->insn_idx = insn_idx;
reloc_desc->sym_off = i; /* sym_off stores extern index */
return 0;
@@ -5855,7 +5884,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
relo->map_idx, map);
}
break;
- case RELO_EXTERN_VAR:
+ case RELO_EXTERN_LD64:
ext = &obj->externs[relo->sym_off];
if (ext->type == EXT_KCFG) {
if (obj->gen_loader) {
@@ -5877,7 +5906,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
}
}
break;
- case RELO_EXTERN_FUNC:
+ case RELO_EXTERN_CALL:
ext = &obj->externs[relo->sym_off];
insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
if (ext->is_set) {
@@ -6115,7 +6144,7 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
continue;
relo = find_prog_insn_relo(prog, insn_idx);
- if (relo && relo->type == RELO_EXTERN_FUNC)
+ if (relo && relo->type == RELO_EXTERN_CALL)
/* kfunc relocations will be handled later
* in bpf_object__relocate_data()
*/
@@ -6610,7 +6639,7 @@ static int bpf_object__collect_relos(struct bpf_object *obj)
return -LIBBPF_ERRNO__INTERNAL;
}
- if (idx == obj->efile.st_ops_shndx)
+ if (idx == obj->efile.st_ops_shndx || idx == obj->efile.st_ops_link_shndx)
err = bpf_object__collect_st_ops_relos(obj, shdr, data);
else if (idx == obj->efile.btf_maps_shndx)
err = bpf_object__collect_map_relos(obj, shdr, data);
@@ -7070,18 +7099,21 @@ static int bpf_program_record_relos(struct bpf_program *prog)
for (i = 0; i < prog->nr_reloc; i++) {
struct reloc_desc *relo = &prog->reloc_desc[i];
struct extern_desc *ext = &obj->externs[relo->sym_off];
+ int kind;
switch (relo->type) {
- case RELO_EXTERN_VAR:
+ case RELO_EXTERN_LD64:
if (ext->type != EXT_KSYM)
continue;
+ kind = btf_is_var(btf__type_by_id(obj->btf, ext->btf_id)) ?
+ BTF_KIND_VAR : BTF_KIND_FUNC;
bpf_gen__record_extern(obj->gen_loader, ext->name,
ext->is_weak, !ext->ksym.type_id,
- BTF_KIND_VAR, relo->insn_idx);
+ true, kind, relo->insn_idx);
break;
- case RELO_EXTERN_FUNC:
+ case RELO_EXTERN_CALL:
bpf_gen__record_extern(obj->gen_loader, ext->name,
- ext->is_weak, false, BTF_KIND_FUNC,
+ ext->is_weak, false, false, BTF_KIND_FUNC,
relo->insn_idx);
break;
case RELO_CORE: {
@@ -7533,6 +7565,12 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
ext->is_set = true;
ext->ksym.kernel_btf_id = kfunc_id;
ext->ksym.btf_fd_idx = mod_btf ? mod_btf->fd_array_idx : 0;
+ /* Also set kernel_btf_obj_fd to make sure that bpf_object__relocate_data()
+ * populates FD into ld_imm64 insn when it's used to point to kfunc.
+ * {kernel_btf_id, btf_fd_idx} -> fixup bpf_call.
+ * {kernel_btf_id, kernel_btf_obj_fd} -> fixup ld_imm64.
+ */
+ ext->ksym.kernel_btf_obj_fd = mod_btf ? mod_btf->fd : 0;
pr_debug("extern (func ksym) '%s': resolved to kernel [%d]\n",
ext->name, kfunc_id);
@@ -7677,6 +7715,37 @@ static int bpf_object__resolve_externs(struct bpf_object *obj,
return 0;
}
+static void bpf_map_prepare_vdata(const struct bpf_map *map)
+{
+ struct bpf_struct_ops *st_ops;
+ __u32 i;
+
+ st_ops = map->st_ops;
+ for (i = 0; i < btf_vlen(st_ops->type); i++) {
+ struct bpf_program *prog = st_ops->progs[i];
+ void *kern_data;
+ int prog_fd;
+
+ if (!prog)
+ continue;
+
+ prog_fd = bpf_program__fd(prog);
+ kern_data = st_ops->kern_vdata + st_ops->kern_func_off[i];
+ *(unsigned long *)kern_data = prog_fd;
+ }
+}
+
+static int bpf_object_prepare_struct_ops(struct bpf_object *obj)
+{
+ int i;
+
+ for (i = 0; i < obj->nr_maps; i++)
+ if (bpf_map__is_struct_ops(&obj->maps[i]))
+ bpf_map_prepare_vdata(&obj->maps[i]);
+
+ return 0;
+}
+
static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path)
{
int err, i;
@@ -7702,6 +7771,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
err = err ? : bpf_object__load_progs(obj, extra_log_level);
err = err ? : bpf_object_init_prog_arrays(obj);
+ err = err ? : bpf_object_prepare_struct_ops(obj);
if (obj->gen_loader) {
/* reset FDs */
@@ -8398,6 +8468,7 @@ int bpf_program__set_type(struct bpf_program *prog, enum bpf_prog_type type)
return libbpf_err(-EBUSY);
prog->type = type;
+ prog->sec_def = NULL;
return 0;
}
@@ -8811,6 +8882,7 @@ const char *libbpf_bpf_prog_type_str(enum bpf_prog_type t)
}
static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj,
+ int sec_idx,
size_t offset)
{
struct bpf_map *map;
@@ -8820,7 +8892,8 @@ static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj,
map = &obj->maps[i];
if (!bpf_map__is_struct_ops(map))
continue;
- if (map->sec_offset <= offset &&
+ if (map->sec_idx == sec_idx &&
+ map->sec_offset <= offset &&
offset - map->sec_offset < map->def.value_size)
return map;
}
@@ -8862,7 +8935,7 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
}
name = elf_sym_str(obj, sym->st_name) ?: "<?>";
- map = find_struct_ops_map_by_offset(obj, rel->r_offset);
+ map = find_struct_ops_map_by_offset(obj, shdr->sh_info, rel->r_offset);
if (!map) {
pr_warn("struct_ops reloc: cannot find map at rel->r_offset %zu\n",
(size_t)rel->r_offset);
@@ -8929,8 +9002,9 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
}
/* struct_ops BPF prog can be re-used between multiple
- * .struct_ops as long as it's the same struct_ops struct
- * definition and the same function pointer field
+ * .struct_ops & .struct_ops.link as long as it's the
+ * same struct_ops struct definition and the same
+ * function pointer field
*/
if (prog->attach_btf_id != st_ops->type_id ||
prog->expected_attach_type != member_idx) {
@@ -9912,16 +9986,20 @@ static int append_to_file(const char *file, const char *fmt, ...)
{
int fd, n, err = 0;
va_list ap;
+ char buf[1024];
+
+ va_start(ap, fmt);
+ n = vsnprintf(buf, sizeof(buf), fmt, ap);
+ va_end(ap);
+
+ if (n < 0 || n >= sizeof(buf))
+ return -EINVAL;
fd = open(file, O_WRONLY | O_APPEND | O_CLOEXEC, 0);
if (fd < 0)
return -errno;
- va_start(ap, fmt);
- n = vdprintf(fd, fmt, ap);
- va_end(ap);
-
- if (n < 0)
+ if (write(fd, buf, n) < 0)
err = -errno;
close(fd);
@@ -11566,22 +11644,30 @@ struct bpf_link *bpf_program__attach(const struct bpf_program *prog)
return link;
}
+struct bpf_link_struct_ops {
+ struct bpf_link link;
+ int map_fd;
+};
+
static int bpf_link__detach_struct_ops(struct bpf_link *link)
{
+ struct bpf_link_struct_ops *st_link;
__u32 zero = 0;
- if (bpf_map_delete_elem(link->fd, &zero))
- return -errno;
+ st_link = container_of(link, struct bpf_link_struct_ops, link);
- return 0;
+ if (st_link->map_fd < 0)
+ /* w/o a real link */
+ return bpf_map_delete_elem(link->fd, &zero);
+
+ return close(link->fd);
}
struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
{
- struct bpf_struct_ops *st_ops;
- struct bpf_link *link;
- __u32 i, zero = 0;
- int err;
+ struct bpf_link_struct_ops *link;
+ __u32 zero = 0;
+ int err, fd;
if (!bpf_map__is_struct_ops(map) || map->fd == -1)
return libbpf_err_ptr(-EINVAL);
@@ -11590,31 +11676,72 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
if (!link)
return libbpf_err_ptr(-EINVAL);
- st_ops = map->st_ops;
- for (i = 0; i < btf_vlen(st_ops->type); i++) {
- struct bpf_program *prog = st_ops->progs[i];
- void *kern_data;
- int prog_fd;
+ /* kern_vdata should be prepared during the loading phase. */
+ err = bpf_map_update_elem(map->fd, &zero, map->st_ops->kern_vdata, 0);
+ /* It can be EBUSY if the map has been used to create or
+ * update a link before. We don't allow updating the value of
+ * a struct_ops once it is set. That ensures that the value
+ * never changes. So, it is safe to skip EBUSY.
+ */
+ if (err && (!(map->def.map_flags & BPF_F_LINK) || err != -EBUSY)) {
+ free(link);
+ return libbpf_err_ptr(err);
+ }
- if (!prog)
- continue;
+ link->link.detach = bpf_link__detach_struct_ops;
- prog_fd = bpf_program__fd(prog);
- kern_data = st_ops->kern_vdata + st_ops->kern_func_off[i];
- *(unsigned long *)kern_data = prog_fd;
+ if (!(map->def.map_flags & BPF_F_LINK)) {
+ /* w/o a real link */
+ link->link.fd = map->fd;
+ link->map_fd = -1;
+ return &link->link;
}
- err = bpf_map_update_elem(map->fd, &zero, st_ops->kern_vdata, 0);
- if (err) {
- err = -errno;
+ fd = bpf_link_create(map->fd, 0, BPF_STRUCT_OPS, NULL);
+ if (fd < 0) {
free(link);
- return libbpf_err_ptr(err);
+ return libbpf_err_ptr(fd);
}
- link->detach = bpf_link__detach_struct_ops;
- link->fd = map->fd;
+ link->link.fd = fd;
+ link->map_fd = map->fd;
- return link;
+ return &link->link;
+}
+
+/*
+ * Swap the backing struct_ops map of a link with a new struct_ops map.
+ */
+int bpf_link__update_map(struct bpf_link *link, const struct bpf_map *map)
+{
+ struct bpf_link_struct_ops *st_ops_link;
+ __u32 zero = 0;
+ int err;
+
+ if (!bpf_map__is_struct_ops(map) || map->fd < 0)
+ return -EINVAL;
+
+ st_ops_link = container_of(link, struct bpf_link_struct_ops, link);
+ /* Ensure the type of a link is correct */
+ if (st_ops_link->map_fd < 0)
+ return -EINVAL;
+
+ err = bpf_map_update_elem(map->fd, &zero, map->st_ops->kern_vdata, 0);
+ /* It can be EBUSY if the map has been used to create or
+ * update a link before. We don't allow updating the value of
+ * a struct_ops once it is set. That ensures that the value
+ * never changes. So, it is safe to skip EBUSY.
+ */
+ if (err && err != -EBUSY)
+ return err;
+
+ err = bpf_link_update(link->fd, map->fd, NULL);
+ if (err < 0)
+ return err;
+
+ st_ops_link->map_fd = map->fd;
+
+ return 0;
}
typedef enum bpf_perf_event_ret (*bpf_perf_event_print_t)(struct perf_event_header *hdr,
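
Together with the .struct_ops.link section and BPF_F_LINK map flag handling above, bpf_link__update_map() allows swapping a struct_ops implementation (e.g. a TCP cong-ctl) without detaching. A minimal user-space sketch; the object layout and map names ca_v1/ca_v2 are hypothetical:

#include <errno.h>
#include <bpf/libbpf.h>

/* Attach a struct_ops map from the ".struct_ops.link" section (which
 * got BPF_F_LINK above, so attach creates a real BPF link), then
 * atomically swap in a second implementation behind the same link.
 */
static int switch_struct_ops(struct bpf_object *obj)
{
	struct bpf_map *old_ops, *new_ops;
	struct bpf_link *link;
	int err;

	old_ops = bpf_object__find_map_by_name(obj, "ca_v1");
	new_ops = bpf_object__find_map_by_name(obj, "ca_v2");
	if (!old_ops || !new_ops)
		return -ENOENT;

	link = bpf_map__attach_struct_ops(old_ops);
	if (!link)
		return -errno;

	err = bpf_link__update_map(link, new_ops);

	bpf_link__destroy(link);	/* sketch only: this also detaches */
	return err;
}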
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index db4992a036f8..0b7362397ea3 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -101,6 +101,8 @@ typedef int (*libbpf_print_fn_t)(enum libbpf_print_level level,
* be used for libbpf warnings and informational messages.
* @param fn The log print function. If NULL, libbpf won't print anything.
* @return Pointer to old print function.
+ *
+ * This function is thread-safe.
*/
LIBBPF_API libbpf_print_fn_t libbpf_set_print(libbpf_print_fn_t fn);
@@ -719,6 +721,7 @@ bpf_program__attach_freplace(const struct bpf_program *prog,
struct bpf_map;
LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
+LIBBPF_API int bpf_link__update_map(struct bpf_link *link, const struct bpf_map *map);
struct bpf_iter_attach_opts {
size_t sz; /* size of this struct for forward/backward compatibility */
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 50dde1f6521e..a5aa3a383d69 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -386,6 +386,7 @@ LIBBPF_1.1.0 {
LIBBPF_1.2.0 {
global:
bpf_btf_get_info_by_fd;
+ bpf_link__update_map;
bpf_link_get_info_by_fd;
bpf_map_get_info_by_fd;
bpf_prog_get_info_by_fd;
diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
index d7069780984a..5ced96d99f8c 100644
--- a/tools/lib/bpf/linker.c
+++ b/tools/lib/bpf/linker.c
@@ -1115,7 +1115,19 @@ static int extend_sec(struct bpf_linker *linker, struct dst_sec *dst, struct src
if (src->shdr->sh_type != SHT_NOBITS) {
tmp = realloc(dst->raw_data, dst_final_sz);
- if (!tmp)
+ /* If dst_align_sz == 0, realloc() behaves in a special way:
+ * 1. When dst->raw_data is NULL it returns:
+ * "either NULL or a pointer suitable to be passed to free()" [1].
+ * 2. When dst->raw_data is not-NULL it frees dst->raw_data and returns NULL,
+ * thus invalidating any "pointer suitable to be passed to free()" obtained
+ * at step (1).
+ *
+ * The dst_align_sz > 0 check avoids error exit after (2), otherwise
+ * dst->raw_data would be freed again in bpf_linker__free().
+ *
+ * [1] man 3 realloc
+ */
+ if (!tmp && dst_align_sz > 0)
return -ENOMEM;
dst->raw_data = tmp;
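
The hazard the comment describes is worth seeing in isolation. A standalone sketch of the same guard, independent of libbpf internals:

#include <stdlib.h>

/* realloc(p, 0) on glibc frees p and returns NULL, so NULL is only an
 * error for non-zero sizes; treating it as one would leave the caller
 * holding a stale pointer that gets freed a second time later.
 */
static int resize_buf(void **buf, size_t new_sz)
{
	void *tmp = realloc(*buf, new_sz);

	if (!tmp && new_sz > 0)
		return -1;	/* *buf is untouched and still owned by caller */

	*buf = tmp;		/* for new_sz == 0, realloc() already freed *buf */
	return 0;
}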
diff --git a/tools/lib/bpf/zip.c b/tools/lib/bpf/zip.c
index f561aa07438f..3f26d629b2b4 100644
--- a/tools/lib/bpf/zip.c
+++ b/tools/lib/bpf/zip.c
@@ -16,6 +16,10 @@
#include "libbpf_internal.h"
#include "zip.h"
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wpacked"
+#pragma GCC diagnostic ignored "-Wattributes"
+
/* Specification of ZIP file format can be found here:
* https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
* For a high level overview of the structure of a ZIP file see
@@ -119,6 +123,8 @@ struct local_file_header {
__u16 extra_field_length;
} __attribute__((packed));
+#pragma GCC diagnostic pop
+
struct zip_archive {
void *data;
__u32 size;