path: root/tools/bpf
Age  Commit message  Author  Files  Lines
2022-02-17  bpftool: Fix C++ additions to skeleton  (Andrii Nakryiko)  1 file, -7/+7
Mark C++-specific T::open() and other methods as static inline to avoid symbol redefinition when multiple files use the same skeleton header in an application. Fixes: bb8ffe61ea45 ("bpftool: Add C++-specific open/load/etc skeleton wrappers") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220216233540.216642-1-andrii@kernel.org
2022-02-17  bpftool: Fix pretty print dump for maps without BTF loaded  (Jiri Olsa)  1 file, -16/+15
Commit e5043894b21f ("bpftool: Use libbpf_get_error() to check error") broke dumping of maps without BTF loaded in pretty mode (-p option). Fix this by making sure get_map_kv_btf() does not fail when there is no BTF available for the map.

Fixes: e5043894b21f ("bpftool: Use libbpf_get_error() to check error")
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220216092102.125448-1-jolsa@kernel.org
2022-02-16  bpftool: Gen min_core_btf explanation and examples  (Rafael David Tinoco)  1 file, -0/+90
Add an explanation of the "min_core_btf" feature, and one example of how to use it, to the bpftool-gen man page.

Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220215225856.671072-7-mauricio@kinvolk.io
2022-02-16  bpftool: Implement btfgen_get_btf()  (Mauricio Vásquez)  1 file, -1/+99
The last part of the BTFGen algorithm is to create a new BTF object with all the types that were recorded in the previous steps. This function performs two steps:

1. Add the types to the new BTF object by using btf__add_type(). Some special logic around structs and unions is implemented to only add the members that are really used in field-based relocations. The mapping between type IDs in the old and new BTF objects is stored in a map.

2. Fix up all the type IDs in the new BTF object by using the IDs saved in the previous step.

Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220215225856.671072-6-mauricio@kinvolk.io
2022-02-16bpftool: Implement "gen min_core_btf" logicMauricio Vásquez2-6/+457
This commit implements the logic for the gen min_core_btf command. Specifically, it implements the following functions: - minimize_btf(): receives the path of a source and destination BTF files and a list of BPF objects. This function records the relocations for all objects and then generates the BTF file by calling btfgen_get_btf() (implemented in the following commit). - btfgen_record_obj(): loads the BTF and BTF.ext sections of the BPF objects and loops through all CO-RE relocations. It uses bpf_core_calc_relo_insn() from libbpf and passes the target spec to btfgen_record_reloc(), that calls one of the following functions depending on the relocation kind. - btfgen_record_field_relo(): uses the target specification to mark all the types that are involved in a field-based CO-RE relocation. In this case types resolved and marked recursively using btfgen_mark_type(). Only the struct and union members (and their types) involved in the relocation are marked to optimize the size of the generated BTF file. - btfgen_record_type_relo(): marks the types involved in a type-based CO-RE relocation. In this case no members for the struct and union types are marked as libbpf doesn't use them while performing this kind of relocation. Pointed types are marked as they are used by libbpf in this case. - btfgen_record_enumval_relo(): marks the whole enum type for enum-based relocations. Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io> Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com> Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co> Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220215225856.671072-5-mauricio@kinvolk.io
2022-02-16  bpftool: Add gen min_core_btf command  (Mauricio Vásquez)  2 files, -4/+44
This command is implemented under the "gen" command in bpftool, with the following syntax:

  $ bpftool gen min_core_btf INPUT OUTPUT OBJECT [OBJECT...]

INPUT is the file that contains all the BTF types for a kernel, and OUTPUT is the path of the minimized BTF file that will be created, containing only the types needed by the objects.

Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220215225856.671072-4-mauricio@kinvolk.io
2022-02-15  bpftool: Add C++-specific open/load/etc skeleton wrappers  (Andrii Nakryiko)  1 file, -2/+22
Add C++-specific static methods to the code-generated BPF skeleton for each skeleton operation: open, open_opts, open_and_load, load, attach, detach, destroy, and elf_bytes. This is to facilitate easier C++ templating on top of the pure C BPF skeleton.

In C, the open/load/destroy/etc "methods" are of the form <skeleton_name>__<method>() to avoid name collisions with similar "methods" of other skeletons within the same application. This works well, but is very inconvenient for C++ applications that would like to write generic (templated) wrappers around the BPF skeleton to fit in with a C++ code base and take advantage of destructors and other convenient C++ constructs.

This patch makes it easier to build such generic templated wrappers by additionally defining C++ static methods on the skeleton's struct with fixed names. This allows referring to, say, the open method as `T::open()` instead of having to somehow generate a `T__open()` function call. The next patch adds an example template to the test_cpp selftest to demonstrate how it's possible to have all the operations wrapped in a generic Skeleton<my_skeleton> type without explicitly passing function references.

An example of the generated declaration section without %1$s placeholders:

  #ifdef __cplusplus
      static struct test_attach_probe *open(const struct bpf_object_open_opts *opts = nullptr);
      static struct test_attach_probe *open_and_load();
      static int load(struct test_attach_probe *skel);
      static int attach(struct test_attach_probe *skel);
      static void detach(struct test_attach_probe *skel);
      static void destroy(struct test_attach_probe *skel);
      static const void *elf_bytes(size_t *sz);
  #endif /* __cplusplus */

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220212055733.539056-2-andrii@kernel.org
2022-02-15  bpftool: Fix the error when lookup in no-btf maps  (Yinjun Zhang)  1 file, -4/+2
When btf__get_from_id() was reworked in commit a19f93cfafdf, the error handling around bpf_btf_get_fd_by_id() changed. Before the rework, if bpf_btf_get_fd_by_id() failed the error was not propagated to callers of btf__get_from_id(); after the rework it is. This led to a change in behavior in print_key_value(), which now prints an error when trying to look up keys in maps with no BTF available.

Fix this by following the approach used when dumping maps, which allows looking up keys in no-BTF maps by deciding whether and where to get the BTF info according to the BTF value type.

Fixes: a19f93cfafdf ("libbpf: Add internal helper to load BTF data by FD")
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/1644249625-22479-1-git-send-email-yinjun.zhang@corigine.com
2022-02-11  bpftool: Update versioning scheme, align on libbpf's version number  (Quentin Monnet)  2 files, -4/+23
Since the notion of versions was introduced for bpftool, it has been following the version number of the kernel (using the version number corresponding to the tree in which bpftool's sources are located). The rationale was that bpftool's features are loosely tied to BPF features in the kernel, and that we could defer versioning to the kernel repository itself.

But this versioning scheme is confusing today, because a bpftool binary should be able to work with both older and newer kernels, even if some of its recent features won't be available on older systems. Furthermore, if bpftool is ported to other systems in the future, keeping a Linux-based version number is not a good option.

Looking at other options, we could either have a totally independent scheme for bpftool, or we could align it on libbpf's version number (with an offset on the major version number, to avoid going backwards). The latter comes with a few drawbacks:

- We may want bpftool releases in-between two libbpf versions. We can always append pre-release numbers to distinguish versions, although those won't look as "official" as something with a proper release number. But at the same time, having bpftool with version numbers that look "official" hasn't really been an issue so far.

- If no new feature lands in bpftool for some time, we may move from e.g. 6.7.0 to 6.8.0 when libbpf levels up and have two different versions which are in fact the same.

- Following libbpf's versioning scheme sounds better than kernel's, but ultimately it doesn't make too much sense either, because even though bpftool uses the lib a lot, its behaviour is not that much conditioned by the internal evolution of the library (or by new APIs that it may not use).

Having an independent versioning scheme solves the above, but at the cost of heavier maintenance. Developers will likely forget to increase the numbers when adding features or bug fixes, and we would take the risk of having to send occasional "catch-up" patches just to update the version number.

Based on these considerations, this patch aligns bpftool's version number on libbpf's. This is not a perfect solution, but 1) it's certainly an improvement over the current scheme, 2) the issues raised above are all minor at the moment, and 3) we can still move to an independent scheme in the future if we realise we need it.

Given that libbpf is currently at version 0.7.0, and bpftool, before this patch, was at 5.16, we use an offset of 6 for the major version, bumping bpftool to 6.7.0. Libbpf does not export its patch number; leave bpftool's patch number at 0 for now. It remains possible to manually override the version number by setting BPFTOOL_VERSION when calling make.

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220210104237.11649-3-quentin@isovalent.com
2022-02-11  bpftool: Add libbpf's version number to "bpftool version" output  (Quentin Monnet)  2 files, -6/+11
To help users check what version of libbpf is being used with bpftool, print the number along with bpftool's own version number.

Output:

  $ ./bpftool version
  ./bpftool v5.16.0
  using libbpf v0.7
  features: libbfd, libbpf_strict, skeletons

  $ ./bpftool version --json --pretty
  {
      "version": "5.16.0",
      "libbpf_version": "0.7",
      "features": {
          "libbfd": true,
          "libbpf_strict": true,
          "skeletons": true
      }
  }

Note that libbpf does not expose its patch number.

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220210104237.11649-2-quentin@isovalent.com
2022-02-11  bpftool: Generalize light skeleton generation.  (Alexei Starovoitov)  1 file, -19/+20
Generalize the light skeleton by hiding mmap details in skel_internal.h. In this form the generated lskel.h is usable both by user space and by the kernel.

Note that previously #include <bpf/bpf.h> was in the *.lskel.h file. To avoid #ifdef-s in the generated lskel.h, the include of bpf.h is moved to skel_internal.h; but skel_internal.h is also used by gen_loader.c, which is part of libbpf. Therefore skel_internal.h does #include "bpf.h" in the user-space case, so that gen_loader.c and lskel.h have the necessary definitions.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220209232001.27490-4-alexei.starovoitov@gmail.com
2022-02-10  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski)  5 files, -27/+31
Daniel Borkmann says:

====================
pull-request: bpf-next 2022-02-09

We've added 126 non-merge commits during the last 16 day(s) which contain a total of 201 files changed, 4049 insertions(+), 2215 deletions(-).

The main changes are:

1) Add custom BPF allocator for JITs that pack multiple programs into a huge page to reduce iTLB pressure, from Song Liu.
2) Add __user tagging support in vmlinux BTF and utilize it from BPF verifier when generating loads, from Yonghong Song.
3) Add per-socket fast path check guarding from cgroup/BPF overhead when used by only some sockets, from Pavel Begunkov.
4) Continued libbpf deprecation work of APIs/features and removal of their usage from samples, selftests, libbpf & bpftool, from Andrii Nakryiko and various others.
5) Improve BPF instruction set documentation by adding byte swap instructions and cleaning up load/store section, from Christoph Hellwig.
6) Switch BPF preload infra to light skeleton and remove libbpf dependency from it, from Alexei Starovoitov.
7) Fix architecture-agnostic macros in libbpf for accessing syscall arguments from BPF progs for non-x86 architectures, from Ilya Leoshkevich.
8) Rework port members in struct bpf_sk_lookup and struct bpf_sock to be of 16-bit field with anonymous zero padding, from Jakub Sitnicki.
9) Add new bpf_copy_from_user_task() helper to read memory from a different task than current. Add ability to create sleepable BPF iterator progs, from Kenny Yu.
10) Implement XSK batching for ice's zero-copy driver used by AF_XDP and utilize TX batching API from XSK buffer pool, from Maciej Fijalkowski.
11) Generate temporary netns names for BPF selftests to avoid naming collisions, from Hangbin Liu.
12) Implement bpf_core_types_are_compat() with limited recursion for in-kernel usage, from Matteo Croce.
13) Simplify pahole version detection and finally enable CONFIG_DEBUG_INFO_DWARF5 to be selected with CONFIG_DEBUG_INFO_BTF, from Nathan Chancellor.
14) Misc minor fixes to libbpf and selftests from various folks.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (126 commits)
  selftests/bpf: Cover 4-byte load from remote_port in bpf_sk_lookup
  bpf: Make remote_port field in struct bpf_sk_lookup 16-bit wide
  libbpf: Fix compilation warning due to mismatched printf format
  selftests/bpf: Test BPF_KPROBE_SYSCALL macro
  libbpf: Add BPF_KPROBE_SYSCALL macro
  libbpf: Fix accessing the first syscall argument on s390
  libbpf: Fix accessing the first syscall argument on arm64
  libbpf: Allow overriding PT_REGS_PARM1{_CORE}_SYSCALL
  selftests/bpf: Skip test_bpf_syscall_macro's syscall_arg1 on arm64 and s390
  libbpf: Fix accessing syscall arguments on riscv
  libbpf: Fix riscv register names
  libbpf: Fix accessing syscall arguments on powerpc
  selftests/bpf: Use PT_REGS_SYSCALL_REGS in bpf_syscall_macro
  libbpf: Add PT_REGS_SYSCALL_REGS macro
  selftests/bpf: Fix an endianness issue in bpf_syscall_macro test
  bpf: Fix bpf_prog_pack build HPAGE_PMD_SIZE
  bpf: Fix leftover header->pages in sparc and powerpc code.
  libbpf: Fix signedness bug in btf_dump_array_data()
  selftests/bpf: Do not export subtest as standalone test
  bpf, x86_64: Fail gracefully on bpf_jit_binary_pack_finalize failures
  ...
====================

Link: https://lore.kernel.org/r/20220209210050.8425-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-07  bpftool: Fix strict mode calculation  (Mauricio Vásquez)  1 file, -4/+1
"(__LIBBPF_STRICT_LAST - 1) & ~LIBBPF_STRICT_MAP_DEFINITIONS" is wrong as it is equal to 0 (LIBBPF_STRICT_NONE). Let's use "LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS" now that the previous commit makes it possible in libbpf. Fixes: 93b8952d223a ("libbpf: deprecate legacy BPF map definitions") Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220207145052.124421-3-mauricio@kinvolk.io
2022-02-04  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)  1 file, -1/+5
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-04  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (Jakub Kicinski)  1 file, -1/+5
Daniel Borkmann says:

====================
pull-request: bpf 2022-02-03

We've added 6 non-merge commits during the last 10 day(s) which contain a total of 7 files changed, 11 insertions(+), 236 deletions(-).

The main changes are:

1) Fix BPF ringbuf to allocate its area with VM_MAP instead of VM_ALLOC flag which otherwise trips over KASAN, from Hou Tao.
2) Fix unresolved symbol warning in resolve_btfids due to LSM callback rename, from Alexei Starovoitov.
3) Fix a possible race in inc_misses_counter() when IRQ would trigger during counter update, from He Fengqing.
4) Fix tooling infra for cross-building with clang upon probing whether gcc provides the standard libraries, from Jean-Philippe Brucker.
5) Fix silent mode build for resolve_btfids, from Nathan Chancellor.
6) Drop unneeded and outdated lirc.h header copy from tooling infra as BPF does not require it anymore, from Sean Young.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
  tools/resolve_btfids: Do not print any commands when building silently
  bpf: Use VM_MAP instead of VM_ALLOC for ringbuf
  tools: Ignore errors from `which' when searching a GCC toolchain
  tools headers UAPI: remove stale lirc.h
  bpf: Fix possible race in inc_misses_counter
  bpf: Fix renaming task_getsecid_subj->current_getsecid_subj.
====================

Link: https://lore.kernel.org/r/20220203155815.25689-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-03  bpftool: Fix uninit variable compilation warning  (Andrii Nakryiko)  1 file, -1/+1
Newer GCC complains about capturing the address of an uninitialized variable. While there is nothing wrong with the code (the variable is filled out by the kernel), initialize the variable anyway to make the compiler happy.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20220202225916.3313522-4-andrii@kernel.org
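As an illustration of the pattern (not the exact bpftool hunk; the variable names are made up), zero-initializing a struct that only the kernel writes silences this class of warning:

  #include <bpf/bpf.h>

  struct bpf_prog_info info = {};   /* initialized only to appease newer GCC */
  __u32 info_len = sizeof(info);

  /* The kernel fills 'info'; err is 0 on success. */
  int err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);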
2022-02-03  bpftool: Stop supporting BPF offload-enabled feature probing  (Andrii Nakryiko)  1 file, -12/+17
libbpf 1.0 is not going to support passing ifindex to BPF prog/map/helper feature probing APIs. Remove the support for BPF offload feature probing. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-3-andrii@kernel.org
2022-02-03  tools/resolve_btfids: Do not print any commands when building silently  (Nathan Chancellor)  1 file, -1/+5
When building with 'make -s', there is some output from resolve_btfids:

  $ make -sj"$(nproc)" oldconfig prepare
    MKDIR   .../tools/bpf/resolve_btfids/libbpf/
    MKDIR   .../tools/bpf/resolve_btfids//libsubcmd
    LINK    resolve_btfids

Silent mode means that no information should be emitted about what is currently being done. Use the $(silent) variable from Makefile.include to avoid defining the msg macro so that there is no information printed.

Fixes: fbbb68de80a4 ("bpf: Add resolve_btfids tool to resolve BTF IDs in ELF object")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220201212503.731732-1-nathan@kernel.org
2022-02-03  bpftool: Migrate from bpf_prog_test_run_xattr  (Delyan Kratunov)  1 file, -3/+2
bpf_prog_test_run is being deprecated in favor of the OPTS-based bpf_prog_test_run_opts. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-4-delyank@fb.com
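A minimal sketch of the OPTS-based replacement call (prog_fd, data_in and data_sz are hypothetical inputs):

  #include <stdio.h>
  #include <bpf/bpf.h>

  LIBBPF_OPTS(bpf_test_run_opts, test_attr,
      .data_in = data_in,
      .data_size_in = data_sz,
      .repeat = 1,
  );

  /* Runs the program once; return value and duration come back in the opts. */
  int err = bpf_prog_test_run_opts(prog_fd, &test_attr);
  if (!err)
      printf("retval %u, duration %u ns\n", test_attr.retval, test_attr.duration);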
2022-02-02  libbpf: Open code raw_tp_open and link_create commands.  (Alexei Starovoitov)  1 file, -3/+3
Open code raw_tracepoint_open and link_create used by light skeleton to be able to avoid full libbpf eventually. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-4-alexei.starovoitov@gmail.com
2022-02-02  libbpf: Add support for bpf iter in light skeleton.  (Alexei Starovoitov)  1 file, -1/+4
bpf iterator programs should use bpf_link_create to attach instead of bpf_raw_tracepoint_open like other tracing programs. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-2-alexei.starovoitov@gmail.com
2022-01-27  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)  3 files, -3/+3
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-26  bpftool: use preferred setters/getters instead of deprecated ones  (Andrii Nakryiko)  2 files, -5/+5
Use bpf_program__type() instead of discouraged bpf_program__get_type(). Also switch to bpf_map__set_max_entries() instead of bpf_map__resize(). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220124194254.2051434-5-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
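For illustration, the kind of substitution this involves (prog and map are hypothetical libbpf handles):

  /* Deprecated:
   *   type = bpf_program__get_type(prog);
   *   bpf_map__resize(map, 1024);
   * Preferred:
   */
  enum bpf_prog_type type = bpf_program__type(prog);
  int err = bpf_map__set_max_entries(map, 1024);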
2022-01-25  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski)  13 files, -31/+98
Daniel Borkmann says:

====================
pull-request: bpf-next 2022-01-24

We've added 80 non-merge commits during the last 14 day(s) which contain a total of 128 files changed, 4990 insertions(+), 895 deletions(-).

The main changes are:

1) Add XDP multi-buffer support and implement it for the mvneta driver, from Lorenzo Bianconi, Eelco Chaudron and Toke Høiland-Jørgensen.
2) Add unstable conntrack lookup helpers for BPF by using the BPF kfunc infra, from Kumar Kartikeya Dwivedi.
3) Extend BPF cgroup programs to export custom ret value to userspace via two helpers bpf_get_retval() and bpf_set_retval(), from YiFei Zhu.
4) Add support for AF_UNIX iterator batching, from Kuniyuki Iwashima.
5) Complete missing UAPI BPF helper description and change bpf_doc.py script to enforce consistent & complete helper documentation, from Usama Arif.
6) Deprecate libbpf's legacy BPF map definitions and streamline XDP APIs to follow tc-based APIs, from Andrii Nakryiko.
7) Support BPF_PROG_QUERY for BPF programs attached to sockmap, from Di Zhu.
8) Deprecate libbpf's bpf_map__def() API and replace users with proper getters and setters, from Christy Lee.
9) Extend libbpf's btf__add_btf() with an additional hashmap for strings to reduce overhead, from Kui-Feng Lee.
10) Fix bpftool and libbpf error handling related to libbpf's hashmap__new() utility function, from Mauricio Vásquez.
11) Add support to BTF program names in bpftool's program dump, from Raman Shukhau.
12) Fix resolve_btfids build to pick up host flags, from Connor O'Brien.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (80 commits)
  selftests, bpf: Do not yet switch to new libbpf XDP APIs
  selftests, xsk: Fix rx_full stats test
  bpf: Fix flexible_array.cocci warnings
  xdp: disable XDP_REDIRECT for xdp frags
  bpf: selftests: add CPUMAP/DEVMAP selftests for xdp frags
  bpf: selftests: introduce bpf_xdp_{load,store}_bytes selftest
  net: xdp: introduce bpf_xdp_pointer utility routine
  bpf: generalise tail call map compatibility check
  libbpf: Add SEC name for xdp frags programs
  bpf: selftests: update xdp_adjust_tail selftest to include xdp frags
  bpf: test_run: add xdp_shared_info pointer in bpf_test_finish signature
  bpf: introduce frags support to bpf_prog_test_run_xdp()
  bpf: move user_size out of bpf_test_init
  bpf: add frags support to xdp copy helpers
  bpf: add frags support to the bpf_xdp_adjust_tail() API
  bpf: introduce bpf_xdp_get_buff_len helper
  net: mvneta: enable jumbo frames if the loaded XDP program support frags
  bpf: introduce BPF_F_XDP_HAS_FRAGS flag in prog_flags loading the ebpf program
  net: mvneta: add frags support to XDP_TX
  xdp: add frags support to xdp_return_{buff/frame}
  ...
====================

Link: https://lore.kernel.org/r/20220124221235.18993-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-23  Merge tag 'powerpc-5.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds)  3 files, -3/+3
Pull powerpc fixes from Michael Ellerman:

 - A series of bpf fixes, including an oops fix and some codegen fixes.
 - Fix a regression in syscall_get_arch() for compat processes.
 - Fix boot failure on some 32-bit systems with KASAN enabled.
 - A couple of other build/minor fixes.

Thanks to Athira Rajeev, Christophe Leroy, Dmitry V. Levin, Jiri Olsa, Johan Almbladh, Maxime Bizon, Naveen N. Rao, and Nicholas Piggin.

* tag 'powerpc-5.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/64s: Mask SRR0 before checking against the masked NIP
  powerpc/perf: Only define power_pmu_wants_prompt_pmi() for CONFIG_PPC64
  powerpc/32s: Fix kasan_init_region() for KASAN
  powerpc/time: Fix build failure due to do_hard_irq_enable() on PPC32
  powerpc/audit: Fix syscall_get_arch()
  powerpc64/bpf: Limit 'ldbrx' to processors compliant with ISA v2.06
  tools/bpf: Rename 'struct event' to avoid naming conflict
  powerpc/bpf: Update ldimm64 instructions during extra pass
  powerpc32/bpf: Fix codegen for bpf-to-bpf calls
  bpf: Guard against accessing NULL pt_regs in bpf_get_task_stack()
2022-01-21  bpftool: use new API for attaching XDP program  (Andrii Nakryiko)  1 file, -1/+1
Switch to new bpf_xdp_attach() API to avoid deprecation warnings. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220120061422.2710637-3-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
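A sketch of the new calls (ifindex and prog_fd are hypothetical; flags and opts are left at their defaults):

  #include <bpf/libbpf.h>

  /* Attach prog_fd as the XDP program on the given interface... */
  int err = bpf_xdp_attach(ifindex, prog_fd, 0, NULL);

  /* ...and later detach whatever XDP program is attached there. */
  if (!err)
      err = bpf_xdp_detach(ifindex, 0, NULL);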
2022-01-21  libbpf: deprecate legacy BPF map definitions  (Andrii Nakryiko)  1 file, -1/+8
Enact deprecation of legacy BPF map definitions in SEC("maps") ([0]). For the definitions themselves, introduce a LIBBPF_STRICT_MAP_DEFINITIONS flag for libbpf strict mode. If it is set, error out on any struct bpf_map_def-based map definition. If not set, libbpf will print out a warning for each legacy BPF map to raise awareness that it goes away.

For any use of the BPF_ANNOTATE_KV_PAIR() macro, which provides a legacy way to associate BTF key/value type information with a legacy BPF map definition, warn through libbpf's pr_warn() error message (but don't fail BPF object open).

The BPF-side struct bpf_map_def is marked as deprecated. The user-space struct bpf_map_def has to be used internally in libbpf, so it is left untouched. It should be enough for bpf_map__def() to be marked deprecated to raise awareness that it goes away.

bpftool is an interesting case that utilizes libbpf to open a BPF ELF object to generate a skeleton. As such, even though bpftool itself uses full-on strict libbpf mode (LIBBPF_STRICT_ALL), it has to relax it a bit for BPF map definition handling to minimize unnecessary disruptions. So opt out of LIBBPF_STRICT_MAP_DEFINITIONS for bpftool. User code that later uses the generated skeleton will make its own decision whether to enforce LIBBPF_STRICT_MAP_DEFINITIONS or not.

There are a few tests in selftests/bpf that consciously use legacy BPF map definitions to test libbpf functionality. For those, temporarily opt out of LIBBPF_STRICT_MAP_DEFINITIONS mode for the duration of those tests.

[0] Closes: https://github.com/libbpf/libbpf/issues/272

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220120060529.1890907-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
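For context, a sketch of a legacy definition next to its BTF-defined replacement (map name and sizes are made up):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* Legacy style: now triggers a libbpf warning, and is rejected outright
   * under LIBBPF_STRICT_MAP_DEFINITIONS.
   */
  struct bpf_map_def SEC("maps") counters_legacy = {
      .type = BPF_MAP_TYPE_HASH,
      .key_size = sizeof(__u32),
      .value_size = sizeof(__u64),
      .max_entries = 1024,
  };

  /* BTF-defined replacement in SEC(".maps"): */
  struct {
      __uint(type, BPF_MAP_TYPE_HASH);
      __uint(max_entries, 1024);
      __type(key, __u32);
      __type(value, __u64);
  } counters SEC(".maps");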
2022-01-20  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds)  1 file, -2/+2
Merge more updates from Andrew Morton:
 "55 patches.

  Subsystems affected by this patch series: percpu, procfs, sysctl, misc, core-kernel, get_maintainer, lib, checkpatch, binfmt, nilfs2, hfs, fat, adfs, panic, delayacct, kconfig, kcov, and ubsan"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (55 commits)
  lib: remove redundant assignment to variable ret
  ubsan: remove CONFIG_UBSAN_OBJECT_SIZE
  kcov: fix generic Kconfig dependencies if ARCH_WANTS_NO_INSTR
  lib/Kconfig.debug: make TEST_KMOD depend on PAGE_SIZE_LESS_THAN_256KB
  btrfs: use generic Kconfig option for 256kB page size limit
  arch/Kconfig: split PAGE_SIZE_LESS_THAN_256KB from PAGE_SIZE_LESS_THAN_64KB
  configs: introduce debug.config for CI-like setup
  delayacct: track delays from memory compact
  Documentation/accounting/delay-accounting.rst: add thrashing page cache and direct compact
  delayacct: cleanup flags in struct task_delay_info and functions use it
  delayacct: fix incomplete disable operation when switch enable to disable
  delayacct: support swapin delay accounting for swapping without blkio
  panic: remove oops_id
  panic: use error_report_end tracepoint on warnings
  fs/adfs: remove unneeded variable make code cleaner
  FAT: use io_schedule_timeout() instead of congestion_wait()
  hfsplus: use struct_group_attr() for memcpy() region
  nilfs2: remove redundant pointer sbufs
  fs/binfmt_elf: use PT_LOAD p_align values for static PIE
  const_structs.checkpatch: add frequently used ops structs
  ...
2022-01-20  tools/bpf/bpftool/skeleton: replace bpf_probe_read_kernel with bpf_probe_read_kernel_str to get task comm  (Yafang Shao)  1 file, -2/+2
bpf_probe_read_kernel_str() adds a NUL terminator to the destination, so we no longer need to worry about whether the destination buffer is big enough.

Link: https://lkml.kernel.org/r/20211120112738.45980-7-laoar.shao@gmail.com
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Arnaldo Carvalho de Melo <arnaldo.melo@gmail.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Michal Miroslaw <mirq-linux@rere.qmqm.pl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Kees Cook <keescook@chromium.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
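A sketch of what the change looks like on the BPF side of the skeleton (buffer name is illustrative, not the exact hunk):

  /* Before:
   *   bpf_probe_read_kernel(comm, sizeof(comm), task->comm);
   * After: the _str variant always NUL-terminates the copy, so the
   * destination size no longer has to match the source exactly.
   */
  char comm[16] = {};
  bpf_probe_read_kernel_str(comm, sizeof(comm), task->comm);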
2022-01-19  bpftool: Adding support for BTF program names  (Raman Shukhau)  4 files, -12/+70
`bpftool prog list` and other bpftool subcommands that show BPF program names currently get them from bpf_prog_info.name. That field is limited to 16 (BPF_OBJ_NAME_LEN) chars, which leads to truncated names since many progs have much longer names.

The idea of this change is to improve all bpftool commands that output a prog name so that bpftool uses info from BTF to print program names if available. It tries bpf_prog_info.name first and falls back to BTF only if the name is suspected to be truncated (has a length of 15 chars).

Right now `bpftool p show id <id>` returns a capped prog name:

  <id>: kprobe  name example_cap_cap  tag 712e...
  ...

With this change it would return:

  <id>: kprobe  name example_cap_capable  tag 712e...
  ...

Note, other commands that print prog names (e.g. "bpftool cgroup tree") are also addressed in this change.

Signed-off-by: Raman Shukhau <ramasha@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220119100255.1068997-1-ramasha@fb.com
2022-01-15  tools/bpf: Rename 'struct event' to avoid naming conflict  (Naveen N. Rao)  3 files, -3/+3
On ppc64le, trying to build the bpf selftests throws the below warning:

  In file included from runqslower.bpf.c:5:
  ./runqslower.h:7:8: error: redefinition of 'event'
  struct event {
         ^
  /home/naveen/linux/tools/testing/selftests/bpf/tools/build/runqslower/vmlinux.h:156602:8: note: previous definition is here
  struct event {
         ^

This happens since 'struct event' is defined in drivers/net/ethernet/alteon/acenic.h. Rename the one in runqslower to a more appropriate 'runq_event' to avoid the naming conflict.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c13cb3767d26257ca4387b8296b632b433a58db6.1641468127.git.naveen.n.rao@linux.vnet.ibm.com
2022-01-15  tools/resolve_btfids: Build with host flags  (Connor O'Brien)  1 file, -2/+4
resolve_btfids is built using $(HOSTCC) and $(HOSTLD) but does not pick up the corresponding flags. As a result, host-specific settings (such as a sysroot specified via HOSTCFLAGS=--sysroot=..., or a linker specified via HOSTLDFLAGS=-fuse-ld=...) will not be respected. Fix this by setting CFLAGS to KBUILD_HOSTCFLAGS and LDFLAGS to KBUILD_HOSTLDFLAGS. Also pass the cflags through to libbpf via EXTRA_CFLAGS to ensure that the host libbpf is built with flags consistent with resolve_btfids. Signed-off-by: Connor O'Brien <connoro@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20220112002503.115968-1-connoro@google.com
2022-01-13  bpftool: Stop using bpf_map__def() API  (Christy Lee)  2 files, -9/+7
libbpf's bpf_map__def() API is being deprecated; replace bpftool's usage with the appropriate getters and setters.

Signed-off-by: Christy Lee <christylee@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220108004218.355761-3-christylee@fb.com
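For illustration, the direction of the replacement (map is a hypothetical struct bpf_map pointer):

  /* Instead of bpf_map__def(map)->type, ->key_size, ->max_entries, ... */
  enum bpf_map_type type = bpf_map__type(map);
  __u32 key_size = bpf_map__key_size(map);
  __u32 value_size = bpf_map__value_size(map);
  __u32 max_entries = bpf_map__max_entries(map);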
2022-01-13  bpftool: Only set obj->skeleton on complete success  (Wei Fu)  1 file, -1/+1
After `bpftool gen skeleton`, the ${bpf_app}.skel.h will provide the ${bpf_app_name}__open helper to load bpf. If there is some error like ENOMEM, ${bpf_app_name}__open will roll back (free) the allocated object, including `bpf_object_skeleton`.

Since ${bpf_app_name}__create_skeleton sets obj->skeleton first and does not roll it back on error, this causes a double free in ${bpf_app_name}__destroy when called from ${bpf_app_name}__open. Therefore, we should only set obj->skeleton just before returning 0, i.e. on complete success.

Fixes: 5dc7a8b21144 ("bpftool, selftests/bpf: Embed object file inside skeleton")
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220108084008.1053111-1-fuweid89@gmail.com
2022-01-13  bpftool: Fix error check when calling hashmap__new()  (Mauricio Vásquez)  5 files, -5/+7
hashmap__new() encodes errors with ERR_PTR(), hence it's not valid to check the returned pointer against NULL and IS_ERR() has to be used instead. libbpf_get_error() can't be used in this case as hashmap__new() is not part of the public libbpf API and it'll continue using ERR_PTR() after libbpf 1.0. Fixes: 8f184732b60b ("bpftool: Switch to libbpf's hashmap for pinned paths of BPF objects") Fixes: 2828d0d75b73 ("bpftool: Switch to libbpf's hashmap for programs/maps in BTF listing") Fixes: d6699f8e0f83 ("bpftool: Switch to libbpf's hashmap for PIDs/names references") Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20220107152620.192327-2-mauricio@kinvolk.io
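The corrected check, sketched with hypothetical hash/equality callbacks and error message:

  struct hashmap *map = hashmap__new(hash_fn, equal_fn, NULL);

  /* hashmap__new() returns an ERR_PTR()-encoded pointer, never NULL,
   * so IS_ERR() is the right test.
   */
  if (IS_ERR(map)) {
      p_err("failed to create hashmap: %ld", PTR_ERR(map));
      return PTR_ERR(map);
  }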
2022-01-06  libbpf 1.0: Deprecate bpf_map__is_offload_neutral()  (Christy Lee)  1 file, -1/+1
Deprecate bpf_map__is_offload_neutral(). It’s most probably broken already. PERF_EVENT_ARRAY isn’t the only map that’s not suitable for hardware offloading. Applications can directly check map type instead. [0] Closes: https://github.com/libbpf/libbpf/issues/306 Signed-off-by: Christy Lee <christylee@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220105000601.2090044-1-christylee@fb.com
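A sketch of the direct check applications can use instead (map is a hypothetical struct bpf_map pointer):

  /* Equivalent of the deprecated bpf_map__is_offload_neutral(map):
   * perf event arrays stay on the host even when the rest of the
   * object is offloaded.
   */
  bool offload_neutral = bpf_map__type(map) == BPF_MAP_TYPE_PERF_EVENT_ARRAY;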
2022-01-05  bpftool: Probe for instruction set extensions  (Paul Chaignon)  1 file, -0/+44
This patch introduces new probes to check whether the kernel supports instruction set extensions v2 and v3. The first introduced eBPF instructions BPF_J{LT,LE,SLT,SLE} in commit 92b31a9af73b ("bpf: add BPF_J{LT,LE,SLT,SLE} instructions"). The second introduces 32-bit variants of all jump instructions in commit 092ed0968bb6 ("bpf: verifier support JMP32"). These probes are useful for userspace BPF projects that want to use newer instruction set extensions on newer kernels, to reduce the programs' sizes or their complexity. LLVM already provides an mcpu=probe option to automatically probe the kernel and select the newest-supported instruction set extension. That is however not flexible enough for all use cases. For example, in Cilium, we only want to use the v3 instruction set extension on v5.10+, even though it is supported on all kernels v5.1+. Signed-off-by: Paul Chaignon <paul@isovalent.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/3bfedcd9898c1f41ac67ca61f144fec84c6c3a92.1641314075.git.paul@isovalent.com
2022-01-05  bpftool: Probe for bounded loop support  (Paul Chaignon)  1 file, -0/+22
This patch introduces a new probe to check whether the verifier supports bounded loops, as introduced in commit 2589726d12a1 ("bpf: introduce bounded loops"). This patch will allow BPF users such as Cilium to probe for loop support on startup and only unconditionally unroll loops on older kernels.

The results are displayed as part of the miscellaneous section, as shown below.

  $ bpftool feature probe | grep loops
  Bounded loop support is available

  $ bpftool feature probe macro | grep LOOPS
  #define HAVE_BOUNDED_LOOPS

  $ bpftool feature probe -j | jq .misc
  {
    "have_large_insn_limit": true,
    "have_bounded_loops": true
  }

Signed-off-by: Paul Chaignon <paul@isovalent.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/f7807c0b27d79f48e71de7b5a99c680ca4bd0151.1641314075.git.paul@isovalent.com
2022-01-05  bpftool: Refactor misc. feature probe  (Paul Chaignon)  1 file, -16/+29
There is currently a single miscellaneous feature probe, HAVE_LARGE_INSN_LIMIT, to check for the 1M instruction limit in the verifier. Subsequent patches will add additional miscellaneous probes, which follow the same pattern as the existing probe. This patch therefore refactors the probe to avoid code duplication in subsequent patches.

The BPF program type and the checked error numbers in the HAVE_LARGE_INSN_LIMIT probe are changed to better generalize to other probes. The feature probe retains its current behavior despite those changes.

Signed-off-by: Paul Chaignon <paul@isovalent.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/956c9329a932c75941194f91790d01f31dfbe01b.1641314075.git.paul@isovalent.com
2021-12-22  bpftool: Enable line buffering for stdout  (Paul Chaignon)  1 file, -0/+2
The output of bpftool prog tracelog is currently buffered, which is inconvenient when piping the output into other commands. A simple tracelog | grep will typically not display anything. This patch fixes it by enabling line buffering on stdout for the whole bpftool binary. Fixes: 30da46b5dc3a ("tools: bpftool: add a command to dump the trace pipe") Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Paul Chaignon <paul@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20211220214528.GA11706@Mem
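One way to get that behavior early in main(), shown as a sketch of the approach rather than the exact call the patch uses:

  #include <stdio.h>

  /* Flush stdout at every newline so `bpftool prog tracelog | grep ...`
   * sees lines as they are produced instead of in full-buffer chunks.
   */
  setvbuf(stdout, NULL, _IOLBF, 0);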
2021-12-18  bpftool: Reimplement large insn size limit feature probing  (Andrii Nakryiko)  1 file, -3/+23
Reimplement bpf_probe_large_insn_limit() in bpftool, as that libbpf API is scheduled for deprecation in v0.8. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Dave Marchevsky <davemarchevsky@fb.com> Link: https://lore.kernel.org/bpf/20211217171202.3352835-4-andrii@kernel.org
2021-12-16  tools/runqslower: Enable cross-building with clang  (Jean-Philippe Brucker)  1 file, -2/+2
Cross-building using clang requires passing the "-target" flag rather than using the CROSS_COMPILE prefix. Makefile.include transforms CROSS_COMPILE into CLANG_CROSS_FLAGS. Add them to CFLAGS, and erase CROSS_COMPILE for the bpftool build, since it needs to be executed on the host. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20211216163842.829836-6-jean-philippe@linaro.org
2021-12-16  bpftool: Enable cross-building with clang  (Jean-Philippe Brucker)  1 file, -6/+7
Cross-building using clang requires passing the "-target" flag rather than using the CROSS_COMPILE prefix. Makefile.include transforms CROSS_COMPILE into CLANG_CROSS_FLAGS, and adds that to CFLAGS. Remove the cross flags for the bootstrap bpftool, and erase the CROSS_COMPILE flag for the bootstrap libbpf. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20211216163842.829836-5-jean-philippe@linaro.org
2021-12-16  tools/resolve_btfids: Support cross-building the kernel with clang  (Jean-Philippe Brucker)  1 file, -0/+1
The CROSS_COMPILE variable may be present during resolve_btfids build if the kernel is being cross-built. Since resolve_btfids is always executed on the host, we set CC to HOSTCC in order to use the host toolchain when cross-building with GCC. But instead of a toolchain prefix, cross-build with clang uses a "-target" parameter, which Makefile.include deduces from the CROSS_COMPILE variable. In order to avoid cross-building libbpf, clear CROSS_COMPILE before building resolve_btfids. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20211216163842.829836-3-jean-philippe@linaro.org
2021-12-11  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski)  21 files, -70/+125
Andrii Nakryiko says:

====================
bpf-next 2021-12-10 v2

We've added 115 non-merge commits during the last 26 day(s) which contain a total of 182 files changed, 5747 insertions(+), 2564 deletions(-).

The main changes are:

1) Various samples fixes, from Alexander Lobakin.
2) BPF CO-RE support in kernel and light skeleton, from Alexei Starovoitov.
3) A batch of new unified APIs for libbpf, logging improvements, version querying, etc. Also a batch of old deprecations for old APIs and various bug fixes, in preparation for libbpf 1.0, from Andrii Nakryiko.
4) BPF documentation reorganization and improvements, from Christoph Hellwig and Dave Tucker.
5) Support for declarative initialization of BPF_MAP_TYPE_PROG_ARRAY in libbpf, from Hengqi Chen.
6) Verifier log fixes, from Hou Tao.
7) Runtime-bounded loops support with bpf_loop() helper, from Joanne Koong.
8) Extend branch record capturing to all platforms that support it, from Kajol Jain.
9) Light skeleton codegen improvements, from Kumar Kartikeya Dwivedi.
10) bpftool doc-generating script improvements, from Quentin Monnet.
11) Two libbpf v0.6 bug fixes, from Shuyi Cheng and Vincent Minet.
12) Deprecation warning fix for perf/bpf_counter, from Song Liu.
13) MAX_TAIL_CALL_CNT unification and MIPS build fix for libbpf, from Tiezhu Yang.
14) BTF_KIND_TYPE_TAG follow-up fixes, from Yonghong Song.
15) Selftests fixes and improvements, from Ilya Leoshkevich, Jean-Philippe Brucker, Jiri Olsa, Maxim Mikityanskiy, Tirthendu Sarkar, Yucong Sun, and others.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (115 commits)
  libbpf: Add "bool skipped" to struct bpf_map
  libbpf: Fix typo in btf__dedup@LIBBPF_0.0.2 definition
  bpftool: Switch bpf_object__load_xattr() to bpf_object__load()
  selftests/bpf: Remove the only use of deprecated bpf_object__load_xattr()
  selftests/bpf: Add test for libbpf's custom log_buf behavior
  selftests/bpf: Replace all uses of bpf_load_btf() with bpf_btf_load()
  libbpf: Deprecate bpf_object__load_xattr()
  libbpf: Add per-program log buffer setter and getter
  libbpf: Preserve kernel error code and remove kprobe prog type guessing
  libbpf: Improve logging around BPF program loading
  libbpf: Allow passing user log setting through bpf_object_open_opts
  libbpf: Allow passing preallocated log_buf when loading BTF into kernel
  libbpf: Add OPTS-based bpf_btf_load() API
  libbpf: Fix bpf_prog_load() log_buf logic for log_level 0
  samples/bpf: Remove unneeded variable
  bpf: Remove redundant assignment to pointer t
  selftests/bpf: Fix a compilation warning
  perf/bpf_counter: Use bpf_map_create instead of bpf_create_map
  samples: bpf: Fix 'unknown warning group' build warning on Clang
  samples: bpf: Fix xdp_sample_user.o linking with Clang
  ...
====================

Link: https://lore.kernel.org/r/20211210234746.2100561-1-andrii@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-11  bpftool: Switch bpf_object__load_xattr() to bpf_object__load()  (Andrii Nakryiko)  3 files, -29/+21
Switch all the uses of to-be-deprecated bpf_object__load_xattr() into a simple bpf_object__load() calls with optional log_level passed through open_opts.kernel_log_level, if -d option is specified. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211209193840.1248570-13-andrii@kernel.org
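A simplified sketch of the resulting pattern (the real -d wiring in bpftool is more involved):

  #include <bpf/libbpf.h>

  LIBBPF_OPTS(bpf_object_open_opts, open_opts);

  /* With -d, request verbose verifier output through the open option
   * instead of the old load_attr.log_level field.
   */
  if (verifier_logs)
      open_opts.kernel_log_level = 1 + 2 + 4;

  struct bpf_object *obj = bpf_object__open_file(path, &open_opts);
  if (!obj)
      return -1;

  int err = bpf_object__load(obj);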
2021-12-10  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)  1 file, -3/+5
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-05  bpftool: Add debug mode for gen_loader.  (Alexei Starovoitov)  1 file, -9/+11
Make -d flag functional for gen_loader style program loading.

For example:

  $ bpftool prog load -L -d test_d_path.o

  ... // will print:
  libbpf: loading ./test_d_path.o
  libbpf: elf: section(3) fentry/security_inode_getattr, size 280, link 0, flags 6, type=1
  ...
  libbpf: prog 'prog_close': found data map 0 (test_d_p.bss, sec 7, off 0) for insn 30
  libbpf: gen: load_btf: size 5376
  libbpf: gen: map_create: test_d_p.bss idx 0 type 2 value_type_id 118
  libbpf: map 'test_d_p.bss': created successfully, fd=0
  libbpf: gen: map_update_elem: idx 0
  libbpf: sec 'fentry/filp_close': found 1 CO-RE relocations
  libbpf: record_relo_core: prog 1 insn[15] struct file 0:1 final insn_idx 15
  libbpf: gen: prog_load: type 26 insns_cnt 35 progi_idx 0
  libbpf: gen: find_attach_tgt security_inode_getattr 12
  libbpf: gen: prog_load: type 26 insns_cnt 37 progi_idx 1
  libbpf: gen: find_attach_tgt filp_close 12
  libbpf: gen: finish 0
  ... // at this point libbpf finished generating loader program
  0: (bf) r6 = r1
  1: (bf) r1 = r10
  2: (07) r1 += -136
  3: (b7) r2 = 136
  4: (b7) r3 = 0
  5: (85) call bpf_probe_read_kernel#113
  6: (05) goto pc+104
  ... // this is the assembly dump of the loader program
  390: (63) *(u32 *)(r6 +44) = r0
  391: (18) r1 = map[idx:0]+5584
  393: (61) r0 = *(u32 *)(r1 +0)
  394: (63) *(u32 *)(r6 +24) = r0
  395: (b7) r0 = 0
  396: (95) exit
  err 0  // the loader program was loaded and executed successfully
  (null)
  func#0 @0
  ... // CO-RE in the kernel logs:
  CO-RE relocating STRUCT file: found target candidate [500]
  prog '': relo #0: kind <byte_off> (0), spec is [8] STRUCT file.f_path (0:1 @ offset 16)
  prog '': relo #0: matching candidate #0 [500] STRUCT file.f_path (0:1 @ offset 16)
  prog '': relo #0: patched insn #15 (ALU/ALU64) imm 16 -> 16
  vmlinux_cand_cache:[11]file(500), module_cand_cache:
  ... // verifier logs when it was checking test_d_path.o program:
  R1 type=ctx expected=fp
  0: R1=ctx(id=0,off=0,imm=0) R10=fp0
  ; int BPF_PROG(prog_close, struct file *file, void *id)
  0: (79) r6 = *(u64 *)(r1 +0)
  func 'filp_close' arg0 has btf_id 500 type STRUCT 'file'
  1: R1=ctx(id=0,off=0,imm=0) R6_w=ptr_file(id=0,off=0,imm=0) R10=fp0
  ; pid_t pid = bpf_get_current_pid_tgid() >> 32;
  1: (85) call bpf_get_current_pid_tgid#14
  ...
  // if there are multiple programs being loaded by the loader program
  // only the last program in the elf file will be printed, since
  // the same verifier log_buf is used for all PROG_LOAD commands.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211204194623.27779-1-alexei.starovoitov@gmail.com
2021-12-03  bpftool: Migrate off of deprecated bpf_create_map_xattr() API  (Andrii Nakryiko)  1 file, -10/+13
Switch to bpf_map_create() API instead. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211201232824.3166325-4-andrii@kernel.org
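A sketch of the OPTS-based creation that replaces bpf_create_map_xattr() (the BTF fd/ids and sizes here are illustrative):

  #include <bpf/bpf.h>

  LIBBPF_OPTS(bpf_map_create_opts, opts,
      .btf_fd = btf_fd,
      .btf_key_type_id = key_type_id,
      .btf_value_type_id = value_type_id,
  );

  int map_fd = bpf_map_create(BPF_MAP_TYPE_HASH, "test_map",
                              sizeof(__u32), sizeof(__u64), 1024, &opts);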
2021-12-03  tools/resolve_btfids: Skip unresolved symbol warning for empty BTF sets  (Kumar Kartikeya Dwivedi)  1 file, -3/+5
resolve_btfids prints a warning when it finds an unresolved symbol (id == 0) in id_patch. This can be the case for BTF sets that are empty (due to disabled config options), hence printing warnings for certain builds, most recently seen in [0].

The reason behind this is that id->cnt aliases id->id in the btf_id struct, leading to an empty set showing up as ID 0 when we get to id_patch, which triggers the warning. Since sets are an exception here, accommodate them by reusing a hole in btf_id for a bool is_set member, setting it to true for a BTF set when setting id->cnt, and use that to skip the extraneous warning.

[0]: https://lore.kernel.org/all/1b99ae14-abb4-d18f-cc6a-d7e523b25542@gmail.com

Before:

  ; ./tools/bpf/resolve_btfids/resolve_btfids -v -b vmlinux net/ipv4/tcp_cubic.ko
  adding symbol tcp_cubic_kfunc_ids
  WARN: resolve_btfids: unresolved symbol tcp_cubic_kfunc_ids
  patching addr 0: ID 0 [tcp_cubic_kfunc_ids]
  sorting  addr 4: cnt 0 [tcp_cubic_kfunc_ids]
  update ok for net/ipv4/tcp_cubic.ko

After:

  ; ./tools/bpf/resolve_btfids/resolve_btfids -v -b vmlinux net/ipv4/tcp_cubic.ko
  adding symbol tcp_cubic_kfunc_ids
  patching addr 0: ID 0 [tcp_cubic_kfunc_ids]
  sorting  addr 4: cnt 0 [tcp_cubic_kfunc_ids]
  update ok for net/ipv4/tcp_cubic.ko

Fixes: 0e32dfc80bae ("bpf: Enable TCP congestion control kfunc from modules")
Reported-by: Pavel Skripkin <paskripkin@gmail.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211122144742.477787-4-memxor@gmail.com