path: root/tools/bpf/bpftool/xlated_dumper.c

2021-05-19  bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command.
            Alexei Starovoitov | 1 file changed, +3/-0

Add -L flag to bpftool to use the libbpf gen_trace facility and the
syscall/loader program for skeleton generation and program loading.

The "bpftool gen skeleton -L" command will generate a "light skeleton" or
"loader skeleton" that is similar to the existing skeleton, but has one major
difference:

  $ bpftool gen skeleton lsm.o > lsm.skel.h
  $ bpftool gen skeleton -L lsm.o > lsm.lskel.h
  $ diff lsm.skel.h lsm.lskel.h
  @@ -5,34 +4,34 @@
   #define __LSM_SKEL_H__
   #include <stdlib.h>
  -#include <bpf/libbpf.h>
  +#include <bpf/bpf.h>

The light skeleton does not use the majority of the libbpf infrastructure.
It doesn't need libelf. It doesn't parse the .o file. It only needs a few
sys_bpf wrappers, all of which are in the bpf/bpf.h file. In the future,
libbpf/bpf.c can be inlined into bpf.h, so not even libbpf.a would be needed
to work with the light skeleton.

The "bpftool prog load -L file.o" command is introduced for debugging the
generation of the syscall/loader program. Just like the same command without
-L, it will try to load the programs from file.o into the kernel. It won't
even try to pin them.

The "bpftool prog load -L -d file.o" command will provide additional debug
messages on how the syscall/loader program was generated. The execution of
the syscall/loader program will also use bpf_trace_printk() for each step of
loading BTF, creating maps, and loading programs. The user can do
"cat /.../trace_pipe" for further debugging.

An example of fexit_sleep.lskel.h generated from progs/fexit_sleep.c:

  struct fexit_sleep {
          struct bpf_loader_ctx ctx;
          struct {
                  struct bpf_map_desc bss;
          } maps;
          struct {
                  struct bpf_prog_desc nanosleep_fentry;
                  struct bpf_prog_desc nanosleep_fexit;
          } progs;
          struct {
                  int nanosleep_fentry_fd;
                  int nanosleep_fexit_fd;
          } links;
          struct fexit_sleep__bss {
                  int pid;
                  int fentry_cnt;
                  int fexit_cnt;
          } *bss;
  };

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210514003623.28033-18-alexei.starovoitov@gmail.com

2021-02-27  bpftool: Print subprog address properly
            Yonghong Song | 1 file changed, +3/-0

With a later hashmap example, the bpftool xlated output may look like:

  int dump_task(struct bpf_iter__task * ctx):
  ; struct task_struct *task = ctx->task;
     0: (79) r2 = *(u64 *)(r1 +8)
  ; if (task == (void *)0 || called > 0)
  ...
    19: (18) r2 = subprog[+17]
    30: (18) r2 = subprog[+25]
  ...
    36: (95) exit
  __u64 check_hash_elem(struct bpf_map * map, __u32 * key, __u64 * val, struct callback_ctx * data):
  ; struct bpf_iter__task *ctx = data->ctx;
    37: (79) r5 = *(u64 *)(r4 +0)
  ...
    55: (95) exit
  __u64 check_percpu_elem(struct bpf_map * map, __u32 * key, __u64 * val, void * unused):
  ; check_percpu_elem(struct bpf_map *map, __u32 *key, __u64 *val, void *unused)
    56: (bf) r6 = r3
  ...
    83: (18) r2 = subprog[-47]

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204931.3885458-1-yhs@fb.com

2020-01-21  bpftool: Use consistent include paths for libbpf
            Toke Høiland-Jørgensen | 1 file changed, +1/-1

Fix bpftool to include libbpf header files with the bpf/ prefix, to be
consistent with external users of the library. Also ensure that all includes
of exported libbpf header files (those that are exported on 'make install' of
the library) use bracketed includes instead of quoted ones.

To make sure no new files are introduced that don't include the bpf/ prefix
in their includes, remove tools/lib/bpf from the include path entirely, and
use tools/lib instead.

Fixes: 6910d7d3867a ("selftests/bpf: Ensure bpf_helper_defs.h are taken from selftests dir")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157952560684.1683545.4765181397974997027.stgit@toke.dk
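
A generic illustration of the include convention this change enforces (the
header names are just examples, not lines taken from the patch itself):

  /* Before: relied on tools/lib/bpf being on the include path */
  #include "libbpf.h"

  /* After: exported libbpf headers are bracketed and carry the bpf/ prefix */
  #include <bpf/libbpf.h>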

2019-12-11  bpftool: Don't crash on missing jited insns or ksyms
            Toke Høiland-Jørgensen | 1 file changed, +1/-1

When the kptr_restrict sysctl is set, the kernel can fail to return
jited_ksyms or jited_prog_insns, but still have positive values in
nr_jited_ksyms and jited_prog_len. This causes bpftool to crash when trying
to dump the program, because it only checks the len fields, not the actual
pointers to the instructions and ksyms. Fix this by adding the missing
checks.

Fixes: 71bb428fe2c1 ("tools: bpf: add bpftool")
Fixes: f84192ee00b7 ("tools: bpftool: resolve calls without using imm field")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191210181412.151226-1-toke@redhat.com
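
A minimal sketch of the kind of guard the fix describes, checking the pointer
as well as the count (field names follow struct bpf_prog_info; the exact
checks in bpftool may be structured differently):

  #include <stdbool.h>
  #include <linux/bpf.h>  /* struct bpf_prog_info */

  /* Under kptr_restrict the kernel may leave these pointers at zero even
   * though the corresponding counts are positive, so test both before
   * dereferencing anything.
   */
  static bool have_jited_insns(const struct bpf_prog_info *info)
  {
          return info->jited_prog_len > 0 && info->jited_prog_insns != 0;
  }

  static bool have_jited_ksyms(const struct bpf_prog_info *info)
  {
          return info->nr_jited_ksyms > 0 && info->jited_ksyms != 0;
  }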

2019-05-28  bpf: style fix in while(!feof()) loop
            Chang-Hsien Tsai | 1 file changed, +1/-3

Use fgets() as the while loop condition.

Signed-off-by: Chang-Hsien Tsai <luke.tw@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
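
The idiom being fixed, shown as a generic example rather than the exact
bpftool code:

  #include <stdio.h>

  static void read_lines(FILE *fp)
  {
          char buf[256];

          /* Problematic pattern: feof() only turns true after a read has
           * already failed, so the body can run once more on stale data:
           *
           *     while (!feof(fp)) {
           *             fgets(buf, sizeof(buf), fp);
           *             ...
           *     }
           *
           * Preferred pattern: let fgets() itself terminate the loop.
           */
          while (fgets(buf, sizeof(buf), fp)) {
                  /* process one line held in buf */
          }
  }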

2019-04-10  bpf: implement lookup-free direct value access for maps
            Daniel Borkmann | 1 file changed, +3/-0

This generic extension to BPF maps allows for directly loading an address
residing inside a BPF map value as a single BPF ldimm64 instruction!

The idea is similar to what BPF_PSEUDO_MAP_FD does today, which is a special
src_reg flag for the ldimm64 instruction that indicates that inside the first
part of the double insn's imm field is a file descriptor, which the verifier
then replaces with the full 64-bit address of the map in both imm parts. For
the newly added BPF_PSEUDO_MAP_VALUE src_reg flag, the idea is the following:
the first part of the double insn's imm field is again a file descriptor
corresponding to the map, and the second part of the imm field is an offset
into the value. The verifier will then replace both imm parts with an address
that points into the BPF map value at the given value offset, for maps that
support this operation. Currently supported is the array map with a single
entry.

It is possible to support more than just a single map element by reusing both
16-bit off fields of the insns as a map index, so a full array map lookup
could be expressed that way. It hasn't been implemented here due to lack of a
concrete use case, but it could easily be done in the future in a compatible
way, since both off fields right now have to be 0 and would correctly denote
a map index of 0.

BPF_PSEUDO_MAP_VALUE is a distinct flag because otherwise, with
BPF_PSEUDO_MAP_FD, we could not distinguish at offset 0 between a load of the
map pointer and a load of the map's value at offset 0; changing
BPF_PSEUDO_MAP_FD's encoding into off by one to distinguish between a regular
map pointer and a map value pointer would add unnecessary complexity and
raise the barrier for debuggability, and is thus less suitable. Using the
second part of the imm field as an offset into the value does /not/ come with
limitations, since the maximum possible value size is in the u32 universe
anyway.

This optimization allows for efficiently retrieving an address to a map value
memory area without having to issue a helper call, which needs to prepare
registers according to the calling convention, etc., without needing the
extra NULL test, and without having to add the offset to the value base
pointer in an additional instruction. The verifier then treats the
destination register as PTR_TO_MAP_VALUE with a constant reg->off from the
user-passed offset in the second imm field, and guarantees that this is
within bounds of the map value. Any subsequent operations are treated as
typical map value handling, without anything extra needed from the
verification side.

The two map operations for direct value access have been added to the array
map for now. In the future, other types could be supported as well, depending
on the use case. The main use case for this commit is to allow BPF loader
support for global variables that reside in .data/.rodata/.bss sections, such
that we can directly load their address with minimal additional
infrastructure required. Loader support has been added in subsequent commits
for the libbpf library.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
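
A sketch of how such a load could be encoded before the verifier rewrites it.
The two-instruction layout follows the description above and the constants
come from the BPF UAPI; the helper name and the fd/offset values are
illustrative only:

  #include <linux/bpf.h>

  /* ldimm64 spans two instructions. With src_reg = BPF_PSEUDO_MAP_VALUE,
   * the first insn's imm carries the map fd and the second insn's imm
   * carries the offset into the map value; the verifier later replaces
   * both imm halves with the resulting 64-bit address.
   */
  static void emit_ld_map_value(struct bpf_insn *insn, int dst_reg,
                                int map_fd, int value_off)
  {
          insn[0] = (struct bpf_insn) {
                  .code    = BPF_LD | BPF_DW | BPF_IMM,
                  .dst_reg = dst_reg,
                  .src_reg = BPF_PSEUDO_MAP_VALUE,
                  .imm     = map_fd,
          };
          insn[1] = (struct bpf_insn) {
                  .imm     = value_off,
          };
  }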

2018-12-15  tools: bpftool: fix -Wmissing declaration warnings
            Quentin Monnet | 1 file changed, +4/-3

Help the compiler check arguments for several utility functions used to print
items to the console by adding the "printf" attribute when declaring those
functions. Also, declare as "static" two functions that are only used in
prog.c. All of them were discovered by compiling bpftool with
-Wmissing-format-attribute -Wmissing-declarations.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
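
What a "printf" format attribute looks like on a declaration, as a generic
example (the function name here is illustrative, not one of the bpftool
helpers):

  #include <stdarg.h>
  #include <stdio.h>

  /* Tell the compiler that argument 1 is a printf-style format string and
   * that the matching variadic arguments start at position 2, so it can
   * warn about mismatched format specifiers at call sites.
   */
  __attribute__((format(printf, 1, 2)))
  static void print_error(const char *fmt, ...);

  static void print_error(const char *fmt, ...)
  {
          va_list ap;

          va_start(ap, fmt);
          vfprintf(stderr, fmt, ap);
          va_end(ap);
  }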

2018-12-13  tools: bpftool: replace Netronome boilerplate with SPDX license headers
            Jakub Kicinski | 1 file changed, +1/-35

Replace the repeated license text with SPDX identifiers.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Roman Gushchin <guro@fb.com>
Acked-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Sean Young <sean@mess.org>
Acked-by: Jiri Benc <jbenc@redhat.com>
Acked-by: David Calavera <david.calavera@gmail.com>
Acked-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Joe Stringer <joe@wand.net.nz>
Acked-by: David Ahern <dsahern@gmail.com>
Acked-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Acked-by: Petar Penkov <ppenkov@stanford.edu>
Acked-by: Sandipan Das <sandipan@linux.ibm.com>
Acked-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Taeung Song <treeze.taeung@gmail.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
CC: okash.khawaja@gmail.com
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

2018-12-10  bpf: libbpf: bpftool: Print bpf_line_info during prog dump
            Martin KaFai Lau | 1 file changed, +28/-2

This patch adds printing of bpf_line_info in 'prog dump jited' and
'prog dump xlated':

  [root@arch-fb-vm1 bpf]# ~/devshare/fb-kernel/linux/tools/bpf/bpftool/bpftool prog dump jited pinned /sys/fs/bpf/test_btf_haskv
  [...]
  int test_long_fname_2(struct dummy_tracepoint_args * arg):
  bpf_prog_44a040bf25481309_test_long_fname_2:
  ; static int test_long_fname_2(struct dummy_tracepoint_args *arg)
     0: push %rbp
     1: mov %rsp,%rbp
     4: sub $0x30,%rsp
     b: sub $0x28,%rbp
     f: mov %rbx,0x0(%rbp)
    13: mov %r13,0x8(%rbp)
    17: mov %r14,0x10(%rbp)
    1b: mov %r15,0x18(%rbp)
    1f: xor %eax,%eax
    21: mov %rax,0x20(%rbp)
    25: xor %esi,%esi
  ; int key = 0;
    27: mov %esi,-0x4(%rbp)
  ; if (!arg->sock)
    2a: mov 0x8(%rdi),%rdi
  ; if (!arg->sock)
    2e: cmp $0x0,%rdi
    32: je 0x0000000000000070
    34: mov %rbp,%rsi
  ; counts = bpf_map_lookup_elem(&btf_map, &key);
    37: add $0xfffffffffffffffc,%rsi
    3b: movabs $0xffff8881139d7480,%rdi
    45: add $0x110,%rdi
    4c: mov 0x0(%rsi),%eax
    4f: cmp $0x4,%rax
    53: jae 0x000000000000005e
    55: shl $0x3,%rax
    59: add %rdi,%rax
    5c: jmp 0x0000000000000060
    5e: xor %eax,%eax
  ; if (!counts)
    60: cmp $0x0,%rax
    64: je 0x0000000000000070
  ; counts->v6++;
    66: mov 0x4(%rax),%edi
    69: add $0x1,%rdi
    6d: mov %edi,0x4(%rax)
    70: mov 0x0(%rbp),%rbx
    74: mov 0x8(%rbp),%r13
    78: mov 0x10(%rbp),%r14
    7c: mov 0x18(%rbp),%r15
    80: add $0x28,%rbp
    84: leaveq
    85: retq
  [...]

With linum:

  [root@arch-fb-vm1 bpf]# ~/devshare/fb-kernel/linux/tools/bpf/bpftool/bpftool prog dump jited pinned /sys/fs/bpf/test_btf_haskv linum
  int _dummy_tracepoint(struct dummy_tracepoint_args * arg):
  bpf_prog_b07ccb89267cf242__dummy_tracepoint:
  ; return test_long_fname_1(arg); [file:/data/users/kafai/fb-kernel/linux/tools/testing/selftests/bpf/test_btf_haskv.c line_num:54 line_col:9]
     0: push %rbp
     1: mov %rsp,%rbp
     4: sub $0x28,%rsp
     b: sub $0x28,%rbp
     f: mov %rbx,0x0(%rbp)
    13: mov %r13,0x8(%rbp)
    17: mov %r14,0x10(%rbp)
    1b: mov %r15,0x18(%rbp)
    1f: xor %eax,%eax
    21: mov %rax,0x20(%rbp)
    25: callq 0x000000000000851e
  ; return test_long_fname_1(arg); [file:/data/users/kafai/fb-kernel/linux/tools/testing/selftests/bpf/test_btf_haskv.c line_num:54 line_col:2]
    2a: xor %eax,%eax
    2c: mov 0x0(%rbp),%rbx
    30: mov 0x8(%rbp),%r13
    34: mov 0x10(%rbp),%r14
    38: mov 0x18(%rbp),%r15
    3c: add $0x28,%rbp
    40: leaveq
    41: retq
  [...]

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2018-12-06  bpf: Expect !info.func_info and insn_off name changes in test_btf/libbpf/bpftool
            Martin KaFai Lau | 1 file changed, +2/-2

Similar to info.jited_*, info.func_info could be 0 if
bpf_dump_raw_ok() == false. This patch makes changes to test_btf and bpftool
to expect that info.func_info could be 0. This patch also makes the needed
changes for s/insn_offset/insn_off/.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2018-11-20  tools/bpf: bpftool: add support for func types
            Yonghong Song | 1 file changed, +33/-0

This patch added support to print the function signature if BTF func_info is
available. Note that the ksym now uses the function name instead of
prog_name, as prog_name has a limit of 16 bytes including the ending '\0'.

The following is sample output for the selftest test_btf with file
test_btf_haskv.o, for translated insns and jited insns respectively.

  $ bpftool prog dump xlated id 1
  int _dummy_tracepoint(struct dummy_tracepoint_args * arg):
     0: (85) call pc+2#bpf_prog_2dcecc18072623fc_test_long_fname_1
     1: (b7) r0 = 0
     2: (95) exit

  int test_long_fname_1(struct dummy_tracepoint_args * arg):
     3: (85) call pc+1#bpf_prog_89d64e4abf0f0126_test_long_fname_2
     4: (95) exit

  int test_long_fname_2(struct dummy_tracepoint_args * arg):
     5: (b7) r2 = 0
     6: (63) *(u32 *)(r10 -4) = r2
     7: (79) r1 = *(u64 *)(r1 +8)
  ...
    22: (07) r1 += 1
    23: (63) *(u32 *)(r0 +4) = r1
    24: (95) exit

  $ bpftool prog dump jited id 1
  int _dummy_tracepoint(struct dummy_tracepoint_args * arg):
  bpf_prog_b07ccb89267cf242__dummy_tracepoint:
     0: push %rbp
     1: mov %rsp,%rbp
  ......
    3c: add $0x28,%rbp
    40: leaveq
    41: retq

  int test_long_fname_1(struct dummy_tracepoint_args * arg):
  bpf_prog_2dcecc18072623fc_test_long_fname_1:
     0: push %rbp
     1: mov %rsp,%rbp
  ......
    3a: add $0x28,%rbp
    3e: leaveq
    3f: retq

  int test_long_fname_2(struct dummy_tracepoint_args * arg):
  bpf_prog_89d64e4abf0f0126_test_long_fname_2:
     0: push %rbp
     1: mov %rsp,%rbp
  ......
    80: add $0x28,%rbp
    84: leaveq
    85: retq

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

2018-07-11  tools: bpf: make use of reallocarray
            Jakub Kicinski | 1 file changed, +3/-3

reallocarray() is a safer variant of realloc which checks for multiplication
overflow in case of array allocation. Since it is not available in glibc
< 2.26, import the kernel's overflow.h and add a static inline implementation
when needed. Use feature detection to probe for the existence of
reallocarray.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
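
A generic illustration of why reallocarray() is preferred over open-coding
the multiplication (the growth helper below is a sketch, not the bpftool
code):

  #define _GNU_SOURCE
  #include <stdlib.h>

  /* realloc(p, n * size) can silently wrap if n * size overflows, handing
   * back a buffer that is too small; reallocarray() checks the
   * multiplication and fails cleanly instead.
   */
  static int grow_array(unsigned long **arr, size_t *cap)
  {
          size_t new_cap = *cap ? *cap * 2 : 16;
          unsigned long *tmp;

          tmp = reallocarray(*arr, new_cap, sizeof(*tmp));
          if (!tmp)
                  return -1;      /* old allocation is still valid */

          *arr = tmp;
          *cap = new_cap;
          return 0;
  }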

2018-05-24  tools: bpftool: add delimiters to multi-function JITed dumps
            Sandipan Das | 1 file changed, +2/-2

This splits up the contiguous JITed dump obtained via the bpf system call
into more relatable chunks for each function in the program. If the kernel
symbols corresponding to these are known, they are printed in the header for
each JIT image dump; otherwise, the masked start address is printed.

Before applying this patch:

  # bpftool prog dump jited id 1
     0: push %rbp
     1: mov %rsp,%rbp
  ...
    70: leaveq
    71: retq
    72: push %rbp
    73: mov %rsp,%rbp
  ...
    dd: leaveq
    de: retq

  # bpftool -p prog dump jited id 1
  [{ "pc": "0x0",  "operation": "push", "operands": ["%rbp"] },
   { ... },
   { "pc": "0x71", "operation": "retq", "operands": [null] },
   { "pc": "0x72", "operation": "push", "operands": ["%rbp"] },
   { ... },
   { "pc": "0xde", "operation": "retq", "operands": [null] }]

After applying this patch:

  # echo 0 > /proc/sys/net/core/bpf_jit_kallsyms
  # bpftool prog dump jited id 1
  0xffffffffc02c7000:
     0: push %rbp
     1: mov %rsp,%rbp
  ...
    70: leaveq
    71: retq

  0xffffffffc02cf000:
     0: push %rbp
     1: mov %rsp,%rbp
  ...
    6b: leaveq
    6c: retq

  # bpftool -p prog dump jited id 1
  [{ "name": "0xffffffffc02c7000",
     "insns": [{ "pc": "0x0",  "operation": "push", "operands": ["%rbp"] },
               { ... },
               { "pc": "0x71", "operation": "retq", "operands": [null] }]
   },
   { "name": "0xffffffffc02cf000",
     "insns": [{ "pc": "0x0",  "operation": "push", "operands": ["%rbp"] },
               { ... },
               { "pc": "0x6c", "operation": "retq", "operands": [null] }]
   }]

  # echo 1 > /proc/sys/net/core/bpf_jit_kallsyms
  # bpftool prog dump jited id 1
  bpf_prog_b811aab41a39ad3d_foo:
     0: push %rbp
     1: mov %rsp,%rbp
  ...
    70: leaveq
    71: retq

  bpf_prog_cf418ac8b67bebd9_F:
     0: push %rbp
     1: mov %rsp,%rbp
  ...
    6b: leaveq
    6c: retq

  # bpftool -p prog dump jited id 1
  [{ "name": "bpf_prog_b811aab41a39ad3d_foo",
     "insns": [{ "pc": "0x0",  "operation": "push", "operands": ["%rbp"] },
               { ... },
               { "pc": "0x71", "operation": "retq", "operands": [null] }]
   },
   { "name": "bpf_prog_cf418ac8b67bebd9_F",
     "insns": [{ "pc": "0x0",  "operation": "push", "operands": ["%rbp"] },
               { ... },
               { "pc": "0x6c", "operation": "retq", "operands": [null] }]
   }]

Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

2018-05-24  tools: bpftool: resolve calls without using imm field
            Sandipan Das | 1 file changed, +9/-1

Currently, we resolve the callee's address for a JITed function call by using
the imm field of the call instruction as an offset from __bpf_call_base. If
bpf_jit_kallsyms is enabled, we further use this address to get the callee's
kernel symbol's name.

For some architectures, such as powerpc64, the imm field is not large enough
to hold this offset. So, instead of assigning this offset to the imm field,
the verifier now assigns the subprog id. Also, a list of kernel symbol
addresses for all the JITed functions is provided in the program info. We now
use the imm field as an index into this list to look up a callee's symbol's
address and resolve its name.

Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
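
A rough sketch of the lookup this enables on the bpftool side (the function
and parameter names are illustrative; the real dumper code is structured
differently):

  #include <linux/bpf.h>

  /* With this change, insn->imm of a pseudo call no longer encodes an
   * offset from __bpf_call_base but the callee's subprog id, which can be
   * used to index the list of JITed kernel symbol addresses exposed in
   * the program info.
   */
  static unsigned long callee_addr(const struct bpf_insn *insn,
                                   const unsigned long *jited_ksyms,
                                   unsigned int nr_jited_ksyms)
  {
          unsigned int idx = insn->imm;

          if (idx >= nr_jited_ksyms)
                  return 0;       /* unknown callee */

          return jited_ksyms[idx];
  }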

2018-03-23  bpftool: Adjust to new print_bpf_insn interface
            Jiri Olsa | 1 file changed, +6/-6

Change bpftool to skip the removed struct bpf_verifier_env argument in
print_bpf_insn. It was passed as NULL anyway.

No functional change intended.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

2018-03-02  tools: bpftool: generate .dot graph from CFG information
            Jiong Wang | 1 file changed, +52/-0

This patch lets bpftool print a .dot graph file to stdout. The graph is
generated by the following steps:

  - iterate through the function list.
  - generate the basic-block (BB) definitions for each BB in the function.
  - draw out the edges to connect the BBs.

This patch is the initial support; the layout and decoration of the .dot
graph could be improved. Also, it would be useful if we could visualize some
performance data from static analysis.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
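
A rough sketch of what such an emission loop can look like: one DOT node per
basic block and one edge per control-flow transfer. The bb structure and the
record label here are placeholders, not the actual bpftool CFG types or its
exact output format:

  #include <stdio.h>

  struct bb {
          int idx;
          int succ[2];            /* successor bb indexes, -1 if unused */
          const char *label;      /* disassembled instructions of the bb */
  };

  static void cfg_dump_dot(const struct bb *bbs, int nr_bbs)
  {
          int i, j;

          printf("digraph \"cfg\" {\n");

          /* one node definition per basic block */
          for (i = 0; i < nr_bbs; i++)
                  printf("  bb%d [shape=record,label=\"%s\"];\n",
                         bbs[i].idx, bbs[i].label);

          /* one edge per recorded successor */
          for (i = 0; i < nr_bbs; i++)
                  for (j = 0; j < 2; j++)
                          if (bbs[i].succ[j] >= 0)
                                  printf("  bb%d -> bb%d;\n",
                                         bbs[i].idx, bbs[i].succ[j]);

          printf("}\n");
  }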

2018-03-02  tools: bpftool: factor out xlated dump related code into separate file
            Jiong Wang | 1 file changed, +286/-0

This patch factors out the code for dumping xlated eBPF instructions into
xlated_dumper.[h|c]. These are quite independent dumper functions, so they
are better kept separate. New dumper support will be added in later patches
in this set.

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>