path: root/tools
Age | Commit message | Author | Files | Lines
2023-10-20 | selftests/bpf: Add tests for open-coded task and css iter | Chuyi Zhou | 5 files, +415/-0

This patch adds 4 subtests to demonstrate these patterns and validate correctness.

subtest1:
1) We use task_iter to iterate over all processes in the system and search for the current process with a given pid.
2) We create some threads in the current process context and use BPF_TASK_ITER_PROC_THREADS to iterate over all threads of the current process. As expected, we would find all the threads of the current process.
3) We create some threads and use BPF_TASK_ITER_ALL_THREADS to iterate over all threads in the system. As expected, we would find all the threads that were created.

subtest2:
We create a cgroup and add the current task to the cgroup. In the BPF program, we use bpf_for_each(css_task, task, css) to iterate over all tasks under the cgroup. As expected, we would find the current process.

subtest3:
1) We create a cgroup tree. In the BPF program, we use bpf_for_each(css, pos, root, XXX) to iterate over all descendants under the root in pre- and post-order. As expected, we would find all descendants; the last cgroup iterated in post-order is the root cgroup, and the first cgroup iterated in pre-order is the root cgroup.
2) We use BPF_CGROUP_ITER_ANCESTORS_UP to traverse the cgroup tree starting from a leaf and from the root separately, and record the heights. The difference of the heights is the total tree height minus 1.

subtest4:
Add some failure testcases for the css_task, task and css iters, e.g., unlocking while using task iters to iterate tasks.

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Link: https://lore.kernel.org/r/20231018061746.111364-9-zhouchuyi@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-10-20 | selftests/bpf: rename bpf_iter_task.c to bpf_iter_tasks.c | Chuyi Zhou | 2 files, +9/-9

The newly-added struct bpf_iter_task has a name collision with a selftest for the seq_file task iter's bpf skel, so the selftests/bpf/progs file is renamed in order to avoid the collision.

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20231018061746.111364-8-zhouchuyi@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-10-20 | bpf: Introduce css open-coded iterator kfuncs | Chuyi Zhou | 1 file, +6/-0

This patch adds kfuncs bpf_iter_css_{new,next,destroy} which allow creation and manipulation of struct bpf_iter_css in open-coded iterator style. These kfuncs actually wrap css_next_descendant_{pre,post}.

css_iter can be used to:
1) iterate a specific cgroup tree with pre/post/up order
2) iterate cgroup_subsystem state in a BPF prog, like for_each_mem_cgroup_tree/cpuset_for_each_descendant_pre in the kernel.

The API design is consistent with cgroup_iter. bpf_iter_css_new accepts parameters defining the iteration order and the starting css. Here we also reuse the BPF_CGROUP_ITER_DESCENDANTS_PRE, BPF_CGROUP_ITER_DESCENDANTS_POST and BPF_CGROUP_ITER_ANCESTORS_UP enums.

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20231018061746.111364-5-zhouchuyi@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-10-20 | bpf: Introduce task open coded iterator kfuncs | Chuyi Zhou | 1 file, +5/-0

This patch adds kfuncs bpf_iter_task_{new,next,destroy} which allow creation and manipulation of struct bpf_iter_task in open-coded iterator style. BPF programs can use these kfuncs directly or through the bpf_for_each macro to iterate over all processes in the system.

The API design is kept consistent with SEC("iter/task"). bpf_iter_task_new() accepts a specific task and an iteration type which allows:

1. iterating over all processes in the system (BPF_TASK_ITER_ALL_PROCS)
2. iterating over all threads in the system (BPF_TASK_ITER_ALL_THREADS)
3. iterating over all threads of a specific task (BPF_TASK_ITER_PROC_THREADS)

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Link: https://lore.kernel.org/r/20231018061746.111364-4-zhouchuyi@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
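[Editor's note] For orientation, a minimal sketch of the open-coded pattern these kfuncs enable is shown below. It is not code from this series: the attach point and the extern declarations are assumptions (struct bpf_iter_task and the BPF_TASK_ITER_* enum are expected to come from vmlinux.h), and the contexts in which the verifier actually accepts these kfuncs are defined by the kernel side of the patch.

```c
// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only -- not code from this series. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

/* Assumed kfunc declarations; the authoritative prototypes are the
 * ones this patch adds to the kernel. */
extern int bpf_iter_task_new(struct bpf_iter_task *it,
			     struct task_struct *task, unsigned int flags) __ksym;
extern struct task_struct *bpf_iter_task_next(struct bpf_iter_task *it) __ksym;
extern void bpf_iter_task_destroy(struct bpf_iter_task *it) __ksym;

int nr_procs;

/* Attach point picked only for illustration. */
SEC("fentry/__x64_sys_getpgid")
int count_procs(void *ctx)
{
	struct task_struct *cur = bpf_get_current_task_btf();
	struct task_struct *pos;
	struct bpf_iter_task it;

	/* Required lifecycle: new -> next ... -> destroy. */
	bpf_iter_task_new(&it, cur, BPF_TASK_ITER_ALL_PROCS);
	while ((pos = bpf_iter_task_next(&it)))
		nr_procs++;
	bpf_iter_task_destroy(&it);
	return 0;
}
```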
2023-10-20 | bpf: Introduce css_task open-coded iterator kfuncs | Chuyi Zhou | 1 file, +8/-0

This patch adds kfuncs bpf_iter_css_task_{new,next,destroy} which allow creation and manipulation of struct bpf_iter_css_task in open-coded iterator style. These kfuncs actually wrap css_task_iter_{start,next,end}. BPF programs can use these kfuncs through the bpf_for_each macro to iterate over all tasks under a css. css_task_iter_*() would try to take the global spinlock *css_set_lock*, so the bpf side has to be careful about where it allows this iter to be used. Currently we only allow it in bpf_lsm and bpf iters.

Signed-off-by: Chuyi Zhou <zhouchuyi@bytedance.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20231018061746.111364-3-zhouchuyi@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-10-19 | bpftool: Wrap struct_ops dump in an array | Manu Bretelle | 1 file, +6/-0

When dumping a struct_ops, two dictionaries are emitted. When using `name`, they were already wrapped in an array, but not when using `id`, causing `jq` to fail at parsing the payload as it reached the comma following the first dict. This change wraps those dictionaries in an array so valid json is emitted.

Before, jq fails to parse the output:

```
$ sudo bpftool struct_ops dump id 1523612 | jq . > /dev/null
parse error: Expected value before ',' at line 19, column 2
```

After, no error parsing the output:

```
sudo ./bpftool struct_ops dump id 1523612 | jq . > /dev/null
```

Signed-off-by: Manu Bretelle <chantr4@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20231018230133.1593152-3-chantr4@gmail.com
2023-10-19 | bpftool: Fix printing of pointer value | Manu Bretelle | 1 file, +1/-1

When printing a pointer value, "%p" will either print the hexadecimal value of the pointer (e.g. `0x1234`), or `(nil)` when NULL. Both of those are invalid json "integer" values and need to be wrapped in quotes.

Before:

```
$ sudo bpftool struct_ops dump name ned_dummy_cca | grep next
        "next": (nil),

$ sudo bpftool struct_ops dump name ned_dummy_cca | \
        jq '.[1].bpf_struct_ops_tcp_congestion_ops.data.list.next'
parse error: Invalid numeric literal at line 29, column 34
```

After:

```
$ sudo ./bpftool struct_ops dump name ned_dummy_cca | grep next
        "next": "(nil)",

$ sudo ./bpftool struct_ops dump name ned_dummy_cca | \
        jq '.[1].bpf_struct_ops_tcp_congestion_ops.data.list.next'
"(nil)"
```

Signed-off-by: Manu Bretelle <chantr4@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20231018230133.1593152-2-chantr4@gmail.com
2023-10-18 | selftests/bpf: Add options and frags to xdp_hw_metadata | Larysa Zaremba | 2 files, +67/-13

This is a follow-up to commit 9b2b86332a9b ("bpf: Allow to use kfunc XDP hints and frags together").

There are some possible implementation problems that may arise when providing metadata specifically for multi-buffer packets, therefore there must be a possibility to test such an option separately. Add an option to use multi-buffer AF_XDP in xdp_hw_metadata and mark the used XDP program as capable of using frags.

As of now, xdp_hw_metadata accepts no options, so add simple option parsing logic and a help message. For quick reference, also add an ingress packet generation command to the help message. The command comes from [0].

Example of output for a multi-buffer packet:

  xsk_ring_cons__peek: 1
  0xead018: rx_desc[15]->addr=10000000000f000 addr=f100 comp_addr=f000
  rx_hash: 0x5789FCBB with RSS type:0x29
  rx_timestamp:  1696856851535324697 (sec:1696856851.5353)
  XDP RX-time:   1696856843158256391 (sec:1696856843.1583)
          delta sec:-8.3771 (-8377068.306 usec)
  AF_XDP time:   1696856843158413078 (sec:1696856843.1584)
          delta sec:0.0002 (156.687 usec)
  0xead018: complete idx=23 addr=f000
  xsk_ring_cons__peek: 1
  0xead018: rx_desc[16]->addr=100000000008000 addr=8100 comp_addr=8000
  0xead018: complete idx=24 addr=8000
  xsk_ring_cons__peek: 1
  0xead018: rx_desc[17]->addr=100000000009000 addr=9100 comp_addr=9000 EoP
  0xead018: complete idx=25 addr=9000

Metadata is printed for the first packet only.

[0] https://lore.kernel.org/all/20230119221536.3349901-18-sdf@google.com/

Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20231017162800.24080-1-larysa.zaremba@intel.com
2023-10-17 | selftests/bpf: Add additional mprog query test coverage | Daniel Borkmann | 1 file, +130/-1

Add several new test cases which assert corner cases of the mprog query mechanism, for example, around passing in an array that is too small or larger than the current count.

  ./test_progs -t tc_opts
  #252     tc_opts_after:OK
  #253     tc_opts_append:OK
  #254     tc_opts_basic:OK
  #255     tc_opts_before:OK
  #256     tc_opts_chain_classic:OK
  #257     tc_opts_chain_mixed:OK
  #258     tc_opts_delete_empty:OK
  #259     tc_opts_demixed:OK
  #260     tc_opts_detach:OK
  #261     tc_opts_detach_after:OK
  #262     tc_opts_detach_before:OK
  #263     tc_opts_dev_cleanup:OK
  #264     tc_opts_invalid:OK
  #265     tc_opts_max:OK
  #266     tc_opts_mixed:OK
  #267     tc_opts_prepend:OK
  #268     tc_opts_query:OK
  #269     tc_opts_query_attach:OK
  #270     tc_opts_replace:OK
  #271     tc_opts_revision:OK
  Summary: 20/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/bpf/20231017081728.24769-1-daniel@iogearbox.net
2023-10-17 | selftests/bpf: Add selftest for bpf_task_under_cgroup() in sleepable prog | Yafang Shao | 2 files, +36/-3

The result is as follows:

  $ tools/testing/selftests/bpf/test_progs --name=task_under_cgroup
  #237     task_under_cgroup:OK
  Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Without the previous patch, there will be RCU warnings in dmesg when CONFIG_PROVE_RCU is enabled; with the previous patch, there will be no warnings.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20231007135945.4306-2-laoar.shao@gmail.com
2023-10-17 | libbpf: Don't assume SHT_GNU_verdef presence for SHT_GNU_versym section | Andrii Nakryiko | 1 file, +10/-6

Fix the too-eager assumption that an SHT_GNU_verdef ELF section is going to be present whenever a binary has an SHT_GNU_versym section. It seems like either SHT_GNU_verdef or SHT_GNU_verneed can be used, so failing on a missing SHT_GNU_verdef actually breaks use cases in production.

One specific reported issue, which was used to manually test this fix, was trying to attach to the `readline` function in the BASH binary.

Fixes: bb7fa09399b9 ("libbpf: Support symbol versioning for uprobe")
Reported-by: Liam Wisehart <liamwisehart@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Manu Bretelle <chantr4@gmail.com>
Reviewed-by: Fangrui Song <maskray@google.com>
Acked-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20231016182840.4033346-1-andrii@kernel.org
2023-10-17 | Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | Jakub Kicinski | 86 files, +2811/-703

Daniel Borkmann says:

====================
pull-request: bpf-next 2023-10-16

We've added 90 non-merge commits during the last 25 day(s) which contain a total of 120 files changed, 3519 insertions(+), 895 deletions(-).

The main changes are:

1) Add missed stats for kprobes to retrieve the number of missed kprobe executions and subsequent executions of BPF programs, from Jiri Olsa.

2) Add cgroup BPF sockaddr hooks for unix sockets. The use case is for systemd to reimplement the LogNamespace feature which allows running multiple instances of systemd-journald to process the logs of different services, from Daan De Meyer.

3) Implement BPF CPUv4 support for the s390x BPF JIT, from Ilya Leoshkevich.

4) Improve BPF verifier log output for scalar registers to better disambiguate their internal state wrt defaults vs min/max values matching, from Andrii Nakryiko.

5) Extend the BPF fib lookup helpers for IPv4/IPv6 to support retrieving the source IP address with a new BPF_FIB_LOOKUP_SRC flag, from Martynas Pumputis.

6) Add support for the open-coded task_vma iterator to help with symbolization for BPF-collected user stacks, from Dave Marchevsky.

7) Add libbpf getters for accessing individual BPF ring buffers, which is useful for polling them individually, for example, from Martin Kelly.

8) Extend AF_XDP selftests to validate the SHARED_UMEM feature, from Tushar Vyavahare.

9) Improve BPF selftests cross-building support for the riscv arch, from Björn Töpel.

10) Add the ability to pin a BPF timer to the same calling CPU, from David Vernet.

11) Fix libbpf's bpf_tracing.h macros for riscv to use the generic implementation of PT_REGS_SYSCALL_REGS() to access syscall arguments, from Alexandre Ghiti.

12) Extend libbpf to support symbol versioning for uprobes, from Hengqi Chen.

13) Fix bpftool's skeleton code generation to guarantee that ELF data is 8 byte aligned, from Ian Rogers.

14) Inherit the system-wide cpu_mitigations_off() setting for Spectre v1/v4 security mitigations in the BPF verifier, from Yafang Shao.

15) Annotate struct bpf_stack_map with the __counted_by attribute to prepare the BPF side for upcoming __counted_by compiler support, from Kees Cook.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (90 commits)
  bpf: Ensure proper register state printing for cond jumps
  bpf: Disambiguate SCALAR register state output in verifier logs
  selftests/bpf: Make align selftests more robust
  selftests/bpf: Improve missed_kprobe_recursion test robustness
  selftests/bpf: Improve percpu_alloc test robustness
  selftests/bpf: Add tests for open-coded task_vma iter
  bpf: Introduce task_vma open-coded iterator kfuncs
  selftests/bpf: Rename bpf_iter_task_vma.c to bpf_iter_task_vmas.c
  bpf: Don't explicitly emit BTF for struct btf_iter_num
  bpf: Change syscall_nr type to int in struct syscall_tp_t
  net/bpf: Avoid unused "sin_addr_len" warning when CONFIG_CGROUP_BPF is not set
  bpf: Avoid unnecessary audit log for CPU security mitigations
  selftests/bpf: Add tests for cgroup unix socket address hooks
  selftests/bpf: Make sure mount directory exists
  documentation/bpf: Document cgroup unix socket address hooks
  bpftool: Add support for cgroup unix socket address hooks
  libbpf: Add support for cgroup unix socket address hooks
  bpf: Implement cgroup sockaddr hooks for unix sockets
  bpf: Add bpf_sock_addr_set_sun_path() to allow writing unix sockaddr from bpf
  bpf: Propagate modified uaddrlen from cgroup sockaddr programs
  ...
====================

Link: https://lore.kernel.org/r/20231016204803.30153-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-16 | bpf: Disambiguate SCALAR register state output in verifier logs | Andrii Nakryiko | 2 files, +10/-10

Currently the way that the verifier prints SCALAR_VALUE register state (and PTR_TO_PACKET, which can have var_off and ranges info as well) is very ambiguous.

In the name of brevity we are trying to eliminate "unnecessary" output of umin/umax, smin/smax, u32_min/u32_max, and s32_min/s32_max values, if possible. Current rules are that we omit any of those if it has its default value (which for mins is the minimal value of its respective type: 0, S32_MIN, or S64_MIN, while for maxs it's U32_MAX, S32_MAX, S64_MAX, or U64_MAX) *OR* if there is another min/max value that has a matching value. E.g., if smin=100 and umin=100, we'll emit only umin=100, omitting smin altogether.

This approach has a few problems, being both ambiguous and sort-of incorrect in some cases. Ambiguity comes from the fact that a missing value could be either the default value or the value of umin/umax or smin/smax. This is especially confusing when we mix signed and unsigned ranges. Quite often, umin=0 and smin=0, and so we'll have only `umin=0`, leaving anyone reading the verifier log to guess whether smin is actually 0 or it's actually -9223372036854775808 (S64_MIN). And often it's important to know, especially when debugging tricky issues.

"Sort-of incorrectness" comes from mixing negative and positive values. E.g., if umin is some large positive number, it can be equal to smin which, interpreted as a signed value, is actually some negative value. Currently, that smin will be omitted and only umin will be emitted with a large positive value, giving the impression that smin is also positive.

Anyway, ambiguity is the biggest issue, making it impossible to have an exact understanding of register state and preventing any sort of automated testing of verifier state based on the verifier log. This patch attempts to rectify the situation by removing ambiguity, while minimizing the verboseness of register state output.

The rules are straightforward:
  - if some of the values are missing, then it definitely has a default value. I.e., `umin=0` means that umin is zero, but smin is actually S64_MIN;
  - all the various boundaries that happen to have the same value are emitted in one equality-separated sequence. E.g., if umin and smin are both 100, we'll emit `smin=umin=100`, making this explicit;
  - we do not mix negative and positive values together, and even if they happen to have the same bit-level value, they will be emitted separately with proper sign. I.e., if both umax and smax happen to be 0xffffffffffffffff, we'll emit them both separately as `smax=-1,umax=18446744073709551615`;
  - in the name of a bit more uniformity and consistency, {u32,s32}_{min,max} are renamed to {s,u}{min,max}32, which seems to improve readability.

The above means that in the case of all 4 ranges being, say, the [50, 100] range, we'd previously see the hugely ambiguous:

    R1=scalar(umin=50,umax=100)

Now, we'll be more explicit:

    R1=scalar(smin=umin=smin32=umin32=50,smax=umax=smax32=umax32=100)

This is slightly more verbose, but distinct from the case when we don't know anything about signed boundaries and 32-bit boundaries, which under the new rules will match the old case:

    R1=scalar(umin=50,umax=100)

Also, in the name of simplicity of implementation and consistency, {s,u}32_{min,max} are emitted *before* var_off. Previously they were emitted afterwards, for unclear reasons.

This patch also includes a few fixes to selftests that expect exact register state to accommodate slight changes to the verifier format. You can see that the changes are pretty minimal in common cases.

Note, the special case when a SCALAR_VALUE register is a known constant isn't changed; we'll emit the constant value once, interpreted as a signed value.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/20231011223728.3188086-5-andrii@kernel.org
2023-10-16 | selftests/bpf: Make align selftests more robust | Andrii Nakryiko | 1 file, +121/-120

The align subtest is very specific and finicky about expected verifier log output and format. This is often completely unnecessary, as in a bunch of situations the test actually only cares about the var_off part of the register state. But given how exact it is right now, any tiny verifier log change can lead to align test failures, requiring constant adjustment.

This patch tries to make this a bit more robust by making the logic first search for the specified register and then allowing to match only a portion of the register state, not everything exactly. This will come in handy with follow-up changes to SCALAR register output disambiguation.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/20231011223728.3188086-4-andrii@kernel.org
2023-10-16 | selftests/bpf: Improve missed_kprobe_recursion test robustness | Andrii Nakryiko | 1 file, +4/-4

Given missed_kprobe_recursion is non-serial and uses common testing kfuncs to count the number of recursion misses, it's possible that some other parallel test can trigger extraneous recursion misses, so we can't expect exactly 1 miss. Relax the conditions and expect at least one.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/20231011223728.3188086-3-andrii@kernel.org
2023-10-16 | selftests/bpf: Improve percpu_alloc test robustness | Andrii Nakryiko | 3 files, +14/-0

Make these non-serial tests filter BPF programs by the intended PID of the test runner process. This makes them isolated from other parallel tests that might interfere accidentally.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/20231011223728.3188086-2-andrii@kernel.org
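[Editor's note] The PID-filtering pattern referred to here is roughly the following sketch; the attach point and variable names are illustrative, not the actual percpu_alloc progs.

```c
// SPDX-License-Identifier: GPL-2.0
/* Minimal sketch of the "filter by test-runner PID" pattern. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

int my_pid;	/* set from user space, e.g. skel->bss->my_pid = getpid(); */
int hits;

SEC("tp/syscalls/sys_enter_getpgid")	/* attach point only for illustration */
int handle(void *ctx)
{
	/* Upper 32 bits of the helper result hold the tgid (process ID). */
	if ((int)(bpf_get_current_pid_tgid() >> 32) != my_pid)
		return 0;	/* triggered by some other, parallel test: ignore */
	hits++;
	return 0;
}
```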
2023-10-16 | selftests: net: remove unused variables | zhujun2 | 3 files, +3/-5

These variables are never referenced in the code; just remove them.

Signed-off-by: zhujun2 <zhujun2@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-15 | ptp: add testptp mask test | Xabier Marquiegui | 1 file, +18/-1

Add an option to test timestamp event queue mask manipulation in testptp. Option -F allows the user to specify a single channel that will be applied on the mask filter via IOCTL. The test program will keep the file open until user input is received. This allows checking the effect of the IOCTL in debugfs.

e.g.:

Console 1:
```
Channel 12 exclusively enabled. Check on debugfs.
Press any key to continue
```

Console 2:
```
0x00000000 0x00000001 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000
0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000
0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000
0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000
0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000
0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000
0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000
0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 0x00000000
```

Signed-off-by: Xabier Marquiegui <reibax@gmail.com>
Suggested-by: Richard Cochran <richardcochran@gmail.com>
Suggested-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-15 | ptp: add debugfs interface to see applied channel masks | Xabier Marquiegui | 1 file, +14/-0

Use debugfs to be able to view the channel mask applied to every timestamp event queue. Every time the device is opened, a new entry is created in `$DEBUGFS_MOUNTPOINT/ptpN/$INSTANCE_ADDRESS/mask`. The mask value can be viewed grouped in 32-bit decimal values using cat, or converted to hexadecimal with the included `ptpchmaskfmt.sh` script. The 32-bit values are listed from least significant to most significant.

Signed-off-by: Xabier Marquiegui <reibax@gmail.com>
Suggested-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-15 | test/vsock: io_uring rx/tx tests | Arseniy Krasnov | 3 files, +348/-2

This adds a set of tests which use io_uring for rx/tx. This test suite is implemented as a separate utility like 'vsock_test' and has the same set of input arguments as 'vsock_test'. These tests only cover cases of data transmission (no connect/bind/accept etc).

Signed-off-by: Arseniy Krasnov <avkrasnov@salutedevices.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
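[Editor's note] For context, the basic liburing tx path such a test exercises looks roughly like the sketch below; it targets a generic connected socket fd and is not the vsock_uring_test code itself.

```c
/* Sketch: one io_uring-submitted send() on an already-connected socket. */
#include <liburing.h>

static int uring_send(int sockfd, const void *buf, size_t len)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0)
		return ret;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_send(sqe, sockfd, buf, len, 0);

	io_uring_submit(&ring);
	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret) {
		ret = cqe->res;		/* bytes sent, or -errno */
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return ret;
}
```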
2023-10-15 | test/vsock: MSG_ZEROCOPY support for vsock_perf | Arseniy Krasnov | 2 files, +72/-10

To use this option, pass the '--zerocopy' parameter:

  ./vsock_perf --zerocopy --sender <cid> ...

With this option, the MSG_ZEROCOPY flag will be passed to the 'send()' call.

Signed-off-by: Arseniy Krasnov <avkrasnov@salutedevices.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-15 | test/vsock: MSG_ZEROCOPY flag tests | Arseniy Krasnov | 8 files, +633/-1

This adds three tests for the MSG_ZEROCOPY feature:
1) SOCK_STREAM tx with different buffers.
2) SOCK_SEQPACKET tx with different buffers.
3) SOCK_STREAM test to read the empty error queue of the socket.

This patch also works as preparation for the next patches for tools in this patchset, vsock_perf and vsock_uring_test:
1) It adds several new functions to util.c - they will also be used by vsock_uring_test.
2) It adds two new functions for MSG_ZEROCOPY handling to a new source file - this source will be shared between vsock_test, vsock_perf and vsock_uring_test, thus avoiding code copy-pasting.

Signed-off-by: Arseniy Krasnov <avkrasnov@salutedevices.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
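[Editor's note] The MSG_ZEROCOPY flow these tests exercise follows the usual error-queue pattern, roughly as in the sketch below (generic socket code, assuming SO_ZEROCOPY is enabled the same way as for other socket families; not the test code itself).

```c
/* Sketch: zerocopy send plus a completion read from the error queue. */
#include <stddef.h>
#include <sys/socket.h>
#include <linux/errqueue.h>
#include <errno.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60		/* uapi value, for older libc headers */
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000
#endif

static int send_zerocopy(int fd, const void *buf, size_t len)
{
	char cbuf[CMSG_SPACE(sizeof(struct sock_extended_err))];
	struct msghdr msg = {
		.msg_control = cbuf,
		.msg_controllen = sizeof(cbuf),
	};
	int one = 1;

	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)))
		return -errno;
	if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
		return -errno;

	/* Completion notifications land on the socket error queue; a real
	 * program would poll() for them instead of spinning like this. */
	while (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
		if (errno != EAGAIN && errno != EWOULDBLOCK)
			return -errno;
	}
	return 0;
}
```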
2023-10-14 | selftests/bpf: Add tests for open-coded task_vma iter | Dave Marchevsky | 3 files, +109/-0

The open-coded task_vma iter added earlier in this series allows for natural iteration over a task's vmas using existing open-coded iter infrastructure, specifically bpf_for_each. This patch adds a test demonstrating this pattern and validating correctness. The vma->vm_start and vma->vm_end addresses of the first 1000 vmas are recorded and compared to /proc/PID/maps output. As expected, both see the same vmas and addresses - with the exception of the [vsyscall] vma - which is explained in a comment in the prog_tests program.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20231013204426.1074286-5-davemarchevsky@fb.com
2023-10-14 | selftests/bpf: Rename bpf_iter_task_vma.c to bpf_iter_task_vmas.c | Dave Marchevsky | 2 files, +13/-13

Further patches in this series will add a struct bpf_iter_task_vma, which will result in a name collision with the selftest prog renamed in this patch. Rename the selftest to avoid the collision.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20231013204426.1074286-3-davemarchevsky@fb.com
2023-10-13 | bpf: Change syscall_nr type to int in struct syscall_tp_t | Artem Savkov | 2 files, +3/-3

The linux-rt-devel tree contains a patch (b1773eac3f29c ("sched: Add support for lazy preemption")) that adds an extra member to struct trace_entry. This causes the offset of the args field in struct trace_event_raw_sys_enter to differ from the one in struct syscall_trace_enter:

  struct trace_event_raw_sys_enter {
          struct trace_entry         ent;            /*     0    12 */
          /* XXX last struct has 3 bytes of padding */
          /* XXX 4 bytes hole, try to pack */
          long int                   id;             /*    16     8 */
          long unsigned int          args[6];        /*    24    48 */
          /* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
          char                       __data[];       /*    72     0 */

          /* size: 72, cachelines: 2, members: 4 */
          /* sum members: 68, holes: 1, sum holes: 4 */
          /* paddings: 1, sum paddings: 3 */
          /* last cacheline: 8 bytes */
  };

  struct syscall_trace_enter {
          struct trace_entry         ent;            /*     0    12 */
          /* XXX last struct has 3 bytes of padding */
          int                        nr;             /*    12     4 */
          long unsigned int          args[];         /*    16     0 */

          /* size: 16, cachelines: 1, members: 3 */
          /* paddings: 1, sum paddings: 3 */
          /* last cacheline: 16 bytes */
  };

This, in turn, causes perf_event_set_bpf_prog() to fail while running the bpf test_profiler testcase, because max_ctx_offset is calculated based on the former struct, while off is based on the latter:

  10488         if (is_tracepoint || is_syscall_tp) {
  10489                 int off = trace_event_get_offsets(event->tp_event);
  10490
  10491                 if (prog->aux->max_ctx_offset > off)
  10492                         return -EACCES;
  10493         }

What the bpf program is actually getting is a pointer to struct syscall_tp_t, defined in kernel/trace/trace_syscalls.c. This patch fixes the problem by aligning struct syscall_tp_t with struct syscall_trace_(enter|exit) and changing the tests to use these structs to dereference the context.

Signed-off-by: Artem Savkov <asavkov@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/bpf/20231013054219.172920-1-asavkov@redhat.com
2023-10-13 | selftests: netdevsim: use suitable existing dummy file for flash test | Jiri Pirko | 1 file, +14/-7

The file name used in the flash test was "dummy" because at the time the test was written, drivers were responsible for the file request, and as netdevsim didn't do that, the name was unused. However, the file load request is now done in devlink code and therefore the file has to exist. Use the first random file from /lib/firmware for this purpose.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-13 | selftests: fdb_flush: Add test cases for FDB flush with bridge device | Amit Cohen | 1 file, +96/-0

Extend the test to check flushing with a bridge device; test flush by device and by VID. Add a test case for flushing with "self" and "master" and attributes that are supported only in one driver. This is an unrecommended configuration; check it to verify that the user gets an error.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-13 | selftests: Add test cases for FDB flush with VXLAN device | Amit Cohen | 2 files, +717/-0

Test all the supported arguments for FDB flush. The test checks configuration, not traffic. Note that the flag 'offloaded' is not checked as it is not relevant when there is no hardware.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-13 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 7 files, +566/-110

Cross-merge networking fixes after downstream PR.

No conflicts.

Adjacent changes:

kernel/bpf/verifier.c
  829955981c55 ("bpf: Fix verifier log for async callback return values")
  a923819fb2c5 ("bpf: Treat first argument as return value for bpf_throw")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-12 | Merge tag 'net-6.6-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Linus Torvalds | 5 files, +331/-73

Pull networking fixes from Paolo Abeni:
 "Including fixes from CAN and BPF.

  We have a regression in TC currently under investigation, otherwise the things that stand off most are probably the TCP and AF_PACKET fixes, with both issues coming from 6.5.

  Previous releases - regressions:

   - af_packet: fix fortified memcpy() without flex array.
   - tcp: fix crashes trying to free half-baked MTU probes
   - xdp: fix zero-size allocation warning in xskq_create()
   - can: sja1000: always restart the tx queue after an overrun
   - eth: mlx5e: again mutually exclude RX-FCS and RX-port-timestamp
   - eth: nfp: avoid rmmod nfp crash issues
   - eth: octeontx2-pf: fix page pool frag allocation warning

  Previous releases - always broken:

   - mctp: perform route lookups under a RCU read-side lock
   - bpf: s390: fix clobbering the caller's backchain in the trampoline
   - phy: lynx-28g: cancel the CDR check work item on the remove path
   - dsa: qca8k: fix qca8k driver for Turris 1.x
   - eth: ravb: fix use-after-free issue in ravb_tx_timeout_work()
   - eth: ixgbe: fix crash with empty VF macvlan list"

* tag 'net-6.6-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (54 commits)
  rswitch: Fix imbalance phy_power_off() calling
  rswitch: Fix renesas_eth_sw_remove() implementation
  octeontx2-pf: Fix page pool frag allocation warning
  nfc: nci: assert requested protocol is valid
  af_packet: Fix fortified memcpy() without flex array.
  net: tcp: fix crashes trying to free half-baked MTU probes
  net/smc: Fix pos miscalculation in statistics
  nfp: flower: avoid rmmod nfp crash issues
  net: usb: dm9601: fix uninitialized variable use in dm9601_mdio_read
  ethtool: Fix mod state of verbose no_mask bitset
  net: nfc: fix races in nfc_llcp_sock_get() and nfc_llcp_sock_get_sn()
  mctp: perform route lookups under a RCU read-side lock
  net: skbuff: fix kernel-doc typos
  s390/bpf: Fix unwinding past the trampoline
  s390/bpf: Fix clobbering the caller's backchain in the trampoline
  net/mlx5e: Again mutually exclude RX-FCS and RX-port-timestamp
  net/smc: Fix dependency of SMC on ISM
  ixgbe: fix crash with empty VF macvlan list
  net/mlx5e: macsec: use update_pn flag instead of PN comparation
  net: phy: mscc: macsec: reject PN update requests
  ...
2023-10-12 | selftests/bpf: Add tests for cgroup unix socket address hooks | Daan De Meyer | 10 files, +883/-0

These selftests are written in prog_tests style instead of adding them to the existing test_sock_addr tests. Migrating the existing sock addr tests to prog_tests style is left for future work.

This commit adds support for testing bind() sockaddr hooks, even though there's no unix socket sockaddr hook for bind(). We leave this code intact for when the INET and INET6 tests, which do support intercepting bind(), are migrated in the future.

Signed-off-by: Daan De Meyer <daan.j.demeyer@gmail.com>
Link: https://lore.kernel.org/r/20231011185113.140426-10-daan.j.demeyer@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-10-12 | selftests/bpf: Make sure mount directory exists | Daan De Meyer | 1 file, +5/-0

The mount directory for the selftests cgroup tree might not exist, so make sure it does by creating it ourselves if it is missing.

Signed-off-by: Daan De Meyer <daan.j.demeyer@gmail.com>
Link: https://lore.kernel.org/r/20231011185113.140426-9-daan.j.demeyer@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
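[Editor's note] A minimal sketch of that kind of guard, with a made-up helper name and the path left to the caller:

```c
#include <sys/stat.h>
#include <errno.h>

/* Create the cgroup mount point if it is missing; an already existing
 * directory is fine. */
static int ensure_dir(const char *path)
{
	if (mkdir(path, 0777) && errno != EEXIST)
		return -errno;
	return 0;
}
```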
2023-10-12 | bpftool: Add support for cgroup unix socket address hooks | Daan De Meyer | 5 files, +38/-23

Add the necessary plumbing to hook up the new cgroup unix sockaddr hooks into bpftool.

Signed-off-by: Daan De Meyer <daan.j.demeyer@gmail.com>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/r/20231011185113.140426-7-daan.j.demeyer@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-10-12 | libbpf: Add support for cgroup unix socket address hooks | Daan De Meyer | 1 file, +10/-0

Add the necessary plumbing to hook up the new cgroup unix sockaddr hooks into libbpf.

Signed-off-by: Daan De Meyer <daan.j.demeyer@gmail.com>
Link: https://lore.kernel.org/r/20231011185113.140426-6-daan.j.demeyer@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-10-12 | bpf: Implement cgroup sockaddr hooks for unix sockets | Daan De Meyer | 1 file, +9/-4

These hooks allow intercepting connect(), getsockname(), getpeername(), sendmsg() and recvmsg() for unix sockets. The unix socket hooks get write access to the address length because the address length is not fixed when dealing with unix sockets and needs to be modified when a unix socket address is modified by the hook. Because abstract socket unix addresses start with a NUL byte, we cannot recalculate the socket address in kernelspace after running the hook by calculating the length of the unix socket path using strlen().

These hooks can be used when users want to multiplex syscalls to a single unix socket to multiple different processes behind the scenes, by redirecting connect() and the other syscalls to process-specific sockets.

We do not implement support for intercepting bind() because using bind() with unix sockets with a pathname address creates an inode in the filesystem which must be cleaned up. If we rewrite the address, the user might try to clean up the wrong file, leaking the socket in the filesystem where it is never cleaned up. Until we figure out a solution for this (and a use case for intercepting bind()), we opt to not allow rewriting the sockaddr in bind() calls.

We also implement recvmsg() support for connected streams so that after a connect() that is modified by a sockaddr hook, any corresponding recvmsg() on the connected socket can also be modified to make the connected program think it is connected to the "intended" remote.

Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: Daan De Meyer <daan.j.demeyer@gmail.com>
Link: https://lore.kernel.org/r/20231011185113.140426-5-daan.j.demeyer@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-10-11 | tools: ynl: use ynl-gen -o instead of stdout in Makefile | Jakub Kicinski | 1 file, +2/-2

Jiri added more careful handling of the output of the code generator, to avoid wiping out existing files, in commit f65f305ae008 ("tools: ynl-gen: use temporary file for rendering"). Make use of the -o option in the Makefiles; it is already used by ynl-regen.sh.

Link: https://lore.kernel.org/r/20231010202714.4045168-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-11 | selftests/bpf: Add missing section name tests for getpeername/getsockname | Daan De Meyer | 1 file, +20/-0

These were missed when these hooks were first added, so add them now to make sure every sockaddr hook has a matching section name test.

Signed-off-by: Daan De Meyer <daan.j.demeyer@gmail.com>
Link: https://lore.kernel.org/r/20231011185113.140426-2-daan.j.demeyer@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-10-10 | Merge tag 'hyperv-fixes-signed-20231009' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux | Linus Torvalds | 2 files, +235/-37

Pull hyperv fixes from Wei Liu:

 - fixes for Hyper-V VTL code (Saurabh Sengar and Olaf Hering)

 - fix hv_kvp_daemon to support keyfile based connection profile (Shradha Gupta)

* tag 'hyperv-fixes-signed-20231009' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
  hv/hv_kvp_daemon:Support for keyfile based connection profile
  hyperv: reduce size of ms_hyperv_info
  x86/hyperv: Add common print prefix "Hyper-V" in hv_init
  x86/hyperv: Remove hv_vtl_early_init initcall
  x86/hyperv: Restrict get_vtl to only VTL platforms
2023-10-10 | hv/hv_kvp_daemon: Support for keyfile based connection profile | Shradha Gupta | 2 files, +235/-37

Ifcfg config file support in NetworkManager is deprecated. This patch provides support for the new keyfile config format for connection profiles in NetworkManager.

The patch modifies the hv_kvp_daemon code to generate the new network configuration in keyfile format (.ini-style format) along with an ifcfg format configuration. The ifcfg format configuration is retained to support easy backward compatibility for distro vendors. These configurations are stored in temp files which are further translated using the hv_set_ifconfig.sh script. This script is implemented by individual distros based on the network management commands supported.

For example, RHEL's implementation can be found here:
https://gitlab.com/redhat/centos-stream/src/hyperv-daemons/-/blob/c9s/hv_set_ifconfig.sh

Debian's implementation can be found here:
https://github.com/endlessm/linux/blob/master/debian/cloud-tools/hv_set_ifconfig

The next part of this support is to let the distro vendors consume these modified implementations to the new configuration format.

Tested-on: Rhel9 (Hyper-V, Azure) (nm and ifcfg files verified)

Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Saurabh Sengar <ssengar@linux.microsoft.com>
Reviewed-by: Ani Sinha <anisinha@redhat.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Link: https://lore.kernel.org/r/1696847920-31125-1-git-send-email-shradhagupta@linux.microsoft.com
2023-10-10 | tools: ynl-gen: handle do ops with no input attrs | Jakub Kicinski | 1 file, +11/-6

The code currently supports dumps with no input attributes through a combination of special-casing and luck. Clean up the handling of ops with no inputs: create empty Structs, and skip printing of empty types. This makes do ops with no inputs work.

Tested-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
Link: https://lore.kernel.org/r/20231006135032.3328523-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-10 | selftests/bpf: Add BPF_FIB_LOOKUP_SRC tests | Martynas Pumputis | 1 file, +77/-6

This patch extends the existing fib_lookup test suite by adding two test cases (for each IP family):

* Test source IP selection from the egressing netdev.
* Test source IP selection when an IP route has a preferred src IP addr.

Signed-off-by: Martynas Pumputis <m@lambda.lt>
Link: https://lore.kernel.org/r/20231007081415.33502-3-m@lambda.lt
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-10-10 | bpf: Derive source IP addr via bpf_*_fib_lookup() | Martynas Pumputis | 1 file, +10/-0

Extend the bpf_fib_lookup() helper by making it return the source IPv4/IPv6 address if the BPF_FIB_LOOKUP_SRC flag is set.

For example, the following snippet can be used to derive the desired source IP address:

  struct bpf_fib_lookup p = { .ipv4_dst = ip4->daddr };

  ret = bpf_skb_fib_lookup(skb, p, sizeof(p),
			   BPF_FIB_LOOKUP_SRC | BPF_FIB_LOOKUP_SKIP_NEIGH);
  if (ret != BPF_FIB_LKUP_RET_SUCCESS)
	  return TC_ACT_SHOT;

  /* the p.ipv4_src now contains the source address */

The inability to derive the proper source address may cause malfunctions in BPF-based dataplanes for hosts containing netdevs with more than one routable IP address or for multi-homed hosts.

For example, Cilium implements packet masquerading in BPF. If an egressing netdev to which the Cilium BPF prog is attached has multiple IP addresses, then only one [hardcoded] IP address can be used for masquerading. This breaks connectivity if any other IP address should have been selected instead, for example, when a public and a private address are attached to the same egress interface.

The change was tested with Cilium [1].

Nikolay Aleksandrov helped to figure out the IPv6 addr selection.

[1]: https://github.com/cilium/cilium/pull/28283

Signed-off-by: Martynas Pumputis <m@lambda.lt>
Link: https://lore.kernel.org/r/20231007081415.33502-2-m@lambda.lt
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-10-10 | selftests/bpf: Add testcase for async callback return value failure | David Vernet | 2 files, +51/-2

A previous commit updated the verifier to print an accurate failure message for when someone specifies a nonzero return value from an async callback. This adds a testcase for validating that the verifier emits the correct message in such a case.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20231009161414.235829-2-void@manifault.com
2023-10-09 | bpftool: Align bpf_load_and_run_opts insns and data | Ian Rogers | 1 file, +23/-20

A C string lacks alignment, so use aligned arrays to avoid potential alignment problems. Switch to using sizeof (less 1 for the \0 terminator) rather than a hardcoded size constant.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20231007044439.25171-2-irogers@google.com
2023-10-09 | bpftool: Align output skeleton ELF code | Ian Rogers | 1 file, +9/-6

libbpf accesses the ELF data requiring at least 8 byte alignment; however, the data is generated into a C string that doesn't guarantee alignment. Fix this by assigning to an aligned char array. Use sizeof on the array, less one for the \0 terminator, rather than generating a constant.

Fixes: a6cc6b34b93e ("bpftool: Provide a helper method for accessing skeleton's embedded ELF data")
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20231007044439.25171-1-irogers@google.com
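[Editor's note] The underlying idea, reduced to a sketch (not the exact code bpftool emits):

```c
#include <stddef.h>

/* A plain string literal is only guaranteed byte alignment ... */
static const char elf_bytes_unaligned[] = "\x7f" "ELF...";

/* ... so emit the embedded ELF image as an explicitly aligned array and
 * size it with sizeof (minus the trailing NUL) instead of a hardcoded
 * constant. */
static const char elf_bytes[] __attribute__((aligned(8))) = "\x7f" "ELF...";
static const size_t elf_bytes_sz = sizeof(elf_bytes) - 1;
```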
2023-10-09 | selftests/bpf: Test pinning bpf timer to a core | David Vernet | 2 files, +66/-1

Now that we support pinning a BPF timer to the current core, we should test it with some selftests. This patch adds two new testcases to the timer suite, which verify that a BPF timer, both with and without BPF_F_TIMER_ABS, can be pinned to the calling core with BPF_F_TIMER_CPU_PIN.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <song@kernel.org>
Acked-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/bpf/20231004162339.200702-3-void@manifault.com
2023-10-09 | bpf: Add ability to pin bpf timer to calling CPU | David Vernet | 1 file, +4/-0

BPF supports creating high resolution timers using bpf_timer_* helper functions. Currently, only the BPF_F_TIMER_ABS flag is supported, which specifies that the timeout should be interpreted as absolute time. It would also be useful to be able to pin that timer to a core, for example, if you wanted to make a subset of cores run without timer interrupts and only have the timer be invoked on a single core.

This patch adds support for this with a new BPF_F_TIMER_CPU_PIN flag. When specified, the HRTIMER_MODE_PINNED flag is passed to hrtimer_start(). A subsequent patch will update selftests to validate.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <song@kernel.org>
Acked-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/bpf/20231004162339.200702-2-void@manifault.com
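[Editor's note] A rough sketch of how a program might arm such a pinned timer; the map layout, callback prototype and attach point follow the common bpf_timer pattern from the selftests and are assumptions here, not part of this patch:

```c
// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

#ifndef CLOCK_MONOTONIC
#define CLOCK_MONOTONIC 1	/* uapi clockid value */
#endif

struct elem {
	struct bpf_timer t;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} timer_map SEC(".maps");

static int timer_cb(void *map, int *key, struct bpf_timer *timer)
{
	/* With BPF_F_TIMER_CPU_PIN this runs on the CPU that armed it. */
	return 0;
}

SEC("fentry/bpf_fentry_test1")	/* attach point only for illustration */
int arm_pinned_timer(void *ctx)
{
	struct elem *val;
	int key = 0;

	val = bpf_map_lookup_elem(&timer_map, &key);
	if (!val)
		return 0;
	bpf_timer_init(&val->t, &timer_map, CLOCK_MONOTONIC);
	bpf_timer_set_callback(&val->t, timer_cb);
	/* Fire in 1 ms, pinned to the calling CPU via the new flag. */
	bpf_timer_start(&val->t, 1000000, BPF_F_TIMER_CPU_PIN);
	return 0;
}
```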
2023-10-07 | selftests/bpf: Make seen_tc* variable tests more robust | Daniel Borkmann | 3 files, +46/-60

Martin reported that on his local dev machine test_tc_chain_mixed() fails as "test_tc_chain_mixed:FAIL:seen_tc5 unexpected seen_tc5: actual 1 != expected 0", and others occasionally, too. However, when running in a more isolated setup (qemu in particular), it works fine for him.

The reason is that there is a small race window where seen_tc* could turn into true for various test cases when there is background traffic, e.g. after the asserts they often get reset. In such a case, when the subsequent detach takes place, unrelated background traffic could have already flipped the bool to true beforehand.

Add a small helper tc_skel_reset_all_seen() to reset all bools before we do the ping test. At this point, everything is set up as expected and therefore no race can occur. All tc_{opts,links} tests continue to pass after this change.

Reported-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20231006220655.1653-7-daniel@iogearbox.net
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
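[Editor's note] The helper likely boils down to something like the following sketch; the skeleton type name is assumed:

```c
/* Sketch; assumes the test_tc_link skeleton used by the tc_opts/tc_links
 * tests. Zeroing .bss clears every seen_tc* flag in one go. */
static void tc_skel_reset_all_seen(struct test_tc_link *skel)
{
	memset(skel->bss, 0, sizeof(*skel->bss));
}
```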
2023-10-07 | selftests/bpf: Test query on empty mprog and pass revision into attach | Daniel Borkmann | 1 file, +59/-0

Add a new test case to query on an empty bpf_mprog and pass the revision directly into expected_revision for attachment to assert that this does succeed.

  ./test_progs -t tc_opts
  [    1.406778] tsc: Refined TSC clocksource calibration: 3407.990 MHz
  [    1.408863] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fcaf6eb0, max_idle_ns: 440795321766 ns
  [    1.412419] clocksource: Switched to clocksource tsc
  [    1.428671] bpf_testmod: loading out-of-tree module taints kernel.
  [    1.430260] bpf_testmod: module verification failed: signature and/or required key missing - tainting kernel
  #252     tc_opts_after:OK
  #253     tc_opts_append:OK
  #254     tc_opts_basic:OK
  #255     tc_opts_before:OK
  #256     tc_opts_chain_classic:OK
  #257     tc_opts_chain_mixed:OK
  #258     tc_opts_delete_empty:OK
  #259     tc_opts_demixed:OK
  #260     tc_opts_detach:OK
  #261     tc_opts_detach_after:OK
  #262     tc_opts_detach_before:OK
  #263     tc_opts_dev_cleanup:OK
  #264     tc_opts_invalid:OK
  #265     tc_opts_max:OK
  #266     tc_opts_mixed:OK
  #267     tc_opts_prepend:OK
  #268     tc_opts_query:OK
  #269     tc_opts_query_attach:OK      <--- (new test)
  #270     tc_opts_replace:OK
  #271     tc_opts_revision:OK
  Summary: 20/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20231006220655.1653-6-daniel@iogearbox.net
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-10-07 | selftests/bpf: Adapt assert_mprog_count to always expect 0 count | Daniel Borkmann | 3 files, +8/-11

Simplify __assert_mprog_count() to remove the -ENOENT corner case, as bpf_prog_query() now returns 0 when no bpf_mprog is attached. This also allows converting a few test cases from using raw __assert_mprog_count() over to the plain assert_mprog_count() helper.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20231006220655.1653-5-daniel@iogearbox.net
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>