Commit log: samples/bpf/test_overhead_user.c
2023-01-16  samples/bpf: change _kern suffix to .bpf with BPF test programs  (Daniel T. Lee; 1 file changed, 3 insertions(+), 3 deletions(-))
This commit changes the _kern suffix to .bpf for the BPF test programs. With this modification, the test programs inherit the benefit of the new CLANG-BPF compile target.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Link: https://lore.kernel.org/r/20230115071613.125791-11-danieltimlee@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-01-16  samples/bpf: replace broken overhead microbenchmark with fib_table_lookup  (Daniel T. Lee; 1 file changed, 19 insertions(+), 9 deletions(-))
The test_overhead bpf program is designed to compare performance between tracepoint and kprobe. Initially it used the task_rename and urandom_read tracepoints. However, commit 14c174633f34 ("random: remove unused tracepoints") removed the urandom_read tracepoint, which broke test_overhead.

This commit introduces a new microbenchmark using fib_table_lookup. The microbenchmark sends UDP packets to localhost in order to invoke fib_table_lookup. In a nutshell:

    fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    addr.sin_addr.s_addr = inet_addr(DUMMY_IP);
    addr.sin_port = htons(DUMMY_PORT);
    for() {
        sendto(fd, buf, strlen(buf), 0, (struct sockaddr *)&addr, sizeof(addr));
    }

and on 4 cpus in parallel:

                                                 lookups per sec
    base (no tracepoints, no kprobes)                 381k
    with kprobe at fib_table_lookup()                 325k
    with tracepoint at fib:fib_table_lookup           330k
    with raw_tracepoint at fib:fib_table_lookup       365k

Fixes: 14c174633f34 ("random: remove unused tracepoints")
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Link: https://lore.kernel.org/r/20230115071613.125791-6-danieltimlee@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
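A minimal user-space sketch of the load loop above; the DUMMY_IP and DUMMY_PORT values (127.0.0.1, port 9) and the function name are illustrative assumptions, not necessarily what the sample uses:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define DUMMY_IP   "127.0.0.1"   /* assumed value */
    #define DUMMY_PORT 9             /* assumed value (discard port) */

    /* Each sendto() to localhost is intended to trigger a route lookup,
     * and with it fib_table_lookup(), in the kernel. */
    static void generate_fib_load(long iterations)
    {
        struct sockaddr_in addr = {};
        char buf[] = "test";
        int fd;

        fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        if (fd < 0)
            return;

        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = inet_addr(DUMMY_IP);
        addr.sin_port = htons(DUMMY_PORT);

        for (long i = 0; i < iterations; i++)
            sendto(fd, buf, strlen(buf), 0,
                   (struct sockaddr *)&addr, sizeof(addr));

        close(fd);
    }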
2022-04-11  samples/bpf: Use libbpf 1.0 API mode instead of RLIMIT_MEMLOCK  (Yafang Shao; 1 file changed, 1 deletion(-))
We have switched to memcg-based memory accounting, so the rlimit is not needed any more. LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK was introduced in libbpf for backward compatibility, so we can use it instead now. This patch also removes the now-useless header sys/resource.h from many files in samples/bpf.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220409125958.92629-2-laoar.shao@gmail.com
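A sketch of the resulting pattern, assuming the usual samples/bpf main() structure; the point is that a single libbpf_set_strict_mode() call replaces the manual rlimit handling:

    #include <bpf/libbpf.h>

    int main(int argc, char **argv)
    {
        /* Opt into libbpf 1.0 behaviour: libbpf bumps RLIMIT_MEMLOCK
         * automatically, and only on kernels that still account BPF
         * memory against it (i.e. without memcg-based accounting). */
        libbpf_set_strict_mode(LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK);

        /* ... open, load and attach the BPF objects as before ... */
        return 0;
    }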
2020-12-03  bpf: samples: Do not touch RLIMIT_MEMLOCK  (Roman Gushchin; 1 file changed, 2 deletions(-))
Since bpf is not using rlimit memlock for the memory accounting and control, do not change the limit in sample applications.

Signed-off-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20201201215900.3569844-35-guro@fb.com
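For reference, a sketch of the boilerplate this kind of change removes from the samples (function and variable names here are illustrative):

    #include <sys/resource.h>

    /* Pre-memcg kernels charged BPF maps and programs against
     * RLIMIT_MEMLOCK, so samples used to raise the limit like this
     * before loading anything. This call is what gets dropped. */
    static void bump_memlock_rlimit(void)
    {
        struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};

        setrlimit(RLIMIT_MEMLOCK, &r);
    }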
2020-11-27  samples: bpf: Refactor test_overhead program with libbpf  (Daniel T. Lee; 1 file changed, 59 insertions(+), 23 deletions(-))
This commit refactors the existing program to use the libbpf BPF loader. Since kprobe, tracepoint and raw_tracepoint BPF programs can all be attached with the single bpf_program__attach() interface, that libbpf function is used here. Rather than hard-coding the number of CPUs in the code, this commit uses the number of available CPUs reported by _SC_NPROCESSORS_ONLN.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20201124090310.24374-6-danieltimlee@gmail.com
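A condensed sketch of the libbpf loading flow described above; the object file name is an illustrative placeholder, not the sample's actual file name:

    #include <stdio.h>
    #include <unistd.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
        struct bpf_object *obj;
        struct bpf_program *prog;
        int ncpus;

        /* Use however many CPUs are online instead of a hard-coded count. */
        ncpus = sysconf(_SC_NPROCESSORS_ONLN);

        obj = bpf_object__open_file("test_overhead.bpf.o", NULL); /* placeholder name */
        if (libbpf_get_error(obj))
            return 1;
        if (bpf_object__load(obj))
            return 1;

        /* One call covers kprobe, tracepoint and raw_tracepoint programs:
         * libbpf derives the attach type from each program's SEC() name. */
        bpf_object__for_each_program(prog, obj) {
            if (libbpf_get_error(bpf_program__attach(prog)))
                return 1;
        }

        printf("running on %d cpus\n", ncpus);
        /* ... fork one worker per CPU and run the benchmark loop ... */

        bpf_object__close(obj);
        return 0;
    }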
2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 206  (Thomas Gleixner; 1 file changed, 1 insertion(+), 4 deletions(-))
Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of version 2 of the gnu general public license as
    published by the free software foundation

extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

has been chosen to replace the boilerplate/reference in 107 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190528171438.615055994@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
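In practice, the multi-line GPLv2 notice in the file collapses to a single SPDX tag at the top, along the lines of:

    // SPDX-License-Identifier: GPL-2.0-only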
2018-07-05  samples/bpf: Check the error of write() and read()  (Taeung Song; 1 file changed, 15 insertions(+), 4 deletions(-))
test_task_rename() and test_urandom_read() can fail during write() and read(), so check the results of those calls.

Reviewed-by: David Laight <David.Laight@ACULAB.COM>
Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
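A sketch of the kind of check being added; the helper name, message text and exact error handling are illustrative and may differ from the sample:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Instead of ignoring the return value, treat a short or failed
     * write() as a test error and report it. */
    static int rename_task(int fd, const char *newname)
    {
        ssize_t len = strlen(newname);

        if (write(fd, newname, len) != len) {
            fprintf(stderr, "task rename failed: %s\n", strerror(errno));
            return -1;
        }
        return 0;
    }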
2018-05-15  samples: bpf: include bpf/bpf.h instead of local libbpf.h  (Jakub Kicinski; 1 file changed, 1 insertion(+), 1 deletion(-))
There are two files in the tree called libbpf.h, which is becoming problematic. Most samples don't actually need the local libbpf.h; they simply include it to get to bpf/bpf.h. Include bpf/bpf.h directly instead.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
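In each affected sample the change amounts to a one-line include swap, roughly:

    -#include "libbpf.h"
    +#include <bpf/bpf.h>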
2018-03-28  samples/bpf: raw tracepoint test  (Alexei Starovoitov; 1 file changed, 12 insertions(+))
Add an empty raw_tracepoint BPF program to test overhead, similar to the kprobe and traditional tracepoint tests.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
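A minimal sketch of what such an empty raw_tracepoint program looks like on the BPF side; the header names follow current libbpf conventions rather than the exact 2018 sample sources:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Empty body: the goal is to measure pure attach/dispatch overhead,
     * not to do any work inside the program. */
    SEC("raw_tracepoint/task_rename")
    int prog_raw_tp(void *ctx)
    {
        return 0;
    }

    char _license[] SEC("license") = "GPL";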
2016-04-08  samples/bpf: add tracepoint vs kprobe performance tests  (Alexei Starovoitov; 1 file changed, 162 insertions(+))
The first microbenchmark does:

    fd = open("/proc/self/comm");
    for() {
        write(fd, "test");
    }

and on 4 cpus in parallel:

                                            writes per sec
    base (no tracepoints, no kprobes)            930k
    with kprobe at __set_task_comm()             420k
    with tracepoint at task:task_rename          730k

For the kprobe, a full bpf program manually fetches oldcomm and newcomm via bpf_probe_read. For the tracepoint, the bpf program does nothing, since the arguments are copied by the tracepoint.

The second microbenchmark does:

    fd = open("/dev/urandom");
    for() {
        read(fd, buf);
    }

and on 4 cpus in parallel:

                                            reads per sec
    base (no tracepoints, no kprobes)            300k
    with kprobe at urandom_read()                279k
    with tracepoint at random:urandom_read       290k

The bpf progs attached to the kprobe and the tracepoint are noops.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
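A minimal sketch of the first load loop, close to the pseudo-code above; the iteration-count parameter is an illustrative addition:

    #include <fcntl.h>
    #include <unistd.h>

    /* Rewriting /proc/self/comm renames the current task, so each write()
     * goes through __set_task_comm() and fires task:task_rename. */
    static void test_task_rename(long iterations)
    {
        char buf[] = "test";
        int fd = open("/proc/self/comm", O_WRONLY | O_TRUNC);

        if (fd < 0)
            return;

        for (long i = 0; i < iterations; i++)
            if (write(fd, buf, sizeof(buf)) < 0)
                break;

        close(fd);
    }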