Age | Commit message | Author | Files | Lines
2024-03-11mm: Introduce vmap_page_range() to map pages in PCI address spaceAlexei Starovoitov7-18/+32
ioremap_page_range() should be used for ranges within the vmalloc range only. The vmalloc ranges are allocated by get_vm_area(). PCI has a "resource" allocator that manages the PCI_IOBASE..IO_SPACE_LIMIT address range; hence, introduce vmap_page_range() to be used exclusively to map pages in the PCI address space. Fixes: 3e49a866c9dc ("mm: Enforce VM_IOREMAP flag and range in ioremap_page_range.") Reported-by: Miguel Ojeda <ojeda@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Miguel Ojeda <ojeda@kernel.org> Link: https://lore.kernel.org/bpf/CANiq72ka4rir+RTN2FQoT=Vvprp_Ao-CvoYEkSNqtSY+RZj+AA@mail.gmail.com
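A minimal usage sketch, not taken from the patch itself: how a generic pci_remap_iospace() might call the new helper. The vmap_page_range(addr, end, phys_addr, prot) prototype is assumed here; treat it as an assumption rather than the final API.

  int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
  {
          unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start;

          if (res->end > IO_SPACE_LIMIT)
                  return -EINVAL;

          /* map the PCI I/O resource into the fixed PCI_IOBASE window */
          return vmap_page_range(vaddr, vaddr + resource_size(res),
                                 phys_addr, pgprot_device(PAGE_KERNEL));
  }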
2024-03-09arm64, bpf: Use bpf_prog_pack for arm64 bpf trampolinePuranjay Mohan1-9/+46
We used bpf_prog_pack to aggregate bpf programs into huge pages to relieve the iTLB pressure on the system. This was merged for ARM64[1]. We can apply it to the bpf trampoline as well; this would increase the performance of fentry and struct_ops programs. [1] https://lore.kernel.org/bpf/20240228141824.119877-1-puranjay12@gmail.com/ Signed-off-by: Puranjay Mohan <puranjay12@gmail.com> Reviewed-by: Pu Lehui <pulehui@huawei.com> Message-ID: <20240304202803.31400-1-puranjay12@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-08Merge branch 'fix-hash-bucket-overflow-checks-for-32-bit-arches'Alexei Starovoitov3-13/+21
Toke Høiland-Jørgensen says:

====================
Fix hash bucket overflow checks for 32-bit arches

Syzbot managed to trigger a crash by creating a DEVMAP_HASH map with a large number of buckets, because the overflow check relies on behaviour that is only well-defined on 64-bit arches. Fix the overflow checks to happen before values are rounded up in all the affected map types.

v3:
- Keep the htab->n_buckets > U32_MAX / sizeof(struct bucket) check
- Use 1UL << 31 instead of U32_MAX / 2 + 1 as the constant to check against
- Add patch to fix stackmap.c

v2:
- Fix off-by-one error in overflow check
- Apply the same fix to hashtab, where the devmap_hash code was copied from (John)

Toke Høiland-Jørgensen (3):
  bpf: Fix DEVMAP_HASH overflow check on 32-bit arches
  bpf: Fix hashtab overflow check on 32-bit arches
  bpf: Fix stackmap overflow check on 32-bit arches

 kernel/bpf/devmap.c   | 11 ++++++-----
 kernel/bpf/hashtab.c  | 14 +++++++++-----
 kernel/bpf/stackmap.c |  9 ++++++---
 3 files changed, 21 insertions(+), 13 deletions(-)
====================

Link: https://lore.kernel.org/r/20240307120340.99577-1-toke@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-08bpf: Fix stackmap overflow check on 32-bit archesToke Høiland-Jørgensen1-3/+6
The stackmap code relies on roundup_pow_of_two() to compute the number of hash buckets, and contains an overflow check by checking if the resulting value is 0. However, on 32-bit arches, the roundup code itself can overflow by doing a 32-bit left-shift of an unsigned long value, which is undefined behaviour, so it is not guaranteed to truncate neatly. This was triggered by syzbot on the DEVMAP_HASH type, which contains the same check, copied from the hashtab code. The commit in the fixes tag actually attempted to fix this, but the fix did not account for the UB, so the fix only works on CPUs where an overflow does result in a neat truncation to zero, which is not guaranteed. Checking the value before rounding does not have this problem. Fixes: 6183f4d3a0a2 ("bpf: Check for integer overflow when using roundup_pow_of_two()") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Reviewed-by: Bui Quang Minh <minhquangbui99@gmail.com> Message-ID: <20240307120340.99577-4-toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-08bpf: Fix hashtab overflow check on 32-bit archesToke Høiland-Jørgensen1-5/+9
The hashtab code relies on roundup_pow_of_two() to compute the number of hash buckets, and contains an overflow check by checking if the resulting value is 0. However, on 32-bit arches, the roundup code itself can overflow by doing a 32-bit left-shift of an unsigned long value, which is undefined behaviour, so it is not guaranteed to truncate neatly. This was triggered by syzbot on the DEVMAP_HASH type, which contains the same check, copied from the hashtab code. So apply the same fix to hashtab, by moving the overflow check to before the roundup. Fixes: daaf427c6ab3 ("bpf: fix arraymap NULL deref and missing overflow and zero size checks") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Message-ID: <20240307120340.99577-3-toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-08bpf: Fix DEVMAP_HASH overflow check on 32-bit archesToke Høiland-Jørgensen1-5/+6
The devmap code allocates a number of hash buckets equal to the next power of two of the max_entries value provided when creating the map. When rounding up to the next power of two, the 32-bit variable storing the number of buckets can overflow, and the code checks for overflow by checking if the truncated 32-bit value is equal to 0. However, on 32-bit arches the rounding up itself can overflow mid-way through, because it ends up doing a left-shift of 32 bits on an unsigned long value. If the size of an unsigned long is four bytes, this is undefined behaviour, so there is no guarantee that we'll end up with a nice and tidy 0-value at the end. Syzbot managed to turn this into a crash on arm32 by creating a DEVMAP_HASH with max_entries > 0x80000000 and then trying to update it. Fix this by moving the overflow check to before the rounding up operation. Fixes: 6f9d451ab1a3 ("xdp: Add devmap_hash map type for looking up devices by hashed index") Link: https://lore.kernel.org/r/000000000000ed666a0611af6818@google.com Reported-and-tested-by: syzbot+8cd36f6b65f3cafd400a@syzkaller.appspotmail.com Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Message-ID: <20240307120340.99577-2-toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
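A minimal sketch (not the exact diff) of the fix pattern described in these three patches: reject an oversized max_entries before calling roundup_pow_of_two(), instead of testing the rounded result for zero, so the undefined 32-bit shift can never be reached. The variable names below are illustrative.

  /* sketch: validate before rounding; 1UL << 31 matches the cover letter */
  if (attr->max_entries > (1UL << 31))
          return ERR_PTR(-EINVAL);

  n_buckets = roundup_pow_of_two(attr->max_entries);
  cost = (u64)n_buckets * sizeof(struct bucket); /* 64-bit cost math as before */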
2024-03-08Merge branch 'bpf: arena prerequisites'Martin KaFai Lau7-12/+96
Alexei Starovoitov says:

====================
These are bpf_arena prerequisite patches. They are useful on their own.

Alexei Starovoitov (5):
  bpf: Allow kfuncs return 'void *'
  bpf: Recognize '__map' suffix in kfunc arguments
  bpf: Plumb get_unmapped_area() callback into bpf_map_ops
  libbpf: Allow specifying 64-bit integers in map BTF.
  bpf: Tell bpf programs kernel's PAGE_SIZE
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-08bpf: Tell bpf programs kernel's PAGE_SIZEAlexei Starovoitov1-1/+6
vmlinux BTF includes all kernel enums. Add __PAGE_SIZE = PAGE_SIZE enum, so that bpf programs that include vmlinux.h can easily access it. Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/r/20240307031228.42896-7-alexei.starovoitov@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
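A small illustrative snippet, not part of the patch: once vmlinux.h carries the __PAGE_SIZE enumerator, a BPF program can size buffers with it instead of hard-coding 4096.

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* sized to the kernel's page size via the BTF-exported enum */
  char page_buf[__PAGE_SIZE];

  char LICENSE[] SEC("license") = "GPL";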
2024-03-08bpftool: rename is_internal_mmapable_map into is_mmapable_mapAndrii Nakryiko1-9/+9
It's not restricted to working with "internal" maps; it cares about any map that can be mmap'ed. Reflect that in a more succinct and generic name. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/r/20240307031228.42896-6-alexei.starovoitov@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-08libbpf: Allow specifying 64-bit integers in map BTF.Alexei Starovoitov2-2/+43
__uint() macro that is used to specify map attributes like:

  __uint(type, BPF_MAP_TYPE_ARRAY);
  __uint(map_flags, BPF_F_MMAPABLE);

It is limited to 32-bit, since BTF_KIND_ARRAY has u32 "number of elements" field in "struct btf_array". Introduce __ulong() macro that allows specifying values bigger than 32-bit. In map definition "map_extra" is the only u64 field, so far.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/r/20240307031228.42896-5-alexei.starovoitov@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
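A hypothetical map definition showing __ulong() next to __uint(); the arena map type and the map_extra value are borrowed from the surrounding series purely for illustration.

  struct {
          __uint(type, BPF_MAP_TYPE_ARENA);   /* assumption: arena map from later patches */
          __uint(map_flags, BPF_F_MMAPABLE);
          __uint(max_entries, 1000);
          __ulong(map_extra, 1ull << 44);     /* 64-bit value, not expressible via __uint() */
  } arena SEC(".maps");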
2024-03-08bpf: Plumb get_unmapped_area() callback into bpf_map_opsAlexei Starovoitov2-0/+19
Subsequent patches introduce bpf_arena that imposes special alignment requirements on address selection. Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/r/20240307031228.42896-4-alexei.starovoitov@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
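A rough sketch of the assumed shape of the new bpf_map_ops member; the field name and prototype here simply mirror mm's get_unmapped_area() and are an assumption, not a quote from the diff.

  /* in struct bpf_map_ops (assumed) */
  unsigned long (*map_get_unmapped_area)(struct file *filep, unsigned long addr,
                                         unsigned long len, unsigned long pgoff,
                                         unsigned long flags);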
2024-03-08bpf: Recognize '__map' suffix in kfunc argumentsAlexei Starovoitov1-0/+16
Recognize 'void *p__map' kfunc argument as 'struct bpf_map *p__map'. It allows kfunc to have 'void *' argument for maps, since bpf progs will call them as:

  struct {
          __uint(type, BPF_MAP_TYPE_ARENA);
          ...
  } arena SEC(".maps");

  bpf_kfunc_with_map(... &arena ...);

Underneath libbpf will load CONST_PTR_TO_MAP into the register via ld_imm64 insn. If kfunc was defined with 'struct bpf_map *' it would pass the verifier as well, but bpf prog would need to type cast the argument (void *)&arena, which is not clean.

Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/r/20240307031228.42896-3-alexei.starovoitov@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-08bpf: Allow kfuncs return 'void *'Alexei Starovoitov1-0/+3
Recognize return of 'void *' from kfunc as returning unknown scalar. Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/r/20240307031228.42896-2-alexei.starovoitov@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-07bpf, riscv64/cfi: Support kCFI + BPF on riscv64Puranjay Mohan6-7/+90
The riscv BPF JIT doesn't emit proper kCFI prologues for BPF programs and struct_ops trampolines when CONFIG_CFI_CLANG is enabled. This causes CFI failures when calling BPF programs and can even crash the kernel due to invalid memory accesses.

Example crash:

  root@rv-selftester:~/bpf# ./test_progs -a dummy_st_ops

  Unable to handle kernel paging request at virtual address ffffffff78204ffc
  Oops [#1]
  Modules linked in: bpf_testmod(OE) [....]
  CPU: 3 PID: 356 Comm: test_progs Tainted: P OE 6.8.0-rc1 #1
  Hardware name: riscv-virtio,qemu (DT)
  epc : bpf_struct_ops_test_run+0x28c/0x5fc
   ra : bpf_struct_ops_test_run+0x26c/0x5fc
  epc : ffffffff82958010 ra : ffffffff82957ff0 sp : ff200000007abc80
   gp : ffffffff868d6218 tp : ff6000008d87b840 t0 : 000000000000000f
   t1 : 0000000000000000 t2 : 000000002005793e s0 : ff200000007abcf0
   s1 : ff6000008a90fee0 a0 : 0000000000000000 a1 : 0000000000000000
   a2 : 0000000000000000 a3 : 0000000000000000 a4 : 0000000000000000
   a5 : ffffffff868dba26 a6 : 0000000000000001 a7 : 0000000052464e43
   s2 : 00007ffffc0a95f0 s3 : ff6000008a90fe80 s4 : ff60000084c24c00
   s5 : ffffffff78205000 s6 : ff60000088750648 s7 : ff20000000035008
   s8 : fffffffffffffff4 s9 : ffffffff86200610 s10: 0000000000000000
   s11: 0000000000000000 t3 : ffffffff8483dc30 t4 : ffffffff8483dc10
   t5 : ffffffff8483dbf0 t6 : ffffffff8483dbd0
  status: 0000000200000120 badaddr: ffffffff78204ffc cause: 000000000000000d
  [<ffffffff82958010>] bpf_struct_ops_test_run+0x28c/0x5fc
  [<ffffffff805083ee>] bpf_prog_test_run+0x170/0x548
  [<ffffffff805029c8>] __sys_bpf+0x2d2/0x378
  [<ffffffff804ff570>] __riscv_sys_bpf+0x5c/0x120
  [<ffffffff8000e8fe>] syscall_handler+0x62/0xe4
  [<ffffffff83362df6>] do_trap_ecall_u+0xc6/0x27c
  [<ffffffff833822c4>] ret_from_exception+0x0/0x64
  Code: b603 0109 b683 0189 b703 0209 8493 0609 157d 8d65 (a303) ffca
  ---[ end trace 0000000000000000 ]---
  Kernel panic - not syncing: Fatal exception
  SMP: stopping secondary CPUs

Implement proper kCFI prologues for the BPF programs and callbacks and drop __nocfi for riscv64. Fix the trampoline generation code to emit kCFI prologue when a struct_ops trampoline is being prepared.

Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Björn Töpel <bjorn@kernel.org>
Link: https://lore.kernel.org/bpf/20240303170207.82201-2-puranjay12@gmail.com
2024-03-07Merge branch 'libbpf-type-suffixes-and-autocreate-flag-for-struct_ops-maps'Andrii Nakryiko17-69/+704
Eduard Zingerman says:

====================
libbpf: type suffixes and autocreate flag for struct_ops maps

Tweak struct_ops related APIs to allow the following features:
- specify version suffixes for struct_ops map types;
- share the same BPF program between several map definitions with different local BTF types, assuming only maps with the same kernel BTF type would be selected for load;
- toggle autocreate flag for struct_ops maps;
- automatically toggle autoload for struct_ops programs referenced from struct_ops maps, depending on autocreate status of the corresponding map;
- use SEC("?.struct_ops") and SEC("?.struct_ops.link") to define struct_ops maps with autocreate == false after object open.

This would allow loading programs like below:

  SEC("struct_ops/foo") int BPF_PROG(foo) { ... }
  SEC("struct_ops/bar") int BPF_PROG(bar) { ... }

  struct bpf_testmod_ops___v1 { int (*foo)(void); };
  struct bpf_testmod_ops___v2 {
    int (*foo)(void);
    int (*bar)(void);
  };

  /* Assume kernel type name to be 'test_ops' */
  SEC(".struct_ops.link")
  struct test_ops___v1 map_v1 = {
    /* Program 'foo' shared by maps with
     * different local BTF type
     */
    .foo = (void *)foo
  };

  SEC(".struct_ops.link")
  struct test_ops___v2 map_v2 = {
    .foo = (void *)foo,
    .bar = (void *)bar
  };

Assuming the following tweaks are done before loading:

  /* to load v1 */
  bpf_map__set_autocreate(skel->maps.map_v1, true);
  bpf_map__set_autocreate(skel->maps.map_v2, false);

  /* to load v2 */
  bpf_map__set_autocreate(skel->maps.map_v1, false);
  bpf_map__set_autocreate(skel->maps.map_v2, true);

Patch #8 ties autocreate and autoload flags for struct_ops maps and programs.

Changelog:
- v3 [3] -> v4:
  - changes for multiple styling suggestions from Andrii;
  - patch #5: libbpf log capture now happens for LIBBPF_INFO and LIBBPF_WARN messages and does not depend on verbosity flags (Andrii);
  - patch #6: fixed runtime crash caused by conflict with newly added test case struct_ops_multi_pages;
  - patch #7: fixed free of possibly uninitialized pointer (Daniel);
  - patch #8: simpler algorithm to detect which programs to autoload (Andrii);
  - patch #9: added assertions for autoload flag after object load (Andrii);
  - patch #12: DATASEC name rewrite in libbpf is now done in place, no new strings added to BTF (Andrii);
  - patch #14: allow any printable characters in DATASEC names when kernel validates BTF (Andrii).
- v2 [2] -> v3:
  - moved patch #8 logic to be fully done on load (requested by Andrii in offlist discussion);
  - in patch #9 added test case for shadow vars and autocreate/autoload interaction.
- v1 [1] -> v2:
  - fixed memory leak in patch #1 (Kui-Feng);
  - improved error messages in patch #2 (Martin, Andrii);
  - in bad_struct_ops selftest from patch #6 added .test_2 map member setup (David);
  - added utility functions to capture libbpf log from selftests (David);
  - in selftests replaced usage of ...__open_and_load by separate calls to ..._open() and ..._load() (Andrii);
  - removed serial_... in selftest definitions (Andrii);
  - improved comments in selftest struct_ops_autocreate from patch #7 (David);
  - removed autoload toggling logic incompatible with shadow variables from bpf_map__set_autocreate(); instead struct_ops programs autoload property is computed at struct_ops maps load phase, see patch #8 (Kui-Feng, Martin, Andrii);
  - added support for SEC("?.struct_ops") and SEC("?.struct_ops.link") (Andrii).

[1] https://lore.kernel.org/bpf/20240227204556.17524-1-eddyz87@gmail.com/
[2] https://lore.kernel.org/bpf/20240302011920.15302-1-eddyz87@gmail.com/
[3] https://lore.kernel.org/bpf/20240304225156.24765-1-eddyz87@gmail.com/
====================

Link: https://lore.kernel.org/r/20240306104529.6453-1-eddyz87@gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-03-07selftests/bpf: Test cases for '?' in BTF namesEduard Zingerman1-0/+29
Two test cases to verify that '?' and other printable characters are allowed in BTF DATASEC names:
- DATASEC with name "?.foo bar:buz" should be accepted;
- type with name "?foo" should be rejected.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-16-eddyz87@gmail.com
2024-03-07bpf: Allow all printable characters in BTF DATASEC namesEduard Zingerman1-1/+15
The intent is to allow libbpf to use SEC("?.struct_ops") to identify struct_ops maps that are optional, e.g. like in the following BPF code:

  SEC("?.struct_ops")
  struct test_ops optional_map = { ... };

Which yields the following BTF:

  ...
  [13] DATASEC '?.struct_ops' size=0 vlen=...
  ...

To load such BTF libbpf rewrites DATASEC name before load. After this patch the rewrite won't be necessary.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-15-eddyz87@gmail.com
2024-03-07selftests/bpf: Test case for SEC("?.struct_ops")Eduard Zingerman2-6/+56
Check that "?.struct_ops" and "?.struct_ops.link" section names define struct_ops maps with autocreate == false after open. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240306104529.6453-14-eddyz87@gmail.com
2024-03-07libbpf: Rewrite btf datasec names starting from '?'Eduard Zingerman3-2/+41
Optional struct_ops maps are defined using a question mark at the start of the section name, e.g.:

  SEC("?.struct_ops")
  struct test_ops optional_map = { ... };

This commit teaches libbpf to detect if the kernel allows the '?' prefix in datasec names, and if it doesn't, to rewrite such names by replacing '?' with '_', e.g.:

  DATASEC ?.struct_ops -> DATASEC _.struct_ops

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-13-eddyz87@gmail.com
2024-03-07libbpf: Struct_ops in SEC("?.struct_ops") / SEC("?.struct_ops.link")Eduard Zingerman1-1/+14
Allow using two new section names for struct_ops maps:
- SEC("?.struct_ops")
- SEC("?.struct_ops.link")

to specify maps that have bpf_map->autocreate == false after open.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-12-eddyz87@gmail.com
2024-03-07libbpf: Replace elf_state->st_ops_* fields with SEC_ST_OPS sec_typeEduard Zingerman1-29/+32
The next patch would add two new section names for struct_ops maps. To make working with multiple struct_ops sections more convenient:
- remove fields like elf_state->st_ops_{shndx,link_shndx};
- mark section descriptions hosting struct_ops as elf_sec_desc->sec_type == SEC_ST_OPS.

After these changes struct_ops sections could be processed uniformly by iterating bpf_object->efile.secs entries.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-11-eddyz87@gmail.com
2024-03-07selftests/bpf: Verify struct_ops autoload/autocreate syncEduard Zingerman4-4/+125
Check that autocreate flags of struct_ops map cause autoload of struct_ops corresponding programs:
- when struct_ops program is referenced only from a map for which autocreate is set to false, that program should not be loaded;
- when struct_ops program with autoload == false is set to be used from a map with autocreate == true using shadow var, that program should be loaded;
- when struct_ops program is not referenced from any map, object load should fail.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-10-eddyz87@gmail.com
2024-03-07libbpf: Sync progs autoload with maps autocreate for struct_ops mapsEduard Zingerman1-0/+43
Automatically select which struct_ops programs to load depending on which struct_ops maps are selected for automatic creation. E.g. for the BPF code below:

  SEC("struct_ops/test_1") int BPF_PROG(foo) { ... }
  SEC("struct_ops/test_2") int BPF_PROG(bar) { ... }

  SEC(".struct_ops.link")
  struct test_ops___v1 A = {
    .foo = (void *)foo
  };

  SEC(".struct_ops.link")
  struct test_ops___v2 B = {
    .foo = (void *)foo,
    .bar = (void *)bar,
  };

And the following libbpf API calls:

  bpf_map__set_autocreate(skel->maps.A, true);
  bpf_map__set_autocreate(skel->maps.B, false);

The autoload would be enabled for program 'foo' and disabled for program 'bar'.

During load, for each struct_ops program P, referenced from some struct_ops map M:
- set P.autoload = true if M.autocreate is true for some M;
- set P.autoload = false if M.autocreate is false for all M;
- don't change P.autoload, if P is not referenced from any map.

Do this after bpf_object__init_kern_struct_ops_maps() to make sure that shadow vars assignment is done.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-9-eddyz87@gmail.com
2024-03-07selftests/bpf: Test autocreate behavior for struct_ops mapsEduard Zingerman2-0/+118
Check that bpf_map__set_autocreate() can be used to disable automatic creation for struct_ops maps. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240306104529.6453-8-eddyz87@gmail.com
2024-03-07selftests/bpf: Bad_struct_ops testEduard Zingerman4-0/+88
When loading struct_ops programs the kernel requires the BTF id of the struct_ops type and the member index for the attachment point inside that type. This makes it impossible to use the same BPF program in several struct_ops maps that have different struct_ops types. Check that libbpf rejects such BPF object files. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240306104529.6453-7-eddyz87@gmail.com
2024-03-07selftests/bpf: Utility functions to capture libbpf log in test_progsEduard Zingerman2-0/+62
Several test_progs tests already capture the libbpf log in order to check for some expected output, e.g. bpf_tcp_ca.c, kfunc_dynptr_param.c, log_buf.c and a few others. This commit provides a, hopefully, simple API to capture the libbpf log without the need to define a new print callback in each test:

  /* Creates a global memstream capturing INFO and WARN level output
   * passed to libbpf_print_fn.
   * Returns 0 on success, negative value on failure.
   * On failure the description is printed using PRINT_FAIL and
   * current test case is marked as fail.
   */
  int start_libbpf_log_capture(void)

  /* Destroys global memstream created by start_libbpf_log_capture().
   * Returns a pointer to captured data which has to be freed.
   * Returned buffer is null terminated.
   */
  char *stop_libbpf_log_capture(void)

The intended usage is as follows:

  if (start_libbpf_log_capture())
          return;
  use_libbpf();
  char *log = stop_libbpf_log_capture();
  ASSERT_HAS_SUBSTR(log, "... expected ...", "expected some message");
  free(log);

As a safety measure, free(stop_libbpf_log_capture()) is invoked in the epilogue of the test_progs.c:run_one_test().

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-6-eddyz87@gmail.com
2024-03-07selftests/bpf: Test struct_ops map definition with type suffixEduard Zingerman3-10/+46
Extend struct_ops_module test case to check if it is possible to use '___' suffixes for struct_ops type specification. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/bpf/20240306104529.6453-5-eddyz87@gmail.com
2024-03-07libbpf: Honor autocreate flag for struct_ops mapsEduard Zingerman1-3/+15
Skip load steps for struct_ops maps not marked for automatic creation. This should allow loading a bpf object in situations like below:

  SEC("struct_ops/foo") int BPF_PROG(foo) { ... }
  SEC("struct_ops/bar") int BPF_PROG(bar) { ... }

  struct test_ops___v1 {
    int (*foo)(void);
  };

  struct test_ops___v2 {
    int (*foo)(void);
    int (*does_not_exist)(void);
  };

  SEC(".struct_ops.link")
  struct test_ops___v1 map_for_old = {
    .foo = (void *)foo
  };

  SEC(".struct_ops.link")
  struct test_ops___v2 map_for_new = {
    .foo = (void *)foo,
    .does_not_exist = (void *)bar
  };

Suppose the program is loaded on an old kernel that does not have a definition for the 'does_not_exist' struct_ops member. After this commit it would be possible to load such an object file after the following tweaks:

  bpf_program__set_autoload(skel->progs.bar, false);
  bpf_map__set_autocreate(skel->maps.map_for_new, false);

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20240306104529.6453-4-eddyz87@gmail.com
2024-03-07libbpf: Tie struct_ops programs to kernel BTF ids, not to local idsEduard Zingerman1-23/+26
Enforce the following existing limitation on struct_ops programs based on kernel BTF id instead of program-local BTF id:

  struct_ops BPF prog can be re-used between multiple .struct_ops &
  .struct_ops.link as long as it's the same struct_ops struct
  definition and the same function pointer field

This allows reusing same BPF program for versioned struct_ops map definitions, e.g.:

  SEC("struct_ops/test")
  int BPF_PROG(foo) { ... }

  struct some_ops___v1 { int (*test)(void); };
  struct some_ops___v2 { int (*test)(void); };

  SEC(".struct_ops.link") struct some_ops___v1 a = { .test = foo }
  SEC(".struct_ops.link") struct some_ops___v2 b = { .test = foo }

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-3-eddyz87@gmail.com
2024-03-07libbpf: Allow version suffixes (___smth) for struct_ops typesEduard Zingerman1-1/+5
E.g. allow the following struct_ops definitions:

  struct bpf_testmod_ops___v1 { int (*test)(void); };
  struct bpf_testmod_ops___v2 { int (*test)(void); };

  SEC(".struct_ops.link")
  struct bpf_testmod_ops___v1 a = { .test = ... }

  SEC(".struct_ops.link")
  struct bpf_testmod_ops___v2 b = { .test = ... }

Where both bpf_testmod_ops___v1 and bpf_testmod_ops___v2 would be resolved as 'struct bpf_testmod_ops' from kernel BTF.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240306104529.6453-2-eddyz87@gmail.com
2024-03-07Merge branch 'bpf-introduce-may_goto-and-cond_break'Andrii Nakryiko9-53/+292
Alexei Starovoitov says:

====================
bpf: Introduce may_goto and cond_break

From: Alexei Starovoitov <ast@kernel.org>

v5 -> v6:
- Rename BPF_JMA to BPF_JCOND
- Addressed Andrii's review comments

v4 -> v5:
- rewrote patch 1 to avoid fake may_goto_reg and use 'u32 may_goto_cnt' instead. This way may_goto handling is similar to bpf_loop() processing.
- fixed bug in patch 2 that RANGE_WITHIN should not use rold->type == NOT_INIT as a safe signal.
- patch 3 fixed negative offset computation in cond_break macro
- using bpf_arena and cond_break recompiled lib/glob.c as bpf prog and it works! It will be added as a selftest to arena series.

v3 -> v4:
- fix drained issue reported by John. may_goto insn could be implemented with sticky state (once reaches 0 it stays 0), but the verifier shouldn't assume that. It has to explore both branches. Arguably drained iterator state shouldn't be there at all. bpf_iter_css_next() is not sticky. Can be fixed, but auditing all iterators for stickiness. That's an orthogonal discussion.
- explained JMA name reasons in patch 1
- fixed test_progs-no_alu32 issue and added another test

v2 -> v3:
- Major change: drop bpf_can_loop() kfunc and introduce may_goto instruction instead. kfunc is a function call while may_goto doesn't consume any registers and LLVM can produce much better code due to less register pressure.
- instead of counting from zero to BPF_MAX_LOOPS start from it instead and break out of the loop when count reaches zero
- use may_goto instruction in cond_break macro
- recognize that 'exact' state comparison doesn't need to be truly exact. regsafe() should ignore precision and liveness marks, but range_within logic is safe to use while evaluating open coded iterators.
====================

Link: https://lore.kernel.org/r/20240306031929.42666-1-alexei.starovoitov@gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-03-07selftests/bpf: Test may_gotoAlexei Starovoitov2-3/+101
Add tests for may_goto instruction via cond_break macro. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Tested-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20240306031929.42666-5-alexei.starovoitov@gmail.com
2024-03-07bpf: Add cond_break macroAlexei Starovoitov1-0/+12
Use may_goto instruction to implement cond_break macro. Ideally the macro should be written as:

  asm volatile goto(".byte 0xe5; .byte 0; .short %l[l_break] ... .long 0;

but LLVM doesn't recognize fixup of 2 byte PC relative yet. Hence use:

  asm volatile goto(".byte 0xe5; .byte 0; .long %l[l_break] ... .short 0;

that produces correct asm on little endian.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20240306031929.42666-4-alexei.starovoitov@gmail.com
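For context, a short usage sketch consistent with the example in the may_goto patch below: cond_break compiles to a nop until the BPF runtime decides the loop must terminate.

  static int zero = 0;    /* keeps 'i' imprecise for the verifier */
  int i;

  for (i = zero; i < cnt; cond_break, i++) {
          /* loop body; the verifier no longer has to prove a bound here */
  }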
2024-03-07bpf: Recognize that two registers are safe when their ranges matchAlexei Starovoitov1-21/+30
When open-coded iterators, bpf_loop or may_goto are used, the following two states are equivalent and safe to prune the search:

  cur state: fp-8_w=scalar(id=3,smin=umin=smin32=umin32=2,smax=umax=smax32=umax32=11,var_off=(0x0; 0xf))
  old state: fp-8_rw=scalar(id=2,smin=umin=smin32=umin32=1,smax=umax=smax32=umax32=11,var_off=(0x0; 0xf))

In other words, "exact" state match should ignore liveness and precision marks, since open-coded iterator logic didn't complete their propagation. reg_old->type == NOT_INIT && reg_cur->type != NOT_INIT is also not safe to prune while looping, but range_within logic that applies to scalars, ptr_to_mem, map_value, pkt_ptr is safe to rely on.

Avoid doing such comparison when regular infinite loop detection logic is used, otherwise bounded loop logic will declare such "infinite loop" as a false positive. Such an example is in progs/verifier_loops1.c not_an_inifinite_loop().

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20240306031929.42666-3-alexei.starovoitov@gmail.com
2024-03-07Merge branch 'mm-enforce-ioremap-address-space-and-introduce-sparse-vm_area'Andrii Nakryiko2-2/+75
Alexei Starovoitov says:

====================
mm: Enforce ioremap address space and introduce sparse vm_area

From: Alexei Starovoitov <ast@kernel.org>

v3 -> v4:
- dropped VM_XEN patch for now. It will be in the follow up.
- fixed constant as pointed out by Mike

v2 -> v3:
- added Christoph's reviewed-by to patch 1
- cap commit log lines to 75 chars
- factored out common checks in patch 3 into helper
- made vm_area_unmap_pages() return void

There are various users of kernel virtual address space: vmalloc, vmap, ioremap, xen.

- vmalloc use case dominates the usage. Such vm areas have VM_ALLOC flag and these areas are treated differently by KASAN.
- the areas created by vmap() function should be tagged with VM_MAP (as majority of the users do).
- ioremap areas are tagged with VM_IOREMAP and vm area start is aligned to size of the area unlike vmalloc/vmap.
- there is also xen usage that is marked as VM_IOREMAP, but it doesn't call ioremap_page_range() unlike all other VM_IOREMAP users.

To clean this up a bit, enforce that ioremap_page_range() checks the range and VM_IOREMAP flag.

In addition BPF would like to reserve regions of kernel virtual address space and populate it lazily, similar to xen use cases. For that reason, introduce VM_SPARSE flag and vm_area_[un]map_pages() helpers to populate this sparse area.

In the end the /proc/vmallocinfo will show "vmalloc", "vmap", "ioremap", "sparse" categories for different kinds of address regions. ioremap, sparse will return zero when dumped through /proc/kcore.
====================

Link: https://lore.kernel.org/r/20240305030516.41519-1-alexei.starovoitov@gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2024-03-07bpf: Introduce may_goto instructionAlexei Starovoitov6-30/+150
Introduce may_goto instruction that from the verifier pov is similar to open coded iterators bpf_for()/bpf_repeat() and bpf_loop() helper, but it doesn't iterate any objects. In assembly 'may_goto' is a nop most of the time until bpf runtime has to terminate the program for whatever reason. In the current implementation may_goto has a hidden counter, but other mechanisms can be used.

For programs written in C the later patch introduces 'cond_break' macro that combines 'may_goto' with 'break' statement and has similar semantics: cond_break is a nop until bpf runtime has to break out of this loop. It can be used in any normal "for" or "while" loop, like:

  for (i = zero; i < cnt; cond_break, i++) {

The verifier recognizes that may_goto is used in the program, reserves additional 8 bytes of stack, initializes them in subprog prologue, and replaces may_goto instruction with:

  aux_reg = *(u64 *)(fp - 40)
  if aux_reg == 0 goto pc+off
  aux_reg -= 1
  *(u64 *)(fp - 40) = aux_reg

may_goto instruction can be used by LLVM to implement __builtin_memcpy, __builtin_strcmp.

may_goto is not a full substitute for bpf_for() macro. bpf_for() doesn't have an induction variable that the verifier sees, so 'i' in bpf_for(i, 0, 100) is seen as imprecise and bounded. But when the code is written as:

  for (i = 0; i < 100; cond_break, i++)

the verifier sees 'i' as precise constant zero, hence cond_break (aka may_goto) doesn't help to converge the loop. A static or global variable can be used as a workaround:

  static int zero = 0;
  for (i = zero; i < 100; cond_break, i++) // works!

may_goto works well with arena pointers that don't need to be bounds checked on access. Load/store from arena returns imprecise unbounded scalar and loops with may_goto pass the verifier.

Reserve new opcode BPF_JMP | BPF_JCOND for may_goto insn. JCOND stands for conditional pseudo jump. Since goto_or_nop insn was proposed, it may use the same opcode. may_goto vs goto_or_nop can be distinguished by src_reg:

  code = BPF_JMP | BPF_JCOND
  src_reg = 0 - may_goto
  src_reg = 1 - goto_or_nop

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Tested-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20240306031929.42666-2-alexei.starovoitov@gmail.com
2024-03-07mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().Alexei Starovoitov2-2/+62
vmap/vmalloc APIs are used to map a set of pages into contiguous kernel virtual space. get_vm_area() with appropriate flag is used to request an area of kernel address range. It's used for vmalloc, vmap, ioremap, xen use cases.

- vmalloc use case dominates the usage. Such vm areas have VM_ALLOC flag.
- the areas created by vmap() function should be tagged with VM_MAP.
- ioremap areas are tagged with VM_IOREMAP.

BPF would like to extend the vmap API to implement a lazily-populated sparse, yet contiguous kernel virtual space. Introduce VM_SPARSE flag and vm_area_map_pages(area, start_addr, count, pages) API to map a set of pages within a given area. It has the same sanity checks as vmap() does. It also checks that get_vm_area() was created with VM_SPARSE flag which identifies such areas in /proc/vmallocinfo and returns zero pages on read through /proc/kcore.

The next commits will introduce bpf_arena which is a sparsely populated shared memory region between bpf program and user space process. It will map privately-managed pages into a sparse vm area with the following steps:

  // request virtual memory region during bpf prog verification
  area = get_vm_area(area_size, VM_SPARSE);

  // on demand
  vm_area_map_pages(area, kaddr, kend, pages);
  vm_area_unmap_pages(area, kaddr, kend);

  // after bpf program is detached and unloaded
  free_vm_area(area);

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/bpf/20240305030516.41519-3-alexei.starovoitov@gmail.com
2024-03-06mm: Enforce VM_IOREMAP flag and range in ioremap_page_range.Alexei Starovoitov1-0/+13
There are various users of get_vm_area() + ioremap_page_range() APIs. Enforce that get_vm_area() was requested as VM_IOREMAP type and range passed to ioremap_page_range() matches created vm_area to avoid accidentally ioremap-ing into wrong address range. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/bpf/20240305030516.41519-2-alexei.starovoitov@gmail.com
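A hedged sketch of the kind of check described above; find_vm_area() and get_vm_area_size() are existing vmalloc helpers, but the exact placement and error handling are assumptions, not the actual diff.

  /* inside ioremap_page_range(), before mapping anything (sketch) */
  struct vm_struct *area = find_vm_area((void *)addr);

  if (!area || !(area->flags & VM_IOREMAP))
          return -EINVAL; /* not an ioremap-style vm_area */

  if ((void *)addr != area->addr ||
      (void *)end != area->addr + get_vm_area_size(area))
          return -ERANGE; /* requested range doesn't match the vm_area */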
2024-03-05Merge branch 'Allow struct_ops maps with a large number of programs'Martin KaFai Lau7-60/+279
Kui-Feng Lee says:

====================
The BPF struct_ops previously only allowed for one page to be used for the trampolines of all links in a map. However, we have recently run out of space due to the large number of BPF program links. By allocating additional pages when we exhaust an existing page, we can accommodate more links in a single map.

The variable st_map->image has been changed to st_map->image_pages, and its type has been changed to an array of pointers to buffers of PAGE_SIZE. Additional pages are allocated when all existing pages are exhausted.

The test case loads a struct_ops map having 40 programs. Their trampolines take about 6.6k+ bytes over 1.5 pages on x86.

---

Major differences from v3:
- Refactor buffer allocations to bpf_struct_ops_tramp_buf_alloc() and bpf_struct_ops_tramp_buf_free().

Major differences from v2:
- Move image buffer allocation to bpf_struct_ops_prepare_trampoline().

Major differences from v1:
- Always free pages if failing to update.
- Allocate 8 pages at most.

v3: https://lore.kernel.org/all/20240224030302.1500343-1-thinker.li@gmail.com/
v2: https://lore.kernel.org/all/20240221225911.757861-1-thinker.li@gmail.com/
v1: https://lore.kernel.org/all/20240216182828.201727-1-thinker.li@gmail.com/
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-05selftests/bpf: Test struct_ops maps with a large number of struct_ops program.Kui-Feng Lee3-0/+176
Create and load a struct_ops map with a large number of struct_ops programs to generate trampolines taking a size over multiple pages. The map includes 40 programs. Their trampolines take 6.6k+ bytes, more than 1.5 pages, on x86. Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240224223418.526631-4-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-05bpf: struct_ops supports more than one page for trampolines.Kui-Feng Lee3-50/+96
The BPF struct_ops previously only allowed one page of trampolines. Each function pointer of a struct_ops is implemented by a struct_ops bpf program. Each struct_ops bpf program requires a trampoline. The following selftest patch shows each page can hold a little more than 20 trampolines. While one page is more than enough for the tcp-cc usecase, the sched_ext use case shows that one page is not always enough and hits the one-page limit. This patch overcomes the one-page limit by allocating another page when needed, and it is limited to a total of MAX_IMAGE_PAGES (8) pages, which is more than enough for reasonable usages. The variable st_map->image has been changed to st_map->image_pages, and its type has been changed to an array of pointers to pages. Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240224223418.526631-3-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-04bpf, net: validate struct_ops when updating value.Kui-Feng Lee2-10/+7
Perform all validations when updating values of struct_ops maps. Doing validation in st_ops->reg() and st_ops->update() is not necessary anymore. However, tcp_register_congestion_control() has been called in various places. It still needs to do validations. Cc: netdev@vger.kernel.org Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240224223418.526631-2-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-04selftests/bpf: xdp_hw_metadata reduce sleep intervalSong Yoong Siang1-1/+1
In the current ping-pong design, xdp_hw_metadata will wait until the packet transmission is completely done, and only then start to receive the next packet. The current sleep interval is 10ms, which is unnecessarily large. Typically, a NIC does not need such a long time to transmit a packet. Furthermore, during this 10ms sleep time, the app is unable to receive incoming packets. Therefore, this commit reduces the sleep interval to 10us, so that xdp_hw_metadata is able to support periodic packets with a shorter interval. 10us * 500 = 5ms should be enough for packet transmission and status retrieval. Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20240303083225.1184165-2-yoong.siang.song@intel.com
2024-03-04selftests/bpf: Extend uprobe/uretprobe triggering benchmarksAndrii Nakryiko4-46/+103
Settle on three "flavors" of uprobe/uretprobe, installed on different kinds of instruction: nop, push, and ret. All three are testing different internal code paths emulating or single-stepping instructions, so are interesting to compare and benchmark separately. To ensure `push rbp` instruction we ensure that uprobe_target_push() is not a leaf function by calling (global __weak) noop function and returning something afterwards (if we don't do that, compiler will just do a tail call optimization). Also, we need to make sure that compiler isn't skipping frame pointer generation, so let's add `-fno-omit-frame-pointers` to Makefile. Just to give an idea of where we currently stand in terms of relative performance of different uprobe/uretprobe cases vs a cheap syscall (getpgid()) baseline, here are results from my local machine: $ benchs/run_bench_uprobes.sh base : 1.561 ± 0.020M/s uprobe-nop : 0.947 ± 0.007M/s uprobe-push : 0.951 ± 0.004M/s uprobe-ret : 0.443 ± 0.007M/s uretprobe-nop : 0.471 ± 0.013M/s uretprobe-push : 0.483 ± 0.004M/s uretprobe-ret : 0.306 ± 0.007M/s Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240301214551.1686095-1-andrii@kernel.org
2024-03-04libbpf: Correct debug message in btf__load_vmlinux_btfChen Shen1-1/+1
In the function btf__load_vmlinux_btf, the debug message incorrectly refers to 'path' instead of 'sysfs_btf_path'. Signed-off-by: Chen Shen <peterchenshen@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/bpf/20240302062218.3587-1-peterchenshen@gmail.com
2024-03-04bpf, docs: Rename legacy conformance group to packetDave Thaler1-2/+2
There could be other legacy conformance groups in the future, so use a more descriptive name. The status of the conformance group in the IANA registry is what designates it as legacy, not the name of the group. Signed-off-by: Dave Thaler <dthaler1968@gmail.com> Link: https://lore.kernel.org/r/20240302012229.16452-1-dthaler1968@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2024-03-03bpf, docs: Use IETF format for field definitions in instruction-set.rstDave Thaler1-241/+290
In preparation for publication as an IETF RFC, the WG chairs asked me to convert the document to use IETF packet format for field layout, so this patch attempts to make it consistent with other IETF documents.

Some fields that are not byte aligned were previously inconsistent in how values were defined. Some were defined as the value of the byte containing the field (like 0x20 for a field holding the high four bits of the byte), and others were defined as the value of the field itself (like 0x2). This patch makes them consistent in using just the values of the field itself, which is IETF convention.

As a result, some of the defines that used BPF_* would no longer match the value in the spec, and so this patch also drops the BPF_* prefix to avoid confusion with the defines that are the full-byte equivalent values. For consistency, BPF_* is then dropped from other fields too. BPF_<foo> is thus the Linux implementation-specific define for <foo> as it appears in the BPF ISA specification.

The syntax BPF_ADD | BPF_X | BPF_ALU only worked for full-byte values, so the convention {ADD, X, ALU} is proposed for referring to field values instead. Also replace the redundant "LSB bits" with "least significant bits".

A preview of what the resulting Internet Draft would look like can be seen at:
https://htmlpreview.github.io/?https://raw.githubusercontent.com/dthaler/ebpf-docs-1/format/draft-ietf-bpf-isa.html

v1->v2: Fix sphinx issue as recommended by David Vernet

Signed-off-by: Dave Thaler <dthaler1968@gmail.com>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20240301222337.15931-1-dthaler1968@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-03Merge tag 'for-netdev' of ↵Jakub Kicinski150-995/+3589
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2024-02-29

We've added 119 non-merge commits during the last 32 day(s) which contain a total of 150 files changed, 3589 insertions(+), 995 deletions(-).

The main changes are:

1) Extend the BPF verifier to enable static subprog calls in spin lock critical sections, from Kumar Kartikeya Dwivedi.
2) Fix confusing and incorrect inference of PTR_TO_CTX argument type in BPF global subprogs, from Andrii Nakryiko.
3) Larger batch of riscv BPF JIT improvements and enabling inlining of the bpf_kptr_xchg() for RV64, from Pu Lehui.
4) Allow skeleton users to change the values of the fields in struct_ops maps at runtime, from Kui-Feng Lee.
5) Extend the verifier's capabilities of tracking scalars when they are spilled to stack, especially when the spill or fill is narrowing, from Maxim Mikityanskiy & Eduard Zingerman.
6) Various BPF selftest improvements to fix errors under gcc BPF backend, from Jose E. Marchesi.
7) Avoid module loading failure when the module trying to register a struct_ops has its BTF section stripped, from Geliang Tang.
8) Annotate all kfuncs in .BTF_ids section which eventually allows for automatic kfunc prototype generation from bpftool, from Daniel Xu.
9) Several updates to the instruction-set.rst IETF standardization document, from Dave Thaler.
10) Shrink the size of struct bpf_map resp. bpf_array, from Alexei Starovoitov.
11) Initial small subset of BPF verifier prepwork for sleepable bpf_timer, from Benjamin Tissoires.
12) Fix bpftool to be more portable to musl libc by using POSIX's basename(), from Arnaldo Carvalho de Melo.
13) Add libbpf support to gcc in CORE macro definitions, from Cupertino Miranda.
14) Remove a duplicate type check in perf_event_bpf_event, from Florian Lehner.
15) Fix bpf_spin_{un,}lock BPF helpers to actually annotate them with notrace correctly, from Yonghong Song.
16) Replace the deprecated bpf_lpm_trie_key 0-length array with flexible array to fix build warnings, from Kees Cook.
17) Fix resolve_btfids cross-compilation to non host-native endianness, from Viktor Malik.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (119 commits)
  selftests/bpf: Test if shadow types work correctly.
  bpftool: Add an example for struct_ops map and shadow type.
  bpftool: Generated shadow variables for struct_ops maps.
  libbpf: Convert st_ops->data to shadow type.
  libbpf: Set btf_value_type_id of struct bpf_map for struct_ops.
  bpf: Replace bpf_lpm_trie_key 0-length array with flexible array
  bpf, arm64: use bpf_prog_pack for memory management
  arm64: patching: implement text_poke API
  bpf, arm64: support exceptions
  arm64: stacktrace: Implement arch_bpf_stack_walk() for the BPF JIT
  bpf: add is_async_callback_calling_insn() helper
  bpf: introduce in_sleepable() helper
  bpf: allow more maps in sleepable bpf programs
  selftests/bpf: Test case for lacking CFI stub functions.
  bpf: Check cfi_stubs before registering a struct_ops type.
  bpf: Clarify batch lookup/lookup_and_delete semantics
  bpf, docs: specify which BPF_ABS and BPF_IND fields were zero
  bpf, docs: Fix typos in instruction-set.rst
  selftests/bpf: update tcp_custom_syncookie to use scalar packet offset
  bpf: Shrink size of struct bpf_map/bpf_array.
  ...
====================

Link: https://lore.kernel.org/r/20240301001625.8800-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-01Merge branch 'inet_dump_ifaddr-no-rtnl'David S. Miller2-92/+79
Eric Dumazet says: ==================== inet: no longer use RTNL to protect inet_dump_ifaddr() This series converts inet so that a dump of addresses (ip -4 addr) no longer requires RTNL. ==================== Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-03-01inet: use xa_array iterator to implement inet_dump_ifaddr()Eric Dumazet1-58/+37
1) inet_dump_ifaddr() can run under RCU protection instead of RTNL.
2) Properly return 0 at the end of a dump, avoiding an extra recvmsg() system call.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
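A loose sketch of the resulting dump-loop shape, assuming the series walks devices via the xarray-backed netdev index under RCU; the helper names and arguments here are assumptions, not quotes from the patch.

  /* sketch: iterate devices by ifindex under RCU instead of holding RTNL */
  rcu_read_lock();
  for_each_netdev_dump(net, dev, ctx->ifindex) {
          struct in_device *in_dev = __in_dev_get_rcu(dev);

          if (!in_dev)
                  continue;
          err = in_dev_dump_addr(in_dev, skb, cb, s_ip_idx, &fillargs);
          if (err < 0)
                  break;
  }
  rcu_read_unlock();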