author		Alexei Starovoitov <ast@kernel.org>	2024-04-04 23:08:01 +0300
committer	Alexei Starovoitov <ast@kernel.org>	2024-04-04 23:08:01 +0300
commit		d82c045f9dfde6b9ea220d7f8310c98210dfc8cb (patch)
tree		05376a14c790f914df2197a0431c3608ac025e66 /tools/bpf
parent		21ab0b6d0cfcb8aa98e33baa83f933f963514027 (diff)
parent		314a53623cd4e62e1b88126e5ed2bc87073d90ee (diff)
download	linux-d82c045f9dfde6b9ea220d7f8310c98210dfc8cb.tar.xz
Merge branch 'inline-bpf_get_branch_snapshot-bpf-helper'
Andrii Nakryiko says:

====================
Inline bpf_get_branch_snapshot() BPF helper

Implement inlining of the bpf_get_branch_snapshot() BPF helper using the
generic BPF assembly approach. This allows reducing LBR record usage
right before LBR records are captured from inside a BPF program.

See the v1 cover letter ([0]) for some visual examples. I dropped them
from v2 because there are multiple independent changes landing and being
reviewed, all of which remove different parts of LBR record waste, so
presenting the final state of LBR "waste" gets more complicated until
all of the pieces land.

  [0] https://lore.kernel.org/bpf/20240321180501.734779-1-andrii@kernel.org/

v2->v3:
  - fix BPF_MUL instruction definition;

v1->v2:
  - inlining of bpf_get_smp_processor_id() split out into a separate
    patch set implementing an internal per-CPU BPF instruction;
  - add efficient divide-by-24-through-multiplication logic, and leave
    comments to explain the idea behind it; this way the inlined version
    of bpf_get_branch_snapshot() has no compromises compared to the
    non-inlined version of the helper (Alexei).
====================

Link: https://lore.kernel.org/r/20240404002640.1774210-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
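
As an illustration of the divide-by-24 logic the changelog mentions, here
is a minimal userspace sketch (not taken from the series itself; the
constant and shift follow the standard reciprocal-multiplication technique
from Hacker's Delight): since 24 = 3 * 8, N / 24 for a 32-bit N can be
computed as a 64-bit multiply by 0xaaaaaaab, which equals (2^33 + 1) / 3,
followed by a 36-bit right shift (33 bits for the divide-by-3, plus 3 more
bits for the divide-by-8), with no division instruction involved:

    #include <assert.h>
    #include <stdint.h>

    /* Unsigned divide-by-24 without a division instruction:
     * for 32-bit N, N / 3 == ((uint64_t)N * 0xaaaaaaab) >> 33,
     * and a further divide-by-8 is a 3-bit shift, hence
     * N / 24 == ((uint64_t)N * 0xaaaaaaab) >> 36.
     */
    static uint32_t div24(uint32_t n)
    {
        return (uint32_t)(((uint64_t)n * 0xaaaaaaabULL) >> 36);
    }

    int main(void)
    {
        /* spot-check the identity against real division */
        for (uint32_t n = 0; n < (1u << 20); n++)
            assert(div24(n) == n / 24);
        return 0;
    }

The divisor 24 is sizeof(struct perf_branch_entry), so this turns the
byte size passed to the helper into a count of LBR entries.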
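
For context on the call being inlined, a hypothetical BPF-side caller is
sketched below (the attach point, buffer size, and program name are
illustrative assumptions, not part of this series). The point of inlining
is that fewer instructions execute between program entry and the snapshot
call, so fewer LBR records are overwritten before capture:

    /* SPDX-License-Identifier: GPL-2.0 */
    /* Hypothetical example; attach point and buffer size are arbitrary. */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    #define MAX_ENTRIES 32

    /* global buffer for LBR records; each entry is 24 bytes */
    struct perf_branch_entry entries[MAX_ENTRIES];

    SEC("fentry/do_nanosleep")
    int BPF_PROG(lbr_snap)
    {
        long bytes;

        /* capture LBRs as early as possible: every instruction executed
         * before this call consumes LBR records, which is the waste this
         * series works to reduce
         */
        bytes = bpf_get_branch_snapshot(entries, sizeof(entries), 0);
        if (bytes > 0)
            bpf_printk("captured %ld LBR entries",
                       bytes / (long)sizeof(struct perf_branch_entry));
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";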
Diffstat (limited to 'tools/bpf')
0 files changed, 0 insertions, 0 deletions