path: root/tools/perf/util
Age | Commit message | Author | Files | Lines
2018-11-04 | Merge branch 'perf-urgent-for-linus' of … | Linus Torvalds | 25 files | -66/+269
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull perf updates and fixes from Ingo Molnar: "These are almost all tooling updates: 'perf top', 'perf trace' and 'perf script' fixes and updates, an UAPI header sync with the merge window versions, license marker updates, much improved Sparc support from David Miller, and a number of fixes" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (66 commits) perf intel-pt/bts: Calculate cpumode for synthesized samples perf intel-pt: Insert callchain context into synthesized callchains perf tools: Don't clone maps from parent when synthesizing forks perf top: Start display thread earlier tools headers uapi: Update linux/if_link.h header copy tools headers uapi: Update linux/netlink.h header copy tools headers: Sync the various kvm.h header copies tools include uapi: Update linux/mmap.h copy perf trace beauty: Use the mmap flags table generated from headers perf beauty: Wire up the mmap flags table generator to the Makefile perf beauty: Add a generator for MAP_ mmap's flag constants tools include uapi: Update asound.h copy tools arch uapi: Update asm-generic/unistd.h and arm64 unistd.h copies tools include uapi: Update linux/fs.h copy perf callchain: Honour the ordering of PERF_CONTEXT_{USER,KERNEL,etc} perf cs-etm: Correct CPU mode for samples perf unwind: Take pgoff into account when reporting elf to libdwfl perf top: Do not use overwrite mode by default perf top: Allow disabling the overwrite mode perf trace: Beautify mount's first pathname arg ...
2018-10-31 | perf intel-pt/bts: Calculate cpumode for synthesized samples | Adrian Hunter | 2 files | -14/+25
In the absence of a fallback, samples must provide a correct cpumode for the 'ip'. Do that, now that there is no fallback.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: stable@vger.kernel.org # 4.19
Link: http://lkml.kernel.org/r/20181031091043.23465-6-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-31 | perf intel-pt: Insert callchain context into synthesized callchains | Adrian Hunter | 3 files | -12/+40
In the absence of a fallback, callchains must also encode the callchain context. Do that, now that there is no fallback.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: stable@vger.kernel.org # 4.19
Link: http://lkml.kernel.org/r/100ea2ec-ed14-b56d-d810-e0a6d2f4b069@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
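[Editor's note] The "callchain context" here refers to the PERF_CONTEXT_* sentinel values the kernel interleaves with real addresses. A minimal sketch of what encoding that context means when building a synthesized callchain; the is_kernel_address() predicate and address threshold are assumptions for illustration, not the actual intel-pt code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sentinel values from include/uapi/linux/perf_event.h */
#define PERF_CONTEXT_KERNEL ((uint64_t)-128)
#define PERF_CONTEXT_USER   ((uint64_t)-512)

/* Hypothetical predicate: the real code asks the machine for the cpumode. */
static bool is_kernel_address(uint64_t ip)
{
	return ip >= 0xffff800000000000ULL;
}

/*
 * Copy 'nr' return addresses into 'out', inserting a context marker
 * whenever the kernel/user domain changes, so consumers that no longer
 * fall back to kallsyms can still resolve each address correctly.
 */
static size_t synth_callchain(const uint64_t *ips, size_t nr,
			      uint64_t *out, size_t out_len)
{
	uint64_t last_ctx = 0;
	size_t n = 0;

	for (size_t i = 0; i < nr && n < out_len; i++) {
		uint64_t ctx = is_kernel_address(ips[i]) ?
			       PERF_CONTEXT_KERNEL : PERF_CONTEXT_USER;

		if (ctx != last_ctx && n + 1 < out_len) {
			out[n++] = ctx;	/* marker precedes the cluster it governs */
			last_ctx = ctx;
		}
		out[n++] = ips[i];
	}
	return n;
}
```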
2018-10-31 | perf tools: Don't clone maps from parent when synthesizing forks | David Miller | 4 files | -10/+25
When synthesizing FORK events, we are trying to create thread objects for the already running tasks on the machine.

Normally, for a kernel FORK event, we want to clone the parent's maps because that is what the kernel just did. But when synthesizing, this should not be done. If we do, we end up with overlapping maps as we process the synthesized MMAP2 events that get delivered shortly thereafter.

Use the FORK event misc flags in an internal way to signal this situation, so we can elide the map clone when appropriate.

Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joe Mario <jmario@redhat.com>
Link: http://lkml.kernel.org/r/20181030.222404.2085088822877051075.davem@davemloft.net
[ Added comment about flag use in machine__process_fork_event(), use ternary op in thread__clone_map_groups() as suggested by Jiri ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-31 | perf callchain: Honour the ordering of PERF_CONTEXT_{USER,KERNEL,etc} | David S. Miller | 1 file | -1/+34
When processing using 'perf report -g caller', which is the default, we ended up reverting the callchain entries received from the kernel, but simply reverting throws away the information that tells that from a point onwards the addresses are for userspace, kernel, guest kernel, guest user, hypervisor.

The idea is that if we are walking backwards, for each cluster of non-cpumode entries we have to first scan backwards for the next one and use that for the cluster.

This seems silly and more expensive than it needs to be, but it is enough for an initial fix.

The code here is really complicated because it is intimately intertwined with the lbr and branch handling, as well as this callchain order; further fixes will be needed to properly take into account the cpumode in those cases.

Another problem with ORDER_CALLER is that the NULL "0" IP that is at the end of most callchains shows up at the top of the histogram because every callchain contains it and with ORDER_CALLER it is the first entry.

Signed-off-by: David S. Miller <davem@davemloft.net>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Souvik Banerjee <souvik1997@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: stable@vger.kernel.org # 4.19
Link: https://lkml.kernel.org/n/tip-2wt3ayp6j2y2f2xowixa8y6y@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
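[Editor's note] A sketch of the "scan backwards for the context entry" idea when the chain is consumed in caller order. The PERF_CONTEXT_* values come from the UAPI header; everything else (the default context, the printing) is illustrative only, not the actual util/machine.c code:

```c
#include <stdint.h>
#include <stdio.h>

#define PERF_CONTEXT_KERNEL ((uint64_t)-128)
#define PERF_CONTEXT_USER   ((uint64_t)-512)
#define PERF_CONTEXT_MAX    ((uint64_t)-4095)

static int is_context(uint64_t e)
{
	return e >= PERF_CONTEXT_MAX;	/* markers sit at the very top of the u64 range */
}

/*
 * Walk a callee-ordered chain (as delivered by the kernel) in ORDER_CALLER,
 * i.e. from the last entry towards the leaf.  A context marker governs the
 * entries that follow it, so when iterating backwards we must first scan
 * backwards from the current position to find the marker that applies.
 */
static void walk_caller_order(const uint64_t *chain, int nr)
{
	for (int i = nr - 1; i >= 0; i--) {
		uint64_t ctx = PERF_CONTEXT_USER;	/* assumed default for this sketch */

		if (is_context(chain[i]))
			continue;			/* markers themselves produce no entry */

		for (int j = i; j >= 0; j--) {		/* find the governing marker */
			if (is_context(chain[j])) {
				ctx = chain[j];
				break;
			}
		}
		printf("ip %#llx context-marker %#llx\n",
		       (unsigned long long)chain[i], (unsigned long long)ctx);
	}
}
```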
2018-10-31 | perf cs-etm: Correct CPU mode for samples | Leo Yan | 1 file | -9/+30
Since commit edeb0c90df35 ("perf tools: Stop fallbacking to kallsyms for vdso symbols lookup"), kernel addresses can no longer be properly resolved to kernel symbols with the command 'perf script -k vmlinux'. The reason is that CoreSight samples always set the CPU mode to PERF_RECORD_MISC_USER, thus failing to find the corresponding map/dso in the flow below:

  process_sample_event()
    `-> machine__resolve()
          `-> thread__find_map(thread, sample->cpumode, sample->ip, al);

In this flow the argument 'sample->cpumode' tells what the CPU mode is. Before, it was always passed as PERF_RECORD_MISC_USER yet without any failure, until the commit edeb0c90df35 ("perf tools: Stop fallbacking to kallsyms for vdso symbols lookup") was merged. The reason is that even with the wrong CPU mode, thread__find_map() first fails to find a map but then falls back to the kernel map for vdso symbols lookup. The latest code has removed that fallback, so if the CPU mode is PERF_RECORD_MISC_USER a map can no longer be found for a kernel address.

This patch corrects the samples' CPU mode setting: it creates a new helper function cs_etm__cpu_mode() to tell what the CPU mode is based on the address and the info from the machine structure; it also extends the check beyond kernel and user mode to cover host/guest and hypervisor mode. Finally it uses the function for instruction and branch samples and also applies it in cs_etm__mem_access() as a minor polishing.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Miller <davem@davemloft.net>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: stable@kernel.org # v4.19
Link: http://lkml.kernel.org/r/1540883908-17018-1-git-send-email-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
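[Editor's note] A minimal sketch of the kind of address-based cpumode classification described above. The 'kernel_start' and 'is_host' parameters stand in for what the real helper reads from struct machine; this is not the actual cs-etm code:

```c
#include <stdint.h>

/* cpumode values from include/uapi/linux/perf_event.h */
#define PERF_RECORD_MISC_KERNEL       (1 << 0)
#define PERF_RECORD_MISC_USER         (2 << 0)
#define PERF_RECORD_MISC_GUEST_KERNEL (4 << 0)
#define PERF_RECORD_MISC_GUEST_USER   (5 << 0)

/*
 * Classify an address into a cpumode, so thread__find_map() is handed
 * something better than an unconditional PERF_RECORD_MISC_USER.
 */
static uint8_t addr_cpu_mode(uint64_t kernel_start, int is_host, uint64_t addr)
{
	if (is_host)
		return addr >= kernel_start ? PERF_RECORD_MISC_KERNEL
					    : PERF_RECORD_MISC_USER;

	return addr >= kernel_start ? PERF_RECORD_MISC_GUEST_KERNEL
				    : PERF_RECORD_MISC_GUEST_USER;
}
```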
2018-10-31 | perf unwind: Take pgoff into account when reporting elf to libdwfl | Milian Wolff | 1 file | -2/+2
libdwfl parses an ELF file itself and creates mappings for the individual sections. perf on the other hand sees raw mmap events which represent individual sections. When we encounter an address pointing into a mapping with pgoff != 0, we must take that into account and report the file at the non-offset base address. This fixes unwinding with libdwfl in some cases. E.g. for a file like: ``` using namespace std; mutex g_mutex; double worker() { lock_guard<mutex> guard(g_mutex); uniform_real_distribution<double> uniform(-1E5, 1E5); default_random_engine engine; double s = 0; for (int i = 0; i < 1000; ++i) { s += norm(complex<double>(uniform(engine), uniform(engine))); } cout << s << endl; return s; } int main() { vector<std::future<double>> results; for (int i = 0; i < 10000; ++i) { results.push_back(async(launch::async, worker)); } return 0; } ``` Compile it with `g++ -g -O2 -lpthread cpp-locking.cpp -o cpp-locking`, then record it with `perf record --call-graph dwarf -e sched:sched_switch`. When you analyze it with `perf script` and libunwind, you should see: ``` cpp-locking 20038 [005] 54830.236589: sched:sched_switch: prev_comm=cpp-locking prev_pid=20038 prev_prio=120 prev_state=T ==> next_comm=swapper/5 next_pid=0 next_prio=120 ffffffffb166fec5 __sched_text_start+0x545 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb166fec5 __sched_text_start+0x545 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb1670208 schedule+0x28 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb16737cc rwsem_down_read_failed+0xec (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb1665e04 call_rwsem_down_read_failed+0x14 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb1672a03 down_read+0x13 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb106bd85 __do_page_fault+0x445 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb18015f5 page_fault+0x45 (/lib/modules/4.14.78-1-lts/build/vmlinux) 7f38e4252591 new_heap+0x101 (/usr/lib/libc-2.28.so) 7f38e4252d0b arena_get2.part.4+0x2fb (/usr/lib/libc-2.28.so) 7f38e4255b1c tcache_init.part.6+0xec (/usr/lib/libc-2.28.so) 7f38e42569e5 __GI___libc_malloc+0x115 (inlined) 7f38e4241790 __GI__IO_file_doallocate+0x90 (inlined) 7f38e424fbbf __GI__IO_doallocbuf+0x4f (inlined) 7f38e424ee47 __GI__IO_file_overflow+0x197 (inlined) 7f38e424df36 _IO_new_file_xsputn+0x116 (inlined) 7f38e4242bfb __GI__IO_fwrite+0xdb (inlined) 7f38e463fa6d std::basic_streambuf<char, std::char_traits<char> >::sputn(char const*, long)+0x1cd (inlined) 7f38e463fa6d std::ostreambuf_iterator<char, std::char_traits<char> >::_M_put(char const*, long)+0x1cd (inlined) 7f38e463fa6d std::ostreambuf_iterator<char, std::char_traits<char> > std::__write<char>(std::ostreambuf_iterator<char, std::char_traits<char> >, char const*, int)+0x1cd (inlined) 7f38e463fa6d std::ostreambuf_iterator<char, std::char_traits<char> > std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::_M_insert_float<double>(std::ostreambuf_iterator<c> 7f38e464bd70 std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::put(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, double) const+0x90 (inl> 7f38e464bd70 std::ostream& std::ostream::_M_insert<double>(double)+0x90 (/usr/lib/libstdc++.so.6.0.25) 563b9cb502f7 std::ostream::operator<<(double)+0xb7 (inlined) 563b9cb502f7 worker()+0xb7 (/ssd/milian/projects/kdab/rnd/hotspot/build/tests/test-clients/cpp-locking/cpp-locking) 563b9cb506fb double std::__invoke_impl<double, double (*)()>(std::__invoke_other, double (*&&)())+0x2b 
(inlined) 563b9cb506fb std::__invoke_result<double (*)()>::type std::__invoke<double (*)()>(double (*&&)())+0x2b (inlined) 563b9cb506fb decltype (__invoke((_S_declval<0ul>)())) std::thread::_Invoker<std::tuple<double (*)()> >::_M_invoke<0ul>(std::_Index_tuple<0ul>)+0x2b (inlined) 563b9cb506fb std::thread::_Invoker<std::tuple<double (*)()> >::operator()()+0x2b (inlined) 563b9cb506fb std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<double>, std::__future_base::_Result_base::_Deleter>, std::thread::_Invoker<std::tuple<double (*)()> >, dou> 563b9cb506fb std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_> 563b9cb507e8 std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>::operator()() const+0x28 (inlined) 563b9cb507e8 std::__future_base::_State_baseV2::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*)+0x28 (/ssd/milian/> 7f38e46d24fe __pthread_once_slow+0xbe (/usr/lib/libpthread-2.28.so) 563b9cb51149 __gthread_once+0xe9 (inlined) 563b9cb51149 void std::call_once<void (std::__future_base::_State_baseV2::*)(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*)> 563b9cb51149 std::__future_base::_State_baseV2::_M_set_result(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>, bool)+0xe9 (inlined) 563b9cb51149 std::__future_base::_Async_state_impl<std::thread::_Invoker<std::tuple<double (*)()> >, double>::_Async_state_impl(std::thread::_Invoker<std::tuple<double (*)()> >&&)::{lambda()#1}::op> 563b9cb51149 void std::__invoke_impl<void, std::__future_base::_Async_state_impl<std::thread::_Invoker<std::tuple<double (*)()> >, double>::_Async_state_impl(std::thread::_Invoker<std::tuple<double> 563b9cb51149 std::__invoke_result<std::__future_base::_Async_state_impl<std::thread::_Invoker<std::tuple<double (*)()> >, double>::_Async_state_impl(std::thread::_Invoker<std::tuple<double (*)()> >> 563b9cb51149 decltype (__invoke((_S_declval<0ul>)())) std::thread::_Invoker<std::tuple<std::__future_base::_Async_state_impl<std::thread::_Invoker<std::tuple<double (*)()> >, double>::_Async_state_> 563b9cb51149 std::thread::_Invoker<std::tuple<std::__future_base::_Async_state_impl<std::thread::_Invoker<std::tuple<double (*)()> >, double>::_Async_state_impl(std::thread::_Invoker<std::tuple<dou> 563b9cb51149 std::thread::_State_impl<std::thread::_Invoker<std::tuple<std::__future_base::_Async_state_impl<std::thread::_Invoker<std::tuple<double (*)()> >, double>::_Async_state_impl(std::thread> 7f38e45f0062 execute_native_thread_routine+0x12 (/usr/lib/libstdc++.so.6.0.25) 7f38e46caa9c start_thread+0xfc (/usr/lib/libpthread-2.28.so) 7f38e42ccb22 __GI___clone+0x42 (inlined) ``` Before this patch, using libdwfl, you would see: ``` cpp-locking 20038 [005] 54830.236589: sched:sched_switch: prev_comm=cpp-locking prev_pid=20038 prev_prio=120 prev_state=T ==> next_comm=swapper/5 next_pid=0 next_prio=120 ffffffffb166fec5 __sched_text_start+0x545 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb166fec5 __sched_text_start+0x545 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb1670208 schedule+0x28 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb16737cc rwsem_down_read_failed+0xec 
(/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb1665e04 call_rwsem_down_read_failed+0x14 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb1672a03 down_read+0x13 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb106bd85 __do_page_fault+0x445 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb18015f5 page_fault+0x45 (/lib/modules/4.14.78-1-lts/build/vmlinux) 7f38e4252591 new_heap+0x101 (/usr/lib/libc-2.28.so) a041161e77950c5c [unknown] ([unknown]) ``` With this patch applied, we get a bit further in unwinding: ``` cpp-locking 20038 [005] 54830.236589: sched:sched_switch: prev_comm=cpp-locking prev_pid=20038 prev_prio=120 prev_state=T ==> next_comm=swapper/5 next_pid=0 next_prio=120 ffffffffb166fec5 __sched_text_start+0x545 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb166fec5 __sched_text_start+0x545 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb1670208 schedule+0x28 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb16737cc rwsem_down_read_failed+0xec (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb1665e04 call_rwsem_down_read_failed+0x14 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb1672a03 down_read+0x13 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb106bd85 __do_page_fault+0x445 (/lib/modules/4.14.78-1-lts/build/vmlinux) ffffffffb18015f5 page_fault+0x45 (/lib/modules/4.14.78-1-lts/build/vmlinux) 7f38e4252591 new_heap+0x101 (/usr/lib/libc-2.28.so) 7f38e4252d0b arena_get2.part.4+0x2fb (/usr/lib/libc-2.28.so) 7f38e4255b1c tcache_init.part.6+0xec (/usr/lib/libc-2.28.so) 7f38e42569e5 __GI___libc_malloc+0x115 (inlined) 7f38e4241790 __GI__IO_file_doallocate+0x90 (inlined) 7f38e424fbbf __GI__IO_doallocbuf+0x4f (inlined) 7f38e424ee47 __GI__IO_file_overflow+0x197 (inlined) 7f38e424df36 _IO_new_file_xsputn+0x116 (inlined) 7f38e4242bfb __GI__IO_fwrite+0xdb (inlined) 7f38e463fa6d std::basic_streambuf<char, std::char_traits<char> >::sputn(char const*, long)+0x1cd (inlined) 7f38e463fa6d std::ostreambuf_iterator<char, std::char_traits<char> >::_M_put(char const*, long)+0x1cd (inlined) 7f38e463fa6d std::ostreambuf_iterator<char, std::char_traits<char> > std::__write<char>(std::ostreambuf_iterator<char, std::char_traits<char> >, char const*, int)+0x1cd (inlined) 7f38e463fa6d std::ostreambuf_iterator<char, std::char_traits<char> > std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::_M_insert_float<double>(std::ostreambuf_iterator<c> 7f38e464bd70 std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::put(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, double) const+0x90 (inl> 7f38e464bd70 std::ostream& std::ostream::_M_insert<double>(double)+0x90 (/usr/lib/libstdc++.so.6.0.25) 563b9cb502f7 std::ostream::operator<<(double)+0xb7 (inlined) 563b9cb502f7 worker()+0xb7 (/ssd/milian/projects/kdab/rnd/hotspot/build/tests/test-clients/cpp-locking/cpp-locking) 6eab825c1ee3e4ff [unknown] ([unknown]) ``` Note that the backtrace is still stopping too early, when compared to the nice results obtained via libunwind. It's unclear so far what the reason for that is. Committer note: Further comment by Milian on the thread started on the Link: tag below: --- The remaining issue is due to a bug in elfutils: https://sourceware.org/ml/elfutils-devel/2018-q4/msg00089.html With both patches applied, libunwind and elfutils produce the same output for the above scenario. 
--- Signed-off-by: Milian Wolff <milian.wolff@kdab.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: http://lkml.kernel.org/r/20181029141644.3907-1-milian.wolff@kdab.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
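[Editor's note] The essence of the fix is the base address handed to libdwfl: when the sampled address falls inside a mapping with a non-zero pgoff, the ELF has to be reported at the start of the file mapping, not at the start of that section's mapping. A hedged sketch under that assumption; the helper name and parameters are illustrative, only dwfl_report_elf() is the real libdwfl call:

```c
#include <stdbool.h>
#include <elfutils/libdwfl.h>

/*
 * Report an ELF to libdwfl for a mapping that may describe only one
 * section of the file (pgoff != 0).  libdwfl wants the address where
 * the whole file starts, so subtract the file offset of the mapping.
 */
static Dwfl_Module *report_elf_for_map(Dwfl *dwfl, const char *name,
				       int fd, Dwfl_Addr map_start,
				       Dwfl_Addr pgoff)
{
	Dwfl_Addr base = map_start - pgoff;

	return dwfl_report_elf(dwfl, name, name, fd, base, false);
}
```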
2018-10-30 | Merge tag 'trace-v4.20' of … | Linus Torvalds | 6 files | -22/+106
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace Pull tracing updates from Steven Rostedt: "The biggest change here is the updates to kprobes Back in January I posted patches to create function based events. These were the events that you suggested I make to allow developers to easily create events in code where no trace event exists. After posting those changes for review, it was suggested that we implement this instead with kprobes. The problem with kprobes is that the interface is too complex and needs to be simplified. Masami Hiramatsu posted patches in March and I've been playing with them a bit. There's been a bit of clean up in the kprobe code that was inspired by the function based event patches, and a couple of enhancements to the kprobe event interface. - If the arch supports it (we added support for x86), you can place a kprobe event at the start of a function and use $arg1, $arg2, etc to reference the arguments of a function. (Before you needed to know what register or where on the stack the argument was). - The second is a way to see array of events. For example, if you reference a mac address, you can add: echo 'p:mac ip_rcv perm_addr=+574($arg2):x8[6]' > kprobe_events And this will produce: mac: (ip_rcv+0x0/0x140) perm_addr={0x52,0x54,0x0,0xc0,0x76,0xec} Other changes include - Exporting trace_dump_stack to modules - Have the stack tracer trace the entire stack (stop trying to remove tracing itself, as we keep removing too much). - Added support for SDT in uprobes" [ SDT - "Statically Defined Tracing" are userspace markers for tracing. Let's not use random TLA's in explanations unless they are fairly well-established as generic (at least for kernel people) - Linus ] * tag 'trace-v4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (24 commits) tracing: Have stack tracer trace full stack tracing: Export trace_dump_stack to modules tracing: probeevent: Fix uninitialized used of offset in parse args tracing/kprobes: Allow kprobe-events to record module symbol tracing/kprobes: Check the probe on unloaded module correctly tracing/uprobes: Fix to return -EFAULT if copy_from_user failed tracing: probeevent: Add $argN for accessing function args x86: ptrace: Add function argument access API tracing: probeevent: Add array type support tracing: probeevent: Add symbol type tracing: probeevent: Unify fetch_insn processing common part tracing: probeevent: Append traceprobe_ for exported function tracing: probeevent: Return consumed bytes of dynamic area tracing: probeevent: Unify fetch type tables tracing: probeevent: Introduce new argument fetching code tracing: probeevent: Remove NOKPROBE_SYMBOL from print functions tracing: probeevent: Cleanup argument field definition tracing: probeevent: Cleanup print argument functions trace_uprobe: support reference counter in fd-based uprobe perf probe: Support SDT markers having reference counter (semaphore) ...
2018-10-29 | Merge branch 'linus' into perf/urgent, to pick up fixes | Ingo Molnar | 1 file | -12/+3
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-24 | perf script: Implement --graph-function | Andi Kleen | 2 files | -1/+4
Add a ftrace style --graph-function argument to 'perf script' that allows to print itrace function calls only below a given function. This makes it easier to find the code of interest in a large trace. % perf record -e intel_pt//k -a sleep 1 % perf script --graph-function group_sched_in --call-trace perf 900 [000] 194167.205652203: ([kernel.kallsyms]) group_sched_in perf 900 [000] 194167.205652203: ([kernel.kallsyms]) __x86_indirect_thunk_rax perf 900 [000] 194167.205652203: ([kernel.kallsyms]) event_sched_in.isra.107 perf 900 [000] 194167.205652203: ([kernel.kallsyms]) perf_event_set_state.part.71 perf 900 [000] 194167.205652203: ([kernel.kallsyms]) perf_event_update_time perf 900 [000] 194167.205652203: ([kernel.kallsyms]) perf_pmu_disable perf 900 [000] 194167.205652203: ([kernel.kallsyms]) perf_log_itrace_start perf 900 [000] 194167.205652203: ([kernel.kallsyms]) __x86_indirect_thunk_rax perf 900 [000] 194167.205652203: ([kernel.kallsyms]) perf_event_update_userpage perf 900 [000] 194167.205652203: ([kernel.kallsyms]) calc_timer_values perf 900 [000] 194167.205652203: ([kernel.kallsyms]) sched_clock_cpu perf 900 [000] 194167.205652203: ([kernel.kallsyms]) __x86_indirect_thunk_rax perf 900 [000] 194167.205652203: ([kernel.kallsyms]) arch_perf_update_userpage perf 900 [000] 194167.205652203: ([kernel.kallsyms]) __fentry__ perf 900 [000] 194167.205652203: ([kernel.kallsyms]) using_native_sched_clock perf 900 [000] 194167.205652203: ([kernel.kallsyms]) sched_clock_stable perf 900 [000] 194167.205652203: ([kernel.kallsyms]) perf_pmu_enable perf 900 [000] 194167.205652203: ([kernel.kallsyms]) __x86_indirect_thunk_rax swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) group_sched_in swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) __x86_indirect_thunk_rax swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) event_sched_in.isra.107 swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) perf_event_set_state.part.71 swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) perf_event_update_time swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) perf_pmu_disable swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) perf_log_itrace_start swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) __x86_indirect_thunk_rax swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) perf_event_update_userpage swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) calc_timer_values swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) sched_clock_cpu swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) __x86_indirect_thunk_rax swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) arch_perf_update_userpage swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) __fentry__ swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) using_native_sched_clock swapper 0 [001] 194167.205660693: ([kernel.kallsyms]) sched_clock_stable Signed-off-by: Andi Kleen <ak@linux.intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Leo Yan <leo.yan@linaro.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Kim Phillips <kim.phillips@arm.com> Link: http://lkml.kernel.org/r/20180920180540.14039-5-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-24 | perf script: Make itrace script default to all calls | Andi Kleen | 5 files | -9/+22
By default 'perf script' for itrace outputs sampled instructions or branches. In my experience this is confusing to users because it's hard to correlate with real program behavior. The sampling makes sense for tools like 'perf report' that actually sample to reduce the run time, but run time is normally not a problem for 'perf script'. It's better to give an accurate representation of the program flow.

Default 'perf script' to output all calls for itrace. That's a much saner default. The old behavior can still be requested with 'perf script --itrace=ibxwpe100000'.

v2: Fix ETM build failure
v3: Really fix ETM build failure (Kim Phillips)

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Leo Yan <leo.yan@linaro.org>
Link: http://lkml.kernel.org/r/20180920180540.14039-3-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-24 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next | Linus Torvalds | 1 file | -12/+3
Pull networking updates from David Miller: 1) Add VF IPSEC offload support in ixgbe, from Shannon Nelson. 2) Add zero-copy AF_XDP support to i40e, from Björn Töpel. 3) All in-tree drivers are converted to {g,s}et_link_ksettings() so we can get rid of the {g,s}et_settings ethtool callbacks, from Michal Kubecek. 4) Add software timestamping to veth driver, from Michael Walle. 5) More work to make packet classifiers and actions lockless, from Vlad Buslov. 6) Support sticky FDB entries in bridge, from Nikolay Aleksandrov. 7) Add ipv6 version of IP_MULTICAST_ALL sockopt, from Andre Naujoks. 8) Support batching of XDP buffers in vhost_net, from Jason Wang. 9) Add flow dissector BPF hook, from Petar Penkov. 10) i40e vf --> generic iavf conversion, from Jesse Brandeburg. 11) Add NLA_REJECT netlink attribute policy type, to signal when users provide attributes in situations which don't make sense. From Johannes Berg. 12) Switch TCP and fair-queue scheduler over to earliest departure time model. From Eric Dumazet. 13) Improve guest receive performance by doing rx busy polling in tx path of vhost networking driver, from Tonghao Zhang. 14) Add per-cgroup local storage to bpf 15) Add reference tracking to BPF, from Joe Stringer. The verifier can now make sure that references taken to objects are properly released by the program. 16) Support in-place encryption in TLS, from Vakul Garg. 17) Add new taprio packet scheduler, from Vinicius Costa Gomes. 18) Lots of selftests additions, too numerous to mention one by one here but all of which are very much appreciated. 19) Support offloading of eBPF programs containing BPF to BPF calls in nfp driver, frm Quentin Monnet. 20) Move dpaa2_ptp driver out of staging, from Yangbo Lu. 21) Lots of u32 classifier cleanups and simplifications, from Al Viro. 22) Add new strict versions of netlink message parsers, and enable them for some situations. From David Ahern. 23) Evict neighbour entries on carrier down, also from David Ahern. 24) Support BPF sk_msg verdict programs with kTLS, from Daniel Borkmann and John Fastabend. 25) Add support for filtering route dumps, from David Ahern. 26) New igc Intel driver for 2.5G parts, from Sasha Neftin et al. 27) Allow vxlan enslavement to bridges in mlxsw driver, from Ido Schimmel. 28) Add queue and stack map types to eBPF, from Mauricio Vasquez B. 29) Add back byte-queue-limit support to r8169, with all the bug fixes in other areas of the driver it works now! From Florian Westphal and Heiner Kallweit. 
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (2147 commits) tcp: add tcp_reset_xmit_timer() helper qed: Fix static checker warning Revert "be2net: remove desc field from be_eq_obj" Revert "net: simplify sock_poll_wait" net: socionext: Reset tx queue in ndo_stop net: socionext: Add dummy PHY register read in phy_write() net: socionext: Stop PHY before resetting netsec net: stmmac: Set OWN bit for jumbo frames arm64: dts: stratix10: Support Ethernet Jumbo frame tls: Add maintainers net: ethernet: ti: cpsw: unsync mcast entries while switch promisc mode octeontx2-af: Support for NIXLF's UCAST/PROMISC/ALLMULTI modes octeontx2-af: Support for setting MAC address octeontx2-af: Support for changing RSS algorithm octeontx2-af: NIX Rx flowkey configuration for RSS octeontx2-af: Install ucast and bcast pkt forwarding rules octeontx2-af: Add LMAC channel info to NIXLF_ALLOC response octeontx2-af: NPC MCAM and LDATA extract minimal configuration octeontx2-af: Enable packet length and csum validation octeontx2-af: Support for VTAG strip and capture ...
2018-10-23 | Merge branch 'perf-core-for-linus' of … | Linus Torvalds | 42 files | -453/+2034
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull perf updates from Ingo Molnar: "The main updates in this cycle were: - Lots of perf tooling changes too voluminous to list (big perf trace and perf stat improvements, lots of libtraceevent reorganization, etc.), so I'll list the authors and refer to the changelog for details: Benjamin Peterson, Jérémie Galarneau, Kim Phillips, Peter Zijlstra, Ravi Bangoria, Sangwon Hong, Sean V Kelley, Steven Rostedt, Thomas Gleixner, Ding Xiang, Eduardo Habkost, Thomas Richter, Andi Kleen, Sanskriti Sharma, Adrian Hunter, Tzvetomir Stoyanov, Arnaldo Carvalho de Melo, Jiri Olsa. ... with the bulk of the changes written by Jiri Olsa, Tzvetomir Stoyanov and Arnaldo Carvalho de Melo. - Continued intel_rdt work with a focus on playing well with perf events. This also imported some non-perf RDT work due to dependencies. (Reinette Chatre) - Implement counter freezing for Arch Perfmon v4 (Skylake and newer). This allows to speed up the PMI handler by avoiding unnecessary MSR writes and make it more accurate. (Andi Kleen) - kprobes cleanups and simplification (Masami Hiramatsu) - Intel Goldmont PMU updates (Kan Liang) - ... plus misc other fixes and updates" * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (155 commits) kprobes/x86: Use preempt_enable() in optimized_callback() x86/intel_rdt: Prevent pseudo-locking from using stale pointers kprobes, x86/ptrace.h: Make regs_get_kernel_stack_nth() not fault on bad stack perf/x86/intel: Export mem events only if there's PEBS support x86/cpu: Drop pointless static qualifier in punit_dev_state_show() x86/intel_rdt: Fix initial allocation to consider CDP x86/intel_rdt: CBM overlap should also check for overlap with CDP peer x86/intel_rdt: Introduce utility to obtain CDP peer tools lib traceevent, perf tools: Move struct tep_handler definition in a local header file tools lib traceevent: Separate out tep_strerror() for strerror_r() issues perf python: More portable way to make CFLAGS work with clang perf python: Make clang_has_option() work on Python 3 perf tools: Free temporary 'sys' string in read_event_files() perf tools: Avoid double free in read_event_file() perf tools: Free 'printk' string in parse_ftrace_printk() perf tools: Cleanup trace-event-info 'tdata' leak perf strbuf: Match va_{add,copy} with va_end perf test: S390 does not support watchpoints in test 22 perf auxtrace: Include missing asm/bitsperlong.h to get BITS_PER_LONG tools include: Adopt linux/bits.h ...
2018-10-22 | perf trace: Introduce per-event maximum number of events property | Arnaldo Carvalho de Melo | 1 file | -0/+1
Call it 'nr', as in this context it should be expressive enough, i.e.: # perf trace -e sched:*waking/nr=8,call-graph=fp/ 0.000 :0/0 sched:sched_waking:comm=rcu_sched pid=10 prio=120 target_cpu=001 try_to_wake_up ([kernel.kallsyms]) sched_clock ([kernel.kallsyms]) 3.933 :0/0 sched:sched_waking:comm=rcu_sched pid=10 prio=120 target_cpu=001 try_to_wake_up ([kernel.kallsyms]) sched_clock ([kernel.kallsyms]) 3.970 IPDL Backgroun/3622 sched:sched_waking:comm=Gecko_IOThread pid=3569 prio=120 target_cpu=003 try_to_wake_up ([kernel.kallsyms]) __libc_write (/usr/lib64/libpthread-2.26.so) 20.069 IPDL Backgroun/3622 sched:sched_waking:comm=Gecko_IOThread pid=3569 prio=120 target_cpu=003 try_to_wake_up ([kernel.kallsyms]) __libc_write (/usr/lib64/libpthread-2.26.so) 37.170 IPDL Backgroun/3622 sched:sched_waking:comm=Gecko_IOThread pid=3569 prio=120 target_cpu=003 try_to_wake_up ([kernel.kallsyms]) __libc_write (/usr/lib64/libpthread-2.26.so) 53.267 IPDL Backgroun/3622 sched:sched_waking:comm=Gecko_IOThread pid=3569 prio=120 target_cpu=003 try_to_wake_up ([kernel.kallsyms]) __libc_write (/usr/lib64/libpthread-2.26.so) 70.365 IPDL Backgroun/3622 sched:sched_waking:comm=Gecko_IOThread pid=3569 prio=120 target_cpu=003 try_to_wake_up ([kernel.kallsyms]) __libc_write (/usr/lib64/libpthread-2.26.so) 75.781 Web Content/3649 sched:sched_waking:comm=JS Helper pid=3670 prio=120 target_cpu=000 try_to_wake_up ([kernel.kallsyms]) try_to_wake_up ([kernel.kallsyms]) wake_up_q ([kernel.kallsyms]) futex_wake ([kernel.kallsyms]) do_futex ([kernel.kallsyms]) __x64_sys_futex ([kernel.kallsyms]) do_syscall_64 ([kernel.kallsyms]) entry_SYSCALL_64_after_hwframe ([kernel.kallsyms]) pthread_cond_signal@@GLIBC_2.3.2 (/usr/lib64/libpthread-2.26.so) # # perf trace -e sched:*switch/nr=2/,block:*_plug/nr=4/,block:*_unplug/nr=1/,net:*dev_queue/nr=3,max-stack=16/ 0.000 :0/0 sched:sched_switch:swapper/0:0 [120] S ==> trace:3367 [120] 0.046 :0/0 sched:sched_switch:swapper/1:0 [120] S ==> kworker/u16:58:2722 [120] 570.670 irq/50-iwlwifi/680 net:net_dev_queue:dev=wlp3s0 skbaddr=0xffff93498051ef00 len=66 __dev_queue_xmit ([kernel.kallsyms]) 1106.141 jbd2/dm-0-8/476 block:block_plug:[jbd2/dm-0-8] 1106.175 jbd2/dm-0-8/476 block:block_unplug:[jbd2/dm-0-8] 1 1618.088 kworker/u16:30/2694 block:block_plug:[kworker/u16:30] 1810.000 :0/0 net:net_dev_queue:dev=vnet0 skbaddr=0xffff93498051ef00 len=52 __dev_queue_xmit ([kernel.kallsyms]) 3857.974 :0/0 net:net_dev_queue:dev=vnet0 skbaddr=0xffff93498051f900 len=52 __dev_queue_xmit ([kernel.kallsyms]) 4790.277 jbd2/dm-2-8/748 block:block_plug:[jbd2/dm-2-8] 4790.448 jbd2/dm-2-8/748 block:block_plug:[jbd2/dm-2-8] # The global --max-events has precendence: # trace --max-events 3 -e sched:*switch/nr=2/,block:*_plug/nr=4/,block:*_unplug/nr=1/,net:*dev_queue/nr=3,max-stack=16/ 0.000 :0/0 sched:sched_switch:swapper/0:0 [120] S ==> qemu-system-x86:2252 [120] 0.029 qemu-system-x8/2252 sched:sched_switch:qemu-system-x86:2252 [120] D ==> swapper/0:0 [120] 58.047 DNS Res~er #14/31661 net:net_dev_queue:dev=wlp3s0 skbaddr=0xffff9346966af100 len=84 __dev_queue_xmit ([kernel.kallsyms]) __libc_send (/usr/lib64/libpthread-2.26.so) # Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-s4jswltvh660ughvg9nwngah@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-22 | perf evsel: Mark a evsel as disabled when asking the kernel do disable it | Arnaldo Carvalho de Melo | 3 files | -7/+19
Because there may be more such events in the ring buffer that should be discarded when an app decides to stop considering them.

At some point we'll do this with eBPF; that way we stop them at origin, before they are placed in the ring buffer.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-uzufuxws4hufigx07ue1dpv6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
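[Editor's note] A sketch of the bookkeeping this describes: keep a software-side "disabled" flag in sync with the ioctl so later consumers can drop events that were already queued in the ring buffer. The struct and function names are invented for illustration, not the real struct perf_evsel layout:

```c
#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>

struct counter {
	int fd;		/* perf_event_open() file descriptor */
	bool disabled;	/* mirrors what we asked the kernel to do */
};

static int counter_disable(struct counter *c)
{
	int err = ioctl(c->fd, PERF_EVENT_IOC_DISABLE, 0);

	/* Remember it: events still sitting in the ring buffer can be skipped. */
	if (!err)
		c->disabled = true;
	return err;
}

static bool counter_should_drop(const struct counter *c)
{
	return c->disabled;
}
```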
2018-10-22 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | David S. Miller | 1 file | -12/+3
Daniel Borkmann says: ==================== pull-request: bpf-next 2018-10-21 The following pull-request contains BPF updates for your *net-next* tree. The main changes are: 1) Implement two new kind of BPF maps, that is, queue and stack map along with new peek, push and pop operations, from Mauricio. 2) Add support for MSG_PEEK flag when redirecting into an ingress psock sk_msg queue, and add a new helper bpf_msg_push_data() for insert data into the message, from John. 3) Allow for BPF programs of type BPF_PROG_TYPE_CGROUP_SKB to use direct packet access for __skb_buff, from Song. 4) Use more lightweight barriers for walking perf ring buffer for libbpf and perf tool as well. Also, various fixes and improvements from verifier side, from Daniel. 5) Add per-symbol visibility for DSO in libbpf and hide by default global symbols such as netlink related functions, from Andrey. 6) Two improvements to nfp's BPF offload to check vNIC capabilities in case prog is shared with multiple vNICs and to protect against mis-initializing atomic counters, from Jakub. 7) Fix for bpftool to use 4 context mode for the nfp disassembler, also from Jakub. 8) Fix a return value comparison in test_libbpf.sh and add several bpftool improvements in bash completion, documentation of bpf fs restrictions and batch mode summary print, from Quentin. 9) Fix a file resource leak in BPF selftest's load_kallsyms() helper, from Peng. 10) Fix an unused variable warning in map_lookup_and_delete_elem(), from Alexei. 11) Fix bpf_skb_adjust_room() signature in BPF UAPI helper doc, from Nicolas. 12) Add missing executables to .gitignore in BPF selftests, from Anders. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-19 | tools, perf: add and use optimized ring_buffer_{read_head, write_tail} helpers | Daniel Borkmann | 1 file | -12/+3
Currently, on x86-64, perf uses LFENCE and MFENCE (rmb() and mb(), respectively) when processing events from the perf ring buffer which is unnecessarily expensive as we can do more lightweight in particular given this is critical fast-path in perf.

According to Peter rmb()/mb() were added back then via a94d342b9cb0 ("tools/perf: Add required memory barriers") at a time where kernel still supported chips that needed it, but nowadays support for these has been ditched completely, therefore we can fix them up as well.

While for x86-64, replacing rmb() and mb() with smp_*() variants would result in just a compiler barrier for the former and LOCK + ADD for the latter (__sync_synchronize() uses slower MFENCE by the way), Peter suggested we can use smp_{load_acquire,store_release}() instead for architectures where its implementation doesn't resolve in slower smp_mb(). Thus, e.g. in x86-64 we would be able to avoid CPU barrier entirely due to TSO. For architectures where the latter needs to use smp_mb() e.g. on arm, we stick to cheaper smp_rmb() variant for fetching the head.

This work adds helpers ring_buffer_read_head() and ring_buffer_write_tail() for tools infrastructure that either switches to smp_load_acquire() for architectures where it is cheaper or uses READ_ONCE() + smp_rmb() barrier for those where it's not in order to fetch the data_head from the perf control page, and it uses smp_store_release() to write the data_tail. Latter is smp_mb() + WRITE_ONCE() combination or a cheaper variant if architecture allows for it. Those that rely on smp_rmb() and smp_mb() can further improve performance in a follow up step by implementing the two under tools/arch/*/include/asm/barrier.h such that they don't have to fallback to rmb() and mb() in tools/include/asm/barrier.h.

Switch perf to use ring_buffer_read_head() and ring_buffer_write_tail() so it can make use of the optimizations. Later, we convert libbpf as well to use the same helpers.

Side note [0]: the topic has been raised of whether one could simply use the C11 gcc builtins [1] for the smp_load_acquire() and smp_store_release() instead:

  __atomic_load_n(ptr, __ATOMIC_ACQUIRE);
  __atomic_store_n(ptr, val, __ATOMIC_RELEASE);

Kernel and (presumably) tooling shipped along with the kernel has a minimum requirement of being able to build with gcc-4.6 and the latter does not have C11 builtins. While generally the C11 memory models don't align with the kernel's, the C11 load-acquire and store-release alone /could/ suffice, however. Issue is that this is implementation dependent on how the load-acquire and store-release is done by the compiler and the mapping of supported compilers must align to be compatible with the kernel's implementation, and thus needs to be verified/tracked on a case by case basis whether they match (unless an architecture uses them also from kernel side).

The implementations for smp_load_acquire() and smp_store_release() in this patch have been adapted from the kernel side ones to have a concrete and compatible mapping in place.

[0] http://patchwork.ozlabs.org/patch/985422/
[1] https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
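[Editor's note] A compact illustration of the acquire/release pairing described above, written with the GCC __atomic builtins mentioned in the side note; it shows the ordering idea only and is not the tools/include implementation, whose barrier mapping is adapted from the kernel as explained:

```c
#include <stdint.h>

/* Layout mirrors the head/tail words of the perf mmap control page. */
struct ring {
	uint64_t data_head;	/* written by the kernel (producer) */
	uint64_t data_tail;	/* written by the consumer (us) */
};

/* Pairs with the producer's store-release of data_head. */
static uint64_t ring_read_head(struct ring *r)
{
	return __atomic_load_n(&r->data_head, __ATOMIC_ACQUIRE);
}

/*
 * Pairs with the producer's load-acquire of data_tail: all our reads of
 * the data area must complete before the tail store becomes visible,
 * otherwise the kernel could overwrite records we are still parsing.
 */
static void ring_write_tail(struct ring *r, uint64_t tail)
{
	__atomic_store_n(&r->data_tail, tail, __ATOMIC_RELEASE);
}
```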
2018-10-19 | perf evsel: Introduce per event max_events property | Arnaldo Carvalho de Melo | 5 files | -0/+17
This simply adds the field to 'struct perf_evsel' and allows setting it via the event parser, to test it lets trace trace: First look at where in a function that receives an evsel we can put a probe to read how evsel->max_events was setup: # perf probe -x ~/bin/perf -L trace__event_handler <trace__event_handler@/home/acme/git/perf/tools/perf/builtin-trace.c:0> 0 static int trace__event_handler(struct trace *trace, struct perf_evsel *evsel, union perf_event *event __maybe_unused, struct perf_sample *sample) 3 { 4 struct thread *thread = machine__findnew_thread(trace->host, sample->pid, sample->tid); 5 int callchain_ret = 0; 7 if (sample->callchain) { 8 callchain_ret = trace__resolve_callchain(trace, evsel, sample, &callchain_cursor); 9 if (callchain_ret == 0) { 10 if (callchain_cursor.nr < trace->min_stack) 11 goto out; 12 callchain_ret = 1; } } See what variables we can probe at line 7: # perf probe -x ~/bin/perf -V trace__event_handler:7 Available variables at trace__event_handler:7 @<trace__event_handler+89> int callchain_ret struct perf_evsel* evsel struct perf_sample* sample struct thread* thread struct trace* trace union perf_event* event Add a probe at that line asking for evsel->max_events to be collected and named as "max_events": # perf probe -x ~/bin/perf trace__event_handler:7 'max_events=evsel->max_events' Added new event: probe_perf:trace__event_handler (on trace__event_handler:7 in /home/acme/bin/perf with max_events=evsel->max_events) You can now use it in all perf tools, such as: perf record -e probe_perf:trace__event_handler -aR sleep 1 Now use 'perf trace', here aliased to just 'trace' and trace trace, i.e. the first 'trace' is tracing just that 'probe_perf:trace__event_handler' event, while the traced trace is tracing all scheduler tracepoints, will stop at two events (--max-events 2) and will just set evsel->max_events for all the sched tracepoints to 9, we will see the output of both traces intermixed: # trace -e *perf:*event_handler trace --max-events 2 -e sched:*/nr=9/ 0.000 :0/0 sched:sched_waking:comm=rcu_sched pid=10 prio=120 target_cpu=000 0.009 :0/0 sched:sched_wakeup:comm=rcu_sched pid=10 prio=120 target_cpu=000 0.000 trace/23949 probe_perf:trace__event_handler:(48c34a) max_events=0x9 0.046 trace/23949 probe_perf:trace__event_handler:(48c34a) max_events=0x9 # Now, if the traced trace sends its output to /dev/null, we'll see just what the first level trace outputs: that evsel->max_events is indeed being set to 9: # trace -e *perf:*event_handler trace -o /dev/null --max-events 2 -e sched:*/nr=9/ 0.000 trace/23961 probe_perf:trace__event_handler:(48c34a) max_events=0x9 0.030 trace/23961 probe_perf:trace__event_handler:(48c34a) max_events=0x9 # Now that we can set evsel->max_events, we can go to the next step, honour that per-event property in 'perf trace'. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Milian Wolff <milian.wolff@kdab.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-og00yasj276joem6e14l1eas@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
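[Editor's note] For the "next step" mentioned at the end, honouring max_events amounts to a per-event countdown; a hypothetical sketch, with field and function names invented for illustration rather than taken from builtin-trace.c:

```c
#include <stdbool.h>

struct event_limit {
	unsigned long max_events;	/* 0 means "no limit" in this sketch */
	unsigned long nr_events;	/* how many we have printed so far */
	bool disabled;
};

/* Returns true if this event should still be printed. */
static bool event_limit__account(struct event_limit *lim)
{
	if (lim->disabled)
		return false;

	lim->nr_events++;
	if (lim->max_events && lim->nr_events >= lim->max_events)
		lim->disabled = true;	/* real code would also ask the kernel to disable it */

	return true;
}
```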
2018-10-18 | perf symbols: Set PLT entry/header sizes properly on Sparc | David Miller | 1 file | -1/+11
Using the sh_entsize for both values isn't correct. It happens to be correct on x86...

For both 32-bit and 64-bit sparc, there are four PLT entries in the PLT section.

Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexis Berlemont <alexis.berlemont@gmail.com>
Cc: David Tolnay <dtolnay@gmail.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
Cc: Li Bin <huawei.libin@huawei.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: zhangmengting@huawei.com
Fixes: b2f7605076d6 ("perf symbols: Fix plt entry calculation for ARM and AARCH64")
Link: http://lkml.kernel.org/r/20181017.120859.2268840244308635255.davem@davemloft.net
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
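[Editor's note] What distinguishing the entry size from the header size buys you is the number of PLT-based symbols. A rough sketch of the arithmetic, with the Sparc layout (four reserved entries at the start of the PLT) hard-coded as described above; this is not the actual symbol-elf.c code:

```c
#include <stdint.h>

/*
 * Number of call targets a PLT section can resolve: skip the reserved
 * header entries, then divide what is left by the per-entry size.
 * On x86 sh_entsize happens to work for both sizes; on Sparc the
 * header is the first four (reserved) entries.
 */
static uint64_t plt_symbol_count(uint64_t plt_size, uint64_t entry_size,
				 uint64_t header_size)
{
	if (!entry_size || plt_size <= header_size)
		return 0;

	return (plt_size - header_size) / entry_size;
}

/* Illustrative Sparc variant following the description above. */
static uint64_t sparc_plt_symbol_count(uint64_t plt_size, uint64_t entry_size)
{
	return plt_symbol_count(plt_size, entry_size, 4 * entry_size);
}
```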
2018-10-18 | perf jitdump: Add Sparc support. | David Miller | 1 file | -0/+6
Signed-off-by: David S. Miller <davem@davemloft.net>
Link: http://lkml.kernel.org/r/20181016.211545.1487970139012324624.davem@davemloft.net
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-18 | perf annotate: Add Sparc support | David Miller | 1 file | -0/+8
E.g.:

  $ perf annotate --stdio2

  Samples: 7K of event 'cycles:ppp', 4000 Hz, Event count (approx.): 3086733887
  __gettimeofday  /lib32/libc-2.27.so [Percent: local period]
  Percent│
         │
         │
         │    Disassembly of section .text:
         │
         │    000a6fa0 <__gettimeofday@@GLIBC_2.0>:
    0.47 │      save   %sp, -96, %sp
    0.73 │      sethi  %hi(0xe9000), %l7
         │    → call   __frame_state_for@@GLIBC_2.0+0x480
    0.30 │      add    %l7, 0x58, %l7        ! e9058 <nftw64@@GLIBC_2.3.3+0x818>
    1.33 │      mov    %i0, %o0
         │      mov    %i1, %o1
    0.43 │      mov    0x74, %g1
         │      ta     0x10
   88.92 │    ↓ bcc    30
    2.95 │      clr    %g1
         │      neg    %o0
         │      mov    1, %g1
    0.31 │30:   cmp    %g1, 0
         │      bne,pn %icc, a6fe4 <__gettimeofday@@GLIBC_2.0+0x44>
         │      mov    %o0, %i0
    1.96 │    ← return %i7 + 8
    2.62 │      nop
         │      sethi  %hi(0), %g1
         │      neg    %o0, %g2
         │      add    %g1, 0x160, %g1
         │      ld     [ %l7 + %g1 ], %g1
         │      st     %g2, [ %g7 + %g1 ]
         │    ← return %i7 + 8
         │      mov    -1, %o0

Signed-off-by: David S. Miller <davem@davemloft.net>
Link: http://lkml.kernel.org/r/20181016.205555.1070918198627611771.davem@davemloft.net
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-18 | perf record: Encode -k clockid frequency into Perf trace | Alexey Budankov | 3 files | -0/+25
Store -k clockid frequency into Perf trace to enable timestamps derived metrics conversion into wall clock time on reporting stage.

Below is the example of perf report output:

  tools/perf/perf record -k raw -- ../../matrix/linux/matrix.gcc
  ...
  [ perf record: Captured and wrote 31.222 MB perf.data (818054 samples) ]

  tools/perf/perf report --header
  # ========
  ...
  # event : name = cycles:ppp, , size = 112, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|PERIOD, disabled = 1, inherit = 1, mmap = 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 3, sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1, use_clockid = 1, clockid = 4
  ...
  # clockid frequency: 1000 MHz
  ...
  # ========

Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/23a4a1dc-b160-85a0-347d-40a2ed6d007b@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
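[Editor's note] With the frequency recorded in the header (1000 MHz in the example, i.e. nanosecond ticks), converting a raw timestamp into wall-clock seconds on the reporting side is plain arithmetic; a tiny sketch with hypothetical names:

```c
#include <stdint.h>

/*
 * Convert a timestamp counted in ticks of 'freq_mhz' (as stored in the
 * "clockid frequency" header feature) into seconds.  With 1000 MHz the
 * ticks are simply nanoseconds.
 */
static double ticks_to_seconds(uint64_t ticks, uint64_t freq_mhz)
{
	return (double)ticks / ((double)freq_mhz * 1e6);
}
```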
2018-10-18 | Merge remote-tracking branch 'tip/perf/urgent' into perf/core | Arnaldo Carvalho de Melo | 4 files | -25/+16
To pick up fixes.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-17 | perf tools: Stop fallbacking to kallsyms for vdso symbols lookup | Arnaldo Carvalho de Melo | 1 file | -19/+2
David reports that: <quote> Perf has this hack where it uses the kernel symbol map as a backup when a symbol can't be found in the user's symbol table(s). This causes problems because the tests driving this code path use machine__kernel_ip(), and that is completely meaningless on Sparc. On sparc64 the kernel and user live in physically separate virtual address spaces, rather than a shared one. And the kernel lives at a virtual address that overlaps common userspace addresses. So this test passes almost all the time when a user symbol lookup fails. The consequence of this is that, if the unfound user virtual address in the sample doesn't match up to a kernel symbol either, we trigger things like this code in builtin-top.c: if (al.sym == NULL && al.map != NULL) { const char *msg = "Kernel samples will not be resolved.\n"; /* * As we do lazy loading of symtabs we only will know if the * specified vmlinux file is invalid when we actually have a * hit in kernel space and then try to load it. So if we get * here and there are _no_ symbols in the DSO backing the * kernel map, bail out. * * We may never get here, for instance, if we use -K/ * --hide-kernel-symbols, even if the user specifies an * invalid --vmlinux ;-) */ if (!machine->kptr_restrict_warned && !top->vmlinux_warned && __map__is_kernel(al.map) && map__has_symbols(al.map)) { if (symbol_conf.vmlinux_name) { char serr[256]; dso__strerror_load(al.map->dso, serr, sizeof(serr)); ui__warning("The %s file can't be used: %s\n%s", symbol_conf.vmlinux_name, serr, msg); } else { ui__warning("A vmlinux file was not found.\n%s", msg); } if (use_browser <= 0) sleep(5); top->vmlinux_warned = true; } } When I fire up a compilation on sparc, this triggers immediately. I'm trying to figure out what the "backup to kernel map" code is accomplishing. I see some language in the current code and in the changes that have happened in this area talking about vdso. Does that really happen? The vdso is mapped into userspace virtual addresses, not kernel ones. More history. This didn't cause problems on sparc some time ago, because the kernel IP check used to be "ip < 0" :-) Sparc kernel addresses are not negative. But now with machine__kernel_ip(), which works using the symbol table determined kernel address range, it does trigger. What it all boils down to is that on architectures like sparc, machine__kernel_ip() should always return false in this scenerio, and therefore this kind of logic: if (cpumode == PERF_RECORD_MISC_USER && machine && mg != &machine->kmaps && machine__kernel_ip(machine, al->addr)) { is basically invalid. PERF_RECORD_MISC_USER implies no kernel address can possibly match for the sample/event in question (no matter how hard you try!) :-) </> So, I thought something had changed and in the past we would somehow find that address in the kallsyms, but I couldn't find anything to back that up, the patch introducing this is over a decade old, lots of things changed, so I was just thinking I was missing something. I tried a gtod busy loop to generate vdso activity and added a 'perf probe' at that branch, on x86_64 to see if it ever gets hit: Made thread__find_map() noinline, as 'perf probe' in lines of inline functions seems to not be working, only at function start. (Masami?) 
# perf probe -x ~/bin/perf -L thread__find_map:57 <thread__find_map@/home/acme/git/perf/tools/perf/util/event.c:57> 57 if (cpumode == PERF_RECORD_MISC_USER && machine && 58 mg != &machine->kmaps && 59 machine__kernel_ip(machine, al->addr)) { 60 mg = &machine->kmaps; 61 load_map = true; 62 goto try_again; } } else { /* * Kernel maps might be changed when loading * symbols so loading * must be done prior to using kernel maps. */ 69 if (load_map) 70 map__load(al->map); 71 al->addr = al->map->map_ip(al->map, al->addr); # perf probe -x ~/bin/perf thread__find_map:60 Added new event: probe_perf:thread__find_map (on thread__find_map:60 in /home/acme/bin/perf) You can now use it in all perf tools, such as: perf record -e probe_perf:thread__find_map -aR sleep 1 # Then used this to see if, system wide, those probe points were being hit: # perf trace -e *perf:thread*/max-stack=8/ ^C[root@jouet ~]# No hits when running 'perf top' and: # cat gtod.c #include <sys/time.h> int main(void) { struct timeval tv; while (1) gettimeofday(&tv, 0); return 0; } [root@jouet c]# ./gtod ^C Pressed 'P' in 'perf top' and the [vdso] samples are there: 62.84% [vdso] [.] __vdso_gettimeofday 8.13% gtod [.] main 7.51% [vdso] [.] 0x0000000000000914 5.78% [vdso] [.] 0x0000000000000917 5.43% gtod [.] _init 2.71% [vdso] [.] 0x000000000000092d 0.35% [kernel] [k] native_io_delay 0.33% libc-2.26.so [.] __memmove_avx_unaligned_erms 0.20% [vdso] [.] 0x000000000000091d 0.17% [i2c_i801] [k] i801_access 0.06% firefox [.] free 0.06% libglib-2.0.so.0.5400.3 [.] g_source_iter_next 0.05% [vdso] [.] 0x0000000000000919 0.05% libpthread-2.26.so [.] __pthread_mutex_lock 0.05% libpixman-1.so.0.34.0 [.] 0x000000000006d3a7 0.04% [kernel] [k] entry_SYSCALL_64_trampoline 0.04% libxul.so [.] style::dom_apis::query_selector_slow 0.04% [kernel] [k] module_get_kallsym 0.04% firefox [.] malloc 0.04% [vdso] [.] 0x0000000000000910 I added a 'perf probe' to thread__find_map:69, and that surely got tons of hits, i.e. for every map found, just to make sure the 'perf probe' command was really working. In the process I noticed a bug, we're only have records for '[vdso]' for pre-existing commands, i.e. ones that are running when we start 'perf top', when we will generate the PERF_RECORD_MMAP by looking at /perf/PID/maps. I.e. 
like this, for preexisting processes with a vdso map, again, tracing for all the system, only pre-existing processes get a [vdso] map (when having one): [root@jouet ~]# perf probe -x ~/bin/perf __machine__addnew_vdso Added new event: probe_perf:__machine__addnew_vdso (on __machine__addnew_vdso in /home/acme/bin/perf) You can now use it in all perf tools, such as: perf record -e probe_perf:__machine__addnew_vdso -aR sleep 1 [root@jouet ~]# perf trace -e probe_perf:__machine__addnew_vdso/max-stack=8/ 0.000 probe_perf:__machine__addnew_vdso:(568eb3) __machine__addnew_vdso (/home/acme/bin/perf) map__new (/home/acme/bin/perf) machine__process_mmap2_event (/home/acme/bin/perf) machine__process_event (/home/acme/bin/perf) perf_event__process (/home/acme/bin/perf) perf_tool__process_synth_event (/home/acme/bin/perf) perf_event__synthesize_mmap_events (/home/acme/bin/perf) __event__synthesize_thread (/home/acme/bin/perf) The kernel is generating a PERF_RECORD_MMAP for vDSOs, but somehow 'perf top' is not getting those records while 'perf record' is: # perf record ~acme/c/gtod ^C[ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.076 MB perf.data (1499 samples) ] # perf report -D | grep PERF_RECORD_MMAP2 71293612401913 0x11b48 [0x70]: PERF_RECORD_MMAP2 25484/25484: [0x400000(0x1000) @ 0 fd:02 1137 541179306]: r-xp /home/acme/c/gtod 71293612419012 0x11be0 [0x70]: PERF_RECORD_MMAP2 25484/25484: [0x7fa4a2783000(0x227000) @ 0 fd:00 3146370 854107250]: r-xp /usr/lib64/ld-2.26.so 71293612432110 0x11c50 [0x60]: PERF_RECORD_MMAP2 25484/25484: [0x7ffcdb53a000(0x2000) @ 0 00:00 0 0]: r-xp [vdso] 71293612509944 0x11cb0 [0x70]: PERF_RECORD_MMAP2 25484/25484: [0x7fa4a23cd000(0x3b6000) @ 0 fd:00 3149723 262067164]: r-xp /usr/lib64/libc-2.26.so # # perf script | grep vdso | head gtod 25484 71293.612768: 2485554 cycles:ppp: 7ffcdb53a914 [unknown] ([vdso]) gtod 25484 71293.613576: 2149343 cycles:ppp: 7ffcdb53a917 [unknown] ([vdso]) gtod 25484 71293.614274: 1814652 cycles:ppp: 7ffcdb53aca8 __vdso_gettimeofday+0x98 ([vdso]) gtod 25484 71293.614862: 1669070 cycles:ppp: 7ffcdb53acc5 __vdso_gettimeofday+0xb5 ([vdso]) gtod 25484 71293.615404: 1451589 cycles:ppp: 7ffcdb53acc5 __vdso_gettimeofday+0xb5 ([vdso]) gtod 25484 71293.615999: 1269941 cycles:ppp: 7ffcdb53ace6 __vdso_gettimeofday+0xd6 ([vdso]) gtod 25484 71293.616405: 1177946 cycles:ppp: 7ffcdb53a914 [unknown] ([vdso]) gtod 25484 71293.616775: 1121290 cycles:ppp: 7ffcdb53ac47 __vdso_gettimeofday+0x37 ([vdso]) gtod 25484 71293.617150: 1037721 cycles:ppp: 7ffcdb53ace6 __vdso_gettimeofday+0xd6 ([vdso]) gtod 25484 71293.617478: 994526 cycles:ppp: 7ffcdb53ace6 __vdso_gettimeofday+0xd6 ([vdso]) # The patch is the obvious one and with it we also continue to resolve vdso symbols for pre-existing processes in 'perf top' and for all processes in 'perf record' + 'perf report/script'. Suggested-by: David Miller <davem@davemloft.net> Acked-by: David Miller <davem@davemloft.net> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-cs7skq9pp0kjypiju6o7trse@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-16perf report: Don't crash on invalid inline debug informationMilian Wolff1-0/+3
When the function name for an inline frame is invalid, we must not try to demangle this symbol, otherwise we crash with:

  #0  0x0000555555895c01 in bfd_demangle ()
  #1  0x0000555555823262 in demangle_sym (dso=0x555555d92b90, elf_name=0x0, kmodule=0) at util/symbol-elf.c:215
  #2  dso__demangle_sym (dso=dso@entry=0x555555d92b90, kmodule=<optimized out>, kmodule@entry=0, elf_name=elf_name@entry=0x0) at util/symbol-elf.c:400
  #3  0x00005555557fef4b in new_inline_sym (funcname=0x0, base_sym=0x555555d92b90, dso=0x555555d92b90) at util/srcline.c:89
  #4  inline_list__append_dso_a2l (dso=dso@entry=0x555555c7bb00, node=node@entry=0x555555e31810, sym=sym@entry=0x555555d92b90) at util/srcline.c:264
  #5  0x00005555557ff27f in addr2line (dso_name=dso_name@entry=0x555555d92430 "/home/milian/.debug/.build-id/f7/186d14bb94f3c6161c010926da66033d24fce5/elf", addr=addr@entry=2888, file=file@entry=0x0, line=line@entry=0x0, dso=dso@entry=0x555555c7bb00, unwind_inlines=unwind_inlines@entry=true, node=0x555555e31810, sym=0x555555d92b90) at util/srcline.c:313
  #6  0x00005555557ffe7c in addr2inlines (sym=0x555555d92b90, dso=0x555555c7bb00, addr=2888, dso_name=0x555555d92430 "/home/milian/.debug/.build-id/f7/186d14bb94f3c6161c010926da66033d24fce5/elf") at util/srcline.c:358

So instead handle the case where we get invalid function names for inlined frames and use a fallback '??' function name instead.

While this crash was originally reported by Hadrien for rust code, I can now also reproduce it with trivial C++ code. Indeed, it seems like libbfd fails to interpret the debug information for the inline frame symbol name:

  $ addr2line -e /home/milian/.debug/.build-id/f7/186d14bb94f3c6161c010926da66033d24fce5/elf -if b48
  main
  /usr/include/c++/8.2.1/complex:610
  ??
  /usr/include/c++/8.2.1/complex:618
  ??
  /usr/include/c++/8.2.1/complex:675
  ??
  /usr/include/c++/8.2.1/complex:685
  main
  /home/milian/projects/kdab/rnd/hotspot/tests/test-clients/cpp-inlining/main.cpp:39

I've reported this bug upstream and also attached a patch there which should fix this issue: https://sourceware.org/bugzilla/show_bug.cgi?id=23715

Reported-by: Hadrien Grasland <grasland@lal.in2p3.fr> Signed-off-by: Milian Wolff <milian.wolff@kdab.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Fixes: a64489c56c30 ("perf report: Find the inline stack for a given address") [ The above 'Fixes:' cset is where originally the problem was introduced, i.e. using a2l->funcname without checking if it is NULL, but this current patch fixes the current codebase, i.e. multiple csets were applied after a64489c56c30 before the problem was reported by Hadrien ] Link: http://lkml.kernel.org/r/20180926135207.30263-3-milian.wolff@kdab.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
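The essence of the fix is a guard in front of the demangle call for inline frames. A minimal sketch of that guard, assuming perf's internal types and the dso__demangle_sym() helper visible in the backtrace above; this is an illustration, not the literal patch:

  /* Sketch: never hand a NULL inline-frame name to the demangler. */
  static char *inline_funcname(struct dso *dso, const char *funcname)
  {
          char *demangled;

          if (funcname == NULL)
                  return strdup("??");    /* invalid inline debug info */

          demangled = dso__demangle_sym(dso, 0, funcname);
          return demangled ? demangled : strdup(funcname);
  }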
2018-10-16perf cpu_map: Align cpu map synthesized events properly.David Miller1-0/+1
The size of the resulting cpu map can be smaller than a multiple of sizeof(u64), resulting in SIGBUS on cpus like Sparc as the next event will not be aligned properly. Signed-off-by: David S. Miller <davem@davemloft.net> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@intel.com> Fixes: 6c872901af07 ("perf cpu_map: Add cpu_map event synthesize function") Link: http://lkml.kernel.org/r/20181011.224655.716771175766946817.davem@davemloft.net Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
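The idea behind the one-line fix, as a standalone sketch (names here are illustrative, not the actual perf code): round the synthesized event size up to the next multiple of sizeof(u64) so the event that follows it in the buffer stays 8-byte aligned.

  #include <stddef.h>
  #include <stdint.h>
  #include <linux/perf_event.h>

  /* Pad a synthesized event so the next one starts 8-byte aligned. */
  static size_t cpu_map_event_size(size_t payload)
  {
          size_t size = sizeof(struct perf_event_header) + payload;

          /* round up to a multiple of sizeof(uint64_t) */
          return (size + sizeof(uint64_t) - 1) & ~(sizeof(uint64_t) - 1);
  }

Without the rounding, a cpu map whose payload is not a multiple of 8 leaves the following event misaligned, which is fatal on strict-alignment architectures such as Sparc.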
2018-10-16perf evsel: Store ids for events with their own cpus ↵Jiri Olsa1-0/+3
perf_event__synthesize_event_update_cpus

John reported a crash when recording on an event under a PMU with a cpumask defined:

  root@localhost:~# ./perf_debug_ record -e armv8_pmuv3_0/br_mis_pred/ sleep 1
  perf: Segmentation fault
  Obtained 9 stack frames.
  ./perf_debug_() [0x4c5ef8]
  [0xffff82ba267c]
  ./perf_debug_() [0x4bc5a8]
  ./perf_debug_() [0x419550]
  ./perf_debug_() [0x41a928]
  ./perf_debug_() [0x472f58]
  ./perf_debug_() [0x473210]
  ./perf_debug_() [0x4070f4]
  /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe0) [0xffff8294c8a0]
  Segmentation fault (core dumped)

We synthesize an update event that needs to touch the evsel id array, which is not yet allocated at that time. Fix this by forcing the id allocation for events with their own cpus.

Reported-by: John Garry <john.garry@huawei.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linuxarm@huawei.com Fixes: bfd8f72c2778 ("perf record: Synthesize unit/scale/... in event update") Link: http://lkml.kernel.org/r/20181003212052.GA32371@krava Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
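A rough sketch of the described fix, assuming perf's internal evsel fields and helpers (perf_evsel__alloc_id(), cpu_map__nr()); treat the names and signatures as assumptions rather than the exact patch:

  #include <errno.h>

  /* Ensure the id array exists before the event update is synthesized. */
  static int force_ids_for_own_cpus(struct perf_evsel *evsel, int nthreads)
  {
          if (evsel->own_cpus && !evsel->id) {
                  /* one id slot per (cpu, thread) pair of this event's own cpu map */
                  if (perf_evsel__alloc_id(evsel, cpu_map__nr(evsel->own_cpus), nthreads) < 0)
                          return -ENOMEM;
          }
          /* ... only now is it safe to write evsel->id[] into the update event ... */
          return 0;
  }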
2018-10-09Revert "perf tools: Fix PMU term format max value calculation"Jiri Olsa1-6/+7
This reverts commit ac0e2cd555373ae6f8f3a3ad3fbbf5b6d1e7aaaa.

Michael reported an issue with oversized term value assignment and I noticed there was actually a misunderstanding of the max value check in the past.

The above commit's changelog says:

  If bit 21 is set, there is parsing issues as below.

  $ perf stat -a -e uncore_qpi_0/event=0x200002,umask=0x8/
  event syntax error: '..pi_0/event=0x200002,umask=0x8/'
                                    \___ value too big for format, maximum is 511

But there's no issue there, because the event value is distributed across the bits defined by the format. Even if the format defines separate (non-contiguous) bits, the value is treated as one contiguous number, which must fit the format definition. In the above case it's a 9-bit value with the last bit separated:

  $ cat uncore_qpi_0/format/event
  config:0-7,21

Hence the value 0x200002 is correctly reported as a format violation, because it exceeds 9 bits. It should have been 0x102 instead, which sets the 9th bit, i.e. bit 21 of the format.

  $ perf stat -vv -a -e uncore_qpi_0/event=0x102,umask=0x8/
  Using CPUID GenuineIntel-6-2D
  ...
  ------------------------------------------------------------
  perf_event_attr:
    type                             10
    size                             112
    config                           0x200802
    sample_type                      IDENTIFIER
  ...

Reported-by: Michael Petlan <mpetlan@redhat.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Fixes: ac0e2cd55537 ("perf tools: Fix PMU term format max value calculation") Link: http://lkml.kernel.org/r/20181003072046.29276-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
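To make the "distributed across the format bits" point concrete, here is a small standalone C program (not perf code) that spreads a contiguous value over the bits of a format spec such as config:0-7,21 and reproduces the config value 0x200802 seen in the -vv output above:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Place the i-th bit of 'value' at bit position bits[i] of the config. */
  static uint64_t distribute_value(uint64_t value, const int *bits, int nbits)
  {
          uint64_t config = 0;

          for (int i = 0; i < nbits; i++)
                  config |= ((value >> i) & 1ULL) << bits[i];
          return config;
  }

  int main(void)
  {
          /* bit positions of "config:0-7,21", lowest first: a 9-bit format, max 511 */
          const int event_bits[] = { 0, 1, 2, 3, 4, 5, 6, 7, 21 };
          uint64_t config = distribute_value(0x102, event_bits, 9);

          config |= (uint64_t)0x8 << 8;             /* umask=0x8 lives in config:8-15 */
          printf("config 0x%" PRIx64 "\n", config); /* prints: config 0x200802 */
          return 0;
  }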
2018-10-09Merge tag 'perf-core-for-mingo-4.20-20181008' of ↵Ingo Molnar6-24/+42
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

- Fix building the python bindings with python3, which fixes some problems with building with clang on Clear Linux (Eduardo Habkost)
- Fix coverity warnings, fixing up some error paths and plugging some temporary small buffer leaks (Sanskriti Sharma)
- Adopt a wrapper for strerror_r() for the same reasons as recently for libbpf (Steven Rostedt)
- S390 does not support watchpoints in 'perf test 22', check if that test is supported by the arch. (Thomas Richter)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-09Merge branch 'perf/urgent' into perf/core, to pick up fixesIngo Molnar1-3/+5
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-08tools lib traceevent, perf tools: Move struct tep_handler definition in a ↵Tzvetomir Stoyanov2-11/+16
local header file

As traceevent is going to be turned into a proper standalone library, its local data should be protected from the library users. This patch encapsulates struct tep_handler into a local header, not visible outside of the library. It also implements a bunch of new APIs, which library users can use to access tep_handler members.

Signed-off-by: Tzvetomir Stoyanov <tstoyanov@vmware.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: linux trace devel <linux-trace-devel@vger.kernel.org> Cc: tzvetomir stoyanov <tstoyanov@vmware.com> Link: http://lkml.kernel.org/r/20181005122225.522155df@gandalf.local.home Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
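The pattern being applied is the classic opaque-handle one. A generic sketch of it follows; the member and accessor names below are made up for illustration, not the library's actual API:

  /* event-parse.h (public header): users only see an opaque handle. */
  struct tep_handler;
  int tep_get_cpus(struct tep_handler *tep);
  void tep_set_cpus(struct tep_handler *tep, int cpus);

  /* event-parse-local.h (library-private header): the real definition. */
  struct tep_handler {
          int ref_count;
          int cpus;
          /* ... */
  };

  /* event-parse.c: accessors are the only way users reach the members. */
  int tep_get_cpus(struct tep_handler *tep)
  {
          return tep->cpus;
  }

  void tep_set_cpus(struct tep_handler *tep, int cpus)
  {
          tep->cpus = cpus;
  }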
2018-10-08perf python: More portable way to make CFLAGS work with clangEduardo Habkost1-6/+8
The existing code that tries to make CFLAGS compatible with clang doesn't work with Python 3. Instead of trying to touch _sysconfigdata.build_time_vars directly, change the dictionary returned by distutils.sysconfig.get_config_vars(). This works on both Python 2 and Python 3. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/20181005204058.7966-3-ehabkost@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-08perf python: Make clang_has_option() work on Python 3Eduardo Habkost1-1/+1
Use a bytes literal so it works with Python 3's version of Popen(). Note that the b"..." syntax requires Python 2.6+. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/20181005204058.7966-2-ehabkost@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-08perf tools: Free temporary 'sys' string in read_event_files()Sanskriti Sharma1-1/+4
For each system in a given pevent, read_event_files() reads in a temporary 'sys' string. Be sure to free this string before moving on to the next system and/or leaving read_event_files().

Fixes the following coverity complaints:

  Error: RESOURCE_LEAK (CWE-772):
  tools/perf/util/trace-event-read.c:343: overwrite_var: Overwriting "sys" in "sys = read_string()" leaks the storage that "sys" points to.
  tools/perf/util/trace-event-read.c:353: leaked_storage: Variable "sys" going out of scope leaks the storage it points to.

Signed-off-by: Sanskriti Sharma <sansharm@redhat.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Joe Lawrence <joe.lawrence@redhat.com> Link: http://lkml.kernel.org/r/1538490554-8161-6-git-send-email-sansharm@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
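The leak pattern and its fix, as a self-contained sketch; read_string_stub() is a stand-in for the real reader, not perf code:

  #include <stdlib.h>
  #include <string.h>

  /* stand-in for read_string(): returns a freshly allocated string each call */
  static char *read_string_stub(int i)
  {
          return strdup(i % 2 ? "ftrace" : "sched");
  }

  static int read_event_files_sketch(int nr_systems)
  {
          for (int i = 0; i < nr_systems; i++) {
                  char *sys = read_string_stub(i);

                  if (!sys)
                          return -1;

                  /* ... read the event files for this system ... */

                  free(sys);      /* the fix: release each per-iteration copy */
          }
          return 0;
  }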
2018-10-08perf tools: Avoid double free in read_event_file()Sanskriti Sharma1-3/+1
The temporary 'buf' buffer allocated in read_event_file() may be freed twice. Move the free() call to the common function exit point. Fixes the following coverity complaints: Error: USE_AFTER_FREE (CWE-825): tools/perf/util/trace-event-read.c:309: double_free: Calling "free" frees pointer "buf" which has already been freed. Signed-off-by: Sanskriti Sharma <sansharm@redhat.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Joe Lawrence <joe.lawrence@redhat.com> Link: http://lkml.kernel.org/r/1538490554-8161-5-git-send-email-sansharm@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
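This class of double free is avoided by funneling every path through one cleanup point. A self-contained sketch of that shape; do_read() and parse_event() are trivial stand-ins, not the real perf functions:

  #include <stdlib.h>
  #include <string.h>

  static int do_read(char *buf, size_t size)     { memset(buf, 0, size); return 0; }
  static int parse_event(char *buf, size_t size) { (void)buf; (void)size; return 0; }

  static int read_event_file_sketch(size_t size)
  {
          char *buf = malloc(size);
          int ret;

          if (!buf)
                  return -1;

          ret = do_read(buf, size);
          if (ret < 0)
                  goto out;               /* no free() here... */

          ret = parse_event(buf, size);
  out:
          free(buf);                      /* ...it happens exactly once, here */
          return ret;
  }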
2018-10-08perf tools: Free 'printk' string in parse_ftrace_printk()Sanskriti Sharma1-0/+1
parse_ftrace_printk() tokenizes and parses a line, calling strdup() each iteration. Add code to free this temporary format string duplicate. Fixes the following coverity complaints: Error: RESOURCE_LEAK (CWE-772): tools/perf/util/trace-event-parse.c:158: overwrite_var: Overwriting "printk" in "printk = strdup(fmt + 1)" leaks the storage that "printk" points to. tools/perf/util/trace-event-parse.c:162: leaked_storage: Variable "printk" going out of scope leaks the storage it points to. Signed-off-by: Sanskriti Sharma <sansharm@redhat.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Joe Lawrence <joe.lawrence@redhat.com> Link: http://lkml.kernel.org/r/1538490554-8161-4-git-send-email-sansharm@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-08perf tools: Cleanup trace-event-info 'tdata' leakSanskriti Sharma1-0/+2
Free tracing_data structure in tracing_data_get() error paths. Fixes the following coverity complaint: Error: RESOURCE_LEAK (CWE-772): leaked_storage: Variable "tdata" going out of scope leaks the storage Signed-off-by: Sanskriti Sharma <sansharm@redhat.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Joe Lawrence <joe.lawrence@redhat.com> Link: http://lkml.kernel.org/r/1538490554-8161-3-git-send-email-sansharm@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-08perf strbuf: Match va_{add,copy} with va_endSanskriti Sharma1-2/+8
Ensure that all code paths in strbuf_addv() call va_end() on the ap_saved copy that was made. Fixes the following coverity complaint: Error: VARARGS (CWE-237): [#def683] tools/perf/util/strbuf.c:106: missing_va_end: va_end was not called for "ap_saved". Signed-off-by: Sanskriti Sharma <sansharm@redhat.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Joe Lawrence <joe.lawrence@redhat.com> Link: http://lkml.kernel.org/r/1538490554-8161-2-git-send-email-sansharm@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
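The rule being enforced, shown as a self-contained sketch of a strbuf_addv()-like function (not the actual perf code): every va_copy() gets a matching va_end() on every exit path, including the error path.

  #include <stdarg.h>
  #include <stdio.h>

  static int vaddf_sketch(char *buf, size_t len, const char *fmt, va_list ap)
  {
          va_list ap_saved;
          int ret;

          va_copy(ap_saved, ap);          /* keep a copy in case we must retry */

          ret = vsnprintf(buf, len, fmt, ap);
          if (ret < 0) {
                  va_end(ap_saved);       /* error path releases the copy too */
                  return ret;
          }

          if ((size_t)ret >= len)         /* truncated: retry with the saved copy */
                  ret = vsnprintf(buf, len, fmt, ap_saved);

          va_end(ap_saved);               /* normal path */
          return ret;
  }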
2018-10-08perf auxtrace: Include missing asm/bitsperlong.h to get BITS_PER_LONGArnaldo Carvalho de Melo1-0/+1
The auxtrace.h header references BITS_PER_LONG without including the header where it is defined, getting it by luck from some other header, fix it. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Sverdlin <alexander.sverdlin@nokia.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-v04ydmbh7tvpcctf3zld9j9s@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-05perf record: Use unmapped IP for inline callchain cursorsMilian Wolff1-1/+2
Only use the mapped IP to find inline frames, but keep using the unmapped IP for the callchain cursor. This ensures we properly show the unmapped IP when displaying a frame we received via the dso__parse_addr_inlines API for a module which does not contain sufficient debug symbols to show the srcline. This is another follow-up to commit 19610184693c ("perf script: Show virtual addresses instead of offsets"). Signed-off-by: Milian Wolff <milian.wolff@kdab.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Sandipan Das <sandipan@linux.ibm.com> Fixes: 19610184693c ("perf script: Show virtual addresses instead of offsets") Link: http://lkml.kernel.org/r/20180926135207.30263-2-milian.wolff@kdab.com Link: http://lkml.kernel.org/r/20181002073949.3297-1-milian.wolff@kdab.com [ Squashed a fix from Milian for a problem reported by Ravi, fixed up space damage ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-05perf python: Use -Wno-redundant-decls to build with PYTHON=python3Arnaldo Carvalho de Melo1-1/+1
When building in ClearLinux using 'make PYTHON=python3' with gcc 8.2.1 it fails with: GEN /tmp/build/perf/python/perf.so In file included from /usr/include/python3.7m/Python.h:126, from /git/linux/tools/perf/util/python.c:2: /usr/include/python3.7m/import.h:58:24: error: redundant redeclaration of ‘_PyImport_AddModuleObject’ [-Werror=redundant-decls] PyAPI_FUNC(PyObject *) _PyImport_AddModuleObject(PyObject *, PyObject *); ^~~~~~~~~~~~~~~~~~~~~~~~~ /usr/include/python3.7m/import.h:47:24: note: previous declaration of ‘_PyImport_AddModuleObject’ was here PyAPI_FUNC(PyObject *) _PyImport_AddModuleObject(PyObject *name, ^~~~~~~~~~~~~~~~~~~~~~~~~ cc1: all warnings being treated as errors error: command 'gcc' failed with exit status 1 And indeed there is a redundant declaration in that Python.h file, one with parameter names and the other without, so just add -Wno-error=redundant-decls to the python setup instructions. Now perf builds with gcc in ClearLinux with the following Dockerfile: # docker.io/acmel/linux-perf-tools-build-clearlinux:latest FROM docker.io/clearlinux:latest MAINTAINER Arnaldo Carvalho de Melo <acme@kernel.org> RUN swupd update && \ swupd bundle-add sysadmin-basic-dev RUN mkdir -m 777 -p /git /tmp/build/perf /tmp/build/objtool /tmp/build/linux && \ groupadd -r perfbuilder && \ useradd -m -r -g perfbuilder perfbuilder && \ chown -R perfbuilder.perfbuilder /tmp/build/ /git/ USER perfbuilder COPY rx_and_build.sh / ENV EXTRA_MAKE_ARGS=PYTHON=python3 ENTRYPOINT ["/rx_and_build.sh"] Now to figure out why the build fails with clang, that is present in the above container as detected by the rx_and_build.sh script: clang version 6.0.1 (tags/RELEASE_601/final) Target: x86_64-unknown-linux-gnu Thread model: posix InstalledDir: /usr/sbin make: Entering directory '/git/linux/tools/perf' BUILD: Doing 'make -j4' parallel build HOSTCC /tmp/build/perf/fixdep.o HOSTLD /tmp/build/perf/fixdep-in.o LINK /tmp/build/perf/fixdep Auto-detecting system features: ... dwarf: [ OFF ] ... dwarf_getlocations: [ OFF ] ... glibc: [ OFF ] ... gtk2: [ OFF ] ... libaudit: [ OFF ] ... libbfd: [ OFF ] ... libelf: [ OFF ] ... libnuma: [ OFF ] ... numa_num_possible_cpus: [ OFF ] ... libperl: [ OFF ] ... libpython: [ OFF ] ... libslang: [ OFF ] ... libcrypto: [ OFF ] ... libunwind: [ OFF ] ... libdw-dwarf-unwind: [ OFF ] ... zlib: [ OFF ] ... lzma: [ OFF ] ... get_cpuid: [ OFF ] ... bpf: [ OFF ] Makefile.config:331: *** No gnu/libc-version.h found, please install glibc-dev[el]. Stop. make[1]: *** [Makefile.perf:206: sub-make] Error 2 make: *** [Makefile:70: all] Error 2 make: Leaving directory '/git/linux/tools/perf' Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Thiago Macieira <thiago.macieira@intel.com> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-c3khb9ac86s00qxzjrueomme@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-27perf report: Don't try to map ip to invalid mapMilian Wolff1-2/+3
Fixes a crash when the report encounters an address that could not be associated with an mmaped region:

  #0  0x00005555557bdc4a in callchain_srcline (ip=<error reading variable: Cannot access memory at address 0x38>, sym=0x0, map=0x0) at util/machine.c:2329
  #1  unwind_entry (entry=entry@entry=0x7fffffff9180, arg=arg@entry=0x7ffff5642498) at util/machine.c:2329
  #2  0x00005555558370af in entry (arg=0x7ffff5642498, cb=0x5555557bdb50 <unwind_entry>, thread=<optimized out>, ip=18446744073709551615) at util/unwind-libunwind-local.c:586
  #3  get_entries (ui=ui@entry=0x7fffffff9620, cb=0x5555557bdb50 <unwind_entry>, arg=0x7ffff5642498, max_stack=<optimized out>) at util/unwind-libunwind-local.c:703
  #4  0x0000555555837192 in _unwind__get_entries (cb=<optimized out>, arg=<optimized out>, thread=<optimized out>, data=<optimized out>, max_stack=<optimized out>) at util/unwind-libunwind-local.c:725
  #5  0x00005555557c310f in thread__resolve_callchain_unwind (max_stack=127, sample=0x7fffffff9830, evsel=0x555555c7b3b0, cursor=0x7ffff5642498, thread=0x555555c7f6f0) at util/machine.c:2351
  #6  thread__resolve_callchain (thread=0x555555c7f6f0, cursor=0x7ffff5642498, evsel=0x555555c7b3b0, sample=0x7fffffff9830, parent=0x7fffffff97b8, root_al=0x7fffffff9750, max_stack=127) at util/machine.c:2378
  #7  0x00005555557ba4ee in sample__resolve_callchain (sample=<optimized out>, cursor=<optimized out>, parent=parent@entry=0x7fffffff97b8, evsel=<optimized out>, al=al@entry=0x7fffffff9750, max_stack=<optimized out>) at util/callchain.c:1085

Signed-off-by: Milian Wolff <milian.wolff@kdab.com> Tested-by: Sandipan Das <sandipan@linux.ibm.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Fixes: 2a9d5050dc84 ("perf script: Show correct offsets for DWARF-based unwinding") Link: http://lkml.kernel.org/r/20180926135207.30263-1-milian.wolff@kdab.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
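The shape of the guard, assuming perf's internal map/symbol types; a sketch rather than the literal patch, and srcline_for() is a hypothetical helper used only to complete the example:

  /* Bail out before dereferencing a map that was never resolved. */
  static const char *callchain_srcline_sketch(struct map *map, struct symbol *sym, u64 ip)
  {
          if (!map || !map->dso)
                  return NULL;    /* the ip fell outside every mmaped region */

          /* only now is it safe to use map->dso and map->map_ip() for the srcline */
          return srcline_for(map->dso, map->map_ip(map, ip), sym);
  }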
2018-09-24perf probe: Support SDT markers having reference counter (semaphore)Ravi Bangoria6-22/+106
With this, 'perf buildid-cache' will save SDT markers with a reference counter in the probe cache, and 'perf probe' will be able to probe markers that have a reference counter. For example:

  # readelf -n /tmp/tick | grep -A1 loop2
    Name: loop2
    ... Semaphore: 0x0000000010020036

  # ./perf buildid-cache --add /tmp/tick
  # ./perf probe sdt_tick:loop2
  # ./perf stat -e sdt_tick:loop2 /tmp/tick
    hi: 0
    hi: 1
    hi: 2
  ^C
   Performance counter stats for '/tmp/tick':
                 3      sdt_tick:loop2
       2.561851452 seconds time elapsed

Link: http://lkml.kernel.org/r/20180820044250.11659-5-ravi.bangoria@linux.ibm.com Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Reviewed-by: Song Liu <songliubraving@fb.com> Tested-by: Song Liu <songliubraving@fb.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
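For reference, this is roughly what an SDT marker guarded by such a reference counter (semaphore) looks like on the user side with systemtap's <sys/sdt.h>. The /tmp/tick source is not part of the commit, so this program is only an assumed sketch that matches the provider/marker names above:

  #define _SDT_HAS_SEMAPHORES 1
  #include <sys/sdt.h>
  #include <stdio.h>
  #include <unistd.h>

  /* <provider>_<marker>_semaphore, bumped by the tracer while the probe is armed */
  unsigned short tick_loop2_semaphore __attribute__((unused, section(".probes")));

  int main(void)
  {
          for (int i = 0; ; i++) {
                  printf("hi: %d\n", i);
                  if (tick_loop2_semaphore)       /* skip argument setup when nobody traces */
                          DTRACE_PROBE1(tick, loop2, i);
                  sleep(1);
          }
          return 0;
  }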
2018-09-20perf intel-pt: Implement decoder flags for trace begin / endAdrian Hunter1-11/+23
Have the Intel PT decoder implement the new Intel PT decoder flags for trace begin / end. Previously, the decoder would indicate begin / end by a branch from / to zero. That hides useful information, in particular when a trace ends with a call. That happens when using address filters, for example: $ perf record -e intel_pt/cyc,mtc_period=0,noretcomp/u --filter='filter main @ /bin/uname ' uname Linux [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.031 MB perf.data ] Before: $ perf script --itrace=cre -Ftime,flags,ip,sym,symoff,addr --ns 7249.622183310: tr strt 0 [unknown] => 401590 main+0x0 7249.622183311: call 4015b9 main+0x29 => 0 [unknown] 7249.622183711: tr strt 0 [unknown] => 4015be main+0x2e 7249.622183714: call 4015c8 main+0x38 => 0 [unknown] 7249.622247731: tr strt 0 [unknown] => 4015cd main+0x3d 7249.622247760: call 4015d7 main+0x47 => 0 [unknown] 7249.622248340: tr strt 0 [unknown] => 4015dc main+0x4c 7249.622248341: call 4015e1 main+0x51 => 0 [unknown] 7249.622248681: tr strt 0 [unknown] => 4015e6 main+0x56 7249.622248682: call 4015eb main+0x5b => 0 [unknown] 7249.622248970: tr strt 0 [unknown] => 4015f0 main+0x60 7249.622248971: call 401612 main+0x82 => 0 [unknown] 7249.622249757: tr strt 0 [unknown] => 401617 main+0x87 7249.622249770: call 401847 main+0x2b7 => 0 [unknown] 7249.622250606: tr strt 0 [unknown] => 40184c main+0x2bc 7249.622250612: call 4019bf main+0x42f => 0 [unknown] 7249.622256823: tr strt 0 [unknown] => 4019c4 main+0x434 7249.622256863: call 4019f5 main+0x465 => 0 [unknown] 7249.622264217: tr strt 0 [unknown] => 4019fa main+0x46a 7249.622264235: call 401832 main+0x2a2 => 0 [unknown] After: $ perf script --itrace=cre -Ftime,flags,ip,sym,symoff,addr --ns 7249.622183310: tr strt 0 [unknown] => 401590 main+0x0 7249.622183311: tr end call 4015b9 main+0x29 => 401ef0 set_program_name+0x0 7249.622183711: tr strt 0 [unknown] => 4015be main+0x2e 7249.622183714: tr end call 4015c8 main+0x38 => 4014b0 setlocale@plt+0x0 7249.622247731: tr strt 0 [unknown] => 4015cd main+0x3d 7249.622247760: tr end call 4015d7 main+0x47 => 4012d0 bindtextdomain@plt+0x0 7249.622248340: tr strt 0 [unknown] => 4015dc main+0x4c 7249.622248341: tr end call 4015e1 main+0x51 => 4012b0 textdomain@plt+0x0 7249.622248681: tr strt 0 [unknown] => 4015e6 main+0x56 7249.622248682: tr end call 4015eb main+0x5b => 404340 atexit+0x0 7249.622248970: tr strt 0 [unknown] => 4015f0 main+0x60 7249.622248971: tr end call 401612 main+0x82 => 401320 getopt_long@plt+0x0 7249.622249757: tr strt 0 [unknown] => 401617 main+0x87 7249.622249770: tr end call 401847 main+0x2b7 => 401360 uname@plt+0x0 7249.622250606: tr strt 0 [unknown] => 40184c main+0x2bc 7249.622250612: tr end call 4019bf main+0x42f => 401b10 print_element+0x0 7249.622256823: tr strt 0 [unknown] => 4019c4 main+0x434 7249.622256863: tr end call 4019f5 main+0x465 => 401340 __overflow@plt+0x0 7249.622264217: tr strt 0 [unknown] => 4019fa main+0x46a 7249.622264235: tr end call 401832 main+0x2a2 => 401520 exit@plt+0x0 Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lkml.kernel.org/r/20180920130048.31432-7-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-20perf intel-pt: Add decoder flags for trace begin / endAdrian Hunter2-0/+7
Previously, the decoder would indicate begin / end by a branch from / to zero. That hides useful information, in particular when a trace ends with a call. To prepare for remedying that, add Intel PT decoder flags for trace begin / end and map them to the existing sample flags. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lkml.kernel.org/r/20180920130048.31432-6-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-20perf tools: Improve thread_stack__process() for trace begin / endAdrian Hunter1-7/+9
thread_stack__process() is used to create call paths for database export. Improve the handling of trace begin / end to allow for a trace that ends in a call. Previously, the Intel PT decoder would indicate begin / end by a branch from / to zero. That hides useful information, in particular when a trace ends with a call. Before remedying that, enhance the thread stack so that it identifies the trace end by the flag instead of by ip == 0. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lkml.kernel.org/r/20180920130048.31432-5-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-20perf tools: Improve thread_stack__event() for trace begin / endAdrian Hunter1-5/+30
thread_stack__event() is used to create call stacks, by keeping track of calls and returns. Improve the handling of trace begin / end to allow for a trace that ends in a call.

Previously, the Intel PT decoder would indicate begin / end by a branch from / to zero. That hides useful information, in particular when a trace ends with a call. Before remedying that, enhance the thread stack so that it does not expect to see the 'return' for a 'call' that ends the trace.

Committer notes:

Added this change:

        return thread_stack__push(thread->ts, ret_addr,
-                                 flags && PERF_IP_FLAG_TRACE_END);
+                                 flags & PERF_IP_FLAG_TRACE_END);

to fix a problem spotted by:

  debian:9:            clang version 3.8.1-24 (tags/RELEASE_381/final)
  debian:experimental: clang version 6.0.1-6 (tags/RELEASE_601/final)

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lkml.kernel.org/r/20180920130048.31432-4-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
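Why the committer's one-character change matters: '&&' reduces any non-zero flags word to 1, so the pushed value would claim 'trace end' for every branch that has any flag set, while '&' tests the one bit that was meant. A tiny standalone illustration; the flag values mirror perf's sample flags but should be treated as illustrative here:

  #include <stdbool.h>
  #include <stdint.h>

  #define PERF_IP_FLAG_CALL       (1ULL << 1)
  #define PERF_IP_FLAG_TRACE_END  (1ULL << 9)

  static bool is_trace_end(uint64_t flags)
  {
          /* correct: test the single bit */
          return flags & PERF_IP_FLAG_TRACE_END;
          /*
           * 'flags && PERF_IP_FLAG_TRACE_END' would be true for ANY non-zero
           * flags, e.g. for a plain PERF_IP_FLAG_CALL sample.
           */
  }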
2018-09-20perf db-export: Add trace begin / end branch type variantsAdrian Hunter1-0/+22
Add branch types to cover different combinations with "trace begin" or "trace end". Previously, the Intel PT decoder would indicate begin / end by a branch from / to zero. That hides useful information, in particular when a trace ends with a call. Before remedying that, prepare the database export to export branch types with more combinations that include trace begin / end. In those cases extend the descriptions to include 'trace begin' and 'trace end' separately. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lkml.kernel.org/r/20180920130048.31432-3-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Rename data2host*() APIsTzvetomir Stoyanov (VMware)1-2/+2
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This renames data2host*() APIs Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185724.751088939@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Rename struct plugin_list to struct tep_plugin_listTzvetomir Stoyanov (VMware)1-2/+2
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This renames struct plugin_list to struct tep_plugin_list Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185724.586889128@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>