Diffstat (limited to 'poky/documentation/profile-manual/usage.rst')
 poky/documentation/profile-manual/usage.rst | 184
 1 file changed, 92 insertions(+), 92 deletions(-)
diff --git a/poky/documentation/profile-manual/usage.rst b/poky/documentation/profile-manual/usage.rst
index c42f5b64b..825290c3f 100644
--- a/poky/documentation/profile-manual/usage.rst
+++ b/poky/documentation/profile-manual/usage.rst
@@ -48,7 +48,7 @@ For this section, we'll assume you've already performed the basic setup
outlined in the ":ref:`profile-manual/intro:General Setup`" section.
In particular, you'll get the most mileage out of perf if you profile an
-image built with the following in your ``local.conf`` file: ::
+image built with the following in your ``local.conf`` file::
INHIBIT_PACKAGE_STRIP = "1"
@@ -62,7 +62,7 @@ Basic Perf Usage
The perf tool is pretty much self-documenting. To remind yourself of the
available commands, simply type 'perf', which will show you basic usage
-along with the available perf subcommands: ::
+along with the available perf subcommands::
root@crownbay:~# perf
@@ -110,7 +110,7 @@ applets in Yocto. ::
The quickest and easiest way to get some basic overall data about what's
going on for a particular workload is to profile it using 'perf stat'.
'perf stat' basically profiles using a few default counters and displays
-the summed counts at the end of the run: ::
+the summed counts at the end of the run::
root@crownbay:~# perf stat wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2
Connecting to downloads.yoctoproject.org (140.211.169.59:80)
@@ -139,7 +139,7 @@ Also, note that 'perf stat' isn't restricted to a fixed set of counters
- basically any event listed in the output of 'perf list' can be tallied
by 'perf stat'. For example, suppose we wanted to see a summary of all
the events related to kernel memory allocation/freeing along with cache
-hits and misses: ::
+hits and misses::
root@crownbay:~# perf stat -e kmem:* -e cache-references -e cache-misses wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2
Connecting to downloads.yoctoproject.org (140.211.169.59:80)
@@ -191,7 +191,7 @@ directory. ::
To see the results in a
'text-based UI' (tui), simply run 'perf report', which will read the
perf.data file in the current working directory and display the results
-in an interactive UI: ::
+in an interactive UI::
root@crownbay:~# perf report
@@ -217,7 +217,7 @@ Before we do that, however, let's try running a different profile, one
which shows something a little more interesting. The only difference
between the new profile and the previous one is that we'll add the -g
option, which will record not just the address of a sampled function,
-but the entire callchain to the sampled function as well: ::
+but the entire callchain to the sampled function as well::
root@crownbay:~# perf record -g wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2
Connecting to downloads.yoctoproject.org (140.211.169.59:80)
@@ -293,7 +293,7 @@ busybox binary, which is actually stripped out by the Yocto build
system.
One way around that is to put the following in your ``local.conf`` file
-when you build the image: ::
+when you build the image::
INHIBIT_PACKAGE_STRIP = "1"
@@ -302,26 +302,26 @@ what can we do to get perf to resolve the symbols? Basically we need to
install the debuginfo for the BusyBox package.
To generate the debug info for the packages in the image, we can add
-``dbg-pkgs`` to :term:`EXTRA_IMAGE_FEATURES` in ``local.conf``. For example: ::
+``dbg-pkgs`` to :term:`EXTRA_IMAGE_FEATURES` in ``local.conf``. For example::
EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile dbg-pkgs"
Additionally, in order to generate the type of debuginfo that perf
understands, we also need to set
:term:`PACKAGE_DEBUG_SPLIT_STYLE`
-in the ``local.conf`` file: ::
+in the ``local.conf`` file::
PACKAGE_DEBUG_SPLIT_STYLE = 'debug-file-directory'
Once we've done that, we can install the
debuginfo for BusyBox. The debug packages once built can be found in
``build/tmp/deploy/rpm/*`` on the host system. Find the busybox-dbg-...rpm
-file and copy it to the target. For example: ::
+file and copy it to the target. For example::
[trz@empanada core2]$ scp /home/trz/yocto/crownbay-tracing-dbg/build/tmp/deploy/rpm/core2_32/busybox-dbg-1.20.2-r2.core2_32.rpm root@192.168.1.31:
busybox-dbg-1.20.2-r2.core2_32.rpm 100% 1826KB 1.8MB/s 00:01
-Now install the debug rpm on the target: ::
+Now install the debug rpm on the target::
root@crownbay:~# rpm -i busybox-dbg-1.20.2-r2.core2_32.rpm
@@ -382,7 +382,7 @@ traditional tools can also make use of the expanded possibilities now
available to them, and in some cases have, as mentioned previously).
We can get a list of the available events that can be used to profile a
-workload via 'perf list': ::
+workload via 'perf list'::
root@crownbay:~# perf list
@@ -525,7 +525,7 @@ workload via 'perf list': ::
Only a subset of these would be of interest to us when looking at this
workload, so let's choose the most likely subsystems (identified by the
string before the colon in the Tracepoint events) and do a 'perf stat'
-run using only those wildcarded subsystems: ::
+run using only those wildcarded subsystems::
root@crownbay:~# perf stat -e skb:* -e net:* -e napi:* -e sched:* -e workqueue:* -e irq:* -e syscalls:* wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2
Performance counter stats for 'wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2':
@@ -587,7 +587,7 @@ run using only those wildcarded subsystems: ::
Let's pick one of these tracepoints
-and tell perf to do a profile using it as the sampling event: ::
+and tell perf to do a profile using it as the sampling event::
root@crownbay:~# perf record -g -e sched:sched_wakeup wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2
@@ -644,14 +644,14 @@ individual steps that go into the higher-level behavior exposed by the
coarse-grained profiling data.
As a concrete example, we can trace all the events we think might be
-applicable to our workload: ::
+applicable to our workload::
root@crownbay:~# perf record -g -e skb:* -e net:* -e napi:* -e sched:sched_switch -e sched:sched_wakeup -e irq:*
-e syscalls:sys_enter_read -e syscalls:sys_exit_read -e syscalls:sys_enter_write -e syscalls:sys_exit_write
wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2
We can look at the raw trace output using 'perf script' with no
-arguments: ::
+arguments::
root@crownbay:~# perf script
@@ -735,7 +735,7 @@ two programming language bindings, one for Python and one for Perl.
Now that we have the trace data in perf.data, we can use 'perf script
-g' to generate a skeleton script with handlers for the read/write
-entry/exit events we recorded: ::
+entry/exit events we recorded::
root@crownbay:~# perf script -g python
generated Python script: perf-script.py
@@ -755,7 +755,7 @@ with its parameters. For example:
print "skbaddr=%u, len=%u, name=%s\n" % (skbaddr, len, name),
We can run that script directly to print all of the events contained in the
-perf.data file: ::
+perf.data file::
root@crownbay:~# perf script -s perf-script.py
@@ -833,7 +833,7 @@ result of all the per-event tallies. For that, we use the special
for event_name, count in counts.iteritems():
print "%-40s %10s\n" % (event_name, count)
-The end result is a summary of all the events recorded in the trace: ::
+The end result is a summary of all the events recorded in the trace::
skb__skb_copy_datagram_iovec 13148
irq__softirq_entry 4796
@@ -877,13 +877,13 @@ To do system-wide profiling or tracing, you typically use the -a flag to
'perf record'.
To demonstrate this, open up one window and start the profile using the
--a flag (press Ctrl-C to stop tracing): ::
+-a flag (press Ctrl-C to stop tracing)::
root@crownbay:~# perf record -g -a
^C[ perf record: Woken up 6 times to write data ]
[ perf record: Captured and wrote 1.400 MB perf.data (~61172 samples) ]
-In another window, run the wget test: ::
+In another window, run the wget test::
root@crownbay:~# wget http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2
Connecting to downloads.yoctoproject.org (140.211.169.59:80)
@@ -903,7 +903,7 @@ unresolvable symbols in the expanded Xorg callchain).
Note also that we have both kernel and userspace entries in the above
snapshot. We can also tell perf to focus on userspace by providing a
modifier, in this case 'u', to the 'cycles' hardware counter when we
-record a profile: ::
+record a profile::
root@crownbay:~# perf record -g -a -e cycles:u
^C[ perf record: Woken up 2 times to write data ]
@@ -923,13 +923,13 @@ the entries associated with the libc-xxx.so DSO.
:align: center
We can also use the system-wide -a switch to do system-wide tracing.
-Here we'll trace a couple of scheduler events: ::
+Here we'll trace a couple of scheduler events::
root@crownbay:~# perf record -a -e sched:sched_switch -e sched:sched_wakeup
^C[ perf record: Woken up 38 times to write data ]
[ perf record: Captured and wrote 9.780 MB perf.data (~427299 samples) ]
-We can look at the raw output using 'perf script' with no arguments: ::
+We can look at the raw output using 'perf script' with no arguments::
root@crownbay:~# perf script
@@ -952,7 +952,7 @@ do with what we're interested in, namely events that schedule 'perf'
itself in and out or that wake perf up. We can get rid of those by using
the '--filter' option - for each event we specify using -e, we can add a
--filter after that to filter out trace events that contain fields with
-specific values: ::
+specific values::
root@crownbay:~# perf record -a -e sched:sched_switch --filter 'next_comm != perf && prev_comm != perf' -e sched:sched_wakeup --filter 'comm != perf'
^C[ perf record: Woken up 38 times to write data ]
@@ -1017,7 +1017,7 @@ perf isn't restricted to the fixed set of static tracepoints listed by
'perf list'. Users can also add their own 'dynamic' tracepoints anywhere
in the kernel. For instance, suppose we want to define our own
tracepoint on do_fork(). We can do that using the 'perf probe' perf
-subcommand: ::
+subcommand::
root@crownbay:~# perf probe do_fork
Added new event:
@@ -1031,7 +1031,7 @@ Adding a new tracepoint via
'perf probe' results in an event with all the expected files and format
in /sys/kernel/debug/tracing/events, just the same as for static
tracepoints (as discussed in more detail in the trace events subsystem
-section: ::
+section::
root@crownbay:/sys/kernel/debug/tracing/events/probe/do_fork# ls -al
drwxr-xr-x 2 root root 0 Oct 28 11:42 .
@@ -1056,7 +1056,7 @@ section: ::
print fmt: "(%lx)", REC->__probe_ip
We can list all dynamic tracepoints currently in
-existence: ::
+existence::
root@crownbay:~# perf probe -l
probe:do_fork (on do_fork)
@@ -1064,13 +1064,13 @@ existence: ::
Let's record system-wide ('sleep 30' is a
trick for recording system-wide while basically doing nothing and then waking
-up after 30 seconds): ::
+up after 30 seconds)::
root@crownbay:~# perf record -g -a -e probe:do_fork sleep 30
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.087 MB perf.data (~3812 samples) ]
-Using 'perf script' we can see each do_fork event that fired: ::
+Using 'perf script' we can see each do_fork event that fired::
root@crownbay:~# perf script
@@ -1163,7 +1163,7 @@ addressed by a Yocto bug: :yocto_bugs:`Bug 3388 - perf: enable man pages for
basic 'help' functionality </show_bug.cgi?id=3388>`.
The man pages in text form, along with some other files, such as a set
-of examples, can be found in the 'perf' directory of the kernel tree: ::
+of examples, can be found in the 'perf' directory of the kernel tree::
tools/perf/Documentation
@@ -1197,7 +1197,7 @@ Basic ftrace usage
'ftrace' essentially refers to everything included in the /tracing
directory of the mounted debugfs filesystem (Yocto follows the standard
convention and mounts it at /sys/kernel/debug). Here's a listing of all
-the files found in /sys/kernel/debug/tracing on a Yocto system: ::
+the files found in /sys/kernel/debug/tracing on a Yocto system::
root@sugarbay:/sys/kernel/debug/tracing# ls
README kprobe_events trace
@@ -1222,12 +1222,12 @@ the ftrace documentation.
We'll start by looking at some of the available built-in tracers.
-cat'ing the 'available_tracers' file lists the set of available tracers: ::
+cat'ing the 'available_tracers' file lists the set of available tracers::
root@sugarbay:/sys/kernel/debug/tracing# cat available_tracers
blk function_graph function nop
-The 'current_tracer' file contains the tracer currently in effect: ::
+The 'current_tracer' file contains the tracer currently in effect::
root@sugarbay:/sys/kernel/debug/tracing# cat current_tracer
nop
@@ -1237,7 +1237,7 @@ The above listing of current_tracer shows that the
there's actually no tracer currently in effect.
echo'ing one of the available_tracers into current_tracer makes the
-specified tracer the current tracer: ::
+specified tracer the current tracer::
root@sugarbay:/sys/kernel/debug/tracing# echo function > current_tracer
root@sugarbay:/sys/kernel/debug/tracing# cat current_tracer
@@ -1247,7 +1247,7 @@ The above sets the current tracer to be the 'function tracer'. This tracer
traces every function call in the kernel and makes it available as the
contents of the 'trace' file. Reading the 'trace' file lists the
currently buffered function calls that have been traced by the function
-tracer: ::
+tracer::
root@sugarbay:/sys/kernel/debug/tracing# cat trace | less
@@ -1306,7 +1306,7 @@ great way to learn about how the kernel code works in a dynamic sense.
It is a little more difficult to follow the call chains than it needs to
be - luckily there's a variant of the function tracer that displays the
-callchains explicitly, called the 'function_graph' tracer: ::
+callchains explicitly, called the 'function_graph' tracer::
root@sugarbay:/sys/kernel/debug/tracing# echo function_graph > current_tracer
root@sugarbay:/sys/kernel/debug/tracing# cat trace | less
@@ -1442,7 +1442,7 @@ One especially important directory contained within the
/sys/kernel/debug/tracing directory is the 'events' subdirectory, which
contains representations of every tracepoint in the system. Listing out
the contents of the 'events' subdirectory, we see mainly another set of
-subdirectories: ::
+subdirectories::
root@sugarbay:/sys/kernel/debug/tracing# cd events
root@sugarbay:/sys/kernel/debug/tracing/events# ls -al
@@ -1491,7 +1491,7 @@ subdirectories: ::
Each one of these subdirectories
corresponds to a 'subsystem' and contains yet again more subdirectories,
each one of those finally corresponding to a tracepoint. For example,
-here are the contents of the 'kmem' subsystem: ::
+here are the contents of the 'kmem' subsystem::
root@sugarbay:/sys/kernel/debug/tracing/events# cd kmem
root@sugarbay:/sys/kernel/debug/tracing/events/kmem# ls -al
@@ -1513,7 +1513,7 @@ here are the contents of the 'kmem' subsystem: ::
drwxr-xr-x 2 root root 0 Nov 14 23:19 mm_page_pcpu_drain
Let's see what's inside the subdirectory for a
-specific tracepoint, in this case the one for kmalloc: ::
+specific tracepoint, in this case the one for kmalloc::
root@sugarbay:/sys/kernel/debug/tracing/events/kmem# cd kmalloc
root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# ls -al
@@ -1529,7 +1529,7 @@ tracepoint describes the event in memory, which is used by the various
tracing tools that now make use of these tracepoints to parse the event
and make sense of it, along with a 'print fmt' field that allows tools
like ftrace to display the event as text. Here's what the format of the
-kmalloc event looks like: ::
+kmalloc event looks like::
root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# cat format
name: kmalloc
@@ -1568,20 +1568,20 @@ The 'enable' file
in the tracepoint directory is what allows the user (or tools such as
trace-cmd) to actually turn the tracepoint on and off. When enabled, the
corresponding tracepoint will start appearing in the ftrace 'trace' file
-described previously. For example, this turns on the kmalloc tracepoint: ::
+described previously. For example, this turns on the kmalloc tracepoint::
root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# echo 1 > enable
At the moment, we're not interested in the function tracer or
some other tracer that might be in effect, so we first turn it off, but
if we do that, we still need to turn tracing on in order to see the
-events in the output buffer: ::
+events in the output buffer::
root@sugarbay:/sys/kernel/debug/tracing# echo nop > current_tracer
root@sugarbay:/sys/kernel/debug/tracing# echo 1 > tracing_on
Now, if we look at the 'trace' file, we see nothing
-but the kmalloc events we just turned on: ::
+but the kmalloc events we just turned on::
root@sugarbay:/sys/kernel/debug/tracing# cat trace | less
# tracer: nop
@@ -1627,7 +1627,7 @@ but the kmalloc events we just turned on: ::
<idle>-0 [000] ..s3 18156.400660: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d554800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
matchbox-termin-1361 [001] ...1 18156.552800: kmalloc: call_site=ffffffff81614050 ptr=ffff88006db34800 bytes_req=576 bytes_alloc=1024 gfp_flags=GFP_KERNEL|GFP_REPEAT
-To again disable the kmalloc event, we need to send 0 to the enable file: ::
+To again disable the kmalloc event, we need to send 0 to the enable file::
root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# echo 0 > enable
@@ -1669,12 +1669,12 @@ a per-CPU graphical display. It directly uses 'trace-cmd' as the
plumbing that accomplishes all that underneath the covers (and actually
displays the trace-cmd command it uses, as we'll see).
-To start a trace using kernelshark, first start kernelshark: ::
+To start a trace using kernelshark, first start kernelshark::
root@sugarbay:~# kernelshark
Then bring up the 'Capture' dialog by
-choosing from the kernelshark menu: ::
+choosing from the kernelshark menu::
Capture | Record
@@ -1724,12 +1724,12 @@ ftrace Documentation
--------------------
The documentation for ftrace can be found in the kernel Documentation
-directory: ::
+directory::
Documentation/trace/ftrace.txt
The documentation for the trace event subsystem can also be found in the kernel
-Documentation directory: ::
+Documentation directory::
Documentation/trace/events.txt
@@ -1784,7 +1784,7 @@ which it extracts from the open syscall's argstr.
Normally, to execute this
probe, you'd simply install systemtap on the system you want to probe,
and directly run the probe on that system e.g. assuming the name of the
-file containing the above text is trace_open.stp: ::
+file containing the above text is trace_open.stp::
# stap trace_open.stp
@@ -1825,7 +1825,7 @@ target, with arguments if necessary.
In order to do this from a remote host, however, you need to have access
to the build for the image you booted. The 'crosstap' script provides
details on how to do this if you run the script on the host without
-having done a build: ::
+having done a build::
$ crosstap root@192.168.1.88 trace_open.stp
@@ -1885,7 +1885,7 @@ Running a Script on a Target
----------------------------
Once you've done that, you should be able to run a systemtap script on
-the target: ::
+the target::
$ cd /path/to/yocto
$ source oe-init-build-env
@@ -1903,17 +1903,17 @@ the target: ::
You can also run generated QEMU images with a command like 'runqemu qemux86-64'
Once you've done that, you can cd to whatever
-directory contains your scripts and use 'crosstap' to run the script: ::
+directory contains your scripts and use 'crosstap' to run the script::
$ cd /path/to/my/systemtap/script
$ crosstap root@192.168.7.2 trace_open.stp
-If you get an error connecting to the target e.g.: ::
+If you get an error connecting to the target e.g.::
$ crosstap root@192.168.7.2 trace_open.stp
error establishing ssh connection on remote 'root@192.168.7.2'
-Try ssh'ing to the target and see what happens: ::
+Try ssh'ing to the target and see what happens::
$ ssh root@192.168.7.2
@@ -2038,7 +2038,7 @@ tracing.
Collecting and viewing a trace on the target (inside a shell)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-First, from the host, ssh to the target: ::
+First, from the host, ssh to the target::
$ ssh -l root 192.168.1.47
The authenticity of host '192.168.1.47 (192.168.1.47)' can't be established.
@@ -2047,30 +2047,30 @@ First, from the host, ssh to the target: ::
Warning: Permanently added '192.168.1.47' (RSA) to the list of known hosts.
root@192.168.1.47's password:
-Once on the target, use these steps to create a trace: ::
+Once on the target, use these steps to create a trace::
root@crownbay:~# lttng create
Spawning a session daemon
Session auto-20121015-232120 created.
Traces will be written in /home/root/lttng-traces/auto-20121015-232120
-Enable the events you want to trace (in this case all kernel events): ::
+Enable the events you want to trace (in this case all kernel events)::
root@crownbay:~# lttng enable-event --kernel --all
All kernel events are enabled in channel channel0
-Start the trace: ::
+Start the trace::
root@crownbay:~# lttng start
Tracing started for session auto-20121015-232120
And then stop the trace after a while or after running a particular workload that
-you want to trace: ::
+you want to trace::
root@crownbay:~# lttng stop
Tracing stopped for session auto-20121015-232120
-You can now view the trace in text form on the target: ::
+You can now view the trace in text form on the target::
root@crownbay:~# lttng view
[23:21:56.989270399] (+?.?????????) sys_geteuid: { 1 }, { }
@@ -2116,14 +2116,14 @@ You can now view the trace in text form on the target: ::
You can now safely destroy the trace
session (note that this doesn't delete the trace - it's still there in
-~/lttng-traces): ::
+~/lttng-traces)::
root@crownbay:~# lttng destroy
Session auto-20121015-232120 destroyed at /home/root
Note that the trace is saved in a directory of the same name as returned by
'lttng create', under the ~/lttng-traces directory (note that you can change this by
-supplying your own name to 'lttng create'): ::
+supplying your own name to 'lttng create')::
root@crownbay:~# ls -al ~/lttng-traces
drwxrwx--- 3 root root 1024 Oct 15 23:21 .
@@ -2139,18 +2139,18 @@ generated by the lttng-ust build.
The 'hello' test program isn't installed on the rootfs by the lttng-ust
build, so we need to copy it over manually. First cd into the build
-directory that contains the hello executable: ::
+directory that contains the hello executable::
$ cd build/tmp/work/core2_32-poky-linux/lttng-ust/2.0.5-r0/git/tests/hello/.libs
-Copy that over to the target machine: ::
+Copy that over to the target machine::
$ scp hello root@192.168.1.20:
You now have the instrumented lttng 'hello world' test program on the
target, ready to test.
-First, from the host, ssh to the target: ::
+First, from the host, ssh to the target::
$ ssh -l root 192.168.1.47
The authenticity of host '192.168.1.47 (192.168.1.47)' can't be established.
@@ -2159,35 +2159,35 @@ First, from the host, ssh to the target: ::
Warning: Permanently added '192.168.1.47' (RSA) to the list of known hosts.
root@192.168.1.47's password:
-Once on the target, use these steps to create a trace: ::
+Once on the target, use these steps to create a trace::
root@crownbay:~# lttng create
Session auto-20190303-021943 created.
Traces will be written in /home/root/lttng-traces/auto-20190303-021943
-Enable the events you want to trace (in this case all userspace events): ::
+Enable the events you want to trace (in this case all userspace events)::
root@crownbay:~# lttng enable-event --userspace --all
All UST events are enabled in channel channel0
-Start the trace: ::
+Start the trace::
root@crownbay:~# lttng start
Tracing started for session auto-20190303-021943
-Run the instrumented hello world program: ::
+Run the instrumented hello world program::
root@crownbay:~# ./hello
Hello, World!
Tracing... done.
And then stop the trace after a while or after running a particular workload
-that you want to trace: ::
+that you want to trace::
root@crownbay:~# lttng stop
Tracing stopped for session auto-20190303-021943
-You can now view the trace in text form on the target: ::
+You can now view the trace in text form on the target::
root@crownbay:~# lttng view
[02:31:14.906146544] (+?.?????????) hello:1424 ust_tests_hello:tptest: { cpu_id = 1 }, { intfield = 0, intfield2 = 0x0, longfield = 0, netintfield = 0, netintfieldhex = 0x0, arrfield1 = [ [0] = 1, [1] = 2, [2] = 3 ], arrfield2 = "test", _seqfield1_length = 4, seqfield1 = [ [0] = 116, [1] = 101, [2] = 115, [3] = 116 ], _seqfield2_length = 4, seqfield2 = "test", stringfield = "test", floatfield = 2222, doublefield = 2, boolfield = 1 }
@@ -2199,7 +2199,7 @@ You can now view the trace in text form on the target: ::
.
You can now safely destroy the trace session (note that this doesn't delete the
-trace - it's still there in ~/lttng-traces): ::
+trace - it's still there in ~/lttng-traces)::
root@crownbay:~# lttng destroy
Session auto-20190303-021943 destroyed at /home/root
@@ -2244,7 +2244,7 @@ Basic blktrace Usage
--------------------
To record a trace, simply run the 'blktrace' command, giving it the name
-of the block device you want to trace activity on: ::
+of the block device you want to trace activity on::
root@crownbay:~# blktrace /dev/sdc
@@ -2265,7 +2265,7 @@ dumps them to userspace for blkparse to merge and sort later). ::
Total: 8660 events (dropped 0), 406 KiB data
If you examine the files saved to disk, you see multiple files, one per CPU and
-with the device name as the first part of the filename: ::
+with the device name as the first part of the filename::
root@crownbay:~# ls -al
drwxr-xr-x 6 root root 1024 Oct 27 22:39 .
@@ -2275,7 +2275,7 @@ with the device name as the first part of the filename: ::
To view the trace events, simply invoke 'blkparse' in the directory
containing the trace files, giving it the device name that forms the
-first part of the filenames: ::
+first part of the filenames::
root@crownbay:~# blkparse sdc
@@ -2373,7 +2373,7 @@ Live Mode
blktrace and blkparse are designed from the ground up to be able to
operate together in a 'pipe mode' where the stdout of blktrace can be
-fed directly into the stdin of blkparse: ::
+fed directly into the stdin of blkparse::
root@crownbay:~# blktrace /dev/sdc -o - | blkparse -i -
@@ -2386,7 +2386,7 @@ identify and capture conditions of interest.
There's actually another blktrace command that implements the above
pipeline as a single command, so the user doesn't have to bother typing
-in the above command sequence: ::
+in the above command sequence::
root@crownbay:~# btrace /dev/sdc
@@ -2401,19 +2401,19 @@ the traced device at all by providing native support for sending all
trace data over the network.
To have blktrace operate in this mode, start blktrace on the target
-system being traced with the -l option, along with the device to trace: ::
+system being traced with the -l option, along with the device to trace::
root@crownbay:~# blktrace -l /dev/sdc
server: waiting for connections...
On the host system, use the -h option to connect to the target system,
-also passing it the device to trace: ::
+also passing it the device to trace::
$ blktrace -d /dev/sdc -h 192.168.1.43
blktrace: connecting to 192.168.1.43
blktrace: connected!
-On the target system, you should see this: ::
+On the target system, you should see this::
server: connection from 192.168.1.43
@@ -2424,7 +2424,7 @@ In another shell, execute a workload you want to trace. ::
linux-2.6.19.2.tar.b 100% \|*******************************\| 41727k 0:00:00 ETA
When it's done, do a Ctrl-C on the host system to stop the
-trace: ::
+trace::
^C=== sdc ===
CPU 0: 7691 events, 361 KiB data
@@ -2432,7 +2432,7 @@ trace: ::
Total: 11800 events (dropped 0), 554 KiB data
On the target system, you should also see a trace summary for the trace
-just ended: ::
+just ended::
server: end of run for 192.168.1.43:sdc
=== sdc ===
@@ -2441,20 +2441,20 @@ just ended: ::
Total: 11800 events (dropped 0), 554 KiB data
The blktrace instance on the host will
-save the target output inside a hostname-timestamp directory: ::
+save the target output inside a hostname-timestamp directory::
$ ls -al
drwxr-xr-x 10 root root 1024 Oct 28 02:40 .
drwxr-sr-x 4 root root 1024 Oct 26 18:24 ..
drwxr-xr-x 2 root root 1024 Oct 28 02:40 192.168.1.43-2012-10-28-02:40:56
-cd into that directory to see the output files: ::
+cd into that directory to see the output files::
$ ls -l
-rw-r--r-- 1 root root 369193 Oct 28 02:44 sdc.blktrace.0
-rw-r--r-- 1 root root 197278 Oct 28 02:44 sdc.blktrace.1
-And run blkparse on the host system using the device name: ::
+And run blkparse on the host system using the device name::
$ blkparse sdc
@@ -2517,25 +2517,25 @@ userspace tools.
To enable tracing for a given device, use /sys/block/xxx/trace/enable,
where xxx is the device name. This for example enables tracing for
-/dev/sdc: ::
+/dev/sdc::
root@crownbay:/sys/kernel/debug/tracing# echo 1 > /sys/block/sdc/trace/enable
Once you've selected the device(s) you want
-to trace, selecting the 'blk' tracer will turn the blk tracer on: ::
+to trace, selecting the 'blk' tracer will turn the blk tracer on::
root@crownbay:/sys/kernel/debug/tracing# cat available_tracers
blk function_graph function nop
root@crownbay:/sys/kernel/debug/tracing# echo blk > current_tracer
-Execute the workload you're interested in: ::
+Execute the workload you're interested in::
root@crownbay:/sys/kernel/debug/tracing# cat /media/sdc/testfile.txt
And look at the output (note here that we're using 'trace_pipe' instead of
trace to capture this trace - this allows us to wait around on the pipe
-for data to appear): ::
+for data to appear)::
root@crownbay:/sys/kernel/debug/tracing# cat trace_pipe
cat-3587 [001] d..1 3023.276361: 8,32 Q R 1699848 + 8 [cat]
@@ -2554,7 +2554,7 @@ for data to appear): ::
cat-3587 [001] d..1 3023.276497: 8,32 m N cfq3587 activate rq, drv=1
cat-3587 [001] d..2 3023.276500: 8,32 D R 1699848 + 8 [cat]
-And this turns off tracing for the specified device: ::
+And this turns off tracing for the specified device::
root@crownbay:/sys/kernel/debug/tracing# echo 0 > /sys/block/sdc/trace/enable
@@ -2572,6 +2572,6 @@ section can be found here:
The above manpages, along with manpages for the other blktrace utilities
(btt, blkiomon, etc) can be found in the /doc directory of the blktrace
-tools git repo: ::
+tools git repo::
$ git clone git://git.kernel.dk/blktrace.git