Diffstat (limited to 'Documentation/trace')

 -rw-r--r--  Documentation/trace/events-kmem.rst |  2
 -rw-r--r--  Documentation/trace/events.rst      | 10
 -rw-r--r--  Documentation/trace/ftrace-uses.rst | 90
 3 files changed, 74 insertions, 28 deletions
diff --git a/Documentation/trace/events-kmem.rst b/Documentation/trace/events-kmem.rst
index 555484110e36..68fa75247488 100644
--- a/Documentation/trace/events-kmem.rst
+++ b/Documentation/trace/events-kmem.rst
@@ -69,7 +69,7 @@ When pages are freed in batch, the mm_page_free_batched event is also triggered.
Broadly speaking, pages are taken off the LRU list in bulk and
freed in batch with a page list. Significant amounts of activity here could
indicate that the system is under memory pressure and can also indicate
-contention on the zone->lru_lock.
+contention on the lruvec->lru_lock.
4. Per-CPU Allocator Activity
=============================
diff --git a/Documentation/trace/events.rst b/Documentation/trace/events.rst
index 2a5aa48eff6c..8ddb9b09451c 100644
--- a/Documentation/trace/events.rst
+++ b/Documentation/trace/events.rst
@@ -798,13 +798,13 @@ To trace a synthetic event using the piecewise method described above, the
synth_event_trace_start() function is used to 'open' the synthetic
event trace::
-    struct synth_trace_state trace_state;
+    struct synth_event_trace_state trace_state;

     ret = synth_event_trace_start(schedtest_event_file, &trace_state);
It's passed the trace_event_file representing the synthetic event
using the same methods as described above, along with a pointer to a
-struct synth_trace_state object, which will be zeroed before use and
+struct synth_event_trace_state object, which will be zeroed before use and
used to maintain state between this and following calls.
Once the event has been opened, which means space for it has been
@@ -816,7 +816,7 @@ lookup per field.
To assign the values one after the other without lookups,
synth_event_add_next_val() should be used. Each call is passed the
-same synth_trace_state object used in the synth_event_trace_start(),
+same synth_event_trace_state object used in the synth_event_trace_start(),
along with the value to set the next field in the event. After each
field is set, the 'cursor' points to the next field, which will be set
by the subsequent call, continuing until all the fields have been set
@@ -845,7 +845,7 @@ this method would be (without error-handling code)::
    ret = synth_event_add_next_val(395, &trace_state);
To assign the values in any order, synth_event_add_val() should be
-used. Each call is passed the same synth_trace_state object used in
+used. Each call is passed the same synth_event_trace_state object used in
the synth_event_trace_start(), along with the field name of the field
to set and the value to set it to. The same sequence of calls as in
the above examples using this method would be (without error-handling
@@ -867,7 +867,7 @@ can be used but not both at the same time.
Finally, the event won't be actually traced until it's 'closed',
which is done using synth_event_trace_end(), which takes only the
-struct synth_trace_state object used in the previous calls::
+struct synth_event_trace_state object used in the previous calls::
    ret = synth_event_trace_end(&trace_state);
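
Putting the piecewise calls together, a condensed sketch of the full
sequence might look like the following (the field values are illustrative,
and error handling is reduced to a single exit path)::

    struct synth_event_trace_state trace_state;
    int ret;

    /* Open the event and reserve space for it in the trace buffer */
    ret = synth_event_trace_start(schedtest_event_file, &trace_state);
    if (ret)
        return ret;

    /* Fill in the fields, in the order they were defined */
    ret = synth_event_add_next_val(777, &trace_state);
    if (ret)
        goto out;

    ret = synth_event_add_next_val(395, &trace_state);

 out:
    /* Close the event; it is actually traced only at this point */
    ret = synth_event_trace_end(&trace_state);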
diff --git a/Documentation/trace/ftrace-uses.rst b/Documentation/trace/ftrace-uses.rst
index a4955f7e3d19..f7d98ae5b885 100644
--- a/Documentation/trace/ftrace-uses.rst
+++ b/Documentation/trace/ftrace-uses.rst
@@ -30,8 +30,8 @@ The ftrace context
This requires extra care as to what can be done inside a callback. A callback
can be called outside the protective scope of RCU.
-The ftrace infrastructure has some protections against recursions and RCU
-but one must still be very careful how they use the callbacks.
+There are helper functions to protect against recursion, and to make sure
+RCU is watching. These are explained below.
The ftrace_ops structure
@@ -108,6 +108,58 @@ The prototype of the callback function is as follows (as of v4.14):
at the start of the function where ftrace was tracing. Otherwise it
either contains garbage, or NULL.
+Protect your callback
+=====================
+
+As functions can be called from anywhere, it is possible that a function
+called by a callback is itself traced and ends up calling that same
+callback, so recursion protection must be used. There are two helper
+functions that can help in this regard. If you start your code with:
+
+.. code-block:: c
+
+    int bit;
+
+    bit = ftrace_test_recursion_trylock(ip, parent_ip);
+    if (bit < 0)
+        return;
+
+and end it with:
+
+.. code-block:: c
+
+    ftrace_test_recursion_unlock(bit);
+
+The code in between will be safe to use, even if it ends up calling a
+function that the callback is tracing. Note, on success,
+ftrace_test_recursion_trylock() will disable preemption, and
+ftrace_test_recursion_unlock() will enable it again (if it was previously
+enabled). The instruction pointer (ip) and its parent (parent_ip) are passed
+to ftrace_test_recursion_trylock() to record where the recursion happened
+(if CONFIG_FTRACE_RECORD_RECURSION is set).
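+
+Putting both calls together, a minimal sketch of a callback (the name
+my_callback is illustrative, using the pt_regs-based prototype shown
+earlier) might look like:
+
+.. code-block:: c
+
+    static void my_callback(unsigned long ip, unsigned long parent_ip,
+                            struct ftrace_ops *op, struct pt_regs *regs)
+    {
+        int bit;
+
+        /* Bail out if this context is already inside the callback */
+        bit = ftrace_test_recursion_trylock(ip, parent_ip);
+        if (bit < 0)
+            return;
+
+        /* Work done here is protected against recursion */
+
+        ftrace_test_recursion_unlock(bit);
+    }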
+
+Alternatively, if the FTRACE_OPS_FL_RECURSION flag is set on the ftrace_ops
+(as explained below), then a helper trampoline will be used to test for
+recursion on behalf of the callback, and no recursion test needs to be done
+in the callback itself. But this comes at the expense of slightly more
+overhead from an extra function call.
+
+If your callback accesses any data or critical section that requires RCU
+protection, it is best to make sure that RCU is "watching", otherwise
+that data or critical section will not be protected as expected. In this
+case add:
+
+.. code-block:: c
+
+    if (!rcu_is_watching())
+        return;
+
+Alternatively, if the FTRACE_OPS_FL_RCU flag is set on the ftrace_ops
+(as explained below), then a helper trampoline will be used to call
+rcu_is_watching() for the callback, and no other test needs to be done.
+But this comes at the expense of slightly more overhead from an extra
+function call.
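+
+For example, a hypothetical ftrace_ops that relies on the helper
+trampolines for both checks (my_callback is illustrative) could be
+declared as:
+
+.. code-block:: c
+
+    static struct ftrace_ops my_ops = {
+        .func  = my_callback,
+        /* Let trampolines handle recursion and rcu_is_watching() */
+        .flags = FTRACE_OPS_FL_RECURSION | FTRACE_OPS_FL_RCU,
+    };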
+
The ftrace FLAGS
================
@@ -128,26 +180,20 @@ FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED
    will not fail with this flag set. But the callback must check if
    regs is NULL or not to determine if the architecture supports it.
-FTRACE_OPS_FL_RECURSION_SAFE
- By default, a wrapper is added around the callback to
- make sure that recursion of the function does not occur. That is,
- if a function that is called as a result of the callback's execution
- is also traced, ftrace will prevent the callback from being called
- again. But this wrapper adds some overhead, and if the callback is
- safe from recursion, it can set this flag to disable the ftrace
- protection.
-
- Note, if this flag is set, and recursion does occur, it could cause
- the system to crash, and possibly reboot via a triple fault.
-
- It is OK if another callback traces a function that is called by a
- callback that is marked recursion safe. Recursion safe callbacks
- must never trace any function that are called by the callback
- itself or any nested functions that those functions call.
-
- If this flag is set, it is possible that the callback will also
- be called with preemption enabled (when CONFIG_PREEMPTION is set),
- but this is not guaranteed.
+FTRACE_OPS_FL_RECURSION
+    By default, it is expected that the callback can handle recursion.
+    But if the callback is not worried about the extra overhead, then
+    setting this bit will add the recursion protection around the
+    callback by calling a helper function that will do the recursion
+    protection and only call the callback if it did not recurse.
+
+    Note, if this flag is not set, and recursion does occur, it could
+    cause the system to crash, and possibly reboot via a triple fault.
+
+    Note, if this flag is set, then the callback will always be called
+    with preemption disabled. If it is not set, then it is possible
+    (but not guaranteed) that the callback will be called in
+    preemptible context.
FTRACE_OPS_FL_IPMODIFY
    Requires FTRACE_OPS_FL_SAVE_REGS set. If the callback is to "hijack"