From 9270e1a744f8ed953009b0e94b26ed0912d9ec1c Mon Sep 17 00:00:00 2001 From: Alan Stern Date: Sat, 3 Oct 2020 21:40:22 -0400 Subject: tools: memory-model: Document that the LKMM can easily miss control dependencies Add a small section to the litmus-tests.txt documentation file for the Linux Kernel Memory Model explaining that the memory model often fails to recognize certain control dependencies. Suggested-by: Akira Yokosawa Signed-off-by: Alan Stern Reviewed-by: Joel Fernandes (Google) Signed-off-by: Paul E. McKenney --- tools/memory-model/Documentation/litmus-tests.txt | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) (limited to 'tools') diff --git a/tools/memory-model/Documentation/litmus-tests.txt b/tools/memory-model/Documentation/litmus-tests.txt index 2f840dcd15cf..8a9d5d2787f9 100644 --- a/tools/memory-model/Documentation/litmus-tests.txt +++ b/tools/memory-model/Documentation/litmus-tests.txt @@ -946,6 +946,23 @@ Limitations of the Linux-kernel memory model (LKMM) include: carrying a dependency, then the compiler can break that dependency by substituting a constant of that value. + Conversely, LKMM sometimes doesn't recognize that a particular + optimization is not allowed, and as a result, thinks that a + dependency is not present (because the optimization would break it). + The memory model misses some pretty obvious control dependencies + because of this limitation. A simple example is: + + r1 = READ_ONCE(x); + if (r1 == 0) + smp_mb(); + WRITE_ONCE(y, 1); + + There is a control dependency from the READ_ONCE to the WRITE_ONCE, + even when r1 is nonzero, but LKMM doesn't realize this and thinks + that the write may execute before the read if r1 != 0. (Yes, that + doesn't make sense if you think about it, but the memory model's + intelligence is limited.) + 2. Multiple access sizes for a single variable are not supported, and neither are misaligned or partially overlapping accesses. -- cgit v1.2.3 From ab8bcad67bee82e4be290b32f0faaf582d7c3edc Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Tue, 4 Aug 2020 14:00:17 -0700 Subject: tools/memory-model: Move Documentation description to Documentation/README This commit moves the descriptions of the files residing in tools/memory-model/Documentation to a README file in that directory, leaving behind the description of tools/memory-model/Documentation/README itself. After this change, tools/memory-model/Documentation/README provides a guide to the files in the tools/memory-model/Documentation directory, guiding people with different skills and needs to the most appropriate starting point. Signed-off-by: Paul E. McKenney --- tools/memory-model/Documentation/README | 59 +++++++++++++++++++++++++++++++++ tools/memory-model/README | 22 ++---------- 2 files changed, 61 insertions(+), 20 deletions(-) create mode 100644 tools/memory-model/Documentation/README (limited to 'tools') diff --git a/tools/memory-model/Documentation/README b/tools/memory-model/Documentation/README new file mode 100644 index 000000000000..2d9539f19912 --- /dev/null +++ b/tools/memory-model/Documentation/README @@ -0,0 +1,59 @@ +It has been said that successful communication requires first identifying +what your audience knows and then building a bridge from their current +knowledge to what they need to know. Unfortunately, the expected +Linux-kernel memory model (LKMM) audience might be anywhere from novice +to expert both in kernel hacking and in understanding LKMM. 
+ +This document therefore points out a number of places to start reading, +depending on what you know and what you would like to learn. Please note +that the documents later in this list assume that the reader understands +the material provided by documents earlier in this list. + +o You are new to Linux-kernel concurrency: simple.txt + +o You are familiar with the Linux-kernel concurrency primitives + that you need, and just want to get started with LKMM litmus + tests: litmus-tests.txt + +o You are familiar with Linux-kernel concurrency, and would + like a detailed intuitive understanding of LKMM, including + situations involving more than two threads: recipes.txt + +o You are familiar with Linux-kernel concurrency and the use of + LKMM, and would like a quick reference: cheatsheet.txt + +o You are familiar with Linux-kernel concurrency and the use + of LKMM, and would like to learn about LKMM's requirements, + rationale, and implementation: explanation.txt + +o You are interested in the publications related to LKMM, including + hardware manuals, academic literature, standards-committee + working papers, and LWN articles: references.txt + + +==================== +DESCRIPTION OF FILES +==================== + +README + This file. + +cheatsheet.txt + Quick-reference guide to the Linux-kernel memory model. + +explanation.txt + Detailed description of the memory model. + +litmus-tests.txt + The format, features, capabilities, and limitations of the litmus + tests that LKMM can evaluate. + +recipes.txt + Common memory-ordering patterns. + +references.txt + Background information. + +simple.txt + Starting point for someone new to Linux-kernel concurrency. + And also a reminder of the simpler approaches to concurrency! diff --git a/tools/memory-model/README b/tools/memory-model/README index c8144d4aafa0..39d08d1f0443 100644 --- a/tools/memory-model/README +++ b/tools/memory-model/README @@ -161,26 +161,8 @@ running LKMM litmus tests. DESCRIPTION OF FILES ==================== -Documentation/cheatsheet.txt - Quick-reference guide to the Linux-kernel memory model. - -Documentation/explanation.txt - Describes the memory model in detail. - -Documentation/litmus-tests.txt - Describes the format, features, capabilities, and limitations - of the litmus tests that LKMM can evaluate. - -Documentation/recipes.txt - Lists common memory-ordering patterns. - -Documentation/references.txt - Provides background reading. - -Documentation/simple.txt - Starting point for someone new to Linux-kernel concurrency. - And also for those needing a reminder of the simpler approaches - to concurrency! +Documentation/README + Guide to the other documents in the Documentation/ directory. linux-kernel.bell Categorizes the relevant instructions, including memory -- cgit v1.2.3 From e1eb075ccf3766860b7aa3f104ca29dcb8a46ed0 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Tue, 15 Sep 2020 14:33:38 -0700 Subject: rcutorture: Make preemptible TRACE02 enable lockdep Currently, the CONFIG_PREEMPT_NONE=y rcutorture TRACE01 rcutorture scenario enables lockdep. This limits its ability to find bugs due to non-preemptible sections of code being RCU readers, and pretty much all code thus appearing to lockdep to be an RCU reader. This commit therefore moves lockdep testing to the CONFIG_PREEMPT=y rcutorture TRACE02 scenario. Signed-off-by: Paul E. 
McKenney --- tools/testing/selftests/rcutorture/configs/rcu/TRACE01 | 6 +++--- tools/testing/selftests/rcutorture/configs/rcu/TRACE02 | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TRACE01 b/tools/testing/selftests/rcutorture/configs/rcu/TRACE01 index 12e7661b86f5..34c8ff5a12f2 100644 --- a/tools/testing/selftests/rcutorture/configs/rcu/TRACE01 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TRACE01 @@ -4,8 +4,8 @@ CONFIG_HOTPLUG_CPU=y CONFIG_PREEMPT_NONE=y CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT=n -CONFIG_DEBUG_LOCK_ALLOC=y -CONFIG_PROVE_LOCKING=y -#CHECK#CONFIG_PROVE_RCU=y +CONFIG_DEBUG_LOCK_ALLOC=n +CONFIG_PROVE_LOCKING=n +#CHECK#CONFIG_PROVE_RCU=n CONFIG_TASKS_TRACE_RCU_READ_MB=y CONFIG_RCU_EXPERT=y diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TRACE02 b/tools/testing/selftests/rcutorture/configs/rcu/TRACE02 index b69ed6673c41..77541eeb4e9f 100644 --- a/tools/testing/selftests/rcutorture/configs/rcu/TRACE02 +++ b/tools/testing/selftests/rcutorture/configs/rcu/TRACE02 @@ -4,8 +4,8 @@ CONFIG_HOTPLUG_CPU=y CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT=y -CONFIG_DEBUG_LOCK_ALLOC=n -CONFIG_PROVE_LOCKING=n -#CHECK#CONFIG_PROVE_RCU=n +CONFIG_DEBUG_LOCK_ALLOC=y +CONFIG_PROVE_LOCKING=y +#CHECK#CONFIG_PROVE_RCU=y CONFIG_TASKS_TRACE_RCU_READ_MB=n CONFIG_RCU_EXPERT=y -- cgit v1.2.3 From 08c7974293851da6a64989b5ce7a0750e58178b1 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Fri, 28 Aug 2020 06:46:03 -0700 Subject: torture: Don't kill gdb sessions The rcutorture scripting will do a "kill -9" on any guest OS that exceeds its --duration by more than a few minutes, which is very valuable when bugs result in hangs. However, this is a problem when the "hang" was due to a --gdb debugging session. This commit therefore refrains from killing the guest OS when a debugging session is in progress. This means that the user must manually kill the kvm.sh process group if a hang really does occur. Signed-off-by: Paul E. McKenney --- tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh index 6dc2b49b85ea..d04966ab88cc 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh @@ -206,7 +206,10 @@ do kruntime=`gawk 'BEGIN { print systime() - '"$kstarttime"' }' < /dev/null` if test -z "$qemu_pid" || kill -0 "$qemu_pid" > /dev/null 2>&1 then - if test $kruntime -ge $seconds -o -f "$TORTURE_STOPFILE" + if test -n "$TORTURE_KCONFIG_GDB_ARG" + then + : + elif test $kruntime -ge $seconds || test -f "$TORTURE_STOPFILE" then break; fi -- cgit v1.2.3 From 899f317e4886f916ed21027177177c11b577cea1 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Wed, 9 Sep 2020 12:27:03 -0700 Subject: rcuscale: Add RCU Tasks Trace This commit adds the ability to test performance and scalability of RCU Tasks Trace updaters. Reported-by: Alexei Starovoitov Signed-off-by: Paul E. 
McKenney --- kernel/rcu/rcuscale.c | 32 +++++++++++++++++++++- .../selftests/rcutorture/configs/rcuscale/CFcommon | 3 ++ .../selftests/rcutorture/configs/rcuscale/TRACE01 | 15 ++++++++++ .../rcutorture/configs/rcuscale/TRACE01.boot | 1 + 4 files changed, 50 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01 create mode 100644 tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01.boot (limited to 'tools') diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c index 2819b95479af..c42f2401c374 100644 --- a/kernel/rcu/rcuscale.c +++ b/kernel/rcu/rcuscale.c @@ -38,6 +38,7 @@ #include #include #include +#include #include "rcu.h" @@ -294,6 +295,35 @@ static struct rcu_scale_ops tasks_ops = { .name = "tasks" }; +/* + * Definitions for RCU-tasks-trace scalability testing. + */ + +static int tasks_trace_scale_read_lock(void) +{ + rcu_read_lock_trace(); + return 0; +} + +static void tasks_trace_scale_read_unlock(int idx) +{ + rcu_read_unlock_trace(); +} + +static struct rcu_scale_ops tasks_tracing_ops = { + .ptype = RCU_TASKS_FLAVOR, + .init = rcu_sync_scale_init, + .readlock = tasks_trace_scale_read_lock, + .readunlock = tasks_trace_scale_read_unlock, + .get_gp_seq = rcu_no_completed, + .gp_diff = rcu_seq_diff, + .async = call_rcu_tasks_trace, + .gp_barrier = rcu_barrier_tasks_trace, + .sync = synchronize_rcu_tasks_trace, + .exp_sync = synchronize_rcu_tasks_trace, + .name = "tasks-tracing" +}; + static unsigned long rcuscale_seq_diff(unsigned long new, unsigned long old) { if (!cur_ops->gp_diff) @@ -754,7 +784,7 @@ rcu_scale_init(void) long i; int firsterr = 0; static struct rcu_scale_ops *scale_ops[] = { - &rcu_ops, &srcu_ops, &srcud_ops, &tasks_ops, + &rcu_ops, &srcu_ops, &srcud_ops, &tasks_ops, &tasks_tracing_ops }; if (!torture_init_begin(scale_type, verbose)) diff --git a/tools/testing/selftests/rcutorture/configs/rcuscale/CFcommon b/tools/testing/selftests/rcutorture/configs/rcuscale/CFcommon index 87caa0e932c7..90942bb5bebc 100644 --- a/tools/testing/selftests/rcutorture/configs/rcuscale/CFcommon +++ b/tools/testing/selftests/rcutorture/configs/rcuscale/CFcommon @@ -1,2 +1,5 @@ CONFIG_RCU_SCALE_TEST=y CONFIG_PRINTK_TIME=y +CONFIG_TASKS_RCU_GENERIC=y +CONFIG_TASKS_RCU=y +CONFIG_TASKS_TRACE_RCU=y diff --git a/tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01 b/tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01 new file mode 100644 index 000000000000..e6baa2fbaeb3 --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01 @@ -0,0 +1,15 @@ +CONFIG_SMP=y +CONFIG_PREEMPT_NONE=y +CONFIG_PREEMPT_VOLUNTARY=n +CONFIG_PREEMPT=n +CONFIG_HZ_PERIODIC=n +CONFIG_NO_HZ_IDLE=y +CONFIG_NO_HZ_FULL=n +CONFIG_RCU_FAST_NO_HZ=n +CONFIG_RCU_NOCB_CPU=n +CONFIG_DEBUG_LOCK_ALLOC=n +CONFIG_PROVE_LOCKING=n +CONFIG_RCU_BOOST=n +CONFIG_DEBUG_OBJECTS_RCU_HEAD=n +CONFIG_RCU_EXPERT=y +CONFIG_RCU_TRACE=y diff --git a/tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01.boot b/tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01.boot new file mode 100644 index 000000000000..af0aff1457a4 --- /dev/null +++ b/tools/testing/selftests/rcutorture/configs/rcuscale/TRACE01.boot @@ -0,0 +1 @@ +rcuscale.scale_type=tasks-tracing -- cgit v1.2.3 From 45c7b962014da36c2ac1aee6e5014b644ba37a84 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Wed, 9 Sep 2020 22:24:57 -0700 Subject: rcuscale: Avoid divide by zero The rcuscale test module does not use batches, so there is only ever one batch. 
This commit therefore informs the kvm-recheck-rcuscale.sh script of this fact of life. Signed-off-by: Paul E. McKenney --- tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuscale.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuscale.sh b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuscale.sh index aa745152a525..b582113178ac 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuscale.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuscale.sh @@ -32,7 +32,7 @@ sed -e 's/^\[[^]]*]//' < $i/console.log | awk ' /-scale: .* gps: .* batches:/ { ngps = $9; - nbatches = $11; + nbatches = 1; } /-scale: .*writer-duration/ { -- cgit v1.2.3 From 8d68e68a781db80606c8e8f3e4383be6974878fd Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Wed, 16 Sep 2020 10:10:10 -0700 Subject: torture: Exclude "NOHZ tick-stop error" from fatal errors The "NOHZ tick-stop error: Non-RCU local softirq work is pending" warning happens frequently and appears to be irrelevant to the various torture tests. This commit therefore filters it out. If there proves to be a need to pay attention to it a later commit will add an "advice" category to allow the user to immediately see that although something happened, it was not an indictment of the system being tortured. Signed-off-by: Paul E. McKenney --- tools/testing/selftests/rcutorture/bin/console-badness.sh | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/bin/console-badness.sh b/tools/testing/selftests/rcutorture/bin/console-badness.sh index 0e4c0b2eb7f0..80ae7f08b363 100755 --- a/tools/testing/selftests/rcutorture/bin/console-badness.sh +++ b/tools/testing/selftests/rcutorture/bin/console-badness.sh @@ -13,4 +13,5 @@ egrep 'Badness|WARNING:|Warn|BUG|===========|Call Trace:|Oops:|detected stalls on CPUs/tasks:|self-detected stall on CPU|Stall ended before state dump start|\?\?\? Writer stall state|rcu_.*kthread starved for|!!!' | grep -v 'ODEBUG: ' | grep -v 'This means that this is a DEBUG kernel and it is' | -grep -v 'Warning: unable to open an initial console' +grep -v 'Warning: unable to open an initial console' | +grep -v 'NOHZ tick-stop error: Non-RCU local softirq work is pending, handler' -- cgit v1.2.3 From 6f26d010e678249367cc00b5a827c3731c8138f3 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 17 Sep 2020 12:34:01 -0700 Subject: rcutorture: Adjust scenarios SRCU-t and SRCU-u to make kconfig happy The SRCU-u scenario expects to enable lockdep but to also disable the CONFIG_PREEMPT_COUNT kconfig option. This no longer works. This commit therefore instead enables lockdep in SRCU-t, which then allows SRCU-u to disable CONFIG_PREEMPT_COUNT. Signed-off-by: Paul E. 
McKenney --- tools/testing/selftests/rcutorture/configs/rcu/SRCU-t | 3 ++- tools/testing/selftests/rcutorture/configs/rcu/SRCU-u | 3 +-- 2 files changed, 3 insertions(+), 3 deletions(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-t b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-t index 6c78022c8cd8..d6557c38dfe4 100644 --- a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-t +++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-t @@ -4,7 +4,8 @@ CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT=n #CHECK#CONFIG_TINY_SRCU=y CONFIG_RCU_TRACE=n -CONFIG_DEBUG_LOCK_ALLOC=n +CONFIG_DEBUG_LOCK_ALLOC=y +CONFIG_PROVE_LOCKING=y CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_ATOMIC_SLEEP=y #CHECK#CONFIG_PREEMPT_COUNT=y diff --git a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-u b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-u index c15ada821e45..6bc24e99862f 100644 --- a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-u +++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-u @@ -4,7 +4,6 @@ CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT=n #CHECK#CONFIG_TINY_SRCU=y CONFIG_RCU_TRACE=n -CONFIG_DEBUG_LOCK_ALLOC=y -CONFIG_PROVE_LOCKING=y +CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_PREEMPT_COUNT=n -- cgit v1.2.3 From c64659ef29e3901be0900ec6fb0485fa3dbdcfd8 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Fri, 18 Sep 2020 13:26:22 -0700 Subject: torture: Prevent jitter processes from delaying failed run Even when the kernel panics and qemu dies, runs with jitter enabled will continue uselessly until the jitter.sh processes terminate. This can be annoying if a planned one-hour run instead dies during boot. This commit therefore kills the jitter.sh processes when the run ends more than one minute prior to the termination time specified by the kvm.sh --duration argument or its default. Signed-off-by: Paul E. McKenney --- tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh | 14 ++++++++++++++ tools/testing/selftests/rcutorture/bin/kvm.sh | 5 ++++- 2 files changed, 18 insertions(+), 1 deletion(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh index d04966ab88cc..3cd03d01857c 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh @@ -226,6 +226,20 @@ do echo "ps -fp $killpid" >> $resdir/Warnings 2>&1 ps -fp $killpid >> $resdir/Warnings 2>&1 fi + # Reduce probability of PID reuse by allowing a one-minute buffer + if test $((kruntime + 60)) -lt $seconds && test -s "$resdir/../jitter_pids" + then + awk < "$resdir/../jitter_pids" ' + NF > 0 { + pidlist = pidlist " " $1; + n++; + } + END { + if (n > 0) { + print "kill " pidlist; + } + }' | sh + fi else echo ' ---' `date`: "Kernel done" fi diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh index 6eb1d3f6524d..5ad3882563ce 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm.sh @@ -459,8 +459,11 @@ function dump(first, pastlast, batchnum) print "if test -n \"$needqemurun\"" print "then" print "\techo ---- Starting kernels. `date` | tee -a " rd "log"; - for (j = 0; j < njitter; j++) + print "\techo > " rd "jitter_pids" + for (j = 0; j < njitter; j++) { print "\tjitter.sh " j " " dur " " ja[2] " " ja[3] "&" + print "\techo $! 
>> " rd "jitter_pids" + } print "\twait" print "\techo ---- All kernel runs complete. `date` | tee -a " rd "log"; print "else" -- cgit v1.2.3 From c1e06287583e5ec496e4c02bf5b319e5e41a1fd2 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Mon, 21 Sep 2020 13:18:48 -0700 Subject: torture: Force weak-hashed pointers on console log Although the rcutorture scripting now deals correctly with full-up security-induced pointer obfuscation, it is still counter-productive for kernel hackers who are analyzing console output. This commit therefore sets the debug_boot_weak_hash kernel boot parameter, which enables printing of weak-hashed pointers for torture-test runs. Please note that this change applies only to runs initiated by the kvm.sh scripting. If you are instead using modprobe and rmmod, it is your responsibility to build and boot the underlying kernel to your taste. Please note further that this change does not result in a security hole in normal use. The rcutorture testing runs with a negligible userspace, no networking, and no user interaction. Besides which, there is no data of value that can be extracted from an rcutorture guest OS that could not also be extracted from the host that this guest is running on. Suggested-by: Anna-Maria Gleixner Signed-off-by: Paul E. McKenney --- tools/testing/selftests/rcutorture/bin/functions.sh | 1 + 1 file changed, 1 insertion(+) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/bin/functions.sh b/tools/testing/selftests/rcutorture/bin/functions.sh index 51f3464b96d3..82663495fb38 100644 --- a/tools/testing/selftests/rcutorture/bin/functions.sh +++ b/tools/testing/selftests/rcutorture/bin/functions.sh @@ -169,6 +169,7 @@ identify_qemu () { # Output arguments for the qemu "-append" string based on CPU type # and the TORTURE_QEMU_INTERACTIVE environment variable. identify_qemu_append () { + echo debug_boot_weak_hash local console=ttyS0 case "$1" in qemu-system-x86_64|qemu-system-i386) -- cgit v1.2.3 From 7de1ca35269ee20e40c35666c810cbaea528c719 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Tue, 22 Sep 2020 17:20:11 -0700 Subject: torture: Accept time units on kvm.sh --duration argument The "--duration " has worked well for a very long time, but it can be inconvenient to compute the minutes for (say) a 28-hour run. It can also be annoying to have to let a simple boot test run for a full minute. This commit therefore permits an "s" suffix to specify seconds, "m" to specify minutes (which remains the default), "h" suffix to specify hours, and "d" to specify days. With this change, "--duration 5" still specifies that each scenario run for five minutes, but "--duration 30s" runs for only 30 seconds, "--duration 8h" runs for eight hours, and "--duration 2d" runs for two days. Signed-off-by: Paul E. 
McKenney --- tools/testing/selftests/rcutorture/bin/kvm.sh | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh index 5ad3882563ce..c348d962304f 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm.sh @@ -58,7 +58,7 @@ usage () { echo " --datestamp string" echo " --defconfig string" echo " --dryrun sched|script" - echo " --duration minutes" + echo " --duration minutes | s | h | d" echo " --gdb" echo " --help" echo " --interactive" @@ -128,8 +128,20 @@ do shift ;; --duration) - checkarg --duration "(minutes)" $# "$2" '^[0-9]*$' '^error' - dur=$(($2*60)) + checkarg --duration "(minutes)" $# "$2" '^[0-9][0-9]*\(s\|m\|h\|d\|\)$' '^error' + mult=60 + if echo "$2" | grep -q 's$' + then + mult=1 + elif echo "$2" | grep -q 'h$' + then + mult=3600 + elif echo "$2" | grep -q 'd$' + then + mult=86400 + fi + ts=`echo $2 | sed -e 's/[smhd]$//'` + dur=$(($ts*mult)) shift ;; --gdb) -- cgit v1.2.3 From a5136f4ffb44f8c1a80406c5bfd4d233433398e6 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 24 Sep 2020 08:52:33 -0700 Subject: torture: Allow alternative forms of kvm.sh command-line arguments This commit allows --build-only as a synonym for --buildonly, --kconfigs for --kconfig, and --kmake-args for --kmake-arg. Signed-off-by: Paul E. McKenney --- tools/testing/selftests/rcutorture/bin/kvm.sh | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/bin/kvm.sh b/tools/testing/selftests/rcutorture/bin/kvm.sh index c348d962304f..45d07b7b69f5 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm.sh @@ -93,7 +93,7 @@ do TORTURE_BOOT_IMAGE="$2" shift ;; - --buildonly) + --buildonly|--build-only) TORTURE_BUILDONLY=1 ;; --configs|--config) @@ -160,7 +160,7 @@ do jitter="$2" shift ;; - --kconfig) + --kconfig|--kconfigs) checkarg --kconfig "(Kconfig options)" $# "$2" '^CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\)\( CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\)\)*$' '^error$' TORTURE_KCONFIG_ARG="$2" shift @@ -171,7 +171,7 @@ do --kcsan) TORTURE_KCONFIG_KCSAN_ARG="CONFIG_DEBUG_INFO=y CONFIG_KCSAN=y CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y CONFIG_KCSAN_INTERRUPT_WATCHER=y"; export TORTURE_KCONFIG_KCSAN_ARG ;; - --kmake-arg) + --kmake-arg|--kmake-args) checkarg --kmake-arg "(kernel make arguments)" $# "$2" '.*' '^error$' TORTURE_KMAKE_ARG="$2" shift -- cgit v1.2.3 From 6c5b9de2c63b2f513a580c6c80d455350012e99b Mon Sep 17 00:00:00 2001 From: Samuel Hernandez Date: Sun, 11 Oct 2020 14:22:31 -0400 Subject: rcutorture/nolibc: Fix a typo in header file This fixes a typo. Before this, the AT_FDCWD macro would be defined regardless of whether or not it's been defined before. Signed-off-by: Samuel Hernandez Signed-off-by: Willy Tarreau Signed-off-by: Paul E. 
McKenney --- tools/include/nolibc/nolibc.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'tools') diff --git a/tools/include/nolibc/nolibc.h b/tools/include/nolibc/nolibc.h index 2551e9b71167..d6d2623c99ad 100644 --- a/tools/include/nolibc/nolibc.h +++ b/tools/include/nolibc/nolibc.h @@ -231,7 +231,7 @@ struct rusage { #define DT_SOCK 12 /* all the *at functions */ -#ifndef AT_FDWCD +#ifndef AT_FDCWD #define AT_FDCWD -100 #endif -- cgit v1.2.3 From 5be7d80deb80ceef50a6bd86d83c8fd62264778a Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 15 Oct 2020 11:42:17 -0700 Subject: torture: Make kvm-check-branches.sh use --allcpus Currently the kvm-check-branches.sh script calculates the number of CPUs and passes this to the kvm.sh --cpus command-line argument. This works, but this commit saves a line by instead using the new kvm.sh --allcpus command-line argument. Signed-off-by: Paul E. McKenney --- tools/testing/selftests/rcutorture/bin/kvm-check-branches.sh | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/bin/kvm-check-branches.sh b/tools/testing/selftests/rcutorture/bin/kvm-check-branches.sh index 6e65c134e5f1..370406bbfeed 100755 --- a/tools/testing/selftests/rcutorture/bin/kvm-check-branches.sh +++ b/tools/testing/selftests/rcutorture/bin/kvm-check-branches.sh @@ -52,8 +52,7 @@ echo Results directory: $resdir/$ds KVM="`pwd`/tools/testing/selftests/rcutorture"; export KVM PATH=${KVM}/bin:$PATH; export PATH . functions.sh -cpus="`identify_qemu_vcpus`" -echo Using up to $cpus CPUs. +echo Using all `identify_qemu_vcpus` CPUs. # Each pass through this loop does one command-line argument. for gitbr in $@ @@ -74,7 +73,7 @@ do # Test the specified commit. git checkout $i > $resdir/$ds/$idir/git-checkout.out 2>&1 echo git checkout return code: $? "(Commit $ntry: $i)" - kvm.sh --cpus $cpus --duration 3 --trust-make > $resdir/$ds/$idir/kvm.sh.out 2>&1 + kvm.sh --allcpus --duration 3 --trust-make > $resdir/$ds/$idir/kvm.sh.out 2>&1 ret=$? echo kvm.sh return code $ret for commit $i from branch $gitbr -- cgit v1.2.3 From 06dc8d4591b8d8ce0ece94474718b53f0a5c5de3 Mon Sep 17 00:00:00 2001 From: Bhaskar Chowdhury Date: Tue, 20 Oct 2020 21:22:56 +0200 Subject: tools/nolibc: Fix a spelling error in a comment Fix a spelling in the comment line. s/memry/memory/p This is on linux-next. Signed-off-by: Bhaskar Chowdhury Signed-off-by: Willy Tarreau Signed-off-by: Paul E. McKenney --- tools/include/nolibc/nolibc.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'tools') diff --git a/tools/include/nolibc/nolibc.h b/tools/include/nolibc/nolibc.h index d6d2623c99ad..e61d36cd4e50 100644 --- a/tools/include/nolibc/nolibc.h +++ b/tools/include/nolibc/nolibc.h @@ -107,7 +107,7 @@ static int errno; #endif /* errno codes all ensure that they will not conflict with a valid pointer - * because they all correspond to the highest addressable memry page. + * because they all correspond to the highest addressable memory page. */ #define MAX_ERRNO 4095 -- cgit v1.2.3 From 01f9e708d9eae6335ae9ff25ab09893c20727a55 Mon Sep 17 00:00:00 2001 From: Anna-Maria Behnsen Date: Mon, 2 Nov 2020 18:12:20 +0100 Subject: tools/rcutorture: Fix BUG parsing of console.log For the rcutorture test summary log file console.log of virtual machines is parsed. When a console.log contains "DEBUG", BUG counter is incremented because regular expression does not handle to ignore DEBUG. 
Signed-off-by: Anna-Maria Behnsen Reviewed-by: Benedikt Spranger Signed-off-by: Paul E. McKenney --- tools/testing/selftests/rcutorture/bin/parse-console.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'tools') diff --git a/tools/testing/selftests/rcutorture/bin/parse-console.sh b/tools/testing/selftests/rcutorture/bin/parse-console.sh index e03338091a06..263b1be50008 100755 --- a/tools/testing/selftests/rcutorture/bin/parse-console.sh +++ b/tools/testing/selftests/rcutorture/bin/parse-console.sh @@ -133,7 +133,7 @@ then then summary="$summary Warnings: $n_warn" fi - n_bugs=`egrep -c 'BUG|Oops:' $file` + n_bugs=`egrep -c '\bBUG|Oops:' $file` if test "$n_bugs" -ne 0 then summary="$summary Bugs: $n_bugs" -- cgit v1.2.3 From ebb477cb2fb7a44ff600e0a7393bad906a0ecd80 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Tue, 11 Aug 2020 11:27:33 -0700 Subject: tools/memory-model: Document categories of ordering primitives The Linux kernel has a number of categories of ordering primitives, which are recorded in the LKMM implementation and hinted at by cheatsheet.txt. But there is no overview of these categories, and such an overview is needed in order to understand multithreaded LKMM litmus tests. This commit therefore adds an ordering.txt as well as extracting a control-dependencies.txt from memory-barriers.txt. It also updates the README file. [ paulmck: Apply Akira Yokosawa file-placement feedback. ] [ paulmck: Apply Alan Stern feedback. ] [ paulmck: Apply self-review feedback. ] Signed-off-by: Paul E. McKenney --- tools/memory-model/Documentation/README | 17 + .../Documentation/control-dependencies.txt | 258 ++++++++++ tools/memory-model/Documentation/ordering.txt | 556 +++++++++++++++++++++ 3 files changed, 831 insertions(+) create mode 100644 tools/memory-model/Documentation/control-dependencies.txt create mode 100644 tools/memory-model/Documentation/ordering.txt (limited to 'tools') diff --git a/tools/memory-model/Documentation/README b/tools/memory-model/Documentation/README index 2d9539f19912..db90a26dbdf4 100644 --- a/tools/memory-model/Documentation/README +++ b/tools/memory-model/Documentation/README @@ -11,6 +11,12 @@ the material provided by documents earlier in this list. o You are new to Linux-kernel concurrency: simple.txt +o You have some background in Linux-kernel concurrency, and would + like an overview of the types of low-level concurrency primitives + that the Linux kernel provides: ordering.txt + + Here, "low level" means atomic operations to single variables. + o You are familiar with the Linux-kernel concurrency primitives that you need, and just want to get started with LKMM litmus tests: litmus-tests.txt @@ -19,6 +25,9 @@ o You are familiar with Linux-kernel concurrency, and would like a detailed intuitive understanding of LKMM, including situations involving more than two threads: recipes.txt +o You would like a detailed understanding of what your compiler can + and cannot do to control dependencies: control-dependencies.txt + o You are familiar with Linux-kernel concurrency and the use of LKMM, and would like a quick reference: cheatsheet.txt @@ -41,6 +50,10 @@ README cheatsheet.txt Quick-reference guide to the Linux-kernel memory model. +control-dependencies.txt + Guide to preventing compiler optimizations from destroying + your control dependencies. + explanation.txt Detailed description of the memory model. @@ -48,6 +61,10 @@ litmus-tests.txt The format, features, capabilities, and limitations of the litmus tests that LKMM can evaluate. 
+ordering.txt + Overview of the Linux kernel's low-level memory-ordering + primitives by category. + recipes.txt Common memory-ordering patterns. diff --git a/tools/memory-model/Documentation/control-dependencies.txt b/tools/memory-model/Documentation/control-dependencies.txt new file mode 100644 index 000000000000..8b743d20fe27 --- /dev/null +++ b/tools/memory-model/Documentation/control-dependencies.txt @@ -0,0 +1,258 @@ +CONTROL DEPENDENCIES +==================== + +A major difficulty with control dependencies is that current compilers +do not support them. One purpose of this document is therefore to +help you prevent your compiler from breaking your code. However, +control dependencies also pose other challenges, which leads to the +second purpose of this document, namely to help you to avoid breaking +your own code, even in the absence of help from your compiler. + +One such challenge is that control dependencies order only later stores. +Therefore, a load-load control dependency will not preserve ordering +unless a read memory barrier is provided. Consider the following code: + + q = READ_ONCE(a); + if (q) + p = READ_ONCE(b); + +This is not guaranteed to provide any ordering because some types of CPUs +are permitted to predict the result of the load from "b". This prediction +can cause other CPUs to see this load as having happened before the load +from "a". This means that an explicit read barrier is required, for example +as follows: + + q = READ_ONCE(a); + if (q) { + smp_rmb(); + p = READ_ONCE(b); + } + +However, stores are not speculated. This means that ordering is +(usually) guaranteed for load-store control dependencies, as in the +following example: + + q = READ_ONCE(a); + if (q) + WRITE_ONCE(b, 1); + +Control dependencies can pair with each other and with other types +of ordering. But please note that neither the READ_ONCE() nor the +WRITE_ONCE() are optional. Without the READ_ONCE(), the compiler might +fuse the load from "a" with other loads. Without the WRITE_ONCE(), +the compiler might fuse the store to "b" with other stores. Worse yet, +the compiler might convert the store into a load and a check followed +by a store, and this compiler-generated load would not be ordered by +the control dependency. + +Furthermore, if the compiler is able to prove that the value of variable +"a" is always non-zero, it would be well within its rights to optimize +the original example by eliminating the "if" statement as follows: + + q = a; + b = 1; /* BUG: Compiler and CPU can both reorder!!! */ + +So don't leave out either the READ_ONCE() or the WRITE_ONCE(). +In particular, although READ_ONCE() does force the compiler to emit a +load, it does *not* force the compiler to actually use the loaded value. + +It is tempting to try use control dependencies to enforce ordering on +identical stores on both branches of the "if" statement as follows: + + q = READ_ONCE(a); + if (q) { + barrier(); + WRITE_ONCE(b, 1); + do_something(); + } else { + barrier(); + WRITE_ONCE(b, 1); + do_something_else(); + } + +Unfortunately, current compilers will transform this as follows at high +optimization levels: + + q = READ_ONCE(a); + barrier(); + WRITE_ONCE(b, 1); /* BUG: No ordering vs. load from a!!! */ + if (q) { + /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */ + do_something(); + } else { + /* WRITE_ONCE(b, 1); -- moved up, BUG!!! 
*/ + do_something_else(); + } + +Now there is no conditional between the load from "a" and the store to +"b", which means that the CPU is within its rights to reorder them: The +conditional is absolutely required, and must be present in the final +assembly code, after all of the compiler and link-time optimizations +have been applied. Therefore, if you need ordering in this example, +you must use explicit memory ordering, for example, smp_store_release(): + + q = READ_ONCE(a); + if (q) { + smp_store_release(&b, 1); + do_something(); + } else { + smp_store_release(&b, 1); + do_something_else(); + } + +Without explicit memory ordering, control-dependency-based ordering is +guaranteed only when the stores differ, for example: + + q = READ_ONCE(a); + if (q) { + WRITE_ONCE(b, 1); + do_something(); + } else { + WRITE_ONCE(b, 2); + do_something_else(); + } + +The initial READ_ONCE() is still required to prevent the compiler from +knowing too much about the value of "a". + +But please note that you need to be careful what you do with the local +variable "q", otherwise the compiler might be able to guess the value +and again remove the conditional branch that is absolutely required to +preserve ordering. For example: + + q = READ_ONCE(a); + if (q % MAX) { + WRITE_ONCE(b, 1); + do_something(); + } else { + WRITE_ONCE(b, 2); + do_something_else(); + } + +If MAX is compile-time defined to be 1, then the compiler knows that +(q % MAX) must be equal to zero, regardless of the value of "q". +The compiler is therefore within its rights to transform the above code +into the following: + + q = READ_ONCE(a); + WRITE_ONCE(b, 2); + do_something_else(); + +Given this transformation, the CPU is not required to respect the ordering +between the load from variable "a" and the store to variable "b". It is +tempting to add a barrier(), but this does not help. The conditional +is gone, and the barrier won't bring it back. Therefore, if you need +to relying on control dependencies to produce this ordering, you should +make sure that MAX is greater than one, perhaps as follows: + + q = READ_ONCE(a); + BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */ + if (q % MAX) { + WRITE_ONCE(b, 1); + do_something(); + } else { + WRITE_ONCE(b, 2); + do_something_else(); + } + +Please note once again that each leg of the "if" statement absolutely +must store different values to "b". As in previous examples, if the two +values were identical, the compiler could pull this store outside of the +"if" statement, destroying the control dependency's ordering properties. + +You must also be careful avoid relying too much on boolean short-circuit +evaluation. Consider this example: + + q = READ_ONCE(a); + if (q || 1 > 0) + WRITE_ONCE(b, 1); + +Because the first condition cannot fault and the second condition is +always true, the compiler can transform this example as follows, again +destroying the control dependency's ordering: + + q = READ_ONCE(a); + WRITE_ONCE(b, 1); + +This is yet another example showing the importance of preventing the +compiler from out-guessing your code. Again, although READ_ONCE() really +does force the compiler to emit code for a given load, the compiler is +within its rights to discard the loaded value. + +In addition, control dependencies apply only to the then-clause and +else-clause of the "if" statement in question. 
In particular, they do +not necessarily order the code following the entire "if" statement: + + q = READ_ONCE(a); + if (q) { + WRITE_ONCE(b, 1); + } else { + WRITE_ONCE(b, 2); + } + WRITE_ONCE(c, 1); /* BUG: No ordering against the read from "a". */ + +It is tempting to argue that there in fact is ordering because the +compiler cannot reorder volatile accesses and also cannot reorder +the writes to "b" with the condition. Unfortunately for this line +of reasoning, the compiler might compile the two writes to "b" as +conditional-move instructions, as in this fanciful pseudo-assembly +language: + + ld r1,a + cmp r1,$0 + cmov,ne r4,$1 + cmov,eq r4,$2 + st r4,b + st $1,c + +The control dependencies would then extend only to the pair of cmov +instructions and the store depending on them. This means that a weakly +ordered CPU would have no dependency of any sort between the load from +"a" and the store to "c". In short, control dependencies provide ordering +only to the stores in the then-clause and else-clause of the "if" statement +in question (including functions invoked by those two clauses), and not +to code following that "if" statement. + + +In summary: + + (*) Control dependencies can order prior loads against later stores. + However, they do *not* guarantee any other sort of ordering: + Not prior loads against later loads, nor prior stores against + later anything. If you need these other forms of ordering, use + smp_load_acquire(), smp_store_release(), or, in the case of prior + stores and later loads, smp_mb(). + + (*) If both legs of the "if" statement contain identical stores to + the same variable, then you must explicitly order those stores, + either by preceding both of them with smp_mb() or by using + smp_store_release(). Please note that it is *not* sufficient to use + barrier() at beginning and end of each leg of the "if" statement + because, as shown by the example above, optimizing compilers can + destroy the control dependency while respecting the letter of the + barrier() law. + + (*) Control dependencies require at least one run-time conditional + between the prior load and the subsequent store, and this + conditional must involve the prior load. If the compiler is able + to optimize the conditional away, it will have also optimized + away the ordering. Careful use of READ_ONCE() and WRITE_ONCE() + can help to preserve the needed conditional. + + (*) Control dependencies require that the compiler avoid reordering the + dependency into nonexistence. Careful use of READ_ONCE() or + atomic{,64}_read() can help to preserve your control dependency. + + (*) Control dependencies apply only to the then-clause and else-clause + of the "if" statement containing the control dependency, including + any functions that these two clauses call. Control dependencies + do *not* apply to code beyond the end of that "if" statement. + + (*) Control dependencies pair normally with other types of barriers. + + (*) Control dependencies do *not* provide multicopy atomicity. If you + need all the CPUs to agree on the ordering of a given store against + all other accesses, use smp_mb(). + + (*) Compilers do not understand control dependencies. It is therefore + your job to ensure that they do not break your code. 
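
The load-to-store ordering summarized above can also be checked directly
with an LKMM litmus test. The following sketch is illustrative only
(the short name is invented here; the in-tree tests use longer,
systematic names), but running it through herd7 with the LKMM
definitions should report the cyclic outcome below as forbidden:
P0()'s control dependency orders its load before its conditional store,
and P1()'s smp_mb() orders its load before its store.

	C LB+ctrl+mb

	{}

	P0(int *x, int *y)
	{
		int r0;

		r0 = READ_ONCE(*x);
		if (r0)
			WRITE_ONCE(*y, 1);
	}

	P1(int *x, int *y)
	{
		int r1;

		r1 = READ_ONCE(*y);
		smp_mb();
		WRITE_ONCE(*x, 1);
	}

	exists (0:r0=1 /\ 1:r1=1)

Keep in mind the limitation noted in litmus-tests.txt earlier in this
series: LKMM recognizes straightforward control dependencies such as
P0()'s, but can miss others, for example when the store follows an
"if" statement that conditionally executes only an smp_mb().
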
diff --git a/tools/memory-model/Documentation/ordering.txt b/tools/memory-model/Documentation/ordering.txt new file mode 100644 index 000000000000..9b0949d3f5ec --- /dev/null +++ b/tools/memory-model/Documentation/ordering.txt @@ -0,0 +1,556 @@ +This document gives an overview of the categories of memory-ordering +operations provided by the Linux-kernel memory model (LKMM). + + +Categories of Ordering +====================== + +This section lists LKMM's three top-level categories of memory-ordering +operations in decreasing order of strength: + +1. Barriers (also known as "fences"). A barrier orders some or + all of the CPU's prior operations against some or all of its + subsequent operations. + +2. Ordered memory accesses. These operations order themselves + against some or all of the CPU's prior accesses or some or all + of the CPU's subsequent accesses, depending on the subcategory + of the operation. + +3. Unordered accesses, as the name indicates, have no ordering + properties except to the extent that they interact with an + operation in the previous categories. This being the real world, + some of these "unordered" operations provide limited ordering + in some special situations. + +Each of the above categories is described in more detail by one of the +following sections. + + +Barriers +======== + +Each of the following categories of barriers is described in its own +subsection below: + +a. Full memory barriers. + +b. Read-modify-write (RMW) ordering augmentation barriers. + +c. Write memory barrier. + +d. Read memory barrier. + +e. Compiler barrier. + +Note well that many of these primitives generate absolutely no code +in kernels built with CONFIG_SMP=n. Therefore, if you are writing +a device driver, which must correctly order accesses to a physical +device even in kernels built with CONFIG_SMP=n, please use the +ordering primitives provided for that purpose. For example, instead of +smp_mb(), use mb(). See the "Linux Kernel Device Drivers" book or the +https://lwn.net/Articles/698014/ article for more information. + + +Full Memory Barriers +-------------------- + +The Linux-kernel primitives that provide full ordering include: + +o The smp_mb() full memory barrier. + +o Value-returning RMW atomic operations whose names do not end in + _acquire, _release, or _relaxed. + +o RCU's grace-period primitives. + +First, the smp_mb() full memory barrier orders all of the CPU's prior +accesses against all subsequent accesses from the viewpoint of all CPUs. +In other words, all CPUs will agree that any earlier action taken +by that CPU happened before any later action taken by that same CPU. +For example, consider the following: + + WRITE_ONCE(x, 1); + smp_mb(); // Order store to x before load from y. + r1 = READ_ONCE(y); + +All CPUs will agree that the store to "x" happened before the load +from "y", as indicated by the comment. And yes, please comment your +memory-ordering primitives. It is surprisingly hard to remember their +purpose after even a few months. + +Second, some RMW atomic operations provide full ordering. These +operations include value-returning RMW atomic operations (that is, those +with non-void return types) whose names do not end in _acquire, _release, +or _relaxed. Examples include atomic_add_return(), atomic_dec_and_test(), +cmpxchg(), and xchg(). Note that conditional RMW atomic operations such +as cmpxchg() are only guaranteed to provide ordering when they succeed. 
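
As a sketch of that caveat, consider the following, in which "owner" is
a placeholder variable invented for this example. The full ordering
provided by cmpxchg() can be relied upon only on the code path where
the operation succeeded:

	WRITE_ONCE(x, 1);
	old = cmpxchg(&owner, 0, 1);	// Fully ordered only on success.
	if (old == 0) {
		r1 = READ_ONCE(y);	// Ordered after the store to x.
	} else {
		smp_mb();		// A failed cmpxchg() gives no such
		r1 = READ_ONCE(y);	// guarantee, so order explicitly.
	}
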
+When RMW atomic operations provide full ordering, they partition the +CPU's accesses into three groups: + +1. All code that executed prior to the RMW atomic operation. + +2. The RMW atomic operation itself. + +3. All code that executed after the RMW atomic operation. + +All CPUs will agree that any operation in a given partition happened +before any operation in a higher-numbered partition. + +In contrast, non-value-returning RMW atomic operations (that is, those +with void return types) do not guarantee any ordering whatsoever. Nor do +value-returning RMW atomic operations whose names end in _relaxed. +Examples of the former include atomic_inc() and atomic_dec(), +while examples of the latter include atomic_cmpxchg_relaxed() and +atomic_xchg_relaxed(). Similarly, value-returning non-RMW atomic +operations such as atomic_read() do not guarantee full ordering, and +are covered in the later section on unordered operations. + +Value-returning RMW atomic operations whose names end in _acquire or +_release provide limited ordering, and will be described later in this +document. + +Finally, RCU's grace-period primitives provide full ordering. These +primitives include synchronize_rcu(), synchronize_rcu_expedited(), +synchronize_srcu() and so on. However, these primitives have orders +of magnitude greater overhead than smp_mb(), atomic_xchg(), and so on. +Furthermore, RCU's grace-period primitives can only be invoked in +sleepable contexts. Therefore, RCU's grace-period primitives are +typically instead used to provide ordering against RCU read-side critical +sections, as documented in their comment headers. But of course if you +need a synchronize_rcu() to interact with readers, it costs you nothing +to also rely on its additional full-memory-barrier semantics. Just please +carefully comment this, otherwise your future self will hate you. + + +RMW Ordering Augmentation Barriers +---------------------------------- + +As noted in the previous section, non-value-returning RMW operations +such as atomic_inc() and atomic_dec() guarantee no ordering whatsoever. +Nevertheless, a number of popular CPU families, including x86, provide +full ordering for these primitives. One way to obtain full ordering on +all architectures is to add a call to smp_mb(): + + WRITE_ONCE(x, 1); + atomic_inc(&my_counter); + smp_mb(); // Inefficient on x86!!! + r1 = READ_ONCE(y); + +This works, but the added smp_mb() adds needless overhead for +x86, on which atomic_inc() provides full ordering all by itself. +The smp_mb__after_atomic() primitive can be used instead: + + WRITE_ONCE(x, 1); + atomic_inc(&my_counter); + smp_mb__after_atomic(); // Order store to x before load from y. + r1 = READ_ONCE(y); + +The smp_mb__after_atomic() primitive emits code only on CPUs whose +atomic_inc() implementations do not guarantee full ordering, thus +incurring no unnecessary overhead on x86. There are a number of +variations on the smp_mb__*() theme: + +o smp_mb__before_atomic(), which provides full ordering prior + to an unordered RMW atomic operation. + +o smp_mb__after_atomic(), which, as shown above, provides full + ordering subsequent to an unordered RMW atomic operation. + +o smp_mb__after_spinlock(), which provides full ordering subsequent + to a successful spinlock acquisition. Note that spin_lock() is + always successful but spin_trylock() might not be. + +o smp_mb__after_srcu_read_unlock(), which provides full ordering + subsequent to an srcu_read_unlock(). 
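
Here is a sketch of the first variant in the list above, reusing the
placeholder variables from the earlier example. The barrier supplies
ordering on architectures whose atomic_inc() does not provide it:

	WRITE_ONCE(x, 1);
	smp_mb__before_atomic();	// Order store to x before load from y.
	atomic_inc(&my_counter);
	r1 = READ_ONCE(y);
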
+ +It is bad practice to place code between the smp__*() primitive and the +operation whose ordering that it is augmenting. The reason is that the +ordering of this intervening code will differ from one CPU architecture +to another. + + +Write Memory Barrier +-------------------- + +The Linux kernel's write memory barrier is smp_wmb(). If a CPU executes +the following code: + + WRITE_ONCE(x, 1); + smp_wmb(); + WRITE_ONCE(y, 1); + +Then any given CPU will see the write to "x" has having happened before +the write to "y". However, you are usually better off using a release +store, as described in the "Release Operations" section below. + +Note that smp_wmb() might fail to provide ordering for unmarked C-language +stores because profile-driven optimization could determine that the +value being overwritten is almost always equal to the new value. Such a +compiler might then reasonably decide to transform "x = 1" and "y = 1" +as follows: + + if (x != 1) + x = 1; + smp_wmb(); // BUG: does not order the reads!!! + if (y != 1) + y = 1; + +Therefore, if you need to use smp_wmb() with unmarked C-language writes, +you will need to make sure that none of the compilers used to build +the Linux kernel carry out this sort of transformation, both now and in +the future. + + +Read Memory Barrier +------------------- + +The Linux kernel's read memory barrier is smp_rmb(). If a CPU executes +the following code: + + r0 = READ_ONCE(y); + smp_rmb(); + r1 = READ_ONCE(x); + +Then any given CPU will see the read from "y" as having preceded the read from +"x". However, you are usually better off using an acquire load, as described +in the "Acquire Operations" section below. + +Compiler Barrier +---------------- + +The Linux kernel's compiler barrier is barrier(). This primitive +prohibits compiler code-motion optimizations that might move memory +references across the point in the code containing the barrier(), but +does not constrain hardware memory ordering. For example, this can be +used to prevent to compiler from moving code across an infinite loop: + + WRITE_ONCE(x, 1); + while (dontstop) + barrier(); + r1 = READ_ONCE(y); + +Without the barrier(), the compiler would be within its rights to move the +WRITE_ONCE() to follow the loop. This code motion could be problematic +in the case where an interrupt handler terminates the loop. Another way +to handle this is to use READ_ONCE() for the load of "dontstop". + +Note that the barriers discussed previously use barrier() or its low-level +equivalent in their implementations. + + +Ordered Memory Accesses +======================= + +The Linux kernel provides a wide variety of ordered memory accesses: + +a. Release operations. + +b. Acquire operations. + +c. RCU read-side ordering. + +d. Control dependencies. + +Each of the above categories has its own section below. + + +Release Operations +------------------ + +Release operations include smp_store_release(), atomic_set_release(), +rcu_assign_pointer(), and value-returning RMW operations whose names +end in _release. These operations order their own store against all +of the CPU's prior memory accesses. Release operations often provide +improved readability and performance compared to explicit barriers. +For example, use of smp_store_release() saves a line compared to the +smp_wmb() example above: + + WRITE_ONCE(x, 1); + smp_store_release(&y, 1); + +More important, smp_store_release() makes it easier to connect up the +different pieces of the concurrent algorithm. 
The variable stored to +by the smp_store_release(), in this case "y", will normally be used in +an acquire operation in other parts of the concurrent algorithm. + +To see the performance advantages, suppose that the above example read +from "x" instead of writing to it. Then an smp_wmb() could not guarantee +ordering, and an smp_mb() would be needed instead: + + r1 = READ_ONCE(x); + smp_mb(); + WRITE_ONCE(y, 1); + +But smp_mb() often incurs much higher overhead than does +smp_store_release(), which still provides the needed ordering of "x" +against "y". On x86, the version using smp_store_release() might compile +to a simple load instruction followed by a simple store instruction. +In contrast, the smp_mb() compiles to an expensive instruction that +provides the needed ordering. + +There is a wide variety of release operations: + +o Store operations, including not only the aforementioned + smp_store_release(), but also atomic_set_release(), and + atomic_long_set_release(). + +o RCU's rcu_assign_pointer() operation. This is the same as + smp_store_release() except that: (1) It takes the pointer to + be assigned to instead of a pointer to that pointer, (2) It + is intended to be used in conjunction with rcu_dereference() + and similar rather than smp_load_acquire(), and (3) It checks + for an RCU-protected pointer in "sparse" runs. + +o Value-returning RMW operations whose names end in _release, + such as atomic_fetch_add_release() and cmpxchg_release(). + Note that release ordering is guaranteed only against the + memory-store portion of the RMW operation, and not against the + memory-load portion. Note also that conditional operations such + as cmpxchg_release() are only guaranteed to provide ordering + when they succeed. + +As mentioned earlier, release operations are often paired with acquire +operations, which are the subject of the next section. + + +Acquire Operations +------------------ + +Acquire operations include smp_load_acquire(), atomic_read_acquire(), +and value-returning RMW operations whose names end in _acquire. These +operations order their own load against all of the CPU's subsequent +memory accesses. Acquire operations often provide improved performance +and readability compared to explicit barriers. For example, use of +smp_load_acquire() saves a line compared to the smp_rmb() example above: + + r0 = smp_load_acquire(&y); + r1 = READ_ONCE(x); + +As with smp_store_release(), this also makes it easier to connect +the different pieces of the concurrent algorithm by looking for the +smp_store_release() that stores to "y". In addition, smp_load_acquire() +improves upon smp_rmb() by ordering against subsequent stores as well +as against subsequent loads. + +There are a couple of categories of acquire operations: + +o Load operations, including not only the aforementioned + smp_load_acquire(), but also atomic_read_acquire(), and + atomic64_read_acquire(). + +o Value-returning RMW operations whose names end in _acquire, + such as atomic_xchg_acquire() and atomic_cmpxchg_acquire(). + Note that acquire ordering is guaranteed only against the + memory-load portion of the RMW operation, and not against the + memory-store portion. Note also that conditional operations + such as atomic_cmpxchg_acquire() are only guaranteed to provide + ordering when they succeed. + +Symmetry being what it is, acquire operations are often paired with the +release operations covered earlier. 
For example, consider the following +example, where task0() and task1() execute concurrently: + + void task0(void) + { + WRITE_ONCE(x, 1); + smp_store_release(&y, 1); + } + + void task1(void) + { + r0 = smp_load_acquire(&y); + r1 = READ_ONCE(x); + } + +If "x" and "y" are both initially zero, then either r0's final value +will be zero or r1's final value will be one, thus providing the required +ordering. + + +RCU Read-Side Ordering +---------------------- + +This category includes read-side markers such as rcu_read_lock() +and rcu_read_unlock() as well as pointer-traversal primitives such as +rcu_dereference() and srcu_dereference(). + +Compared to locking primitives and RMW atomic operations, markers +for RCU read-side critical sections incur very low overhead because +they interact only with the corresponding grace-period primitives. +For example, the rcu_read_lock() and rcu_read_unlock() markers interact +with synchronize_rcu(), synchronize_rcu_expedited(), and call_rcu(). +The way this works is that if a given call to synchronize_rcu() cannot +prove that it started before a given call to rcu_read_lock(), then +that synchronize_rcu() must block until the matching rcu_read_unlock() +is reached. For more information, please see the synchronize_rcu() +docbook header comment and the material in Documentation/RCU. + +RCU's pointer-traversal primitives, including rcu_dereference() and +srcu_dereference(), order their load (which must be a pointer) against any +of the CPU's subsequent memory accesses whose address has been calculated +from the value loaded. There is said to be an *address dependency* +from the value returned by the rcu_dereference() or srcu_dereference() +to that subsequent memory access. + +A call to rcu_dereference() for a given RCU-protected pointer is +usually paired with a call to a call to rcu_assign_pointer() for that +same pointer in much the same way that a call to smp_load_acquire() is +paired with a call to smp_store_release(). Calls to rcu_dereference() +and rcu_assign_pointer are often buried in other APIs, for example, +the RCU list API members defined in include/linux/rculist.h. For more +information, please see the docbook headers in that file, the most +recent LWN article on the RCU API (https://lwn.net/Articles/777036/), +and of course the material in Documentation/RCU. + +If the pointer value is manipulated between the rcu_dereference() +that returned it and a later dereference(), please read +Documentation/RCU/rcu_dereference.rst. It can also be quite helpful to +review uses in the Linux kernel. + + +Control Dependencies +-------------------- + +A control dependency extends from a marked load (READ_ONCE() or stronger) +through an "if" condition to a marked store (WRITE_ONCE() or stronger) +that is executed only by one of the legs of that "if" statement. +Control dependencies are so named because they are mediated by +control-flow instructions such as comparisons and conditional branches. + +In short, you can use a control dependency to enforce ordering between +an READ_ONCE() and a WRITE_ONCE() when there is an "if" condition +between them. The canonical example is as follows: + + q = READ_ONCE(a); + if (q) + WRITE_ONCE(b, 1); + +In this case, all CPUs would see the read from "a" as happening before +the write to "b". + +However, control dependencies are easily destroyed by compiler +optimizations, so any use of control dependencies must take into account +all of the compilers used to build the Linux kernel. 
Please see the
+"control-dependencies.txt" file for more information.
+
+
+Unordered Accesses
+==================
+
+Each of these two categories of unordered accesses has a section below:
+
+a. Unordered marked operations.
+
+b. Unmarked C-language accesses.
+
+
+Unordered Marked Operations
+---------------------------
+
+Unordered operations to different variables are just that, unordered.
+However, if a group of CPUs apply these operations to a single variable,
+all the CPUs will agree on the operation order. Of course, the ordering
+of unordered marked accesses can also be constrained using the mechanisms
+described earlier in this document.
+
+These operations come in three categories:
+
+o Marked writes, such as WRITE_ONCE() and atomic_set(). These
+ primitives require the compiler to emit the corresponding store
+ instructions in the expected execution order, thus suppressing
+ a number of destructive optimizations. However, they provide no
+ hardware ordering guarantees, and in fact many CPUs will happily
+ reorder marked writes with each other or with other unordered
+ operations, unless these operations are to the same variable.
+
+o Marked reads, such as READ_ONCE() and atomic_read(). These
+ primitives require the compiler to emit the corresponding load
+ instructions in the expected execution order, thus suppressing
+ a number of destructive optimizations. However, they provide no
+ hardware ordering guarantees, and in fact many CPUs will happily
+ reorder marked reads with each other or with other unordered
+ operations, unless these operations are to the same variable.
+
+o Unordered RMW atomic operations. These are non-value-returning
+ RMW atomic operations whose names do not end in _acquire or
+ _release, and also value-returning RMW operations whose names
+ end in _relaxed. Examples include atomic_add(), atomic_or(),
+ and atomic64_fetch_xor_relaxed(). These operations do carry
+ out the specified RMW operation atomically, for example, five
+ concurrent atomic_inc() operations applied to a given variable
+ will reliably increase the value of that variable by five.
+ However, many CPUs will happily reorder these operations with
+ each other or with other unordered operations.
+
+ This category of operations can be efficiently ordered using
+ smp_mb__before_atomic() and smp_mb__after_atomic(), as was
+ discussed in the "RMW Ordering Augmentation Barriers" section.
+
+In short, these operations can be freely reordered unless they are all
+operating on a single variable or unless they are constrained by one of
+the operations called out earlier in this document.
+
+
+Unmarked C-Language Accesses
+----------------------------
+
+Unmarked C-language accesses are normal variable accesses to normal
+variables, that is, to variables that are not "volatile" and are not
+C11 atomic variables. These operations provide no ordering guarantees,
+and further do not guarantee "atomic" access. For example, the compiler
+might (and sometimes does) split a plain C-language store into multiple
+smaller stores. A load from that same variable running on some other
+CPU while such a store is executing might see a value that is a mashup
+of the old value and the new value.
+
+Unmarked C-language accesses are unordered, and are also subject to
+any number of compiler optimizations, many of which can break your
+concurrent code. It is possible to use unmarked C-language accesses for
+shared variables that are subject to concurrent access, but great care
+is required on an ongoing basis.
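One illustrative sketch of the risk (hypothetical code and variable names,
not taken from the kernel sources): consider busy-waiting on a flag using a
plain C-language load:

	while (!need_to_stop)		/* Plain load of a shared variable. */
		do_something_quickly();

Because nothing in the loop body tells the compiler that "need_to_stop"
might change, the compiler is permitted to load it once and reuse the value,
turning the loop into an infinite loop. Writing the condition as
"!READ_ONCE(need_to_stop)" rules out that particular transformation,
although the remaining cautions in this section still apply.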
The compiler-constraining barrier()
+primitive can be helpful, as can the various ordering primitives discussed
+in this document. It nevertheless bears repeating that use of unmarked
+C-language accesses requires careful attention to not just your code,
+but to all the compilers that might be used to build it. Such compilers
+might replace a series of loads with a single load, and might replace
+a series of stores with a single store. Some compilers will even split
+a single store into multiple smaller stores.
+
+But there are some ways of using unmarked C-language accesses for shared
+variables without such worries:
+
+o Guard all accesses to a given variable by a particular lock,
+ so that there are never concurrent conflicting accesses to
+ that variable. (There are "conflicting accesses" when
+ (1) at least one of the concurrent accesses to a variable is an
+ unmarked C-language access and (2) at least one of those
+ accesses is a write, whether marked or not.)
+
+o As above, but using other synchronization primitives such
+ as reader-writer locks or sequence locks.
+
+o Use locking or other means to ensure that all concurrent accesses
+ to a given variable are reads.
+
+o Restrict use of a given variable to statistics or heuristics
+ where the occasional bogus value can be tolerated.
+
+o Declare the accessed variables as C11 atomics.
+ https://lwn.net/Articles/691128/
+
+o Declare the accessed variables as "volatile".
+
+If you need to live more dangerously, please do take the time to
+understand the compilers. One place to start is these two LWN
+articles:
+
+Who's afraid of a big bad optimizing compiler?
+ https://lwn.net/Articles/793253
+Calibrating your fear of big bad optimizing compilers
+ https://lwn.net/Articles/799218
+
+Used properly, unmarked C-language accesses can reduce overhead on
+fastpaths. However, the price is great care and continual attention
+to your compiler as new versions come out and as new optimizations
+are enabled. -- cgit v1.2.3 From 0a27ce6b6968866fa8e3bd70371d67752db7718f Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 22 Oct 2020 15:16:08 -0700 Subject: tools/memory-model: Add a glossary of LKMM terms [ paulmck: Apply Alan Stern feedback. ] Signed-off-by: Paul E. McKenney --- tools/memory-model/Documentation/glossary.txt | 172 ++++++++++++++++++++++++++ 1 file changed, 172 insertions(+) create mode 100644 tools/memory-model/Documentation/glossary.txt (limited to 'tools') diff --git a/tools/memory-model/Documentation/glossary.txt b/tools/memory-model/Documentation/glossary.txt new file mode 100644 index 000000000000..79acb75d56ea --- /dev/null +++ b/tools/memory-model/Documentation/glossary.txt @@ -0,0 +1,172 @@ +This document contains brief definitions of LKMM-related terms. Like most
+glossaries, it is not intended to be read front to back (except perhaps
+as a way of confirming a diagnosis of OCD), but rather to be searched
+for specific terms.
+
+
+Address Dependency: When the address of a later memory access is computed
+ based on the value returned by an earlier load, an "address
+ dependency" extends from that load to the later access.
+ Address dependencies are quite common in RCU read-side critical
+ sections:
+
+ 1 rcu_read_lock();
+ 2 p = rcu_dereference(gp);
+ 3 do_something(p->a);
+ 4 rcu_read_unlock();
+
+ In this case, because the address of "p->a" on line 3 is computed
+ from the value returned by the rcu_dereference() on line 2, the
+ address dependency extends from that rcu_dereference() to that
+ "p->a".
In rare cases, optimizing compilers can destroy address
+ dependencies. Please see Documentation/RCU/rcu_dereference.rst
+ for more information.
+
+ See also "Control Dependency" and "Data Dependency".
+
+Acquire: With respect to a lock, acquiring that lock, for example,
+ using spin_lock(). With respect to a non-lock shared variable,
+ a special operation that includes a load and which orders that
+ load before later memory references running on that same CPU.
+ An example special acquire operation is smp_load_acquire(),
+ but atomic_read_acquire() and atomic_xchg_acquire() also include
+ acquire loads.
+
+ When an acquire load returns the value stored by a release store
+ to that same variable, then all operations preceding that store
+ happen before any operations following that load acquire.
+
+ See also "Relaxed" and "Release".
+
+Coherence (co): When one CPU's store to a given variable overwrites
+ either the value from another CPU's store or some later value,
+ there is said to be a coherence link from the second CPU to
+ the first.
+
+ It is also possible to have a coherence link within a CPU, which
+ is a "coherence internal" (coi) link. The term "coherence
+ external" (coe) link is used when it is necessary to exclude
+ the coi case.
+
+ See also "From-reads" and "Reads-from".
+
+Control Dependency: When a later store's execution depends on a test
+ of a value computed from a value returned by an earlier load,
+ a "control dependency" extends from that load to that store.
+ For example:
+
+ 1 if (READ_ONCE(x))
+ 2 WRITE_ONCE(y, 1);
+
+ Here, the control dependency extends from the READ_ONCE() on
+ line 1 to the WRITE_ONCE() on line 2. Control dependencies are
+ fragile, and can be easily destroyed by optimizing compilers.
+ Please see control-dependencies.txt for more information.
+
+ See also "Address Dependency" and "Data Dependency".
+
+Cycle: Memory-barrier pairing is restricted to a pair of CPUs, as the
+ name suggests. And in a great many cases, a pair of CPUs is all
+ that is required. In other cases, the notion of pairing must be
+ extended to additional CPUs, and the result is called a "cycle".
+ In a cycle, each CPU's ordering interacts with that of the next:
+
+ CPU 0 CPU 1 CPU 2
+ WRITE_ONCE(x, 1); WRITE_ONCE(y, 1); WRITE_ONCE(z, 1);
+ smp_mb(); smp_mb(); smp_mb();
+ r0 = READ_ONCE(y); r1 = READ_ONCE(z); r2 = READ_ONCE(x);
+
+ CPU 0's smp_mb() interacts with that of CPU 1, which interacts
+ with that of CPU 2, which in turn interacts with that of CPU 0
+ to complete the cycle. Because of the smp_mb() calls between
+ each pair of memory accesses, the outcome where r0, r1, and r2
+ are all equal to zero is forbidden by LKMM.
+
+ See also "Pairing".
+
+Data Dependency: When the data written by a later store is computed based
+ on the value returned by an earlier load, a "data dependency"
+ extends from that load to that later store. For example:
+
+ 1 r1 = READ_ONCE(x);
+ 2 WRITE_ONCE(y, r1 + 1);
+
+ In this case, the data dependency extends from the READ_ONCE()
+ on line 1 to the WRITE_ONCE() on line 2. Data dependencies are
+ fragile and can be easily destroyed by optimizing compilers.
+ Because optimizing compilers put a great deal of effort into
+ working out what values integer variables might have, this is
+ especially true in cases where the dependency is carried through
+ an integer.
+
+ See also "Address Dependency" and "Control Dependency".
+
+From-Reads (fr): When one CPU's store to a given variable happened
+ too late to affect the value returned by another CPU's
+ load from that same variable, there is said to be a from-reads
+ link from the load to the store.
+
+ It is also possible to have a from-reads link within a CPU, which
+ is a "from-reads internal" (fri) link. The term "from-reads
+ external" (fre) link is used when it is necessary to exclude
+ the fri case.
+
+ See also "Coherence" and "Reads-from".
+
+Fully Ordered: An operation such as smp_mb() that orders all of
+ its CPU's prior accesses with all of that CPU's subsequent
+ accesses, or a marked access such as atomic_add_return()
+ that orders all of its CPU's prior accesses, itself, and
+ all of its CPU's subsequent accesses.
+
+Marked Access: An access to a variable that uses a special function or
+ macro such as "r1 = READ_ONCE(x)" or "smp_store_release(&a, 1)".
+
+ See also "Unmarked Access".
+
+Pairing: "Memory-barrier pairing" reflects the fact that synchronizing
+ data between two CPUs requires that both CPUs order their accesses.
+ Memory barriers thus tend to come in pairs, one executed by
+ one of the CPUs and the other by the other CPU. Of course,
+ pairing also occurs with other types of operations, so that an
+ smp_store_release() pairs with an smp_load_acquire() that reads
+ the value stored.
+
+ See also "Cycle".
+
+Reads-From (rf): When one CPU's load returns the value stored by some other
+ CPU, there is said to be a reads-from link from the second
+ CPU's store to the first CPU's load. Reads-from links have the
+ nice property that time must advance from the store to the load,
+ which means that algorithms using reads-from links can use lighter
+ weight ordering and synchronization compared to algorithms using
+ coherence and from-reads links.
+
+ It is also possible to have a reads-from link within a CPU, which
+ is a "reads-from internal" (rfi) link. The term "reads-from
+ external" (rfe) link is used when it is necessary to exclude
+ the rfi case.
+
+ See also "Coherence" and "From-reads".
+
+Relaxed: A marked access that does not imply ordering, for example, a
+ READ_ONCE(), WRITE_ONCE(), a non-value-returning read-modify-write
+ operation, or a value-returning read-modify-write operation whose
+ name ends in "_relaxed".
+
+ See also "Acquire" and "Release".
+
+Release: With respect to a lock, releasing that lock, for example,
+ using spin_unlock(). With respect to a non-lock shared variable,
+ a special operation that includes a store and which orders that
+ store after earlier memory references that ran on that same CPU.
+ An example special release store is smp_store_release(), but
+ atomic_set_release() and atomic_cmpxchg_release() also include
+ release stores.
+
+ See also "Acquire" and "Relaxed".
+
+Unmarked Access: An access to a variable that uses normal C-language
+ syntax, for example, "a = b[2]";
+
+ See also "Marked Access". -- cgit v1.2.3 From 1947bfcf81a905e84a58b423063e81034a90efed Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 5 Nov 2020 13:20:56 -0800 Subject: tools/memory-model: Add types to litmus tests This commit adds type information for global variables in the litmus tests in order to allow easier use with klitmus7. Signed-off-by: Paul E.
McKenney --- tools/memory-model/litmus-tests/CoRR+poonceonce+Once.litmus | 4 +++- tools/memory-model/litmus-tests/CoRW+poonceonce+Once.litmus | 4 +++- tools/memory-model/litmus-tests/CoWR+poonceonce+Once.litmus | 4 +++- tools/memory-model/litmus-tests/CoWW+poonceonce.litmus | 4 +++- .../litmus-tests/IRIW+fencembonceonces+OnceOnce.litmus | 5 ++++- tools/memory-model/litmus-tests/IRIW+poonceonces+OnceOnce.litmus | 5 ++++- .../litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus | 7 ++++++- tools/memory-model/litmus-tests/ISA2+poonceonces.litmus | 6 +++++- .../ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus | 6 +++++- .../litmus-tests/LB+fencembonceonce+ctrlonceonce.litmus | 5 ++++- .../litmus-tests/LB+poacquireonce+pooncerelease.litmus | 5 ++++- tools/memory-model/litmus-tests/LB+poonceonces.litmus | 5 ++++- .../litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus | 5 ++++- tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus | 5 +++-- .../litmus-tests/MP+polockmbonce+poacquiresilsil.litmus | 2 ++ .../memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus | 2 ++ tools/memory-model/litmus-tests/MP+polocks.litmus | 6 +++++- tools/memory-model/litmus-tests/MP+poonceonces.litmus | 5 ++++- .../litmus-tests/MP+pooncerelease+poacquireonce.litmus | 5 ++++- tools/memory-model/litmus-tests/MP+porevlocks.litmus | 6 +++++- tools/memory-model/litmus-tests/R+fencembonceonces.litmus | 5 ++++- tools/memory-model/litmus-tests/R+poonceonces.litmus | 5 ++++- .../litmus-tests/S+fencewmbonceonce+poacquireonce.litmus | 5 ++++- tools/memory-model/litmus-tests/S+poonceonces.litmus | 5 ++++- tools/memory-model/litmus-tests/SB+fencembonceonces.litmus | 5 ++++- tools/memory-model/litmus-tests/SB+poonceonces.litmus | 5 ++++- tools/memory-model/litmus-tests/SB+rfionceonce-poonceonces.litmus | 5 ++++- tools/memory-model/litmus-tests/WRC+poonceonces+Once.litmus | 5 ++++- .../litmus-tests/WRC+pooncerelease+fencermbonceonce+Once.litmus | 5 ++++- .../litmus-tests/Z6.0+pooncelock+poonceLock+pombonce.litmus | 7 ++++++- .../litmus-tests/Z6.0+pooncelock+pooncelock+pombonce.litmus | 7 ++++++- .../Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus | 6 +++++- 32 files changed, 130 insertions(+), 31 deletions(-) (limited to 'tools') diff --git a/tools/memory-model/litmus-tests/CoRR+poonceonce+Once.litmus b/tools/memory-model/litmus-tests/CoRR+poonceonce+Once.litmus index 967f9f2a6226..772544f03fb5 100644 --- a/tools/memory-model/litmus-tests/CoRR+poonceonce+Once.litmus +++ b/tools/memory-model/litmus-tests/CoRR+poonceonce+Once.litmus @@ -7,7 +7,9 @@ C CoRR+poonceonce+Once * reads from the same variable are ordered. *) -{} +{ + int x; +} P0(int *x) { diff --git a/tools/memory-model/litmus-tests/CoRW+poonceonce+Once.litmus b/tools/memory-model/litmus-tests/CoRW+poonceonce+Once.litmus index 4635739f3974..5faae98f7ffb 100644 --- a/tools/memory-model/litmus-tests/CoRW+poonceonce+Once.litmus +++ b/tools/memory-model/litmus-tests/CoRW+poonceonce+Once.litmus @@ -7,7 +7,9 @@ C CoRW+poonceonce+Once * a given variable and a later write to that same variable are ordered. 
*) -{} +{ + int x; +} P0(int *x) { diff --git a/tools/memory-model/litmus-tests/CoWR+poonceonce+Once.litmus b/tools/memory-model/litmus-tests/CoWR+poonceonce+Once.litmus index bb068c92d8da..77c9cc9f8dc6 100644 --- a/tools/memory-model/litmus-tests/CoWR+poonceonce+Once.litmus +++ b/tools/memory-model/litmus-tests/CoWR+poonceonce+Once.litmus @@ -7,7 +7,9 @@ C CoWR+poonceonce+Once * given variable and a later read from that same variable are ordered. *) -{} +{ + int x; +} P0(int *x) { diff --git a/tools/memory-model/litmus-tests/CoWW+poonceonce.litmus b/tools/memory-model/litmus-tests/CoWW+poonceonce.litmus index 0d9f0a958799..85ef746f511a 100644 --- a/tools/memory-model/litmus-tests/CoWW+poonceonce.litmus +++ b/tools/memory-model/litmus-tests/CoWW+poonceonce.litmus @@ -7,7 +7,9 @@ C CoWW+poonceonce * writes to the same variable are ordered. *) -{} +{ + int x; +} P0(int *x) { diff --git a/tools/memory-model/litmus-tests/IRIW+fencembonceonces+OnceOnce.litmus b/tools/memory-model/litmus-tests/IRIW+fencembonceonces+OnceOnce.litmus index e729d2776e89..87aa900125ab 100644 --- a/tools/memory-model/litmus-tests/IRIW+fencembonceonces+OnceOnce.litmus +++ b/tools/memory-model/litmus-tests/IRIW+fencembonceonces+OnceOnce.litmus @@ -10,7 +10,10 @@ C IRIW+fencembonceonces+OnceOnce * process? This litmus test exercises LKMM's "propagation" rule. *) -{} +{ + int x; + int y; +} P0(int *x) { diff --git a/tools/memory-model/litmus-tests/IRIW+poonceonces+OnceOnce.litmus b/tools/memory-model/litmus-tests/IRIW+poonceonces+OnceOnce.litmus index 4b54dd6a6cd9..f84022dca555 100644 --- a/tools/memory-model/litmus-tests/IRIW+poonceonces+OnceOnce.litmus +++ b/tools/memory-model/litmus-tests/IRIW+poonceonces+OnceOnce.litmus @@ -10,7 +10,10 @@ C IRIW+poonceonces+OnceOnce * different process? *) -{} +{ + int x; + int y; +} P0(int *x) { diff --git a/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus b/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus index 094d58df7789..398f624daa77 100644 --- a/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus +++ b/tools/memory-model/litmus-tests/ISA2+pooncelock+pooncelock+pombonce.litmus @@ -7,7 +7,12 @@ C ISA2+pooncelock+pooncelock+pombonce * (in P0() and P1()) is visible to external process P2(). *) -{} +{ + spinlock_t mylock; + int x; + int y; + int z; +} P0(int *x, int *y, spinlock_t *mylock) { diff --git a/tools/memory-model/litmus-tests/ISA2+poonceonces.litmus b/tools/memory-model/litmus-tests/ISA2+poonceonces.litmus index b321aa6f4ea5..212a432ba16b 100644 --- a/tools/memory-model/litmus-tests/ISA2+poonceonces.litmus +++ b/tools/memory-model/litmus-tests/ISA2+poonceonces.litmus @@ -9,7 +9,11 @@ C ISA2+poonceonces * of the smp_load_acquire() invocations are replaced by READ_ONCE()? *) -{} +{ + int x; + int y; + int z; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus b/tools/memory-model/litmus-tests/ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus index 025b0462ec9b..7afd85672ccd 100644 --- a/tools/memory-model/litmus-tests/ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus +++ b/tools/memory-model/litmus-tests/ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus @@ -11,7 +11,11 @@ C ISA2+pooncerelease+poacquirerelease+poacquireonce * (AKA non-rf) link, so release-acquire is all that is needed. 
*) -{} +{ + int x; + int y; + int z; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/LB+fencembonceonce+ctrlonceonce.litmus b/tools/memory-model/litmus-tests/LB+fencembonceonce+ctrlonceonce.litmus index 4727f5aaf03b..c8a93c7ee556 100644 --- a/tools/memory-model/litmus-tests/LB+fencembonceonce+ctrlonceonce.litmus +++ b/tools/memory-model/litmus-tests/LB+fencembonceonce+ctrlonceonce.litmus @@ -11,7 +11,10 @@ C LB+fencembonceonce+ctrlonceonce * another control dependency and order would still be maintained.) *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/LB+poacquireonce+pooncerelease.litmus b/tools/memory-model/litmus-tests/LB+poacquireonce+pooncerelease.litmus index 07b9904b0e49..2fa029568fa1 100644 --- a/tools/memory-model/litmus-tests/LB+poacquireonce+pooncerelease.litmus +++ b/tools/memory-model/litmus-tests/LB+poacquireonce+pooncerelease.litmus @@ -8,7 +8,10 @@ C LB+poacquireonce+pooncerelease * to the other? *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/LB+poonceonces.litmus b/tools/memory-model/litmus-tests/LB+poonceonces.litmus index 74c49cb3c37b..2107306e8625 100644 --- a/tools/memory-model/litmus-tests/LB+poonceonces.litmus +++ b/tools/memory-model/litmus-tests/LB+poonceonces.litmus @@ -7,7 +7,10 @@ C LB+poonceonces * be prevented even with no explicit ordering? *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus b/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus index a273da9faa6d..e04b71b0ce2b 100644 --- a/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus +++ b/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus @@ -8,7 +8,10 @@ C MP+fencewmbonceonce+fencermbonceonce * is usually better to use smp_store_release() and smp_load_acquire(). 
*) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus b/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus index 97731b4bbdd8..18df682b08b2 100644 --- a/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus +++ b/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus @@ -10,8 +10,9 @@ C MP+onceassign+derefonce *) { -y=z; -z=0; + int x; + int *y=z; + int z=0; } P0(int *x, int **y) diff --git a/tools/memory-model/litmus-tests/MP+polockmbonce+poacquiresilsil.litmus b/tools/memory-model/litmus-tests/MP+polockmbonce+poacquiresilsil.litmus index 50f4d62bbf0e..b1b1266fb49a 100644 --- a/tools/memory-model/litmus-tests/MP+polockmbonce+poacquiresilsil.litmus +++ b/tools/memory-model/litmus-tests/MP+polockmbonce+poacquiresilsil.litmus @@ -11,6 +11,8 @@ C MP+polockmbonce+poacquiresilsil *) { + spinlock_t lo; + int x; } P0(spinlock_t *lo, int *x) diff --git a/tools/memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus b/tools/memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus index abf81e7a0895..867c75d8b960 100644 --- a/tools/memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus +++ b/tools/memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus @@ -11,6 +11,8 @@ C MP+polockonce+poacquiresilsil *) { + spinlock_t lo; + int x; } P0(spinlock_t *lo, int *x) diff --git a/tools/memory-model/litmus-tests/MP+polocks.litmus b/tools/memory-model/litmus-tests/MP+polocks.litmus index 712a4fcdf6ce..63e0f67c9b9d 100644 --- a/tools/memory-model/litmus-tests/MP+polocks.litmus +++ b/tools/memory-model/litmus-tests/MP+polocks.litmus @@ -11,7 +11,11 @@ C MP+polocks * to see all prior accesses by those other CPUs. *) -{} +{ + spinlock_t mylock; + int x; + int y; +} P0(int *x, int *y, spinlock_t *mylock) { diff --git a/tools/memory-model/litmus-tests/MP+poonceonces.litmus b/tools/memory-model/litmus-tests/MP+poonceonces.litmus index 172f0145301c..68180a403e5c 100644 --- a/tools/memory-model/litmus-tests/MP+poonceonces.litmus +++ b/tools/memory-model/litmus-tests/MP+poonceonces.litmus @@ -7,7 +7,10 @@ C MP+poonceonces * no ordering at all? *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus b/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus index d52c68429722..19f3e6874b50 100644 --- a/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus +++ b/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus @@ -8,7 +8,10 @@ C MP+pooncerelease+poacquireonce * pattern. *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/MP+porevlocks.litmus b/tools/memory-model/litmus-tests/MP+porevlocks.litmus index 72c9276b363e..4ac189adf41e 100644 --- a/tools/memory-model/litmus-tests/MP+porevlocks.litmus +++ b/tools/memory-model/litmus-tests/MP+porevlocks.litmus @@ -11,7 +11,11 @@ C MP+porevlocks * see all prior accesses by those other CPUs. *) -{} +{ + spinlock_t mylock; + int x; + int y; +} P0(int *x, int *y, spinlock_t *mylock) { diff --git a/tools/memory-model/litmus-tests/R+fencembonceonces.litmus b/tools/memory-model/litmus-tests/R+fencembonceonces.litmus index 222a0b850b4a..af9463b39b4a 100644 --- a/tools/memory-model/litmus-tests/R+fencembonceonces.litmus +++ b/tools/memory-model/litmus-tests/R+fencembonceonces.litmus @@ -9,7 +9,10 @@ C R+fencembonceonces * cause the resulting test to be allowed. 
*) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/R+poonceonces.litmus b/tools/memory-model/litmus-tests/R+poonceonces.litmus index 5386f128a131..bcd5574e304a 100644 --- a/tools/memory-model/litmus-tests/R+poonceonces.litmus +++ b/tools/memory-model/litmus-tests/R+poonceonces.litmus @@ -8,7 +8,10 @@ C R+poonceonces * store propagation delays. *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/S+fencewmbonceonce+poacquireonce.litmus b/tools/memory-model/litmus-tests/S+fencewmbonceonce+poacquireonce.litmus index 18479823cd6c..c36341d1aed6 100644 --- a/tools/memory-model/litmus-tests/S+fencewmbonceonce+poacquireonce.litmus +++ b/tools/memory-model/litmus-tests/S+fencewmbonceonce+poacquireonce.litmus @@ -7,7 +7,10 @@ C S+fencewmbonceonce+poacquireonce * store against a subsequent store? *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/S+poonceonces.litmus b/tools/memory-model/litmus-tests/S+poonceonces.litmus index 8c9c2f81a580..7775c23143a0 100644 --- a/tools/memory-model/litmus-tests/S+poonceonces.litmus +++ b/tools/memory-model/litmus-tests/S+poonceonces.litmus @@ -9,7 +9,10 @@ C S+poonceonces * READ_ONCE(), is ordering preserved? *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/SB+fencembonceonces.litmus b/tools/memory-model/litmus-tests/SB+fencembonceonces.litmus index ed5fff18d223..833cdfeb7c09 100644 --- a/tools/memory-model/litmus-tests/SB+fencembonceonces.litmus +++ b/tools/memory-model/litmus-tests/SB+fencembonceonces.litmus @@ -9,7 +9,10 @@ C SB+fencembonceonces * suffice, but not much else.) *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/SB+poonceonces.litmus b/tools/memory-model/litmus-tests/SB+poonceonces.litmus index 10d550730b25..c92211ecbfdf 100644 --- a/tools/memory-model/litmus-tests/SB+poonceonces.litmus +++ b/tools/memory-model/litmus-tests/SB+poonceonces.litmus @@ -8,7 +8,10 @@ C SB+poonceonces * variable that the preceding process reads. *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/SB+rfionceonce-poonceonces.litmus b/tools/memory-model/litmus-tests/SB+rfionceonce-poonceonces.litmus index 04a16603660b..84344b455eb7 100644 --- a/tools/memory-model/litmus-tests/SB+rfionceonce-poonceonces.litmus +++ b/tools/memory-model/litmus-tests/SB+rfionceonce-poonceonces.litmus @@ -6,7 +6,10 @@ C SB+rfionceonce-poonceonces * This litmus test demonstrates that LKMM is not fully multicopy atomic. *) -{} +{ + int x; + int y; +} P0(int *x, int *y) { diff --git a/tools/memory-model/litmus-tests/WRC+poonceonces+Once.litmus b/tools/memory-model/litmus-tests/WRC+poonceonces+Once.litmus index 6a2bc12a1af1..431494708611 100644 --- a/tools/memory-model/litmus-tests/WRC+poonceonces+Once.litmus +++ b/tools/memory-model/litmus-tests/WRC+poonceonces+Once.litmus @@ -8,7 +8,10 @@ C WRC+poonceonces+Once * test has no ordering at all. 
*) -{} +{ + int x; + int y; +} P0(int *x) { diff --git a/tools/memory-model/litmus-tests/WRC+pooncerelease+fencermbonceonce+Once.litmus b/tools/memory-model/litmus-tests/WRC+pooncerelease+fencermbonceonce+Once.litmus index e9947250d7de..554999c64db5 100644 --- a/tools/memory-model/litmus-tests/WRC+pooncerelease+fencermbonceonce+Once.litmus +++ b/tools/memory-model/litmus-tests/WRC+pooncerelease+fencermbonceonce+Once.litmus @@ -10,7 +10,10 @@ C WRC+pooncerelease+fencermbonceonce+Once * is A-cumulative in LKMM. *) -{} +{ + int x; + int y; +} P0(int *x) { diff --git a/tools/memory-model/litmus-tests/Z6.0+pooncelock+poonceLock+pombonce.litmus b/tools/memory-model/litmus-tests/Z6.0+pooncelock+poonceLock+pombonce.litmus index 415248fb6699..265a95ffef13 100644 --- a/tools/memory-model/litmus-tests/Z6.0+pooncelock+poonceLock+pombonce.litmus +++ b/tools/memory-model/litmus-tests/Z6.0+pooncelock+poonceLock+pombonce.litmus @@ -9,7 +9,12 @@ C Z6.0+pooncelock+poonceLock+pombonce * by CPUs not holding that lock. *) -{} +{ + spinlock_t mylock; + int x; + int y; + int z; +} P0(int *x, int *y, spinlock_t *mylock) { diff --git a/tools/memory-model/litmus-tests/Z6.0+pooncelock+pooncelock+pombonce.litmus b/tools/memory-model/litmus-tests/Z6.0+pooncelock+pooncelock+pombonce.litmus index 10a2aa04cd07..0c9aea8e80df 100644 --- a/tools/memory-model/litmus-tests/Z6.0+pooncelock+pooncelock+pombonce.litmus +++ b/tools/memory-model/litmus-tests/Z6.0+pooncelock+pooncelock+pombonce.litmus @@ -8,7 +8,12 @@ C Z6.0+pooncelock+pooncelock+pombonce * seen as ordered by a third process not holding that lock. *) -{} +{ + spinlock_t mylock; + int x; + int y; + int z; +} P0(int *x, int *y, spinlock_t *mylock) { diff --git a/tools/memory-model/litmus-tests/Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus b/tools/memory-model/litmus-tests/Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus index 88e70b87a683..661f9aaa5791 100644 --- a/tools/memory-model/litmus-tests/Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus +++ b/tools/memory-model/litmus-tests/Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus @@ -14,7 +14,11 @@ C Z6.0+pooncerelease+poacquirerelease+fencembonceonce * involving locking.) *) -{} +{ + int x; + int y; + int z; +} P0(int *x, int *y) { -- cgit v1.2.3 From acc4bdc55dcb7d7fe0be736999572a55e121873f Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 5 Nov 2020 13:30:11 -0800 Subject: tools/memory-model: Use "buf" and "flag" for message-passing tests The use of "x" and "y" for message-passing tests is fine for people familiar with memory models and litmus-test nomenclature, but is a bit obtuse for others. This commit therefore substitutes "buf" for "x" and "flag" for "y" for the MP tests. There are a few special-case MP tests that use locks and these are unchanged. There is another MP test that uses pointers, and this is changed to name the pointer "p". Reported-by: Johannes Weiner Signed-off-by: Paul E. 
McKenney --- .../MP+fencewmbonceonce+fencermbonceonce.litmus | 16 ++++++++-------- .../litmus-tests/MP+onceassign+derefonce.litmus | 12 ++++++------ tools/memory-model/litmus-tests/MP+polocks.litmus | 16 ++++++++-------- tools/memory-model/litmus-tests/MP+poonceonces.litmus | 16 ++++++++-------- .../litmus-tests/MP+pooncerelease+poacquireonce.litmus | 16 ++++++++-------- tools/memory-model/litmus-tests/MP+porevlocks.litmus | 16 ++++++++-------- 6 files changed, 46 insertions(+), 46 deletions(-) (limited to 'tools') diff --git a/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus b/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus index e04b71b0ce2b..f15e501849dd 100644 --- a/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus +++ b/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus @@ -9,25 +9,25 @@ C MP+fencewmbonceonce+fencermbonceonce *) { - int x; - int y; + int buf; + int flag; } -P0(int *x, int *y) +P0(int *buf, int *flag) { - WRITE_ONCE(*x, 1); + WRITE_ONCE(*buf, 1); smp_wmb(); - WRITE_ONCE(*y, 1); + WRITE_ONCE(*flag, 1); } -P1(int *x, int *y) +P1(int *buf, int *flag) { int r0; int r1; - r0 = READ_ONCE(*y); + r0 = READ_ONCE(*flag); smp_rmb(); - r1 = READ_ONCE(*x); + r1 = READ_ONCE(*buf); } exists (1:r0=1 /\ 1:r1=0) diff --git a/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus b/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus index 18df682b08b2..ed8ee9bde0c9 100644 --- a/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus +++ b/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus @@ -10,24 +10,24 @@ C MP+onceassign+derefonce *) { + int *p=y; int x; - int *y=z; - int z=0; + int y=0; } -P0(int *x, int **y) +P0(int *x, int **p) { WRITE_ONCE(*x, 1); - rcu_assign_pointer(*y, x); + rcu_assign_pointer(*p, x); } -P1(int *x, int **y) +P1(int *x, int **p) { int *r0; int r1; rcu_read_lock(); - r0 = rcu_dereference(*y); + r0 = rcu_dereference(*p); r1 = READ_ONCE(*r0); rcu_read_unlock(); } diff --git a/tools/memory-model/litmus-tests/MP+polocks.litmus b/tools/memory-model/litmus-tests/MP+polocks.litmus index 63e0f67c9b9d..4b0c2edcc029 100644 --- a/tools/memory-model/litmus-tests/MP+polocks.litmus +++ b/tools/memory-model/litmus-tests/MP+polocks.litmus @@ -13,27 +13,27 @@ C MP+polocks { spinlock_t mylock; - int x; - int y; + int buf; + int flag; } -P0(int *x, int *y, spinlock_t *mylock) +P0(int *buf, int *flag, spinlock_t *mylock) { - WRITE_ONCE(*x, 1); + WRITE_ONCE(*buf, 1); spin_lock(mylock); - WRITE_ONCE(*y, 1); + WRITE_ONCE(*flag, 1); spin_unlock(mylock); } -P1(int *x, int *y, spinlock_t *mylock) +P1(int *buf, int *flag, spinlock_t *mylock) { int r0; int r1; spin_lock(mylock); - r0 = READ_ONCE(*y); + r0 = READ_ONCE(*flag); spin_unlock(mylock); - r1 = READ_ONCE(*x); + r1 = READ_ONCE(*buf); } exists (1:r0=1 /\ 1:r1=0) diff --git a/tools/memory-model/litmus-tests/MP+poonceonces.litmus b/tools/memory-model/litmus-tests/MP+poonceonces.litmus index 68180a403e5c..3010bbaec46c 100644 --- a/tools/memory-model/litmus-tests/MP+poonceonces.litmus +++ b/tools/memory-model/litmus-tests/MP+poonceonces.litmus @@ -8,23 +8,23 @@ C MP+poonceonces *) { - int x; - int y; + int buf; + int flag; } -P0(int *x, int *y) +P0(int *buf, int *flag) { - WRITE_ONCE(*x, 1); - WRITE_ONCE(*y, 1); + WRITE_ONCE(*buf, 1); + WRITE_ONCE(*flag, 1); } -P1(int *x, int *y) +P1(int *buf, int *flag) { int r0; int r1; - r0 = READ_ONCE(*y); - r1 = READ_ONCE(*x); + r0 = READ_ONCE(*flag); + r1 = 
READ_ONCE(*buf); } exists (1:r0=1 /\ 1:r1=0) diff --git a/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus b/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus index 19f3e6874b50..21e825d5dea6 100644 --- a/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus +++ b/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus @@ -9,23 +9,23 @@ C MP+pooncerelease+poacquireonce *) { - int x; - int y; + int buf; + int flag; } -P0(int *x, int *y) +P0(int *buf, int *flag) { - WRITE_ONCE(*x, 1); - smp_store_release(y, 1); + WRITE_ONCE(*buf, 1); + smp_store_release(flag, 1); } -P1(int *x, int *y) +P1(int *buf, int *flag) { int r0; int r1; - r0 = smp_load_acquire(y); - r1 = READ_ONCE(*x); + r0 = smp_load_acquire(flag); + r1 = READ_ONCE(*buf); } exists (1:r0=1 /\ 1:r1=0) diff --git a/tools/memory-model/litmus-tests/MP+porevlocks.litmus b/tools/memory-model/litmus-tests/MP+porevlocks.litmus index 4ac189adf41e..9691d55b4e21 100644 --- a/tools/memory-model/litmus-tests/MP+porevlocks.litmus +++ b/tools/memory-model/litmus-tests/MP+porevlocks.litmus @@ -13,27 +13,27 @@ C MP+porevlocks { spinlock_t mylock; - int x; - int y; + int buf; + int flag; } -P0(int *x, int *y, spinlock_t *mylock) +P0(int *buf, int *flag, spinlock_t *mylock) { int r0; int r1; - r0 = READ_ONCE(*y); + r0 = READ_ONCE(*flag); spin_lock(mylock); - r1 = READ_ONCE(*x); + r1 = READ_ONCE(*buf); spin_unlock(mylock); } -P1(int *x, int *y, spinlock_t *mylock) +P1(int *buf, int *flag, spinlock_t *mylock) { spin_lock(mylock); - WRITE_ONCE(*x, 1); + WRITE_ONCE(*buf, 1); spin_unlock(mylock); - WRITE_ONCE(*y, 1); + WRITE_ONCE(*flag, 1); } exists (0:r0=1 /\ 0:r1=0) -- cgit v1.2.3 From b6ff30849ca723b78306514246b98ca5645d92f5 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 5 Nov 2020 13:39:28 -0800 Subject: tools/memory-model: Label MP tests' producers and consumers This commit adds comments that label the MP tests' producer and consumer processes, and also that label the "exists" clause as the bad outcome. Reported-by: Johannes Weiner Signed-off-by: Paul E. McKenney --- .../litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus | 6 +++--- tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus | 6 +++--- .../litmus-tests/MP+polockmbonce+poacquiresilsil.litmus | 6 +++--- .../memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus | 6 +++--- tools/memory-model/litmus-tests/MP+polocks.litmus | 6 +++--- tools/memory-model/litmus-tests/MP+poonceonces.litmus | 6 +++--- .../memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus | 6 +++--- tools/memory-model/litmus-tests/MP+porevlocks.litmus | 6 +++--- 8 files changed, 24 insertions(+), 24 deletions(-) (limited to 'tools') diff --git a/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus b/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus index f15e501849dd..c5c168d92973 100644 --- a/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus +++ b/tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus @@ -13,14 +13,14 @@ C MP+fencewmbonceonce+fencermbonceonce int flag; } -P0(int *buf, int *flag) +P0(int *buf, int *flag) // Producer { WRITE_ONCE(*buf, 1); smp_wmb(); WRITE_ONCE(*flag, 1); } -P1(int *buf, int *flag) +P1(int *buf, int *flag) // Consumer { int r0; int r1; @@ -30,4 +30,4 @@ P1(int *buf, int *flag) r1 = READ_ONCE(*buf); } -exists (1:r0=1 /\ 1:r1=0) +exists (1:r0=1 /\ 1:r1=0) (* Bad outcome. 
*) diff --git a/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus b/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus index ed8ee9bde0c9..20ff62649f1e 100644 --- a/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus +++ b/tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus @@ -15,13 +15,13 @@ C MP+onceassign+derefonce int y=0; } -P0(int *x, int **p) +P0(int *x, int **p) // Producer { WRITE_ONCE(*x, 1); rcu_assign_pointer(*p, x); } -P1(int *x, int **p) +P1(int *x, int **p) // Consumer { int *r0; int r1; @@ -32,4 +32,4 @@ P1(int *x, int **p) rcu_read_unlock(); } -exists (1:r0=x /\ 1:r1=0) +exists (1:r0=x /\ 1:r1=0) (* Bad outcome. *) diff --git a/tools/memory-model/litmus-tests/MP+polockmbonce+poacquiresilsil.litmus b/tools/memory-model/litmus-tests/MP+polockmbonce+poacquiresilsil.litmus index b1b1266fb49a..153917ad5dc9 100644 --- a/tools/memory-model/litmus-tests/MP+polockmbonce+poacquiresilsil.litmus +++ b/tools/memory-model/litmus-tests/MP+polockmbonce+poacquiresilsil.litmus @@ -15,7 +15,7 @@ C MP+polockmbonce+poacquiresilsil int x; } -P0(spinlock_t *lo, int *x) +P0(spinlock_t *lo, int *x) // Producer { spin_lock(lo); smp_mb__after_spinlock(); @@ -23,7 +23,7 @@ P0(spinlock_t *lo, int *x) spin_unlock(lo); } -P1(spinlock_t *lo, int *x) +P1(spinlock_t *lo, int *x) // Consumer { int r1; int r2; @@ -34,4 +34,4 @@ P1(spinlock_t *lo, int *x) r3 = spin_is_locked(lo); } -exists (1:r1=1 /\ 1:r2=0 /\ 1:r3=1) +exists (1:r1=1 /\ 1:r2=0 /\ 1:r3=1) (* Bad outcome. *) diff --git a/tools/memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus b/tools/memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus index 867c75d8b960..aad64397bb8c 100644 --- a/tools/memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus +++ b/tools/memory-model/litmus-tests/MP+polockonce+poacquiresilsil.litmus @@ -15,14 +15,14 @@ C MP+polockonce+poacquiresilsil int x; } -P0(spinlock_t *lo, int *x) +P0(spinlock_t *lo, int *x) // Producer { spin_lock(lo); WRITE_ONCE(*x, 1); spin_unlock(lo); } -P1(spinlock_t *lo, int *x) +P1(spinlock_t *lo, int *x) // Consumer { int r1; int r2; @@ -33,4 +33,4 @@ P1(spinlock_t *lo, int *x) r3 = spin_is_locked(lo); } -exists (1:r1=1 /\ 1:r2=0 /\ 1:r3=1) +exists (1:r1=1 /\ 1:r2=0 /\ 1:r3=1) (* Bad outcome. *) diff --git a/tools/memory-model/litmus-tests/MP+polocks.litmus b/tools/memory-model/litmus-tests/MP+polocks.litmus index 4b0c2edcc029..21cbca6f3be4 100644 --- a/tools/memory-model/litmus-tests/MP+polocks.litmus +++ b/tools/memory-model/litmus-tests/MP+polocks.litmus @@ -17,7 +17,7 @@ C MP+polocks int flag; } -P0(int *buf, int *flag, spinlock_t *mylock) +P0(int *buf, int *flag, spinlock_t *mylock) // Producer { WRITE_ONCE(*buf, 1); spin_lock(mylock); @@ -25,7 +25,7 @@ P0(int *buf, int *flag, spinlock_t *mylock) spin_unlock(mylock); } -P1(int *buf, int *flag, spinlock_t *mylock) +P1(int *buf, int *flag, spinlock_t *mylock) // Consumer { int r0; int r1; @@ -36,4 +36,4 @@ P1(int *buf, int *flag, spinlock_t *mylock) r1 = READ_ONCE(*buf); } -exists (1:r0=1 /\ 1:r1=0) +exists (1:r0=1 /\ 1:r1=0) (* Bad outcome. 
*) diff --git a/tools/memory-model/litmus-tests/MP+poonceonces.litmus b/tools/memory-model/litmus-tests/MP+poonceonces.litmus index 3010bbaec46c..9f9769d647c7 100644 --- a/tools/memory-model/litmus-tests/MP+poonceonces.litmus +++ b/tools/memory-model/litmus-tests/MP+poonceonces.litmus @@ -12,13 +12,13 @@ C MP+poonceonces int flag; } -P0(int *buf, int *flag) +P0(int *buf, int *flag) // Producer { WRITE_ONCE(*buf, 1); WRITE_ONCE(*flag, 1); } -P1(int *buf, int *flag) +P1(int *buf, int *flag) // Consumer { int r0; int r1; @@ -27,4 +27,4 @@ P1(int *buf, int *flag) r1 = READ_ONCE(*buf); } -exists (1:r0=1 /\ 1:r1=0) +exists (1:r0=1 /\ 1:r1=0) (* Bad outcome. *) diff --git a/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus b/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus index 21e825d5dea6..cbe28e733443 100644 --- a/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus +++ b/tools/memory-model/litmus-tests/MP+pooncerelease+poacquireonce.litmus @@ -13,13 +13,13 @@ C MP+pooncerelease+poacquireonce int flag; } -P0(int *buf, int *flag) +P0(int *buf, int *flag) // Producer { WRITE_ONCE(*buf, 1); smp_store_release(flag, 1); } -P1(int *buf, int *flag) +P1(int *buf, int *flag) // Consumer { int r0; int r1; @@ -28,4 +28,4 @@ P1(int *buf, int *flag) r1 = READ_ONCE(*buf); } -exists (1:r0=1 /\ 1:r1=0) +exists (1:r0=1 /\ 1:r1=0) (* Bad outcome. *) diff --git a/tools/memory-model/litmus-tests/MP+porevlocks.litmus b/tools/memory-model/litmus-tests/MP+porevlocks.litmus index 9691d55b4e21..012041bd4feb 100644 --- a/tools/memory-model/litmus-tests/MP+porevlocks.litmus +++ b/tools/memory-model/litmus-tests/MP+porevlocks.litmus @@ -17,7 +17,7 @@ C MP+porevlocks int flag; } -P0(int *buf, int *flag, spinlock_t *mylock) +P0(int *buf, int *flag, spinlock_t *mylock) // Consumer { int r0; int r1; @@ -28,7 +28,7 @@ P0(int *buf, int *flag, spinlock_t *mylock) spin_unlock(mylock); } -P1(int *buf, int *flag, spinlock_t *mylock) +P1(int *buf, int *flag, spinlock_t *mylock) // Producer { spin_lock(mylock); WRITE_ONCE(*buf, 1); @@ -36,4 +36,4 @@ P1(int *buf, int *flag, spinlock_t *mylock) WRITE_ONCE(*flag, 1); } -exists (0:r0=1 /\ 0:r1=0) +exists (0:r0=1 /\ 0:r1=0) (* Bad outcome. *) -- cgit v1.2.3