From 51823ca651364f68bd3ad33d848c1542fffdd627 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Tue, 21 Mar 2023 17:28:40 -0700 Subject: doc: Get rcutree module parameters back into alpha order This commit puts the rcutree module parameters back into proper alphabetical order. Signed-off-by: Paul E. McKenney --- Documentation/admin-guide/kernel-parameters.txt | 120 ++++++++++++------------ 1 file changed, 60 insertions(+), 60 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 9e5bab29685f..8a094e2fc381 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4736,43 +4736,6 @@ the propagation of recent CPU-hotplug changes up the rcu_node combining tree. - rcutree.use_softirq= [KNL] - If set to zero, move all RCU_SOFTIRQ processing to - per-CPU rcuc kthreads. Defaults to a non-zero - value, meaning that RCU_SOFTIRQ is used by default. - Specify rcutree.use_softirq=0 to use rcuc kthreads. - - But note that CONFIG_PREEMPT_RT=y kernels disable - this kernel boot parameter, forcibly setting it - to zero. - - rcutree.rcu_fanout_exact= [KNL] - Disable autobalancing of the rcu_node combining - tree. This is used by rcutorture, and might - possibly be useful for architectures having high - cache-to-cache transfer latencies. - - rcutree.rcu_fanout_leaf= [KNL] - Change the number of CPUs assigned to each - leaf rcu_node structure. Useful for very - large systems, which will choose the value 64, - and for NUMA systems with large remote-access - latencies, which will choose a value aligned - with the appropriate hardware boundaries. - - rcutree.rcu_min_cached_objs= [KNL] - Minimum number of objects which are cached and - maintained per one CPU. Object size is equal - to PAGE_SIZE. The cache allows to reduce the - pressure to page allocator, also it makes the - whole algorithm to behave better in low memory - condition. - - rcutree.rcu_delay_page_cache_fill_msec= [KNL] - Set the page-cache refill delay (in milliseconds) - in response to low-memory conditions. The range - of permitted values is in the range 0:100000. - rcutree.jiffies_till_first_fqs= [KNL] Set delay from grace-period initialization to first attempt to force quiescent states. @@ -4811,21 +4774,6 @@ When RCU_NOCB_CPU is set, also adjust the priority of NOCB callback kthreads. - rcutree.rcu_divisor= [KNL] - Set the shift-right count to use to compute - the callback-invocation batch limit bl from - the number of callbacks queued on this CPU. - The result will be bounded below by the value of - the rcutree.blimit kernel parameter. Every bl - callbacks, the softirq handler will exit in - order to allow the CPU to do other work. - - Please note that this callback-invocation batch - limit applies only to non-offloaded callback - invocation. Offloaded callbacks are instead - invoked in the context of an rcuoc kthread, which - scheduler will preempt as it does any other task. - rcutree.nocb_nobypass_lim_per_jiffy= [KNL] On callback-offloaded (rcu_nocbs) CPUs, RCU reduces the lock contention that would @@ -4839,14 +4787,6 @@ the ->nocb_bypass queue. The definition of "too many" is supplied by this kernel boot parameter. - rcutree.rcu_nocb_gp_stride= [KNL] - Set the number of NOCB callback kthreads in - each group, which defaults to the square root - of the number of CPUs. 
Larger numbers reduce - the wakeup overhead on the global grace-period - kthread, but increases that same overhead on - each group's NOCB grace-period kthread. - rcutree.qhimark= [KNL] Set threshold of queued RCU callbacks beyond which batch limiting is disabled. @@ -4864,6 +4804,56 @@ on rcutree.qhimark at boot time and to zero to disable more aggressive help enlistment. + rcutree.rcu_delay_page_cache_fill_msec= [KNL] + Set the page-cache refill delay (in milliseconds) + in response to low-memory conditions. The range + of permitted values is in the range 0:100000. + + rcutree.rcu_divisor= [KNL] + Set the shift-right count to use to compute + the callback-invocation batch limit bl from + the number of callbacks queued on this CPU. + The result will be bounded below by the value of + the rcutree.blimit kernel parameter. Every bl + callbacks, the softirq handler will exit in + order to allow the CPU to do other work. + + Please note that this callback-invocation batch + limit applies only to non-offloaded callback + invocation. Offloaded callbacks are instead + invoked in the context of an rcuoc kthread, which + scheduler will preempt as it does any other task. + + rcutree.rcu_fanout_exact= [KNL] + Disable autobalancing of the rcu_node combining + tree. This is used by rcutorture, and might + possibly be useful for architectures having high + cache-to-cache transfer latencies. + + rcutree.rcu_fanout_leaf= [KNL] + Change the number of CPUs assigned to each + leaf rcu_node structure. Useful for very + large systems, which will choose the value 64, + and for NUMA systems with large remote-access + latencies, which will choose a value aligned + with the appropriate hardware boundaries. + + rcutree.rcu_min_cached_objs= [KNL] + Minimum number of objects which are cached and + maintained per one CPU. Object size is equal + to PAGE_SIZE. The cache allows to reduce the + pressure to page allocator, also it makes the + whole algorithm to behave better in low memory + condition. + + rcutree.rcu_nocb_gp_stride= [KNL] + Set the number of NOCB callback kthreads in + each group, which defaults to the square root + of the number of CPUs. Larger numbers reduce + the wakeup overhead on the global grace-period + kthread, but increases that same overhead on + each group's NOCB grace-period kthread. + rcutree.rcu_kick_kthreads= [KNL] Cause the grace-period kthread to get an extra wake_up() if it sleeps three times longer than @@ -4885,6 +4875,16 @@ rcu_node tree with an eye towards determining why a new grace period has not yet started. + rcutree.use_softirq= [KNL] + If set to zero, move all RCU_SOFTIRQ processing to + per-CPU rcuc kthreads. Defaults to a non-zero + value, meaning that RCU_SOFTIRQ is used by default. + Specify rcutree.use_softirq=0 to use rcuc kthreads. + + But note that CONFIG_PREEMPT_RT=y kernels disable + this kernel boot parameter, forcibly setting it + to zero. + rcuscale.gp_async= [KNL] Measure performance of asynchronous grace-period primitives such as call_rcu(). -- cgit v1.2.3 From fb6112497bfe6defef46e58da0d5f291c11bd819 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Tue, 21 Mar 2023 17:33:07 -0700 Subject: doc: Document the rcutree.rcu_resched_ns module parameter This commit adds kernel-parameters.txt documentation for the rcutree.rcu_resched_ns module parameter. Signed-off-by: Paul E. 
McKenney --- Documentation/admin-guide/kernel-parameters.txt | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 8a094e2fc381..c21245c0cce6 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4861,6 +4861,13 @@ This wake_up() will be accompanied by a WARN_ONCE() splat and an ftrace_dump(). + rcutree.rcu_resched_ns= [KNL] + Limit the time spend invoking a batch of RCU + callbacks to the specified number of nanoseconds. + By default, this limit is checked only once + every 32 callbacks in order to limit the pain + inflicted by local_clock() overhead. + rcutree.rcu_unlock_delay= [KNL] In CONFIG_RCU_STRICT_GRACE_PERIOD=y kernels, this specifies an rcu_read_unlock()-time delay -- cgit v1.2.3 From 5d80155b17b3125bce873ae3c939b1bb7c11d92b Mon Sep 17 00:00:00 2001 From: Zqiang Date: Mon, 24 Apr 2023 19:51:07 +0800 Subject: MAINTAINERS: Update qiang1.zhang@intel.com to qiang.zhang1211@gmail.com The qiang1.zhang@intel.com email address will no longer be used, so this commit updates to qiang.zhang1211@gmail.com. Signed-off-by: Zqiang Signed-off-by: Paul E. McKenney --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/MAINTAINERS b/MAINTAINERS index 7e0b87d5aa2e..8af0aabe9bf5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -17778,7 +17778,7 @@ M: Boqun Feng R: Steven Rostedt R: Mathieu Desnoyers R: Lai Jiangshan -R: Zqiang +R: Zqiang L: rcu@vger.kernel.org S: Supported W: http://www.rdrop.com/users/paulmck/RCU/ -- cgit v1.2.3 From 1da82598cfc22f43fb0a3bd47774f7e886cc8b62 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Sat, 18 Mar 2023 13:51:10 -0700 Subject: srcu: Remove extraneous parentheses from srcu_read_lock() etc. This commit removes extraneous parentheses from srcu_read_lock(), srcu_read_lock_nmisafe(), srcu_read_unlock(), and srcu_read_unlock_nmisafe(). Looks like someone was once a macro. Cc: Christoph Hellwig Tested-by: Sachin Sant Tested-by: "Zhang, Qiang1" Signed-off-by: Paul E. 
McKenney --- include/linux/srcu.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/include/linux/srcu.h b/include/linux/srcu.h index 41c4b26fb1c1..eb92a50a4599 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -212,7 +212,7 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp) srcu_check_nmi_safety(ssp, false); retval = __srcu_read_lock(ssp); - srcu_lock_acquire(&(ssp)->dep_map); + srcu_lock_acquire(&ssp->dep_map); return retval; } @@ -229,7 +229,7 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp srcu_check_nmi_safety(ssp, true); retval = __srcu_read_lock_nmisafe(ssp); - rcu_lock_acquire(&(ssp)->dep_map); + rcu_lock_acquire(&ssp->dep_map); return retval; } @@ -284,7 +284,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx) { WARN_ON_ONCE(idx & ~0x1); srcu_check_nmi_safety(ssp, false); - srcu_lock_release(&(ssp)->dep_map); + srcu_lock_release(&ssp->dep_map); __srcu_read_unlock(ssp, idx); } @@ -300,7 +300,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) { WARN_ON_ONCE(idx & ~0x1); srcu_check_nmi_safety(ssp, true); - rcu_lock_release(&(ssp)->dep_map); + rcu_lock_release(&ssp->dep_map); __srcu_read_unlock_nmisafe(ssp, idx); } -- cgit v1.2.3 From 7e3f926bf4538cb4988b3e3f8bc1cb4a603b2ef6 Mon Sep 17 00:00:00 2001 From: "Uladzislau Rezki (Sony)" Date: Wed, 1 Feb 2023 16:09:54 +0100 Subject: rcu/kvfree: Eliminate k[v]free_rcu() single argument macro The kvfree_rcu() and kfree_rcu() APIs are hazardous in that if you forget the second argument, it works, but might sleep. This sleeping can be a correctness bug from atomic contexts, and even in non-atomic contexts it might introduce unacceptable latencies. This commit therefore removes the single-argument kvfree_rcu() and kfree_rcu() macros. Code that would have previously used these single-argument kvfree_rcu() and kfree_rcu() macros should instead use kvfree_rcu_mightsleep() or kfree_rcu_mightsleep(). [ paulmck: Apply Joel Fernandes feedback. ] Signed-off-by: Uladzislau Rezki (Sony) Signed-off-by: Paul E. McKenney Signed-off-by: Joel Fernandes (Google) --- include/linux/rcupdate.h | 29 ++++++++--------------------- 1 file changed, 8 insertions(+), 21 deletions(-) diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index dcd2cf1e8326..744869ef930a 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -957,9 +957,8 @@ static inline notrace void rcu_read_unlock_sched_notrace(void) /** * kfree_rcu() - kfree an object after a grace period. - * @ptr: pointer to kfree for both single- and double-argument invocations. - * @rhf: the name of the struct rcu_head within the type of @ptr, - * but only for double-argument invocations. + * @ptr: pointer to kfree for double-argument invocations. + * @rhf: the name of the struct rcu_head within the type of @ptr. * * Many rcu callbacks functions just call kfree() on the base structure. * These functions are trivial, but their size adds up, and furthermore @@ -984,26 +983,18 @@ static inline notrace void rcu_read_unlock_sched_notrace(void) * The BUILD_BUG_ON check must not involve any function calls, hence the * checks are done in macros here. */ -#define kfree_rcu(ptr, rhf...) kvfree_rcu(ptr, ## rhf) +#define kfree_rcu(ptr, rhf) kvfree_rcu_arg_2(ptr, rhf) +#define kvfree_rcu(ptr, rhf) kvfree_rcu_arg_2(ptr, rhf) /** - * kvfree_rcu() - kvfree an object after a grace period. 
- * - * This macro consists of one or two arguments and it is - * based on whether an object is head-less or not. If it - * has a head then a semantic stays the same as it used - * to be before: - * - * kvfree_rcu(ptr, rhf); - * - * where @ptr is a pointer to kvfree(), @rhf is the name - * of the rcu_head structure within the type of @ptr. + * kfree_rcu_mightsleep() - kfree an object after a grace period. + * @ptr: pointer to kfree for single-argument invocations. * * When it comes to head-less variant, only one argument * is passed and that is just a pointer which has to be * freed after a grace period. Therefore the semantic is * - * kvfree_rcu(ptr); + * kfree_rcu_mightsleep(ptr); * * where @ptr is the pointer to be freed by kvfree(). * @@ -1012,13 +1003,9 @@ static inline notrace void rcu_read_unlock_sched_notrace(void) * annotation. Otherwise, please switch and embed the * rcu_head structure within the type of @ptr. */ -#define kvfree_rcu(...) KVFREE_GET_MACRO(__VA_ARGS__, \ - kvfree_rcu_arg_2, kvfree_rcu_arg_1)(__VA_ARGS__) - +#define kfree_rcu_mightsleep(ptr) kvfree_rcu_arg_1(ptr) #define kvfree_rcu_mightsleep(ptr) kvfree_rcu_arg_1(ptr) -#define kfree_rcu_mightsleep(ptr) kvfree_rcu_mightsleep(ptr) -#define KVFREE_GET_MACRO(_1, _2, NAME, ...) NAME #define kvfree_rcu_arg_2(ptr, rhf) \ do { \ typeof (ptr) ___p = (ptr); \ -- cgit v1.2.3 From cdfa0f6fa6b7183c062046043b649b9a91e3ac52 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Mon, 3 Apr 2023 16:49:14 -0700 Subject: rcu/kvfree: Add debug to check grace periods This commit adds debugging checks to verify that the required RCU grace period has elapsed for each kvfree_rcu_bulk_data structure that arrives at the kvfree_rcu_bulk() function. These checks make use of that structure's ->gp_snap field, which has been upgraded from an unsigned long to an rcu_gp_oldstate structure. This upgrade reduces the chances of false positives to nearly zero, even on 32-bit systems, for which this structure carries 64 bits of state. Cc: Ziwei Dai Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 37 +++++++++++++++++++------------------ 1 file changed, 19 insertions(+), 18 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index f52ff7241041..91d75fd6c579 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2756,7 +2756,7 @@ EXPORT_SYMBOL_GPL(call_rcu); */ struct kvfree_rcu_bulk_data { struct list_head list; - unsigned long gp_snap; + struct rcu_gp_oldstate gp_snap; unsigned long nr_records; void *records[]; }; @@ -2921,23 +2921,24 @@ kvfree_rcu_bulk(struct kfree_rcu_cpu *krcp, int i; debug_rcu_bhead_unqueue(bnode); - - rcu_lock_acquire(&rcu_callback_map); - if (idx == 0) { // kmalloc() / kfree(). - trace_rcu_invoke_kfree_bulk_callback( - rcu_state.name, bnode->nr_records, - bnode->records); - - kfree_bulk(bnode->nr_records, bnode->records); - } else { // vmalloc() / vfree(). - for (i = 0; i < bnode->nr_records; i++) { - trace_rcu_invoke_kvfree_callback( - rcu_state.name, bnode->records[i], 0); - - vfree(bnode->records[i]); + if (!WARN_ON_ONCE(!poll_state_synchronize_rcu_full(&bnode->gp_snap))) { + rcu_lock_acquire(&rcu_callback_map); + if (idx == 0) { // kmalloc() / kfree(). + trace_rcu_invoke_kfree_bulk_callback( + rcu_state.name, bnode->nr_records, + bnode->records); + + kfree_bulk(bnode->nr_records, bnode->records); + } else { // vmalloc() / vfree(). 
+ for (i = 0; i < bnode->nr_records; i++) { + trace_rcu_invoke_kvfree_callback( + rcu_state.name, bnode->records[i], 0); + + vfree(bnode->records[i]); + } } + rcu_lock_release(&rcu_callback_map); } - rcu_lock_release(&rcu_callback_map); raw_spin_lock_irqsave(&krcp->lock, flags); if (put_cached_bnode(krcp, bnode)) @@ -3081,7 +3082,7 @@ kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp) INIT_LIST_HEAD(&bulk_ready[i]); list_for_each_entry_safe_reverse(bnode, n, &krcp->bulk_head[i], list) { - if (!poll_state_synchronize_rcu(bnode->gp_snap)) + if (!poll_state_synchronize_rcu_full(&bnode->gp_snap)) break; atomic_sub(bnode->nr_records, &krcp->bulk_count[i]); @@ -3285,7 +3286,7 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp, // Finally insert and update the GP for this page. bnode->records[bnode->nr_records++] = ptr; - bnode->gp_snap = get_state_synchronize_rcu(); + get_state_synchronize_rcu_full(&bnode->gp_snap); atomic_inc(&(*krcp)->bulk_count[idx]); return true; -- cgit v1.2.3 From f32276a37652a9ce05db27cdfb40ac3e3fc98f9f Mon Sep 17 00:00:00 2001 From: "Uladzislau Rezki (Sony)" Date: Tue, 4 Apr 2023 16:13:00 +0200 Subject: rcu/kvfree: Add debug check for GP complete for kfree_rcu_cpu list Under low-memory conditions, kvfree_rcu() will use each object's rcu_head structure to queue objects in a singly linked list headed by the kfree_rcu_cpu structure's ->head field. This list is passed to call_rcu() as a unit, but there is no indication of which grace period this list needs to wait for. This in turn prevents adding debug checks in the kfree_rcu_work() as was done for the two page-of-pointers channels in the kfree_rcu_cpu structure. This commit therefore adds a ->head_free_gp_snap field to the kfree_rcu_cpu_work structure to record this grace-period number. It also adds a WARN_ON_ONCE() to kfree_rcu_monitor() that checks to make sure that the required grace period has in fact elapsed. [ paulmck: Fix kerneldoc issue raised by Stephen Rothwell. ] Signed-off-by: Uladzislau Rezki (Sony) Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 91d75fd6c579..7452ba97ba34 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2773,6 +2773,7 @@ struct kvfree_rcu_bulk_data { * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period * @head_free: List of kfree_rcu() objects waiting for a grace period + * @head_free_gp_snap: Grace-period snapshot to check for attempted premature frees. * @bulk_head_free: Bulk-List of kvfree_rcu() objects waiting for a grace period * @krcp: Pointer to @kfree_rcu_cpu structure */ @@ -2780,6 +2781,7 @@ struct kvfree_rcu_bulk_data { struct kfree_rcu_cpu_work { struct rcu_work rcu_work; struct rcu_head *head_free; + struct rcu_gp_oldstate head_free_gp_snap; struct list_head bulk_head_free[FREE_N_CHANNELS]; struct kfree_rcu_cpu *krcp; }; @@ -2985,6 +2987,7 @@ static void kfree_rcu_work(struct work_struct *work) struct rcu_head *head; struct kfree_rcu_cpu *krcp; struct kfree_rcu_cpu_work *krwp; + struct rcu_gp_oldstate head_gp_snap; int i; krwp = container_of(to_rcu_work(work), @@ -2999,6 +3002,7 @@ static void kfree_rcu_work(struct work_struct *work) // Channel 3. head = krwp->head_free; krwp->head_free = NULL; + head_gp_snap = krwp->head_free_gp_snap; raw_spin_unlock_irqrestore(&krcp->lock, flags); // Handle the first two channels. 
@@ -3015,7 +3019,8 @@ static void kfree_rcu_work(struct work_struct *work) * queued on a linked list through their rcu_head structures. * This list is named "Channel 3". */ - kvfree_rcu_list(head); + if (head && !WARN_ON_ONCE(!poll_state_synchronize_rcu_full(&head_gp_snap))) + kvfree_rcu_list(head); } static bool @@ -3147,6 +3152,7 @@ static void kfree_rcu_monitor(struct work_struct *work) // objects queued on the linked list. if (!krwp->head_free) { krwp->head_free = krcp->head; + get_state_synchronize_rcu_full(&krwp->head_free_gp_snap); atomic_set(&krcp->head_count, 0); WRITE_ONCE(krcp->head, NULL); } -- cgit v1.2.3 From 1e237994d9c9a5ae47ae13030585a413a29469e6 Mon Sep 17 00:00:00 2001 From: Zqiang Date: Wed, 5 Apr 2023 10:13:59 +0800 Subject: rcu/kvfree: Invoke debug_rcu_bhead_unqueue() after checking bnode->gp_snap If kvfree_rcu_bulk() sees that the required grace period has failed to elapse, it leaks the memory because readers might still be using it. But in that case, the debug-objects subsystem still marks the relevant structures as having been freed, even though they are instead being leaked. This commit fixes this mismatch by invoking debug_rcu_bhead_unqueue() only when we are actually going to free the objects. Signed-off-by: Zqiang Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 7452ba97ba34..426f1f3bb5f2 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2922,8 +2922,8 @@ kvfree_rcu_bulk(struct kfree_rcu_cpu *krcp, unsigned long flags; int i; - debug_rcu_bhead_unqueue(bnode); if (!WARN_ON_ONCE(!poll_state_synchronize_rcu_full(&bnode->gp_snap))) { + debug_rcu_bhead_unqueue(bnode); rcu_lock_acquire(&rcu_callback_map); if (idx == 0) { // kmalloc() / kfree(). trace_rcu_invoke_kfree_bulk_callback( -- cgit v1.2.3 From 309a4316507767f8078d30c9681dc76f4299b0f1 Mon Sep 17 00:00:00 2001 From: Zqiang Date: Sat, 8 Apr 2023 22:25:30 +0800 Subject: rcu/kvfree: Use consistent krcp when growing kfree_rcu() page cache The add_ptr_to_bulk_krc_lock() function is invoked to allocate a new kfree_rcu() page, also known as a kvfree_rcu_bulk_data structure. The kfree_rcu_cpu structure's lock is used to protect this operation, except that this lock must be momentarily dropped when allocating memory. It is clearly important that the lock that is reacquired be the same lock that was acquired initially via krc_this_cpu_lock(). Unfortunately, this same krc_this_cpu_lock() function is used to re-acquire this lock, and if the task migrated to some other CPU during the memory allocation, this will result in the kvfree_rcu_bulk_data structure being added to the wrong CPU's kfree_rcu_cpu structure. This commit therefore replaces that second call to krc_this_cpu_lock() with raw_spin_lock_irqsave() in order to explicitly acquire the lock on the correct kfree_rcu_cpu structure, thus keeping things straight even when the task migrates. Signed-off-by: Zqiang Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 426f1f3bb5f2..51d84eabf645 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3279,7 +3279,7 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp, // scenarios. 
bnode = (struct kvfree_rcu_bulk_data *) __get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN); - *krcp = krc_this_cpu_lock(flags); + raw_spin_lock_irqsave(&(*krcp)->lock, *flags); } if (!bnode) -- cgit v1.2.3 From 021a5ff8474379cd6c23e9b0e97aa27e5ff66a8b Mon Sep 17 00:00:00 2001 From: "Uladzislau Rezki (Sony)" Date: Tue, 11 Apr 2023 15:13:41 +0200 Subject: rcu/kvfree: Do not run a page work if a cache is disabled By default the cache size is 5 pages per CPU, but it can be disabled at boot time by setting the rcu_min_cached_objs to zero. When that happens, the current code will uselessly set an hrtimer to schedule refilling this cache with zero pages. This commit therefore streamlines this process by simply refusing the set the hrtimer when rcu_min_cached_objs is zero. Signed-off-by: Uladzislau Rezki (Sony) Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 51d84eabf645..18f592bf6dc6 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3225,6 +3225,10 @@ static void fill_page_cache_func(struct work_struct *work) static void run_page_cache_worker(struct kfree_rcu_cpu *krcp) { + // If cache disabled, bail out. + if (!rcu_min_cached_objs) + return; + if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING && !atomic_xchg(&krcp->work_in_progress, 1)) { if (atomic_read(&krcp->backoff_page_cache_fill)) { -- cgit v1.2.3 From 60888b77a06ea16665e4df980bb86b418253e268 Mon Sep 17 00:00:00 2001 From: Zqiang Date: Wed, 12 Apr 2023 22:31:27 +0800 Subject: rcu/kvfree: Make fill page cache start from krcp->nr_bkv_objs When the fill_page_cache_func() function is invoked, it assumes that the cache of pages is completely empty. However, there can be some time between triggering execution of this function and its actual invocation. During this time, kfree_rcu_work() might run, and might fill in part or all of this cache of pages, thus invalidating the fill_page_cache_func() function's assumption. This will not overfill the cache because put_cached_bnode() will reject the extra page. However, it will result in a needless allocation and freeing of one extra page, which might not be helpful under lowish-memory conditions. This commit therefore causes the fill_page_cache_func() to explicitly account for pages that have been placed into the cache shortly before it starts running. Signed-off-by: Zqiang Reviewed-by: Uladzislau Rezki (Sony) Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 18f592bf6dc6..98f2e833e217 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3201,7 +3201,7 @@ static void fill_page_cache_func(struct work_struct *work) nr_pages = atomic_read(&krcp->backoff_page_cache_fill) ? 1 : rcu_min_cached_objs; - for (i = 0; i < nr_pages; i++) { + for (i = READ_ONCE(krcp->nr_bkv_objs); i < nr_pages; i++) { bnode = (struct kvfree_rcu_bulk_data *) __get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN); -- cgit v1.2.3 From 6b706e5603c44ff0b6f43c2e26e0d590e1d265f8 Mon Sep 17 00:00:00 2001 From: Zqiang Date: Tue, 18 Apr 2023 20:27:02 +0800 Subject: rcu/kvfree: Make drain_page_cache() take early return if cache is disabled If the rcutree.rcu_min_cached_objs kernel boot parameter is set to zero, then krcp->page_cache_work will never be triggered to fill page cache. In addition, the put_cached_bnode() will not fill page cache. 
As a result krcp->bkvcache will always be empty, so there is no need to acquire krcp->lock to get page from krcp->bkvcache. This commit therefore makes drain_page_cache() return immediately if the rcu_min_cached_objs is zero. Signed-off-by: Zqiang Reviewed-by: Uladzislau Rezki (Sony) Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 98f2e833e217..00ed45ddc6ca 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2902,6 +2902,9 @@ drain_page_cache(struct kfree_rcu_cpu *krcp) struct llist_node *page_list, *pos, *n; int freed = 0; + if (!rcu_min_cached_objs) + return 0; + raw_spin_lock_irqsave(&krcp->lock, flags); page_list = llist_del_all(&krcp->bkvcache); WRITE_ONCE(krcp->nr_bkv_objs, 0); -- cgit v1.2.3 From 5c83cedbaaad6dfe290e50658a204556ac5ac683 Mon Sep 17 00:00:00 2001 From: Frederic Weisbecker Date: Wed, 29 Mar 2023 18:02:00 +0200 Subject: rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading The shrinker may run concurrently with callbacks (de-)offloading. As such, calling rcu_nocb_lock() is very dangerous because it does a conditional locking. The worst outcome is that rcu_nocb_lock() doesn't lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating an imbalance. Fix this with protecting against (de-)offloading using the barrier mutex. Although if the barrier mutex is contended, which should be rare, then step aside so as not to trigger a mutex VS allocation dependency chain. Signed-off-by: Frederic Weisbecker Signed-off-by: Paul E. McKenney --- kernel/rcu/tree_nocb.h | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h index f2280616f9d5..1a86883902ce 100644 --- a/kernel/rcu/tree_nocb.h +++ b/kernel/rcu/tree_nocb.h @@ -1336,13 +1336,33 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) unsigned long flags; unsigned long count = 0; + /* + * Protect against concurrent (de-)offloading. Otherwise nocb locking + * may be ignored or imbalanced. + */ + if (!mutex_trylock(&rcu_state.barrier_mutex)) { + /* + * But really don't insist if barrier_mutex is contended since we + * can't guarantee that it will never engage in a dependency + * chain involving memory allocation. The lock is seldom contended + * anyway. + */ + return 0; + } + /* Snapshot count of all CPUs */ for_each_possible_cpu(cpu) { struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); - int _count = READ_ONCE(rdp->lazy_len); + int _count; + + if (!rcu_rdp_is_offloaded(rdp)) + continue; + + _count = READ_ONCE(rdp->lazy_len); if (_count == 0) continue; + rcu_nocb_lock_irqsave(rdp, flags); WRITE_ONCE(rdp->lazy_len, 0); rcu_nocb_unlock_irqrestore(rdp, flags); @@ -1352,6 +1372,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) if (sc->nr_to_scan <= 0) break; } + + mutex_unlock(&rcu_state.barrier_mutex); + return count ? count : SHRINK_STOP; } -- cgit v1.2.3 From 7625926086765123251f765d91fc3a70617d334d Mon Sep 17 00:00:00 2001 From: Frederic Weisbecker Date: Wed, 29 Mar 2023 18:02:01 +0200 Subject: rcu/nocb: Fix shrinker race against callback enqueuer The shrinker resets the lazy callbacks counter in order to trigger the pending lazy queue flush though the rcuog kthread. The counter reset is protected by the ->nocb_lock against concurrent accesses...except for one of them. 
Here is a list of existing synchronized readers/writer: 1) The first lazy enqueuer (incrementing ->lazy_len to 1) does so under ->nocb_lock and ->nocb_bypass_lock. 2) The further lazy enqueuers (incrementing ->lazy_len above 1) do so under ->nocb_bypass_lock _only_. 3) The lazy flush checks and resets to 0 under ->nocb_lock and ->nocb_bypass_lock. The shrinker protects its ->lazy_len reset against cases 1) and 3) but not against 2). As such, setting ->lazy_len to 0 under the ->nocb_lock may be cancelled right away by an overwrite from an enqueuer, leading rcuog to ignore the flush. To avoid that, use the proper bypass flush API which takes care of all those details. Signed-off-by: Frederic Weisbecker Signed-off-by: Paul E. McKenney --- kernel/rcu/tree_nocb.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h index 1a86883902ce..c321fce2af8e 100644 --- a/kernel/rcu/tree_nocb.h +++ b/kernel/rcu/tree_nocb.h @@ -1364,7 +1364,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) continue; rcu_nocb_lock_irqsave(rdp, flags); - WRITE_ONCE(rdp->lazy_len, 0); + WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false)); rcu_nocb_unlock_irqrestore(rdp, flags); wake_nocb_gp(rdp, false); sc->nr_to_scan -= _count; -- cgit v1.2.3 From b96a8b0b5be40f9bc9e45819f14b32ea9cdce73f Mon Sep 17 00:00:00 2001 From: Frederic Weisbecker Date: Wed, 29 Mar 2023 18:02:02 +0200 Subject: rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker The ->lazy_len is only checked locklessly. Recheck again under the ->nocb_lock to avoid spending more time on flushing/waking if not necessary. The ->lazy_len can still increment concurrently (from 1 to infinity) but under the ->nocb_lock we at least know for sure if there are lazy callbacks at all (->lazy_len > 0). Signed-off-by: Frederic Weisbecker Signed-off-by: Paul E. McKenney --- kernel/rcu/tree_nocb.h | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h index c321fce2af8e..dfa9c10d6727 100644 --- a/kernel/rcu/tree_nocb.h +++ b/kernel/rcu/tree_nocb.h @@ -1358,12 +1358,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) if (!rcu_rdp_is_offloaded(rdp)) continue; - _count = READ_ONCE(rdp->lazy_len); - - if (_count == 0) + if (!READ_ONCE(rdp->lazy_len)) continue; rcu_nocb_lock_irqsave(rdp, flags); + /* + * Recheck under the nocb lock. Since we are not holding the bypass + * lock we may still race with increments from the enqueuer but still + * we know for sure if there is at least one lazy callback. + */ + _count = READ_ONCE(rdp->lazy_len); + if (!_count) { + rcu_nocb_unlock_irqrestore(rdp, flags); + continue; + } WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false)); rcu_nocb_unlock_irqrestore(rdp, flags); wake_nocb_gp(rdp, false); -- cgit v1.2.3 From 5fc8cbe4cf0fd34ded8045c385790c3bf04f6785 Mon Sep 17 00:00:00 2001 From: Shigeru Yoshida Date: Wed, 3 Aug 2022 01:22:05 +0900 Subject: rcu-tasks: Avoid pr_info() with spin lock in cblist_init_generic() pr_info() is called with rtp->cbs_gbl_lock spin lock locked. Because pr_info() calls printk() that might sleep, this will result in BUG like below: [ 0.206455] cblist_init_generic: Setting adjustable number of callback queues. 
[ 0.206463] [ 0.206464] ============================= [ 0.206464] [ BUG: Invalid wait context ] [ 0.206465] 5.19.0-00428-g9de1f9c8ca51 #5 Not tainted [ 0.206466] ----------------------------- [ 0.206466] swapper/0/1 is trying to lock: [ 0.206467] ffffffffa0167a58 (&port_lock_key){....}-{3:3}, at: serial8250_console_write+0x327/0x4a0 [ 0.206473] other info that might help us debug this: [ 0.206473] context-{5:5} [ 0.206474] 3 locks held by swapper/0/1: [ 0.206474] #0: ffffffff9eb597e0 (rcu_tasks.cbs_gbl_lock){....}-{2:2}, at: cblist_init_generic.constprop.0+0x14/0x1f0 [ 0.206478] #1: ffffffff9eb579c0 (console_lock){+.+.}-{0:0}, at: _printk+0x63/0x7e [ 0.206482] #2: ffffffff9ea77780 (console_owner){....}-{0:0}, at: console_emit_next_record.constprop.0+0x111/0x330 [ 0.206485] stack backtrace: [ 0.206486] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-00428-g9de1f9c8ca51 #5 [ 0.206488] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-1.fc36 04/01/2014 [ 0.206489] Call Trace: [ 0.206490] [ 0.206491] dump_stack_lvl+0x6a/0x9f [ 0.206493] __lock_acquire.cold+0x2d7/0x2fe [ 0.206496] ? stack_trace_save+0x46/0x70 [ 0.206497] lock_acquire+0xd1/0x2f0 [ 0.206499] ? serial8250_console_write+0x327/0x4a0 [ 0.206500] ? __lock_acquire+0x5c7/0x2720 [ 0.206502] _raw_spin_lock_irqsave+0x3d/0x90 [ 0.206504] ? serial8250_console_write+0x327/0x4a0 [ 0.206506] serial8250_console_write+0x327/0x4a0 [ 0.206508] console_emit_next_record.constprop.0+0x180/0x330 [ 0.206511] console_unlock+0xf7/0x1f0 [ 0.206512] vprintk_emit+0xf7/0x330 [ 0.206514] _printk+0x63/0x7e [ 0.206516] cblist_init_generic.constprop.0.cold+0x24/0x32 [ 0.206518] rcu_init_tasks_generic+0x5/0xd9 [ 0.206522] kernel_init_freeable+0x15b/0x2a2 [ 0.206523] ? rest_init+0x160/0x160 [ 0.206526] kernel_init+0x11/0x120 [ 0.206527] ret_from_fork+0x1f/0x30 [ 0.206530] [ 0.207018] cblist_init_generic: Setting shift to 1 and lim to 1. This patch moves pr_info() so that it is called without rtp->cbs_gbl_lock locked. Signed-off-by: Shigeru Yoshida Tested-by: "Zhang, Qiang1" Signed-off-by: Paul E. McKenney --- kernel/rcu/tasks.h | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h index 5f4fc8184dd0..65df1aaf0ce9 100644 --- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -241,7 +241,6 @@ static void cblist_init_generic(struct rcu_tasks *rtp) if (rcu_task_enqueue_lim < 0) { rcu_task_enqueue_lim = 1; rcu_task_cb_adjust = true; - pr_info("%s: Setting adjustable number of callback queues.\n", __func__); } else if (rcu_task_enqueue_lim == 0) { rcu_task_enqueue_lim = 1; } @@ -272,6 +271,10 @@ static void cblist_init_generic(struct rcu_tasks *rtp) raw_spin_unlock_rcu_node(rtpcp); // irqs remain disabled. } raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags); + + if (rcu_task_cb_adjust) + pr_info("%s: Setting adjustable number of callback queues.\n", __func__); + pr_info("%s: Setting shift to %d and lim to %d.\n", __func__, data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim)); } -- cgit v1.2.3 From edff5e9a99e0ed9463999455b2604c3154eb7ab3 Mon Sep 17 00:00:00 2001 From: Zqiang Date: Thu, 23 Mar 2023 12:00:11 +0800 Subject: rcu-tasks: Clarify the cblist_init_generic() function's pr_info() output This commit uses rtp->name instead of __func__ and outputs the value of rcu_task_cb_adjust, thus reducing console-log output. Signed-off-by: Zqiang Signed-off-by: Paul E. 
McKenney --- kernel/rcu/tasks.h | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h index 65df1aaf0ce9..cf7b00af9474 100644 --- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -272,10 +272,8 @@ static void cblist_init_generic(struct rcu_tasks *rtp) } raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags); - if (rcu_task_cb_adjust) - pr_info("%s: Setting adjustable number of callback queues.\n", __func__); - - pr_info("%s: Setting shift to %d and lim to %d.\n", __func__, data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim)); + pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d.\n", rtp->name, + data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim), rcu_task_cb_adjust); } // IRQ-work handler that does deferred wakeup for call_rcu_tasks_generic(). -- cgit v1.2.3 From e1bd2334f165aa7bef7f9fa2b0bef97a85614963 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 11 May 2023 11:59:09 -0700 Subject: rcu: Add more RCU files to kernel-api.rst Recent changes and additions to RCU have not been reflected in Documentation/core-api/kernel-api.rst, which makes it harder to find the kernel-doc headers in recently added RCU files. Therefore, add those files. Signed-off-by: Paul E. McKenney Cc: Jonathan Corbet Cc: Luis Chamberlain Cc: Kees Cook Cc: Liam Beguin Cc: --- Documentation/core-api/kernel-api.rst | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst index 9b3f3e5f5a95..712e59ad32fa 100644 --- a/Documentation/core-api/kernel-api.rst +++ b/Documentation/core-api/kernel-api.rst @@ -412,3 +412,15 @@ Read-Copy Update (RCU) .. kernel-doc:: include/linux/rcu_sync.h .. kernel-doc:: kernel/rcu/sync.c + +.. kernel-doc:: kernel/rcu/tasks.h + +.. kernel-doc:: kernel/rcu/tree_stall.h + +.. kernel-doc:: include/linux/rcupdate_trace.h + +.. kernel-doc:: include/linux/rcupdate_wait.h + +.. kernel-doc:: include/linux/rcuref.h + +.. kernel-doc:: include/linux/rcutree.h -- cgit v1.2.3 From 7a3cc29136960c45eff362a7304dd4f6eaf34cdd Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 20 Mar 2023 18:37:51 +0100 Subject: rcu: Remove RCU_NONIDLE() Since there are now exactly _zero_ users of RCU_NONIDLE(), make it go away before someone else decides to (ab)use it. [ paulmck: Remove extraneous whitespace. ] Signed-off-by: Peter Zijlstra (Intel) Acked-by: Mark Rutland Acked-by: Frederic Weisbecker Signed-off-by: Paul E. McKenney --- .../RCU/Design/Requirements/Requirements.rst | 36 +--------------------- Documentation/RCU/whatisRCU.rst | 1 - include/linux/rcupdate.h | 25 --------------- 3 files changed, 1 insertion(+), 61 deletions(-) diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst index 49387d823619..f3b605285a87 100644 --- a/Documentation/RCU/Design/Requirements/Requirements.rst +++ b/Documentation/RCU/Design/Requirements/Requirements.rst @@ -2071,41 +2071,7 @@ call. Because RCU avoids interrupting idle CPUs, it is illegal to execute an RCU read-side critical section on an idle CPU. (Kernels built with -``CONFIG_PROVE_RCU=y`` will splat if you try it.) The RCU_NONIDLE() -macro and ``_rcuidle`` event tracing is provided to work around this -restriction. In addition, rcu_is_watching() may be used to test -whether or not it is currently legal to run RCU read-side critical -sections on this CPU. 
I learned of the need for diagnostics on the one -hand and RCU_NONIDLE() on the other while inspecting idle-loop code. -Steven Rostedt supplied ``_rcuidle`` event tracing, which is used quite -heavily in the idle loop. However, there are some restrictions on the -code placed within RCU_NONIDLE(): - -#. Blocking is prohibited. In practice, this is not a serious - restriction given that idle tasks are prohibited from blocking to - begin with. -#. Although nesting RCU_NONIDLE() is permitted, they cannot nest - indefinitely deeply. However, given that they can be nested on the - order of a million deep, even on 32-bit systems, this should not be a - serious restriction. This nesting limit would probably be reached - long after the compiler OOMed or the stack overflowed. -#. Any code path that enters RCU_NONIDLE() must sequence out of that - same RCU_NONIDLE(). For example, the following is grossly - illegal: - - :: - - 1 RCU_NONIDLE({ - 2 do_something(); - 3 goto bad_idea; /* BUG!!! */ - 4 do_something_else();}); - 5 bad_idea: - - - It is just as illegal to transfer control into the middle of - RCU_NONIDLE()'s argument. Yes, in theory, you could transfer in - as long as you also transferred out, but in practice you could also - expect to get sharply worded review comments. +``CONFIG_PROVE_RCU=y`` will splat if you try it.) It is similarly socially unacceptable to interrupt an ``nohz_full`` CPU running in userspace. RCU must therefore track ``nohz_full`` userspace diff --git a/Documentation/RCU/whatisRCU.rst b/Documentation/RCU/whatisRCU.rst index 8eddef28d3a1..e488c8e557a9 100644 --- a/Documentation/RCU/whatisRCU.rst +++ b/Documentation/RCU/whatisRCU.rst @@ -1117,7 +1117,6 @@ All: lockdep-checked RCU utility APIs:: RCU_LOCKDEP_WARN rcu_sleep_check - RCU_NONIDLE All: Unchecked RCU-protected pointer access:: diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index dcd2cf1e8326..aae31a3e28dd 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -156,31 +156,6 @@ static inline int rcu_nocb_cpu_deoffload(int cpu) { return 0; } static inline void rcu_nocb_flush_deferred_wakeup(void) { } #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */ -/** - * RCU_NONIDLE - Indicate idle-loop code that needs RCU readers - * @a: Code that RCU needs to pay attention to. - * - * RCU read-side critical sections are forbidden in the inner idle loop, - * that is, between the ct_idle_enter() and the ct_idle_exit() -- RCU - * will happily ignore any such read-side critical sections. However, - * things like powertop need tracepoints in the inner idle loop. - * - * This macro provides the way out: RCU_NONIDLE(do_something_with_RCU()) - * will tell RCU that it needs to pay attention, invoke its argument - * (in this example, calling the do_something_with_RCU() function), - * and then tell RCU to go back to ignoring this CPU. It is permissible - * to nest RCU_NONIDLE() wrappers, but not indefinitely (but the limit is - * on the order of a million or so, even on 32-bit systems). It is - * not legal to block within RCU_NONIDLE(), nor is it permissible to - * transfer control either into or out of RCU_NONIDLE()'s statement. - */ -#define RCU_NONIDLE(a) \ - do { \ - ct_irq_enter_irqson(); \ - do { a; } while (0); \ - ct_irq_exit_irqson(); \ - } while (0) - /* * Note a quasi-voluntary context switch for RCU-tasks's benefit. * This is a macro rather than an inline function to avoid #include hell. -- cgit v1.2.3 From fea1c1f0101783f24d00e065ecd3d6e90292f887 Mon Sep 17 00:00:00 2001 From: "Paul E. 
McKenney" Date: Tue, 21 Mar 2023 16:43:54 -0700 Subject: rcu: Check callback-invocation time limit for rcuc kthreads Currently, a callback-invocation time limit is enforced only for callbacks invoked from the softirq environment, the rationale being that when callbacks are instead invoked from rcuc and rcuoc kthreads, these callbacks cannot be holding up other softirq vectors. Which is in fact true. However, if an rcuc kthread spends too much time invoking callbacks, it can delay quiescent-state reports from its CPU, which can also be a problem. This commit therefore applies the callback-invocation time limit to callback invocation from the rcuc kthreads as well as from softirq. Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 28 +++++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index f52ff7241041..9a5c160186d1 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2046,6 +2046,13 @@ rcu_check_quiescent_state(struct rcu_data *rdp) rcu_report_qs_rdp(rdp); } +/* Return true if callback-invocation time limit exceeded. */ +static bool rcu_do_batch_check_time(long count, long tlimit) +{ + // Invoke local_clock() only once per 32 consecutive callbacks. + return unlikely(tlimit) && !likely(count & 31) && local_clock() >= tlimit; +} + /* * Invoke any RCU callbacks that have made it to the end of their grace * period. Throttle as specified by rdp->blimit. @@ -2082,7 +2089,8 @@ static void rcu_do_batch(struct rcu_data *rdp) div = READ_ONCE(rcu_divisor); div = div < 0 ? 7 : div > sizeof(long) * 8 - 2 ? sizeof(long) * 8 - 2 : div; bl = max(rdp->blimit, pending >> div); - if (in_serving_softirq() && unlikely(bl > 100)) { + if ((in_serving_softirq() || rdp->rcu_cpu_kthread_status == RCU_KTHREAD_RUNNING) && + unlikely(bl > 100)) { long rrn = READ_ONCE(rcu_resched_ns); rrn = rrn < NSEC_PER_MSEC ? NSEC_PER_MSEC : rrn > NSEC_PER_SEC ? NSEC_PER_SEC : rrn; @@ -2126,21 +2134,23 @@ static void rcu_do_batch(struct rcu_data *rdp) * Make sure we don't spend too much time here and deprive other * softirq vectors of CPU cycles. */ - if (unlikely(tlimit)) { - /* only call local_clock() every 32 callbacks */ - if (likely((count & 31) || local_clock() < tlimit)) - continue; - /* Exceeded the time limit, so leave. */ + if (rcu_do_batch_check_time(count, tlimit)) break; - } } else { - // In rcuoc context, so no worries about depriving - // other softirq vectors of CPU cycles. + // In rcuc/rcuoc context, so no worries about + // depriving other softirq vectors of CPU cycles. local_bh_enable(); lockdep_assert_irqs_enabled(); cond_resched_tasks_rcu_qs(); lockdep_assert_irqs_enabled(); local_bh_disable(); + // But rcuc kthreads can delay quiescent-state + // reporting, so check time limits for them. + if (rdp->rcu_cpu_kthread_status == RCU_KTHREAD_RUNNING && + rcu_do_batch_check_time(count, tlimit)) { + rdp->rcu_cpu_has_work = 1; + break; + } } } -- cgit v1.2.3 From f51164a808b5bf1d81fc37eb53ab1eae59c79f2d Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Fri, 31 Mar 2023 09:05:56 -0700 Subject: rcu: Employ jiffies-based backstop to callback time limit Currently, if there are more than 100 ready-to-invoke RCU callbacks queued on a given CPU, the rcu_do_batch() function sets a timeout for invocation of the series. This timeout defaulting to three milliseconds, and may be adjusted using the rcutree.rcu_resched_ns kernel boot parameter. 
This timeout is checked using local_clock(), but the overhead of this function combined with the common-case very small callback-invocation overhead means that local_clock() is checked every 32nd invocation. This works well except for longer-than average callbacks. For example, a series of 500-microsecond-duration callbacks means that local_clock() is checked only once every 16 milliseconds, which makes it difficult to enforce a three-millisecond timeout. This commit therefore adds a Kconfig option RCU_DOUBLE_CHECK_CB_TIME that enables backup timeout checking using the coarser grained but lighter weight jiffies. If the jiffies counter detects a timeout, then local_clock() is consulted even if this is not the 32nd callback. This prevents the aforementioned 16-millisecond latency blow. Reported-by: Domas Mituzas Signed-off-by: Paul E. McKenney --- kernel/rcu/Kconfig | 18 ++++++++++++++++++ kernel/rcu/tree.c | 28 ++++++++++++++++++++-------- 2 files changed, 38 insertions(+), 8 deletions(-) diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig index 9071182b1284..bdd7eadb33d8 100644 --- a/kernel/rcu/Kconfig +++ b/kernel/rcu/Kconfig @@ -314,4 +314,22 @@ config RCU_LAZY To save power, batch RCU callbacks and flush after delay, memory pressure, or callback list growing too big. +config RCU_DOUBLE_CHECK_CB_TIME + bool "RCU callback-batch backup time check" + depends on RCU_EXPERT + default n + help + Use this option to provide more precise enforcement of the + rcutree.rcu_resched_ns module parameter in situations where + a single RCU callback might run for hundreds of microseconds, + thus defeating the 32-callback batching used to amortize the + cost of the fine-grained but expensive local_clock() function. + + This option rounds rcutree.rcu_resched_ns up to the next + jiffy, and overrides the 32-callback batching if this limit + is exceeded. + + Say Y here if you need tighter callback-limit enforcement. + Say N here if you are unsure. + endmenu # "RCU Subsystem" diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 9a5c160186d1..e2dbea6cee4b 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2047,10 +2047,15 @@ rcu_check_quiescent_state(struct rcu_data *rdp) } /* Return true if callback-invocation time limit exceeded. */ -static bool rcu_do_batch_check_time(long count, long tlimit) +static bool rcu_do_batch_check_time(long count, long tlimit, + bool jlimit_check, unsigned long jlimit) { // Invoke local_clock() only once per 32 consecutive callbacks. - return unlikely(tlimit) && !likely(count & 31) && local_clock() >= tlimit; + return unlikely(tlimit) && + (!likely(count & 31) || + (IS_ENABLED(CONFIG_RCU_DOUBLE_CHECK_CB_TIME) && + jlimit_check && time_after(jiffies, jlimit))) && + local_clock() >= tlimit; } /* @@ -2059,13 +2064,17 @@ static bool rcu_do_batch_check_time(long count, long tlimit) */ static void rcu_do_batch(struct rcu_data *rdp) { + long bl; + long count = 0; int div; bool __maybe_unused empty; unsigned long flags; - struct rcu_head *rhp; + unsigned long jlimit; + bool jlimit_check = false; + long pending; struct rcu_cblist rcl = RCU_CBLIST_INITIALIZER(rcl); - long bl, count = 0; - long pending, tlimit = 0; + struct rcu_head *rhp; + long tlimit = 0; /* If no callbacks are ready, just return. */ if (!rcu_segcblist_ready_cbs(&rdp->cblist)) { @@ -2090,11 +2099,14 @@ static void rcu_do_batch(struct rcu_data *rdp) div = div < 0 ? 7 : div > sizeof(long) * 8 - 2 ? 
sizeof(long) * 8 - 2 : div; bl = max(rdp->blimit, pending >> div); if ((in_serving_softirq() || rdp->rcu_cpu_kthread_status == RCU_KTHREAD_RUNNING) && - unlikely(bl > 100)) { + (IS_ENABLED(CONFIG_RCU_DOUBLE_CHECK_CB_TIME) || unlikely(bl > 100))) { + const long npj = NSEC_PER_SEC / HZ; long rrn = READ_ONCE(rcu_resched_ns); rrn = rrn < NSEC_PER_MSEC ? NSEC_PER_MSEC : rrn > NSEC_PER_SEC ? NSEC_PER_SEC : rrn; tlimit = local_clock() + rrn; + jlimit = jiffies + (rrn + npj + 1) / npj; + jlimit_check = true; } trace_rcu_batch_start(rcu_state.name, rcu_segcblist_n_cbs(&rdp->cblist), bl); @@ -2134,7 +2146,7 @@ static void rcu_do_batch(struct rcu_data *rdp) * Make sure we don't spend too much time here and deprive other * softirq vectors of CPU cycles. */ - if (rcu_do_batch_check_time(count, tlimit)) + if (rcu_do_batch_check_time(count, tlimit, jlimit_check, jlimit)) break; } else { // In rcuc/rcuoc context, so no worries about @@ -2147,7 +2159,7 @@ static void rcu_do_batch(struct rcu_data *rdp) // But rcuc kthreads can delay quiescent-state // reporting, so check time limits for them. if (rdp->rcu_cpu_kthread_status == RCU_KTHREAD_RUNNING && - rcu_do_batch_check_time(count, tlimit)) { + rcu_do_batch_check_time(count, tlimit, jlimit_check, jlimit)) { rdp->rcu_cpu_has_work = 1; break; } -- cgit v1.2.3 From 9146eb25495ea8bfb5010192e61e3ed5805ce9ef Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Fri, 7 Apr 2023 16:05:38 -0700 Subject: rcu: Mark additional concurrent load from ->cpu_no_qs.b.exp The per-CPU rcu_data structure's ->cpu_no_qs.b.exp field is updated only on the instance corresponding to the current CPU, but can be read more widely. Unmarked accesses are OK from the corresponding CPU, but only if interrupts are disabled, given that interrupt handlers can and do modify this field. Unfortunately, although the load from rcu_preempt_deferred_qs() is always carried out from the corresponding CPU, interrupts are not necessarily disabled. This commit therefore upgrades this load to READ_ONCE. Similarly, the diagnostic access from synchronize_rcu_expedited_wait() might run with interrupts disabled and from some other CPU. This commit therefore marks this load with data_race(). Finally, the C-language access in rcu_preempt_ctxt_queue() is OK as is because interrupts are disabled and this load is always from the corresponding CPU. This commit adds a comment giving the rationale for this access being safe. This data race was reported by KCSAN. Not appropriate for backporting due to failure being unlikely. Signed-off-by: Paul E. McKenney --- kernel/rcu/tree_exp.h | 2 +- kernel/rcu/tree_plugin.h | 4 +++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index 3b7abb58157d..8239b39d945b 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -643,7 +643,7 @@ static void synchronize_rcu_expedited_wait(void) "O."[!!cpu_online(cpu)], "o."[!!(rdp->grpmask & rnp->expmaskinit)], "N."[!!(rdp->grpmask & rnp->expmaskinitnext)], - "D."[!!(rdp->cpu_no_qs.b.exp)]); + "D."[!!data_race(rdp->cpu_no_qs.b.exp)]); } } pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n", diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index 7b0fe741a088..41021080ad25 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -257,6 +257,8 @@ static void rcu_preempt_ctxt_queue(struct rcu_node *rnp, struct rcu_data *rdp) * GP should not be able to end until we report, so there should be * no need to check for a subsequent expedited GP. 
(Though we are * still in a quiescent state in any case.) + * + * Interrupts are disabled, so ->cpu_no_qs.b.exp cannot change. */ if (blkd_state & RCU_EXP_BLKD && rdp->cpu_no_qs.b.exp) rcu_report_exp_rdp(rdp); @@ -941,7 +943,7 @@ notrace void rcu_preempt_deferred_qs(struct task_struct *t) { struct rcu_data *rdp = this_cpu_ptr(&rcu_data); - if (rdp->cpu_no_qs.b.exp) + if (READ_ONCE(rdp->cpu_no_qs.b.exp)) rcu_report_exp_rdp(rdp); } -- cgit v1.2.3 From a24c1aab652ebacf9ea62470a166514174c96fe1 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Fri, 7 Apr 2023 16:47:34 -0700 Subject: rcu: Mark rcu_cpu_kthread() accesses to ->rcu_cpu_has_work The rcu_data structure's ->rcu_cpu_has_work field can be modified by any CPU attempting to wake up the rcuc kthread. Therefore, this commit marks accesses to this field from the rcu_cpu_kthread() function. This data race was reported by KCSAN. Not appropriate for backporting due to failure being unlikely. Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index e2dbea6cee4b..faa1c01d2917 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2481,12 +2481,12 @@ static void rcu_cpu_kthread(unsigned int cpu) *statusp = RCU_KTHREAD_RUNNING; local_irq_disable(); work = *workp; - *workp = 0; + WRITE_ONCE(*workp, 0); local_irq_enable(); if (work) rcu_core(); local_bh_enable(); - if (*workp == 0) { + if (!READ_ONCE(*workp)) { trace_rcu_utilization(TPS("End CPU kthread@rcu_wait")); *statusp = RCU_KTHREAD_WAITING; return; -- cgit v1.2.3 From 15d44dfa40305da1648de4bf001e91cc63148725 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 27 Apr 2023 10:50:47 -0700 Subject: rcu: Make rcu_cpu_starting() rely on interrupts being disabled Currently, rcu_cpu_starting() is written so that it might be invoked with interrupts enabled. However, it is always called when interrupts are disabled, either by rcu_init(), notify_cpu_starting(), or from a call point prior to the call to notify_cpu_starting(). But why bother requiring that interrupts be disabled? The purpose is to allow the rcu_data structure's ->beenonline flag to be set after all early processing has completed for the incoming CPU, thus allowing this flag to be used to determine when workqueues have been set up for the incoming CPU, while still allowing this flag to be used as a diagnostic within rcu_core(). This commit therefore makes rcu_cpu_starting() rely on interrupts being disabled. Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index faa1c01d2917..23685407dcf6 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -4390,15 +4390,16 @@ int rcutree_offline_cpu(unsigned int cpu) * Note that this function is special in that it is invoked directly * from the incoming CPU rather than from the cpuhp_step mechanism. * This is because this function must be invoked at a precise location. + * This incoming CPU must not have enabled interrupts yet. 
*/ void rcu_cpu_starting(unsigned int cpu) { - unsigned long flags; unsigned long mask; struct rcu_data *rdp; struct rcu_node *rnp; bool newcpu; + lockdep_assert_irqs_disabled(); rdp = per_cpu_ptr(&rcu_data, cpu); if (rdp->cpu_started) return; @@ -4406,7 +4407,6 @@ void rcu_cpu_starting(unsigned int cpu) rnp = rdp->mynode; mask = rdp->grpmask; - local_irq_save(flags); arch_spin_lock(&rcu_state.ofl_lock); rcu_dynticks_eqs_online(); raw_spin_lock(&rcu_state.barrier_lock); @@ -4425,17 +4425,16 @@ void rcu_cpu_starting(unsigned int cpu) /* An incoming CPU should never be blocking a grace period. */ if (WARN_ON_ONCE(rnp->qsmask & mask)) { /* RCU waiting on incoming CPU? */ /* rcu_report_qs_rnp() *really* wants some flags to restore */ - unsigned long flags2; + unsigned long flags; - local_irq_save(flags2); + local_irq_save(flags); rcu_disable_urgency_upon_qs(rdp); /* Report QS -after- changing ->qsmaskinitnext! */ - rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags2); + rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags); } else { raw_spin_unlock_rcu_node(rnp); } arch_spin_unlock(&rcu_state.ofl_lock); - local_irq_restore(flags); smp_mb(); /* Ensure RCU read-side usage follows above initialization. */ } -- cgit v1.2.3 From 401b0de3ae4fa49d1014c8941e26d9a25f37e7cf Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Wed, 26 Apr 2023 11:11:29 -0700 Subject: rcu-tasks: Stop rcu_tasks_invoke_cbs() from using never-onlined CPUs The rcu_tasks_invoke_cbs() function relies on queue_work_on() to silently fall back to WORK_CPU_UNBOUND when the specified CPU is offline. However, the queue_work_on() function's silent fallback mechanism relies on that CPU having been online at some time in the past. When queue_work_on() is passed a CPU that has never been online, workqueue lockups ensue, which can be bad for your kernel's general health and well-being. This commit therefore checks whether a given CPU has ever been online, and, if not, substitutes WORK_CPU_UNBOUND in the subsequent call to queue_work_on(). Why not simply omit the queue_work_on() call entirely? Because this function is flooding callback-invocation notifications to all CPUs, and must deal with possibilities that include a sparse cpu_possible_mask. This commit also moves the setting of the rcu_data structure's ->beenonline field to rcu_cpu_starting(), which executes on the incoming CPU before that CPU has ever enabled interrupts. This ensures that the required workqueues are present. In addition, because the incoming CPU has not yet enabled its interrupts, there cannot yet have been any softirq handlers running on this CPU, which means that the WARN_ON_ONCE(!rdp->beenonline) within the RCU_SOFTIRQ handler cannot have triggered yet. Fixes: d363f833c6d88 ("rcu-tasks: Use workqueues for multiple rcu_tasks_invoke_cbs() invocations") Reported-by: Tejun Heo Signed-off-by: Paul E.
McKenney --- kernel/rcu/rcu.h | 6 ++++++ kernel/rcu/tasks.h | 7 +++++-- kernel/rcu/tree.c | 12 +++++++++++- 3 files changed, 22 insertions(+), 3 deletions(-) diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h index 4a1b9622598b..98c1544cf572 100644 --- a/kernel/rcu/rcu.h +++ b/kernel/rcu/rcu.h @@ -642,4 +642,10 @@ void show_rcu_tasks_trace_gp_kthread(void); static inline void show_rcu_tasks_trace_gp_kthread(void) {} #endif +#ifdef CONFIG_TINY_RCU +static inline bool rcu_cpu_beenfullyonline(int cpu) { return true; } +#else +bool rcu_cpu_beenfullyonline(int cpu); +#endif + #endif /* __LINUX_RCU_H */ diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h index 5f4fc8184dd0..8f08c087142b 100644 --- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -463,6 +463,7 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu { int cpu; int cpunext; + int cpuwq; unsigned long flags; int len; struct rcu_head *rhp; @@ -473,11 +474,13 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu cpunext = cpu * 2 + 1; if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) { rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext); - queue_work_on(cpunext, system_wq, &rtpcp_next->rtp_work); + cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND; + queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work); cpunext++; if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) { rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext); - queue_work_on(cpunext, system_wq, &rtpcp_next->rtp_work); + cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND; + queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work); } } diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 23685407dcf6..54963f8c244c 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -4305,7 +4305,6 @@ int rcutree_prepare_cpu(unsigned int cpu) */ rnp = rdp->mynode; raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */ - rdp->beenonline = true; /* We have now been online. */ rdp->gp_seq = READ_ONCE(rnp->gp_seq); rdp->gp_seq_needed = rdp->gp_seq; rdp->cpu_no_qs.b.norm = true; @@ -4332,6 +4331,16 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoing) rcu_boost_kthread_setaffinity(rdp->mynode, outgoing); } +/* + * Has the specified (known valid) CPU ever been fully online? + */ +bool rcu_cpu_beenfullyonline(int cpu) +{ + struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); + + return smp_load_acquire(&rdp->beenonline); +} + /* * Near the end of the CPU-online process. Pretty much all services * enabled, and the CPU is now very much alive. @@ -4435,6 +4444,7 @@ void rcu_cpu_starting(unsigned int cpu) raw_spin_unlock_rcu_node(rnp); } arch_spin_unlock(&rcu_state.ofl_lock); + smp_store_release(&rdp->beenonline, true); smp_mb(); /* Ensure RCU read-side usage follows above initialization. */ } -- cgit v1.2.3 From fbde57d2d2995375305917b3c944bc861beb84d4 Mon Sep 17 00:00:00 2001 From: Frederic Weisbecker Date: Wed, 29 Mar 2023 18:02:03 +0200 Subject: rcu/nocb: Make shrinker iterate only over NOCB CPUs Callbacks can only be queued as lazy on NOCB CPUs, therefore iterating over the NOCB mask is enough for both counting and scanning. Just lock the mostly uncontended barrier mutex on counting as well in order to keep rcu_nocb_mask stable. Signed-off-by: Frederic Weisbecker Signed-off-by: Paul E. 
McKenney --- kernel/rcu/tree_nocb.h | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h index dfa9c10d6727..43229d2b0c44 100644 --- a/kernel/rcu/tree_nocb.h +++ b/kernel/rcu/tree_nocb.h @@ -1319,13 +1319,22 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc) int cpu; unsigned long count = 0; + if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask))) + return 0; + + /* Protect rcu_nocb_mask against concurrent (de-)offloading. */ + if (!mutex_trylock(&rcu_state.barrier_mutex)) + return 0; + /* Snapshot count of all CPUs */ - for_each_possible_cpu(cpu) { + for_each_cpu(cpu, rcu_nocb_mask) { struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); count += READ_ONCE(rdp->lazy_len); } + mutex_unlock(&rcu_state.barrier_mutex); + return count ? count : SHRINK_EMPTY; } @@ -1336,6 +1345,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) unsigned long flags; unsigned long count = 0; + if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask))) + return 0; /* * Protect against concurrent (de-)offloading. Otherwise nocb locking * may be ignored or imbalanced. @@ -1351,11 +1362,11 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) } /* Snapshot count of all CPUs */ - for_each_possible_cpu(cpu) { + for_each_cpu(cpu, rcu_nocb_mask) { struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); int _count; - if (!rcu_rdp_is_offloaded(rdp)) + if (WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp))) continue; if (!READ_ONCE(rdp->lazy_len)) -- cgit v1.2.3 From f8619c300f49c5831d344d35df93d3af447efc97 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Mon, 6 Mar 2023 20:48:13 -0800 Subject: locktorture: Add long_hold to adjust lock-hold delays This commit adds a long_hold module parameter to allow testing diagnostics for excessive lock-hold times. Also adjust torture_param() invocations for longer line length while in the area. Signed-off-by: Paul E. McKenney Reviewed-by: Joel Fernandes (Google) --- kernel/locking/locktorture.c | 51 +++++++++++++++++++------------------------- 1 file changed, 22 insertions(+), 29 deletions(-) diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c index 153ddc4c47ef..949d3deae506 100644 --- a/kernel/locking/locktorture.c +++ b/kernel/locking/locktorture.c @@ -33,24 +33,19 @@ MODULE_LICENSE("GPL"); MODULE_AUTHOR("Paul E. 
McKenney "); -torture_param(int, nwriters_stress, -1, - "Number of write-locking stress-test threads"); -torture_param(int, nreaders_stress, -1, - "Number of read-locking stress-test threads"); +torture_param(int, nwriters_stress, -1, "Number of write-locking stress-test threads"); +torture_param(int, nreaders_stress, -1, "Number of read-locking stress-test threads"); +torture_param(int, long_hold, 100, "Do occasional long hold of lock (ms), 0=disable"); torture_param(int, onoff_holdoff, 0, "Time after boot before CPU hotplugs (s)"); -torture_param(int, onoff_interval, 0, - "Time between CPU hotplugs (s), 0=disable"); -torture_param(int, shuffle_interval, 3, - "Number of jiffies between shuffles, 0=disable"); +torture_param(int, onoff_interval, 0, "Time between CPU hotplugs (s), 0=disable"); +torture_param(int, shuffle_interval, 3, "Number of jiffies between shuffles, 0=disable"); torture_param(int, shutdown_secs, 0, "Shutdown time (j), <= zero to disable."); -torture_param(int, stat_interval, 60, - "Number of seconds between stats printk()s"); +torture_param(int, stat_interval, 60, "Number of seconds between stats printk()s"); torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable"); torture_param(int, rt_boost, 2, - "Do periodic rt-boost. 0=Disable, 1=Only for rt_mutex, 2=For all lock types."); + "Do periodic rt-boost. 0=Disable, 1=Only for rt_mutex, 2=For all lock types."); torture_param(int, rt_boost_factor, 50, "A factor determining how often rt-boost happens."); -torture_param(int, verbose, 1, - "Enable verbose debugging printk()s"); +torture_param(int, verbose, 1, "Enable verbose debugging printk()s"); torture_param(int, nested_locks, 0, "Number of nested locks (max = 8)"); /* Going much higher trips "BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!" errors */ #define MAX_NESTED_LOCKS 8 @@ -120,7 +115,7 @@ static int torture_lock_busted_write_lock(int tid __maybe_unused) static void torture_lock_busted_write_delay(struct torture_random_state *trsp) { - const unsigned long longdelay_ms = 100; + const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX; /* We want a long delay occasionally to force massive contention. */ if (!(torture_random(trsp) % @@ -198,16 +193,18 @@ __acquires(torture_spinlock) static void torture_spin_lock_write_delay(struct torture_random_state *trsp) { const unsigned long shortdelay_us = 2; - const unsigned long longdelay_ms = 100; + const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX; + unsigned long j; /* We want a short delay mostly to emulate likely code, and * we want a long delay occasionally to force massive contention. */ - if (!(torture_random(trsp) % - (cxt.nrealwriters_stress * 2000 * longdelay_ms))) + if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * longdelay_ms))) { + j = jiffies; mdelay(longdelay_ms); - if (!(torture_random(trsp) % - (cxt.nrealwriters_stress * 2 * shortdelay_us))) + pr_alert("%s: delay = %lu jiffies.\n", __func__, jiffies - j); + } + if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 200 * shortdelay_us))) udelay(shortdelay_us); if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000))) torture_preempt_schedule(); /* Allow test to be preempted. */ @@ -322,7 +319,7 @@ __acquires(torture_rwlock) static void torture_rwlock_write_delay(struct torture_random_state *trsp) { const unsigned long shortdelay_us = 2; - const unsigned long longdelay_ms = 100; + const unsigned long longdelay_ms = long_hold ? 
long_hold : ULONG_MAX; /* We want a short delay mostly to emulate likely code, and * we want a long delay occasionally to force massive contention. @@ -455,14 +452,12 @@ __acquires(torture_mutex) static void torture_mutex_delay(struct torture_random_state *trsp) { - const unsigned long longdelay_ms = 100; + const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX; /* We want a long delay occasionally to force massive contention. */ if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * longdelay_ms))) mdelay(longdelay_ms * 5); - else - mdelay(longdelay_ms / 5); if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000))) torture_preempt_schedule(); /* Allow test to be preempted. */ } @@ -630,7 +625,7 @@ __acquires(torture_rtmutex) static void torture_rtmutex_delay(struct torture_random_state *trsp) { const unsigned long shortdelay_us = 2; - const unsigned long longdelay_ms = 100; + const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX; /* * We want a short delay mostly to emulate likely code, and @@ -640,7 +635,7 @@ static void torture_rtmutex_delay(struct torture_random_state *trsp) (cxt.nrealwriters_stress * 2000 * longdelay_ms))) mdelay(longdelay_ms); if (!(torture_random(trsp) % - (cxt.nrealwriters_stress * 2 * shortdelay_us))) + (cxt.nrealwriters_stress * 200 * shortdelay_us))) udelay(shortdelay_us); if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000))) torture_preempt_schedule(); /* Allow test to be preempted. */ @@ -695,14 +690,12 @@ __acquires(torture_rwsem) static void torture_rwsem_write_delay(struct torture_random_state *trsp) { - const unsigned long longdelay_ms = 100; + const unsigned long longdelay_ms = long_hold ? long_hold : ULONG_MAX; /* We want a long delay occasionally to force massive contention. */ if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 2000 * longdelay_ms))) mdelay(longdelay_ms * 10); - else - mdelay(longdelay_ms / 10); if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000))) torture_preempt_schedule(); /* Allow test to be preempted. */ } @@ -848,8 +841,8 @@ static int lock_torture_writer(void *arg) lwsp->n_lock_acquired++; } - cxt.cur_ops->write_delay(&rand); if (!skip_main_lock) { + cxt.cur_ops->write_delay(&rand); lock_is_write_held = false; WRITE_ONCE(last_lock_release, jiffies); cxt.cur_ops->writeunlock(tid); -- cgit v1.2.3 From b409afe0268faeb77267f028ea85f2d93438fced Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Tue, 21 Mar 2023 16:40:08 -0700 Subject: rcutorture: Correct name of use_softirq module parameter The BUSTED-BOOST and TREE03 scenarios specify a mythical tree.use_softirq module parameter, which means a failure to get full test coverage. This commit therefore corrects the name to rcutree.use_softirq. Fixes: e2b949d54392 ("rcutorture: Make TREE03 use real-time tree.use_softirq setting") Signed-off-by: Paul E. 
McKenney Reviewed-by: Joel Fernandes (Google) --- tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot | 2 +- tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot index f57720c52c0f..84f6bb98ce99 100644 --- a/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot +++ b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot @@ -5,4 +5,4 @@ rcutree.gp_init_delay=3 rcutree.gp_cleanup_delay=3 rcutree.kthread_prio=2 threadirqs -tree.use_softirq=0 +rcutree.use_softirq=0 diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot index 64f864f1f361..8e50bfd4b710 100644 --- a/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot +++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot @@ -4,4 +4,4 @@ rcutree.gp_init_delay=3 rcutree.gp_cleanup_delay=3 rcutree.kthread_prio=2 threadirqs -tree.use_softirq=0 +rcutree.use_softirq=0 -- cgit v1.2.3 From bf5ddd736509a7d9077c0b6793e6f0852214dbea Mon Sep 17 00:00:00 2001 From: Qiuxu Zhuo Date: Wed, 22 Mar 2023 19:42:40 +0800 Subject: rcu/rcuscale: Move rcu_scale_*() after kfree_scale_cleanup() This code-movement-only commit moves the rcu_scale_cleanup() and rcu_scale_shutdown() functions to follow kfree_scale_cleanup(). This code movement is in preparation for a bug-fix patch that invokes kfree_scale_cleanup() from rcu_scale_cleanup(). Signed-off-by: Qiuxu Zhuo Signed-off-by: Paul E. McKenney Reviewed-by: Joel Fernandes (Google) --- kernel/rcu/rcuscale.c | 194 +++++++++++++++++++++++++------------------------- 1 file changed, 97 insertions(+), 97 deletions(-) diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c index e82ec9f9a5d8..7e8965b0827a 100644 --- a/kernel/rcu/rcuscale.c +++ b/kernel/rcu/rcuscale.c @@ -522,89 +522,6 @@ rcu_scale_print_module_parms(struct rcu_scale_ops *cur_ops, const char *tag) scale_type, tag, nrealreaders, nrealwriters, verbose, shutdown); } -static void -rcu_scale_cleanup(void) -{ - int i; - int j; - int ngps = 0; - u64 *wdp; - u64 *wdpp; - - /* - * Would like warning at start, but everything is expedited - * during the mid-boot phase, so have to wait till the end.
- */ - if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp) - SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!"); - if (rcu_gp_is_normal() && gp_exp) - SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!"); - if (gp_exp && gp_async) - SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!"); - - if (torture_cleanup_begin()) - return; - if (!cur_ops) { - torture_cleanup_end(); - return; - } - - if (reader_tasks) { - for (i = 0; i < nrealreaders; i++) - torture_stop_kthread(rcu_scale_reader, - reader_tasks[i]); - kfree(reader_tasks); - } - - if (writer_tasks) { - for (i = 0; i < nrealwriters; i++) { - torture_stop_kthread(rcu_scale_writer, - writer_tasks[i]); - if (!writer_n_durations) - continue; - j = writer_n_durations[i]; - pr_alert("%s%s writer %d gps: %d\n", - scale_type, SCALE_FLAG, i, j); - ngps += j; - } - pr_alert("%s%s start: %llu end: %llu duration: %llu gps: %d batches: %ld\n", - scale_type, SCALE_FLAG, - t_rcu_scale_writer_started, t_rcu_scale_writer_finished, - t_rcu_scale_writer_finished - - t_rcu_scale_writer_started, - ngps, - rcuscale_seq_diff(b_rcu_gp_test_finished, - b_rcu_gp_test_started)); - for (i = 0; i < nrealwriters; i++) { - if (!writer_durations) - break; - if (!writer_n_durations) - continue; - wdpp = writer_durations[i]; - if (!wdpp) - continue; - for (j = 0; j < writer_n_durations[i]; j++) { - wdp = &wdpp[j]; - pr_alert("%s%s %4d writer-duration: %5d %llu\n", - scale_type, SCALE_FLAG, - i, j, *wdp); - if (j % 100 == 0) - schedule_timeout_uninterruptible(1); - } - kfree(writer_durations[i]); - } - kfree(writer_tasks); - kfree(writer_durations); - kfree(writer_n_durations); - } - - /* Do torture-type-specific cleanup operations. */ - if (cur_ops->cleanup != NULL) - cur_ops->cleanup(); - - torture_cleanup_end(); -} - /* * Return the number if non-negative. If -1, the number of CPUs. * If less than -1, that much less than the number of CPUs, but @@ -624,20 +541,6 @@ static int compute_real(int n) return nr; } -/* - * RCU scalability shutdown kthread. Just waits to be awakened, then shuts - * down system. - */ -static int -rcu_scale_shutdown(void *arg) -{ - wait_event_idle(shutdown_wq, atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters); - smp_mb(); /* Wake before output. */ - rcu_scale_cleanup(); - kernel_power_off(); - return -EINVAL; -} - /* * kfree_rcu() scalability tests: Start a kfree_rcu() loop on all CPUs for number * of iterations and measure total time and number of GP for all iterations to complete. @@ -874,6 +777,103 @@ unwind: return firsterr; } +static void +rcu_scale_cleanup(void) +{ + int i; + int j; + int ngps = 0; + u64 *wdp; + u64 *wdpp; + + /* + * Would like warning at start, but everything is expedited + * during the mid-boot phase, so have to wait till the end. 
+ */ + if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp) + SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!"); + if (rcu_gp_is_normal() && gp_exp) + SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!"); + if (gp_exp && gp_async) + SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!"); + + if (torture_cleanup_begin()) + return; + if (!cur_ops) { + torture_cleanup_end(); + return; + } + + if (reader_tasks) { + for (i = 0; i < nrealreaders; i++) + torture_stop_kthread(rcu_scale_reader, + reader_tasks[i]); + kfree(reader_tasks); + } + + if (writer_tasks) { + for (i = 0; i < nrealwriters; i++) { + torture_stop_kthread(rcu_scale_writer, + writer_tasks[i]); + if (!writer_n_durations) + continue; + j = writer_n_durations[i]; + pr_alert("%s%s writer %d gps: %d\n", + scale_type, SCALE_FLAG, i, j); + ngps += j; + } + pr_alert("%s%s start: %llu end: %llu duration: %llu gps: %d batches: %ld\n", + scale_type, SCALE_FLAG, + t_rcu_scale_writer_started, t_rcu_scale_writer_finished, + t_rcu_scale_writer_finished - + t_rcu_scale_writer_started, + ngps, + rcuscale_seq_diff(b_rcu_gp_test_finished, + b_rcu_gp_test_started)); + for (i = 0; i < nrealwriters; i++) { + if (!writer_durations) + break; + if (!writer_n_durations) + continue; + wdpp = writer_durations[i]; + if (!wdpp) + continue; + for (j = 0; j < writer_n_durations[i]; j++) { + wdp = &wdpp[j]; + pr_alert("%s%s %4d writer-duration: %5d %llu\n", + scale_type, SCALE_FLAG, + i, j, *wdp); + if (j % 100 == 0) + schedule_timeout_uninterruptible(1); + } + kfree(writer_durations[i]); + } + kfree(writer_tasks); + kfree(writer_durations); + kfree(writer_n_durations); + } + + /* Do torture-type-specific cleanup operations. */ + if (cur_ops->cleanup != NULL) + cur_ops->cleanup(); + + torture_cleanup_end(); +} + +/* + * RCU scalability shutdown kthread. Just waits to be awakened, then shuts + * down system. + */ +static int +rcu_scale_shutdown(void *arg) +{ + wait_event_idle(shutdown_wq, atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters); + smp_mb(); /* Wake before output. */ + rcu_scale_cleanup(); + kernel_power_off(); + return -EINVAL; +} + static int __init rcu_scale_init(void) { -- cgit v1.2.3 From 23fc8df26dead16687ae6eb47b0561a4a832e2f6 Mon Sep 17 00:00:00 2001 From: Qiuxu Zhuo Date: Wed, 22 Mar 2023 19:42:41 +0800 Subject: rcu/rcuscale: Stop kfree_scale_thread thread(s) after unloading rcuscale Running the 'kfree_rcu_test' test case [1] results in a splat [2]. The root cause is the kfree_scale_thread thread(s) continue running after unloading the rcuscale module. This commit fixes that issue by invoking kfree_scale_cleanup() from rcu_scale_cleanup() when removing the rcuscale module. [1] modprobe rcuscale kfree_rcu_test=1 // After some time rmmod rcuscale rmmod torture [2] BUG: unable to handle page fault for address: ffffffffc0601a87 #PF: supervisor instruction fetch in kernel mode #PF: error_code(0x0010) - not-present page PGD 11de4f067 P4D 11de4f067 PUD 11de51067 PMD 112f4d067 PTE 0 Oops: 0010 [#1] PREEMPT SMP NOPTI CPU: 1 PID: 1798 Comm: kfree_scale_thr Not tainted 6.3.0-rc1-rcu+ #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 RIP: 0010:0xffffffffc0601a87 Code: Unable to access opcode bytes at 0xffffffffc0601a5d.
RSP: 0018:ffffb25bc2e57e18 EFLAGS: 00010297 RAX: 0000000000000000 RBX: ffffffffc061f0b6 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffffffff962fd0de RDI: ffffffff962fd0de RBP: ffffb25bc2e57ea8 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000001 R11: 0000000000000001 R12: 0000000000000000 R13: 0000000000000000 R14: 000000000000000a R15: 00000000001c1dbe FS: 0000000000000000(0000) GS:ffff921fa2200000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: ffffffffc0601a5d CR3: 000000011de4c006 CR4: 0000000000370ee0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? kvfree_call_rcu+0xf0/0x3a0 ? kthread+0xf3/0x120 ? kthread_complete_and_exit+0x20/0x20 ? ret_from_fork+0x1f/0x30 Modules linked in: rfkill sunrpc ... [last unloaded: torture] CR2: ffffffffc0601a87 ---[ end trace 0000000000000000 ]--- Fixes: e6e78b004fa7 ("rcuperf: Add kfree_rcu() performance Tests") Reviewed-by: Davidlohr Bueso Reviewed-by: Joel Fernandes (Google) Signed-off-by: Qiuxu Zhuo Signed-off-by: Paul E. McKenney --- kernel/rcu/rcuscale.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c index 7e8965b0827a..d1221731c7cf 100644 --- a/kernel/rcu/rcuscale.c +++ b/kernel/rcu/rcuscale.c @@ -797,6 +797,11 @@ rcu_scale_cleanup(void) if (gp_exp && gp_async) SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!"); + if (kfree_rcu_test) { + kfree_scale_cleanup(); + return; + } + if (torture_cleanup_begin()) return; if (!cur_ops) { -- cgit v1.2.3 From 9e5d61c013a2c8b18f1205b6cd488a24ebce2d39 Mon Sep 17 00:00:00 2001 From: Zqiang Date: Tue, 21 Mar 2023 10:12:34 +0800 Subject: doc/rcutorture: Add description of rcutorture.stall_cpu_block If you build a kernel with CONFIG_PREEMPTION=n and CONFIG_PREEMPT_COUNT=y, then run the rcutorture tests specifying stalls as follows: runqemu kvm slirp nographic qemuparams="-m 1024 -smp 4" \ bootparams="console=ttyS0 rcutorture.stall_cpu=30 \ rcutorture.stall_no_softlockup=1 rcutorture.stall_cpu_block=1" -d The tests will produce the following splat: [ 10.841071] rcu-torture: rcu_torture_stall begin CPU stall [ 10.841073] rcu_torture_stall start on CPU 3. [ 10.841077] BUG: scheduling while atomic: rcu_torture_sta/66/0x0000000 .... [ 10.841108] Call Trace: [ 10.841110] [ 10.841112] dump_stack_lvl+0x64/0xb0 [ 10.841118] dump_stack+0x10/0x20 [ 10.841121] __schedule_bug+0x8b/0xb0 [ 10.841126] __schedule+0x2172/0x2940 [ 10.841157] schedule+0x9b/0x150 [ 10.841160] schedule_timeout+0x2e8/0x4f0 [ 10.841192] schedule_timeout_uninterruptible+0x47/0x50 [ 10.841195] rcu_torture_stall+0x2e8/0x300 [ 10.841199] kthread+0x175/0x1a0 [ 10.841206] ret_from_fork+0x2c/0x50 This is because the rcutorture.stall_cpu_block=1 module parameter causes rcu_torture_stall() to invoke schedule_timeout_uninterruptible() within an RCU read-side critical section. This in turn results in a quiescent state (which prevents the stall) and a sleep in an atomic context (which produces the above splat). Although this code is operating as designed, the design has proven to be counterintuitive to many. This commit therefore updates the description in kernel-parameters.txt accordingly. [ paulmck: Apply Joel Fernandes feedback. ] Signed-off-by: Zqiang Signed-off-by: Paul E. 
McKenney --- Documentation/admin-guide/kernel-parameters.txt | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 9e5bab29685f..6c8f630f4a91 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -5087,8 +5087,17 @@ rcutorture.stall_cpu_block= [KNL] Sleep while stalling if set. This will result - in warnings from preemptible RCU in addition - to any other stall-related activity. + in warnings from preemptible RCU in addition to + any other stall-related activity. Note that + in kernels built with CONFIG_PREEMPTION=n and + CONFIG_PREEMPT_COUNT=y, this parameter will + cause the CPU to pass through a quiescent state. + Given CONFIG_PREEMPTION=n, this will suppress + RCU CPU stall warnings, but will instead result + in scheduling-while-atomic splats. + + Use of this module parameter results in splats. + rcutorture.stall_cpu_holdoff= [KNL] Time to wait (s) after boot before inducing stall. -- cgit v1.2.3 From ce2544b2d05ee84cb9be1e05bf3e1a98c72b15dc Mon Sep 17 00:00:00 2001 From: Zhouyi Zhou Date: Sun, 26 Mar 2023 08:24:34 +0800 Subject: torture: Remove duplicated argument -enable-kvm for ppc64 The qemu argument -enable-kvm is duplicated because the qemu_args bash variable in kvm-test-1-run.sh already provides it. This commit therefore removes the ppc64-specific copy in functions.sh. Signed-off-by: Zhouyi Zhou Signed-off-by: Paul E. McKenney Reviewed-by: Joel Fernandes (Google) --- tools/testing/selftests/rcutorture/bin/functions.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/rcutorture/bin/functions.sh b/tools/testing/selftests/rcutorture/bin/functions.sh index b52d5069563c..48b9147e8c91 100644 --- a/tools/testing/selftests/rcutorture/bin/functions.sh +++ b/tools/testing/selftests/rcutorture/bin/functions.sh @@ -250,7 +250,7 @@ identify_qemu_args () { echo -machine virt,gic-version=host -cpu host ;; qemu-system-ppc64) - echo -enable-kvm -M pseries -nodefaults + echo -M pseries -nodefaults echo -device spapr-vscsi if test -n "$TORTURE_QEMU_INTERACTIVE" -a -n "$TORTURE_QEMU_MAC" then -- cgit v1.2.3
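For context on the -enable-kvm commit above, the following is a minimal, hypothetical shell sketch of why dropping the per-architecture copy is safe: the harness-side qemu_args variable already carries -enable-kvm, so the per-architecture helper only needs to emit machine-specific options. The helper name identify_qemu_args_sketch and the hard-coded arguments are illustrative assumptions, not code taken from functions.sh or kvm-test-1-run.sh.

	#!/bin/sh
	# Hedged sketch (not the real rcutorture scripts): the harness supplies
	# -enable-kvm once, so the per-architecture helper emits only
	# machine-specific options for ppc64.
	identify_qemu_args_sketch () {
		case "$1" in
		qemu-system-ppc64)
			echo -M pseries -nodefaults -device spapr-vscsi
			;;
		esac
	}
	qemu_args="-enable-kvm -nographic"	# supplied once by the test harness
	qemu_args="$qemu_args $(identify_qemu_args_sketch qemu-system-ppc64)"
	echo qemu-system-ppc64 $qemu_args	# -enable-kvm now appears only once

Running the sketch prints a single command line with one -enable-kvm, mirroring the intent of the fix: keep KVM enablement in one place (the harness) and leave machine selection to the per-architecture helper.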