From 70a27d6d1b19392a23bb4a41de7788fbc539f18d Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@kernel.org>
Date: Fri, 8 Mar 2024 12:18:07 +0100
Subject: sched/balancing: Rename run_rebalance_domains() => sched_balance_softirq()

run_rebalance_domains() is a misnomer, as it doesn't only run
rebalance_domains(), but since the introduction of the NOHZ code it also
runs nohz_idle_balance().

Rename it to sched_balance_softirq(), reflecting its more generic purpose
and that it's a softirq handler.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Valentin Schneider
Reviewed-by: Shrikanth Hegde
Link: https://lore.kernel.org/r/20240308111819.1101550-2-mingo@kernel.org
---
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'Documentation/translations')

diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index e814d4c01141..fbc326668e37 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -36,7 +36,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 
 在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过scheduler_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
-的工作由run_rebalance_domains()->rebalance_domains()完成,在软中断上下文中执行
+的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
 
 后一个函数有两个入参:当前CPU的运行队列、它在scheduler_tick()调用时是否空闲。函数会从
--
cgit v1.2.3

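[Editor's note -- not part of the patch above, whose cgit view is limited
to Documentation/translations: a minimal sketch of the softirq wiring
behind the new name, condensed from kernel/sched/fair.c.]

	/* Softirq handler: runs NOHZ idle balancing and the periodic
	 * domain rebalance once SCHED_SOFTIRQ has been raised. */
	static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
	{
		/* ... see the later patches in this series ... */
	}

	__init void init_sched_fair_class(void)
	{
		/* Register the handler for the SCHED_SOFTIRQ softirq. */
		open_softirq(SCHED_SOFTIRQ, sched_balance_softirq);
	}
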
From 86dd6c04ef9f213e14d60c9f64bce1cc019f816e Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@kernel.org>
Date: Fri, 8 Mar 2024 12:18:08 +0100
Subject: sched/balancing: Rename scheduler_tick() => sched_tick()

- Standardize on prefixing scheduler-internal functions defined in
  <kernel/sched/> with sched_*() prefix. scheduler_tick() was the only
  function using the scheduler_ prefix. Harmonize it.

- The other reason to rename it is the NOHZ scheduler tick handling
  functions are already named sched_tick_*(). Make the 'git grep
  sched_tick' more meaningful.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Valentin Schneider
Reviewed-by: Shrikanth Hegde
Link: https://lore.kernel.org/r/20240308111819.1101550-3-mingo@kernel.org
---
 Documentation/scheduler/sched-domains.rst                            | 4 ++--
 Documentation/translations/zh_CN/scheduler/sched-domains.rst         | 4 ++--
 include/linux/sched.h                                                | 2 +-
 kernel/sched/core.c                                                  | 4 ++--
 kernel/sched/loadavg.c                                               | 2 +-
 kernel/time/timer.c                                                  | 2 +-
 kernel/workqueue.c                                                   | 2 +-
 tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc | 2 +-
 8 files changed, 11 insertions(+), 11 deletions(-)

(limited to 'Documentation/translations')

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 6577b068f921..541d6c617971 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -32,13 +32,13 @@ load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
 In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
-through scheduler_tick(). It raises a softirq after the next regularly scheduled
+through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
 in softirq context (SCHED_SOFTIRQ).
 
 The latter function takes two arguments: the runqueue of current CPU and whether
-the CPU was idle at the time the scheduler_tick() happened and iterates over all
+the CPU was idle at the time the sched_tick() happened and iterates over all
 sched domains our CPU is on, starting from its base domain and going up the ->parent
 chain. While doing that, it checks to see if the current domain has exhausted its
 rebalance interval. If so, it runs load_balance() on that domain. It then checks
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index fbc326668e37..fa0c0bcc6ba5 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -34,12 +34,12 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 调度域中的负载均衡发生在调度组中。也就是说,每个组被视为一个实体。组的负载被定义为它
 管辖的每个CPU的负载之和。仅当组的负载不均衡后,任务才在组之间发生迁移。
 
-在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过scheduler_tick()
+在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
 的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
 
-后一个函数有两个入参:当前CPU的运行队列、它在scheduler_tick()调用时是否空闲。函数会从
+后一个函数有两个入参:当前CPU的运行队列、它在sched_tick()调用时是否空闲。函数会从
 当前CPU所在的基调度域开始迭代执行,并沿着parent指针链向上进入更高层级的调度域。在迭代
 过程中,函数会检查当前调度域是否已经耗尽了再平衡的时间间隔,如果是,它在该调度域运行
 load_balance()。接下来它检查父调度域(如果存在),再后来父调度域的父调度域,以此类推。
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 17cb0761ff65..7eb7f31af796 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -301,7 +301,7 @@ enum {
 	TASK_COMM_LEN = 16,
 };
 
-extern void scheduler_tick(void);
+extern void sched_tick(void);
 
 #define MAX_SCHEDULE_TIMEOUT	LONG_MAX
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d44efa0d0611..71b7a08a6502 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5662,7 +5662,7 @@ static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
  * This function gets called by the timer code, with HZ frequency.
  * We call it with interrupts disabled.
  */
-void scheduler_tick(void)
+void sched_tick(void)
 {
 	int cpu = smp_processor_id();
 	struct rq *rq = cpu_rq(cpu);
@@ -6585,7 +6585,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
  *      paths. For example, see arch/x86/entry_64.S.
  *
  *      To drive preemption between tasks, the scheduler sets the flag in timer
- *      interrupt handler scheduler_tick().
+ *      interrupt handler sched_tick().
  *
  *   3. Wakeups don't really cause entry into schedule(). They add a
  *      task to the run-queue and that's it.
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index 52c8f8226b0d..ca9da66cc894 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -379,7 +379,7 @@ void calc_global_load(void)
 }
 
 /*
- * Called from scheduler_tick() to periodically update this CPU's
+ * Called from sched_tick() to periodically update this CPU's
  * active count.
  */
 void calc_global_load_tick(struct rq *this_rq)
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index e69e75d3858c..ff49ddcc9800 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -2478,7 +2478,7 @@ void update_process_times(int user_tick)
 	if (in_irq())
 		irq_work_tick();
 #endif
-	scheduler_tick();
+	sched_tick();
 	if (IS_ENABLED(CONFIG_POSIX_TIMERS))
 		run_posix_cpu_timers();
 }
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index bf2bdac46843..8fbb0ec39079 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1464,7 +1464,7 @@ void wq_worker_sleeping(struct task_struct *task)
  * wq_worker_tick - a scheduler tick occurred while a kworker is running
  * @task: task currently running
  *
- * Called from scheduler_tick(). We're in the IRQ context and the current
+ * Called from sched_tick(). We're in the IRQ context and the current
  * worker's fields which follow the 'K' locking rule can be accessed safely.
  */
 void wq_worker_tick(struct task_struct *task)
diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
index 25432b8cd5bd..073a748b9380 100644
--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
+++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
@@ -19,7 +19,7 @@ fail() { # mesg
 
 FILTER=set_ftrace_filter
 FUNC1="schedule"
-FUNC2="scheduler_tick"
+FUNC2="sched_tick"
 
 ALL_FUNCS="#### all functions enabled ####"
--
cgit v1.2.3

From 983be0628c061989b6cc175d2f5e429b40699fbb Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@kernel.org>
Date: Fri, 8 Mar 2024 12:18:09 +0100
Subject: sched/balancing: Rename trigger_load_balance() => sched_balance_trigger()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde
Link: https://lore.kernel.org/r/20240308111819.1101550-4-mingo@kernel.org
---
 Documentation/scheduler/sched-domains.rst                    | 2 +-
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 kernel/sched/core.c                                          | 2 +-
 kernel/sched/fair.c                                          | 2 +-
 kernel/sched/sched.h                                         | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

(limited to 'Documentation/translations')

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 541d6c617971..c7ea05f4107b 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -31,7 +31,7 @@ is treated as one entity. The load of a group is defined as the sum of the
 load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
-In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
 through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index fa0c0bcc6ba5..1a8587a971f9 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -34,7 +34,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 调度域中的负载均衡发生在调度组中。也就是说,每个组被视为一个实体。组的负载被定义为它
 管辖的每个CPU的负载之和。仅当组的负载不均衡后,任务才在组之间发生迁移。
 
-在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过sched_tick()
+在kernel/sched/core.c中,sched_balance_trigger()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
 的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 71b7a08a6502..929fce69f555 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5700,7 +5700,7 @@ void sched_tick(void)
 
 #ifdef CONFIG_SMP
 	rq->idle_balance = idle_cpu(cpu);
-	trigger_load_balance(rq);
+	sched_balance_trigger(rq);
 #endif
 }
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 953f39deb68e..e377b675920a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12438,7 +12438,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 /*
  * Trigger the SCHED_SOFTIRQ if it is time to do periodic load balancing.
  */
-void trigger_load_balance(struct rq *rq)
+void sched_balance_trigger(struct rq *rq)
 {
 	/*
 	 * Don't need to rebalance while attached to NULL domain or
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d2242679239e..5b0ddb0e6017 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2397,7 +2397,7 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 
 extern void update_group_capacity(struct sched_domain *sd, int cpu);
 
-extern void trigger_load_balance(struct rq *rq);
+extern void sched_balance_trigger(struct rq *rq);
 
 extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);
--
cgit v1.2.3

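[Editor's note -- not part of the series: the just-renamed trigger,
condensed from kernel/sched/fair.c, showing how the periodic softirq is
raised from the tick path.]

	void sched_balance_trigger(struct rq *rq)
	{
		/* Don't rebalance while attached to NULL domain or
		 * while the CPU is not active. */
		if (unlikely(on_null_domain(rq) || !cpu_active(cpu_of(rq))))
			return;

		/* Raise SCHED_SOFTIRQ once the rebalance interval expires;
		 * sched_balance_softirq() then does the actual work. */
		if (time_after_eq(jiffies, rq->next_balance))
			raise_softirq(SCHED_SOFTIRQ);

		nohz_balancer_kick(rq);
	}
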
From 14ff4dbd34f46cc6b6105f549983321241ccbba9 Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@kernel.org>
Date: Fri, 8 Mar 2024 12:18:10 +0100
Subject: sched/balancing: Rename rebalance_domains() => sched_balance_domains()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde
Link: https://lore.kernel.org/r/20240308111819.1101550-5-mingo@kernel.org
---
 Documentation/scheduler/sched-domains.rst                    | 2 +-
 Documentation/translations/zh_CN/scheduler/sched-domains.rst | 2 +-
 arch/arm/kernel/topology.c                                   | 2 +-
 kernel/sched/fair.c                                          | 8 ++++----
 kernel/sched/sched.h                                         | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

(limited to 'Documentation/translations')

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index c7ea05f4107b..5d8e8b8b269e 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -34,7 +34,7 @@ out of balance are tasks moved between groups.
 In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
 through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
-balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
+balancing workhorse, sched_balance_softirq()->sched_balance_domains(), is then run
 in softirq context (SCHED_SOFTIRQ).
 
 The latter function takes two arguments: the runqueue of current CPU and whether
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index 1a8587a971f9..e6590fd80640 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -36,7 +36,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 
 在kernel/sched/core.c中,sched_balance_trigger()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
-的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
+的工作由sched_balance_softirq()->sched_balance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
 
 后一个函数有两个入参:当前CPU的运行队列、它在sched_tick()调用时是否空闲。函数会从
diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index ef0058de432b..2336ee2aa44a 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -42,7 +42,7 @@
 * can take this difference into account during load balance. A per cpu
 * structure is preferred because each CPU updates its own cpu_capacity field
 * during the load balance except for idle cores. One idle core is selected
- * to run the rebalance_domains for all idle cores and the cpu_capacity can be
+ * to run the sched_balance_domains for all idle cores and the cpu_capacity can be
 * updated during this sequence.
 */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e377b675920a..330788b0c617 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11685,7 +11685,7 @@ static inline bool update_newidle_cost(struct sched_domain *sd, u64 cost)
 *
 * Balancing parameters are set up in init_sched_domains.
 */
-static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
+static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
 {
 	int continue_balancing = 1;
 	int cpu = rq->cpu;
@@ -12161,7 +12161,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 		rq_unlock_irqrestore(rq, &rf);
 
 		if (flags & NOHZ_BALANCE_KICK)
-			rebalance_domains(rq, CPU_IDLE);
+			sched_balance_domains(rq, CPU_IDLE);
 	}
 
 	if (time_after(next_balance, rq->next_balance)) {
@@ -12422,7 +12422,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 	/*
 	 * If this CPU has a pending NOHZ_BALANCE_KICK, then do the
 	 * balancing on behalf of the other idle CPUs whose ticks are
-	 * stopped. Do nohz_idle_balance *before* rebalance_domains to
+	 * stopped. Do nohz_idle_balance *before* sched_balance_domains to
 	 * give the idle CPUs a chance to load balance. Else we may
 	 * load balance only within the local sched_domain hierarchy
 	 * and abort nohz_idle_balance altogether if we pull some load.
@@ -12432,7 +12432,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 
 	/* normal load balance */
 	update_blocked_averages(this_rq->cpu);
-	rebalance_domains(this_rq, idle);
+	sched_balance_domains(this_rq, idle);
 }
 
 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5b0ddb0e6017..41024c1c49b4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2904,7 +2904,7 @@ extern void cfs_bandwidth_usage_dec(void);
 #define NOHZ_NEWILB_KICK_BIT	2
 #define NOHZ_NEXT_KICK_BIT	3
 
-/* Run rebalance_domains() */
+/* Run sched_balance_domains() */
 #define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
 /* Update blocked load */
 #define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
--
cgit v1.2.3

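[Editor's note -- not part of the series: the softirq handler after this
patch, condensed from kernel/sched/fair.c, showing the NOHZ-before-domains
ordering that the comment in the hunk above describes.]

	static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
	{
		struct rq *this_rq = this_rq();
		enum cpu_idle_type idle = this_rq->idle_balance ?
						CPU_IDLE : CPU_NOT_IDLE;

		/* NOHZ kick first: balance on behalf of idle, tick-stopped
		 * CPUs, and abort if that pulled some load. */
		if (nohz_idle_balance(this_rq, idle))
			return;

		/* Then the normal periodic balance for this runqueue. */
		update_blocked_averages(this_rq->cpu);
		sched_balance_domains(this_rq, idle);
	}
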
From 4c3e509ea9f249458e8692f8298cceac73105948 Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@kernel.org>
Date: Fri, 8 Mar 2024 12:18:11 +0100
Subject: sched/balancing: Rename load_balance() => sched_balance_rq()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Also load_balance() has become somewhat of a misnomer: historically it
was the first and primary load-balancing function that was called, but
with the introduction of sched domains, it's become a lower layer
function that balances runqueues.

Rename it to sched_balance_rq() accordingly.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde
Link: https://lore.kernel.org/r/20240308111819.1101550-6-mingo@kernel.org
---
 Documentation/scheduler/sched-domains.rst          |  4 +--
 Documentation/scheduler/sched-stats.rst            | 32 +++++++++++-----------
 .../translations/zh_CN/scheduler/sched-domains.rst |  4 +--
 .../translations/zh_CN/scheduler/sched-stats.rst   | 30 ++++++++++----------
 include/linux/sched/topology.h                     |  2 +-
 kernel/sched/fair.c                                | 10 +++----
 6 files changed, 41 insertions(+), 41 deletions(-)

(limited to 'Documentation/translations')

diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 5d8e8b8b269e..5e996fe973b1 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -41,11 +41,11 @@ The latter function takes two arguments: the runqueue of current CPU and whether
 the CPU was idle at the time the sched_tick() happened and iterates over all
 sched domains our CPU is on, starting from its base domain and going up the ->parent
 chain. While doing that, it checks to see if the current domain has exhausted its
-rebalance interval. If so, it runs load_balance() on that domain. It then checks
+rebalance interval. If so, it runs sched_balance_rq() on that domain. It then checks
 the parent sched_domain (if it exists), and the parent of the parent and so
 forth.
 
-Initially, load_balance() finds the busiest group in the current sched domain.
+Initially, sched_balance_rq() finds the busiest group in the current sched domain.
 If it succeeds, it looks for the busiest runqueue of all the CPUs' runqueues in
 that group. If it manages to find such a runqueue, it locks both our initial
 CPU's runqueue and the newly found busiest one and starts moving tasks from it
diff --git a/Documentation/scheduler/sched-stats.rst b/Documentation/scheduler/sched-stats.rst
index 73c412666655..7c2b16c4729d 100644
--- a/Documentation/scheduler/sched-stats.rst
+++ b/Documentation/scheduler/sched-stats.rst
@@ -77,53 +77,53 @@ domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 
 The first field is a bit mask indicating what cpus this domain operates over.
 
-The next 24 are a variety of load_balance() statistics in grouped into types
+The next 24 are a variety of sched_balance_rq() statistics in grouped into types
 of idleness (idle, busy, and newly idle):
 
-    1)  # of times in this domain load_balance() was called when the
+    1)  # of times in this domain sched_balance_rq() was called when the
        cpu was idle
-    2)  # of times in this domain load_balance() checked but found
+    2)  # of times in this domain sched_balance_rq() checked but found
        the load did not require balancing when the cpu was idle
-    3)  # of times in this domain load_balance() tried to move one or
+    3)  # of times in this domain sched_balance_rq() tried to move one or
        more tasks and failed, when the cpu was idle
     4)  sum of imbalances discovered (if any) with each call to
-        load_balance() in this domain when the cpu was idle
+        sched_balance_rq() in this domain when the cpu was idle
     5)  # of times in this domain pull_task() was called when the cpu
        was idle
     6)  # of times in this domain pull_task() was called even though
       the target task was cache-hot when idle
-    7)  # of times in this domain load_balance() was called but did
+    7)  # of times in this domain sched_balance_rq() was called but did
       not find a busier queue while the cpu was idle
     8)  # of times in this domain a busier queue was found while the
       cpu was idle but no busier group was found
-    9)  # of times in this domain load_balance() was called when the
+    9)  # of times in this domain sched_balance_rq() was called when the
       cpu was busy
-    10) # of times in this domain load_balance() checked but found the
+    10) # of times in this domain sched_balance_rq() checked but found the
       load did not require balancing when busy
-    11) # of times in this domain load_balance() tried to move one or
+    11) # of times in this domain sched_balance_rq() tried to move one or
       more tasks and failed, when the cpu was busy
     12) sum of imbalances discovered (if any) with each call to
-        load_balance() in this domain when the cpu was busy
+        sched_balance_rq() in this domain when the cpu was busy
     13) # of times in this domain pull_task() was called when busy
     14) # of times in this domain pull_task() was called even though the
       target task was cache-hot when busy
-    15) # of times in this domain load_balance() was called but did not
+    15) # of times in this domain sched_balance_rq() was called but did not
       find a busier queue while the cpu was busy
     16) # of times in this domain a busier queue was found while the cpu
       was busy but no busier group was found
-    17) # of times in this domain load_balance() was called when the
+    17) # of times in this domain sched_balance_rq() was called when the
       cpu was just becoming idle
-    18) # of times in this domain load_balance() checked but found the
+    18) # of times in this domain sched_balance_rq() checked but found the
       load did not require balancing when the cpu was just becoming idle
-    19) # of times in this domain load_balance() tried to move one or more
+    19) # of times in this domain sched_balance_rq() tried to move one or more
       tasks and failed, when the cpu was just becoming idle
     20) sum of imbalances discovered (if any) with each call to
-        load_balance() in this domain when the cpu was just becoming idle
+        sched_balance_rq() in this domain when the cpu was just becoming idle
     21) # of times in this domain pull_task() was called when newly idle
     22) # of times in this domain pull_task() was called even though the
       target task was cache-hot when just becoming idle
-    23) # of times in this domain load_balance() was called but did not
+    23) # of times in this domain sched_balance_rq() was called but did not
       find a busier queue while the cpu was just becoming idle
     24) # of times in this domain a busier queue was found while the
       cpu was just becoming idle but no busier group was found
diff --git a/Documentation/translations/zh_CN/scheduler/sched-domains.rst b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
index e6590fd80640..06363169c56b 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-domains.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-domains.rst
@@ -42,9 +42,9 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 后一个函数有两个入参:当前CPU的运行队列、它在sched_tick()调用时是否空闲。函数会从
 当前CPU所在的基调度域开始迭代执行,并沿着parent指针链向上进入更高层级的调度域。在迭代
 过程中,函数会检查当前调度域是否已经耗尽了再平衡的时间间隔,如果是,它在该调度域运行
-load_balance()。接下来它检查父调度域(如果存在),再后来父调度域的父调度域,以此类推。
+sched_balance_rq()。接下来它检查父调度域(如果存在),再后来父调度域的父调度域,以此类推。
 
-起初,load_balance()查找当前调度域中最繁忙的调度组。如果成功,在该调度组管辖的全部CPU
+起初,sched_balance_rq()查找当前调度域中最繁忙的调度组。如果成功,在该调度组管辖的全部CPU
 的运行队列中找出最繁忙的运行队列。如能找到,对当前的CPU运行队列和新找到的最繁忙运行
 队列均加锁,并把任务从最繁忙队列中迁移到当前CPU上。被迁移的任务数量等于在先前迭代执行
 中计算出的该调度域的调度组的不均衡值。
diff --git a/Documentation/translations/zh_CN/scheduler/sched-stats.rst b/Documentation/translations/zh_CN/scheduler/sched-stats.rst
index c5e0be663837..09eee2517610 100644
--- a/Documentation/translations/zh_CN/scheduler/sched-stats.rst
+++ b/Documentation/translations/zh_CN/scheduler/sched-stats.rst
@@ -75,42 +75,42 @@ domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 繁忙,新空闲):
 
-    1) 当CPU空闲时,load_balance()在这个调度域中被调用了#次
-    2) 当CPU空闲时,load_balance()在这个调度域中被调用,但是发现负载无需
+    1) 当CPU空闲时,sched_balance_rq()在这个调度域中被调用了#次
+    2) 当CPU空闲时,sched_balance_rq()在这个调度域中被调用,但是发现负载无需
        均衡#次
-    3) 当CPU空闲时,load_balance()在这个调度域中被调用,试图迁移1个或更多
+    3) 当CPU空闲时,sched_balance_rq()在这个调度域中被调用,试图迁移1个或更多
       任务且失败了#次
-    4) 当CPU空闲时,load_balance()在这个调度域中被调用,发现不均衡(如果有)
+    4) 当CPU空闲时,sched_balance_rq()在这个调度域中被调用,发现不均衡(如果有)
      #次
    5) 当CPU空闲时,pull_task()在这个调度域中被调用#次
    6) 当CPU空闲时,尽管目标任务是热缓存状态,pull_task()依然被调用#次
-    7) 当CPU空闲时,load_balance()在这个调度域中被调用,未能找到更繁忙的
+    7) 当CPU空闲时,sched_balance_rq()在这个调度域中被调用,未能找到更繁忙的
      队列#次
    8) 当CPU空闲时,在调度域中找到了更繁忙的队列,但未找到更繁忙的调度组
      #次
-    9) 当CPU繁忙时,load_balance()在这个调度域中被调用了#次
-    10) 当CPU繁忙时,load_balance()在这个调度域中被调用,但是发现负载无需
+    9) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用了#次
+    10) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,但是发现负载无需
      均衡#次
-    11) 当CPU繁忙时,load_balance()在这个调度域中被调用,试图迁移1个或更多
+    11) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,试图迁移1个或更多
      任务且失败了#次
-    12) 当CPU繁忙时,load_balance()在这个调度域中被调用,发现不均衡(如果有)
+    12) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,发现不均衡(如果有)
      #次
    13) 当CPU繁忙时,pull_task()在这个调度域中被调用#次
    14) 当CPU繁忙时,尽管目标任务是热缓存状态,pull_task()依然被调用#次
-    15) 当CPU繁忙时,load_balance()在这个调度域中被调用,未能找到更繁忙的
+    15) 当CPU繁忙时,sched_balance_rq()在这个调度域中被调用,未能找到更繁忙的
      队列#次
    16) 当CPU繁忙时,在调度域中找到了更繁忙的队列,但未找到更繁忙的调度组
      #次
-    17) 当CPU新空闲时,load_balance()在这个调度域中被调用了#次
-    18) 当CPU新空闲时,load_balance()在这个调度域中被调用,但是发现负载无需
+    17) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用了#次
+    18) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,但是发现负载无需
      均衡#次
-    19) 当CPU新空闲时,load_balance()在这个调度域中被调用,试图迁移1个或更多
+    19) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,试图迁移1个或更多
      任务且失败了#次
-    20) 当CPU新空闲时,load_balance()在这个调度域中被调用,发现不均衡(如果有)
+    20) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,发现不均衡(如果有)
      #次
    21) 当CPU新空闲时,pull_task()在这个调度域中被调用#次
    22) 当CPU新空闲时,尽管目标任务是热缓存状态,pull_task()依然被调用#次
-    23) 当CPU新空闲时,load_balance()在这个调度域中被调用,未能找到更繁忙的
+    23) 当CPU新空闲时,sched_balance_rq()在这个调度域中被调用,未能找到更繁忙的
      队列#次
    24) 当CPU新空闲时,在调度域中找到了更繁忙的队列,但未找到更繁忙的调度组
      #次
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 18572c9ea724..c8fe9bab981b 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -110,7 +110,7 @@ struct sched_domain {
 	unsigned long last_decay_max_lb_cost;
 
 #ifdef CONFIG_SCHEDSTATS
-	/* load_balance() stats */
+	/* sched_balance_rq() stats */
 	unsigned int lb_count[CPU_MAX_IDLE_TYPES];
 	unsigned int lb_failed[CPU_MAX_IDLE_TYPES];
 	unsigned int lb_balanced[CPU_MAX_IDLE_TYPES];
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 330788b0c617..0d2753c50be9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6866,7 +6866,7 @@ dequeue_throttle:
 
 #ifdef CONFIG_SMP
 
-/* Working cpumask for: load_balance, load_balance_newidle. */
+/* Working cpumask for: sched_balance_rq, load_balance_newidle. */
 static DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
 static DEFINE_PER_CPU(cpumask_var_t, select_rq_mask);
 static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
@@ -11242,7 +11242,7 @@ static int should_we_balance(struct lb_env *env)
 * Check this_cpu to ensure it is balanced within domain. Attempt to move
 * tasks if there is an imbalance.
 */
-static int load_balance(int this_cpu, struct rq *this_rq,
+static int sched_balance_rq(int this_cpu, struct rq *this_rq,
 			struct sched_domain *sd, enum cpu_idle_type idle,
 			int *continue_balancing)
 {
@@ -11647,7 +11647,7 @@ out_unlock:
 static atomic_t sched_balance_running = ATOMIC_INIT(0);
 
 /*
- * Scale the max load_balance interval with the number of CPUs in the system.
+ * Scale the max sched_balance_rq interval with the number of CPUs in the system.
 * This trades load-balance latency on larger machines for less cross talk.
 */
void update_max_interval(void)
@@ -11727,7 +11727,7 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
 	}
 
 	if (time_after_eq(jiffies, sd->last_balance + interval)) {
-		if (load_balance(cpu, rq, sd, idle, &continue_balancing)) {
+		if (sched_balance_rq(cpu, rq, sd, idle, &continue_balancing)) {
 			/*
 			 * The LBF_DST_PINNED logic could have changed
 			 * env->dst_cpu, so we can't know our idle
@@ -12353,7 +12353,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 
 		if (sd->flags & SD_BALANCE_NEWIDLE) {
 
-			pulled_task = load_balance(this_cpu, this_rq,
+			pulled_task = sched_balance_rq(this_cpu, this_rq,
 						   sd, CPU_NEWLY_IDLE,
 						   &continue_balancing);
--
cgit v1.2.3
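
[Editor's note -- summary, not a patch: after this series, the periodic
load-balancing call chain reads as follows, with the old names on the
right.]

	/*
	 * sched_tick()                                // was: scheduler_tick()
	 *   -> sched_balance_trigger(rq)              // was: trigger_load_balance()
	 *        raise_softirq(SCHED_SOFTIRQ)
	 *          -> sched_balance_softirq()         // was: run_rebalance_domains()
	 *               -> nohz_idle_balance()        // NOHZ kick, runs first
	 *               -> sched_balance_domains(rq)  // was: rebalance_domains()
	 *                    -> sched_balance_rq(..)  // was: load_balance()
	 */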