author    Vincent Guittot <vincent.guittot@linaro.org>    2023-07-06 16:51:44 +0300
committer Peter Zijlstra <peterz@infradead.org>    2023-07-26 13:28:50 +0300
commit    c2e164ac33f75e0acb93004960c73bd9166d3d35 (patch)
tree      88684a82dfd9821fc43e123a54f6a2505950a92a /kernel/sched
parent    752182b24bf4ffda1c5a8025515d53122d930bd8 (diff)
download  linux-c2e164ac33f75e0acb93004960c73bd9166d3d35.tar.xz
sched/fair: remove util_est boosting
There is no need to use runnable_avg when estimating util_est, and doing so
even produces wrong behavior, because runnable_avg includes blocked tasks
whereas util_est does not. This can lead to accounting the waking task p
twice: once through the blocked contribution left in runnable_avg, and again
when its util_est is added.

The cpu's runnable_avg is already used when computing util_avg, which is then
compared with util_est.

In some situations, feec() will therefore not select prev_cpu but another CPU
in the same performance domain, because of the artificially higher max_util.

Fixes: 7d0583cf9ec7 ("sched/fair, cpufreq: Introduce 'runnable boosting'")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Link: https://lore.kernel.org/r/20230706135144.324311-1-vincent.guittot@linaro.org
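The double accounting can be illustrated with a minimal, self-contained C
sketch. The figures and variable names are hypothetical, not taken from the
kernel; only the max() step mirrors the boost logic removed below:

  #include <stdio.h>

  #define max(a, b) ((a) > (b) ? (a) : (b))

  int main(void)
  {
          /* One CPU in a performance domain; task p is blocked, waking up.
           * All figures are made up for illustration. */
          unsigned long util_est_enqueued = 200; /* blocked tasks excluded */
          unsigned long runnable_avg      = 350; /* blocked tasks included:
                                                  * p's ~150 is still here */
          unsigned long task_p_util_est   = 150; /* added for the waking p */

          /* Removed behavior: boost util_est with runnable_avg, then add
           * the waking task on top -- p ends up counted twice. */
          unsigned long boosted = max(util_est_enqueued, runnable_avg);
          printf("boosted: %lu\n", boosted + task_p_util_est);          /* 500 */

          /* Behavior after this patch: p enters the estimate only once. */
          printf("plain:   %lu\n", util_est_enqueued + task_p_util_est); /* 350 */

          return 0;
  }

With the boost, p's prev_cpu looks roughly 150 units busier than it really
is, which is why feec() can end up picking a different CPU in the same
performance domain.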
Diffstat (limited to 'kernel/sched')
-rw-r--r--    kernel/sched/fair.c    3
1 file changed, 0 insertions, 3 deletions
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d3df5b1642a6..f55b0a72772e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7320,9 +7320,6 @@ cpu_util(int cpu, struct task_struct *p, int dst_cpu, int boost)
 
 		util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
 
-		if (boost)
-			util_est = max(util_est, runnable);
-
 		/*
 		 * During wake-up @p isn't enqueued yet and doesn't contribute
 		 * to any cpu_rq(cpu)->cfs.avg.util_est.enqueued.