author    John Stultz <jstultz@google.com>  2025-04-29 08:07:26 -0700
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2025-06-19 15:31:27 +0200
commit    451a18d71bd979768de223d10b1921da4732f1c9 (patch)
tree      0cc1634f237e9c93291c2bc8093a4e2a2a659726
parent    f54d2b7ac42b7d791a6584a8fbc4703d09790955 (diff)
sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks
[ Upstream commit b7ca5743a2604156d6083b88cefacef983f3a3a6 ]

It was reported that in 6.12, smpboot_create_threads() was taking much
longer than in 6.6.

I narrowed down the call path to:

  smpboot_create_threads()
  -> kthread_create_on_cpu()
  -> kthread_bind()
  -> __kthread_bind_mask()
  -> wait_task_inactive()

In wait_task_inactive() we were regularly hitting the queued case, which
sets a one-tick timeout; when hit multiple times in a row, these timeouts
quickly accumulate into a long delay.

I noticed that disabling the DELAY_DEQUEUE sched feature recovered the
performance, and it seems the newly created tasks are usually
sched_delayed and left on the runqueue.

So in wait_task_inactive(), when we see the task is p->se.sched_delayed,
manually dequeue the sched_delayed task with DEQUEUE_DELAYED, so we don't
have to constantly wait a tick.

Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue")
Reported-by: peter-yc.chang@mediatek.com
Signed-off-by: John Stultz <jstultz@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Link: https://lkml.kernel.org/r/20250429150736.3778580-1-jstultz@google.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
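For context, a minimal sketch of the retry loop in wait_task_inactive() that produces the one-tick waits described above. It follows how kernel/sched/core.c handles the still-queued case in recent kernels, but it is a simplified illustration, not the verbatim source; surrounding logic and some details are abbreviated and may differ between versions.

	/*
	 * Simplified sketch (not verbatim kernel source): while the target
	 * task is still on the runqueue, wait_task_inactive() sleeps for
	 * roughly one tick and retries.  A sched_delayed task left on the
	 * runqueue by DELAY_DEQUEUE keeps task_on_rq_queued() returning
	 * true, so every call through this path burns a full tick.
	 */
	for (;;) {
		rq = task_rq_lock(p, &rf);
		running = task_on_cpu(rq, p);
		queued = task_on_rq_queued(p);
		task_rq_unlock(rq, p, &rf);

		if (unlikely(running)) {
			cpu_relax();
			continue;
		}

		if (unlikely(queued)) {
			ktime_t to = NSEC_PER_SEC / HZ;	/* ~1 tick */

			set_current_state(TASK_UNINTERRUPTIBLE);
			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
			continue;
		}

		break;	/* task is fully off the runqueue */
	}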
-rw-r--r--	kernel/sched/core.c	6
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 814abc7ad994..51f36de5990a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2229,6 +2229,12 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
 	 * just go back and repeat.
 	 */
 	rq = task_rq_lock(p, &rf);
+	/*
+	 * If task is sched_delayed, force dequeue it, to avoid always
+	 * hitting the tick timeout in the queued case
+	 */
+	if (p->se.sched_delayed)
+		dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
 	trace_sched_wait_task(p);
 	running = task_on_cpu(rq, p);
 	queued = task_on_rq_queued(p);