commit 2d722f6d5671794c0de0e29e3da75006ac086718 (patch)
tree 042c6320cfd771798f985e79aa0422cbe8096258 /Documentation
parent f0bb4c0ab064a8aeeffbda1cee380151a594eaab (diff)
parent 2fd1b487884310d0aa0c0640179dc7490ad86313 (diff)
author Linus Torvalds <torvalds@linux-foundation.org> 2013-07-03 03:17:25 +0400
committer Linus Torvalds <torvalds@linux-foundation.org> 2013-07-03 03:17:25 +0400
download linux-2d722f6d5671794c0de0e29e3da75006ac086718.tar.xz
Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
 "The main changes:

   - load-calculation cleanups and improvements, by Alex Shi

   - various nohz related tidying up of statistics, by Frederic Weisbecker

   - factor out /proc functions to kernel/sched/proc.c, by Paul Gortmaker

   - simplify the RT policy scheduler, by Kirill Tkhai

   - various fixes and cleanups"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (42 commits)
  sched/debug: Remove CONFIG_FAIR_GROUP_SCHED mask
  sched/debug: Fix formatting of /proc/<PID>/sched
  sched: Fix typo in struct sched_avg member description
  sched/fair: Fix typo describing flags in enqueue_entity
  sched/debug: Add load-tracking statistics to task
  sched: Change get_rq_runnable_load() to static and inline
  sched/tg: Remove tg.load_weight
  sched/cfs_rq: Change atomic64_t removed_load to atomic_long_t
  sched/tg: Use 'unsigned long' for load variable in task group
  sched: Change cfs_rq load avg to unsigned long
  sched: Consider runnable load average in move_tasks()
  sched: Compute runnable load avg in cpu_load and cpu_avg_load_per_task
  sched: Update cpu load after task_tick
  sched: Fix sleep time double accounting in enqueue entity
  sched: Set an initial value of runnable avg for new forked task
  sched: Move a few runnable tg variables into CONFIG_SMP
  Revert "sched: Introduce temporary FAIR_GROUP_SCHED dependency for load-tracking"
  sched: Don't mix use of typedef ctl_table and struct ctl_table
  sched: Remove WARN_ON(!sd) from init_sched_groups_power()
  sched: Fix memory leakage in build_sched_groups()
  ...
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/cgroups/cpusets.txt                  | 2 +-
-rw-r--r--  Documentation/rt-mutex-design.txt                  | 2 +-
-rw-r--r--  Documentation/scheduler/sched-domains.txt          | 4 ++--
-rw-r--r--  Documentation/spinlocks.txt                        | 2 +-
-rw-r--r--  Documentation/virtual/uml/UserModeLinux-HOWTO.txt  | 4 ++--
5 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/Documentation/cgroups/cpusets.txt b/Documentation/cgroups/cpusets.txt
index 12e01d432bfe..7740038d82bc 100644
--- a/Documentation/cgroups/cpusets.txt
+++ b/Documentation/cgroups/cpusets.txt
@@ -373,7 +373,7 @@ can become very uneven.
1.7 What is sched_load_balance ?
--------------------------------
-The kernel scheduler (kernel/sched.c) automatically load balances
+The kernel scheduler (kernel/sched/core.c) automatically load balances
tasks. If one CPU is underutilized, kernel code running on that
CPU will look for tasks on other more overloaded CPUs and move those
tasks to itself, within the constraints of such placement mechanisms
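
The paragraph above describes pull-based balancing. As a minimal, hypothetical
sketch of that policy (cpu_underutilized(), find_busiest_cpu() and
pull_task_to() are illustrative stand-ins, not kernel API):

    static void balance_if_underutilized(int this_cpu)
    {
            int busiest;

            /* Nothing to do if this CPU already has enough work. */
            if (!cpu_underutilized(this_cpu))
                    return;

            /* Otherwise pull work from the most overloaded CPU, within
             * the constraints of cpusets and CPU affinity. */
            busiest = find_busiest_cpu();
            if (busiest >= 0 && busiest != this_cpu)
                    pull_task_to(this_cpu, busiest);
    }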
diff --git a/Documentation/rt-mutex-design.txt b/Documentation/rt-mutex-design.txt
index 33ed8007a845..a5bcd7f5c33f 100644
--- a/Documentation/rt-mutex-design.txt
+++ b/Documentation/rt-mutex-design.txt
@@ -384,7 +384,7 @@ priority back.
__rt_mutex_adjust_prio examines the result of rt_mutex_getprio, and if the
result does not equal the task's current priority, then rt_mutex_setprio
is called to adjust the priority of the task to the new priority.
-Note that rt_mutex_setprio is defined in kernel/sched.c to implement the
+Note that rt_mutex_setprio is defined in kernel/sched/core.c to implement the
actual change in priority.
It is interesting to note that __rt_mutex_adjust_prio can either increase
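
For reference, the flow described above condenses to roughly the following
sketch (simplified; treating task->prio as the "current priority" the text
refers to):

    static void __rt_mutex_adjust_prio(struct task_struct *task)
    {
            int prio = rt_mutex_getprio(task);

            /* Only invoke the scheduler if the effective priority
             * changed; rt_mutex_setprio() (kernel/sched/core.c)
             * performs the actual change. */
            if (task->prio != prio)
                    rt_mutex_setprio(task, prio);
    }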
diff --git a/Documentation/scheduler/sched-domains.txt b/Documentation/scheduler/sched-domains.txt
index 443f0c76bab4..4af80b1c05aa 100644
--- a/Documentation/scheduler/sched-domains.txt
+++ b/Documentation/scheduler/sched-domains.txt
@@ -25,7 +25,7 @@ is treated as one entity. The load of a group is defined as the sum of the
load of each of its member CPUs, and only when the load of a group becomes
out of balance are tasks moved between groups.
-In kernel/sched.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
through scheduler_tick(). It raises a softirq after the next regularly scheduled
rebalancing event for the current runqueue has arrived. The actual load
balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run
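
The trigger path described above amounts to something like the following
sketch (simplified: the real function also handles nohz idle balancing, and
the exact conditions are omitted here):

    void trigger_load_balance(struct rq *rq, int cpu)
    {
            /* Called from scheduler_tick(): once this runqueue's next
             * rebalance time has passed, defer the real work to softirq
             * context, where run_rebalance_domains() picks it up. */
            if (time_after_eq(jiffies, rq->next_balance))
                    raise_softirq(SCHED_SOFTIRQ);
    }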
@@ -62,7 +62,7 @@ struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
the specifics and what to tune.
Architectures may retain the regular override the default SD_*_INIT flags
-while using the generic domain builder in kernel/sched.c if they wish to
+while using the generic domain builder in kernel/sched/core.c if they wish to
retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
can be done by #define'ing ARCH_HASH_SCHED_TUNE.
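
A hypothetical arch-side override along the lines the text describes (the
ARCH_HASH_SCHED_TUNE name follows this document; the SD_CPU_INIT fields and
values below are placeholders, not tuning advice):

    /* e.g. in an architecture topology header */
    #define ARCH_HASH_SCHED_TUNE

    #define SD_CPU_INIT (struct sched_domain) {             \
            .min_interval   = 1,                            \
            .max_interval   = 4,                            \
            .flags          = SD_LOAD_BALANCE               \
                            | SD_BALANCE_NEWIDLE,           \
    }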
diff --git a/Documentation/spinlocks.txt b/Documentation/spinlocks.txt
index 9dbe885ecd8d..97eaf5727178 100644
--- a/Documentation/spinlocks.txt
+++ b/Documentation/spinlocks.txt
@@ -137,7 +137,7 @@ don't block on each other (and thus there is no dead-lock wrt interrupts.
But when you do the write-lock, you have to use the irq-safe version.
For an example of being clever with rw-locks, see the "waitqueue_lock"
-handling in kernel/sched.c - nothing ever _changes_ a wait-queue from
+handling in kernel/sched/core.c - nothing ever _changes_ a wait-queue from
within an interrupt, they only read the queue in order to know whom to
wake up. So read-locks are safe (which is good: they are very common
indeed), while write-locks need to protect themselves against interrupts.
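
The pattern described above looks roughly like this (illustrative lock and
function names, not the actual wait-queue code):

    static DEFINE_RWLOCK(queue_lock);

    /* Interrupt context: the plain read lock is safe because no
     * interrupt path ever takes the write lock. */
    void wakeup_waiters_from_irq(void)
    {
            read_lock(&queue_lock);
            /* ... scan the queue to find whom to wake up ... */
            read_unlock(&queue_lock);
    }

    /* Process context: the writer must disable interrupts, otherwise an
     * interrupt taking read_lock() while we hold the write lock on this
     * CPU would deadlock. */
    void modify_queue(void)
    {
            unsigned long flags;

            write_lock_irqsave(&queue_lock, flags);
            /* ... change the wait queue ... */
            write_unlock_irqrestore(&queue_lock, flags);
    }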
diff --git a/Documentation/virtual/uml/UserModeLinux-HOWTO.txt b/Documentation/virtual/uml/UserModeLinux-HOWTO.txt
index a5f8436753e7..f4099ca6b483 100644
--- a/Documentation/virtual/uml/UserModeLinux-HOWTO.txt
+++ b/Documentation/virtual/uml/UserModeLinux-HOWTO.txt
@@ -3127,7 +3127,7 @@
at process_kern.c:156
#3 0x1006a052 in switch_to (prev=0x50072000, next=0x507e8000, last=0x50072000)
at process_kern.c:161
- #4 0x10001d12 in schedule () at sched.c:777
+ #4 0x10001d12 in schedule () at core.c:777
#5 0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71
#6 0x1006aa10 in __down_failed () at semaphore.c:157
#7 0x1006c5d8 in segv_handler (sc=0x5006e940) at trap_user.c:174
@@ -3191,7 +3191,7 @@
at process_kern.c:161
161 _switch_to(prev, next);
(gdb)
- #4 0x10001d12 in schedule () at sched.c:777
+ #4 0x10001d12 in schedule () at core.c:777
777 switch_to(prev, next, prev);
(gdb)
#5 0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71