-rw-r--r--  Documentation/admin-guide/pm/cpuidle.rst | 77
-rw-r--r--  Documentation/admin-guide/pm/intel_pstate.rst | 6
-rw-r--r--  Documentation/devicetree/bindings/arm/tegra/nvidia,tegra30-actmon.txt | 57
-rw-r--r--  Documentation/devicetree/bindings/devfreq/nvidia,tegra30-actmon.yaml | 126
-rw-r--r--  Documentation/power/runtime_pm.rst | 15
-rw-r--r--  drivers/base/power/domain.c | 64
-rw-r--r--  drivers/base/power/domain_governor.c | 1
-rw-r--r--  drivers/base/power/runtime.c | 18
-rw-r--r--  drivers/base/power/wakeirq.c | 4
-rw-r--r--  drivers/cpufreq/cpufreq.c | 11
-rw-r--r--  drivers/cpufreq/cpufreq_stats.c | 5
-rw-r--r--  drivers/cpufreq/intel_pstate.c | 263
-rw-r--r--  drivers/cpufreq/loongson2_cpufreq.c | 1
-rw-r--r--  drivers/cpufreq/sc520_freq.c | 1
-rw-r--r--  drivers/cpufreq/sh-cpufreq.c | 1
-rw-r--r--  drivers/cpuidle/governors/teo.c | 476
-rw-r--r--  drivers/devfreq/Kconfig | 1
-rw-r--r--  drivers/devfreq/devfreq.c | 1
-rw-r--r--  drivers/devfreq/governor_passive.c | 3
-rw-r--r--  drivers/devfreq/governor_userspace.c | 10
-rw-r--r--  drivers/devfreq/imx-bus.c | 14
-rw-r--r--  drivers/devfreq/tegra30-devfreq.c | 1
-rw-r--r--  drivers/idle/intel_idle.c | 33
-rw-r--r--  drivers/opp/core.c | 10
-rw-r--r--  drivers/opp/of.c | 27
-rw-r--r--  include/linux/pm_domain.h | 1
-rw-r--r--  include/linux/pm_runtime.h | 3
-rw-r--r--  kernel/power/Kconfig | 12
-rw-r--r--  kernel/power/process.c | 2
-rw-r--r--  kernel/power/snapshot.c | 10
-rw-r--r--  kernel/power/swap.c | 2
31 files changed, 775 insertions, 481 deletions
diff --git a/Documentation/admin-guide/pm/cpuidle.rst b/Documentation/admin-guide/pm/cpuidle.rst
index 10fde58d0869..aec2cd2aaea7 100644
--- a/Documentation/admin-guide/pm/cpuidle.rst
+++ b/Documentation/admin-guide/pm/cpuidle.rst
@@ -347,81 +347,8 @@ for tickless systems. It follows the same basic strategy as the ``menu`` `one
<menu-gov_>`_: it always tries to find the deepest idle state suitable for the
given conditions. However, it applies a different approach to that problem.
-First, it does not use sleep length correction factors, but instead it attempts
-to correlate the observed idle duration values with the available idle states
-and use that information to pick up the idle state that is most likely to
-"match" the upcoming CPU idle interval. Second, it does not take the tasks
-that were running on the given CPU in the past and are waiting on some I/O
-operations to complete now at all (there is no guarantee that they will run on
-the same CPU when they become runnable again) and the pattern detection code in
-it avoids taking timer wakeups into account. It also only uses idle duration
-values less than the current time till the closest timer (with the scheduler
-tick excluded) for that purpose.
-
-Like in the ``menu`` governor `case <menu-gov_>`_, the first step is to obtain
-the *sleep length*, which is the time until the closest timer event with the
-assumption that the scheduler tick will be stopped (that also is the upper bound
-on the time until the next CPU wakeup). That value is then used to preselect an
-idle state on the basis of three metrics maintained for each idle state provided
-by the ``CPUIdle`` driver: ``hits``, ``misses`` and ``early_hits``.
-
-The ``hits`` and ``misses`` metrics measure the likelihood that a given idle
-state will "match" the observed (post-wakeup) idle duration if it "matches" the
-sleep length. They both are subject to decay (after a CPU wakeup) every time
-the target residency of the idle state corresponding to them is less than or
-equal to the sleep length and the target residency of the next idle state is
-greater than the sleep length (that is, when the idle state corresponding to
-them "matches" the sleep length). The ``hits`` metric is increased if the
-former condition is satisfied and the target residency of the given idle state
-is less than or equal to the observed idle duration and the target residency of
-the next idle state is greater than the observed idle duration at the same time
-(that is, it is increased when the given idle state "matches" both the sleep
-length and the observed idle duration). In turn, the ``misses`` metric is
-increased when the given idle state "matches" the sleep length only and the
-observed idle duration is too short for its target residency.
-
-The ``early_hits`` metric measures the likelihood that a given idle state will
-"match" the observed (post-wakeup) idle duration if it does not "match" the
-sleep length. It is subject to decay on every CPU wakeup and it is increased
-when the idle state corresponding to it "matches" the observed (post-wakeup)
-idle duration and the target residency of the next idle state is less than or
-equal to the sleep length (i.e. the idle state "matching" the sleep length is
-deeper than the given one).
-
-The governor walks the list of idle states provided by the ``CPUIdle`` driver
-and finds the last (deepest) one with the target residency less than or equal
-to the sleep length. Then, the ``hits`` and ``misses`` metrics of that idle
-state are compared with each other and it is preselected if the ``hits`` one is
-greater (which means that that idle state is likely to "match" the observed idle
-duration after CPU wakeup). If the ``misses`` one is greater, the governor
-preselects the shallower idle state with the maximum ``early_hits`` metric
-(or if there are multiple shallower idle states with equal ``early_hits``
-metric which also is the maximum, the shallowest of them will be preselected).
-[If there is a wakeup latency constraint coming from the `PM QoS framework
-<cpu-pm-qos_>`_ which is hit before reaching the deepest idle state with the
-target residency within the sleep length, the deepest idle state with the exit
-latency within the constraint is preselected without consulting the ``hits``,
-``misses`` and ``early_hits`` metrics.]
-
-Next, the governor takes several idle duration values observed most recently
-into consideration and if at least a half of them are greater than or equal to
-the target residency of the preselected idle state, that idle state becomes the
-final candidate to ask for. Otherwise, the average of the most recent idle
-duration values below the target residency of the preselected idle state is
-computed and the governor walks the idle states shallower than the preselected
-one and finds the deepest of them with the target residency within that average.
-That idle state is then taken as the final candidate to ask for.
-
-Still, at this point the governor may need to refine the idle state selection if
-it has not decided to `stop the scheduler tick <idle-cpus-and-tick_>`_. That
-generally happens if the target residency of the idle state selected so far is
-less than the tick period and the tick has not been stopped already (in a
-previous iteration of the idle loop). Then, like in the ``menu`` governor
-`case <menu-gov_>`_, the sleep length used in the previous computations may not
-reflect the real time until the closest timer event and if it really is greater
-than that time, a shallower state with a suitable target residency may need to
-be selected.
-
+.. kernel-doc:: drivers/cpuidle/governors/teo.c
+ :doc: teo-description
.. _idle-states-representation:
diff --git a/Documentation/admin-guide/pm/intel_pstate.rst b/Documentation/admin-guide/pm/intel_pstate.rst
index 7a7d4b041eac..d5043cd8d2f5 100644
--- a/Documentation/admin-guide/pm/intel_pstate.rst
+++ b/Documentation/admin-guide/pm/intel_pstate.rst
@@ -365,6 +365,9 @@ argument is passed to the kernel in the command line.
inclusive) including both turbo and non-turbo P-states (see
`Turbo P-states Support`_).
+ This attribute is present only if the value exposed by it is the same
+ for all of the CPUs in the system.
+
The value of this attribute is not affected by the ``no_turbo``
setting described `below <no_turbo_attr_>`_.
@@ -374,6 +377,9 @@ argument is passed to the kernel in the command line.
Ratio of the `turbo range <turbo_>`_ size to the size of the entire
range of supported P-states, in percent.
+ This attribute is present only if the value exposed by it is the same
+ for all of the CPUs in the system.
+
This attribute is read-only.
.. _no_turbo_attr:
diff --git a/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra30-actmon.txt b/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra30-actmon.txt
deleted file mode 100644
index 897eedfa2bc8..000000000000
--- a/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra30-actmon.txt
+++ /dev/null
@@ -1,57 +0,0 @@
-NVIDIA Tegra Activity Monitor
-
-The activity monitor block collects statistics about the behaviour of other
-components in the system. This information can be used to derive the rate at
-which the external memory needs to be clocked in order to serve all requests
-from the monitored clients.
-
-Required properties:
-- compatible: should be "nvidia,tegra<chip>-actmon"
-- reg: offset and length of the register set for the device
-- interrupts: standard interrupt property
-- clocks: Must contain a phandle and clock specifier pair for each entry in
-clock-names. See ../../clock/clock-bindings.txt for details.
-- clock-names: Must include the following entries:
- - actmon
- - emc
-- resets: Must contain an entry for each entry in reset-names. See
-../../reset/reset.txt for details.
-- reset-names: Must include the following entries:
- - actmon
-- operating-points-v2: See ../bindings/opp/opp.txt for details.
-- interconnects: Should contain entries for memory clients sitting on
- MC->EMC memory interconnect path.
-- interconnect-names: Should include name of the interconnect path for each
- interconnect entry. Consult TRM documentation for
- information about available memory clients, see MEMORY
- CONTROLLER section.
-
-For each opp entry in 'operating-points-v2' table:
-- opp-supported-hw: bitfield indicating SoC speedo ID mask
-- opp-peak-kBps: peak bandwidth of the memory channel
-
-Example:
- dfs_opp_table: opp-table {
- compatible = "operating-points-v2";
-
- opp@12750000 {
- opp-hz = /bits/ 64 <12750000>;
- opp-supported-hw = <0x000F>;
- opp-peak-kBps = <51000>;
- };
- ...
- };
-
- actmon@6000c800 {
- compatible = "nvidia,tegra124-actmon";
- reg = <0x0 0x6000c800 0x0 0x400>;
- interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&tegra_car TEGRA124_CLK_ACTMON>,
- <&tegra_car TEGRA124_CLK_EMC>;
- clock-names = "actmon", "emc";
- resets = <&tegra_car 119>;
- reset-names = "actmon";
- operating-points-v2 = <&dfs_opp_table>;
- interconnects = <&mc TEGRA124_MC_MPCORER &emc>;
- interconnect-names = "cpu";
- };
diff --git a/Documentation/devicetree/bindings/devfreq/nvidia,tegra30-actmon.yaml b/Documentation/devicetree/bindings/devfreq/nvidia,tegra30-actmon.yaml
new file mode 100644
index 000000000000..e3379d106728
--- /dev/null
+++ b/Documentation/devicetree/bindings/devfreq/nvidia,tegra30-actmon.yaml
@@ -0,0 +1,126 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/devfreq/nvidia,tegra30-actmon.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: NVIDIA Tegra30 Activity Monitor
+
+maintainers:
+ - Dmitry Osipenko <digetx@gmail.com>
+ - Jon Hunter <jonathanh@nvidia.com>
+ - Thierry Reding <thierry.reding@gmail.com>
+
+description: |
+ The activity monitor block collects statistics about the behaviour of other
+ components in the system. This information can be used to derive the rate at
+ which the external memory needs to be clocked in order to serve all requests
+ from the monitored clients.
+
+properties:
+ compatible:
+ enum:
+ - nvidia,tegra30-actmon
+ - nvidia,tegra114-actmon
+ - nvidia,tegra124-actmon
+ - nvidia,tegra210-actmon
+
+ reg:
+ maxItems: 1
+
+ clocks:
+ maxItems: 2
+
+ clock-names:
+ items:
+ - const: actmon
+ - const: emc
+
+ resets:
+ maxItems: 1
+
+ reset-names:
+ items:
+ - const: actmon
+
+ interrupts:
+ maxItems: 1
+
+ interconnects:
+ minItems: 1
+ maxItems: 12
+
+ interconnect-names:
+ minItems: 1
+ maxItems: 12
+ description:
+ Should include name of the interconnect path for each interconnect
+ entry. Consult TRM documentation for information about available
+ memory clients, see MEMORY CONTROLLER and ACTIVITY MONITOR sections.
+
+ operating-points-v2:
+ description:
+ Should contain freqs and voltages and opp-supported-hw property, which
+ is a bitfield indicating SoC speedo ID mask.
+
+ "#cooling-cells":
+ const: 2
+
+required:
+ - compatible
+ - reg
+ - clocks
+ - clock-names
+ - resets
+ - reset-names
+ - interrupts
+ - interconnects
+ - interconnect-names
+ - operating-points-v2
+ - "#cooling-cells"
+
+additionalProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/memory/tegra30-mc.h>
+
+ mc: memory-controller@7000f000 {
+ compatible = "nvidia,tegra30-mc";
+ reg = <0x7000f000 0x400>;
+ clocks = <&clk 32>;
+ clock-names = "mc";
+
+ interrupts = <0 77 4>;
+
+ #iommu-cells = <1>;
+ #reset-cells = <1>;
+ #interconnect-cells = <1>;
+ };
+
+ emc: external-memory-controller@7000f400 {
+ compatible = "nvidia,tegra30-emc";
+ reg = <0x7000f400 0x400>;
+ interrupts = <0 78 4>;
+ clocks = <&clk 57>;
+
+ nvidia,memory-controller = <&mc>;
+ operating-points-v2 = <&dvfs_opp_table>;
+ power-domains = <&domain>;
+
+ #interconnect-cells = <0>;
+ };
+
+ actmon@6000c800 {
+ compatible = "nvidia,tegra30-actmon";
+ reg = <0x6000c800 0x400>;
+ interrupts = <0 45 4>;
+ clocks = <&clk 119>, <&clk 57>;
+ clock-names = "actmon", "emc";
+ resets = <&rst 119>;
+ reset-names = "actmon";
+ operating-points-v2 = <&dvfs_opp_table>;
+ interconnects = <&mc TEGRA30_MC_MPCORER &emc>;
+ interconnect-names = "cpu-read";
+ #cooling-cells = <2>;
+ };
diff --git a/Documentation/power/runtime_pm.rst b/Documentation/power/runtime_pm.rst
index 18ae21bf7f92..d6bf84f061f4 100644
--- a/Documentation/power/runtime_pm.rst
+++ b/Documentation/power/runtime_pm.rst
@@ -378,7 +378,11 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
`int pm_runtime_get_sync(struct device *dev);`
- increment the device's usage counter, run pm_runtime_resume(dev) and
- return its result
+ return its result;
+ note that it does not drop the device's usage counter on errors, so
+ consider using pm_runtime_resume_and_get() instead of it, especially
+ if its return value is checked by the caller, as this is likely to
+ result in cleaner code.
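A minimal sketch of the recommended pattern (the driver and foo_transfer() are hypothetical; only the pm_runtime_* calls are real API): pm_runtime_resume_and_get() already drops the usage counter on failure, so the error path needs no extra put, unlike a pm_runtime_get_sync() based version.

#include <linux/pm_runtime.h>

static int foo_transfer(struct device *dev)
{
	int ret;

	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return ret;	/* usage counter already dropped on error */

	/* ... access the hardware ... */

	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);
	return 0;
}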
`int pm_runtime_get_if_in_use(struct device *dev);`
- return -EINVAL if 'power.disable_depth' is nonzero; otherwise, if the
@@ -827,6 +831,15 @@ or driver about runtime power changes. Instead, the driver for the device's
parent must take responsibility for telling the device's driver when the
parent's power state changes.
+Note that in some cases it may not be desirable for subsystems/drivers to call
+pm_runtime_no_callbacks() for their devices. This could be because a subset of
+the runtime PM callbacks needs to be implemented, because a platform-dependent
+PM domain could get attached to the device, or because the device is power
+managed through a supplier device link. For these reasons, and to avoid
+boilerplate code in subsystems/drivers, the PM core allows runtime PM callbacks
+to be unassigned. More precisely, if a callback pointer is NULL, the PM core
+will act as though there was a callback and it returned 0.
+
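As a hedged illustration of the unassigned-callback behaviour described above (the driver and its callbacks are hypothetical), a driver may now provide only the callbacks it needs and leave ->runtime_idle() NULL; the PM core then acts as if that callback existed and returned 0, rather than failing with -ENOSYS:

#include <linux/pm.h>
#include <linux/pm_runtime.h>

static int foo_runtime_suspend(struct device *dev)
{
	/* ... put the device into a low-power state ... */
	return 0;
}

static int foo_runtime_resume(struct device *dev)
{
	/* ... bring the device back to full power ... */
	return 0;
}

/* No ->runtime_idle() callback is provided on purpose: with unassigned
 * callbacks allowed, the PM core treats it as present and returning 0. */
static const struct dev_pm_ops foo_pm_ops = {
	SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};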
9. Autosuspend, or automatically-delayed suspends
=================================================
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index b6a782c31613..ab0b740cc0f1 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -379,6 +379,44 @@ err:
return ret;
}
+static int genpd_set_performance_state(struct device *dev, unsigned int state)
+{
+ struct generic_pm_domain *genpd = dev_to_genpd(dev);
+ struct generic_pm_domain_data *gpd_data = dev_gpd_data(dev);
+ unsigned int prev_state;
+ int ret;
+
+ prev_state = gpd_data->performance_state;
+ if (prev_state == state)
+ return 0;
+
+ gpd_data->performance_state = state;
+ state = _genpd_reeval_performance_state(genpd, state);
+
+ ret = _genpd_set_performance_state(genpd, state, 0);
+ if (ret)
+ gpd_data->performance_state = prev_state;
+
+ return ret;
+}
+
+static int genpd_drop_performance_state(struct device *dev)
+{
+ unsigned int prev_state = dev_gpd_data(dev)->performance_state;
+
+ if (!genpd_set_performance_state(dev, 0))
+ return prev_state;
+
+ return 0;
+}
+
+static void genpd_restore_performance_state(struct device *dev,
+ unsigned int state)
+{
+ if (state)
+ genpd_set_performance_state(dev, state);
+}
+
/**
* dev_pm_genpd_set_performance_state- Set performance state of device's power
* domain.
@@ -397,8 +435,6 @@ err:
int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
{
struct generic_pm_domain *genpd;
- struct generic_pm_domain_data *gpd_data;
- unsigned int prev;
int ret;
genpd = dev_to_genpd_safe(dev);
@@ -410,16 +446,7 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
return -EINVAL;
genpd_lock(genpd);
-
- gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
- prev = gpd_data->performance_state;
- gpd_data->performance_state = state;
-
- state = _genpd_reeval_performance_state(genpd, state);
- ret = _genpd_set_performance_state(genpd, state, 0);
- if (ret)
- gpd_data->performance_state = prev;
-
+ ret = genpd_set_performance_state(dev, state);
genpd_unlock(genpd);
return ret;
@@ -572,6 +599,7 @@ static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
* RPM status of the related device is in an intermediate state, not yet turned
* into RPM_SUSPENDED. This means genpd_power_off() must allow one device to not
* be RPM_SUSPENDED, while it tries to power off the PM domain.
+ * @depth: nesting count for lockdep.
*
* If all of the @genpd's devices have been suspended and all of its subdomains
* have been powered down, remove power from @genpd.
@@ -832,7 +860,8 @@ static int genpd_runtime_suspend(struct device *dev)
{
struct generic_pm_domain *genpd;
bool (*suspend_ok)(struct device *__dev);
- struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
+ struct generic_pm_domain_data *gpd_data = dev_gpd_data(dev);
+ struct gpd_timing_data *td = &gpd_data->td;
bool runtime_pm = pm_runtime_enabled(dev);
ktime_t time_start;
s64 elapsed_ns;
@@ -889,6 +918,7 @@ static int genpd_runtime_suspend(struct device *dev)
return 0;
genpd_lock(genpd);
+ gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
genpd_power_off(genpd, true, 0);
genpd_unlock(genpd);
@@ -906,7 +936,8 @@ static int genpd_runtime_suspend(struct device *dev)
static int genpd_runtime_resume(struct device *dev)
{
struct generic_pm_domain *genpd;
- struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
+ struct generic_pm_domain_data *gpd_data = dev_gpd_data(dev);
+ struct gpd_timing_data *td = &gpd_data->td;
bool runtime_pm = pm_runtime_enabled(dev);
ktime_t time_start;
s64 elapsed_ns;
@@ -930,6 +961,8 @@ static int genpd_runtime_resume(struct device *dev)
genpd_lock(genpd);
ret = genpd_power_on(genpd, 0);
+ if (!ret)
+ genpd_restore_performance_state(dev, gpd_data->rpm_pstate);
genpd_unlock(genpd);
if (ret)
@@ -968,6 +1001,7 @@ err_stop:
err_poweroff:
if (!pm_runtime_is_irq_safe(dev) || genpd_is_irq_safe(genpd)) {
genpd_lock(genpd);
+ gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
genpd_power_off(genpd, true, 0);
genpd_unlock(genpd);
}
@@ -2505,7 +2539,7 @@ EXPORT_SYMBOL_GPL(of_genpd_remove_subdomain);
/**
* of_genpd_remove_last - Remove the last PM domain registered for a provider
- * @provider: Pointer to device structure associated with provider
+ * @np: Pointer to device node associated with provider
*
* Find the last PM domain that was added by a particular provider and
* remove this PM domain from the list of PM domains. The provider is
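A brief, hypothetical sketch of the consumer side (the device, FOO_PERF_STATE_HIGH and foo_start() are illustrative only): the driver just states its performance requirement, and with the changes above genpd drops that state when the domain is powered off during runtime suspend and restores it on runtime resume, so no per-suspend bookkeeping is needed in the driver.

#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>

#define FOO_PERF_STATE_HIGH	3	/* hypothetical performance level */

static int foo_start(struct device *dev)
{
	int ret;

	/* Express this device's performance requirement. */
	ret = dev_pm_genpd_set_performance_state(dev, FOO_PERF_STATE_HIGH);
	if (ret)
		return ret;

	/* Runtime-resume the device; the PM domain is powered up and the
	 * requested performance state is applied as needed. */
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		dev_pm_genpd_set_performance_state(dev, 0);

	return ret;
}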
diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c
index c6c218758f0b..cd08c5885190 100644
--- a/drivers/base/power/domain_governor.c
+++ b/drivers/base/power/domain_governor.c
@@ -252,6 +252,7 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
/**
* _default_power_down_ok - Default generic PM domain power off governor routine.
* @pd: PM domain to check.
+ * @now: current ktime.
*
* This routine must be executed under the PM domain's lock.
*/
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index b570848d23e0..8a66eaf731e4 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -345,7 +345,7 @@ static void rpm_suspend_suppliers(struct device *dev)
static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
__releases(&dev->power.lock) __acquires(&dev->power.lock)
{
- int retval, idx;
+ int retval = 0, idx;
bool use_links = dev->power.links_count > 0;
if (dev->power.irq_safe) {
@@ -373,7 +373,8 @@ static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
}
}
- retval = cb(dev);
+ if (cb)
+ retval = cb(dev);
if (dev->power.irq_safe) {
spin_lock(&dev->power.lock);
@@ -446,7 +447,10 @@ static int rpm_idle(struct device *dev, int rpmflags)
/* Pending requests need to be canceled. */
dev->power.request = RPM_REQ_NONE;
- if (dev->power.no_callbacks)
+ callback = RPM_GET_CALLBACK(dev, runtime_idle);
+
+ /* If no callback assume success. */
+ if (!callback || dev->power.no_callbacks)
goto out;
/* Carry out an asynchronous or a synchronous idle notification. */
@@ -462,10 +466,7 @@ static int rpm_idle(struct device *dev, int rpmflags)
dev->power.idle_notification = true;
- callback = RPM_GET_CALLBACK(dev, runtime_idle);
-
- if (callback)
- retval = __rpm_callback(callback, dev);
+ retval = __rpm_callback(callback, dev);
dev->power.idle_notification = false;
wake_up_all(&dev->power.wait_queue);
@@ -484,9 +485,6 @@ static int rpm_callback(int (*cb)(struct device *), struct device *dev)
{
int retval;
- if (!cb)
- return -ENOSYS;
-
if (dev->power.memalloc_noio) {
unsigned int noio_flag;
diff --git a/drivers/base/power/wakeirq.c b/drivers/base/power/wakeirq.c
index 8e021082dba8..3bad3266a2ad 100644
--- a/drivers/base/power/wakeirq.c
+++ b/drivers/base/power/wakeirq.c
@@ -182,7 +182,6 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
wirq->dev = dev;
wirq->irq = irq;
- irq_set_status_flags(irq, IRQ_NOAUTOEN);
/* Prevent deferred spurious wakeirqs with disable_irq_nosync() */
irq_set_status_flags(irq, IRQ_DISABLE_UNLAZY);
@@ -192,7 +191,8 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
* so we use a threaded irq.
*/
err = request_threaded_irq(irq, NULL, handle_threaded_wake_irq,
- IRQF_ONESHOT, wirq->name, wirq);
+ IRQF_ONESHOT | IRQF_NO_AUTOEN,
+ wirq->name, wirq);
if (err)
goto err_free_name;
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 802abc925b2a..cbab834c37a0 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1367,9 +1367,14 @@ static int cpufreq_online(unsigned int cpu)
goto out_free_policy;
}
+ /*
+ * The initialization has succeeded and the policy is online.
+ * If there is a problem with its frequency table, take it
+ * offline and drop it.
+ */
ret = cpufreq_table_validate_and_sort(policy);
if (ret)
- goto out_exit_policy;
+ goto out_offline_policy;
/* related_cpus should at least include policy->cpus. */
cpumask_copy(policy->related_cpus, policy->cpus);
@@ -1515,6 +1520,10 @@ out_destroy_policy:
up_write(&policy->rwsem);
+out_offline_policy:
+ if (cpufreq_driver->offline)
+ cpufreq_driver->offline(policy);
+
out_exit_policy:
if (cpufreq_driver->exit)
cpufreq_driver->exit(policy);
diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
index da717f7cd9a9..1570d6f3e75d 100644
--- a/drivers/cpufreq/cpufreq_stats.c
+++ b/drivers/cpufreq/cpufreq_stats.c
@@ -211,7 +211,7 @@ void cpufreq_stats_free_table(struct cpufreq_policy *policy)
void cpufreq_stats_create_table(struct cpufreq_policy *policy)
{
- unsigned int i = 0, count = 0, ret = -ENOMEM;
+ unsigned int i = 0, count;
struct cpufreq_stats *stats;
unsigned int alloc_size;
struct cpufreq_frequency_table *pos;
@@ -253,8 +253,7 @@ void cpufreq_stats_create_table(struct cpufreq_policy *policy)
stats->last_index = freq_table_get_index(stats, policy->cur);
policy->stats = stats;
- ret = sysfs_create_group(&policy->kobj, &stats_attr_group);
- if (!ret)
+ if (!sysfs_create_group(&policy->kobj, &stats_attr_group))
return;
/* We failed, release resources */
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 0e69dffd5a76..6012964df51b 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -121,9 +121,10 @@ struct sample {
* @max_pstate_physical:This is physical Max P state for a processor
* This can be higher than the max_pstate which can
* be limited by platform thermal design power limits
- * @scaling: Scaling factor to convert frequency to cpufreq
- * frequency units
+ * @perf_ctl_scaling: PERF_CTL P-state to frequency scaling factor
+ * @scaling: Scaling factor between performance and frequency
* @turbo_pstate: Max Turbo P state possible for this platform
+ * @min_freq: @min_pstate frequency in cpufreq units
* @max_freq: @max_pstate frequency in cpufreq units
* @turbo_freq: @turbo_pstate frequency in cpufreq units
*
@@ -134,8 +135,10 @@ struct pstate_data {
int min_pstate;
int max_pstate;
int max_pstate_physical;
+ int perf_ctl_scaling;
int scaling;
int turbo_pstate;
+ unsigned int min_freq;
unsigned int max_freq;
unsigned int turbo_freq;
};
@@ -366,7 +369,7 @@ static void intel_pstate_set_itmt_prio(int cpu)
}
}
-static int intel_pstate_get_cppc_guranteed(int cpu)
+static int intel_pstate_get_cppc_guaranteed(int cpu)
{
struct cppc_perf_caps cppc_perf;
int ret;
@@ -382,7 +385,7 @@ static int intel_pstate_get_cppc_guranteed(int cpu)
}
#else /* CONFIG_ACPI_CPPC_LIB */
-static void intel_pstate_set_itmt_prio(int cpu)
+static inline void intel_pstate_set_itmt_prio(int cpu)
{
}
#endif /* CONFIG_ACPI_CPPC_LIB */
@@ -467,6 +470,20 @@ static void intel_pstate_exit_perf_limits(struct cpufreq_policy *policy)
acpi_processor_unregister_performance(policy->cpu);
}
+
+static bool intel_pstate_cppc_perf_valid(u32 perf, struct cppc_perf_caps *caps)
+{
+ return perf && perf <= caps->highest_perf && perf >= caps->lowest_perf;
+}
+
+static bool intel_pstate_cppc_perf_caps(struct cpudata *cpu,
+ struct cppc_perf_caps *caps)
+{
+ if (cppc_get_perf_caps(cpu->cpu, caps))
+ return false;
+
+ return caps->highest_perf && caps->lowest_perf <= caps->highest_perf;
+}
#else /* CONFIG_ACPI */
static inline void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy)
{
@@ -483,12 +500,146 @@ static inline bool intel_pstate_acpi_pm_profile_server(void)
#endif /* CONFIG_ACPI */
#ifndef CONFIG_ACPI_CPPC_LIB
-static int intel_pstate_get_cppc_guranteed(int cpu)
+static inline int intel_pstate_get_cppc_guaranteed(int cpu)
{
return -ENOTSUPP;
}
#endif /* CONFIG_ACPI_CPPC_LIB */
+static void intel_pstate_hybrid_hwp_perf_ctl_parity(struct cpudata *cpu)
+{
+ pr_debug("CPU%d: Using PERF_CTL scaling for HWP\n", cpu->cpu);
+
+ cpu->pstate.scaling = cpu->pstate.perf_ctl_scaling;
+}
+
+/**
+ * intel_pstate_hybrid_hwp_calibrate - Calibrate HWP performance levels.
+ * @cpu: Target CPU.
+ *
+ * On hybrid processors, HWP may expose more performance levels than there are
+ * P-states accessible through the PERF_CTL interface. If that happens, the
+ * scaling factor between HWP performance levels and CPU frequency will be less
+ * than the scaling factor between P-state values and CPU frequency.
+ *
+ * In that case, the scaling factor between HWP performance levels and CPU
+ * frequency needs to be determined which can be done with the help of the
+ * observation that certain HWP performance levels should correspond to certain
+ * P-states, like for example the HWP highest performance should correspond
+ * to the maximum turbo P-state of the CPU.
+ */
+static void intel_pstate_hybrid_hwp_calibrate(struct cpudata *cpu)
+{
+ int perf_ctl_max_phys = cpu->pstate.max_pstate_physical;
+ int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;
+ int perf_ctl_turbo = pstate_funcs.get_turbo();
+ int turbo_freq = perf_ctl_turbo * perf_ctl_scaling;
+ int perf_ctl_max = pstate_funcs.get_max();
+ int max_freq = perf_ctl_max * perf_ctl_scaling;
+ int scaling = INT_MAX;
+ int freq;
+
+ pr_debug("CPU%d: perf_ctl_max_phys = %d\n", cpu->cpu, perf_ctl_max_phys);
+ pr_debug("CPU%d: perf_ctl_max = %d\n", cpu->cpu, perf_ctl_max);
+ pr_debug("CPU%d: perf_ctl_turbo = %d\n", cpu->cpu, perf_ctl_turbo);
+ pr_debug("CPU%d: perf_ctl_scaling = %d\n", cpu->cpu, perf_ctl_scaling);
+
+ pr_debug("CPU%d: HWP_CAP guaranteed = %d\n", cpu->cpu, cpu->pstate.max_pstate);
+ pr_debug("CPU%d: HWP_CAP highest = %d\n", cpu->cpu, cpu->pstate.turbo_pstate);
+
+#ifdef CONFIG_ACPI
+ if (IS_ENABLED(CONFIG_ACPI_CPPC_LIB)) {
+ struct cppc_perf_caps caps;
+
+ if (intel_pstate_cppc_perf_caps(cpu, &caps)) {
+ if (intel_pstate_cppc_perf_valid(caps.nominal_perf, &caps)) {
+ pr_debug("CPU%d: Using CPPC nominal\n", cpu->cpu);
+
+ /*
+ * If the CPPC nominal performance is valid, it
+ * can be assumed to correspond to cpu_khz.
+ */
+ if (caps.nominal_perf == perf_ctl_max_phys) {
+ intel_pstate_hybrid_hwp_perf_ctl_parity(cpu);
+ return;
+ }
+ scaling = DIV_ROUND_UP(cpu_khz, caps.nominal_perf);
+ } else if (intel_pstate_cppc_perf_valid(caps.guaranteed_perf, &caps)) {
+ pr_debug("CPU%d: Using CPPC guaranteed\n", cpu->cpu);
+
+ /*
+ * If the CPPC guaranteed performance is valid,
+ * it can be assumed to correspond to max_freq.
+ */
+ if (caps.guaranteed_perf == perf_ctl_max) {
+ intel_pstate_hybrid_hwp_perf_ctl_parity(cpu);
+ return;
+ }
+ scaling = DIV_ROUND_UP(max_freq, caps.guaranteed_perf);
+ }
+ }
+ }
+#endif
+ /*
+ * If using the CPPC data to compute the HWP-to-frequency scaling factor
+ * doesn't work, use the HWP_CAP guaranteed perf for this purpose with
+ * the assumption that it corresponds to max_freq.
+ */
+ if (scaling > perf_ctl_scaling) {
+ pr_debug("CPU%d: Using HWP_CAP guaranteed\n", cpu->cpu);
+
+ if (cpu->pstate.max_pstate == perf_ctl_max) {
+ intel_pstate_hybrid_hwp_perf_ctl_parity(cpu);
+ return;
+ }
+ scaling = DIV_ROUND_UP(max_freq, cpu->pstate.max_pstate);
+ if (scaling > perf_ctl_scaling) {
+ /*
+ * This should not happen, because it would mean that
+ * the number of HWP perf levels was less than the
+ * number of P-states, so use the PERF_CTL scaling in
+ * that case.
+ */
+ pr_debug("CPU%d: scaling (%d) out of range\n", cpu->cpu,
+ scaling);
+
+ intel_pstate_hybrid_hwp_perf_ctl_parity(cpu);
+ return;
+ }
+ }
+
+ /*
+ * If the product of the HWP performance scaling factor obtained above
+ * and the HWP_CAP highest performance is greater than the maximum turbo
+ * frequency corresponding to the pstate_funcs.get_turbo() return value,
+ * the scaling factor is too high, so recompute it so that the HWP_CAP
+ * highest performance corresponds to the maximum turbo frequency.
+ */
+ if (turbo_freq < cpu->pstate.turbo_pstate * scaling) {
+ pr_debug("CPU%d: scaling too high (%d)\n", cpu->cpu, scaling);
+
+ cpu->pstate.turbo_freq = turbo_freq;
+ scaling = DIV_ROUND_UP(turbo_freq, cpu->pstate.turbo_pstate);
+ }
+
+ cpu->pstate.scaling = scaling;
+
+ pr_debug("CPU%d: HWP-to-frequency scaling factor: %d\n", cpu->cpu, scaling);
+
+ cpu->pstate.max_freq = rounddown(cpu->pstate.max_pstate * scaling,
+ perf_ctl_scaling);
+
+ freq = perf_ctl_max_phys * perf_ctl_scaling;
+ cpu->pstate.max_pstate_physical = DIV_ROUND_UP(freq, scaling);
+
+ cpu->pstate.min_freq = cpu->pstate.min_pstate * perf_ctl_scaling;
+ /*
+ * Cast the min P-state value retrieved via pstate_funcs.get_min() to
+ * the effective range of HWP performance levels.
+ */
+ cpu->pstate.min_pstate = DIV_ROUND_UP(cpu->pstate.min_freq, scaling);
+}
+
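A user-space sketch of the scaling-factor arithmetic above, with purely made-up numbers (none of these values come from real hardware): it derives the HWP-to-frequency scaling factor from the CPPC nominal performance under the cpu_khz assumption, then clamps it so that the HWP_CAP highest level does not overshoot the PERF_CTL turbo frequency, mirroring the checks in intel_pstate_hybrid_hwp_calibrate().

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	/* Hypothetical figures, for illustration only. */
	unsigned int cpu_khz = 2400000;		/* TSC frequency in kHz */
	unsigned int nominal_perf = 24;		/* CPPC nominal performance */
	unsigned int hwp_cap_highest = 48;	/* HWP_CAP highest level */
	unsigned int perf_ctl_turbo = 40;	/* max turbo P-state ratio */
	unsigned int perf_ctl_scaling = 100000;	/* kHz per P-state ratio */

	/* CPPC nominal performance is assumed to correspond to cpu_khz. */
	unsigned int scaling = DIV_ROUND_UP(cpu_khz, nominal_perf);
	unsigned int turbo_freq = perf_ctl_turbo * perf_ctl_scaling;

	/* If the HWP highest level would overshoot the turbo frequency,
	 * recompute the scaling factor so that it does not. */
	if (turbo_freq < hwp_cap_highest * scaling)
		scaling = DIV_ROUND_UP(turbo_freq, hwp_cap_highest);

	printf("HWP-to-frequency scaling: %u kHz per HWP level\n", scaling);
	return 0;
}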
static inline void update_turbo_state(void)
{
u64 misc_en;
@@ -795,19 +946,22 @@ cpufreq_freq_attr_rw(energy_performance_preference);
static ssize_t show_base_frequency(struct cpufreq_policy *policy, char *buf)
{
- struct cpudata *cpu;
- u64 cap;
- int ratio;
+ struct cpudata *cpu = all_cpu_data[policy->cpu];
+ int ratio, freq;
- ratio = intel_pstate_get_cppc_guranteed(policy->cpu);
+ ratio = intel_pstate_get_cppc_guaranteed(policy->cpu);
if (ratio <= 0) {
+ u64 cap;
+
rdmsrl_on_cpu(policy->cpu, MSR_HWP_CAPABILITIES, &cap);
ratio = HWP_GUARANTEED_PERF(cap);
}
- cpu = all_cpu_data[policy->cpu];
+ freq = ratio * cpu->pstate.scaling;
+ if (cpu->pstate.scaling != cpu->pstate.perf_ctl_scaling)
+ freq = rounddown(freq, cpu->pstate.perf_ctl_scaling);
- return sprintf(buf, "%d\n", ratio * cpu->pstate.scaling);
+ return sprintf(buf, "%d\n", freq);
}
cpufreq_freq_attr_ro(base_frequency);
@@ -831,9 +985,20 @@ static void __intel_pstate_get_hwp_cap(struct cpudata *cpu)
static void intel_pstate_get_hwp_cap(struct cpudata *cpu)
{
+ int scaling = cpu->pstate.scaling;
+
__intel_pstate_get_hwp_cap(cpu);
- cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling;
- cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
+
+ cpu->pstate.max_freq = cpu->pstate.max_pstate * scaling;
+ cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * scaling;
+ if (scaling != cpu->pstate.perf_ctl_scaling) {
+ int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;
+
+ cpu->pstate.max_freq = rounddown(cpu->pstate.max_freq,
+ perf_ctl_scaling);
+ cpu->pstate.turbo_freq = rounddown(cpu->pstate.turbo_freq,
+ perf_ctl_scaling);
+ }
}
static void intel_pstate_hwp_set(unsigned int cpu)
@@ -1365,8 +1530,6 @@ define_one_global_rw(energy_efficiency);
static struct attribute *intel_pstate_attributes[] = {
&status.attr,
&no_turbo.attr,
- &turbo_pct.attr,
- &num_pstates.attr,
NULL
};
@@ -1391,6 +1554,14 @@ static void __init intel_pstate_sysfs_expose_params(void)
if (WARN_ON(rc))
return;
+ if (!boot_cpu_has(X86_FEATURE_HYBRID_CPU)) {
+ rc = sysfs_create_file(intel_pstate_kobject, &turbo_pct.attr);
+ WARN_ON(rc);
+
+ rc = sysfs_create_file(intel_pstate_kobject, &num_pstates.attr);
+ WARN_ON(rc);
+ }
+
/*
* If per cpu limits are enforced there are no global limits, so
* return without creating max/min_perf_pct attributes
@@ -1417,6 +1588,11 @@ static void __init intel_pstate_sysfs_remove(void)
sysfs_remove_group(intel_pstate_kobject, &intel_pstate_attr_group);
+ if (!boot_cpu_has(X86_FEATURE_HYBRID_CPU)) {
+ sysfs_remove_file(intel_pstate_kobject, &num_pstates.attr);
+ sysfs_remove_file(intel_pstate_kobject, &turbo_pct.attr);
+ }
+
if (!per_cpu_limits) {
sysfs_remove_file(intel_pstate_kobject, &max_perf_pct.attr);
sysfs_remove_file(intel_pstate_kobject, &min_perf_pct.attr);
@@ -1713,19 +1889,33 @@ static void intel_pstate_max_within_limits(struct cpudata *cpu)
static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
{
+ bool hybrid_cpu = boot_cpu_has(X86_FEATURE_HYBRID_CPU);
+ int perf_ctl_max_phys = pstate_funcs.get_max_physical();
+ int perf_ctl_scaling = hybrid_cpu ? cpu_khz / perf_ctl_max_phys :
+ pstate_funcs.get_scaling();
+
cpu->pstate.min_pstate = pstate_funcs.get_min();
- cpu->pstate.max_pstate_physical = pstate_funcs.get_max_physical();
- cpu->pstate.scaling = pstate_funcs.get_scaling();
+ cpu->pstate.max_pstate_physical = perf_ctl_max_phys;
+ cpu->pstate.perf_ctl_scaling = perf_ctl_scaling;
if (hwp_active && !hwp_mode_bdw) {
__intel_pstate_get_hwp_cap(cpu);
+
+ if (hybrid_cpu)
+ intel_pstate_hybrid_hwp_calibrate(cpu);
+ else
+ cpu->pstate.scaling = perf_ctl_scaling;
} else {
+ cpu->pstate.scaling = perf_ctl_scaling;
cpu->pstate.max_pstate = pstate_funcs.get_max();
cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
}
- cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling;
- cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
+ if (cpu->pstate.scaling == perf_ctl_scaling) {
+ cpu->pstate.min_freq = cpu->pstate.min_pstate * perf_ctl_scaling;
+ cpu->pstate.max_freq = cpu->pstate.max_pstate * perf_ctl_scaling;
+ cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * perf_ctl_scaling;
+ }
if (pstate_funcs.get_aperf_mperf_shift)
cpu->aperf_mperf_shift = pstate_funcs.get_aperf_mperf_shift();
@@ -2087,6 +2277,8 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
X86_MATCH(ATOM_GOLDMONT, core_funcs),
X86_MATCH(ATOM_GOLDMONT_PLUS, core_funcs),
X86_MATCH(SKYLAKE_X, core_funcs),
+ X86_MATCH(COMETLAKE, core_funcs),
+ X86_MATCH(ICELAKE_X, core_funcs),
{}
};
MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
@@ -2195,23 +2387,34 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu,
unsigned int policy_min,
unsigned int policy_max)
{
- int scaling = cpu->pstate.scaling;
+ int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;
int32_t max_policy_perf, min_policy_perf;
+ max_policy_perf = policy_max / perf_ctl_scaling;
+ if (policy_max == policy_min) {
+ min_policy_perf = max_policy_perf;
+ } else {
+ min_policy_perf = policy_min / perf_ctl_scaling;
+ min_policy_perf = clamp_t(int32_t, min_policy_perf,
+ 0, max_policy_perf);
+ }
+
/*
* HWP needs some special consideration, because HWP_REQUEST uses
* abstract values to represent performance rather than pure ratios.
*/
- if (hwp_active)
+ if (hwp_active) {
intel_pstate_get_hwp_cap(cpu);
- max_policy_perf = policy_max / scaling;
- if (policy_max == policy_min) {
- min_policy_perf = max_policy_perf;
- } else {
- min_policy_perf = policy_min / scaling;
- min_policy_perf = clamp_t(int32_t, min_policy_perf,
- 0, max_policy_perf);
+ if (cpu->pstate.scaling != perf_ctl_scaling) {
+ int scaling = cpu->pstate.scaling;
+ int freq;
+
+ freq = max_policy_perf * perf_ctl_scaling;
+ max_policy_perf = DIV_ROUND_UP(freq, scaling);
+ freq = min_policy_perf * perf_ctl_scaling;
+ min_policy_perf = DIV_ROUND_UP(freq, scaling);
+ }
}
pr_debug("cpu:%d min_policy_perf:%d max_policy_perf:%d\n",
@@ -2405,7 +2608,7 @@ static int __intel_pstate_cpu_init(struct cpufreq_policy *policy)
cpu->min_perf_ratio = 0;
/* cpuinfo and default policy values */
- policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling;
+ policy->cpuinfo.min_freq = cpu->pstate.min_freq;
update_turbo_state();
global.turbo_disabled_mf = global.turbo_disabled;
policy->cpuinfo.max_freq = global.turbo_disabled ?
@@ -3135,6 +3338,8 @@ hwp_cpu_matched:
}
pr_info("HWP enabled\n");
+ } else if (boot_cpu_has(X86_FEATURE_HYBRID_CPU)) {
+ pr_warn("Problematic setup: Hybrid processor with disabled HWP\n");
}
return 0;
diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c
index d05e761d9572..afc59b292153 100644
--- a/drivers/cpufreq/loongson2_cpufreq.c
+++ b/drivers/cpufreq/loongson2_cpufreq.c
@@ -16,7 +16,6 @@
#include <linux/cpufreq.h>
#include <linux/module.h>
#include <linux/err.h>
-#include <linux/sched.h> /* set_cpus_allowed() */
#include <linux/delay.h>
#include <linux/platform_device.h>
diff --git a/drivers/cpufreq/sc520_freq.c b/drivers/cpufreq/sc520_freq.c
index 73a208559fe2..330c8d6cf93c 100644
--- a/drivers/cpufreq/sc520_freq.c
+++ b/drivers/cpufreq/sc520_freq.c
@@ -42,6 +42,7 @@ static unsigned int sc520_freq_get_cpu_frequency(unsigned int cpu)
default:
pr_err("error: cpuctl register has unexpected value %02x\n",
clockspeed_reg);
+ fallthrough;
case 0x01:
return 100000;
case 0x02:
diff --git a/drivers/cpufreq/sh-cpufreq.c b/drivers/cpufreq/sh-cpufreq.c
index 0ac265d47ef0..1a251e635ebd 100644
--- a/drivers/cpufreq/sh-cpufreq.c
+++ b/drivers/cpufreq/sh-cpufreq.c
@@ -23,7 +23,6 @@
#include <linux/cpumask.h>
#include <linux/cpu.h>
#include <linux/smp.h>
-#include <linux/sched.h> /* set_cpus_allowed() */
#include <linux/clk.h>
#include <linux/percpu.h>
#include <linux/sh_clk.h>
diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c
index ac4bb27d69b0..7b91060e82f6 100644
--- a/drivers/cpuidle/governors/teo.c
+++ b/drivers/cpuidle/governors/teo.c
@@ -2,47 +2,103 @@
/*
* Timer events oriented CPU idle governor
*
- * Copyright (C) 2018 Intel Corporation
+ * Copyright (C) 2018 - 2021 Intel Corporation
* Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ */
+
+/**
+ * DOC: teo-description
*
* The idea of this governor is based on the observation that on many systems
* timer events are two or more orders of magnitude more frequent than any
- * other interrupts, so they are likely to be the most significant source of CPU
+ * other interrupts, so they are likely to be the most significant cause of CPU
* wakeups from idle states. Moreover, information about what happened in the
* (relatively recent) past can be used to estimate whether or not the deepest
- * idle state with target residency within the time to the closest timer is
- * likely to be suitable for the upcoming idle time of the CPU and, if not, then
- * which of the shallower idle states to choose.
+ * idle state with target residency within the (known) time till the closest
+ * timer event, referred to as the sleep length, is likely to be suitable for
+ * the upcoming CPU idle period and, if not, then which of the shallower idle
+ * states to choose instead of it.
+ *
+ * Of course, non-timer wakeup sources are more important in some use cases
+ * which can be covered by taking a few most recent idle time intervals of the
+ * CPU into account. However, even in that context it is not necessary to
+ * consider idle duration values greater than the sleep length, because the
+ * closest timer will ultimately wake up the CPU anyway unless it is woken up
+ * earlier.
+ *
+ * Thus this governor estimates whether or not the prospective idle duration of
+ * a CPU is likely to be significantly shorter than the sleep length and selects
+ * an idle state for it accordingly.
+ *
+ * The computations carried out by this governor are based on using bins whose
+ * boundaries are aligned with the target residency parameter values of the CPU
+ * idle states provided by the %CPUIdle driver in the ascending order. That is,
+ * the first bin spans from 0 up to, but not including, the target residency of
+ * the second idle state (idle state 1), the second bin spans from the target
+ * residency of idle state 1 up to, but not including, the target residency of
+ * idle state 2, the third bin spans from the target residency of idle state 2
+ * up to, but not including, the target residency of idle state 3 and so on.
+ * The last bin spans from the target residency of the deepest idle state
+ * supplied by the driver to infinity.
+ *
+ * Two metrics called "hits" and "intercepts" are associated with each bin.
+ * They are updated every time before selecting an idle state for the given CPU
+ * in accordance with what happened last time.
+ *
+ * The "hits" metric reflects the relative frequency of situations in which the
+ * sleep length and the idle duration measured after CPU wakeup fall into the
+ * same bin (that is, the CPU appears to wake up "on time" relative to the sleep
+ * length). In turn, the "intercepts" metric reflects the relative frequency of
+ * situations in which the measured idle duration is so much shorter than the
+ * sleep length that the bin it falls into corresponds to an idle state
+ * shallower than the one whose bin is fallen into by the sleep length (these
+ * situations are referred to as "intercepts" below).
+ *
+ * In addition to the metrics described above, the governor counts recent
+ * intercepts (that is, intercepts that have occurred during the last
+ * %NR_RECENT invocations of it for the given CPU) for each bin.
*
- * Of course, non-timer wakeup sources are more important in some use cases and
- * they can be covered by taking a few most recent idle time intervals of the
- * CPU into account. However, even in that case it is not necessary to consider
- * idle duration values greater than the time till the closest timer, as the
- * patterns that they may belong to produce average values close enough to
- * the time till the closest timer (sleep length) anyway.
+ * In order to select an idle state for a CPU, the governor takes the following
+ * steps (modulo the possible latency constraint that must be taken into account
+ * too):
*
- * Thus this governor estimates whether or not the upcoming idle time of the CPU
- * is likely to be significantly shorter than the sleep length and selects an
- * idle state for it in accordance with that, as follows:
+ * 1. Find the deepest CPU idle state whose target residency does not exceed
+ * the current sleep length (the candidate idle state) and compute 3 sums as
+ * follows:
*
- * - Find an idle state on the basis of the sleep length and state statistics
- * collected over time:
+ * - The sum of the "hits" and "intercepts" metrics for the candidate state
+ * and all of the deeper idle states (it represents the cases in which the
+ * CPU was idle long enough to avoid being intercepted if the sleep length
+ * had been equal to the current one).
*
- * o Find the deepest idle state whose target residency is less than or equal
- * to the sleep length.
+ * - The sum of the "intercepts" metrics for all of the idle states shallower
+ * than the candidate one (it represents the cases in which the CPU was not
+ * idle long enough to avoid being intercepted if the sleep length had been
+ * equal to the current one).
*
- * o Select it if it matched both the sleep length and the observed idle
- * duration in the past more often than it matched the sleep length alone
- * (i.e. the observed idle duration was significantly shorter than the sleep
- * length matched by it).
+ * - The sum of the numbers of recent intercepts for all of the idle states
+ * shallower than the candidate one.
*
- * o Otherwise, select the shallower state with the greatest matched "early"
- * wakeups metric.
+ * 2. If the second sum is greater than the first one or the third sum is
+ * greater than %NR_RECENT / 2, the CPU is likely to wake up early, so look
+ * for an alternative idle state to select.
*
- * - If the majority of the most recent idle duration values are below the
- * target residency of the idle state selected so far, use those values to
- * compute the new expected idle duration and find an idle state matching it
- * (which has to be shallower than the one selected so far).
+ * - Traverse the idle states shallower than the candidate one in the
+ * descending order.
+ *
+ * - For each of them compute the sum of the "intercepts" metrics and the sum
+ * of the numbers of recent intercepts over all of the idle states between
+ * it and the candidate one (including the former and excluding the
+ * latter).
+ *
+ * - If each of these sums that needs to be taken into account (because the
+ * check related to it has indicated that the CPU is likely to wake up
+ * early) is greater than a half of the corresponding sum computed in step
+ * 1 (which means that the target residency of the state in question had
+ * not exceeded the idle duration in over a half of the relevant cases),
+ * select the given idle state instead of the candidate one.
+ *
+ * 3. By default, select the candidate state.
*/
#include <linux/cpuidle.h>
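To make the three selection steps described above concrete, here is a user-space simplification (a sketch only: it ignores disabled states, the latency constraint and the tick handling, and all data in main() is made up) that mirrors the bin sums used by the governor:

#include <stdio.h>

#define NR_STATES	4
#define NR_RECENT	9

struct teo_bin {
	unsigned int intercepts;
	unsigned int hits;
	unsigned int recent;
};

static int select_state(const struct teo_bin *bins,
			const long long *target_residency_ns,
			long long sleep_length_ns, unsigned int total)
{
	unsigned int intercept_sum = 0, hit_sum = 0, recent_sum = 0;
	unsigned int idx_intercept_sum = 0, idx_hit_sum = 0, idx_recent_sum = 0;
	int alt_intercepts, alt_recent;
	int idx = 0, i;

	/* Step 1: candidate = deepest state fitting the sleep length;
	 * also sum up the metrics of all shallower bins. */
	for (i = 1; i < NR_STATES; i++) {
		intercept_sum += bins[i - 1].intercepts;
		hit_sum += bins[i - 1].hits;
		recent_sum += bins[i - 1].recent;

		if (target_residency_ns[i] > sleep_length_ns)
			break;

		idx = i;
		idx_intercept_sum = intercept_sum;
		idx_hit_sum = hit_sum;
		idx_recent_sum = recent_sum;
	}

	/* Step 2: if an early wakeup looks likely, pick the deepest
	 * shallower state covering over a half of the relevant sums. */
	alt_intercepts = 2 * idx_intercept_sum > total - idx_hit_sum;
	alt_recent = idx_recent_sum > NR_RECENT / 2;
	if (alt_intercepts || alt_recent) {
		unsigned int span_intercepts = 0, span_recent = 0;

		for (i = idx - 1; i >= 0; i--) {
			span_intercepts += bins[i].intercepts;
			span_recent += bins[i].recent;

			if ((!alt_recent || 2 * span_recent > idx_recent_sum) &&
			    (!alt_intercepts ||
			     2 * span_intercepts > idx_intercept_sum))
				return i;
		}
	}

	/* Step 3: by default, use the candidate state. */
	return idx;
}

int main(void)
{
	/* Purely illustrative metrics for a four-state driver. */
	struct teo_bin bins[NR_STATES] = {
		{ 10, 2, 1 }, { 30, 5, 4 }, { 5, 40, 0 }, { 2, 20, 0 },
	};
	long long residency_ns[NR_STATES] = { 0, 50000, 400000, 2000000 };
	unsigned int total = 0;
	int i;

	for (i = 0; i < NR_STATES; i++)
		total += bins[i].hits + bins[i].intercepts;

	/* With a 1 ms sleep length the candidate is state 2, but the
	 * recent intercepts make the governor fall back to state 1. */
	printf("selected state: %d\n",
	       select_state(bins, residency_ns, 1000000, total));
	return 0;
}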
@@ -60,65 +116,51 @@
/*
* Number of the most recent idle duration values to take into consideration for
- * the detection of wakeup patterns.
+ * the detection of recent early wakeup patterns.
*/
-#define INTERVALS 8
+#define NR_RECENT 9
/**
- * struct teo_idle_state - Idle state data used by the TEO cpuidle governor.
- * @early_hits: "Early" CPU wakeups "matching" this state.
- * @hits: "On time" CPU wakeups "matching" this state.
- * @misses: CPU wakeups "missing" this state.
- *
- * A CPU wakeup is "matched" by a given idle state if the idle duration measured
- * after the wakeup is between the target residency of that state and the target
- * residency of the next one (or if this is the deepest available idle state, it
- * "matches" a CPU wakeup when the measured idle duration is at least equal to
- * its target residency).
- *
- * Also, from the TEO governor perspective, a CPU wakeup from idle is "early" if
- * it occurs significantly earlier than the closest expected timer event (that
- * is, early enough to match an idle state shallower than the one matching the
- * time till the closest timer event). Otherwise, the wakeup is "on time", or
- * it is a "hit".
- *
- * A "miss" occurs when the given state doesn't match the wakeup, but it matches
- * the time till the closest timer event used for idle state selection.
+ * struct teo_bin - Metrics used by the TEO cpuidle governor.
+ * @intercepts: The "intercepts" metric.
+ * @hits: The "hits" metric.
+ * @recent: The number of recent "intercepts".
*/
-struct teo_idle_state {
- unsigned int early_hits;
+struct teo_bin {
+ unsigned int intercepts;
unsigned int hits;
- unsigned int misses;
+ unsigned int recent;
};
/**
* struct teo_cpu - CPU data used by the TEO cpuidle governor.
* @time_span_ns: Time between idle state selection and post-wakeup update.
* @sleep_length_ns: Time till the closest timer event (at the selection time).
- * @states: Idle states data corresponding to this CPU.
- * @interval_idx: Index of the most recent saved idle interval.
- * @intervals: Saved idle duration values.
+ * @state_bins: Idle state data bins for this CPU.
+ * @total: Grand total of the "intercepts" and "hits" metrics for all bins.
+ * @next_recent_idx: Index of the next @recent_idx entry to update.
+ * @recent_idx: Indices of bins corresponding to recent "intercepts".
*/
struct teo_cpu {
s64 time_span_ns;
s64 sleep_length_ns;
- struct teo_idle_state states[CPUIDLE_STATE_MAX];
- int interval_idx;
- u64 intervals[INTERVALS];
+ struct teo_bin state_bins[CPUIDLE_STATE_MAX];
+ unsigned int total;
+ int next_recent_idx;
+ int recent_idx[NR_RECENT];
};
static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);
/**
- * teo_update - Update CPU data after wakeup.
+ * teo_update - Update CPU metrics after wakeup.
* @drv: cpuidle driver containing state data.
* @dev: Target CPU.
*/
static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
{
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
- int i, idx_hit = 0, idx_timer = 0;
- unsigned int hits, misses;
+ int i, idx_timer = 0, idx_duration = 0;
u64 measured_ns;
if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) {
@@ -151,53 +193,52 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
measured_ns /= 2;
}
+ cpu_data->total = 0;
+
/*
- * Decay the "early hits" metric for all of the states and find the
- * states matching the sleep length and the measured idle duration.
+ * Decay the "hits" and "intercepts" metrics for all of the bins and
+ * find the bins that the sleep length and the measured idle duration
+ * fall into.
*/
for (i = 0; i < drv->state_count; i++) {
- unsigned int early_hits = cpu_data->states[i].early_hits;
+ s64 target_residency_ns = drv->states[i].target_residency_ns;
+ struct teo_bin *bin = &cpu_data->state_bins[i];
- cpu_data->states[i].early_hits -= early_hits >> DECAY_SHIFT;
+ bin->hits -= bin->hits >> DECAY_SHIFT;
+ bin->intercepts -= bin->intercepts >> DECAY_SHIFT;
- if (drv->states[i].target_residency_ns <= cpu_data->sleep_length_ns) {
+ cpu_data->total += bin->hits + bin->intercepts;
+
+ if (target_residency_ns <= cpu_data->sleep_length_ns) {
idx_timer = i;
- if (drv->states[i].target_residency_ns <= measured_ns)
- idx_hit = i;
+ if (target_residency_ns <= measured_ns)
+ idx_duration = i;
}
}
- /*
- * Update the "hits" and "misses" data for the state matching the sleep
- * length. If it matches the measured idle duration too, this is a hit,
- * so increase the "hits" metric for it then. Otherwise, this is a
- * miss, so increase the "misses" metric for it. In the latter case
- * also increase the "early hits" metric for the state that actually
- * matches the measured idle duration.
- */
- hits = cpu_data->states[idx_timer].hits;
- hits -= hits >> DECAY_SHIFT;
+ i = cpu_data->next_recent_idx++;
+ if (cpu_data->next_recent_idx >= NR_RECENT)
+ cpu_data->next_recent_idx = 0;
- misses = cpu_data->states[idx_timer].misses;
- misses -= misses >> DECAY_SHIFT;
+ if (cpu_data->recent_idx[i] >= 0)
+ cpu_data->state_bins[cpu_data->recent_idx[i]].recent--;
- if (idx_timer == idx_hit) {
- hits += PULSE;
+ /*
+ * If the measured idle duration falls into the same bin as the sleep
+ * length, this is a "hit", so update the "hits" metric for that bin.
+ * Otherwise, update the "intercepts" metric for the bin fallen into by
+ * the measured idle duration.
+ */
+ if (idx_timer == idx_duration) {
+ cpu_data->state_bins[idx_timer].hits += PULSE;
+ cpu_data->recent_idx[i] = -1;
} else {
- misses += PULSE;
- cpu_data->states[idx_hit].early_hits += PULSE;
+ cpu_data->state_bins[idx_duration].intercepts += PULSE;
+ cpu_data->state_bins[idx_duration].recent++;
+ cpu_data->recent_idx[i] = idx_duration;
}
- cpu_data->states[idx_timer].misses = misses;
- cpu_data->states[idx_timer].hits = hits;
-
- /*
- * Save idle duration values corresponding to non-timer wakeups for
- * pattern detection.
- */
- cpu_data->intervals[cpu_data->interval_idx++] = measured_ns;
- if (cpu_data->interval_idx >= INTERVALS)
- cpu_data->interval_idx = 0;
+ cpu_data->total += PULSE;
}
static bool teo_time_ok(u64 interval_ns)
@@ -205,6 +246,12 @@ static bool teo_time_ok(u64 interval_ns)
return !tick_nohz_tick_stopped() || interval_ns >= TICK_NSEC;
}
+static s64 teo_middle_of_bin(int idx, struct cpuidle_driver *drv)
+{
+ return (drv->states[idx].target_residency_ns +
+ drv->states[idx+1].target_residency_ns) / 2;
+}
+
/**
* teo_find_shallower_state - Find shallower idle state matching given duration.
* @drv: cpuidle driver containing state data.
@@ -240,10 +287,18 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
{
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
s64 latency_req = cpuidle_governor_latency_req(dev->cpu);
- int max_early_idx, prev_max_early_idx, constraint_idx, idx0, idx, i;
- unsigned int hits, misses, early_hits;
+ unsigned int idx_intercept_sum = 0;
+ unsigned int intercept_sum = 0;
+ unsigned int idx_recent_sum = 0;
+ unsigned int recent_sum = 0;
+ unsigned int idx_hit_sum = 0;
+ unsigned int hit_sum = 0;
+ int constraint_idx = 0;
+ int idx0 = 0, idx = -1;
+ bool alt_intercepts, alt_recent;
ktime_t delta_tick;
s64 duration_ns;
+ int i;
if (dev->last_state_idx >= 0) {
teo_update(drv, dev);
@@ -255,171 +310,136 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
duration_ns = tick_nohz_get_sleep_length(&delta_tick);
cpu_data->sleep_length_ns = duration_ns;
- hits = 0;
- misses = 0;
- early_hits = 0;
- max_early_idx = -1;
- prev_max_early_idx = -1;
- constraint_idx = drv->state_count;
- idx = -1;
- idx0 = idx;
+ /* Check if there is any choice in the first place. */
+ if (drv->state_count < 2) {
+ idx = 0;
+ goto end;
+ }
+ if (!dev->states_usage[0].disable) {
+ idx = 0;
+ if (drv->states[1].target_residency_ns > duration_ns)
+ goto end;
+ }
- for (i = 0; i < drv->state_count; i++) {
+ /*
+ * Find the deepest idle state whose target residency does not exceed
+ * the current sleep length and the deepest idle state not deeper than
+ * the former whose exit latency does not exceed the current latency
+ * constraint. Compute the sums of metrics for early wakeup pattern
+ * detection.
+ */
+ for (i = 1; i < drv->state_count; i++) {
+ struct teo_bin *prev_bin = &cpu_data->state_bins[i-1];
struct cpuidle_state *s = &drv->states[i];
- if (dev->states_usage[i].disable) {
- /*
- * Ignore disabled states with target residencies beyond
- * the anticipated idle duration.
- */
- if (s->target_residency_ns > duration_ns)
- continue;
-
- /*
- * This state is disabled, so the range of idle duration
- * values corresponding to it is covered by the current
- * candidate state, but still the "hits" and "misses"
- * metrics of the disabled state need to be used to
- * decide whether or not the state covering the range in
- * question is good enough.
- */
- hits = cpu_data->states[i].hits;
- misses = cpu_data->states[i].misses;
-
- if (early_hits >= cpu_data->states[i].early_hits ||
- idx < 0)
- continue;
-
- /*
- * If the current candidate state has been the one with
- * the maximum "early hits" metric so far, the "early
- * hits" metric of the disabled state replaces the
- * current "early hits" count to avoid selecting a
- * deeper state with lower "early hits" metric.
- */
- if (max_early_idx == idx) {
- early_hits = cpu_data->states[i].early_hits;
- continue;
- }
-
- /*
- * The current candidate state is closer to the disabled
- * one than the current maximum "early hits" state, so
- * replace the latter with it, but in case the maximum
- * "early hits" state index has not been set so far,
- * check if the current candidate state is not too
- * shallow for that role.
- */
- if (teo_time_ok(drv->states[idx].target_residency_ns)) {
- prev_max_early_idx = max_early_idx;
- early_hits = cpu_data->states[i].early_hits;
- max_early_idx = idx;
- }
+ /*
+ * Update the sums of idle state metrics for all of the states
+ * shallower than the current one.
+ */
+ intercept_sum += prev_bin->intercepts;
+ hit_sum += prev_bin->hits;
+ recent_sum += prev_bin->recent;
+ if (dev->states_usage[i].disable)
continue;
- }
if (idx < 0) {
idx = i; /* first enabled state */
- hits = cpu_data->states[i].hits;
- misses = cpu_data->states[i].misses;
idx0 = i;
}
if (s->target_residency_ns > duration_ns)
break;
- if (s->exit_latency_ns > latency_req && constraint_idx > i)
+ idx = i;
+
+ if (s->exit_latency_ns <= latency_req)
constraint_idx = i;
- idx = i;
- hits = cpu_data->states[i].hits;
- misses = cpu_data->states[i].misses;
-
- if (early_hits < cpu_data->states[i].early_hits &&
- teo_time_ok(drv->states[i].target_residency_ns)) {
- prev_max_early_idx = max_early_idx;
- early_hits = cpu_data->states[i].early_hits;
- max_early_idx = i;
- }
+ idx_intercept_sum = intercept_sum;
+ idx_hit_sum = hit_sum;
+ idx_recent_sum = recent_sum;
}
- /*
- * If the "hits" metric of the idle state matching the sleep length is
- * greater than its "misses" metric, that is the one to use. Otherwise,
- * it is more likely that one of the shallower states will match the
- * idle duration observed after wakeup, so take the one with the maximum
- * "early hits" metric, but if that cannot be determined, just use the
- * state selected so far.
- */
- if (hits <= misses) {
- /*
- * The current candidate state is not suitable, so take the one
- * whose "early hits" metric is the maximum for the range of
- * shallower states.
- */
- if (idx == max_early_idx)
- max_early_idx = prev_max_early_idx;
-
- if (max_early_idx >= 0) {
- idx = max_early_idx;
- duration_ns = drv->states[idx].target_residency_ns;
- }
+ /* Avoid unnecessary overhead. */
+ if (idx < 0) {
+ idx = 0; /* No states enabled, must use 0. */
+ goto end;
+ } else if (idx == idx0) {
+ goto end;
}
/*
- * If there is a latency constraint, it may be necessary to use a
- * shallower idle state than the one selected so far.
+ * If the sum of the intercepts metric for all of the idle states
+ * shallower than the current candidate one (idx) is greater than the
+ * sum of the intercepts and hits metrics for the candidate state and
+ * all of the deeper states, or the sum of the numbers of recent
+ * intercepts over all of the states shallower than the candidate one
+	 * is greater than half of the number of recent events taken into
+ * account, the CPU is likely to wake up early, so find an alternative
+ * idle state to select.
*/
- if (constraint_idx < idx)
- idx = constraint_idx;
-
- if (idx < 0) {
- idx = 0; /* No states enabled. Must use 0. */
- } else if (idx > idx0) {
- unsigned int count = 0;
- u64 sum = 0;
+ alt_intercepts = 2 * idx_intercept_sum > cpu_data->total - idx_hit_sum;
+ alt_recent = idx_recent_sum > NR_RECENT / 2;
+ if (alt_recent || alt_intercepts) {
+ s64 last_enabled_span_ns = duration_ns;
+ int last_enabled_idx = idx;
/*
- * The target residencies of at least two different enabled idle
- * states are less than or equal to the current expected idle
- * duration. Try to refine the selection using the most recent
- * measured idle duration values.
+ * Look for the deepest idle state whose target residency had
+		 * not exceeded the idle duration in over half of the relevant
+ * cases (both with respect to intercepts overall and with
+ * respect to the recent intercepts only) in the past.
*
- * Count and sum the most recent idle duration values less than
- * the current expected idle duration value.
+	 * Take into account the possible latency constraint and the
+	 * duration limitation present if the tick has already been stopped.
*/
- for (i = 0; i < INTERVALS; i++) {
- u64 val = cpu_data->intervals[i];
+ intercept_sum = 0;
+ recent_sum = 0;
+
+ for (i = idx - 1; i >= idx0; i--) {
+ struct teo_bin *bin = &cpu_data->state_bins[i];
+ s64 span_ns;
- if (val >= duration_ns)
+ intercept_sum += bin->intercepts;
+ recent_sum += bin->recent;
+
+ if (dev->states_usage[i].disable)
continue;
- count++;
- sum += val;
- }
+ span_ns = teo_middle_of_bin(i, drv);
+ if (!teo_time_ok(span_ns)) {
+ /*
+ * The current state is too shallow, so select
+ * the first enabled deeper state.
+ */
+ duration_ns = last_enabled_span_ns;
+ idx = last_enabled_idx;
+ break;
+ }
- /*
- * Give up unless the majority of the most recent idle duration
- * values are in the interesting range.
- */
- if (count > INTERVALS / 2) {
- u64 avg_ns = div64_u64(sum, count);
-
- /*
- * Avoid spending too much time in an idle state that
- * would be too shallow.
- */
- if (teo_time_ok(avg_ns)) {
- duration_ns = avg_ns;
- if (drv->states[idx].target_residency_ns > avg_ns)
- idx = teo_find_shallower_state(drv, dev,
- idx, avg_ns);
+ if ((!alt_recent || 2 * recent_sum > idx_recent_sum) &&
+ (!alt_intercepts ||
+ 2 * intercept_sum > idx_intercept_sum)) {
+ idx = i;
+ duration_ns = span_ns;
+ break;
}
+
+ last_enabled_span_ns = span_ns;
+ last_enabled_idx = i;
}
}
/*
+ * If there is a latency constraint, it may be necessary to select an
+ * idle state shallower than the current candidate one.
+ */
+ if (idx > constraint_idx)
+ idx = constraint_idx;
+
+end:
+ /*
* Don't stop the tick if the selected state is a polling one or if the
* expected idle duration is shorter than the tick period length.
*/
@@ -478,8 +498,8 @@ static int teo_enable_device(struct cpuidle_driver *drv,
memset(cpu_data, 0, sizeof(*cpu_data));
- for (i = 0; i < INTERVALS; i++)
- cpu_data->intervals[i] = U64_MAX;
+ for (i = 0; i < NR_RECENT; i++)
+ cpu_data->recent_idx[i] = -1;
return 0;
}
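As a rough illustration of the selection heuristic introduced in teo_select() above, the stand-alone sketch below mirrors the "pick a shallower state" decision using invented bin counts and plain arrays instead of the governor's teo_bin structures; it is not part of the patch.

/* Illustration only: invented counters, shallowest state first. */
#include <stdio.h>

#define NR_STATES 4
#define NR_RECENT 9	/* size of the recent-wakeups window assumed here */

int main(void)
{
	unsigned int intercepts[NR_STATES] = { 6, 5, 2, 1 };
	unsigned int hits[NR_STATES]       = { 1, 2, 3, 8 };
	unsigned int recent[NR_STATES]     = { 3, 3, 1, 0 };
	unsigned int total = 0, intercept_sum = 0, hit_sum = 0, recent_sum = 0;
	int candidate = 3;	/* deepest state matching the sleep length */
	int i;

	/* Total of all hits and intercepts recorded so far. */
	for (i = 0; i < NR_STATES; i++)
		total += intercepts[i] + hits[i];

	/* Sum the metrics of all states shallower than the candidate. */
	for (i = 0; i < candidate; i++) {
		intercept_sum += intercepts[i];
		hit_sum += hits[i];
		recent_sum += recent[i];
	}

	/* Same shape as the alt_intercepts / alt_recent checks above. */
	if (2 * intercept_sum > total - hit_sum ||
	    recent_sum > NR_RECENT / 2)
		printf("early wakeups dominate: look for a shallower state\n");
	else
		printf("keep the candidate state %d\n", candidate);

	return 0;
}

With these made-up numbers both conditions trigger, so the governor would walk back from the candidate toward shallower enabled states.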
diff --git a/drivers/devfreq/Kconfig b/drivers/devfreq/Kconfig
index 20373a893b44..e87d01c0b76a 100644
--- a/drivers/devfreq/Kconfig
+++ b/drivers/devfreq/Kconfig
@@ -103,7 +103,6 @@ config ARM_IMX8M_DDRC_DEVFREQ
tristate "i.MX8M DDRC DEVFREQ Driver"
depends on (ARCH_MXC && HAVE_ARM_SMCCC) || \
(COMPILE_TEST && HAVE_ARM_SMCCC)
- select DEVFREQ_GOV_SIMPLE_ONDEMAND
select DEVFREQ_GOV_USERSPACE
help
This adds the DEVFREQ driver for the i.MX8M DDR Controller. It allows
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index fe08c46642f7..28f3e0ba6cdd 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -823,6 +823,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
if (devfreq->profile->timer < 0
|| devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
mutex_unlock(&devfreq->lock);
+ err = -EINVAL;
goto err_dev;
}
diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c
index b094132bd20b..fc09324a03e0 100644
--- a/drivers/devfreq/governor_passive.c
+++ b/drivers/devfreq/governor_passive.c
@@ -65,7 +65,7 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
dev_pm_opp_put(p_opp);
if (IS_ERR(opp))
- return PTR_ERR(opp);
+ goto no_required_opp;
*freq = dev_pm_opp_get_freq(opp);
dev_pm_opp_put(opp);
@@ -73,6 +73,7 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
return 0;
}
+no_required_opp:
/*
* Get the OPP table's index of decided frequency by governor
* of parent device.
diff --git a/drivers/devfreq/governor_userspace.c b/drivers/devfreq/governor_userspace.c
index 0fd6c4851071..ab9db7adb3ad 100644
--- a/drivers/devfreq/governor_userspace.c
+++ b/drivers/devfreq/governor_userspace.c
@@ -31,8 +31,8 @@ static int devfreq_userspace_func(struct devfreq *df, unsigned long *freq)
return 0;
}
-static ssize_t store_freq(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t count)
+static ssize_t set_freq_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
{
struct devfreq *devfreq = to_devfreq(dev);
struct userspace_data *data;
@@ -52,8 +52,8 @@ static ssize_t store_freq(struct device *dev, struct device_attribute *attr,
return err;
}
-static ssize_t show_freq(struct device *dev, struct device_attribute *attr,
- char *buf)
+static ssize_t set_freq_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
{
struct devfreq *devfreq = to_devfreq(dev);
struct userspace_data *data;
@@ -70,7 +70,7 @@ static ssize_t show_freq(struct device *dev, struct device_attribute *attr,
return err;
}
-static DEVICE_ATTR(set_freq, 0644, show_freq, store_freq);
+static DEVICE_ATTR_RW(set_freq);
static struct attribute *dev_entries[] = {
&dev_attr_set_freq.attr,
NULL,
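The rename above is what allows DEVICE_ATTR_RW() to be used: the macro derives its callback names from the attribute name, so a read-write attribute called set_freq must be backed by set_freq_show() and set_freq_store(). A minimal sketch of the same pattern for a hypothetical my_param attribute (not from this patch, assuming the usual sysfs helpers):

#include <linux/device.h>
#include <linux/sysfs.h>

/* Sketch only: a read-write sysfs attribute named "my_param". */
static ssize_t my_param_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%d\n", 42);	/* report a dummy value */
}

static ssize_t my_param_store(struct device *dev,
			      struct device_attribute *attr,
			      const char *buf, size_t count)
{
	return count;				/* accept and ignore input */
}

/* Creates dev_attr_my_param, wired to my_param_show/my_param_store. */
static DEVICE_ATTR_RW(my_param);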
diff --git a/drivers/devfreq/imx-bus.c b/drivers/devfreq/imx-bus.c
index 3fc3fd77492d..f3f6e25053ed 100644
--- a/drivers/devfreq/imx-bus.c
+++ b/drivers/devfreq/imx-bus.c
@@ -45,18 +45,6 @@ static int imx_bus_get_cur_freq(struct device *dev, unsigned long *freq)
return 0;
}
-static int imx_bus_get_dev_status(struct device *dev,
- struct devfreq_dev_status *stat)
-{
- struct imx_bus *priv = dev_get_drvdata(dev);
-
- stat->busy_time = 0;
- stat->total_time = 0;
- stat->current_frequency = clk_get_rate(priv->clk);
-
- return 0;
-}
-
static void imx_bus_exit(struct device *dev)
{
struct imx_bus *priv = dev_get_drvdata(dev);
@@ -129,9 +117,7 @@ static int imx_bus_probe(struct platform_device *pdev)
return ret;
}
- priv->profile.polling_ms = 1000;
priv->profile.target = imx_bus_target;
- priv->profile.get_dev_status = imx_bus_get_dev_status;
priv->profile.exit = imx_bus_exit;
priv->profile.get_cur_freq = imx_bus_get_cur_freq;
priv->profile.initial_freq = clk_get_rate(priv->clk);
diff --git a/drivers/devfreq/tegra30-devfreq.c b/drivers/devfreq/tegra30-devfreq.c
index ce83f883ca65..10661eb2aed8 100644
--- a/drivers/devfreq/tegra30-devfreq.c
+++ b/drivers/devfreq/tegra30-devfreq.c
@@ -688,6 +688,7 @@ static struct devfreq_dev_profile tegra_devfreq_profile = {
.polling_ms = ACTMON_SAMPLING_PERIOD,
.target = tegra_devfreq_target,
.get_dev_status = tegra_devfreq_get_dev_status,
+ .is_cooling_device = true,
};
static int tegra_governor_get_target(struct devfreq *devfreq,
diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
index ec1b9d306ba6..e6c543b5ee1d 100644
--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -1484,6 +1484,36 @@ static void __init sklh_idle_state_table_update(void)
skl_cstates[6].flags |= CPUIDLE_FLAG_UNUSABLE; /* C9-SKL */
}
+/**
+ * skx_idle_state_table_update - Adjust the Sky Lake/Cascade Lake
+ * idle states table.
+ */
+static void __init skx_idle_state_table_update(void)
+{
+ unsigned long long msr;
+
+ rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr);
+
+ /*
+ * 000b: C0/C1 (no package C-state support)
+ * 001b: C2
+ * 010b: C6 (non-retention)
+ * 011b: C6 (retention)
+ * 111b: No Package C state limits.
+ */
+ if ((msr & 0x7) < 2) {
+ /*
+		 * Use the CC6 + PC0 latency and three times
+		 * that latency for target_residency when PC6
+		 * is disabled in the BIOS. This is consistent
+		 * with how the intel_idle driver uses _CST
+		 * to set target_residency.
+ */
+ skx_cstates[2].exit_latency = 92;
+ skx_cstates[2].target_residency = 276;
+ }
+}
+
static bool __init intel_idle_verify_cstate(unsigned int mwait_hint)
{
unsigned int mwait_cstate = MWAIT_HINT2CSTATE(mwait_hint) + 1;
@@ -1515,6 +1545,9 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
case INTEL_FAM6_SKYLAKE:
sklh_idle_state_table_update();
break;
+ case INTEL_FAM6_SKYLAKE_X:
+ skx_idle_state_table_update();
+ break;
}
for (cstate = 0; cstate < CPUIDLE_STATE_MAX; ++cstate) {
diff --git a/drivers/opp/core.c b/drivers/opp/core.c
index e366218d6736..b335c077f215 100644
--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -893,6 +893,16 @@ static int _set_required_opps(struct device *dev,
if (!required_opp_tables)
return 0;
+ /*
+ * We only support genpd's OPPs in the "required-opps" for now, as we
+ * don't know much about other use cases. Error out if the required OPP
+ * doesn't belong to a genpd.
+ */
+ if (unlikely(!required_opp_tables[0]->is_genpd)) {
+ dev_err(dev, "required-opps don't belong to a genpd\n");
+ return -ENOENT;
+ }
+
/* required-opps not fully initialized yet */
if (lazy_linking_pending(opp_table))
return -EBUSY;
diff --git a/drivers/opp/of.c b/drivers/opp/of.c
index c582a9ca397b..d298e38aaf7e 100644
--- a/drivers/opp/of.c
+++ b/drivers/opp/of.c
@@ -197,21 +197,8 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table,
required_opp_tables[i] = _find_table_of_opp_np(required_np);
of_node_put(required_np);
- if (IS_ERR(required_opp_tables[i])) {
+ if (IS_ERR(required_opp_tables[i]))
lazy = true;
- continue;
- }
-
- /*
- * We only support genpd's OPPs in the "required-opps" for now,
- * as we don't know how much about other cases. Error out if the
- * required OPP doesn't belong to a genpd.
- */
- if (!required_opp_tables[i]->is_genpd) {
- dev_err(dev, "required-opp doesn't belong to genpd: %pOF\n",
- required_np);
- goto free_required_tables;
- }
}
/* Let's do the linking later on */
@@ -379,13 +366,6 @@ static void lazy_link_required_opp_table(struct opp_table *new_table)
struct dev_pm_opp *opp;
int i, ret;
- /*
- * We only support genpd's OPPs in the "required-opps" for now,
- * as we don't know much about other cases.
- */
- if (!new_table->is_genpd)
- return;
-
mutex_lock(&opp_table_lock);
list_for_each_entry_safe(opp_table, temp, &lazy_opp_tables, lazy) {
@@ -433,8 +413,7 @@ static void lazy_link_required_opp_table(struct opp_table *new_table)
/* All required opp-tables found, remove from lazy list */
if (!lazy) {
- list_del(&opp_table->lazy);
- INIT_LIST_HEAD(&opp_table->lazy);
+ list_del_init(&opp_table->lazy);
list_for_each_entry(opp, &opp_table->opp_list, node)
_required_opps_available(opp, opp_table->required_opp_count);
@@ -874,7 +853,7 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
return ERR_PTR(-ENOMEM);
ret = _read_opp_key(new_opp, opp_table, np, &rate_not_available);
- if (ret < 0 && !opp_table->is_genpd) {
+ if (ret < 0) {
dev_err(dev, "%s: opp key field not found\n", __func__);
goto free_opp;
}
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index dfcfbcecc34b..21a0577305ef 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -198,6 +198,7 @@ struct generic_pm_domain_data {
struct notifier_block *power_nb;
int cpu;
unsigned int performance_state;
+ unsigned int rpm_pstate;
ktime_t next_wakeup;
void *data;
};
diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
index 6c08a085367b..aab8b35e9f8a 100644
--- a/include/linux/pm_runtime.h
+++ b/include/linux/pm_runtime.h
@@ -380,6 +380,9 @@ static inline int pm_runtime_get(struct device *dev)
* The possible return values of this function are the same as for
* pm_runtime_resume() and the runtime PM usage counter of @dev remains
* incremented in all cases, even if it returns an error code.
+ * Consider using pm_runtime_resume_and_get() instead of it, especially
+ * if its return value is checked by the caller, as this is likely to result
+ * in cleaner code.
*/
static inline int pm_runtime_get_sync(struct device *dev)
{
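To illustrate the documentation note added above: pm_runtime_get_sync() leaves the usage counter incremented even when it fails, so callers that check its return value need an explicit put on the error path, while pm_runtime_resume_and_get() drops the reference itself on failure. A hedged sketch of both patterns in a hypothetical foo_start() callback (invented for illustration):

#include <linux/pm_runtime.h>

/* With pm_runtime_get_sync() the counter stays elevated on error. */
static int foo_start_legacy(struct device *dev)
{
	int ret = pm_runtime_get_sync(dev);

	if (ret < 0) {
		pm_runtime_put_noidle(dev);	/* undo the get on failure */
		return ret;
	}
	/* ... do work with the device powered up ... */
	pm_runtime_put(dev);
	return 0;
}

/* pm_runtime_resume_and_get() handles the failure case itself. */
static int foo_start(struct device *dev)
{
	int ret = pm_runtime_resume_and_get(dev);

	if (ret < 0)
		return ret;			/* no put needed here */
	/* ... do work with the device powered up ... */
	pm_runtime_put(dev);
	return 0;
}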
diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
index 6bfe3ead10ad..a12779650f15 100644
--- a/kernel/power/Kconfig
+++ b/kernel/power/Kconfig
@@ -98,20 +98,20 @@ config PM_STD_PARTITION
default ""
help
The default resume partition is the partition that the suspend-
- to-disk implementation will look for a suspended disk image.
+ to-disk implementation will look for a suspended disk image.
- The partition specified here will be different for almost every user.
+ The partition specified here will be different for almost every user.
It should be a valid swap partition (at least for now) that is turned
- on before suspending.
+ on before suspending.
The partition specified can be overridden by specifying:
- resume=/dev/<other device>
+ resume=/dev/<other device>
- which will set the resume partition to the device specified.
+ which will set the resume partition to the device specified.
Note there is currently not a way to specify which device to save the
- suspended image to. It will simply pick the first available swap
+ suspended image to. It will simply pick the first available swap
device.
config PM_SLEEP
diff --git a/kernel/power/process.c b/kernel/power/process.c
index 50cc63534486..37401c99b7d7 100644
--- a/kernel/power/process.c
+++ b/kernel/power/process.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * drivers/power/process.c - Functions for starting/stopping processes on
+ * drivers/power/process.c - Functions for starting/stopping processes on
* suspend transitions.
*
* Originally from swsusp.
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 1a221dcb3c01..f7a986078213 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -331,7 +331,7 @@ static void *chain_alloc(struct chain_allocator *ca, unsigned int size)
*
* Memory bitmap is a structure consisting of many linked lists of
* objects. The main list's elements are of type struct zone_bitmap
- * and each of them corresonds to one zone. For each zone bitmap
+ * and each of them corresponds to one zone. For each zone bitmap
* object there is a list of objects of type struct bm_block that
* represent each blocks of bitmap in which information is stored.
*
@@ -1146,7 +1146,7 @@ int create_basic_memory_bitmaps(void)
Free_second_object:
kfree(bm2);
Free_first_bitmap:
- memory_bm_free(bm1, PG_UNSAFE_CLEAR);
+ memory_bm_free(bm1, PG_UNSAFE_CLEAR);
Free_first_object:
kfree(bm1);
return -ENOMEM;
@@ -1500,7 +1500,7 @@ static struct memory_bitmap copy_bm;
/**
* swsusp_free - Free pages allocated for hibernation image.
*
- * Image pages are alocated before snapshot creation, so they need to be
+ * Image pages are allocated before snapshot creation, so they need to be
* released after resume.
*/
void swsusp_free(void)
@@ -2326,7 +2326,7 @@ static struct memory_bitmap *safe_highmem_bm;
* (@nr_highmem_p points to the variable containing the number of highmem image
* pages). The pages that are "safe" (ie. will not be overwritten when the
* hibernation image is restored entirely) have the corresponding bits set in
- * @bm (it must be unitialized).
+ * @bm (it must be uninitialized).
*
* NOTE: This function should not be called if there are no highmem image pages.
*/
@@ -2483,7 +2483,7 @@ static inline void free_highmem_data(void) {}
/**
* prepare_image - Make room for loading hibernation image.
- * @new_bm: Unitialized memory bitmap structure.
+ * @new_bm: Uninitialized memory bitmap structure.
* @bm: Memory bitmap with unsafe pages marked.
*
* Use @bm to mark the pages that will be overwritten in the process of
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index bea3cb8afa11..3cb89baebc79 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -1125,7 +1125,7 @@ struct dec_data {
};
/**
- * Deompression function that runs in its own thread.
+ * Decompression function that runs in its own thread.
*/
static int lzo_decompress_threadfn(void *data)
{