author		Alexandre Ghiti <alexghiti@rivosinc.com>	2023-08-02 11:03:25 +0300
committer	Palmer Dabbelt <palmer@rivosinc.com>	2023-08-16 17:28:20 +0300
commit		cc4c07c89aada16229084eeb93895c95b7eabaa3 (patch)
tree		8e07323460ea0e9cf4644239e7fd2ff07a9c6c8a /drivers/perf/riscv_pmu.c
parent		50be342829053d6d4a3c66eacc0e778f6611a37a (diff)
drivers: perf: Implement perf event mmap support in the SBI backend
We used to unconditionally expose the cycle and instret CSRs to userspace, which gives rise to security concerns.

So now we only allow access to hw counters from userspace through the perf framework, which will handle context switches, per-task events, etc. A sysctl allows reverting the behaviour to the legacy mode so that userspace applications which are not ready for this change do not break.

But the default is to allow userspace access only through perf: this will break userspace applications which rely on direct access to rdcycle. This choice was made for security reasons [1][2]: most applications that use rdcycle can instead use rdtime to count elapsed time.

[1] https://groups.google.com/a/groups.riscv.org/g/sw-dev/c/REWcwYnzsKE?pli=1
[2] https://www.youtube.com/watch?v=3-c4C_L2PRQ&ab_channel=IEEESymposiumonSecurityandPrivacy

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
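As an illustration of the sanctioned path (not part of the patch), here is a minimal sketch of counting cycles through perf_event_open(2) instead of reading the cycle CSR directly; event selection and error handling are pared down to the essentials:

/*
 * Sketch only: count CPU cycles for a region of code via the perf
 * framework. Uses only the standard Linux perf_event_open(2) interface.
 */
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_HARDWARE,
		.size		= sizeof(attr),
		.config		= PERF_COUNT_HW_CPU_CYCLES,
		.disabled	= 1,
		.exclude_kernel	= 1,
	};
	/* pid = 0, cpu = -1: count this task on any CPU */
	int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	uint64_t cycles = 0;

	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... workload to be measured ... */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	if (read(fd, &cycles, sizeof(cycles)) == sizeof(cycles))
		printf("cycles: %llu\n", (unsigned long long)cycles);
	close(fd);
	return 0;
}

With user access granted, the same file descriptor can also be mmap'ed so the counter is read directly from userspace; the hunk below exports the counter width needed for that fast path.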
Diffstat (limited to 'drivers/perf/riscv_pmu.c')
-rw-r--r--	drivers/perf/riscv_pmu.c	10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/perf/riscv_pmu.c b/drivers/perf/riscv_pmu.c
index 432ad2e80ce3..80c052e93f9e 100644
--- a/drivers/perf/riscv_pmu.c
+++ b/drivers/perf/riscv_pmu.c
@@ -38,7 +38,15 @@ void arch_perf_update_userpage(struct perf_event *event,
 	userpg->cap_user_time_zero = 0;
 	userpg->cap_user_time_short = 0;
 	userpg->cap_user_rdpmc = riscv_perf_user_access(event);
-	userpg->pmc_width = 64;
+#ifdef CONFIG_RISCV_PMU
+	/*
+	 * The counters are 64-bit but the priv spec doesn't mandate all the
+	 * bits to be implemented: that's why, counter width can vary based on
+	 * the cpu vendor.
+	 */
+	if (userpg->cap_user_rdpmc)
+		userpg->pmc_width = to_riscv_pmu(event->pmu)->ctr_get_width(event->hw.idx) + 1;
+#endif
 
 	do {
 		rd = sched_clock_read_begin(&seq);
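For context, a hedged sketch of the userspace fast path that consumes the pmc_width exported above. It follows the lockless read pattern documented in include/uapi/linux/perf_event.h; the read_cycle_csr() helper and the assumption that the event sits on the cycle counter are illustrations only and are not part of this patch:

#include <linux/perf_event.h>
#include <stdint.h>

/*
 * Placeholder (RISC-V only, assumption): read the cycle CSR directly.
 * A real reader would map pc->index to the right counter CSR; direct
 * reads only work because perf granted access (cap_user_rdpmc).
 */
static inline uint64_t read_cycle_csr(void)
{
	uint64_t val;

	asm volatile("rdcycle %0" : "=r"(val));
	return val;
}

/* Lockless read of a counter through the mmap'ed perf page. */
static uint64_t read_user_counter(struct perf_event_mmap_page *pc)
{
	uint32_t seq, idx, width;
	uint64_t count;
	int64_t pmc;

	do {
		seq = pc->lock;
		__sync_synchronize();	/* pairs with the kernel's barriers */

		idx = pc->index;	/* 0 means direct reads are not allowed */
		count = pc->offset;
		if (pc->cap_user_rdpmc && idx) {
			width = pc->pmc_width;
			pmc = read_cycle_csr();	/* simplification: assumes the cycle counter */
			pmc <<= 64 - width;	/* drop unimplemented bits ... */
			pmc >>= 64 - width;	/* ... and sign-extend to 64 bits */
			count += pmc;
		}

		__sync_synchronize();
	} while (pc->lock != seq);

	return count;
}

Sign-extending to pmc_width is what keeps deltas computed from the raw CSR value correct even when the vendor implements fewer than 64 counter bits.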