Age | Commit message (tags) | Author | Files | Lines
2023-11-07 | configs: vf2: set freq to max and set ethercat to module (IGH_ETHERCAT_5.15_v0.1.0, rt-ethercat-release) | Minda Chen | 1 | -9/+4
Set the default frequency to the maximum to minimize latency. Also disable cpu idle and the hibernation pm ops.
EtherCAT module install sequence:
modprobe phylink
insmod ec_master.ko main_devices=<ethercat gmac mac address>
insmod ec_generic.ko
modprobe pcs_xpcs
cd /lib/modules/5.15.0-rt17+/kernel/drivers/net/ethernet/stmicro/stmmac/
insmod stmmac.ko
insmod stmmac-platform.ko
insmod dwmac-starfive-plat.ko
mac address example:
insmod ec_master.ko main_devices=6c:cf:39:00:27:48
Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
2023-11-07 | stmmac: add ethercat support | Minda Chen | 2 | -54/+188
Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
2023-11-06 | add ethercat codes. | Minda Chen | 94 | -0/+42449
Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
2023-11-06 | cpupri: a work around for non-rt test panic (RTLINUX_5.15_v0.1.0, rt-linux-release) | Minda Chen | 1 | -0/+3
kernel BUG at kernel/sched/cpupri.c:151! The same issue can be seen at the link below.
https://www.spinics.net/lists/kernel/msg4184866.html
Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
2023-11-06 | riscv: rt: add riscv lazy preempt support. | minda.chen | 5 | -5/+137
The code originates from the arm64/x86 implementations.
Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
2023-11-06 | config: add vf2 PREEMPT_RT and other config | Minda Chen | 1 | -2/+2
Enable the full PREEMPT_RT config. Set HZ to 1000 and disable tickless (NO_HZ) operation.
Signed-off-by: Minda Chen <minda.chen@starfivetech.com>
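As an illustration, a plausible .config fragment matching that description; the exact symbols flipped in the vf2 defconfig are an assumption, not quoted from the patch:

    # assumed fragment; the actual vf2 defconfig may differ
    CONFIG_PREEMPT_RT=y
    CONFIG_HZ_1000=y
    CONFIG_HZ=1000
    # "no tickless": periodic tick instead of NO_HZ
    CONFIG_HZ_PERIODIC=y
    # CONFIG_NO_HZ_IDLE is not set
    # CONFIG_NO_HZ_FULL is not set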
2023-11-06 | riscv: Allow riscv PREEMPT_RT config. | minda.chen | 1 | -0/+2
Allow RISC-V to select RT.
Signed-off-by: minda.chen <minda.chen@starfivetech.com>
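Arch opt-in to RT in the 5.15-rt series is a small Kconfig change; a sketch of the shape of such a change (the symbol below is an assumption based on how other architectures opt in, not quoted from this two-line patch):

    # arch/riscv/Kconfig (sketch)
    config RISCV
            def_bool y
            # ... existing selects ...
            select ARCH_SUPPORTS_RT   # assumed symbol; makes CONFIG_PREEMPT_RT selectable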
2023-11-06 | POWERPC: Allow to enable RT | Sebastian Andrzej Siewior | 1 | -0/+2
Allow to select RT. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | powerpc/stackprotector: work around stack-guard init from atomic | Sebastian Andrzej Siewior | 1 | -0/+4
This is invoked from the secondary CPU in atomic context. On x86 we use the TSC instead. On Power we XOR it against mftb(), so let's use the stack address as the initial value.
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | powerpc/kvm: Disable in-kernel MPIC emulation for PREEMPT_RT | Bogdan Purcareata | 1 | -0/+1
While converting the openpic emulation code to use a raw_spinlock_t enables guests to run on RT, there's still a performance issue. For interrupts sent in directed delivery mode with a multiple-CPU mask, the emulated openpic will loop through all of the VCPUs, and for each VCPU it calls IRQ_check, which loops through all the pending interrupts for that VCPU. This is done while holding the raw_lock, meaning that in all this time interrupts and preemption are disabled on the host Linux. A malicious user app can max out both of these numbers and cause a DoS.
This temporary fix is sent for two reasons. First, so that users who want to use the in-kernel MPIC emulation are aware of the potential latencies, thus making sure that the hardware MPIC and their usage scenario do not involve interrupts sent in directed delivery mode, and that the number of possible pending interrupts is kept small. Secondly, this should incentivize the development of a proper openpic emulation that would be better suited for RT.
Acked-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Bogdan Purcareata <bogdan.purcareata@freescale.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | powerpc/pseries/iommu: Use a locallock instead local_irq_save() | Sebastian Andrzej Siewior | 1 | -11/+20
The locallock protects the per-CPU variable tce_page. The function attempts to allocate memory while tce_page is protected (by disabling interrupts). Use local_irq_save() instead of local_irq_disable(). Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
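A minimal sketch of the local_lock pattern this commit describes (structure and function names are illustrative, not the actual pseries code):

    #include <linux/local_lock.h>
    #include <linux/percpu.h>

    /* Hypothetical per-CPU state standing in for the pseries tce_page. */
    struct tce_pcpu {
            local_lock_t lock;
            __be64 *tce_page;
    };

    static DEFINE_PER_CPU(struct tce_pcpu, tce_pcpu) = {
            .lock = INIT_LOCAL_LOCK(lock),
    };

    static void tce_build_sketch(void)
    {
            unsigned long flags;

            /*
             * On !RT this still disables interrupts; on PREEMPT_RT it is a
             * per-CPU sleeping lock, so allocating memory under it is legal.
             */
            local_lock_irqsave(&tce_pcpu.lock, flags);
            /* ... use this_cpu_ptr(&tce_pcpu)->tce_page ... */
            local_unlock_irqrestore(&tce_pcpu.lock, flags);
    }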
2023-11-06 | powerpc: traps: Use PREEMPT_RT | Sebastian Andrzej Siewior | 1 | -1/+6
Add PREEMPT_RT to the backtrace if enabled. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | ARM: Allow to enable RT | Sebastian Andrzej Siewior | 1 | -0/+2
Allow to select RT. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | tty/serial/pl011: Make the locking work on RT | Thomas Gleixner | 1 | -6/+11
The lock is a sleeping lock and local_irq_save() is not the optimisation we are looking for. Redo it to make it work on -RT and non-RT.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | tty/serial/omap: Make the locking RT aware | Thomas Gleixner | 1 | -8/+4
The lock is a sleeping lock and local_irq_save() is not the optimisation we are looking for. Redo it to make it work on -RT and non-RT.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | arm64: signal: Use ARCH_RT_DELAYS_SIGNAL_SEND. | He Zhe | 2 | -0/+12
The software breakpoint is handled via do_debug_exception(), which disables preemption. On PREEMPT_RT spinlock_t becomes a sleeping lock and must not be acquired with disabled preemption. Use ARCH_RT_DELAYS_SIGNAL_SEND so the signal (from send_user_sigtrap()) is sent delayed on return to userland.
Cc: stable-rt@vger.kernel.org
Signed-off-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lore.kernel.org/r/20211012084421.35136-1-zhe.he@windriver.com
2023-11-06 | arm64/sve: Make kernel FPU protection RT friendly | Sebastian Andrzej Siewior | 1 | -2/+14
Non-RT kernels need to protect the FPU against preemption and bottom half processing. This is achieved by disabling bottom halves via local_bh_disable(), which implicitly disables preemption.
On RT kernels this protection mechanism is not sufficient because local_bh_disable() does not disable preemption. It serializes bottom half related processing via a CPU-local lock. As bottom halves always run in thread context on RT kernels, disabling preemption is the proper choice as it implicitly prevents bottom half processing.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
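A sketch of the split protection described above (hypothetical helper names; the real code lives in arch/arm64/kernel/fpsimd.c):

    static void kernel_fpu_protect_sketch(void)
    {
            /*
             * !RT: local_bh_disable() implicitly disables preemption and keeps
             * softirqs away. RT: softirqs run in thread context, so plain
             * preempt_disable() is the sufficient and correct protection.
             */
            if (!IS_ENABLED(CONFIG_PREEMPT_RT))
                    local_bh_disable();
            else
                    preempt_disable();
    }

    static void kernel_fpu_unprotect_sketch(void)
    {
            if (!IS_ENABLED(CONFIG_PREEMPT_RT))
                    local_bh_enable();
            else
                    preempt_enable();
    }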
2023-11-06 | arm64/sve: Delay freeing memory in fpsimd_flush_thread() | Sebastian Andrzej Siewior | 1 | -1/+6
fpsimd_flush_thread() invokes kfree() via sve_free() within a preempt-disabled section, which does not work on -RT. Delay freeing of the memory until preemption is enabled again.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
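A rough sketch of deferring the free out of the preempt-disabled region (not the literal diff; the field access is simplified):

    void fpsimd_flush_thread_sketch(void)
    {
            void *sve_state = NULL;

            get_cpu_fpsimd_context();       /* preemption disabled on !RT */

            /* ... reset the FPSIMD/SVE state of current ... */

            /* Don't kfree() here: it may take sleeping locks on PREEMPT_RT. */
            sve_state = current->thread.sve_state;
            current->thread.sve_state = NULL;

            put_cpu_fpsimd_context();       /* preemption enabled again */

            kfree(sve_state);               /* safe to free now */
    }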
2023-11-06 | KVM: arm/arm64: downgrade preempt_disable()d region to migrate_disable() | Josh Cartwright | 1 | -3/+3
kvm_arch_vcpu_ioctl_run() disables the use of preemption when updating the vgic and timer states to prevent the calling task from migrating to another CPU. It does so to prevent the task from writing to the incorrect per-CPU GIC distributor registers. On -rt kernels, it's possible to maintain the same guarantee with the use of migrate_{disable,enable}(), with the added benefit that the migrate-disabled region is preemptible. Update kvm_arch_vcpu_ioctl_run() to do so. Cc: Christoffer Dall <christoffer.dall@linaro.org> Reported-by: Manish Jaggi <Manish.Jaggi@caviumnetworks.com> Signed-off-by: Josh Cartwright <joshc@ni.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
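The change boils down to swapping the preemption-control pair, roughly:

    static void vcpu_state_update_sketch(struct kvm_vcpu *vcpu)
    {
            /*
             * migrate_disable() still pins the task to this CPU, so the
             * per-CPU GIC distributor registers stay correct, but unlike
             * preempt_disable() the region remains preemptible on -rt.
             */
            migrate_disable();
            /* ... flush/sync vgic and timer state for vcpu ... */
            migrate_enable();
    }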
2023-11-06 | ARM: enable irq in translation/section permission fault handlers | Yadi.hu | 1 | -0/+6
Probably happens on all ARM with CONFIG_PREEMPT_RT and CONFIG_DEBUG_ATOMIC_SLEEP. This simple program:
int main() { *((char*)0xc0001000) = 0; };
[ 512.742724] BUG: sleeping function called from invalid context at kernel/rtmutex.c:658
[ 512.743000] in_atomic(): 0, irqs_disabled(): 128, pid: 994, name: a
[ 512.743217] INFO: lockdep is turned off.
[ 512.743360] irq event stamp: 0
[ 512.743482] hardirqs last enabled at (0): [< (null)>] (null)
[ 512.743714] hardirqs last disabled at (0): [<c0426370>] copy_process+0x3b0/0x11c0
[ 512.744013] softirqs last enabled at (0): [<c0426370>] copy_process+0x3b0/0x11c0
[ 512.744303] softirqs last disabled at (0): [< (null)>] (null)
[ 512.744631] [<c041872c>] (unwind_backtrace+0x0/0x104)
[ 512.745001] [<c09af0c4>] (dump_stack+0x20/0x24)
[ 512.745355] [<c0462490>] (__might_sleep+0x1dc/0x1e0)
[ 512.745717] [<c09b6770>] (rt_spin_lock+0x34/0x6c)
[ 512.746073] [<c0441bf0>] (do_force_sig_info+0x34/0xf0)
[ 512.746457] [<c0442668>] (force_sig_info+0x18/0x1c)
[ 512.746829] [<c041d880>] (__do_user_fault+0x9c/0xd8)
[ 512.747185] [<c041d938>] (do_bad_area+0x7c/0x94)
[ 512.747536] [<c041d990>] (do_sect_fault+0x40/0x48)
[ 512.747898] [<c040841c>] (do_DataAbort+0x40/0xa0)
[ 512.748181] Exception stack(0xecaa1fb0 to 0xecaa1ff8)
0xc0000000 belongs to the kernel address space; a user task cannot be allowed to access it. For the above condition, the correct result is that the test case receives a "segmentation fault" and exits, rather than producing the splat above.
The root cause is commit 02fe2845d6a8 ("avoid enabling interrupts in prefetch/data abort handlers"): it deletes the irq-enable block from the data abort assembly code and moves it into the page/breakpoint/alignment fault handlers instead, but does not enable irqs in the translation/section permission fault handlers. ARM disables irqs when it enters exception/interrupt mode, so if the kernel does not enable them, they remain disabled during translation/section permission faults.
We see the above splat because do_force_sig_info() is still called with IRQs off, and that code eventually does a:
spin_lock_irqsave(&t->sighand->siglock, flags);
As this is architecture-independent code, and we've not seen any other need for another arch to have the siglock converted to a raw lock, we can conclude that we should enable irqs for the ARM translation/section permission exceptions.
Signed-off-by: Yadi.hu <yadi.hu@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
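A sketch of the kind of fix described, modeled on how other ARM fault handlers re-enable interrupts (the exact hunk in arch/arm/mm/fault.c is an assumption):

    static int
    do_sect_fault_sketch(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
    {
            /*
             * The data abort entry path no longer re-enables IRQs, so do it
             * here before do_bad_area() ends up in force_sig_info(), which
             * takes a sleeping lock on -rt.
             */
            if (interrupts_enabled(regs))
                    local_irq_enable();

            do_bad_area(addr, fsr, regs);
            return 0;
    }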
2023-11-06 | arch/arm64: Add lazy preempt support | Anders Roxell | 5 | -3/+34
arm64 is missing support for PREEMPT_RT. The main feature which is lacking is support for lazy preemption. The arch-specific entry code, thread information structure definitions, and associated data tables have to be extended to provide this support. Then the Kconfig file has to be extended to indicate the support is available, and also to indicate that support for full RT preemption is now available. Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | powerpc: Add support for lazy preemption | Thomas Gleixner | 3 | -2/+14
Implement the powerpc pieces for lazy preempt. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | arm: Add support for lazy preemption | Thomas Gleixner | 5 | -5/+25
Implement the arm pieces for lazy preempt. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | entry: Fix the preempt lazy fallout | Thomas Gleixner | 2 | -2/+6
Common code needs common defines.... Fixes: f2f9e496208c ("x86: Support for lazy preemption") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | x86: Support for lazy preemption | Thomas Gleixner | 5 | -3/+42
Implement the x86 pieces for lazy preempt. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | x86/entry: Use should_resched() in idtentry_exit_cond_resched() | Sebastian Andrzej Siewior | 1 | -1/+1
The TIF_NEED_RESCHED bit is inlined on x86 into the preemption counter. By using should_resched(0) instead of need_resched(), the same check can be performed using only the variable that preempt_count() already reads.
Use should_resched(0) instead of need_resched().
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
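The resulting check is essentially a one-liner; a simplified sketch:

    static void exit_cond_resched_sketch(void)
    {
            /*
             * should_resched(0) tests "preempt_count() == 0 and a reschedule
             * is pending" with a single read of the x86 preemption counter,
             * into which TIF_NEED_RESCHED is folded, instead of a separate
             * need_resched() access.
             */
            if (should_resched(0))
                    preempt_schedule_irq();
    }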
2023-11-06 | x86: Enable RT also on 32bit | Sebastian Andrzej Siewior | 1 | -1/+1
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | x86: Allow to enable RT | Sebastian Andrzej Siewior | 1 | -0/+1
Allow to select RT. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | x86: kvm Require const tsc for RT | Thomas Gleixner | 1 | -0/+8
Non constant TSC is a nightmare on bare metal already, but with virtualization it becomes a complete disaster because the workarounds are horrible latency wise. That's also a preliminary for running RT in a guest on top of a RT host. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | signal/x86: Delay calling signals in atomic | Oleg Nesterov | 4 | -0/+53
On x86_64 we must disable preemption before we enable interrupts for stack faults, int3 and debugging, because the current task is using a per-CPU debug stack defined by the IST. If we schedule out, another task can come in and use the same stack and cause the stack to be corrupted and crash the kernel on return.
When CONFIG_PREEMPT_RT is enabled, spin_locks become mutexes, and one of these is the spin lock used in signal handling. Some of the debug code (int3) causes do_trap() to send a signal. This function calls a spin lock that has been converted to a mutex and has the possibility to sleep. If this happens, the above issues with the corrupted stack are possible.
Instead of calling the signal right away, for PREEMPT_RT and x86_64, the signal information is stored in the task's task_struct and TIF_NOTIFY_RESUME is set. Then on exit of the trap, the signal resume code will send the signal when preemption is enabled.
[ rostedt: Switched from #ifdef CONFIG_PREEMPT_RT to ARCH_RT_DELAYS_SIGNAL_SEND and added comments to the code. ]
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bigeasy: also needed on 32bit as per Yang Shi <yang.shi@linaro.org>]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | Add localversion for -RT release | Thomas Gleixner | 1 | -0/+1
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | sysfs: Add /sys/kernel/realtime entry | Clark Williams | 1 | -0/+12
Add a /sys/kernel entry to indicate that the kernel is a realtime kernel.
Clark says that he needs this for udev rules: udev needs to evaluate whether it's a PREEMPT_RT kernel a few thousand times, and parsing uname output is too slow or so.
Are there better solutions? Should it exist and return 0 on !-rt?
Signed-off-by: Clark Williams <williams@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
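A sketch of what such a /sys/kernel/realtime attribute looks like (function and variable names are assumptions; the path and the constant "1" follow the description above):

    #include <linux/init.h>
    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    static ssize_t realtime_show(struct kobject *kobj,
                                 struct kobj_attribute *attr, char *buf)
    {
            return sprintf(buf, "%d\n", 1);
    }

    static struct kobj_attribute realtime_attr = __ATTR_RO(realtime);

    static int __init rt_sysfs_init(void)
    {
            /* Creates /sys/kernel/realtime, which always reads "1" on -rt. */
            return sysfs_create_file(kernel_kobj, &realtime_attr.attr);
    }
    late_initcall(rt_sysfs_init);

A udev rule can then test the flag with a cheap read of /sys/kernel/realtime instead of parsing uname output.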
2023-11-06 | ARM64: Allow to enable RT | Sebastian Andrzej Siewior | 1 | -0/+2
Allow to select RT. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | sched: Add support for lazy preemption | Thomas Gleixner | 12 | -35/+248
It has become an obsession to mitigate the determinism vs. throughput loss of RT. Looking at the mainline semantics of preemption points gives a hint why RT sucks throughput-wise for ordinary SCHED_OTHER tasks. One major issue is the wakeup of tasks which right away preempt the waking task while the waking task holds a lock on which the woken task will block right after having preempted the wakee. In mainline this is prevented due to the implicit preemption disable of spin/rw_lock held regions. On RT this is not possible due to the fully preemptible nature of sleeping spinlocks.
Though for a SCHED_OTHER task preempting another SCHED_OTHER task this is really not a correctness issue. RT folks are concerned about SCHED_FIFO/RR task preemption and not about the purely fairness-driven SCHED_OTHER preemption latencies.
So I introduced a lazy preemption mechanism which only applies to SCHED_OTHER tasks preempting another SCHED_OTHER task. Aside of the existing preempt_count, each task now sports a preempt_lazy_count which is manipulated on lock acquiry and release. This is slightly incorrect, as for laziness reasons I coupled this to migrate_disable/enable so some other mechanisms get the same treatment (e.g. get_cpu_light).
Now on the scheduler side, instead of setting NEED_RESCHED this sets NEED_RESCHED_LAZY in case of a SCHED_OTHER/SCHED_OTHER preemption and therefore allows the waking task to exit the lock-held region before the woken task preempts it. That also works better for cross-CPU wakeups as the other side can stay in the adaptive spinning loop.
For RT class preemption there is no change. This simply sets NEED_RESCHED and forgoes the lazy preemption counter.
Initial tests do not expose any observable latency increase, but history shows that I've been proven wrong before :)
The lazy preemption mode is on by default, but with CONFIG_SCHED_DEBUG enabled it can be disabled via:
# echo NO_PREEMPT_LAZY >/sys/kernel/debug/sched_features
and re-enabled via:
# echo PREEMPT_LAZY >/sys/kernel/debug/sched_features
The test results so far are very machine- and workload-dependent, but there is a clear trend that it enhances the non-RT workload performance.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
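A much-simplified sketch of the decision described above (names are illustrative; the real logic lives in the scheduler's resched/check_preempt paths):

    /* Called when a wakeup decides that 'curr' should be preempted. */
    static void resched_curr_sketch(struct task_struct *curr, bool both_sched_other)
    {
            if (both_sched_other) {
                    /*
                     * SCHED_OTHER preempting SCHED_OTHER: mark lazily. The
                     * flag is only acted upon once the waker's
                     * preempt_lazy_count drops to zero, i.e. after it has
                     * left the lock-held region.
                     */
                    set_tsk_thread_flag(curr, TIF_NEED_RESCHED_LAZY);
            } else {
                    /* An RT-class task is involved: preempt immediately. */
                    set_tsk_need_resched(curr);
            }
    }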
2023-11-06 | */softirq: Disable softirq stacks on PREEMPT_RT | Thomas Gleixner | 3 | -0/+8
PREEMPT_RT preempts softirqs, and the current implementation avoids do_softirq_own_stack() and only uses __do_softirq(). Disable the unused softirq stacks on PREEMPT_RT to save some memory and ensure that do_softirq_own_stack() is not used, which is not expected.
[bigeasy: commit description.]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2023-11-06 | generic/softirq: Disable softirq stacks on PREEMPT_RT | Thomas Gleixner | 1 | -1/+1
PREEMPT_RT preempts softirqs, and the current implementation avoids do_softirq_own_stack() and only uses __do_softirq(). Disable the unused softirq stacks on PREEMPT_RT to save some memory and ensure that do_softirq_own_stack() is not used, which is not expected.
[bigeasy: commit description.]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
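The generic part of this is essentially a one-line guard; a rough sketch based on the description (the exact preprocessor condition is an assumption):

    /* include/asm-generic/softirq_stack.h, sketched */
    #if defined(CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK) && !defined(CONFIG_PREEMPT_RT)
    void do_softirq_own_stack(void);
    #else
    static inline void do_softirq_own_stack(void)
    {
            /* On PREEMPT_RT softirqs are preemptible and never use an own stack. */
            __do_softirq();
    }
    #endif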
2023-11-06 | leds: trigger: Disable CPU trigger on PREEMPT_RT | Sebastian Andrzej Siewior | 1 | -0/+1
The CPU trigger is invoked on ARM from CPU idle. That trigger later invokes led_trigger_event(), which may invoke the callback of the actual driver. That driver can acquire a spinlock_t, which is okay on a kernel without PREEMPT_RT. On a PREEMPT_RT enabled kernel this lock is turned into a sleeping lock and must not be acquired with disabled interrupts.
Disable the CPU trigger on PREEMPT_RT.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lkml.kernel.org/r/20210924111501.m57cwwn7ahiyxxdd@linutronix.de
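The fix itself is a one-line Kconfig dependency, roughly (help text elided):

    # drivers/leds/trigger/Kconfig (sketch)
    config LEDS_TRIGGER_CPU
            bool "LED CPU Trigger"
            depends on !PREEMPT_RT
            help
              ...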
2023-11-06 | drivers/block/zram: Replace bit spinlocks with rtmutex for -rt | Mike Galbraith | 2 | -0/+37
They're nondeterministic, and lead to ___might_sleep() splats in -rt. OTOH, they're a lot less wasteful than an rtmutex per page. Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | mm/zsmalloc: Replace bit spinlock and get_cpu_var() usage. | Mike Galbraith | 2 | -8/+79
For efficiency reasons, zsmalloc is using a slim `handle'. The value is the address of a memory allocation of 4 or 8 bytes depending on the size of the long data type. The lowest bit in that allocated memory is used as a bit spin lock.
The usage of the bit spin lock is problematic because with the bit spin lock held zsmalloc acquires a rwlock_t and spinlock_t which are both sleeping locks on PREEMPT_RT and therefore must not be acquired with disabled preemption.
Extend the handle to struct zsmalloc_handle, which holds the old handle as addr and a spinlock_t which replaces the bit spinlock. Replace all the wrapper functions accordingly.
The usage of get_cpu_var() in zs_map_object() is problematic because it disables preemption and makes it impossible to acquire any sleeping lock on PREEMPT_RT such as a spinlock_t. Replace the get_cpu_var() usage with a local_lock_t which is embedded in struct mapping_area. It ensures that access to the struct is synchronized against all users on the same CPU.
This survived LTP testing.
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bigeasy: replace the bitspin_lock() with a mutex, get_locked_var() and patch description. Mike then fixed the size magic and made handle lock spinlock_t.]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
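A sketch of the two replacements described (layout is illustrative and follows the description above rather than the actual diff):

    #include <linux/local_lock.h>
    #include <linux/spinlock.h>

    /* The "handle" grows from a bare unsigned long into a small struct. */
    struct zsmalloc_handle {
            unsigned long addr;     /* what the old handle value used to carry */
            spinlock_t lock;        /* replaces the bit spinlock in bit 0 */
    };

    /* get_cpu_var() is replaced by a local_lock_t embedded in the per-CPU area. */
    struct mapping_area_sketch {
            local_lock_t lock;
            /* ... the existing mapping fields ... */
    };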
2023-11-06 | tpm_tis: fix stall after iowrite*()s | Haris Okanovic | 1 | -2/+27
ioread8() operations to TPM MMIO addresses can stall the CPU when immediately following a sequence of iowrite*()'s to the same region. For example, cyclictest measures ~400us latency spikes when a non-RT usermode application communicates with an SPI-based TPM chip (Intel Atom E3940 system, PREEMPT_RT kernel). The spikes are caused by a stalling ioread8() operation following a sequence of 30+ iowrite8()s to the same address. I believe this happens because the write sequence is buffered (in the CPU or somewhere along the bus), and gets flushed on the first LOAD instruction (ioread*()) that follows.
The enclosed change appears to fix this issue: read the TPM chip's access register (status code) after every iowrite*() operation to amortize the cost of flushing data to the chip across multiple instructions.
Signed-off-by: Haris Okanovic <haris.okanovic@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
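A sketch of the amortized flush described above (helper names are assumptions; the idea is to wrap the tpm_tis write accessors):

    static inline void tpm_tis_flush(void __iomem *iobase)
    {
            /* Dummy read of the access register pushes out buffered writes. */
            ioread8(iobase + TPM_ACCESS(0));
    }

    static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase, u32 addr)
    {
            iowrite8(b, iobase + addr);
            tpm_tis_flush(iobase);
    }

    static inline void tpm_tis_iowrite32(u32 b, void __iomem *iobase, u32 addr)
    {
            iowrite32(b, iobase + addr);
            tpm_tis_flush(iobase);
    }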
2023-11-06 | virt: acrn: Remove unused acrn_irqfds_mutex. | Sebastian Andrzej Siewior | 1 | -1/+0
acrn_irqfds_mutex is not used, never was. Remove acrn_irqfds_mutex. Fixes: aa3b483ff1d71 ("virt: acrn: Introduce irqfd") Cc: Fei Li <fei1.li@intel.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2023-11-06 | smack: Guard smack_ipv6_lock definition within a SMACK_IPV6_PORT_LABELING block | Sebastian Andrzej Siewior | 1 | -3/+6
The mutex smack_ipv6_lock is only used within the SMACK_IPV6_PORT_LABELING block but its definition is outside of the block. This leads to a defined-but-not-used warning on PREEMPT_RT.
Moving smack_ipv6_lock down to the block where it is used raises the question of why smk_ipv6_port_list is read if nothing is ever added to it. It turns out that only smk_ipv6_port_check() uses it outside of an ifdef SMACK_IPV6_PORT_LABELING block. However, two of the three callers invoke smk_ipv6_port_check() from an ifdef block, and only one uses the __is_defined() macro, which requires the function and smk_ipv6_port_list to be around.
Put the lock and list inside an ifdef SMACK_IPV6_PORT_LABELING block to avoid the warning regarding the unused mutex. Extend the ifdef block to also cover smk_ipv6_port_check(). Make smack_socket_connect() use ifdef instead of __is_defined() to avoid complaints about a missing function.
Cc: Casey Schaufler <casey@schaufler-ca.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2023-11-06 | ASoC: mediatek: mt8195: Remove unused irqs_lock. | Sebastian Andrzej Siewior | 1 | -1/+0
irqs_lock is not used, never was. Remove irqs_lock. Fixes: 283b612429a27 ("ASoC: mediatek: implement mediatek common structure") Cc: Liam Girdwood <lgirdwood@gmail.com> Cc: Mark Brown <broonie@kernel.org> Cc: Jaroslav Kysela <perex@perex.cz> Cc: Takashi Iwai <tiwai@suse.com> Cc: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2023-11-06 | genirq: update irq_set_irqchip_state documentation | Josh Cartwright | 1 | -1/+1
On -rt kernels, the use of migrate_disable()/migrate_enable() is sufficient to guarantee a task isn't moved to another CPU. Update the irq_set_irqchip_state() documentation to reflect this. Signed-off-by: Josh Cartwright <joshc@ni.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://lkml.kernel.org/r/20210917103055.92150-1-bigeasy@linutronix.de
2023-11-06 | drm/i915: Drop the irqs_disabled() check | Sebastian Andrzej Siewior | 1 | -2/+0
The !irqs_disabled() check triggers on PREEMPT_RT even with i915_sched_engine::lock acquired. The reason is that the lock is transformed into a sleeping lock on PREEMPT_RT and does not disable interrupts.
There is no need to check for disabled interrupts. The lockdep annotation below already checks whether the lock has been acquired by the caller and will yell if the interrupts are not disabled.
Remove the !irqs_disabled() check.
Reported-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2023-11-06 | drm/i915/gt: Use spin_lock_irq() instead of local_irq_disable() + spin_lock() | Sebastian Andrzej Siewior | 1 | -12/+5
execlists_dequeue() is invoked from a function which uses local_irq_disable() to disable interrupts, so the spin_lock() behaves like spin_lock_irq(). This breaks PREEMPT_RT because local_irq_disable() + spin_lock() is not the same as spin_lock_irq().
execlists_dequeue_irq() and execlists_dequeue() each have only one caller. If intel_engine_cs::active::lock is acquired and released with the _irq suffix then it behaves almost as if execlists_dequeue() were invoked with disabled interrupts. The difference is the last part of the function, which is then invoked with enabled interrupts. I can't tell if this makes a difference. From looking at it, it might work to move the last unlock to the end of the function, as I didn't find anything that would acquire the lock again.
Reported-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
2023-11-06 | drm/i915/gt: Queue and wait for the irq_work item. | Sebastian Andrzej Siewior | 1 | -3/+2
Disabling interrupts and invoking the irq_work function directly breaks on PREEMPT_RT. PREEMPT_RT does not invoke all irq_work from hardirq context because some of the users have spinlock_t locking in the callback function. These locks are then turned into sleeping locks which cannot be acquired with disabled interrupts.
Using irq_work_queue() has the benefit that the irq_work will be invoked in the regular context. In general there is "no" delay between enqueuing the callback and its invocation because the interrupt is raised right away on architectures which support it (which includes x86).
Use irq_work_queue() + irq_work_sync() instead of invoking the callback directly.
Reported-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
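The replacement pattern is small; an illustrative sketch:

    #include <linux/irq_work.h>

    static void queue_and_wait_sketch(struct irq_work *work)
    {
            /*
             * Instead of local_irq_disable() + calling the callback directly
             * (which breaks on PREEMPT_RT), queue the item - the IPI is raised
             * right away on x86 - and wait for the callback to complete.
             */
            irq_work_queue(work);
            irq_work_sync(work);
    }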
2023-11-06 | drm/i915: skip DRM_I915_LOW_LEVEL_TRACEPOINTS with NOTRACE | Sebastian Andrzej Siewior | 1 | -1/+1
The order of the header files is important. If this header file is included after tracepoint.h was included then the NOTRACE here becomes a nop. Currently this happens for two .c files which use the tracepoints behind DRM_I915_LOW_LEVEL_TRACEPOINTS.
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-11-06 | drm/i915: Disable tracing points on PREEMPT_RT | Sebastian Andrzej Siewior | 1 | -0/+4
Luca Abeni reported this:
| BUG: scheduling while atomic: kworker/u8:2/15203/0x00000003
| CPU: 1 PID: 15203 Comm: kworker/u8:2 Not tainted 4.19.1-rt3 #10
| Call Trace:
|  rt_spin_lock+0x3f/0x50
|  gen6_read32+0x45/0x1d0 [i915]
|  g4x_get_vblank_counter+0x36/0x40 [i915]
|  trace_event_raw_event_i915_pipe_update_start+0x7d/0xf0 [i915]
The tracing events use trace_i915_pipe_update_start() among others; these use functions which acquire spinlock_t locks that are transformed into sleeping locks on PREEMPT_RT. A few trace points use intel_get_crtc_scanline(), others use ->get_vblank_counter(), which also might acquire a sleeping lock on PREEMPT_RT. At the time the arguments are evaluated within a trace point, preemption is disabled, and so the locks must not be acquired on PREEMPT_RT.
Based on this I don't see any other way than disabling the trace points on PREEMPT_RT.
Reported-by: Luca Abeni <lucabe72@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2023-11-06 | drm/i915: Don't check for atomic context on PREEMPT_RT | Sebastian Andrzej Siewior | 1 | -1/+1
The !in_atomic() check in _wait_for_atomic() triggers on PREEMPT_RT because the uncore::lock is a spinlock_t and does not disable preemption or interrupts.
Changing the uncore::lock to a raw_spinlock_t doubles the worst case latency on an otherwise idle testbox during testing. Therefore I'm currently unsure about changing this.
Link: https://lore.kernel.org/all/20211006164628.s2mtsdd2jdbfyf7g@linutronix.de/
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>