path: root/arch/arm/include/asm/mmu_context.h
Age | Commit message | Author | Files | Lines
2013-04-03 | ARM: 7684/1: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB operations) | Catalin Marinas | 1 | -0/+2

On Cortex-A15 (r0p0..r3p2) the TLBI/DSB operations are not adequately shooting down all use of the old entries. This patch implements the erratum workaround, which consists of:
1. Dummy TLBIMVAIS and DSB on the CPU doing the TLBI operation.
2. Send an IPI to the CPUs that are running the same mm (and ASID) as the one being invalidated (or to all the online CPUs for global pages).
3. The CPU receiving the IPI executes a DMB and CLREX (part of the exception return code already).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
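In outline, the workaround has this shape (a hedged sketch; broadcast_tlb_a15_erratum() and ipi_flush_tlb_a15_erratum() are names assumed from the description above, not necessarily the mainline ones):

```c
#include <linux/smp.h>
#include <asm/barrier.h>

/* Step 3: the CPU receiving the IPI only needs a DMB; the CLREX is
 * already part of the exception return path. */
static void ipi_flush_tlb_a15_erratum(void *arg)
{
	dmb();
}

static void broadcast_tlb_a15_erratum(void)
{
	/* Step 1: dummy TLBIMVAIS (address 0, ASID 0) plus DSB on the
	 * CPU that performed the original TLBI operation. */
	asm("mcr p15, 0, %0, c8, c3, 1" : : "r" (0));
	dsb();

	/* Step 2: IPI the other online CPUs and wait for completion. */
	smp_call_function(ipi_flush_tlb_a15_erratum, NULL, 1);
}
```
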
2013-03-04 | ARM: 7659/1: mm: make mm->context.id an atomic64_t variable | Will Deacon | 1 | -1/+1

mm->context.id is updated under asid_lock when a new ASID is allocated to an mm_struct. However, it is also read without the lock when a task is being scheduled and checking whether or not the current ASID generation is up-to-date. If two threads of the same process are being scheduled in parallel and the bottom bits of the generation in their mm->context.id match the current generation (that is, the mm_struct has not been used for ~2^24 rollovers), then the non-atomic, lockless access to mm->context.id may yield the incorrect ASID. This patch fixes the issue by making mm->context.id an atomic64_t, ensuring that the generation is always read consistently. Code that only requires access to the ASID bits (e.g. TLB flushing by mm) accesses the value directly, which GCC converts to an ldrb.

Cc: <stable@vger.kernel.org> # 3.8
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
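The two resulting access patterns, as a sketch (the accessor names here are illustrative, not the header's actual macros):

```c
#include <linux/atomic.h>
#include <linux/mm_types.h>

#define ASID_BITS	8
#define ASID_MASK	((~0ULL) << ASID_BITS)

/* Generation checks on the scheduling path must see the generation and
 * ASID together, so the full 64-bit value is read atomically. */
static inline u64 mm_context_id(struct mm_struct *mm)
{
	return atomic64_read(&mm->context.id);
}

/* TLB flushing by mm only needs the ASID bits, so the bottom byte is
 * read directly from the counter; GCC turns this into a single ldrb. */
static inline u32 mm_asid(struct mm_struct *mm)
{
	return mm->context.id.counter & ~ASID_MASK;
}
```
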
2012-11-26 | ARM: 7582/2: rename kvm_seq to vmalloc_seq so to avoid confusion with KVM | Nicolas Pitre | 1 | -3/+3

The kvm_seq value has nothing whatsoever to do with this other KVM. Given that KVM support on ARM is imminent, it's best to rename kvm_seq to something that clearly identifies what it is about, i.e. a sequence number for vmalloc section mappings.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-11-05 | ARM: mm: remove IPI broadcasting on ASID rollover | Will Deacon | 1 | -79/+3

ASIDs are allocated to MMU contexts based on a rolling counter. This means that after 255 allocations we must invalidate all existing ASIDs via an expensive IPI mechanism to synchronise all of the online CPUs and ensure that all tasks execute with an ASID from the new generation. This patch changes the rollover behaviour so that we rely instead on the hardware broadcasting of the TLB invalidation to avoid the IPI calls. This works by keeping track of the active ASID on each core, which is then reserved in the case of a rollover so that currently scheduled tasks can continue to run. For cores without hardware TLB broadcasting, we keep track of pending flushes in a cpumask, so cores can flush their local TLB before scheduling a new mm.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
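Condensed into code, the scheduling path then looks roughly like this (the lock, counter, and helper names are assumptions based on the description, not verbatim mainline code):

```c
#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/spinlock.h>
#include <asm/proc-fns.h>
#include <asm/tlbflush.h>

#define ASID_BITS	8

extern u64 cpu_last_asid;				/* assumed */
extern void new_context(struct mm_struct *mm, unsigned int cpu);

static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
static DEFINE_PER_CPU(u64, active_asids);
static cpumask_t tlb_flush_pending;

void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
{
	unsigned long flags;
	unsigned int cpu = smp_processor_id();

	raw_spin_lock_irqsave(&cpu_asid_lock, flags);

	/* Allocate a fresh ASID if ours belongs to an old generation. */
	if ((mm->context.id ^ cpu_last_asid) >> ASID_BITS)
		new_context(mm, cpu);

	/* Record what this core runs so a rollover can reserve it. */
	this_cpu_write(active_asids, mm->context.id);

	/* Cores without hardware TLB broadcast flush locally before
	 * scheduling a new mm after a rollover marked them pending. */
	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
		local_flush_tlb_all();

	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);

	cpu_switch_mm(mm->pgd, mm);
}
```
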
2012-04-17 | ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on pre-ARMv6 CPUs | Catalin Marinas | 1 | -5/+26

This patch removes the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition for ARMv5 and earlier processors. On such processors, the context switch requires a full cache flush. To avoid high interrupt latencies, this patch defers the mm switching to the post-lock switch hook if the interrupts are disabled.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
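A minimal sketch of that deferral, assuming a TIF_SWITCH_MM thread flag (an assumed name; the real patch may store the pending state differently):

```c
#include <linux/sched.h>
#include <linux/thread_info.h>
#include <asm/proc-fns.h>

static inline void check_and_switch_context(struct mm_struct *mm,
					    struct task_struct *tsk)
{
	if (irqs_disabled())
		/* cpu_switch_mm() implies a full VIVT cache flush, which
		 * is too slow with interrupts off; mark the switch as
		 * pending and keep running on the old mm for now. */
		set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
	else
		cpu_switch_mm(mm->pgd, mm);
}

#define finish_arch_post_lock_switch finish_arch_post_lock_switch
static inline void finish_arch_post_lock_switch(void)
{
	/* Called after the runqueue lock is dropped and interrupts are
	 * enabled again, so the cache flush can happen safely here. */
	if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
		struct mm_struct *mm = current->mm;

		cpu_switch_mm(mm->pgd, mm);
	}
}
```
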
2012-04-17 | ARM: Remove current_mm per-cpu variable | Catalin Marinas | 1 | -7/+0

The current_mm variable was used to store the new mm between the switch_mm() and switch_to() calls, where an IPI to reset the context could have set the wrong mm. Since the interrupts are disabled during context switch, there is no need for this variable; current->active_mm already points to the current mm when interrupts are re-enabled.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2012-04-17 | ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs | Catalin Marinas | 1 | -16/+56

Since the ASIDs must be unique to an mm across all the CPUs in a system, the __new_context() function needs to broadcast a context reset event to all the CPUs during ASID allocation if a roll-over occurred. Such IPIs cannot be issued with interrupts disabled and ARM had to define __ARCH_WANT_INTERRUPTS_ON_CTXSW. This patch changes the check_context() function to check_and_switch_context() called from switch_mm(). In the case of ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed and the interrupts are disabled, it defers the __new_context() and cpu_switch_mm() calls to the post-lock switch hook where the interrupts are enabled. Setting the reserved TTBR0 was also moved to check_and_switch_context() from cpu_v7_switch_mm().

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
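The caller side, roughly: a sketch of how switch_mm() can hand off to check_and_switch_context() (condensed from the description; not the exact mainline body):

```c
#include <linux/cpumask.h>
#include <linux/sched.h>
#include <linux/smp.h>

static inline void
switch_mm(struct mm_struct *prev, struct mm_struct *next,
	  struct task_struct *tsk)
{
#ifdef CONFIG_MMU
	unsigned int cpu = smp_processor_id();

	/* Only switch (or defer the switch) when this CPU has not run
	 * the mm before or the mm actually changes. */
	if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next)
		check_and_switch_context(next, tsk);
#endif
}
```
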
2012-03-24 | ARM: 7294/1: vectors: use gate_vma for vectors user mapping | Will Deacon | 1 | -28/+1

The current user mapping for the vectors page is inserted as a 'horrible hack vma' into each task via arch_setup_additional_pages. This causes problems with the MM subsystem and vm_normal_page, as described here: https://lkml.org/lkml/2012/1/14/55

Following the suggestion from Hugh in the above thread, this patch uses the gate_vma for the vectors user mapping, therefore consolidating the horrible hack VMAs into one.

Acked-and-Tested-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
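The consolidated mapping can be published through the gate-VMA hooks along these lines (a sketch; the vectors base 0xffff0000 and the exact flags are assumptions):

```c
#include <linux/init.h>
#include <linux/mm.h>

static struct vm_area_struct gate_vma;

static int __init gate_vma_init(void)
{
	gate_vma.vm_start     = 0xffff0000;
	gate_vma.vm_end       = 0xffff0000 + PAGE_SIZE;
	gate_vma.vm_flags     = VM_READ | VM_EXEC;
	gate_vma.vm_page_prot = PAGE_READONLY_EXEC;
	return 0;
}
arch_initcall(gate_vma_init);

/* The MM core asks these hooks instead of finding a per-task vma. */
struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
	return &gate_vma;
}

int in_gate_area(struct mm_struct *mm, unsigned long addr)
{
	return addr >= gate_vma.vm_start && addr < gate_vma.vm_end;
}
```
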
2010-10-02 | ARM: add a vma entry for the user accessible vector page | Nicolas Pitre | 1 | -1/+28

The kernel makes the high vector page visible to user space. This page contains (amongst others) small code segments that can be executed in user space. Make this page visible through ptrace and /proc/<pid>/mem in order to let gdb perform code parsing needed for proper unwinding.

For example, the ERESTART_RESTARTBLOCK handler actually has a stack frame -- it returns to a PC value stored on the user's stack. To unwind after a "sleep" system call was interrupted twice, GDB would have to recognize this situation and understand that stack frame layout -- which it currently cannot do.

We could fix this by hard-coding addresses in the vector page range into GDB, but that isn't really portable as not all of those addresses are guaranteed to remain stable across kernel releases. And having the gdb process make an exception for this page and get content from its own address space for it looks strange, and it is not future-proof either.

Being located above PAGE_OFFSET, this vma cannot be deleted by user space code.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
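The per-task mapping this added (the one the gate-vma change above later replaced) can be expressed with install_special_mapping(), roughly as follows; the hook name and flag subset are assumptions:

```c
#include <linux/mm.h>
#include <linux/sched.h>

/* Declare a vma for the high vector page so it shows up in ptrace and
 * /proc/<pid>/mem; no struct pages are supplied because the page sits
 * above PAGE_OFFSET and is mapped by the kernel itself. */
static int vectors_user_mapping(void)
{
	struct mm_struct *mm = current->mm;

	return install_special_mapping(mm, 0xffff0000, PAGE_SIZE,
				       VM_READ | VM_EXEC |
				       VM_MAYREAD | VM_MAYEXEC,
				       NULL);
}
```
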
2010-02-16 | ARM: 5905/1: ARM: Global ASID allocation on SMP | Catalin Marinas | 1 | -0/+15

The current ASID allocation algorithm doesn't ensure the notification of the other CPUs when the ASID rolls over. This may lead to two processes using the same ASID (but different generation) or multiple threads of the same process using different ASIDs. This patch adds the broadcasting of the ASID rollover event to the other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid" during the handling of the broadcast, the ASID numbering now starts at "smp_processor_id() + 1". At rollover, the cpu_last_asid will be set to NR_CPUS.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
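Sketched out, the rollover broadcast works along these lines (flush_context(), set_mm_context(), and broadcast_asid_rollover() are helper names assumed from the description):

```c
#include <linux/sched.h>
#include <linux/smp.h>
#include <asm/proc-fns.h>

extern unsigned int cpu_last_asid;			/* assumed */
extern void flush_context(void);
extern void set_mm_context(struct mm_struct *mm, unsigned int asid);

/* Runs on every other CPU via IPI after a rollover. */
static void reset_context(void *info)
{
	unsigned int cpu = smp_processor_id();
	unsigned int asid;
	struct mm_struct *mm = current->active_mm;

	/* Numbering starts at smp_processor_id() + 1, so each CPU picks
	 * a distinct ASID in the new generation. */
	asid = cpu_last_asid + cpu + 1;

	flush_context();
	set_mm_context(mm, asid);
	cpu_switch_mm(mm->pgd, mm);
}

/* On the CPU that detected the rollover. */
static void broadcast_asid_rollover(void)
{
	smp_call_function(reset_context, NULL, 1);
	/* Skip past the per-CPU reserved range; per the commit text,
	 * cpu_last_asid ends up at NR_CPUS after a rollover. */
	cpu_last_asid += NR_CPUS - 1;
}
```
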
2009-09-24 | cpumask: use mm_cpumask() wrapper: arm | Rusty Russell | 1 | -3/+4

Makes code future-proof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
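Illustrated with a minimal before/after sketch (the wrapper function is hypothetical; the two cpumask APIs are real):

```c
#include <linux/cpumask.h>
#include <linux/mm_types.h>

static void mark_mm_on_cpu(struct mm_struct *mm, int cpu)
{
	/* Old style, direct field access with the deprecated ops:
	 *     cpu_set(cpu, mm->cpu_vm_mask);
	 * New style: the accessor plus pointer-taking cpumask_* ops, so
	 * the field's representation can change without touching this. */
	cpumask_set_cpu(cpu, mm_cpumask(mm));
}
```
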
2009-07-24 | nommu: Remove the context.id from asm-offsets.c when !MMU | Catalin Marinas | 1 | -0/+2

There is no MMU context switching on MMU-less systems.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2008-11-29 | [ARM] Remove linux/sched.h from asm/cacheflush.h and asm/uaccess.h | Russell King | 1 | -0/+1

... and fix those drivers that were incorrectly relying upon that include.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-09-01 | [ARM] cachetype: move definitions to separate header | Russell King | 1 | -0/+1

Rather than pollute asm/cacheflush.h with the cache type definitions, move them to asm/cachetype.h, and include this new header where necessary.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-08-03 | [ARM] move include/asm-arm to arch/arm/include/asm | Russell King | 1 | -0/+117

Move platform independent header files to arch/arm/include/asm, leaving those in asm/arch* and asm/plat* alone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>