path: root/arch/arc/mm/tlb.c
author	Vineet Gupta <vgupta@synopsys.com>	2013-06-17 16:42:13 +0400
committer	Vineet Gupta <vgupta@synopsys.com>	2013-08-29 16:21:36 +0400
commit	64b703ef276964b160a5e88df0764f254460cafb (patch)
tree	686e1a89cebef90413cb3a283828012de59ba2da /arch/arc/mm/tlb.c
parent	4b06ff35fb1dcafbcbdcbe9ce794ab0770f2a843 (diff)
download	linux-64b703ef276964b160a5e88df0764f254460cafb.tar.xz
ARC: MMUv4 preps/1 - Fold PTE K/U access flags
The current ARC VM code has 13 flags in the Page Table Entry: some software (accessed/dirty/non-linear-maps) and the rest hardware specific. With an 8K MMU page, we need 19 bits for addressing the page frame, so the remaining 13 bits are just about enough to accommodate the current flags.

In MMUv4 there are 2 additional flags, SZ (normal or super page) and WT (cache access mode write-thru); additionally the PFN is 20 bits (vs. 19 before for 8K). Thus these can't be held in the current PTE w/o making each entry 64 bits wide.

It seems there is some scope for compressing the current PTE flags (and freeing up a few bits). Currently the PTE contains fully orthogonal, distinct access permissions for kernel and user mode (Kr, Kw, Kx; Ur, Uw, Ux) which can be folded into one set (R, W, X). The translation of 3 PTE bits into 6 TLB bits (when programming the MMU) can be done based on the following prerequisites/assumptions:

1. For kernel-mode-only translations (vmalloc: 0x7000_0000 to 0x7FFF_FFFF), the PTE additionally has the PAGE_GLOBAL flag set (and user space entries can never be global). Thus such a PTE can translate to Kr, Kw, Kx (as appropriate) and zero for the user mode counterparts.

2. For non-global entries, the PTE flags can be used to create mirrored K and U TLB bits. This is true after commit a950549c675f2c8c504 "ARC: copy_(to|from)_user() to honor usermode-access permissions", which ensured that user-space translations _MUST_ have the same access permissions for both U/K mode accesses, so that copy_{to,from}_user() plays fair with fault-based CoW break and such...

There is no such thing as a free lunch - the cost is slightly inflated TLB-miss handlers.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
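[Editor's note] A minimal standalone C sketch of the 3-to-6 bit fold described above. The bit positions chosen here for R/W/X, the derived K/U positions, and the PAGE_GLOBAL value are illustrative assumptions only, not the real ARC PTE layout; the actual encoding lives in the PTE_BITS_RWX and _PAGE_GLOBAL definitions used by the patch below.

	#include <stdio.h>

	/* Hypothetical bit layout for illustration only */
	#define PTE_R        (1u << 0)
	#define PTE_W        (1u << 1)
	#define PTE_X        (1u << 2)
	#define PTE_BITS_RWX (PTE_R | PTE_W | PTE_X)
	#define PAGE_GLOBAL  (1u << 8)   /* assumed position, not ARC's */

	static unsigned int fold_rwx(unsigned int pte)
	{
		unsigned int rwx = pte & PTE_BITS_RWX;

		if (pte & PAGE_GLOBAL)
			return rwx << 3;         /* kernel-only: Kr Kw Kx 0 0 0 */

		return rwx | (rwx << 3);         /* mirrored: Kr Kw Kx Ur Uw Ux */
	}

	int main(void)
	{
		/* global (kernel-only) read+write page: K bits only -> 0x18 */
		printf("%#x\n", fold_rwx(PTE_R | PTE_W | PAGE_GLOBAL));

		/* user page, read+exec: identical K and U bits -> 0x2d */
		printf("%#x\n", fold_rwx(PTE_R | PTE_X));
		return 0;
	}

The fold is cheap (a shift and an OR), which is why the only cost the commit message concedes is a slightly longer TLB-miss path.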
Diffstat (limited to 'arch/arc/mm/tlb.c')
-rw-r--r--	arch/arc/mm/tlb.c	19
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 7957dc4e4d4a..f9908341e8a7 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -341,7 +341,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
void create_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
{
unsigned long flags;
- unsigned int idx, asid_or_sasid;
+ unsigned int idx, asid_or_sasid, rwx;
unsigned long pd0_flags;
/*
@@ -393,8 +393,23 @@ void create_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
write_aux_reg(ARC_REG_TLBPD0, address | pd0_flags | asid_or_sasid);
+ /*
+ * ARC MMU provides fully orthogonal access bits for K/U mode,
+ * however Linux only saves 1 set to save PTE real-estate
+ * Here we convert 3 PTE bits into 6 MMU bits:
+ * -Kernel only entries have Kr Kw Kx 0 0 0
+ * -User entries have mirrored K and U bits
+ */
+ rwx = pte_val(*ptep) & PTE_BITS_RWX;
+
+ if (pte_val(*ptep) & _PAGE_GLOBAL)
+ rwx <<= 3; /* r w x => Kr Kw Kx 0 0 0 */
+ else
+ rwx |= (rwx << 3); /* r w x => Kr Kw Kx Ur Uw Ux */
+
/* Load remaining info in PD1 (Page Frame Addr and Kx/Kw/Kr Flags) */
- write_aux_reg(ARC_REG_TLBPD1, (pte_val(*ptep) & PTE_BITS_IN_PD1));
+ write_aux_reg(ARC_REG_TLBPD1,
+ rwx | (pte_val(*ptep) & PTE_BITS_NON_RWX_IN_PD1));
/* First verify if entry for this vaddr+ASID already exists */
write_aux_reg(ARC_REG_TLBCOMMAND, TLBProbe);