path: root/arch/csky/abiv2
2023-12-14  mm: Introduce flush_cache_vmap_early()  (Alexandre Ghiti; 1 file, -0/+1)
The pcpu setup when using the page allocator sets up a new vmalloc mapping very early in the boot process, so early that it cannot use the flush_cache_vmap() function, which may depend on structures not yet initialized (for example on riscv, we currently send an IPI to flush the other CPUs' TLBs). But on some architectures, we must call flush_cache_vmap(): for example, on riscv, some uarchs can cache invalid TLB entries, so we need to flush the newly established mapping to avoid taking an exception. So fix this by introducing a new function, flush_cache_vmap_early(), which is called right after setting the new page table entry and before accessing this new mapping. This new function is a local TLB flush on riscv and a no-op for other architectures (same as today). Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Dennis Zhou <dennis@kernel.org>
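A rough sketch of the shape this takes, assuming the usual asm-generic fallback pattern (the guard and the riscv helper name are illustrative, not quoted from the patch):

  /* Generic fallback: architectures that don't need the hook get a no-op. */
  #ifndef flush_cache_vmap_early
  #define flush_cache_vmap_early(start, end)  do { } while (0)
  #endif

  /* On riscv the hook can be a purely local (no-IPI) TLB flush, roughly: */
  #define flush_cache_vmap_early(start, end)  local_flush_tlb_kernel_range(start, end)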
2023-08-30  Merge tag 'mm-stable-2023-08-28-18-26' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds; 2 files, -17/+25)
Pull MM updates from Andrew Morton:
 - Some swap cleanups from Ma Wupeng ("fix WARN_ON in add_to_avail_list")
 - Peter Xu has a series ("mm/gup: Unify hugetlb, speed up thp") which reduces the special-case code for handling hugetlb pages in GUP. It also speeds up GUP handling of transparent hugepages.
 - Peng Zhang provides some maple tree speedups ("Optimize the fast path of mas_store()").
 - Sergey Senozhatsky has improved the performance of zsmalloc during compaction ("zsmalloc: small compaction improvements").
 - Domenico Cerasuolo has developed additional selftest code for zswap ("selftests: cgroup: add zswap test program").
 - xu xin has done some work on KSM's handling of zero pages. These changes are mainly to enable the user to better understand the effectiveness of KSM's treatment of zero pages ("ksm: support tracking KSM-placed zero-pages").
 - Jeff Xu has fixed the behaviour of memfd's MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED sysctl ("mm/memfd: fix sysctl MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED").
 - David Howells has fixed an fscache optimization ("mm, netfs, fscache: Stop read optimisation when folio removed from pagecache").
 - Axel Rasmussen has given userfaultfd the ability to simulate memory poisoning ("add UFFDIO_POISON to simulate memory poisoning with UFFD").
 - Miaohe Lin has contributed some routine maintenance work on the memory-failure code ("mm: memory-failure: remove unneeded PageHuge() check").
 - Peng Zhang has contributed some maintenance work on the maple tree code ("Improve the validation for maple tree and some cleanup").
 - Hugh Dickins has optimized the collapsing of shmem or file pages into THPs ("mm: free retracted page table by RCU").
 - Jiaqi Yan has a patch series which permits us to use the healthy subpages within a hardware poisoned huge page for general purposes ("Improve hugetlbfs read on HWPOISON hugepages").
 - Kemeng Shi has done some maintenance work on the pagetable-check code ("Remove unused parameters in page_table_check").
 - More folioification work from Matthew Wilcox ("More filesystem folio conversions for 6.6"), ("Followup folio conversions for zswap"). And from ZhangPeng ("Convert several functions in page_io.c to use a folio").
 - page_ext cleanups from Kemeng Shi ("minor cleanups for page_ext").
 - Baoquan He has converted some architectures to use the GENERIC_IOREMAP ioremap()/iounmap() code ("mm: ioremap: Convert architectures to take GENERIC_IOREMAP way").
 - Anshuman Khandual has optimized arm64 tlb shootdown ("arm64: support batched/deferred tlb shootdown during page reclamation/migration").
 - Better maple tree lockdep checking from Liam Howlett ("More strict maple tree lockdep"). Liam also developed some efficiency improvements ("Reduce preallocations for maple tree").
 - Cleanup and optimization to the secondary IOMMU TLB invalidation, from Alistair Popple ("Invalidate secondary IOMMU TLB on permission upgrade").
 - Ryan Roberts fixes some arm64 MM selftest issues ("selftests/mm fixes for arm64").
 - Kemeng Shi provides some maintenance work on the compaction code ("Two minor cleanups for compaction").
 - Some reduction in mmap_lock pressure from Matthew Wilcox ("Handle most file-backed faults under the VMA lock").
 - Aneesh Kumar contributes code to use the vmemmap optimization for DAX on ppc64, under some circumstances ("Add support for DAX vmemmap optimization for ppc64").
 - page-ext cleanups from Kemeng Shi ("add page_ext_data to get client data in page_ext"), ("minor cleanups to page_ext header").
 - Some zswap cleanups from Johannes Weiner ("mm: zswap: three cleanups").
 - kmsan cleanups from ZhangPeng ("minor cleanups for kmsan").
 - VMA handling cleanups from Kefeng Wang ("mm: convert to vma_is_initial_heap/stack()").
 - DAMON feature work from SeongJae Park ("mm/damon/sysfs-schemes: implement DAMOS tried total bytes file"), ("Extend DAMOS filters for address ranges and DAMON monitoring targets").
 - Compaction work from Kemeng Shi ("Fixes and cleanups to compaction").
 - Liam Howlett has improved the maple tree node replacement code ("maple_tree: Change replacement strategy").
 - ZhangPeng has a general code cleanup - use the K() macro more widely ("cleanup with helper macro K()").
 - Aneesh Kumar brings memmap-on-memory to ppc64 ("Add support for memmap on memory feature on ppc64").
 - pagealloc cleanups from Kemeng Shi ("Two minor cleanups for pcp list in page_alloc"), ("Two minor cleanups for get pageblock migratetype").
 - Vishal Moola introduces a memory descriptor for page table tracking, "struct ptdesc" ("Split ptdesc from struct page").
 - memfd selftest maintenance work from Aleksa Sarai ("memfd: cleanups for vm.memfd_noexec").
 - MM include file rationalization from Hugh Dickins ("arch: include asm/cacheflush.h in asm/hugetlb.h").
 - THP debug output fixes from Hugh Dickins ("mm,thp: fix sloppy text output").
 - kmemleak improvements from Xiaolei Wang ("mm/kmemleak: use object_cache instead of kmemleak_initialized").
 - More folio-related cleanups from Matthew Wilcox ("Remove _folio_dtor and _folio_order").
 - A VMA locking scalability improvement from Suren Baghdasaryan ("Per-VMA lock support for swap and userfaults").
 - pagetable handling cleanups from Matthew Wilcox ("New page table range API").
 - A batch of swap/thp cleanups from David Hildenbrand ("mm/swap: stop using page->private on tail pages for THP_SWAP + cleanups").
 - Cleanups and speedups to the hugetlb fault handling from Matthew Wilcox ("Change calling convention for ->huge_fault").
 - Matthew Wilcox has also done some maintenance work on the MM subsystem documentation ("Improve mm documentation").

* tag 'mm-stable-2023-08-28-18-26' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (489 commits)
  maple_tree: shrink struct maple_tree
  maple_tree: clean up mas_wr_append()
  secretmem: convert page_is_secretmem() to folio_is_secretmem()
  nios2: fix flush_dcache_page() for usage from irq context
  hugetlb: add documentation for vma_kernel_pagesize()
  mm: add orphaned kernel-doc to the rst files.
  mm: fix clean_record_shared_mapping_range kernel-doc
  mm: fix get_mctgt_type() kernel-doc
  mm: fix kernel-doc warning from tlb_flush_rmaps()
  mm: remove enum page_entry_size
  mm: allow ->huge_fault() to be called without the mmap_lock held
  mm: move PMD_ORDER to pgtable.h
  mm: remove checks for pte_index
  memcg: remove duplication detection for mem_cgroup_uncharge_swap
  mm/huge_memory: work on folio->swap instead of page->private when splitting folio
  mm/swap: inline folio_set_swap_entry() and folio_swap_entry()
  mm/swap: use dedicated entry for swap in folio
  mm/swap: stop using page->private on tail pages for THP_SWAP
  selftests/mm: fix WARNING comparing pointer to 0
  selftests: cgroup: fix test_kmem_memcg_deletion kernel mem check
  ...
2023-08-25  mm: rationalise flush_icache_pages() and flush_icache_page()  (Matthew Wilcox (Oracle); 1 file, -1/+0)
Move the default (no-op) implementation of flush_icache_pages() to <linux/cacheflush.h> from <asm-generic/cacheflush.h>. Remove the flush_icache_page() wrapper from each architecture and provide it in <linux/cacheflush.h>. Link: https://lkml.kernel.org/r/20230802151406.3735276-32-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
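The shared wrapper described above ends up roughly like this (a sketch of the generic form, not a quoted hunk):

  /* <linux/cacheflush.h>: default no-op if the architecture doesn't override it. */
  #ifndef flush_icache_pages
  static inline void flush_icache_pages(struct vm_area_struct *vma,
                                        struct page *page, unsigned int nr)
  {
  }
  #endif

  /* Single-page callers keep working through one shared wrapper. */
  #define flush_icache_page(vma, page)  flush_icache_pages(vma, page, 1)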
2023-08-25  csky: implement the new page table range API  (Matthew Wilcox (Oracle); 2 files, -18/+24)
Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio. Link: https://lkml.kernel.org/r/20230802151406.3735276-12-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Acked-by: Guo Ren <guoren@kernel.org> Acked-by: Mike Rapoport (IBM) <rppt@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
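A minimal sketch of the per-folio dirty-tracking idea (the helper names here are assumptions, not the csky code):

  /* Flush lazily, and only once per folio rather than once per page. */
  static inline void sync_dcache_folio(struct folio *folio)
  {
          if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
                  dcache_wb_folio(folio);    /* assumed low-level writeback helper */
  }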
2023-08-11  csky: pgtable: Invalidate stale I-cache lines in update_mmu_cache  (Guo Ren; 1 file, -3/+1)
The final icache flush was in update_mmu_cache, and update_mmu_cache runs after set_pte_at. Thus, when CPU0 sets the pte, the other CPUs may see it before the icache flush broadcast happens, and they may have cached stale VIPT lines in their I-caches. When address translation becomes ready for the new cache line, they will use the stale data from the icache, not the fresh data from the dcache. The csky instruction cache is VIPT, and it needs the original virtual address to invalidate the virtual-address-indexed entries of the cache ways. The current implementation uses a temporary mapping mechanism - kmap_atomic - which returns a new virtual address for invalidation, but the cache line for the original virtual address may still be in the I-cache. So force I-cache invalidation in update_mmu_cache, and prevent the dcache flush when the page is an EXEC page. This bug was detected on a 4*c860 SMP system, and with this patch it passes the stress test. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@kernel.org>
2023-04-13  csky: mmu: Prevent spurious page faults  (Guo Ren; 1 file, -0/+3)
The C-SKY MMU can pre-fetch invalid pte entries. That works with flush_tlb_fix_spurious_fault, but the additional page fault exceptions reduce performance. So flush the TLB entry to prevent the following spurious page faults. Here is the test code:

  #define DATA_LEN  4096
  #define COPY_NUM  (504*100)

  unsigned char src[DATA_LEN*COPY_NUM] = {0};
  unsigned char dst[DATA_LEN*COPY_NUM] = {0};
  unsigned char func_src[DATA_LEN*COPY_NUM] = {0};
  unsigned char func_dst[DATA_LEN*COPY_NUM] = {0};

  void main(void)
  {
          int j;

          for (j = 0; j < COPY_NUM; j++)
                  memcpy(&dst[j*DATA_LEN], &src[j*DATA_LEN], 4);
  }

  perf stat -e page-faults ./main.elf

The number of page fault traps is reduced by half with this patch. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@kernel.org>
2023-02-03  csky/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE  (David Hildenbrand; 1 file, -7/+12)
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit from the offset. This reduces the maximum swap space per file to 16 GiB (was 32 GiB). We might actually be able to reuse one of the other software bits (_PAGE_READ / _PAGE_WRITE) instead, because we only have to keep pte_present(), pte_none() and the HW happy. For now, let's keep it simple because there might be something non-obvious. Link: https://lkml.kernel.org/r/20230113171026.582290-6-david@redhat.com Signed-off-by: David Hildenbrand <david@redhat.com> Cc: Guo Ren <guoren@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
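The helpers such a patch adds typically have this shape (a sketch; _PAGE_SWP_EXCLUSIVE stands for whichever offset bit is stolen and is an assumed name here):

  static inline int pte_swp_exclusive(pte_t pte)
  {
          return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
  }

  static inline pte_t pte_swp_mkexclusive(pte_t pte)
  {
          pte_val(pte) |= _PAGE_SWP_EXCLUSIVE;
          return pte;
  }

  static inline pte_t pte_swp_clear_exclusive(pte_t pte)
  {
          pte_val(pte) &= ~_PAGE_SWP_EXCLUSIVE;
          return pte;
  }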
2022-04-18  csky: Add C based string functions  (Matteo Croce; 2 files, -1/+5)
Try to access RAM with the largest bit width possible, but without doing unaligned accesses. A further improvement could be to use multiple read and writes as the assembly version was trying to do. Tested on a BeagleV Starlight with a SiFive U74 core, where the improvement is noticeable. Signed-off-by: Matteo Croce <mcroce@microsoft.com> Co-developed-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
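The core idea (copy in long-sized chunks when both pointers share alignment, and byte by byte otherwise) looks roughly like the sketch below; this is illustrative only, not the kernel's lib/string code:

  #include <stddef.h>
  #include <stdint.h>

  void *wordwise_memcpy(void *dest, const void *src, size_t count)
  {
          unsigned char *d = dest;
          const unsigned char *s = src;

          /* Fast path: both pointers aligned to the native word size. */
          if ((((uintptr_t)d | (uintptr_t)s) & (sizeof(long) - 1)) == 0) {
                  while (count >= sizeof(long)) {
                          *(long *)d = *(const long *)s;
                          d += sizeof(long);
                          s += sizeof(long);
                          count -= sizeof(long);
                  }
          }

          /* Tail bytes, and the whole copy when the pointers are unaligned. */
          while (count--)
                  *d++ = *s++;

          return dest;
  }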
2021-02-27  csky: Fixup compile error  (Guo Ren; 9 files, -9/+0)
  error: C++ style comments are not allowed in ISO C90
   // Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
   ^
  error: (this will be reported only once per input file)

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2021-02-27  csky: Add VDSO with GENERIC_GETTIMEOFDAY, GENERIC_TIME_VSYSCALL, HAVE_GENERIC_VDSO  (Guo Ren; 1 file, -0/+5)
It could help to reduce the latency of the time-related functions in user space. We have referenced arm's and riscv's implementation for the patch. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Cc: Vincent Chen <vincent.chen@sifive.com> Cc: Arnd Bergmann <arnd@arndb.de>
2021-02-27  csky: Fixup swapon  (Guo Ren; 1 file, -0/+22)
The current csky swapon is broken by a wrong swap PTE entry format. Redesign the format for abiv1 & abiv2 and make swapon + zram work properly on csky machines. The C-SKY PTE has VALID and DIRTY bits to emulate the PRESENT, READ, WRITE, EXEC attributes. The GLOBAL bit is shared by the two pages in the same tlb entry, so we need to keep GLOBAL, VALID and PRESENT zero in a swp_pte. To distinguish PAGE_NONE from a swp_pte, we need to use an additional bit (abiv1 uses _PAGE_READ, abiv2 uses _PAGE_WRITE). Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Cc: Arnd Bergmann <arnd@arndb.de>
2021-02-27  csky: pgtable.h: Coding convention  (Guo Ren; 1 file, -12/+2)
C-SKY page table attributes only have 'Dirty' and 'Valid' to emulate 'PRESENT, READ, WRITE, EXEC, DIRTY, ACCESSED'. This patch cleans up the unnecessary definitions. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Cc: Arnd Bergmann <arnd@arndb.de>
2021-01-12  csky: Reconstruct VDSO framework  (Guo Ren; 1 file, -17/+3)
Reconstruct the vdso framework to support future vsyscall and vgettimeofday features. These are very important features for reducing system calls into the kernel and improving performance. The patch references RISC-V's implementation. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
2021-01-12  csky: Fixup update_mmu_cache called with user io mapping  (Guo Ren; 1 file, -0/+3)
The function update_mmu_cache can be called for a user I/O mapping, for which there is no struct page in mem_map for the pte. Just ignore user I/O mappings in update_mmu_cache. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2021-01-12  csky: Fix TLB maintenance synchronization problem  (Guo Ren; 1 file, -5/+30)
TLB invalidation on csky CPUs does not contain a barrier operation, and we need to prevent a previous PTW response from landing after the TLB invalidation instruction. Of course, ASID changes also need to take care of the same issue.

  CPU0                 CPU1
  ===============      ===============
  set_pte
  sync_is()        ->  See the previous set_pte for all harts
  tlbi.vas         ->  Invalidate all harts' TLB entry & flush pipeline

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2021-01-12  csky: Add memory layout 2.5G(user):1.5G(kernel)  (Guo Ren; 2 files, -10/+23)
There are two ways of translating va to pa on csky:
 - Use the TLB (Translation Lookaside Buffer) and PTW (Page Table Walk)
 - Use SSEG0/1 (Simple Segment Mapping)

We use TLB mapping for the 0-2G and 3G-4G virtual address areas, and SSEG0/1 for the 2G-2.5G and 2.5G-3G translations. We could disable SSEG0 to use 2G-2.5G as TLB user mapping. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-07-31  csky: Fixup kprobes handler couldn't change pc  (Guo Ren; 1 file, -1/+3)
The "Changing Execution Path" section in the Documentation/kprobes.txt said: Since kprobes can probe into a running kernel code, it can change the register set, including instruction pointer. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Cc: Arnd Bergmann <arnd@arndb.de>
2020-07-31  csky: Fixup duplicated restore sp in RESTORE_REGS_FTRACE  (Guo Ren; 1 file, -3/+0)
There is no return to user space in RESTORE_REGS_FTRACE, so there is no need to save sp into ss0 as RESTORE_REGS_ALL does. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Cc: Arnd Bergmann <arnd@arndb.de>
2020-05-28  csky: Coding convention in entry.S  (Guo Ren; 1 file, -5/+0)
There is no fixup or feature in this patch, only cleanup:
 - Remove the unnecessary regs used (r11, r12); just use r9, r10 and the syscallid regs as temporaries.
 - Add _TIF_SYSCALL_WORK and _TIF_WORK_MASK to gather the macros.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-05-28  csky: Fixup abiv2 syscall_trace break a4 & a5  (Guo Ren; 1 file, -0/+2)
The current implementation could destroy a4 & a5 under strace, so we need to get them from pt_regs via SAVE_ALL. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-05-28  csky: Fixup CONFIG_PREEMPT panic  (Guo Ren; 1 file, -1/+0)
log:
  [    0.13373200] Calibrating delay loop...
  [    0.14077600] ------------[ cut here ]------------
  [    0.14116700] WARNING: CPU: 0 PID: 0 at kernel/sched/core.c:3790 preempt_count_add+0xc8/0x11c
  [    0.14348000] DEBUG_LOCKS_WARN_ON((preempt_count() < 0))Modules linked in:
  [    0.14395100] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.6.0 #7
  [    0.14410800]
  [    0.14427400] Call Trace:
  [    0.14450700] [<807cd226>] dump_stack+0x8a/0xe4
  [    0.14473500] [<80072792>] __warn+0x10e/0x15c
  [    0.14495900] [<80072852>] warn_slowpath_fmt+0x72/0xc0
  [    0.14518600] [<800a5240>] preempt_count_add+0xc8/0x11c
  [    0.14544900] [<807ef918>] _raw_spin_lock+0x28/0x68
  [    0.14572600] [<800e0eb8>] vprintk_emit+0x84/0x2d8
  [    0.14599000] [<800e113a>] vprintk_default+0x2e/0x44
  [    0.14625100] [<800e2042>] vprintk_func+0x12a/0x1d0
  [    0.14651300] [<800e1804>] printk+0x30/0x48
  [    0.14677600] [<80008052>] lockdep_init+0x12/0xb0
  [    0.14703800] [<80002080>] start_kernel+0x558/0x7f8
  [    0.14730000] [<800052bc>] csky_start+0x58/0x94
  [    0.14756600] irq event stamp: 34
  [    0.14775100] hardirqs last  enabled at (33): [<80067370>] ret_from_exception+0x2c/0x72
  [    0.14793700] hardirqs last disabled at (34): [<800e0eae>] vprintk_emit+0x7a/0x2d8
  [    0.14812300] softirqs last  enabled at (32): [<800655b0>] __do_softirq+0x578/0x6d8
  [    0.14830800] softirqs last disabled at (25): [<8007b3b8>] irq_exit+0xec/0x128

The preempt_count held in a register could be destroyed after csky_do_IRQ without being reloaded from memory. Following other architectures (arm64, riscv), we move the preempt handling into ret_from_exception and disable irqs at the beginning of ret_from_exception instead of in RESTORE_ALL. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Reported-by: Lu Baoquan <lu.baoquan@intellif.com>
2020-05-13  csky: Fixup msa highest 3 bits mask  (Liu Yibin; 1 file, -2/+2)
Just as the comment mentioned, the msa format is:

  cr<30/31, 15> MSA register format:
  31 - 29 | 28 - 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
    BA     Reserved  SH  WA  B   SO SEC  C   D   V

So we should shift by 29 bits, not 28 bits, for the mask. Signed-off-by: Liu Yibin <jiulong@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
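In other words, the BA field occupies bits 31..29, so the mask is built with a 29-bit shift (illustrative macro names, not the driver's):

  #define MSA_BA_SHIFT  29
  #define MSA_BA_MASK   (0x7UL << MSA_BA_SHIFT)  /* 0xE0000000: bits 31..29 */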
2020-05-13  csky/ftrace: Fixup error when disable CONFIG_DYNAMIC_FTRACE  (Guo Ren; 1 file, -0/+2)
When CONFIG_DYNAMIC_FTRACE is disabled, static ftrace fails to compile and boot. It was a carelessness when developing "dynamic ftrace" and "ftrace with regs". Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-04-03  csky: Fixup cpu speculative execution to IO area  (Guo Ren; 1 file, -5/+2)
For a memory size > 512MB and < 1GB, the MSA setting was:
 - SSEG0: PHY_START         , PHY_START + 512MB
 - SSEG1: PHY_START + 512MB , PHY_START + 1GB

But when the real memory is less than 1GB, there is a gap between the end of memory and the 1GB border. The CPU could speculatively execute into that gap, and if the bus cannot respond to the CPU request there, a crash will happen. Now make the setting:
 - SSEG0: PHY_START         , PHY_START + 512MB  (no change)
 - SSEG1: Disabled  (we use highmem to use the 512MB~1GB memory)

We also deprecate the zhole_size[] settings; they are only used by arm-style CPUs. All memory gaps on csky systems should use the Reserved settings in the dts. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-03-08  csky: Implement ftrace with regs  (Guo Ren; 2 files, -0/+108)
This patch implements FTRACE_WITH_REGS for csky, which allows a traced function's arguments (and some other registers) to be captured into a struct pt_regs, allowing these to be inspected and/or modified. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-03-08  csky: Fixup init_fpu compile warning with __init  (Guo Ren; 2 files, -6/+2)
  WARNING: vmlinux.o(.text+0x2366): Section mismatch in reference from the
  function csky_start_secondary() to the function .init.text:init_fpu()
  The function csky_start_secondary() references the function __init init_fpu().
  This is often because csky_start_secondary lacks a __init annotation or the
  annotation of init_fpu is wrong.

Reported-by: Lu Chongzhi <chongzhi.lcz@alibaba-inc.com> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-02-21  csky: Add flush_icache_mm to defer flush icache all  (Guo Ren; 2 files, -3/+66)
Some CPUs don't support the icache.va instruction to maintain the icache of all SMP cores. Using icache.all + IPI costs a lot of performance, and a deferral mechanism can reduce the number of calls to the icache flush-all functions. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
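The deferral usually takes roughly the following shape (a sketch with assumed field and helper names, modelled on the riscv-style deferred flush rather than quoted from this patch):

  /* flush_icache_mm(): mark every CPU's icache as stale instead of IPI'ing right away. */
  static inline void mark_icache_stale(struct mm_struct *mm)
  {
          cpumask_setall(&mm->context.icache_stale_mask);    /* assumed field */
  }

  /* Called when a CPU is about to run the mm again: flush only if marked stale. */
  static inline void flush_icache_deferred(struct mm_struct *mm)
  {
          unsigned int cpu = smp_processor_id();
          cpumask_t *mask = &mm->context.icache_stale_mask;

          if (cpumask_test_cpu(cpu, mask)) {
                  cpumask_clear_cpu(cpu, mask);
                  local_icache_inv_all(NULL);                /* assumed local flush helper */
          }
  }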
2020-02-21  csky: Optimize abiv2 copy_to_user_page with VM_EXEC  (Guo Ren; 1 file, -1/+3)
Only when the vma has VM_EXEC do we need to sync the dcache & icache, e.g. when gdb uses ptrace to modify a user-space instruction area. Add the VM_EXEC condition to reduce unnecessary cache flushes. The abiv1 CPUs' caches are all VIPT, so we still need to deal with the dcache aliasing problem there, but there is an optimized way using cache colors, just like what's done in arch/csky/abiv1/inc/abi/page.h. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
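A sketch of the VM_EXEC gating this adds (cache_wbinv_all is assumed here to be the "write back dcache and invalidate icache" helper):

  #define copy_to_user_page(vma, page, vaddr, dst, src, len)      \
  do {                                                            \
          memcpy(dst, src, len);                                  \
          if (vma->vm_flags & VM_EXEC)                            \
                  cache_wbinv_all();                              \
  } while (0)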
2020-02-21  csky: Enable defer flush_dcache_page for abiv2 cpus (807/810/860)  (Guo Ren; 2 files, -8/+18)
Instead of flushing the cache on every update_mmu_cache() call, we use flush_dcache_page to reduce the frequency of flushing the cache. As the abiv2 cpus are all PIPT for icache & dcache, we needn't handle the dcache aliasing problem. But their icache can't snoop the dcache, so we still need sync_icache_dcache in update_mmu_cache(). Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-02-21  csky: Remove unnecessary flush_icache_* implementation  (Guo Ren; 2 files, -34/+2)
The abiv2 CPUs all have PIPT caches, so there is no need to implement the flush_icache_page function. The function flush_icache_user_range hasn't been used, so just remove it. The function flush_cache_range is not necessary for a PIPT cache when the tlb mapping is changed. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-02-21  csky: Set regs->usp to kernel sp, when the exception is from kernel  (Guo Ren; 1 file, -0/+11)
In the past, we didn't care about the kernel sp when saving pt_regs. But in some cases, we still need pt_regs->usp to represent the kernel stack before entering the exception. For cmpxchg in atomic.S, we need to save and restore usp for the above. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2019-07-19  Merge tag 'csky-for-linus-5.3-rc1' of git://github.com/c-sky/csky-linux  (Linus Torvalds; 1 file, -0/+10)
Pull arch/csky updates from Guo Ren:
 "This round of csky subsystem gives two features (ASID algorithm update, Perf pmu record support) and some fixups.

  ASID updates:
   - Revert mmu ASID mechanism
   - Add new asid lib code from arm
   - Use generic asid algorithm to implement switch_mm
   - Improve tlb operation with help of asid

  Perf pmu record support:
   - Init pmu as a device
   - Add count-width property for csky pmu
   - Add pmu interrupt support
   - Fix perf record in kernel/user space
   - dt-bindings: Add csky PMU bindings

  Fixes:
   - Fixup no panic in kernel for some traps
   - Fixup some error count in 810 & 860.
   - Fixup abiv1 memset error"

* tag 'csky-for-linus-5.3-rc1' of git://github.com/c-sky/csky-linux:
  csky: Fixup abiv1 memset error
  csky: Improve tlb operation with help of asid
  csky: Use generic asid algorithm to implement switch_mm
  csky: Add new asid lib code from arm
  csky: Revert mmu ASID mechanism
  dt-bindings: csky: Add csky PMU bindings
  dt-bindings: interrupt-controller: Update csky mpintc
  csky: Fixup some error count in 810 & 860.
  csky: Fix perf record in kernel/user space
  csky: Add pmu interrupt support
  csky: Add count-width property for csky pmu
  csky: Init pmu as a device
  csky: Fixup no panic in kernel for some traps
  csky: Select intc & timer drivers
2019-07-19  csky: Use generic asid algorithm to implement switch_mm  (Guo Ren; 1 file, -0/+10)
Use the Linux generic asid/vmid algorithm to implement the csky switch_mm function. The algorithm comes from arm and it works on SMP systems. It helps reduce tlb flushes for switch_mm during task/vm switches. Signed-off-by: Guo Ren <ren_guo@c-sky.com> Cc: Arnd Bergmann <arnd@arndb.de>
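With the generic allocator doing the rollover bookkeeping, switch_mm ends up with roughly this shape (a sketch; check_and_switch_context follows the arm-style allocator's naming, and update_mmu_pgd_asid is a placeholder for the arch-specific register writes):

  static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                               struct task_struct *tsk)
  {
          unsigned int cpu = smp_processor_id();

          if (prev == next)
                  return;

          /* Let the generic allocator hand out or revalidate an ASID for 'next'. */
          check_and_switch_context(next, cpu);

          /* Then point the MMU's page-table base and ASID registers at 'next'. */
          update_mmu_pgd_asid(next);
  }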
2019-07-09  Merge branch 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace  (Linus Torvalds; 1 file, -1/+1)
Pull force_sig() argument change from Eric Biederman:
 "A source of error over the years has been that force_sig has taken a task parameter when it is only safe to use force_sig with the current task.

  The force_sig function is built for delivering synchronous signals such as SIGSEGV where the userspace application caused a synchronous fault (such as a page fault) and the kernel responded with a signal.

  Because the name force_sig does not make this clear, and because the force_sig takes a task parameter the function force_sig has been abused for sending other kinds of signals over the years. Slowly those have been fixed when the oopses have been tracked down.

  This set of changes fixes the remaining abusers of force_sig and carefully rips out the task parameter from force_sig and friends making this kind of error almost impossible in the future"

* 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace: (27 commits)
  signal/x86: Move tsk inside of CONFIG_MEMORY_FAILURE in do_sigbus
  signal: Remove the signal number and task parameters from force_sig_info
  signal: Factor force_sig_info_to_task out of force_sig_info
  signal: Generate the siginfo in force_sig
  signal: Move the computation of force into send_signal and correct it.
  signal: Properly set TRACE_SIGNAL_LOSE_INFO in __send_signal
  signal: Remove the task parameter from force_sig_fault
  signal: Use force_sig_fault_to_task for the two calls that don't deliver to current
  signal: Explicitly call force_sig_fault on current
  signal/unicore32: Remove tsk parameter from __do_user_fault
  signal/arm: Remove tsk parameter from __do_user_fault
  signal/arm: Remove tsk parameter from ptrace_break
  signal/nds32: Remove tsk parameter from send_sigtrap
  signal/riscv: Remove tsk parameter from do_trap
  signal/sh: Remove tsk parameter from force_sig_info_fault
  signal/um: Remove task parameter from send_sigtrap
  signal/x86: Remove task parameter from send_sigtrap
  signal: Remove task parameter from force_sig_mceerr
  signal: Remove task parameter from force_sig
  signal: Remove task parameter from force_sigsegv
  ...
2019-05-29  signal: Remove the task parameter from force_sig_fault  (Eric W. Biederman; 1 file, -1/+1)
As synchronous exceptions really only make sense against the current task (otherwise how are you synchronous) remove the task parameter from force_sig_fault to make it explicit that is what is going on. The two known exceptions that deliver a synchronous exception to a stopped ptraced task have already been changed to force_sig_fault_to_task. The callers have been changed with the following emacs regular expression (with obvious variations on the architectures that take more arguments) to avoid typos:

  force_sig_fault[(]\([^,]+\)[,]\([^,]+\)[,]\([^,]+\)[,]\W+current[)]
  ->
  force_sig_fault(\1,\2,\3)

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
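For a typical arch fault handler, the change therefore looks like this (a schematic before/after, not a quoted hunk):

  /* Before: the task parameter was always 'current' anyway. */
  force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)addr, current);

  /* After: the parameter is gone and the signal is implicitly sent to 'current'. */
  force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)addr);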
2019-05-21  treewide: Add SPDX license identifier - Makefile/Kconfig  (Thomas Gleixner; 1 file, -0/+1)
Add SPDX license identifiers to all Make/Kconfig files which:
 - Have no license information of any form

These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is:
  GPL-2.0-only

Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-04-22  csky: Use va_pa_offset instead of phys_offset  (Guo Ren; 1 file, -8/+6)
The name phys_offset is too common for a global export and it may conflict with a local name. So change phys_offset to va_pa_offset, which is also used by riscv. Also use __pa() and __va() instead of using phys_offset directly. Signed-off-by: Guo Ren <ren_guo@c-sky.com> Cc: Arnd Bergmann <arnd@arndb.de>
2019-04-22  csky: Support vmlinux bootup with MMU off  (Guo Ren; 1 file, -17/+62)
Modify the SETUP_MMU macro to fit both MMU-on and MMU-off environments, so that vmlinux can boot from an MMU-off environment in some cases. Unify the style of _start and _start_smp_secondary in head.S to make head.S more concise and easier to understand. Signed-off-by: Guo Ren <ren_guo@c-sky.com> Cc: Arnd Bergmann <arnd@arndb.de>
2019-04-22  csky: Add perf_arch_fetch_caller_regs support  (Mao Han; 1 file, -0/+1)
In trace events used as tracepoints, the context cannot be retrieved with task_pt_regs. Without arch caller regs support the pt_regs context will be all zero; perf cannot parse the callchain and resolve the symbols correctly, and sometimes it will even get into a deadlock while handling the page fault, e.g.:

  perf kmem --page record ls

Changelog:
 - Add test case cmd in comment
 - Use regs_fp(regs) which is defined in abi/regdef.h

Signed-off-by: Mao Han <han_mao@c-sky.com> Signed-off-by: Guo Ren <guoren@kernel.org>
2019-04-22  csky: Fixup wrong update_mmu_cache implementation  (Guo Ren; 1 file, -11/+2)
In our stress test, we found crashes caused by:

  if (!(vma->vm_flags & VM_EXEC))
          return;

in update_mmu_cache(). The current update_mmu_cache implementation seems wrong and we retreat to the conservative implementation. Also, the usage of kmap_atomic in update_mmu_cache is risky: the page's virtual mapping may be scheduled out and changed, so we must use preempt_disable & pagefault_disable, which are called by kmap_atomic(). Signed-off-by: Guo Ren <ren_guo@c-sky.com> Cc: Arnd Bergmann <arnd@arndb.de>
2019-04-22  csky: Support dynamic start physical address  (Guo Ren; 2 files, -2/+42)
Before this patch, csky-linux needed CONFIG_RAM_BASE to determine the start physical address. Now we use the phys_offset variable to replace the PHYS_OFFSET macro, and we set up phys_offset with the real physical address, which is determined during startup in head.S. With this patch we needn't re-compile the kernel for a different start physical address, i.e. 0x0 / 0xc0000000 start physical addresses can use the same vmlinux; take care that a different start address must be 512MB aligned. Signed-off-by: Guo Ren <ren_guo@c-sky.com> Cc: Arnd Bergmann <arnd@arndb.de>
2019-04-22  csky: Reconstruct signal processing  (Guo Ren; 2 files, -7/+2)
The Linux kernel provides some APIs for arch signal implementations, for example:

  restore_saved_sigmask()
  set_current_blocked()
  restore_altstack()

But the last version of csky signal.c didn't use them and some of the code was confusing, so reconstruct signal.c with reference to riscv's code. Now the csky signal.c implementation is very close to riscv's and we get the following benefits:
 - Clear code structure
 - The signal code of riscv and csky can be reviewed together
 - Promoting the unification of arch signal implementations

Also modified the related code in entry.S. Signed-off-by: Guo Ren <ren_guo@c-sky.com> Cc: Arnd Bergmann <arnd@arndb.de>
2019-04-22  csky: Use in_syscall & forget_syscall instead of r11_sig  (Guo Ren; 1 file, -2/+0)
We can use bits 16-24 of regs->sr to detect a syscall (VEC_TRAP0), and r11_sig is not necessary for the current implementation. In this patch, we implement in_syscall and forget_syscall, which are inspired by arm & nds32, but csky pt_regs has no syscall_num element and we just write zero to the vector bits field of regs->sr instead. For ret_from_fork, the current task was forked from a parent which is in the middle of a syscall and its regs->sr has already been set with VEC_TRAP0. See: arch/csky/kernel/process.c: copy_thread(). Signed-off-by: Guo Ren <ren_guo@c-sky.com>
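Given the description above, the helpers have roughly this shape (the exact width and mask of the vector field in regs->sr is an assumption here):

  /* The vector field of regs->sr holds VEC_TRAP0 while we are in a syscall. */
  static inline int in_syscall(struct pt_regs const *regs)
  {
          return ((regs->sr >> 16) & 0xff) == VEC_TRAP0;
  }

  static inline void forget_syscall(struct pt_regs *regs)
  {
          regs->sr &= ~(0xff << 16);    /* clear the vector field */
  }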
2019-04-22  csky: Update syscall_trace_enter/exit implementation  (Guo Ren; 1 file, -0/+5)
Previous syscall_trace implementation couldn't support AUDITSYSCALL and SYSCALL_TRACEPOINTS. Now we redesign it to support audit_syscall and syscall_tracepoints just like other archs'. Signed-off-by: Guo Ren <ren_guo@c-sky.com> Cc: Dmitry V. Levin <ldv@altlinux.org> Cc: Arnd Bergmann <arnd@arndb.de>
2019-04-22  csky/ftrace: Add dynamic function tracer (include graph tracer)  (Guo Ren; 1 file, -2/+37)
Support dynamic ftrace, including the dynamic graph tracer. Gcc-csky with -pg will produce a call site in every function prologue and we can use these call sites to hook the trace function.

gcc with -pg, original call site:

  push  lr
  jbsr  _mcount
  nop32
  nop32

If the (callee - caller) offset is in range of the bsr instruction, we'll modify the code with:

  push  lr
  bsr   _mcount
  nop32
  nop32

Else, if the (callee - caller) offset is out of the bsr instruction's range, we'll modify the code with:

  push  lr
  movih r26, ...
  ori   r26, ...
  jsr   r26

(r26 is reserved for the jsr link reg in the csky abiv2 spec.) Signed-off-by: Guo Ren <ren_guo@c-sky.com>
2019-04-22  csky: Fixup vdsp&fpu issues in kernel  (Guo Ren; 1 file, -5/+1)
This fixup is a continuation of commit 35ff802af1c4 ("csky: fixup remove vdsp implement for kernel."); in that patch I didn't finish the job. We must forbid gcc from generating any vdsp & fpu instructions and remove the vdsp asm in memmove.S, e.g. for GCC it's -mcpu=ck860 and for AS it's -Wa,-mcpu=ck860fv. Signed-off-by: Guo Ren <ren_guo@c-sky.com>
2018-12-31  csky: ftrace call graph supported.  (Guo Ren; 1 file, -8/+108)
With csky-gcc -pg -mbacktrace, the ftrace call graph is supported. Signed-off-by: Guo Ren <ren_guo@c-sky.com>
2018-12-31  csky: basic ftrace supported  (Guo Ren; 2 files, -0/+25)
When gcc is given -pg, it adds an _mcount stub in every function. We need to implement _mcount in the kernel, and ftrace depends on stacktrace. To do: call-graph, dynamic ftrace. Signed-off-by: Guo Ren <ren_guo@c-sky.com>
2018-12-31  csky: fixup save hi,lo,dspcr regs in switch_stack.  (Guo Ren; 2 files, -3/+57)
The HI, LO, DSPCR registers are 807/810-related regs and are not needed for 610/860. All of these regs must be saved in pt_regs and switch_stack. This patch fixes up saving the dspcr reg in switch_stack and pt_regs. Signed-off-by: Guo Ren <ren_guo@c-sky.com>
2018-12-31  csky: fixup remove vdsp implement for kernel.  (Guo Ren; 1 file, -7/+1)
The vr regs for vdsp are only saved in task_switch, not on every exception trap-in. A memcpy with vdsp instructions would destroy the vr regs of user-space applications. Signed-off-by: Guo Ren <ren_guo@c-sky.com>