path: root/arch/loongarch/kernel/head.S

2024-01-20  Merge tag 'loongarch-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson  [Linus Torvalds, 1 file, +10/-0]

Pull LoongArch updates from Huacai Chen:

- Raise minimum clang version to 18.0.0
- Enable initial Rust support for LoongArch
- Add built-in dtb support for LoongArch
- Use generic interface to support crashkernel=X,[high,low]
- Some bug fixes and other small changes
- Update the default config file

* tag 'loongarch-6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson: (22 commits)
  MAINTAINERS: Add BPF JIT for LOONGARCH entry
  LoongArch: Update Loongson-3 default config file
  LoongArch: BPF: Prevent out-of-bounds memory access
  LoongArch: BPF: Support 64-bit pointers to kfuncs
  LoongArch: Fix definition of ftrace_regs_set_instruction_pointer()
  LoongArch: Use generic interface to support crashkernel=X,[high,low]
  LoongArch: Fix and simplify fcsr initialization on execve()
  LoongArch: Let cores_io_master cover the largest NR_CPUS
  LoongArch: Change SHMLBA from SZ_64K to PAGE_SIZE
  LoongArch: Add a missing call to efi_esrt_init()
  LoongArch: Parsing CPU-related information from DTS
  LoongArch: dts: DeviceTree for Loongson-2K2000
  LoongArch: dts: DeviceTree for Loongson-2K1000
  LoongArch: dts: DeviceTree for Loongson-2K0500
  LoongArch: Allow device trees be built into the kernel
  dt-bindings: interrupt-controller: loongson,liointc: Fix dtbs_check warning for interrupt-names
  dt-bindings: interrupt-controller: loongson,liointc: Fix dtbs_check warning for reg-names
  dt-bindings: loongarch: Add Loongson SoC boards compatibles
  dt-bindings: loongarch: Add CPU bindings for LoongArch
  LoongArch: Enable initial Rust support
  ...

2024-01-17  LoongArch: Change SHMLBA from SZ_64K to PAGE_SIZE  [Huacai Chen, 1 file, +10/-0]

LoongArch has hardware page coloring for the L1 cache, so we don't have cache aliases. But the SFB (Store Fill Buffer) still has aliases, so we previously defined SHMLBA as SZ_64K. However, there are lots of applications that use PAGE_SIZE rather than SHMLBA to mmap() file pages and shared pages. Of course we could fix them one by one, but that is not easy. On the other hand, we can simply disable the SFB for the 4KB page size to fix the cache aliases (there will be a performance decrease, but it is acceptable), and in the future we will fix the SFB in hardware. So we can safely define SHMLBA as PAGE_SIZE (using the generic shmparam.h) to make life easier.

Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
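
With no arch-specific shmparam.h, the generic definition applies; this is the page-sized alignment referred to above (essentially what include/asm-generic/shmparam.h provides):

  #define SHMLBA PAGE_SIZE    /* attach addr a multiple of this */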

2023-12-19  efi/loongarch: Directly position the loaded image file  [Wang Yao, 1 file, +0/-1]

Using the 'kernel_offset' variable to locate the image file loaded by UEFI or GRUB is unnecessary, because we can locate the loaded image directly through the image_base field of the efi_loaded_image struct provided by UEFI. Replace kernel_offset with image_base to position the loaded image file.

Signed-off-by: Wang Yao <wangyao@lemote.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
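
A rough sketch of the idea, assuming the EFI stub's usual helpers (efi_bs_call(), LOADED_IMAGE_PROTOCOL_GUID, efi_loaded_image_t); this is illustrative, not the exact stub code:

  /* Ask UEFI where it loaded our image instead of tracking a kernel_offset. */
  static unsigned long get_load_base(efi_handle_t handle)
  {
          efi_guid_t guid = LOADED_IMAGE_PROTOCOL_GUID;
          efi_loaded_image_t *image;
          efi_status_t status;

          /* Look up the loaded-image protocol installed on our own handle. */
          status = efi_bs_call(handle_protocol, handle, &guid, (void **)&image);
          if (status != EFI_SUCCESS)
                  return 0;

          return (unsigned long)image->image_base;   /* where UEFI/GRUB placed us */
  }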

2023-09-06  LoongArch: Add KASAN (Kernel Address Sanitizer) support  [Qing Zhang, 1 file, +4/-0]

1/8 of kernel addresses are reserved for shadow memory. But for LoongArch there are a lot of holes between different segments, and the valid address space (256T available) is insufficient to map all of these segments to KASAN shadow memory with the common formula provided by the KASAN core:

  (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

So LoongArch has an arch-specific mapping formula: different segments are mapped individually, and only limited lengths of these specific segments are mapped to shadow.

At the early boot stage the whole shadow region is populated with just one physical page (kasan_early_shadow_page). Later, this page is reused as a readonly zero shadow for memory that KASAN currently doesn't track. After mapping the physical memory, pages for shadow memory are allocated and mapped.

Functions like memset()/memcpy()/memmove() do a lot of memory accesses. If a bad pointer is passed to one of these functions, it is important that it be caught. The compiler's instrumentation cannot do this since these functions are written in assembly. KASAN replaces these memory functions with manually instrumented variants. The original functions are declared as weak symbols so that the strong definitions in mm/kasan/kasan.c can replace them. The original functions have aliases with a '__' prefix in their names, so we can call the non-instrumented variants if needed.

Signed-off-by: Qing Zhang <zhangqing@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
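
For illustration, the common shadow translation mentioned above is usually written as the helper below; the scale shift and offset values here are placeholders, not the LoongArch configuration:

  /* 1 shadow byte covers 2^KASAN_SHADOW_SCALE_SHIFT bytes of real memory. */
  #define KASAN_SHADOW_SCALE_SHIFT  3
  #define KASAN_SHADOW_OFFSET       0xdfff000000000000UL   /* placeholder value */

  static inline void *kasan_mem_to_shadow(const void *addr)
  {
          return (void *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                          + KASAN_SHADOW_OFFSET);
  }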

2023-09-06  LoongArch: Simplify the processing of jumping new kernel for KASLR  [Qing Zhang, 1 file, +6/-5]

The modified relocate_kernel() doesn't return the new kernel's entry point but the random_offset. In this way we share the start_kernel() processing with the normal kernel, which avoids calling 'jr a0' directly and allows some other operations (e.g., kasan_early_init) to run before start_kernel() when KASLR (CONFIG_RANDOMIZE_BASE) is turned on.

Signed-off-by: Qing Zhang <zhangqing@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2023-06-29  LoongArch: Calculate various sizes in the linker script  [WANG Rui, 1 file, +4/-4]

Taking the address delta between symbols in different sections is not supported by the LLVM IAS. Instead, do this in the linker script, so the same data can be properly referenced in assembly.

Signed-off-by: WANG Rui <wangrui@loongson.cn>
Signed-off-by: WANG Xuerui <git@xen0n.name>
[chenhuacai: Fix build with !CONFIG_EFI_STUB]
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2023-02-25  LoongArch: kdump: Add single kernel image implementation  [Youling Tang, 1 file, +1/-1]

This feature depends on the kernel being relocatable. Enable using a single kernel image for kdump, so there is no longer a need to build two kernels (the production kernel and the capture kernel share a single kernel image). Also enable CONFIG_CRASH_DUMP in loongson3_defconfig.

Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2023-02-25  LoongArch: Add support for kernel address space layout randomization (KASLR)  [Youling Tang, 1 file, +13/-0]

This patch adds support for relocating the kernel to a random address. Entropy is derived from the kernel banner, which changes with every build, and from random_get_entropy(), which should provide additional runtime entropy. The kernel is relocated by up to RANDOMIZE_BASE_MAX_OFFSET bytes from its link address.

Because relocation happens so early during kernel boot, the amount of physical memory has not yet been determined. This means the only way to limit relocation within the available memory is via Kconfig, so we limit the maximum value of RANDOMIZE_BASE_MAX_OFFSET to 256M (0x10000000) because our memory layout has many holes.

Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Xi Ruoyao <xry111@xry111.site> # Fix compiler warnings
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
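
A minimal sketch of how such an offset could be derived, assuming a simple hash of linux_banner mixed with random_get_entropy(); the helper name and the hashing details are illustrative, not the actual implementation:

  static unsigned long __init get_kaslr_offset(void)
  {
          unsigned long seed = 0;
          const char *p = linux_banner;           /* changes with every build */

          while (*p)
                  seed = seed * 31 + *p++;        /* simple string hash, for illustration */

          seed ^= random_get_entropy();           /* runtime entropy (cycle counter) */

          /* Clamp to the Kconfig limit and keep the offset 64KB-aligned. */
          return (seed % CONFIG_RANDOMIZE_BASE_MAX_OFFSET) & ~(SZ_64K - 1);
  }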

2023-02-25  LoongArch: Add support for kernel relocation  [Youling Tang, 1 file, +4/-0]

This config allows compiling the kernel as PIE and relocating it to any virtual address at runtime: this paves the way to KASLR. Runtime relocation is possible since relocation metadata is embedded into the kernel.

Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Xi Ruoyao <xry111@xry111.site> # Use arch_initcall
Signed-off-by: Jinyang He <hejinyang@loongson.cn> # Provide la_abs relocation code
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
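
As a sketch of what "runtime relocation from embedded metadata" means in practice: early boot code can walk the .rela.dyn entries kept in the image and add the load offset to each R_LARCH_RELATIVE target. The section marker symbols below are assumed to come from the linker script, and the details may differ from the actual relocation code:

  extern Elf64_Rela __rela_dyn_begin[], __rela_dyn_end[];   /* assumed linker-script symbols */

  static void __init apply_relative_relocations(unsigned long reloc_offset)
  {
          Elf64_Rela *rela;

          for (rela = __rela_dyn_begin; rela < __rela_dyn_end; rela++) {
                  if (ELF64_R_TYPE(rela->r_info) != R_LARCH_RELATIVE)
                          continue;

                  /* Patch the relocated copy of the word at r_offset. */
                  *(Elf64_Addr *)(rela->r_offset + reloc_offset) =
                          rela->r_addend + reloc_offset;
          }
  }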

2023-02-25  LoongArch: Add JUMP_VIRT_ADDR macro implementation to avoid using la.abs  [Youling Tang, 1 file, +4/-8]

Add JUMP_VIRT_ADDR macro implementation to avoid using la.abs directly. This is a preparation for subsequent patches.

Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2023-02-25  LoongArch: Use la.pcrel instead of la.abs when it's trivially possible  [Xi Ruoyao, 1 file, +1/-1]

Let's start to kill la.abs in preparation for the subsequent support of the PIE kernel. BTW, re-tab the indentation in arch/loongarch/kernel/entry.S for alignment.

Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-12-05  efi: Put Linux specific magic number in the DOS header  [Ard Biesheuvel, 1 file, +2/-1]

GRUB currently relies on the magic number in the image header of ARM and arm64 EFI kernel images to decide whether or not the image in question is a bootable kernel. However, the purpose of the magic number is to identify the image as one that implements the bare metal boot protocol, and so GRUB, which only does EFI boot, is limited unnecessarily to booting images that could potentially be booted in a non-EFI manner as well.

This is problematic for the new zboot decompressor image format, as it can only boot in EFI mode, and must therefore not use the bare metal boot magic number in its header. For this reason, the strict magic number was dropped from GRUB, to permit essentially any kind of EFI executable to be booted via the 'linux' command, blurring the line between the linux loader and the chainloader.

So let's use the same field in the DOS header that RISC-V and arm64 already use for their 'bare metal' magic numbers to store a 'generic Linux kernel' magic number, which can be used to identify bootable kernel images in PE format which don't necessarily implement a bare metal boot protocol in the same binary.

Note that, in the context of EFI, the MS-DOS header is only described in terms of the fields that it shares with the hybrid PE/COFF image format (i.e., the MS-DOS EXE magic number at offset #0 and the PE header offset at byte offset #0x3c). Since we aim for compatibility with EFI only, and not with MS-DOS or MS-Windows, we can use the remaining space in the MS-DOS header however we want.

Let's set the generic magic number for x86 images as well: existing bootloaders already have their own methods to identify x86 Linux images that can be booted in a non-EFI manner, and having the magic number in place there will ease any future transitions in loader implementations to merge the x86 and non-x86 EFI boot paths.

Note that 32-bit ARM already uses the same location in the header for a different purpose, but the ARM support is already widely implemented and the EFI zboot decompressor is not available on ARM anyway, so we just disregard it here.

Acked-by: Leif Lindholm <quic_llindhol@quicinc.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
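
To illustrate the layout described above, here is a sketch of the MS-DOS header fields an EFI loader cares about; the placement of the Linux magic at offset 0x38 follows what arm64/RISC-V use for their image-header magic, but treat this struct as illustrative rather than an exact copy of the kernel's header definition:

  #include <stdint.h>

  struct dos_header_sketch {
          uint16_t mz_magic;          /* 0x00: "MZ", required by UEFI's PE/COFF loader */
          uint8_t  unused[0x36];      /* 0x02-0x37: not interpreted by EFI firmware */
          uint32_t linux_magic;       /* 0x38: 'generic Linux kernel' magic (illustrative placement) */
          uint32_t pe_header_offset;  /* 0x3c: file offset of the PE/COFF header */
  };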

2022-10-29  LoongArch: Remove unused kernel stack padding  [Jinyang He, 1 file, +1/-2]

The current LoongArch kernel stack is padded as if obeying the MIPS o32 calling convention (32 bytes), signifying the port's MIPS lineage but no longer making sense. Remove the padding for clarity.

Reviewed-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-10-12  LoongArch: Add kexec support  [Youling Tang, 1 file, +5/-1]

Add three new files, kexec.h, machine_kexec.c and relocate_kernel.S, to the LoongArch architecture, so as to add support for the kexec reboot mechanism (CONFIG_KEXEC) on LoongArch platforms. Kexec supports loading vmlinux.elf in ELF format and vmlinux.efi in PE format.

I tested kexec on LoongArch machines (Loongson-3A5000) and it works as expected:

  $ sudo kexec -l /boot/vmlinux.efi --reuse-cmdline
  $ sudo kexec -e

Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-10-12  LoongArch: Use generic BUG() handler  [Youling Tang, 1 file, +4/-0]

Inspired by commit 9fb7410f955 ("arm64/BUG: Use BRK instruction for generic BUG traps"), do the same for LoongArch to use the generic BUG() handler.

This patch uses the BREAK software breakpoint instruction to generate a trap instead, similarly to most other arches, with the generic BUG code generating the dmesg boilerplate. This allows bug metadata to be moved to a separate table and reduces the amount of inline code at BUG() and WARN() sites. It also avoids clobbering any registers before they can be dumped.

To mitigate the size of the bug table further, this patch makes use of the existing infrastructure for encoding addresses within the bug table as 32-bit relative pointers instead of absolute pointers. (Note: this limits the max kernel size to 2GB.)

Before the patch:

  [ 3018.338013] lkdtm: Performing direct entry BUG
  [ 3018.342445] Kernel bug detected[#5]:
  [ 3018.345992] CPU: 2 PID: 865 Comm: cat Tainted: G D 6.0.0-rc6+ #35

After the patch:

  [ 125.585985] lkdtm: Performing direct entry BUG
  [ 125.590433] ------------[ cut here ]------------
  [ 125.595020] kernel BUG at drivers/misc/lkdtm/bugs.c:78!
  [ 125.600211] Oops - BUG[#1]:
  [ 125.602980] CPU: 3 PID: 410 Comm: cat Not tainted 6.0.0-rc6+ #36

Compared to before, out-of-line file/line information is now reported.

Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
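
A sketch of what a relative-pointer bug table entry looks like, following the generic layout the text refers to; field names mirror the generic bug infrastructure but are simplified here:

  /* One entry per BUG()/WARN() site, emitted into the __bug_table section. */
  struct bug_entry_sketch {
          int            bug_addr_disp;   /* signed 32-bit offset to the BREAK instruction */
          int            file_disp;       /* signed 32-bit offset to the source file name */
          unsigned short line;            /* source line number */
          unsigned short flags;
  };

  /* Recovering the absolute trap address; this relative encoding is why the
   * kernel image must stay within a 2GB span. */
  static inline unsigned long bug_addr(const struct bug_entry_sketch *bug)
  {
          return (unsigned long)&bug->bug_addr_disp + bug->bug_addr_disp;
  }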

2022-10-12  LoongArch: Adjust symbol addressing for AS_HAS_EXPLICIT_RELOCS  [Xi Ruoyao, 1 file, +6/-6]

If explicit relocation hints are used by the toolchain, the -Wa,-mla-* options will be useless for C code. So only use them for the !CONFIG_AS_HAS_EXPLICIT_RELOCS case. Replace "la" with "la.pcrel" in head.S to keep the semantics consistent between new and old toolchains for the low level startup code.

For per-CPU variables, the "address" of the symbol is actually an offset from $r21. The value is near the loading address of the main kernel image, but far from the loading address of modules. So we use the model("extreme") attribute to tell the compiler that PC-relative addressing with a 32-bit offset is not sufficient for local per-CPU variables.

The behavior with different assemblers and compilers is summarized in the following table:

  AS has            CC has
  explicit relocs   explicit relocs *   Behavior
  ================================================================
  No                No                  Use la.* macros.
                                        No change from Linux 6.0.
  ----------------------------------------------------------------
  No                Yes                 Disable explicit relocs.
                                        No change from Linux 6.0.
  ----------------------------------------------------------------
  Yes               No                  Not supported.
  ----------------------------------------------------------------
  Yes               Yes                 Enable explicit relocs.
                                        No -Wa,-mla* options used.
  ================================================================

  *: We assume CC must have the model attribute if it has explicit relocs. Both features were added in the GCC 13 development cycle, so any GCC release >= 13 should be OK. Using early GCC 13 development snapshots may produce modules with unsupported relocations.

Link: https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f09482a
Link: https://gcc.gnu.org/r13-1834
Link: https://gcc.gnu.org/r13-2199
Tested-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
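
As an illustration of the per-CPU case, a variable can be tagged with the model("extreme") attribute so the compiler emits the full 64-bit addressing sequence for it; the macro and config symbol names below are illustrative, not necessarily the kernel's exact definitions:

  /* Force the "extreme" code model for symbols that may be far from the PC. */
  #ifdef CONFIG_CC_HAS_EXPLICIT_RELOCS            /* assumed config symbol */
  #define PER_CPU_MODEL_ATTR __attribute__((model("extreme")))
  #else
  #define PER_CPU_MODEL_ATTR
  #endif

  /* A per-CPU-style variable whose "address" is really an offset from $r21. */
  static unsigned long example_counter PER_CPU_MODEL_ATTR;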

2022-10-03  Merge tag 'efi-next-for-v6.1' into loongarch-next  [Huacai Chen, 1 file, +22/-0]

LoongArch architecture changes for 6.1 depend on the efi changes to work, so merge them to create a base.

2022-09-29  LoongArch: Align the address of kernel_entry to 4KB  [Huacai Chen, 1 file, +2/-0]

Align the address of kernel_entry to 4KB, to avoid an early TLB miss exception in case the entry code crosses a page boundary.

Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-09-27  efi/loongarch: libstub: remove dependency on flattened DT  [Ard Biesheuvel, 1 file, +2/-0]

LoongArch does not use FDT or DT natively [yet], and the only reason it currently uses it is so that it can reuse the existing EFI stub code.

Overloading the DT with data passed between the EFI stub and the core kernel has been a source of problems: there is the overlap between information provided by EFI which DT can also provide (initrd base/size, command line, memory descriptions), requiring us to reason about which is which and what to prioritize. It has also resulted in ABI leaks, i.e., internal ABI being promoted to external ABI inadvertently because the bootloader can set the EFI stub's DT properties as well (e.g., "kaslr-seed"). This has become especially problematic with boot environments that want to pretend that EFI boot is being done (to access ACPI and SMBIOS tables, for instance) but have no ability to execute the EFI stub, and so the environment that the EFI stub creates is emulated [poorly, in some cases].

Another downside of treating DT like this is that the DT binary that the kernel receives is different from the one created by the firmware, which is undesirable in the context of secure and measured boot.

Given that LoongArch support in Linux is brand new, we can avoid these pitfalls, treat the DT strictly as a hardware description, and use a separate handover method between the EFI stub and the kernel. Now that initrd loading and passing the EFI memory map have been refactored into pure EFI routines that use EFI configuration tables, the only thing we need to pass directly is the kernel command line (even if we could pass this via a config table as well, it is used extremely early, so passing it directly is preferred in this case.)

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Huacai Chen <chenhuacai@loongson.cn>

2022-09-06  efi/loongarch: Add efistub booting support  [Huacai Chen, 1 file, +20/-0]

This patch adds efistub booting support, which is the standard UEFI boot protocol for LoongArch. We use the generic efistub, which means we can pass boot information (i.e., system table, memory map, kernel command line, initrd) via a light FDT and drop a lot of non-standard code.

We use a flat mapping to map the EFI runtime into the kernel's address space. In EFI, VA = PA; in the kernel, VA = PA + PAGE_OFFSET. As a result, the flat mapping is not an identity mapping, so SetVirtualAddressMap() is still needed for the EFI runtime.

Tested-by: Xi Ruoyao <xry111@xry111.site>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
[ardb: change fpic to fpie as suggested by Xi Ruoyao]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
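
The address relation above can be summed up in a couple of lines; this is a sketch of the mapping rule, not the actual stub code:

  /* While firmware owns the mapping, EFI runtime regions keep VA == PA ... */
  static unsigned long efi_runtime_kernel_va(unsigned long pa)
  {
          /* ... but the kernel maps them "flat" at PA + PAGE_OFFSET, which is
           * why SetVirtualAddressMap() must still be called with the new VAs. */
          return pa + PAGE_OFFSET;
  }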

2022-08-12  LoongArch: Jump to the link address before enable PG  [Huacai Chen, 1 file, +11/-8]

The kernel entry points of both the boot CPU (i.e., kernel_entry) and the non-boot CPUs (i.e., smpboot_entry) may be physical addresses from the bootloader (in DA mode or identity-mapping PG mode). So we should jump to the link address before PG is enabled (because DA is disabled at the same time) and just after the DMW is configured.

Specifically: with some older firmwares, non-boot CPUs started with PG enabled, but this needs firmware cooperation in the form of a temporary page table, which is deemed unnecessary. On the other hand, the latest firmware versions configure the non-boot CPUs to start in DA mode, so kernel-side changes are needed.

Reviewed-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-07-29  LoongArch: Re-tab the assembly files  [WANG Xuerui, 1 file, +2/-2]

Reflow the *.S files for better stylistic consistency, namely hard tabs after mnemonic position, and vertical alignment of the first operand with hard tabs. Tab width is obviously 8. Some pre-existing intra-block vertical alignments are preserved.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-07-29  LoongArch: Use the "move" pseudo-instruction where applicable  [WANG Xuerui, 1 file, +1/-1]

Some of the assembly code in the LoongArch port likely originated from a time when the assembler did not support pseudo-instructions like "move" or "jr", so the desugared form was used and readability suffers (to a minor degree) as a result. As the upstream toolchain supports these pseudo-instructions from the beginning, migrate the existing few usages to them for better readability.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-07-29  LoongArch: Use the "jr" pseudo-instruction where applicable  [WANG Xuerui, 1 file, +2/-2]

Some of the assembly code in the LoongArch port likely originated from a time when the assembler did not support pseudo-instructions like "move" or "jr", so the desugared form was used and readability suffers (to a minor degree) as a result. As the upstream toolchain supports these pseudo-instructions from the beginning, migrate the existing few usages to them for better readability.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-06-25  LoongArch: Fix the _stext symbol address  [Huacai Chen, 1 file, +0/-2]

_stext means the start of the .text section (see __is_kernel_text()), but we put its definition in .ref.text by mistake. Fix it by defining it in vmlinux.lds.S.

Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-06-03  LoongArch: Add multi-processor (SMP) support  [Huacai Chen, 1 file, +30/-0]

LoongArch-based processors have 4, 8 or 16 cores per package. This patch adds multi-processor (SMP) support for LoongArch.

Reviewed-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

2022-06-03  LoongArch: Add boot and setup routines  [Huacai Chen, 1 file, +68/-0]

Add basic boot, setup and reset routines for LoongArch. LoongArch machines now use UEFI-based firmware, which passes configuration information to the kernel via ACPI and DMI/SMBIOS.

Currently an existing interface between the kernel and the bootloader is implemented: the kernel gets two values from the bootloader, passed in registers a0 and a1. a0 is an "EFI boot flag" distinguishing UEFI and non-UEFI firmware, while a1 is a pointer to an FDT with systable, memmap, cmdline and initrd information. The standard UEFI boot protocol (EFISTUB) will be added later.

Cc: linux-efi@vger.kernel.org
Cc: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Co-developed-by: Yun Liu <liuyun@loongson.cn>
Signed-off-by: Yun Liu <liuyun@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
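
A rough sketch of the handover described above, assuming head.S stashes a0/a1 into two global variables before calling into C; the variable and helper names here are illustrative:

  static void early_parse_fdt(void *fdt);      /* hypothetical helper */
  static void legacy_boot_setup(void);         /* hypothetical helper */

  unsigned long fw_arg0, fw_arg1;              /* saved from $a0 / $a1 by head.S (assumed names) */

  static void __init parse_bootloader_args(void)
  {
          bool efi_boot = fw_arg0 != 0;        /* "EFI boot flag" */
          void *fdt     = (void *)fw_arg1;     /* systable, memmap, cmdline, initrd */

          if (efi_boot)
                  early_parse_fdt(fdt);
          else
                  legacy_boot_setup();
  }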