author     Ard Biesheuvel <ardb@kernel.org>  2022-10-13 13:07:27 +0300
committer  Ard Biesheuvel <ardb@kernel.org>  2022-11-09 14:42:03 +0300
commit     0efb61c89fa021dfcdb92f22bbc9a7cb3f0fe3fe (patch)
tree       ff6532a2c3db1ca3b41d69397f4146f4918e30f3 /arch/loongarch/include
parent     d9ffe524a538720328b5bf98733a6c641e236bc8 (diff)
download   linux-0efb61c89fa021dfcdb92f22bbc9a7cb3f0fe3fe.tar.xz
efi/loongarch: Don't jump to kernel entry via the old image
Currently, the EFI entry code for LoongArch is set up to copy the
executable image to the preferred offset, but instead of branching
directly into that image, it branches to the local copy of kernel_entry,
and relies on the logic in that function to switch to the link time
address instead.

This is a bit sloppy, and not something we can support once we merge the
EFI decompressor with the EFI stub. So let's clean this up a bit, by
adding a helper that computes the offset of kernel_entry from the start
of the image, and simply adding the result to VMLINUX_LOAD_ADDRESS.

And considering that we cannot execute from anywhere else anyway, let's
avoid efi_relocate_kernel() and just allocate the pages instead.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
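As a rough sketch of the approach described above (the stub-side change is
outside the diffstat shown below, which is limited to the header), and
assuming the kernel_offset and kernel_entry symbols already exported by the
LoongArch head code, the new helper could look something like this:

        /*
         * Sketch only, not the code from this patch. kernel_offset is assumed
         * to hold its own offset from the start of the image, so that taking
         * its address and subtracting its value yields the runtime base of
         * the image the stub is currently executing from.
         */
        extern unsigned long kernel_offset;
        extern unsigned long kernel_entry;

        unsigned long kernel_entry_address(void)
        {
                /* Runtime base of the image this copy of the stub runs from. */
                unsigned long base = (unsigned long)&kernel_offset - kernel_offset;

                /*
                 * Offset of kernel_entry from the start of the image, rebased
                 * onto the link-time load address of the copied image.
                 */
                return (unsigned long)&kernel_entry - base + VMLINUX_LOAD_ADDRESS;
        }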
Diffstat (limited to 'arch/loongarch/include')
-rw-r--r--  arch/loongarch/include/asm/efi.h  2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/loongarch/include/asm/efi.h b/arch/loongarch/include/asm/efi.h
index 5a470c8d2bbc..97f16e60c6ff 100644
--- a/arch/loongarch/include/asm/efi.h
+++ b/arch/loongarch/include/asm/efi.h
@@ -31,4 +31,6 @@ static inline unsigned long efi_get_kimg_min_align(void)
 
 #define EFI_KIMG_PREFERRED_ADDRESS	PHYSADDR(VMLINUX_LOAD_ADDRESS)
 
+unsigned long kernel_entry_address(void);
+
 #endif /* _ASM_LOONGARCH_EFI_H */
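For context, a hedged sketch of how the stub side might use the new
declaration once the image has been copied to EFI_KIMG_PREFERRED_ADDRESS.
The kernel_entry_t prototype and the jump_to_kernel() helper are
illustrative assumptions, not code from this patch:

        /* Illustrative only: the entry-point prototype is an assumption. */
        typedef void (*kernel_entry_t)(bool efi_boot, unsigned long cmdline,
                                       unsigned long systab);

        static void jump_to_kernel(unsigned long cmdline, unsigned long systab)
        {
                /* Branch straight to the link-time entry point of the copied
                 * image, instead of the local copy of kernel_entry. */
                kernel_entry_t real_entry = (kernel_entry_t)kernel_entry_address();

                real_entry(true, cmdline, systab);
        }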