path: root/arch/arc/include/asm/uaccess.h
Age | Commit message | Author | Files | Lines
2023-08-17 | ARC: uaccess: elide unaligned handling if hardware supports | Vineet Gupta | 1 | -4/+6
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17 | ARC: uaccess: remove arc specific out-of-line handlers for -Os | Vineet Gupta | 1 | -9/+2
Everything is now out-of-line in lib/usercopy.c
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2022-02-25 | uaccess: remove CONFIG_SET_FS | Arnd Bergmann | 1 | -1/+0
There are no remaining callers of set_fs(), so CONFIG_SET_FS can be removed globally, along with the thread_info field and any references to it. This turns access_ok() into a cheaper check against TASK_SIZE_MAX.

As CONFIG_SET_FS is now gone, drop all remaining references to set_fs()/get_fs(), mm_segment_t, user_addr_max() and uaccess_kernel().

Acked-by: Sam Ravnborg <sam@ravnborg.org> # for sparc32 changes
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Tested-by: Sergey Matyukevich <sergey.matyukevich@synopsys.com> # for arc changes
Acked-by: Stafford Horne <shorne@gmail.com> # [openrisc, asm-generic]
Acked-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-02-25 | uaccess: generalize access_ok() | Arnd Bergmann | 1 | -29/+0
There are many different ways that access_ok() is defined across architectures, but in the end, they all just compare against the user_addr_max() value or they accept anything.

Provide one definition that works for most architectures, checking against TASK_SIZE_MAX for user processes or skipping the check inside of uaccess_kernel() sections. For architectures without CONFIG_SET_FS, this should be the fastest check, as it comes down to a single comparison of a pointer against a compile-time constant, while the architecture-specific versions tend to do something more complex for historic reasons or get something wrong.

Type checking for __user annotations is handled inconsistently across architectures, but this is easily simplified as well by using an inline function that takes a 'const void __user *' argument. A handful of callers need an extra __user annotation for this.

Some architectures had a trick of using 33-bit or 65-bit arithmetic on the addresses to detect overflow; however, this simpler version uses fewer registers, which means it can produce better object code in the end despite needing a second (statically predicted) branch.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64, asm-generic]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Stafford Horne <shorne@gmail.com>
Acked-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
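[Editor's sketch] A minimal standalone C sketch of the check shape this commit describes: an inline function taking a const pointer, reduced to comparisons against one compile-time TASK_SIZE_MAX-style constant. The limit value and the names are illustrative only, not the asm-generic code verbatim.

    #include <stdbool.h>
    #include <stdint.h>

    #define TASK_SIZE_MAX_SKETCH 0x60000000UL  /* hypothetical user/kernel split */

    /* Shape of the generalized access_ok(): the (pointer, size) pair is
     * compared against a single compile-time constant. */
    static inline bool access_ok_sketch(const void *ptr, unsigned long size)
    {
            unsigned long addr = (uintptr_t)ptr;

            /* "TASK_SIZE_MAX_SKETCH - size" cannot wrap, because size has
             * already been checked against the limit. */
            return size <= TASK_SIZE_MAX_SKETCH &&
                   addr <= TASK_SIZE_MAX_SKETCH - size;
    }

The second comparison is the "second (statically predicted) branch" the commit message mentions.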
2021-07-28 | asm-generic: remove extra strn{cpy_from,len}_user declarations | Arnd Bergmann | 1 | -5/+0
As these are now in asm-generic, it's no longer necessary to declare them in the architecture.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2021-07-23 | arc: use generic strncpy/strnlen from_user | Arnd Bergmann | 1 | -78/+5
Remove the arc implementation of strncpy/strnlen and instead use the generic versions. The arc version is fairly slow because it always does byte accesses even for aligned data, and its checks for user_addr_max() differ from the generic code.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
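[Editor's sketch] The speed difference comes from the generic lib/ routines scanning aligned words and testing a whole word at a time for a terminating NUL, instead of loading one byte per iteration. A small user-space illustration of that zero-byte-per-word test, with 32-bit constants; this is only a sketch of the idea, not the kernel's <asm/word-at-a-time.h> implementation.

    #include <stdint.h>
    #include <stdio.h>

    /* Nonzero iff any byte of the 32-bit word v is 0x00. */
    static int word_has_zero_byte(uint32_t v)
    {
            return ((v - 0x01010101u) & ~v & 0x80808080u) != 0;
    }

    int main(void)
    {
            printf("%d\n", word_has_zero_byte(0x61626364u));  /* "abcd" -> 0 */
            printf("%d\n", word_has_zero_byte(0x61620064u));  /* has a NUL -> 1 */
            return 0;
    }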
2021-07-23 | asm-generic/uaccess.h: remove __strncpy_from_user/__strnlen_user | Arnd Bergmann | 1 | -4/+10
This is a preparation for changing over architectures to the generic implementation one at a time.

As there are no callers of either __strncpy_from_user() or __strnlen_user(), fold these into the strncpy_from_user() and strnlen_user() functions to make each implementation independent of the others. Many of these implementations have known bugs, but the intention here is to not change behavior at all and stay compatible with those bugs for the moment.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2019-06-19 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 | Thomas Gleixner | 1 | -4/+1
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation

extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-04-23 | asm-generic: don't include <asm/segment.h> from <asm/uaccess.h> | Christoph Hellwig | 1 | -0/+1
<asm/segment.h> is an odd x86 legacy that we shouldn't force on other architectures. arc used it to bring in mm_context_t, but we can do that inside the arc code easily.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2019-02-21 | ARC: uaccess: remove lp_start, lp_end from clobber list | Vineet Gupta | 1 | -4/+4
Newer ARC gcc handles lp_start, lp_end in a different way and doesn't like them in the clobber list.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-12-20 | ARC: uaccess: don't use "l" gcc inline asm constraint modifier | Vineet Gupta | 1 | -2/+3
This used to set up the LP_COUNT register automatically, but that support has now been removed from gcc. An earlier fix, 3c7c7a2fc8811, addressed the instance in delay.h but missed this one, as the gcc change had not yet made its way into production toolchains and was not as pedantic then as it is now.

Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-03-30 | ARC: uaccess: enable INLINE_COPY_{TO,FROM}_USER ... | Vineet Gupta | 1 | -10/+6
... and switch to the generic out-of-line version in lib/usercopy.c

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-03-29 | arc: switch to RAW_COPY_USER | Al Viro | 1 | -4/+4
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-03-29 | arc: get rid of unused declaration | Al Viro | 1 | -3/+0
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-03-28 | new helper: uaccess_kernel() | Al Viro | 1 | -1/+1
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-03-28 | add asm-generic/extable.h | Al Viro | 1 | -2/+0
... and make the users of generic uaccess.h use that.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-03-06 | uaccess: drop duplicate includes from asm/uaccess.h | Al Viro | 1 | -2/+0
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-09-14 | ARC: uaccess: get_user to zero out dest in case of fault | Vineet Gupta | 1 | -2/+9
Al reported a potential issue with ARC get_user(): it wasn't clearing out the destination in case of a fault due to a bad address etc.

Verified using the following:

| {
|	u32 bogus1 = 0xdeadbeef;
|	u64 bogus2 = 0xdead;
|	int rc1, rc2;
|
|	pr_info("Orig values %x %llx\n", bogus1, bogus2);
|	rc1 = get_user(bogus1, (u32 __user *)0x40000000);
|	rc2 = get_user(bogus2, (u64 __user *)0x50000000);
|	pr_info("access %d %d, new values %x %llx\n",
|		rc1, rc2, bogus1, bogus2);
| }

| [ARCLinux]# insmod /mnt/kernel-module/qtn.ko
| Orig values deadbeef dead
| access -14 -14, new values 0 0

Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-30 | Fix typos | Andrea Gelmini | 1 | -1/+1
Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-22 | ARCv2: Adhere to Zero Delay loop restriction | Vineet Gupta | 1 | -9/+8
A branch insn can't be scheduled as the last insn of a Zero Overhead loop.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-09-27 | ARC: Fix 32-bit wrap around in access_ok() | Vineet Gupta | 1 | -2/+2
Anton reported:

| LTP tests syscalls/process_vm_readv01 and process_vm_writev01 fail
| similarly in one testcase test_iov_invalid -> lvec->iov_base.
| Testcase expects errno EFAULT and return code -1,
| but it gets return code 1 and ERRNO is 0 what means success.

Essentially the test case was passing a pointer of -1, which access_ok() was not catching. It was doing [@addr + @sz <= TASK_SIZE], which would pass for @addr == -1 because the addition wraps around. Fixed by rewriting the check as [@addr <= TASK_SIZE - @sz].

Reported-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
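[Editor's sketch] A tiny plain-C demonstration of the wrap, using 32-bit unsigned arithmetic; the TASK_SIZE value here is made up for illustration and is not ARC's real memory layout.

    #include <stdint.h>
    #include <stdio.h>

    #define TASK_SIZE_SKETCH 0x60000000u   /* illustrative user address limit */

    int main(void)
    {
            uint32_t addr = (uint32_t)-1;  /* the -1 pointer from the LTP failure */
            uint32_t sz = 4;

            /* Old check: addr + sz wraps to 3, so the comparison "passes". */
            printf("old: %d\n", addr + sz <= TASK_SIZE_SKETCH);   /* prints 1 */

            /* New check: 0xffffffff <= 0x5ffffffc is false, access rejected. */
            printf("new: %d\n", addr <= TASK_SIZE_SKETCH - sz);   /* prints 0 */
            return 0;
    }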
2013-02-11 | ARC: [optim] uaccess __{get,put}_user() optimised | Vineet Gupta | 1 | -0/+105
Override the asm-generic implementations. We basically gain on 2 fronts:

* Checks for alignment are no longer needed, as we are only doing "unit" sized copies. (A careful observer could argue that while the kernel buffers are aligned, the user buffer in theory might not be; however in that case user space is already broken when it tries to deref a hword/word straddling a word boundary, so we are not making it any worse.)

* __copy_{to,from}_user() returns the number of bytes that couldn't be copied, whereas get_user() returns 0 for success or -EFAULT (not a size). Thus the code to do the leftover-bytes calculation can be avoided as well.

The savings were significant: ~17k of code.

bloat-o-meter vmlinux_uaccess_pre vmlinux_uaccess_post
add/remove: 0/4 grow/shrink: 8/118 up/down: 1262/-18758 (-17496)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
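[Editor's sketch] A hedged plain-C illustration of the "unit sized" idea: the access width is picked at compile time from sizeof(*ptr), so there is never a partial copy and the caller only ever sees 0 or -EFAULT. The macro name and the absence of real fault handling are assumptions for illustration; this is not the ARC inline-asm implementation.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Dispatch on the pointee size: always one naturally sized load,
     * so no alignment checks and no leftover-bytes arithmetic. */
    #define GET_USER_SKETCH(x, ptr)                                 \
    ({                                                              \
            int __err = 0;                                          \
            switch (sizeof(*(ptr))) {                               \
            case 1: (x) = *(const uint8_t  *)(ptr); break;          \
            case 2: (x) = *(const uint16_t *)(ptr); break;          \
            case 4: (x) = *(const uint32_t *)(ptr); break;          \
            case 8: (x) = *(const uint64_t *)(ptr); break;          \
            default: __err = -EFAULT; break;                        \
            }                                                       \
            __err;                                                  \
    })

    int main(void)
    {
            uint32_t src = 0xdeadbeef, dst = 0;
            int err = GET_USER_SKETCH(dst, &src);

            printf("err=%d val=%#x\n", err, dst);  /* err=0 val=0xdeadbeef */
            return 0;
    }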
2013-02-11 | ARC: uaccess friends | Vineet Gupta | 1 | -0/+646
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>