path: root/arch/arm64/include/asm/atomic_lse.h
Age | Commit message | Author | Files | Lines
2023-06-05 | arch: Remove cmpxchg_double | Peter Zijlstra | 1 | -36/+0
2023-06-05 | arch: Introduce arch_{,try_}_cmpxchg128{,_local}() | Peter Zijlstra | 1 | -0/+31
2023-03-28 | arm64: atomics: lse: improve cmpxchg implementation | Mark Rutland | 1 | -12/+5
2023-01-05 | arm64: cmpxchg_double*: hazard against entire exchange variable | Mark Rutland | 1 | -1/+1
2022-09-09 | arm64: atomic: always inline the assembly | Mark Rutland | 1 | -17/+29
2022-01-20 | arm64: atomics: lse: Dereference matching size | Kees Cook | 1 | -1/+1
2021-12-14 | arm64: atomics: lse: define RETURN ops in terms of FETCH ops | Mark Rutland | 1 | -34/+14
2021-12-14 | arm64: atomics: lse: improve constraints for simple ops | Mark Rutland | 1 | -12/+18
2021-12-14 | arm64: atomics: lse: define ANDs in terms of ANDNOTs | Mark Rutland | 1 | -30/+4
2021-12-14 | arm64: atomics: lse: define SUBs in terms of ADDs | Mark Rutland | 1 | -122/+58
2021-12-14 | arm64: atomics: format whitespace consistently | Mark Rutland | 1 | -7/+7
2020-01-16 | arm64: lse: fix LSE atomics with LLVM's integrated assembler | Sami Tolvanen | 1 | -0/+19
2019-10-04 | arm64: Mark functions using explicit register variables as '__always_inline' | Will Deacon | 1 | -2/+4
2019-08-29 | arm64: avoid using hard-coded registers for LSE atomics | Andrew Murray | 1 | -29/+41
2019-08-29 | arm64: atomics: avoid out-of-line ll/sc atomics | Andrew Murray | 1 | -251/+114
2019-07-09 | Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/k... | Linus Torvalds | 1 | -17/+17
2019-06-19 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 | Thomas Gleixner | 1 | -12/+1
2019-06-03 | locking/atomic, arm64: Use s64 for atomic64 | Mark Rutland | 1 | -17/+17
2019-02-11 | Merge branch 'locking/atomics' into locking/core, to pick up WIP commits | Ingo Molnar | 1 | -19/+19
2018-12-07 | arm64: Avoid masking "old" for LSE cmpxchg() implementation | Will Deacon | 1 | -2/+2
2018-12-07 | arm64: Avoid redundant type conversions in xchg() and cmpxchg() | Will Deacon | 1 | -23/+23
2018-11-01 | arm64, locking/atomics: Use instrumented atomics | Mark Rutland | 1 | -19/+19
2018-05-21 | arm64: lse: Add early clobbers to some input/output asm operands | Will Deacon | 1 | -12/+12
2017-07-20 | arm64: atomics: Remove '&' from '+&' asm constraint in lse atomics | Will Deacon | 1 | -1/+1
2017-05-09 | arm64: atomic_lse: match asm register sizes | Mark Rutland | 1 | -2/+2
2016-09-09 | arm64: lse: convert lse alternatives NOP padding to use __nops | Will Deacon | 1 | -37/+27
2016-06-16 | locking/atomic, arch/arm64: Implement atomic{,64}_fetch_{add,sub,and,andnot,o... | Will Deacon | 1 | -0/+172
2016-06-16 | locking/atomic, arch/arm64: Generate LSE non-return cases using common macros | Will Deacon | 1 | -90/+32
2016-02-26 | arm64: lse: deal with clobbered IP registers after branch via PLT | Ard Biesheuvel | 1 | -19/+19
2015-11-05 | arm64: cmpxchg_dbl: fix return value type | Lorenzo Pieralisi | 1 | -1/+1
2015-10-12 | arm64: atomics: implement native {relaxed, acquire, release} atomics | Will Deacon | 1 | -77/+116
2015-07-29 | arm64: lse: fix lse cmpxchg code indentation | Will Deacon | 1 | -3/+3
2015-07-27 | arm64: atomic64_dec_if_positive: fix incorrect branch condition | Will Deacon | 1 | -1/+1
2015-07-27 | arm64: atomics: implement atomic{,64}_cmpxchg using cmpxchg | Will Deacon | 1 | -43/+0
2015-07-27 | arm64: cmpxchg: avoid "cc" clobber in ll/sc routines | Will Deacon | 1 | -2/+2
2015-07-27 | arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU | Will Deacon | 1 | -0/+43
2015-07-27 | arm64: cmpxchg: patch in lse instructions when supported by the CPU | Will Deacon | 1 | -0/+39
2015-07-27 | arm64: atomics: patch in lse instructions when supported by the CPU | Will Deacon | 1 | -109/+291
2015-07-27 | arm64: introduce CONFIG_ARM64_LSE_ATOMICS as fallback to ll/sc atomics | Will Deacon | 1 | -0/+170
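
Most of the history above revolves around expressing the kernel's atomics as single LSE instructions (STADD, LDADD, SWP, CAS) instead of LL/SC (LDXR/STXR) retry loops, and patching them in at runtime when the CPU implements the ARMv8.1 LSE extension. As a rough illustration of the shape these helpers take, here is a minimal sketch of an LSE-style atomic add; the function names are hypothetical and this is not the actual atomic_lse.h code, which additionally generates per-size and per-ordering variants via macros and falls back to the LL/SC implementations on CPUs without LSE. It assumes a compiler targeting ARMv8.1-A or later (e.g. -march=armv8.1-a).

/*
 * Illustrative sketch only, not the kernel's atomic_lse.h.
 * Build for ARMv8.1-A or later so the assembler accepts LSE instructions.
 */
static inline void sketch_atomic_add(int i, int *counter)
{
	/* STADD atomically adds 'i' to *counter; no result is returned. */
	asm volatile("stadd	%w[i], %[v]"
		     : [v] "+Q" (*counter)
		     : [i] "r" (i));
}

static inline int sketch_atomic_fetch_add_relaxed(int i, int *counter)
{
	int old;

	/* LDADD atomically adds 'i' to *counter and yields the old value. */
	asm volatile("ldadd	%w[i], %w[old], %[v]"
		     : [v] "+Q" (*counter), [old] "=r" (old)
		     : [i] "r" (i));
	return old;
}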