author     Thomas Gleixner <tglx@linutronix.de>    2022-09-15 14:10:55 +0300
committer  Peter Zijlstra <peterz@infradead.org>   2022-10-17 17:41:01 +0300
commit     8b44221671ec45d725a4558ff7aa5ea90ecfc885
tree       d04b3160b20d0c041601ca055b3200d1cb435480 /arch/x86/crypto/serpent-avx-x86_64-asm_64.S
parent     ba1b270c20dfb7f7b7a076b1a97ef4b7dcb539b5
download   linux-8b44221671ec45d725a4558ff7aa5ea90ecfc885.tar.xz
crypto: x86/serpent: Remove redundant alignments
SYM_FUNC_START*() and friends already imply alignment, remove custom
alignment hacks to make code consistent. This prepares for future
function call ABI changes.
Also, now that the function alignment has been pushed to 16 bytes, this
custom alignment is completely superfluous.
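
For reference, the alignment comes from the SYM_FUNC_START*() macros
themselves. A simplified sketch of the relevant definitions, paraphrased
from include/linux/linkage.h and arch/x86/include/asm/linkage.h (exact
definitions vary by kernel version and configuration):

```c
/* Sketch only: paraphrased from include/linux/linkage.h and
 * arch/x86/include/asm/linkage.h; exact definitions vary by
 * kernel version and Kconfig.
 */
#define __ALIGN		.balign CONFIG_FUNCTION_ALIGNMENT, 0x90
#define ALIGN		__ALIGN
#define SYM_A_ALIGN	ALIGN

/* SYM_FUNC_START*() expand through SYM_START(), which emits the
 * alignment directive before the symbol is defined.
 */
#define SYM_FUNC_START_LOCAL(name) \
	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
```

So a hand-written .align 8 in front of SYM_FUNC_START_LOCAL() only
duplicates (with a weaker value) the alignment the macro already emits.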
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220915111144.558544791@infradead.org
Diffstat (limited to 'arch/x86/crypto/serpent-avx-x86_64-asm_64.S')
-rw-r--r--   arch/x86/crypto/serpent-avx-x86_64-asm_64.S | 2 --
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
index 82f2313f512b..97e283621851 100644
--- a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
@@ -550,7 +550,6 @@
 #define write_blocks(x0, x1, x2, x3, t0, t1, t2) \
 	transpose_4x4(x0, x1, x2, x3, t0, t1, t2)

-.align 8
 SYM_FUNC_START_LOCAL(__serpent_enc_blk8_avx)
 	/* input:
 	 *	%rdi: ctx, CTX
@@ -604,7 +603,6 @@ SYM_FUNC_START_LOCAL(__serpent_enc_blk8_avx)
 	RET;
 SYM_FUNC_END(__serpent_enc_blk8_avx)

-.align 8
 SYM_FUNC_START_LOCAL(__serpent_dec_blk8_avx)
 	/* input:
 	 *	%rdi: ctx, CTX