path: root/arch/x86/include/asm/bitops.h
author	Linus Torvalds <torvalds@linux-foundation.org>	2024-04-08 21:38:30 +0300
committer	Linus Torvalds <torvalds@linux-foundation.org>	2024-05-23 00:12:11 +0300
commit	7453b9485114f7ffec4a99bccee469a4d4809894 (patch)
tree	37e21820e4a0dc2fdbb04cb42d23e08eb9323266 /arch/x86/include/asm/bitops.h
parent	dbaaabd60e1662d2659eaeab0a4fc521667737ed (diff)
download	linux-7453b9485114f7ffec4a99bccee469a4d4809894.tar.xz
x86: improve array_index_mask_nospec() code generation
Don't force the inputs to be 'unsigned long' when the comparison can easily be done in 32-bit, if that's more appropriate.

Note that while we can look at the inputs to choose an appropriate size for the compare instruction, the output is fixed at 'unsigned long'. That's not technically optimal either, since a 32-bit 'sbbl' would often be sufficient.

But for the outgoing mask we don't know how the mask ends up being used (i.e. we have uses that have an incoming 32-bit array index, but end up using the mask for other things). That said, it only costs the extra REX prefix to always generate the 64-bit mask.

[ An 'sbbl' also always technically generates a 64-bit mask, but with the upper 32 bits clear: that's fine when the incoming index that will be masked is already 32-bit, but not if you use the mask to mask a pointer afterwards, like the file table lookup does ]

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'arch/x86/include/asm/bitops.h')
0 files changed, 0 insertions, 0 deletions