author		Alexander Gordeev <agordeev@linux.ibm.com>	2023-05-26 15:30:30 +0300
committer	Alexander Gordeev <agordeev@linux.ibm.com>	2023-06-20 20:52:13 +0300
commit		3e8261003bd28208986d3c42004510083c086e24 (patch)
tree		c800f820adb082533ff7b3c92e0bfa394c1ab0fa /arch/s390/include/asm/physmem_info.h
parent		2ed8b509753a0454b52b2d72e982265472c8d861 (diff)
download	linux-3e8261003bd28208986d3c42004510083c086e24.tar.xz
s390/kasan: avoid short by one page shadow memory
Kernel Address Sanitizer (KASAN) encodes each 8 bytes of memory in one shadow byte, i.e. the start and end address of a memory range are shifted right by 3 bits when the corresponding shadow memory is created for that range.

The memory mapping routine used expects page-aligned addresses, but the 3-bit shift described above can turn the shadow memory range start and end boundaries into non-page-aligned ones when the size of the original memory range is less than (PAGE_SIZE << 3). As a result, the resulting shadow memory range could be short by one page.

Align the start and end addresses on a page boundary when mapping a shadow memory range to avoid the described issue in the future.

Note that this does not fix a real problem, since no virtual regions of size less than (PAGE_SIZE << 3) currently exist.

Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
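A minimal user-space sketch of the arithmetic described above; PAGE_SIZE, the SHADOW_SHIFT constant, and the ALIGN_DOWN/ALIGN_UP macros are illustrative stand-ins, not the kernel's actual implementation. It shows how shifting a page-sized region right by 3 bits yields a shadow range smaller than one page, and how page-aligning the boundaries recovers the missing page:

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define SHADOW_SHIFT	3	/* one shadow byte encodes 8 bytes of memory */

/* Round down/up to a page boundary, as the fix does for shadow ranges. */
#define ALIGN_DOWN(x)	((x) & PAGE_MASK)
#define ALIGN_UP(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	/* A page-aligned region smaller than (PAGE_SIZE << 3) bytes. */
	unsigned long start = 0x10000000UL;
	unsigned long end = start + PAGE_SIZE;

	/* After the 3-bit shift the shadow end is no longer page-aligned. */
	unsigned long sh_start = start >> SHADOW_SHIFT;
	unsigned long sh_end = end >> SHADOW_SHIFT;

	/* 512 bytes: a page-granular mapping would fall short by one page. */
	printf("unaligned shadow: %#lx-%#lx (%lu bytes)\n",
	       sh_start, sh_end, sh_end - sh_start);

	/* 4096 bytes: the one page the shadow of this region needs. */
	printf("aligned shadow:   %#lx-%#lx (%lu bytes)\n",
	       ALIGN_DOWN(sh_start), ALIGN_UP(sh_end),
	       ALIGN_UP(sh_end) - ALIGN_DOWN(sh_start));
	return 0;
}

Compiled with any standard C compiler, the unaligned shadow range comes out to 512 bytes (zero whole pages), while the aligned range covers the one page the shadow of this region actually needs.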
Diffstat (limited to 'arch/s390/include/asm/physmem_info.h')
0 files changed, 0 insertions, 0 deletions