author     Catalin Marinas <catalin.marinas@arm.com>  2018-02-28 21:47:20 +0300
committer  Will Deacon <will.deacon@arm.com>          2018-03-06 21:52:32 +0300
commit     1f85b42a691cd8329ba82dbcaeec80ac1231b32a (patch)
tree       da12a0975152204ecccb02c335e68cb3f09aed22  /arch/arm64/include/asm/cache.h
parent     6b24442d68e78c57c8837920ea5dfb252571847a (diff)
download   linux-1f85b42a691cd8329ba82dbcaeec80ac1231b32a.tar.xz
arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size)
Commit 97303480753e ("arm64: Increase the max granular size") increased the cache line size to 128 to match Cavium ThunderX, apparently for some performance benefit which could not be confirmed. This change, however, has an impact on network packet allocation in certain circumstances, requiring slightly over a 4K page with a significant performance degradation.

This patch reverts L1_CACHE_SHIFT back to 6 (64-byte cache line) while keeping ARCH_DMA_MINALIGN at 128. The cache_line_size() function was changed to default to ARCH_DMA_MINALIGN in the absence of a meaningful CTR_EL0.CWG bit field. In addition, if a system with ARCH_DMA_MINALIGN < CTR_EL0.CWG is detected, the kernel will force swiotlb bounce buffering for all non-coherent devices, since DMA cache maintenance on sub-CWG ranges is not safe and leads to data corruption.

Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Cc: Timur Tabi <timur@codeaurora.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
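For context, a minimal sketch (not the kernel's verbatim code) of how the CTR_EL0.CWG field mentioned above maps to a line size: CWG occupies bits [27:24] of CTR_EL0 and encodes the Cache Writeback Granule as 4 << CWG bytes, with 0 meaning "not reported". The macro names and helper below are illustrative; the 128-byte fallback stands in for ARCH_DMA_MINALIGN, matching the change in the diff that follows.

	/* Illustrative only: decode CTR_EL0.CWG (bits [27:24]) into a line size. */
	#define CWG_SHIFT	24
	#define CWG_MASK	0xfUL

	static inline unsigned long ctr_to_line_size(unsigned long ctr_el0)
	{
		unsigned long cwg = (ctr_el0 >> CWG_SHIFT) & CWG_MASK;

		/*
		 * CWG == 0 means the granule is not reported; fall back to the
		 * safe DMA alignment (128 bytes) rather than the now-smaller
		 * L1_CACHE_BYTES (64).
		 */
		return cwg ? 4UL << cwg : 128UL;
	}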
Diffstat (limited to 'arch/arm64/include/asm/cache.h')
-rw-r--r--  arch/arm64/include/asm/cache.h  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index ea9bb4e0e9bb..b2e6ece23713 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -29,7 +29,7 @@
#define ICACHE_POLICY_VIPT 2
#define ICACHE_POLICY_PIPT 3
-#define L1_CACHE_SHIFT 7
+#define L1_CACHE_SHIFT (6)
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
/*
@@ -39,7 +39,7 @@
* cache before the transfer is done, causing old data to be seen by
* the CPU.
*/
-#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
+#define ARCH_DMA_MINALIGN (128)
#ifndef __ASSEMBLY__
@@ -73,7 +73,7 @@ static inline u32 cache_type_cwg(void)
static inline int cache_line_size(void)
{
u32 cwg = cache_type_cwg();
- return cwg ? 4 << cwg : L1_CACHE_BYTES;
+ return cwg ? 4 << cwg : ARCH_DMA_MINALIGN;
}
#endif /* __ASSEMBLY__ */
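As a usage illustration of the patched fallback, here is a small self-contained C program (user-space, hypothetical sample values, not kernel code) showing what the new cache_line_size() expression returns for a few CWG encodings: CWG = 4 yields 64 bytes (4 << 4), CWG = 5 yields 128, and CWG = 0, the "not reported" case, falls back to ARCH_DMA_MINALIGN (128) instead of the new 64-byte L1_CACHE_BYTES.

	/* Standalone illustration of the patched cache_line_size() fallback. */
	#include <stdio.h>

	#define ARCH_DMA_MINALIGN	128

	static int cache_line_size_for(unsigned int cwg)
	{
		/* Same expression as the patched kernel helper. */
		return cwg ? 4 << cwg : ARCH_DMA_MINALIGN;
	}

	int main(void)
	{
		unsigned int samples[] = { 0, 4, 5 };

		for (int i = 0; i < 3; i++)
			printf("CWG=%u -> %d bytes\n", samples[i],
			       cache_line_size_for(samples[i]));

		/* Expected output: 128, 64 and 128 bytes respectively. */
		return 0;
	}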