author	Vijayanand Jitta <vjitta@codeaurora.org>	2021-04-30 08:59:07 +0300
committer	Linus Torvalds <torvalds@linux-foundation.org>	2021-04-30 21:20:40 +0300
commit	ad216c0316ad6391d90f4de0a7f59396b2925a06 (patch)
tree	478afea7154377d038c9e306bb737292b986d9ac /mm
parent	d70bec8cc95ad32f6b7e3e6fad72acdd3a5418e9 (diff)
download	linux-ad216c0316ad6391d90f4de0a7f59396b2925a06.tar.xz
mm: vmalloc: prevent use after free in _vm_unmap_aliases
A potential use after free can occur in _vm_unmap_aliases where an already
freed vmap_area could be accessed. Consider the following scenario:

	Process 1					Process 2

	__vm_unmap_aliases				__vm_unmap_aliases
	  purge_fragmented_blocks_allcpus		  rcu_read_lock()
	    rcu_read_lock()
	      list_del_rcu(&vb->free_list)
							  list_for_each_entry_rcu(vb .. )
	  __purge_vmap_area_lazy
	    kmem_cache_free(va)
							    va_start = vb->va->va_start

Here Process 1 is in the purge path: it does list_del_rcu on the vmap_block
and later frees its vmap_area. Since Process 2 was holding the rcu read lock
at that time, the vmap_block is still visible to its list traversal, so
Process 2 accesses it and dereferences the vmap_area of that vmap_block,
which was already freed by Process 1, resulting in a use after free.

Fix this by checking vb->dirty before accessing the vmap_area structure:
the purge path sets vb->dirty to VMAP_BBMAP_BITS, so checking for that value
prevents the use after free.

Link: https://lkml.kernel.org/r/1616062105-23263-1-git-send-email-vjitta@codeaurora.org
Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
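The guard the patch adds is easiest to see in isolation. Below is a minimal,
standalone C sketch (not kernel code) of that check: a block whose dirty count
has been forced to VMAP_BBMAP_BITS by the purge path is treated as dead and its
vmap_area is never dereferenced. The struct layouts, the constant value, and the
flush_block() helper are simplified illustrations, not the kernel's definitions.

/*
 * Standalone sketch (not kernel code) of the guard added by this patch.
 * Simplified assumption: the purge path forces vb->dirty to
 * VMAP_BBMAP_BITS before the backing vmap_area is freed, so a reader
 * must skip any block carrying that value.
 */
#include <stdio.h>

#define VMAP_BBMAP_BITS 1024UL	/* "fully dirty" marker set by the purge path */

struct vmap_area {
	unsigned long va_start;
};

struct vmap_block {
	struct vmap_area *va;	/* may already be freed by a concurrent purge */
	unsigned long dirty;	/* dirty page count; VMAP_BBMAP_BITS when purged */
};

/* Reader side: only dereference vb->va when the block is dirty but not purged. */
static void flush_block(const struct vmap_block *vb)
{
	if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
		unsigned long va_start = vb->va->va_start;

		printf("flush block starting at %#lx\n", va_start);
	} else {
		printf("skip block (clean or already purged)\n");
	}
}

int main(void)
{
	struct vmap_area va = { .va_start = 0x100000UL };
	struct vmap_block live   = { .va = &va,  .dirty = 4 };
	struct vmap_block purged = { .va = NULL, .dirty = VMAP_BBMAP_BITS };

	flush_block(&live);	/* safe: dirty but not fully purged */
	flush_block(&purged);	/* skipped: would otherwise touch a freed area */
	return 0;
}

The same shape appears in the one-line change to _vm_unmap_aliases below:
the existing "vb->dirty" test gains the "!= VMAP_BBMAP_BITS" comparison so the
RCU traversal never reads vb->va on a block the purge path has already claimed.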
Diffstat (limited to 'mm')
-rw-r--r--	mm/vmalloc.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 612a3790cfd4..51874a341ab0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2040,7 +2040,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
rcu_read_lock();
list_for_each_entry_rcu(vb, &vbq->free, free_list) {
spin_lock(&vb->lock);
- if (vb->dirty) {
+ if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
unsigned long va_start = vb->va->va_start;
unsigned long s, e;