Commit log for drivers/iommu

Each entry below lists the commit date, subject, author, and diffstat (files changed, -deleted/+added lines).
2023-06-30  Merge tag 'for-linus-iommufd' of git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd — Linus Torvalds (2 files, -4/+12)

Pull iommufd updates from Jason Gunthorpe:

 "Just two syzkaller fixes, both for the same basic issue: using the
  area pointer during an access forced unmap while the locks protecting
  it were let go"

* tag 'for-linus-iommufd' of git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd:
  iommufd: Call iopt_area_contig_done() under the lock
  iommufd: Do not access the area pointer after unlocking
2023-06-30  Merge tag 'iommu-updates-v6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu — Linus Torvalds (15 files, -605/+693)

Pull iommu updates from Joerg Roedel:

 "Core changes:
   - iova_magazine_alloc() optimization
   - Make flush-queue an IOMMU driver capability
   - Consolidate the error handling around device attachment

  AMD IOMMU changes:
   - AVIC Interrupt Remapping Improvements
   - Some minor fixes and cleanups

  Intel VT-d changes from Lu Baolu:
   - Small and misc cleanups

  ARM-SMMU changes from Will Deacon:
   - Device-tree binding updates:
     - Add missing clocks for SC8280XP and SA8775 Adreno SMMUs
     - Add two new Qualcomm SMMUs in SDX75 and SM6375
   - Workarounds for Arm MMU-700 errata:
     - 1076982: Avoid use of SEV-based cmdq wakeup
     - 2812531: Terminate command batches with a CMD_SYNC
   - Enforce single-stage translation to avoid nesting-related errata
   - Set the correct level hint for range TLB invalidation on teardown

  .. and some other minor fixes and cleanups (including Freescale PAMU
  and virtio-iommu changes)"

* tag 'iommu-updates-v6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (50 commits)
  iommu/vt-d: Remove commented-out code
  iommu/vt-d: Remove two WARN_ON in domain_context_mapping_one()
  iommu/vt-d: Handle the failure case of dmar_reenable_qi()
  iommu/vt-d: Remove unnecessary (void*) conversions
  iommu/amd: Remove extern from function prototypes
  iommu/amd: Use BIT/BIT_ULL macro to define bit fields
  iommu/amd: Fix DTE_IRQ_PHYS_ADDR_MASK macro
  iommu/amd: Fix compile error for unused function
  iommu/amd: Improving Interrupt Remapping Table Invalidation
  iommu/amd: Do not Invalidate IRT when IRTE caching is disabled
  iommu/amd: Introduce Disable IRTE Caching Support
  iommu/amd: Remove the unused struct amd_ir_data.ref
  iommu/amd: Switch amd_iommu_update_ga() to use modify_irte_ga()
  iommu/arm-smmu-v3: Set TTL invalidation hint better
  iommu/arm-smmu-v3: Document nesting-related errata
  iommu/arm-smmu-v3: Add explicit feature for nesting
  iommu/arm-smmu-v3: Document MMU-700 erratum 2812531
  iommu/arm-smmu-v3: Work around MMU-600 erratum 1076982
  dt-bindings: arm-smmu: Add SDX75 SMMU compatible
  dt-bindings: arm-smmu: Add SM6375 GPU SMMU
  ...
2023-06-29  Merge branch 'expand-stack' — Linus Torvalds (2 files, -3/+3)

This modifies our user mode stack expansion code to always take the
mmap_lock for writing before modifying the VM layout.

It's actually something we always technically should have done, but
because we didn't strictly need it, we were being lazy ("opportunistic"
sounds so much better, doesn't it?) about things, and had this hack in
place where we would extend the stack vma in-place without doing the
proper locking. And it worked fine.

We just needed to change vm_start (or, in the case of grow-up stacks,
vm_end) and together with some special ad-hoc locking using the
anon_vma lock and the mm->page_table_lock, it all was fairly
straightforward.

That is, it was all fine until Ruihan Li pointed out that now that the
vma layout uses the maple tree code, we *really* don't just change
vm_start and vm_end any more, and the locking really is broken. Oops.

It's not actually all _that_ horrible to fix this once and for all, and
do proper locking, but it's a bit painful. We have basically three
different cases of stack expansion, and they all work just a bit
differently:

 - the common and obvious case is the page fault handling. It's
   actually fairly simple and straightforward, except for the fact that
   we have something like 24 different versions of it, and you end up
   in a maze of twisty little passages, all alike.

 - the simplest case is the execve() code that creates a new stack.
   There are no real locking concerns because it's all in a private new
   VM that hasn't been exposed to anybody, but lockdep still can end up
   unhappy if you get it wrong.

 - and finally, we have GUP and page pinning, which shouldn't really be
   expanding the stack in the first place, but in addition to execve()
   we also use it for ptrace(). And debuggers do want to possibly
   access memory under the stack pointer and thus need to be able to
   expand the stack as a special case.

None of these cases are exactly complicated, but the page fault case in
particular is just repeated slightly differently many many times. And
ia64 in particular has a fairly complicated situation where you can
have both a regular grow-down stack _and_ a special grow-up stack for
the register backing store.

So to make this slightly more manageable, the bulk of this series is to
first create a helper function for the most common page fault case, and
convert all the straightforward architectures to it.

Thus the new 'lock_mm_and_find_vma()' helper function, which ends up
being used by x86, arm, powerpc, mips, riscv, alpha, arc, csky,
hexagon, loongarch, nios2, sh, sparc32, and xtensa. So we not only
convert more than half the architectures, we now have more shared code
and avoid some of those twisty little passages.

And largely due to this common helper function, the full diffstat of
this series ends up deleting more lines than it adds.

That still leaves eight architectures (ia64, m68k, microblaze,
openrisc, parisc, s390, sparc64 and um) that end up doing
'expand_stack()' manually because they are doing something slightly
different from the normal pattern. Along with the couple of special
cases in execve() and GUP.

So there's a couple of patches that first create 'locked' helper
versions of the stack expansion functions, so that there's an obvious
path forward in the conversion.

The execve() case is then actually pretty simple, and is a nice cleanup
from our old "grow-up stacks are special, because at execve time even
they grow down". The #ifdef CONFIG_STACK_GROWSUP in that code just goes
away, because it's just more straightforward to write out the stack
expansion there manually, instead of having get_user_pages_remote() do
it for us in some situations but not others and have to worry about
locking rules for GUP.

And the final step is then to just convert the remaining odd cases to a
new world order where 'expand_stack()' is called with the mmap_lock
held for reading, but where it might drop it and upgrade it to a write,
only to return with it held for reading (in the success case) or with
it completely dropped (in the failure case).

In the process, we remove all the stack expansion from GUP (where
dropping the lock wouldn't be ok without special rules anyway), and add
it in manually to __access_remote_vm() for ptrace().

Thanks to Adrian Glaubitz and Frank Scheiner who tested the ia64 cases.
Everything else here felt pretty straightforward, but the ia64 rules
for stack expansion are really quite odd and very different from
everything else. Also thanks to Vegard Nossum who caught me getting one
of those odd conditions entirely the wrong way around.

Anyway, I think I want to actually move all the stack expansion code to
a whole new file of its own, rather than have it split up between
mm/mmap.c and mm/memory.c, but since this will have to be backported to
the initial maple tree vma introduction anyway, I tried to keep the
patches _fairly_ minimal.

Also, while I don't think it's valid to expand the stack from GUP, the
final patch in here is a "warn if some crazy GUP user wants to try to
expand the stack" patch. That one will be reverted before the final
release, but it's left to catch any odd cases during the merge window
and release candidates.

Reported-by: Ruihan Li <lrh2000@pku.edu.cn>

* branch 'expand-stack':
  gup: add warning if some caller would seem to want stack expansion
  mm: always expand the stack with the mmap write lock held
  execve: expand new process stack manually ahead of time
  mm: make find_extend_vma() fail if write lock not held
  powerpc/mm: convert coprocessor fault to lock_mm_and_find_vma()
  mm/fault: convert remaining simple cases to lock_mm_and_find_vma()
  arm/mm: Convert to using lock_mm_and_find_vma()
  riscv/mm: Convert to using lock_mm_and_find_vma()
  mips/mm: Convert to using lock_mm_and_find_vma()
  powerpc/mm: Convert to using lock_mm_and_find_vma()
  arm64/mm: Convert to using lock_mm_and_find_vma()
  mm: make the page fault mmap locking killable
  mm: introduce new 'lock_mm_and_find_vma()' page fault helper
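To make the converted pattern concrete, here is a minimal sketch of how an architecture fault handler uses the new helper. The surrounding handler is hypothetical (retry and error reporting are omitted); only lock_mm_and_find_vma() itself is from the series above.

```c
/*
 * Hedged sketch of the common page-fault pattern after this series:
 * the helper takes the mmap_lock, finds the vma covering 'addr', and
 * expands the stack under the write lock if needed.
 */
static void example_do_page_fault(struct pt_regs *regs, unsigned long addr)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;

	vma = lock_mm_and_find_vma(mm, addr, regs);
	if (!vma)
		return;	/* bad address; the helper already released the lock */

	/* fault handling proper; flags/retry handling omitted for brevity */
	handle_mm_fault(vma, addr, FAULT_FLAG_DEFAULT, regs);
	mmap_read_unlock(mm);
}
```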
2023-06-28  Merge tag 'mm-stable-2023-06-24-19-15' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm — Linus Torvalds (4 files, -16/+49)

Pull mm updates from Andrew Morton:

 - Yosry Ahmed brought back some cgroup v1 stats in OOM logs

 - Yosry has also eliminated cgroup's atomic rstat flushing

 - Nhat Pham adds the new cachestat() syscall. It provides userspace
   with the ability to query pagecache status - a similar concept to
   mincore() but more powerful and with improved usability

 - Mel Gorman provides more optimizations for compaction, reducing the
   prevalence of page rescanning

 - Lorenzo Stoakes has done some maintenance work on the
   get_user_pages() interface

 - Liam Howlett continues with cleanups and maintenance work to the
   maple tree code. Peng Zhang also does some work on maple tree

 - Johannes Weiner has done some cleanup work on the compaction code

 - David Hildenbrand has contributed additional selftests for
   get_user_pages()

 - Thomas Gleixner has contributed some maintenance and optimization
   work for the vmalloc code

 - Baolin Wang has provided some compaction cleanups

 - SeongJae Park continues maintenance work on the DAMON code

 - Huang Ying has done some maintenance on the swap code's usage of
   device refcounting

 - Christoph Hellwig has some cleanups for the filemap/directio code

 - Ryan Roberts provides two patch series which yield some
   rationalization of the kernel's access to pte entries - use the
   provided APIs rather than open-coding accesses

 - Lorenzo Stoakes has some fixes to the interaction between pagecache
   and directio access to file mappings

 - John Hubbard has a series of fixes to the MM selftesting code

 - ZhangPeng continues the folio conversion campaign

 - Hugh Dickins has been working on the pagetable handling code, mainly
   with a view to reducing the load on the mmap_lock

 - Catalin Marinas has reduced the arm64 kmalloc() minimum alignment
   from 128 to 8

 - Domenico Cerasuolo has improved the zswap reclaim mechanism by
   reorganizing the LRU management

 - Matthew Wilcox provides some fixups to make gfs2 work better with
   the buffer_head code

 - Vishal Moola also has done some folio conversion work

 - Matthew Wilcox has removed the remnants of the pagevec code - their
   functionality is migrated over to struct folio_batch

* tag 'mm-stable-2023-06-24-19-15' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (380 commits)
  mm/hugetlb: remove hugetlb_set_page_subpool()
  mm: nommu: correct the range of mmap_sem_read_lock in task_mem()
  hugetlb: revert use of page_cache_next_miss()
  Revert "page cache: fix page_cache_next/prev_miss off by one"
  mm/vmscan: fix root proactive reclaim unthrottling unbalanced node
  mm: memcg: rename and document global_reclaim()
  mm: kill [add|del]_page_to_lru_list()
  mm: compaction: convert to use a folio in isolate_migratepages_block()
  mm: zswap: fix double invalidate with exclusive loads
  mm: remove unnecessary pagevec includes
  mm: remove references to pagevec
  mm: rename invalidate_mapping_pagevec to mapping_try_invalidate
  mm: remove struct pagevec
  net: convert sunrpc from pagevec to folio_batch
  i915: convert i915_gpu_error to use a folio_batch
  pagevec: rename fbatch_count()
  mm: remove check_move_unevictable_pages()
  drm: convert drm_gem_put_pages() to use a folio_batch
  i915: convert shmem_sg_free_table() to use a folio_batch
  scatterlist: add sg_set_folio()
  ...
2023-06-28  Merge tag 'locking-core-2023-06-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip — Linus Torvalds (3 files, -14/+13)

Pull locking updates from Ingo Molnar:

 - Introduce cmpxchg128() -- aka. the demise of cmpxchg_double()

   The cmpxchg128() family of functions is basically & functionally the
   same as cmpxchg_double(), but with a saner interface.

   Instead of a 6-parameter horror that forced u128 - u64/u64-halves
   layout details on the interface and exposed users to complexity,
   fragility & bugs, use a natural 3-parameter interface with u128
   types.

 - Restructure the generated atomic headers, and add kerneldoc comments
   for all of the generic atomic{,64,_long}_t operations. The generated
   definitions are much cleaner now, and come with documentation.

 - Implement lock_set_cmp_fn() on lockdep, for defining an ordering
   when taking multiple locks of the same type. This gets rid of one
   use of lockdep_set_novalidate_class() in the bcache code.

 - Fix raw_cpu_generic_try_cmpxchg() bug due to an unintended variable
   shadowing generating garbage code on Clang on certain ARM builds.

* tag 'locking-core-2023-06-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (43 commits)
  locking/atomic: scripts: fix ${atomic}_dec_if_positive() kerneldoc
  percpu: Fix self-assignment of __old in raw_cpu_generic_try_cmpxchg()
  locking/atomic: treewide: delete arch_atomic_*() kerneldoc
  locking/atomic: docs: Add atomic operations to the driver basic API documentation
  locking/atomic: scripts: generate kerneldoc comments
  docs: scripts: kernel-doc: accept bitwise negation like ~@var
  locking/atomic: scripts: simplify raw_atomic*() definitions
  locking/atomic: scripts: simplify raw_atomic_long*() definitions
  locking/atomic: scripts: split pfx/name/sfx/order
  locking/atomic: scripts: restructure fallback ifdeffery
  locking/atomic: scripts: build raw_atomic_long*() directly
  locking/atomic: treewide: use raw_atomic*_<op>()
  locking/atomic: scripts: add trivial raw_atomic*_<op>()
  locking/atomic: scripts: factor out order template generation
  locking/atomic: scripts: remove leftover "${mult}"
  locking/atomic: scripts: remove bogus order parameter
  locking/atomic: xtensa: add preprocessor symbols
  locking/atomic: x86: add preprocessor symbols
  locking/atomic: sparc: add preprocessor symbols
  locking/atomic: sh: add preprocessor symbols
  ...
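To make the interface change concrete, here is a hedged sketch of the before/after shape. The 128-bit entry shown is an illustrative stand-in, not a real kernel structure; only try_cmpxchg128() and the 6-vs-3 parameter contrast come from the text above.

```c
/* Illustrative 16-byte-aligned entry updated atomically as one u128. */
union example_entry {
	u128 full;
	struct { u64 lo; u64 hi; };
} __aligned(16);

static bool example_update(union example_entry *e, u128 old, u128 new)
{
	/*
	 * Old style (6 parameters, u64 halves):
	 *   cmpxchg_double(&e->lo, &e->hi, old_lo, old_hi, new_lo, new_hi)
	 * New style: one pointer, one expected value, one new value.
	 */
	return try_cmpxchg128(&e->full, &old, new);
}
```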
2023-06-27  mm: always expand the stack with the mmap write lock held — Linus Torvalds (2 files, -3/+3)

This finishes the job of always holding the mmap write lock when
extending the user stack vma, and removes the 'write_locked' argument
from the vm helper functions again.

For some cases, we just avoid expanding the stack at all: drivers and
page pinning really shouldn't be extending any stacks. Let's see if any
strange users really wanted that.

It's worth noting that architectures that weren't converted to the new
lock_mm_and_find_vma() helper function are left using the legacy
"expand_stack()" function, but it has been changed to drop the
mmap_lock and take it for writing while expanding the vma. This makes
it fairly straightforward to convert the remaining architectures.

As a result of dropping and re-taking the lock, the calling conventions
for this function have also changed, since the old vma may no longer be
valid. So it will now return the new vma if successful, and NULL - and
the lock dropped - if the area could not be extended.

Tested-by: Vegard Nossum <vegard.nossum@oracle.com>
Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> # ia64
Tested-by: Frank Scheiner <frank.scheiner@web.de> # ia64
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
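A hedged sketch of the new calling convention described above; the caller shown is hypothetical:

```c
/*
 * Enter with mmap_lock held for reading. On failure, expand_stack()
 * has already dropped the lock and returns NULL; on success it returns
 * a valid vma with the lock once again held for reading.
 */
static int example_touch_stack(struct mm_struct *mm, unsigned long addr)
{
	struct vm_area_struct *vma;

	mmap_read_lock(mm);
	vma = expand_stack(mm, addr);	/* may drop/upgrade the lock internally */
	if (!vma)
		return -EFAULT;		/* mmap_lock already dropped */

	/* ... use vma under the read lock ... */
	mmap_read_unlock(mm);
	return 0;
}
```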
2023-06-26  iommufd: Call iopt_area_contig_done() under the lock — Jason Gunthorpe (1 file, -1/+1)

The iter internally holds a pointer to the area and
iopt_area_contig_done() will dereference it. The pointer is not valid
outside the iova_rwsem.

syzkaller reports:

  BUG: KASAN: slab-use-after-free in iommufd_access_unpin_pages+0x363/0x370
  Read of size 8 at addr ffff888022286e20 by task syz-executor669/5771
  CPU: 0 PID: 5771 Comm: syz-executor669 Not tainted 6.4.0-rc5-syzkaller-00313-g4c605260bc60 #0
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
  Call Trace:
   <TASK>
   dump_stack_lvl+0xd9/0x150
   print_address_description.constprop.0+0x2c/0x3c0
   kasan_report+0x11c/0x130
   iommufd_access_unpin_pages+0x363/0x370
   iommufd_test_access_unmap+0x24b/0x390
   iommufd_access_notify_unmap+0x24c/0x3a0
   iopt_unmap_iova_range+0x4c4/0x5f0
   iopt_unmap_all+0x27/0x50
   iommufd_ioas_unmap+0x3d0/0x490
   iommufd_fops_ioctl+0x317/0x4b0
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x39/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
  RIP: 0033:0x7fec1dae3b19
  Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 11 15 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
  RSP: 002b:00007fec1da74308 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
  RAX: ffffffffffffffda RBX: 00007fec1db6b438 RCX: 00007fec1dae3b19
  RDX: 0000000020000100 RSI: 0000000000003b86 RDI: 0000000000000003
  RBP: 00007fec1db6b430 R08: 00007fec1da74700 R09: 0000000000000000
  R10: 00007fec1da74700 R11: 0000000000000246 R12: 00007fec1db6b43c
  R13: 00007fec1db39074 R14: 6d6f692f7665642f R15: 0000000000022000
   </TASK>

  Allocated by task 5770:
   kasan_save_stack+0x22/0x40
   kasan_set_track+0x25/0x30
   __kasan_kmalloc+0xa2/0xb0
   iopt_alloc_area_pages+0x94/0x560
   iopt_map_user_pages+0x205/0x4e0
   iommufd_ioas_map+0x329/0x5f0
   iommufd_fops_ioctl+0x317/0x4b0
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x39/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd

  Freed by task 5770:
   kasan_save_stack+0x22/0x40
   kasan_set_track+0x25/0x30
   kasan_save_free_info+0x2e/0x40
   ____kasan_slab_free+0x160/0x1c0
   slab_free_freelist_hook+0x8b/0x1c0
   __kmem_cache_free+0xaf/0x2d0
   iopt_unmap_iova_range+0x288/0x5f0
   iopt_unmap_all+0x27/0x50
   iommufd_ioas_unmap+0x3d0/0x490
   iommufd_fops_ioctl+0x317/0x4b0
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x39/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd

The parallel unmap free'd iter->area the instant the lock was released.

Fixes: 51fe6141f0f6 ("iommufd: Data structure to provide IOVA to PFN mapping")
Link: https://lore.kernel.org/r/2-v2-9a03761d445d+54-iommufd_syz2_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reported-by: syzbot+6c8d756f238a75fc3eb8@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/r/000000000000905eba05fe38e9f2@google.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
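The shape of this one-line fix, sketched with hedged surrounding code (only iopt_area_contig_done() and the iova_rwsem are taken from the commit; the rest is illustrative):

```c
/* Before (buggy shape): iter.area is dereferenced after the lock is gone */
up_read(&iopt->iova_rwsem);
WARN_ON(!iopt_area_contig_done(&iter));	/* use-after-free window */

/* After: do the check while iova_rwsem still protects the area */
WARN_ON(!iopt_area_contig_done(&iter));
up_read(&iopt->iova_rwsem);
```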
2023-06-26  iommufd: Do not access the area pointer after unlocking — Jason Gunthorpe (1 file, -3/+11)

A concurrent unmap can trigger freeing of the area pointers while we
are generating an unmapping notification for accesses.

syzkaller reports:

  BUG: KASAN: slab-use-after-free in iopt_unmap_iova_range+0x5ba/0x5f0
  Read of size 4 at addr ffff888075996184 by task syz-executor.2/31160
  CPU: 1 PID: 31160 Comm: syz-executor.2 Not tainted 6.4.0-rc5-syzkaller-00313-g4c605260bc60 #0
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
  Call Trace:
   <TASK>
   dump_stack_lvl+0xd9/0x150
   print_address_description.constprop.0+0x2c/0x3c0
   kasan_report+0x11c/0x130
   iopt_unmap_iova_range+0x5ba/0x5f0
   iopt_unmap_all+0x27/0x50
   iommufd_ioas_unmap+0x3d0/0x490
   iommufd_fops_ioctl+0x317/0x4b0
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x39/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
  RIP: 0033:0x7f0812c8c169
  Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
  RSP: 002b:00007f0813914168 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
  RAX: ffffffffffffffda RBX: 00007f0812dabf80 RCX: 00007f0812c8c169
  RDX: 0000000020000100 RSI: 0000000000003b86 RDI: 0000000000000005
  RBP: 00007f0812ce7ca1 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
  R13: 00007f0812ecfb1f R14: 00007f0813914300 R15: 0000000000022000
   </TASK>

  Allocated by task 31160:
   kasan_save_stack+0x22/0x40
   kasan_set_track+0x25/0x30
   __kasan_kmalloc+0xa2/0xb0
   iopt_alloc_area_pages+0x94/0x560
   iopt_map_user_pages+0x205/0x4e0
   iommufd_ioas_map+0x329/0x5f0
   iommufd_fops_ioctl+0x317/0x4b0
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x39/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd

  Freed by task 31161:
   kasan_save_stack+0x22/0x40
   kasan_set_track+0x25/0x30
   kasan_save_free_info+0x2e/0x40
   ____kasan_slab_free+0x160/0x1c0
   slab_free_freelist_hook+0x8b/0x1c0
   __kmem_cache_free+0xaf/0x2d0
   iopt_unmap_iova_range+0x288/0x5f0
   iopt_unmap_all+0x27/0x50
   iommufd_ioas_unmap+0x3d0/0x490
   iommufd_fops_ioctl+0x317/0x4b0
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x39/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd

  The buggy address belongs to the object at ffff888075996100
   which belongs to the cache kmalloc-cg-192 of size 192
  The buggy address is located 132 bytes inside of
   freed 192-byte region [ffff888075996100, ffff8880759961c0)

  The buggy address belongs to the physical page:
  page:ffffea0001d66580 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x75996
  memcg:ffff88801f1c2701
  flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
  page_type: 0xffffffff()
  raw: 00fff00000000200 ffff88801244ddc0 dead000000000122 0000000000000000
  raw: 0000000000000000 0000000080100010 00000001ffffffff ffff88801f1c2701
  page dumped because: kasan: bad access detected
  page_owner tracks the page as allocated
  page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112cc0(GFP_USER|__GFP_NOWARN|__GFP_NORETRY), pid 31157, tgid 31154 (syz-executor.0), ts 1984547323469, free_ts 1983933451331
   post_alloc_hook+0x2db/0x350
   get_page_from_freelist+0xf41/0x2c00
   __alloc_pages+0x1cb/0x4a0
   alloc_pages+0x1aa/0x270
   allocate_slab+0x25f/0x390
   ___slab_alloc+0xa91/0x1400
   __slab_alloc.constprop.0+0x56/0xa0
   __kmem_cache_alloc_node+0x136/0x320
   kmalloc_trace+0x26/0xe0
   iommufd_test+0x1328/0x2c20
   iommufd_fops_ioctl+0x317/0x4b0
   __x64_sys_ioctl+0x197/0x210
   do_syscall_64+0x39/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd
  page last free stack trace:
   free_unref_page_prepare+0x62e/0xcb0
   free_unref_page_list+0xe3/0xa70
   release_pages+0xcd8/0x1380
   tlb_batch_pages_flush+0xa8/0x1a0
   tlb_finish_mmu+0x14b/0x7e0
   exit_mmap+0x2b2/0x930
   __mmput+0x128/0x4c0
   mmput+0x60/0x70
   do_exit+0x9b0/0x29b0
   do_group_exit+0xd4/0x2a0
   get_signal+0x2318/0x25b0
   arch_do_signal_or_restart+0x79/0x5c0
   exit_to_user_mode_prepare+0x11f/0x240
   syscall_exit_to_user_mode+0x1d/0x50
   do_syscall_64+0x46/0xb0
   entry_SYSCALL_64_after_hwframe+0x63/0xcd

Precompute what is needed to call the access function and do not check
the area's num_accesses again as the pointer may not be valid anymore.
Use a counter instead.

Fixes: 51fe6141f0f6 ("iommufd: Data structure to provide IOVA to PFN mapping")
Link: https://lore.kernel.org/r/1-v2-9a03761d445d+54-iommufd_syz2_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reported-by: syzbot+1ad12d16afca0e7d2dde@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/r/0000000000001d40fc05fe385332@google.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
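A hedged sketch of the precompute-then-unlock approach; apart from iopt_area_iova(), iopt_area_last_iova() and area->num_accesses, the names here are assumptions for illustration:

```c
/*
 * Snapshot everything needed for the unmap notification while the
 * area is still protected, then drop the lock and use only the copies.
 */
unsigned long start, last;
unsigned int num_accesses;

down_read(&iopt->iova_rwsem);
start = iopt_area_iova(area);		/* copy while 'area' is alive */
last = iopt_area_last_iova(area);
num_accesses = area->num_accesses;	/* tracked via a counter, not re-read */
up_read(&iopt->iova_rwsem);

example_notify_access_unmap(start, last, num_accesses);	/* hypothetical */
```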
2023-06-20  iommu/dma: force bouncing if the size is not cacheline-aligned — Catalin Marinas (2 files, -9/+42)

Similarly to the direct DMA, bounce small allocations as they may have
originated from a kmalloc() cache not safe for DMA. Unlike the direct
DMA, iommu_dma_map_sg() cannot call iommu_dma_map_sg_swiotlb() for all
non-coherent devices as this would break some cases where the iova is
expected to be contiguous (dmabuf). Instead, scan the scatterlist for
any small sizes and only go the swiotlb path if any element of the list
needs bouncing (note that iommu_dma_map_page() would still only bounce
those buffers which are not DMA-aligned).

To avoid scanning the scatterlist on the 'sync' operations, introduce
an SG_DMA_SWIOTLB flag set by iommu_dma_map_sg_swiotlb(). The
dev_use_swiotlb() function together with the newly added
dev_use_sg_swiotlb() now check for both untrusted devices and unaligned
kmalloc() buffers (suggested by Robin Murphy).

Link: https://lkml.kernel.org/r/20230612153201.554742-16-catalin.marinas@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Isaac J. Manjarres <isaacmanjarres@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Alasdair Kergon <agk@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jerry Snitselaar <jsnitsel@redhat.com>
Cc: Jonathan Cameron <jic23@kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Saravana Kannan <saravanak@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
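A hedged sketch of the scatterlist scan described above. The real counterpart is the dev_use_sg_swiotlb() helper named in the commit; the function below uses an illustrative name and a simplified alignment test, both assumptions:

```c
/* One small/misaligned element sends the whole list down the swiotlb
 * path so the iova stays contiguous (simplified illustration). */
static bool example_sg_needs_swiotlb(struct device *dev,
				     struct scatterlist *sgl, int nents)
{
	unsigned int align = dma_get_cache_alignment();
	struct scatterlist *s;
	int i;

	if (dev_is_dma_coherent(dev))
		return false;	/* coherent devices never need bouncing here */

	for_each_sg(sgl, s, nents, i)
		if (!IS_ALIGNED(s->offset | s->length, align))
			return true;
	return false;
}
```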
2023-06-20  dma-mapping: name SG DMA flag helpers consistently — Robin Murphy (2 files, -5/+5)

sg_is_dma_bus_address() is inconsistent with the naming pattern of its
corresponding setters and its own kerneldoc, so take the majority vote
and rename it sg_dma_is_bus_address() (and fix up the missing
underscores in the kerneldoc too). This gives us a nice clear pattern
where SG DMA flags are SG_DMA_<NAME>, and the helpers for acting on
them are sg_dma_<action>_<name>().

Link: https://lkml.kernel.org/r/20230612153201.554742-14-catalin.marinas@arm.com
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/fa2eca2862c7ffc41b50337abffb2dfd2864d3ea.1685036694.git.robin.murphy@arm.com
Tested-by: Isaac J. Manjarres <isaacmanjarres@google.com>
Cc: Alasdair Kergon <agk@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jonathan Cameron <jic23@kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Saravana Kannan <saravanak@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
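A quick illustration of the resulting pattern (the three helpers are the ones named by the rename):

```c
/* SG DMA flags are SG_DMA_<NAME>; helpers are sg_dma_<action>_<name>() */
sg_dma_mark_bus_address(sg);		/* set */
if (sg_dma_is_bus_address(sg))		/* test (the renamed helper) */
	sg_dma_unmark_bus_address(sg);	/* clear */
```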
2023-06-19  Merge branches 'iommu/fixes', 'arm/smmu', 'ppc/pamu', 'virtio', 'x86/vt-d', 'core' and 'x86/amd' into next — Joerg Roedel (15 files, -609/+697)
2023-06-16  iommu/vt-d: Remove commented-out code — Lu Baolu (1 file, -9/+1)

These lines of code were commented out when they were first added in
commit ba39592764ed ("Intel IOMMU: Intel IOMMU driver"). We do not want
to restore them because the VT-d spec has deprecated the read/write
draining hint.

VT-d spec (section 11.4.2):

 "Hardware implementation with Major Version 2 or higher (VER_REG),
  always performs required drain without software explicitly requesting
  a drain in IOTLB invalidation. This field is deprecated and hardware
  will always report it as 1 to maintain backward compatibility with
  software."

Remove the commented-out code to keep things clean.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230609060514.15154-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-16  iommu/vt-d: Remove two WARN_ON in domain_context_mapping_one() — Yanfei Xu (1 file, -4/+0)

Remove the WARN_ON(did == 0): domain id 0 is reserved and set once the
domain_ids bitmap is allocated, so iommu_init_domains() will never
return 0.

Remove the WARN_ON(!table): this pointer is dereferenced by the
following code anyway, so if an empty "table" really happens the kernel
will report a NULL pointer dereference in the first place.

Signed-off-by: Yanfei Xu <yanfei.xu@intel.com>
Link: https://lore.kernel.org/r/20230605112659.308981-3-yanfei.xu@intel.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-16  iommu/vt-d: Handle the failure case of dmar_reenable_qi() — Yanfei Xu (1 file, -3/+8)

dmar_reenable_qi() may not succeed. Check and return when it fails.

Signed-off-by: Yanfei Xu <yanfei.xu@intel.com>
Link: https://lore.kernel.org/r/20230605112659.308981-2-yanfei.xu@intel.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-16  iommu/vt-d: Remove unnecessary (void*) conversions — Suhui (1 file, -1/+1)

No need to cast (void *) to (struct root_entry *).

Signed-off-by: Suhui <suhui@nfschina.com>
Link: https://lore.kernel.org/r/20230425033743.75986-1-suhui@nfschina.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-16  iommu/amd: Fix possible memory leak of 'domain' — Su Hui (1 file, -4/+4)

Move the allocation code down so the allocation cannot be leaked on the
early-return paths.

Fixes: 29f54745f245 ("iommu/amd: Add missing domain type checks")
Signed-off-by: Su Hui <suhui@nfschina.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
Link: https://lore.kernel.org/r/20230608021933.856045-1-suhui@nfschina.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
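The general shape of this kind of fix, as a hedged sketch with hypothetical names (the actual function and checks differ):

```c
/* Buggy shape: kzalloc() first, then leak the result on every early
 * return. Fixed shape: validate first, allocate last. */
static struct example_domain *example_domain_alloc(unsigned int type)
{
	if (!example_type_supported(type))	/* hypothetical check */
		return NULL;		/* nothing allocated, nothing leaked */

	return kzalloc(sizeof(struct example_domain), GFP_KERNEL);
}
```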
2023-06-16  iommu/amd: Remove extern from function prototypes — Vasant Hegde (2 files, -48/+46)

The kernel coding style does not require 'extern' in function
prototypes. Hence remove them from the header file. No functional
change intended.

Suggested-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230609090631.6052-2-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-16  iommu/amd: Use BIT/BIT_ULL macro to define bit fields — Vasant Hegde (1 file, -38/+38)

Make use of the BIT/BIT_ULL macros when defining bit fields, which
makes them easier to read. No functional change intended.

Suggested-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Link: https://lore.kernel.org/r/20230609090631.6052-1-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
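The conversion pattern, with an illustrative field name and bit position (not taken from the actual patch):

```c
/* before */
#define EXAMPLE_CTRL_FEATURE_EN_OLD	(1ULL << 17)
/* after: same value, clearer intent */
#define EXAMPLE_CTRL_FEATURE_EN		BIT_ULL(17)
```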
2023-06-16  iommu/amd: Fix DTE_IRQ_PHYS_ADDR_MASK macro — Vasant Hegde (1 file, -1/+1)

The Interrupt Table Root Pointer is 52 bits, and the table must be
aligned to start on a 128-byte boundary, hence the first 6 bits are
ignored. The current code uses a 45-bit address mask instead of 46
bits. Use the GENMASK_ULL macro instead of manually generating the
address mask.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230609090327.5923-1-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
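That is, something of this shape (reconstructed from the description above, so treat it as illustrative):

```c
/* Bits 51:6 of the DTE hold the 52-bit, 128-byte-aligned table root
 * pointer; GENMASK_ULL() builds the mask without hand-counting bits. */
#define DTE_IRQ_PHYS_ADDR_MASK	GENMASK_ULL(51, 6)
```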
2023-06-10  mm/gup: remove unused vmas parameter from pin_user_pages_remote() — Lorenzo Stoakes (1 file, -2/+2)

No invocation of pin_user_pages_remote() uses the vmas parameter, so
remove it. This forms part of a larger patch set eliminating the use of
the vmas parameters altogether.

Link: https://lkml.kernel.org/r/28f000beb81e45bf538a2aaa77c90f5482b67a32.1684350871.git.lstoakes@gmail.com
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Jarkko Sakkinen <jarkko@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
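The resulting signature should look like this (a sketch of the post-change prototype; the vmas output parameter is the one removed):

```c
long pin_user_pages_remote(struct mm_struct *mm,
			   unsigned long start, unsigned long nr_pages,
			   unsigned int gup_flags, struct page **pages,
			   int *locked);
```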
2023-06-09  iommu/amd: Fix compile error for unused function — Joerg Roedel (1 file, -26/+26)

Recent changes introduced a compile error:

  drivers/iommu/amd/iommu.c:1285:13: error: ‘iommu_flush_irt_and_complete’ defined but not used [-Werror=unused-function]
   1285 | static void iommu_flush_irt_and_complete(struct amd_iommu *iommu, u16 devid)
        |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~

This happens with defconfig-x86_64 because AMD IOMMU is enabled but
CONFIG_IRQ_REMAP is disabled. Move the function under
#ifdef CONFIG_IRQ_REMAP to fix the error.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
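The shape of the fix, sketched (body elided; only the guard placement matters):

```c
/* The helper is only referenced by IRQ-remapping code, so define it
 * under the same guard. */
#ifdef CONFIG_IRQ_REMAP
static void iommu_flush_irt_and_complete(struct amd_iommu *iommu, u16 devid)
{
	/* ... */
}
#endif /* CONFIG_IRQ_REMAP */
```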
2023-06-09  iommu/amd: Improving Interrupt Remapping Table Invalidation — Suravee Suthikulpanit (3 files, -7/+24)

Invalidating the Interrupt Remapping Table (IRT) requires the AMD IOMMU
driver to issue INVALIDATE_INTERRUPT_TABLE and COMPLETION_WAIT
commands. Currently, the driver issues the two commands separately,
which requires calling raw_spin_lock_irqsave() twice. In addition, the
COMPLETION_WAIT could potentially be interleaved with other commands,
delaying the COMPLETION_WAIT command.

Therefore, combine the issuing of the two commands under one spin-lock,
and change struct amd_iommu.cmd_sem_val to use atomic64 to minimize
locking.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20230530141137.14376-6-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
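A hedged sketch of the combined issue path; the builder and queue helper names are assumptions beyond what the commit text states (the lock, the two commands, and the atomic64 cmd_sem_val are from the text):

```c
static void example_flush_irt_and_complete(struct amd_iommu *iommu, u16 devid)
{
	struct iommu_cmd inv, wait;
	unsigned long flags;
	u64 data;

	/* atomic64 avoids taking the lock just to bump the sequence */
	data = atomic64_inc_return(&iommu->cmd_sem_val);

	raw_spin_lock_irqsave(&iommu->lock, flags);
	example_build_inv_irt(&inv, devid);		/* hypothetical */
	example_build_completion_wait(&wait, iommu, data); /* hypothetical */
	/* both queued under one lock acquisition: no interleaving */
	example_queue_command(iommu, &inv);
	example_queue_command(iommu, &wait);
	raw_spin_unlock_irqrestore(&iommu->lock, flags);

	example_wait_on_sem(iommu, data);		/* hypothetical */
}
```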
2023-06-09  iommu/amd: Do not Invalidate IRT when IRTE caching is disabled — Suravee Suthikulpanit (1 file, -6/+15)

With the Interrupt Remapping Table cache disabled, there is no need to
issue invalidate IRT and wait for its completion. Therefore, add logic
to bypass the operation.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Suggested-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20230530141137.14376-5-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-09  iommu/amd: Introduce Disable IRTE Caching Support — Suravee Suthikulpanit (2 files, -0/+40)

An Interrupt Remapping Table (IRT) stores interrupt remapping
configuration for each device. In a normal operation, the AMD IOMMU
caches the table to optimize subsequent data accesses. This requires
the IOMMU driver to invalidate the IRT whenever it updates the table.
The invalidation process includes issuing an
INVALIDATE_INTERRUPT_TABLE command followed by a COMPLETION_WAIT
command.

However, there are cases in which the IRT is updated at a high rate.
For example, for IOMMU AVIC, the IRTE[IsRun] bit is updated on every
vcpu scheduling (i.e. amd_iommu_update_ga()). On systems with a large
number of vCPUs and VFIO PCI pass-through devices, the invalidation
process could potentially become a performance bottleneck.

Introduce a new kernel boot option:

  amd_iommu=irtcachedis

which disables IRTE caching by setting the IRTCachedis bit in each
IOMMU Control register, and bypasses the IRT invalidation process.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Co-developed-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Signed-off-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20230530141137.14376-4-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-09  iommu/amd: Remove the unused struct amd_ir_data.ref — Suravee Suthikulpanit (2 files, -11/+7)

Since amd_iommu_update_ga() has been switched to use the
modify_irte_ga() helper function to update the IRTE, the
struct amd_ir_data.ref field is no longer needed.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Suggested-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20230530141137.14376-3-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-09  iommu/amd: Switch amd_iommu_update_ga() to use modify_irte_ga() — Joao Martins (1 file, -28/+10)

modify_irte_ga() uses cmpxchg_double() to update the IRTE in one shot,
which is necessary when adding IRTE cache disabling support since the
driver no longer needs to flush the IRT for hardware to take effect.

Please note that there is a functional change where the IsRun and
Destination bits of the IRTE are now cached in
struct amd_ir_data.entry.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20230530141137.14376-2-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-09  iommu/arm-smmu-v3: Set TTL invalidation hint better — Robin Murphy (1 file, -2/+7)

When io-pgtable unmaps a whole table, rather than waste time walking it
to find the leaf entries to invalidate exactly, it simply expects
.tlb_flush_walk with nominal last-level granularity to invalidate any
leaf entries at higher intermediate levels as well. This works fine
with page-based invalidation, but with range commands we need to be
careful with the TTL hint - unconditionally setting it based on the
given level 3 granule means that an invalidation for a level 1 table
would strictly not be required to affect level 2 block entries.

It's easy to comply with the expected behaviour by simply not setting
the TTL hint for non-leaf invalidations, so let's do that.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/b409d9a17c52dc0db51faee91d92737bb7975f5b.1685637456.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
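A hedged fragment showing the shape of the change; the helper computing the level hint is hypothetical, and only the leaf/non-leaf distinction comes from the text:

```c
/* Only give a TTL hint for leaf invalidations; otherwise leave it at
 * 0, which means "match entries at any table level". */
cmd.tlbi.ttl = 0;
if (cmd.tlbi.leaf)
	cmd.tlbi.ttl = example_ttl_for_granule(granule);	/* hypothetical */
```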
2023-06-08  iommu/arm-smmu-v3: Document nesting-related errata — Robin Murphy (1 file, -0/+5)

Both MMU-600 and MMU-700 have similar errata around TLB invalidation
while both stages of translation are active, which will need some
consideration once nesting support is implemented. For now, though,
it's very easy to make our implicit lack of nesting support explicit
for those cases, so they're less likely to be missed in future.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/696da78d32bb4491f898f11b0bb4d850a8aa7c6a.1683731256.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2023-06-08  iommu/arm-smmu-v3: Add explicit feature for nesting — Robin Murphy (2 files, -0/+5)

In certain cases we may want to refuse to allow nested translation even
when both stages are implemented, so let's add an explicit feature for
nesting support which we can control in its own right. For now this
merely serves as documentation, but it means a nice convenient check
will be ready and waiting for the future nesting code.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/136c3f4a3a84cc14a5a1978ace57dfd3ed67b688.1683731256.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2023-06-08  iommu/arm-smmu-v3: Document MMU-700 erratum 2812531 — Robin Murphy (2 files, -0/+13)

To work around MMU-700 erratum 2812531 we need to ensure that certain
sequences of commands cannot be issued without an intervening sync. In
practice this falls out of our current command-batching machinery
anyway - each batch only contains a single type of invalidation
command, and ends with a sync. The only exception is when a batch is
sufficiently large to need issuing across multiple command queue slots,
wherein the earlier slots will not contain a sync and thus may in
theory interleave with another batch being issued in parallel to create
an affected sequence across the slot boundary.

Since MMU-700 supports range invalidate commands and thus we will
prefer to use them (which also happens to avoid conditions for other
errata), I'm not entirely sure it's even possible for a single
high-level invalidate call to generate a batch of more than 63
commands, but for the sake of robustness and documentation, wire up an
option to enforce that a sync is always inserted for every slot issued.

The other aspect is that the relative order of DVM commands cannot be
controlled, so DVM cannot be used. Again that is already the status
quo, but since we have at least defined ARM_SMMU_FEAT_BTM, we can
explicitly disable it for documentation purposes even if it's not wired
up anywhere yet.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/330221cdfd0003cd51b6c04e7ff3566741ad8374.1683731256.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2023-06-08  iommu/arm-smmu-v3: Work around MMU-600 erratum 1076982 — Robin Murphy (2 files, -0/+35)

MMU-600 versions prior to r1p0 fail to correctly generate a WFE wakeup
event when the command queue transitions from full to non-full. We can
easily work around this by simply hiding the SEV capability such that
we fall back to polling for space in the queue - since MMU-600
implements MSIs we wouldn't expect to need SEV for sync completion
either, so this should have little to no impact.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/08adbe3d01024d8382a478325f73b56851f76e49.1683731256.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
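A hedged fragment of what "hiding the SEV capability" looks like; the revision check is a hypothetical placeholder, while ARM_SMMU_FEAT_SEV is the driver's existing feature bit:

```c
/* Drop the SEV capability so queue-full waits poll instead of WFE */
if (example_is_mmu600_pre_r1p0(smmu))	/* hypothetical check */
	smmu->features &= ~ARM_SMMU_FEAT_SEV;
```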
2023-06-05  x86,intel_iommu: Replace cmpxchg_double() — Peter Zijlstra (1 file, -6/+2)

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230531132323.855976804@infradead.org
2023-06-05  x86,amd_iommu: Replace cmpxchg_double() — Peter Zijlstra (2 files, -8/+11)

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Vasant Hegde <vasant.hegde@amd.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230531132323.788955257@infradead.org
2023-06-01  iommu/mediatek: Flush IOTLB completely only if domain has been attached — Chen-Yu Tsai (1 file, -1/+2)

If an IOMMU domain was never attached, it lacks any linkage to the
actual IOMMU hardware. Attempting to do flush_iotlb_all() on it will
result in a NULL pointer dereference. This seems to happen after the
recent IOMMU core rework in v6.4-rc1.

  Unable to handle kernel read from unreadable memory at virtual address 0000000000000018
  Call trace:
   mtk_iommu_flush_iotlb_all+0x20/0x80
   iommu_create_device_direct_mappings.part.0+0x13c/0x230
   iommu_setup_default_domain+0x29c/0x4d0
   iommu_probe_device+0x12c/0x190
   of_iommu_configure+0x140/0x208
   of_dma_configure_id+0x19c/0x3c0
   platform_dma_configure+0x38/0x88
   really_probe+0x78/0x2c0

Check if the "bank" field has been filled in before actually attempting
the IOTLB flush to avoid it. The IOTLB is also flushed when the device
comes out of runtime suspend, so it should have a clean initial state.

Fixes: 08500c43d4f7 ("iommu/mediatek: Adjust the structure")
Signed-off-by: Chen-Yu Tsai <wenst@chromium.org>
Reviewed-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Link: https://lore.kernel.org/r/20230526085402.394239-1-wenst@chromium.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
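A sketch of the guarded flush described above; this is close to the actual one-liner, but treat the field and helper names as assumptions:

```c
static void mtk_iommu_flush_iotlb_all(struct iommu_domain *domain)
{
	struct mtk_iommu_domain *dom = to_mtk_domain(domain);

	if (dom->bank)	/* NULL if the domain was never attached */
		mtk_iommu_tlb_flush_all(dom->bank->parent_data);
}
```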
2023-06-01  iommu/fsl: Use driver_managed_dma to allow VFIO to work — Jason Gunthorpe (1 file, -81/+23)

The FSL driver is mangling the iommu_groups to not have a group for its
PCI bridge/controller (eg the thing passed to fsl_add_bridge()).

Robin says this is so FSL could work with VFIO, which would be blocked
by having a probed driver on the platform_device in the same group.
This is supported by comments from FSL:

 https://lore.kernel.org/all/C5ECD7A89D1DC44195F34B25E172658D459471@039-SN2MPN1-013.039d.mgd.msft.net

 ".. PCIe devices share the same device group as the PCI controller.
  This becomes a problem while assigning the devices to the guest, as
  you are required to unbind all the PCIe devices including the
  controller from the host. PCIe controller can't be unbound from the
  host, so we simply delete the controller iommu_group."

However, today, we use driver_managed_dma to allow PCI infrastructure
devices that are 'security safe' to co-exist in groups and still allow
VFIO to work. Set this flag for the fsl_pci_driver.

Change fsl_pamu_device_group() so that it no longer removes the
controller from any groups. For check_pci_ctl_endpt_part() mode this
creates an extra group that contains only the controller. Otherwise
force the controller's single group to be the group of all the PCI
devices on the controller's hose. VFIO continues to work because of
driver_managed_dma.

Remove the iommu_group_remove_device() calls from fsl_pamu and lightly
restructure its fsl_pamu_device_group() function.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3-v2-ce71068deeec+4cf6-fsl_rm_groups_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
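The flag described above, placed on the controller's driver. A hedged sketch: driver_managed_dma is the real field, but the abbreviated driver definition around it is illustrative:

```c
static struct platform_driver fsl_pci_driver = {
	.driver = {
		.name = "fsl-pci",
	},
	.probe = fsl_pci_probe,
	/* tell the IOMMU core this driver is safe to share a group with
	 * devices that may be assigned to VFIO */
	.driver_managed_dma = true,
};
```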
2023-06-01  iommu/fsl: Move ENODEV to fsl_pamu_probe_device() — Jason Gunthorpe (1 file, -7/+12)

The expectation is for the probe op to return ENODEV if the iommu is
not able to support the device. Move the check for fsl,liodn to
fsl_pamu_probe_device() and simplify fsl_pamu_device_group().

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2-v2-ce71068deeec+4cf6-fsl_rm_groups_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-06-01  iommu/fsl: Always allocate a group for non-pci devices — Jason Gunthorpe (1 file, -1/+1)

fsl_pamu_device_group() is only called if dev->iommu_group is NULL, so
iommu_group_get() always returns NULL. Remove this test and just
allocate a group. Call generic_device_group() for this like the other
drivers.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1-v2-ce71068deeec+4cf6-fsl_rm_groups_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu/amd/pgtbl_v2: Fix domain max address — Vasant Hegde (1 file, -1/+10)

The IOMMU v2 page table supports a 4-level (47-bit) or 5-level (56-bit)
virtual address space. The current code assumes it can support a 64-bit
IOVA address space. If the IOVA allocator allocates a virtual address
above 47/56 bits (depending on the page table level), the driver will
create a wrong mapping and cause invalid translations.

Hence adjust the aperture size to use the maximum address supported by
the page table.

Reported-by: Jerry Snitselaar <jsnitsel@redhat.com>
Fixes: aaac38f61487 ("iommu/amd: Initial support for AMD IOMMU v2 page table")
Cc: <Stable@vger.kernel.org> # v6.0+
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230518054351.9626-1-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
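A hedged fragment of the idea: clamp the domain aperture to the page table's reach instead of assuming 64-bit IOVAs. The geometry fields are the IOMMU core's real ones; the level variable is an assumption:

```c
domain->geometry.aperture_start = 0;
domain->geometry.aperture_end = (pgtable_level == 5) ?
		GENMASK_ULL(55, 0) :	/* 5-level: 56-bit VA space */
		GENMASK_ULL(46, 0);	/* 4-level: 47-bit VA space */
```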
2023-05-23  iommu/virtio: Return size mapped for a detached domain — Jean-Philippe Brucker (1 file, -16/+17)

When map() is called on a detached domain, the domain does not exist in
the device so we do not send a MAP request, but we do update the
internal mapping tree, to be replayed on the next attach. Since this
constitutes a successful iommu_map() call, return *mapped in this case
too.

Fixes: 7e62edd7a33a ("iommu/virtio: Add map/unmap_pages() callbacks implementation")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20230515113946.1017624-3-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
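A hedged fragment of the shape of the fix; vdomain->nr_endpoints is the driver's attach count, and the surrounding code is illustrative:

```c
/* Detached domain: the mapping was recorded in the internal tree for
 * replay on the next attach, which is still a successful map. */
if (!vdomain->nr_endpoints) {
	*mapped = size;
	return 0;
}
/* ... otherwise send a VIRTIO_IOMMU_T_MAP request, then set *mapped ... */
```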
2023-05-23  iommu/virtio: Detach domain on endpoint release — Jean-Philippe Brucker (1 file, -0/+24)

When an endpoint is released, for example a PCIe VF being destroyed or
a function hot-unplugged, it should be detached from its domain. Send a
DETACH request.

Fixes: edcd69ab9a32 ("iommu: Add virtio-iommu driver")
Reported-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Link: https://lore.kernel.org/all/15bf1b00-3aa0-973a-3a86-3fa5c4d41d2c@daynix.com/
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Link: https://lore.kernel.org/r/20230515113946.1017624-2-jean-philippe@linaro.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Tidy the control flow in iommu_group_store_type() — Jason Gunthorpe (1 file, -12/+15)

Use a normal "goto unwind" instead of trying to be clever with checking
!ret and manually managing the unlock.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/17-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Remove __iommu_group_for_each_dev() — Jason Gunthorpe (1 file, -28/+25)

The last two users of it are quite trivial, just open code the one line
loop.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/16-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Allow IOMMU_RESV_DIRECT to work on ARM — Jason Gunthorpe (1 file, -2/+31)

For now several ARM drivers do not allow mappings to be created until a
domain is attached. This means they do not technically support
IOMMU_RESV_DIRECT as it requires the 1:1 maps to work continuously.

Currently if the platform requests these maps on ARM systems they are
silently ignored. Work around this by trying again to establish the
direct mappings after the domain is attached if the pre-attach attempt
failed.

In the long run the drivers will be fixed to fully setup domains when
they are created without waiting for attachment.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/15-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Consolidate the default_domain setup to one function — Jason Gunthorpe (1 file, -113/+89)

Make iommu_change_dev_def_domain() general enough to set up the initial
default_domain or replace it with a new default_domain. Call the new
function iommu_setup_default_domain() and make it the only place in the
code that stores to group->default_domain.

Consolidate the three copies of the default_domain setup sequence. The
flow requires:

 - Determining the domain type to use
 - Checking if the current default domain is the same type
 - Allocating a domain
 - Doing iommu_create_device_direct_mappings()
 - Attaching it to devices
 - Storing group->default_domain

This adjusts the domain allocation from the prior patch to be able to
detect if each of the allocation steps is already the domain we already
have, which is a more robust version of what change default domain was
already doing.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/14-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Revise iommu_group_alloc_default_domain() — Jason Gunthorpe (1 file, -34/+37)

Robin points out that the fallback to guessing what domains the driver
supports should only happen if the driver doesn't return a preference
from its ops->def_domain_type().

Re-organize iommu_group_alloc_default_domain() so it internally uses
iommu_def_domain_type only during the fallback and makes it clearer how
the fallback sequence works.

Make iommu_group_alloc_default_domain() return the domain so the return
based logic is cleaner and to prepare for the next patch.

Remove the iommu_alloc_default_domain() function as it is now trivially
just calling iommu_group_alloc_default_domain().

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/13-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Consolidate the code to calculate the target default domain type — Jason Gunthorpe (1 file, -53/+35)

Put all the code to calculate the default domain type into one
function. Make the function able to handle the
iommu_change_dev_def_domain() case by taking in the target domain type
and erroring out if the target type isn't reachable.

This makes it really clear that specifying a 0 type during
iommu_change_dev_def_domain() will have the same outcome as the normal
probe path.

Remove the obfuscating use of __iommu_group_for_each_dev() and related
struct __group_domain_type.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/12-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Remove the assignment of group->domain during default domain alloc — Jason Gunthorpe (1 file, -4/+0)

group->domain should only be set once all the device's drivers have had
their ops->attach_dev() called. iommu_group_alloc_default_domain()
doesn't do this, so it shouldn't set the value.

The previous patches organized things so that each caller of
iommu_group_alloc_default_domain() follows up with calling
__iommu_group_set_domain_internal() that does set the group->domain.

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/11-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Do iommu_group_create_direct_mappings() before attach — Jason Gunthorpe (1 file, -3/+5)

The iommu_probe_device() path calls
iommu_create_device_direct_mappings() after attaching the device.

IOMMU_RESV_DIRECT maps need to be continually in place, so if a
hotplugged device has new ranges they should have been mapped into the
default domain before it is attached.

Move the iommu_create_device_direct_mappings() call up.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/10-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Fix iommu_probe_device() to attach the right domain — Jason Gunthorpe (1 file, -20/+24)

The general invariant is that all devices in an iommu_group are
attached to group->domain. We missed some cases here where an owned
group would not get the device attached.

Rework this logic so it follows the default domain flow of the
bus_iommu_probe() - call iommu_alloc_default_domain(), then use
__iommu_group_set_domain_internal() to set up all the devices.

Finally always attach the device to the current domain if it is already
set.

This is an unlikely functional issue as iommufd uses
iommu_attach_group(). It is possible to hot plug in a new group member,
add a vfio driver to it and then hot add it to an existing iommufd. In
this case it is required that the core code set the iommu_domain
properly since iommufd won't call iommu_attach_group() again.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/9-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-05-23  iommu: Replace iommu_group_do_dma_first_attach with __iommu_device_set_domain — Jason Gunthorpe (1 file, -12/+5)

Since __iommu_device_set_domain() now knows how to handle deferred
attach we can just call it directly from the only call site.

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/8-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>