path: root/drivers/iommu
Age  Commit message  (Author, files changed, lines -/+)
2023-08-11  iommu/arm-smmu-qcom: Sort the compatible list alphabetically  (Konrad Dybcio, 1 file, -2/+2)
It got broken at some point, fix it up. Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org> Link: https://lore.kernel.org/r/20230810-topic-lost_smmu_compats-v1-1-64a0d8749404@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2023-08-09  iommu/vt-d: Fix to convert mm pfn to dma pfn  (Yanfei Xu, 1 file, -9/+13)
When the VT-d page size is smaller than the mm page size, converting an mm pfn to a dma pfn must be handled in two cases: one for the start pfn and one for the end pfn. Currently the calculation of the end dma pfn is incorrect; the result is smaller than the real page frame number, so the iova mapping always misses some page frames. Rename mm_to_dma_pfn() to mm_to_dma_pfn_start() and add a new helper, mm_to_dma_pfn_end(), for converting the end dma pfn. Signed-off-by: Yanfei Xu <yanfei.xu@intel.com> Link: https://lore.kernel.org/r/20230625082046.979742-1-yanfei.xu@intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
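A minimal sketch of the two helpers named above (the shift constants assume VT-d's 4 KiB page size, i.e. VTD_PAGE_SHIFT == 12; not the verbatim upstream code):

    static inline unsigned long mm_to_dma_pfn_start(unsigned long mm_pfn)
    {
            /* first VT-d pfn covered by this mm pfn */
            return mm_pfn << (PAGE_SHIFT - VTD_PAGE_SHIFT);
    }

    static inline unsigned long mm_to_dma_pfn_end(unsigned long mm_pfn)
    {
            /* last VT-d pfn covered by this mm pfn, inclusive */
            return ((mm_pfn + 1) << (PAGE_SHIFT - VTD_PAGE_SHIFT)) - 1;
    }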
2023-08-09  iommu/vt-d: Fix to flush cache of PASID directory table  (Yanfei Xu, 1 file, -1/+1)
Even if a PCI device does not support the PASID capability, a PASID table is mandatory for it in scalable mode. However, the cache of the PASID directory table is not flushed for these devices after the table is allocated, because the computed "size" of the table is zero. Fix it by calculating the size from the page order. Found by code inspection; no real problem has been encountered so far. Fixes: 194b3348bdbb ("iommu/vt-d: Fix PASID directory pointer coherency") Suggested-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Yanfei Xu <yanfei.xu@intel.com> Link: https://lore.kernel.org/r/20230616081045.721873-1-yanfei.xu@intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
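A rough sketch of the idea, assuming the Intel driver's ecap_coherent() and clflush_cache_range() helpers, sizing the flush by the allocation order rather than by a PASID count that can legitimately be zero:

    /* dir was allocated with alloc_pages_node(..., order) */
    if (!ecap_coherent(iommu->ecap))
            clflush_cache_range(dir, (1 << order) * PAGE_SIZE);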
2023-08-09  iommu/vt-d: Remove rmrr check in domain attaching device path  (Lu Baolu, 1 file, -58/+0)
The core code now prevents devices with RMRR regions from being assigned to user space. There is no need to check for this condition in individual drivers. Remove it to avoid duplicate code. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20230724060352.113458-3-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-09  iommu: Prevent RESV_DIRECT devices from blocking domains  (Lu Baolu, 1 file, -10/+27)
The IOMMU_RESV_DIRECT flag indicates that a memory region must be mapped 1:1 at all times. This means the region must always be accessible to the device, even if the device is attached to a blocking domain. In other words, the IOMMU_RESV_DIRECT flag prevents devices from being attached to blocking domains. It also implies that devices with RESV_DIRECT regions cannot be assigned to user space, since taking DMA ownership immediately switches to a blocking domain. The rule of preventing devices with IOMMU_RESV_DIRECT regions from being assigned to user space has existed in the Intel IOMMU driver for a long time. Now this rule is lifted into a general core rule, as other architectures like AMD and ARM also have RMRR-like reserved regions. This has been discussed on the community mailing list; see the link below for details. Other places using unmanaged domains for kernel DMA must honor iommu_get_resv_regions() and set up IOMMU_RESV_DIRECT themselves - the core code does not restrict them. Cc: Robin Murphy <robin.murphy@arm.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/linux-iommu/BN9PR11MB5276E84229B5BD952D78E9598C639@BN9PR11MB5276.namprd11.prod.outlook.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Acked-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20230724060352.113458-2-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-09  iommu/vt-d: Add set_dev_pasid callback for dma domain  (Lu Baolu, 2 files, -5/+106)
This allows the upper layers to set a domain to a PASID of a device if the PASID feature is supported by the IOMMU hardware. The typical use cases are, for example, kernel DMA with PASID and hardware assisted mediated device drivers. The attaching device and pasid information is tracked in a per-domain list and is used for IOTLB and devTLB invalidation. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20230802212427.1497170-8-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-09  iommu/vt-d: Prepare for set_dev_pasid callback  (Lu Baolu, 1 file, -1/+2)
The domain_flush_pasid_iotlb() helper function is used to flush the IOTLB entries for a given PASID. Previously, this function assumed that RID2PASID was only used for the first-level DMA translation. However, with the introduction of the set_dev_pasid callback, this assumption is no longer valid. Add a check before using the RID2PASID for PASID invalidation. This check ensures that the domain has been attached to a physical device before using RID2PASID. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20230802212427.1497170-7-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-09  iommu/vt-d: Make prq draining code generic  (Lu Baolu, 3 files, -26/+23)
Currently, draining page requests and responses for a PASID is part of the SVA implementation, because the driver only supports attaching an SVA domain to a device PASID. As we are about to support attaching other types of domains to a device PASID, make the prq draining code generic. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20230802212427.1497170-6-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-09  iommu/vt-d: Remove pasid_mutex  (Lu Baolu, 1 file, -40/+5)
The pasid_mutex was used to protect the set/remove_dev_pasid() paths. It duplicates iommu_sva_lock, so remove it to avoid duplicate code. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20230802212427.1497170-5-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-09  iommu/vt-d: Add domain_flush_pasid_iotlb()  (Lu Baolu, 1 file, -2/+14)
The VT-d spec requires to use PASID-based-IOTLB invalidation descriptor to invalidate IOTLB and the paging-structure caches for a first-stage page table. Add a generic helper to do this. RID2PASID is used if the domain has been attached to a physical device, otherwise real PASIDs that the domain has been attached to will be used. The 'real' PASID attachment is handled in the subsequent change. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20230802212427.1497170-4-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-09  iommu: Move global PASID allocation from SVA to core  (Jacob Pan, 2 files, -19/+38)
Intel ENQCMD requires a single PASID to be shared between multiple devices, as the PASID is stored in a single MSR register per-process and userspace can use only that one PASID. This means that the PASID allocation for any ENQCMD using device driver must always come from a shared global pool, regardless of what kind of domain the PASID will be used with. Split the code for the global PASID allocator into iommu_alloc/free_global_pasid() so that drivers can attach non-SVA domains to PASIDs as well. This patch moves global PASID allocation APIs from SVA to IOMMU APIs. Reserved PASIDs, currently only RID_PASID, are excluded from the global PASID allocation. It is expected that device drivers will use the allocated PASIDs to attach to appropriate IOMMU domains for use. Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Link: https://lore.kernel.org/r/20230802212427.1497170-3-jacob.jun.pan@linux.intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
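A minimal sketch of what an IDA-backed global PASID allocator looks like; illustrative only, with dev->iommu->max_pasids and IOMMU_PASID_INVALID assumed from the common IOMMU headers rather than copied from the patch:

    #include <linux/idr.h>
    #include <linux/iommu.h>

    static DEFINE_IDA(iommu_global_pasid_ida);

    /* PASID 0 (reserved for DMA w/o PASID) is excluded from the pool. */
    static ioasid_t iommu_alloc_global_pasid_sketch(struct device *dev)
    {
            int ret;

            if (!dev->iommu || !dev->iommu->max_pasids)
                    return IOMMU_PASID_INVALID;

            ret = ida_alloc_range(&iommu_global_pasid_ida, 1,
                                  dev->iommu->max_pasids - 1, GFP_KERNEL);
            return ret < 0 ? IOMMU_PASID_INVALID : ret;
    }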
2023-08-09  iommu: Generalize PASID 0 for normal DMA w/o PASID  (Jacob Pan, 5 files, -24/+22)
The PCIe Process Address Space ID (PASID) is used to tag DMA traffic; it provides finer-grained isolation than the requester ID (RID). For each device/RID, PASID 0 is special and reserved for normal DMA (no PASID). This is universal across all architectures that support PASID, so it is warranted to reserve it globally and declare it in the common header. Consequently, we can avoid conflicts between different PASID use cases in generic code, e.g. SVA and the DMA API with PASIDs. This paves the way for device drivers to choose a global PASID policy while continuing to do normal DMA. Note that VT-d could support a non-zero RID/NO_PASID, but that is currently not used. Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Link: https://lore.kernel.org/r/20230802212427.1497170-2-jacob.jun.pan@linux.intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
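A sketch of the kind of common-header definition this describes (exact placement and comment are assumptions):

    /* Reserved PASID for DMA without PASID, shared by all architectures. */
    #define IOMMU_NO_PASID  (0U)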
2023-08-09  iommu/qcom: Add support for QSMMUv2 and QSMMU-500 secured contexts  (AngeloGioacchino Del Regno, 1 file, -2/+17)
On some SoCs like MSM8956, MSM8976 and others, secure contexts are also secured: these get programmed by the bootloader or TZ (as usual), but their "interesting" registers are locked out by the hypervisor, disallowing direct register writes from Linux and, in many cases, completely disallowing the reprogramming of TTBR, TCR, MAIR and other registers, including, but not limited to, resetting contexts. This is referred to downstream as a "v2" IOMMU, but it is effectively a "v2 firmware configuration" instead. Luckily, the described version 2 behavior affects only secure contexts and not non-secure ones: add support for that, finally getting a completely working IOMMU on at least MSM8956/76. Signed-off-by: Marijn Suijten <marijn.suijten@somainline.org> [Marijn: Rebased over next-20221111] Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Link: https://lore.kernel.org/r/20230622092742.74819-7-angelogioacchino.delregno@collabora.com Signed-off-by: Will Deacon <will@kernel.org>
2023-08-09  iommu/qcom: Index contexts by asid number to allow asid 0  (AngeloGioacchino Del Regno, 1 file, -12/+10)
This driver was indexing the contexts by asid-1, which was probably done under the assumption that the first ASID is always 1. Unfortunately this is not always true: at least for MSM8956 and MSM8976's GPU IOMMU, the gpu_user context's ASID number is zero. To allow using a zero asid number, index the contexts by `asid` instead of by `asid - 1`. While at it, also enhance human readability by renaming the `num_ctxs` member of struct qcom_iommu_dev to `max_asid`. Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Link: https://lore.kernel.org/r/20230622092742.74819-5-angelogioacchino.delregno@collabora.com Signed-off-by: Will Deacon <will@kernel.org>
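A sketch of the indexing change (struct and field names are assumed from the existing driver):

    static struct qcom_iommu_ctx *to_ctx(struct qcom_iommu_domain *d,
                                         unsigned int asid)
    {
            struct qcom_iommu_dev *qcom_iommu = d->iommu;

            if (!qcom_iommu)
                    return NULL;
            return qcom_iommu->ctxs[asid];  /* previously: ctxs[asid - 1] */
    }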
2023-08-09  iommu/qcom: Disable and reset context bank before programming  (AngeloGioacchino Del Regno, 1 file, -0/+7)
Writing the new TTBRs, TCRs and MAIRs on a previously enabled context bank may trigger a context fault, resulting in firmware driven AP resets: change the domain initialization programming sequence to disable the context bank(s) and to also clear the related fault address (CB_FAR) and fault status (CB_FSR) registers before writing new values to TTBR0/1, TCR/TCR2, MAIR0/1. Fixes: 0ae349a0f33f ("iommu/qcom: Add qcom_iommu") Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Link: https://lore.kernel.org/r/20230622092742.74819-4-angelogioacchino.delregno@collabora.com Signed-off-by: Will Deacon <will@kernel.org>
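A sketch of the new programming order, assuming the common ARM SMMU context-bank register names and the driver's iommu_writel() helper:

    /* Disable the context bank before touching its translation registers */
    iommu_writel(ctx, ARM_SMMU_CB_SCTLR, 0);

    /* Clear any stale fault state */
    iommu_writel(ctx, ARM_SMMU_CB_FAR, 0);
    iommu_writel(ctx, ARM_SMMU_CB_FSR, ARM_SMMU_FSR_FAULT);

    /* ...only then program TTBR0/1, TCR/TCR2, MAIR0/1 and re-enable. */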
2023-08-09  iommu/qcom: Use the asid read from device-tree if specified  (AngeloGioacchino Del Regno, 1 file, -3/+15)
As specified in this driver, the context banks are 0x1000 apart but on some SoCs the context number does not necessarily match this logic, hence we end up using the wrong ASID: keeping in mind that this IOMMU implementation relies heavily on SCM (TZ) calls, it is mandatory that we communicate the right context number. Since this is all about how context banks are mapped in firmware, which may be board dependent (as a different firmware version may eventually change the expected context bank numbers), introduce a new property "qcom,ctx-asid": when found, the ASID will be forced as read from the devicetree. When "qcom,ctx-asid" is not found, this driver retains the previous behavior as to avoid breaking older devicetrees or systems that do not require forcing ASID numbers. Signed-off-by: Marijn Suijten <marijn.suijten@somainline.org> [Marijn: Rebased over next-20221111] Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Link: https://lore.kernel.org/r/20230622092742.74819-3-angelogioacchino.delregno@collabora.com Signed-off-by: Will Deacon <will@kernel.org>
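A sketch of the property handling (of_property_read_u32() is the standard DT accessor; the fallback shown is an assumption about the pre-existing derivation from the 0x1000-spaced context banks):

    u32 val;

    if (of_property_read_u32(np, "qcom,ctx-asid", &val) == 0)
            ctx->asid = val;        /* forced from the devicetree */
    else
            ctx->asid = cb_num;     /* previous behaviour: derived from the bank position */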
2023-08-08  iommu/amd: Rearrange DTE bit definitions  (Vasant Hegde, 1 file, -4/+4)
Rearrange the definitions according to the 64-bit word they are in. Note that the gcr3-related macros have not been rearranged, even though they belong to a different 64-bit word, as they are easier to read in the current format. No functional changes intended. Suggested-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Vasant Hegde <vasant.hegde@amd.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Link: https://lore.kernel.org/r/20230619131908.5887-1-vasant.hegde@amd.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-07  iommu: Remove kernel-doc warnings  (Zhu Wang, 1 file, -2/+2)
Remove kernel-doc warnings: drivers/iommu/iommu.c:3261: warning: Function parameter or member 'group' not described in 'iommu_group_release_dma_owner' drivers/iommu/iommu.c:3261: warning: Excess function parameter 'dev' description in 'iommu_group_release_dma_owner' drivers/iommu/iommu.c:3275: warning: Function parameter or member 'dev' not described in 'iommu_device_release_dma_owner' drivers/iommu/iommu.c:3275: warning: Excess function parameter 'group' description in 'iommu_device_release_dma_owner' Signed-off-by: Zhu Wang <wangzhu9@huawei.com> Fixes: 89395ccedbc1 ("iommu: Add device-centric DMA ownership interfaces") Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20230731112758.214775-1-wangzhu9@huawei.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-07  iommu/ipmmu-vmsa: Allow PCIe devices  (Yoshihiro Shimoda, 1 file, -0/+5)
The IPMMU hardware on R-Car Gen3 and RZ/G2 is simple. Each bus-master device, such as the eMMC host and the PCIe controllers, has a micro-TLB of the IPMMU, and once it is enabled, all transactions of the device go through the IPMMU:

  eMMC host ---(micro-TLB of eMMC)--- IPMMU cache --- IPMMU main
  PCIe --------(micro-TLB of PCIe)--- IPMMU cache --- IPMMU main

This IPMMU driver already allows the eMMC host, and using the IPMMU there is safe. We can therefore assume it is also safe to use the IPMMU from PCIe devices, because all PCIe device transactions go through the micro-TLB of PCIe. So add a check in ipmmu_device_is_allowed() for whether the device is a PCIe device; this path is taken when the PCIe host controller has an iommu-map property. This can reduce CPU load because the PCIe controllers can only address the lower 32-bit memory area, so using the IPMMU avoids swiotlb. Note that the IPMMU on R-Car Gen4 differs from the one on R-Car Gen3 and RZ/G2, especially regarding OS-ID, but for now the IPMMU driver only handles OS-ID 0. In other words, all PCIe devices go through the micro-TLB of PCIe. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20230728014659.411751-1-yoshihiro.shimoda.uh@renesas.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
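A sketch of the added check; the surrounding allow-list logic and symbol names are reproduced from memory of the driver and are only illustrative:

    static bool ipmmu_device_is_allowed(struct device *dev)
    {
            unsigned int i;

            /* Only R-Car Gen3/Gen4 and RZ/G2 use the allow-list. */
            if (!soc_device_match(soc_needs_opt_in))
                    return true;

            /* New: any PCIe device is routed through the PCIe micro-TLB. */
            if (dev_is_pci(dev))
                    return true;

            for (i = 0; i < ARRAY_SIZE(devices_allowlist); i++)
                    if (!strcmp(dev_name(dev), devices_allowlist[i]))
                            return true;

            return false;
    }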
2023-08-07  iommu/sprd: Add missing force_aperture  (Jason Gunthorpe, 1 file, -0/+1)
force_aperture was intended to be left false only by GART drivers that have an identity translation outside the aperture. That does not describe sprd, so add the missing 'force_aperture = true'. Fixes: b23e4fc4e3fa ("iommu: add Unisoc IOMMU basic driver") Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Acked-by: Chunyan Zhang <zhang.lyra@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
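A sketch of the one-line fix in the sprd domain geometry setup (the aperture size shown is illustrative):

    dom->domain.geometry.aperture_start = 0;
    dom->domain.geometry.aperture_end   = SZ_256M - 1;   /* illustrative size */
    dom->domain.geometry.force_aperture = true;           /* the missing flag */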
2023-08-07  iommu/apple-dart: mark apple_dart_pm_ops static  (Min-Hua Chen, 1 file, -1/+1)
This patch fixes the following sparse warning: drivers/iommu/apple-dart.c:1279:1: sparse: warning: symbol 'apple_dart_pm_ops' was not declared. Should it be static? No functional change intended. Signed-off-by: Min-Hua Chen <minhuadotchen@gmail.com> Acked-by: Sven Peter <sven@svenpeter.dev> Link: https://lore.kernel.org/r/20230720232155.3923-1-minhuadotchen@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-07  iommu/ipmmu-vmsa: Convert to read_poll_timeout_atomic()  (Geert Uytterhoeven, 1 file, -9/+6)
Use read_poll_timeout_atomic() instead of open-coding the same operation. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/649c7e09841b998c5c8d7fc274884a85e4b5bfe9.1689599528.git.geert+renesas@glider.be Signed-off-by: Joerg Roedel <jroedel@suse.de>
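A sketch of the conversion, assuming the driver's ipmmu_ctx_read_root() accessor and IMCTR/IMCTR_FLUSH definitions: poll until the flush bit clears, with a 1 us delay between reads and a bounded timeout instead of an open-coded loop.

    #include <linux/iopoll.h>

    u32 val;

    if (read_poll_timeout_atomic(ipmmu_ctx_read_root, val,
                                 !(val & IMCTR_FLUSH), 1, TLB_LOOP_TIMEOUT,
                                 false, domain, IMCTR))
            dev_err_ratelimited(domain->mmu->dev, "TLB sync timed out\n");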
2023-08-07  iommu/mediatek: mt8188: Add iova_region_larb_msk  (Yong Wu, 1 file, -0/+16)
Add iova_region_larb_msk for mt8188. We separate the 16GB iova regions by each device's larbid/portid. Refer to include/dt-bindings/memory/mediatek,mt8188-memory-port.h As commented in the code, larb19(21) means it's larb19 while its SW index is 21. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Alexandre Mergnat <amergnat@baylibre.com> Link: https://lore.kernel.org/r/20230602090227.7264-7-yong.wu@mediatek.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-07  iommu/mediatek: Add MT8188 IOMMU Support  (Chengci.Xu, 1 file, -0/+49)
MT8188 has 3 IOMMUs: 2 MM IOMMUs (one for vdo, the other for vpp) and 1 INFRA IOMMU. Signed-off-by: Chengci.Xu <chengci.xu@mediatek.com> Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Alexandre Mergnat <amergnat@baylibre.com> Link: https://lore.kernel.org/r/20230602090227.7264-6-yong.wu@mediatek.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-07  iommu/mediatek: Add enable IOMMU SMC command for INFRA masters  (Chengci.Xu, 1 file, -10/+22)
Prepare for MT8188. In MT8188, the register that enables the IOMMU for INFRA masters is in the secure world for security reasons, so add an SMC command so that the IOMMU for INFRA masters can be enabled in ATF. Signed-off-by: Chengci.Xu <chengci.xu@mediatek.com> Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Alexandre Mergnat <amergnat@baylibre.com> Link: https://lore.kernel.org/r/20230602090227.7264-5-yong.wu@mediatek.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-07  iommu/mediatek: Adjust mtk_iommu_config flow  (Chengci.Xu, 1 file, -26/+32)
If an INFRA master has many ports, the current flow updates the INFRA register many times. This patch collects all ports into portid_msk at the start of mtk_iommu_config(), then updates the IOMMU configuration only once. After this, we avoid sending too many SMC calls to ATF on MT8188. Prepare for MT8188 and also reduce the indentation; no functional change. Signed-off-by: Chengci.Xu <chengci.xu@mediatek.com> Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Alexandre Mergnat <amergnat@baylibre.com> Link: https://lore.kernel.org/r/20230602090227.7264-4-yong.wu@mediatek.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
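A sketch of the mask-gathering step (MTK_M4U_TO_PORT() and the fwspec fields follow the existing driver; treat the exact placement as an assumption):

    unsigned long portid_msk = 0;
    int i;

    /* Collect every port of the master first... */
    for (i = 0; i < fwspec->num_ids; i++)
            portid_msk |= BIT(MTK_M4U_TO_PORT(fwspec->ids[i]));

    /* ...then issue one register update, or one SMC call on MT8188,
     * with the full mask. */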
2023-08-07  iommu/mediatek: Fix two IOMMU share pagetable issue  (Chengci.Xu, 1 file, -8/+14)
Prepare for mt8188 by fixing an issue where two IOMMU HWs share a pagetable. mt8188 has two MM IOMMU HWs: VPP-IOMMU and VDO-IOMMU. Sharing the pagetable between the two HWs does not work in this case: a) VPP-IOMMU probes first. b) VDO-IOMMU probes. c) A master of VDO-IOMMU probes (meaning frstdata is vpp-iommu). d) A master in another domain probes, no matter whether it is vdo or vpp. A new pagetable is still created in step d) because "frstdata->bank[0]->m4u_dom" was not initialized, so when d) runs, it creates a new one. In this patch, introduce a new variable "share_dom" for the shared pgtable case, which should also help readability, and put all the shared pgtable logic into mtk_iommu_domain_finalise(). On mt8195 the master of VPP-IOMMU probes before VDO-IOMMU because of its dtsi node order, so the issue is not seen there. Prepare for mt8188. Fixes: 645b87c190c9 ("iommu/mediatek: Fix 2 HW sharing pgtable issue") Signed-off-by: Chengci.Xu <chengci.xu@mediatek.com> Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Alexandre Mergnat <amergnat@baylibre.com> Link: https://lore.kernel.org/r/20230602090227.7264-3-yong.wu@mediatek.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-08-06  x86/vector: Rename send_cleanup_vector() to vector_schedule_cleanup()  (Thomas Gleixner, 3 files, -4/+4)
Rename send_cleanup_vector() to vector_schedule_cleanup() to prepare for replacing the vector cleanup IPI with a timer callback. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Xin Li <xin3.li@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Link: https://lore.kernel.org/r/20230621171248.6805-2-xin3.li@intel.com
2023-08-01  iommu/arm-smmu: Clean up resource handling during Qualcomm context probe  (Yangtao Li, 1 file, -4/+2)
Convert to use devm_platform_ioremap_resource() and fix return value when platform_get_irq fails. Signed-off-by: Yangtao Li <frank.li@vivo.com> Link: https://lore.kernel.org/r/20230705130416.46710-1-frank.li@vivo.com Signed-off-by: Will Deacon <will@kernel.org>
2023-08-01  iommu/arm-smmu-v3: Change vmid alloc strategy from bitmap to ida  (Dawei Li, 2 files, -23/+8)
In the current arm smmu-v3 VMID allocation, a per-smmu device bitmap of 64K bits (8K bytes) is allocated to cover the possible VMID range, which is two pages on some architectures. Moreover, its memory consumption is static, regardless of how many VMIDs are actually allocated. That is memory-inefficient and does not scale. Introduce an IDA-based implementation instead, which expands on demand according to the actual VMID allocations. Signed-off-by: Dawei Li <set_pte_at@outlook.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/TYCP286MB2323E0C525FF9F94E3B07C7ACA35A@TYCP286MB2323.JPNP286.PROD.OUTLOOK.COM Signed-off-by: Will Deacon <will@kernel.org>
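A minimal sketch of an IDA-based VMID allocator; field names are assumed from the SMMUv3 driver, and VMID 0 stays reserved:

    #include <linux/idr.h>

    static int arm_smmu_vmid_alloc(struct arm_smmu_device *smmu)
    {
            /* memory is only consumed for VMIDs actually handed out */
            return ida_alloc_range(&smmu->vmid_map, 1,
                                   (1 << smmu->vmid_bits) - 1, GFP_KERNEL);
    }

    static void arm_smmu_vmid_free(struct arm_smmu_device *smmu, u16 vmid)
    {
            ida_free(&smmu->vmid_map, vmid);
    }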
2023-07-28  iommufd/selftest: Add IOMMU_TEST_OP_ACCESS_REPLACE_IOAS coverage  (Nicolin Chen, 2 files, -0/+23)
Add a new IOMMU_TEST_OP_ACCESS_REPLACE_IOAS to allow replacing the access->ioas, corresponding to the iommufd_access_replace() helper. Then add replace coverage as a part of user_copy test case, which basically repeats the copy test after replacing the old ioas with a new one. Link: https://lore.kernel.org/r/a4897f93d41c34b972213243b8dbf4c3832842e4.1690523699.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-28  iommufd: Add iommufd_access_replace() API  (Nicolin Chen, 1 file, -0/+15)
Taking advantage of the new iommufd_access_change_ioas_id helper, add an iommufd_access_replace() API for the VFIO emulated pathway to use. Link: https://lore.kernel.org/r/a3267b924fd5f45e0d3a1dd13a9237e923563862.1690523699.git.nicolinc@nvidia.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-28  iommufd: Use iommufd_access_change_ioas in iommufd_access_destroy_object  (Nicolin Chen, 1 file, -6/+4)
Update iommufd_access_destroy_object() to call the new iommufd_access_change_ioas() helper. It is impossible to legitimately race iommufd_access_destroy_object() with iommufd_access_change_ioas(), as iommufd_access_destroy_object() is only called once the refcount reaches zero, so any concurrent iommufd_access_change_ioas() is already UAFing the memory. Link: https://lore.kernel.org/r/f9fbeca2cde7f8515da18d689b3e02a6a40a5e14.1690523699.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-28  iommufd: Add iommufd_access_change_ioas(_id) helpers  (Nicolin Chen, 1 file, -38/+71)
The complication of the mutex and refcount will be amplified once we introduce replace support for accesses. So add a preparatory change: a new helper iommufd_access_change_ioas() and its wrapper iommufd_access_change_ioas_id(). They can take care of the existing iommufd_access_attach() and iommufd_access_detach(), properly sequencing the refcount puts so that they are truly at the end of the sequence, after we know the IOAS pointer is no longer required. Link: https://lore.kernel.org/r/da0c462532193b447329c4eb975a596f47e49b70.1690523699.git.nicolinc@nvidia.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-28  iommufd: Allow passing in iopt_access_list_id to iopt_remove_access()  (Nicolin Chen, 3 files, -6/+9)
This is a preparatory change for ioas replacement support for accesses. The replacement routine does an iopt_add_access() for a new IOAS first and then iopt_remove_access() for the old IOAS upon the success of the first call. However, the first call overrides the iopt_access_list_id in the access struct, resulting in iopt_remove_access() being unable to work on the old IOAS. Add an iopt_access_list_id as a parameter to iopt_remove_access, so the replacement routine can save the id before it gets overwritten. Pass the id in iopt_remove_access() for a proper cleanup. The existing callers should just pass in access->iopt_access_list_id. Link: https://lore.kernel.org/r/7bb939b9e0102da0c099572bb3de78ab7622221e.1690523699.git.nicolinc@nvidia.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
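A sketch of the replace flow this parameter enables (variable names are hypothetical): capture the id before iopt_add_access() overwrites it, then use it for the cleanup of the old IOAS.

    u32 old_id = access->iopt_access_list_id;
    int rc;

    rc = iopt_add_access(&new_ioas->iopt, access);  /* overwrites the id */
    if (rc)
            return rc;
    iopt_remove_access(&cur_ioas->iopt, access, old_id);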
2023-07-27  iommufd: Set end correctly when doing batch carry  (Jason Gunthorpe, 1 file, -1/+1)
Even though the test suite covers this it somehow became obscured that this wasn't working. The test iommufd_ioas.mock_domain.access_domain_destory would blow up rarely. end should be set to 1 because this just pushed an item, the carry, to the pfns list. Sometimes the test would blow up with: BUG: kernel NULL pointer dereference, address: 0000000000000000 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 0 P4D 0 Oops: 0000 [#1] SMP CPU: 5 PID: 584 Comm: iommufd Not tainted 6.5.0-rc1-dirty #1236 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 RIP: 0010:batch_unpin+0xa2/0x100 [iommufd] Code: 17 48 81 fe ff ff 07 00 77 70 48 8b 15 b7 be 97 e2 48 85 d2 74 14 48 8b 14 fa 48 85 d2 74 0b 40 0f b6 f6 48 c1 e6 04 48 01 f2 <48> 8b 3a 48 c1 e0 06 89 ca 48 89 de 48 83 e7 f0 48 01 c7 e8 96 dc RSP: 0018:ffffc90001677a58 EFLAGS: 00010246 RAX: 00007f7e2646f000 RBX: 0000000000000000 RCX: 0000000000000001 RDX: 0000000000000000 RSI: 00000000fefc4c8d RDI: 0000000000fefc4c RBP: ffffc90001677a80 R08: 0000000000000048 R09: 0000000000000200 R10: 0000000000030b98 R11: ffffffff81f3bb40 R12: 0000000000000001 R13: ffff888101f75800 R14: ffffc90001677ad0 R15: 00000000000001fe FS: 00007f9323679740(0000) GS:ffff8881ba540000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000000 CR3: 0000000105ede003 CR4: 00000000003706a0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <TASK> ? show_regs+0x5c/0x70 ? __die+0x1f/0x60 ? page_fault_oops+0x15d/0x440 ? lock_release+0xbc/0x240 ? exc_page_fault+0x4a4/0x970 ? asm_exc_page_fault+0x27/0x30 ? batch_unpin+0xa2/0x100 [iommufd] ? batch_unpin+0xba/0x100 [iommufd] __iopt_area_unfill_domain+0x198/0x430 [iommufd] ? __mutex_lock+0x8c/0xb80 ? __mutex_lock+0x6aa/0xb80 ? xa_erase+0x28/0x30 ? iopt_table_remove_domain+0x162/0x320 [iommufd] ? lock_release+0xbc/0x240 iopt_area_unfill_domain+0xd/0x10 [iommufd] iopt_table_remove_domain+0x195/0x320 [iommufd] iommufd_hw_pagetable_destroy+0xb3/0x110 [iommufd] iommufd_object_destroy_user+0x8e/0xf0 [iommufd] iommufd_device_detach+0xc5/0x140 [iommufd] iommufd_selftest_destroy+0x1f/0x70 [iommufd] iommufd_object_destroy_user+0x8e/0xf0 [iommufd] iommufd_destroy+0x3a/0x50 [iommufd] iommufd_fops_ioctl+0xfb/0x170 [iommufd] __x64_sys_ioctl+0x40d/0x9a0 do_syscall_64+0x3c/0x80 entry_SYSCALL_64_after_hwframe+0x46/0xb0 Link: https://lore.kernel.org/r/3-v1-85aacb2af554+bc-iommufd_syz3_jgg@nvidia.com Cc: <stable@vger.kernel.org> Fixes: f394576eb11d ("iommufd: PFN handling for iopt_pages") Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Reported-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-27  iommufd: IOMMUFD_DESTROY should not increase the refcount  (Jason Gunthorpe, 3 files, -30/+75)
syzkaller found a race where IOMMUFD_DESTROY increments the refcount: obj = iommufd_get_object(ucmd->ictx, cmd->id, IOMMUFD_OBJ_ANY); if (IS_ERR(obj)) return PTR_ERR(obj); iommufd_ref_to_users(obj); /* See iommufd_ref_to_users() */ if (!iommufd_object_destroy_user(ucmd->ictx, obj)) As part of the sequence to join the two existing primitives together. Allowing the refcount the be elevated without holding the destroy_rwsem violates the assumption that all temporary refcount elevations are protected by destroy_rwsem. Racing IOMMUFD_DESTROY with iommufd_object_destroy_user() will cause spurious failures: WARNING: CPU: 0 PID: 3076 at drivers/iommu/iommufd/device.c:477 iommufd_access_destroy+0x18/0x20 drivers/iommu/iommufd/device.c:478 Modules linked in: CPU: 0 PID: 3076 Comm: syz-executor.0 Not tainted 6.3.0-rc1-syzkaller #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/03/2023 RIP: 0010:iommufd_access_destroy+0x18/0x20 drivers/iommu/iommufd/device.c:477 Code: e8 3d 4e 00 00 84 c0 74 01 c3 0f 0b c3 0f 1f 44 00 00 f3 0f 1e fa 48 89 fe 48 8b bf a8 00 00 00 e8 1d 4e 00 00 84 c0 74 01 c3 <0f> 0b c3 0f 1f 44 00 00 41 57 41 56 41 55 4c 8d ae d0 00 00 00 41 RSP: 0018:ffffc90003067e08 EFLAGS: 00010246 RAX: 0000000000000000 RBX: ffff888109ea0300 RCX: 0000000000000000 RDX: 0000000000000001 RSI: 0000000000000000 RDI: 00000000ffffffff RBP: 0000000000000004 R08: 0000000000000000 R09: ffff88810bbb3500 R10: ffff88810bbb3e48 R11: 0000000000000000 R12: ffffc90003067e88 R13: ffffc90003067ea8 R14: ffff888101249800 R15: 00000000fffffffe FS: 00007ff7254fe6c0(0000) GS:ffff888237c00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000555557262da8 CR3: 000000010a6fd000 CR4: 0000000000350ef0 Call Trace: <TASK> iommufd_test_create_access drivers/iommu/iommufd/selftest.c:596 [inline] iommufd_test+0x71c/0xcf0 drivers/iommu/iommufd/selftest.c:813 iommufd_fops_ioctl+0x10f/0x1b0 drivers/iommu/iommufd/main.c:337 vfs_ioctl fs/ioctl.c:51 [inline] __do_sys_ioctl fs/ioctl.c:870 [inline] __se_sys_ioctl fs/ioctl.c:856 [inline] __x64_sys_ioctl+0x84/0xc0 fs/ioctl.c:856 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x38/0x80 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd The solution is to not increment the refcount on the IOMMUFD_DESTROY path at all. Instead use the xa_lock to serialize everything. The refcount check == 1 and xa_erase can be done under a single critical region. This avoids the need for any refcount incrementing. It has the downside that if userspace races destroy with other operations it will get an EBUSY instead of waiting, but this is kind of racing is already dangerous. Fixes: 2ff4bed7fee7 ("iommufd: File descriptor, context, kconfig and makefiles") Link: https://lore.kernel.org/r/2-v1-85aacb2af554+bc-iommufd_syz3_jgg@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reported-by: syzbot+7574ebfe589049630608@syzkaller.appspotmail.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd/selftest: Add a selftest for IOMMU_HWPT_ALLOC  (Jason Gunthorpe, 1 file, -0/+3)
Test the basic flow. Link: https://lore.kernel.org/r/19-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd/selftest: Return the real idev id from selftest mock_domain  (Jason Gunthorpe, 2 files, -0/+3)
Now that we actually call iommufd_device_bind() we can return the idev_id from that function to userspace for use in other APIs. Link: https://lore.kernel.org/r/18-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd: Add IOMMU_HWPT_ALLOC  (Jason Gunthorpe, 3 files, -0/+58)
This allows userspace to manually create HWPTs on IOAS's and then use those HWPTs as inputs to iommufd_device_attach/replace(). Following series will extend this to allow creating iommu_domains with driver specific parameters. Link: https://lore.kernel.org/r/17-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd/selftest: Test iommufd_device_replace()  (Nicolin Chen, 2 files, -0/+43)
Allow the selftest to call the function on the mock idev, add some tests to exercise it. Link: https://lore.kernel.org/r/16-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Yi Liu <yi.l.liu@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd: Make destroy_rwsem use a lock class per object type  (Jason Gunthorpe, 2 files, -1/+11)
The selftest invokes things like replace under the object lock of its idev which protects the idev in a similar way to a real user. Unfortunately this triggers lockdep. A lock class per type will solve the problem. Link: https://lore.kernel.org/r/15-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd: Add iommufd_device_replace()  (Jason Gunthorpe, 2 files, -0/+102)
Replace allows all the devices in a group to move to a new HWPT in one step. Further, the HWPT move is done without going through a blocking domain, so the IOMMU driver can implement some level of non-disruption to ongoing DMA if that is meaningful for it (e.g. for future special driver domains). Replace uses much of the same logic as a normal attach, except that the actual domain change-over has different restrictions, and we are careful to sequence things so that a failure leaves everything the way it was, rather than getting trapped in a blocking domain or similar on ENOMEM. Link: https://lore.kernel.org/r/14-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommu: Introduce a new iommu_group_replace_domain() API  (Nicolin Chen, 2 files, -0/+37)
qemu has a need to replace the translations associated with a domain when the guest does large-scale operations like switching between an IDENTITY domain and, say, dma-iommu.c. Currently, it does this by replacing all the mappings in a single domain, but this is very inefficient and means that domains have to be per-device rather than per-translation. Provide a high-level API to allow replacement of one domain with another. This is similar to a detach/attach cycle except it doesn't force the group to go through the blocking domain in between. By removing this forced blocking domain, the iommu driver has the opportunity to implement a non-disruptive replacement of the domain to the greatest extent its hardware allows. This allows the qemu emulation of the vIOMMU to be more complete, as real hardware often has a non-disruptive replacement capability. It could be possible to address this by simply removing the protection from iommu_attach_group(), but it is not so clear whether that is safe for the few users. Thus, add a new API to serve this new purpose. All drivers are already required to support changing between active UNMANAGED domains when using their attach_dev ops. This API is expected to be used only by IOMMUFD, so add it to the iommu-priv header and mark it as IOMMUFD_INTERNAL. Link: https://lore.kernel.org/r/13-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
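A sketch of the intended call pattern from IOMMUFD, the only expected user; the iommufd field names here are assumptions:

    /* Move the whole group to new_hwpt->domain without a blocking-domain
     * detour; on failure the group stays attached to its previous domain. */
    rc = iommu_group_replace_domain(idev->igroup->group, new_hwpt->domain);
    if (rc)
            return rc;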
2023-07-26  iommufd: Reorganize iommufd_device_attach into iommufd_device_change_pt  (Jason Gunthorpe, 1 file, -39/+102)
The code flow for first time attaching a PT and replacing a PT is very similar except for the lowest do_attach step. Reorganize this so that the do_attach step is a function pointer. Replace requires destroying the old HWPT once it is replaced. This destruction cannot be done under all the locks that are held in the function pointer, so the signature allows returning a HWPT which will be destroyed by the caller after everything is unlocked. Link: https://lore.kernel.org/r/12-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd: Fix locking around hwpt allocation  (Jason Gunthorpe, 1 file, -1/+1)
Due to the auto_domains mechanism the ioas->mutex must be held until the hwpt is completely setup by iommufd_object_abort_and_destroy() or iommufd_object_finalize(). This prevents a concurrent iommufd_device_auto_get_domain() from seeing an incompletely initialized object through the ioas->hwpt_list. To make this more consistent move the unlock until after finalize. Fixes: e8d57210035b ("iommufd: Add kAPI toward external drivers for physical devices") Link: https://lore.kernel.org/r/11-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd: Allow a hwpt to be aborted after allocation  (Jason Gunthorpe, 3 files, -1/+26)
During creation the hwpt must have the ioas->mutex held until the object is finalized. This means we need to be able to call iommufd_object_abort_and_destroy() while holding the mutex. Since iommufd_hw_pagetable_destroy() also needs the mutex this is problematic. Fix it by creating a special abort op for the object that can assume the caller is holding the lock, as required by the contract. The next patch will add another iommufd_object_abort_and_destroy() for a hwpt. Fixes: e8d57210035b ("iommufd: Add kAPI toward external drivers for physical devices") Link: https://lore.kernel.org/r/10-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd: Add enforced_cache_coherency to iommufd_hw_pagetable_alloc()  (Jason Gunthorpe, 3 files, -15/+32)
Logically the HWPT should have the coherency set properly for the device that it is being created for when it is created. This was happening implicitly if the immediate_attach was set because iommufd_hw_pagetable_attach() does it as the first thing. Do it unconditionally so !immediate_attach works properly. Link: https://lore.kernel.org/r/9-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd: Move putting a hwpt to a helper function  (Jason Gunthorpe, 2 files, -5/+11)
Next patch will need to call this from two places. Link: https://lore.kernel.org/r/8-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-07-26  iommufd: Make sw_msi_start a group global  (Jason Gunthorpe, 2 files, -7/+8)
The sw_msi_start is only set by the ARM drivers and it is always constant. Due to the way vfio/iommufd allow domains to be re-used between devices we have a built in assumption that there is only one value for sw_msi_start and it is global to the system. To make replace simpler where we may not reparse the iommu_get_resv_regions() move the sw_msi_start to the iommufd_group so it is always available once any HWPT has been attached. Link: https://lore.kernel.org/r/7-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>