path: root/drivers/iommu/amd/iommu.c
2024-04-26  iommu/amd: Add SVA domain support  (Vasant Hegde)

- Allocate the SVA domain and set up the mmu notifier. In the free path,
  unregister the mmu notifier and free the protection domain.
- Add the mmu notifier callback function. It retrieves the SVA protection
  domain and invalidates the IO/TLB.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-16-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

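A minimal sketch of the wiring this entry describes, assuming hypothetical
names (struct sva_pdom, amd_iommu_flush_sva_range()); the driver's actual
structures differ, but the mmu notifier's arch_invalidate_secondary_tlbs
callback is where the IO/TLB invalidation hangs off:

  #include <linux/mmu_notifier.h>

  struct sva_pdom {
      struct mmu_notifier mn;
      /* ... rest of the SVA protection domain (illustrative) ... */
  };

  /* Called when the CPU page table changes; mirror it to the IO/TLB. */
  static void sva_invalidate_secondary_tlbs(struct mmu_notifier *mn,
                                            struct mm_struct *mm,
                                            unsigned long start,
                                            unsigned long end)
  {
      struct sva_pdom *pdom = container_of(mn, struct sva_pdom, mn);

      amd_iommu_flush_sva_range(pdom, start, end);  /* hypothetical helper */
  }

  static const struct mmu_notifier_ops sva_mn_ops = {
      .arch_invalidate_secondary_tlbs = sva_invalidate_secondary_tlbs,
  };

  static int sva_pdom_setup(struct sva_pdom *pdom, struct mm_struct *mm)
  {
      pdom->mn.ops = &sva_mn_ops;
      return mmu_notifier_register(&pdom->mn, mm);  /* allocate path */
  }

  static void sva_pdom_free(struct sva_pdom *pdom, struct mm_struct *mm)
  {
      mmu_notifier_unregister(&pdom->mn, mm);       /* free path */
      /* ...then free the protection domain itself... */
  }
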
2024-04-26  iommu/amd: Initial SVA support for AMD IOMMU  (Vasant Hegde)

This includes:
- Add a data structure to track per-protection-domain dev/pasid binding
  details. protection_domain->dev_data_list will track the attached list
  of dev_data/PASIDs.
- Move 'to_pdomain()' to the header file.
- Add iommu_sva_set_dev_pasid(). It checks whether PASID is supported,
  and adds the PASID to the SVA protection domain list as well as to the
  device GCR3 table.
- Add iommu_ops.remove_dev_pasid support. It unbinds the PASID from the
  device and removes the pasid data from the protection domain device
  list.
- Add IOMMU_SVA as a dependency of the AMD_IOMMU driver.

For a given PASID, iommu_set_dev_pasid() will bind all devices to the
same SVA protection domain (1 PASID : 1 SVA protection domain : N
devices). This protection domain is different from the device protection
domain (the one mapped in the attach_device() path).

The IOMMU uses the domain ID for caching, invalidation, etc. In SVA mode
it uses the per-device domain ID. Hence, in the invalidation path we
retrieve the domain ID from the gcr3_info_table structure and use that
for invalidation.

Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-14-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

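A hedged sketch of the bookkeeping described above (1 PASID : 1 SVA
domain : N devices); struct pdom_dev_data and its fields are illustrative
names, not necessarily the driver's exact layout:

  #include <linux/iommu.h>
  #include <linux/list.h>
  #include <linux/slab.h>

  /* One entry per attached (device, PASID) pair. */
  struct pdom_dev_data {
      struct iommu_dev_data *dev_data;
      ioasid_t pasid;
      struct list_head list;  /* chained on protection_domain->dev_data_list */
  };

  static int sva_bind_dev_pasid(struct protection_domain *pdom,
                                struct iommu_dev_data *dev_data,
                                ioasid_t pasid)
  {
      struct pdom_dev_data *pd_data;

      pd_data = kzalloc(sizeof(*pd_data), GFP_KERNEL);
      if (!pd_data)
          return -ENOMEM;

      pd_data->dev_data = dev_data;
      pd_data->pasid = pasid;
      list_add(&pd_data->list, &pdom->dev_data_list);

      /* ...then install the PASID into the device's GCR3 table... */
      return 0;
  }
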
2024-04-26  iommu/amd: Add support for enable/disable IOPF  (Vasant Hegde)

Return success from the enable_feature(IOPF) path as this interface is
going away. Instead, IOPF support is enabled/disabled in the device
attach/detach path.

In the attach path, if the device is capable of PRI, add it to the
per-IOMMU IOPF queue and enable PPR support in the IOMMU. The device is
attached to the domain even if enabling PRI or adding the device to the
IOPF queue fails, as the device can continue to work without PRI support.

The detach path follows this sequence (see the sketch after this entry):
- Flush the queue for the given device
- Disable PPR support in DTE[devid]
- Remove the device from the IOPF queue
- Disable device PRI

Also add IOMMU_IOPF as a dependency of the AMD_IOMMU driver.

Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-13-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

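As a sketch, the detach-side ordering listed above might look like the
following; clear_dte_ppr() and the iopf_queue field are illustrative,
while iopf_queue_flush_dev(), iopf_queue_remove_device() and
pci_disable_pri() are existing kernel interfaces:

  #include <linux/iommu.h>
  #include <linux/pci-ats.h>

  static void dev_disable_iopf(struct amd_iommu *iommu, struct device *dev)
  {
      struct pci_dev *pdev = to_pci_dev(dev);

      iopf_queue_flush_dev(dev);                        /* 1. drain pending faults */
      clear_dte_ppr(iommu, dev);                        /* 2. disable PPR in DTE[devid] (illustrative) */
      iopf_queue_remove_device(iommu->iopf_queue, dev); /* 3. leave the per-IOMMU IOPF queue */
      pci_disable_pri(pdev);                            /* 4. disable PRI on the device */
  }
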
2024-04-26  iommu/amd: Add support for page response  (Suravee Suthikulpanit)

This generates the AMD IOMMU COMPLETE_PPR_REQUEST for the specified
device with the specified PRI Response Code. Also update
amd_iommu_complete_ppr() to accept a 'struct device' instead of a pdev,
as it only needs the device reference.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-11-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-04-26  iommu/amd: Enable PCI features based on attached domain capability  (Vasant Hegde)

Commit eda8c2860ab6 ("iommu/amd: Enable device ATS/PASID/PRI capabilities
independently") changed the way device capabilities are enabled while
attaching devices, but failed to account for the attached domain's
capability. If the domain is not capable of handling PASID/PRI (e.g. a
paging domain with a v1 page table), then enabling the device feature is
not required.

This patch enables PASID/PRI only if the domain is capable of handling
SVA. It also moves the PCI feature enablement into the do_attach()
function so that the SVA capability check happens in one place. Finally,
it makes the PRI enable/disable functions static.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-9-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

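A sketch of the gating logic, under the stated assumption that
pdom_is_sva_capable() exists (illustrative name); pci_enable_pasid() and
pci_enable_pri() are the real PCI helpers:

  #include <linux/pci-ats.h>

  #define PPR_QUEUE_DEPTH 32  /* illustrative PRI queue depth */

  static void pdev_enable_features(struct protection_domain *pdom,
                                   struct pci_dev *pdev)
  {
      /* A v1 paging domain can never use PASID/PRI, so don't enable them. */
      if (!pdom_is_sva_capable(pdom))
          return;

      if (pci_enable_pasid(pdev, 0))
          return;

      /* PRI failing is not fatal; the device still works without it. */
      if (pci_enable_pri(pdev, PPR_QUEUE_DEPTH))
          pci_dbg(pdev, "PRI enable failed, continuing without IOPF\n");
  }
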
2024-04-26  iommu/amd: Setup GCR3 table in advance if domain is SVA capable  (Vasant Hegde)

SVA can be supported if the domain is in passthrough mode or is a paging
domain with a v2 page table. Current code sets up the GCR3 table only for
domains with a v2 page table. Set up the GCR3 table for all SVA-capable
domains:

- Move GCR3 init/destroy into separate functions.
- Change the default GCR3 table to use the MAX supported PASIDs. Ideally
  it should use a 1-level PASID table, as it is using PASID zero only,
  but we don't have support to extend the PASID table yet. We will fix
  this later.
- When the domain is configured in passthrough mode, allocate the default
  GCR3 table only if the device is SVA capable.

Note that the attach_device() path does not know whether the device will
use SVA or not. If the device is attached to a passthrough domain and
never uses SVA, the GCR3 table will never be used and the memory
allocated for it is wasted. This is done to avoid a DTE update when
attaching a PASID to the device.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-8-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-04-26  iommu/amd: Introduce iommu_dev_data.max_pasids  (Vasant Hegde)

This variable tracks the number of PASIDs supported by the device. If the
IOMMU or the device doesn't support PASID, it will be zero. It is used
when allocating the GCR3 table to decide the required number of PASID
table levels, and in the PASID bind path to check whether the device
supports PASID at all.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-7-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

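A plausible computation of that value, with the IOMMU-side helpers as
illustrative stand-ins; pci_max_pasids() is the real PCI helper (it
returns <= 0 when the device lacks PASID support):

  #include <linux/minmax.h>
  #include <linux/pci-ats.h>

  static u32 dev_iommu_max_pasids(struct amd_iommu *iommu, struct pci_dev *pdev)
  {
      int dev_pasids = pci_max_pasids(pdev);

      /* Zero when either side lacks PASID support. */
      if (!iommu_supports_pasid(iommu) || dev_pasids <= 0)  /* illustrative check */
          return 0;

      /* Otherwise the smaller of the IOMMU's and the device's limits. */
      return min_t(u32, iommu_max_pasids(iommu), dev_pasids);  /* illustrative */
  }
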
2024-04-26  iommu/amd: Move PPR-related functions into ppr.c  (Suravee Suthikulpanit)

In preparation for subsequent PPR-related patches, move the PPR code into
ppr.c and remove the static declaration from certain helper functions so
they can be reused in other files. Also rename the functions below:

  alloc_ppr_log        -> amd_iommu_alloc_ppr_log
  iommu_enable_ppr_log -> amd_iommu_enable_ppr_log
  free_ppr_log         -> amd_iommu_free_ppr_log
  iommu_poll_ppr_log   -> amd_iommu_poll_ppr_log

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-5-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-04-26  iommu/amd: Add support for enabling/disabling IOMMU features  (Wei Huang)

Add support for struct iommu_ops.dev_{enable/disable}_feat. Please note
that the empty feature switches will be populated by subsequent patches.

Signed-off-by: Wei Huang <wei.huang2@amd.com>
Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-4-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-04-26  iommu/amd: Introduce per device DTE update function  (Vasant Hegde)

Consolidate the per-device update and flush logic into a separate
function, and make it a global function since it will be used in a
subsequent series to update the DTE.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240418103400.6229-3-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-04-26  iommu/amd: Enhance def_domain_type to handle untrusted device  (Vasant Hegde)

Previously, the IOMMU core layer forced the IOMMU_DOMAIN_DMA domain for
untrusted devices. This always took precedence over the driver's
def_domain_type(). Commit 59ddce4418da ("iommu: Reorganize
iommu_get_default_domain_type() to respect def_domain_type()") changed
that behaviour: current code calls def_domain_type(), but if it doesn't
return IOMMU_DOMAIN_DMA for an untrusted device, it returns an error.
This leaves the IOMMU group (and potentially the IOMMU itself) in an
undetermined state.

This patch adds the untrusted check to the AMD IOMMU driver code, so that
eGPUs behind Thunderbolt work again. Fine-tuning
amd_iommu_def_domain_type() will be done later.

Reported-by: Eric Wagner <ewagner12@gmail.com>
Link: https://lore.kernel.org/linux-iommu/CAHudX3zLH6CsRmLE-yb+gRjhh-v4bU5_1jW_xCcxOo_oUUZKYg@mail.gmail.com
Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3182
Fixes: 59ddce4418da ("iommu: Reorganize iommu_get_default_domain_type() to respect def_domain_type()")
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: stable@kernel.org # v6.7+
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Link: https://lore.kernel.org/r/20240423111725.5813-1-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-04-26  iommu/dma: Centralise iommu_setup_dma_ops()  (Robin Murphy)

It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
which means there should be no harm in achieving the same order of
operations by running it off the back of iommu_probe_device() itself.
This then puts it in line with the x86 and s390 .probe_finalize bodges,
letting us pull it all into the main flow properly. As a bonus this lets
us fold in and de-scope the PCI workaround setup as well.

At this point we can also then pull the call up inside the group mutex,
and avoid having to think about whether iommu_group_store_type() could
theoretically race and free the domain if iommu_setup_dma_ops() ran just
*before* iommu_device_use_default_domain() claims it...

Furthermore we replace one .probe_finalize call completely, since the
only remaining implementations are now one which only needs to run once
for the initial boot-time probe, and two which themselves render that
path unreachable. This leaves us a big step closer to realistically being
able to unpick the variety of different things that
iommu_setup_dma_ops() has been muddling together, and further streamline
iommu-dma into core API flows in future.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> # For Intel IOMMU
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/bebea331c1d688b34d9862eefd5ede47503961b8.1713523152.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-04-15  iommu/amd: use page allocation function provided by iommu-pages.h  (Pasha Tatashin)

Convert iommu/amd/* files to use the new page allocation functions
provided in iommu-pages.h.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: David Rientjes <rientjes@google.com>
Tested-by: Bagas Sanjaya <bagasdotme@gmail.com>
Link: https://lore.kernel.org/r/20240413002522.1101315-4-pasha.tatashin@soleen.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-04-12  iommu/amd: Fix possible irq lock inversion dependency issue  (Vasant Hegde)

The LOCKDEP detector reported the warning below:

  ========================================================
  WARNING: possible irq lock inversion dependency detected
  6.8.0fix+ #811 Not tainted
  --------------------------------------------------------
  kworker/0:1/8 just changed the state of lock:
  ff365325e084a9b8 (&domain->lock){..-.}-{3:3}, at: amd_iommu_flush_iotlb_all+0x1f/0x50
  but this lock took another, SOFTIRQ-unsafe lock in the past:
   (pd_bitmap_lock){+.+.}-{3:3}

  and interrupts could create inverse lock ordering between them.

  other info that might help us debug this:
   Chain exists of:
     &domain->lock --> &dev_data->lock --> pd_bitmap_lock

   Possible interrupt unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(pd_bitmap_lock);
                                 local_irq_disable();
                                 lock(&domain->lock);
                                 lock(&dev_data->lock);
    <Interrupt>
      lock(&domain->lock);

Fix this issue by disabling interrupts when acquiring pd_bitmap_lock.
Note that this is a temporary fix. We have a plan to replace the custom
bitmap allocator with the IDA allocator.

Fixes: 87a6f1f22c97 ("iommu/amd: Introduce per-device domain ID to fix potential TLB aliasing issue")
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Link: https://lore.kernel.org/r/20240404102717.6705-1-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

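The shape of the fix, for reference: taking pd_bitmap_lock with
spin_lock_irqsave() removes the SOFTIRQ-unsafe link in the chain above.
alloc_pd_id_locked() stands in for the bitmap search:

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(pd_bitmap_lock);

  static u16 amd_iommu_get_pd_id(void)
  {
      unsigned long flags;
      u16 id;

      /* Was spin_lock(); irqsave closes the interrupt window lockdep flagged. */
      spin_lock_irqsave(&pd_bitmap_lock, flags);
      id = alloc_pd_id_locked();  /* illustrative: find-first-zero in the bitmap */
      spin_unlock_irqrestore(&pd_bitmap_lock, flags);

      return id;
  }
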
2024-03-08  iommu/amd: Fix sleeping in atomic context  (Vasant Hegde)

Commit cf70873e3d01 ("iommu/amd: Refactor GCR3 table helper functions")
changed the GFP flag used for the GCR3 table. The original plan was to
move the GCR3 table allocation outside the spinlock, but that requires a
complete rework of the attach device path, so it was not done as part of
the SVA series. For now, revert the GFP flag to GFP_ATOMIC (same as the
original code).

Fixes: cf70873e3d01 ("iommu/amd: Refactor GCR3 table helper functions")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Link: https://lore.kernel.org/r/20240307052738.116035-1-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

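Why GFP_ATOMIC: the allocation still happens under a spinlock, and
GFP_KERNEL may sleep there. A sketch of the constraint (not the driver's
exact code):

  #include <linux/gfp.h>
  #include <linux/spinlock.h>

  static u64 *alloc_gcr3_page(struct iommu_dev_data *dev_data)
  {
      unsigned long flags;
      u64 *gcr3;

      spin_lock_irqsave(&dev_data->lock, flags);
      /* GFP_KERNEL could sleep with the lock held; GFP_ATOMIC cannot. */
      gcr3 = (u64 *)get_zeroed_page(GFP_ATOMIC);
      spin_unlock_irqrestore(&dev_data->lock, flags);

      return gcr3;
  }
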
2024-02-09  iommu/amd: Introduce per-device domain ID to fix potential TLB aliasing issue  (Vasant Hegde)

With the v1 page table, the AMD IOMMU spec states that the hardware must
use the domain ID to tag its internal translation caches. I/O devices
with different v1 page tables must be given different domain IDs. I/O
devices that share the same v1 page table __may__ be given the same
domain ID. This domain ID management policy is currently implemented by
the AMD IOMMU driver. In this case, only the domain ID is needed when
issuing the INVALIDATE_IOMMU_PAGES command to invalidate the IOMMU
translation cache (TLB).

With the v2 page table, the hardware uses both the domain ID and the
PASID as parameters to tag the caches and issue the
INVALIDATE_IOMMU_PAGES command. Since the GCR3 table is set up
per-device, there is no guarantee that a PASID is unique across multiple
devices: the same PASID on different devices could map to different v2
page tables. In that case, if multiple devices share the same domain ID,
the IOMMU translation cache for these devices would be polluted due to
TLB aliasing.

Hence, avoid the TLB aliasing issue with the v2 page table by allocating
a unique domain ID for each device, even when multiple devices share the
same v1 page table. Please note that this fix results in multiple
INVALIDATE_IOMMU_PAGES commands (one per domain ID) when unmapping a
translation.

The domain ID can be shared until the device starts using a PASID. We
will enhance this code later to allocate the per-device domain ID only
when it is needed.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-18-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

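A hedged sketch of the resulting ID policy; the field and helper names
(gcr3_tbl_info.domid, pd_id_alloc()) are illustrative:

  static void pdom_setup_domid(struct protection_domain *pdom,
                               struct gcr3_tbl_info *gcr3_info)
  {
      if (pdom->pd_mode == PD_MODE_V2) {
          /* v2: private ID per device so PASID-tagged entries can't alias. */
          gcr3_info->domid = pd_id_alloc();  /* illustrative allocator */
      } else {
          /* v1: sharing the domain's ID remains safe. */
          gcr3_info->domid = pdom->id;
      }
  }
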
2024-02-09  iommu/amd: Remove unused GCR3 table parameters from struct protection_domain  (Suravee Suthikulpanit)

They have been moved to struct iommu_dev_data, and the driver has been
ported to use them there.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-17-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Rearrange device flush code  (Vasant Hegde)

Consolidate all flush-related code in one place so that it is easy to
maintain. No functional changes intended.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-16-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Remove unused flush pasid functions  (Vasant Hegde)

We have removed the iommu_v2 module, converted the v2 page table to use
the common flush functions, and moved the GCR3 table to be per-device.
The PASID-related flush functions are no longer used, so remove them.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-15-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Refactor GCR3 table helper functions  (Suravee Suthikulpanit)

Refactor the GCR3 table helpers to use the new per-device struct
gcr3_tbl_info, and use the GFP_KERNEL flag instead of GFP_ATOMIC for the
GCR3 table allocation. Also modify set_dte_entry() to use the new
per-device GCR3 table, and replace BUG_ON with WARN_ON_ONCE() in the
free_gcr3_table() path.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-14-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Refactor protection_domain helper functions  (Suravee Suthikulpanit)

Remove the code that sets up the GCR3 table, and only handle domain
create/destroy, since GCR3 is no longer part of a domain.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-13-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Refactor attaching / detaching device functions  (Suravee Suthikulpanit)

If the domain is configured with a v2 page table, then set up the default
GCR3 with the domain GCR3 pointer, so that all devices in the domain use
the same page table for translation. Also return the page table setup
status from the do_attach() function.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-12-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Refactor helper function for setting / clearing GCR3  (Suravee Suthikulpanit)

Refactor the GCR3 helper functions in preparation for using the
per-device GCR3 table:

* Add a new function, update_gcr3(), to update the per-device GCR3 table
  (see the sketch after this entry).
* Remove the per-domain default GCR3 setup during v2 page table
  allocation. A subsequent patch will add support for setting up the
  default GCR3 while attaching a device to a domain.
* Remove amd_iommu_domain_update() from the v2 page table path, as the
  device detach path takes care of updating the domain.
* Consolidate the GCR3 table related code in one place so that it is
  easier to maintain.
* Rename functions to reflect their usage.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-11-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

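A sketch of the new helper's shape, under stated assumptions:
get_gcr3_pte(), GCR3_VALID and dev_flush_pasid() are illustrative
stand-ins, not the driver's exact names:

  static int update_gcr3(struct iommu_dev_data *dev_data,
                         ioasid_t pasid, unsigned long gcr3, bool set)
  {
      u64 *pte = get_gcr3_pte(dev_data, pasid);  /* walk/alloc per-device table */

      if (!pte)
          return -ENOMEM;

      if (set)
          *pte = (gcr3 & PAGE_MASK) | GCR3_VALID;  /* GCR3_VALID: illustrative bit */
      else
          *pte = 0;

      dev_flush_pasid(dev_data, pasid);  /* illustrative per-device TLB flush */
      return 0;
  }
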
2024-02-09  iommu/amd: Rearrange GCR3 table setup code  (Vasant Hegde)

Consolidate the GCR3 table related code in one place so that it is easier
to maintain. Note that this patch doesn't move __set_gcr3()/__clear_gcr3().
We are moving the GCR3 table from per-domain to per-device; the following
series will rework these functions and move them at that time. No
functional changes intended.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-9-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Add support for device based TLB invalidation  (Vasant Hegde)

Add support to invalidate the TLB/IOTLB for a given device. These
functions will be used in subsequent patches that introduce the
per-device GCR3 table and SVA support.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-8-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Use protection_domain.flags to check page table mode  (Vasant Hegde)

The page table mode (v1, v2 or pt) is a per-domain property. Recently we
introduced protection_domain.pd_mode to track the per-domain page table
mode. Use that variable to check the page table mode instead of the
global 'amd_iommu_pgtable' in the {map/unmap}_pages path.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-7-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Introduce struct protection_domain.pd_mode  (Suravee Suthikulpanit)

This enum variable tracks the type of page table used by the protection
domain. It will replace protection_domain.flags in a subsequent series.

Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-5-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

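Roughly, the new field as the commit describes it, plus the kind of check
it enables (e.g. in the map/unmap path from the entry above); enum values
are illustrative:

  enum protection_domain_mode {
      PD_MODE_V1 = 1,
      PD_MODE_V2,
  };

  struct protection_domain {
      /* ... */
      enum protection_domain_mode pd_mode;  /* instead of testing ->flags */
  };

  /* e.g. in the map/unmap path: */
  static bool pdom_is_v2(struct protection_domain *pdom)
  {
      return pdom->pd_mode == PD_MODE_V2;
  }
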
2024-02-09  iommu/amd: Introduce get_amd_iommu_from_dev()  (Suravee Suthikulpanit)

Introduce get_amd_iommu_from_dev() and get_amd_iommu_from_dev_data(), and
replace rlookup_amd_iommu() with the new helpers where applicable, to
avoid an unnecessary loop when looking up struct amd_iommu from struct
device.

Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-4-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Pass struct iommu_dev_data to set_dte_entry()  (Vasant Hegde)

Pass the iommu_dev_data structure instead of passing individual
variables. No functional changes intended.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240205115615.6053-2-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Remove redundant error check in amd_iommu_probe_device()  (Vasant Hegde)

iommu_init_device() is not returning -ENOTSUPP since commit 61289cbaf6c8
("iommu/amd: Remove old alias handling code"). No functional change
intended.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240118090105.5864-6-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-02-09  iommu/amd: Remove unused IOVA_* macro  (Vasant Hegde)

These macros are not used after commit ac6d704679d343 ("iommu/dma: Pass
address limit rather than size to iommu_setup_dma_ops()"). No functional
change intended.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240118090105.5864-3-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2024-01-18  Merge tag 'iommu-updates-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  (Linus Torvalds)

Pull iommu updates from Joerg Roedel:

 "Core changes:
   - Fix race conditions in device probe path
   - Retire IOMMU bus_ops
   - Support for passing custom allocators to page table drivers
   - Clean up Kconfig around IOMMU_SVA
   - Support for sharing SVA domains with all devices bound to a mm
   - Firmware data parsing cleanup
   - Tracing improvements for iommu-dma code
   - Some smaller fixes and cleanups

  ARM-SMMU drivers:
   - Device-tree binding updates:
      - Add additional compatible strings for Qualcomm SoCs
      - Document Adreno clocks for Qualcomm's SM8350 SoC
   - SMMUv2:
      - Implement support for the ->domain_alloc_paging() callback
      - Ensure Secure context is restored following suspend of Qualcomm
        SMMU implementation
   - SMMUv3:
      - Disable stalling mode for the "quiet" context descriptor
      - Minor refactoring and driver cleanups

  Intel VT-d driver:
   - Cleanup and refactoring

  AMD IOMMU driver:
   - Improve IO TLB invalidation logic
   - Small cleanups and improvements

  Rockchip IOMMU driver:
   - DT binding update to add Rockchip RK3588

  Apple DART driver:
   - Apple M1 USB4/Thunderbolt DART support
   - Cleanups

  Virtio IOMMU driver:
   - Add support for iotlb_sync_map
   - Enable deferred IO TLB flushes"

* tag 'iommu-updates-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (66 commits)
  iommu: Don't reserve 0-length IOVA region
  iommu/vt-d: Move inline helpers to header files
  iommu/vt-d: Remove unused vcmd interfaces
  iommu/vt-d: Remove unused parameter of intel_pasid_setup_pass_through()
  iommu/vt-d: Refactor device_to_iommu() to retrieve iommu directly
  iommu/sva: Fix memory leak in iommu_sva_bind_device()
  dt-bindings: iommu: rockchip: Add Rockchip RK3588
  iommu/dma: Trace bounce buffer usage when mapping buffers
  iommu/arm-smmu: Convert to domain_alloc_paging()
  iommu/arm-smmu: Pass arm_smmu_domain to internal functions
  iommu/arm-smmu: Implement IOMMU_DOMAIN_BLOCKED
  iommu/arm-smmu: Convert to a global static identity domain
  iommu/arm-smmu: Reorganize arm_smmu_domain_add_master()
  iommu/arm-smmu-v3: Remove ARM_SMMU_DOMAIN_NESTED
  iommu/arm-smmu-v3: Master cannot be NULL in arm_smmu_write_strtab_ent()
  iommu/arm-smmu-v3: Add a type for the STE
  iommu/arm-smmu-v3: disable stall for quiet_cd
  iommu/qcom: restore IOMMU state if needed
  iommu/arm-smmu-qcom: Add QCM2290 MDSS compatible
  iommu/arm-smmu-qcom: Add missing GMU entry to match table
  ...

2024-01-03  Merge branches 'apple/dart', 'arm/rockchip', 'arm/smmu', 'virtio', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel)

2023-12-12  iommu: Mark dev_iommu_priv_set() with a lockdep  (Jason Gunthorpe)

A perfect driver would only call dev_iommu_priv_set() from its probe
callback. We've made it functionally correct to call it from of_xlate by
adding a lock around that call. Add a lockdep assert that
iommu_probe_device_lock is held, to discourage misuse.

Exclude PPC kernels with CONFIG_FSL_PAMU turned on, because FSL_PAMU uses
a global static for its priv and abuses priv for its domain.

Remove the pointless stores of NULL; all of these are on paths where the
core code will free dev->iommu after the op returns.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/5-v2-16e4def25ebb+820-iommu_fwspec_p1_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

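The pattern the commit adds, paraphrased (the FSL_PAMU carve-out is
expressed here as a config check):

  void dev_iommu_priv_set(struct device *dev, void *priv)
  {
      /* FSL_PAMU abuses priv, so it is excluded from the assertion. */
      if (!IS_ENABLED(CONFIG_FSL_PAMU))
          lockdep_assert_held(&iommu_probe_device_lock);

      dev->iommu->priv = priv;
  }
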
2023-12-11  iommu/amd/pgtbl_v2: Invalidate updated page ranges only  (Vasant Hegde)

Enhance __domain_flush_pages() to detect the domain's page table mode and
use that information to build the invalidation commands, so that
amd_iommu_domain_flush_pages() can be used to invalidate the v2 page
table. Also pass the PASID and gn variable to device_flush_iotlb() so
that it can build the IOTLB invalidation command for both v1 and v2 page
tables.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-10-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-12-11  iommu/amd: Make domain_flush_pages as global function  (Vasant Hegde)

- Rename domain_flush_pages() -> amd_iommu_domain_flush_pages() and make
  it a global function.
- Rename amd_iommu_domain_flush_tlb_pde() -> amd_iommu_domain_flush_all()
  and make it static.
- Convert the v1 page table (io_pgtable.c) to use
  amd_iommu_domain_flush_pages().

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-9-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-12-11  iommu/amd: Consolidate amd_iommu_domain_flush_complete() call  (Vasant Hegde)

Call amd_iommu_domain_flush_complete() from domain_flush_pages(). That
way we can remove the explicit calls to amd_iommu_domain_flush_complete()
from various places.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-8-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-12-11  iommu/amd: Refactor device iotlb invalidation code  (Vasant Hegde)

build_inv_iotlb_pages() and build_inv_iotlb_pasid() largely duplicate
each other. Enhance build_inv_iotlb_pages() to invalidate the guest IOTLB
as well, and remove the build_inv_iotlb_pasid() function.

Suggested-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-7-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-12-11  iommu/amd: Refactor IOMMU tlb invalidation code  (Vasant Hegde)

build_inv_iommu_pages() and build_inv_iommu_pasid() largely duplicate
each other. Hence, enhance build_inv_iommu_pages() to invalidate guest
pages as well, and remove build_inv_iommu_pasid().

Suggested-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-6-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-12-11  iommu/amd: Add support to invalidate multiple guest pages  (Vasant Hegde)

The current interface supports invalidating a single page or the entire
guest translation information for a single process address space. The
IOMMU CMD_INV_IOMMU_PAGES and CMD_INV_IOTLB_PAGES commands support
invalidating a range of pages, so add support for invalidating multiple
pages.

This is a preparatory patch before consolidating the host and guest
invalidation code into a single function. The following patches will
consolidate the TLB invalidation code.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-5-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-12-11  iommu/amd: Remove redundant passing of PDE bit  (Vasant Hegde)

The current code always sets the PDE bit in the INVALIDATE_IOMMU_PAGES
command, so get rid of the 'pde' variable across functions. We can
re-introduce this bit whenever it is needed.

Suggested-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-4-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-12-11  iommu/amd: Remove redundant domain flush from attach_device()  (Vasant Hegde)

The domain flush was introduced in the attach_device() path to handle the
kdump scenario. Later, the init code was enhanced to handle kdump, where
it also takes care of flushing everything including the TLB (see
early_enable_iommus()). Hence, remove the redundant flush from the
attach_device() function.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-3-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-12-11  iommu/amd: Rename iommu_flush_all_caches() -> amd_iommu_flush_all_caches()  (Vasant Hegde)

Rename the function in line with the driver naming convention. No
functional changes.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-2-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-12-11  iommu/amd: Do not flush IRTE when only updating isRun and destination fields  (Suravee Suthikulpanit)

According to the recent update in the AMD IOMMU spec [1], the IsRun and
Destination fields of the Interrupt Remapping Table Entry (IRTE) are not
cached by the IOMMU hardware. Therefore, do not issue the
INVALIDATE_INTERRUPT_TABLE command when updating IRTE[IsRun] and
IRTE[Destination] while IRTE[GuestMode]=1, which should help improve
IOMMU AVIC/x2AVIC performance.

References:
[1] AMD IOMMU Spec Revision (Rev 3.08-PUB)
    (Link: https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/specifications/48882_IOMMU.pdf)

Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
Tested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Link: https://lore.kernel.org/r/20231017144236.8287-1-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-11-27  iommu/amd: Set variable amd_dirty_ops to static  (Kunwu Chan)

Fix the following warning:

  drivers/iommu/amd/iommu.c:67:30: warning: symbol 'amd_dirty_ops' was not declared. Should it be static?

This variable is only used in its defining file, so it should be static.

Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231120095342.1102999-1-chentao@kylinos.cn
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2023-11-21  x86/apic: Drop apic::delivery_mode  (Andrew Cooper)

This field is set to APIC_DELIVERY_MODE_FIXED in all cases, and is read
exactly once. Fold the constant in uv_program_mmr() and drop the field.

Searching for the origin of the stale HyperV comment reveals commit
a31e58e129f7 ("x86/apic: Switch all APICs to Fixed delivery mode") which
notes:

  As a consequence of this change, the apic::irq_delivery_mode field is
  now pointless, but this needs to be cleaned up in a separate patch.

6 years is long enough for this technical debt to have survived.

[ bp: Fold in https://lore.kernel.org/r/20231121123034.1442059-1-andrew.cooper3@citrix.com ]

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Steve Wahl <steve.wahl@hpe.com>
Link: https://lore.kernel.org/r/20231102-x86-apic-v1-1-bf049a2a0ed6@citrix.com

2023-11-09  Merge tag 'iommu-updates-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  (Linus Torvalds)

Pull iommu updates from Joerg Roedel:

 "Core changes:
   - Make default-domains mandatory for all IOMMU drivers
   - Remove group refcounting
   - Add generic_single_device_group() helper and consolidate drivers
   - Cleanup map/unmap ops
   - Scaling improvements for the IOVA rcache depot
   - Convert dart & iommufd to the new domain_alloc_paging()

  ARM-SMMU:
   - Device-tree binding update:
      - Add qcom,sm7150-smmu-v2 for Adreno on SM7150 SoC
   - SMMUv2:
      - Support for Qualcomm SDM670 (MDSS) and SM7150 SoCs
   - SMMUv3:
      - Large refactoring of the context descriptor code to move the CD
        table into the master, paving the way for '->set_dev_pasid()'
        support on non-SVA domains
      - Minor cleanups to the SVA code

  Intel VT-d:
   - Enable debugfs to dump domain attached to a pasid
   - Remove an unnecessary inline function

  AMD IOMMU:
   - Initial patches for SVA support (not complete yet)

  S390 IOMMU:
   - DMA-API conversion and optimized IOTLB flushing

  And some smaller fixes and improvements"

* tag 'iommu-updates-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (102 commits)
  iommu/dart: Remove the force_bypass variable
  iommu/dart: Call apple_dart_finalize_domain() as part of alloc_paging()
  iommu/dart: Convert to domain_alloc_paging()
  iommu/dart: Move the blocked domain support to a global static
  iommu/dart: Use static global identity domains
  iommufd: Convert to alloc_domain_paging()
  iommu/vt-d: Use ops->blocked_domain
  iommu/vt-d: Update the definition of the blocking domain
  iommu: Move IOMMU_DOMAIN_BLOCKED global statics to ops->blocked_domain
  Revert "iommu/vt-d: Remove unused function"
  iommu/amd: Remove DMA_FQ type from domain allocation path
  iommu: change iommu_map_sgtable to return signed values
  iommu/virtio: Add __counted_by for struct viommu_request and use struct_size()
  iommu/vt-d: debugfs: Support dumping a specified page table
  iommu/vt-d: debugfs: Create/remove debugfs file per {device, pasid}
  iommu/vt-d: debugfs: Dump entry pointing to huge page
  iommu/vt-d: Remove unused function
  iommu/arm-smmu-v3-sva: Remove bond refcount
  iommu/arm-smmu-v3-sva: Remove unused iommu_sva handle
  iommu/arm-smmu-v3: Rename cdcfg to cd_table
  ...

2023-10-27  Merge branches 'iommu/fixes', 'arm/tegra', 'arm/smmu', 'virtio', 'x86/vt-d', 'x86/amd', 'core' and 's390' into next  (Joerg Roedel)

2023-10-26  iommu: Pass in parent domain with user_data to domain_alloc_user op  (Yi Liu)

The domain_alloc_user op already accepts user flags for domain
allocation; add a parent domain pointer and driver-specific user data
support as well. The user data is tagged with a type for iommu drivers to
add their own driver-specific user data per hw_pagetable.

Add struct iommu_user_data as a bundle of data_ptr/data_len/type from an
iommufd core uAPI structure. Make the user data opaque to the core, since
a userspace driver must match the kernel driver. In the future, if
drivers share some common parameter, there could be a generic parameter
as well.

Link: https://lore.kernel.org/r/20231026043938.63898-7-yi.l.liu@intel.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Co-developed-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2023-10-24  iommu/amd: Access/Dirty bit support in IOPTEs  (Joao Martins)

The IOMMU advertises Access/Dirty bits if the extended feature register
reports it. Relevant AMD IOMMU SDM ref [0], "1.3.8 Enhanced Support for
Access and Dirty Bits".

To enable it, set the DTE flag in bits 7 and 8 to enable access, or
access+dirty. With that, the IOMMU starts marking the D and A flags on
every Memory Request or ATS translation request. It is up to the VMM to
steer whether to enable dirty tracking or not, rather than the IOMMU
doing so wrongly. Relevant AMD IOMMU SDM ref [0], "Table 7. Device Table
Entry (DTE) Field Definitions", particularly the entry "HAD".

Toggling it on and off is relatively simple: set two bits in the DTE and
flush the device's DTE cache.

To get what has been dirtied, use the existing AMD io-pgtable support by
walking the page tables over each IOVA with fetch_pte(). The IOTLB
flushing is left to the caller (much like unmap), and
iommu_dirty_bitmap_record() is what adds page ranges to invalidate. This
allows the caller to batch the flush over a big span of IOVA space,
without the iommu driver having to decide when to flush.

Worthwhile sections from the AMD IOMMU SDM:

  "2.2.3.1 Host Access Support"
  "2.2.3.2 Host Dirty Support"

For details on how the IOMMU hardware updates the dirty bit, and what it
expects from its subsequent clearing by the CPU, see:

  "2.2.7.4 Updating Accessed and Dirty Bits in the Guest Address Tables"
  "2.2.7.5 Clearing Accessed and Dirty Bits"

Quoting the SDM:

  "The setting of accessed and dirty status bits in the page tables is
  visible to both the CPU and the peripheral when sharing guest page
  tables. The IOMMU interlocked operations to update A and D bits must be
  64-bit operations and naturally aligned on a 64-bit boundary"

For the IOMMU update sequence for the Dirty bit, it essentially states:

1. Decode the read and write intent from the memory access.
2. If P=0 in the page descriptor, fail the access.
3. Compare the A & D bits in the descriptor with the read and write
   intent in the request.
4. If the A or D bits need to be updated in the descriptor:
   * Start atomic operation.
   * Read the descriptor as a 64-bit access.
   * If the descriptor no longer appears to require an update, release
     the atomic lock with no further action and continue to step 5.
   * Calculate the new A & D bits.
   * Write the descriptor as a 64-bit access.
   * End atomic operation.
5. Continue to the next stage of translation or to the memory access.

Access/Dirty bit readout also needs to consider the non-default page
sizes (aka replicated PTEs, as mentioned by the manual), as AMD supports
all powers of two (except 512G) page sizes.

Select IOMMUFD_DRIVER only if IOMMUFD is enabled, considering that IOMMU
dirty tracking requires IOMMUFD.

Link: https://lore.kernel.org/r/20231024135109.73787-12-joao.m.martins@oracle.com
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

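Step 4 of that sequence, modeled in C for clarity: the hardware's
interlocked A/D update behaves like a 64-bit compare-and-exchange loop.
The bit positions below are illustrative, not the AMD v1 PTE layout:

  #include <linux/atomic.h>
  #include <linux/bits.h>

  #define PTE_A  BIT_ULL(5)  /* illustrative accessed bit */
  #define PTE_D  BIT_ULL(6)  /* illustrative dirty bit */

  static void pte_update_ad(u64 *ptep, bool write_intent)
  {
      u64 old = READ_ONCE(*ptep);
      u64 new;

      do {
          new = old | PTE_A;     /* accessed on any access */
          if (write_intent)
              new |= PTE_D;      /* dirty only on write intent */
          if (new == old)
              return;            /* no update needed: release with no action */
      } while (!try_cmpxchg64(ptep, &old, new));  /* old reloaded on failure */
  }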