path: root/drivers/iommu/amd/iommu.c

2022-09-07  iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table  (Suravee Suthikulpanit)

    Currently, PPR/ATS can be enabled only if the domain is of the identity-mapping type. However, when allowing the IOMMU v2 page table to be used for the DMA-API, that check is no longer valid. Update the sanity check so that it applies only when the AMD_IOMMU_V1 page table mode is in use.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220825063939.8360-6-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-09-07  iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking  (Suravee Suthikulpanit)

    The current function that enables IOMMU v2 also locks the domain. To allow the same code to be reused in code paths where the domain has already been locked, refactor the function to separate the locking from the enabling logic.

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220825063939.8360-5-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-09-07  iommu/amd: Add map/unmap_pages() iommu_domain_ops callback support  (Vasant Hegde)

    Implement the map_pages() and unmap_pages() callbacks for the AMD IOMMU driver to allow the iommu core to map and unmap multiple pages per call. Also deprecate the single-page map/unmap callbacks. Finally, since the gather structure is not updated by iommu_v1_unmap_pages(), pass NULL instead of a gather pointer to iommu_v1_unmap_pages().

    Suggested-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220825063939.8360-4-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

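    For reference, a minimal sketch of what hooking up a batched map_pages() callback looks like. The callback signature matches struct iommu_domain_ops in the iommu core; the example_* names and the single-page helper below are made up for illustration and are not the driver's actual code.

        #include <linux/iommu.h>

        /* Stand-in for the driver's real page-table insertion path. */
        static int example_map_one(struct iommu_domain *dom, unsigned long iova,
                                   phys_addr_t paddr, size_t pgsize, int prot, gfp_t gfp)
        {
                return 0;
        }

        /* Signature as defined by struct iommu_domain_ops::map_pages. */
        static int example_map_pages(struct iommu_domain *dom, unsigned long iova,
                                     phys_addr_t paddr, size_t pgsize, size_t pgcount,
                                     int prot, gfp_t gfp, size_t *mapped)
        {
                size_t i;

                for (i = 0; i < pgcount; i++) {
                        int ret = example_map_one(dom, iova + i * pgsize,
                                                  paddr + i * pgsize, pgsize, prot, gfp);
                        if (ret)
                                return ret;     /* *mapped tells the core how far we got */
                        *mapped += pgsize;
                }
                return 0;
        }

        static const struct iommu_domain_ops example_domain_ops = {
                /* .map/.unmap are not set; the core calls the batched variant. */
                .map_pages = example_map_pages,
        };
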
2022-09-07  iommu/amd: Clean up bus_set_iommu()  (Robin Murphy)

    Stop calling bus_set_iommu() since it's now unnecessary, and garbage-collect the last remnants of amd_iommu_init_api().

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/6bcc367e8802ae5a2b2840cbe4e9661ee024e80e.1660572783.git.robin.murphy@arm.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-09-07  iommu/amd: Handle race between registration and device probe  (Robin Murphy)

    As for the Intel driver, make sure the AMD driver can cope with seeing .probe_device calls without having to wait for all known instances to register first.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/a8d8ebe12b411d28972f1ab928c6db92e8913cf5.1660572783.git.robin.murphy@arm.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-09-07  iommu: Retire iommu_capable()  (Robin Murphy)

    With all callers now converted to the device-specific version, retire the old bus-based interface, and give drivers the chance to indicate accurate per-instance capabilities.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/d8bd8777d06929ad8f49df7fc80e1b9af32a41b5.1660574547.git.robin.murphy@arm.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-09-07  iommu/amd: use full 64-bit value in build_completion_wait()  (John Sperbeck)

    We started using a 64-bit completion value. Unfortunately, only the low 32 bits were stored, so a very large completion value would never be matched in iommu_completion_wait().

    Fixes: c69d89aff393 ("iommu/amd: Use 4K page for completion wait write-back semaphore")
    Signed-off-by: John Sperbeck <jsperbeck@google.com>
    Link: https://lore.kernel.org/r/20220801192229.3358786-1-jsperbeck@google.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

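    A standalone illustration of the failure mode (not the driver code itself; the variable names are made up): once the completion counter grows past 32 bits, a value stored in a 32-bit variable can never compare equal to the full 64-bit value being waited on.

        #include <stdint.h>
        #include <stdio.h>

        static uint32_t stored_lo32;   /* buggy: truncated store */
        static uint64_t stored_full;   /* fixed: full 64-bit store */

        int main(void)
        {
                uint64_t data = 0x100000001ULL;     /* completion value > 32 bits */

                stored_lo32 = (uint32_t)data;       /* old behaviour: high bits lost */
                stored_full = data;                 /* fixed behaviour */

                printf("truncated match: %d\n", (uint64_t)stored_lo32 == data); /* 0 */
                printf("full match:      %d\n", stored_full == data);           /* 1 */
                return 0;
        }
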
2022-07-29  Merge branches 'arm/exynos', 'arm/mediatek', 'arm/msm', 'arm/smmu', 'virtio', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel)

2022-07-15  iommu/amd: Do not support IOMMU_DOMAIN_IDENTITY after SNP is enabled  (Suravee Suthikulpanit)

    Once SNP is enabled (by executing the SNP_INIT command), the IOMMU can no longer support the passthrough domain (i.e. IOMMU_DOMAIN_IDENTITY). The SNP_INIT command is issued early in the boot process and would fail if the kernel is configured to default to passthrough mode.

    After the system has booted, users can try to change the IOMMU domain type of a particular IOMMU group. In this case, the IOMMU driver needs to check the SNP-enable status and return failure when a change to the identity domain type is requested. Therefore, return failure when trying to allocate an identity domain.

    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220713225651.20758-9-suravee.suthikulpanit@amd.com
    [ joro: Removed WARN_ON_ONCE() ]
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-15  iommu/amd: Set translation valid bit only when IO page tables are in use  (Suravee Suthikulpanit)

    On AMD systems with SNP enabled, the IOMMU hardware checks the host translation valid (TV) and guest translation valid (GV) bits in the device table entry (DTE) before accessing the corresponding page tables. However, the current IOMMU driver sets the TV bit for all devices regardless of whether the host page table is in use. This results in an ILLEGAL_DEV_TABLE_ENTRY event for devices that do not have the host page table root pointer set up.

    Therefore, when SNP is enabled, only set the TV bit when DMA remapping is not used, which is when the domain ID in the AMD IOMMU device table entry (DTE) is zero.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220713225651.20758-8-suravee.suthikulpanit@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-15  iommu: remove the put_resv_regions method  (Christoph Hellwig)

    All drivers that implement get_resv_regions just use generic_put_resv_regions to implement the put side. Remove the indirection and document the allocation constraints.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20220708080616.238833-4-hch@lst.de
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update amd_iommu_fault structure to include PCI seg ID  (Vasant Hegde)

    Rename 'device_id' to 'sbdf' and extend it to 32 bits so that the PCI segment ID can be passed to ppr_notifier(). Also pass the PCI segment ID to pci_get_domain_bus_and_slot() instead of the default value.

    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-36-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Print PCI segment ID in error log messages  (Vasant Hegde)

    Print the PCI segment ID along with the BDF. Useful for debugging.

    Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-34-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Specify PCI segment ID when getting pci device  (Suravee Suthikulpanit)

    Upcoming AMD systems can have multiple PCI segments. Hence pass the PCI segment ID to pci_get_domain_bus_and_slot() instead of '0'.

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-32-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Introduce get_device_sbdf_id() helper function  (Suravee Suthikulpanit)

    The current get_device_id() only provides the 16-bit PCI device ID (i.e. BDF). With multiple-PCI-segment support, the helper needs to be extended to include the PCI segment ID. So introduce a new helper function, get_device_sbdf_id(), to replace the current get_pci_device_id().

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-30-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

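    A standalone illustration of the seg+BDF ("sbdf") packing this helper is built around: the 16-bit PCI segment in the upper half of a 32-bit value and the 16-bit BDF in the lower half. The macro names are made up for the example; the driver's actual macros (added later in this series) may differ.

        #include <stdint.h>
        #include <stdio.h>

        #define EX_SBDF(seg, devid)     ((((uint32_t)(seg)) << 16) | ((devid) & 0xffff))
        #define EX_SBDF_TO_SEG(sbdf)    (((sbdf) >> 16) & 0xffff)
        #define EX_SBDF_TO_DEVID(sbdf)  ((sbdf) & 0xffff)

        int main(void)
        {
                uint32_t sbdf = EX_SBDF(0x0001, 0x3a00);   /* segment 1, BDF 3a:00.0 */

                printf("sbdf=0x%08x seg=0x%04x devid=0x%04x\n",
                       sbdf, EX_SBDF_TO_SEG(sbdf), EX_SBDF_TO_DEVID(sbdf));
                return 0;
        }
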
2022-07-07  iommu/amd: Flush upto last_bdf only  (Vasant Hegde)

    Fix amd_iommu_flush_dte_all() and amd_iommu_flush_tlb_all() to flush up to last_bdf only.

    Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-29-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Remove global amd_iommu_[dev_table/alias_table/last_bdf]  (Suravee Suthikulpanit)

    Replace them with the per PCI segment device table. Also remove the dev_table_size, alias_table_size and amd_iommu_last_bdf variables.

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-28-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update set_dev_entry_bit() and get_dev_entry_bit()  (Suravee Suthikulpanit)

    Update them to take a pointer to the per PCI segment device table. Also pass struct amd_iommu as a parameter to amd_iommu_apply_erratum_63(), since it is needed when setting up the DTE.

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-27-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update set_dte_irq_entry  (Suravee Suthikulpanit)

    Start using the per PCI segment device table instead of the global device table.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-25-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update dump_dte_entry  (Suravee Suthikulpanit)

    Start using the per PCI segment device table instead of the global device table.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-24-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update iommu_ignore_device  (Suravee Suthikulpanit)

    Start using the per PCI segment device table instead of the global device table.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-23-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update set_dte_entry and clear_dte_entry  (Suravee Suthikulpanit)

    Start using per PCI segment data structures instead of global data structures.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-22-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Convert to use per PCI segment rlookup_table  (Vasant Hegde)

    Then, remove the global amd_iommu_rlookup_table and rlookup_table_size.

    Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-21-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update alloc_irq_table and alloc_irq_index  (Suravee Suthikulpanit)

    Pass the amd_iommu structure as a parameter to these functions, since it is needed to retrieve the relevant tables inside them.

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-20-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update amd_irte_ops functions  (Suravee Suthikulpanit)

    Pass the amd_iommu structure as a parameter to the amd_irte_ops functions, since it is needed to activate/deactivate the IOMMU.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-19-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Introduce struct amd_ir_data.iommu  (Suravee Suthikulpanit)

    Add a pointer to struct amd_iommu to the amd_ir_data structure, which can be used to correlate interrupt remapping data to a per-PCI-segment interrupt remapping table.

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-18-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update irq_remapping_alloc to use IOMMU lookup helper function  (Suravee Suthikulpanit)

    To allow IOMMU rlookup using both the PCI segment and the device ID.

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-17-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Convert to use rlookup_amd_iommu helper function  (Suravee Suthikulpanit)

    Use the rlookup_amd_iommu() helper function, which returns the per PCI segment rlookup_table.

    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-16-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Convert to use per PCI segment irq_lookup_table  (Vasant Hegde)

    Then, remove the global irq_lookup_table.

    Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-15-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Introduce per PCI segment unity map list  (Vasant Hegde)

    Newer AMD systems can support multiple PCI segments. In order to support multiple PCI segments, the IVMD table in the IVRS structure is enhanced to include a PCI segment ID. Update the ivmd_header structure to include "pci_seg".

    Also introduce a per PCI segment unity map list. It will replace the global amd_iommu_unity_map list.

    Note that the previously-zero "reserved" field in the IVMD table is used to carry the "pci_seg id". This takes care of backward compatibility (a new kernel will work fine on older systems).

    Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-10-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Introduce per PCI segment alias_table  (Suravee Suthikulpanit)

    This will replace the global alias table (amd_iommu_alias_table).

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-9-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Introduce per PCI segment dev_data_list  (Vasant Hegde)

    This will replace the global dev_data_list.

    Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-7-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Introduce per PCI segment rlookup table  (Suravee Suthikulpanit)

    This will replace the global rlookup table (amd_iommu_rlookup_table). Add helper functions to set/get the rlookup table for the given device. Also add macros to get seg/devid from sbdf.

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-5-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Introduce per PCI segment device table  (Suravee Suthikulpanit)

    Introduce a per PCI segment device table. All IOMMUs within the segment will share this device table. This will replace the global device table, i.e. amd_iommu_dev_table. Also introduce a helper function to get the device table for the given IOMMU.

    Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-4-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-07-07  iommu/amd: Update struct iommu_dev_data definition  (Vasant Hegde)

    struct iommu_dev_data contains a member "pdev" that points to a pci_dev. This is valid only for PCI devices; for other devices it is NULL, which causes unnecessary "pdev != NULL" checks in various places.

    Replace the "struct pci_dev" member with "struct device" and use to_pci_dev() to get the PCI device reference as needed. Also adjust setup_aliases() and clone_aliases() accordingly.

    No functional change intended.

    Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
    Link: https://lore.kernel.org/r/20220706113825.25582-2-vasant.hegde@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

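    A minimal sketch of the generic-device pattern the change moves to: keep a struct device pointer and convert to a pci_dev only after checking the bus type. The function name is made up for illustration; dev_is_pci(), to_pci_dev() and PCI_DEVID() are standard kernel helpers.

        #include <linux/device.h>
        #include <linux/pci.h>

        static u16 example_get_bdf(struct device *dev)
        {
                struct pci_dev *pdev;

                if (!dev_is_pci(dev))           /* non-PCI devices: nothing to report */
                        return 0;

                pdev = to_pci_dev(dev);         /* safe only after the dev_is_pci() check */
                return PCI_DEVID(pdev->bus->number, pdev->devfn);
        }
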
2022-04-28  iommu: Introduce the domain op enforce_cache_coherency()  (Jason Gunthorpe)

    This new mechanism will replace using IOMMU_CAP_CACHE_COHERENCY and IOMMU_CACHE to control the no-snoop blocking behavior of the IOMMU.

    Currently only Intel and AMD IOMMUs are known to support this feature. They both implement it as an IOPTE bit, that when set, will cause PCIe TLPs to that IOVA with the no-snoop bit set to be treated as though the no-snoop bit was clear.

    The new API is triggered by calling enforce_cache_coherency() before mapping any IOVA to the domain which globally switches on no-snoop blocking. This allows other implementations that might block no-snoop globally and outside the IOPTE - AMD also documents such a HW capability.

    Leave AMD out of sync with Intel and have it block no-snoop even for in-kernel users. This can be trivially resolved in a follow up patch.

    Only VFIO needs to call this API because it does not have detailed control over the device to avoid requesting no-snoop behavior at the device level. Other places using domains with real kernel drivers should simply avoid asking their devices to set the no-snoop bit.

    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
    Acked-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
    Link: https://lore.kernel.org/r/1-v3-2cf356649677+a32-intel_no_snoop_jgg@nvidia.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-04-28  iommu/amd: Indicate whether DMA remap support is enabled  (Mario Limonciello)

    Bit 1 of the IVRS IVinfo field indicates that the IOMMU has been used for pre-boot DMA protection. Export this capability to allow other places in the kernel to check for it on AMD systems.

    Link: https://www.amd.com/system/files/TechDocs/48882_IOMMU.pdf
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/ce7627fa1c596878ca6515dd9d4381a45b6ee38c.1650878781.git.robin.murphy@arm.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-04-28  iommu/amd: Enable swiotlb in all cases  (Mario Limonciello)

    Previously the AMD IOMMU would only enable SWIOTLB in certain circumstances:
     * IOMMU in passthrough mode
     * SME enabled

    This logic, however, doesn't work when an untrusted device that doesn't do page-aligned DMA transactions is plugged in. The expectation is that a bounce buffer is used for those transactions. This fails like this:

      swiotlb buffer is full (sz: 4096 bytes), total 0 (slots), used 0 (slots)

    That happens because the bounce buffers have been allocated and then freed during startup, but the bounce-buffering code expects that all IOMMUs have left it enabled.

    Remove the criteria to set up bounce buffers on AMD systems to ensure they're always available for supporting untrusted devices.

    Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
    Suggested-by: Christoph Hellwig <hch@infradead.org>
    Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20220404204723.9767-2-mario.limonciello@amd.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-03-08  Merge branches 'arm/mediatek', 'arm/msm', 'arm/renesas', 'arm/rockchip', 'arm/smmu', 'x86/vt-d' and 'x86/amd' into next  (Joerg Roedel)

2022-02-28  iommu: Split struct iommu_ops  (Lu Baolu)

    Move the domain specific operations out of struct iommu_ops into a new structure that only has domain specific operations. This solves the problem of needing to know if the method vector for a given operation needs to be retrieved from the device or the domain.

    Logically the domain ops are the ones that make sense for external subsystems and endpoint drivers to use, while device ops, with the sole exception of domain_alloc, are IOMMU API internals.

    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
    Link: https://lore.kernel.org/r/20220216025249.3459465-10-baolu.lu@linux.intel.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-02-28  iommu: Remove unused argument in is_attach_deferred  (Lu Baolu)

    The is_attach_deferred iommu_ops callback is a device op. The domain argument is unnecessary and never used. Remove it to make the code clean.

    Suggested-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
    Link: https://lore.kernel.org/r/20220216025249.3459465-9-baolu.lu@linux.intel.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2022-02-14  iommu/amd: Recover from event log overflow  (Lennert Buytenhek)

    The AMD IOMMU logs I/O page faults and such to a ring buffer in system memory, and this ring buffer can overflow. The AMD IOMMU spec has the following to say about the interrupt status bit that signals this overflow condition:

      EventOverflow: Event log overflow. RW1C. Reset 0b. 1 = IOMMU event log overflow has occurred. This bit is set when a new event is to be written to the event log and there is no usable entry in the event log, causing the new event information to be discarded. An interrupt is generated when EventOverflow = 1b and MMIO Offset 0018h[EventIntEn] = 1b. No new event log entries are written while this bit is set.

      Software Note: To resume logging, clear EventOverflow (W1C), and write a 1 to MMIO Offset 0018h[EventLogEn].

    The AMD IOMMU driver doesn't currently implement this recovery sequence, meaning that if a ring buffer overflow occurs, logging of EVT/PPR/GA events will cease entirely.

    This patch implements the spec-mandated reset sequence, with the minor tweak that the hardware seems to want to have a 0 written to MMIO Offset 0018h[EventLogEn] first, before writing a 1 into this field, or the IOMMU won't actually resume logging events.

    Signed-off-by: Lennert Buytenhek <buytenh@arista.com>
    Cc: stable@vger.kernel.org
    Link: https://lore.kernel.org/r/YVrSXEdW2rzEfOvk@wantstofly.org
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

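    A hedged sketch of the recovery sequence described above. Only the control-register offset 0018h comes from the spec text quoted in the commit; the other macro names, the status-register offset and the bit positions are placeholders, not the driver's actual definitions.

        #include <linux/io.h>
        #include <linux/bits.h>

        #define EX_MMIO_CONTROL_OFFSET   0x0018        /* from the spec excerpt above */
        #define EX_MMIO_STATUS_OFFSET    0x2020        /* placeholder offset */
        #define EX_CTRL_EVT_LOG_EN       BIT_ULL(2)    /* placeholder bit position */
        #define EX_STATUS_EVT_OVERFLOW   BIT_ULL(0)    /* placeholder bit position */

        static void example_restart_event_log(void __iomem *mmio)
        {
                u64 ctrl;

                /* 1. Acknowledge the overflow: EventOverflow is write-1-to-clear. */
                writeq(EX_STATUS_EVT_OVERFLOW, mmio + EX_MMIO_STATUS_OFFSET);

                /* 2. Toggle EventLogEn off and back on; per the commit message the
                 *    hardware wants the 0 write before it resumes logging. */
                ctrl = readq(mmio + EX_MMIO_CONTROL_OFFSET);
                writeq(ctrl & ~EX_CTRL_EVT_LOG_EN, mmio + EX_MMIO_CONTROL_OFFSET);
                writeq(ctrl | EX_CTRL_EVT_LOG_EN, mmio + EX_MMIO_CONTROL_OFFSET);
        }
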
2021-11-04  Merge tag 'iommu-updates-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  (Linus Torvalds)

    Pull iommu updates from Joerg Roedel:

     - Intel IOMMU updates from Lu Baolu:
        - Dump DMAR translation structure when DMA fault occurs
        - An optimization in the page table manipulation code
        - Use second level for GPA->HPA translation
        - Various cleanups

     - Arm SMMU updates from Will:
        - Minor optimisations to SMMUv3 command creation and submission
        - Numerous new compatible strings for Qualcomm SMMUv2 implementations

     - Fixes for the SWIOTLB based implementation of dma-iommu code for untrusted devices

     - Add support for r8a779a0 to the Renesas IOMMU driver and DT matching code for r8a77980

     - A couple of cleanups and fixes for the Apple DART IOMMU driver

     - Make use of generic report_iommu_fault() interface in the AMD IOMMU driver

     - Various smaller fixes and cleanups

    * tag 'iommu-updates-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (35 commits)
      iommu/dma: Fix incorrect error return on iommu deferred attach
      iommu/dart: Initialize DART_STREAMS_ENABLE
      iommu/dma: Use kvcalloc() instead of kvzalloc()
      iommu/tegra-smmu: Use devm_bitmap_zalloc when applicable
      iommu/dart: Use kmemdup instead of kzalloc and memcpy
      iommu/vt-d: Avoid duplicate removing in __domain_mapping()
      iommu/vt-d: Convert the return type of first_pte_in_page to bool
      iommu/vt-d: Clean up unused PASID updating functions
      iommu/vt-d: Delete dev_has_feat callback
      iommu/vt-d: Use second level for GPA->HPA translation
      iommu/vt-d: Check FL and SL capability sanity in scalable mode
      iommu/vt-d: Remove duplicate identity domain flag
      iommu/vt-d: Dump DMAR translation structure when DMA fault occurs
      iommu/vt-d: Do not falsely log intel_iommu is unsupported kernel option
      iommu/arm-smmu-qcom: Request direct mapping for modem device
      iommu: arm-smmu-qcom: Add compatible for QCM2290
      dt-bindings: arm-smmu: Add compatible for QCM2290 SoC
      iommu/arm-smmu-qcom: Add SM6350 SMMU compatible
      dt-bindings: arm-smmu: Add compatible for SM6350 SoC
      iommu/arm-smmu-v3: Properly handle the return value of arm_smmu_cmdq_build_cmd()
      ...

2021-10-04  treewide: Replace the use of mem_encrypt_active() with cc_platform_has()  (Tom Lendacky)

    Replace uses of mem_encrypt_active() with calls to cc_platform_has() with the CC_ATTR_MEM_ENCRYPT attribute.

    Remove the implementation of mem_encrypt_active() across all arches.

    For s390, since the default implementation of the cc_platform_has() matches the s390 implementation of mem_encrypt_active(), cc_platform_has() does not need to be implemented in s390 (the config option ARCH_HAS_CC_PLATFORM is not set).

    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Link: https://lkml.kernel.org/r/20210928191009.32551-9-bp@alien8.de

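    A small illustrative before/after of the conversion; the surrounding function is made up, while cc_platform_has() and CC_ATTR_MEM_ENCRYPT are the interfaces named in the commit.

        #include <linux/types.h>
        #include <linux/cc_platform.h>

        static bool example_need_bounce_buffers(void)
        {
                /* old: return mem_encrypt_active(); */
                return cc_platform_has(CC_ATTR_MEM_ENCRYPT);
        }
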
2021-09-29  iommu/amd: Use report_iommu_fault()  (Lennert Buytenhek)

    This patch makes iommu/amd call report_iommu_fault() when an I/O page fault occurs, which has two effects:

     1) It allows device drivers to register a callback to be notified of I/O page faults, via the iommu_set_fault_handler() API.

     2) It triggers the io_page_fault tracepoint in report_iommu_fault() when an I/O page fault occurs.

    The latter point is the main aim of this patch, as it allows rasdaemon-like daemons to be notified of I/O page faults, and to possibly initiate corrective action in response.

    A number of other IOMMU drivers already use report_iommu_fault(), and I/O page faults on those IOMMUs therefore already trigger this tracepoint -- but this isn't yet the case for AMD-Vi and Intel DMAR.

    The AMD IOMMU specification suggests that the bit in an I/O page fault event log entry that signals whether an I/O page fault was for a read request or for a write request is only meaningful when the faulting access was to a present page, but some testing on a Ryzen 3700X suggests that this bit encodes the correct value even for I/O page faults to non-present pages, and therefore, this patch passes the R/W information up the stack even for I/O page faults to non-present pages.

    Signed-off-by: Lennert Buytenhek <buytenh@arista.com>
    Link: https://lore.kernel.org/r/YVLyBW97vZLpOaAp@wantstofly.org
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

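    A sketch of how a consumer could use the notifications enabled by this change: register a fault handler on the domain attached to the device. The example_* names are illustrative; iommu_set_fault_handler(), iommu_get_domain_for_dev() and IOMMU_FAULT_WRITE are the core's interfaces.

        #include <linux/device.h>
        #include <linux/errno.h>
        #include <linux/iommu.h>

        static int example_iommu_fault(struct iommu_domain *domain, struct device *dev,
                                       unsigned long iova, int flags, void *token)
        {
                dev_err(dev, "I/O page fault at iova 0x%lx (%s)\n", iova,
                        (flags & IOMMU_FAULT_WRITE) ? "write" : "read");
                return -ENOSYS;   /* not handled here; let the core report it too */
        }

        static void example_register_handler(struct device *dev)
        {
                struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

                if (domain)
                        iommu_set_fault_handler(domain, example_iommu_fault, NULL);
        }
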
2021-08-20  Merge branches 'apple/dart', 'arm/smmu', 'iommu/fixes', 'x86/amd', 'x86/vt-d' and 'core' into next  (Joerg Roedel)

2021-08-18  iommu/amd: Prepare for multiple DMA domain types  (Robin Murphy)

    The DMA ops reset/setup can simply be unconditional, since iommu-dma already knows only to touch DMA domains.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/6450b4f39a5a086d505297b4a53ff1e4a7a0fe7c.1628682049.git.robin.murphy@arm.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2021-08-18  iommu/amd: Drop IOVA cookie management  (Robin Murphy)

    The core code bakes its own cookies now.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/648e74e7422caa6a7db7fb0c36813c7bd2007af8.1628682048.git.robin.murphy@arm.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

2021-08-02  Merge remote-tracking branch 'korg/core' into x86/amd  (Joerg Roedel)

2021-08-02  iommu/amd: Use only natural aligned flushes in a VM  (Nadav Amit)

    When running on an AMD vIOMMU, it is better to avoid TLB flushes of unmodified PTEs. vIOMMUs require the hypervisor to synchronize the virtualized IOMMU's PTEs with the physical ones, and this process induces overhead.

    The AMD IOMMU allows flushing any range whose size is a power of 2 and which is naturally aligned. So when running on top of a vIOMMU, break the range into naturally aligned sub-ranges and flush each one separately. This approach is better when running with a vIOMMU, while on physical IOMMUs the penalty of IOTLB misses due to unnecessarily flushed entries is likely to be low.

    Repurpose (i.e., keep the name, change the logic) domain_flush_pages() so it is used to choose whether to perform one flush of the whole range or multiple ones to avoid flushing unnecessary ranges. Use NpCache, as usual, to infer whether the IOMMU is physical or virtual.

    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: Will Deacon <will@kernel.org>
    Cc: Jiajun Cao <caojiajun@vmware.com>
    Cc: Lu Baolu <baolu.lu@linux.intel.com>
    Cc: iommu@lists.linux-foundation.org
    Cc: linux-kernel@vger.kernel.org
    Suggested-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Nadav Amit <namit@vmware.com>
    Link: https://lore.kernel.org/r/20210723093209.714328-8-namit@vmware.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

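    A standalone illustration of the splitting strategy described above: break an arbitrary [iova, iova + size) range into naturally aligned power-of-two sub-ranges, each of which could be flushed with a single command. This is a sketch of the idea, not the driver's domain_flush_pages() code.

        #include <stdint.h>
        #include <stdio.h>

        static void flush_naturally_aligned(uint64_t iova, uint64_t size)
        {
                while (size) {
                        /* Largest power of two allowed by the current alignment... */
                        unsigned int align = iova ? __builtin_ctzll(iova) : 63;
                        /* ...and by the amount that is left to flush. */
                        unsigned int fit = 63 - __builtin_clzll(size);
                        uint64_t chunk = 1ULL << (align < fit ? align : fit);

                        printf("flush [0x%llx, 0x%llx)\n",
                               (unsigned long long)iova,
                               (unsigned long long)(iova + chunk));

                        iova += chunk;
                        size -= chunk;
                }
        }

        int main(void)
        {
                /* Example: 12 pages starting at page 3 (4 KiB pages). */
                flush_naturally_aligned(3 * 4096ULL, 12 * 4096ULL);
                return 0;
        }
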