path: root/drivers/iommu/io-pgtable-arm.c
Age  Commit message  Author
2025-05-23  Merge branches 'fixes', 'apple/dart', 'arm/smmu/updates', 'arm/smmu/bindings', 'fsl/pamu', 'mediatek', 'renesas/ipmmu', 's390', 'intel/vt-d', 'amd/amd-vi' and 'core' into next  (Joerg Roedel)
2025-05-20  iommu/io-pgtable-arm: Add quirk to quiet WARN_ON()  (Rob Clark)
In situations where mapping/unmapping sequence can be controlled by userspace, attempting to map over a region that has not yet been unmapped is an error. But not something that should spam dmesg. Now that there is a quirk, we can also drop the selftest_running flag, and use the quirk instead for selftests. Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Rob Clark <robdclark@chromium.org> Link: https://lore.kernel.org/r/20250519175348.11924-6-robdclark@gmail.com [will: Rename quirk to IO_PGTABLE_QUIRK_NO_WARN per Robin's suggestion] Signed-off-by: Will Deacon <will@kernel.org>
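A minimal sketch of how such a quirk-gated warning might look in a map path. The quirk name and the quirks field of io_pgtable_cfg come from the commit; the surrounding function is an illustrative assumption, not the actual driver code.

    /* Illustrative sketch only: gate the "mapping over a live PTE" warning
     * on the new quirk instead of the old selftest_running flag. */
    static int example_map_leaf(struct io_pgtable_cfg *cfg, u64 *ptep, u64 new_pte)
    {
        if (*ptep) {
            /* Userspace-controlled map/unmap sequences can legitimately
             * hit this; only warn when the quirk is not set. */
            WARN_ON(!(cfg->quirks & IO_PGTABLE_QUIRK_NO_WARN));
            return -EEXIST;
        }

        *ptep = new_pte;
        return 0;
    }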
2025-04-28  iommu/io-pgtable-arm: dynamically allocate selftest device struct  (Arnd Bergmann)
In general a 'struct device' is way too large to be put on the kernel stack. Apparently something just caused it to grow slightly larger, which pushed the arm_lpae_do_selftests() function over the warning limit in some configurations:

    drivers/iommu/io-pgtable-arm.c:1423:19: error: stack frame size (1032) exceeds limit (1024) in 'arm_lpae_do_selftests' [-Werror,-Wframe-larger-than]
     1423 | static int __init arm_lpae_do_selftests(void)
          |                   ^

Change the function to use a dynamically allocated faux_device instead of the on-stack device structure. Fixes: ca25ec247aad ("iommu/io-pgtable-arm: Remove iommu_dev==NULL special case") Link: https://lore.kernel.org/all/ab75a444-22a1-47f5-b3c0-253660395b5a@arm.com/ Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20250423164826.2931382-1-arnd@kernel.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
2025-04-17  iommu: Update various drivers to pass in lg2sz instead of order to iommu pages  (Jason Gunthorpe)
Convert most of the places calling get_order() as an argument to the iommu-pages allocator into order_base_2() or the _sz flavour instead. These places already have an exact size, there is no particular reason to use order here. Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/19-v4-c8663abbb606+3f7-iommu_pages_jgg@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
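As a rough illustration of the conversion described above; the "_sz" helper name below follows the commit's wording but is an assumption, so check iommu-pages.h for the exact spelling.

    /* Before: each call site converts an exact byte size into a page order. */
    p = iommu_alloc_pages_node(nid, GFP_KERNEL, get_order(size));

    /* After (sketch): pass the exact size to the "_sz" flavour and let the
     * allocator do the conversion once, in one place. */
    p = iommu_alloc_pages_node_sz(nid, GFP_KERNEL, size);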
2025-04-17  iommu/pages: Move the __GFP_HIGHMEM checks into the common code  (Jason Gunthorpe)
The entire allocator API is built around using the kernel virtual address, it is illegal to pass GFP_HIGHMEM in as a GFP flag. Block it in the common code. Remove the duplicated checks from drivers. Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/14-v4-c8663abbb606+3f7-iommu_pages_jgg@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
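A sketch of the kind of centralised check the commit describes, placed in the common allocator so individual drivers no longer need their own copies; the wrapper function shown is illustrative, not the literal upstream code.

    static void *example_iommu_alloc(int nid, gfp_t gfp, unsigned int order)
    {
        /* The allocator hands back kernel virtual addresses, so highmem
         * pages can never be valid here; reject them in one place. */
        if (WARN_ON(gfp & __GFP_HIGHMEM))
            return NULL;

        return iommu_alloc_pages_node(nid, gfp, order);
    }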
2025-04-17  iommu/pages: Remove the order argument to iommu_free_pages()  (Jason Gunthorpe)
Now that we have a folio under the allocation iommu_free_pages() can know the order of the original allocation and do the correct thing to free it. The next patch will rename iommu_free_page() to iommu_free_pages() so we have naming consistency with iommu_alloc_pages_node(). Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/5-v4-c8663abbb606+3f7-iommu_pages_jgg@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2025-01-07  iommu/io-pgtable-arm: Add way to debug pgtable walk  (Rob Clark)
Add an io-pgtable method to walk the pgtable returning the raw PTEs that would be traversed for a given iova access. Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20241210165127.600817-4-robdclark@gmail.com [will: Removed 'arm_lpae_io_pgtable_walk_data::level' per Mostafa] Signed-off-by: Will Deacon <will@kernel.org>
2025-01-07  iommu/io-pgtable-arm: Re-use the pgtable walk for iova_to_phys  (Rob Clark)
Re-use the generic pgtable walk path. Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20241210165127.600817-3-robdclark@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2025-01-07  iommu/io-pgtable-arm: Make pgtable walker more generic  (Rob Clark)
We can re-use this basic pgtable walk logic in a few places. Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20241210165127.600817-2-robdclark@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2024-12-19  iommu/io-pgtable-arm: Fix cfg reading in arm_lpae_concat_mandatory()  (Mostafa Saleh)
The newly introduced arm_lpae_concat_mandatory() function reads the ias/oas fields from the 'io_pgtable_cfg' copy embedded inside the 'arm_lpae_io_pgtable' structure. However, this copy is not set until later in alloc_io_pgtable_ops() after the alloc() function has been called. Use the address sizes passed in the 'io_pgtable_cfg' structure when deciding whether or not to concatenate the PGD. Fixes: 4dcac8407fe1 ("iommu/io-pgtable-arm: Fix stage-2 concatenation with 16K") Signed-off-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20241215200412.561400-1-smostafa@google.com Signed-off-by: Will Deacon <will@kernel.org>
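The gist of the fix, sketched below: decide on PGD concatenation from the caller-supplied cfg rather than from the not-yet-initialised copy embedded in the arm_lpae structure. The function body is illustrative and elides the actual concatenation rule.

    static bool example_concat_mandatory(struct io_pgtable_cfg *cfg,
                                         struct arm_lpae_io_pgtable *data)
    {
        /* Wrong: data->iop.cfg is only populated later by
         * alloc_io_pgtable_ops(), so ias/oas would read as zero here. */
        /* unsigned int oas = data->iop.cfg.oas; */

        /* Right: use the address sizes the caller passed in. */
        unsigned int ias = cfg->ias;
        unsigned int oas = cfg->oas;

        (void)ias;
        (void)oas;
        return false;   /* placeholder; real decision elided */
    }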
2024-12-09  iommu/io-pgtable-arm: Add coverage for different OAS in selftest  (Mostafa Saleh)
Run selftests with different OAS values instead of hardcoding it to 48 bits. We always keep OAS >= IAS to make the config valid for stage-2. This can be further improved, if we split IAS/OAS configuration for stage-1 and stage-2 (to use input sizes compatible with VA_BITS as SMMUv3 does, or IAS > OAS which is valid for stage-1). However, that adds more complexity, and the current change improves coverage and makes it possible to test all concatenation cases. Signed-off-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20241202140604.422235-3-smostafa@google.com Signed-off-by: Will Deacon <will@kernel.org>
2024-12-09  iommu/io-pgtable-arm: Fix stage-2 concatenation with 16K  (Mostafa Saleh)
At the moment, io-pgtable-arm uses concatenation only if it is possible at level 0, which misses a case where concatenation is mandatory at level 1 according to R_SRKBC in Arm spec DDI0487 K.a. Also, that means concatenation can be used when not mandated, contradicting the comment on the code. However, these cases can only happen if the SMMUv3 driver is changed to use ias != oas for stage-2. This patch re-writes the code to use concatenation only if mandatory, fixing the missing case for level-1 and granule 16K with PA = 40 bits. Signed-off-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20241202140604.422235-2-smostafa@google.com Signed-off-by: Will Deacon <will@kernel.org>
2024-11-22  Merge tag 'iommu-updates-v6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux  (Linus Torvalds)
Pull iommu updates from Joerg Roedel:

 "Core Updates:
   - Convert call-sites using iommu_domain_alloc() to more specific versions and remove function
   - Introduce iommu_paging_domain_alloc_flags()
   - Extend support for allocating PASID-capable domains to more drivers
   - Remove iommu_present()
   - Some smaller improvements

  New IOMMU driver for RISC-V

  Intel VT-d Updates:
   - Add domain_alloc_paging support
   - Enable user space IOPFs in non-PASID and non-svm cases
   - Small code refactoring and cleanups
   - Add domain replacement support for pasid

  AMD-Vi Updates:
   - Adapt to iommu_paging_domain_alloc_flags() interface and alloc V2 page-tables by default
   - Replace custom domain ID allocator with IDA allocator
   - Add ops->release_domain() support
   - Other improvements to device attach and domain allocation code paths

  ARM-SMMU Updates:
   - SMMUv2:
     - Return -EPROBE_DEFER for client devices probing before their SMMU
     - Devicetree binding updates for Qualcomm MMU-500 implementations
   - SMMUv3:
     - Minor fixes and cleanup for NVIDIA's virtual command queue driver
   - IO-PGTable:
     - Fix indexing of concatenated PGDs and extend selftest coverage
     - Remove unused block-splitting support

  S390 IOMMU:
   - Implement support for blocking domain

  Mediatek IOMMU:
   - Enable 35-bit physical address support for mt8186

  OMAP IOMMU driver:
   - Adapt to recent IOMMU core changes and unbreak driver"

* tag 'iommu-updates-v6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux: (92 commits)
  iommu/tegra241-cmdqv: Fix alignment failure at max_n_shift
  iommu: Make set_dev_pasid op support domain replacement
  iommu/arm-smmu-v3: Make set_dev_pasid() op support replace
  iommu/vt-d: Add set_dev_pasid callback for nested domain
  iommu/vt-d: Make identity_domain_set_dev_pasid() to handle domain replacement
  iommu/vt-d: Make intel_svm_set_dev_pasid() support domain replacement
  iommu/vt-d: Limit intel_iommu_set_dev_pasid() for paging domain
  iommu/vt-d: Make intel_iommu_set_dev_pasid() to handle domain replacement
  iommu/vt-d: Add iommu_domain_did() to get did
  iommu/vt-d: Consolidate the struct dev_pasid_info add/remove
  iommu/vt-d: Add pasid replace helpers
  iommu/vt-d: Refactor the pasid setup helpers
  iommu/vt-d: Add a helper to flush cache for updating present pasid entry
  iommu: Pass old domain to set_dev_pasid op
  iommu/iova: Fix typo 'adderss'
  iommu: Add a kdoc to iommu_unmap()
  iommu/io-pgtable-arm-v7s: Remove split on unmap behavior
  iommu/io-pgtable-arm: Remove split on unmap behavior
  iommu/vt-d: Drain PRQs when domain removed from RID
  iommu/vt-d: Drop pasid requirement for prq initialization
  ...
2024-11-12  iommu/arm-smmu-v3: Use S2FWB for NESTED domains  (Jason Gunthorpe)
Force Write Back (FWB) changes how the S2 IOPTE's MemAttr field works. When S2FWB is supported and enabled the IOPTE will force cachable access to IOMMU_CACHE memory when nesting with a S1 and deny cachable access when !IOMMU_CACHE. When using a single stage of translation, a simple S2 domain, it doesn't change things for PCI devices as it is just a different encoding for the existing mapping of the IOMMU protection flags to cachability attributes. For non-PCI it also changes the combining rules when incoming transactions have inconsistent attributes. However, when used with a nested S1, FWB has the effect of preventing the guest from choosing a MemAttr in its S1 that would cause ordinary DMA to bypass the cache. Consistent with KVM we wish to deny the guest the ability to become incoherent with cached memory the hypervisor believes is cachable so we don't have to flush it. Allow NESTED domains to be created if the SMMU has S2FWB support and use S2FWB for NESTING_PARENTS. This is an additional option to CANWBS. Link: https://patch.msgid.link/r/10-v4-9e99b76f3518+3a8-smmuv3_nesting_jgg@nvidia.com Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Donald Dutile <ddutile@redhat.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-06  iommu/io-pgtable-arm: Remove split on unmap behavior  (Jason Gunthorpe)
A minority of page table implementations (arm_lpae, armv7) are unique in how they handle partial unmap of large IOPTEs. Other implementations will unmap the large IOPTE and return its length. For example if a 2M IOPTE is present and the first 4K is requested to be unmapped then unmap will remove the whole 2M and report 2M as the result. arm_lpae instead replaces the IOPTE with a table of smaller IOPTEs, unmaps the 4K and returns 4k. This is actually an illegal/non-hitless operation on at least SMMUv3 because of the BBM level 0 rules. Will says this was done to support VFIO, but upon deeper analysis this was never strictly necessary: https://lore.kernel.org/all/20241024134411.GA6956@nvidia.com/ In summary, historical VFIO supported the AMD behavior of unmapping the whole large IOPTE and returning the size, even if asked to unmap a portion. The driver would see this as a request to split a large IOPTE. Modern VFIO always unmaps entire large IOPTEs (except on AMD) and drivers don't see an IOPTE split. Given it doesn't work fully correctly on SMMUv3 and relying on ARM unique behavior would create portability problems across IOMMU drivers, retire this functionality. Outside the iommu users, this will potentially affect io_pgtable users of ARM_32_LPAE_S1, ARM_32_LPAE_S2, ARM_64_LPAE_S1, ARM_64_LPAE_S2, and ARM_MALI_LPAE formats. Cc: Boris Brezillon <boris.brezillon@collabora.com> Cc: Steven Price <steven.price@arm.com> Cc: Liviu Dudau <liviu.dudau@arm.com> Cc: dri-devel@lists.freedesktop.org Reviewed-by: Liviu Dudau <liviu.dudau@arm.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/1-v3-b3a5b5937f56+7bb-arm_no_split_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
2024-10-29  iommu/io-pgtable-arm: Add self test for the last page in the IAS  (Mostafa Saleh)
Add a case in the selftests that can detect some bugs with concatenated page tables, where it maps the biggest supported page size at the end of the IAS, this test would fail without the previous fix. Signed-off-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20241024162516.2005652-3-smostafa@google.com Signed-off-by: Will Deacon <will@kernel.org>
2024-10-29  iommu/io-pgtable-arm: Fix stage-2 map/unmap for concatenated tables  (Mostafa Saleh)
ARM_LPAE_LVL_IDX() takes into account concatenated PGDs and can return an index spanning multiple page-table pages given a sufficiently large input address. However, when the resulting index is used to calculate the number of remaining entries in the page, the possibility of concatenation is ignored and we end up computing a negative upper bound: max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start; On the map path, this results in a negative 'mapped' value being returned but on the unmap path we can leak child tables if they are skipped in __arm_lpae_free_pgtable(). Introduce an arm_lpae_max_entries() helper to convert a table index into the remaining number of entries within a single page-table page. Cc: <stable@vger.kernel.org> Signed-off-by: Mostafa Saleh <smostafa@google.com> Link: https://lore.kernel.org/r/20241024162516.2005652-2-smostafa@google.com [will: Tweaked comment and commit message] Signed-off-by: Will Deacon <will@kernel.org>
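A small self-contained illustration of the arithmetic bug: with concatenated PGDs the index can exceed the per-page entry count, so subtracting it directly from the per-table entry count goes negative unless it is first reduced modulo the table size. The entry counts and index below are invented for demonstration; the real helper is the arm_lpae_max_entries() mentioned above.

    #include <stdio.h>

    #define PTES_PER_TABLE 512    /* entries in one 4K page of 8-byte PTEs */

    /* Sketch of the helper the fix introduces: convert a (possibly
     * concatenated) table index into the entries left in *this* page. */
    static int max_entries_fixed(int idx)
    {
        return PTES_PER_TABLE - (idx % PTES_PER_TABLE);
    }

    int main(void)
    {
        int idx = 1000;    /* index landing in a second concatenated PGD page */

        printf("broken: %d\n", PTES_PER_TABLE - idx);   /* -488: negative bound */
        printf("fixed : %d\n", max_entries_fixed(idx)); /* 24 entries remain    */
        return 0;
    }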
2024-09-13  Merge branches 'fixes', 'arm/smmu', 'intel/vt-d', 'amd/amd-vi' and 'core' into next  (Joerg Roedel)
2024-08-30  iommu/io-pgtable-arm: Optimise non-coherent unmap  (Ashish Mhetre)
The current __arm_lpae_unmap() function calls dma_sync() on individual PTEs after clearing them. Overall unmap performance can be improved by around 25% for large buffer sizes by combining the syncs for adjacent leaf entries. Optimize the unmap time by clearing all the leaf entries and issuing a single dma_sync() for them. Below is a detailed analysis of the average unmap latency (in us) with and without this optimization, obtained by running dma_map_benchmark for different buffer sizes.

    Unmap latency (us)
    Size     Without optimization   With optimization   % gain with optimization
    4KB      3                      3                   0
    8KB      4                      3.8                 5
    16KB     6.1                    5.4                 11.48
    32KB     10.2                   8.5                 16.67
    64KB     18.5                   14.9                19.46
    128KB    35                     27.5                21.43
    256KB    67.5                   52.2                22.67
    512KB    127.9                  97.2                24.00
    1MB      248.6                  187.4               24.62
    2MB      65.5                   65.5                0
    4MB      119.2                  119                 0.17

Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Ashish Mhetre <amhetre@nvidia.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240806105135.218089-1-amhetre@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
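A rough before/after sketch of the shape of this optimisation; the sync helper name mirrors the driver's multi-entry PTE helpers, but the loop bodies here are illustrative rather than the literal patch.

    /* Before (sketch): clear and sync each leaf PTE individually. */
    for (i = 0; i < num_entries; i++) {
        ptep[i] = 0;
        __arm_lpae_sync_pte(&ptep[i], 1, cfg);     /* one dma_sync per PTE */
    }

    /* After (sketch): clear everything first, then sync the whole run once. */
    for (i = 0; i < num_entries; i++)
        ptep[i] = 0;
    __arm_lpae_sync_pte(ptep, num_entries, cfg);   /* single dma_sync */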
2024-08-26  iommu: Do not return 0 from map_pages if it doesn't do anything  (Jason Gunthorpe)
These three implementations of map_pages() all succeed if a mapping is requested with no read or write. Since they return back to __iommu_map() leaving the mapped output as 0 it triggers an infinite loop. Therefore nothing is using no-access protection bits. Further, VFIO and iommufd rely on iommu_iova_to_phys() to get back PFNs stored by map, if iommu_map() succeeds but iommu_iova_to_phys() fails that will create serious bugs. Thus remove this never used "nothing to do" concept and just fail map immediately. Fixes: e5fc9753b1a8 ("iommu/io-pgtable: Add ARMv7 short descriptor support") Fixes: e1d3c0fd701d ("iommu: add ARM LPAE page table allocator") Fixes: 745ef1092bcf ("iommu/io-pgtable: Move Apple DART support to its own file") Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/2-v1-1211e1294c27+4b1-iommu_no_prot_jgg@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
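Sketched in the simplest possible form; the real change touches three drivers, and the check below is only meant to convey the idea.

    /* Sketch: a mapping with neither read nor write permission has nothing
     * to install, so fail it up front instead of "succeeding" with 0 bytes
     * mapped, which sends __iommu_map() into an infinite loop. */
    if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
        return -EINVAL;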
2024-07-03  iommu/arm-smmu-v3: Enable HTTU for stage1 with io-pgtable mapping  (Kunkun Jiang)
If io-pgtable quirk flag indicates support for hardware update of dirty state, enable HA/HD bits in the SMMU CD and also set the DBM bit in the page descriptor. Now report the dirty page tracking capability of SMMUv3 and select IOMMUFD_DRIVER for ARM_SMMU_V3 if IOMMUFD is enabled. Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-6-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2024-07-03  iommu/io-pgtable-arm: Add read_and_clear_dirty() support  (Shameer Kolothum)
.read_and_clear_dirty() IOMMU domain op takes care of reading the dirty bits (i.e. PTE has DBM set and AP[2] clear) and marshalling into a bitmap of a given page size. While reading the dirty bits we also set the PTE AP[2] bit to mark it as writeable-clean depending on read_and_clear_dirty() flags.

    PTE states with respect to DBM bit:
                           DBM bit   AP[2] ("RDONLY" bit)
    1. writable_clean         1              1
    2. writable_dirty         1              0
    3. read-only              0              1

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-4-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
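A self-contained toy encoding of the state table above. The dirty/clean logic follows the commit's description; the bit positions (51 for DBM, 7 for AP[2]) reflect the LPAE descriptor layout as I understand it and should be treated as illustrative assumptions.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Assumed LPAE descriptor bits, for illustration only. */
    #define PTE_DBM        (1ULL << 51)   /* hardware may clear AP[2] on write */
    #define PTE_AP_RDONLY  (1ULL << 7)    /* AP[2]: 1 = read-only / clean      */

    static bool pte_is_dirty(uint64_t pte)
    {
        /* Dirty == DBM set and AP[2] clear ("writable_dirty" in the table). */
        return (pte & PTE_DBM) && !(pte & PTE_AP_RDONLY);
    }

    static uint64_t pte_mark_clean(uint64_t pte)
    {
        /* Setting AP[2] again turns the entry back into "writable_clean". */
        return pte | PTE_AP_RDONLY;
    }

    int main(void)
    {
        uint64_t pte = PTE_DBM;                            /* writable_dirty */

        printf("dirty before: %d\n", pte_is_dirty(pte));   /* 1 */
        pte = pte_mark_clean(pte);
        printf("dirty after : %d\n", pte_is_dirty(pte));   /* 0 */
        return 0;
    }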
2024-04-15  iommu/io-pgtable-arm: use page allocation function provided by iommu-pages.h  (Pasha Tatashin)
Convert iommu/io-pgtable-arm.c to use the new page allocation functions provided in iommu-pages.h. Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com> Acked-by: David Rientjes <rientjes@google.com> Tested-by: Bagas Sanjaya <bagasdotme@gmail.com> Link: https://lore.kernel.org/r/20240413002522.1101315-5-pasha.tatashin@soleen.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu: Extend LPAE page table format to support custom allocators  (Boris Brezillon)
We need that in order to implement the VM_BIND ioctl in the GPU driver targeting new Mali GPUs. VM_BIND is about executing MMU map/unmap requests asynchronously, possibly after waiting for external dependencies encoded as dma_fences. We intend to use the drm_sched framework to automate the dependency tracking and VM job dequeuing logic, but this comes with its own set of constraints, one of them being the fact we are not allowed to allocate memory in the drm_gpu_scheduler_ops::run_job() to avoid this sort of deadlock:

 - VM_BIND map job needs to allocate a page table to map some memory to the VM. No memory available, so kswapd is kicked
 - GPU driver shrinker backend ends up waiting on the fence attached to the VM map job or any other job fence depending on this VM operation

With custom allocators, we will be able to pre-reserve enough pages to guarantee the map/unmap operations we queued will take place without going through the system allocator. But we can also optimize allocation/reservation by not free-ing pages immediately, so any upcoming page table allocation requests can be serviced by some free page table pool kept at the driver level. It might also be valuable for other aspects of GPU and similar use-cases, like fine-grained memory accounting and resource limiting. Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com> Reviewed-by: Steven Price <steven.price@arm.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20231124142434.1577550-3-boris.brezillon@collabora.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
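A sketch of what a driver-level page-table pool might plug into. The alloc/free hook names in io_pgtable_cfg follow the intent of this series, but their exact signatures here are assumptions, and my_pool_take()/my_pool_put() are hypothetical driver helpers.

    /* Sketch: service page-table allocations from a pre-reserved pool so the
     * map/unmap path never has to enter the system allocator. */
    static void *my_pgtable_alloc(void *cookie, size_t size, gfp_t gfp)
    {
        return my_pool_take(cookie, size);        /* hypothetical driver pool */
    }

    static void my_pgtable_free(void *cookie, void *pages, size_t size)
    {
        my_pool_put(cookie, pages, size);         /* keep pages for reuse */
    }

    struct io_pgtable_cfg cfg = {
        /* ... format, ias/oas, pgsize_bitmap, tlb ops ... */
        .alloc = my_pgtable_alloc,
        .free  = my_pgtable_free,
    };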
2022-11-19  iommu/io-pgtable-arm: Remove map/unmap  (Robin Murphy)
With all users now calling {map,unmap}_pages, remove the wrappers. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/162e58e83ed42f78c3fbefe78c9b5410dd1dc412.1668100209.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2022-09-26  Merge branches 'apple/dart', 'arm/mediatek', 'arm/omap', 'arm/smmu', 'virtio', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel)
2022-09-26  iommu/io-pgtable: Move Apple DART support to its own file  (Janne Grunau)
The pte format used by the DARTs found in the Apple M1 (t8103) is not fully compatible with io-pgtable-arm. The 24 MSB are used for subpage protection (mapping only parts of a page) and conflict with the address mask. In addition bit 1 is not available for tagging entries but disables subpage protection. Subpage protection could be useful to support a CPU granule of 4k with the fixed IOMMU page size of 16k. The DARTs found on Apple M1 Pro/Max/Ultra use another different pte format which is even less compatible. To support an output address size of 42 bit the address is shifted down by 4. Subpage protection is mandatory and bit 1 signifies uncached mappings used by the display controller. It would be advantageous to share code for all known Apple DART variants to support common features. The page table allocator for DARTs is less complex since it uses two levels of translation tables without support for huge pages. Signed-off-by: Janne Grunau <j@jannau.net> Acked-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Sven Peter <sven@svenpeter.dev> Acked-by: Hector Martin <marcan@marcan.st> Link: https://lore.kernel.org/r/20220916094152.87137-3-j@jannau.net [ joro: Fix compile warning in __dart_alloc_pages()] Signed-off-by: Joerg Roedel <jroedel@suse.de>
2022-09-07  iommu/io-pgtable-arm: Remove iommu_dev==NULL special case  (Robin Murphy)
The special case to allow iommu_dev==NULL in __arm_lpae_alloc_pages() is confusing to static checkers (and possibly readers in general), since it's not obvious that that is only intended for the selftests. However it only serves to get around the dev_to_node() call, and we can easily fake up enough to make that work anyway, so let's simply remove this consideration from the normal flow and punt the responsibility over to the test harness itself. Reported-by: Rustam Subkhankulov <subkhankulov@ispras.ru> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/e2095eeda305071cb56c2cb8ac8a82dc3bd4dcab.1660580155.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-06  iommu/io-pgtable-arm: Fix table descriptor paddr formatting  (Hector Martin)
Table descriptors were being installed without properly formatting the address using paddr_to_iopte, which does not match up with the iopte_deref in __arm_lpae_map. This is incorrect for the LPAE pte format, as it does not handle the high bits properly. This was found on Apple T6000 DARTs, which require a new pte format (different shift); adding support for that to paddr_to_iopte/iopte_to_paddr caused it to break badly, as even <48-bit addresses would end up incorrect in that case. Fixes: 6c89928ff7a0 ("iommu/io-pgtable-arm: Support 52-bit physical address") Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Hector Martin <marcan@marcan.st> Link: https://lore.kernel.org/r/20211120031343.88034-1-marcan@marcan.st Signed-off-by: Joerg Roedel <jroedel@suse.de>
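The shape of the fix, sketched: run the next-level table's physical address through paddr_to_iopte() so the high bits land in the right descriptor fields, mirroring what iopte_to_paddr()/iopte_deref() expect on the read side. paddr_to_iopte() and ARM_LPAE_PTE_TYPE_TABLE are taken from the driver; the local variable and the virt_to_phys() conversion are illustrative, not the literal patch.

    /* Sketch: format the table pointer the same way leaf addresses are
     * formatted, instead of OR-ing in the raw physical address. */
    arm_lpae_iopte new_table;

    new_table = paddr_to_iopte(virt_to_phys(next_table), data) |
                ARM_LPAE_PTE_TYPE_TABLE;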
2021-08-20  Merge branches 'apple/dart', 'arm/smmu', 'iommu/fixes', 'x86/amd', 'x86/vt-d' and 'core' into next  (Joerg Roedel)
2021-08-20  iommu/io-pgtable: Abstract iommu_iotlb_gather access  (Robin Murphy)
Previously io-pgtable merely passed the iommu_iotlb_gather pointer through to helpers, but now it has grown its own direct dereference. This turns out to break the build for !IOMMU_API configs where the structure only has a dummy definition. It will probably also crash drivers who don't use the gather mechanism and simply pass in NULL. Wrap this dereference in a suitable helper which can both be stubbed out for !IOMMU_API and encapsulate a NULL check otherwise. Fixes: 7a7c5badf858 ("iommu: Indicate queued flushes via gather data") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/83672ee76f6405c82845a55c148fa836f56fbbc1.1629465282.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-08-18  iommu/io-pgtable: Remove non-strict quirk  (Robin Murphy)
IO_PGTABLE_QUIRK_NON_STRICT was never a very comfortable fit, since it's not a quirk of the pagetable format itself. Now that we have a more appropriate way to convey non-strict unmaps, though, this last of the non-quirk quirks can also go, and with the flush queue code also now enforcing its own ordering we can have a lovely cleanup all round. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/155b5c621cd8936472e273a8b07a182f62c6c20d.1628682049.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-08-12  iommu/io-pgtable: Add DART pagetable format  (Sven Peter)
Apple's DART iommu uses a pagetable format that shares some similarities with the ones already implemented by io-pgtable.c. Add a new format variant to support the required differences so that we don't have to duplicate the pagetable handling code. Reviewed-by: Alexander Graf <graf@amazon.com> Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Sven Peter <sven@svenpeter.dev> Link: https://lore.kernel.org/r/20210803121651.61594-2-sven@svenpeter.dev Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-07-26  iommu/io-pgtable-arm: Implement arm_lpae_map_pages()  (Isaac J. Manjarres)
Implement the map_pages() callback for the ARM LPAE io-pgtable format. Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org> Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com> Link: https://lore.kernel.org/r/1623850736-389584-12-git-send-email-quic_c_gdjako@quicinc.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
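For orientation, the io_pgtable_ops callback being implemented here (and the unmap counterpart in the next entry) has roughly the following shape, as I recall the interface; check include/linux/io-pgtable.h for the authoritative signatures.

    /* Approximate signatures of the per-format multi-page callbacks. */
    int    (*map_pages)(struct io_pgtable_ops *ops, unsigned long iova,
                        phys_addr_t paddr, size_t pgsize, size_t pgcount,
                        int prot, gfp_t gfp, size_t *mapped);
    size_t (*unmap_pages)(struct io_pgtable_ops *ops, unsigned long iova,
                          size_t pgsize, size_t pgcount,
                          struct iommu_iotlb_gather *gather);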
2021-07-26  iommu/io-pgtable-arm: Implement arm_lpae_unmap_pages()  (Isaac J. Manjarres)
Implement the unmap_pages() callback for the ARM LPAE io-pgtable format. Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org> Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com> Link: https://lore.kernel.org/r/1623850736-389584-11-git-send-email-quic_c_gdjako@quicinc.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-07-26  iommu/io-pgtable-arm: Prepare PTE methods for handling multiple entries  (Isaac J. Manjarres)
The PTE methods currently operate on a single entry. In preparation for manipulating multiple PTEs in one map or unmap call, allow them to handle multiple PTEs. Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org> Suggested-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com> Link: https://lore.kernel.org/r/1623850736-389584-10-git-send-email-quic_c_gdjako@quicinc.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-12-16  Merge tag 'iommu-updates-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull IOMMU updates from Will Deacon:

 "There's a good mixture of improvements to the core code and driver changes across the board. One thing worth pointing out is that this includes a quirk to work around behaviour in the i915 driver (see 65f746e8285f ("iommu: Add quirk for Intel graphic devices in map_sg")), which otherwise interacts badly with the conversion of the intel IOMMU driver over to the DMA-IOMMU API but is being fixed properly in the DRM tree. We'll revert the quirk later this cycle once we've confirmed that things don't fall apart without it.

  Summary:
   - IOVA allocation optimisations and removal of unused code
   - Introduction of DOMAIN_ATTR_IO_PGTABLE_CFG for parameterising the page-table of an IOMMU domain
   - Support for changing the default domain type in sysfs
   - Optimisation to the way in which identity-mapped regions are created
   - Driver updates:
     * Arm SMMU updates, including continued work on Shared Virtual Memory
     * Tegra SMMU updates, including support for PCI devices
     * Intel VT-D updates, including conversion to the IOMMU-DMA API
   - Cleanup, kerneldoc and minor refactoring"

* tag 'iommu-updates-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (50 commits)
  iommu/amd: Add sanity check for interrupt remapping table length macros
  dma-iommu: remove __iommu_dma_mmap
  iommu/io-pgtable: Remove tlb_flush_leaf
  iommu: Stop exporting free_iova_mem()
  iommu: Stop exporting alloc_iova_mem()
  iommu: Delete split_and_remove_iova()
  iommu/io-pgtable-arm: Remove unused 'level' parameter from iopte_type() macro
  iommu: Defer the early return in arm_(v7s/lpae)_map
  iommu: Improve the performance for direct_mapping
  iommu: avoid taking iova_rbtree_lock twice
  iommu/vt-d: Avoid GFP_ATOMIC where it is not needed
  iommu/vt-d: Remove set but not used variable
  iommu: return error code when it can't get group
  iommu: Fix htmldocs warnings in sysfs-kernel-iommu_groups
  iommu: arm-smmu-impl: Add a space before open parenthesis
  iommu: arm-smmu-impl: Use table to list QCOM implementations
  iommu/arm-smmu: Move non-strict mode to use io_pgtable_domain_attr
  iommu/arm-smmu: Add support for pagetable config domain attribute
  iommu: Document usage of "/sys/kernel/iommu_groups/<grp_id>/type" file
  iommu: Take lock before reading iommu group default domain type
  ...
2020-12-10  Merge tag 'drm-msm-next-2020-12-07' of https://gitlab.freedesktop.org/drm/msm into drm-next  (Dave Airlie)
* Shutdown hook for GPU (to ensure GPU is idle before iommu goes away)
* GPU cooling device support
* DSI 7nm and 10nm phy/pll updates
* Additional sm8150/sm8250 DPU support (merge_3d and DSPP color processing)
* Various DP fixes
* A whole bunch of W=1 fixes from Lee Jones
* GEM locking re-work (no more trylock_recursive in shrinker!)
* LLCC (system cache) support
* Various other fixes/cleanups

Signed-off-by: Dave Airlie <airlied@redhat.com> From: Rob Clark <robdclark@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGt0G=H3_RbF_GAQv838z5uujSmFd+7fYhL6Yg=23LwZ=g@mail.gmail.com
2020-12-08  iommu/io-pgtable: Remove tlb_flush_leaf  (Robin Murphy)
The only user of tlb_flush_leaf is a particularly hairy corner of the Arm short-descriptor code, which wants a synchronous invalidation to minimise the races inherent in trying to split a large page mapping. This is already far enough into "here be dragons" territory that no sensible caller should ever hit it, and thus it really doesn't need optimising. Although using tlb_flush_walk there may technically be more heavyweight than needed, it does the job and saves everyone else having to carry around useless baggage. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/9844ab0c5cb3da8b2f89c6c2da16941910702b41.1606324115.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-12-08  Merge branch 'for-next/iommu/misc' into for-next/iommu/core  (Will Deacon)
Miscellaneous IOMMU changes for 5.11. Largely cosmetic, apart from a change to the way in which identity-mapped domains are configured so that the requests are now batched and can potentially use larger pages for the mapping. * for-next/iommu/misc: iommu/io-pgtable-arm: Remove unused 'level' parameter from iopte_type() macro iommu: Defer the early return in arm_(v7s/lpae)_map iommu: Improve the performance for direct_mapping iommu: return error code when it can't get group iommu: Modify the description of iommu_sva_unbind_device
2020-12-08  iommu/io-pgtable-arm: Remove unused 'level' parameter from iopte_type() macro  (Kunkun Jiang)
The 'level' parameter to the iopte_type() macro is unused, so remove it. Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com> Link: https://lore.kernel.org/r/20201207120150.1891-1-jiangkunkun@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2020-12-08  iommu: Defer the early return in arm_(v7s/lpae)_map  (Keqian Zhu)
Although handling a mapping request with no permissions is a trivial no-op, defer the early return until after the size/range checks so that we are consistent with other mapping requests. Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com> Link: https://lore.kernel.org/r/20201207115758.9400-1-zhukeqian1@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2020-11-25  iommu/io-pgtable-arm: Add support to use system cache  (Sai Prakash Ranjan)
Add a quirk IO_PGTABLE_QUIRK_ARM_OUTER_WBWA to override the outer-cacheability attributes set in the TCR for a non-coherent page table walker when using system cache. Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Link: https://lore.kernel.org/r/f818676b4a2a9ad1edb92721947d47db41ed6a7c.1606287059.git.saiprakash.ranjan@codeaurora.org Signed-off-by: Will Deacon <will@kernel.org>
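A sketch of how a driver might opt in to this behaviour; the quirk name comes from the commit, while the condition and the rest of the configuration are illustrative assumptions.

    struct io_pgtable_cfg cfg = {
        /* ... ias/oas, pgsize_bitmap, tlb ops ... */
    };

    /* Sketch: request outer write-back/write-allocate walk attributes for a
     * non-coherent walker whose table fetches go through the system cache. */
    if (using_system_cache)    /* hypothetical condition */
        cfg.quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;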
2020-11-02  Merge drm/drm-next into drm-misc-next  (Maxime Ripard)
Daniel needs -rc2 in drm-misc-next to merge some patches Signed-off-by: Maxime Ripard <maxime@cerno.tech>
2020-10-30  iommu/io-pgtable-arm: Support coherency for Mali LPAE  (Robin Murphy)
Midgard GPUs have ACE-Lite master interfaces which allows systems to integrate them in an I/O-coherent manner. It seems that from the GPU's viewpoint, the rest of the system is its outer shareable domain, and so even when snoop signals are wired up, they are only emitted for outer shareable accesses. As such, setting the TTBR_SHARE_OUTER bit does indeed get coherent pagetable walks working nicely for the coherent T620 in the Arm Juno SoC. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Tested-by: Neil Armstrong <narmstrong@baylibre.com> Reviewed-by: Steven Price <steven.price@arm.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Link: https://patchwork.freedesktop.org/patch/msgid/8df778355378127ea7eccc9521d6427e3e48d4f2.1600780574.git.robin.murphy@arm.com
2020-10-15  Merge tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping  (Linus Torvalds)
Pull dma-mapping updates from Christoph Hellwig:

 - rework the non-coherent DMA allocator
 - move private definitions out of <linux/dma-mapping.h>
 - lower CMA_ALIGNMENT (Paul Cercueil)
 - remove the omap1 dma address translation in favor of the common code
 - make dma-direct aware of multiple dma offset ranges (Jim Quinlan)
 - support per-node DMA CMA areas (Barry Song)
 - increase the default seg boundary limit (Nicolin Chen)
 - misc fixes (Robin Murphy, Thomas Tai, Xu Wang)
 - various cleanups

* tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
  ARM/ixp4xx: add a missing include of dma-map-ops.h
  dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
  dma-direct: factor out a dma_direct_alloc_from_pool helper
  dma-direct check for highmem pages in dma_direct_alloc_pages
  dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>
  dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma
  dma-mapping: move dma-debug.h to kernel/dma/
  dma-mapping: remove <asm/dma-contiguous.h>
  dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>
  dma-contiguous: remove dma_contiguous_set_default
  dma-contiguous: remove dev_set_cma_area
  dma-contiguous: remove dma_declare_contiguous
  dma-mapping: split <linux/dma-mapping.h>
  cma: decrease CMA_ALIGNMENT lower limit to 2
  firewire-ohci: use dma_alloc_pages
  dma-iommu: implement ->alloc_noncoherent
  dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
  dma-mapping: add a new dma_alloc_pages API
  dma-mapping: remove dma_cache_sync
  53c700: convert to dma_alloc_noncoherent
  ...
2020-09-28  iommu/io-pgtable-arm: Move some definitions to a header  (Jean-Philippe Brucker)
Extract some of the most generic TCR defines, so they can be reused by the page table sharing code. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200918101852.582559-6-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  iommu/io-pgtable-arm: Clean up faulty sanity check  (Robin Murphy)
Checking for a nonzero dma_pfn_offset was a quick shortcut to validate whether the DMA == phys assumption could hold at all. Checking for a non-NULL dma_range_map is not quite equivalent, since a map may be present to describe a limited DMA window even without an offset, and thus this check can now yield false positives. However, it only ever served to short-circuit going all the way through to __arm_lpae_alloc_pages(), failing the canonical test there, and having a bit more to clean up. As such, we can simply remove it without loss of correctness. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-09-17  dma-mapping: introduce DMA range map, supplanting dma_pfn_offset  (Jim Quinlan)
The new field 'dma_range_map' in struct device is used to facilitate the use of single or multiple offsets between mapping regions of cpu addrs and dma addrs. It subsumes the role of "dev->dma_pfn_offset" which was only capable of holding a single uniform offset and had no region bounds checking. The function of_dma_get_range() has been modified so that it takes a single argument -- the device node -- and returns a map, NULL, or an error code. The map is an array that holds the information regarding the DMA regions. Each range entry contains the address offset, the cpu_start address, the dma_start address, and the size of the region. of_dma_configure() is the typical manner to set range offsets but there are a number of ad hoc assignments to "dev->dma_pfn_offset" in the kernel driver code. These cases now invoke the function dma_direct_set_offset(dev, cpu_addr, dma_addr, size). Signed-off-by: Jim Quinlan <james.quinlan@broadcom.com> [hch: various interface cleanups] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org> Tested-by: Mathieu Poirier <mathieu.poirier@linaro.org> Tested-by: Nathan Chancellor <natechancellor@gmail.com>
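The commit text itself names the replacement call; a hedged example of what a converted ad hoc driver assignment might look like is below. The addresses and window size are invented for illustration.

    /* Sketch: a driver that used to poke dev->dma_pfn_offset directly now
     * describes its single window through the range-map helper instead. */
    ret = dma_direct_set_offset(dev, 0x00000000ULL /* cpu_start */,
                                0x80000000ULL /* dma_start */,
                                SZ_1G /* size */);
    if (ret)
        dev_err(dev, "failed to set DMA offset\n");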
2020-07-29  Merge branches 'arm/renesas', 'arm/qcom', 'arm/mediatek', 'arm/omap', 'arm/exynos', 'arm/smmu', 'ppc/pamu', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel)