| field | value | date |
|---|---|---|
| author | Linus Torvalds <torvalds@linux-foundation.org> | 2024-09-19 11:12:49 +0200 |
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2024-09-19 11:12:49 +0200 |
| commit | 726e2d0cf2bbc14e3bf38491cddda1a56fe18663 (patch) | |
| tree | a08e086eda8ba159da3dbc3c9f5c284a7f667572 /kernel/dma/ops_helpers.c | |
| parent | de848da12f752170c2ebe114804a985314fd5a6a (diff) | |
| parent | a5fb217f13f74b2af2ab366ffad522bae717f93c (diff) | |
Merge tag 'dma-mapping-6.12-2024-09-19' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping updates from Christoph Hellwig:
 - support DMA zones for arm64 systems where memory starts at > 4GB
   (Baruch Siach, Catalin Marinas)
 - support direct calls into dma-iommu and thus obsolete dma_map_ops for
   many common configurations (Leon Romanovsky)
 - add DMA-API tracing (Sean Anderson)
 - remove the not very useful return value from various dma_set_* APIs
   (Christoph Hellwig)
 - misc cleanups and minor optimizations (Chen Y, Yosry Ahmed, Christoph
   Hellwig)
* tag 'dma-mapping-6.12-2024-09-19' of git://git.infradead.org/users/hch/dma-mapping:
  dma-mapping: reflow dma_supported
  dma-mapping: reliably inform about DMA support for IOMMU
  dma-mapping: add tracing for dma-mapping API calls
  dma-mapping: use IOMMU DMA calls for common alloc/free page calls
  dma-direct: optimize page freeing when it is not addressable
  dma-mapping: clearly mark DMA ops as an architecture feature
  vdpa_sim: don't select DMA_OPS
  arm64: mm: keep low RAM dma zone
  dma-mapping: don't return errors from dma_set_max_seg_size
  dma-mapping: don't return errors from dma_set_seg_boundary
  dma-mapping: don't return errors from dma_set_min_align_mask
  scsi: check that busses support the DMA API before setting dma parameters
  arm64: mm: fix DMA zone when dma-ranges is missing
  dma-mapping: direct calls for dma-iommu
  dma-mapping: call ->unmap_page and ->unmap_sg unconditionally
  arm64: support DMA zone above 4GB
  dma-mapping: replace zone_dma_bits by zone_dma_limit
  dma-mapping: use bit masking to check VM_DMA_COHERENT
Diffstat (limited to 'kernel/dma/ops_helpers.c')

| mode | file | lines |
|---|---|---|
| -rw-r--r-- | kernel/dma/ops_helpers.c | 14 |

1 file changed, 11 insertions, 3 deletions
```diff
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index af4a6ef48ce0..9afd569eadb9 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -4,6 +4,7 @@
  * the allocated memory contains normal pages in the direct kernel mapping.
  */
 #include <linux/dma-map-ops.h>
+#include <linux/iommu-dma.h>
 
 static struct page *dma_common_vaddr_to_page(void *cpu_addr)
 {
@@ -70,8 +71,12 @@ struct page *dma_common_alloc_pages(struct device *dev, size_t size,
 	if (!page)
 		return NULL;
 
-	*dma_handle = ops->map_page(dev, page, 0, size, dir,
-				    DMA_ATTR_SKIP_CPU_SYNC);
+	if (use_dma_iommu(dev))
+		*dma_handle = iommu_dma_map_page(dev, page, 0, size, dir,
+						 DMA_ATTR_SKIP_CPU_SYNC);
+	else
+		*dma_handle = ops->map_page(dev, page, 0, size, dir,
+					    DMA_ATTR_SKIP_CPU_SYNC);
 	if (*dma_handle == DMA_MAPPING_ERROR) {
 		dma_free_contiguous(dev, page, size);
 		return NULL;
@@ -86,7 +91,10 @@ void dma_common_free_pages(struct device *dev, size_t size, struct page *page,
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
-	if (ops->unmap_page)
+	if (use_dma_iommu(dev))
+		iommu_dma_unmap_page(dev, dma_handle, size, dir,
+				     DMA_ATTR_SKIP_CPU_SYNC);
+	else if (ops->unmap_page)
 		ops->unmap_page(dev, dma_handle, size, dir,
 				DMA_ATTR_SKIP_CPU_SYNC);
 	dma_free_contiguous(dev, page, size);
```
