| author | Nicolin Chen <nicolinc@nvidia.com> | 2025-03-05 13:18:00 -0800 |
|---|---|---|
| committer | Jason Gunthorpe <jgg@nvidia.com> | 2025-03-07 15:56:22 -0400 |
| commit | 897008d0f7672c3510281e826232a32d62710323 | |
| tree | 2aa9b5ec2bb83febfc0201edc32d65bf3575015c | |
| parent | a05df03a88bc1088be8e9d958f208d6484691e43 | |
iommufd: Set domain->iommufd_hwpt in all hwpt->domain allocators
Setting domain->iommufd_hwpt in iommufd_hwpt_alloc() only covers HWPT
allocations from user space, but not auto domains. This resulted in a NULL
pointer dereference in the auto domain pathway (a minimal model of the fault
follows the trace below):
Unable to handle kernel NULL pointer dereference at
virtual address 0000000000000008
pc : iommufd_sw_msi+0x54/0x2b0
lr : iommufd_sw_msi+0x40/0x2b0
Call trace:
iommufd_sw_msi+0x54/0x2b0 (P)
iommu_dma_prepare_msi+0x64/0xa8
its_irq_domain_alloc+0xf0/0x2c0
irq_domain_alloc_irqs_parent+0x2c/0xa8
msi_domain_alloc+0xa0/0x1a8
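The faulting virtual address 0x8 matches a load through a NULL
domain->iommufd_hwpt plus a small member offset. The following stand-alone,
purely illustrative C model (hypothetical struct names and layout, not the
kernel source) shows that failure mode:

```c
/*
 * Hypothetical, simplified model of the crash: the auto-domain path never
 * set domain->iommufd_hwpt, so reading a member through it is a load from
 * NULL plus the member's offset (0x8 here for a second 64-bit field).
 */
#include <stddef.h>
#include <stdio.h>

struct fake_hwpt {
	void *obj;	/* offset 0x0 */
	void *domain;	/* offset 0x8 on a 64-bit build */
};

struct fake_iommu_domain {
	struct fake_hwpt *iommufd_hwpt;	/* left NULL by the auto-domain path */
};

int main(void)
{
	struct fake_iommu_domain auto_domain = { .iommufd_hwpt = NULL };

	printf("dereferencing iommufd_hwpt->domain would fault at %#zx\n",
	       offsetof(struct fake_hwpt, domain));

	/* Guarded here so the demo prints instead of crashing. */
	if (!auto_domain.iommufd_hwpt)
		printf("iommufd_hwpt was never set for the auto domain\n");
	return 0;
}
```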
Since iommufd_sw_msi() requires access to domain->iommufd_hwpt, it is better
to set it explicitly prior to calling iommu_domain_set_sw_msi().
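As a rough sketch of the placement the message prescribes (placeholder types
and names, not the kernel API), each allocator publishes the back-pointer
before installing the callback that will later dereference it:

```c
#include <stdio.h>

struct hwpt;

struct domain {
	struct hwpt *iommufd_hwpt;
	int (*sw_msi)(struct domain *dom);
};

struct hwpt {
	struct domain *domain;
};

/* Stand-in for iommufd_sw_msi(): it relies on dom->iommufd_hwpt being set. */
static int sw_msi_stub(struct domain *dom)
{
	return dom->iommufd_hwpt ? 0 : -1;
}

/* Models the placement the commit applies in every hwpt->domain allocator. */
static void hwpt_finish_alloc(struct hwpt *hwpt, struct domain *dom)
{
	hwpt->domain = dom;
	dom->iommufd_hwpt = hwpt;	/* set the back-pointer first ... */
	dom->sw_msi = sw_msi_stub;	/* ... then install the callback */
}

int main(void)
{
	struct domain dom = { 0 };
	struct hwpt hwpt;

	hwpt_finish_alloc(&hwpt, &dom);
	printf("sw_msi sees a valid hwpt: %s\n",
	       dom.sw_msi(&dom) == 0 ? "yes" : "no");
	return 0;
}
```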
Fixes: 748706d7ca06 ("iommu: Turn fault_data to iommufd private pointer")
Link: https://patch.msgid.link/r/20250305211800.229465-1-nicolinc@nvidia.com
Reported-by: Ankit Agrawal <ankita@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Ankit Agrawal <ankita@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
 drivers/iommu/iommufd/hw_pagetable.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
index 268315b1d8bc..1d4cfe3677dc 100644
--- a/drivers/iommu/iommufd/hw_pagetable.c
+++ b/drivers/iommu/iommufd/hw_pagetable.c
@@ -159,6 +159,7 @@ iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
 			goto out_abort;
 		}
 	}
+	hwpt->domain->iommufd_hwpt = hwpt;
 	iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
 
 	/*
@@ -255,6 +256,7 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
 		goto out_abort;
 	}
 	hwpt->domain->owner = ops;
+	hwpt->domain->iommufd_hwpt = hwpt;
 	iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
 
 	if (WARN_ON_ONCE(hwpt->domain->type != IOMMU_DOMAIN_NESTED)) {
@@ -311,6 +313,7 @@ iommufd_viommu_alloc_hwpt_nested(struct iommufd_viommu *viommu, u32 flags,
 		hwpt->domain = NULL;
 		goto out_abort;
 	}
+	hwpt->domain->iommufd_hwpt = hwpt;
 	hwpt->domain->owner = viommu->iommu_dev->ops;
 	iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
 
@@ -415,7 +418,6 @@ int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
 		refcount_inc(&fault->obj.users);
 		iommufd_put_object(ucmd->ictx, &fault->obj);
 	}
-	hwpt->domain->iommufd_hwpt = hwpt;
 
 	cmd->out_hwpt_id = hwpt->obj.id;
 	rc = iommufd_ucmd_respond(ucmd, sizeof(*cmd));
```