author    Jack Morgenstein <jackm@nvidia.com>              2025-05-21 14:36:02 +0300
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2025-06-19 15:31:54 +0200
commit    02e45168e0fd6fdc6f8f7c42c4b500857aa5efb0 (patch)
tree      3c4274c218f52f9b32b6b6446d1caa2bcbc01ae4
parent    7893a41deaf28b53036d4ea7cd9f7d54e8874dfc (diff)
RDMA/cma: Fix hang when cma_netevent_callback fails to queue_work
[ Upstream commit 92a251c3df8ea1991cd9fe00f1ab0cfce18d7711 ]

The cited commit fixed a crash when cma_netevent_callback was called
for a cma_id while work on that id from a previous call had not yet
started. The work item was re-initialized in the second call, which
corrupted the work item currently in the work queue.

However, it left a problem when queue_work fails because the item is
still pending in the work queue from a previous call. In this case,
cma_id_put (which is called in the work handler) is never called for
the extra reference, so the reference leaks. This results in a
userspace process hang (zombie process).

Fix this by calling cma_id_put() if queue_work fails.

Fixes: 45f5dcdd0497 ("RDMA/cma: Fix workqueue crash in cma_netevent_work_handler")
Link: https://patch.msgid.link/r/4f3640b501e48d0166f312a64fdadf72b059bd04.1747827103.git.leon@kernel.org
Signed-off-by: Jack Morgenstein <jackm@nvidia.com>
Signed-off-by: Feng Liu <feliu@nvidia.com>
Reviewed-by: Vlad Dumitrescu <vdumitrescu@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Sharath Srinivasan <sharath.srinivasan@oracle.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-rw-r--r--  drivers/infiniband/core/cma.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 176d0b3e4488..81bc24a346d3 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -5231,7 +5231,8 @@ static int cma_netevent_callback(struct notifier_block *self,
 			   neigh->ha, ETH_ALEN))
 			continue;
 		cma_id_get(current_id);
-		queue_work(cma_wq, &current_id->id.net_work);
+		if (!queue_work(cma_wq, &current_id->id.net_work))
+			cma_id_put(current_id);
 	}
 out:
 	spin_unlock_irqrestore(&id_table_lock, flags);
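
For context, a minimal sketch of the reference-counting pattern the
one-line change above restores. This is not the kernel's actual cma
code; all demo_* names are invented for illustration. The key fact is
that queue_work() returns false when the work item is already pending,
and a pending item runs its handler only once, so the handler will drop
only one reference; the extra reference the caller just took must be
released on that path or it leaks, keeping the object (and anything
waiting on it) alive forever.

/*
 * Hypothetical sketch, assuming a kref-counted object whose work
 * handler drops the reference taken at queueing time.
 */
#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_id {				/* stand-in for rdma_id_private */
	struct kref ref;
	struct work_struct net_work;
};

static void demo_release(struct kref *kref)
{
	kfree(container_of(kref, struct demo_id, ref));
}

static void demo_work_handler(struct work_struct *work)
{
	struct demo_id *id = container_of(work, struct demo_id, net_work);

	/* ... react to the neighbour update ... */

	kref_put(&id->ref, demo_release);	/* drop the queueing ref */
}

static void demo_netevent(struct workqueue_struct *wq, struct demo_id *id)
{
	kref_get(&id->ref);			/* ref owned by the handler */
	if (!queue_work(wq, &id->net_work))
		/* Already pending: the single handler run will drop only
		 * one reference, so release ours here (mirrors the fix). */
		kref_put(&id->ref, demo_release);
}

In the real patch the reference is taken with cma_id_get() on the
rdma_id_private and dropped by cma_netevent_work_handler(); the fix
adds the matching cma_id_put() on the queue_work() failure path.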