2017-12-28  IB/mlx4: Add support to RSS hash for inner headers  (Guy Levi)
Support RSS hash for inner headers according to a new flag, MLX4_IB_RX_HASH_INNER provided by the vendor channel. In case the flag is set, RSS hash will be done on the inner headers of VXLAN packets (which are encapsulated). Non-encapsulated packets will be hashed according to the outer headers. Signed-off-by: Guy Levi <guyle@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-27  Merge branch 'from-rc' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git  (Jason Gunthorpe)
Patches for 4.16 that are dependent on patches sent to 4.15-rc. These are small cleanups for the vmw_pvrdma and i40iw drivers.

* 'from-rc' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git:
  RDMA/vmw_pvrdma: Remove usage of BIT() from UAPI header
  RDMA/vmw_pvrdma: Use refcount_t instead of atomic_t
  RDMA/vmw_pvrdma: Use more specific sizeof in kcalloc
  RDMA/vmw_pvrdma: Clarify QP and CQ is_kernel logic
  RDMA/vmw_pvrdma: Add UAR SRQ macros in ABI header file
  i40iw: Change accelerated flag to bool
2017-12-27  RDMA/vmw_pvrdma: Remove usage of BIT() from UAPI header  (Bryan Tan)
BIT() should not be used in the UAPI header. Remove it. Signed-off-by: Bryan Tan <bryantan@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-27  RDMA/vmw_pvrdma: Use refcount_t instead of atomic_t  (Bryan Tan)
refcount_t is the preferred type for refcounts. Change the QP and CQ refcnt fields to use refcount_t. Reviewed-by: Adit Ranadive <aditr@vmware.com> Reviewed-by: Aditya Sarwade <asarwade@vmware.com> Reviewed-by: Jorgen Hansen <jhansen@vmware.com> Signed-off-by: Bryan Tan <bryantan@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
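As a rough illustration of the conversion described above, here is a minimal sketch of moving a reference count from atomic_t to refcount_t; the struct and helper names are hypothetical, not the actual pvrdma definitions.

```c
/* Hypothetical atomic_t -> refcount_t conversion; names are illustrative. */
#include <linux/refcount.h>
#include <linux/slab.h>

struct example_qp {
	refcount_t refcnt;		/* was: atomic_t refcnt; */
};

static void example_qp_get(struct example_qp *qp)
{
	refcount_inc(&qp->refcnt);	/* was: atomic_inc() */
}

static void example_qp_put(struct example_qp *qp)
{
	/* was: if (atomic_dec_and_test(&qp->refcnt)) */
	if (refcount_dec_and_test(&qp->refcnt))
		kfree(qp);
}
```

refcount_t additionally warns on increment-from-zero and on overflow, which is the main reason it is preferred for refcounts.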
2017-12-27  RDMA/vmw_pvrdma: Use more specific sizeof in kcalloc  (Bryan Tan)
Convert the sizeof(void *) in two kcalloc calls to be more specific for the arrays that are being allocated. Reviewed-by: Adit Ranadive <aditr@vmware.com> Reviewed-by: Aditya Sarwade <asarwade@vmware.com> Reviewed-by: Jorgen Hansen <jhansen@vmware.com> Signed-off-by: Bryan Tan <bryantan@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
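A minimal sketch of the pattern, assuming a hypothetical table of QP pointers (not the actual pvrdma structures): the allocation is sized from the array element itself rather than a bare sizeof(void *).

```c
/* Hypothetical sizeof change in kcalloc; names are illustrative. */
#include <linux/errno.h>
#include <linux/slab.h>

struct example_qp;

struct example_dev {
	struct example_qp **qp_tbl;
};

static int example_alloc_qp_tbl(struct example_dev *dev, unsigned int n)
{
	/* was: dev->qp_tbl = kcalloc(n, sizeof(void *), GFP_KERNEL); */
	dev->qp_tbl = kcalloc(n, sizeof(*dev->qp_tbl), GFP_KERNEL);
	return dev->qp_tbl ? 0 : -ENOMEM;
}
```

Tying the size to the array keeps the allocation correct even if the element type ever changes.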
2017-12-27  RDMA/vmw_pvrdma: Clarify QP and CQ is_kernel logic  (Bryan Tan)
Be more consistent in setting and checking is_kernel flag for QPs and CQs. Reviewed-by: Adit Ranadive <aditr@vmware.com> Reviewed-by: Aditya Sarwade <asarwade@vmware.com> Reviewed-by: Jorgen Hansen <jhansen@vmware.com> Signed-off-by: Bryan Tan <bryantan@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-27  RDMA/vmw_pvrdma: Add UAR SRQ macros in ABI header file  (Bryan Tan)
Support for SRQs was added in the vmw_pvrdma userlevel library before two necessary macros were added into the kernel ABI header file. Add the two UAR SRQ macros required by the userlevel library so that the library can rely on the kernel ABI header file for these SRQ macro definitions. Fixes: 8b10ba783c9d ("RDMA/vmw_pvrdma: Add shared receive queue support") Reviewed-by: Adit Ranadive <aditr@vmware.com> Reviewed-by: Aditya Sarwade <asarwade@vmware.com> Reviewed-by: Jorgen Hansen <jhansen@vmware.com> Signed-off-by: Bryan Tan <bryantan@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-27  i40iw: Change accelerated flag to bool  (Henry Orosco)
The accelerated flag only utilizes two values: 0 and 1. Modify accelerated flag in struct i40iw_cm_node to bool. Signed-off-by: Henry Orosco <henry.orosco@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-27  infiniband: drop unknown function from core_priv.h  (Randy Dunlap)
Delete ibnl_chk_listeners() and its kernel-doc comments from the core_priv.h header file. There is no such function. Fixes: 233c1955835b ("RDMA/netlink: Reduce exposure of RDMA netlink functions") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-27  IB/core: Make sure that PSN does not overflow  (Majd Dibbiny)
The rq/sq->psn is 24 bits, as defined in the IB spec; therefore we mask out the 8 most significant bits to avoid overflow in modify_qp. Signed-off-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Daniel Jurgens <danielj@mellanox.com> Reviewed-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
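A sketch of the masking described above; the macro and helper names are illustrative, not necessarily those used in the patch.

```c
/* Clamp a user-supplied PSN to the 24 bits defined by the IB spec. */
#include <linux/types.h>

#define EXAMPLE_PSN_MASK 0xffffff	/* PSN is 24 bits wide */

static u32 example_clamp_psn(u32 psn)
{
	return psn & EXAMPLE_PSN_MASK;	/* drop the 8 most significant bits */
}
```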
2017-12-27  IB/mlx5: Fix mlx5_ib_alloc_mr error flow  (Nitzan Carmi)
ibmr.device is set only after ib_alloc_mr() completes successfully. Therefore, if mlx5_core_create_mkey() returns an error, the error flow calls mlx5_free_priv_descs(), which uses ibmr.device (which doesn't exist yet), causing a NULL dereference oops. To fix this, set the IB device in the mr struct at an earlier stage (e.g. prior to calling mlx5_core_create_mkey()). Fixes: 8a187ee52b04 ("IB/mlx5: Support the new memory registration API") Signed-off-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Nitzan Carmi <nitzanc@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-27  IB/core: Verify that QP is security enabled in create and destroy  (Moni Shoua)
The XRC target QP create flow sets up qp_sec only if there is an IB link with LSM security enabled. However, several other related uAPI entry points blindly follow the qp_sec NULL pointer, resulting in a possible oops. Check for NULL before using qp_sec. Cc: <stable@vger.kernel.org> # v4.12 Fixes: d291f1a65232 ("IB/core: Enforce PKey security on QPs") Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-27  IB/uverbs: Fix command checking as part of ib_uverbs_ex_modify_qp()  (Moni Shoua)
If the input command length is larger than the kernel supports, an error should be returned when the unsupported bytes are not cleared, not the other way around. This matches what all other callers of ib_is_udata_cleared do and will avoid user ABI problems in the future. Cc: <stable@vger.kernel.org> # v4.10 Fixes: 189aba99e700 ("IB/uverbs: Extend modify_qp and support packet pacing") Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
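A hedged sketch of the check direction described above: if userspace passed more bytes than the kernel understands, the trailing bytes must be zero or the command is rejected. The wrapper and the known_len parameter are illustrative, not the exact uverbs code.

```c
/* Reject extended commands whose unknown trailing bytes are non-zero. */
#include <linux/errno.h>
#include <rdma/ib_verbs.h>

static int example_check_trailing_udata(struct ib_udata *udata,
					size_t known_len)
{
	if (udata->inlen > known_len &&
	    !ib_is_udata_cleared(udata, known_len,
				 udata->inlen - known_len))
		return -EOPNOTSUPP;	/* unknown, non-zero trailing bytes */

	return 0;
}
```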
2017-12-27  IB/mlx5: Serialize access to the VMA list  (Majd Dibbiny)
User-space applications can do mmap and munmap directly at any time. Since the VMA list is not protected with a mutex, concurrent accesses to the VMA list from the mmap and munmap can cause data corruption. Add a mutex around the list. Cc: <stable@vger.kernel.org> # v4.7 Fixes: 7c2344c3bbf9 ("IB/mlx5: Implements disassociate_ucontext API") Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
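A minimal sketch of the locking pattern described above, protecting a per-ucontext VMA list with a mutex; the structure and helper names are illustrative, not the mlx5 definitions.

```c
/* Serialize list updates from concurrent mmap()/munmap() callers. */
#include <linux/list.h>
#include <linux/mutex.h>

struct example_ucontext {
	struct list_head vma_list;
	struct mutex	 vma_list_mutex;	/* protects vma_list */
};

static void example_vma_add(struct example_ucontext *ctx,
			    struct list_head *entry)
{
	mutex_lock(&ctx->vma_list_mutex);
	list_add_tail(entry, &ctx->vma_list);
	mutex_unlock(&ctx->vma_list_mutex);
}

static void example_vma_del(struct example_ucontext *ctx,
			    struct list_head *entry)
{
	mutex_lock(&ctx->vma_list_mutex);
	list_del(entry);
	mutex_unlock(&ctx->vma_list_mutex);
}
```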
2017-12-22  IB/hfi1: Change slid arg in ingress_pkey_table_fail to 32bit  (Don Hiatt)
Change the slid arg of ingress_pkey_table_fail() to a full 32 bits and do not convert it to 16 bits in the caller. This keeps everything 32-bit inside the kernel and only narrows to 16 bits at the uapi boundary. Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  IB/core: Use rdma_cap_opa_mad to check for OPA  (Don Hiatt)
Use rdma_cap_opa_mad() to check for OPA to promote code reuse. Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  i40iw: Fix the connection ORD value for loopback  (Tatyana Nikolova)
The accepting QP ORD value should be adjusted not to exceed the peer QP IRD value (RFC 6581). This is skipped for loopback. After the ORD is validated by i40iw_record_ird_ord(), adjust the ORD value of the loopback accepting QP to prevent overrunning the IRD space of the peer QP. Also move the ORD accounting for 0-byte RDMA read to i40iw_record_ird_ord(). Fixes: f27b4746f378 ("i40iw: add connection management code") Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  i40iw: Validate correct IRD/ORD connection parameters  (Tatyana Nikolova)
Casting to u16 before validating IRD/ORD connection parameters could cause recording wrong IRD/ORD values in the cm_node. Validate the IRD/ORD parameters as they are passed by the application before recording them. Fixes: f27b4746f378 ("i40iw: add connection management code") Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  i40iw: Ignore LLP_DOUBT_REACHABILITY AE  (Shiraz Saleem)
The LLP_DOUBT_REACHABILITY Asynchronous Event (AE) is an early warning of a connection issue. It is followed by LLP_TOO_MANY_RETRIES AE, if the retransmit threshold is reached and recovery is not possible for the connection. Currently we terminate the connection on receiving the LLP_DOUBT_REACHABILITY AE. Ignore this AE and terminate the connection only on LLP_TOO_MANY_RETRIES AE. This improves the user experience on cable disconnect/reconnect scenario while running iWARP traffic. On cable disconnect, the QP traffic is paused and the user has a larger and more reasonable timeout within which if the cable is reconnected, traffic can continue. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  i40iw: Fix sequence number for the first partial FPDU  (Shiraz Saleem)
Partial FPDU processing is broken as the sequence number for the first partial FPDU is wrong due to incorrect Q2 buffer offset. The offset should be 64 rather than 16. Fixes: 786c6adb3a94 ("i40iw: add puda code") Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  i40iw: Selectively teardown QPs on IP addr change event  (Shiraz Saleem)
On an IP address change event, all connected QPs are torn down irrespective of whether the IP address is involved in a connection. Only tear down connections whose source or destination address matches the netdev interface IP address being changed, and only if they are on the same VLAN as the netdev. Fixes: e5e74b61b165 ("i40iw: Add IP addr handling on netdev events") Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  i40iw: Add notifier for network device events  (Shiraz Saleem)
Register a netdevice notifier for netdev UP/DOWN notification events and report the appropriate ib event. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  i40iw: Correct Q1/XF object count equation  (Shiraz Saleem)
Lower Inbound RDMA Read Queue (Q1) object count by a factor of 2 as it is incorrectly doubled. Also, round up Q1 and Transmit FIFO (XF) object count to power of 2 to satisfy hardware requirement. Fixes: 86dbcd0f12e9 ("i40iw: add file to handle cqp calls") Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  i40iw: Use utility function roundup_pow_of_two()  (Shiraz Saleem)
Consolidate all power of 2 round calculations to use kernel utility function roundup_pow_of_two(). Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
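A sketch of the consolidation: open-coded power-of-2 arithmetic is replaced with the kernel helper from <linux/log2.h>. The wrapper name and the zero guard are illustrative additions.

```c
/* Round an object count up to the next power of 2 using the kernel helper. */
#include <linux/log2.h>

static unsigned long example_round_obj_cnt(unsigned long obj_cnt)
{
	/* roundup_pow_of_two() is undefined for 0, so guard that case */
	if (!obj_cnt)
		return 1;

	/* was: open-coded shift/loop arithmetic to find the next power of 2 */
	return roundup_pow_of_two(obj_cnt);
}
```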
2017-12-22  i40iw: Set MAX_IRD_SIZE to 64  (Shiraz Saleem)
Increase I40IW_MAX_IRD_SIZE to 64 which is the device limit. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  rdma: Update maintainer contact for Intel RDMA drivers  (Dennis Dalessandro)
Ensure both Mike and I are listed as maintainer contacts for Intel's qib, hfi1, and rdmavt drivers. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  IB/SA: Check dlid before SA agent queries for ClassPortInfo  (Venkata Sandeep Dhanalakota)
SA queries the SM for class port info when there is a LID_CHANGE event. When a base lid is configured before the FM is started, i.e. when the smlid is not yet assigned, SA handles the LID_CHANGE event and tries to query the SM with lid 0. This causes a hang.
[ 1106.958820] INFO: task kworker/2:0:23 blocked for more than 120 seconds.
[ 1106.965082] Tainted: G O 4.12.0+ #1
[ 1106.969602] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1106.977227] kworker/2:0 D 0 23 2 0x00000000
[ 1106.977250] Workqueue: infiniband update_ib_cpi [ib_core]
[ 1106.977261] Call Trace:
[ 1106.977273] __schedule+0x28e/0x860
[ 1106.977285] schedule+0x36/0x80
[ 1106.977298] schedule_timeout+0x1a3/0x2e0
[ 1106.977310] ? radix_tree_iter_tag_clear+0x1b/0x20
[ 1106.977322] ? idr_alloc+0x64/0x90
[ 1106.977334] wait_for_completion+0xe3/0x140
[ 1106.977347] ? wake_up_q+0x80/0x80
[ 1106.977369] update_ib_cpi+0x163/0x210 [ib_core]
[ 1106.977381] process_one_work+0x147/0x370
[ 1106.977394] worker_thread+0x4a/0x390
[ 1106.977406] kthread+0x109/0x140
[ 1106.977418] ? process_one_work+0x370/0x370
[ 1106.977430] ? kthread_park+0x60/0x60
[ 1106.977443] ret_from_fork+0x22/0x30
Always ensure a proper smlid is assigned before querying the SM for cpi. Fixes: ee1c60b1bff ("IB/SA: Modify SA to implicitly cache Class Port info") Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  nes: Change accelerated flag to bool  (Shiraz Saleem)
The accelerated flag only utilizes two values: 0 and 1. Modify accelerated flag in struct nes_cm_node to bool. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  IB/hfi: Only read capability registers if the capability exists  (Michael J. Ruhl)
During driver init, various registers are saved to allow restoration after an FLR or gen3 bump. Some of these registers are not available in some circumstances (i.e. virtual machines). This bug makes the driver unusable when the PCI device is passed into a VM; it fails during probe. Delete the unnecessary register read/write, and only access the register if the capability exists. Cc: <stable@vger.kernel.org> # 4.14.x Fixes: a618b7e40af2 ("IB/hfi1: Move saving PCI values to a separate function") Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
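A hedged sketch of gating register access on capability presence, assuming the registers in question live in the PCIe capability; the helper is illustrative, not the hfi1 implementation.

```c
/* Skip the save when the PCIe capability is not exposed (e.g. in a VM). */
#include <linux/pci.h>

static int example_save_pcie_regs(struct pci_dev *pdev, u16 *devctl)
{
	if (!pci_find_capability(pdev, PCI_CAP_ID_EXP))
		return 0;	/* capability absent: nothing to save */

	return pcie_capability_read_word(pdev, PCI_EXP_DEVCTL, devctl);
}
```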
2017-12-22  drivers: infiniband: remove duplicate includes  (Pravin Shedge)
These duplicate includes have been found with scripts/checkincludes.pl but they have been removed manually to avoid removing false positives. Signed-off-by: Pravin Shedge <pravin.shedge4linux@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  RDMA/hns: Add eq support of hip08  (Yixian Liu)
This patch adds eq support for hip08. The eq table can be multi-hop addressed. Signed-off-by: Yixian Liu <liuyixian@huawei.com> Reviewed-by: Lijun Ou <oulijun@huawei.com> Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-22  RDMA/hns: Refactor eq code for hip06  (Yixian Liu)
To remain compatible with hip08's eq processing and to accommodate possible data structure changes, this patch refactors the eq code structure of hip06. We move all the hip06 eq processing code from hns_roce_eq.c into hns_roce_hw_v1.c, and likewise for hns_roce_eq.h. With these changes, it will be convenient to add eq support for later hardware versions. Signed-off-by: Yixian Liu <liuyixian@huawei.com> Reviewed-by: Lijun Ou <oulijun@huawei.com> Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-21  IB/ipoib: Fix lockdep issue found on ipoib_ib_dev_heavy_flush  (Alex Vesker)
The locking order of vlan_rwsem (LOCK A) and then rtnl (LOCK B) contradicts other flows such as ipoib_open, possibly causing a deadlock. To prevent this deadlock, heavy flush is called with RTNL locked and only then tries to acquire vlan_rwsem. This deadlock is possible only when there are child interfaces.
[ 140.941758] ======================================================
[ 140.946276] WARNING: possible circular locking dependency detected
[ 140.950950] 4.15.0-rc1+ #9 Tainted: G O
[ 140.954797] ------------------------------------------------------
[ 140.959424] kworker/u32:1/146 is trying to acquire lock:
[ 140.963450] (rtnl_mutex){+.+.}, at: [<ffffffffc083516a>] __ipoib_ib_dev_flush+0x2da/0x4e0 [ib_ipoib]
[ 140.970006] but task is already holding lock:
[ 140.975141] (&priv->vlan_rwsem){++++}, at: [<ffffffffc0834ee1>] __ipoib_ib_dev_flush+0x51/0x4e0 [ib_ipoib]
[ 140.982105] which lock already depends on the new lock.
[ 140.990023] the existing dependency chain (in reverse order) is:
[ 140.998650] -> #1 (&priv->vlan_rwsem){++++}:
[ 141.005276] down_read+0x4d/0xb0
[ 141.009560] ipoib_open+0xad/0x120 [ib_ipoib]
[ 141.014400] __dev_open+0xcb/0x140
[ 141.017919] __dev_change_flags+0x1a4/0x1e0
[ 141.022133] dev_change_flags+0x23/0x60
[ 141.025695] devinet_ioctl+0x704/0x7d0
[ 141.029156] sock_do_ioctl+0x20/0x50
[ 141.032526] sock_ioctl+0x221/0x300
[ 141.036079] do_vfs_ioctl+0xa6/0x6d0
[ 141.039656] SyS_ioctl+0x74/0x80
[ 141.042811] entry_SYSCALL_64_fastpath+0x1f/0x96
[ 141.046891] -> #0 (rtnl_mutex){+.+.}:
[ 141.051701] lock_acquire+0xd4/0x220
[ 141.055212] __mutex_lock+0x88/0x970
[ 141.058631] __ipoib_ib_dev_flush+0x2da/0x4e0 [ib_ipoib]
[ 141.063160] __ipoib_ib_dev_flush+0x71/0x4e0 [ib_ipoib]
[ 141.067648] process_one_work+0x1f5/0x610
[ 141.071429] worker_thread+0x4a/0x3f0
[ 141.074890] kthread+0x141/0x180
[ 141.078085] ret_from_fork+0x24/0x30
[ 141.081559] other info that might help us debug this:
[ 141.088967] Possible unsafe locking scenario:
[ 141.094280] CPU0 CPU1
[ 141.097953] ---- ----
[ 141.101640] lock(&priv->vlan_rwsem);
[ 141.104771] lock(rtnl_mutex);
[ 141.109207] lock(&priv->vlan_rwsem);
[ 141.114032] lock(rtnl_mutex);
[ 141.116800] *** DEADLOCK ***
Fixes: b4b678b06f6e ("IB/ipoib: Grab rtnl lock on heavy flush when calling ndo_open/stop") Signed-off-by: Alex Vesker <valex@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-21  IB/mlx5: Fix congestion counters in LAG mode  (Majd Dibbiny)
Congestion counters are counted and queried per physical function. When working in LAG mode, CNP packets can be sent or received on both of the functions, thus congestion counters should be aggregated from the two physical functions. Fixes: e1f24a79f424 ("IB/mlx5: Support congestion related counters") Signed-off-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Aviv Heller <avivh@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-21  RDMA/vmw_pvrdma: Avoid use after free due to QP/CQ/SRQ destroy  (Bryan Tan)
The use of wait queues in vmw_pvrdma for handling concurrent access to a resource leaves a race condition which can cause a use after free bug. Fix this by using the pattern from other drivers, complete() protected by dec_and_test to ensure complete() is called only once. Fixes: 29c8d9eba550 ("IB: Add vmw_pvrdma driver") Signed-off-by: Bryan Tan <bryantan@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
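A minimal sketch of the complete()-protected-by-dec_and_test pattern described above; the structure and helpers are illustrative, not the pvrdma code.

```c
/* Each user drops a reference; only the final put signals the completion,
 * and destroy waits for it before freeing the object. */
#include <linux/completion.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct example_cq {
	refcount_t refcnt;
	struct completion free;
};

static void example_cq_put(struct example_cq *cq)
{
	if (refcount_dec_and_test(&cq->refcnt))
		complete(&cq->free);		/* runs exactly once */
}

static void example_destroy_cq(struct example_cq *cq)
{
	example_cq_put(cq);			/* drop the creation reference */
	wait_for_completion(&cq->free);		/* wait for all other users */
	kfree(cq);
}
```

Because complete() fires only on the final reference drop, no reader can still hold the object when destroy proceeds to free it.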
2017-12-21  RDMA/vmw_pvrdma: Use refcount_dec_and_test to avoid warning  (Bryan Tan)
refcount_dec generates a warning when the operation causes the refcount to hit zero. Avoid this by using refcount_dec_and_test. Fixes: 8b10ba783c9d ("RDMA/vmw_pvrdma: Add shared receive queue support") Reviewed-by: Adit Ranadive <aditr@vmware.com> Reviewed-by: Aditya Sarwade <asarwade@vmware.com> Reviewed-by: Jorgen Hansen <jhansen@vmware.com> Signed-off-by: Bryan Tan <bryantan@vmware.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-21  RDMA/vmw_pvrdma: Call ib_umem_release on destroy QP path  (Bryan Tan)
The QP cleanup did not previously call ib_umem_release, resulting in a user-triggerable kernel resource leak. Fixes: 29c8d9eba550 ("IB: Add vmw_pvrdma driver") Reviewed-by: Adit Ranadive <aditr@vmware.com> Reviewed-by: Aditya Sarwade <asarwade@vmware.com> Reviewed-by: Jorgen Hansen <jhansen@vmware.com> Signed-off-by: Bryan Tan <bryantan@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
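A hedged sketch of the cleanup: the user memory pinned at QP create time is released on the destroy path. The field layout is illustrative, not the pvrdma QP structure.

```c
/* Release umems pinned for a userspace QP when the QP is destroyed. */
#include <rdma/ib_umem.h>

struct example_user_qp {
	struct ib_umem *sq_umem;
	struct ib_umem *rq_umem;
};

static void example_release_qp_umem(struct example_user_qp *qp)
{
	if (qp->sq_umem)
		ib_umem_release(qp->sq_umem);
	if (qp->rq_umem)
		ib_umem_release(qp->rq_umem);
}
```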
2017-12-21  iw_cxgb4: when flushing, complete all wrs in a chain  (Steve Wise)
If a wr chain was posted and needed to be flushed, only the first wr in the chain was completed with FLUSHED status. The rest were never completed. This caused isert to hang on shutdown due to the missing completions which left iscsi IO commands referenced, stalling the shutdown. Fixes: 4fe7c2962e11 ("iw_cxgb4: refactor sq/rq drain logic") Cc: stable@vger.kernel.org Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
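A hedged sketch of the flush walking the whole posted chain via wr->next so every WR gets a flush completion; the per-WR helper stands in for the driver's sw-CQE insertion and is not the iw_cxgb4 implementation.

```c
/* Complete every WR in a posted chain with a flush status. */
#include <rdma/ib_verbs.h>

static void example_flush_one(struct ib_qp *qp, u64 wr_id)
{
	/* insert a sw CQE with IB_WC_WR_FLUSH_ERR for this wr_id */
}

static void example_flush_chain(struct ib_qp *qp, const struct ib_send_wr *wr)
{
	/* was: only the first wr in the chain was completed */
	for (; wr; wr = wr->next)
		example_flush_one(qp, wr->wr_id);
}
```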
2017-12-21  iw_cxgb4: reflect the original WR opcode in drain cqes  (Steve Wise)
The flush/drain logic was not retaining the original wr opcode in its completion. This can cause problems if the application uses the completion opcode to make decisions. Use bit 10 of the CQE header word to indicate the CQE is a special drain completion, and save the original WR opcode in the cqe header opcode field. Fixes: 4fe7c2962e11 ("iw_cxgb4: refactor sq/rq drain logic") Cc: stable@vger.kernel.org Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-21  iw_cxgb4: Only validate the MSN for successful completions  (Steve Wise)
If the RECV CQE is in error, ignore the MSN check. This was causing recvs that were flushed into the sw cq to be completed with the wrong status (BAD_MSN instead of FLUSHED). Cc: stable@vger.kernel.org Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-20  RDMA/ocrdma: Fix permissions for OCRDMA_RESET_STATS  (Anton Vasilyev)
Debugfs file reset_stats is created with S_IRUSR permissions, but ocrdma_dbgfs_ops_read() doesn't support OCRDMA_RESET_STATS, whereas ocrdma_dbgfs_ops_write() supports only OCRDMA_RESET_STATS. The patch fixes the mistyped permissions. Found by Linux Driver Verification project (linuxtesting.org). Signed-off-by: Anton Vasilyev <vasilyev@ispras.ru> Acked-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
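A hedged sketch of the permission fix: a debugfs node that only implements write gets write-only permissions. The wrapper name is illustrative; only the S_IRUSR to S_IWUSR change reflects the description above.

```c
/* A write-only debugfs node should be created with S_IWUSR, not S_IRUSR. */
#include <linux/debugfs.h>

static struct dentry *example_create_reset_stats(struct dentry *dir, void *priv,
						 const struct file_operations *fops)
{
	/* was: debugfs_create_file("reset_stats", S_IRUSR, dir, priv, fops) */
	return debugfs_create_file("reset_stats", S_IWUSR, dir, priv, fops);
}
```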
2017-12-18  iser-target: avoid reinitializing rdma contexts for isert commands  (Bharat Potnuri)
isert commands that fail during isert_rdma_rw_ctx_post() are queued to the Queue-Full (QF) queue and are scheduled to be reposted during queue-full queue processing. During this reposting, the rdma contexts are initialized again in isert_rdma_rw_ctx_post(), which leaks significant memory.
unreferenced object 0xffff8830201d9640 (size 64):
  comm "kworker/0:2", pid 195, jiffies 4295374851 (age 4528.436s)
  hex dump (first 32 bytes):
    00 60 8b cb 2e 00 00 00 00 10 00 00 00 00 00 00 .`..............
    00 90 e3 cb 2e 00 00 00 00 10 00 00 00 00 00 00 ................
  backtrace:
    [<ffffffff8170711e>] kmemleak_alloc+0x4e/0xb0
    [<ffffffff811f8ba5>] __kmalloc+0x125/0x2b0
    [<ffffffffa046b24f>] rdma_rw_ctx_init+0x15f/0x6f0 [ib_core]
    [<ffffffffa07ab644>] isert_rdma_rw_ctx_post+0xc4/0x3c0 [ib_isert]
    [<ffffffffa07ad972>] isert_put_datain+0x112/0x1c0 [ib_isert]
    [<ffffffffa07dddce>] lio_queue_data_in+0x2e/0x30 [iscsi_target_mod]
    [<ffffffffa076c322>] target_qf_do_work+0x2b2/0x4b0 [target_core_mod]
    [<ffffffff81080c3b>] process_one_work+0x1db/0x5d0
    [<ffffffff8108107d>] worker_thread+0x4d/0x3e0
    [<ffffffff81088667>] kthread+0x117/0x150
    [<ffffffff81713fa7>] ret_from_fork+0x27/0x40
    [<ffffffffffffffff>] 0xffffffffffffffff
This patch reuses the existing rdma contexts when reposting the isert commands instead of reinitializing them. Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-18  IB/cm: Refactor to avoid setting path record software only fields  (Parav Pandit)
When path ah_attr initialization from a path record fails, ib_cm_send_rej() uses the av.ah_attr fields to send out the reject message. In such cases initialization of the path record software fields is not needed, so the code is simplified accordingly. Additionally, in the current code in cm_req_handler, when ib_get_cached_gid fails for a given sgid_index of the GID of the GRH of the incoming CM MAD, error code 12 is sent. This error code refers to the primary GID in the incoming CM REQ and not to the GID in the MAD packet. Therefore the code is refactored to send code 5 (unsupported request) for such an error. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Hal Rosenstock <hal@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-18  IB/{core, umad, cm}: Rename ib_init_ah_from_wc to ib_init_ah_attr_from_wc  (Parav Pandit)
Currently ib_init_ah_from_wc initializes address handle attributes and not the address handle object itself. To avoid confusion between ah_attr and ah, ib_init_ah_from_wc is renamed to ib_init_ah_attr_from_wc to reflect that it initializes ah_attr. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-18  IB/{core, cm, cma, ipoib}: Rename ib_init_ah_from_path to ib_init_ah_attr_from_path  (Parav Pandit)
Since ib_init_ah_from_path initializes the address handle attribute, it is renamed to ib_init_ah_attr_from_path to reflect that. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-18  IB/cm: Fix sleeping while spin lock is held  (Parav Pandit)
When LAP is used for RoCE, the flow below can sleep while a spin lock is held.
cm_lap_handler
  -> spin_lock
  -> <..switch_case..>
  -> cm_init_av_for_response
  -> ib_init_ah_from_wc
  -> rdma_addr_find_l2_eth_by_grh
  -> wait_for_completion()
Therefore the ah attribute initialization for incoming LAP requests is done outside of the lock context. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
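A hedged sketch of the reordering: the possibly blocking address-handle initialization runs before the spinlock is taken, so nothing sleeps while the lock is held. All names here are illustrative, not the cm code.

```c
/* Do blocking init first, then take the spinlock for the short critical
 * section only. */
#include <linux/spinlock.h>

struct example_lap_ctx {
	spinlock_t lock;
	int state;
};

/* Stand-in for ib_init_ah_attr_from_wc() + address resolution, which may
 * sleep (e.g. in rdma_addr_find_l2_eth_by_grh). */
static int example_init_av(struct example_lap_ctx *ctx)
{
	return 0;
}

static int example_lap_handler(struct example_lap_ctx *ctx)
{
	int ret;

	ret = example_init_av(ctx);	/* blocking work, outside the lock */
	if (ret)
		return ret;

	spin_lock_irq(&ctx->lock);	/* no sleeping beyond this point */
	ctx->state = 1;
	spin_unlock_irq(&ctx->lock);

	return 0;
}
```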
2017-12-18  IB/cm: Handle address handle attribute init error  (Parav Pandit)
cm_init_av_by_path depends on ib_init_ah_from_path to initialize the ah attribute, and ib_init_ah_from_path() can fail; such an error should not be ignored. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-18  IB/{cm, umad}: Handle av init error  (Parav Pandit)
cm_init_av_for_response depends on ib_init_ah_from_wc() whose return status is ignored. ib_init_ah_from_wc() can fail and its return status should be handled as done in this patch. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-18  IB/{core, ipoib}: Simplify ib_find_gid to search only for IB link layer  (Parav Pandit)
Currently there are no users of ib_find_gid for the RoCE transport; it is only used by IPoIB. Therefore it is simplified to ignore RoCE ports and the GID type check which was previously done for every port. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Eli Cohen <eli@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2017-12-18  RDMA/core: Avoid copying ifindex twice  (Parav Pandit)
rdma_copy_addr copies the ifindex to bound_dev_if. Therefore avoid copying it again after the rdma_copy_addr call is completed. Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>