path: root/drivers/pci/msi/msi.c
27 hours ago  Merge tag 'irq-msi-2025-07-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull MSI update from Thomas Gleixner:
 "A trivial cleanup in the PCI/MSI code to remove a duplicated back and forth conversion"

* tag 'irq-msi-2025-07-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  PCI/MSI: Remove duplicated to_pci_dev() conversion

2025-07-10  PCI/MSI: Prevent recursive locking in pci_msix_write_tph_tag()  (Himanshu Madhani)

pci_msix_write_tph_tag() takes the per device MSI descriptor mutex and then invokes msi_domain_get_virq(), which takes the same mutex again. That obviously results in a system hang which is exposed by a softlockup or lockdep warning.

Move the lock guard after the invocation of msi_domain_get_virq() to fix this.

[ tglx: Massage changelog by adding a proper explanation and removing the not really useful stacktrace ]

Fixes: d5124a9957b2 ("PCI/MSI: Provide a sane mechanism for TPH")
Reported-by: Jorge Lopez <jorge.jo.lopez@oracle.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Jorge Lopez <jorge.jo.lopez@oracle.com>
Link: https://lore.kernel.org/all/20250708222530.1041477-1-himanshu.madhani@oracle.com

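As a rough illustration of the deadlock and its fix (not the in-tree code: the mutex, lookup_virq() and write_tph_tag() below are stand-ins for the per-device MSI descriptor mutex, msi_domain_get_virq() and pci_msix_write_tph_tag()):

    #include <linux/cleanup.h>
    #include <linux/errno.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    static DEFINE_MUTEX(desc_mutex);	/* stand-in for the per-device MSI descriptor mutex */

    static unsigned int lookup_virq(unsigned int index)
    {
            guard(mutex)(&desc_mutex);	/* the real lookup takes the descriptor mutex internally */
            return index + 32;		/* illustrative mapping to a Linux IRQ number */
    }

    static int write_tph_tag(unsigned int index, u16 tag)
    {
            /*
             * Fix: resolve the IRQ number *before* taking the mutex. Taking the
             * guard first and then calling lookup_virq() would acquire
             * desc_mutex a second time on the same CPU and hang.
             */
            unsigned int virq = lookup_virq(index);

            if (!virq)
                    return -ENXIO;

            guard(mutex)(&desc_mutex);
            /* ... update the cached MSI-X control word for virq under the lock ... */
            return 0;
    }
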
2025-06-18  PCI/MSI: Remove duplicated to_pci_dev() conversion  (Chris Li)

In pci_msi_update_mask(), "lock = &to_pci_dev()" does the to_pci_dev() lookup, and there's another one buried inside msi_desc_to_pci_dev(). Introduce a local variable to remove that duplication.

Signed-off-by: Chris Li <chrisl@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Link: https://lore.kernel.org/all/20250617-pci-msi-avoid-dup-pcidev-v1-1-ed75b0419023@kernel.org

2025-06-04  PCI/MSI: Size device MSI domain with the maximum number of vectors  (Marc Zyngier)

Zenghui reports that since 1396e89e09f0 ("genirq/msi: Move prepare() call to per-device allocation"), his Multi-MSI capable device isn't working anymore.

This is a consequence of 15c72f824b32 ("PCI/MSI: Add support for per device MSI[X] domains"), which always creates a MSI domain of size 1, even in the presence of Multi-MSI. While this was somehow working until then, moving the .prepare() call ends up sizing the ITS table with a tiny value for this device, and making the endpoint driver unhappy.

Instead, always create the domain and call the .prepare() helper with the maximum expected size.

Fixes: 1396e89e09f0 ("genirq/msi: Move prepare() call to per-device allocation")
Fixes: 15c72f824b32 ("PCI/MSI: Add support for per device MSI[X] domains")
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
Link: https://lore.kernel.org/all/20250603141801.915305-1-maz@kernel.org
Closes: https://lore.kernel.org/r/0b1d7aec-1eac-a9cd-502a-339e216e08a1@huawei.com

2025-05-27  Merge tag 'irq-msi-2025-05-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull MSI updates from Thomas Gleixner:
 "Updates for the MSI subsystem (core code and PCI):

  - Switch the MSI descriptor locking to lock guards

  - Replace a broken and naive implementation of PCI/MSI-X control word updates in the PCI/TPH driver with a properly serialized variant in the PCI/MSI core code.

  - Remove the MSI descriptor abuse in the SCSI/UFS/QCOM driver by replacing the direct access to the MSI descriptors with the proper API function calls. People will never understand that APIs exist for a reason...

  - Provide core infrastructure for the upcoming PCI endpoint library extensions. Currently limited to ARM GICv3+, but in theory extensible to other architectures.

  - Provide a MSI domain::teardown() callback, which allows drivers to undo the effects of the prepare() callback.

  - Move the MSI domain::prepare() callback invocation to domain creation time to avoid redundant (and in case of ARM/GIC-V3-ITS confusing) invocations on every allocation. In combination with the new teardown callback this removes some ugly hacks in the GIC-V3-ITS driver, which pretended to work around the shortcomings of the core code so far. With this update the code is correct by design and implementation.

  - Make the irqchip MSI library globally available, provide a MSI parent domain creation helper and convert a bunch of (PCI/)MSI drivers over to the modern MSI parent mechanism. This is the first step to get rid of at least one incarnation of the three PCI/MSI management schemes.

  - The usual small cleanups and improvements"

* tag 'irq-msi-2025-05-25' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
  PCI/MSI: Use bool for MSI enable state tracking
  PCI: tegra: Convert to MSI parent infrastructure
  PCI: xgene: Convert to MSI parent infrastructure
  PCI: apple: Convert to MSI parent infrastructure
  irqchip/msi-lib: Honour the MSI_FLAG_NO_AFFINITY flag
  irqchip/mvebu: Convert to msi_create_parent_irq_domain() helper
  irqchip/gic: Convert to msi_create_parent_irq_domain() helper
  genirq/msi: Add helper for creating MSI-parent irq domains
  irqchip: Make irq-msi-lib.h globally available
  irqchip/gic-v3-its: Use allocation size from the prepare call
  genirq/msi: Engage the .msi_teardown() callback on domain removal
  genirq/msi: Move prepare() call to per-device allocation
  irqchip/gic-v3-its: Implement .msi_teardown() callback
  genirq/msi: Add .msi_teardown() callback as the reverse of .msi_prepare()
  irqchip/gic-v3-its: Add support for device tree msi-map and msi-mask
  dt-bindings: PCI: pci-ep: Add support for iommu-map and msi-map
  irqchip/gic-v3-its: Set IRQ_DOMAIN_FLAG_MSI_IMMUTABLE for ITS
  irqdomain: Add IRQ_DOMAIN_FLAG_MSI_IMMUTABLE and irq_domain_is_msi_immutable()
  platform-msi: Add msi_remove_device_irq_domain() in platform_device_msi_free_irqs_all()
  genirq/msi: Rename msi_[un]lock_descs()
  ...

2025-05-21  PCI/MSI: Use bool for MSI enable state tracking  (Hans Zhang)

Convert pci_msi_enable and pci_msi_enabled() to use bool type for clarity. No functional changes, only code cleanup.

Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250516165223.125083-2-18255117159@163.com

2025-04-15  PCI/MSI: Add an option to write MSIX ENTRY_DATA before any reads  (Jonathan Currier)

Commit 7d5ec3d36123 ("PCI/MSI: Mask all unused MSI-X entries") introduced a readl() from ENTRY_VECTOR_CTRL before the writel() to ENTRY_DATA.

This is correct, however some hardware, like the NIU module in the Sun Neptune chips, will cause an error and/or fatal trap if any MSI-X table entry is read before the corresponding ENTRY_DATA field is written to.

Add an optional early writel() in msix_prepare_msi_desc().

Fixes: 7d5ec3d36123 ("PCI/MSI: Mask all unused MSI-X entries")
Signed-off-by: Jonathan Currier <dullfire@yahoo.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/20241117234843.19236-2-dullfire@yahoo.com

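A condensed sketch of what such an opt-in early write can look like; the quirk flag and the surrounding function are illustrative inventions, only the PCI_MSIX_ENTRY_* offsets are the internal constants from drivers/pci/msi/msi.h:

    static void prepare_entry(struct pci_dev *dev, void __iomem *table_base,
                              unsigned int index, bool early_data_write)
    {
            void __iomem *entry = table_base + index * PCI_MSIX_ENTRY_SIZE;

            /*
             * Quirk: write ENTRY_DATA first for hardware that faults when any
             * table field is read before ENTRY_DATA has been initialized.
             * 'early_data_write' stands in for the per-device quirk condition.
             */
            if (early_data_write)
                    writel(0, entry + PCI_MSIX_ENTRY_DATA);

            /* Usual path: read the vector control word and mask the entry. */
            writel(readl(entry + PCI_MSIX_ENTRY_VECTOR_CTRL) | PCI_MSIX_ENTRY_CTRL_MASKBIT,
                   entry + PCI_MSIX_ENTRY_VECTOR_CTRL);
    }
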
2025-04-09  PCI/MSI: Provide a sane mechanism for TPH  (Thomas Gleixner)

The PCI/TPH driver fiddles with the MSI-X control word of an active interrupt completely unserialized against concurrent operations issued from the interrupt core. It also brings the PCI/MSI-X internal cached control word out of sync.

Provide a function, which has the required serialization and keeps the control word cache in sync.

Unfortunately this requires looking up and locking the interrupt descriptor, which should only be done in the interrupt core code. But confining this particular oddity in the PCI/MSI core is the lesser of all evils. An interrupt core implementation would require a larger pile of infrastructure and indirections for dubious value.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/all/20250319105506.683663807@linutronix.de

2025-04-09  PCI/MSI: Switch msix_capability_init() to guard(msi_desc_lock)  (Thomas Gleixner)

Split the lock protected functionality of msix_capability_init() out into a helper function and use guard(msi_desc_lock) to replace the lock/unlock pair. Simplify the error path in the helper function by utilizing a custom cleanup to get rid of the remaining gotos.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/all/20250319105506.564105011@linutronix.de

2025-04-09  PCI/MSI: Switch msi_capability_init() to guard(msi_desc_lock)  (Thomas Gleixner)

Split the lock protected functionality of msi_capability_init() out into a helper function and use guard(msi_desc_lock) to replace the lock/unlock pair.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/all/20250319105506.504992208@linutronix.de

2025-04-09  PCI/MSI: Use __free() for affinity masks  (Thomas Gleixner)

Let cleanup handle the freeing of the affinity mask. That prepares for switching the MSI descriptor locking to a guard().

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/all/20250319105506.444764312@linutronix.de

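A minimal sketch of the __free()-based pattern this refers to; the surrounding function is condensed and only irq_create_affinity_masks() and the cleanup attribute are the real interfaces:

    #include <linux/cleanup.h>
    #include <linux/interrupt.h>
    #include <linux/slab.h>

    static int setup_with_affinity(struct device *dev, unsigned int nvec,
                                   struct irq_affinity *affd)
    {
            /* kfree(masks) runs automatically on every return path. */
            struct irq_affinity_desc *masks __free(kfree) =
                    affd ? irq_create_affinity_masks(nvec, affd) : NULL;

            /* ... build the MSI descriptors; masks may legitimately be NULL ... */
            return 0;
    }
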
2025-04-09  PCI/MSI: Set pci_dev::msi_enabled late  (Thomas Gleixner)

The comment claiming that pci_dev::msi_enabled has to be set across setup is a leftover from ancient code versions. Nothing in the setup code requires the flag to be set anymore.

Set it in the success path and remove the extra goto label.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/all/20250319105506.383222333@linutronix.de

2025-04-09  PCI/MSI: Use guard(msi_desc_lock) where applicable  (Thomas Gleixner)

Convert the trivial cases of msi_desc_lock/unlock() pairs.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/all/20250319105506.322536126@linutronix.de

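The conversions above all follow the same before/after shape; a rough sketch, where do_op() is a placeholder and the guard class name is taken from the commit subjects (both are assumptions, and the "before" form uses the older msi_lock_descs()/msi_unlock_descs() helpers):

    /* Before: explicit lock/unlock pair, error handling via return or goto. */
    static int do_op_locked(struct device *dev)
    {
            int ret;

            msi_lock_descs(dev);
            ret = do_op(dev);
            msi_unlock_descs(dev);
            return ret;
    }

    /* After: the scoped guard releases the lock automatically on every exit path. */
    static int do_op_guarded(struct device *dev)
    {
            guard(msi_desc_lock)(dev);
            return do_op(dev);
    }
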
2025-03-28  Revert "Merge tag 'irq-msi-2025-03-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip"  (Linus Torvalds)

This reverts commit 36f5f026df6c1cd8a20373adc4388d2b3401ce91, reversing changes made to 43a7eec035a5b64546c8adefdc9cf96a116da14b.

Thomas says:
 "I just noticed that for some incomprehensible reason, probably sheer incompetence when trying to utilize b4, I managed to merge an outdated _and_ buggy version of that series. Can you please revert that merge completely?"

Done.

Requested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2025-03-26  PCI/MSI: Handle the NOMASK flag correctly for all PCI/MSI backends  (Thomas Gleixner)

The conversion of the XEN specific global variable pci_msi_ignore_mask to a MSI domain flag missed the facts that:

 1) Legacy architectures do not provide an interrupt domain

 2) Parent MSI domains do not necessarily have a domain info attached

Both cases result in an unconditional NULL pointer dereference. This was unfortunately missed in review and testing revealed it late.

Cure this by using the existing pci_msi_domain_supports() helper, which handles all possible cases correctly.

Fixes: c3164d2e0d18 ("PCI/MSI: Convert pci_msi_ignore_mask to per MSI domain flag")
Reported-by: Daniel Gomez <da.gomez@kernel.org>
Reported-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Borislav Petkov <bp@alien8.de>
Tested-by: Daniel Gomez <da.gomez@kernel.org>
Link: https://lore.kernel.org/all/87iknwyp2o.ffs@tglx
Closes: https://lore.kernel.org/all/qn7fzggcj6qe6r6gdbwcz23pzdz2jx64aldccmsuheabhmjgrt@tawf5nfwuvw7

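A sketch of the difference, assuming MSI_FLAG_NO_MASK as the per-domain flag name (an assumption; the log above only calls it the NOMASK flag):

    /*
     * Broken shape: dereferences the irq domain and its info unconditionally,
     * which crashes on legacy architectures (no irq domain at all) and on
     * parent domains without a msi_domain_info.
     */
    static bool no_mask_broken(struct pci_dev *pdev)
    {
            struct irq_domain *d = dev_get_msi_domain(&pdev->dev);

            return msi_get_domain_info(d)->flags & MSI_FLAG_NO_MASK;
    }

    /* Fixed shape: the helper copes with a missing domain or missing info. */
    static bool no_mask_fixed(struct pci_dev *pdev)
    {
            return pci_msi_domain_supports(pdev, MSI_FLAG_NO_MASK, DENY_LEGACY);
    }
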
2025-03-25  Merge tag 'for-linus-6.15-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip  (Linus Torvalds)

Pull xen updates from Juergen Gross:

 - cleanup: remove an unused function

 - add support for a XenServer specific virtual PCI device

 - fix the handling of a sparse Xen hypervisor symbol table

 - avoid warnings when building the kernel with gcc 15

 - fix use of devices behind a VMD bridge when running as a Xen PV dom0

* tag 'for-linus-6.15-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  PCI/MSI: Convert pci_msi_ignore_mask to per MSI domain flag
  PCI: vmd: Disable MSI remapping bypass under Xen
  xen/pci: Do not register devices with segments >= 0x10000
  xen/pciback: Remove unused pcistub_get_pci_dev
  xenfs/xensyms: respect hypervisor's "next" indication
  xen/mcelog: Add __nonstring annotations for unterminated strings
  xen: Add support for XenServer 6.1 platform device

2025-03-21  PCI/MSI: Convert pci_msi_ignore_mask to per MSI domain flag  (Roger Pau Monné)

Setting pci_msi_ignore_mask inhibits the toggling of the mask bit for both MSI and MSI-X entries globally, regardless of the IRQ chip they are using. Only Xen sets the pci_msi_ignore_mask when routing physical interrupts over event channels, to prevent PCI code from attempting to toggle the maskbit, as it's Xen that controls the bit.

However, the pci_msi_ignore_mask being global will affect devices that use MSI interrupts but are not routing those interrupts over event channels (not using the Xen pIRQ chip). One example is devices behind a VMD PCI bridge. In that scenario the VMD bridge configures MSI(-X) using the normal IRQ chip (the pIRQ one in the Xen case), and devices behind the bridge configure the MSI entries using indexes into the VMD bridge MSI table. The VMD bridge then demultiplexes such interrupts and delivers to the destination device(s). Having pci_msi_ignore_mask set in that scenario prevents (un)masking of MSI entries for devices behind the VMD bridge.

Move the signaling of no entry masking into the MSI domain flags, as that allows setting it on a per-domain basis. Set it for the Xen MSI domain that uses the pIRQ chip, while leaving it unset for the rest of the cases.

Remove pci_msi_ignore_mask at once, since it was only used by Xen code, and with Xen dropping usage the variable is unneeded.

This fixes using devices behind a VMD bridge on Xen PV hardware domains. Albeit devices behind a VMD bridge are not known to Xen, that doesn't mean Linux cannot use them. By inhibiting the usage of VMD_FEAT_CAN_BYPASS_MSI_REMAP and the removal of the pci_msi_ignore_mask bodge, devices behind a VMD bridge do work fine when used from a Linux Xen hardware domain. That's the whole point of the series.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Juergen Gross <jgross@suse.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Message-ID: <20250219092059.90850-4-roger.pau@citrix.com>
Signed-off-by: Juergen Gross <jgross@suse.com>

2025-03-13  PCI/MSI: Provide a sane mechanism for TPH  (Thomas Gleixner)

The PCI/TPH driver fiddles with the MSI-X control word of an active interrupt completely unserialized against concurrent operations issued from the interrupt core. It also brings the PCI/MSI-X internal cached control word out of sync.

Provide a function, which has the required serialization and keeps the control word cache in sync.

Unfortunately this requires looking up and locking the interrupt descriptor, which should only be done in the interrupt core code. But confining this particular oddity in the PCI/MSI core is the lesser of all evils. An interrupt core implementation would require a larger pile of infrastructure and indirections for dubious value.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/all/20250313130321.822790423@linutronix.de

2025-03-13  PCI/MSI: Switch to MSI descriptor locking to guard()  (Thomas Gleixner)

Convert the code to use the new guard(msi_descs_lock).

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/all/20250313130321.695027112@linutronix.de

2024-12-16  PCI/MSI: Handle lack of irqdomain gracefully  (Thomas Gleixner)

Alexandre observed a warning emitted from pci_msi_setup_msi_irqs() on a RISCV platform which does not provide PCI/MSI support:

  WARNING: CPU: 1 PID: 1 at drivers/pci/msi/msi.h:121 pci_msi_setup_msi_irqs+0x2c/0x32
  __pci_enable_msix_range+0x30c/0x596
  pci_msi_setup_msi_irqs+0x2c/0x32
  pci_alloc_irq_vectors_affinity+0xb8/0xe2

RISCV uses hierarchical interrupt domains and correctly does not implement the legacy fallback. The warning triggers from the legacy fallback stub.

That warning is bogus as the PCI/MSI layer knows whether a PCI/MSI parent domain is associated with the device or not. There is a check for MSI-X, which has a legacy assumption. But that legacy fallback assumption is only valid when legacy support is enabled, but otherwise the check should simply return -ENOTSUPP.

Loongarch tripped over the same problem and blindly enabled legacy support without implementing the legacy fallbacks. There are weak implementations which return an error, so the problem was papered over.

Correct pci_msi_domain_supports() to evaluate the legacy mode and add the missing supported check into the MSI enable path to complete it.

Fixes: d2a463b29741 ("PCI/MSI: Reject multi-MSI early")
Reported-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/87ed2a8ow5.ffs@tglx

2024-06-24  PCI/MSI: Fix UAF in msi_capability_init  (Mostafa Saleh)

KFENCE reports the following UAF:

  BUG: KFENCE: use-after-free read in __pci_enable_msi_range+0x2c0/0x488

  Use-after-free read at 0x0000000024629571 (in kfence-#12):
   __pci_enable_msi_range+0x2c0/0x488
   pci_alloc_irq_vectors_affinity+0xec/0x14c
   pci_alloc_irq_vectors+0x18/0x28

  kfence-#12: 0x0000000008614900-0x00000000e06c228d, size=104, cache=kmalloc-128

  allocated by task 81 on cpu 7 at 10.808142s:
   __kmem_cache_alloc_node+0x1f0/0x2bc
   kmalloc_trace+0x44/0x138
   msi_alloc_desc+0x3c/0x9c
   msi_domain_insert_msi_desc+0x30/0x78
   msi_setup_msi_desc+0x13c/0x184
   __pci_enable_msi_range+0x258/0x488
   pci_alloc_irq_vectors_affinity+0xec/0x14c
   pci_alloc_irq_vectors+0x18/0x28

  freed by task 81 on cpu 7 at 10.811436s:
   msi_domain_free_descs+0xd4/0x10c
   msi_domain_free_locked.part.0+0xc0/0x1d8
   msi_domain_alloc_irqs_all_locked+0xb4/0xbc
   pci_msi_setup_msi_irqs+0x30/0x4c
   __pci_enable_msi_range+0x2a8/0x488
   pci_alloc_irq_vectors_affinity+0xec/0x14c
   pci_alloc_irq_vectors+0x18/0x28

Descriptor allocation done in:

  __pci_enable_msi_range
    msi_capability_init
      msi_setup_msi_desc
        msi_insert_msi_desc
          msi_domain_insert_msi_desc
            msi_alloc_desc
              ...

Freed in case of failure in __msi_domain_alloc_locked():

  __pci_enable_msi_range
    msi_capability_init
      pci_msi_setup_msi_irqs
        msi_domain_alloc_irqs_all_locked
          msi_domain_alloc_locked
            __msi_domain_alloc_locked => fails
            msi_domain_free_locked
              ...

That failure propagates back to pci_msi_setup_msi_irqs() in msi_capability_init() which accesses the descriptor for unmasking in the error exit path.

Cure it by copying the descriptor and using the copy for the error exit path unmask operation.

[ tglx: Massaged change log ]

Fixes: bf6e054e0e3f ("genirq/msi: Provide msi_device_populate/destroy_sysfs()")
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Mostafa Saleh <smostafa@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20240624203729.1094506-1-smostafa@google.com

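The shape of the fix, condensed into a sketch (locals and error handling are simplified; msi_first_desc(), pci_msi_unmask() and msi_multi_mask() are existing internal helpers, but treat the exact flow as illustrative):

    struct msi_desc *entry = msi_first_desc(&dev->dev, MSI_DESC_ALL);
    struct msi_desc desc = *entry;	/* snapshot while the descriptor is still valid */
    int ret;

    ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSI);
    if (ret) {
            /* 'entry' may already have been freed by the failed allocation; use the copy. */
            pci_msi_unmask(&desc, msi_multi_mask(&desc));
            return ret;
    }
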
2024-04-26  PCI/MSI: Make error path handling follow the standard pattern  (Andy Shevchenko)

Make error path handling follow the standard pattern, i.e. checking for errors first. This makes code much easier to read and understand despite being a bit longer.

Link: https://lore.kernel.org/r/20240426144039.557907-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>

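The pattern in question, as a generic sketch (the helpers are placeholders):

    /* Before: success handling nested inside the condition. */
    ret = setup_interrupts(dev);
    if (ret == 0)
            finalize(dev);
    else
            undo(dev);
    return ret;

    /* After: errors are checked and handled first, the success path stays flat. */
    ret = setup_interrupts(dev);
    if (ret) {
            undo(dev);
            return ret;
    }

    finalize(dev);
    return 0;
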
2023-10-24  PCI/MSI: Use FIELD_GET/PREP()  (Ilpo Järvinen)

Instead of custom masking and shifting, use FIELD_GET/PREP() with register fields.

Link: https://lore.kernel.org/r/20231018113254.17616-8-ilpo.jarvinen@linux.intel.com
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>

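For illustration, the kind of conversion this refers to, using the existing register field macros from <linux/bitfield.h> and the MSI/MSI-X control word fields from <linux/pci_regs.h> (the wrapper functions are illustrative):

    #include <linux/bitfield.h>
    #include <linux/pci.h>

    static u16 msix_table_size_from_ctrl(u16 control)
    {
            /* Before: (control & PCI_MSIX_FLAGS_QSIZE) + 1 with hand-rolled masking. */
            /* After: FIELD_GET() names the register field being extracted.           */
            return FIELD_GET(PCI_MSIX_FLAGS_QSIZE, control) + 1;
    }

    static u16 msi_ctrl_set_multiple(u16 msgctl, u8 log2_vectors)
    {
            /* FIELD_PREP() is the write-side counterpart: place a value into a field. */
            msgctl &= ~PCI_MSI_FLAGS_QSIZE;
            return msgctl | FIELD_PREP(PCI_MSI_FLAGS_QSIZE, log2_vectors);
    }
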
2023-04-16  PCI/MSI: Remove over-zealous hardware size check in pci_msix_validate_entries()  (Thomas Gleixner)

pci_msix_validate_entries() validates the entries array which is handed in by the caller for a MSI-X interrupt allocation. Aside from consistency failures it also detects a failure when the size of the MSI-X hardware table in the device is smaller than the size of the entries array.

That's wrong for the case of range allocations where the caller provides the minimum and the maximum number of vectors to allocate, when the hardware size is greater than or equal to the minimum, but smaller than the maximum.

Remove the hardware size check completely from that function and just ensure that the entries array up to the maximum size is consistent. The limitation and range checking versus the hardware size happens independently of that afterwards anyway because the entries array is optional.

Fixes: 4644d22eb673 ("PCI/MSI: Validate MSI-X contiguous restriction early")
Reported-by: David Laight <David.Laight@aculab.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/87v8i3sg62.ffs@tglx

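As a usage-level illustration of the range semantics being restored (device specifics are hypothetical):

    /*
     * Ask for between 2 and 8 MSI-X vectors.  A device whose MSI-X table has
     * only 4 entries must still succeed with nvec == 4; rejecting it merely
     * because 4 < 8 was the over-zealous check removed here.
     */
    static int request_vectors(struct pci_dev *pdev)
    {
            struct msix_entry entries[8];
            int i, nvec;

            for (i = 0; i < ARRAY_SIZE(entries); i++)
                    entries[i].entry = i;

            nvec = pci_enable_msix_range(pdev, entries, 2, ARRAY_SIZE(entries));
            return nvec < 0 ? nvec : 0;	/* negative only if even 2 cannot be granted */
    }
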
2022-12-05  PCI/MSI: Split MSI-X descriptor setup  (Thomas Gleixner)

The upcoming mechanism to allocate MSI-X vectors after enabling MSI-X needs to share some of the MSI-X descriptor setup.

The regular descriptor setup on enable has the following code flow:

 1) Allocate descriptor

 2) Setup descriptor with PCI specific data

 3) Insert descriptor

 4) Allocate interrupts which in turn scans the inserted descriptors

This cannot be easily changed because the PCI/MSI code needs to handle the legacy architecture specific allocation model and the irq domain model where quite some domains have the assumption that the above flow is how it works.

Ideally the code flow should look like this:

 1) Invoke allocation at the MSI core

 2) MSI core allocates descriptor

 3) MSI core calls back into the irq domain which fills in the domain specific parts

This could be done for underlying parent MSI domains which support post-enable allocation/free but that would create significantly different code paths for MSI/MSI-X enable. Though for dynamic allocation which wants to share the allocation code with the upcoming PCI/IMS support it's the right thing to do.

Split the MSI-X descriptor setup into the preallocation part which just sets the index and fills in the horrible hack of virtual IRQs and the real PCI specific MSI-X setup part which solely depends on the index in the descriptor. This allows to provide a common dynamic allocation interface at the MSI core level for both PCI/MSI-X and PCI/IMS.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221124232326.616292598@linutronix.de

2022-12-05  PCI/MSI: Add support for per device MSI[X] domains  (Thomas Gleixner)

Provide a template and the necessary callbacks to create PCI/MSI and PCI/MSI-X domains.

The domains are created when MSI or MSI-X is enabled. The domain's lifetime is either the device lifetime or in case that e.g. MSI-X was tried first and failed, then the MSI-X domain is removed and a MSI domain is created as both are mutually exclusive and reside in the default domain ID slot of the per device domain pointer array.

Also expand pci_msi_domain_supports() to handle feature checks correctly even in the case that the per device domain was not yet created by checking the features supported by the MSI parent.

Add the necessary setup calls into the MSI and MSI-X enable code path. These setup calls are backwards compatible. They return success when there is no parent domain found, which means the existing global domains or the legacy allocation path keep just working.

Co-developed-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221124232325.975388241@linutronix.de

2022-12-05  PCI/MSI: Split __pci_write_msi_msg()  (Thomas Gleixner)

The upcoming per device MSI domains will create different domains for MSI and MSI-X. Split the write message function into MSI and MSI-X helpers so they can be used by those new domain functions separately.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221124232325.857982142@linutronix.de

2022-12-05  genirq/msi: Rename msi_add_msi_desc() to msi_insert_msi_desc()  (Thomas Gleixner)

This reflects the functionality better. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221124230314.103554618@linutronix.de

2022-11-17  PCI/MSI: Validate MSI-X contiguous restriction early  (Thomas Gleixner)

With interrupt domains the sanity check for MSI-X vector validation can be done _before_ any allocation happens. The sanity check only applies to the allocation functions which have an 'entries' array argument. The entries array is filled by the caller with the requested MSI-X indices. Some drivers have gaps in the index space which is not supported on all architectures.

The PCI/MSI irq domain has a 'feature' bit to enforce this validation late during the allocation phase. Just do it right away before doing any other work along with the other sanity checks on that array.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.691357406@linutronix.de

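What a sparse 'entries' request looks like from a driver's point of view (illustrative; whether it is accepted depends on the architecture/domain, which is exactly what is now checked up front):

    static int request_sparse_slots(struct pci_dev *pdev)
    {
            /* Request MSI-X table slots 0 and 3, leaving a gap at 1 and 2. */
            struct msix_entry entries[] = {
                    { .entry = 0 },
                    { .entry = 3 },
            };
            int ret = pci_enable_msix_range(pdev, entries, ARRAY_SIZE(entries),
                                            ARRAY_SIZE(entries));

            return ret < 0 ? ret : 0;
    }
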
2022-11-17  PCI/MSI: Reject MSI-X early  (Thomas Gleixner)

Similar to PCI multi-MSI, reject MSI-X enablement when an irq domain is attached to the device which does not support MSI-X.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.631728309@linutronix.de

2022-11-17  PCI/MSI: Reject multi-MSI early  (Thomas Gleixner)

When hierarchical MSI interrupt domains are enabled then there is no point in doing tons of work and detecting the missing support for multi-MSI late in the allocation path.

Just query the domain feature flags right away. The query function is going to be used for other purposes later and has a mode argument which influences the result:

  ALLOW_LEGACY returns true when:
    - there is no irq domain attached (legacy support)
    - there is an irq domain attached which has the feature flag set

  DENY_LEGACY returns only true when:
    - there is an irq domain attached which has the feature flag set

This allows the function to be used universally without ifdeffery in the calling code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.574339988@linutronix.de

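A sketch of the early check in the MSI enable path (the call site is condensed; MSI_FLAG_MULTI_PCI_MSI is the existing domain feature flag for multi-MSI):

    /*
     * Reject multi-MSI up front when an attached irq domain does not
     * advertise MSI_FLAG_MULTI_PCI_MSI.  ALLOW_LEGACY keeps the legacy
     * (no-irqdomain) architectures working unchanged.
     */
    if (nvec > 1 &&
        !pci_msi_domain_supports(dev, MSI_FLAG_MULTI_PCI_MSI, ALLOW_LEGACY))
            return 1;	/* tell the caller that only one vector is possible */
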
2022-11-17  PCI/MSI: Sanitize MSI-X checks  (Thomas Gleixner)

There is no point in doing the same sanity checks over and over in a loop during MSI-X enablement. Put them in front of the loop and return early when they fail.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.516946468@linutronix.de

2022-11-17  PCI/MSI: Reorder functions in msi.c  (Ahmed S. Darwish)

There is no way to navigate msi.c without banging the head against the wall every now and then because MSI and MSI-X specific functions are intermingled and the code flow is completely non-obvious.

Reorder everything so common helpers, MSI and MSI-X specific functions are grouped together.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.459089736@linutronix.de

2022-11-17  PCI/MSI: Move pci_restore_msi_state() to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_restore_msi_state() and add kernel-doc for the function.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.331584998@linutronix.de

2022-11-17  PCI/MSI: Move pci_msi_enabled() to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_msi_enabled() and make its kernel-doc comprehensive.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.271447896@linutronix.de

2022-11-17  PCI/MSI: Move pci_irq_get_affinity() to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_irq_get_affinity() and let its kernel-doc match the rest of the file.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.214792769@linutronix.de

2022-11-17  PCI/MSI: Move pci_disable_msix() to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_disable_msix() and make its kernel-doc comprehensive.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.156785224@linutronix.de

2022-11-17  PCI/MSI: Move pci_msix_vec_count() to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_msix_vec_count() and make its kernel-doc comprehensive.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.099461602@linutronix.de

2022-11-17  PCI/MSI: Move pci_free_irq_vectors() to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_free_irq_vectors() and make its kernel-doc comprehensive.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122015.042870570@linutronix.de

2022-11-17  PCI/MSI: Move pci_irq_vector() to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_irq_vector() and let its kernel-doc match the rest of the file.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122014.984490384@linutronix.de

2022-11-17  PCI/MSI: Move pci_alloc_irq_vectors_affinity() to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_alloc_irq_vectors_affinity() and let its kernel-doc reference pci_alloc_irq_vectors() documentation added in parent commit.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122014.927531290@linutronix.de

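Taken together, the functions being collected in api.c form the driver-facing allocation interface; a self-contained usage sketch (device name and handler are made up):

    #include <linux/interrupt.h>
    #include <linux/pci.h>

    static irqreturn_t demo_handler(int irq, void *data)
    {
            return IRQ_HANDLED;
    }

    static int demo_setup_irqs(struct pci_dev *pdev)
    {
            int nvec, i, ret;

            nvec = pci_alloc_irq_vectors(pdev, 1, 4, PCI_IRQ_MSIX | PCI_IRQ_MSI);
            if (nvec < 0)
                    return nvec;

            for (i = 0; i < nvec; i++) {
                    ret = request_irq(pci_irq_vector(pdev, i), demo_handler, 0,
                                      "demo", pdev);
                    if (ret)
                            goto err_free;
            }
            return 0;

    err_free:
            while (--i >= 0)
                    free_irq(pci_irq_vector(pdev, i), pdev);
            pci_free_irq_vectors(pdev);
            return ret;
    }
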
2022-11-17  PCI/MSI: Move pci_enable_msix_range() to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_enable_msix_range() and make its kernel-doc comprehensive.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122014.813792885@linutronix.de

2022-11-17  PCI/MSI: Move pci_enable_msi() API to api.c  (Ahmed S. Darwish)

To disentangle the maze in msi.c, all exported device-driver MSI APIs are now to be grouped in one file, api.c.

Move pci_enable_msi() and make its kernel-doc comprehensive.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122014.755178149@linutronix.de

2022-11-17  PCI/MSI: Move pci_disable_msi() to api.c  (Ahmed S. Darwish)

msi.c is a maze of randomly sorted functions which makes the code unreadable. As a first step split the driver visible API and the internal implementation which also allows proper API documentation via one file.

Create drivers/pci/msi/api.c to group all exported device-driver PCI/MSI APIs in one C file.

Begin by moving pci_disable_msi() there and add kernel-doc for the function as appropriate.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122014.696798036@linutronix.de

2022-11-17  PCI/MSI: Move mask and unmask helpers to msi.h  (Ahmed S. Darwish)

The upcoming support for per device MSI interrupt domains needs to share some of the inline helpers with the MSI implementation.

Move them to the header file.

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122014.640052354@linutronix.de

2022-11-17  PCI/MSI: Check for MSI enabled in __pci_msix_enable()  (Thomas Gleixner)

PCI/MSI and PCI/MSI-X are mutually exclusive, but the MSI-X enable code lacks a check for already enabled MSI.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ashok Raj <ashok.raj@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20221111122013.653556720@linutronix.de

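The added guard is presumably of this shape (the message text is illustrative):

    if (dev->msi_enabled) {
            pci_info(dev, "can't enable MSI-X (MSI already enabled)\n");
            return -EINVAL;
    }
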
2022-08-26  PCI/MSI: Correct 'can_mask' test in msi_add_msi_desc()  (Josef Johansson)

71020a3c0dff4 ("PCI/MSI: Use msi_add_msi_desc()") inadvertently reversed the sense of "msi_attrib.can_mask" in one use:

  -	if (entry->pci.msi_attrib.can_mask) {
  -		addr = pci_msix_desc_addr(entry);
  -		entry->pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
  +	if (!desc.pci.msi_attrib.can_mask) {
  +		addr = pci_msix_desc_addr(&desc);
  +		desc.pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL);

Restore the original test.

[bhelgaas: commit log]

Fixes: 71020a3c0dff4 ("PCI/MSI: Use msi_add_msi_desc()")
Link: https://lore.kernel.org/r/d818f9c9-a432-213e-4152-eaff3b7da52e@oderland.se
Signed-off-by: Josef Johansson <josef@oderland.se>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

2022-02-04  PCI/MSI: Remove bogus warning in pci_irq_get_affinity()  (Thomas Gleixner)

The recent overhaul of pci_irq_get_affinity() introduced a regression when pci_irq_get_affinity() is called for an MSI-X interrupt which was not allocated with affinity descriptor information.

The original code just returned a NULL pointer in that case, but the rework added a WARN_ON() under the assumption that the corresponding WARN_ON() in the MSI case can be applied to MSI-X as well.

In fact the MSI warning in the original code does not make sense either because it's legitimate to invoke pci_irq_get_affinity() for an MSI interrupt which was not allocated with affinity descriptor information.

Remove it and just return NULL as the original code did.

Fixes: f48235900182 ("PCI/MSI: Simplify pci_irq_get_affinity()")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/87ee4n38sm.ffs@tglx

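From the caller's side, the contract this restores is simply that NULL means "no affinity information", which is legitimate when the vectors were not allocated with PCI_IRQ_AFFINITY; a small sketch:

    static const struct cpumask *demo_vector_mask(struct pci_dev *pdev, int nr)
    {
            const struct cpumask *mask = pci_irq_get_affinity(pdev, nr);

            /* No affinity descriptors were requested: fall back to any CPU. */
            return mask ? mask : cpu_possible_mask;
    }
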
2021-12-18  PCI/MSI: Unbreak pci_irq_get_affinity()  (Thomas Gleixner)

The recent cleanup of pci_irq_get_affinity() broke the function for PCI/MSI-X and indices > 0. Only the MSI descriptor for PCI/MSI has more than one affinity mask which can be retrieved via the MSI index. PCI/MSI-X has one descriptor per vector and each has a single affinity mask.

Use index 0 when accessing the affinity mask in the MSI descriptor when MSI-X is enabled.

Fixes: f48235900182 ("PCI/MSI: Simplify pci_irq_get_affinity()")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/87v8zm9pmd.ffs@tglx

2021-12-16  PCI/MSI: Use msi_on_each_desc()  (Thomas Gleixner)

Use the new iterator functions which pave the way for dynamically extending MSI-X vectors.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Tested-by: Nishanth Menon <nm@ti.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20211206210748.142603657@linutronix.de