Age  Commit message (Collapse)  Author
2024-07-23  arm64: remove redundant 'if HAVE_ARCH_KASAN' in Kconfig  (Masahiro Yamada)
Since commit 0383808e4d99 ("arm64: kasan: Reduce minimum shadow alignment and enable 5 level paging"), HAVE_ARCH_KASAN is always 'y'. The condition 'if HAVE_ARCH_KASAN' is always met. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20240715192843.2201439-1-masahiroy@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2024-07-23  s390: Remove protvirt and kvm config guards for uv code  (Janosch Frank)
Remove the CONFIG_PROTECTED_VIRTUALIZATION_GUEST ifdefs and config option, as well as the CONFIG_KVM ifdefs, from the uv files. Having this configurable has been more of a pain than a help, so it is time to remove the ifdefs and the config option. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/boot: Add cmdline option to relocate lowcore  (Sven Schnelle)
Now that everything has been converted, add the option 'relocate_lowcore' to enable relocating the lowcore. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/kdump: Make kdump ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in store_status() and __do_machine_kdump(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Make system_call() ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in system_call(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Make ret_from_fork() ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in ret_from_fork(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Make __switch_to() ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in __switch_to(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Make restart_int_handler() ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in restart_int_handler(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Make mcck_int_handler() ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in mcck_int_handler(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Make int handlers ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in the ext/io interrupt handlers. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Make pgm_check_handler() ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in pgm_check_handler(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Add base register to CHECK_VMAP_STACK/CHECK_STACK macro  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to CHECK_VMAP_STACK and CHECK_STACK. No functional change, because %r0 is passed to the macros. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Add base register to SIEEXIT macro  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to SIEEXIT. No functional change, because %r0 is passed to the macro. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Add base register to MBEAR macro  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to MBEAR. No functional change, because %r0 is passed to the macro. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/entry: Make __sie64a() ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in __sie64a(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/head64: Make startup code ready for lowcore relocation  (Sven Schnelle)
In preparation for having the lowcore at a different address than zero, add the base register to all lowcore accesses in startup_continue(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390: Add infrastructure to patch lowcore accesses  (Sven Schnelle)
The s390 architecture defines two special per-CPU data pages called the "prefix area". In s390-linux terminology this is usually called "lowcore". This memory area contains system configuration data like old/new PSWs for system call/interrupt/machine check handlers and lots of other data. It is normally mapped to logical address 0. This area can only be accessed when in supervisor mode. This means that kernel code can dereference NULL pointers, because accesses to address 0 are allowed. Parts of lowcore can be write protected, but read accesses and write accesses outside of the write protected areas are not caught. To remove this limitation for debugging and testing, remap lowcore to another address and define a function get_lowcore() which simply returns the address where lowcore is mapped. This would normally introduce a pointer dereference (=memory read). As lowcore is used for several very often used variables, add code to patch this function during runtime, so we avoid the memory reads. For C code get_lowcore() has to be used, for assembly code it is the GET_LC macro. When this macro/function is used, a reference is added for alternative patching. All these locations will be patched to the actual lowcore location when the kernel is booted or a module is loaded. To make debugging/bisecting problems easier, this patch adds all the infrastructure but the lowcore address is still hardwired to 0. This way the code can be converted on a per function basis, and the functionality is enabled in a patch after all the functions have been converted. Note that this requires at least z16 because the old lpsw instruction only allowed a 12 bit displacement. z16 introduced lpswey which allows 20 bits (signed), so the lowcore can effectively be mapped from address 0 - 0x7e000. To use 0x7e000 as address, a 6 byte lgfi instruction would have to be used in the alternative. To save two bytes, llilh can be used, but this only allows setting bits 16-31 of the address. In order to use the llilh instruction, use 0x70000 as alternative lowcore address. This is still large enough to catch NULL pointer dereferences into large arrays. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
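A minimal sketch of the mechanism described above, under stated assumptions: the alternative type name (ALT_TYPE_LOWCORE) and the exact instruction operands are illustrative, not copied from the patch. The idea is that get_lowcore() materializes the lowcore base in a register with a single llilh, and the alternatives code rewrites that instruction once the relocated address (0x70000) is in effect:

static __always_inline struct lowcore *get_lowcore(void)
{
        struct lowcore *lc;

        /* Default: lowcore at address 0. When lowcore relocation is active,
         * the alternative patches the instruction to "llilh %lc,7", i.e.
         * 7 << 16 = 0x70000. Only bits 16-31 can be set with llilh, which is
         * why 0x70000 was chosen as the alternative lowcore address. */
        asm(ALTERNATIVE("llilh %[lc],0", "llilh %[lc],7", ALT_TYPE_LOWCORE)
            : [lc] "=d" (lc));
        return lc;
}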
2024-07-23  s390/atomic_ops: Disable flag outputs constraint for GCC versions below 14.2.0  (Heiko Carstens)
GCC may die with an ICE if the flag outputs constraint is used in combination with other inline assemblies. This will be fixed with GCC 14.2.0. Therefore disable the use of the constraint for now. Link: https://gcc.gnu.org/git?p=gcc.git;a=commit;h=cd11413ff7c4353a3e336db415304f788d23a393 Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
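A hedged sketch of how such a constraint is typically gated on the compiler version; GCC_VERSION is the kernel's usual compiler version macro, while the toggle name below is hypothetical:

/* Disable the flag-output constraint for compilers affected by the ICE. */
#if defined(GCC_VERSION) && (GCC_VERSION < 140200)
#define HAVE_ASM_FLAG_OUTPUTS   0       /* ICE fixed only in GCC 14.2.0 */
#else
#define HAVE_ASM_FLAG_OUTPUTS   1
#endif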
2024-07-23  s390/entry: Move SIE indicator flag to thread info  (Heiko Carstens)
CIF_SIE indicates if a thread is running in SIE context. This is the state of a thread and not the CPU. Therefore move this indicator to thread info. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/nmi: Simplify ptregs setup  (Heiko Carstens)
The low level machine check handler code fills the ptregs structure partially with the register contents present at machine check handler entry and partially with contents from the machine check save area. In case of a machine check the contents of all general purpose registers are saved by the CPU to the machine check save area. Therefore simplify the code and fill the ptregs structure by only using the machine check save area as source. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/alternatives: Remove alternative facility list  (Heiko Carstens)
The alternative and the normal facility list are always identical. Remove the alternative facility list, which allows the alternatives code to be simplified. Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Tested-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/nospec: Push down alternative handling  (Heiko Carstens)
The nospec implementation is deeply integrated into the alternatives code: an alternative facility list is implemented and used by the alternative code only for nospec, while it is modified according to nospec specific needs. Push down the nospec alternative handling into the nospec code by introducing a new alternative type and a specific nospec callback to decide if alternatives should be applied. Also introduce a new global nobp variable which together with facility 82 can be used to decide if nobp is enabled or not. Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Tested-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
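A hedged sketch of the decision logic described above; the function and variable names are assumptions, while test_facility() is the existing s390 facility check:

#include <asm/facility.h>

static bool nobp = true;        /* the new global switch (type assumed) */

/* Illustration only: a nospec-specific callback that decides whether a
 * nospec alternative should be applied. */
static bool nospec_apply_alternative(void)
{
        /* nobp is enabled and facility 82 (branch prediction control)
         * is installed on this machine. */
        return nobp && test_facility(82);
}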
2024-07-23  s390/alternatives: Allow early alternative patching in decompressor  (Sven Schnelle)
Add the required code to patch alternatives early in the decompressor. This is required for the upcoming lowcore relocation changes, where alternatives for facility 193 need to get patched before lowcore alternatives. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Co-developed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/alternatives: Rework to allow for callbacks  (Heiko Carstens)
Rework alternatives to allow for callbacks. With this every alternative entry has additional data encoded:
- When (aka context) an alternative is supposed to be applied
- The type of an alternative, which allows for type specific handling and callbacks
- Extra type specific payload (patch information), which can be passed to callbacks in order to decide if an alternative should be applied or not
For now only the "late" context is implemented, which means there is no change to the previous behaviour. All code is just converted to the more generic new infrastructure. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Tested-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
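As an illustration of the encoding described in the list above; field names, widths and ordering are assumptions, not the actual struct alt_instr layout:

#include <linux/types.h>

struct alt_instr_sketch {
        s32 instr_offset;       /* offset to the original instruction(s) */
        s32 repl_offset;        /* offset to the replacement instructions */
        u16 type;               /* selects type-specific handling/callback */
        u16 data;               /* type-specific patch information */
        u8  ctx;                /* context: when the alternative may be applied */
        u8  instrlen;           /* length of the original instruction(s) */
} __packed;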
2024-07-23  s390/uaccess: Make s390_kernel_write() usable for decompressor  (Heiko Carstens)
To avoid lots of ifdefs in C code make s390_kernel_write() usable for the decompressor: simply use memcpy() for this case since there is no write protection enabled that early. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Tested-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
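A minimal sketch of the approach, assuming a __DECOMPRESSOR-style build guard (the exact guard name is an assumption):

#include <linux/string.h>

#ifdef __DECOMPRESSOR
/* No write protection is active this early, so a plain memcpy() is enough. */
void *s390_kernel_write(void *dst, const void *src, size_t size)
{
        return memcpy(dst, src, size);
}
#endif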
2024-07-23  s390/alternatives: Move text sync functions  (Heiko Carstens)
Move all text sync functions from alternative.c to processor.c. This way there is only minimal code left in alternative.c, which is a prerequisite to use the C file within boot code as well. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Tested-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/alternatives: Merge both alternative header files  (Heiko Carstens)
The two alternative header files must stay in sync. This is easier to achieve within one header file. Therefore merge both of them and have only one file, like most other architectures. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Tested-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/alternatives: Use consistent naming  (Heiko Carstens)
The alternative code is using the words facility and feature for the same thing. Rename facility to the more generic feature everywhere to have consistent naming. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Tested-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/alternatives: Remove noaltinstr option  (Sven Schnelle)
The current kernel doesn't boot without alternative patching on z16 machines. To avoid such bugs in the future, remove the option to disable alternative patching. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390: Move CIF flags to struct pcpu  (Sven Schnelle)
To allow testing flags for offline CPUs, move the CIF flags to struct pcpu. To avoid having to calculate the array index for each access, add a pointer to the pcpu member for the current cpu to lowcore. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/smp: Switch pcpu_devices to percpu  (Sven Schnelle)
In preparation of moving the CIF flags from lowcore to pcpu_devices, convert the pcpu_devices array to use the percpu infrastructure. This is required because using the pcpu_devices array as it is would introduce a performance penalty due to the fact that CPU flags for multiple CPUs would end up in the same cacheline. Note that a pointer to the pcpu struct of the IPL CPU is still required. This is because a restart interrupt can be triggered on an offline CPU. s390 stores the percpu offset in lowcore, but offline CPUs have no lowcore area allocated. So percpu data cannot be used from an offline CPU and we need to get the pcpu pointer for the IPL cpu from somewhere else. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
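A sketch of the resulting data layout, with assumed identifier names:

#include <linux/percpu.h>

struct pcpu_sketch {
        unsigned long flags;            /* CIF_* bits move here in the next patch */
        /* ... */
};

/* One entry per CPU; the percpu allocator keeps each CPU's copy in its own
 * area, so flags of different CPUs no longer share a cacheline. */
static DEFINE_PER_CPU(struct pcpu_sketch, pcpu_devices_sketch);

/* Kept as a plain pointer: a restart interrupt may arrive on an offline CPU,
 * which has no lowcore/percpu offset, so percpu accessors cannot be used there. */
static struct pcpu_sketch *ipl_pcpu_sketch;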
2024-07-23  s390/smp: Handle restart interrupt on ipl cpu  (Sven Schnelle)
The current smp code allows triggering a restart interrupt on CPUs that are offline in Linux. To allow using the percpu infrastructure instead of the pcpu_devices array, switch to the ipl cpu, which is always online, before calling do_restart(). Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/boot: Do not assume the decompressor range is reserved  (Alexander Gordeev)
When allocating a random memory range for .amode31 sections the minimal randomization address is 0. That does not lead to a possible overlap with the decompressor image (which also starts from 0) since by that time the image range is already reserved. Do not assume the decompressor range is reserved and always provide the minimal randomization address for .amode31 sections beyond the decompressor. That is a prerequisite for moving the lowcore memory address from NULL elsewhere. Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/cpum_cf: Fix endless loop in CF_DIAG event stop  (Thomas Richter)
Event CF_DIAG reads out complete counter sets using the stcctm instruction. This is done at event start time when the process starts execution and at event stop time when the process is removed from the CPU. During removal the difference of each counter in the counter sets is calculated and saved as raw data in the ring buffer. This works fine unless the number of counters in a counter set is zero. This may happen for the extended counter set. This set is machine specific and the size of the counter set can be zero even when the extended counter set is authorized for read access. This case is not handled. cfdiag_diffctr() checks authorization of the extended counter set. If true, the function assumes the extended counter set has been saved in a data buffer. However, this is not the case: cfdiag_getctrset() does not save a counter set with counter set size of zero. This mismatch causes an endless loop in the counter set readout during event stop handling. The calculation of the difference of the counters in each counter set now verifies the size of the counter set is non-zero. A counter set with size zero is skipped. Fixes: a029a4eab39e ("s390/cpumf: Allow concurrent access for CPU Measurement Counter Facility") Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
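A hedged sketch of the corrected difference calculation, not the driver's actual code: counter sets whose size is zero are skipped, which also keeps the buffer offsets aligned because empty sets were never written into the data buffer in the first place.

#include <stddef.h>

static void diff_counter_sets(unsigned long *delta, const unsigned long *start,
                              const unsigned long *stop,
                              const size_t *set_size, size_t nr_sets)
{
        size_t set, i, off = 0;

        for (set = 0; set < nr_sets; set++) {
                if (!set_size[set])     /* e.g. an authorized but empty extended set */
                        continue;       /* skipping it avoids the endless loop */
                for (i = 0; i < set_size[set]; i++)
                        delta[off + i] = stop[off + i] - start[off + i];
                off += set_size[set];
        }
}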
2024-07-23  s390/ptdump: Add KMSAN page markers  (Ilya Leoshkevich)
Add KMSAN vmalloc metadata areas to /sys/kernel/debug/kernel_page_tables. Example output:

0x000003a95fff9000-0x000003a960000000 28K PTE I
---[ vmalloc Area End ]---
---[ Kmsan vmalloc Shadow Start ]---
0x000003a960000000-0x000003a960010000 64K PTE RW NX
[...]
0x000003d3dfff9000-0x000003d3e0000000 28K PTE I
---[ Kmsan vmalloc Shadow End ]---
---[ Kmsan vmalloc Origins Start ]---
0x000003d3e0000000-0x000003d3e0010000 64K PTE RW NX
[...]
0x000003fe5fff9000-0x000003fe60000000 28K PTE I
---[ Kmsan vmalloc Origins End ]---
---[ Kmsan Modules Shadow Start ]---
0x000003fe60000000-0x000003fe60001000 4K PTE RW NX
[...]
0x000003fe60100000-0x000003fee0000000 2047M PMD I
---[ Kmsan Modules Shadow End ]---
---[ Kmsan Modules Origins Start ]---
0x000003fee0000000-0x000003fee0001000 4K PTE RW NX
[...]
0x000003fee0100000-0x000003ff60000000 2047M PMD I
---[ Kmsan Modules Origins End ]---
---[ Modules Area Start ]---
0x000003ff60000000-0x000003ff60001000 4K PTE RO X

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Link: https://lore.kernel.org/r/20240723124441.120044-3-iii@linux.ibm.com Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/kmsan: Fix merge conflict with get_lowcore() introduction  (Ilya Leoshkevich)
Resolve the conflict between commit 2a48c8c9cf87 ("s390/kmsan: implement the architecture-specific functions") and commit 39976f1278a9 ("s390: Remove S390_lowcore"). Fixes: 2a48c8c9cf87 ("s390/kmsan: implement the architecture-specific functions") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Link: https://lore.kernel.org/r/20240723124441.120044-2-iii@linux.ibm.com Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/setup: Fix __pa/__va for modules under non-GPL licenses  (Vasily Gorbik)
The struct vm_layout contains fields used in __pa/__va calculations. Such fundamental things have to be exported with EXPORT_SYMBOL to avoid breakages of out-of-tree modules under non-GPL licenses. Fixes: 7de0446f0b26 ("s390/boot: Make identity mapping base address explicit") Acked-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390/pci: Allow allocation of more than 1 MSI interrupt  (Gerd Bayer)
On a PCI adapter that provides up to 8 MSI interrupt sources the s390 implementation of PCI interrupts refused to accommodate them, although the underlying hardware is able to support that. For MSI-X it is sufficient to allocate a single irq_desc per msi_desc, but for MSI multiple irq descriptors are attached to and controlled by a single msi descriptor. Add the appropriate loops to maintain multiple irq descriptors and tie/untie them to/from the appropriate AIBV bit, if a device driver allocates more than 1 MSI interrupt. Common PCI code passes on requests to allocate a number of interrupt vectors based on the device drivers' demand and the PCI functions' capabilities. However, the root complex of s390 systems supports just a limited number of interrupt vectors per PCI function. Produce a kernel log message to inform about any architecture-specific capping that might be done. With this change, we had a PCI adapter successfully raising interrupts to its device driver via all 8 sources. Fixes: a384c8924a8b ("s390/PCI: Fix single MSI only check") Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
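A small sketch of the capping-and-logging behaviour mentioned above; the function name and the limit parameter are placeholders for whatever the architecture's per-function maximum is:

#include <linux/pci.h>

/* Illustration only: clamp the driver's MSI vector request to what the
 * platform can deliver and leave a note in the kernel log. */
static int zpci_clamp_msi_vectors_sketch(struct pci_dev *pdev, int nvec, int max_supported)
{
        if (nvec > max_supported) {
                dev_info(&pdev->dev, "limiting MSI vectors from %d to %d\n",
                         nvec, max_supported);
                nvec = max_supported;
        }
        return nvec;
}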
2024-07-23  s390/pci: Refactor arch_setup_msi_irqs()  (Gerd Bayer)
Factor out adapter interrupt allocation from arch_setup_msi_irqs() in preparation for enabling registration of multiple MSIs. Code movement only, no change of functionality intended. Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-07-23  s390: Add runtime constant support  (Heiko Carstens)
Implement the runtime constant infrastructure for s390, allowing the dcache d_hash() function to be generated using the hash table address as a constant, followed by a shift of the hash index by a constant. This is the s390 variant of commit 94a2bc0f611c ("arm64: add 'runtime constant' support") and commit e3c92e81711d ("runtime constants: add x86 architecture support"). Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
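For context, the dcache user that this infrastructure targets looks roughly like the following in the referenced runtime-constant series; shown here as background, not as part of the s390 patch itself:

static inline struct hlist_bl_head *d_hash(unsigned long hashlen)
{
        /* Both the hash table base and the shift amount are patched into the
         * instruction stream at boot instead of being loaded from memory. */
        return runtime_const_ptr(dentry_hashtable) +
                runtime_const_shift_right_32(hashlen, d_hash_shift);
}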
2024-07-23  drbd: Add peer_device to Kernel doc  (Simon Horman)
Add missing documentation of the peer_device parameter to the kernel-doc comments. These parameters were added in commit 8164dd6c8ae1 ("drbd: Add peer device parameter to whole-bitmap I/O handlers"). Flagged by W=1 builds. Signed-off-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240723-drbd-doc-v1-1-a04d9b7a9688@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-07-23  net: bonding: correctly annotate RCU in bond_should_notify_peers()  (Johannes Berg)
RCU use in bond_should_notify_peers() looks wrong, since it does rcu_dereference(), leaves the critical section, and uses the pointer after that. Luckily, it's called either inside a nested RCU critical section or with the RTNL held. Annotate it with rcu_dereference_rtnl() instead, and remove the inner RCU critical section. Fixes: 4cb4f97b7e36 ("bonding: rebuild the lock use for bond_mii_monitor()") Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Acked-by: Jay Vosburgh <jv@jvosburgh.net> Link: https://patch.msgid.link/20240719094119.35c62455087d.I68eb9c0f02545b364b79a59f2110f2cf5682a8e2@changeid Signed-off-by: Paolo Abeni <pabeni@redhat.com>
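A minimal sketch of the annotation pattern; the surrounding structure is hypothetical, while rcu_dereference_rtnl() is the existing helper:

#include <linux/rcupdate.h>
#include <linux/rtnetlink.h>

struct demo_ctx {                       /* hypothetical container */
        struct net_device __rcu *dev;
};

/* Legal when the caller holds either rcu_read_lock() or the RTNL; no nested
 * RCU critical section is needed just to copy the pointer out. */
static struct net_device *demo_get_dev(struct demo_ctx *ctx)
{
        return rcu_dereference_rtnl(ctx->dev);
}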
2024-07-23  thermal: trip: Split thermal_zone_device_set_mode()  (Rafael J. Wysocki)
Pull a wrapper around thermal zone .change_mode() callback out of thermal_zone_device_set_mode() because it will be used elsewhere subsequently. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://patch.msgid.link/2206793.irdbgypaU6@rjwysocki.net
2024-07-23  net: airoha: Fix MBI_RX_AGE_SEL_MASK definition  (Lorenzo Bianconi)
Fix a copy-paste error in the MBI_RX_AGE_SEL_MASK macro definition. Fixes: 23020f049327 ("net: airoha: Introduce ethernet support for EN7581 SoC") Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/d27d0465be1bff3369e886e5f10c4d37fefc4934.1721419930.git.lorenzo@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-07-23  xen: fix multicall debug data referencing  (Juergen Gross)
The recent adding of multicall debug mixed up the referencing of the debug data. A __percpu tagged pointer can't be initialized with a plain pointer, so use another percpu variable for the pointer and set it on each new cpu via a function. Fixes: 942d917cb92a ("xen: make multicall debug boot time selectable") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202407151106.5s7Mnfpz-lkp@intel.com/ Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Juergen Gross <jgross@suse.com>
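Sketch of the pattern described above, with assumed structure and variable names: keep a per-CPU pointer variable alongside the per-CPU data and fill it in from a CPU bring-up hook instead of a static initializer.

#include <linux/percpu.h>

struct mc_debug_data_sketch {
        unsigned int entries;
        /* ... */
};

static DEFINE_PER_CPU(struct mc_debug_data_sketch, mc_debug_data_cpu);
static DEFINE_PER_CPU(struct mc_debug_data_sketch *, mc_debug_data_ptr);

/* Called for each CPU as it comes up; a plain pointer cannot be used as the
 * static initializer of a __percpu pointer, hence this runtime assignment. */
static void mc_debug_data_init_cpu(unsigned int cpu)
{
        per_cpu(mc_debug_data_ptr, cpu) = &per_cpu(mc_debug_data_cpu, cpu);
}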
2024-07-23  drm/i915: Allow NULL memory region  (Jonathan Cavitt)
Prevent a NULL pointer access in intel_memory_regions_hw_probe. Fixes: 05da7d9f717b ("drm/i915/gem: Downgrade stolen lmem setup warning") Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Closes: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11704 Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240712214156.3969584-1-jonathan.cavitt@intel.com (cherry picked from commit d75dec1fcbcb05b021c08b62551649567ab8955c) Signed-off-by: Tvrtko Ursulin <tursulin@ursulin.net>
2024-07-23  drm/i915/gt: Do not consider preemption during execlists_dequeue for gen8  (Nitin Gote)
We're seeing a GPU hang issue on a CHV platform, which was caused by commit bac24f59f454 ("drm/i915/execlists: Enable coarse preemption boundaries for Gen8"). The Gen8 platform only supports timeslicing and doesn't have a preemption mechanism, as its engines do not have a preemption timer. Commit 751f82b353a6 ("drm/i915/gt: Only disable preemption on Gen8 render engines") addressed this issue only for render engines. This patch extends that fix by ensuring that preemption is not considered for all engines on Gen8 platforms. v4: - Use the correct Fixes tag (Rodrigo Vivi) - Reworded commit log (Andi Shyti) v3: - Inside need_preempt(), condition of can_preempt() is not required as simplified can_preempt() is enough. (Chris Wilson) v2: Simplify can_preempt() function (Tvrtko Ursulin) Fixes: 751f82b353a6 ("drm/i915/gt: Only disable preemption on gen8 render engines") Closes: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11396 Suggested-by: Andi Shyti <andi.shyti@intel.com> Signed-off-by: Nitin Gote <nitin.r.gote@intel.com> Cc: Chris Wilson <chris.p.wilson@linux.intel.com> CC: <stable@vger.kernel.org> # v5.12+ Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20240711163208.1355736-1-nitin.r.gote@intel.com (cherry picked from commit 7df0be6e6280c6fca01d039864bb123e5e36604b) Signed-off-by: Tvrtko Ursulin <tursulin@ursulin.net>
2024-07-23  l2tp: make session IDR and tunnel session list coherent  (James Chapman)
Modify l2tp_session_register and l2tp_session_unhash so that the session IDR and tunnel session lists remain coherent. To do so, hold the session IDR lock and the tunnel's session list lock when making any changes to either list. Without this change, a rare race condition could hit the WARN_ON_ONCE in l2tp_session_unhash if a thread replaced the IDR entry while another thread was registering the same ID.

[ 7126.151795][T17511] WARNING: CPU: 3 PID: 17511 at net/l2tp/l2tp_core.c:1282 l2tp_session_delete.part.0+0x87e/0xbc0
[ 7126.163754][T17511] ? show_regs+0x93/0xa0
[ 7126.164157][T17511] ? __warn+0xe5/0x3c0
[ 7126.164536][T17511] ? l2tp_session_delete.part.0+0x87e/0xbc0
[ 7126.165070][T17511] ? report_bug+0x2e1/0x500
[ 7126.165486][T17511] ? l2tp_session_delete.part.0+0x87e/0xbc0
[ 7126.166013][T17511] ? handle_bug+0x99/0x130
[ 7126.166428][T17511] ? exc_invalid_op+0x35/0x80
[ 7126.166890][T17511] ? asm_exc_invalid_op+0x1a/0x20
[ 7126.167372][T17511] ? l2tp_session_delete.part.0+0x87d/0xbc0
[ 7126.167900][T17511] ? l2tp_session_delete.part.0+0x87e/0xbc0
[ 7126.168429][T17511] ? __local_bh_enable_ip+0xa4/0x120
[ 7126.168917][T17511] l2tp_session_delete+0x40/0x50
[ 7126.169369][T17511] pppol2tp_release+0x1a1/0x3f0
[ 7126.169817][T17511] __sock_release+0xb3/0x270
[ 7126.170247][T17511] ? __pfx_sock_close+0x10/0x10
[ 7126.170697][T17511] sock_close+0x1c/0x30
[ 7126.171087][T17511] __fput+0x40b/0xb90
[ 7126.171470][T17511] task_work_run+0x16c/0x260
[ 7126.171897][T17511] ? __pfx_task_work_run+0x10/0x10
[ 7126.172362][T17511] ? srso_alias_return_thunk+0x5/0xfbef5
[ 7126.172863][T17511] ? do_raw_spin_unlock+0x174/0x230
[ 7126.173348][T17511] do_exit+0xaae/0x2b40
[ 7126.173730][T17511] ? srso_alias_return_thunk+0x5/0xfbef5
[ 7126.174235][T17511] ? __pfx_lock_release+0x10/0x10
[ 7126.174690][T17511] ? srso_alias_return_thunk+0x5/0xfbef5
[ 7126.175190][T17511] ? do_raw_spin_lock+0x12c/0x2b0
[ 7126.175650][T17511] ? __pfx_do_exit+0x10/0x10
[ 7126.176072][T17511] ? _raw_spin_unlock_irq+0x23/0x50
[ 7126.176543][T17511] do_group_exit+0xd3/0x2a0
[ 7126.176990][T17511] __x64_sys_exit_group+0x3e/0x50
[ 7126.177456][T17511] x64_sys_call+0x1821/0x1830
[ 7126.177895][T17511] do_syscall_64+0xcb/0x250
[ 7126.178317][T17511] entry_SYSCALL_64_after_hwframe+0x77/0x7f

Fixes: aa5e17e1f5ec ("l2tp: store l2tpv3 sessions in per-net IDR") Signed-off-by: James Chapman <jchapman@katalix.com> Signed-off-by: Tom Parkin <tparkin@katalix.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20240718134348.289865-1-jchapman@katalix.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
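A hedged sketch of the locking rule described above; the types and lock names are simplified placeholders, not the actual l2tp code:

#include <linux/list.h>
#include <linux/spinlock.h>

struct demo_tunnel {
        spinlock_t list_lock;
        struct list_head session_list;
};

static DEFINE_SPINLOCK(demo_session_idr_lock);

/* Register a session: take both locks so the IDR and the tunnel's session
 * list can never be observed in a half-updated state. */
static void demo_session_register(struct demo_tunnel *tunnel, struct list_head *session)
{
        spin_lock_bh(&demo_session_idr_lock);
        spin_lock(&tunnel->list_lock);
        /* ... the IDR insert/replace for the session ID would go here ... */
        list_add(session, &tunnel->session_list);
        spin_unlock(&tunnel->list_lock);
        spin_unlock_bh(&demo_session_idr_lock);
}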
2024-07-23  ipv4: Fix incorrect source address in Record Route option  (Ido Schimmel)
The Record Route IP option records the addresses of the routers that routed the packet. In the case of forwarded packets, the kernel performs a route lookup via fib_lookup() and fills in the preferred source address of the matched route. The lookup is performed with the DS field of the forwarded packet, but using the RT_TOS() macro which only masks one of the two ECN bits. If the packet is ECT(0) or CE, the matched route might be different than the route via which the packet was forwarded as the input path masks both of the ECN bits, resulting in the wrong address being filled in the Record Route option. Fix by masking both of the ECN bits. Fixes: 8e36360ae876 ("ipv4: Remove route key identity dependencies in ip_rt_get_source().") Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Guillaume Nault <gnault@redhat.com> Link: https://patch.msgid.link/20240718123407.434778-1-idosch@nvidia.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
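A worked example of the masking difference; IPTOS_TOS_MASK and INET_ECN_MASK are the existing kernel constants, while the helper name below is illustrative:

#include <linux/types.h>
#include <net/inet_ecn.h>

/* RT_TOS() masks with IPTOS_TOS_MASK (0x1e), clearing only one of the two
 * ECN bits; the forwarding input path clears both with INET_ECN_MASK (0x03).
 *
 * Example, ECT(0) packet with tos = 0x02:
 *   RT_TOS(0x02)          = 0x02 & 0x1e = 0x02   (ECN bit leaks into the key)
 *   0x02 & ~INET_ECN_MASK = 0x02 & 0xfc = 0x00   (matches the input path)
 */
static inline __u8 record_route_lookup_tos(__u8 tos)
{
        return tos & ~INET_ECN_MASK;
}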
2024-07-23  ceph: fix incorrect kmalloc size of pagevec mempool  (ethanwu)
The kmalloc size of the pagevec mempool is incorrectly calculated. It misses the size of the page pointer and only accounts for the number of entries in the array. Fixes: a0102bda5bc0 ("ceph: move sb->wb_pagevec_pool to be a global mempool") Signed-off-by: ethanwu <ethanwu@synology.com> Reviewed-by: Xiubo Li <xiubli@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
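A hedged sketch of the sizing fix; the wrapper name is illustrative, while mempool_create_kmalloc_pool() is the existing mempool helper:

#include <linux/mempool.h>

/* Each pool element is an array of page pointers, so the kmalloc size must
 * be the entry count multiplied by sizeof(struct page *), not just the count. */
static mempool_t *create_pagevec_pool_sketch(int min_nr, size_t nr_pages)
{
        return mempool_create_kmalloc_pool(min_nr, nr_pages * sizeof(struct page *));
}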