path: root/arch
2024-08-29openrisc: Call setup_memory() earlier in the init sequenceOreoluwa Babatunde
[ Upstream commit 7b432bf376c9c198a7ff48f1ed14a14c0ffbe1fe ] The unflatten_and_copy_device_tree() function contains a call to memblock_alloc(). This means that memblock is allocating memory before any of the reserved memory regions are set aside in the setup_memory() function which calls early_init_fdt_scan_reserved_mem(). Therefore, there is a possibility for memblock to allocate from any of the reserved memory regions. Hence, move the call to setup_memory() to be earlier in the init sequence so that the reserved memory regions are set aside before any allocations are done using memblock. Signed-off-by: Oreoluwa Babatunde <quic_obabatun@quicinc.com> Signed-off-by: Stafford Horne <shorne@gmail.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-29powerpc/boot: Only free if realloc() succeedsMichael Ellerman
[ Upstream commit f2d5bccaca3e8c09c9b9c8485375f7bdbb2631d2 ] simple_realloc() frees the original buffer (ptr) even if the reallocation failed. Fix it to behave like standard realloc() and only free the original buffer if the reallocation succeeded. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240229115149.749264-1-mpe@ellerman.id.au Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-29powerpc/boot: Handle allocation failure in simple_realloc()Li zeming
[ Upstream commit 69b0194ccec033c208b071e019032c1919c2822d ] simple_malloc() will return NULL when there is not enough memory left. Check pointer 'new' before using it to copy the old data. Signed-off-by: Li zeming <zeming@nfschina.com> [mpe: Reword subject, use change log from Christophe] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20221219021816.3012-1-zeming@nfschina.com Signed-off-by: Sasha Levin <sashal@kernel.org>
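Taken together, these two fixes make simple_realloc() behave like the standard realloc(). A minimal sketch of the corrected logic (not the verbatim patch), assuming the simple_malloc()/simple_free()/simple_find_entry() helpers of arch/powerpc/boot/simple_alloc.c:

	static void *simple_realloc(void *ptr, unsigned long size)
	{
		struct alloc_info *p;
		void *new;

		if (size == 0) {
			simple_free(ptr);
			return NULL;
		}
		if (ptr == NULL)
			return simple_malloc(size);

		p = simple_find_entry(ptr);
		if (p == NULL)		/* not an allocation we know about */
			return NULL;
		if (p->size >= size)	/* current block is already big enough */
			return ptr;

		new = simple_malloc(size);
		if (new) {		/* only copy and free the old buffer on success */
			memcpy(new, ptr, p->size);
			simple_free(ptr);
		}
		return new;
	}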
2024-08-29parisc: Use irq_enter_rcu() to fix warning at kernel/context_tracking.c:367Helge Deller
[ Upstream commit 73cb4a2d8d7e0259f94046116727084f21e4599f ] Use irq*_rcu() functions to fix this kernel warning: WARNING: CPU: 0 PID: 0 at kernel/context_tracking.c:367 ct_irq_enter+0xa0/0xd0 Modules linked in: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.7.0-rc3-64bit+ #1037 Hardware name: 9000/785/C3700 IASQ: 0000000000000000 0000000000000000 IAOQ: 00000000412cd758 00000000412cd75c IIR: 03ffe01f ISR: 0000000000000000 IOR: 0000000043c20c20 CPU: 0 CR30: 0000000041caa000 CR31: 0000000000000000 ORIG_R28: 0000000000000005 IAOQ[0]: ct_irq_enter+0xa0/0xd0 IAOQ[1]: ct_irq_enter+0xa4/0xd0 RP(r2): irq_enter+0x34/0x68 Backtrace: [<000000004034a3ec>] irq_enter+0x34/0x68 [<000000004030dc48>] do_cpu_irq_mask+0xc0/0x450 [<0000000040303070>] intr_return+0x0/0xc Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-29x86: Increase brk randomness entropy for 64-bit systemsKees Cook
[ Upstream commit 44c76825d6eefee9eb7ce06c38e1a6632ac7eb7d ] In commit c1d171a00294 ("x86: randomize brk"), arch_randomize_brk() was defined to use a 32MB range (13 bits of entropy), but was never increased when moving to 64-bit. The default arch_randomize_brk() uses 32MB for 32-bit tasks, and 1GB (18 bits of entropy) for 64-bit tasks. Update x86_64 to match the entropy used by arm64 and other 64-bit architectures. Reported-by: y0un9n132@gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Jiri Kosina <jkosina@suse.com> Closes: https://lore.kernel.org/linux-hardening/CA+2EKTVLvc8hDZc+2Yhwmus=dzOUG5E4gV7ayCbu0MPJTZzWkw@mail.gmail.com/ Link: https://lore.kernel.org/r/20240217062545.1631668-1-keescook@chromium.org Signed-off-by: Sasha Levin <sashal@kernel.org>
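The shape of the change is a one-liner; a sketch assuming the mmap_is_ia32() and randomize_page() helpers used elsewhere in arch/x86:

	unsigned long arch_randomize_brk(struct mm_struct *mm)
	{
		if (mmap_is_ia32())
			return randomize_page(mm->brk, SZ_32M);	/* 13 bits of entropy */

		return randomize_page(mm->brk, SZ_1G);		/* 18 bits, matching arm64 */
	}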
2024-08-29riscv: blacklist assembly symbols for kprobeClément Léger
[ Upstream commit 5014396af9bbac0f28d9afee7eae405206d01ee7 ] Adding kprobes on some assembly functions (mainly exception handling) will result in crashes (either recursive trap or panic). To avoid such errors, add an ASM_NOKPROBE() macro which allows adding specific symbols into the __kprobe_blacklist section, and use it to blacklist the following symbols that have proven problematic: - handle_exception() - ret_from_exception() - handle_kernel_stack_overflow() Signed-off-by: Clément Léger <cleger@rivosinc.com> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20231004131009.409193-1-cleger@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-29powerpc/pseries/papr-sysparm: Validate buffer object lengthsNathan Lynch
[ Upstream commit 35aae182bd7b422be3cefc08c12207bf2b973364 ] The ability to get and set system parameters will be exposed to user space, so let's get a little more strict about malformed papr_sysparm_buf objects. * Create accessors for the length field of struct papr_sysparm_buf. The length is always stored in MSB order and this is better than spreading the necessary conversions all over. * Reject attempts to submit invalid buffers to RTAS. * Warn if RTAS returns a buffer with an invalid length, clamping the returned length to a safe value that won't overrun the buffer. These are meant as precautionary measures to mitigate both firmware and kernel bugs in this area, should they arise, but I am not aware of any. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-10-e9eafd0c8c6c@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
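A sketch of the MSB-order length accessors, assuming struct papr_sysparm_buf keeps its length in a __be16 field as described above:

	static u16 papr_sysparm_buf_get_length(const struct papr_sysparm_buf *buf)
	{
		return be16_to_cpu(buf->len);
	}

	static void papr_sysparm_buf_set_length(struct papr_sysparm_buf *buf, u16 length)
	{
		/* warn on and clamp a length that would overrun the value area */
		WARN_ONCE(length > sizeof(buf->val),
			  "bogus length %u, clamping to safe maximum", length);
		length = min_t(u16, length, sizeof(buf->val));
		buf->len = cpu_to_be16(length);
	}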
2024-08-29powerpc/xics: Check return value of kasprintf in icp_native_map_one_cpuKunwu Chan
[ Upstream commit 45b1ba7e5d1f6881050d558baf9bc74a2ae13930 ] kasprintf() returns a pointer to dynamically allocated memory which can be NULL upon failure. Ensure the allocation was successful by checking the pointer validity. Signed-off-by: Kunwu Chan <chentao@kylinos.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231122030651.3818-1-chentao@kylinos.cn Signed-off-by: Sasha Levin <sashal@kernel.org>
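The pattern of the fix, sketched with approximate local names (the real function goes on to ioremap the region):

	rname = kasprintf(GFP_KERNEL, "CPU %d [0x%x] Interrupt Presentation",
			  cpu, hw_id);
	if (!rname)				/* the added check: kasprintf() can fail */
		return -ENOMEM;
	if (!request_mem_region(addr, size, rname)) {
		kfree(rname);
		return -EBUSY;
	}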
2024-08-29arm64: Fix KASAN random tag seed initializationSamuel Holland
[ Upstream commit f75c235565f90c4a17b125e47f1c68ef6b8c2bce ] Currently, kasan_init_sw_tags() is called before setup_per_cpu_areas(), so per_cpu(prng_state, cpu) accesses the same address regardless of the value of "cpu", and the same seed value gets copied to the percpu area for every CPU. Fix this by moving the call to smp_prepare_boot_cpu(), which is the first architecture hook after setup_per_cpu_areas(). Fixes: 3c9e3aa11094 ("kasan: add tag related helper functions") Fixes: 3f41b6093823 ("kasan: fix random seed generation for tag-based mode") Signed-off-by: Samuel Holland <samuel.holland@sifive.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://lore.kernel.org/r/20240814091005.969756-1-samuel.holland@sifive.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-29powerpc/topology: Check if a core is onlineNysal Jan K.A
[ Upstream commit 227bbaabe64b6f9cd98aa051454c1d4a194a8c6a ] topology_is_core_online() checks if the core a CPU belongs to is online. The core is online if at least one of the sibling CPUs is online. The first CPU of an online core is also online in the common case, so this should be fairly quick. Fixes: 73c58e7e1412 ("powerpc: Add HOTPLUG_SMT support") Signed-off-by: Nysal Jan K.A <nysal@linux.ibm.com> Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240731030126.956210-3-nysal@linux.ibm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
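A sketch of the helper, following the patch's description (names as in powerpc's smp code):

	static inline bool topology_is_core_online(unsigned int cpu)
	{
		int i, first_cpu = cpu_first_thread_sibling(cpu);

		/* the core is online if any of its sibling threads is online */
		for (i = first_cpu; i < first_cpu + threads_per_core; ++i) {
			if (cpu_online(i))
				return true;
		}
		return false;
	}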
2024-08-29s390/smp,mcck: fix early IPI handlingHeiko Carstens
[ Upstream commit 4a1725281fc5b0009944b1c0e1d2c1dc311a09ec ] Both the external call as well as the emergency signal submask bits in control register 0 are set before any interrupt handler is registered. Change the order: first register the interrupt handler, and only then enable the interrupts by setting the corresponding bits in control register 0. This ensures that the second part of the machine check handler for early machine check handling is actually executed: the machine check handler sends an IPI to the CPU it runs on, and if the corresponding interrupts are enabled but no interrupt handler is present yet, that interrupt is simply ignored. Reviewed-by: Sven Schnelle <svens@linux.ibm.com> Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-29s390/uv: Panic for set and remove shared access UVC errorsClaudio Imbrenda
[ Upstream commit cff59d8631e1409ffdd22d9d717e15810181b32c ] The return value of uv_set_shared() and uv_remove_shared() (which are wrappers around the share() function) is not always checked. The system integrity of a protected guest depends on the Share and Unshare UVCs being successful. This means that any caller that fails to check the return value will compromise the security of the protected guest. No code path that would lead to such a violation of the security guarantees is currently exercised, since all the areas that are shared never get unshared during the lifetime of the system. This might change and become an issue in the future. The Share and Unshare UVCs can only fail in case of hypervisor misbehaviour (either a bug or malicious behaviour). In such cases there is no reasonable way forward, and the system needs to panic. This patch replaces the return at the end of the share() function with a panic, to guarantee system integrity. Fixes: 5abb9351dfd9 ("s390/uv: introduce guest side ultravisor code") Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com> Reviewed-by: Steffen Eiden <seiden@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Link: https://lore.kernel.org/r/20240801112548.85303-1-imbrenda@linux.ibm.com Message-ID: <20240801112548.85303-1-imbrenda@linux.ibm.com> [frankja@linux.ibm.com: Fixed up patch subject] Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
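A sketch of the hardened share(), assuming the uv_call() interface and struct uv_cb_share from arch/s390/include/asm/uv.h:

	static int share(unsigned long addr, u16 cmd)
	{
		struct uv_cb_share uvcb = {
			.header.cmd = cmd,
			.header.len = sizeof(uvcb),
			.paddr = addr,
		};

		if (!uv_call(0, (u64)&uvcb))
			return 0;

		/* Share/Unshare can only fail on hypervisor misbehaviour;
		 * continuing would compromise guest integrity, so panic. */
		panic("%s UVC failed (rc: 0x%x, rrc: 0x%x), possible hypervisor bug\n",
		      cmd == UVC_CMD_SET_SHARED_ACCESS ? "Share" : "Unshare",
		      uvcb.header.rc, uvcb.header.rrc);
	}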
2024-08-29arm64: ACPI: NUMA: initialize all values of acpi_early_node_map to NUMA_NO_NODEHaibo Xu
commit a21dcf0ea8566ebbe011c79d6ed08cdfea771de3 upstream. Currently, only acpi_early_node_map[0] was initialized to NUMA_NO_NODE. To ensure all the values were properly initialized, switch to initialize all of them to NUMA_NO_NODE. Fixes: e18962491696 ("arm64: numa: rework ACPI NUMA initialization") Cc: <stable@vger.kernel.org> # 4.19.x Reported-by: Andrew Jones <ajones@ventanamicro.com> Suggested-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Haibo Xu <haibo1.xu@intel.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Sunil V L <sunilvl@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Link: https://lore.kernel.org/r/853d7f74aa243f6f5999e203246f0d1ae92d2b61.1722828421.git.haibo1.xu@intel.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
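The fix itself is a designated initializer covering the whole array rather than just element 0 (sketch):

	static int acpi_early_node_map[NR_CPUS] __initdata = {
		[0 ... NR_CPUS - 1] = NUMA_NO_NODE
	};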
2024-08-29riscv: change XIP's kernel_map.size to be size of the entire kernelNam Cao
commit 57d76bc51fd80824bcc0c84a5b5ec944f1b51edd upstream. With an XIP kernel, kernel_map.size is set to only the size of the data part of the kernel. This is inconsistent with the "normal" kernel, which sets it to the size of the entire kernel. More importantly, an XIP kernel fails to boot if CONFIG_DEBUG_VIRTUAL is enabled, because there are checks on virtual addresses with the assumption that kernel_map.size is the size of the entire kernel (these checks are in arch/riscv/mm/physaddr.c). Change XIP's kernel_map.size to be the size of the entire kernel. Signed-off-by: Nam Cao <namcao@linutronix.de> Cc: <stable@vger.kernel.org> # v6.1+ Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20240508191917.2892064-1-namcao@linutronix.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-29KVM: s390: fix validity interception issue when gisa is switched offMichael Mueller
commit 5a44bb061d04b0306f2aa8add761d86d152b9377 upstream. We might run into a SIE validity intercept if gisa has been disabled either via using kernel parameter "kvm.use_gisa=0" or by setting the related sysfs attribute to N (echo N >/sys/module/kvm/parameters/use_gisa). The validity is caused by an invalid value in the SIE control block's gisa designation. That happens because we pass the uninitialized gisa origin to virt_to_phys() before writing it to the gisa designation. To fix this we return 0 in kvm_s390_get_gisa_desc() if the origin is 0. kvm_s390_get_gisa_desc() is used to determine which gisa designation to set in the SIE control block. A value of 0 in the gisa designation disables gisa usage. The issue surfaces in the host kernel with the following kernel message as soon as a new kvm guest start is attempted. kvm: unhandled validity intercept 0x1011 WARNING: CPU: 0 PID: 781237 at arch/s390/kvm/intercept.c:101 kvm_handle_sie_intercept+0x42e/0x4d0 [kvm] Modules linked in: vhost_net tap tun xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT xt_tcpudp nft_compat x_tables nf_nat_tftp nf_conntrack_tftp vfio_pci_core irqbypass vhost_vsock vmw_vsock_virtio_transport_common vsock vhost vhost_iotlb kvm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nf_tables sunrpc mlx5_ib ib_uverbs ib_core mlx5_core uvdevice s390_trng eadm_sch vfio_ccw zcrypt_cex4 mdev vfio_iommu_type1 vfio sch_fq_codel drm i2c_core loop drm_panel_orientation_quirks configfs nfnetlink lcs ctcm fsm dm_service_time ghash_s390 prng chacha_s390 libchacha aes_s390 des_s390 libdes sha3_512_s390 sha3_256_s390 sha512_s390 sha256_s390 sha1_s390 sha_common dm_mirror dm_region_hash dm_log zfcp scsi_transport_fc scsi_dh_rdac scsi_dh_emc scsi_dh_alua pkey zcrypt dm_multipath rng_core autofs4 [last unloaded: vfio_pci] CPU: 0 PID: 781237 Comm: CPU 0/KVM Not tainted 6.10.0-08682-gcad9f11498ea #6 Hardware name: IBM 3931 A01 701 (LPAR) Krnl PSW : 0704c00180000000 000003d93deb0122 (kvm_handle_sie_intercept+0x432/0x4d0 [kvm]) R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3 Krnl GPRS: 000003d900000027 000003d900000023 0000000000000028 000002cd00000000 000002d063a00900 00000359c6daf708 00000000000bebb5 0000000000001eff 000002cfd82e9000 000002cfd80bc000 0000000000001011 000003d93deda412 000003ff8962df98 000003d93de77ce0 000003d93deb011e 00000359c6daf960 Krnl Code: 000003d93deb0112: c020fffe7259 larl %r2,000003d93de7e5c4 000003d93deb0118: c0e53fa8beac brasl %r14,000003d9bd3c7e70 #000003d93deb011e: af000000 mc 0,0 >000003d93deb0122: a728ffea lhi %r2,-22 000003d93deb0126: a7f4fe24 brc 15,000003d93deafd6e 000003d93deb012a: 9101f0b0 tm 176(%r15),1 000003d93deb012e: a774fe48 brc 7,000003d93deafdbe 000003d93deb0132: 40a0f0ae sth %r10,174(%r15) Call Trace: [<000003d93deb0122>] kvm_handle_sie_intercept+0x432/0x4d0 [kvm] ([<000003d93deb011e>] kvm_handle_sie_intercept+0x42e/0x4d0 [kvm]) [<000003d93deacc10>] vcpu_post_run+0x1d0/0x3b0 [kvm] [<000003d93deaceda>] __vcpu_run+0xea/0x2d0 [kvm] [<000003d93dead9da>] kvm_arch_vcpu_ioctl_run+0x16a/0x430 [kvm] [<000003d93de93ee0>] kvm_vcpu_ioctl+0x190/0x7c0 [kvm] [<000003d9bd728b4e>] vfs_ioctl+0x2e/0x70 [<000003d9bd72a092>] __s390x_sys_ioctl+0xc2/0xd0 [<000003d9be0e9222>] __do_syscall+0x1f2/0x2e0 [<000003d9be0f9a90>] system_call+0x70/0x98 Last Breaking-Event-Address: [<000003d9bd3c7f58>] __warn_printk+0xe8/0xf0 Cc: stable@vger.kernel.org Reported-by: Christian Borntraeger <borntraeger@linux.ibm.com> Fixes: fe0ef0030463 ("KVM: s390: sort out physical vs virtual pointers usage") Signed-off-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Christian Borntraeger <borntraeger@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Link: https://lore.kernel.org/r/20240801123109.2782155-1-mimu@linux.ibm.com Message-ID: <20240801123109.2782155-1-mimu@linux.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
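A sketch of the fixed helper, assuming the gisa_int.origin field and the GISA_FORMAT1/sclp.has_gisaf names used in arch/s390/kvm:

	static inline u32 kvm_s390_get_gisa_desc(struct kvm *kvm)
	{
		u32 gd;

		if (!kvm->arch.gisa_int.origin)
			return 0;	/* gisa disabled: use a zero designation */

		gd = virt_to_phys(kvm->arch.gisa_int.origin);
		if (gd && sclp.has_gisaf)
			gd |= GISA_FORMAT1;
		return gd;
	}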
2024-08-19KVM: arm64: Don't pass a TLBI level hint when zapping table entriesWill Deacon
commit 36e008323926036650299cfbb2dca704c7aba849 upstream. The TLBI level hints are for leaf entries only, so take care not to pass them incorrectly after clearing a table entry. Cc: Gavin Shan <gshan@redhat.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Fixes: 82bb02445de5 ("KVM: arm64: Implement kvm_pgtable_hyp_unmap() at EL2") Fixes: 6d9d2115c480 ("KVM: arm64: Add support for stage-2 map()/unmap() in generic page-table") Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240327124853.11206-3-will@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Cc: <stable@vger.kernel.org> # 6.6.y only [will@: Use '0' instead of TLBI_TTL_UNKNOWN to indicate "no level"] Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-19KVM: arm64: Don't defer TLB invalidation when zapping table entriesWill Deacon
commit f62d4c3eb687d87b616b4279acec7862553bda77 upstream. Commit 7657ea920c54 ("KVM: arm64: Use TLBI range-based instructions for unmap") introduced deferred TLB invalidation for the stage-2 page-table so that range-based invalidation can be used for the accumulated addresses. This works fine if the structure of the page-tables remains unchanged, but if entire tables are zapped and subsequently freed then we transiently leave the hardware page-table walker with a reference to freed memory thanks to the translation walk caches. For example, stage2_unmap_walker() will free page-table pages: if (childp) mm_ops->put_page(childp); and issue the TLB invalidation later in kvm_pgtable_stage2_unmap(): if (stage2_unmap_defer_tlb_flush(pgt)) /* Perform the deferred TLB invalidations */ kvm_tlb_flush_vmid_range(pgt->mmu, addr, size); For now, take the conservative approach and invalidate the TLB eagerly when we clear a table entry. Note, however, that the existing level hint passed to __kvm_tlb_flush_vmid_ipa() is incorrect and will be fixed in a subsequent patch. Cc: Raghavendra Rao Ananta <rananta@google.com> Cc: Shaoqin Huang <shahuang@redhat.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240327124853.11206-2-will@kernel.org Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Cc: <stable@vger.kernel.org> # 6.6.y only Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-19mm/page_table_check: support userfault wr-protect entriesPeter Xu
[ Upstream commit 8430557fc584657559bfbd5150b6ae1bb90f35a0 ] Allow page_table_check hooks to check userfaultfd wr-protect criteria upon pgtable updates. The rule is that no co-existence is allowed for any writable flag against the userfault wr-protect flag. This should be better than c2da319c2e, where we used to only sanitize such issues during a pgtable walk, but when hitting such an issue we don't have a good chance to know where that writable bit came from [1], so even though the pgtable walk exposes a kernel bug (which is still helpful for triaging), it is not easy to track and debug. Now we switch to tracking the source. It's much easier too with the recent introduction of page table check. There are some limitations with using the page table check here for userfaultfd wr-protect purposes: - It is only enabled with explicit enablement of page table check configs and/or boot parameters, but should be good enough to track at least syzbot issues, as syzbot should enable PAGE_TABLE_CHECK[_ENFORCED] for x86 [1]. We used to have DEBUG_VM but it's now off for most distros, while distros also normally do not enable PAGE_TABLE_CHECK[_ENFORCED], which is similar. - It conditionally works with the ptep_modify_prot API. It will be bypassed when e.g. XEN PV is enabled, however it still works for most other scenarios, which should be the common cases so should be good enough. - The hugetlb check is a bit hairy, as the page table check cannot identify a hugetlb pte or a normal pte via trapping at set_pte_at(), because of the current design where hugetlb maps every layer to pte_t... For example, the default set_huge_pte_at() can invoke set_pte_at() directly and lose the hugetlb context, treating it the same as a normal pte_t. So far it's fine because we have huge_pte_uffd_wp() always equal to pte_uffd_wp() as long as supported (x86 only). It'll be a bigger problem when we define _PAGE_UFFD_WP differently at various pgtable levels, because then one huge_pte_uffd_wp() per-arch will stop making sense first... As of now we can leave this for later too. This patch also removes commit c2da319c2e altogether, as we have something better now. [1] https://lore.kernel.org/all/000000000000dce0530615c89210@google.com/ Link: https://lkml.kernel.org/r/20240417212549.2766883-1-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: David Hildenbrand <david@redhat.com> Cc: Nadav Amit <nadav.amit@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
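A sketch of the present-PTE half of the new rule, in page_table_check style (the swap-PTE variant checks pte_swp_uffd_wp() analogously):

	static inline void page_table_check_pte_flags(pte_t pte)
	{
		/* no co-existence: a wr-protected entry must never be writable */
		if (pte_present(pte) && pte_uffd_wp(pte))
			WARN_ON_ONCE(pte_write(pte));
	}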
2024-08-19LoongArch: Define __ARCH_WANT_NEW_STAT in unistd.hHuacai Chen
commit 7697a0fe0154468f5df35c23ebd7aa48994c2cdc upstream. The Chromium sandbox apparently wants to deny statx [1] so it can properly inspect arguments after the sandboxed process later falls back to fstat. Because there is currently no "fd-only" version of statx, the sandbox has no way to ensure the path argument is empty without being able to peek into the sandboxed process's memory. For architectures able to do newfstatat, though, glibc falls back to newfstatat after getting -ENOSYS for statx, and the respective SIGSYS handler [2] takes care of inspecting the path argument, transforming allowed newfstatat's into fstat instead, which is allowed and has the same type of return value. But, as LoongArch is the first architecture to have neither fstat nor newfstatat, the LoongArch glibc does not attempt falling back at all when it gets -ENOSYS for statx -- and you see the problem there! Actually, back when the LoongArch port was under review, people were aware of the same problem with sandboxing clone3 [3], so clone was eventually kept. Unfortunately it seems no one noticed statx at that time, so besides restoring fstat/newfstatat to the LoongArch uapi (and postponing the problem further), it seems inevitable that we will need to tackle seccomp deep argument inspection. However, this is obviously a decision that shouldn't be taken lightly, so we just restore fstat/newfstatat by defining __ARCH_WANT_NEW_STAT in unistd.h. This is the simplest solution for now, and we hope the community will tackle the long-standing problem of seccomp deep argument inspection in the future [4][5]. Also add "newstat" to syscall_abis_64 in Makefile.syscalls due to upstream asm-generic changes. For more information, please read this thread [6]. [1] https://chromium-review.googlesource.com/c/chromium/src/+/2823150 [2] https://chromium.googlesource.com/chromium/src/sandbox/+/c085b51940bd/linux/seccomp-bpf-helpers/sigsys_handlers.cc#355 [3] https://lore.kernel.org/linux-arch/20220511211231.GG7074@brightrain.aerifal.cx/ [4] https://lwn.net/Articles/799557/ [5] https://lpc.events/event/4/contributions/560/attachments/397/640/deep-arg-inspection.pdf [6] https://lore.kernel.org/loongarch/20240226-granit-seilschaft-eccc2433014d@brauner/T/#t Cc: stable@vger.kernel.org Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
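The uapi change itself is tiny; a sketch of arch/loongarch/include/uapi/asm/unistd.h after the patch (assuming the pre-existing clone define):

	#define __ARCH_WANT_NEW_STAT	/* restore fstat/newfstatat */
	#define __ARCH_WANT_SYS_CLONE

	#include <asm-generic/unistd.h>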
2024-08-14x86/mtrr: Check if fixed MTRRs exist before saving themAndi Kleen
commit 919f18f961c03d6694aa726c514184f2311a4614 upstream. MTRRs have an obsolete fixed variant for fine grained caching control of the 640K-1MB region that uses separate MSRs. This fixed variant has a separate capability bit in the MTRR capability MSR. So far all x86 CPUs which support MTRR have this separate bit set, so it went unnoticed that mtrr_save_state() does not check the capability bit before accessing the fixed MTRR MSRs. Though on a CPU that does not support the fixed MTRR capability this results in a #GP. The #GP itself is harmless because the RDMSR fault is handled gracefully, but results in a WARN_ON(). Add the missing capability check to prevent this. Fixes: 2b1f6278d77c ("[PATCH] x86: Save the MTRRs of the BSP before booting an AP") Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/all/20240808000244.946864-1-ak@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-14x86/paravirt: Fix incorrect virt spinlock setting on bare metalChen Yu
commit e639222a51196c69c70b49b67098ce2f9919ed08 upstream. The kernel can change spinlock behavior when running as a guest. But this guest-friendly behavior causes performance problems on bare metal. The kernel uses a static key to switch between the two modes. In theory, the static key is enabled by default (run in guest mode) and should be disabled for bare metal (and in some guests that want native behavior or paravirt spinlock). A performance drop is reported when running an encode/decode workload and the BenchSEE cache sub-workload. Bisect points to commit ce0a1b608bfc ("x86/paravirt: Silence unused native_pv_lock_init() function warning"). When CONFIG_PARAVIRT_SPINLOCKS is disabled the virt_spin_lock_key is incorrectly set to true on bare metal. The qspinlock degenerates to a test-and-set spinlock, which decreases the performance on bare metal. Set the default value of virt_spin_lock_key to false. If booting in a VM, enable this key. Later during VM initialization, if another, more efficient spinlock implementation is preferred (e.g. paravirt spinlock), or the user wants the native qspinlock (via the nopvspin boot commandline), the virt_spin_lock_key is disabled accordingly. This results in the following decision matrix:

	X86_FEATURE_HYPERVISOR     Y    Y    Y    N
	CONFIG_PARAVIRT_SPINLOCKS  Y    Y    N    Y/N
	PV spinlock                Y    N    N    Y/N
	virt_spin_lock_key         N    Y/N  Y    N

Fixes: ce0a1b608bfc ("x86/paravirt: Silence unused native_pv_lock_init() function warning") Reported-by: Prem Nath Dey <prem.nath.dey@intel.com> Reported-by: Xiaoping Zhou <xiaoping.zhou@intel.com> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com> Suggested-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Suggested-by: Nikolay Borisov <nik.borisov@suse.com> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/all/20240806112207.29792-1-yu.c.chen@intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
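A sketch of the corrected default and its VM-side enablement:

	/* off by default: bare-metal behaviour */
	DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);

	void __init native_pv_lock_init(void)
	{
		if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
			static_branch_enable(&virt_spin_lock_key);
	}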
2024-08-14LoongArch: Enable general EFI poweroff methodMiao Wang
commit e688c220732e518c2eb1639e9ef77d4a9311713c upstream. efi_shutdown_init() can register a general sys_off handler named efi_power_off(). Enable this by providing efi_poweroff_required(), like arm and x86. EFI poweroff is also supported on LoongArch, and enabling it makes the poweroff function usable on hardware which lacks ACPI S5. We prefer ACPI poweroff over EFI poweroff (like x86), so we only require EFI poweroff if acpi_gbl_reduced_hardware or acpi_no_s5 is true. Cc: stable@vger.kernel.org Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Miao Wang <shankerwangmiao@gmail.com> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
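A sketch of the LoongArch hook, mirroring the policy described above:

	bool efi_poweroff_required(void)
	{
		/* only fall back to EFI poweroff when ACPI S5 is unavailable */
		return efi_enabled(EFI_RUNTIME_SERVICES) &&
		       (acpi_gbl_reduced_hardware || acpi_no_s5);
	}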
2024-08-14parisc: fix a possible DMA corruptionMikulas Patocka
commit 7ae04ba36b381bffe2471eff3a93edced843240f upstream. ARCH_DMA_MINALIGN was defined as 16 - this is too small - it may be possible that two unrelated 16-byte allocations share a cache line. If one of these allocations is written using DMA and the other is written using cached write, the value that was written with DMA may be corrupted. This commit changes ARCH_DMA_MINALIGN to be 128 on PA20 and 32 on PA1.1 - that's the largest possible cache line size. As different parisc microarchitectures have different cache line size, we define arch_slab_minalign(), cache_line_size() and dma_get_cache_alignment() so that the kernel may tune slab cache parameters dynamically, based on the detected cache line size. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-14parisc: fix unaligned accesses in BPFMikulas Patocka
commit 1fd2c10acb7b35d72101a4619ee5b2cddb9efd3a upstream. There were spurious unaligned access warnings when calling BPF code. Sometimes, the warnings were triggered with any incoming packet, making the machine hard to use. The reason for the warnings is this: on parisc64, pointers to functions are not really pointers to functions, they are pointers to 16-byte descriptor. The first 8 bytes of the descriptor is a pointer to the function and the next 8 bytes of the descriptor is the content of the "dp" register. This descriptor is generated in the function bpf_jit_build_prologue. The problem is that the function bpf_int_jit_compile advertises 4-byte alignment when calling bpf_jit_binary_alloc, bpf_jit_binary_alloc randomizes the returned array and if the array happens to be not aligned on 8-byte boundary, the descriptor generated in bpf_jit_build_prologue is also not aligned and this triggers the unaligned access warning. Fix this by advertising 8-byte alignment on parisc64 when calling bpf_jit_binary_alloc. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-14arm64: errata: Expand speculative SSBS workaround (again)Mark Rutland
[ Upstream commit adeec61a4723fd3e39da68db4cc4d924e6d7f641 ] A number of Arm Ltd CPUs suffer from errata whereby an MSR to the SSBS special-purpose register does not affect subsequent speculative instructions, permitting speculative store bypassing for a window of time. We worked around this for a number of CPUs in commits: * 7187bb7d0b5c7dfa ("arm64: errata: Add workaround for Arm errata 3194386 and 3312417") * 75b3c43eab594bfb ("arm64: errata: Expand speculative SSBS workaround") Since then, similar errata have been published for a number of other Arm Ltd CPUs, for which the same mitigation is sufficient. This is described in their respective Software Developer Errata Notice (SDEN) documents: * Cortex-A76 (MP052) SDEN v31.0, erratum 3324349 https://developer.arm.com/documentation/SDEN-885749/3100/ * Cortex-A77 (MP074) SDEN v19.0, erratum 3324348 https://developer.arm.com/documentation/SDEN-1152370/1900/ * Cortex-A78 (MP102) SDEN v21.0, erratum 3324344 https://developer.arm.com/documentation/SDEN-1401784/2100/ * Cortex-A78C (MP138) SDEN v16.0, erratum 3324346 https://developer.arm.com/documentation/SDEN-1707916/1600/ * Cortex-A78C (MP154) SDEN v10.0, erratum 3324347 https://developer.arm.com/documentation/SDEN-2004089/1000/ * Cortex-A725 (MP190) SDEN v5.0, erratum 3456106 https://developer.arm.com/documentation/SDEN-2832921/0500/ * Cortex-X1 (MP077) SDEN v21.0, erratum 3324344 https://developer.arm.com/documentation/SDEN-1401782/2100/ * Cortex-X1C (MP136) SDEN v16.0, erratum 3324346 https://developer.arm.com/documentation/SDEN-1707914/1600/ * Neoverse-N1 (MP050) SDEN v32.0, erratum 3324349 https://developer.arm.com/documentation/SDEN-885747/3200/ * Neoverse-V1 (MP076) SDEN v19.0, erratum 3324341 https://developer.arm.com/documentation/SDEN-1401781/1900/ Note that due to the manner in which Arm develops IP and tracks errata, some CPUs share a common erratum number and some CPUs have multiple erratum numbers for the same HW issue. On parts without SB, it is necessary to use ISB for the workaround. The spec_bar() macro used in the mitigation will expand to a "DSB SY; ISB" sequence in this case, which is sufficient on all affected parts. Enable the existing mitigation by adding the relevant MIDRs to erratum_spec_ssbs_list. The list is sorted alphanumerically (involving moving Neoverse-V3 after Neoverse-V2) so that this is easy to audit and potentially extend again in future. The Kconfig text is also updated to clarify the set of affected parts and the mitigation. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240801101803.1982459-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ Mark: fix conflicts in silicon-errata.rst ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: cputype: Add Cortex-A725 definitionsMark Rutland
[ Upstream commit 9ef54a384526911095db465e77acc1cb5266b32c ] Add cputype definitions for Cortex-A725. These will be used for errata detection in subsequent patches. These values can be found in the Cortex-A725 TRM: https://developer.arm.com/documentation/107652/0001/ ... in table A-247 ("MIDR_EL1 bit descriptions"). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20240801101803.1982459-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ Mark: trivial backport ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: cputype: Add Cortex-X1C definitionsMark Rutland
[ Upstream commit 58d245e03c324d083a0ec3b9ab8ebd46ec9848d7 ] Add cputype definitions for Cortex-X1C. These will be used for errata detection in subsequent patches. These values can be found in the Cortex-X1C TRM: https://developer.arm.com/documentation/101968/0002/ ... in section B2.107 ("MIDR_EL1, Main ID Register, EL1"). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20240801101803.1982459-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ Mark: trivial backport ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: errata: Expand speculative SSBS workaroundMark Rutland
[ Upstream commit 75b3c43eab594bfbd8184ec8ee1a6b820950819a ] A number of Arm Ltd CPUs suffer from errata whereby an MSR to the SSBS special-purpose register does not affect subsequent speculative instructions, permitting speculative store bypassing for a window of time. We worked around this for Cortex-X4 and Neoverse-V3, in commit: 7187bb7d0b5c7dfa ("arm64: errata: Add workaround for Arm errata 3194386 and 3312417") ... as per their Software Developer Errata Notice (SDEN) documents: * Cortex-X4 SDEN v8.0, erratum 3194386: https://developer.arm.com/documentation/SDEN-2432808/0800/ * Neoverse-V3 SDEN v6.0, erratum 3312417: https://developer.arm.com/documentation/SDEN-2891958/0600/ Since then, similar errata have been published for a number of other Arm Ltd CPUs, for which the mitigation is the same. This is described in their respective SDEN documents: * Cortex-A710 SDEN v19.0, erratum 3324338 https://developer.arm.com/documentation/SDEN-1775101/1900/?lang=en * Cortex-A720 SDEN v11.0, erratum 3456091 https://developer.arm.com/documentation/SDEN-2439421/1100/?lang=en * Cortex-X2 SDEN v19.0, erratum 3324338 https://developer.arm.com/documentation/SDEN-1775100/1900/?lang=en * Cortex-X3 SDEN v14.0, erratum 3324335 https://developer.arm.com/documentation/SDEN-2055130/1400/?lang=en * Cortex-X925 SDEN v8.0, erratum 3324334 https://developer.arm.com/documentation/109108/800/?lang=en * Neoverse-N2 SDEN v17.0, erratum 3324339 https://developer.arm.com/documentation/SDEN-1982442/1700/?lang=en * Neoverse-V2 SDEN v9.0, erratum 3324336 https://developer.arm.com/documentation/SDEN-2332927/900/?lang=en Note that due to shared design lineage, some CPUs share the same erratum number. Add these to the existing mitigation under CONFIG_ARM64_ERRATUM_3194386. As listing all of the erratum IDs in the runtime description would be unwieldy, this is reduced to: "SSBS not fully self-synchronizing" ... matching the description of the errata in all of the SDENs. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240603111812.1514101-6-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ Mark: fix conflicts in silicon-errata.rst ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: errata: Unify speculative SSBS errata logicMark Rutland
[ Upstream commit ec768766608092087dfb5c1fc45a16a6f524dee2 ] Cortex-X4 erratum 3194386 and Neoverse-V3 erratum 3312417 are identical, with duplicate Kconfig text and some unsightly ifdeffery. While we try to share code behind CONFIG_ARM64_WORKAROUND_SPECULATIVE_SSBS, having separate options results in a fair amount of boilerplate code, and this will only get worse as we expand the set of affected CPUs. To reduce this boilerplate, unify the two behind a common Kconfig option. This removes the duplicate text and Kconfig logic, and removes the need for the intermediate ARM64_WORKAROUND_SPECULATIVE_SSBS option. The set of affected CPUs is described as a list so that this can easily be extended. I've used ARM64_ERRATUM_3194386 (matching the Neoverse-V3 erratum ID) as the common option, matching the way we use ARM64_ERRATUM_1319367 to cover Cortex-A57 erratum 1319537 and Cortex-A72 erratum 1319367. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240603111812.1514101-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ Mark: fix conflicts, drop unneeded cpucaps.h ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: cputype: Add Cortex-X925 definitionsMark Rutland
[ Upstream commit fd2ff5f0b320f418288e7a1f919f648fbc8a0dfc ] Add cputype definitions for Cortex-X925. These will be used for errata detection in subsequent patches. These values can be found in Table A-285 ("MIDR_EL1 bit descriptions") in issue 0001-05 of the Cortex-X925 TRM, which can be found at: https://developer.arm.com/documentation/102807/0001/?lang=en Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240603111812.1514101-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ Mark: trivial backport ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: cputype: Add Cortex-A720 definitionsMark Rutland
[ Upstream commit add332c40328cf06fe35e4b3cde8ec315c4629e5 ] Add cputype definitions for Cortex-A720. These will be used for errata detection in subsequent patches. These values can be found in Table A-186 ("MIDR_EL1 bit descriptions") in issue 0002-05 of the Cortex-A720 TRM, which can be found at: https://developer.arm.com/documentation/102530/0002/?lang=en Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240603111812.1514101-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ Mark: trivial backport ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: cputype: Add Cortex-X3 definitionsMark Rutland
[ Upstream commit be5a6f238700f38b534456608588723fba96c5ab ] Add cputype definitions for Cortex-X3. These will be used for errata detection in subsequent patches. These values can be found in Table A-263 ("MIDR_EL1 bit descriptions") in issue 07 of the Cortex-X3 TRM, which can be found at: https://developer.arm.com/documentation/101593/0102/?lang=en Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240603111812.1514101-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ Mark: trivial backport ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: errata: Add workaround for Arm errata 3194386 and 3312417Mark Rutland
[ Upstream commit 7187bb7d0b5c7dfa18ca82e9e5c75e13861b1d88 ] Cortex-X4 and Neoverse-V3 suffer from errata whereby an MSR to the SSBS special-purpose register does not affect subsequent speculative instructions, permitting speculative store bypassing for a window of time. This is described in their Software Developer Errata Notice (SDEN) documents: * Cortex-X4 SDEN v8.0, erratum 3194386: https://developer.arm.com/documentation/SDEN-2432808/0800/ * Neoverse-V3 SDEN v6.0, erratum 3312417: https://developer.arm.com/documentation/SDEN-2891958/0600/ To workaround these errata, it is necessary to place a speculation barrier (SB) after MSR to the SSBS special-purpose register. This patch adds the requisite SB after writes to SSBS within the kernel, and hides the presence of SSBS from EL0 such that userspace software which cares about SSBS will manipulate this via prctl(PR_GET_SPECULATION_CTRL, ...). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240508081400.235362-5-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org> [ Mark: fix conflicts, drop unneeded cpucaps.h, fold in user_feature_fixup() ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: cputype: Add Neoverse-V3 definitionsMark Rutland
[ Upstream commit 0ce85db6c2141b7ffb95709d76fc55a27ff3cdc1 ] Add cputype definitions for Neoverse-V3. These will be used for errata detection in subsequent patches. These values can be found in Table B-249 ("MIDR_EL1 bit descriptions") in issue 0001-04 of the Neoverse-V3 TRM, which can be found at: https://developer.arm.com/documentation/107734/0001/?lang=en Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240508081400.235362-4-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org> [ Mark: trivial backport ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14arm64: cputype: Add Cortex-X4 definitionsMark Rutland
[ Upstream commit 02a0a04676fa7796d9cbc9eb5ca120aaa194d2dd ] Add cputype definitions for Cortex-X4. These will be used for errata detection in subsequent patches. These values can be found in Table B-249 ("MIDR_EL1 bit descriptions") in issue 0002-05 of the Cortex-X4 TRM, which can be found at: https://developer.arm.com/documentation/102484/0002/?lang=en Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240508081400.235362-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org> [ Mark: fix conflict (dealt with upstream via a later merge) ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
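All of these cputype patches follow the same pattern; a sketch for Cortex-X4, with the part number (0xD82) taken from the TRM cited above:

	#define ARM_CPU_PART_CORTEX_X4	0xD82

	#define MIDR_CORTEX_X4 \
		MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4)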
2024-08-14arm64: barrier: Restore spec_bar() macroMark Rutland
[ Upstream commit ebfc726eae3f31bdb5fae1bbd74ef235d71046ca ] Upcoming errata workarounds will need to use SB from C code. Restore the spec_bar() macro so that we can use SB. This is effectively a revert of commit: 4f30ba1cce36d413 ("arm64: barrier: Remove spec_bar() macro") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20240508081400.235362-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org> [ Mark: trivial backport ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
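A sketch of the restored macro: it uses the SB instruction where the CPU has it, otherwise the DSB; ISB sequence mentioned in the erratum text above (the exact DSB domain follows arch/arm64/include/asm/barrier.h):

	#define spec_bar()	asm volatile(ALTERNATIVE("dsb sy\nisb\n",	\
							 SB_BARRIER_INSN "nop\n", \
							 ARM64_HAS_SB))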
2024-08-14arm64: Add Neoverse-V2 partBesar Wicaksono
[ Upstream commit f4d9d9dcc70b96b5e5d7801bd5fbf8491b07b13d ] Add the part number and MIDR for Neoverse-V2 Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com> Reviewed-by: James Clark <james.clark@arm.com> Link: https://lore.kernel.org/r/20240109192310.16234-2-bwicaksono@nvidia.com Signed-off-by: Will Deacon <will@kernel.org> [ Mark: trivial backport ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14x86/mm: Fix pti_clone_entry_text() for i386Peter Zijlstra
[ Upstream commit 3db03fb4995ef85fc41e86262ead7b4852f4bcf0 ] While x86_64 has PMD aligned text sections, i386 does not have this luxury. Notably ALIGN_ENTRY_TEXT_END is empty and _etext has PAGE alignment. This means that text on i386 can be page granular at the tail end, which in turn means that the PTI text clones should consistently account for this. Make pti_clone_entry_text() consistent with pti_clone_kernel_text(). Fixes: 16a3fe634f6a ("x86/mm/pti: Clone kernel-image on PTE level for 32 bit") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-14x86/mm: Fix pti_clone_pgtable() alignment assumptionPeter Zijlstra
[ Upstream commit 41e71dbb0e0a0fe214545fe64af031303a08524c ] Guenter reported dodgy crashes on an i386-nosmp build using GCC-11 that had the form of endless traps until entry stack exhaust and then #DF from the stack guard. It turned out that pti_clone_pgtable() had alignment assumptions on the start address, notably it hard assumes start is PMD aligned. This is true on x86_64, but very much not true on i386. These assumptions can cause the end condition to malfunction, leading to a 'short' clone. Guess what happens when the user mapping has a short copy of the entry text? Use the correct increment form for addr to avoid alignment assumptions. Fixes: 16a3fe634f6a ("x86/mm/pti: Clone kernel-image on PTE level for 32 bit") Reported-by: Guenter Roeck <linux@roeck-us.net> Tested-by: Guenter Roeck <linux@roeck-us.net> Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20240731163105.GG33588@noisy.programming.kicks-ass.net Signed-off-by: Sasha Levin <sashal@kernel.org>
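A sketch of the corrected walk in pti_clone_pgtable(), abridged from the function's loop, never assuming the current address is already aligned when advancing:

	while (addr < end) {
		/* pmd obtained by walking pgd/p4d/pud as in the real function */
		pmd_t *pmd = pmd_offset(pud, addr);

		if (pmd_none(*pmd)) {
			/* was: addr += PMD_SIZE (wrong if addr is unaligned) */
			addr = round_up(addr + 1, PMD_SIZE);
			continue;
		}
		/* ... clone the PMD or PTE entry as before ... */
		addr = round_up(addr + 1, PAGE_SIZE);	/* was: addr += PAGE_SIZE */
	}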
2024-08-14platform/x86/intel/ifs: Store IFS generation numberJithu Joseph
[ Upstream commit 97a5e801b3045c1e800f76bc0fb544972538089d ] IFS generation number is reported via MSR_INTEGRITY_CAPS. As IFS support gets added to newer CPUs, some differences are expected during IFS image loading and test flows. Define MSR bitmasks to extract and store the generation in driver data, so that driver can modify its MSR interaction appropriately. Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Tested-by: Pengfei Xu <pengfei.xu@intel.com> Link: https://lore.kernel.org/r/20231005195137.3117166-2-jithu.joseph@intel.com Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Stable-dep-of: 3114f77e9453 ("platform/x86/intel/ifs: Initialize union ifs_status to zero") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-11arm64: jump_label: Ensure patched jump_labels are visible to all CPUsWill Deacon
[ Upstream commit cfb00a35786414e7c0e6226b277d9f09657eae74 ] Although the Arm architecture permits concurrent modification and execution of NOP and branch instructions, it still requires some synchronisation to ensure that other CPUs consistently execute the newly written instruction: > When the modified instructions are observable, each PE that is > executing the modified instructions must execute an ISB or perform a > context synchronizing event to ensure execution of the modified > instructions Prior to commit f6cc0c501649 ("arm64: Avoid calling stop_machine() when patching jump labels"), the arm64 jump_label patching machinery performed synchronisation using stop_machine() after each modification, however this was problematic when flipping static keys from atomic contexts (namely, the arm_arch_timer CPU hotplug startup notifier) and so we switched to the _nosync() patching routines to avoid "scheduling while atomic" BUG()s during boot. In hindsight, the analysis of the issue in f6cc0c501649 isn't quite right: it cites the use of IPIs in the default patching routines as the cause of the lockup, whereas stop_machine() does not rely on IPIs and the I-cache invalidation is performed using __flush_icache_range(), which elides the call to kick_all_cpus_sync(). In fact, the blocking wait for other CPUs is what triggers the BUG() and the problem remains even after f6cc0c501649, for example because we could block on the jump_label_mutex. Eventually, the arm_arch_timer driver was fixed to avoid the static key entirely in commit a862fc2254bd ("clocksource/arm_arch_timer: Remove use of workaround static key"). This all leaves the jump_label patching code in a funny situation on arm64 as we do not synchronise with other CPUs to reduce the likelihood of a bug which no longer exists. Consequently, toggling a static key on one CPU cannot be assumed to take effect on other CPUs, leading to potential issues, for example with missing preempt notifiers. Rather than revert f6cc0c501649 and go back to stop_machine() for each patch site, implement arch_jump_label_transform_apply() and kick all the other CPUs with an IPI at the end of patching. Cc: Alexander Potapenko <glider@google.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Fixes: f6cc0c501649 ("arm64: Avoid calling stop_machine() when patching jump labels") Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20240731133601.3073-1-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
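A sketch of the batching hooks, using the existing nosync patching plus one final IPI broadcast:

	bool arch_jump_label_transform_queue(struct jump_entry *entry,
					     enum jump_label_type type)
	{
		void *addr = (void *)jump_entry_code(entry);
		u32 insn;

		if (type == JUMP_LABEL_JMP)
			insn = aarch64_insn_gen_branch_imm(jump_entry_code(entry),
							   jump_entry_target(entry),
							   AARCH64_INSN_BRANCH_NOLINK);
		else
			insn = aarch64_insn_gen_nop();

		aarch64_insn_patch_text_nosync(addr, insn);
		return true;	/* always "queued"; apply() does the sync */
	}

	void arch_jump_label_transform_apply(void)
	{
		kick_all_cpus_sync();	/* one IPI for the whole batch */
	}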
2024-08-11riscv: Fix linear mapping checks for non-contiguous memory regionsStuart Menefy
[ Upstream commit 3b6564427aea83b7a35a15ca278291d50a1edcfc ] The RISC-V kernel already has checks to ensure that memory which would lie outside of the linear mapping is not used. However those checks use memory_limit, which is used to implement the mem= kernel command line option (to limit the total amount of memory, not its address range). When memory is made up of two or more non-contiguous memory banks this check is incorrect. Two changes are made here: - add a call in setup_bootmem() to memblock_cap_memory_range() which will cause any memory which falls outside the linear mapping to be removed from the memory regions. - remove the check in create_linear_mapping_page_table() which was intended to remove memory which is outside the linear mapping based on memory_limit, as it is no longer needed. Note a check for mapping more memory than memory_limit (to implement mem=) is unnecessary because of the existing call to memblock_enforce_memory_limit(). This issue was seen when booting on an SV39 platform with two memory banks: 0x00,80000000 1GiB 0x20,00000000 32GiB This memory range is 158GiB from top to bottom, but the linear mapping is limited to 128GiB, so the lower block of RAM will be mapped at PAGE_OFFSET, and the upper block straddles the top of the linear mapping. This causes the following Oops: [ 0.000000] Linux version 6.10.0-rc2-gd3b8dd5b51dd-dirty (stuart.menefy@codasip.com) (riscv64-codasip-linux-gcc (GCC) 13.2.0, GNU ld (GNU Binutils) 2.41.0.20231213) #20 SMP Sat Jun 22 11:34:22 BST 2024 [ 0.000000] memblock_add: [0x0000000080000000-0x00000000bfffffff] early_init_dt_add_memory_arch+0x4a/0x52 [ 0.000000] memblock_add: [0x0000002000000000-0x00000027ffffffff] early_init_dt_add_memory_arch+0x4a/0x52 ... [ 0.000000] memblock_alloc_try_nid: 23724 bytes align=0x8 nid=-1 from=0x0000000000000000 max_addr=0x0000000000000000 early_init_dt_alloc_memory_arch+0x1e/0x48 [ 0.000000] memblock_reserve: [0x00000027ffff5350-0x00000027ffffaffb] memblock_alloc_range_nid+0xb8/0x132 [ 0.000000] Unable to handle kernel paging request at virtual address fffffffe7fff5350 [ 0.000000] Oops [#1] [ 0.000000] Modules linked in: [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 6.10.0-rc2-gd3b8dd5b51dd-dirty #20 [ 0.000000] Hardware name: codasip,a70x (DT) [ 0.000000] epc : __memset+0x8c/0x104 [ 0.000000] ra : memblock_alloc_try_nid+0x74/0x84 [ 0.000000] epc : ffffffff805e88c8 ra : ffffffff806148f6 sp : ffffffff80e03d50 [ 0.000000] gp : ffffffff80ec4158 tp : ffffffff80e0bec0 t0 : fffffffe7fff52f8 [ 0.000000] t1 : 00000027ffffb000 t2 : 5f6b636f6c626d65 s0 : ffffffff80e03d90 [ 0.000000] s1 : 0000000000005cac a0 : fffffffe7fff5350 a1 : 0000000000000000 [ 0.000000] a2 : 0000000000005cac a3 : fffffffe7fffaff8 a4 : 000000000000002c [ 0.000000] a5 : ffffffff805e88c8 a6 : 0000000000005cac a7 : 0000000000000030 [ 0.000000] s2 : fffffffe7fff5350 s3 : ffffffffffffffff s4 : 0000000000000000 [ 0.000000] s5 : ffffffff8062347e s6 : 0000000000000000 s7 : 0000000000000001 [ 0.000000] s8 : 0000000000002000 s9 : 00000000800226d0 s10: 0000000000000000 [ 0.000000] s11: 0000000000000000 t3 : ffffffff8080a928 t4 : ffffffff8080a928 [ 0.000000] t5 : ffffffff8080a928 t6 : ffffffff8080a940 [ 0.000000] status: 0000000200000100 badaddr: fffffffe7fff5350 cause: 000000000000000f [ 0.000000] [<ffffffff805e88c8>] __memset+0x8c/0x104 [ 0.000000] [<ffffffff8062349c>] early_init_dt_alloc_memory_arch+0x1e/0x48 [ 0.000000] [<ffffffff8043e892>] __unflatten_device_tree+0x52/0x114 [ 0.000000] [<ffffffff8062441e>] unflatten_device_tree+0x9e/0xb8 [ 0.000000] [<ffffffff806046fe>] setup_arch+0xd4/0x5bc [ 0.000000] [<ffffffff806007aa>] start_kernel+0x76/0x81a [ 0.000000] Code: b823 02b2 bc23 02b2 b023 04b2 b423 04b2 b823 04b2 (bc23) 04b2 [ 0.000000] ---[ end trace 0000000000000000 ]--- [ 0.000000] Kernel panic - not syncing: Attempted to kill the idle task! [ 0.000000] ---[ end Kernel panic - not syncing: Attempted to kill the idle task! ]--- The problem is that memblock (unaware that some physical memory cannot be used) has allocated memory from the top of memory but which is outside the linear mapping region. Signed-off-by: Stuart Menefy <stuart.menefy@codasip.com> Fixes: c99127c45248 ("riscv: Make sure the linear mapping does not use the kernel mapping") Reviewed-by: David McKay <david.mckay@codasip.com> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20240622114217.2158495-1-stuart.menefy@codasip.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
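A sketch of the setup_bootmem() change; LINEAR_MAP_MAX here is a hypothetical stand-in for the bound the real code derives from the virtual memory layout:

	/* hypothetical bound; see arch/riscv/mm/init.c for the real derivation */
	phys_addr_t linear_limit = __pa(PAGE_OFFSET) + LINEAR_MAP_MAX;

	/* discard memory that the linear mapping can never reach */
	memblock_cap_memory_range(phys_ram_base,
				  linear_limit - phys_ram_base);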
2024-08-11riscv/mm: Add handling for VM_FAULT_SIGSEGV in mm_fault_error()Zhe Qiao
[ Upstream commit 0c710050c47d45eb77b28c271cddefc5c785cb40 ] Handle VM_FAULT_SIGSEGV in the page fault path so that we correctly kill the process and we don't BUG() the kernel. Fixes: 07037db5d479 ("RISC-V: Paging and MMU") Signed-off-by: Zhe Qiao <qiaozhe@iscas.ac.cn> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20240731084547.85380-1-qiaozhe@iscas.ac.cn Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
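A sketch of the added handling, following the structure of arch/riscv/mm/fault.c:

	if (fault & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV)) {
		if (!user_mode(regs)) {
			no_context(regs, addr);		/* kernel fault: oops cleanly */
			return;
		}
		/* user fault: deliver a signal instead of hitting BUG() */
		do_trap(regs, (fault & VM_FAULT_SIGBUS) ? SIGBUS : SIGSEGV,
			(fault & VM_FAULT_SIGBUS) ? BUS_ADRERR : SEGV_MAPERR, addr);
		return;
	}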
2024-08-11riscv: remove unused functions in traps_misaligned.cClément Léger
[ Upstream commit f19c3b4239f5bfb69aacbaf75d4277c095e7aa7d ] Replace macros by the only two function calls that are done from this file, store_u8() and load_u8(). Signed-off-by: Clément Léger <cleger@rivosinc.com> Link: https://lore.kernel.org/r/20231004151405.521596-2-cleger@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Stable-dep-of: fb197c5d2fd2 ("riscv/purgatory: align riscv_kernel_entry") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-11ARM: 9406/1: Fix callchain_trace() return valueJinjie Ruan
[ Upstream commit 4e7b4ff2dcaed228cb2fb7bfe720262c98ec1bb9 ] perf_callchain_store() returns 0 on success and -1 otherwise; fix callchain_trace() to return the correct bool value, so that walk_stackframe() has a chance to stop walking the stack early. Fixes: 70ccc7c0667b ("ARM: 9258/1: stacktrace: Make stack walk callback consistent with generic code") Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Sasha Levin <sashal@kernel.org>
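The corrected callback (sketch), converting perf_callchain_store()'s 0/-1 convention into the bool that walk_stackframe() expects:

	static bool callchain_trace(void *data, unsigned long pc)
	{
		struct perf_callchain_entry_ctx *entry = data;

		return perf_callchain_store(entry, pc) == 0;
	}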
2024-08-11MIPS: dts: loongson: Fix ls2k1000-rtc interruptJiaxun Yang
[ Upstream commit f70fd92df7529e7283e02a6c3a2510075f13ba30 ] The correct interrupt line for RTC is line 8 on liointc1. Fixes: e47084e116fc ("MIPS: Loongson64: DTS: Add RTC support to Loongson-2K1000") Cc: stable@vger.kernel.org Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-11MIPS: dts: loongson: Fix liointc IRQ polarityJiaxun Yang
[ Upstream commit dbb69b9d6234aad23b3ecd33e5bc8a8ae1485b7d ] All internal liointc interrupts are high level triggered. Fixes: b1a792601f26 ("MIPS: Loongson64: DeviceTree for Loongson-2K1000") Cc: stable@vger.kernel.org Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-11MIPS: Loongson64: DTS: Fix PCIe port nodes for ls7aJiaxun Yang
[ Upstream commit d89a415ff8d5e0aad4963f2d8ebb0f9e8110b7fa ] Add various required properties to silent warnings: arch/mips/boot/dts/loongson/loongson64-2k1000.dtsi:116.16-297.5: Warning (interrupt_provider): /bus@10000000/pci@1a000000: '#interrupt-cells' found, but node is not an interrupt provider arch/mips/boot/dts/loongson/loongson64_2core_2k1000.dtb: Warning (interrupt_map): Failed prerequisite 'interrupt_provider' Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Stable-dep-of: dbb69b9d6234 ("MIPS: dts: loongson: Fix liointc IRQ polarity") Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-11KVM: nVMX: Check for pending posted interrupts when looking for nested eventsSean Christopherson
[ Upstream commit 27c4fa42b11af780d49ce704f7fa67b3c2544df4 ] Check for pending (and notified!) posted interrupts when checking if L2 has a pending wake event, as a fully posted/notified virtual interrupt is a valid wake event for HLT. Note that KVM must check vmx->nested.pi_pending to avoid prematurely waking L2, e.g. even if KVM sees a non-zero PID.PIR and PID.ON=1, the virtual interrupt won't actually be recognized until a notification IRQ is received by the vCPU or the vCPU does (nested) VM-Enter. Fixes: 26844fee6ade ("KVM: x86: never write to memory from kvm_vcpu_check_block()") Cc: stable@vger.kernel.org Cc: Maxim Levitsky <mlevitsk@redhat.com> Reported-by: Jim Mattson <jmattson@google.com> Closes: https://lore.kernel.org/all/20231207010302.2240506-1-jmattson@google.com Link: https://lore.kernel.org/r/20240607172609.3205077-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2024-08-11KVM: nVMX: Add a helper to get highest pending from Posted Interrupt vectorSean Christopherson
[ Upstream commit d83c36d822be44db4bad0c43bea99c8908f54117 ] Add a helper to retrieve the highest pending vector given a Posted Interrupt descriptor. While the actual operation is straightforward, it's surprisingly easy to mess up, e.g. if one tries to reuse lapic.c's find_highest_vector(), which doesn't work with PID.PIR due to the APIC's IRR and ISR component registers being physically discontiguous (they're 4-byte registers aligned at 16-byte intervals). To make PIR handling more consistent with respect to IRR and ISR handling, return -1 to indicate "no interrupt pending". Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20240607172609.3205077-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
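A sketch of the helper, assuming the 256-bit PIR bitmap layout of the Posted Interrupt descriptor:

	static inline int pi_find_highest_vector(struct pi_desc *pi_desc)
	{
		int vec;

		/* find_last_bit() returns the size (256) when no bit is set */
		vec = find_last_bit((unsigned long *)pi_desc->pir, 256);
		return vec < 256 ? vec : -1;
	}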