path: root/arch/x86/kernel
Age  Commit message  Author
2023-02-03  livepatch,x86: Clear relocation targets on a module removal  (Song Liu)
Josh reported a bug: when the object to be patched is a module, and that module is rmmod'ed and reloaded, it fails to load with:

  module: x86/modules: Skipping invalid relocation target, existing value is nonzero for type 2, loc 00000000ba0302e9, val ffffffffa03e293c
  livepatch: failed to initialize patch 'livepatch_nfsd' for module 'nfsd' (-8)
  livepatch: patch 'livepatch_nfsd' failed for module 'nfsd', refusing to load module 'nfsd'

The livepatch module has a relocation which references a symbol in the _previous_ loading of nfsd. When apply_relocate_add() tries to replace the old relocation with a new one, it sees that the previous one is nonzero and errors out.

Josh also proposed three different solutions. We could remove the error check in apply_relocate_add() introduced by commit eda9cec4c9a1 ("x86/module: Detect and skip invalid relocations"), but the check is useful for detecting corrupted modules. We could instead prevent patched modules from being removed; if that proved to be a major drawback for users, a different approach could still be implemented later, but that solution would also complicate the existing code a lot. We thus decided to reverse the relocation patching (clear all relocation targets on x86_64). The solution is not universal and is rather arch-specific, but it may prove to be the simpler one in the end. Reported-by: Josh Poimboeuf <jpoimboe@redhat.com> Originally-by: Miroslav Benes <mbenes@suse.cz> Signed-off-by: Song Liu <song@kernel.org> Acked-by: Miroslav Benes <mbenes@suse.cz> Reviewed-by: Petr Mladek <pmladek@suse.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Reviewed-by: Joe Lawrence <joe.lawrence@redhat.com> Tested-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20230125185401.279042-2-song@kernel.org
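For reference, the x86_64 reversal has roughly the following shape (a sketch, with names and error handling simplified; the series threads an 'apply' flag through a common helper so apply_relocate_add() and the new clear_relocate_add() share one relocation loop):

    if (apply) {
        if (memcmp(loc, &zero, size))
            return -ENOEXEC;    /* target already nonzero: corruption? */
        write(loc, &val, size);
    } else {
        if (memcmp(loc, &val, size))
            return -ENOEXEC;    /* target doesn't hold the expected value */
        write(loc, &zero, size);
    }

Clearing thus restores the zeroed state that the apply path's sanity check expects on the next load of the module.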
2023-02-03  x86/module: remove unused code in __apply_relocate_add  (Song Liu)
This "#if 0" block has been untouched for many years. Remove it to clean up the code. Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Song Liu <song@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Reviewed-by: Joe Lawrence <joe.lawrence@redhat.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20230125185401.279042-1-song@kernel.org
2023-02-02  clocksource: Verify HPET and PMTMR when TSC unverified  (Paul E. McKenney)
On systems with two or fewer sockets, when the boot CPU has CONSTANT_TSC, NONSTOP_TSC, and TSC_ADJUST, clocksource watchdog verification of the TSC is disabled. This works well much of the time, but there is the occasional production-level system that meets all of these criteria, but which still has a TSC that skews significantly from atomic-clock time. This is usually attributed to a firmware or hardware fault. Yes, the various NTP daemons do express their opinions of userspace-to-atomic-clock time skew, but they put them in various places, depending on the daemon and distro in question. It would therefore be good for the kernel to have some clue that there is a problem. The old behavior of marking the TSC unstable is a non-starter because a great many workloads simply cannot tolerate the overheads and latencies of the various non-TSC clocksources. In addition, NTP-corrected systems sometimes can tolerate significant kernel-space time skew as long as the userspace time sources are within epsilon of atomic-clock time. Therefore, when watchdog verification of TSC is disabled, enable it for HPET and PMTMR (AKA ACPI PM timer). This provides the needed in-kernel time-skew diagnostic without degrading the system's performance. Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Waiman Long <longman@redhat.com> Cc: <x86@kernel.org> Tested-by: Feng Tang <feng.tang@intel.com>
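A sketch of the mechanism, assuming helper and flag names along the lines of the description above (not verified against the final patch): the TSC code exports whether its watchdog checks are off, and the HPET/PMTMR registration paths opt into verification in that case:

    /* in the HPET (and, analogously, ACPI PM timer) registration path: */
    if (tsc_clocksource_watchdog_disabled())
        clocksource_hpet.flags |= CLOCK_SOURCE_MUST_VERIFY;

CLOCK_SOURCE_MUST_VERIFY makes the clocksource watchdog check these timers, so skew between TSC-based time and the hardware timers is reported in-kernel without the TSC ever being demoted.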
2023-02-02  x86/tsc: Add option to force frequency recalibration with HW timer  (Feng Tang)
The kernel assumes that the TSC frequency provided by the hardware / firmware via MSRs or CPUID(0x15) is correct after applying a few basic consistency checks. This disables the TSC recalibration against HPET or PM timer. As a result there is no mechanism to validate that frequency in cases where a firmware or hardware defect is suspected. And there was a case where a user measured the TSC frequency against an atomic clock and reported an inaccuracy, which was later fixed in firmware. Add a 'recalibrate' option to the 'tsc' kernel parameter to force TSC frequency recalibration with HPET or PM timer, and warn if the deviation from the previous value is more than about 500 PPM. This provides a way to verify the data from hardware / firmware. There is no functional change to the existing work flow. Recently there was a real-world case: "The 40ms/s divergence between TSC and HPET was observed on hardware that is quite recent" [1]. On that platform the TSC frequency of 1896 MHz was read from CPUID(0x15), while forced recalibration with both HPET and PMTIMER yielded a value of 1975 MHz, which also matched a cross-check with the 'chronyd' software, indicating a BIOS or firmware problem. [Thanks tglx for helping improve the commit log] [ paulmck: Wordsmith Kconfig help text. ] [1]. https://lore.kernel.org/lkml/20221117230910.GI4001@paulmck-ThinkPad-P17-Gen-1/ Signed-off-by: Feng Tang <feng.tang@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: <x86@kernel.org> Cc: <linux-doc@vger.kernel.org> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2023-01-31  x86/amd: Cache debug register values in percpu variables  (Alexey Kardashevskiy)
Reading DR[0-3]_ADDR_MASK MSRs takes about 250 cycles which is going to be noticeable with the AMD KVM SEV-ES DebugSwap feature enabled. KVM is going to store host's DR[0-3] and DR[0-3]_ADDR_MASK before switching to a guest; the hardware is going to swap these on VMRUN and VMEXIT. Store MSR values passed to set_dr_addr_mask() in percpu variables (when changed) and return them via new amd_get_dr_addr_mask(). The gain here is about 10x. As set_dr_addr_mask() uses the array too, change the @dr type to unsigned to avoid checking for <0. And give it the amd_ prefix to match the new helper as the whole DR_ADDR_MASK feature is AMD-specific anyway. While at it, replace deprecated boot_cpu_has() with cpu_feature_enabled() in set_dr_addr_mask(). Signed-off-by: Alexey Kardashevskiy <aik@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230120031047.628097-2-aik@amd.com
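A condensed sketch of the caching scheme described above (close to the patch in spirit; treat details such as the amd_msr_dr_addr_masks[] table, which would map dr 0..3 to MSR_F16H_DR0_ADDR_MASK and MSR_F16H_DR1_ADDR_MASK..+2, as illustrative):

    static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[4], amd_dr_addr_mask);

    void amd_set_dr_addr_mask(unsigned long mask, unsigned int dr)
    {
        int cpu = smp_processor_id();

        if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
            return;

        /* Skip the expensive WRMSR when nothing changes. */
        if (per_cpu(amd_dr_addr_mask, cpu)[dr] == mask)
            return;

        wrmsr(amd_msr_dr_addr_masks[dr], mask, 0);
        per_cpu(amd_dr_addr_mask, cpu)[dr] = mask;
    }

    unsigned long amd_get_dr_addr_mask(unsigned int dr)
    {
        if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
            return 0;

        /* Cheap percpu read instead of a ~250-cycle RDMSR. */
        return __this_cpu_read(amd_dr_addr_mask[dr]);
    }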
2023-01-31  x86/microcode: Allow only "1" as a late reload trigger value  (Ashok Raj)
Microcode gets reloaded late only if "1" is written to the reload file. However, the code silently treats any other unsigned integer as a successful write even though no actions are performed to load microcode. Make the loader more strict to accept only "1" as a trigger value and return an error otherwise. [ bp: Massage commit message. ] Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230130213955.6046-3-ashok.raj@intel.com
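The stricter check amounts to something like this sketch of the sysfs reload_store() handler (error paths and locking simplified; treat the surrounding details as assumptions):

    static ssize_t reload_store(struct device *dev,
                                struct device_attribute *attr,
                                const char *buf, size_t size)
    {
        unsigned long val;
        ssize_t ret;

        ret = kstrtoul(buf, 0, &val);
        if (ret)
            return ret;

        /* Previously any other unsigned integer was silently "accepted". */
        if (val != 1)
            return -EINVAL;

        /* ... take the hotplug lock, run the vendor loader, return size ... */
    }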
2023-01-31  x86/static_call: Add support for Jcc tail-calls  (Peter Zijlstra)
Clang likes to create conditional tail calls like:

  0000000000000350 <amd_pmu_add_event>:
  350:  0f 1f 44 00 00           nopl 0x0(%rax,%rax,1)    351: R_X86_64_NONE    __fentry__-0x4
  355:  48 83 bf 20 01 00 00 00  cmpq $0x0,0x120(%rdi)
  35d:  0f 85 00 00 00 00        jne 363 <amd_pmu_add_event+0x13>    35f: R_X86_64_PLT32    __SCT__amd_pmu_branch_add-0x4
  363:  e9 00 00 00 00           jmp 368 <amd_pmu_add_event+0x18>    364: R_X86_64_PLT32    __x86_return_thunk-0x4

Where 0x35d is a static call site that's turned into a conditional tail-call using the Jcc class of instructions. Teach the in-line static call text patching about this. Notably, since there is no conditional-ret, in that case patch the Jcc to point at an empty stub function that does the ret -- or the return thunk when needed. Reported-by: "Erhard F." <erhard_f@mailbox.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Link: https://lore.kernel.org/r/Y9Kdg9QjHkr9G5b5@hirez.programming.kicks-ass.net
2023-01-31  x86/alternatives: Teach text_poke_bp() to patch Jcc.d32 instructions  (Peter Zijlstra)
In order to rewrite Jcc.d32 instructions, text_poke_bp() needs to be taught about them. The biggest hurdle is that the whole machinery is currently built for 5-byte instructions, and extending it would grow struct text_poke_loc, which is currently a nice 16 bytes and used in an array. However, since text_poke_loc contains a full copy of the (s32) displacement, it is possible to map the Jcc.d32 2-byte opcodes to the Jcc.d8 1-byte opcodes for the int3 emulation. This then leaves the replacement bytes; fudge that by only storing the last 5 bytes and adding the rule that a 'length == 6' instruction will be prefixed with a 0x0f byte. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20230123210607.115718513@infradead.org
2023-01-31  x86/alternatives: Introduce int3_emulate_jcc()  (Peter Zijlstra)
Move the kprobe Jcc emulation into int3_emulate_jcc() so it can be used by more code -- specifically static_call() will need this. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20230123210607.057678245@infradead.org
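The emulation itself is compact; the following sketch mirrors the structure of the moved code (condition-code decoding per the Jcc encoding, with odd cc values inverting the test; treat it as an illustration rather than the verbatim kernel source):

    static __always_inline
    void int3_emulate_jcc(struct pt_regs *regs, u8 cc, unsigned long ip,
                          unsigned long disp)
    {
        static const unsigned long jcc_mask[6] = {
            [0] = X86_EFLAGS_OF,
            [1] = X86_EFLAGS_CF,
            [2] = X86_EFLAGS_ZF,
            [3] = X86_EFLAGS_CF | X86_EFLAGS_ZF,
            [4] = X86_EFLAGS_SF,
            [5] = X86_EFLAGS_PF,
        };
        bool invert = cc & 1;   /* odd condition codes negate the test */
        bool match;

        if (cc < 0xc) {
            match = regs->flags & jcc_mask[cc >> 1];
        } else {
            /* JL/JGE and JLE/JG test SF != OF, plus ZF for the latter pair */
            match = ((regs->flags & X86_EFLAGS_SF) >> X86_EFLAGS_SF_BIT) ^
                    ((regs->flags & X86_EFLAGS_OF) >> X86_EFLAGS_OF_BIT);
            if (cc >= 0xe)
                match = match || (regs->flags & X86_EFLAGS_ZF);
        }

        if ((match && !invert) || (!match && invert))
            ip += disp;

        int3_emulate_jmp(regs, ip);
    }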
2023-01-31  sched/clock/x86: Mark sched_clock() noinstr  (Peter Zijlstra)
In order to use sched_clock() from noinstr code, mark it and all its implementations noinstr. The whole pvclock thing (used by KVM/Xen) is a bit of a pain, since it calls out to watchdogs; create a pvclock_clocksource_read_nowd() variant that doesn't do that and can be noinstr. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20230126151323.702003578@infradead.org
2023-01-31  x86/pvclock: Improve atomic update of last_value in pvclock_clocksource_read()  (Uros Bizjak)
Improve the atomic update of last_value in pvclock_clocksource_read():

- The atomic update can be skipped if "last_value" is already equal to "ret".
- The detection of atomic update failure is not correct. The value returned by atomic64_cmpxchg() should be compared to the old value from the location to be updated. If the two are the same, the atomic update succeeded and the "last_value" location was updated to "ret" atomically. Otherwise, the atomic update failed and it should be retried with the value from "last_value" - exactly what atomic64_try_cmpxchg() does in a correct and more optimal way.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20230118202330.3740-1-ubizjak@gmail.com Link: https://lore.kernel.org/r/20230126151323.643408110@infradead.org
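In code, the change is roughly the following sketch:

    /* Before: the success test compared the cmpxchg() result against
     * 'ret' instead of the expected old value, and there was no early
     * exit when last_value was already up to date. */
    last = atomic64_read(&last_value);
    do {
        if (ret < last)
            return last;
        last = atomic64_cmpxchg(&last_value, last, ret);
    } while (unlikely(last != ret));

    /* After: try_cmpxchg() refreshes 'last' on failure, and equality
     * also skips the update. */
    last = atomic64_read(&last_value);
    do {
        if (ret <= last)
            return last;
    } while (!atomic64_try_cmpxchg(&last_value, &last, ret));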
2023-01-31  Merge tag 'v6.2-rc6' into sched/core, to pick up fixes  (Ingo Molnar)
Pick up fixes before merging another batch of cpuidle updates. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2023-01-29  Merge tag 'x86_urgent_for_v6.2_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 fixes from Borislav Petkov:

- Start checking for -mindirect-branch-cs-prefix clang support too, now that LLVM 16 will support it
- Fix a NULL ptr deref when suspending with Xen PV
- Have a SEV-SNP guest check explicitly for features enabled by the hypervisor and fail gracefully if some are unsupported by the guest, instead of failing in a non-obvious and hard-to-debug way
- Fix an MSI descriptor leak under Xen
- Mark Xen's MSI domain as supporting MSI-X
- Prevent legacy PIC interrupts from being resent in software by marking them level triggered, as they should be, which led to a NULL ptr deref

* tag 'x86_urgent_for_v6.2_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/build: Move '-mindirect-branch-cs-prefix' out of GCC-only block
  acpi: Fix suspend with Xen PV
  x86/sev: Add SEV-SNP guest feature negotiation support
  x86/pci/xen: Fixup fallout from the PCI/MSI overhaul
  x86/pci/xen: Set MSI_FLAG_PCI_MSIX support in Xen MSI domain
  x86/i8259: Mark legacy PIC interrupts with IRQ_LEVEL
2023-01-27  x86/tdx: Add more registers to struct tdx_hypercall_args  (Kirill A. Shutemov)
struct tdx_hypercall_args is used to pass hypercall arguments down to the __tdx_hypercall() assembly routine. Currently __tdx_hypercall() handles up to 6 arguments. In preparation for changes to __tdx_hypercall(), expand the structure by 6 more registers and generate asm offsets for them. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230126221159.8635-3-kirill.shutemov%40linux.intel.com
2023-01-26  x86/ACPI/boot: Use try_cmpxchg() in __acpi_{acquire,release}_global_lock()  (Uros Bizjak)
Use try_cmpxchg() instead of cmpxchg(*ptr, old, new) == old in __acpi_{acquire,release}_global_lock(). The x86 CMPXCHG instruction returns success in the ZF flag, so this change saves a compare after CMPXCHG (and the related MOV instruction in front of CMPXCHG). Also, try_cmpxchg() implicitly assigns the old *ptr value to "old" when CMPXCHG fails, so there is no need to re-read the value in the loop. Note that the value from *ptr should be read using READ_ONCE() to prevent the compiler from merging, refetching or reordering the read. No functional change intended. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/r/20230116162522.4072-1-ubizjak@gmail.com
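Applied to the acquire side, the result looks roughly like this sketch (the ACPI global-lock bit arithmetic is unchanged from the existing code):

    int __acpi_acquire_global_lock(unsigned int *lock)
    {
        unsigned int old, new;

        old = READ_ONCE(*lock);
        do {
            new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1));
        } while (!try_cmpxchg(lock, &old, new)); /* refreshes 'old' on failure */

        return ((new & 0x3) < 3) ? -1 : 0;
    }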
2023-01-26  x86/resctrl: Fix a silly -Wunused-but-set-variable warning  (Borislav Petkov (AMD))
clang correctly complains:

  arch/x86/kernel/cpu/resctrl/rdtgroup.c:1456:6: warning: variable \
  'h' set but not used [-Wunused-but-set-variable]
          u32 h;
              ^

but it can't know whether this use is innocuous or really a problem. There's a reason why those warning switches are behind a W=1 and not enabled by default - yes, one needs to do:

  make W=1 CC=clang HOSTCC=clang arch/x86/kernel/cpu/resctrl/

with clang 14 in order to trigger it. I would normally not take a silly fix like that but this one is simple and doesn't make the code uglier so... Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Reinette Chatre <reinette.chatre@intel.com> Tested-by: Babu Moger <babu.moger@amd.com> Link: https://lore.kernel.org/r/202301242015.kbzkVteJ-lkp@intel.com
2023-01-25  x86/cpu: Support AMD Automatic IBRS  (Kim Phillips)
The AMD Zen4 core supports a new feature called Automatic IBRS. It is a "set-and-forget" feature, meaning that, like Intel's Enhanced IBRS, the hardware manages its IBRS mitigation resources automatically across CPL transitions. The feature is advertised by CPUID_Fn80000021_EAX bit 8 and is enabled by setting MSR C000_0080 (EFER) bit 21. Enable Automatic IBRS by default if the CPU feature is present. It typically provides greater performance than the incumbent generic retpolines mitigation. Reuse the SPECTRE_V2_EIBRS spectre_v2_mitigation enum, as AMD Automatic IBRS and Intel Enhanced IBRS have similar enablement. Add NO_EIBRS_PBRSB to cpu_vuln_whitelist, since AMD Automatic IBRS isn't affected by PBRSB-eIBRS. The kernel command line option spectre_v2=eibrs is used to select AMD Automatic IBRS, if available. Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Sean Christopherson <seanjc@google.com> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20230124163319.2277355-8-kim.phillips@amd.com
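The enablement is a one-time EFER bit rather than per-transition SPEC_CTRL writes; a sketch of the selection logic (helper names such as spectre_v2_in_ibrs_mode() and _EFER_AUTOIBRS are assumptions based on the description above):

    if (spectre_v2_in_ibrs_mode(mode)) {
        if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
            /* AMD Automatic IBRS: set EFER bit 21 once, done. */
            msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
        } else {
            /* Intel eIBRS: set SPEC_CTRL.IBRS once. */
            x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
            update_spec_ctrl(x86_spec_ctrl_base);
        }
    }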
2023-01-25  x86/cpu, kvm: Add the Null Selector Clears Base feature  (Kim Phillips)
The Null Selector Clears Base feature was being open-coded for KVM. Add it to its newly added native CPUID leaf 0x80000021 EAX proper. Also drop the bit description comments now that it's more self-describing. [ bp: Convert test in check_null_seg_clears_base() too. ] Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20230124163319.2277355-6-kim.phillips@amd.com
2023-01-25  x86/cpu, kvm: Move X86_FEATURE_LFENCE_RDTSC to its native leaf  (Kim Phillips)
The LFENCE always serializing feature bit was defined as scattered LFENCE_RDTSC and its native leaf bit position open-coded for KVM. Add it to its newly added CPUID leaf 0x80000021 EAX proper. With LFENCE_RDTSC in its proper place, the kernel's set_cpu_cap() will effectively synthesize the feature for KVM going forward. Also, DE_CFG[1] doesn't need to be set on such CPUs anymore. [ bp: Massage and merge diff from Sean. ] Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20230124163319.2277355-5-kim.phillips@amd.com
2023-01-25  x86/fpu: Don't set TIF_NEED_FPU_LOAD for PF_IO_WORKER threads  (Jens Axboe)
We don't set it on PF_KTHREAD threads as they never return to userspace, and PF_IO_WORKER threads are identical in that regard. As they keep running in the kernel until they die, skip setting the FPU flag on them. This is more of a cosmetic thing, found while debugging an issue and pondering why the FPU flag is set on these threads. Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/560c844c-f128-555b-40c6-31baff27537f@kernel.dk
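The change boils down to widening the flag test in the FPU code; a sketch, assuming kernel_fpu_begin_mask() is the touched site (that is where the flag is set for in-kernel FPU users):

    if (!(current->flags & (PF_KTHREAD | PF_IO_WORKER)) &&
        !test_thread_flag(TIF_NEED_FPU_LOAD)) {
        /* Save user FPU state only for threads that return to userspace. */
        set_thread_flag(TIF_NEED_FPU_LOAD);
        save_fpregs_to_fpstate(&current->thread.fpu);
    }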
2023-01-25  x86/vdso: Move VDSO image init to vdso2c generated code  (Brian Gerst)
Generate an init function for each VDSO image, replacing init_vdso() and sysenter_setup(). Signed-off-by: Brian Gerst <brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20230124184019.26850-1-brgerst@gmail.com
2023-01-25  x86/cpu, kvm: Add support for CPUID_80000021_EAX  (Kim Phillips)
Add support for CPUID leaf 80000021, EAX. The majority of the features will be used in the kernel and thus a separate leaf is appropriate. Include KVM's reverse_cpuid entry because features are used by VM guests, too. [ bp: Massage commit message. ] Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20230124163319.2277355-2-kim.phillips@amd.com
2023-01-24  x86/entry: KVM: Use dedicated VMX NMI entry for 32-bit kernels too  (Sean Christopherson)
Use a dedicated entry for invoking the NMI handler from KVM VMX's VM-Exit path for 32-bit even though using a dedicated entry for 32-bit isn't strictly necessary. Exposing a single symbol will allow KVM to reference the entry point in assembly code without having to resort to more #ifdefs (or #defines). idtentry.h is intended to be included from asm files only once, and so simply including idtentry.h in KVM assembly isn't an option. Bypassing the ESP fixup and CR3 switching in the standard NMI entry code is safe as KVM always handles NMIs that occur in the guest on a kernel stack, with a kernel CR3. Cc: Andy Lutomirski <luto@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20221213060912.654668-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-01-24  x86/reboot: Disable SVM, not just VMX, when stopping CPUs  (Sean Christopherson)
Disable SVM and more importantly force GIF=1 when halting a CPU or rebooting the machine. Similar to VMX, SVM allows software to block INITs via CLGI, and thus can be problematic for a crash/reboot. The window for failure is smaller with SVM as INIT is only blocked while GIF=0, i.e. between CLGI and STGI, but the window does exist. Fixes: fba4f472b33a ("x86/reboot: Turn off KVM when halting a CPU") Cc: stable@vger.kernel.org Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20221130233650.1404148-5-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-01-24  x86/reboot: Disable virtualization in an emergency if SVM is supported  (Sean Christopherson)
Disable SVM on all CPUs via NMI shootdown during an emergency reboot. Like VMX, SVM can block INIT, e.g. if the emergency reboot is triggered between CLGI and STGI, and thus can prevent bringing up other CPUs via INIT-SIPI-SIPI. Cc: stable@vger.kernel.org Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20221130233650.1404148-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-01-24  x86/crash: Disable virt in core NMI crash handler to avoid double shootdown  (Sean Christopherson)
Disable virtualization in crash_nmi_callback() and rework the emergency_vmx_disable_all() path to do an NMI shootdown if and only if a shootdown has not already occurred. NMI crash shootdown fundamentally can't support multiple invocations as responding CPUs are deliberately put into halt state without unblocking NMIs. But, the emergency reboot path doesn't have any work of its own, it simply cares about disabling virtualization, i.e. so long as a shootdown occurred, emergency reboot doesn't care who initiated the shootdown, or when. If "crash_kexec_post_notifiers" is specified on the kernel command line, panic() will invoke crash_smp_send_stop() and result in a second call to nmi_shootdown_cpus() during native_machine_emergency_restart(). Invoke the callback _before_ disabling virtualization, as the current VMCS needs to be cleared before doing VMXOFF. Note, this results in a subtle change in ordering between disabling virtualization and stopping Intel PT on the responding CPUs. While VMX and Intel PT do interact, VMXOFF and writes to MSR_IA32_RTIT_CTL do not induce faults between one another, which is all that matters when panicking. Harden nmi_shootdown_cpus() against multiple invocations to try and capture any such kernel bugs via a WARN instead of hanging the system during a crash/dump, e.g. prior to the recent hardening of register_nmi_handler(), re-registering the NMI handler would trigger a double list_add() and hang the system if CONFIG_BUG_ON_DATA_CORRUPTION=y:

  list_add double add: new=ffffffff82220800, prev=ffffffff8221cfe8, next=ffffffff82220800.
  WARNING: CPU: 2 PID: 1319 at lib/list_debug.c:29 __list_add_valid+0x67/0x70
  Call Trace:
   __register_nmi_handler+0xcf/0x130
   nmi_shootdown_cpus+0x39/0x90
   native_machine_emergency_restart+0x1c9/0x1d0
   panic+0x237/0x29b

Extract the disabling logic to a common helper to deduplicate code, and to prepare for doing the shootdown in the emergency reboot path if SVM is supported. Note, prior to commit ed72736183c4 ("x86/reboot: Force all cpus to exit VMX root if VMX is supported"), nmi_shootdown_cpus() was subtly protected against a second invocation by a cpu_vmx_enabled() check as the kdump handler would disable VMX if it ran first. Fixes: ed72736183c4 ("x86/reboot: Force all cpus to exit VMX root if VMX is supported") Cc: stable@vger.kernel.org Reported-by: Guilherme G. Piccoli <gpiccoli@igalia.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Link: https://lore.kernel.org/all/20220427224924.592546-2-gpiccoli@igalia.com Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20221130233650.1404148-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
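The extracted helper is conceptually tiny; a sketch with names following the commit description (not verified line-for-line against the patch):

    /*
     * Disable virtualization so INIT is recognized during reboot: VMX
     * blocks INIT while post-VMXON, SVM blocks INIT while GIF=0.
     */
    void cpu_emergency_disable_virtualization(void)
    {
        cpu_emergency_vmxoff();         /* clear current VMCS, then VMXOFF */
        cpu_emergency_svm_disable();    /* STGI + clear EFER.SVME */
    }

Both the crash NMI callback and the emergency-restart path call it, and nmi_shootdown_cpus() WARNs and bails on a second invocation instead of re-registering its NMI handler.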
2023-01-24  Merge branch 'kvm-v6.2-rc4-fixes' into HEAD  (Paolo Bonzini)
ARM:

* Fix the PMCR_EL0 reset value after the PMU rework
* Correctly handle S2 fault triggered by a S1 page table walk by not always classifying it as a write, as this breaks on R/O memslots
* Document why we cannot exit with KVM_EXIT_MMIO when taking a write fault from a S1 PTW on a R/O memslot
* Put the Apple M2 on the naughty list for not being able to correctly implement the vgic SEIS feature, just like the M1 before it
* Reviewer updates: Alex is stepping down, replaced by Zenghui

x86:

* Fix various rare locking issues in Xen emulation and teach lockdep to detect them
* Documentation improvements
* Do not return host topology information from KVM_GET_SUPPORTED_CPUID
2023-01-23  x86/resctrl: Add interface to write mbm_local_bytes_config  (Babu Moger)
The event configuration for mbm_local_bytes can be changed by the user by writing to the configuration file /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config. The event configuration settings are domain specific and will affect all the CPUs in the domain. The following event types are supported:

  ==== ===========================================================
  Bits Description
  ==== ===========================================================
  6    Dirty Victims from the QOS domain to all types of memory
  5    Reads to slow memory in the non-local NUMA domain
  4    Reads to slow memory in the local NUMA domain
  3    Non-temporal writes to non-local NUMA domain
  2    Non-temporal writes to local NUMA domain
  1    Reads to memory in the non-local NUMA domain
  0    Reads to memory in the local NUMA domain
  ==== ===========================================================

For example, to change mbm_local_bytes_config to count all the non-temporal writes on domain 0, bits 2 and 3 need to be set, which is 1100b (0xc in hex). Run the command:

  $ echo 0=0xc > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config

To change mbm_local_bytes to count only reads to the local NUMA domain on domain 1, bit 0 needs to be set, which is 1b (0x1 in hex). Run the command:

  $ echo 1=0x1 > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config

Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-13-babu.moger@amd.com
2023-01-23  x86/resctrl: Add interface to write mbm_total_bytes_config  (Babu Moger)
The event configuration for mbm_total_bytes can be changed by the user by writing to the file /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config. The event configuration settings are domain specific and affect all the CPUs in the domain. The following event types are supported:

  ==== ===========================================================
  Bits Description
  ==== ===========================================================
  6    Dirty Victims from the QOS domain to all types of memory
  5    Reads to slow memory in the non-local NUMA domain
  4    Reads to slow memory in the local NUMA domain
  3    Non-temporal writes to non-local NUMA domain
  2    Non-temporal writes to local NUMA domain
  1    Reads to memory in the non-local NUMA domain
  0    Reads to memory in the local NUMA domain
  ==== ===========================================================

For example, to change mbm_total_bytes to count only reads on domain 0, bits 0, 1, 4 and 5 need to be set, which is 110011b (0x33 in hex). Run the command:

  $ echo 0=0x33 > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config

To change mbm_total_bytes to count all the slow memory reads on domain 1, bits 4 and 5 need to be set, which is 110000b (0x30 in hex). Run the command:

  $ echo 1=0x30 > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config

Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-12-babu.moger@amd.com
2023-01-23  x86/resctrl: Add interface to read mbm_local_bytes_config  (Babu Moger)
The event configuration can be viewed by the user by reading the configuration file /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config. The event configuration settings are domain specific and will affect all the CPUs in the domain. The following event types are supported:

  ==== ===========================================================
  Bits Description
  ==== ===========================================================
  6    Dirty Victims from the QOS domain to all types of memory
  5    Reads to slow memory in the non-local NUMA domain
  4    Reads to slow memory in the local NUMA domain
  3    Non-temporal writes to non-local NUMA domain
  2    Non-temporal writes to local NUMA domain
  1    Reads to memory in the non-local NUMA domain
  0    Reads to memory in the local NUMA domain
  ==== ===========================================================

By default, the mbm_local_bytes_config is set to 0x15 to count all the local event types. For example:

  $ cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
  0=0x15;1=0x15;2=0x15;3=0x15

In this case, the event mbm_local_bytes is configured with 0x15 on domains 0 to 3. Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-11-babu.moger@amd.com
2023-01-23  x86/resctrl: Add interface to read mbm_total_bytes_config  (Babu Moger)
The event configuration can be viewed by the user by reading the configuration file /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config. The event configuration settings are domain specific and will affect all the CPUs in the domain. The following event types are supported:

  ==== ===========================================================
  Bits Description
  ==== ===========================================================
  6    Dirty Victims from the QOS domain to all types of memory
  5    Reads to slow memory in the non-local NUMA domain
  4    Reads to slow memory in the local NUMA domain
  3    Non-temporal writes to non-local NUMA domain
  2    Non-temporal writes to local NUMA domain
  1    Reads to memory in the non-local NUMA domain
  0    Reads to memory in the local NUMA domain
  ==== ===========================================================

By default, the mbm_total_bytes_config is set to 0x7f to count all the event types. For example:

  $ cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
  0=0x7f;1=0x7f;2=0x7f;3=0x7f

In this case, the event mbm_total_bytes is configured with 0x7f on domains 0 to 3. Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-10-babu.moger@amd.com
2023-01-23  x86/resctrl: Support monitor configuration  (Babu Moger)
Add a new field in struct mon_evt to support Bandwidth Monitoring Event Configuration (BMEC) and also update the "mon_features" display. The resctrl file "mon_features" will display the supported events and the files that can be used to configure those events if monitor configuration is supported.

Before the change:

  $ cat /sys/fs/resctrl/info/L3_MON/mon_features
  llc_occupancy
  mbm_total_bytes
  mbm_local_bytes

After the change, when BMEC is supported:

  $ cat /sys/fs/resctrl/info/L3_MON/mon_features
  llc_occupancy
  mbm_total_bytes
  mbm_total_bytes_config
  mbm_local_bytes
  mbm_local_bytes_config

Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-9-babu.moger@amd.com
2023-01-23  x86/resctrl: Add __init attribute to rdt_get_mon_l3_config()  (Babu Moger)
In an upcoming change, rdt_get_mon_l3_config() needs to call rdt_cpu_has() to query the monitor related features. It cannot do so right now because rdt_cpu_has() has the __init attribute but rdt_get_mon_l3_config() doesn't. Add the __init attribute to rdt_get_mon_l3_config(), which is only called by get_rdt_mon_resources(), which already has the __init attribute. Also make rdt_cpu_has() available to rdt_get_mon_l3_config() via the internal header file. Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-8-babu.moger@amd.com
2023-01-23  x86/resctrl: Detect and configure Slow Memory Bandwidth Allocation  (Babu Moger)
The QoS slow memory configuration details are available via CPUID_Fn80000020_EDX_x02. Detect the available details and initialize the rest to defaults. Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-7-babu.moger@amd.com
2023-01-23  x86/resctrl: Include new features in command line options  (Babu Moger)
Add the command line options to enable or disable the new resctrl features:

  smba: Slow Memory Bandwidth Allocation
  bmec: Bandwidth Monitor Event Configuration

Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-6-babu.moger@amd.com
2023-01-23  x86/cpufeatures: Add Bandwidth Monitoring Event Configuration feature flag  (Babu Moger)
Newer AMD processors support the new feature Bandwidth Monitoring Event Configuration (BMEC). The feature support is identified via CPUID Fn8000_0020_EBX_x0[3]: EVT_CFG - Bandwidth Monitoring Event Configuration (BMEC). The bandwidth monitoring events mbm_total_bytes and mbm_local_bytes are set to count all the total and local reads/writes, respectively. With the introduction of slow memory, the two counters are not enough to count all the different types of memory events. Therefore, BMEC provides the option to configure mbm_total_bytes and mbm_local_bytes to count the specific type of events. Each BMEC event has a configuration MSR which contains one field for each bandwidth type that can be used to configure the bandwidth event to track any combination of supported bandwidth types. The event will count requests from every bandwidth type bit that is set in the corresponding configuration register. The following event types are supported:

  ==== ========================================================
  Bits Description
  ==== ========================================================
  6    Dirty Victims from the QOS domain to all types of memory
  5    Reads to slow memory in the non-local NUMA domain
  4    Reads to slow memory in the local NUMA domain
  3    Non-temporal writes to non-local NUMA domain
  2    Non-temporal writes to local NUMA domain
  1    Reads to memory in the non-local NUMA domain
  0    Reads to memory in the local NUMA domain
  ==== ========================================================

By default, the mbm_total_bytes configuration is set to 0x7F to count all the event types and the mbm_local_bytes configuration is set to 0x15 to count all the local memory events. The feature description is available in the specification, "AMD64 Technology Platform Quality of Service Extensions, Revision: 1.03" at https://bugzilla.kernel.org/attachment.cgi?id=301365 Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-5-babu.moger@amd.com
2023-01-23  x86/resctrl: Add a new resource type RDT_RESOURCE_SMBA  (Babu Moger)
Add a new resource type RDT_RESOURCE_SMBA to handle the QoS enforcement policies on the external slow memory. Mostly initialization of the essentials. Setting fflags to RFTYPE_RES_MB configures the SMBA resource to have the same resctrl files as the existing MBA resource. The SMBA resource has identical properties to the existing MBA resource. These properties will be enumerated in an upcoming change and exposed via resctrl because of this flag. Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-4-babu.moger@amd.com
2023-01-23  x86/cpufeatures: Add Slow Memory Bandwidth Allocation feature flag  (Babu Moger)
Add the new AMD feature X86_FEATURE_SMBA. With it, the QOS enforcement policies can be applied to external slow memory connected to the host. QOS enforcement is accomplished by assigning a Class Of Service (COS) to a processor and specifying allocations or limits for that COS for each resource to be allocated. This feature is identified by the CPUID function 0x8000_0020_EBX_x0[2]: L3SBE - L3 external slow memory bandwidth enforcement. CXL.memory is the only supported "slow" memory device. With SMBA, the hardware enables bandwidth allocation on the slow memory devices. If there are multiple slow memory devices in the system, the throttling logic groups all the slow sources together and applies the limit on them as a whole. The presence of the SMBA feature (with CXL.memory) is independent of whether a slow memory device is actually present in the system. If there is no slow memory in the system, setting an SMBA limit will have no impact on the performance of the system. The presence of CXL memory can be identified with the numactl command:

  $ numactl -H
  available: 2 nodes (0-1)
  node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
  node 0 size: 63678 MB
  node 0 free: 59542 MB
  node 1 cpus:
  node 1 size: 16122 MB
  node 1 free: 15627 MB
  node distances:
  node   0   1
    0:  10  50
    1:  50  10

The CPU list for CXL memory is empty, and the CPU-to-CXL node distance is greater than the CPU-to-CPU distances; node 1 has the CXL memory in this case. CXL memory can also be identified using the ACPI SRAT table and memory maps. The feature description is available in the specification, "AMD64 Technology Platform Quality of Service Extensions, Publication # 56375, Revision: 1.03, Issue Date: February 2022" at https://bugzilla.kernel.org/attachment.cgi?id=301365 See also https://www.amd.com/en/support/tech-docs/amd64-technology-platform-quality-service-extensions Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-3-babu.moger@amd.com
2023-01-23  x86/resctrl: Replace smp_call_function_many() with on_each_cpu_mask()  (Babu Moger)
on_each_cpu_mask() runs the function on each CPU specified by cpumask, which may include the local processor. Replace smp_call_function_many() with on_each_cpu_mask() to simplify the code. Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20230113152039.770054-2-babu.moger@amd.com
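The simplification in sketch form (the callback and argument names here are illustrative, not taken from the patch):

    /* Before: IPI the remote CPUs, then special-case the local one. */
    smp_call_function_many(cpu_mask, rdt_ctrl_update, &msr_param, 1);
    if (cpumask_test_cpu(smp_processor_id(), cpu_mask))
        rdt_ctrl_update(&msr_param);

    /* After: one call covers remote and local CPUs alike. */
    on_each_cpu_mask(cpu_mask, rdt_ctrl_update, &msr_param, 1);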
2023-01-22  Merge tag 'sched_urgent_for_v6.2_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull scheduler fixes from Borislav Petkov:

- Make sure the scheduler doesn't use stale frequency scaling values when the latter get disabled due to a value error
- Fix a NULL pointer access on UP configs
- Use the proper locking when updating CPU capacity

* tag 'sched_urgent_for_v6.2_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/aperfmperf: Erase stale arch_freq_scale values when disabling frequency invariance readings
  sched/core: Fix NULL pointer access fault in sched_setaffinity() with non-SMP configs
  sched/fair: Fixes for capacity inversion detection
  sched/uclamp: Fix a uninitialized variable warnings
2023-01-21  x86/microcode/intel: Print old and new revision during early boot  (Ashok Raj)
Make early loading message match late loading message and print both old and new revisions. This is helpful to know what the BIOS loaded revision is before an early update. Cache the early BIOS revision before the microcode update and have print_ucode_info() print both the old and new revision in the same format as microcode_reload_late(). [ bp: Massage, remove useless comment. ] Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230120161923.118882-6-ashok.raj@intel.com
2023-01-21  x86/microcode/intel: Pass the microcode revision to print_ucode_info() directly  (Ashok Raj)
print_ucode_info() takes a struct ucode_cpu_info pointer as parameter. Its sole purpose is to print the microcode revision. The only available ucode_cpu_info always describes the currently loaded microcode revision. After a microcode update is successful, this is the new revision, or on failure it is the original revision. In preparation for future changes, replace the struct ucode_cpu_info pointer parameter with a plain integer which contains the revision number and adjust the call sites accordingly. No functional change. [ bp: - Fix + cleanup commit message. - Revert arbitrary, unrelated change. ] Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230120161923.118882-5-ashok.raj@intel.com
2023-01-21  x86/microcode: Adjust late loading result reporting message  (Ashok Raj)
During late microcode loading, the "Reload completed" message is issued unconditionally, regardless of success or failure. Adjust the message to report the result of the update. [ bp: Massage. ] Fixes: 9bd681251b7c ("x86/microcode: Announce reload operation's completion") Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/lkml/874judpqqd.ffs@tglx/
2023-01-21  x86/microcode: Check CPU capabilities after late microcode update correctly  (Ashok Raj)
The kernel caches each CPU's feature bits at boot in an x86_capability[] structure. However, the capabilities in the BSP's copy can be turned off as a result of certain command line parameters or configuration restrictions, for example the SGX bit. This can cause a mismatch when comparing the values before and after the microcode update. Another example is X86_FEATURE_SRBDS_CTRL which gets added only after a microcode update:

  --- cpuid.before	2023-01-21 14:54:15.652000747 +0100
  +++ cpuid.after	2023-01-21 14:54:26.632001024 +0100
  @@ -10,7 +10,7 @@ CPU:
     0x00000004 0x04: eax=0x00000000 ebx=0x00000000 ecx=0x00000000 edx=0x00000000
     0x00000005 0x00: eax=0x00000040 ebx=0x00000040 ecx=0x00000003 edx=0x11142120
     0x00000006 0x00: eax=0x000027f7 ebx=0x00000002 ecx=0x00000001 edx=0x00000000
  -  0x00000007 0x00: eax=0x00000000 ebx=0x029c6fbf ecx=0x40000000 edx=0xbc002400
  +  0x00000007 0x00: eax=0x00000000 ebx=0x029c6fbf ecx=0x40000000 edx=0xbc002e00
                                                                            ^^^

and which proves for a gazillionth time that late loading is a bad bad idea. microcode_check() is called after an update to report any previously cached CPUID bits which might have changed due to the update. Therefore, store the cached CPU caps before the update and compare them with the CPU caps after the microcode update has succeeded. Thus, the comparison is done between the CPUID *hardware* bits before and after the upgrade instead of using the cached, possibly runtime modified values in BSP's boot_cpu_data copy. As a result, false warnings about CPUID bits changes are avoided. [ bp: - Massage. - Add SRBDS_CTRL example. - Add kernel-doc. - Incorporate forgotten review feedback from dhansen. ] Fixes: 1008c52c09dc ("x86/CPU: Add a microcode loader callback") Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230109153555.4986-3-ashok.raj@intel.com
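A sketch of the snapshot helper this series introduces (named store_cpu_caps() in the series; details approximate):

    static void store_cpu_caps(struct cpuinfo_x86 *curr_info)
    {
        /* Reload the CPUID max function as it might have changed. */
        curr_info->cpuid_level = cpuid_eax(0);

        /* Copy all capability leafs, then re-read them from hardware. */
        memcpy(&curr_info->x86_capability, &boot_cpu_data.x86_capability,
               sizeof(curr_info->x86_capability));
        get_cpu_cap(curr_info);
    }

microcode_check() then memcmp()s the x86_capability arrays captured before and after the update and only warns when the hardware bits actually differ.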
2023-01-20  x86/microcode: Add a parameter to microcode_check() to store CPU capabilities  (Ashok Raj)
Add a parameter to store CPU capabilities before performing a microcode update so that CPU capabilities can be compared before and after update. [ bp: Massage. ] Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230109153555.4986-2-ashok.raj@intel.com
2023-01-20  cpuidle-haltpoll: Replace default_idle() with arch_cpu_idle()  (Li RongQing)
When a KVM guest has MWAIT, mwait_idle() is used as the default idle function. However, the cpuidle-haltpoll driver calls default_idle() from default_enter_idle() directly and that one uses HLT instead of MWAIT, which may affect performance adversely, because MWAIT is preferred to HLT as explained by the changelog of commit aebef63cf7ff ("x86: Remove vendor checks from prefer_mwait_c1_over_halt"). Make default_enter_idle() call arch_cpu_idle(), which can use MWAIT, instead of default_idle() to address this issue. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Li RongQing <lirongqing@baidu.com> [ rjw: Changelog rewrite ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
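The resulting idle callback, roughly (a sketch following the cpuidle enter-callback signature):

    static int default_enter_idle(struct cpuidle_device *dev,
                                  struct cpuidle_driver *drv, int index)
    {
        if (current_clr_polling_and_test()) {
            local_irq_enable();
            return index;
        }

        arch_cpu_idle();  /* may resolve to mwait_idle(); was default_idle() (HLT) */
        return index;
    }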
2023-01-19  x86/nmi: Print reasons why backtrace NMIs are ignored  (Paul E. McKenney)
Instrument nmi_trigger_cpumask_backtrace() to dump out diagnostics based on evidence accumulated by exc_nmi(). These diagnostics are dumped for CPUs that ignored an NMI backtrace request for more than 10 seconds. [ paulmck: Apply Ingo Molnar feedback. ] Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: <x86@kernel.org> Reviewed-by: Ingo Molnar <mingo@kernel.org>
2023-01-19  x86/nmi: Accumulate NMI-progress evidence in exc_nmi()  (Paul E. McKenney)
CPUs ignoring NMIs is often a sign of those CPUs going bad, but there are quite a few other reasons why a CPU might ignore NMIs. Therefore, accumulate evidence within exc_nmi() as to what might be preventing a given CPU from responding to an NMI. [ paulmck: Apply Peter Zijlstra feedback. ] Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: <x86@kernel.org> Reviewed-by: Ingo Molnar <mingo@kernel.org>
2023-01-18  x86/microcode: Use the DEVICE_ATTR_RO() macro  (Guangju Wang[baidu])
Use DEVICE_ATTR_RO() helper instead of open-coded DEVICE_ATTR(), which makes the code a bit shorter and easier to read. No change in functionality. Signed-off-by: Guangju Wang[baidu] <wgj900@163.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230118023554.1898-1-wgj900@163.com
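Illustratively (the attribute name here is hypothetical, not taken from the patch), the conversion looks like:

    /* Before: open-coded mode and show callback. */
    static DEVICE_ATTR(version, 0444, version_show, NULL);

    /* After: DEVICE_ATTR_RO(version) derives the 0444 mode and the
     * version_show() callback from the attribute name. */
    static DEVICE_ATTR_RO(version);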
2023-01-18  Merge tag 'v6.2-rc4' into perf/core, to pick up fixes  (Ingo Molnar)
Move from the -rc1 base to the fresher -rc4 kernel that has various fixes included, before applying a larger patchset. Signed-off-by: Ingo Molnar <mingo@kernel.org>