path: root/arch/x86
Age  Commit message  Author
2023-06-01  x86/mtrr: Add mtrr=debug command line option  (Juergen Gross)
Add a new command line option "mtrr=debug" for getting debug output after building the new cache mode map. The output will include MTRR register values and the resulting map. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20230502120931.20719-13-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
2023-06-01  x86/mtrr: Construct a memory map with cache modes  (Juergen Gross)
After MTRR initialization construct a memory map with cache modes from MTRR values. This will speed up lookups via mtrr_lookup_type() especially in case of overlapping MTRRs. This will be needed when switching the semantics of the "uniform" parameter of mtrr_lookup_type() from "only covered by one MTRR" to "memory range has a uniform cache mode", which is the data the callers really want to know. Today this information is not easily available, in case MTRRs are not well sorted regarding base address. The map will be built in __initdata. When memory management is up, the map will be moved to dynamically allocated memory, in order to avoid the need of an overly large array. The size of this array is calculated using the number of variable MTRR registers and the needed size for fixed entries. Only add the map creation and expansion for now. The lookup will be added later. When writing new MTRR entries in the running system rebuild the map inside the call from mtrr_rendezvous_handler() in order to avoid nasty race conditions with concurrent lookups. [ bp: Move out rebuild_map() call and rename it. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20230502120931.20719-12-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
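The map described above can be pictured as a sorted array of non-overlapping physical ranges, each tagged with one cache mode. A minimal sketch follows; the struct and helper names are illustrative assumptions, not the kernel's actual implementation (u64/u8 as in <linux/types.h>, MTRR_TYPE_WRBACK from <uapi/asm/mtrr.h>):
  /* Illustrative only -- not the data structure from the commit itself. */
  struct cache_map_entry {
          u64 start;      /* inclusive physical start address */
          u64 end;        /* exclusive physical end address   */
          u8  type;       /* cache mode, e.g. MTRR_TYPE_WRBACK */
  };
  /* With sorted, non-overlapping entries a lookup is a simple scan (or a
   * binary search) instead of re-evaluating every variable MTRR. */
  static u8 cache_map_lookup(const struct cache_map_entry *map, int n, u64 addr)
  {
          int i;
          for (i = 0; i < n; i++)
                  if (addr >= map[i].start && addr < map[i].end)
                          return map[i].type;
          return MTRR_TYPE_WRBACK;        /* assumed default type */
  }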
2023-06-01  x86/mtrr: Add get_effective_type() service function  (Juergen Gross)
Add a service function for obtaining the effective cache mode of overlapping MTRR registers. Make use of that function in check_type_overlap(). Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20230502120931.20719-11-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
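For reference, the architectural combination rule such a helper encapsulates (per the Intel SDM: UC dominates, WT wins over WB, any other mismatch degrades to UC) looks roughly like the sketch below; this is a simplified illustration, not the kernel source:
  static u8 get_effective_type(u8 type1, u8 type2)
  {
          /* UC always wins over any other type. */
          if (type1 == MTRR_TYPE_UNCACHABLE || type2 == MTRR_TYPE_UNCACHABLE)
                  return MTRR_TYPE_UNCACHABLE;
          /* WT wins when mixed with WB. */
          if ((type1 == MTRR_TYPE_WRBACK && type2 == MTRR_TYPE_WRTHROUGH) ||
              (type1 == MTRR_TYPE_WRTHROUGH && type2 == MTRR_TYPE_WRBACK))
                  return MTRR_TYPE_WRTHROUGH;
          /* Any other mismatch is treated as UC. */
          if (type1 != type2)
                  return MTRR_TYPE_UNCACHABLE;
          return type1;
  }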
2023-06-01  x86/mtrr: Allocate mtrr_value array dynamically  (Juergen Gross)
The mtrr_value[] array is a static variable which is used only in a few configurations. Consuming 6kB is ridiculous for this case, especially as the array doesn't need to be that large and it can easily be allocated dynamically. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20230502120931.20719-10-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
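The usual pattern for such a conversion is to size the allocation by the number of variable MTRRs actually present. A hedged sketch, with the helper name assumed rather than taken from the patch:
  static struct mtrr_value *mtrr_value;   /* was: a large static array */
  static int mtrr_save_alloc(void)
  {
          mtrr_value = kcalloc(num_var_ranges, sizeof(*mtrr_value), GFP_KERNEL);
          return mtrr_value ? 0 : -ENOMEM;
  }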
2023-06-01  x86/mtrr: Move 32-bit code from mtrr.c to legacy.c  (Juergen Gross)
There is some code in mtrr.c which is relevant for old 32-bit CPUs only. Move it to a new source legacy.c. While modifying mtrr_init_finalize() fix spelling of its name. Suggested-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20230502120931.20719-9-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
2023-06-01  x86/mtrr: Have only one set_mtrr() variant  (Juergen Gross)
Today there are two variants of set_mtrr(): one calling stop_machine() and one calling stop_machine_cpuslocked(). The first one (set_mtrr()) has only one caller, and this caller is running only when resuming from suspend when the interrupts are still off and only one CPU is active. Additionally this code is used only on rather old 32-bit CPUs not supporting SMP. For these reasons the first variant can be replaced by a simple call of mtrr_if->set(). Rename the second variant set_mtrr_cpuslocked() to set_mtrr() now that there is only one variant left, in order to have a shorter function name. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20230502120931.20719-8-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
2023-06-01  x86/mtrr: Replace vendor tests in MTRR code  (Juergen Gross)
Modern CPUs all share the same MTRR interface implemented via generic_mtrr_ops. At several places in MTRR code this generic interface is deduced via is_cpu(INTEL) tests, which is only working due to X86_VENDOR_INTEL being 0 (the is_cpu() macro is testing mtrr_if->vendor, which isn't explicitly set in generic_mtrr_ops). Test the generic CPU feature X86_FEATURE_MTRR instead. The only other place where the .vendor member of struct mtrr_ops is being used is in set_num_var_ranges(), where depending on the vendor the number of MTRR registers is determined. This can easily be changed by replacing .vendor with the static number of MTRR registers. It should be noted that the test "is_cpu(HYGON)" wasn't ever returning true, as there is no struct mtrr_ops with that vendor information. [ bp: Use mtrr_enabled() before doing mtrr_if-> accesses, esp. in mtrr_trim_uncached_memory() which gets called independently from whether mtrr_if is set or not. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230502120931.20719-7-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
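The before/after shape of the replacement, sketched below. do_generic_mtrr_setup() is a placeholder invented for this illustration; cpu_feature_enabled() and X86_FEATURE_MTRR are the real feature-test interface:
  /* before: only works because X86_VENDOR_INTEL happens to be 0 */
  if (is_cpu(INTEL))
          do_generic_mtrr_setup();
  /* after: test the architectural MTRR feature directly */
  if (cpu_feature_enabled(X86_FEATURE_MTRR))
          do_generic_mtrr_setup();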
2023-06-01  x86/xen: Set MTRR state when running as Xen PV initial domain  (Juergen Gross)
When running as Xen PV initial domain (aka dom0), MTRRs are disabled by the hypervisor, but the system should nevertheless use correct cache memory types. This has always kind of worked, as disabled MTRRs resulted in disabled PAT, too, so that the kernel avoided code paths resulting in inconsistencies. This bypassed all of the sanity checks the kernel is doing with enabled MTRRs in order to avoid memory mappings with conflicting memory types. This has been changed recently, leading to PAT being accepted to be enabled, while MTRRs stayed disabled. The result is that mtrr_type_lookup() no longer is accepting all memory type requests, but started to return WB even if UC- was requested. This led to driver failures during initialization of some devices. In reality MTRRs are still in effect, but they are under complete control of the Xen hypervisor. It is possible, however, to retrieve the MTRR settings from the hypervisor. In order to fix those problems, overwrite the MTRR state via mtrr_overwrite_state() with the MTRR data from the hypervisor, if the system is running as a Xen dom0. Fixes: 72cbc8f04fe2 ("x86/PAT: Have pat_enabled() properly reflect state when running on Xen") Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Tested-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20230502120931.20719-6-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
2023-06-01  x86/hyperv: Set MTRR state when running as SEV-SNP Hyper-V guest  (Juergen Gross)
In order to avoid mappings using the UC- cache attribute, set the MTRR state to use WB caching as the default. This is needed in order to cope with the fact that PAT is enabled, while MTRRs are not supported by the hypervisor. Fixes: 90b926e68f50 ("x86/pat: Fix pat_x_mtrr_type() for MTRR disabled case") Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20230502120931.20719-5-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
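A sketch of the override described here, based on the commit text: install a synthetic MTRR state whose default type is WB so that PAT-aware code no longer falls back to UC-. The guard condition and the exact call signature are assumptions, not the literal patch:
  /* Hyper-V SEV-SNP init path (sketch): MTRRs are not virtualized here,
   * so declare an all-WB default state instead of leaving it empty. */
  if (running_as_snp_guest)                       /* illustrative condition */
          mtrr_overwrite_state(NULL, 0, MTRR_TYPE_WRBACK);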
2023-06-01  x86/mtrr: Support setting MTRR state for software defined MTRRs  (Juergen Gross)
When running virtualized, MTRR access can be reduced (e.g. in Xen PV guests or when running as a SEV-SNP guest under Hyper-V). Typically, the hypervisor will not advertize the MTRR feature in CPUID data, resulting in no MTRR memory type information being available for the kernel. This has turned out to result in problems (Link tags below): - Hyper-V SEV-SNP guests using uncached mappings where they shouldn't - Xen PV dom0 mapping memory as WB which should be UC- instead Solve those problems by allowing an MTRR static state override, overwriting the empty state used today. In case such a state has been set, don't call get_mtrr_state() in mtrr_bp_init(). The set state will only be used by mtrr_type_lookup(), as in all other cases mtrr_enabled() is being checked, which will return false. Accept the overwrite call only for selected cases when running as a guest. Disable X86_FEATURE_MTRR in order to avoid any MTRR modifications by just refusing them. [ bp: Massage. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/all/4fe9541e-4d4c-2b2a-f8c8-2d34a7284930@nerdbynature.de/ Link: https://lore.kernel.org/lkml/BYAPR21MB16883ABC186566BD4D2A1451D7FE9@BYAPR21MB1688.namprd21.prod.outlook.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
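The assumed shape of the new interface, inferred from the commit text (a set of variable ranges, their count, and a default type, where an empty range list plus a WB default covers the simple case); treat the exact prototype as an assumption:
  void mtrr_overwrite_state(struct mtrr_var_range *var, unsigned int num_var,
                            mtrr_type def_type);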
2023-06-01  x86/mtrr: Replace size_or_mask and size_and_mask with a much easier concept  (Juergen Gross)
Replace size_or_mask and size_and_mask with the much easier concept of high reserved bits. While at it, instead of using constants in the MTRR code, use some new [ bp: - Drop mtrr_set_mask() - Unbreak long lines - Move struct mtrr_state_type out of the uapi header as it doesn't belong there. It also fixes a HDRTEST breakage "unknown type name ‘bool’" as Reported-by: kernel test robot <lkp@intel.com> - Massage. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230502120931.20719-3-jgross@suse.com Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
2023-05-31  x86/platform/uv: Update UV[23] platform code for SNC  (Steve Wahl)
Previous Sub-NUMA Clustering changes need not just a count of blades present, but a count that includes any missing ids for blades not present; in other words, the range from lowest to highest blade id. Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230519190752.3297140-9-steve.wahl%40hpe.com
2023-05-31  x86/platform/uv: Remove remaining BUG_ON() and BUG() calls  (Steve Wahl)
Replace BUG and BUG_ON with WARN_ON_ONCE and carry on as best as we can. Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230519190752.3297140-8-steve.wahl%40hpe.com
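The general shape of the conversion, sketched with an illustrative condition rather than an actual hunk from the patch:
  /* before: any inconsistency brings the whole machine down */
  BUG_ON(nasid < 0);
  /* after: complain once, then carry on as best as we can */
  if (WARN_ON_ONCE(nasid < 0))
          return;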
2023-05-31  x86/platform/uv: UV support for sub-NUMA clustering  (Steve Wahl)
Sub-NUMA clustering (SNC) invalidates previous assumptions of a 1:1 relationship between blades, sockets, and nodes. Fix these assumptions and build tables correctly when SNC is enabled. Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230519190752.3297140-7-steve.wahl%40hpe.com
2023-05-31  x86/platform/uv: Helper functions for allocating and freeing conversion tables  (Steve Wahl)
Add alloc_conv_table() and FREE_1_TO_1_TABLE() to reduce duplicated code among the conversion tables we use. Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230519190752.3297140-6-steve.wahl%40hpe.com
2023-05-31  x86/platform/uv: When searching for minimums, start at INT_MAX not 99999  (Steve Wahl)
Using a starting value of INT_MAX rather than 999999 or 99999 means this algorithm won't fail should the numbers being compared ever exceed this value. Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230519190752.3297140-5-steve.wahl%40hpe.com
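A minimal, self-contained illustration of the point: seeding a minimum search with INT_MAX keeps it correct no matter how large the candidate values grow. Names here are illustrative, not from the UV code:
  #include <limits.h>

  static int min_value(const int *v, int count)
  {
          int min = INT_MAX;      /* not a magic 99999 */
          int i;
          for (i = 0; i < count; i++)
                  if (v[i] < min)
                          min = v[i];
          return min;
  }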
2023-05-31  x86/platform/uv: Fix printed information in calc_mmioh_map  (Steve Wahl)
Fix incorrect mask names and values in calc_mmioh_map() that caused it to print wrong NASID information. And an unused blade position is not an error condition, but will yield an invalid NASID value, so change the invalid NASID message from an error to a debug message. Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230519190752.3297140-4-steve.wahl%40hpe.com
2023-05-31  x86/platform/uv: Introduce helper function uv_pnode_to_socket.  (Steve Wahl)
Add and use uv_pnode_to_socket() function, which parallels other helper functions in here, and will enable avoiding duplicate code in an upcoming patch. Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230519190752.3297140-3-steve.wahl%40hpe.com
2023-05-31  x86/platform/uv: Add platform resolving #defines for misc GAM_MMIOH_REDIRECT*  (Steve Wahl)
Upcoming changes will require use of new #defines UVH_RH_GAM_MMIOH_REDIRECT_CONFIG0_NASID_MASK and UVH_RH_GAM_MMIOH_REDIRECT_CONFIG1_NASID_MASK, which provide the appropriate values on different uv platforms. Also, fix typo that defined a couple of "*_CONFIG0_*" values twice where "*_CONFIG1_*" was intended. Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230519190752.3297140-2-steve.wahl%40hpe.com
2023-05-31  x86/smpboot: Fix the parallel bringup decision  (Thomas Gleixner)
The decision to allow parallel bringup of secondary CPUs checks CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use parallel bootup because accessing the local APIC is intercepted and raises a #VC or #VE, which cannot be handled at that point. The check works correctly, but only for AMD encrypted guests. TDX does not set that flag. As there is no real connection between CC attributes and the inability to support parallel bringup, replace this with a generic control flag in x86_cpuinit and let SEV-ES and TDX init code disable it. Fixes: 0c7ffa32dbd6 ("x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it") Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Link: https://lore.kernel.org/r/87ilc9gd2d.ffs@tglx
2023-05-31  x86/nospec: Shorten RESET_CALL_DEPTH  (Peter Zijlstra)
RESET_CALL_DEPTH is a pretty fat monster and blows up UNTRAIN_RET to 20 bytes: 19: 48 c7 c0 80 00 00 00 mov $0x80,%rax 20: 48 c1 e0 38 shl $0x38,%rax 24: 65 48 89 04 25 00 00 00 00 mov %rax,%gs:0x0 29: R_X86_64_32S pcpu_hot+0x10 Shrink it by 4 bytes: 0: 31 c0 xor %eax,%eax 2: 48 0f ba e8 3f bts $0x3f,%rax 7: 65 48 89 04 25 00 00 00 00 mov %rax,%gs:0x0 Shrink RESET_CALL_DEPTH_FROM_CALL by 5 bytes by only setting %al, the other bits are shifted out (the same could be done for RESET_CALL_DEPTH, but the XOR+BTS sequence has less dependencies due to the zeroing). Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230515093020.729622326@infradead.org
2023-05-31  x86/alternatives: Add longer 64-bit NOPs  (Peter Zijlstra)
By adding support for longer NOPs there are a few more alternatives that can turn into a single instruction. Add up to NOP11, the same limit where GNU as .nops also stops generating longer nops. This is because a number of uarchs have severe decode penalties for more than 3 prefixes. [ bp: Sync up with the version in tools/ while at it. ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230515093020.661756940@infradead.org
2023-05-30  x86/resctrl: Only show tasks' pid in current pid namespace  (Shawn Wang)
When writing a task id to the "tasks" file in an rdtgroup, rdtgroup_tasks_write() treats the pid as a number in the current pid namespace. But when reading the "tasks" file, rdtgroup_tasks_show() shows the list of global pids from the init namespace, which is confusing and incorrect. To be more robust, let the "tasks" file only show pids in the current pid namespace. Fixes: e02737d5b826 ("x86/intel_rdt: Add tasks files") Signed-off-by: Shawn Wang <shawnwang@linux.alibaba.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Reinette Chatre <reinette.chatre@intel.com> Acked-by: Fenghua Yu <fenghua.yu@intel.com> Tested-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/all/20230116071246.97717-1-shawnwang@linux.alibaba.com/
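The direction of the fix, sketched: translate the task's pid into the reader's pid namespace before printing it, instead of emitting the global number. Variable names are illustrative:
  pid_t pid = task_pid_vnr(tsk);  /* tsk's pid as seen from current's pid namespace */
  if (pid)                        /* 0 => task is not visible in this namespace */
          seq_printf(seq, "%d\n", pid);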
2023-05-30  x86/realmode: Make stack lock work in trampoline_compat()  (Thomas Gleixner)
The stack locking and stack assignment macro LOAD_REALMODE_ESP fails to work when invoked from the 64bit trampoline entry point:
  trampoline_start64
    trampoline_compat
      LOAD_REALMODE_ESP <- lock
Accessing tr_lock is only possible from 16bit mode. For the compat entry point this needs to be pa_tr_lock so that the required relocation entry is generated. Otherwise it locks the non-relocated address which, aside from being wrong, is never cleared in secondary_startup_64(), causing all but the first CPU to get stuck on the lock. Make the macro take an argument lock_pa which defaults to 0 and rename it to LOCK_AND_LOAD_REALMODE_ESP to make it clear what this is about. Fixes: f6f1ae9128d2 ("x86/smpboot: Implement a bit spinlock to protect the realmode stack") Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Link: https://lore.kernel.org/r/87h6rujdvl.ffs@tglx
2023-05-29  x86/smp: Initialize cpu_primary_thread_mask late  (Thomas Gleixner)
Marking primary threads in the cpumask during early boot is only correct in certain configurations, but broken e.g. for the legacy hyperthreading detection. This is due to the complete mess in the CPUID evaluation code which initializes smp_num_siblings only partially during early init and fixes it up later when identify_boot_cpu() is invoked. So using smp_num_siblings before identify_boot_cpu() leads to incorrect results. Fixing the early CPU init code to provide the proper data is a larger scale surgery as the code has dependencies on data structures which are not initialized during early boot. Move the initialization of cpu_primary_thread_mask, which depends on smp_num_siblings being correct, to an early initcall so that it is set up correctly before SMP bringup. Fixes: f54d4434c281 ("x86/apic: Provide cpu_primary_thread mask") Reported-by: "Kirill A. Shutemov" <kirill@shutemov.name> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Link: https://lore.kernel.org/r/87sfbhlwp9.ffs@tglx
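A hedged sketch of the deferral: do the marking from an early initcall, by which point identify_boot_cpu() has run and smp_num_siblings is trustworthy. The loop body is an assumption about how the mask could be populated, not the literal patch:
  static int __init smp_mark_primary_threads(void)
  {
          unsigned int cpu;
          for_each_present_cpu(cpu)
                  if (topology_is_primary_thread(cpu))
                          cpumask_set_cpu(cpu, &cpu_primary_thread_mask);
          return 0;
  }
  early_initcall(smp_mark_primary_threads);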
2023-05-29  x86/csum: Fix clang -Wuninitialized in csum_partial()  (Nathan Chancellor)
Clang warns: arch/x86/lib/csum-partial_64.c:74:20: error: variable 'result' is uninitialized when used here [-Werror,-Wuninitialized] return csum_tail(result, temp64, odd); ^~~~~~ arch/x86/lib/csum-partial_64.c:48:22: note: initialize the variable 'result' to silence this warning unsigned odd, result; ^ = 0 1 error generated. The only initialization and uses of result in csum_partial() were moved into csum_tail() but result is still being passed by value to csum_tail() (clang's -Wuninitialized does not do interprocedural analysis to realize that result is always assigned in csum_tail() however). Sink the declaration of result into csum_tail() to clear up the warning. Closes: https://lore.kernel.org/202305262039.3HUYjWJk-lkp@intel.com/ Fixes: 688eb8191b47 ("x86/csum: Improve performance of `csum_partial`") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230526-csum_partial-wuninitialized-v1-1-ebc0108dcec1%40kernel.org
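A minimal reproduction of the warning pattern and the shape of the fix, with illustrative names: the variable was declared in the caller but only ever written inside the helper it was passed to.
  /* before: clang warns at the call site -- 'result' is passed uninitialized */
  static unsigned int helper_old(unsigned int result, unsigned int x)
  {
          result = x * 2;
          return result;
  }
  static unsigned int caller_old(unsigned int x)
  {
          unsigned int result;            /* never written here */
          return helper_old(result, x);   /* -Wuninitialized fires here */
  }
  /* after: sink the declaration into the helper that actually computes it */
  static unsigned int helper_new(unsigned int x)
  {
          unsigned int result = x * 2;
          return result;
  }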
2023-05-29  Merge tag 'v6.4-p3' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 Pull crypto fix from Herbert Xu: "Fix an alignment crash in x86/aria" * tag 'v6.4-p3' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: crypto: x86/aria - Use 16 byte alignment for GFNI constant vectors
2023-05-28  Merge tag 'x86-urgent-2023-05-28' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 cpu fix from Thomas Gleixner: "A single fix for x86: - Prevent a bogus setting for the number of HT siblings, which is caused by the CPUID evaluation trainwreck of X86. That recomputes the value for each CPU, so the last CPU "wins". That can cause completely bogus sibling values" * tag 'x86-urgent-2023-05-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/topology: Fix erroneous smp_num_siblings on Intel Hybrid platforms
2023-05-28  Merge tag 'perf-urgent-2023-05-28' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull perf fixes from Thomas Gleixner: "A small set of perf fixes: - Make the MSR-readout based CHA discovery work around broken discovery tables in some SPR firmwares. - Prevent saving PEBS configuration which has software bits set that cause a crash when restored into the relevant MSR" * tag 'perf-urgent-2023-05-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf/x86/uncore: Correct the number of CHAs on SPR perf/x86/intel: Save/restore cpuc->active_pebs_data_cfg when using guest PEBS
2023-05-28  Merge tag 'objtool-urgent-2023-05-28' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull unwinder fixes from Thomas Gleixner: "A set of unwinder and tooling fixes: - Ensure that the stack pointer on x86 is aligned again so that the unwinder does not read past the end of the stack - Discard .note.gnu.property section which has a pointlessly different alignment than the other note sections. That confuses tooling of all sorts including readelf, libbpf and pahole" * tag 'objtool-urgent-2023-05-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/show_trace_log_lvl: Ensure stack pointer is aligned, again vmlinux.lds.h: Discard .note.gnu.property section
2023-05-27  Merge tag 'for-linus-6.4-rc4-tag' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip Pull xen fixes from Juergen Gross: - a double free fix in the Xen pvcalls backend driver - a fix for a regression causing the MSI related sysfs entries to not being created in Xen PV guests - a fix in the Xen blkfront driver for handling insane input data better * tag 'for-linus-6.4-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: x86/pci/xen: populate MSI sysfs entries xen/pvcalls-back: fix double frees with pvcalls_new_active_socket() xen/blkfront: Only check REQ_FUA for writes
2023-05-26  KVM: VMX: Use proper accessor to read guest CR4 in handle_desc()  (Sean Christopherson)
Use kvm_is_cr4_bit_set() to read guest CR4.UMIP when sanity checking that a descriptor table VM-Exit occurs if and only if guest.CR4.UMIP=1. UMIP can't be guest-owned, i.e. using kvm_read_cr4_bits() to decache guest- owned bits isn't strictly necessary, but eliminating raw reads of vcpu->arch.cr4 is desirable as it makes it easy to visually audit KVM for correctness. Opportunistically add a compile-time assertion that UMIP isn't guest-owned as letting the guest own UMIP isn't compatible with emulation (or any CR4 bit that is emulated by KVM). Opportunistically change the WARN_ON() to a ONCE variant. When the WARN fires, it fires _a lot_, and spamming the kernel logs ends up doing more harm than whatever led to KVM's unnecessary emulation. Reported-by: Robert Hoo <robert.hu@intel.com> Link: https://lore.kernel.org/all/20230310125718.1442088-4-robert.hu@intel.com Link: https://lore.kernel.org/r/20230413231914.1482782-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
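The accessor-based form of the sanity check, sketched from the commit description (surrounding context is assumed):
  bool umip = kvm_is_cr4_bit_set(vcpu, X86_CR4_UMIP);
  WARN_ON_ONCE(!umip);    /* descriptor-table exits should only occur with CR4.UMIP=1 */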
2023-05-26  KVM: VMX: Treat UMIP as emulated if and only if the host doesn't have UMIP  (Sean Christopherson)
Advertise UMIP as emulated if and only if the host doesn't natively support UMIP, otherwise vmx_umip_emulated() is misleading when the host _does_ support UMIP. Of the four users of vmx_umip_emulated(), two already check for native support, and the logic in vmx_set_cpu_caps() is relevant if and only if UMIP isn't natively supported as UMIP is set in KVM's caps by kvm_set_cpu_caps() when UMIP is present in hardware. That leaves KVM's stuffing of X86_CR4_UMIP into the default cr4_fixed1 value enumerated for nested VMX. In that case, checking for (lack of) host support is actually a bug fix of sorts, as enumerating UMIP support based solely on descriptor table exiting works only because KVM doesn't sanity check MSR_IA32_VMX_CR4_FIXED1. E.g. if a (very theoretical) host supported UMIP in hardware but didn't allow UMIP+VMX, KVM would advertise UMIP but not actually emulate UMIP. Of course, KVM would explode long before it could run a nested VM on said theoretical CPU, as KVM doesn't modify host CR4 when enabling VMX, i.e. would load an "illegal" value into vmcs.HOST_CR4. Reported-by: Robert Hoo <robert.hu@intel.com> Link: https://lore.kernel.org/all/20230310125718.1442088-2-robert.hu@intel.com Link: https://lore.kernel.org/r/20230413231914.1482782-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-05-26  KVM: VMX: Move the comment of CR4.MCE handling right above the code  (Xiaoyao Li)
Move the comment about keeping the host's CR4.MCE loaded in hardware above the code that actually modifies the hardware CR4 value. No functional change intended. Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com> Link: https://lore.kernel.org/r/20230410125017.1305238-3-xiaoyao.li@intel.com [sean: elaborate in changelog] Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-05-26  KVM: VMX: Use kvm_read_cr4() to get cr4 value  (Xiaoyao Li)
Using vcpu->arch.cr4 directly is not recommended since it may hold a stale value when CR4 has not been decached. Use kvm_read_cr4() instead to ensure the correct value is read. Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com> Link: https://lore.kernel.org/r/20230410125017.1305238-2-xiaoyao.li@intel.com Signed-off-by: Sean Christopherson <seanjc@google.com>
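The preferred read, per the commit (sketch):
  /* before: unsigned long cr4 = vcpu->arch.cr4;   -- may observe a stale value */
  /* after: the accessor ensures the value is current */
  unsigned long cr4 = kvm_read_cr4(vcpu);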
2023-05-26  x86: re-introduce support for ERMS copies for user space accesses  (Linus Torvalds)
I tried to streamline our user memory copy code fairly aggressively in commit adfcf4231b8c ("x86: don't use REP_GOOD or ERMS for user memory copies"), in order to then be able to clean up the code and inline the modern FSRM case in commit 577e6a7fd50d ("x86: inline the 'rep movs' in user copies for the FSRM case"). We had reports [1] of that causing regressions earlier with blogbench, but that turned out to be a horrible benchmark for that case, and not a sufficient reason for re-instating "rep movsb" on older machines. However, now Eric Dumazet reported [2] a regression in performance that seems to be a rather more real benchmark, where due to the removal of "rep movs" a TCP stream over a 100Gbps network no longer reaches line speed. And it turns out that with the simplified calling convention for the non-FSRM case in commit 427fda2c8a49 ("x86: improve on the non-rep 'copy_user' function"), re-introducing the ERMS case is actually fairly simple. Of course, that "fairly simple" is glossing over several missteps due to having to fight our assembler alternative code. This code really wanted to rewrite a conditional branch to have two different targets, but that made objtool sufficiently unhappy that this instead just ended up doing a choice between "jump to the unrolled loop, or use 'rep movsb' directly". Let's see if somebody finds a case where the kernel memory copies also care (see commit 68674f94ffc9: "x86: don't use REP_GOOD or ERMS for small memory copies"). But Eric does argue that the user copies are special because networking tries to copy up to 32KB at a time, if order-3 page allocations are possible. In-kernel memory copies are typically small, unless they are the special "copy pages at a time" kind that still use "rep movs". Link: https://lore.kernel.org/lkml/202305041446.71d46724-yujie.liu@intel.com/ [1] Link: https://lore.kernel.org/lkml/CANn89iKUbyrJ=r2+_kK+sb2ZSSHifFZ7QkPLDpAtkJ8v4WUumA@mail.gmail.com/ [2] Reported-and-tested-by: Eric Dumazet <edumazet@google.com> Fixes: adfcf4231b8c ("x86: don't use REP_GOOD or ERMS for user memory copies") Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-05-26  KVM: x86/mmu: Assert on @mmu in the __kvm_mmu_invalidate_addr()  (Like Xu)
Add assertion to track that "mmu == vcpu->arch.mmu" is always true in the context of __kvm_mmu_invalidate_addr(). for_each_shadow_entry_using_root() and kvm_sync_spte() operate on vcpu->arch.mmu, but the only reason that doesn't cause explosions is because handle_invept() frees roots instead of doing a manual invalidation. As of now, there are no major roadblocks to switching INVEPT emulation over to use kvm_mmu_invalidate_addr(). Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Like Xu <likexu@tencent.com> Link: https://lore.kernel.org/r/20230523032947.60041-1-likexu@tencent.com Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-05-26  KVM: x86/mmu: Add comment on try_cmpxchg64 usage in tdp_mmu_set_spte_atomic  (Uros Bizjak)
Commit aee98a6838d5 ("KVM: x86/mmu: Use try_cmpxchg64 in tdp_mmu_set_spte_atomic") removed the comment that iter->old_spte is updated when different logical CPU modifies the page table entry. Although this is what try_cmpxchg does implicitly, it won't hurt if this fact is explicitly mentioned in a restored comment. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Sean Christopherson <seanjc@google.com> Cc: David Matlack <dmatlack@google.com> Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Link: https://lore.kernel.org/r/20230425113932.3148-1-ubizjak@gmail.com [sean: extend comment above try_cmpxchg64()] Signed-off-by: Sean Christopherson <seanjc@google.com>
2023-05-26  Merge tag 'drm-misc-next-2023-05-24' of …  (Dave Airlie)
git://anongit.freedesktop.org/drm/drm-misc into drm-next drm-misc-next for v6.5: UAPI Changes: Cross-subsystem Changes: * fbdev: Move framebuffer I/O helpers to <asm/fb.h>, fix naming * firmware: Init sysfb as early as possible Core Changes: * DRM scheduler: Rename interfaces * ttm: Store ttm_device_funcs in .rodata * Replace strlcpy() with strscpy() in various places * Cleanups Driver Changes: * bridge: analogix: Fix endless probe loop; samsung-dsim: Support swapping clock/data polarity; tc358767: Use devm_ Cleanups; * gma500: Fix I/O-memory access * panel: boe-tv101wum-nl6: Improve initialization; sharp-ls043t1le001: Mode fixes; simple: Add BOE EV121WXM-N10-1850 plus DT bindings; AddS6D7AA0 plus DT bindings; Cleanups * ssd1307x: Style fixes * sun4i: Release clocks * msm: Fix I/O-memory access * nouveau: Cleanups * shmobile: Support Renesas; Enable framebuffer console; Various fixes * vkms: Fix RGB565 conversion Signed-off-by: Dave Airlie <airlied@redhat.com> # -----BEGIN PGP SIGNATURE----- # # iQEzBAABCAAdFiEEchf7rIzpz2NEoWjlaA3BHVMLeiMFAmRuBXEACgkQaA3BHVML # eiPLkwgAqCa7IuSDQhFMWVOI0EJpPPEHtHM8SCT1Pp8aniXk23Ru+E16c5zck53O # uf4tB+zoFrwD9npy60LIvX1OZmXS1KI4+ZO8itYFk6GSjxqbTWbjNFREBeWFdIpa # OG54nEqjFQZzEXY+gJYDpu5zqLy3xLN07ZgQkcMyfW3O/Krj4LLzfQTDl+jP5wkO # 7/v5Eu5CG5QjupMxIjb4e+ruUflp73pynur5bhZsfS1bPNGFTnxHlwg7NWnBXU7o # Hg23UYfCuZZWPmuO26EeUDlN33rCoaycmVgtpdZft2eznca5Mg74Loz1Qc3GQfjw # LLvKsAIlBcZvEIhElkzhtXitBoe7LQ== # =/9zV # -----END PGP SIGNATURE----- # gpg: Signature made Wed 24 May 2023 22:39:13 AEST # gpg: using RSA key 7217FBAC8CE9CF6344A168E5680DC11D530B7A23 # gpg: Can't check signature: No public key # Conflicts: # MAINTAINERS From: Thomas Zimmermann <tzimmermann@suse.de> Link: https://patchwork.freedesktop.org/patch/msgid/20230524124237.GA25416@linux-uq9g
2023-05-25  x86/csum: Improve performance of `csum_partial`  (Noah Goldstein)
1) Add a special case for len == 40 as that is the hottest value. This nets a ~8-9% latency improvement and a ~30% throughput improvement in the len == 40 case.
2) Use multiple accumulators in the 64-byte loop. This dramatically improves ILP and results in up to a 40% latency/throughput improvement (better for more iterations).
Results from benchmarking on Icelake. Times measured with rdtsc():
   len   lat_new   lat_old      r   tput_new  tput_old      r
     8      3.58      3.47  1.032       3.58      3.51  1.021
    16      4.14      4.02  1.028       3.96      3.78  1.046
    24      4.99      5.03  0.992       4.23      4.03  1.050
    32      5.09      5.08  1.001       4.68      4.47  1.048
    40      5.57      6.08  0.916       3.05      4.43  0.690
    48      6.65      6.63  1.003       4.97      4.69  1.059
    56      7.74      7.72  1.003       5.22      4.95  1.055
    64      6.65      7.22  0.921       6.38      6.42  0.994
    96      9.43      9.96  0.946       7.46      7.54  0.990
   128      9.39     12.15  0.773       8.90      8.79  1.012
   200     12.65     18.08  0.699      11.63     11.60  1.002
   272     15.82     23.37  0.677      14.43     14.35  1.005
   440     24.12     36.43  0.662      21.57     22.69  0.951
   952     46.20     74.01  0.624      42.98     53.12  0.809
  1024     47.12     78.24  0.602      46.36     58.83  0.788
  1552     72.01    117.30  0.614      71.92     96.78  0.743
  2048     93.07    153.25  0.607      93.28    137.20  0.680
  2600    114.73    194.30  0.590     114.28    179.32  0.637
  3608    156.34    268.41  0.582     154.97    254.02  0.610
  4096    175.01    304.03  0.576     175.89    292.08  0.602
There is no such thing as a free lunch, however, and the special case for len == 40 does add overhead to the len != 40 cases. This seems to amount to ~5% in throughput and slightly less in terms of latency.
Testing: Part of this change is a new kunit test. The tests check all alignment X length pairs in [0, 64) X [0, 512). There are three cases.
1) Precomputed random inputs/seed. The expected results were generated using the generic implementation (which is assumed to be non-buggy).
2) An input of all 1s. The goal of this test is to catch any case where a carry is missing.
3) An input that never carries. The goal of this test is to catch any case of incorrectly carrying.
More exhaustive tests that test all alignment X length pairs in [0, 8192) X [0, 8192] on random data are also available here: https://github.com/goldsteinn/csum-reproduction The repository also has the code for reproducing the above benchmark numbers.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230511011002.935690-1-goldstein.w.n%40gmail.com
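A sketch of the multiple-accumulator idea: independent partial sums break the single carry chain so the adds can execute in parallel. Plain portable C, without the kernel's assembly or the end-around-carry folding a real Internet checksum needs; purely illustrative:
  #include <stdint.h>
  #include <stddef.h>

  static uint64_t sum64_4acc(const uint64_t *p, size_t nwords)
  {
          uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
          size_t i;
          for (i = 0; i + 4 <= nwords; i += 4) {
                  s0 += p[i + 0];   /* four independent dependency chains */
                  s1 += p[i + 1];
                  s2 += p[i + 2];
                  s3 += p[i + 3];
          }
          for (; i < nwords; i++)
                  s0 += p[i];
          return s0 + s1 + s2 + s3;   /* carries ignored in this sketch */
  }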
2023-05-25  x86/topology: Fix erroneous smp_num_siblings on Intel Hybrid platforms  (Zhang Rui)
Traditionally, all CPUs in a system have identical numbers of SMT siblings. That changes with hybrid processors where some logical CPUs have a sibling and others have none. Today, the CPU boot code sets the global variable smp_num_siblings when every CPU thread is brought up. The last thread to boot will overwrite it with the number of siblings of *that* thread. That last thread to boot will "win". If the thread is a Pcore, smp_num_siblings == 2. If it is an Ecore, smp_num_siblings == 1. smp_num_siblings describes if the *system* supports SMT. It should specify the maximum number of SMT threads among all cores. Ensure that smp_num_siblings represents the system-wide maximum number of siblings by always increasing its value. Never allow it to decrease. On MeteorLake-P platform, this fixes a problem that the Ecore CPUs are not updated in any cpu sibling map because the system is treated as an UP system when probing Ecore CPUs. Below shows part of the CPU topology information before and after the fix, for both Pcore and Ecore CPU (cpu0 is Pcore, cpu 12 is Ecore). ... -/sys/devices/system/cpu/cpu0/topology/package_cpus:000fff -/sys/devices/system/cpu/cpu0/topology/package_cpus_list:0-11 +/sys/devices/system/cpu/cpu0/topology/package_cpus:3fffff +/sys/devices/system/cpu/cpu0/topology/package_cpus_list:0-21 ... -/sys/devices/system/cpu/cpu12/topology/package_cpus:001000 -/sys/devices/system/cpu/cpu12/topology/package_cpus_list:12 +/sys/devices/system/cpu/cpu12/topology/package_cpus:3fffff +/sys/devices/system/cpu/cpu12/topology/package_cpus_list:0-21 Notice that the "before" 'package_cpus_list' has only one CPU. This means that userspace tools like lscpu will see a little laptop like an 11-socket system: -Core(s) per socket: 1 -Socket(s): 11 +Core(s) per socket: 16 +Socket(s): 1 This is also expected to make the scheduler do rather wonky things too. [ dhansen: remove CPUID detail from changelog, add end user effects ] CC: stable@kernel.org Fixes: bbb65d2d365e ("x86: use cpuid vector 0xb when available for detecting cpu topology") Fixes: 95f3d39ccf7a ("x86/cpu/topology: Provide detect_extended_topology_early()") Suggested-by: Len Brown <len.brown@intel.com> Signed-off-by: Zhang Rui <rui.zhang@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20230323015640.27906-1-rui.zhang%40intel.com
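The gist of the fix in one line: smp_num_siblings may only ever grow toward the system-wide maximum. The right-hand operand name is illustrative:
  smp_num_siblings = max_t(unsigned int, smp_num_siblings, siblings_of_this_cpu);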
2023-05-24  perf/x86/uncore: Correct the number of CHAs on SPR  (Kan Liang)
The number of CHAs from the discovery table on some SPR variants is incorrect, because of a firmware issue. An accurate number can be read from the MSR UNC_CBO_CONFIG. Fixes: 949b11381f81 ("perf/x86/intel/uncore: Add Sapphire Rapids server CHA support") Reported-by: Stephane Eranian <eranian@google.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Stephane Eranian <eranian@google.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230508140206.283708-1-kan.liang@linux.intel.com
2023-05-24  x86/pci/xen: populate MSI sysfs entries  (Maximilian Heyne)
Commit bf5e758f02fc ("genirq/msi: Simplify sysfs handling") reworked the creation of sysfs entries for MSI IRQs. The creation used to be in msi_domain_alloc_irqs_descs_locked after calling ops->domain_alloc_irqs. Then it moved into __msi_domain_alloc_irqs which is an implementation of domain_alloc_irqs. However, Xen comes with the only other implementation of domain_alloc_irqs and hence doesn't run the sysfs population code anymore. Commit 6c796996ee70 ("x86/pci/xen: Fixup fallout from the PCI/MSI overhaul") set the flag MSI_FLAG_DEV_SYSFS for the xen msi_domain_info but that doesn't actually have an effect because Xen uses its own domain_alloc_irqs implementation. Fix this by making use of the fallback functions for sysfs population. Fixes: bf5e758f02fc ("genirq/msi: Simplify sysfs handling") Signed-off-by: Maximilian Heyne <mheyne@amazon.de> Reviewed-by: Juergen Gross <jgross@suse.com> Link: https://lore.kernel.org/r/20230503131656.15928-1-mheyne@amazon.de Signed-off-by: Juergen Gross <jgross@suse.com>
2023-05-24  crypto: x86/aria - Use 16 byte alignment for GFNI constant vectors  (Ard Biesheuvel)
The GFNI routines in the AVX version of the ARIA implementation now use explicit VMOVDQA instructions to load the constant input vectors, which means they must be 16 byte aligned. So ensure that this is the case, by dropping the section split and the incorrect .align 8 directive, and emitting the constants into the 16-byte aligned section instead. Note that the AVX2 version of this code deviates from this pattern, and does not require a similar fix, given that it loads these constants as 8-byte memory operands, for which AVX2 permits any alignment. Cc: Taehee Yoo <ap420073@gmail.com> Fixes: 8b84475318641c2b ("crypto: x86/aria-avx - Do not use avx2 instructions") Reported-by: syzbot+a6abcf08bad8b18fd198@syzkaller.appspotmail.com Tested-by: syzbot+a6abcf08bad8b18fd198@syzkaller.appspotmail.com Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-05-23  x86/tdx: Wrap exit reason with hcall_func()  (Nikolay Borisov)
TDX reuses VMEXIT "reasons" in its guest->host hypercall ABI. This is confusing because there might not be a VMEXIT involved at *all*. These instances are supposed to document the situation and reduce confusion by wrapping VMEXIT reasons with hcall_func(). The decompression code does not follow this convention. Unify the TDX decompression code with the other TDX use of VMEXIT reasons. No functional changes. Signed-off-by: Nikolay Borisov <nik.borisov@suse.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Link: https://lore.kernel.org/all/20230505120332.1429957-1-nik.borisov%40suse.com
2023-05-23  perf/x86/intel: Save/restore cpuc->active_pebs_data_cfg when using guest PEBS  (Like Xu)
After commit b752ea0c28e3 ("perf/x86/intel/ds: Flush PEBS DS when changing PEBS_DATA_CFG"), the cpuc->pebs_data_cfg may save some bits that are not supported by real hardware, such as PEBS_UPDATE_DS_SW. This would cause the VMX hardware MSR switching mechanism to save/restore invalid values for PEBS_DATA_CFG MSR, thus crashing the host when PEBS is used for guest. Fix it by using the active host value from cpuc->active_pebs_data_cfg. Fixes: b752ea0c28e3 ("perf/x86/intel/ds: Flush PEBS DS when changing PEBS_DATA_CFG") Signed-off-by: Like Xu <likexu@tencent.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Link: https://lore.kernel.org/r/20230517133808.67885-1-likexu@tencent.com
2023-05-22  Merge tag 'x86_urgent_for_6.4-rc4' of …  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fix from Dave Hansen: "This works around an issue where the INVLPG instruction may miss invalidating kernel TLB entries in recent hybrid CPUs. I do expect an eventual microcode fix for this. When the microcode version numbers are known, we can circle back around and add them to the model table to disable this workaround" * tag 'x86_urgent_for_6.4-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/mm: Avoid incomplete Global INVLPG flushes
2023-05-22  x86/apic: Fix use of X{,2}APIC_ENABLE in asm with older binutils  (Andrew Cooper)
"x86/smpboot: Support parallel startup of secondary CPUs" adds the first use of X2APIC_ENABLE in assembly, but older binutils don't tolerate the UL suffix. Switch to using BIT() instead. Fixes: 7e75178a0950 ("x86/smpboot: Support parallel startup of secondary CPUs") Reported-by: Jeffrey Hugo <quic_jhugo@quicinc.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Jeffrey Hugo <quic_jhugo@quicinc.com> Link: https://lore.kernel.org/r/20230522105738.2378364-1-andrew.cooper3@citrix.com
2023-05-21  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini: "ARM: - Plug a race in the stage-2 mapping code where the IPA and the PA would end up being out of sync - Make better use of the bitmap API (bitmap_zero, bitmap_zalloc...) - FP/SVE/SME documentation update, in the hope that this field becomes clearer... - Add workaround for Apple SEIS brokenness to a new SoC - Random comment fixes x86: - add MSR_IA32_TSX_CTRL into msrs_to_save - fixes for XCR0 handling in SGX enclaves Generic: - Fix vcpu_array[0] races - Fix race between starting a VM and 'reboot -f'" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: VMX: add MSR_IA32_TSX_CTRL into msrs_to_save KVM: x86: Don't adjust guest's CPUID.0x12.1 (allowed SGX enclave XFRM) KVM: VMX: Don't rely _only_ on CPUID to enforce XCR0 restrictions for ECREATE KVM: Fix vcpu_array[0] races KVM: VMX: Fix header file dependency of asm/vmx.h KVM: Don't enable hardware after a restart/shutdown is initiated KVM: Use syscore_ops instead of reboot_notifier to hook restart/shutdown KVM: arm64: vgic: Add Apple M2 PRO/MAX cpus to the list of broken SEIS implementations KVM: arm64: Clarify host SME state management KVM: arm64: Restructure check for SVE support in FP trap handler KVM: arm64: Document check for TIF_FOREIGN_FPSTATE KVM: arm64: Fix repeated words in comments KVM: arm64: Constify start/end/phys fields of the pgtable walker data KVM: arm64: Infer PA offset from VA in hyp map walker KVM: arm64: Infer the PA offset from IPA in stage-2 map walker KVM: arm64: Use the bitmap API to allocate bitmaps KVM: arm64: Slightly optimize flush_context()
2023-05-21  KVM: VMX: add MSR_IA32_TSX_CTRL into msrs_to_save  (Mingwei Zhang)
Add MSR_IA32_TSX_CTRL into msrs_to_save[] to explicitly tell userspace to save/restore the register value during migration. Missing this may cause userspace that relies on the KVM ioctl(KVM_GET_MSR_INDEX_LIST) to fail to port the value to the target VM. In addition, there is no need to add MSR_IA32_TSX_CTRL when ARCH_CAP_TSX_CTRL_MSR is not supported in kvm_get_arch_capabilities(). So add the checking in kvm_probe_msr_to_save(). Fixes: c11f83e0626b ("KVM: vmx: implement MSR_IA32_TSX_CTRL disable RTM functionality") Reported-by: Jim Mattson <jmattson@google.com> Signed-off-by: Mingwei Zhang <mizhang@google.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Reviewed-by: Jim Mattson <jmattson@google.com> Message-Id: <20230509032348.1153070-1-mizhang@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>