author     Isaku Yamahata <isaku.yamahata@intel.com>	2025-02-22 09:47:56 +0800
committer  Paolo Bonzini <pbonzini@redhat.com>	2025-03-14 14:20:56 -0400
commit     f30cb6429f750ad7225214072d8d13b9e3a851dc (patch)
tree       4e8056f39f03d937d290e217449b9d645a2432ab
parent     7e548b0d90a7576f0158e0ea2d819c026b279150 (diff)
KVM: TDX: Handle EXCEPTION_NMI and EXTERNAL_INTERRUPT
Handle EXCEPTION_NMI and EXTERNAL_INTERRUPT exits for TDX.
NMI Handling: Just like the VMX case, NMIs remain blocked after exiting
from a TDX guest for NMI-induced exits [*]. Handle NMI-induced exits for
TDX guests in the same way as they are handled for VMX guests, i.e.,
handle the NMI in tdx_vcpu_enter_exit() by calling the vmx_handle_nmi()
helper.
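For reference, a rough sketch of what the shared vmx_handle_nmi() helper
does; this is an assumption based on the existing VMX NMI-exit path, not
code added by this patch. The idea is: if the exit was NMI-induced, call
into the host NMI handler while NMIs are still blocked, bracketed by
KVM's interrupt accounting.

	/* Sketch only; the real helper lives in vmx.c and may differ in detail. */
	noinstr void vmx_handle_nmi(struct kvm_vcpu *vcpu)
	{
		/* Nothing to do unless this exit was caused by an NMI. */
		if (vmx_get_exit_reason(vcpu).basic != EXIT_REASON_EXCEPTION_NMI ||
		    !is_nmi(vmx_get_intr_info(vcpu)))
			return;

		/* Account the NMI and vector to the host NMI handler. */
		kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
		vmx_do_nmi_irqoff();
		kvm_after_interrupt(vcpu);
	}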
Interrupt and Exception Handling: Similar to the VMX case, external
interrupts and exceptions (machine check is the only exception type
KVM handles for TDX guests) are handled in the .handle_exit_irqoff()
callback.
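As a rough sketch of what such a callback could look like (the TDX-side
function name and the helper calls below are illustrative assumptions,
not taken from this patch):

	/*
	 * Illustrative only: dispatch on the basic exit reason while IRQs are
	 * still off, forwarding #MC exceptions and external interrupts to the
	 * shared VMX irqoff handlers.  All names here are assumed.
	 */
	void tdx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
	{
		u32 intr_info = vmx_get_intr_info(vcpu);

		if (vmx_get_exit_reason(vcpu).basic == EXIT_REASON_EXCEPTION_NMI)
			handle_exception_irqoff(vcpu, intr_info);	/* #MC only for TDX */
		else if (vmx_get_exit_reason(vcpu).basic == EXIT_REASON_EXTERNAL_INTERRUPT)
			handle_external_interrupt_irqoff(vcpu, intr_info);
	}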
For other exceptions, because TDX guest state is protected, exceptions
in TDX guests can't be intercepted, and the TDX VMM isn't supposed to
handle them. If an unexpected exception occurs, exit to userspace with
KVM_EXIT_EXCEPTION.
For external interrupts, increment the irq_exits statistic, same as in
the VMX case.
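For context, a minimal userspace-side sketch of how a VMM would observe
such an exit after KVM_RUN returns (the helper name and error handling
here are illustrative):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>
	#include <stdio.h>

	/* Sketch: run the vCPU once and report an unexpected guest exception. */
	static int run_vcpu_once(int vcpu_fd, struct kvm_run *run)
	{
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
			return -1;

		if (run->exit_reason == KVM_EXIT_EXCEPTION) {
			/* Vector comes from intr_info & INTR_INFO_VECTOR_MASK; error_code is 0. */
			fprintf(stderr, "unexpected guest exception: vector %u\n",
				run->ex.exception);
			return -1;
		}
		return 0;
	}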
[*]: Some old TDX modules have a bug which makes NMI unblocked after
exiting from TDX guest for NMI-induced exits. This could potentially
lead to nested NMIs: a new NMI arrives when KVM is manually calling the
host NMI handler. This is an architectural violation, but it doesn't
have real harm until FRED is enabled together with TDX (for non-FRED,
the host NMI handler can handle nested NMIs). Given this is rare and
does no real harm, ignore it for the initial TDX support.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20250222014757.897978-16-binbin.wu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 arch/x86/kvm/vmx/tdx.c | 26 ++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c |  4 +---
 2 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index a748957755932..11f8f1077e15c 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -882,6 +882,8 @@ static noinstr void tdx_vcpu_enter_exit(struct kvm_vcpu *vcpu)
 	tdx->exit_gpa = tdx->vp_enter_args.r8;
 	vt->exit_intr_info = tdx->vp_enter_args.r9;
 
+	vmx_handle_nmi(vcpu);
+
 	guest_state_exit_irqoff();
 }
 
@@ -1028,6 +1030,25 @@ void tdx_inject_nmi(struct kvm_vcpu *vcpu)
 	vcpu->arch.nmi_pending = 0;
 }
 
+static int tdx_handle_exception_nmi(struct kvm_vcpu *vcpu)
+{
+	u32 intr_info = vmx_get_intr_info(vcpu);
+
+	/*
+	 * Machine checks are handled by handle_exception_irqoff(), or by
+	 * tdx_handle_exit() with TDX_NON_RECOVERABLE set if a #MC occurs on
+	 * VM-Entry.  NMIs are handled by tdx_vcpu_enter_exit().
+	 */
+	if (is_nmi(intr_info) || is_machine_check(intr_info))
+		return 1;
+
+	vcpu->run->exit_reason = KVM_EXIT_EXCEPTION;
+	vcpu->run->ex.exception = intr_info & INTR_INFO_VECTOR_MASK;
+	vcpu->run->ex.error_code = 0;
+
+	return 0;
+}
+
 static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
 {
 	tdvmcall_set_return_code(vcpu, vcpu->run->hypercall.ret);
@@ -1724,6 +1745,11 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
 		vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
 		vcpu->mmio_needed = 0;
 		return 0;
+	case EXIT_REASON_EXCEPTION_NMI:
+		return tdx_handle_exception_nmi(vcpu);
+	case EXIT_REASON_EXTERNAL_INTERRUPT:
+		++vcpu->stat.irq_exits;
+		return 1;
 	case EXIT_REASON_TDCALL:
 		return handle_tdvmcall(vcpu);
 	case EXIT_REASON_VMCALL:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 600e6766024fe..71476a33f4f21 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6961,9 +6961,7 @@ static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
 
 void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-
-	if (vmx->vt.emulation_required)
+	if (to_vt(vcpu)->emulation_required)
 		return;
 
 	if (vmx_get_exit_reason(vcpu).basic == EXIT_REASON_EXTERNAL_INTERRUPT)