author    Linus Torvalds <torvalds@linux-foundation.org>  2024-07-20 12:41:03 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>  2024-07-20 12:41:03 -0700
commit    2c9b3512402ed192d1f43f4531fb5da947e72bd0 (patch)
tree      d63534a1e9cf5b12a1362a348e2237c9c592a493 /arch/x86/kvm/mtrr.c
parent    c43a20e4a520b37c2ef6d4f422de989992c9129f (diff)
parent    332d2c1d713e232e163386c35a3ba0c1b90df83f (diff)
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm updates from Paolo Bonzini:
"ARM:
- Initial infrastructure for shadow stage-2 MMUs, as part of nested
virtualization enablement
- Support for userspace changes to the guest CTR_EL0 value, enabling
(in part) migration of VMs between heterogeneous hardware (see the
sketch after the ARM items)
- Fixes + improvements to pKVM's FF-A proxy, adding support for v1.1
of the protocol
- FPSIMD/SVE support for nested, including merged trap configuration
and exception routing
- New command-line parameter to control the WFx trap behavior under
KVM
- Introduce kCFI hardening in the EL2 hypervisor
- Fixes + cleanups for handling presence/absence of FEAT_TCRX
- Miscellaneous fixes + documentation updates
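The CTR_EL0 change above is driven through the existing
KVM_{GET,SET}_ONE_REG interface. A minimal userspace sketch for an
arm64 build, assuming an already-created vCPU fd; the helper name is
ours, and which CTR_EL0 fields the host actually accepts is policy
defined by this series, so treat the write as illustrative rather than
guaranteed to succeed:

  #include <linux/kvm.h>      /* pulls in <asm/kvm.h> on arm64 */
  #include <stdint.h>
  #include <sys/ioctl.h>

  /* KVM_{GET,SET}_ONE_REG ID for CTR_EL0 (sysreg s3_3_c0_c0_1). */
  #define KVM_REG_CTR_EL0                                               \
      (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM64_SYSREG |        \
       (3UL << KVM_REG_ARM64_SYSREG_OP0_SHIFT) |                        \
       (3UL << KVM_REG_ARM64_SYSREG_OP1_SHIFT) |                        \
       (0UL << KVM_REG_ARM64_SYSREG_CRN_SHIFT) |                        \
       (0UL << KVM_REG_ARM64_SYSREG_CRM_SHIFT) |                        \
       (1UL << KVM_REG_ARM64_SYSREG_OP2_SHIFT))

  /* Ask KVM to present 'val' as the guest's CTR_EL0. */
  static int set_guest_ctr_el0(int vcpu_fd, uint64_t val)
  {
      struct kvm_one_reg reg = {
          .id   = KVM_REG_CTR_EL0,
          .addr = (uint64_t)(uintptr_t)&val,
      };

      return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
  }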
LoongArch:
- Add paravirt steal time support
- Add support for KVM_DIRTY_LOG_INITIALLY_SET
- Add perf kvm-stat support for loongarch
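On the userspace side, KVM_DIRTY_LOG_INITIALLY_SET is opted into
through the generic KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 capability;
nothing is LoongArch-specific in the enabling path. A small sketch
(helper name ours, vm_fd from KVM_CREATE_VM):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /*
   * Start with all dirty-bitmap bits set, so userspace clears (and
   * write-protects) memory incrementally via KVM_CLEAR_DIRTY_LOG
   * instead of paying for wholesale write-protection up front.
   */
  static int enable_initially_set(int vm_fd)
  {
      struct kvm_enable_cap cap = {
          .cap     = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
          .args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
                     KVM_DIRTY_LOG_INITIALLY_SET,
      };

      return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }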
RISC-V:
- Redirect AMO load/store access fault traps to guest
- perf kvm stat support
- Use guest files for IMSIC virtualization, when available
s390:
- Assortment of tiny fixes which are not time critical
x86:
- Fixes for Xen emulation
- Add a global struct to consolidate tracking of host values, e.g.
EFER
- Add KVM_CAP_X86_APIC_BUS_CYCLES_NS to allow configuring the
effective APIC bus frequency, because TDX (see the sketch after the
x86 items)
- Print the name of the APICv/AVIC inhibits in the relevant
tracepoint
- Clean up KVM's handling of vendor specific emulation to
consistently act on "compatible with Intel/AMD", versus checking
for a specific vendor
- Drop MTRR virtualization, and instead always honor guest PAT on
CPUs that support self-snoop
- Update to the newfangled Intel CPU FMS infrastructure
- Don't advertise IA32_PERF_GLOBAL_OVF_CTRL as an MSR-to-be-saved, as
it reads '0' and writes from userspace are ignored
- Misc cleanups
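The APIC bus capability above is enabled per VM with KVM_ENABLE_CAP,
before any vCPU is created. In the sketch below the helper name is
ours, and the reading of args[0] as the bus clock period in
nanoseconds (40 ns for the 25 MHz bus TDX expects) is our
interpretation of the series; check Documentation/virt/kvm/api.rst
before relying on it:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Switch the emulated APIC bus away from KVM's historical 1 GHz. */
  static int set_apic_bus_period_ns(int vm_fd, unsigned long long ns)
  {
      struct kvm_enable_cap cap = {
          .cap     = KVM_CAP_X86_APIC_BUS_CYCLES_NS,
          .args[0] = ns,              /* e.g. 40 => 25 MHz */
      };

      return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }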
x86 - MMU:
- Small cleanups, renames and refactoring extracted from the upcoming
Intel TDX support
- Don't allocate kvm_mmu_page.shadowed_translation for shadow pages
that can't hold leaf SPTEs
- Unconditionally drop mmu_lock when allocating TDP MMU page tables
for eager page splitting, to avoid stalling vCPUs when splitting
huge pages
- Bug the VM instead of simply warning if KVM tries to split a SPTE
that is non-present or not-huge. KVM is guaranteed to end up in a
broken state because the callers fully expect a valid SPTE, and it's
downright dangerous to let more MMU changes happen afterwards
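The "bug the VM" behavior above is KVM's KVM_BUG_ON(), which marks the
VM as dead so subsequent ioctls fail, instead of WARNing and pressing
on with an MMU that is already inconsistent. A schematic kernel-side
sketch, not the actual upstream hunk; the function name and the exact
checks are illustrative:

  /* arch/x86/kvm/mmu context */
  #include <linux/kvm_host.h>
  #include "spte.h"

  static int split_huge_spte(struct kvm *kvm, u64 huge_spte)
  {
          /*
           * Callers must pass a present huge SPTE.  If that contract is
           * broken, terminate the VM (KVM_BUG_ON marks the VM as bugged
           * and fails future ioctls) rather than warning and letting
           * further MMU changes build on a broken state.
           */
          if (KVM_BUG_ON(!is_shadow_present_pte(huge_spte) ||
                         !is_large_pte(huge_spte), kvm))
                  return -EIO;

          /* ... build and install the child SPTEs ... */
          return 0;
  }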
x86 - AMD:
- Make per-CPU save_area allocations NUMA-aware
- Force sev_es_host_save_area() to be inlined to avoid calling into
an instrumentable function from noinstr code
- Base support for running SEV-SNP guests. API-wise, this includes a
new KVM_X86_SNP_VM type, encrypting/measuring the initial image into
guest memory, and finalizing it before launching it. Internally,
there are some gmem/mmu hooks needed to prepare gmem-allocated
pages before mapping them into guest private memory ranges
This includes basic support for attestation guest requests, enough
to say that KVM supports the GHCB 2.0 specification
There is no support yet for loading the signing keys used for
attestation requests into the firmware, and therefore no need yet
for the host to provide certificate data for those keys.
To support fetching certificate data from userspace, a new KVM exit
type will be needed to handle fetching the certificate from
userspace.
An attempt to define a new KVM_EXIT_COCO / KVM_EXIT_COCO_REQ_CERTS
exit type to handle this was introduced in v1 of this patchset, but
is still being discussed by the community, so for now this patchset
only implements a stub version of SNP Extended Guest Requests that
does not provide certificate data
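Putting the pieces above together from userspace: create the VM with
the new KVM_X86_SNP_VM type, bind it to SEV via KVM_MEMORY_ENCRYPT_OP,
then run the SNP launch sequence. The sketch below is a rough outline;
the KVM_SEV_INIT2/struct kvm_sev_init usage follows the common SEV
path, the helper name is ours, and the launch steps are summarized in
comments only, so consult
Documentation/virt/kvm/x86/amd-memory-encryption.rst for the
authoritative flow:

  #include <fcntl.h>
  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static int create_snp_vm(int kvm_fd)
  {
      struct kvm_sev_init init_data = {};   /* zeroed defaults */
      struct kvm_sev_cmd cmd = {};
      int vm_fd, sev_fd;

      /* New machine type added by this series. */
      vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_X86_SNP_VM);
      if (vm_fd < 0)
          return vm_fd;

      /* /dev/sev brokers commands to the PSP firmware. */
      sev_fd = open("/dev/sev", O_RDWR);
      if (sev_fd < 0)
          return sev_fd;

      cmd.id = KVM_SEV_INIT2;
      cmd.sev_fd = sev_fd;
      cmd.data = (unsigned long)&init_data;
      if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0)
          return -1;

      /*
       * Not shown: KVM_SEV_SNP_LAUNCH_START (guest policy),
       * KVM_SEV_SNP_LAUNCH_UPDATE for each chunk of the initial,
       * guest_memfd-backed image, and KVM_SEV_SNP_LAUNCH_FINISH to
       * freeze the launch measurement before the first KVM_RUN.
       */
      return vm_fd;
  }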
x86 - Intel:
- Remove an unnecessary EPT TLB flush when enabling hardware
- Fix a series of bugs that cause KVM to fail to detect nested
pending posted interrupts as valid wake events for a vCPU executing
HLT in L2 (with HLT-exiting disabled by L1)
- KVM: x86: Suppress MMIO that is triggered during task switch
emulation
Explicitly suppress userspace emulated MMIO exits that are
triggered when emulating a task switch, as KVM doesn't support
userspace MMIO during complex (multi-step) emulation
Silently ignoring the exit request can result in the
WARN_ON_ONCE(vcpu->mmio_needed) firing if KVM exits to userspace
for some other reason prior to purging mmio_needed
See commit 0dc902267cb3 ("KVM: x86: Suppress pending MMIO write
exits if emulator detects exception") for more details on KVM's
limitations with respect to emulated MMIO during complex emulator
flows
Generic:
- Rename the AS_UNMOVABLE flag that was introduced for KVM to
AS_INACCESSIBLE, because the special casing needed by these pages
is not due to just unmovability (and in fact they are only
unmovable because the CPU cannot access them)
- New ioctl, KVM_PRE_FAULT_MEMORY, to populate the KVM page tables in
advance, which is useful to mitigate KVM page faults during guest
boot or after live migration. The code will also be used by TDX, but
(probably) not through the ioctl (a userspace sketch follows this
list)
- Enable halt poll shrinking by default, as Intel found it to be a
clear win
- Setup empty IRQ routing when creating a VM to avoid having to
synchronize SRCU when creating a split IRQCHIP on x86
- Rework the sched_in/out() paths to replace kvm_arch_sched_in() with
a flag that arch code can use for hooking both sched_in() and
sched_out()
- Take the vCPU @id as an "unsigned long" instead of "u32" to avoid
truncating a bogus value from userspace, e.g. to help userspace
detect bugs
- Mark a vCPU as preempted if and only if it's scheduled out while in
the KVM_RUN loop, e.g. to avoid marking it preempted and thus
writing guest memory when retrieving guest state during live
migration blackout
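The new ioctl above is KVM_PRE_FAULT_MEMORY (see the shortlog below);
it is issued on a vCPU fd with a guest-physical range to map. A hedged
userspace sketch, with the helper name and the retry policy as our
assumptions:

  #include <errno.h>
  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /*
   * Populate KVM's stage-2/EPT mappings for [gpa, gpa + size) up front,
   * so the guest doesn't take KVM page faults on first touch after boot
   * or a live migration.  KVM updates 'gpa'/'size' as it makes progress
   * and may return -EINTR if interrupted, so simply retry.
   */
  static int prefault_range(int vcpu_fd, __u64 gpa, __u64 size)
  {
      struct kvm_pre_fault_memory range = {
          .gpa  = gpa,
          .size = size,
      };
      int ret;

      do {
          ret = ioctl(vcpu_fd, KVM_PRE_FAULT_MEMORY, &range);
      } while (ret < 0 && (errno == EINTR || errno == EAGAIN));

      return ret < 0 ? -errno : 0;
  }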
Selftests:
- Remove dead code in the memslot modification stress test
- Treat "branch instructions retired" as supported on all AMD Family
17h+ CPUs
- Print the guest pseudo-RNG seed only when it changes, to avoid
spamming the log for tests that create lots of VMs
- Make the PMU counters test less flaky when counting LLC cache
misses by doing CLFLUSH{OPT} in every loop iteration"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (227 commits)
crypto: ccp: Add the SNP_VLEK_LOAD command
KVM: x86/pmu: Add kvm_pmu_call() to simplify static calls of kvm_pmu_ops
KVM: x86: Introduce kvm_x86_call() to simplify static calls of kvm_x86_ops
KVM: x86: Replace static_call_cond() with static_call()
KVM: SEV: Provide support for SNP_EXTENDED_GUEST_REQUEST NAE event
x86/sev: Move sev_guest.h into common SEV header
KVM: SEV: Provide support for SNP_GUEST_REQUEST NAE event
KVM: x86: Suppress MMIO that is triggered during task switch emulation
KVM: x86/mmu: Clean up make_huge_page_split_spte() definition and intro
KVM: x86/mmu: Bug the VM if KVM tries to split a !hugepage SPTE
KVM: selftests: x86: Add test for KVM_PRE_FAULT_MEMORY
KVM: x86: Implement kvm_arch_vcpu_pre_fault_memory()
KVM: x86/mmu: Make kvm_mmu_do_page_fault() return mapped level
KVM: x86/mmu: Account pf_{fixed,emulate,spurious} in callers of "do page fault"
KVM: x86/mmu: Bump pf_taken stat only in the "real" page fault handler
KVM: Add KVM_PRE_FAULT_MEMORY vcpu ioctl to pre-populate guest memory
KVM: Document KVM_PRE_FAULT_MEMORY ioctl
mm, virt: merge AS_UNMOVABLE and AS_INACCESSIBLE
perf kvm: Add kvm-stat for loongarch64
LoongArch: KVM: Add PV steal time support in guest side
...
Diffstat (limited to 'arch/x86/kvm/mtrr.c')
-rw-r--r--  arch/x86/kvm/mtrr.c | 644
1 file changed, 28 insertions(+), 616 deletions(-)
diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c index a67c28a56417e..05490b9d8a434 100644 --- a/arch/x86/kvm/mtrr.c +++ b/arch/x86/kvm/mtrr.c @@ -19,33 +19,21 @@ #include <asm/mtrr.h> #include "cpuid.h" -#include "mmu.h" -#define IA32_MTRR_DEF_TYPE_E (1ULL << 11) -#define IA32_MTRR_DEF_TYPE_FE (1ULL << 10) -#define IA32_MTRR_DEF_TYPE_TYPE_MASK (0xff) - -static bool is_mtrr_base_msr(unsigned int msr) -{ - /* MTRR base MSRs use even numbers, masks use odd numbers. */ - return !(msr & 0x1); -} - -static struct kvm_mtrr_range *var_mtrr_msr_to_range(struct kvm_vcpu *vcpu, - unsigned int msr) +static u64 *find_mtrr(struct kvm_vcpu *vcpu, unsigned int msr) { - int index = (msr - MTRRphysBase_MSR(0)) / 2; - - return &vcpu->arch.mtrr_state.var_ranges[index]; -} + int index; -static bool msr_mtrr_valid(unsigned msr) -{ switch (msr) { case MTRRphysBase_MSR(0) ... MTRRphysMask_MSR(KVM_NR_VAR_MTRR - 1): + index = msr - MTRRphysBase_MSR(0); + return &vcpu->arch.mtrr_state.var[index]; case MSR_MTRRfix64K_00000: + return &vcpu->arch.mtrr_state.fixed_64k; case MSR_MTRRfix16K_80000: case MSR_MTRRfix16K_A0000: + index = msr - MSR_MTRRfix16K_80000; + return &vcpu->arch.mtrr_state.fixed_16k[index]; case MSR_MTRRfix4K_C0000: case MSR_MTRRfix4K_C8000: case MSR_MTRRfix4K_D0000: @@ -54,10 +42,14 @@ static bool msr_mtrr_valid(unsigned msr) case MSR_MTRRfix4K_E8000: case MSR_MTRRfix4K_F0000: case MSR_MTRRfix4K_F8000: + index = msr - MSR_MTRRfix4K_C0000; + return &vcpu->arch.mtrr_state.fixed_4k[index]; case MSR_MTRRdefType: - return true; + return &vcpu->arch.mtrr_state.deftype; + default: + break; } - return false; + return NULL; } static bool valid_mtrr_type(unsigned t) @@ -70,9 +62,6 @@ static bool kvm_mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data) int i; u64 mask; - if (!msr_mtrr_valid(msr)) - return false; - if (msr == MSR_MTRRdefType) { if (data & ~0xcff) return false; @@ -85,8 +74,9 @@ static bool kvm_mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data) } /* variable MTRRs */ - WARN_ON(!(msr >= MTRRphysBase_MSR(0) && - msr <= MTRRphysMask_MSR(KVM_NR_VAR_MTRR - 1))); + if (WARN_ON_ONCE(!(msr >= MTRRphysBase_MSR(0) && + msr <= MTRRphysMask_MSR(KVM_NR_VAR_MTRR - 1)))) + return false; mask = kvm_vcpu_reserved_gpa_bits_raw(vcpu); if ((msr & 1) == 0) { @@ -94,309 +84,32 @@ static bool kvm_mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data) if (!valid_mtrr_type(data & 0xff)) return false; mask |= 0xf00; - } else + } else { /* MTRR mask */ mask |= 0x7ff; - - return (data & mask) == 0; -} - -static bool mtrr_is_enabled(struct kvm_mtrr *mtrr_state) -{ - return !!(mtrr_state->deftype & IA32_MTRR_DEF_TYPE_E); -} - -static bool fixed_mtrr_is_enabled(struct kvm_mtrr *mtrr_state) -{ - return !!(mtrr_state->deftype & IA32_MTRR_DEF_TYPE_FE); -} - -static u8 mtrr_default_type(struct kvm_mtrr *mtrr_state) -{ - return mtrr_state->deftype & IA32_MTRR_DEF_TYPE_TYPE_MASK; -} - -static u8 mtrr_disabled_type(struct kvm_vcpu *vcpu) -{ - /* - * Intel SDM 11.11.2.2: all MTRRs are disabled when - * IA32_MTRR_DEF_TYPE.E bit is cleared, and the UC - * memory type is applied to all of physical memory. - * - * However, virtual machines can be run with CPUID such that - * there are no MTRRs. In that case, the firmware will never - * enable MTRRs and it is obviously undesirable to run the - * guest entirely with UC memory and we use WB. 
- */ - if (guest_cpuid_has(vcpu, X86_FEATURE_MTRR)) - return MTRR_TYPE_UNCACHABLE; - else - return MTRR_TYPE_WRBACK; -} - -/* -* Three terms are used in the following code: -* - segment, it indicates the address segments covered by fixed MTRRs. -* - unit, it corresponds to the MSR entry in the segment. -* - range, a range is covered in one memory cache type. -*/ -struct fixed_mtrr_segment { - u64 start; - u64 end; - - int range_shift; - - /* the start position in kvm_mtrr.fixed_ranges[]. */ - int range_start; -}; - -static struct fixed_mtrr_segment fixed_seg_table[] = { - /* MSR_MTRRfix64K_00000, 1 unit. 64K fixed mtrr. */ - { - .start = 0x0, - .end = 0x80000, - .range_shift = 16, /* 64K */ - .range_start = 0, - }, - - /* - * MSR_MTRRfix16K_80000 ... MSR_MTRRfix16K_A0000, 2 units, - * 16K fixed mtrr. - */ - { - .start = 0x80000, - .end = 0xc0000, - .range_shift = 14, /* 16K */ - .range_start = 8, - }, - - /* - * MSR_MTRRfix4K_C0000 ... MSR_MTRRfix4K_F8000, 8 units, - * 4K fixed mtrr. - */ - { - .start = 0xc0000, - .end = 0x100000, - .range_shift = 12, /* 12K */ - .range_start = 24, - } -}; - -/* - * The size of unit is covered in one MSR, one MSR entry contains - * 8 ranges so that unit size is always 8 * 2^range_shift. - */ -static u64 fixed_mtrr_seg_unit_size(int seg) -{ - return 8 << fixed_seg_table[seg].range_shift; -} - -static bool fixed_msr_to_seg_unit(u32 msr, int *seg, int *unit) -{ - switch (msr) { - case MSR_MTRRfix64K_00000: - *seg = 0; - *unit = 0; - break; - case MSR_MTRRfix16K_80000 ... MSR_MTRRfix16K_A0000: - *seg = 1; - *unit = array_index_nospec( - msr - MSR_MTRRfix16K_80000, - MSR_MTRRfix16K_A0000 - MSR_MTRRfix16K_80000 + 1); - break; - case MSR_MTRRfix4K_C0000 ... MSR_MTRRfix4K_F8000: - *seg = 2; - *unit = array_index_nospec( - msr - MSR_MTRRfix4K_C0000, - MSR_MTRRfix4K_F8000 - MSR_MTRRfix4K_C0000 + 1); - break; - default: - return false; } - return true; -} - -static void fixed_mtrr_seg_unit_range(int seg, int unit, u64 *start, u64 *end) -{ - struct fixed_mtrr_segment *mtrr_seg = &fixed_seg_table[seg]; - u64 unit_size = fixed_mtrr_seg_unit_size(seg); - - *start = mtrr_seg->start + unit * unit_size; - *end = *start + unit_size; - WARN_ON(*end > mtrr_seg->end); -} - -static int fixed_mtrr_seg_unit_range_index(int seg, int unit) -{ - struct fixed_mtrr_segment *mtrr_seg = &fixed_seg_table[seg]; - - WARN_ON(mtrr_seg->start + unit * fixed_mtrr_seg_unit_size(seg) - > mtrr_seg->end); - - /* each unit has 8 ranges. 
*/ - return mtrr_seg->range_start + 8 * unit; -} - -static int fixed_mtrr_seg_end_range_index(int seg) -{ - struct fixed_mtrr_segment *mtrr_seg = &fixed_seg_table[seg]; - int n; - - n = (mtrr_seg->end - mtrr_seg->start) >> mtrr_seg->range_shift; - return mtrr_seg->range_start + n - 1; -} - -static bool fixed_msr_to_range(u32 msr, u64 *start, u64 *end) -{ - int seg, unit; - - if (!fixed_msr_to_seg_unit(msr, &seg, &unit)) - return false; - - fixed_mtrr_seg_unit_range(seg, unit, start, end); - return true; -} - -static int fixed_msr_to_range_index(u32 msr) -{ - int seg, unit; - - if (!fixed_msr_to_seg_unit(msr, &seg, &unit)) - return -1; - - return fixed_mtrr_seg_unit_range_index(seg, unit); -} - -static int fixed_mtrr_addr_to_seg(u64 addr) -{ - struct fixed_mtrr_segment *mtrr_seg; - int seg, seg_num = ARRAY_SIZE(fixed_seg_table); - - for (seg = 0; seg < seg_num; seg++) { - mtrr_seg = &fixed_seg_table[seg]; - if (mtrr_seg->start <= addr && addr < mtrr_seg->end) - return seg; - } - - return -1; -} - -static int fixed_mtrr_addr_seg_to_range_index(u64 addr, int seg) -{ - struct fixed_mtrr_segment *mtrr_seg; - int index; - - mtrr_seg = &fixed_seg_table[seg]; - index = mtrr_seg->range_start; - index += (addr - mtrr_seg->start) >> mtrr_seg->range_shift; - return index; -} - -static u64 fixed_mtrr_range_end_addr(int seg, int index) -{ - struct fixed_mtrr_segment *mtrr_seg = &fixed_seg_table[seg]; - int pos = index - mtrr_seg->range_start; - - return mtrr_seg->start + ((pos + 1) << mtrr_seg->range_shift); -} - -static void var_mtrr_range(struct kvm_mtrr_range *range, u64 *start, u64 *end) -{ - u64 mask; - - *start = range->base & PAGE_MASK; - - mask = range->mask & PAGE_MASK; - - /* This cannot overflow because writing to the reserved bits of - * variable MTRRs causes a #GP. - */ - *end = (*start | ~mask) + 1; -} - -static void update_mtrr(struct kvm_vcpu *vcpu, u32 msr) -{ - struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state; - gfn_t start, end; - - if (!kvm_mmu_honors_guest_mtrrs(vcpu->kvm)) - return; - - if (!mtrr_is_enabled(mtrr_state) && msr != MSR_MTRRdefType) - return; - - /* fixed MTRRs. */ - if (fixed_msr_to_range(msr, &start, &end)) { - if (!fixed_mtrr_is_enabled(mtrr_state)) - return; - } else if (msr == MSR_MTRRdefType) { - start = 0x0; - end = ~0ULL; - } else { - /* variable range MTRRs. */ - var_mtrr_range(var_mtrr_msr_to_range(vcpu, msr), &start, &end); - } - - kvm_zap_gfn_range(vcpu->kvm, gpa_to_gfn(start), gpa_to_gfn(end)); -} - -static bool var_mtrr_range_is_valid(struct kvm_mtrr_range *range) -{ - return (range->mask & (1 << 11)) != 0; -} - -static void set_var_mtrr_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data) -{ - struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state; - struct kvm_mtrr_range *tmp, *cur; - - cur = var_mtrr_msr_to_range(vcpu, msr); - - /* remove the entry if it's in the list. */ - if (var_mtrr_range_is_valid(cur)) - list_del(&cur->node); - - /* - * Set all illegal GPA bits in the mask, since those bits must - * implicitly be 0. The bits are then cleared when reading them. - */ - if (is_mtrr_base_msr(msr)) - cur->base = data; - else - cur->mask = data | kvm_vcpu_reserved_gpa_bits_raw(vcpu); - - /* add it to the list if it's enabled. 
*/ - if (var_mtrr_range_is_valid(cur)) { - list_for_each_entry(tmp, &mtrr_state->head, node) - if (cur->base >= tmp->base) - break; - list_add_tail(&cur->node, &tmp->node); - } + return (data & mask) == 0; } int kvm_mtrr_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data) { - int index; + u64 *mtrr; - if (!kvm_mtrr_valid(vcpu, msr, data)) + mtrr = find_mtrr(vcpu, msr); + if (!mtrr) return 1; - index = fixed_msr_to_range_index(msr); - if (index >= 0) - *(u64 *)&vcpu->arch.mtrr_state.fixed_ranges[index] = data; - else if (msr == MSR_MTRRdefType) - vcpu->arch.mtrr_state.deftype = data; - else - set_var_mtrr_msr(vcpu, msr, data); + if (!kvm_mtrr_valid(vcpu, msr, data)) + return 1; - update_mtrr(vcpu, msr); + *mtrr = data; return 0; } int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata) { - int index; + u64 *mtrr; /* MSR_MTRRcap is a readonly MSR. */ if (msr == MSR_MTRRcap) { @@ -410,311 +123,10 @@ int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata) return 0; } - if (!msr_mtrr_valid(msr)) + mtrr = find_mtrr(vcpu, msr); + if (!mtrr) return 1; - index = fixed_msr_to_range_index(msr); - if (index >= 0) { - *pdata = *(u64 *)&vcpu->arch.mtrr_state.fixed_ranges[index]; - } else if (msr == MSR_MTRRdefType) { - *pdata = vcpu->arch.mtrr_state.deftype; - } else { - /* Variable MTRRs */ - if (is_mtrr_base_msr(msr)) - *pdata = var_mtrr_msr_to_range(vcpu, msr)->base; - else - *pdata = var_mtrr_msr_to_range(vcpu, msr)->mask; - - *pdata &= ~kvm_vcpu_reserved_gpa_bits_raw(vcpu); - } - + *pdata = *mtrr; return 0; } - -void kvm_vcpu_mtrr_init(struct kvm_vcpu *vcpu) -{ - INIT_LIST_HEAD(&vcpu->arch.mtrr_state.head); -} - -struct mtrr_iter { - /* input fields. */ - struct kvm_mtrr *mtrr_state; - u64 start; - u64 end; - - /* output fields. */ - int mem_type; - /* mtrr is completely disabled? */ - bool mtrr_disabled; - /* [start, end) is not fully covered in MTRRs? */ - bool partial_map; - - /* private fields. */ - union { - /* used for fixed MTRRs. */ - struct { - int index; - int seg; - }; - - /* used for var MTRRs. */ - struct { - struct kvm_mtrr_range *range; - /* max address has been covered in var MTRRs. */ - u64 start_max; - }; - }; - - bool fixed; -}; - -static bool mtrr_lookup_fixed_start(struct mtrr_iter *iter) -{ - int seg, index; - - if (!fixed_mtrr_is_enabled(iter->mtrr_state)) - return false; - - seg = fixed_mtrr_addr_to_seg(iter->start); - if (seg < 0) - return false; - - iter->fixed = true; - index = fixed_mtrr_addr_seg_to_range_index(iter->start, seg); - iter->index = index; - iter->seg = seg; - return true; -} - -static bool match_var_range(struct mtrr_iter *iter, - struct kvm_mtrr_range *range) -{ - u64 start, end; - - var_mtrr_range(range, &start, &end); - if (!(start >= iter->end || end <= iter->start)) { - iter->range = range; - - /* - * the function is called when we do kvm_mtrr.head walking. - * Range has the minimum base address which interleaves - * [looker->start_max, looker->end). - */ - iter->partial_map |= iter->start_max < start; - - /* update the max address has been covered. 
*/ - iter->start_max = max(iter->start_max, end); - return true; - } - - return false; -} - -static void __mtrr_lookup_var_next(struct mtrr_iter *iter) -{ - struct kvm_mtrr *mtrr_state = iter->mtrr_state; - - list_for_each_entry_continue(iter->range, &mtrr_state->head, node) - if (match_var_range(iter, iter->range)) - return; - - iter->range = NULL; - iter->partial_map |= iter->start_max < iter->end; -} - -static void mtrr_lookup_var_start(struct mtrr_iter *iter) -{ - struct kvm_mtrr *mtrr_state = iter->mtrr_state; - - iter->fixed = false; - iter->start_max = iter->start; - iter->range = NULL; - iter->range = list_prepare_entry(iter->range, &mtrr_state->head, node); - - __mtrr_lookup_var_next(iter); -} - -static void mtrr_lookup_fixed_next(struct mtrr_iter *iter) -{ - /* terminate the lookup. */ - if (fixed_mtrr_range_end_addr(iter->seg, iter->index) >= iter->end) { - iter->fixed = false; - iter->range = NULL; - return; - } - - iter->index++; - - /* have looked up for all fixed MTRRs. */ - if (iter->index >= ARRAY_SIZE(iter->mtrr_state->fixed_ranges)) - return mtrr_lookup_var_start(iter); - - /* switch to next segment. */ - if (iter->index > fixed_mtrr_seg_end_range_index(iter->seg)) - iter->seg++; -} - -static void mtrr_lookup_var_next(struct mtrr_iter *iter) -{ - __mtrr_lookup_var_next(iter); -} - -static void mtrr_lookup_start(struct mtrr_iter *iter) -{ - if (!mtrr_is_enabled(iter->mtrr_state)) { - iter->mtrr_disabled = true; - return; - } - - if (!mtrr_lookup_fixed_start(iter)) - mtrr_lookup_var_start(iter); -} - -static void mtrr_lookup_init(struct mtrr_iter *iter, - struct kvm_mtrr *mtrr_state, u64 start, u64 end) -{ - iter->mtrr_state = mtrr_state; - iter->start = start; - iter->end = end; - iter->mtrr_disabled = false; - iter->partial_map = false; - iter->fixed = false; - iter->range = NULL; - - mtrr_lookup_start(iter); -} - -static bool mtrr_lookup_okay(struct mtrr_iter *iter) -{ - if (iter->fixed) { - iter->mem_type = iter->mtrr_state->fixed_ranges[iter->index]; - return true; - } - - if (iter->range) { - iter->mem_type = iter->range->base & 0xff; - return true; - } - - return false; -} - -static void mtrr_lookup_next(struct mtrr_iter *iter) -{ - if (iter->fixed) - mtrr_lookup_fixed_next(iter); - else - mtrr_lookup_var_next(iter); -} - -#define mtrr_for_each_mem_type(_iter_, _mtrr_, _gpa_start_, _gpa_end_) \ - for (mtrr_lookup_init(_iter_, _mtrr_, _gpa_start_, _gpa_end_); \ - mtrr_lookup_okay(_iter_); mtrr_lookup_next(_iter_)) - -u8 kvm_mtrr_get_guest_memory_type(struct kvm_vcpu *vcpu, gfn_t gfn) -{ - struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state; - struct mtrr_iter iter; - u64 start, end; - int type = -1; - const int wt_wb_mask = (1 << MTRR_TYPE_WRBACK) - | (1 << MTRR_TYPE_WRTHROUGH); - - start = gfn_to_gpa(gfn); - end = start + PAGE_SIZE; - - mtrr_for_each_mem_type(&iter, mtrr_state, start, end) { - int curr_type = iter.mem_type; - - /* - * Please refer to Intel SDM Volume 3: 11.11.4.1 MTRR - * Precedences. - */ - - if (type == -1) { - type = curr_type; - continue; - } - - /* - * If two or more variable memory ranges match and the - * memory types are identical, then that memory type is - * used. - */ - if (type == curr_type) - continue; - - /* - * If two or more variable memory ranges match and one of - * the memory types is UC, the UC memory type used. - */ - if (curr_type == MTRR_TYPE_UNCACHABLE) - return MTRR_TYPE_UNCACHABLE; - - /* - * If two or more variable memory ranges match and the - * memory types are WT and WB, the WT memory type is used. 
- */ - if (((1 << type) & wt_wb_mask) && - ((1 << curr_type) & wt_wb_mask)) { - type = MTRR_TYPE_WRTHROUGH; - continue; - } - - /* - * For overlaps not defined by the above rules, processor - * behavior is undefined. - */ - - /* We use WB for this undefined behavior. :( */ - return MTRR_TYPE_WRBACK; - } - - if (iter.mtrr_disabled) - return mtrr_disabled_type(vcpu); - - /* not contained in any MTRRs. */ - if (type == -1) - return mtrr_default_type(mtrr_state); - - /* - * We just check one page, partially covered by MTRRs is - * impossible. - */ - WARN_ON(iter.partial_map); - - return type; -} -EXPORT_SYMBOL_GPL(kvm_mtrr_get_guest_memory_type); - -bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn, - int page_num) -{ - struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state; - struct mtrr_iter iter; - u64 start, end; - int type = -1; - - start = gfn_to_gpa(gfn); - end = gfn_to_gpa(gfn + page_num); - mtrr_for_each_mem_type(&iter, mtrr_state, start, end) { - if (type == -1) { - type = iter.mem_type; - continue; - } - - if (type != iter.mem_type) - return false; - } - - if (iter.mtrr_disabled) - return true; - - if (!iter.partial_map) - return true; - - if (type == -1) - return true; - - return type == mtrr_default_type(mtrr_state); -} |