author     Jon Kohler <jon@nutanix.com>  2022-03-23 20:44:39 -0400
committer  Paolo Bonzini <pbonzini@redhat.com>  2022-04-02 05:41:24 -0400
commit     945024d764a18b1484428f0055439072ea58e349 (patch)
tree       1f56b9e346e2cc20dfa04dc38193680cb30e923c
parent     f44509f849fe55bbd81bd3f099b9aecd19002243 (diff)
KVM: x86: optimize PKU branching in kvm_load_{guest|host}_xsave_state
kvm_load_{guest|host}_xsave_state handles xsave on vm entry and exit,
part of which is managing memory protection key state. The latest
arch.pkru is updated with a rdpkru, and if that doesn't match the base
host_pkru (which is the case about 70% of the time), we issue a
__write_pkru.

To improve performance, implement the following optimizations:
 1. Reorder if conditions prior to wrpkru in both
    kvm_load_{guest|host}_xsave_state.

    Flip the ordering of the || condition so that XFEATURE_MASK_PKRU is
    checked first, which when instrumented in our environment appeared
    to be always true and less overall work than kvm_read_cr4_bits.

    For kvm_load_guest_xsave_state, hoist arch.pkru != host_pkru ahead
    one position. When instrumented, I saw this be true roughly ~70% of
    the time vs the other conditions, which were almost always true.
    With this change, we will avoid the 3rd condition check ~30% of the
    time.

 2. Wrap PKU sections with CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS, as
    if the user compiles out this feature, we should not have these
    branches at all.

Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-Id: <20220324004439.6709-1-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
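A sketch of what the reordered PKU sections look like after these
changes, modeled on kvm_load_{guest|host}_xsave_state in
arch/x86/kvm/x86.c. The helper names (static_cpu_has, kvm_read_cr4_bits,
rdpkru, write_pkru — the latter is the in-tree spelling of the
__write_pkru the message refers to) and the vcpu->arch fields are taken
from the kernel source; the snippet illustrates the described ordering
rather than reproducing the verbatim diff:

    #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
    	/*
    	 * Guest side: the pkru mismatch (true ~70% of the time) is
    	 * hoisted ahead of the almost-always-true feature checks, so
    	 * ~30% of the time we short-circuit before evaluating the ||
    	 * at all; within the ||, the cheap xcr0 mask test runs before
    	 * the more expensive kvm_read_cr4_bits().
    	 */
    	if (static_cpu_has(X86_FEATURE_PKU) &&
    	    vcpu->arch.pkru != vcpu->arch.host_pkru &&
    	    ((vcpu->arch.xcr0 & XFEATURE_MASK_PKRU) ||
    	     kvm_read_cr4_bits(vcpu, X86_CR4_PKE)))
    		write_pkru(vcpu->arch.pkru);
    #endif

    #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
    	/*
    	 * Host side: same flipped || ordering; refresh arch.pkru via
    	 * rdpkru() and only restore host_pkru on an actual mismatch.
    	 */
    	if (static_cpu_has(X86_FEATURE_PKU) &&
    	    ((vcpu->arch.xcr0 & XFEATURE_MASK_PKRU) ||
    	     kvm_read_cr4_bits(vcpu, X86_CR4_PKE))) {
    		vcpu->arch.pkru = rdpkru();
    		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
    			write_pkru(vcpu->arch.host_pkru);
    	}
    #endif

Wrapping both blocks in CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS (point 2
above) means kernels built without protection keys carry none of these
branches on the entry/exit path.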