2021-04-13Merge branch 'kvm-arm64/debug-5.13' into kvmarm-master/nextMarc Zyngier
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-13KVM: arm/arm64: Fix KVM_VGIC_V3_ADDR_TYPE_REDIST readEric Auger
When reading the base address of a REDIST region through KVM_VGIC_V3_ADDR_TYPE_REDIST we expect the redistributor region list to be populated with a single element. However, list_first_entry() expects the list to be non-empty. Instead we should use list_first_entry_or_null(), which returns NULL if the list is empty. Fixes: dbd9733ab674 ("KVM: arm/arm64: Replace the single rdist region by a list") Cc: <Stable@vger.kernel.org> # v4.18+ Signed-off-by: Eric Auger <eric.auger@redhat.com> Reported-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210412150034.29185-1-eric.auger@redhat.com
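As an illustration of the pattern behind the fix above (a minimal sketch; the helper name and error handling are hypothetical, while the struct and field names follow the vgic code):

static int get_first_rdist_base(struct vgic_dist *vgic, gpa_t *addr)
{
	struct vgic_redist_region *rdreg;

	/*
	 * list_first_entry() assumes a non-empty list; on an empty
	 * rd_regions list it would dereference garbage. The _or_null
	 * variant makes the empty case explicit.
	 */
	rdreg = list_first_entry_or_null(&vgic->rd_regions,
					 struct vgic_redist_region, list);
	if (!rdreg)
		return -ENODEV;

	*addr = rdreg->base;
	return 0;
}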
2021-04-12x86/sgx: Mark sgx_vepc_vm_ops staticWei Yongjun
Fix the following sparse warning: arch/x86/kernel/cpu/sgx/virt.c:95:35: warning: symbol 'sgx_vepc_vm_ops' was not declared. Should it be static? This symbol is not used outside of virt.c so mark it static. [ bp: Massage commit message. ] Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210412160023.193850-1-weiyongjun1@huawei.com
2021-04-12arm64: fpsimd: run kernel mode NEON with softirqs disabledArd Biesheuvel
Kernel mode NEON can be used in task or softirq context, but only in a non-nesting manner, i.e., softirq context is only permitted if the interrupt was not taken at a point where the kernel was using the NEON in task context. This means all users of kernel mode NEON have to be aware of this limitation, and either need to provide scalar fallbacks that may be much slower (up to 20x for AES instructions) and potentially less safe, or use an asynchronous interface that defers processing to a later time when the NEON is guaranteed to be available. Given that grabbing and releasing the NEON is cheap, we can relax this restriction, by increasing the granularity of kernel mode NEON code, and always disabling softirq processing while the NEON is being used in task context. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-4-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
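For context, a typical kernel-mode NEON user follows the pattern below (an illustrative caller, not code from this patch; the function name is hypothetical). With this change, the begin/end section in task context also runs with softirqs disabled:

static void xor_blocks_neon(u8 *dst, const u8 *src, int len)
{
	if (may_use_simd()) {
		kernel_neon_begin();	/* now also disables softirqs in task context */
		/* ... NEON-accelerated implementation ... */
		kernel_neon_end();
	} else {
		/* scalar fallback for contexts where the NEON is unusable */
	}
}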
2021-04-12arm64: assembler: introduce wxN aliases for wN registersArd Biesheuvel
The AArch64 asm syntax has this slightly tedious property that the names used in mnemonics to refer to registers depend on whether the opcode in question targets the entire 64-bits (xN), or only the least significant 8, 16 or 32 bits (wN). When writing parameterized code such as macros, this can be annoying, as macro arguments don't lend themselves to indexed lookups, and so generating a reference to wN in a macro that receives xN as an argument is problematic. For instance, an upcoming patch that modifies the implementation of the cond_yield macro to be able to refer to 32-bit registers would need to modify invocations such as cond_yield 3f, x8 to cond_yield 3f, 8 so that the second argument can be token pasted after x or w to emit the correct register reference. Unfortunately, this interferes with the self documenting nature of the first example, where the second argument is obviously a register, whereas in the second example, one would need to go and look at the code to find out what '8' means. So let's fix this by defining wxN aliases for all xN registers, which resolve to the 32-bit alias of each respective 64-bit register. This allows the macro implementation to paste the xN reference after a w to obtain the correct register name. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-12arm64: assembler: remove conditional NEON yield macrosArd Biesheuvel
The users of the conditional NEON yield macros have all been switched to the simplified cond_yield macro, and so the NEON specific ones can be removed. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11KVM: arm64: Don't advertise FEAT_SPE to guestsAlexandru Elisei
Even though KVM sets up MDCR_EL2 to trap accesses to the SPE buffer and sampling control registers and to inject an undefined exception, the presence of FEAT_SPE is still advertised in the ID_AA64DFR0_EL1 register, if the hardware supports it. Getting an undefined exception when accessing a register usually happens for a hardware feature which is not implemented, and indeed this is how PMU emulation is handled when the virtual machine has been created without the KVM_ARM_VCPU_PMU_V3 feature. Let's be consistent and never advertise FEAT_SPE, because KVM doesn't have support for emulating it yet. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210409152154.198566-3-alexandru.elisei@arm.com
2021-04-11KVM: arm64: Don't print warning when trapping SPE registersAlexandru Elisei
KVM sets up MDCR_EL2 to trap accesses to the SPE buffer and sampling control registers and it relies on the fact that KVM injects an undefined exception for unknown registers. This mechanism of injecting undefined exceptions also prints a warning message in the host kernel log; for example, when a guest tries to access PMSIDR_EL1:
[    2.691830] kvm [142]: Unsupported guest sys_reg access at: 80009e78 [800003c5]
[    2.691830] { Op0( 3), Op1( 0), CRn( 9), CRm( 9), Op2( 7), func_read },
This is unnecessary, because KVM has explicitly configured trapping of those registers and is well aware of their existence. Prevent the warning by adding the SPE registers to the list of registers that KVM emulates. The access function will inject the undefined exception. Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210409152154.198566-2-alexandru.elisei@arm.com
2021-04-09KVM: arm64: Fully zero the vcpu state on resetMarc Zyngier
On vcpu reset, we expect all the registers to be brought back to their initial state, which happens to be a bunch of zeroes. However, some recent commit broke this, and is now leaving a bunch of registers (such as the FP state) with whatever was left by the guest. My bad. Zero the rest of the state (32bit SPSRs and FPSIMD state). Cc: stable@vger.kernel.org Fixes: e47c2055c68e ("KVM: arm64: Make struct kvm_regs userspace-only") Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-09KVM: arm64: Clarify vcpu reset behaviourMarc Zyngier
Although the KVM_ARM_VCPU_INIT documentation mentions that the registers are reset to their "initial values", it doesn't describe what these values are. Describe this state explicitly. Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-08arm64: Get rid of CONFIG_ARM64_VHEMarc Zyngier
CONFIG_ARM64_VHE was introduced with ARMv8.1 (some 7 years ago), and has been enabled by default for almost all that time. Given that newer systems that are VHE capable are finally becoming available, and that some systems are even incapable of not running VHE, drop the configuration altogether. Anyone willing to stick to non-VHE on VHE hardware for obscure reasons should use the 'kvm-arm.mode=nvhe' command-line option. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-4-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08arm64: Cope with CPUs stuck in VHE modeMarc Zyngier
It seems that the CPUs part of the SoC known as Apple M1 have the terrible habit of being stuck with HCR_EL2.E2H==1, in violation of the architecture. Try and work around this deplorable state of affairs by detecting the stuck bit early and short-circuit the nVHE dance. Additional filtering code ensures that attempts at switching to nVHE from the command-line are also ignored. It is still unknown whether there are many more such nuggets to be found... Reported-by: Hector Martin <marcan@marcan.st> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-3-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08arm64: cpufeature: Allow early filtering of feature overrideMarc Zyngier
Some CPUs are broken enough that some overrides need to be rejected at the earliest opportunity. In some cases, that's right at cpu feature override time. Provide the necessary infrastructure to filter out overrides, and to report such filtered out overrides to the core cpufeature code. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-2-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08x86/sgx: Do not update sgx_nr_free_pages in sgx_setup_epc_section()Jarkko Sakkinen
The commit in Fixes: changed the SGX EPC page sanitization to end up in sgx_free_epc_page(), which puts clean and sanitized pages on the free list. This was done because it is best to keep the logic that assigns available-for-use EPC pages to the correct NUMA lists in a single location. sgx_nr_free_pages is also incremented in sgx_setup_epc_section() for the pages added per EPC section, but those pages do not belong on the free list yet because they haven't been sanitized: they land on the dirty list first, and the sanitization happens later when ksgxd starts massaging them. So remove that addition there and have sgx_free_epc_page() do the accounting solely. [ bp: Sanitize commit message too. ] Fixes: 51ab30eb2ad4 ("x86/sgx: Replace section->init_laundry_list with sgx_dirty_page_list") Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210408092924.7032-1-jarkko@kernel.org
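The accounting change can be pictured with a simplified sketch (struct layout, list names and locking here are illustrative, not the exact kernel code); the counter moves only where a sanitized page actually joins the free list:

static LIST_HEAD(sgx_dirty_page_list);
static unsigned long sgx_nr_free_pages;

static void sgx_free_epc_page_sketch(struct sgx_epc_section *section,
				     struct sgx_epc_page *page)
{
	spin_lock(&section->lock);
	list_add_tail(&page->list, &section->free_page_list);
	sgx_nr_free_pages++;		/* counted only once the page is truly free */
	spin_unlock(&section->lock);
}

static void sgx_setup_epc_section_sketch(struct sgx_epc_page *page)
{
	/*
	 * Freshly discovered pages go to the dirty list and are NOT
	 * counted here; ksgxd sanitizes them later and frees them via
	 * sgx_free_epc_page(), which does the accounting.
	 */
	list_add_tail(&page->list, &sgx_dirty_page_list);
}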
2021-04-08Merge remote-tracking branch 'coresight/next-ETE-TRBE' into kvmarm-master/nextMarc Zyngier
Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-08KVM: arm64: Fix table format for PTP documentationMarc Zyngier
The documentation build legitimately screams about the PTP documentation table being misformatted. Fix it by adjusting the table width guides. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-07KVM: arm64: Mark the kvmarm ML as moderated for non-subscribersMarc Zyngier
The kvmarm mailing list is moderated for non-subscribers, but that was never advertised. Fix this with the hope that people will eventually subscribe before posting, saving me the hassle of having to let their posts through by hand. Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-04-07KVM: arm64: Initialize VCPU mdcr_el2 before loading itAlexandru Elisei
When a VCPU is created, the kvm_vcpu struct is initialized to zero in kvm_vm_ioctl_create_vcpu(). On VHE systems, the first time vcpu.arch.mdcr_el2 is loaded on hardware is in vcpu_load(), before it is set to a sensible value in kvm_arm_setup_debug() later in the run loop. The result is that KVM executes for a short time with MDCR_EL2 set to zero. This has several unintended consequences:
* Setting MDCR_EL2.HPMN to 0 is constrained unpredictable according to ARM DDI 0487G.a, page D13-3820. The behavior specified by the architecture in this case is for the PE to behave as if MDCR_EL2.HPMN is set to a value less than or equal to PMCR_EL0.N, which means that an unknown number of counters are now disabled by MDCR_EL2.HPME, which is zero.
* The host configuration for the other debug features controlled by MDCR_EL2 is temporarily lost. This has been harmless so far, as Linux doesn't use the other fields, but that might change in the future.
Let's avoid both issues by initializing the VCPU's mdcr_el2 field in kvm_vcpu_first_run_init(), thus making sure that the MDCR_EL2 register has a consistent value after each vcpu_load(). Fixes: d5a21bcc2995 ("KVM: arm64: Move common VHE/non-VHE trap config in separate functions") Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210407144857.199746-3-alexandru.elisei@arm.com
2021-04-07Documentation: KVM: Document KVM_GUESTDBG_USE_HW control flag for arm64Alexandru Elisei
Commit 21b6f32f9471 ("KVM: arm64: guest debug, define API headers") added the arm64 KVM_GUESTDBG_USE_HW flag for the KVM_SET_GUEST_DEBUG ioctl and commit 834bf88726f0 ("KVM: arm64: enable KVM_CAP_SET_GUEST_DEBUG") documented and implemented the flag functionality. Since its introduction, at no point was the flag known by any name other than KVM_GUESTDBG_USE_HW for the arm64 architecture, so refer to it as such in the documentation. CC: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210407144857.199746-2-alexandru.elisei@arm.com
2021-04-07ptp: arm/arm64: Enable ptp_kvm for arm/arm64Jianyong Wu
Currently, there is no mechanism to keep time in sync between guest and host in the arm/arm64 virtualization environment. Time in the guest will drift compared with the host after boot, as each may use a third-party time source to correct its time independently. The time deviation will be in the order of milliseconds, but some scenarios, like cloud environments, ask for higher time precision. The kvm ptp clock, which uses the host clock source as a reference clock to sync time between guest and host, has already been adopted by x86, where it takes the time sync error from milliseconds down to nanoseconds. This patch enables the kvm ptp clock for arm/arm64 and improves clock sync precision significantly. Test results comparing arm/arm64 with and without the kvm ptp clock follow. They are derived from the output of the command 'chronyc sources'; the most interesting column is "Last sample", which shows the offset between the local clock and the source at the last measurement.

no kvm ptp in guest:

MS Name/IP address         Stratum Poll Reach LastRx Last sample
========================================================================
^* dns1.synet.edu.cn              2   6   377    13   +1040us[+1581us] +/-   21ms
^* dns1.synet.edu.cn              2   6   377    21   +1040us[+1581us] +/-   21ms
^* dns1.synet.edu.cn              2   6   377    29   +1040us[+1581us] +/-   21ms
^* dns1.synet.edu.cn              2   6   377    37   +1040us[+1581us] +/-   21ms
^* dns1.synet.edu.cn              2   6   377    45   +1040us[+1581us] +/-   21ms
^* dns1.synet.edu.cn              2   6   377    53   +1040us[+1581us] +/-   21ms
^* dns1.synet.edu.cn              2   6   377    61   +1040us[+1581us] +/-   21ms
^* dns1.synet.edu.cn              2   6   377     4    -130us[ +796us] +/-   21ms
^* dns1.synet.edu.cn              2   6   377    12    -130us[ +796us] +/-   21ms
^* dns1.synet.edu.cn              2   6   377    20    -130us[ +796us] +/-   21ms

in host:

MS Name/IP address         Stratum Poll Reach LastRx Last sample
========================================================================
^* 120.25.115.20                  2   7   377    72    -470us[ -603us] +/-   18ms
^* 120.25.115.20                  2   7   377    92    -470us[ -603us] +/-   18ms
^* 120.25.115.20                  2   7   377   112    -470us[ -603us] +/-   18ms
^* 120.25.115.20                  2   7   377     2    +872ns[-6808ns] +/-   17ms
^* 120.25.115.20                  2   7   377    22    +872ns[-6808ns] +/-   17ms
^* 120.25.115.20                  2   7   377    43    +872ns[-6808ns] +/-   17ms
^* 120.25.115.20                  2   7   377    63    +872ns[-6808ns] +/-   17ms
^* 120.25.115.20                  2   7   377    83    +872ns[-6808ns] +/-   17ms
^* 120.25.115.20                  2   7   377   103    +872ns[-6808ns] +/-   17ms
^* 120.25.115.20                  2   7   377   123    +872ns[-6808ns] +/-   17ms

dns1.synet.edu.cn is the network reference clock for the guest and 120.25.115.20 is the network reference clock for the host. We can't measure the clock error between guest and host directly, but a rough estimate puts it in the order of hundreds of microseconds to milliseconds.

with kvm ptp in guest (chrony has been disabled in the host to remove the disturbance from the network clock):

MS Name/IP address         Stratum Poll Reach LastRx Last sample
========================================================================
*  PHC0                          0   3   377     8      -7ns[  +1ns] +/-    3ns
*  PHC0                          0   3   377     8      +1ns[ +16ns] +/-    3ns
*  PHC0                          0   3   377     6      -4ns[  -0ns] +/-    6ns
*  PHC0                          0   3   377     6      -8ns[ -12ns] +/-    5ns
*  PHC0                          0   3   377     5      +2ns[  +4ns] +/-    4ns
*  PHC0                          0   3   377    13      +2ns[  +4ns] +/-    4ns
*  PHC0                          0   3   377    12      -4ns[  -6ns] +/-    4ns
*  PHC0                          0   3   377    11      -8ns[ -11ns] +/-    6ns
*  PHC0                          0   3   377    10     -14ns[ -20ns] +/-    4ns
*  PHC0                          0   3   377     8      +4ns[  +5ns] +/-    4ns

PHC0 is the ptp clock which uses the host clock as its source clock, so the clock difference between host and guest is in the order of ns.
Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-8-jianyong.wu@arm.com
2021-04-07KVM: arm64: Add support for the KVM PTP serviceJianyong Wu
Implement the hypervisor side of the KVM PTP interface. The service offers wall time and cycle count from host to guest. The caller must specify whether they want the host's view of either the virtual or physical counter. Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-7-jianyong.wu@arm.com
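A rough sketch of the shape of the host-side handler follows; helper and constant names are borrowed from this series and the surrounding KVM/arm64 code, but treat the details as illustrative rather than authoritative:

static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 val[4])
{
	struct system_time_snapshot snap;
	u64 cycles;

	/* Capture wall time and counter value consistently with each other. */
	ktime_get_snapshot(&snap);

	/* Only meaningful if the host clocksource is the arch counter. */
	if (snap.cs_id != CSID_ARM_ARCH_COUNTER)
		return;

	/* The first SMCCC argument selects the virtual or physical counter. */
	if (smccc_get_arg1(vcpu) == KVM_PTP_VIRT_COUNTER)
		cycles = snap.cycles - vcpu_read_sys_reg(vcpu, CNTVOFF_EL2);
	else
		cycles = snap.cycles;

	/* Wall clock (ns) in val[0..1], counter snapshot in val[2..3]. */
	val[0] = upper_32_bits(snap.real);
	val[1] = lower_32_bits(snap.real);
	val[2] = upper_32_bits(cycles);
	val[3] = lower_32_bits(cycles);
}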
2021-04-07clocksource: Add clocksource id for arm arch counterJianyong Wu
Add clocksource id to the ARM generic counter so that it can be easily identified from callers such as ptp_kvm. Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-6-jianyong.wu@arm.com
2021-04-07time: Add mechanism to recognize clocksource in time_get_snapshotThomas Gleixner
System time snapshots do not convey information about the clocksource which was used, but callers like the PTP KVM guest implementation have the requirement to evaluate the clocksource type to select the appropriate mechanism. Introduce a clocksource id field in struct clocksource which is by default set to CSID_GENERIC (0). Clocksource implementations can set that field to a value which allows the clocksource to be identified. Store the clocksource id of the current clocksource in the system_time_snapshot so callers can evaluate which clocksource was used to take the snapshot and act accordingly. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-5-jianyong.wu@arm.com
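Roughly, the additions look like this (a condensed sketch of the structures described above; unrelated fields are omitted):

enum clocksource_ids {
	CSID_GENERIC = 0,		/* default for unannotated clocksources */
	CSID_ARM_ARCH_COUNTER,
	CSID_MAX,
};

/* struct clocksource gains an identifier... */
struct clocksource {
	/* ... existing fields ... */
	enum clocksource_ids	id;
};

/* ...which ktime_get_snapshot() copies into the snapshot it returns. */
struct system_time_snapshot {
	u64			cycles;
	ktime_t			real;
	ktime_t			raw;
	enum clocksource_ids	cs_id;
	unsigned int		clock_was_set_seq;
	u8			cs_was_changed_seq;
};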
2021-04-07ptp: Reorganize ptp_kvm.c to make it arch-independentJianyong Wu
Currently, the ptp_kvm module contains a lot of x86-specific code. Let's move this code into a new arch-specific file in the same directory, and rename the arch-independent file to ptp_kvm_common.c. Acked-by: Richard Cochran <richardcochran@gmail.com> Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Jianyong Wu <jianyong.wu@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201209060932.212364-4-jianyong.wu@arm.com
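After the split, the arch-specific file provides a small hook interface along these lines (prototypes shown as an illustrative sketch):

/* Implemented by the x86 arch file today, and by an arm64 one in later patches. */
int kvm_arch_ptp_init(void);
int kvm_arch_ptp_get_clock(struct timespec64 *ts);
int kvm_arch_ptp_get_crosststamp(u64 *cycle, struct timespec64 *tspec,
				 struct clocksource **cs);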
2021-04-07KVM: selftests: vgic_init kvm selftests fixupEric Auger
Bring some improvements/rationalization over the first version of the vgic_init selftests:
- ucall_init is moved into run_cpu()
- vcpu_args_set is not called, as it is not needed
- whenever a helper is supposed to succeed, call the non "_" version
- helpers do not return -errno; instead errno is checked by the caller
- the vm_gic struct is used whenever possible, as well as vm_gic_destroy
- _kvm_create_device takes an additional fd parameter
Signed-off-by: Eric Auger <eric.auger@redhat.com> Suggested-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210407135937.533141-1-eric.auger@redhat.com
2021-04-07KVM: arm64: Don't retrieve memory slot again in page fault handlerGavin Shan
We needn't retrieve the memory slot again in user_mem_abort() because the corresponding memory slot has been passed from the caller. This would save some CPU cycles. For example, the time used to write 1GB memory, which is backed by 2MB hugetlb pages and write-protected, is dropped by 6.8% from 928ms to 864ms. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210316041126.81860-4-gshan@redhat.com
2021-04-07KVM: arm64: Use find_vma_intersection()Gavin Shan
find_vma_intersection() already exists to search for an intersecting vma. Use the function where it's applicable, to simplify the code. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210316041126.81860-3-gshan@redhat.com
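As a sketch of the simplification (an illustrative helper, not the exact hunk; the function name is hypothetical):

/* Caller must hold mmap_read_lock() on current->mm. */
static struct vm_area_struct *vma_for_hva(unsigned long hva)
{
	/*
	 * find_vma() only guarantees vma->vm_end > hva, so callers must
	 * also check vma->vm_start. find_vma_intersection() asks the
	 * question directly: does [hva, hva + 1) overlap a VMA?
	 */
	return find_vma_intersection(current->mm, hva, hva + 1);
}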
2021-04-07KVM: arm64: Hide kvm_mmu_wp_memory_region()Gavin Shan
We needn't expose the function as it's only used by mmu.c since it was introduced by commit c64735554c0a ("KVM: arm: Add initial dirty page locking support"). Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210316041126.81860-2-gshan@redhat.com
2021-04-06dts: bindings: Document device tree bindings for Arm TRBESuzuki K Poulose
Document the device tree bindings for Trace Buffer Extension (TRBE). Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Rob Herring <robh@kernel.org> Cc: devicetree@vger.kernel.org Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-21-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06Documentation: trace: Add documentation for TRBEAnshuman Khandual
Add documentation for the TRBE under trace/coresight. Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-doc@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [ Split from the TRBE driver patch ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-20-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06Documentation: coresight: trbe: Sysfs ABI descriptionAnshuman Khandual
Add sysfs ABI documentation for the TRBE devices. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-doc@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [ Split from the TRBE driver patch ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-19-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06coresight: sink: Add TRBE driverAnshuman Khandual
Trace Buffer Extension (TRBE) implements a trace buffer per CPU which is accessible via the system registers. The TRBE supports different addressing modes, including CPU virtual address, and buffer modes, including the circular buffer mode. The TRBE buffer is addressed by a base pointer (TRBBASER_EL1), a write pointer (TRBPTR_EL1) and a limit pointer (TRBLIMITR_EL1). But access to the trace buffer can be prohibited by a higher exception level (EL3 or EL2), indicated by TRBIDR_EL1.P. The TRBE can also generate a CPU private interrupt (PPI) on address translation errors and when the buffer is full. The overall implementation here is inspired by the Arm SPE driver. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [ Mark the buffer truncated on WRAP event, error code cleanup ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-18-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06coresight: core: Add support for dedicated percpu sinksAnshuman Khandual
Add support for dedicated sinks that are bound to individual CPUs (e.g., TRBE). To allow quicker access to the sink for a given CPU bound source, keep a percpu array of the sink devices. Also, add support for building a path to the CPU local sink from the ETM. This adds a new percpu sink type CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM. This new sink type is exclusively available and can only work with the percpu source type device CORESIGHT_DEV_SUBTYPE_SOURCE_PROC. This defines a percpu structure that accommodates a single coresight_device, which can be used to store an initialized instance from a sink driver. As these sinks are exclusively linked to and dependent on corresponding percpu source devices, they should also be the default sink device during a perf session. Outward device connections are scanned while establishing paths between a source and a sink device. But such connections are not present for certain percpu source and sink devices which are exclusively linked and dependent. Build the path directly and skip connection scanning for such devices. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [ Moved the set/get percpu sink APIs from the TRBE patch to here; fixed build break on arm32 ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-17-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
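The per-CPU sink bookkeeping can be sketched as follows (simplified; treat the helper names as illustrative even though they mirror the description above):

static DEFINE_PER_CPU(struct coresight_device *, csdev_sink);

/* Record the sink device registered by a per-CPU sink driver (e.g. TRBE). */
void coresight_set_percpu_sink(int cpu, struct coresight_device *csdev)
{
	per_cpu(csdev_sink, cpu) = csdev;
}

/* Fast lookup of the CPU-local sink when building a path from an ETM. */
struct coresight_device *coresight_get_percpu_sink(int cpu)
{
	return per_cpu(csdev_sink, cpu);
}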
2021-04-06coresight: etm-perf: Handle stale output handlesSuzuki K Poulose
The context associated with an ETM for a given perf event includes:
- handle -> the perf output handle for the AUX buffer.
- the path for the trace components
- the buffer config for the sink.
The path and the buffer config are part of the "aux_priv" data (etm_event_data) set up by the setup_aux() callback, and made available via perf_get_aux(handle). Now with a sink supporting IRQ, the sink could "end" an output handle when the buffer reaches the programmed limit and would try to restart a handle. This could fail if there is not enough space left in the AUX buffer (e.g., the userspace has not consumed the data). This leaves the "handle" disconnected from the "event" and also leaves "perf_get_aux()" cleared. This all happens within the sink driver, without the etm_perf driver being aware. Now when the event is actually stopped, etm_event_stop() will need to access the "event_data". But since the handle is not valid anymore, we lose the information needed to stop the "trace" path. So, we need a reliable way to access the etm_event_data even when the handle may not be active. This patch replaces the per_cpu handle array with a per_cpu context for the ETM, which tracks the "handle" as well as the "etm_event_data". The context notes the etm_event_data at etm_event_start() and clears it at etm_event_stop(). This makes sure that we don't access a stale "etm_event_data", as we are guaranteed that it is not freed by free_aux() as long as the event is active and tracing, and it also provides us with access to the critical information needed to wind up a session even in the absence of an active output_handle. This is not an issue for the legacy sinks, as none of them supports an IRQ and they are centrally handled by etm-perf. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-16-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
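The replacement per-CPU context can be pictured like this (a simplified sketch of the structure described above):

struct etm_ctxt {
	struct perf_output_handle handle;	/* AUX buffer output handle    */
	struct etm_event_data *event_data;	/* noted at etm_event_start(), */
						/* cleared at etm_event_stop() */
};

static DEFINE_PER_CPU(struct etm_ctxt, etm_ctxt);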
2021-04-06dts: bindings: Document device tree bindings for ETESuzuki K Poulose
Document the device tree bindings for Embedded Trace Extensions. ETE can be connected to legacy coresight components and thus could optionally contain a connection graph as described by the CoreSight bindings. Cc: devicetree@vger.kernel.org Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-15-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06coresight: ete: Add support for ETE tracingSuzuki K Poulose
Add ETE as one of the supported device types we support with ETM4x driver. The devices are named following the existing convention as ete<N>. ETE mandates that the trace resource status register is programmed before the tracing is turned on. For the moment simply write to it indicating TraceActive. Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-14-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06coresight: ete: Add support for ETE sysreg accessSuzuki K Poulose
Add support for handling the system registers for Embedded Trace Extensions (ETE). ETE shares most of the registers with ETMv4 except for some and also adds some new registers. Re-arrange the ETMv4x list to share the common definitions and add the ETE sysreg support. Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-13-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06coresight: etm4x: Add support for PE OS lockSuzuki K Poulose
ETE may not implement the OS lock and instead could rely on the PE OS Lock for the trace unit access. This is indicated by TRCOLSR.OSM == 0b100. Add support for handling the PE OS lock. Cc: Mike Leach <mike.leach@linaro.org> Reviewed-by: mike.leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-12-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06coresight: Do not scan for graph if none is presentSuzuki K Poulose
If a graph node is not found for a given node, of_get_next_endpoint() will emit the following error message:
OF: graph: no port node found in /<node_name>
If the given component doesn't have any explicit connections (e.g., ETE) we could simply ignore the graph parsing. As for any legacy component, where the graph is mandatory, the device will not be usable, as before this patch. Updating the DT bindings to YAML and enabling the schema checks can detect such issues with the DT. Cc: Mike Leach <mike.leach@linaro.org> Cc: Leo Yan <leo.yan@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-11-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06coresight: etm-perf: Allow an event to use different sinksSuzuki K Poulose
When a sink is not specified by the user, the etm perf driver finds a suitable sink automatically, based on the first ETM where this event could be scheduled. Then we allocate the sink buffer based on the selected sink. This is fine for a CPU bound event, as the "sink" is always guaranteed to be reachable from the ETM (as this is the only ETM where the event is going to be scheduled). However, if we have a thread bound event, the event could be scheduled on any of the ETMs on the system. In this case, currently we automatically select a sink and exclude any ETMs that cannot reach the selected sink. This is problematic especially for 1x1 configurations. We end up tracing the event only on the "first" ETM, as the default sink is local to the first ETM and unreachable from the rest. However, we could allow the other ETMs to trace if they all have a sink that is compatible with the "selected" sink and can use the sink buffer. This can be easily done by verifying that they are all driven by the same driver and match the same subtype, as shown in the sketch below. Please note that at any time there can be only one ETM tracing the event. Adding support for different types of sinks for a single event is complex and is not something that we expect on a sane configuration. Cc: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Mike Leach <mike.leach@linaro.org> Tested-by: Linu Cherian <lcherian@marvell.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-10-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
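The compatibility rule boils down to a check along these lines (an illustrative helper, not the exact driver code):

/* Two sinks can serve the same event if the same driver handles them
 * and they advertise the same sink subtype. */
static bool sinks_compatible(struct coresight_device *a,
			     struct coresight_device *b)
{
	return sink_ops(a) == sink_ops(b) &&
	       a->subtype.sink_subtype == b->subtype.sink_subtype;
}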
2021-04-06coresight: etm4x: Move ETM to prohibited region for disableSuzuki K Poulose
If the CPU implements Arm v8.4 Trace filter controls (FEAT_TRF), move the ETM to trace prohibited region using TRFCR, while disabling. Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Mike Leach <mike.leach@linaro.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Mike Leach <mike.leach@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210405164307.1720226-9-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06arm64: KVM: Enable access to TRBE support for hostSuzuki K Poulose
For an nVHE host, EL2 must allow the EL1&0 translation regime for the TraceBuffer (MDCR_EL2.E2TB == 0b11). This must be saved/restored over a trip to the guest. Also, before entering the guest, we must flush any trace data if the TRBE was enabled, and we must prohibit the generation of trace while we are in EL1 by clearing TRFCR_EL1. For VHE, EL2 must prevent EL1 access to the Trace Buffer. The MDCR_EL2 bit definitions for TRBE are available here: https://developer.arm.com/documentation/ddi0601/2020-12/AArch64-Registers/ Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405164307.1720226-8-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06KVM: arm64: Move SPE availability check to VCPU loadSuzuki K Poulose
At the moment, we check the availability of SPE on the given CPU (i.e, SPE is implemented and is allowed at the host) during every guest entry. This can be optimized a bit by moving the check to vcpu_load time and recording the availability of the feature on the current CPU via a new flag. This will also be useful for adding the TRBE support. Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Alexandru Elisei <Alexandru.Elisei@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405164307.1720226-7-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06KVM: arm64: Handle access to TRFCR_EL1Suzuki K Poulose
Rather than falling back to an "unhandled access", inject an explicit "undefined access" for TRFCR_EL1 accesses from the guest. Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405164307.1720226-6-suzuki.poulose@arm.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2021-04-06x86/sgx: Move provisioning device creation out of SGX driverSean Christopherson
And extract sgx_set_attribute() out of sgx_ioc_enclave_provision() and export it as a symbol for KVM to use. The provisioning key is sensitive. The SGX driver only allows creating an enclave which can access the provisioning key when the enclave creator has permission to open /dev/sgx_provision. The same should apply to a VM as well: the provisioning key is platform-specific, so an unrestricted VM could also potentially compromise it. Move the provisioning device creation out of sgx_drv_init() to sgx_init() as a preparation for adding SGX virtualization support, so that even if the SGX driver is not enabled due to flexible launch control not being available, SGX virtualization can still be enabled and used to restrict a VM's ability to access the provisioning key. [ bp: Massage commit message. ] Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Link: https://lkml.kernel.org/r/0f4d044d621561f26d5f4ef73e8dc6cd18cc7e79.1616136308.git.kai.huang@intel.com
2021-04-06x86/sgx: Add helpers to expose ECREATE and EINIT to KVMSean Christopherson
The host kernel must intercept ECREATE to impose policies on guests, and intercept EINIT to be able to write guest's virtual SGX_LEPUBKEYHASH MSR values to hardware before running guest's EINIT so it can run correctly according to hardware behavior. Provide wrappers around __ecreate() and __einit() to hide the ugliness of overloading the ENCLS return value to encode multiple error formats in a single int. KVM will trap-and-execute ECREATE and EINIT as part of SGX virtualization, and reflect ENCLS execution result to guest by setting up guest's GPRs, or on an exception, injecting the correct fault based on return value of __ecreate() and __einit(). Use host userspace addresses (provided by KVM based on guest physical address of ENCLS parameters) to execute ENCLS/EINIT when possible. Accesses to both EPC and memory originating from ENCLS are subject to segmentation and paging mechanisms. It's also possible to generate kernel mappings for ENCLS parameters by resolving PFN but using __uaccess_xx() is simpler. [ bp: Return early if the __user memory accesses fail, use cpu_feature_enabled(). ] Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Jarkko Sakkinen <jarkko@kernel.org> Link: https://lkml.kernel.org/r/20e09daf559aa5e9e680a0b4b5fba940f1bad86e.1616136308.git.kai.huang@intel.com
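The exported wrappers take roughly this shape (prototypes sketched from the description above; treat the exact signatures as illustrative):

/*
 * Both return 0 on success; on an ENCLS fault they report the trap
 * number through @trapnr so KVM can inject the matching exception
 * into the guest.
 */
int sgx_virt_ecreate(struct sgx_pageinfo *pageinfo, void __user *secs,
		     int *trapnr);
int sgx_virt_einit(void __user *sigstruct, void __user *token,
		   void __user *secs, u64 *lepubkeyhash, int *trapnr);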
2021-04-06KVM: selftests: aarch64/vgic-v3 init sequence testsEric Auger
The tests exercise the VGIC_V3 device creation including the associated KVM_DEV_ARM_VGIC_GRP_ADDR group attributes:
- KVM_VGIC_V3_ADDR_TYPE_DIST/REDIST
- KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION
Some other tests are dedicated to the KVM_DEV_ARM_VGIC_GRP_REDIST_REGS group, and especially the GICR_TYPER read. The goal was to test the case recently fixed by commit 23bde34771f1 ("KVM: arm64: vgic-v3: Drop the reporting of GICR_TYPER.Last for userspace"). The API under test can be found at Documentation/virt/kvm/devices/arm-vgic-v3.rst. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405163941.510258-10-eric.auger@redhat.com
2021-04-06KVM: arm64: vgic-v3: Expose GICR_TYPER.Last for userspaceEric Auger
Commit 23bde34771f1 ("KVM: arm64: vgic-v3: Drop the reporting of GICR_TYPER.Last for userspace") temporarily fixed a bug identified when attempting to access the GICR_TYPER register before the redistributor region setting, but dropped the support of the LAST bit. Emulating the GICR_TYPER.Last bit still makes sense for architecture compliance though. This patch restores its support (if the redistributor region was set) while keeping the code safe. We introduce a new helper, vgic_mmio_vcpu_rdist_is_last() which computes whether a redistributor is the highest one of a series of redistributor contributor pages. With this new implementation we do not need to have a uaccess read accessor anymore. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405163941.510258-9-eric.auger@redhat.com
2021-04-06kvm: arm64: vgic-v3: Introduce vgic_v3_free_redist_region()Eric Auger
To improve readability, introduce the new vgic_v3_free_redist_region() helper and also rename vgic_v3_insert_redist_region() to vgic_v3_alloc_redist_region(). Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405163941.510258-8-eric.auger@redhat.com
2021-04-06KVM: arm64: Simplify argument passing to vgic_uaccess_[read|write]Eric Auger
vgic_uaccess() takes a struct vgic_io_device argument, converts it to a struct kvm_io_device and passes it to the read/write accessor functions, which convert it back to a struct vgic_io_device. Avoid the indirection by passing the struct vgic_io_device argument directly to vgic_uaccess_{read,write}. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210405163941.510258-7-eric.auger@redhat.com