path: root/arch/s390/kernel
Age    Commit message    Author
2023-04-25  Merge tag 'timers-core-2023-04-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull timers and timekeeping updates from Thomas Gleixner:

 - Improve the VDSO build time checks to cover all dynamic relocations

   VDSO does not allow dynamic relocations, but the build time check is
   incomplete and fragile. It's based on architectures specifying the
   relocation types to search for and does not handle R_*_NONE relocation
   entries correctly. R_*_NONE relocations are injected by some GNU ld
   variants if they fail to determine the exact .rel[a].dyn size to cover
   trailing zeros. R_*_NONE relocations must be ignored by dynamic
   loaders, so they should be ignored in the build time check too.

   Remove the architecture specific relocation types to check for and
   validate strictly that no relocations other than R_*_NONE end up in
   the VDSO .so file.

 - Prefer signal delivery to the current thread for
   CLOCK_PROCESS_CPUTIME_ID based posix-timers

   Such timers prefer to deliver the signal to the main thread of a
   process even if the context in which the timer expires is the current
   task. This has the downside that it might wake up an idle thread.

   As there is no requirement or guarantee that the signal has to be
   delivered to the main thread, avoid this by preferring the current
   task if it is part of the thread group which shares sighand. This not
   only avoids waking idle threads, it also better distributes signal
   delivery when multiple timers fire in the context of different
   threads close to each other.

 - Align the tick period properly (again)

   For a long time the tick was starting at CLOCK_MONOTONIC zero, which
   allowed user space applications to either align with the tick or to
   place a periodic computation so that it does not interfere with the
   tick. The alignment of the tick period was more by chance than by
   intention, as the tick is set up before a high resolution clocksource
   is installed, i.e. timekeeping is still tick based and the tick period
   advances from there.

   The early enablement of sched_clock() broke this alignment as the time
   accumulated by sched_clock() is taken into account when timekeeping is
   initialized. So the base value now(CLOCK_MONOTONIC) is no longer a
   multiple of tick periods, which breaks applications which relied on
   that behaviour. Cure this by aligning the tick starting point to the
   next multiple of tick periods, i.e. 1000ms/CONFIG_HZ.

 - A set of NOHZ fixes and enhancements:

    * Cure the concurrent writer race for idle and IO sleeptime
      statistics

      The statistic values which are exposed via /proc/stat are updated
      from the CPU local idle exit and remotely by cpufreq, but that
      happens without any form of serialization. As a consequence
      sleeptimes can be accounted twice or worse.

      Prevent this by restricting the accumulation writeback to the CPU
      local idle exit and let the remote access compute the accumulated
      value.

    * Protect idle/iowait sleep time with a sequence count (a rough
      sketch of the seqcount scheme follows this changelog)

      Reading idle/iowait sleep time, e.g. from /proc/stat, can race with
      idle exit updates. As a consequence the readout may result in
      random and potentially backwards-going values.

      Protect this by a sequence count, which fixes the idle time
      statistics issue, but cannot fix the iowait time problem because
      iowait time accounting races with remote wake ups decrementing the
      remote runqueue's nr_iowait counter. The latter is impossible to
      fix, so the only way to deal with that is to document it properly
      and to remove the assertion in the selftest which triggers
      occasionally due to that.

    * Restructure struct tick_sched for better cache layout

    * Some small cleanups and a better cache layout for struct tick_sched

 - Implement the missing timer_wait_running() callback for POSIX CPU
   timers (see the locking sketch after this changelog)

   For unknown reasons the introduction of the timer_wait_running()
   callback missed to fix up posix CPU timers, which went unnoticed for
   almost four years. While initially only targeted at preventing
   livelocks between a timer deletion and the timer expiry function on
   PREEMPT_RT enabled kernels, it turned out that fixing this for
   mainline is not as trivial as just implementing a stub similar to the
   hrtimer/timer callbacks.

   The reason is that for CONFIG_POSIX_CPU_TIMERS_TASK_WORK enabled
   systems there is a livelock issue independent of RT.

   CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y moves the expiry of POSIX CPU
   timers out from hard interrupt context to task work, which is handled
   before returning to user space or to a VM. The expiry mechanism moves
   the expired timers to a stack local list head with sighand lock held.
   Once sighand is dropped the task can be preempted and a task which
   wants to delete a timer will spin-wait until the expiry task is
   scheduled back in. In the worst case this will end up in a livelock
   when the preempting task and the expiry task are pinned on the same
   CPU.

   The timer wheel has a timer_wait_running() mechanism for RT, which
   uses a per CPU timer-base expiry lock that is held by the expiry code,
   while the task waiting for the timer function to complete blocks on
   that lock. This does not work in the same way for posix CPU timers as
   there is no timer base, and expiry for process wide timers can run on
   any task belonging to that process, but the concept of waiting on an
   expiry lock can be used too in a slightly different way.

   Add a per task mutex to struct posix_cputimers_work, let the expiry
   task hold it across the expiry function and let the deleting task
   which waits for the expiry to complete block on the mutex. In the
   non-contended case this results in an extra mutex_lock()/unlock() pair
   on both sides. This avoids spin-waiting on a task which is scheduled
   out, prevents the livelock and cures the problem for RT and !RT
   systems.

* tag 'timers-core-2023-04-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  posix-cpu-timers: Implement the missing timer_wait_running callback
  selftests/proc: Assert clock_gettime(CLOCK_BOOTTIME) VS /proc/uptime monotonicity
  selftests/proc: Remove idle time monotonicity assertions
  MAINTAINERS: Remove stale email address
  timers/nohz: Remove middle-function __tick_nohz_idle_stop_tick()
  timers/nohz: Add a comment about broken iowait counter update race
  timers/nohz: Protect idle/iowait sleep time under seqcount
  timers/nohz: Only ever update sleeptime from idle exit
  timers/nohz: Restructure and reshuffle struct tick_sched
  tick/common: Align tick period with the HZ tick.
  selftests/timers/posix_timers: Test delivery of signals across threads
  posix-timers: Prefer delivery of signals to the current thread
  vdso: Improve cmd_vdso_check to check all dynamic relocations
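A rough sketch of the seqcount scheme described in the NOHZ item above; the struct and function names are illustrative stand-ins, not the actual tick-sched code, and the writer is assumed to be serialized (CPU-local idle exit with interrupts disabled):

  #include <linux/seqlock.h>
  #include <linux/ktime.h>
  #include <linux/types.h>

  /* Illustrative stand-in for the sleep time fields in struct tick_sched. */
  struct sleeptime_stats {
          seqcount_t sleep_seq;
          ktime_t idle_sleeptime;
          ktime_t iowait_sleeptime;
  };

  /* Writer: only the CPU-local idle exit path accumulates the delta. */
  static void account_idle_exit(struct sleeptime_stats *st, ktime_t delta, bool iowait)
  {
          write_seqcount_begin(&st->sleep_seq);
          if (iowait)
                  st->iowait_sleeptime = ktime_add(st->iowait_sleeptime, delta);
          else
                  st->idle_sleeptime = ktime_add(st->idle_sleeptime, delta);
          write_seqcount_end(&st->sleep_seq);
  }

  /* Reader: e.g. the /proc/stat path, possibly running on another CPU. */
  static ktime_t read_idle_sleeptime(struct sleeptime_stats *st)
  {
          unsigned int seq;
          ktime_t t;

          do {
                  seq = read_seqcount_begin(&st->sleep_seq);
                  t = st->idle_sleeptime;
          } while (read_seqcount_retry(&st->sleep_seq, seq));
          return t;
  }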
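And a minimal sketch of the expiry-lock idea for POSIX CPU timers described above; the structure here is a stand-in for the real per-task state in struct posix_cputimers_work, only the mutex handshake is shown:

  #include <linux/mutex.h>

  /* Stand-in for the per-task expiry state; only the mutex matters here. */
  struct cpu_timers_expiry {
          struct mutex mutex;     /* held across task-work based expiry */
  };

  /* Expiry side: runs in task work before returning to user space or a VM. */
  static void expiry_task_work(struct cpu_timers_expiry *ce)
  {
          mutex_lock(&ce->mutex);
          /* ... collect and fire the expired timers ... */
          mutex_unlock(&ce->mutex);
  }

  /*
   * Deletion side: instead of spin-waiting until the expiry task gets
   * scheduled in again, block on the mutex until expiry has completed.
   */
  static void wait_for_expiry(struct cpu_timers_expiry *ce)
  {
          mutex_lock(&ce->mutex);
          mutex_unlock(&ce->mutex);
  }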
2023-04-20  s390/mm: use VM_FLUSH_RESET_PERMS in module_alloc()  (Heiko Carstens)
Make use of the set_direct_map() calls for module allocations. In particular:

- All changes to read-only permissions in kernel VA mappings are also
  applied to the direct mapping. Note that execute permissions are
  intentionally not applied to the direct mapping in order to make sure
  that all allocated pages within the direct mapping stay non-executable.

- module_alloc() passes the VM_FLUSH_RESET_PERMS flag to
  __vmalloc_node_range() to make sure that all implicit permission
  changes made to the direct mapping are reset when the allocated vm
  area is freed again.

Side effects: the direct mapping will be fragmented depending on how many
vm areas with VM_FLUSH_RESET_PERMS and/or explicit page permission changes
are allocated and freed again. For example, just after boot of a system
the direct mapping statistics look like:

$ cat /proc/meminfo
...
DirectMap4k:      111628 kB
DirectMap1M:    16665600 kB
DirectMap2G:           0 kB

Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
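A simplified sketch of what module_alloc() ends up looking like with this change combined with the earlier RW- permission change; KASAN shadow handling and base-address randomization are omitted, so treat it as illustrative rather than the exact s390 code:

  #include <linux/vmalloc.h>
  #include <linux/moduleloader.h>

  void *module_alloc(unsigned long size)
  {
          /*
           * PAGE_KERNEL (RW-, non-executable) instead of RWX; the module
           * loader sets the final permissions via set_memory_*() helpers.
           * VM_FLUSH_RESET_PERMS resets any direct mapping permission
           * changes when the area is vfree()'d again.
           */
          return __vmalloc_node_range(size, MODULE_ALIGN,
                                      MODULES_VADDR, MODULES_END,
                                      GFP_KERNEL, PAGE_KERNEL,
                                      VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
                                      __builtin_return_address(0));
  }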
2023-04-20  s390: wire up memfd_secret system call  (Heiko Carstens)
s390 supports ARCH_HAS_SET_DIRECT_MAP, therefore wire up the memfd_secret system call, which depends on it. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
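For reference, a small userspace example of the syscall being wired up here; it assumes kernel headers that define __NR_memfd_secret and a kernel with secretmem support enabled:

  #define _GNU_SOURCE
  #include <sys/syscall.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <string.h>
  #include <stdio.h>

  int main(void)
  {
          /* Anonymous secret memory file: removed from the kernel direct map. */
          int fd = syscall(__NR_memfd_secret, 0);
          if (fd < 0) {
                  perror("memfd_secret");
                  return 1;
          }
          if (ftruncate(fd, 4096) < 0) {
                  perror("ftruncate");
                  return 1;
          }
          char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }
          strcpy(p, "not visible via the kernel direct mapping");
          printf("%s\n", p);
          munmap(p, 4096);
          close(fd);
          return 0;
  }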
2023-04-19  s390/relocate_kernel: adjust indentation  (Heiko Carstens)
relocate_kernel.S seems to be the only assembler file which doesn't follow the standard way of indentation. Adjust this for the sake of consistency. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/relocate_kernel: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/entry: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kprobes: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/reipl: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/head64: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/earlypgm: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/mcount: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/amode31: use SYM* macros instead of ENTRY(), etc.  (Heiko Carstens)
Consistently use the SYM* family of macros instead of the deprecated ENTRY(), ENDPROC(), etc. family of macros. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kdump: remove nodat stack restriction for calling nodat functions  (Alexander Gordeev)
To allow calling DAT-off code from the kernel, the stack needs to be
switched to nodat_stack (or another stack mapped 1:1). Before the
call_nodat() macro was introduced, that was necessary to provide the very
same memory address for the STNSM and STOSM instructions. If the kernel
stayed on a random stack (e.g. a virtually mapped one), then the virtual
address provided for the STNSM instruction could differ from the physical
address needed for the corresponding STOSM instruction.

Now that the call_nodat() macro is introduced, the kernel stack does not
need to be mapped 1:1 anymore, since the macro stores the physical memory
address of the return PSW in a register before entering DAT-off mode.
This way the return LPSWE instruction is able to pick the correct memory
location and restore the DAT-on mode. That however might fail in case the
16-byte return PSW happened to cross a page boundary: PSW mask and PSW
address could end up in two separate non-contiguous physical pages.

Align the return PSW on a 16-byte boundary so it always fits into a
single physical page. As a result any stack (including a virtually mapped
one) can be used for calling DAT-off code, and switching to nodat_stack
beforehand becomes unnecessary.

Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kdump: rework invocation of DAT-off code  (Alexander Gordeev)
Calling the kdump kernel is a two-step process that involves invocation of
the purgatory code: first to verify the new kernel checksum and then to
call the new kernel itself.

The purgatory code operates on real addresses and does not expect any
memory protection. Therefore, before the purgatory code is entered the
DAT mode is always turned off. However, it is only restored upon return
from the new kernel checksum verification. In case the purgatory was
called to start the new kernel and failed, control is returned to the old
kernel, but the DAT mode stays off. The new kernel start failure is
unlikely and leads to the disabled wait state anyway. Still that poses a
risk, since the kernel code in general is not DAT-off safe and even
calling the disabled_wait() function might crash.

Introduce the call_nodat() macro that allows entering DAT-off mode,
calling an arbitrary function and restoring DAT mode back on. Switch all
invocations of DAT-off code to that macro and avoid the above described
scenario altogether.

Name the call_nodat() macro in small letters after the already existing
call_on_stack() and put it in the same header file.

Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
[hca@linux.ibm.com: some small modifications to call_nodat() macro]
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kdump: fix virtual vs physical address confusion  (Alexander Gordeev)
Fix virtual vs physical address confusion (virtual and physical addresses are currently the same, so this does not fix a bug yet). Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kdump: cleanup do_start_kdump() prototype and usage  (Alexander Gordeev)
Avoid unnecessary run-time and compile-time type conversions of do_start_kdump() function return value and parameter. Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/kexec: turn DAT mode off immediately before purgatory  (Alexander Gordeev)
The kernel code is not guaranteed DAT-off mode safe. Turn the DAT mode off immediately before entering the purgatory. Further, to avoid subtle side effects reset the system immediately before turning DAT mode off while making all necessary preparations in advance. Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/cpum_cf: remove function validate_ctr_auth() by inline code  (Thomas Richter)
Remove function validate_ctr_auth() and replace this very small function by its body. No functional change. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/cpum_cf: provide counter number to validate_ctr_version()  (Thomas Richter)
The first parameter of function validate_ctr_version() is a pointer to a large structure, but only the member hw_perf_event::config is used. Supply this structure member value in the function invocation. No functional change. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-19  s390/cpum_cf: introduce static CPU counter facility information  (Thomas Richter)
The CPU measurement facility counter information instruction qctri()
retrieves information about the available counter sets. The information
varies between machine generations, but is constant when running on a
particular machine. For example the CPU measurement facility counter
first and second version numbers determine the amount of counters in a
counter set. This information never changes.

The counter sets are identical for all CPUs in the system. It does not
matter which CPU performs the instruction. Authorization control of the
CPU Measurement facility can only be changed in the activation profile
while the LPAR is not running.

Retrieve the CPU measurement counter information at device driver
initialization time and use its constant values.

Function validate_ctr_version() verifies if a user provided CPU
Measurement counter facility counter is valid and defined. It now uses
the newly introduced static CPU counter facility information.

To avoid repeated recalculation of the counter set sizes (numbers of
counters per set), which never change on a running machine, calculate the
counter set size once at device driver initialization and store the
result in an array. Functions cpum_cf_make_setsize() and
cpum_cf_read_setsize() are introduced.

Finally remove the cpu_cf_events::info member and use the static CPU
counter facility information instead.

Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
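The idea can be sketched roughly as below; the array, the placeholder computation and the init function are illustrative stand-ins for the cpum_cf_make_setsize()/cpum_cf_read_setsize() helpers named above, not the exact driver code:

  /*
   * Sketch: counter facility information is constant on a running machine,
   * so query it once at driver init and compute the per-set sizes once
   * instead of recalculating them for every event.
   */
  static struct cpumf_ctr_info cf_info;           /* filled once via qctri() */
  static size_t cf_setsize[CPUMF_CTR_SET_MAX];    /* counters per counter set */

  static size_t cpum_cf_make_setsize(int set)
  {
          /* derive the number of counters in 'set' from cf_info
           * (first/second version numbers, authorization) ... */
          return 0;       /* placeholder in this sketch */
  }

  static int __init cpum_cf_init_static_info(void)
  {
          if (qctri(&cf_info))
                  return -ENODEV;
          for (int set = 0; set < CPUMF_CTR_SET_MAX; set++)
                  cf_setsize[set] = cpum_cf_make_setsize(set);
          return 0;
  }

  /* Users such as validate_ctr_version() only read the cached values. */
  static size_t cpum_cf_read_setsize(int set)
  {
          return cf_setsize[set];
  }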
2023-04-14  init: Mark [arch_call_]rest_init() __noreturn  (Josh Poimboeuf)
In preparation for improving objtool's handling of weak noreturn functions, mark start_kernel(), arch_call_rest_init(), and rest_init() __noreturn. Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/r/7194ed8a989a85b98d92e62df660f4a90435a723.1681342859.git.jpoimboe@kernel.org
2023-04-13  s390/debug: replace zero-length array with flexible-array member  (Heiko Carstens)
There are numerous patches which convert zero-length arrays with a flexible-array member. Convert the remaining s390 occurrences. Suggested-by: Gustavo A. R. Silva <gustavoars@kernel.org> Link: https://github.com/KSPP/linux/issues/78 Link: https://gcc.gnu.org/pipermail/gcc-patches/2022-October/602902.html Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
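The conversion pattern in question, shown on an illustrative structure (not the actual s390 debug structs):

  /* Before: GNU zero-length array as a variable-length tail. */
  struct record_old {
          unsigned int len;
          char data[0];
  };

  /* After: C99 flexible array member, which the compiler and
   * bounds-checking instrumentation understand properly. */
  struct record_new {
          unsigned int len;
          char data[];
  };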
2023-04-13  s390/cpum_cf: simplify pr_err() statement in cpumf_pmu_enable/disable  (Thomas Richter)
Simplify pr_err() statement into one line and omit return statement. No functional change. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-13  s390/cpum_cf: remove unnecessary copy_from_user call  (Thomas Richter)
Struct s390_ctrset_read userdata is filled by ioctl_read operation using put_user/copy_to_user. However, the ctrset->data value access is not performed anywhere during the ioctl_read operation. Remove unnecessary copy_from_user() call. No functional change. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Suggested-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-13  s390/cpum_cf: log bad return code of function cfset_all_copy  (Thomas Richter)
When function cfset_all_copy() fails, also log the bad return code in the debug statement (when turned on). No functional change. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-13  s390/module: create module allocations without exec permissions  (Heiko Carstens)
This is the s390 variant of commit 7dfac3c5f40e ("arm64: module: create module allocations without exec permissions"): "The core code manages the executable permissions of code regions of modules explicitly. It is no longer necessary to create the module vmalloc regions with RWX permissions. So create them with RW- permissions instead, which is preferred from a security perspective." Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-13  s390/ftrace: do not assume module_alloc() returns executable memory  (Heiko Carstens)
The ftrace code assumes in two places that module_alloc() returns executable memory. While this is currently true, this will be changed with a subsequent patch to follow other architectures which implement ARCH_HAS_STRICT_MODULE_RWX. Acked-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-13  s390/mm: use set_memory_*() helpers instead of open coding  (Heiko Carstens)
Given that set_memory_rox() and set_memory_rwnx() exist, it is possible to get rid of all open coded __set_memory() usages and replace them with proper helper calls everywhere. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
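The replacement pattern, sketched with illustrative wrapper names and addresses; the set_memory_*() helpers take a start address and a page count:

  #include <asm/set_memory.h>

  /* Instead of open-coding __set_memory() with flag combinations: */
  static void protect_text(unsigned long addr, int numpages)
  {
          set_memory_rox(addr, numpages);  /* read-only and executable */
  }

  static void unprotect_text(unsigned long addr, int numpages)
  {
          set_memory_rw(addr, numpages);   /* writable again */
          set_memory_nx(addr, numpages);   /* and non-executable */
  }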
2023-04-13  s390/ipl: fix physical-virtual confusion for diag308  (Nico Boehr)
Diag 308 subcodes expect a physical address as their parameter. This currently is not a bug, but in the future physical and virtual addresses might differ. Fix the confusion by doing a virtual-to-physical conversion in the exported diag308() and leave the assembly wrapper __diag308() alone. Note that several callers pass NULL as addr, so check for the case when NULL is passed and pass 0 to hardware since virt_to_phys(0) might be nonzero. Suggested-by: Marc Hartmayer <mhartmay@linux.ibm.com> Signed-off-by: Nico Boehr <nrb@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
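The essence of the fix is a small wrapper along these lines (a simplified sketch, not the complete ipl.c code):

  #include <asm/io.h>

  /* Convert the virtual address to a physical one before calling the
   * assembly wrapper __diag308(); NULL is passed through as 0 because
   * virt_to_phys(NULL) is not guaranteed to be 0. */
  int diag308(unsigned long subcode, void *addr)
  {
          return __diag308(subcode, addr ? virt_to_phys(addr) : 0);
  }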
2023-04-13  s390/kaslr: randomize module base load address  (Heiko Carstens)
Randomize the load address of modules in the kernel to make KASLR effective for modules. This is the s390 variant of commit e2b32e678513 ("x86, kaslr: randomize module base load address"). Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
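A sketch of how such a randomized base offset is typically derived, modeled on the x86 variant referenced above; the helper name, offset range and missing locking are simplifications, not necessarily what the s390 implementation does:

  #include <linux/random.h>

  static unsigned long module_load_offset;

  /* Pick a random, page-aligned offset once; module_alloc() then allocates
   * from MODULES_VADDR + offset up to MODULES_END. */
  static unsigned long get_module_load_offset(void)
  {
          if (kaslr_enabled() && !module_load_offset)
                  module_load_offset =
                          (get_random_u32_below(1024) + 1) * PAGE_SIZE;
          return module_load_offset;
  }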
2023-04-13  s390/kaslr: provide kaslr_enabled() function  (Heiko Carstens)
Just like other architectures, provide a kaslr_enabled() function instead of directly accessing a global variable. Also pass the renamed __kaslr_enabled variable from the decompressor to the kernel, so that kaslr_enabled() is available there too. This will be used by a subsequent patch which randomizes the module base load address. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
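The helper itself is, roughly, a thin wrapper around the variable handed over from the decompressor (sketch, not necessarily the exact header definition):

  #include <linux/types.h>

  extern int __kaslr_enabled;     /* set by the decompressor */

  static inline bool kaslr_enabled(void)
  {
          return __kaslr_enabled;
  }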
2023-04-04  s390: enable HAVE_ARCH_STACKLEAK  (Heiko Carstens)
Add support for the stackleak feature. Whenever the kernel returns to user space the kernel stack is filled with a poison value. Enabling this feature is quite expensive: e.g. after instrumenting the getpid() system call function to have a 4kb stack the result is an increased runtime of the system call by a factor of 3. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-04  s390: remove arch_early_irq_init()  (Heiko Carstens)
Allocate early async stack like other early stacks and get rid of arch_early_irq_init(). This way the async stack is allocated earlier, and handled like all other stacks. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-04  s390: use init_thread_union aka initial stack for the first process  (Heiko Carstens)
s390 is the only architecture which switches from the initial stack to a later on allocated different stack for the first process. This is (at least) problematic for the stackleak feature, which instruments functions to save the current stackpointer within the task structure of the running process. The stackleak code compares stack pointers of the current process - and doesn't expect that the kernel stack of a task can change. Even though the stackleak feature itself will not cause any harm, the assumption about kernel stacks being consistent is there, and only s390 doesn't follow that. Therefore switch back to use init_thread_union, just like all other architectures. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-04  s390/stack: set lowcore kernel stack pointer early  (Heiko Carstens)
Make sure the lowcore kernel stack pointer reflects the kernel stack of the current task as early as possible, instead of having a NULL pointer there. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-04  s390/stack: use STACK_INIT_OFFSET where possible  (Heiko Carstens)
Make STACK_INIT_OFFSET also available for assembler code, and use it everywhere instead of open-coding it at several places. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-04  s390/dumpstack: simplify in stack logic code  (Heiko Carstens)
The pattern for all in_<type>_stack() functions is the same; in particular the size of all stacks is the same. Simplify the code by passing only the stack address to the generic in_stack() helper, which then can assume a THREAD_SIZE sized stack. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
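The simplified check boils down to something like the sketch below; struct stack_info fields follow the s390 stacktrace header, the rest is illustrative:

  #include <asm/stacktrace.h>
  #include <asm/thread_info.h>

  static bool in_stack(unsigned long sp, struct stack_info *info,
                       enum stack_type type, unsigned long stack)
  {
          /* All stacks are THREAD_SIZE sized, so only the base is needed. */
          if (sp < stack || sp >= stack + THREAD_SIZE)
                  return false;
          info->type = type;
          info->begin = stack;
          info->end = stack + THREAD_SIZE;
          return true;
  }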
2023-04-04  s390: make extables read-only  (Vasily Gorbik)
Currently, exception tables are marked as ro_after_init. However, since they are sorted during compile time using scripts/sorttable, they can be moved to RO_DATA using the RO_EXCEPTION_TABLE_ALIGN macro, which is specifically designed for this purpose. Suggested-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-04  s390/entry: rely on long-displacement facility  (Vasily Gorbik)
Since commit 4efd417f298b ("s390: raise minimum supported machine generation to z10"), the long-displacement facility is assumed and required for the kernel. Clean up a couple of places in the entry code where long displacements can be used directly instead of a base register. However, there are still a few other places where a base register has to be used to extend short displacements for the second lowcore page access. Notably, boot/head.S still has to be built for z900, and in mcck_int_handler the spt and lbear instructions, which don't have long-displacement variants, need to access save areas at the second lowcore page. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-04-03  Merge 6.3-rc5 into driver-core-next  (Greg Kroah-Hartman)
We need the fixes in here for testing, as well as the driver core changes for documentation updates to build on. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-03-27  s390/ptrace: fix PTRACE_GET_LAST_BREAK error handling  (Heiko Carstens)
Return -EFAULT if put_user() for the PTRACE_GET_LAST_BREAK request fails, instead of silently ignoring it. Reviewed-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
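The shape of the fix, sketched: return the put_user() result (0 or -EFAULT) instead of discarding it. The wrapper function here is illustrative, the real code lives in the s390 ptrace request handler:

  #include <linux/uaccess.h>
  #include <linux/sched.h>

  static long ptrace_get_last_break(struct task_struct *child, unsigned long data)
  {
          return put_user(child->thread.last_break,
                          (unsigned long __user *)data);
  }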
2023-03-27  s390/cpum_sf: remove flag PERF_CPUM_SF_FULL_BLOCKS  (Thomas Richter)
This flag is used to process only fully populated sampling buffers when a
sampling event is stopped on a CPU. By default the last sampling buffer
is also scanned for samples even if the sampling block full indicator is
not set in the trailer entry of a sampling buffer page.

This flag can be set via the perf_event_attr::config1 field. It was never
used and never documented. It is useless now.

With PERF_CPUM_SF_FULL_BLOCKS: When a process is scheduled off the CPU,
the sampling is stopped and the samples are copied to the perf ring
buffer and marked invalid. When stopped at the last full sample buffer
page (which is achieved with the PERF_CPUM_SF_FULL_BLOCKS option), the
hardware sampling will resume at the first free sample entry in the
current, partially filled sample buffer.

Without PERF_CPUM_SF_FULL_BLOCKS (default behavior): The partially filled
last sample buffer is scanned and valid samples are saved to the perf
ring buffer. The valid samples are marked invalid. The sampling is
resumed when the process is scheduled on this CPU. Again the hardware
sampling will resume at the first free sample entry in the current,
partially filled sample buffer. Now the next interrupt handler invocation
scans the full sample block and saves the valid samples to the ring
buffer. It omits the invalid samples at the top of the buffer.

The default behavior is fully sufficient, therefore remove this feature.

Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Acked-by: Hendrik Brueckner <brueckner@linux.ibm.com>
Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2023-03-24  treewide: Trace IPIs sent via smp_send_reschedule()  (Valentin Schneider)
To be able to trace invocations of smp_send_reschedule(), rename the
arch-specific definitions of it to arch_smp_send_reschedule() and wrap it
into an smp_send_reschedule() that contains a tracepoint.

Changes to include the declaration of the tracepoint were driven by the
following coccinelle script:

  @func_use@
  @@
  smp_send_reschedule(...);

  @include@
  @@
  #include <trace/events/ipi.h>

  @no_include depends on func_use && !include@
  @@
    #include <...>
  +
  + #include <trace/events/ipi.h>

[csky bits]
[riscv bits]
Signed-off-by: Valentin Schneider <vschneid@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Guo Ren <guoren@kernel.org>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Link: https://lore.kernel.org/r/20230307143558.294354-6-vschneid@redhat.com
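The resulting split looks roughly like the sketch below; the tracepoint name and argument list are an assumption based on trace/events/ipi.h and may differ in detail from the actual series:

  #include <linux/kernel.h>
  #include <trace/events/ipi.h>

  /* Architectures now implement this instead of smp_send_reschedule(). */
  void arch_smp_send_reschedule(int cpu);

  /* Generic wrapper: emit the IPI tracepoint, then do the arch-specific send. */
  void smp_send_reschedule(int cpu)
  {
          /* assumed tracepoint signature: (cpu, callsite, callback) */
          trace_ipi_send_cpu(cpu, _RET_IP_, NULL);
          arch_smp_send_reschedule(cpu);
  }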
2023-03-21  vdso: Improve cmd_vdso_check to check all dynamic relocations  (Fangrui Song)
The actual intention is that no dynamic relocation exists in the VDSO.
For this the VDSO build validates that the resulting .so file does not
have any relocations which are specified via $(ARCH_REL_TYPE_ABS) per
architecture, which is fragile as e.g. ARM64 lacks an entry for
R_AARCH64_RELATIVE. Aside of that, ARCH_REL_TYPE_ABS is a misnomer as it
checks for relative relocations too.

However, some GNU ld ports produce unneeded R_*_NONE relocation entries.
If a port fails to determine the exact .rel[a].dyn size, the trailing
zeros become R_*_NONE relocations (e.g. ld's powerpc port recently fixed
this, see https://sourceware.org/bugzilla/show_bug.cgi?id=29540).
R_*_NONE relocations are generally a no-op in the dynamic loaders, so
just ignore them.

Remove the ARCH_REL_TYPE_ABS defines and just validate that the resulting
.so file does not contain any R_* relocation entries except R_*_NONE.

Signed-off-by: Fangrui Song <maskray@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> # for aarch64
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> # for vDSO, aarch64
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Link: https://lore.kernel.org/r/20230310190750.3323802-1-maskray@google.com
2023-03-21  ftrace: selftest: remove broken trace_direct_tramp  (Mark Rutland)
The ftrace selftest code has a trace_direct_tramp() function which it
uses as a direct call trampoline. This happens to work on x86, since the
direct call's return address is in the usual place, and can be returned
to via a RET, but in general the calling convention for direct calls is
different from regular function calls, and requires a trampoline written
in assembly.

On s390, regular function calls place the return address in %r14, and an
ftrace patch-site in an instrumented function places the trampoline's
return address (which is within the instrumented function) in %r0,
preserving the original %r14 value in-place. As a regular C function will
return to the address in %r14, using a C function as the trampoline
results in the trampoline returning to the caller of the instrumented
function, skipping the body of the instrumented function.

Note that the s390 issue is not detected by the ftrace selftest code, as
the instrumented function is trivial, and returning back into the caller
happens to be equivalent.

On arm64, regular function calls place the return address in x30, and an
ftrace patch-site in an instrumented function saves this into x9 and
places the trampoline's return address (within the instrumented function)
in x30. A regular C function will return to the address in x30, but will
not restore x9 into x30. Consequently, using a C function as the
trampoline results in returning to the trampoline's return address having
corrupted x30, such that when the instrumented function returns, it will
return back into itself.

To avoid future issues in this area, remove the trace_direct_tramp()
function, and require that each architecture with direct calls provides a
stub trampoline, named ftrace_stub_direct_tramp. This can be written to
handle the architecture's trampoline calling convention, and in future
could be used elsewhere (e.g. in the ftrace ops sample, to measure the
overhead of direct calls), so we may as well always build it in.

Link: https://lkml.kernel.org/r/20230321140424.345218-8-revest@chromium.org
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Li Huafei <lihuafei1@huawei.com>
Cc: Xu Kuohai <xukuohai@huawei.com>
Signed-off-by: Florent Revest <revest@chromium.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2023-03-20  s390/vdso: use __ALIGN instead of open coded .align  (Heiko Carstens)
Use __ALIGN instead of open coded .align statement to make sure that vdso code follows global kernel function alignment rules. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2023-03-20  s390/ftrace: move hotpatch trampolines to mcount.S  (Heiko Carstens)
Move the ftrace hotpatch trampolines to mcount.S. This makes it possible to use the standard SYM_CODE macros, which again makes sure that the hotpatch trampolines follow the function alignment rules of the rest of the kernel. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2023-03-20  Merge branch 'decompressor-memory-tracking' into features  (Heiko Carstens)
Vasily Gorbik says:

===================
Combine and generalize all methods for finding unused memory in the
decompressor, while decreasing complexity, add memory holes support,
while improving error handling (especially in low-memory conditions)
and debug-ability.
===================

Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2023-03-20  s390/kasan: move shadow mapping to decompressor  (Vasily Gorbik)
Since regular paging structs are initialized in the decompressor already,
move the KASAN shadow mapping to the decompressor as well. This helps to
avoid allocating the memory KASAN requires in one large chunk,
de-duplicates the paging struct creation code, and starts the
uncompressed kernel with KASAN instrumentation right away. This also
avoids all the pitfalls of accidentally calling KASAN-instrumented code
during KASAN initialization.

Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2023-03-20  s390/boot: rework decompressor reserved tracking  (Vasily Gorbik)
Currently several approaches for finding unused memory in the
decompressor are utilized. While "safe_addr" grows towards higher
addresses, the vmem code allocates paging structures top down. The former
requires careful ordering. In addition to that, the ipl report handling
code verifies potential intersections with secure boot certificates on
its own. Neither of the two approaches is memory-hole aware or consistent
with the other in low memory conditions.

To solve that, the existing approaches are generalized and combined
together, and online memory ranges are now taken into consideration.

physmem_info has been extended to contain reserved memory ranges. A new
set of functions allows handling reserves and finding unused memory. All
reserves and memory allocations are "typed". In case of an out of memory
condition the decompressor fails with detailed info on current reserved
ranges and usable online memory, for example:

  Linux version 6.2.0 ...
  Kernel command line: ... mem=100M
  Out of memory allocating 100000 bytes 100000 aligned in range 0:5800000
  Reserved memory ranges:
  0000000000000000 0000000003e33000 DECOMPRESSOR
  0000000003f00000 00000000057648a3 INITRD
  00000000063e0000 00000000063e8000 VMEM
  00000000063eb000 00000000063f4000 VMEM
  00000000063f7800 0000000006400000 VMEM
  0000000005800000 0000000006300000 KASAN
  Usable online memory ranges (info source: sclp read info [3]):
  0000000000000000 0000000006400000
  Usable online memory total: 6400000 Reserved: 61b10a3 Free: 24ef5d
  Call Trace:
  (sp:000000000002bd58 [<0000000000012a70>] physmem_alloc_top_down+0x60/0x14c)
   sp:000000000002bdc8 [<0000000000013756>] _pa+0x56/0x6a
   sp:000000000002bdf0 [<0000000000013bcc>] pgtable_populate+0x45c/0x65e
   sp:000000000002be90 [<00000000000140aa>] setup_vmem+0x2da/0x424
   sp:000000000002bec8 [<0000000000011c20>] startup_kernel+0x428/0x8b4
   sp:000000000002bf60 [<00000000000100f4>] startup_normal+0xd4/0xd4

physmem_alloc_range() allows finding free memory in a specified range. It
should be used for one-time allocations only, like finding the position
for amode31 and vmlinux. physmem_alloc_top_down() can be used just like
physmem_alloc_range(), but it also allows multiple allocations per type
and tries to merge sequential allocations together, which is useful for
paging structure allocations. If sequential allocations cannot be merged
together they are "chained", allowing easy per-type reserved range
enumeration and later migration to memblock. Extra "struct
reserved_range" entries allocated for chaining are not tracked or
reserved, but rely on the fact that both physmem_alloc_range() and
physmem_alloc_top_down() search for free memory only below the current
top-down allocator position.

All reserved ranges should be transferred to memblock before memblock
allocations are enabled. The startup code has been reordered to delay any
memory allocations until online memory ranges are detected and occupied
memory ranges are marked as reserved to be excluded from follow-up
allocations. Ipl report certificates are a special case: the ipl report
certificate list is checked together with other memory reserves until the
certificates are saved elsewhere. KASAN-required memory for shadow memory
allocation and mapping is reserved as one large chunk which is later
passed to the KASAN early initialization code.

Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
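A conceptual sketch of the typed reserve tracking described above; the real decompressor structures and allocator signatures differ in detail, so treat the names and parameter lists as illustrative:

  /* Each reservation is typed; follow-up allocations of the same type that
   * cannot be merged are chained together for later handover to memblock. */
  enum reserved_range_type {
          RR_DECOMPRESSOR,
          RR_INITRD,
          RR_VMEM,
          RR_KASAN,
  };

  struct reserved_range {
          unsigned long start;
          unsigned long end;
          struct reserved_range *chain;   /* next allocation of the same type */
  };

  /* Illustrative prototype: repeated top-down allocations that are merged
   * or chained per type, always below the current allocator position. */
  unsigned long physmem_alloc_top_down(enum reserved_range_type type,
                                       unsigned long size, unsigned long align);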