path: root/arch/powerpc/kernel
2021-03-29powerpc/32: Perform normal function call in exception entryChristophe Leroy
Now that the MMU is re-enabled before calling the transfer function, we no longer need the hack where the address of the handler and of the return function sit just after the 'bl' to the transfer function, which that function retrieves via a read relative to 'lr'. Do a regular call to the transfer function, then to the handler, then branch to the return function. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/73c00f3361ca280ef8fd7814c291bd1f5b6e2081.1615552867.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Refactor booke critical registers savingChristophe Leroy
Refactor booke critical registers saving into a few macros and move it into the exception prolog directly. Keep the dedicated transfer_to_handler entry points for the moment, although they are empty. They will be removed in a later patch to reduce churn. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/269171496f1f5f22afa621695bded22976c9d48d.1615552867.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Provide a name to exception prolog continuation in virtual modeChristophe Leroy
Now that the prolog continuation is separated out into .text, give it a name and mark it _ASM_NOKPROBE_SYMBOL. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d96374218815a6627e1e922ab2aba994050fb87a.1615552867.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Move exception prolog code into .text once MMU is back onChristophe Leroy
The space in the head section is rather constrained by the fact that exception vectors are spread every 0x100 bytes and sometimes we need to have "out of line" code because it doesn't fit. Now that we are enabling MMU early in the prolog, take that opportunity to jump somewhere else in the .text section where we don't have any space constraint. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/38b31ca4bc782a4985bc7952a675404d7ff27c24.1615552867.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Use START_EXCEPTION() as much as possibleChristophe Leroy
Everywhere where it is possible, use START_EXCEPTION(). This will help for proper exception init in future patches. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d47c1cc242bbbef8658327503726abdaef9b63ef.1615552867.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Add vmap_stack_overflow label inside the macroChristophe Leroy
For consistency, add inside the macro the label used by the exception prolog to branch to stack overflow processing. While at it, enclose the macro in #ifdef CONFIG_VMAP_STACK on the 8xx as already done on book3s/32. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/cf80056f5b946572ad98aea9d915dd25b23beda6.1615552867.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Statically initialise first emergency contextChristophe Leroy
The check of the emergency context initialisation in vmap_stack_overflow is buggy for the SMP case, as it compares r1 with 0 while in the SMP case r1 is offset by the CPU id. Instead of fixing it, just perform static initialisation of the first emergency context. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4a67ba422be75713286dca0c86ee0d3df2eb6dfa.1615552867.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Enable instruction translation at the same time as data translationChristophe Leroy
On 40x and 8xx, kernel text is pinned. On book3s/32, kernel text is mapped by BATs. Enable instruction translation at the same time as data translation; it makes things simpler. In the syscall handler, MSR_RI can also be set at the same time because srr0/srr1 are already saved and r1 is set properly. On booke, translation is always on, so in the end all PPC32 platforms have translation on early. Just update the MSR. Also update the comment in power_save_ppc32_restore(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5269c7e5f5d2117358af3a89744d75a116be27b0.1615552867.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Tag DAR in EXCEPTION_PROLOG_2 for the 8xxChristophe Leroy
The 8xx requires the DAR to be tagged with a magic value in order to fix up the DAR on faults generated by 'dcbX', as the 8xx forgets to update the DAR for those faults. Do the tagging as early as possible, that is before enabling the MMU. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/853a2e28ca7c5fc85617037030f99fe6070c9536.1615552867.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Always enable data translation in exception prologChristophe Leroy
If the code can use a stack in vm area, it can also use a stack in linear space. Simplify code by removing old non VMAP stack code on PPC32. That means the data translation is now re-enabled early in exception prolog in all cases, not only when using VMAP stacks. While we are touching EXCEPTION_PROLOG macros, remove the unused for_rtas parameter in EXCEPTION_PROLOG_1. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7cd6440c60a7e8f4f035b245c57720f51e225aae.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Remove ksp_limitChristophe Leroy
ksp_limit is there to help detect stack overflows. That is specific to ppc32, as it was removed from ppc64 in commit cbc9565ee826 ("powerpc: Remove ksp_limit on ppc64"). There are other means for detecting stack overflows. As ppc64 has proven to not need it, ppc32 should be able to do without it too. Let's remove it and simplify exception handling. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d789c3385b22e07bedc997613c0d26074cb513e7.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Use fast instruction to set MSR RI in exception prolog on 8xxChristophe Leroy
8xx has registers SPRN_NRI, SPRN_EID and SPRN_EIE for changing MSR EE and RI. Use SPRN_EID in exception prolog to set RI. On an 8xx, it reduces the null_syscall test by 3 cycles. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/65f6bda827c2a2abce71ea7e07543e791163da33.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Handle bookE debugging in C in exception entryChristophe Leroy
The handling of SPRN_DBCR0 and other registers can easily be done in C instead of ASM. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6d6b2497115890b90cfa72a2b3ab1da5f78123c2.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Entry cpu time accounting in CChristophe Leroy
There is no need for this to be in asm, use the new interrupt entry wrapper. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/daca4c3e05cdfe54d237162a0718b3aaca897662.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/32: Reconcile interrupts in CChristophe Leroy
There is no need for this to be in asm anymore, use the new interrupt entry wrapper. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/602e1ec47e15ca540f7edb9cf6feb6c249911bd6.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Prepare normal exception handler for enabling MMU earlyChristophe Leroy
Ensure normal exception handlers are able to manage stuff with the MMU enabled. For that we use CONFIG_VMAP_STACK related code, although there is no intention to really activate CONFIG_VMAP_STACK on powerpc 40x for the moment. 40x uses SPRN_DEAR instead of SPRN_DAR and SPRN_ESR instead of SPRN_DSISR. Take it into account in common macros. The 40x MSR value doesn't fit on 15 bits, so use LOAD_REG_IMMEDIATE() in common macros that will also be used with 40x. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/01963af2b83037bca270d7bf1336ffcf35da8282.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Prepare for enabling MMU in critical exception prologChristophe Leroy
In order to enable the MMU early in the exception prolog, implement CONFIG_VMAP_STACK principles in the critical exception prolog. There is no intention to use CONFIG_VMAP_STACK on 40x, but the related code will be used to enable the MMU early in exceptions in a later patch. Also address (critirq_ctx - PAGE_OFFSET) directly instead of using tophys() in order to save one instruction. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3fd75ee54c48307119acdbf66cfea966c1463bbd.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Reorder a few instructions in critical exception prologChristophe Leroy
In order to ease preparation for CONFIG_VMAP_STACK, reorder a few instructions; especially, save r1 into the stack frame earlier. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c895ecf958c86d1736bdd2ff6f36626b55f35fd2.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Save SRR0/SRR1 and r10/r11 earlier in critical exceptionChristophe Leroy
In order to be able to switch MMU on in exception prolog, save SRR0 and SRR1 earlier. Also save r10 and r11 into stack earlier to better match with the normal exception prolog. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/79a93f253d72dc97ac968c9c62b5066960b688ed.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Change CRITICAL_EXCEPTION_PROLOG macro to a gas macroChristophe Leroy
Change CRITICAL_EXCEPTION_PROLOG macro to a gas macro to remove the ugly ; and \ on each line. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/73291fb9dc9ec58182c27a40dfc3db204e3f4024.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/40x: Don't use SPRN_SPRG_SCRATCH0/1 in TLB miss handlersChristophe Leroy
SPRN_SPRG_SCRATCH5 is used to save SPRN_PID. SPRN_SPRG_SCRATCH6 is already available. SPRN_PID is only 8 bits. We have r12 that contains CR. We only need to preserve CR0, so we have space available in r12 to save PID. Keep PID in r12 and free up SPRN_SPRG_SCRATCH5. Then, in the TLB miss handlers, instead of using SPRN_SPRG_SCRATCH0 and SPRN_SPRG_SCRATCH1, use SPRN_SPRG_SCRATCH5 and SPRN_SPRG_SCRATCH6 to avoid future conflicts with the normal exception prologs. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4cdaa85d38e14d594ba902424060ec55babf2c42.1615552866.git.christophe.leroy@csgroup.eu
2021-03-29powerpc/traps: Declare unrecoverable_exception() as __noreturnChristophe Leroy
unrecoverable_exception() is never expected to return; most callers have an infinite loop in case it returns. Ensure it really never returns by terminating it with a BUG(), and declare it __noreturn. This allows GCC to really simplify functions calling it. In the example below, it avoids the stack frame in the likely fast path and avoids code duplication for the exit.

With this patch:

00000348 <interrupt_exit_kernel_prepare>:
 348:	81 43 00 84 	lwz     r10,132(r3)
 34c:	71 48 00 02 	andi.   r8,r10,2
 350:	41 82 00 2c 	beq     37c <interrupt_exit_kernel_prepare+0x34>
 354:	71 4a 40 00 	andi.   r10,r10,16384
 358:	40 82 00 20 	bne     378 <interrupt_exit_kernel_prepare+0x30>
 35c:	80 62 00 70 	lwz     r3,112(r2)
 360:	74 63 00 01 	andis.  r3,r3,1
 364:	40 82 00 28 	bne     38c <interrupt_exit_kernel_prepare+0x44>
 368:	7d 40 00 a6 	mfmsr   r10
 36c:	7c 11 13 a6 	mtspr   81,r0
 370:	7c 12 13 a6 	mtspr   82,r0
 374:	4e 80 00 20 	blr
 378:	48 00 00 00 	b       378 <interrupt_exit_kernel_prepare+0x30>
 37c:	94 21 ff f0 	stwu    r1,-16(r1)
 380:	7c 08 02 a6 	mflr    r0
 384:	90 01 00 14 	stw     r0,20(r1)
 388:	48 00 00 01 	bl      388 <interrupt_exit_kernel_prepare+0x40>
			388: R_PPC_REL24	unrecoverable_exception
 38c:	38 e2 00 70 	addi    r7,r2,112
 390:	3d 00 00 01 	lis     r8,1
 394:	7c c0 38 28 	lwarx   r6,0,r7
 398:	7c c6 40 78 	andc    r6,r6,r8
 39c:	7c c0 39 2d 	stwcx.  r6,0,r7
 3a0:	40 a2 ff f4 	bne     394 <interrupt_exit_kernel_prepare+0x4c>
 3a4:	38 60 00 01 	li      r3,1
 3a8:	4b ff ff c0 	b       368 <interrupt_exit_kernel_prepare+0x20>

Without this patch:

00000348 <interrupt_exit_kernel_prepare>:
 348:	94 21 ff f0 	stwu    r1,-16(r1)
 34c:	93 e1 00 0c 	stw     r31,12(r1)
 350:	7c 7f 1b 78 	mr      r31,r3
 354:	81 23 00 84 	lwz     r9,132(r3)
 358:	71 2a 00 02 	andi.   r10,r9,2
 35c:	41 82 00 34 	beq     390 <interrupt_exit_kernel_prepare+0x48>
 360:	71 29 40 00 	andi.   r9,r9,16384
 364:	40 82 00 28 	bne     38c <interrupt_exit_kernel_prepare+0x44>
 368:	80 62 00 70 	lwz     r3,112(r2)
 36c:	74 63 00 01 	andis.  r3,r3,1
 370:	40 82 00 3c 	bne     3ac <interrupt_exit_kernel_prepare+0x64>
 374:	7d 20 00 a6 	mfmsr   r9
 378:	7c 11 13 a6 	mtspr   81,r0
 37c:	7c 12 13 a6 	mtspr   82,r0
 380:	83 e1 00 0c 	lwz     r31,12(r1)
 384:	38 21 00 10 	addi    r1,r1,16
 388:	4e 80 00 20 	blr
 38c:	48 00 00 00 	b       38c <interrupt_exit_kernel_prepare+0x44>
 390:	7c 08 02 a6 	mflr    r0
 394:	90 01 00 14 	stw     r0,20(r1)
 398:	48 00 00 01 	bl      398 <interrupt_exit_kernel_prepare+0x50>
			398: R_PPC_REL24	unrecoverable_exception
 39c:	80 01 00 14 	lwz     r0,20(r1)
 3a0:	81 3f 00 84 	lwz     r9,132(r31)
 3a4:	7c 08 03 a6 	mtlr    r0
 3a8:	4b ff ff b8 	b       360 <interrupt_exit_kernel_prepare+0x18>
 3ac:	39 02 00 70 	addi    r8,r2,112
 3b0:	3d 40 00 01 	lis     r10,1
 3b4:	7c e0 40 28 	lwarx   r7,0,r8
 3b8:	7c e7 50 78 	andc    r7,r7,r10
 3bc:	7c e0 41 2d 	stwcx.  r7,0,r8
 3c0:	40 a2 ff f4 	bne     3b4 <interrupt_exit_kernel_prepare+0x6c>
 3c4:	38 60 00 01 	li      r3,1
 3c8:	4b ff ff ac 	b       374 <interrupt_exit_kernel_prepare+0x2c>

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1e883e9d93fdb256853d1434c8ad77c257349b2d.1615552866.git.christophe.leroy@csgroup.eu
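A minimal sketch of the resulting shape (messages abridged; the real body lives in traps.c):

	void __noreturn unrecoverable_exception(struct pt_regs *regs)
	{
		pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
			 regs->trap, regs->nip, regs->msr);
		/* BUG() never returns, which makes the __noreturn promise hold */
		BUG();
	}

With __noreturn visible at the call site, GCC can assume nothing after the call executes, which is what produces the slimmer fast path in the listing above.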
2021-03-29powerpc/uprobes: Validation for prefixed instructionRavi Bangoria
As per ISA 3.1, a prefixed instruction must not cross a 64-byte boundary, so don't allow a uprobe on such a prefixed instruction. There are two ways the probed instruction is changed in mapped pages. First, when the uprobe is activated, it searches for all the relevant pages and replaces the instruction in them. In this case, if the probe is on a 64-byte-unaligned prefixed instruction, error out directly. Second, when the uprobe is already active and the user maps a relevant page via mmap(), the instruction is replaced via the mmap() code path. But because the uprobe is invalid, the entire mmap() operation cannot be stopped. In this case, just print an error and continue. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210311091538.368590-1-ravi.bangoria@linux.ibm.com
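A sketch of the activation-time check, assuming the arch_uprobe_analyze_insn() hook and the ppc_inst_prefixed() helper (exact signatures may differ slightly):

	/* A prefixed instruction is 8 bytes long; if it starts at offset 60
	 * of a 64-byte block it straddles the boundary ISA 3.1 forbids. */
	int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe,
				     struct mm_struct *mm, unsigned long addr)
	{
		if (ppc_inst_prefixed(auprobe->insn) && (addr & 0x3f) == 60) {
			pr_info_ratelimited("Cannot register a uprobe on 64 byte unaligned prefixed instruction\n");
			return -EINVAL;
		}
		return 0;
	}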
2021-03-29powerpc/signal: Use __get_user() to copy sigset_tChristopher M. Riedl
Usually sigset_t is exactly 8B which is a "trivial" size and does not warrant using __copy_from_user(). Use __get_user() directly in anticipation of future work to remove the trivial size optimizations from __copy_from_user(). The ppc32 implementation of get_sigset_t() previously called copy_from_user() which, unlike __copy_from_user(), calls access_ok(). Replacing this w/ __get_user() (no access_ok()) is fine here since both callsites in signal_32.c are preceded by an earlier access_ok(). Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-11-cmr@codefail.de
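The resulting 64-bit helper is roughly the following sketch (the ppc32 variant reads two 32-bit halves instead):

	static inline int get_sigset_t(sigset_t *set, const sigset_t __user *uset)
	{
		/* sigset_t is one 8-byte word here: a single __get_user() suffices */
		return __get_user(set->sig[0], (u64 __user *)&uset->sig[0]);
	}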
2021-03-29powerpc/signal64: Rewrite rt_sigreturn() to minimise uaccess switchesDaniel Axtens
Add uaccess blocks and use the 'unsafe' versions of functions doing user access where possible to reduce the number of times uaccess has to be opened/closed. Co-developed-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-10-cmr@codefail.de
2021-03-29powerpc/signal64: Rewrite handle_rt_signal64() to minimise uaccess switchesDaniel Axtens
Add uaccess blocks and use the 'unsafe' versions of functions doing user access where possible to reduce the number of times uaccess has to be opened/closed. There is no 'unsafe' version of copy_siginfo_to_user, so move it slightly to allow for a "longer" uaccess block. Co-developed-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-9-cmr@codefail.de
2021-03-29powerpc/signal64: Replace restore_sigcontext() w/ unsafe_restore_sigcontext()Christopher M. Riedl
Previously restore_sigcontext() performed a costly KUAP switch on every uaccess operation. These repeated uaccess switches cause a significant drop in signal handling performance. Rewrite restore_sigcontext() to assume that a userspace read access window is open by replacing all uaccess functions with their 'unsafe' versions. Modify the callers to first open, call unsafe_restore_sigcontext(), and then close the uaccess window. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-8-cmr@codefail.de
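The caller pattern looks roughly like this sketch (illustrative names; only GPRs shown):

	static long restore_gprs_sketch(struct pt_regs *regs,
					struct sigcontext __user *sc)
	{
		int i;

		/* one KUAP switch to open the read window... */
		if (!user_read_access_begin(sc, sizeof(*sc)))
			return -EFAULT;

		/* ...then any number of unsafe accesses, faulting to 'fail' */
		for (i = 0; i < 32; i++)
			unsafe_get_user(regs->gpr[i], &sc->gp_regs[i], fail);

		user_read_access_end();	/* ...and one switch to close it */
		return 0;
	fail:
		user_read_access_end();
		return -EFAULT;
	}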
2021-03-29powerpc/signal64: Replace setup_sigcontext() w/ unsafe_setup_sigcontext()Christopher M. Riedl
Previously setup_sigcontext() performed a costly KUAP switch on every uaccess operation. These repeated uaccess switches cause a significant drop in signal handling performance. Rewrite setup_sigcontext() to assume that a userspace write access window is open by replacing all uaccess functions with their 'unsafe' versions. Modify the callers to first open, call unsafe_setup_sigcontext() and then close the uaccess window. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-7-cmr@codefail.de
2021-03-29powerpc/signal64: Remove TM ifdefery in middle of if/else blockChristopher M. Riedl
Both rt_sigreturn() and handle_rt_signal_64() contain TM-related ifdefs which break-up an if/else block. Provide stubs for the ifdef-guarded TM functions and remove the need for an ifdef in rt_sigreturn(). Rework the remaining TM ifdef in handle_rt_signal64() similar to commit f1cf4f93de2f ("powerpc/signal32: Remove ifdefery in middle of if/else"). Unlike in the commit for ppc32, the ifdef can't be removed entirely since uc_transact in sigframe depends on CONFIG_PPC_TRANSACTIONAL_MEM. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-6-cmr@codefail.de
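The stub pattern is roughly this sketch (hypothetical name and arguments; the real TM helpers take more state):

	#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
	static long restore_tm_ctx(struct task_struct *tsk, struct ucontext __user *uc);
	#else
	/* Stub so the caller's if/else needs no #ifdef in the middle */
	static long restore_tm_ctx(struct task_struct *tsk, struct ucontext __user *uc)
	{
		return -EINVAL;	/* unreachable when TM support is compiled out */
	}
	#endif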
2021-03-29powerpc/signal64: Remove non-inline calls from setup_sigcontext()Christopher M. Riedl
The majority of setup_sigcontext() can be refactored to execute in an "unsafe" context, assuming an open uaccess window, except for some non-inline function calls. Move these out into a separate prepare_setup_sigcontext() function which must be called first, before opening up a uaccess window. Non-inline function calls should be avoided during a uaccess window for a few reasons:
- KUAP should be enabled for as much kernel code as possible. Opening a uaccess window disables KUAP, which means any code executed during this time contributes to a potential attack surface.
- Non-inline functions default to traceable, which means they are instrumented for ftrace. This adds more code which could run with KUAP disabled.
- Powerpc does not currently support the objtool UACCESS checks. All code running with uaccess must be audited manually, which means: less code -> less work -> fewer problems (in theory).
A follow-up commit converts setup_sigcontext() to be "unsafe". Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-4-cmr@codefail.de
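The split looks roughly like this sketch (the real prepare_setup_sigcontext() also deals with Altivec/VSX state):

	/* Runs before the uaccess window opens: non-inline, traceable calls
	 * such as flush_fp_to_thread() still execute with KUAP enabled. */
	static void prepare_setup_sigcontext(struct task_struct *tsk)
	{
		flush_fp_to_thread(tsk);
	#ifdef CONFIG_ALTIVEC
		flush_altivec_to_thread(tsk);
	#endif
	}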
2021-03-29powerpc/signal: Add unsafe_copy_{vsx, fpr}_from_user()Christopher M. Riedl
Reuse the "safe" implementation from signal.c but call unsafe_get_user() directly in a loop to avoid the intermediate copy into a local buffer. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-3-cmr@codefail.de
2021-03-26powerpc/ptrace: Convert gpr32_set_common() to user access blockChristophe Leroy
Use a user access block in gpr32_set_common() instead of repetitive __get_user() calls, which imply repetitive KUAP open/close. To get it clean, force inlining of the small set of tiny functions called inside the block. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bdcb8652c3bb4ab5b8b3bfd08147434be8fc04c9.1615398498.git.christophe.leroy@csgroup.eu
2021-03-26powerpc/syscalls: Use sys_old_select() in ppc_select()Christophe Leroy
Instead of open-coding the copy of parameters, use the generic sys_old_select(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4de983ad254739da1fe6e9f273baf387b7043ae0.1615398498.git.christophe.leroy@csgroup.eu
2021-03-24powerpc/ptrace: Remove duplicate check from pt_regs_check()Denis Efremov
"offsetof(struct pt_regs, msr) == offsetof(struct user_pt_regs, msr)" checked in pt_regs_check() twice in a row. Remove the second check. Signed-off-by: Denis Efremov <efremov@linux.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210305112807.26299-1-efremov@linux.com
2021-03-24powerpc: Remove duplicate includesZhang Yunkai
asm/tm.h included in traps.c is duplicated. It is also included on the 62nd line.
asm/udbg.h included in setup-common.c is duplicated. It is also included on the 61st line.
asm/bug.h included in arch/powerpc/include/asm/book3s/64/mmu-hash.h is duplicated. It is also included on the 12th line.
asm/tlbflush.h included in arch/powerpc/include/asm/pgtable.h is duplicated. It is also included on the 11th line.
asm/page.h included in arch/powerpc/include/asm/thread_info.h is duplicated. It is also included on the 13th line.
Signed-off-by: Zhang Yunkai <zhang.yunkai@zte.com.cn> [mpe: Squash together from multiple commits] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2021-03-24powerpc/prom: Mark identical_pvr_fixup as __initNathan Chancellor
If identical_pvr_fixup() is not inlined, there are two modpost warnings:

WARNING: modpost: vmlinux.o(.text+0x54e8): Section mismatch in reference from the function identical_pvr_fixup() to the function .init.text:of_get_flat_dt_prop()
The function identical_pvr_fixup() references the function __init of_get_flat_dt_prop(). This is often because identical_pvr_fixup lacks a __init annotation or the annotation of of_get_flat_dt_prop is wrong.

WARNING: modpost: vmlinux.o(.text+0x551c): Section mismatch in reference from the function identical_pvr_fixup() to the function .init.text:identify_cpu()
The function identical_pvr_fixup() references the function __init identify_cpu(). This is often because identical_pvr_fixup lacks a __init annotation or the annotation of identify_cpu is wrong.

identical_pvr_fixup() calls two functions marked as __init and is only called by a function marked as __init, so it should be marked as __init as well. At the same time, remove the inline keyword as it is not necessary to inline this function. The compiler is still free to do so if it feels it is worthwhile since commit 889b3c1245de ("compiler: remove CONFIG_OPTIMIZE_INLINING entirely"). Fixes: 14b3d926a22b ("[POWERPC] 4xx: update 440EP(x)/440GR(x) identical PVR issue workaround") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/1316 Link: https://lore.kernel.org/r/20210302200829.2680663-1-nathan@kernel.org
2021-03-24powerpc/fadump: Mark fadump_calculate_reserve_size as __initNathan Chancellor
If fadump_calculate_reserve_size() is not inlined, there is a modpost warning:

WARNING: modpost: vmlinux.o(.text+0x5196c): Section mismatch in reference from the function fadump_calculate_reserve_size() to the function .init.text:parse_crashkernel()
The function fadump_calculate_reserve_size() references the function __init parse_crashkernel(). This is often because fadump_calculate_reserve_size lacks a __init annotation or the annotation of parse_crashkernel is wrong.

fadump_calculate_reserve_size() calls parse_crashkernel(), which is marked as __init, and fadump_calculate_reserve_size() is called from within fadump_reserve_mem(), which is also marked as __init. Mark fadump_calculate_reserve_size() as __init to fix the section mismatch. Additionally, remove the inline keyword as it is not necessary to inline this function; the compiler is still free to do so if it feels it is worthwhile since commit 889b3c1245de ("compiler: remove CONFIG_OPTIMIZE_INLINING entirely"). Fixes: 11550dc0a00b ("powerpc/fadump: reuse crashkernel parameter for fadump memory reservation") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/1300 Link: https://lore.kernel.org/r/20210302195013.2626335-1-nathan@kernel.org
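The fix boils down to adding the annotation, roughly as in this sketch (the real function computes a fadump-specific fallback when no crashkernel= value is given):

	static unsigned long __init fadump_calculate_reserve_size(void)
	{
		unsigned long long base, size;

		/* calling the __init parse_crashkernel() is now legitimate */
		if (!parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
				       &size, &base) && size > 0)
			return size;
		return 0;
	}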
2021-03-24powerpc: Fix spelling of "droping" to "dropping" in traps.cBhaskar Chowdhury
s/droping/dropping/ Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210224075547.763063-1-unixbhaskar@gmail.com
2021-03-24powerpc: remove unneeded semicolonJiapeng Chong
Fix the following coccicheck warnings: ./arch/powerpc/kernel/prom_init.c:2986:2-3: Unneeded semicolon. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1614151761-53721-1-git-send-email-jiapeng.chong@linux.alibaba.com
2021-03-17quota: wire up quotactl_pathSascha Hauer
Wire up the quotactl_path syscall added in the previous patch. Link: https://lore.kernel.org/r/20210304123541.30749-3-s.hauer@pengutronix.de Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz>
2021-03-14powerpc/vdso32: Add missing _restgpr_31_x to fix build failureChristophe Leroy
With some defconfig including CONFIG_CC_OPTIMIZE_FOR_SIZE, (for instance mvme5100_defconfig and ps3_defconfig), gcc 5 generates a call to _restgpr_31_x. Until recently it went unnoticed, but commit 42ed6d56ade2 ("powerpc/vdso: Block R_PPC_REL24 relocations") made it rise to the surface. Provide that function (copied from lib/crtsavres.S) in gettimeofday.S Fixes: ab037dd87a2f ("powerpc/vdso: Switch VDSO to generic C implementation.") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a7aa198a88bcd33c6e35e99f70f86c7b7f2f9440.1615270757.git.christophe.leroy@csgroup.eu
2021-03-12powerpc/traps: unrecoverable_exception() is not an interrupt handlerChristophe Leroy
unrecoverable_exception() is called from interrupt handlers or after an interrupt handler has failed. Make it a standard function to avoid doubling the actions performed on interrupt entry (e.g.: user time accounting). Fixes: 3a96570ffceb ("powerpc: convert interrupt handlers to use wrappers") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ae96c59fa2cb7f24a8929c58cfa2c909cb8ff1f1.1615291471.git.christophe.leroy@csgroup.eu
2021-03-10powerpc/64s/exception: Clean up a missed SRR specifierDaniel Axtens
Nick's patch cleaning up the SRR specifiers in exception-64s.S missed a single instance of EXC_HV_OR_STD. Clean that up. Caught by clang's integrated assembler. Fixes: 3f7fbd97d07d ("powerpc/64s/exception: Clean up SRR specifiers") Signed-off-by: Daniel Axtens <dja@axtens.net> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210225031006.1204774-2-dja@axtens.net
2021-03-08printk: introduce a kmsg_dump iteratorJohn Ogness
Rather than storing the iterator information in the registered kmsg_dumper structure, create a separate iterator structure. The kmsg_dump_iter structure can reside on the stack of the caller, thus allowing lockless use of the kmsg_dump functions. Update code that accesses the kernel logs using the kmsg_dumper structure to use the new kmsg_dump_iter structure. For kmsg_dumpers, this also means adding a call to kmsg_dump_rewind() to initialize the iterator. All this is in preparation for removal of @logbuf_lock. Signed-off-by: John Ogness <john.ogness@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> # pstore Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20210303101528.29901-13-john.ogness@linutronix.de
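A dumper using the new stack-resident iterator looks roughly like this sketch (persist_line() is a hypothetical sink):

	static void my_kmsg_dump(struct kmsg_dumper *dumper,
				 enum kmsg_dump_reason reason)
	{
		struct kmsg_dump_iter iter;	/* lives on the caller's stack */
		char line[256];
		size_t len;

		kmsg_dump_rewind(&iter);	/* dumpers now init the iterator */
		while (kmsg_dump_get_line(&iter, true, line, sizeof(line), &len))
			persist_line(line, len);
	}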
2021-03-01powerpc/syscall: Force inlining of __prep_irq_for_enabled_exit()Christophe Leroy
As reported by the kernel test robot, a randconfig with a high amount of debugging options can lead to a build failure for an undefined reference to replay_soft_interrupts() on ppc32. This is due to gcc not seeing that __prep_irq_for_enabled_exit() always returns true on ppc32, because it doesn't inline it for some reason. Force inlining of __prep_irq_for_enabled_exit() to fix the build. Fixes: 344bb20b159d ("powerpc/syscall: Make interrupt.c buildable on PPC32") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/53f3a1f719441761000c41154602bf097d4350b5.1614148356.git.christophe.leroy@csgroup.eu
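The fix is essentially the attribute swap sketched below (body illustrative; the real one handles the ppc64 soft-mask state):

	/* was: static inline bool __prep_irq_for_enabled_exit(bool clear_ri) */
	static __always_inline bool __prep_irq_for_enabled_exit(bool clear_ri)
	{
		if (!IS_ENABLED(CONFIG_PPC64))
			return true;	/* constant once inlined, so GCC can drop
					 * the replay_soft_interrupts() call site */
		/* ppc64 soft-mask handling elided */
		return false;
	}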
2021-03-01powerpc/603: Fix protection of user pages mapped with PROT_NONEChristophe Leroy
On book3s/32, page protection is defined by the PP bits in the PTE, which provide the following protection depending on the access keys defined in the matching segment register:
- PP 00 means RW with key 0 and N/A with key 1.
- PP 01 means RW with key 0 and RO with key 1.
- PP 10 means RW with both key 0 and key 1.
- PP 11 means RO with both key 0 and key 1.
Since the implementation of kernel userspace access protection, PP bits have been set as follows:
- PP00 for pages without _PAGE_USER
- PP01 for pages with _PAGE_USER and _PAGE_RW
- PP11 for pages with _PAGE_USER and without _PAGE_RW
For kernelspace segments, kernel accesses are performed with key 0 and user accesses are performed with key 1. As PP00 is used for non _PAGE_USER pages, the user can't access kernel pages not flagged _PAGE_USER while the kernel can. For userspace segments, both kernel and user accesses are performed with key 0, therefore pages not flagged _PAGE_USER are still accessible to the user. This shouldn't be an issue, because userspace is expected to be accessible to the user. But unlike most other architectures, powerpc implements PROT_NONE protection by removing the _PAGE_USER flag instead of flagging the page as not valid. This means that pages in userspace that are not flagged _PAGE_USER shall remain inaccessible. To get the expected behaviour, just mimic other architectures in the TLB miss handler by checking _PAGE_USER permission on userspace accesses as if it was the _PAGE_PRESENT bit. Note that this problem exists only on 603 cores. The 604+ have a hash table, and the hash_page() function already implements the verification of _PAGE_USER permission on userspace pages. Fixes: f342adca3afc ("powerpc/32s: Prepare Kernel Userspace Access Protection") Cc: stable@vger.kernel.org # v5.2+ Reported-by: Christoph Plattner <christoph.plattner@thalesgroup.com> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4a0c6e3bb8f0c162457bf54d9bc6fd8d7b55129f.1612160907.git.christophe.leroy@csgroup.eu
2021-02-27Merge tag 'io_uring-worker.v3-2021-02-25' of git://git.kernel.dk/linux-blockLinus Torvalds
Pull io_uring thread rewrite from Jens Axboe:
"This converts the io-wq workers to be forked off the tasks in question instead of being kernel threads that assume various bits of the original task identity. This kills > 400 lines of code from io_uring/io-wq, and it's the worst part of the code. We've had several bugs in this area, and the worry is always that we could be missing some pieces for file types doing unusual things (recent /dev/tty example comes to mind, userfaultfd reads installing file descriptors is another fun one... - both of which need special handling, and I bet it's not the last weird oddity we'll find). With these identical workers, we can have full confidence that we're never missing anything. That, in itself, is a huge win. Outside of that, it's also more efficient since we're not wasting space and code on tracking state, or switching between different states.
I'm sure we're going to find little things to patch up after this series, but testing has been pretty thorough, from the usual regression suite to production. Any issue that may crop up should be manageable. There's also a nice series of further reductions we can do on top of this, but I wanted to get the meat of it out sooner rather than later.
The general worry here isn't that it's fundamentally broken. Most of the little issues we've found over the last week have been related to just changes in how thread startup/exit is done, since that's the main difference between using kthreads and these kinds of threads. In fact, if all goes according to plan, I want to get this into the 5.10 and 5.11 stable branches as well.
That said, the changes outside of io_uring/io-wq are:
- arch setup, simple one-liner to each arch copy_thread() implementation.
- Removal of net and proc restrictions for io_uring, they are no longer needed or useful"

* tag 'io_uring-worker.v3-2021-02-25' of git://git.kernel.dk/linux-block: (30 commits)
  io-wq: remove now unused IO_WQ_BIT_ERROR
  io_uring: fix SQPOLL thread handling over exec
  io-wq: improve manager/worker handling over exec
  io_uring: ensure SQPOLL startup is triggered before error shutdown
  io-wq: make buffered file write hashed work map per-ctx
  io-wq: fix race around io_worker grabbing
  io-wq: fix races around manager/worker creation and task exit
  io_uring: ensure io-wq context is always destroyed for tasks
  arch: ensure parisc/powerpc handle PF_IO_WORKER in copy_thread()
  io_uring: cleanup ->user usage
  io-wq: remove nr_process accounting
  io_uring: flag new native workers with IORING_FEAT_NATIVE_WORKERS
  net: remove cmsg restriction from io_uring based send/recvmsg calls
  Revert "proc: don't allow async path resolution of /proc/self components"
  Revert "proc: don't allow async path resolution of /proc/thread-self components"
  io_uring: move SQPOLL thread io-wq forked worker
  io-wq: make io_wq_fork_thread() available to other users
  io-wq: only remove worker from free_list, if it was there
  io_uring: remove io_identity
  io_uring: remove any grabbing of context
  ...
2021-02-25Merge tag 'kbuild-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuildLinus Torvalds
Pull Kbuild updates from Masahiro Yamada:
- Fix false-positive build warnings for ARCH=ia64 builds
- Optimize dictionary size for module compression with xz
- Check the compiler and linker versions in Kconfig
- Fix misuse of extra-y
- Support DWARF v5 debug info
- Clamp SUBLEVEL to 255 because stable releases 4.4.x and 4.9.x exceeded the limit
- Add generic syscall{tbl,hdr}.sh for cleanups across arches
- Minor cleanups of genksyms
- Minor cleanups of Kconfig

* tag 'kbuild-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (38 commits)
  initramfs: Remove redundant dependency of RD_ZSTD on BLK_DEV_INITRD
  kbuild: remove deprecated 'always' and 'hostprogs-y/m'
  kbuild: parse C= and M= before changing the working directory
  kbuild: reuse this-makefile to define abs_srctree
  kconfig: unify rule of config, menuconfig, nconfig, gconfig, xconfig
  kconfig: omit --oldaskconfig option for 'make config'
  kconfig: fix 'invalid option' for help option
  kconfig: remove dead code in conf_askvalue()
  kconfig: clean up nested if-conditionals in check_conf()
  kconfig: Remove duplicate call to sym_get_string_value()
  Makefile: Remove # characters from compiler string
  Makefile: reuse CC_VERSION_TEXT
  kbuild: check the minimum linker version in Kconfig
  kbuild: remove ld-version macro
  scripts: add generic syscallhdr.sh
  scripts: add generic syscalltbl.sh
  arch: syscalls: remove $(srctree)/ prefix from syscall tables
  arch: syscalls: add missing FORCE and fix 'targets' to make if_changed work
  gen_compile_commands: prune some directories
  kbuild: simplify access to the kernel's version
  ...
2021-02-24Merge tag 'x86-entry-2021-02-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull x86 irq entry updates from Thomas Gleixner:
"The irq stack switching was moved out of the ASM entry code in the course of the entry code consolidation. It ended up being suboptimal in various ways.
This reworks the X86 irq stack handling:
- Make the stack switching inline so the stackpointer manipulation is no longer at an easy to find place.
- Get rid of the unnecessary indirect call.
- Avoid the double stack switching in interrupt return and reuse the interrupt stack for softirq handling.
- An objtool fix for CONFIG_FRAME_POINTER=y builds where it got confused about the stack pointer manipulation"

* tag 'x86-entry-2021-02-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  objtool: Fix stack-swizzle for FRAME_POINTER=y
  um: Enforce the usage of asm-generic/softirq_stack.h
  x86/softirq/64: Inline do_softirq_own_stack()
  softirq: Move do_softirq_own_stack() to generic asm header
  softirq: Move __ARCH_HAS_DO_SOFTIRQ to Kconfig
  x86: Select CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
  x86/softirq: Remove indirection in do_softirq_own_stack()
  x86/entry: Use run_sysvec_on_irqstack_cond() for XEN upcall
  x86/entry: Convert device interrupts to inline stack switching
  x86/entry: Convert system vectors to irq stack macro
  x86/irq: Provide macro for inlining irq stack switching
  x86/apic: Split out spurious handling code
  x86/irq/64: Adjust the per CPU irq stack pointer by 8
  x86/irq: Sanitize irq stack tracking
  x86/entry: Fix instrumentation annotation
2021-02-23arch: ensure parisc/powerpc handle PF_IO_WORKER in copy_thread()Jens Axboe
In the arch addition of PF_IO_WORKER, I missed parisc and powerpc for some reason. Fix that up, ensuring they handle PF_IO_WORKER like they do PF_KTHREAD in copy_thread(). Reported-by: Bruno Goncalves <bgoncalv@redhat.com> Fixes: 4727dc20e042 ("arch: setup PF_IO_WORKER threads like PF_KTHREAD") Signed-off-by: Jens Axboe <axboe@kernel.dk>
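The per-arch hunk is a one-liner in copy_thread(); the powerpc side looks roughly like this sketch:

	/* Treat io-workers like kernel threads: no user register state yet */
	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
		memset(childregs, 0, sizeof(struct pt_regs));
		/* existing kernel-thread setup continues unchanged */
	}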