path: root/arch/x86/lib/crc-pclmul-template.S
2025-06-30  lib/crc: x86: Migrate optimized CRC code into lib/crc/  (Eric Biggers)

Move the x86-optimized CRC code from arch/x86/lib/crc* into its new
location in lib/crc/x86/, and wire it up in the new way.  This new way of
organizing the CRC code eliminates the need to artificially split the code
for each CRC variant into separate arch and generic modules, enabling
better inlining and dead code elimination.  For more details, see
"lib/crc: Prepare for arch-optimized code in subdirs of lib/crc/".

Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
Link: https://lore.kernel.org/r/20250607200454.73587-12-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
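To make the new organization concrete, here is a minimal sketch of the pattern this entry describes. All names below (lib/crc/x86/crc32.h, crc32_le_pclmul, CONFIG_CRC_OPTIMIZATIONS) are illustrative assumptions rather than the kernel's actual identifiers, and the real wiring may differ in detail:

    /* lib/crc/x86/crc32.h -- assumed arch override, header-only */
    static inline u32 crc32_le_arch(u32 crc, const u8 *p, size_t len)
    {
            if (len < 16)
                    return crc32_le_base(crc, p, len);   /* generic fallback */
            return crc32_le_pclmul(crc, p, len);         /* assumed [V]PCLMULQDQ dispatch */
    }

    /* lib/crc/crc32.c -- one generic translation unit per CRC family */
    #ifdef CONFIG_CRC_OPTIMIZATIONS                      /* assumed config gate */
    #include "x86/crc32.h"                               /* arch code is inlined here... */
    #else
    #define crc32_le_arch crc32_le_base                  /* ...or compiles away entirely */
    #endif

    u32 crc32_le(u32 crc, const u8 *p, size_t len)
    {
            return crc32_le_arch(crc, p, len);
    }
    EXPORT_SYMBOL(crc32_le);

Because the arch code is a header pulled into the generic translation unit instead of a separate module, the compiler can inline the fast path and discard whatever a given configuration never uses.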
2025-06-30  x86/crc: drop checks of CONFIG_AS_VPCLMULQDQ  (Eric Biggers)

Now that the minimum binutils version supports VPCLMULQDQ (and the minimum
clang version does too), there is no need to check for assembler support
before compiling code that uses these instructions.

Link: https://lore.kernel.org/r/20250531211318.83677-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
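For illustration, this is the kind of guard that is no longer needed. The helper below is hypothetical, while CONFIG_AS_VPCLMULQDQ, boot_cpu_has(), and the X86_FEATURE_* flags are the symbols the real code used:

    #include <linux/types.h>
    #include <asm/cpufeature.h>

    /* hypothetical helper deciding whether the VPCLMULQDQ paths may be used */
    static bool have_vpclmul(void)
    {
            /*
             * Previously the condition also had to include
             * "IS_ENABLED(CONFIG_AS_VPCLMULQDQ) &&" so that kernels built
             * with an older assembler never referenced the VPCLMULQDQ
             * functions.  With the raised binutils/clang baseline, only the
             * runtime CPU feature checks remain.
             */
            return boot_cpu_has(X86_FEATURE_VPCLMULQDQ) &&
                   boot_cpu_has(X86_FEATURE_AVX2);
    }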
2025-03-19  x86/crc: drop the avx10_256 functions and rename avx10_512 to avx512  (Eric Biggers)

Intel made a late change to the AVX10 specification that removes support
for a 256-bit maximum vector length and enumeration of the maximum vector
length.  AVX10 will imply a maximum vector length of 512 bits.  I.e. there
won't be any such thing as AVX10/256 or AVX10/512; there will just be
AVX10, and it will essentially just consolidate AVX512 features.

As a result of this new development, my strategy of providing both
*_avx10_256 and *_avx10_512 functions didn't turn out to be that useful.
The only remaining motivation for the 256-bit AVX512 / AVX10 functions is
to avoid downclocking on older Intel CPUs.  But I already wrote *_avx2
code too (primarily to support CPUs without AVX512), which performs almost
as well as *_avx10_256.  So we should just use that.

Therefore, remove the *_avx10_256 CRC functions, and rename the
*_avx10_512 CRC functions to *_avx512.  Make Ice Lake and Tiger Lake use
the *_avx2 functions instead of *_avx10_256 which they previously used.

Link: https://lore.kernel.org/r/20250319181316.91271-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
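The resulting selection order looks roughly like the sketch below. This is a simplification under stated assumptions: crc32_lsb_pclmul_sse is a real symbol (it appears in the objtool warnings quoted in the next entry), the avx512/avx2 names are inferred from the renames described here, and the real init code checks further CPU features and avoids 512-bit vectors on CPUs that downclock (such as Ice Lake and Tiger Lake), which this sketch does not model:

    #include <linux/types.h>
    #include <linux/linkage.h>
    #include <asm/cpufeature.h>

    /* assumed prototypes of the template-generated assembly functions */
    asmlinkage u32 crc32_lsb_pclmul_sse(u32 crc, const u8 *p, size_t len, const void *consts);
    asmlinkage u32 crc32_lsb_vpclmul_avx2(u32 crc, const u8 *p, size_t len, const void *consts);
    asmlinkage u32 crc32_lsb_vpclmul_avx512(u32 crc, const u8 *p, size_t len, const void *consts);

    typedef u32 (*crc32_lsb_func_t)(u32 crc, const u8 *p, size_t len, const void *consts);

    static crc32_lsb_func_t choose_crc32_lsb(void)
    {
            if (boot_cpu_has(X86_FEATURE_VPCLMULQDQ) &&
                boot_cpu_has(X86_FEATURE_AVX512VL))
                    return crc32_lsb_vpclmul_avx512;  /* formerly *_avx10_512 */
            if (boot_cpu_has(X86_FEATURE_VPCLMULQDQ) &&
                boot_cpu_has(X86_FEATURE_AVX2))
                    return crc32_lsb_vpclmul_avx2;    /* former *_avx10_256 users land here */
            return crc32_lsb_pclmul_sse;              /* baseline PCLMULQDQ+SSE4.1 */
    }

The real code installs the chosen function through static_call_update() rather than returning a pointer; see the static-call sketch after the last entry.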
2025-02-18  x86/crc: add ANNOTATE_NOENDBR to suppress objtool warnings  (Eric Biggers)

The assembly functions generated by crc-pclmul-template.S are called only
via static_call, so they do not need to begin with an endbr instruction.
But objtool still warns about a missing endbr by default.  Add
ANNOTATE_NOENDBR to suppress these warnings:

    vmlinux.o: warning: objtool: crc32_x86_init+0x1c0: relocation to !ENDBR: crc32_lsb_vpclmul_avx10_256+0x0
    vmlinux.o: warning: objtool: crc64_x86_init+0x183: relocation to !ENDBR: crc64_msb_vpclmul_avx10_256+0x0
    vmlinux.o: warning: objtool: crc_t10dif_x86_init+0x183: relocation to !ENDBR: crc16_msb_vpclmul_avx10_256+0x0
    vmlinux.o: warning: objtool: __SCK__crc32_lsb_pclmul+0x0: data relocation to !ENDBR: crc32_lsb_pclmul_sse+0x0
    vmlinux.o: warning: objtool: __SCK__crc64_lsb_pclmul+0x0: data relocation to !ENDBR: crc64_lsb_pclmul_sse+0x0
    vmlinux.o: warning: objtool: __SCK__crc64_msb_pclmul+0x0: data relocation to !ENDBR: crc64_msb_pclmul_sse+0x0
    vmlinux.o: warning: objtool: __SCK__crc16_msb_pclmul+0x0: data relocation to !ENDBR: crc16_msb_pclmul_sse+0x0

Fixes: 8d2d3e72e35b ("x86/crc: add "template" for [V]PCLMULQDQ based CRC functions")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Closes: https://lore.kernel.org/r/20250217170555.3d14df62@canb.auug.org.au/
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250217193230.100443-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
2025-02-10  x86/crc: add "template" for [V]PCLMULQDQ based CRC functions  (Eric Biggers)

The Linux kernel implements many variants of CRC, such as crc16,
crc_t10dif, crc32_le, crc32c, crc32_be, crc64_nvme, and crc64_be.  On x86,
except for crc32c which has special scalar instructions, the fastest way
to compute any of these CRCs on any message of length roughly >= 16 bytes
is to use the SIMD carryless multiplication instructions PCLMULQDQ or
VPCLMULQDQ.  Depending on the available CPU features this can mean
PCLMULQDQ+SSE4.1, VPCLMULQDQ+AVX2, VPCLMULQDQ+AVX10/256, or
VPCLMULQDQ+AVX10/512 (or the AVX512 equivalents to AVX10/*).  This results
in a total of 20+ CRC implementations being potentially needed to properly
optimize all CRCs that someone cares about for x86.  Besides crc32c,
currently only crc32_le and crc_t10dif are actually optimized for x86, and
they only use PCLMULQDQ, which means they can be 2-4x slower than what is
possible with VPCLMULQDQ.

Fortunately, at a high level the code that is needed for any [V]PCLMULQDQ
based CRC implementation is mostly the same.  Therefore, this patch
introduces an assembly macro that expands into the body of a [V]PCLMULQDQ
based CRC function for a given number of bits (8, 16, 32, or 64), bit
order (lsb or msb-first), vector length, and AVX level.  The function
expects to be passed a constants table, specific to the polynomial
desired, that was generated by the script previously added.  When two CRC
variants share the same number of bits and bit order, the same functions
can be reused, with only the constants table differing.

A new C header is also added to make it easy to integrate the new assembly
code using a static call.

The result is that it becomes straightforward to wire up an optimized
implementation of any CRC-8, CRC-16, CRC-32, or CRC-64 for x86.  Later
patches will wire up specific CRC variants.

Although this new template allows easily generating many functions, care
was taken to still keep the binary size fairly low.  Each generated
function is only 550 to 850 bytes depending on the CRC variant and target
CPU features.  And only one function per CRC variant is actually used at
runtime (since all functions support all lengths >= 16 bytes).

Note that a similar approach should also work for other architectures that
have carryless multiplication instructions, such as arm64.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20250210174540.161705-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
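To show how the new C header's static-call integration is meant to be used, here is a hedged sketch built directly on the generic static_call API. The static-call name crc32_lsb_pclmul and the crc32_lsb_pclmul_sse function appear in the objtool warnings quoted above; the constants-table name crc32_lsb_consts is an assumption, and the real header wraps all of this in its own helper macros:

    #include <linux/static_call.h>
    #include <linux/linkage.h>
    #include <linux/crc32.h>        /* crc32_le_base(), the generic table-based code */
    #include <asm/fpu/api.h>        /* kernel_fpu_begin()/kernel_fpu_end() */

    /* one template-generated function: CRC-32, lsb-first, PCLMULQDQ+SSE4.1,
     * taking the polynomial-specific constants table as its last argument */
    asmlinkage u32 crc32_lsb_pclmul_sse(u32 crc, const u8 *p, size_t len, const void *consts);

    /* assumed name for the constants table produced by the generator script */
    extern const u8 crc32_lsb_consts[];

    DEFINE_STATIC_CALL(crc32_lsb_pclmul, crc32_lsb_pclmul_sse);

    /* at init time a faster variant can be installed, e.g.:
     *     static_call_update(crc32_lsb_pclmul, crc32_lsb_vpclmul_avx512);
     */

    u32 crc32_le_arch(u32 crc, const u8 *p, size_t len)
    {
            u32 ret;

            /* the generated functions only handle len >= 16; real code also
             * verifies that the FPU is usable in the current context */
            if (len < 16)
                    return crc32_le_base(crc, p, len);

            kernel_fpu_begin();
            ret = static_call(crc32_lsb_pclmul)(crc, p, len, crc32_lsb_consts);
            kernel_fpu_end();
            return ret;
    }

Because the call goes through a static call rather than an indirect function pointer, the call sites are patched to direct calls, which is also why the assembly targets do not need to start with endbr (see the ANNOTATE_NOENDBR entry above).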