path: root/lib
Age  Commit message  Author
2021-06-14  Merge tag 'v5.13-rc6' into char-misc-next  (Greg Kroah-Hartman)

We need the fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-06-11  kunit: Add 'kunit_shutdown' option  (David Gow)

Add a new kernel command-line option, 'kunit_shutdown', which allows the user to specify that the kernel power off, halt, or reboot after completing all KUnit tests; this is very handy for running KUnit tests on UML or a VM, so that the UML/VM process exits cleanly immediately after running all tests, without needing a special initramfs.

Signed-off-by: David Gow <davidgow@google.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
Tested-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: Daniel Latypov <dlatypov@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>

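For illustration, invoking a UML kernel so that it powers off once the tests finish might look like the line below (the binary path and other arguments are assumptions, not from the commit; the accepted values per the text above are poweroff, halt, and reboot):

  ./linux mem=256M console=tty kunit_shutdown=poweroff
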
2021-06-11  kunit: Fix result propagation for parameterised tests  (David Gow)

When one parameter of a parameterised test failed, its failure would be propagated to the overall test, but not to the suite result (unless it was the last parameter). This is because test_case->success was being reset to the test->success result after each parameter was used, so a failing test's result would be overwritten by a non-failing result. The overall test result was handled in a third variable, test_result, but this was discarded after the status line was printed. Instead, just propagate the result after each parameter run.

Signed-off-by: David Gow <davidgow@google.com>
Fixes: fadb08e7c750 ("kunit: Support for Parameterized Testing")
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>

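A minimal sketch of the propagation the message describes, using the field names it mentions (the exact in-tree fix may differ):

  /* fold each parameter's result into the test case instead of
   * overwriting it, so one failing parameter fails the whole case: */
  test_case->success = test_case->success && test->success;
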
2021-06-10  bootconfig: Support mixing a value and subkeys under a key  (Masami Hiramatsu)

Support mixing a value and subkeys under a key. Since kernel cmdline options will support "aaa.bbb=value1 aaa.bbb.ccc=value2", it is better that bootconfig supports such configuration too. Note that this does not change the syntax itself, but just accepts mixed value and subkeys, e.g.

  key = value1
  key.subkey = value2

But this is not accepted:

  key {
      value1
      subkey = value2
  }

That would make value1 a subkey. Also, the order of the value node under a key is fixed: if there are a value and subkeys, the value is always the first child node of the key. Thus if the user specifies the subkeys first, e.g.

  key.subkey = value1
  key = value2

in the program (and in /proc/bootconfig) it will be shown as:

  key = value2
  key.subkey = value1

Link: https://lkml.kernel.org/r/162262194685.264090.7738574774030567419.stgit@devnote2
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

2021-06-10  bootconfig: Change array value to use child node  (Masami Hiramatsu)

It is not possible to put an array value with subkeys under a key node, because both the subkeys and the array elements use the "next" field of the xbc_node. Thus this changes array values to use the "child" field in the array case. The reason this change is split out on its own is to make it easy to test.

Link: https://lkml.kernel.org/r/162262193838.264090.16044473274501498656.stgit@devnote2
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

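A simplified picture of the linkage involved (illustrative only; the real struct xbc_node stores compact indices rather than pointers and has further fields):

  struct node {
          struct node *next;   /* sibling chain: subkeys under one parent */
          struct node *child;  /* first child; array elements now hang here */
  };
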
2021-06-10  csum_and_copy_to_pipe_iter(): leave handling of csum_state to caller  (Al Viro)

... since all the logic is already there for use by the iovec/kvec/etc. cases.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  clean up copy_mc_pipe_to_iter()  (Al Viro)

... and we don't need kmap_atomic() there - kmap_local_page() is fine.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  pipe_zero(): we don't need no stinkin' kmap_atomic()...  (Al Viro)

FWIW, memcpy_to_page() itself almost certainly ought to use kmap_local_page()...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: clean csum_and_copy_...() primitives up a bit  (Al Viro)

1) kmap_atomic() is not needed here, kmap_local_page() is enough.

2) No need to make

     sum = csum_block_add(sum, next, off);

   conditional upon next != 0 - adding 0 is a no-op as far as csum_block_add() is concerned.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  copy_page_from_iter(): don't need kmap_atomic() for kvec/bvec cases  (Al Viro)

kmap_local_page() is enough.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  copy_page_to_iter(): don't bother with kmap_atomic() for bvec/kvec cases  (Al Viro)

kmap_local_page() is enough there. Moreover, we can use _copy_to_iter() for the actual copying in those cases - there are no useful extra checks on the address we are copying from in that call.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iterate_xarray(): only on the first iteration might we get offset != 0  (Al Viro)

Recalculating offset on each iteration is pointless - on all subsequent passes through the loop it will be zero anyway.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  pull handling of ->iov_offset into iterate_{iovec,bvec,xarray}  (Al Viro)

Fewer arguments (by one, but still...) for the iterate_...() macros.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: make iterator callbacks use base and len instead of iovec  (Al Viro)

Iterator macros used to provide the arguments for step callbacks in a structure matching the flavour - iovec for ITER_IOVEC, kvec for ITER_KVEC and bio_vec for ITER_BVEC. That already broke down for ITER_XARRAY (bio_vec there); now that we are using the kvec callback for the bvec and xarray cases, we are always passing a pointer + length (void __user * + size_t for the ITER_IOVEC callback, void * + size_t for everything else).

Note that the original reason for bio_vec (page + offset + len) in the case of ITER_BVEC used to be that we did *not* want to kmap a page when all we wanted was e.g. to find the alignment of its subrange. Now all such users are gone, and the ones that are left want the page mapped anyway for actually copying the data.

So in all cases we have pointer + length, and there's no good reason for keeping those in struct iovec or struct kvec - we can just pass them to the callback separately. Again, less boilerplate in callbacks...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: make the amount already copied available to iterator callbacks  (Al Viro)

Making iterator macros keep track of the amount of data copied is pretty easy and it has several benefits:

1) we no longer need the mess like

     (from += v.iov_len) - v.iov_len

   in the callbacks - initial value + total amount copied so far would do just fine.

2) less obviously, we no longer need to remember the initial amount of data we wanted to copy; the loops in iterator macros are along the lines of

     wanted = bytes;
     while (bytes) {
         copy some
         bytes -= copied
         if short copy
             break
     }
     bytes = wanted - bytes;

   Replacement is

     offs = 0;
     while (bytes) {
         copy some
         offs += copied
         bytes -= copied
         if short copy
             break
     }
     bytes = offs;

   That wouldn't be a win per se, but unlike the initial value of bytes, the amount copied so far *is* useful in callbacks.

3) in some cases (csum_and_copy_..._iter()) we already had offs manually maintained by the callbacks. With that change we can drop that.

Less boilerplate and more readable code...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: get rid of separate bvec and xarray callbacks  (Al Viro)

After the previous commit we have
 * the xarray and bvec callbacks identical in all cases
 * both equivalent to the kvec callback wrapped into a kmap_local_page()/kunmap_local() pair.

So we can pass only two (iovec and kvec) callbacks to iterate_and_advance() and let iterate_{bvec,xarray} wrap it into kmap_local_page()/kunmap_local().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

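The wrapping pattern, sketched (kvec_step is a stand-in name for the kvec callback; the real macros are built from statement expressions rather than calls):

  void *kaddr = kmap_local_page(page);  /* temporary local mapping */
  n = kvec_step(kaddr + offset, len);   /* run the kvec callback on it */
  kunmap_local(kaddr);
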
2021-06-10  iov_iter: teach iterate_{bvec,xarray}() about possible short copies  (Al Viro)

... and now we finally can sort out the mess in _copy_mc_to_iter(). Provide a variant of iterate_and_advance() that does *NOT* ignore the return values of the bvec, xarray and kvec callbacks, and use that in _copy_mc_to_iter(). That gets rid of the magic in those callbacks - we used to need it so we'd get at least the right return value in case of failure halfway through.

As a bonus, now the iterator is advanced by the amount actually copied for all flavours. That's what the callers expect, and it used to be done correctly in the iovec and xarray cases. However, in the kvec and bvec cases the iterator had not been advanced on such failures, breaking the users. Fixed now...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iterate_bvec(): expand bvec.h macro forest, massage a bit  (Al Viro)

... incidentally, using a pointer instead of an index into an array (the only change here) trims half a kilobyte of .text...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: unify iterate_iovec and iterate_kvec  (Al Viro)

The differences between iterate_iovec and iterate_kvec are minor:
 * the kvec callback is treated as if it returned 0
 * initialization of __p is with i->iov and i->kvec resp.
which is trivially dealt with.

No code generation changes - the compiler is quite capable of turning

     left = ((void)(STEP), 0);
     __v.iov_len -= left;

(with no accesses to left downstream) and

     (void)(STEP);

into the same code.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: massage iterate_iovec and iterate_kvec to logic similar to iterate_bvec  (Al Viro)

Premature optimization is the root of all evil... Trying to unroll the first pass through the loop makes it harder to follow, and not just for readers - the compiler ends up generating worse code than it would on a "non-optimized" loop.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iterate_and_advance(): get rid of magic in case when n is 0  (Al Viro)

iov_iter_advance() needs to do some non-trivial work when it's given 0 as an argument (skip all empty iovecs, mostly). We used to implement it via iterate_and_advance(); we no longer do so, and for all other users of iterate_and_advance() zero length is a no-op.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  csum_and_copy_to_iter(): massage into form closer to csum_and_copy_from_iter()  (Al Viro)

Namely, have off counted starting from 0 rather than from csstate->off. To compensate we need to shift the initial value (csstate->sum) (rotate by 8 bits, as usual for csum) and do the same after we are finished adding the pieces up.

What we get out of that is a bit more redundancy in our variables - from is always equal to addr + off, which will be useful several commits down the road.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: replace iov_iter_copy_from_user_atomic() with iterator-advancing variant  (Al Viro)

The replacement is called copy_page_from_iter_atomic(); unlike the old primitive, callers do *not* need to do iov_iter_advance() after it. In case they end up consuming less than they'd been given, they need to do iov_iter_revert() on everything they had not consumed. That, however, needs to be done only on slow paths.

All in-tree callers converted. And that kills the last user of iterate_all_kinds().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

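A usage sketch of the new calling convention (variable names here are illustrative):

  size_t copied = copy_page_from_iter_atomic(page, offset, bytes, i);
  /* the iterator has already advanced by 'copied'; if we actually
   * consumed less (slow path only), give the rest back: */
  if (used < copied)
          iov_iter_revert(i, copied - used);
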
2021-06-10  [xarray] iov_iter_npages(): just use DIV_ROUND_UP()  (Al Viro)

The compiler is capable of recognizing division by a power of 2 and turning it into shifts.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

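Illustrative use of the helper for a byte range [off, off + size) (not necessarily the exact expression used in iov_iter_npages()):

  /* number of pages spanned by the range: */
  npages = DIV_ROUND_UP(off + size, PAGE_SIZE) - off / PAGE_SIZE;
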
2021-06-10  iov_iter_npages(): don't bother with iterate_all_kinds()  (Al Viro)

Note that in the bvec case pages can be compound ones - we can't just assume that each segment is covered by one (sub)page.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  get rid of iterate_all_kinds() in iov_iter_get_pages()/iov_iter_get_pages_alloc()  (Al Viro)

Here iterate_all_kinds() is used just to find the first (non-empty, in the case of iovec) segment, which can easily be done explicitly. Note that in the bvec case we can now get more than PAGE_SIZE worth of pages, in case we have a compound page in the bvec and a range that crosses a subpage boundary. The older behaviour had been to stop on that boundary; we used to get the right first page (for_each_bvec() took care of that), but that was all we'd got.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter_gap_alignment(): get rid of iterate_all_kinds()  (Al Viro)

For one thing, it's only used for iovec (and makes sense only for those). For another, here we don't care about iov_offset, since the beginning of the first segment and the end of the last one are ignored. So it makes a lot more sense to just walk through the iovec array...

We need to deal with the case of a truncated iov_iter, but unlike the situation with iov_iter_alignment() we don't care where the last segment ends - just which segment is the last one.

[fixed a braino spotted by Qian Cai <quic_qiancai@quicinc.com>]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter_alignment(): don't bother with iterate_all_kinds()  (Al Viro)

It's easier to go over the array manually. We need to watch out for a truncated iov_iter, though - the iovec array might cover more than i->count.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

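A simplified sketch of such a walk (first-segment iov_offset handling omitted; the capping by i->count is the point being made):

  unsigned long res = 0;
  size_t size = i->count;        /* the array may cover more than this */
  const struct iovec *iov;

  for (iov = i->iov; size; iov++) {
          size_t len = min(size, iov->iov_len);
          if (len)
                  res |= (unsigned long)iov->iov_base | len;
          size -= len;
  }
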
2021-06-10  sanitize iov_iter_fault_in_readable()  (Al Viro)

1) constify the iov_iter argument; we are not advancing it in this primitive.

2) cap the amount requested by the amount of data in the iov_iter. All existing callers should've been safe, but the check is really cheap, and doing it here makes for easier analysis as well as more consistent semantics among the primitives.

3) don't bother with iterate_iovec(). An explicit loop is not any harder to follow, and we get rid of standalone iterate_iovec() users - it's only used by iterate_and_advance() and (soon to be gone) iterate_all_kinds().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

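A simplified sketch of points 2) and 3) together (iov_offset handling and the exact return conventions omitted):

  bytes = min(bytes, i->count);          /* cap by what the iterator holds */
  for (p = i->iov; bytes; p++) {         /* plain loop over the iovec array */
          size_t len = min(bytes, p->iov_len);
          if (len && fault_in_pages_readable(p->iov_base, len))
                  return -EFAULT;
          bytes -= len;
  }
  return 0;
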
2021-06-10  iov_iter: optimize iov_iter_advance() for iovec and kvec  (Al Viro)

We can do better than the generic iterate_and_advance() for this one; inspired by bvec_iter_advance() (and massaged into that form by equivalent transformations).

[fixed a braino caught by kernel test robot <oliver.sang@intel.com>]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: separate direction from flavour  (Al Viro)

Instead of having them mixed in iter->type, use separate ->iter_type and ->data_source (u8 and bool resp.). And don't bother with a (pseudo-)bitmap for the former - micro-optimizations from being able to check if the flavour is one of two values are not worth the confusion for the optimizer. It can't prove that we never get e.g. ITER_IOVEC | ITER_PIPE, so we end up with an extra headache.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

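The resulting shape of the struct, roughly (remaining members elided; exact layout per the kernel source):

  struct iov_iter {
          u8 iter_type;       /* ITER_IOVEC, ITER_KVEC, ITER_BVEC, ... */
          bool data_source;   /* whether the iterator is a source of data */
          /* ... */
  };
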
2021-06-10  iov_iter_advance(): don't modify ->iov_offset for ITER_DISCARD  (Al Viro)

The field is not used for that flavour.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: reorder handling of flavours in primitives  (Al Viro)

iovec is the most common one; test for it first, and test explicitly rather than "not anything else". Replace all flavour checks with use of the iov_iter_is_...() helpers.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-10  iov_iter: switch ..._full() variants of primitives to use of iov_iter_revert()  (Al Viro)

Use the corresponding plain variants and revert on short copy. That's the way it should've been done from the very beginning, except that we didn't have iov_iter_revert() back then...

[fixed another braino caught by Qian Cai <quic_qiancai@quicinc.com>]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

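The pattern, sketched for one representative _full() variant (close to what the description implies; not necessarily the exact in-tree code):

  static __always_inline
  bool copy_from_iter_full(void *addr, size_t bytes, struct iov_iter *i)
  {
          size_t copied = copy_from_iter(addr, bytes, i);

          if (likely(copied == bytes))
                  return true;
          iov_iter_revert(i, copied);  /* short copy: undo, report failure */
          return false;
  }
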
2021-06-05  lib: crc64: fix kernel-doc warning  (YueHaibing)

Fix a W=1 kernel build warning:

  lib/crc64.c:40: warning: bad line:  or the previous crc64 value if computing incrementally.

Link: https://lkml.kernel.org/r/20210601135851.15444-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Coly Li <colyli@suse.de>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

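The flagged line is a kernel-doc continuation; a sketch of the repaired comment (parameter names per crc64_be()'s signature; exact wording per the commit):

  /**
   * crc64_be - Calculate bitwise big-endian ECMA-182 CRC64
   * @crc: seed value for computation. 0 or (u64)~0 for a new CRC calculation,
   *       or the previous crc64 value if computing incrementally.
   * @p: pointer to buffer over which CRC64 is run
   * @len: length of buffer @p
   */
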
2021-06-03  Merge branch 'sched/urgent' into sched/core, to pick up fixes  (Ingo Molnar)

Signed-off-by: Ingo Molnar <mingo@kernel.org>

2021-06-03  iov_iter_advance(): use consistent semantics for move past the end  (Al Viro)

Asking to advance by more than we have left in the iov_iter should move to the very end; it should *not* leave a negative i->count and it should not spew into syslog, etc. - it's a legitimate operation.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-03  [xarray] iov_iter_fault_in_readable() should do nothing in xarray case  (Al Viro)

... and actually should just check it's given an iovec-backed iterator in the first place.

Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-03  copy_page_to_iter(): fix ITER_DISCARD case  (Al Viro)

We need to advance the iterator...

Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-03  teach copy_page_to_iter() to handle compound pages  (Al Viro)

In a situation when copy_page_to_iter() gets a compound page, the current code would only work on systems with no CONFIG_HIGHMEM. That *is* the majority of real-world setups, or we would've drowned in bug reports by now. Still, it needs fixing.

The current variant works for a solitary page; rename that to __copy_page_to_iter() and turn the handling of compound pages into a loop over subpages.

Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-06-03  iov_iter: Remove iov_iter_for_each_range()  (David Howells)

Remove iov_iter_for_each_range() as it's no longer used with the removal of lustre.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2021-05-31  locking/lockdep: Reduce LOCKDEP dependency list  (Randy Dunlap)

Some arches (um, sparc64, riscv, xtensa) cause a Kconfig warning for LOCKDEP. These arches select LOCKDEP_SUPPORT but they are not listed as one of the arches that LOCKDEP depends on.

Since (16) arches define the Kconfig symbol LOCKDEP_SUPPORT if they intend to have LOCKDEP support, replace the awkward list of arches that LOCKDEP depends on with the LOCKDEP_SUPPORT symbol.

But wait. LOCKDEP_SUPPORT is included in LOCK_DEBUGGING_SUPPORT, which is already a dependency here, so LOCKDEP_SUPPORT is redundant and not needed.

That leaves the FRAME_POINTER dependency, but it is part of an expression like this:

  depends on (A && B) && (FRAME_POINTER || B')

where B' is a dependency of B, so if B is true then B' is true and the value of FRAME_POINTER does not matter. Thus we can also delete the FRAME_POINTER dependency.

Fixes this kconfig warning (for um, sparc64, riscv, xtensa):

  WARNING: unmet direct dependencies detected for LOCKDEP
    Depends on [n]: DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y] && (FRAME_POINTER [=n] || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86)
    Selected by [y]:
    - PROVE_LOCKING [=y] && DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y]
    - LOCK_STAT [=y] && DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y]
    - DEBUG_LOCK_ALLOC [=y] && DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y]

Fixes: 7d37cb2c912d ("lib: fix kconfig dependency on ARCH_WANT_FRAME_POINTERS")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Link: https://lkml.kernel.org/r/20210524224150.8009-1-rdunlap@infradead.org

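The resulting dependency line, per the reasoning above (a sketch; the real LOCKDEP entry carries more than this):

  config LOCKDEP
          bool
          depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
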
2021-05-31  Merge 5.13-rc4 into driver-core-next  (Greg Kroah-Hartman)

We need the driver core fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-05-31  Merge 5.13-rc4 into char-misc-next  (Greg Kroah-Hartman)

We need the char/misc fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-05-27  Merge branch 'for-5.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu  (Linus Torvalds)

Pull percpu fixes from Dennis Zhou:
 "This contains a cleanup to lib/percpu-refcount.c and an update to the MAINTAINERS file to more formally take over support for lib/percpu*"

* 'for-5.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu:
  MAINTAINERS: Add lib/percpu* as part of percpu entry
  percpu_ref: Don't opencode percpu_ref_is_dying

2021-05-27  lib: test_scanf: Remove pointless use of type_min() with unsigned types  (Richard Fitzgerald)

sparse was producing warnings of the form:

  sparse: cast truncates bits from constant value (ffff0001 becomes 1)

There is no actual problem here. Using type_min() on an unsigned type results in an (expected) truncation. However, there is no need to test an unsigned value against type_min(). The minimum value of an unsigned is obviously 0, and any value cast to an unsigned type is >= 0, so for unsigneds only type_max() need be tested.

This patch also takes the opportunity to clean up the implementation of simple_numbers_loop() to use a common pattern for the positive and negative test.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/20210525122012.6336-2-rf@opensource.cirrus.com

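A small illustration of the reasoning (type_min()/type_max() as in include/linux/overflow.h; the variable and type choices here are ours):

  unsigned long long v = get_value();
  /* (u16)v >= type_min(u16) is vacuously true, since type_min of an
   * unsigned type is 0 - only the upper bound needs testing: */
  if (v <= type_max(u16))
          /* fits in u16 */;
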
2021-05-27  dyndbg: display KiB of data memory used.  (Jim Cromie)

If booted with verbose >= 1, dyndbg prints the memory usage, in bytes, of builtin modules' prdebugs. KiB reads better. No functional changes.

Signed-off-by: Jim Cromie <jim.cromie@gmail.com>
Link: https://lore.kernel.org/r/20210525033240.35260-1-jim.cromie@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-05-26  locking/atomic: atomic64: support ARCH_ATOMIC  (Mark Rutland)

We'd like all architectures to convert to ARCH_ATOMIC, as this will enable functionality, and once all architectures are converted it will be possible to make significant cleanups to the atomic headers.

A number of architectures use asm-generic/atomic64.h, and it's impractical to convert the header and all these architectures in one go. To make it possible to convert them one-by-one, let's make the asm-generic implementation function as either atomic64_*() or arch_atomic64_*() depending on whether ARCH_ATOMIC is selected. To do this, the generic implementations are prefixed as generic_atomic64_*(), and preprocessor definitions map atomic64_*()/arch_atomic64_*() onto these as appropriate.

Once all users are moved over to ARCH_ATOMIC the ifdeffery in the header can be simplified and/or removed entirely. For existing users (none of which select ARCH_ATOMIC), there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210525140232.53872-11-mark.rutland@arm.com

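A sketch of the mapping described above, shown for a single operation (illustrative; the real header covers the whole atomic64 API):

  s64 generic_atomic64_add_return(s64 a, atomic64_t *v);

  #ifdef CONFIG_ARCH_ATOMIC
  #define arch_atomic64_add_return  generic_atomic64_add_return
  #else
  #define atomic64_add_return       generic_atomic64_add_return
  #endif
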
2021-05-24  Makefile: extend 32B aligned debug option to 64B aligned  (Feng Tang)

Commit 09c60546f04f ("./Makefile: add debug option to enable function aligned on 32 bytes") was introduced to help debugging strange kernel performance changes caused by code alignment change.

Recently we found 2 similar cases [1][2] caused by code-alignment changes, which can only be identified by forcing 64-byte alignment for all functions. Originally, 32 bytes was used mainly to avoid wasting too much text space, but this option is only for debugging anyway, where text space is not a big concern. So extend the alignment to 64 bytes to cover more similar cases.

[1] https://lore.kernel.org/lkml/20210427090013.GG32408@xsang-OptiPlex-9020/
[2] https://lore.kernel.org/lkml/20210420030837.GB31773@xsang-OptiPlex-9020/

Signed-off-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>

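The mechanism in question, sketched (the config symbol name is an assumption based on the 32B option being renamed for 64 bytes; the exact name is per the commit):

  # debug-only: force every function to start on a 64-byte boundary
  ifdef CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B
  KBUILD_CFLAGS += -falign-functions=64
  endif
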
2021-05-22  lib: kunit: suppress a compilation warning of frame size  (Zhen Lei)

Building produces:

  lib/bitfield_kunit.c: In function 'test_bitfields_constants':
  lib/bitfield_kunit.c:93:1: warning: the frame size of 7456 bytes is larger than 2048 bytes [-Wframe-larger-than=]
   }
   ^

As the description of BITFIELD_KUNIT in lib/Kconfig.debug says, it is "Only useful for kernel devs running the KUnit test harness, and not intended for inclusion into a production build". Therefore, it is not worth restructuring test_bitfields_constants() to clear this warning. Just suppress it.

Link: https://lkml.kernel.org/r/20210518094533.7652-1-thunder.leizhen@huawei.com
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Cc: Shuah Khan <skhan@linuxfoundation.org>
Cc: Vitor Massaru Iha <vitor@massaru.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

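One way to suppress such a warning per object file (a hypothetical sketch of the mechanism; the commit's actual flags and threshold may differ):

  # lib/Makefile: raise the frame-size warning threshold for this test only
  CFLAGS_bitfield_kunit.o := $(call cc-option, -Wframe-larger-than=10240)
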