path: root/include/asm-generic/bitops/instrumented-non-atomic.h
Age  Commit message  Author
2022-08-26  wait_on_bit: add an acquire memory barrier  (Mikulas Patocka)
There are several places in the kernel where wait_on_bit is not followed by a
memory barrier (for example, in drivers/md/dm-bufio.c:new_read).

On architectures with weak memory ordering, it may happen that memory accesses
that follow wait_on_bit are reordered before wait_on_bit and they may return
invalid data.

Fix this class of bugs by introducing a new function "test_bit_acquire" that
works like test_bit, but has acquire memory ordering semantics.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Acked-by: Will Deacon <will@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
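A minimal sketch of the acquire-ordered test, built on the existing generic
helpers (illustrative of the idea rather than the exact upstream body):

    static __always_inline bool
    generic_test_bit_acquire(unsigned long nr, const volatile unsigned long *addr)
    {
            unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);

            /*
             * smp_load_acquire() orders this load before every memory access
             * that follows it, so data read after the bit is observed set
             * cannot be speculated from before the flag check.
             */
            return 1UL & (smp_load_acquire(p) >> (nr & (BITS_PER_LONG - 1)));
    }

wait_on_bit() and similar waiters can then test the flag with acquire
semantics instead of a plain test_bit(), closing the reordering window.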
2022-06-30  bitops: wrap non-atomic bitops with a transparent macro  (Alexander Lobakin)
In preparation for altering the non-atomic bitops with a macro, wrap them in a
transparent definition. This requires prepending one more '_' to their names
in order to be able to do that seamlessly. It is a simple change, given that
all the non-prefixed definitions are now in asm-generic.

sparc32 already has several triple-underscored functions, so I had to rename
them ('___' -> 'sp32_').

Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
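Schematically (a sketch of the naming scheme, not the literal diff), the extra
underscore frees the public name to become a pass-through definition:

    /* The '___'-prefixed helper carries the actual operation. */
    static __always_inline void
    ___set_bit(unsigned long nr, volatile unsigned long *addr)
    {
            unsigned long mask = BIT_MASK(nr);
            unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);

            *p |= mask;
    }

    /*
     * The non-prefixed name is now only a transparent wrapper, so a later
     * patch can redirect it (e.g. through compile-time evaluation helpers)
     * without touching any caller.
     */
    #define __set_bit(nr, addr)        ___set_bit(nr, addr)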
2022-06-30  bitops: unify non-atomic bitops prototypes across architectures  (Alexander Lobakin)
Currently, there is a mess with the prototypes of the non-atomic bitops across
the different architectures:

    ret     bool, int, unsigned long
    nr      int, long, unsigned int, unsigned long
    addr    volatile unsigned long *, volatile void *

Thankfully, it doesn't provoke any bugs, but can sometimes make the compiler
angry when it's not handy at all.

Adjust all the prototypes to the following standard:

    ret     bool                            retval can be only 0 or 1
    nr      unsigned long                   native; signed makes no sense
    addr    volatile unsigned long *        bitmaps are arrays of ulongs

Next, some architectures don't define 'arch_' versions as they don't support
instrumentation, others do. To make sure there is always the same set of
callables present and to ease any potential future changes, make them all
follow the rule:

    * architecture-specific files define only 'arch_' versions;
    * non-prefixed versions can be defined only in asm-generic files;

and place the non-prefixed definitions into a new file in asm-generic to be
included by non-instrumented architectures.

Finally, add some static assertions in order to prevent people from making a
mess in this room again. I also used the %__always_inline attribute
consistently, so that they always get resolved to the actual operations.

Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
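A sketch of the resulting uniform shape, plus the kind of build-time check the
message refers to (the assertion macro below is illustrative, not the exact
upstream one):

    /* Every non-atomic bitop now takes this shape: */
    static __always_inline bool
    arch_test_bit(unsigned long nr, const volatile unsigned long *addr)
    {
            return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG - 1)));
    }

    /*
     * Prototype drift can be caught at compile time by comparing the arch_
     * flavour against the generic one (illustrative):
     */
    #define check_bitop_prototype(op) \
            static_assert(__same_type(arch_##op, generic_##op))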
2022-01-25  asm-generic/bitops: Always inline all bit manipulation helpers  (Borislav Petkov)
Make it consistent with the atomic/atomic-instrumented.h helpers.

And defconfig size is actually going down!

        text     data      bss       dec      hex  filename
    22352096  8213152  1917164  32482412  1efa46c  vmlinux.x86-64.defconfig.before
    22350551  8213184  1917164  32480899  1ef9e83  vmlinux.x86-64.defconfig.after

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20220113155357.4706-2-bp@alien8.de
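The shape of the change for one helper, as an illustrative hunk rather than
the literal diff:

    -static inline void __set_bit(long nr, volatile unsigned long *addr)
    +static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
     {
             instrument_write(addr + BIT_WORD(nr), sizeof(long));
             arch___set_bit(nr, addr);
     }

With __always_inline, the wrapper is guaranteed to collapse into the caller
instead of occasionally being emitted out of line.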
2020-08-24  bitops, kcsan: Partially revert instrumentation for non-atomic bitops  (Marco Elver)
Prior to the change to distinguish read-write accesses, when
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y is set, KCSAN would consider the
non-atomic bitops as atomic. We want to partially revert to this behaviour,
but with one important distinction: report racing modifications, since lost
bits due to non-atomicity are certainly possible.

Given the operations here only modify a single bit, assuming non-atomicity of
the writer is sufficient may be reasonable for certain usage (and follows the
permissible nature of the "assume plain writes atomic" rule). In other words:

1. We want non-atomic read-modify-write races to be reported; this is
   accomplished by kcsan_check_read(), where any concurrent write (atomic or
   not) will generate a report.

2. We do not want to report races with marked readers, but -do- want to
   report races with unmarked readers; this is accomplished by
   instrument_write() ("assume atomic write" with the Kconfig option set).

With the above rules, when KCSAN_ASSUME_PLAIN_WRITES_ATOMIC is selected, it is
hoped that KCSAN's reporting behaviour is better aligned with current expected
permissible usage for non-atomic bitops.

Note that a side-effect of not telling KCSAN that the accesses are
read-writes is that this information is not displayed in the access summary
in the report. It is, however, visible in inline-expanded stack traces. For
now, it does not make sense to introduce yet another special case to KCSAN's
runtime, only to cater to the case here.

Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
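In code, the two rules above boil down to a small dispatch helper; a sketch
close to what the header does (the helper name may differ from upstream):

    static __always_inline void
    __instrument_read_write_bitop(long nr, volatile unsigned long *addr)
    {
            if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC)) {
                    /*
                     * Rule 1: report races against any concurrent writer
                     * (atomic or not) via an explicit read check.
                     */
                    kcsan_check_read(addr + BIT_WORD(nr), sizeof(long));
                    /*
                     * Rule 2: treat our own write as atomic under the
                     * "assume plain writes atomic" option, so marked readers
                     * stay silent while unmarked readers are still reported.
                     */
                    instrument_write(addr + BIT_WORD(nr), sizeof(long));
            } else {
                    /* Otherwise, instrument as a plain read-write access. */
                    instrument_read_write(addr + BIT_WORD(nr), sizeof(long));
            }
    }

Each non-atomic test_and_*() wrapper then calls this helper before invoking
its arch_ implementation.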
2020-08-24  asm-generic/bitops: Use instrument_read_write() where appropriate  (Marco Elver)
Use the new instrument_read_write() where appropriate.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
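For the read-modify-write helpers this amounts to swapping the plain write
instrumentation for the new read-write variant; an illustrative hunk, not the
literal diff:

     static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
     {
    -        instrument_write(addr + BIT_WORD(nr), sizeof(long));
    +        instrument_read_write(addr + BIT_WORD(nr), sizeof(long));
             return arch___test_and_set_bit(nr, addr);
     }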
2020-03-21  asm-generic, kcsan: Add KCSAN instrumentation for bitops  (Marco Elver)
Add explicit KCSAN checks for bitops.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
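The result is that each non-atomic wrapper reports the access through the
instrument_*() helpers (which feed both KASAN and KCSAN) before calling the
arch implementation; a sketch of one such wrapper as it looked at the time:

    #include <linux/instrumented.h>

    static inline void __clear_bit(long nr, volatile unsigned long *addr)
    {
            /* Tell the instrumentation about the plain write, then do it. */
            instrument_write(addr + BIT_WORD(nr), sizeof(long));
            arch___clear_bit(nr, addr);
    }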
2019-11-07  kasan: support instrumented bitops combined with generic bitops  (Daniel Axtens)
Currently bitops-instrumented.h assumes that the architecture provides atomic,
non-atomic and locking bitops (e.g. both set_bit and __set_bit). This is true
on x86 and s390, but is not always true: there is a generic
bitops/non-atomic.h header that provides generic non-atomic operations, and
also a generic bitops/lock.h for locking operations.

powerpc uses the generic non-atomic version, so it does not have its own e.g.
__set_bit that could be renamed arch___set_bit.

Split up bitops-instrumented.h to mirror the atomic/non-atomic/lock split.
This allows arches to only include the headers where they have arch-specific
versions to rename. Update x86 and s390.

(The generic operations are automatically instrumented because they're
written in C, not asm.)

Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190820024941.12640-1-dja@axtens.net
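After the split, an architecture pulls in only the instrumented headers for
the groups it implements itself; schematically (the arch_ body below is a
placeholder, not real arch code):

    /* arch/<arch>/include/asm/bitops.h (schematic) */

    static __always_inline void arch___set_bit(long nr, volatile unsigned long *addr)
    {
            addr[BIT_WORD(nr)] |= BIT_MASK(nr);     /* placeholder body */
    }
    /* ...remaining arch_ atomic/non-atomic/lock ops... */

    #include <asm-generic/bitops/instrumented-atomic.h>
    #include <asm-generic/bitops/instrumented-non-atomic.h>
    #include <asm-generic/bitops/instrumented-lock.h>

An architecture like powerpc, which takes the generic non-atomic operations,
would include only the atomic and lock variants and skip
instrumented-non-atomic.h entirely.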