path: root/sysdeps/nptl/lowlevellock.h
Age         Commit message                                              Author
2018-05-04  Fix blocking pthread_join. [BZ #23137]  (Stefan Liebler)
On s390 (31bit), if glibc is built with -Os, pthread_join sometimes blocks indefinitely. This is observable, for example, with the testcase intl/tst-gettext6.

pthread_join calls lll_wait_tid(tid), which performs the futex-wait syscall in a loop as long as tid != 0 (i.e. the thread is still alive). On s390, when built with -Os, tid is loaded from memory before the comparison against zero, and then loaded a second time in order to pass it to the futex-wait syscall. If the thread exits in between, the futex-wait syscall is called with the value zero and waits until a futex-wake occurs. As the thread has already exited, no futex-wake will ever arrive.

In lll_wait_tid, the tid is stored to the local variable __tid, which is then used as the argument for the futex-wait syscall, but the compiler is allowed to reload the value from memory. With this patch, the tid is loaded with atomic_load_acquire, so the compiler is no longer allowed to reload the value for __tid from memory.

ChangeLog:

[BZ #23137]
* sysdeps/nptl/lowlevellock.h (lll_wait_tid): Use atomic_load_acquire to load __tid.
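For illustration, a minimal C sketch of the pattern the commit describes (an approximation, not the exact glibc source; lll_futex_wait and LLL_SHARED are glibc-internal names used here only to mirror the text above):

    /* Problematic shape: the plain load of TID may be performed twice by
       the compiler, once for the comparison against zero and once for the
       futex-wait argument.  If the thread exits between the two loads, the
       expected value passed to futex-wait is 0, which matches the futex
       word, so the wait blocks forever.  */
    #define lll_wait_tid_buggy(tid)                      \
      do {                                               \
        __typeof (tid) __tid;                            \
        while ((__tid = (tid)) != 0)                     \
          lll_futex_wait (&(tid), __tid, LLL_SHARED);    \
      } while (0)

    /* Fixed shape: atomic_load_acquire forces a single load per iteration,
       so the value compared against zero is exactly the value passed to
       the futex-wait syscall.  */
    #define lll_wait_tid_fixed(tid)                              \
      do {                                                       \
        __typeof (tid) __tid;                                    \
        while ((__tid = atomic_load_acquire (&(tid))) != 0)      \
          lll_futex_wait (&(tid), __tid, LLL_SHARED);            \
      } while (0)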
2018-01-01  Update copyright dates with scripts/update-copyrights.  (Joseph Myers)
* All files with FSF copyright notices: Update copyright dates using scripts/update-copyrights.
* locale/programs/charmap-kw.h: Regenerated.
* locale/programs/locfile-kw.h: Likewise.
2017-02-06  Add __glibc_unlikely hint in lll_trylock, lll_cond_trylock.  (Stefan Liebler)
The macros lll_trylock and lll_cond_trylock are extended with a __glibc_unlikely hint, so the trylock macros are now based on the same assumption about a free/busy lock as lll_lock. With the hint, gcc emits code in e.g. pthread_mutex_trylock that does not need a jump when the lock is free; without the hint it had to jump away if the lock is free.

Tested on s390x, ppc.

ChangeLog:

* sysdeps/nptl/lowlevellock.h (lll_trylock, lll_cond_trylock): Add __glibc_unlikely hint.
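Roughly, the hinted generic macros have the following shape (a sketch reconstructed from the commit text; the exact definitions in lowlevellock.h may differ):

    /* 0 means free, 1 locked without waiters, 2 locked possibly with
       waiters.  The CAS is expected to succeed (the lock is assumed to be
       free), so the "already locked" result is marked unlikely.  */
    #define lll_trylock(lock) \
      __glibc_unlikely (atomic_compare_and_exchange_bool_acq (&(lock), 1, 0))

    #define lll_cond_trylock(lock) \
      __glibc_unlikely (atomic_compare_and_exchange_bool_acq (&(lock), 2, 0))

atomic_compare_and_exchange_bool_acq returns true when the CAS fails, i.e. when the lock was busy, which is exactly the path marked unlikely.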
2017-01-13  robust mutexes: Fix broken x86 assembly by removing it  (Torvald Riegel)
lll_robust_unlock on i386 and x86_64 first sets the futex word to FUTEX_WAITERS|0 before calling __lll_unlock_wake, which will set the futex word to 0. If the thread is killed between these steps, the futex word will be FUTEX_WAITERS|0, and the kernel (at least current upstream) will not set it to FUTEX_OWNER_DIED|FUTEX_WAITERS because 0 is not equal to the TID of the crashed thread. The lll_robust_lock assembly code on i386 and x86_64 is not prepared to deal with this case because its fastpath only tries to CAS 0 to TID, not FUTEX_WAITERS|0 to TID; the slowpath simply waits until it can CAS 0 to TID or the futex word has the FUTEX_OWNER_DIED bit set.

This issue is fixed by removing the custom x86 assembly code and using the generic C code instead. However, instead of adding more duplicate code to the custom x86 lowlevellock.h, the code of the lll_robust* functions is inlined into the single call sites that exist for each of these functions in the pthread_mutex_* functions. The robust mutex paths in the latter have been slightly reorganized to make them simpler. This patch is meant to be easy to backport, so C11-style atomics are not used.

[BZ #20985]
* nptl/Makefile: Adapt.
* nptl/pthread_mutex_cond_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
  (LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
* nptl/pthread_mutex_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
  (LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
  (__pthread_mutex_lock_full): Inline lll_robust* functions and adapt.
* nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock): Inline lll_robust* functions and adapt.
* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
* sysdeps/nptl/lowlevellock.h (__lll_robust_lock_wait, __lll_robust_lock, lll_robust_cond_lock, __lll_robust_timedlock_wait, __lll_robust_timedlock, __lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/i386/lowlevellock.h (lll_robust_lock, lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.h (lll_robust_lock, lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h (__lll_robust_lock_wait, __lll_robust_lock, lll_robust_cond_lock, __lll_robust_timedlock_wait, __lll_robust_timedlock, __lll_robust_unlock): Remove.
* nptl/lowlevelrobustlock.c: Remove file.
* nptl/lowlevelrobustlock.sym: Likewise.
* sysdeps/unix/sysv/linux/i386/lowlevelrobustlock.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/lowlevelrobustlock.S: Likewise.
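As a self-contained illustration of the state described above (a hedged sketch, not glibc code; robust_lock_fastpath_sketch is a hypothetical helper name):

    #include <linux/futex.h>   /* FUTEX_WAITERS, FUTEX_OWNER_DIED */
    #include <stdatomic.h>

    /* If the owner dies after setting the word to FUTEX_WAITERS|0 but
       before clearing it, the kernel does not set FUTEX_OWNER_DIED because
       the word does not contain the dead owner's TID.  A fastpath that only
       attempts the 0 -> TID transition, like the removed x86 assembly, can
       then never acquire the lock; the generic C code also copes with the
       FUTEX_WAITERS|0 state and with the FUTEX_OWNER_DIED bit.  */
    static int
    robust_lock_fastpath_sketch (atomic_uint *futex_word, unsigned int tid)
    {
      unsigned int expected = 0;
      if (atomic_compare_exchange_strong (futex_word, &expected, tid))
        return 0;              /* acquired: 0 -> TID */
      expected = FUTEX_WAITERS;
      if (atomic_compare_exchange_strong (futex_word, &expected,
                                          FUTEX_WAITERS | tid))
        return 0;              /* acquired: FUTEX_WAITERS|0 -> FUTEX_WAITERS|TID */
      return 1;                /* still owned or FUTEX_OWNER_DIED set: take the slow path */
    }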
2017-01-01  Update copyright dates with scripts/update-copyrights.  (Joseph Myers)
2016-01-04  Update copyright dates with scripts/update-copyrights.  (Joseph Myers)
2015-12-23  Do not violate mutex destruction requirements.  (Torvald Riegel)
POSIX and C++11 require that a thread can destroy a mutex if no other thread owns the mutex, is blocked on the mutex, or will try to acquire it in the future. After destroying the mutex, it can reuse or unmap the underlying memory. Thus, we must not access a mutex's memory after releasing it. Currently, we can load the private flag after releasing the mutex; this is fixed by this patch. See https://sourceware.org/bugzilla/show_bug.cgi?id=13690 for more background.

We do need to call futex_wake on the lock after releasing it, however. This is by design, and can lead to spurious wake-ups on unrelated futex words (e.g., when the mutex memory is reused for another mutex). This behavior is documented in the glibc-internal futex API and in recent drafts of the Linux kernel's futex documentation (see the draft_futex branch of git://git.kernel.org/pub/scm/docs/man-pages/man-pages.git).
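A self-contained sketch of the ordering this commit establishes (illustrative only; the glibc implementation uses its internal atomic and futex wrappers rather than a raw syscall):

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Lock word: 0 = free, 1 = locked, 2 = locked with waiters.  Anything
       that reads the mutex object, including its private flag, must happen
       before the releasing store; only the futex-wake on the (possibly
       already reused) word may come after it.  */
    static void
    unlock_sketch (atomic_int *futex_word, int is_private)
    {
      /* Read from the mutex *before* releasing it.  */
      int op = is_private ? FUTEX_WAKE_PRIVATE : FUTEX_WAKE;

      /* After this store, another thread may destroy, unmap or reuse the
         mutex memory at any time.  */
      int old = atomic_exchange_explicit (futex_word, 0, memory_order_release);

      /* Waking after the release is allowed by design; at worst it causes a
         spurious wake-up on a futex word that now belongs to something else.  */
      if (old > 1)
        syscall (SYS_futex, futex_word, op, 1, NULL, NULL, 0);
    }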
2015-01-02  Update copyright dates with scripts/update-copyrights.  (Joseph Myers)
2014-12-15  Add comments for the generic lowlevellock implementation.  (Torvald Riegel)
Patch by Bernard Ogden <bernie.ogden@linaro.org>.
2014-08-12  Check value of futex before updating in __lll_timedlock  (Bernard Ogden)
2014-08-12  Bernard Ogden  <bernie.ogden@linaro.org>

[BZ #16892]
* sysdeps/nptl/lowlevellock.h (__lll_timedlock): Use atomic_compare_and_exchange_bool_acq rather than atomic_exchange_acq.
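The distinction matters because the futex word encodes state (0 free, 1 locked, 2 locked with waiters): an unconditional atomic_exchange_acq can overwrite 2 with 1 and silently drop the record of waiters. A sketch of the fixed macro shape (approximate, not necessarily the exact glibc definition):

    /* Only attempt the 0 -> 1 transition; if the word is already 1 or 2,
       leave it untouched and fall through to the timed wait, which
       maintains the "locked with waiters" state itself.  */
    #define __lll_timedlock(futex, abstime, private)                  \
      ({                                                              \
        int *__futex = (futex);                                       \
        int __val = 0;                                                \
        if (__glibc_unlikely                                          \
            (atomic_compare_and_exchange_bool_acq (__futex, 1, 0)))   \
          __val = __lll_timedlock_wait (__futex, abstime, private);   \
        __val;                                                        \
      })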
2014-07-15  Separate Linuxisms from lowlevellock.h, make a generic one  (Roland McGrath)
2014-07-07  Remove old stub lowlevellock.h file. It is not even useful as documentation.  (Roland McGrath)
2014-07-07  Get rid of nptl/sysdeps/ entirely!  (Roland McGrath)