author     Szabolcs Nagy <szabolcs.nagy@arm.com>   2017-09-06 17:42:00 +0100
committer  Szabolcs Nagy <szabolcs.nagy@arm.com>   2017-09-25 10:44:39 +0100
commit     72aa623345ada1276ed89dbc00fdff9639cb8eaf (patch)
tree       7851f67b911a84aa46960a6f8aff027594cee572 /sysdeps/aarch64
parent     fcafcd162c843364dc2bb8d57bd239c41cfd122c (diff)
Optimized generic expf and exp2f with wrappers
Based on new expf and exp2f code from
https://github.com/ARM-software/optimized-routines/

with wrapper on aarch64:
expf reciprocal-throughput: 2.3x faster
expf latency: 1.7x faster

without wrapper on aarch64:
expf reciprocal-throughput: 3.3x faster
expf latency: 1.7x faster

without wrapper on aarch64:
exp2f reciprocal-throughput: 2.8x faster
exp2f latency: 1.3x faster

libm.so size on aarch64:
.text size: -152 bytes
.rodata size: -1740 bytes

expf/exp2f worst case nearest rounding error: 0.502 ulp
worst case non-nearest rounding error: 1 ulp

Error checks are inline and errno setting is in separate tail-called
functions, but the wrappers are kept in this patch to handle the
_LIB_VERSION==_SVID_ case.  (So e.g. errno is set twice for expf calls
and once for __expf_finite calls on targets where the new code is used.)

Double precision arithmetic is used, which is expected to be faster on
most targets (including soft-float) than single precision, and it makes
it easier to get good precision results.

Const data is kept in a separate translation unit, which complicates
maintenance a bit, but is expected to give good code for literal loads
on most targets and allows sharing data across expf, exp2f and powf.
(This data is disabled on i386, m68k and ia64, which have their own
expf, exp2f and powf code.)

Some details may need target-specific tweaks:
- The best convert-and-round-to-int operation in the argument reduction
  may be different across targets.
- The code was optimized on an fma target; the optimal polynomial
  evaluation may be different without fma.
- GCC does not always generate good code for fp bit representation
  access via unions, or it may be inherently slow on some targets.

The libm-test-ulps will need adjustment because:
- The argument reduction ideally uses rint with nearest rounding, but
  that is not efficient on most targets, so the polynomial can get
  evaluated on a wider interval in non-nearest rounding modes, making
  1 ulp errors common in that case.
- The polynomial is evaluated such that it may have 1 ulp error on
  negative tiny inputs with upward rounding.

	* math/Makefile (type-float-routines): Add math_errf and
	e_exp2f_data.
	* sysdeps/aarch64/fpu/math_private.h (TOINT_INTRINSICS): Define.
	(roundtoint, converttoint): Likewise.
	* sysdeps/ieee754/flt-32/e_expf.c: New implementation.
	* sysdeps/ieee754/flt-32/e_exp2f.c: New implementation.
	* sysdeps/ieee754/flt-32/e_exp2f_data.c: New file.
	* sysdeps/ieee754/flt-32/math_config.h: New file.
	* sysdeps/ieee754/flt-32/math_errf.c: New file.
	* sysdeps/ieee754/flt-32/t_exp2f.h: Remove.
	* sysdeps/i386/fpu/e_exp2f_data.c: New file.
	* sysdeps/i386/fpu/math_errf.c: New file.
	* sysdeps/ia64/fpu/e_exp2f_data.c: New file.
	* sysdeps/ia64/fpu/math_errf.c: New file.
	* sysdeps/m68k/m680x0/fpu/e_exp2f_data.c: New file.
	* sysdeps/m68k/m680x0/fpu/math_errf.c: New file.
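As an illustration of the error-handling split described in the commit
message (inline range checks on the fast path, errno setting in a
separate tail-called function), here is a minimal sketch in C.  The
names sketch_expf and sketch_oflowf, the threshold constant and the
placeholder fast path are illustrative assumptions, not the actual
glibc identifiers or code.

/* Minimal sketch: the common path stays small and branch-predictable,
   while the rare overflow path tail-calls an out-of-line helper that
   sets errno.  All names below are hypothetical.  */
#include <errno.h>
#include <math.h>

/* Out-of-line slow path: produce a float overflow and set errno.
   (A real implementation would also guard against constant folding.)  */
static float
sketch_oflowf (void)
{
  errno = ERANGE;
  return 0x1p97f * 0x1p97f;   /* evaluates to +inf and raises FE_OVERFLOW */
}

float
sketch_expf (float x)
{
  /* Inline check: expf overflows for x above roughly 88.72.  */
  if (__builtin_expect (x > 0x1.62e42ep6f, 0))
    return sketch_oflowf ();   /* tail call on the rare path */
  /* Fast path placeholder: the real kernel does the double precision
     polynomial evaluation described above.  */
  return (float) exp ((double) x);
}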
Diffstat (limited to 'sysdeps/aarch64')
-rw-r--r--  sysdeps/aarch64/fpu/math_private.h | 20
1 file changed, 20 insertions, 0 deletions
diff --git a/sysdeps/aarch64/fpu/math_private.h b/sysdeps/aarch64/fpu/math_private.h
index 807111ea5a..be10488f5d 100644
--- a/sysdeps/aarch64/fpu/math_private.h
+++ b/sysdeps/aarch64/fpu/math_private.h
@@ -319,6 +319,26 @@ libc_feresetround_noex_aarch64_ctx (struct rm_ctx *ctx)
#define libc_feresetround_noexf_ctx libc_feresetround_noex_aarch64_ctx
#define libc_feresetround_noexl_ctx libc_feresetround_noex_aarch64_ctx
+/* Hack: only include the large arm_neon.h when needed. */
+#ifdef _MATH_CONFIG_H
+# include <arm_neon.h>
+
+/* ACLE intrinsics for frintn and fcvtns instructions. */
+# define TOINT_INTRINSICS 1
+
+static inline double_t
+roundtoint (double_t x)
+{
+ return vget_lane_f64 (vrndn_f64 (vld1_f64 (&x)), 0);
+}
+
+static inline uint64_t
+converttoint (double_t x)
+{
+ return vcvtnd_s64_f64 (x);
+}
+#endif
+
#include_next <math_private.h>
#endif
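For context, here is a sketch of how generic argument-reduction code
might consume roundtoint and converttoint once _MATH_CONFIG_H has been
defined and this math_private.h included.  The fallback branch (using
rint and llrint) and the name reduce_sketch are assumptions for
illustration, not what the generic flt-32 code actually does.

/* Sketch: given z (e.g. x * N/ln2), compute r = z - round(z) and the
   integer k = round(z) used as a table index.  On aarch64 the helpers
   above map to single frintn/fcvtns instructions; the #else branch is
   a portable stand-in, not the actual generic fallback.  */
#include <math.h>
#include <stdint.h>

static inline double
reduce_sketch (double z, int64_t *k)
{
#if TOINT_INTRINSICS
  double kd = roundtoint (z);    /* frintn: round to nearest, ties to even */
  *k = converttoint (z);         /* fcvtns: convert with the same rounding */
#else
  double kd = rint (z);          /* assumed portable fallback */
  *k = (int64_t) llrint (z);
#endif
  return z - kd;                 /* remainder fed to the polynomial */
}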