path: root/scripts/bpf_doc.py
author    Alexei Starovoitov <ast@kernel.org> 2022-09-02 14:10:49 -0700
committer Daniel Borkmann <daniel@iogearbox.net> 2022-09-05 15:33:06 +0200
commit    0fd7c5d43339b783ee3301a05f925d1e52ac87c9
tree      b19c7f9b83221183677889fd7644b11549453de3 /scripts/bpf_doc.py
parent    86fe28f7692d96d20232af0fc6d7632d5cc89a01
bpf: Optimize call_rcu in non-preallocated hash map.
Doing call_rcu() a million times a second becomes a bottleneck. Convert the non-preallocated hash map from call_rcu to SLAB_TYPESAFE_BY_RCU. The RCU critical section is no longer observed for each htab element, which makes the non-preallocated hash map behave just like the preallocated hash map. The map elements are released back to kernel memory after an RCU grace period is observed. This improves 'map_perf_test 4' performance from 100k events per second to 250k events per second.

bpf_mem_alloc + percpu_counter + typesafe_by_rcu provide a 10x performance boost to the non-preallocated hash map and bring it within a few % of the preallocated map while consuming a fraction of the memory.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-8-alexei.starovoitov@gmail.com
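For orientation, the sketch below contrasts the two freeing patterns the message refers to: per-element call_rcu() callbacks versus SLAB_TYPESAFE_BY_RCU reuse semantics. It is a minimal, illustrative sketch only: the struct layout and the helper names (elem_free_rcu, htab_elem_free_old, htab_elem_free_new, htab_cache_init) are hypothetical, and the actual conversion in this series goes through bpf_mem_alloc rather than a raw kmem_cache; the kmem_cache variant merely shows the generic type-safe-by-RCU pattern being adopted.

	/*
	 * Illustrative sketch, not the htab code from this commit.
	 */
	#include <linux/slab.h>
	#include <linux/rcupdate.h>
	#include <linux/types.h>
	#include <linux/errno.h>

	struct htab_elem {
		struct rcu_head rcu;	/* only needed for the call_rcu() variant */
		u32 key;
		u64 value;
	};

	/* Before: every deleted element schedules its own RCU callback. */
	static void elem_free_rcu(struct rcu_head *head)
	{
		struct htab_elem *e = container_of(head, struct htab_elem, rcu);

		kfree(e);
	}

	static void htab_elem_free_old(struct htab_elem *e)
	{
		/* Millions of deletions per second mean millions of callbacks. */
		call_rcu(&e->rcu, elem_free_rcu);
	}

	/* After: the cache itself guarantees type-safe reuse under RCU. */
	static struct kmem_cache *htab_cache;

	static int htab_cache_init(void)
	{
		htab_cache = kmem_cache_create("htab_elem_cache",
					       sizeof(struct htab_elem), 0,
					       SLAB_TYPESAFE_BY_RCU, NULL);
		return htab_cache ? 0 : -ENOMEM;
	}

	static void htab_elem_free_new(struct htab_elem *e)
	{
		/*
		 * Free immediately: the slab page is only returned to the page
		 * allocator after an RCU grace period, so concurrent readers
		 * under rcu_read_lock() never touch unmapped memory. The
		 * element may be reused for another entry right away, so
		 * lookups must re-validate the key after dereferencing.
		 */
		kmem_cache_free(htab_cache, e);
	}

The design trade-off is the one the message states: the deleting CPU no longer waits for (or queues work behind) a grace period per element, at the cost that readers can observe an element being recycled and must tolerate that, which is what makes the non-preallocated map behave like the preallocated one.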
Diffstat (limited to 'scripts/bpf_doc.py')
0 files changed, 0 insertions, 0 deletions