path: root/tools
Age  Commit message  Author
2020-08-24  libbpf: Avoid false uninitialized variable warning in bpf_core_apply_relo  (Andrii Nakryiko)
Some versions of GCC report uninitialized targ_spec usage. GCC is wrong, but let's avoid unnecessary warnings. Fixes: ddc7c3042614 ("libbpf: implement BPF CO-RE offset relocation algorithm") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200821225556.2178419-1-andriin@fb.com
2020-08-24  tcp: bpf: Optionally store mac header in TCP_SAVE_SYN  (Martin KaFai Lau)
This patch is adapted from Eric's patch in an earlier discussion [1]. TCP_SAVE_SYN currently stores only the network header and the tcp header. This patch allows it to optionally also store the mac header if the setsockopt's optval is 2. It requires one more bit for the "save_syn" bit field in tcp_sock, which is achieved by moving the syn_smc bit next to is_mptcp. syn_smc is currently used with the TCP experimental option. Since syn_smc is only used when CONFIG_SMC is enabled, this patch also guards it with "IS_ENABLED(CONFIG_SMC)", as is_mptcp does with "IS_ENABLED(CONFIG_MPTCP)". The mac_hdrlen is also stored in "struct saved_syn" so the bpf prog can quickly compute the offset if it chooses to start reading from the network header or the tcp header. [1]: https://lore.kernel.org/netdev/CANn89iLJNWh6bkH7DNhy_kmcAexuUCccqERqe7z2QsvPhGrYPQ@mail.gmail.com/ Suggested-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/bpf/20200820190123.2886935-1-kafai@fb.com
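For reference, a minimal userspace sketch of opting in to the new behaviour (listen_fd is assumed to be an already-created TCP socket; optval 1 keeps the pre-existing network+TCP-header-only behaviour):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>

    #ifndef TCP_SAVE_SYN
    #define TCP_SAVE_SYN 27                 /* from linux/tcp.h */
    #endif

    static int save_syn_with_mac(int listen_fd)
    {
            int val = 2;    /* 2 = also keep the mac header; 1 = net + tcp headers only */

            if (setsockopt(listen_fd, IPPROTO_TCP, TCP_SAVE_SYN,
                           &val, sizeof(val)) < 0) {
                    perror("setsockopt(TCP_SAVE_SYN)");
                    return -1;
            }
            return 0;
    }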
2020-08-24  bpf: selftests: Tcp header options  (Martin KaFai Lau)
This patch adds tests for the new bpf tcp header option feature.

test_tcp_hdr_options.c:
- Tests header option writing and parsing across the 3WHS: regular connection establishment, fastopen, and syncookie.
- In syncookie, the passive side's bpf prog asks the active side to resend its bpf header option by setting a RESEND bit in the outgoing SYNACK. handle_active_estab() and write_nodata_opt() have the details.
- handle_passive_estab() has comments on fastopen.
- It also tests header writing and parsing in the FIN packet.
- Most of the tests write an experimental option 254 with magic 0xeB9F.
- no_exprm_estab() also tests writing a regular TCP option without any magic.

test_misc_tcp_options.c:
- A one-directional test: the active side writes an option and the passive side parses it. The focus is on exercising the new helpers and API.
- Tests the new helpers bpf_load_hdr_opt() and bpf_store_hdr_opt().
- Tests bpf_getsockopt(TCP_BPF_SYN).
- Negative tests for the above helpers.
- Tests sock_ops->skb_data.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200820190117.2886749-1-kafai@fb.com
2020-08-24  bpf: selftests: Add fastopen_connect to network_helpers  (Martin KaFai Lau)
This patch adds a fastopen_connect() helper which will be used in a later test. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820190111.2886196-1-kafai@fb.com
2020-08-24  bpf: tcp: Allow bpf prog to write and parse TCP header option  (Martin KaFai Lau)
[ Note: The TCP changes here are mainly to plug the bpf pieces into the bpf_skops_*() functions introduced in the earlier patches. ]

The earlier BPF-TCP-CC effort allows the TCP Congestion Control algorithm to be written in BPF. It opens up opportunities for a faster turnaround time in testing/releasing new congestion control ideas to a production environment. The same flexibility can be extended to writing TCP header options. It is not uncommon for people to want to test a new TCP header option to improve TCP performance. Another use case is data-centers, which have a more controlled environment and more flexibility in putting header options in for internal-only use. For example, we want to test the idea of putting a maximum-delay-ACK TCP header option, which is similar to a draft RFC proposal [1].

This patch introduces the necessary BPF API and uses it in the TCP stack to allow a BPF_PROG_TYPE_SOCK_OPS program to parse and write TCP header options. It currently supports most TCP packets except RST.

Supported TCP header options:
─────────────────────────────
This patch allows the bpf-prog to write any option kind. Different bpf-progs can write their own options by calling the new helper bpf_store_hdr_opt(). The helper ensures there is no duplicated option in the header. Allowing the bpf-prog to write any option kind gives it a lot of flexibility: different bpf-progs can write their own option kinds, and a bpf-prog could also support a recently standardized option on an older kernel.

Sockops callback flags:
───────────────────────
The bpf program will only be called to parse/write a tcp header option if the following newly added callback flags are enabled in tp->bpf_sock_ops_cb_flags: BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG, BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG, BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG.

A few words on the PARSE CB flags. When the above PARSE CB flags are turned on, the bpf-prog will be called on packets received at a sk that has at least reached the ESTABLISHED state. Parsing of the SYN-SYNACK-ACK is discussed in the "3 Way HandShake" section.

The default is off for all of the above new CB flags, i.e. the bpf prog will not be called to parse or write bpf header options. There are detailed comments on these new cb flags in the UAPI bpf.h.

sock_ops->skb_data and bpf_load_hdr_opt()
─────────────────────────────────────────
sock_ops->skb_data and sock_ops->skb_data_end cover the whole TCP header and its options. They are read only. The new bpf_load_hdr_opt() helps to read a particular option "kind" from the skb_data. Please refer to the comment in the UAPI bpf.h; it details what skb_data contains under the different sock_ops->op values.

3 Way HandShake
───────────────
The bpf-prog can learn whether it is sending a SYN or a SYNACK by reading sock_ops->skb_tcp_flags.

* Passive side

When writing the SYNACK (i.e. sock_ops->op == BPF_SOCK_OPS_WRITE_HDR_OPT_CB), the received SYN skb is available to the bpf prog. The bpf prog can use the SYN skb (which may carry the header option sent from the remote bpf prog) to decide what bpf header option should be written to the outgoing SYNACK skb. The SYN packet can be obtained by getsockopt(TCP_BPF_SYN*); more on this later. The bpf prog can also learn whether it is in syncookie mode by checking sock_ops->args[0] == BPF_WRITE_HDR_TCP_SYNACK_COOKIE.

The bpf prog can store the received SYN pkt by using the existing bpf_setsockopt(TCP_SAVE_SYN). The example in a later patch does this.

[ Note that the fullsock here is a listen sk. bpf_sk_storage is not very useful here since the listen sk will be shared by many concurrent connection requests. Extending bpf_sk_storage support to request_sock would add weight to the minisock and is not necessarily better than storing the whole ~100 byte SYN pkt. ]

When the connection is established, the bpf prog will be called in the existing PASSIVE_ESTABLISHED_CB callback. At that time, the bpf prog can get the header option from the saved syn and then apply the needed operation to the newly established socket. As an example, a later patch will use the max delay ack specified in the SYN header to set the RTO of the newly established connection.

The received ACK (that concludes the 3WHS) will also be available to the bpf prog during PASSIVE_ESTABLISHED_CB through sock_ops->skb_data. It can be useful in the syncookie scenario; more on this later.

There is an existing getsockopt "TCP_SAVED_SYN" that returns the whole saved syn pkt, which includes the IP[46] header and the TCP header. A few "TCP_BPF_SYN*" getsockopts have been added to allow specifying where to start reading from, e.g. from the TCP header or from the IP[46] header.

The new getsockopt(TCP_BPF_SYN*) also knows where it can get the SYN packet from:
- (a) the just-received SYN (available when the bpf prog is writing the SYNACK), which is the only way to get the SYN during syncookie mode, or
- (b) the saved SYN (available in PASSIVE_ESTABLISHED_CB and also other existing CBs).

The bpf prog does not need to know where the SYN pkt is coming from; getsockopt(TCP_BPF_SYN*) hides these details. Similarly, a flag "BPF_LOAD_HDR_OPT_TCP_SYN" is added to bpf_load_hdr_opt() to read a particular header option from the SYN packet.

* Fastopen

Fastopen works the same as the regular non-fastopen case. This is tested in a later patch.

* Syncookie

For syncookie, the later example patch asks the active side's bpf prog to resend the header options in the ACK. The server can use bpf_load_hdr_opt() to look at the options in this received ACK during PASSIVE_ESTABLISHED_CB.

* Active side

The bpf prog gets a chance to write the bpf header option in the SYN packet during WRITE_HDR_OPT_CB. The received SYNACK pkt will also be available to the bpf prog during the existing ACTIVE_ESTABLISHED_CB callback through sock_ops->skb_data and bpf_load_hdr_opt().

* Turn off header CB flags after 3WHS

If the bpf prog does not need to write/parse header options beyond the 3WHS, it can clear the bpf_sock_ops_cb_flags to avoid being called for header options. Or the bpf-prog can choose to leave UNKNOWN_HDR_OPT_CB_FLAG on so that the kernel will only call it when there is an option that the kernel cannot handle.

[1]: draft-wang-tcpm-low-latency-opt-00 https://tools.ietf.org/html/draft-wang-tcpm-low-latency-opt-00

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200820190104.2885895-1-kafai@fb.com
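To make the flow above concrete, here is a rough, stripped-down sockops sketch (not the selftest itself): it enables the write callback flag, reserves room for one option, and writes a 4-byte experimental option (kind 254, magic 0xeB9F). bpf_reserve_hdr_opt() is the companion reservation helper from this series; constant and field names follow the description above.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("sockops")
    int write_exprm_opt(struct bpf_sock_ops *skops)
    {
            /* kind 254, total length 4, 2-byte magic 0xeB9F */
            __u8 opt[4] = { 254, 4, 0xeB, 0x9F };

            switch (skops->op) {
            case BPF_SOCK_OPS_TCP_CONNECT_CB:
            case BPF_SOCK_OPS_TCP_LISTEN_CB:
                    /* opt in to the header-writing callbacks */
                    bpf_sock_ops_cb_flags_set(skops,
                                              skops->bpf_sock_ops_cb_flags |
                                              BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG);
                    break;
            case BPF_SOCK_OPS_HDR_OPT_LEN_CB:
                    /* first pass: tell the kernel how much space we need */
                    bpf_reserve_hdr_opt(skops, sizeof(opt), 0);
                    break;
            case BPF_SOCK_OPS_WRITE_HDR_OPT_CB:
                    /* second pass: write the option into the outgoing segment */
                    bpf_store_hdr_opt(skops, opt, sizeof(opt), 0);
                    break;
            }
            return 1;
    }

    char LICENSE[] SEC("license") = "GPL";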
2020-08-24  bpf: tcp: Add bpf_skops_hdr_opt_len() and bpf_skops_write_hdr_opt()  (Martin KaFai Lau)
The bpf prog needs to parse the SYN header to learn what options have been sent by the peer's bpf-prog before writing its own options into the SYNACK. This patch adds a "syn_skb" arg to tcp_make_synack() and send_synack(). This syn_skb will eventually be made available (as read-only) to the bpf prog. It will be the only SYN packet available to the bpf prog during syncookie. For the other regular cases, the bpf prog can also use the saved_syn. When writing options, the bpf prog will first be called to tell the kernel its required number of bytes. This is done by the new bpf_skops_hdr_opt_len(). The bpf prog will only be called when the new BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG is set in tp->bpf_sock_ops_cb_flags. When the bpf prog returns, the kernel will know how many bytes are needed and then update the "*remaining" arg accordingly. 4-byte alignment will be included in "*remaining" before this function returns. The 4-byte-aligned number of bytes will also be stored in opts->bpf_opt_len; "bpf_opt_len" is a newly added member of struct tcp_out_options. Then the new bpf_skops_write_hdr_opt() will call the bpf prog to write the header options. The bpf prog is only called if it has reserved space earlier (opts->bpf_opt_len > 0). The bpf prog is the last one to get a chance to reserve header space and write header options. These two functions are half implemented to highlight the changes in the TCP stack. The actual code preparing the bpf running context and invoking the bpf prog will be added in a later patch with other necessary bpf pieces. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/bpf/20200820190052.2885316-1-kafai@fb.com
2020-08-24  bpf: tcp: Add bpf_skops_parse_hdr()  (Martin KaFai Lau)
The patch adds a function bpf_skops_parse_hdr(). It calls the bpf prog to parse the TCP header received at a tcp_sock that has at least reached the ESTABLISHED state. For the packets received during the 3WHS (SYN, SYNACK and ACK), the received skb will be available to the bpf prog during the callback in bpf_skops_established() introduced in the previous patch and in the bpf_skops_write_hdr_opt() that will be added in the next patch. Calling the bpf prog to parse the header is controlled by two new flags in tp->bpf_sock_ops_cb_flags: BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG and BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG. When BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG is set, the bpf prog will only be called when there is an unknown option in the TCP header. When BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG is set, the bpf prog will be called on all received TCP headers. This function is half implemented to highlight the changes in the TCP stack. The actual code preparing the bpf running context and invoking the bpf prog will be added in a later patch with other necessary bpf pieces. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/bpf/20200820190046.2885054-1-kafai@fb.com
2020-08-24  tcp: bpf: Add TCP_BPF_RTO_MIN for bpf_setsockopt  (Martin KaFai Lau)
This patch adds bpf_setsockopt(TCP_BPF_RTO_MIN) to allow bpf prog to set the min rto of a connection. It could be used together with the earlier patch which has added bpf_setsockopt(TCP_BPF_DELACK_MAX). A later selftest patch will communicate the max delay ack in a bpf tcp header option and then the receiving side can use bpf_setsockopt(TCP_BPF_RTO_MIN) to set a shorter rto. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200820190027.2884170-1-kafai@fb.com
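A short sockops sketch of the new setsockopt (the value is illustrative; per the uapi comments the option takes microseconds, and a real program would derive it from the peer's advertised max delay ack rather than hard-coding it):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    #ifndef SOL_TCP
    #define SOL_TCP 6
    #endif

    SEC("sockops")
    int set_rto_min(struct bpf_sock_ops *skops)
    {
            int rto_min_us = 20000;         /* 20 ms, illustrative only */

            /* once the connection is established, shrink the min RTO */
            if (skops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB ||
                skops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB)
                    bpf_setsockopt(skops, SOL_TCP, TCP_BPF_RTO_MIN,
                                   &rto_min_us, sizeof(rto_min_us));
            return 1;
    }

    char LICENSE[] SEC("license") = "GPL";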
2020-08-24  tcp: bpf: Add TCP_BPF_DELACK_MAX setsockopt  (Martin KaFai Lau)
This change is mostly taken from an internal patch, adapted from a sysctl config to the bpf_setsockopt setup. The bpf_prog can set the max delay ack by using bpf_setsockopt(TCP_BPF_DELACK_MAX). This max delay ack can be communicated to its peer through a bpf header option. The receiving peer can then use this max delay ack and set a potentially lower rto by using bpf_setsockopt(TCP_BPF_RTO_MIN), which will be introduced in the next patch. Another later selftest patch will also use it like the above to show how to write and parse a bpf tcp header option. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200820190021.2884000-1-kafai@fb.com
2020-08-24  selftests/powerpc: Update PROT_SAO test to skip ISA 3.1  (Shawn Anastasio)
Since SAO support was removed from ISA 3.1, skip the prot_sao test if PPC_FEATURE2_ARCH_3_1 is set. Signed-off-by: Shawn Anastasio <shawn@anastas.io> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200821185558.35561-4-shawn@anastas.io
2020-08-24  Revert "powerpc/64s: Remove PROT_SAO support"  (Shawn Anastasio)
This reverts commit 5c9fa16e8abd342ce04dc830c1ebb2a03abf6c05. Since PROT_SAO can still be useful for certain classes of software, reintroduce it. Concerns about guest migration for LPARs using SAO will be addressed next. Signed-off-by: Shawn Anastasio <shawn@anastas.io> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200821185558.35561-2-shawn@anastas.io
2020-08-23  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)
2020-08-23  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds)
Pull networking fixes from David Miller: "Nothing earth shattering here, lots of small fixes (f.e. missing RCU protection, bad ref counting, missing memset(), etc.) all over the place: 1) Use get_file_rcu() in task_file iterator, from Yonghong Song. 2) There are two ways to set remote source MAC addresses in macvlan driver, but only one of which validates things properly. Fix this. From Alvin Šipraga. 3) Missing of_node_put() in gianfar probing, from Sumera Priyadarsini. 4) Preserve device wanted feature bits across multiple netlink ethtool requests, from Maxim Mikityanskiy. 5) Fix rcu_sched stall in task and task_file bpf iterators, from Yonghong Song. 6) Avoid reset after device destroy in ena driver, from Shay Agroskin. 7) Missing memset() in netlink policy export reallocation path, from Johannes Berg. 8) Fix info leak in __smc_diag_dump(), from Peilin Ye. 9) Decapsulate ECN properly for ipv6 in ipv4 tunnels, from Mark Tomlinson. 10) Fix number of data stream negotiation in SCTP, from David Laight. 11) Fix double free in connection tracker action module, from Alaa Hleihel. 12) Don't allow empty NHA_GROUP attributes, from Nikolay Aleksandrov" * git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (46 commits) net: nexthop: don't allow empty NHA_GROUP bpf: Fix two typos in uapi/linux/bpf.h net: dsa: b53: check for timeout tipc: call rcu_read_lock() in tipc_aead_encrypt_done() net/sched: act_ct: Fix skb double-free in tcf_ct_handle_fragments() error flow net: sctp: Fix negotiation of the number of data streams. dt-bindings: net: renesas, ether: Improve schema validation gre6: Fix reception with IP6_TNL_F_RCV_DSCP_COPY hv_netvsc: Fix the queue_mapping in netvsc_vf_xmit() hv_netvsc: Remove "unlikely" from netvsc_select_queue bpf: selftests: global_funcs: Check err_str before strstr bpf: xdp: Fix XDP mode when no mode flags specified selftests/bpf: Remove test_align leftovers tools/resolve_btfids: Fix sections with wrong alignment net/smc: Prevent kernel-infoleak in __smc_diag_dump() sfc: fix build warnings on 32-bit net: phy: mscc: Fix a couple of spelling mistakes "spcified" -> "specified" libbpf: Fix map index used in error message net: gemini: Fix missing free_netdev() in error path of gemini_ethernet_port_probe() net: atlantic: Use readx_poll_timeout() for large timeout ...
2020-08-22  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini: - PAE and PKU bugfixes for x86 - selftests fix for new binutils - MMU notifier fix for arm64 * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: arm64: Only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is not set KVM: Pass MMU notifier range flags to kvm_unmap_hva_range() kvm: x86: Toggling CR4.PKE does not load PDPTEs in PAE mode kvm: x86: Toggling CR4.SMAP does not load PDPTEs in PAE mode KVM: x86: fix access code passed to gva_to_gpa selftests: kvm: Use a shorter encoding to clear RAX
2020-08-21  libbpf: Normalize and improve logging across a few functions  (Andrii Nakryiko)
Make libbpf logs follow a similar pattern and provide more context, like section name or program name, where appropriate. Also, add a BPF_INSN_SZ constant and use it throughout to clean up the code a little bit. This commit doesn't have any functional changes; it just gets some code changes out of the way before a bigger refactoring of libbpf internals. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820231250.1293069-6-andriin@fb.com
2020-08-21  libbpf: Skip well-known ELF sections when iterating ELF  (Andrii Nakryiko)
Skip, and don't log, ELF sections that libbpf knows about and ignores during ELF processing. This avoids unnecessarily logging details about those ELF sections and cleans up the libbpf debug log. Ignored sections include DWARF data, the string table, an empty .text section and a few special (e.g., .llvm_addrsig) unneeded sections. With such ELF sections out of the way, log unrecognized ELF sections at pr_info level to increase visibility. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820231250.1293069-5-andriin@fb.com
2020-08-21  libbpf: Add __noinline macro to bpf_helpers.h  (Andrii Nakryiko)
__noinline is used pretty frequently, especially with BPF subprograms, so add it alongside __always_inline, for user convenience and completeness. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820231250.1293069-4-andriin@fb.com
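For illustration, a small (hypothetical) BPF program where __noinline keeps the helper function as a genuine BPF-to-BPF subprogram call instead of letting Clang inline it:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* __noinline forces a real subprogram call */
    static __noinline int over_mtu(struct __sk_buff *skb)
    {
            return skb->len > 1500;
    }

    SEC("classifier")
    int drop_oversized(struct __sk_buff *skb)
    {
            return over_mtu(skb) ? 2 /* TC_ACT_SHOT */ : 0 /* TC_ACT_OK */;
    }

    char LICENSE[] SEC("license") = "GPL";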
2020-08-21  libbpf: Factor out common ELF operations and improve logging  (Andrii Nakryiko)
Factor out common ELF operations done throughout libbpf. This simplifies usage across multiple places in libbpf, hides error reporting from higher-level functions, and makes error logging more consistent. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820231250.1293069-3-andriin@fb.com
2020-08-21  selftests/bpf: BPF object files should depend only on libbpf headers  (Andrii Nakryiko)
There is no need to re-build BPF object files whenever any of libbpf's sources change, so record a more precise dependency on only the libbpf bpf_*.h headers. This eliminates unnecessary re-builds. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200820231250.1293069-2-andriin@fb.com
2020-08-21  selftests: bpf: Test sockmap update from BPF  (Lorenz Bauer)
Add a test which copies a socket from a sockmap into another sockmap or sockhash. This exercises bpf_map_update_elem support from BPF context. Compare the socket cookies from source and destination to ensure that the copy succeeded. Also check that the verifier rejects map_update from unsafe contexts. Signed-off-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200821102948.21918-7-lmb@cloudflare.com
2020-08-21  libbpf: Add perf_buffer APIs for better integration with outside epoll loop  (Andrii Nakryiko)
Add a set of APIs to the perf_buffer manager to allow applications to integrate perf buffer polling into existing epoll-based infrastructure. One example is applications that already use libevent and want to plug in perf_buffer polling, instead of relying on perf_buffer__poll() and wasting an extra thread to do it. But perf_buffer is still extremely useful for setting up and consuming perf buffer rings even for such use cases. So to accommodate such new use cases, add three new APIs:

- perf_buffer__buffer_cnt() returns the number of per-CPU buffers maintained by a given instance of the perf_buffer manager;
- perf_buffer__buffer_fd() returns the FD of the perf_event corresponding to a specified per-CPU buffer; this FD can then be polled independently;
- perf_buffer__consume_buffer() consumes data from a single per-CPU buffer, identified by its slot index.

To support a simpler, but less efficient, way to integrate perf_buffer into external polling logic, also expose the underlying epoll FD through the perf_buffer__epoll_fd() API. It needs to be followed by perf_buffer__poll(), wasting an extra syscall, or perf_buffer__consume(), wasting CPU to iterate over buffers with no data; but it can be simpler and more convenient for some cases.

These APIs allow for great flexibility, but do not sacrifice the general usability of perf_buffer. Also exercise and check the new APIs in the perf_buffer selftest.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/bpf/20200821165927.849538-1-andriin@fb.com
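A rough sketch of wiring these APIs into an existing epoll loop (perf_buffer setup and all error handling omitted):

    #include <sys/epoll.h>
    #include <bpf/libbpf.h>

    static void drive_from_epoll(struct perf_buffer *pb)
    {
            int epfd = epoll_create1(0);
            size_t i, n = perf_buffer__buffer_cnt(pb);

            /* register each per-CPU buffer's FD, remembering its slot index */
            for (i = 0; i < n; i++) {
                    struct epoll_event ev = { .events = EPOLLIN, .data.u64 = i };

                    epoll_ctl(epfd, EPOLL_CTL_ADD,
                              perf_buffer__buffer_fd(pb, i), &ev);
            }

            for (;;) {
                    struct epoll_event ev;

                    /* consume only the buffer that actually has data */
                    if (epoll_wait(epfd, &ev, 1, -1) == 1)
                            perf_buffer__consume_buffer(pb, ev.data.u64);
            }
    }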
2020-08-21  bpftool: Implement link_query for bpf iterators  (Yonghong Song)
The link query for bpf iterators is implemented. Besides showing the user which bpf iterator the link represents, the target_name is also used to filter what additional information should be printed out, e.g., whether map_id should be shown or not. The following is an example of a bpf_iter link dump, in plain and pretty output.
$ bpftool link show 11: iter prog 59 target_name task pids test_progs(1749) 34: iter prog 173 target_name bpf_map_elem map_id 127 pids test_progs_1(1753)
$ bpftool -p link show [{ "id": 11, "type": "iter", "prog_id": 59, "target_name": "task", "pids": [{ "pid": 1749, "comm": "test_progs" } ] },{ "id": 34, "type": "iter", "prog_id": 173, "target_name": "bpf_map_elem", "map_id": 127, "pids": [{ "pid": 1753, "comm": "test_progs_1" } ] } ]
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200821184420.574430-1-yhs@fb.com
2020-08-21  bpf: Implement link_query for bpf iterators  (Yonghong Song)
This patch implements the bpf_link callback functions show_fdinfo and fill_link_info to support the link_query interface. The generic show_fdinfo and fill_link_info will print/fill the target_name. Each target can register show_fdinfo and fill_link_info callbacks to print/fill more target-specific information. For example, below is the fdinfo result for a bpf task iterator.
$ cat /proc/1749/fdinfo/7 pos: 0 flags: 02000000 mnt_id: 14 link_type: iter link_id: 11 prog_tag: 990e1f8152f7e54f prog_id: 59 target_name: task
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200821184418.574122-1-yhs@fb.com
2020-08-21  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller)
Alexei Starovoitov says: ==================== pull-request: bpf 2020-08-21 The following pull-request contains BPF updates for your *net* tree. We've added 11 non-merge commits during the last 5 day(s) which contain a total of 12 files changed, 78 insertions(+), 24 deletions(-). The main changes are: 1) three fixes in BPF task iterator logic, from Yonghong. 2) fix for compressed dwarf sections in vmlinux, from Jiri. 3) fix xdp attach regression, from Andrii. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-08-21  bpf: Fix two typos in uapi/linux/bpf.h  (Tobias Klauser)
Also remove trailing whitespaces in bpf_skb_get_tunnel_key example code. Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200821133642.18870-1-tklauser@distanz.ch
2020-08-21  perf: arm-spe: Fix check error when synthesizing events  (Wei Li)
In arm_spe_read_record(), when we are processing an events packet, 'decoder->packet.index' is the length of payload, which has been transformed in payloadlen(). So correct the check of 'idx'. Signed-off-by: Wei Li <liwei391@huawei.com> Reviewed-by: Leo Yan <leo.yan@linaro.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Hanjun Guo <guohanjun@huawei.com> Cc: James Clark <james.clark@arm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lore.kernel.org/lkml/20200724072628.35904-1-liwei391@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-08-21  perf symbols: Add mwait_idle_with_hints.constprop.0 to the list of idle symbols  (Arnaldo Carvalho de Melo)
The "mwait_idle_with_hints" one was already there, some compiler artifact now adds this ".constprop.0" suffix, cover that one too. At some point we need to put these in a special bucket and show it somewhere on the screen. Noticed building the kernel on a fedora:32 system using: gcc version 10.2.1 20200723 (Red Hat 10.2.1-1) (GCC) Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-08-21  perf top: Skip side-band event setup if HAVE_LIBBPF_SUPPORT is not set  (Tiezhu Yang)
When I execute 'perf top' without HAVE_LIBBPF_SUPPORT, there exists the following segmentation fault, skip the side-band event setup to fix it, this is similar with commit 1101c872c8c7 ("perf record: Skip side-band event setup if HAVE_LIBBPF_SUPPORT is not set"). [yangtiezhu@linux perf]$ ./perf top <SNIP> perf: Segmentation fault Obtained 6 stack frames. ./perf(sighandler_dump_stack+0x5c) [0x12011b604] [0xffffffc010] ./perf(perf_mmap__read_init+0x3e) [0x1201feeae] ./perf() [0x1200d715c] /lib64/libpthread.so.0(+0xab9c) [0xffee10ab9c] /lib64/libc.so.6(+0x128f4c) [0xffedc08f4c] Segmentation fault [yangtiezhu@linux perf]$ I use git bisect to find commit b38d85ef49cf ("perf bpf: Decouple creating the evlist from adding the SB event") is the first bad commit, so also add the Fixes tag. Committer testing: First build perf explicitely disabling libbpf: $ make NO_LIBBPF=1 O=/tmp/build/perf -C tools/perf install-bin && perf test python Now make sure it isn't linked: $ perf -vv | grep -w bpf bpf: [ OFF ] # HAVE_LIBBPF_SUPPORT $ $ nm ~/bin/perf | grep libbpf $ And now try to run 'perf top': # perf top perf: Segmentation fault -------- backtrace -------- perf[0x5bcd6d] /lib64/libc.so.6(+0x3ca6f)[0x7fd0f5a66a6f] perf(perf_mmap__read_init+0x1e)[0x5e1afe] perf[0x4cc468] /lib64/libpthread.so.0(+0x9431)[0x7fd0f645a431] /lib64/libc.so.6(clone+0x42)[0x7fd0f5b2b912] # Applying this patch fixes the issue. Fixes: b38d85ef49cf ("perf bpf: Decouple creating the evlist from adding the SB event") Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Xuefeng Li <lixuefeng@loongson.cn> Link: http://lore.kernel.org/lkml/1597753837-16222-1-git-send-email-yangtiezhu@loongson.cn Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-08-21  perf sched timehist: Fix use of CPU list with summary option  (David Ahern)
Do not update thread stats or show idle summary unless CPU is in the list of interest. Fixes: c30d630d1bcfad8d ("perf sched timehist: Add support for filtering on CPU") Signed-off-by: David Ahern <dsahern@kernel.org> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Link: http://lore.kernel.org/lkml/20200817170943.1486-1-dsahern@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-08-21  perf test: Fix basic bpf filtering test  (Sumanth Korikkar)
BPF basic filtering test fails on s390x (when vmlinux debuginfo is utilized instead of /proc/kallsyms) Info: - bpf_probe_load installs the bpf code at do_epoll_wait. - For s390x, do_epoll_wait resolves to 3 functions including inlines. found inline addr: 0x43769e Probe point found: __s390_sys_epoll_wait+6 found inline addr: 0x437290 Probe point found: do_epoll_wait+0 found inline addr: 0x4375d6 Probe point found: __se_sys_epoll_wait+6 - add_bpf_event creates evsel for every probe in a BPF object. This results in 3 evsels. Solution: - Expected result = 50% of the samples to be collected from epoll_wait * number of entries present in the evlist. Committer testing: # perf test 42 42: BPF filter : 42.1: Basic BPF filtering : Ok 42.2: BPF pinning : Ok 42.3: BPF prologue generation : Ok 42.4: BPF relocation checker : Ok # Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.ibm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: bpf@vger.kernel.org Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Sven Schnelle <svens@linux.ibm.com> LPU-Reference: 20200817072754.58344-1-sumanthk@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-08-20  selftests: net: tcp_mmap: Use huge pages in receive path  (Eric Dumazet)
One downside of using TCP rx zerocopy is one extra TLB miss per page after the mapping operation, whereas an application using hugepages with the non-zerocopy recvmsg() does not have to pay these TLB costs. This patch allows the server side to use huge pages for the non-zerocopy case, to allow fair comparisons when both solutions use optimal conditions. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Arjun Roy <arjunroy@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-08-20  selftests: net: tcp_mmap: Use huge pages in send path  (Eric Dumazet)
There are significant gains from using huge pages when available, as shown in [1]. This patch adds mmap_large_buffer() and uses it on the client side (the tx path of this reference tool). A following patch will use the feature on the server side. [1] https://patchwork.ozlabs.org/project/netdev/patch/20200820154359.1806305-1-edumazet@google.com/ Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Arjun Roy <arjunroy@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
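A sketch of the general pattern such a helper can follow (not necessarily the exact code added to tcp_mmap.c): try a hugepage-backed anonymous mapping first and fall back to regular pages.

    #include <sys/mman.h>

    static void *mmap_big(size_t len)
    {
            /* a real helper would first round len up to the huge page size */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

            if (p == MAP_FAILED)    /* no huge pages available/configured */
                    p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            return p;
    }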
2020-08-20  selftests: net: tcp_mmap: use madvise(MADV_DONTNEED)  (Eric Dumazet)
When the TCP_ZEROCOPY_RECEIVE operation was added, I made the mistake of automatically un-mapping prior content before mapping new pages. This has the unfortunate effect of adding potentially long MMU operations (like TLB flushes) while the socket lock is held. Using madvise(MADV_DONTNEED) right after the pages have been used has two benefits: 1) It releases pages sooner, allowing pages to be recycled if they were part of a page pool in a NIC driver. 2) No more long unmap operations while preventing immediate processing of incoming packets. The cost of the added system call is small enough. Arjun will submit a kernel patch allowing opting out of the unmap attempt in tcp_zerocopy_receive(). Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Arjun Roy <arjunroy@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
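The change boils down to something like the following sketch, where addr/len describe the region just filled by getsockopt(TCP_ZEROCOPY_RECEIVE):

    #include <sys/mman.h>
    #include <stdio.h>

    /* release the zerocopy-mapped pages once their data has been consumed,
     * instead of leaving the unmap to the next TCP_ZEROCOPY_RECEIVE call */
    static void release_zc_pages(void *addr, size_t len)
    {
            if (madvise(addr, len, MADV_DONTNEED))
                    perror("madvise(MADV_DONTNEED)");
    }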
2020-08-20  selftests/timers: Turn off timeout setting  (Po-Hsu Lin)
The following 4 tests in timers can take longer than the default 45 seconds that added in commit 852c8cbf34d3 ("selftests/kselftest/runner.sh: Add 45 second timeout per test") to run: * nsleep-lat - 2m7.350s * set-timer-lat - 2m0.66s * inconsistency-check - 1m45.074s * raw_skew - 2m0.013s Thus they will be marked as failed with the current 45s setting: not ok 3 selftests: timers: nsleep-lat # TIMEOUT not ok 4 selftests: timers: set-timer-lat # TIMEOUT not ok 6 selftests: timers: inconsistency-check # TIMEOUT not ok 7 selftests: timers: raw_skew # TIMEOUT Disable the timeout setting for timers can make these tests finish properly: ok 3 selftests: timers: nsleep-lat ok 4 selftests: timers: set-timer-lat ok 6 selftests: timers: inconsistency-check ok 7 selftests: timers: raw_skew https://bugs.launchpad.net/bugs/1864626 Fixes: 852c8cbf34d3 ("selftests/kselftest/runner.sh: Add 45 second timeout per test") Signed-off-by: Po-Hsu Lin <po-hsu.lin@canonical.com> Acked-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
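The kselftest runner honours a per-directory "settings" file, so the fix amounts to a one-line file along these lines (path assumed to be tools/testing/selftests/timers/settings):

    timeout=0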
2020-08-20  bpf: selftests: global_funcs: Check err_str before strstr  (Yauheni Kaliuta)
The error path in libbpf.c:load_program() has calls to pr_warn() which, for the global_funcs tests, end up in test_global_funcs.c:libbpf_debug_print(). For the tests whose struct test_def::err_str is not initialized with a string, this causes a call to strstr() with NULL as the second argument, which segfaults. Fix it by calling strstr() only for a non-NULL err_str. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820115843.39454-1-yauheni.kaliuta@redhat.com
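The gist of the guard, as a hedged sketch with hypothetical names:

    #include <stdbool.h>
    #include <string.h>

    /* only search the captured libbpf warning when the test declares an
     * expected error string, so strstr() is never handed a NULL needle */
    static bool log_matches(const char *log_buf, const char *exp_err_str)
    {
            return exp_err_str && strstr(log_buf, exp_err_str);
    }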
2020-08-20  selftests/bpf: Remove test_align leftovers  (Veronika Kabatova)
Calling generic selftests "make install" fails as rsync expects all files from TEST_GEN_PROGS to be present. The binary is not generated anymore (commit 3b09d27cc93d) so we can safely remove it from there and also from gitignore. Fixes: 3b09d27cc93d ("selftests/bpf: Move test_align under test_progs") Signed-off-by: Veronika Kabatova <vkabatov@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Link: https://lore.kernel.org/bpf/20200819160710.1345956-1-vkabatov@redhat.com
2020-08-20  tools/resolve_btfids: Fix sections with wrong alignment  (Jiri Olsa)
The data of compressed section should be aligned to 4 (for 32bit) or 8 (for 64 bit) bytes. The binutils ld sets sh_addralign to 1, which makes libelf fail with misaligned section error during the update as reported by Jesper: FAILED elf_update(WRITE): invalid section alignment While waiting for ld fix, we can fix compressed sections sh_addralign value manually. Adding warning in -vv mode when the fix is triggered: $ ./tools/bpf/resolve_btfids/resolve_btfids -vv vmlinux ... section(36) .comment, size 44, link 0, flags 30, type=1 section(37) .debug_aranges, size 45684, link 0, flags 800, type=1 - fixing wrong alignment sh_addralign 16, expected 8 section(38) .debug_info, size 129104957, link 0, flags 800, type=1 - fixing wrong alignment sh_addralign 1, expected 8 section(39) .debug_abbrev, size 1152583, link 0, flags 800, type=1 - fixing wrong alignment sh_addralign 1, expected 8 section(40) .debug_line, size 7374522, link 0, flags 800, type=1 - fixing wrong alignment sh_addralign 1, expected 8 section(41) .debug_frame, size 702463, link 0, flags 800, type=1 section(42) .debug_str, size 1017571, link 0, flags 830, type=1 - fixing wrong alignment sh_addralign 1, expected 8 section(43) .debug_loc, size 3019453, link 0, flags 800, type=1 - fixing wrong alignment sh_addralign 1, expected 8 section(44) .debug_ranges, size 1744583, link 0, flags 800, type=1 - fixing wrong alignment sh_addralign 16, expected 8 section(45) .symtab, size 2955888, link 46, flags 0, type=2 section(46) .strtab, size 2613072, link 0, flags 0, type=3 ... update ok for vmlinux Another workaround is to disable compressed debug info data CONFIG_DEBUG_INFO_COMPRESSED kernel option. Fixes: fbbb68de80a4 ("bpf: Add resolve_btfids tool to resolve BTF IDs in ELF object") Reported-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Yonghong Song <yhs@fb.com> Cc: Mark Wielaard <mjw@redhat.com> Cc: Nick Clifton <nickc@redhat.com> Link: https://lore.kernel.org/bpf/20200819092342.259004-1-jolsa@kernel.org
2020-08-20  selftests/bpf: List newest Clang built-ins needed for some CO-RE selftests  (Andrii Nakryiko)
Record which built-ins are optional and needed for some of the recent BPF CO-RE subtests. Document the Clang diff that fixed a corner-case issue with __builtin_btf_type_id(). Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-4-andriin@fb.com
2020-08-20  selftests/bpf: Fix two minor compilation warnings reported by GCC 4.9  (Andrii Nakryiko)
GCC 4.9 seems to be stricter in some regards. Fix two minor issues it reported. Fixes: 1c1052e0140a ("tools/testing/selftests/bpf: Add self-tests for new helper bpf_get_ns_current_pid_tgid.") Fixes: 2d7824ffd25c ("selftests: bpf: Add test for sk_assign") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-3-andriin@fb.com
2020-08-20  libbpf: Fix libbpf build on compilers missing __builtin_mul_overflow  (Andrii Nakryiko)
GCC compilers older than version 5 don't support __builtin_mul_overflow yet. Given that GCC 4.9 is the minimal supported compiler for building the kernel, and that libbpf is a dependency of resolve_btfids, which is a dependency of CONFIG_DEBUG_INFO_BTF=y, this needs to be handled. This patch fixes the issue by falling back to slower detection of integer overflow in such cases. Fixes: 029258d7b228 ("libbpf: Remove any use of reallocarray() in libbpf") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-2-andriin@fb.com
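A sketch of such a fallback (not necessarily the exact libbpf code): detect the multiplication overflow manually when the built-in is unavailable.

    #include <stdint.h>
    #include <stdlib.h>

    static void *reallocarray_compat(void *ptr, size_t nmemb, size_t size)
    {
            if (size && nmemb > SIZE_MAX / size)
                    return NULL;            /* nmemb * size would overflow */
            return realloc(ptr, nmemb * size);
    }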
2020-08-20  libbpf: Fix detection of BPF helper call instruction  (Andrii Nakryiko)
BPF_CALL | BPF_JMP32 is explicitly not allowed by the verifier for BPF helper calls, so don't detect it as a valid call. Also drop the check on the func_id pointer, as it's currently always non-null. Fixes: 109cea5a594f ("libbpf: Sanitize BPF program code for bpf_probe_read_{kernel, user}[_str]") Reported-by: Yonghong Song <yhs@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-1-andriin@fb.com
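A rough sketch of the corrected check (field and constant names from uapi linux/bpf.h):

    #include <linux/bpf.h>
    #include <stdbool.h>

    /* a helper call is exactly BPF_JMP|BPF_CALL with src_reg 0;
     * BPF_JMP32|BPF_CALL and BPF_PSEUDO_CALL (BPF-to-BPF) must not match */
    static bool insn_is_helper_call(const struct bpf_insn *insn)
    {
            return insn->code == (BPF_JMP | BPF_CALL) && insn->src_reg == 0;
    }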
2020-08-20  libbpf: Fix map index used in error message  (Toke Høiland-Jørgensen)
The error message emitted by bpf_object__init_user_btf_maps() was using the wrong section ID. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819110534.9058-1-toke@redhat.com
2020-08-20  selftests/bpf: Add bpffs preload test.  (Alexei Starovoitov)
Add a test that mounts two bpffs instances and checks progs.debug and maps.debug for sanity data. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200819042759.51280-5-alexei.starovoitov@gmail.com
2020-08-20  bpf: Add kernel module with user mode driver that populates bpffs.  (Alexei Starovoitov)
Add kernel module with user mode driver that populates bpffs with BPF iterators. $ mount bpffs /my/bpffs/ -t bpf $ ls -la /my/bpffs/ total 4 drwxrwxrwt 2 root root 0 Jul 2 00:27 . drwxr-xr-x 19 root root 4096 Jul 2 00:09 .. -rw------- 1 root root 0 Jul 2 00:27 maps.debug -rw------- 1 root root 0 Jul 2 00:27 progs.debug The user mode driver will load BPF Type Formats, create BPF maps, populate BPF maps, load two BPF programs, attach them to BPF iterators, and finally send two bpf_link IDs back to the kernel. The kernel will pin two bpf_links into newly mounted bpffs instance under names "progs.debug" and "maps.debug". These two files become human readable. $ cat /my/bpffs/progs.debug id name attached 11 dump_bpf_map bpf_iter_bpf_map 12 dump_bpf_prog bpf_iter_bpf_prog 27 test_pkt_access 32 test_main test_pkt_access test_pkt_access 33 test_subprog1 test_pkt_access_subprog1 test_pkt_access 34 test_subprog2 test_pkt_access_subprog2 test_pkt_access 35 test_subprog3 test_pkt_access_subprog3 test_pkt_access 36 new_get_skb_len get_skb_len test_pkt_access 37 new_get_skb_ifindex get_skb_ifindex test_pkt_access 38 new_get_constant get_constant test_pkt_access The BPF program dump_bpf_prog() in iterators.bpf.c is printing this data about all BPF programs currently loaded in the system. This information is unstable and will change from kernel to kernel as ".debug" suffix conveys. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200819042759.51280-4-alexei.starovoitov@gmail.com
2020-08-20  libbpf: Simplify the return expression of build_map_pin_path()  (Xu Wang)
Simplify the return expression. Signed-off-by: Xu Wang <vulab@iscas.ac.cn> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819025324.14680-1-vulab@iscas.ac.cn
2020-08-19  selftests/bpf: Add tests for ENUMVAL_EXISTS/ENUMVAL_VALUE relocations  (Andrii Nakryiko)
Add tests validating existence and value relocations for enum value-based relocations. If __builtin_preserve_enum_value() built-in is not supported, skip tests. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-6-andriin@fb.com
2020-08-19  libbpf: Implement enum value-based CO-RE relocations  (Andrii Nakryiko)
Implement two relocations of the new enumerator value-based CO-RE relocation kind: ENUMVAL_EXISTS and ENUMVAL_VALUE. The first, ENUMVAL_EXISTS, allows detecting the presence of a named enumerator value in the target (kernel) BTF. This is useful for doing BPF helper/map/program type support detection from the BPF program side. The bpf_core_enum_value_exists() macro helper is provided to simplify built-in usage. The second, ENUMVAL_VALUE, allows capturing the enumerator's integer value and relocating it according to the target BTF, if it changes. This is useful as a guarantee against intentional or accidental re-ordering/re-numbering of some of the internal (non-UAPI) enumerations, where kernel developers don't care about UAPI backwards compatibility concerns. bpf_core_enum_value() allows capturing this succinctly and using correct enum values in code. LLVM uses the ldimm64 instruction to capture enumerator value-based relocations, so add support for ldimm64 instruction patching as well. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-5-andriin@fb.com
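For illustration, a sketch using the two macro helpers (the enum and enumerator chosen here are only an example; vmlinux.h is the generated BTF header):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    SEC("raw_tp/sys_enter")
    int probe_enum(void *ctx)
    {
            /* resolved at load time against the running kernel's BTF */
            if (bpf_core_enum_value_exists(enum bpf_func_id, BPF_FUNC_ringbuf_output)) {
                    long val = bpf_core_enum_value(enum bpf_func_id, BPF_FUNC_ringbuf_output);

                    bpf_printk("ringbuf_output helper id: %ld", val);
            }
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";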
2020-08-19  selftests/bpf: Add CO-RE relo test for TYPE_ID_LOCAL/TYPE_ID_TARGET  (Andrii Nakryiko)
Add tests for BTF type ID relocations. To allow testing this, enhance core_relo.c test runner to allow dynamic initialization of test inputs. If Clang doesn't have necessary support for new functionality, test is skipped. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-4-andriin@fb.com
2020-08-19  selftests/bpf: Test TYPE_EXISTS and TYPE_SIZE CO-RE relocations  (Andrii Nakryiko)
Add selftests for TYPE_EXISTS and TYPE_SIZE relocations, testing the correctness of relocations and the handling of type compatibility/incompatibility. If __builtin_preserve_type_info() is not supported by the compiler, the tests are skipped. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-3-andriin@fb.com
2020-08-19  libbpf: Implement type-based CO-RE relocations support  (Andrii Nakryiko)
Implement support for the TYPE_EXISTS/TYPE_SIZE/TYPE_ID_LOCAL/TYPE_ID_REMOTE relocations. These are examples of type-based relocations, as opposed to the field-based relocations supported already. The difference is that they calculate relocation values based on the type itself, not on a field within a struct/union. Type-based relos have slightly different semantics when matching local types to kernel target types; see the comments in bpf_core_types_are_compat() for details. Their behavior on failure to find the target type in kernel BTF also differs. Instead of "poisoning" the relocatable instruction and subsequently failing the load in the kernel, they return 0 (which is rarely a valid return result, so user BPF code can use that to detect success/failure of the relocation and deal with it without extra "guarding" relocations). Also, it's always possible to check the existence of a type in the target kernel with the TYPE_EXISTS relocation, similarly to the field-based FIELD_EXISTS. The TYPE_ID_LOCAL relocation is a bit special in that it always succeeds (barring any libbpf/Clang bugs) and is resolved to a BTF ID using the **local** BTF info of the BPF program itself. Tests in subsequent patches demonstrate the usage and semantics of the new relocations. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-2-andriin@fb.com
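libbpf's bpf_core_read.h provides convenience macros wrapping these relocation kinds; a rough sketch of their use (the struct is purely an example, vmlinux.h is the generated BTF header):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    SEC("raw_tp/sys_enter")
    int probe_types(void *ctx)
    {
            if (bpf_core_type_exists(struct tcp_congestion_ops)) {
                    __u32 sz = bpf_core_type_size(struct tcp_congestion_ops);
                    __u32 local_id = bpf_core_type_id_local(struct tcp_congestion_ops);
                    __u32 kern_id = bpf_core_type_id_kernel(struct tcp_congestion_ops);

                    bpf_printk("sz=%u id_local=%u id_kern=%u", sz, local_id, kern_id);
            }
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";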