author     Suchit Karunakaran <suchitkarunakaran@gmail.com>   2025-07-27 13:47:54 +0530
committer  Alexei Starovoitov <ast@kernel.org>                2025-07-28 10:02:57 -0700
commit     5b4c54ac49af7f486806d79e3233fc8a9363961c (patch)
tree       b3e1e83497fc76624963d37a1737c324883051ce
parent     a9f8d8adcb0905cff282804a0ef8bdc231da0ad2 (diff)
bpf: Fix various typos in verifier.c comments
This patch fixes several minor typos in comments within the BPF verifier.
No changes in functionality.
Signed-off-by: Suchit Karunakaran <suchitkarunakaran@gmail.com>
Link: https://lore.kernel.org/r/20250727081754.15986-1-suchitkarunakaran@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
 kernel/bpf/verifier.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 72e3f2b033496..e50d3d43be67f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4557,7 +4557,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
  * . if (scalar cond K|scalar)
  * . helper_call(.., scalar, ...) where ARG_CONST is expected
  * backtrack through the verifier states and mark all registers and
- * stack slots with spilled constants that these scalar regisers
+ * stack slots with spilled constants that these scalar registers
  * should be precise.
  * . during state pruning two registers (or spilled stack slots)
  *   are equivalent if both are not precise.
@@ -18489,7 +18489,7 @@ static void clean_verifier_state(struct bpf_verifier_env *env,
 /* the parentage chains form a tree.
  * the verifier states are added to state lists at given insn and
  * pushed into state stack for future exploration.
- * when the verifier reaches bpf_exit insn some of the verifer states
+ * when the verifier reaches bpf_exit insn some of the verifier states
  * stored in the state lists have their final liveness state already,
  * but a lot of states will get revised from liveness point of view when
  * the verifier explores other branches.
@@ -19205,7 +19205,7 @@ static bool is_iter_next_insn(struct bpf_verifier_env *env, int insn_idx)
  * terminology) calls specially: as opposed to bounded BPF loops, it *expects*
  * states to match, which otherwise would look like an infinite loop. So while
  * iter_next() calls are taken care of, we still need to be careful and
- * prevent erroneous and too eager declaration of "ininite loop", when
+ * prevent erroneous and too eager declaration of "infinite loop", when
  * iterators are involved.
  *
  * Here's a situation in pseudo-BPF assembly form:
@@ -19247,7 +19247,7 @@ static bool is_iter_next_insn(struct bpf_verifier_env *env, int insn_idx)
  *
  * This approach allows to keep infinite loop heuristic even in the face of
  * active iterator. E.g., C snippet below is and will be detected as
- * inifintely looping:
+ * infinitely looping:
  *
  *   struct bpf_iter_num it;
  *   int *p, x;
@@ -24488,7 +24488,7 @@ static int compute_scc(struct bpf_verifier_env *env)
  *   if pre[i] == 0:
  *     recur(i)
  *
- * Below implementation replaces explicit recusion with array 'dfs'.
+ * Below implementation replaces explicit recursion with array 'dfs'.
  */
	for (i = 0; i < insn_cnt; i++) {
		if (pre[i])
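
The comment touched by the last hunk describes how compute_scc() replaces explicit DFS recursion with a 'dfs' array used as a stack. For illustration only, here is a minimal, self-contained userspace sketch of that transformation, assuming a toy adjacency-matrix graph; the names (graph, pre, dfs, dfs_iterative) and the bookkeeping are hypothetical and do not mirror the verifier's actual implementation.

```c
/*
 * Hypothetical sketch: replace the recursive preorder walk
 *
 *     recur(i):
 *         pre[i] = next_preorder_num++
 *         for each successor w of i:
 *             if pre[w] == 0:
 *                 recur(w)
 *
 * with an explicit 'dfs' array used as a stack, so no call-stack
 * recursion is needed. Not the verifier's compute_scc().
 */
#include <stdio.h>

#define N 6

/* toy adjacency matrix; graph[i][w] != 0 means an edge i -> w */
static const int graph[N][N] = {
	{0, 1, 0, 0, 0, 0},
	{0, 0, 1, 1, 0, 0},
	{1, 0, 0, 0, 0, 0},
	{0, 0, 0, 0, 1, 0},
	{0, 0, 0, 0, 0, 1},
	{0, 0, 0, 0, 0, 0},
};

static int pre[N];			/* preorder numbers, 0 == unvisited */
static int next_preorder_num = 1;

static void dfs_iterative(int root)
{
	int dfs[N];			/* explicit stack replacing recursion */
	int dfs_sz = 0;
	int i, w;

	pre[root] = next_preorder_num++;
	dfs[dfs_sz++] = root;
	while (dfs_sz) {
		i = dfs[dfs_sz - 1];
		/* find first unvisited successor of the stack top */
		for (w = 0; w < N && (!graph[i][w] || pre[w]); w++)
			;
		if (w == N) {
			dfs_sz--;	/* all successors done: "return" */
			continue;
		}
		pre[w] = next_preorder_num++;
		dfs[dfs_sz++] = w;	/* "recurse" into w */
	}
}

int main(void)
{
	int i;

	for (i = 0; i < N; i++)
		if (!pre[i])
			dfs_iterative(i);
	for (i = 0; i < N; i++)
		printf("pre[%d] = %d\n", i, pre[i]);
	return 0;
}
```

Popping an entry once it has no unvisited successors plays the role of returning from recur(), so the visit order matches the recursive version while using only the fixed-size dfs array. This sketch rescans a node's successors from index 0 on every step; a real implementation would typically remember per-node progress instead.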