author    Jakub Kicinski <kuba@kernel.org>    2025-07-21 17:48:36 -0700
committer Jakub Kicinski <kuba@kernel.org>    2025-07-21 17:48:36 -0700
commit    e8c24e23c4c9f515b02dde01de9ec2c945b2fddc (patch)
tree      3ffc1f020184b13f91b038aaa7a1a8accb8a661c /net/mptcp/protocol.c
parent    1b02c861714bf28814926d1fcb3c5594de960757 (diff)
parent    154e56a77d81ad49eb979aa64463d5fcfad51136 (diff)
Merge branch 'mptcp-add-tcp_maxseg-sockopt-support'
Matthieu Baerts says:

====================
mptcp: add TCP_MAXSEG sockopt support

The TCP_MAXSEG socket option was not supported by MPTCP, mainly because
it had never been requested before. But there are still valid use-cases,
e.g. with HAProxy.

- Patch 1 is a small cleanup in the MPTCP sockopt file.
- Patch 2 exposes some code from TCP, to avoid duplicating it in MPTCP.
- Patch 3 adds TCP_MAXSEG sockopt support in MPTCP.
- Patch 4 is not related to the others; it fixes a typo in a comment.

Note that the new TCP_MAXSEG sockopt support has been validated by a new
packetdrill script on the MPTCP CI:
https://github.com/multipath-tcp/packetdrill/pull/161

v1: https://lore.kernel.org/20250716-net-next-mptcp-tcp_maxseg-v1-0-548d3a5666f6@kernel.org
====================

Link: https://patch.msgid.link/20250719-net-next-mptcp-tcp_maxseg-v2-0-8c910fbc5307@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net/mptcp/protocol.c')
 net/mptcp/protocol.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 2ad1c41e963e..6c448a0be949 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1387,7 +1387,7 @@ struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
* - estimate the faster flow linger time
* - use the above to estimate the amount of byte transferred
* by the faster flow
- * - check that the amount of queued data is greter than the above,
+ * - check that the amount of queued data is greater than the above,
* otherwise do not use the picked, slower, subflow
* We select the subflow with the shorter estimated time to flush
* the queued mem, which basically ensure the above. We just need