We will convert inet6_rtm_newaddr() to per-netns RTNL.
Except for IFA_F_OPTIMISTIC, cfg.ifa_flags can be set before
__dev_get_by_index().
Let's move ifa_flags setup before __dev_get_by_index() so that
we can set ifa_flags without RTNL.
Also, the ifa_flags setup is now moved before the tb[IFA_CACHEINFO]
handling in preparation for the next patch.
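In sketch form, the reordering looks like this (simplified, not the
literal diff):

  static int inet6_rtm_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh,
                               struct netlink_ext_ack *extack)
  {
          ...
          /* ifa_flags comes straight from the attributes, no RTNL needed */
          cfg.ifa_flags = tb[IFA_FLAGS] ? nla_get_u32(tb[IFA_FLAGS]) :
                                          ifm->ifa_flags;
          ...
          dev = __dev_get_by_index(net, ifm->ifa_index);
          ...
  }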
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250115080608.28127-9-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
inet6_addr_add() is called from inet6_rtm_newaddr() and
addrconf_add_ifaddr().
inet6_addr_add() looks up dev by __dev_get_by_index(), but
the lookup is already done in inet6_rtm_newaddr().
Let's move the 2nd lookup to addrconf_add_ifaddr() and pass
dev to inet6_addr_add().
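Roughly, the resulting caller looks like this (sketch, with error
handling and cfg setup trimmed):

  static int addrconf_add_ifaddr(struct net *net, void __user *arg)
  {
          struct net_device *dev;
          ...
          dev = __dev_get_by_index(net, ireq.ifr6_ifindex);
          if (!dev)
                  return -ENODEV;

          return inet6_addr_add(net, dev, &cfg);
  }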
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250115080608.28127-8-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
These functions are called from inet6_ioctl() with a socket's netns
and hold RTNL.
* SIOCSIFADDR : addrconf_add_ifaddr()
* SIOCDIFADDR : addrconf_del_ifaddr()
* SIOCSIFDSTADDR : addrconf_set_dstaddr()
Let's use rtnl_net_lock().
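The conversion itself is mechanical, e.g. (sketch):

  -       rtnl_lock();
  +       rtnl_net_lock(net);
          err = inet6_addr_add(net, ...);
  -       rtnl_unlock();
  +       rtnl_net_unlock(net);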
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250115080608.28127-7-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
addrconf_init() holds RTNL for blackhole_netdev, which is the global
device in init_net.
addrconf_cleanup() holds RTNL to clean up devices in init_net too.
Let's use rtnl_net_lock(&init_net) there.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250115080608.28127-6-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
addrconf_dad_work() is per-address work and holds RTNL internally.
We can fetch netns as dev_net(ifp->idev->dev).
Let's use rtnl_net_lock().
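In sketch form (dad_work is the delayed work member of struct
inet6_ifaddr):

  static void addrconf_dad_work(struct work_struct *w)
  {
          struct inet6_ifaddr *ifp = container_of(to_delayed_work(w),
                                                  struct inet6_ifaddr,
                                                  dad_work);
          struct net *net = dev_net(ifp->idev->dev);

          rtnl_net_lock(net);
          ...
          rtnl_net_unlock(net);
  }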
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250115080608.28127-5-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
addrconf_verify_work() is per-netns work to call addrconf_verify_rtnl()
under RTNL.
Let's use rtnl_net_lock().
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250115080608.28127-4-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The net.ipv6.conf.${DEV}.XXX sysctls are changed under RTNL:
* forwarding
* ignore_routes_with_linkdown
* disable_ipv6
* proxy_ndp
* addr_gen_mode
* stable_secret
* disable_policy
Let's use rtnl_net_lock() there.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250115080608.28127-3-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
We will convert rtnl_lock() to rtnl_net_lock(), and we want to
convert __in6_dev_get() too.
__in6_dev_get() uses rcu_dereference_rtnl(), but as written in its
comment, rtnl_dereference() or rcu_dereference() is preferable.
Let's add __in6_dev_get_rtnl_net() that uses rtnl_net_dereference().
We can add the RCU version helper later if needed.
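The helper is essentially (sketch):

  static inline struct inet6_dev *
  __in6_dev_get_rtnl_net(const struct net_device *dev)
  {
          return rtnl_net_dereference(dev_net(dev), dev->ip6_ptr);
  }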
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250115080608.28127-2-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
After commit df542f669307 ("net: stmmac: Switch to zero-copy in
non-XDP RX path"), SKBs are always marked for recycle, so it is
redundant to mark them again when new frags are appended.
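Roughly, in the frag-append path (names taken from the stmmac RX path;
exact line placement may differ):

          skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
                          buf->page, buf->page_offset, buf1_len,
                          priv->dma_conf.dma_buf_sz);
  -       skb_mark_for_recycle(skb);

The mark set when the skb is first built is enough: page_pool
recycling is a per-skb flag, not per-frag.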
Signed-off-by: Furong Xu <0x1207@gmail.com>
Link: https://patch.msgid.link/20250117062805.192393-1-0x1207@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
With two different models of USB network card (using the r8152 and
asix drivers), ethtool reports Speed = 10Mb/s even when no network
cable is connected. The problem reproduces on Linux 3.10, 4.19, 5.4,
and 6.12, and also exists on the latest kernel. Both drivers call
mii_ethtool_get_link_ksettings(), but the value of cmd->base.speed in
this function can only be SPEED_1000, SPEED_100, or SPEED_10.
When the network cable is not connected, set
cmd->base.speed = SPEED_UNKNOWN.
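A plausible shape for the fix in mii_ethtool_get_link_ksettings()
(a sketch, not necessarily the exact patch):

          if (!mii_link_ok(mii)) {
                  cmd->base.speed = SPEED_UNKNOWN;
                  cmd->base.duplex = DUPLEX_UNKNOWN;
          }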
Signed-off-by: Xiangqian Zhang <zhangxiangqian@kylinos.cn>
Link: https://patch.msgid.link/20250117094603.4192594-1-zhangxiangqian@kylinos.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Merge cpupower utility updates for 6.14:
- Fix TSC MHz calculation in cpupower (He Rongguang).
- Add install and uninstall options to bindings Makefile and add header
changes for cpufreq.h to SWIG bindings in cpupower (John B. Wyatt IV).
- Add missing residency header changes in cpuidle.h to SWIG bindings in
cpupower (John B. Wyatt IV).
- Add output files to .gitignore and clean them up in "make clean" in
selftests/cpufreq (Li Zhijian).
- Fix cross-compilation in cpupower Makefile (Peng Fan).
- Revise the is_valid flag handling for idle_monitor in the cpupower
utility (wangfushuai).
- Extend and clean up AMD processors support in cpupower (Mario
Limonciello).
* pm-tools:
pm: cpupower: Add missing residency header changes in cpuidle.h to SWIG
pm: cpupower: Add header changes for cpufreq.h to SWIG bindings
pm: cpupower: Add install and uninstall options to bindings makefile
cpupower: Adjust whitespace for amd-pstate specific prints
cpupower: Don't fetch maximum latency when EPP is enabled
cpupower: Add support for showing energy performance preference
cpupower: Don't try to read frequency from hardware when kernel uses aperfmperf
cpupower: Add support for amd-pstate preferred core rankings
cpupower: Add support for parsing 'enabled' or 'disabled' strings from table
cpupower: Remove spurious return statement
cpupower: fix TSC MHz calculation
cpupower: revise is_valid flag handling for idle_monitor
pm: cpupower: Makefile: Fix cross compilation
selftests/cpufreq: gitignore output files and clean them in make clean
|
|
Since the dccp and llc makefiles already gate sysctl code
compilation with xxx-$(CONFIG_SYSCTL),
we can drop the checks.
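That is, with the object already gated in the makefile:

  dccp-$(CONFIG_SYSCTL) += sysctl.o

guards of this shape inside the sysctl sources are redundant and can
go (sketch):

  -#ifdef CONFIG_SYSCTL
   ...
  -#endif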
Signed-off-by: Denis Kirjanov <kirjanov@gmail.com>
Link: https://patch.msgid.link/20250119134254.19250-1-kirjanov@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Merge tag 'nf-next-25-01-19' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next
Pablo Neira Ayuso says:
====================
Netfilter updates for net-next
The following batch contains Netfilter updates for net-next:
1) Unbreak set size settings for rbtree set backend. Intervals in
rbtree are represented as two elements; this detail is leaked to
userspace, leading to bogus ENOSPC from the control plane.
2) Remove dead code in br_netfilter's br_nf_pre_routing_finish()
due to a never-matching error when looking up a route,
from Antoine Tenart.
3) Simplify check for device already in use in flowtable,
from Phil Sutter.
4) Three patches to restore the interface name field in struct nft_hook
and use it; this is to prepare for wildcard interface support.
From Phil Sutter.
5) Do not remove netdev basechain when last device is gone; this is
for consistency with the flowtable behaviour. This allows for netdev
basechains without devices. Another patch to simplify netdev event
notifier after this update. Also from Phil.
6) Two patches to add missing spinlock when flowtable updates TCP
state flags, from Florian Westphal.
7) Simplify __nf_ct_refresh_acct() by removing skbuff parameter,
also from Florian.
8) Flowtable gc now extends ct timeout for offloaded flow. This
is to address a possible race that leads to handing over flow
to classic path with long ct timeouts.
9) Tear down flow if cached rt_mtu is stale. Before this patch, the
packet was handed over to the classic path but the flow entry still
remained in place.
10) Revisit the flowtable teardown strategy, which was originally
designed to release flowtable hardware entries early. Add a new
CLOSING flag that still allows hardware to release entries when
fin/rst is seen, but keeps the flow entry in place when the
TCP connection is closed. Release flow after timeout or when a new
syn packet is seen for TCP reopen scenario.
* tag 'nf-next-25-01-19' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next:
netfilter: flowtable: add CLOSING state
netfilter: flowtable: teardown flow if cached mtu is stale
netfilter: conntrack: rework offload nf_conn timeout extension logic
netfilter: conntrack: remove skb argument from nf_ct_refresh
netfilter: nft_flow_offload: update tcp state flags under lock
netfilter: nft_flow_offload: clear tcp MAXACK flag before moving to slowpath
netfilter: nf_tables: Simplify chain netdev notifier
netfilter: nf_tables: Tolerate chains with no remaining hooks
netfilter: nf_tables: Compare netdev hooks based on stored name
netfilter: nf_tables: Use stored ifname in netdev hook dumps
netfilter: nf_tables: Store user-defined hook ifname
netfilter: nf_tables: Flowtable hook's pf value never varies
netfilter: br_netfilter: remove unused conditional and dead code
netfilter: nf_tables: fix set size with rbtree backend
====================
Link: https://patch.msgid.link/20250119172051.8261-1-pablo@netfilter.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Merge devfreq and OPP (Operating Performance Points) updates for 6.14:
- Clean up the Exynos devfreq driver and devfreq core (Markus Elfring,
Jeongjun Park).
- Minor cleanups and fixes for OPP (Dan Carpenter, Neil Armstrong, Joe
Hattori).
- Implement dev_pm_opp_get_bw() (Neil Armstrong).
- Expose OPP reference counting helpers for Rust (Viresh Kumar).
* pm-devfreq:
PM / devfreq: exynos: remove unused function parameter
PM / devfreq: event: Call of_node_put() only once in devfreq_event_get_edev_by_phandle()
* pm-opp:
PM / OPP: Add reference counting helpers for Rust implementation
OPP: OF: Fix an OF node leak in _opp_add_static_v2()
OPP: fix dev_pm_opp_find_bw_*() when bandwidth table not initialized
OPP: add index check to assert to avoid buffer overflow in _read_freq()
opp: core: Fix off by one in dev_pm_opp_get_bw()
opp: core: implement dev_pm_opp_get_bw
|
|
Jakub Kicinski says:
====================
net: ethtool: fixes for HDS threshold
Quick follow up on the HDS threshold work, since the merge window
is upon us.
Fix the bnxt implementation to apply the settings right away.
Because we update the parameters _after_ configuring the HW, the
user needed to reconfigure the device twice to get the settings
to stick.
For this I took the liberty of moving the config to a separate
struct. This follows my original thinking for the queue API.
It should also fit neatly into how the many drivers that
support safe config updates operate. Drivers can allocate
new objects using the "pending" struct.
netdevsim:
KTAP version 1
1..7
ok 1 hds.get_hds
ok 2 hds.get_hds_thresh
ok 3 hds.set_hds_disable
ok 4 hds.set_hds_enable
ok 5 hds.set_hds_thresh_zero
ok 6 hds.set_hds_thresh_max
ok 7 hds.set_hds_thresh_gt
# Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0
bnxt:
KTAP version 1
1..7
ok 1 hds.get_hds
ok 2 hds.get_hds_thresh
ok 3 hds.set_hds_disable # SKIP disabling of HDS not supported by the device
ok 4 hds.set_hds_enable
ok 5 hds.set_hds_thresh_zero
ok 6 hds.set_hds_thresh_max
ok 7 hds.set_hds_thresh_gt
# Totals: pass:6 fail:0 xfail:0 xpass:0 skip:1 error:0
v1: https://lore.kernel.org/20250117194815.1514410-1-kuba@kernel.org
====================
Link: https://patch.msgid.link/20250119020518.1962249-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
300-400B RPC requests are fairly common. With the current default
of 256B HDS threshold bnxt ends up splitting those, lowering PCIe
bandwidth efficiency and increasing the number of memory allocations.
Increase the HDS threshold to fit 4 buffers in a 4k page.
This works out to 640B as the threshold on a typical kernel config.
This change increases performance by 4.5% in a microbenchmark
which receives 400B RPCs and sends empty responses.
Admittedly this is just a single benchmark, but 256B works out to
just 6 (so 2 more) packets per head page, because shinfo size
dominates the headers.
Now that we use page pool for the header pages I was also tempted
to default rx_copybreak to 0, but in synthetic testing the copybreak
size doesn't seem to make much difference.
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Link: https://patch.msgid.link/20250119020518.1962249-8-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Now that we can configure the HDS threshold separately from
rx_copybreak, the HDS threshold may be higher than rx_copybreak.
We need to make sure that we have enough space for the headers.
Fixes: 6b43673a25c3 ("bnxt_en: add support for hds-thresh ethtool command")
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Link: https://patch.msgid.link/20250119020518.1962249-7-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The core has the current HDS config, so it can pre-populate the values
for the drivers. While at it, remove the zero-setting in netdevsim.
Zero is the default value since the config is zalloc'ed.
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Link: https://patch.msgid.link/20250119020518.1962249-6-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Use the pending config for hds_thrs. Core will only update the "current"
one after we return success. Without this change 2 reconfigs would be
required for the setting to reach the device.
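Sketch of the change (accessor spelling assumed, not verbatim):

  -       bp->hds_threshold = <value from the live ering parameters>;
  +       bp->hds_threshold = dev->cfg_pending->hds_thresh;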
Fixes: 6b43673a25c3 ("bnxt_en: add support for hds-thresh ethtool command")
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Link: https://patch.msgid.link/20250119020518.1962249-5-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Record the pending configuration in net_device struct.
ethtool core duplicates the current config and the specific
handlers (for now just ringparam) can modify it.
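In sketch form (member names illustrative; the HDS config struct is
the one split out by an earlier patch in this series, shown further
down in this log):

  struct net_device {
          ...
          struct hds_config       *cfg;           /* live configuration */
          struct hds_config       *cfg_pending;   /* copy handed to ethtool ops */
  };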
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Link: https://patch.msgid.link/20250119020518.1962249-4-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
For ease of review of the next patch, store the dev pointer
on the stack instead of referring to req_info.dev every time.
No functional changes.
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Link: https://patch.msgid.link/20250119020518.1962249-3-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Separate the HDS config from the ethtool state struct.
The HDS config contains just simple parameters, not state.
Having it as a separate struct will make it easier to clone / copy
and also long term potentially make it per-queue.
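A sketch of the split-out struct (field names illustrative, not
necessarily the in-tree spelling):

  struct hds_config {
          u32     hds_thresh;     /* header/data split threshold */
          u8      hds_config;     /* tcp-data-split on/off/auto */
  };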
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Link: https://patch.msgid.link/20250119020518.1962249-2-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Merge tag 'vfs-6.14-rc1.afs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull afs updates from Christian Brauner:
"Dynamic root improvements:
- Create an /afs/.<cell> mountpoint to match the /afs/<cell>
mountpoint when a cell is created
- Add some more checks on cell names proposed by the user to prevent
dodgy symlink bodies from being created. Also prevent rootcell from
being altered once set to simplify the locking
- Change the handling of /afs/@cell from being a dentry name
substitution at lookup time to making it a symlink to the current
cell name and also provide a /afs/.@cell symlink to point to the
dotted cell mountpoint
Fixes:
- Fix the abort code check in the fallback handling for the
YFS.RemoveFile2 RPC call
- Use call->op->server() for ordinary filesystem RPC calls that have
an operation descriptor instead of call->server()"
* tag 'vfs-6.14-rc1.afs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
afs: Fix the fallback handling for the YFS.RemoveFile2 RPC call
afs: Make /afs/@cell and /afs/.@cell symlinks
afs: Add rootcell checks
afs: Make /afs/.<cell> as well as /afs/<cell> mountpoints
|
|
Merge cpufreq updates for 6.14:
- Use str_enable_disable()-like helpers in cpufreq (Krzysztof
Kozlowski).
- Extend the Apple cpufreq driver to support more SoCs (Hector Martin,
Nick Chan).
- Add new cpufreq driver for Airoha SoCs (Christian Marangi).
- Fix using cpufreq-dt as module (Andreas Kemnade).
- Minor fixes for Sparc, SCMI, and Qcom cpufreq drivers (Ethan Carter
Edwards, Sibi Sankar, Manivannan Sadhasivam).
- Fix the maximum supported frequency computation in the ACPI cpufreq
driver to avoid relying on unfounded assumptions (Gautham Shenoy).
- Fix an amd-pstate driver regression with preferred core rankings not
being used (Mario Limonciello).
- Fix a precision issue with frequency calculation in the amd-pstate
driver (Naresh Solanki).
- Add ftrace event to the amd-pstate driver for active mode (Mario
Limonciello).
- Set default EPP policy on Ryzen processors in amd-pstate (Mario
Limonciello).
- Clean up the amd-pstate cpufreq driver and optimize it to increase
code reuse (Mario Limonciello, Dhananjay Ugwekar).
- Use CPPC to get scaling factors between HWP performance levels and
frequency in the intel_pstate driver and make it stop using a built
-in scaling factor for the Arrow Lake processor (Rafael Wysocki).
- Make intel_pstate initialize epp_policy to CPUFREQ_POLICY_UNKNOWN for
consistency with CPU offline (Christian Loehle).
- Fix superfluous updates caused by need_freq_update in the schedutil
cpufreq governor (Sultan Alsawaf).
* pm-cpufreq: (40 commits)
cpufreq: Use str_enable_disable()-like helpers
cpufreq: airoha: Add EN7581 CPUFreq SMCCC driver
cpufreq: ACPI: Fix max-frequency computation
cpufreq/amd-pstate: Refactor max frequency calculation
cpufreq/amd-pstate: Fix prefcore rankings
cpufreq: sparc: change kzalloc to kcalloc
cpufreq: qcom: Implement clk_ops::determine_rate() for qcom_cpufreq* clocks
cpufreq: qcom: Fix qcom_cpufreq_hw_recalc_rate() to query LUT if LMh IRQ is not available
cpufreq: apple-soc: Add Apple A7-A8X SoC cpufreq support
cpufreq: apple-soc: Set fallback transition latency to APPLE_DVFS_TRANSITION_TIMEOUT
cpufreq: apple-soc: Increase cluster switch timeout to 400us
cpufreq: apple-soc: Use 32-bit read for status register
cpufreq: apple-soc: Allow per-SoC configuration of APPLE_DVFS_CMD_PS1
cpufreq: apple-soc: Drop setting the PS2 field on M2+
dt-bindings: cpufreq: apple,cluster-cpufreq: Add A7-A11, T2 compatibles
dt-bindings: cpufreq: Document support for Airoha EN7581 CPUFreq
cpufreq: fix using cpufreq-dt as module
cpufreq: scmi: Register for limit change notifications
cpufreq: schedutil: Fix superfluous updates caused by need_freq_update
cpufreq: intel_pstate: Use CPUFREQ_POLICY_UNKNOWN
...
|
|
Kuniyuki Iwashima says:
====================
af_unix: Set skb drop reason in every kfree_skb() path.
There is a potential user for skb drop reason for AF_UNIX.
This series replaces some kfree_skb() in connect() and
sendmsg() paths and sets skb drop reason for the rest of
kfree_skb() in AF_UNIX.
Link: https://lore.kernel.org/netdev/CAAf2ycmZHti95WaBR3s+L5Epm1q7sXmvZ-EqCK=-oZj=45tOwQ@mail.gmail.com/
v2: https://lore.kernel.org/20250112040810.14145-1-kuniyu@amazon.com/
v1: https://lore.kernel.org/20250110092641.85905-1-kuniyu@amazon.com/
====================
Link: https://patch.msgid.link/20250116053441.5758-1-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
This is based on Donald Hunter's patch.
These functions could fail for various reasons, sometimes
triggering kfree_skb().
* unix_stream_connect() : connect()
* unix_stream_sendmsg() : sendmsg()
* queue_oob() : sendmsg(MSG_OOB)
* unix_dgram_sendmsg() : sendmsg()
Such kfree_skb() is tied to the errno of connect() and
sendmsg(), so we need not define skb drop reasons.
Let's use consume_skb() to avoid churning kfree_skb() events.
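The substitution itself is trivial (sketch):

  -       kfree_skb(skb);
  +       consume_skb(skb);

consume_skb() frees the skb without firing the kfree_skb tracepoint,
which is exactly what these errno-driven paths want.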
Link: https://lore.kernel.org/netdev/eb30b164-7f86-46bf-a5d3-0f8bda5e9398@redhat.com/
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250116053441.5758-10-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
This is a follow-up of commit d460b04bc452 ("af_unix: Clean up
error paths in unix_stream_sendmsg().").
If we initialise skb with NULL in unix_stream_sendmsg(), we can
reuse the existing out_pipe label for the SEND_SHUTDOWN check.
Let's reuse it and rename the existing label to out_pipe_lock.
While at it, size and data_len are moved to the while loop scope.
Suggested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250116053441.5758-9-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
unix_dgram_disconnected() is called from two places:
1. when a connect()ed socket dis-connect()s or re-connect()s to
another socket
2. when sendmsg() fails because the peer socket that the client
has connect()ed to has been close()d
Then, the client's recv queue is purged to remove all messages from
the old peer socket.
Let's define a new drop reason for that case.
# echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable
# python3
>>> from socket import *
>>>
>>> # s1 has a message from s2
>>> s1, s2 = socketpair(AF_UNIX, SOCK_DGRAM)
>>> s2.send(b'hello world')
>>>
>>> # re-connect() drops the message from s2
>>> s3 = socket(AF_UNIX, SOCK_DGRAM)
>>> s3.bind('')
>>> s1.connect(s3.getsockname())
# cat /sys/kernel/tracing/trace_pipe
python3-250 ... kfree_skb: ... location=skb_queue_purge_reason+0xdc/0x110 reason: UNIX_DISCONNECT
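In sketch form, the purge becomes:

          skb_queue_purge_reason(&sk->sk_receive_queue,
                                 SKB_DROP_REASON_UNIX_DISCONNECT);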
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250116053441.5758-8-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
unix_stream_read_skb() is called when BPF SOCKMAP reads some data
from a socket in the map.
SOCKMAP does not support MSG_OOB, and reading OOB results in a drop.
Let's set a drop reason for each case.
* SOCKET_CLOSE : the socket in SOCKMAP was close()d
* UNIX_SKIP_OOB : OOB was read from the socket in SOCKMAP
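Sketch of the two drops (simplified; the surrounding checks are
elided):

          /* the socket in the map has been close()d */
          kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
          ...
          /* OOB data cannot be consumed through SOCKMAP */
          kfree_skb_reason(skb, SKB_DROP_REASON_UNIX_SKIP_OOB);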
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250116053441.5758-7-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
AF_UNIX SOCK_STREAM socket supports MSG_OOB.
When OOB data is sent to a socket, recv() will break at that point.
If the next recv() does not have MSG_OOB, the normal data following
the OOB data is returned.
Then, the OOB skb is dropped.
Let's define a new drop reason for that case in manage_oob().
# echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable
# python3
>>> from socket import *
>>> s1, s2 = socketpair(AF_UNIX)
>>> s1.send(b'a', MSG_OOB)
>>> s1.send(b'b')
>>> s2.recv(2)
b'b'
# cat /sys/kernel/tracing/trace_pipe
...
python3-223 ... kfree_skb: ... location=unix_stream_read_generic+0x59e/0xc20 reason: UNIX_SKIP_OOB
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250116053441.5758-6-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Inflight file descriptors by SCM_RIGHTS hold references to the
struct file.
AF_UNIX sockets could hold references to each other, forming
reference cycles.
Once such sockets are close()d without the fd recv()ed, they
will be inaccessible from userspace but remain in the kernel.
__unix_gc() garbage-collects skb with the dead file descriptors
and frees them by __skb_queue_purge().
Let's set SKB_DROP_REASON_SOCKET_CLOSE there.
# echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable
# python3
>>> from socket import *
>>> from array import array
>>>
>>> # Create a reference cycle
>>> s1 = socket(AF_UNIX, SOCK_DGRAM)
>>> s1.bind('')
>>> s1.sendmsg([b"nop"], [(SOL_SOCKET, SCM_RIGHTS, array("i", [s1.fileno()]))], 0, s1.getsockname())
>>> s1.close()
>>>
>>> # Trigger GC
>>> s2 = socket(AF_UNIX)
>>> s2.close()
# cat /sys/kernel/tracing/trace_pipe
...
kworker/u16:2-42 ... kfree_skb: ... location=__unix_gc+0x4ad/0x580 reason: SOCKET_CLOSE
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250116053441.5758-5-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
unix_sock_destructor() is called as sk->sk_destruct() just before
the socket is actually freed.
Let's use SKB_DROP_REASON_SOCKET_CLOSE for skb_queue_purge().
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250116053441.5758-4-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
unix_release_sock() is called when the last refcnt of struct file
is released.
Let's define a new drop reason SKB_DROP_REASON_SOCKET_CLOSE and
set it for kfree_skb() in unix_release_sock().
# echo 1 > /sys/kernel/tracing/events/skb/kfree_skb/enable
# python3
>>> from socket import *
>>> s1, s2 = socketpair(AF_UNIX)
>>> s1.send(b'hello world')
>>> s2.close()
# cat /sys/kernel/tracing/trace_pipe
...
python3-280 ... kfree_skb: ... protocol=0 location=unix_release_sock+0x260/0x420 reason: SOCKET_CLOSE
To be precise, unix_release_sock() is also called for a new child
socket in unix_stream_connect() when something fails, but the new
sk has no skb in its recv queue at that point, so no event is logged.
Note that only tcp_inbound_ao_hash() uses a similar drop reason,
SKB_DROP_REASON_TCP_CLOSE, and this can be generalised later.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250116053441.5758-3-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The following patch adds a new drop reason starting with
the SOCKET_ prefix.
Let's gather the existing SOCKET_ reasons.
Note that the order is not part of uAPI.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Link: https://patch.msgid.link/20250116053441.5758-2-kuniyu@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Address a NULL pointer dereference / undefined behavior in
rtattr_pack() (note that size is 0 in the bad case).
Flagged by cppcheck as:
tools/testing/selftests/net/ipsec.c:230:25: warning: Possible null pointer
dereference: payload [nullPointer]
memcpy(RTA_DATA(attr), payload, size);
^
tools/testing/selftests/net/ipsec.c:1618:54: note: Calling function 'rtattr_pack',
4th argument 'NULL' value is 0
if (rtattr_pack(&req.nh, sizeof(req), XFRMA_IF_ID, NULL, 0)) {
^
tools/testing/selftests/net/ipsec.c:230:25: note: Null pointer dereference
memcpy(RTA_DATA(attr), payload, size);
^
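The fix is plausibly a guard on the copy (sketch; the signature
follows the call site quoted above):

  static int rtattr_pack(struct nlmsghdr *nh, size_t req_sz,
                         unsigned short rta_type, const void *payload,
                         size_t size)
  {
          ...
          if (payload)
                  memcpy(RTA_DATA(attr), payload, size);
          ...
  }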
Signed-off-by: Liu Ye <liuye@kylinos.cn>
Link: https://patch.msgid.link/20250116013037.29470-1-liuye@kylinos.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Merge tag 'vfs-6.14-rc1.statx.dio' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull vfs direct-io updates from Christian Brauner:
"File systems that write out of place usually require different
alignment for direct I/O writes than what they can do for reads.
Add a separate dio read align field to statx, as many out of place
write file systems can easily do reads aligned to the device sector
size, but require bigger alignment for writes.
This is usually papered over by falling back to buffered I/O for
smaller writes and doing read-modify-write cycles, but performance for
this sucks, so applications benefit from knowing the actual write
alignment"
* tag 'vfs-6.14-rc1.statx.dio' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
xfs: report larger dio alignment for COW inodes
xfs: report the correct read/write dio alignment for reflinked inodes
xfs: cleanup xfs_vn_getattr
fs: add STATX_DIO_READ_ALIGN
fs: reformat the statx definition
|
|
Merge tag 'vfs-6.14-rc1.libfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull vfs libfs updates from Christian Brauner:
"This improves the stable directory offset behavior in various ways.
Stable offsets are needed so that NFS can reliably read directories on
filesystems such as tmpfs:
- Improve the end-of-directory detection
According to getdents(3), the d_off field in each returned
directory entry points to the next entry in the directory. The
d_off field in the last returned entry in the readdir buffer must
contain a valid offset value, but if it points to an actual
directory entry, then readdir/getdents can loop.
Introduce a specific fixed offset value that is placed in the d_off
field of the last entry in a directory. Some user space
applications assume that the EOD offset value is larger than the
offsets of real directory entries, so the largest valid offset
value is reserved for this purpose. This new value is never
allocated by simple_offset_add().
When ->iterate_dir() returns, getdents{64} inserts the ctx->pos
value into the d_off field of the last valid entry in the readdir
buffer. When it hits EOD, offset_readdir() sets ctx->pos to the EOD
offset value so the last entry is updated to point to the EOD
marker.
When trying to read the entry at the EOD offset, offset_readdir()
terminates immediately.
- Rely on d_children to iterate stable offset directories
Instead of using the mtree to emit entries in the order of their
offset values, use it only to map incoming ctx->pos to a starting
entry. Then use the directory's d_children list, which is already
maintained properly by the dcache, to find the next child to emit.
- Narrow the range of directory offset values returned by
simple_offset_add() to 3 .. (S32_MAX - 1) on all platforms. This
means the allocation behavior is identical on 32-bit systems,
64-bit systems, and 32-bit user space on 64-bit kernels. The new
range still permits over 2 billion concurrent entries per
directory.
- Return ENOSPC when the directory offset range is exhausted. Hitting
this error is almost impossible though.
- Remove the simple_offset_empty() helper"
* tag 'vfs-6.14-rc1.libfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
libfs: Use d_children list to iterate simple_offset directories
libfs: Replace simple_offset end-of-directory detection
Revert "libfs: fix infinite directory reads for offset dir"
Revert "libfs: Add simple_offset_empty()"
libfs: Return ENOSPC when the directory offset range is exhausted
|
|
Merge tag 'vfs-6.14-rc1.mount.v2' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull vfs mount updates from Christian Brauner:
- Add a mountinfo program to demonstrate statmount()/listmount()
Add a new "mountinfo" sample userland program that demonstrates how
to use statmount() and listmount() to get at the same info that
/proc/pid/mountinfo provides
- Remove pointless nospec.h include
- Prepend statmount.mnt_opts string with security_sb_mnt_opts()
Currently these mount options aren't accessible via statmount()
- Add new mount namespaces to mount namespace rbtree outside of the
namespace semaphore
- Lockless mount namespace lookup
Currently we take the read lock when looking for a mount namespace to
list mounts in. We can make this lockless. The simple search case can
just use a sequence counter to detect concurrent changes to the
rbtree
For walking the list of mount namespaces sequentially via nsfs we
keep a separate rcu list as rb_prev() and rb_next() aren't usable
safely with rcu. Currently there is no primitive for retrieving the
previous list member. To do this we need a new deletion primitive
that doesn't poison the prev pointer and a corresponding retrieval
helper
Since creating mount namespaces is a relatively rare event compared
with querying mounts in a foreign mount namespace this is worth it.
Once libmount and systemd pick up this mechanism to list mounts in
foreign mount namespaces this will be used very frequently
- Add extended selftests for lockless mount namespace iteration
- Add a sample program to list all mounts on the system, i.e., in
all mount namespaces
- Improve mount namespace iteration performance
Make finding the last or first mount to start iterating the mount
namespace from an O(1) operation and add selftests for iterating the
mount table starting from the first and last mount
- Use an xarray for the old mount id
While the ida does use the xarray internally we can use it explicitly
which allows us to increment the unique mount id under the xa lock.
This allows us to remove the atomic as we're now allocating both ids
in one go
- Use a shared header for vfs sample programs
- Fix build warnings for new sample program to list all mounts
* tag 'vfs-6.14-rc1.mount.v2' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
samples/vfs: fix build warnings
samples/vfs: use shared header
samples/vfs/mountinfo: Use __u64 instead of uint64_t
fs: remove useless lockdep assertion
fs: use xarray for old mount id
selftests: add listmount() iteration tests
fs: cache first and last mount
samples: add test-list-all-mounts
selftests: remove unneeded include
selftests: add tests for mntns iteration
seltests: move nsfs into filesystems subfolder
fs: simplify rwlock to spinlock
fs: lockless mntns lookup for nsfs
rculist: add list_bidir_{del,prev}_rcu()
fs: lockless mntns rbtree lookup
fs: add mount namespace to rbtree late
fs: prepend statmount.mnt_opts string with security_sb_mnt_opts()
mount: remove inlude/nospec.h include
samples: add a mountinfo program to demonstrate statmount()/listmount()
|
|
Merge tag 'kernel-6.14-rc1.pid' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull pid_max namespacing update from Christian Brauner:
"The pid_max sysctl is a global value. For a long time the default
value has been 65535 and during the pidfd discussions Linus proposed to
bump pid_max by default. Based on this discussion systemd started
bumping pid_max to 2^22. So all new systems now run with a very high
pid_max limit with some distros having also backported that change.
The decision to bump pid_max is obviously correct. It just doesn't
make a lot of sense nowadays to enforce such a low pid number. There's
sufficient tooling to make selecting specific processes without typing
really large pid numbers available.
In any case, there are workloads that have expectations about how
large pid numbers they accept. Either for historical reasons or
architectural reasons. One concrete example is the 32-bit version of
Android's bionic libc which requires pid numbers less than 65536.
There are workloads where it is run in a 32-bit container on a 64-bit
kernel. If the host has a pid_max value greater than 65535 the libc
will abort thread creation because of size assumptions of
pthread_mutex_t.
That's a fairly specific use-case, however; in general, specific
workloads that are moved into containers running on a host with a new
kernel and a new systemd can run into issues with large pid_max
values. Obviously making assumptions about the size of the allocated
pid is suboptimal but we have userspace that does it.
Of course, giving containers the ability to restrict the number of
processes in their respective pid namespace indepent of the global
limit through pid_max is something desirable in itself and comes in
handy in general.
Independent of motivating use-cases the existence of pid namespaces
makes this also a good semantic extension and there have been prior
proposals pushing in a similar direction. The trick here is to
minimize the risk of regressions which I think is doable. The fact
that pid namespaces are hierarchical will help us here.
What we mostly care about is that when the host sets a low pid_max
limit, say (crazy number) 100, that no descendant pid namespace can
allocate a higher pid number in its namespace. Since pid allocation is
hierarchical, this can be ensured by checking each pid allocation
against the pid namespace's pid_max limit. This means if the
allocation in the descendant pid namespace succeeds, the ancestor pid
namespace can reject it. If the ancestor pid namespace has a higher
limit than the descendant pid namespace the descendant pid namespace
will reject the pid allocation. The ancestor pid namespace will
obviously not care about this.
All in all this means pid_max continues to enforce a system wide limit
on the number of processes but allows pid namespaces sufficient leeway
in handling workloads with assumptions about pid values and allows
containers to restrict the number of processes in a pid namespace
through the pid_max interface"
* tag 'kernel-6.14-rc1.pid' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
tests/pid_namespace: add pid_max tests
pid: allow pid_max to be set per pid namespace
|
|
Merge updates related to system sleep, a cpuidle update and an Energy
Model handling code update for 6.14-rc1:
- Allow configuring the system suspend-resume (DPM) watchdog to warn
earlier than panic (Douglas Anderson).
- Implement devm_device_init_wakeup() helper and introduce a device-
managed variant of dev_pm_set_wake_irq() (Joe Hattori, Peng Fan).
- Remove direct inclusions of 'pm_wakeup.h' which should be only
included via 'device.h' (Wolfram Sang).
- Clean up two comments in the core system-wide PM code (Rafael
Wysocki, Randy Dunlap).
- Add Clearwater Forest processor support to the intel_idle cpuidle
driver (Artem Bityutskiy).
- Move sched domains rebuild function from the schedutil cpufreq
governor to the Energy Model handling code (Rafael Wysocki).
* pm-sleep:
PM: sleep: wakeirq: Introduce device-managed variant of dev_pm_set_wake_irq()
PM: sleep: Allow configuring the DPM watchdog to warn earlier than panic
PM: sleep: convert comment from kernel-doc to plain comment
PM: wakeup: implement devm_device_init_wakeup() helper
PM: sleep: sysfs: don't include 'pm_wakeup.h' directly
PM: sleep: autosleep: don't include 'pm_wakeup.h' directly
PM: sleep: Update stale comment in device_resume()
* pm-cpuidle:
intel_idle: add Clearwater Forest SoC support
* pm-em:
PM: EM: Move sched domains rebuild function from schedutil to EM
|
|
Merge tag 'kernel-6.14-rc1.cred' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull cred refcount updates from Christian Brauner:
"For the v6.13 cycle we switched overlayfs to a variant of
override_creds() that doesn't take an extra reference. To this end the
{override,revert}_creds_light() helpers were introduced.
This generalizes the idea behind {override,revert}_creds_light() to
the {override,revert}_creds() helpers. Afterwards overriding and
reverting credentials is reference count free unless the caller
explicitly takes a reference.
All callers have been appropriately ported"
* tag 'kernel-6.14-rc1.cred' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs: (30 commits)
cred: fold get_new_cred_many() into get_cred_many()
cred: remove unused get_new_cred()
nfsd: avoid pointless cred reference count bump
cachefiles: avoid pointless cred reference count bump
dns_resolver: avoid pointless cred reference count bump
trace: avoid pointless cred reference count bump
cgroup: avoid pointless cred reference count bump
acct: avoid pointless reference count bump
io_uring: avoid pointless cred reference count bump
smb: avoid pointless cred reference count bump
cifs: avoid pointless cred reference count bump
cifs: avoid pointless cred reference count bump
ovl: avoid pointless cred reference count bump
open: avoid pointless cred reference count bump
nfsfh: avoid pointless cred reference count bump
nfs/nfs4recover: avoid pointless cred reference count bump
nfs/nfs4idmap: avoid pointless reference count bump
nfs/localio: avoid pointless cred reference count bumps
coredump: avoid pointless cred reference count bump
binfmt_misc: avoid pointless cred reference count bump
...
|
|
Merge tag 'vfs-6.14-rc1.pidfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull pidfs updates from Christian Brauner:
- Rework inode number allocation
Recently we received a patchset that aims to enable file handle
encoding and decoding via name_to_handle_at(2) and
open_by_handle_at(2).
A crucial step in the patch series is how to go from inode number to
struct pid without leaking information into unprivileged contexts.
The issue is that in order to find a struct pid the pid number in the
initial pid namespace must be encoded into the file handle via
name_to_handle_at(2).
This can be used by containers using a separate pid namespace to
learn what the pid number of a given process in the initial pid
namespace is. While this is a weak information leak it could be used
in various exploits and in general is an ugly wart in the design.
To solve this problem a new way is needed to lookup a struct pid
based on the inode number allocated for that struct pid. The other
part is to remove the custom inode number allocation on 32bit systems
that is also an ugly wart that should go away.
Allocate unique identifiers for struct pid by simply incrementing a
64 bit counter and insert each struct pid into the rbtree so it can
be looked up to decode file handles, avoiding leaking actual pids
across pid namespaces in file handles.
On both 64 bit and 32 bit the same 64 bit identifier is used to
lookup struct pid in the rbtree. On 64 bit the unique identifier for
struct pid simply becomes the inode number. Comparing two pidfds
continues to be as simple as comparing inode numbers.
On 32 bit the 64 bit number assigned to struct pid is split into two
32 bit numbers. The lower 32 bits are used as the inode number and
the upper 32 bits are used as the inode generation number. Whenever a
wraparound happens on 32 bit the 64 bit number will be incremented by
2 so inode numbering starts at 2 again.
When a wraparound happens on 32 bit multiple pidfds with the same
inode number are likely to exist. This isn't a problem since before
pidfs pidfds used the anonymous inode, meaning all pidfds had the same
inode number. On 32 bit, userspace can thus reconstruct the 64 bit
identifier by retrieving both the inode number and the inode
generation number to compare, or use file handles. This gives the
same guarantees on both 32 bit and 64 bit.
- Implement file handle support
This is based on custom export operation methods which allows pidfs
to implement permission checking and opening of pidfs file handles
cleanly without hacking around in the core file handle code too much.
- Support bind-mounts
Allow bind-mounting pidfds. Similar to nsfs let's allow bind-mounts
for pidfds. This allows pidfds to be safely recovered and checked for
process recycling.
Instead of checking d_ops for both nsfs and pidfs we could in a
follow-up patch add a flag argument to struct dentry_operations that
functions similar to file_operations->fop_flags.
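The 32-bit split described above amounts to (sketch; the counter name
is hypothetical and the increment-by-2 wraparound handling is elided):

  u64 id = atomic64_inc_return(&pidfs_ino_nr);  /* hypothetical counter */

  inode->i_ino        = lower_32_bits(id);
  inode->i_generation = upper_32_bits(id);

On 64 bit, i_ino is simply the full 64 bit identifier.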
* tag 'vfs-6.14-rc1.pidfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
selftests: add pidfd bind-mount tests
pidfs: allow bind-mounts
pidfs: lookup pid through rbtree
selftests/pidfd: add pidfs file handle selftests
pidfs: check for valid ioctl commands
pidfs: implement file handle support
exportfs: add permission method
fhandle: pull CAP_DAC_READ_SEARCH check into may_decode_fh()
exportfs: add open method
fhandle: simplify error handling
pseudofs: add support for export_ops
pidfs: support FS_IOC_GETVERSION
pidfs: remove 32bit inode number handling
pidfs: rework inode number allocation
|
|
Merge tag 'vfs-6.14-rc1.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull misc vfs updates from Christian Brauner:
"Features:
- Support caching symlink lengths in inodes
The size is stored in a new union utilizing the same space as
i_devices, thus avoiding growing the struct or taking up any more
space
When utilized it dodges strlen() in vfs_readlink(), giving about
1.5% speed up when issuing readlink on /initrd.img on ext4
- Add RWF_DONTCACHE iocb and FOP_DONTCACHE file_operations flag
If a file system supports uncached buffered IO, it may set
FOP_DONTCACHE and enable support for RWF_DONTCACHE.
If RWF_DONTCACHE is attempted without the file system supporting
it, it'll get errored with -EOPNOTSUPP
- Enable VBOXGUEST and VBOXSF_FS on ARM64
Now that VirtualBox is able to run as a host on arm64 (e.g. the
Apple M3 processors) we can enable VBOXSF_FS (and in turn
VBOXGUEST) for this architecture.
Tested with various runs of bonnie++ and dbench on an Apple MacBook
Pro with the latest Virtualbox 7.1.4 r165100 installed
Cleanups:
- Delay sysctl_nr_open check in expand_files()
- Use kernel-doc includes in fiemap docbook
- Use page->private instead of page->index in watch_queue
- Use a consume fence in mnt_idmap() as it's heavily used in
link_path_walk()
- Replace magic number 7 with ARRAY_SIZE() in fc_log
- Sort out a stale comment about races between fd alloc and dup2()
- Fix return type of do_mount() from long to int
- Various cosmetic cleanups for the lockref code
Fixes:
- Annotate spinning as unlikely() in __read_seqcount_begin
The annotation already used to be there, but got lost in commit
52ac39e5db51 ("seqlock: seqcount_t: Implement all read APIs as
statement expressions")
- Fix proc_handler for sysctl_nr_open
- Flush delayed work in delayed fput()
- Fix grammar and spelling in propagate_umount()
- Fix ESP not readable during coredump
In /proc/PID/stat, there is the kstkesp field which is the stack
pointer of a thread. While the thread is active, this field reads
zero. But during a coredump, it should have a valid value
However, at the moment, kstkesp is zero even during coredump
- Don't wake up the writer if the pipe is still full
- Fix unbalanced user_access_end() in select code"
* tag 'vfs-6.14-rc1.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs: (28 commits)
gfs2: use lockref_init for qd_lockref
erofs: use lockref_init for pcl->lockref
dcache: use lockref_init for d_lockref
lockref: add a lockref_init helper
lockref: drop superfluous externs
lockref: use bool for false/true returns
lockref: improve the lockref_get_not_zero description
lockref: remove lockref_put_not_zero
fs: Fix return type of do_mount() from long to int
select: Fix unbalanced user_access_end()
vbox: Enable VBOXGUEST and VBOXSF_FS on ARM64
pipe_read: don't wake up the writer if the pipe is still full
selftests: coredump: Add stackdump test
fs/proc: do_task_stat: Fix ESP not readable during coredump
fs: add RWF_DONTCACHE iocb and FOP_DONTCACHE file_operations flag
fs: sort out a stale comment about races between fd alloc and dup2
fs: Fix grammar and spelling in propagate_umount()
fs: fc_log replace magic number 7 with ARRAY_SIZE()
fs: use a consume fence in mnt_idmap()
file: flush delayed work in delayed fput()
...
|
|
Merge tag 'vfs-6.14-rc1.kcore' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull /proc/kcore updates from Christian Brauner:
"The performance of /proc/kcore reads has been showing up as a
bottleneck for the drgn debugger. drgn scripts often spend ~25% of
their time in the kernel reading from /proc/kcore.
A lot of this overhead comes from silly inefficiencies. This pull
request contains fixes for the low-hanging fruit. The fixes are all
fairly small and straightforward.
The result is a 25% improvement in read latency in micro-benchmarks
(from ~235 nanoseconds to ~175) and a 15% improvement in execution
time for real-world drgn scripts:
- Make /proc/kcore entry permanent
- Avoid walking the list on every read
- Use percpu_rw_semaphore for kclist_lock
- Make Omar Sandoval the official maintainer for /proc/kcore"
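The kclist_lock change follows the standard percpu-rwsem pattern
(sketch):

  static DEFINE_STATIC_PERCPU_RWSEM(kclist_lock);

  /* readers -- the hot read(2) path */
  percpu_down_read(&kclist_lock);
  /* ... walk the kclist ... */
  percpu_up_read(&kclist_lock);

  /* rare writers, e.g. memory hotplug */
  percpu_down_write(&kclist_lock);
  /* ... modify the list ... */
  percpu_up_write(&kclist_lock);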
* tag 'vfs-6.14-rc1.kcore' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
MAINTAINERS: add me as /proc/kcore maintainer
proc/kcore: use percpu_rw_semaphore for kclist_lock
proc/kcore: don't walk list on every read
proc/kcore: mark proc entry as permanent
|
|
Merge tag 'vfs-6.14-rc1.netfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull vfs netfs updates from Christian Brauner:
"This contains read performance improvements and support for monolithic
single-blob objects that have to be read/written as such (e.g. AFS
directory contents). The implementation of the two parts is interwoven
as each makes the other possible.
- Read performance improvements
The read performance improvements are intended to speed up some
loss of performance detected in cifs and, to a lesser extent, in afs.
The problem is that we queue too many work items during the
collection of read results: each individual subrequest is collected
by its own work item, and then they have to interact with each
other when a series of subrequests don't exactly align with the
pattern of folios that are being read by the overall request.
Whilst the processing of the pages covered by individual
subrequests as they complete potentially allows folios to be woken
in parallel and with minimum delay, it can shuffle wakeups for
sequential reads out of order - and that is the most common I/O
pattern.
The final assessment and cleanup of an operation is then held up
until the last I/O completes - and for a synchronous sequential
operation, this means the bouncing around of work items just adds
latency.
Two changes have been made to make this work:
(1) All collection is now done in a single "work item" that works
progressively through the subrequests as they complete (and
also dispatches retries as necessary).
(2) For readahead and AIO, this work item can be done on a workqueue
and can run in parallel with the ultimate consumer of the data;
for synchronous direct or unbuffered reads, the collection is
run in the application thread and not offloaded.
Functions such as smb2_readv_callback() then just tell netfslib
that the subrequest has terminated; netfslib does a minimal bit of
processing on the spot - stat counting and tracing mostly - and
then queues/wakes up the worker. This simplifies the logic as the
collector just walks sequentially through the subrequests as they
complete and walks through the folios, if buffered, unlocking them
as it goes. It also keeps to a minimum the amount of latency
injected into the filesystem's low-level I/O handling
The way netfs supports filesystems using the deprecated
PG_private_2 flag is changed: folios are flagged and added to a
write request as they complete and that takes care of scheduling
the writes to the cache. The originating read request can then just
unlock the pages whatever happens.
- Single-blob object support
Single-blob objects are files for which the content of the file
must be read from or written to the server in a single operation
because reading them in parts may yield inconsistent results. AFS
directories are an example of this as there exists the possibility
that the contents are generated on the fly and would differ between
reads or might change due to third party interference.
Such objects will be written to and retrieved from the cache if one
is present, though we allow/may need to propose multiple
subrequests to do so. The important part is that read from/write to
the *server* is monolithic.
Single blob reading is, for the moment, fully synchronous and does
result collection in the application thread and, also for the
moment, the API is supplied the buffer in the form of a folio_queue
chain rather than using the pagecache.
- Related afs changes
This series makes a number of changes to the kafs filesystem,
primarily in the area of directory handling:
- AFS's FetchData RPC reply processing is made partially
asynchronous which allows the netfs_io_request's outstanding
operation counter to be removed as part of reducing the
collection to a single work item.
- Directory and symlink reading are plumbed through netfslib using
the single-blob object API and are now cacheable with fscache.
This also allows the afs_read struct to be eliminated and
netfs_io_subrequest to be used directly instead.
- Directory and symlink content are now stored in a folio_queue
buffer rather than in the pagecache. This means we don't require
the RCU read lock and xarray iteration to access it, and folios
won't randomly disappear under us because the VM wants them
back.
- The vnode operation lock is changed from a mutex struct to a
private lock implementation. The problem is that the lock now
needs to be dropped in a separate thread and mutexes don't
permit that.
- When a new directory or symlink is created, we now initialise it
locally and mark it valid rather than downloading it (we know
what it's likely to look like).
- We now use the in-directory hashtable to reduce the number of
entries we need to scan when doing a lookup. The edit routines
have to maintain the hash chains.
- Cancellation (e.g. by signal) of an async call after the
rxrpc_call has been set up is now offloaded to the worker thread
as there will be a notification from rxrpc upon completion. This
avoids a double cleanup.
- A "rolling buffer" implementation is created to abstract out the
two separate folio_queue chaining implementations I had (one for
read and one for write).
- Functions are provided to create/extend a buffer in a folio_queue
chain and tear it down again.
This is used to handle AFS directories, but could also be used to
create bounce buffers for content crypto and transport crypto.
- The was_async argument is dropped from netfs_read_subreq_terminated()
Instead we wake the read collection work item by either queuing it
or waking up the app thread.
- We don't need to use BH-excluding locks when communicating between
the issuing thread and the collection thread as neither of them now
run in BH context.
- Also included are a number of new tracepoints; a split of the
netfslib write collection code to put retrying into its own file
(it gets more complicated with content encryption).
- There are also some minor AFS fixes included, such as fixing the
AFS directory format struct layout, reducing some directory
over-invalidation and making afs_mkdir() translate EEXIST to
ENOTEMPTY (which is not available on all systems the servers
support).
- Finally, there's a patch to try and detect entry into the folio
unlock function with no folio_queue structs in the buffer (which
isn't allowed in the cases that can get there).
This is a debugging patch, but should be minimal overhead"
* tag 'vfs-6.14-rc1.netfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs: (31 commits)
netfs: Report on NULL folioq in netfs_writeback_unlock_folios()
afs: Add a tracepoint for afs_read_receive()
afs: Locally initialise the contents of a new symlink on creation
afs: Use the contained hashtable to search a directory
afs: Make afs_mkdir() locally initialise a new directory's content
netfs: Change the read result collector to only use one work item
afs: Make {Y,}FS.FetchData an asynchronous operation
afs: Fix cleanup of immediately failed async calls
afs: Eliminate afs_read
afs: Use netfslib for symlinks, allowing them to be cached
afs: Use netfslib for directories
afs: Make afs_init_request() get a key if not given a file
netfs: Add support for caching single monolithic objects such as AFS dirs
netfs: Add functions to build/clean a buffer in a folio_queue
afs: Add more tracepoints to do with tracking validity
cachefiles: Add auxiliary data trace
cachefiles: Add some subrequest tracepoints
netfs: Remove some extraneous directory invalidations
afs: Fix directory format encoding struct
afs: Fix EEXIST error returned from afs_rmdir() to be ENOTEMPTY
...
|
|
Merge ACPI battery and fan drivers updates and miscellaneous ACPI
changes for 6.14:
- Update messages printed by the ACPI battery driver to always
refer to driver extensions as "hooks" to avoid confusion with
similar functionality in the power supply subsystem in the
future (Thomas Weißschuh).
- Fix .probe() error path cleanup in the ACPI fan driver to avoid
memory leaks (Joe Hattori).
- Constify 'struct bin_attribute' in some places in the ACPI subsystem
and mark it as __ro_after_init in one place to prevent binary blob
attributes from being updated (Thomas Weißschuh).
- Add empty stubs for several ACPI-related symbols so that they can be
used when CONFIG_ACPI is unset and use them for removing unnecessary
conditional compilation from the ipu-bridge driver (Ricardo Ribalda).
* acpi-battery:
ACPI: battery: Rename extensions to hook in messages
* acpi-fan:
ACPI: fan: cleanup resources in the error path of .probe()
* acpi-misc:
media: ipu-bridge: Remove unneeded conditional compilations
ACPI: bus: implement acpi_device_hid when !ACPI
ACPI: bus: implement for_each_acpi_consumer_dev when !ACPI
ACPI: header: implement acpi_device_handle when !ACPI
ACPI: bus: implement acpi_get_physical_device_location when !ACPI
ACPI: bus: implement for_each_acpi_dev_match when !ACPI
ACPI: bus: change the prototype for acpi_get_physical_device_location
ACPI: sysfs: Constify 'struct bin_attribute'
ACPI: BGRT: Constify 'struct bin_attribute'
ACPI: BGRT: Mark bin_attribute as __ro_after_init
|
|
This was a suggestion by David Laight, and while I was slightly worried
that some micro-architecture would predict cmov like a conditional
branch, there is little reason to actually believe any core would be
that broken.
Intel documents that their existing cores treat CMOVcc as a data
dependency that will constrain speculation in their "Speculative
Execution Side Channel Mitigations" whitepaper:
"Other instructions such as CMOVcc, AND, ADC, SBB and SETcc can also
be used to prevent bounds check bypass by constraining speculative
execution on current family 6 processors (Intel® Core™, Intel® Atom™,
Intel® Xeon® and Intel® Xeon Phi™ processors)"
and while that leaves the future uarch issues open, that's certainly
true of our traditional SBB usage too.
Any core that predicts CMOV will be unusable for various crypto
algorithms that need data-independent timing stability, so let's just
treat CMOV as the safe choice that simplifies the address masking by
avoiding an extra instruction and doesn't need a temporary register.
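The masking then becomes a compare plus conditional move, roughly
(a sketch; the in-tree constraint spelling and symbol names may
differ):

  /* ptr = (ptr > USER_PTR_MAX) ? USER_PTR_MAX : ptr */
  asm("cmp %1,%0\n\t"
      "cmova %1,%0"
      : "+r" (ptr)
      : "r" (USER_PTR_MAX));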
Suggested-by: David Laight <David.Laight@aculab.com>
Link: https://www.intel.com/content/dam/develop/external/us/en/documents/336996-speculative-execution-side-channel-mitigations.pdf
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Merge assorted changes in ACPI library code for 6.14:
- Use usleep_range() instead of msleep() in acpi_os_sleep() to reduce
excessive delays due to timer inaccuracy, mostly affecting system
suspend and resume (Rafael Wysocki).
- Use str_enabled_disabled() string helpers in the ACPI tables parsing
code to make it easier to follow (Sunil V L).
- Update device properties parsing on systems using ACPI so that
data firmware nodes resulting from _DSD evaluation are treated as
available in firmware node walks (Sakari Ailus).
- Fix missing guid_t declaration in linux/prmt.h (Robert Richter).
- Update the GHES handling code to follow the global panic= instead of
overriding it by force-rebooting the system after a fatal hw error
has been reported (Borislav Petkov).
* acpi-osl:
ACPI: OSL: Use usleep_range() in acpi_os_sleep()
* acpi-tables:
ACPI: tables: Use string choice helpers
* acpi-property:
ACPI: property: Consider data nodes as being available
* acpi-prm:
ACPI: PRM: Fix missing guid_t declaration in linux/prmt.h
* acpi-apei:
APEI: GHES: Have GHES honor the panic= setting
|
|
Back when we added SMAP support, not all versions of binutils
understood the 'clac' and 'stac' instructions, so we implemented
those instructions manually as ".byte" sequences.
But we've since upgraded the minimum version of binutils to version
2.25, and that included proper support for the SMAP instructions, and
there's no reason for us to use some line noise to express them any
more.
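The change is essentially (sketch; CLAC encodes as 0f 01 ca and STAC
as 0f 01 cb):

  -#define __ASM_CLAC      ".byte 0x0f,0x01,0xca"
  -#define __ASM_STAC      ".byte 0x0f,0x01,0xcb"
  +#define __ASM_CLAC      "clac"
  +#define __ASM_STAC      "stac"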
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|