2022-09-30  Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost  (Linus Torvalds)
Pull virtio fixes from Michael Tsirkin:
 "Some last minute fixes. The virtio-blk one is the most important one
  since it was actually seen in the field, but the rest of them are
  small and clearly safe, everything here has been in next for a while"

* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
  vdpa/mlx5: Fix MQ to support non power of two num queues
  vduse: prevent uninitialized memory accesses
  virtio-blk: Fix WARN_ON_ONCE in virtio_queue_rq()
  virtio_test: fixup for vq reset
  virtio-crypto: fix memory-leak
  vdpa/ifcvf: fix the calculation of queuepair
2022-09-30  Merge tag 'block-6.0-2022-09-29' of git://git.kernel.dk/linux  (Linus Torvalds)
Pull block fixes from Jens Axboe:
 "A single NVMe pull request via Christoph with a few fixes that should
  go into the 6.0 release:

   - Fix IOC_PR_CLEAR and IOC_PR_RELEASE ioctls for nvme devices
     (Michael Kelley)

   - Disable Write Zeroes on Phison E3C/E4C (Tina Hsu)"

* tag 'block-6.0-2022-09-29' of git://git.kernel.dk/linux:
  nvme-pci: disable Write Zeroes on Phison E3C/E4C
  nvme: Fix IOC_PR_CLEAR and IOC_PR_RELEASE ioctls for nvme devices
2022-09-30  Merge tag 'io_uring-6.0-2022-09-29' of git://git.kernel.dk/linux  (Linus Torvalds)
Pull io_uring fixes from Jens Axboe:
 "Two fixes that should go into 6.0:

   - Tweak the single issuer logic to register the task at creation,
     rather than at first submit. SINGLE_ISSUER was added for 6.0, and
     after some discussion on this, we decided to make it a bit
     stricter while it's still possible to do so (Dylan).

   - Stefan from Samba had some doubts on the level triggered poll
     that was added for this release. Rather than attempt to mess
     around with it now, just do the quick one-liner to disable it for
     release and we have time to discuss and change it for 6.1
     instead (me)"

* tag 'io_uring-6.0-2022-09-29' of git://git.kernel.dk/linux:
  io_uring/poll: disable level triggered poll
  io_uring: register single issuer task at creation
2022-09-30  Merge tag 'pstore-v6.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux  (Linus Torvalds)
Pull pstore revert from Kees Cook:
 "A misbehavior with some compression backends in pstore was just
  discovered due to the recent crypto acomp migration. Since we're so
  close to release, it seems better to just simply revert it, and we
  can figure out what's going on without leaving it broken for a
  release.

   - Revert crypto acomp migration (Guilherme G. Piccoli)"

* tag 'pstore-v6.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  Revert "pstore: migrate to crypto acomp interface"
2022-09-30  Merge tag 'gpio-fixes-for-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux  (Linus Torvalds)
Pull gpio fixes from Bartosz Golaszewski:
 "One more fix for the upcoming release:

   - fix the check for pwm support on non-A8K platforms in gpio-mvebu"

* tag 'gpio-fixes-for-v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux:
  gpio: mvebu: Fix check for pwm support on non-A8K platforms
2022-09-30Revert "pstore: migrate to crypto acomp interface"Guilherme G. Piccoli
This reverts commit e4f0a7ec586b7644107839f5394fb685cf1aadcc. When using this new interface, both efi_pstore and ramoops backends are unable to properly decompress dmesg if using zstd, lz4 and lzo algorithms (and maybe more). It does succeed with deflate though. The message observed in the kernel log is: [2.328828] pstore: crypto_acomp_decompress failed, ret = -22! The pstore infrastructure is able to collect the dmesg with both backends tested, but since decompression fails it's unreadable. With this revert everything is back to normal. Fixes: e4f0a7ec586b ("pstore: migrate to crypto acomp interface") Cc: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220929215515.276486-1-gpiccoli@igalia.com
2022-09-30  Merge tag 'drm-fixes-2022-09-30-1' of git://anongit.freedesktop.org/drm/drm  (Linus Torvalds)
Pull drm fixes from Dave Airlie:
 "Last set of fixes for 6.0 hopefully - minor bridge fixes, i915 fixes,
  and a bunch of amdgpu fixes for new IP blocks, along with a couple of
  regression fixes. Should be all set for merge window next week.

  amdgpu:
   - GC 11.x fixes
   - SMU 13.x fixes
   - DCN 3.1.4 fixes
   - DCN 3.2.x fixes
   - GC 9.x fix
   - Fence fix
   - SR-IOV suspend/resume fix
   - PSR regression fix

  i915:
   - Restrict forced preemption to the active context
   - Restrict perf_limit_reasons to the supported platforms - gen11+

  bridge:
   - analogix: Revert earlier suspend fix
   - lt8912b: Fix corrupt display output"

* tag 'drm-fixes-2022-09-30-1' of git://anongit.freedesktop.org/drm/drm: (26 commits)
  drm/amd/display: Prevent OTG shutdown during PSR SU
  drm/i915/gt: Perf_limit_reasons are only available for Gen11+
  drm/amdgpu: Add amdgpu suspend-resume code path under SRIOV
  drm/amdgpu: Remove fence_process in count_emitted
  drm/amdgpu: Correct the position in patch_cond_exec
  drm/amd/display: fill in clock values when DPM is not enabled
  drm/amd/display: Avoid unnecessary pixel rate divider programming
  drm/amd/display: Remove assert for odm transition case
  drm/amd/display: Fix typo in get_pixel_rate_div
  drm/amd/display: Fix audio on display after unplugging another
  drm/amd/display: Add explicit FIFO disable for DP blank
  drm/amd/display: Wrap OTG disable workaround with FIFO control
  drm/amd/display: Do DIO FIFO enable after DP video stream enable
  drm/amd/display: Update DCN32 to use new SR latencies
  drm/amd/display: Avoid avoid unnecessary pixel rate divider programming
  drm/amdkfd: fix dropped interrupt in kfd_int_process_v11
  drm/amdgpu: pass queue size and is_aql_queue to MES
  drm/amdkfd: fix MQD init for GFX11 in init_mqd
  drm/amd/pm: use adverse selection for dpm features unsupported by driver
  drm/amd/pm: enable gfxoff feature for SMU 13.0.0
  ...
2022-09-30  Merge branch 'mlx5-xsk-updates-part2-2022-09-28'  (Jakub Kicinski)
Saeed Mahameed says:

====================
mlx5 xsk updates part2 2022-09-28

XSK buffer improvements. This is part #2 of a 4-part series.

1) Expose xsk min chunk size to drivers, to allow the driver to adjust
   to a better buffer stride size

2) Adjust MTT page size to the XSK frame size, to avoid umem overrun
   in certain situations.

3) Use xsk frame size as the striding RQ page size for XSK RQs

4) KSM for unaligned XSK; KSM allows arbitrary buffer chunk lengths
   registration in HW, which makes more sense for unaligned XSK.

5) More cleanups and optimizations in preparation for next
   improvements in part3

part 1: https://lore.kernel.org/netdev/20220927203611.244301-1-saeed@kernel.org/
====================

Link: https://lore.kernel.org/r/20220929072156.93299-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: Clean up and fix error flows in mlx5e_alloc_rq  (Maxim Mikityanskiy)
Although mlx5e_rq_free_shampo can be called unconditionally, it belongs to case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ. Move it there to allow to add more init/cleanup actions to the striding RQ case. If xdp_rxq_info_reg_mem_model fails, don't forget to destroy the page pool. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: Move repeating clear_bit in mlx5e_rx_reporter_err_rq_cqe_recover  (Maxim Mikityanskiy)
The same clear_bit is called in both error and success flows. Move the call to do it only once and remove the out label. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: Split out channel (de)activation in rx_res  (Maxim Mikityanskiy)
To decrease the nesting level and reduce duplication of code, create functions to redirect direct RQTs to the actual RQs or drop_rq, which are used in the activation and deactivation flows of channels. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: xsk: Remove mlx5e_xsk_page_alloc_pool  (Maxim Mikityanskiy)
mlx5e_xsk_page_alloc_pool became a thin wrapper around xsk_buff_alloc. Drop it and call xsk_buff_alloc directly. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: Convert struct mlx5e_alloc_unit to a union  (Maxim Mikityanskiy)
struct mlx5e_alloc_unit consists of a single union. Convert it to a union itself to simplify casting it to struct xdp_buff *, which will be used to implement XSK batching on striding RQ. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: Remove DMA address from mlx5e_alloc_unit  (Maxim Mikityanskiy)
mlx5e_alloc_unit stores the DMA address and a pointer to either struct page (regular RQ) or struct xdp_buff (XSK RQ). This DMA address is redundant, because when a page or an XSK frame is allocated, the same address is also stored there. Some flows take the address from struct mlx5e_alloc_unit, and some take it from struct page or xdp_buff. This commit removes the address from struct mlx5e_alloc_unit, which makes it twice as small and improves locality (this struct is used in an array), also saving on unnecessary stores to the addr field. Almost all flows know unambiguously whether the DMA address should be taken from page or from xdp_buff. The exception is the allocation flows, where a new branch appeared, which will be optimized out in the next commits. struct mlx5e_alloc_unit used to be called mlx5e_dma_info. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: Rename mlx5e_dma_info to prepare for removal of DMA address  (Maxim Mikityanskiy)
The next commit will remove the DMA address from the struct currently called mlx5e_dma_info, because the same value can be retrieved with page_pool_get_dma_addr(page) in almost all cases, with the notable exception of SHAMPO (HW GRO implementation) that modifies this address on the fly, after the initial allocation. To keep the SHAMPO logic intact, struct mlx5e_dma_info remains in the SHAMPO code, consisting of addr and page (XSK is not compatible with SHAMPO). The struct used in all other places is renamed to mlx5e_alloc_unit, allowing the next commit to remove the addr field without affecting SHAMPO. The new name means "allocation unit", and it's more appropriate after the field with the DMA address gets removed. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: Optimize the page cache reducing its size 2x  (Maxim Mikityanskiy)
RX page cache stores dma_info structs, that consist of a pointer to struct page and a DMA address. In fact, the DMA address is extracted from struct page using page_pool_get_dma_addr when a page is pushed to the cache. By moving this call to the point when a page is popped from the cache, we can avoid storing the DMA address in the cache, effectively reducing its size by two times without losing any functionality. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
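The change boils down to storing only the page pointer in the cache and recovering the DMA address on pop via page_pool_get_dma_addr(). Below is a minimal sketch with an illustrative (hypothetical) cache structure, not the actual mlx5e code:

  /* Sketch only: illustrative ring cache, not the real mlx5e structures. */
  #include <linux/kernel.h>
  #include <linux/mm_types.h>
  #include <linux/types.h>
  #include <net/page_pool.h>

  struct rx_page_cache {
          struct page *pages[256];  /* was: { struct page *page; dma_addr_t addr; } */
          u32 head, tail;
  };

  static bool cache_push(struct rx_page_cache *c, struct page *page)
  {
          u32 next = (c->tail + 1) % ARRAY_SIZE(c->pages);

          if (next == c->head)
                  return false;               /* cache full */
          c->pages[c->tail] = page;           /* no DMA address stored */
          c->tail = next;
          return true;
  }

  static struct page *cache_pop(struct rx_page_cache *c, dma_addr_t *addr)
  {
          struct page *page;

          if (c->head == c->tail)
                  return NULL;                /* cache empty */
          page = c->pages[c->head];
          c->head = (c->head + 1) % ARRAY_SIZE(c->pages);
          *addr = page_pool_get_dma_addr(page);  /* recovered only when needed */
          return page;
  }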
2022-09-30  net/mlx5e: Fix calculations for ICOSQ size  (Maxim Mikityanskiy)
WQEs must not cross page boundaries, they are padded with NOPs if they don't fit the page. mlx5e_mpwrq_total_umr_wqebbs doesn't take into account this padding, risking reserving not enough space. The padding is not straightforward to add to this calculation, because WQEs of different sizes may be mixed together in the queue. If each page ends with a big WQE that doesn't fit and requires at most its size minus 1 WQEBB of padding, the total space can be much bigger than in case when smaller WQEs take advantage of this padding. Replace the wrong exact calculation by the following estimation. Each padding can be at most the size of the maximum WQE used in the queue minus one WQEBB. Let's call the rest of the page "useful space". If we divide the total size of all needed WQEs by this useful space, rounding up, we'll get the number of pages, which is enough to contain all these WQEs. It's correct, because every WQE that appeared on the boundary between two blocks of useful space would start in the useful space of one page and end in the padding of the same page, while our estimation reserved space for its tail in the next space, making the estimation not smaller than the real space occupied in the queue. The code actually uses a looser estimation: instead of taking the maximum size of all used WQE types minus 1 WQEBB, it takes the maximum hardware size of a WQE. It's made for simplicity and extensibility. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
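To make the estimation concrete, here is a small sketch of the upper bound described above; the 64-byte WQEBB size and the 16-WQEBB maximum WQE size are assumptions for illustration, not the driver's exact constants:

  /* Sketch of the page-count upper bound; names and constants are illustrative. */
  #include <linux/kernel.h>       /* DIV_ROUND_UP */
  #include <linux/types.h>

  #define WQEBB_SIZE   64         /* bytes per WQE basic block (assumed) */
  #define MAX_WQE_BBS  16         /* assumed largest WQE used in the queue, in WQEBBs */

  static u32 icosq_pages_needed(u32 total_wqebbs, u32 page_size)
  {
          u32 wqebbs_per_page = page_size / WQEBB_SIZE;
          /* Worst case, a page ends with (MAX_WQE_BBS - 1) WQEBBs of NOP padding,
           * so only the rest of the page is guaranteed to be "useful space". */
          u32 useful_wqebbs = wqebbs_per_page - (MAX_WQE_BBS - 1);

          /* Rounding up keeps the estimate no smaller than the real usage. */
          return DIV_ROUND_UP(total_wqebbs, useful_wqebbs);
  }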
2022-09-30  xsk: Remove unused xsk_buff_discard  (Maxim Mikityanskiy)
The previous commit removed the last usage of xsk_buff_discard in mlx5e, so the function that is no longer used can be removed. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> CC: "Björn Töpel" <bjorn@kernel.org> CC: Magnus Karlsson <magnus.karlsson@intel.com> CC: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: xsk: Use KSM for unaligned XSK  (Maxim Mikityanskiy)
UMR MTTs used in striding RQ have certain alignment requirements.
While it's guaranteed to work when UMR pages are aligned to the UMR
page size, in practice it works when UMR pages are aligned to 8 bytes.
However, it's still not enough flexibility for the unaligned mode of
XSK.

This patch leverages KSM to map UMR pages without alignment
requirements, when unaligned XSK is active. The downside is that KSM
entries are twice as big as MTTs, which limits the maximum WQE size,
so regular RQs and aligned XSK continue using MTTs.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5: Add MLX5_FLEXIBLE_INLEN to safely calculate cmd inlen  (Maxim Mikityanskiy)
Some commands use a flexible array after a common header. Add a macro to safely calculate the total input length of the command, detecting overflows and printing errors with specific values when such overflows happen. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
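A sketch of what such a helper can look like: an overflow-checked multiply and add that logs the offending values and returns 0 on overflow. This is an illustrative shape only, not necessarily the exact MLX5_FLEXIBLE_INLEN implementation:

  /* Sketch only; the real MLX5_FLEXIBLE_INLEN may differ in details. */
  #include <linux/kernel.h>
  #include <linux/overflow.h>
  #include <linux/printk.h>

  #define FLEXIBLE_INLEN(fixed_len, entry_sz, count) ({ \
          size_t __arr_sz, __inlen = 0; \
          /* inlen = fixed_len + entry_sz * count, with overflow checks */ \
          if (check_mul_overflow((size_t)(entry_sz), (size_t)(count), &__arr_sz) || \
              check_add_overflow((size_t)(fixed_len), __arr_sz, &__inlen)) { \
                  pr_err("cmd inlen overflow: base %zu, entry %zu, count %zu\n", \
                         (size_t)(fixed_len), (size_t)(entry_sz), (size_t)(count)); \
                  __inlen = 0; \
          } \
          __inlen; \
  })

A caller would then treat a returned length of 0 as an error before allocating the command buffer.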
2022-09-30  net/mlx5e: Keep a separate MKey for striding RQ  (Maxim Mikityanskiy)
Currently, rq->mkey_be keeps a big-endian value of either the PA MKey (for legacy RQ, no address translation) or MTT MKey (for striding RQ, direct address translation). Striding RQ stores the same value in rq->umr_mkey in the native endianness. The next commit will make striding RQ use KSM MKey (indirect address translation) for the unaligned mode of XSK, which will require storing both KSM MKey and PA MKey in the RQ struct. This commit optimizes fields of mlx5e_rq: umr_mkey is removed (it's redundant), mkey_be always points to the PA MKey, and mpwqe.umr_mkey_be points to the MTT MKey (or to the KSM MKey, starting from the next commit). Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: xsk: Use XSK frame size as striding RQ page size  (Maxim Mikityanskiy)
XSK RQs support striding RQ linear mode, but the stride size is always set to PAGE_SIZE. It may be larger than the XSK frame size, unnecessarily reducing the useful space in a WQE, but more importantly causing UMEM data corruption in certain cases. Normally, stride size bigger than XSK frame size is not a problem if the hardware enforces the MTU. However, traffic between vports skips the hardware MTU check, and oversized packets may be received. If an oversized packet is bigger than the XSK frame but not bigger than the stride, it will cause overwriting of the adjacent UMEM region. If the packet takes more than one stride, they can be recycled for reuse so it's not a problem when the XSK frame size matches the stride size. To reduce the impact of the above issue, attempt to use the MTT page size for striding RQ that matches the XSK frame size, allowing to safely use 2048-byte frames on an up-to-date firmware. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: Use runtime page_shift for striding RQ  (Maxim Mikityanskiy)
This commit allows striding RQ to determine MTT page size at runtime, instead of sticking to the compile-time PAGE_SIZE. This functionality will be used by a following commit that adjusts the MTT page size to the XSK frame size. Stick with PAGE_SIZE for XSK on legacy RQ, as frag_stride is not used in data path, it only helps calculate how pages are partitioned into fragments, and PAGE_SIZE will ensure each fragment starts at the beginning of a new allocation unit (XSK frame). Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  xsk: Expose min chunk size to drivers  (Maxim Mikityanskiy)
Drivers should be aware of the range of valid UMEM chunk sizes to be able to allocate their internal structures of an appropriate size. It will be used by mlx5e in the following patches. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> CC: "Björn Töpel" <bjorn@kernel.org> CC: Magnus Karlsson <magnus.karlsson@intel.com> CC: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  MIPS: Simplify __bswapdi2() and __bswapsi2()  (Tiezhu Yang)
Use macro definitions ___constant_swab64 and ___constant_swab32 to simplify __bswapdi2() and __bswapsi2(). Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
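The simplified helpers essentially reduce to the existing swab macros; roughly (a sketch of the resulting functions):

  #include <linux/compiler.h>   /* notrace */
  #include <linux/swab.h>       /* ___constant_swab32(), ___constant_swab64() */

  unsigned int notrace __bswapsi2(unsigned int u)
  {
          return ___constant_swab32(u);
  }

  unsigned long long notrace __bswapdi2(unsigned long long u)
  {
          return ___constant_swab64(u);
  }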
2022-09-30  MIPS: Silence missing prototype warning  (Tiezhu Yang)
Silence the following two warnings when make W=1:

    CC      arch/mips/lib/bswapsi.o
  arch/mips/lib/bswapsi.c:5:22: warning: no previous prototype for '__bswapsi2' [-Wmissing-prototypes]
   unsigned int notrace __bswapsi2(unsigned int u)
                        ^~~~~~~~~~
    CC      arch/mips/lib/bswapdi.o
  arch/mips/lib/bswapdi.c:5:28: warning: no previous prototype for '__bswapdi2' [-Wmissing-prototypes]
   unsigned long long notrace __bswapdi2(unsigned long long u)
                              ^~~~~~~~~~
    AR      arch/mips/lib/built-in.a

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2022-09-30  mips: update config files  (Lukas Bulwahn)
Clean up config files by:

  - removing configs that were deleted in the past
  - removing configs not in tree and without recently pending patches
  - adding new configs that are replacements for old configs in the file

For some detailed information, see Link.

Link: https://lore.kernel.org/kernel-janitors/20220929090645.1389-1-lukas.bulwahn@gmail.com/
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2022-09-30  ip6_vti:Remove the space before the comma  (Hongbin Wang)
There should be no space before the comma Signed-off-by: Hongbin Wang <wh_bin@126.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec  (David S. Miller)
Steffen Klassert says:

====================
pull request (net): ipsec 2022-09-29

1) Use the inner instead of the outer protocol for GSO on inter
   address family tunnels. This fixes the GSO case for address family
   tunnels. From Sabrina Dubroca.

2) Reset ipcomp_scratches with NULL when freed, otherwise it holds an
   obsolete address. From Khalid Masum.

3) Reinject transport-mode packets through workqueue instead of a
   tasklet. The tasklet might take too long to finish. From Liu Jian.

Please pull or let me know if there are problems.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  Merge branch 'Mediatek-mt8188'  (David S. Miller)
Jianguo Zhang says:

====================
Mediatek ethernet patches for mt8188

Changes in v7:

v7:
1) Add 'Reviewed-by: AngeloGioacchino Del Regno
   <angelogioacchino.delregno@collabora.com>' info in the commit
   messages of the patches 'dt-bindings: net: snps,dwmac: add new
   property snps,clk-csr', 'arm64: dts: mediatek: mt2712e: Update the
   name of property 'clk_csr'' and 'net: stmmac: add a parse for new
   property 'snps,clk-csr''.

v6:
1) Update the commit message of patch 'dt-bindings: net: snps,dwmac:
   add new property snps,clk-csr'.
2) Add a parse for the new property 'snps,clk-csr' in patch 'net:
   stmmac: add a parse for new property 'snps,clk-csr''.

v5:
1) Rename the property 'clk_csr' as 'snps,clk-csr' in the binding
   file, as per Krzysztof Kozlowski's comment.
2) Add DTS patch 'arm64: dts: mediatek: mt2712e: Update the name of
   property 'clk_csr'', as per Krzysztof Kozlowski's comment.
3) Add driver patch 'net: stmmac: Update the name of property
   'clk_csr'', as per Krzysztof Kozlowski's comment.

v4:
1) Update the commit message of patch 'dt-bindings: net: snps,dwmac:
   add clk_csr property', as per Krzysztof Kozlowski's comment.

v3:
1) List the names of SoCs mt8188 and mt8195 in the correct order, as
   per AngeloGioacchino Del Regno's comment.
2) Add patch version info, as per Krzysztof Kozlowski's comment.

v2:
1) Delete patch 'stmmac: dwmac-mediatek: add support for mt8188', as
   per Krzysztof Kozlowski's comment.
2) Update patch 'dt-bindings: net: mediatek-dwmac: add support for
   mt8188', as per Krzysztof Kozlowski's comment.
3) Add the clk_csr property to fix a warning ('clk_csr' was
   unexpected) when running 'make dtbs_check'.

v1:
1) Add ethernet driver entry for mt8188.
2) Add binding document for ethernet on mt8188.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net: stmmac: add a parse for new property 'snps,clk-csr'  (Jianguo Zhang)
Parse the new property 'snps,clk-csr' first, since it is the one
documented in the binding file; if that fails, fall back to the old
property 'clk_csr' for the legacy case.

Signed-off-by: Jianguo Zhang <jianguo.zhang@mediatek.com>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
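The fallback amounts to two device-tree reads; a plausible sketch of the shape (not the exact stmmac code):

  #include <linux/of.h>

  static void parse_clk_csr(struct device_node *np, u32 *clk_csr)
  {
          /* Prefer the documented property name... */
          if (of_property_read_u32(np, "snps,clk-csr", clk_csr))
                  /* ...but keep accepting the legacy 'clk_csr' for old DTs. */
                  of_property_read_u32(np, "clk_csr", clk_csr);
  }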
2022-09-30  arm64: dts: mediatek: mt2712e: Update the name of property 'clk_csr'  (Jianguo Zhang)
Update the name of property 'clk_csr' as 'snps,clk-csr' to align with the property name in the binding file. Signed-off-by: Jianguo Zhang <jianguo.zhang@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  dt-bindings: net: snps,dwmac: add new property snps,clk-csr  (Jianguo Zhang)
Add description for new property snps,clk-csr in binding file Signed-off-by: Jianguo Zhang <jianguo.zhang@mediatek.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  dt-bindings: net: mediatek-dwmac: add support for mt8188  (Jianguo Zhang)
Add binding document for the ethernet on mt8188 Signed-off-by: Jianguo Zhang <jianguo.zhang@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net/mlx5: Fix spelling mistake "syndrom" -> "syndrome"  (Colin Ian King)
There is a spelling mistake in a devlink_health_report message. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net: bna: Fix spelling mistake "muliple" -> "multiple"  (Colin Ian King)
There is a spelling mistake in a literal string in the array bnad_net_stats_strings. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  ibmveth: Ethtool set queue support  (Nick Child)
Implement channel management functions to allow dynamic addition and
removal of transmit queues. The `ethtool --show-channels` and
`ethtool --set-channels` commands can be used to get and set the
number of queues, respectively. Allow adding as many transmit queues
as there are available processors, but never more than the hard
maximum of 16. The number of receive queues is one and cannot be
modified.

Depending on whether the requested number of queues is larger or
smaller than the current value, either allocate or free long term
buffers. Since long term buffer construction and destruction can occur
in two different areas, from either channel set requests or device
open/close, define functions for performing this work. If allocation
of a new buffer fails, then attempt to revert back to the previous
number of queues.

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  ibmveth: Implement multi queue on xmit  (Nick Child)
The `ndo_start_xmit` function is protected by a spinlock on the tx queue being used to transmit the skb. Allow concurrent calls to `ndo_start_xmit` by using more than one tx queue. This allows for greater throughput when several jobs are trying to transmit data. Introduce 16 tx queues (leave single rx queue as is) which each correspond to one DMA mapped long term buffer. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  ibmveth: Copy tx skbs into a premapped buffer  (Nick Child)
Rather than DMA mapping and unmapping every outgoing skb, copy the skb
into a buffer that was mapped during the driver's open function.
Copying the skb and its frags has proven to be more time efficient
than mapping and unmapping. As an effect, performance increases by
3-5 Gbits/s.

Allocate and DMA map one continuous 64KB buffer at `ndo_open`. This
buffer is maintained until `ibmveth_close` is called. This buffer is
large enough to hold the largest possible linear skb. During
`ndo_start_xmit`, copy the skb and all of its frags into the
continuous buffer. By manually linearizing all the socket buffers,
time is saved during memcpy as well as more efficient handling in FW.

As a result, we no longer need to worry about the firmware limitation
of handling a max of 6 frags. So, we only need to maintain 1
descriptor instead of 6 and can hardcode 0 for the other 5 descriptors
during h_send_logical_lan.

Since DMA allocation/mapping issues can no longer arise in the xmit
functions, we can further reduce code size by removing the need for a
bounce buffer on DMA errors.

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
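Conceptually, the transmit path now reduces to one copy into the long-term buffer plus a single descriptor; a simplified sketch with hypothetical descriptor and field names (not the actual ibmveth code):

  /* Sketch only: copy the whole (possibly fragmented) skb into a buffer
   * that was DMA-mapped once at open time, so xmit never maps/unmaps. */
  #include <linux/kernel.h>
  #include <linux/skbuff.h>
  #include <linux/types.h>

  struct tx_desc_sketch {               /* hypothetical descriptor layout */
          u32 flags_len;
          u32 addr;
  };

  #define TX_DESC_VALID 0x80000000      /* hypothetical "valid" flag */

  static int xmit_copy_sketch(struct sk_buff *skb, void *ltb_vaddr,
                              dma_addr_t ltb_dma, struct tx_desc_sketch *desc)
  {
          /* skb_copy_bits() walks the linear part and all frags, effectively
           * linearizing the packet into the premapped buffer. */
          if (skb_copy_bits(skb, 0, ltb_vaddr, skb->len))
                  return -EIO;

          /* One descriptor covers the whole packet; the other five stay 0. */
          desc->flags_len = TX_DESC_VALID | skb->len;
          desc->addr = lower_32_bits(ltb_dma);
          return 0;
  }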
2022-09-30  bnx2: Fix spelling mistake "bufferred" -> "buffered"  (Colin Ian King)
There are spelling mistakes in two literal strings. Fix these. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  tcp: fix tcp_cwnd_validate() to not forget is_cwnd_limited  (Neal Cardwell)
This commit fixes a bug in the tracking of max_packets_out and
is_cwnd_limited. This bug can cause the connection to fail to remember
that is_cwnd_limited is true, causing the connection to fail to grow
cwnd when it should, causing throughput to be lower than it should be.

The following event sequence is an example that triggers the bug:

 (a) The connection is cwnd_limited, but packets_out is not at its
     peak due to TSO deferral deciding not to send another skb yet. In
     such cases the connection can advance max_packets_seq and set
     tp->is_cwnd_limited to true and max_packets_out to a small
     number.

 (b) Then later in the round trip the connection is pacing-limited
     (not cwnd-limited), and packets_out is larger. In such cases the
     connection would raise max_packets_out to a bigger number but
     (unexpectedly) flip tp->is_cwnd_limited from true to false.

This commit fixes that bug.

One straightforward fix would be to separately track (a) the next
window after max_packets_out reaches a maximum, and (b) the next
window after tp->is_cwnd_limited is set to true. But this would
require consuming an extra u32 sequence number.

Instead, to save space we track only the most important information.
Specifically, we track the strongest available signal of the degree to
which the cwnd is fully utilized:

 (1) If the connection is cwnd-limited then we remember that fact for
     the current window.

 (2) If the connection is not cwnd-limited then we track the maximum
     number of outstanding packets in the current window.

In particular, note that the new logic cannot trigger the buggy
(a)/(b) sequence above because with the new logic a condition where
tp->packets_out > tp->max_packets_out can only trigger an update of
tp->is_cwnd_limited if tp->is_cwnd_limited is false.

This first showed up in testing of a BBRv2 dev branch, but this buggy
behavior highlighted a general issue with the tcp_cwnd_validate()
logic that can cause cwnd to fail to increase at the proper rate for
any TCP congestion control, including Reno or CUBIC.

Fixes: ca8a22634381 ("tcp: make cwnd-limited checks measurement-based, and gentler")
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Kevin(Yudong) Yang <yyd@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
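In pseudo-C, the new per-window rule keeps the strongest utilization signal and never flips the flag from true to false mid-window; a simplified sketch with illustrative names (not the exact tcp_output.c code):

  #include <net/tcp.h>    /* before() */

  struct cwnd_track {                   /* illustrative stand-in for tcp_sock fields */
          u32 max_packets_out;          /* peak packets_out seen this window */
          u32 cwnd_usage_seq;           /* snd_nxt at the last update; marks the window */
          bool is_cwnd_limited;         /* sticky within the window once set */
  };

  static void cwnd_validate_track(struct cwnd_track *t, u32 snd_una, u32 snd_nxt,
                                  u32 packets_out, bool is_cwnd_limited)
  {
          /* A new window starts once the previous usage marker is acked. */
          bool new_window = !before(snd_una, t->cwnd_usage_seq);

          /* Update only if (a) a new window starts, (b) we are cwnd-limited
           * right now, or (c) we were not cwnd-limited and packets_out sets a
           * new peak. A higher packets_out alone can no longer clear the flag. */
          if (new_window || is_cwnd_limited ||
              (!t->is_cwnd_limited && packets_out > t->max_packets_out)) {
                  t->is_cwnd_limited = is_cwnd_limited;
                  t->max_packets_out = packets_out;
                  t->cwnd_usage_seq = snd_nxt;
          }
  }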
2022-09-30  sctp: handle the error returned from sctp_auth_asoc_init_active_key  (Xin Long)
When it returns an error from sctp_auth_asoc_init_active_key(), the
active_key is actually not updated. The old sh_key will be freed while
it's still used as the active key in the asoc. Then a use-after-free
will be triggered when sending packets, as found by syzbot:

  sctp_auth_shkey_hold+0x22/0xa0 net/sctp/auth.c:112
  sctp_set_owner_w net/sctp/socket.c:132 [inline]
  sctp_sendmsg_to_asoc+0xbd5/0x1a20 net/sctp/socket.c:1863
  sctp_sendmsg+0x1053/0x1d50 net/sctp/socket.c:2025
  inet_sendmsg+0x99/0xe0 net/ipv4/af_inet.c:819
  sock_sendmsg_nosec net/socket.c:714 [inline]
  sock_sendmsg+0xcf/0x120 net/socket.c:734

This patch fixes it by not replacing the sh_key when
sctp_auth_asoc_init_active_key() returns an error in
sctp_auth_set_key(). For sctp_auth_set_active_key(), the old
active_key_id will be set back to asoc->active_key_id when the same
thing happens.

Fixes: 58acd1009226 ("sctp: update active_key for asoc when old key is being replaced")
Reported-by: syzbot+a236dd8e9622ed8954a3@syzkaller.appspotmail.com
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net: bridge: assign path_cost for 2.5G and 5G link speed  (Steven Hsieh)
As 2.5G and 5G ethernet ports become more common and affordable, these
ports are being used in LAN bridge devices. STP port_cost() is missing
a path_cost assignment for these link speeds, causing the highest cost
of 100 to be used. This results in the lower-speed port being picked
when there is a loop between a 5G and a 1G port.

  Original path_cost: 10G=2,               1G=4, 100m=19, 10m=100
  Adjusted path_cost: 10G=2, 5G=3, 2.5G=4, 1G=5, 100m=19, 10m=100
                      speed greater than 10G = 1

Signed-off-by: Steven Hsieh <steven.hsieh@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
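The adjusted mapping is just a wider switch on the link speed; an illustrative sketch mirroring the table above (not the exact br_stp_if.c code):

  /* Sketch: STP path cost per link speed (Mb/s), per the table above. */
  #include <linux/types.h>

  static u16 stp_path_cost_sketch(unsigned int speed)
  {
          if (speed > 10000)
                  return 1;
          switch (speed) {
          case 10000: return 2;
          case 5000:  return 3;
          case 2500:  return 4;
          case 1000:  return 5;
          case 100:   return 19;
          case 10:    return 100;
          default:    return 100;  /* unknown/slower links keep the highest cost */
          }
  }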
2022-09-30  net: lan966x: Fix spelling mistake "tarffic" -> "traffic"  (Colin Ian King)
There is a spelling mistake in a netdev_err message. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  mISDN: fix use-after-free bugs in l1oip timer handlers  (Duoming Zhou)
The l1oip_cleanup() traverses the l1oip_ilist and calls release_card()
to clean up the module and stack. However, release_card() calls
del_timer() to delete the timers such as keep_tl and timeout_tl. If a
timer handler is running, del_timer() will not stop it, resulting in
UAF bugs. One of the processes is shown below:

    (cleanup routine)          |    (timer handler)
  release_card()               |  l1oip_timeout()
    ...                        |
    del_timer()                |   ...
    ...                        |
    kfree(hc) //FREE           |
                               |   hc->timeout_on = 0 //USE

Fix by calling del_timer_sync() in release_card(), which makes sure
the timer handlers have finished before the resources, such as l1oip
and so on, have been deallocated.

What's more, the hc->workq and hc->socket_thread can kick those timers
right back in. We add a bool flag to show whether the card has been
released. Then, check this flag in hc->workq and hc->socket_thread.

Fixes: 3712b42d4b1b ("Add layer1 over IP support")
Signed-off-by: Duoming Zhou <duoming@zju.edu.cn>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
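The shape of the fix, simplified (hypothetical struct and flag names; the real l1oip code differs): stop the timers synchronously before freeing, and let the workqueue/socket thread check a "released" flag before re-arming them.

  #include <linux/slab.h>
  #include <linux/timer.h>
  #include <linux/workqueue.h>

  struct l1oip_sketch {                 /* illustrative stand-in for struct l1oip */
          struct timer_list keep_tl, timeout_tl;
          struct work_struct workq;
          bool shutdown;                /* new flag: card is being released */
  };

  static void release_card_sketch(struct l1oip_sketch *hc)
  {
          /* Workqueue and socket thread must test this before any mod_timer(). */
          hc->shutdown = true;

          /* del_timer_sync() waits for a running handler, unlike del_timer(). */
          del_timer_sync(&hc->keep_tl);
          del_timer_sync(&hc->timeout_tl);

          cancel_work_sync(&hc->workq);
          kfree(hc);
  }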
2022-09-30  net-next: skbuff: refactor pskb_pull  (Richard Gobert)
pskb_may_pull already contains all of the checks performed by pskb_pull. Use pskb_may_pull for validation in pskb_pull, eliminating the duplication and making __pskb_pull obsolete. Replace __pskb_pull with pskb_pull where applicable. Signed-off-by: Richard Gobert <richardbgobert@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
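After the refactor, pskb_pull() can be expressed directly in terms of pskb_may_pull(); roughly (a sketch of the resulting inline helper, renamed here to stay self-contained next to include/linux/skbuff.h):

  #include <linux/skbuff.h>

  /* Sketch: validation is delegated to pskb_may_pull(). */
  static inline void *pskb_pull_sketch(struct sk_buff *skb, unsigned int len)
  {
          if (unlikely(!pskb_may_pull(skb, len)))
                  return NULL;
          return skb_pull(skb, len);
  }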
2022-09-30  net: bonding: Convert to use sysfs_emit()/sysfs_emit_at() APIs  (Wang Yufen)
Follow the advice of the Documentation/filesystems/sysfs.rst and show() should only use sysfs_emit() or sysfs_emit_at() when formatting the value to be returned to user space. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
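The conversion pattern in a show() callback is mechanical; an illustrative example (a made-up attribute, not a specific bonding one):

  #include <linux/device.h>
  #include <linux/sysfs.h>

  /* Before: sprintf(buf, "%d\n", value);
   * After:  sysfs_emit() enforces the PAGE_SIZE bound and a page-aligned buf. */
  static ssize_t example_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
  {
          int value = 42;               /* illustrative value */

          return sysfs_emit(buf, "%d\n", value);
  }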
2022-09-30  net-sysfs: Convert to use sysfs_emit() APIs  (Wang Yufen)
Follow the advice of the Documentation/filesystems/sysfs.rst and show() should only use sysfs_emit() or sysfs_emit_at() when formatting the value to be returned to user space. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net: tun: Convert to use sysfs_emit() APIs  (Wang Yufen)
Follow the advice of the Documentation/filesystems/sysfs.rst and show() should only use sysfs_emit() or sysfs_emit_at() when formatting the value to be returned to user space. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  KVM: selftests: Compare insn opcodes directly in fix_hypercall_test  (Sean Christopherson)
Directly compare the expected versus observed hypercall instructions
when verifying that KVM patched in the native hypercall
(FIX_HYPERCALL_INSN quirk enabled). gcc rightly complains that doing a
4-byte memcpy() with an "unsigned char" as the source generates
out-of-bounds accesses. Alternatively, "exp" and "obs" could be
declared as 3-byte arrays, but there's no known reason to copy locally
instead of comparing directly.

  In function ‘assert_hypercall_insn’,
      inlined from ‘guest_main’ at x86_64/fix_hypercall_test.c:91:2:
  x86_64/fix_hypercall_test.c:63:9: error: array subscript ‘unsigned int[0]’ is partly outside array bounds of ‘unsigned char[1]’ [-Werror=array-bounds]
     63 |         memcpy(&exp, exp_insn, sizeof(exp));
        |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  x86_64/fix_hypercall_test.c: In function ‘guest_main’:
  x86_64/fix_hypercall_test.c:42:22: note: object ‘vmx_hypercall_insn’ of size 1
     42 | extern unsigned char vmx_hypercall_insn;
        |                      ^~~~~~~~~~~~~~~~~~
  x86_64/fix_hypercall_test.c:25:22: note: object ‘svm_hypercall_insn’ of size 1
     25 | extern unsigned char svm_hypercall_insn;
        |                      ^~~~~~~~~~~~~~~~~~
  In function ‘assert_hypercall_insn’,
      inlined from ‘guest_main’ at x86_64/fix_hypercall_test.c:91:2:
  x86_64/fix_hypercall_test.c:64:9: error: array subscript ‘unsigned int[0]’ is partly outside array bounds of ‘unsigned char[1]’ [-Werror=array-bounds]
     64 |         memcpy(&obs, obs_insn, sizeof(obs));
        |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  x86_64/fix_hypercall_test.c: In function ‘guest_main’:
  x86_64/fix_hypercall_test.c:25:22: note: object ‘svm_hypercall_insn’ of size 1
     25 | extern unsigned char svm_hypercall_insn;
        |                      ^~~~~~~~~~~~~~~~~~
  x86_64/fix_hypercall_test.c:42:22: note: object ‘vmx_hypercall_insn’ of size 1
     42 | extern unsigned char vmx_hypercall_insn;
        |                      ^~~~~~~~~~~~~~~~~~
  cc1: all warnings being treated as errors
  make: *** [../lib.mk:135: tools/testing/selftests/kvm/x86_64/fix_hypercall_test] Error 1

Fixes: 6c2fa8b20d0c ("selftests: KVM: Test KVM_X86_QUIRK_FIX_HYPERCALL_INSN")
Cc: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Message-Id: <20220928233652.783504-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
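The gist of the fix is to stop copying through 1-byte extern objects and compare the instruction bytes in place. A simplified sketch (the helper name is hypothetical and this is not the exact selftest code; vmcall/vmmcall being 3 bytes is the relevant fact):

  /* Sketch only: compare the 3-byte hypercall opcodes directly instead of
   * memcpy()ing them into local 4-byte integers. */
  #include <stdbool.h>
  #include <string.h>

  #define HYPERCALL_INSN_SIZE 3   /* vmcall / vmmcall are 3 bytes */

  static bool hypercall_insn_matches(const unsigned char *exp_insn,
                                     const unsigned char *obs_insn)
  {
          /* No oversized local copies, so no array-bounds warning from gcc. */
          return memcmp(exp_insn, obs_insn, HYPERCALL_INSN_SIZE) == 0;
  }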