2019-08-15  drm/ast: Fixed reboot test may cause system hanged  (Y.C. Chen)
There is another thread that still accesses standard VGA I/O while the drm driver is loading. Disable standard VGA I/O decode to avoid this issue. Signed-off-by: Y.C. Chen <yc_chen@aspeedtech.com> Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Dave Airlie <airlied@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/1523410059-18415-1-git-send-email-yc_chen@aspeedtech.com
2019-08-14  of: irq: fix a trivial typo in a doc comment  (Lubomir Rintel)
Diverged from what the code does with commit 530210c7814e ("of/irq: Replace of_irq with of_phandle_args"). Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Signed-off-by: Rob Herring <robh@kernel.org>
2019-08-14  dt-bindings: pinctrl: stm32: Fix 'st,syscfg' schema  (Rob Herring)
The proper way to add additional constraints to an existing json-schema is using 'allOf' to reference the base schema. Using just '$ref' doesn't work. Fix this for the 'st,syscfg' property. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Cc: linux-gpio@vger.kernel.org Cc: linux-stm32@st-md-mailman.stormreply.com Cc: linux-arm-kernel@lists.infradead.org Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Rob Herring <robh@kernel.org>
2019-08-14  Merge tag 'Wimplicit-fallthrough-5.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux  (Linus Torvalds)
Pull fallthrough fixes from Gustavo A. R. Silva: "Fix sh mainline builds: - Fix fall-through warning in sh. - Fix missing break bug in sh (this is a 10-year-old bug) Currently, mainline builds for sh are broken. These patches fix that" * tag 'Wimplicit-fallthrough-5.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux: sh: kernel: hw_breakpoint: Fix missing break in switch statement sh: kernel: disassemble: Mark expected switch fall-throughs
2019-08-14  Merge tag 'afs-fixes-20190814' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs  (Linus Torvalds)
Pull afs fixes from David Howells: - Fix the CB.ProbeUuid handler to generate its reply correctly. - Fix a mix up in indices when parsing a Volume Location entry record. - Fix a potential NULL-pointer deref when cleaning up a read request. - Fix the expected data version of the destination directory in afs_rename(). - Fix afs_d_revalidate() to only update d_fsdata if it's not the same as the directory data version to reduce the likelihood of overwriting the result of a competing operation. (d_fsdata carries the directory DV or the least-significant word thereof). - Fix the tracking of the data-version on a directory and make sure that dentry objects get properly initialised, updated and revalidated. Also fix rename to update d_fsdata to match the new directory's DV if the dentry gets moved over and unhash the dentry to stop afs_d_revalidate() from interfering. * tag 'afs-fixes-20190814' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs: afs: Fix missing dentry data version updating afs: Only update d_fsdata if different in afs_d_revalidate() afs: Fix off-by-one in afs_rename() expected data version calculation fs: afs: Fix a possible null-pointer dereference in afs_put_read() afs: Fix loop index mixup in afs_deliver_vl_get_entry_by_name_u() afs: Fix the CB.ProbeUuid service handler to reply correctly
2019-08-14  drm/scheduler: use job count instead of peek  (Christian König)
The spsc_queue_peek function is accessing queue->head, which belongs to the consumer thread and shouldn't be accessed by the producer. This fixes a rare race condition when destroying entities. Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Reviewed-by: Monk.liu@amd.com Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
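For reference, a minimal sketch of the producer-side check implied by this change, assuming the spsc_queue_count() helper from the scheduler's SPSC queue header and the drm_sched_entity layout of that era:
```
#include <drm/gpu_scheduler.h>

/*
 * Illustrative sketch only, not the literal patch: decide whether an
 * entity still has queued jobs by reading the producer-maintained
 * atomic job counter instead of peeking at queue->head, which is owned
 * by the consumer thread.
 */
static bool entity_has_queued_jobs(struct drm_sched_entity *entity)
{
	return spsc_queue_count(&entity->job_queue) != 0;
}
```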
2019-08-14  riscv: Make __fstate_clean() work correctly.  (Vincent Chen)
Make the __fstate_clean() function correctly set the state of sstatus.FS in pt_regs to SR_FS_CLEAN. Fixes: 7db91e57a0acd ("RISC-V: Task implementation") Cc: linux-stable <stable@vger.kernel.org> Signed-off-by: Vincent Chen <vincent.chen@sifive.com> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Christoph Hellwig <hch@lst.de> [paul.walmsley@sifive.com: expanded "Fixes" commit ID] Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
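A sketch of what the corrected helper plausibly looks like, built from the SR_FS/SR_FS_CLEAN masks in the riscv headers; treat the exact body as an assumption rather than a quote of the patch:
```
#include <asm/csr.h>
#include <asm/ptrace.h>

/*
 * Sketch: mark the FP state as "clean" by replacing the whole
 * sstatus.FS field rather than OR-ing a value into it, which could
 * leave stale bits (e.g. a "dirty" encoding) behind.
 */
static inline void __fstate_clean(struct pt_regs *regs)
{
	regs->sstatus = (regs->sstatus & ~SR_FS) | SR_FS_CLEAN;
}
```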
2019-08-14  riscv: Correct the initialized flow of FP register  (Vincent Chen)
The FP registers are sometimes not initialized before starting the user program, for the following two reasons. 1. Currently, the FP context is initialized in the flush_thread() function, and we expect these initial values to be restored to the FP registers on the next FP context switch. However, an FP context switch only occurs in the switch_to function. Hence, if this process is not scheduled out and back in before entering user space, the FP registers never get initialized. 2. In flush_thread(), the state of reg->sstatus.FS is inherited from the parent, so it may be dirty. If this process is scheduled out during flush_thread(), while the FP registers are being initialized, the fstate_save() in switch_to will corrupt the FP context that flush_thread() has just initialized. To solve the 1st case, the initialization of the FP registers is completed in start_thread(), which makes sure all FP registers are initialized before starting the user program. For the 2nd case, the state of reg->sstatus.FS is set to SR_FS_OFF in flush_thread() to prevent this process from corrupting the FP context while doing a context save; the FP state is then set to SR_FS_INITIAL in start_thread(). Signed-off-by: Vincent Chen <vincent.chen@sifive.com> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Fixes: 7db91e57a0acd ("RISC-V: Task implementation") Cc: stable@vger.kernel.org [paul.walmsley@sifive.com: fixed brace alignment issue reported by checkpatch] Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
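A rough sketch of the resulting flow, using the riscv fstate helpers and SR_FS_* masks as recalled; the bodies below are assumptions meant to illustrate the two-step fix, not the literal patch:
```
#include <linux/sched.h>
#include <linux/string.h>
#include <asm/csr.h>
#include <asm/switch_to.h>

/* Case 2: turn FP accesses off while the FP context is being reset, so a
 * preempting fstate_save() in switch_to cannot overwrite the fresh values. */
void flush_thread(void)
{
	fstate_off(current, task_pt_regs(current));	/* sstatus.FS = SR_FS_OFF */
	memset(&current->thread.fstate, 0, sizeof(current->thread.fstate));
}

/* Case 1: mark the FP unit as "initial" and load the reset values into the
 * FP registers right before the task enters user space. */
void start_thread(struct pt_regs *regs, unsigned long pc, unsigned long sp)
{
	regs->sstatus = SR_SPIE;
	if (has_fpu) {
		regs->sstatus |= SR_FS_INITIAL;
		fstate_restore(current, regs);
	}
	regs->sepc = pc;
	regs->sp = sp;
}
```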
2019-08-14  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma  (Linus Torvalds)
Pull rdma fixes from Doug Ledford: "Fairly small pull request for -rc3. I'm out of town the rest of this week, so I made sure to clean out as much as possible from patchworks in enough time for 0-day to chew through it (Yay! for 0-day being back online! :-)). Jason might send through any emergency stuff that could pop up, otherwise I'm back next week. The only real thing of note is the siw ABI change. Since we just merged siw *this* release, there are no prior kernel releases to maintain kernel ABI with. I told Bernard that if there is anything else about the siw ABI he thinks he might want to change before it goes set in stone, he should get it in ASAP. The siw module was around for several years outside the kernel tree, and it had to be revamped considerably for inclusion upstream, so we are making no attempts to be backward compatible with the out of tree version. Once 5.3 is actually released, we will have our baseline ABI to maintain. Summary: - Fix a memory registration release flow issue that was causing a WARN_ON (mlx5) - If the counters for a port aren't allocated, then we can't do operations on the non-existent counters (core) - Check the right variable for error code result (mlx5) - Fix a use after free issue (mlx5) - Fix an off by one memory leak (siw) - Actually return an error code on error (core) - Allow siw to be built on 32bit arches (siw, ABI change, but OK since siw was just merged this merge window and there is no prior released kernel to maintain compatibility with and we also updated the rdma-core user space package to match)" * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: RDMA/siw: Change CQ flags from 64->32 bits RDMA/core: Fix error code in stat_get_doit_qp() RDMA/siw: Fix a memory leak in siw_init_cpulist() IB/mlx5: Fix use-after-free error while accessing ev_file pointer IB/mlx5: Check the correct variable in error handling code RDMA/counter: Prevent QP counter binding if counters unsupported IB/mlx5: Fix implicit MR release flow
2019-08-14  ALSA: usb-audio: Fix an OOB bug in parse_audio_mixer_unit  (Hui Peng)
The `uac_mixer_unit_descriptor` shown below is read from the device side. In `parse_audio_mixer_unit`, the `baSourceID` field is accessed from index 0 to `bNrInPins` - 1, and the current implementation assumes that the descriptor is always valid (i.e. that its length is no shorter than 5 + `bNrInPins`). If a descriptor read from the device side is invalid, this may trigger an out-of-bounds memory access.
```
struct uac_mixer_unit_descriptor {
	__u8 bLength;
	__u8 bDescriptorType;
	__u8 bDescriptorSubtype;
	__u8 bUnitID;
	__u8 bNrInPins;
	__u8 baSourceID[];
};
```
This patch fixes the bug by adding a sanity check on the length of the descriptor. Reported-by: Hui Peng <benquike@gmail.com> Reported-by: Mathias Payer <mathias.payer@nebelwelt.net> Cc: <stable@vger.kernel.org> Signed-off-by: Hui Peng <benquike@gmail.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
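A sketch of the kind of length check described above; the helper name is hypothetical and the exact bound used by the applied patch may differ:
```
#include <linux/errno.h>
#include <linux/usb/audio.h>

/*
 * Sketch: reject a mixer unit descriptor whose bLength cannot cover the
 * fixed 5-byte header plus bNrInPins source IDs, so that the
 * baSourceID[] accesses in parse_audio_mixer_unit() stay in bounds.
 */
static int check_mixer_unit_desc(const struct uac_mixer_unit_descriptor *desc)
{
	if (desc->bLength < 5 || desc->bLength < 5 + desc->bNrInPins)
		return -EINVAL;
	return 0;
}
```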
2019-08-14  Merge tag 'dma-mapping-5.3-4' of git://git.infradead.org/users/hch/dma-mapping  (Linus Torvalds)
Pull dma-mapping fixes from Christoph Hellwig: - fix the handling of the bus_dma_mask in dma_get_required_mask, which caused a regression in this merge window (Lucas Stach) - fix a regression in the handling of DMA_ATTR_NO_KERNEL_MAPPING (me) - fix dma_mmap_coherent to not cause page attribute mismatches on coherent architectures like x86 (me) * tag 'dma-mapping-5.3-4' of git://git.infradead.org/users/hch/dma-mapping: dma-mapping: fix page attributes for dma_mmap_* dma-direct: don't truncate dma_required_mask to bus addressing capabilities dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING
2019-08-14  Merge tag 'iommu-fixes-v5.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  (Linus Torvalds)
Pull iommu fixes from Joerg Roedel: - A couple more fixes for the Intel VT-d driver for bugs introduced during the recent conversion of this driver to use IOMMU core default domains. - Fix for common dma-iommu code to make sure MSI mappings happen in the correct domain for a device. - Fix a corner case in the handling of sg-lists in dma-iommu code that might cause dma_length to be truncated. - Mark a switch as fall-through in arm-smmu code. * tag 'iommu-fixes-v5.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: iommu/vt-d: Fix possible use-after-free of private domain iommu/vt-d: Detach domain before using a private one iommu/dma: Handle SG length overflow better iommu/vt-d: Correctly check format of page table in debugfs iommu/vt-d: Detach domain when move device out of group iommu/arm-smmu: Mark expected switch fall-through iommu/dma: Handle MSI mappings separately
2019-08-14  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds)
Merge misc VM fixes from Andrew Morton: "A bunch of hotfixes, all affecting mm/. The two-patch series from Andrea may be controversial. This restores patches which were reverted in Dec 2018 due to a regression report [*]. After extensive discussion it is evident that the problems which these patches solved were significantly more serious than the problems they introduced. I am told that major distros are already carrying these two patches for this reason" [*] See https://lore.kernel.org/lkml/alpine.DEB.2.21.1812061343240.144733@chino.kir.corp.google.com/ https://lore.kernel.org/lkml/alpine.DEB.2.21.1812031545560.161134@chino.kir.corp.google.com/ for the google-specific issues brought up by David Rijentes. And as Andrew says: "I'm unaware of anyone else who will be adversely affected by this, and google already carries over a thousand kernel patches - another won't kill them. There has been sporadic discussion about fixing these things for real but it's clear that nobody apart from David is particularly motivated" * emailed patches from Andrew Morton <akpm@linux-foundation.org>: hugetlbfs: fix hugetlb page migration/fault race causing SIGBUS mm, vmscan: do not special-case slab reclaim when watermarks are boosted Revert "mm, thp: restore node-local hugepage allocations" Revert "Revert "mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask"" include/asm-generic/5level-fixup.h: fix variable 'p4d' set but not used seq_file: fix problem when seeking mid-record mm: workingset: fix vmstat counters for shadow nodes mm/usercopy: use memory range to be accessed for wraparound check mm: kmemleak: disable early logging in case of error mm/vmalloc.c: fix percpu free VM area search criteria mm/memcontrol.c: fix use after free in mem_cgroup_iter() mm/z3fold.c: fix z3fold_destroy_pool() race condition mm/z3fold.c: fix z3fold_destroy_pool() ordering mm: mempolicy: handle vma with unmovable pages mapped correctly in mbind mm: mempolicy: make the behavior consistent when MPOL_MF_MOVE* and MPOL_MF_STRICT were specified mm/hmm: fix bad subpage pointer in try_to_unmap_one mm/hmm: fix ZONE_DEVICE anon page mapping reuse mm: document zone device struct page field usage
2019-08-14  staging: fsl-dpaa2/ethsw: do not force user to bring interface down  (Ioana Ciornei)
Link settings can be changed only when the interface is down. Disable and re-enable the interface, if necessary, behind the scenes so that users are not forced to do an ifdown/ifup sequence themselves. Reported-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-11-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: fsl-dpaa2/ethsw: register_netdev only when ready  (Ioana Ciornei)
The register_netdev() call should be made only when ready to process any user request on the interface. Move the call to be the last one issued in the probe sequence. Reported-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-10-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: fsl-dpaa2/ethsw: reword error message  (Ioana Ciornei)
In the current state, the dpaa2-ethsw driver supports only one bridge per DPSW object. Reword the error message so that this information is much more clear. Suggested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-9-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: fsl-dpaa2/ethsw: remove redundant VLAN check  (Ioana Ciornei)
The ethsw_add_vlan() function is already called only when the VLAN is not yet configured on the switch. Remove the redundant check. Reported-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-8-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: fsl-dpaa2/ethsw: remove unnecessary memset  (Ioana Ciornei)
The ethtool core already zeroes the memory before calling .get_ethtool_stats() thus making the memset unnecessary. Remove it. Reported-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-7-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: fsl-dpaa2/ethsw: use bool when encoding learning/flooding state  (Ioana Ciornei)
Use a bool instead of a u8 in ethsw_set_learning() and ethsw_port_set_flood() to encode a binary property. Reported-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-6-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: fsl-dpaa2/ethsw: remove debug message  (Ioana Ciornei)
Since ethtool will be loud enough if the .set_link_ksettings() callback fails, remove the debug messages which do not add additional information. Reported-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-5-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: fsl-dpaa2/ethsw: add line terminator to all formats  (Ioana Ciornei)
Add the '\n' line terminator to the string formats missing it. Reported-by: Joe Perches <joe@perches.com> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-4-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: fsl-dpaa2/ethsw: enable switch ports only on dev_open  (Ioana Ciornei)
At probe time, only the DPSW object should be enabled, not the associated ports; the ports get enabled on dev_open. Remove the ethsw_open() and ethsw_stop() functions and replace them with plain dpsw_enable()/dpsw_disable() calls. Reported-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-3-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: fsl-dpaa2/ethsw: remove IGMP default address  (Ioana Ciornei)
Do not add an IGMP multicast address by default since we do not support Rx/Tx at the moment. Reported-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Link: https://lore.kernel.org/r/1565700187-16048-2-git-send-email-ioana.ciornei@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  i2c: stm32: Use the correct style for SPDX License Identifier  (Nishad Kamdar)
This patch corrects the SPDX License Identifier style in the header file related to the STM32 driver for I2C hardware bus support. For C header files, Documentation/process/license-rules.rst mandates C-like comments (as opposed to C source files, where C++ style should be used). The changes were made using a script provided by Joe Perches here: https://lkml.org/lkml/2019/2/7/46 Suggested-by: Joe Perches <joe@perches.com> Signed-off-by: Nishad Kamdar <nishadkamdar@gmail.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2019-08-14  i2c: emev2: avoid race when unregistering slave client  (Wolfram Sang)
After we disabled interrupts, there might still be an active one running. Sync before clearing the pointer to the slave device. Fixes: c31d0a00021d ("i2c: emev2: add slave support") Reported-by: Krzysztof Adamski <krzysztof.adamski@nokia.com> Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Krzysztof Adamski <krzysztof.adamski@nokia.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2019-08-14  i2c: rcar: avoid race when unregistering slave client  (Wolfram Sang)
After we disabled interrupts, there might still be an active one running. Sync before clearing the pointer to the slave device. Fixes: de20d1857dd6 ("i2c: rcar: add slave support") Reported-by: Krzysztof Adamski <krzysztof.adamski@nokia.com> Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Krzysztof Adamski <krzysztof.adamski@nokia.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
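Both slave-unregister fixes above follow the same pattern; a minimal sketch with a hypothetical driver-private structure (the real drivers also disable their slave interrupt sources in hardware before this point):
```
#include <linux/i2c.h>
#include <linux/interrupt.h>

/* Hypothetical driver-private data, for illustration only. */
struct demo_i2c_priv {
	struct i2c_client *slave;
	int irq;
};

static void demo_i2c_unreg_slave(struct demo_i2c_priv *priv)
{
	/* slave interrupts are already disabled in hardware here ... */

	/* ... but a handler may still be running: wait for it to finish */
	synchronize_irq(priv->irq);

	/* only now is it safe to drop the pointer the handler dereferences */
	priv->slave = NULL;
}
```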
2019-08-14  staging: rtl8723bs: remove redundant assignment to ret  (Colin Ian King)
Variable ret is initialized to a value that is never read and it is re-assigned later. The initialization is redundant and can be removed. Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Link: https://lore.kernel.org/r/20190813124838.1317-1-colin.king@canonical.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  Staging: rtl8712: rtl8712_recv: fixed 80 character length warning  (Merwin Trever Ferrao)
When the checkpatch.pl script was run, it showed lines with length more than 80 characters in rtl8712_recv.c file. Fixed by breaking it up into two lines within 80 characters. Signed-off-by: Merwin Trever Ferrao <merwintf@gmail.com> Link: https://lore.kernel.org/r/20190813065806.GA23606@IoT-COE Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: rtl8712: r8712_dump_aggr_xframe(): Change type to void  (Nishka Dasgupta)
Change return type of r8712_dump_aggr_xframe from u8 to void as it always returns _SUCCESS and its return value is never used. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Link: https://lore.kernel.org/r/20190813044638.16348-4-nishkadg.linux@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: rtl8712: r8712_xmitframe_aggr_1st(): Change return type to void  (Nishka Dasgupta)
Change return type of r8712_xmitframe_aggr_1st from u8 to void as it always returns _SUCCESS and its return value is never used. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Link: https://lore.kernel.org/r/20190813044638.16348-3-nishkadg.linux@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: rtl8712: r8712_append_mpdu_unit(): Change return type  (Nishka Dasgupta)
Change return type of r8712_append_mpdu_unit from u8 to void and remove its return statement as it always returns only _SUCCESS. Modify call sites to simply call this function instead of checking its return value, and execute all the statements in the if-block for when the function returns _SUCCESS. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Link: https://lore.kernel.org/r/20190813044638.16348-2-nishkadg.linux@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: rtl8712: xmitframe_addmic(): Change return values and type  (Nishka Dasgupta)
Change return values of xmitframe_addmic from _SUCCESS and _FAIL to 0 and -ENOMEM respectively. Modify call sites to check for non-zero values instead of _FAIL. Also change return type from sint to int. Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com> Link: https://lore.kernel.org/r/20190813044638.16348-1-nishkadg.linux@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: rtl8723bs: Remove debugging information exposed via procfs  (Kai-Heng Feng)
The procfs interface provides a lot of useful information for debugging, but it may be too much for normal usage; routines like proc_get_sec_info() report various security-related information. So let's remove it. Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Link: https://lore.kernel.org/r/20190813042426.13733-1-kai.heng.feng@canonical.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  staging: erofs: xattr.c: avoid BUG_ON  (Gao Xiang)
Kill all the remaining BUG_ON in EROFS: - one BUG_ON was used to detect xattr on-disk corruption, proper error handling should be added for it instead; - the other BUG_ONs are used to detect potential issues, use DBG_BUGON only in (eng) debugging version. Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Link: https://lore.kernel.org/r/20190813023054.73126-3-gaoxiang25@huawei.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
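The corruption case roughly translates into the following pattern; the condition, error code and helper name are illustrative assumptions based on the description above (EROFS_BLKSIZ and DBG_BUGON come from the erofs internal headers), not the literal diff:
```
#include <linux/compiler.h>
#include <linux/errno.h>

/*
 * Before: BUG_ON(offset >= EROFS_BLKSIZ); -- a corrupted image crashes
 * the kernel.  After: report the corruption and fail the request,
 * keeping the assertion only in debugging builds via DBG_BUGON.
 */
static int check_xattr_offset(unsigned int offset)
{
	if (unlikely(offset >= EROFS_BLKSIZ)) {
		DBG_BUGON(1);		/* compiled out unless debugging */
		return -EIO;		/* treat as on-disk corruption */
	}
	return 0;
}
```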
2019-08-14  staging: erofs: remove incomplete cleancache  (Gao Xiang)
cleancache was not fully implemented in EROFS. In addition, there is a related attempt to remove cleancache from the kernel entirely [1]. [1] https://lore.kernel.org/linux-fsdevel/20190527103207.13287-3-jgross@suse.com/ Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Link: https://lore.kernel.org/r/20190813023054.73126-2-gaoxiang25@huawei.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  MAINTAINERS: i2c-imx: take over maintainership  (Oleksij Rempel)
I would like to maintain the i2c-imx driver. Since I work with different i.MX variants and have access to the hardware, I can spend some time on the reviewing of this driver. Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2019-08-14Revert "i2c: imx: improve the error handling in i2c_imx_dma_request()"Fabio Estevam
Since commit e1ab9a468e3b ("i2c: imx: improve the error handling in i2c_imx_dma_request()") when booting with the DMA driver as module (such as CONFIG_FSL_EDMA=m) the following endless clk warnings are seen: [ 153.077831] ------------[ cut here ]------------ [ 153.082528] WARNING: CPU: 0 PID: 15 at drivers/clk/clk.c:924 clk_core_disable_lock+0x18/0x24 [ 153.093077] i2c0 already disabled [ 153.096416] Modules linked in: [ 153.099521] CPU: 0 PID: 15 Comm: kworker/0:1 Tainted: G W 5.2.0+ #321 [ 153.107290] Hardware name: Freescale Vybrid VF5xx/VF6xx (Device Tree) [ 153.113772] Workqueue: events deferred_probe_work_func [ 153.118979] [<c0019560>] (unwind_backtrace) from [<c0014734>] (show_stack+0x10/0x14) [ 153.126778] [<c0014734>] (show_stack) from [<c083f8dc>] (dump_stack+0x9c/0xd4) [ 153.134051] [<c083f8dc>] (dump_stack) from [<c0031154>] (__warn+0xf8/0x124) [ 153.141056] [<c0031154>] (__warn) from [<c0031248>] (warn_slowpath_fmt+0x38/0x48) [ 153.148580] [<c0031248>] (warn_slowpath_fmt) from [<c040fde0>] (clk_core_disable_lock+0x18/0x24) [ 153.157413] [<c040fde0>] (clk_core_disable_lock) from [<c058f520>] (i2c_imx_probe+0x554/0x6ec) [ 153.166076] [<c058f520>] (i2c_imx_probe) from [<c04b9178>] (platform_drv_probe+0x48/0x98) [ 153.174297] [<c04b9178>] (platform_drv_probe) from [<c04b7298>] (really_probe+0x1d8/0x2c0) [ 153.182605] [<c04b7298>] (really_probe) from [<c04b7554>] (driver_probe_device+0x5c/0x174) [ 153.190909] [<c04b7554>] (driver_probe_device) from [<c04b58c8>] (bus_for_each_drv+0x44/0x8c) [ 153.199480] [<c04b58c8>] (bus_for_each_drv) from [<c04b746c>] (__device_attach+0xa0/0x108) [ 153.207782] [<c04b746c>] (__device_attach) from [<c04b65a4>] (bus_probe_device+0x88/0x90) [ 153.215999] [<c04b65a4>] (bus_probe_device) from [<c04b6a04>] (deferred_probe_work_func+0x60/0x90) [ 153.225003] [<c04b6a04>] (deferred_probe_work_func) from [<c004f190>] (process_one_work+0x204/0x634) [ 153.234178] [<c004f190>] (process_one_work) from [<c004f618>] (worker_thread+0x20/0x484) [ 153.242315] [<c004f618>] (worker_thread) from [<c0055c2c>] (kthread+0x118/0x150) [ 153.249758] [<c0055c2c>] (kthread) from [<c00090b4>] (ret_from_fork+0x14/0x20) [ 153.257006] Exception stack(0xdde43fb0 to 0xdde43ff8) [ 153.262095] 3fa0: 00000000 00000000 00000000 00000000 [ 153.270306] 3fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 [ 153.278520] 3fe0: 00000000 00000000 00000000 00000000 00000013 00000000 [ 153.285159] irq event stamp: 3323022 [ 153.288787] hardirqs last enabled at (3323021): [<c0861c4c>] _raw_spin_unlock_irq+0x24/0x2c [ 153.297261] hardirqs last disabled at (3323022): [<c040d7a0>] clk_enable_lock+0x10/0x124 [ 153.305392] softirqs last enabled at (3322092): [<c000a504>] __do_softirq+0x344/0x540 [ 153.313352] softirqs last disabled at (3322081): [<c00385c0>] irq_exit+0x10c/0x128 [ 153.320946] ---[ end trace a506731ccd9bd703 ]--- This endless clk warnings behaviour is well explained by Andrey Smirnov: "Allocating DMA after registering I2C adapter can lead to infinite probing loop, for example, consider the following scenario: 1. i2c_imx_probe() is called and successfully registers an I2C adapter via i2c_add_numbered_adapter() 2. As a part of i2c_add_numbered_adapter() new I2C slave devices are added from DT which results in a call to driver_deferred_probe_trigger() 3. i2c_imx_probe() continues and calls i2c_imx_dma_request() which due to lack of proper DMA driver returns -EPROBE_DEFER 4. 
i2c_imx_probe() fails, removes I2C adapter and returns -EPROBE_DEFER, which places it into deferred probe list 5. Deferred probe work triggered in #2 above kicks in and calls i2c_imx_probe() again thus bringing us to step #1" So revert commit e1ab9a468e3b ("i2c: imx: improve the error handling in i2c_imx_dma_request()") and restore the old behaviour, in order to avoid regressions on existing setups. Cc: <stable@vger.kernel.org> Reported-by: Andrey Smirnov <andrew.smirnov@gmail.com> Reported-by: Russell King <linux@armlinux.org.uk> Fixes: e1ab9a468e3b ("i2c: imx: improve the error handling in i2c_imx_dma_request()") Signed-off-by: Fabio Estevam <festevam@gmail.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
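The restored behaviour can be sketched as treating DMA as optional at probe time, falling back to PIO instead of deferring after the adapter has already been registered (the names below are simplified placeholders, not the i2c-imx code):
```
#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/err.h>

/* Sketch only: request a DMA channel but never fail the probe over it. */
static void demo_i2c_dma_request(struct device *dev, struct dma_chan **chan)
{
	*chan = dma_request_chan(dev, "tx");
	if (IS_ERR(*chan)) {
		dev_dbg(dev, "no DMA channel, falling back to PIO\n");
		*chan = NULL;	/* the PIO path is used when this stays NULL */
	}
}
```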
2019-08-14  staging: erofs: inline erofs_inode_is_data_compressed()  (Gao Xiang)
As a helper in erofs_fs.h, erofs_inode_is_data_compressed() should be inlined. Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Link: https://lore.kernel.org/r/20190813023054.73126-1-gaoxiang25@huawei.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  iio:light:noa1305: Fix missing break statement.  (Jonathan Cameron)
This was caught by the implicit fall-through detection, but it is an actual bug rather than a missing fall-through annotation. Reported-by: 0-DAY kernel test infrastructure Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Fixes: 741172d18e8a ("iio: light: noa1305: Add support for NOA1305") Link: https://lore.kernel.org/r/20190813133851.14345-1-Jonathan.Cameron@huawei.com Cc: Gustavo Silva <gustavo@embeddedor.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-14  ALSA: hda - Add a generic reboot_notify  (Hui Wang)
Making the codec enter D3 before reboot or poweroff can fix the noise issue on some laptops. And since it is in theory harmless for all codecs to enter D3 before reboot or poweroff, let us add a generic reboot_notify that the realtek and conexant drivers can call. Cc: stable@vger.kernel.org Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
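A sketch of such a generic helper, based on the exported HD-audio verbs; the helper name is hypothetical and the body the patch actually adds may differ (for example by also syncing the power state of all widgets):
```
#include <linux/delay.h>
#include <sound/hda_codec.h>

/* Sketch: put the audio function group into D3 so the speaker does not
 * pop or hiss across a reboot or poweroff. */
static void demo_reboot_notify(struct hda_codec *codec)
{
	snd_hda_codec_write(codec, codec->core.afg, 0,
			    AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
	msleep(10);	/* give the hardware a moment to settle */
}
```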
2019-08-14  ALSA: hda - Let all conexant codec enter D3 when rebooting  (Hui Wang)
We have 3 new Lenovo laptops which have the conexant codec 0x14f11f86. These 3 laptops also have the noise issue when rebooting; after letting the codec enter D3 before reboot or poweroff, the noise disappears. Instead of adding a new ID again in reboot_notify(), let us make this function apply to all conexant codecs. In theory, making the codec enter D3 before reboot or poweroff is harmless, and I tested this change on a couple of other Lenovo laptops with different conexant codecs; there are no side effects so far. Cc: stable@vger.kernel.org Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-08-13  riscv: defconfig: Update the defconfig  (Alistair Francis)
Update the defconfig: - Add CONFIG_HW_RANDOM=y and CONFIG_HW_RANDOM_VIRTIO=y to enable VirtIORNG when running on QEMU Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
2019-08-13  riscv: rv32_defconfig: Update the defconfig  (Alistair Francis)
Update the rv32_defconfig: - Add 'CONFIG_DEVTMPFS_MOUNT=y' to match the RISC-V defconfig - Add CONFIG_HW_RANDOM=y and CONFIG_HW_RANDOM_VIRTIO=y to enable VirtIORNG when running on QEMU Signed-off-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
2019-08-13  hugetlbfs: fix hugetlb page migration/fault race causing SIGBUS  (Mike Kravetz)
Li Wang discovered that LTP/move_page12 V2 sometimes triggers SIGBUS in the kernel-v5.2.3 testing. This is caused by a race between hugetlb page migration and page fault. If a hugetlb page can not be allocated to satisfy a page fault, the task is sent SIGBUS. This is normal hugetlbfs behavior. A hugetlb fault mutex exists to prevent two tasks from trying to instantiate the same page. This protects against the situation where there is only one hugetlb page, and both tasks would try to allocate. Without the mutex, one would fail and SIGBUS even though the other fault would be successful. There is a similar race between hugetlb page migration and fault. Migration code will allocate a page for the target of the migration. It will then unmap the original page from all page tables. It does this unmap by first clearing the pte and then writing a migration entry. The page table lock is held for the duration of this clear and write operation. However, the beginnings of the hugetlb page fault code optimistically checks the pte without taking the page table lock. If clear (as it can be during the migration unmap operation), a hugetlb page allocation is attempted to satisfy the fault. Note that the page which will eventually satisfy this fault was already allocated by the migration code. However, the allocation within the fault path could fail which would result in the task incorrectly being sent SIGBUS. Ideally, we could take the hugetlb fault mutex in the migration code when modifying the page tables. However, locks must be taken in the order of hugetlb fault mutex, page lock, page table lock. This would require significant rework of the migration code. Instead, the issue is addressed in the hugetlb fault code. After failing to allocate a huge page, take the page table lock and check for huge_pte_none before returning an error. This is the same check that must be made further in the code even if page allocation is successful. Link: http://lkml.kernel.org/r/20190808000533.7701-1-mike.kravetz@oracle.com Fixes: 290408d4a250 ("hugetlb: hugepage migration core") Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reported-by: Li Wang <liwang@redhat.com> Tested-by: Li Wang <liwang@redhat.com> Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Cyril Hrubis <chrubis@suse.cz> Cc: Xishi Qiu <xishi.qiuxishi@alibaba-inc.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
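A condensed sketch of the extra check described in the last paragraph; the helper name is hypothetical and the real code sits inline in the hugetlb fault path:
```
#include <linux/hugetlb.h>
#include <linux/mm.h>

/*
 * Sketch: after alloc_huge_page() fails, only report SIGBUS if the PTE
 * is really still empty under the page table lock; a racing migration
 * may already have installed the page, in which case the fault can
 * simply be retried.
 */
static vm_fault_t alloc_failed_check(struct hstate *h, struct mm_struct *mm,
				     pte_t *ptep)
{
	spinlock_t *ptl = huge_pte_lock(h, mm, ptep);
	vm_fault_t ret = VM_FAULT_SIGBUS;

	if (!huge_pte_none(huge_ptep_get(ptep)))
		ret = 0;	/* migration won the race; retry the fault */

	spin_unlock(ptl);
	return ret;
}
```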
2019-08-13  mm, vmscan: do not special-case slab reclaim when watermarks are boosted  (Mel Gorman)
Dave Chinner reported a problem pointing a finger at commit 1c30844d2dfe ("mm: reclaim small amounts of memory when an external fragmentation event occurs"). The report is extensive: https://lore.kernel.org/linux-mm/20190807091858.2857-1-david@fromorbit.com/ and it's worth recording the most relevant parts (colorful language and typos included). When running a simple, steady state 4kB file creation test to simulate extracting tarballs larger than memory full of small files into the filesystem, I noticed that once memory fills up the cache balance goes to hell. The workload is creating one dirty cached inode for every dirty page, both of which should require a single IO each to clean and reclaim, and creation of inodes is throttled by the rate at which dirty writeback runs at (via balance dirty pages). Hence the ingest rate of new cached inodes and page cache pages is identical and steady. As a result, memory reclaim should quickly find a steady balance between page cache and inode caches. The moment memory fills, the page cache is reclaimed at a much faster rate than the inode cache, and evidence suggests that the inode cache shrinker is not being called when large batches of pages are being reclaimed. In roughly the same time period that it takes to fill memory with 50% pages and 50% slab caches, memory reclaim reduces the page cache down to just dirty pages and slab caches fill the entirety of memory. The LRU is largely full of dirty pages, and we're getting spikes of random writeback from memory reclaim so it's all going to shit. Behaviour never recovers, the page cache remains pinned at just dirty pages, and nothing I could tune would make any difference. vfs_cache_pressure makes no difference - I would set it so high it should trim the entire inode caches in a single pass, yet it didn't do anything. It was clear from tracing and live telemetry that the shrinkers were pretty much not running except when there was absolutely no memory free at all, and then they did the minimum necessary to free memory to make progress. So I went looking at the code, trying to find places where pages got reclaimed and the shrinkers weren't called. There's only one - kswapd doing boosted reclaim as per commit 1c30844d2dfe ("mm: reclaim small amounts of memory when an external fragmentation event occurs"). The watermark boosting introduced by the commit is triggered in response to an allocation "fragmentation event". The boosting was not intended to target THP specifically and triggers even if THP is disabled. However, with Dave's perfectly reasonable workload, fragmentation events can be very common given the ratio of slab to page cache allocations so boosting remains active for long periods of time. As high-order allocations might use compaction and compaction cannot move slab pages the decision was made in the commit to special-case kswapd when watermarks are boosted -- kswapd avoids reclaiming slab as reclaiming slab does not directly help compaction. As Dave notes, this decision means that slab can be artificially protected for long periods of time and messes up the balance with slab and page caches. Removing the special casing can still indirectly help avoid fragmentation by avoiding fragmentation-causing events due to slab allocation as pages from a slab pageblock will have some slab objects freed. Furthermore, with the special casing, reclaim behaviour is unpredictable as kswapd sometimes examines slab and sometimes does not in a manner that is tricky to tune or analyse. 
This patch removes the special casing. The downside is that this is not a universal performance win. Some benchmarks that depend on the residency of data when rereading metadata may see a regression when slab reclaim is restored to its original behaviour. Similarly, some benchmarks that only read-once or write-once may perform better when page reclaim is too aggressive. The primary upside is that slab shrinker is less surprising (arguably more sane but that's a matter of opinion), behaves consistently regardless of the fragmentation state of the system and properly obeys VM sysctls. A fsmark benchmark configuration was constructed similar to what Dave reported and is codified by the mmtest configuration config-io-fsmark-small-file-stream. It was evaluated on a 1-socket machine to avoid dealing with NUMA-related issues and the timing of reclaim. The storage was an SSD Samsung Evo and a fresh trimmed XFS filesystem was used for the test data. This is not an exact replication of Dave's setup. The configuration scales its parameters depending on the memory size of the SUT to behave similarly across machines. The parameters mean the first sample reported by fs_mark is using 50% of RAM which will barely be throttled and look like a big outlier. Dave used fake NUMA to have multiple kswapd instances which I didn't replicate. Finally, the number of iterations differ from Dave's test as the target disk was not large enough. While not identical, it should be representative. fsmark 5.3.0-rc3 5.3.0-rc3 vanilla shrinker-v1r1 Min 1-files/sec 4444.80 ( 0.00%) 4765.60 ( 7.22%) 1st-qrtle 1-files/sec 5005.10 ( 0.00%) 5091.70 ( 1.73%) 2nd-qrtle 1-files/sec 4917.80 ( 0.00%) 4855.60 ( -1.26%) 3rd-qrtle 1-files/sec 4667.40 ( 0.00%) 4831.20 ( 3.51%) Max-1 1-files/sec 11421.50 ( 0.00%) 9999.30 ( -12.45%) Max-5 1-files/sec 11421.50 ( 0.00%) 9999.30 ( -12.45%) Max-10 1-files/sec 11421.50 ( 0.00%) 9999.30 ( -12.45%) Max-90 1-files/sec 4649.60 ( 0.00%) 4780.70 ( 2.82%) Max-95 1-files/sec 4491.00 ( 0.00%) 4768.20 ( 6.17%) Max-99 1-files/sec 4491.00 ( 0.00%) 4768.20 ( 6.17%) Max 1-files/sec 11421.50 ( 0.00%) 9999.30 ( -12.45%) Hmean 1-files/sec 5004.75 ( 0.00%) 5075.96 ( 1.42%) Stddev 1-files/sec 1778.70 ( 0.00%) 1369.66 ( 23.00%) CoeffVar 1-files/sec 33.70 ( 0.00%) 26.05 ( 22.71%) BHmean-99 1-files/sec 5053.72 ( 0.00%) 5101.52 ( 0.95%) BHmean-95 1-files/sec 5053.72 ( 0.00%) 5101.52 ( 0.95%) BHmean-90 1-files/sec 5107.05 ( 0.00%) 5131.41 ( 0.48%) BHmean-75 1-files/sec 5208.45 ( 0.00%) 5206.68 ( -0.03%) BHmean-50 1-files/sec 5405.53 ( 0.00%) 5381.62 ( -0.44%) BHmean-25 1-files/sec 6179.75 ( 0.00%) 6095.14 ( -1.37%) 5.3.0-rc3 5.3.0-rc3 vanillashrinker-v1r1 Duration User 501.82 497.29 Duration System 4401.44 4424.08 Duration Elapsed 8124.76 8358.05 This is showing a slight skew for the max result representing a large outlier for the 1st, 2nd and 3rd quartile are similar indicating that the bulk of the results show little difference. Note that an earlier version of the fsmark configuration showed a regression but that included more samples taken while memory was still filling. Note that the elapsed time is higher. Part of this is that the configuration included time to delete all the test files when the test completes -- the test automation handles the possibility of testing fsmark with multiple thread counts. Without the patch, many of these objects would be memory resident which is part of what the patch is addressing. There are other important observations that justify the patch. 1. 
With the vanilla kernel, the number of dirty pages in the system is very low for much of the test. With this patch, dirty pages is generally kept at 10% which matches vm.dirty_background_ratio which is normal expected historical behaviour. 2. With the vanilla kernel, the ratio of Slab/Pagecache is close to 0.95 for much of the test i.e. Slab is being left alone and dominating memory consumption. With the patch applied, the ratio varies between 0.35 and 0.45 with the bulk of the measured ratios roughly half way between those values. This is a different balance to what Dave reported but it was at least consistent. 3. Slabs are scanned throughout the entire test with the patch applied. The vanille kernel has periods with no scan activity and then relatively massive spikes. 4. Without the patch, kswapd scan rates are very variable. With the patch, the scan rates remain quite steady. 4. Overall vmstats are closer to normal expectations 5.3.0-rc3 5.3.0-rc3 vanilla shrinker-v1r1 Ops Direct pages scanned 99388.00 328410.00 Ops Kswapd pages scanned 45382917.00 33451026.00 Ops Kswapd pages reclaimed 30869570.00 25239655.00 Ops Direct pages reclaimed 74131.00 5830.00 Ops Kswapd efficiency % 68.02 75.45 Ops Kswapd velocity 5585.75 4002.25 Ops Page reclaim immediate 1179721.00 430927.00 Ops Slabs scanned 62367361.00 73581394.00 Ops Direct inode steals 2103.00 1002.00 Ops Kswapd inode steals 570180.00 5183206.00 o Vanilla kernel is hitting direct reclaim more frequently, not very much in absolute terms but the fact the patch reduces it is interesting o "Page reclaim immediate" in the vanilla kernel indicates dirty pages are being encountered at the tail of the LRU. This is generally bad and means in this case that the LRU is not long enough for dirty pages to be cleaned by the background flush in time. This is much reduced by the patch. o With the patch, kswapd is reclaiming 10 times more slab pages than with the vanilla kernel. This is indicative of the watermark boosting over-protecting slab A more complete set of tests were run that were part of the basis for introducing boosting and while there are some differences, they are well within tolerances. Bottom line, the special casing kswapd to avoid slab behaviour is unpredictable and can lead to abnormal results for normal workloads. This patch restores the expected behaviour that slab and page cache is balanced consistently for a workload with a steady allocation ratio of slab/pagecache pages. It also means that if there are workloads that favour the preservation of slab over pagecache that it can be tuned via vm.vfs_cache_pressure where as the vanilla kernel effectively ignores the parameter when boosting is active. Link: http://lkml.kernel.org/r/20190808182946.GM2739@techsingularity.net Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external fragmentation event occurs") Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Reviewed-by: Dave Chinner <dchinner@redhat.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Michal Hocko <mhocko@kernel.org> Cc: <stable@vger.kernel.org> [5.0+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-13Revert "mm, thp: restore node-local hugepage allocations"Andrea Arcangeli
This reverts commit 2f0799a0ffc033b ("mm, thp: restore node-local hugepage allocations"). commit 2f0799a0ffc033b was rightfully applied to avoid the risk of a severe regression that was reported by the kernel test robot at the end of the merge window. Now we understood the regression was a false positive and was caused by a significant increase in fairness during a swap trashing benchmark. So it's safe to re-apply the fix and continue improving the code from there. The benchmark that reported the regression is very useful, but it provides a meaningful result only when there is no significant alteration in fairness during the workload. The removal of __GFP_THISNODE increased fairness. __GFP_THISNODE cannot be used in the generic page faults path for new memory allocations under the MPOL_DEFAULT mempolicy, or the allocation behavior significantly deviates from what the MPOL_DEFAULT semantics are supposed to be for THP and 4k allocations alike. Setting THP defrag to "always" or using MADV_HUGEPAGE (with THP defrag set to "madvise") has never meant to provide an implicit MPOL_BIND on the "current" node the task is running on, causing swap storms and providing a much more aggressive behavior than even zone_reclaim_node = 3. Any workload who could have benefited from __GFP_THISNODE has now to enable zone_reclaim_mode=1||2||3. __GFP_THISNODE implicitly provided the zone_reclaim_mode behavior, but it only did so if THP was enabled: if THP was disabled, there would have been no chance to get any 4k page from the current node if the current node was full of pagecache, which further shows how this __GFP_THISNODE was misplaced in MADV_HUGEPAGE. MADV_HUGEPAGE has never been intended to provide any zone_reclaim_mode semantics, in fact the two are orthogonal, zone_reclaim_mode = 1|2|3 must work exactly the same with MADV_HUGEPAGE set or not. The performance characteristic of memory depends on the hardware details. The numbers below are obtained on Naples/EPYC architecture and the N/A projection extends them to show what we should aim for in the future as a good THP NUMA locality default. The benchmark used exercises random memory seeks (note: the cost of the page faults is not part of the measurement). D0 THP | D0 4k | D1 THP | D1 4k | D2 THP | D2 4k | D3 THP | D3 4k | ... 0% | +43% | +45% | +106% | +131% | +224% | N/A | N/A D0 means distance zero (i.e. local memory), D1 means distance one (i.e. intra socket memory), D2 means distance two (i.e. inter socket memory), etc... For the guest physical memory allocated by qemu and for guest mode kernel the performance characteristic of RAM is more complex and an ideal default could be: D0 THP | D1 THP | D0 4k | D2 THP | D1 4k | D3 THP | D2 4k | D3 4k | ... 0% | +58% | +101% | N/A | +222% | N/A | N/A | N/A NOTE: the N/A are projections and haven't been measured yet, the measurement in this case is done on a 1950x with only two NUMA nodes. The THP case here means THP was used both in the host and in the guest. After applying this commit the THP NUMA locality order that we'll get out of MADV_HUGEPAGE is this: D0 THP | D1 THP | D2 THP | D3 THP | ... | D0 4k | D1 4k | D2 4k | D3 4k | ... Before this commit it was: D0 THP | D0 4k | D1 4k | D2 4k | D3 4k | ... Even if we ignore the breakage of large workloads that can't fit in a single node that the __GFP_THISNODE implicit "current node" mbind caused, the THP NUMA locality order provided by __GFP_THISNODE was still not the one we shall aim for in the long term (i.e. the first one at the top). 
After this commit is applied, we can introduce a new allocator multi order API and to replace those two alloc_pages_vmas calls in the page fault path, with a single multi order call: unsigned int order = (1 << HPAGE_PMD_ORDER) | (1 << 0); page = alloc_pages_multi_order(..., &order); if (!page) goto out; if (!(order & (1 << 0))) { VM_WARN_ON(order != 1 << HPAGE_PMD_ORDER); /* THP fault */ } else { VM_WARN_ON(order != 1 << 0); /* 4k fallback */ } The page allocator logic has to be altered so that when it fails on any zone with order 9, it has to try again with a order 0 before falling back to the next zone in the zonelist. After that we need to do more measurements and evaluate if adding an opt-in feature for guest mode is worth it, to swap "DN 4k | DN+1 THP" with "DN+1 THP | DN 4k" at every NUMA distance crossing. Link: http://lkml.kernel.org/r/20190503223146.2312-3-aarcange@redhat.com Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@suse.de> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: David Rientjes <rientjes@google.com> Cc: Zi Yan <zi.yan@cs.rutgers.edu> Cc: Stefan Priebe - Profihost AG <s.priebe@profihost.ag> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-13Revert "Revert "mm, thp: consolidate THP gfp handling into ↵Andrea Arcangeli
alloc_hugepage_direct_gfpmask"" Patch series "reapply: relax __GFP_THISNODE for MADV_HUGEPAGE mappings". The fixes for what was originally reported as "pathological THP behavior" we rightfully reverted to be sure not to introduced regressions at end of a merge window after a severe regression report from the kernel bot. We can safely re-apply them now that we had time to analyze the problem. The mm process worked fine, because the good fixes were eventually committed upstream without excessive delay. The regression reported by the kernel bot however forced us to revert the good fixes to be sure not to introduce regressions and to give us the time to analyze the issue further. The silver lining is that this extra time allowed to think more at this issue and also plan for a future direction to improve things further in terms of THP NUMA locality. This patch (of 2): This reverts commit 356ff8a9a78fb35d ("Revert "mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask"). So it reapplies 89c83fb539f954 ("mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask"). Consolidation of the THP allocation flags at the same place was meant to be a clean up to easier handle otherwise scattered code which is imposing a maintenance burden. There were no real problems observed with the gfp mask consolidation but the reversion was rushed through without a larger consensus regardless. This patch brings the consolidation back because this should make the long term maintainability easier as well as it should allow future changes to be less error prone. [mhocko@kernel.org: changelog additions] Link: http://lkml.kernel.org/r/20190503223146.2312-2-aarcange@redhat.com Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: David Rientjes <rientjes@google.com> Cc: Zi Yan <zi.yan@cs.rutgers.edu> Cc: Stefan Priebe - Profihost AG <s.priebe@profihost.ag> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-13  include/asm-generic/5level-fixup.h: fix variable 'p4d' set but not used  (Qian Cai)
A compiler throws a warning on an arm64 system since commit 9849a5697d3d ("arch, mm: convert all architectures to use 5level-fixup.h"), mm/kasan/init.c: In function 'kasan_free_p4d': mm/kasan/init.c:344:9: warning: variable 'p4d' set but not used [-Wunused-but-set-variable] p4d_t *p4d; ^~~ because p4d_none() in "5level-fixup.h" is compiled away while it is a static inline function in "pgtable-nopud.h". However, if converted p4d_none() to a static inline there, powerpc would be unhappy as it reads those in assembler language in "arch/powerpc/include/asm/book3s/64/pgtable.h", so it needs to skip assembly include for the static inline C function. While at it, converted a few similar functions to be consistent with the ones in "pgtable-nopud.h". Link: http://lkml.kernel.org/r/20190806232917.881-1-cai@lca.pw Signed-off-by: Qian Cai <cai@lca.pw> Acked-by: Arnd Bergmann <arnd@arndb.de> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-13  seq_file: fix problem when seeking mid-record  (NeilBrown)
If you use lseek or similar (e.g. pread) to access a location in a seq_file file that is within a record, rather than at a record boundary, then the first read will return the remainder of the record, and the second read will return the whole of that same record (instead of the next record). When seeking to a record boundary, the next record is correctly returned. This bug was introduced by a recent patch (identified below). Before that patch, seq_read() would increment m->index when the last of the buffer was returned (m->count == 0). After that patch, we rely on ->next to increment m->index after filling the buffer - but there was one place where that didn't happen. Link: https://lkml.kernel.org/lkml/877e7xl029.fsf@notabene.neil.brown.name/ Fixes: 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code and interface") Signed-off-by: NeilBrown <neilb@suse.com> Reported-by: Sergei Turchanov <turchanov@farpost.com> Tested-by: Sergei Turchanov <turchanov@farpost.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Markus Elfring <Markus.Elfring@web.de> Cc: <stable@vger.kernel.org> [4.19+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
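A hedged sketch of the one place described above, the leftover-buffer flush near the top of seq_read(); the surrounding code is condensed (buf, size and copied are seq_read() locals) and the exact fix in the applied patch may differ slightly:
```
	/* Sketch: flushing the remainder of a partially-read record. */
	if (m->count) {				/* leftover from a previous read */
		size_t n = min(m->count, size);

		if (copy_to_user(buf, m->buf + m->from, n))
			return -EFAULT;
		m->count -= n;
		m->from += n;
		copied += n;
		if (!m->count) {
			/* record fully consumed: step to the next one, the
			 * increment that ->next would otherwise have done */
			m->from = 0;
			m->index++;
		}
	}
```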
2019-08-13  mm: workingset: fix vmstat counters for shadow nodes  (Roman Gushchin)
Memcg counters for shadow nodes are broken because the memcg pointer is obtained in the wrong way. The following approach is used: virt_to_page(xa_node)->mem_cgroup Since commit 4d96ba353075 ("mm: memcg/slab: stop setting page->mem_cgroup pointer for slab pages") the page->mem_cgroup pointer isn't set for slab pages, so memcg_from_slab_page() should be used instead. Also I doubt that it ever worked correctly: virt_to_head_page() should be used instead of virt_to_page(). Otherwise objects residing on tail pages are not accounted, because only the head page contains a valid mem_cgroup pointer. That has been the case since the introduction of these counters by commit 68d48e6a2df5 ("mm: workingset: add vmstat counter for shadow nodes"). Link: http://lkml.kernel.org/r/20190801233532.138743-1-guro@fb.com Fixes: 4d96ba353075 ("mm: memcg/slab: stop setting page->mem_cgroup pointer for slab pages") Signed-off-by: Roman Gushchin <guro@fb.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
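A sketch of the corrected lookup; memcg_from_slab_page() is an mm-internal helper (declared in mm/slab.h, not a public header) and the wrapper name here is hypothetical:
```
#include <linux/memcontrol.h>
#include <linux/mm.h>

/*
 * Sketch: resolve the memcg a shadow xa_node is charged to.  Use the
 * head page (the object may live on a tail page of a compound slab
 * page) and the slab-aware helper instead of reading page->mem_cgroup
 * directly.
 */
static struct mem_cgroup *shadow_node_memcg(void *xa_node)
{
	struct page *page = virt_to_head_page(xa_node);

	return memcg_from_slab_page(page);	/* from mm/slab.h */
}
```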