Age    Commit message    Author
2018-11-21mlxsw: spectrum_nve: Allow VxLAN learningIdo Schimmel
Up until now the driver returned an error when learning was enabled on a VxLAN device enslaved to an offloaded bridge. Previous patches added VxLAN learning support, so remove the check. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21mlxsw: spectrum_switchdev: Allow deletion of learned FDB entriesIdo Schimmel
Allow users to delete learned FDB entries from the bridge's FDB before enabling VxLAN learning. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21mlxsw: spectrum_switchdev: Process learned VxLAN FDB entriesIdo Schimmel
Start processing two new entry types in addition to current ones: * Learned unicast tunnel entry * Aged-out unicast tunnel entry In both cases the device reports on a new {MAC, FID, IP address} tuple that was learned / aged-out. Based on this notification, the driver instructs the device to add / delete the entry to / from its database. The driver also makes sure to notify the bridge and VxLAN drivers about the new entry. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21mlxsw: spectrum_nve: Add API to resolve learned IP addressesIdo Schimmel
FDB notifications for entries learned from an NVE tunnel contain the IP address of the remote VTEP. In the case of IPv4 underlay, the IP address is specified as-is. IPv6 addresses on the other hand, are specified as handles which then need to be used to query the actual address from the device. Only IPv4 underlay is currently supported, so we cannot receive notifications for IPv6 addresses and therefore an error is returned when one tries to resolve such an address. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21mlxsw: spectrum_fid: Allow FID lookup by its indexIdo Schimmel
When processing a notification about a new FDB entry learned from a VxLAN tunnel, the driver is provided with the FID index among other parameters. The driver potentially needs to update the bridge and VxLAN drivers about the new entry using a pointer to the VxLAN device and the corresponding VNI. These two parameters are stored in the FID, so add a new function that allows looking up a FID based on its index. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21mlxsw: spectrum_fid: Store ifindex of NVE device in FIDIdo Schimmel
The driver periodically polls for new FDB entries learned by the device. In the case of an FDB entry learned from a VxLAN tunnel, the notification includes the IP of the remote VTEP, the filtering identifier (FID) and the source MAC address of the overlay packet. Assuming learning is enabled in the VxLAN and bridge drivers, the driver needs to generate a notification and update them about the new FDB entry. Store the ifindex of the NVE device in the FID so that the driver will be able to update the VxLAN and bridge drivers using it. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21mlxsw: reg: Add definition of unicast tunnel record for SFN registerIdo Schimmel
Will be used to process learned FDB records from an NVE tunnel. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21bridge: Allow querying bridge port flagsIdo Schimmel
Allow querying bridge port flags so that drivers capable of performing VxLAN learning will update the bridge driver only if learning is enabled on its bridge port corresponding to the VxLAN device. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
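A driver doing VxLAN learning might use this query roughly as follows; a minimal sketch, assuming the exported helper is br_port_flag_is_set() and with report_fdb_to_bridge() as a hypothetical stand-in for the driver's own notification path.

    #include <linux/if_bridge.h>
    #include <linux/netdevice.h>

    /* Hypothetical driver-side notification path. */
    static void report_fdb_to_bridge(struct net_device *dev,
                                     const unsigned char *mac, __be32 vni);

    /* Only tell the bridge about a learned {MAC, VNI} if learning is
     * enabled on the bridge port corresponding to the VxLAN device.
     */
    static void maybe_report_learned_entry(struct net_device *vxlan_dev,
                                           const unsigned char *mac, __be32 vni)
    {
            if (!netif_is_bridge_port(vxlan_dev))
                    return;
            if (!br_port_flag_is_set(vxlan_dev, BR_LEARNING))
                    return; /* learning disabled on this bridge port */
            report_fdb_to_bridge(vxlan_dev, mac, vni);
    }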
2018-11-21vxlan: Allow changing ageing timeIdo Schimmel
In a similar fashion to the bridge device, allow changing the ageing time of the VxLAN device by scheduling its timer to fire if the ageing time changed. One use case is selftests where learning / ageing of VxLAN FDB entries is tested. The default ageing time is 5 minutes, which is too long for a simple selftest. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21vxlan: Add hardware FDB learningPetr Machata
In order to allow devices to signal learning events to VXLAN, introduce two new switchdev messages: SWITCHDEV_VXLAN_FDB_ADD_TO_BRIDGE and SWITCHDEV_VXLAN_FDB_DEL_TO_BRIDGE. Listen to these notifications in the vxlan driver. The FDB entries learned this way have an NTF_EXT_LEARNED flag, and only entries marked as such can be unlearned by the _DEL_ event. They are also immediately marked as offloaded. This is the same behavior that the bridge driver observes. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
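A driver that learned a remote {MAC, VNI, VTEP IP} in hardware might hand it to the vxlan driver roughly like this; a minimal sketch, assuming the notifier payload is struct switchdev_notifier_vxlan_fdb_info with the fields shown (the exact field layout is an assumption).

    #include <linux/etherdevice.h>
    #include <net/switchdev.h>
    #include <net/vxlan.h>

    /* Report an FDB entry learned in hardware so the vxlan driver installs
     * it with NTF_EXT_LEARNED and marks it offloaded.
     */
    static int report_hw_learned_fdb(struct net_device *vxlan_dev,
                                     const unsigned char *mac, __be32 vni,
                                     const union vxlan_addr *remote_ip)
    {
            struct switchdev_notifier_vxlan_fdb_info info = {};

            ether_addr_copy(info.eth_addr, mac);    /* assumed field names */
            info.vni = vni;
            info.remote_ip = *remote_ip;

            return call_switchdev_notifiers(SWITCHDEV_VXLAN_FDB_ADD_TO_BRIDGE,
                                            vxlan_dev, &info.info);
    }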
2018-11-21vxlan: Don't override user-added entries with ext-learned onesPetr Machata
When an external learning event collides with a user-added entry, the user-added entry shouldn't be taken over. Otherwise, on an unlearn event, the entry would be completely lost, even though the user added it by hand. Therefore, skip updating FDB flags and state in these cases. This is in accordance with the bridge behavior. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21vxlan: Mark user-added FDB entriesPetr Machata
The VXLAN driver needs to differentiate between FDB entries learned by the VXLAN driver, and those added by the user. The latter ones shouldn't be taken over by external learning events. This is in accordance with bridge behavior. Therefore, extend the flags bitfield to 16 bits and add a new private NTF flag to mark the user-added entries. This seems preferable to adding a dedicated boolean, because passing the flag, unlike passing e.g. a true, makes it clear what the meaning of the bit is. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
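A minimal sketch of what such a private flag could look like; the name and bit value below are assumptions, not the verbatim change.

    #include <linux/types.h>

    /* fdb_flags becomes u16; the UAPI NTF_* flags occupy the low byte, so a
     * driver-private marker can live above them. Name and bit are illustrative.
     */
    #define NTF_VXLAN_ADDED_BY_USER 0x100

    struct vxlan_fdb_sketch {
            u16 flags;      /* NTF_* flags plus private bits such as the above */
    };

    static inline bool fdb_added_by_user(const struct vxlan_fdb_sketch *f)
    {
            return f->flags & NTF_VXLAN_ADDED_BY_USER;
    }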
2018-11-21vxlan: vxlan_fdb_notify(): Make switchdev notification configurablePetr Machata
In a following patch, vxlan is extended to allow hardware FDB learning. For FDB entries learned this way, switchdev notifications should not be sent again, because the driver already knows about these entries. To that end, add an argument to vxlan_fdb_notify() to determine whether the switchdev notifications should be sent. Propagate the argument to all call sites transitively, eventually passing true in all root calls. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21vxlan: __vxlan_fdb_delete(): Drop unused argument vidPetr Machata
This argument is necessary for vxlan_fdb_delete(), the API of which is prescribed by ndo_fdb_del, but __vxlan_fdb_delete() doesn't need it. Therefore drop it. Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21net: faraday: ftmac100: remove netif_running(netdev) check before disabling interruptsVincent Chen
In the original ftmac100_interrupt(), the interrupts are only disabled when the condition "netif_running(netdev)" is true. However, this condition causes a kernel hang in the following case. When the user requests to disable the network device, the kernel clears the __LINK_STATE_START bit from dev->state and then calls the driver's ndo_stop function. Network device interrupts are not blocked during this process. If an interrupt occurs between clearing __LINK_STATE_START and stopping the network device, the kernel cannot disable the interrupts due to the "netif_running(netdev)" condition in the ISR. Hence, the kernel hangs due to the continuous interruption of the network device. To solve this problem, the interrupts of the network device should always be disabled in the ISR without being restricted by the "netif_running(netdev)" condition. [V2] Remove unnecessary curly braces. Signed-off-by: Vincent Chen <vincentc@andestech.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-22drm/ast: change resolution may cause screen blurredY.C. Chen
The value of pitches is not correct when calling mode_set. The issue has been observed so far on the following systems: - Debian8 with XFCE Desktop - Ubuntu with KDE Desktop - SUSE15 with KDE Desktop Signed-off-by: Y.C. Chen <yc_chen@aspeedtech.com> Cc: <stable@vger.kernel.org> Tested-by: Jean Delvare <jdelvare@suse.de> Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Dave Airlie <airlied@redhat.com>
2018-11-21net: lpc_eth: fix trivial comment typoAndrea Claudi
Fix comment typo rxfliterctrl -> rxfilterctrl Signed-off-by: Andrea Claudi <aclaudi@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21Merge branch 'smc-fixes'David S. Miller
Ursula Braun says: ==================== net/smc: fixes 2018-11-12 here is V4 of some net/smc fixes in different areas for the net tree. v1->v2: do not define 8-byte alignment for union smcd_cdc_cursor in patch 4/5 "net/smc: atomic SMCD cursor handling" v2->v3: stay with 8-byte alignment for union smcd_cdc_cursor in patch 4/5 "net/smc: atomic SMCD cursor handling", but get rid of __packed for struct smcd_cdc_msg v3->v4: get rid of another __packed for struct smc_cdc_msg in patch 4/5 "net/smc: atomic SMCD cursor handling" ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21net/smc: use after free fix in smc_wr_tx_put_slot()Ursula Braun
In smc_wr_tx_put_slot() field pend->idx is used after being cleared. That means always idx 0 is cleared in the wr_tx_mask. This results in a broken administration of available WR send payload buffers. Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21net/smc: atomic SMCD cursor handlingUrsula Braun
Running uperf tests with SMCD on LPARs results in corrupted cursors. SMCD cursors should be treated atomically to fix cursor corruption. Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21net/smc: add SMC-D shutdown signalHans Wippel
When an SMC-D link group is freed, a shutdown signal should be sent to the peer to indicate that the link group is invalid. This patch adds the shutdown signal to the SMC code. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21net/smc: use queue pair number when matching link groupKarsten Graul
When searching for an existing link group the queue pair number is also to be taken into consideration. When the SMC server sends a new number in a CLC packet (keeping all other values equal) then a new link group is to be created on the SMC client side. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21net/smc: abort CLC connection in smc_releaseHans Wippel
In case of a non-blocking SMC socket, the initial CLC handshake is performed over a blocking TCP connection in a worker. If the SMC socket is released, smc_release has to wait for the blocking CLC socket operations (e.g., kernel_connect) inside the worker. This patch aborts a CLC connection when the respective non-blocking SMC socket is released to avoid waiting on socket operations or timeouts. Signed-off-by: Hans Wippel <hwippel@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21Merge tag 'wireless-drivers-for-davem-2018-11-20' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-driversDavid S. Miller
Kalle Valo says: ==================== wireless-drivers fixes for 4.20 First set of fixes for 4.20, this time we have quite a few of them but all very small. ath9k * fix a locking regression found by a static checker wlcore * fix a crash which was a regression with wakeirq handling brcm80211 * yet another fix for 160 MHz channel handling mt76 * fix a longstanding build problem when CONFIG_LEDS_CLASS is disabled * don't use an uninitialised mutex iwlwifi * note that the iwlwifi merge tag (commit 4ec321c14693) seems to contain a wrong list of changes, so ignore it * fix ACPI data handling, a memory leak and other smaller fixes ath10k * fix a crash during suspend which was a recent regression ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21tcp: defer SACK compression after DupThreshEric Dumazet
Jean-Louis reported a TCP regression and bisected to recent SACK compression. After a loss episode (receiver not able to keep up and dropping packets because its backlog is full), linux TCP stack is sending a single SACK (DUPACK). Sender waits a full RTO timer before recovering losses. While RFC 6675 says in section 5, "Algorithm Details", (2) If DupAcks < DupThresh but IsLost (HighACK + 1) returns true -- indicating at least three segments have arrived above the current cumulative acknowledgment point, which is taken to indicate loss -- go to step (4). ... (4) Invoke fast retransmit and enter loss recovery as follows: there are old TCP stacks not implementing this strategy, and still counting the dupacks before starting fast retransmit. While these stacks probably perform poorly when receivers implement LRO/GRO, we should be a little more gentle to them. This patch makes sure we do not enable SACK compression unless 3 dupacks have been sent since last rcv_nxt update. Ideally we should even rearm the timer to send one or two more DUPACK if no more packets are coming, but that will be work aiming for linux-4.21. Many thanks to Jean-Louis for bisecting the issue, providing packet captures and testing this patch. Fixes: 5d9f4262b7ea ("tcp: add SACK compression") Reported-by: Jean-Louis Dupond <jean-louis@dupond.be> Tested-by: Jean-Louis Dupond <jean-louis@dupond.be> Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
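The gist of the gating logic, as a hedged sketch rather than the exact mainline change: count the DUPACKs sent since the last rcv_nxt advance and only start compressing SACKs once the classic DupThresh of 3 has been reached.

    #include <stdbool.h>

    #define TCP_FASTRETRANS_THRESH 3        /* classic DupThresh */

    struct dupack_state {
            unsigned int dup_ack_counter;   /* DUPACKs sent since last rcv_nxt update */
    };

    /* Returns true if this DUPACK may be deferred (compressed); false if it
     * must go out immediately so legacy receivers still see three DUPACKs.
     */
    static bool may_compress_sack(struct dupack_state *s)
    {
            if (s->dup_ack_counter < TCP_FASTRETRANS_THRESH) {
                    s->dup_ack_counter++;
                    return false;
            }
            return true;
    }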
2018-11-22tools: bpftool: fix potential NULL pointer dereference in do_loadJakub Kicinski
This patch fixes a possible null pointer dereference in do_load, detected by the semantic patch deref_null.cocci, with the following warning: ./tools/bpf/bpftool/prog.c:1021:23-25: ERROR: map_replace is NULL but dereferenced. The following code has potential null pointer references: 881 map_replace = reallocarray(map_replace, old_map_fds + 1, 882 sizeof(*map_replace)); 883 if (!map_replace) { 884 p_err("mem alloc failed"); 885 goto err_free_reuse_maps; 886 } ... 1019 err_free_reuse_maps: 1020 for (i = 0; i < old_map_fds; i++) 1021 close(map_replace[i].fd); 1022 free(map_replace); Fixes: 3ff5a4dc5d89 ("tools: bpftool: allow reuse of maps with bpftool prog load") Co-developed-by: Wen Yang <wen.yang99@zte.com.cn> Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
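One common way to avoid both the NULL dereference and the leak is to realloc into a temporary first, so the old pointer survives a failed allocation; a sketch of that pattern (the fix in the tree may differ in detail).

    #define _GNU_SOURCE
    #include <stdlib.h>

    /* Grow an array without clobbering the old pointer on failure, so the
     * error path can still walk and free the original entries.
     */
    static void *grow_array(void *old, size_t nmemb, size_t size)
    {
            void *tmp = reallocarray(old, nmemb, size);

            if (!tmp)
                    return NULL;    /* 'old' is still valid and owned by the caller */
            return tmp;
    }

In do_load terms, map_replace would only be overwritten once the new allocation is known to have succeeded, so the err_free_reuse_maps loop never walks a NULL pointer.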
2018-11-21Merge branch 'VLAN-tag-handling-cleanup'David S. Miller
Michał Mirosław says: ==================== VLAN tag handling cleanup This is a cleanup set after VLAN_TAG_PRESENT removal. The CFI bit handling is made similar to how other tag fields are used. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21mlx5: use skb_vlan_tag_get_prio()Michał Mirosław
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21benet: use skb_vlan_tag_get_prio()Michał Mirosław
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21net/hyperv: use skb_vlan_tag_*() helpersMichał Mirosław
Replace open-coded bitfield manipulation with skb_vlan_tag_*() helpers. This also enables correct passing of the VLAN CFI bit. Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-21net/vlan: introduce skb_vlan_tag_get_cfi() helperMichał Mirosław
Abstract CFI/DEI bit access consistently with other VLAN tag fields. Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
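A minimal sketch of such a helper, mirroring the existing skb_vlan_tag_get_prio()/skb_vlan_tag_get_id() style in include/linux/if_vlan.h; the mask name is assumed to be VLAN_CFI_MASK (bit 12 of the TCI).

    /* Assumed mask name; the CFI/DEI bit sits between the PCP and VID fields. */
    #define VLAN_CFI_MASK           0x1000

    #define skb_vlan_tag_get_cfi(__skb)     (!!((__skb)->vlan_tci & VLAN_CFI_MASK))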
2018-11-21net: skb_scrub_packet(): Scrub offload_fwd_markPetr Machata
When a packet is trapped and the corresponding SKB marked as already-forwarded, it retains this marking even after it is forwarded across veth links into another bridge. There, since it ingresses the bridge over veth, which doesn't have offload_fwd_mark, it triggers a warning in nbp_switchdev_frame_mark(). Then nbp_switchdev_allowed_egress() decides not to allow egress from this bridge through another veth, because the SKB is already marked, and the mark (of 0) of course matches. Thus the packet is incorrectly blocked. Solve by resetting offload_fwd_mark() in skb_scrub_packet(). That function is called from tunnels and also from veth, and thus catches the cases where traffic is forwarded between bridges and transformed in a way that invalidates the marking. Fixes: 6bc506b4fb06 ("bridge: switchdev: Add forward mark support for stacked devices") Fixes: abf4bb6b63d0 ("skbuff: Add the offload_mr_fwd_mark field") Signed-off-by: Petr Machata <petrm@mellanox.com> Suggested-by: Ido Schimmel <idosch@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
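A hedged sketch of the described fix: clear the forwarding marks alongside the other state that skb_scrub_packet() already resets when a packet crosses such a boundary.

    #include <linux/skbuff.h>

    /* The reset that skb_scrub_packet() would perform; factored out here
     * purely for illustration.
     */
    static inline void skb_reset_offload_fwd_marks(struct sk_buff *skb)
    {
    #ifdef CONFIG_NET_SWITCHDEV
            skb->offload_fwd_mark = 0;
            skb->offload_mr_fwd_mark = 0;
    #endif
    }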
2018-11-21Merge branch 'bpf-libbpf-mapinmap'Daniel Borkmann
Nikita V. Shirokov says: ==================== In this patch series I'm adding a helper for libbpf which allows it to load map-in-map maps (BPF_MAP_TYPE_ARRAY_OF_MAPS and BPF_MAP_TYPE_HASH_OF_MAPS). The first patch contains the new helper and explains the proposed workflow; the second patch contains tests which can also be used as example usage. v4->v5: - naming: renamed everything to map_in_map instead of mapinmap - start to return a nonzero val if set_inner_map_fd failed v3->v4: - renamed helper to set_inner_map_fd - now we set this value only if it hasn't been set before and only for (array|hash) of maps v2->v3: - fixing typo in patch description - initializing inner_map_fd to -1 by default v1->v2: - addressing nits - removing const identifier from fd in new helper - starting to check return val for bpf_map_update_elem ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-21bpf: adding tests for map_in_map helper in libbpfNikita V. Shirokov
adding test/example of bpf_map__set_inner_map_fd usage Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-21bpf: adding support for map in map in libbpfNikita V. Shirokov
The idea is pretty simple: for a specified map (pointed to by struct bpf_map) we provide a descriptor of an already loaded map, which is going to be used as a prototype for the inner map. Proposed workflow (a usage sketch follows below): 1) open the BPF object (bpf_object__open) 2) create the BPF map which is going to be used as a prototype 3) find (by name) the map-in-map which you want to load and update it with the descriptor of the inner map, using the new helper from this patch 4) load the BPF program with bpf_object__load Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
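A hedged usage sketch of the four-step workflow; error handling is trimmed, the object/map names and inner-map parameters are made up, and only bpf_map__set_inner_map_fd() is the new helper from this patch.

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    /* "outer_map" is a made-up name of a BPF_MAP_TYPE_HASH_OF_MAPS map
     * defined in prog.o. Real code should follow libbpf's error conventions.
     */
    static int load_with_map_in_map(void)
    {
            struct bpf_object *obj;
            struct bpf_map *outer;
            int inner_fd;

            obj = bpf_object__open("prog.o");                       /* 1) open */

            inner_fd = bpf_create_map(BPF_MAP_TYPE_ARRAY,           /* 2) prototype */
                                      sizeof(int), sizeof(long), 16, 0);

            outer = bpf_object__find_map_by_name(obj, "outer_map"); /* 3) find */
            if (!outer || bpf_map__set_inner_map_fd(outer, inner_fd))
                    return -1;

            return bpf_object__load(obj);                           /* 4) load */
    }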
2018-11-21bpf: libbpf: don't specify prog name if kernel doesn't support itStanislav Fomichev
Use recently added capability check. See commit 23499442c319 ("bpf: libbpf: retry map creation without the name") for rationale. Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-21bpf: libbpf: remove map name retry from bpf_create_map_xattrStanislav Fomichev
Instead, check for a newly created caps.name bpf_object capability. If kernel doesn't support names, don't specify the attribute. See commit 23499442c319 ("bpf: libbpf: retry map creation without the name") for rationale. Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-21bpf, libbpf: introduce bpf_object__probe_caps to test BPF capabilitiesStanislav Fomichev
It currently only checks whether kernel supports map/prog names. This capability check will be used in the next two commits to skip setting prog/map names. Suggested-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-21libbpf: make sure bpf headers are c++ include-ableStanislav Fomichev
Wrap the headers in extern "C" to turn off C++ name mangling. This simplifies including libbpf in C++ code and linking against it. v2 changes: * do the same for btf.h v3 changes: * test_libbpf.cpp to test for possible future C++ breakages Signed-off-by: Stanislav Fomichev <sdf@google.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
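The wrapping itself is the standard idiom:

    /* At the top of libbpf.h (and btf.h) */
    #ifdef __cplusplus
    extern "C" {
    #endif

    /* ... existing declarations ... */

    /* At the bottom of the header */
    #ifdef __cplusplus
    }
    #endif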
2018-11-21bpf: fix a libbpf loader issueYonghong Song
Commit 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections") added support to read .BTF.ext sections from an object file, create and pass prog_btf_fd and func_info to the kernel. The program btf_fd (prog->btf_fd) is initialized to be -1 to please zclose so we do not need special handling during prog close. Passing -1 to the kernel, however, will cause a loading error. Passing btf_fd 0 to the kernel if prog->btf_fd is invalid fixed the problem. Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections") Reported-by: Andrey Ignatov <rdna@fb.com> Reported-by: Emre Cantimur <haydum@fb.com> Tested-by: Andrey Ignatov <rdna@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-21Merge tag 'riscv-for-linus-4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linuxLinus Torvalds
Pull RISC-V fixes from Palmer Dabbelt: "This week is a bit bigger than I expected. That's my fault, as I missed a few patches while I was at Plumbers last week. We have: - A fix to a quite embarrassing issue where raw_copy_to_user() was implemented with asm_copy_from_user() (and vice versa). - Improvements to our makefile to allow flat binaries to be generated. - A build fix that predeclares "struct module" at the top of <asm/module.h>, which otherwise triggers warnings later in that header. - The addition of our own <uapi/asm/unistd> header, which is necessary to align our stat ABI on 32-bit systems. - A fix to avoid printing a warning when the S or U bits are set in print_isa(). I already have one patch in the queue for next week" * tag 'riscv-for-linus-4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux: RISC-V: recognize S/U mode bits in print_isa riscv: add asm/unistd.h UAPI header riscv: fix warning in arch/riscv/include/asm/module.h RISC-V: Build flat and compressed kernel images RISC-V: Fix raw_copy_{to,from}_user()
2018-11-21igc: Remove obsolete IGC_ERR defineSasha Neftin
Address community comment. Remove obsolete IGC_ERR define and use dev_err method. Suggested by Joe Perches. Signed-off-by: Sasha Neftin <sasha.neftin@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-21ixgbe: Replace synchronize_sched() with synchronize_rcu()Paul E. McKenney
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by: "Paul E. McKenney" <paulmck@linux.ibm.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-21ethernet/intel: consolidate NAPI and NAPI exitJesse Brandeburg
While reviewing code, I noticed that Eric Dumazet recommends that drivers check the return code of napi_complete_done, and use that to decide whether to enable interrupts when exiting poll. One of the Intel drivers was already fixed (ixgbe). Looking at the Intel drivers as a whole, we are handling our polling and NAPI exit in a few different ways based on whether we have multiqueue and whether we have Tx cleanup included. Several drivers had the bug of exiting NAPI with return 0, which appears to mess up the accounting in the stack. Consolidate all the NAPI routines to follow the best known way of exiting and to mostly look like each other: 1) check the return code of napi_complete_done to control interrupt enable 2) return the actual amount of work done 3) return budget immediately if NAPI needs to poll again Tested the changes on e1000e with a high interrupt rate set, and it shows about an 8% reduction in CPU utilization when busy polling because we aren't re-enabling interrupts when we're about to be polled. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
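The consolidated exit sequence, as a hedged sketch (the cleanup and interrupt-enable helpers are hypothetical; each driver has its own):

    #include <linux/netdevice.h>

    static int do_rx_tx_cleanup(struct napi_struct *napi, int budget);  /* hypothetical */
    static void example_enable_irqs(struct napi_struct *napi);          /* hypothetical */

    static int example_napi_poll(struct napi_struct *napi, int budget)
    {
            int work_done = do_rx_tx_cleanup(napi, budget);

            /* 3) used the whole budget: ask to be polled again */
            if (work_done >= budget)
                    return budget;

            /* 1) only re-enable interrupts if NAPI really completed */
            if (likely(napi_complete_done(napi, work_done)))
                    example_enable_irqs(napi);

            /* 2) report the actual amount of work done */
            return work_done;
    }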
2018-11-21docs-networking: fix typo in defineJesse Brandeburg
The #define for NETIF_F_GSO_UDP_L4 was incorrect in the documentation, fix it by making it match the actual code. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-21igb: Fix format with line continuation whitespaceJoe Perches
The line continuation unintentionally adds whitespace so instead use a coalesced format to remove the whitespace. Miscellanea: o Use a more typical style for ternaries and arguments for this logging message Signed-off-by: Joe Perches <joe@perches.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-21iomap: readpages doesn't zero page tail beyond EOFDave Chinner
When we read the EOF page of the file via readpages, we need to zero the region beyond EOF that we either do not read or that should not contain data, so that mmap does not expose stale data to user applications. However, iomap_adjust_read_range() fails to detect EOF correctly, and so fsx on 1k block size filesystems fails very quickly with mapreads exposing data beyond EOF. There are two problems here. Firstly, when calculating the end block of the EOF byte, we have to round the size down by one to avoid a block aligned EOF from reporting a block too large. i.e. a size of 1024 bytes is 1 block, which in index terms is block 0. Therefore we have to calculate the end block from (isize - 1), not isize. The second bug is determining if the current page spans EOF, and so whether we need to split it into two halves, one for the IO, and the other for zeroing. Unfortunately, the code that checks whether we should split the block doesn't actually check if we span EOF, it just checks if the read spans the /offset in the page/ that EOF sits on. So it splits every read into two if EOF is not page aligned, regardless of whether we are reading the EOF block or not. Hence we need to restrict the "does the read span EOF" check to just the page that spans EOF, not every page we read. This patch results in correct EOF detection through readpages: xfs_vm_readpages: dev 259:0 ino 0x43 nr_pages 24 xfs_iomap_found: dev 259:0 ino 0x43 size 0x66c00 offset 0x4f000 count 98304 type hole startoff 0x13c startblock 1368 blockcount 0x4 iomap_readpage_actor: orig pos 323584 pos 323584, length 4096, poff 0 plen 4096, isize 420864 xfs_iomap_found: dev 259:0 ino 0x43 size 0x66c00 offset 0x50000 count 94208 type hole startoff 0x140 startblock 1497 blockcount 0x5c iomap_readpage_actor: orig pos 327680 pos 327680, length 94208, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 331776 pos 331776, length 90112, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 335872 pos 335872, length 86016, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 339968 pos 339968, length 81920, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 344064 pos 344064, length 77824, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 348160 pos 348160, length 73728, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 352256 pos 352256, length 69632, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 356352 pos 356352, length 65536, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 360448 pos 360448, length 61440, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 364544 pos 364544, length 57344, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 368640 pos 368640, length 53248, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 372736 pos 372736, length 49152, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 376832 pos 376832, length 45056, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 380928 pos 380928, length 40960, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 385024 pos 385024, length 36864, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 389120 pos 389120, length 32768, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 393216 pos 393216, length 28672, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 397312 pos 397312, length 24576, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 401408 pos 401408, length 20480, poff 0 plen 4096, isize 420864
iomap_readpage_actor: orig pos 405504 pos 405504, length 16384, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 409600 pos 409600, length 12288, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 413696 pos 413696, length 8192, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 417792 pos 417792, length 4096, poff 0 plen 3072, isize 420864 iomap_readpage_actor: orig pos 420864 pos 420864, length 1024, poff 3072 plen 1024, isize 420864 As you can see, it now does full page reads until the last one which is split correctly at the block aligned EOF, reading 3072 bytes and zeroing the last 1024 bytes. The original version of the patch got this right, but it got another case wrong. The EOF crossing detection really needs to use the original length, as plen, while it starts at the end of the block, will be shortened as up-to-date blocks are found on the page. This means "orig_pos + plen" no longer points to the end of the page, and so will not correctly detect EOF crossing. Hence we have to use the length passed in to detect this partial page case: xfs_filemap_fault: dev 259:1 ino 0x43 write_fault 0 xfs_vm_readpage: dev 259:1 ino 0x43 nr_pages 1 xfs_iomap_found: dev 259:1 ino 0x43 size 0x2cc00 offset 0x2c000 count 4096 type hole startoff 0xb0 startblock 282 blockcount 0x4 iomap_readpage_actor: orig pos 180224 pos 181248, length 4096, poff 1024 plen 2048, isize 183296 xfs_iomap_found: dev 259:1 ino 0x43 size 0x2cc00 offset 0x2cc00 count 1024 type hole startoff 0xb3 startblock 285 blockcount 0x1 iomap_readpage_actor: orig pos 183296 pos 183296, length 1024, poff 3072 plen 1024, isize 183296 Here we see a trace where the first block on the EOF page is up to date, hence poff = 1024 bytes. The offset into the page of EOF is 3072, so the range we want to read is 1024 - 3071, and the range we want to zero is 3072 - 4095. You can see this is split correctly now. This fixes the stale data beyond EOF problem that fsx quickly uncovers on 1k block size filesystems. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
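The (isize - 1) rounding is easy to get wrong, so here it is spelled out as a hedged arithmetic sketch (names are illustrative, not the iomap variables):

    /* With 1k blocks (block_bits = 10), a 1024-byte file ends in block 0:
     *   1024 >> 10       == 1  (wrong: points one block past EOF)
     *   (1024 - 1) >> 10 == 0  (right: block 0 is the EOF block)
     */
    static inline unsigned long long eof_block(unsigned long long isize,
                                               unsigned int block_bits)
    {
            return (isize - 1) >> block_bits;
    }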
2018-11-21vfs: vfs_dedupe_file_range() doesn't return EOPNOTSUPPDave Chinner
It returns EINVAL when the operation is not supported by the filesystem. Fix it to return EOPNOTSUPP to be consistent with the man page and clone_file_range(). Clean up the inconsistent error return handling while I'm there. (I know, lipstick on a pig, but every little bit helps...) Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2018-11-21iomap: dio data corruption and spurious errors when pipes fillDave Chinner
When doing direct IO to a pipe for do_splice_direct(), the pipe is trivial to fill up and overflow as it can only hold 16 pages. At this point bio_iov_iter_get_pages() returns -EFAULT, and we abort the IO submission process. Unfortunately, iomap_dio_rw() propagates the error back up the stack. The error is converted from EFAULT to EAGAIN in generic_file_splice_read() to tell the splice layers that the pipe is full. do_splice_direct() completely fails to handle EAGAIN errors (it aborts on error) and returns EAGAIN to the caller. copy_file_write() then completely fails to handle EAGAIN as well, and so returns EAGAIN to userspace, having failed to copy the data it was asked to. Avoid this whole steaming pile of fail by having iomap_dio_rw() silently swallow EFAULT errors and so do short reads. To make matters worse, iomap_dio_actor() has a stale data exposure bug when bio_iov_iter_get_pages() fails - it does not zero the tail of the block that may have been left uncovered by the partial IO. Fix the error handling case to drop to the sub-block zeroing rather than immediately returning the -EFAULT error. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2018-11-21iomap: sub-block dio needs to zeroout beyond EOFDave Chinner
If we are doing sub-block dio that extends EOF, we need to zero the unused tail of the block to initialise the data in it. If we do not zero the tail of the block, then an immediate mmap read of the EOF block will expose stale data beyond EOF to userspace. Found with fsx running sub-block DIO sizes vs MAPREAD/MAPWRITE operations. Fix this by detecting if the end of the DIO write is beyond EOF and zeroing the tail if necessary. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
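Conceptually the check boils down to the following; a hedged sketch, not the literal iomap code:

    /* A write that ends beyond EOF and not on a block boundary leaves a tail
     * that must be zeroed, otherwise an mmap read of the EOF block can see
     * stale data. block_size is assumed to be a power of two.
     */
    static inline unsigned int dio_tail_to_zero(unsigned long long write_end,
                                                unsigned long long isize,
                                                unsigned int block_size)
    {
            unsigned int pad = write_end & (block_size - 1);

            if (pad && write_end > isize)
                    return block_size - pad;        /* bytes to zero after the write */
            return 0;
    }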