2016-02-18  cxgb4: Use __dev_uc_sync/__dev_mc_sync to sync MAC address (Hariprasad Shenai)
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  i40e/i40evf: Add exception handling for Tx checksum (Alexander Duyck)
Add exception handling to the Tx checksum path so that we can handle cases of TSO where the frame is bad, or of Tx checksum where we didn't recognize the protocol. Drop I40E_TX_FLAGS_CSUM as it is unused, and move the CHECKSUM_PARTIAL check into the function itself so that we can decrease the indentation. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-18  i40e/i40evf: Do not write to descriptor unless we complete (Alexander Duyck)
This patch defers writing to the Tx descriptor bits until we know we have successfully completed a given operation. So for example we defer updating the tunnelling portion of the context descriptor until we have fully identified the type. The advantage to this approach is that we can assemble values as we go instead of having to try and kludge everything together all at once. As a result we can significantly clean up the tunneling configuration for instance as we can just do a pointer walk and do the math for the distance between each set of points. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-18  geneve: Refine MTU limit (David Wragg)
Calculate the maximum MTU taking into account the size of the headers involved in GENEVE encapsulation, as is done for other tunnel types (a sketch of the calculation follows below).
Changes in v3:
- Correct comment style
Changes in v2:
- Conform more closely to ip_tunnel_change_mtu
- Exclude GENEVE options from max MTU calculation
Signed-off-by: David Wragg <david@weave.works> Acked-by: Jesse Gross <jesse@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
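A minimal sketch of the clamp, assuming names like GENEVE_BASE_HLEN (UDP plus base GENEVE header) and IP_MAX_MTU; the patch's actual helper and limits may differ:

    /* Cap the MTU by the GENEVE encapsulation overhead; options are
     * deliberately excluded from the calculation (see the v2 note). */
    int max_mtu = IP_MAX_MTU - GENEVE_BASE_HLEN - dev->hard_header_len;

    if (new_mtu < 68 || new_mtu > max_mtu)
            return -EINVAL;
    dev->mtu = new_mtu;
    return 0;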
2016-02-18  vxlan: tun_id is 64bit, not 32bit (Jiri Benc)
The tun_id field in struct ip_tunnel_key is __be64, not __be32. We need to convert the vni to tun_id correctly. Fixes: 54bfd872bf16 ("vxlan: keep flags and vni in network byte order") Reported-by: Paolo Abeni <pabeni@redhat.com> Tested-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Jiri Benc <jbenc@redhat.com> Acked-by: Thadeu Lima de Souza Cascardo <cascardo@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
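A sketch of the conversion, close in spirit to the vxlan_vni_to_tun_id() helper the VXLAN code uses; treat it as illustrative rather than a verbatim copy of the patch:

    /* Place the 24-bit VNI correctly inside the 64-bit tun_id: on
     * little endian the network-byte-order bits must be shifted into
     * the upper half of the u64 so the be64 value reads correctly. */
    static inline __be64 vxlan_vni_to_tun_id(__be32 vni)
    {
    #if defined(__BIG_ENDIAN)
            return (__force __be64)vni;
    #else
            return (__force __be64)((u64)(__force u32)vni << 32);
    #endif
    }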
2016-02-18  i40e/i40evf: Handle IPv6 extension headers in checksum offload (Alexander Duyck)
This patch adds support for IPv6 extension headers in setting up the Tx checksum. Without this patch extension headers would cause IPv6 traffic to fail as the transport protocol could not be identified. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
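A hedged sketch of the approach, using the kernel's ipv6_skip_exthdr() helper; the ip/l4 unions and frag_off local mirror the driver's Tx checksum code but are assumptions here:

    unsigned char *exthdr = ip.hdr + sizeof(struct ipv6hdr);
    __be16 frag_off;
    u8 l4_proto = ip.v6->nexthdr;

    /* Walk past any extension headers to the real transport proto. */
    if (l4.hdr != exthdr)
            ipv6_skip_exthdr(skb, exthdr - skb->data, &l4_proto, &frag_off);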
2016-02-18  i40e/i40evf: Add support for IPv4 encapsulated in IPv6 (Alexander Duyck)
This patch fixes two issues. First was the fact that ip_hdr(skb)->protocol was being used to test for the outer transport protocol, which completely breaks IPv6 support. Second was the fact that we cleared the flag for v4 going to v6, but we didn't take care of tx_flags going the other way. As such we would have the v6 flag still set even if the inner header was v4. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-18  i40e/i40evf: Replace header pointers with unions of pointers in Tx checksum path (Alexander Duyck)
The Tx checksum path was maintaining a set of 3 pointers and two lengths in order to prepare the packet for being checksummed. The thing is we only really needed 2 pointers, and the lengths that were being maintained can easily be computed. As such we can replace the IPv4 and IPv6 header pointers with one single union that represents both, or a generic pointer to the start of the network header. For the L4 headers we can do the same with TCP and a generic pointer to the start of the transport header. The length of the TCP header is obtained by simply multiplying doff by 4, and the network header length can be obtained by subtracting the network header pointer from the transport header pointer. While I was at it I renamed l4_hdr to l4_proto to make it a bit more clear and less likely to be confused with l4.hdr which is the transport header pointer. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
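A minimal sketch of the unions described above (the local names are illustrative):

    union {
            struct iphdr *v4;
            struct ipv6hdr *v6;
            unsigned char *hdr;
    } ip;
    union {
            struct tcphdr *tcp;
            unsigned char *hdr;
    } l4;
    u8 l4_len;
    int net_hlen;

    ip.hdr = skb_network_header(skb);
    l4.hdr = skb_transport_header(skb);

    /* Lengths are computed on demand, not stored: */
    l4_len   = l4.tcp->doff << 2;    /* TCP header length: doff * 4 */
    net_hlen = l4.hdr - ip.hdr;      /* network header length */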
2016-02-18  i40e/i40evf: Consolidate all header changes into TSO function (Alexander Duyck)
This patch goes through and pulls all of the spots where we were updating either the TCP or IP checksums in the TSO and checksum path into the TSO function. The general idea here is that we should only be updating the header after we verify we have completed a skb_cow_head check to verify the head is writable. One other advantage to doing this is that it makes things much more obvious. For example, in the case of IPv6 there was one spot where the offset of the IPv4 header checksum was being updated which is obviously incorrect. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-18  i40e/i40evf: Factor out L4 header and checksum from L3 bits in TSO path (Alexander Duyck)
This patch makes it so that the L4 header offsets and such can be ignored when dealing with the L3 checksum and length update. This is done making use of two things. First we can just use the offset from the L4 header to the start of the packet to determine the L4 offset, and from that we can then make use of the data offset to determine the full length of the headers. As far as adjusting the checksum to remove the length we can simply add the inverse of the length instead of having to recompute the entire pseudo-header without the length. In the case of an IPv6 header this should be significantly cheaper since we can make use of a value we already needed instead of having to read the source and destination address out of the packet. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
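A hedged sketch of the checksum adjustment, assuming a csum_replace_by_diff()-style helper and an l4_offset local; treat the names as illustrative:

    /* Remove the payload length from the TCP pseudo-header checksum by
     * folding in its arithmetic inverse, rather than recomputing the
     * whole pseudo-header (expensive for IPv6 addresses). */
    u32 paylen = skb->len - l4_offset;

    csum_replace_by_diff(&l4.tcp->check, (__force __wsum)htonl(paylen));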
2016-02-18  i40e/i40evf: Use u64 values instead of casting them in TSO function (Alexander Duyck)
Instead of casting u32 values to u64, it makes more sense to just start out with u64 values in the first place. This way we don't need to create a mess with all of the casts needed to populate a 64-bit value. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-18  i40e/i40evf: Drop outer checksum offload that was not requested (Alexander Duyck)
The i40e and i40evf drivers contained code for inserting an outer checksum on UDP tunnels. The issue, however, is that the upper levels of the stack never requested such an offload, and it results in possible errors. In addition, the same logic was being applied to the Rx side, where it was attempting to validate the outer checksum; the logic there was incorrect in that it was testing for the resultant sum to be equal to the header checksum instead of being equal to 0. Since this code is so massively flawed, and doing things that we didn't ask it to do, I am just dropping it, and will bring it back later to use as an offload for SKB_GSO_UDP_TUNNEL_CSUM, which can make use of such a feature. As for the Rx feature, I am dropping it completely since it would need to be massively expanded and applied to IPv4 and IPv6 checksums for all parts, not just the one that supports Tx checksum offload for the outer header. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-18  Merge branch 'netlink-mmap-remove' (David S. Miller)
Florian Westphal says:
====================
netlink: remove mmapped netlink support

As discussed during netconf 2016 in Seville, this series removes CONFIG_NETLINK_MMAP. Close to three years after it was merged, it has retained several problems that do not appear to be fixable. No official netfilter libmnl release contains support for mmap-backed netlink sockets. No openvswitch release makes use of it either.

To use the mmap interface, userspace not only has to probe for mmap netlink support, it also has to implement a recv/socket receive path in order to handle messages that exceed the size of an rx ring element (NL_MMAP_STATUS_COPY). So if there are odd programs out there that attempt to use mmap netlink, they should continue to work, as they already need a socket-based code path to work properly.

The actual revert (first patch) has a list of problems. The followup patches remove a couple of helpers that are no longer needed after the revert.

I did a few tests with the mmap vs. socket based interface on a 4.4 based kernel on an i7-4790 box and there are no performance advantages.

Loopback, single nfqueue, queueing in -t filter INPUT, traffic generated by 8 * ping -q -f localhost:

socket backend:
real    0m27.325s
user    0m3.993s
sys     0m23.292s

mmap ring backend:
real    0m29.054s
user    0m4.924s
sys     0m24.127s

With a single unidirectional TCP stream over loopback, MTU set at 1500 (nc localhost discard < /dev/zero > /dev/null), running time nfqdump -b $((8 * 1024 * 1024 * 1024)) -w /dev/null:

socket interface:
real    0m15.960s
user    0m1.756s
sys     0m11.143s

mmap ring:
real    0m16.441s
user    0m3.040s
sys     0m13.687s

The socket interface nfqdump[1] with the --gso option (i.e. MTU is exceeded, no kernel-side segmentation and checksum fixups) completes in about 5s.

I also tested dumping a conntrack table with 1m entries. On my box this takes about 2.4 seconds for both the mmap and socket backends:

time LD_PRELOAD=../../src/.libs/libmnl.so ./nfct-dump-sk > /dev/null
mnl_cb_run: Success
messages: 1000000
real    0m2.485s
user    0m1.085s
sys     0m1.400s

time LD_PRELOAD=../../src/.libs/libmnl.so ./nfct-dump-mmap > /dev/null
messages: 1000000
real    0m2.451s
user    0m1.124s
sys     0m1.328s

[1] https://git.breakpoint.cc/cgit/fw/nfqdump.git/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  nfnetlink: Revert "nfnetlink: add support for memory mapped netlink" (Florian Westphal)
This reverts commit 3ab1f683bf8b ("nfnetlink: add support for memory mapped netlink"). Like the other commits in the series, it removes wrappers that are no longer needed after the mmapped netlink removal. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  nfnetlink: remove nfnetlink_alloc_skb (Florian Westphal)
Following mmapped netlink removal this code can be simplified by removing the alloc wrapper. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18Revert "genl: Add genlmsg_new_unicast() for unicast message allocation"Florian Westphal
This reverts commit bb9b18fb55b0 ("genl: Add genlmsg_new_unicast() for unicast message allocation")'. Nothing wrong with it; its no longer needed since this was only for mmapped netlink support. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  openvswitch: Revert: "Enable memory mapped Netlink i/o" (Florian Westphal)
This reverts commit 795449d8b846 ("openvswitch: Enable memory mapped Netlink i/o"). Following the mmapped netlink removal, this code can be removed. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  netlink: remove mmapped netlink support (Florian Westphal)
mmapped netlink has a number of unresolved issues:

- TX zerocopy support had to be disabled more than a year ago via commit 4682a0358639b29cf ("netlink: Always copy on mmap TX.") because the content of the mmapped area can change after netlink attribute validation but before message processing.

- RX support was implemented mainly to speed up nfqueue dumping packet payload to userspace. However, since commit ae08ce0021087a5d812d2 ("netfilter: nfnetlink_queue: zero copy support") we avoid one copy with the socket-based interface too (via the skb_zerocopy helper).

The other problem is that skbs attached to mmapped netlink sockets behave differently from normal skbs:

- They don't have a shinfo area, so all functions that use skb_shinfo() (e.g. skb_clone) cannot be used.

- Reserving headroom prevents userspace from seeing the content, as it expects the message to start at skb->head. See for instance commit aa3a022094fa ("netlink: not trim skb for mmaped socket when dump").

- skbs handed e.g. to netlink_ack must have a non-NULL skb->sk, else we crash because it needs the sk to check if a tx ring is attached. This is also not obvious, and it leads to non-intuitive bug fixes such as 7c7bdf359 ("netfilter: nfnetlink: use original skbuff when acking batches").

mmapped netlink also didn't play nicely with the skb_zerocopy helper used by nfqueue and openvswitch. Daniel Borkmann fixed this via commit 6bb0fef489f6 ("netlink, mmap: fix edge-case leakages in nf queue zero-copy"), but at the cost of also needing to provide the remaining length to the allocation function.

nfqueue also has problems when used with mmapped rx netlink:

- mmapped netlink doesn't allow use of nfqueue batch verdict messages. The problem is that in the mmap case, the allocation time also determines the ordering in which the frame will be seen by userspace: A allocating before B means that A is located in an earlier ring slot, but B might still get a lower sequence number than A, since the seqno is decided later. To fix this we would need to extend the spinlocked region to also cover the allocation and message setup, which isn't desirable.

- nfqueue can now be configured to queue large (GSO) skbs to userspace. Queueing GSO packets is faster than having to force a software segmentation in the kernel, so this is a desirable option. However, with an mmap-based ring one has to use 64kb per ring slot element, else mmap has to fall back to the socket path (NL_MMAP_STATUS_COPY) for all large packets.

To use the mmap interface, userspace not only has to probe for mmap netlink support, it also has to implement a recv/socket receive path in order to handle messages that exceed the size of an rx ring element.

Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Ken-ichirou MATSUZAWA <chamaken@gmail.com> Cc: Pablo Neira Ayuso <pablo@netfilter.org> Cc: Patrick McHardy <kaber@trash.net> Cc: Thomas Graf <tgraf@suug.ch> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  tcp/dccp: fix another race at listener dismantle (Eric Dumazet)
Ilya reported the following lockdep splat:

kernel: =========================
kernel: [ BUG: held lock freed! ]
kernel: 4.5.0-rc1-ceph-00026-g5e0a311 #1 Not tainted
kernel: -------------------------
kernel: swapper/5/0 is freeing memory ffff880035c9d200-ffff880035c9dbff, with a lock still held there!
kernel: (&(&queue->rskq_lock)->rlock){+.-...}, at: [<ffffffff816f6a88>] inet_csk_reqsk_queue_add+0x28/0xa0
kernel: 4 locks held by swapper/5/0:
kernel: #0: (rcu_read_lock){......}, at: [<ffffffff8169ef6b>] netif_receive_skb_internal+0x4b/0x1f0
kernel: #1: (rcu_read_lock){......}, at: [<ffffffff816e977f>] ip_local_deliver_finish+0x3f/0x380
kernel: #2: (slock-AF_INET){+.-...}, at: [<ffffffff81685ffb>] sk_clone_lock+0x19b/0x440
kernel: #3: (&(&queue->rskq_lock)->rlock){+.-...}, at: [<ffffffff816f6a88>] inet_csk_reqsk_queue_add+0x28/0xa0

To properly fix this issue, inet_csk_reqsk_queue_add() needs to return to its callers whether the child has been queued into the accept queue. We also need to make sure the listener is still there before calling sk->sk_data_ready(), by holding a reference on it, since the reference carried by the child can disappear as soon as the child is put on the accept queue (a sketch follows below).

Reported-by: Ilya Dryomov <idryomov@gmail.com> Fixes: ebb516af60e1 ("tcp/dccp: fix race at listener dismantle phase") Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
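A minimal sketch of the shape of the fix, with illustrative locals; the real patch plumbs this through the tcp_v4_rcv()/dccp receive paths:

    /* Pin the listener so sk_data_ready() can't race with its release;
     * the child's own reference may vanish once it is on the queue. */
    sock_hold(listener);
    if (inet_csk_reqsk_queue_add(listener, req, child))
            listener->sk_data_ready(listener);  /* listener still valid */
    sock_put(listener);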
2016-02-18  route: check and remove route cache when we get route (Xin Long)
Since the gc of ipv4 routes was removed, cached routes have had no chance to be removed; even after one has timed out, it can still be used, because no code checks its expiry. Fix this issue by checking, and removing, an expired route cache entry when we get the route (see the sketch below). Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
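A hedged sketch of the idea against the ipv4 next-hop exception (fnhe) cache; the names follow net/ipv4/route.c, but the exact placement and helpers in the patch may differ:

    /* When handing out a cached exception route, first drop it if its
     * expiry has already passed, forcing a fresh lookup instead. */
    if (fnhe->fnhe_expires &&
        time_after(jiffies, fnhe->fnhe_expires)) {
            ip_del_fnhe(nh, daddr);  /* unlink the stale entry */
            fnhe = NULL;
    }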
2016-02-18  net_sched: Improve readability of filter processing (Jamal Hadi Salim)
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  bridge: switchdev: Offload VLAN flags to hardware bridge (Ido Schimmel)
When VLANs are created / destroyed on a VLAN filtering bridge (MASTER flag set), the configuration is passed down to the hardware. However, when only the flags (e.g. PVID) are toggled, the configuration is done in the software bridge alone. While it is possible to pass these flags to hardware when invoked with the SELF flag set, this creates inconsistency with regard to the way the VLANs are initially configured. Pass the flags down to the hardware even when the VLAN already exists and only the flags are toggled. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  net_sched fix: reclassification needs to consider ether protocol changes (Jamal Hadi Salim)
Actions could change the ether protocol, in particular with ethernet-tunnelled data. Typically such actions, after peeling the outer header, will ask for the packet to be reclassified. We then need to restart the classification with the new proto header. Example setup used to catch this:

sudo tc qdisc add dev $ETH ingress
sudo $TC filter add dev $ETH parent ffff: pref 1 protocol 802.1Q \
    u32 match u32 0 0 flowid 1:1 \
    action vlan pop reclassify

A sketch of the resulting reclassify loop follows below.

Fixes: 3b3ae880266d ("net: sched: consolidate tc_classify{,_compat}") Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
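A hedged sketch of the tc_classify() reclassify loop after the fix; limit checks and error paths are omitted, and the structure is illustrative:

    reclassify:
        for (; tp; tp = rcu_dereference_bh(tp->next)) {
                int err;

                if (tp->protocol != protocol &&
                    tp->protocol != htons(ETH_P_ALL))
                        continue;
                err = tp->classify(skb, tp, res);
                if (err == TC_ACT_RECLASSIFY) {
                        /* an action (e.g. vlan pop) may have changed
                         * the ether proto; re-read it before restart */
                        protocol = tc_skb_protocol(skb);
                        goto reclassify;
                }
                if (err >= 0)
                        return err;
        }
        return TC_ACT_UNSPEC;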
2016-02-18  net: phy: Add SGMII support for Marvell 88E1510/1512/1514/1518 (Stefan Roese)
Add code to select SGMII-to-copper mode upon SGMII interface selection. Signed-off-by: Stefan Roese <sr@denx.de> Cc: Andrew Lunn <andrew@lunn.ch> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
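A hedged sketch of what such a mode selection typically looks like in drivers/net/phy/marvell.c; the register page, bit values, and macro names here are assumptions based on the 88E151x family, not a verbatim copy of the patch:

    if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
            int temp;

            /* General Control Register 1 lives on page 18 (assumed) */
            phy_write(phydev, MII_MARVELL_PHY_PAGE, 18);
            temp = phy_read(phydev, MII_88E1510_GEN_CTRL_REG_1);
            temp &= ~MII_88E1510_GEN_CTRL_REG_1_MODE_MASK;
            temp |= MII_88E1510_GEN_CTRL_REG_1_MODE_SGMII;
            /* a PHY reset is needed after changing the mode bits */
            temp |= MII_88E1510_GEN_CTRL_REG_1_RESET;
            phy_write(phydev, MII_88E1510_GEN_CTRL_REG_1, temp);
            phy_write(phydev, MII_MARVELL_PHY_PAGE, 0); /* back to page 0 */
    }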
2016-02-18  isdn: divamnt: use y2038-safe ktime_get_ts64() for trace data timestamps (Alison Schofield)
divamnt stores a start_time at module init and uses it to calculate elapsed time. The elapsed time, stored in secs and usecs, is part of the trace data the driver maintains for the DIVA Server ISDN cards. No change to the format of that time data is required. To avoid overflow on 32-bit systems, use ktime_get_ts64() to return the elapsed monotonic time since system boot (a short sketch follows below). This is a change from real to monotonic time. Since the driver only stores elapsed time, monotonic time is sufficient and more robust against real time clock changes. These new monotonic values can be more useful for debugging because they can be easily compared to other monotonic timestamps. Note that elapsed time values will now start at system boot time rather than module load time, so they will differ slightly from previously reported values. Remove the declaration and init of the previously unused time constants start_sec and start_usec. Signed-off-by: Alison Schofield <amsfield22@gmail.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
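A minimal sketch of the pattern, with hypothetical trace-field names:

    #include <linux/timekeeping.h>

    struct timespec64 tv;

    ktime_get_ts64(&tv);  /* monotonic time since boot, 64-bit tv_sec */
    trace->sec  = (u32)tv.tv_sec;              /* hypothetical field */
    trace->usec = tv.tv_nsec / NSEC_PER_USEC;  /* ns -> us */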
2016-02-18  Merge branch 'mlxsw-fixes' (David S. Miller)
Jiri Pirko says:
====================
mlxsw fixes

Another bulk of fixes from Ido.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  mlxsw: spectrum: Allow for PVID deletion (Ido Schimmel)
When PVID is toggled off on a port member in a VLAN filtering bridge or the PVID VLAN is deleted, make the port drop untagged packets. Reverse the operation when PVID is toggled back on. Set the PVID back to the default (1), when leaving the bridge so that untagged traffic will be directed to the CPU. Fixes: 56ade8fe3fe1 ("mlxsw: spectrum: Add initial support for Spectrum ASIC") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  mlxsw: reg: Add the Switch Port Acceptable Frame Types register (Ido Schimmel)
When VLAN filtering is enabled on a bridge and PVID is deleted from a bridge port, then untagged frames are not allowed to ingress into the bridge from this port. Add the Switch Port Acceptable Frame Types (SPAFT) register, which configures the frame admittance of the port. Fixes: 56ade8fe3fe1 ("mlxsw: spectrum: Add initial support for Spectrum ASIC") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue (David S. Miller)
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2016-02-17

This series contains updates to i40e/i40evf once again.

Mitch updates the use of a define instead of a magic number. Adds support for packet split receive on VFs, which is disabled by default. Expands on a code comment which was not verbose or really helpful. Fixes an issue where a reset failing to complete would not properly set the adapter state, which would cause a panic on rmmod; set the adapter state to DOWN to avoid the panic.

Jesse cleans up a "dump" facility in debugfs that never panned out to be useful.

Anjali adds a workaround for cases where we might have interrupts that got lost but write-back (WB) happened. Fixes an issue by falling back to enabling unicast, multicast and broadcast promiscuous mode when the driver must disable its use of "default port" (defport mode) due to internal incompatibility with Multiple Function per Port (MFP). Fixes an issue where queues should never be enabled/disabled in the interrupt handler.

Kiran cleans up the code which used a hard-coded base VEB SEID, since it was removed from the specification.

Shannon adds a few bits for better debug messages. Fixes an obscure corner case where it was possible to clear the NVM update wait flag when no update_done message was actually received.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  net-sysfs: remove unused fmt_long_hex (Colin Ian King)
Ever since commit 04ed3e741d0f133e02bed7fa5c98edba128f90e7 ("net: change netdev->features to u32") the format string fmt_long_hex has not been used, so we may as well remove it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-18  i2c: i801: Adding Intel Lewisburg support for iTCO (Alexandra Yates)
Starting from Intel Sunrisepoint (Skylake PCH), the iTCO watchdog resources have been moved to reside under the i801 SMBus host controller, whereas previously they were under the LPC device. This patch adds Intel Lewisburg SMBus support for the iTCO device, allowing the watchdog to be loaded dynamically when the hardware is present. Fixes: cdc5a3110e7c ("i2c: i801: add Intel Lewisburg device IDs") Reviewed-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Alexandra Yates <alexandra.yates@linux.intel.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Cc: stable@kernel.org
2016-02-18  ALSA: pcm: Fix rwsem deadlock for non-atomic PCM stream (Takashi Iwai)
A non-atomic PCM stream may take the snd_pcm_link_rwsem rw semaphore twice in the same code path, e.g. once in snd_pcm_action_nonatomic() and again in snd_pcm_stream_lock(). Usually this is OK, but when a write lock is issued between these two read locks, the problem happens: the write lock is blocked due to the first read lock, and the second read lock is also blocked by the write lock. This eventually deadlocks. The reason is the way rwsem manages waiters; they are queued FIFO, so even if the writer itself doesn't hold the lock yet, it blocks all the waiters (including readers) queued after it. As a workaround, in this patch, we replace the standard down_write() with a spinning loop (a sketch follows below). This is far from optimal, but it's good enough, as the spinning time is supposed to be relatively short for normal PCM operations, and the code paths requiring the write lock aren't called so often. Reported-by: Vinod Koul <vinod.koul@intel.com> Tested-by: Ramesh Babu <ramesh.babu@intel.com> Cc: <stable@vger.kernel.org> # v3.18+ Signed-off-by: Takashi Iwai <tiwai@suse.de>
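A minimal sketch of such a spinning write lock, assuming a helper named down_write_nonblock() (the name is illustrative):

    /* Spin on trylock instead of queueing behind readers in the rwsem
     * FIFO, so a pending writer cannot wedge the nested read locks. */
    #define down_write_nonblock(lock)                     \
            do {                                          \
                    while (!down_write_trylock(lock))     \
                            cond_resched();               \
            } while (0)

    /* usage: down_write_nonblock(&snd_pcm_link_rwsem); */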
2016-02-18  x86/mm: Fix vmalloc_fault() to handle large pages properly (Toshi Kani)
A kernel page fault oops with the callstack below was observed when a read syscall was made to a pmem device after a huge amount (>512GB) of vmalloc ranges was allocated by ioremap() on an x86_64 system:

BUG: unable to handle kernel paging request at ffff880840000ff8
IP: vmalloc_fault+0x1be/0x300
PGD c7f03a067 PUD 0
Oops: 0000 [#1] SM
Call Trace:
 __do_page_fault+0x285/0x3e0
 do_page_fault+0x2f/0x80
 ? put_prev_entity+0x35/0x7a0
 page_fault+0x28/0x30
 ? memcpy_erms+0x6/0x10
 ? schedule+0x35/0x80
 ? pmem_rw_bytes+0x6a/0x190 [nd_pmem]
 ? schedule_timeout+0x183/0x240
 btt_log_read+0x63/0x140 [nd_btt]
 :
 ? __symbol_put+0x60/0x60
 ? kernel_read+0x50/0x80
 SyS_finit_module+0xb9/0xf0
 entry_SYSCALL_64_fastpath+0x1a/0xa4

Since v4.1, ioremap() supports large page (pud/pmd) mappings in x86_64 and PAE. vmalloc_fault() however assumes that the vmalloc range is limited to pte mappings.

vmalloc faults do not normally happen in ioremap'd ranges since ioremap() sets up the kernel page tables, which are shared by user processes. pgd_ctor() sets the kernel's PGD entries to user's during fork(). When allocation of the vmalloc ranges crosses a 512GB boundary, ioremap() allocates a new pud table and updates the kernel PGD entry to point to it. If a user process's PGD entry does not have this update yet, a read/write syscall to the range will cause a vmalloc fault, which hits the Oops above, as it does not handle a large page properly.

The following changes are made to vmalloc_fault().

64-bit:
- No change for the PGD sync operation, as it handles large pages already.
- Add pud_huge() and pmd_huge() to the validation code to handle large pages.
- Change pud_page_vaddr() to pud_pfn() since an ioremap range is not directly mapped (while the if-statement still works with a bogus addr).
- Change pmd_page() to pmd_pfn() since an ioremap range is not backed by struct page (while the if-statement still works with a bogus addr).

32-bit:
- No change for the sync operation since the index3 PGD entry covers the entire vmalloc range, which is always valid. (A separate change to sync the PGD entry is necessary if this memory layout is changed regardless of the page size.)
- Add pmd_huge() to the validation code to handle large pages. This is for completeness, since vmalloc_fault() won't happen in ioremap'd ranges, as their PGD entry is always valid.

A sketch of the 64-bit validation follows below.

Reported-by: Henning Schild <henning.schild@siemens.com> Signed-off-by: Toshi Kani <toshi.kani@hpe.com> Acked-by: Borislav Petkov <bp@alien8.de> Cc: <stable@vger.kernel.org> # 4.1+ Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Luis R. Rodriguez <mcgrof@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Toshi Kani <toshi.kani@hp.com> Cc: linux-mm@kvack.org Cc: linux-nvdimm@lists.01.org Link: http://lkml.kernel.org/r/1455758214-24623-1-git-send-email-toshi.kani@hpe.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
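A hedged sketch of the pud-level validation described above (the pmd level is analogous); this is reconstructed from the changelog, not copied from the patch:

    pud = pud_offset(pgd, address);
    pud_ref = pud_offset(pgd_ref, address);
    if (pud_none(*pud_ref))
            return -1;

    /* Compare by pfn: an ioremap range isn't directly mapped and has
     * no struct page, so pud_page_vaddr()/pmd_page() are unsuitable. */
    if (pud_none(*pud) || pud_pfn(*pud) != pud_pfn(*pud_ref))
            BUG();

    if (pud_huge(*pud))   /* large page: the mapping is complete here */
            return 0;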
2016-02-17  i40e/i40evf: Bump i40e to 1.4.15 and i40evf to 1.4.11 (Catherine Sullivan)
Bump. Change-ID: Ie280dc67e37a1cf667c3469499a4fb90f4177b75 Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e: When in promisc mode apply promisc mode to Tx Traffic as well (Anjali Singhai Jain)
In MFP mode particularly when we were setting the PF VSI in limited promiscuous, the HW switch was still mirroring the outgoing packets from other VSIs (VF/VMdq) onto the PF VSI. With this new bit set, the mirroring doesn't happen any more and so we are in limited promiscuous on the PF VSI in MFP which is similar to defport. An API check is not required, since this bit is reserved for FW API version < 1.5 Also update copyright year in file headers. Change-ID: I9840cb95f11dde733d943cb03ce84f68b9611bc8 Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e: clean event descriptor before use (Shannon Nelson)
In one obscure corner case, it was possible to clear the NVM update wait flag when no update_done message was actually received. This patch cleans the event descriptor before use, and moves the opcode check to where it won't get done if there was no event to clean. Also update copyright year in file headers. Change-ID: I68bbc41965e93f4adf07cbe98b9dfd63d41509a4 Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40evf: set adapter state on reset failure (Mitch Williams)
If a reset fails to complete, the driver gets its affairs in order and awaits the cold solace of rmmod. Unfortunately, it was not properly setting the adapter state, which would cause a panic on rmmod, instead of the desired surcease. Set the adapter state to DOWN in this case, and avoid a panic. Change-ID: I6fdd9906da52e023f8dc744f7da44b5d95278ca9 Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e: better error reporting for nvmupdate (Shannon Nelson)
Make sure we return EBUSY while finishing up a reset, and add a few bits for better debug messages. Change-ID: I23f6c28a8d96d7aa171abcc265737cec7826c292 Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e: expand comment (Mitch Williams)
Explain why we cannot remove this code, even though it works differently than any of our other interrupt cause handling code. Change-ID: Ie66203bd037a466066036611c31d44f759ec5176 Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e: Do not disable queues in the Legacy/MSI Interrupt handler (Anjali Singhai Jain)
The queues should never be enabled/disabled in the interrupt handler, ICR0 interrupt enable should be the only thing that needs to be dynamically changed in the handler. This patch fixes that. Without this patch X722 platforms were seeing weird ping timings when in Legacy mode since it takes a whole lot of time for the HW/FW to re-enable queues. Change-ID: If065afc45d81c5a19d4a94a00cd5b8f61cefc40c Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e/i40evf: avoid atomics (Mitch Williams)
In the case where we have a page fully used by receive data, we need to release the page fully to the stack. Instead of calling get_page (which increments the page count) followed by free_page (which decrements the page count), just donate our reference to the stack. Although this donation is not tax deductible, it does allow us to avoid two very expensive atomic operations that reverse each other. Change-ID: If70739792d5748995fc175ec92ac2171ed4ad8fc Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
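A hedged sketch of the reference donation; the ring/buffer names are illustrative:

    /* The page is fully consumed: instead of get_page() (atomic inc)
     * followed later by __free_page() (atomic dec), donate our own
     * reference to the skb and forget the page in the rx buffer. */
    skb_add_rx_frag(skb, 0, rx_bi->page, 0, size, truesize);
    rx_bi->page = NULL;   /* the stack now owns the reference */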
2016-02-17  i40e: Removal of code which relies on BASE VEB SEID (Kiran Patil)
The fixed mapping of the SEID was removed from the specification, hence this patch removes code which was using the hard-coded base VEB SEID. Changed the FCoE code to use "hw->pf_id" to obtain the correct "idx", and verified it. Removed the defines for BASE VSI/VEB SEID and BASE_PF_SEID since they are no longer used. Change-ID: Id507cf4b1fae1c0145e3f08ae9ea5846ea5840de Signed-off-by: Kiran Patil <kiran.patil@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e: Fix PROMISC mode for Multi-function per port (MFP) devices (Anjali Singhai Jain)
This patch falls back to enabling unicast, multicast and broadcast promiscuous mode when the driver must disable its use of "default port" aka defport mode (which is normally used to provide a promiscuous mode), due to internal incompatibility with Multiple Function per Port (aka MFP). The situation that requires this patch is when Physical Function 0 is the device being used, and it can support SR-IOV when MFP is enabled, via the driver creating a VEB on an MFP enabled adapter. Change-ID: Ie90b00d0d58782a5dfcf2c3c9725a2eb90bd63d8 Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e: Add a SW workaround for lost interrupts (Anjali Singhai Jain)
This patch adds a workaround for cases where we might have interrupts that got lost but WB happened. If that happens without this patch we will see a tx_timeout. To work around it, this patch goes ahead and reschedules NAPI in that situation, if NAPI is not already scheduled. We also add a counter in ethtool to keep track of when we detect a case of tx_lost_interrupt. Note: napi_reschedule() can be safely called from process/service_task context and is done in other drivers as well without an issue. Change-ID: I00f98f1ce3774524d9421227652bef20fcbd0d20 Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
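A hedged sketch of the detection logic, assuming a hypothetical pending-work helper; napi_reschedule() itself is a no-op (returns false) if NAPI is already scheduled:

    /* Called from the watchdog/service task (process context). */
    if (tx_ring_has_pending_work(tx_ring) &&      /* hypothetical helper */
        napi_reschedule(&tx_ring->q_vector->napi)) {
            /* WB happened but the interrupt was lost; count it. */
            tx_ring->tx_stats.tx_lost_interrupt++;
    }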
2016-02-17  i40e: trivial: cleanup use of pf->hw (Jesse Brandeburg)
This patch makes use of a pointer called hw consistent in the i40e_remove function. Change-ID: Idacc7ff0a09a68289c57457a78618bf5497de077 Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40evf: support packet split receive (Mitch Williams)
Support packet split receive on VFs. This is off by default but can be enabled using ethtool private flags. Because we need to trigger a reset from outside of i40evf_main.c, create a new function to do so, and export it. Also update copyright year in file headers. Change-ID: I721aa5d70113d3d6d94102e5f31526f6fc57cbbb Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e: drop unused debugfs file "dump" (Jesse Brandeburg)
There was a completely unused file "dump" in debugfs that never panned out to be useful. Change-ID: I12bb9e37b5a83299725dda815a8746157baf6562 Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  i40e: get rid of magic number (Mitch Williams)
We have a define for this, use it. No functional change. Change-ID: Ic0e3ea4f562e46de63b2a8de07f291ccc10205fd Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-17  Merge branch 'vxlan-cleanups' (David S. Miller)
Jiri Benc says:
====================
vxlan: clean up rx path, consolidating extension handling

The rx path of VXLAN turned over time into a kind of spaghetti code. The rx processing is split between vxlan_udp_encap_recv and vxlan_rcv, but in an artificial way: vxlan_rcv is just called at the end of vxlan_udp_encap_recv, continuing the rx processing where vxlan_udp_encap_recv left it. There's no clear border between those two functions. It makes sense to combine them into one; this will actually be needed for VXLAN-GPE, where we'll need to skip part of the processing, which is hard to do with the current code. However, both functions are too long already. This patchset shortens them, consolidating the extension handling that is spread all around and moving it to separate functions. (Later patchsets will do more consolidation in other parts of the functions, with the final goal of merging vxlan_udp_encap_recv and vxlan_rcv.)

In the process of consolidating the extension handling, I needed to deal with the vni field in a generic way, as its lower 8 bits mean different things for different extensions. While cleaning up the code to strictly distinguish between "vni" and "vni field" (which contains the vni plus an additional byte), I also converted the code not to convert endianness back and forth.

The full picture can be seen at: https://github.com/jbenc/linux-vxlan/commits/master
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-17  vxlan: treat vni in metadata based tunnels consistently (Jiri Benc)
For metadata based tunnels, VNI is ignored when doing vxlan device lookups (because such tunnel receives all VNIs). However, this was not honored by vxlan_xmit_one when doing encapsulation bypass. Move the check for metadata based tunnel to the common place where it belongs. Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>