2014-09-01cxgb4: Avoid dumping Write-only registers in register dumpHariprasad Shenai
Avoid dumping MPS_RPLC_MAP_CTL for reg dumps; this is a Write-Only register. Reading this register may cause MPS TCAM corruption. Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01cxgb4: Detect and display firmware reported errorsHariprasad Shenai
The adapter firmware can indicate error conditions to the host. If the firmware has indicated an error, print out the reason for the firmware error. Based on original work by Casey Leedom <leedom@chelsio.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01cxgb4: Fix T5 adapter accessing T4 adapter registersHariprasad Shenai
Fixes a few register accesses for both T4 and T5. PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS and PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS are T4-only registers; don't let T5 access them. For T5, MA_PARITY_ERROR_STATUS2 is additionally read. MPS_TRC_RSS_CONTROL is a T4-only register; for T5, use MPS_T5_TRC_RSS_CONTROL instead. Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
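Illustratively, the fix boils down to keying these accesses off the chip generation. A minimal sketch, assuming the driver's usual is_t4() chip test and the register macros named above; the dump_reg() wrapper is hypothetical:

    /* Sketch: skip T4-only registers on T5 and pick the right
     * TRC_RSS_CONTROL variant per chip generation.
     */
    if (is_t4(adap->params.chip)) {
        dump_reg(adap, PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS);  /* T4 only */
        dump_reg(adap, PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS);  /* T4 only */
    } else {
        dump_reg(adap, MA_PARITY_ERROR_STATUS2);                /* extra read on T5 */
    }

    trc_reg = is_t4(adap->params.chip) ? MPS_TRC_RSS_CONTROL
                                       : MPS_T5_TRC_RSS_CONTROL;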
2014-09-01cxgb4: Fixed the code to use correct length for part numberHariprasad Shenai
Previously it was using the length value of the serial number. Also added a macro for the VPD unique identifier (0x82). Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01cxgb4: Fix for handling 1Gb/s SFP+ Transceiver ModulesHariprasad Shenai
We previously assumed that a Port's Capabilities and Advertised Capabilities would never change from Port Initialization time. This is no longer true when we can have 10Gb/s and 1Gb/s SFP+ Transceiver Modules randomly swapped. Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-02ALSA: hda - Fix COEF setups for ALC1150 codecTakashi Iwai
ALC1150 codec seems to need the COEF- and PLL-setups just like its compatible ALC882 codec. Some machines (e.g. SunMicro X10SAT) show the problem like too low output volumes unless the COEF setup is applied. Reported-and-tested-by: Dana Goyette <danagoyette@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2014-09-01stmmac: only remove RXCSUM feature if no rx coe is availableGiuseppe CAVALLARO
In case the HW is not able to do receive checksum offloading, the only feature to remove is NETIF_F_RXCSUM. Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01stmmac: fix the rx csum featureGiuseppe CAVALLARO
For new GMACs it is possible to turn the COE on/off. In the current driver, when the Rx checksum was disabled via ethtool, the tool reported that csum was disabled but the HW continued to set the IPC. This is because fix_features allows it. The patch fixes this problem by adding set_features. Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
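A sketch of the kind of ndo_set_features hook this implies; the priv fields and the helper that reprograms the IPC/COE bit are placeholders, not the driver's real names:

    /* Sketch: mirror the ethtool request into the hardware so that
     * disabling rx checksumming really turns the COE/IPC logic off.
     */
    static int stmmac_set_features(struct net_device *netdev,
                                   netdev_features_t features)
    {
        struct stmmac_priv *priv = netdev_priv(netdev);

        if (features & NETIF_F_RXCSUM)
            priv->rx_csum = priv->rx_coe;   /* placeholder fields */
        else
            priv->rx_csum = 0;

        stmmac_rx_ipc_enable(priv);         /* placeholder helper */
        return 0;
    }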
2014-09-01sock: deduplicate errqueue dequeueWillem de Bruijn
sk->sk_error_queue is dequeued in four locations. All share the exact same logic. Deduplicate. Also collapse the two critical sections for dequeue (at the top of the recv handler) and signal (at the bottom). This moves signal generation for the next packet forward, which should be harmless. It also changes the behavior if the recv handler exits early with an error. Previously, a signal for follow-up packets on the errqueue would then not be scheduled. The new behavior, to always signal, is arguably a bug fix. For rxrpc, the change causes the same function to be called repeatedly for each queued packet (because the recv handler == sk_error_report). It is likely that all packets will fail for the same reason (e.g., memory exhaustion). This code runs without sk_lock held, so it is not safe to trust that sk->sk_err is immutable in between releasing q->lock and the subsequent test. Introduce int err just to avoid this potential race. Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
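A minimal sketch of what the deduplicated dequeue helper could look like, assuming a helper named sock_dequeue_err_skb(); the name and exact placement are assumptions based on the description above:

    struct sk_buff *sock_dequeue_err_skb(struct sock *sk)
    {
        struct sk_buff_head *q = &sk->sk_error_queue;
        struct sk_buff *skb, *skb_next;
        unsigned long flags;
        int err = 0;

        /* One critical section for both the dequeue and peeking at the
         * errno of the next queued packet (used to signal the next error).
         */
        spin_lock_irqsave(&q->lock, flags);
        skb = __skb_dequeue(q);
        if (skb && (skb_next = skb_peek(q)) != NULL)
            err = SKB_EXT_ERR(skb_next)->ee.ee_errno;
        spin_unlock_irqrestore(&q->lock, flags);

        /* 'err' is a local copy: sk->sk_err may be changed by others
         * once q->lock has been released.
         */
        sk->sk_err = err;
        if (err)
            sk->sk_error_report(sk);

        return skb;
    }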
2014-09-01net-timestamp: expand documentationWillem de Bruijn
Expand Documentation/networking/timestamping.txt with new interfaces and bytestream timestamping. Also minor cleanup of the other text. Import txtimestamp.c test of the new features. Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01Merge branch 'csums-next'David S. Miller
Tom Herbert says:
====================
net: Checksum offload changes - Part VI

I am working on overhauling RX checksum offload. Goals of this effort are:
- Specify what exactly it means when a driver returns CHECKSUM_UNNECESSARY
- Preserve CHECKSUM_COMPLETE through encapsulation layers
- Don't do skb_checksum more than once per packet
- Unify GRO and non-GRO csum verification as much as possible
- Unify the checksum functions (checksum_init)
- Simplify code

What is in this seventh patch set:
- Add skb->csum_bad. This allows a device or GRO to indicate that an invalid checksum was detected.
- Checksum unnecessary to checksum complete conversions.

With these changes, I believe that the third goal of the overhaul is now mostly achieved. In the case of no encapsulation or one layer of encapsulation, there should only be at most one skb_checksum over each packet (between GRO and normal path). In the case of two layers of encapsulation, it is still possible with the right combination of non-zero and zero UDP checksums to have >1 skb_checksum. For instance: IP>GRE(with csum)>IP>UDP(zero csum)>VXLAN>IP>UDP(non-zero csum) would likely necessitate an skb_checksum in GRO and normal path. This doesn't seem like a common scenario at all, so I'm inclined to not address this now; if multiple layers of encapsulation becomes popular we can reassess.

Note that checksum conversion shows a nice improvement for RX VXLAN when the outer UDP checksum is enabled (12.65% CPU compared to 20.94%). This is not only from the fact that we don't need checksum calculation on the host, but also because it allows GRO for VXLAN in this case. Checksum conversion does not help the send side (which still needs to perform a checksum on the host). For that we will implement remote checksum offload in a later patch (http://tools.ietf.org/html/draft-herbert-remotecsumoffload-00).

Please review carefully and test if possible, mucking with basic checksum functions is always a little precarious :-)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01l2tp: Enable checksum unnecessary conversions for l2tp/UDP socketsTom Herbert
Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01vxlan: Enable checksum unnecessary conversions for vxlan/UDP socketsTom Herbert
Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01gre: Add support for checksum unnecessary conversionsTom Herbert
Call skb_checksum_try_convert and skb_gro_checksum_try_convert after the checksum is found present and validated in the GRE header, for the normal and GRO paths respectively. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01udp: Add support for doing checksum unnecessary conversionTom Herbert
Add support for doing CHECKSUM_UNNECESSARY to CHECKSUM_COMPLETE conversion in the UDP tunneling path. In the normal UDP path, we call skb_checksum_try_convert after locating the UDP socket. The check is that checksum conversion is enabled for the socket (a new flag in the UDP socket) and that the checksum field is non-zero. In the UDP GRO path, we call skb_gro_checksum_try_convert after the checksum is validated and the checksum field is non-zero. Since this is already in GRO we assume that checksum conversion is always wanted. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
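For illustration, the normal-path hook described above might look roughly like the fragment below; the convert_csum socket flag name is an assumption taken from the text, and uh points at the UDP header:

    /* Sketch: after the receiving UDP socket has been looked up, try the
     * CHECKSUM_UNNECESSARY -> CHECKSUM_COMPLETE conversion when the socket
     * opted in and the datagram carries a non-zero checksum.
     */
    if (udp_sk(sk)->convert_csum && uh->check && !IS_UDPLITE(sk))
        skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
                                 inet_compute_pseudo);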
2014-09-01net: Infrastructure for checksum unnecessary conversionsTom Herbert
For the normal path, added skb_checksum_try_convert, which is called to attempt to convert CHECKSUM_UNNECESSARY to CHECKSUM_COMPLETE. The primary condition to allow this is that ip_summed is CHECKSUM_NONE and csum_valid is true, which will be the state after consuming a CHECKSUM_UNNECESSARY. For the GRO path, added skb_gro_checksum_try_convert, which is the GRO analogue of skb_checksum_try_convert. The primary condition to allow this is that NAPI_GRO_CB(skb)->csum_cnt == 0 and NAPI_GRO_CB(skb)->csum_valid is set. This implies that we have consumed all available CHECKSUM_UNNECESSARY checksums in the GRO path. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
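A minimal sketch of the normal-path condition and conversion described above (the exact helper layout in skbuff.h may differ; the GRO variant additionally checks the NAPI_GRO_CB fields named in the text):

    /* Conversion is allowed once every CHECKSUM_UNNECESSARY level has been
     * consumed (ip_summed back to CHECKSUM_NONE) and earlier validation
     * recorded that the checksum was in fact correct.
     */
    static inline bool __skb_checksum_convert_check(struct sk_buff *skb)
    {
        return skb->ip_summed == CHECKSUM_NONE && skb->csum_valid;
    }

    static inline void __skb_checksum_convert(struct sk_buff *skb, __wsum pseudo)
    {
        /* A verified checksum means the full sum over the segment is the
         * complement of the pseudo-header sum, which is exactly what
         * CHECKSUM_COMPLETE needs to carry.
         */
        skb->csum = ~pseudo;
        skb->ip_summed = CHECKSUM_COMPLETE;
    }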
2014-09-01net: Support for csum_bad in skbuffTom Herbert
This flag indicates that an invalid checksum was detected in the packet. The __skb_mark_checksum_bad helper function was added to set this. Checksums can be marked bad from a driver or the GRO path (the latter is implemented in this patch). csum_bad is checked in __skb_checksum_validate_complete (i.e. calling that when ip_summed == CHECKSUM_NONE). csum_bad works in conjunction with the ip_summed value. In the case that ip_summed is CHECKSUM_NONE and csum_bad is set, this implies that the first (or next) checksum encountered in the packet is bad. When ip_summed is CHECKSUM_UNNECESSARY, the first checksum after the last one validated is bad. For example, if ip_summed == CHECKSUM_UNNECESSARY, csum_level == 1, and csum_bad is set, then the third checksum in the packet is bad. In the normal path, the packet will be dropped when processing the protocol layer of the bad checksum: __skb_decr_checksum_unnecessary is called twice for the good checksums, changing ip_summed to CHECKSUM_NONE, so that __skb_checksum_validate_complete is called to validate the third checksum; that will fail since csum_bad is set. Signed-off-by: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
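A rough sketch of the marking helper and the fast-fail check it enables; the csum_bad bitfield comes from the description above, while the second helper is purely hypothetical glue for illustration:

    /* Sketch: a driver (or GRO) records that the next checksum to be
     * validated in this packet is already known to be wrong.
     */
    static inline void __skb_mark_checksum_bad(struct sk_buff *skb)
    {
        skb->csum_bad = 1;
    }

    /* Hypothetical helper: once ip_summed has decayed to CHECKSUM_NONE for
     * the checksum in question, full validation can fail immediately
     * instead of recomputing anything.
     */
    static inline bool skb_csum_known_bad(const struct sk_buff *skb)
    {
        return skb->ip_summed == CHECKSUM_NONE && skb->csum_bad;
    }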
2014-09-01r8152: rename rx_buf_szhayeswang
The variable "rx_buf_sz" is used by both tx and rx buffers. Replace it with "agg_buf_sz". Signed-off-by: Hayes Wang <hayeswang@realtek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01net: phy: mdio-bcm-unimac: NULL-terminate unimac_mdio_idsFlorian Fainelli
drivers/net/phy/mdio-bcm-unimac.c:195:37-38: unimac_mdio_ids is not NULL terminated at line 195. Make sure of_device_id tables are NULL terminated. Generated by: scripts/coccinelle/misc/of_table.cocci Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
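For reference, the fix amounts to keeping an empty sentinel entry at the end of the table; the compatible strings below are illustrative only:

    static const struct of_device_id unimac_mdio_ids[] = {
        { .compatible = "brcm,genet-mdio-v4", },   /* illustrative */
        { .compatible = "brcm,unimac-mdio", },     /* illustrative */
        { /* sentinel: stops of_match_node() from walking off the end */ },
    };
    MODULE_DEVICE_TABLE(of, unimac_mdio_ids);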
2014-09-01net: dsa: make dsa_pack_type staticFlorian Fainelli
net/dsa/dsa.c:624:20: sparse: symbol 'dsa_pack_type' was not declared. Should it be static? Fixes: 3e8a72d1dae374 ("net: dsa: reduce number of protocol hooks") Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01Merge branch 'amd-xgbe-net'David S. Miller
Tom Lendacky says:
====================
amd-xgbe: AMD XGBE driver fixes 2014-08-29

The following series of patches includes fixes to the driver.
- Tx hardware queue flushing support dependent on hardware version
- Incorrect reported fifo size
- Proper mmd select in XPCS debugfs support
- Proper queue count for configuring Tx flow control

This patch series is based on net.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01amd-xgbe: Use the Tx queue count for Tx flow control supportLendacky, Thomas
When configuring Tx flow control the Rx queue count was used instead of the Tx queue count for looping through the Tx hardware queues. Fix the code to use the Tx queue count. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01amd-xgbe: Fix the xpcs mmd debugfs supportLendacky, Thomas
The debugfs support for the xpcs registers did not properly use the specified mmd (xpcs_mmd entry) which resulted in the default mmd value always being used. Update the debugfs support to generate the proper mmd register value. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01amd-xgbe: Reported fifo size from hardware is not correctLendacky, Thomas
The fifo size reported by the hardware is not correct. Add support to limit the reported size to what is actually present. Also, fix the argument types used in the fifo size calculation function. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01amd-xgbe: Check for Tx hardware queue flushing supportLendacky, Thomas
The flushing of the Tx hardware queues is only supported at a certain level of the hardware. Retrieve the current version of the hardware and use that to determine if flushing is supported. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01drivers: net: NET_XGENE should depend on HAS_DMAGeert Uytterhoeven
If NO_DMA=y:

drivers/built-in.o: In function `xgene_enet_delete_ring':
xgene_enet_main.c:(.text+0x28755a): undefined reference to `dma_free_coherent'
drivers/built-in.o: In function `xgene_enet_setup_tx_desc':
xgene_enet_main.c:(.text+0x287774): undefined reference to `dma_map_single'
xgene_enet_main.c:(.text+0x287780): undefined reference to `dma_mapping_error'
drivers/built-in.o: In function `xgene_enet_tx_completion':
xgene_enet_main.c:(.text+0x2878e6): undefined reference to `dma_unmap_single'
drivers/built-in.o: In function `xgene_enet_refill_bufpool':
xgene_enet_main.c:(.text+0x2879d4): undefined reference to `dma_map_single'
xgene_enet_main.c:(.text+0x2879e0): undefined reference to `dma_mapping_error'
drivers/built-in.o: In function `xgene_enet_rx_frame':
xgene_enet_main.c:(.text+0x287aaa): undefined reference to `dma_unmap_single'
drivers/built-in.o: In function `xgene_enet_free_desc_ring':
xgene_enet_main.c:(.text+0x287f98): undefined reference to `dma_free_coherent'
drivers/built-in.o: In function `xgene_enet_create_desc_ring':
xgene_enet_main.c:(.text+0x28808e): undefined reference to `dma_alloc_coherent'
drivers/built-in.o: In function `xgene_enet_probe':
xgene_enet_main.c:(.text+0x2883d4): undefined reference to `dma_set_mask'
xgene_enet_main.c:(.text+0x2883ec): undefined reference to `dma_supported'

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-02xfs: trim eofblocks before collapse rangeBrian Foster
xfs_collapse_file_space() currently writes back the entire file undergoing collapse range to settle things down for the extent shift algorithm. While this prevents changes to the extent list during the collapse operation, the writeback itself is not enough to prevent unnecessary collapse failures. The current shift algorithm uses the extent index to iterate the in-core extent list. If a post-eof delalloc extent persists after the writeback (e.g., a prior zero range op where the end of the range aligns with eof can separate the post-eof blocks such that they are not written back and converted), xfs_bmap_shift_extents() becomes confused over the encoded br_startblock value and fails the collapse. As with the full writeback, this is a temporary fix until the algorithm is improved to cope with a volatile extent list and avoid attempts to shift post-eof extents. Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02xfs: xfs_file_collapse_range is delalloc challengedDave Chinner
If we have delalloc extents on a file before we run a collapse range operation, we sync the range that we are going to collapse to convert delalloc extents in that region to real extents to simplify the shift operation. However, the shift operation then assumes that the extent list is not going to change as it iterates over the extent list moving things about. Unfortunately, this isn't true because we can't hold the ILOCK over all the operations. We can prevent new IO from modifying the extent list by holding the IOLOCK, but that doesn't prevent writeback from running.... And when writeback runs, it can convert delalloc extents in the range of the file prior to the region being collapsed, and this changes the indexes of all the extents in the file. That causes the collapse range operation to Go Bad. The right fix is to rewrite the extent shift operation not to be dependent on the extent list not changing across the entire operation, but this is a fairly significant piece of work to do. Hence, as a short-term workaround for the problem, sync the entire file before starting a collapse operation to remove all delalloc ranges from the file and so avoid the problem of concurrent writeback changing the extent list. Diagnosed-and-Reported-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02xfs: don't log inode unless extent shift makes extent modificationsBrian Foster
The file collapse mechanism uses xfs_bmap_shift_extents() to collapse all subsequent extents down into the specified, previously punched out, region. This function performs some validation, such as whether a sufficient hole exists in the target region of the collapse, then shifts the remaining extents downward. The exit path of the function currently logs the inode unconditionally. While we must log the inode (and abort) if an error occurs and the transaction is dirty, the initial validation paths can generate errors before the transaction has been dirtied. This creates an unnecessary filesystem shutdown scenario, as the caller will cancel a transaction that has been marked dirty. Modify xfs_bmap_shift_extents() to OR the logflags bits as modifications are made to the inode bmap. Only log the inode in the exit path if logflags has been set. This ensures we only have to cancel a dirty transaction if modifications have been made and prevents an unnecessary filesystem shutdown otherwise. Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02xfs: use ranged writeback and invalidation for direct IODave Chinner
Now that we are not doing silly things with dirtying buffers beyond EOF and are using invalidation correctly, we can finally reduce the ranges of writeback and invalidation used by direct IO to match that of the IO being issued. Bring the writeback and invalidation ranges back to match the generic direct IO code - this will greatly reduce the perturbation of cached data when direct IO and buffered IO are mixed, but still provide the same buffered vs direct IO coherency behaviour we currently have. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02xfs: don't zero partial page cache pages during O_DIRECT writesDave Chinner
Similar to direct IO reads, direct IO writes are using truncate_pagecache_range to invalidate the page cache. This is incorrect due to the sub-block zeroing in the page cache that truncate_pagecache_range() triggers. This patch fixes things by using invalidate_inode_pages2_range instead. It preserves the page cache invalidation, but won't zero any pages. cc: stable@vger.kernel.org Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
2014-09-02xfs: don't zero partial page cache pages during O_DIRECT writesChris Mason
xfs is using truncate_pagecache_range to invalidate the page cache during DIO reads. This is different from the other filesystems, which only invalidate pages during DIO writes. truncate_pagecache_range is meant to be used when we are freeing the underlying data structs from disk, so it will zero any partial ranges in the page. This means a DIO read can zero out part of the page cache page, and it is possible the page will stay in cache. Buffered reads will then find an up-to-date page with zeros instead of the data actually on disk. This patch fixes things by using invalidate_inode_pages2_range instead. It preserves the page cache invalidation, but won't zero any pages. [dchinner: catch error and warn if it fails. Comment.] cc: stable@vger.kernel.org Signed-off-by: Chris Mason <clm@fb.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
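The substance of the change is swapping the invalidation call, roughly as below; the offsets use the PAGE_CACHE_SHIFT convention of that era, and the WARN_ON_ONCE reflects the bracketed [dchinner] note:

    /* Before: zeroes partial pages, which is wrong for a DIO read path. */
    truncate_pagecache_range(VFS_I(ip), pos, pos + count - 1);

    /* After: drop the pages without touching their contents, and warn if a
     * page could not be invalidated.
     */
    ret = invalidate_inode_pages2_range(VFS_I(ip)->i_mapping,
                                        pos >> PAGE_CACHE_SHIFT,
                                        (pos + count - 1) >> PAGE_CACHE_SHIFT);
    WARN_ON_ONCE(ret);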
2014-09-02xfs: don't dirty buffers beyond EOFDave Chinner
generic/263 is failing fsx at this point with a page spanning EOF that cannot be invalidated. The operations are:

1190 mapwrite 0x52c00 thru 0x5e569 (0xb96a bytes)
1191 mapread 0x5c000 thru 0x5d636 (0x1637 bytes)
1192 write 0x5b600 thru 0x771ff (0x1bc00 bytes)

where 1190 extends EOF from 0x54000 to 0x5e569. When the direct IO write attempts to invalidate the cached page over this range, it fails with -EBUSY and so any attempt to do page invalidation fails. The real question is this: Why can't that page be invalidated after it has been written to disk and cleaned? Well, there's data on the first two buffers in the page (1k block size, 4k page), but the third buffer on the page (i.e. beyond EOF) is failing drop_buffers because it's bh->b_state == 0x3, which is BH_Uptodate | BH_Dirty. IOWs, there's dirty buffers beyond EOF. Say what? OK, set_buffer_dirty() is called on all buffers from __set_page_buffers_dirty(), regardless of whether the buffer is beyond EOF or not, which means that when we get to ->writepage, we have buffers marked dirty beyond EOF that we need to clean. So, we need to implement our own .set_page_dirty method that doesn't dirty buffers beyond EOF. This is messy because the buffer code is not meant to be shared and it has interesting locking issues on the buffer dirty bits. So just copy and paste it and then modify it to suit what we need. Note: the solutions the other filesystems and generic block code use of marking the buffers clean in ->writepage does not work for XFS. It still leaves dirty buffers beyond EOF and invalidations still fail. Hence rather than play whack-a-mole, this patch simply prevents those buffers from being dirtied in the first place. cc: <stable@kernel.org> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
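The heart of the copied-and-modified ->set_page_dirty implementation is the buffer walk that refuses to dirty anything past EOF; a simplified sketch of that walk (not the full method):

    /* Sketch: only buffers whose file offset lies inside i_size get
     * dirtied; buffers on the EOF page that sit beyond EOF stay clean so
     * later invalidation can succeed.
     */
    loff_t end_offset = i_size_read(inode);
    loff_t offset = page_offset(page);

    if (page_has_buffers(page)) {
        struct buffer_head *bh, *head;

        bh = head = page_buffers(page);
        do {
            if (offset < end_offset)
                set_buffer_dirty(bh);
            offset += bh->b_size;
        } while ((bh = bh->b_this_page) != head);
    }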
2014-09-01bonding: add slave_changelink support and use it for queue_idNikolay Aleksandrov
This patch adds support for slave_changelink to the bonding and uses it to give the ability to change the queue_id of the enslaved devices via netlink. It sets slave_maxtype and uses bond_changelink as a prototype for bond_slave_changelink. Example/test command after the iproute2 patch: ip link set eth0 type bond_slave queue_id 10 CC: David S. Miller <davem@davemloft.net> CC: Jay Vosburgh <j.vosburgh@gmail.com> CC: Veaceslav Falico <vfalico@gmail.com> CC: Andy Gospodarek <andy@greyhouse.net> Suggested-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Acked-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01tcp: whitespace fixesstephen hemminger
Fix places where there is space before tab, long lines, and awkward if(){, double spacing etc. Add blank line after declaration/initialization. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01net: systemport: tell RXCHK if we are using Broadcom tagsFlorian Fainelli
When Broadcom tags are enabled, e.g: when interfaced to an Ethernet switch, make sure that we tell the RXCHK engine that it should be expecting a 4-bytes Broadcom tag after the Ethernet MAC Source Address. Use netdev_uses_dsa() to check for that condition since that will tell us if a switch is attached to our network interface. Fixes: 80105befdb4b ("net: systemport: add Broadcom SYSTEMPORT Ethernet MAC driver") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
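A sketch of the idea; only netdev_uses_dsa() is taken from the text, while the RXCHK register accessors and bit name below are placeholders:

    /* Sketch: during rxchk setup, tell the engine whether a 4-byte Broadcom
     * tag follows the Ethernet source address, based on whether a DSA
     * switch is attached to this interface.
     */
    reg = rxchk_readl(priv, RXCHK_CONTROL);     /* placeholder accessor/register */
    if (netdev_uses_dsa(dev))
        reg |= RXCHK_BRCM_TAG_EN;               /* placeholder bit name */
    else
        reg &= ~RXCHK_BRCM_TAG_EN;
    rxchk_writel(priv, reg, RXCHK_CONTROL);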
2014-09-01Merge tag 'master-2014-08-25' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wirelessDavid S. Miller
John W. Linville says:
====================
pull request: wireless 2014-08-28

Please pull this batch of fixes intended for the 3.17 stream.

For the Bluetooth/6LowPAN/802.15.4 bits, Johan says: 'It contains a connection reference counting fix for LE where a connection might stay up even though it should get disconnected. The other 802.15.4 6LoWPAN related patches were sent to the bluetooth tree by Alexander Aring and described as follows by him: " these patches contains patches for the bluetooth branch. This series includes memory leak fixes and an errno value fix. Also there are two patches for sending and receiving 1280 6LoWPAN packets, which makes the IEEE 802.15.4 6LoWPAN stack more RFC compliant. "'

Along with that...

Alexey Khoroshilov fixes a use-after-free bug on at76c50x-usb.

Hauke Mehrtens adds a PCI ID to bcma.

Himangi Saraogi fixes a silly "A || A" test in rtlwifi.

Larry Finger adds a device ID to rtl8192cu.

Maks Naumov fixes a strncmp argument in ath9k.

Álvaro Fernández Rojas adds a PCI ID to ssb.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01pktgen: add flag NO_TIMESTAMP to disable timestampingJesper Dangaard Brouer
When testing the TX limits of the stack, it is useful to be able to disable the do_gettimeofday() timestamping on every packet. This implements a pktgen flag NO_TIMESTAMP which will disable this call to do_gettimeofday(). The performance change (on my system, E5-2695) with skb_clone=0 goes from TX 2,423,751 pps to 2,567,165 pps with flag NO_TIMESTAMP. Thus, the cost (or saving) of do_gettimeofday() is approx 23 nanosec. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
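The ~23 ns figure follows directly from the two packet rates quoted above:

    1 / 2,423,751 pps  ~= 412.6 ns per packet  (timestamping enabled)
    1 / 2,567,165 pps  ~= 389.5 ns per packet  (flag NO_TIMESTAMP set)
    difference         ~=  23.1 ns per packet saved by skipping do_gettimeofday()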
2014-09-01bnx2x: fix tunneled GSO over IPv6Dmitry Kravkov
Set correct bit for packed description. Introduced in e42780b66aab88d3a82b6087bcd6095b90eecde7 bnx2x: Utilize FW 7.10.51 Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Dmitry Kravkov <Dmitry.Kravkov@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01bnx2x: prevent incorrect byte-swap in BEDmitry Kravkov
Fixes incorrectly defined struct in FW HSI for BE platform. Affects tunneling, tx-switching and anti-spoofing. Introduced in e42780b66aab88d3a82b6087bcd6095b90eecde7 bnx2x: Utilize FW 7.10.51 Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Dmitry Kravkov <Dmitry.Kravkov@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01tipc: add name distributor resiliency queueErik Hugne
TIPC name table updates are distributed asynchronously in a cluster, entailing a risk of certain race conditions. E.g., if two nodes simultaneously issue conflicting (overlapping) publications, this may not be detected until both publications have reached a third node, in which case one of the publications will be silently dropped on that node. Hence, we end up with an inconsistent name table. In most cases this conflict is just a temporary race, e.g., one node is issuing a publication under the assumption that a previous, conflicting, publication has already been withdrawn by the other node. However, because of the (rtt related) distributed update delay, this may not yet hold true on all nodes. The symptom of this failure is a syslog message: "tipc: Cannot publish {%u,%u,%u}, overlap error". In this commit we add a resiliency queue at the receiving end of the name table distributor. When insertion of an arriving publication fails, we retain it in this queue for a short amount of time, assuming that another update will arrive very soon and clear the conflict. If that happens, we insert the publication; otherwise we drop it. The (configurable) retention value defaults to 2000 ms. Knowing from experience that the situation described above is extremely rare, there is no risk that the queue will accumulate any large number of items. Signed-off-by: Erik Hugne <erik.hugne@ericsson.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
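A generic sketch of such a retention queue, not the actual TIPC code; the structure layout, the payload type and try_insert_publication() below are hypothetical and only illustrate the retry-then-drop mechanism:

    /* Sketch: publications that cannot be inserted yet are parked with a
     * deadline; a later pass retries them and drops the ones that expired.
     */
    struct distr_queue_item {                   /* hypothetical */
        struct publication *p;                  /* hypothetical payload */
        unsigned long expires;
        struct list_head next;
    };

    static LIST_HEAD(defer_queue);

    static void defer_publication(struct publication *p, unsigned long retention_ms)
    {
        struct distr_queue_item *e = kmalloc(sizeof(*e), GFP_ATOMIC);

        if (!e)
            return;
        e->p = p;
        e->expires = jiffies + msecs_to_jiffies(retention_ms);
        list_add_tail(&e->next, &defer_queue);
    }

    static void process_deferred(void)
    {
        struct distr_queue_item *e, *tmp;

        list_for_each_entry_safe(e, tmp, &defer_queue, next) {
            if (!try_insert_publication(e->p) &&        /* hypothetical */
                time_before(jiffies, e->expires))
                continue;       /* conflict not yet cleared, keep waiting */
            list_del(&e->next); /* inserted, or retention time exceeded */
            kfree(e);
        }
    }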
2014-09-01tipc: refactor name table updates out of named packet receive routineErik Hugne
We need to perform the same actions when processing deferred name table updates, so this functionality is moved to a separate function. Signed-off-by: Erik Hugne <erik.hugne@ericsson.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01r8152: reduce the number of Txhayeswang
Because the Tx has the features of stopping the queue and aggregation, we don't need many tx buffers. Change the tx number from 10 to 4 to reduce the usage of memory. This could save 16K * 6 bytes of memory. Signed-off-by: Hayes Wang <hayeswang@realtek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01Merge branch 'xmit_list'David S. Miller
David Miller says:
====================
net: Make dev_hard_start_xmit() work fundamentally on lists

After this patch set, dev_hard_start_xmit() will work fundamentally on any and all SKB lists. This opens the path for a clean implementation of pulling multiple packets out during qdisc_restart(), and then passing that blob in one shot to dev_hard_start_xmit().

There were two main architectural blockers to this:

1) The GSO handling. We kept the original GSO head SKB around simply because dev_hard_start_xmit() had no way to communicate to the caller how far into the segmented list it was able to go. Now it can, so the head GSO can be liberated immediately. All of the special GSO head SKB destructor et al. handling goes away too.

2) Validation of VLAN, CSUM, and segmentation characteristics was being performed inside of dev_hard_start_xmit(). If we want to truly batch, we have to let the higher levels do this. In particular, this is now dequeue_skb()'s job.

And with those two issues out of the way, it should now be trivial to build experiments on top of this patch set, all of the framework should be there now. You could do something as simple as:

    skb = q->dequeue(q);
    if (skb)
        skb = validate_xmit_skb(skb, qdisc_dev(q));
    if (skb) {
        struct sk_buff *new, *head = skb;
        int limit = 5;

        do {
            new = q->dequeue(q);
            if (new)
                new = validate_xmit_skb(new, qdisc_dev(q));
            if (new) {
                skb->next = new;
                skb = new;
            }
        } while (new && --limit);

        skb = head;
    }

inside of the else branch of dequeue_skb().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01net: xmit_list() becomes dev_hard_start_xmit().David S. Miller
Now fundamentally we can process lists of SKBs as cheaply as single packets. Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01net: Don't keep around original SKB when we software segment GSO frames.David S. Miller
Just maintain the list properly by returning the head of the remaining SKB list from dev_hard_start_xmit(). Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01net: Validate xmit SKBs right when we pull them out of the qdisc.David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01net: Separate out SKB validation logic from transmit path.David S. Miller
dev_hard_start_xmit() does two things, it first validates and canonicalizes the SKB, then it actually sends it. Make a set of helper functions for doing the first part. Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01net: Have xmit_list() signal more==true when appropriate.David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-01net: Pass a "more" indication down into netdev_start_xmit() code paths.David S. Miller
For now it will always be false. Signed-off-by: David S. Miller <davem@davemloft.net>