path: root/drivers/net/ethernet
Age  Commit message  [Author, files changed, lines -removed/+added]
2023-04-21  net/mlx5: DR, Add memory statistics for domain object  [Yevgeny Kliteynik, 3 files, -3/+15]
Add counters for the number of buddies that are currently in use, per domain, per buddy type (STE, MODIFY-HEADER, MODIFY-PATTERN).
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-21  net/mlx5: DR, Add more info in domain dbg dump  [Yevgeny Kliteynik, 1 file, -2/+9]
Add additional items to the domain info dump: Linux version and device name.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-21  net/mlx5: DR, Calculate sync threshold of each pool according to its type  [Yevgeny Kliteynik, 1 file, -15/+18]
When a certain ICM chunk is no longer needed, it needs to be freed. Fully freeing ICM memory involves issuing the FW SYNC_STEERING command. This is very time consuming, and it is impractical to do for every freed chunk. Instead, we manage these 'freed' chunks in a hot list (a list of chunks that are no longer required by SW, but that HW might still access). When the size of the hot list reaches a certain threshold, we purge it and issue the SYNC_STEERING FW command.

There is one threshold for all the different ICM types, which is not optimal, as different ICM types require different approaches: the STE pool is very large and very 'dynamic' in nature, so letting its hot list become too large results in a significant perf hiccup when purging it. The modify-action pool is much smaller and less dynamic, so we can let its hot list grow to almost the size of the whole pool.

This patch fixes the problem: instead of having the same hot memory threshold for all pools, the sync operation is triggered in accordance with the ICM type.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
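A rough sketch of the idea, with hypothetical function and enum names (not the actual mlx5 symbols): the sync trigger becomes a per-type decision.

    /* Hypothetical sketch: pick the hot-list sync threshold per ICM type
     * instead of one global value. The STE pool is large and dynamic, so
     * sync early; smaller, mostly static pools may fill up almost entirely. */
    static u64 dr_icm_pool_hot_memory_threshold(struct mlx5dr_icm_pool *pool)
    {
            switch (pool->icm_type) {
            case DR_ICM_TYPE_STE:
                    /* purge once the hot list reaches half the pool */
                    return mlx5dr_icm_pool_get_max_size(pool) / 2;
            default:
                    /* modify-header pools: let the hot list grow to ~90% */
                    return mlx5dr_icm_pool_get_max_size(pool) * 9 / 10;
            }
    }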
2023-04-21  net/mlx5: DR, Fix dumping of legacy modify_hdr in debug dump  [Yevgeny Kliteynik, 1 file, -4/+6]
The steering dump parser expects to see 0 as the rewrite number of actions when pattern/args aren't supported - parsing of a legacy modify header is based on this assumption. Fix the dump to align with the parser's expectation.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-20  net: libwx: fix memory leak in wx_setup_rx_resources  [Zhengchao Shao, 1 file, -1/+4]
When wx_alloc_page_pool() fails in wx_setup_rx_resources(), the DMA buffer is not released. Add dma_free_coherent() in the error path to release it.
Fixes: 850b971110b2 ("net: libwx: Allocate Rx and Tx resources")
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230418065450.2268522-1-shaozhengchao@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
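A minimal sketch of the shape of the fix described above (ring field names follow libwx conventions but are illustrative, and surrounding code is elided):

    ret = wx_alloc_page_pool(rx_ring);
    if (ret < 0) {
            dev_err(rx_ring->dev, "page pool creation failed: %d\n", ret);
            goto err_free_dma;      /* previously just returned, leaking */
    }
    return 0;

    err_free_dma:
            /* release the coherent DMA descriptor buffer allocated earlier */
            dma_free_coherent(rx_ring->dev, rx_ring->size,
                              rx_ring->desc, rx_ring->dma);
            return -ENOMEM;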
2023-04-20  net: stmmac: dwmac-meson8b: Avoid cast to incompatible function type  [Simon Horman, 1 file, -2/+6]
Rather than casting clk_disable_unprepare to an incompatible function type, provide a trivial wrapper with the correct signature for the use-case.

Reported by clang-16 with W=1:

  drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c:276:6: error: cast from 'void (*)(struct clk *)' to 'void (*)(void *)' converts to incompatible function type [-Werror,-Wcast-function-type-strict]
          (void(*)(void *))clk_disable_unprepare,
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

No functional change intended. Compile tested only.
Signed-off-by: Simon Horman <horms@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20230418-dwmac-meson8b-clk-cb-cast-v1-1-e892b670cbbb@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
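The wrapper pattern looks roughly like this (a sketch; the wrapper name and the devm_add_action_or_reset() placement are assumptions, not necessarily the exact patch):

    /* wrapper with the void-pointer signature the cleanup callback
     * expects, avoiding the function-pointer cast flagged by clang */
    static void meson8b_clk_disable_unprepare(void *data)
    {
            clk_disable_unprepare(data);
    }

    ret = devm_add_action_or_reset(dev, meson8b_clk_disable_unprepare, clk);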
2023-04-19  stmmac: fix changing mac address  [Corinna Vinschen, 1 file, -0/+2]
Without the IFF_LIVE_ADDR_CHANGE flag being set, the network code disallows changing the MAC address while the interface is UP. A consequence is, for instance, that the interface can't be used in a failover bond. Add the missing flag to net_device priv_flags. Tested on Intel Elkhart Lake with default settings, as well as with failover and alb mode bonds.
Signed-off-by: Corinna Vinschen <vinschen@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
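The fix amounts to a single flag set during probe (illustrative placement, assuming the usual ndev setup path):

    /* allow changing the MAC address while the interface is up */
    ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;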
2023-04-19net/mlx5e: RX, Add XDP multi-buffer support in Striding RQTariq Toukan5-59/+138
Here we add support for multi-buffer XDP handling in Striding RQ, which is our default out-of-the-box RQ type. Before this series, loading such an XDP program would fail, until you switch to the legacy RQ (by unsetting the rx_striding_rq priv-flag). To overcome the lack of headroom and tailroom between the strides, we allocate a side page to be used for the descriptor (xdp_buff / skb) and the linear part. When an XDP program is attached, we structure the xdp_buff so that it contains no data in the linear part, and the whole packet resides in the fragments. In case of XDP_PASS, where an SKB still needs to be created, we copy up to 256 bytes to its linear part, to match the current behavior, and satisfy functions that assume finding the packet headers in the SKB linear part (like eth_type_trans). Performance testing: Packet rate test, 64 bytes, 32 channels, MTU 9000 bytes. CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz. NIC: ConnectX-6 Dx, at 100 Gbps. +----------+-------------+-------------+---------+ | Test | Legacy RQ | Striding RQ | Speedup | +----------+-------------+-------------+---------+ | XDP_DROP | 101,615,544 | 117,191,020 | +15% | +----------+-------------+-------------+---------+ | XDP_TX | 95,608,169 | 117,043,422 | +22% | +----------+-------------+-------------+---------+ Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
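The XDP_PASS header copy described above follows the usual header-pull pattern; a hedged sketch (mxbuf, pkt_len and the 256-byte constant name are assumed context, not the exact mlx5e code):

    struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(&mxbuf->xdp);
    u32 headlen = min_t(u32, MLX5E_RX_MAX_HEAD, pkt_len);

    /* headers live in the first fragment; pull up to 256 bytes into the
     * SKB linear part so helpers like eth_type_trans() keep working */
    skb_put_data(skb, skb_frag_address(&sinfo->frags[0]), headlen);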
2023-04-19  net/mlx5e: RX, Prepare non-linear striding RQ for XDP multi-buffer support  [Tariq Toukan, 1 file, -4/+47]
In preparation for supporting XDP multi-buffer in striding RQ, use the xdp_buff struct to describe the packet. Make its skb_shared_info collide with the one of the allocated SKB, then add the fragments using the xdp_buff API.
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: RX, Generalize mlx5e_fill_mxbuf()  [Tariq Toukan, 1 file, -5/+8]
Make the function more generic. Let it get an additional frame_sz parameter instead of deriving it from the RQ struct. No functional change here, just a preparation for a downstream patch.
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: RX, Take shared info fragment addition into a function  [Tariq Toukan, 1 file, -25/+31]
Introduce mlx5e_add_skb_shared_info_frag(), a function dedicated to adding a fragment into a struct skb_shared_info object. Use it in the Legacy RQ flow. Similar usage will be added in a downstream patch to the corresponding Striding RQ flow.
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: XDP, Allow non-linear single-segment frames in XDP TX MPWQE  [Tariq Toukan, 1 file, -9/+26]
Under a few restrictions, the TX MPWQE feature can serve multiple TX packets in a single TX descriptor. It requires each of the packets to have a single scatter entry / segment. Today we allow only linear frames to use this feature, although there's no real problem with non-linear ones where the whole packet resides in the first fragment. Expand the XDP TX MPWQE feature support to include such frames. This is in preparation for the downstream patch, in which we will generate such non-linear frames.
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: XDP, Remove un-established assumptions on XDP buffer  [Tariq Toukan, 2 files, -18/+22]
Remove the assumption of non-zero linear length in the XDP xmit function, used to serve both internal XDP_TX operations as well as redirected-in requests. Do not apply the MLX5E_XDP_MIN_INLINE check unless necessary.
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: XDP, Consider large multi-buffer packets in Striding RQ params calculations  [Tariq Toukan, 1 file, -4/+7]
Function mlx5e_rx_get_linear_stride_sz() returns PAGE_SIZE immediately in case an XDP program is attached. The more accurate formula is ALIGN(sz, PAGE_SIZE), to prevent two packets from residing on the same page.

The assumption behind the current code is that sz <= PAGE_SIZE holds for all cases with an XDP program set. This is true because the function is being called:
- 3 times from Striding RQ flows, in which XDP is not supported for such large packets;
- 1 time from the Legacy RQ flow, under the condition mlx5e_rx_is_linear_skb().

No functional change here, just removing the implied assumption in preparation for supporting XDP multi-buffer in Striding RQ (see the sketch below).
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
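The corrected rounding is the standard page-alignment idiom; a sketch under a simplified context (params and sz as local assumptions):

    /* with an XDP program attached, a stride must cover whole pages so
     * that two packets never share a page */
    if (params->xdp_prog)
            sz = ALIGN(sz, PAGE_SIZE);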
2023-04-19  net/mlx5e: XDP, Let XDP checker function get the params as input  [Tariq Toukan, 1 file, -13/+8]
Change mlx5e_xdp_allowed() so it gets the params structure with the xdp_prog applied, rather than creating a local copy based on the current params in priv. This reduces the amount of memory on the stack, and acts on the exact params instance that's about to be applied.
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: XDP, Improve Striding RQ check with XDP  [Tariq Toukan, 1 file, -14/+9]
The non-linear memory scheme of Striding RQ does not yet support XDP. Move the check to where it belongs, inside the params validation function mlx5e_params_validate_xdp().
Reviewed-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: XDP, Add support for multi-buffer XDP redirect-in  [Tariq Toukan, 3 files, -17/+75]
Handle multi-buffer XDP redirect-in requests coming through mlx5e_xdp_xmit. Extend struct mlx5e_xmit_data_frags with an additional dma_arr field, to point to the fragments' DMA mappings, as they cannot be retrieved via the page_pool_get_dma_addr() function. Push a dma_addr xdpi instance for each fragment, and use them in the completion flow to dma_unmap the frags. Finally, remove the restriction in mlx5e_open_xdpsq, and set the flag in xdp_features.
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: XDP, Use multiple single-entry objects in xdpi_fifo  [Tariq Toukan, 5 files, -50/+101]
Here we fix the current wi->num_pkts abuse, as it was used to indicate multiple xdpi entries in the xdpi_fifo. Instead, reduce mlx5e_xdp_info to the size of a single field, making it a union of unions. Per packet, use as many instances as needed to provide the information needed at the time of completion. The sequence of xdpi instances pushed is well defined, derived from the xmit_mode.
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
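Conceptually, each FIFO slot shrinks to one machine word; a hedged approximation of the union described above (field grouping is approximate, not the exact mlx5e definition):

    /* one FIFO entry = one field; a packet pushes as many entries as its
     * xmit_mode requires, and the completion flow pops them in that order */
    union mlx5e_xdp_info {
            enum mlx5e_xdp_xmit_mode mode;
            union {
                    struct xdp_frame *xdpf;
                    dma_addr_t dma_addr;
            } frame;
            union {
                    struct mlx5e_rq *rq;
                    struct page *page;
            } page;
    };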
2023-04-19  net/mlx5e: XDP, Remove doubtful unlikely calls  [Tariq Toukan, 1 file, -5/+5]
It is neither likely nor unlikely that the xdp buff has fragments; it depends on the program loaded and the size of the packet received.
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: Introduce extended version for mlx5e_xmit_data  [Tariq Toukan, 5 files, -34/+44]
Introduce struct mlx5e_xmit_data_frags to be used for non-linear xmit buffers. Let it include an sinfo pointer. Take one bit from the len field to indicate whether the descriptor has fragments and can be cast up into the extended version. Zero-init to make sure has_frags, and potentially future fields, are zero when not explicitly assigned. Another field will be added in a downstream patch to indicate and point to the DMA addresses of the different frags, for redirect-in requests. This simplifies the params of the mlx5e_xmit_xdp_frame/mlx5e_xmit_xdp_frame_mpwqe functions.
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
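A sketch of the layout this describes: the bitfield split follows the "one bit from the len field" note, though the exact field order is an approximation.

    struct mlx5e_xmit_data {
            dma_addr_t  dma_addr;
            void       *data;
            u32         len : 31;
            u32         has_frags : 1;   /* set => safe to cast up */
    };

    struct mlx5e_xmit_data_frags {
            struct mlx5e_xmit_data xd;       /* must be first for the cast */
            struct skb_shared_info *sinfo;   /* fragment descriptors */
    };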
2023-04-19  net/mlx5e: Move struct mlx5e_xmit_data to datapath header  [Tariq Toukan, 2 files, -6/+7]
Move the TX datapath struct from the generic en.h to the datapath txrx.h header, where it belongs.
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net/mlx5e: Move XDP struct and enum to XDP header  [Tariq Toukan, 2 files, -35/+35]
Move struct mlx5e_xdp_info and enum mlx5e_xdp_xmit_mode from the generic en.h to the XDP header, where they belong.
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-19  net: ethernet: stmmac: dwmac-sti: remove stih415/stih416/stid127  [Alain Volmat, 1 file, -59/+1]
Remove the no-longer-supported platforms (stih415/stih416 and stid127).
Signed-off-by: Alain Volmat <avolmat@me.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Link: https://lore.kernel.org/r/20230416195523.61075-1-avolmat@me.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-19  net: mscc: ocelot: remove incompatible prototypes  [Arnd Bergmann, 1 file, -3/+0]
The types for the register argument changed recently, but there are still incompatible prototypes that got left behind, and gcc-13 warns about these:

  In file included from drivers/net/ethernet/mscc/ocelot.c:13:
  drivers/net/ethernet/mscc/ocelot.h:97:5: error: conflicting types for 'ocelot_port_readl' due to enum/integer mismatch; have 'u32(struct ocelot_port *, u32)' {aka 'unsigned int(struct ocelot_port *, unsigned int)'} [-Werror=enum-int-mismatch]
     97 | u32 ocelot_port_readl(struct ocelot_port *port, u32 reg);
        |     ^~~~~~~~~~~~~~~~~

Just remove the two prototypes, and rely on the copy in the global header.
Fixes: 9ecd05794b8d ("net: mscc: ocelot: strengthen type of "u32 reg" in I/O accessors")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://lore.kernel.org/r/20230417205531.1880657-1-arnd@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-19  net: stmmac: propagate feature flags to vlan  [Corinna Vinschen, 1 file, -0/+4]
stmmac_dev_probe doesn't propagate feature flags to VLANs. So features like offloading don't correspond to the general features, and it's not possible to manipulate features via ethtool -K in a way that affects VLANs. Propagate the feature flags to vlan_features. Drop the TSO feature, because it does not work on VLANs yet.
Signed-off-by: Corinna Vinschen <vinschen@redhat.com>
Link: https://lore.kernel.org/r/20230417192845.590034-1-vinschen@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net: stmmac: dwmac-starfive: Add phy interface settings  [Samin Guo, 1 file, -0/+48]
dwmac supports multiple modes. When working under rmii and rgmii, different phy interfaces need to be set. According to the dwmac document, when working in rmii the interface needs to be set to 0x4, and for rgmii it needs to be set to 0x1.

The phy interface needs to be set in syscon; the format is as follows:
  starfive,syscon: <&syscon, offset, shift>
Tested-by: Tommaso Merciai <tomm.merciai@gmail.com>
Signed-off-by: Samin Guo <samin.guo@starfivetech.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-18  net: stmmac: Add glue layer for StarFive JH7110 SoC  [Samin Guo, 3 files, -0/+136]
This adds StarFive dwmac driver support on the StarFive JH7110 SoC.
Tested-by: Tommaso Merciai <tomm.merciai@gmail.com>
Co-developed-by: Emil Renner Berthing <kernel@esmil.dk>
Signed-off-by: Emil Renner Berthing <kernel@esmil.dk>
Signed-off-by: Samin Guo <samin.guo@starfivetech.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-18  net: stmmac: platform: Add snps,dwmac-5.20 IP compatible string  [Emil Renner Berthing, 1 file, -1/+2]
Add the "snps,dwmac-5.20" compatible string for the 5.20 version, which avoids having to define some platform data in the glue layer.
Tested-by: Tommaso Merciai <tomm.merciai@gmail.com>
Signed-off-by: Emil Renner Berthing <kernel@esmil.dk>
Signed-off-by: Samin Guo <samin.guo@starfivetech.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-18  r8169: use new macro netif_subqueue_completed_wake in the tx cleanup path  [Heiner Kallweit, 1 file, -12/+4]
Use the new net core macro netif_subqueue_completed_wake to simplify the code of the tx cleanup path.
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-18  r8169: use new macro netif_subqueue_maybe_stop in rtl8169_start_xmit  [Heiner Kallweit, 1 file, -28/+11]
Use the new net core macro netif_subqueue_maybe_stop in the start_xmit path to simplify the code. Whilst at it, set the tx queue start threshold to twice the stop threshold. Before, the values were the same, resulting in stopping/starting the queue more often than needed.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
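Usage of the macro follows the net core pattern; a sketch where the threshold constant names are assumptions, not necessarily the driver's exact symbols:

    /* stop the subqueue when fewer than STOP_THRS descriptors remain,
     * and restart it only once START_THRS (2 * STOP_THRS) are free again,
     * avoiding rapid stop/start cycles */
    netif_subqueue_maybe_stop(dev, 0, rtl_tx_slots_avail(tp),
                              R8169_TX_STOP_THRS, R8169_TX_START_THRS);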
2023-04-18  net: mscc: ocelot: add support for preemptible traffic classes  [Vladimir Oltean, 3 files, -1/+66]
In order to not transmit (preemptible) frames which will be received by the link partner as corrupted (because it doesn't support FP), the hardware requires the driver to program the QSYS_PREEMPTION_CFG_P_QUEUES register only after the MAC Merge layer becomes active (verification succeeds, or was disabled). There are some cases when FP is known (through experimentation) to be broken. Give priority to FP over cut-through switching, and disable FP for known broken link modes.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net: mscc: ocelot: add support for mqprio offload  [Vladimir Oltean, 1 file, -0/+50]
This doesn't apply anything to hardware, and in general doesn't do anything that the software variant doesn't do, except for checking that there isn't more than 1 TXQ per TC (TXQs for a DSA switch are a dubious concept anyway). The reason we add this is to be able to parse one more field added to struct tc_mqprio_qopt_offload, namely preemptible_tcs.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ferenc Fejes <fejes@inf.elte.hu>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18net: mscc: ocelot: don't rely on cached verify_status in ocelot_port_get_mm()Vladimir Oltean1-0/+1
ocelot_mm_update_port_status() updates mm->verify_status, but when the verification state of a port changes, an IRQ isn't emitted, but rather, only when the verification state reaches one of the final states (like DISABLED, FAILED, SUCCEEDED) - things that would affect mm->tx_active, which is what the IRQ *is* actually emitted for. That is to say, user space may miss reports of an intermediary MAC Merge verification state (like from INITIAL to VERIFYING), unless there was an IRQ notifying the driver of the change in mm->tx_active as well. This is not a huge deal, but for reliable reporting to user space, let's call ocelot_mm_update_port_status() synchronously from ocelot_port_get_mm(), which makes user space see the current MM status. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net: mscc: ocelot: optimize ocelot_mm_irq()  [Vladimir Oltean, 1 file, -2/+28]
The MAC Merge IRQ of all ports is shared with the PTP TX timestamp IRQ of all ports, which means that currently, when a PTP TX timestamp is generated, felix_irq_handler() also polls for the MAC Merge layer status of all ports, looking for changes. This makes the kernel do more work, and under certain circumstances may make ptp4l require a tx_timestamp_timeout argument higher than before.

Changes to the MAC Merge layer status are only to be expected under certain conditions - its TX direction needs to be enabled - so we can check early if that is the case, and omit register access otherwise. Make ocelot_mm_update_port_status() skip register access if mm->tx_enabled is unset, and also call it once more, outside IRQ context, from ocelot_port_set_mm(), when mm->tx_enabled transitions from true to false, because an IRQ is also expected in that case.

Also, a port may have its MAC Merge layer enabled but it may not have generated the interrupt. In that case, there's no point in writing to DEV_MM_STATUS to acknowledge that IRQ. We can reduce the number of register writes per port with MM enabled by keeping an "ack" variable which collects the "write-one-to-clear" bits. Those are 3 in number: PRMPT_ACTIVE_STICKY, UNEXP_RX_PFRM_STICKY and UNEXP_TX_PFRM_STICKY. The other fields in DEV_MM_STATUS are read-only and it doesn't matter what is written to them, so writing zero is just fine.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
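The "ack" pattern described above looks roughly like this; the sticky-bit macro names are taken from the commit text and expanded by convention, so treat this as a sketch rather than the exact driver code:

    u32 val, ack = 0;

    val = ocelot_port_readl(ocelot_port, DEV_MM_STATUS);

    /* collect only the write-one-to-clear sticky bits that are set */
    ack |= val & DEV_MM_STAT_MM_STATUS_PRMPT_ACTIVE_STICKY;
    ack |= val & DEV_MM_STAT_MM_STATUS_UNEXP_RX_PFRM_STICKY;
    ack |= val & DEV_MM_STAT_MM_STATUS_UNEXP_TX_PFRM_STICKY;

    /* write back only when there is something to acknowledge */
    if (ack)
            ocelot_port_writel(ocelot_port, ack, DEV_MM_STATUS);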
2023-04-18  net: mscc: ocelot: remove struct ocelot_mm_state :: lock  [Vladimir Oltean, 1 file, -12/+8]
Unfortunately, the workarounds for the hardware bugs make it pointless to keep fine-grained locking for the MAC Merge state of each port.

Our vsc9959_cut_through_fwd() implementation requires ocelot->fwd_domain_lock to be held, in order to serialize with changes to the bridging domains and to port speed changes (which affect which ports can be cut-through). Simultaneously, the traffic classes which can be cut-through cannot be preemptible at the same time, and this will depend on the MAC Merge layer state (which changes from threaded interrupt context).

Since vsc9959_cut_through_fwd() would have to hold the mm->lock of all ports for a correct and race-free implementation with respect to ocelot_mm_irq(), in practice it means that any time a port's mm->lock is held, it would potentially block holders of ocelot->fwd_domain_lock. In the interest of simple locking rules, make all MAC Merge layer state changes (and preemptible traffic class changes) be serialized by the ocelot->fwd_domain_lock.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net: mscc: ocelot: export a single ocelot_mm_irq()  [Vladimir Oltean, 1 file, -2/+10]
When the switch emits an IRQ, we don't know what caused it, and we iterate through all ports to check the MAC Merge status. Move that iteration inside the ocelot lib; we will change the locking in a future change, and it would be good to encapsulate that lock completely within the ocelot lib.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net: stmmac: add Rx HWTS metadata to XDP ZC receive pkt  [Song Yoong Siang, 1 file, -0/+22]
Add receive hardware timestamp metadata support via kfunc to XDP Zero Copy receive packets.
Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net: stmmac: add Rx HWTS metadata to XDP receive pkt  [Song Yoong Siang, 2 files, -1/+42]
Add receive hardware timestamp metadata support via kfunc to XDP receive packets.
Suggested-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com>
Acked-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net: stmmac: introduce wrapper for struct xdp_buff  [Song Yoong Siang, 2 files, -9/+13]
Introduce struct stmmac_xdp_buff as a preparation to support XDP Rx metadata via kfuncs.
Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
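The wrapper is the usual "driver context rides along with the xdp_buff" pattern; a sketch where the exact field set is an assumption based on what the metadata kfuncs need:

    struct stmmac_xdp_buff {
            struct xdp_buff xdp;        /* must stay first, for casts */
            struct stmmac_priv *priv;   /* back-reference for kfuncs */
            struct dma_desc *desc;      /* descriptor holding HW metadata */
            struct dma_desc *ndesc;
    };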
2023-04-18  net/mlx5e: Accept tunnel mode for IPsec packet offload  [Leon Romanovsky, 1 file, -7/+8]
Open the mlx5 driver to accept IPsec tunnel mode.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net/mlx5e: Create IPsec table with tunnel support only when encap is disabled  [Leon Romanovsky, 3 files, -3/+39]
Current hardware doesn't support double encapsulation, which happens when IPsec packet offload tunnel mode is configured together with the eswitch encap option. Any attempt to add a new SA/policy after setting encap mode will generate the following FW syndrome:

  mlx5_core 0000:08:00.0: mlx5_cmd_out_err:803:(pid 1904): CREATE_FLOW_TABLE(0x930) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0xa43321), err(-22)

Make sure that we block encap changes before creating flow steering tables. This is applicable only for packet offload in tunnel mode; packet offload in transport mode and crypto offload don't have such a limitation, as they don't perform encapsulation.
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net/mlx5: Allow blocking encap changes in eswitch  [Leon Romanovsky, 2 files, -0/+62]
The existing eswitch encap option enables header encapsulation. Unfortunately, currently available hardware isn't able to perform double encapsulation, which can happen once IPsec packet offload tunnel mode is used together with encap mode set to BASIC. So, as a guard against misconfiguration, provide an option to block encap changes, which will be used for IPsec packet offload.
Reviewed-by: Emeel Hakim <ehakim@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
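A hedged sketch of what such a blocking helper could look like; the structure mirrors the description above, but the exact checks, locking and field names are assumptions:

    int mlx5_eswitch_block_encap(struct mlx5_core_dev *dev)
    {
            struct mlx5_eswitch *esw = dev->priv.eswitch;

            down_write(&esw->mode_lock);
            /* refuse to block while encap is already enabled */
            if (esw->offloads.encap != DEVLINK_ESWITCH_ENCAP_MODE_NONE) {
                    up_write(&esw->mode_lock);
                    return -EOPNOTSUPP;
            }
            /* counted, so the devlink encap knob can reject changes */
            esw->offloads.num_block_encap++;
            up_write(&esw->mode_lock);
            return 0;
    }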
2023-04-18  net/mlx5e: Listen to ARP events to update IPsec L2 headers in tunnel mode  [Leon Romanovsky, 2 files, -7/+130]
In IPsec packet offload mode, all header manipulations are performed by hardware, which is responsible for adding/removing the L2 header with the source and destination MACs. CX-7 devices don't support offload of in-kernel routing functionality, so the HW needs external help to fill in the other side's MAC, as it isn't available to the HW.

As a solution, let's listen to neigh ARP updates and reconfigure the IPsec rules on the fly once new MAC data arrives.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net/mlx5e: Support IPsec TX packet offload in tunnel mode  [Leon Romanovsky, 2 files, -0/+64]
Extend the mlx5 driver with logic to support IPsec TX packet offload in tunnel mode.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net/mlx5e: Support IPsec RX packet offload in tunnel mode  [Leon Romanovsky, 3 files, -0/+88]
Extend the mlx5 driver with logic to support IPsec RX packet offload in tunnel mode.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net/mlx5e: Prepare IPsec packet reformat code for tunnel mode  [Leon Romanovsky, 3 files, -21/+63]
Refactor the setup_pkt_reformat() function to accommodate a future extension to support tunnel mode.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net/mlx5e: Configure IPsec SA tables to support tunnel mode  [Leon Romanovsky, 1 file, -8/+15]
Create SA flow steering tables for both RX and TX with the tunnel reformat property. This allows adding and deleting the extra headers needed for tunnel mode.
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-18  net/mlx5e: Check IPsec packet offload tunnel capabilities  [Leon Romanovsky, 2 files, -0/+7]
Validate tunnel mode support for IPsec packet offload.
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-17  net: lan966x: Fix lan966x_ifh_get  [Horatiu Vultur, 1 file, -1/+1]
From time to time, it was observed that the nanosecond part of the received timestamp, which is extracted from the IFH, was actually bigger than 1 second. When the full received timestamp was then calculated, based on the nanosecond part from the IFH and the second part read from HW, it was wrong.

The issue is in the function lan966x_ifh_get, which extracts information from an IFH (a byte array) and returns the value in a u64. When extracting the timestamp value from the IFH, which starts at bit 192 and has a size of 32 bits, if the most significant bit of the timestamp was set, that bit was sign-extended and the return value became 0xffffffff... . The reason is that constants without any suffix are treated as signed ints, which is why '1 << 31' becomes 0xffffffff80000000 when widened. This is fixed by adding the suffix 'ULL' to the 1.
Fixes: fd7627833ddf ("net: lan966x: Stop using packing library")
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
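The sign extension can be reproduced in a few lines of userspace C (illustrative; 1 << 31 overflows a signed int, and the values shown are the common two's-complement behavior of gcc and clang):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* '1' is a signed int, so shifting into bit 31 yields a
             * negative value that sign-extends when widened to 64 bits */
            uint64_t bad  = (uint64_t)(1 << 31);    /* 0xffffffff80000000 */
            uint64_t good = (uint64_t)(1ULL << 31); /* 0x0000000080000000 */

            printf("%016llx\n%016llx\n",
                   (unsigned long long)bad, (unsigned long long)good);
            return 0;
    }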
2023-04-17Merge tag 'mlx5-updates-2023-04-14' of ↵David S. Miller15-89/+1025
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux mlx5-updates-2023-04-14 Yevgeny Kliteynik Says: ======================= SW Steering: Support pattern/args modify_header actions The following patch series adds support for a new pattern/arguments type of modify_header actions. Starting with ConnectX-6 DX, we use a new design of modify_header FW object. The current modify_header object allows for having only limited number of these FW objects, which means that we are limited in the number of offloaded flows that require modify_header action. The new approach comprises of two types of objects: pattern and argument. Pattern holds header modification templates, later used with corresponding argument object to create complete header modification actions. The pattern indicates which headers are modified, while the arguments provide the specific values. Therefore a single pattern can be used with different arguments in different flows, enabling offloading of large number of modify_header flows. - Patch 1, 2: Add ICM pool for modify-header-pattern objects and implement patterns cache, allowing patterns reuse for different flows - Patch 3: Allow for chunk allocation separately for STEv0 and STEv1 - Patch 4: Read related device capabilities - Patch 5: Add create/destroy functions for the new general object type - Patch 6: Add support for writing modify header argument to ICM - Patch 7, 8: Some required fixes to support pattern/arg - separate read buffer from the write buffer and fix QP continuous allocation - Patch 9: Add pool for modify header arg objects - Patch 10, 11, 12: Implement MODIFY_HEADER and TNL_L3_TO_L2 actions with the new patterns/args design - Patch 13: Optimization - set modify header action of size 1 directly on the STE instead of separate pattern/args combination - Patch 14: Adjust debug dump for patterns/args - Patch 15: Enable patterns and arguments for supporting devices =======================