path: root/drivers/net/ethernet
Age | Commit message | Author | Files | Lines
2023-04-15 | bnxt: hook NAPIs to page pools | Jakub Kicinski | 1 | -0/+1
bnxt has a 1:1 mapping of page pools and NAPIs, so it's safe to hook them up together. Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Tested-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
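For context, a minimal sketch of the hookup; the bnxt-style names are illustrative, while the .napi member is the page_pool API field this change relies on:

	/* with a 1:1 ring/NAPI mapping, the pool can be bound to its NAPI
	 * at creation time so the recycling fast path knows which softirq
	 * context owns it */
	struct page_pool_params pp = { 0 };

	pp.pool_size = bp->rx_ring_size;
	pp.nid = dev_to_node(&bp->pdev->dev);
	pp.dev = &bp->pdev->dev;
	pp.napi = &rxr->bnapi->napi;	/* the one NAPI driving this ring */

	rxr->page_pool = page_pool_create(&pp);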
2023-04-15 | net/mlx5: DR, Enable patterns and arguments for supporting devices | Yevgeny Kliteynik | 1 | -1/+2
Check if patterns and arguments for modify header action are supported and enable them accordingly. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Add support for the pattern/arg parameters in debug dump | Yevgeny Kliteynik | 1 | -3/+27
Support the pattern/args-based MODIFY_HDR and TNL_L3_TO_L2 actions in dbg dump Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Modify header action of size 1 optimization | Yevgeny Kliteynik | 3 | -25/+52
Set a modify header action of size 1 directly on the STE for supporting devices, thus reducing the number of hops and cache misses. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Support decap L3 action using pattern / arg mechanism | Yevgeny Kliteynik | 1 | -16/+5
Use the new accelerated action for decap L3 on the RX side, using the same pattern/argument mechanism as the modify-header action. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Apply new accelerated modify action and decapl3 | Yevgeny Kliteynik | 2 | -6/+47
If there is support for pattern/args, use the new accelerated modify header action for modify header and decap L3 actions. Otherwise fall back to the old modify-header implementation. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Add modify header argument pointer to actions attributes | Yevgeny Kliteynik | 3 | -7/+42
While building the actions, add the pointer of the arguments for accelerated modify list action into the action's attributes. This will be used later on while building the specific STE for this action. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Add modify header arg pool mechanism | Yevgeny Kliteynik | 4 | -1/+304
Add a new mechanism for handling arguments for the modify-header action. The new "accelerated modify-header" action takes its arguments from an area separate from the pattern; this area is accessed via general objects, and handling of these objects is done through a pool-manager struct. When the new header patterns are supported, a few pools for argument creation are created while loading the domain. Requests to allocate/deallocate arg objects go through the pool manager API. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
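A hypothetical sketch of the flow this describes; the function and field names below are assumptions for illustration, not the exact mlx5 API:

	/* at domain load: create a few pools of modify-header arg objects */
	dmn->arg_mgr = dr_arg_mgr_create(dmn->mdev);

	/* per action: allocate an arg object of the needed size ... */
	arg = dr_arg_mgr_get_obj(dmn->arg_mgr, num_of_actions);
	if (!arg)
		return -ENOMEM;

	/* ... and return it to its pool when the action is destroyed */
	dr_arg_mgr_put_obj(dmn->arg_mgr, arg);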
2023-04-15 | net/mlx5: DR, Fix QP continuous allocation | Yevgeny Kliteynik | 1 | -1/+1
When allocating a QP we allocate an RQ and an SQ; the RQ is stored first in memory, followed by the SQ. This allocation is not physically contiguous - it may span across different physical pages. SW Steering code always writes in pairs: 1 BB write + 1 BB read, or 2 contiguous BBs of a GTA WQE. This led to an issue where the RQ allocation was 4x16, which equals 1 WQE BB, causing a 1 BB offset in the page and splitting the GTA WQE between different physical pages. The solution is to create the RQ with an even number of BBs and to have the RQ aligned to a page. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
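A sketch of the sizing rule the fix implies; names are illustrative, and a basic block (BB) is assumed to be 64 bytes:

	#define DR_WQE_BB_SIZE	64

	rq_bb_cnt = DIV_ROUND_UP(rq_byte_size, DR_WQE_BB_SIZE);
	if (rq_bb_cnt & 1)		/* keep an even number of BBs ... */
		rq_bb_cnt++;
	/* ... and keep the RQ page-aligned, so a 2-BB GTA WQE never
	 * straddles a physical page boundary */
	rq_byte_size = ALIGN(rq_bb_cnt * DR_WQE_BB_SIZE, PAGE_SIZE);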
2023-04-15 | net/mlx5: DR, Read ICM memory into dedicated buffer | Yevgeny Kliteynik | 2 | -9/+17
Instead of using the write buffer for reading, use a dedicated buffer only for reading ICM memory. Due to the new support for args, pending_wc can be an odd number, and when reading into the same write buffer it is possible to overwrite the next write on the same slot. For example, with pending_wc at 17, the buffer for write is: | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | and the requests are: r wr wr wr wr wr wr wr wr. Here the first read is written into the last write slot (since the same buffer is used for read and write) before that write reaches the HW, leaving wrong data in the ICM area. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Add support for writing modify header argument | Yevgeny Kliteynik | 2 | -20/+150
The accelerated modify header arguments are written to the HW area with a special WQE and a specific data format. A new function was added to support writing this new argument type. Note that a GTA WQE is larger than READ and WRITE WQEs, so the queue management logic was updated to support this. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Add create/destroy for modify-header-argument general object | Yevgeny Kliteynik | 2 | -0/+49
Add functions for creation/destruction of the new type of general object. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Check for modify_header_argument device capabilities | Yevgeny Kliteynik | 2 | -0/+14
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Split chunk allocation to HW-dependent ways | Yevgeny Kliteynik | 7 | -18/+98
This way we are able to allocate chunks for modify_headers of two types: STEv0, allocated from the action area, and STEv1, where chunks are allocated from the special area for patterns. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-15 | net/mlx5: DR, Add cache for modify header pattern | Yevgeny Kliteynik | 3 | -0/+236
Starting with ConnectX-6 Dx, we use a new design of the modify_header FW object. The current modify_header object allows only a limited number of FW objects, so the new pattern-and-argument design allows pattern reuse, saving memory and permitting a large number of modify_header objects. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
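A sketch of the reuse logic this describes; the function names are illustrative, not the exact mlx5 ones:

	/* look up an existing pattern before creating a new FW object */
	pattern = dr_ptrn_find_cached(mgr, num_actions, hw_actions);
	if (pattern) {
		refcount_inc(&pattern->refcount);
		return pattern;
	}
	return dr_ptrn_create_and_cache(mgr, num_actions, hw_actions);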
2023-04-15 | net/mlx5: DR, Move ACTION_CACHE_LINE_SIZE macro to header | Yevgeny Kliteynik | 2 | -6/+5
Move ACTION_CACHE_LINE_SIZE macro to header to be used by the pattern functions as well. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-04-14 | net: mana: Add support for jumbo frame | Haiyang Zhang | 2 | -24/+215
During probe, get the hardware-allowed max MTU by querying the device configuration. Users can select an MTU up to the device limit. When XDP is in use, limit MTU settings so the buffer size is within one page; when the MTU is set to too large a value, XDP is not allowed to run. Also, to prevent an MTU change from failing and leaving the NIC in a bad state, pre-allocate all buffers before starting the change, so that under low-memory conditions it returns an error without affecting the NIC. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
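A sketch of the "pre-allocate, then commit" sequence the message describes; the helper names are assumptions modeled on the text:

	static int mana_change_mtu(struct net_device *ndev, int new_mtu)
	{
		struct mana_port_context *apc = netdev_priv(ndev);
		int err;

		/* reserve buffers for the new MTU first: under memory
		 * pressure we fail here and the NIC keeps running with
		 * the old MTU */
		err = mana_pre_alloc_rxbufs(apc, new_mtu);
		if (err)
			return err;

		err = mana_detach(ndev, false);
		if (err)
			goto out;

		ndev->mtu = new_mtu;
		err = mana_attach(ndev);
	out:
		mana_pre_dealloc_rxbufs(apc);
		return err;
	}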
2023-04-14 | net: mana: Enable RX path to handle various MTU sizes | Haiyang Zhang | 1 | -10/+28
Update RX data path to allocate and use RX queue DMA buffers with proper size based on potentially various MTU sizes. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-14 | net: mana: Refactor RX buffer allocation code to prepare for various MTU | Haiyang Zhang | 1 | -64/+90
Move out common buffer allocation code from mana_process_rx_cqe() and mana_alloc_rx_wqe() to helper functions. Refactor related variables so they can be changed in one place, and buffer sizes are in sync. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-04-14 | net: mana: Use napi_build_skb in RX path | Haiyang Zhang | 1 | -1/+1
Use napi_build_skb() instead of build_skb() to take advantage of the NAPI percpu caches to obtain skbuff_head. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
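In practice this is a one-line substitution in the RX completion path; the buffer-size argument shown here is illustrative:

	-	skb = build_skb(buf_va, PAGE_SIZE);
	+	skb = napi_build_skb(buf_va, PAGE_SIZE);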
2023-04-14 | Merge tag 'mlx5-updates-2023-04-11' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux | Jakub Kicinski | 19 | -162/+1737
Saeed Mahameed says:
====================
mlx5-updates-2023-04-11

1) Vlad adds support for Linux bridge multicast offload (patches #1 through #9).

Synopsis, Vlad says:
==============
Implement support of bridge multicast offload in mlx5. Handle the port object attribute SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED notification to toggle multicast offload and bridge snooping support on the bridge. Handle the port object SWITCHDEV_OBJ_ID_PORT_MDB notification to attach a bridge port to an MDB.

Steering architecture: the existing offload infrastructure relies on two levels of flow tables - bridge ingress and egress. For multicast offload the architecture is extended with an additional layer of per-port multicast replication tables. Such tables filter loopback traffic (so packets are not replicated to their source port) and pop VLAN headers for "untagged" VLANs. The tables are referenced by the MDB rules in the egress table. An MDB egress rule can point to multiple per-port multicast tables, which causes matching multicast traffic to be replicated to all of them and, consequently, to several bridge ports.

[An ASCII diagram, flattened during extraction, showed the flow: INGRESS table rules (src_mac/VLAN -> goto egress) feed the EGRESS table, whose dst_mac/VLAN rules pop the VLAN and go to individual ports, and whose dst_mac=MDB1 rule replicates to the PORT 1 and PORT 2 multicast tables; each per-port table drops src_port=dst_port loopback traffic, pops "untagged" VLANs, and forwards matchall traffic to its port.]

Patches overview:

- Patch 1 adds hardware definition bits for capabilities required to replicate multicast packets to multiple per-port tables. These bits are used by the following patches to only attempt multicast offload if firmware and hardware provide the necessary support.
- Patches 2-4 are preparations and refactoring.
- Patch 5 implements the necessary infrastructure to toggle multicast offload via the SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED port object attribute notification. This also enables IGMP and MLD snooping.
- Patch 6 implements per-port multicast replication tables. It only supports filtering of loopback packets.
- Patch 7 extends per-port multicast tables with VLAN pop support for 'untagged' VLANs.
- Patch 8 handles SWITCHDEV_OBJ_ID_PORT_MDB port object notifications. It creates MDB replication rules in the egress table that can replicate packets to multiple per-port multicast tables.
- Patch 9 adds tracepoints for MDB events.
==============

2) Parav creates a new allocation profile for SFs, to save on memory.

3) Yevgeny provides some initial patches for the upcoming software steering support for the new pattern/arguments type of modify_header actions.

Starting with ConnectX-6 Dx, we use a new design of the modify_header FW object. The current modify_header object allows only a limited number of these FW objects, which means that we are limited in the number of offloaded flows that require a modify_header action.

As a preparation, Yevgeny provides the following 4 patches:
- Patch 1: Add required mlx5_ifc HW bits
- Patches 2, 3: Add the new WQE type and opcode that is required for pattern/arg support and add appropriate support in dr_send.c
- Patch 4: Add an ICM pool for modify-header-pattern objects and implement a patterns cache, allowing pattern reuse for different flows

* tag 'mlx5-updates-2023-04-11' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: DR, Add modify-header-pattern ICM pool
  net/mlx5: DR, Prepare sending new WQE type
  net/mlx5: Add new WQE for updating flow table
  net/mlx5: Add mlx5_ifc bits for modify header argument
  net/mlx5: DR, Set counter ID on the last STE for STEv1 TX
  net/mlx5: Create a new profile for SFs
  net/mlx5: Bridge, add tracepoints for multicast
  net/mlx5: Bridge, implement mdb offload
  net/mlx5: Bridge, support multicast VLAN pop
  net/mlx5: Bridge, add per-port multicast replication tables
  net/mlx5: Bridge, snoop igmp/mld packets
  net/mlx5: Bridge, extract code to lookup parent bridge of port
  net/mlx5: Bridge, move additional data structures to priv header
  net/mlx5: Bridge, increase bridge tables sizes
  net/mlx5: Add mlx5_ifc definitions for bridge multicast support
====================

Link: https://lore.kernel.org/r/20230412040752.14220-1-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | net: enetc: add support for preemptible traffic classes | Vladimir Oltean | 3 | -0/+27
PFs which support the MAC Merge layer also have a set of 8 registers called "Port traffic class N frame preemption register (PTC0FPR - PTC7FPR)". Through these, a traffic class (a group of TX rings with the same dequeue priority) can be mapped to the eMAC or to the pMAC. There's nothing particularly spectacular here. We should probably only commit the preemptible TCs to hardware once the MAC Merge layer becomes active, but unlike Felix, we don't have an IRQ that notifies us of that. We'd have to sleep for up to verifyTime (127 ms) to wait for a resolution coming from the verification state machine; not only from the ndo_setup_tc() code path, but also from enetc_mm_link_state_update(). Since it's relatively complicated and has a relatively small benefit, I'm not doing it. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Ferenc Fejes <fejes@inf.elte.hu> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
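A sketch of the register programming described above; the ENETC_PTCFPR(i) macro and the FPE bit name are assumptions based on the text:

	static void enetc_set_preemptible_tcs(struct enetc_hw *hw, u8 tc_mask)
	{
		int tc;

		/* map each traffic class to the pMAC (preemptible) or
		 * the eMAC (express) */
		for (tc = 0; tc < 8; tc++)
			enetc_port_wr(hw, ENETC_PTCFPR(tc),
				      (tc_mask & BIT(tc)) ? ENETC_FPE : 0);
	}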
2023-04-14 | net: enetc: rename "mqprio" to "qopt" | Vladimir Oltean | 1 | -4/+5
To gain access to the larger encapsulating structure, which has the type tc_mqprio_qopt_offload, rename just the "mqprio" variable to "qopt". Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Ferenc Fejes <fejes@inf.elte.hu> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | net: macb: Optimize reading HW timestamp | Harini Katakam | 1 | -2/+2
The seconds input from the BD (6 bits) just needs to be ORed with the upper bits from the timer in this function, which avoids an addition operation every single time. Seconds rollover handling is left untouched. Signed-off-by: Harini Katakam <harini.katakam@xilinx.com> Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | net: macb: Enable PTP unicast | Harini Katakam | 2 | -2/+15
Enable transmission and reception of PTP unicast packets by updating PTP unicast config bit and setting current HW mac address as allowed address in PTP unicast filter registers. Signed-off-by: Harini Katakam <harini.katakam@xilinx.com> Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | net: macb: Update gem PTP support check | Harini Katakam | 2 | -3/+3
There are currently two checks for PTP functionality - one on GEM capability and another on the kernel config option. Combine them into a single function as there's no use case where gem_has_ptp is TRUE and MACB_USE_HWSTAMP is false. Signed-off-by: Harini Katakam <harini.katakam@amd.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
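A sketch of what the combined check could look like, per the description (not necessarily the exact macb code):

	static inline bool gem_has_ptp(struct macb *bp)
	{
		/* no use case has the capability without the config,
		 * so gate on both in one place */
		return IS_ENABLED(CONFIG_MACB_USE_HWSTAMP) &&
		       (bp->caps & MACB_CAPS_GEM_HAS_PTP);
	}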
2023-04-14 | net: mscc: ocelot: fix ineffective WARN_ON() in ocelot_stats.c | Vladimir Oltean | 1 | -6/+11
It is hopefully now clear that, since "last" and "layout[i].reg" are enum types and not addresses, the existing WARN_ON() is ineffective in checking that the _addresses_ are sorted in the proper order. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | net: mscc: ocelot: strengthen type of "int i" in ocelot_stats.c | Vladimir Oltean | 1 | -4/+5
The "int i" used to index the struct ocelot_stat_layout array actually has a specific type: enum ocelot_stat. Use it, so that the WARN() comment from ocelot_prepare_stats_regions() makes more sense. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | net: mscc: ocelot: strengthen type of "u32 reg" and "u32 base" in ocelot_stats.c | Vladimir Oltean | 1 | -3/+3
Use the specific enum ocelot_reg to make it clear that the region registers are encoded and not plain addresses. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | net: mscc: ocelot: remove blank line at the end of ocelot_stats.c | Vladimir Oltean | 1 | -1/+0
Commit a3bb8f521fd8 ("net: mscc: ocelot: remove unnecessary exposure of stats structures") made an unnecessary change which was to add a new line at the end of ocelot_stats.c. Remove it. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Acked-by: Colin Foster <colin.foster@in-advantage.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | net: mscc: ocelot: debugging print for statistics regions | Vladimir Oltean | 1 | -0/+9
To make it easier to debug future issues with statistics counters not getting aggregated properly into regions, like what happened in commit 6acc72a43eac ("net: mscc: ocelot: fix stats region batching"), add some dev_dbg() prints which show the regions that were dynamically determined. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | net: mscc: ocelot: refactor enum ocelot_reg decoding to helper | Vladimir Oltean | 2 | -14/+25
ocelot_io.c duplicates the decoding of an enum ocelot_reg (which holds an enum ocelot_target in the upper bits and an index into a regmap array in the lower bits) 4 times. We'd like to reuse that logic once more, from ocelot.c. In order to do that, let's consolidate the existing 4 instances into a header accessible both by ocelot.c as well as by ocelot_io.c. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
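A sketch of the consolidated helper; TARGET_OFFSET and REG_MASK are assumed names for the bit-split described above:

	static inline u32 ocelot_reg_to_addr(struct ocelot *ocelot,
					     enum ocelot_reg reg)
	{
		enum ocelot_target target = reg >> TARGET_OFFSET; /* upper bits */
		u32 index = reg & REG_MASK;			  /* lower bits */

		return ocelot->map[target][index];
	}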
2023-04-14 | net: mscc: ocelot: strengthen type of "u32 reg" in I/O accessors | Vladimir Oltean | 1 | -9/+11
The "u32 reg" argument that is passed to these functions is not a plain address, but rather a driver-specific encoding of another enum ocelot_target target in the upper bits, and an index into the u32 ocelot->map[target][] array in the lower bits. That encoded value takes the type "enum ocelot_reg" and is what is passed to these I/O functions, so let's actually use that to prevent type confusion. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-14 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 12 | -75/+180
Conflicts:
  tools/testing/selftests/net/config
    62199e3f1658 ("selftests: net: Add VXLAN MDB test")
    3a0385be133e ("selftests: add the missing CONFIG_IP_SCTP in net config")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-13 | mlx4: bpf_xdp_metadata_rx_hash add xdp rss hash type | Jesper Dangaard Brouer | 1 | -1/+18
Update the API for bpf_xdp_metadata_rx_hash() with an argument for the XDP RSS hash type, derived by matching individual Completion Queue Entry (CQE) status bits. Fixes: ab46182d0dcb ("net/mlx4_en: Support RX XDP metadata") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/168132893562.340624.12779118462402031248.stgit@firesoul Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-04-13 | mlx5: bpf_xdp_metadata_rx_hash add xdp rss hash type | Jesper Dangaard Brouer | 1 | -1/+59
Update the API for bpf_xdp_metadata_rx_hash() with an argument for the XDP RSS hash type, derived via a mapping table. The mlx5 hardware can also identify and RSS-hash IPsec traffic; this indicates that the hash includes the SPI (Security Parameters Index) as part of the IPsec hash. Extend the XDP core enum xdp_rss_hash_type with an IPsec hash type. Fixes: bc8d405b1ba9 ("net/mlx5e: Support RX XDP metadata") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/168132892548.340624.11185734579430124869.stgit@firesoul Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-04-13 | xdp: rss hash types representation | Jesper Dangaard Brouer | 3 | -3/+6
The RSS hash type specifies what portion of packet data the NIC hardware used when calculating the RSS hash value. The RSS types are focused on Internet traffic protocols at OSI layers L3 and L4. L2 (e.g. ARP) often gets hash value zero and no RSS type. L3 is focused on IPv4 vs. IPv6, and L4 is primarily TCP vs. UDP, though some hardware supports SCTP. Hardware RSS types are encoded differently by each hardware NIC. Most hardware represents the RSS hash type as a number. Determining L3 vs. L4 often requires a mapping table, as there often isn't a pattern or sorting according to ISO layer. The patch introduces an XDP RSS hash type (enum xdp_rss_hash_type) that contains both bits for the L3/L4 types and combinations to be used by drivers for their mapping tables. The enum xdp_rss_type_bits gets exposed to BPF via BTF, and it is up to the BPF programmer to match using these defines. This proposal changes the kfunc API bpf_xdp_metadata_rx_hash(), adding a pointer argument that provides the RSS hash type. The signature of all xmo_rx_hash calls in drivers is changed to make it compile. The RSS type implementations for each driver come as separate patches. Fixes: 3d76a4d3d4e5 ("bpf: XDP metadata RX kfuncs") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/168132892042.340624.582563003880565460.stgit@firesoul Signed-off-by: Alexei Starovoitov <ast@kernel.org>
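A sketch of the representation described above: individual L3/L4 bits plus pre-combined values for driver mapping tables (the exact bit positions here are illustrative):

	enum xdp_rss_hash_type {
		XDP_RSS_L3_IPV4		= BIT(0),
		XDP_RSS_L3_IPV6		= BIT(1),

		XDP_RSS_L4_TCP		= BIT(3),
		XDP_RSS_L4_UDP		= BIT(4),
		XDP_RSS_L4_SCTP		= BIT(5),

		XDP_RSS_TYPE_L4_IPV4_TCP = XDP_RSS_L3_IPV4 | XDP_RSS_L4_TCP,
		XDP_RSS_TYPE_L4_IPV6_UDP = XDP_RSS_L3_IPV6 | XDP_RSS_L4_UDP,
	};

	/* the kfunc gains an out-parameter for the type */
	int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash,
				     enum xdp_rss_hash_type *rss_type);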
2023-04-13 | net: macb: fix a memory corruption in extended buffer descriptor mode | Roman Gushchin | 1 | -0/+4
For quite some time we were chasing a bug which looked like a sudden permanent failure of networking and mmc on some of our devices. The bug was very sensitive to any software changes and even more to any kernel debug options. Finally we got a setup where the problem was reproducible with CONFIG_DMA_API_DEBUG=y and it revealed the issue with the RX DMA:

[   16.992082] ------------[ cut here ]------------
[   16.996779] DMA-API: macb ff0b0000.ethernet: device driver tries to free DMA memory it has not allocated [device address=0x0000000875e3e244] [size=1536 bytes]
[   17.011049] WARNING: CPU: 0 PID: 85 at kernel/dma/debug.c:1011 check_unmap+0x6a0/0x900
[   17.018977] Modules linked in: xxxxx
[   17.038823] CPU: 0 PID: 85 Comm: irq/55-8000f000 Not tainted 5.4.0 #28
[   17.045345] Hardware name: xxxxx
[   17.049528] pstate: 60000005 (nZCv daif -PAN -UAO)
[   17.054322] pc : check_unmap+0x6a0/0x900
[   17.058243] lr : check_unmap+0x6a0/0x900
[   17.062163] sp : ffffffc010003c40
[   17.065470] x29: ffffffc010003c40 x28: 000000004000c03c
[   17.070783] x27: ffffffc010da7048 x26: ffffff8878e38800
[   17.076095] x25: ffffff8879d22810 x24: ffffffc010003cc8
[   17.081407] x23: 0000000000000000 x22: ffffffc010a08750
[   17.086719] x21: ffffff8878e3c7c0 x20: ffffffc010acb000
[   17.092032] x19: 0000000875e3e244 x18: 0000000000000010
[   17.097343] x17: 0000000000000000 x16: 0000000000000000
[   17.102647] x15: ffffff8879e4a988 x14: 0720072007200720
[   17.107959] x13: 0720072007200720 x12: 0720072007200720
[   17.113261] x11: 0720072007200720 x10: 0720072007200720
[   17.118565] x9 : 0720072007200720 x8 : 000000000000022d
[   17.123869] x7 : 0000000000000015 x6 : 0000000000000098
[   17.129173] x5 : 0000000000000000 x4 : 0000000000000000
[   17.134475] x3 : 00000000ffffffff x2 : ffffffc010a1d370
[   17.139778] x1 : b420c9d75d27bb00 x0 : 0000000000000000
[   17.145082] Call trace:
[   17.147524]  check_unmap+0x6a0/0x900
[   17.151091]  debug_dma_unmap_page+0x88/0x90
[   17.155266]  gem_rx+0x114/0x2f0
[   17.158396]  macb_poll+0x58/0x100
[   17.161705]  net_rx_action+0x118/0x400
[   17.165445]  __do_softirq+0x138/0x36c
[   17.169100]  irq_exit+0x98/0xc0
[   17.172234]  __handle_domain_irq+0x64/0xc0
[   17.176320]  gic_handle_irq+0x5c/0xc0
[   17.179974]  el1_irq+0xb8/0x140
[   17.183109]  xiic_process+0x5c/0xe30
[   17.186677]  irq_thread_fn+0x28/0x90
[   17.190244]  irq_thread+0x208/0x2a0
[   17.193724]  kthread+0x130/0x140
[   17.196945]  ret_from_fork+0x10/0x20
[   17.200510] ---[ end trace 7240980785f81d6f ]---

[  237.021490] ------------[ cut here ]------------
[  237.026129] DMA-API: exceeded 7 overlapping mappings of cacheline 0x0000000021d79e7b
[  237.033886] WARNING: CPU: 0 PID: 0 at kernel/dma/debug.c:499 add_dma_entry+0x214/0x240
[  237.041802] Modules linked in: xxxxx
[  237.061637] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 5.4.0 #28
[  237.068941] Hardware name: xxxxx
[  237.073116] pstate: 80000085 (Nzcv daIf -PAN -UAO)
[  237.077900] pc : add_dma_entry+0x214/0x240
[  237.081986] lr : add_dma_entry+0x214/0x240
[  237.086072] sp : ffffffc010003c30
[  237.089379] x29: ffffffc010003c30 x28: ffffff8878a0be00
[  237.094683] x27: 0000000000000180 x26: ffffff8878e387c0
[  237.099987] x25: 0000000000000002 x24: 0000000000000000
[  237.105290] x23: 000000000000003b x22: ffffffc010a0fa00
[  237.110594] x21: 0000000021d79e7b x20: ffffffc010abe600
[  237.115897] x19: 00000000ffffffef x18: 0000000000000010
[  237.121201] x17: 0000000000000000 x16: 0000000000000000
[  237.126504] x15: ffffffc010a0fdc8 x14: 0720072007200720
[  237.131807] x13: 0720072007200720 x12: 0720072007200720
[  237.137111] x11: 0720072007200720 x10: 0720072007200720
[  237.142415] x9 : 0720072007200720 x8 : 0000000000000259
[  237.147718] x7 : 0000000000000001 x6 : 0000000000000000
[  237.153022] x5 : ffffffc010003a20 x4 : 0000000000000001
[  237.158325] x3 : 0000000000000006 x2 : 0000000000000007
[  237.163628] x1 : 8ac721b3a7dc1c00 x0 : 0000000000000000
[  237.168932] Call trace:
[  237.171373]  add_dma_entry+0x214/0x240
[  237.175115]  debug_dma_map_page+0xf8/0x120
[  237.179203]  gem_rx_refill+0x190/0x280
[  237.182942]  gem_rx+0x224/0x2f0
[  237.186075]  macb_poll+0x58/0x100
[  237.189384]  net_rx_action+0x118/0x400
[  237.193125]  __do_softirq+0x138/0x36c
[  237.196780]  irq_exit+0x98/0xc0
[  237.199914]  __handle_domain_irq+0x64/0xc0
[  237.204000]  gic_handle_irq+0x5c/0xc0
[  237.207654]  el1_irq+0xb8/0x140
[  237.210789]  arch_cpu_idle+0x40/0x200
[  237.214444]  default_idle_call+0x18/0x30
[  237.218359]  do_idle+0x200/0x280
[  237.221578]  cpu_startup_entry+0x20/0x30
[  237.225493]  rest_init+0xe4/0xf0
[  237.228713]  arch_call_rest_init+0xc/0x14
[  237.232714]  start_kernel+0x47c/0x4a8
[  237.236367] ---[ end trace 7240980785f81d70 ]---

Lars was fast to find an explanation: according to the datasheet, bit 2 of the RX buffer descriptor entry has a different meaning in the extended mode: "Address [2] of beginning of buffer, or in extended buffer descriptor mode (DMA configuration register [28] = 1), indicates a valid timestamp in the buffer descriptor entry." The macb driver didn't mask this bit while getting an address, which eventually caused memory corruption and a DMA failure. The problem is resolved by explicitly clearing the problematic bit if hw timestamping is used. Fixes: 7b4296148066 ("net: macb: Add support for PTP timestamps in DMA descriptors") Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> Co-developed-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20230412232144.770336-1-roman.gushchin@linux.dev Signed-off-by: Jakub Kicinski <kuba@kernel.org>
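A sketch of the fix as described: in extended buffer descriptor mode, bit 2 of the address word is a timestamp-valid flag rather than an address bit, so it must be cleared before the address is used (names here are illustrative):

	#define GEM_BD_TS_VALID		BIT(2)

	static dma_addr_t gem_rx_buffer_addr(u32 addr_lo, u32 addr_hi,
					     bool hw_ts_enabled)
	{
		dma_addr_t addr = ((dma_addr_t)addr_hi << 32) | addr_lo;

		if (hw_ts_enabled)	/* bit 2 is a flag, not an address bit */
			addr &= ~(dma_addr_t)GEM_BD_TS_VALID;
		return addr;
	}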
2023-04-13 | mlx4: use READ_ONCE/WRITE_ONCE for ring indexes | Jakub Kicinski | 1 | -3/+5
Eric points out that we should make sure that ring index updates are wrapped in the appropriate READ_ONCE/WRITE_ONCE macros. Suggested-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-13 | bnxt: use READ_ONCE/WRITE_ONCE for ring indexes | Jakub Kicinski | 3 | -11/+10
Eric points out that we should make sure that ring index updates are wrapped in the appropriate READ_ONCE/WRITE_ONCE macros. Suggested-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
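The pattern itself is standard kernel practice; a minimal sketch with illustrative bnxt-style field names:

	/* producer side (xmit path): publish the new index without tearing */
	WRITE_ONCE(txr->tx_prod, prod);

	/* consumer side (completion path, possibly on another CPU) */
	cons = READ_ONCE(txr->tx_prod);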
2023-04-13 | net: stmmac: dwmac-qcom-ethqos: Add EMAC3 support | Andrew Halaney | 1 | -21/+101
Add the new programming sequence needed for EMAC3 based platforms such as the sc8280xp family. Signed-off-by: Andrew Halaney <ahalaney@redhat.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Brian Masney <bmasney@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-13 | net: stmmac: dwmac-qcom-ethqos: Use loopback_en for all speeds | Andrew Halaney | 1 | -19/+17
It seems that this variable should be used for all speeds, not just 1000/100. While at it refactor it slightly to be more readable, including fixing the typo in the variable name. Signed-off-by: Andrew Halaney <ahalaney@redhat.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Brian Masney <bmasney@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-13 | net: stmmac: dwmac-qcom-ethqos: Respect phy-mode and TX delay | Andrew Halaney | 1 | -5/+15
The driver currently sets a MAC TX delay of 2 ns no matter what the phy-mode is. If the phy-mode indicates the phy is in charge of the TX delay (rgmii-txid, rgmii-id), don't do it in the MAC. Signed-off-by: Andrew Halaney <ahalaney@redhat.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Brian Masney <bmasney@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-13 | net: stmmac: dwmac4: Allow platforms to specify some DMA/MTL offsets | Andrew Halaney | 5 | -141/+274
Some platforms have dwmac4 implementations that have a different address space layout than the default, resulting in the need to define their own DMA/MTL offsets. Extend the functions to allow a platform driver to indicate what its addresses are, overriding the defaults. Signed-off-by: Andrew Halaney <ahalaney@redhat.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Brian Masney <bmasney@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-13 | net: stmmac: Pass stmmac_priv in some callbacks | Andrew Halaney | 13 | -177/+291
Passing stmmac_priv to some of the callbacks allows hwif implementations to grab some data that platforms can customize. Adjust the callbacks accordingly in preparation of such a platform customization. Signed-off-by: Andrew Halaney <ahalaney@redhat.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Brian Masney <bmasney@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-13 | net: stmmac: Remove some unnecessary void pointers | Andrew Halaney | 8 | -40/+47
There's a few spots in the hardware interface where a void pointer is used, but what's passed in and later cast out is always the same type. Just use the proper type directly. Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Andrew Halaney <ahalaney@redhat.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Brian Masney <bmasney@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-13 | net: stmmac: Fix DMA typo | Andrew Halaney | 1 | -1/+1
DAM is supposed to be DMA. Fix it to improve readability. Signed-off-by: Andrew Halaney <ahalaney@redhat.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Brian Masney <bmasney@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-13 | net: stmmac: Remove unnecessary if statement brackets | Andrew Halaney | 1 | -2/+1
The brackets are unnecessary, remove them to match the coding style used in the kernel. Signed-off-by: Andrew Halaney <ahalaney@redhat.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Brian Masney <bmasney@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-04-13 | net: enetc: workaround for unresponsive pMAC after receiving express traffic | Vladimir Oltean | 1 | -0/+16
I have observed an issue where the RX direction of the LS1028A ENETC pMAC seems unresponsive. The minimal procedure to reproduce the issue is:

1. Connect ENETC port 0 with a loopback RJ45 cable to one of the Felix switch ports (0).
2. Bring the ports up (MAC Merge layer is not enabled on either end).
3. Send a large quantity of unidirectional (express) traffic from Felix to ENETC. I tried altering frame size and frame count, and it doesn't appear to be specific to either of them, but rather, to the quantity of octets received. Lowering the frame count, the minimum quantity of packets to reproduce relatively consistently seems to be around 37000 frames at 1514 octets (w/o FCS) each.
4. Using ethtool --set-mm, enable the pMAC in the Felix and in the ENETC ports, in both RX and TX directions, and with verification on both ends.
5. Wait for verification to complete on both sides.
6. Configure a traffic class as preemptible on both ends.
7. Send some packets again.

The issue is at step 5, where the verification process of ENETC ends (meaning that Felix responds with an SMD-R and ENETC sees the response), but the verification process of Felix never ends (it remains VERIFYING).

If step 3 is skipped, or if ENETC receives less traffic than approximately that threshold, the test runs all the way through (verification succeeds on both ends, preemptible traffic passes fine).

If, between step 4 and 5, the step below is also introduced:

4.1. Disable and re-enable PM0_COMMAND_CONFIG bit RX_EN

then again, the sequence of steps runs all the way through, and verification succeeds, even if there was the previous RX traffic injected into ENETC.

Traffic sent *by* the ENETC port prior to enabling the MAC Merge layer does not seem to influence the verification result, only received traffic does. The LS1028A manual does not mention any relationship between PM0_COMMAND_CONFIG and MMCSR, and the hardware people don't seem to know for now either.

The bit that is toggled to work around the issue is also toggled by enetc_mac_enable(), called from phylink's mac_link_down() and mac_link_up() methods - which is how the workaround was found: verification would work after a link down/up.

Fixes: c7b9e8086902 ("net: enetc: add support for MAC Merge layer") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20230411192645.1896048-1-vladimir.oltean@nxp.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
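A sketch of the workaround as described; the register and bit come from the text (PM0_COMMAND_CONFIG / RX_EN), but the exact macro names are assumptions:

	static void enetc_restart_emac_rx(struct enetc_si *si)
	{
		u32 val = enetc_port_rd(&si->hw, ENETC_PM0_CMD_CFG);

		/* toggle RX_EN to unstick the pMAC RX direction */
		enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val & ~PM0_RX_EN);
		if (val & PM0_RX_EN)
			enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val);
	}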
2023-04-13 | bnxt_en: Allow to set switchdev mode without existing VFs | Ivan Vecera | 3 | -10/+41
Remove the inability of the bnxt_en driver to set the eswitch to switchdev mode without existing VFs by:

1. Allow setting switchdev mode in bnxt_dl_eswitch_mode_set(), so representors are created only when num_vfs > 0; otherwise just set bp->eswitch_mode
2. Do not automatically change bp->eswitch_mode during bnxt_vf_reps_create() and bnxt_vf_reps_destroy() calls, so the eswitch mode is managed only by the user via devlink. Just set bp->eswitch_mode temporarily to legacy to avoid re-opening of representors during destroy.
3. Create representors in bnxt_sriov_enable() if the current eswitch mode is switchdev

Tested by this sequence:
1. Set PF interface up
2. Set PF's eswitch mode to switchdev
3. Created N VFs
4. Checked that N representors were created
5. Set eswitch mode to legacy
6. Checked that representors were deleted
7. Set eswitch mode back to switchdev
8. Checked that representors exist again for VFs
9. Deleted all VFs
10. Checked that all representors were deleted as well
11. Checked that current eswitch mode is still switchdev

Signed-off-by: Ivan Vecera <ivecera@redhat.com> Acked-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com> Link: https://lore.kernel.org/r/20230411120443.126055-1-ivecera@redhat.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>