path: root/drivers/net/ethernet/mellanox
Age | Commit message | Author | Files | Lines
2021-09-16 | mlxbf_gige: clear valid_polarity upon open | David Thompson | 1 | -0/+7

The network interface managed by the mlxbf_gige driver can get into a problem state where traffic does not flow. In this state, the interface will be up and enabled, but will stop processing received packets. This problem state occurs if three specific conditions are met:

1) the driver has received more than (N * RxRingSize) packets but fewer than ((N+1) * RxRingSize) packets, where N is an odd number. Note: the command "ethtool -g <interface>" will display the current receive ring size, which currently defaults to 128
2) the driver's interface was disabled via "ifconfig oob_net0 down" during the window described in #1
3) the driver's interface is re-enabled via "ifconfig oob_net0 up"

This patch ensures that the driver's "valid_polarity" field is cleared during the open() method so that it always matches the receive polarity used by hardware. Without this fix, the driver needs to be unloaded and reloaded to correct the problem state.

Fixes: f92e1869d74e ("Add Mellanox BlueField Gigabit Ethernet driver")
Reviewed-by: Asmaa Mnebhi <asmaa@nvidia.com>
Signed-off-by: David Thompson <davthompson@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

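A minimal sketch of the fix as described, assuming the struct and field names from the commit message (the actual mlxbf_gige source may differ):

    static int mlxbf_gige_open(struct net_device *netdev)
    {
            struct mlxbf_gige *priv = netdev_priv(netdev);

            /* Reset the software polarity tracker on every open so it
             * always matches the receive polarity used by hardware,
             * regardless of how many packets were processed before the
             * interface was taken down.
             */
            priv->valid_polarity = 0;

            /* ... remainder of open(): ring setup, IRQs, etc. ... */
            return 0;
    }
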
2021-09-16 | net/{mlx5|nfp|bnxt}: Remove unnecessary RTNL lock assert | Eli Cohen | 1 | -3/+0

Remove the assert from the callback priv lookup function since it does not require the RTNL lock and is already protected by flow_indr_block_lock. This avoids warnings being emitted to dmesg if the driver registers its callback after an ingress qdisc was created for a netdevice.

The warnings started after the following patch was merged:
commit 74fc4f828769 ("net: Fix offloading indirect devices dependency on qdisc order creation")

Signed-off-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-09-08 | net/mlx5e: Fix condition when retrieving PTP-rqn | Aya Levin | 1 | -1/+1

When activating the PTP-RQ, redirect the RQT from drop-RQ to PTP-RQ. Use mlx5e_channels_get_ptp_rqn to retrieve the rqn. This helper returns a boolean (not a status code), so the caller must treat a return value of 0 as failure. Change the caller's interpretation of the return value accordingly.

Fixes: 43ec0f41fa73 ("net/mlx5e: Hide all implementation details of mlx5e_rx_res")
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

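A hedged illustration of the corrected caller, assuming the helper's signature matches its use in the commit message (bool return, rqn via out-parameter):

    u32 rqn;

    /* mlx5e_channels_get_ptp_rqn() returns a bool: true when a PTP RQN
     * exists, false otherwise. A return of 0 (false) is a failure, not
     * a success status, so it must not be treated as a status code.
     */
    if (!mlx5e_channels_get_ptp_rqn(chs, &rqn))
            return -EINVAL; /* illustrative error handling */
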
2021-09-08 | net/mlx5e: Fix mutual exclusion between CQE compression and HW TS | Aya Levin | 3 | -8/+9

Some profiles of the driver don't support a dedicated PTP-RQ, hence they can't support HW TS and CQE compression simultaneously. When HW TS is enabled, CQE compression is disabled, and it should be restored when HW TS is turned off. Add rx_filter as an input to modifying CQE compression to enforce this restriction.

Fixes: 256f79d13c1d ("net/mlx5e: Fix HW TS with CQE compression according to profile")
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-09-08 | net/mlx5: Fix potential sleeping in atomic context | Maor Gottlieb | 1 | -3/+2

Fix the below flow of sleeping in atomic context by releasing the RCU lock before calling free_match_list().

build_match_list() <- disables preemption
  -> free_match_list()
     -> tree_put_node()
        -> down_write_ref_node() <- takes a write lock

Fixes: 693c6883bbc4 ("net/mlx5: Add hash table for flow groups in flow table")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

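A simplified sketch of the reordering, using the function names from the call flow above (arguments and surrounding fs_core code are simplified assumptions):

    rcu_read_lock();
    err = build_match_list(&match_head, ft, spec); /* walks RCU-protected list */
    rcu_read_unlock();

    /* Only now is it safe to sleep: free_match_list() can end up in
     * down_write_ref_node(), which takes a write lock.
     */
    if (err)
            free_match_list(&match_head);
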
2021-09-08 | net/mlx5: FWTrace, cancel work on alloc pd error flow | Saeed Mahameed | 1 | -1/+2

Handle the error flow of mlx5_core_alloc_pd() failure: read_fw_strings_work must be canceled.

Fixes: c71ad41ccb0c ("net/mlx5: FW tracer, events handling")
Reported-by: Pavel Machek (CIP) <pavel@denx.de>
Suggested-by: Pavel Machek (CIP) <pavel@denx.de>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Aya Levin <ayal@nvidia.com>

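Roughly, the error path now looks like this (simplified sketch; the tracer field names are assumptions based on the commit message):

    err = mlx5_core_alloc_pd(dev, &tracer->buff.pdn);
    if (err) {
            mlx5_core_warn(dev, "FWTracer: alloc pd failed, %d\n", err);
            /* The work item was queued earlier in the init flow; cancel
             * it so it cannot run against a half-torn-down tracer.
             */
            cancel_work_sync(&tracer->read_fw_strings_work);
            return err;
    }
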
2021-09-08 | net/mlx5: Lag, don't update lag if lag isn't supported | Mark Bloch | 1 | -2/+8

In NICs that don't support LAG, the LAG control structure won't be allocated. If it wasn't allocated, LAG doesn't exist and the update can be skipped.

Fixes: cac1eb2cf2e3 ("net/mlx5: Lag, properly lock eswitch if needed")
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

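The shape of the fix is an early bail-out, sketched here with an assumed accessor name:

    struct mlx5_lag *ldev = mlx5_lag_dev(dev);

    if (!ldev)
            return; /* LAG unsupported: structure never allocated,
                     * nothing to update */
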
2021-09-08 | net/mlx5: Fix rdma aux device on devlink reload | Parav Pandit | 1 | -5/+2

RDMA auxdev parameter registration was skipped for the eswitch manager PCI PF. Due to this, when the devlink parameter is read, it reads as false in the below code flow:

$ devlink dev reload pci/0000:06:00.0
  devlink_reload()
    mlx5_load_one()
      mlx5_attach_device()
        is_ib_enabled()
          devlink_param_driverinit_value_get()

As a result, is_ib_enabled() returns false for the RDMA auxiliary device, and RDMA auxiliary device creation is skipped on reload.

There is no need to check for the eswitch manager capability to support the RDMA auxiliary device. Hence, fix it by skipping the eswitch manager capability check.

Fixes: 87158cedf00e ("net/mlx5: Support enable_rdma devlink dev param")
Signed-off-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-09-08 | net/mlx5: Bridge, fix uninitialized variable usage | Vlad Buslov | 1 | -2/+2

In some conditions the variable 'err' is not assigned a value in the mlx5_esw_bridge_port_obj_attr_set() and mlx5_esw_bridge_port_changeupper() functions after recent changes to support LAG. Initialize the variable with zero in both cases.

Reported-by: Colin King <colin.king@canonical.com>
Reported-by: Tim Gardner <tim.gardner@canonical.com>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
CC: linux-kernel@vger.kernel.org
Fixes: ff9b7521468b ("net/mlx5: Bridge, support LAG")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>

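The fix itself is a one-line initialization in each function (sketch):

    int err = 0;    /* was uninitialized; not every branch assigns err
                     * before it is returned */
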
2021-08-31 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski | 8 | -4/+34

Conflicts:
  include/linux/netdevice.h
  net/socket.c

d0efb16294d1 ("net: don't unconditionally copy_from_user a struct ifreq for socket ioctls")
876f0bf9d0d5 ("net: socket: simplify dev_ifconf handling")
29c4964822aa ("net: socket: rework compat_ifreq_ioctl()")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2021-08-31 | net/mlxbf_gige: Make use of devm_platform_ioremap_resourcexxx() | Cai Huoqing | 2 | -24/+4

Use the devm_platform_ioremap_resource_byname() helper instead of calling platform_get_resource_byname() and devm_ioremap_resource() separately. Likewise, use the devm_platform_ioremap_resource() helper instead of calling platform_get_resource() and devm_ioremap_resource() separately.

Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

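As a before/after illustration (the resource name "mac" is made up for the example, not necessarily the driver's):

    /* before: two steps and an extra struct resource */
    res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mac");
    base = devm_ioremap_resource(&pdev->dev, res);

    /* after: one helper performs the lookup and the ioremap */
    base = devm_platform_ioremap_resource_byname(pdev, "mac");
    if (IS_ERR(base))
            return PTR_ERR(base);
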
2021-08-31 | sch_htb: Fix inconsistency when leaf qdisc creation fails | Maxim Mikityanskiy | 3 | -13/+9

In HTB offload mode, qdiscs of leaf classes are grafted to netdev queues. sch_htb expects the dev_queue field of these qdiscs to point to the corresponding queues. However, qdisc creation may fail, and in that case noop_qdisc is used instead. Its dev_queue doesn't point to the right queue, so sch_htb can lose track of used netdev queues, which will cause internal inconsistencies.

This commit fixes this bug by keeping track of the netdev queue inside struct htb_class. All reads of cl->leaf.q->dev_queue are replaced by the new field, the two values are synced on writes, and WARNs are added to assert equality of the two values.

The driver API has changed: when TC_HTB_LEAF_DEL needs to move a queue, the driver used to pass the old and new queue IDs to sch_htb. Now that there is a new field (offload_queue) in struct htb_class that needs to be updated on this operation, the driver will pass the old class ID to sch_htb instead (it already knows the new class ID).

Fixes: d03b195b5aa0 ("sch_htb: Hierarchical QoS hardware offload")
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20210826115425.1744053-1-maximmi@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

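A condensed sketch of the new bookkeeping; struct htb_class and the offload_queue field are named in the commit message, and the helper below shows how the synced read plus WARN could look (simplified, not the exact patch):

    static struct netdev_queue *htb_offload_get_queue(struct htb_class *cl)
    {
            struct netdev_queue *queue = cl->leaf.offload_queue;

            /* The cached queue must agree with the grafted qdisc's view;
             * noop_qdisc is the reason cl->leaf.q->dev_queue alone cannot
             * be trusted.
             */
            if (!WARN_ON(!queue))
                    WARN_ON(cl->leaf.q->dev_queue != queue);
            return queue;
    }
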
2021-08-27 | net/mlx5: DR, Add support for update FTE | Yevgeny Kliteynik | 1 | -9/+30

Add support for update FTE, which is needed for cases where there are multiple rules with the same match. In such a case, fs_core will merge the actions and call update FTE to update the current FTE. Since we don't want to disrupt the traffic, we add the new duplicate rule first, and only then remove the old duplicate rule.

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Improve rule tracking memory consumption | Yevgeny Kliteynik | 3 | -86/+83

To track each STE of a rule, a rule member was allocated; each member would point to one STE. This means we would allocate 40B (rule member) * number of STEs per rule.

To reduce this per-rule allocation, we use the STE tree pointers for next_htbl and the pointing STE to navigate the tree. This allows us to keep only the pointer to the last STE of the rule (always unique). From the last rule STE we are able to traverse and rebuild all of the STEs that construct the rule.

In our testing with 8M rules, each consisting of 7 STEs, we were able to save 1.6GB of memory.

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Remove rehash ctrl struct from dr_htbl | Yevgeny Kliteynik | 3 | -21/+24

The calculations that decide the maximum allowed collision threshold are simple, and there is no reason to save their results on the htbl struct.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Remove HW specific STE type from nic domain | Yevgeny Kliteynik | 10 | -45/+61

Instead of using the HW-specific STEv0 type, it is better to use an enum to indicate whether this is an RX or TX nic domain. This means that we now need to convert the nic domain type to the corresponding STE type.

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Merge DR_STE_SIZE enums | Yevgeny Kliteynik | 1 | -3/+0

Merge the DR_STE_SIZE enums - there is no need for a separate enum for the reduced STE size.

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Skip source port matching on FDB RX domain | Yevgeny Kliteynik | 1 | -1/+13

The FDB RX pipe is connected to the wire, so the source port of all incoming packets equals the wire (there is a single uplink port per PF). This means there is no point in matching on the source port in such a case. Once we recognize this case, we optimize the RX steering rule. Note that in this case we clear both source_eswitch_owner_vhca_id and source_port.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Add ignore_flow_level support for multi-dest flow tables | Yevgeny Kliteynik | 6 | -6/+18

When creating an FTE, we might need to create a multi-destination flow table, which is eventually created by FW. In such a case, this FW table should include all the FTE properties as requested by the upper layer, including the ability to point to another flow table with a level lower than or equal to the current table - indicated by the "ignore_flow_level" property.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Use FW API when updating FW-owned flow table | Yevgeny Kliteynik | 1 | -0/+3

The DR API should be called only when the table is a DR table. To update a FW-owned table, the driver should call the FW API instead.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, replace uintN_t with kernel-style types | Yevgeny Kliteynik | 2 | -6/+6

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Support IPv6 matching on flow label for STEv0 | Yevgeny Kliteynik | 1 | -0/+6

Add missing support for matching on IPv6 flow label for STEv0.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Reduce print level for FT chaining level check | Bodong Wang | 1 | -2/+2

There are use cases with Connection Tracking where such a connection is the default, and printing this warning in dmesg confuses the user.

Signed-off-by: Bodong Wang <bodong@mellanox.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Warn and ignore SW steering rule insertion on QP err | Yevgeny Kliteynik | 2 | -2/+15

In the event of the SW steering QP entering an error state, SW steering cannot insert more rules; it will issue a warning and then silently ignore subsequent insertions.

Signed-off-by: Yuval Avnery <yuvalav@mellanox.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Improve error flow in actions_build_ste_arr | Yevgeny Kliteynik | 1 | -16/+56

Improve the error flow and print the actions sequence when an invalid/unsupported sequence is provided.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Enable QP retransmission | Yevgeny Kliteynik | 1 | -0/+1

Under high stress, SW steering might get stuck polling for a completion that never comes. For such cases the QP needs its protocol retransmission mechanism enabled. Currently the retransmission timeout is defined as 0 (unlimited). Fix this by defining a real timeout.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

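In mlx5 QPC terms, the change amounts to giving the QP a finite retransmission budget. The field names below follow the mlx5 qpc layout, but the values are illustrative, not necessarily the patch's:

    MLX5_SET(qpc, qpc, retry_count, 7); /* bounded retransmit attempts */
    MLX5_SET(qpc, qpc, rnr_retry, 7);   /* bounded RNR retries */
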
2021-08-27 | net/mlx5: DR, Enable VLAN pop on TX and VLAN push on RX | Yevgeny Kliteynik | 3 | -9/+118

Enable the pop VLAN action on TX and push VLAN on RX. These actions are supported only on STEv1.

On TX: when a host sends a packet, the VLAN is popped at the beginning.
On RX: just before passing the packet to the host, the VLAN is pushed.

Signed-off-by: Muhammad Sammar <muhammads@nvidia.com>
Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Split modify VLAN state to separate pop/push states | Yevgeny Kliteynik | 1 | -26/+27

Split the modify-VLAN state in the actions state machine into separate pop-VLAN and push-VLAN states. This enables using pop/push VLAN without restrictions (e.g. pop VLAN on TX in STEv1).

Signed-off-by: Muhammad Sammar <muhammads@nvidia.com>
Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, Added support for REMOVE_HEADER packet reformat | Yevgeny Kliteynik | 5 | -7/+96

ConnectX supports offloading of various encapsulations and decapsulations (e.g. VXLAN), which are performed by the 'Packet Reformat' action. Starting with ConnectX-6 DX, a new reformat type is supported - REMOVE_HEADER - which allows deleting an arbitrary-size chunk at a selected position in the packet.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: DR, fix a potential use-after-free bug | Wentao_Liang | 1 | -1/+1

In line 849 (#1), "mlx5dr_htbl_put(cur_htbl);" drops the reference to cur_htbl and may cause cur_htbl to be freed. However, cur_htbl is subsequently used in the next line, which may result in a use-after-free bug. Fix this by calling mlx5dr_err() before cur_htbl is put.

Signed-off-by: Wentao_Liang <Wentao_Liang_g@163.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

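The one-line fix reorders the two statements so the log happens while cur_htbl is still guaranteed to be alive (the log text here is illustrative, not the driver's exact message):

    mlx5dr_err(dmn, "Failed creating rule for htbl %p\n", cur_htbl);
    mlx5dr_htbl_put(cur_htbl); /* may free cur_htbl; not touched again */
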
2021-08-27 | net/mlx5e: Use correct eswitch for stack devices with lag | Dmytro Linkin | 1 | -0/+18

If link aggregation is used with stacked devices, the driver rejects encap rules when the PF of the VF tunnel device is down. This happens because the route is resolved for the other PF, and its eswitch instance is used to determine the correct vport. To fix that, use the devcom feature to retrieve the other eswitch instance if finding the vport in the first eswitch failed and LAG is active.

Fixes: 10742efc20a4 ("net/mlx5e: VF tunnel TX traffic offloading")
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: E-Switch, Set vhca id valid flag when creating indir fwd group | Maor Dickman | 1 | -0/+1

When the indirect forward group is created, the flow is added with a vhca id but without setting the vhca id valid flag, which violates the PRM. Fix by setting the missing vhca id valid flag.

Fixes: 34ca65352ddf ("net/mlx5: E-Switch, Indirect table infrastructure")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

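A sketch of a vport destination that carries a vhca_id, with the kind of flag the PRM requires (simplified call site; the flag and type names come from the mlx5 flow API, the variables are placeholders):

    dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
    dest.vport.num = vport_num;
    dest.vport.vhca_id = vhca_id;
    /* The missing piece: without this flag the supplied vhca_id is
     * not considered valid, violating the PRM.
     */
    dest.vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID;
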
2021-08-27 | net/mlx5e: Fix possible use-after-free deleting fdb rule | Roi Dayan | 1 | -2/+2

After a neigh-update-add failure we are still left with a slow path rule, but the driver always assumes the rule is an fdb rule. Fix neigh-update-del by checking the slow path tc flag on the flow. Also fix neigh-update-add for the case when neigh-update-del fails the same way.

Fixes: 5dbe906ff1d5 ("net/mlx5e: Use a slow path rule instead if vxlan neighbour isn't available")
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

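The simplified shape of the corrected teardown, using the slow-path flag test described above (function names per mlx5e tc conventions; treat this as a sketch, not the exact diff):

    if (flow_flag_test(flow, SLOW))
            mlx5e_tc_unoffload_from_slow_path(esw, flow);
    else
            mlx5e_tc_unoffload_fdb_rules(esw, flow, flow->attr);
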
2021-08-27 | net/mlx5: Remove all auxiliary devices at the unregister event | Leon Romanovsky | 1 | -1/+1

The call to mlx5_unregister_device() means that the mlx5_core driver is being removed. In such a scenario, we need to disregard all other flags like attach/detach and forcibly remove all auxiliary devices.

Fixes: a5ae8fc9058e ("net/mlx5e: Don't create devices during unload flow")
Tested-and-Reported-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-27 | net/mlx5: Lag, fix multipath lag activation | Dima Chumak | 3 | -0/+11

When handling a FIB_EVENT_ENTRY_REPLACE event for a new multipath route, lag activation can be missed if a stale (struct lag_mp)->mfi pointer exists, which was associated with an older multipath route that had been removed.

Normally, when a route is removed, it triggers mlx5_lag_fib_event(), which handles FIB_EVENT_ENTRY_DEL and clears the mfi pointer. But if the mlx5_lag_check_prereq() condition isn't met, for example when the eswitch is in legacy mode, the fib event is skipped and the mfi pointer becomes stale.

Fix by resetting the mfi pointer to NULL in mlx5_deactivate_lag().

Fixes: 8a66e4585979 ("net/mlx5: Change ownership model for lag")
Signed-off-by: Dima Chumak <dchumak@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

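The essence of the fix, sketched with the field names from the commit message (the helper name is an assumption):

    static void mlx5_lag_mp_reset(struct mlx5_lag *ldev)
    {
            /* Clear the cached fib info on LAG deactivation so a stale
             * pointer cannot suppress the next multipath activation.
             */
            ldev->lag_mp.mfi = NULL;
    }
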
2021-08-24 | ethtool: extend coalesce setting uAPI with CQE mode | Yufeng Mo | 4 | -8/+24

In order to support more coalesce parameters through netlink, add two new parameters, kernel_coal and extack, to .set_coalesce and .get_coalesce; extra info can then be returned to the user via the netlink API.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2021-08-23 | net/mellanox: switch from 'pci_' to 'dma_' API | Christophe JAILLET | 4 | -34/+13

The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below. It has been hand modified to use 'dma_set_mask_and_coherent()' instead of 'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable. This is less verbose.

It has been compile tested.

@@
@@
- PCI_DMA_BIDIRECTIONAL
+ DMA_BIDIRECTIONAL

@@
@@
- PCI_DMA_TODEVICE
+ DMA_TO_DEVICE

@@
@@
- PCI_DMA_FROMDEVICE
+ DMA_FROM_DEVICE

@@
@@
- PCI_DMA_NONE
+ DMA_NONE

@@
expression e1, e2, e3;
@@
- pci_alloc_consistent(e1, e2, e3)
+ dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
- pci_zalloc_consistent(e1, e2, e3)
+ dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
- pci_free_consistent(e1, e2, e3, e4)
+ dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_map_single(e1, e2, e3, e4)
+ dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_single(e1, e2, e3, e4)
+ dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
- pci_map_page(e1, e2, e3, e4, e5)
+ dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_page(e1, e2, e3, e4)
+ dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_map_sg(e1, e2, e3, e4)
+ dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_unmap_sg(e1, e2, e3, e4)
+ dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+ dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_single_for_device(e1, e2, e3, e4)
+ dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+ dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
- pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+ dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
- pci_dma_mapping_error(e1, e2)
+ dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
- pci_set_dma_mask(e1, e2)
+ dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
- pci_set_consistent_dma_mask(e1, e2)
+ dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>

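As a concrete before/after example of what the script plus the hand edit produce at a typical call site (example code, not a specific driver's):

    /* before */
    err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
    if (!err)
            err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));

    /* after: the hand-simplified form mentioned above */
    err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
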
2021-08-22 | mlxsw: spectrum_router: Increase parsing depth for multipath hash | Amit Cohen | 2 | -1/+44

Commit 01848e05f8bb ("mlxsw: spectrum_router: Add support for inner layer 3 multipath hash policy") and commit daeabf89eb89 ("mlxsw: spectrum_router: Add support for custom multipath hash policy") added support for multipath hash policies where the hash is calculated based on inner packet fields.

For IPv6-in-IPv6 packets, the default parsing depth (96 bytes) is not enough when these policies are used. Therefore, for such cases, call the new API to increase / decrease the parsing depth as necessary. Care is taken to ensure the API is not called multiple times.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-22 | mlxsw: Remove old parsing depth infrastructure | Amit Cohen | 2 | -69/+0

The previous patches added a new API to handle parsing depth and converted the existing code to use it. Remove the old infrastructure, which is not needed anymore.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-22 | mlxsw: Convert existing consumers to use new API for parsing configuration | Amit Cohen | 2 | -8/+22

Convert the VxLAN and PTP modules to increase the parsing depth using the new API that was added in the previous patch. Separate the MPRS register's configuration into VxLAN-related configuration and parsing depth configuration, and handle each one using the appropriate API.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-22 | mlxsw: spectrum: Add infrastructure for parsing configuration | Amit Cohen | 2 | -0/+94

Spectrum ASICs have a configurable limit on how deep into the packet they parse. By default, the limit is 96 bytes. There are several cases where this parsing depth is not enough and there is a need to increase it.

Currently, increasing the parsing depth is maintained as part of the VxLAN module, because the MPRS register which configures the parsing depth also configures the UDP destination port number used for VxLAN encapsulation and decapsulation.

Add an API for increasing the parsing depth as part of the spectrum.c code, so that it is possible to use it from other modules. In addition, add an API for setting the UDP destination port, and protect both using a dedicated lock that guards the parsing configuration. The lock is needed because not all the callers hold the RTNL lock.

Maintain a counter for consumers of the increased parsing depth. On the first consumer subscription, increase the parsing depth, and on the last consumer unsubscription, set the parsing depth back to the default value.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

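A minimal sketch of the refcounted API described above. The function and field names mirror the commit message but are assumptions, and the MPRS register write is abstracted behind a hypothetical mlxsw_sp_parsing_depth_set() helper:

    int mlxsw_sp_parsing_depth_inc(struct mlxsw_sp *mlxsw_sp)
    {
            int err = 0;

            /* A dedicated mutex, not RTNL: not all callers hold RTNL. */
            mutex_lock(&mlxsw_sp->parsing.lock);
            if (mlxsw_sp->parsing.parsing_depth_ref++ == 0)
                    err = mlxsw_sp_parsing_depth_set(mlxsw_sp, true);
            if (err)
                    mlxsw_sp->parsing.parsing_depth_ref--;
            mutex_unlock(&mlxsw_sp->parsing.lock);
            return err;
    }

    void mlxsw_sp_parsing_depth_dec(struct mlxsw_sp *mlxsw_sp)
    {
            mutex_lock(&mlxsw_sp->parsing.lock);
            /* Last consumer gone: restore the default 96 byte depth. */
            if (--mlxsw_sp->parsing.parsing_depth_ref == 0)
                    mlxsw_sp_parsing_depth_set(mlxsw_sp, false);
            mutex_unlock(&mlxsw_sp->parsing.lock);
    }
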
2021-08-20 | net/mlx5: E-switch, Add QoS tracepoints | Dmytro Linkin | 2 | -1/+135

Add tracepoints to log QoS enabling/disabling/configuration for vports and rate groups.

Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-20 | net/mlx5: E-switch, Allow to add vports to rate groups | Dmytro Linkin | 5 | -25/+199

Implement an eswitch API that allows updating rate groups. If the group pointer is NULL, the vport is moved to the internal unlimited group zero.

Implement the devlink_ops->rate_parent_node_set() callback in terms of the new eswitch group update API. Enable QoS for all of a group's elements if the group has an allocated BW share.

Co-developed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-20 | net/mlx5: E-switch, Allow setting share/max tx rate limits of rate groups | Dmytro Linkin | 4 | -39/+225

Provide an eswitch API to allow controlling group rate limits. Use it to implement devlink_ops->mlx5_devlink_rate_node_tx_{share|max}_set().

The share rate creates a relative bandwidth share on the group level, while within the group the user can set a shared rate on the member vports of that group; that rate is relative to the group's share rate. The group with the highest shared rate gets a BW share of 100, and the rest of the groups get a value that reflects the ratio between their share rate and the maximum share rate.

Example: create four rate groups with tx_share limits:

$ devlink port function rate add pci/0000:06:00.0/group_1 tx_share 30gbit
$ devlink port function rate add pci/0000:06:00.0/group_2 tx_share 20gbit
$ devlink port function rate add pci/0000:06:00.0/group_3 tx_share 20gbit
$ devlink port function rate add pci/0000:06:00.0/group_4 tx_share 10gbit

Assuming the link speed is 50 Gbit/sec, the ratio divider will be 50 / (30+20+20+10) = 0.625. Normalized rate values for the groups:

group_1: 30 * 0.625 = 18.75 Gbit/sec
group_2: 20 * 0.625 = 12.5 Gbit/sec
group_3: 20 * 0.625 = 12.5 Gbit/sec
group_4: 10 * 0.625 = 6.25 Gbit/sec

A rate group with an unlimited tx_share rate receives the minimum BW value (1 Mbit/sec) if any group with a tx_share rate limit is present. This avoids dropping all of its packets in case of heavy traffic.

Co-developed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>

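For illustration, the normalization in the example reduces to simple integer arithmetic (standalone sketch with a hypothetical helper name, not driver code; rates in Mbit/sec so the math stays integral):

    static u64 esw_norm_rate(u64 share, u64 sum_shares, u64 link_speed)
    {
            /* normalized = tx_share * link_speed / sum(tx_shares) */
            return share * link_speed / sum_shares;
    }

    /* esw_norm_rate(30000, 80000, 50000) == 18750 Mbit/sec, i.e. the
     * 18.75 Gbit/sec value computed for group_1 above.
     */
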
2021-08-20 | net/mlx5: E-switch, Introduce rate limiting groups API | Dmytro Linkin | 4 | -5/+143

Extend the eswitch API with rate limiting groups:

- Define a new struct mlx5_esw_rate_group that is used to hold all internal group data.
- Implement functions that allow creation, destruction and cleanup of groups.
- Assign all vports to the internal unlimited zero group by default.

This commit lays the groundwork for group rate limiting by implementing devlink_ops->rate_node_{new|del}() callbacks to support creating and deleting groups through devlink rate node objects. APIs that allow setting rates and adding/removing members are implemented in following patches.

Co-developed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-20 | net/mlx5: E-switch, Enable devlink port tx_{share|max} rate control | Dmytro Linkin | 5 | -27/+157

Register a devlink rate leaf object for every eswitch vport. Implement devlink ops that enable setting shared and max tx rates through the devlink API. Extract common eswitch code from the existing tx rate set function, which is accessed through the NDO, to be reused for devlink. Values configured with the NDO API are not visible to the devlink API, therefore the two shouldn't be used simultaneously.

When normalizing the BW share value, dividing the desired minimum rate by the common divider results in losing information since the quotient is rounded down. This has a significant effect on configurations of low rate where the round-down eliminates a large percentage of the total rate. To improve the formula, round up the division result to make sure that the BW share is at least the value it was supposed to be and doesn't lose a significant amount of the expected value.

Co-developed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>

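In kernel idiom, the rounding change is simply the following (sketch; min_rate and divider are variable names taken from the description, DIV_ROUND_UP is the kernel's round-up division helper):

    /* was: bw_share = min_rate / divider;  round-down loses a large
     * fraction of low rates
     */
    bw_share = DIV_ROUND_UP(min_rate, divider);
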
2021-08-20 | net/mlx5: E-switch, Move QoS related code to dedicated file | Dmytro Linkin | 7 | -316/+346

Move the eswitch QoS related code into a dedicated file. Provide an eswitch API to access this code, meaning it is isolated and restricted to be used only by eswitch.c. The exception is the legacy NDO VF set rate, which moved to esw/legacy.c.

Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-20 | net/mlx5e: TC, Support sample offload action for tunneled traffic | Chris Mi | 4 | -91/+214

Currently the sample offload actions send the encapsulated packet to software. This commit decapsulates the packet before performing the sampling and sets the tunnel properties on the skb metadata fields to make the behavior consistent with OVS sFlow.

If we decapsulate first, we can't use the same match as before in the default table, so instantiate a post-action instance to continue processing the action list. If HW can preserve reg_c, also use the post-action instance.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-20 | net/mlx5e: TC, Restore tunnel info for sample offload | Chris Mi | 5 | -14/+37

Currently the sample offload actions send the encapsulated packet to software. sFlow expects tunneled packets to be decapsulated while having the tunnel properties on the skb metadata fields.

Reuse the functions used by connection tracking to map the outer header properties to a unique id. The next patch will use that id to restore the tunnel information of decapsulated packets onto the skb.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-08-20 | net/mlx5e: TC, Remove CONFIG_NET_TC_SKB_EXT dependency when restoring tunnel | Chris Mi | 1 | -9/+6

CONFIG_NET_TC_SKB_EXT controls the SKB extension support for restoring chain ids. SKB extension is not required for tunnel restoration. Remove the CONFIG_NET_TC_SKB_EXT dependency as a pre-step for using the tunnel restore methods for sample offload use cases.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>