path: root/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
2024-04-11  net/mlx5e: HTB, Fix inconsistencies with QoS SQs number  (Carolina Jubran; 1 file, -16/+17)

When creating a new HTB class while the interface is down, the variable that tracks the number of QoS SQs (htb_max_qos_sqs) may not be consistent with the number of HTB classes. Previously, we compared these two values to ensure that the node_qid is lower than the number of QoS SQs, and we allocated stats for that SQ when they are equal.

Change the check to compare the node_qid with the current number of leaf nodes, and fix the checking conditions to ensure allocation of stats_list and stats for each node.

Fixes: 214baf22870c ("net/mlx5e: Support HTB offload")
Signed-off-by: Carolina Jubran <cjubran@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20240409190820.227554-9-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
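Illustration only (not the driver's code): a standalone C sketch of the idea behind the fix above -- stats are allocated based on the current number of HTB leaf nodes rather than on the number of QoS SQs opened so far (which stays at 0 while the interface is down). All names and conditions below are assumptions made for the example.

/*
 * Build: cc -std=c11 -Wall qos_stats_sketch.c
 */
#include <stdio.h>

struct qos_state {
	unsigned int max_qos_sqs;     /* QoS SQs opened so far (0 while ifdown) */
	unsigned int num_leaf_nodes;  /* HTB leaf classes configured so far     */
	unsigned int stats_allocated; /* stats structs allocated so far         */
};

/* Allocate stats for every leaf node that does not have them yet. */
static void ensure_stats(struct qos_state *s, unsigned int node_qid)
{
	if (node_qid >= s->num_leaf_nodes)
		return; /* not a valid leaf queue id */

	while (s->stats_allocated <= node_qid) {
		printf("allocating stats for qid %u\n", s->stats_allocated);
		s->stats_allocated++;
	}
}

int main(void)
{
	/* Interface down: no QoS SQs exist yet, but leaf classes keep growing. */
	struct qos_state s = { .max_qos_sqs = 0, .num_leaf_nodes = 0 };

	for (unsigned int qid = 0; qid < 3; qid++) {
		s.num_leaf_nodes++;   /* user adds a new HTB leaf class */
		ensure_stats(&s, qid);
	}
	return 0;
}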
2024-03-07  net/mlx5e: Let channels be SD-aware  (Tariq Toukan; 1 file, -4/+4)

Distribute the channels between the different SD-devices to achieve local NUMA node performance on multiple NUMA nodes. Each channel works against one specific mdev, creating all datapath queues against it. We distribute channels to mdevs in a round-robin policy.

Example for 2 mdevs and 6 channels:

+-------+---------+
| ch ix | mdev ix |
+-------+---------+
|   0   |    0    |
|   1   |    1    |
|   2   |    0    |
|   3   |    1    |
|   4   |    0    |
|   5   |    1    |
+-------+---------+

This round-robin distribution policy is preferred over another suggested, intuitive distribution in which we first assign one half of the channels to mdev #0 and then the second half to mdev #1. We prefer round-robin for a reason: it is less influenced by changes in the number of channels. The mapping between channel index and mdev is fixed, no matter how many channels the user configures. As the channel stats persist across channel closure, changing the mapping every time would make the accumulated stats less representative of the channel's history.

Per-channel objects should stop using the primary mdev (priv->mdev) directly, and instead move to using their own channel's mdev.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
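Illustration only (not taken from the driver): a minimal standalone C sketch of the round-robin mapping described above, assuming it reduces to channel index modulo the number of SD devices, so the mapping stays fixed regardless of how many channels are configured.

#include <stdio.h>

static unsigned int ch_to_mdev(unsigned int ch_ix, unsigned int num_mdevs)
{
	return ch_ix % num_mdevs; /* assumed round-robin rule */
}

int main(void)
{
	const unsigned int num_mdevs = 2, num_channels = 6;

	printf("ch ix -> mdev ix\n");
	for (unsigned int ch = 0; ch < num_channels; ch++)
		printf("  %u   ->   %u\n", ch, ch_to_mdev(ch, num_mdevs));
	return 0;
}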
2024-01-08  Revert "mlx5 updates 2023-12-20"  (Jakub Kicinski; 1 file, -4/+4)

Revert "net/mlx5: Implement management PF Ethernet profile"
This reverts commit 22c4640698a1d47606b5a4264a584e8046641784.

Revert "net/mlx5: Enable SD feature"
This reverts commit c88c49ac9c18fb7c3fa431126de1d8f8f555e912.

Revert "net/mlx5e: Block TLS device offload on combined SD netdev"
This reverts commit 83a59ce0057b7753d7fbece194b89622c663b2a6.

Revert "net/mlx5e: Support per-mdev queue counter"
This reverts commit d72baceb92539a178d2610b0e9ceb75706a75b55.

Revert "net/mlx5e: Support cross-vhca RSS"
This reverts commit c73a3ab8fa6e93a783bd563938d7cf00d62d5d34.

Revert "net/mlx5e: Let channels be SD-aware"
This reverts commit e4f9686bdee7b4dd89e0ed63cd03606e4bda4ced.

Revert "net/mlx5e: Create EN core HW resources for all secondary devices"
This reverts commit c4fb94aa822d6c9d05fc3c5aee35c7e339061dc1.

Revert "net/mlx5e: Create single netdev per SD group"
This reverts commit e2578b4f983cfcd47837bbe3bcdbf5920e50b2ad.

Revert "net/mlx5: SD, Add informative prints in kernel log"
This reverts commit c82d360325112ccc512fc11a3b68cdcdf04a1478.

Revert "net/mlx5: SD, Implement steering for primary and secondaries"
This reverts commit 605fcce33b2d1beb0139b6e5913fa0b2062116b2.

Revert "net/mlx5: SD, Implement devcom communication and primary election"
This reverts commit a45af9a96740873db9a4b5bb493ce2ad81ccb4d5.

Revert "net/mlx5: SD, Implement basic query and instantiation"
This reverts commit 63b9ce944c0e26c44c42cdd5095c2e9851c1a8ff.

Revert "net/mlx5: SD, Introduce SD lib"
This reverts commit 4a04a31f49320d078b8078e1da4b0e2faca5dfa3.

Revert "net/mlx5: Fix query of sd_group field"
This reverts commit e04984a37398b3f4f5a79c993b94c6b1224184cc.

Revert "net/mlx5e: Use the correct lag ports number when creating TISes"
This reverts commit a7e7b40c4bc115dbf2a2bb453d7bbb2e0ea99703.

There are some unanswered questions on the list, and we don't have any docs. Given the lack of replies so far and the fact that the v6.8 merge window has started - let's revert this and revisit for v6.9.

Link: https://lore.kernel.org/all/20231221005721.186607-1-saeed@kernel.org/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-12-21  net/mlx5e: Let channels be SD-aware  (Tariq Toukan; 1 file, -4/+4)

Distribute the channels between the different SD-devices to achieve local NUMA node performance on multiple NUMA nodes. Each channel works against one specific mdev, creating all datapath queues against it. We distribute channels to mdevs in a round-robin policy.

Example for 2 mdevs and 6 channels:

+-------+---------+
| ch ix | mdev ix |
+-------+---------+
|   0   |    0    |
|   1   |    1    |
|   2   |    0    |
|   3   |    1    |
|   4   |    0    |
|   5   |    1    |
+-------+---------+

This round-robin distribution policy is preferred over another suggested, intuitive distribution in which we first assign one half of the channels to mdev #0 and then the second half to mdev #1. We prefer round-robin for a reason: it is less influenced by changes in the number of channels. The mapping between channel index and mdev is fixed, no matter how many channels the user configures. As the channel stats persist across channel closure, changing the mapping every time would make the accumulated stats less representative of the channel's history.

Per-channel objects should stop using the primary mdev (priv->mdev) directly, and instead move to using their own channel's mdev.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-12-14  net/mlx5e: Decouple CQ from priv  (Tariq Toukan; 1 file, -1/+1)

Make CQ struct and methods independent of "priv", use more basic arguments instead. This will ease the transition to netdev with multiple mdevs.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-12-14  net/mlx5: Move TISes from priv to mdev HW resources  (Tariq Toukan; 1 file, -2/+5)

The transport interface send (TIS) object is responsible for performing all transport-related operations on the transmit side. Messages from Send Queues get segmented and transmitted by the TIS, including all required transport handling; e.g., in the case of large send offload, the TIS is responsible for the segmentation.

These are stateless objects and can be used by multiple netdevs (e.g. representors) which share the same core device. Providing the TISes as a service from the core layer to the netdev layer reduces the number of replicated TIS objects (in the case of multiple netdevs), and will ease the transition to a netdev with multiple mdevs.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-07-21  sch_htb: Allow HTB quantum parameter in offload mode  (Naveen Mamindlapalli; 1 file, -2/+2)

The current implementation of HTB offload returns the EINVAL error for the quantum parameter. This patch removes the error-returning checks for 'quantum' and populates its value into the tc_htb_qopt_offload structure so that the driver can use it.

Add a quantum parameter check in the mlx5 driver, as mlx5 devices are not capable of supporting the quantum parameter when HTB offload is used. Report an error if the quantum parameter is set to a non-default value.

Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
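Illustration only: a standalone sketch of the kind of validation described above, rejecting a non-default quantum when offload is used. The sentinel value 0 for "not set by the user" and the chosen error code are assumptions for the example, not the driver's actual choices.

#include <errno.h>
#include <stdio.h>

/* Reject any user-supplied quantum, since the device cannot honor it. */
static int validate_htb_quantum(unsigned int quantum)
{
	if (quantum != 0) {
		fprintf(stderr, "quantum is not supported with HTB offload\n");
		return -EOPNOTSUPP;
	}
	return 0;
}

int main(void)
{
	printf("default quantum: %d\n", validate_htb_quantum(0));
	printf("quantum=1500:    %d\n", validate_htb_quantum(1500));
	return 0;
}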
2023-05-15  sch_htb: Allow HTB priority parameter in offload mode  (Naveen Mamindlapalli; 1 file, -1/+6)

The current implementation of HTB offload returns the EINVAL error for unsupported parameters like prio and quantum. This patch removes the error-returning checks for 'prio' and populates its value into the tc_htb_qopt_offload structure so that the driver can use it.

Add a prio parameter check in the mlx5 driver, as mlx5 devices are not capable of supporting the prio parameter when HTB offload is used. Report an error if the prio parameter is set to a non-default value.

Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
Co-developed-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-07-19  net/mlx5e: HTB, move htb functions to a new file  (Moshe Tal; 1 file, -740/+45)

Move HTB-related functions and data to a separate file for better encapsulation.

Signed-off-by: Moshe Tal <moshet@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-07-19  net/mlx5e: HTB, change functions name to follow convention  (Moshe Tal; 1 file, -32/+32)

Following the change that made these functions object-like, rename them to follow the convention as well.

Signed-off-by: Moshe Tal <moshet@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-07-19  net/mlx5e: HTB, remove priv from htb function calls  (Moshe Tal; 1 file, -142/+160)

As a step toward making the HTB self-contained, replace passing priv as a parameter to HTB function calls with members in the htb struct. Fully decoupling the HTB from priv will require more work, so for now leave priv as one of the members in the htb struct, to be replaced by channels in a future commit.

Signed-off-by: Moshe Tal <moshet@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-07-19  net/mlx5e: HTB, hide and dynamically allocate mlx5e_htb structure  (Saeed Mahameed; 1 file, -23/+64)

Move the mlx5e_htb structure from the main driver include file "en.h" and hide it in qos.c, where the QoS functionality is implemented. Forward-declare it for the rest of the driver, and allocate it dynamically upon user demand only.

Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Moshe Tal <moshet@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
2022-07-19  net/mlx5e: HTB, move stats and max_sqs to priv  (Moshe Tal; 1 file, -8/+8)

Preparation for dynamic allocation of the HTB struct. The statistics should be preserved even when the struct is de-allocated.

Signed-off-by: Moshe Tal <moshet@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-07-19  net/mlx5e: HTB, move section comment to the right place  (Moshe Tal; 1 file, -2/+2)

mlx5e_get_qos_sq is part of the SQ lifecycle, so it needs to be placed under that section title.

Signed-off-by: Moshe Tal <moshet@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-07-19  net/mlx5e: HTB, move ids to selq_params struct  (Moshe Tal; 1 file, -16/+7)

HTB id fields are needed for selecting a queue. Moving them to the selq_params struct will simplify synchronization between the control flow and mlx5e_select_queues, and will keep the IDs in the hot cacheline of mlx5e_selq_params.

Replace mlx5e_selq_prepare() with separate functions that change subsets of parameters, while keeping the rest. This will also be useful for hiding the mlx5e_htb structure from the rest of the driver in a later patch in this series.

Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Moshe Tal <moshet@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
2022-07-19  net/mlx5e: HTB, reduce visibility of htb functions  (Saeed Mahameed; 1 file, -17/+67)

There is no need to expose all HTB TC functions to the main driver file; expose only the master HTB TC function, mlx5e_htb_setup_tc(), which selects the internal (now static) function to call.

Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Moshe Tal <moshet@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
2022-02-15  net/mlx5e: Use select queue parameters to sync with control flow  (Maxim Mikityanskiy; 1 file, -1/+5)

Start using the select queue parameters introduced in the previous commit to have proper synchronization with changing the configuration (such as number of channels and queues). It ensures that the state that mlx5e_select_queue() sees is always consistent and stays the same while the function is running.

Also it allows mlx5e_select_queue to stop using data structures that weren't synchronized properly: txq2sq, channel_tc2realtxq, port_ptp_tc2realtxq. The last two are removed completely, as they were used only in mlx5e_select_queue.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-02-15  net/mlx5e: Introduce select queue parameters  (Maxim Mikityanskiy; 1 file, -1/+11)

ndo_select_queue can be called at any time, and there is no way to stop the kernel from calling it in order to synchronize with configuration changes (real_num_tx_queues, num_tc). This commit introduces an internal way in mlx5e to sync mlx5e_select_queue() with these changes. The configuration needed by this function is stored in a struct mlx5e_selq_params, which is modified and accessed in an atomic way using RCU methods. The whole ndo_select_queue is called under an RCU lock, providing the necessary guarantees.

The parameters stored in the new struct mlx5e_selq_params should only be used from inside mlx5e_select_queue. It's the minimal set of parameters needed for mlx5e_select_queue to do its job efficiently, derived from parameters stored elsewhere. That means that when the configuration changes, mlx5e_selq_params may need to be updated. In such cases, the mlx5e_selq_prepare/mlx5e_selq_apply API should be used.

struct mlx5e_selq contains two slots for the params: active and standby. mlx5e_selq_prepare updates the standby slot, and mlx5e_selq_apply swaps the slots in a safe atomic way using the RCU API. It integrates well with the open/activate stages of the configuration change flow.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
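Illustration only: a simplified, standalone analogue of the active/standby slot scheme described above. The real code protects the active slot with RCU; here a C11 atomic pointer swap stands in for it, just to show the prepare/apply idea. All names and fields are approximations, not the driver's definitions.

#include <stdatomic.h>
#include <stdio.h>

struct selq_params {
	unsigned int num_channels;
	unsigned int num_tcs;
};

struct selq {
	struct selq_params slots[2];
	_Atomic(struct selq_params *) active; /* readers load this          */
	struct selq_params *standby;          /* control flow edits this    */
};

static void selq_init(struct selq *q)
{
	q->slots[0] = (struct selq_params){ .num_channels = 1, .num_tcs = 1 };
	q->slots[1] = q->slots[0];
	atomic_store(&q->active, &q->slots[0]);
	q->standby = &q->slots[1];
}

/* "prepare": stage the new configuration in the standby slot. */
static void selq_prepare(struct selq *q, unsigned int ch, unsigned int tc)
{
	q->standby->num_channels = ch;
	q->standby->num_tcs = tc;
}

/* "apply": atomically publish the standby slot and recycle the old one. */
static void selq_apply(struct selq *q)
{
	struct selq_params *old =
		atomic_exchange_explicit(&q->active, q->standby,
					 memory_order_acq_rel);
	q->standby = old;
}

int main(void)
{
	struct selq q;

	selq_init(&q);
	selq_prepare(&q, 8, 3);
	selq_apply(&q);

	struct selq_params *p =
		atomic_load_explicit(&q.active, memory_order_acquire);
	printf("active: %u channels, %u tcs\n", p->num_channels, p->num_tcs);
	return 0;
}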
2022-02-15  net/mlx5e: Sync txq2sq updates with mlx5e_xmit for HTB queues  (Maxim Mikityanskiy; 1 file, -4/+20)

This commit makes necessary changes to guarantee that txq2sq remains stable while mlx5e_xmit is running. Proper synchronization is added for HTB TX queues.

All updates to txq2sq are performed while the corresponding queue is disabled (i.e. mlx5e_xmit doesn't run on that queue). smp_wmb after each change guarantees that mlx5e_xmit can see the updated value after the queue is enabled. Comments explaining this mechanism are added to mlx5e_xmit.

When an HTB SQ can be deleted (after deleting an HTB node), synchronize with RCU to wait for mlx5e_select_queue to finish and stop selecting that queue, before we re-enable it to avoid TX timeout watchdog alarms.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
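Illustration only: a simplified, standalone analogue of the publication ordering described above -- the mapping is only rewritten while the queue is disabled, and a C11 release/acquire pair stands in for smp_wmb and its read-side counterpart. This is a conceptual sketch, not the driver's locking scheme.

#include <stdatomic.h>
#include <stdio.h>

struct sq { int id; };

static struct sq sq_a = { .id = 1 }, sq_b = { .id = 2 };
static struct sq *txq2sq[1] = { &sq_a };
static atomic_bool queue_enabled = true;

/* Control path: disable the queue, update the mapping, then re-enable. */
static void replace_sq(int qid, struct sq *new_sq)
{
	atomic_store_explicit(&queue_enabled, false, memory_order_release);
	/* (the real driver also waits for in-flight transmits to finish) */
	txq2sq[qid] = new_sq;
	atomic_store_explicit(&queue_enabled, true, memory_order_release);
}

/* Data path: only dereference the mapping if the queue is enabled. */
static void xmit(int qid)
{
	if (atomic_load_explicit(&queue_enabled, memory_order_acquire))
		printf("xmit on qid %d via sq %d\n", qid, txq2sq[qid]->id);
}

int main(void)
{
	xmit(0);
	replace_sq(0, &sq_b);
	xmit(0);
	return 0;
}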
2022-02-02  net/mlx5e: Don't treat small ceil values as unlimited in HTB offload  (Maxim Mikityanskiy; 1 file, -1/+2)

The hardware spec defines max_average_bw == 0 as "unlimited bandwidth". max_average_bw is calculated as `ceil / BYTES_IN_MBIT`, which can become 0 when ceil is small, leading to an undesired effect of having no bandwidth limit. This commit fixes it by rounding up small values of ceil to 1 Mbit/s.

Fixes: 214baf22870c ("net/mlx5e: Support HTB offload")
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
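Illustration only: the standalone arithmetic behind the fix, assuming ceil is supplied in bytes per second and BYTES_IN_MBIT is the byte rate equal to 1 Mbit/s. A ceil below 1 Mbit/s used to truncate to 0, which the hardware treats as unlimited; rounding up to 1 keeps a limit in place.

#include <inttypes.h>
#include <stdio.h>

#define BYTES_IN_MBIT (1000000ULL / 8) /* bytes/s that equal 1 Mbit/s */

static uint32_t ceil_to_max_average_bw(uint64_t ceil_bytes_per_sec)
{
	uint64_t bw = ceil_bytes_per_sec / BYTES_IN_MBIT;

	return bw ? (uint32_t)bw : 1; /* never emit 0 == "unlimited" */
}

int main(void)
{
	/* 500 kbit/s in bytes/s: would truncate to 0 without the rounding. */
	printf("500 kbit/s -> max_average_bw = %" PRIu32 " Mbit/s\n",
	       ceil_to_max_average_bw(500000 / 8));
	printf("10 Mbit/s  -> max_average_bw = %" PRIu32 " Mbit/s\n",
	       ceil_to_max_average_bw(10000000 / 8));
	return 0;
}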
2021-12-29  net: Don't include filter.h from net/sock.h  (Jakub Kicinski; 1 file, -0/+1)

sock.h is pretty heavily used (5k objects rebuilt on x86 after it's touched). We can drop the include of filter.h from it and add a forward declaration of struct sk_filter instead. This decreases the number of rebuilt objects when bpf.h is touched from ~5k to ~1k.

There's a lot of missing includes this was masking. Primarily in networking, though, this time.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Marc Kleine-Budde <mkl@pengutronix.de>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Acked-by: Stefano Garzarella <sgarzare@redhat.com>
Link: https://lore.kernel.org/bpf/20211229004913.513372-1-kuba@kernel.org
2021-10-05  net/mlx5e: Add TX max rate support for MQPRIO channel mode  (Tariq Toukan; 1 file, -0/+99)

Add driver max_rate support for the MQPRIO bw_rlimit shaper in channel mode.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-05  net/mlx5e: Specify SQ stats struct for mlx5e_open_txqsq()  (Tariq Toukan; 1 file, -1/+2)

Let the caller of mlx5e_open_txqsq() directly pass the SQ stats structure pointer. This replaces logic involving the qos_queue_group_id parameter, and helps generalize its role in the next patch.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-31  sch_htb: Fix inconsistency when leaf qdisc creation fails  (Maxim Mikityanskiy; 1 file, -9/+6)

In HTB offload mode, qdiscs of leaf classes are grafted to netdev queues. sch_htb expects the dev_queue field of these qdiscs to point to the corresponding queues. However, qdisc creation may fail, and in that case noop_qdisc is used instead. Its dev_queue doesn't point to the right queue, so sch_htb can lose track of used netdev queues, which will cause internal inconsistencies.

This commit fixes this bug by keeping track of the netdev queue inside struct htb_class. All reads of cl->leaf.q->dev_queue are replaced by the new field, the two values are synced on writes, and WARNs are added to assert equality of the two values.

The driver API has changed: when TC_HTB_LEAF_DEL needs to move a queue, the driver used to pass the old and new queue IDs to sch_htb. Now that there is a new field (offload_queue) in struct htb_class that needs to be updated on this operation, the driver will pass the old class ID to sch_htb instead (it already knows the new class ID).

Fixes: d03b195b5aa0 ("sch_htb: Hierarchical QoS hardware offload")
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20210826115425.1744053-1-maximmi@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-17  net/mlx5e: Abstract MQPRIO params  (Tariq Toukan; 1 file, -1/+1)

Abstract the MQPRIO params into a struct. Use a getter for DCB mode num_tcs.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-26  net/mlx5e: Restrict usage of mlx5e_priv in params logic functions  (Tariq Toukan; 1 file, -2/+2)

Do not use the generic struct mlx5e_priv as a parameter to param functions, as it is too generic. All calculations of a channel's params should be based mainly on struct mlx5_core_dev and struct mlx5e_params. Additional info can be passed explicitly.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-01-23  net/mlx5e: Support HTB offload  (Maxim Mikityanskiy; 1 file, -0/+984)

This commit adds support for HTB offload in the mlx5e driver.

Performance:
  NIC: Mellanox ConnectX-6 Dx
  CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (24 cores with HT)

  100 Gbit/s line rate, 500 UDP streams @ ~200 Mbit/s each
  48 traffic classes, flower used for steering
  No shaping (rate limits set to 4 Gbit/s per TC) - checking for max throughput.

  Baseline:    98.7 Gbps, 8.25 Mpps
  HTB:          6.7 Gbps, 0.56 Mpps
  HTB offload: 95.6 Gbps, 8.00 Mpps

Limitations:
  1. 256 leaf nodes, 3 levels of depth.
  2. Granularity for ceil is 1 Mbit/s. Rates are converted to weights, and the
     bandwidth is split among the siblings according to these weights. Other
     parameters for classes are not supported.

Ethtool statistics support for QoS SQs is also added. The counters are called qos_txN_*, where N is the QoS queue number (starting from 0, the numeration is separate from the normal SQs), and * is the counter name (the counters are the same as for the normal SQs).

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
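Illustration only: one plausible way per-class rates could be normalized into relative weights among siblings, as the limitation note above mentions ("rates are converted to weights"). The scale factor and rounding here are assumptions for the example, not the device's actual conversion.

#include <stdio.h>

#define WEIGHT_SCALE 100 /* assumed fixed-point scale for weights */

int main(void)
{
	const unsigned long long rates[] = { 100, 300, 600 }; /* Mbit/s per sibling */
	const int n = sizeof(rates) / sizeof(rates[0]);
	unsigned long long sum = 0;

	for (int i = 0; i < n; i++)
		sum += rates[i];

	/* Each sibling gets a weight proportional to its share of the sum. */
	for (int i = 0; i < n; i++) {
		unsigned long long w = (rates[i] * WEIGHT_SCALE + sum / 2) / sum;
		printf("sibling %d: rate %llu Mbit/s -> weight %llu/%d\n",
		       i, rates[i], w, WEIGHT_SCALE);
	}
	return 0;
}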