Age    Commit message    Author    Files    Lines
2022-10-04net/mlx5: E-switch, Don't update group if qos is not enabledChris Mi1-1/+5
Currently, the qos group is updated and qos is enabled when unregistering a devlink port. There is actually no need to update the group if qos is not enabled. Add a check to prevent unnecessarily enabling and disabling qos for every port. Signed-off-by: Chris Mi <cmi@nvidia.com> Reviewed-by: Dmytro Linkin <dlinkin@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net/mlx5: E-Switch, Allow offloading fwd dest flow table with vportRoi Dayan1-7/+9
Before this commit, a fwd dest flow table resulted in vport dests being ignored, which is incorrect since that combination is supported. With this commit the dests can be a mix of flow table and vport dests. There is still a limitation that there cannot be more than one flow table dest. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Maor Dickman <maord@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
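For illustration, a minimal sketch of the destination mix this change allows; the wrapper function, variables and vport numbers are made up, only mlx5_add_flow_rules() and the destination types are the real mlx5 core API:

	#include <linux/kernel.h>
	#include <linux/mlx5/fs.h>

	/* Forward to one flow table plus two vports in a single rule (sketch). */
	static struct mlx5_flow_handle *
	fwd_to_ft_and_vports(struct mlx5_flow_table *fdb,
			     const struct mlx5_flow_spec *spec,
			     struct mlx5_flow_act *act,
			     struct mlx5_flow_table *fwd_ft)
	{
		struct mlx5_flow_destination dest[3] = {};

		dest[0].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
		dest[0].ft = fwd_ft;		/* at most one flow table dest */
		dest[1].type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
		dest[1].vport.num = 1;		/* illustrative vport numbers  */
		dest[2].type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
		dest[2].vport.num = 2;

		return mlx5_add_flow_rules(fdb, spec, act, dest, ARRAY_SIZE(dest));
	}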
2022-10-04net/mlx5: Set default grace period based on function typeMaher Sanalla1-2/+16
Currently, the driver sets the same grace period for the fw fatal health reporter regardless of function type. Since the lower level functions are more vulnerable to fw fatal errors as a result of parent function closure/reload, set a smaller grace period for the lower level functions, as follows: 1. For ECPF: 180 seconds. 2. For PF: 60 seconds. 3. For VF/SF: 30 seconds. Signed-off-by: Maher Sanalla <msanalla@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
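A minimal sketch of that policy; the helper name is hypothetical, only the mlx5_core_is_ecpf()/mlx5_core_is_pf() checks are the driver's existing helpers:

	#include <linux/mlx5/driver.h>

	static u64 fw_fatal_grace_period_msec(const struct mlx5_core_dev *dev)
	{
		if (mlx5_core_is_ecpf(dev))
			return 180 * 1000;	/* ECPF: longest grace period  */
		if (mlx5_core_is_pf(dev))
			return 60 * 1000;	/* PF                          */
		return 30 * 1000;		/* VF/SF: most exposed, shortest */
	}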
2022-10-04net/mlx5: Start health poll at earlier stage of driver loadMoshe Shemesh3-10/+19
Start the health poll at an earlier stage, so if a fw fatal issue occurs before or during initialization commands such as init_hca or set_hca_cap, the health poll can detect it and indicate that the driver is already in an error state. Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net/mlx5e: Expose rx_oversize_pkts_buffer counterGal Pressman4-4/+32
Add the rx_oversize_pkts_buffer counter to ethtool statistics. This counter exposes the number of received packets that arrived at the RQ but were dropped because their length exceeds the software buffer size allocated by the device for incoming traffic. It might imply that the device MTU is larger than the software buffer size. Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net/mlx5e: xsk: Optimize for unaligned mode with 3072-byte framesMaxim Mikityanskiy4-3/+61
When XSK frame size is 3072 (or another power of two multiplied by 3), KLM mechanism for NIC virtual memory page mapping can be optimized by replacing it with KSM. Before this change, two KLM entries were needed to map an XSK frame that is not a power of two: one entry maps the UMEM memory up to the frame length, the other maps the rest of the stride to the garbage page. When the frame length divided by 3 is a power of two, it can be mapped using 3 KSM entries, and the fourth will map the rest of the stride to the garbage page. All 4 KSM entries are of the same size, which allows for a much faster lookup. Frame size 3072 is useful in certain use cases, because it allows packing 4 frames into 3 pages. Generally speaking, other frame sizes equal to PAGE_SIZE minus a power of two can be optimized in a similar way, but it will require many more KSMs per frame, which slows down UMRs a little bit, but more importantly may hit the limit for the maximum number of KSM entries. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
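The arithmetic behind the 3072-byte case, as a worked example (numbers only, not driver code):

	/* XSK frame size 3072 = 3 * 1024, so the stride rounds up to 4096. */
	#define FRAME_SZ	3072
	#define KSM_SZ		(FRAME_SZ / 3)	/* 1024, a power of two */
	/*
	 * Three KSM entries of 1024 bytes map the frame itself and a fourth
	 * maps the remaining 1024 bytes of the 4096-byte stride to the
	 * garbage page: four equal-sized entries instead of two unequal KLMs.
	 * Packing: 4 frames * 3072 B == 3 pages * 4096 B, i.e. 4 frames in 3 pages.
	 */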
2022-10-04net/mlx5e: xsk: Print a warning in slow configurationsMaxim Mikityanskiy1-0/+9
On striding RQ, when the XSK frame size doesn't match the MKey page size, KLM is used for memory mappings, which is a slower mechanism than MTT or KSM. It may happen in two cases: 1. Frame size is not a power of two (only possible in the unaligned mode of XSK). 2. Frame size is 2048 bytes, and the firmware doesn't support MKey pages smaller than 4096 bytes. Depending on the case, print a warning and recommend to disable striding RQ or upgrade the firmware. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net/mlx5e: xsk: Use KLM to protect frame overrun in unaligned modeMaxim Mikityanskiy4-10/+90
XSK RQs support striding RQ linear mode, but the stride size may be bigger than the XSK frame size, because: 1. The stride size must be a power of two. 2. The stride size must be equal to the UMR page size. Each XSK frame is treated as a separate page, because they aren't necessarily adjacent in physical memory, so the driver can't put more than one stride per page. 3. The minimal MTT page size is 4096 on older firmware. That means that if XSK frame size is 2048 or not a power of two, the strides may be bigger than XSK frames. Normally, it's not a problem if the hardware enforces the MTU. However, traffic between vports skips the hardware MTU check, and oversized packets may be received. If an oversized packet is bigger than the XSK frame but not bigger than the stride, it will cause overwriting of the adjacent UMEM region. If the packet takes more than one stride, they can be recycled for reuse, so it's not a problem when the XSK frame size matches the stride size. Work around the above issue by leveraging KLM to make a more fine-grained mapping. The beginning of each stride is mapped to the frame memory, and the padding up to the closest power of two is mapped to the overflow page that doesn't belong to UMEM. This way, application data corruption won't happen upon receiving packets bigger than MTU. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net/mlx5e: Improve MTT/KSM alignmentMaxim Mikityanskiy5-16/+18
Make mlx5e_mpwrq_mtts_per_wqe take into account that KSM requires smaller alignment than MTT. Ensure that there is always an even number of MTTs in a UMR WQE, so that complete octwords are formed, and no garbage is mapped. Drop extra alignment in MLX5_MTT_OCTW that may cause setting too big ucseg->xlt_octowords, also leading to mapping garbage. Generalize some calculations by introducing the MLX5_OCTWORD constant. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net/mlx5e: xsk: Use umr_mode to calculate striding RQ parametersMaxim Mikityanskiy7-72/+171
Instead of passing the unaligned flag, pass an enum that indicates the UMR mode. The next commit will add the third mode (KLM for certain configurations of XSK), which will be added to this enum instead of adding another bool flag everywhere. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net/mlx5e: xsk: Improve need_wakeup logicMaxim Mikityanskiy4-37/+23
The XSK need_wakeup mechanism allows the driver to stop busy waiting for buffers when the fill ring is empty, yield to the application and signal it that the driver needs to be woken up after the application refills the fill ring. Add protection against the race condition on the RX (refill) side: if the application refills buffers after xskrq->post_wqes is called, but before mlx5e_xsk_update_rx_wakeup, NAPI will exit, skipping taking these buffers to the hardware WQ, and the application won't wake it up again. Optimize the whole need_wakeup logic, removing unneeded flows, to compensate for this new check. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net/mlx5e: xsk: Include XSK skb_from_cqe callbacks in INDIRECT_CALLMaxim Mikityanskiy1-2/+4
XSK is a performance-critical data path. To avoid an indirect function call with a retpoline, include XSK callbacks in the INDIRECT_CALL macro, so that they are called directly in XSK flows. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net/mlx5e: xsk: Set napi_id to support busy pollingMaxim Mikityanskiy1-1/+1
xdp_rxq_info_reg should get the actual napi_id, not 0, in order to support socket busy polling properly. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
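The shape of the fix, as a hedged sketch with generic names (not the actual mlx5e call site):

	#include <linux/netdevice.h>
	#include <net/xdp.h>

	static int reg_rx_queue_for_xdp(struct xdp_rxq_info *rxq,
					struct net_device *netdev, u32 queue_index,
					struct napi_struct *napi)
	{
		/* Passing 0 instead of napi->napi_id would leave socket busy
		 * polling unable to associate this RX queue with its NAPI. */
		return xdp_rxq_info_reg(rxq, netdev, queue_index, napi->napi_id);
	}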
2022-10-04net/mlx5e: xsk: Flush RQ on XSK activation to save memoryMaxim Mikityanskiy3-5/+19
The regular RQ remains open after opening an XSK socket, in order to guarantee that closing the XSK socket never fails due to an error when reopening the regular RQ. To save memory, the regular RQ can be deactivated and flushed, releasing all pages, when an XSK socket is open. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net: ipa: update copyrightsAlex Elder36-36/+36
Some source files state copyright dates that are earlier than the last modification of the file. Change the copyright year to 2022 in all such cases. Signed-off-by: Alex Elder <elder@linaro.org> Link: https://lore.kernel.org/r/20220930224549.3503434-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net: ipa: update commentsAlex Elder8-94/+75
This patch just updates comments throughout the IPA code. Transaction state is now tracked using indexes into an array rather than linked lists, and a few comments refer to the "old way" of doing things. The description of how transactions are used was changed to refer to "operations" rather than "commands", to (hopefully) remove a possible ambiguity. IPA register offsets and fields are now handled differently as well, and the register documentation is updated to better describe the code. A few minor updates to comments were made (e.g., adding a missing word, fixing a typo or punctuation, etc.). Finally, the local macro atomic_dec_not_zero() is no longer used, so it is deleted. Signed-off-by: Alex Elder <elder@linaro.org> Link: https://lore.kernel.org/r/20220930224527.3503404-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-04net: lan966x: Fix return type of lan966x_port_xmitNathan Huckleberry1-1/+2
The ndo_start_xmit field in net_device_ops is expected to be of type netdev_tx_t (*ndo_start_xmit)(struct sk_buff *skb, struct net_device *dev). The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of lan966x_port_xmit should be changed from int to netdev_tx_t. Reported-by: Dan Carpenter <error27@gmail.com> Link: https://github.com/ClangBuiltLinux/linux/issues/1703 Cc: llvm@lists.linux.dev Signed-off-by: Nathan Huckleberry <nhuck@google.com> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/20220929182704.64438-1-nhuck@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
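A sketch of what the corrected prototype looks like (body elided); with kCFI the handler's type must match .ndo_start_xmit exactly:

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	static netdev_tx_t lan966x_port_xmit(struct sk_buff *skb,
					     struct net_device *dev)
	{
		/* ... hand the skb to the port's injection path ... */
		return NETDEV_TX_OK;	/* a netdev_tx_t value, not a bare int */
	}

	static const struct net_device_ops lan966x_port_netdev_ops = {
		.ndo_start_xmit = lan966x_port_xmit,	/* types now match */
	};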
2022-10-03Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-nextJakub Kicinski151-1402/+8321
Daniel Borkmann says: ==================== pull-request: bpf-next 2022-10-03 We've added 143 non-merge commits during the last 27 day(s) which contain a total of 151 files changed, 8321 insertions(+), 1402 deletions(-). The main changes are: 1) Add kfuncs for PKCS#7 signature verification from BPF programs, from Roberto Sassu. 2) Add support for struct-based arguments for trampoline based BPF programs, from Yonghong Song. 3) Fix entry IP for kprobe-multi and trampoline probes under IBT enabled, from Jiri Olsa. 4) Batch of improvements to veristat selftest tool in particular to add CSV output, a comparison mode for CSV outputs and filtering, from Andrii Nakryiko. 5) Add preparatory changes needed for the BPF core for upcoming BPF HID support, from Benjamin Tissoires. 6) Support for direct writes to nf_conn's mark field from tc and XDP BPF program types, from Daniel Xu. 7) Initial batch of documentation improvements for BPF insn set spec, from Dave Thaler. 8) Add a new BPF_MAP_TYPE_USER_RINGBUF map which provides single-user-space-producer / single-kernel-consumer semantics for BPF ring buffer, from David Vernet. 9) Follow-up fixes to BPF allocator under RT to always use raw spinlock for the BPF hashtab's bucket lock, from Hou Tao. 10) Allow creating an iterator that loops through only the resources of one task/thread instead of all, from Kui-Feng Lee. 11) Add support for kptrs in the per-CPU arraymap, from Kumar Kartikeya Dwivedi. 12) Add a new kfunc helper for nf to set src/dst NAT IP/port in a newly allocated CT entry which is not yet inserted, from Lorenzo Bianconi. 13) Remove invalid recursion check for struct_ops for TCP congestion control BPF programs, from Martin KaFai Lau. 14) Fix W^X issue with BPF trampoline and BPF dispatcher, from Song Liu. 15) Fix percpu_counter leakage in BPF hashtab allocation error path, from Tetsuo Handa. 16) Various cleanups in BPF selftests to use preferred ASSERT_* macros, from Wang Yufen. 17) Add invocation for cgroup/connect{4,6} BPF programs for ICMP pings, from YiFei Zhu. 18) Lift blinding decision under bpf_jit_harden = 1 to bpf_capable(), from Yauheni Kaliuta. 19) Various libbpf fixes and cleanups including a libbpf NULL pointer deref, from Xin Liu. * https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (143 commits) net: netfilter: move bpf_ct_set_nat_info kfunc in nf_nat_bpf.c Documentation: bpf: Add implementation notes documentations to table of contents bpf, docs: Delete misformatted table. selftests/xsk: Fix double free bpftool: Fix error message of strerror libbpf: Fix overrun in netlink attribute iteration selftests/bpf: Fix spelling mistake "unpriviledged" -> "unprivileged" samples/bpf: Fix typo in xdp_router_ipv4 sample bpftool: Remove unused struct event_ring_info bpftool: Remove unused struct btf_attach_point bpf, docs: Add TOC and fix formatting. bpf, docs: Add Clang note about BPF_ALU bpf, docs: Move Clang notes to a separate file bpf, docs: Linux byteswap note bpf, docs: Move legacy packet instructions to a separate file selftests/bpf: Check -EBUSY for the recurred bpf_setsockopt(TCP_CONGESTION) bpf: tcp: Stop bpf_setsockopt(TCP_CONGESTION) in init ops to recur itself bpf: Refactor bpf_setsockopt(TCP_CONGESTION) handling into another function bpf: Move the "cdg" tcp-cc check to the common sol_tcp_sockopt() bpf: Add __bpf_prog_{enter,exit}_struct_ops for struct_ops trampoline ... ==================== Link: https://lore.kernel.org/r/20221003194915.11847-1-daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-10-03net: netfilter: move bpf_ct_set_nat_info kfunc in nf_nat_bpf.cLorenzo Bianconi5-52/+106
Remove the circular dependency between the nf_nat module and the nf_conntrack one by moving the bpf_ct_set_nat_info kfunc into nf_nat_bpf.c. Fixes: 0fabd2aa199f ("net: netfilter: add bpf_ct_set_nat_info kfunc helper") Suggested-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Tested-by: Nathan Chancellor <nathan@kernel.org> Tested-by: Yauheni Kaliuta <ykaliuta@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/r/51a65513d2cda3eeb0754842e8025ab3966068d8.1664490511.git.lorenzo@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-10-03Documentation: bpf: Add implementation notes documentations to table of contentsBagas Sanjaya1-0/+2
Sphinx reported warnings on missing implementation notes documentations in the table of contents: Documentation/bpf/clang-notes.rst: WARNING: document isn't included in any toctree Documentation/bpf/linux-notes.rst: WARNING: document isn't included in any toctree Add these documentations to the table of contents (index.rst) of BPF documentation to fix the warnings. Link: https://lore.kernel.org/linux-doc/202210020749.yfgDZbRL-lkp@intel.com/ Fixes: 6c7aaffb24efbd ("bpf, docs: Move Clang notes to a separate file") Fixes: 6166da0a02cde2 ("bpf, docs: Move legacy packet instructions to a separate file") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com> Link: https://lore.kernel.org/r/20221002032022.24693-1-bagasdotme@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-10-03once: add DO_ONCE_SLOW() for sleepable contextsEric Dumazet3-2/+60
Christophe Leroy reported a ~80ms latency spike happening at first TCP connect() time. This is because __inet_hash_connect() uses get_random_once() to populate a perturbation table which became quite big after commit 4c2c8f03a5ab ("tcp: increase source port perturb table to 2^16"). get_random_once() uses DO_ONCE(), which blocks hard irqs for the duration of the operation. This patch adds DO_ONCE_SLOW() which uses a mutex instead of a spinlock for operations where we prefer to stay in process context. Then __inet_hash_connect() can use get_random_slow_once() to populate its perturbation table. Fixes: 4c2c8f03a5ab ("tcp: increase source port perturb table to 2^16") Fixes: 190cc82489f4 ("tcp: change source port randomizarion at connect() time") Reported-by: Christophe Leroy <christophe.leroy@csgroup.eu> Link: https://lore.kernel.org/netdev/CANn89iLAEYBaoYajy0Y9UmGFff5GPxDUoG-ErVB2jDdRNQ5Tug@mail.gmail.com/T/#t Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Willy Tarreau <w@1wt.eu> Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: David S. Miller <davem@davemloft.net>
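A hedged sketch of the intended use, mirroring get_random_once(); the helper name comes from the commit description, and the table here is only illustrative:

	#include <linux/net.h>
	#include <linux/once.h>

	static u32 table_perturb[1 << 16];	/* illustrative 256 KiB table */

	static void init_perturb_table(void)
	{
		/* The one-time fill runs under a mutex in process context, so
		 * hard irqs are no longer disabled for the whole operation. */
		get_random_slow_once(table_perturb, sizeof(table_perturb));
	}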
2022-10-03Merge branch 'octeontx2-macsec-offload'David S. Miller15-7/+6681
Subbaraya Sundeep says: ==================== net: Introduce macsec hardware offload for cn10k platform CN10K-B and CNF10K-B variants of CN10K silicon have a macsec block (MCS) to encrypt and decrypt packets at the MAC/hardware level. This block is a global resource with hardware resources like SecYs, SCs and SAs and is in between the NIX block and the RPM LMAC. CN10K-B silicon has only one MCS block which receives packets from all LMACs whereas CNF10K-B has seven MCS blocks for seven LMACs. Both MCS blocks are similar in operation except for a few register offsets, and some configurations require writing to different registers. This patchset introduces macsec hardware offloading support. The AF driver manages hardware resources and the PF driver consumes them when macsec hardware offloading is needed. Patch 1 adds a basic pci driver for both CN10K-B and CNF10K-B silicons and initializes the hardware block. Patches 2 and 3 add mailboxes to init, reset and manage resources of the MCS block. Patch 4 adds a low priority rule in the MCS TCAM so that traffic which does not need macsec processing can be sent/received. Patch 5 adds macsec stats collection support. Patch 6 adds interrupt handling support; any event in which an AF consumer is interested can be notified via mbox notification. Patch 7 adds debugfs support which helps in debugging the packet path. Patch 8 introduces the macsec hardware offload feature for the PF netdev driver. v3 changes: Fixed clang and sparse warnings v2 changes: Fix build error by changing #ifdef CONFIG_MACSEC to #if IS_ENABLED(CONFIG_MACSEC) ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03octeontx2-pf: mcs: Introduce MACSEC hardware offloadingSubbaraya Sundeep5-0/+1776
This patch introduces the macsec offload feature to the cn10k PF netdev driver. The macsec offload ops like adding, deleting and updating SecYs, SCs, SAs and stats are supported. XPN support will be added in later patches. Some stats use the same counter in hardware, which means that based on the SecY mode the same counter represents a different stat. Hence when the SecY mode/policy is changed, a snapshot of the current stats is captured. Also, there is no provision to specify a unique flow-id/SCI per packet to the hardware, hence a different MAC address needs to be set for each macsec interface. Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
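For orientation, a rough sketch of how a netdev advertises MACsec offload; the cn10k_* names are placeholders, while struct macsec_ops, the mdo_* hooks and NETIF_F_HW_MACSEC are the kernel's existing offload API:

	#include <linux/netdevice.h>
	#include <net/macsec.h>

	static int cn10k_mdo_add_secy(struct macsec_context *ctx)
	{
		/* program a hardware SecY for ctx->secy (stub) */
		return 0;
	}

	static const struct macsec_ops cn10k_mcs_ops = {
		.mdo_add_secy = cn10k_mdo_add_secy,
		/* ... upd/del SecY, add/upd/del RX SC, RX SA, TX SA, stats hooks ... */
	};

	static void cn10k_mcs_advertise(struct net_device *netdev)
	{
		netdev->features |= NETIF_F_HW_MACSEC;
		netdev->macsec_ops = &cn10k_mcs_ops;
	}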
2022-10-03octeontx2-af: cn10k: mcs: Add debugfs supportGeetha sowjanya2-0/+350
This patch adds debugfs entries to dump MCS secy, sc, sa, flowid and port stats. This helps in debugging the packet path and in figuring out where exactly a packet was dropped. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03octeontx2-af: cn10k: mcs: Handle MCS block interruptsGeetha sowjanya8-12/+865
Hardware triggers an interrupt for events like the PN wrapping to zero or the PN crossing the set threshold. This interrupt is received by the MCS AF. The MCS AF then finds the PF/VF to which the SA is mapped and notifies it using the mcs_intr_notify mbox message. Using the mcs_intr_cfg mbox, a PF/VF can configure the list of interrupts for which it wants to receive notifications from the AF. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03octeontx2-af: cn10k: mcs: Support for stats collectionGeetha sowjanya6-0/+1048
Add mailbox messages to return the resource stats to the caller. Stats for SecYs, SCs and SAs as per the macsec standard, TCAM flow id hit/miss counters, and a mailbox to clear the stats are implemented. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03octeontx2-af: cn10k: mcs: Install a default TCAM for normal trafficGeetha sowjanya3-0/+69
Out of all the TCAM entries, reserve the last TX and RX TCAM flow entries (low priority) so that normal traffic can be sent out and received. Traffic which needs macsec processing hits the high priority TCAM flows. Also install an FLR handler to free the resources allocated for a PF/VF. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03octeontx2-af: cn10k: mcs: Manage the MCS block hardware resourcesGeetha sowjanya6-1/+1530
To establish a macsec connection association, the netdev driver needs hardware resources like SecYs, TCAM flows, SCs and SAs. This patch manages allocating, freeing and configuring those resources. AF consumers can request resources and configure them via these mailbox messages. AF can allocate until it runs out of hardware resources. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03octeontx2-af: cn10k: mcs: Add mailboxes for port related operationsGeetha sowjanya5-4/+376
There is a set of configurations to be done at the MCS port level, like bringing a port out of reset and making a port operational or bypassed. This patch adds all the port related mailbox message handlers so that AF consumers can use them. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03octeontx2-af: cn10k: Introduce driver for macsec block.Geetha sowjanya8-1/+678
CN10K-B and CNF10K-B have a macsec block (MCS) to encrypt and decrypt packets at the MAC level. This block is a global resource with hardware resources like SecYs, SCs and SAs and sits between the NIX block and the RPM LMAC. CN10K-B silicon has only one MCS block which receives packets from all LMACs, whereas CNF10K-B has seven MCS blocks for seven LMACs. Both MCS blocks are similar in operation except for a few register offsets, and some configurations require writing to different registers. Those differences between the IPs are handled using separate ops. This patch adds the basic driver and does the initial hardware calibration and parser configuration. Signed-off-by: Geetha sowjanya <gakula@marvell.com> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03Merge branch 'lan966x-police-mirroring'David S. Miller7-1/+660
Horatiu Vultur says: ==================== net: lan966x: Add police and mirror using tc-matchall Add tc-matchall classifier offload support both for ingress and egress. For this add support for the port police and port mirroring action support. Port police can happen only on ingress while port mirroring is supported both on ingress and egress ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03net: lan966x: Add port mirroring support using tc-matchallHoratiu Vultur5-1/+193
Add support for port mirroring. It is possible to mirror only one port at a time and it is possible to have both ingress and egress mirroring. Frames injected by the CPU don't get egress mirrored because they are bypassing the analyzer module. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03net: lan966x: Add port police support using tc-matchallHoratiu Vultur6-1/+468
Add support for port policing. It is possible to police only on the ingress side. Adding police support also required adding tc-matchall classifier offload support. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03net: fec: using page pool to manage RX buffersShenwei Wang3-58/+119
This patch optimizes the RX buffer management by using the page pool. The purpose of this change is to prepare for the following XDP support. The current driver uses one frame per page for easy management.

Added the __maybe_unused attribute to the following functions to avoid compile warnings. Those functions will be removed by a separate patch once this page pool solution is accepted.
- fec_enet_new_rxbdp
- fec_enet_copybreak

The following is a comparison between the page pool implementation and the original implementation (non page pool).

--- small packet (64 bytes) testing results are almost the same, no matter what the implementation is, on both i.MX8 and i.MX6SX platforms ---

shenwei@5810:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1 -l 64
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[ 1] local 10.81.17.20 port 39728 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 1] 0.0000-1.0000 sec  37.0 MBytes  311 Mbits/sec
[ 1] 1.0000-2.0000 sec  36.6 MBytes  307 Mbits/sec
[ 1] 2.0000-3.0000 sec  37.2 MBytes  312 Mbits/sec
[ 1] 3.0000-4.0000 sec  37.1 MBytes  312 Mbits/sec
[ 1] 4.0000-5.0000 sec  37.2 MBytes  312 Mbits/sec
[ 1] 5.0000-6.0000 sec  37.2 MBytes  312 Mbits/sec
[ 1] 6.0000-7.0000 sec  37.2 MBytes  312 Mbits/sec
[ 1] 7.0000-8.0000 sec  37.2 MBytes  312 Mbits/sec
[ 1] 0.0000-8.0943 sec   299 MBytes  310 Mbits/sec

--- Page Pool implementation on i.MX8 ----
shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[ 1] local 10.81.17.20 port 43204 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 1] 0.0000-1.0000 sec   111 MBytes  933 Mbits/sec
[ 1] 1.0000-2.0000 sec   111 MBytes  934 Mbits/sec
[ 1] 2.0000-3.0000 sec   112 MBytes  935 Mbits/sec
[ 1] 3.0000-4.0000 sec   111 MBytes  933 Mbits/sec
[ 1] 4.0000-5.0000 sec   111 MBytes  934 Mbits/sec
[ 1] 5.0000-6.0000 sec   111 MBytes  933 Mbits/sec
[ 1] 6.0000-7.0000 sec   111 MBytes  931 Mbits/sec
[ 1] 7.0000-8.0000 sec   112 MBytes  935 Mbits/sec
[ 1] 8.0000-9.0000 sec   111 MBytes  933 Mbits/sec
[ 1] 9.0000-10.0000 sec  112 MBytes  935 Mbits/sec
[ 1] 0.0000-10.0077 sec 1.09 GBytes  933 Mbits/sec

--- Non Page Pool implementation on i.MX8 ----
shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[ 1] local 10.81.17.20 port 49154 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 1] 0.0000-1.0000 sec   104 MBytes  868 Mbits/sec
[ 1] 1.0000-2.0000 sec   105 MBytes  878 Mbits/sec
[ 1] 2.0000-3.0000 sec   105 MBytes  881 Mbits/sec
[ 1] 3.0000-4.0000 sec   105 MBytes  879 Mbits/sec
[ 1] 4.0000-5.0000 sec   105 MBytes  878 Mbits/sec
[ 1] 5.0000-6.0000 sec   105 MBytes  878 Mbits/sec
[ 1] 6.0000-7.0000 sec   104 MBytes  875 Mbits/sec
[ 1] 7.0000-8.0000 sec   104 MBytes  875 Mbits/sec
[ 1] 8.0000-9.0000 sec   104 MBytes  873 Mbits/sec
[ 1] 9.0000-10.0000 sec  104 MBytes  875 Mbits/sec
[ 1] 0.0000-10.0073 sec 1.02 GBytes  875 Mbits/sec

--- Page Pool implementation on i.MX6SX ----
shenwei@5810:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[ 1] local 10.81.17.20 port 57288 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 1] 0.0000-1.0000 sec  78.8 MBytes  661 Mbits/sec
[ 1] 1.0000-2.0000 sec  82.5 MBytes  692 Mbits/sec
[ 1] 2.0000-3.0000 sec  82.4 MBytes  691 Mbits/sec
[ 1] 3.0000-4.0000 sec  82.4 MBytes  691 Mbits/sec
[ 1] 4.0000-5.0000 sec  82.5 MBytes  692 Mbits/sec
[ 1] 5.0000-6.0000 sec  82.4 MBytes  691 Mbits/sec
[ 1] 6.0000-7.0000 sec  82.5 MBytes  692 Mbits/sec
[ 1] 7.0000-8.0000 sec  82.4 MBytes  691 Mbits/sec
[ 1] 8.0000-9.0000 sec  82.4 MBytes  691 Mbits/sec
[ 1] 9.0000-9.5506 sec  45.0 MBytes  686 Mbits/sec
[ 1] 0.0000-9.5506 sec   783 MBytes  688 Mbits/sec

--- Non Page Pool implementation on i.MX6SX ----
shenwei@5810:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[ 1] local 10.81.17.20 port 36486 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 1] 0.0000-1.0000 sec  70.5 MBytes  591 Mbits/sec
[ 1] 1.0000-2.0000 sec  64.5 MBytes  541 Mbits/sec
[ 1] 2.0000-3.0000 sec  73.6 MBytes  618 Mbits/sec
[ 1] 3.0000-4.0000 sec  73.6 MBytes  618 Mbits/sec
[ 1] 4.0000-5.0000 sec  72.9 MBytes  611 Mbits/sec
[ 1] 5.0000-6.0000 sec  73.4 MBytes  616 Mbits/sec
[ 1] 6.0000-7.0000 sec  73.5 MBytes  617 Mbits/sec
[ 1] 7.0000-8.0000 sec  73.4 MBytes  616 Mbits/sec
[ 1] 8.0000-9.0000 sec  73.4 MBytes  616 Mbits/sec
[ 1] 9.0000-10.0000 sec  73.9 MBytes  620 Mbits/sec
[ 1] 0.0000-10.0174 sec  723 MBytes  605 Mbits/sec

Signed-off-by: Shenwei Wang <shenwei.wang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
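For context, a hedged sketch of the page-pool RX setup described above; the helper and parameter values are illustrative, not the fec driver's actual code:

	#include <linux/dma-mapping.h>
	#include <net/page_pool.h>
	#include <net/xdp.h>

	static struct page_pool *rx_page_pool_create(struct device *dev, int ring_size)
	{
		struct page_pool_params pp = {
			.order		= 0,		/* one RX frame per page */
			.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
			.pool_size	= ring_size,
			.nid		= NUMA_NO_NODE,
			.dev		= dev,
			.dma_dir	= DMA_FROM_DEVICE,
			.offset		= XDP_PACKET_HEADROOM,
			.max_len	= PAGE_SIZE - XDP_PACKET_HEADROOM,
		};

		/* RX ring refills then draw pages with page_pool_dev_alloc_pages(). */
		return page_pool_create(&pp);
	}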
2022-10-03net: Remove DECnet leftovers from flow.h.Guillaume Nault1-26/+0
DECnet was removed by commit 1202cdd66531 ("Remove DECnet support from kernel"). Let's also remove its flow structure. Compile-tested only (allmodconfig). Signed-off-by: Guillaume Nault <gnault@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03gro: add support of (hw)gro packets to gro stackCoco Li2-6/+29
The current GRO stack only supports incoming packets containing one frame/MSS. This patch changes GRO to also accept packets that are already GRO packets. HW-GRO (aka RSC for some vendors) is very often limited in presence of interleaved packets. The Linux SW GRO stack can complete the job and provide larger GRO packets, thus reducing the rate of ACK packets and cpu overhead. This also means BIG TCP can still be used, even if HW-GRO/RSC was able to cook ~64 KB GRO packets. v2: fix logic in tcp_gro_receive() Only support TCP for the moment (Paolo) Co-Developed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Coco Li <lixiaoyan@google.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03Merge branch 'mptcp-fastclose'David S. Miller3-62/+217
Mat Martineau says: ==================== mptcp: Fastclose edge cases and error handling MPTCP has existing code to use the MP_FASTCLOSE option header, which works like a RST for the MPTCP-level connection (regular RSTs only affect specific subflows in MPTCP). This series has some improvements for fastclose. Patch 1 aligns fastclose socket error handling with TCP RST behavior on TCP sockets. Patch 2 adds use of MP_FASTCLOSE in some more edge cases, like file descriptor close, FIN_WAIT timeout, and when the socket has unread data. Patch 3 updates the fastclose self tests. Patch 4 does not change any code, just fixes some outdated comments. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03mptcp: update misleading comments.Paolo Abeni1-7/+7
The MPTCP data path is quite complex and hard to understand even without foggy comments that refer to since-modified code and/or were completely misleading from the beginning. Update a few of them to more accurately describe the current status. Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03selftests: mptcp: update and extend fastclose test-casesPaolo Abeni2-25/+130
After the previous patches, the MPTCP protocol can generate fast-closes on both ends of the connection. Rework the relevant test-case to carefully trigger the fast-close code-path on a single end at a time, while ensuring that a predictable amount of data is spooled on both ends. Additionally add another test-case for the passive socket fast-close. Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03mptcp: use fastclose on more edge scenariosPaolo Abeni1-19/+44
Daire reported a user-space application hang-up when the peer is forcibly closed before the data transfer completion. The relevant application expects the peer to either do an application-level clean shutdown or a transport-level connection reset. We can accommodate such users by extending the fastclose usage: at fd close time, if the msk socket has some unread data, and at FIN_WAIT timeout. Note that at MPTCP close time we must ensure that the TCP subflows will reset: set the linger socket option to a suitable value. Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
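A userspace illustration of the linger behaviour the last sentence relies on (not MPTCP code): with SO_LINGER enabled and a zero timeout, close() sends a reset instead of a FIN, which is what the subflows need here.

	#include <sys/socket.h>
	#include <unistd.h>

	static void close_with_reset(int fd)
	{
		struct linger lg = { .l_onoff = 1, .l_linger = 0 };

		setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
		close(fd);		/* RST goes out immediately */
	}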
2022-10-03mptcp: propagate fastclose errorPaolo Abeni1-11/+36
When an mptcp socket is closed due to an incoming FASTCLOSE option, no specific sk_err is set and a later syscall will usually fail with EPIPE. Align the current fastclose error handling with TCP reset, properly setting the socket error according to the current msk state and propagating that error. Additionally, sendmsg() currently does not handle the sk_err properly, always returning EPIPE. Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03Merge branch 'RollBall-Hilink-Turris-10G-copper-SFP-support'David S. Miller12-141/+797
Marek Behún says: ==================== RollBall / Hilink / Turris 10G copper SFP support I am resurrecting my attempt to add support for RollBall / Hilink / Turris 10G copper SFP modules. The modules contain a Marvell 88X3310 PHY, which can communicate with the system via sgmii, 2500base-x, 5gbase-r, 10gbase-r or usxgmii mode. Some of the patches I've taken from Russell King's net-queue [1] (with some rebasing). The important changes from my previous attempts are: - I am including the changes needed to phylink and the marvell10g driver, so that the 88X3310 PHY is configured to use PHY modes supported by the host (the PHY defaults to use 10gbase-r only on the host's side) - I have changed the patch that informs phylib about the interfaces supported by the host (patch 5 of this series): it now fills in the phydev->host_interfaces member only when connecting a PHY that is inside a SFP module. This may change in the future. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03net: sfp: add support for multigig RollBall transceiversMarek Behún1-10/+39
This adds support for multigig copper SFP modules from RollBall/Hilink. These modules have a specific way to access clause 45 registers of the internal PHY. We also need to wait at least 22 seconds after deasserting TX disable before accessing the PHY. The code waits for 25 seconds just to be sure. Signed-off-by: Marek Behún <kabel@kernel.org> Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03net: phy: mdio-i2c: support I2C MDIO protocol for RollBall SFP modulesMarek Behún3-7/+313
Some multigig SFPs from RollBall and Hilink do not expose functional MDIO access to the internal PHY of the SFP via I2C address 0x56 (although there seems to be read-only clause 22 access on this address). Instead, the PHY of these SFPs can be accessed over I2C via the SFP Enhanced Digital Diagnostic Interface at I2C address 0x51. The SFP_PAGE has to be set to 3 and the password must be filled with 0xff bytes for this PHY communication to work. This extends the mdio-i2c driver to support this protocol by adding a special parameter to the mdio_i2c_alloc function via which this RollBall protocol can be selected. Signed-off-by: Marek Behún <kabel@kernel.org> Cc: Andrew Lunn <andrew@lunn.ch> Cc: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
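A hedged sketch of the access sequence described above, not the mdio-i2c driver's code; the 0x51 client is the module's diagnostics (A2h) address, and the password/page-select byte offsets are my reading of SFF-8472, so treat them as assumptions:

	#include <linux/i2c.h>

	static int rollball_select_phy_page(struct i2c_client *ddm)	/* addr 0x51 */
	{
		static const u8 password[4] = { 0xff, 0xff, 0xff, 0xff };
		int ret;

		ret = i2c_smbus_write_i2c_block_data(ddm, 123, sizeof(password),
						     password);	/* password entry area */
		if (ret < 0)
			return ret;

		return i2c_smbus_write_byte_data(ddm, 127, 3);	/* SFP_PAGE = 3 */
	}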
2022-10-03net: sfp: create/destroy I2C mdiobus before PHY probe/after PHY releaseMarek Behún2-13/+57
Instead of configuring the I2C mdiobus when SFP driver is probed, create/destroy the mdiobus before the PHY is probed for/after it is released. This way we can tell the mdio-i2c code which protocol to use for each SFP transceiver. Move the code that determines MDIO I2C protocol from sfp_sm_probe_for_phy() to sfp_sm_mod_probe(), where most of the SFP ID parsing is done. Don't allocate I2C bus if no PHY is expected. Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03net: sfp: Add and use macros for SFP quirks definitionsMarek Behún1-35/+26
Add macros SFP_QUIRK(), SFP_QUIRK_M() and SFP_QUIRK_F() for defining SFP quirk table entries. Use them to deduplicate the code a little bit. Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03net: phylink: allow attaching phy for SFP modules on 802.3z modeMarek Behún1-4/+1
Some SFPs may contain an internal PHY which may in some cases want to connect with the host interface in 1000base-x/2500base-x mode. Do not fail if such a PHY is being attached in one of these PHY interface modes. Signed-off-by: Marek Behún <kabel@kernel.org> Reviewed-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Pali Rohár <pali@kernel.org> Cc: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03net: phy: marvell10g: select host interface configurationRussell King1-0/+112
Select the host interface configuration according to the capabilities of the host if the host provided them. This is currently provided only when connecting a PHY that is inside an SFP. The PHY supports several configurations of host communication: - always communicate with host in 10gbase-r, even if copper speed is lower (rate matching mode), - the same as above but use xaui/rxaui instead of 10gbase-r, - switch host SerDes mode between 10gbase-r, 5gbase-r, 2500base-x and sgmii according to copper speed, - the same as above but use xaui/rxaui instead of 10gbase-r. This mode of host communication, called MACTYPE, is by default selected by strapping pins, but it can be changed in software. This adds support for selecting this mode according to which modes are supported by the host. This allows the kernel to: - support SFP modules with 88X33X0 or 88E21X0 inside them Note: we use mv3310_select_mactype() for both 88X3310 and 88X3340, although 88X3340 does not support XAUI. This is not a problem because 88X3340 does not declare XAUI in its supported_interfaces, and so this function will never choose that MACTYPE. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> [ rebase, updated, also added support for 88E21X0 ] Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
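A sketch of the selection idea: walk a preference-ordered list of host-side modes and pick the first one the host advertised in phydev->host_interfaces. The list and helper are illustrative, not marvell10g's actual tables:

	#include <linux/kernel.h>
	#include <linux/phy.h>

	static int best_host_interface(struct phy_device *phydev)
	{
		static const int preferred[] = {	/* most to least capable */
			PHY_INTERFACE_MODE_USXGMII,
			PHY_INTERFACE_MODE_10GBASER,
			PHY_INTERFACE_MODE_5GBASER,
			PHY_INTERFACE_MODE_2500BASEX,
			PHY_INTERFACE_MODE_SGMII,
		};
		unsigned int i;

		for (i = 0; i < ARRAY_SIZE(preferred); i++)
			if (test_bit(preferred[i], phydev->host_interfaces))
				return preferred[i];

		return -1;	/* nothing advertised: keep the strap-selected MACTYPE */
	}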
2022-10-03net: phy: marvell10g: Use tabs instead of spaces for indentationMarek Behún1-9/+9
Some register definitions were defined with spaces used for indentation. Change them to tabs. Signed-off-by: Marek Behún <kabel@kernel.org> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-10-03net: phylink: pass supported host PHY interface modes to phylib for SFP's PHYsMarek Behún2-0/+21
Pass the supported PHY interface types to phylib if the PHY we are connecting is inside a SFP, so that the PHY driver can select an appropriate host configuration mode for their interface according to the host capabilities. For example the Marvell 88X3310 PHY inside RollBall SFP modules defaults to 10gbase-r mode on host's side, and the marvell10g driver currently does not change this setting. But a host may not support 10gbase-r. For example Turris Omnia only supports sgmii, 1000base-x and 2500base-x modes. The PHY can be configured to use those modes, but in order for the PHY driver to do that, it needs to know which modes are supported. Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>