path: root/drivers/net/ethernet/engleder
2023-05-31  net/sched: taprio: replace tc_taprio_qopt_offload :: enable with a "cmd" enum  (Vladimir Oltean, 2 files changed, -7/+9)
Inspired from struct flow_cls_offload :: cmd, in order for taprio to be able to report statistics (which is future work), it seems that we need to drill one step further with the ndo_setup_tc(TC_SETUP_QDISC_TAPRIO) multiplexing, and pass the command as part of the common portion of the muxed structure. Since we already have an "enable" variable in tc_taprio_qopt_offload, refactor all drivers to check for "cmd" instead of "enable", and reject every other command except "replace" and "destroy" - to be future proof. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com> # for lan966x Acked-by: Kurt Kanzenbach <kurt@linutronix.de> # hellcreek Reviewed-by: Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com> Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
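For illustration, a minimal sketch of the driver pattern this refactor asks for, assuming hypothetical helpers tsnep_taprio_replace()/tsnep_taprio_destroy():

static int tsnep_setup_tc_taprio(struct net_device *netdev,
                                 struct tc_taprio_qopt_offload *qopt)
{
        switch (qopt->cmd) {
        case TAPRIO_CMD_REPLACE:
                return tsnep_taprio_replace(netdev, qopt);
        case TAPRIO_CMD_DESTROY:
                return tsnep_taprio_destroy(netdev);
        default:
                /* be future proof: reject commands added later (e.g. stats)
                 * until the driver explicitly supports them
                 */
                return -EOPNOTSUPP;
        }
}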
2023-04-25  tsnep: Add XDP socket zero-copy TX support  (Gerhard Engleder, 2 files changed, -11/+121)
Send and complete XSK pool frames within TX NAPI context. NAPI context is triggered by ndo_xsk_wakeup.

Test results with A53 1.2GHz:

xdpsock txonly copy mode, 64 byte frames:
           pps         pkts        1.00
  tx       284,409     11,398,144
Two CPUs with 100% and 10% utilization.

xdpsock txonly zero-copy mode, 64 byte frames:
           pps         pkts        1.00
  tx       511,929     5,890,368
Two CPUs with 100% and 1% utilization.

xdpsock l2fwd copy mode, 64 byte frames:
           pps         pkts        1.00
  rx       248,985     7,315,885
  tx       248,921     7,315,885
Two CPUs with 100% and 10% utilization.

xdpsock l2fwd zero-copy mode, 64 byte frames:
           pps         pkts        1.00
  rx       254,735     3,039,456
  tx       254,735     3,039,456
Two CPUs with 100% and 4% utilization.

Packet rate increases and CPU utilization is reduced in both cases.

Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
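A hedged sketch of what draining an XSK pool from TX NAPI can look like with the generic helpers from <net/xdp_sock_drv.h>; the ring-space check, the descriptor-write helper and the xsk_pool field name are assumptions, not the driver's actual code:

static void tsnep_xdp_xmit_zc(struct tsnep_tx *tx)
{
        struct xsk_buff_pool *pool = tx->xsk_pool;
        struct xdp_desc desc;

        while (tsnep_tx_ring_available(tx) &&          /* hypothetical */
               xsk_tx_peek_desc(pool, &desc)) {
                dma_addr_t dma = xsk_buff_raw_get_dma(pool, desc.addr);

                /* frame memory stays in the UMEM; only DMA address and
                 * length are handed to the hardware descriptor
                 */
                xsk_buff_raw_dma_sync_for_device(pool, dma, desc.len);
                tsnep_xdp_xmit_frame_dma(tx, dma, desc.len); /* hypothetical */
        }

        /* tell the pool how many TX descriptors were consumed */
        xsk_tx_release(pool);
}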
2023-04-25  tsnep: Add XDP socket zero-copy RX support  (Gerhard Engleder, 3 files changed, -14/+556)
Add support for XSK zero-copy to the RX path. The setup of the XSK pool can be done at runtime. If the netdev is running, then the queue must be disabled and enabled during reconfiguration. This can be done easily with functions introduced in previous commits.

A more important property is that, if the netdev is running, then the setup of the XSK pool shall not stop the netdev in case of errors. A broken netdev after a failed XSK pool setup is bad behavior. Therefore, the allocation and setup of resources during XSK pool setup is done only before any queue is disabled. Additionally, freeing and later allocation of resources is eliminated in some cases. Page pool entries are kept for later use. Two memory models are registered in parallel. As a result, the XSK pool setup cannot fail during queue reconfiguration.

In contrast to other drivers, XSK pool setup and XDP BPF program setup are separate actions. XSK pool setup can be done without any XDP BPF program. The XDP BPF program can be added, removed or changed without any reconfiguration of the XSK pool.

Test results with A53 1.2GHz:

xdpsock rxdrop copy mode, 64 byte frames:
           pps         pkts        1.00
  rx       856,054     10,625,775
Two CPUs with both 100% utilization.

xdpsock rxdrop zero-copy mode, 64 byte frames:
           pps         pkts        1.00
  rx       889,388     4,615,284
Two CPUs with 100% and 20% utilization.

Packet rate increases and CPU utilization is reduced. The 100% CPU load seems to be the base load; this load is consumed by ksoftirqd just for dropping the generated packets without xdpsock running.

Using the batch API reduced CPU utilization slightly, but measurements are not stable enough to provide meaningful numbers.

Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-25  tsnep: Move skb receive action to separate function  (Gerhard Engleder, 1 file changed, -16/+23)
The function tsnep_rx_poll() is already pretty long and the skb receive action can be reused for XSK zero-copy support. Move page based skb receive to separate function. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-25  tsnep: Add functions for queue enable/disable  (Gerhard Engleder, 1 file changed, -33/+64)
Move queue enable and disable code to separate functions. This way the activation and deactivation of the queues are defined actions, which can be used in future execution paths. These functions will be used for the queue reconfiguration at runtime, which is necessary for XSK zero-copy support. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-25  tsnep: Rework TX/RX queue initialization  (Gerhard Engleder, 1 file changed, -43/+51)
Make initialization of TX and RX queues less dynamic by moving some initialization from netdev open/close to device probing. Additionally, move some initialization code to separate functions to enable future use in other execution paths. This is done as preparation for queue reconfigure at runtime, which is necessary for XSK zero-copy support. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-25  tsnep: Replace modulo operation with mask  (Gerhard Engleder, 2 files changed, -14/+15)
The TX/RX ring size is static and a power of 2, which allows the compiler to optimize the modulo operation into a mask operation. Make this optimization explicit in the code and don't rely on the compiler. CPU utilisation during high packet rates has not changed, so no performance improvement has been measured. But it is best practice to avoid modulo operations. Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
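A minimal sketch of the idiom, with illustrative names:

#define TSNEP_RING_SIZE 256                     /* must stay a power of two */
#define TSNEP_RING_MASK (TSNEP_RING_SIZE - 1)

static inline int tsnep_ring_next(int index)
{
        /* was: (index + 1) % TSNEP_RING_SIZE */
        return (index + 1) & TSNEP_RING_MASK;
}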
2023-03-23  ethernet: remove superfluous clearing of phydev  (Wolfram Sang, 1 file changed, -1/+0)
phy_disconnect() calls phy_detach() which already clears 'phydev' if it is attached to a struct net_device. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Link: https://lore.kernel.org/r/20230321131745.27688-1-wsa+renesas@sang-engineering.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-11  Daniel Borkmann says:  (Jakub Kicinski, 1 file changed, -0/+4)
====================
pull-request: bpf-next 2023-02-11

We've added 96 non-merge commits during the last 14 day(s) which contain a total of 152 files changed, 4884 insertions(+), 962 deletions(-).

There is a minor conflict in drivers/net/ethernet/intel/ice/ice_main.c between commit 5b246e533d01 ("ice: split probe into smaller functions") from the net-next tree and commit 66c0e13ad236 ("drivers: net: turn on XDP features") from the bpf-next tree. Remove the hunk given ice_cfg_netdev() is otherwise there a 2nd time, and add XDP features to the existing ice_cfg_netdev() one:

  [...]
  ice_set_netdev_features(netdev);
  netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
                         NETDEV_XDP_ACT_XSK_ZEROCOPY;
  ice_set_ops(netdev);
  [...]

Stephen's merge conflict mail: https://lore.kernel.org/bpf/20230207101951.21a114fa@canb.auug.org.au/

The main changes are:

1) Add support for BPF trampoline on s390x which finally allows to remove many test cases from the BPF CI's DENYLIST.s390x, from Ilya Leoshkevich.
2) Add multi-buffer XDP support to ice driver, from Maciej Fijalkowski.
3) Add capability to export the XDP features supported by the NIC. Along with that, add a XDP compliance test tool, from Lorenzo Bianconi & Marek Majtyka.
4) Add __bpf_kfunc tag for marking kernel functions as kfuncs, from David Vernet.
5) Add a deep dive documentation about the verifier's register liveness tracking algorithm, from Eduard Zingerman.
6) Fix and follow-up cleanups for resolve_btfids to be compiled as a host program to avoid cross compile issues, from Jiri Olsa & Ian Rogers.
7) Batch of fixes to the BPF selftest for xdp_hw_metadata which resulted when testing on different NICs, from Jesper Dangaard Brouer.
8) Fix libbpf to better detect kernel version code on Debian, from Hao Xiang.
9) Extend libbpf to add an option for when the perf buffer should wake up, from Jon Doron.
10) Follow-up fix on xdp_metadata selftest to just consume on TX completion, from Stanislav Fomichev.
11) Extend the kfuncs.rst document with description on kfunc lifecycle & stability expectations, from David Vernet.
12) Fix bpftool prog profile to skip attaching to offline CPUs, from Tonghao Zhang.
====================

Link: https://lore.kernel.org/r/20230211002037.8489-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-06  net/sched: taprio: only pass gate mask per TXQ for igc, stmmac, tsnep, am65_cpsw  (Vladimir Oltean, 1 file changed, -0/+21)
There are 2 classes of in-tree drivers currently:

- those who act upon struct tc_taprio_sched_entry :: gate_mask as if it holds a bit mask of TXQs
- those who act upon the gate_mask as if it holds a bit mask of TCs

When it comes to the standard, IEEE 802.1Q-2018 does say this in the second paragraph of section 8.6.8.4 Enhancements for scheduled traffic:

| A gate control list associated with each Port contains an ordered list
| of gate operations. Each gate operation changes the transmission gate
| state for the gate associated with each of the Port's traffic class
| queues and allows associated control operations to be scheduled.

In typically obtuse language, it refers to a "traffic class queue" rather than a "traffic class" or a "queue". But careful reading of 802.1Q clarifies that "traffic class" and "queue" are in fact synonymous (see 8.6.6 Queuing frames):

| A queue in this context is not necessarily a single FIFO data structure.
| A queue is a record of all frames of a given traffic class awaiting
| transmission on a given Bridge Port. The structure of this record is not
| specified.

In other words, their definition of "queue" isn't the Linux TX queue. The gate_mask really is input into taprio via its UAPI as a mask of traffic classes, but taprio_sched_to_offload() converts it into a TXQ mask.

The breakdown of drivers which handle TC_SETUP_QDISC_TAPRIO is:

- hellcreek, felix, sja1105: these are DSA switches, it's not even very clear what TXQs correspond to, other than purely software constructs. Only the mqprio configuration with 8 TCs and 1 TXQ per TC makes sense. So it's fine to convert these to a gate mask per TC.
- enetc: I have the hardware and can confirm that the gate mask is per TC, and affects all TXQs (BD rings) configured for that priority.
- igc: in igc_save_qbv_schedule(), the gate_mask is clearly interpreted to be per-TXQ.
- tsnep: Gerhard Engleder clarifies that even though this hardware supports at most 1 TXQ per TC, the TXQ indices may be different from the TC values themselves, and it is the TXQ indices that matter to this hardware. So keep it per-TXQ as well.
- stmmac: I have a GMAC datasheet, and in the EST section it does specify that the gate events are per TXQ rather than per TC.
- lan966x: again, this is a switch, and while not a DSA one, the way in which it implements lan966x_mqprio_add() - by only allowing num_tc == NUM_PRIO_QUEUES (8) - makes it clear to me that TXQs are a purely software construct here as well. They seem to map 1:1 with TCs.
- am65_cpsw: from looking at am65_cpsw_est_set_sched_cmds(), I get the impression that the fetch_allow variable is treated like a prio_mask. This definitely sounds closer to a per-TC gate mask rather than a per-TXQ one, and TI documentation does seem to recommend an identity mapping between TCs and TXQs. However, Roger Quadros would like to do some testing before making changes, so I'm leaving this driver to operate as it did before, for now. Link with more details at the end.

Based on this breakdown, we have 5 drivers with a gate mask per TC and 4 with a gate mask per TXQ. So let's make the gate mask per TXQ the opt-in and the gate mask per TC the default. Benefit from the TC_QUERY_CAPS feature that Jakub suggested we add, and query the device driver before calling the proper ndo_setup_tc(), and figure out if it expects one or the other format.

Link: https://patchwork.kernel.org/project/netdevbpf/patch/20230202003621.2679603-15-vladimir.oltean@nxp.com/#25193204
Cc: Horatiu Vultur <horatiu.vultur@microchip.com>
Cc: Siddharth Vadapalli <s-vadapalli@ti.com>
Cc: Roger Quadros <rogerq@kernel.org>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Acked-by: Kurt Kanzenbach <kurt@linutronix.de> # hellcreek
Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
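A sketch of the opt-in described above, as seen from a driver's ndo_setup_tc(); the taprio helper name is hypothetical, while the TC_QUERY_CAPS handling follows the tc_query_caps_base/tc_taprio_caps types this series works with:

static int tsnep_setup_tc(struct net_device *netdev, enum tc_setup_type type,
                          void *type_data)
{
        switch (type) {
        case TC_QUERY_CAPS: {
                struct tc_query_caps_base *base = type_data;

                if (base->type == TC_SETUP_QDISC_TAPRIO) {
                        struct tc_taprio_caps *caps = base->caps;

                        /* keep interpreting gate_mask as a bit mask of TXQs */
                        caps->gate_mask_per_txq = true;
                }
                return 0;
        }
        case TC_SETUP_QDISC_TAPRIO:
                return tsnep_setup_taprio(netdev, type_data); /* hypothetical */
        default:
                return -EOPNOTSUPP;
        }
}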
2023-02-03  drivers: net: turn on XDP features  (Marek Majtyka, 1 file changed, -0/+4)
A summary of the flags being set for various drivers is given below. Note that XDP_F_REDIRECT_TARGET and XDP_F_FRAG_TARGET are features that can be turned off and on at runtime. This means that these flags may be set and unset under RTNL lock protection by the driver. Hence, READ_ONCE must be used by code loading the flag value.

Also, these flags are not used for synchronization against the availability of XDP resources on a device. It is merely a hint, and hence the read may race with the actual teardown of XDP resources on the device. This may change in the future, e.g. operations taking a reference on the XDP resources of the driver, and in turn inhibiting turning off this flag. However, for now, it can only be used as a hint to check whether the device supports becoming a redirection target.

Turn 'hw-offload' feature flag on for:
 - netronome (nfp)
 - netdevsim.

Turn 'native' and 'zerocopy' feature flags on for:
 - intel (i40e, ice, ixgbe, igc)
 - mellanox (mlx5)
 - stmmac
 - netronome (nfp)

Turn 'native' feature flags on for:
 - amazon (ena)
 - broadcom (bnxt)
 - freescale (dpaa, dpaa2, enetc)
 - funeth
 - intel (igb)
 - marvell (mvneta, mvpp2, octeontx2)
 - mellanox (mlx4)
 - mtk_eth_soc
 - qlogic (qede)
 - sfc
 - socionext (netsec)
 - ti (cpsw)
 - tap
 - tsnep
 - veth
 - xen
 - virtio_net.

Turn 'basic' (tx, pass, aborted and drop) feature flags on for:
 - netronome (nfp)
 - cavium (thunder)
 - hyperv.

Turn 'redirect_target' feature flag on for:
 - amazon (ena)
 - broadcom (bnxt)
 - freescale (dpaa, dpaa2)
 - intel (i40e, ice, igb, ixgbe)
 - ti (cpsw)
 - marvell (mvneta, mvpp2)
 - sfc
 - socionext (netsec)
 - qlogic (qede)
 - mellanox (mlx5)
 - tap
 - veth
 - virtio_net
 - xen

Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Acked-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Co-developed-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Marek Majtyka <alardam@gmail.com>
Link: https://lore.kernel.org/r/3eca9fafb308462f7edb1f58e451d59209aa07eb.1675245258.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
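For illustration, a hedged sketch of how a driver advertises these flags; the exact flag combination for any given driver is in the patch itself, so the set below is only an assumed example:

static void tsnep_set_xdp_features(struct net_device *netdev)
{
        /* assumed example of a "native" XDP feature set */
        netdev->xdp_features = NETDEV_XDP_ACT_BASIC |
                               NETDEV_XDP_ACT_REDIRECT |
                               NETDEV_XDP_ACT_NDO_XMIT;

        /* the redirect-target bit may be toggled at runtime under RTNL;
         * the second argument states whether fragmented (multi-buffer)
         * frames are accepted as a redirect target
         */
        xdp_features_set_redirect_target(netdev, false);
}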
2023-01-28  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski, 1 file changed, -4/+4)
Conflicts:

drivers/net/ethernet/intel/ice/ice_main.c
  418e53401e47 ("ice: move devlink port creation/deletion")
  643ef23bd9dd ("ice: Introduce local var for readability")
  https://lore.kernel.org/all/20230127124025.0dacef40@canb.auug.org.au/
  https://lore.kernel.org/all/20230124005714.3996270-1-anthony.l.nguyen@intel.com/

drivers/net/ethernet/engleder/tsnep_main.c
  3d53aaef4332 ("tsnep: Fix TX queue stop/wake for multiple queues")
  25faa6a4c5ca ("tsnep: Replace TX spin_lock with __netif_tx_lock")
  https://lore.kernel.org/all/20230127123604.36bb3e99@canb.auug.org.au/

net/netfilter/nf_conntrack_proto_sctp.c
  13bd9b31a969 ("Revert "netfilter: conntrack: add sctp DATA_SENT state"")
  a44b7651489f ("netfilter: conntrack: unify established states for SCTP paths")
  f71cb8f45d09 ("netfilter: conntrack: sctp: use nf log infrastructure for invalid packets")
  https://lore.kernel.org/all/20230127125052.674281f9@canb.auug.org.au/
  https://lore.kernel.org/all/d36076f3-6add-a442-6d4b-ead9f7ffff86@tessares.net/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-01-26  tsnep: Fix TX queue stop/wake for multiple queues  (Gerhard Engleder, 1 file changed, -6/+9)
netif_stop_queue() and netif_wake_queue() act on TX queue 0. This is ok as long as only a single TX queue is supported. But support for multiple TX queues was introduced with 762031375d5c and I missed adapting the stop and wake of the TX queues. Use netif_stop_subqueue() and netif_tx_wake_queue() to act on the specific TX queue. Fixes: 762031375d5c ("tsnep: Support multiple TX/RX queue pairs") Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Link: https://lore.kernel.org/r/20230124191440.56887-1-gerhard@engleder-embedded.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
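A sketch of the per-queue pattern the fix switches to; the free-descriptor threshold is illustrative:

static void tsnep_tx_maybe_stop(struct net_device *netdev, int queue_index,
                                int free_descriptors)
{
        if (free_descriptors < MAX_SKB_FRAGS + 1)
                netif_stop_subqueue(netdev, queue_index);
}

static void tsnep_tx_maybe_wake(struct net_device *netdev, int queue_index,
                                int free_descriptors)
{
        struct netdev_queue *nq = netdev_get_tx_queue(netdev, queue_index);

        if (netif_tx_queue_stopped(nq) && free_descriptors > MAX_SKB_FRAGS + 1)
                netif_tx_wake_queue(nq);
}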
2023-01-21  net: Remove C45 check in C22 only MDIO bus drivers  (Andrew Lunn, 1 file changed, -6/+0)
The MDIO core should not pass a C45 request via the C22 API call any more. So remove the tests from the drivers. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-01-18  tsnep: Support XDP BPF program setup  (Gerhard Engleder, 4 files changed, -1/+36)
Implement setup of BPF programs for the XDP RX path with command XDP_SETUP_PROG of ndo_bpf(). This is the final step for XDP RX path support.

There is no need to reinit the RX queues as they are always prepared for XDP.

Additionally remove $(tsnep-y) from $(tsnep-objs) because it is added automatically.

Test results with A53 1.2GHz:

XDP_DROP (samples/bpf/xdp1)
  proto 17: 883878 pkt/s

XDP_TX (samples/bpf/xdp2)
  proto 17: 255693 pkt/s

XDP_REDIRECT (samples/bpf/xdpsock)
  sock0@eth2:0 rxdrop xdp-drv
           pps         pkts        1.00
  rx       855,582     5,404,523
  tx       0           0

XDP_REDIRECT (samples/bpf/xdp_redirect)
  eth2->eth1  613,267 rx/s  0 err,drop/s  613,272 xmit/s

Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
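A minimal sketch of the ndo_bpf() handling described above; because the RX queues are already laid out for XDP, attaching only swaps the program pointer. The adapter field name and the xchg-based swap are assumptions:

static int tsnep_netdev_bpf(struct net_device *netdev, struct netdev_bpf *bpf)
{
        struct tsnep_adapter *adapter = netdev_priv(netdev);
        struct bpf_prog *old_prog;

        switch (bpf->command) {
        case XDP_SETUP_PROG:
                /* no queue reconfiguration needed, just swap the program */
                old_prog = xchg(&adapter->xdp_prog, bpf->prog);
                if (old_prog)
                        bpf_prog_put(old_prog);
                return 0;
        default:
                return -EOPNOTSUPP;
        }
}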
2023-01-18  tsnep: Add XDP RX support  (Gerhard Engleder, 2 files changed, -2/+132)
If a BPF program is set up, then run it for every received frame and execute the selected action. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
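A hedged sketch of per-frame verdict handling in the RX poll loop; the rx/adapter field names and the local XDP_TX helper are assumptions:

static bool tsnep_xdp_run_prog(struct tsnep_rx *rx, struct bpf_prog *prog,
                               struct xdp_buff *xdp)
{
        u32 act = bpf_prog_run_xdp(prog, xdp);

        switch (act) {
        case XDP_PASS:
                return false;                   /* build an skb as usual */
        case XDP_TX:
                if (!tsnep_xdp_xmit_back(rx, xdp))      /* hypothetical */
                        goto out_failure;
                return true;
        case XDP_REDIRECT:
                if (xdp_do_redirect(rx->adapter->netdev, xdp, prog) < 0)
                        goto out_failure;
                return true;
        default:
                bpf_warn_invalid_xdp_action(rx->adapter->netdev, prog, act);
                fallthrough;
        case XDP_ABORTED:
out_failure:
                trace_xdp_exception(rx->adapter->netdev, prog, act);
                fallthrough;
        case XDP_DROP:
                page_pool_recycle_direct(rx->page_pool,
                                         virt_to_head_page(xdp->data));
                return true;
        }
}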
2023-01-18  tsnep: Add RX queue info for XDP support  (Gerhard Engleder, 2 files changed, -18/+58)
Register xdp_rxq_info with page_pool memory model. This is needed for XDP buffer handling. Additionally fix error path by removing call of tsnep_phy_close() after failed tsnep_phy_open(). Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
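A sketch of the per-queue registration, assuming illustrative field names:

static int tsnep_rx_xdp_rxq_register(struct tsnep_rx *rx)
{
        int retval;

        retval = xdp_rxq_info_reg(&rx->xdp_rxq, rx->adapter->netdev,
                                  rx->queue_index, rx->napi_id);
        if (retval)
                return retval;

        /* tie the rxq info to the page_pool memory model so XDP buffers
         * are returned to the pool
         */
        retval = xdp_rxq_info_reg_mem_model(&rx->xdp_rxq, MEM_TYPE_PAGE_POOL,
                                            rx->page_pool);
        if (retval)
                xdp_rxq_info_unreg(&rx->xdp_rxq);

        return retval;
}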
2023-01-18  tsnep: Prepare RX buffer for XDP support  (Gerhard Engleder, 1 file changed, -10/+11)
Always reserve XDP_PACKET_HEADROOM in front of the RX buffer. Similarly, the DMA direction is always set to DMA_BIDIRECTIONAL. This eliminates the need for RX queue reconfiguration during BPF program setup. The RX queue is always prepared for XDP. No negative impact of DMA_BIDIRECTIONAL was measured. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
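A sketch of the page_pool parameters this change implies (sizes and flags are assumptions); keeping the headroom and DMA direction fixed is what avoids reconfiguration when a BPF program is attached later:

static struct page_pool *tsnep_rx_page_pool_create(struct device *dmadev)
{
        struct page_pool_params pp = {
                .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
                .order = 0,
                .pool_size = 256,                       /* assumed ring size */
                .nid = NUMA_NO_NODE,
                .dev = dmadev,
                .dma_dir = DMA_BIDIRECTIONAL,           /* RX and XDP_TX reuse */
                .max_len = PAGE_SIZE - XDP_PACKET_HEADROOM,
                .offset = XDP_PACKET_HEADROOM,          /* always reserved */
        };

        return page_pool_create(&pp);
}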
2023-01-18  tsnep: Subtract TSNEP_RX_INLINE_METADATA_SIZE once  (Gerhard Engleder, 1 file changed, -2/+9)
Subtract size of metadata in front of received data only once. This simplifies the RX code. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-01-18  tsnep: Add XDP TX support  (Gerhard Engleder, 2 files changed, -10/+189)
Implement ndo_xdp_xmit() for XDP TX support. Support for fragmented XDP frames is included. Also some braces and logic cleanups are done in normal TX path to keep both TX paths in sync. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
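A hedged sketch of the ndo_xdp_xmit() shape; queue selection, the frame-write helper and the doorbell helper are assumptions:

static int tsnep_netdev_xdp_xmit(struct net_device *dev, int n,
                                 struct xdp_frame **xdp, u32 flags)
{
        struct tsnep_adapter *adapter = netdev_priv(dev);
        struct tsnep_tx *tx = &adapter->tx[0];  /* illustrative queue choice */
        int nxmit;

        if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
                return -EINVAL;

        for (nxmit = 0; nxmit < n; nxmit++) {
                if (!tsnep_xdp_xmit_frame(tx, xdp[nxmit]))      /* hypothetical */
                        break;
        }

        if (flags & XDP_XMIT_FLUSH)
                tsnep_xdp_xmit_flush(tx);       /* hypothetical doorbell write */

        /* return how many frames were accepted; the caller frees the rest */
        return nxmit;
}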
2023-01-18  tsnep: Do not print DMA mapping error  (Gerhard Engleder, 1 file changed, -2/+0)
Printing in data path shall be avoided. DMA mapping error is already counted in stats so printing is not necessary. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-01-18  tsnep: Forward NAPI budget to napi_consume_skb()  (Gerhard Engleder, 1 file changed, -1/+1)
NAPI budget must be forwarded to napi_consume_skb(). It is used to detect non-NAPI context. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-01-18  tsnep: Replace TX spin_lock with __netif_tx_lock  (Gerhard Engleder, 2 files changed, -21/+10)
TX spin_lock can be eliminated, because the normal TX path is already protected with __netif_tx_lock and this lock can be used for access to queue outside of normal TX path too. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
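A minimal sketch of the locking pattern, assuming illustrative field names: out-of-band access to a TX ring (e.g. XDP_TX) takes the stack's own TX lock instead of a private spinlock.

static void tsnep_tx_locked_op(struct tsnep_tx *tx)
{
        struct netdev_queue *nq;

        nq = netdev_get_tx_queue(tx->adapter->netdev, tx->queue_index);

        __netif_tx_lock(nq, smp_processor_id());

        /* ... touch the TX descriptor ring here, just like the normal
         * ndo_start_xmit() path does under the same lock ...
         */

        __netif_tx_unlock(nq);
}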
2022-12-02  tsnep: Rework RX buffer allocation  (Gerhard Engleder, 3 files changed, -74/+140)
Refill RX queue in batches of descriptors to improve performance. Refill is allowed to fail as long as a minimum number of descriptors is active. Thus, a limited number of failed RX buffer allocations is now allowed for normal operation. Previously every failed allocation resulted in a dropped frame. If the minimum number of active descriptors is reached, then RX buffers are still reused and frames are dropped. This ensures that the RX queue never runs empty and always continues to operate. Prework for future XDP support. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-12-02  tsnep: Throttle interrupts  (Gerhard Engleder, 4 files changed, -0/+187)
Without interrupt throttling, iperf server mode generates a CPU load of 100% (A53 1.2GHz). Also the throughput suffers with less than 900Mbit/s on a 1Gbit/s link. The reason is a high interrupt load with interrupts every ~20us. Reduce interrupt load by throttling of interrupts. Interrupt delay default is 64us. For iperf server mode the CPU load is significantly reduced to ~20% and the throughput reaches the maximum of 941MBit/s. Interrupts are generated every ~140us. RX and TX coalesce can be configured with ethtool. RX coalesce has priority over TX coalesce if the same interrupt is used. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-12-02  tsnep: Add ethtool::get_channels support  (Gerhard Engleder, 1 file changed, -0/+12)
Allow user space to read the number of TX and RX queues. This is useful for device dependent qdisc configurations like TAPRIO with hardware offload. Also ethtool::get_per_queue_coalesce / set_per_queue_coalesce requires that interface. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
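A sketch of the ethtool callback, assuming the adapter keeps num_tx_queues/num_rx_queues counters:

static void tsnep_ethtool_get_channels(struct net_device *netdev,
                                       struct ethtool_channels *ch)
{
        struct tsnep_adapter *adapter = netdev_priv(netdev);

        /* the queue count is static, so max and current counts are equal */
        ch->max_rx = adapter->num_rx_queues;
        ch->max_tx = adapter->num_tx_queues;
        ch->rx_count = adapter->num_rx_queues;
        ch->tx_count = adapter->num_tx_queues;
}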
2022-12-02  tsnep: Consistent naming of struct net_device  (Gerhard Engleder, 1 file changed, -6/+6)
Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-11-22  tsnep: Fix rotten packets  (Gerhard Engleder, 1 file changed, -1/+56)
If PTP synchronisation is done every second, then sporadically the interval is higher than one second:

ptp4l[696.582]: master offset        -17 s2 freq   -1891 path delay       573
ptp4l[697.582]: master offset        -22 s2 freq   -1901 path delay       573
ptp4l[699.368]: master offset         -1 s2 freq   -1887 path delay       573
      ^^^^^^^ Should be 698.582!

This problem is caused by rotten packets, which are received after polling but before interrupts are enabled again. This can be fixed by checking for pending work and rescheduling if necessary after interrupts have been enabled again.

Fixes: 403f69bbdbad ("tsnep: Add TSN endpoint Ethernet MAC driver")
Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Link: https://lore.kernel.org/r/20221119211825.81805-1-gerhard@engleder-embedded.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
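A sketch of the NAPI poll shape after the fix; tsnep_process(), tsnep_pending() and the IRQ enable/disable helpers are hypothetical names:

static int tsnep_poll(struct napi_struct *napi, int budget)
{
        struct tsnep_queue *queue = container_of(napi, struct tsnep_queue, napi);
        int done = tsnep_process(queue, budget);        /* RX/TX work */

        if (done >= budget)
                return budget;

        if (napi_complete_done(napi, done)) {
                tsnep_enable_irq(queue);

                /* "rotten" packets: received after polling stopped but
                 * before the interrupt was re-armed
                 */
                if (tsnep_pending(queue)) {
                        tsnep_disable_irq(queue);
                        napi_schedule(napi);
                }
        }

        return min(done, budget - 1);
}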
2022-09-30  tsnep: Use page pool for RX  (Gerhard Engleder, 3 files changed, -68/+100)
Use page pool for RX buffer handling. Makes RX path more efficient and is required prework for future XDP support. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  tsnep: Add EtherType RX flow classification support  (Gerhard Engleder, 6 files changed, -5/+403)
Received Ethernet frames are assigned to the first RX queue by default. Based on the EtherType, Ethernet frames can be assigned to other RX queues instead. This enables processing of real-time Ethernet protocols on dedicated RX queues. Add an RX flow classification interface for EtherType based RX queue assignment. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  tsnep: Support multiple TX/RX queue pairs  (Gerhard Engleder, 3 files changed, -11/+53)
Support additional TX/RX queue pairs if dedicated interrupt is available. Interrupts are detected by name in device tree. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  tsnep: Move interrupt from device to queue  (Gerhard Engleder, 2 files changed, -33/+99)
For multiple queues multiple interrupts shall be used. Therefore, rework the global interrupt into per queue interrupts. Every interrupt name shall contain the interface name and queue information. To get a valid interface name, the interrupt request needs to be done during open like in other drivers. Additionally, this allows the removal of some initialisation checks in the interrupt handler. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-29  net: drop the weight argument from netif_napi_add  (Jakub Kicinski, 1 file changed, -1/+1)
We tell driver developers to always pass NAPI_POLL_WEIGHT as the weight to netif_napi_add(). This may be confusing to newcomers, drop the weight argument, those who really need to tweak the weight can use netif_napi_add_weight(). Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> # for CAN Link: https://lore.kernel.org/r/20220927132753.750069-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-22  tsnep: Record RX queue  (Gerhard Engleder, 2 files changed, -1/+5)
Other drivers record the RX queue, so it makes sense to do that here as well. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-22  tsnep: Support full DMA mask  (Gerhard Engleder, 1 file changed, -0/+7)
DMA addresses up to 64bit are supported by the device. Configure DMA mask according to the capabilities of the device. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
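A minimal probe-time sketch:

static int tsnep_probe_dma(struct platform_device *pdev)
{
        int retval;

        /* the hardware generates 64-bit DMA addresses, so request the full
         * mask; dma_set_mask_and_coherent() rejects it if the platform
         * cannot provide that
         */
        retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
        if (retval)
                dev_err(&pdev->dev, "no usable DMA configuration\n");

        return retval;
}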
2022-08-22  tsnep: Improve TX length handling  (Gerhard Engleder, 1 file changed, -11/+21)
TX length can be calculated more efficiently during map and unmap of fragments. Another reason is that, by moving TX statistics counting to tsnep_tx_poll(), it can be used there for XDP too. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-22  tsnep: Add loopback support  (Gerhard Engleder, 1 file changed, -17/+54)
Add support for NETIF_F_LOOPBACK feature. Loopback mode is used for testing. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-22  tsnep: Fix TSNEP_INFO_TX_TIME register define  (Gerhard Engleder, 1 file changed, -2/+1)
Fixed register define is not used, but register definition shall be kept in sync. Fixes: 403f69bbdbad ("tsnep: Add TSN endpoint Ethernet MAC driver") Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-08-09  tsnep: Fix tsnep_tx_unmap() error path usage  (Gerhard Engleder, 1 file changed, -4/+4)
If tsnep_tx_map() fails, then tsnep_tx_unmap() shall start at the write index like tsnep_tx_map(). This is different to the normal operation. Thus, add an additional parameter to tsnep_tx_unmap() to enable start at different positions for successful TX and failed TX. Fixes: 403f69bbdbad ("tsnep: Add TSN endpoint Ethernet MAC driver") Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-09  tsnep: Fix unused warning for 'tsnep_of_match'  (Gerhard Engleder, 1 file changed, -1/+1)
Kernel test robot found the following warning: drivers/net/ethernet/engleder/tsnep_main.c:1254:34: warning: 'tsnep_of_match' defined but not used [-Wunused-const-variable=] of_match_ptr() compiles into NULL if CONFIG_OF is disabled. tsnep_of_match exists always so use of of_match_ptr() is useless. Fix warning by dropping of_match_ptr(). Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-05-10  tsnep: Add free running cycle counter support  (Gerhard Engleder, 3 files changed, -7/+63)
The TSN endpoint Ethernet MAC supports a free running counter additionally to its clock. This free running counter can be read and hardware timestamps are supported. As the name implies, this counter cannot be set and its frequency cannot be adjusted. Add free running cycle counter support based on this free running counter to physical clock. This also requires hardware time stamps based on that free running counter. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-04-23  tsnep: Remove useless null check before call of_node_put()  (Haowen Bai, 1 file changed, -2/+1)
No need to add a null check before calling of_node_put(), since the implementation of of_node_put() already handles it. Signed-off-by: Haowen Bai <baihaowen@meizu.com> Link: https://lore.kernel.org/r/1650509283-26168-1-git-send-email-baihaowen@meizu.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-18  tsnep: Fix s390 devm_ioremap_resource warning  (Gerhard Engleder, 1 file changed, -0/+1)
The following warning is fixed with additional config dependencies: s390-linux-ld: drivers/net/ethernet/engleder/tsnep_main.o: in function `tsnep_probe': tsnep_main.c:(.text+0x1de6): undefined reference to `devm_ioremap_resource' Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Link: https://lore.kernel.org/r/20211216200154.1520-1-gerhard@engleder-embedded.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-14  net_tstamp: add new flag HWTSTAMP_FLAG_BONDED_PHC_INDEX  (Hangbin Liu, 1 file changed, -3/+0)
Since commit 94dd016ae538 ("bond: pass get_ts_info and SIOC[SG]HWTSTAMP ioctl to active device") the user could get the bond active interface's PHC index directly. But when there is a failover, the bond active interface will change, thus the PHC index is also changed. This may break the user's program if they do not update the PHC in time. This patch adds a new hwtstamp_config flag HWTSTAMP_FLAG_BONDED_PHC_INDEX. When the user wants to get the bond active interface's PHC, they need to add this flag and be aware the PHC index may be changed. With the new flag, all flag checks in current drivers are removed. Only the checking in net_hwtstamp_validate() is kept. Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-11-26  tsnep: Add missing of_node_put() in tsnep_mdio_init()  (Yang Yingliang, 1 file changed, -1/+2)
The node pointer is returned by of_get_child_by_name() with refcount incremented in tsnep_mdio_init(). Call of_node_put() to avoid the refcount leak in tsnep_mdio_init(). Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20211124084048.175456-1-yangyingliang@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-25  tsnep: Fix resource_size cocci warning  (Gerhard Engleder, 2 files changed, -2/+0)
The following warning is fixed by removing the unused resource size: drivers/net/ethernet/engleder/tsnep_main.c:1155:21-24: WARNING: Suspicious code. resource_size is maybe missing with io Reported-by: kernel test robot <lkp@intel.com> Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Link: https://lore.kernel.org/r/20211124205225.13985-1-gerhard@engleder-embedded.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-25  tsnep: fix platform_no_drv_owner.cocci warning  (Yang Li, 1 file changed, -1/+0)
Remove the .owner field if calls are used which set it automatically. Eliminate the following coccicheck warning: ./drivers/net/ethernet/engleder/tsnep_main.c:1263:3-8: No need to set .owner here. The core will do it. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Reviewed-by: Gerhard Engleder <gerhard@engleder-embedded.com> Link: https://lore.kernel.org/r/1637721384-70836-2-git-send-email-yang.lee@linux.alibaba.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-11-23  tsnep: Fix set MAC address  (Gerhard Engleder, 1 file changed, -1/+1)
Commit 4dfb9982644b ("tsn: Fix build.") fixed compilation with const dev_addr. In tsnep_netdev_set_mac_address() the call of ether_addr_copy() was replaced with dev_set_mac_address(), which calls ndo_set_mac_address(). This results in an endless recursive loop because ndo_set_mac_address is set to tsnep_netdev_set_mac_address.

Call eth_hw_addr_set() instead of dev_set_mac_address() in ndo_set_mac_address()/tsnep_netdev_set_mac_address() to copy the address as intended.

[   26.563303] Insufficient stack space to handle exception!
[   26.563312] ESR: 0x96000047 -- DABT (current EL)
[   26.563317] FAR: 0xffff80000a507fc0
[   26.563320] Task stack:     [0xffff80000a508000..0xffff80000a50c000]
[   26.563324] IRQ stack:      [0xffff80000a0c0000..0xffff80000a0c4000]
[   26.563327] Overflow stack: [0xffff00007fbaf2b0..0xffff00007fbb02b0]
[   26.563333] CPU: 3 PID: 381 Comm: ifconfig Not tainted 5.16.0-rc1-zynqmp #60
[   26.563340] Hardware name: TSN endpoint (DT)
[   26.563343] pstate: a0000005 (NzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[   26.563351] pc : inetdev_event+0x4/0x560
[   26.563364] lr : raw_notifier_call_chain+0x54/0x78
[   26.563372] sp : ffff80000a508040
[   26.563374] x29: ffff80000a508040 x28: ffff00000132b800 x27: 0000000000000000
[   26.563386] x26: 0000000000000000 x25: ffff800000ea5058 x24: 0904030201020001
[   26.563396] x23: ffff800000ea5058 x22: ffff80000a5080e0 x21: 0000000000000009
[   26.563405] x20: 00000000fffffffa x19: ffff80000a009510 x18: 0000000000000000
[   26.563414] x17: 0000000000000000 x16: 0000000000000000 x15: 0000ffffd1341030
[   26.563422] x14: ffffffffffffffff x13: 0000000000000020 x12: 0101010101010101
[   26.563432] x11: 0000000000000020 x10: 0101010101010101 x9 : 7f7f7f7f7f7f7f7f
[   26.563441] x8 : 7f7f7f7f7f7f7f7f x7 : fefefeff30677364 x6 : 0000000080808080
[   26.563450] x5 : 0000000000000000 x4 : ffff800008dee170 x3 : ffff80000a50bd42
[   26.563459] x2 : ffff80000a5080e0 x1 : 0000000000000009 x0 : ffff80000a0092d0
[   26.563470] Kernel panic - not syncing: kernel stack overflow
[   26.563474] CPU: 3 PID: 381 Comm: ifconfig Not tainted 5.16.0-rc1-zynqmp #60
[   26.563481] Hardware name: TSN endpoint (DT)
[   26.563484] Call trace:
[   26.563486]  dump_backtrace+0x0/0x1b0
[   26.563497]  show_stack+0x18/0x68
[   26.563504]  dump_stack_lvl+0x68/0x84
[   26.563513]  dump_stack+0x18/0x34
[   26.563519]  panic+0x164/0x324
[   26.563524]  nmi_panic+0x64/0x98
[   26.563533]  panic_bad_stack+0x108/0x128
[   26.563539]  handle_bad_stack+0x38/0x68
[   26.563548]  __bad_stack+0x88/0x8c
[   26.563553]  inetdev_event+0x4/0x560
[   26.563560]  call_netdevice_notifiers_info+0x58/0xa8
[   26.563569]  dev_set_mac_address+0x78/0x110
[   26.563576]  tsnep_netdev_set_mac_address+0x38/0x60 [tsnep]
[   26.563591]  dev_set_mac_address+0xc4/0x110
[   26.563599]  tsnep_netdev_set_mac_address+0x38/0x60 [tsnep]
...
[   26.565444]  dev_set_mac_address+0xc4/0x110
[   26.565452]  tsnep_netdev_set_mac_address+0x38/0x60 [tsnep]
[   26.565462]  dev_set_mac_address+0xc4/0x110
[   26.565469]  dev_set_mac_address_user+0x44/0x68
[   26.565477]  dev_ifsioc+0x30c/0x568
[   26.565483]  dev_ioctl+0x124/0x3f0
[   26.565489]  sock_do_ioctl+0xb4/0xf8
[   26.565497]  sock_ioctl+0x2f4/0x398
[   26.565503]  __arm64_sys_ioctl+0xa8/0xe8
[   26.565511]  invoke_syscall+0x44/0x108
[   26.565520]  el0_svc_common.constprop.3+0x94/0xf8
[   26.565527]  do_el0_svc+0x24/0x88
[   26.565534]  el0_svc+0x20/0x50
[   26.565541]  el0t_64_sync_handler+0x90/0xb8
[   26.565548]  el0t_64_sync+0x180/0x184
[   26.565556] SMP: stopping secondary CPUs
[   26.565622] Kernel Offset: disabled
[   26.565624] CPU features: 0x0,00004002,00000846
[   26.565628] Memory Limit: none
[   27.843428] ---[ end Kernel panic - not syncing: kernel stack overflow ]---

Fixes: 4dfb9982644b ("tsn: Fix build.")
Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
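The shape of the fix, sketched; the helper that writes the address to the hardware is a hypothetical name:

static int tsnep_netdev_set_mac_address(struct net_device *netdev, void *addr)
{
        struct sockaddr *sock_addr = addr;
        int retval;

        retval = eth_prepare_mac_addr_change(netdev, sock_addr);
        if (retval)
                return retval;

        /* copy into the netdev directly instead of calling
         * dev_set_mac_address(), which would re-enter this very callback
         */
        eth_hw_addr_set(netdev, sock_addr->sa_data);
        tsnep_mac_set_address(netdev_priv(netdev), sock_addr->sa_data); /* hypothetical HW write */

        return 0;
}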
2021-11-22  tsn: Fix build.  (David S. Miller, 1 file changed, -2/+2)
Due to const dev_addr changes. Signed-off-by: David S. Miller <davem@davemloft.net>
2021-11-22  tsnep: Add TSN endpoint Ethernet MAC driver  (Gerhard Engleder, 9 files changed, -0/+3509)
The TSN endpoint Ethernet MAC is an FPGA based network device for real-time communication. It is integrated as an Ethernet controller with ethtool and PTP support. For real-time communication TC_SETUP_QDISC_TAPRIO is supported. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>