author     Linus Torvalds <torvalds@linux-foundation.org>  2020-10-16 04:42:13 +0300
committer  Linus Torvalds <torvalds@linux-foundation.org>  2020-10-16 04:42:13 +0300
commit     9ff9b0d392ea08090cd1780fb196f36dbb586529 (patch)
tree       276a3a5c4525b84dee64eda30b423fc31bf94850 /drivers/net/wireless/ath/ath11k/dp_rx.c
parent     840e5bb326bbcb16ce82dd2416d2769de4839aea (diff)
parent     105faa8742437c28815b2a3eb8314ebc5fd9288c (diff)
Merge tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
- Add the redirect_neigh() BPF packet redirect helper, which limits
  stack traversal in common container configs and improves TCP
  back-pressure. Daniel reports a ~10Gbps => ~15Gbps single-stream TCP
  performance gain (see the first sketch after this list).
- Expand netlink policy support and improve policy export to user
  space. The (ge)netlink core performs request validation according to
  declared policies. Expand the expressiveness of those policies
  (min/max length and bitmasks) and allow dumping policies for
  particular commands. User space uses this for feature discovery,
  instead of kernel version parsing or trial and error.
- Support IGMPv3/MLDv2 multicast listener discovery protocols in
bridge.
- Allow more than 255 IPv4 multicast interfaces.
- Add support for Type of Service (ToS) reflection in SYN/SYN-ACK
packets of TCPv6.
- In Multipath TCP (MPTCP), support concurrent transmission of data on
  multiple subflows in load-balancing scenarios, and enhance the
  advertising of addresses via the RM_ADDR/ADD_ADDR options.
- Support SMC-Dv2 version of SMC, which enables multi-subnet
deployments.
- Allow more calls to the same peer in RxRPC.
- Support two new Controller Area Network (CAN) protocols - CAN-FD and
ISO 15765-2:2016.
- Add an xfrm/IPsec compat layer, solving the problem of 32-bit user
  space running on a 64-bit kernel.
- Add TC actions for implementing MPLS L2 VPNs.
- Improve the nexthop code - e.g. better handle corner cases when
  nexthop objects are removed from groups, skip unnecessary
  notifications, and make it easier to offload nexthops into HW by
  converting to a blocking notifier.
- Support adding and consuming TCP header options from BPF programs,
  opening the door to easy experimental and deployment-specific TCP
  option use (see the sockops sketch after this list).
- Reorganize TCP congestion control (CC) initialization to simplify
  the life of TCP CC algorithms implemented in BPF.
- Add support for shipping BPF programs with the kernel and loading
them early on boot via the User Mode Driver mechanism, hence reusing
all the user space infra we have.
- Support sleepable BPF programs, initially targeting LSM and tracing.
- Add the bpf_d_path() helper, returning the full path for a given
  'struct path' (see the fentry sketch after this list).
- Make bpf_tail_call compatible with bpf-to-bpf calls.
- Allow BPF programs to call map_update_elem on sockmaps.
- Add BPF Type Format (BTF) support for type and enum discovery, as
well as support for using BTF within the kernel itself (current use
is for pretty printing structures).
- Support listing and getting information about bpf_links via the bpf
syscall.
- Enhance kernel interfaces around NIC firmware updates. Allow
  specifying an overwrite mask to control whether settings etc. are
  reset during an update; report to users the expected maximum time an
  operation may take; support firmware activation without a machine
  reboot, including limits on how much impact the reset may have
  (e.g. dropping the link or not).
- Extend ethtool configuration interface to report IEEE-standard
counters, to limit the need for per-vendor logic in user space.
- Adopt or extend devlink use for debug, monitoring, fw update in many
drivers (dsa loop, ice, ionic, sja1105, qed, mlxsw, mv88e6xxx,
dpaa2-eth).
- In mlxsw expose critical and emergency SFP module temperature alarms.
Refactor port buffer handling to make the defaults more suitable and
support setting these values explicitly via the DCBNL interface.
- Add XDP support for Intel's igb driver.
- Support offloading TC flower classification and filtering rules to
mscc_ocelot switches.
- Add PTP support for Marvell Octeontx2 and PP2.2 hardware, as well as
fixed interval period pulse generator and one-step timestamping in
dpaa-eth.
- Add support for various auth offloads in WiFi APs, e.g. SAE (WPA3)
offload.
- Add Lynx PHY/PCS MDIO module, and convert various drivers which have
this HW to use it. Convert mvpp2 to split PCS.
- Support Marvell Prestera 98DX3255 24-port switch ASICs, as well as
7-port Mediatek MT7531 IP.
- Add initial support for QCA6390 and IPQ6018 in ath11k WiFi driver,
and wcn3680 support in wcn36xx.
- Improve performance on recent Mellanox NICs by 20% for packets which
  don't require many offloads, by making multiple packets share a
  descriptor entry.
- Move chelsio inline crypto drivers (for TLS and IPsec) from the
crypto subtree to drivers/net. Move MDIO drivers out of the phy
directory.
- Clean up a lot of W=1 warnings; reportedly, the actively developed
  subsections of the networking drivers now build warning-free with
  W=1.
- Make sure drivers don't use in_interrupt() to dynamically adapt
  their code. Convert tasklets to use the new tasklet_setup() API
  (sadly this conversion is not yet complete).
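
For context on the redirect_neigh() item, here is a minimal tc(8) egress
sketch of the helper in the two-argument form merged in this cycle
(later kernels extended the signature). The section name, program name,
and ifindex value are illustrative assumptions, not part of the series:

    // SPDX-License-Identifier: GPL-2.0
    /* Hypothetical tc egress program: hand every packet to ifindex 2
     * and let the kernel resolve the L2 neighbor, rather than walking
     * the full stack for each skb.
     */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    #define TARGET_IFINDEX 2 /* assumed host uplink/veth ifindex */

    SEC("classifier")
    int redir_neigh(struct __sk_buff *skb)
    {
            /* flags must be zero in the form merged here */
            return bpf_redirect_neigh(TARGET_IFINDEX, 0);
    }

    char LICENSE[] SEC("license") = "GPL";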
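The TCP header options are driven from a sockops program; a minimal
sketch of the write side, assuming an experimental option kind (254)
and a made-up two-byte payload:

    // SPDX-License-Identifier: GPL-2.0
    /* Hypothetical active-open flow: reserve space for a custom TCP
     * option, then write it into the reserved space.
     */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct tcp_exp_opt {
            __u8 kind;   /* 254 = experimental option kind */
            __u8 len;    /* total option length */
            __u16 data;  /* made-up payload */
    };

    SEC("sockops")
    int tcp_opt(struct bpf_sock_ops *skops)
    {
            struct tcp_exp_opt opt = { 254, sizeof(opt), 0xbeef };

            switch (skops->op) {
            case BPF_SOCK_OPS_TCP_CONNECT_CB:
                    /* opt in to the header-option callbacks */
                    bpf_sock_ops_cb_flags_set(skops,
                            skops->bpf_sock_ops_cb_flags |
                            BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG);
                    break;
            case BPF_SOCK_OPS_HDR_OPT_LEN_CB:
                    /* reserve room in the outgoing TCP header */
                    bpf_reserve_hdr_opt(skops, sizeof(opt), 0);
                    break;
            case BPF_SOCK_OPS_WRITE_HDR_OPT_CB:
                    /* fill the reserved room */
                    bpf_store_hdr_opt(skops, &opt, sizeof(opt), 0);
                    break;
            }
            return 1;
    }

    char LICENSE[] SEC("license") = "GPL";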
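And bpf_d_path() in a tracing program. The helper only works from an
allowlisted set of hooks (the selftests attach to filp_close()); the
buffer size and message are assumptions, and a BTF-generated
"vmlinux.h" is assumed for the kernel types:

    // SPDX-License-Identifier: GPL-2.0
    /* Hypothetical sketch: print the full path of every closed file. */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("fentry/filp_close")
    int BPF_PROG(trace_close, struct file *filp)
    {
            char buf[256];

            /* bpf_d_path() resolves the path of a 'struct path' */
            if (bpf_d_path(&filp->f_path, buf, sizeof(buf)) > 0)
                    bpf_printk("close: %s", buf);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";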
* tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2583 commits)
Revert "bpfilter: Fix build error with CONFIG_BPFILTER_UMH"
net, sockmap: Don't call bpf_prog_put() on NULL pointer
bpf, selftest: Fix flaky tcp_hdr_options test when adding addr to lo
bpf, sockmap: Add locking annotations to iterator
netfilter: nftables: allow re-computing sctp CRC-32C in 'payload' statements
net: fix pos incrementment in ipv6_route_seq_next
net/smc: fix invalid return code in smcd_new_buf_create()
net/smc: fix valid DMBE buffer sizes
net/smc: fix use-after-free of delayed events
bpfilter: Fix build error with CONFIG_BPFILTER_UMH
cxgb4/ch_ipsec: Replace the module name to ch_ipsec from chcr
net: sched: Fix suspicious RCU usage while accessing tcf_tunnel_info
bpf: Fix register equivalence tracking.
rxrpc: Fix loss of final ack on shutdown
rxrpc: Fix bundle counting for exclusive connections
netfilter: restore NF_INET_NUMHOOKS
ibmveth: Identify ingress large send packets.
ibmveth: Switch order of ibmveth_helper calls.
cxgb4: handle 4-tuple PEDIT to NAT mode translation
selftests: Add VRF route leaking tests
...
Diffstat (limited to 'drivers/net/wireless/ath/ath11k/dp_rx.c')
-rw-r--r--   drivers/net/wireless/ath/ath11k/dp_rx.c   375
1 file changed, 269 insertions, 106 deletions
diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
index 791d971784ce..01625327eef7 100644
--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
+++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
@@ -9,6 +9,8 @@
 #include <crypto/hash.h>
 #include "core.h"
 #include "debug.h"
+#include "debugfs_htt_stats.h"
+#include "debugfs_sta.h"
 #include "hal_desc.h"
 #include "hw.h"
 #include "dp_rx.h"
@@ -260,12 +262,23 @@ static u32 ath11k_dp_rxdesc_get_ppduid(struct hal_rx_desc *rx_desc)
 	return __le16_to_cpu(rx_desc->mpdu_start.phy_ppdu_id);
 }
 
+static void ath11k_dp_service_mon_ring(struct timer_list *t)
+{
+	struct ath11k_base *ab = from_timer(ab, t, mon_reap_timer);
+	int i;
+
+	for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++)
+		ath11k_dp_rx_process_mon_rings(ab, i, NULL, DP_MON_SERVICE_BUDGET);
+
+	mod_timer(&ab->mon_reap_timer, jiffies +
+		  msecs_to_jiffies(ATH11K_MON_TIMER_INTERVAL));
+}
+
 /* Returns number of Rx buffers replenished */
 int ath11k_dp_rxbufs_replenish(struct ath11k_base *ab, int mac_id,
 			       struct dp_rxdma_ring *rx_ring,
 			       int req_entries,
-			       enum hal_rx_buf_return_buf_manager mgr,
-			       gfp_t gfp)
+			       enum hal_rx_buf_return_buf_manager mgr)
 {
 	struct hal_srng *srng;
 	u32 *desc;
@@ -312,7 +325,7 @@ int ath11k_dp_rxbufs_replenish(struct ath11k_base *ab, int mac_id,
 		spin_lock_bh(&rx_ring->idr_lock);
 		buf_id = idr_alloc(&rx_ring->bufs_idr, skb, 0,
-				   rx_ring->bufs_max * 3, gfp);
+				   rx_ring->bufs_max * 3, GFP_ATOMIC);
 		spin_unlock_bh(&rx_ring->idr_lock);
 		if (buf_id < 0)
 			goto fail_dma_unmap;
@@ -375,7 +388,13 @@ static int ath11k_dp_rxdma_buf_ring_free(struct ath11k *ar,
 	idr_destroy(&rx_ring->bufs_idr);
 	spin_unlock_bh(&rx_ring->idr_lock);
 
-	rx_ring = &dp->rx_mon_status_refill_ring;
+	/* if rxdma1_enable is false, mon_status_refill_ring
+	 * isn't setup, so don't clean.
+	 */
+	if (!ar->ab->hw_params.rxdma1_enable)
+		return 0;
+
+	rx_ring = &dp->rx_mon_status_refill_ring[0];
 
 	spin_lock_bh(&rx_ring->idr_lock);
 	idr_for_each_entry(&rx_ring->bufs_idr, skb, buf_id) {
@@ -390,21 +409,27 @@ static int ath11k_dp_rxdma_buf_ring_free(struct ath11k *ar,
 	idr_destroy(&rx_ring->bufs_idr);
 	spin_unlock_bh(&rx_ring->idr_lock);
 
+
 	return 0;
 }
 
 static int ath11k_dp_rxdma_pdev_buf_free(struct ath11k *ar)
 {
 	struct ath11k_pdev_dp *dp = &ar->dp;
+	struct ath11k_base *ab = ar->ab;
 	struct dp_rxdma_ring *rx_ring = &dp->rx_refill_buf_ring;
+	int i;
 
 	ath11k_dp_rxdma_buf_ring_free(ar, rx_ring);
 
 	rx_ring = &dp->rxdma_mon_buf_ring;
 	ath11k_dp_rxdma_buf_ring_free(ar, rx_ring);
 
-	rx_ring = &dp->rx_mon_status_refill_ring;
-	ath11k_dp_rxdma_buf_ring_free(ar, rx_ring);
+	for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+		rx_ring = &dp->rx_mon_status_refill_ring[i];
+		ath11k_dp_rxdma_buf_ring_free(ar, rx_ring);
+	}
+
 	return 0;
 }
@@ -416,26 +441,32 @@ static int ath11k_dp_rxdma_ring_buf_setup(struct ath11k *ar,
 	int num_entries;
 
 	num_entries = rx_ring->refill_buf_ring.size /
-		      ath11k_hal_srng_get_entrysize(ringtype);
+		      ath11k_hal_srng_get_entrysize(ar->ab, ringtype);
 
 	rx_ring->bufs_max = num_entries;
 	ath11k_dp_rxbufs_replenish(ar->ab, dp->mac_id, rx_ring, num_entries,
-				   HAL_RX_BUF_RBM_SW3_BM, GFP_KERNEL);
+				   HAL_RX_BUF_RBM_SW3_BM);
 	return 0;
 }
 
 static int ath11k_dp_rxdma_pdev_buf_setup(struct ath11k *ar)
 {
 	struct ath11k_pdev_dp *dp = &ar->dp;
+	struct ath11k_base *ab = ar->ab;
 	struct dp_rxdma_ring *rx_ring = &dp->rx_refill_buf_ring;
+	int i;
 
 	ath11k_dp_rxdma_ring_buf_setup(ar, rx_ring, HAL_RXDMA_BUF);
 
-	rx_ring = &dp->rxdma_mon_buf_ring;
-	ath11k_dp_rxdma_ring_buf_setup(ar, rx_ring, HAL_RXDMA_MONITOR_BUF);
+	if (ar->ab->hw_params.rxdma1_enable) {
+		rx_ring = &dp->rxdma_mon_buf_ring;
+		ath11k_dp_rxdma_ring_buf_setup(ar, rx_ring, HAL_RXDMA_MONITOR_BUF);
+	}
 
-	rx_ring = &dp->rx_mon_status_refill_ring;
-	ath11k_dp_rxdma_ring_buf_setup(ar, rx_ring, HAL_RXDMA_MONITOR_STATUS);
+	for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+		rx_ring = &dp->rx_mon_status_refill_ring[i];
+		ath11k_dp_rxdma_ring_buf_setup(ar, rx_ring, HAL_RXDMA_MONITOR_STATUS);
+	}
+
 	return 0;
 }
@@ -443,11 +474,21 @@ static int ath11k_dp_rxdma_pdev_buf_setup(struct ath11k *ar)
 static void ath11k_dp_rx_pdev_srng_free(struct ath11k *ar)
 {
 	struct ath11k_pdev_dp *dp = &ar->dp;
+	struct ath11k_base *ab = ar->ab;
+	int i;
 
-	ath11k_dp_srng_cleanup(ar->ab, &dp->rx_refill_buf_ring.refill_buf_ring);
-	ath11k_dp_srng_cleanup(ar->ab, &dp->rxdma_err_dst_ring);
-	ath11k_dp_srng_cleanup(ar->ab, &dp->rx_mon_status_refill_ring.refill_buf_ring);
-	ath11k_dp_srng_cleanup(ar->ab, &dp->rxdma_mon_buf_ring.refill_buf_ring);
+	ath11k_dp_srng_cleanup(ab, &dp->rx_refill_buf_ring.refill_buf_ring);
+
+	for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+		if (ab->hw_params.rx_mac_buf_ring)
+			ath11k_dp_srng_cleanup(ab, &dp->rx_mac_buf_ring[i]);
+
+		ath11k_dp_srng_cleanup(ab, &dp->rxdma_err_dst_ring[i]);
+		ath11k_dp_srng_cleanup(ab,
+				       &dp->rx_mon_status_refill_ring[i].refill_buf_ring);
+	}
+
+	ath11k_dp_srng_cleanup(ab, &dp->rxdma_mon_buf_ring.refill_buf_ring);
 }
 
 void ath11k_dp_pdev_reo_cleanup(struct ath11k_base *ab)
@@ -486,7 +527,9 @@ err_reo_cleanup:
 static int ath11k_dp_rx_pdev_srng_alloc(struct ath11k *ar)
 {
 	struct ath11k_pdev_dp *dp = &ar->dp;
+	struct ath11k_base *ab = ar->ab;
 	struct dp_srng *srng = NULL;
+	int i;
 	int ret;
 
 	ret = ath11k_dp_srng_setup(ar->ab,
@@ -498,24 +541,55 @@ static int ath11k_dp_rx_pdev_srng_alloc(struct ath11k *ar)
 		return ret;
 	}
 
-	ret = ath11k_dp_srng_setup(ar->ab, &dp->rxdma_err_dst_ring,
-				   HAL_RXDMA_DST, 0, dp->mac_id,
-				   DP_RXDMA_ERR_DST_RING_SIZE);
-	if (ret) {
-		ath11k_warn(ar->ab, "failed to setup rxdma_err_dst_ring\n");
-		return ret;
+	if (ar->ab->hw_params.rx_mac_buf_ring) {
+		for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+			ret = ath11k_dp_srng_setup(ar->ab,
+						   &dp->rx_mac_buf_ring[i],
+						   HAL_RXDMA_BUF, 1,
+						   dp->mac_id + i, 1024);
+			if (ret) {
+				ath11k_warn(ar->ab, "failed to setup rx_mac_buf_ring %d\n",
+					    i);
+				return ret;
+			}
+		}
 	}
 
-	srng = &dp->rx_mon_status_refill_ring.refill_buf_ring;
-	ret = ath11k_dp_srng_setup(ar->ab,
-				   srng,
-				   HAL_RXDMA_MONITOR_STATUS, 0, dp->mac_id,
-				   DP_RXDMA_MON_STATUS_RING_SIZE);
-	if (ret) {
-		ath11k_warn(ar->ab,
-			    "failed to setup rx_mon_status_refill_ring\n");
-		return ret;
+	for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+		ret = ath11k_dp_srng_setup(ar->ab, &dp->rxdma_err_dst_ring[i],
+					   HAL_RXDMA_DST, 0, dp->mac_id + i,
+					   DP_RXDMA_ERR_DST_RING_SIZE);
+		if (ret) {
+			ath11k_warn(ar->ab, "failed to setup rxdma_err_dst_ring %d\n", i);
+			return ret;
+		}
 	}
+
+	for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+		srng = &dp->rx_mon_status_refill_ring[i].refill_buf_ring;
+		ret = ath11k_dp_srng_setup(ar->ab,
+					   srng,
+					   HAL_RXDMA_MONITOR_STATUS, 0, dp->mac_id + i,
+					   DP_RXDMA_MON_STATUS_RING_SIZE);
+		if (ret) {
+			ath11k_warn(ar->ab,
+				    "failed to setup rx_mon_status_refill_ring %d\n", i);
+			return ret;
+		}
+	}
+
+	/* if rxdma1_enable is false, then it doesn't need
+	 * to setup rxdam_mon_buf_ring, rxdma_mon_dst_ring
+	 * and rxdma_mon_desc_ring.
+	 * init reap timer for QCA6390.
+	 */
+	if (!ar->ab->hw_params.rxdma1_enable) {
+		//init mon status buffer reap timer
+		timer_setup(&ar->ab->mon_reap_timer,
+			    ath11k_dp_service_mon_ring, 0);
+		return 0;
+	}
+
 	ret = ath11k_dp_srng_setup(ar->ab,
 				   &dp->rxdma_mon_buf_ring.refill_buf_ring,
 				   HAL_RXDMA_MONITOR_BUF, 0, dp->mac_id,
@@ -1377,9 +1451,8 @@ ath11k_update_per_peer_tx_stats(struct ath11k *ar,
 			HTT_USR_CMPLTN_LONG_RETRY(usr_stats->cmpltn_cmn.flags) +
 			HTT_USR_CMPLTN_SHORT_RETRY(usr_stats->cmpltn_cmn.flags);
 
-		if (ath11k_debug_is_extd_tx_stats_enabled(ar))
-			ath11k_accumulate_per_peer_tx_stats(arsta,
-							    peer_stats, rate_idx);
+		if (ath11k_debugfs_is_extd_tx_stats_enabled(ar))
+			ath11k_debugfs_sta_add_tx_stats(arsta, peer_stats, rate_idx);
 	}
 
 	spin_unlock_bh(&ab->base_lock);
@@ -1421,7 +1494,7 @@ struct htt_ppdu_stats_info *ath11k_dp_htt_get_ppdu_desc(struct ath11k *ar,
 	}
 	spin_unlock_bh(&ar->data_lock);
 
-	ppdu_info = kzalloc(sizeof(*ppdu_info), GFP_KERNEL);
+	ppdu_info = kzalloc(sizeof(*ppdu_info), GFP_ATOMIC);
 	if (!ppdu_info)
 		return NULL;
@@ -1455,7 +1528,7 @@ static int ath11k_htt_pull_ppdu_stats(struct ath11k_base *ab,
 		goto exit;
 	}
 
-	if (ath11k_debug_is_pktlog_lite_mode_enabled(ar))
+	if (ath11k_debugfs_is_pktlog_lite_mode_enabled(ar))
 		trace_ath11k_htt_ppdu_stats(ar, skb->data, len);
 
 	ppdu_info = ath11k_dp_htt_get_ppdu_desc(ar, ppdu_id);
@@ -1577,11 +1650,23 @@ void ath11k_dp_htt_htc_t2h_msg_handler(struct ath11k_base *ab,
 					   resp->peer_map_ev.info1);
 		ath11k_dp_get_mac_addr(resp->peer_map_ev.mac_addr_l32,
 				       peer_mac_h16, mac_addr);
+		ath11k_peer_map_event(ab, vdev_id, peer_id, mac_addr, 0);
+		break;
+	case HTT_T2H_MSG_TYPE_PEER_MAP2:
+		vdev_id = FIELD_GET(HTT_T2H_PEER_MAP_INFO_VDEV_ID,
+				    resp->peer_map_ev.info);
+		peer_id = FIELD_GET(HTT_T2H_PEER_MAP_INFO_PEER_ID,
+				    resp->peer_map_ev.info);
+		peer_mac_h16 = FIELD_GET(HTT_T2H_PEER_MAP_INFO1_MAC_ADDR_H16,
+					 resp->peer_map_ev.info1);
+		ath11k_dp_get_mac_addr(resp->peer_map_ev.mac_addr_l32,
+				       peer_mac_h16, mac_addr);
 		ast_hash = FIELD_GET(HTT_T2H_PEER_MAP_INFO2_AST_HASH_VAL,
 				     resp->peer_map_ev.info2);
 		ath11k_peer_map_event(ab, vdev_id, peer_id, mac_addr, ast_hash);
 		break;
 	case HTT_T2H_MSG_TYPE_PEER_UNMAP:
+	case HTT_T2H_MSG_TYPE_PEER_UNMAP2:
 		peer_id = FIELD_GET(HTT_T2H_PEER_UNMAP_INFO_PEER_ID,
 				    resp->peer_unmap_ev.info);
 		ath11k_peer_unmap_event(ab, peer_id);
@@ -1590,7 +1675,7 @@ void ath11k_dp_htt_htc_t2h_msg_handler(struct ath11k_base *ab,
 		ath11k_htt_pull_ppdu_stats(ab, skb);
 		break;
 	case HTT_T2H_MSG_TYPE_EXT_STATS_CONF:
-		ath11k_dbg_htt_ext_stats_handler(ab, skb);
+		ath11k_debugfs_htt_ext_stats_handler(ab, skb);
 		break;
 	case HTT_T2H_MSG_TYPE_PKTLOG:
 		ath11k_htt_pktlog(ab, skb);
@@ -2065,8 +2150,6 @@ static void ath11k_dp_rx_h_mpdu(struct ath11k *ar,
 	mcast = is_multicast_ether_addr(hdr->addr1);
 	fill_crypto_hdr = mcast;
 
-	is_decrypted = ath11k_dp_rx_h_attn_is_decrypted(rx_desc);
-
 	spin_lock_bh(&ar->ab->base_lock);
 	peer = ath11k_peer_find_by_addr(ar->ab, hdr->addr2);
 	if (peer) {
@@ -2080,6 +2163,8 @@ static void ath11k_dp_rx_h_mpdu(struct ath11k *ar,
 	spin_unlock_bh(&ar->ab->base_lock);
 
 	err_bitmap = ath11k_dp_rx_h_attn_mpdu_err(rx_desc);
+	if (enctype != HAL_ENCRYPT_TYPE_OPEN && !err_bitmap)
+		is_decrypted = ath11k_dp_rx_h_attn_is_decrypted(rx_desc);
 
 	/* Clear per-MPDU flags while leaving per-PPDU flags intact */
 	rx_status->flag &= ~(RX_FLAG_FAILED_FCS_CRC |
@@ -2282,6 +2367,9 @@ static void ath11k_dp_rx_deliver_msdu(struct ath11k *ar, struct napi_struct *napi,
 		   !!(status->flag & RX_FLAG_MMIC_ERROR),
 		   !!(status->flag & RX_FLAG_AMSDU_MORE));
 
+	ath11k_dbg_dump(ar->ab, ATH11K_DBG_DP_RX, NULL, "dp rx msdu: ",
+			msdu->data, msdu->len);
+
 	/* TODO: trace rx packet */
 
 	ieee80211_rx_napi(ar->hw, NULL, msdu, napi);
@@ -2526,7 +2614,7 @@ try_again:
 		rx_ring = &ar->dp.rx_refill_buf_ring;
 
 		ath11k_dp_rxbufs_replenish(ab, i, rx_ring, num_buffs_reaped[i],
-					   HAL_RX_BUF_RBM_SW3_BM, GFP_ATOMIC);
+					   HAL_RX_BUF_RBM_SW3_BM);
 	}
 
 	ath11k_dp_rx_process_received_packets(ab, napi, &msdu_list,
@@ -2608,7 +2696,7 @@ static void ath11k_dp_rx_update_peer_stats(struct ath11k_sta *arsta,
 
 static struct sk_buff *ath11k_dp_rx_alloc_mon_status_buf(struct ath11k_base *ab,
 							 struct dp_rxdma_ring *rx_ring,
-							 int *buf_id, gfp_t gfp)
+							 int *buf_id)
 {
 	struct sk_buff *skb;
 	dma_addr_t paddr;
@@ -2633,7 +2721,7 @@ static struct sk_buff *ath11k_dp_rx_alloc_mon_status_buf(struct ath11k_base *ab,
 
 	spin_lock_bh(&rx_ring->idr_lock);
 	*buf_id = idr_alloc(&rx_ring->bufs_idr, skb, 0,
-			    rx_ring->bufs_max, gfp);
+			    rx_ring->bufs_max, GFP_ATOMIC);
 	spin_unlock_bh(&rx_ring->idr_lock);
 	if (*buf_id < 0)
 		goto fail_dma_unmap;
@@ -2653,8 +2741,7 @@ fail_alloc_skb:
 int ath11k_dp_rx_mon_status_bufs_replenish(struct ath11k_base *ab, int mac_id,
 					   struct dp_rxdma_ring *rx_ring,
 					   int req_entries,
-					   enum hal_rx_buf_return_buf_manager mgr,
-					   gfp_t gfp)
+					   enum hal_rx_buf_return_buf_manager mgr)
 {
 	struct hal_srng *srng;
 	u32 *desc;
@@ -2680,7 +2767,7 @@ int ath11k_dp_rx_mon_status_bufs_replenish(struct ath11k_base *ab, int mac_id,
 
 	while (num_remain > 0) {
 		skb = ath11k_dp_rx_alloc_mon_status_buf(ab, rx_ring,
-							&buf_id, gfp);
+							&buf_id);
 		if (!skb)
 			break;
 		paddr = ATH11K_SKB_RXCB(skb)->paddr;
@@ -2719,20 +2806,25 @@ fail_desc_get:
 static int ath11k_dp_rx_reap_mon_status_ring(struct ath11k_base *ab, int mac_id,
 					     int *budget, struct sk_buff_head *skb_list)
 {
-	struct ath11k *ar = ab->pdevs[mac_id].ar;
-	struct ath11k_pdev_dp *dp = &ar->dp;
-	struct dp_rxdma_ring *rx_ring = &dp->rx_mon_status_refill_ring;
+	struct ath11k *ar;
+	struct ath11k_pdev_dp *dp;
+	struct dp_rxdma_ring *rx_ring;
 	struct hal_srng *srng;
 	void *rx_mon_status_desc;
 	struct sk_buff *skb;
 	struct ath11k_skb_rxcb *rxcb;
 	struct hal_tlv_hdr *tlv;
 	u32 cookie;
-	int buf_id;
+	int buf_id, srng_id;
 	dma_addr_t paddr;
 	u8 rbm;
 	int num_buffs_reaped = 0;
 
+	ar = ab->pdevs[ath11k_hw_mac_id_to_pdev_id(&ab->hw_params, mac_id)].ar;
+	dp = &ar->dp;
+	srng_id = ath11k_hw_mac_id_to_srng_id(&ab->hw_params, mac_id);
+	rx_ring = &dp->rx_mon_status_refill_ring[srng_id];
+
 	srng = &ab->hal.srng_list[rx_ring->refill_buf_ring.ring_id];
 
 	spin_lock_bh(&srng->lock);
@@ -2786,7 +2878,7 @@ static int ath11k_dp_rx_reap_mon_status_ring(struct ath11k_base *ab, int mac_id,
 		}
 move_next:
 		skb = ath11k_dp_rx_alloc_mon_status_buf(ab, rx_ring,
-							&buf_id, GFP_ATOMIC);
+							&buf_id);
 
 		if (!skb) {
 			ath11k_hal_rx_buf_addr_info_set(rx_mon_status_desc, 0, 0,
@@ -2813,7 +2905,7 @@ move_next:
 int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id,
 				    struct napi_struct *napi, int budget)
 {
-	struct ath11k *ar = ab->pdevs[mac_id].ar;
+	struct ath11k *ar = ath11k_ab_to_ar(ab, mac_id);
 	enum hal_rx_mon_status hal_status;
 	struct sk_buff *skb;
 	struct sk_buff_head skb_list;
@@ -2833,7 +2925,7 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id,
 		memset(&ppdu_info, 0, sizeof(ppdu_info));
 		ppdu_info.peer_id = HAL_INVALID_PEERID;
 
-		if (ath11k_debug_is_pktlog_rx_stats_enabled(ar))
+		if (ath11k_debugfs_is_pktlog_rx_stats_enabled(ar))
			trace_ath11k_htt_rxdesc(ar, skb->data, DP_RX_BUFFER_SIZE);
 
 		hal_status = ath11k_hal_rx_parse_mon_status(ab, &ppdu_info, skb);
@@ -2861,7 +2953,7 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id,
 		arsta = (struct ath11k_sta *)peer->sta->drv_priv;
 		ath11k_dp_rx_update_peer_stats(arsta, &ppdu_info);
 
-		if (ath11k_debug_is_pktlog_peer_valid(ar, peer->addr))
+		if (ath11k_debugfs_is_pktlog_peer_valid(ar, peer->addr))
 			trace_ath11k_htt_rxdesc(ar, skb->data, DP_RX_BUFFER_SIZE);
 
 		spin_unlock_bh(&ab->base_lock);
@@ -3599,7 +3691,7 @@ exit:
 		rx_ring = &ar->dp.rx_refill_buf_ring;
 
 		ath11k_dp_rxbufs_replenish(ab, i, rx_ring, n_bufs_reaped[i],
-					   HAL_RX_BUF_RBM_SW3_BM, GFP_ATOMIC);
+					   HAL_RX_BUF_RBM_SW3_BM);
 	}
 
 	return tot_n_bufs_reaped;
@@ -3709,8 +3801,7 @@ static bool ath11k_dp_rx_h_reo_err(struct ath11k *ar, struct sk_buff *msdu,
 		 * instead, it is good to drop such packets in mac80211
 		 * after incrementing the replay counters.
 		 */
-
-		/* fall through */
+		fallthrough;
 	default:
 		/* TODO: Review other errors and process them to mac80211
 		 * as appropriate.
@@ -3820,7 +3911,7 @@ int ath11k_dp_rx_process_wbm_err(struct ath11k_base *ab,
 	int total_num_buffs_reaped = 0;
 	int ret, i;
 
-	for (i = 0; i < MAX_RADIOS; i++)
+	for (i = 0; i < ab->num_radios; i++)
 		__skb_queue_head_init(&msdu_list[i]);
 
 	srng = &ab->hal.srng_list[dp->rx_rel_ring.ring_id];
@@ -3896,7 +3987,7 @@ int ath11k_dp_rx_process_wbm_err(struct ath11k_base *ab,
 		rx_ring = &ar->dp.rx_refill_buf_ring;
 
 		ath11k_dp_rxbufs_replenish(ab, i, rx_ring, num_buffs_reaped[i],
-					   HAL_RX_BUF_RBM_SW3_BM, GFP_ATOMIC);
+					   HAL_RX_BUF_RBM_SW3_BM);
 	}
 
 	rcu_read_lock();
@@ -3923,9 +4014,9 @@ done:
 
 int ath11k_dp_process_rxdma_err(struct ath11k_base *ab, int mac_id, int budget)
 {
-	struct ath11k *ar = ab->pdevs[mac_id].ar;
-	struct dp_srng *err_ring = &ar->dp.rxdma_err_dst_ring;
-	struct dp_rxdma_ring *rx_ring = &ar->dp.rx_refill_buf_ring;
+	struct ath11k *ar;
+	struct dp_srng *err_ring;
+	struct dp_rxdma_ring *rx_ring;
 	struct dp_link_desc_bank *link_desc_banks = ab->dp.link_desc_banks;
 	struct hal_srng *srng;
 	u32 msdu_cookies[HAL_NUM_RX_MSDUS_PER_LINK_DESC];
@@ -3944,6 +4035,11 @@ int ath11k_dp_process_rxdma_err(struct ath11k_base *ab, int mac_id, int budget)
 	int i;
 	int buf_id;
 
+	ar = ab->pdevs[ath11k_hw_mac_id_to_pdev_id(&ab->hw_params, mac_id)].ar;
+	err_ring = &ar->dp.rxdma_err_dst_ring[ath11k_hw_mac_id_to_srng_id(&ab->hw_params,
+									  mac_id)];
+	rx_ring = &ar->dp.rx_refill_buf_ring;
+
 	srng = &ab->hal.srng_list[err_ring->ring_id];
 
 	spin_lock_bh(&srng->lock);
@@ -4000,7 +4096,7 @@ int ath11k_dp_process_rxdma_err(struct ath11k_base *ab, int mac_id, int budget)
 
 	if (num_buf_freed)
 		ath11k_dp_rxbufs_replenish(ab, mac_id, rx_ring, num_buf_freed,
-					   HAL_RX_BUF_RBM_SW3_BM, GFP_ATOMIC);
+					   HAL_RX_BUF_RBM_SW3_BM);
 
 	return budget - quota;
 }
@@ -4097,6 +4193,7 @@ int ath11k_dp_rx_pdev_alloc(struct ath11k_base *ab, int mac_id)
 	struct ath11k *ar = ab->pdevs[mac_id].ar;
 	struct ath11k_pdev_dp *dp = &ar->dp;
 	u32 ring_id;
+	int i;
 	int ret;
 
 	ret = ath11k_dp_rx_pdev_srng_alloc(ar);
@@ -4119,14 +4216,33 @@ int ath11k_dp_rx_pdev_alloc(struct ath11k_base *ab, int mac_id)
 		return ret;
 	}
 
-	ring_id = dp->rxdma_err_dst_ring.ring_id;
-	ret = ath11k_dp_tx_htt_srng_setup(ab, ring_id, mac_id, HAL_RXDMA_DST);
-	if (ret) {
-		ath11k_warn(ab, "failed to configure rxdma_err_dest_ring %d\n",
-			    ret);
-		return ret;
+	if (ab->hw_params.rx_mac_buf_ring) {
+		for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+			ring_id = dp->rx_mac_buf_ring[i].ring_id;
+			ret = ath11k_dp_tx_htt_srng_setup(ab, ring_id,
+							  mac_id + i, HAL_RXDMA_BUF);
+			if (ret) {
+				ath11k_warn(ab, "failed to configure rx_mac_buf_ring%d %d\n",
+					    i, ret);
+				return ret;
+			}
+		}
+	}
+
+	for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+		ring_id = dp->rxdma_err_dst_ring[i].ring_id;
+		ret = ath11k_dp_tx_htt_srng_setup(ab, ring_id,
+						  mac_id + i, HAL_RXDMA_DST);
+		if (ret) {
+			ath11k_warn(ab, "failed to configure rxdma_err_dest_ring%d %d\n",
+				    i, ret);
+			return ret;
+		}
 	}
 
+	if (!ab->hw_params.rxdma1_enable)
+		goto config_refill_ring;
+
 	ring_id = dp->rxdma_mon_buf_ring.refill_buf_ring.ring_id;
 	ret = ath11k_dp_tx_htt_srng_setup(ab, ring_id, mac_id,
 					  HAL_RXDMA_MONITOR_BUF);
@@ -4151,15 +4267,20 @@ int ath11k_dp_rx_pdev_alloc(struct ath11k_base *ab, int mac_id)
 			    ret);
 		return ret;
 	}
-	ring_id = dp->rx_mon_status_refill_ring.refill_buf_ring.ring_id;
-	ret = ath11k_dp_tx_htt_srng_setup(ab, ring_id, mac_id,
-					  HAL_RXDMA_MONITOR_STATUS);
-	if (ret) {
-		ath11k_warn(ab,
-			    "failed to configure mon_status_refill_ring %d\n",
-			    ret);
-		return ret;
+
+config_refill_ring:
+	for (i = 0; i < ab->hw_params.num_rxmda_per_pdev; i++) {
+		ring_id = dp->rx_mon_status_refill_ring[i].refill_buf_ring.ring_id;
+		ret = ath11k_dp_tx_htt_srng_setup(ab, ring_id, mac_id + i,
+						  HAL_RXDMA_MONITOR_STATUS);
+		if (ret) {
+			ath11k_warn(ab,
+				    "failed to configure mon_status_refill_ring%d %d\n",
+				    i, ret);
+			return ret;
+		}
 	}
+
 	return 0;
 }
@@ -4185,8 +4306,13 @@ int ath11k_dp_rx_monitor_link_desc_return(struct ath11k *ar,
 	void *src_srng_desc;
 	int ret = 0;
 
-	dp_srng = &dp->rxdma_mon_desc_ring;
-	hal_srng = &ar->ab->hal.srng_list[dp_srng->ring_id];
+	if (ar->ab->hw_params.rxdma1_enable) {
+		dp_srng = &dp->rxdma_mon_desc_ring;
+		hal_srng = &ar->ab->hal.srng_list[dp_srng->ring_id];
+	} else {
+		dp_srng = &ar->ab->dp.wbm_desc_rel_ring;
+		hal_srng = &ar->ab->hal.srng_list[dp_srng->ring_id];
+	}
 
 	ath11k_hal_srng_access_begin(ar->ab, hal_srng);
@@ -4210,16 +4336,16 @@ int ath11k_dp_rx_monitor_link_desc_return(struct ath11k *ar,
 static void ath11k_dp_rx_mon_next_link_desc_get(void *rx_msdu_link_desc,
 						dma_addr_t *paddr, u32 *sw_cookie,
+						u8 *rbm,
 						void **pp_buf_addr_info)
 {
 	struct hal_rx_msdu_link *msdu_link =
 			(struct hal_rx_msdu_link *)rx_msdu_link_desc;
 	struct ath11k_buffer_addr *buf_addr_info;
-	u8 rbm = 0;
 
 	buf_addr_info = (struct ath11k_buffer_addr *)&msdu_link->buf_addr_info;
 
-	ath11k_hal_rx_buf_addr_info_get(buf_addr_info, paddr, sw_cookie, &rbm);
+	ath11k_hal_rx_buf_addr_info_get(buf_addr_info, paddr, sw_cookie, rbm);
 
 	*pp_buf_addr_info = (void *)buf_addr_info;
 }
@@ -4330,7 +4456,7 @@ static void ath11k_dp_mon_get_buf_len(struct hal_rx_msdu_desc_info *info,
 }
 
 static u32
-ath11k_dp_rx_mon_mpdu_pop(struct ath11k *ar,
+ath11k_dp_rx_mon_mpdu_pop(struct ath11k *ar, int mac_id,
 			  void *ring_entry, struct sk_buff **head_msdu,
 			  struct sk_buff **tail_msdu, u32 *npackets,
 			  u32 *ppdu_id)
@@ -4355,9 +4481,15 @@ ath11k_dp_rx_mon_mpdu_pop(struct ath11k *ar, int mac_id,
 	struct hal_reo_entrance_ring *ent_desc =
 			(struct hal_reo_entrance_ring *)ring_entry;
 	int buf_id;
+	u32 rx_link_buf_info[2];
+	u8 rbm;
+
+	if (!ar->ab->hw_params.rxdma1_enable)
+		rx_ring = &dp->rx_refill_buf_ring;
 
 	ath11k_hal_rx_reo_ent_buf_paddr_get(ring_entry, &paddr,
-					    &sw_cookie, &p_last_buf_addr_info,
+					    &sw_cookie,
+					    &p_last_buf_addr_info, &rbm,
 					    &msdu_cnt);
 
 	if (FIELD_GET(HAL_REO_ENTR_RING_INFO1_RXDMA_PUSH_REASON,
@@ -4383,9 +4515,14 @@ ath11k_dp_rx_mon_mpdu_pop(struct ath11k *ar, int mac_id,
 		return rx_bufs_used;
 	}
 
-	rx_msdu_link_desc =
-		(void *)pmon->link_desc_banks[sw_cookie].vaddr +
-		(paddr - pmon->link_desc_banks[sw_cookie].paddr);
+	if (ar->ab->hw_params.rxdma1_enable)
+		rx_msdu_link_desc =
+			(void *)pmon->link_desc_banks[sw_cookie].vaddr +
+			(paddr - pmon->link_desc_banks[sw_cookie].paddr);
+	else
+		rx_msdu_link_desc =
+			(void *)ar->ab->dp.link_desc_banks[sw_cookie].vaddr +
+			(paddr - ar->ab->dp.link_desc_banks[sw_cookie].paddr);
 
 	ath11k_hal_rx_msdu_list_get(ar, rx_msdu_link_desc, &msdu_list,
 				    &num_msdus);
@@ -4481,15 +4618,22 @@ next_msdu:
 			spin_unlock_bh(&rx_ring->idr_lock);
 		}
 
+		ath11k_hal_rx_buf_addr_info_set(rx_link_buf_info, paddr, sw_cookie, rbm);
+
 		ath11k_dp_rx_mon_next_link_desc_get(rx_msdu_link_desc, &paddr,
-						    &sw_cookie,
+						    &sw_cookie, &rbm,
 						    &p_buf_addr_info);
 
-		if (ath11k_dp_rx_monitor_link_desc_return(ar,
-							  p_last_buf_addr_info,
-							  dp->mac_id))
-			ath11k_dbg(ar->ab, ATH11K_DBG_DATA,
-				   "dp_rx_monitor_link_desc_return failed");
+		if (ar->ab->hw_params.rxdma1_enable) {
+			if (ath11k_dp_rx_monitor_link_desc_return(ar,
+								  p_last_buf_addr_info,
+								  dp->mac_id))
+				ath11k_dbg(ar->ab, ATH11K_DBG_DATA,
+					   "dp_rx_monitor_link_desc_return failed");
+		} else {
+			ath11k_dp_rx_link_desc_return(ar->ab, rx_link_buf_info,
+						      HAL_WBM_REL_BM_ACT_PUT_IN_IDLE);
+		}
 
 		p_last_buf_addr_info = p_buf_addr_info;
@@ -4673,8 +4817,8 @@ mon_deliver_fail:
 	return -EINVAL;
 }
 
-static void ath11k_dp_rx_mon_dest_process(struct ath11k *ar, u32 quota,
-					   struct napi_struct *napi)
+static void ath11k_dp_rx_mon_dest_process(struct ath11k *ar, int mac_id,
+					   u32 quota, struct napi_struct *napi)
 {
 	struct ath11k_pdev_dp *dp = &ar->dp;
 	struct ath11k_mon_data *pmon = (struct ath11k_mon_data *)&dp->mon_data;
@@ -4682,10 +4826,16 @@ static void ath11k_dp_rx_mon_dest_process(struct ath11k *ar, int mac_id,
 	void *mon_dst_srng;
 	u32 ppdu_id;
 	u32 rx_bufs_used;
+	u32 ring_id;
 	struct ath11k_pdev_mon_stats *rx_mon_stats;
 	u32 npackets = 0;
 
-	mon_dst_srng = &ar->ab->hal.srng_list[dp->rxdma_mon_dst_ring.ring_id];
+	if (ar->ab->hw_params.rxdma1_enable)
+		ring_id = dp->rxdma_mon_dst_ring.ring_id;
+	else
+		ring_id = dp->rxdma_err_dst_ring[mac_id].ring_id;
+
+	mon_dst_srng = &ar->ab->hal.srng_list[ring_id];
 
 	if (!mon_dst_srng) {
 		ath11k_warn(ar->ab,
@@ -4708,7 +4858,7 @@ static void ath11k_dp_rx_mon_dest_process(struct ath11k *ar, int mac_id,
 		head_msdu = NULL;
 		tail_msdu = NULL;
 
-		rx_bufs_used += ath11k_dp_rx_mon_mpdu_pop(ar, ring_entry,
+		rx_bufs_used += ath11k_dp_rx_mon_mpdu_pop(ar, mac_id, ring_entry,
 							  &head_msdu,
 							  &tail_msdu,
 							  &npackets, &ppdu_id);
@@ -4735,15 +4885,21 @@ static void ath11k_dp_rx_mon_dest_process(struct ath11k *ar, int mac_id,
 
 	if (rx_bufs_used) {
 		rx_mon_stats->dest_ppdu_done++;
-		ath11k_dp_rxbufs_replenish(ar->ab, dp->mac_id,
-					   &dp->rxdma_mon_buf_ring,
-					   rx_bufs_used,
-					   HAL_RX_BUF_RBM_SW3_BM, GFP_ATOMIC);
+		if (ar->ab->hw_params.rxdma1_enable)
+			ath11k_dp_rxbufs_replenish(ar->ab, dp->mac_id,
+						   &dp->rxdma_mon_buf_ring,
+						   rx_bufs_used,
+						   HAL_RX_BUF_RBM_SW3_BM);
+		else
+			ath11k_dp_rxbufs_replenish(ar->ab, dp->mac_id,
+						   &dp->rx_refill_buf_ring,
+						   rx_bufs_used,
+						   HAL_RX_BUF_RBM_SW3_BM);
 	}
 }
 
 static void ath11k_dp_rx_mon_status_process_tlv(struct ath11k *ar,
-						u32 quota,
+						int mac_id, u32 quota,
 						struct napi_struct *napi)
 {
 	struct ath11k_pdev_dp *dp = &ar->dp;
@@ -4767,7 +4923,7 @@ static void ath11k_dp_rx_mon_status_process_tlv(struct ath11k *ar,
 		if (tlv_status == HAL_TLV_STATUS_PPDU_DONE) {
 			rx_mon_stats->status_ppdu_done++;
 			pmon->mon_ppdu_status = DP_PPDU_STATUS_DONE;
-			ath11k_dp_rx_mon_dest_process(ar, quota, napi);
+			ath11k_dp_rx_mon_dest_process(ar, mac_id, quota, napi);
 			pmon->mon_ppdu_status = DP_PPDU_STATUS_START;
 		}
 		dev_kfree_skb_any(status_skb);
@@ -4777,15 +4933,15 @@ static void ath11k_dp_rx_mon_status_process_tlv(struct ath11k *ar,
 static int ath11k_dp_mon_process_rx(struct ath11k_base *ab, int mac_id,
 				    struct napi_struct *napi, int budget)
 {
-	struct ath11k *ar = ab->pdevs[mac_id].ar;
+	struct ath11k *ar = ath11k_ab_to_ar(ab, mac_id);
 	struct ath11k_pdev_dp *dp = &ar->dp;
 	struct ath11k_mon_data *pmon = (struct ath11k_mon_data *)&dp->mon_data;
 	int num_buffs_reaped = 0;
 
-	num_buffs_reaped = ath11k_dp_rx_reap_mon_status_ring(ar->ab, dp->mac_id, &budget,
+	num_buffs_reaped = ath11k_dp_rx_reap_mon_status_ring(ar->ab, mac_id, &budget,
							     &pmon->rx_status_q);
 	if (num_buffs_reaped)
-		ath11k_dp_rx_mon_status_process_tlv(ar, budget, napi);
+		ath11k_dp_rx_mon_status_process_tlv(ar, mac_id, budget, napi);
 
 	return num_buffs_reaped;
 }
@@ -4793,7 +4949,7 @@ static int ath11k_dp_mon_process_rx(struct ath11k_base *ab, int mac_id,
 int ath11k_dp_rx_process_mon_rings(struct ath11k_base *ab, int mac_id,
 				   struct napi_struct *napi, int budget)
 {
-	struct ath11k *ar = ab->pdevs[mac_id].ar;
+	struct ath11k *ar = ath11k_ab_to_ar(ab, mac_id);
 	int ret = 0;
 
 	if (test_bit(ATH11K_FLAG_MONITOR_ENABLED, &ar->monitor_flags))
@@ -4832,9 +4988,15 @@ int ath11k_dp_rx_pdev_mon_attach(struct ath11k *ar)
 		return ret;
 	}
 
+	/* if rxdma1_enable is false, no need to setup
+	 * rxdma_mon_desc_ring.
+	 */
+	if (!ar->ab->hw_params.rxdma1_enable)
+		return 0;
+
 	dp_srng = &dp->rxdma_mon_desc_ring;
 	n_link_desc = dp_srng->size /
-			ath11k_hal_srng_get_entrysize(HAL_RXDMA_MONITOR_DESC);
+			ath11k_hal_srng_get_entrysize(ar->ab, HAL_RXDMA_MONITOR_DESC);
 	mon_desc_srng =
 		&ar->ab->hal.srng_list[dp->rxdma_mon_desc_ring.ring_id];
@@ -4848,6 +5010,7 @@ int ath11k_dp_rx_pdev_mon_attach(struct ath11k *ar)
 	pmon->mon_last_linkdesc_paddr = 0;
 	pmon->mon_last_buf_cookie = DP_RX_DESC_COOKIE_MAX + 1;
 	spin_lock_init(&pmon->mon_lock);
+
 	return 0;
 }
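
The ath11k_dp_service_mon_ring() / timer_setup() additions above use the
standard self-rearming kernel timer pattern; a minimal sketch under
assumed names (my_dev, a 100 ms period - both hypothetical):

    /* Hypothetical driver-side sketch of the timer pattern */
    #include <linux/jiffies.h>
    #include <linux/timer.h>

    struct my_dev {
            struct timer_list reap_timer;
    };

    static void my_dev_reap(struct timer_list *t)
    {
            /* from_timer() recovers the containing struct from the
             * timer_list member, as ath11k_dp_service_mon_ring() does
             * for ath11k_base::mon_reap_timer above.
             */
            struct my_dev *dev = from_timer(dev, t, reap_timer);

            /* ... service the rings here ... */

            /* re-arm, like the mod_timer() call in the patch */
            mod_timer(&dev->reap_timer, jiffies + msecs_to_jiffies(100));
    }

    static void my_dev_timer_init(struct my_dev *dev)
    {
            timer_setup(&dev->reap_timer, my_dev_reap, 0);
            mod_timer(&dev->reap_timer, jiffies + msecs_to_jiffies(100));
    }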