author     Maxim Mikityanskiy <maximmi@mellanox.com>   2021-01-18 19:03:33 +0300
committer  Daniel Borkmann <daniel@iogearbox.net>      2021-01-20 00:47:04 +0300
commit     b425e24a934e21a502d25089c6c7443d799c5594 (patch)
tree       c44bdeeda97f5b4f683e2220bc63d56061bcf1e3 /net/xdp
parent     301a33d51880619d0c5a581b5a48d3a5248fa84b (diff)
xsk: Clear pool even for inactive queues
The number of queues can change by means other than ethtool. For example, attaching an mqprio qdisc with num_tc > 1 creates multiple sets of TX queues, which may then be destroyed when mqprio is deleted. If an AF_XDP socket is created while mqprio is active, dev->_tx[queue_id].pool will be filled, but real_num_tx_queues may later decrease when mqprio is deleted, which means the pool won't be NULLed, and a further increase of the number of TX queues may expose a dangling pointer.

To avoid any potential misbehavior, this commit clears the pool for RX and TX queues regardless of real_num_*_queues, while still bounding the index by num_*_queues to avoid out-of-bounds accesses.

Fixes: 1c1efc2af158 ("xsk: Create and free buffer pool independently from umem")
Fixes: a41b4f3c58dd ("xsk: simplify xdp_clear_umem_at_qid implementation")
Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/20210118160333.333439-1-maximmi@mellanox.com
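For context, below is a sketch of the registration counterpart, xsk_reg_pool_at_qid(), roughly as it reads in net/xdp/xsk.c around this commit (comments added here, body paraphrased; the kernel tree is authoritative). It illustrates the asymmetry the fix relies on: registration may check real_num_*_queues, because an AF_XDP socket can only be bound to a currently active queue, whereas clearing must be bounded only by num_*_queues, the allocated sizes of the _rx/_tx arrays, since the active queue count can shrink between bind and teardown.

/* Bind a buffer pool to queue_id. Only active queues are eligible, so the
 * real_num_*_queues checks are correct here; the _rx/_tx arrays are sized by
 * num_rx_queues/num_tx_queues and are always at least this large.
 */
int xsk_reg_pool_at_qid(struct net_device *dev, struct xsk_buff_pool *pool,
			u16 queue_id)
{
	if (queue_id >= max_t(unsigned int,
			      dev->real_num_rx_queues,
			      dev->real_num_tx_queues))
		return -EINVAL;

	if (queue_id < dev->real_num_rx_queues)
		dev->_rx[queue_id].pool = pool;
	if (queue_id < dev->real_num_tx_queues)
		dev->_tx[queue_id].pool = pool;

	return 0;
}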
Diffstat (limited to 'net/xdp')
-rw-r--r--  net/xdp/xsk.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 8037b04a9edd..4a83117507f5 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -108,9 +108,9 @@ EXPORT_SYMBOL(xsk_get_pool_from_qid);
 void xsk_clear_pool_at_qid(struct net_device *dev, u16 queue_id)
 {
-	if (queue_id < dev->real_num_rx_queues)
+	if (queue_id < dev->num_rx_queues)
 		dev->_rx[queue_id].pool = NULL;
-	if (queue_id < dev->real_num_tx_queues)
+	if (queue_id < dev->num_tx_queues)
 		dev->_tx[queue_id].pool = NULL;
 }
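The lookup path below shows why a stale pointer is dangerous. This is a sketch of xsk_get_pool_from_qid(), the helper named in the hunk header above; comments are added here and the body is paraphrased, so consult net/xdp/xsk.c for the authoritative version. Had _tx[queue_id].pool been left set while the queue was inactive, a later increase of real_num_tx_queues would make this lookup hand the dangling pointer back to a driver.

/* Return the pool bound to queue_id, or NULL. The bounds are the currently
 * active queue counts, so any pool pointer left behind in an inactive slot
 * becomes reachable again as soon as the queue count grows.
 */
struct xsk_buff_pool *xsk_get_pool_from_qid(struct net_device *dev,
					    u16 queue_id)
{
	if (queue_id < dev->real_num_rx_queues)
		return dev->_rx[queue_id].pool;
	if (queue_id < dev->real_num_tx_queues)
		return dev->_tx[queue_id].pool;

	return NULL;
}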