path: root/net/ipv4/devinet.c
author     Eric Dumazet <edumazet@google.com>        2024-04-26 09:42:22 +0300
committer  David S. Miller <davem@davemloft.net>     2024-04-29 11:54:12 +0300
commit     cd42ba1c8ac9deb9032add6adf491110e7442040 (patch)
tree       cbe0ad7372a49bff0d17b429fbefd21e126d33ee /net/ipv4/devinet.c
parent     d63394abc923093423c141d4049b72aa403fff07 (diff)
download   linux-cd42ba1c8ac9deb9032add6adf491110e7442040.tar.xz
net: give more chances to rcu in netdev_wait_allrefs_any()
This came up while reviewing commit c4e86b4363ac ("net: add two more call_rcu_hurry()"). Paolo asked if adding one synchronize_rcu() would help.

While synchronize_rcu() does not help, making sure to call rcu_barrier() before msleep(wait) definitely helps to ensure lazy call_rcu() callbacks are completed.

Instead of waiting ~100 seconds in my tests, the ref_tracker splats occur only once, and netdev_wait_allrefs_any() latency is reduced to the strict minimum.

Ideally we should audit our call_rcu() users to make sure no refcount (or cascading call_rcu()) is held too long, because rcu_barrier() is quite expensive.

Fixes: 0e4be9e57e8c ("net: use exponential backoff in netdev_wait_allrefs")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/all/28bbf698-befb-42f6-b561-851c67f464aa@kernel.org/T/#m76d73ed6b03cd930778ac4d20a777f22a08d6824
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/ipv4/devinet.c')
0 files changed, 0 insertions, 0 deletions
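
For reference, below is a minimal sketch of the pattern the commit message describes, not the commit's actual diff: call rcu_barrier() on every pass of the refcount-wait loop, before the exponential-backoff msleep(), so that references released from pending (possibly lazy) call_rcu() callbacks are dropped as early as possible. The SKETCH_* backoff bounds and the simplified loop are illustrative stand-ins for the real netdev_wait_allrefs_any() logic in net/core/dev.c.

/*
 * Illustrative sketch only: flush queued call_rcu() work with
 * rcu_barrier() on every iteration of the refcount-wait loop,
 * before the exponential-backoff sleep.
 */
#include <linux/delay.h>
#include <linux/minmax.h>
#include <linux/netdevice.h>
#include <linux/rcupdate.h>

#define SKETCH_WAIT_MIN_MSECS	1	/* illustrative value, not the kernel's */
#define SKETCH_WAIT_MAX_MSECS	250	/* illustrative value, not the kernel's */

static void wait_allrefs_sketch(struct net_device *dev)
{
	int wait = 0;

	/* Wait until we hold the last reference to the device. */
	while (netdev_refcnt_read(dev) > 1) {
		/* Complete all queued call_rcu() callbacks, including lazy ones. */
		rcu_barrier();

		if (!wait) {
			wait = SKETCH_WAIT_MIN_MSECS;
		} else {
			msleep(wait);
			wait = min(wait << 1, SKETCH_WAIT_MAX_MSECS);
		}
	}
}

rcu_barrier() blocks until every callback already queued via call_rcu() on any CPU has run, which is why the message notes it is expensive; running it on each iteration trades that cost for a much lower worst-case latency in the device-unregister path.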