author     Eric Dumazet <edumazet@google.com>       2016-04-01 18:52:13 +0300
committer  David S. Miller <davem@davemloft.net>    2016-04-05 05:11:19 +0300
commit     ca065d0cf80fa547724440a8bf37f1e674d917c0 (patch)
tree       6384df2fda5ff249da39464de7e7b9a079a794e6 /net/ipv6/inet6_hashtables.c
parent     a4298e4522d687a79af8f8fbb7eca68399ab2d81 (diff)
download   linux-ca065d0cf80fa547724440a8bf37f1e674d917c0.tar.xz
udp: no longer use SLAB_DESTROY_BY_RCU
Tom Herbert would like not touching UDP socket refcnt for encapsulated
traffic. For this to happen, we need to use normal RCU rules, with a grace
period before freeing a socket. UDP sockets are not short lived in the
high usage case, so the added cost of call_rcu() should not be a concern.

This actually removes a lot of complexity in UDP stack.

Multicast receives no longer need to hold a bucket spinlock.

Note that ip early demux still needs to take a reference on the socket.

Same remark for functions used by xt_socket and xt_TPROXY netfilter
modules, but this might be changed later.

Performance for a single UDP socket receiving flood traffic from
many RX queues/cpus.

Simple udp_rx using simple recvfrom() loop :
438 kpps instead of 374 kpps : 17 % increase of the peak rate.

v2: Addressed Willem de Bruijn feedback in multicast handling
 - keep early demux break in __udp4_lib_demux_lookup()

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <tom@herbertland.com>
Cc: Willem de Bruijn <willemb@google.com>
Tested-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
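The core of the change is the lookup discipline: with full RCU and a
call_rcu() grace period before the socket memory is freed, a reader that
stays inside rcu_read_lock() can walk the hash chain without touching
sk_refcnt at all; only callers that keep the socket past the read-side
section (early demux, the xt_socket/xt_TPROXY paths noted above) must pin
it. The following is a minimal kernel-style sketch of that pattern, not
the actual net/ipv4/udp.c code: the bucket layout, the match on sk_dport
alone, and the name udp_lookup_sketch are illustrative assumptions.

	#include <linux/rculist.h>
	#include <net/sock.h>

	/*
	 * Illustrative sketch only (hypothetical helper, simplified
	 * matching). Shows the RCU lookup rules this commit moves UDP to.
	 */
	static struct sock *udp_lookup_sketch(struct hlist_head *bucket,
					      __be16 dport,
					      bool will_escape_rcu)
	{
		struct sock *sk;

		rcu_read_lock();
		hlist_for_each_entry_rcu(sk, bucket, sk_node) {
			if (sk->sk_dport != dport)
				continue;
			/*
			 * Inside the RCU section the socket cannot be
			 * freed (call_rcu() defers the free past the
			 * grace period), so the plain RX path takes no
			 * reference. A caller that keeps the pointer
			 * past rcu_read_unlock() must still pin it and
			 * may lose the race against a dying socket.
			 */
			if (will_escape_rcu &&
			    !atomic_inc_not_zero(&sk->sk_refcnt))
				sk = NULL;	/* being destroyed: miss */
			break;
		}
		rcu_read_unlock();
		return sk;
	}

The peak-rate gain quoted above comes from the common, non-escaping path
skipping the per-packet refcount increment/decrement entirely.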
Diffstat (limited to 'net/ipv6/inet6_hashtables.c')
0 files changed, 0 insertions, 0 deletions