author    Alexei Starovoitov <ast@kernel.org>  2020-09-04 03:36:41 +0300
committer Alexei Starovoitov <ast@kernel.org>  2020-09-04 03:40:40 +0300
commit    e6135df45e21f1815a5948f452593124b1544a3e (patch)
tree      1b6e2a0484ce01d82c58a29824b8c251a334fc8d /tools/lib
parent    21e9ba5373fc2cec608fd68301a1dbfd14df3172 (diff)
parent    4daab7132731ac5ec9384c8a070cdb9607dc38c8 (diff)
download  linux-e6135df45e21f1815a5948f452593124b1544a3e.tar.xz
Merge branch 'hashmap_iter_bucket_lock_fix'
Yonghong Song says:

====================

Currently, the bpf hashmap iterator takes a bucket_lock, a spin_lock, before visiting each element in the bucket. This causes a deadlock if a map update/delete operates on an element in the same bucket as the one being visited.

To avoid the deadlock, let us just use rcu_read_lock instead of bucket_lock. This may result in visiting stale elements, missing some elements, or repeating some elements, if a concurrent map delete/update happens on the same map. I think using rcu_read_lock is a reasonable compromise. Users who care about stale/missing/repeated elements can use the bpf map batch access syscall interface instead.

Note that another approach would be to check, during the bpf_iter link stage, whether the iter program might update/delete the visited map, and reject the link_create if so. The verifier would need to record whether an update/delete operation happens for each map for this approach. I just feel this checking is too specialized, hence still prefer the rcu_read_lock approach.

Patch #1 has the kernel implementation, and Patch #2 adds a selftest which can trigger the deadlock without Patch #1.

====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'tools/lib')
0 files changed, 0 insertions, 0 deletions