path: root/fs/nfs/fscache-index.c
author    Trond Myklebust <trond.myklebust@hammerspace.com> 2019-08-03 17:11:27 +0300
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org> 2019-09-06 11:18:07 +0300
commit    d72eb7187dcb1c9ead351b39613a4c50347d304a (patch)
tree      e5d62bf3238c068711a8c41749320c77aef918f5 /fs/nfs/fscache-index.c
parent    1f46dbe266aa22ab434ec59d7662489f7bd5593d (diff)
download  linux-d72eb7187dcb1c9ead351b39613a4c50347d304a.tar.xz
NFSv4: Fix a potential sleep while atomic in nfs4_do_reclaim()
[ Upstream commit c77e22834ae9a11891cb613bd9a551be1b94f2bc ]

John Hubbard reports seeing the following stack trace:

  nfs4_do_reclaim
    rcu_read_lock /* we are now in_atomic() and must not sleep */
      nfs4_purge_state_owners
        nfs4_free_state_owner
          nfs4_destroy_seqid_counter
            rpc_destroy_wait_queue
              cancel_delayed_work_sync
                __cancel_work_timer
                  __flush_work
                    start_flush_work
                      might_sleep:
                        (kernel/workqueue.c:2975: BUG)

The solution is to separate out the freeing of the state owners from
nfs4_purge_state_owners(), and perform that outside the atomic context.

Reported-by: John Hubbard <jhubbard@nvidia.com>
Fixes: 0aaaf5c424c7f ("NFS: Cache state owners after files are closed")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'fs/nfs/fscache-index.c')
0 files changed, 0 insertions, 0 deletions