[DLM] do full recover_locks barrier
Red Hat BZ 211914

The previous patch, "[DLM] fix aborted recovery during node removal",
was incomplete, as further testing showed. It set the bit for the
RS_LOCKS barrier but did not then wait for the barrier. This is often
ok, but sometimes it causes yet another recovery hang. If the node that
skips the barrier wait is a new node that also has the lowest nodeid,
it misses the important step of collecting and reporting the barrier
status from the other nodes (which is the job of the low nodeid in the
barrier wait routine).

Signed-off-by: David Teigland <teigland@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
parent 2cdc98aaf0
commit 4b77f2c93d
@@ -168,9 +168,15 @@ static int ls_recover(struct dlm_ls *ls, struct dlm_recover *rv)
 		/*
 		 * Other lockspace members may be going through the "neg" steps
 		 * while also adding us to the lockspace, in which case they'll
-		 * be looking for this status bit during dlm_recover_locks().
+		 * be doing the recover_locks (RS_LOCKS) barrier.
 		 */
 		dlm_set_recover_status(ls, DLM_RS_LOCKS);
+
+		error = dlm_recover_locks_wait(ls);
+		if (error) {
+			log_error(ls, "recover_locks_wait failed %d", error);
+			goto fail;
+		}
 	}
 
 	dlm_release_root_list(ls);
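For context, here is a minimal single-process C sketch of the barrier
pattern that the new dlm_recover_locks_wait() call enforces, as
described in the commit message: every member sets its RS_LOCKS bit,
the member with the lowest nodeid collects that bit from all members
and publishes an "all done" report, and the other members wait for that
report. The node count, bit values, and helper names below are invented
for illustration and are not the kernel's; only the division of labor
comes from the commit message.

/* barrier_sketch.c: three simulated "nodes" share a status table,
 * each sets its RS_LOCKS bit, then each runs the barrier wait.
 * Node 0 plays the low nodeid. */
#include <stdint.h>
#include <stdio.h>

#define NODES             3
#define DLM_RS_LOCKS      0x1u   /* invented bit values for the sketch */
#define DLM_RS_LOCKS_ALL  0x2u

static uint32_t status[NODES];   /* stands in for per-node recover status */

/* Low nodeid: gather RS_LOCKS from every member, then publish the
 * "all done" bit.  Everyone else: wait for that published bit.  A low
 * nodeid that sets RS_LOCKS but skips this routine never publishes,
 * and the rest of the lockspace waits forever. */
static int recover_locks_wait(int node)
{
	int i;

	if (node == 0) {
		for (i = 0; i < NODES; i++)
			if (!(status[i] & DLM_RS_LOCKS))
				return -1;   /* real code would block here */
		status[0] |= DLM_RS_LOCKS_ALL;
		return 0;
	}
	return (status[0] & DLM_RS_LOCKS_ALL) ? 0 : -1;
}

int main(void)
{
	int i;

	for (i = 0; i < NODES; i++)
		status[i] |= DLM_RS_LOCKS;   /* dlm_set_recover_status() step */

	for (i = 0; i < NODES; i++)
		printf("node %d: %s\n", i,
		       recover_locks_wait(i) ? "blocked" : "passed barrier");
	return 0;
}

Run as written, all three simulated nodes pass the barrier. If node 0
(the low nodeid) sets its bit but never runs recover_locks_wait(), the
report is never published and nodes 1 and 2 stay blocked, which is
exactly the hang the patch fixes.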