Davidlohr Bueso 40d14da383 fgraph: Convert ret_stack tasklist scanning to rcu
It seems that alloc_retstack_tasklist() can also take a lockless
approach for scanning the tasklist, instead of using the big global
tasklist_lock. For this we also kill another deprecated and RCU-unsafe
tsk->thread_group user, replacing it with for_each_process_thread()
while maintaining the same semantics.
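
A minimal sketch of the resulting scan, assuming the usual RCU read-side
pattern (the loop body of the real alloc_retstack_tasklist() is elided
here and may differ in detail):

	struct task_struct *g, *t;

	rcu_read_lock();
	for_each_process_thread(g, t) {
		/* hand a pre-allocated ret_stack to each thread lacking one */
	}
	rcu_read_unlock();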

Here tasklist_lock does not protect anything other than the list
against concurrent fork/exit. And considering that the whole thing
is capped by FTRACE_RETSTACK_ALLOC_SIZE (32), it should not be a
problem to have a potentially stale, yet stable, list. The task cannot
go away either, so we don't risk racing with ftrace_graph_exit_task(),
which clears the retstack.

The tsk->ret_stack management is not protected by tasklist_lock; it is
serialized with the corresponding publish/subscribe barriers against
concurrent ftrace_push_return_trace(). In addition, this plays nicer
with cachelines by avoiding two atomic ops in the uncontended case.
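
An illustrative sketch of that publish/subscribe pairing, assuming
smp_wmb()/smp_rmb()-style barriers (the init_ret_stack() helper is
hypothetical; the actual fgraph code may use different primitives):

	/* publisher: allocation path */
	init_ret_stack(new_stack);	/* fully initialize the stack first */
	smp_wmb();			/* order the init before the pointer store */
	t->ret_stack = new_stack;

	/* subscriber: ftrace_push_return_trace()-like path */
	if (!current->ret_stack)
		return -EBUSY;
	smp_rmb();			/* pairs with the publisher's smp_wmb() */
	/* ... safe to index into current->ret_stack ... */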

Link: https://lkml.kernel.org/r/20200907013326.9870-1-dave@stgolabs.net

Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-09-21 21:06:02 -04:00