
Commit 6d053d1

Peter Zijlstra authored and kamalmostafa committed
perf: Fix perf_lock_task_context() vs RCU
commit 058ebd0 upstream.

Jiri managed to trigger this warning:

 [] ======================================================
 [] [ INFO: possible circular locking dependency detected ]
 [] 3.10.0+ #228 Tainted: G W
 [] -------------------------------------------------------
 [] p/6613 is trying to acquire lock:
 []  (rcu_node_0){..-...}, at: [<ffffffff810ca797>] rcu_read_unlock_special+0xa7/0x250
 []
 [] but task is already holding lock:
 []  (&ctx->lock){-.-...}, at: [<ffffffff810f2879>] perf_lock_task_context+0xd9/0x2c0
 []
 [] which lock already depends on the new lock.
 []
 [] the existing dependency chain (in reverse order) is:
 []
 [] -> #4 (&ctx->lock){-.-...}:
 [] -> #3 (&rq->lock){-.-.-.}:
 [] -> #2 (&p->pi_lock){-.-.-.}:
 [] -> #1 (&rnp->nocb_gp_wq[1]){......}:
 [] -> #0 (rcu_node_0){..-...}:

Paul was quick to explain that due to preemptible RCU we cannot call
rcu_read_unlock() while holding scheduler (or nested) locks when part
of the read side critical section was preemptible.

Therefore solve it by making the entire RCU read side non-preemptible.

Also pull out the retry from under the non-preempt to play nice with RT.

Reported-by: Jiri Olsa <jolsa@redhat.com>
Helped-out-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
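To make the rule concrete, here is a minimal sketch, not part of the patch itself, of the pattern the fix enforces. The function name lookup_ctx_sketch and the omission of the retry path are illustrative simplifications; preempt_disable(), rcu_read_lock(), rcu_dereference() and raw_spin_lock_irqsave() are the real kernel APIs used by the change:

/*
 * Minimal sketch (hypothetical helper, retry path omitted): because
 * preemption is disabled across the whole RCU read side, the section
 * can never be preempted, so rcu_read_unlock() has no special-case
 * work to do and cannot try to take rcu_node locks while we hold
 * ctx->lock, which nests under rq->lock.
 */
static struct perf_event_context *
lookup_ctx_sketch(struct task_struct *task, int ctxn, unsigned long *flags)
{
	struct perf_event_context *ctx;

	preempt_disable();		/* whole read side non-preemptible */
	rcu_read_lock();
	ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
	if (ctx)
		raw_spin_lock_irqsave(&ctx->lock, *flags);
	rcu_read_unlock();		/* safe even with ctx->lock held */
	preempt_enable();
	return ctx;			/* if non-NULL, ctx->lock is held */
}

Note that preempt_enable() here does not actually re-enable preemption while the lock is held: raw_spin_lock_irqsave() keeps preemption disabled on its own, which is why the patched function can pair the enable with the unlock done later by the caller.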
1 parent: bbac730

1 file changed: kernel/events/core.c (14 additions, 1 deletion)
@@ -729,8 +729,18 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 {
 	struct perf_event_context *ctx;
 
-	rcu_read_lock();
 retry:
+	/*
+	 * One of the few rules of preemptible RCU is that one cannot do
+	 * rcu_read_unlock() while holding a scheduler (or nested) lock when
+	 * part of the read side critical section was preemptible -- see
+	 * rcu_read_unlock_special().
+	 *
+	 * Since ctx->lock nests under rq->lock we must ensure the entire read
+	 * side critical section is non-preemptible.
+	 */
+	preempt_disable();
+	rcu_read_lock();
 	ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
 	if (ctx) {
 		/*
@@ -746,6 +756,8 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 		raw_spin_lock_irqsave(&ctx->lock, *flags);
 		if (ctx != rcu_dereference(task->perf_event_ctxp[ctxn])) {
 			raw_spin_unlock_irqrestore(&ctx->lock, *flags);
+			rcu_read_unlock();
+			preempt_enable();
 			goto retry;
 		}
 
@@ -755,6 +767,7 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 		}
 	}
 	rcu_read_unlock();
+	preempt_enable();
 	return ctx;
 }
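For completeness, a hedged caller-side sketch: perf_lock_task_context() returns with ctx->lock held and the interrupt state saved in *flags, so the caller must drop the lock itself. This is modeled on perf_pin_task_context() in the same file around this kernel version, which is an assumption about the surrounding tree rather than something shown in this diff:

/*
 * Caller-side sketch, modeled on perf_pin_task_context() (assumed
 * shape; not part of this diff). The helper returns with ctx->lock
 * held and IRQs saved, so the caller unlocks once it has pinned the
 * context.
 */
static struct perf_event_context *
pin_task_context_sketch(struct task_struct *task, int ctxn)
{
	struct perf_event_context *ctx;
	unsigned long flags;

	ctx = perf_lock_task_context(task, ctxn, &flags);
	if (ctx) {
		++ctx->pin_count;	/* keep the context from going away */
		raw_spin_unlock_irqrestore(&ctx->lock, flags);
	}
	return ctx;
}

Moving the retry label above preempt_disable() also means each failed iteration re-enables preemption before looping, which is the "play nice with RT" part of the commit message: the retry loop never spins with preemption disabled.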