perf tools: Check fallback error and order

perf_event_open() might fail for various reasons, so blindly reducing
the precise_ip level might not be the best way to deal with it.

It seems the kernel returns -EOPNOTSUPP when the PMU doesn't support
the given precise level.  Let's try again only on that error code.

This caused a problem on AMD, as it stops at a precise_ip of 2 for
IBS, but user events with exclude_kernel=1 cannot make progress.
Let's add evsel__handle_error_quirks() to handle this case specially.
I plan to work on the kernel side to improve the situation, but it
would still need some special handling for IBS.

Reviewed-by: James Clark <james.clark@linaro.org>
Reviewed-by: Ravi Bangoria <ravi.bangoria@amd.com>
Acked-by: Kan Liang <kan.liang@linux.intel.com>
Cc: James Clark <james.clark@arm.com>
Cc: Atish Patra <atishp@atishpatra.org>
Cc: Mingwei Zhang <mizhang@google.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Cc: Palmer Dabbelt <palmer@rivosinc.com>
Link: https://lore.kernel.org/r/20241016062359.264929-8-namhyung@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
commit af954f76ee
parent 28398ce172
Namhyung Kim 2024-10-15 23:23:57 -07:00

@@ -2143,9 +2143,9 @@ static void evsel__detect_missing_pmu_features(struct evsel *evsel)
 
 	/*
 	 * Must probe features in the order they were added to the
-	 * perf_event_attr interface. These are PMU specific limitation
-	 * so we can detect with the given hardware event and stop on the
-	 * first one succeeded.
+	 * perf_event_attr interface. These are kernel core limitation but
+	 * specific to PMUs with branch stack. So we can detect with the given
+	 * hardware event and stop on the first one succeeded.
 	 */
 
 	/* Please add new feature detection here. */
@@ -2409,6 +2409,25 @@ static bool evsel__detect_missing_features(struct evsel *evsel)
 	return false;
 }
 
+static bool evsel__handle_error_quirks(struct evsel *evsel, int error)
+{
+	/*
+	 * AMD core PMU tries to forward events with precise_ip to IBS PMU
+	 * implicitly. But IBS PMU has more restrictions so it can fail with
+	 * supported event attributes. Let's forward it back to the core PMU
+	 * by clearing precise_ip only if it's from precise_max (:P).
+	 */
+	if ((error == -EINVAL || error == -ENOENT) && x86__is_amd_cpu() &&
+	    evsel->core.attr.precise_ip && evsel->precise_max) {
+		evsel->core.attr.precise_ip = 0;
+		pr_debug2_peo("removing precise_ip on AMD\n");
+		display_attr(&evsel->core.attr);
+		return true;
+	}
+
+	return false;
+}
+
 static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
 			   struct perf_thread_map *threads,
 			   int start_cpu_map_idx, int end_cpu_map_idx)
@@ -2527,9 +2546,6 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
 	return 0;
 
 try_fallback:
-	if (evsel__precise_ip_fallback(evsel))
-		goto retry_open;
-
 	if (evsel__ignore_missing_thread(evsel, perf_cpu_map__nr(cpus),
 					 idx, threads, thread, err)) {
 		/* We just removed 1 thread, so lower the upper nthreads limit. */
@@ -2546,11 +2562,15 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
 	if (err == -EMFILE && rlimit__increase_nofile(&set_rlimit))
 		goto retry_open;
 
-	if (err != -EINVAL || idx > 0 || thread > 0)
-		goto out_close;
+	if (err == -EOPNOTSUPP && evsel__precise_ip_fallback(evsel))
+		goto retry_open;
 
-	if (evsel__detect_missing_features(evsel))
+	if (err == -EINVAL && evsel__detect_missing_features(evsel))
 		goto fallback_missing_features;
+
+	if (evsel__handle_error_quirks(evsel, err))
+		goto retry_open;
+
 out_close:
 	if (err)
 		threads->err_thread = thread;