mirror of
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
synced 2025-01-09 06:33:34 +00:00
perf/core: fix several typos
Patch series "treewide: Refactor heap related implementation", v6.

This patch series focuses on several adjustments related to heap implementation. Firstly, a type-safe interface has been added to the min_heap, along with the introduction of several new functions to enhance its functionality. Additionally, the heap implementation for bcache and bcachefs has been replaced with the generic min_heap implementation from include/linux. Furthermore, several typos have been corrected.

Previous discussion with Kent Overstreet:
https://lkml.kernel.org/ioyfizrzq7w7mjrqcadtzsfgpuntowtjdw5pgn4qhvsdp4mqqg@nrlek5vmisbu

This patch (of 16):

Replace 'artifically' with 'artificially'.
Replace 'irrespecive' with 'irrespective'.
Replace 'futher' with 'further'.
Replace 'sufficent' with 'sufficient'.

Link: https://lkml.kernel.org/r/20240524152958.919343-1-visitorckw@gmail.com
Link: https://lkml.kernel.org/r/20240524152958.919343-2-visitorckw@gmail.com
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Cc: Brian Foster <bfoster@redhat.com>
Cc: Ching-Chun (Jim) Huang <jserv@ccns.ncku.edu.tw>
Cc: Coly Li <colyli@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Sakai <msakai@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This commit is contained in:
  parent cf28d7716e
  commit ddd36b7ee1
@@ -534,7 +534,7 @@ void perf_sample_event_took(u64 sample_len_ns)
 	__this_cpu_write(running_sample_length, running_len);
 
 	/*
-	 * Note: this will be biased artifically low until we have
+	 * Note: this will be biased artificially low until we have
 	 * seen NR_ACCUMULATED_SAMPLES. Doing it this way keeps us
 	 * from having to maintain a count.
 	 */
@@ -596,10 +596,10 @@ static inline u64 perf_event_clock(struct perf_event *event)
  *
  * Event groups make things a little more complicated, but not terribly so. The
  * rules for a group are that if the group leader is OFF the entire group is
- * OFF, irrespecive of what the group member states are. This results in
+ * OFF, irrespective of what the group member states are. This results in
  * __perf_effective_state().
  *
- * A futher ramification is that when a group leader flips between OFF and
+ * A further ramification is that when a group leader flips between OFF and
  * !OFF, we need to update all group member times.
  *
  *
@@ -891,7 +891,7 @@ static int perf_cgroup_ensure_storage(struct perf_event *event,
 	int cpu, heap_size, ret = 0;
 
 	/*
-	 * Allow storage to have sufficent space for an iterator for each
+	 * Allow storage to have sufficient space for an iterator for each
 	 * possibly nested cgroup plus an iterator for events with no cgroup.
 	 */
 	for (heap_size = 1; css; css = css->parent)