linux-stable/tools/lib/api/io.h

/* SPDX-License-Identifier: GPL-2.0 */
/*
* Lightweight buffered reading library.
*
* Copyright 2019 Google LLC.
*/
#ifndef __API_IO__
#define __API_IO__
#include <errno.h>
#include <poll.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <linux/types.h>
struct io {
/* File descriptor being read. */
int fd;
/* Size of the read buffer. */
unsigned int buf_len;
/* Pointer to storage for buffered reads. */
char *buf;
/* End of the valid data in the buffer. */
char *end;
/* Currently accessed data pointer. */
char *data;
/* Read timeout, 0 implies no timeout. */
int timeout_ms;
/* Set true at end of file or on a read error. */
bool eof;
};
static inline void io__init(struct io *io, int fd,
char *buf, unsigned int buf_len)
{
io->fd = fd;
io->buf_len = buf_len;
io->buf = buf;
io->end = buf;
io->data = buf;
io->timeout_ms = 0;
io->eof = false;
}
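/*
 * Example usage (an illustrative sketch, not part of this API): open a
 * file and set up a struct io over it with a stack buffer. The path and
 * error handling are hypothetical; open(2) requires <fcntl.h>.
 *
 *	char buf[4096];
 *	struct io io;
 *	int fd = open("/proc/kallsyms", O_RDONLY);
 *
 *	if (fd < 0)
 *		return -1;
 *	io__init(&io, fd, buf, sizeof(buf));
 */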
/* Read from fd filling the buffer. Called when io->data == io->end. */
static inline int io__fill_buffer(struct io *io)
{
ssize_t n;
if (io->eof)
return -1;
if (io->timeout_ms != 0) {
struct pollfd pfds[] = {
{
.fd = io->fd,
.events = POLLIN,
},
};
n = poll(pfds, 1, io->timeout_ms);
if (n == 0)
errno = ETIMEDOUT;
if (n > 0 && !(pfds[0].revents & POLLIN)) {
errno = EIO;
n = -1;
}
if (n <= 0) {
io->eof = true;
return -1;
}
}
n = read(io->fd, io->buf, io->buf_len);
if (n <= 0) {
io->eof = true;
return -1;
}
io->data = &io->buf[0];
io->end = &io->buf[n];
return 0;
}
/* Reads one character from the "io" file with similar semantics to fgetc. */
static inline int io__get_char(struct io *io)
{
if (io->data == io->end) {
int ret = io__fill_buffer(io);
if (ret)
return ret;
}
return *io->data++;
}
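/*
 * Example usage (an illustrative sketch): count the lines in the file by
 * reading one character at a time; assumes "io" was prepared with
 * io__init() as above.
 *
 *	int ch, nr_lines = 0;
 *
 *	while ((ch = io__get_char(&io)) >= 0) {
 *		if (ch == '\n')
 *			nr_lines++;
 *	}
 */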
/* Read a hexadecimal value with no 0x prefix into the out argument hex. If
 * the first character isn't hexadecimal, returns -2; if io->eof is hit first,
 * returns -1. Otherwise returns the character after the hexadecimal value,
 * which may be -1 for EOF. If the read value is larger than a u64, the
 * high-order bits will be dropped.
 */
static inline int io__get_hex(struct io *io, __u64 *hex)
{
bool first_read = true;
*hex = 0;
while (true) {
int ch = io__get_char(io);
if (ch < 0)
return ch;
if (ch >= '0' && ch <= '9')
*hex = (*hex << 4) | (ch - '0');
else if (ch >= 'a' && ch <= 'f')
*hex = (*hex << 4) | (ch - 'a' + 10);
else if (ch >= 'A' && ch <= 'F')
*hex = (*hex << 4) | (ch - 'A' + 10);
else if (first_read)
return -2;
else
return ch;
first_read = false;
}
}
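/*
 * Example usage (an illustrative sketch): parse the leading address of a
 * kallsyms-style line such as "ffffffff81000000 T _text". On success
 * "addr" holds the parsed value and the return is the character that
 * ended it (here ' ').
 *
 *	__u64 addr;
 *	int ch = io__get_hex(&io, &addr);
 *
 *	if (ch == ' ')
 *		... addr parsed, the symbol type field follows ...
 */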
/* Read a positive decimal value into the out argument dec. If the first
 * character isn't a decimal digit, returns -2; if io->eof is hit first,
 * returns -1. Otherwise returns the character after the decimal value,
 * which may be -1 for EOF. If the read value is larger than a u64, the
 * high-order bits will be dropped.
 */
static inline int io__get_dec(struct io *io, __u64 *dec)
{
bool first_read = true;
*dec = 0;
while (true) {
int ch = io__get_char(io);
if (ch < 0)
return ch;
if (ch >= '0' && ch <= '9')
*dec = (*dec * 10) + ch - '0';
else if (first_read)
return -2;
else
return ch;
first_read = false;
}
}
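/*
 * Example usage (an illustrative sketch): read a decimal field, e.g. a
 * pid at the start of a line; a -2 return means the first character was
 * not a digit.
 *
 *	__u64 pid;
 *	int ch = io__get_dec(&io, &pid);
 *
 *	if (ch >= 0)
 *		... pid parsed, ch is the delimiter that ended it ...
 */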
/* Read up to and including the first delim. Frees any previous *line_out and
 * returns the line length (0 at EOF), or -ENOMEM on allocation failure.
 */
static inline ssize_t io__getdelim(struct io *io, char **line_out, size_t *line_len_out, int delim)
{
char buf[128];
int buf_pos = 0;
char *line = NULL, *temp;
size_t line_len = 0;
int ch = 0;
/* TODO: reuse previously allocated memory. */
free(*line_out);
while (ch != delim) {
ch = io__get_char(io);
if (ch < 0)
break;
if (buf_pos == sizeof(buf)) {
temp = realloc(line, line_len + sizeof(buf));
if (!temp)
goto err_out;
line = temp;
memcpy(&line[line_len], buf, sizeof(buf));
line_len += sizeof(buf);
buf_pos = 0;
}
buf[buf_pos++] = (char)ch;
}
temp = realloc(line, line_len + buf_pos + 1);
if (!temp)
goto err_out;
line = temp;
memcpy(&line[line_len], buf, buf_pos);
line[line_len + buf_pos] = '\0';
line_len += buf_pos;
*line_out = line;
*line_len_out = line_len;
return line_len;
err_out:
free(line);
*line_out = NULL;
*line_len_out = 0;
return -ENOMEM;
}
static inline ssize_t io__getline(struct io *io, char **line_out, size_t *line_len_out)
{
return io__getdelim(io, line_out, line_len_out, /*delim=*/'\n');
}
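/*
 * Example usage (an illustrative sketch): iterate over the lines of the
 * file, letting io__getline() manage the heap allocation of "line".
 *
 *	char *line = NULL;
 *	size_t line_len = 0;
 *
 *	while (io__getline(&io, &line, &line_len) > 0)
 *		... process "line", which includes the trailing newline ...
 *	free(line);
 */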
#endif /* __API_IO__ */