tracing: Increase size of trace_marker_raw to max ring buffer entry

There's no reason to impose an arbitrary limit on the size of a raw trace
marker. Just let it be as big as whatever the ring buffer itself allows.

There's also no reason to artificially truncate the write to
TRACE_BUF_SIZE, as that size isn't even used for anything here.

Link: https://lore.kernel.org/linux-trace-kernel/20231213104218.2efc70c1@gandalf.local.home

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>

@@ -7365,9 +7365,6 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
 	return written;
 }
 
-/* Limit it for now to 3K (including tag) */
-#define RAW_DATA_MAX_SIZE (1024*3)
-
 static ssize_t
 tracing_mark_raw_write(struct file *filp, const char __user *ubuf,
 		       size_t cnt, loff_t *fpos)
@@ -7389,19 +7386,18 @@ tracing_mark_raw_write(struct file *filp, const char __user *ubuf,
 		return -EINVAL;
 
 	/* The marker must at least have a tag id */
-	if (cnt < sizeof(unsigned int) || cnt > RAW_DATA_MAX_SIZE)
+	if (cnt < sizeof(unsigned int))
 		return -EINVAL;
 
-	if (cnt > TRACE_BUF_SIZE)
-		cnt = TRACE_BUF_SIZE;
-
-	BUILD_BUG_ON(TRACE_BUF_SIZE >= PAGE_SIZE);
-
 	size = sizeof(*entry) + cnt;
 	if (cnt < FAULT_SIZE_ID)
 		size += FAULT_SIZE_ID - cnt;
 
 	buffer = tr->array_buffer.buffer;
+
+	if (size > ring_buffer_max_event_size(buffer))
+		return -EINVAL;
+
 	event = __trace_buffer_lock_reserve(buffer, TRACE_RAW_DATA, size,
 					    tracing_gen_ctx());
 	if (!event)
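
For context, a minimal user-space sketch (not part of the patch) of what a
trace_marker_raw write looks like after this change. The tracefs path is the
usual mount point and the tag id/payload values are made up for the example.
The buffer must start with an unsigned int tag id (anything shorter gets
-EINVAL), and the upper bound is now whatever the ring buffer's maximum event
size allows rather than the old 3K RAW_DATA_MAX_SIZE cap.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical tag id and payload, chosen only for the example. */
	unsigned int tag = 0x1234;
	const char payload[] = "raw marker payload";
	char buf[sizeof(tag) + sizeof(payload)];
	int fd;

	/* Assumes tracefs is mounted at the usual location. */
	fd = open("/sys/kernel/tracing/trace_marker_raw", O_WRONLY);
	if (fd < 0) {
		perror("open trace_marker_raw");
		return 1;
	}

	/* The kernel requires at least sizeof(unsigned int) bytes: the tag id. */
	memcpy(buf, &tag, sizeof(tag));
	memcpy(buf + sizeof(tag), payload, sizeof(payload));

	if (write(fd, buf, sizeof(buf)) < 0)
		perror("write trace_marker_raw");

	close(fd);
	return 0;
}

Previously a write larger than 3K was rejected outright; with this patch a
write is rejected only if sizeof(*entry) plus the payload exceeds
ring_buffer_max_event_size() for the trace buffer.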