# New commits in perf/core:
02c56362a7 ("uprobes: Guard against kmemdup() failing in dup_return_instance()")
d29e744c71 ("perf/x86: Relax privilege filter restriction on AMD IBS")
6057b90ecc ("perf/core: Export perf_exclude_event()")
8622e45b5d ("uprobes: Reuse return_instances between multiple uretprobes within task")
0cf981de76 ("uprobes: Ensure return_instance is detached from the list before freeing")
636666a1c7 ("uprobes: Decouple return_instance list traversal and freeing")
2ff913ab3f ("uprobes: Simplify session consumer tracking")
e0925f2dc4 ("uprobes: add speculative lockless VMA-to-inode-to-uprobe resolution")
83e3dc9a5d ("uprobes: simplify find_active_uprobe_rcu() VMA checks")
03a001b156 ("mm: introduce mmap_lock_speculate_{try_begin|retry}")
eb449bd969 ("mm: convert mm_lock_seq to a proper seqcount")
7528585290 ("mm/gup: Use raw_seqcount_try_begin()")
96450ead16 ("seqlock: add raw_seqcount_try_begin")
b4943b8bfc ("perf/x86/rapl: Add core energy counter support for AMD CPUs")
54d2759778 ("perf/x86/rapl: Move the cntr_mask to rapl_pmus struct")
bdc57ec705 ("perf/x86/rapl: Remove the global variable rapl_msrs")
abf03d9bd2 ("perf/x86/rapl: Modify the generic variable names to *_pkg*")
eeca4c6b25 ("perf/x86/rapl: Add arguments to the init and cleanup functions")
cd29d83a6d ("perf/x86/rapl: Make rapl_model struct global")
8bf1c86e5a ("perf/x86/rapl: Rename rapl_pmu variables")
1d5e2f637a ("perf/x86/rapl: Remove the cpu_to_rapl_pmu() function")
e4b4443477 ("x86/topology: Introduce topology_logical_core_id()")
2f2db34707 ("perf/x86/rapl: Remove the unused get_rapl_pmu_cpumask() function")
ae55e308bd ("perf/x86/intel/ds: Simplify the PEBS records processing for adaptive PEBS")
3c00ed344c ("perf/x86/intel/ds: Factor out functions for PEBS records processing")
7087bfb0ad ("perf/x86/intel/ds: Clarify adaptive PEBS processing")
faac6f105e ("perf/core: Check sample_type in perf_sample_save_brstack")
f226805bc5 ("perf/core: Check sample_type in perf_sample_save_callchain")
b9c44b9147 ("perf/core: Save raw sample data conditionally based on sample type")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit b35108a51c ("jiffies: Define secs_to_jiffies()") introduced
secs_to_jiffies(). As the values here are multiples of 1000, use
secs_to_jiffies() instead of msecs_to_jiffies() to avoid the multiplication.
This is converted using scripts/coccinelle/misc/secs_to_jiffies.cocci with
the following Coccinelle rules:
@@ constant C; @@
- msecs_to_jiffies(C * 1000)
+ secs_to_jiffies(C)
@@ constant C; @@
- msecs_to_jiffies(C * MSEC_PER_SEC)
+ secs_to_jiffies(C)
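As a hypothetical illustration of the resulting change (the workqueue and
the five second delay below are made up, not taken from any converted
call site):

  /* before the conversion */
  queue_delayed_work(wq, &dwork, msecs_to_jiffies(5 * MSEC_PER_SEC));

  /* after: the multiplication is gone */
  queue_delayed_work(wq, &dwork, secs_to_jiffies(5));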
Link: https://lkml.kernel.org/r/20241210-converge-secs-to-jiffies-v3-4-ddfefd7e9f2a@linux.microsoft.com
Signed-off-by: Easwar Hariharan <eahariha@linux.microsoft.com>
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andrew Lunn <andrew+netdev@lunn.ch>
Cc: Anna-Maria Behnsen <anna-maria@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Daniel Mack <daniel@zonque.org>
Cc: David Airlie <airlied@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Dick Kennedy <dick.kennedy@broadcom.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Florian Fainelli <florian.fainelli@broadcom.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Haojian Zhuang <haojian.zhuang@gmail.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Jack Wang <jinpu.wang@cloud.ionos.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: James Smart <james.smart@broadcom.com>
Cc: Jaroslav Kysela <perex@perex.cz>
Cc: Jeff Johnson <jjohnson@kernel.org>
Cc: Jeff Johnson <quic_jjohnson@quicinc.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Jeroen de Borst <jeroendb@google.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Joe Lawrence <joe.lawrence@redhat.com>
Cc: Johan Hedberg <johan.hedberg@gmail.com>
Cc: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: Jozsef Kadlecsik <kadlec@netfilter.org>
Cc: Julia Lawall <julia.lawall@inria.fr>
Cc: Kalle Valo <kvalo@kernel.org>
Cc: Louis Peens <louis.peens@corigine.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Luiz Augusto von Dentz <luiz.dentz@gmail.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Marcel Holtmann <marcel@holtmann.org>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Nicolas Palix <nicolas.palix@imag.fr>
Cc: Oded Gabbay <ogabbay@kernel.org>
Cc: Ofir Bitton <obitton@habana.ai>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Praveen Kaligineedi <pkaligineedi@google.com>
Cc: Ray Jui <rjui@broadcom.com>
Cc: Robert Jarzmik <robert.jarzmik@free.fr>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Scott Branden <sbranden@broadcom.com>
Cc: Shailend Chand <shailend@google.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Simon Horman <horms@kernel.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Takashi Iwai <tiwai@suse.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Xiubo Li <xiubli@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Let's add support for including virtio-mem device RAM in the crash dump,
setting NEED_PROC_VMCORE_DEVICE_RAM, and implementing
elfcorehdr_fill_device_ram_ptload_elf64().
To avoid code duplication, factor out the code to fill a PT_LOAD entry.
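As a rough sketch of what filling one such PT_LOAD entry involves (the
helper name and the choice of p_vaddr/p_offset are illustrative
assumptions, not the actual s390 code):

  #include <linux/elf.h>

  /* hypothetical helper: describe one device RAM range in the vmcore */
  static void fill_ptload_sketch(Elf64_Phdr *phdr, unsigned long paddr,
                                 unsigned long size)
  {
          phdr->p_type   = PT_LOAD;
          phdr->p_flags  = PF_R | PF_W | PF_X;
          phdr->p_paddr  = paddr;
          phdr->p_vaddr  = paddr; /* real code may use a kernel VA instead */
          phdr->p_offset = paddr; /* file offset layout is up to the caller */
          phdr->p_filesz = size;
          phdr->p_memsz  = size;
          phdr->p_align  = 0;
  }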
Link: https://lkml.kernel.org/r/20241204125444.1734652-13-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Eric Farman <farman@linux.ibm.com>
Cc: Eugenio Pérez <eperezma@redhat.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
* features:
s390/diag: Move diag.c to diag specific folder
s390/diag324: Retrieve power readings via diag 0x324
s390/diag: Create misc device /dev/diag
s390/lib: Use exrl instead of ex in string functions
s390/mm: Simplify noexec page protection handling
s390/mm: Remove unused PAGE_KERNEL_EXEC and friends
s390/mm: Remove incorrect comment
s390/pci: Add pci_msg debug view to PCI report
s390/debug: Add a reverse mode for debug_dump()
s390/debug: Add debug_dump() to write debug view to a string buffer
s390/debug: Split private data alloc/free out of file operations
s390/debug: Simplify and document debug_next_entry() logic
s390/pci: Report PCI error recovery results via SCLP
s390/mm/hugetlbfs: Remove huge_pte_none() / huge_pte_none_mostly()
s390: Add KERNEL_IMAGE_BASE to kasan.config
s390/abs_lowcore: Include linux/smp.h for get_cpu() and put_cpu()
s390: Remove __bootdata annotations from declarations
s390/preempt: Optimize __preempt_count_dec_and_test()
s390/atomic: Provide arch_atomic_*_and_test() implementations
s390: Remove superfluous new lines from inline assemblies
s390/preempt: Adjust coding style
s390/preempt: Remove special pre MARCH_HAS_Z196_FEATURES implementation
s390/preempt: Add comments
s390/atomic: Consistent layering between atomic.h and atomic_ops.h
s390/atomic: Implement arch_atomic_inc() / arch_atomic_dec()
s390/setup: Cleanup stack_alloc() and stack_free()
s390/Kconfig: Select VMAP_STACK unconditionally
s390/Kconfig: Select KASAN_VMALLOC if KASAN is enabled
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Niklas Schnelle says:
===================
This patch series enhances the introspectability, for firmware, of the
PCI device recovery that Linux performs in response to a firmware error
report. Until now, firmware debug data had no indication whether such a
recovery was successful or whether it failed, for example due to KVM
pass-through.
Improve on this by reporting recovery status as well as some debug
information such as device driver name and s390dbf/pci_msg/sprintf logs
via the SCLP Write Event Data Action Qualifier 2 (Log Data provided)
mechanism.
===================
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Retrieve electrical power readings for resources in a computing
environment via diag 0x324. diag 0x324 stores the power readings in the
power information block (pib).
Provide the power readings from the pib via the diag324 ioctl interface.
The ioctl returns a freshly retrieved pib to the user only if the
threshold time has passed since the last call; otherwise, the cached data
is returned. When there are no active readers, the pib buffer is freed.
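A minimal sketch of this caching policy (the names, the threshold
constant and the omitted locking and allocation are assumptions, not the
actual implementation):

  #include <linux/jiffies.h>
  #include <linux/types.h>

  struct pib { u8 data[64]; };    /* placeholder for the real pib layout */

  #define DIAG324_THRESHOLD (5 * HZ)        /* hypothetical threshold */
  void diag324_store_pib(struct pib *pib);  /* hypothetical: issues diag 0x324 */

  static struct pib cached_pib;
  static unsigned long last_update;

  /* return cached readings unless the threshold time has passed */
  static void diag324_get_pib_sketch(struct pib *out)
  {
          if (!last_update ||
              time_after(jiffies, last_update + DIAG324_THRESHOLD)) {
                  diag324_store_pib(&cached_pib);
                  last_update = jiffies;
          }
          *out = cached_pib;
  }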
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Create a misc device /dev/diag to fetch diagnose-specific information
from the kernel and provide it to userspace.
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
exrl is present on all machines currently supported by the Linux kernel;
therefore prefer it over ex. This saves one instruction and doesn't need
an additional register to hold the address of the target instruction.
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
By default page protection definitions like PAGE_RX have the _PAGE_NOEXEC
bit set. For older machines without the instruction execution protection
facility this bit is not allowed to be used in page table entries, and
therefore must be removed.
This is currently done in a couple of page table walkers, and in some,
but not all, page table modification functions such as
ptep_modify_prot_commit(). Avoid all of this and change the page, segment
and region3 protection definitions so that the noexec bit is masked out
automatically if the instruction execution-protection facility is not
available. This is similar to how various other architectures solved the
same problem.
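Conceptually the change amounts to something like the following sketch
(the facility check and helper name are illustrative, not the actual
patch):

  bool machine_has_iep_sketch(void);    /* hypothetical facility check */

  /* strip the noexec bit once, when the protection value is formed,
   * instead of in every page table walker and modification helper
   */
  static inline unsigned long sanitize_prot_sketch(unsigned long prot)
  {
          if (!machine_has_iep_sketch())
                  prot &= ~_PAGE_NOEXEC;
          return prot;
  }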
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Remove an outdated comment that is also located at a random place. The
generic statement that read permissions imply execute permissions is
wrong since the instruction execution-protection facility is available.
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Using the newly introduced debug_dump() mechanism, add the formatted
content of pci_debug_msg_id to the PCI report. The formatting is based on
the existing sprintf format, but removes the caller pointer and area index
and adds a column header. This allows the platform to collect this log
data together with hardware errors. The reverse flag is set so that the
newest log entries get added to the PCI report even if not all debug log
entries fit.
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Co-developed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
In this mode debug_dump() writes the debug log starting at the newest
entry, followed by earlier entries. To this end add a debug_prev_entry()
helper analogous to debug_next_entry(), a helper to get the latest entry
(which is the one before the active entry), and a helper to iterate either
forward or backward.
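A sketch of how the direction dispatch could look (the signatures are
assumptions derived from the description above):

  /* step to the next entry to dump: forward in normal mode,
   * backward in the new reverse mode
   */
  static int debug_move_entry_sketch(file_private_info_t *p_info, bool reverse)
  {
          return reverse ? debug_prev_entry(p_info) : debug_next_entry(p_info);
  }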
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Co-developed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
The debug_dump() function makes it possible to get the content of a
debug log and view pair into a string buffer. One future application of
this is to provide debug logs to the platform so they can be collected
together with hardware error logs during recovery.
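Usage might look roughly like the following sketch (the prototype is an
assumption derived from this description, not the exact interface):

  char buf[PAGE_SIZE];
  int len;

  /* render the pci_msg debug log through the sprintf view into buf,
   * newest entries first
   */
  len = debug_dump(pci_debug_msg_id, &debug_sprintf_view,
                   buf, sizeof(buf), true);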
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Co-developed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Split the allocation and freeing of file_private_info_t out of open()
and close() respectively. This will be used in a follow-on change to
access debug views without going through the s390dbf filesystem.
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Contrary to convention, debug_next_entry() returns a falsy 0 value if
there are more entries and a truthy 1 value when there are no more
entries. As there is only one caller, just reverse this logic to be less
surprising and document the behavior in a kernel-doc comment. Also replace
the goto with an early return. In the future this allows using it in
a do {} while (debug_next_entry(...)) loop.
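With the inverted convention such a loop reads naturally, e.g. (sketch;
the per-entry work is a placeholder):

  do {
          format_current_entry(p_info);   /* hypothetical per-entry work */
  } while (debug_next_entry(p_info));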
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Add a mechanism with which the status of PCI error recovery runs is
reported to the platform. Together with the status, supply additional
information that may aid in problem determination.
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
The calculation determining whether to use three- or four-level paging
didn't account for the KMSAN modules metadata. Include this metadata in
the virtual memory size calculation to ensure correct paging mode
selection and to avoid potentially unnecessary physical memory size
limitations.
Fixes: 65ca73f9fb ("s390/mm: define KMSAN metadata for vmalloc and modules")
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Although Kconfig specifies:

  config KERNEL_IMAGE_BASE
          hex "Kernel image base address"
          range 0x100000 0x1FFFFFE0000000 if !KASAN
          range 0x100000 0x1BFFFFE0000000 if KASAN
          default 0x3FFE0000000 if !KASAN
          default 0x7FFFE0000000 if KASAN

running make defconfig or make debug_defconfig followed by
make kasan.config results in a suboptimal
CONFIG_KERNEL_IMAGE_BASE=0x3FFE0000000. Add
CONFIG_KERNEL_IMAGE_BASE=0x7FFFE0000000 to kasan.config to address that.
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Add missing include of <linux/smp.h> in abs_lowcore.h to provide
declarations for get_cpu() and put_cpu() used in the code.
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
For consistency, remove the `__bootdata` and `__bootdata_preserved`
section annotations from variable declarations in header files. Section
annotations should be applied to definitions, not declarations. This
change moves the annotations to the variable definitions in the
corresponding source files.
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Use __atomic_add_const_and_test() within __preempt_count_dec_and_test().
With this it is possible to decrement preempt_count and test whether
need_resched is set with a single instruction, provided the compiler
supports flag output operand constraints.
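The resulting function has roughly the following shape (sketch; the
location of the preempt count and the exact helper signature are
assumptions here):

  static __always_inline bool __preempt_count_dec_and_test(void)
  {
          /* one interlocked "add -1 and test" operation; since the
           * need_resched state is folded (inverted) into the count,
           * a zero result means the count hit zero and a reschedule
           * is due
           */
          return __atomic_add_const_and_test(-1, &get_lowcore()->preempt_count);
  }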
Reviewed-by: Juergen Christ <jchrist@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Provide arch_atomic_*_and_test() implementations which make use of flag
output constraints, and allow the compiler to generate slightly better
code.
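Semantically the *_and_test() helpers are equivalent to the following
generic sketch; the s390 variants simply let the condition code of the
atomic instruction feed the test instead of comparing the returned value:

  /* generic semantics, for illustration only */
  static inline bool arch_atomic_sub_and_test_generic(int i, atomic_t *v)
  {
          return arch_atomic_sub_return(i, v) == 0;
  }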
Reviewed-by: Juergen Christ <jchrist@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
GCC uses the number of lines of an inline assembly to estimate its length
(number of instructions). This has an impact on GCC's inlining decisions.
Therefore remove superfluous new lines from a couple of inline assemblies,
so that their real size is reflected.
Also use an "asm inline" statement for the fpu_lfpc_safe() inline assembly
to enforce that GCC assumes the minimum size for this inline assembly,
since it contains various statements which make it appear much larger than
the resulting code actually is.
Suggested-by: Juergen Christ <jchrist@linux.ibm.com>
Reviewed-by: Juergen Christ <jchrist@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Just remove a line break which reduces readability.
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Remove the preempt count implementation for pre-MARCH_HAS_Z196_FEATURES
builds. If the kernel is compiled with PREEMPT=n, which is the default for
all distributions, this has close to zero impact on the generated code.
Therefore remove the alternative implementation to keep things simple.
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
The s390 preempt_count implementation is more or less a copy of the x86
implementation, using different instructions. To clarify how this works,
also add all comments from x86 with some minor modifications.
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
With commit c8a91c285d ("s390/atomic: move remaining inline assemblies to
atomic_ops.h") all remaining atomic inline assemblies have been moved to
atomic_ops.h.
However, the result is inconsistent: the functions in atomic_ops.h are
supposed to be used with pointers to integral types like int and long,
while the functions in atomic.h work with atomic types.
This layering was violated by the named commit. Therefore adjust this
now, and also use consistent variable names in atomic_ops.h.
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Implement arch_atomic_inc() / arch_atomic_dec() functions which result
in a single instruction if compiled for z196 or newer architectures.
Reduces the kernel image size by ~6K (defconfig):
bloat-o-meter:
add/remove: 0/0 grow/shrink: 12/1005 up/down: 106/-6404 (-6298)
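The new helpers are essentially thin wrappers (sketch, assuming the
existing __atomic_add_const() primitive from atomic_ops.h, which emits
asi/agsi on z196 and newer):

  static __always_inline void arch_atomic_inc(atomic_t *v)
  {
          __atomic_add_const(1, &v->counter);     /* single "asi" */
  }

  static __always_inline void arch_atomic_dec(atomic_t *v)
  {
          __atomic_add_const(-1, &v->counter);
  }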
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
The DEFINE_IPL_ATTR_STR_RW() macro produces an "unsigned 'len' is never
less than zero" warning when the sys_vmcmd_on_*_store() callbacks are
defined.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202412081614.5uel8F6W-lkp@intel.com/
Fixes: 247576bf62 ("s390/ipl: Do not accept z/VM CP diag X'008' cmds longer than max length")
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Some small cleanups to stack_alloc() and stack_free():
- Rename ret to stack to reflect what the variable is used for
- Whitespace removal
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
There is no point in supporting !VMAP_STACK kernel builds. VMAP_STACK has
proven to work for many years. Also, since KASAN_VMALLOC is supported,
kernels built with !VMAP_STACK are completely untested.
Therefore select VMAP_STACK unconditionally and remove all config options
and code required for !VMAP_STACK builds.
Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reduce the number of config options to be considered and select
KASAN_VMALLOC if KASAN is enabled.
Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
With the uncoupling of physical and virtual address spaces, population of
the identity mapping was changed to use the type POPULATE_IDENTITY
instead of POPULATE_DIRECT. This breaks the DirectMap accounting:
> cat /proc/meminfo
DirectMap4k: 55296 kB
DirectMap1M: 18446744073709496320 kB
Adjust all locations of update_page_count() in vmem.c to use
POPULATE_IDENTITY instead of POPULATE_DIRECT as well. With this, the
accounting is correct again:
> cat /proc/meminfo
DirectMap4k: 54264 kB
DirectMap1M: 8334336 kB
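The adjustment is of the following shape (simplified sketch of one such
call site):

  /* before: identity mapping pages were not counted */
  if (mode == POPULATE_DIRECT)
          update_page_count(PG_DIRECT_MAP_4K, 1);

  /* after */
  if (mode == POPULATE_IDENTITY)
          update_page_count(PG_DIRECT_MAP_4K, 1);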
Fixes: c98d2ecae0 ("s390/mm: Uncouple physical vs virtual address spaces")
Cc: stable@vger.kernel.org
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
While at it, rename the same function in the s390 cpum_sf PMU.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Ravi Bangoria <ravi.bangoria@amd.com>
Reviewed-by: Ravi Bangoria <ravi.bangoria@amd.com>
Acked-by: Thomas Richter <tmricht@linux.ibm.com>
Link: https://lore.kernel.org/r/20241203180441.1634709-2-namhyung@kernel.org
Merge tag 's390-6.13-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull more s390 updates from Heiko Carstens:
- Add swap entry for hugetlbfs support
- Add PTE_MARKER support for hugetlbfs mappings; this fixes a regression
(possible page fault loop) which was introduced when support for
UFFDIO_POISON for hugetlbfs was added
- Add ARCH_HAS_PREEMPT_LAZY and PREEMPT_DYNAMIC support
- Mark IRQ entries in entry code, so that stack tracers can filter out
the non-IRQ parts of stack traces. This fixes stack depot capacity
limit warnings, since without filtering the number of unique stack
traces is huge
- In PCI code fix leak of struct zpci_dev object, and fix potential
double remove of hotplug slot
- Fix pagefault_disable() / pagefault_enable() unbalance in
arch_stack_user_walk_common()
- A couple of inline assembly optimizations, more cmpxchg() to
try_cmpxchg() conversions, and removal of usages of xchg() and
cmpxchg() on one and two byte memory areas
- Various other small improvements and cleanups
* tag 's390-6.13-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (27 commits)
Revert "s390/mm: Allow large pages for KASAN shadow mapping"
s390/spinlock: Use flag output constraint for arch_cmpxchg_niai8()
s390/spinlock: Use R constraint for arch_load_niai4()
s390/spinlock: Generate shorter code for arch_spin_unlock()
s390/spinlock: Remove condition code clobber from arch_spin_unlock()
s390/spinlock: Use symbolic names in inline assemblies
s390: Support PREEMPT_DYNAMIC
s390/pci: Fix potential double remove of hotplug slot
s390/pci: Fix leak of struct zpci_dev when zpci_add_device() fails
s390/mm/hugetlbfs: Add missing includes
s390/mm: Add PTE_MARKER support for hugetlbfs mappings
s390/mm: Introduce region-third and segment table swap entries
s390/mm: Introduce region-third and segment table entry present bits
s390/mm: Rearrange region-third and segment table entry SW bits
KVM: s390: Increase size of union sca_utility to four bytes
KVM: s390: Remove one byte cmpxchg() usage
KVM: s390: Use try_cmpxchg() instead of cmpxchg() loops
s390/ap: Replace xchg() with WRITE_ONCE()
s390/mm: Allow large pages for KASAN shadow mapping
s390: Add ARCH_HAS_PREEMPT_LAZY support
...
This reverts commit ff123eb774.
Allowing large pages for KASAN shadow mappings isn't inherently wrong,
but adding POPULATE_KASAN_MAP_SHADOW to large_allowed() exposes an issue
in can_large_pud() and can_large_pmd().
Since commit d8073dc6bc ("s390/mm: Allow large pages only for aligned
physical addresses"), both can_large_pud() and can_large_pmd() call _pa()
to check if large page physical addresses are aligned. However, _pa()
has a side effect: it allocates memory in POPULATE_KASAN_MAP_SHADOW
mode. This results in massive memory leaks.
The proper fix would be to address both large_allowed() and _pa()'s side
effects, but for now, revert this change to avoid the leaks.
Fixes: ff123eb774 ("s390/mm: Allow large pages for KASAN shadow mapping")
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Add a new variant of arch_cmpxchg_niai8() which makes use of the flag
output constraint; this allows the compiler to generate slightly better
code. Also rename arch_cmpxchg_niai8() to arch_try_cmpxchg_niai8(), which
reflects the purpose of the function and makes it consistent with other
"try" variants.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
The load instruction used within arch_load_niai4() has a short
displacement and an index register. Therefore use the R constraint to
reflect this. The previously used Q constraint does not allow an index
register.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Use mvhhi instead of sth to write a zero to spinlocks. Compared to the
sth variant this avoids loading zero into a register, and reduces
register pressure.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Neither of the two instructions in arch_spin_unlock() clobbers the
condition code. Therefore remove the condition code clobber from the
inline assembly.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>