* fixes:
KVM: x86/mmu: Treat TDP MMU faults as spurious if access is already allowed
KVM: SVM: Allow guest writes to set MSR_AMD64_DE_CFG bits
KVM: x86: Play nice with protected guests in complete_hypercall_exit()
KVM: SVM: Disable AVIC on SNP-enabled system without HvInUseWrAllowed feature
* misc: (66 commits)
KVM: x86: Add information about pending requests to kvm_exit tracepoint
KVM: x86: Add interrupt injection information to the kvm_entry tracepoint
KVM: selftests: Add test case for MMIO during vectoring on x86
KVM: selftests: Add and use a helper function for x86's LIDT
KVM: SVM: Handle event vectoring error in check_emulate_instruction()
KVM: VMX: Handle event vectoring error in check_emulate_instruction()
KVM: x86: Try to unprotect and retry on unhandleable emulation failure
KVM: x86: Add emulation status for unhandleable exception vectoring
KVM: x86: Add function for vectoring error generation
KVM: x86: Use only local variables (no bitmask) to init kvm_cpu_caps
KVM: x86: Explicitly track feature flags that are enabled at runtime
KVM: x86: Explicitly track feature flags that require vendor enabling
KVM: x86: Rename "SF" macro to "SCATTERED_F"
KVM: x86: Pull CPUID capabilities from boot_cpu_data only as needed
KVM: x86: Add a macro for features that are synthesized into boot_cpu_data
KVM: x86: Drop superfluous host XSAVE check when adjusting guest XSAVES caps
KVM: x86: Replace (almost) all guest CPUID feature queries with cpu_caps
KVM: x86: Shuffle code to prepare for dropping guest_cpuid_has()
KVM: x86: Update guest cpu_caps at runtime for dynamic CPUID-based features
KVM: x86: Update OS{XSAVE,PKE} bits in guest CPUID irrespective of host support
...
* mmu:
KVM/x86: add comment to kvm_mmu_do_page_fault()
* svm:
KVM: SVM: Remove redundant TLB flush on guest CR4.PGE change
KVM: SVM: Macrofy SEV=n versions of sev_xxx_guest()
* vcpu_array:
KVM: Drop hack that "manually" informs lockdep of kvm->lock vs. vcpu->mutex
KVM: Don't BUG() the kernel if xa_insert() fails with -EBUSY
Revert "KVM: Fix vcpu_array[0] races"
KVM: Grab vcpu->mutex across installing the vCPU's fd and bumping online_vcpus
KVM: Verify there's at least one online vCPU when iterating over all vCPUs
KVM: Explicitly verify target vCPU is online in kvm_get_vcpu()
* vmx:
KVM: x86: Remove hwapic_irr_update() from kvm_x86_ops
KVM: nVMX: Honor event priority when emulating PI delivery during VM-Enter
KVM: nVMX: Use vmcs01's controls shadow to check for IRQ/NMI windows at VM-Enter
KVM: nVMX: Drop manual vmcs01.GUEST_INTERRUPT_STATUS.RVI check at VM-Enter
KVM: nVMX: Check for pending INIT/SIPI after entering non-root mode
KVM: nVMX: Explicitly update vPPR on successful nested VM-Enter
KVM: VMX: Allow toggling bits in MSR_IA32_RTIT_CTL when enable bit is cleared
KVM: nVMX: Defer SVI update to vmcs01 on EOI when L2 is active w/o VID
KVM: x86: Plumb in the vCPU to kvm_x86_ops.hwapic_isr_update()
# New commits in x86/mm:
dd4059634d ("x86/mtrr: Rename mtrr_overwrite_state() to guest_force_mtrr_state()")
9d93db0d18 ("x86/mm/selftests: Fix typo in lam.c")
6db2526c1d ("x86/mm/tlb: Only trim the mm_cpumask once a second")
953753db88 ("x86/mm/tlb: Also remove local CPU from mm_cpumask if stale")
2815a56e4b ("x86/mm/tlb: Add tracepoint for TLB flush IPI to stale CPU")
209954cbc7 ("x86/mm/tlb: Update mm_cpumask lazily")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
# New commits in perf/core:
02c56362a7 ("uprobes: Guard against kmemdup() failing in dup_return_instance()")
d29e744c71 ("perf/x86: Relax privilege filter restriction on AMD IBS")
6057b90ecc ("perf/core: Export perf_exclude_event()")
8622e45b5d ("uprobes: Reuse return_instances between multiple uretprobes within task")
0cf981de76 ("uprobes: Ensure return_instance is detached from the list before freeing")
636666a1c7 ("uprobes: Decouple return_instance list traversal and freeing")
2ff913ab3f ("uprobes: Simplify session consumer tracking")
e0925f2dc4 ("uprobes: add speculative lockless VMA-to-inode-to-uprobe resolution")
83e3dc9a5d ("uprobes: simplify find_active_uprobe_rcu() VMA checks")
03a001b156 ("mm: introduce mmap_lock_speculate_{try_begin|retry}")
eb449bd969 ("mm: convert mm_lock_seq to a proper seqcount")
7528585290 ("mm/gup: Use raw_seqcount_try_begin()")
96450ead16 ("seqlock: add raw_seqcount_try_begin")
b4943b8bfc ("perf/x86/rapl: Add core energy counter support for AMD CPUs")
54d2759778 ("perf/x86/rapl: Move the cntr_mask to rapl_pmus struct")
bdc57ec705 ("perf/x86/rapl: Remove the global variable rapl_msrs")
abf03d9bd2 ("perf/x86/rapl: Modify the generic variable names to *_pkg*")
eeca4c6b25 ("perf/x86/rapl: Add arguments to the init and cleanup functions")
cd29d83a6d ("perf/x86/rapl: Make rapl_model struct global")
8bf1c86e5a ("perf/x86/rapl: Rename rapl_pmu variables")
1d5e2f637a ("perf/x86/rapl: Remove the cpu_to_rapl_pmu() function")
e4b4443477 ("x86/topology: Introduce topology_logical_core_id()")
2f2db34707 ("perf/x86/rapl: Remove the unused get_rapl_pmu_cpumask() function")
ae55e308bd ("perf/x86/intel/ds: Simplify the PEBS records processing for adaptive PEBS")
3c00ed344c ("perf/x86/intel/ds: Factor out functions for PEBS records processing")
7087bfb0ad ("perf/x86/intel/ds: Clarify adaptive PEBS processing")
faac6f105e ("perf/core: Check sample_type in perf_sample_save_brstack")
f226805bc5 ("perf/core: Check sample_type in perf_sample_save_callchain")
b9c44b9147 ("perf/core: Save raw sample data conditionally based on sample type")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Currently, the maximum supported physical address space can be
configured as either 48 bits or 52 bits. The only remaining difference
between these in practice is that the former omits the masking and
shifting required to construct TTBR and PTE values, which carry bits #48
and higher disjoint from the rest of the physical address.
The overhead of performing these additional calculations is negligible,
so there is little reason to retain support for two different
configurations; we can simply support whatever the hardware supports.
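As an illustration of the calculation in question, here is a sketch only (the
layout shown is the 64KB-page FEAT_LPA one, where physical address bits
[51:48] live in descriptor bits [15:12]; the kernel's real macro names differ):

  #define PTE_ADDR_LOW   0x0000ffffffff0000ULL  /* OA[47:16] in bits [47:16] */
  #define PTE_ADDR_HIGH  0x000000000000f000ULL  /* OA[51:48] in bits [15:12] */

  static inline unsigned long long phys_to_pte_addr(unsigned long long pa)
  {
          return (pa & PTE_ADDR_LOW) | ((pa >> 36) & PTE_ADDR_HIGH);
  }

  static inline unsigned long long pte_addr_to_phys(unsigned long long pte)
  {
          return (pte & PTE_ADDR_LOW) | ((pte & PTE_ADDR_HIGH) << 36);
  }

When a system only implements 48 bits of physical address, the high bits are
simply zero and the extra OR/AND/shift contributes nothing, which is why the
overhead is negligible.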
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20241212081841.2168124-14-ardb+git@google.com
Signed-off-by: Will Deacon <will@kernel.org>
Add a more advanced BAR test that writes all BARs in one go, then reads
them back and verifies that each value matches the BAR number bitwise OR'ed
with the offset. This allows us to verify:
- The BAR number was what we intended to read
- The offset was what we intended to read
This allows us to detect potential address translation issues on the EP.
Reading back the BAR directly after writing will not allow us to detect the
case where inbound address translation on the endpoint incorrectly causes
multiple BARs to be redirected to the same memory region (within the EP).
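A minimal sketch of the pattern (the bar_write32()/bar_read32() accessors are
hypothetical stand-ins, not the pci_endpoint_test API):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Hypothetical host-side accessors for BAR-mapped memory. */
  void bar_write32(unsigned int bar, size_t off, uint32_t val);
  uint32_t bar_read32(unsigned int bar, size_t off);

  static bool test_bars(unsigned int nr_bars, const size_t bar_size[])
  {
          unsigned int bar;
          size_t off;

          /* Pass 1: write BAR# | offset into every word of every BAR. */
          for (bar = 0; bar < nr_bars; bar++)
                  for (off = 0; off < bar_size[bar]; off += sizeof(uint32_t))
                          bar_write32(bar, off, bar | off);

          /* Pass 2: only now read everything back; two BARs aliasing the
           * same endpoint memory would have clobbered each other above. */
          for (bar = 0; bar < nr_bars; bar++)
                  for (off = 0; off < bar_size[bar]; off += sizeof(uint32_t))
                          if (bar_read32(bar, off) != (uint32_t)(bar | off))
                                  return false;

          return true;
  }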
Link: https://lore.kernel.org/r/20241116032045.2574168-2-cassel@kernel.org
Signed-off-by: Niklas Cassel <cassel@kernel.org>
Signed-off-by: Krzysztof Wilczyński <kwilczynski@kernel.org>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Add tests for the new FIB rule flow label selector. Test both good and bad
flows, with both input and output routes.
# ./fib_rule_tests.sh
IPv6 FIB rule tests
[...]
TEST: rule6 check: flowlabel redirect to table [ OK ]
TEST: rule6 check: flowlabel no redirect to table [ OK ]
TEST: rule6 del by pref: flowlabel redirect to table [ OK ]
TEST: rule6 check: iif flowlabel redirect to table [ OK ]
TEST: rule6 check: iif flowlabel no redirect to table [ OK ]
TEST: rule6 del by pref: iif flowlabel redirect to table [ OK ]
TEST: rule6 check: flowlabel masked redirect to table [ OK ]
TEST: rule6 check: flowlabel masked no redirect to table [ OK ]
TEST: rule6 del by pref: flowlabel masked redirect to table [ OK ]
TEST: rule6 check: iif flowlabel masked redirect to table [ OK ]
TEST: rule6 check: iif flowlabel masked no redirect to table [ OK ]
TEST: rule6 del by pref: iif flowlabel masked redirect to table [ OK ]
[...]
Tests passed: 268
Tests failed: 0
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Merge tag 'kvm-selftests-treewide-6.14' of https://github.com/kvm-x86/linux into HEAD
KVM selftests "tree"-wide changes for 6.14:
- Rework vcpu_get_reg() to return a value instead of using an out-param, and
update all affected arch code accordingly.
- Convert the max_guest_memory_test into a more generic mmu_stress_test.
The basic gist of the "conversion" is to have the test do mprotect() on
guest memory while vCPUs are accessing said memory, e.g. to verify KVM
and mmu_notifiers are working as intended (see the sketch after this list).
- Play nice with treewide builds of unsupported architectures, e.g. arm
(32-bit), as KVM selftests' Makefile doesn't do anything to ensure the
target architecture is actually one KVM selftests supports.
- Use the kernel's $(ARCH) definition instead of the target triple for arch
specific directories, e.g. arm64 instead of aarch64, mainly so as not to
be different from the rest of the kernel.
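A rough sketch of the mprotect()-while-faulting idea from the
mmu_stress_test item above (illustrative only, not the selftest's code):

  #include <sys/mman.h>

  /* Toggle host permissions on the guest memory region while vCPU worker
   * threads keep accessing it, so KVM's handling of mmu_notifier
   * invalidations and the resulting faults gets exercised. */
  static void stress_iteration(void *guest_mem, size_t len)
  {
          mprotect(guest_mem, len, PROT_READ);              /* revoke write access */
          /* ... let the vCPU threads run and fault for a while ... */
          mprotect(guest_mem, len, PROT_READ | PROT_WRITE); /* restore it */
  }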
In get_uprobe_offset(), the call to procmap_query() uses the constant
PROCMAP_QUERY_VMA_EXECUTABLE, even if PROCMAP_QUERY is not defined.
Define PROCMAP_QUERY_VMA_EXECUTABLE when PROCMAP_QUERY isn't.
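A sketch of the kind of fallback (the flag value matches the current uapi
definition, but treat this as an illustration rather than the exact fix):

  #include <linux/fs.h>

  #ifndef PROCMAP_QUERY
  /* Old uapi headers: provide the one flag the code uses unconditionally. */
  #define PROCMAP_QUERY_VMA_EXECUTABLE 0x04
  #endif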
Fixes: 4e9e07603e ("selftests/bpf: make use of PROCMAP_QUERY ioctl if available")
Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20241218175724.578884-1-jmarchan@redhat.com
Patch series "Fixes and cleanups to xarray", v3.
This series contains some random fixes and cleanups to xarray. Patches 1-2
are fixes and patches 3-6 are cleanups. More details can be found in the
respective patches.
This patch (of 5):
Similar to the issue fixed in commit cbc0285433 ("XArray: Do not return
sibling entries from xa_load()"), we may return sibling entries from
xas_find_marked() as follows:
Thread A:                                  Thread B:
                                           xa_store_range(xa, entry, 6, 7, gfp);
                                           xa_set_mark(xa, 6, mark)
XA_STATE(xas, xa, 6);
xas_find_marked(&xas, 7, mark);
  offset = xas_find_chunk(xas, advance, mark);
  [offset is 6 which points to a valid entry]
                                           xa_store_range(xa, entry, 4, 7, gfp);
  entry = xa_entry(xa, node, 6);
  [entry is a sibling of 4]
  if (!xa_is_node(entry))
          return entry;
Skip sibling entries like xas_find() does, to protect callers from seeing a
sibling entry returned by xas_find_marked(); otherwise a caller may use a
sibling entry as a valid entry and crash the kernel.
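A sketch of the shape of the fix (not necessarily the exact diff), mirroring
how xas_find() already treats sibling slots:

  entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
  if (xa_is_sibling(entry))
          continue;       /* never hand a sibling slot back to the caller */
  if (!xa_is_node(entry))
          return entry;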
In addition, the load_race() test is modified to catch the mentioned issue;
the modified load_race() only passes after this fix is merged.
Here is an example of how this bug could be triggered in theory in nfs, which
enables large folios in the mapping:
Let's take a look at the involved racers:
1. How pages could be created and dirtied in nfs.
write
ksys_write
vfs_write
new_sync_write
nfs_file_write
generic_perform_write
nfs_write_begin
fgf_set_order
__filemap_get_folio
nfs_write_end
nfs_update_folio
nfs_writepage_setup
nfs_mark_request_dirty
filemap_dirty_folio
__folio_mark_dirty
__xa_set_mark
2. How dirty pages could be deleted in nfs.
ioctl
do_vfs_ioctl
file_ioctl
ioctl_preallocate
vfs_fallocate
nfs42_fallocate
nfs42_proc_deallocate
truncate_pagecache_range
truncate_inode_pages_range
truncate_inode_folio
filemap_remove_folio
page_cache_delete
xas_store(&xas, NULL);
3. How dirty pages could be searched locklessly
sync_file_range
ksys_sync_file_range
__filemap_fdatawrite_range
filemap_fdatawrite_wbc
do_writepages
writeback_use_writepage
writeback_iter
writeback_get_folio
filemap_get_folios_tag
find_get_entry
folio = xas_find_marked()
folio_try_get(folio)
In theory, the kernel will crash as follows:
[1. Create]
/* write page 2,3 */
write
...
nfs_write_begin
fgf_set_order
__filemap_get_folio
...
/* index = 2, order = 1 */
xa_store(&xas, folio)
nfs_write_end
...
__folio_mark_dirty

[2. Search]
/* sync page 2 and page 3 */
sync_file_range
...
find_get_entry
folio = xas_find_marked()
/* offset will be 2 */
offset = xas_find_chunk()

[3. Delete]
/* delete page 2 and page 3 */
ioctl
...
xas_store(&xas, NULL);

[1. Create]
/* write page 0-3 */
write
...
nfs_write_begin
fgf_set_order
__filemap_get_folio
...
/* index = 0, order = 2 */
xa_store(&xas, folio)
nfs_write_end
...
__folio_mark_dirty

[2. Search]
/* get sibling entry from offset 2 */
entry = xa_entry(.., 2)
/* use sibling entry as folio and crash kernel */
folio_try_get(folio)
Link: https://lkml.kernel.org/r/20241218154613.58754-2-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/20241213122523.12764-1-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/20241213122523.12764-2-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Correct the spelling dictionary so that future instances will be caught by
checkpatch, and fix the instances found.
Link: https://lkml.kernel.org/r/20241211154903.47027-1-cvam0000@gmail.com
Signed-off-by: Shivam Chaudhary <cvam0000@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Shivam Chaudhary <cvam0000@gmail.com>
Cc: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
The logfile option was documented but not working. Add it and optimize
the while loop.
Link: https://lkml.kernel.org/r/20241203020550.3145-1-zhangjiao2@cmss.chinamobile.com
Signed-off-by: zhang jiao <zhangjiao2@cmss.chinamobile.com>
Reviewed-by: Dr. Thomas Orgis <thomas.orgis@uni-hamburg.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Introduce the 'delay max' use case, which helps quickly detect potential
abnormal delays in the system and records the type and specific details of
delay spikes.
Problem
========
Delay accounting can track the average delay of processes to show
system workload. However, when a process experiences a significant
delay, i.e. a delay spike that adversely affects performance,
getdelays can only display the average delay over a period of time.
An average delay is unhelpful for diagnosing a delay peak; it is not
even possible to determine which type of delay has spiked, as this
information is masked by the average.
Solution
=========
The 'delay max' value displays the peak delay seen since the system's
startup, recording potential abnormal delays over time, including the
type of delay and the maximum delay observed. This is helpful for
quickly identifying crashes caused by delays.
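As an illustration of the bookkeeping, here is a sketch with hypothetical
names (not the actual patch):

  #include <linux/types.h>

  struct delay_stats {
          u64 total;      /* sum of delays, feeds the existing average */
          u32 count;      /* number of delay events */
          u64 max;        /* worst single delay observed so far */
  };

  static void account_delay(struct delay_stats *d, u64 delay_ns)
  {
          d->total += delay_ns;
          d->count++;
          if (delay_ns > d->max)  /* new: remember the delay peak */
                  d->max = delay_ns;
  }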
Use case
=========
bash# ./getdelays -d -p 244
print delayacct stats ON
PID 244
CPU             count     real total  virtual total    delay total  delay average      delay max
                   68      192000000      213676651         705643        0.010ms     0.306381ms
IO              count    delay total  delay average      delay max
                    0              0        0.000ms     0.000000ms
SWAP            count    delay total  delay average      delay max
                    0              0        0.000ms     0.000000ms
RECLAIM         count    delay total  delay average      delay max
                    0              0        0.000ms     0.000000ms
THRASHING       count    delay total  delay average      delay max
                    0              0        0.000ms     0.000000ms
COMPACT         count    delay total  delay average      delay max
                    0              0        0.000ms     0.000000ms
WPCOPY          count    delay total  delay average      delay max
                  235       15648284        0.067ms     0.263842ms
IRQ             count    delay total  delay average      delay max
                    0              0        0.000ms     0.000000ms
Link: https://lkml.kernel.org/r/20241203164848805CS62CQPQWG9GLdQj2_BxS@zte.com.cn
Co-developed-by: Wang Yong <wang.yong12@zte.com.cn>
Signed-off-by: Wang Yong <wang.yong12@zte.com.cn>
Co-developed-by: xu xin <xu.xin16@zte.com.cn>
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
Co-developed-by: Wang Yaxin <wang.yaxin@zte.com.cn>
Signed-off-by: Wang Yaxin <wang.yaxin@zte.com.cn>
Signed-off-by: Kun Jiang <jiang.kun2@zte.com.cn>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Fan Yu <fan.yu9@zte.com.cn>
Cc: Peilin He <he.peilin@zte.com.cn>
Cc: tuqiang <tu.qiang35@zte.com.cn>
Cc: Yang Yang <yang.yang29@zte.com.cn>
Cc: ye xingchen <ye.xingchen@zte.com.cn>
Cc: Yunkai Zhang <zhang.yunkai@zte.com.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Introduce a demonstrative, basic __mmap_region() test upon which we can
base further work moving forwards.
This simply asserts that mappings can be made and that merges occur as
expected.
As part of this change, fix the security_vm_enough_memory_mm() stub which
was previously incorrectly implemented.
Link: https://lkml.kernel.org/r/20241213162409.41498-1-lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Jann Horn <jannh@google.com>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
There is no reason why the alternate signal stack should be mapped as RWX.
Map it as RW instead.
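A minimal sketch of the intended mapping (not the selftest's exact code):

  #include <signal.h>
  #include <stddef.h>
  #include <sys/mman.h>

  static int install_altstack(size_t size)
  {
          stack_t ss = {
                  .ss_size = size,
                  .ss_sp = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0),
          };

          if (ss.ss_sp == MAP_FAILED)
                  return -1;

          return sigaltstack(&ss, NULL);  /* RW is all a signal stack needs */
  }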
Link: https://lkml.kernel.org/r/20241209095019.1732120-15-kevin.brodsky@arm.com
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Keith Lucas <keith.lucas@oracle.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
The pkey_sighandler_tests are bound to fail if either the kernel or CPU
doesn't support pkeys. Skip the tests if pkeys support is missing.
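A sketch of the guard (is_pkeys_supported() is the existing helper;
ksft_exit_skip() comes from the kselftest harness):

  if (!is_pkeys_supported())
          ksft_exit_skip("pkeys are not supported on this system\n");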
Link: https://lkml.kernel.org/r/20241209095019.1732120-14-kevin.brodsky@arm.com
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Keith Lucas <keith.lucas@oracle.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
PKEY_ALLOW_ALL is meant to represent the pkey register value that allows
all accesses (enables all pkeys). However its current naming suggests
that the value applies to *one* key only (like PKEY_DISABLE_ACCESS for
instance).
Rename PKEY_ALLOW_ALL to PKEY_REG_ALLOW_ALL to avoid such
misunderstanding. This is consistent with the PKEY_REG_ALLOW_NONE macro
introduced by commit 6e182dc9f2 ("selftests/mm: Use generic pkey
register manipulation").
Link: https://lkml.kernel.org/r/20241209095019.1732120-13-kevin.brodsky@arm.com
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Keith Lucas <keith.lucas@oracle.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
The pkey* files can only be built on architectures that support pkeys
(pkey-helpers.h #error's out otherwise). Adding pkey_util.c as a dependency
of all $(TEST_GEN_FILES) is therefore a bad idea. Make it a dependency of
the pkeys tests only.
Those tests are built in 32/64-bit variants on x86_64, so we need to add an
explicit dependency there as well.
Link: https://lkml.kernel.org/r/20241216092849.2140850-1-kevin.brodsky@arm.com
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Keith Lucas <keith.lucas@oracle.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
sys_pkey_alloc, sys_pkey_free and sys_mprotect_pkey are currently used in
protection_keys.c, while pkey_sighandler_tests.c calls the libc wrappers
directly (e.g. pkey_mprotect()). This is probably OK when using glibc
(those symbols appeared a while ago), but musl does not currently provide
them. The logging in the helpers from pkey-helpers.h can also come in
handy.
Make things more consistent by using the sys_pkey helpers in
pkey_sighandler_tests.c too. To that end, their implementation is moved to
a common .c file (pkey_util.c). This also enables calling
is_pkeys_supported() outside of protection_keys.c, since it relies on
sys_pkey_{alloc,free}.
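For reference, a sketch of the kind of raw wrappers that end up shared via
pkey_util.c (exact signatures in the selftests may differ slightly):

  #include <sys/syscall.h>
  #include <unistd.h>

  int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
  {
          return syscall(__NR_pkey_alloc, flags, init_val);
  }

  int sys_pkey_free(int pkey)
  {
          return syscall(__NR_pkey_free, pkey);
  }

  int sys_mprotect_pkey(void *ptr, size_t size, unsigned long prot,
                        unsigned long pkey)
  {
          return syscall(__NR_pkey_mprotect, ptr, size, prot, pkey);
  }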
Link: https://lkml.kernel.org/r/20241209095019.1732120-12-kevin.brodsky@arm.com
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Keith Lucas <keith.lucas@oracle.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
The pkey tests define a whole lot of functions and some global variables.
A few are truly global (declared in pkey-helpers.h), but the majority are
file-scoped. Make sure those are labelled static.
Some of the pkey_{access,write}_{allow,deny} helpers are not called, or
only called when building for some architectures. Mark them
__maybe_unused to suppress compiler warnings.
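A sketch of the pattern (helper names follow the existing selftest code):

  /* File-scoped helper: now static; only used on some architectures,
   * so mark it __maybe_unused to keep -Wunused-function quiet. */
  static void __maybe_unused pkey_access_deny(int pkey)
  {
          pkey_disable_set(pkey, PKEY_DISABLE_ACCESS);
  }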
Link: https://lkml.kernel.org/r/20241209095019.1732120-11-kevin.brodsky@arm.com
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Keith Lucas <keith.lucas@oracle.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>