mirror of https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
dda5f312bb
The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.
Fix this by defining sync ops with the required barriers.
Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.
Fixes: e54d2f6152 ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-2-mark.rutland@arm.com
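
The core of the fix is that a sync_*() op must stay fully ordered even when the kernel itself is built for UP, because the other observer is a hypervisor or another guest vCPU. Below is a minimal C sketch of that difference; the names up_smp_mb(), full_dmb() and sketch_sync_test_and_change_bit() are illustrative only (not the kernel's helpers), and the dmb encoding assumes an ARMv7 target.

static inline void up_smp_mb(void)
{
	/* What smp_mb() degrades to on a CONFIG_SMP=n build: a
	 * compiler-only barrier. Shown here for contrast. */
	__asm__ __volatile__("" ::: "memory");
}

static inline void full_dmb(void)
{
	/* What a sync_*() op needs regardless of UP/SMP: a real
	 * data memory barrier (ARMv7 encoding assumed). */
	__asm__ __volatile__("dmb ish" ::: "memory");
}

static inline int sketch_sync_test_and_change_bit(int nr, volatile unsigned long *p)
{
	unsigned long mask = 1UL << (nr % (8 * sizeof(unsigned long)));
	unsigned long old;

	p += nr / (8 * sizeof(unsigned long));
	full_dmb();		/* order earlier accesses before the RMW */
	old = __atomic_fetch_xor(p, mask, __ATOMIC_RELAXED);
	full_dmb();		/* order later accesses after the RMW */
	return (old & mask) != 0;
}

The regular test_and_change_bit() path would use up_smp_mb() in the same two places, which is why its ordering disappears on a UP build.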
17 lines, 353 B, ArmAsm
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 *  linux/arch/arm/lib/testchangebit.S
 *
 *  Copyright (C) 1995-1996 Russell King
 */
#include <linux/linkage.h>
#include <asm/assembler.h>
#include "bitops.h"
		.text

testop	_test_and_change_bit, eor, str

#if __LINUX_ARM_ARCH__ >= 6
sync_testop	_sync_test_and_change_bit, eor, str
#endif
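
testop and sync_testop are macros from bitops.h: each expands into the body of the named function, using eor to flip the target bit, and the sync_ variant added by this patch generates the same operation with barriers that are not elided on UP builds. Below is a hedged caller-side sketch in C of how the resulting _sync_test_and_change_bit symbol is reached; the extern declaration, the shared_flags word and the toggle_and_check() helper are illustrative only, while the sync_test_and_change_bit() mapping mirrors what the kernel's ARM sync_bitops header provides.

/* Illustrative declaration of the assembly entry point above. */
extern int _sync_test_and_change_bit(int nr, volatile unsigned long *p);
#define sync_test_and_change_bit(nr, p) _sync_test_and_change_bit(nr, p)

/* e.g. a word of flags shared with a hypervisor (illustrative) */
static volatile unsigned long shared_flags[1];

static inline int toggle_and_check(int nr)
{
	/*
	 * Atomically flip bit nr and return its previous value, fully
	 * ordered even when this guest kernel is built for UP.
	 */
	return sync_test_and_change_bit(nr, shared_flags);
}

This is the Xen-style use case from the commit message: a guest flipping a flag in memory that the hypervisor also observes, which is exactly where the elided-barrier behaviour of the regular op was unsafe.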