// SPDX-License-Identifier: GPL-2.0-only
/*
 *  Copyright (C) 1995-2003 Russell King
 *                2001-2002 Keith Owens
 *
 * Generate definitions needed by assembly language modules.
 *
 * This code generates raw asm output which is post-processed to extract
 * and format the required data.
 */
#include <linux/compiler.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <asm/cacheflush.h>
#include <asm/kexec-internal.h>
#include <asm/glue-df.h>
#include <asm/glue-pf.h>
#include <asm/mach/arch.h>
#include <asm/thread_info.h>
ARM: mm: Make virt_to_pfn() a static inline
Making virt_to_pfn() a static inline taking a strongly typed
(const void *) makes the contract of passing a pointer of that
type to the function explicit, and exposes any misuse of the
macro virt_to_pfn() acting polymorphic and accepting many types
such as (void *), (uintptr_t) or (unsigned long) as arguments
without warnings.
Doing this is a bit intrusive: virt_to_pfn() requires
PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and these are defined in
<asm/page.h>, so this must be included *before* <asm/memory.h>.
The use of macros was obscuring the unclear inclusion order here,
as the macros would eventually be resolved, but a static inline
like this cannot be compiled with unresolved macros.
The naive solution of including <asm/page.h> at the top of
<asm/memory.h> does not work, because <asm/memory.h> sometimes
includes <asm/page.h> at the end of itself, which would create a
confusing inclusion loop. So instead, take the approach of always
unconditionally including <asm/page.h> at the end of <asm/memory.h>.
arch/arm uses <asm/memory.h> explicitly in a lot of places;
however, it turns out that if we just unconditionally include
<asm/memory.h> into <asm/page.h> and switch all inclusions of
<asm/memory.h> to <asm/page.h> instead, we enforce the right
order and <asm/memory.h> will always have access to the
definitions.
Put an inclusion guard in place making it impossible to include
<asm/memory.h> explicitly.
Link: https://lore.kernel.org/linux-mm/20220701160004.2ffff4e5ab59a55499f4c736@linux-foundation.org/
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2022-06-02 08:18:32 +00:00
#include <asm/page.h>
#include <asm/mpu.h>
#include <asm/procinfo.h>
#include <asm/suspend.h>
#include <asm/hardware/cache-l2x0.h>
#include <linux/kbuild.h>
#include <linux/arm-smccc.h>

#include <vdso/datapage.h>

#include "signal.h"
/*
 * Make sure that the compiler and target are compatible.
 */
#if defined(__APCS_26__)
#error Sorry, your compiler targets APCS-26 but this kernel requires APCS-32
#endif

int main(void)
{
  DEFINE(TSK_ACTIVE_MM, offsetof(struct task_struct, active_mm));
Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables
The changes to automatically test for working stack protector compiler
support in the Kconfig files removed the special STACKPROTECTOR_AUTO
option that picked the strongest stack protector that the compiler
supported.
That was all a nice cleanup - it makes no sense to have the AUTO case
now that the Kconfig phase can just determine the compiler support
directly.
HOWEVER.
It also meant that doing "make oldconfig" would now _disable_ the strong
stackprotector if you had AUTO enabled, because in a legacy config file,
the sane stack protector configuration would look like
CONFIG_HAVE_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR_NONE is not set
# CONFIG_CC_STACKPROTECTOR_REGULAR is not set
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_CC_STACKPROTECTOR_AUTO=y
and when you ran this through "make oldconfig" with the Kbuild changes,
it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
used to be disabled (because it was really enabled by AUTO), and would
disable it in the new config, resulting in:
CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
CONFIG_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
That's dangerously subtle - people could suddenly find themselves with
the weaker stack protector setup without even realizing.
The solution here is to rename not just the old REGULAR stack
protector option, but also the strong one. This does that by just
removing the CC_ prefix entirely for the user choices, because it really
is not about the compiler support (the compiler support now instead
automatically impacts _visibility_ of the options to users).
This results in "make oldconfig" actually asking the user for their
choice, so that we don't have any silent subtle security model changes.
The end result would generally look like this:
CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
where the "CC_" versions really are about internal compiler
infrastructure, not the user selections.
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-06-14 03:21:18 +00:00
#ifdef CONFIG_STACKPROTECTOR
  DEFINE(TSK_STACK_CANARY, offsetof(struct task_struct, stack_canary));
#endif
  BLANK();
  DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
  DEFINE(TI_PREEMPT, offsetof(struct thread_info, preempt_count));
  DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
  DEFINE(TI_CPU_DOMAIN, offsetof(struct thread_info, cpu_domain));
  DEFINE(TI_CPU_SAVE, offsetof(struct thread_info, cpu_context));
ARM: 9107/1: syscall: always store thread_info->abi_syscall
The system call number is used in a couple of places, in particular
ptrace, seccomp and /proc/<pid>/syscall.
The last one apparently never worked reliably on ARM for tasks that are
not currently getting traced.
Storing the syscall number in the normal entry path makes it work,
as well as allowing us to see if the current system call is for OABI
compat mode, which is the next thing I want to hook into.
Since the thread_info->syscall field is not just the number any more, it
is now renamed to abi_syscall. In kernels that enable both OABI and EABI,
the upper bits of this field encode 0x900000 (__NR_OABI_SYSCALL_BASE)
for OABI tasks, while normal EABI tasks do not set the upper bits. This
makes it possible to implement the in_oabi_syscall() helper later.
All other users of thread_info->syscall go through the syscall_get_nr()
helper, which in turn filters out the ABI bits.
Note that the ABI information is lost with PTRACE_SET_SYSCALL, so one
cannot set the internal number to a particular version, but this was
already the case. We could change it to let gdb encode the ABI type along
with the syscall in a CONFIG_OABI_COMPAT-enabled kernel, but that itself
would be a (backwards-compatible) ABI change, so I don't do it here.
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-08-11 07:30:21 +00:00
  DEFINE(TI_ABI_SYSCALL, offsetof(struct thread_info, abi_syscall));
  DEFINE(TI_TP_VALUE, offsetof(struct thread_info, tp_value));
  DEFINE(TI_FPSTATE, offsetof(struct thread_info, fpstate));
#ifdef CONFIG_VFP
  DEFINE(TI_VFPSTATE, offsetof(struct thread_info, vfpstate));
ARM: vfp: fix a hole in VFP thread migration
Fix a hole in the VFP thread migration. Let's define two threads.
Thread 1, we'll call 'interesting_thread' which is a thread which is
running on CPU0, using VFP (so vfp_current_hw_state[0] =
&interesting_thread->vfpstate) and gets migrated off to CPU1, where
it continues execution of VFP instructions.
Thread 2, we'll call 'new_cpu0_thread' which is the thread which takes
over on CPU0. This has also been using VFP, and last used VFP on CPU0,
but doesn't use it again.
The following code will be executed twice:
cpu = thread->cpu;
/*
* On SMP, if VFP is enabled, save the old state in
* case the thread migrates to a different CPU. The
* restoring is done lazily.
*/
if ((fpexc & FPEXC_EN) && vfp_current_hw_state[cpu]) {
vfp_save_state(vfp_current_hw_state[cpu], fpexc);
vfp_current_hw_state[cpu]->hard.cpu = cpu;
}
/*
* Thread migration, just force the reloading of the
* state on the new CPU in case the VFP registers
* contain stale data.
*/
if (thread->vfpstate.hard.cpu != cpu)
vfp_current_hw_state[cpu] = NULL;
The first execution will be on CPU0 to switch away from 'interesting_thread'.
interesting_thread->cpu will be 0.
So, vfp_current_hw_state[0] points at interesting_thread->vfpstate.
The hardware state will be saved, along with the CPU number (0) that
it was executing on.
'thread' will be 'new_cpu0_thread' with new_cpu0_thread->cpu = 0.
Also, because it was executing on CPU0, new_cpu0_thread->vfpstate.hard.cpu = 0,
and so the thread migration check is not triggered.
This means that vfp_current_hw_state[0] remains pointing at interesting_thread.
The second execution will be on CPU1 to switch _to_ 'interesting_thread'.
So, 'thread' will be 'interesting_thread' and interesting_thread->cpu now
will be 1. The previous thread executing on CPU1 is not relevant to this
so we shall ignore that.
We get to the thread migration check. Here, we discover that
interesting_thread->vfpstate.hard.cpu = 0, yet interesting_thread->cpu is
now 1, indicating thread migration. We set vfp_current_hw_state[1] to
NULL.
So, at this point vfp_current_hw_state[] contains the following:
[0] = &interesting_thread->vfpstate
[1] = NULL
Our interesting thread now executes a VFP instruction, takes a fault
which loads the state into the VFP hardware. Through the assembly,
we now have:
[0] = &interesting_thread->vfpstate
[1] = &interesting_thread->vfpstate
CPU1 stops due to ptrace (and so saves its VFP state using the thread
switch code above), and CPU0 calls vfp_sync_hwstate().
if (vfp_current_hw_state[cpu] == &thread->vfpstate) {
vfp_save_state(&thread->vfpstate, fpexc | FPEXC_EN);
BANG, we corrupt interesting_thread's VFP state by overwriting the
more up-to-date state saved by CPU1 with the old VFP state from CPU0.
Fix this by ensuring that we have sane semantics for the various state
describing variables:
1. vfp_current_hw_state[] points to the current owner of the context
information stored in each CPU's hardware, or NULL if that state
information is invalid.
2. thread->vfpstate.hard.cpu always contains the most recent CPU number
which the state was loaded into or NR_CPUS if no CPU owns the state.
So, for a particular CPU to be a valid owner of the VFP state for a
particular thread t, two things must be true:
vfp_current_hw_state[cpu] == &t->vfpstate && t->vfpstate.hard.cpu == cpu.
and that is valid from the moment a CPU loads the saved VFP context
into the hardware. This gives clear and consistent semantics to
interpreting these variables.
This patch also fixes thread copying, ensuring that t->vfpstate.hard.cpu
is invalidated, otherwise CPU0 may believe it was the last owner. The
hole can happen thus:
- thread1 runs on CPU2 using VFP, migrates to CPU3, exits and thread_info
freed.
- New thread allocated from a previously running thread on CPU2, reusing
memory for thread1 and copying vfp.hard.cpu.
At this point, the following are true:
new_thread1->vfpstate.hard.cpu == 2
&new_thread1->vfpstate == vfp_current_hw_state[2]
Lastly, this also addresses thread flushing in a similar way to thread
copying. Hole is:
- thread runs on CPU0, using VFP, migrates to CPU1 but does not use VFP.
- thread calls execve(), so thread flush happens, leaving
vfp_current_hw_state[0] intact. This vfpstate is memset to 0 causing
thread->vfpstate.hard.cpu = 0.
- thread migrates back to CPU0 before using VFP.
At this point, the following are true:
thread->vfpstate.hard.cpu == 0
&thread->vfpstate == vfp_current_hw_state[0]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-09 15:09:43 +00:00
#ifdef CONFIG_SMP
  DEFINE(VFP_CPU, offsetof(union vfp_state, hard.cpu));
#endif
#endif
ARM: 9282/1: vfp: Manipulate task VFP state with softirqs disabled
In a subsequent patch, we will relax the kernel mode NEON policy, and
permit kernel mode NEON to be used not only from task context, as is
permitted today, but also from softirq context.
Given that softirqs may trigger over the back of any IRQ unless they are
explicitly disabled, we need to address the resulting races in the VFP
state handling, by disabling softirq processing in two distinct but
related cases:
- kernel mode NEON will leave the FPU disabled after it completes, so
any kernel code sequence that enables the FPU and subsequently accesses
its registers needs to disable softirqs until it completes;
- kernel_neon_begin() will preserve the userland VFP state in memory,
and if it interrupts the ordinary VFP state preserve sequence, the
latter will resume execution with the VFP registers corrupted, and
happily continue saving them to memory.
Given that disabling softirqs also disables preemption, we can replace
the existing preempt_disable/enable occurrences in the VFP state
handling asm code with new macros that dis/enable softirqs instead.
In the VFP state handling C code, add local_bh_disable/enable() calls
in those places where the VFP state is preserved.
One thing to keep in mind is that, once we allow NEON use in softirq
context, the result of any such interruption is that the FPEXC_EN bit in
the FPEXC register will be cleared, and vfp_current_hw_state[cpu] will
be NULL. This means that any sequence that [conditionally] clears
FPEXC_EN and/or sets vfp_current_hw_state[cpu] to NULL does not need to
run with softirqs disabled, as the result will be the same. Furthermore,
the handling of THREAD_NOTIFY_SWITCH is guaranteed to run with IRQs
disabled, and so it does not need protection from softirq interruptions
either.
Tested-by: Martin Willi <martin@strongswan.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-12-22 17:49:51 +00:00
  DEFINE(SOFTIRQ_DISABLE_OFFSET, SOFTIRQ_DISABLE_OFFSET);
#ifdef CONFIG_ARM_THUMBEE
  DEFINE(TI_THUMBEE_STATE, offsetof(struct thread_info, thumbee_state));
#endif
#ifdef CONFIG_IWMMXT
  DEFINE(TI_IWMMXT_STATE, offsetof(struct thread_info, fpstate.iwmmxt));
#endif
  BLANK();
  DEFINE(S_R0, offsetof(struct pt_regs, ARM_r0));
  DEFINE(S_R1, offsetof(struct pt_regs, ARM_r1));
  DEFINE(S_R2, offsetof(struct pt_regs, ARM_r2));
  DEFINE(S_R3, offsetof(struct pt_regs, ARM_r3));
  DEFINE(S_R4, offsetof(struct pt_regs, ARM_r4));
  DEFINE(S_R5, offsetof(struct pt_regs, ARM_r5));
  DEFINE(S_R6, offsetof(struct pt_regs, ARM_r6));
  DEFINE(S_R7, offsetof(struct pt_regs, ARM_r7));
  DEFINE(S_R8, offsetof(struct pt_regs, ARM_r8));
  DEFINE(S_R9, offsetof(struct pt_regs, ARM_r9));
  DEFINE(S_R10, offsetof(struct pt_regs, ARM_r10));
  DEFINE(S_FP, offsetof(struct pt_regs, ARM_fp));
  DEFINE(S_IP, offsetof(struct pt_regs, ARM_ip));
  DEFINE(S_SP, offsetof(struct pt_regs, ARM_sp));
  DEFINE(S_LR, offsetof(struct pt_regs, ARM_lr));
  DEFINE(S_PC, offsetof(struct pt_regs, ARM_pc));
  DEFINE(S_PSR, offsetof(struct pt_regs, ARM_cpsr));
  DEFINE(S_OLD_R0, offsetof(struct pt_regs, ARM_ORIG_r0));
  DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs));
  DEFINE(SVC_DACR, offsetof(struct svc_pt_regs, dacr));
  DEFINE(SVC_TTBCR, offsetof(struct svc_pt_regs, ttbcr));
  DEFINE(SVC_REGS_SIZE, sizeof(struct svc_pt_regs));
  BLANK();
  DEFINE(SIGFRAME_RC3_OFFSET, offsetof(struct sigframe, retcode[3]));
  DEFINE(RT_SIGFRAME_RC3_OFFSET, offsetof(struct rt_sigframe, sig.retcode[3]));
  BLANK();
#ifdef CONFIG_CACHE_L2X0
  DEFINE(L2X0_R_PHY_BASE, offsetof(struct l2x0_regs, phy_base));
  DEFINE(L2X0_R_AUX_CTRL, offsetof(struct l2x0_regs, aux_ctrl));
  DEFINE(L2X0_R_TAG_LATENCY, offsetof(struct l2x0_regs, tag_latency));
  DEFINE(L2X0_R_DATA_LATENCY, offsetof(struct l2x0_regs, data_latency));
  DEFINE(L2X0_R_FILTER_START, offsetof(struct l2x0_regs, filter_start));
  DEFINE(L2X0_R_FILTER_END, offsetof(struct l2x0_regs, filter_end));
  DEFINE(L2X0_R_PREFETCH_CTRL, offsetof(struct l2x0_regs, prefetch_ctrl));
  DEFINE(L2X0_R_PWR_CTRL, offsetof(struct l2x0_regs, pwr_ctrl));
  BLANK();
#endif
#ifdef CONFIG_CPU_HAS_ASID
  DEFINE(MM_CONTEXT_ID, offsetof(struct mm_struct, context.id.counter));
  BLANK();
#endif
  DEFINE(VMA_VM_MM, offsetof(struct vm_area_struct, vm_mm));
  DEFINE(VMA_VM_FLAGS, offsetof(struct vm_area_struct, vm_flags));
  BLANK();
  DEFINE(VM_EXEC, VM_EXEC);
  BLANK();
  DEFINE(PAGE_SZ, PAGE_SIZE);
  BLANK();
  DEFINE(SYS_ERROR0, 0x9f0000);
  BLANK();
  DEFINE(SIZEOF_MACHINE_DESC, sizeof(struct machine_desc));
  DEFINE(MACHINFO_TYPE, offsetof(struct machine_desc, nr));
  DEFINE(MACHINFO_NAME, offsetof(struct machine_desc, name));
  BLANK();
  DEFINE(PROC_INFO_SZ, sizeof(struct proc_info_list));
  DEFINE(PROCINFO_INITFUNC, offsetof(struct proc_info_list, __cpu_flush));
  DEFINE(PROCINFO_MM_MMUFLAGS, offsetof(struct proc_info_list, __cpu_mm_mmu_flags));
  DEFINE(PROCINFO_IO_MMUFLAGS, offsetof(struct proc_info_list, __cpu_io_mmu_flags));
  BLANK();
#ifdef MULTI_DABORT
  DEFINE(PROCESSOR_DABT_FUNC, offsetof(struct processor, _data_abort));
#endif
#ifdef MULTI_PABORT
  DEFINE(PROCESSOR_PABT_FUNC, offsetof(struct processor, _prefetch_abort));
#endif
#ifdef MULTI_CPU
  DEFINE(CPU_SLEEP_SIZE, offsetof(struct processor, suspend_size));
  DEFINE(CPU_DO_SUSPEND, offsetof(struct processor, do_suspend));
  DEFINE(CPU_DO_RESUME, offsetof(struct processor, do_resume));
#endif
#ifdef MULTI_CACHE
  DEFINE(CACHE_FLUSH_KERN_ALL, offsetof(struct cpu_cache_fns, flush_kern_all));
#endif
#ifdef CONFIG_ARM_CPU_SUSPEND
  DEFINE(SLEEP_SAVE_SP_SZ, sizeof(struct sleep_save_sp));
  DEFINE(SLEEP_SAVE_SP_PHYS, offsetof(struct sleep_save_sp, save_ptr_stash_phys));
  DEFINE(SLEEP_SAVE_SP_VIRT, offsetof(struct sleep_save_sp, save_ptr_stash));
#endif
  DEFINE(ARM_SMCCC_QUIRK_ID_OFFS, offsetof(struct arm_smccc_quirk, id));
  DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS, offsetof(struct arm_smccc_quirk, state));
  BLANK();
  DEFINE(DMA_BIDIRECTIONAL, DMA_BIDIRECTIONAL);
  DEFINE(DMA_TO_DEVICE, DMA_TO_DEVICE);
  DEFINE(DMA_FROM_DEVICE, DMA_FROM_DEVICE);
  BLANK();
  DEFINE(CACHE_WRITEBACK_ORDER, __CACHE_WRITEBACK_ORDER);
  DEFINE(CACHE_WRITEBACK_GRANULE, __CACHE_WRITEBACK_GRANULE);
  BLANK();
#ifdef CONFIG_VDSO
  DEFINE(VDSO_DATA_SIZE, sizeof(union vdso_data_store));
#endif
  BLANK();
#ifdef CONFIG_ARM_MPU
  DEFINE(MPU_RNG_INFO_RNGS, offsetof(struct mpu_rgn_info, rgns));
  DEFINE(MPU_RNG_INFO_USED, offsetof(struct mpu_rgn_info, used));

  DEFINE(MPU_RNG_SIZE, sizeof(struct mpu_rgn));
  DEFINE(MPU_RGN_DRBAR, offsetof(struct mpu_rgn, drbar));
  DEFINE(MPU_RGN_DRSR, offsetof(struct mpu_rgn, drsr));
  DEFINE(MPU_RGN_DRACR, offsetof(struct mpu_rgn, dracr));
  DEFINE(MPU_RGN_PRBAR, offsetof(struct mpu_rgn, prbar));
  DEFINE(MPU_RGN_PRLAR, offsetof(struct mpu_rgn, prlar));
KVM: ARM: World-switch implementation
Provides complete world-switch implementation to switch to other guests
running in non-secure modes. Includes Hyp exception handlers that
capture necessary exception information and stores the information on
the VCPU and KVM structures.
The following Hyp-ABI is also documented in the code:
Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
Switching to Hyp mode is done through a simple HVC #0 instruction. The
exception vector code will check that the HVC comes from VMID==0 and if
so will push the necessary state (SPSR, lr_usr) on the Hyp stack.
- r0 contains a pointer to a HYP function
- r1, r2, and r3 contain arguments to the above function.
- The HYP function will be called with its arguments in r0, r1 and r2.
On HYP function return, we return directly to SVC.
A call to a function executing in Hyp mode is performed like the following:
<svc code>
ldr r0, =BSYM(my_hyp_fn)
ldr r1, =my_param
hvc #0 ; Call my_hyp_fn(my_param) from HYP mode
<svc code>
Otherwise, the world-switch is pretty straightforward. All state that
can be modified by the guest is first backed up on the Hyp stack and the
VCPU values are loaded onto the hardware. State which is not loaded but is
theoretically modifiable by the guest is protected through the
virtualization features to generate a trap and cause software emulation.
Upon guest return, all state is restored from hardware onto the VCPU
struct and the original state is restored from the Hyp-stack onto the
hardware.
SMP support using the VMPIDR calculated on the basis of the host MPIDR
and overriding the low bits with KVM vcpu_id contributed by Marc Zyngier.
Reuse of VMIDs has been implemented by Antonios Motakis and adapted from
a separate patch into the appropriate patches introducing the
functionality. Note that the VMIDs are stored per VM as required by the ARM
architecture reference manual.
To support VFP/NEON we trap those instructions using the HCPTR. When
we trap, we switch the FPU. After a guest exit, the VFP state is
returned to the host. When disabling access to floating point
instructions, we also mask FPEXC_EN in order to avoid the guest
receiving Undefined instruction exceptions before we have a chance to
switch back the floating point state. We are reusing vfp_hard_struct,
so we depend on VFPv3 being enabled in the host kernel, if not, we still
trap cp10 and cp11 in order to inject an undefined instruction exception
whenever the guest tries to use VFP/NEON. VFP/NEON developed by
Antonios Motakis and Rusty Russell.
Aborts that are permission faults, and not stage-1 page table walk, do
not report the faulting address in the HPFAR. We have to resolve the
IPA, and store it just like the HPFAR register on the VCPU struct. If
the IPA cannot be resolved, it means another CPU is playing with the
page tables, and we simply restart the guest. This quirk was fixed by
Marc Zyngier.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
2013-01-20 23:47:42 +00:00
#endif
  DEFINE(KEXEC_START_ADDR, offsetof(struct kexec_relocate_data, kexec_start_address));
  DEFINE(KEXEC_INDIR_PAGE, offsetof(struct kexec_relocate_data, kexec_indirection_page));
  DEFINE(KEXEC_MACH_TYPE, offsetof(struct kexec_relocate_data, kexec_mach_type));
  DEFINE(KEXEC_R2, offsetof(struct kexec_relocate_data, kexec_r2));
  return 0;
}