License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
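For illustration, the identifier is a single comment on the first line
of each file, in the comment style the file type already uses (a sketch
of the convention, not text from this patch):

    // SPDX-License-Identifier: GPL-2.0        (C source files)
    /* SPDX-License-Identifier: GPL-2.0 */     (C header files)
    # SPDX-License-Identifier: GPL-2.0         (Makefiles, Kconfig, scripts)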
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no
licensing in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review was spent by Kate,
Philippe, and Thomas on the spreadsheet to determine the SPDX license
identifiers to apply to the source files, with confirmation by lawyers
working with the Linux Foundation in some cases.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# SPDX-License-Identifier: GPL-2.0
config ARM
        bool
        default y
32-bit userspace ABI: introduce ARCH_32BIT_OFF_T config option
All new 32-bit architectures should have a 64-bit userspace off_t type, but
existing architectures have 32-bit ones.
To enforce the rule, a new config option is added to arch/Kconfig that
defaults ARCH_32BIT_OFF_T to be disabled for new 32-bit architectures; all
existing 32-bit architectures enable it explicitly.
The new option affects force_o_largefile() behaviour: if userspace off_t is
64 bits long, we have no reason to prevent users from opening big files.
Note that even if an architecture has only a 64-bit off_t in the kernel
(arc, c6x, h8300, hexagon, nios2, openrisc, and unicore32),
a libc may use a 32-bit off_t, and therefore want to limit the file size
to 4GB unless specified differently in the open flags.
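The effect on force_o_largefile() can be sketched as the generic fallback
below (a simplified sketch; architectures may override the macro):

    /* include/linux/fcntl.h, sketch: force O_LARGEFILE on open() unless
     * the architecture declares a 32-bit userspace off_t */
    #ifndef force_o_largefile
    #define force_o_largefile() (!IS_ENABLED(CONFIG_ARCH_32BIT_OFF_T))
    #endif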
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Yury Norov <ynorov@marvell.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
        select ARCH_32BIT_OFF_T
        select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE if HAVE_KRETPROBES && FRAME_POINTER && !ARM_UNWIND
        select ARCH_HAS_BINFMT_FLAT
        select ARCH_HAS_CPU_CACHE_ALIASING
        select ARCH_HAS_CPU_FINALIZE_INIT if MMU
        select ARCH_HAS_CRC32 if KERNEL_MODE_NEON
        select ARCH_HAS_CRC_T10DIF if KERNEL_MODE_NEON
        select ARCH_HAS_CURRENT_STACK_POINTER
        select ARCH_HAS_DEBUG_VIRTUAL if MMU
        select ARCH_HAS_DMA_ALLOC if MMU
        select ARCH_HAS_DMA_OPS
        select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
        select ARCH_HAS_ELF_RANDOMIZE
        select ARCH_HAS_FORTIFY_SOURCE
        select ARCH_HAS_KEEPINITRD
        select ARCH_HAS_KCOV
        select ARCH_HAS_MEMBARRIER_SYNC_CORE
bpf: Restrict bpf_probe_read{, str}() only to archs where they work
Given the legacy bpf_probe_read{,str}() BPF helpers are broken on archs
with overlapping address ranges, we should really take the next step to
disable them from BPF use there.
To generally fix the situation, we've recently added new helper variants
bpf_probe_read_{user,kernel}() and bpf_probe_read_{user,kernel}_str().
For details on them, see 6ae08ae3dea2 ("bpf: Add probe_read_{user, kernel}
and probe_read_{user,kernel}_str helpers").
Given bpf_probe_read{,str}() have been around for ~5 years by now, there
are plenty of users at least on x86 still relying on them today, so we
cannot remove them entirely w/o breaking the BPF tracing ecosystem.
However, their use should be restricted to archs with non-overlapping
address ranges where they are working in their current form. Therefore,
move this behind a CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE and
have x86, arm64, and arm select it (other archs supporting it can follow
up on it as well).
The remaining archs can work around this easily by relying on the feature
probe from bpftool, which emits defines that can be used from BPF C code
to implement a drop-in replacement for old/new kernels via:
bpftool feature probe macros
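For instance, a drop-in wrapper in BPF C might look like the sketch below;
the guard macro name is an assumption for illustration, since the exact
define depends on bpftool's feature-probe output for the running kernel:

    /* generated via: bpftool feature probe macros > bpf_features.h */
    #include "bpf_features.h"

    #ifdef HAVE_BPF_PROBE_READ_KERNEL_HELPER          /* assumed name */
    # define probe_read(dst, sz, src) bpf_probe_read_kernel(dst, sz, src)
    #else
    # define probe_read(dst, sz, src) bpf_probe_read(dst, sz, src)
    #endif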
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/bpf/20200515101118.6508-2-daniel@iogearbox.net
        select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
        select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
        select ARCH_HAS_SETUP_DMA_OPS
        select ARCH_HAS_SET_MEMORY
        select ARCH_STACKWALK
        select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
        select ARCH_HAS_STRICT_MODULE_RWX if MMU
        select ARCH_HAS_SYNC_DMA_FOR_DEVICE
        select ARCH_HAS_SYNC_DMA_FOR_CPU
        select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
        select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
        select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
        select ARCH_HAS_GCOV_PROFILE_ALL
        select ARCH_KEEP_MEMBLOCK
        select ARCH_HAS_UBSAN
        select ARCH_MIGHT_HAVE_PC_PARPORT
        select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
        select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7
        select ARCH_NEED_CMPXCHG_1_EMU if CPU_V6
        select ARCH_SUPPORTS_ATOMIC_RMW
        select ARCH_SUPPORTS_CFI_CLANG
        select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE
        select ARCH_SUPPORTS_PER_VMA_LOCK
        select ARCH_USE_BUILTIN_BSWAP
        select ARCH_USE_CMPXCHG_LOCKREF
        select ARCH_USE_MEMTEST
        select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
        select ARCH_WANT_GENERAL_HUGETLB
ARM: config: sort select statements alphanumerically
As suggested by Andrew Morton:
This is a pet peeve of mine. Any time there's a long list of items
(header file inclusions, kconfig entries, array initalisers, etc) and
someone wants to add a new item, they *always* go and stick it at the
end of the list.
Guys, don't do this. Either put the new item into a randomly-chosen
position or, probably better, alphanumerically sort the list.
let's sort all our select statements alphanumerically. This commit was
created by the following perl:
while (<>) {
    while (/\\\s*$/) {
        $_ .= <>;
    }
    undef %selects if /^\s*config\s+/;
    if (/^\s+select\s+(\w+).*/) {
        if (defined($selects{$1})) {
            if ($selects{$1} eq $_) {
                print STDERR "Warning: removing duplicated $1 entry\n";
            } else {
                print STDERR "Error: $1 differently selected\n".
                    "\tOld: $selects{$1}\n".
                    "\tNew: $_\n";
                exit 1;
            }
        }
        $selects{$1} = $_;
        next;
    }
    if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or
                      /^endif/ or /^endchoice/)) {
        foreach $k (sort (keys %selects)) {
            print "$selects{$k}";
        }
        undef %selects;
    }
    print;
}
if (%selects) {
    foreach $k (sort (keys %selects)) {
        print "$selects{$k}";
    }
}
It found two duplicates:
Warning: removing duplicated S5P_SETUP_MIPIPHY entry
Warning: removing duplicated HARDIRQS_SW_RESEND entry
and they are identical duplicates, hence the shrinkage in the diffstat
of two lines.
We have four testers reporting success of this change (Tony, Stephen,
Linus and Sekhar).
Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
        select ARCH_WANT_IPC_PARSE_VERSION
        select ARCH_WANT_LD_ORPHAN_WARN
        select BINFMT_FLAT_ARGVP_ENVP_ON_STACK
        select BUILDTIME_TABLE_SORT if MMU
        select COMMON_CLK if !(ARCH_RPC || ARCH_FOOTBRIDGE)
        select CLONE_BACKWARDS
        select CPU_PM if SUSPEND || CPU_IDLE
        select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
        select DMA_DECLARE_COHERENT
        select DMA_GLOBAL_POOL if !MMU
        select DMA_NONCOHERENT_MMAP if MMU
        select EDAC_SUPPORT
        select EDAC_ATOMIC_SCRUB
        select GENERIC_ALLOCATOR
        select GENERIC_ARCH_TOPOLOGY if ARM_CPU_TOPOLOGY
        select GENERIC_ATOMIC64 if CPU_V7M || CPU_V6 || !CPU_32v6K || !AEABI
        select GENERIC_CLOCKEVENTS_BROADCAST if SMP
ARM: Allow IPIs to be handled as normal interrupts
In order to deal with IPIs as normal interrupts, let's add
a new way to register them with the architecture code.
set_smp_ipi_range() takes a range of interrupts, and allows
the arch code to request them as if they were normal interrupts.
A standard handler is then called by the core IRQ code to deal
with the IPI.
This means that we don't need to call irq_enter/irq_exit, and
that we don't need to deal with set_irq_regs either. So let's
move the dispatcher into its own function, and leave handle_IPI()
as a compatibility function.
On the sending side, let's make use of ipi_send_mask, which
already exists for this purpose.
One of the major differences is that, in some cases (such as when
performing IRQ time accounting on the scheduler IPI), we end up
with nested irq_enter()/irq_exit() pairs.
Other than the (relatively small) overhead, there should be
no consequences to it (these pairs are designed to nest
correctly, and the accounting shouldn't be off).
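A minimal sketch of the registration side described above (simplified
from the actual arch code; error handling and IRQ allocation omitted):

    static int ipi_irq_base;

    /* standard handler called by the core IRQ code for each IPI */
    static irqreturn_t ipi_handler(int irq, void *data)
    {
        do_handle_IPI(irq - ipi_irq_base);  /* the moved dispatcher */
        return IRQ_HANDLED;
    }

    void __init set_smp_ipi_range(int ipi_base, int n)
    {
        int i;

        ipi_irq_base = ipi_base;
        for (i = 0; i < n; i++)
            WARN_ON(request_percpu_irq(ipi_base + i, ipi_handler,
                                       "IPI", &irq_stat));
    }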
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
        select GENERIC_IRQ_IPI if SMP
        select GENERIC_CPU_AUTOPROBE
        select GENERIC_CPU_DEVICES
        select GENERIC_EARLY_IOREMAP
        select GENERIC_IDLE_POLL_SETUP
        select GENERIC_IRQ_MULTI_HANDLER
        select GENERIC_IRQ_PROBE
        select GENERIC_IRQ_SHOW
        select GENERIC_IRQ_SHOW_LEVEL
        select GENERIC_LIB_DEVMEM_IS_ALLOWED
        select GENERIC_PCI_IOMAP
        select GENERIC_SCHED_CLOCK
        select GENERIC_SMP_IDLE_THREAD
        select HARDIRQS_SW_RESEND
        select HAS_IOPORT
        select HAVE_ARCH_AUDITSYSCALL if AEABI && !OABI_COMPAT
        select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
        select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
        select HAVE_ARCH_KFENCE if MMU && !XIP_KERNEL
        select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
        select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
        select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
        select HAVE_ARCH_MMAP_RND_BITS if MMU
        select HAVE_ARCH_PFN_VALID
seccomp: Move config option SECCOMP to arch/Kconfig
In order to make adding configurable features into seccomp easier,
it's better to have the options at one single location, considering
especially that the bulk of seccomp code is arch-independent. A quick
look also shows that many SECCOMP descriptions are outdated; they talk
about /proc rather than prctl.
As a result of moving the config option and keeping it default on,
architectures arm, arm64, csky, riscv, sh, and xtensa did not have SECCOMP
on by default prior to this; with this change, SECCOMP is on by default.
Architectures microblaze, mips, powerpc, s390, sh, and sparc have an
outdated dependency on PROC_FS, which is removed in this change.
Suggested-by: Jann Horn <jannh@google.com>
Link: https://lore.kernel.org/lkml/CAG48ez1YWz9cnp08UZgeieYRhHdqh-ch7aNwc4JRBnGyrmgfMg@mail.gmail.com/
Signed-off-by: YiFei Zhu <yifeifz2@illinois.edu>
[kees: added HAVE_ARCH_SECCOMP help text, tweaked wording]
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/9ede6ef35c847e58d61e476c6a39540520066613.1600951211.git.yifeifz2@illinois.edu
        select HAVE_ARCH_SECCOMP
        select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
        select HAVE_ARCH_STACKLEAK
        select HAVE_ARCH_THREAD_STRUCT_WHITELIST
        select HAVE_ARCH_TRACEHOOK
        select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARM_LPAE
        select HAVE_ARM_SMCCC if CPU_V7
        select HAVE_EBPF_JIT if !CPU_ENDIAN_BE32
        select HAVE_CONTEXT_TRACKING_USER
        select HAVE_C_RECORDMCOUNT
        select HAVE_BUILDTIME_MCOUNT_SORT
        select HAVE_DEBUG_KMEMLEAK if !XIP_KERNEL
        select HAVE_DMA_CONTIGUOUS if MMU
        select HAVE_DYNAMIC_FTRACE if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
        select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
        select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
        select HAVE_EXIT_THREAD
        select HAVE_GUP_FAST if ARM_LPAE
        select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
        select HAVE_FUNCTION_ERROR_INJECTION
        select HAVE_FUNCTION_GRAPH_TRACER
        select HAVE_FUNCTION_TRACER if !XIP_KERNEL
        select HAVE_GCC_PLUGINS
        select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
        select HAVE_IRQ_TIME_ACCOUNTING
        select HAVE_KERNEL_GZIP
        select HAVE_KERNEL_LZ4
        select HAVE_KERNEL_LZMA
        select HAVE_KERNEL_LZO
        select HAVE_KERNEL_XZ
        select HAVE_KPROBES if !XIP_KERNEL && !CPU_ENDIAN_BE32 && !CPU_V7M
        select HAVE_KRETPROBES if HAVE_KPROBES
        select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if (LD_VERSION >= 23600 || LD_IS_LLD)
        select HAVE_MOD_ARCH_SPECIFIC
        select HAVE_NMI
        select HAVE_OPTPROBES if !THUMB2_KERNEL
        select HAVE_PAGE_SIZE_4KB
        select HAVE_PCI if MMU
        select HAVE_PERF_EVENTS
        select HAVE_PERF_REGS
        select HAVE_PERF_USER_STACK_DUMP
        select MMU_GATHER_RCU_TABLE_FREE if SMP && ARM_LPAE
        select HAVE_REGS_AND_STACK_ACCESS_API
        select HAVE_RSEQ
        select HAVE_STACKPROTECTOR
        select HAVE_SYSCALL_TRACEPOINTS
        select HAVE_UID16
        select HAVE_VIRT_CPU_ACCOUNTING_GEN
        select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
        select IRQ_FORCED_THREADING
        select LOCK_MM_AND_FIND_VMA
        select MODULES_USE_ELF_REL
        select NEED_DMA_MAP_STATE
        select OF_EARLY_FLATTREE if OF
        select OLD_SIGACTION
        select OLD_SIGSUSPEND3
        select PCI_DOMAINS_GENERIC if PCI
        select PCI_SYSCALL if PCI
        select PERF_USE_VMALLOC
        select RTC_LIB
        select SPARSE_IRQ if !(ARCH_FOOTBRIDGE || ARCH_RPC)
        select SYS_SUPPORTS_APM_EMULATION
ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems
On UP systems, only a single task can be 'current' at the same time,
which means we can use a global variable to track it. This means we can
also enable THREAD_INFO_IN_TASK for those systems, as in that case,
thread_info is accessed via current rather than the other way around,
removing the need to store thread_info at the base of the task stack.
This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP
systems as well.
To partially mitigate the performance overhead of this arrangement, use
an ADD/ADD/LDR sequence with the appropriate PC-relative group
relocations to load the value of current when needed. This means that
accessing current will still only require a single load as before,
avoiding the need for a literal to carry the address of the global
variable in each function. However, accessing thread_info will now
require this load as well.
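At the C level the arrangement amounts to the following sketch (the real
code emits the ADD/ADD/LDR sequence via inline asm and the group
relocations; __current is the UP-only global mentioned above):

    extern struct task_struct *__current;

    static __always_inline struct task_struct *get_current(void)
    {
        return __current;       /* a single PC-relative load */
    }
    #define current get_current()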
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
        select THREAD_INFO_IN_TASK
        select TIMER_OF if OF
        select HAVE_ARCH_VMAP_STACK if MMU && ARM_HAS_GROUP_RELOCS
        select TRACE_IRQFLAGS_SUPPORT if !CPU_V7M
        select USE_OF if !(ARCH_FOOTBRIDGE || ARCH_RPC || ARCH_SA1100)
        # Above selects are sorted alphabetically; please add new ones
        # according to that. Thanks.
        help
          The ARM series is a line of low-power-consumption RISC chip designs
          licensed by ARM Ltd and targeted at embedded applications and
          handhelds such as the Compaq IPAQ. ARM-based PCs are no longer
          manufactured, but legacy ARM-based PC hardware remains popular in
          Europe. There is an ARM Linux project with a web page at
          <http://www.arm.linux.org.uk/>.

config ARM_HAS_GROUP_RELOCS
        def_bool y
        depends on !LD_IS_LLD || LLD_VERSION >= 140000
        depends on !COMPILE_TEST
        help
          Whether or not to use R_ARM_ALU_PC_Gn or R_ARM_LDR_PC_Gn group
          relocations, which have been around for a long time, but were not
          supported in LLD until version 14. The combined range is -/+ 256 MiB,
          which is usually sufficient, but not for allyesconfig, so we disable
          this feature when doing compile testing.

config ARM_DMA_USE_IOMMU
        bool
        select NEED_SG_DMA_LENGTH

if ARM_DMA_USE_IOMMU

config ARM_DMA_IOMMU_ALIGNMENT
        int "Maximum PAGE_SIZE order of alignment for DMA IOMMU buffers"
        range 4 9
        default 8
        help
          The DMA mapping framework by default aligns all buffers to the
          smallest PAGE_SIZE order which is greater than or equal to the
          requested buffer size. This works well for buffers up to a few
          hundred kilobytes, but for larger buffers it is just a waste of
          address space. Drivers which have a relatively small addressing
          window (like 64 MiB) might run out of virtual space with just
          a few allocations.

          With this parameter you can specify the maximum PAGE_SIZE order
          for DMA IOMMU buffers. Larger buffers will be aligned only to
          this specified order. The order is expressed as a power of two
          multiplied by the PAGE_SIZE. For example, with 4 KiB pages and
          the default order of 8, buffers are aligned to at most
          2^8 * 4 KiB = 1 MiB.

endif
config SYS_SUPPORTS_APM_EMULATION
        bool

config HAVE_TCM
        bool
        select GENERIC_ALLOCATOR

config HAVE_PROC_CPU
        bool

config NO_IOPORT_MAP
        bool

config SBUS
        bool

config STACKTRACE_SUPPORT
        bool
        default y

config LOCKDEP_SUPPORT
        bool
        default y

config ARCH_HAS_ILOG2_U32
        bool

config ARCH_HAS_ILOG2_U64
        bool

config ARCH_HAS_BANDGAP
        bool

config FIX_EARLYCON_MEM
        def_bool y if MMU

config GENERIC_HWEIGHT
        bool
        default y

config GENERIC_CALIBRATE_DELAY
        bool
        default y

config ARCH_MAY_HAVE_PC_FDC
        bool

config ARCH_SUPPORTS_UPROBES
        def_bool y

config GENERIC_ISA_DMA
        bool

config FIQ
        bool

config ARCH_MTD_XIP
        bool
ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching
This idea came from Nicolas, Eric Miao produced an initial version,
which was then rewritten into this.
Patch the physical to virtual translations at runtime. As we modify
the code, this makes it incompatible with XIP kernels, but allows us
to achieve this with minimal loss of performance.
As many translations are of the form:
physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)
we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET
- PAGE_OFFSET) by comparing the address prior to MMU initialization with
where it should be once the MMU has been initialized, and place this
constant into the above add/sub instructions.
Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
the C-mode PHYS_OFFSET variable definition to use.
At present, we are unable to support Realview with Sparsemem enabled
as this uses a complex mapping function, and MSM as this requires a
constant which will not fit in our math instruction.
Add a module version magic string for this feature to prevent
incompatible modules being loaded.
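A simplified sketch of a patched translation (the real code wraps the
instruction in a stub macro and records its address in a table so the
boot code can rewrite the immediate; #0 below is only the placeholder):

    static inline unsigned long __virt_to_phys(unsigned long x)
    {
        unsigned long t;

        /* immediate patched at boot with PHYS_OFFSET - PAGE_OFFSET */
        asm volatile("add %0, %1, #0" : "=r" (t) : "r" (x));
        return t;
    }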
Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
config ARM_PATCH_PHYS_VIRT
        bool "Patch physical to virtual translations at runtime" if !ARCH_MULTIPLATFORM
        default y
        depends on MMU
        help
          Patch phys-to-virt and virt-to-phys translation functions at
          boot and module load time according to the position of the
          kernel in system memory.
ARM: p2v: reduce p2v alignment requirement to 2 MiB
The ARM kernel's linear map starts at PAGE_OFFSET, which maps to a
physical address (PHYS_OFFSET) that is platform specific, and is
discovered at boot. Since we don't want to slow down translations
between physical and virtual addresses by keeping the offset in a
variable in memory, we implement this by patching the code performing
the translation, and putting the offset between PAGE_OFFSET and the
start of physical RAM directly into the instruction opcodes.
As we only patch up to 8 bits of offset, yielding 4 GiB >> 8 == 16 MiB
of granularity, we have to round up PHYS_OFFSET to the next multiple of
16 MiB if the start of physical RAM is not itself such a multiple. This
wastes some
physical RAM, since the memory that was skipped will now live below
PAGE_OFFSET, making it inaccessible to the kernel.
We can improve this by changing the patchable sequences and the patching
logic to carry more bits of offset: 11 bits gives us 4 GiB >> 11 == 2 MiB
of granularity, and so we will never waste more than that amount by
rounding up the physical start of DRAM to the next multiple of 2 MiB.
(Note that 2 MiB granularity guarantees that the linear mapping can be
created efficiently, whereas less than 2 MiB may result in the linear
mapping needing another level of page tables)
This helps Zhen Lei's scenario, where the start of DRAM is known to be
occupied. It also helps EFI boot, which relies on the firmware's page
allocator to allocate space for the decompressed kernel as low as
possible. And if the KASLR patches ever land for 32-bit, it will give
us 3 more bits of randomization of the placement of the kernel inside
the linear region.
For the ARM code path, it simply comes down to using two add/sub
instructions instead of one for the carryless version, and patching
each of them with the correct immediate depending on the rotation
field. For the LPAE calculation, which has to deal with a carry, it
patches the MOVW instruction with up to 12 bits of offset (but we only
need 11 bits anyway).
For the Thumb2 code path, patching more than 11 bits of displacement
would be somewhat cumbersome, but the 11 bits we need fit nicely into
the second word of the u16[2] opcode, so we simply update the immediate
assignment and the left shift to create an addend of the right magnitude.
Suggested-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
          This can only be used with non-XIP MMU kernels where the base
          of physical memory is at a 2 MiB boundary.
ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching
This idea came from Nicolas, Eric Miao produced an initial version,
which was then rewritten into this.
Patch the physical to virtual translations at runtime. As we modify
the code, this makes it incompatible with XIP kernels, but allows us
to achieve this with minimal loss of performance.
As many translations are of the form:
physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)
we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET
- PAGE_OFFSET) by comparing the address prior to MMU initialization with
where it should be once the MMU has been initialized, and place this
constant into the above add/sub instructions.
Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
the C-mode PHYS_OFFSET variable definition to use.
At present, we are unable to support Realview with Sparsemem enabled
as this uses a complex mapping function, and MSM as this requires a
constant which will not fit in our math instruction.
Add a module version magic string for this feature to prevent
incompatible modules being loaded.
Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-01-04 19:09:43 +00:00
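For reference, here is a minimal C rendering of the two translations that the patching implements (a sketch under stated assumptions: the kernel embeds the boot-time constant directly in the add/sub opcodes rather than loading it from a variable, and the PAGE_OFFSET value below is just a typical choice):
#define PAGE_OFFSET 0xc0000000UL  /* build-time constant (typical value) */
static unsigned long phys_offset; /* PHYS_OFFSET, discovered at boot */
/* __virt_to_phys(): becomes a single patched 'add' instruction */
static inline unsigned long virt_to_phys_sketch(unsigned long virt)
{
        return virt + (phys_offset - PAGE_OFFSET);
}
/* __phys_to_virt(): becomes a single patched 'sub' instruction */
static inline unsigned long phys_to_virt_sketch(unsigned long phys)
{
        return phys - (phys_offset - PAGE_OFFSET);
}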
2011-08-10 09:23:45 +00:00
Only disable this option if you know that you do not require
this feature (eg, building a kernel for a single machine) and
you need to shrink the kernel to the minimal size.
2011-01-04 19:09:43 +00:00
2012-03-05 04:03:33 +00:00
config NEED_MACH_IO_H
bool
help
Select this when mach/io.h is required to provide special
definitions for this platform. The need for mach/io.h should
be avoided when possible.
2011-09-03 02:26:55 +00:00
config NEED_MACH_MEMORY_H
2011-07-06 02:52:51 +00:00
bool
help
2011-09-03 02:26:55 +00:00
Select this when mach/memory.h is required to provide special
definitions for this platform. The need for mach/memory.h should
be avoided when possible.
2011-01-04 19:09:43 +00:00
2011-07-06 02:52:51 +00:00
config PHYS_OFFSET
2011-12-02 22:09:42 +00:00
hex "Physical address of main memory" if MMU
2022-07-27 07:26:45 +00:00
depends on !ARM_PATCH_PHYS_VIRT || !AUTO_ZRELADDR
2011-12-02 22:09:42 +00:00
default DRAM_BASE if !MMU
2022-02-11 22:32:38 +00:00
default 0x00000000 if ARCH_FOOTBRIDGE
2014-07-23 19:37:43 +00:00
default 0x10000000 if ARCH_OMAP1 || ARCH_RPC
2022-09-29 13:31:18 +00:00
default 0xa0000000 if ARCH_PXA
2021-10-18 14:30:39 +00:00
default 0xc0000000 if ARCH_EP93XX || ARCH_SA1100
default 0
2011-05-12 09:02:42 +00:00
help
2011-07-06 02:52:51 +00:00
Please provide the physical address corresponding to the
location of main memory in your system.
2011-01-04 19:39:29 +00:00
2011-08-16 22:44:26 +00:00
config GENERIC_BUG
def_bool y
depends on BUG
2015-04-14 22:45:42 +00:00
config PGTABLE_LEVELS
int
default 3 if ARM_LPAE
default 2
2005-04-16 22:20:36 +00:00
menu "System Type"
2009-07-24 11:35:00 +00:00
config MMU
bool "MMU-based Paged Memory Management Support"
default y
help
Select if you want MMU-based virtualised addressing space
support by paged memory management. If unsure, say 'Y'.
ARM: remove support for NOMMU ARMv4/v5
It is possible to build MMU-less kernels for Cortex-M based
microcontrollers as well as a couple of older platforms that
have not been converted to CONFIG_ARCH_MULTIPLATFORM,
specifically ep93xx, footbridge, dove, sa1100 and s3c24xx.
It seems unlikely that anybody has tested those configurations
in recent years, as even building them is frequently broken.
A patch I submitted caused another build time regression
in this configuration. I sent a patch for that, but it seems
better to also remove the option entirely, leaving ARMv7-M
as the only supported Arm NOMMU target for simplicity.
A couple of platforms have dependencies on CONFIG_MMU, those
can all be removed now. Notably, mach-integrator tries to
support MMU-less CPU cores, but those have not actually been
selectable for a long time.
This addresses several build failures in randconfig builds that
have accumulated over the years.
Cc: Vladimir Murzin <vladimir.murzin@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-03-09 13:20:20 +00:00
config ARM_SINGLE_ARMV7M
def_bool !MMU
select ARM_NVIC
select CPU_V7M
select NO_IOPORT_MAP
2016-01-14 23:19:57 +00:00
config ARCH_MMAP_RND_BITS_MIN
default 8
config ARCH_MMAP_RND_BITS_MAX
default 14 if PAGE_OFFSET=0x40000000
default 15 if PAGE_OFFSET=0x80000000
default 16
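The maximum usable randomization shrinks as PAGE_OFFSET rises, because less of the 4 GiB address space is left for user mappings. As a rough sketch (an assumption here: the random offset is the configured number of bits' worth of 4 KiB pages, which is how the ARM arch_mmap_rnd() applies it), the resulting mmap randomization spans are:
#include <stdio.h>
/* Span of mmap base randomization for a given number of random bits,
 * assuming 4 KiB pages (rnd << PAGE_SHIFT). */
int main(void)
{
        int bits[] = { 8, 14, 15, 16 };
        for (int i = 0; i < 4; i++) {
                unsigned long long span = (1ULL << bits[i]) * 4096;
                printf("%2d bits -> %3llu MiB of randomization\n",
                       bits[i], span >> 20);
        }
        return 0;
}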
ARM: initial multiplatform support
This lets us build a multiplatform kernel for experimental purposes.
However, it will not be useful for any real work, because it relies
on a number of useful things to be disabled for now:
* SMP support must be turned off because of conflicting symbols.
Marc Zyngier has proposed a solution by adding a new SOC
operations structure to hold indirect function pointers
for these, but that work is currently stalled
* We turn on SPARSE_IRQ unconditionally, which is not supported
on most platforms. Each of them is currently in a different
state, but most are being worked on.
* A common clock framework is in place since v3.4 but not yet
being used. Work on this is on its way.
* DEBUG_LL for early debugging is currently disabled.
* THUMB2_KERNEL does not work with allyesconfig because the
kernel gets too big
[Rob Herring]: Rebased to not be dependent on the mass mach header rename.
As a result, omap2plus, imx, mxs and ux500 are not converted. Highbank,
picoxcell, mvebu, and socfpga are converted.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Andrew Lunn <andrew@lunn.ch>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Cc: Dinh Nguyen <dinguyen@altera.com>
2012-09-06 18:41:12 +00:00
config ARCH_MULTIPLATFORM
2022-07-27 12:08:24 +00:00
bool "Require kernel to be portable to multiple machines" if EXPERT
depends on MMU && !(ARCH_FOOTBRIDGE || ARCH_RPC || ARCH_SA1100)
default y
2005-04-16 22:20:36 +00:00
help
2022-07-27 12:08:24 +00:00
In general, all Arm machines can be supported in a single
kernel image, covering either Armv4/v5 or Armv6/v7.
2005-04-16 22:20:36 +00:00
2022-07-27 12:08:24 +00:00
However, some configuration options require hardcoding machine
specific physical addresses or enable errata workarounds that may
break other machines.
2005-04-16 22:20:36 +00:00
2022-07-27 12:08:24 +00:00
Selecting N here allows using those options, including
DEBUG_UNCOMPRESS, XIP_KERNEL and ZBOOT_ROM. If unsure, say Y.
2005-04-16 22:20:36 +00:00
2023-11-13 14:43:51 +00:00
source "arch/arm/Kconfig.platforms"
2022-01-30 14:51:06 +00:00
2010-03-15 19:03:06 +00:00
#
# This is sorted alphabetically by mach-* pathname. However, plat-*
# Kconfigs may be included either alphabetically (according to the
# plat- suffix) or along side the corresponding mach-* source.
#
2017-02-15 10:03:22 +00:00
source "arch/arm/mach-actions/Kconfig"
2015-03-12 11:53:00 +00:00
source "arch/arm/mach-alpine/Kconfig"
2016-02-11 16:06:19 +00:00
source "arch/arm/mach-artpec/Kconfig"
2018-02-27 13:37:47 +00:00
source "arch/arm/mach-aspeed/Kconfig"
2010-01-14 11:43:54 +00:00
source "arch/arm/mach-at91/Kconfig"
2014-05-23 09:08:35 +00:00
source "arch/arm/mach-axxia/Kconfig"
2012-11-19 17:46:10 +00:00
source "arch/arm/mach-bcm/Kconfig"
2013-09-09 12:36:19 +00:00
source "arch/arm/mach-berlin/Kconfig"
2005-04-16 22:20:36 +00:00
source "arch/arm/mach-clps711x/Kconfig"
2010-01-14 11:43:54 +00:00
source "arch/arm/mach-davinci/Kconfig"
2015-01-14 08:40:30 +00:00
source "arch/arm/mach-digicolor/Kconfig"
2010-01-14 11:43:54 +00:00
source "arch/arm/mach-dove/Kconfig"
[ARM] 3369/1: ep93xx: add core cirrus ep93xx support
Patch from Lennert Buytenhek
This patch adds support for the Cirrus ep93xx series of CPUs. The
ep93xx is an ARM920T based CPU with two VICs, PL010 based UARTs,
IrDA, MaverickCrunch floating point coprocessor, between 24 and 64
GPIOs, ethernet, OHCI USB and, depending on the model, pcmcia, raster
engine, graphics accelerator, IDE controller and a bunch of other
stuff.
This patch adds the core ep93xx support code, and support for the
Glomation GESBC-9312-sx and the Technologic Systems TS-72xx SBCs.
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-03-20 17:10:13 +00:00
source "arch/arm/mach-ep93xx/Kconfig"
2018-02-27 13:37:47 +00:00
source "arch/arm/mach-exynos/Kconfig"
2005-04-16 22:20:36 +00:00
source "arch/arm/mach-footbridge/Kconfig"
2009-03-26 08:06:08 +00:00
source "arch/arm/mach-gemini/Kconfig"
2012-09-06 18:41:12 +00:00
source "arch/arm/mach-highbank/Kconfig"
2013-12-20 02:52:56 +00:00
source "arch/arm/mach-hisi/Kconfig"
2022-05-16 16:33:39 +00:00
source "arch/arm/mach-hpe/Kconfig"
2018-02-27 13:37:47 +00:00
source "arch/arm/mach-imx/Kconfig"
2005-04-16 22:20:36 +00:00
source "arch/arm/mach-ixp4xx/Kconfig"
2013-06-10 15:27:13 +00:00
source "arch/arm/mach-keystone/Kconfig"
2019-08-09 14:40:39 +00:00
source "arch/arm/mach-lpc32xx/Kconfig"
2010-01-14 11:43:54 +00:00
2018-02-27 13:37:47 +00:00
source "arch/arm/mach-mediatek/Kconfig"
2014-09-10 20:16:59 +00:00
source "arch/arm/mach-meson/Kconfig"
2019-02-27 04:52:33 +00:00
source "arch/arm/mach-milbeaut/Kconfig"
2018-02-27 13:37:47 +00:00
source "arch/arm/mach-mmp/Kconfig"
2013-12-18 12:58:45 +00:00
2020-07-10 09:45:38 +00:00
source "arch/arm/mach-mstar/Kconfig"
[ARM] add Marvell 78xx0 ARM SoC support
The Marvell Discovery Duo (MV78xx0) is a family of ARM SoCs featuring
(depending on the model) one or two Feroceon CPU cores with 512K of L2
cache and VFP coprocessors running at (depending on the model) between
800 MHz and 1.2 GHz, and features a DDR2 controller, two PCIe
interfaces that can each run either in x4 or quad x1 mode, three USB
2.0 interfaces, two 3Gb/s SATA II interfaces, a SPI interface, two
TWSI interfaces, a crypto accelerator, IDMA/XOR engines, a SPI
interface, four UARTs, and depending on the model, two or four gigabit
ethernet interfaces.
This patch adds basic support for the platform, and allows booting
on the MV78x00 development board, with functional UARTs, SATA, PCIe,
GigE and USB ports.
Signed-off-by: Stanislav Samsonov <samsonov@marvell.com>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-06-22 20:45:10 +00:00
source "arch/arm/mach-mv78xx0/Kconfig"
2018-02-27 13:37:47 +00:00
source "arch/arm/mach-mvebu/Kconfig"
2014-05-12 23:06:13 +00:00
2010-12-13 12:55:03 +00:00
source "arch/arm/mach-mxs/Kconfig"
2010-01-14 11:43:54 +00:00
source "arch/arm/mach-nomadik/Kconfig"
2017-08-16 19:18:39 +00:00
source "arch/arm/mach-npcm/Kconfig"
2005-07-10 18:58:17 +00:00
source "arch/arm/mach-omap1/Kconfig"
2005-04-16 22:20:36 +00:00
2005-11-10 14:26:51 +00:00
source "arch/arm/mach-omap2/Kconfig"
2008-03-27 18:51:41 +00:00
source "arch/arm/mach-orion5x/Kconfig"
[ARM] basic support for the Marvell Orion SoC family
The Marvell Orion is a family of ARM SoCs with a DDR/DDR2 memory
controller, 10/100/1000 ethernet MAC, and USB 2.0 interfaces,
and, depending on the specific model, PCI-E interface, PCI-X
interface, SATA controllers, crypto unit, SPI interface, SDIO
interface, device bus, NAND controller, DMA engine and/or XOR
engine.
This contains the basic structure and architecture register definitions.
Signed-off-by: Tzachi Perelstein <tzachi@marvell.com>
Reviewed-by: Nicolas Pitre <nico@marvell.com>
Reviewed-by: Lennert Buytenhek <buytenh@marvell.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-10-23 19:14:41 +00:00
2010-01-14 11:43:54 +00:00
source "arch/arm/mach-pxa/Kconfig"
2007-10-23 19:14:41 +00:00
2014-01-21 23:14:10 +00:00
source "arch/arm/mach-qcom/Kconfig"
2017-10-05 01:59:15 +00:00
source "arch/arm/mach-realtek/Kconfig"
2022-07-26 12:16:37 +00:00
source "arch/arm/mach-rpc/Kconfig"
2013-06-02 21:09:41 +00:00
source "arch/arm/mach-rockchip/Kconfig"
2019-09-02 15:47:55 +00:00
source "arch/arm/mach-s3c/Kconfig"
2018-02-27 13:37:47 +00:00
source "arch/arm/mach-s5pv210/Kconfig"
2010-01-14 11:43:54 +00:00
source "arch/arm/mach-sa1100/Kconfig"
2009-08-06 12:12:43 +00:00
2018-02-27 13:37:47 +00:00
source "arch/arm/mach-shmobile/Kconfig"
2012-09-06 18:41:12 +00:00
source "arch/arm/mach-socfpga/Kconfig"
2012-12-02 14:12:47 +00:00
source "arch/arm/mach-spear/Kconfig"
2007-02-11 17:31:01 +00:00
2013-06-25 11:15:10 +00:00
source "arch/arm/mach-sti/Kconfig"
2017-01-30 16:33:13 +00:00
source "arch/arm/mach-stm32/Kconfig"
2012-11-08 11:40:16 +00:00
source "arch/arm/mach-sunxi/Kconfig"
2010-01-22 00:53:02 +00:00
source "arch/arm/mach-tegra/Kconfig"
2010-01-14 11:43:54 +00:00
source "arch/arm/mach-ux500/Kconfig"
2005-04-16 22:20:36 +00:00
source "arch/arm/mach-versatile/Kconfig"
2012-10-11 07:13:09 +00:00
source "arch/arm/mach-vt8500/Kconfig"
2012-11-19 17:38:29 +00:00
source "arch/arm/mach-zynq/Kconfig"
2015-05-20 22:35:44 +00:00
# ARMv7-M architecture
config ARCH_LPC18XX
bool "NXP LPC18xx/LPC43xx"
depends on ARM_SINGLE_ARMV7M
select ARCH_HAS_RESET_CONTROLLER
select ARM_AMBA
select CLKSRC_LPC32XX
select PINCTRL
help
Support for NXP's LPC18xx Cortex-M3 and LPC43xx Cortex-M4
high performance microcontrollers.
2016-04-25 08:49:13 +00:00
config ARCH_MPS2
2016-07-17 08:35:29 +00:00
bool "ARM MPS2 platform"
2016-04-25 08:49:13 +00:00
depends on ARM_SINGLE_ARMV7M
select ARM_AMBA
select CLKSRC_MPS2
help
Support for Cortex-M Prototyping System (or V2M-MPS2) which comes
with a range of available cores like Cortex-M3/M4/M7.
Please note that, depending on which Application Note is used, the memory map
for the platform may vary, so adjustment of the RAM base might be needed.
2005-04-16 22:20:36 +00:00
# Definitions to make life easier
config ARCH_ACORN
bool
2008-03-27 18:51:39 +00:00
config PLAT_ORION
bool
2011-05-08 14:33:30 +00:00
select CLKSRC_MMIO
2011-05-22 09:01:21 +00:00
select GENERIC_IRQ_CHIP
2012-06-27 11:40:04 +00:00
select IRQ_DOMAIN
2008-03-27 18:51:39 +00:00
arm: plat-orion: introduce PLAT_ORION_LEGACY hidden config option
Until now, the PLAT_ORION configuration option was common to all the
Marvell EBU SoCs, and selecting this option had the effect of enabling
the MPP code, GPIO code, address decoding and PCIe code from
plat-orion, as well as providing access to driver-specific header
files from plat-orion/include.
However, the Armada 370 and XP SoCs will not use the MPP and GPIO code
(instead some proper pinctrl and gpio drivers are in preparation), and
generally, we want to move away from plat-orion and instead have
everything in mach-mvebu.
That said, in the mean time, we want to leverage the driver-specific
headers as well as the address decoding code, so we introduce
PLAT_ORION_LEGACY. The older Marvell SoCs need to select
PLAT_ORION_LEGACY, while the newer Marvell SoCs need to select
PLAT_ORION. Of course, when PLAT_ORION_LEGACY is selected, it
automatically selects PLAT_ORION.
Then, with just PLAT_ORION, you have the address decoding code plus
the driver-specific headers. If you add PLAT_ORION_LEGACY to this, you
gain the old MPP, GPIO and PCIe code.
Again, this is only a temporary solution until we make all Marvell EBU
platforms converge into the mach-mvebu directory. This solution avoids
duplicating the existing address decoding code into mach-mvebu.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2012-09-11 12:27:27 +00:00
config PLAT_ORION_LEGACY
bool
select PLAT_ORION
2010-01-14 12:48:06 +00:00
config PLAT_VERSATILE
bool
2018-12-11 11:01:04 +00:00
source "arch/arm/mm/Kconfig"
2005-04-16 22:20:36 +00:00
[ARM] 3881/4: xscale: clean up cp0/cp1 handling
XScale cores either have a DSP coprocessor (which contains a single
40 bit accumulator register), or an iWMMXt coprocessor (which contains
eight 64 bit registers.)
Because of the small amount of state in the DSP coprocessor, access to
the DSP coprocessor (CP0) is always enabled, and DSP context switching
is done unconditionally on every task switch. Access to the iWMMXt
coprocessor (CP0/CP1) is enabled only when an iWMMXt instruction is
first issued, and iWMMXt context switching is done lazily.
CONFIG_IWMMXT is supposed to mean 'the cpu we will be running on will
have iWMMXt support', but boards are supposed to select this config
symbol by hand, and at least one pxa27x board doesn't get this right,
so on that board, proc-xscale.S will incorrectly assume that we have a
DSP coprocessor, enable CP0 on boot, and we will then only save the
first iWMMXt register (wR0) on context switches, which is Bad.
This patch redefines CONFIG_IWMMXT as 'the cpu we will be running on
might have iWMMXt support, and we will enable iWMMXt context switching
if it does.' This means that with this patch, running a CONFIG_IWMMXT=n
kernel on an iWMMXt-capable CPU will no longer potentially corrupt iWMMXt
state over context switches, and running a CONFIG_IWMMXT=y kernel on a
non-iWMMXt capable CPU will still do DSP context save/restore.
These changes should make iWMMXt work on PXA3xx, and as a side effect,
enable proper acc0 save/restore on non-iWMMXt capable xsc3 cores such
as IOP13xx and IXP23xx (which will not have CONFIG_CPU_XSCALE defined),
as well as setting and using HWCAP_IWMMXT properly.
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-12-03 17:51:14 +00:00
config IWMMXT
2014-04-24 21:58:30 +00:00
bool "Enable iWMMXt support"
ARM: 9352/1: iwmmxt: Remove support for PJ4/PJ4B cores
PJ4 is a v7 core that incorporates a iWMMXt coprocessor. However, GCC
does not support this combination (its iWMMXt configuration always
implies v5te), and so there is no v6/v7 user space that actually makes
use of this, beyond generic support for things like setjmp() that
preserve/restore the iWMMXt register file using generic LDC/STC
instructions emitted in assembler. As [0] appears to imply, this logic
is triggered for the init process at boot, and so most user threads will
have a iWMMXt register context associated with it, even though it is
never used.
At this point, it is highly unlikely that such GCC support will ever
materialize (and Clang does not implement support for iWMMXt to begin
with).
This means that advertising iWMMXt support on these cores results in
context switch overhead without any associated benefit, and so it is
better to simply ignore the iWMMXt unit on these systems. So rip out the
support. Doing so also fixes the issue reported in [0] related to UNDEF
handling of co-processor #0/#1 instructions issued from user space
running in Thumb2 mode.
The PJ4 cores are used in four platforms: Armada 370/xp, Dove (Cubox,
d2plug), MMP2 (xo-1.75) and Berlin (Google TV). Out of these, only the
first is still widely used, but that one actually doesn't have iWMMXt
but instead has only VFPV3-D16, and so it is not impacted by this
change.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218427 [0]
Fixes: 8bcba70cb5c22 ("ARM: entry: Disregard Thumb undef exception ...")
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2024-02-14 07:03:24 +00:00
depends on CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK
default y if PXA27x || PXA3xx || ARCH_MMP
2006-12-03 17:51:14 +00:00
help
Enable support for iWMMXt context switching at run time if
running on a CPU that supports it.
2006-06-22 10:48:56 +00:00
if !MMU
source "arch/arm/Kconfig-nommu"
endif
2013-06-23 09:17:11 +00:00
config PJ4B_ERRATA_4742
bool "PJ4B Errata 4742: IDLE Wake Up Commands can Cause the CPU Core to Cease Operation"
depends on CPU_PJ4B && MACH_ARMADA_370
default y
help
When coming out of either a Wait for Interrupt (WFI) or a Wait for
Event (WFE) IDLE states, a specific timing sensitivity exists between
the retiring WFI/WFE instructions and the newly issued subsequent
instructions. This sensitivity can result in a CPU hang scenario.
Workaround:
The software must insert either a Data Synchronization Barrier (DSB)
or Data Memory Barrier (DMB) command immediately after the WFI/WFE
instruction
2012-04-20 16:20:08 +00:00
config ARM_ERRATA_326103
bool "ARM errata: FSR write bit incorrect on a SWP to read-only memory"
depends on CPU_V6
help
Executing a SWP instruction to read-only memory does not set bit 11
of the FSR on the ARM 1136 prior to r1p0. This causes the kernel to
treat the access as a read, preventing a COW from occurring and
causing the faulting task to livelock.
2009-04-30 16:06:03 +00:00
config ARM_ERRATA_411920
bool "ARM errata: Invalidation of the Instruction Cache operation can fail"
2011-01-17 15:08:32 +00:00
depends on CPU_V6 || CPU_V6K
2009-04-30 16:06:03 +00:00
help
Invalidation of the Instruction Cache operation can
fail. This erratum is present in 1136 (before r1p4), 1156 and 1176.
It does not affect the MPCore. This option enables the ARM Ltd.
recommended workaround.
2009-04-30 16:06:09 +00:00
config ARM_ERRATA_430973
bool "ARM errata: Stale prediction on replaced interworking branch"
depends on CPU_V7
help
This option enables the workaround for the 430973 Cortex-A8
2015-04-13 15:14:37 +00:00
r1p* erratum. If a code sequence containing an ARM/Thumb
2009-04-30 16:06:09 +00:00
interworking branch is replaced with another code sequence at the
same virtual address, whether due to self-modifying code or virtual
to physical address re-mapping, Cortex-A8 does not recover from the
stale interworking branch prediction. This results in Cortex-A8
executing the new code sequence in the incorrect ARM or Thumb state.
The workaround enables the BTB/BTAC operations by setting ACTLR.IBE
and also flushes the branch target cache at every context switch.
Note that setting specific bits in the ACTLR register may not be
available in non-secure mode.
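As an illustration of the first half of that workaround, a hedged C sketch of the ACTLR read-modify-write (assumptions: IBE is bit 6 of the Cortex-A8 ACTLR, and the write only takes effect when executed in secure mode; the kernel actually does this in early per-CPU setup assembly):
/* Set ACTLR.IBE so BTB invalidate operations take effect (Cortex-A8). */
static inline void cortex_a8_enable_ibe(void)
{
        unsigned int actlr;
        asm volatile("mrc p15, 0, %0, c1, c0, 1" : "=r" (actlr));
        actlr |= 1u << 6;                       /* IBE, assumed bit 6 */
        asm volatile("mcr p15, 0, %0, c1, c0, 1" : : "r" (actlr));
}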
2009-04-30 16:06:15 +00:00
config ARM_ERRATA_458693
bool "ARM errata: Processor deadlock when a false hazard is created"
depends on CPU_V7
2012-12-21 21:42:40 +00:00
depends on !ARCH_MULTIPLATFORM
2009-04-30 16:06:15 +00:00
help
This option enables the workaround for the 458693 Cortex-A8 (r2p0)
erratum. For very specific sequences of memory operations, it is
possible for a hazard condition intended for a cache line to instead
be incorrectly associated with a different cache line. This false
hazard might then cause a processor deadlock. The workaround enables
the L1 caching of the NEON accesses and disables the PLD instruction
in the ACTLR register. Note that setting specific bits in the ACTLR
2022-12-15 14:19:14 +00:00
register may not be available in non-secure mode and thus is not
available on a multiplatform kernel. This should be applied by the
bootloader instead.
2009-04-30 16:06:15 +00:00
2009-04-30 16:06:20 +00:00
config ARM_ERRATA_460075
bool "ARM errata: Data written to the L2 cache can be overwritten with stale data"
depends on CPU_V7
2012-12-21 21:42:40 +00:00
depends on !ARCH_MULTIPLATFORM
2009-04-30 16:06:20 +00:00
help
This option enables the workaround for the 460075 Cortex-A8 (r2p0)
erratum. Any asynchronous access to the L2 cache may encounter a
situation in which recent store transactions to the L2 cache are lost
and overwritten with stale memory contents from external memory. The
workaround disables the write-allocate mode for the L2 cache via the
ACTLR register. Note that setting specific bits in the ACTLR register
2022-12-15 14:19:14 +00:00
may not be available in non-secure mode and thus is not available on
a multiplatform kernel. This should be applied by the bootloader
instead.
2009-04-30 16:06:20 +00:00
2010-09-14 08:51:43 +00:00
config ARM_ERRATA_742230
bool "ARM errata: DMB operation may be faulty"
depends on CPU_V7 && SMP
2012-12-21 21:42:40 +00:00
depends on !ARCH_MULTIPLATFORM
2010-09-14 08:51:43 +00:00
help
This option enables the workaround for the 742230 Cortex-A9
(r1p0..r2p2) erratum. Under rare circumstances, a DMB instruction
between two write operations may not ensure the correct visibility
ordering of the two writes. This workaround sets a specific bit in
the diagnostic register of the Cortex-A9 which causes the DMB
instruction to behave as a DSB, ensuring the correct behaviour of
2022-12-15 14:19:14 +00:00
the two writes. Note that setting specific bits in the diagnostics
register may not be available in non-secure mode and thus is not
available on a multiplatform kernel. This should be applied by the
bootloader instead.
2010-09-14 08:51:43 +00:00
2010-09-14 08:53:02 +00:00
config ARM_ERRATA_742231
bool "ARM errata: Incorrect hazard handling in the SCU may lead to data corruption"
depends on CPU_V7 && SMP
2012-12-21 21:42:40 +00:00
depends on !ARCH_MULTIPLATFORM
2010-09-14 08:53:02 +00:00
help
This option enables the workaround for the 742231 Cortex-A9
(r2p0..r2p2) erratum. Under certain conditions, specific to the
Cortex-A9 MPCore micro-architecture, two CPUs working in SMP mode,
accessing some data located in the same cache line, may get corrupted
data due to bad handling of the address hazard when the line gets
replaced from one of the CPUs at the same time as another CPU is
accessing it. This workaround sets specific bits in the diagnostic
register of the Cortex-A9 which reduces the linefill issuing
2022-12-15 14:19:14 +00:00
capabilities of the processor. Note that setting specific bits in the
diagnostics register may not be available in non-secure mode and thus
is not available on a multiplatform kernel. This should be applied by
the bootloader instead.
2010-09-28 13:02:02 +00:00
2013-06-07 09:35:35 +00:00
config ARM_ERRATA_643719
bool "ARM errata: LoUIS bit field in CLIDR register is incorrect"
depends on CPU_V7 && SMP
2015-04-02 22:58:55 +00:00
default y
2013-06-07 09:35:35 +00:00
help
This option enables the workaround for the 643719 Cortex-A9 (prior to
r1p0) erratum. On affected cores the LoUIS bit field of the CLIDR
register returns zero when it should return one. The workaround
corrects this value, ensuring cache maintenance operations which use
it behave as intended and avoiding data corruption.
2010-08-05 10:20:51 +00:00
config ARM_ERRATA_720789
bool "ARM errata: TLBIASIDIS and TLBIMVAIS operations can broadcast a faulty ASID"
2011-12-08 12:37:46 +00:00
depends on CPU_V7
2010-08-05 10:20:51 +00:00
help
This option enables the workaround for the 720789 Cortex-A9 (prior to
r2p0) erratum. A faulty ASID can be sent to the other CPUs for the
broadcasted CP15 TLB maintenance operations TLBIASIDIS and TLBIMVAIS.
As a consequence of this erratum, some TLB entries which should be
invalidated are not, resulting in an incoherency in the system page
tables. The workaround changes the TLB flushing routines to invalidate
entries regardless of the ASID.
2010-09-28 13:02:02 +00:00
config ARM_ERRATA_743622
bool "ARM errata: Faulty hazard checking in the Store Buffer may lead to data corruption"
depends on CPU_V7
2012-12-21 21:42:40 +00:00
depends on !ARCH_MULTIPLATFORM
2010-09-28 13:02:02 +00:00
help
This option enables the workaround for the 743622 Cortex-A9
2012-02-24 11:12:38 +00:00
(r2p*) erratum. Under very rare conditions, a faulty
2010-09-28 13:02:02 +00:00
optimisation in the Cortex-A9 Store Buffer may lead to data
corruption. This workaround sets a specific bit in the diagnostic
register of the Cortex-A9 which disables the Store Buffer
optimisation, preventing the defect from occurring. This has no
visible impact on the overall performance or power consumption of the
2022-12-15 14:19:14 +00:00
processor. Note that setting specific bits in the diagnostics register
may not be available in non-secure mode and thus is not available on a
multiplatform kernel. This should be applied by the bootloader instead.
2010-09-28 13:02:02 +00:00
2011-02-18 15:36:35 +00:00
config ARM_ERRATA_751472
bool "ARM errata: Interrupted ICIALLUIS may prevent completion of broadcasted operation"
2011-12-08 12:41:06 +00:00
depends on CPU_V7
2012-12-21 21:42:40 +00:00
depends on !ARCH_MULTIPLATFORM
2011-02-18 15:36:35 +00:00
help
This option enables the workaround for the 751472 Cortex-A9 (prior
to r3p0) erratum. An interrupted ICIALLUIS operation may prevent the
completion of a following broadcasted operation if the second
operation is received by a CPU before the ICIALLUIS has completed,
potentially leading to corrupted entries in the cache or TLB.
2022-12-15 14:19:14 +00:00
Note that setting specific bits in the diagnostics register may
not be available in non-secure mode and thus is not available on
a multiplatform kernel. This should be applied by the bootloader
instead.
2011-02-18 15:36:35 +00:00
2011-02-28 17:15:16 +00:00
config ARM_ERRATA_754322
bool "ARM errata: possible faulty MMU translations following an ASID switch"
depends on CPU_V7
help
This option enables the workaround for the 754322 Cortex-A9 (r2p*,
r3p*) erratum. A speculative memory access may cause a page table walk
which starts prior to an ASID switch but completes afterwards. This
can populate the micro-TLB with a stale entry which may be hit with
the new ASID. This workaround places two dsb instructions in the mm
switching code so that no page table walks can cross the ASID switch.
2011-03-04 11:38:54 +00:00
config ARM_ERRATA_754327
bool "ARM errata: no automatic Store Buffer drain"
depends on CPU_V7 && SMP
help
This option enables the workaround for the 754327 Cortex-A9 (prior to
r2p0) erratum. The Store Buffer does not have any automatic draining
mechanism and therefore a livelock may occur if an external agent
continuously polls a memory location waiting to observe an update.
This workaround defines cpu_relax() as smp_mb(), preventing correctly
written polling loops from denying visibility of updates to memory.
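A sketch of the definition this help text describes (an assumed simplification of the arch header; the full barrier forces the store buffer to drain so a polled update becomes visible):
/* With erratum 754327, cpu_relax() must be a full memory barrier
 * rather than a plain compiler barrier. */
#ifdef CONFIG_ARM_ERRATA_754327
#define cpu_relax()     smp_mb()
#else
#define cpu_relax()     barrier()
#endif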
2011-08-15 10:04:41 +00:00
config ARM_ERRATA_364296
bool "ARM errata: Possible cache data corruption with hit-under-miss enabled"
2013-07-09 17:34:01 +00:00
depends on CPU_V6
2011-08-15 10:04:41 +00:00
help
This option enables the workaround for the 364296 ARM1136
r0p2 erratum (possible cache data corruption with
hit-under-miss enabled). It sets the undocumented bit 31 in
the auxiliary control register and the FI bit in the control
register, thus disabling hit-under-miss without putting the
processor into full low interrupt latency mode. ARM11MPCore
is not affected.
2011-09-15 10:45:15 +00:00
config ARM_ERRATA_764369
bool "ARM errata: Data cache line maintenance operation by MVA may not succeed"
depends on CPU_V7 && SMP
help
This option enables the workaround for erratum 764369
affecting Cortex-A9 MPCore with two or more processors (all
current revisions). Under certain timing circumstances, a data
cache line maintenance operation by MVA targeting an Inner
Shareable memory region may fail to proceed up to either the
Point of Coherency or to the Point of Unification of the
system. This workaround adds a DSB instruction before the
relevant cache maintenance functions and sets a specific bit
in the diagnostic control register of the SCU.
2022-05-18 13:38:37 +00:00
config ARM_ERRATA_764319
bool "ARM errata: Read to DBGPRSR and DBGOSLSR may generate Undefined instruction"
depends on CPU_V7
help
2024-05-29 12:07:42 +00:00
This option enables the workaround for the 764319 Cortex-A9 erratum.
2022-05-18 13:38:37 +00:00
CP14 read accesses to the DBGPRSR and DBGOSLSR registers generate an
unexpected Undefined Instruction exception when the DBGSWENABLE
external pin is set to 0, even when the CP14 accesses are performed
from a privileged mode. This workaround catches the exception so that
the kernel does not stop execution.
2012-09-28 01:12:45 +00:00
config ARM_ERRATA_775420
bool "ARM errata: A data cache maintenance operation which aborts, might lead to deadlock"
depends on CPU_V7
help
This option enables the workaround for the 775420 Cortex-A9 (r2p2,
2019-10-25 11:38:43 +00:00
r2p6,r2p8,r2p10,r3p0) erratum. In case a data cache maintenance
2012-09-28 01:12:45 +00:00
operation aborts with MMU exception, it might cause the processor
to deadlock. This workaround puts DSB before executing ISB if
an abort may occur on cache maintenance.
2013-03-26 22:35:04 +00:00
config ARM_ERRATA_798181
bool "ARM errata: TLBI/DSB failure on Cortex-A15"
depends on CPU_V7 && SMP
help
On Cortex-A15 (r0p0..r3p2) the TLBI*IS/DSB operations are not
adequately shooting down all use of the old entries. This
option enables the Linux kernel workaround for this erratum
which sends an IPI to the CPUs that are running the same ASID
as the one being invalidated.
2013-08-20 16:29:55 +00:00
config ARM_ERRATA_773022
bool "ARM errata: incorrect instructions may be executed from loop buffer"
depends on CPU_V7
help
This option enables the workaround for the 773022 Cortex-A15
(up to r0p4) erratum. In certain rare sequences of code, the
loop buffer may deliver incorrect instructions. This
workaround disables the loop buffer to avoid the erratum.
2016-04-06 23:25:00 +00:00
config ARM_ERRATA_818325_852422
bool "ARM errata: A12: some seqs of opposed cond code instrs => deadlock or corruption"
depends on CPU_V7
help
This option enables the workaround for:
- Cortex-A12 818325: Execution of an UNPREDICTABLE STR or STM
instruction might deadlock. Fixed in r0p1.
- Cortex-A12 852422: Execution of a sequence of instructions might
lead to either a data corruption or a CPU deadlock. Not fixed in
any Cortex-A12 cores yet.
The workaround for both errata involves setting bit[12] of the
Feature Register. This bit disables an optimisation applied to a
sequence of 2 instructions that use opposing condition codes.
2016-04-06 23:26:05 +00:00
config ARM_ERRATA_821420
bool "ARM errata: A12: sequence of VMOV to core registers might lead to a dead lock"
depends on CPU_V7
help
This option enables the workaround for the 821420 Cortex-A12
(all revs) erratum. In very rare timing conditions, a sequence
of VMOV to Core registers instructions, for which the second
one is in the shadow of a branch or abort, can lead to a
deadlock when the VMOV instructions are issued out-of-order.
2016-04-06 23:27:26 +00:00
config ARM_ERRATA_825619
bool "ARM errata: A12: DMB NSHST/ISHST mixed ... might cause deadlock"
depends on CPU_V7
help
This option enables the workaround for the 825619 Cortex-A12
(all revs) erratum. Within rare timing constraints, executing a
DMB NSHST or DMB ISHST instruction followed by a mix of Cacheable
and Device/Strongly-Ordered loads and stores might cause deadlock.
2019-04-26 22:35:46 +00:00
config ARM_ERRATA_857271
bool "ARM errata: A12: CPU might deadlock under some very rare internal conditions"
depends on CPU_V7
help
This option enables the workaround for the 857271 Cortex-A12
(all revs) erratum. Under very rare timing conditions, the CPU might
hang. The workaround is expected to have a < 1% performance impact.
2016-04-06 23:27:26 +00:00
config ARM_ERRATA_852421
bool "ARM errata: A17: DMB ST might fail to create order between stores"
depends on CPU_V7
help
This option enables the workaround for the 852421 Cortex-A17
(r1p0, r1p1, r1p2) erratum. Under very rare timing conditions,
execution of a DMB ST instruction might fail to properly order
stores from GroupA and stores from GroupB.
2016-04-06 23:25:00 +00:00
config ARM_ERRATA_852423
bool "ARM errata: A17: some seqs of opposed cond code instrs => deadlock or corruption"
depends on CPU_V7
help
This option enables the workaround for:
- Cortex-A17 852423: Execution of a sequence of instructions might
lead to either a data corruption or a CPU deadlock. Not fixed in
any Cortex-A17 cores yet.
This is identical to Cortex-A12 erratum 852422. It is a separate
config option from the A12 erratum due to the way errata are checked
for and handled.
2019-04-26 22:35:46 +00:00
config ARM_ERRATA_857272
bool "ARM errata: A17: CPU might deadlock under some very rare internal conditions"
depends on CPU_V7
help
This option enables the workaround for the 857272 Cortex-A17 erratum.
This erratum is not known to be fixed in any A17 revision.
This is identical to Cortex-A12 erratum 857271. It is a separate
config option from the A12 erratum due to the way errata are checked
for and handled.
2005-04-16 22:20:36 +00:00
endmenu
source "arch/arm/common/Kconfig"
menu "Bus support"
config ISA
bool
help
Find out whether you have ISA slots on your motherboard. ISA is the
name of a bus system, i.e. the way the CPU talks to the other stuff
inside your box. Other bus systems are PCI, EISA, MicroChannel
(MCA) or VESA. ISA is an older system, now being displaced by PCI;
newer boards don't support it. If you have ISA, say Y, otherwise N.
2006-01-04 15:44:16 +00:00
# Select ISA DMA interface
2005-05-04 04:39:22 +00:00
config ISA_DMA_API
bool
2019-05-21 09:17:39 +00:00
config ARM_ERRATA_814220
bool "ARM errata: Cache maintenance by set/way operations can execute out of order"
depends on CPU_V7
help
The v7 ARM states that all cache and branch predictor maintenance
operations that do not specify an address execute, relative to
each other, in program order.
However, because of this erratum, an L2 set/way cache maintenance
operation can overtake an L1 set/way cache maintenance operation.
This erratum only affects the Cortex-A7 and is present in r0p2, r0p3,
r0p4 and r0p5.
2005-04-16 22:20:36 +00:00
endmenu
menu "Kernel Features"
2011-12-07 15:38:04 +00:00
config HAVE_SMP
bool
help
This option should be selected by machines which have an SMP-
capable CPU.
The only effect of this option is to make the SMP-related
options available to the user for configuration.
2005-04-16 22:20:36 +00:00
config SMP
2011-05-12 08:52:02 +00:00
bool "Symmetric Multi-Processing"
2011-01-17 18:01:58 +00:00
depends on CPU_V6K || CPU_V7
2011-12-07 15:38:04 +00:00
depends on HAVE_SMP
2013-02-22 18:56:04 +00:00
depends on MMU || ARM_MPU
2015-05-26 14:36:58 +00:00
select IRQ_WORK
2005-04-16 22:20:36 +00:00
help
This enables support for systems with more than one CPU. If you have
2014-01-23 23:55:29 +00:00
a system with only one CPU, say N. If you have a system with more
than one CPU, say Y.
2005-04-16 22:20:36 +00:00
2014-01-23 23:55:29 +00:00
If you say N here, the kernel will run on uni- and multiprocessor
2005-04-16 22:20:36 +00:00
machines, but will use only one CPU of a multiprocessor machine. If
2014-01-23 23:55:29 +00:00
you say Y here, the kernel will run on many, but not all,
uniprocessor machines. On a uniprocessor machine, the kernel
will run faster if you say N here.
2005-04-16 22:20:36 +00:00
2023-03-14 23:06:44 +00:00
See also <file:Documentation/arch/x86/i386/IO-APIC.rst>,
2019-06-27 17:56:51 +00:00
<file:Documentation/admin-guide/lockup-watchdogs.rst> and the SMP-HOWTO available at
2010-10-16 17:36:23 +00:00
<http://tldp.org/HOWTO/SMP-HOWTO.html>.
2005-04-16 22:20:36 +00:00
If you don't know what to do here, say N.
2010-09-04 09:47:48 +00:00
config SMP_ON_UP
2015-02-13 11:04:21 +00:00
bool "Allow booting SMP kernel on uniprocessor systems"
2022-08-18 14:17:09 +00:00
depends on SMP && MMU
2010-09-04 09:47:48 +00:00
default y
help
SMP kernels contain instructions which fail on non-SMP processors.
Enabling this option allows the kernel to modify itself to make
these instructions safe. Disabling it allows about 1K of space
savings.
If you don't know what to do here, say Y.
ARM: smp: Store current pointer in TPIDRURO register if available
Now that the user space TLS register is assigned on every return to user
space, we can use it to keep the 'current' pointer while running in the
kernel. This removes the need to access it via thread_info, which is
located at the base of the stack, but will be moved out of there in a
subsequent patch.
Use the __builtin_thread_pointer() helper when available - this will
help GCC understand that reloading the value within the same function is
not necessary, even when using the per-task stack protector (which also
generates accesses via the TLS register). For example, the generated
code below loads TPIDRURO only once, and uses it to access both the
stack canary and the preempt_count fields.
<do_one_initcall>:
e92d 41f0 stmdb sp!, {r4, r5, r6, r7, r8, lr}
ee1d 4f70 mrc 15, 0, r4, cr13, cr0, {3}
4606 mov r6, r0
b094 sub sp, #80 ; 0x50
f8d4 34e8 ldr.w r3, [r4, #1256] ; 0x4e8 <- stack canary
9313 str r3, [sp, #76] ; 0x4c
f8d4 8004 ldr.w r8, [r4, #4] <- preempt count
Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
2021-09-18 08:44:37 +00:00
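A sketch of the accessor this enables (assumption: simplified from the description above; __builtin_thread_pointer() compiles to the same TPIDRURO read, mrc p15, 0, rN, c13, c0, 3, shown in the disassembly):
struct task_struct;
/* 'current' is kept in TPIDRURO while in the kernel, so reading it
 * back is a single coprocessor register read instead of a
 * thread_info lookup at the base of the stack. */
static inline struct task_struct *get_current_sketch(void)
{
        return (struct task_struct *)__builtin_thread_pointer();
}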
config CURRENT_POINTER_IN_TPIDRURO
def_bool y
2021-11-26 09:13:06 +00:00
depends on CPU_32v6K && !CPU_V6
2021-09-18 08:44:37 +00:00
2021-10-05 07:15:40 +00:00
config IRQSTACKS
def_bool y
2021-10-05 07:15:42 +00:00
select HAVE_IRQ_EXIT_ON_IRQ_STACK
select HAVE_SOFTIRQ_ON_OWN_STACK
2021-09-18 08:44:37 +00:00
config ARM_CPU_TOPOLOGY
	bool "Support cpu topology definition"
	depends on SMP && CPU_V7
	default y
	help
	  Support ARM cpu topology definition. The MPIDR register defines
	  affinity between processors which is then used to describe the cpu
	  topology of an ARM System.

config SCHED_MC
	bool "Multi-core scheduler support"
	depends on ARM_CPU_TOPOLOGY
	help
	  Multi-core scheduler support improves the CPU scheduler's decision
	  making when dealing with multi-core CPU chips at a cost of slightly
	  increased overhead in some places. If unsure say N here.

config SCHED_SMT
	bool "SMT scheduler support"
	depends on ARM_CPU_TOPOLOGY
	help
	  Improves the CPU scheduler's decision making when dealing with
	  MultiThreading at a cost of slightly increased overhead in some
	  places. If unsure say N here.

config HAVE_ARM_SCU
	bool
	help
	  This option enables support for the ARM snoop control unit

config HAVE_ARM_ARCH_TIMER
	bool "Architected timer support"
	depends on CPU_V7
	select ARM_ARCH_TIMER
	help
	  This option enables support for the ARM architected timer

config HAVE_ARM_TWD
	bool
	help
	  This option enables support for the ARM timer and watchdog unit

config MCPM
	bool "Multi-Cluster Power Management"
	depends on CPU_V7 && SMP
	help
	  This option provides the common power management infrastructure
	  for (multi-)cluster based systems, such as big.LITTLE based
	  systems.

config MCPM_QUAD_CLUSTER
	bool
	depends on MCPM
	help
	  To avoid wasting resources unnecessarily, MCPM only supports up
	  to 2 clusters by default.
	  Platforms with 3 or 4 clusters that use MCPM must select this
	  option to allow the additional clusters to be managed.

ARM: b.L: core switcher code
This is the core code implementing big.LITTLE switcher functionality.
Rationale for this code is available here:
http://lwn.net/Articles/481055/
The main entry point for a switch request is:
void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
If the calling CPU is not the wanted one, this wrapper takes care of
sending the request to the appropriate CPU with schedule_work_on().
At the moment the core switch operation is handled by bL_switch_to()
which must be called on the CPU for which a switch is requested.
What this code does:
* Return early if the current cluster is the wanted one.
* Close the gate in the kernel entry vector for both the inbound
and outbound CPUs.
* Wake up the inbound CPU so it can perform its reset sequence in
parallel up to the kernel entry vector gate.
* Migrate all interrupts in the GIC targeting the outbound CPU
interface to the inbound CPU interface, including SGIs. This is
performed by gic_migrate_target() in drivers/irqchip/irq-gic.c.
* Call cpu_pm_enter() which takes care of flushing the VFP state to
RAM and save the CPU interface config from the GIC to RAM.
* Modify the cpu_logical_map to refer to the inbound physical CPU.
* Call cpu_suspend() which saves the CPU state (general purpose
registers, page table address) onto the stack and store the
resulting stack pointer in an array indexed by the updated
cpu_logical_map, then call the provided shutdown function.
This happens in arch/arm/kernel/sleep.S.
At this point, the provided shutdown function executed by the outbound
CPU ungates the inbound CPU. Therefore the inbound CPU:
* Picks up the saved stack pointer in the array indexed by its MPIDR
in arch/arm/kernel/sleep.S.
* The MMU and caches are re-enabled using the saved state on the
provided stack, just like if this was a resume operation from a
suspended state.
* Then cpu_suspend() returns, although this is on the inbound CPU
rather than the outbound CPU which called it initially.
* The function cpu_pm_exit() is called, whose effect is to restore the
CPU interface state in the GIC using the state previously saved by
the outbound CPU.
* Exit of bL_switch_to() to resume normal kernel execution on the
new CPU.
However, the outbound CPU is potentially still running in parallel while
the inbound CPU is resuming normal kernel execution, hence we need
per CPU stack isolation to execute bL_do_switch(). After the outbound
CPU has ungated the inbound CPU, it calls mcpm_cpu_power_down() to:
* Clean its L1 cache.
* If it is the last CPU still alive in its cluster (last man standing),
it also cleans its L2 cache and disables cache snooping from the other
cluster.
* Power down the CPU (or whole cluster).
Code called from bL_do_switch() might end up referencing 'current' for
some reason. However, 'current' is derived from the stack pointer.
With any arbitrary stack, the returned value for 'current' and any
dereferenced values through it are just random garbage which may lead to
segmentation faults.
The active page table during the execution of bL_do_switch() is also a
problem. There is no guarantee that the inbound CPU won't destroy the
corresponding task which would free the attached page table while the
outbound CPU is still running and relying on it.
To solve both issues, we borrow some of the task space belonging to
the init/idle task which, by its nature, is lightly used and therefore
is unlikely to clash with our usage. The init task is also never going
away.
Right now the logical CPU number is assumed to be equivalent to the
physical CPU number within each cluster. The kernel should also be
booted with only one cluster active. These limitations will be lifted
eventually.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2012-04-12 06:56:10 +00:00
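
A hedged sketch of the dispatch pattern the message describes follows; bL_switch_to()'s exact signature and the per-CPU work plumbing (bL_switch_work, bL_requested_cluster, their initialisation) are assumptions, not the actual implementation:

	#include <linux/percpu.h>
	#include <linux/smp.h>
	#include <linux/workqueue.h>

	extern void bL_switch_to(unsigned int new_cluster_id);	/* assumed signature */

	static DEFINE_PER_CPU(unsigned int, bL_requested_cluster);
	static DEFINE_PER_CPU(struct work_struct, bL_switch_work);	/* INIT_WORK() elided */

	static void bL_switch_handle_request(struct work_struct *work)
	{
		/* Runs on the CPU the switch was requested for. */
		bL_switch_to(this_cpu_read(bL_requested_cluster));
	}

	void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
	{
		per_cpu(bL_requested_cluster, cpu) = new_cluster_id;
		if (cpu == smp_processor_id()) {
			bL_switch_to(new_cluster_id);	/* already on the target CPU */
			return;
		}
		/* Bounce the request to the wanted CPU, as described above. */
		schedule_work_on(cpu, &per_cpu(bL_switch_work, cpu));
	}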
config BIG_LITTLE
	bool "big.LITTLE support (Experimental)"
	depends on CPU_V7 && SMP
	select MCPM
	help
	  This option enables support selections for the big.LITTLE
	  system architecture.

config BL_SWITCHER
	bool "big.LITTLE switcher support"
	depends on BIG_LITTLE && MCPM && HOTPLUG_CPU && ARM_GIC
	select CPU_PM

	help
	  The big.LITTLE "switcher" provides the core functionality to
	  transparently handle transition between a cluster of A15's
	  and a cluster of A7's in a big.LITTLE system.

config BL_SWITCHER_DUMMY_IF
	tristate "Simple big.LITTLE switcher user interface"
	depends on BL_SWITCHER && DEBUG_KERNEL
	help
	  This is a simple and dummy char dev interface to control
	  the big.LITTLE switcher core code. It is meant for
	  debugging purposes only.

choice
	prompt "Memory split"
	depends on MMU
	default VMSPLIT_3G
	help
	  Select the desired split between kernel and user memory.

	  If you are not absolutely sure what you are doing, leave this
	  option alone!

config VMSPLIT_3G
	bool "3G/1G user/kernel split"

config VMSPLIT_3G_OPT
	depends on !ARM_LPAE
	bool "3G/1G user/kernel split (for full 1G low memory)"

config VMSPLIT_2G
	bool "2G/2G user/kernel split"

config VMSPLIT_1G
	bool "1G/3G user/kernel split"

endchoice

config PAGE_OFFSET
	hex
	default PHYS_OFFSET if !MMU
	default 0x40000000 if VMSPLIT_1G
	default 0x80000000 if VMSPLIT_2G
	default 0xB0000000 if VMSPLIT_3G_OPT
	default 0xC0000000

ARM: 9015/2: Define the virtual space of KASan's shadow region
Define KASAN_SHADOW_OFFSET,KASAN_SHADOW_START and KASAN_SHADOW_END for
the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
addressable by a 32bit architecture) out of the virtual address
space to use as shadow memory for KASan as follows:
+----+ 0xffffffff
| |
| | |-> Static kernel image (vmlinux) BSS and page table
| |/
+----+ PAGE_OFFSET
| |
| | |-> Loadable kernel modules virtual address space area
| |/
+----+ MODULES_VADDR = KASAN_SHADOW_END
| |
| | |-> The shadow area of kernel virtual address.
| |/
+----+-> TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
| | shadow address of MODULES_VADDR
| | |
| | |
| | |-> The user space area in lowmem. The kernel address
| | | sanitizer does not use this space, nor does it map it.
| | |
| | |
| | |
| | |
| |/
------ 0
0 .. TASK_SIZE is the memory that can be used by shared
userspace/kernelspace. It is used for userspace processes and for
passing parameters and memory buffers in system calls etc. We do not
need to shadow this area.
KASAN_SHADOW_START:
This value begins with the MODULE_VADDR's shadow address. It is the
start of kernel virtual space. Since we have modules to load, we need
to cover also that area with shadow memory so we can find memory
bugs in modules.
KASAN_SHADOW_END:
This value is the 0x100000000's shadow address: the mapping that would
be after the end of the kernel memory at 0xffffffff. It is the end of
kernel address sanitizer shadow area. It is also the start of the
module area.
KASAN_SHADOW_OFFSET:
This value is used to map an address to the corresponding shadow
address by the following formula:
shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
As you would expect, >> 3 is equal to dividing by 8, meaning each
byte in the shadow memory covers 8 bytes of kernel memory, so one
bit shadow memory per byte of kernel memory is used.
The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
on the VMSPLIT layout of the system: the kernel and userspace can
split up lowmem in different ways according to needs, so we calculate
the shadow offset depending on this.
When kasan is enabled, the definition of TASK_SIZE is not an 8-bit
rotated constant, so we need to modify the TASK_SIZE access code in the
*.s file.
The kernel and modules may use different amounts of memory,
according to the VMSPLIT configuration, which in turn
determines the PAGE_OFFSET.
We use the following KASAN_SHADOW_OFFSETs depending on how the
virtual memory is split up:
- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
- The kernel address space is 3G (0xc0000000)
- PAGE_OFFSET is then set to 0x40000000 so the kernel static
image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0x3f000000
so the modules use addresses 0x3f000000 .. 0x3fffffff
- So the addresses 0x3f000000 .. 0xffffffff need to be
covered with shadow memory. That is 0xc1000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x18200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0x26e00000, to
KASAN_SHADOW_END at 0x3effffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0x3f000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
KASAN_SHADOW_OFFSET = 0x1f000000
- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
- The kernel space is 2G (0x80000000)
- PAGE_OFFSET is set to 0x80000000 so the kernel static
image uses 0x80000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0x7f000000
so the modules use addresses 0x7f000000 .. 0x7fffffff
- So the addresses 0x7f000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x81000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x10200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0x6ee00000, to
KASAN_SHADOW_END at 0x7effffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0x7f000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
KASAN_SHADOW_OFFSET = 0x5f000000
- 0x9f000000 if we have 3G userspace / 1G kernelspace split,
and this is the default split for ARM:
- The kernel address space is 1GB (0x40000000)
- PAGE_OFFSET is set to 0xc0000000 so the kernel static
image uses 0xc0000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0xbf000000
so the modules use addresses 0xbf000000 .. 0xbfffffff
- So the addresses 0xbf000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x41000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x08200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0xb6e00000, to
KASAN_SHADOW_END at 0xbfffffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0xbf000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
KASAN_SHADOW_OFFSET = 0x9f000000
- 0x8f000000 if we have 3G userspace / 1G kernelspace with
full 1 GB low memory (VMSPLIT_3G_OPT):
- The kernel address space is 1GB (0x40000000)
- PAGE_OFFSET is set to 0xb0000000 so the kernel static
image uses 0xb0000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0xaf000000
so the modules use addresses 0xaf000000 .. 0xafffffff
- So the addresses 0xaf000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x51000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x0a200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0xa4e00000, to
KASAN_SHADOW_END at 0xaeffffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0xaf000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
KASAN_SHADOW_OFFSET = 0x8f000000
- The default value of 0xffffffff for KASAN_SHADOW_OFFSET
is an error value. We should always match one of the
above shadow offsets.
When we do this, TASK_SIZE will sometimes get a bit odd values
that will not fit into immediate mov assembly instructions.
To account for this, we need to rewrite some assembly using
TASK_SIZE like this:
- mov r1, #TASK_SIZE
+ ldr r1, =TASK_SIZE
or
- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0
this is done to avoid the immediate #TASK_SIZE that need to
fit into a limited number of bits.
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-10-25 22:53:46 +00:00
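
As a quick cross-check of the arithmetic above, the offset for all four splits falls out of one line of C. This is ordinary user-space code, not kernel code; the MODULES_VADDR/KASAN_SHADOW_START pairs are taken directly from the commit message:

	#include <stdio.h>
	#include <stdint.h>

	/* shadow_addr = (addr >> 3) + offset, so the offset that maps
	 * MODULES_VADDR onto KASAN_SHADOW_START is start - (mod >> 3). */
	static uint32_t kasan_shadow_offset(uint32_t modules_vaddr,
					    uint32_t shadow_start)
	{
		return shadow_start - (modules_vaddr >> 3);
	}

	int main(void)
	{
		/* { MODULES_VADDR, KASAN_SHADOW_START } for the 3G/1G, 2G/2G,
		 * 1G/3G and 3G/1G-opt kernelspace splits described above. */
		struct { uint32_t mod, start; } s[] = {
			{ 0x3f000000, 0x26e00000 },
			{ 0x7f000000, 0x6ee00000 },
			{ 0xbf000000, 0xb6e00000 },
			{ 0xaf000000, 0xa4e00000 },
		};
		for (int i = 0; i < 4; i++)
			printf("KASAN_SHADOW_OFFSET = 0x%08x\n",
			       (unsigned)kasan_shadow_offset(s[i].mod, s[i].start));
		/* Prints 0x1f000000, 0x5f000000, 0x9f000000, 0x8f000000. */
		return 0;
	}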
config KASAN_SHADOW_OFFSET
	hex
	depends on KASAN
	default 0x1f000000 if PAGE_OFFSET=0x40000000
	default 0x5f000000 if PAGE_OFFSET=0x80000000
	default 0x9f000000 if PAGE_OFFSET=0xC0000000
	default 0x8f000000 if PAGE_OFFSET=0xB0000000
	default 0xffffffff

config NR_CPUS
	int "Maximum number of CPUs (2-32)"
ARM: 9063/1: mm: reduce maximum number of CPUs if DEBUG_KMAP_LOCAL is enabled
The debugging code for kmap_local() doubles the number of per-CPU fixmap
slots allocated for kmap_local(), in order to use half of them as guard
regions. This causes the fixmap region to grow downwards beyond the start
of its reserved window if the supported number of CPUs is large, and collide
with the newly added virtual DT mapping right below it, which is obviously
not good.
One manifestation of this is EFI boot on a kernel built with NR_CPUS=32
and CONFIG_DEBUG_KMAP_LOCAL=y, which may pass the FDT in highmem, resulting
in block entries below the fixmap region that the fixmap code misidentifies
as fixmap table entries, and subsequently tries to dereference using a
phys-to-virt translation that is only valid for lowmem. This results in a
cryptic splat such as the one below.
ftrace: allocating 45548 entries in 89 pages
8<--- cut here ---
Unable to handle kernel paging request at virtual address fc6006f0
pgd = (ptrval)
[fc6006f0] *pgd=80000040207003, *pmd=00000000
Internal error: Oops: a06 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 5.11.0+ #382
Hardware name: Generic DT based system
PC is at cpu_ca15_set_pte_ext+0x24/0x30
LR is at __set_fixmap+0xe4/0x118
pc : [<c041ac9c>] lr : [<c04189d8>] psr: 400000d3
sp : c1601ed8 ip : 00400000 fp : 00800000
r10: 0000071f r9 : 00421000 r8 : 00c00000
r7 : 00c00000 r6 : 0000071f r5 : ffade000 r4 : 4040171f
r3 : 00c00000 r2 : 4040171f r1 : c041ac78 r0 : fc6006f0
Flags: nZcv IRQs off FIQs off Mode SVC_32 ISA ARM Segment none
Control: 30c5387d Table: 40203000 DAC: 00000001
Process swapper (pid: 0, stack limit = 0x(ptrval))
So let's limit CONFIG_NR_CPUS to 16 when CONFIG_DEBUG_KMAP_LOCAL=y. Also,
fix the BUILD_BUG_ON() check that was supposed to catch this, by checking
whether the region grows below the start address rather than above the end
address.
Fixes: 2a15ba82fa6ca3f3 ("ARM: highmem: Switch to generic kmap atomic")
Reported-by: Peter Robinson <pbrobinson@gmail.com>
Tested-by: Peter Robinson <pbrobinson@gmail.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-02-17 19:26:23 +00:00
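
A back-of-the-envelope check of the overflow described above; the per-CPU slot count is an assumption for illustration, and only the doubling and the scaling with NR_CPUS matter here:

	#include <stdio.h>

	int main(void)
	{
		const unsigned long page_size = 4096, slots_per_cpu = 16; /* assumed */
		for (unsigned cpus = 16; cpus <= 32; cpus += 16) {
			unsigned long span = cpus * slots_per_cpu * page_size;
			printf("NR_CPUS=%2u: fixmap %4lu KiB, %4lu KiB with guard slots\n",
			       cpus, span >> 10, (span * 2) >> 10);
		}
		/* Doubling the slots at NR_CPUS=32 doubles the fixmap span,
		 * pushing it past the start of its reserved window. */
		return 0;
	}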
	range 2 16 if DEBUG_KMAP_LOCAL
	range 2 32 if !DEBUG_KMAP_LOCAL
	depends on SMP
	default "4"
	help
	  The maximum number of CPUs that the kernel can support.
	  Up to 32 CPUs can be supported, or up to 16 if kmap_local()
	  debugging is enabled, which uses half of the per-CPU fixmap
	  slots as guard regions.

config HOTPLUG_CPU
	bool "Support for hot-pluggable CPUs"
	depends on SMP
	select GENERIC_IRQ_MIGRATION
	help
	  Say Y here to experiment with turning CPUs off and on. CPUs
	  can be controlled through /sys/devices/system/cpu.

config ARM_PSCI
	bool "Support for the ARM Power State Coordination Interface (PSCI)"
	depends on HAVE_ARM_SMCCC
	select ARM_PSCI_FW
	help
	  Say Y here if you want Linux to communicate with system firmware
	  implementing the PSCI specification for CPU-centric power
	  management operations described in ARM document number ARM DEN
	  0022A ("Power State Coordination Interface System Software on
	  ARM processors").

config HZ_FIXED
	int
	default 128 if SOC_AT91RM9200
	default 0

choice
	depends on HZ_FIXED = 0
	prompt "Timer frequency"

config HZ_100
	bool "100 Hz"

config HZ_200
	bool "200 Hz"

config HZ_250
	bool "250 Hz"

config HZ_300
	bool "300 Hz"

config HZ_500
	bool "500 Hz"

config HZ_1000
	bool "1000 Hz"

endchoice

config HZ
	int
	default HZ_FIXED if HZ_FIXED != 0
	default 100 if HZ_100
	default 200 if HZ_200
	default 250 if HZ_250
	default 300 if HZ_300
	default 500 if HZ_500
	default 1000

config SCHED_HRTICK
	def_bool HIGH_RES_TIMERS

config THUMB2_KERNEL
	bool "Compile the kernel in Thumb-2 mode" if !CPU_THUMBONLY
	depends on (CPU_V7 || CPU_V7M) && !CPU_V6 && !CPU_V6K
	default y if CPU_THUMBONLY
	select ARM_UNWIND
	help
	  By enabling this option, the kernel will be compiled in
	  Thumb-2 mode.

	  If unsure, say N.

config ARM_PATCH_IDIV
	bool "Runtime patch udiv/sdiv instructions into __aeabi_{u}idiv()"
	depends on CPU_32v7
	default y
	help
	  The ARM compiler inserts calls to __aeabi_idiv() and
	  __aeabi_uidiv() when it needs to perform division on signed
	  and unsigned integers. Some v7 CPUs have support for the sdiv
	  and udiv instructions that can be used to implement those
	  functions.

	  Enabling this option allows the kernel to modify itself to
	  replace the first two instructions of these library functions
	  with the sdiv or udiv plus "bx lr" instructions when the CPU
	  it is running on supports them. Typically this will be faster
	  and less power intensive than running the original library
	  code to do integer division.

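What the help text above means in practice, in miniature: on EABI, a plain C division compiles to a call to the named helper unless the compiler can emit udiv/sdiv directly. The functions are just examples, not kernel code:

	unsigned int scale(unsigned int total, unsigned int parts)
	{
		return total / parts;	/* emits "bl __aeabi_uidiv" on CPUs without udiv */
	}

	int ratio(int a, int b)
	{
		return a / b;		/* emits "bl __aeabi_idiv" likewise */
	}
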
config AEABI
	bool "Use the ARM EABI to compile the kernel" if !CPU_V7 && \
		!CPU_V7M && !CPU_V6 && !CPU_V6K && !CC_IS_CLANG
	default CPU_V7 || CPU_V7M || CPU_V6 || CPU_V6K || CC_IS_CLANG
	help
	  This option allows for the kernel to be compiled using the latest
	  ARM ABI (aka EABI). This is only useful if you are using a user
	  space environment that is also compiled with EABI.

	  Since there are major incompatibilities between the legacy ABI and
	  EABI, especially with regard to structure member alignment, this
	  option also changes the kernel syscall calling convention to
	  disambiguate both ABIs and allow for backward compatibility support
	  (selected with CONFIG_OABI_COMPAT).

	  To use this you need GCC version 4.0.0 or later.

config OABI_COMPAT
	bool "Allow old ABI binaries to run with this kernel (EXPERIMENTAL)"
	depends on AEABI && !THUMB2_KERNEL
	help
	  This option preserves the old syscall interface along with the
	  new (ARM EABI) one. It also provides a compatibility layer to
	  intercept syscalls that have structure arguments whose layout
	  in memory differs between the legacy ABI and the new ARM EABI
	  (only for non "thumb" binaries). This option adds a tiny
	  overhead to all syscalls and produces a slightly larger kernel.

	  The seccomp filter system will not be available when this is
	  selected, since there is no way yet to sensibly distinguish
	  between calling conventions during filtering.

	  If you know you'll be using only pure EABI user space then you
	  can say N here. If this option is not selected and you attempt
	  to execute a legacy ABI binary then the result will be
	  UNPREDICTABLE (in fact it can be predicted that it won't work
	  at all). If in doubt say N.

config ARCH_SELECT_MEMORY_MODEL
	def_bool y

config ARCH_FLATMEM_ENABLE
	def_bool !(ARCH_RPC || ARCH_SA1100)

config ARCH_SPARSEMEM_ENABLE
	def_bool !ARCH_FOOTBRIDGE
	select SPARSEMEM_STATIC if SPARSEMEM

config HIGHMEM
	bool "High Memory Support"
	depends on MMU
	select KMAP_LOCAL
	select KMAP_LOCAL_NON_LINEAR_PTE_ARRAY
	help
	  The address space of ARM processors is only 4 Gigabytes large
	  and it has to accommodate user address space, kernel address
	  space as well as some memory mapped IO. That means that, if you
	  have a large amount of physical memory and/or IO, not all of the
	  memory can be "permanently mapped" by the kernel. The physical
	  memory that is not permanently mapped is called "high memory".

	  Depending on the selected kernel/user memory split, minimum
	  vmalloc space and actual amount of RAM, you may not need this
	  option which should result in a slightly faster kernel.

	  If unsure, say n.

config HIGHPTE
	bool "Allocate 2nd-level pagetables from highmem" if EXPERT
	depends on HIGHMEM
	default y
	help
	  The VM uses one page of physical memory for each page table.
	  For systems with a lot of processes, this can use a lot of
	  precious low memory, eventually leading to low memory being
	  consumed by page tables. Setting this option will allow
	  user-space 2nd level page tables to reside in high memory.

config ARM_PAN
	bool "Enable privileged no-access"
	depends on MMU
	default y
	help
	  Increase kernel security by ensuring that normal kernel accesses
	  are unable to access userspace addresses. This can help prevent
	  use-after-free bugs becoming an exploitable privilege escalation
	  by ensuring that magic values (such as LIST_POISON) will always
	  fault when dereferenced.

	  The implementation uses CPU domains when !CONFIG_ARM_LPAE and
	  disabling of TTBR0 page table walks with CONFIG_ARM_LPAE.

config CPU_SW_DOMAIN_PAN
	def_bool y
	depends on ARM_PAN && !ARM_LPAE
	help
	  Enable use of CPU domains to implement privileged no-access.

	  CPUs with low-vector mappings use a best-efforts implementation.
	  Their lower 1MB needs to remain accessible for the vectors, but
	  the remainder of userspace will become appropriately inaccessible.

config CPU_TTBR0_PAN
	def_bool y
	depends on ARM_PAN && ARM_LPAE
	help
	  Enable privileged no-access by disabling TTBR0 page table walks when
	  running in kernel mode.

config HW_PERF_EVENTS
	def_bool y
	depends on ARM_PMU

config ARM_MODULE_PLTS
	bool "Use PLTs to allow module memory to spill over into vmalloc area"
	depends on MODULES
	select KASAN_VMALLOC if KASAN
	default y
	help
	  Allocate PLTs when loading modules so that jumps and calls whose
	  targets are too far away for their relative offsets to be encoded
	  in the instructions themselves can be bounced via veneers in the
	  module's PLT. This allows modules to be allocated in the generic
	  vmalloc area after the dedicated module memory area has been
	  exhausted. The modules will use slightly more memory, but after
	  rounding up to page size, the actual memory footprint is usually
	  the same.

	  Disabling this is usually safe for small single-platform
	  configurations. If unsure, say y.

config ARCH_FORCE_MAX_ORDER
	int "Order of maximal physically contiguous allocations"
	default "11" if SOC_AM33XX
	default "8" if SA1111
	default "10"
	help
	  The kernel page allocator limits the size of maximal physically
	  contiguous allocations. The limit is called MAX_PAGE_ORDER and it
	  defines the maximal power of two of number of pages that can be
	  allocated as a single contiguous block. This option allows
	  overriding the default setting when ability to allocate very
	  large blocks of physically contiguous memory is required.

	  Don't change if unsure.

config ALIGNMENT_TRAP
	def_bool CPU_CP15_MMU
	select HAVE_PROC_CPU if PROC_FS
	help
	  ARM processors cannot fetch/store information which is not
	  naturally aligned on the bus, i.e., a 4 byte fetch must start at an
	  address divisible by 4. On 32-bit ARM processors, these non-aligned
	  fetch/store instructions will be emulated in software if you say
	  Y here, which has a severe performance impact. This is necessary for
	  correct operation of some network protocols. With an IP-only
	  configuration it is safe to say N, otherwise say Y.

config UACCESS_WITH_MEMCPY
	bool "Use kernel mem{cpy,set}() for {copy_to,clear}_user()"
	depends on MMU
	default y if CPU_FEROCEON
	help
	  Implement faster copy_to_user and clear_user methods for CPU
	  cores where an 8-word STM instruction gives significantly higher
	  memory write throughput than a sequence of individual 32bit stores.

	  A possible side effect is a slight increase in scheduling latency
	  between threads sharing the same address space if they invoke
	  such copy operations with large buffers.

	  However, if the CPU data cache is using a write-allocate mode,
	  this option is unlikely to provide any performance gain.

config PARAVIRT
	bool "Enable paravirtualization code"
	help
	  This changes the kernel so it can modify itself when it is run
	  under a hypervisor, potentially improving performance significantly
	  over full virtualization.

config PARAVIRT_TIME_ACCOUNTING
	bool "Paravirtual steal time accounting"
	select PARAVIRT
	help
	  Select this option to enable fine granularity task steal time
	  accounting. Time spent executing other tasks in parallel with
	  the current vCPU is discounted from the vCPU power. To account for
	  that, there can be a small performance impact.

	  If in doubt, say N here.

config XEN_DOM0
	def_bool y
	depends on XEN

config XEN
	bool "Xen guest support on ARM"
	depends on ARM && AEABI && OF
	depends on CPU_V7 && !CPU_V6
	depends on !GENERIC_ATOMIC64
	depends on MMU
	select ARCH_DMA_ADDR_T_64BIT
	select ARM_PSCI
	select SWIOTLB
	select SWIOTLB_XEN
	select PARAVIRT
	help
	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.

config CC_HAVE_STACKPROTECTOR_TLS
	def_bool $(cc-option,-mtp=cp15 -mstack-protector-guard=tls -mstack-protector-guard-offset=0)

config STACKPROTECTOR_PER_TASK
	bool "Use a unique stack canary value for each task"
ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems
On UP systems, only a single task can be 'current' at the same time,
which means we can use a global variable to track it. This means we can
also enable THREAD_INFO_IN_TASK for those systems, as in that case,
thread_info is accessed via current rather than the other way around,
removing the need to store thread_info at the base of the task stack.
This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP
systems as well.
To partially mitigate the performance overhead of this arrangement, use
an ADD/ADD/LDR sequence with the appropriate PC-relative group
relocations to load the value of current when needed. This means that
accessing current will still only require a single load as before,
avoiding the need for a literal to carry the address of the global
variable in each function. However, accessing thread_info will now
require this load as well.
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
2021-11-24 13:08:11 +00:00
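
A minimal sketch of the UP arrangement described above; the variable name is illustrative, not the kernel's exact one:

	struct task_struct;	/* opaque here */

	extern struct task_struct *__current_task;	/* single global on UP */

	static inline struct task_struct *get_current(void)
	{
		return __current_task;	/* one load, no thread_info indirection */
	}
	#define current get_current()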
	depends on STACKPROTECTOR && CURRENT_POINTER_IN_TPIDRURO && !XIP_DEFLATED_DATA
	depends on GCC_PLUGINS || CC_HAVE_STACKPROTECTOR_TLS
	select GCC_PLUGIN_ARM_SSP_PER_TASK if !CC_HAVE_STACKPROTECTOR_TLS
	default y
	help
	  Due to the fact that GCC uses an ordinary symbol reference from
	  which to load the value of the stack canary, this value can only
	  change at reboot time on SMP systems, and all tasks running in the
	  kernel's address space are forced to use the same canary value for
	  the entire duration that the system is up.

	  Enable this option to switch to a different method that uses a
	  different canary value for each task.

endmenu

menu "Boot options"

config USE_OF
	bool "Flattened Device Tree support"
ARM: config: sort select statements alphanumerically
As suggested by Andrew Morton:
This is a pet peeve of mine. Any time there's a long list of items
(header file inclusions, kconfig entries, array initialisers, etc) and
someone wants to add a new item, they *always* go and stick it at the
end of the list.
Guys, don't do this. Either put the new item into a randomly-chosen
position or, probably better, alphanumerically sort the list.
let's sort all our select statements alphanumerically. This commit was
created by the following perl:
while (<>) {
	while (/\\\s*$/) {
		$_ .= <>;
	}
	undef %selects if /^\s*config\s+/;
	if (/^\s+select\s+(\w+).*/) {
		if (defined($selects{$1})) {
			if ($selects{$1} eq $_) {
				print STDERR "Warning: removing duplicated $1 entry\n";
			} else {
				print STDERR "Error: $1 differently selected\n".
					"\tOld: $selects{$1}\n".
					"\tNew: $_\n";
				exit 1;
			}
		}
		$selects{$1} = $_;
		next;
	}
	if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or
			  /^endif/ or /^endchoice/)) {
		foreach $k (sort (keys %selects)) {
			print "$selects{$k}";
		}
		undef %selects;
	}
	print;
}
if (%selects) {
	foreach $k (sort (keys %selects)) {
		print "$selects{$k}";
	}
}
It found two duplicates:
Warning: removing duplicated S5P_SETUP_MIPIPHY entry
Warning: removing duplicated HARDIRQS_SW_RESEND entry
and they are identical duplicates, hence the shrinkage in the diffstat
of two lines.
We have four testers reporting success of this change (Tony, Stephen,
Linus and Sekhar.)
Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
	select IRQ_DOMAIN
	select OF
	help
	  Include support for flattened device tree machine descriptions.

config ARCH_WANT_FLAT_DTB_INSTALL
	def_bool y

config ATAGS
ARM: add ATAGS dependencies to non-DT platforms
There are a total of eight platforms that only support ATAGS based boot
with board files but no devicetree booting.
For dove, the DT support is part of the mvebu platform, which shares
driver but no code in arch/arm.
Most of these will never get converted to DT, and the majority of the
board files appear to be entirely unused already. There are still known
users on a few machines, and there may be interest in converting some
omap1, ep93xx or footbridge machines over in the future.
For the moment, just add a Kconfig dependency to hide these platforms
completely when CONFIG_ATAGS is disabled, and reorder the priority
of the options: Rather than offering to turn ATAGS off for platforms
that have DT support, make it a top-level setting that determines
which platforms are visible.
The s3c24xx platform supports one machine with DT support, but it
cannot be built without also including ATAGS support, and the
entire platform is scheduled for removal, so leaving the entire
platform behind a dependency seems good enough.
All defconfig files should keep working, as the option remains default
enabled.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-07-05 12:10:34 +00:00
	bool "Support for the traditional ATAGS boot data passing"
	default y
	help
	  This is the traditional way of passing data to the kernel at boot
	  time. If you are solely relying on the flattened device tree (or
	  the ARM_ATAG_DTB_COMPAT option) then you may unselect this option
	  to remove ATAGS support from your kernel binary.

config DEPRECATED_PARAM_STRUCT
	bool "Provide old way to pass kernel parameters"
	depends on ATAGS
	help
	  This was deprecated in 2001 and announced to live on for 5 years.
	  Some old boot loaders still use this way.

# Compressed boot loader in ROM. Yes, we really want to ask about
# TEXT and BSS so we preserve their values in the config files.
config ZBOOT_ROM_TEXT
	hex "Compressed ROM boot loader base address"
	default 0x0
	help
	  The physical address at which the ROM-able zImage is to be
	  placed in the target. Platforms which normally make use of
	  ROM-able zImage formats normally set this to a suitable
	  value in their defconfig file.

	  If ZBOOT_ROM is not enabled, this has no effect.

config ZBOOT_ROM_BSS
	hex "Compressed ROM boot loader BSS address"
	default 0x0
	help
	  The base address of an area of read/write memory in the target
	  for the ROM-able zImage which must be available while the
	  decompressor is running. It must be large enough to hold the
	  entire decompressed kernel plus an additional 128 KiB.
	  Platforms which normally make use of ROM-able zImage formats
	  normally set this to a suitable value in their defconfig file.

	  If ZBOOT_ROM is not enabled, this has no effect.

config ZBOOT_ROM
	bool "Compressed boot loader in ROM/flash"
	depends on ZBOOT_ROM_TEXT != ZBOOT_ROM_BSS
	depends on !ARM_APPENDED_DTB && !XIP_KERNEL && !AUTO_ZRELADDR
	help
	  Say Y here if you intend to execute your compressed kernel image
	  (zImage) directly from ROM or flash. If unsure, say N.

config ARM_APPENDED_DTB
	bool "Use appended device tree blob to zImage (EXPERIMENTAL)"
	depends on OF
	help
	  With this option, the boot code will look for a device tree binary
	  (DTB) appended to zImage
	  (e.g. cat zImage <filename>.dtb > zImage_w_dtb).

	  This is meant as a backward compatibility convenience for those
	  systems with a bootloader that can't be upgraded to accommodate
	  the documented boot protocol using a device tree.

	  Beware that there is very little in terms of protection against
	  this option being confused by leftover garbage in memory that might
	  look like a DTB header after a reboot if no actual DTB is appended
	  to zImage. Do not leave this option active in a production kernel
	  if you don't intend to always append a DTB. Proper passing of the
	  location into r2 of a bootloader provided DTB is always preferable
	  to this option.

config ARM_ATAG_DTB_COMPAT
	bool "Supplement the appended DTB with traditional ATAG information"
	depends on ARM_APPENDED_DTB
	help
	  Some old bootloaders can't be updated to a DTB capable one, yet
	  they provide ATAGs with memory configuration, the ramdisk address,
	  the kernel cmdline string, etc. Such information is dynamically
	  provided by the bootloader and can't always be stored in a static
	  DTB. To allow a device tree enabled kernel to be used with such
	  bootloaders, this option allows zImage to extract the information
	  from the ATAG list and store it at run time into the appended DTB.

choice
treewide: change conditional prompt for choices to 'depends on'
While Documentation/kbuild/kconfig-language.rst provides a brief
explanation, there are recurring confusions regarding the usage of a
prompt followed by 'if <expr>'. This conditional controls _only_ the
prompt.
A typical usage is as follows:
menuconfig BLOCK
bool "Enable the block layer" if EXPERT
default y
When EXPERT=n, the prompt is hidden, but this config entry is still
active, and BLOCK is set to its default value 'y'. This is reasonable
because you most likely want to enable the block device support. When
EXPERT=y, the prompt is shown, allowing you to toggle BLOCK.
Please note that it is different from 'depends on EXPERT', which would
enable and disable the entire config entry.
However, this conditional prompt has never worked in a choice block.
The following two work in the same way: when EXPERT is disabled, the
choice block is entirely disabled.
[Test Code 1]

choice
	prompt "choose" if EXPERT

config A
	bool "A"

config B
	bool "B"

endchoice

[Test Code 2]

choice
	prompt "choose"
	depends on EXPERT

config A
	bool "A"

config B
	bool "B"

endchoice
I believe the first case should hide only the prompt, producing the
default:
CONFIG_A=y
# CONFIG_B is not set
The next commit will change (fix) the behavior of the conditional prompt
in choice blocks.
I see several choice blocks wrongly using a conditional prompt, where
'depends on' makes more sense.
To preserve the current behavior, this commit converts such misuses.
I did not touch the following entry in arch/x86/Kconfig:
choice
prompt "Memory split" if EXPERT
default VMSPLIT_3G
This is truly the correct use of the conditional prompt; when EXPERT=n,
this choice block should silently select the reasonable VMSPLIT_3G,
although the resulting PAGE_OFFSET will not be affected anyway.
Presumably, the one in fs/jffs2/Kconfig is also correct, but I converted
it to 'depends on' to avoid any potential behavioral change.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2024-06-26 18:22:00 +00:00
	prompt "Kernel command line type"
	depends on ARM_ATAG_DTB_COMPAT
	default ARM_ATAG_DTB_COMPAT_CMDLINE_FROM_BOOTLOADER

config ARM_ATAG_DTB_COMPAT_CMDLINE_FROM_BOOTLOADER
	bool "Use bootloader kernel arguments if available"
	help
	  Uses the command-line options passed by the boot loader instead of
	  the device tree bootargs property. If the boot loader doesn't provide
	  any, the device tree bootargs property will be used.

config ARM_ATAG_DTB_COMPAT_CMDLINE_EXTEND
	bool "Extend with bootloader kernel arguments"
	help
	  The command-line arguments provided by the boot loader will be
	  appended to the device tree bootargs property.

endchoice

config CMDLINE
	string "Default kernel command string"
	default ""
	help
	  On some architectures (e.g. CATS), there is currently no way
	  for the boot loader to pass arguments to the kernel. For these
	  architectures, you should supply some command-line options at build
	  time by entering them here. As a minimum, you should specify the
	  memory size and the root device (e.g., mem=64M root=/dev/nfs).

choice
|
|
|
prompt "Kernel command line type"
|
|
|
|
depends on CMDLINE != ""
|
2011-05-04 16:07:55 +00:00
|
|
|
default CMDLINE_FROM_BOOTLOADER
|
|
|
|
|
|
|
|
config CMDLINE_FROM_BOOTLOADER
	bool "Use bootloader kernel arguments if available"
	help
	  Uses the command-line options passed by the boot loader. If
	  the boot loader doesn't provide any, the default kernel command
	  string provided in CMDLINE will be used.

config CMDLINE_EXTEND
	bool "Extend bootloader kernel arguments"
	help
	  The command-line arguments provided by the boot loader will be
	  appended to the default kernel command string.
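
A worked illustration (hypothetical values): with CONFIG_CMDLINE="root=/dev/nfs"
and a boot loader passing "console=ttyS0,115200", CMDLINE_EXTEND yields the
effective command line

	root=/dev/nfs console=ttyS0,115200

with the boot loader arguments appended to the default string, as the help
text above describes.
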
config CMDLINE_FORCE
	bool "Always use the default kernel command string"
	help
	  Always use the default kernel command string, even if the boot
	  loader passes other arguments to the kernel.

	  This is useful if you cannot or don't want to change the
	  command-line options your boot loader passes to the kernel.

endchoice

config XIP_KERNEL
	bool "Kernel Execute-In-Place from ROM"
	depends on !ARM_LPAE && !ARCH_MULTIPLATFORM
	depends on !ARM_PATCH_IDIV && !ARM_PATCH_PHYS_VIRT && !SMP_ON_UP
	help
	  Execute-In-Place allows the kernel to run from non-volatile storage
	  directly addressable by the CPU, such as NOR flash. This saves RAM
	  space since the text section of the kernel is not loaded from flash
	  to RAM. Read-write sections, such as the data section and stack,
	  are still copied to RAM. The XIP kernel is not compressed since
	  it has to run directly from flash, so it will take more space to
	  store it. The flash address used to link the kernel object files,
	  and for storing it, is configuration dependent. Therefore, if you
	  say Y here, you must know the proper physical address where to
	  store the kernel image depending on your own flash memory usage.

	  Also note that the make target becomes "make xipImage" rather than
	  "make zImage" or "make Image". The final kernel binary to put in
	  ROM memory will be arch/arm/boot/xipImage.

	  If unsure, say N.

config XIP_PHYS_ADDR
	hex "XIP Kernel Physical Location"
	depends on XIP_KERNEL
	default "0x00080000"
	help
	  This is the physical address in your flash memory the kernel will
	  be linked for and stored to. This address is dependent on your
	  own flash usage.
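
A hypothetical .config fragment for an XIP build (the address must match
your board's actual flash layout; the value shown is just the default from
above):

	CONFIG_XIP_KERNEL=y
	CONFIG_XIP_PHYS_ADDR=0x00080000

The image is then built with "make xipImage", as noted in the XIP_KERNEL
help text.
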
config XIP_DEFLATED_DATA
	bool "Store kernel .data section compressed in ROM"
	depends on XIP_KERNEL
	select ZLIB_INFLATE
	help
	  Before the kernel is actually executed, its .data section has to be
	  copied to RAM from ROM. This option allows that data to be stored
	  in compressed form and decompressed into RAM rather than merely
	  copied, saving some precious ROM space. A possible drawback is a
	  slightly longer boot delay.

config ARCH_SUPPORTS_KEXEC
	def_bool (!SMP || PM_SLEEP_SMP) && MMU

config ATAGS_PROC
	bool "Export atags in procfs"
	depends on ATAGS && KEXEC
	default y
	help
	  Should the atags used to boot the kernel be exported in an "atags"
	  file in procfs? Useful with kexec.

config ARCH_SUPPORTS_CRASH_DUMP
	def_bool y

config ARCH_DEFAULT_CRASH_DUMP
	def_bool y

config AUTO_ZRELADDR
	bool "Auto calculation of the decompressed kernel image address" if !ARCH_MULTIPLATFORM
	default !(ARCH_FOOTBRIDGE || ARCH_RPC || ARCH_SA1100)
	help
	  ZRELADDR is the physical address where the decompressed kernel
	  image will be placed. If AUTO_ZRELADDR is selected, the address
	  will be determined at run-time, either by masking the current IP
	  with 0xf8000000, or, if invalid, from the DTB passed in r2.
	  This assumes the zImage is placed within the first 128MB from
	  the start of memory.
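
A worked example of the masking (the load address is hypothetical): a zImage
executing at 0x60af8000 would compute

	0x60af8000 & 0xf8000000 = 0x60000000

and use the result as ZRELADDR; the 0xf8000000 mask is also why the zImage
must sit within the first 128MB (0x08000000 bytes) from the start of memory.
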
config EFI_STUB
	bool

config EFI
	bool "UEFI runtime support"
	depends on OF && !CPU_BIG_ENDIAN && MMU && AUTO_ZRELADDR && !XIP_KERNEL
	select UCS2_STRING
	select EFI_PARAMS_FROM_FDT
	select EFI_STUB
	select EFI_GENERIC_STUB
	select EFI_RUNTIME_WRAPPERS
	help
	  This option provides support for runtime services provided
	  by UEFI firmware (such as non-volatile variables, realtime
	  clock, and platform reset). A UEFI stub is also provided to
	  allow the kernel to be booted as an EFI application. This
	  is only useful for kernels that may run on systems that have
	  UEFI firmware.

config DMI
	bool "Enable support for SMBIOS (DMI) tables"
	depends on EFI
	default y
	help
	  This enables the SMBIOS/DMI feature for systems.

	  This option is only useful on systems that have UEFI firmware.
	  However, even with this option, the resultant kernel should
	  continue to boot on existing non-UEFI platforms.

	  NOTE: This does *NOT* enable or encourage the use of DMI quirks,
	  i.e., the practice of identifying the platform via DMI to
	  decide whether certain workarounds for buggy hardware and/or
	  firmware need to be enabled. This would require the DMI subsystem
	  to be enabled much earlier than we do on ARM, which is non-trivial.

endmenu

menu "CPU Power Management"
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
source "drivers/cpufreq/Kconfig"
|
|
|
|
|
2008-08-18 16:26:00 +00:00
|
|
|
source "drivers/cpuidle/Kconfig"
|
|
|
|
|
|
|
|
endmenu
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
menu "Floating point emulation"
|
|
|
|
|
|
|
|
comment "At least one emulation must be selected"
|
|
|
|
|
|
|
|
config FPE_NWFPE
	bool "NWFPE math emulation"
	depends on (!AEABI || OABI_COMPAT) && !THUMB2_KERNEL
	help
	  Say Y to include the NWFPE floating point emulator in the kernel.
	  This is necessary to run most binaries. Linux does not currently
	  support floating point hardware so you need to say Y here even if
	  your machine has an FPA or floating point co-processor podule.

	  You may say N here if you are going to load the Acorn FPEmulator
	  early in the bootup.

config FPE_NWFPE_XP
	bool "Support extended precision"
	depends on FPE_NWFPE
	help
	  Say Y to include 80-bit support in the kernel floating-point
	  emulator. Otherwise, only 32- and 64-bit support is compiled in.
	  Note that gcc does not generate 80-bit operations by default,
	  so in most cases this option only enlarges the size of the
	  floating point emulator without any good reason.

	  You almost surely want to say N here.

config FPE_FASTFPE
	bool "FastFPE math emulation (EXPERIMENTAL)"
	depends on (!AEABI || OABI_COMPAT) && !CPU_32v3
	help
	  Say Y here to include the FAST floating point emulator in the kernel.
	  This is an experimental, much faster emulator which now also has full
	  precision for the mantissa. It does not support any exceptions.
	  It is very simple, and approximately 3-6 times faster than NWFPE.

	  It should be sufficient for most programs. It may not be suitable
	  for scientific calculations, but you have to check this for yourself.
	  If you do not feel you need faster FP emulation, you should choose
	  NWFPE instead.

config VFP
	bool "VFP-format floating point maths"
	depends on CPU_V6 || CPU_V6K || CPU_ARM926T || CPU_V7 || CPU_FEROCEON
	help
	  Say Y to include VFP support code in the kernel. This is needed
	  if your hardware includes a VFP unit.

	  Please see <file:Documentation/arch/arm/vfp/release-notes.rst> for
	  release notes and additional status information.

	  Say N if your target does not have VFP hardware.

config VFPv3
	bool
	depends on VFP
	default y if CPU_V7

config NEON
	bool "Advanced SIMD (NEON) Extension support"
	depends on VFPv3 && CPU_V7
	help
	  Say Y to include support code for NEON, the ARMv7 Advanced SIMD
	  Extension.

config KERNEL_MODE_NEON
	bool "Support for NEON in kernel mode"
	depends on NEON && AEABI
	help
	  Say Y to include support for NEON in kernel mode.

endmenu

menu "Power management options"
|
|
|
|
|
2005-11-15 11:31:41 +00:00
|
|
|
source "kernel/power/Kconfig"
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2007-12-08 01:14:00 +00:00
|
|
|
config ARCH_SUSPEND_POSSIBLE
	depends on CPU_ARM920T || CPU_ARM926T || CPU_FEROCEON || CPU_SA1100 || \
		CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M || CPU_XSC3 || CPU_XSCALE || CPU_MOHAWK
	def_bool y

config ARM_CPU_SUSPEND
	def_bool PM_SLEEP || BL_SWITCHER || ARM_PSCI_FW
	depends on ARCH_SUSPEND_POSSIBLE

config ARCH_HIBERNATION_POSSIBLE
	bool
	depends on MMU
	default y if ARCH_SUSPEND_POSSIBLE

endmenu

ARM: 8991/1: use VFP assembler mnemonics if available
The integrated assembler of Clang 10 and earlier does not allow access to
the VFP registers through the coprocessor load/store instructions:
arch/arm/vfp/vfpmodule.c:342:2: error: invalid operand for instruction
fmxr(FPEXC, fpexc & ~(FPEXC_EX|FPEXC_DEX|FPEXC_FP2V|FPEXC_VV|FPEXC_TRAP_MASK));
^
arch/arm/vfp/vfpinstr.h:79:6: note: expanded from macro 'fmxr'
asm("mcr p10, 7, %0, " vfpreg(_vfp_) ", cr0, 0 @ fmxr " #_vfp_ ", %0"
^
<inline asm>:1:6: note: instantiated into assembly here
mcr p10, 7, r0, cr8, cr0, 0 @ fmxr FPEXC, r0
^
This has been addressed with Clang 11 [0]. However, to support earlier
versions of Clang, and for better readability, use of VFP assembler
mnemonics is still preferred.
Ideally we would replace this code with the unified assembler language
mnemonics vmrs/vmsr at the call sites, along with .fpu assembler directives.
The GNU assembler has supported the .fpu directive at least since 2.17
(when documentation for it was added). Since Linux requires binutils 2.21,
it is safe to use the .fpu directive. However, binutils up to 2.24 does
not allow FPINST or FPINST2 as an argument to the vmrs/vmsr instructions
(see binutils commit 16d02dc907c5):
arch/arm/vfp/vfphw.S: Assembler messages:
arch/arm/vfp/vfphw.S:162: Error: operand 0 must be FPSID or FPSCR pr FPEXC -- `vmsr FPINST,r6'
arch/arm/vfp/vfphw.S:165: Error: operand 0 must be FPSID or FPSCR pr FPEXC -- `vmsr FPINST2,r8'
arch/arm/vfp/vfphw.S:235: Error: operand 1 must be a VFP extension System Register -- `vmrs r3,FPINST'
arch/arm/vfp/vfphw.S:238: Error: operand 1 must be a VFP extension System Register -- `vmrs r12,FPINST2'
Use as-instr in Kconfig to check whether FPINST/FPINST2 can be used. If
they can, make use of .fpu directives and UAL VFP mnemonics for register
access.
This allows vfpmodule.c to be built with Clang and its integrated assembler.
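
A minimal Kconfig sketch of such an assembler feature test (the symbol name
is hypothetical, and it assumes the as-instr and comma helpers provided by
scripts/Kconfig.include):

	config AS_VMRS_FPINST
		# y only if the assembler accepts FPINST as a vmrs operand
		def_bool $(as-instr,.fpu vfpv2\nvmrs r6$(comma)FPINST)
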
[0] https://reviews.llvm.org/D59733
Link: https://github.com/ClangBuiltLinux/linux/issues/905
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
source "arch/arm/Kconfig.assembler"