linux-next/arch/arm/Kconfig

License cleanup: add SPDX GPL-2.0 license identifier to files with no license

Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license. By default all files without license information are under the default license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text. This patch is based on work done by Thomas Gleixner, Kate Stewart and Philippe Ombredanne.

How this work was done: patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
 - the file had no licensing information in it,
 - the file was a */uapi/* one with no licensing information in it,
 - the file was a */uapi/* one with existing licensing information.

Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to apply to a file was done in a spreadsheet of side-by-side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet and did an initial spot review of a few 1000 files. The 4.13 kernel was the starting point of the analysis, with 60,537 files assessed. Kate Stewart did a file-by-file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to apply to each file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging were:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5 lines of source.
 - The file already had some variant of a license header in it (even if <5 lines).
All documentation files were explicitly excluded.

The following heuristics were used to determine which SPDX license identifiers to apply.

 - When both scanners couldn't find any license traces, the file was considered to have no license information in it, and the top level COPYING file license applied. For non */uapi/* files that summary was:

     SPDX license identifier                              # files
     ---------------------------------------------------|-------
     GPL-2.0                                               11139

   and resulted in the first patch in this series.

   If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note", otherwise it was "GPL-2.0". Results of that were:

     SPDX license identifier                              # files
     ---------------------------------------------------|-------
     GPL-2.0 WITH Linux-syscall-note                         930

   and resulted in the second patch in this series.

 - If a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or it had no licensing in it (per the prior point). Results summary:

     SPDX license identifier                              # files
     ---------------------------------------------------|------
     GPL-2.0 WITH Linux-syscall-note                         270
     GPL-2.0+ WITH Linux-syscall-note                        169
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
     ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
     LGPL-2.1+ WITH Linux-syscall-note                        15
     GPL-1.0+ WITH Linux-syscall-note                         14
     ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
     LGPL-2.0+ WITH Linux-syscall-note                         4
     LGPL-2.1 WITH Linux-syscall-note                          3
     ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
     ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1

   and that resulted in the third patch in this series.

 - When the two scanners agreed on the detected license(s), that became the concluded license(s).
 - When there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.
 - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).
 - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.
 - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based in part on an older version of FOSSology, so they are related.

Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
 - a full scancode scan run, collecting the matched texts, detected license ids and scores,
 - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct,
 - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct.

This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to each file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.

Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 14:07:57 +00:00
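The Kconfig line below carries the tag in '#' comment form. For reference, the "different comment types" mentioned in the commit above follow the kernel's documented SPDX convention; a brief illustration (not part of this file):

    // SPDX-License-Identifier: GPL-2.0
    /* ... first line of a .c source file ... */

    /* SPDX-License-Identifier: GPL-2.0 */
    /* ... first line of a .h header or uapi file ... */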
# SPDX-License-Identifier: GPL-2.0
config ARM
bool
default y
select ARCH_32BIT_OFF_T
select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE if HAVE_KRETPROBES && FRAME_POINTER && !ARM_UNWIND
select ARCH_HAS_BINFMT_FLAT
Introduce cpu_dcache_is_aliasing() across all architectures

Introduce a generic way to query whether the data cache is virtually aliased on all architectures. Its purpose is to ensure that subsystems which are incompatible with virtually aliased data caches (e.g. FS_DAX) can reliably query this.

For data cache aliasing, there are three scenarios depending on the architecture. Here is a breakdown based on my understanding:

A) The data cache is always aliasing:
   * arc
   * csky
   * m68k (note: shared memory mappings are incoherent? SHMLBA is missing there.)
   * sh
   * parisc

B) The data cache aliasing is statically known or depends on querying CPU state at runtime:
   * arm (cache_is_vivt() || cache_is_vipt_aliasing())
   * mips (cpu_has_dc_aliases)
   * nios2 (NIOS2_DCACHE_SIZE > PAGE_SIZE)
   * sparc32 (vac_cache_size > PAGE_SIZE)
   * sparc64 (L1DCACHE_SIZE > PAGE_SIZE)
   * xtensa (DCACHE_WAY_SIZE > PAGE_SIZE)

C) The data cache is never aliasing:
   * alpha
   * arm64 (aarch64)
   * hexagon
   * loongarch (but with incoherent write buffers, which are disabled since commit d23b7795 ("LoongArch: Change SHMLBA from SZ_64K to PAGE_SIZE"))
   * microblaze
   * openrisc
   * powerpc
   * riscv
   * s390
   * um
   * x86

Require architectures in A) and B) to select ARCH_HAS_CPU_CACHE_ALIASING and implement "cpu_dcache_is_aliasing()". Architectures in C) don't select ARCH_HAS_CPU_CACHE_ALIASING, and thus cpu_dcache_is_aliasing() simply evaluates to "false".

Note that this leaves "cpu_icache_is_aliasing()" to be implemented as future work. This would be useful to gate features like XIP on architectures which have aliasing CPU dcache-icache but not CPU dcache-dcache.

Use "cpu_dcache" and "cpu_cache" rather than just "dcache" and "cache" to clarify that we really mean "CPU data cache" and "CPU cache" to eliminate any possible confusion with VFS "dentry cache" and "page cache".

Link: https://lore.kernel.org/lkml/20030910210416.GA24258@mail.jlokier.co.uk/
Link: https://lkml.kernel.org/r/20240215144633.96437-9-mathieu.desnoyers@efficios.com
Fixes: d92576f1167c ("dax: does not work correctly with virtual aliasing caches")
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Alasdair Kergon <agk@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Michael Sclafani <dm-devel@lists.linux.dev>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-02-15 14:46:32 +00:00
select ARCH_HAS_CPU_CACHE_ALIASING
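A minimal sketch of what this option gates, assuming the structure implied by the commit message above; only the arm expression (cache_is_vivt() || cache_is_vipt_aliasing()) is quoted from it, the surrounding scaffolding is illustrative rather than the literal upstream header:

    /* Hedged sketch, not the exact upstream code. */
    #ifdef CONFIG_ARCH_HAS_CPU_CACHE_ALIASING
    /* Group A/B architectures provide a real implementation; arm's test is: */
    static inline bool cpu_dcache_is_aliasing(void)
    {
            return cache_is_vivt() || cache_is_vipt_aliasing();
    }
    #else
    /* Group C architectures never alias, so the generic fallback is constant. */
    static inline bool cpu_dcache_is_aliasing(void)
    {
            return false;
    }
    #endif

A subsystem such as FS_DAX can then simply refuse to proceed when cpu_dcache_is_aliasing() returns true, instead of maintaining per-architecture #ifdef lists.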
select ARCH_HAS_CPU_FINALIZE_INIT if MMU
select ARCH_HAS_CRC32 if KERNEL_MODE_NEON
select ARCH_HAS_CRC_T10DIF if KERNEL_MODE_NEON
usercopy: Check valid lifetime via stack depth One of the things that CONFIG_HARDENED_USERCOPY sanity-checks is whether an object that is about to be copied to/from userspace is overlapping the stack at all. If it is, it performs a number of inexpensive bounds checks. One of the finer-grained checks is whether an object crosses stack frames within the stack region. Doing this on x86 with CONFIG_FRAME_POINTER was cheap/easy. Doing it with ORC was deemed too heavy, and was left out (a while ago), leaving the courser whole-stack check. The LKDTM tests USERCOPY_STACK_FRAME_TO and USERCOPY_STACK_FRAME_FROM try to exercise these cross-frame cases to validate the defense is working. They have been failing ever since ORC was added (which was expected). While Muhammad was investigating various LKDTM failures[1], he asked me for additional details on them, and I realized that when exact stack frame boundary checking is not available (i.e. everything except x86 with FRAME_POINTER), it could check if a stack object is at least "current depth valid", in the sense that any object within the stack region but not between start-of-stack and current_stack_pointer should be considered unavailable (i.e. its lifetime is from a call no longer present on the stack). Introduce ARCH_HAS_CURRENT_STACK_POINTER to track which architectures have actually implemented the common global register alias. Additionally report usercopy bounds checking failures with an offset from current_stack_pointer, which may assist with diagnosing failures. The LKDTM USERCOPY_STACK_FRAME_TO and USERCOPY_STACK_FRAME_FROM tests (once slightly adjusted in a separate patch) pass again with this fixed. [1] https://github.com/kernelci/kernelci-project/issues/84 Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-mm@kvack.org Reported-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Signed-off-by: Kees Cook <keescook@chromium.org> --- v1: https://lore.kernel.org/lkml/20220216201449.2087956-1-keescook@chromium.org v2: https://lore.kernel.org/lkml/20220224060342.1855457-1-keescook@chromium.org v3: https://lore.kernel.org/lkml/20220225173345.3358109-1-keescook@chromium.org v4: - improve commit log (akpm)
2022-02-16 20:05:28 +00:00
select ARCH_HAS_CURRENT_STACK_POINTER
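A rough sketch of the "current depth valid" test described in the commit above, assuming a downward-growing stack and the current_stack_pointer register alias this option advertises; the helper name and exact bounds handling are illustrative, not the literal mm/usercopy.c code:

    /* Hedged sketch: reject stack objects whose lifetime has already ended. */
    static inline bool stack_object_is_live(unsigned long obj, unsigned long len,
                                            unsigned long stack_low,
                                            unsigned long stack_high)
    {
            unsigned long sp = current_stack_pointer;

            /* Must lie inside the task stack at all. */
            if (obj < stack_low || obj + len > stack_high)
                    return false;

            /*
             * With a downward-growing stack, addresses below the current
             * stack pointer belong to frames that have already returned;
             * copying into/out of them is rejected.
             */
            return obj >= sp;
    }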
ARM: 8738/1: Disable CONFIG_DEBUG_VIRTUAL for NOMMU

While running the MPS2 platform (NOMMU) with the DTB placed below PHYS_OFFSET, the following warning popped up:

  ------------[ cut here ]------------
  WARNING: CPU: 0 PID: 0 at arch/arm/mm/physaddr.c:42 __virt_to_phys+0x2f/0x40
  virt_to_phys used for non-linear address: 00004000 (0x4000)
  CPU: 0 PID: 0 Comm: swapper Not tainted 4.15.0-rc1-5a31bf2-clean+ #2767
  Hardware name: MPS2 (Device Tree Support)
  [<2100bf39>] (unwind_backtrace) from [<2100b3ff>] (show_stack+0xb/0xc)
  [<2100b3ff>] (show_stack) from [<2100e697>] (__warn+0x87/0xac)
  [<2100e697>] (__warn) from [<2100e6db>] (warn_slowpath_fmt+0x1f/0x28)
  [<2100e6db>] (warn_slowpath_fmt) from [<2100c603>] (__virt_to_phys+0x2f/0x40)
  [<2100c603>] (__virt_to_phys) from [<2116a499>] (early_init_fdt_reserve_self+0xd/0x24)
  [<2116a499>] (early_init_fdt_reserve_self) from [<2116222d>] (arm_memblock_init+0xb5/0xf8)
  [<2116222d>] (arm_memblock_init) from [<21161cad>] (setup_arch+0x38b/0x50e)
  [<21161cad>] (setup_arch) from [<21160455>] (start_kernel+0x31/0x280)
  [<21160455>] (start_kernel) from [<00000000>] (  (null))
  random: get_random_bytes called from init_oops_id+0x17/0x2c with crng_init=0
  ---[ end trace 0000000000000000 ]---

Platforms without MMU support run with a 1:1 (i.e. linear) memory mapping, so disable CONFIG_DEBUG_VIRTUAL.

Fixes: e377cd8221eb ("ARM: 8640/1: Add support for CONFIG_DEBUG_VIRTUAL")
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-12-18 10:48:42 +00:00
select ARCH_HAS_DEBUG_VIRTUAL if MMU
select ARCH_HAS_DMA_ALLOC if MMU
select ARCH_HAS_DMA_OPS
select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
mm: expose arch_mmap_rnd when available When an architecture fully supports randomizing the ELF load location, a per-arch mmap_rnd() function is used to find a randomized mmap base. In preparation for randomizing the location of ET_DYN binaries separately from mmap, this renames and exports these functions as arch_mmap_rnd(). Additionally introduces CONFIG_ARCH_HAS_ELF_RANDOMIZE for describing this feature on architectures that support it (which is a superset of ARCH_BINFMT_ELF_RANDOMIZE_PIE, since s390 already supports a separated ET_DYN ASLR from mmap ASLR without the ARCH_BINFMT_ELF_RANDOMIZE_PIE logic). Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Hector Marco-Gisbert <hecmargi@upv.es> Cc: Russell King <linux@arm.linux.org.uk> Reviewed-by: Ingo Molnar <mingo@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: "David A. Long" <dave.long@linaro.org> Cc: Andrey Ryabinin <a.ryabinin@samsung.com> Cc: Arun Chandran <achandran@mvista.com> Cc: Yann Droneaud <ydroneaud@opteya.com> Cc: Min-Hua Chen <orca.chen@gmail.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Alex Smith <alex@alex-smith.me.uk> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: Vineeth Vijayan <vvijayan@mvista.com> Cc: Jeff Bailey <jeffbailey@google.com> Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Behan Webster <behanw@converseincode.com> Cc: Ismael Ripoll <iripoll@upv.es> Cc: Jan-Simon Mller <dl9pf@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-04-14 22:48:00 +00:00
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_KEEPINITRD
select ARCH_HAS_KCOV
select ARCH_HAS_MEMBARRIER_SYNC_CORE
bpf: Restrict bpf_probe_read{, str}() only to archs where they work Given the legacy bpf_probe_read{,str}() BPF helpers are broken on archs with overlapping address ranges, we should really take the next step to disable them from BPF use there. To generally fix the situation, we've recently added new helper variants bpf_probe_read_{user,kernel}() and bpf_probe_read_{user,kernel}_str(). For details on them, see 6ae08ae3dea2 ("bpf: Add probe_read_{user, kernel} and probe_read_{user,kernel}_str helpers"). Given bpf_probe_read{,str}() have been around for ~5 years by now, there are plenty of users at least on x86 still relying on them today, so we cannot remove them entirely w/o breaking the BPF tracing ecosystem. However, their use should be restricted to archs with non-overlapping address ranges where they are working in their current form. Therefore, move this behind a CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE and have x86, arm64, arm select it (other archs supporting it can follow-up on it as well). For the remaining archs, they can workaround easily by relying on the feature probe from bpftool which spills out defines that can be used out of BPF C code to implement the drop-in replacement for old/new kernels via: bpftool feature probe macro Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/bpf/20200515101118.6508-2-daniel@iogearbox.net
2020-05-15 10:11:16 +00:00
select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
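For context, a hedged libbpf-style sketch of the split helpers the commit above points to as the general replacement; the traced symbol, program name, and buffer size are illustrative, and a libbpf build with the usual bpf_helpers.h/bpf_tracing.h conveniences (and the target arch defined for PT_REGS_PARM2) is assumed:

    #include <linux/bpf.h>
    #include <linux/ptrace.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("kprobe/do_sys_open")
    int trace_open(struct pt_regs *ctx)
    {
            /* second argument of do_sys_open() is a user-space pointer */
            const char *filename = (const char *)PT_REGS_PARM2(ctx);
            char buf[64];

            /* user memory: use the _user variant, not legacy bpf_probe_read() */
            bpf_probe_read_user_str(buf, sizeof(buf), filename);
            bpf_printk("open: %s", buf);
            return 0;
    }

On older kernels that lack the new helpers, the feature probe the commit mentions ("bpftool feature probe macro") can emit defines that let a program fall back to the legacy bpf_probe_read() instead.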
mm: introduce ARCH_HAS_PTE_SPECIAL Currently the PTE special supports is turned on in per architecture header files. Most of the time, it is defined in arch/*/include/asm/pgtable.h depending or not on some other per architecture static definition. This patch introduce a new configuration variable to manage this directly in the Kconfig files. It would later replace __HAVE_ARCH_PTE_SPECIAL. Here notes for some architecture where the definition of __HAVE_ARCH_PTE_SPECIAL is not obvious: arm __HAVE_ARCH_PTE_SPECIAL which is currently defined in arch/arm/include/asm/pgtable-3level.h which is included by arch/arm/include/asm/pgtable.h when CONFIG_ARM_LPAE is set. So select ARCH_HAS_PTE_SPECIAL if ARM_LPAE. powerpc __HAVE_ARCH_PTE_SPECIAL is defined in 2 files: - arch/powerpc/include/asm/book3s/64/pgtable.h - arch/powerpc/include/asm/pte-common.h The first one is included if (PPC_BOOK3S & PPC64) while the second is included in all the other cases. So select ARCH_HAS_PTE_SPECIAL all the time. sparc: __HAVE_ARCH_PTE_SPECIAL is defined if defined(__sparc__) && defined(__arch64__) which are defined through the compiler in sparc/Makefile if !SPARC32 which I assume to be if SPARC64. So select ARCH_HAS_PTE_SPECIAL if SPARC64 There is no functional change introduced by this patch. Link: http://lkml.kernel.org/r/1523433816-14460-2-git-send-email-ldufour@linux.vnet.ibm.com Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Suggested-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Rich Felker <dalias@libc.org> Cc: David S. Miller <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Albert Ou <albert@sifive.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: David Rientjes <rientjes@google.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Christophe LEROY <christophe.leroy@c-s.fr> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-06-08 00:06:08 +00:00
select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
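A small hedged sketch of the kind of generic fallback this option enables, so that pte_special()-based fast paths compile away on architectures that do not select it; illustrative only, not the exact upstream header:

    #ifndef CONFIG_ARCH_HAS_PTE_SPECIAL
    static inline int pte_special(pte_t pte)
    {
            return 0;       /* no special PTEs without the arch opt-in */
    }

    static inline pte_t pte_mkspecial(pte_t pte)
    {
            return pte;     /* nothing to mark */
    }
    #endif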
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SET_MEMORY
select ARCH_STACKWALK
select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
select ARCH_HAS_STRICT_MODULE_RWX if MMU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
select ARCH_HAS_GCOV_PROFILE_ALL
arm: remove CONFIG_ARCH_HAS_HOLES_MEMORYMODEL ARM is the only architecture that defines CONFIG_ARCH_HAS_HOLES_MEMORYMODEL which in turn enables memmap_valid_within() function that is intended to verify existence of struct page associated with a pfn when there are holes in the memory map. However, the ARCH_HAS_HOLES_MEMORYMODEL also enables HAVE_ARCH_PFN_VALID and arch-specific pfn_valid() implementation that also deals with the holes in the memory map. The only two users of memmap_valid_within() call this function after a call to pfn_valid() so the memmap_valid_within() check becomes redundant. Remove CONFIG_ARCH_HAS_HOLES_MEMORYMODEL and memmap_valid_within() and rely entirely on ARM's implementation of pfn_valid() that is now enabled unconditionally. Link: https://lkml.kernel.org/r/20201101170454.9567-9-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Matt Turner <mattst88@gmail.com> Cc: Meelis Roos <mroos@linux.ee> Cc: Michael Schmitz <schmitzmic@gmail.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-15 03:09:55 +00:00
select ARCH_KEEP_MEMBLOCK
select ARCH_HAS_UBSAN
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7
select ARCH_NEED_CMPXCHG_1_EMU if CPU_V6
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_CFI_CLANG
mm: generalize SYS_SUPPORTS_HUGETLBFS (rename as ARCH_SUPPORTS_HUGETLBFS) SYS_SUPPORTS_HUGETLBFS config has duplicate definitions on platforms that subscribe it. Instead, just make it a generic option which can be selected on applicable platforms. Also rename it as ARCH_SUPPORTS_HUGETLBFS instead. This reduces code duplication and makes it cleaner. Link: https://lkml.kernel.org/r/1617259448-22529-3-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Acked-by: Palmer Dabbelt <palmerdabbelt@google.com> [riscv] Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc] Cc: Russell King <linux@armlinux.org.uk> Cc: Will Deacon <will@kernel.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Helge Deller <deller@gmx.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Rich Felker <dalias@libc.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-05-05 01:38:13 +00:00
select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE
select ARCH_SUPPORTS_PER_VMA_LOCK
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_USE_MEMTEST
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
select ARCH_WANT_GENERAL_HUGETLB
ARM: config: sort select statements alphanumerically

As suggested by Andrew Morton:

  This is a pet peeve of mine. Any time there's a long list of items
  (header file inclusions, kconfig entries, array initalisers, etc) and
  someone wants to add a new item, they *always* go and stick it at the
  end of the list.

  Guys, don't do this. Either put the new item into a randomly-chosen
  position or, probably better, alphanumerically sort the list.

lets sort all our select statements alphanumerically. This commit was created by the following perl:

  while (<>) {
          while (/\\\s*$/) {
                  $_ .= <>;
          }
          undef %selects if /^\s*config\s+/;
          if (/^\s+select\s+(\w+).*/) {
                  if (defined($selects{$1})) {
                          if ($selects{$1} eq $_) {
                                  print STDERR "Warning: removing duplicated $1 entry\n";
                          } else {
                                  print STDERR "Error: $1 differently selected\n".
                                          "\tOld: $selects{$1}\n".
                                          "\tNew: $_\n";
                                  exit 1;
                          }
                  }
                  $selects{$1} = $_;
                  next;
          }
          if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or
                            /^endif/ or /^endchoice/)) {
                  foreach $k (sort (keys %selects)) {
                          print "$selects{$k}";
                  }
                  undef %selects;
          }
          print;
  }
  if (%selects) {
          foreach $k (sort (keys %selects)) {
                  print "$selects{$k}";
          }
  }

It found two duplicates:

  Warning: removing duplicated S5P_SETUP_MIPIPHY entry
  Warning: removing duplicated HARDIRQS_SW_RESEND entry

and they are identical duplicates, hence the shrinkage in the diffstat of two lines.

We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar.)

Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-06 16:12:25 +00:00
select ARCH_WANT_IPC_PARSE_VERSION
select ARCH_WANT_LD_ORPHAN_WARN
select BINFMT_FLAT_ARGVP_ENVP_ON_STACK
select BUILDTIME_TABLE_SORT if MMU
select COMMON_CLK if !(ARCH_RPC || ARCH_FOOTBRIDGE)
select CLONE_BACKWARDS
select CPU_PM if SUSPEND || CPU_IDLE
select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
select DMA_DECLARE_COHERENT
select DMA_GLOBAL_POOL if !MMU
select DMA_NONCOHERENT_MMAP if MMU
EDAC: Cleanup atomic_scrub mess So first of all, this atomic_scrub() function's naming is bad. It looks like an atomic_t helper. Change it to edac_atomic_scrub(). The bigger problem is that this function is arch-specific and every new arch which doesn't necessarily need that functionality still needs to define it, otherwise EDAC doesn't compile. So instead of doing that and including arch-specific headers, have each arch define an EDAC_ATOMIC_SCRUB symbol which can be used in edac_mc.c for ifdeffery. Much cleaner. And we already are doing this with another symbol - EDAC_SUPPORT. This is also much cleaner than having CONFIG_EDAC enumerate all the arches which need/have EDAC support and drivers. This way I can kill the useless edac.h header in tile too. Acked-by: Ralf Baechle <ralf@linux-mips.org> Acked-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Chris Metcalf <cmetcalf@ezchip.com> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Doug Thompson <dougthompson@xmission.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-edac@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: linuxppc-dev@lists.ozlabs.org Cc: "Maciej W. Rozycki" <macro@codesourcery.com> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Cc: Paul Mackerras <paulus@samba.org> Cc: "Steven J. Hill" <Steven.Hill@imgtec.com> Cc: x86@kernel.org Signed-off-by: Borislav Petkov <bp@suse.de>
2015-05-21 17:59:31 +00:00
select EDAC_SUPPORT
select EDAC_ATOMIC_SCRUB
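A hedged sketch of the ifdeffery the commit above describes for edac_mc.c; the real scrub body is architecture-provided and omitted here:

    #ifdef CONFIG_EDAC_ATOMIC_SCRUB
    void edac_atomic_scrub(void *va, u32 size);     /* provided by the arch */
    #else
    static inline void edac_atomic_scrub(void *va, u32 size)
    {
            /* arch has no atomic scrub primitive: nothing to do */
    }
    #endif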
select GENERIC_ALLOCATOR
select GENERIC_ARCH_TOPOLOGY if ARM_CPU_TOPOLOGY
select GENERIC_ATOMIC64 if CPU_V7M || CPU_V6 || !CPU_32v6K || !AEABI
select GENERIC_CLOCKEVENTS_BROADCAST if SMP
select GENERIC_IRQ_IPI if SMP
select GENERIC_CPU_AUTOPROBE
select GENERIC_CPU_DEVICES
select GENERIC_EARLY_IOREMAP
select GENERIC_IDLE_POLL_SETUP
select GENERIC_IRQ_MULTI_HANDLER
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_SHOW_LEVEL
select GENERIC_LIB_DEVMEM_IS_ALLOWED
select GENERIC_PCI_IOMAP
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD
select HARDIRQS_SW_RESEND
select HAS_IOPORT
select HAVE_ARCH_AUDITSYSCALL if AEABI && !OABI_COMPAT
select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KFENCE if MMU && !XIP_KERNEL
select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
arm: mm: support ARCH_MMAP_RND_BITS arm: arch_mmap_rnd() uses a hard-code value of 8 to generate the random offset for the mmap base address. This value represents a compromise between increased ASLR effectiveness and avoiding address-space fragmentation. Replace it with a Kconfig option, which is sensibly bounded, so that platform developers may choose where to place this compromise. Keep 8 as the minimum acceptable value. [arnd@arndb.de: ARM: avoid ARCH_MMAP_RND_BITS for NOMMU] Signed-off-by: Daniel Cashman <dcashman@google.com> Cc: Russell King <linux@arm.linux.org.uk> Acked-by: Kees Cook <keescook@chromium.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Don Zickus <dzickus@redhat.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Heinrich Schuchardt <xypron.glpk@gmx.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: David Rientjes <rientjes@google.com> Cc: Mark Salyzyn <salyzyn@android.com> Cc: Jeff Vander Stoep <jeffv@google.com> Cc: Nick Kralevich <nnk@google.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Hector Marco-Gisbert <hecmargi@upv.es> Cc: Borislav Petkov <bp@suse.de> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-14 23:19:57 +00:00
select HAVE_ARCH_MMAP_RND_BITS if MMU
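A hedged sketch of an arch_mmap_rnd() built on this option, following the commit's description; mmap_rnd_bits is assumed to be the runtime value derived from CONFIG_ARCH_MMAP_RND_BITS, and the real arm implementation may differ in detail:

    unsigned long arch_mmap_rnd(void)
    {
            unsigned long rnd;

            /* keep only mmap_rnd_bits of randomness ... */
            rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);

            /* ... and apply it at page granularity to the mmap base */
            return rnd << PAGE_SHIFT;
    }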
select HAVE_ARCH_PFN_VALID
select HAVE_ARCH_SECCOMP
select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
select HAVE_ARCH_STACKLEAK
select HAVE_ARCH_THREAD_STRUCT_WHITELIST
select HAVE_ARCH_TRACEHOOK
mm: drop redundant HAVE_ARCH_TRANSPARENT_HUGEPAGE HAVE_ARCH_TRANSPARENT_HUGEPAGE has duplicate definitions on platforms that subscribe it. Drop these reduntant definitions and instead just select it on applicable platforms. Link: https://lkml.kernel.org/r/1617259448-22529-7-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc] Cc: Russell King <linux@armlinux.org.uk> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Palmer Dabbelt <palmerdabbelt@google.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Rich Felker <dalias@libc.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-05-05 01:38:29 +00:00
select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARM_LPAE
select HAVE_ARM_SMCCC if CPU_V7
select HAVE_EBPF_JIT if !CPU_ENDIAN_BE32
select HAVE_CONTEXT_TRACKING_USER
select HAVE_C_RECORDMCOUNT
ftrace: Have architectures opt-in for mcount build time sorting First S390 complained that the sorting of the mcount sections at build time caused the kernel to crash on their architecture. Now PowerPC is complaining about it too. And also ARM64 appears to be having issues. It may be necessary to also update the relocation table for the values in the mcount table. Not only do we have to sort the table, but also update the relocations that may be applied to the items in the table. If the system is not relocatable, then it is fine to sort, but if it is, some architectures may have issues (although x86 does not as it shifts all addresses the same). Add a HAVE_BUILDTIME_MCOUNT_SORT that an architecture can set to say it is safe to do the sorting at build time. Also update the config to compile in build time sorting in the sorttable code in scripts/ to depend on CONFIG_BUILDTIME_MCOUNT_SORT. Link: https://lore.kernel.org/all/944D10DA-8200-4BA9-8D0A-3BED9AA99F82@linux.ibm.com/ Link: https://lkml.kernel.org/r/20220127153821.3bc1ac6e@gandalf.local.home Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Yinan Liu <yinan@linux.alibaba.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Kees Cook <keescook@chromium.org> Reported-by: Sachin Sant <sachinp@linux.ibm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> [arm64] Tested-by: Sachin Sant <sachinp@linux.ibm.com> Fixes: 72b3942a173c ("scripts: ftrace - move the sort-processing in ftrace_init") Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-01-25 14:19:10 +00:00
select HAVE_BUILDTIME_MCOUNT_SORT
select HAVE_DEBUG_KMEMLEAK if !XIP_KERNEL
select HAVE_DMA_CONTIGUOUS if MMU
select HAVE_DYNAMIC_FTRACE if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
exit_thread: remove empty bodies Define HAVE_EXIT_THREAD for archs which want to do something in exit_thread. For others, let's define exit_thread as an empty inline. This is a cleanup before we change the prototype of exit_thread to accept a task parameter. [akpm@linux-foundation.org: fix mips] Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: "David S. Miller" <davem@davemloft.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Aurelien Jacquiot <a-jacquiot@ti.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chen Liqin <liqin.linux@gmail.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Chris Zankel <chris@zankel.net> Cc: David Howells <dhowells@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Haavard Skinnemoen <hskinnemoen@gmail.com> Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: James Hogan <james.hogan@imgtec.com> Cc: Jeff Dike <jdike@addtoit.com> Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Jonas Bonn <jonas@southpole.se> Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com> Cc: Lennox Wu <lennox.wu@gmail.com> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mikael Starvik <starvik@axis.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Rich Felker <dalias@libc.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Russell King <linux@arm.linux.org.uk> Cc: Steven Miao <realmz6@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-21 00:00:16 +00:00
select HAVE_EXIT_THREAD
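A hedged sketch of the pattern the commit above describes: only architectures that select HAVE_EXIT_THREAD provide a real exit_thread(), everyone else gets an empty inline (at the time of that commit exit_thread() still took no argument):

    #ifdef CONFIG_HAVE_EXIT_THREAD
    void exit_thread(void);                 /* arch does real teardown work */
    #else
    static inline void exit_thread(void)
    {
            /* nothing to clean up on this architecture */
    }
    #endif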
select HAVE_GUP_FAST if ARM_LPAE
select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
select HAVE_FUNCTION_ERROR_INJECTION
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER if !XIP_KERNEL
GCC plugin infrastructure This patch allows to build the whole kernel with GCC plugins. It was ported from grsecurity/PaX. The infrastructure supports building out-of-tree modules and building in a separate directory. Cross-compilation is supported too. Currently the x86, arm, arm64 and uml architectures enable plugins. The directory of the gcc plugins is scripts/gcc-plugins. You can use a file or a directory there. The plugins compile with these options: * -fno-rtti: gcc is compiled with this option so the plugins must use it too * -fno-exceptions: this is inherited from gcc too * -fasynchronous-unwind-tables: this is inherited from gcc too * -ggdb: it is useful for debugging a plugin (better backtrace on internal errors) * -Wno-narrowing: to suppress warnings from gcc headers (ipa-utils.h) * -Wno-unused-variable: to suppress warnings from gcc headers (gcc_version variable, plugin-version.h) The infrastructure introduces a new Makefile target called gcc-plugins. It supports all gcc versions from 4.5 to 6.0. The scripts/gcc-plugin.sh script chooses the proper host compiler (gcc-4.7 can be built by either gcc or g++). This script also checks the availability of the included headers in scripts/gcc-plugins/gcc-common.h. The gcc-common.h header contains frequently included headers for GCC plugins and it has a compatibility layer for the supported gcc versions. The gcc-generate-*-pass.h headers automatically generate the registration structures for GIMPLE, SIMPLE_IPA, IPA and RTL passes. Note that 'make clean' keeps the *.so files (only the distclean or mrproper targets clean all) because they are needed for out-of-tree modules. Based on work created by the PaX Team. Signed-off-by: Emese Revfy <re.emese@gmail.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Michal Marek <mmarek@suse.com>
2016-05-23 22:09:38 +00:00
select HAVE_GCC_PLUGINS
select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
select HAVE_IRQ_TIME_ACCOUNTING
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZ4
select HAVE_KERNEL_LZMA
select HAVE_KERNEL_LZO
select HAVE_KERNEL_XZ
select HAVE_KPROBES if !XIP_KERNEL && !CPU_ENDIAN_BE32 && !CPU_V7M
select HAVE_KRETPROBES if HAVE_KPROBES
select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if (LD_VERSION >= 23600 || LD_IS_LLD)
select HAVE_MOD_ARCH_SPECIFIC
printk/nmi: generic solution for safe printk in NMI printk() takes some locks and could not be used a safe way in NMI context. The chance of a deadlock is real especially when printing stacks from all CPUs. This particular problem has been addressed on x86 by the commit a9edc8809328 ("x86/nmi: Perform a safe NMI stack trace on all CPUs"). The patchset brings two big advantages. First, it makes the NMI backtraces safe on all architectures for free. Second, it makes all NMI messages almost safe on all architectures (the temporary buffer is limited. We still should keep the number of messages in NMI context at minimum). Note that there already are several messages printed in NMI context: WARN_ON(in_nmi()), BUG_ON(in_nmi()), anything being printed out from MCE handlers. These are not easy to avoid. This patch reuses most of the code and makes it generic. It is useful for all messages and architectures that support NMI. The alternative printk_func is set when entering and is reseted when leaving NMI context. It queues IRQ work to copy the messages into the main ring buffer in a safe context. __printk_nmi_flush() copies all available messages and reset the buffer. Then we could use a simple cmpxchg operations to get synchronized with writers. There is also used a spinlock to get synchronized with other flushers. We do not longer use seq_buf because it depends on external lock. It would be hard to make all supported operations safe for a lockless use. It would be confusing and error prone to make only some operations safe. The code is put into separate printk/nmi.c as suggested by Steven Rostedt. It needs a per-CPU buffer and is compiled only on architectures that call nmi_enter(). This is achieved by the new HAVE_NMI Kconfig flag. The are MN10300 and Xtensa architectures. We need to clean up NMI handling there first. Let's do it separately. The patch is heavily based on the draft from Peter Zijlstra, see https://lkml.org/lkml/2015/6/10/327 [arnd@arndb.de: printk-nmi: use %zu format string for size_t] [akpm@linux-foundation.org: min_t->min - all types are size_t here] Signed-off-by: Petr Mladek <pmladek@suse.com> Suggested-by: Peter Zijlstra <peterz@infradead.org> Suggested-by: Steven Rostedt <rostedt@goodmis.org> Cc: Jan Kara <jack@suse.cz> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> [arm part] Cc: Daniel Thompson <daniel.thompson@linaro.org> Cc: Jiri Kosina <jkosina@suse.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: David Miller <davem@davemloft.net> Cc: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-21 00:00:33 +00:00
select HAVE_NMI
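A hedged sketch of the enter/exit switch the commit above describes (the per-CPU printk_func is redirected while in NMI context, and an irq_work later flushes the per-CPU buffer); simplified, not the exact kernel/printk/nmi.c code:

    void printk_nmi_enter(void)
    {
            /* route printk() output into the per-CPU NMI buffer */
            this_cpu_write(printk_func, vprintk_nmi);
    }

    void printk_nmi_exit(void)
    {
            /* back to the normal, lock-taking printk path */
            this_cpu_write(printk_func, vprintk_default);
    }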
select HAVE_OPTPROBES if !THUMB2_KERNEL
select HAVE_PAGE_SIZE_4KB
select HAVE_PCI if MMU
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select MMU_GATHER_RCU_TABLE_FREE if SMP && ARM_LPAE
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_RSEQ
select HAVE_STACKPROTECTOR
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_UID16
select HAVE_VIRT_CPU_ACCOUNTING_GEN
select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
select IRQ_FORCED_THREADING
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_REL
select NEED_DMA_MAP_STATE
select OF_EARLY_FLATTREE if OF
select OLD_SIGACTION
select OLD_SIGSUSPEND3
select PCI_DOMAINS_GENERIC if PCI
select PCI_SYSCALL if PCI
select PERF_USE_VMALLOC
select RTC_LIB
select SPARSE_IRQ if !(ARCH_FOOTBRIDGE || ARCH_RPC)
select SYS_SUPPORTS_APM_EMULATION
select THREAD_INFO_IN_TASK
select TIMER_OF if OF
select HAVE_ARCH_VMAP_STACK if MMU && ARM_HAS_GROUP_RELOCS
select TRACE_IRQFLAGS_SUPPORT if !CPU_V7M
select USE_OF if !(ARCH_FOOTBRIDGE || ARCH_RPC || ARCH_SA1100)
# Above selects are sorted alphabetically; please add new ones
# according to that. Thanks.
help
The ARM series is a line of low-power-consumption RISC chip designs
licensed by ARM Ltd and targeted at embedded applications and
handhelds such as the Compaq IPAQ. ARM-based PCs are no longer
manufactured, but legacy ARM-based PC hardware remains popular in
Europe. There is an ARM Linux project with a web page at
<http://www.arm.linux.org.uk/>.
config ARM_HAS_GROUP_RELOCS
def_bool y
depends on !LD_IS_LLD || LLD_VERSION >= 140000
depends on !COMPILE_TEST
help
Whether or not to use R_ARM_ALU_PC_Gn or R_ARM_LDR_PC_Gn group
relocations, which have been around for a long time, but were not
supported in LLD until version 14. The combined range is -/+ 256 MiB,
which is usually sufficient, but not for allyesconfig, so we disable
this feature when doing compile testing.
config ARM_DMA_USE_IOMMU
bool
select NEED_SG_DMA_LENGTH
if ARM_DMA_USE_IOMMU
config ARM_DMA_IOMMU_ALIGNMENT
int "Maximum PAGE_SIZE order of alignment for DMA IOMMU buffers"
range 4 9
default 8
help
DMA mapping framework by default aligns all buffers to the smallest
PAGE_SIZE order which is greater than or equal to the requested buffer
size. This works well for buffers up to a few hundred kilobytes, but
for larger buffers it is just a waste of address space. Drivers which
have a relatively small addressing window (like 64 MiB) might run out
of virtual space with just a few allocations.
With this parameter you can specify the maximum PAGE_SIZE order for
DMA IOMMU buffers. Larger buffers will be aligned only to this
specified order. The order is expressed as a power of two multiplied
by the PAGE_SIZE.
endif
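As a rough illustration of the arithmetic behind ARM_DMA_IOMMU_ALIGNMENT, here is a minimal standalone sketch (assuming 4 KiB pages; the names are illustrative, not kernel API). With the default order of 8, the worst-case alignment of a large buffer comes to 1 MiB of IOMMU address space.

	#include <stdio.h>

	int main(void)
	{
		unsigned long page_size = 4096;	/* assumed 4 KiB PAGE_SIZE */
		unsigned int order;

		/* A buffer is aligned to 2^min(order_needed_for_size, cap) pages,
		 * so the configured cap bounds how much IOMMU address space can
		 * be lost to alignment for large buffers. */
		for (order = 4; order <= 9; order++)
			printf("order %u -> alignment %lu KiB\n",
			       order, (page_size << order) / 1024);
		return 0;
	}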
config SYS_SUPPORTS_APM_EMULATION
bool
config HAVE_TCM
bool
select GENERIC_ALLOCATOR
config HAVE_PROC_CPU
bool
config NO_IOPORT_MAP
bool
config SBUS
bool
config STACKTRACE_SUPPORT
bool
default y
config LOCKDEP_SUPPORT
bool
default y
config ARCH_HAS_ILOG2_U32
bool
config ARCH_HAS_ILOG2_U64
bool
config ARCH_HAS_BANDGAP
bool
config FIX_EARLYCON_MEM
def_bool y if MMU
config GENERIC_HWEIGHT
bool
default y
config GENERIC_CALIBRATE_DELAY
bool
default y
config ARCH_MAY_HAVE_PC_FDC
bool
config ARCH_SUPPORTS_UPROBES
def_bool y
config GENERIC_ISA_DMA
bool
config FIQ
bool
config ARCH_MTD_XIP
bool
config ARM_PATCH_PHYS_VIRT
bool "Patch physical to virtual translations at runtime" if !ARCH_MULTIPLATFORM
default y
depends on MMU
help
Patch phys-to-virt and virt-to-phys translation functions at
boot and module load time according to the position of the
kernel in system memory.
This can only be used with non-XIP MMU kernels where the base
of physical memory is at a 2 MiB boundary.
Only disable this option if you know that you do not require
this feature (eg, building a kernel for a single machine) and
you need to shrink the kernel to the minimal size.
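As a rough sketch of the translation that gets patched, assuming example values for PAGE_OFFSET and PHYS_OFFSET (the real offset is discovered at boot, not hard-coded as here):

	#include <stdio.h>

	int main(void)
	{
		unsigned long page_offset = 0xC0000000UL; /* example linear-map base */
		unsigned long phys_offset = 0x80000000UL; /* example start of DRAM  */
		unsigned long virt = 0xC0100000UL;

		/* The patched helpers reduce to adding/subtracting one constant:
		 *   phys = virt + (PHYS_OFFSET - PAGE_OFFSET)
		 *   virt = phys - (PHYS_OFFSET - PAGE_OFFSET)                    */
		unsigned long phys = virt + (phys_offset - page_offset);

		printf("virt 0x%08lx -> phys 0x%08lx\n", virt, phys);
		return 0;
	}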
config NEED_MACH_IO_H
bool
help
Select this when mach/io.h is required to provide special
definitions for this platform. The need for mach/io.h should
be avoided when possible.
config NEED_MACH_MEMORY_H
bool
help
Select this when mach/memory.h is required to provide special
definitions for this platform. The need for mach/memory.h should
be avoided when possible.
config PHYS_OFFSET
hex "Physical address of main memory" if MMU
depends on !ARM_PATCH_PHYS_VIRT || !AUTO_ZRELADDR
default DRAM_BASE if !MMU
default 0x00000000 if ARCH_FOOTBRIDGE
default 0x10000000 if ARCH_OMAP1 || ARCH_RPC
default 0xa0000000 if ARCH_PXA
default 0xc0000000 if ARCH_EP93XX || ARCH_SA1100
default 0
help
Please provide the physical address corresponding to the
location of main memory in your system.
config GENERIC_BUG
def_bool y
depends on BUG
config PGTABLE_LEVELS
int
default 3 if ARM_LPAE
default 2
menu "System Type"
config MMU
bool "MMU-based Paged Memory Management Support"
default y
help
Select if you want MMU-based virtualised addressing space
support by paged memory management. If unsure, say 'Y'.
config ARM_SINGLE_ARMV7M
def_bool !MMU
select ARM_NVIC
select CPU_V7M
select NO_IOPORT_MAP
config ARCH_MMAP_RND_BITS_MIN
default 8
config ARCH_MMAP_RND_BITS_MAX
default 14 if PAGE_OFFSET=0x40000000
default 15 if PAGE_OFFSET=0x80000000
default 16
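For a sense of scale, a small sketch of what these bit counts mean in practice, assuming 4 KiB pages (the chosen number of random bits is applied as a page-granular offset to the mmap base, so the range grows as roughly 2^bits pages):

	#include <stdio.h>

	int main(void)
	{
		unsigned long page_size = 4096;			/* assumed 4 KiB pages */
		unsigned int bits[] = { 8, 14, 15, 16 };	/* the MIN and MAX defaults above */
		unsigned int i;

		/* The mmap base may be shifted by up to roughly 2^bits pages. */
		for (i = 0; i < sizeof(bits) / sizeof(bits[0]); i++)
			printf("%2u bits -> up to ~%lu MiB of randomization\n",
			       bits[i], (page_size << bits[i]) / (1024 * 1024));
		return 0;
	}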
config ARCH_MULTIPLATFORM
bool "Require kernel to be portable to multiple machines" if EXPERT
depends on MMU && !(ARCH_FOOTBRIDGE || ARCH_RPC || ARCH_SA1100)
default y
help
In general, all Arm machines can be supported in a single
kernel image, covering either Armv4/v5 or Armv6/v7.
However, some configuration options require hardcoding machine
specific physical addresses or enable errata workarounds that may
break other machines.
Selecting N here allows using those options, including
DEBUG_UNCOMPRESS, XIP_KERNEL and ZBOOT_ROM. If unsure, say Y.
source "arch/arm/Kconfig.platforms"
#
# This is sorted alphabetically by mach-* pathname. However, plat-*
# Kconfigs may be included either alphabetically (according to the
# plat- suffix) or along side the corresponding mach-* source.
#
source "arch/arm/mach-actions/Kconfig"
source "arch/arm/mach-alpine/Kconfig"
source "arch/arm/mach-artpec/Kconfig"
source "arch/arm/mach-aspeed/Kconfig"
source "arch/arm/mach-at91/Kconfig"
source "arch/arm/mach-axxia/Kconfig"
source "arch/arm/mach-bcm/Kconfig"
source "arch/arm/mach-berlin/Kconfig"
source "arch/arm/mach-clps711x/Kconfig"
source "arch/arm/mach-davinci/Kconfig"
source "arch/arm/mach-digicolor/Kconfig"
source "arch/arm/mach-dove/Kconfig"
source "arch/arm/mach-ep93xx/Kconfig"
source "arch/arm/mach-exynos/Kconfig"
source "arch/arm/mach-footbridge/Kconfig"
source "arch/arm/mach-gemini/Kconfig"
source "arch/arm/mach-highbank/Kconfig"
source "arch/arm/mach-hisi/Kconfig"
source "arch/arm/mach-hpe/Kconfig"
source "arch/arm/mach-imx/Kconfig"
source "arch/arm/mach-ixp4xx/Kconfig"
source "arch/arm/mach-keystone/Kconfig"
source "arch/arm/mach-lpc32xx/Kconfig"
source "arch/arm/mach-mediatek/Kconfig"
source "arch/arm/mach-meson/Kconfig"
source "arch/arm/mach-milbeaut/Kconfig"
source "arch/arm/mach-mmp/Kconfig"
source "arch/arm/mach-mstar/Kconfig"
source "arch/arm/mach-mv78xx0/Kconfig"
source "arch/arm/mach-mvebu/Kconfig"
source "arch/arm/mach-mxs/Kconfig"
source "arch/arm/mach-nomadik/Kconfig"
source "arch/arm/mach-npcm/Kconfig"
source "arch/arm/mach-omap1/Kconfig"
source "arch/arm/mach-omap2/Kconfig"
source "arch/arm/mach-orion5x/Kconfig"
source "arch/arm/mach-pxa/Kconfig"
source "arch/arm/mach-qcom/Kconfig"
source "arch/arm/mach-realtek/Kconfig"
source "arch/arm/mach-rpc/Kconfig"
source "arch/arm/mach-rockchip/Kconfig"
source "arch/arm/mach-s3c/Kconfig"
source "arch/arm/mach-s5pv210/Kconfig"
source "arch/arm/mach-sa1100/Kconfig"
source "arch/arm/mach-shmobile/Kconfig"
source "arch/arm/mach-socfpga/Kconfig"
source "arch/arm/mach-spear/Kconfig"
source "arch/arm/mach-sti/Kconfig"
source "arch/arm/mach-stm32/Kconfig"
source "arch/arm/mach-sunxi/Kconfig"
source "arch/arm/mach-tegra/Kconfig"
source "arch/arm/mach-ux500/Kconfig"
source "arch/arm/mach-versatile/Kconfig"
source "arch/arm/mach-vt8500/Kconfig"
source "arch/arm/mach-zynq/Kconfig"
# ARMv7-M architecture
config ARCH_LPC18XX
bool "NXP LPC18xx/LPC43xx"
depends on ARM_SINGLE_ARMV7M
select ARCH_HAS_RESET_CONTROLLER
select ARM_AMBA
select CLKSRC_LPC32XX
select PINCTRL
help
Support for NXP's LPC18xx Cortex-M3 and LPC43xx Cortex-M4
high performance microcontrollers.
config ARCH_MPS2
bool "ARM MPS2 platform"
depends on ARM_SINGLE_ARMV7M
select ARM_AMBA
select CLKSRC_MPS2
help
Support for Cortex-M Prototyping System (or V2M-MPS2) which comes
with a range of available cores like Cortex-M3/M4/M7.
Please note that, depending on which Application Note is used, the
memory map for the platform may vary, so adjustment of the RAM base
might be needed.
# Definitions to make life easier
config ARCH_ACORN
bool
config PLAT_ORION
bool
select CLKSRC_MMIO
select GENERIC_IRQ_CHIP
select IRQ_DOMAIN
config PLAT_ORION_LEGACY
bool
select PLAT_ORION
config PLAT_VERSATILE
bool
source "arch/arm/mm/Kconfig"
config IWMMXT
bool "Enable iWMMXt support"
depends on CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK
default y if PXA27x || PXA3xx || ARCH_MMP
help
Enable support for iWMMXt context switching at run time if
running on a CPU that supports it.
if !MMU
source "arch/arm/Kconfig-nommu"
endif
config PJ4B_ERRATA_4742
bool "PJ4B Errata 4742: IDLE Wake Up Commands can Cause the CPU Core to Cease Operation"
depends on CPU_PJ4B && MACH_ARMADA_370
default y
help
When coming out of either a Wait for Interrupt (WFI) or a Wait for
Event (WFE) IDLE state, a specific timing sensitivity exists between
the retiring WFI/WFE instructions and the newly issued subsequent
instructions. This sensitivity can result in a CPU hang scenario.
Workaround:
The software must insert either a Data Synchronization Barrier (DSB)
or Data Memory Barrier (DMB) command immediately after the WFI/WFE
instruction.
config ARM_ERRATA_326103
bool "ARM errata: FSR write bit incorrect on a SWP to read-only memory"
depends on CPU_V6
help
Executing a SWP instruction to read-only memory does not set bit 11
of the FSR on the ARM 1136 prior to r1p0. This causes the kernel to
treat the access as a read, preventing a COW from occurring and
causing the faulting task to livelock.
config ARM_ERRATA_411920
bool "ARM errata: Invalidation of the Instruction Cache operation can fail"
depends on CPU_V6 || CPU_V6K
help
Invalidation of the Instruction Cache operation can
fail. This erratum is present in 1136 (before r1p4), 1156 and 1176.
It does not affect the MPCore. This option enables the ARM Ltd.
recommended workaround.
config ARM_ERRATA_430973
bool "ARM errata: Stale prediction on replaced interworking branch"
depends on CPU_V7
help
This option enables the workaround for the 430973 Cortex-A8
r1p* erratum. If a code sequence containing an ARM/Thumb
interworking branch is replaced with another code sequence at the
same virtual address, whether due to self-modifying code or virtual
to physical address re-mapping, Cortex-A8 does not recover from the
stale interworking branch prediction. This results in Cortex-A8
executing the new code sequence in the incorrect ARM or Thumb state.
The workaround enables the BTB/BTAC operations by setting ACTLR.IBE
and also flushes the branch target cache at every context switch.
Note that setting specific bits in the ACTLR register may not be
available in non-secure mode.
config ARM_ERRATA_458693
bool "ARM errata: Processor deadlock when a false hazard is created"
depends on CPU_V7
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 458693 Cortex-A8 (r2p0)
erratum. For very specific sequences of memory operations, it is
possible for a hazard condition intended for a cache line to instead
be incorrectly associated with a different cache line. This false
hazard might then cause a processor deadlock. The workaround enables
the L1 caching of the NEON accesses and disables the PLD instruction
in the ACTLR register. Note that setting specific bits in the ACTLR
register may not be available in non-secure mode and thus is not
available on a multiplatform kernel. This should be applied by the
bootloader instead.
config ARM_ERRATA_460075
bool "ARM errata: Data written to the L2 cache can be overwritten with stale data"
depends on CPU_V7
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 460075 Cortex-A8 (r2p0)
erratum. Any asynchronous access to the L2 cache may encounter a
situation in which recent store transactions to the L2 cache are lost
and overwritten with stale memory contents from external memory. The
workaround disables the write-allocate mode for the L2 cache via the
ACTLR register. Note that setting specific bits in the ACTLR register
may not be available in non-secure mode and thus is not available on
a multiplatform kernel. This should be applied by the bootloader
instead.
config ARM_ERRATA_742230
bool "ARM errata: DMB operation may be faulty"
depends on CPU_V7 && SMP
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 742230 Cortex-A9
(r1p0..r2p2) erratum. Under rare circumstances, a DMB instruction
between two write operations may not ensure the correct visibility
ordering of the two writes. This workaround sets a specific bit in
the diagnostic register of the Cortex-A9 which causes the DMB
instruction to behave as a DSB, ensuring the correct behaviour of
the two writes. Note that setting specific bits in the diagnostics
register may not be available in non-secure mode and thus is not
available on a multiplatform kernel. This should be applied by the
bootloader instead.
config ARM_ERRATA_742231
bool "ARM errata: Incorrect hazard handling in the SCU may lead to data corruption"
depends on CPU_V7 && SMP
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 742231 Cortex-A9
(r2p0..r2p2) erratum. Under certain conditions, specific to the
Cortex-A9 MPCore micro-architecture, two CPUs working in SMP mode,
accessing some data located in the same cache line, may get corrupted
data due to bad handling of the address hazard when the line gets
replaced from one of the CPUs at the same time as another CPU is
accessing it. This workaround sets specific bits in the diagnostic
register of the Cortex-A9 which reduces the linefill issuing
capabilities of the processor. Note that setting specific bits in the
diagnostics register may not be available in non-secure mode and thus
is not available on a multiplatform kernel. This should be applied by
the bootloader instead.
config ARM_ERRATA_643719
bool "ARM errata: LoUIS bit field in CLIDR register is incorrect"
depends on CPU_V7 && SMP
default y
help
This option enables the workaround for the 643719 Cortex-A9 (prior to
r1p0) erratum. On affected cores the LoUIS bit field of the CLIDR
register returns zero when it should return one. The workaround
corrects this value, ensuring cache maintenance operations which use
it behave as intended and avoiding data corruption.
config ARM_ERRATA_720789
bool "ARM errata: TLBIASIDIS and TLBIMVAIS operations can broadcast a faulty ASID"
depends on CPU_V7
help
This option enables the workaround for the 720789 Cortex-A9 (prior to
r2p0) erratum. A faulty ASID can be sent to the other CPUs for the
broadcasted CP15 TLB maintenance operations TLBIASIDIS and TLBIMVAIS.
As a consequence of this erratum, some TLB entries which should be
invalidated are not, resulting in an incoherency in the system page
tables. The workaround changes the TLB flushing routines to invalidate
entries regardless of the ASID.
config ARM_ERRATA_743622
bool "ARM errata: Faulty hazard checking in the Store Buffer may lead to data corruption"
depends on CPU_V7
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 743622 Cortex-A9
(r2p*) erratum. Under very rare conditions, a faulty
optimisation in the Cortex-A9 Store Buffer may lead to data
corruption. This workaround sets a specific bit in the diagnostic
register of the Cortex-A9 which disables the Store Buffer
optimisation, preventing the defect from occurring. This has no
visible impact on the overall performance or power consumption of the
processor. Note that setting specific bits in the diagnostics register
may not be available in non-secure mode and thus is not available on a
multiplatform kernel. This should be applied by the bootloader instead.
config ARM_ERRATA_751472
bool "ARM errata: Interrupted ICIALLUIS may prevent completion of broadcasted operation"
depends on CPU_V7
depends on !ARCH_MULTIPLATFORM
help
This option enables the workaround for the 751472 Cortex-A9 (prior
to r3p0) erratum. An interrupted ICIALLUIS operation may prevent the
completion of a following broadcasted operation if the second
operation is received by a CPU before the ICIALLUIS has completed,
potentially leading to corrupted entries in the cache or TLB.
Note that setting specific bits in the diagnostics register may
not be available in non-secure mode and thus is not available on
a multiplatform kernel. This should be applied by the bootloader
instead.
config ARM_ERRATA_754322
bool "ARM errata: possible faulty MMU translations following an ASID switch"
depends on CPU_V7
help
This option enables the workaround for the 754322 Cortex-A9 (r2p*,
r3p*) erratum. A speculative memory access may cause a page table walk
which starts prior to an ASID switch but completes afterwards. This
can populate the micro-TLB with a stale entry which may be hit with
the new ASID. This workaround places two dsb instructions in the mm
switching code so that no page table walks can cross the ASID switch.
config ARM_ERRATA_754327
bool "ARM errata: no automatic Store Buffer drain"
depends on CPU_V7 && SMP
help
This option enables the workaround for the 754327 Cortex-A9 (prior to
r2p0) erratum. The Store Buffer does not have any automatic draining
mechanism and therefore a livelock may occur if an external agent
continuously polls a memory location waiting to observe an update.
This workaround defines cpu_relax() as smp_mb(), preventing correctly
written polling loops from denying visibility of updates to memory.
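To make the last sentence concrete, here is a sketch of the kind of polling loop it refers to (ordinary userspace C, not kernel code; in kernel code the spin body would be cpu_relax(), which this option defines as smp_mb()):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_int ready;

	static void *writer(void *arg)
	{
		atomic_store(&ready, 1);	/* the update the poller waits for */
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, writer, NULL);
		while (!atomic_load(&ready))
			;	/* kernel code would call cpu_relax() here */
		pthread_join(t, NULL);
		puts("update observed");
		return 0;
	}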
config ARM_ERRATA_364296
bool "ARM errata: Possible cache data corruption with hit-under-miss enabled"
depends on CPU_V6
help
This option enables the workaround for the 364296 ARM1136
r0p2 erratum (possible cache data corruption with
hit-under-miss enabled). It sets the undocumented bit 31 in
the auxiliary control register and the FI bit in the control
register, thus disabling hit-under-miss without putting the
processor into full low interrupt latency mode. ARM11MPCore
is not affected.
config ARM_ERRATA_764369
bool "ARM errata: Data cache line maintenance operation by MVA may not succeed"
depends on CPU_V7 && SMP
help
This option enables the workaround for erratum 764369
affecting Cortex-A9 MPCore with two or more processors (all
current revisions). Under certain timing circumstances, a data
cache line maintenance operation by MVA targeting an Inner
Shareable memory region may fail to proceed up to either the
Point of Coherency or to the Point of Unification of the
system. This workaround adds a DSB instruction before the
relevant cache maintenance functions and sets a specific bit
in the diagnostic control register of the SCU.
config ARM_ERRATA_764319
bool "ARM errata: Read to DBGPRSR and DBGOSLSR may generate Undefined instruction"
depends on CPU_V7
help
This option enables the workaround for the 764319 Cortex-A9 erratum.
CP14 read accesses to the DBGPRSR and DBGOSLSR registers generate an
unexpected Undefined Instruction exception when the DBGSWENABLE
external pin is set to 0, even when the CP14 accesses are performed
from a privileged mode. This workaround catches the exception in a
way that the kernel does not stop execution.
config ARM_ERRATA_775420
bool "ARM errata: A data cache maintenance operation which aborts, might lead to deadlock"
depends on CPU_V7
help
This option enables the workaround for the 775420 Cortex-A9 (r2p2,
r2p6, r2p8, r2p10, r3p0) erratum. In case a data cache maintenance
operation aborts with MMU exception, it might cause the processor
to deadlock. This workaround puts DSB before executing ISB if
an abort may occur on cache maintenance.
config ARM_ERRATA_798181
bool "ARM errata: TLBI/DSB failure on Cortex-A15"
depends on CPU_V7 && SMP
help
On Cortex-A15 (r0p0..r3p2) the TLBI*IS/DSB operations are not
adequately shooting down all use of the old entries. This
option enables the Linux kernel workaround for this erratum
which sends an IPI to the CPUs that are running the same ASID
as the one being invalidated.
config ARM_ERRATA_773022
bool "ARM errata: incorrect instructions may be executed from loop buffer"
depends on CPU_V7
help
This option enables the workaround for the 773022 Cortex-A15
(up to r0p4) erratum. In certain rare sequences of code, the
loop buffer may deliver incorrect instructions. This
workaround disables the loop buffer to avoid the erratum.
config ARM_ERRATA_818325_852422
bool "ARM errata: A12: some seqs of opposed cond code instrs => deadlock or corruption"
depends on CPU_V7
help
This option enables the workaround for:
- Cortex-A12 818325: Execution of an UNPREDICTABLE STR or STM
instruction might deadlock. Fixed in r0p1.
- Cortex-A12 852422: Execution of a sequence of instructions might
lead to either a data corruption or a CPU deadlock. Not fixed in
any Cortex-A12 cores yet.
The workaround for both errata involves setting bit[12] of the
Feature Register. This bit disables an optimisation applied to a
sequence of 2 instructions that use opposing condition codes.
config ARM_ERRATA_821420
bool "ARM errata: A12: sequence of VMOV to core registers might lead to a dead lock"
depends on CPU_V7
help
This option enables the workaround for the 821420 Cortex-A12
(all revs) erratum. In very rare timing conditions, a sequence
of VMOV to Core registers instructions, for which the second
one is in the shadow of a branch or abort, can lead to a
deadlock when the VMOV instructions are issued out-of-order.
config ARM_ERRATA_825619
bool "ARM errata: A12: DMB NSHST/ISHST mixed ... might cause deadlock"
depends on CPU_V7
help
This option enables the workaround for the 825619 Cortex-A12
(all revs) erratum. Within rare timing constraints, executing a
DMB NSHST or DMB ISHST instruction followed by a mix of Cacheable
and Device/Strongly-Ordered loads and stores might cause a deadlock.
config ARM_ERRATA_857271
bool "ARM errata: A12: CPU might deadlock under some very rare internal conditions"
depends on CPU_V7
help
This option enables the workaround for the 857271 Cortex-A12
(all revs) erratum. Under very rare timing conditions, the CPU might
hang. The workaround is expected to have a < 1% performance impact.
config ARM_ERRATA_852421
bool "ARM errata: A17: DMB ST might fail to create order between stores"
depends on CPU_V7
help
This option enables the workaround for the 852421 Cortex-A17
(r1p0, r1p1, r1p2) erratum. Under very rare timing conditions,
execution of a DMB ST instruction might fail to properly order
stores from GroupA and stores from GroupB.
config ARM_ERRATA_852423
bool "ARM errata: A17: some seqs of opposed cond code instrs => deadlock or corruption"
depends on CPU_V7
help
This option enables the workaround for:
- Cortex-A17 852423: Execution of a sequence of instructions might
lead to either a data corruption or a CPU deadlock. Not fixed in
any Cortex-A17 cores yet.
This is identical to Cortex-A12 erratum 852422. It is a separate
config option from the A12 erratum due to the way errata are checked
for and handled.
config ARM_ERRATA_857272
bool "ARM errata: A17: CPU might deadlock under some very rare internal conditions"
depends on CPU_V7
help
This option enables the workaround for the 857272 Cortex-A17 erratum.
This erratum is not known to be fixed in any A17 revision.
This is identical to Cortex-A12 erratum 857271. It is a separate
config option from the A12 erratum due to the way errata are checked
for and handled.
endmenu
source "arch/arm/common/Kconfig"
menu "Bus support"
config ISA
bool
help
Find out whether you have ISA slots on your motherboard. ISA is the
name of a bus system, i.e. the way the CPU talks to the other stuff
inside your box. Other bus systems are PCI, EISA, MicroChannel
(MCA) or VESA. ISA is an older system, now being displaced by PCI;
newer boards don't support it. If you have ISA, say Y, otherwise N.
# Select ISA DMA interface
config ISA_DMA_API
bool
config ARM_ERRATA_814220
bool "ARM errata: Cache maintenance by set/way operations can execute out of order"
depends on CPU_V7
help
The v7 ARM states that all cache and branch predictor maintenance
operations that do not specify an address execute, relative to
each other, in program order.
However, because of this erratum, an L2 set/way cache maintenance
operation can overtake an L1 set/way cache maintenance operation.
This erratum only affects the Cortex-A7 and is present in r0p2, r0p3,
r0p4 and r0p5.
endmenu
menu "Kernel Features"
config HAVE_SMP
bool
help
This option should be selected by machines which have an SMP-
capable CPU.
The only effect of this option is to make the SMP-related
options available to the user for configuration.
config SMP
bool "Symmetric Multi-Processing"
depends on CPU_V6K || CPU_V7
depends on HAVE_SMP
depends on MMU || ARM_MPU
select IRQ_WORK
help
This enables support for systems with more than one CPU. If you have
a system with only one CPU, say N. If you have a system with more
than one CPU, say Y.
If you say N here, the kernel will run on uni- and multiprocessor
machines, but will use only one CPU of a multiprocessor machine. If
you say Y here, the kernel will run on many, but not all,
uniprocessor machines. On a uniprocessor machine, the kernel
will run faster if you say N here.
See also <file:Documentation/arch/x86/i386/IO-APIC.rst>,
<file:Documentation/admin-guide/lockup-watchdogs.rst> and the SMP-HOWTO available at
<http://tldp.org/HOWTO/SMP-HOWTO.html>.
If you don't know what to do here, say N.
config SMP_ON_UP
bool "Allow booting SMP kernel on uniprocessor systems"
depends on SMP && MMU
default y
help
SMP kernels contain instructions which fail on non-SMP processors.
Enabling this option allows the kernel to modify itself to make
these instructions safe. Disabling it allows about 1K of space
savings.
If you don't know what to do here, say Y.
config CURRENT_POINTER_IN_TPIDRURO
def_bool y
depends on CPU_32v6K && !CPU_V6
config IRQSTACKS
def_bool y
select HAVE_IRQ_EXIT_ON_IRQ_STACK
select HAVE_SOFTIRQ_ON_OWN_STACK
config ARM_CPU_TOPOLOGY
bool "Support cpu topology definition"
depends on SMP && CPU_V7
default y
help
Support ARM cpu topology definition. The MPIDR register defines
affinity between processors which is then used to describe the cpu
topology of an ARM System.
config SCHED_MC
bool "Multi-core scheduler support"
depends on ARM_CPU_TOPOLOGY
help
Multi-core scheduler support improves the CPU scheduler's decision
making when dealing with multi-core CPU chips at a cost of slightly
increased overhead in some places. If unsure say N here.
config SCHED_SMT
bool "SMT scheduler support"
depends on ARM_CPU_TOPOLOGY
help
Improves the CPU scheduler's decision making when dealing with
MultiThreading at a cost of slightly increased overhead in some
places. If unsure say N here.
config HAVE_ARM_SCU
bool
help
This option enables support for the ARM snoop control unit.
config HAVE_ARM_ARCH_TIMER
bool "Architected timer support"
depends on CPU_V7
select ARM_ARCH_TIMER
help
This option enables support for the ARM architected timer.
config HAVE_ARM_TWD
bool
help
This option enables support for the ARM timer and watchdog unit.
config MCPM
bool "Multi-Cluster Power Management"
depends on CPU_V7 && SMP
help
This option provides the common power management infrastructure
for (multi-)cluster based systems, such as big.LITTLE based
systems.
config MCPM_QUAD_CLUSTER
bool
depends on MCPM
help
To avoid wasting resources unnecessarily, MCPM only supports up
to 2 clusters by default.
Platforms with 3 or 4 clusters that use MCPM must select this
option to allow the additional clusters to be managed.
config BIG_LITTLE
bool "big.LITTLE support (Experimental)"
depends on CPU_V7 && SMP
select MCPM
help
This option enables support selections for the big.LITTLE
system architecture.
config BL_SWITCHER
bool "big.LITTLE switcher support"
depends on BIG_LITTLE && MCPM && HOTPLUG_CPU && ARM_GIC
select CPU_PM
help
The big.LITTLE "switcher" provides the core functionality to
transparently handle the transition between a cluster of
Cortex-A15s and a cluster of Cortex-A7s in a big.LITTLE system.
config BL_SWITCHER_DUMMY_IF
tristate "Simple big.LITTLE switcher user interface"
depends on BL_SWITCHER && DEBUG_KERNEL
help
This is a simple dummy character device interface for controlling
the big.LITTLE switcher core code. It is meant for
debugging purposes only.
choice
prompt "Memory split"
depends on MMU
default VMSPLIT_3G
help
Select the desired split between kernel and user memory.
If you are not absolutely sure what you are doing, leave this
option alone!
config VMSPLIT_3G
bool "3G/1G user/kernel split"
config VMSPLIT_3G_OPT
depends on !ARM_LPAE
bool "3G/1G user/kernel split (for full 1G low memory)"
config VMSPLIT_2G
bool "2G/2G user/kernel split"
config VMSPLIT_1G
bool "1G/3G user/kernel split"
endchoice
config PAGE_OFFSET
hex
default PHYS_OFFSET if !MMU
default 0x40000000 if VMSPLIT_1G
default 0x80000000 if VMSPLIT_2G
default 0xB0000000 if VMSPLIT_3G_OPT
default 0xC0000000
config KASAN_SHADOW_OFFSET
hex
depends on KASAN
default 0x1f000000 if PAGE_OFFSET=0x40000000
default 0x5f000000 if PAGE_OFFSET=0x80000000
default 0x9f000000 if PAGE_OFFSET=0xC0000000
default 0x8f000000 if PAGE_OFFSET=0xB0000000
default 0xffffffff
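# The offsets above follow the generic KASAN shadow mapping
#	shadow_addr = (addr >> 3) + KASAN_SHADOW_OFFSET
# i.e. one shadow byte per 8 bytes of kernel address space. A worked
# example for the default 3G/1G split (PAGE_OFFSET=0xC0000000), assuming
# the worst-case MODULES_VADDR of PAGE_OFFSET - 16MB = 0xbf000000 as the
# lowest address that needs shadow:
#
#	(0xbf000000 >> 3) + 0x9f000000 = 0x17e00000 + 0x9f000000 = 0xb6e00000
#	(0xffffffff >> 3) + 0x9f000000 = 0x1fffffff + 0x9f000000 = 0xbeffffff
#
# so the shadow region sits at roughly 0xb6e00000..0xbeffffff, just below
# MODULES_VADDR. The 0xffffffff default is an error value for PAGE_OFFSET
# configurations that are not covered above.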
config NR_CPUS
int "Maximum number of CPUs (2-32)"
range 2 16 if DEBUG_KMAP_LOCAL
range 2 32 if !DEBUG_KMAP_LOCAL
depends on SMP
default "4"
help
The maximum number of CPUs that the kernel can support.
Up to 32 CPUs can be supported, or up to 16 if kmap_local()
debugging is enabled, which uses half of the per-CPU fixmap
slots as guard regions.
config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
depends on SMP
select GENERIC_IRQ_MIGRATION
help
Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu.
config ARM_PSCI
bool "Support for the ARM Power State Coordination Interface (PSCI)"
depends on HAVE_ARM_SMCCC
select ARM_PSCI_FW
help
Say Y here if you want Linux to communicate with system firmware
implementing the PSCI specification for CPU-centric power
management operations described in ARM document number ARM DEN
0022A ("Power State Coordination Interface System Software on
ARM processors").
config HZ_FIXED
int
default 128 if SOC_AT91RM9200
default 0
choice
depends on HZ_FIXED = 0
prompt "Timer frequency"
config HZ_100
bool "100 Hz"
config HZ_200
bool "200 Hz"
config HZ_250
bool "250 Hz"
config HZ_300
bool "300 Hz"
config HZ_500
bool "500 Hz"
config HZ_1000
bool "1000 Hz"
endchoice
config HZ
int
default HZ_FIXED if HZ_FIXED != 0
default 100 if HZ_100
default 200 if HZ_200
default 250 if HZ_250
default 300 if HZ_300
default 500 if HZ_500
default 1000
config SCHED_HRTICK
def_bool HIGH_RES_TIMERS
config THUMB2_KERNEL
bool "Compile the kernel in Thumb-2 mode" if !CPU_THUMBONLY
depends on (CPU_V7 || CPU_V7M) && !CPU_V6 && !CPU_V6K
default y if CPU_THUMBONLY
select ARM_UNWIND
help
By enabling this option, the kernel will be compiled in
Thumb-2 mode.
If unsure, say N.
config ARM_PATCH_IDIV
bool "Runtime patch udiv/sdiv instructions into __aeabi_{u}idiv()"
depends on CPU_32v7
default y
help
The ARM compiler inserts calls to __aeabi_idiv() and
__aeabi_uidiv() when it needs to perform division on signed
and unsigned integers. Some v7 CPUs have support for the sdiv
and udiv instructions that can be used to implement those
functions.
Enabling this option allows the kernel to modify itself to
replace the first two instructions of these library functions
with the sdiv or udiv plus "bx lr" instructions when the CPU
it is running on supports them. Typically this will be faster
and less power intensive than running the original library
code to do integer division.
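# A hypothetical sketch of the boot-time decision and of the two-instruction
# sequence the help text describes; the helper name and declarations are
# illustrative, not the kernel's actual patching code (which also handles
# Thumb-2 encodings and synchronises the I/D caches afterwards):
#
#	extern unsigned int elf_hwcap;			/* see <asm/hwcap.h> */
#
#	static void maybe_patch_uidiv(unsigned int *fn)	/* fn = &__aeabi_uidiv */
#	{
#		if (!(elf_hwcap & HWCAP_IDIVA))		/* no hw divide in ARM state */
#			return;
#		fn[0] = 0xe730f110;			/* udiv r0, r0, r1 */
#		fn[1] = 0xe12fff1e;			/* bx   lr         */
#	}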
config AEABI
bool "Use the ARM EABI to compile the kernel" if !CPU_V7 && \
!CPU_V7M && !CPU_V6 && !CPU_V6K && !CC_IS_CLANG
default CPU_V7 || CPU_V7M || CPU_V6 || CPU_V6K || CC_IS_CLANG
help
This option allows for the kernel to be compiled using the latest
ARM ABI (aka EABI). This is only useful if you are using a user
space environment that is also compiled with EABI.
Since there are major incompatibilities between the legacy ABI and
EABI, especially with regard to structure member alignment, this
option also changes the kernel syscall calling convention to
disambiguate both ABIs and allow for backward compatibility support
(selected with CONFIG_OABI_COMPAT).
To use this you need GCC version 4.0.0 or later.
config OABI_COMPAT
bool "Allow old ABI binaries to run with this kernel (EXPERIMENTAL)"
depends on AEABI && !THUMB2_KERNEL
help
This option preserves the old syscall interface along with the
new (ARM EABI) one. It also provides a compatibility layer to
intercept syscalls that have structure arguments whose layout
in memory differs between the legacy ABI and the new ARM EABI
(only for non "thumb" binaries). This option adds a tiny
overhead to all syscalls and produces a slightly larger kernel.
The seccomp filter system will not be available when this is
selected, since there is no way yet to sensibly distinguish
between calling conventions during filtering.
If you know you'll be using only pure EABI user space then you
can say N here. If this option is not selected and you attempt
to execute a legacy ABI binary then the result will be
UNPREDICTABLE (in fact it can be predicted that it won't work
at all). If in doubt say N.
config ARCH_SELECT_MEMORY_MODEL
def_bool y
config ARCH_FLATMEM_ENABLE
def_bool !(ARCH_RPC || ARCH_SA1100)
config ARCH_SPARSEMEM_ENABLE
def_bool !ARCH_FOOTBRIDGE
select SPARSEMEM_STATIC if SPARSEMEM
config HIGHMEM
bool "High Memory Support"
depends on MMU
select KMAP_LOCAL
select KMAP_LOCAL_NON_LINEAR_PTE_ARRAY
help
The address space of ARM processors is only 4 Gigabytes large
and it has to accommodate user address space, kernel address
space as well as some memory mapped IO. That means that, if you
have a large amount of physical memory and/or IO, not all of the
memory can be "permanently mapped" by the kernel. The physical
memory that is not permanently mapped is called "high memory".
Depending on the selected kernel/user memory split, minimum
vmalloc space and the actual amount of RAM, you may not need this
option; leaving it disabled should result in a slightly faster kernel.
If unsure, say n.
config HIGHPTE
bool "Allocate 2nd-level pagetables from highmem" if EXPERT
depends on HIGHMEM
default y
help
The VM uses one page of physical memory for each page table.
For systems with a lot of processes, this can use a lot of
precious low memory, eventually leading to low memory being
consumed by page tables. Setting this option will allow
user-space 2nd level page tables to reside in high memory.
config ARM_PAN
bool "Enable privileged no-access"
depends on MMU
default y
help
Increase kernel security by ensuring that normal kernel accesses
are unable to access userspace addresses. This can help prevent
use-after-free bugs becoming an exploitable privilege escalation
by ensuring that magic values (such as LIST_POISON) will always
fault when dereferenced.
The implementation uses CPU domains when !CONFIG_ARM_LPAE, and
disables TTBR0 page table walks when CONFIG_ARM_LPAE is enabled.
config CPU_SW_DOMAIN_PAN
def_bool y
depends on ARM_PAN && !ARM_LPAE
help
Enable use of CPU domains to implement privileged no-access.
CPUs with low-vector mappings use a best-efforts implementation.
Their lower 1MB needs to remain accessible for the vectors, but
the remainder of userspace will become appropriately inaccessible.
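# A rough sketch of the domain-based mechanism, assuming (for illustration
# only) that userspace mappings live in domain 1 of the Domain Access
# Control Register; the real kernel wraps this in its uaccess
# enable/disable helpers:
#
#	static inline void set_user_domain(unsigned int access)
#	{
#		unsigned long dacr;
#
#		asm volatile("mrc p15, 0, %0, c3, c0, 0" : "=r" (dacr));
#		dacr &= ~(3UL << 2);			/* clear the domain 1 field   */
#		dacr |= (unsigned long)access << 2;	/* 0 = no access, 1 = client  */
#		asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (dacr));
#		asm volatile("isb");
#	}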
config CPU_TTBR0_PAN
def_bool y
depends on ARM_PAN && ARM_LPAE
help
Enable privileged no-access by disabling TTBR0 page table walks when
running in kernel mode.
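# A rough sketch of the LPAE mechanism, assuming the ARMv7 LPAE TTBCR
# layout with EPD0 at bit 7; the real implementation additionally adjusts
# T0SZ and the ASID and lives in the kernel's uaccess enable/disable paths:
#
#	#define TTBCR_EPD0	(1UL << 7)	/* disable TTBR0 table walks */
#
#	static inline void block_user_access(void)
#	{
#		unsigned long ttbcr;
#
#		asm volatile("mrc p15, 0, %0, c2, c0, 2" : "=r" (ttbcr));
#		ttbcr |= TTBCR_EPD0;	/* user-half walks now fault in kernel mode */
#		asm volatile("mcr p15, 0, %0, c2, c0, 2" : : "r" (ttbcr));
#		asm volatile("isb");
#	}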
config HW_PERF_EVENTS
def_bool y
depends on ARM_PMU
config ARM_MODULE_PLTS
bool "Use PLTs to allow module memory to spill over into vmalloc area"
depends on MODULES
select KASAN_VMALLOC if KASAN
default y
help
Allocate PLTs when loading modules so that jumps and calls whose
targets are too far away for their relative offsets to be encoded
in the instructions themselves can be bounced via veneers in the
module's PLT. This allows modules to be allocated in the generic
vmalloc area after the dedicated module memory area has been
exhausted. The modules will use slightly more memory, but after
rounding up to page size, the actual memory footprint is usually
the same.
Disabling this is usually safe for small single-platform
configurations. If unsure, say y.
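# For illustration, a hypothetical ARM-mode PLT veneer of the kind this
# option emits: an unconditional jump that can reach any 32-bit address,
# used when a module's relative branch cannot reach its target in the
# vmalloc area (the exact entry format used by the kernel may differ):
#
#	unsigned int plt_entry[2] = {
#		0xe51ff004,	/* ldr pc, [pc, #-4]        */
#		0x00000000,	/* literal: absolute target */
#	};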
config ARCH_FORCE_MAX_ORDER
int "Order of maximal physically contiguous allocations"
default "11" if SOC_AM33XX
default "8" if SA1111
default "10"
help
The kernel page allocator limits the size of maximal physically
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
defines the maximal power-of-two number of pages that can be
allocated as a single contiguous block. This option allows
overriding the default setting when ability to allocate very
large blocks of physically contiguous memory is required.
Don't change if unsure.
config ALIGNMENT_TRAP
def_bool CPU_CP15_MMU
select HAVE_PROC_CPU if PROC_FS
help
ARM processors cannot fetch/store information which is not
naturally aligned on the bus, i.e., a 4 byte fetch must start at an
address divisible by 4. On 32-bit ARM processors, these non-aligned
fetch/store instructions will be emulated in software when this option
is enabled, which has a severe performance impact. This is necessary for
correct operation of some network protocols. With an IP-only
configuration it is safe to say N, otherwise say Y.
config UACCESS_WITH_MEMCPY
bool "Use kernel mem{cpy,set}() for {copy_to,clear}_user()"
depends on MMU
default y if CPU_FEROCEON
help
Implement faster copy_to_user and clear_user methods for CPU
cores where an 8-word STM instruction gives significantly higher
memory write throughput than a sequence of individual 32-bit stores.
A possible side effect is a slight increase in scheduling latency
between threads sharing the same address space if they invoke
such copy operations with large buffers.
However, if the CPU data cache is using a write-allocate mode,
this option is unlikely to provide any performance gain.
config PARAVIRT
bool "Enable paravirtualization code"
help
This changes the kernel so it can modify itself when it is run
under a hypervisor, potentially improving performance significantly
over full virtualization.
config PARAVIRT_TIME_ACCOUNTING
bool "Paravirtual steal time accounting"
select PARAVIRT
help
Select this option to enable fine granularity task steal time
accounting. Time spent executing other tasks in parallel with
the current vCPU is discounted from the vCPU power. To account for
that, there can be a small performance impact.
If in doubt, say N here.
config XEN_DOM0
def_bool y
depends on XEN
config XEN
bool "Xen guest support on ARM"
depends on ARM && AEABI && OF
depends on CPU_V7 && !CPU_V6
depends on !GENERIC_ATOMIC64
depends on MMU
select ARCH_DMA_ADDR_T_64BIT
select ARM_PSCI
select SWIOTLB
select SWIOTLB_XEN
select PARAVIRT
help
Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
config CC_HAVE_STACKPROTECTOR_TLS
def_bool $(cc-option,-mtp=cp15 -mstack-protector-guard=tls -mstack-protector-guard-offset=0)
config STACKPROTECTOR_PER_TASK
bool "Use a unique stack canary value for each task"
depends on STACKPROTECTOR && CURRENT_POINTER_IN_TPIDRURO && !XIP_DEFLATED_DATA
depends on GCC_PLUGINS || CC_HAVE_STACKPROTECTOR_TLS
select GCC_PLUGIN_ARM_SSP_PER_TASK if !CC_HAVE_STACKPROTECTOR_TLS
default y
help
Due to the fact that GCC uses an ordinary symbol reference from
which to load the value of the stack canary, this value can only
change at reboot time on SMP systems, and all tasks running in the
kernel's address space are forced to use the same canary value for
the entire duration that the system is up.
Enable this option to switch to a different method that uses a
different canary value for each task.
endmenu
menu "Boot options"
config USE_OF
bool "Flattened Device Tree support"
select IRQ_DOMAIN
select OF
help
Include support for flattened device tree machine descriptions.
config ARCH_WANT_FLAT_DTB_INSTALL
def_bool y
config ATAGS
bool "Support for the traditional ATAGS boot data passing"
default y
help
This is the traditional way of passing data to the kernel at boot
time. If you are solely relying on the flattened device tree (or
the ARM_ATAG_DTB_COMPAT option) then you may unselect this option
to remove ATAGS support from your kernel binary.
config DEPRECATED_PARAM_STRUCT
bool "Provide old way to pass kernel parameters"
depends on ATAGS
help
This was deprecated in 2001 and announced to live on for 5 years.
Some old boot loaders still use this way.
# Compressed boot loader in ROM. Yes, we really want to ask about
# TEXT and BSS so we preserve their values in the config files.
config ZBOOT_ROM_TEXT
hex "Compressed ROM boot loader base address"
default 0x0
help
The physical address at which the ROM-able zImage is to be
placed in the target. Platforms which make use of ROM-able
zImage formats normally set this to a suitable value in their
defconfig file.
If ZBOOT_ROM is not enabled, this has no effect.
config ZBOOT_ROM_BSS
hex "Compressed ROM boot loader BSS address"
default 0x0
help
The base address of an area of read/write memory in the target
for the ROM-able zImage which must be available while the
decompressor is running. It must be large enough to hold the
entire decompressed kernel plus an additional 128 KiB.
Platforms which make use of ROM-able zImage formats normally
set this to a suitable value in their defconfig file.
If ZBOOT_ROM is not enabled, this has no effect.
config ZBOOT_ROM
bool "Compressed boot loader in ROM/flash"
depends on ZBOOT_ROM_TEXT != ZBOOT_ROM_BSS
depends on !ARM_APPENDED_DTB && !XIP_KERNEL && !AUTO_ZRELADDR
help
Say Y here if you intend to execute your compressed kernel image
(zImage) directly from ROM or flash. If unsure, say N.
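# Illustrative defconfig fragment for a ROM-able zImage. The addresses below
# are hypothetical and must be replaced with values matching the target's
# actual flash and RAM layout:
#   CONFIG_ZBOOT_ROM=y
#   CONFIG_ZBOOT_ROM_TEXT=0x10000000
#   CONFIG_ZBOOT_ROM_BSS=0x20000000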
config ARM_APPENDED_DTB
bool "Use appended device tree blob to zImage (EXPERIMENTAL)"
depends on OF
help
With this option, the boot code will look for a device tree binary
(DTB) appended to zImage
(e.g. cat zImage <filename>.dtb > zImage_w_dtb).
This is meant as a backward compatibility convenience for those
systems with a bootloader that can't be upgraded to accommodate
the documented boot protocol using a device tree.
Beware that there is very little in terms of protection against
this option being confused by leftover garbage in memory that might
look like a DTB header after a reboot if no actual DTB is appended
to zImage. Do not leave this option active in a production kernel
if you don't intend to always append a DTB. Proper passing of the
location into r2 of a bootloader provided DTB is always preferable
to this option.
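# Illustrative example of producing an appended-DTB image; the DTB file name
# below is hypothetical:
#   cat arch/arm/boot/zImage arch/arm/boot/dts/myboard.dtb > zImage_w_dtb
# The resulting zImage_w_dtb is then flashed or loaded by the legacy
# bootloader exactly like a plain zImage.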
config ARM_ATAG_DTB_COMPAT
bool "Supplement the appended DTB with traditional ATAG information"
depends on ARM_APPENDED_DTB
help
Some old bootloaders can't be updated to a DTB capable one, yet
they provide ATAGs with memory configuration, the ramdisk address,
the kernel cmdline string, etc. Such information is dynamically
provided by the bootloader and can't always be stored in a static
DTB. To allow a device tree enabled kernel to be used with such
bootloaders, this option allows zImage to extract the information
from the ATAG list and store it at run time into the appended DTB.
choice
prompt "Kernel command line type"
depends on ARM_ATAG_DTB_COMPAT
default ARM_ATAG_DTB_COMPAT_CMDLINE_FROM_BOOTLOADER
config ARM_ATAG_DTB_COMPAT_CMDLINE_FROM_BOOTLOADER
bool "Use bootloader kernel arguments if available"
help
Uses the command-line options passed by the boot loader instead of
the device tree bootargs property. If the boot loader doesn't provide
any, the device tree bootargs property will be used.
config ARM_ATAG_DTB_COMPAT_CMDLINE_EXTEND
bool "Extend with bootloader kernel arguments"
help
The command-line arguments provided by the boot loader will be
appended to the device tree bootargs property.
endchoice
config CMDLINE
string "Default kernel command string"
default ""
help
On some architectures (e.g. CATS), there is currently no way
for the boot loader to pass arguments to the kernel. For these
architectures, you should supply some command-line options at build
time by entering them here. As a minimum, you should specify the
memory size and the root device (e.g., mem=64M root=/dev/nfs).
choice
prompt "Kernel command line type"
depends on CMDLINE != ""
default CMDLINE_FROM_BOOTLOADER
config CMDLINE_FROM_BOOTLOADER
bool "Use bootloader kernel arguments if available"
help
Uses the command-line options passed by the boot loader. If
the boot loader doesn't provide any, the default kernel command
string provided in CMDLINE will be used.
config CMDLINE_EXTEND
bool "Extend bootloader kernel arguments"
help
The command-line arguments provided by the boot loader will be
appended to the default kernel command string.
config CMDLINE_FORCE
bool "Always use the default kernel command string"
help
Always use the default kernel command string, even if the boot
loader passes other arguments to the kernel.
This is useful if you cannot or don't want to change the
command-line options your boot loader passes to the kernel.
endchoice
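# Illustrative defconfig fragment combining the options above (the command
# line shown is an example only):
#   CONFIG_CMDLINE="console=ttyS0,115200 mem=64M root=/dev/nfs"
#   CONFIG_CMDLINE_FROM_BOOTLOADER=y
# With this combination the built-in string is used only when the boot
# loader passes no arguments of its own.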
config XIP_KERNEL
bool "Kernel Execute-In-Place from ROM"
depends on !ARM_LPAE && !ARCH_MULTIPLATFORM
depends on !ARM_PATCH_IDIV && !ARM_PATCH_PHYS_VIRT && !SMP_ON_UP
help
Execute-In-Place allows the kernel to run from non-volatile storage
directly addressable by the CPU, such as NOR flash. This saves RAM
space since the text section of the kernel is not loaded from flash
to RAM. Read-write sections, such as the data section and stack,
are still copied to RAM. The XIP kernel is not compressed since
it has to run directly from flash, so it will take more space to
store it. The flash address used to link the kernel object files,
and for storing it, is configuration dependent. Therefore, if you
say Y here, you must know the proper physical address where to
store the kernel image depending on your own flash memory usage.
Also note that the make target becomes "make xipImage" rather than
"make zImage" or "make Image". The final kernel binary to put in
ROM memory will be arch/arm/boot/xipImage.
If unsure, say N.
config XIP_PHYS_ADDR
hex "XIP Kernel Physical Location"
depends on XIP_KERNEL
default "0x00080000"
help
This is the physical address in your flash memory the kernel will
be linked for and stored to. This address is dependent on your
own flash usage.
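# Illustrative XIP configuration and build. The flash address is an example
# only and must match where the image will actually be stored:
#   CONFIG_XIP_KERNEL=y
#   CONFIG_XIP_PHYS_ADDR=0x00080000
#   make xipImage
# The resulting arch/arm/boot/xipImage is linked to run directly from that
# flash address.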
config XIP_DEFLATED_DATA
bool "Store kernel .data section compressed in ROM"
depends on XIP_KERNEL
select ZLIB_INFLATE
help
Before the kernel is actually executed, its .data section has to be
copied to RAM from ROM. This option allows that data to be stored
in compressed form and decompressed to RAM rather than merely being
copied, saving some precious ROM space. A possible drawback is a
slightly longer boot delay.
config ARCH_SUPPORTS_KEXEC
def_bool (!SMP || PM_SLEEP_SMP) && MMU
config ATAGS_PROC
bool "Export atags in procfs"
depends on ATAGS && KEXEC
default y
help
Should the atags used to boot the kernel be exported in an "atags"
file in procfs? This is useful with kexec.
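# Illustrative use of the exported tag list (requires this option and a
# kernel that was booted with ATAGs):
#   hexdump -C /proc/atags
# kexec-based tools can read /proc/atags to hand the same tags to the next
# kernel.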
config ARCH_SUPPORTS_CRASH_DUMP
def_bool y
config ARCH_DEFAULT_CRASH_DUMP
def_bool y
config AUTO_ZRELADDR
bool "Auto calculation of the decompressed kernel image address" if !ARCH_MULTIPLATFORM
default !(ARCH_FOOTBRIDGE || ARCH_RPC || ARCH_SA1100)
help
ZRELADDR is the physical address where the decompressed kernel
image will be placed. If AUTO_ZRELADDR is selected, the address
will be determined at run-time, either by masking the current IP
with 0xf8000000, or, if invalid, from the DTB passed in r2.
This assumes the zImage is placed within the first 128MB from
the start of memory.
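# Worked example of the masking described above (the load address is
# hypothetical): a zImage running at 0x8037c000, masked with 0xf8000000,
# yields ZRELADDR = 0x80000000, i.e. the start of the 128MB-aligned region
# containing the decompressor.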
config EFI_STUB
bool
config EFI
bool "UEFI runtime support"
depends on OF && !CPU_BIG_ENDIAN && MMU && AUTO_ZRELADDR && !XIP_KERNEL
select UCS2_STRING
select EFI_PARAMS_FROM_FDT
select EFI_STUB
select EFI_GENERIC_STUB
select EFI_RUNTIME_WRAPPERS
help
This option provides support for runtime services provided
by UEFI firmware (such as non-volatile variables, realtime
clock, and platform reset). A UEFI stub is also provided to
allow the kernel to be booted as an EFI application. This
is only useful for kernels that may run on systems that have
UEFI firmware.
config DMI
bool "Enable support for SMBIOS (DMI) tables"
depends on EFI
default y
help
This enables the SMBIOS/DMI feature for systems.
This option is only useful on systems that have UEFI firmware.
However, even with this option, the resultant kernel should
continue to boot on existing non-UEFI platforms.
NOTE: This does *NOT* enable or encourage the use of DMI quirks,
i.e., the practice of identifying the platform via DMI to
decide whether certain workarounds for buggy hardware and/or
firmware need to be enabled. This would require the DMI subsystem
to be enabled much earlier than we do on ARM, which is non-trivial.
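# Illustrative way to inspect the exported SMBIOS/DMI data from userspace,
# assuming the DMI id driver (CONFIG_DMIID) is also enabled:
#   cat /sys/class/dmi/id/sys_vendor
#   cat /sys/class/dmi/id/product_name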
endmenu
menu "CPU Power Management"
source "drivers/cpufreq/Kconfig"
source "drivers/cpuidle/Kconfig"
endmenu
menu "Floating point emulation"
comment "At least one emulation must be selected"
config FPE_NWFPE
bool "NWFPE math emulation"
depends on (!AEABI || OABI_COMPAT) && !THUMB2_KERNEL
help
Say Y to include the NWFPE floating point emulator in the kernel.
This is necessary to run most binaries. Linux does not currently
support floating point hardware so you need to say Y here even if
your machine has an FPA or floating point co-processor podule.
You may say N here if you are going to load the Acorn FPEmulator
early in the bootup.
config FPE_NWFPE_XP
bool "Support extended precision"
depends on FPE_NWFPE
help
Say Y to include 80-bit support in the kernel floating-point
emulator. Otherwise, only 32 and 64-bit support is compiled in.
Note that gcc does not generate 80-bit operations by default,
so in most cases this option only enlarges the size of the
floating point emulator without any good reason.
You almost surely want to say N here.
config FPE_FASTFPE
bool "FastFPE math emulation (EXPERIMENTAL)"
depends on (!AEABI || OABI_COMPAT) && !CPU_32v3
help
Say Y here to include the FAST floating point emulator in the kernel.
This is an experimental, much faster emulator which now also has full
precision for the mantissa. It does not support any exceptions.
It is very simple, and approximately 3-6 times faster than NWFPE.
It should be sufficient for most programs. It may not be suitable
for scientific calculations, but you have to check this for yourself.
If you do not feel you need faster FP emulation, you should choose
NWFPE instead.
config VFP
bool "VFP-format floating point maths"
depends on CPU_V6 || CPU_V6K || CPU_ARM926T || CPU_V7 || CPU_FEROCEON
help
Say Y to include VFP support code in the kernel. This is needed
if your hardware includes a VFP unit.
Please see <file:Documentation/arch/arm/vfp/release-notes.rst> for
release notes and additional status information.
Say N if your target does not have VFP hardware.
config VFPv3
bool
depends on VFP
default y if CPU_V7
config NEON
bool "Advanced SIMD (NEON) Extension support"
depends on VFPv3 && CPU_V7
help
Say Y to include support code for NEON, the ARMv7 Advanced SIMD
Extension.
config KERNEL_MODE_NEON
bool "Support for NEON in kernel mode"
depends on NEON && AEABI
help
Say Y to include support for NEON in kernel mode.
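# Minimal usage sketch (kernel C code, shown here only for illustration):
# NEON may only be used in kernel mode between the begin/end calls, which
# disable preemption:
#   #include <asm/neon.h>
#   kernel_neon_begin();
#   /* NEON instructions or intrinsics here */
#   kernel_neon_end();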
endmenu
menu "Power management options"
source "kernel/power/Kconfig"
config ARCH_SUSPEND_POSSIBLE
depends on CPU_ARM920T || CPU_ARM926T || CPU_FEROCEON || CPU_SA1100 || \
CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M || CPU_XSC3 || CPU_XSCALE || CPU_MOHAWK
def_bool y
config ARM_CPU_SUSPEND
def_bool PM_SLEEP || BL_SWITCHER || ARM_PSCI_FW
depends on ARCH_SUSPEND_POSSIBLE
config ARCH_HIBERNATION_POSSIBLE
bool
depends on MMU
default y if ARCH_SUSPEND_POSSIBLE
endmenu
source "arch/arm/Kconfig.assembler"