Merge branch 'linus' into locking/core, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 10c9850cb2
Author: Ingo Molnar <mingo@kernel.org>
Date:   2017-08-25 11:04:51 +02:00
412 changed files with 3226 additions and 1439 deletions


@ -27,5 +27,11 @@ You have to add the following kernel parameters in your elilo.conf:
Macbook Pro 17", iMac 20" : Macbook Pro 17", iMac 20" :
video=efifb:i20 video=efifb:i20
Accepted options:
nowc Don't map the framebuffer write combined. This can be used
to workaround side-effects and slowdowns on other CPU cores
when large amounts of console data are written.
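As a quick illustration (not part of this patch, and the exact option
string should be checked against your setup), the option would be passed
on the kernel command line / elilo append line, for example:

    video=efifb:nowc
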
-- --
Edgar Hucek <gimli@dark-green.com> Edgar Hucek <gimli@dark-green.com>


@ -228,7 +228,7 @@ Learning on the device port should be enabled, as well as learning_sync:
bridge link set dev DEV learning on self bridge link set dev DEV learning on self
bridge link set dev DEV learning_sync on self bridge link set dev DEV learning_sync on self
Learning_sync attribute enables syncing of the learned/forgotton FDB entry to Learning_sync attribute enables syncing of the learned/forgotten FDB entry to
the bridge's FDB. It's possible, but not optimal, to enable learning on the the bridge's FDB. It's possible, but not optimal, to enable learning on the
device port and on the bridge port, and disable learning_sync. device port and on the bridge port, and disable learning_sync.
@ -245,7 +245,7 @@ the responsibility of the port driver/device to age out these entries. If the
port device supports ageing, when the FDB entry expires, it will notify the port device supports ageing, when the FDB entry expires, it will notify the
driver which in turn will notify the bridge with SWITCHDEV_FDB_DEL. If the driver which in turn will notify the bridge with SWITCHDEV_FDB_DEL. If the
device does not support ageing, the driver can simulate ageing using a device does not support ageing, the driver can simulate ageing using a
garbage collection timer to monitor FBD entries. Expired entries will be garbage collection timer to monitor FDB entries. Expired entries will be
notified to the bridge using SWITCHDEV_FDB_DEL. See rocker driver for notified to the bridge using SWITCHDEV_FDB_DEL. See rocker driver for
example of driver running ageing timer. example of driver running ageing timer.


@ -58,20 +58,23 @@ Symbols/Function Pointers
%ps versatile_init %ps versatile_init
%pB prev_fn_of_versatile_init+0x88/0x88 %pB prev_fn_of_versatile_init+0x88/0x88
For printing symbols and function pointers. The ``S`` and ``s`` specifiers The ``F`` and ``f`` specifiers are for printing function pointers,
result in the symbol name with (``S``) or without (``s``) offsets. Where for example, f->func, &gettimeofday. They have the same result as
this is used on a kernel without KALLSYMS - the symbol address is ``S`` and ``s`` specifiers. But they do an extra conversion on
printed instead. ia64, ppc64 and parisc64 architectures where the function pointers
are actually function descriptors.
The ``S`` and ``s`` specifiers can be used for printing symbols
from direct addresses, for example, __builtin_return_address(0),
(void *)regs->ip. They result in the symbol name with (``S``) or
without (``s``) offsets. If KALLSYMS are disabled then the symbol
address is printed instead.
The ``B`` specifier results in the symbol name with offsets and should be The ``B`` specifier results in the symbol name with offsets and should be
used when printing stack backtraces. The specifier takes into used when printing stack backtraces. The specifier takes into
consideration the effect of compiler optimisations which may occur consideration the effect of compiler optimisations which may occur
when tail-call``s are used and marked with the noreturn GCC attribute. when tail-call``s are used and marked with the noreturn GCC attribute.
On ia64, ppc64 and parisc64 architectures function pointers are
actually function descriptors which must first be resolved. The ``F`` and
``f`` specifiers perform this resolution and then provide the same
functionality as the ``S`` and ``s`` specifiers.
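A rough usage sketch (illustrative only, not part of this patch; the
handler parameter name some_handler is made up for the example):

	#include <linux/printk.h>

	static void show_callers(void (*some_handler)(void))
	{
		void *ret = __builtin_return_address(0);

		pr_info("called from %pS\n", ret);	/* symbol + offset */
		pr_info("called from %ps\n", ret);	/* symbol only */
		pr_info("handler is %pf\n", some_handler); /* function pointer,
							      descriptor-safe on
							      ia64/ppc64/parisc64 */
		pr_info("backtrace entry %pB\n", ret);	/* for stack traces */
	}
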
Kernel Pointers Kernel Pointers
=============== ===============


@ -35,9 +35,34 @@ Table : Subdirectories in /proc/sys/net
bpf_jit_enable bpf_jit_enable
-------------- --------------
This enables Berkeley Packet Filter Just in Time compiler. This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
Currently supported on x86_64 architecture, bpf_jit provides a framework and efficient infrastructure allowing to execute bytecode at various
to speed packet filtering, the one used by tcpdump/libpcap for example. hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After program load
through bpf(2) and passing a verifier in the kernel, a JIT will then
translate these BPF proglets into native CPU instructions. There are
two flavors of JITs, the newer eBPF JIT currently supported on:
- x86_64
- arm64
- ppc64
- sparc64
- mips64
- s390x
And the older cBPF JIT supported on the following archs:
- arm
- mips
- ppc
- sparc
eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc, but not mentioned eBPF
programs loaded through bpf(2).
Values : Values :
0 - disable the JIT (default value) 0 - disable the JIT (default value)
1 - enable the JIT 1 - enable the JIT
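A minimal sketch of how this knob is usually flipped from userspace
(illustrative only, not part of this patch; the same pattern applies to
bpf_jit_harden and bpf_jit_kallsyms described below):

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/net/core/bpf_jit_enable", "w");

		if (!f) {
			perror("bpf_jit_enable");
			return 1;
		}
		fputs("1\n", f);	/* 1 - enable the JIT */
		return fclose(f) ? 1 : 0;
	}
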
@ -46,9 +71,9 @@ Values :
bpf_jit_harden bpf_jit_harden
-------------- --------------
This enables hardening for the Berkeley Packet Filter Just in Time compiler. This enables hardening for the BPF JIT compiler. Supported are eBPF
Supported are eBPF JIT backends. Enabling hardening trades off performance, JIT backends. Enabling hardening trades off performance, but can
but can mitigate JIT spraying. mitigate JIT spraying.
Values : Values :
0 - disable JIT hardening (default value) 0 - disable JIT hardening (default value)
1 - enable JIT hardening for unprivileged users only 1 - enable JIT hardening for unprivileged users only
@ -57,11 +82,11 @@ Values :
bpf_jit_kallsyms bpf_jit_kallsyms
---------------- ----------------
When Berkeley Packet Filter Just in Time compiler is enabled, then compiled When BPF JIT compiler is enabled, then compiled images are unknown
images are unknown addresses to the kernel, meaning they neither show up in addresses to the kernel, meaning they neither show up in traces nor
traces nor in /proc/kallsyms. This enables export of these addresses, which in /proc/kallsyms. This enables export of these addresses, which can
can be used for debugging/tracing. If bpf_jit_harden is enabled, this feature be used for debugging/tracing. If bpf_jit_harden is enabled, this
is disabled. feature is disabled.
Values : Values :
0 - disable JIT kallsyms export (default value) 0 - disable JIT kallsyms export (default value)
1 - enable JIT kallsyms export for privileged users only 1 - enable JIT kallsyms export for privileged users only


@ -7110,7 +7110,6 @@ M: Marc Zyngier <marc.zyngier@arm.com>
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
S: Maintained S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
T: git git://git.infradead.org/users/jcooper/linux.git irqchip/core
F: Documentation/devicetree/bindings/interrupt-controller/ F: Documentation/devicetree/bindings/interrupt-controller/
F: drivers/irqchip/ F: drivers/irqchip/


@ -1,7 +1,7 @@
VERSION = 4 VERSION = 4
PATCHLEVEL = 13 PATCHLEVEL = 13
SUBLEVEL = 0 SUBLEVEL = 0
EXTRAVERSION = -rc4 EXTRAVERSION = -rc6
NAME = Fearless Coyote NAME = Fearless Coyote
# *DOCUMENTATION* # *DOCUMENTATION*
@ -396,7 +396,7 @@ LINUXINCLUDE := \
KBUILD_CPPFLAGS := -D__KERNEL__ KBUILD_CPPFLAGS := -D__KERNEL__
KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
-fno-strict-aliasing -fno-common \ -fno-strict-aliasing -fno-common -fshort-wchar \
-Werror-implicit-function-declaration \ -Werror-implicit-function-declaration \
-Wno-format-security \ -Wno-format-security \
-std=gnu89 $(call cc-option,-fno-PIE) -std=gnu89 $(call cc-option,-fno-PIE)
@ -442,7 +442,7 @@ export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \
# =========================================================================== # ===========================================================================
# Rules shared between *config targets and build targets # Rules shared between *config targets and build targets
# Basic helpers built in scripts/ # Basic helpers built in scripts/basic/
PHONY += scripts_basic PHONY += scripts_basic
scripts_basic: scripts_basic:
$(Q)$(MAKE) $(build)=scripts/basic $(Q)$(MAKE) $(build)=scripts/basic
@ -505,7 +505,7 @@ ifeq ($(KBUILD_EXTMOD),)
endif endif
endif endif
endif endif
# install and module_install need also be processed one by one # install and modules_install need also be processed one by one
ifneq ($(filter install,$(MAKECMDGOALS)),) ifneq ($(filter install,$(MAKECMDGOALS)),)
ifneq ($(filter modules_install,$(MAKECMDGOALS)),) ifneq ($(filter modules_install,$(MAKECMDGOALS)),)
mixed-targets := 1 mixed-targets := 1
@ -964,7 +964,7 @@ export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-
export KBUILD_VMLINUX_LIBS := $(libs-y1) export KBUILD_VMLINUX_LIBS := $(libs-y1)
export KBUILD_LDS := arch/$(SRCARCH)/kernel/vmlinux.lds export KBUILD_LDS := arch/$(SRCARCH)/kernel/vmlinux.lds
export LDFLAGS_vmlinux export LDFLAGS_vmlinux
# used by scripts/pacmage/Makefile # used by scripts/package/Makefile
export KBUILD_ALLDIRS := $(sort $(filter-out arch/%,$(vmlinux-alldirs)) arch Documentation include samples scripts tools) export KBUILD_ALLDIRS := $(sort $(filter-out arch/%,$(vmlinux-alldirs)) arch Documentation include samples scripts tools)
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS) vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS)
@ -992,8 +992,8 @@ include/generated/autoksyms.h: FORCE
ARCH_POSTLINK := $(wildcard $(srctree)/arch/$(SRCARCH)/Makefile.postlink) ARCH_POSTLINK := $(wildcard $(srctree)/arch/$(SRCARCH)/Makefile.postlink)
# Final link of vmlinux with optional arch pass after final link # Final link of vmlinux with optional arch pass after final link
cmd_link-vmlinux = \ cmd_link-vmlinux = \
$(CONFIG_SHELL) $< $(LD) $(LDFLAGS) $(LDFLAGS_vmlinux) ; \ $(CONFIG_SHELL) $< $(LD) $(LDFLAGS) $(LDFLAGS_vmlinux) ; \
$(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) $@, true) $(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) $@, true)
vmlinux: scripts/link-vmlinux.sh vmlinux_prereq $(vmlinux-deps) FORCE vmlinux: scripts/link-vmlinux.sh vmlinux_prereq $(vmlinux-deps) FORCE
@ -1184,6 +1184,7 @@ PHONY += kselftest
kselftest: kselftest:
$(Q)$(MAKE) -C tools/testing/selftests run_tests $(Q)$(MAKE) -C tools/testing/selftests run_tests
PHONY += kselftest-clean
kselftest-clean: kselftest-clean:
$(Q)$(MAKE) -C tools/testing/selftests clean $(Q)$(MAKE) -C tools/testing/selftests clean


@ -96,7 +96,6 @@ menu "ARC Architecture Configuration"
menu "ARC Platform/SoC/Board" menu "ARC Platform/SoC/Board"
source "arch/arc/plat-sim/Kconfig"
source "arch/arc/plat-tb10x/Kconfig" source "arch/arc/plat-tb10x/Kconfig"
source "arch/arc/plat-axs10x/Kconfig" source "arch/arc/plat-axs10x/Kconfig"
#New platform adds here #New platform adds here


@ -107,7 +107,7 @@ core-y += arch/arc/
# w/o this dtb won't embed into kernel binary # w/o this dtb won't embed into kernel binary
core-y += arch/arc/boot/dts/ core-y += arch/arc/boot/dts/
core-$(CONFIG_ARC_PLAT_SIM) += arch/arc/plat-sim/ core-y += arch/arc/plat-sim/
core-$(CONFIG_ARC_PLAT_TB10X) += arch/arc/plat-tb10x/ core-$(CONFIG_ARC_PLAT_TB10X) += arch/arc/plat-tb10x/
core-$(CONFIG_ARC_PLAT_AXS10X) += arch/arc/plat-axs10x/ core-$(CONFIG_ARC_PLAT_AXS10X) += arch/arc/plat-axs10x/
core-$(CONFIG_ARC_PLAT_EZNPS) += arch/arc/plat-eznps/ core-$(CONFIG_ARC_PLAT_EZNPS) += arch/arc/plat-eznps/


@ -15,15 +15,15 @@
/ { / {
compatible = "snps,arc"; compatible = "snps,arc";
#address-cells = <1>; #address-cells = <2>;
#size-cells = <1>; #size-cells = <2>;
cpu_card { cpu_card {
compatible = "simple-bus"; compatible = "simple-bus";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x00000000 0xf0000000 0x10000000>; ranges = <0x00000000 0x0 0xf0000000 0x10000000>;
core_clk: core_clk { core_clk: core_clk {
#clock-cells = <0>; #clock-cells = <0>;
@ -91,23 +91,21 @@
mb_intc: dw-apb-ictl@0xe0012000 { mb_intc: dw-apb-ictl@0xe0012000 {
#interrupt-cells = <1>; #interrupt-cells = <1>;
compatible = "snps,dw-apb-ictl"; compatible = "snps,dw-apb-ictl";
reg = < 0xe0012000 0x200 >; reg = < 0x0 0xe0012000 0x0 0x200 >;
interrupt-controller; interrupt-controller;
interrupt-parent = <&core_intc>; interrupt-parent = <&core_intc>;
interrupts = < 7 >; interrupts = < 7 >;
}; };
memory { memory {
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x00000000 0x80000000 0x20000000>;
device_type = "memory"; device_type = "memory";
reg = <0x80000000 0x1b000000>; /* (512 - 32) MiB */ /* CONFIG_KERNEL_RAM_BASE_ADDRESS needs to match low mem start */
reg = <0x0 0x80000000 0x0 0x1b000000>; /* (512 - 32) MiB */
}; };
reserved-memory { reserved-memory {
#address-cells = <1>; #address-cells = <2>;
#size-cells = <1>; #size-cells = <2>;
ranges; ranges;
/* /*
* We just move frame buffer area to the very end of * We just move frame buffer area to the very end of
@ -118,7 +116,7 @@
*/ */
frame_buffer: frame_buffer@9e000000 { frame_buffer: frame_buffer@9e000000 {
compatible = "shared-dma-pool"; compatible = "shared-dma-pool";
reg = <0x9e000000 0x2000000>; reg = <0x0 0x9e000000 0x0 0x2000000>;
no-map; no-map;
}; };
}; };


@ -14,15 +14,15 @@
/ { / {
compatible = "snps,arc"; compatible = "snps,arc";
#address-cells = <1>; #address-cells = <2>;
#size-cells = <1>; #size-cells = <2>;
cpu_card { cpu_card {
compatible = "simple-bus"; compatible = "simple-bus";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x00000000 0xf0000000 0x10000000>; ranges = <0x00000000 0x0 0xf0000000 0x10000000>;
core_clk: core_clk { core_clk: core_clk {
#clock-cells = <0>; #clock-cells = <0>;
@ -94,30 +94,29 @@
mb_intc: dw-apb-ictl@0xe0012000 { mb_intc: dw-apb-ictl@0xe0012000 {
#interrupt-cells = <1>; #interrupt-cells = <1>;
compatible = "snps,dw-apb-ictl"; compatible = "snps,dw-apb-ictl";
reg = < 0xe0012000 0x200 >; reg = < 0x0 0xe0012000 0x0 0x200 >;
interrupt-controller; interrupt-controller;
interrupt-parent = <&core_intc>; interrupt-parent = <&core_intc>;
interrupts = < 24 >; interrupts = < 24 >;
}; };
memory { memory {
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x00000000 0x80000000 0x40000000>;
device_type = "memory"; device_type = "memory";
reg = <0x80000000 0x20000000>; /* 512MiB */ /* CONFIG_KERNEL_RAM_BASE_ADDRESS needs to match low mem start */
reg = <0x0 0x80000000 0x0 0x20000000 /* 512 MiB low mem */
0x1 0xc0000000 0x0 0x40000000>; /* 1 GiB highmem */
}; };
reserved-memory { reserved-memory {
#address-cells = <1>; #address-cells = <2>;
#size-cells = <1>; #size-cells = <2>;
ranges; ranges;
/* /*
* Move frame buffer out of IOC aperture (0x8z-0xAz). * Move frame buffer out of IOC aperture (0x8z-0xAz).
*/ */
frame_buffer: frame_buffer@be000000 { frame_buffer: frame_buffer@be000000 {
compatible = "shared-dma-pool"; compatible = "shared-dma-pool";
reg = <0xbe000000 0x2000000>; reg = <0x0 0xbe000000 0x0 0x2000000>;
no-map; no-map;
}; };
}; };


@ -14,15 +14,15 @@
/ { / {
compatible = "snps,arc"; compatible = "snps,arc";
#address-cells = <1>; #address-cells = <2>;
#size-cells = <1>; #size-cells = <2>;
cpu_card { cpu_card {
compatible = "simple-bus"; compatible = "simple-bus";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x00000000 0xf0000000 0x10000000>; ranges = <0x00000000 0x0 0xf0000000 0x10000000>;
core_clk: core_clk { core_clk: core_clk {
#clock-cells = <0>; #clock-cells = <0>;
@ -100,30 +100,29 @@
mb_intc: dw-apb-ictl@0xe0012000 { mb_intc: dw-apb-ictl@0xe0012000 {
#interrupt-cells = <1>; #interrupt-cells = <1>;
compatible = "snps,dw-apb-ictl"; compatible = "snps,dw-apb-ictl";
reg = < 0xe0012000 0x200 >; reg = < 0x0 0xe0012000 0x0 0x200 >;
interrupt-controller; interrupt-controller;
interrupt-parent = <&idu_intc>; interrupt-parent = <&idu_intc>;
interrupts = <0>; interrupts = <0>;
}; };
memory { memory {
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x00000000 0x80000000 0x40000000>;
device_type = "memory"; device_type = "memory";
reg = <0x80000000 0x20000000>; /* 512MiB */ /* CONFIG_KERNEL_RAM_BASE_ADDRESS needs to match low mem start */
reg = <0x0 0x80000000 0x0 0x20000000 /* 512 MiB low mem */
0x1 0xc0000000 0x0 0x40000000>; /* 1 GiB highmem */
}; };
reserved-memory { reserved-memory {
#address-cells = <1>; #address-cells = <2>;
#size-cells = <1>; #size-cells = <2>;
ranges; ranges;
/* /*
* Move frame buffer out of IOC aperture (0x8z-0xAz). * Move frame buffer out of IOC aperture (0x8z-0xAz).
*/ */
frame_buffer: frame_buffer@be000000 { frame_buffer: frame_buffer@be000000 {
compatible = "shared-dma-pool"; compatible = "shared-dma-pool";
reg = <0xbe000000 0x2000000>; reg = <0x0 0xbe000000 0x0 0x2000000>;
no-map; no-map;
}; };
}; };


@ -13,7 +13,7 @@
compatible = "simple-bus"; compatible = "simple-bus";
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges = <0x00000000 0xe0000000 0x10000000>; ranges = <0x00000000 0x0 0xe0000000 0x10000000>;
interrupt-parent = <&mb_intc>; interrupt-parent = <&mb_intc>;
i2sclk: i2sclk@100a0 { i2sclk: i2sclk@100a0 {


@ -21,7 +21,6 @@ CONFIG_MODULES=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set # CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set # CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
CONFIG_ISA_ARCV2=y CONFIG_ISA_ARCV2=y
CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs" CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs"
CONFIG_PREEMPT=y CONFIG_PREEMPT=y


@ -23,7 +23,6 @@ CONFIG_MODULES=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set # CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set # CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
CONFIG_ISA_ARCV2=y CONFIG_ISA_ARCV2=y
CONFIG_SMP=y CONFIG_SMP=y
CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs_idu" CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs_idu"


@ -39,7 +39,6 @@ CONFIG_IP_PNP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set # CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set # CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
# CONFIG_INET_DIAG is not set # CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set # CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set # CONFIG_WIRELESS is not set


@ -23,7 +23,6 @@ CONFIG_MODULES=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set # CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set # CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
CONFIG_ARC_BUILTIN_DTB_NAME="nsim_700" CONFIG_ARC_BUILTIN_DTB_NAME="nsim_700"
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
# CONFIG_COMPACTION is not set # CONFIG_COMPACTION is not set


@ -26,7 +26,6 @@ CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set # CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set # CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
CONFIG_ISA_ARCV2=y CONFIG_ISA_ARCV2=y
CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs" CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs"
CONFIG_PREEMPT=y CONFIG_PREEMPT=y


@ -24,7 +24,6 @@ CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set # CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set # CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
CONFIG_ISA_ARCV2=y CONFIG_ISA_ARCV2=y
CONFIG_SMP=y CONFIG_SMP=y
CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs_idu" CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs_idu"


@ -23,7 +23,6 @@ CONFIG_MODULES=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set # CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set # CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci" CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci"
# CONFIG_COMPACTION is not set # CONFIG_COMPACTION is not set
CONFIG_NET=y CONFIG_NET=y


@ -23,7 +23,6 @@ CONFIG_MODULES=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set # CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set # CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
CONFIG_ISA_ARCV2=y CONFIG_ISA_ARCV2=y
CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs" CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs"
# CONFIG_COMPACTION is not set # CONFIG_COMPACTION is not set


@ -18,7 +18,6 @@ CONFIG_MODULES=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set # CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set # CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
CONFIG_ISA_ARCV2=y CONFIG_ISA_ARCV2=y
CONFIG_SMP=y CONFIG_SMP=y
# CONFIG_ARC_TIMERS_64BIT is not set # CONFIG_ARC_TIMERS_64BIT is not set


@ -38,7 +38,6 @@ CONFIG_IP_MULTICAST=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set # CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set # CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
# CONFIG_INET_DIAG is not set # CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set # CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set # CONFIG_WIRELESS is not set


@ -96,7 +96,9 @@ extern unsigned long perip_base, perip_end;
#define ARC_REG_SLC_FLUSH 0x904 #define ARC_REG_SLC_FLUSH 0x904
#define ARC_REG_SLC_INVALIDATE 0x905 #define ARC_REG_SLC_INVALIDATE 0x905
#define ARC_REG_SLC_RGN_START 0x914 #define ARC_REG_SLC_RGN_START 0x914
#define ARC_REG_SLC_RGN_START1 0x915
#define ARC_REG_SLC_RGN_END 0x916 #define ARC_REG_SLC_RGN_END 0x916
#define ARC_REG_SLC_RGN_END1 0x917
/* Bit val in SLC_CONTROL */ /* Bit val in SLC_CONTROL */
#define SLC_CTRL_DIS 0x001 #define SLC_CTRL_DIS 0x001


@ -94,6 +94,8 @@ static inline int is_pae40_enabled(void)
return IS_ENABLED(CONFIG_ARC_HAS_PAE40); return IS_ENABLED(CONFIG_ARC_HAS_PAE40);
} }
extern int pae40_exist_but_not_enab(void);
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
#endif #endif


@ -75,10 +75,13 @@ void arc_init_IRQ(void)
* Set a default priority for all available interrupts to prevent * Set a default priority for all available interrupts to prevent
* switching of register banks if Fast IRQ and multiple register banks * switching of register banks if Fast IRQ and multiple register banks
* are supported by CPU. * are supported by CPU.
* Also disable all IRQ lines so faulty external hardware won't
* trigger interrupt that kernel is not ready to handle.
*/ */
for (i = NR_EXCEPTIONS; i < irq_bcr.irqs + NR_EXCEPTIONS; i++) { for (i = NR_EXCEPTIONS; i < irq_bcr.irqs + NR_EXCEPTIONS; i++) {
write_aux_reg(AUX_IRQ_SELECT, i); write_aux_reg(AUX_IRQ_SELECT, i);
write_aux_reg(AUX_IRQ_PRIORITY, ARCV2_IRQ_DEF_PRIO); write_aux_reg(AUX_IRQ_PRIORITY, ARCV2_IRQ_DEF_PRIO);
write_aux_reg(AUX_IRQ_ENABLE, 0);
} }
/* setup status32, don't enable intr yet as kernel doesn't want */ /* setup status32, don't enable intr yet as kernel doesn't want */


@ -27,7 +27,7 @@
*/ */
void arc_init_IRQ(void) void arc_init_IRQ(void)
{ {
int level_mask = 0; int level_mask = 0, i;
/* Is timer high priority Interrupt (Level2 in ARCompact jargon) */ /* Is timer high priority Interrupt (Level2 in ARCompact jargon) */
level_mask |= IS_ENABLED(CONFIG_ARC_COMPACT_IRQ_LEVELS) << TIMER0_IRQ; level_mask |= IS_ENABLED(CONFIG_ARC_COMPACT_IRQ_LEVELS) << TIMER0_IRQ;
@ -40,6 +40,18 @@ void arc_init_IRQ(void)
if (level_mask) if (level_mask)
pr_info("Level-2 interrupts bitset %x\n", level_mask); pr_info("Level-2 interrupts bitset %x\n", level_mask);
/*
* Disable all IRQ lines so faulty external hardware won't
* trigger interrupt that kernel is not ready to handle.
*/
for (i = TIMER0_IRQ; i < NR_CPU_IRQS; i++) {
unsigned int ienb;
ienb = read_aux_reg(AUX_IENABLE);
ienb &= ~(1 << i);
write_aux_reg(AUX_IENABLE, ienb);
}
} }
/* /*


@ -665,6 +665,7 @@ noinline void slc_op(phys_addr_t paddr, unsigned long sz, const int op)
static DEFINE_SPINLOCK(lock); static DEFINE_SPINLOCK(lock);
unsigned long flags; unsigned long flags;
unsigned int ctrl; unsigned int ctrl;
phys_addr_t end;
spin_lock_irqsave(&lock, flags); spin_lock_irqsave(&lock, flags);
@ -694,8 +695,19 @@ noinline void slc_op(phys_addr_t paddr, unsigned long sz, const int op)
* END needs to be setup before START (latter triggers the operation) * END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz * END can't be same as START, so add (l2_line_sz - 1) to sz
*/ */
write_aux_reg(ARC_REG_SLC_RGN_END, (paddr + sz + l2_line_sz - 1)); end = paddr + sz + l2_line_sz - 1;
write_aux_reg(ARC_REG_SLC_RGN_START, paddr); if (is_pae40_enabled())
write_aux_reg(ARC_REG_SLC_RGN_END1, upper_32_bits(end));
write_aux_reg(ARC_REG_SLC_RGN_END, lower_32_bits(end));
if (is_pae40_enabled())
write_aux_reg(ARC_REG_SLC_RGN_START1, upper_32_bits(paddr));
write_aux_reg(ARC_REG_SLC_RGN_START, lower_32_bits(paddr));
/* Make sure "busy" bit reports correct stataus, see STAR 9001165532 */
read_aux_reg(ARC_REG_SLC_CTRL);
while (read_aux_reg(ARC_REG_SLC_CTRL) & SLC_CTRL_BUSY); while (read_aux_reg(ARC_REG_SLC_CTRL) & SLC_CTRL_BUSY);
@ -1111,6 +1123,13 @@ noinline void __init arc_ioc_setup(void)
__dc_enable(); __dc_enable();
} }
/*
* Cache related boot time checks/setups only needed on master CPU:
* - Geometry checks (kernel build and hardware agree: e.g. L1_CACHE_BYTES)
* Assume SMP only, so all cores will have same cache config. A check on
* one core suffices for all
* - IOC setup / dma callbacks only need to be done once
*/
void __init arc_cache_init_master(void) void __init arc_cache_init_master(void)
{ {
unsigned int __maybe_unused cpu = smp_processor_id(); unsigned int __maybe_unused cpu = smp_processor_id();
@ -1190,12 +1209,27 @@ void __ref arc_cache_init(void)
printk(arc_cache_mumbojumbo(0, str, sizeof(str))); printk(arc_cache_mumbojumbo(0, str, sizeof(str)));
/*
* Only master CPU needs to execute rest of function:
* - Assume SMP so all cores will have same cache config so
* any geomtry checks will be same for all
* - IOC setup / dma callbacks only need to be setup once
*/
if (!cpu) if (!cpu)
arc_cache_init_master(); arc_cache_init_master();
/*
* In PAE regime, TLB and cache maintenance ops take wider addresses
* And even if PAE is not enabled in kernel, the upper 32-bits still need
* to be zeroed to keep the ops sane.
* As an optimization for more common !PAE enabled case, zero them out
* once at init, rather than checking/setting to 0 for every runtime op
*/
if (is_isa_arcv2() && pae40_exist_but_not_enab()) {
if (IS_ENABLED(CONFIG_ARC_HAS_ICACHE))
write_aux_reg(ARC_REG_IC_PTAG_HI, 0);
if (IS_ENABLED(CONFIG_ARC_HAS_DCACHE))
write_aux_reg(ARC_REG_DC_PTAG_HI, 0);
if (l2_line_sz) {
write_aux_reg(ARC_REG_SLC_RGN_END1, 0);
write_aux_reg(ARC_REG_SLC_RGN_START1, 0);
}
}
} }


@ -153,6 +153,19 @@ static void _dma_cache_sync(phys_addr_t paddr, size_t size,
} }
} }
/*
* arc_dma_map_page - map a portion of a page for streaming DMA
*
* Ensure that any data held in the cache is appropriately discarded
* or written back.
*
* The device owns this memory once this call has completed. The CPU
* can regain ownership by calling dma_unmap_page().
*
* Note: while it takes struct page as arg, caller can "abuse" it to pass
* a region larger than PAGE_SIZE, provided it is physically contiguous
* and this still works correctly
*/
static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page, static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir, unsigned long offset, size_t size, enum dma_data_direction dir,
unsigned long attrs) unsigned long attrs)
@ -165,6 +178,24 @@ static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
return plat_phys_to_dma(dev, paddr); return plat_phys_to_dma(dev, paddr);
} }
/*
* arc_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
*
* After this call, reads by the CPU to the buffer are guaranteed to see
* whatever the device wrote there.
*
* Note: historically this routine was not implemented for ARC
*/
static void arc_dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir,
unsigned long attrs)
{
phys_addr_t paddr = plat_dma_to_phys(dev, handle);
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
_dma_cache_sync(paddr, size, dir);
}
static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg, static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir, unsigned long attrs) int nents, enum dma_data_direction dir, unsigned long attrs)
{ {
@ -178,6 +209,18 @@ static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
return nents; return nents;
} }
static void arc_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir,
unsigned long attrs)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
arc_dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir,
attrs);
}
static void arc_dma_sync_single_for_cpu(struct device *dev, static void arc_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir) dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
{ {
@ -223,7 +266,9 @@ const struct dma_map_ops arc_dma_ops = {
.free = arc_dma_free, .free = arc_dma_free,
.mmap = arc_dma_mmap, .mmap = arc_dma_mmap,
.map_page = arc_dma_map_page, .map_page = arc_dma_map_page,
.unmap_page = arc_dma_unmap_page,
.map_sg = arc_dma_map_sg, .map_sg = arc_dma_map_sg,
.unmap_sg = arc_dma_unmap_sg,
.sync_single_for_device = arc_dma_sync_single_for_device, .sync_single_for_device = arc_dma_sync_single_for_device,
.sync_single_for_cpu = arc_dma_sync_single_for_cpu, .sync_single_for_cpu = arc_dma_sync_single_for_cpu,
.sync_sg_for_cpu = arc_dma_sync_sg_for_cpu, .sync_sg_for_cpu = arc_dma_sync_sg_for_cpu,


@ -104,6 +104,8 @@
/* A copy of the ASID from the PID reg is kept in asid_cache */ /* A copy of the ASID from the PID reg is kept in asid_cache */
DEFINE_PER_CPU(unsigned int, asid_cache) = MM_CTXT_FIRST_CYCLE; DEFINE_PER_CPU(unsigned int, asid_cache) = MM_CTXT_FIRST_CYCLE;
static int __read_mostly pae_exists;
/* /*
* Utility Routine to erase a J-TLB entry * Utility Routine to erase a J-TLB entry
* Caller needs to setup Index Reg (manually or via getIndex) * Caller needs to setup Index Reg (manually or via getIndex)
@ -784,7 +786,7 @@ void read_decode_mmu_bcr(void)
mmu->u_dtlb = mmu4->u_dtlb * 4; mmu->u_dtlb = mmu4->u_dtlb * 4;
mmu->u_itlb = mmu4->u_itlb * 4; mmu->u_itlb = mmu4->u_itlb * 4;
mmu->sasid = mmu4->sasid; mmu->sasid = mmu4->sasid;
mmu->pae = mmu4->pae; pae_exists = mmu->pae = mmu4->pae;
} }
} }
@ -809,6 +811,11 @@ char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len)
return buf; return buf;
} }
int pae40_exist_but_not_enab(void)
{
return pae_exists && !is_pae40_enabled();
}
void arc_mmu_init(void) void arc_mmu_init(void)
{ {
char str[256]; char str[256];
@ -859,6 +866,9 @@ void arc_mmu_init(void)
/* swapper_pg_dir is the pgd for the kernel, used by vmalloc */ /* swapper_pg_dir is the pgd for the kernel, used by vmalloc */
write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir); write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir);
#endif #endif
if (pae40_exist_but_not_enab())
write_aux_reg(ARC_REG_TLBPD1HI, 0);
} }
/* /*


@ -1,13 +0,0 @@
#
# Copyright (C) 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
menuconfig ARC_PLAT_SIM
bool "ARC nSIM based simulation virtual platforms"
help
Support for nSIM based ARC simulation platforms
This includes the standalone nSIM (uart only) vs. System C OSCI VP


@ -20,11 +20,14 @@
*/ */
static const char *simulation_compat[] __initconst = { static const char *simulation_compat[] __initconst = {
#ifdef CONFIG_ISA_ARCOMPACT
"snps,nsim", "snps,nsim",
"snps,nsim_hs",
"snps,nsimosci", "snps,nsimosci",
#else
"snps,nsim_hs",
"snps,nsimosci_hs", "snps,nsimosci_hs",
"snps,zebu_hs", "snps,zebu_hs",
#endif
NULL, NULL,
}; };


@ -266,6 +266,7 @@
&hdmicec { &hdmicec {
status = "okay"; status = "okay";
needs-hpd;
}; };
&hsi2c_4 { &hsi2c_4 {


@ -297,6 +297,7 @@
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
status = "disabled"; status = "disabled";
ranges;
adc: adc@50030800 { adc: adc@50030800 {
compatible = "fsl,imx25-gcq"; compatible = "fsl,imx25-gcq";


@ -507,7 +507,7 @@
pinctrl_pcie: pciegrp { pinctrl_pcie: pciegrp {
fsl,pins = < fsl,pins = <
/* PCIe reset */ /* PCIe reset */
MX6QDL_PAD_EIM_BCLK__GPIO6_IO31 0x030b0 MX6QDL_PAD_EIM_DA0__GPIO3_IO00 0x030b0
MX6QDL_PAD_EIM_DA4__GPIO3_IO04 0x030b0 MX6QDL_PAD_EIM_DA4__GPIO3_IO04 0x030b0
>; >;
}; };
@ -668,7 +668,7 @@
&pcie { &pcie {
pinctrl-names = "default"; pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pcie>; pinctrl-0 = <&pinctrl_pcie>;
reset-gpio = <&gpio6 31 GPIO_ACTIVE_LOW>; reset-gpio = <&gpio3 0 GPIO_ACTIVE_LOW>;
status = "okay"; status = "okay";
}; };


@ -557,6 +557,14 @@
>; >;
}; };
pinctrl_spi4: spi4grp {
fsl,pins = <
MX7D_PAD_GPIO1_IO09__GPIO1_IO9 0x59
MX7D_PAD_GPIO1_IO12__GPIO1_IO12 0x59
MX7D_PAD_GPIO1_IO13__GPIO1_IO13 0x59
>;
};
pinctrl_tsc2046_pendown: tsc2046_pendown { pinctrl_tsc2046_pendown: tsc2046_pendown {
fsl,pins = < fsl,pins = <
MX7D_PAD_EPDC_BDR1__GPIO2_IO29 0x59 MX7D_PAD_EPDC_BDR1__GPIO2_IO29 0x59
@ -697,13 +705,5 @@
fsl,pins = < fsl,pins = <
MX7D_PAD_LPSR_GPIO1_IO01__PWM1_OUT 0x110b0 MX7D_PAD_LPSR_GPIO1_IO01__PWM1_OUT 0x110b0
>; >;
pinctrl_spi4: spi4grp {
fsl,pins = <
MX7D_PAD_GPIO1_IO09__GPIO1_IO9 0x59
MX7D_PAD_GPIO1_IO12__GPIO1_IO12 0x59
MX7D_PAD_GPIO1_IO13__GPIO1_IO13 0x59
>;
};
}; };
}; };


@ -303,7 +303,7 @@
#size-cells = <1>; #size-cells = <1>;
atmel,smc = <&hsmc>; atmel,smc = <&hsmc>;
reg = <0x10000000 0x10000000 reg = <0x10000000 0x10000000
0x40000000 0x30000000>; 0x60000000 0x30000000>;
ranges = <0x0 0x0 0x10000000 0x10000000 ranges = <0x0 0x0 0x10000000 0x10000000
0x1 0x0 0x60000000 0x10000000 0x1 0x0 0x60000000 0x10000000
0x2 0x0 0x70000000 0x10000000 0x2 0x0 0x70000000 0x10000000
@ -1048,18 +1048,18 @@
}; };
hsmc: hsmc@f8014000 { hsmc: hsmc@f8014000 {
compatible = "atmel,sama5d3-smc", "syscon", "simple-mfd"; compatible = "atmel,sama5d2-smc", "syscon", "simple-mfd";
reg = <0xf8014000 0x1000>; reg = <0xf8014000 0x1000>;
interrupts = <5 IRQ_TYPE_LEVEL_HIGH 6>; interrupts = <17 IRQ_TYPE_LEVEL_HIGH 6>;
clocks = <&hsmc_clk>; clocks = <&hsmc_clk>;
#address-cells = <1>; #address-cells = <1>;
#size-cells = <1>; #size-cells = <1>;
ranges; ranges;
pmecc: ecc-engine@ffffc070 { pmecc: ecc-engine@f8014070 {
compatible = "atmel,sama5d2-pmecc"; compatible = "atmel,sama5d2-pmecc";
reg = <0xffffc070 0x490>, reg = <0xf8014070 0x490>,
<0xffffc500 0x100>; <0xf8014500 0x100>;
}; };
}; };


@ -1,7 +1,7 @@
menuconfig ARCH_AT91 menuconfig ARCH_AT91
bool "Atmel SoCs" bool "Atmel SoCs"
depends on ARCH_MULTI_V4T || ARCH_MULTI_V5 || ARCH_MULTI_V7 || ARM_SINGLE_ARMV7M depends on ARCH_MULTI_V4T || ARCH_MULTI_V5 || ARCH_MULTI_V7 || ARM_SINGLE_ARMV7M
select ARM_CPU_SUSPEND if PM select ARM_CPU_SUSPEND if PM && ARCH_MULTI_V7
select COMMON_CLK_AT91 select COMMON_CLK_AT91
select GPIOLIB select GPIOLIB
select PINCTRL select PINCTRL


@ -608,6 +608,9 @@ static void __init at91_pm_init(void (*pm_idle)(void))
void __init at91rm9200_pm_init(void) void __init at91rm9200_pm_init(void)
{ {
if (!IS_ENABLED(CONFIG_SOC_AT91RM9200))
return;
at91_dt_ramc(); at91_dt_ramc();
/* /*
@ -620,18 +623,27 @@ void __init at91rm9200_pm_init(void)
void __init at91sam9_pm_init(void) void __init at91sam9_pm_init(void)
{ {
if (!IS_ENABLED(CONFIG_SOC_AT91SAM9))
return;
at91_dt_ramc(); at91_dt_ramc();
at91_pm_init(at91sam9_idle); at91_pm_init(at91sam9_idle);
} }
void __init sama5_pm_init(void) void __init sama5_pm_init(void)
{ {
if (!IS_ENABLED(CONFIG_SOC_SAMA5))
return;
at91_dt_ramc(); at91_dt_ramc();
at91_pm_init(NULL); at91_pm_init(NULL);
} }
void __init sama5d2_pm_init(void) void __init sama5d2_pm_init(void)
{ {
if (!IS_ENABLED(CONFIG_SOC_SAMA5D2))
return;
at91_pm_backup_init(); at91_pm_backup_init();
sama5_pm_init(); sama5_pm_init();
} }


@ -51,6 +51,7 @@
compatible = "sinovoip,bananapi-m64", "allwinner,sun50i-a64"; compatible = "sinovoip,bananapi-m64", "allwinner,sun50i-a64";
aliases { aliases {
ethernet0 = &emac;
serial0 = &uart0; serial0 = &uart0;
serial1 = &uart1; serial1 = &uart1;
}; };


@ -51,6 +51,7 @@
compatible = "pine64,pine64", "allwinner,sun50i-a64"; compatible = "pine64,pine64", "allwinner,sun50i-a64";
aliases { aliases {
ethernet0 = &emac;
serial0 = &uart0; serial0 = &uart0;
serial1 = &uart1; serial1 = &uart1;
serial2 = &uart2; serial2 = &uart2;


@ -53,6 +53,7 @@
"allwinner,sun50i-a64"; "allwinner,sun50i-a64";
aliases { aliases {
ethernet0 = &emac;
serial0 = &uart0; serial0 = &uart0;
}; };


@ -120,5 +120,8 @@
}; };
&pio { &pio {
interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
compatible = "allwinner,sun50i-h5-pinctrl"; compatible = "allwinner,sun50i-h5-pinctrl";
}; };


@ -45,7 +45,7 @@
stdout-path = "serial0:115200n8"; stdout-path = "serial0:115200n8";
}; };
audio_clkout: audio_clkout { audio_clkout: audio-clkout {
/* /*
* This is same as <&rcar_sound 0> * This is same as <&rcar_sound 0>
* but needed to avoid cs2000/rcar_sound probe dead-lock * but needed to avoid cs2000/rcar_sound probe dead-lock


@ -65,13 +65,13 @@ DECLARE_PER_CPU(const struct arch_timer_erratum_workaround *,
u64 _val; \ u64 _val; \
if (needs_unstable_timer_counter_workaround()) { \ if (needs_unstable_timer_counter_workaround()) { \
const struct arch_timer_erratum_workaround *wa; \ const struct arch_timer_erratum_workaround *wa; \
preempt_disable(); \ preempt_disable_notrace(); \
wa = __this_cpu_read(timer_unstable_counter_workaround); \ wa = __this_cpu_read(timer_unstable_counter_workaround); \
if (wa && wa->read_##reg) \ if (wa && wa->read_##reg) \
_val = wa->read_##reg(); \ _val = wa->read_##reg(); \
else \ else \
_val = read_sysreg(reg); \ _val = read_sysreg(reg); \
preempt_enable(); \ preempt_enable_notrace(); \
} else { \ } else { \
_val = read_sysreg(reg); \ _val = read_sysreg(reg); \
} \ } \


@ -114,10 +114,10 @@
/* /*
* This is the base location for PIE (ET_DYN with INTERP) loads. On * This is the base location for PIE (ET_DYN with INTERP) loads. On
* 64-bit, this is raised to 4GB to leave the entire 32-bit address * 64-bit, this is above 4GB to leave the entire 32-bit address
* space open for things that want to use the area for 32-bit pointers. * space open for things that want to use the area for 32-bit pointers.
*/ */
#define ELF_ET_DYN_BASE 0x100000000UL #define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__


@ -161,9 +161,11 @@ void fpsimd_flush_thread(void)
{ {
if (!system_supports_fpsimd()) if (!system_supports_fpsimd())
return; return;
preempt_disable();
memset(&current->thread.fpsimd_state, 0, sizeof(struct fpsimd_state)); memset(&current->thread.fpsimd_state, 0, sizeof(struct fpsimd_state));
fpsimd_flush_task_state(current); fpsimd_flush_task_state(current);
set_thread_flag(TIF_FOREIGN_FPSTATE); set_thread_flag(TIF_FOREIGN_FPSTATE);
preempt_enable();
} }
/* /*


@ -354,7 +354,6 @@ __primary_switched:
tst x23, ~(MIN_KIMG_ALIGN - 1) // already running randomized? tst x23, ~(MIN_KIMG_ALIGN - 1) // already running randomized?
b.ne 0f b.ne 0f
mov x0, x21 // pass FDT address in x0 mov x0, x21 // pass FDT address in x0
mov x1, x23 // pass modulo offset in x1
bl kaslr_early_init // parse FDT for KASLR options bl kaslr_early_init // parse FDT for KASLR options
cbz x0, 0f // KASLR disabled? just proceed cbz x0, 0f // KASLR disabled? just proceed
orr x23, x23, x0 // record KASLR offset orr x23, x23, x0 // record KASLR offset


@ -75,7 +75,7 @@ extern void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size,
* containing function pointers) to be reinitialized, and zero-initialized * containing function pointers) to be reinitialized, and zero-initialized
* .bss variables will be reset to 0. * .bss variables will be reset to 0.
*/ */
u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset) u64 __init kaslr_early_init(u64 dt_phys)
{ {
void *fdt; void *fdt;
u64 seed, offset, mask, module_range; u64 seed, offset, mask, module_range;
@ -131,15 +131,17 @@ u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
/* /*
* The kernel Image should not extend across a 1GB/32MB/512MB alignment * The kernel Image should not extend across a 1GB/32MB/512MB alignment
* boundary (for 4KB/16KB/64KB granule kernels, respectively). If this * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
* happens, increase the KASLR offset by the size of the kernel image * happens, round down the KASLR offset by (1 << SWAPPER_TABLE_SHIFT).
* rounded up by SWAPPER_BLOCK_SIZE. *
* NOTE: The references to _text and _end below will already take the
* modulo offset (the physical displacement modulo 2 MB) into
* account, given that the physical placement is controlled by
* the loader, and will not change as a result of the virtual
* mapping we choose.
*/ */
if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) != if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
(((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT)) { (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
u64 kimg_sz = _end - _text; offset = round_down(offset, 1 << SWAPPER_TABLE_SHIFT);
offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
& mask;
}
if (IS_ENABLED(CONFIG_KASAN)) if (IS_ENABLED(CONFIG_KASAN))
/* /*


@ -435,8 +435,11 @@ retry:
* the mmap_sem because it would already be released * the mmap_sem because it would already be released
* in __lock_page_or_retry in mm/filemap.c. * in __lock_page_or_retry in mm/filemap.c.
*/ */
if (fatal_signal_pending(current)) if (fatal_signal_pending(current)) {
if (!user_mode(regs))
goto no_context;
return 0; return 0;
}
/* /*
* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk of * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk of


@ -2260,7 +2260,7 @@ config CPU_R4K_CACHE_TLB
config MIPS_MT_SMP config MIPS_MT_SMP
bool "MIPS MT SMP support (1 TC on each available VPE)" bool "MIPS MT SMP support (1 TC on each available VPE)"
depends on SYS_SUPPORTS_MULTITHREADING && !CPU_MIPSR6 depends on SYS_SUPPORTS_MULTITHREADING && !CPU_MIPSR6 && !CPU_MICROMIPS
select CPU_MIPSR2_IRQ_VI select CPU_MIPSR2_IRQ_VI
select CPU_MIPSR2_IRQ_EI select CPU_MIPSR2_IRQ_EI
select SYNC_R4K select SYNC_R4K


@ -243,8 +243,21 @@ include arch/mips/Kbuild.platforms
ifdef CONFIG_PHYSICAL_START ifdef CONFIG_PHYSICAL_START
load-y = $(CONFIG_PHYSICAL_START) load-y = $(CONFIG_PHYSICAL_START)
endif endif
entry-y = 0x$(shell $(NM) vmlinux 2>/dev/null \
entry-noisa-y = 0x$(shell $(NM) vmlinux 2>/dev/null \
| grep "\bkernel_entry\b" | cut -f1 -d \ ) | grep "\bkernel_entry\b" | cut -f1 -d \ )
ifdef CONFIG_CPU_MICROMIPS
#
# Set the ISA bit, since the kernel_entry symbol in the ELF will have it
# clear which would lead to images containing addresses which bootloaders may
# jump to as MIPS32 code.
#
entry-y = $(patsubst %0,%1,$(patsubst %2,%3,$(patsubst %4,%5, \
$(patsubst %6,%7,$(patsubst %8,%9,$(patsubst %a,%b, \
$(patsubst %c,%d,$(patsubst %e,%f,$(entry-noisa-y)))))))))
else
entry-y = $(entry-noisa-y)
endif
cflags-y += -I$(srctree)/arch/mips/include/asm/mach-generic cflags-y += -I$(srctree)/arch/mips/include/asm/mach-generic
drivers-$(CONFIG_PCI) += arch/mips/pci/ drivers-$(CONFIG_PCI) += arch/mips/pci/

arch/mips/boot/compressed/.gitignore (new file, 2 additions)

@ -0,0 +1,2 @@
ashldi3.c
bswapsi.c


@ -13,9 +13,9 @@
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/of_platform.h> #include <linux/of_platform.h>
#include <linux/io.h>
#include <asm/octeon/octeon.h> #include <asm/octeon/octeon.h>
#include <asm/octeon/cvmx-gpio-defs.h>
/* USB Control Register */ /* USB Control Register */
union cvm_usbdrd_uctl_ctl { union cvm_usbdrd_uctl_ctl {


@ -147,23 +147,12 @@
* Find irq with highest priority * Find irq with highest priority
*/ */
# open coded PTR_LA t1, cpu_mask_nr_tbl # open coded PTR_LA t1, cpu_mask_nr_tbl
#if (_MIPS_SZPTR == 32) #if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
# open coded la t1, cpu_mask_nr_tbl # open coded la t1, cpu_mask_nr_tbl
lui t1, %hi(cpu_mask_nr_tbl) lui t1, %hi(cpu_mask_nr_tbl)
addiu t1, %lo(cpu_mask_nr_tbl) addiu t1, %lo(cpu_mask_nr_tbl)
#else
#endif #error GCC `-msym32' option required for 64-bit DECstation builds
#if (_MIPS_SZPTR == 64)
# open coded dla t1, cpu_mask_nr_tbl
.set push
.set noat
lui t1, %highest(cpu_mask_nr_tbl)
lui AT, %hi(cpu_mask_nr_tbl)
daddiu t1, t1, %higher(cpu_mask_nr_tbl)
daddiu AT, AT, %lo(cpu_mask_nr_tbl)
dsll t1, 32
daddu t1, t1, AT
.set pop
#endif #endif
1: lw t2,(t1) 1: lw t2,(t1)
nop nop
@ -214,23 +203,12 @@
* Find irq with highest priority * Find irq with highest priority
*/ */
# open coded PTR_LA t1,asic_mask_nr_tbl # open coded PTR_LA t1,asic_mask_nr_tbl
#if (_MIPS_SZPTR == 32) #if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
# open coded la t1, asic_mask_nr_tbl # open coded la t1, asic_mask_nr_tbl
lui t1, %hi(asic_mask_nr_tbl) lui t1, %hi(asic_mask_nr_tbl)
addiu t1, %lo(asic_mask_nr_tbl) addiu t1, %lo(asic_mask_nr_tbl)
#else
#endif #error GCC `-msym32' option required for 64-bit DECstation builds
#if (_MIPS_SZPTR == 64)
# open coded dla t1, asic_mask_nr_tbl
.set push
.set noat
lui t1, %highest(asic_mask_nr_tbl)
lui AT, %hi(asic_mask_nr_tbl)
daddiu t1, t1, %higher(asic_mask_nr_tbl)
daddiu AT, AT, %lo(asic_mask_nr_tbl)
dsll t1, 32
daddu t1, t1, AT
.set pop
#endif #endif
2: lw t2,(t1) 2: lw t2,(t1)
nop nop


@ -9,6 +9,8 @@
#ifndef _ASM_CACHE_H #ifndef _ASM_CACHE_H
#define _ASM_CACHE_H #define _ASM_CACHE_H
#include <kmalloc.h>
#define L1_CACHE_SHIFT CONFIG_MIPS_L1_CACHE_SHIFT #define L1_CACHE_SHIFT CONFIG_MIPS_L1_CACHE_SHIFT
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT) #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)


@ -428,6 +428,9 @@
#ifndef cpu_scache_line_size #ifndef cpu_scache_line_size
#define cpu_scache_line_size() cpu_data[0].scache.linesz #define cpu_scache_line_size() cpu_data[0].scache.linesz
#endif #endif
#ifndef cpu_tcache_line_size
#define cpu_tcache_line_size() cpu_data[0].tcache.linesz
#endif
#ifndef cpu_hwrena_impl_bits #ifndef cpu_hwrena_impl_bits
#define cpu_hwrena_impl_bits 0 #define cpu_hwrena_impl_bits 0


@ -33,6 +33,10 @@
#define CVMX_L2C_DBG (CVMX_ADD_IO_SEG(0x0001180080000030ull)) #define CVMX_L2C_DBG (CVMX_ADD_IO_SEG(0x0001180080000030ull))
#define CVMX_L2C_CFG (CVMX_ADD_IO_SEG(0x0001180080000000ull)) #define CVMX_L2C_CFG (CVMX_ADD_IO_SEG(0x0001180080000000ull))
#define CVMX_L2C_CTL (CVMX_ADD_IO_SEG(0x0001180080800000ull)) #define CVMX_L2C_CTL (CVMX_ADD_IO_SEG(0x0001180080800000ull))
#define CVMX_L2C_ERR_TDTX(block_id) \
(CVMX_ADD_IO_SEG(0x0001180080A007E0ull) + ((block_id) & 3) * 0x40000ull)
#define CVMX_L2C_ERR_TTGX(block_id) \
(CVMX_ADD_IO_SEG(0x0001180080A007E8ull) + ((block_id) & 3) * 0x40000ull)
#define CVMX_L2C_LCKBASE (CVMX_ADD_IO_SEG(0x0001180080000058ull)) #define CVMX_L2C_LCKBASE (CVMX_ADD_IO_SEG(0x0001180080000058ull))
#define CVMX_L2C_LCKOFF (CVMX_ADD_IO_SEG(0x0001180080000060ull)) #define CVMX_L2C_LCKOFF (CVMX_ADD_IO_SEG(0x0001180080000060ull))
#define CVMX_L2C_PFCTL (CVMX_ADD_IO_SEG(0x0001180080000090ull)) #define CVMX_L2C_PFCTL (CVMX_ADD_IO_SEG(0x0001180080000090ull))
@ -66,9 +70,40 @@
((offset) & 1) * 8) ((offset) & 1) * 8)
#define CVMX_L2C_WPAR_PPX(offset) (CVMX_ADD_IO_SEG(0x0001180080840000ull) + \ #define CVMX_L2C_WPAR_PPX(offset) (CVMX_ADD_IO_SEG(0x0001180080840000ull) + \
((offset) & 31) * 8) ((offset) & 31) * 8)
#define CVMX_L2D_FUS3 (CVMX_ADD_IO_SEG(0x00011800800007B8ull))
union cvmx_l2c_err_tdtx {
uint64_t u64;
struct cvmx_l2c_err_tdtx_s {
__BITFIELD_FIELD(uint64_t dbe:1,
__BITFIELD_FIELD(uint64_t sbe:1,
__BITFIELD_FIELD(uint64_t vdbe:1,
__BITFIELD_FIELD(uint64_t vsbe:1,
__BITFIELD_FIELD(uint64_t syn:10,
__BITFIELD_FIELD(uint64_t reserved_22_49:28,
__BITFIELD_FIELD(uint64_t wayidx:18,
__BITFIELD_FIELD(uint64_t reserved_2_3:2,
__BITFIELD_FIELD(uint64_t type:2,
;)))))))))
} s;
};
union cvmx_l2c_err_ttgx {
uint64_t u64;
struct cvmx_l2c_err_ttgx_s {
__BITFIELD_FIELD(uint64_t dbe:1,
__BITFIELD_FIELD(uint64_t sbe:1,
__BITFIELD_FIELD(uint64_t noway:1,
__BITFIELD_FIELD(uint64_t reserved_56_60:5,
__BITFIELD_FIELD(uint64_t syn:6,
__BITFIELD_FIELD(uint64_t reserved_22_49:28,
__BITFIELD_FIELD(uint64_t wayidx:15,
__BITFIELD_FIELD(uint64_t reserved_2_6:5,
__BITFIELD_FIELD(uint64_t type:2,
;)))))))))
} s;
};
union cvmx_l2c_cfg { union cvmx_l2c_cfg {
uint64_t u64; uint64_t u64;
struct cvmx_l2c_cfg_s { struct cvmx_l2c_cfg_s {


@ -0,0 +1,60 @@
/***********************license start***************
* Author: Cavium Networks
*
* Contact: support@caviumnetworks.com
* This file is part of the OCTEON SDK
*
* Copyright (c) 2003-2017 Cavium, Inc.
*
* This file is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, Version 2, as
* published by the Free Software Foundation.
*
* This file is distributed in the hope that it will be useful, but
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
* NONINFRINGEMENT. See the GNU General Public License for more
* details.
*
* You should have received a copy of the GNU General Public License
* along with this file; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
* or visit http://www.gnu.org/licenses/.
*
* This file may also be available under a different license from Cavium.
* Contact Cavium Networks for more information
***********************license end**************************************/
#ifndef __CVMX_L2D_DEFS_H__
#define __CVMX_L2D_DEFS_H__
#define CVMX_L2D_ERR (CVMX_ADD_IO_SEG(0x0001180080000010ull))
#define CVMX_L2D_FUS3 (CVMX_ADD_IO_SEG(0x00011800800007B8ull))
union cvmx_l2d_err {
uint64_t u64;
struct cvmx_l2d_err_s {
__BITFIELD_FIELD(uint64_t reserved_6_63:58,
__BITFIELD_FIELD(uint64_t bmhclsel:1,
__BITFIELD_FIELD(uint64_t ded_err:1,
__BITFIELD_FIELD(uint64_t sec_err:1,
__BITFIELD_FIELD(uint64_t ded_intena:1,
__BITFIELD_FIELD(uint64_t sec_intena:1,
__BITFIELD_FIELD(uint64_t ecc_ena:1,
;)))))))
} s;
};
union cvmx_l2d_fus3 {
uint64_t u64;
struct cvmx_l2d_fus3_s {
__BITFIELD_FIELD(uint64_t reserved_40_63:24,
__BITFIELD_FIELD(uint64_t ema_ctl:3,
__BITFIELD_FIELD(uint64_t reserved_34_36:3,
__BITFIELD_FIELD(uint64_t q3fus:34,
;))))
} s;
};
#endif


@ -62,6 +62,7 @@ enum cvmx_mips_space {
#include <asm/octeon/cvmx-iob-defs.h> #include <asm/octeon/cvmx-iob-defs.h>
#include <asm/octeon/cvmx-ipd-defs.h> #include <asm/octeon/cvmx-ipd-defs.h>
#include <asm/octeon/cvmx-l2c-defs.h> #include <asm/octeon/cvmx-l2c-defs.h>
#include <asm/octeon/cvmx-l2d-defs.h>
#include <asm/octeon/cvmx-l2t-defs.h> #include <asm/octeon/cvmx-l2t-defs.h>
#include <asm/octeon/cvmx-led-defs.h> #include <asm/octeon/cvmx-led-defs.h>
#include <asm/octeon/cvmx-mio-defs.h> #include <asm/octeon/cvmx-mio-defs.h>


@ -376,9 +376,6 @@ asmlinkage void start_secondary(void)
cpumask_set_cpu(cpu, &cpu_coherent_mask); cpumask_set_cpu(cpu, &cpu_coherent_mask);
notify_cpu_starting(cpu); notify_cpu_starting(cpu);
complete(&cpu_running);
synchronise_count_slave(cpu);
set_cpu_online(cpu, true); set_cpu_online(cpu, true);
set_cpu_sibling_map(cpu); set_cpu_sibling_map(cpu);
@ -386,6 +383,9 @@ asmlinkage void start_secondary(void)
calculate_cpu_foreign_map(); calculate_cpu_foreign_map();
complete(&cpu_running);
synchronise_count_slave(cpu);
/* /*
* irq will be enabled in ->smp_finish(), enabling it too early * irq will be enabled in ->smp_finish(), enabling it too early
* is dangerous. * is dangerous.


@ -48,7 +48,7 @@
#include "uasm.c" #include "uasm.c"
static const struct insn const insn_table[insn_invalid] = { static const struct insn insn_table[insn_invalid] = {
[insn_addiu] = {M(addiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM}, [insn_addiu] = {M(addiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM},
[insn_addu] = {M(spec_op, 0, 0, 0, 0, addu_op), RS | RT | RD}, [insn_addu] = {M(spec_op, 0, 0, 0, 0, addu_op), RS | RT | RD},
[insn_and] = {M(spec_op, 0, 0, 0, 0, and_op), RS | RT | RD}, [insn_and] = {M(spec_op, 0, 0, 0, 0, and_op), RS | RT | RD},


@ -28,16 +28,15 @@ EXPORT_SYMBOL(PCIBIOS_MIN_MEM);
static int __init pcibios_set_cache_line_size(void) static int __init pcibios_set_cache_line_size(void)
{ {
struct cpuinfo_mips *c = &current_cpu_data;
unsigned int lsize; unsigned int lsize;
/* /*
* Set PCI cacheline size to that of the highest level in the * Set PCI cacheline size to that of the highest level in the
* cache hierarchy. * cache hierarchy.
*/ */
lsize = c->dcache.linesz; lsize = cpu_dcache_line_size();
lsize = c->scache.linesz ? : lsize; lsize = cpu_scache_line_size() ? : lsize;
lsize = c->tcache.linesz ? : lsize; lsize = cpu_tcache_line_size() ? : lsize;
BUG_ON(!lsize); BUG_ON(!lsize);


@ -35,7 +35,8 @@ static __always_inline long gettimeofday_fallback(struct timeval *_tv,
" syscall\n" " syscall\n"
: "=r" (ret), "=r" (error) : "=r" (ret), "=r" (error)
: "r" (tv), "r" (tz), "r" (nr) : "r" (tv), "r" (tz), "r" (nr)
: "memory"); : "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
"$14", "$15", "$24", "$25", "hi", "lo", "memory");
return error ? -ret : ret; return error ? -ret : ret;
} }
@ -55,7 +56,8 @@ static __always_inline long clock_gettime_fallback(clockid_t _clkid,
" syscall\n" " syscall\n"
: "=r" (ret), "=r" (error) : "=r" (ret), "=r" (error)
: "r" (clkid), "r" (ts), "r" (nr) : "r" (clkid), "r" (ts), "r" (nr)
: "memory"); : "$1", "$3", "$8", "$9", "$10", "$11", "$12", "$13",
"$14", "$15", "$24", "$25", "hi", "lo", "memory");
return error ? -ret : ret; return error ? -ret : ret;
} }


@ -199,7 +199,7 @@ config PPC
select HAVE_OPTPROBES if PPC64 select HAVE_OPTPROBES if PPC64
select HAVE_PERF_EVENTS select HAVE_PERF_EVENTS
select HAVE_PERF_EVENTS_NMI if PPC64 select HAVE_PERF_EVENTS_NMI if PPC64
select HAVE_HARDLOCKUP_DETECTOR_PERF if HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
select HAVE_PERF_REGS select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP select HAVE_PERF_USER_STACK_DUMP
select HAVE_RCU_TABLE_FREE if SMP select HAVE_RCU_TABLE_FREE if SMP


@ -293,7 +293,8 @@ CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_STACK_USAGE=y CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_STACKOVERFLOW=y CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_LOCKUP_DETECTOR=y CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_LATENCYTOP=y CONFIG_LATENCYTOP=y
CONFIG_SCHED_TRACER=y CONFIG_SCHED_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y CONFIG_BLK_DEV_IO_TRACE=y


@@ -324,7 +324,8 @@ CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_STACK_USAGE=y
 CONFIG_DEBUG_STACKOVERFLOW=y
-CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
+CONFIG_HARDLOCKUP_DETECTOR=y
 CONFIG_DEBUG_MUTEXES=y
 CONFIG_LATENCYTOP=y
 CONFIG_SCHED_TRACER=y
@@ -291,7 +291,8 @@ CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_STACK_USAGE=y
 CONFIG_DEBUG_STACKOVERFLOW=y
-CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
+CONFIG_HARDLOCKUP_DETECTOR=y
 CONFIG_LATENCYTOP=y
 CONFIG_SCHED_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
@@ -223,17 +223,27 @@ system_call_exit:
 andi. r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)
 bne- .Lsyscall_exit_work
-/* If MSR_FP and MSR_VEC are set in user msr, then no need to restore */
-li r7,MSR_FP
+andi. r0,r8,MSR_FP
+beq 2f
 #ifdef CONFIG_ALTIVEC
-oris r7,r7,MSR_VEC@h
+andis. r0,r8,MSR_VEC@h
+bne 3f
 #endif
-and r0,r8,r7
-cmpd r0,r7
-bne .Lsyscall_restore_math
-.Lsyscall_restore_math_cont:
+2: addi r3,r1,STACK_FRAME_OVERHEAD
+#ifdef CONFIG_PPC_BOOK3S
+li r10,MSR_RI
+mtmsrd r10,1 /* Restore RI */
+#endif
+bl restore_math
+#ifdef CONFIG_PPC_BOOK3S
+li r11,0
+mtmsrd r11,1
+#endif
+ld r8,_MSR(r1)
+ld r3,RESULT(r1)
+li r11,-MAX_ERRNO
-cmpld r3,r11
+3: cmpld r3,r11
 ld r5,_CCR(r1)
 bge- .Lsyscall_error
 .Lsyscall_error_cont:
@@ -267,40 +277,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 std r5,_CCR(r1)
 b .Lsyscall_error_cont
-.Lsyscall_restore_math:
-/*
- * Some initial tests from restore_math to avoid the heavyweight
- * C code entry and MSR manipulations.
- */
-LOAD_REG_IMMEDIATE(r0, MSR_TS_MASK)
-and. r0,r0,r8
-bne 1f
-ld r7,PACACURRENT(r13)
-lbz r0,THREAD+THREAD_LOAD_FP(r7)
-#ifdef CONFIG_ALTIVEC
-lbz r6,THREAD+THREAD_LOAD_VEC(r7)
-add r0,r0,r6
-#endif
-cmpdi r0,0
-beq .Lsyscall_restore_math_cont
-1: addi r3,r1,STACK_FRAME_OVERHEAD
-#ifdef CONFIG_PPC_BOOK3S
-li r10,MSR_RI
-mtmsrd r10,1 /* Restore RI */
-#endif
-bl restore_math
-#ifdef CONFIG_PPC_BOOK3S
-li r11,0
-mtmsrd r11,1
-#endif
-/* Restore volatiles, reload MSR from updated one */
-ld r8,_MSR(r1)
-ld r3,RESULT(r1)
-li r11,-MAX_ERRNO
-b .Lsyscall_restore_math_cont
 /* Traced system call support */
 .Lsyscall_dotrace:
 bl save_nvgprs
@@ -362,7 +362,8 @@ void enable_kernel_vsx(void)
 cpumsr = msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);
-if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) {
+if (current->thread.regs &&
+    (current->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP))) {
 check_if_tm_restore_required(current);
 /*
  * If a thread has already been reclaimed then the
@@ -386,7 +387,7 @@ void flush_vsx_to_thread(struct task_struct *tsk)
 {
 if (tsk->thread.regs) {
 preempt_disable();
-if (tsk->thread.regs->msr & MSR_VSX) {
+if (tsk->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP)) {
 BUG_ON(tsk != current);
 giveup_vsx(tsk);
 }
@@ -511,10 +512,6 @@ void restore_math(struct pt_regs *regs)
 {
 unsigned long msr;
-/*
- * Syscall exit makes a similar initial check before branching
- * to restore_math. Keep them in synch.
- */
 if (!msr_tm_active(regs->msr) &&
 !current->thread.load_fp && !loadvec(current->thread))
 return;
@@ -351,7 +351,7 @@ static void nmi_ipi_lock_start(unsigned long *flags)
 hard_irq_disable();
 while (atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) {
 raw_local_irq_restore(*flags);
-cpu_relax();
+spin_until_cond(atomic_read(&__nmi_ipi_lock) == 0);
 raw_local_irq_save(*flags);
 hard_irq_disable();
 }
@@ -360,7 +360,7 @@ static void nmi_ipi_lock_start(unsigned long *flags)
 static void nmi_ipi_lock(void)
 {
 while (atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1)
-cpu_relax();
+spin_until_cond(atomic_read(&__nmi_ipi_lock) == 0);
 }
 static void nmi_ipi_unlock(void)
@@ -475,7 +475,7 @@ int smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us)
 nmi_ipi_lock_start(&flags);
 while (nmi_ipi_busy_count) {
 nmi_ipi_unlock_end(&flags);
-cpu_relax();
+spin_until_cond(nmi_ipi_busy_count == 0);
 nmi_ipi_lock_start(&flags);
 }
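The three hunks above replace a blind cpu_relax() retry with spin_until_cond(), so a waiter spins on a plain read and only retries the expensive atomic once the lock or busy flag looks clear. A rough user-space analogue of that acquire pattern, sketched with C11 atomics and illustrative names (this is not the kernel implementation):

#include <stdatomic.h>

/* Illustrative test-and-test-and-set lock: spin read-only until the lock
 * looks free, then make one atomic attempt, mirroring the
 * spin-until-cond-before-cmpxchg structure introduced above. */
struct ttas_lock {
	atomic_int locked;		/* 0 = free, 1 = held */
};

static void ttas_lock_acquire(struct ttas_lock *lock)
{
	for (;;) {
		/* Cheap wait: no stores, so no cache-line ping-pong. */
		while (atomic_load_explicit(&lock->locked,
					    memory_order_relaxed))
			;	/* a cpu_relax()-style pause would go here */

		/* One expensive atomic attempt; it may still lose a race. */
		if (!atomic_exchange_explicit(&lock->locked, 1,
					      memory_order_acquire))
			return;
	}
}

static void ttas_lock_release(struct ttas_lock *lock)
{
	atomic_store_explicit(&lock->locked, 0, memory_order_release);
}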

@@ -71,15 +71,20 @@ static inline void wd_smp_lock(unsigned long *flags)
  * This may be called from low level interrupt handlers at some
  * point in future.
  */
-local_irq_save(*flags);
-while (unlikely(test_and_set_bit_lock(0, &__wd_smp_lock)))
-cpu_relax();
+raw_local_irq_save(*flags);
+hard_irq_disable(); /* Make it soft-NMI safe */
+while (unlikely(test_and_set_bit_lock(0, &__wd_smp_lock))) {
+raw_local_irq_restore(*flags);
+spin_until_cond(!test_bit(0, &__wd_smp_lock));
+raw_local_irq_save(*flags);
+hard_irq_disable();
+}
 }
 static inline void wd_smp_unlock(unsigned long *flags)
 {
 clear_bit_unlock(0, &__wd_smp_lock);
-local_irq_restore(*flags);
+raw_local_irq_restore(*flags);
 }
 static void wd_lockup_ipi(struct pt_regs *regs)
@@ -96,10 +101,10 @@ static void wd_lockup_ipi(struct pt_regs *regs)
 nmi_panic(regs, "Hard LOCKUP");
 }
-static void set_cpu_stuck(int cpu, u64 tb)
+static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
 {
-cpumask_set_cpu(cpu, &wd_smp_cpus_stuck);
-cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
+cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
+cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
 if (cpumask_empty(&wd_smp_cpus_pending)) {
 wd_smp_last_reset_tb = tb;
 cpumask_andnot(&wd_smp_cpus_pending,
@@ -107,6 +112,10 @@ static void set_cpu_stuck(int cpu, u64 tb)
 &wd_smp_cpus_stuck);
 }
 }
+static void set_cpu_stuck(int cpu, u64 tb)
+{
+set_cpumask_stuck(cpumask_of(cpu), tb);
+}
 static void watchdog_smp_panic(int cpu, u64 tb)
 {
@@ -135,11 +144,9 @@ static void watchdog_smp_panic(int cpu, u64 tb)
 }
 smp_flush_nmi_ipi(1000000);
-/* Take the stuck CPU out of the watch group */
-for_each_cpu(c, &wd_smp_cpus_pending)
-set_cpu_stuck(c, tb);
-out:
+/* Take the stuck CPUs out of the watch group */
+set_cpumask_stuck(&wd_smp_cpus_pending, tb);
 wd_smp_unlock(&flags);
 printk_safe_flush();
@@ -152,6 +159,11 @@ out:
 if (hardlockup_panic)
 nmi_panic(NULL, "Hard LOCKUP");
+return;
+out:
+wd_smp_unlock(&flags);
 }
 static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
@@ -258,9 +270,11 @@ static void wd_timer_fn(unsigned long data)
 void arch_touch_nmi_watchdog(void)
 {
+unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
 int cpu = smp_processor_id();
-watchdog_timer_interrupt(cpu);
+if (get_tb() - per_cpu(wd_timer_tb, cpu) >= ticks)
+watchdog_timer_interrupt(cpu);
 }
 EXPORT_SYMBOL(arch_touch_nmi_watchdog);
@@ -283,6 +297,8 @@ static void stop_watchdog_timer_on(unsigned int cpu)
 static int start_wd_on_cpu(unsigned int cpu)
 {
+unsigned long flags;
 if (cpumask_test_cpu(cpu, &wd_cpus_enabled)) {
 WARN_ON(1);
 return 0;
@@ -297,12 +313,14 @@ static int start_wd_on_cpu(unsigned int cpu)
 if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
 return 0;
+wd_smp_lock(&flags);
 cpumask_set_cpu(cpu, &wd_cpus_enabled);
 if (cpumask_weight(&wd_cpus_enabled) == 1) {
 cpumask_set_cpu(cpu, &wd_smp_cpus_pending);
 wd_smp_last_reset_tb = get_tb();
 }
-smp_wmb();
+wd_smp_unlock(&flags);
 start_watchdog_timer_on(cpu);
 return 0;
@@ -310,12 +328,17 @@ static int start_wd_on_cpu(unsigned int cpu)
 static int stop_wd_on_cpu(unsigned int cpu)
 {
+unsigned long flags;
 if (!cpumask_test_cpu(cpu, &wd_cpus_enabled))
 return 0; /* Can happen in CPU unplug case */
 stop_watchdog_timer_on(cpu);
+wd_smp_lock(&flags);
 cpumask_clear_cpu(cpu, &wd_cpus_enabled);
+wd_smp_unlock(&flags);
 wd_smp_clear_cpu_pending(cpu, get_tb());
 return 0;
@@ -56,6 +56,7 @@ u64 pnv_first_deep_stop_state = MAX_STOP_STATE;
  */
 static u64 pnv_deepest_stop_psscr_val;
 static u64 pnv_deepest_stop_psscr_mask;
+static u64 pnv_deepest_stop_flag;
 static bool deepest_stop_found;
 static int pnv_save_sprs_for_deep_states(void)
@@ -185,8 +186,40 @@ static void pnv_alloc_idle_core_states(void)
 update_subcore_sibling_mask();
-if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT)
-pnv_save_sprs_for_deep_states();
+if (supported_cpuidle_states & OPAL_PM_LOSE_FULL_CONTEXT) {
+int rc = pnv_save_sprs_for_deep_states();
+if (likely(!rc))
+return;
+/*
+ * The stop-api is unable to restore hypervisor
+ * resources on wakeup from platform idle states which
+ * lose full context. So disable such states.
+ */
+supported_cpuidle_states &= ~OPAL_PM_LOSE_FULL_CONTEXT;
+pr_warn("cpuidle-powernv: Disabling idle states that lose full context\n");
+pr_warn("cpuidle-powernv: Idle power-savings, CPU-Hotplug affected\n");
+if (cpu_has_feature(CPU_FTR_ARCH_300) &&
+    (pnv_deepest_stop_flag & OPAL_PM_LOSE_FULL_CONTEXT)) {
+/*
+ * Use the default stop state for CPU-Hotplug
+ * if available.
+ */
+if (default_stop_found) {
+pnv_deepest_stop_psscr_val =
+pnv_default_stop_val;
+pnv_deepest_stop_psscr_mask =
+pnv_default_stop_mask;
+pr_warn("cpuidle-powernv: Offlined CPUs will stop with psscr = 0x%016llx\n",
+pnv_deepest_stop_psscr_val);
+} else { /* Fallback to snooze loop for CPU-Hotplug */
+deepest_stop_found = false;
+pr_warn("cpuidle-powernv: Offlined CPUs will busy wait\n");
+}
+}
+}
 }
 u32 pnv_get_supported_cpuidle_states(void)
@@ -375,7 +408,8 @@ unsigned long pnv_cpu_offline(unsigned int cpu)
 pnv_deepest_stop_psscr_val;
 srr1 = power9_idle_stop(psscr);
-} else if (idle_states & OPAL_PM_WINKLE_ENABLED) {
+} else if ((idle_states & OPAL_PM_WINKLE_ENABLED) &&
+           (idle_states & OPAL_PM_LOSE_FULL_CONTEXT)) {
 srr1 = power7_idle_insn(PNV_THREAD_WINKLE);
 } else if ((idle_states & OPAL_PM_SLEEP_ENABLED) ||
 (idle_states & OPAL_PM_SLEEP_ENABLED_ER1)) {
@@ -553,6 +587,7 @@ static int __init pnv_power9_idle_init(struct device_node *np, u32 *flags,
 max_residency_ns = residency_ns[i];
 pnv_deepest_stop_psscr_val = psscr_val[i];
 pnv_deepest_stop_psscr_mask = psscr_mask[i];
+pnv_deepest_stop_flag = flags[i];
 deepest_stop_found = true;
 }
@@ -68,6 +68,7 @@ typedef struct { unsigned long iopgprot; } iopgprot_t;
 #define iopgprot_val(x) ((x).iopgprot)
 #define __pte(x) ((pte_t) { (x) } )
+#define __pmd(x) ((pmd_t) { { (x) }, })
 #define __iopte(x) ((iopte_t) { (x) } )
 #define __pgd(x) ((pgd_t) { (x) } )
 #define __ctxd(x) ((ctxd_t) { (x) } )
@@ -95,6 +96,7 @@ typedef unsigned long iopgprot_t;
 #define iopgprot_val(x) (x)
 #define __pte(x) (x)
+#define __pmd(x) ((pmd_t) { { (x) }, })
 #define __iopte(x) (x)
 #define __pgd(x) (x)
 #define __ctxd(x) (x)
@@ -1266,8 +1266,6 @@ static int pci_sun4v_probe(struct platform_device *op)
  * ATU group, but ATU hcalls won't be available.
  */
 hv_atu = false;
-pr_err(PFX "Could not register hvapi ATU err=%d\n",
-err);
 } else {
 pr_info(PFX "Registered hvapi ATU major[%lu] minor[%lu]\n",
 vatu_major, vatu_minor);
@@ -602,7 +602,7 @@ void pcibios_fixup_bus(struct pci_bus *bus)
 {
 struct pci_dev *dev;
 int i, has_io, has_mem;
-unsigned int cmd;
+unsigned int cmd = 0;
 struct linux_pcic *pcic;
 /* struct linux_pbm_info* pbm = &pcic->pbm; */
 int node;
@@ -5,26 +5,26 @@
 .align 4
 ENTRY(__multi3) /* %o0 = u, %o1 = v */
 mov %o1, %g1
-srl %o3, 0, %g4
-mulx %g4, %g1, %o1
+srl %o3, 0, %o4
+mulx %o4, %g1, %o1
 srlx %g1, 0x20, %g3
-mulx %g3, %g4, %g5
-sllx %g5, 0x20, %o5
-srl %g1, 0, %g4
+mulx %g3, %o4, %g7
+sllx %g7, 0x20, %o5
+srl %g1, 0, %o4
 sub %o1, %o5, %o5
 srlx %o5, 0x20, %o5
-addcc %g5, %o5, %g5
+addcc %g7, %o5, %g7
 srlx %o3, 0x20, %o5
-mulx %g4, %o5, %g4
+mulx %o4, %o5, %o4
 mulx %g3, %o5, %o5
 sethi %hi(0x80000000), %g3
-addcc %g5, %g4, %g5
-srlx %g5, 0x20, %g5
+addcc %g7, %o4, %g7
+srlx %g7, 0x20, %g7
 add %g3, %g3, %g3
 movcc %xcc, %g0, %g3
-addcc %o5, %g5, %o5
-sllx %g4, 0x20, %g4
-add %o1, %g4, %o1
+addcc %o5, %g7, %o5
+sllx %o4, 0x20, %o4
+add %o1, %o4, %o1
 add %o5, %g3, %g2
 mulx %g1, %o2, %g1
 add %g1, %g2, %g1
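Beyond the register renames, __multi3 builds a wide multiply out of 32-bit partial products. A hedged, stand-alone C sketch of the 64x64 to 128-bit core of that decomposition (my own code, not a translation of the assembly above):

#include <stdint.h>
#include <stdio.h>

/* Multiply two 64-bit values into a 128-bit (hi:lo) result using 32-bit
 * limbs: four partial products plus a carry out of the low half. */
static void mul64x64_128(uint64_t u, uint64_t v, uint64_t *hi, uint64_t *lo)
{
	uint64_t u_lo = (uint32_t)u, u_hi = u >> 32;
	uint64_t v_lo = (uint32_t)v, v_hi = v >> 32;

	uint64_t p0 = u_lo * v_lo;	/* low  x low  */
	uint64_t p1 = u_lo * v_hi;	/* low  x high */
	uint64_t p2 = u_hi * v_lo;	/* high x low  */
	uint64_t p3 = u_hi * v_hi;	/* high x high */

	/* Carry from adding the middle products into bits 32..63. */
	uint64_t carry = ((p0 >> 32) + (uint32_t)p1 + (uint32_t)p2) >> 32;

	*lo = p0 + (p1 << 32) + (p2 << 32);
	*hi = p3 + (p1 >> 32) + (p2 >> 32) + carry;
}

int main(void)
{
	uint64_t hi, lo;

	mul64x64_128(0xfedcba9876543210ULL, 0x0123456789abcdefULL, &hi, &lo);
	printf("%016llx%016llx\n", (unsigned long long)hi,
	       (unsigned long long)lo);
	return 0;
}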

@@ -101,6 +101,7 @@ config X86
 select GENERIC_STRNCPY_FROM_USER
 select GENERIC_STRNLEN_USER
 select GENERIC_TIME_VSYSCALL
+select HARDLOCKUP_CHECK_TIMESTAMP if X86_64
 select HAVE_ACPI_APEI if ACPI
 select HAVE_ACPI_APEI_NMI if ACPI
 select HAVE_ALIGNED_STRUCT_PAGE if SLUB
@@ -164,7 +165,7 @@ config X86
 select HAVE_PCSPKR_PLATFORM
 select HAVE_PERF_EVENTS
 select HAVE_PERF_EVENTS_NMI
-select HAVE_HARDLOCKUP_DETECTOR_PERF if HAVE_PERF_EVENTS_NMI
+select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
 select HAVE_PERF_REGS
 select HAVE_PERF_USER_STACK_DUMP
 select HAVE_REGS_AND_STACK_ACCESS_API
@@ -117,11 +117,10 @@
 .set T1, REG_T1
 .endm
-#define K_BASE %r8
 #define HASH_PTR %r9
+#define BLOCKS_CTR %r8
 #define BUFFER_PTR %r10
 #define BUFFER_PTR2 %r13
-#define BUFFER_END %r11
 #define PRECALC_BUF %r14
 #define WK_BUF %r15
@@ -205,14 +204,14 @@
  * blended AVX2 and ALU instruction scheduling
  * 1 vector iteration per 8 rounds
  */
-vmovdqu ((i * 2) + PRECALC_OFFSET)(BUFFER_PTR), W_TMP
+vmovdqu (i * 2)(BUFFER_PTR), W_TMP
 .elseif ((i & 7) == 1)
-vinsertf128 $1, (((i-1) * 2)+PRECALC_OFFSET)(BUFFER_PTR2),\
+vinsertf128 $1, ((i-1) * 2)(BUFFER_PTR2),\
 WY_TMP, WY_TMP
 .elseif ((i & 7) == 2)
 vpshufb YMM_SHUFB_BSWAP, WY_TMP, WY
 .elseif ((i & 7) == 4)
-vpaddd K_XMM(K_BASE), WY, WY_TMP
+vpaddd K_XMM + K_XMM_AR(%rip), WY, WY_TMP
 .elseif ((i & 7) == 7)
 vmovdqu WY_TMP, PRECALC_WK(i&~7)
@@ -255,7 +254,7 @@
 vpxor WY, WY_TMP, WY_TMP
 .elseif ((i & 7) == 7)
 vpxor WY_TMP2, WY_TMP, WY
-vpaddd K_XMM(K_BASE), WY, WY_TMP
+vpaddd K_XMM + K_XMM_AR(%rip), WY, WY_TMP
 vmovdqu WY_TMP, PRECALC_WK(i&~7)
 PRECALC_ROTATE_WY
@@ -291,7 +290,7 @@
 vpsrld $30, WY, WY
 vpor WY, WY_TMP, WY
 .elseif ((i & 7) == 7)
-vpaddd K_XMM(K_BASE), WY, WY_TMP
+vpaddd K_XMM + K_XMM_AR(%rip), WY, WY_TMP
 vmovdqu WY_TMP, PRECALC_WK(i&~7)
 PRECALC_ROTATE_WY
@@ -446,6 +445,16 @@
 .endm
+/* Add constant only if (%2 > %3) condition met (uses RTA as temp)
+ * %1 + %2 >= %3 ? %4 : 0
+ */
+.macro ADD_IF_GE a, b, c, d
+mov \a, RTA
+add $\d, RTA
+cmp $\c, \b
+cmovge RTA, \a
+.endm
 /*
  * macro implements 80 rounds of SHA-1, for multiple blocks with s/w pipelining
  */
@@ -463,13 +472,16 @@
 lea (2*4*80+32)(%rsp), WK_BUF
 # Precalc WK for first 2 blocks
-PRECALC_OFFSET = 0
+ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 2, 64
 .set i, 0
 .rept 160
 PRECALC i
 .set i, i + 1
 .endr
-PRECALC_OFFSET = 128
+/* Go to next block if needed */
+ADD_IF_GE BUFFER_PTR, BLOCKS_CTR, 3, 128
+ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 4, 128
 xchg WK_BUF, PRECALC_BUF
 .align 32
@@ -479,8 +491,8 @@ _loop:
  * we use K_BASE value as a signal of a last block,
  * it is set below by: cmovae BUFFER_PTR, K_BASE
  */
-cmp K_BASE, BUFFER_PTR
-jne _begin
+test BLOCKS_CTR, BLOCKS_CTR
+jnz _begin
 .align 32
 jmp _end
 .align 32
@@ -512,10 +524,10 @@ _loop0:
 .set j, j+2
 .endr
-add $(2*64), BUFFER_PTR /* move to next odd-64-byte block */
-cmp BUFFER_END, BUFFER_PTR /* is current block the last one? */
-cmovae K_BASE, BUFFER_PTR /* signal the last iteration smartly */
+/* Update Counter */
+sub $1, BLOCKS_CTR
+/* Move to the next block only if needed*/
+ADD_IF_GE BUFFER_PTR, BLOCKS_CTR, 4, 128
 /*
  * rounds
  * 60,62,64,66,68
@@ -532,8 +544,8 @@ _loop0:
 UPDATE_HASH 12(HASH_PTR), D
 UPDATE_HASH 16(HASH_PTR), E
-cmp K_BASE, BUFFER_PTR /* is current block the last one? */
-je _loop
+test BLOCKS_CTR, BLOCKS_CTR
+jz _loop
 mov TB, B
@@ -575,10 +587,10 @@ _loop2:
 .set j, j+2
 .endr
-add $(2*64), BUFFER_PTR2 /* move to next even-64-byte block */
-cmp BUFFER_END, BUFFER_PTR2 /* is current block the last one */
-cmovae K_BASE, BUFFER_PTR /* signal the last iteration smartly */
+/* update counter */
+sub $1, BLOCKS_CTR
+/* Move to the next block only if needed*/
+ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 4, 128
 jmp _loop3
 _loop3:
@@ -641,19 +653,12 @@ _loop3:
 avx2_zeroupper
-lea K_XMM_AR(%rip), K_BASE
+/* Setup initial values */
 mov CTX, HASH_PTR
 mov BUF, BUFFER_PTR
-lea 64(BUF), BUFFER_PTR2
-shl $6, CNT /* mul by 64 */
-add BUF, CNT
-add $64, CNT
-mov CNT, BUFFER_END
-cmp BUFFER_END, BUFFER_PTR2
-cmovae K_BASE, BUFFER_PTR2
+mov BUF, BUFFER_PTR2
+mov CNT, BLOCKS_CTR
 xmm_mov BSWAP_SHUFB_CTL(%rip), YMM_SHUFB_BSWAP
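The ADD_IF_GE macro introduced above advances a buffer pointer by a fixed offset only while at least a given number of blocks remains, and uses cmovge so the decision stays branch-free in the hot loop. A hedged C sketch of the same idea, with made-up names rather than anything from this file:

#include <stddef.h>
#include <stdint.h>

/* Illustrative only: advance the block pointer by 'step' bytes while at
 * least 'min_blocks' blocks are left. A compiler is free to lower the
 * ternary to a conditional move, keeping the loop branch-free. */
static inline const uint8_t *advance_if_ge(const uint8_t *p,
					   unsigned long blocks_left,
					   unsigned long min_blocks,
					   size_t step)
{
	return (blocks_left >= min_blocks) ? p + step : p;
}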

@@ -201,7 +201,7 @@ asmlinkage void sha1_transform_avx2(u32 *digest, const char *data,
 static bool avx2_usable(void)
 {
-if (false && avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
+if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
 && boot_cpu_has(X86_FEATURE_BMI1)
 && boot_cpu_has(X86_FEATURE_BMI2))
 return true;
@@ -1313,6 +1313,8 @@ ENTRY(nmi)
  * other IST entries.
  */
+ASM_CLAC
 /* Use %rdx as our temp variable throughout */
 pushq %rdx
@@ -2114,7 +2114,7 @@ static void refresh_pce(void *ignored)
 load_mm_cr4(this_cpu_read(cpu_tlbstate.loaded_mm));
 }
-static void x86_pmu_event_mapped(struct perf_event *event)
+static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 {
 if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 return;
@@ -2129,22 +2129,20 @@ static void x86_pmu_event_mapped(struct perf_event *event)
  * For now, this can't happen because all callers hold mmap_sem
  * for write. If this changes, we'll need a different solution.
  */
-lockdep_assert_held_exclusive(&current->mm->mmap_sem);
+lockdep_assert_held_exclusive(&mm->mmap_sem);
-if (atomic_inc_return(&current->mm->context.perf_rdpmc_allowed) == 1)
-on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
+if (atomic_inc_return(&mm->context.perf_rdpmc_allowed) == 1)
+on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1);
 }
-static void x86_pmu_event_unmapped(struct perf_event *event)
+static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
 {
-if (!current->mm)
-return;
 if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 return;
-if (atomic_dec_and_test(&current->mm->context.perf_rdpmc_allowed))
-on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
+if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
+on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1);
 }
 static int x86_pmu_event_idx(struct perf_event *event)
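The perf_rdpmc_allowed handling above is a first-user/last-user pattern: atomic_inc_return() == 1 turns the feature on for the mm, and atomic_dec_and_test() turns it off again. A hedged, self-contained C11 sketch of that pattern with illustrative names (not the kernel code):

#include <stdatomic.h>
#include <stdio.h>

/* Illustrative only: enable a shared resource when the first user arrives
 * and disable it when the last user leaves. */
static atomic_int users;

static void resource_enable(void)  { puts("enable");  }
static void resource_disable(void) { puts("disable"); }

static void user_get(void)
{
	/* fetch_add returns the old value; old + 1 == 1 means first user. */
	if (atomic_fetch_add(&users, 1) + 1 == 1)
		resource_enable();
}

static void user_put(void)
{
	/* fetch_sub returns the old value; old - 1 == 0 means last user. */
	if (atomic_fetch_sub(&users, 1) - 1 == 0)
		resource_disable();
}

int main(void)
{
	user_get();
	user_get();
	user_put();
	user_put();	/* prints "enable" once and "disable" once */
	return 0;
}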

@@ -69,7 +69,7 @@ struct bts_buffer {
 struct bts_phys buf[0];
 };
-struct pmu bts_pmu;
+static struct pmu bts_pmu;
 static size_t buf_size(struct page *page)
 {
@@ -587,7 +587,7 @@ static __initconst const u64 p4_hw_cache_event_ids
  * P4_CONFIG_ALIASABLE or bits for P4_PEBS_METRIC, they are
  * either up to date automatically or not applicable at all.
  */
-struct p4_event_alias {
+static struct p4_event_alias {
 u64 original;
 u64 alternative;
 } p4_event_aliases[] = {
@@ -559,7 +559,7 @@ static struct attribute_group rapl_pmu_format_group = {
 .attrs = rapl_formats_attr,
 };
-const struct attribute_group *rapl_attr_groups[] = {
+static const struct attribute_group *rapl_attr_groups[] = {
 &rapl_pmu_attr_group,
 &rapl_pmu_format_group,
 &rapl_pmu_events_group,
@@ -721,7 +721,7 @@ static struct attribute *uncore_pmu_attrs[] = {
 NULL,
 };
-static struct attribute_group uncore_pmu_attr_group = {
+static const struct attribute_group uncore_pmu_attr_group = {
 .attrs = uncore_pmu_attrs,
 };
@@ -272,7 +272,7 @@ static struct attribute *nhmex_uncore_ubox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group nhmex_uncore_ubox_format_group = {
+static const struct attribute_group nhmex_uncore_ubox_format_group = {
 .name = "format",
 .attrs = nhmex_uncore_ubox_formats_attr,
 };
@@ -299,7 +299,7 @@ static struct attribute *nhmex_uncore_cbox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group nhmex_uncore_cbox_format_group = {
+static const struct attribute_group nhmex_uncore_cbox_format_group = {
 .name = "format",
 .attrs = nhmex_uncore_cbox_formats_attr,
 };
@@ -407,7 +407,7 @@ static struct attribute *nhmex_uncore_bbox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group nhmex_uncore_bbox_format_group = {
+static const struct attribute_group nhmex_uncore_bbox_format_group = {
 .name = "format",
 .attrs = nhmex_uncore_bbox_formats_attr,
 };
@@ -484,7 +484,7 @@ static struct attribute *nhmex_uncore_sbox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group nhmex_uncore_sbox_format_group = {
+static const struct attribute_group nhmex_uncore_sbox_format_group = {
 .name = "format",
 .attrs = nhmex_uncore_sbox_formats_attr,
 };
@@ -898,7 +898,7 @@ static struct attribute *nhmex_uncore_mbox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group nhmex_uncore_mbox_format_group = {
+static const struct attribute_group nhmex_uncore_mbox_format_group = {
 .name = "format",
 .attrs = nhmex_uncore_mbox_formats_attr,
 };
@@ -1163,7 +1163,7 @@ static struct attribute *nhmex_uncore_rbox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group nhmex_uncore_rbox_format_group = {
+static const struct attribute_group nhmex_uncore_rbox_format_group = {
 .name = "format",
 .attrs = nhmex_uncore_rbox_formats_attr,
 };
@@ -130,7 +130,7 @@ static struct attribute *snb_uncore_formats_attr[] = {
 NULL,
 };
-static struct attribute_group snb_uncore_format_group = {
+static const struct attribute_group snb_uncore_format_group = {
 .name = "format",
 .attrs = snb_uncore_formats_attr,
 };
@@ -289,7 +289,7 @@ static struct attribute *snb_uncore_imc_formats_attr[] = {
 NULL,
 };
-static struct attribute_group snb_uncore_imc_format_group = {
+static const struct attribute_group snb_uncore_imc_format_group = {
 .name = "format",
 .attrs = snb_uncore_imc_formats_attr,
 };
@@ -769,7 +769,7 @@ static struct attribute *nhm_uncore_formats_attr[] = {
 NULL,
 };
-static struct attribute_group nhm_uncore_format_group = {
+static const struct attribute_group nhm_uncore_format_group = {
 .name = "format",
 .attrs = nhm_uncore_formats_attr,
 };
@@ -602,27 +602,27 @@ static struct uncore_event_desc snbep_uncore_qpi_events[] = {
 { /* end: all zeroes */ },
 };
-static struct attribute_group snbep_uncore_format_group = {
+static const struct attribute_group snbep_uncore_format_group = {
 .name = "format",
 .attrs = snbep_uncore_formats_attr,
 };
-static struct attribute_group snbep_uncore_ubox_format_group = {
+static const struct attribute_group snbep_uncore_ubox_format_group = {
 .name = "format",
 .attrs = snbep_uncore_ubox_formats_attr,
 };
-static struct attribute_group snbep_uncore_cbox_format_group = {
+static const struct attribute_group snbep_uncore_cbox_format_group = {
 .name = "format",
 .attrs = snbep_uncore_cbox_formats_attr,
 };
-static struct attribute_group snbep_uncore_pcu_format_group = {
+static const struct attribute_group snbep_uncore_pcu_format_group = {
 .name = "format",
 .attrs = snbep_uncore_pcu_formats_attr,
 };
-static struct attribute_group snbep_uncore_qpi_format_group = {
+static const struct attribute_group snbep_uncore_qpi_format_group = {
 .name = "format",
 .attrs = snbep_uncore_qpi_formats_attr,
 };
@@ -1431,27 +1431,27 @@ static struct attribute *ivbep_uncore_qpi_formats_attr[] = {
 NULL,
 };
-static struct attribute_group ivbep_uncore_format_group = {
+static const struct attribute_group ivbep_uncore_format_group = {
 .name = "format",
 .attrs = ivbep_uncore_formats_attr,
 };
-static struct attribute_group ivbep_uncore_ubox_format_group = {
+static const struct attribute_group ivbep_uncore_ubox_format_group = {
 .name = "format",
 .attrs = ivbep_uncore_ubox_formats_attr,
 };
-static struct attribute_group ivbep_uncore_cbox_format_group = {
+static const struct attribute_group ivbep_uncore_cbox_format_group = {
 .name = "format",
 .attrs = ivbep_uncore_cbox_formats_attr,
 };
-static struct attribute_group ivbep_uncore_pcu_format_group = {
+static const struct attribute_group ivbep_uncore_pcu_format_group = {
 .name = "format",
 .attrs = ivbep_uncore_pcu_formats_attr,
 };
-static struct attribute_group ivbep_uncore_qpi_format_group = {
+static const struct attribute_group ivbep_uncore_qpi_format_group = {
 .name = "format",
 .attrs = ivbep_uncore_qpi_formats_attr,
 };
@@ -1887,7 +1887,7 @@ static struct attribute *knl_uncore_ubox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group knl_uncore_ubox_format_group = {
+static const struct attribute_group knl_uncore_ubox_format_group = {
 .name = "format",
 .attrs = knl_uncore_ubox_formats_attr,
 };
@@ -1927,7 +1927,7 @@ static struct attribute *knl_uncore_cha_formats_attr[] = {
 NULL,
 };
-static struct attribute_group knl_uncore_cha_format_group = {
+static const struct attribute_group knl_uncore_cha_format_group = {
 .name = "format",
 .attrs = knl_uncore_cha_formats_attr,
 };
@@ -2037,7 +2037,7 @@ static struct attribute *knl_uncore_pcu_formats_attr[] = {
 NULL,
 };
-static struct attribute_group knl_uncore_pcu_format_group = {
+static const struct attribute_group knl_uncore_pcu_format_group = {
 .name = "format",
 .attrs = knl_uncore_pcu_formats_attr,
 };
@@ -2187,7 +2187,7 @@ static struct attribute *knl_uncore_irp_formats_attr[] = {
 NULL,
 };
-static struct attribute_group knl_uncore_irp_format_group = {
+static const struct attribute_group knl_uncore_irp_format_group = {
 .name = "format",
 .attrs = knl_uncore_irp_formats_attr,
 };
@@ -2385,7 +2385,7 @@ static struct attribute *hswep_uncore_ubox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group hswep_uncore_ubox_format_group = {
+static const struct attribute_group hswep_uncore_ubox_format_group = {
 .name = "format",
 .attrs = hswep_uncore_ubox_formats_attr,
 };
@@ -2439,7 +2439,7 @@ static struct attribute *hswep_uncore_cbox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group hswep_uncore_cbox_format_group = {
+static const struct attribute_group hswep_uncore_cbox_format_group = {
 .name = "format",
 .attrs = hswep_uncore_cbox_formats_attr,
 };
@@ -2621,7 +2621,7 @@ static struct attribute *hswep_uncore_sbox_formats_attr[] = {
 NULL,
 };
-static struct attribute_group hswep_uncore_sbox_format_group = {
+static const struct attribute_group hswep_uncore_sbox_format_group = {
 .name = "format",
 .attrs = hswep_uncore_sbox_formats_attr,
 };
@@ -3314,7 +3314,7 @@ static struct attribute *skx_uncore_cha_formats_attr[] = {
 NULL,
 };
-static struct attribute_group skx_uncore_chabox_format_group = {
+static const struct attribute_group skx_uncore_chabox_format_group = {
 .name = "format",
 .attrs = skx_uncore_cha_formats_attr,
 };
@@ -3427,7 +3427,7 @@ static struct attribute *skx_uncore_iio_formats_attr[] = {
 NULL,
 };
-static struct attribute_group skx_uncore_iio_format_group = {
+static const struct attribute_group skx_uncore_iio_format_group = {
 .name = "format",
 .attrs = skx_uncore_iio_formats_attr,
 };
@@ -3484,7 +3484,7 @@ static struct attribute *skx_uncore_formats_attr[] = {
 NULL,
 };
-static struct attribute_group skx_uncore_format_group = {
+static const struct attribute_group skx_uncore_format_group = {
 .name = "format",
 .attrs = skx_uncore_formats_attr,
 };
@@ -3605,7 +3605,7 @@ static struct attribute *skx_upi_uncore_formats_attr[] = {
 NULL,
 };
-static struct attribute_group skx_upi_uncore_format_group = {
+static const struct attribute_group skx_upi_uncore_format_group = {
 .name = "format",
 .attrs = skx_upi_uncore_formats_attr,
 };
@@ -286,7 +286,7 @@
 #define X86_FEATURE_PAUSEFILTER (15*32+10) /* filtered pause intercept */
 #define X86_FEATURE_PFTHRESHOLD (15*32+12) /* pause filter threshold */
 #define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */
-#define X86_FEATURE_VIRTUAL_VMLOAD_VMSAVE (15*32+15) /* Virtual VMLOAD VMSAVE */
+#define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ecx), word 16 */
 #define X86_FEATURE_AVX512VBMI (16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
@@ -247,11 +247,11 @@ extern int force_personality32;
 /*
  * This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
 #define ELF_ET_DYN_BASE (mmap_is_ia32() ? 0x000400000UL : \
-0x100000000UL)
+(TASK_SIZE / 3 * 2))
 /* This yields a mask that user programs can use to figure out what
 instruction set this CPU supports. This could be done in user space,
@@ -43,6 +43,9 @@ struct hypervisor_x86 {
 /* pin current vcpu to specified physical cpu (run rarely) */
 void (*pin_vcpu)(int);
+/* called during init_mem_mapping() to setup early mappings. */
+void (*init_mem_mapping)(void);
 };
 extern const struct hypervisor_x86 *x86_hyper;
@@ -57,8 +60,15 @@ extern const struct hypervisor_x86 x86_hyper_kvm;
 extern void init_hypervisor_platform(void);
 extern bool hypervisor_x2apic_available(void);
 extern void hypervisor_pin_vcpu(int cpu);
+static inline void hypervisor_init_mem_mapping(void)
+{
+if (x86_hyper && x86_hyper->init_mem_mapping)
+x86_hyper->init_mem_mapping();
+}
 #else
 static inline void init_hypervisor_platform(void) { }
 static inline bool hypervisor_x2apic_available(void) { return false; }
+static inline void hypervisor_init_mem_mapping(void) { }
 #endif /* CONFIG_HYPERVISOR_GUEST */
 #endif /* _ASM_X86_HYPERVISOR_H */
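The new init_mem_mapping callback follows the usual optional-hook pattern: a nullable function pointer in an ops structure, wrapped by an inline helper so call sites never test the pointer themselves. A hedged, stand-alone C sketch of that pattern with illustrative names (not the kernel API):

#include <stdio.h>

/* Illustrative only: an ops table with an optional hook plus a guard
 * wrapper that is safe to call even when the hook is absent. */
struct platform_ops {
	const char *name;
	void (*init_mem_mapping)(void);	/* may be NULL */
};

static void demo_init_mem_mapping(void)
{
	puts("platform-specific early mapping setup");
}

static const struct platform_ops demo_ops = {
	.name = "demo",
	.init_mem_mapping = demo_init_mem_mapping,
};

static const struct platform_ops *active_ops = &demo_ops;

static inline void platform_init_mem_mapping(void)
{
	if (active_ops && active_ops->init_mem_mapping)
		active_ops->init_mem_mapping();
}

int main(void)
{
	platform_init_mem_mapping();	/* no-op if the hook is NULL */
	return 0;
}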

@@ -40,13 +40,16 @@ static void aperfmperf_snapshot_khz(void *dummy)
 struct aperfmperf_sample *s = this_cpu_ptr(&samples);
 ktime_t now = ktime_get();
 s64 time_delta = ktime_ms_delta(now, s->time);
+unsigned long flags;
 /* Don't bother re-computing within the cache threshold time. */
 if (time_delta < APERFMPERF_CACHE_THRESHOLD_MS)
 return;
+local_irq_save(flags);
 rdmsrl(MSR_IA32_APERF, aperf);
 rdmsrl(MSR_IA32_MPERF, mperf);
+local_irq_restore(flags);
 aperf_delta = aperf - s->aperf;
 mperf_delta = mperf - s->mperf;
@@ -122,7 +122,7 @@ static struct attribute *thermal_throttle_attrs[] = {
 NULL
 };
-static struct attribute_group thermal_attr_group = {
+static const struct attribute_group thermal_attr_group = {
 .attrs = thermal_throttle_attrs,
 .name = "thermal_throttle"
 };
@@ -561,7 +561,7 @@ static struct attribute *mc_default_attrs[] = {
 NULL
 };
-static struct attribute_group mc_attr_group = {
+static const struct attribute_group mc_attr_group = {
 .attrs = mc_default_attrs,
 .name = "microcode",
 };
@@ -707,7 +707,7 @@ static struct attribute *cpu_root_microcode_attrs[] = {
 NULL
 };
-static struct attribute_group cpu_root_microcode_group = {
+static const struct attribute_group cpu_root_microcode_group = {
 .name = "microcode",
 .attrs = cpu_root_microcode_attrs,
 };
@@ -237,6 +237,18 @@ set_mtrr(unsigned int reg, unsigned long base, unsigned long size, mtrr_type typ
 stop_machine(mtrr_rendezvous_handler, &data, cpu_online_mask);
 }
+static void set_mtrr_cpuslocked(unsigned int reg, unsigned long base,
+				unsigned long size, mtrr_type type)
+{
+struct set_mtrr_data data = { .smp_reg = reg,
+			      .smp_base = base,
+			      .smp_size = size,
+			      .smp_type = type
+			    };
+stop_machine_cpuslocked(mtrr_rendezvous_handler, &data, cpu_online_mask);
+}
 static void set_mtrr_from_inactive_cpu(unsigned int reg, unsigned long base,
 unsigned long size, mtrr_type type)
 {
@@ -370,7 +382,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
 /* Search for an empty MTRR */
 i = mtrr_if->get_free_region(base, size, replace);
 if (i >= 0) {
-set_mtrr(i, base, size, type);
+set_mtrr_cpuslocked(i, base, size, type);
 if (likely(replace < 0)) {
 mtrr_usage_table[i] = 1;
 } else {
@@ -378,7 +390,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
 if (increment)
 mtrr_usage_table[i]++;
 if (unlikely(replace != i)) {
-set_mtrr(replace, 0, 0, 0);
+set_mtrr_cpuslocked(replace, 0, 0, 0);
 mtrr_usage_table[replace] = 0;
 }
 }
@@ -506,7 +518,7 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
 goto out;
 }
 if (--mtrr_usage_table[reg] < 1)
-set_mtrr(reg, 0, 0, 0);
+set_mtrr_cpuslocked(reg, 0, 0, 0);
 error = reg;
 out:
 mutex_unlock(&mtrr_mutex);
@@ -53,6 +53,7 @@ void __head __startup_64(unsigned long physaddr)
 pudval_t *pud;
 pmdval_t *pmd, pmd_entry;
 int i;
+unsigned int *next_pgt_ptr;
 /* Is the address too large? */
 if (physaddr >> MAX_PHYSMEM_BITS)
@@ -91,9 +92,9 @@ void __head __startup_64(unsigned long physaddr)
  * creates a bunch of nonsense entries but that is fine --
  * it avoids problems around wraparound.
  */
+next_pgt_ptr = fixup_pointer(&next_early_pgt, physaddr);
-pud = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
-pmd = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
+pud = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr);
+pmd = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr);
 if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
 p4d = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
@@ -55,7 +55,7 @@ static struct bin_attribute *boot_params_data_attrs[] = {
 NULL,
 };
-static struct attribute_group boot_params_attr_group = {
+static const struct attribute_group boot_params_attr_group = {
 .attrs = boot_params_version_attrs,
 .bin_attrs = boot_params_data_attrs,
 };
@@ -202,7 +202,7 @@ static struct bin_attribute *setup_data_data_attrs[] = {
 NULL,
 };
-static struct attribute_group setup_data_attr_group = {
+static const struct attribute_group setup_data_attr_group = {
 .attrs = setup_data_type_attrs,
 .bin_attrs = setup_data_data_attrs,
 };
@@ -971,7 +971,8 @@ void common_cpu_up(unsigned int cpu, struct task_struct *idle)
  * Returns zero if CPU booted OK, else error code from
  * ->wakeup_secondary_cpu.
  */
-static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
+static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
+		       int *cpu0_nmi_registered)
 {
 volatile u32 *trampoline_status =
 (volatile u32 *) __va(real_mode_header->trampoline_status);
@@ -979,7 +980,6 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 unsigned long start_ip = real_mode_header->trampoline_start;
 unsigned long boot_error = 0;
-int cpu0_nmi_registered = 0;
 unsigned long timeout;
 idle->thread.sp = (unsigned long)task_pt_regs(idle);
@@ -1035,7 +1035,7 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
 else
 boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
-&cpu0_nmi_registered);
+cpu0_nmi_registered);
 if (!boot_error) {
 /*
@@ -1080,12 +1080,6 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
  */
 smpboot_restore_warm_reset_vector();
 }
-/*
- * Clean up the nmi handler. Do this after the callin and callout sync
- * to avoid impact of possible long unregister time.
- */
-if (cpu0_nmi_registered)
-unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");
 return boot_error;
 }
@@ -1093,8 +1087,9 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
 int apicid = apic->cpu_present_to_apicid(cpu);
+int cpu0_nmi_registered = 0;
 unsigned long flags;
-int err;
+int err, ret = 0;
 WARN_ON(irqs_disabled());
@@ -1131,10 +1126,11 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 common_cpu_up(cpu, tidle);
-err = do_boot_cpu(apicid, cpu, tidle);
+err = do_boot_cpu(apicid, cpu, tidle, &cpu0_nmi_registered);
 if (err) {
 pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-return -EIO;
+ret = -EIO;
+goto unreg_nmi;
 }
 /*
@@ -1150,7 +1146,15 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 touch_nmi_watchdog();
 }
-return 0;
+unreg_nmi:
+/*
+ * Clean up the nmi handler. Do this after the callin and callout sync
+ * to avoid impact of possible long unregister time.
+ */
+if (cpu0_nmi_registered)
+unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");
+return ret;
 }
 /**
@@ -1100,7 +1100,7 @@ static __init int svm_hardware_setup(void)
 if (vls) {
 if (!npt_enabled ||
-    !boot_cpu_has(X86_FEATURE_VIRTUAL_VMLOAD_VMSAVE) ||
+    !boot_cpu_has(X86_FEATURE_V_VMSAVE_VMLOAD) ||
     !IS_ENABLED(CONFIG_X86_64)) {
 vls = false;
 } else {
@@ -18,6 +18,7 @@
 #include <asm/dma.h> /* for MAX_DMA_PFN */
 #include <asm/microcode.h>
 #include <asm/kaslr.h>
+#include <asm/hypervisor.h>
 /*
  * We need to define the tracepoints somewhere, and tlb.c
@@ -636,6 +637,8 @@ void __init init_mem_mapping(void)
 load_cr3(swapper_pg_dir);
 __flush_tlb_all();
+hypervisor_init_mem_mapping();
 early_memtest(0, max_pfn_mapped << PAGE_SHIFT);

Some files were not shown because too many files have changed in this diff.