powerpc updates for 6.13

- Rework kfence support for the HPT MMU to work on systems with >= 16TB of RAM.
 
  - Remove the powerpc "maple" platform, used by the "Yellow Dog Powerstation".
 
  - Add support for DYNAMIC_FTRACE_WITH_CALL_OPS,
    DYNAMIC_FTRACE_WITH_DIRECT_CALLS & BPF Trampolines.
 
  - Add support for running KVM nested guests on Power11.
 
  - Other small features, cleanups and fixes.
 
 Thanks to: Amit Machhiwal, Arnd Bergmann, Christophe Leroy, Costa Shulyupin,
 David Hunter, David Wang, Disha Goel, Gautam Menghani, Geert Uytterhoeven,
 Hari Bathini, Julia Lawall, Kajol Jain, Keith Packard, Lukas Bulwahn, Madhavan
 Srinivasan, Markus Elfring, Michal Suchanek, Ming Lei, Mukesh Kumar Chaurasiya,
 Nathan Chancellor, Naveen N Rao, Nicholas Piggin, Nysal Jan K.A, Paulo Miguel
 Almeida, Pavithra Prakash, Ritesh Harjani (IBM), Rob Herring (Arm), Sachin P
 Bappalige, Shen Lichuan, Simon Horman, Sourabh Jain, Thomas Weißschuh, Thorsten
 Blum, Thorsten Leemhuis, Venkat Rao Bagalkote, Zhang Zekun,
 zhang jiao.
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQRjvi15rv0TSTaE+SIF0oADX8seIQUCZ0Fi5AAKCRAF0oADX8se
 IeI0AQCAkNWRYzGNzPM6aMwDpq5qdeZzvp0rZxuNsRSnIKJlxAD+PAOxOietgjbQ
 Lxt3oizg+UcH/304Y/iyT8IrwI4n+gE=
 =xNtu
 -----END PGP SIGNATURE-----

Merge tag 'powerpc-6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

 - Rework kfence support for the HPT MMU to work on systems with >= 16TB
   of RAM.

 - Remove the powerpc "maple" platform, used by the "Yellow Dog
   Powerstation".

 - Add support for DYNAMIC_FTRACE_WITH_CALL_OPS,
   DYNAMIC_FTRACE_WITH_DIRECT_CALLS & BPF Trampolines.

 - Add support for running KVM nested guests on Power11.

 - Other small features, cleanups and fixes.

Thanks to Amit Machhiwal, Arnd Bergmann, Christophe Leroy, Costa
Shulyupin, David Hunter, David Wang, Disha Goel, Gautam Menghani, Geert
Uytterhoeven, Hari Bathini, Julia Lawall, Kajol Jain, Keith Packard,
Lukas Bulwahn, Madhavan Srinivasan, Markus Elfring, Michal Suchanek,
Ming Lei, Mukesh Kumar Chaurasiya, Nathan Chancellor, Naveen N Rao,
Nicholas Piggin, Nysal Jan K.A, Paulo Miguel Almeida, Pavithra Prakash,
Ritesh Harjani (IBM), Rob Herring (Arm), Sachin P Bappalige, Shen
Lichuan, Simon Horman, Sourabh Jain, Thomas Weißschuh, Thorsten Blum,
Thorsten Leemhuis, Venkat Rao Bagalkote, Zhang Zekun, and zhang jiao.

* tag 'powerpc-6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (89 commits)
  EDAC/powerpc: Remove PPC_MAPLE drivers
  powerpc/perf: Add per-task/process monitoring to vpa_pmu driver
  powerpc/kvm: Add vpa latency counters to kvm_vcpu_arch
  docs: ABI: sysfs-bus-event_source-devices-vpa-pmu: Document sysfs event format entries for vpa_pmu
  powerpc/perf: Add perf interface to expose vpa counters
  MAINTAINERS: powerpc: Mark Maddy as "M"
  powerpc/Makefile: Allow overriding CPP
  powerpc-km82xx.c: replace of_node_put() with __free
  ps3: Correct some typos in comments
  powerpc/kexec: Fix return of uninitialized variable
  macintosh: Use common error handling code in via_pmu_led_init()
  powerpc/powermac: Use of_property_match_string() in pmac_has_backlight_type()
  powerpc: remove dead config options for MPC85xx platform support
  powerpc/xive: Use cpumask_intersects()
  selftests/powerpc: Remove the path after initialization.
  powerpc/xmon: symbol lookup length fixed
  powerpc/ep8248e: Use %pa to format resource_size_t
  powerpc/ps3: Reorganize kerneldoc parameter names
  KVM: PPC: Book3S HV: Fix kmv -> kvm typo
  powerpc/sstep: make emulate_vsx_load and emulate_vsx_store static
  ...
Linus Torvalds 2024-11-23 10:44:31 -08:00
commit 42d9e8b7cc
164 changed files with 3059 additions and 3586 deletions


@ -0,0 +1,24 @@
What: /sys/bus/event_source/devices/vpa_pmu/format
Date: November 2024
Contact: Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
Description: Read-only. Attribute group to describe the magic bits
that go into perf_event_attr.config for a particular pmu.
(See ABI/testing/sysfs-bus-event_source-devices-format).
Each attribute under this group defines a bit range of the
perf_event_attr.config. Supported attributes are listed
below::
event = "config:0-31" - event ID
For example::
l1_to_l2_lat = "event=0x1"
What: /sys/bus/event_source/devices/vpa_pmu/events
Date: November 2024
Contact: Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
Description: Read-only. Attribute group to describe performance monitoring
events for the Virtual Processor Area (VPA). Each attribute
in this group describes a single performance monitoring event
supported by vpa_pmu. The name of the file is the name of
the event (See ABI/testing/sysfs-bus-event_source-devices-events).
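
For orientation only (hypothetical, not part of the patch): a minimal user-space sketch of consuming this interface with perf_event_open(), using the documented "event=0x1" encoding for l1_to_l2_lat and resolving the PMU's dynamic type id from sysfs. Error handling is kept to a bare minimum.

/*
 * Hypothetical usage sketch: count the l1_to_l2_lat event (event=0x1 per
 * the format/events entries above) for the current task.
 */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
        return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
        struct perf_event_attr attr = { 0 };
        unsigned int type = 0;
        long long count = 0;
        FILE *f;
        int fd;

        /* Dynamic PMUs advertise their perf type id in sysfs. */
        f = fopen("/sys/bus/event_source/devices/vpa_pmu/type", "r");
        if (!f || fscanf(f, "%u", &type) != 1)
                return 1;
        fclose(f);

        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = 0x1;              /* l1_to_l2_lat */

        fd = perf_event_open(&attr, 0 /* current task */, -1, -1, 0);
        if (fd < 0)
                return 1;
        /* ... run the workload of interest ... */
        read(fd, &count, sizeof(count));
        printf("l1_to_l2_lat: %lld\n", count);
        return 0;
}

The same alias should also be usable directly from the perf tool, e.g. "perf stat -e vpa_pmu/l1_to_l2_lat/ -p <pid>".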


@ -93,8 +93,8 @@ given platform based on the content of the device-tree. Thus, you
should:
a) add your platform support as a _boolean_ option in
arch/powerpc/Kconfig, following the example of PPC_PSERIES,
PPC_PMAC and PPC_MAPLE. The latter is probably a good
arch/powerpc/Kconfig, following the example of PPC_PSERIES
and PPC_PMAC. The latter is probably a good
example of a board support to start from.
b) create your main platform file as


@ -13140,7 +13140,7 @@ M: Michael Ellerman <mpe@ellerman.id.au>
R: Nicholas Piggin <npiggin@gmail.com>
R: Christophe Leroy <christophe.leroy@csgroup.eu>
R: Naveen N Rao <naveen@kernel.org>
R: Madhavan Srinivasan <maddy@linux.ibm.com>
M: Madhavan Srinivasan <maddy@linux.ibm.com>
L: linuxppc-dev@lists.ozlabs.org
S: Supported
W: https://github.com/linuxppc/wiki/wiki


@ -1691,4 +1691,10 @@ config CC_HAS_SANE_FUNCTION_ALIGNMENT
config ARCH_NEED_CMPXCHG_1_EMU
bool
config ARCH_WANTS_PRE_LINK_VMLINUX
bool
help
An architecture can select this if it provides arch/<arch>/tools/Makefile
with .arch.vmlinux.o target to be linked into vmlinux.
endmenu


@ -19,4 +19,4 @@ obj-$(CONFIG_KEXEC_CORE) += kexec/
obj-$(CONFIG_KEXEC_FILE) += purgatory/
# for cleaning
subdir- += boot
subdir- += boot tools


@ -234,6 +234,8 @@ config PPC
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_ARGS if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
select HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS if PPC_FTRACE_OUT_OF_LINE || (PPC32 && ARCH_USING_PATCHABLE_FUNCTION_ENTRY)
select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS if HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS
select HAVE_DYNAMIC_FTRACE_WITH_REGS if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
select HAVE_EBPF_JIT
select HAVE_EFFICIENT_UNALIGNED_ACCESS
@ -243,7 +245,7 @@ config PPC
select HAVE_FUNCTION_DESCRIPTORS if PPC64_ELF_ABI_V1
select HAVE_FUNCTION_ERROR_INJECTION
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER if PPC64 || (PPC32 && CC_IS_GCC)
select HAVE_FUNCTION_TRACER if !COMPILE_TEST && (PPC64 || (PPC32 && CC_IS_GCC))
select HAVE_GCC_PLUGINS if GCC_VERSION >= 50200 # plugin support on gcc <= 5.1 is buggy on PPC
select HAVE_GENERIC_VDSO
select HAVE_HARDLOCKUP_DETECTOR_ARCH if PPC_BOOK3S_64 && SMP
@ -273,10 +275,12 @@ config PPC
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_RELIABLE_STACKTRACE
select HAVE_RSEQ
select HAVE_SAMPLE_FTRACE_DIRECT if HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
select HAVE_SETUP_PER_CPU_AREA if PPC64
select HAVE_SOFTIRQ_ON_OWN_STACK
select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2)
select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r13)
select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,$(m32-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 -mstack-protector-guard-offset=0)
select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,$(m64-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 -mstack-protector-guard-offset=0)
select HAVE_STATIC_CALL if PPC32
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_VIRT_CPU_ACCOUNTING
@ -569,6 +573,22 @@ config ARCH_USING_PATCHABLE_FUNCTION_ENTRY
def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mlittle-endian) if PPC64 && CPU_LITTLE_ENDIAN
def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mbig-endian) if PPC64 && CPU_BIG_ENDIAN
config PPC_FTRACE_OUT_OF_LINE
def_bool PPC64 && ARCH_USING_PATCHABLE_FUNCTION_ENTRY
select ARCH_WANTS_PRE_LINK_VMLINUX
config PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE
int "Number of ftrace out-of-line stubs to reserve within .text"
depends on PPC_FTRACE_OUT_OF_LINE
default 32768
help
Number of stubs to reserve for use by ftrace. This space is
reserved within .text, and is distinct from any additional space
added at the end of .text before the final vmlinux link. Set to
zero to have stubs only be generated at the end of vmlinux (only
if the size of vmlinux is less than 32MB). Set to a higher value
if building vmlinux larger than 48MB.
config HOTPLUG_CPU
bool "Support for enabling/disabling CPUs"
depends on SMP && (PPC_PSERIES || \


@ -223,12 +223,6 @@ config PPC_EARLY_DEBUG_RTAS_CONSOLE
help
Select this to enable early debugging via the RTAS console.
config PPC_EARLY_DEBUG_MAPLE
bool "Maple real mode"
depends on PPC_MAPLE
help
Select this to enable early debugging for Maple.
config PPC_EARLY_DEBUG_PAS_REALMODE
bool "PA Semi real mode"
depends on PPC_PASEMI


@ -62,14 +62,14 @@ KBUILD_LDFLAGS_MODULE += arch/powerpc/lib/crtsavres.o
endif
ifdef CONFIG_CPU_LITTLE_ENDIAN
KBUILD_CFLAGS += -mlittle-endian
KBUILD_CPPFLAGS += -mlittle-endian
KBUILD_LDFLAGS += -EL
LDEMULATION := lppc
GNUTARGET := powerpcle
MULTIPLEWORD := -mno-multiple
KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-save-toc-indirect)
else
KBUILD_CFLAGS += $(call cc-option,-mbig-endian)
KBUILD_CPPFLAGS += $(call cc-option,-mbig-endian)
KBUILD_LDFLAGS += -EB
LDEMULATION := ppc
GNUTARGET := powerpc
@ -95,18 +95,11 @@ aflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mbig-endian)
aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mlittle-endian
ifeq ($(HAS_BIARCH),y)
KBUILD_CFLAGS += -m$(BITS)
KBUILD_CPPFLAGS += -m$(BITS)
KBUILD_AFLAGS += -m$(BITS)
KBUILD_LDFLAGS += -m elf$(BITS)$(LDEMULATION)
endif
cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard=tls
ifdef CONFIG_PPC64
cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r13
else
cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r2
endif
LDFLAGS_vmlinux-y := -Bstatic
LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) += -z notext
@ -155,7 +148,15 @@ CC_FLAGS_NO_FPU := $(call cc-option,-msoft-float)
ifdef CONFIG_FUNCTION_TRACER
ifdef CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY
KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
CC_FLAGS_FTRACE := -fpatchable-function-entry=1
else
ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS # PPC32 only
CC_FLAGS_FTRACE := -fpatchable-function-entry=3,1
else
CC_FLAGS_FTRACE := -fpatchable-function-entry=2
endif
endif
else
CC_FLAGS_FTRACE := -pg
ifdef CONFIG_MPROFILE_KERNEL
@ -175,7 +176,6 @@ KBUILD_CPPFLAGS += -I $(srctree)/arch/powerpc $(asinstr)
KBUILD_AFLAGS += $(AFLAGS-y)
KBUILD_CFLAGS += $(CC_FLAGS_NO_FPU)
KBUILD_CFLAGS += $(CFLAGS-y)
CPP = $(CC) -E $(KBUILD_CFLAGS)
CHECKFLAGS += -m$(BITS) -D__powerpc__ -D__powerpc$(BITS)__
ifdef CONFIG_CPU_BIG_ENDIAN
@ -359,7 +359,7 @@ define archhelp
echo ' install - Install kernel using'
echo ' (your) ~/bin/$(INSTALLKERNEL) or'
echo ' (distribution) /sbin/$(INSTALLKERNEL) or'
echo ' install to $$(INSTALL_PATH) and run lilo'
echo ' install to $$(INSTALL_PATH)'
echo ' *_defconfig - Select default config from arch/powerpc/configs'
echo ''
echo ' Targets with <dt> embed a device tree blob inside the image'
@ -402,9 +402,11 @@ prepare: stack_protector_prepare
PHONY += stack_protector_prepare
stack_protector_prepare: prepare0
ifdef CONFIG_PPC64
$(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
$(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 \
-mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
else
$(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
$(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 \
-mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
endif
endif


@ -24,6 +24,9 @@ else
$(CONFIG_SHELL) $(srctree)/arch/powerpc/tools/relocs_check.sh "$(OBJDUMP)" "$(NM)" "$@"
endif
quiet_cmd_ftrace_check = CHKFTRC $@
cmd_ftrace_check = $(CONFIG_SHELL) $(srctree)/arch/powerpc/tools/ftrace_check.sh "$(NM)" "$@"
# `@true` prevents complaint when there is nothing to be done
vmlinux: FORCE
@ -34,6 +37,11 @@ endif
ifdef CONFIG_RELOCATABLE
$(call if_changed,relocs_check)
endif
ifdef CONFIG_FUNCTION_TRACER
ifndef CONFIG_PPC64_ELF_ABI_V1
$(call cmd,ftrace_check)
endif
endif
clean:
rm -f .tmp_symbols.txt


@ -30,7 +30,6 @@ zImage.coff
zImage.epapr
zImage.holly
zImage.*lds
zImage.maple
zImage.miboot
zImage.pmac
zImage.pseries


@ -276,7 +276,6 @@ quiet_cmd_wrap = WRAP $@
image-$(CONFIG_PPC_PSERIES) += zImage.pseries
image-$(CONFIG_PPC_POWERNV) += zImage.pseries
image-$(CONFIG_PPC_MAPLE) += zImage.maple
image-$(CONFIG_PPC_IBM_CELL_BLADE) += zImage.pseries
image-$(CONFIG_PPC_PS3) += dtbImage.ps3
image-$(CONFIG_PPC_CHRP) += zImage.chrp
@ -444,7 +443,7 @@ $(obj)/zImage.initrd: $(addprefix $(obj)/, $(initrd-y))
clean-files += $(image-) $(initrd-) cuImage.* dtbImage.* treeImage.* \
zImage zImage.initrd zImage.chrp zImage.coff zImage.holly \
zImage.miboot zImage.pmac zImage.pseries \
zImage.maple simpleImage.* otheros.bld
simpleImage.* otheros.bld
# clean up files cached by wrapper
clean-kernel-base := vmlinux.strip vmlinux.bin


@ -271,11 +271,6 @@ pseries)
fi
make_space=n
;;
maple)
platformo="$object/of.o $object/epapr.o"
link_address='0x400000'
make_space=n
;;
pmac|chrp)
platformo="$object/of.o $object/epapr.o"
make_space=n
@ -517,7 +512,7 @@ fi
# post-processing needed for some platforms
case "$platform" in
pseries|chrp|maple)
pseries|chrp)
$objbin/addnote "$ofile"
;;
coff)


@ -1,111 +0,0 @@
CONFIG_PPC64=y
CONFIG_SMP=y
CONFIG_NR_CPUS=4
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_MAC_PARTITION=y
# CONFIG_PPC_POWERNV is not set
# CONFIG_PPC_PSERIES is not set
# CONFIG_PPC_PMAC is not set
CONFIG_PPC_MAPLE=y
CONFIG_UDBG_RTAS_CONSOLE=y
CONFIG_GEN_RTC=y
CONFIG_KEXEC=y
CONFIG_IRQ_ALL_CPUS=y
CONFIG_PPC_4K_PAGES=y
CONFIG_PCI_MSI=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=m
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_IPV6 is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=8192
# CONFIG_SCSI_PROC_FS is not set
CONFIG_BLK_DEV_SD=y
CONFIG_BLK_DEV_SR=y
CONFIG_CHR_DEV_SG=y
CONFIG_SCSI_IPR=y
CONFIG_ATA=y
CONFIG_PATA_AMD=y
CONFIG_ATA_GENERIC=y
CONFIG_NETDEVICES=y
CONFIG_AMD8111_ETH=y
CONFIG_TIGON3=y
CONFIG_E1000=y
CONFIG_USB_PEGASUS=y
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_SERIO is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_HVC_RTAS=y
# CONFIG_HW_RANDOM is not set
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_AMD8111=y
# CONFIG_VGA_CONSOLE is not set
CONFIG_HID_GYRATION=y
CONFIG_HID_PANTHERLORD=y
CONFIG_HID_PETALYNX=y
CONFIG_HID_SAMSUNG=y
CONFIG_HID_SUNPLUS=y
CONFIG_USB=y
CONFIG_USB_MON=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
# CONFIG_USB_EHCI_HCD_PPC_OF is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_UHCI_HCD=y
CONFIG_USB_SERIAL=y
CONFIG_USB_SERIAL_GENERIC=y
CONFIG_USB_SERIAL_CYPRESS_M8=m
CONFIG_USB_SERIAL_GARMIN=m
CONFIG_USB_SERIAL_IPW=m
CONFIG_USB_SERIAL_KEYSPAN=y
CONFIG_USB_SERIAL_TI=m
CONFIG_EXT2_FS=y
CONFIG_EXT4_FS=y
CONFIG_FS_DAX=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_CRAMFS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
CONFIG_ROOT_NFS=y
CONFIG_NLS_DEFAULT="utf-8"
CONFIG_NLS_UTF8=y
CONFIG_CRC_CCITT=y
CONFIG_CRC_T10DIF=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_XMON=y
CONFIG_XMON_DEFAULT=y
CONFIG_BOOTX_TEXT=y
CONFIG_CRYPTO_ECB=m
CONFIG_CRYPTO_PCBC=m
# CONFIG_CRYPTO_HW is not set
CONFIG_PRINTK_TIME=y


@ -44,7 +44,6 @@ CONFIG_PPC_SMLPAR=y
CONFIG_IBMEBUS=y
CONFIG_PAPR_SCM=m
CONFIG_PPC_SVM=y
CONFIG_PPC_MAPLE=y
CONFIG_PPC_PASEMI=y
CONFIG_PPC_PASEMI_IOMMU=y
CONFIG_PPC_PS3=y


@ -193,6 +193,7 @@ static inline void cpu_feature_keys_init(void) { }
#define CPU_FTR_ARCH_31 LONG_ASM_CONST(0x0004000000000000)
#define CPU_FTR_DAWR1 LONG_ASM_CONST(0x0008000000000000)
#define CPU_FTR_DEXCR_NPHIE LONG_ASM_CONST(0x0010000000000000)
#define CPU_FTR_P11_PVR LONG_ASM_CONST(0x0020000000000000)
#ifndef __ASSEMBLY__
@ -454,7 +455,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_DAWR | CPU_FTR_DAWR1 | \
CPU_FTR_DEXCR_NPHIE)
#define CPU_FTRS_POWER11 CPU_FTRS_POWER10
#define CPU_FTRS_POWER11 (CPU_FTRS_POWER10 | CPU_FTR_P11_PVR)
#define CPU_FTRS_CELL (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
@ -475,7 +476,7 @@ static inline void cpu_feature_keys_init(void) { }
(CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | CPU_FTRS_POWER8 | \
CPU_FTR_ALTIVEC_COMP | CPU_FTR_VSX_COMP | CPU_FTRS_POWER9 | \
CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | \
CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10)
CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10 | CPU_FTRS_POWER11)
#else
#define CPU_FTRS_POSSIBLE \
(CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \
@ -483,7 +484,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTRS_POWER8 | CPU_FTRS_CELL | CPU_FTRS_PA6T | \
CPU_FTR_VSX_COMP | CPU_FTR_ALTIVEC_COMP | CPU_FTRS_POWER9 | \
CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | \
CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10)
CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10 | CPU_FTRS_POWER11)
#endif /* CONFIG_CPU_LITTLE_ENDIAN */
#endif
#else
@ -547,7 +548,7 @@ enum {
(CPU_FTRS_POSSIBLE & ~CPU_FTR_HVMODE & ~CPU_FTR_DBELL & \
CPU_FTRS_POWER7 & CPU_FTRS_POWER8E & CPU_FTRS_POWER8 & \
CPU_FTRS_POWER9 & CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_POWER9_DD2_2 & \
CPU_FTRS_POWER10 & CPU_FTRS_DT_CPU_BASE)
CPU_FTRS_POWER10 & CPU_FTRS_POWER11 & CPU_FTRS_DT_CPU_BASE)
#else
#define CPU_FTRS_ALWAYS \
(CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & \
@ -555,7 +556,7 @@ enum {
CPU_FTRS_PA6T & CPU_FTRS_POWER8 & CPU_FTRS_POWER8E & \
~CPU_FTR_HVMODE & ~CPU_FTR_DBELL & CPU_FTRS_POSSIBLE & \
CPU_FTRS_POWER9 & CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_POWER9_DD2_2 & \
CPU_FTRS_POWER10 & CPU_FTRS_DT_CPU_BASE)
CPU_FTRS_POWER10 & CPU_FTRS_POWER11 & CPU_FTRS_DT_CPU_BASE)
#endif /* CONFIG_CPU_LITTLE_ENDIAN */
#endif
#else


@ -1,8 +1,8 @@
#ifndef _ASM_POWERPC_DTL_H
#define _ASM_POWERPC_DTL_H
#include <linux/rwsem.h>
#include <asm/lppaca.h>
#include <linux/spinlock_types.h>
/*
* Layout of entries in the hypervisor's dispatch trace log buffer.
@ -35,7 +35,7 @@ struct dtl_entry {
#define DTL_LOG_ALL (DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)
extern struct kmem_cache *dtl_cache;
extern rwlock_t dtl_access_lock;
extern struct rw_semaphore dtl_access_lock;
extern void register_dtl_buffer(int cpu);
extern void alloc_dtl_buffers(unsigned long *time_limit);


@ -19,6 +19,7 @@ extern int is_fadump_active(void);
extern int should_fadump_crash(void);
extern void crash_fadump(struct pt_regs *, const char *);
extern void fadump_cleanup(void);
void fadump_setup_param_area(void);
extern void fadump_append_bootargs(void);
#else /* CONFIG_FA_DUMP */
@ -26,6 +27,7 @@ static inline int is_fadump_active(void) { return 0; }
static inline int should_fadump_crash(void) { return 0; }
static inline void crash_fadump(struct pt_regs *regs, const char *str) { }
static inline void fadump_cleanup(void) { }
static inline void fadump_setup_param_area(void) { }
static inline void fadump_append_bootargs(void) { }
#endif /* !CONFIG_FA_DUMP */
@ -34,4 +36,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
int depth, void *data);
extern int fadump_reserve_mem(void);
#endif
#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
void fadump_cma_init(void);
#else
static inline void fadump_cma_init(void) { }
#endif
#endif /* _ASM_POWERPC_FADUMP_H */


@ -24,7 +24,10 @@ unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,
struct module;
struct dyn_ftrace;
struct dyn_arch_ftrace {
struct module *mod;
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
/* pointer to the associated out-of-line stub */
unsigned long ool_stub;
#endif
};
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
@ -110,8 +113,36 @@ static inline u8 this_cpu_get_ftrace_enabled(void) { return 1; }
#ifdef CONFIG_FUNCTION_TRACER
extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[];
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
struct ftrace_ool_stub {
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS
struct ftrace_ops *ftrace_op;
#endif
u32 insn[4];
} __aligned(sizeof(unsigned long));
extern struct ftrace_ool_stub ftrace_ool_stub_text_end[], ftrace_ool_stub_text[],
ftrace_ool_stub_inittext[];
extern unsigned int ftrace_ool_stub_text_end_count, ftrace_ool_stub_text_count,
ftrace_ool_stub_inittext_count;
#endif
void ftrace_free_init_tramp(void);
unsigned long ftrace_call_adjust(unsigned long addr);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
/*
* When an ftrace registered caller is tracing a function that is also set by a
* register_ftrace_direct() call, it needs to be differentiated in the
* ftrace_caller trampoline so that the direct call can be invoked after the
* other ftrace ops. To do this, place the direct caller in the orig_gpr3 field
* of pt_regs. This tells ftrace_caller that there's a direct caller.
*/
static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsigned long addr)
{
struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
regs->orig_gpr3 = addr;
}
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
#else
static inline void ftrace_free_init_tramp(void) { }
static inline unsigned long ftrace_call_adjust(unsigned long addr) { return addr; }


@ -495,6 +495,7 @@
#define H_GUEST_CAP_COPY_MEM (1UL<<(63-0))
#define H_GUEST_CAP_POWER9 (1UL<<(63-1))
#define H_GUEST_CAP_POWER10 (1UL<<(63-2))
#define H_GUEST_CAP_POWER11 (1UL<<(63-3))
#define H_GUEST_CAP_BITMAP2 (1UL<<(63-63))
#ifndef __ASSEMBLY__


@ -15,7 +15,7 @@
#define ARCH_FUNC_PREFIX "."
#endif
#ifdef CONFIG_KFENCE
extern bool kfence_early_init;
extern bool kfence_disabled;
static inline void disable_kfence(void)
@ -27,7 +27,11 @@ static inline bool arch_kfence_init_pool(void)
{
return !kfence_disabled;
}
#endif
static inline bool kfence_early_init_enabled(void)
{
return IS_ENABLED(CONFIG_KFENCE) && kfence_early_init;
}
#ifdef CONFIG_PPC64
static inline bool kfence_protect_page(unsigned long addr, bool protect)


@ -684,10 +684,16 @@ int kvmhv_nestedv2_set_ptbl_entry(unsigned long lpid, u64 dw0, u64 dw1);
int kvmhv_nestedv2_parse_output(struct kvm_vcpu *vcpu);
int kvmhv_nestedv2_set_vpa(struct kvm_vcpu *vcpu, unsigned long vpa);
int kmvhv_counters_tracepoint_regfunc(void);
void kmvhv_counters_tracepoint_unregfunc(void);
int kvmhv_counters_tracepoint_regfunc(void);
void kvmhv_counters_tracepoint_unregfunc(void);
int kvmhv_get_l2_counters_status(void);
void kvmhv_set_l2_counters_status(int cpu, bool status);
u64 kvmhv_get_l1_to_l2_cs_time(void);
u64 kvmhv_get_l2_to_l1_cs_time(void);
u64 kvmhv_get_l2_runtime_agg(void);
u64 kvmhv_get_l1_to_l2_cs_time_vcpu(void);
u64 kvmhv_get_l2_to_l1_cs_time_vcpu(void);
u64 kvmhv_get_l2_runtime_agg_vcpu(void);
#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */


@ -871,6 +871,11 @@ struct kvm_vcpu_arch {
struct kvmhv_tb_accumulator cede_time; /* time napping inside guest */
#endif
#endif /* CONFIG_KVM_BOOK3S_HV_EXIT_TIMING */
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
u64 l1_to_l2_cs;
u64 l2_to_l1_cs;
u64 l2_runtime_agg;
#endif
};
#define VCPU_FPR(vcpu, i) (vcpu)->arch.fp.fpr[i][TS_FPROFFSET]


@ -4,20 +4,24 @@
#ifdef __KERNEL__
#include <linux/compiler.h>
#include <linux/seq_file.h>
#include <linux/init.h>
#include <linux/dma-mapping.h>
#include <linux/export.h>
#include <linux/time64.h>
#include <asm/page.h>
struct pt_regs;
struct pci_bus;
struct device;
struct device_node;
struct iommu_table;
struct rtc_time;
struct file;
struct pci_dev;
struct pci_controller;
struct kimage;
struct pci_host_bridge;
struct seq_file;
struct machdep_calls {
const char *name;


@ -35,9 +35,11 @@ struct mod_arch_specific {
bool toc_fixed; /* Have we fixed up .TOC.? */
#endif
#ifdef CONFIG_PPC64_ELF_ABI_V1
/* For module function descriptor dereference */
unsigned long start_opd;
unsigned long end_opd;
#endif
#else /* powerpc64 */
/* Indices of PLT sections within module. */
unsigned int core_plt_section;
@ -47,6 +49,11 @@ struct mod_arch_specific {
#ifdef CONFIG_DYNAMIC_FTRACE
unsigned long tramp;
unsigned long tramp_regs;
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
struct ftrace_ool_stub *ool_stubs;
unsigned int ool_stub_count;
unsigned int ool_stub_index;
#endif
#endif
};


@ -587,12 +587,26 @@
#define PPC_RAW_MTSPR(spr, d) (0x7c0003a6 | ___PPC_RS(d) | __PPC_SPR(spr))
#define PPC_RAW_EIEIO() (0x7c0006ac)
/* bcl 20,31,$+4 */
#define PPC_RAW_BCL4() (0x429f0005)
#define PPC_RAW_BRANCH(offset) (0x48000000 | PPC_LI(offset))
#define PPC_RAW_BL(offset) (0x48000001 | PPC_LI(offset))
#define PPC_RAW_TW(t0, a, b) (0x7c000008 | ___PPC_RS(t0) | ___PPC_RA(a) | ___PPC_RB(b))
#define PPC_RAW_TRAP() PPC_RAW_TW(31, 0, 0)
#define PPC_RAW_SETB(t, bfa) (0x7c000100 | ___PPC_RT(t) | ___PPC_RA((bfa) << 2))
#ifdef CONFIG_PPC32
#define PPC_RAW_STL PPC_RAW_STW
#define PPC_RAW_STLU PPC_RAW_STWU
#define PPC_RAW_LL PPC_RAW_LWZ
#define PPC_RAW_CMPLI PPC_RAW_CMPWI
#else
#define PPC_RAW_STL PPC_RAW_STD
#define PPC_RAW_STLU PPC_RAW_STDU
#define PPC_RAW_LL PPC_RAW_LD
#define PPC_RAW_CMPLI PPC_RAW_CMPDI
#endif
/* Deal with instructions that older assemblers aren't aware of */
#define PPC_BCCTR_FLUSH stringify_in_c(.long PPC_INST_BCCTR_FLUSH)
#define PPC_CP_ABORT stringify_in_c(.long PPC_RAW_CP_ABORT)
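
A brief, hypothetical illustration (emitter-style code, not from this patch) of why the PPC_RAW_STL/PPC_RAW_LL/PPC_RAW_CMPLI aliases above are useful: one sequence serves both word sizes, assembling to stw/lwz on PPC32 and std/ld on PPC64. Here image (a u32 array) and i are assumed locals of a code generator such as a JIT or trampoline builder.

        /* Spill r3 to the stack and reload it, independent of word size. */
        image[i++] = PPC_RAW_STL(_R3, _R1, 16);  /* stw r3,16(r1) on PPC32, std r3,16(r1) on PPC64 */
        image[i++] = PPC_RAW_LL(_R3, _R1, 16);   /* lwz r3,16(r1) on PPC32, ld r3,16(r1) on PPC64 */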


@ -12,37 +12,37 @@
int change_memory_attr(unsigned long addr, int numpages, long action);
static inline int set_memory_ro(unsigned long addr, int numpages)
static inline int __must_check set_memory_ro(unsigned long addr, int numpages)
{
return change_memory_attr(addr, numpages, SET_MEMORY_RO);
}
static inline int set_memory_rw(unsigned long addr, int numpages)
static inline int __must_check set_memory_rw(unsigned long addr, int numpages)
{
return change_memory_attr(addr, numpages, SET_MEMORY_RW);
}
static inline int set_memory_nx(unsigned long addr, int numpages)
static inline int __must_check set_memory_nx(unsigned long addr, int numpages)
{
return change_memory_attr(addr, numpages, SET_MEMORY_NX);
}
static inline int set_memory_x(unsigned long addr, int numpages)
static inline int __must_check set_memory_x(unsigned long addr, int numpages)
{
return change_memory_attr(addr, numpages, SET_MEMORY_X);
}
static inline int set_memory_np(unsigned long addr, int numpages)
static inline int __must_check set_memory_np(unsigned long addr, int numpages)
{
return change_memory_attr(addr, numpages, SET_MEMORY_NP);
}
static inline int set_memory_p(unsigned long addr, int numpages)
static inline int __must_check set_memory_p(unsigned long addr, int numpages)
{
return change_memory_attr(addr, numpages, SET_MEMORY_P);
}
static inline int set_memory_rox(unsigned long addr, int numpages)
static inline int __must_check set_memory_rox(unsigned long addr, int numpages)
{
return change_memory_attr(addr, numpages, SET_MEMORY_ROX);
}
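
Because change_memory_attr() can fail, the __must_check annotations above mean callers are now expected to act on the return value rather than assume the protection change took effect. A minimal sketch of the expected calling pattern (the names err and text_page are illustrative, not from this patch):

        int err;

        err = set_memory_rox((unsigned long)text_page, 1);
        if (err)
                return err;     /* don't execute code whose mapping may still be writable */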


@ -216,7 +216,6 @@ spu_disable_spu (struct spu_context *ctx)
*/
extern const struct spu_priv1_ops spu_priv1_mmio_ops;
extern const struct spu_priv1_ops spu_priv1_beat_ops;
extern const struct spu_management_ops spu_management_of_ops;


@ -173,9 +173,4 @@ int emulate_step(struct pt_regs *regs, ppc_inst_t instr);
*/
extern int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op);
extern void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
const void *mem, bool cross_endian);
extern void emulate_vsx_store(struct instruction_op *op,
const union vsx_reg *reg, void *mem,
bool cross_endian);
extern int emulate_dcbz(unsigned long ea, struct pt_regs *regs);


@ -38,7 +38,6 @@ void __init udbg_early_init(void);
void __init udbg_init_debug_lpar(void);
void __init udbg_init_debug_lpar_hvsi(void);
void __init udbg_init_pmac_realmode(void);
void __init udbg_init_maple_realmode(void);
void __init udbg_init_pas_realmode(void);
void __init udbg_init_rtas_panel(void);
void __init udbg_init_rtas_console(void);


@ -25,6 +25,7 @@ int vdso_getcpu_init(void);
#ifdef __VDSO64__
#define V_FUNCTION_BEGIN(name) \
.globl name; \
.type name,@function; \
name: \
#define V_FUNCTION_END(name) \


@ -7,6 +7,8 @@
#ifndef __ASSEMBLY__
#include <asm/vdso_datapage.h>
static __always_inline int do_syscall_3(const unsigned long _r0, const unsigned long _r3,
const unsigned long _r4, const unsigned long _r5)
{
@ -43,11 +45,21 @@ static __always_inline ssize_t getrandom_syscall(void *buffer, size_t len, unsig
static __always_inline struct vdso_rng_data *__arch_get_vdso_rng_data(void)
{
return NULL;
struct vdso_arch_data *data;
asm (
" bcl 20, 31, .+4 ;"
"0: mflr %0 ;"
" addis %0, %0, (_vdso_datapage - 0b)@ha ;"
" addi %0, %0, (_vdso_datapage - 0b)@l ;"
: "=r" (data) : : "lr"
);
return &data->rng_data;
}
ssize_t __c_kernel_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state,
size_t opaque_len, const struct vdso_rng_data *vd);
size_t opaque_len);
#endif /* !__ASSEMBLY__ */


@ -28,8 +28,9 @@ struct vdso_arch_data {
__u32 syscall_map[SYSCALL_MAP_SIZE]; /* Map of syscalls */
__u32 compat_syscall_map[SYSCALL_MAP_SIZE]; /* Map of compat syscalls */
struct vdso_data data[CS_BASES];
struct vdso_rng_data rng_data;
struct vdso_data data[CS_BASES] __aligned(1 << CONFIG_PAGE_SHIFT);
};
#else /* CONFIG_PPC64 */
@ -38,8 +39,9 @@ struct vdso_arch_data {
__u64 tb_ticks_per_sec; /* Timebase tics / sec */
__u32 syscall_map[SYSCALL_MAP_SIZE]; /* Map of syscalls */
__u32 compat_syscall_map[0]; /* No compat syscalls on PPC32 */
struct vdso_data data[CS_BASES];
struct vdso_rng_data rng_data;
struct vdso_data data[CS_BASES] __aligned(1 << CONFIG_PAGE_SHIFT);
};
#endif /* CONFIG_PPC64 */
@ -48,29 +50,17 @@ extern struct vdso_arch_data *vdso_data;
#else /* __ASSEMBLY__ */
.macro get_datapage ptr
.macro get_datapage ptr offset=0
bcl 20, 31, .+4
999:
mflr \ptr
addis \ptr, \ptr, (_vdso_datapage - 999b)@ha
addi \ptr, \ptr, (_vdso_datapage - 999b)@l
addis \ptr, \ptr, (_vdso_datapage - 999b + \offset)@ha
addi \ptr, \ptr, (_vdso_datapage - 999b + \offset)@l
.endm
#include <asm/asm-offsets.h>
#include <asm/page.h>
.macro get_realdatapage ptr scratch
get_datapage \ptr
#ifdef CONFIG_TIME_NS
lwz \scratch, VDSO_CLOCKMODE_OFFSET(\ptr)
xoris \scratch, \scratch, VDSO_CLOCKMODE_TIMENS@h
xori \scratch, \scratch, VDSO_CLOCKMODE_TIMENS@l
cntlzw \scratch, \scratch
rlwinm \scratch, \scratch, PAGE_SHIFT - 5, 1 << PAGE_SHIFT
add \ptr, \ptr, \scratch
#endif
.endm
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */


@ -335,7 +335,6 @@ int main(void)
/* datapage offsets for use by vdso */
OFFSET(VDSO_DATA_OFFSET, vdso_arch_data, data);
OFFSET(VDSO_RNG_DATA_OFFSET, vdso_arch_data, rng_data);
OFFSET(CFG_TB_TICKS_PER_SEC, vdso_arch_data, tb_ticks_per_sec);
#ifdef CONFIG_PPC64
OFFSET(CFG_ICACHE_BLOCKSZ, vdso_arch_data, icache_block_size);
@ -347,8 +346,6 @@ int main(void)
#else
OFFSET(CFG_SYSCALL_MAP32, vdso_arch_data, syscall_map);
#endif
OFFSET(VDSO_CLOCKMODE_OFFSET, vdso_arch_data, data[0].clock_mode);
DEFINE(VDSO_CLOCKMODE_TIMENS, VDSO_CLOCKMODE_TIMENS);
#ifdef CONFIG_BUG
DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));
@ -597,7 +594,6 @@ int main(void)
HSTATE_FIELD(HSTATE_DABR, dabr);
HSTATE_FIELD(HSTATE_DECEXP, dec_expires);
HSTATE_FIELD(HSTATE_SPLIT_MODE, kvm_split_mode);
DEFINE(IPI_PRIORITY, IPI_PRIORITY);
OFFSET(KVM_SPLIT_RPR, kvm_split_mode, rpr);
OFFSET(KVM_SPLIT_PMMAR, kvm_split_mode, pmmar);
OFFSET(KVM_SPLIT_LDBAR, kvm_split_mode, ldbar);
@ -677,5 +673,16 @@ int main(void)
DEFINE(BPT_SIZE, BPT_SIZE);
#endif
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
DEFINE(FTRACE_OOL_STUB_SIZE, sizeof(struct ftrace_ool_stub));
#endif
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS
OFFSET(FTRACE_OPS_FUNC, ftrace_ops, func);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
OFFSET(FTRACE_OPS_DIRECT_CALL, ftrace_ops, direct_call);
#endif
#endif
return 0;
}


@ -78,26 +78,38 @@ static struct cma *fadump_cma;
* But for some reason even if it fails we still have the memory reservation
* with us and we can still continue doing fadump.
*/
static int __init fadump_cma_init(void)
void __init fadump_cma_init(void)
{
unsigned long long base, size;
unsigned long long base, size, end;
int rc;
if (!fw_dump.fadump_enabled)
return 0;
if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
fw_dump.dump_active)
return;
/*
* Do not use CMA if user has provided fadump=nocma kernel parameter.
* Return 1 to continue with fadump old behaviour.
*/
if (fw_dump.nocma)
return 1;
if (fw_dump.nocma || !fw_dump.boot_memory_size)
return;
/*
* [base, end) should be reserved during early init in
* fadump_reserve_mem(). No need to check this here as
* cma_init_reserved_mem() already checks for overlap.
* Here we give the aligned chunk of this reserved memory to CMA.
*/
base = fw_dump.reserve_dump_area_start;
size = fw_dump.boot_memory_size;
end = base + size;
if (!size)
return 0;
base = ALIGN(base, CMA_MIN_ALIGNMENT_BYTES);
end = ALIGN_DOWN(end, CMA_MIN_ALIGNMENT_BYTES);
size = end - base;
if (end <= base) {
pr_warn("%s: Too less memory to give to CMA\n", __func__);
return;
}
rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
if (rc) {
@ -108,7 +120,7 @@ static int __init fadump_cma_init(void)
* blocked from production system usage. Hence return 1,
* so that we can continue with fadump.
*/
return 1;
return;
}
/*
@ -120,15 +132,13 @@ static int __init fadump_cma_init(void)
/*
* So we now have successfully initialized cma area for fadump.
*/
pr_info("Initialized 0x%lx bytes cma area at %ldMB from 0x%lx "
pr_info("Initialized [0x%llx, %luMB] cma area from [0x%lx, %luMB] "
"bytes of memory reserved for firmware-assisted dump\n",
cma_get_size(fadump_cma),
(unsigned long)cma_get_base(fadump_cma) >> 20,
fw_dump.reserve_dump_area_size);
return 1;
cma_get_base(fadump_cma), cma_get_size(fadump_cma) >> 20,
fw_dump.reserve_dump_area_start,
fw_dump.boot_memory_size >> 20);
return;
}
#else
static int __init fadump_cma_init(void) { return 1; }
#endif /* CONFIG_CMA */
/*
@ -143,7 +153,7 @@ void __init fadump_append_bootargs(void)
if (!fw_dump.dump_active || !fw_dump.param_area_supported || !fw_dump.param_area)
return;
if (fw_dump.param_area >= fw_dump.boot_mem_top) {
if (fw_dump.param_area < fw_dump.boot_mem_top) {
if (memblock_reserve(fw_dump.param_area, COMMAND_LINE_SIZE)) {
pr_warn("WARNING: Can't use additional parameters area!\n");
fw_dump.param_area = 0;
@ -558,13 +568,6 @@ int __init fadump_reserve_mem(void)
if (!fw_dump.dump_active) {
fw_dump.boot_memory_size =
PAGE_ALIGN(fadump_calculate_reserve_size());
#ifdef CONFIG_CMA
if (!fw_dump.nocma) {
fw_dump.boot_memory_size =
ALIGN(fw_dump.boot_memory_size,
CMA_MIN_ALIGNMENT_BYTES);
}
#endif
bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
if (fw_dump.boot_memory_size < bootmem_min) {
@ -637,8 +640,6 @@ int __init fadump_reserve_mem(void)
pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
(size >> 20), base, (memblock_phys_mem_size() >> 20));
ret = fadump_cma_init();
}
return ret;
@ -1586,6 +1587,12 @@ static void __init fadump_init_files(void)
return;
}
if (fw_dump.param_area) {
rc = sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr);
if (rc)
pr_err("unable to create bootargs_append sysfs file (%d)\n", rc);
}
debugfs_create_file("fadump_region", 0444, arch_debugfs_dir, NULL,
&fadump_region_fops);
@ -1740,7 +1747,7 @@ static void __init fadump_process(void)
* Reserve memory to store additional parameters to be passed
* for fadump/capture kernel.
*/
static void __init fadump_setup_param_area(void)
void __init fadump_setup_param_area(void)
{
phys_addr_t range_start, range_end;
@ -1748,7 +1755,7 @@ static void __init fadump_setup_param_area(void)
return;
/* This memory can't be used by PFW or bootloader as it is shared across kernels */
if (radix_enabled()) {
if (early_radix_enabled()) {
/*
* Anywhere in the upper half should be good enough as all memory
* is accessible in real mode.
@ -1776,12 +1783,12 @@ static void __init fadump_setup_param_area(void)
COMMAND_LINE_SIZE,
range_start,
range_end);
if (!fw_dump.param_area || sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr)) {
if (!fw_dump.param_area) {
pr_warn("WARNING: Could not setup area to pass additional parameters!\n");
return;
}
memset(phys_to_virt(fw_dump.param_area), 0, COMMAND_LINE_SIZE);
memset((void *)fw_dump.param_area, 0, COMMAND_LINE_SIZE);
}
/*
@ -1807,7 +1814,6 @@ int __init setup_fadump(void)
}
/* Initialize the kernel dump memory structure and register with f/w */
else if (fw_dump.reserve_dump_area_size) {
fadump_setup_param_area();
fw_dump.ops->fadump_init_mem_struct(&fw_dump);
register_fadump();
}


@ -89,69 +89,69 @@ int arch_show_interrupts(struct seq_file *p, int prec)
#if defined(CONFIG_PPC32) && defined(CONFIG_TAU_INT)
if (tau_initialized) {
seq_printf(p, "%*s: ", prec, "TAU");
seq_printf(p, "%*s:", prec, "TAU");
for_each_online_cpu(j)
seq_printf(p, "%10u ", tau_interrupts(j));
seq_put_decimal_ull_width(p, " ", tau_interrupts(j), 10);
seq_puts(p, " PowerPC Thermal Assist (cpu temp)\n");
}
#endif /* CONFIG_PPC32 && CONFIG_TAU_INT */
seq_printf(p, "%*s: ", prec, "LOC");
seq_printf(p, "%*s:", prec, "LOC");
for_each_online_cpu(j)
seq_printf(p, "%10u ", per_cpu(irq_stat, j).timer_irqs_event);
seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).timer_irqs_event, 10);
seq_printf(p, " Local timer interrupts for timer event device\n");
seq_printf(p, "%*s: ", prec, "BCT");
seq_printf(p, "%*s:", prec, "BCT");
for_each_online_cpu(j)
seq_printf(p, "%10u ", per_cpu(irq_stat, j).broadcast_irqs_event);
seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).broadcast_irqs_event, 10);
seq_printf(p, " Broadcast timer interrupts for timer event device\n");
seq_printf(p, "%*s: ", prec, "LOC");
seq_printf(p, "%*s:", prec, "LOC");
for_each_online_cpu(j)
seq_printf(p, "%10u ", per_cpu(irq_stat, j).timer_irqs_others);
seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).timer_irqs_others, 10);
seq_printf(p, " Local timer interrupts for others\n");
seq_printf(p, "%*s: ", prec, "SPU");
seq_printf(p, "%*s:", prec, "SPU");
for_each_online_cpu(j)
seq_printf(p, "%10u ", per_cpu(irq_stat, j).spurious_irqs);
seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).spurious_irqs, 10);
seq_printf(p, " Spurious interrupts\n");
seq_printf(p, "%*s: ", prec, "PMI");
seq_printf(p, "%*s:", prec, "PMI");
for_each_online_cpu(j)
seq_printf(p, "%10u ", per_cpu(irq_stat, j).pmu_irqs);
seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).pmu_irqs, 10);
seq_printf(p, " Performance monitoring interrupts\n");
seq_printf(p, "%*s: ", prec, "MCE");
seq_printf(p, "%*s:", prec, "MCE");
for_each_online_cpu(j)
seq_printf(p, "%10u ", per_cpu(irq_stat, j).mce_exceptions);
seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).mce_exceptions, 10);
seq_printf(p, " Machine check exceptions\n");
#ifdef CONFIG_PPC_BOOK3S_64
if (cpu_has_feature(CPU_FTR_HVMODE)) {
seq_printf(p, "%*s: ", prec, "HMI");
seq_printf(p, "%*s:", prec, "HMI");
for_each_online_cpu(j)
seq_printf(p, "%10u ", paca_ptrs[j]->hmi_irqs);
seq_put_decimal_ull_width(p, " ", paca_ptrs[j]->hmi_irqs, 10);
seq_printf(p, " Hypervisor Maintenance Interrupts\n");
}
#endif
seq_printf(p, "%*s: ", prec, "NMI");
seq_printf(p, "%*s:", prec, "NMI");
for_each_online_cpu(j)
seq_printf(p, "%10u ", per_cpu(irq_stat, j).sreset_irqs);
seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).sreset_irqs, 10);
seq_printf(p, " System Reset interrupts\n");
#ifdef CONFIG_PPC_WATCHDOG
seq_printf(p, "%*s: ", prec, "WDG");
seq_printf(p, "%*s:", prec, "WDG");
for_each_online_cpu(j)
seq_printf(p, "%10u ", per_cpu(irq_stat, j).soft_nmi_irqs);
seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).soft_nmi_irqs, 10);
seq_printf(p, " Watchdog soft-NMI interrupts\n");
#endif
#ifdef CONFIG_PPC_DOORBELL
if (cpu_has_feature(CPU_FTR_DBELL)) {
seq_printf(p, "%*s: ", prec, "DBL");
seq_printf(p, "%*s:", prec, "DBL");
for_each_online_cpu(j)
seq_printf(p, "%10u ", per_cpu(irq_stat, j).doorbell_irqs);
seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).doorbell_irqs, 10);
seq_printf(p, " Doorbell interrupts\n");
}
#endif


@ -105,24 +105,22 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
return addr;
}
static bool arch_kprobe_on_func_entry(unsigned long offset)
static bool arch_kprobe_on_func_entry(unsigned long addr, unsigned long offset)
{
#ifdef CONFIG_PPC64_ELF_ABI_V2
#ifdef CONFIG_KPROBES_ON_FTRACE
return offset <= 16;
#else
return offset <= 8;
#endif
#else
unsigned long ip = ftrace_location(addr);
if (ip)
return offset <= (ip - addr);
if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL))
return offset <= 8;
return !offset;
#endif
}
/* XXX try and fold the magic of kprobe_lookup_name() in this */
kprobe_opcode_t *arch_adjust_kprobe_addr(unsigned long addr, unsigned long offset,
bool *on_func_entry)
{
*on_func_entry = arch_kprobe_on_func_entry(offset);
*on_func_entry = arch_kprobe_on_func_entry(addr, offset);
return (kprobe_opcode_t *)(addr + offset);
}


@ -74,7 +74,7 @@ _GLOBAL(rmci_off)
blr
#endif /* CONFIG_PPC_EARLY_DEBUG_BOOTX */
#if defined(CONFIG_PPC_PMAC) || defined(CONFIG_PPC_MAPLE)
#ifdef CONFIG_PPC_PMAC
/*
* Do an IO access in real mode
@ -137,7 +137,7 @@ _GLOBAL(real_writeb)
sync
isync
blr
#endif /* defined(CONFIG_PPC_PMAC) || defined(CONFIG_PPC_MAPLE) */
#endif // CONFIG_PPC_PMAC
#ifdef CONFIG_PPC_PASEMI
@ -174,7 +174,7 @@ _GLOBAL(real_205_writeb)
#endif /* CONFIG_PPC_PASEMI */
#if defined(CONFIG_CPU_FREQ_PMAC64) || defined(CONFIG_CPU_FREQ_MAPLE)
#ifdef CONFIG_CPU_FREQ_PMAC64
/*
* SCOM access functions for 970 (FX only for now)
*
@ -243,7 +243,7 @@ _GLOBAL(scom970_write)
/* restore interrupts */
mtmsrd r5,1
blr
#endif /* CONFIG_CPU_FREQ_PMAC64 || CONFIG_CPU_FREQ_MAPLE */
#endif // CONFIG_CPU_FREQ_PMAC64
/* kexec_wait(phys_cpu)
*


@ -205,7 +205,9 @@ static int relacmp(const void *_x, const void *_y)
/* Get size of potential trampolines required. */
static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
const Elf64_Shdr *sechdrs)
const Elf64_Shdr *sechdrs,
char *secstrings,
struct module *me)
{
/* One extra reloc so it's always 0-addr terminated */
unsigned long relocs = 1;
@ -241,13 +243,25 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
}
}
#ifdef CONFIG_DYNAMIC_FTRACE
/* make the trampoline to the ftrace_caller */
relocs++;
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
/* an additional one for ftrace_regs_caller */
relocs++;
#endif
/* stubs for ftrace_caller and ftrace_regs_caller */
relocs += IS_ENABLED(CONFIG_DYNAMIC_FTRACE) + IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS);
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
/* stubs for the function tracer */
for (i = 1; i < hdr->e_shnum; i++) {
if (!strcmp(secstrings + sechdrs[i].sh_name, "__patchable_function_entries")) {
me->arch.ool_stub_count = sechdrs[i].sh_size / sizeof(unsigned long);
me->arch.ool_stub_index = 0;
relocs += roundup(me->arch.ool_stub_count * sizeof(struct ftrace_ool_stub),
sizeof(struct ppc64_stub_entry)) /
sizeof(struct ppc64_stub_entry);
break;
}
}
if (i == hdr->e_shnum) {
pr_err("%s: doesn't contain __patchable_function_entries.\n", me->name);
return -ENOEXEC;
}
#endif
pr_debug("Looks like a total of %lu stubs, max\n", relocs);
@ -460,7 +474,7 @@ int module_frob_arch_sections(Elf64_Ehdr *hdr,
#endif
/* Override the stubs size */
sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs);
sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs, secstrings, me);
return 0;
}
@ -1085,6 +1099,37 @@ int module_trampoline_target(struct module *mod, unsigned long addr,
return 0;
}
static int setup_ftrace_ool_stubs(const Elf64_Shdr *sechdrs, unsigned long addr, struct module *me)
{
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
unsigned int i, total_stubs, num_stubs;
struct ppc64_stub_entry *stub;
total_stubs = sechdrs[me->arch.stubs_section].sh_size / sizeof(*stub);
num_stubs = roundup(me->arch.ool_stub_count * sizeof(struct ftrace_ool_stub),
sizeof(struct ppc64_stub_entry)) / sizeof(struct ppc64_stub_entry);
/* Find the next available entry */
stub = (void *)sechdrs[me->arch.stubs_section].sh_addr;
for (i = 0; stub_func_addr(stub[i].funcdata); i++)
if (WARN_ON(i >= total_stubs))
return -1;
if (WARN_ON(i + num_stubs > total_stubs))
return -1;
stub += i;
me->arch.ool_stubs = (struct ftrace_ool_stub *)stub;
/* reserve stubs */
for (i = 0; i < num_stubs; i++)
if (patch_u32((void *)&stub->funcdata, PPC_RAW_NOP()))
return -1;
#endif
return 0;
}
int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs)
{
mod->arch.tramp = stub_for_addr(sechdrs,
@ -1103,6 +1148,9 @@ int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs)
if (!mod->arch.tramp)
return -ENOENT;
if (setup_ftrace_ool_stubs(sechdrs, mod->arch.tramp, mod))
return -ENOENT;
return 0;
}
#endif


@ -908,6 +908,9 @@ void __init early_init_devtree(void *params)
mmu_early_init_devtree();
/* Setup param area for passing additional parameters to fadump capture kernel. */
fadump_setup_param_area();
#ifdef CONFIG_PPC_POWERNV
/* Scan and build the list of machine check recoverable ranges */
of_scan_flat_dt(early_init_dt_scan_recoverable_ranges, NULL);


@ -2792,90 +2792,6 @@ static void __init flatten_device_tree(void)
dt_struct_start, dt_struct_end);
}
#ifdef CONFIG_PPC_MAPLE
/* PIBS Version 1.05.0000 04/26/2005 has an incorrect /ht/isa/ranges property.
* The values are bad, and it doesn't even have the right number of cells. */
static void __init fixup_device_tree_maple(void)
{
phandle isa;
u32 rloc = 0x01002000; /* IO space; PCI device = 4 */
u32 isa_ranges[6];
char *name;
name = "/ht@0/isa@4";
isa = call_prom("finddevice", 1, 1, ADDR(name));
if (!PHANDLE_VALID(isa)) {
name = "/ht@0/isa@6";
isa = call_prom("finddevice", 1, 1, ADDR(name));
rloc = 0x01003000; /* IO space; PCI device = 6 */
}
if (!PHANDLE_VALID(isa))
return;
if (prom_getproplen(isa, "ranges") != 12)
return;
if (prom_getprop(isa, "ranges", isa_ranges, sizeof(isa_ranges))
== PROM_ERROR)
return;
if (isa_ranges[0] != 0x1 ||
isa_ranges[1] != 0xf4000000 ||
isa_ranges[2] != 0x00010000)
return;
prom_printf("Fixing up bogus ISA range on Maple/Apache...\n");
isa_ranges[0] = 0x1;
isa_ranges[1] = 0x0;
isa_ranges[2] = rloc;
isa_ranges[3] = 0x0;
isa_ranges[4] = 0x0;
isa_ranges[5] = 0x00010000;
prom_setprop(isa, name, "ranges",
isa_ranges, sizeof(isa_ranges));
}
#define CPC925_MC_START 0xf8000000
#define CPC925_MC_LENGTH 0x1000000
/* The values for memory-controller don't have right number of cells */
static void __init fixup_device_tree_maple_memory_controller(void)
{
phandle mc;
u32 mc_reg[4];
char *name = "/hostbridge@f8000000";
u32 ac, sc;
mc = call_prom("finddevice", 1, 1, ADDR(name));
if (!PHANDLE_VALID(mc))
return;
if (prom_getproplen(mc, "reg") != 8)
return;
prom_getprop(prom.root, "#address-cells", &ac, sizeof(ac));
prom_getprop(prom.root, "#size-cells", &sc, sizeof(sc));
if ((ac != 2) || (sc != 2))
return;
if (prom_getprop(mc, "reg", mc_reg, sizeof(mc_reg)) == PROM_ERROR)
return;
if (mc_reg[0] != CPC925_MC_START || mc_reg[1] != CPC925_MC_LENGTH)
return;
prom_printf("Fixing up bogus hostbridge on Maple...\n");
mc_reg[0] = 0x0;
mc_reg[1] = CPC925_MC_START;
mc_reg[2] = 0x0;
mc_reg[3] = CPC925_MC_LENGTH;
prom_setprop(mc, name, "reg", mc_reg, sizeof(mc_reg));
}
#else
#define fixup_device_tree_maple()
#define fixup_device_tree_maple_memory_controller()
#endif
#ifdef CONFIG_PPC_CHRP
/*
* Pegasos and BriQ lacks the "ranges" property in the isa node
@ -3193,8 +3109,6 @@ static inline void fixup_device_tree_pasemi(void) { }
static void __init fixup_device_tree(void)
{
fixup_device_tree_maple();
fixup_device_tree_maple_memory_controller();
fixup_device_tree_chrp();
fixup_device_tree_pmac();
fixup_device_tree_efika();


@ -5,6 +5,7 @@
*/
#include <linux/types.h>
#include <linux/of.h>
#include <linux/string_choices.h>
#include <asm/secure_boot.h>
static struct device_node *get_ppc_fw_sb_node(void)
@ -38,7 +39,7 @@ bool is_ppc_secureboot_enabled(void)
of_node_put(node);
out:
pr_info("Secure boot mode %s\n", enabled ? "enabled" : "disabled");
pr_info("Secure boot mode %s\n", str_enabled_disabled(enabled));
return enabled;
}
@ -62,7 +63,7 @@ bool is_ppc_trustedboot_enabled(void)
of_node_put(node);
out:
pr_info("Trusted boot mode %s\n", enabled ? "enabled" : "disabled");
pr_info("Trusted boot mode %s\n", str_enabled_disabled(enabled));
return enabled;
}


@ -1000,9 +1000,11 @@ void __init setup_arch(char **cmdline_p)
initmem_init();
/*
* Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
* be called after initmem_init(), so that pageblock_order is initialised.
* Reserve large chunks of memory for use by CMA for fadump, KVM and
* hugetlb. These must be called after initmem_init(), so that
* pageblock_order is initialised.
*/
fadump_cma_init();
kvm_cma_reserve();
gigantic_hugetlb_cma_reserve();


@ -920,6 +920,7 @@ static int __init disable_hardlockup_detector(void)
hardlockup_detector_disable();
#else
if (firmware_has_feature(FW_FEATURE_LPAR)) {
check_kvm_guest();
if (is_kvm_guest())
hardlockup_detector_disable();
}


@ -17,6 +17,7 @@
#include <asm/hvcall.h>
#include <asm/machdep.h>
#include <asm/smp.h>
#include <asm/time.h>
#include <asm/pmc.h>
#include <asm/firmware.h>
#include <asm/idle.h>


@ -9,12 +9,15 @@ CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_ftrace_64_pg.o = $(CC_FLAGS_FTRACE)
endif
obj32-$(CONFIG_FUNCTION_TRACER) += ftrace.o ftrace_entry.o
ifdef CONFIG_MPROFILE_KERNEL
obj64-$(CONFIG_FUNCTION_TRACER) += ftrace.o ftrace_entry.o
ifdef CONFIG_FUNCTION_TRACER
obj32-y += ftrace.o ftrace_entry.o
ifeq ($(CONFIG_MPROFILE_KERNEL)$(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY),)
obj64-y += ftrace_64_pg.o ftrace_64_pg_entry.o
else
obj64-$(CONFIG_FUNCTION_TRACER) += ftrace_64_pg.o ftrace_64_pg_entry.o
obj64-y += ftrace.o ftrace_entry.o
endif
endif
obj-$(CONFIG_TRACING) += trace_clock.o
obj-$(CONFIG_PPC64) += $(obj64-y)


@ -37,8 +37,12 @@ unsigned long ftrace_call_adjust(unsigned long addr)
if (addr >= (unsigned long)__exittext_begin && addr < (unsigned long)__exittext_end)
return 0;
if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY))
if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY) &&
!IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
addr += MCOUNT_INSN_SIZE;
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))
addr += MCOUNT_INSN_SIZE;
}
return addr;
}
@ -82,7 +86,7 @@ static inline int ftrace_modify_code(unsigned long ip, ppc_inst_t old, ppc_inst_
{
int ret = ftrace_validate_inst(ip, old);
if (!ret)
if (!ret && !ppc_inst_equal(old, new))
ret = patch_instruction((u32 *)ip, new);
return ret;
@ -106,28 +110,68 @@ static unsigned long find_ftrace_tramp(unsigned long ip)
return 0;
}
#ifdef CONFIG_MODULES
static unsigned long ftrace_lookup_module_stub(unsigned long ip, unsigned long addr)
{
struct module *mod = NULL;
preempt_disable();
mod = __module_text_address(ip);
preempt_enable();
if (!mod)
pr_err("No module loaded at addr=%lx\n", ip);
return (addr == (unsigned long)ftrace_caller ? mod->arch.tramp : mod->arch.tramp_regs);
}
#else
static unsigned long ftrace_lookup_module_stub(unsigned long ip, unsigned long addr)
{
return 0;
}
#endif
static unsigned long ftrace_get_ool_stub(struct dyn_ftrace *rec)
{
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
return rec->arch.ool_stub;
#else
BUILD_BUG();
#endif
}
static int ftrace_get_call_inst(struct dyn_ftrace *rec, unsigned long addr, ppc_inst_t *call_inst)
{
unsigned long ip = rec->ip;
unsigned long ip;
unsigned long stub;
if (is_offset_in_branch_range(addr - ip)) {
/* Within range */
stub = addr;
#ifdef CONFIG_MODULES
} else if (rec->arch.mod) {
/* Module code would be going to one of the module stubs */
stub = (addr == (unsigned long)ftrace_caller ? rec->arch.mod->arch.tramp :
rec->arch.mod->arch.tramp_regs);
#endif
} else if (core_kernel_text(ip)) {
/* We would be branching to one of our ftrace stubs */
stub = find_ftrace_tramp(ip);
if (!stub) {
pr_err("0x%lx: No ftrace stubs reachable\n", ip);
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
ip = ftrace_get_ool_stub(rec) + MCOUNT_INSN_SIZE; /* second instruction in stub */
else
ip = rec->ip;
if (!is_offset_in_branch_range(addr - ip) && addr != FTRACE_ADDR &&
addr != FTRACE_REGS_ADDR) {
/* This can only happen with ftrace direct */
if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS)) {
pr_err("0x%lx (0x%lx): Unexpected target address 0x%lx\n",
ip, rec->ip, addr);
return -EINVAL;
}
} else {
addr = FTRACE_ADDR;
}
if (is_offset_in_branch_range(addr - ip))
/* Within range */
stub = addr;
else if (core_kernel_text(ip))
/* We would be branching to one of our ftrace stubs */
stub = find_ftrace_tramp(ip);
else
stub = ftrace_lookup_module_stub(ip, addr);
if (!stub) {
pr_err("0x%lx (0x%lx): No ftrace stubs reachable\n", ip, rec->ip);
return -EINVAL;
}
@ -135,6 +179,145 @@ static int ftrace_get_call_inst(struct dyn_ftrace *rec, unsigned long addr, ppc_
return 0;
}
static int ftrace_init_ool_stub(struct module *mod, struct dyn_ftrace *rec)
{
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
static int ool_stub_text_index, ool_stub_text_end_index, ool_stub_inittext_index;
int ret = 0, ool_stub_count, *ool_stub_index;
ppc_inst_t inst;
/*
* See ftrace_entry.S if changing the below instruction sequence, as we rely on
* decoding the last branch instruction here to recover the correct function ip.
*/
struct ftrace_ool_stub *ool_stub, ool_stub_template = {
.insn = {
PPC_RAW_MFLR(_R0),
PPC_RAW_NOP(), /* bl ftrace_caller */
PPC_RAW_MTLR(_R0),
PPC_RAW_NOP() /* b rec->ip + 4 */
}
};
WARN_ON(rec->arch.ool_stub);
if (is_kernel_inittext(rec->ip)) {
ool_stub = ftrace_ool_stub_inittext;
ool_stub_index = &ool_stub_inittext_index;
ool_stub_count = ftrace_ool_stub_inittext_count;
} else if (is_kernel_text(rec->ip)) {
/*
* ftrace records are sorted, so we first use up the stub area within .text
* (ftrace_ool_stub_text) before using the area at the end of .text
* (ftrace_ool_stub_text_end), unless the stub is out of range of the record.
*/
if (ool_stub_text_index >= ftrace_ool_stub_text_count ||
!is_offset_in_branch_range((long)rec->ip -
(long)&ftrace_ool_stub_text[ool_stub_text_index])) {
ool_stub = ftrace_ool_stub_text_end;
ool_stub_index = &ool_stub_text_end_index;
ool_stub_count = ftrace_ool_stub_text_end_count;
} else {
ool_stub = ftrace_ool_stub_text;
ool_stub_index = &ool_stub_text_index;
ool_stub_count = ftrace_ool_stub_text_count;
}
#ifdef CONFIG_MODULES
} else if (mod) {
ool_stub = mod->arch.ool_stubs;
ool_stub_index = &mod->arch.ool_stub_index;
ool_stub_count = mod->arch.ool_stub_count;
#endif
} else {
return -EINVAL;
}
ool_stub += (*ool_stub_index)++;
if (WARN_ON(*ool_stub_index > ool_stub_count))
return -EINVAL;
if (!is_offset_in_branch_range((long)rec->ip - (long)&ool_stub->insn[0]) ||
!is_offset_in_branch_range((long)(rec->ip + MCOUNT_INSN_SIZE) -
(long)&ool_stub->insn[3])) {
pr_err("%s: ftrace ool stub out of range (%p -> %p).\n",
__func__, (void *)rec->ip, (void *)&ool_stub->insn[0]);
return -EINVAL;
}
rec->arch.ool_stub = (unsigned long)&ool_stub->insn[0];
/* bl ftrace_caller */
if (!mod)
ret = ftrace_get_call_inst(rec, (unsigned long)ftrace_caller, &inst);
#ifdef CONFIG_MODULES
else
/*
* We can't use ftrace_get_call_inst() since that uses
* __module_text_address(rec->ip) to look up the module.
* But, since the module is not fully formed at this stage,
* the lookup fails. We know the target though, so generate
* the branch inst directly.
*/
inst = ftrace_create_branch_inst(ftrace_get_ool_stub(rec) + MCOUNT_INSN_SIZE,
mod->arch.tramp, 1);
#endif
ool_stub_template.insn[1] = ppc_inst_val(inst);
/* b rec->ip + 4 */
if (!ret && create_branch(&inst, &ool_stub->insn[3], rec->ip + MCOUNT_INSN_SIZE, 0))
return -EINVAL;
ool_stub_template.insn[3] = ppc_inst_val(inst);
if (!ret)
ret = patch_instructions((u32 *)ool_stub, (u32 *)&ool_stub_template,
sizeof(ool_stub_template), false);
return ret;
#else /* !CONFIG_PPC_FTRACE_OUT_OF_LINE */
BUILD_BUG();
#endif
}
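
For reference, the trailing 'b rec->ip + 4' is also what lets ftrace_caller recover the original patch site at run time: it reads that branch, sign-extends its 26-bit offset and adds it to the address of the stub instruction just before the branch (the slwi/srawi pair in ftrace_entry.S further down does exactly this). A minimal C sketch of the arithmetic, with a made-up helper name:

static unsigned long ool_stub_recover_ip(unsigned long lr)
{
	/* 'lr' points at the stub's 'mtlr r0' slot; the branch back sits one instruction later */
	u32 branch = *(u32 *)(lr + MCOUNT_INSN_SIZE);		/* the 'b rec->ip + 4' */
	long offset = sign_extend64(branch & 0x03fffffc, 25);	/* LI field as a byte offset */

	/* the branch targets rec->ip + 4; measuring from one instruction earlier cancels the +4 */
	return lr + offset;
}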
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS
static const struct ftrace_ops *powerpc_rec_get_ops(struct dyn_ftrace *rec)
{
const struct ftrace_ops *ops = NULL;
if (rec->flags & FTRACE_FL_CALL_OPS_EN) {
ops = ftrace_find_unique_ops(rec);
WARN_ON_ONCE(!ops);
}
if (!ops)
ops = &ftrace_list_ops;
return ops;
}
static int ftrace_rec_set_ops(struct dyn_ftrace *rec, const struct ftrace_ops *ops)
{
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
return patch_ulong((void *)(ftrace_get_ool_stub(rec) - sizeof(unsigned long)),
(unsigned long)ops);
else
return patch_ulong((void *)(rec->ip - MCOUNT_INSN_SIZE - sizeof(unsigned long)),
(unsigned long)ops);
}
static int ftrace_rec_set_nop_ops(struct dyn_ftrace *rec)
{
return ftrace_rec_set_ops(rec, &ftrace_nop_ops);
}
static int ftrace_rec_update_ops(struct dyn_ftrace *rec)
{
return ftrace_rec_set_ops(rec, powerpc_rec_get_ops(rec));
}
#else
static int ftrace_rec_set_nop_ops(struct dyn_ftrace *rec) { return 0; }
static int ftrace_rec_update_ops(struct dyn_ftrace *rec) { return 0; }
#endif
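
Either way the ftrace_ops pointer sits in a pointer-sized literal just below the code it describes, so the trampoline can locate it relative to its link register. A hedged sketch of the lookup performed in ftrace_entry.S (invented helper name; SZL is sizeof(unsigned long) on the assembly side):

static struct ftrace_ops *call_ops_from_lr(unsigned long lr)
{
	/*
	 * 'lr' is the link register on entry to the trampoline, i.e. the address just
	 * past the 'bl', for both an in-line patch site and an out-of-line stub.
	 * Mirrors: PPC_LL rN, -(MCOUNT_INSN_SIZE * 2 + SZL)(r11)
	 */
	return *(struct ftrace_ops **)(lr - (MCOUNT_INSN_SIZE * 2 + sizeof(unsigned long)));
}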
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, unsigned long addr)
{
@ -147,18 +330,33 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, unsigned
int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
ppc_inst_t old, new;
int ret;
unsigned long ip = rec->ip;
int ret = 0;
/* This can only ever be called during module load */
if (WARN_ON(!IS_ENABLED(CONFIG_MODULES) || core_kernel_text(rec->ip)))
if (WARN_ON(!IS_ENABLED(CONFIG_MODULES) || core_kernel_text(ip)))
return -EINVAL;
old = ppc_inst(PPC_RAW_NOP());
ret = ftrace_get_call_inst(rec, addr, &new);
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
ip = ftrace_get_ool_stub(rec) + MCOUNT_INSN_SIZE; /* second instruction in stub */
ret = ftrace_get_call_inst(rec, (unsigned long)ftrace_caller, &old);
}
ret |= ftrace_get_call_inst(rec, addr, &new);
if (!ret)
ret = ftrace_modify_code(ip, old, new);
ret = ftrace_rec_update_ops(rec);
if (ret)
return ret;
return ftrace_modify_code(rec->ip, old, new);
if (!ret && IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
ret = ftrace_modify_code(rec->ip, ppc_inst(PPC_RAW_NOP()),
ppc_inst(PPC_RAW_BRANCH((long)ftrace_get_ool_stub(rec) - (long)rec->ip)));
return ret;
}
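
With out-of-line stubs the call is therefore activated in stages, and only the final write touches the function itself. A sketch of the patched state after ftrace_make_call() on a module function (addresses illustrative, ordering as in the code above):

/*
 *   ool_stub + 0 :  mflr r0
 *   ool_stub + 4 :  bl   <addr>           patched first (ftrace_caller or module tramp)
 *   ool_stub + 8 :  mtlr r0
 *   ool_stub + 12:  b    <rec->ip + 4>
 *
 *   rec->ip      :  b    <ool_stub>       patched last, after the ops literal is updated
 */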
int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr)
@ -191,6 +389,13 @@ void ftrace_replace_code(int enable)
new_addr = ftrace_get_addr_new(rec);
update = ftrace_update_record(rec, enable);
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) && update != FTRACE_UPDATE_IGNORE) {
ip = ftrace_get_ool_stub(rec) + MCOUNT_INSN_SIZE;
ret = ftrace_get_call_inst(rec, (unsigned long)ftrace_caller, &nop_inst);
if (ret)
goto out;
}
switch (update) {
case FTRACE_UPDATE_IGNORE:
default:
@ -198,16 +403,19 @@ void ftrace_replace_code(int enable)
case FTRACE_UPDATE_MODIFY_CALL:
ret = ftrace_get_call_inst(rec, new_addr, &new_call_inst);
ret |= ftrace_get_call_inst(rec, addr, &call_inst);
ret |= ftrace_rec_update_ops(rec);
old = call_inst;
new = new_call_inst;
break;
case FTRACE_UPDATE_MAKE_NOP:
ret = ftrace_get_call_inst(rec, addr, &call_inst);
ret |= ftrace_rec_set_nop_ops(rec);
old = call_inst;
new = nop_inst;
break;
case FTRACE_UPDATE_MAKE_CALL:
ret = ftrace_get_call_inst(rec, new_addr, &call_inst);
ret |= ftrace_rec_update_ops(rec);
old = nop_inst;
new = call_inst;
break;
@ -215,6 +423,24 @@ void ftrace_replace_code(int enable)
if (!ret)
ret = ftrace_modify_code(ip, old, new);
if (!ret && IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) &&
(update == FTRACE_UPDATE_MAKE_NOP || update == FTRACE_UPDATE_MAKE_CALL)) {
/* Update the actual ftrace location */
call_inst = ppc_inst(PPC_RAW_BRANCH((long)ftrace_get_ool_stub(rec) -
(long)rec->ip));
nop_inst = ppc_inst(PPC_RAW_NOP());
ip = rec->ip;
if (update == FTRACE_UPDATE_MAKE_NOP)
ret = ftrace_modify_code(ip, call_inst, nop_inst);
else
ret = ftrace_modify_code(ip, nop_inst, call_inst);
if (ret)
goto out;
}
if (ret)
goto out;
}
@ -234,20 +460,27 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
/* Verify instructions surrounding the ftrace location */
if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) {
/* Expect nops */
ret = ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_NOP()));
if (!IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
ret = ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_NOP()));
if (!ret)
ret = ftrace_validate_inst(ip, ppc_inst(PPC_RAW_NOP()));
} else if (IS_ENABLED(CONFIG_PPC32)) {
/* Expected sequence: 'mflr r0', 'stw r0,4(r1)', 'bl _mcount' */
ret = ftrace_validate_inst(ip - 8, ppc_inst(PPC_RAW_MFLR(_R0)));
if (!ret)
ret = ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_STW(_R0, _R1, 4)));
if (ret)
return ret;
ret = ftrace_modify_code(ip - 4, ppc_inst(PPC_RAW_STW(_R0, _R1, 4)),
ppc_inst(PPC_RAW_NOP()));
} else if (IS_ENABLED(CONFIG_MPROFILE_KERNEL)) {
/* Expected sequence: 'mflr r0', ['std r0,16(r1)'], 'bl _mcount' */
ret = ftrace_read_inst(ip - 4, &old);
if (!ret && !ppc_inst_equal(old, ppc_inst(PPC_RAW_MFLR(_R0)))) {
/* GCC v5.x emits the additional 'std' instruction, GCC v6.x doesn't */
ret = ftrace_validate_inst(ip - 8, ppc_inst(PPC_RAW_MFLR(_R0)));
ret |= ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_STD(_R0, _R1, 16)));
if (ret)
return ret;
ret = ftrace_modify_code(ip - 4, ppc_inst(PPC_RAW_STD(_R0, _R1, 16)),
ppc_inst(PPC_RAW_NOP()));
}
} else {
return -EINVAL;
@ -256,13 +489,9 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
if (ret)
return ret;
if (!core_kernel_text(ip)) {
if (!mod) {
pr_err("0x%lx: No module provided for non-kernel address\n", ip);
return -EFAULT;
}
rec->arch.mod = mod;
}
/* Set up out-of-line stub */
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
return ftrace_init_ool_stub(mod, rec);
/* Nop-out the ftrace location */
new = ppc_inst(PPC_RAW_NOP());
@ -302,6 +531,13 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
ppc_inst_t old, new;
int ret;
/*
* When using CALL_OPS, the function to call is associated with the
* call site, and we don't have a global function pointer to update.
*/
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))
return 0;
old = ppc_inst_read((u32 *)&ftrace_call);
new = ftrace_create_branch_inst(ip, ppc_function_entry(func), 1);
ret = ftrace_modify_code(ip, old, new);


@ -116,6 +116,20 @@ static unsigned long find_bl_target(unsigned long ip, ppc_inst_t op)
}
#ifdef CONFIG_MODULES
static struct module *ftrace_lookup_module(struct dyn_ftrace *rec)
{
struct module *mod;
preempt_disable();
mod = __module_text_address(rec->ip);
preempt_enable();
if (!mod)
pr_err("No module loaded at addr=%lx\n", rec->ip);
return mod;
}
static int
__ftrace_make_nop(struct module *mod,
struct dyn_ftrace *rec, unsigned long addr)
@ -124,6 +138,12 @@ __ftrace_make_nop(struct module *mod,
unsigned long ip = rec->ip;
ppc_inst_t op, pop;
if (!mod) {
mod = ftrace_lookup_module(rec);
if (!mod)
return -EINVAL;
}
/* read where this goes */
if (copy_inst_from_kernel_nofault(&op, (void *)ip)) {
pr_err("Fetching opcode failed.\n");
@ -366,27 +386,6 @@ int ftrace_make_nop(struct module *mod,
return -EINVAL;
}
/*
* Out of range jumps are called from modules.
* We should either already have a pointer to the module
* or it has been passed in.
*/
if (!rec->arch.mod) {
if (!mod) {
pr_err("No module loaded addr=%lx\n", addr);
return -EFAULT;
}
rec->arch.mod = mod;
} else if (mod) {
if (mod != rec->arch.mod) {
pr_err("Record mod %p not equal to passed in mod %p\n",
rec->arch.mod, mod);
return -EINVAL;
}
/* nothing to do if mod == rec->arch.mod */
} else
mod = rec->arch.mod;
return __ftrace_make_nop(mod, rec, addr);
}
@ -411,7 +410,10 @@ __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
ppc_inst_t op[2];
void *ip = (void *)rec->ip;
unsigned long entry, ptr, tramp;
struct module *mod = rec->arch.mod;
struct module *mod = ftrace_lookup_module(rec);
if (!mod)
return -EINVAL;
/* read where this goes */
if (copy_inst_from_kernel_nofault(op, ip))
@ -533,16 +535,6 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
return -EINVAL;
}
/*
* Out of range jumps are called from modules.
* Being that we are converting from nop, it had better
* already have a module defined.
*/
if (!rec->arch.mod) {
pr_err("No module loaded\n");
return -EINVAL;
}
return __ftrace_make_call(rec, addr);
}
@ -555,7 +547,10 @@ __ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
ppc_inst_t op;
unsigned long ip = rec->ip;
unsigned long entry, ptr, tramp;
struct module *mod = rec->arch.mod;
struct module *mod = ftrace_lookup_module(rec);
if (!mod)
return -EINVAL;
/* If we never set up ftrace trampolines, then bail */
if (!mod->arch.tramp || !mod->arch.tramp_regs) {
@ -668,14 +663,6 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
return -EINVAL;
}
/*
* Out of range jumps are called from modules.
*/
if (!rec->arch.mod) {
pr_err("No module loaded\n");
return -EINVAL;
}
return __ftrace_modify_call(rec, old_addr, addr);
}
#endif


@ -39,13 +39,37 @@
/* Create our stack frame + pt_regs */
PPC_STLU r1,-SWITCH_FRAME_SIZE(r1)
.if \allregs == 1
SAVE_GPRS(11, 12, r1)
.endif
/* Get the _mcount() call site out of LR */
mflr r11
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
/* Load the ftrace_op */
PPC_LL r12, -(MCOUNT_INSN_SIZE*2 + SZL)(r11)
/* Load direct_call from the ftrace_op */
PPC_LL r12, FTRACE_OPS_DIRECT_CALL(r12)
PPC_LCMPI r12, 0
.if \allregs == 1
bne .Lftrace_direct_call_regs
.else
bne .Lftrace_direct_call
.endif
#endif
/* Save the previous LR in pt_regs->link */
PPC_STL r0, _LINK(r1)
/* Also save it in A's stack frame */
PPC_STL r0, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE+LRSAVE(r1)
/* Save all gprs to pt_regs */
SAVE_GPR(0, r1)
SAVE_GPRS(3, 10, r1)
#ifdef CONFIG_PPC64
/* Save the original return address in A's stack frame */
std r0, LRSAVE+SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE(r1)
/* Ok to continue? */
lbz r3, PACA_FTRACE_ENABLED(r13)
cmpdi r3, 0
@ -54,9 +78,9 @@
.if \allregs == 1
SAVE_GPR(2, r1)
SAVE_GPRS(11, 31, r1)
SAVE_GPRS(13, 31, r1)
.else
#ifdef CONFIG_LIVEPATCH_64
#if defined(CONFIG_LIVEPATCH_64) || defined(CONFIG_PPC_FTRACE_OUT_OF_LINE)
SAVE_GPR(14, r1)
#endif
.endif
@ -67,80 +91,143 @@
.if \allregs == 1
/* Load special regs for save below */
mfcr r7
mfmsr r8
mfctr r9
mfxer r10
mfcr r11
.else
/* Clear MSR to flag as ftrace_caller versus ftrace_regs_caller */
li r8, 0
.endif
/* Get the _mcount() call site out of LR */
mflr r7
/* Save it as pt_regs->nip */
PPC_STL r7, _NIP(r1)
/* Also save it in B's stackframe header for proper unwind */
PPC_STL r7, LRSAVE+SWITCH_FRAME_SIZE(r1)
/* Save the read LR in pt_regs->link */
PPC_STL r0, _LINK(r1)
#ifdef CONFIG_PPC64
/* Save callee's TOC in the ABI compliant location */
std r2, STK_GOT(r1)
LOAD_PACA_TOC() /* get kernel TOC in r2 */
#endif
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS
/* r11 points to the instruction following the call to ftrace */
PPC_LL r5, -(MCOUNT_INSN_SIZE*2 + SZL)(r11)
PPC_LL r12, FTRACE_OPS_FUNC(r5)
mtctr r12
#else /* !CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS */
#ifdef CONFIG_PPC64
LOAD_REG_ADDR(r3, function_trace_op)
ld r5,0(r3)
#else
lis r3,function_trace_op@ha
lwz r5,function_trace_op@l(r3)
#endif
#ifdef CONFIG_LIVEPATCH_64
mr r14, r7 /* remember old NIP */
#endif
/* Calculate ip from nip-4 into r3 for call below */
subi r3, r7, MCOUNT_INSN_SIZE
/* Put the original return address in r4 as parent_ip */
mr r4, r0
/* Save special regs */
PPC_STL r8, _MSR(r1)
.if \allregs == 1
PPC_STL r7, _CCR(r1)
PPC_STL r9, _CTR(r1)
PPC_STL r10, _XER(r1)
PPC_STL r11, _CCR(r1)
.endif
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
/* Clear orig_gpr3 to later detect ftrace_direct call */
li r7, 0
PPC_STL r7, ORIG_GPR3(r1)
#endif
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
/* Save our real return address in nvr for return */
.if \allregs == 0
SAVE_GPR(15, r1)
.endif
mr r15, r11
/*
* We want the ftrace location in the function, but our lr (in r11)
* points at the 'mtlr r0' instruction in the out of line stub. To
* recover the ftrace location, we read the branch instruction in the
* stub, and adjust our lr by the branch offset.
*
* See ftrace_init_ool_stub() for the profile sequence.
*/
lwz r8, MCOUNT_INSN_SIZE(r11)
slwi r8, r8, 6
srawi r8, r8, 6
add r3, r11, r8
/*
* Override our nip to point past the branch in the original function.
* This allows reliable stack trace and the ftrace stack tracer to work as-is.
*/
addi r11, r3, MCOUNT_INSN_SIZE
#else
/* Calculate ip from nip-4 into r3 for call below */
subi r3, r11, MCOUNT_INSN_SIZE
#endif
/* Save NIP as pt_regs->nip */
PPC_STL r11, _NIP(r1)
/* Also save it in B's stackframe header for proper unwind */
PPC_STL r11, LRSAVE+SWITCH_FRAME_SIZE(r1)
#if defined(CONFIG_LIVEPATCH_64) || defined(CONFIG_PPC_FTRACE_OUT_OF_LINE)
mr r14, r11 /* remember old NIP */
#endif
/* Put the original return address in r4 as parent_ip */
mr r4, r0
/* Load &pt_regs in r6 for call below */
addi r6, r1, STACK_INT_FRAME_REGS
.endm
.macro ftrace_regs_exit allregs
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
/* Check orig_gpr3 to detect ftrace_direct call */
PPC_LL r3, ORIG_GPR3(r1)
PPC_LCMPI cr1, r3, 0
mtctr r3
#endif
/* Restore possibly modified LR */
PPC_LL r0, _LINK(r1)
#ifndef CONFIG_PPC_FTRACE_OUT_OF_LINE
/* Load ctr with the possibly modified NIP */
PPC_LL r3, _NIP(r1)
mtctr r3
#ifdef CONFIG_LIVEPATCH_64
cmpd r14, r3 /* has NIP been altered? */
#endif
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
beq cr1,2f
mtlr r3
b 3f
#endif
2: mtctr r3
mtlr r0
3:
#else /* !CONFIG_PPC_FTRACE_OUT_OF_LINE */
/* Load LR with the possibly modified NIP */
PPC_LL r3, _NIP(r1)
cmpd r14, r3 /* has NIP been altered? */
bne- 1f
mr r3, r15
.if \allregs == 0
REST_GPR(15, r1)
.endif
1: mtlr r3
#endif
/* Restore gprs */
.if \allregs == 1
REST_GPRS(2, 31, r1)
.else
REST_GPRS(3, 10, r1)
#ifdef CONFIG_LIVEPATCH_64
#if defined(CONFIG_LIVEPATCH_64) || defined(CONFIG_PPC_FTRACE_OUT_OF_LINE)
REST_GPR(14, r1)
#endif
.endif
/* Restore possibly modified LR */
PPC_LL r0, _LINK(r1)
mtlr r0
#ifdef CONFIG_PPC64
/* Restore callee's TOC */
ld r2, STK_GOT(r1)
@ -153,23 +240,46 @@
/* Based on the cmpd above, if the NIP was altered handle livepatch */
bne- livepatch_handler
#endif
bctr /* jump after _mcount site */
/* jump after _mcount site */
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
bnectr cr1
#endif
/*
* Return with blr to keep the link stack balanced. The function profiling sequence
* uses 'mtlr r0' to restore LR.
*/
blr
#else
bctr
#endif
.endm
.macro ftrace_regs_func allregs
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS
bctrl
#else
.if \allregs == 1
.globl ftrace_regs_call
ftrace_regs_call:
.else
.globl ftrace_call
ftrace_call:
.endif
/* ftrace_call(r3, r4, r5, r6) */
bl ftrace_stub
#endif
.endm
_GLOBAL(ftrace_regs_caller)
ftrace_regs_entry 1
/* ftrace_call(r3, r4, r5, r6) */
.globl ftrace_regs_call
ftrace_regs_call:
bl ftrace_stub
ftrace_regs_func 1
ftrace_regs_exit 1
_GLOBAL(ftrace_caller)
ftrace_regs_entry 0
/* ftrace_call(r3, r4, r5, r6) */
.globl ftrace_call
ftrace_call:
bl ftrace_stub
ftrace_regs_func 0
ftrace_regs_exit 0
_GLOBAL(ftrace_stub)
@ -177,6 +287,11 @@ _GLOBAL(ftrace_stub)
#ifdef CONFIG_PPC64
ftrace_no_trace:
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
REST_GPR(3, r1)
addi r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
blr
#else
mflr r3
mtctr r3
REST_GPR(3, r1)
@ -184,6 +299,22 @@ ftrace_no_trace:
mtlr r0
bctr
#endif
#endif
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
.Lftrace_direct_call_regs:
mtctr r12
REST_GPRS(11, 12, r1)
addi r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
bctr
.Lftrace_direct_call:
mtctr r12
addi r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
bctr
SYM_FUNC_START(ftrace_stub_direct_tramp)
blr
SYM_FUNC_END(ftrace_stub_direct_tramp)
#endif
#ifdef CONFIG_LIVEPATCH_64
/*
@ -194,11 +325,17 @@ ftrace_no_trace:
* We get here when a function A, calls another function B, but B has
* been live patched with a new function C.
*
* On entry:
* - we have no stack frame and can not allocate one
* On entry, we have no stack frame and can not allocate one.
*
* With PPC_FTRACE_OUT_OF_LINE=n, on entry:
* - LR points back to the original caller (in A)
* - CTR holds the new NIP in C
* - r0, r11 & r12 are free
*
* With PPC_FTRACE_OUT_OF_LINE=y, on entry:
* - r0 points back to the original caller (in A)
* - LR holds the new NIP in C
* - r11 & r12 are free
*/
livepatch_handler:
ld r12, PACA_THREAD_INFO(r13)
@ -208,18 +345,23 @@ livepatch_handler:
addi r11, r11, 24
std r11, TI_livepatch_sp(r12)
/* Save toc & real LR on livepatch stack */
std r2, -24(r11)
mflr r12
std r12, -16(r11)
/* Store stack end marker */
lis r12, STACK_END_MAGIC@h
ori r12, r12, STACK_END_MAGIC@l
std r12, -8(r11)
/* Put ctr in r12 for global entry and branch there */
/* Save toc & real LR on livepatch stack */
std r2, -24(r11)
#ifndef CONFIG_PPC_FTRACE_OUT_OF_LINE
mflr r12
std r12, -16(r11)
mfctr r12
#else
std r0, -16(r11)
mflr r12
/* Put ctr in r12 for global entry and branch there */
mtctr r12
#endif
bctrl
/*
@ -308,6 +450,14 @@ _GLOBAL(return_to_handler)
blr
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
SYM_DATA(ftrace_ool_stub_text_count, .long CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE)
SYM_START(ftrace_ool_stub_text, SYM_L_GLOBAL, .balign SZL)
.space CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE * FTRACE_OOL_STUB_SIZE
SYM_CODE_END(ftrace_ool_stub_text)
#endif
.pushsection ".tramp.ftrace.text","aw",@progbits;
.globl ftrace_tramp_text
ftrace_tramp_text:


@ -39,9 +39,6 @@ void __init udbg_early_init(void)
#elif defined(CONFIG_PPC_EARLY_DEBUG_RTAS_CONSOLE)
/* RTAS console debug */
udbg_init_rtas_console();
#elif defined(CONFIG_PPC_EARLY_DEBUG_MAPLE)
/* Maple real mode debug */
udbg_init_maple_realmode();
#elif defined(CONFIG_PPC_EARLY_DEBUG_PAS_REALMODE)
udbg_init_pas_realmode();
#elif defined(CONFIG_PPC_EARLY_DEBUG_BOOTX)


@ -205,29 +205,6 @@ void __init udbg_uart_init_mmio(void __iomem *addr, unsigned int stride)
udbg_use_uart();
}
#ifdef CONFIG_PPC_MAPLE
#define UDBG_UART_MAPLE_ADDR ((void __iomem *)0xf40003f8)
static u8 udbg_uart_in_maple(unsigned int reg)
{
return real_readb(UDBG_UART_MAPLE_ADDR + reg);
}
static void udbg_uart_out_maple(unsigned int reg, u8 val)
{
real_writeb(val, UDBG_UART_MAPLE_ADDR + reg);
}
void __init udbg_init_maple_realmode(void)
{
udbg_uart_in = udbg_uart_in_maple;
udbg_uart_out = udbg_uart_out_maple;
udbg_use_uart();
}
#endif /* CONFIG_PPC_MAPLE */
#ifdef CONFIG_PPC_PASEMI
#define UDBG_UART_PAS_ADDR ((void __iomem *)0xfcff03f8UL)


@ -47,12 +47,13 @@ long sys_ni_syscall(void);
*/
static union {
struct vdso_arch_data data;
u8 page[PAGE_SIZE];
u8 page[2 * PAGE_SIZE];
} vdso_data_store __page_aligned_data;
struct vdso_arch_data *vdso_data = &vdso_data_store.data;
enum vvar_pages {
VVAR_DATA_PAGE_OFFSET,
VVAR_BASE_PAGE_OFFSET,
VVAR_TIME_PAGE_OFFSET,
VVAR_TIMENS_PAGE_OFFSET,
VVAR_NR_PAGES,
};
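
Together with the '_vdso_datapage = . - 3 * PAGE_SIZE' linker-script updates further down, this yields three fixed vvar pages in front of the vDSO text. A sketch of the assumed layout (page granularity):

/*
 *   vdso text - 3 * PAGE_SIZE : VVAR_BASE_PAGE   - first page of vdso_arch_data
 *   vdso text - 2 * PAGE_SIZE : VVAR_TIME_PAGE   - generic struct vdso_data
 *                                                  (vdso_data->data), hence the
 *                                                  two-page vdso_data_store above
 *   vdso text - 1 * PAGE_SIZE : VVAR_TIMENS_PAGE - with CONFIG_TIME_NS, a task in a
 *                                                  time namespace sees its namespace
 *                                                  page at the TIME slot and the real
 *                                                  data here instead
 */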
@ -118,7 +119,7 @@ static struct vm_special_mapping vdso64_spec __ro_after_init = {
#ifdef CONFIG_TIME_NS
struct vdso_data *arch_get_vdso_data(void *vvar_page)
{
return ((struct vdso_arch_data *)vvar_page)->data;
return vvar_page;
}
/*
@ -152,11 +153,14 @@ static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
unsigned long pfn;
switch (vmf->pgoff) {
case VVAR_DATA_PAGE_OFFSET:
case VVAR_BASE_PAGE_OFFSET:
pfn = virt_to_pfn(vdso_data);
break;
case VVAR_TIME_PAGE_OFFSET:
if (timens_page)
pfn = page_to_pfn(timens_page);
else
pfn = virt_to_pfn(vdso_data);
pfn = virt_to_pfn(vdso_data->data);
break;
#ifdef CONFIG_TIME_NS
case VVAR_TIMENS_PAGE_OFFSET:
@ -169,7 +173,7 @@ static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
*/
if (!timens_page)
return VM_FAULT_SIGBUS;
pfn = virt_to_pfn(vdso_data);
pfn = virt_to_pfn(vdso_data->data);
break;
#endif /* CONFIG_TIME_NS */
default:


@ -50,14 +50,18 @@ ldflags-$(CONFIG_LD_IS_LLD) += $(call cc-option,--ld-path=$(LD),-fuse-ld=lld)
ldflags-$(CONFIG_LD_ORPHAN_WARN) += -Wl,--orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
# Filter flags that clang will warn are unused for linking
ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CFLAGS))
ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
CC32FLAGS := -m32
CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc
# This flag is supported by clang for 64-bit but not 32-bit so it will cause
# an unused command line flag warning for this file.
ifdef CONFIG_CC_IS_CLANG
# This flag is supported by clang for 64-bit but not 32-bit so it will cause
# an unused command line flag warning for this file.
CC32FLAGSREMOVE += -fno-stack-clash-protection
# -mstack-protector-guard values from the 64-bit build are not valid for the
# 32-bit one. clang validates the values passed to these arguments during
# parsing, even when -fno-stack-protector is passed afterwards.
CC32FLAGSREMOVE += -mstack-protector-guard%
endif
LD32FLAGS := -Wl,-soname=linux-vdso32.so.1
AS32FLAGS := -D__VDSO32__


@ -30,7 +30,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
#ifdef CONFIG_PPC64
mflr r12
.cfi_register lr,r12
get_realdatapage r10, r11
get_datapage r10
mtlr r12
.cfi_restore lr
#endif


@ -28,7 +28,7 @@ V_FUNCTION_BEGIN(__kernel_get_syscall_map)
mflr r12
.cfi_register lr,r12
mr. r4,r3
get_realdatapage r3, r11
get_datapage r3
mtlr r12
#ifdef __powerpc64__
addi r3,r3,CFG_SYSCALL_MAP64
@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_get_tbfreq)
.cfi_startproc
mflr r12
.cfi_register lr,r12
get_realdatapage r3, r11
get_datapage r3
#ifndef __powerpc64__
lwz r4,(CFG_TB_TICKS_PER_SEC + 4)(r3)
#endif


@ -31,8 +31,6 @@
PPC_STL r2, PPC_MIN_STKFRM + STK_GOT(r1)
.cfi_rel_offset r2, PPC_MIN_STKFRM + STK_GOT
#endif
get_realdatapage r8, r11
addi r8, r8, VDSO_RNG_DATA_OFFSET
bl CFUNC(DOTSYM(\funct))
PPC_LL r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1)
#ifdef __powerpc64__


@ -32,11 +32,10 @@
PPC_STL r2, PPC_MIN_STKFRM + STK_GOT(r1)
.cfi_rel_offset r2, PPC_MIN_STKFRM + STK_GOT
#endif
get_datapage r5
.ifeq \call_time
addi r5, r5, VDSO_DATA_OFFSET
get_datapage r5 VDSO_DATA_OFFSET
.else
addi r4, r5, VDSO_DATA_OFFSET
get_datapage r4 VDSO_DATA_OFFSET
.endif
bl CFUNC(DOTSYM(\funct))
PPC_LL r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1)


@ -16,7 +16,7 @@ OUTPUT_ARCH(powerpc:common)
SECTIONS
{
PROVIDE(_vdso_datapage = . - 2 * PAGE_SIZE);
PROVIDE(_vdso_datapage = . - 3 * PAGE_SIZE);
. = SIZEOF_HEADERS;
.hash : { *(.hash) } :text


@ -16,7 +16,7 @@ OUTPUT_ARCH(powerpc:common64)
SECTIONS
{
PROVIDE(_vdso_datapage = . - 2 * PAGE_SIZE);
PROVIDE(_vdso_datapage = . - 3 * PAGE_SIZE);
. = SIZEOF_HEADERS;
.hash : { *(.hash) } :text


@ -8,7 +8,7 @@
#include <linux/types.h>
ssize_t __c_kernel_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state,
size_t opaque_len, const struct vdso_rng_data *vd)
size_t opaque_len)
{
return __cvdso_getrandom_data(vd, buffer, len, flags, opaque_state, opaque_len);
return __cvdso_getrandom(buffer, len, flags, opaque_state, opaque_len);
}


@ -265,14 +265,13 @@ SECTIONS
.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
_sinittext = .;
INIT_TEXT
*(.tramp.ftrace.init);
/*
*.init.text might be RO so we must ensure this section ends on
* a page boundary.
*/
. = ALIGN(PAGE_SIZE);
_einittext = .;
*(.tramp.ftrace.init);
} :text
/* .exit.text is discarded at runtime, not link time,


@ -736,13 +736,18 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
if (dn) {
u64 val;
of_property_read_u64(dn, "opal-base-address", &val);
ret = of_property_read_u64(dn, "opal-base-address", &val);
if (ret)
goto out;
ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val,
sizeof(val), false);
if (ret)
goto out;
of_property_read_u64(dn, "opal-entry-address", &val);
ret = of_property_read_u64(dn, "opal-entry-address", &val);
if (ret)
goto out;
ret = kexec_purgatory_get_set_symbol(image, "opal_entry", &val,
sizeof(val), false);
}


@ -400,7 +400,10 @@ static inline unsigned long map_pcr_to_cap(unsigned long pcr)
cap = H_GUEST_CAP_POWER9;
break;
case PCR_ARCH_31:
cap = H_GUEST_CAP_POWER10;
if (cpu_has_feature(CPU_FTR_P11_PVR))
cap = H_GUEST_CAP_POWER11;
else
cap = H_GUEST_CAP_POWER10;
break;
default:
break;
@ -415,7 +418,7 @@ static int kvmppc_set_arch_compat(struct kvm_vcpu *vcpu, u32 arch_compat)
struct kvmppc_vcore *vc = vcpu->arch.vcore;
/* We can (emulate) our own architecture version and anything older */
if (cpu_has_feature(CPU_FTR_ARCH_31))
if (cpu_has_feature(CPU_FTR_P11_PVR) || cpu_has_feature(CPU_FTR_ARCH_31))
host_pcr_bit = PCR_ARCH_31;
else if (cpu_has_feature(CPU_FTR_ARCH_300))
host_pcr_bit = PCR_ARCH_300;
@ -2060,36 +2063,9 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
fallthrough; /* go to facility unavailable handler */
#endif
case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: {
u64 cause = vcpu->arch.hfscr >> 56;
/*
* Only pass HFU interrupts to the L1 if the facility is
* permitted but disabled by the L1's HFSCR, otherwise
* the interrupt does not make sense to the L1 so turn
* it into a HEAI.
*/
if (!(vcpu->arch.hfscr_permitted & (1UL << cause)) ||
(vcpu->arch.nested_hfscr & (1UL << cause))) {
ppc_inst_t pinst;
vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
/*
* If the fetch failed, return to guest and
* try executing it again.
*/
r = kvmppc_get_last_inst(vcpu, INST_GENERIC, &pinst);
vcpu->arch.emul_inst = ppc_inst_val(pinst);
if (r != EMULATE_DONE)
r = RESUME_GUEST;
else
r = RESUME_HOST;
} else {
r = RESUME_HOST;
}
case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
r = RESUME_HOST;
break;
}
case BOOK3S_INTERRUPT_HV_RM_HARD:
vcpu->arch.trap = 0;
@ -4153,8 +4129,9 @@ void kvmhv_set_l2_counters_status(int cpu, bool status)
else
lppaca_of(cpu).l2_counters_enable = 0;
}
EXPORT_SYMBOL(kvmhv_set_l2_counters_status);
int kmvhv_counters_tracepoint_regfunc(void)
int kvmhv_counters_tracepoint_regfunc(void)
{
int cpu;
@ -4164,7 +4141,7 @@ int kmvhv_counters_tracepoint_regfunc(void)
return 0;
}
void kmvhv_counters_tracepoint_unregfunc(void)
void kvmhv_counters_tracepoint_unregfunc(void)
{
int cpu;
@ -4190,8 +4167,74 @@ static void do_trace_nested_cs_time(struct kvm_vcpu *vcpu)
*l1_to_l2_cs_ptr = l1_to_l2_ns;
*l2_to_l1_cs_ptr = l2_to_l1_ns;
*l2_runtime_agg_ptr = l2_runtime_ns;
vcpu->arch.l1_to_l2_cs = l1_to_l2_ns;
vcpu->arch.l2_to_l1_cs = l2_to_l1_ns;
vcpu->arch.l2_runtime_agg = l2_runtime_ns;
}
u64 kvmhv_get_l1_to_l2_cs_time(void)
{
return tb_to_ns(be64_to_cpu(get_lppaca()->l1_to_l2_cs_tb));
}
EXPORT_SYMBOL(kvmhv_get_l1_to_l2_cs_time);
u64 kvmhv_get_l2_to_l1_cs_time(void)
{
return tb_to_ns(be64_to_cpu(get_lppaca()->l2_to_l1_cs_tb));
}
EXPORT_SYMBOL(kvmhv_get_l2_to_l1_cs_time);
u64 kvmhv_get_l2_runtime_agg(void)
{
return tb_to_ns(be64_to_cpu(get_lppaca()->l2_runtime_tb));
}
EXPORT_SYMBOL(kvmhv_get_l2_runtime_agg);
u64 kvmhv_get_l1_to_l2_cs_time_vcpu(void)
{
struct kvm_vcpu *vcpu;
struct kvm_vcpu_arch *arch;
vcpu = local_paca->kvm_hstate.kvm_vcpu;
if (vcpu) {
arch = &vcpu->arch;
return arch->l1_to_l2_cs;
} else {
return 0;
}
}
EXPORT_SYMBOL(kvmhv_get_l1_to_l2_cs_time_vcpu);
u64 kvmhv_get_l2_to_l1_cs_time_vcpu(void)
{
struct kvm_vcpu *vcpu;
struct kvm_vcpu_arch *arch;
vcpu = local_paca->kvm_hstate.kvm_vcpu;
if (vcpu) {
arch = &vcpu->arch;
return arch->l2_to_l1_cs;
} else {
return 0;
}
}
EXPORT_SYMBOL(kvmhv_get_l2_to_l1_cs_time_vcpu);
u64 kvmhv_get_l2_runtime_agg_vcpu(void)
{
struct kvm_vcpu *vcpu;
struct kvm_vcpu_arch *arch;
vcpu = local_paca->kvm_hstate.kvm_vcpu;
if (vcpu) {
arch = &vcpu->arch;
return arch->l2_runtime_agg;
} else {
return 0;
}
}
EXPORT_SYMBOL(kvmhv_get_l2_runtime_agg_vcpu);
#else
int kvmhv_get_l2_counters_status(void)
{
@ -4309,6 +4352,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
}
hvregs.hdec_expiry = time_limit;
/*
* hvregs has the doorbell status, so zero it here, which
* lets this vCPU receive doorbells while H_ENTER_NESTED
* is in progress
*/
if (vcpu->arch.doorbell_request)
vcpu->arch.doorbell_request = 0;
/*
* When setting DEC, we must always deal with irq_work_raise
* via NMI vs setting DEC. The problem occurs right as we
@ -4912,7 +4964,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
lpcr &= ~LPCR_MER;
}
} else if (vcpu->arch.pending_exceptions ||
vcpu->arch.doorbell_request ||
xive_interrupt_pending(vcpu)) {
vcpu->arch.ret = RESUME_HOST;
goto out;


@ -32,7 +32,7 @@ void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
struct kvmppc_vcore *vc = vcpu->arch.vcore;
hr->pcr = vc->pcr | PCR_MASK;
hr->dpdes = vc->dpdes;
hr->dpdes = vcpu->arch.doorbell_request;
hr->hfscr = vcpu->arch.hfscr;
hr->tb_offset = vc->tb_offset;
hr->dawr0 = vcpu->arch.dawr0;
@ -105,7 +105,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu,
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
hr->dpdes = vc->dpdes;
hr->dpdes = vcpu->arch.doorbell_request;
hr->purr = vcpu->arch.purr;
hr->spurr = vcpu->arch.spurr;
hr->ic = vcpu->arch.ic;
@ -143,7 +143,7 @@ static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state *
struct kvmppc_vcore *vc = vcpu->arch.vcore;
vc->pcr = hr->pcr | PCR_MASK;
vc->dpdes = hr->dpdes;
vcpu->arch.doorbell_request = hr->dpdes;
vcpu->arch.hfscr = hr->hfscr;
vcpu->arch.dawr0 = hr->dawr0;
vcpu->arch.dawrx0 = hr->dawrx0;
@ -170,7 +170,13 @@ void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu,
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
vc->dpdes = hr->dpdes;
/*
* This L2 vCPU might have received a doorbell while H_ENTER_NESTED was being handled.
* Make sure we preserve the doorbell if it was either:
* a) Sent after H_ENTER_NESTED was called on this vCPU (arch.doorbell_request would be 1)
* b) Doorbell was not handled and L2 exited for some other reason (hr->dpdes would be 1)
*/
vcpu->arch.doorbell_request = vcpu->arch.doorbell_request | hr->dpdes;
vcpu->arch.hfscr = hr->hfscr;
vcpu->arch.purr = hr->purr;
vcpu->arch.spurr = hr->spurr;
@ -445,6 +451,8 @@ long kvmhv_nested_init(void)
if (rc == H_SUCCESS) {
unsigned long capabilities = 0;
if (cpu_has_feature(CPU_FTR_P11_PVR))
capabilities |= H_GUEST_CAP_POWER11;
if (cpu_has_feature(CPU_FTR_ARCH_31))
capabilities |= H_GUEST_CAP_POWER10;
if (cpu_has_feature(CPU_FTR_ARCH_300))


@ -370,7 +370,9 @@ static int gs_msg_ops_vcpu_fill_info(struct kvmppc_gs_buff *gsb,
* default to L1's PVR.
*/
if (!vcpu->arch.vcore->arch_compat) {
if (cpu_has_feature(CPU_FTR_ARCH_31))
if (cpu_has_feature(CPU_FTR_P11_PVR))
arch_compat = PVR_ARCH_31_P11;
else if (cpu_has_feature(CPU_FTR_ARCH_31))
arch_compat = PVR_ARCH_31;
else if (cpu_has_feature(CPU_FTR_ARCH_300))
arch_compat = PVR_ARCH_300;


@ -92,12 +92,6 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
spin_unlock(&vcpu3s->mmu_lock);
}
static void free_pte_rcu(struct rcu_head *head)
{
struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
kmem_cache_free(hpte_cache, pte);
}
static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{
struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
@ -126,7 +120,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
spin_unlock(&vcpu3s->mmu_lock);
call_rcu(&pte->rcu_head, free_pte_rcu);
kfree_rcu(pte, rcu_head);
}
static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)


@ -538,7 +538,7 @@ TRACE_EVENT_FN_COND(kvmppc_vcpu_stats,
TP_printk("VCPU %d: l1_to_l2_cs_time=%llu ns l2_to_l1_cs_time=%llu ns l2_runtime=%llu ns",
__entry->vcpu_id, __entry->l1_to_l2_cs,
__entry->l2_to_l1_cs, __entry->l2_runtime),
kmvhv_counters_tracepoint_regfunc, kmvhv_counters_tracepoint_unregfunc
kvmhv_counters_tracepoint_regfunc, kvmhv_counters_tracepoint_unregfunc
);
#endif
#endif /* _TRACE_KVM_HV_H */


@ -780,8 +780,8 @@ static nokprobe_inline int emulate_stq(struct pt_regs *regs, unsigned long ea,
#endif /* __powerpc64 */
#ifdef CONFIG_VSX
void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
const void *mem, bool rev)
static nokprobe_inline void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
const void *mem, bool rev)
{
int size, read_size;
int i, j;
@ -863,11 +863,9 @@ void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
break;
}
}
EXPORT_SYMBOL_GPL(emulate_vsx_load);
NOKPROBE_SYMBOL(emulate_vsx_load);
void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
void *mem, bool rev)
static nokprobe_inline void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
void *mem, bool rev)
{
int size, write_size;
int i, j;
@ -955,8 +953,6 @@ void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
break;
}
}
EXPORT_SYMBOL_GPL(emulate_vsx_store);
NOKPROBE_SYMBOL(emulate_vsx_store);
static nokprobe_inline int do_vsx_load(struct instruction_op *op,
unsigned long ea, struct pt_regs *regs,


@ -40,6 +40,7 @@
#include <linux/random.h>
#include <linux/elf-randomize.h>
#include <linux/of_fdt.h>
#include <linux/kfence.h>
#include <asm/interrupt.h>
#include <asm/processor.h>
@ -66,6 +67,7 @@
#include <asm/pte-walk.h>
#include <asm/asm-prototypes.h>
#include <asm/ultravisor.h>
#include <asm/kfence.h>
#include <mm/mmu_decl.h>
@ -123,8 +125,6 @@ EXPORT_SYMBOL_GPL(mmu_slb_size);
#ifdef CONFIG_PPC_64K_PAGES
int mmu_ci_restrictions;
#endif
static u8 *linear_map_hash_slots;
static unsigned long linear_map_hash_count;
struct mmu_hash_ops mmu_hash_ops __ro_after_init;
EXPORT_SYMBOL(mmu_hash_ops);
@ -273,6 +273,270 @@ void hash__tlbiel_all(unsigned int action)
WARN(1, "%s called on pre-POWER7 CPU\n", __func__);
}
#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
static void kernel_map_linear_page(unsigned long vaddr, unsigned long idx,
u8 *slots, raw_spinlock_t *lock)
{
unsigned long hash;
unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
unsigned long mode = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL), HPTE_USE_KERNEL_KEY);
long ret;
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
/* Don't create HPTE entries for bad address */
if (!vsid)
return;
if (slots[idx] & 0x80)
return;
ret = hpte_insert_repeating(hash, vpn, __pa(vaddr), mode,
HPTE_V_BOLTED,
mmu_linear_psize, mmu_kernel_ssize);
BUG_ON (ret < 0);
raw_spin_lock(lock);
BUG_ON(slots[idx] & 0x80);
slots[idx] = ret | 0x80;
raw_spin_unlock(lock);
}
static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long idx,
u8 *slots, raw_spinlock_t *lock)
{
unsigned long hash, hslot, slot;
unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
raw_spin_lock(lock);
if (!(slots[idx] & 0x80)) {
raw_spin_unlock(lock);
return;
}
hslot = slots[idx] & 0x7f;
slots[idx] = 0;
raw_spin_unlock(lock);
if (hslot & _PTEIDX_SECONDARY)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
slot += hslot & _PTEIDX_GROUP_IX;
mmu_hash_ops.hpte_invalidate(slot, vpn, mmu_linear_psize,
mmu_linear_psize,
mmu_kernel_ssize, 0);
}
#endif
static inline bool hash_supports_debug_pagealloc(void)
{
unsigned long max_hash_count = ppc64_rma_size / 4;
unsigned long linear_map_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
if (!debug_pagealloc_enabled() || linear_map_count > max_hash_count)
return false;
return true;
}
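
Illustrative sizing, not taken from the patch: the slots array costs one byte per linear-map page and must be allocated below ppc64_rma_size, so oversized configurations are now refused here (hash_debug_pagealloc_alloc_slots() simply returns) instead of panicking when the memblock allocation inevitably fails:

/*
 *   64 KB pages, 16 TB RAM : (16 << 40) / (64 << 10) = 256M slots = 256 MB
 *    4 KB pages, 16 TB RAM : (16 << 40) / ( 4 << 10) =   4G slots =   4 GB
 *
 * Either figure is only accepted if it stays under ppc64_rma_size / 4.
 */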
#ifdef CONFIG_DEBUG_PAGEALLOC
static u8 *linear_map_hash_slots;
static unsigned long linear_map_hash_count;
static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
static void hash_debug_pagealloc_alloc_slots(void)
{
if (!hash_supports_debug_pagealloc())
return;
linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
linear_map_hash_slots = memblock_alloc_try_nid(
linear_map_hash_count, 1, MEMBLOCK_LOW_LIMIT,
ppc64_rma_size, NUMA_NO_NODE);
if (!linear_map_hash_slots)
panic("%s: Failed to allocate %lu bytes max_addr=%pa\n",
__func__, linear_map_hash_count, &ppc64_rma_size);
}
static inline void hash_debug_pagealloc_add_slot(phys_addr_t paddr,
int slot)
{
if (!debug_pagealloc_enabled() || !linear_map_hash_count)
return;
if ((paddr >> PAGE_SHIFT) < linear_map_hash_count)
linear_map_hash_slots[paddr >> PAGE_SHIFT] = slot | 0x80;
}
static int hash_debug_pagealloc_map_pages(struct page *page, int numpages,
int enable)
{
unsigned long flags, vaddr, lmi;
int i;
if (!debug_pagealloc_enabled() || !linear_map_hash_count)
return 0;
local_irq_save(flags);
for (i = 0; i < numpages; i++, page++) {
vaddr = (unsigned long)page_address(page);
lmi = __pa(vaddr) >> PAGE_SHIFT;
if (lmi >= linear_map_hash_count)
continue;
if (enable)
kernel_map_linear_page(vaddr, lmi,
linear_map_hash_slots, &linear_map_hash_lock);
else
kernel_unmap_linear_page(vaddr, lmi,
linear_map_hash_slots, &linear_map_hash_lock);
}
local_irq_restore(flags);
return 0;
}
#else /* CONFIG_DEBUG_PAGEALLOC */
static inline void hash_debug_pagealloc_alloc_slots(void) {}
static inline void hash_debug_pagealloc_add_slot(phys_addr_t paddr, int slot) {}
static int __maybe_unused
hash_debug_pagealloc_map_pages(struct page *page, int numpages, int enable)
{
return 0;
}
#endif /* CONFIG_DEBUG_PAGEALLOC */
#ifdef CONFIG_KFENCE
static u8 *linear_map_kf_hash_slots;
static unsigned long linear_map_kf_hash_count;
static DEFINE_RAW_SPINLOCK(linear_map_kf_hash_lock);
static phys_addr_t kfence_pool;
static inline void hash_kfence_alloc_pool(void)
{
if (!kfence_early_init_enabled())
goto err;
/* allocate linear map for kfence within RMA region */
linear_map_kf_hash_count = KFENCE_POOL_SIZE >> PAGE_SHIFT;
linear_map_kf_hash_slots = memblock_alloc_try_nid(
linear_map_kf_hash_count, 1,
MEMBLOCK_LOW_LIMIT, ppc64_rma_size,
NUMA_NO_NODE);
if (!linear_map_kf_hash_slots) {
pr_err("%s: memblock for linear map (%lu) failed\n", __func__,
linear_map_kf_hash_count);
goto err;
}
/* allocate kfence pool early */
kfence_pool = memblock_phys_alloc_range(KFENCE_POOL_SIZE, PAGE_SIZE,
MEMBLOCK_LOW_LIMIT, MEMBLOCK_ALLOC_ANYWHERE);
if (!kfence_pool) {
pr_err("%s: memblock for kfence pool (%lu) failed\n", __func__,
KFENCE_POOL_SIZE);
memblock_free(linear_map_kf_hash_slots,
linear_map_kf_hash_count);
linear_map_kf_hash_count = 0;
goto err;
}
memblock_mark_nomap(kfence_pool, KFENCE_POOL_SIZE);
return;
err:
pr_info("Disabling kfence\n");
disable_kfence();
}
static inline void hash_kfence_map_pool(void)
{
unsigned long kfence_pool_start, kfence_pool_end;
unsigned long prot = pgprot_val(PAGE_KERNEL);
if (!kfence_pool)
return;
kfence_pool_start = (unsigned long) __va(kfence_pool);
kfence_pool_end = kfence_pool_start + KFENCE_POOL_SIZE;
__kfence_pool = (char *) kfence_pool_start;
BUG_ON(htab_bolt_mapping(kfence_pool_start, kfence_pool_end,
kfence_pool, prot, mmu_linear_psize,
mmu_kernel_ssize));
memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
}
static inline void hash_kfence_add_slot(phys_addr_t paddr, int slot)
{
unsigned long vaddr = (unsigned long) __va(paddr);
unsigned long lmi = (vaddr - (unsigned long)__kfence_pool)
>> PAGE_SHIFT;
if (!kfence_pool)
return;
BUG_ON(!is_kfence_address((void *)vaddr));
BUG_ON(lmi >= linear_map_kf_hash_count);
linear_map_kf_hash_slots[lmi] = slot | 0x80;
}
static int hash_kfence_map_pages(struct page *page, int numpages, int enable)
{
unsigned long flags, vaddr, lmi;
int i;
WARN_ON_ONCE(!linear_map_kf_hash_count);
local_irq_save(flags);
for (i = 0; i < numpages; i++, page++) {
vaddr = (unsigned long)page_address(page);
lmi = (vaddr - (unsigned long)__kfence_pool) >> PAGE_SHIFT;
/* Ideally this should never happen */
if (lmi >= linear_map_kf_hash_count) {
WARN_ON_ONCE(1);
continue;
}
if (enable)
kernel_map_linear_page(vaddr, lmi,
linear_map_kf_hash_slots,
&linear_map_kf_hash_lock);
else
kernel_unmap_linear_page(vaddr, lmi,
linear_map_kf_hash_slots,
&linear_map_kf_hash_lock);
}
local_irq_restore(flags);
return 0;
}
#else
static inline void hash_kfence_alloc_pool(void) {}
static inline void hash_kfence_map_pool(void) {}
static inline void hash_kfence_add_slot(phys_addr_t paddr, int slot) {}
static int __maybe_unused
hash_kfence_map_pages(struct page *page, int numpages, int enable)
{
return 0;
}
#endif
#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
int hash__kernel_map_pages(struct page *page, int numpages, int enable)
{
void *vaddr = page_address(page);
if (is_kfence_address(vaddr))
return hash_kfence_map_pages(page, numpages, enable);
else
return hash_debug_pagealloc_map_pages(page, numpages, enable);
}
static void hash_linear_map_add_slot(phys_addr_t paddr, int slot)
{
if (is_kfence_address(__va(paddr)))
hash_kfence_add_slot(paddr, slot);
else
hash_debug_pagealloc_add_slot(paddr, slot);
}
#else
static void hash_linear_map_add_slot(phys_addr_t paddr, int slot) {}
#endif
/*
* 'R' and 'C' update notes:
* - Under pHyp or KVM, the updatepp path will not set C, thus it *will*
@ -431,9 +695,8 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
break;
cond_resched();
if (debug_pagealloc_enabled_or_kfence() &&
(paddr >> PAGE_SHIFT) < linear_map_hash_count)
linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
/* add slot info in debug_pagealloc / kfence linear map */
hash_linear_map_add_slot(paddr, ret);
}
return ret < 0 ? ret : 0;
}
@ -814,7 +1077,7 @@ static void __init htab_init_page_sizes(void)
bool aligned = true;
init_hpte_page_sizes();
if (!debug_pagealloc_enabled_or_kfence()) {
if (!hash_supports_debug_pagealloc() && !kfence_early_init_enabled()) {
/*
* Pick a size for the linear mapping. Currently, we only
* support 16M, 1M and 4K which is the default
@ -1134,16 +1397,8 @@ static void __init htab_initialize(void)
prot = pgprot_val(PAGE_KERNEL);
if (debug_pagealloc_enabled_or_kfence()) {
linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
linear_map_hash_slots = memblock_alloc_try_nid(
linear_map_hash_count, 1, MEMBLOCK_LOW_LIMIT,
ppc64_rma_size, NUMA_NO_NODE);
if (!linear_map_hash_slots)
panic("%s: Failed to allocate %lu bytes max_addr=%pa\n",
__func__, linear_map_hash_count, &ppc64_rma_size);
}
hash_debug_pagealloc_alloc_slots();
hash_kfence_alloc_pool();
/* create bolted the linear mapping in the hash table */
for_each_mem_range(i, &base, &end) {
size = end - base;
@ -1160,6 +1415,7 @@ static void __init htab_initialize(void)
BUG_ON(htab_bolt_mapping(base, base + size, __pa(base),
prot, mmu_linear_psize, mmu_kernel_ssize));
}
hash_kfence_map_pool();
memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
/*
@ -2120,82 +2376,6 @@ void hpt_do_stress(unsigned long ea, unsigned long hpte_group)
}
}
#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
{
unsigned long hash;
unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
unsigned long mode = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL), HPTE_USE_KERNEL_KEY);
long ret;
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
/* Don't create HPTE entries for bad address */
if (!vsid)
return;
if (linear_map_hash_slots[lmi] & 0x80)
return;
ret = hpte_insert_repeating(hash, vpn, __pa(vaddr), mode,
HPTE_V_BOLTED,
mmu_linear_psize, mmu_kernel_ssize);
BUG_ON (ret < 0);
raw_spin_lock(&linear_map_hash_lock);
BUG_ON(linear_map_hash_slots[lmi] & 0x80);
linear_map_hash_slots[lmi] = ret | 0x80;
raw_spin_unlock(&linear_map_hash_lock);
}
static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi)
{
unsigned long hash, hidx, slot;
unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
raw_spin_lock(&linear_map_hash_lock);
if (!(linear_map_hash_slots[lmi] & 0x80)) {
raw_spin_unlock(&linear_map_hash_lock);
return;
}
hidx = linear_map_hash_slots[lmi] & 0x7f;
linear_map_hash_slots[lmi] = 0;
raw_spin_unlock(&linear_map_hash_lock);
if (hidx & _PTEIDX_SECONDARY)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
slot += hidx & _PTEIDX_GROUP_IX;
mmu_hash_ops.hpte_invalidate(slot, vpn, mmu_linear_psize,
mmu_linear_psize,
mmu_kernel_ssize, 0);
}
int hash__kernel_map_pages(struct page *page, int numpages, int enable)
{
unsigned long flags, vaddr, lmi;
int i;
local_irq_save(flags);
for (i = 0; i < numpages; i++, page++) {
vaddr = (unsigned long)page_address(page);
lmi = __pa(vaddr) >> PAGE_SHIFT;
if (lmi >= linear_map_hash_count)
continue;
if (enable)
kernel_map_linear_page(vaddr, lmi);
else
kernel_unmap_linear_page(vaddr, lmi);
}
local_irq_restore(flags);
return 0;
}
#endif /* CONFIG_DEBUG_PAGEALLOC || CONFIG_KFENCE */
void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
phys_addr_t first_memblock_size)
{


@ -37,6 +37,19 @@ EXPORT_SYMBOL(__pmd_frag_nr);
unsigned long __pmd_frag_size_shift;
EXPORT_SYMBOL(__pmd_frag_size_shift);
#ifdef CONFIG_KFENCE
extern bool kfence_early_init;
static int __init parse_kfence_early_init(char *arg)
{
int val;
if (get_option(&arg, &val))
kfence_early_init = !!val;
return 0;
}
early_param("kfence.sample_interval", parse_kfence_early_init);
#endif
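
This keeps the generic KFENCE convention that a sample interval of zero means disabled. A boot-line example (illustrative):

/*
 * "kfence.sample_interval=0"   -> kfence_early_init = false, no early pool
 * "kfence.sample_interval=500" -> kfence_early_init = true, pool reserved at
 *                                 memblock time (which the hash MMU path requires)
 */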
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
* This is called when relaxing access to a hugepage. It's also called in the page


@ -363,18 +363,6 @@ static int __meminit create_physical_mapping(unsigned long start,
}
#ifdef CONFIG_KFENCE
static bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
static int __init parse_kfence_early_init(char *arg)
{
int val;
if (get_option(&arg, &val))
kfence_early_init = !!val;
return 0;
}
early_param("kfence.sample_interval", parse_kfence_early_init);
static inline phys_addr_t alloc_kfence_pool(void)
{
phys_addr_t kfence_pool;


@ -439,10 +439,16 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
/*
* The kernel should never take an execute fault nor should it
* take a page fault to a kernel address or a page fault to a user
* address outside of dedicated places
* address outside of dedicated places.
*
* Rather than kfence directly reporting false negatives, search whether
* the NIP belongs to the fixup table for cases where fault could come
* from functions like copy_from_kernel_nofault().
*/
if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address, is_write))) {
if (kfence_handle_page_fault(address, is_write, regs))
if (is_kfence_address((void *)address) &&
!search_exception_tables(instruction_pointer(regs)) &&
kfence_handle_page_fault(address, is_write, regs))
return 0;
return SIGSEGV;


@ -33,6 +33,7 @@ bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
#ifdef CONFIG_KFENCE
bool __ro_after_init kfence_disabled;
bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
#endif
static int __init parse_nosmep(char *p)


@ -12,6 +12,7 @@
#include <asm/types.h>
#include <asm/ppc-opcode.h>
#include <linux/build_bug.h>
#ifdef CONFIG_PPC64_ELF_ABI_V1
#define FUNCTION_DESCR_SIZE 24
@ -21,6 +22,9 @@
#define CTX_NIA(ctx) ((unsigned long)ctx->idx * 4)
#define SZL sizeof(unsigned long)
#define BPF_INSN_SAFETY 64
#define PLANT_INSTR(d, idx, instr) \
do { if (d) { (d)[idx] = instr; } idx++; } while (0)
#define EMIT(instr) PLANT_INSTR(image, ctx->idx, instr)
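
These macros carry the JIT's multi-pass scheme into the trampoline code below: EMIT() always advances ctx->idx, and only writes the instruction when a destination buffer is supplied, so any generator can be run once with a NULL image purely to size the code. A minimal sketch of the idiom (function name made up):

/* Acts as a sizing pass when image == NULL, as an emission pass otherwise. */
static int emit_two_insns_example(u32 *image, struct codegen_context *ctx)
{
	EMIT(PPC_RAW_MFLR(_R0));	/* counted always, stored only if image != NULL */
	EMIT(PPC_RAW_NOP());
	return ctx->idx;		/* instruction words generated so far */
}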
@ -81,6 +85,18 @@
EMIT(PPC_RAW_ORI(d, d, (uintptr_t)(i) & \
0xffff)); \
} } while (0)
#define PPC_LI_ADDR PPC_LI64
#ifndef CONFIG_PPC_KERNEL_PCREL
#define PPC64_LOAD_PACA() \
EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc)))
#else
#define PPC64_LOAD_PACA() do {} while (0)
#endif
#else
#define PPC_LI64(d, i) BUILD_BUG()
#define PPC_LI_ADDR PPC_LI32
#define PPC64_LOAD_PACA() BUILD_BUG()
#endif
/*
@ -165,6 +181,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
u32 *addrs, int pass, bool extra_pass);
void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx);
void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx);
void bpf_jit_build_fentry_stubs(u32 *image, struct codegen_context *ctx);
void bpf_jit_realloc_regs(struct codegen_context *ctx);
int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx, int tmp_reg, long exit_addr);


@ -22,11 +22,81 @@
#include "bpf_jit.h"
/* These offsets are from bpf prog end and stay the same across progs */
static int bpf_jit_ool_stub, bpf_jit_long_branch_stub;
static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
{
memset32(area, BREAKPOINT_INSTRUCTION, size / 4);
}
void dummy_tramp(void);
asm (
" .pushsection .text, \"ax\", @progbits ;"
" .global dummy_tramp ;"
" .type dummy_tramp, @function ;"
"dummy_tramp: ;"
#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
" blr ;"
#else
/* LR is always in r11, so we don't need a 'mflr r11' here */
" mtctr 11 ;"
" mtlr 0 ;"
" bctr ;"
#endif
" .size dummy_tramp, .-dummy_tramp ;"
" .popsection ;"
);
void bpf_jit_build_fentry_stubs(u32 *image, struct codegen_context *ctx)
{
int ool_stub_idx, long_branch_stub_idx;
/*
* Out-of-line stub:
* mflr r0
* [b|bl] tramp
* mtlr r0 // only with CONFIG_PPC_FTRACE_OUT_OF_LINE
* b bpf_func + 4
*/
ool_stub_idx = ctx->idx;
EMIT(PPC_RAW_MFLR(_R0));
EMIT(PPC_RAW_NOP());
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
EMIT(PPC_RAW_MTLR(_R0));
WARN_ON_ONCE(!is_offset_in_branch_range(4 - (long)ctx->idx * 4));
EMIT(PPC_RAW_BRANCH(4 - (long)ctx->idx * 4));
/*
* Long branch stub:
* .long <dummy_tramp_addr>
* mflr r11
* bcl 20,31,$+4
* mflr r12
* ld r12, -8-SZL(r12)
* mtctr r12
* mtlr r11 // needed to retain ftrace ABI
* bctr
*/
if (image)
*((unsigned long *)&image[ctx->idx]) = (unsigned long)dummy_tramp;
ctx->idx += SZL / 4;
long_branch_stub_idx = ctx->idx;
EMIT(PPC_RAW_MFLR(_R11));
EMIT(PPC_RAW_BCL4());
EMIT(PPC_RAW_MFLR(_R12));
EMIT(PPC_RAW_LL(_R12, _R12, -8-SZL));
EMIT(PPC_RAW_MTCTR(_R12));
EMIT(PPC_RAW_MTLR(_R11));
EMIT(PPC_RAW_BCTR());
if (!bpf_jit_ool_stub) {
bpf_jit_ool_stub = (ctx->idx - ool_stub_idx) * 4;
bpf_jit_long_branch_stub = (ctx->idx - long_branch_stub_idx) * 4;
}
}
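
The long branch stub uses a small PC-relative trick to find its own literal without needing a register that already holds the stub's address. Spelled out (offsets in bytes from the start of the stub; SZL == sizeof(unsigned long)):

/*
 *   +0       .long/.quad dummy_tramp     the literal slot
 *   +SZL     mflr  r11                   save LR, clobbered by the bcl below
 *   +SZL+4   bcl   20,31,$+4             sets LR to +SZL+8
 *   +SZL+8   mflr  r12                   r12 = +SZL+8
 *   +SZL+12  PPC_LL r12, -8-SZL(r12)     +SZL+8 - 8 - SZL = +0, i.e. the literal
 *   ...      mtctr r12; mtlr r11; bctr
 */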
int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx, int tmp_reg, long exit_addr)
{
if (!exit_addr || is_offset_in_branch_range(exit_addr - (ctx->idx * 4))) {
@ -222,7 +292,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
fp->bpf_func = (void *)fimage;
fp->jited = 1;
fp->jited_len = proglen + FUNCTION_DESCR_SIZE;
fp->jited_len = cgctx.idx * 4 + FUNCTION_DESCR_SIZE;
if (!fp->is_func || extra_pass) {
if (bpf_jit_binary_pack_finalize(fhdr, hdr)) {
@ -369,3 +439,778 @@ bool bpf_jit_supports_far_kfunc_call(void)
{
return IS_ENABLED(CONFIG_PPC64);
}
void *arch_alloc_bpf_trampoline(unsigned int size)
{
return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
}
void arch_free_bpf_trampoline(void *image, unsigned int size)
{
bpf_prog_pack_free(image, size);
}
int arch_protect_bpf_trampoline(void *image, unsigned int size)
{
return 0;
}
static int invoke_bpf_prog(u32 *image, u32 *ro_image, struct codegen_context *ctx,
struct bpf_tramp_link *l, int regs_off, int retval_off,
int run_ctx_off, bool save_ret)
{
struct bpf_prog *p = l->link.prog;
ppc_inst_t branch_insn;
u32 jmp_idx;
int ret = 0;
/* Save cookie */
if (IS_ENABLED(CONFIG_PPC64)) {
PPC_LI64(_R3, l->cookie);
EMIT(PPC_RAW_STD(_R3, _R1, run_ctx_off + offsetof(struct bpf_tramp_run_ctx,
bpf_cookie)));
} else {
PPC_LI32(_R3, l->cookie >> 32);
PPC_LI32(_R4, l->cookie);
EMIT(PPC_RAW_STW(_R3, _R1,
run_ctx_off + offsetof(struct bpf_tramp_run_ctx, bpf_cookie)));
EMIT(PPC_RAW_STW(_R4, _R1,
run_ctx_off + offsetof(struct bpf_tramp_run_ctx, bpf_cookie) + 4));
}
/* __bpf_prog_enter(p, &bpf_tramp_run_ctx) */
PPC_LI_ADDR(_R3, p);
EMIT(PPC_RAW_MR(_R25, _R3));
EMIT(PPC_RAW_ADDI(_R4, _R1, run_ctx_off));
ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx,
(unsigned long)bpf_trampoline_enter(p));
if (ret)
return ret;
/* Remember prog start time returned by __bpf_prog_enter */
EMIT(PPC_RAW_MR(_R26, _R3));
/*
* if (__bpf_prog_enter(p) == 0)
* goto skip_exec_of_prog;
*
* Emit a nop to be later patched with conditional branch, once offset is known
*/
EMIT(PPC_RAW_CMPLI(_R3, 0));
jmp_idx = ctx->idx;
EMIT(PPC_RAW_NOP());
/* p->bpf_func(ctx) */
EMIT(PPC_RAW_ADDI(_R3, _R1, regs_off));
if (!p->jited)
PPC_LI_ADDR(_R4, (unsigned long)p->insnsi);
if (!create_branch(&branch_insn, (u32 *)&ro_image[ctx->idx], (unsigned long)p->bpf_func,
BRANCH_SET_LINK)) {
if (image)
image[ctx->idx] = ppc_inst_val(branch_insn);
ctx->idx++;
} else {
EMIT(PPC_RAW_LL(_R12, _R25, offsetof(struct bpf_prog, bpf_func)));
EMIT(PPC_RAW_MTCTR(_R12));
EMIT(PPC_RAW_BCTRL());
}
if (save_ret)
EMIT(PPC_RAW_STL(_R3, _R1, retval_off));
/* Fix up branch */
if (image) {
if (create_cond_branch(&branch_insn, &image[jmp_idx],
(unsigned long)&image[ctx->idx], COND_EQ << 16))
return -EINVAL;
image[jmp_idx] = ppc_inst_val(branch_insn);
}
/* __bpf_prog_exit(p, start_time, &bpf_tramp_run_ctx) */
EMIT(PPC_RAW_MR(_R3, _R25));
EMIT(PPC_RAW_MR(_R4, _R26));
EMIT(PPC_RAW_ADDI(_R5, _R1, run_ctx_off));
ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx,
(unsigned long)bpf_trampoline_exit(p));
return ret;
}
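
Condensed, the sequence emitted above for each attached program looks like this (register roles as in the code; label name invented):

/*
 *   <store l->cookie into run_ctx.bpf_cookie>
 *   r3 = p; r25 = p; r4 = &run_ctx
 *   call <enter helper from bpf_trampoline_enter(p)>    returns start time in r3
 *   r26 = r3
 *   cmpli r3, 0 ; beq skip      emitted as a nop, patched to beq once the
 *                               skip offset is known
 *   r3 = &regs[0] ; (r4 = p->insnsi if not JITed)
 *   call p->bpf_func            bl if in range, otherwise via ctr
 *   [store r3 at retval_off]    only if save_ret
 * skip:
 *   r3 = r25 ; r4 = r26 ; r5 = &run_ctx
 *   call <exit helper from bpf_trampoline_exit(p)>
 */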
static int invoke_bpf_mod_ret(u32 *image, u32 *ro_image, struct codegen_context *ctx,
struct bpf_tramp_links *tl, int regs_off, int retval_off,
int run_ctx_off, u32 *branches)
{
int i;
/*
* The first fmod_ret program will receive a garbage return value.
* Set this to 0 to avoid confusing the program.
*/
EMIT(PPC_RAW_LI(_R3, 0));
EMIT(PPC_RAW_STL(_R3, _R1, retval_off));
for (i = 0; i < tl->nr_links; i++) {
if (invoke_bpf_prog(image, ro_image, ctx, tl->links[i], regs_off, retval_off,
run_ctx_off, true))
return -EINVAL;
/*
* mod_ret prog stored return value after prog ctx. Emit:
* if (*(u64 *)(ret_val) != 0)
* goto do_fexit;
*/
EMIT(PPC_RAW_LL(_R3, _R1, retval_off));
EMIT(PPC_RAW_CMPLI(_R3, 0));
/*
* Save the location of the branch and generate a nop, which is
* replaced with a conditional jump once do_fexit (i.e. the
* start of the fexit invocation) is finalized.
*/
branches[i] = ctx->idx;
EMIT(PPC_RAW_NOP());
}
return 0;
}
static void bpf_trampoline_setup_tail_call_cnt(u32 *image, struct codegen_context *ctx,
int func_frame_offset, int r4_off)
{
if (IS_ENABLED(CONFIG_PPC64)) {
/* See bpf_jit_stack_tailcallcnt() */
int tailcallcnt_offset = 6 * 8;
EMIT(PPC_RAW_LL(_R3, _R1, func_frame_offset - tailcallcnt_offset));
EMIT(PPC_RAW_STL(_R3, _R1, -tailcallcnt_offset));
} else {
/* See bpf_jit_stack_offsetof() and BPF_PPC_TC */
EMIT(PPC_RAW_LL(_R4, _R1, r4_off));
}
}
static void bpf_trampoline_restore_tail_call_cnt(u32 *image, struct codegen_context *ctx,
int func_frame_offset, int r4_off)
{
if (IS_ENABLED(CONFIG_PPC64)) {
/* See bpf_jit_stack_tailcallcnt() */
int tailcallcnt_offset = 6 * 8;
EMIT(PPC_RAW_LL(_R3, _R1, -tailcallcnt_offset));
EMIT(PPC_RAW_STL(_R3, _R1, func_frame_offset - tailcallcnt_offset));
} else {
/* See bpf_jit_stack_offsetof() and BPF_PPC_TC */
EMIT(PPC_RAW_STL(_R4, _R1, r4_off));
}
}
static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx, int func_frame_offset,
int nr_regs, int regs_off)
{
int param_save_area_offset;
param_save_area_offset = func_frame_offset; /* the two frames we allotted */
param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */
for (int i = 0; i < nr_regs; i++) {
if (i < 8) {
EMIT(PPC_RAW_STL(_R3 + i, _R1, regs_off + i * SZL));
} else {
EMIT(PPC_RAW_LL(_R3, _R1, param_save_area_offset + i * SZL));
EMIT(PPC_RAW_STL(_R3, _R1, regs_off + i * SZL));
}
}
}
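
Only the first eight (double)word arguments arrive in GPRs under the ABI assumed here; the loop above mirrors every argument into the prog context at regs_off:

/*
 *   arg i, i < 8  : copied from register r3 + i
 *   arg i, i >= 8 : reloaded from the traced function's caller's parameter save
 *                   area at func_frame_offset + STACK_FRAME_MIN_SIZE + i * SZL,
 *                   then stored at regs_off + i * SZL like the others
 */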
/* Used when restoring just the register parameters when returning back */
static void bpf_trampoline_restore_args_regs(u32 *image, struct codegen_context *ctx,
int nr_regs, int regs_off)
{
for (int i = 0; i < nr_regs && i < 8; i++)
EMIT(PPC_RAW_LL(_R3 + i, _R1, regs_off + i * SZL));
}
/* Used when we call into the traced function. Replicate parameter save area */
static void bpf_trampoline_restore_args_stack(u32 *image, struct codegen_context *ctx,
int func_frame_offset, int nr_regs, int regs_off)
{
int param_save_area_offset;
param_save_area_offset = func_frame_offset; /* the two frames we allotted */
param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */
for (int i = 8; i < nr_regs; i++) {
EMIT(PPC_RAW_LL(_R3, _R1, param_save_area_offset + i * SZL));
EMIT(PPC_RAW_STL(_R3, _R1, STACK_FRAME_MIN_SIZE + i * SZL));
}
bpf_trampoline_restore_args_regs(image, ctx, nr_regs, regs_off);
}
static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_image,
void *rw_image_end, void *ro_image,
const struct btf_func_model *m, u32 flags,
struct bpf_tramp_links *tlinks,
void *func_addr)
{
int regs_off, nregs_off, ip_off, run_ctx_off, retval_off, nvr_off, alt_lr_off, r4_off = 0;
int i, ret, nr_regs, bpf_frame_size = 0, bpf_dummy_frame_size = 0, func_frame_offset;
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
struct codegen_context codegen_ctx, *ctx;
u32 *image = (u32 *)rw_image;
ppc_inst_t branch_insn;
u32 *branches = NULL;
bool save_ret;
if (IS_ENABLED(CONFIG_PPC32))
return -EOPNOTSUPP;
nr_regs = m->nr_args;
/* Extra registers for struct arguments */
for (i = 0; i < m->nr_args; i++)
if (m->arg_size[i] > SZL)
nr_regs += round_up(m->arg_size[i], SZL) / SZL - 1;
if (nr_regs > MAX_BPF_FUNC_ARGS)
return -EOPNOTSUPP;
ctx = &codegen_ctx;
memset(ctx, 0, sizeof(*ctx));
/*
* Generated stack layout:
*
* func prev back chain [ back chain ]
* [ ]
* bpf prog redzone/tailcallcnt [ ... ] 64 bytes (64-bit powerpc)
* [ ] --
* LR save area [ r0 save (64-bit) ] | header
* [ r0 save (32-bit) ] |
* dummy frame for unwind [ back chain 1 ] --
* [ padding ] align stack frame
* r4_off [ r4 (tailcallcnt) ] optional - 32-bit powerpc
* alt_lr_off [ real lr (ool stub)] optional - actual lr
* [ r26 ]
* nvr_off [ r25 ] nvr save area
* retval_off [ return value ]
* [ reg argN ]
* [ ... ]
* regs_off [ reg_arg1 ] prog ctx context
* nregs_off [ args count ]
* ip_off [ traced function ]
* [ ... ]
* run_ctx_off [ bpf_tramp_run_ctx ]
* [ reg argN ]
* [ ... ]
* param_save_area [ reg_arg1 ] min 8 doublewords, per ABI
* [ TOC save (64-bit) ] --
* [ LR save (64-bit) ] | header
* [ LR save (32-bit) ] |
* bpf trampoline frame [ back chain 2 ] --
*
*/
/* Minimum stack frame header */
bpf_frame_size = STACK_FRAME_MIN_SIZE;
/*
* Room for parameter save area.
*
* As per the ABI, this is required if we call into the traced
* function (BPF_TRAMP_F_CALL_ORIG):
* - if the function takes more than 8 arguments, so that the rest spill onto the stack
* - or, if the function has variadic arguments
* - or, if this function's prototype was not available to the caller
*
* Reserve space for at least 8 registers for now. This can be optimized later.
*/
bpf_frame_size += (nr_regs > 8 ? nr_regs : 8) * SZL;
/* Room for struct bpf_tramp_run_ctx */
run_ctx_off = bpf_frame_size;
bpf_frame_size += round_up(sizeof(struct bpf_tramp_run_ctx), SZL);
/* Room for IP address argument */
ip_off = bpf_frame_size;
if (flags & BPF_TRAMP_F_IP_ARG)
bpf_frame_size += SZL;
/* Room for args count */
nregs_off = bpf_frame_size;
bpf_frame_size += SZL;
/* Room for args */
regs_off = bpf_frame_size;
bpf_frame_size += nr_regs * SZL;
/* Room for return value of func_addr or fentry prog */
retval_off = bpf_frame_size;
save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
if (save_ret)
bpf_frame_size += SZL;
/* Room for nvr save area */
nvr_off = bpf_frame_size;
bpf_frame_size += 2 * SZL;
/* Optional save area for actual LR in case of ool ftrace */
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
alt_lr_off = bpf_frame_size;
bpf_frame_size += SZL;
}
if (IS_ENABLED(CONFIG_PPC32)) {
if (nr_regs < 2) {
r4_off = bpf_frame_size;
bpf_frame_size += SZL;
} else {
r4_off = regs_off + SZL;
}
}
/* Padding to align stack frame, if any */
bpf_frame_size = round_up(bpf_frame_size, SZL * 2);
/* Dummy frame size for proper unwind - includes 64-bytes red zone for 64-bit powerpc */
bpf_dummy_frame_size = STACK_FRAME_MIN_SIZE + 64;
/* Offset to the traced function's stack frame */
func_frame_offset = bpf_dummy_frame_size + bpf_frame_size;
/* Create dummy frame for unwind, store original return value */
EMIT(PPC_RAW_STL(_R0, _R1, PPC_LR_STKOFF));
/* Protect red zone where tail call count goes */
EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_dummy_frame_size));
/* Create our stack frame */
EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_frame_size));
/* 64-bit: Save TOC and load kernel TOC */
if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
EMIT(PPC_RAW_STD(_R2, _R1, 24));
PPC64_LOAD_PACA();
}
/* 32-bit: save tail call count in r4 */
if (IS_ENABLED(CONFIG_PPC32) && nr_regs < 2)
EMIT(PPC_RAW_STL(_R4, _R1, r4_off));
bpf_trampoline_save_args(image, ctx, func_frame_offset, nr_regs, regs_off);
/* Save our return address */
EMIT(PPC_RAW_MFLR(_R3));
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
EMIT(PPC_RAW_STL(_R3, _R1, alt_lr_off));
else
EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
/*
* Save the ip address of the traced function.
* We could recover this from the LR, but we would need to adjust for the OOL
* trampoline and the optional GEP area.
*/
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) || flags & BPF_TRAMP_F_IP_ARG) {
EMIT(PPC_RAW_LWZ(_R4, _R3, 4));
EMIT(PPC_RAW_SLWI(_R4, _R4, 6));
EMIT(PPC_RAW_SRAWI(_R4, _R4, 6));
EMIT(PPC_RAW_ADD(_R3, _R3, _R4));
EMIT(PPC_RAW_ADDI(_R3, _R3, 4));
}
if (flags & BPF_TRAMP_F_IP_ARG)
EMIT(PPC_RAW_STL(_R3, _R1, ip_off));
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
/* Fake our LR for unwind */
EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
/* Save function arg count -- see bpf_get_func_arg_cnt() */
EMIT(PPC_RAW_LI(_R3, nr_regs));
EMIT(PPC_RAW_STL(_R3, _R1, nregs_off));
/* Save nv regs */
EMIT(PPC_RAW_STL(_R25, _R1, nvr_off));
EMIT(PPC_RAW_STL(_R26, _R1, nvr_off + SZL));
if (flags & BPF_TRAMP_F_CALL_ORIG) {
PPC_LI_ADDR(_R3, (unsigned long)im);
ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx,
(unsigned long)__bpf_tramp_enter);
if (ret)
return ret;
}
for (i = 0; i < fentry->nr_links; i++)
if (invoke_bpf_prog(image, ro_image, ctx, fentry->links[i], regs_off, retval_off,
run_ctx_off, flags & BPF_TRAMP_F_RET_FENTRY_RET))
return -EINVAL;
if (fmod_ret->nr_links) {
branches = kcalloc(fmod_ret->nr_links, sizeof(u32), GFP_KERNEL);
if (!branches)
return -ENOMEM;
if (invoke_bpf_mod_ret(image, ro_image, ctx, fmod_ret, regs_off, retval_off,
run_ctx_off, branches)) {
ret = -EINVAL;
goto cleanup;
}
}
/* Call the traced function */
if (flags & BPF_TRAMP_F_CALL_ORIG) {
/*
* The address in LR save area points to the correct point in the original function
* with both PPC_FTRACE_OUT_OF_LINE as well as with traditional ftrace instruction
* sequence
*/
EMIT(PPC_RAW_LL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
EMIT(PPC_RAW_MTCTR(_R3));
/* Replicate tail_call_cnt before calling the original BPF prog */
if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
bpf_trampoline_setup_tail_call_cnt(image, ctx, func_frame_offset, r4_off);
/* Restore args */
bpf_trampoline_restore_args_stack(image, ctx, func_frame_offset, nr_regs, regs_off);
/* Restore TOC for 64-bit */
if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL))
EMIT(PPC_RAW_LD(_R2, _R1, 24));
EMIT(PPC_RAW_BCTRL());
if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL))
PPC64_LOAD_PACA();
/* Store return value for bpf prog to access */
EMIT(PPC_RAW_STL(_R3, _R1, retval_off));
/* Restore updated tail_call_cnt */
if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
bpf_trampoline_restore_tail_call_cnt(image, ctx, func_frame_offset, r4_off);
/* Reserve space to patch branch instruction to skip fexit progs */
im->ip_after_call = &((u32 *)ro_image)[ctx->idx];
EMIT(PPC_RAW_NOP());
}
/* Update branches saved in invoke_bpf_mod_ret with address of do_fexit */
for (i = 0; i < fmod_ret->nr_links && image; i++) {
if (create_cond_branch(&branch_insn, &image[branches[i]],
(unsigned long)&image[ctx->idx], COND_NE << 16)) {
ret = -EINVAL;
goto cleanup;
}
image[branches[i]] = ppc_inst_val(branch_insn);
}
for (i = 0; i < fexit->nr_links; i++)
if (invoke_bpf_prog(image, ro_image, ctx, fexit->links[i], regs_off, retval_off,
run_ctx_off, false)) {
ret = -EINVAL;
goto cleanup;
}
if (flags & BPF_TRAMP_F_CALL_ORIG) {
im->ip_epilogue = &((u32 *)ro_image)[ctx->idx];
PPC_LI_ADDR(_R3, im);
ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx,
(unsigned long)__bpf_tramp_exit);
if (ret)
goto cleanup;
}
if (flags & BPF_TRAMP_F_RESTORE_REGS)
bpf_trampoline_restore_args_regs(image, ctx, nr_regs, regs_off);
/* Restore return value of func_addr or fentry prog */
if (save_ret)
EMIT(PPC_RAW_LL(_R3, _R1, retval_off));
/* Restore nv regs */
EMIT(PPC_RAW_LL(_R26, _R1, nvr_off + SZL));
EMIT(PPC_RAW_LL(_R25, _R1, nvr_off));
/* Epilogue */
if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL))
EMIT(PPC_RAW_LD(_R2, _R1, 24));
if (flags & BPF_TRAMP_F_SKIP_FRAME) {
/* Skip the traced function and return to parent */
EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));
EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));
EMIT(PPC_RAW_MTLR(_R0));
EMIT(PPC_RAW_BLR());
} else {
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
EMIT(PPC_RAW_LL(_R0, _R1, alt_lr_off));
EMIT(PPC_RAW_MTLR(_R0));
EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));
EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));
EMIT(PPC_RAW_BLR());
} else {
EMIT(PPC_RAW_LL(_R0, _R1, bpf_frame_size + PPC_LR_STKOFF));
EMIT(PPC_RAW_MTCTR(_R0));
EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));
EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));
EMIT(PPC_RAW_MTLR(_R0));
EMIT(PPC_RAW_BCTR());
}
}
/* Make sure the trampoline generation logic doesn't overflow */
if (image && WARN_ON_ONCE(&image[ctx->idx] > (u32 *)rw_image_end - BPF_INSN_SAFETY)) {
ret = -EFAULT;
goto cleanup;
}
ret = ctx->idx * 4 + BPF_INSN_SAFETY * 4;
cleanup:
kfree(branches);
return ret;
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
struct bpf_tramp_links *tlinks, void *func_addr)
{
struct bpf_tramp_image im;
void *image;
int ret;
/*
* Allocate a temporary buffer for __arch_prepare_bpf_trampoline().
* This will NOT cause fragmentation in direct map, as we do not
* call set_memory_*() on this buffer.
*
* We cannot use kvmalloc here, because we need image to be in
* module memory range.
*/
image = bpf_jit_alloc_exec(PAGE_SIZE);
if (!image)
return -ENOMEM;
ret = __arch_prepare_bpf_trampoline(&im, image, image + PAGE_SIZE, image,
m, flags, tlinks, func_addr);
bpf_jit_free_exec(image);
return ret;
}
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
const struct btf_func_model *m, u32 flags,
struct bpf_tramp_links *tlinks,
void *func_addr)
{
u32 size = image_end - image;
void *rw_image, *tmp;
int ret;
/*
* rw_image doesn't need to be in module memory range, so we can
* use kvmalloc.
*/
rw_image = kvmalloc(size, GFP_KERNEL);
if (!rw_image)
return -ENOMEM;
ret = __arch_prepare_bpf_trampoline(im, rw_image, rw_image + size, image, m,
flags, tlinks, func_addr);
if (ret < 0)
goto out;
if (bpf_jit_enable > 1)
bpf_jit_dump(1, ret - BPF_INSN_SAFETY * 4, 1, rw_image);
tmp = bpf_arch_text_copy(image, rw_image, size);
if (IS_ERR(tmp))
ret = PTR_ERR(tmp);
out:
kvfree(rw_image);
return ret;
}
static int bpf_modify_inst(void *ip, ppc_inst_t old_inst, ppc_inst_t new_inst)
{
ppc_inst_t org_inst;
if (copy_inst_from_kernel_nofault(&org_inst, ip)) {
pr_err("0x%lx: fetching instruction failed\n", (unsigned long)ip);
return -EFAULT;
}
if (!ppc_inst_equal(org_inst, old_inst)) {
pr_err("0x%lx: expected (%08lx) != found (%08lx)\n",
(unsigned long)ip, ppc_inst_as_ulong(old_inst), ppc_inst_as_ulong(org_inst));
return -EINVAL;
}
if (ppc_inst_equal(old_inst, new_inst))
return 0;
return patch_instruction(ip, new_inst);
}
static void do_isync(void *info __maybe_unused)
{
isync();
}
/*
* A 3-step process for bpf prog entry:
* 1. At bpf prog entry, a single nop/b:
* bpf_func:
* [nop|b] ool_stub
* 2. Out-of-line stub:
* ool_stub:
* mflr r0
* [b|bl] <bpf_prog>/<long_branch_stub>
* mtlr r0 // CONFIG_PPC_FTRACE_OUT_OF_LINE only
* b bpf_func + 4
* 3. Long branch stub:
* long_branch_stub:
* .long <branch_addr>/<dummy_tramp>
* mflr r11
* bcl 20,31,$+4
* mflr r12
* ld r12, -16(r12)
* mtctr r12
* mtlr r11 // needed to retain ftrace ABI
* bctr
*
* dummy_tramp is used to reduce synchronization requirements.
*
* When attaching a bpf trampoline to a bpf prog, we do not need any
* synchronization here since we always have a valid branch target regardless
* of the order in which the above stores are seen. dummy_tramp ensures that
* the long_branch stub goes to a valid destination on other cpus, even when
* the branch to the long_branch stub is seen before the updated trampoline
* address.
*
* However, when detaching a bpf trampoline from a bpf prog, or if changing
* the bpf trampoline address, we need synchronization to ensure that other
* cpus can no longer branch into the older trampoline so that it can be
* safely freed. bpf_tramp_image_put() uses rcu_tasks to ensure all cpus
* make forward progress, but we still need to ensure that other cpus
* execute isync (or some CSI) so that they don't go back into the
* trampoline again.
*/
int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
void *old_addr, void *new_addr)
{
unsigned long bpf_func, bpf_func_end, size, offset;
ppc_inst_t old_inst, new_inst;
int ret = 0, branch_flags;
char name[KSYM_NAME_LEN];
if (IS_ENABLED(CONFIG_PPC32))
return -EOPNOTSUPP;
bpf_func = (unsigned long)ip;
branch_flags = poke_type == BPF_MOD_CALL ? BRANCH_SET_LINK : 0;
/* We currently only support poking bpf programs */
if (!__bpf_address_lookup(bpf_func, &size, &offset, name)) {
pr_err("%s (0x%lx): kernel/modules are not supported\n", __func__, bpf_func);
return -EOPNOTSUPP;
}
/*
* If we are not poking at bpf prog entry, then we are simply patching in/out
* an unconditional branch instruction at im->ip_after_call
*/
if (offset) {
if (poke_type != BPF_MOD_JUMP) {
pr_err("%s (0x%lx): calls are not supported in bpf prog body\n", __func__,
bpf_func);
return -EOPNOTSUPP;
}
old_inst = ppc_inst(PPC_RAW_NOP());
if (old_addr)
if (create_branch(&old_inst, ip, (unsigned long)old_addr, 0))
return -ERANGE;
new_inst = ppc_inst(PPC_RAW_NOP());
if (new_addr)
if (create_branch(&new_inst, ip, (unsigned long)new_addr, 0))
return -ERANGE;
mutex_lock(&text_mutex);
ret = bpf_modify_inst(ip, old_inst, new_inst);
mutex_unlock(&text_mutex);
/* Make sure all cpus see the new instruction */
smp_call_function(do_isync, NULL, 1);
return ret;
}
bpf_func_end = bpf_func + size;
/* Address of the jmp/call instruction in the out-of-line stub */
ip = (void *)(bpf_func_end - bpf_jit_ool_stub + 4);
if (!is_offset_in_branch_range((long)ip - 4 - bpf_func)) {
pr_err("%s (0x%lx): bpf prog too large, ool stub out of branch range\n", __func__,
bpf_func);
return -ERANGE;
}
old_inst = ppc_inst(PPC_RAW_NOP());
if (old_addr) {
if (is_offset_in_branch_range(ip - old_addr))
create_branch(&old_inst, ip, (unsigned long)old_addr, branch_flags);
else
create_branch(&old_inst, ip, bpf_func_end - bpf_jit_long_branch_stub,
branch_flags);
}
new_inst = ppc_inst(PPC_RAW_NOP());
if (new_addr) {
if (is_offset_in_branch_range(ip - new_addr))
create_branch(&new_inst, ip, (unsigned long)new_addr, branch_flags);
else
create_branch(&new_inst, ip, bpf_func_end - bpf_jit_long_branch_stub,
branch_flags);
}
mutex_lock(&text_mutex);
/*
* 1. Update the address in the long branch stub:
* If new_addr is out of range, we will have to use the long branch stub, so patch new_addr
* here. Otherwise, revert to dummy_tramp, but only if we had patched old_addr here.
*/
if ((new_addr && !is_offset_in_branch_range(new_addr - ip)) ||
(old_addr && !is_offset_in_branch_range(old_addr - ip)))
ret = patch_ulong((void *)(bpf_func_end - bpf_jit_long_branch_stub - SZL),
(new_addr && !is_offset_in_branch_range(new_addr - ip)) ?
(unsigned long)new_addr : (unsigned long)dummy_tramp);
if (ret)
goto out;
/* 2. Update the branch/call in the out-of-line stub */
ret = bpf_modify_inst(ip, old_inst, new_inst);
if (ret)
goto out;
/* 3. Update instruction at bpf prog entry */
ip = (void *)bpf_func;
if (!old_addr || !new_addr) {
if (!old_addr) {
old_inst = ppc_inst(PPC_RAW_NOP());
create_branch(&new_inst, ip, bpf_func_end - bpf_jit_ool_stub, 0);
} else {
new_inst = ppc_inst(PPC_RAW_NOP());
create_branch(&old_inst, ip, bpf_func_end - bpf_jit_ool_stub, 0);
}
ret = bpf_modify_inst(ip, old_inst, new_inst);
}
out:
mutex_unlock(&text_mutex);
/*
* Sync only when an older trampoline existed (i.e. we are detaching it or
* replacing it), so that the older trampoline can be freed safely.
*/
if (old_addr)
smp_call_function(do_isync, NULL, 1);
return ret;
}


@@ -127,13 +127,16 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
{
int i;
/* Instruction for trampoline attach */
EMIT(PPC_RAW_NOP());
/* Initialize tail_call_cnt, to be skipped if we do tail calls. */
if (ctx->seen & SEEN_TAILCALL)
EMIT(PPC_RAW_LI(_R4, 0));
else
EMIT(PPC_RAW_NOP());
#define BPF_TAILCALL_PROLOGUE_SIZE 4
#define BPF_TAILCALL_PROLOGUE_SIZE 8
if (bpf_has_stack_frame(ctx))
EMIT(PPC_RAW_STWU(_R1, _R1, -BPF_PPC_STACKFRAME(ctx)));
@@ -198,6 +201,8 @@ void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
bpf_jit_emit_common_epilogue(image, ctx);
EMIT(PPC_RAW_BLR());
bpf_jit_build_fentry_stubs(image, ctx);
}
/* Relative offset needs to be calculated based on final image location */


@@ -84,7 +84,7 @@ static inline bool bpf_has_stack_frame(struct codegen_context *ctx)
}
/*
* When not setting up our own stackframe, the redzone usage is:
* When not setting up our own stackframe, the redzone (288 bytes) usage is:
*
* [ prev sp ] <-------------
* [ ... ] |
@@ -92,7 +92,7 @@ static inline bool bpf_has_stack_frame(struct codegen_context *ctx)
* [ nv gpr save area ] 5*8
* [ tail_call_cnt ] 8
* [ local_tmp_var ] 16
* [ unused red zone ] 208 bytes protected
* [ unused red zone ] 224
*/
static int bpf_jit_stack_local(struct codegen_context *ctx)
{
@@ -126,6 +126,9 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
{
int i;
/* Instruction for trampoline attach */
EMIT(PPC_RAW_NOP());
#ifndef CONFIG_PPC_KERNEL_PCREL
if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2))
EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc)));
@@ -200,16 +203,26 @@ void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
EMIT(PPC_RAW_MR(_R3, bpf_to_ppc(BPF_REG_0)));
EMIT(PPC_RAW_BLR());
bpf_jit_build_fentry_stubs(image, ctx);
}
static int
bpf_jit_emit_func_call_hlp(u32 *image, u32 *fimage, struct codegen_context *ctx, u64 func)
int bpf_jit_emit_func_call_rel(u32 *image, u32 *fimage, struct codegen_context *ctx, u64 func)
{
unsigned long func_addr = func ? ppc_function_entry((void *)func) : 0;
long reladdr;
if (WARN_ON_ONCE(!kernel_text_address(func_addr)))
return -EINVAL;
/* bpf to bpf call, func is not known in the initial pass. Emit 5 nops as a placeholder */
if (!func) {
for (int i = 0; i < 5; i++)
EMIT(PPC_RAW_NOP());
/* elfv1 needs an additional instruction to load addr from descriptor */
if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1))
EMIT(PPC_RAW_NOP());
EMIT(PPC_RAW_MTCTR(_R12));
EMIT(PPC_RAW_BCTRL());
return 0;
}
#ifdef CONFIG_PPC_KERNEL_PCREL
reladdr = func_addr - local_paca->kernelbase;
@@ -266,7 +279,8 @@ bpf_jit_emit_func_call_hlp(u32 *image, u32 *fimage, struct codegen_context *ctx,
* We can clobber r2 since we get called through a
* function pointer (so caller will save/restore r2).
*/
EMIT(PPC_RAW_LD(_R2, bpf_to_ppc(TMP_REG_2), 8));
if (is_module_text_address(func_addr))
EMIT(PPC_RAW_LD(_R2, bpf_to_ppc(TMP_REG_2), 8));
} else {
PPC_LI64(_R12, func);
EMIT(PPC_RAW_MTCTR(_R12));
@@ -276,46 +290,14 @@ bpf_jit_emit_func_call_hlp(u32 *image, u32 *fimage, struct codegen_context *ctx,
* Load r2 with kernel TOC as kernel TOC is used if function address falls
* within core kernel text.
*/
EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc)));
if (is_module_text_address(func_addr))
EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc)));
}
#endif
return 0;
}
int bpf_jit_emit_func_call_rel(u32 *image, u32 *fimage, struct codegen_context *ctx, u64 func)
{
unsigned int i, ctx_idx = ctx->idx;
if (WARN_ON_ONCE(func && is_module_text_address(func)))
return -EINVAL;
/* skip past descriptor if elf v1 */
func += FUNCTION_DESCR_SIZE;
/* Load function address into r12 */
PPC_LI64(_R12, func);
/* For bpf-to-bpf function calls, the callee's address is unknown
* until the last extra pass. As seen above, we use PPC_LI64() to
* load the callee's address, but this may optimize the number of
* instructions required based on the nature of the address.
*
* Since we don't want the number of instructions emitted to increase,
* we pad the optimized PPC_LI64() call with NOPs to guarantee that
* we always have a five-instruction sequence, which is the maximum
* that PPC_LI64() can emit.
*/
if (!image)
for (i = ctx->idx - ctx_idx; i < 5; i++)
EMIT(PPC_RAW_NOP());
EMIT(PPC_RAW_MTCTR(_R12));
EMIT(PPC_RAW_BCTRL());
return 0;
}
static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
{
/*
@@ -326,7 +308,7 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
*/
int b2p_bpf_array = bpf_to_ppc(BPF_REG_2);
int b2p_index = bpf_to_ppc(BPF_REG_3);
int bpf_tailcall_prologue_size = 8;
int bpf_tailcall_prologue_size = 12;
if (!IS_ENABLED(CONFIG_PPC_KERNEL_PCREL) && IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2))
bpf_tailcall_prologue_size += 4; /* skip past the toc load */
@@ -1102,11 +1084,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
if (ret < 0)
return ret;
if (func_addr_fixed)
ret = bpf_jit_emit_func_call_hlp(image, fimage, ctx, func_addr);
else
ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr);
ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr);
if (ret)
return ret;


@@ -16,6 +16,8 @@ obj-$(CONFIG_FSL_EMB_PERF_EVENT_E500) += e500-pmu.o e6500-pmu.o
obj-$(CONFIG_HV_PERF_CTRS) += hv-24x7.o hv-gpci.o hv-common.o
obj-$(CONFIG_VPA_PMU) += vpa-pmu.o
obj-$(CONFIG_PPC_8xx) += 8xx-pmu.o
obj-$(CONFIG_PPC64) += $(obj64-y)

arch/powerpc/perf/vpa-pmu.c (new file, 203 lines)

@@ -0,0 +1,203 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Performance monitoring support for Virtual Processor Area (VPA) based counters
*
* Copyright (C) 2024 IBM Corporation
*/
#define pr_fmt(fmt) "vpa_pmu: " fmt
#include <linux/module.h>
#include <linux/perf_event.h>
#include <asm/kvm_ppc.h>
#include <asm/kvm_book3s_64.h>
#define MODULE_VERS "1.0"
#define MODULE_NAME "pseries_vpa_pmu"
#define EVENT(_name, _code) enum{_name = _code}
#define VPA_PMU_EVENT_VAR(_id) event_attr_##_id
#define VPA_PMU_EVENT_PTR(_id) (&event_attr_##_id.attr.attr)
static ssize_t vpa_pmu_events_sysfs_show(struct device *dev,
struct device_attribute *attr, char *page)
{
struct perf_pmu_events_attr *pmu_attr;
pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
return sprintf(page, "event=0x%02llx\n", pmu_attr->id);
}
#define VPA_PMU_EVENT_ATTR(_name, _id) \
PMU_EVENT_ATTR(_name, VPA_PMU_EVENT_VAR(_id), _id, \
vpa_pmu_events_sysfs_show)
EVENT(L1_TO_L2_CS_LAT, 0x1);
EVENT(L2_TO_L1_CS_LAT, 0x2);
EVENT(L2_RUNTIME_AGG, 0x3);
VPA_PMU_EVENT_ATTR(l1_to_l2_lat, L1_TO_L2_CS_LAT);
VPA_PMU_EVENT_ATTR(l2_to_l1_lat, L2_TO_L1_CS_LAT);
VPA_PMU_EVENT_ATTR(l2_runtime_agg, L2_RUNTIME_AGG);
static struct attribute *vpa_pmu_events_attr[] = {
VPA_PMU_EVENT_PTR(L1_TO_L2_CS_LAT),
VPA_PMU_EVENT_PTR(L2_TO_L1_CS_LAT),
VPA_PMU_EVENT_PTR(L2_RUNTIME_AGG),
NULL
};
static const struct attribute_group vpa_pmu_events_group = {
.name = "events",
.attrs = vpa_pmu_events_attr,
};
PMU_FORMAT_ATTR(event, "config:0-31");
static struct attribute *vpa_pmu_format_attr[] = {
&format_attr_event.attr,
NULL,
};
static struct attribute_group vpa_pmu_format_group = {
.name = "format",
.attrs = vpa_pmu_format_attr,
};
static const struct attribute_group *vpa_pmu_attr_groups[] = {
&vpa_pmu_events_group,
&vpa_pmu_format_group,
NULL
};
static int vpa_pmu_event_init(struct perf_event *event)
{
if (event->attr.type != event->pmu->type)
return -ENOENT;
/* Event sampling mode is not supported */
if (is_sampling_event(event))
return -EOPNOTSUPP;
/* no branch sampling */
if (has_branch_stack(event))
return -EOPNOTSUPP;
/* Invalid event code */
if ((event->attr.config <= 0) || (event->attr.config > 3))
return -EINVAL;
return 0;
}
static unsigned long get_counter_data(struct perf_event *event)
{
unsigned int config = event->attr.config;
u64 data;
switch (config) {
case L1_TO_L2_CS_LAT:
if (event->attach_state & PERF_ATTACH_TASK)
data = kvmhv_get_l1_to_l2_cs_time_vcpu();
else
data = kvmhv_get_l1_to_l2_cs_time();
break;
case L2_TO_L1_CS_LAT:
if (event->attach_state & PERF_ATTACH_TASK)
data = kvmhv_get_l2_to_l1_cs_time_vcpu();
else
data = kvmhv_get_l2_to_l1_cs_time();
break;
case L2_RUNTIME_AGG:
if (event->attach_state & PERF_ATTACH_TASK)
data = kvmhv_get_l2_runtime_agg_vcpu();
else
data = kvmhv_get_l2_runtime_agg();
break;
default:
data = 0;
break;
}
return data;
}
static int vpa_pmu_add(struct perf_event *event, int flags)
{
u64 data;
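/*
 * Enable L2 counter accumulation in the VPA for this cpu and snapshot the
 * current value, so that vpa_pmu_read() accumulates deltas from this point.
 */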
kvmhv_set_l2_counters_status(smp_processor_id(), true);
data = get_counter_data(event);
local64_set(&event->hw.prev_count, data);
return 0;
}
static void vpa_pmu_read(struct perf_event *event)
{
u64 prev_data, new_data, final_data;
prev_data = local64_read(&event->hw.prev_count);
new_data = get_counter_data(event);
final_data = new_data - prev_data;
local64_add(final_data, &event->count);
}
static void vpa_pmu_del(struct perf_event *event, int flags)
{
vpa_pmu_read(event);
/*
* Disable vpa counter accumulation
*/
kvmhv_set_l2_counters_status(smp_processor_id(), false);
}
static struct pmu vpa_pmu = {
.task_ctx_nr = perf_sw_context,
.name = "vpa_pmu",
.event_init = vpa_pmu_event_init,
.add = vpa_pmu_add,
.del = vpa_pmu_del,
.read = vpa_pmu_read,
.attr_groups = vpa_pmu_attr_groups,
.capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
};
static int __init pseries_vpa_pmu_init(void)
{
/*
* Of the current Linux on Power platforms, this
* driver is supported only on the PowerVM LPAR
* (L1) platform.
*
* Enabled Linux on Power Platforms
* ----------------------------------------
* [X] PowerVM LPAR (L1)
* [ ] KVM Guest On PowerVM KoP(L2)
* [ ] Baremetal(PowerNV)
* [ ] KVM Guest On PowerNV
*/
if (!firmware_has_feature(FW_FEATURE_LPAR) || is_kvm_guest())
return -ENODEV;
perf_pmu_register(&vpa_pmu, vpa_pmu.name, -1);
pr_info("Virtual Processor Area PMU registered.\n");
return 0;
}
static void __exit pseries_vpa_pmu_cleanup(void)
{
perf_pmu_unregister(&vpa_pmu);
pr_info("Virtual Processor Area PMU unregistered.\n");
}
module_init(pseries_vpa_pmu_init);
module_exit(pseries_vpa_pmu_cleanup);
MODULE_DESCRIPTION("Perf Driver for pSeries VPA pmu counter");
MODULE_AUTHOR("Kajol Jain <kjain@linux.ibm.com>");
MODULE_AUTHOR("Madhavan Srinivasan <maddy@linux.ibm.com>");
MODULE_LICENSE("GPL");


@@ -94,10 +94,8 @@ static int __init ppc4xx_parse_dma_ranges(struct pci_controller *hose,
struct resource *res)
{
u64 size;
const u32 *ranges;
int rlen;
int pna = of_n_addr_cells(hose->dn);
int np = pna + 5;
struct of_range_parser parser;
struct of_range range;
/* Default */
res->start = 0;
@@ -105,18 +103,15 @@ static int __init ppc4xx_parse_dma_ranges(struct pci_controller *hose,
res->end = size - 1;
res->flags = IORESOURCE_MEM | IORESOURCE_PREFETCH;
/* Get dma-ranges property */
ranges = of_get_property(hose->dn, "dma-ranges", &rlen);
if (ranges == NULL)
if (of_pci_dma_range_parser_init(&parser, hose->dn))
goto out;
/* Walk it */
while ((rlen -= np * 4) >= 0) {
u32 pci_space = ranges[0];
u64 pci_addr = of_read_number(ranges + 1, 2);
u64 cpu_addr = of_translate_dma_address(hose->dn, ranges + 3);
size = of_read_number(ranges + pna + 3, 2);
ranges += np;
for_each_of_range(&parser, &range) {
u32 pci_space = range.flags;
u64 pci_addr = range.bus_addr;
u64 cpu_addr = range.cpu_addr;
size = range.size;
if (cpu_addr == OF_BAD_ADDR || size == 0)
continue;


@@ -13,6 +13,7 @@
#include <generated/utsrelease.h>
#include <linux/pci.h>
#include <linux/of.h>
#include <linux/seq_file.h>
#include <asm/dma.h>
#include <asm/time.h>
#include <asm/machdep.h>


@@ -128,7 +128,7 @@ static int ep8248e_mdio_probe(struct platform_device *ofdev)
bus->name = "ep8248e-mdio-bitbang";
bus->parent = &ofdev->dev;
snprintf(bus->id, MII_BUS_ID_SIZE, "%x", res.start);
snprintf(bus->id, MII_BUS_ID_SIZE, "%pa", &res.start);
ret = of_mdiobus_register(bus, ofdev->dev.of_node);
if (ret)


@@ -27,15 +27,15 @@
static void __init km82xx_pic_init(void)
{
struct device_node *np = of_find_compatible_node(NULL, NULL,
"fsl,pq2-pic");
struct device_node *np __free(device_node);
np = of_find_compatible_node(NULL, NULL, "fsl,pq2-pic");
if (!np) {
pr_err("PIC init: can not find cpm-pic node\n");
return;
}
cpm2_pic_init(np);
of_node_put(np);
}
struct cpm_pin {


@@ -40,27 +40,6 @@ config BSC9132_QDS
and dual StarCore SC3850 DSP cores.
Manufacturer : Freescale Semiconductor, Inc
config MPC8540_ADS
bool "Freescale MPC8540 ADS"
select DEFAULT_UIMAGE
help
This option enables support for the MPC 8540 ADS board
config MPC8560_ADS
bool "Freescale MPC8560 ADS"
select DEFAULT_UIMAGE
select CPM2
help
This option enables support for the MPC 8560 ADS board
config MPC85xx_CDS
bool "Freescale MPC85xx CDS"
select DEFAULT_UIMAGE
select PPC_I8259
select HAVE_RAPIDIO
help
This option enables support for the MPC85xx CDS board
config MPC85xx_MDS
bool "Freescale MPC8568 MDS / MPC8569 MDS / P1021 MDS"
select DEFAULT_UIMAGE


@@ -7,7 +7,6 @@ source "arch/powerpc/platforms/chrp/Kconfig"
source "arch/powerpc/platforms/512x/Kconfig"
source "arch/powerpc/platforms/52xx/Kconfig"
source "arch/powerpc/platforms/powermac/Kconfig"
source "arch/powerpc/platforms/maple/Kconfig"
source "arch/powerpc/platforms/pasemi/Kconfig"
source "arch/powerpc/platforms/ps3/Kconfig"
source "arch/powerpc/platforms/cell/Kconfig"


@@ -14,7 +14,6 @@ obj-$(CONFIG_FSL_SOC_BOOKE) += 85xx/
obj-$(CONFIG_PPC_86xx) += 86xx/
obj-$(CONFIG_PPC_POWERNV) += powernv/
obj-$(CONFIG_PPC_PSERIES) += pseries/
obj-$(CONFIG_PPC_MAPLE) += maple/
obj-$(CONFIG_PPC_PASEMI) += pasemi/
obj-$(CONFIG_PPC_CELL) += cell/
obj-$(CONFIG_PPC_PS3) += ps3/


@@ -779,58 +779,41 @@ static int __init cell_iommu_init_disabled(void)
static u64 cell_iommu_get_fixed_address(struct device *dev)
{
u64 cpu_addr, size, best_size, dev_addr = OF_BAD_ADDR;
u64 best_size, dev_addr = OF_BAD_ADDR;
struct device_node *np;
const u32 *ranges = NULL;
int i, len, best, naddr, nsize, pna, range_size;
struct of_range_parser parser;
struct of_range range;
/* We can be called for platform devices that have no of_node */
np = of_node_get(dev->of_node);
if (!np)
goto out;
while (1) {
naddr = of_n_addr_cells(np);
nsize = of_n_size_cells(np);
np = of_get_next_parent(np);
if (!np)
break;
while ((np = of_get_next_parent(np))) {
if (of_pci_dma_range_parser_init(&parser, np))
continue;
ranges = of_get_property(np, "dma-ranges", &len);
/* Ignore empty ranges, they imply no translation required */
if (ranges && len > 0)
if (of_range_count(&parser))
break;
}
if (!ranges) {
if (!np) {
dev_dbg(dev, "iommu: no dma-ranges found\n");
goto out;
}
len /= sizeof(u32);
best_size = 0;
for_each_of_range(&parser, &range) {
if (!range.cpu_addr)
continue;
pna = of_n_addr_cells(np);
range_size = naddr + nsize + pna;
/* dma-ranges format:
* child addr : naddr cells
* parent addr : pna cells
* size : nsize cells
*/
for (i = 0, best = -1, best_size = 0; i < len; i += range_size) {
cpu_addr = of_translate_dma_address(np, ranges + i + naddr);
size = of_read_number(ranges + i + naddr + pna, nsize);
if (cpu_addr == 0 && size > best_size) {
best = i;
best_size = size;
if (range.size > best_size) {
best_size = range.size;
dev_addr = range.bus_addr;
}
}
if (best >= 0) {
dev_addr = of_read_number(ranges + best, naddr);
} else
if (!best_size)
dev_dbg(dev, "iommu: no suitable range found!\n");
out:


@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/initrd.h>
#include <linux/of_platform.h>
#include <linux/seq_file.h>
#include <asm/time.h>
#include <asm/mpic.h>


@@ -14,6 +14,7 @@
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/seq_file.h>
#include <asm/i8259.h>
#include <asm/pci-bridge.h>


@@ -1,19 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
config PPC_MAPLE
depends on PPC64 && PPC_BOOK3S && CPU_BIG_ENDIAN
bool "Maple 970FX Evaluation Board"
select FORCE_PCI
select MPIC
select U3_DART
select MPIC_U3_HT_IRQS
select GENERIC_TBSYNC
select PPC_UDBG_16550
select PPC_970_NAP
select PPC_64S_HASH_MMU
select PPC_HASH_MMU_NATIVE
select PPC_RTAS
select MMIO_NVRAM
select ATA_NONSTANDARD if ATA
help
This option enables support for the Maple 970FX Evaluation Board.
For more information, refer to <http://www.970eval.com>

View File

@@ -1,14 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Declarations for maple-specific code.
*
* Maple is the name of a PPC970 evaluation board.
*/
extern int maple_set_rtc_time(struct rtc_time *tm);
extern void maple_get_rtc_time(struct rtc_time *tm);
extern time64_t maple_get_boot_time(void);
extern void maple_pci_init(void);
extern void maple_pci_irq_fixup(struct pci_dev *dev);
extern int maple_pci_get_legacy_ide_irq(struct pci_dev *dev, int channel);
extern struct pci_controller_ops maple_pci_controller_ops;


@@ -1,672 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) 2004 Benjamin Herrenschmuidt (benh@kernel.crashing.org),
* IBM Corp.
*/
#undef DEBUG
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/delay.h>
#include <linux/string.h>
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/of_irq.h>
#include <asm/sections.h>
#include <asm/io.h>
#include <asm/pci-bridge.h>
#include <asm/machdep.h>
#include <asm/iommu.h>
#include <asm/ppc-pci.h>
#include <asm/isa-bridge.h>
#include "maple.h"
#ifdef DEBUG
#define DBG(x...) printk(x)
#else
#define DBG(x...)
#endif
static struct pci_controller *u3_agp, *u3_ht, *u4_pcie;
static int __init fixup_one_level_bus_range(struct device_node *node, int higher)
{
for (; node; node = node->sibling) {
const int *bus_range;
const unsigned int *class_code;
int len;
/* For PCI<->PCI bridges or CardBus bridges, we go down */
class_code = of_get_property(node, "class-code", NULL);
if (!class_code || ((*class_code >> 8) != PCI_CLASS_BRIDGE_PCI &&
(*class_code >> 8) != PCI_CLASS_BRIDGE_CARDBUS))
continue;
bus_range = of_get_property(node, "bus-range", &len);
if (bus_range != NULL && len > 2 * sizeof(int)) {
if (bus_range[1] > higher)
higher = bus_range[1];
}
higher = fixup_one_level_bus_range(node->child, higher);
}
return higher;
}
/* This routine fixes the "bus-range" property of all bridges in the
* system since they tend to have their "last" member wrong on macs
*
* Note that the bus numbers manipulated here are OF bus numbers, they
* are not Linux bus numbers.
*/
static void __init fixup_bus_range(struct device_node *bridge)
{
int *bus_range;
struct property *prop;
int len;
/* Lookup the "bus-range" property for the hose */
prop = of_find_property(bridge, "bus-range", &len);
if (prop == NULL || prop->value == NULL || len < 2 * sizeof(int)) {
printk(KERN_WARNING "Can't get bus-range for %pOF\n",
bridge);
return;
}
bus_range = prop->value;
bus_range[1] = fixup_one_level_bus_range(bridge->child, bus_range[1]);
}
static unsigned long u3_agp_cfa0(u8 devfn, u8 off)
{
return (1 << (unsigned long)PCI_SLOT(devfn)) |
((unsigned long)PCI_FUNC(devfn) << 8) |
((unsigned long)off & 0xFCUL);
}
static unsigned long u3_agp_cfa1(u8 bus, u8 devfn, u8 off)
{
return ((unsigned long)bus << 16) |
((unsigned long)devfn << 8) |
((unsigned long)off & 0xFCUL) |
1UL;
}
static volatile void __iomem *u3_agp_cfg_access(struct pci_controller* hose,
u8 bus, u8 dev_fn, u8 offset)
{
unsigned int caddr;
if (bus == hose->first_busno) {
if (dev_fn < (11 << 3))
return NULL;
caddr = u3_agp_cfa0(dev_fn, offset);
} else
caddr = u3_agp_cfa1(bus, dev_fn, offset);
/* Uninorth will return garbage if we don't read back the value ! */
do {
out_le32(hose->cfg_addr, caddr);
} while (in_le32(hose->cfg_addr) != caddr);
offset &= 0x07;
return hose->cfg_data + offset;
}
static int u3_agp_read_config(struct pci_bus *bus, unsigned int devfn,
int offset, int len, u32 *val)
{
struct pci_controller *hose;
volatile void __iomem *addr;
hose = pci_bus_to_host(bus);
if (hose == NULL)
return PCIBIOS_DEVICE_NOT_FOUND;
addr = u3_agp_cfg_access(hose, bus->number, devfn, offset);
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
/*
* Note: the caller has already checked that offset is
* suitably aligned and that len is 1, 2 or 4.
*/
switch (len) {
case 1:
*val = in_8(addr);
break;
case 2:
*val = in_le16(addr);
break;
default:
*val = in_le32(addr);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static int u3_agp_write_config(struct pci_bus *bus, unsigned int devfn,
int offset, int len, u32 val)
{
struct pci_controller *hose;
volatile void __iomem *addr;
hose = pci_bus_to_host(bus);
if (hose == NULL)
return PCIBIOS_DEVICE_NOT_FOUND;
addr = u3_agp_cfg_access(hose, bus->number, devfn, offset);
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
/*
* Note: the caller has already checked that offset is
* suitably aligned and that len is 1, 2 or 4.
*/
switch (len) {
case 1:
out_8(addr, val);
break;
case 2:
out_le16(addr, val);
break;
default:
out_le32(addr, val);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static struct pci_ops u3_agp_pci_ops =
{
.read = u3_agp_read_config,
.write = u3_agp_write_config,
};
static unsigned long u3_ht_cfa0(u8 devfn, u8 off)
{
return (devfn << 8) | off;
}
static unsigned long u3_ht_cfa1(u8 bus, u8 devfn, u8 off)
{
return u3_ht_cfa0(devfn, off) + (bus << 16) + 0x01000000UL;
}
static volatile void __iomem *u3_ht_cfg_access(struct pci_controller* hose,
u8 bus, u8 devfn, u8 offset)
{
if (bus == hose->first_busno) {
if (PCI_SLOT(devfn) == 0)
return NULL;
return hose->cfg_data + u3_ht_cfa0(devfn, offset);
} else
return hose->cfg_data + u3_ht_cfa1(bus, devfn, offset);
}
static int u3_ht_root_read_config(struct pci_controller *hose, u8 offset,
int len, u32 *val)
{
volatile void __iomem *addr;
addr = hose->cfg_addr;
addr += ((offset & ~3) << 2) + (4 - len - (offset & 3));
switch (len) {
case 1:
*val = in_8(addr);
break;
case 2:
*val = in_be16(addr);
break;
default:
*val = in_be32(addr);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static int u3_ht_root_write_config(struct pci_controller *hose, u8 offset,
int len, u32 val)
{
volatile void __iomem *addr;
addr = hose->cfg_addr + ((offset & ~3) << 2) + (4 - len - (offset & 3));
if (offset >= PCI_BASE_ADDRESS_0 && offset < PCI_CAPABILITY_LIST)
return PCIBIOS_SUCCESSFUL;
switch (len) {
case 1:
out_8(addr, val);
break;
case 2:
out_be16(addr, val);
break;
default:
out_be32(addr, val);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static int u3_ht_read_config(struct pci_bus *bus, unsigned int devfn,
int offset, int len, u32 *val)
{
struct pci_controller *hose;
volatile void __iomem *addr;
hose = pci_bus_to_host(bus);
if (hose == NULL)
return PCIBIOS_DEVICE_NOT_FOUND;
if (bus->number == hose->first_busno && devfn == PCI_DEVFN(0, 0))
return u3_ht_root_read_config(hose, offset, len, val);
if (offset > 0xff)
return PCIBIOS_BAD_REGISTER_NUMBER;
addr = u3_ht_cfg_access(hose, bus->number, devfn, offset);
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
/*
* Note: the caller has already checked that offset is
* suitably aligned and that len is 1, 2 or 4.
*/
switch (len) {
case 1:
*val = in_8(addr);
break;
case 2:
*val = in_le16(addr);
break;
default:
*val = in_le32(addr);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static int u3_ht_write_config(struct pci_bus *bus, unsigned int devfn,
int offset, int len, u32 val)
{
struct pci_controller *hose;
volatile void __iomem *addr;
hose = pci_bus_to_host(bus);
if (hose == NULL)
return PCIBIOS_DEVICE_NOT_FOUND;
if (bus->number == hose->first_busno && devfn == PCI_DEVFN(0, 0))
return u3_ht_root_write_config(hose, offset, len, val);
if (offset > 0xff)
return PCIBIOS_BAD_REGISTER_NUMBER;
addr = u3_ht_cfg_access(hose, bus->number, devfn, offset);
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
/*
* Note: the caller has already checked that offset is
* suitably aligned and that len is 1, 2 or 4.
*/
switch (len) {
case 1:
out_8(addr, val);
break;
case 2:
out_le16(addr, val);
break;
default:
out_le32(addr, val);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static struct pci_ops u3_ht_pci_ops =
{
.read = u3_ht_read_config,
.write = u3_ht_write_config,
};
static unsigned int u4_pcie_cfa0(unsigned int devfn, unsigned int off)
{
return (1 << PCI_SLOT(devfn)) |
(PCI_FUNC(devfn) << 8) |
((off >> 8) << 28) |
(off & 0xfcu);
}
static unsigned int u4_pcie_cfa1(unsigned int bus, unsigned int devfn,
unsigned int off)
{
return (bus << 16) |
(devfn << 8) |
((off >> 8) << 28) |
(off & 0xfcu) | 1u;
}
static volatile void __iomem *u4_pcie_cfg_access(struct pci_controller* hose,
u8 bus, u8 dev_fn, int offset)
{
unsigned int caddr;
if (bus == hose->first_busno)
caddr = u4_pcie_cfa0(dev_fn, offset);
else
caddr = u4_pcie_cfa1(bus, dev_fn, offset);
/* Uninorth will return garbage if we don't read back the value ! */
do {
out_le32(hose->cfg_addr, caddr);
} while (in_le32(hose->cfg_addr) != caddr);
offset &= 0x03;
return hose->cfg_data + offset;
}
static int u4_pcie_read_config(struct pci_bus *bus, unsigned int devfn,
int offset, int len, u32 *val)
{
struct pci_controller *hose;
volatile void __iomem *addr;
hose = pci_bus_to_host(bus);
if (hose == NULL)
return PCIBIOS_DEVICE_NOT_FOUND;
if (offset >= 0x1000)
return PCIBIOS_BAD_REGISTER_NUMBER;
addr = u4_pcie_cfg_access(hose, bus->number, devfn, offset);
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
/*
* Note: the caller has already checked that offset is
* suitably aligned and that len is 1, 2 or 4.
*/
switch (len) {
case 1:
*val = in_8(addr);
break;
case 2:
*val = in_le16(addr);
break;
default:
*val = in_le32(addr);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static int u4_pcie_write_config(struct pci_bus *bus, unsigned int devfn,
int offset, int len, u32 val)
{
struct pci_controller *hose;
volatile void __iomem *addr;
hose = pci_bus_to_host(bus);
if (hose == NULL)
return PCIBIOS_DEVICE_NOT_FOUND;
if (offset >= 0x1000)
return PCIBIOS_BAD_REGISTER_NUMBER;
addr = u4_pcie_cfg_access(hose, bus->number, devfn, offset);
if (!addr)
return PCIBIOS_DEVICE_NOT_FOUND;
/*
* Note: the caller has already checked that offset is
* suitably aligned and that len is 1, 2 or 4.
*/
switch (len) {
case 1:
out_8(addr, val);
break;
case 2:
out_le16(addr, val);
break;
default:
out_le32(addr, val);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static struct pci_ops u4_pcie_pci_ops =
{
.read = u4_pcie_read_config,
.write = u4_pcie_write_config,
};
static void __init setup_u3_agp(struct pci_controller* hose)
{
/* On G5, we move AGP up to high bus number so we don't need
* to reassign bus numbers for HT. If we ever have P2P bridges
* on AGP, we'll have to move pci_assign_all_buses to the
* pci_controller structure so we enable it for AGP and not for
* HT childs.
* We hard code the address because of the different size of
* the reg address cell, we shall fix that by killing struct
* reg_property and using some accessor functions instead
*/
hose->first_busno = 0xf0;
hose->last_busno = 0xff;
hose->ops = &u3_agp_pci_ops;
hose->cfg_addr = ioremap(0xf0000000 + 0x800000, 0x1000);
hose->cfg_data = ioremap(0xf0000000 + 0xc00000, 0x1000);
u3_agp = hose;
}
static void __init setup_u4_pcie(struct pci_controller* hose)
{
/* We currently only implement the "non-atomic" config space, to
* be optimised later.
*/
hose->ops = &u4_pcie_pci_ops;
hose->cfg_addr = ioremap(0xf0000000 + 0x800000, 0x1000);
hose->cfg_data = ioremap(0xf0000000 + 0xc00000, 0x1000);
u4_pcie = hose;
}
static void __init setup_u3_ht(struct pci_controller* hose)
{
hose->ops = &u3_ht_pci_ops;
/* We hard code the address because of the different size of
* the reg address cell, we shall fix that by killing struct
* reg_property and using some accessor functions instead
*/
hose->cfg_data = ioremap(0xf2000000, 0x02000000);
hose->cfg_addr = ioremap(0xf8070000, 0x1000);
hose->first_busno = 0;
hose->last_busno = 0xef;
u3_ht = hose;
}
static int __init maple_add_bridge(struct device_node *dev)
{
int len;
struct pci_controller *hose;
char* disp_name;
const int *bus_range;
int primary = 1;
DBG("Adding PCI host bridge %pOF\n", dev);
bus_range = of_get_property(dev, "bus-range", &len);
if (bus_range == NULL || len < 2 * sizeof(int)) {
printk(KERN_WARNING "Can't get bus-range for %pOF, assume bus 0\n",
dev);
}
hose = pcibios_alloc_controller(dev);
if (hose == NULL)
return -ENOMEM;
hose->first_busno = bus_range ? bus_range[0] : 0;
hose->last_busno = bus_range ? bus_range[1] : 0xff;
hose->controller_ops = maple_pci_controller_ops;
disp_name = NULL;
if (of_device_is_compatible(dev, "u3-agp")) {
setup_u3_agp(hose);
disp_name = "U3-AGP";
primary = 0;
} else if (of_device_is_compatible(dev, "u3-ht")) {
setup_u3_ht(hose);
disp_name = "U3-HT";
primary = 1;
} else if (of_device_is_compatible(dev, "u4-pcie")) {
setup_u4_pcie(hose);
disp_name = "U4-PCIE";
primary = 0;
}
printk(KERN_INFO "Found %s PCI host bridge. Firmware bus number: %d->%d\n",
disp_name, hose->first_busno, hose->last_busno);
/* Interpret the "ranges" property */
/* This also maps the I/O region and sets isa_io/mem_base */
pci_process_bridge_OF_ranges(hose, dev, primary);
/* Fixup "bus-range" OF property */
fixup_bus_range(dev);
/* Check for legacy IOs */
isa_bridge_find_early(hose);
/* create pci_dn's for DT nodes under this PHB */
pci_devs_phb_init_dynamic(hose);
return 0;
}
void maple_pci_irq_fixup(struct pci_dev *dev)
{
DBG(" -> maple_pci_irq_fixup\n");
/* Fixup IRQ for PCIe host */
if (u4_pcie != NULL && dev->bus->number == 0 &&
pci_bus_to_host(dev->bus) == u4_pcie) {
printk(KERN_DEBUG "Fixup U4 PCIe IRQ\n");
dev->irq = irq_create_mapping(NULL, 1);
if (dev->irq)
irq_set_irq_type(dev->irq, IRQ_TYPE_LEVEL_LOW);
}
/* Hide AMD8111 IDE interrupt when in legacy mode so
* the driver calls pci_get_legacy_ide_irq()
*/
if (dev->vendor == PCI_VENDOR_ID_AMD &&
dev->device == PCI_DEVICE_ID_AMD_8111_IDE &&
(dev->class & 5) != 5) {
dev->irq = 0;
}
DBG(" <- maple_pci_irq_fixup\n");
}
static int maple_pci_root_bridge_prepare(struct pci_host_bridge *bridge)
{
struct pci_controller *hose = pci_bus_to_host(bridge->bus);
struct device_node *np, *child;
if (hose != u3_agp)
return 0;
/* Fixup the PCI<->OF mapping for U3 AGP due to bus renumbering. We
* assume there is no P2P bridge on the AGP bus, which should be a
* safe assumptions hopefully.
*/
np = hose->dn;
PCI_DN(np)->busno = 0xf0;
for_each_child_of_node(np, child)
PCI_DN(child)->busno = 0xf0;
return 0;
}
void __init maple_pci_init(void)
{
struct device_node *np, *root;
struct device_node *ht = NULL;
/* Probe root PCI hosts, that is on U3 the AGP host and the
* HyperTransport host. That one is actually "kept" around
* and actually added last as its resource management relies
* on the AGP resources to have been setup first
*/
root = of_find_node_by_path("/");
if (root == NULL) {
printk(KERN_CRIT "maple_find_bridges: can't find root of device tree\n");
return;
}
for_each_child_of_node(root, np) {
if (!of_node_is_type(np, "pci") && !of_node_is_type(np, "ht"))
continue;
if ((of_device_is_compatible(np, "u4-pcie") ||
of_device_is_compatible(np, "u3-agp")) &&
maple_add_bridge(np) == 0)
of_node_get(np);
if (of_device_is_compatible(np, "u3-ht")) {
of_node_get(np);
ht = np;
}
}
of_node_put(root);
/* Now setup the HyperTransport host if we found any
*/
if (ht && maple_add_bridge(ht) != 0)
of_node_put(ht);
ppc_md.pcibios_root_bridge_prepare = maple_pci_root_bridge_prepare;
/* Tell pci.c to not change any resource allocations. */
pci_add_flags(PCI_PROBE_ONLY);
}
int maple_pci_get_legacy_ide_irq(struct pci_dev *pdev, int channel)
{
struct device_node *np;
unsigned int defirq = channel ? 15 : 14;
unsigned int irq;
if (pdev->vendor != PCI_VENDOR_ID_AMD ||
pdev->device != PCI_DEVICE_ID_AMD_8111_IDE)
return defirq;
np = pci_device_to_OF_node(pdev);
if (np == NULL) {
printk("Failed to locate OF node for IDE %s\n",
pci_name(pdev));
return defirq;
}
irq = irq_of_parse_and_map(np, channel & 0x1);
if (!irq) {
printk("Failed to map onboard IDE interrupt for channel %d\n",
channel);
return defirq;
}
return irq;
}
static void quirk_ipr_msi(struct pci_dev *dev)
{
/* Something prevents MSIs from the IPR from working on Bimini,
* and the driver has no smarts to recover. So disable MSI
* on it for now. */
if (machine_is(maple)) {
dev->no_msi = 1;
dev_info(&dev->dev, "Quirk disabled MSI\n");
}
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_OBSIDIAN,
quirk_ipr_msi);
struct pci_controller_ops maple_pci_controller_ops = {
};


@@ -1,363 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Maple (970 eval board) setup code
*
* (c) Copyright 2004 Benjamin Herrenschmidt (benh@kernel.crashing.org),
* IBM Corp.
*/
#undef DEBUG
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/mm.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>
#include <linux/user.h>
#include <linux/tty.h>
#include <linux/string.h>
#include <linux/delay.h>
#include <linux/ioport.h>
#include <linux/major.h>
#include <linux/initrd.h>
#include <linux/vt_kern.h>
#include <linux/console.h>
#include <linux/pci.h>
#include <linux/adb.h>
#include <linux/cuda.h>
#include <linux/pmu.h>
#include <linux/irq.h>
#include <linux/seq_file.h>
#include <linux/root_dev.h>
#include <linux/serial.h>
#include <linux/smp.h>
#include <linux/bitops.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/memblock.h>
#include <asm/processor.h>
#include <asm/sections.h>
#include <asm/io.h>
#include <asm/pci-bridge.h>
#include <asm/iommu.h>
#include <asm/machdep.h>
#include <asm/dma.h>
#include <asm/cputable.h>
#include <asm/time.h>
#include <asm/mpic.h>
#include <asm/rtas.h>
#include <asm/udbg.h>
#include <asm/nvram.h>
#include "maple.h"
#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
#else
#define DBG(fmt...)
#endif
static unsigned long maple_find_nvram_base(void)
{
struct device_node *rtcs;
unsigned long result = 0;
/* find NVRAM device */
rtcs = of_find_compatible_node(NULL, "nvram", "AMD8111");
if (rtcs) {
struct resource r;
if (of_address_to_resource(rtcs, 0, &r)) {
printk(KERN_EMERG "Maple: Unable to translate NVRAM"
" address\n");
goto bail;
}
if (!(r.flags & IORESOURCE_IO)) {
printk(KERN_EMERG "Maple: NVRAM address isn't PIO!\n");
goto bail;
}
result = r.start;
} else
printk(KERN_EMERG "Maple: Unable to find NVRAM\n");
bail:
of_node_put(rtcs);
return result;
}
static void __noreturn maple_restart(char *cmd)
{
unsigned int maple_nvram_base;
const unsigned int *maple_nvram_offset, *maple_nvram_command;
struct device_node *sp;
maple_nvram_base = maple_find_nvram_base();
if (maple_nvram_base == 0)
goto fail;
/* find service processor device */
sp = of_find_node_by_name(NULL, "service-processor");
if (!sp) {
printk(KERN_EMERG "Maple: Unable to find Service Processor\n");
goto fail;
}
maple_nvram_offset = of_get_property(sp, "restart-addr", NULL);
maple_nvram_command = of_get_property(sp, "restart-value", NULL);
of_node_put(sp);
/* send command */
outb_p(*maple_nvram_command, maple_nvram_base + *maple_nvram_offset);
for (;;) ;
fail:
printk(KERN_EMERG "Maple: Manual Restart Required\n");
for (;;) ;
}
static void __noreturn maple_power_off(void)
{
unsigned int maple_nvram_base;
const unsigned int *maple_nvram_offset, *maple_nvram_command;
struct device_node *sp;
maple_nvram_base = maple_find_nvram_base();
if (maple_nvram_base == 0)
goto fail;
/* find service processor device */
sp = of_find_node_by_name(NULL, "service-processor");
if (!sp) {
printk(KERN_EMERG "Maple: Unable to find Service Processor\n");
goto fail;
}
maple_nvram_offset = of_get_property(sp, "power-off-addr", NULL);
maple_nvram_command = of_get_property(sp, "power-off-value", NULL);
of_node_put(sp);
/* send command */
outb_p(*maple_nvram_command, maple_nvram_base + *maple_nvram_offset);
for (;;) ;
fail:
printk(KERN_EMERG "Maple: Manual Power-Down Required\n");
for (;;) ;
}
static void __noreturn maple_halt(void)
{
maple_power_off();
}
#ifdef CONFIG_SMP
static struct smp_ops_t maple_smp_ops = {
.probe = smp_mpic_probe,
.message_pass = smp_mpic_message_pass,
.kick_cpu = smp_generic_kick_cpu,
.setup_cpu = smp_mpic_setup_cpu,
.give_timebase = smp_generic_give_timebase,
.take_timebase = smp_generic_take_timebase,
};
#endif /* CONFIG_SMP */
static void __init maple_use_rtas_reboot_and_halt_if_present(void)
{
if (rtas_function_implemented(RTAS_FN_SYSTEM_REBOOT) &&
rtas_function_implemented(RTAS_FN_POWER_OFF)) {
ppc_md.restart = rtas_restart;
pm_power_off = rtas_power_off;
ppc_md.halt = rtas_halt;
}
}
static void __init maple_setup_arch(void)
{
/* init to some ~sane value until calibrate_delay() runs */
loops_per_jiffy = 50000000;
/* Setup SMP callback */
#ifdef CONFIG_SMP
smp_ops = &maple_smp_ops;
#endif
maple_use_rtas_reboot_and_halt_if_present();
printk(KERN_DEBUG "Using native/NAP idle loop\n");
mmio_nvram_init();
}
/*
* This is almost identical to pSeries and CHRP. We need to make that
* code generic at one point, with appropriate bits in the device-tree to
* identify the presence of an HT APIC
*/
static void __init maple_init_IRQ(void)
{
struct device_node *root, *np, *mpic_node = NULL;
const unsigned int *opprop;
unsigned long openpic_addr = 0;
int naddr, n, i, opplen, has_isus = 0;
struct mpic *mpic;
unsigned int flags = 0;
/* Locate MPIC in the device-tree. Note that there is a bug
* in Maple device-tree where the type of the controller is
* open-pic and not interrupt-controller
*/
for_each_node_by_type(np, "interrupt-controller")
if (of_device_is_compatible(np, "open-pic")) {
mpic_node = np;
break;
}
if (mpic_node == NULL)
for_each_node_by_type(np, "open-pic") {
mpic_node = np;
break;
}
if (mpic_node == NULL) {
printk(KERN_ERR
"Failed to locate the MPIC interrupt controller\n");
return;
}
/* Find address list in /platform-open-pic */
root = of_find_node_by_path("/");
naddr = of_n_addr_cells(root);
opprop = of_get_property(root, "platform-open-pic", &opplen);
if (opprop) {
openpic_addr = of_read_number(opprop, naddr);
has_isus = (opplen > naddr);
printk(KERN_DEBUG "OpenPIC addr: %lx, has ISUs: %d\n",
openpic_addr, has_isus);
}
BUG_ON(openpic_addr == 0);
/* Check for a big endian MPIC */
if (of_property_read_bool(np, "big-endian"))
flags |= MPIC_BIG_ENDIAN;
/* XXX Maple specific bits */
flags |= MPIC_U3_HT_IRQS;
/* All U3/U4 are big-endian, older SLOF firmware doesn't encode this */
flags |= MPIC_BIG_ENDIAN;
/* Setup the openpic driver. More device-tree junks, we hard code no
* ISUs for now. I'll have to revisit some stuffs with the folks doing
* the firmware for those
*/
mpic = mpic_alloc(mpic_node, openpic_addr, flags,
/*has_isus ? 16 :*/ 0, 0, " MPIC ");
BUG_ON(mpic == NULL);
/* Add ISUs */
opplen /= sizeof(u32);
for (n = 0, i = naddr; i < opplen; i += naddr, n++) {
unsigned long isuaddr = of_read_number(opprop + i, naddr);
mpic_assign_isu(mpic, n, isuaddr);
}
/* All ISUs are setup, complete initialization */
mpic_init(mpic);
ppc_md.get_irq = mpic_get_irq;
of_node_put(mpic_node);
of_node_put(root);
}
static void __init maple_progress(char *s, unsigned short hex)
{
printk("*** %04x : %s\n", hex, s ? s : "");
}
/*
* Called very early, MMU is off, device-tree isn't unflattened
*/
static int __init maple_probe(void)
{
if (!of_machine_is_compatible("Momentum,Maple") &&
!of_machine_is_compatible("Momentum,Apache"))
return 0;
pm_power_off = maple_power_off;
iommu_init_early_dart(&maple_pci_controller_ops);
return 1;
}
#ifdef CONFIG_EDAC
/*
* Register a platform device for CPC925 memory controller on
* all boards with U3H (CPC925) bridge.
*/
static int __init maple_cpc925_edac_setup(void)
{
struct platform_device *pdev;
struct device_node *np = NULL;
struct resource r;
int ret;
volatile void __iomem *mem;
u32 rev;
np = of_find_node_by_type(NULL, "memory-controller");
if (!np) {
printk(KERN_ERR "%s: Unable to find memory-controller node\n",
__func__);
return -ENODEV;
}
ret = of_address_to_resource(np, 0, &r);
of_node_put(np);
if (ret < 0) {
printk(KERN_ERR "%s: Unable to get memory-controller reg\n",
__func__);
return -ENODEV;
}
mem = ioremap(r.start, resource_size(&r));
if (!mem) {
printk(KERN_ERR "%s: Unable to map memory-controller memory\n",
__func__);
return -ENOMEM;
}
rev = __raw_readl(mem);
iounmap(mem);
if (rev < 0x34 || rev > 0x3f) { /* U3H */
printk(KERN_ERR "%s: Non-CPC925(U3H) bridge revision: %02x\n",
__func__, rev);
return 0;
}
pdev = platform_device_register_simple("cpc925_edac", 0, &r, 1);
if (IS_ERR(pdev))
return PTR_ERR(pdev);
printk(KERN_INFO "%s: CPC925 platform device created\n", __func__);
return 0;
}
machine_device_initcall(maple, maple_cpc925_edac_setup);
#endif

define_machine(maple) {
	.name			= "Maple",
	.probe			= maple_probe,
	.setup_arch		= maple_setup_arch,
	.discover_phbs		= maple_pci_init,
	.init_IRQ		= maple_init_IRQ,
	.pci_irq_fixup		= maple_pci_irq_fixup,
	.pci_get_legacy_ide_irq	= maple_pci_get_legacy_ide_irq,
	.restart		= maple_restart,
	.halt			= maple_halt,
	.get_boot_time		= maple_get_boot_time,
	.set_rtc_time		= maple_set_rtc_time,
	.get_rtc_time		= maple_get_rtc_time,
	.progress		= maple_progress,
	.power_save		= power4_idle,
};

@@ -1,170 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * (c) Copyright 2004 Benjamin Herrenschmidt (benh@kernel.crashing.org),
 *                    IBM Corp.
 */

#undef DEBUG

#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/param.h>
#include <linux/string.h>
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/time.h>
#include <linux/adb.h>
#include <linux/pmu.h>
#include <linux/interrupt.h>
#include <linux/mc146818rtc.h>
#include <linux/bcd.h>
#include <linux/of_address.h>

#include <asm/sections.h>
#include <asm/io.h>
#include <asm/machdep.h>
#include <asm/time.h>

#include "maple.h"

#ifdef DEBUG
#define DBG(x...) printk(x)
#else
#define DBG(x...)
#endif

static int maple_rtc_addr;

static int maple_clock_read(int addr)
{
	outb_p(addr, maple_rtc_addr);
	return inb_p(maple_rtc_addr+1);
}

static void maple_clock_write(unsigned long val, int addr)
{
	outb_p(addr, maple_rtc_addr);
	outb_p(val, maple_rtc_addr+1);
}

void maple_get_rtc_time(struct rtc_time *tm)
{
	do {
		tm->tm_sec = maple_clock_read(RTC_SECONDS);
		tm->tm_min = maple_clock_read(RTC_MINUTES);
		tm->tm_hour = maple_clock_read(RTC_HOURS);
		tm->tm_mday = maple_clock_read(RTC_DAY_OF_MONTH);
		tm->tm_mon = maple_clock_read(RTC_MONTH);
		tm->tm_year = maple_clock_read(RTC_YEAR);
	} while (tm->tm_sec != maple_clock_read(RTC_SECONDS));

	if (!(maple_clock_read(RTC_CONTROL) & RTC_DM_BINARY)
	    || RTC_ALWAYS_BCD) {
		tm->tm_sec = bcd2bin(tm->tm_sec);
		tm->tm_min = bcd2bin(tm->tm_min);
		tm->tm_hour = bcd2bin(tm->tm_hour);
		tm->tm_mday = bcd2bin(tm->tm_mday);
		tm->tm_mon = bcd2bin(tm->tm_mon);
		tm->tm_year = bcd2bin(tm->tm_year);
	}

	if ((tm->tm_year + 1900) < 1970)
		tm->tm_year += 100;

	tm->tm_wday = -1;
}

int maple_set_rtc_time(struct rtc_time *tm)
{
	unsigned char save_control, save_freq_select;
	int sec, min, hour, mon, mday, year;

	spin_lock(&rtc_lock);

	save_control = maple_clock_read(RTC_CONTROL); /* tell the clock it's being set */
	maple_clock_write((save_control|RTC_SET), RTC_CONTROL);

	save_freq_select = maple_clock_read(RTC_FREQ_SELECT); /* stop and reset prescaler */
	maple_clock_write((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);

	sec = tm->tm_sec;
	min = tm->tm_min;
	hour = tm->tm_hour;
	mon = tm->tm_mon;
	mday = tm->tm_mday;
	year = tm->tm_year;

	if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
		sec = bin2bcd(sec);
		min = bin2bcd(min);
		hour = bin2bcd(hour);
		mon = bin2bcd(mon);
		mday = bin2bcd(mday);
		year = bin2bcd(year);
	}

	maple_clock_write(sec, RTC_SECONDS);
	maple_clock_write(min, RTC_MINUTES);
	maple_clock_write(hour, RTC_HOURS);
	maple_clock_write(mon, RTC_MONTH);
	maple_clock_write(mday, RTC_DAY_OF_MONTH);
	maple_clock_write(year, RTC_YEAR);

	/* The following flags have to be released exactly in this order,
	 * otherwise the DS12887 (popular MC146818A clone with integrated
	 * battery and quartz) will not reset the oscillator and will not
	 * update precisely 500 ms later. You won't find this mentioned in
	 * the Dallas Semiconductor data sheets, but who believes data
	 * sheets anyway ... -- Markus Kuhn
	 */
	maple_clock_write(save_control, RTC_CONTROL);
	maple_clock_write(save_freq_select, RTC_FREQ_SELECT);

	spin_unlock(&rtc_lock);

	return 0;
}

static struct resource rtc_iores = {
	.name = "rtc",
	.flags = IORESOURCE_IO | IORESOURCE_BUSY,
};

time64_t __init maple_get_boot_time(void)
{
	struct rtc_time tm;
	struct device_node *rtcs;

	rtcs = of_find_compatible_node(NULL, "rtc", "pnpPNP,b00");
	if (rtcs) {
		struct resource r;

		if (of_address_to_resource(rtcs, 0, &r)) {
			printk(KERN_EMERG "Maple: Unable to translate RTC"
			       " address\n");
			goto bail;
		}
		if (!(r.flags & IORESOURCE_IO)) {
			printk(KERN_EMERG "Maple: RTC address isn't PIO!\n");
			goto bail;
		}
		maple_rtc_addr = r.start;
		printk(KERN_INFO "Maple: Found RTC at IO 0x%x\n",
		       maple_rtc_addr);
	}
 bail:
	of_node_put(rtcs);
	if (maple_rtc_addr == 0) {
		maple_rtc_addr = RTC_PORT(0); /* legacy address */
		printk(KERN_INFO "Maple: No device node for RTC, assuming "
		       "legacy address (0x%x)\n", maple_rtc_addr);
	}

	rtc_iores.start = maple_rtc_addr;
	rtc_iores.end = maple_rtc_addr + 7;
	request_resource(&ioport_resource, &rtc_iores);

	maple_get_rtc_time(&tm);
	return rtc_tm_to_time64(&tm);
}

@@ -57,18 +57,10 @@ struct backlight_device *pmac_backlight;
 int pmac_has_backlight_type(const char *type)
 {
 	struct device_node* bk_node = of_find_node_by_name(NULL, "backlight");
+	int i = of_property_match_string(bk_node, "backlight-control", type);
 
-	if (bk_node) {
-		const char *prop = of_get_property(bk_node,
-				"backlight-control", NULL);
-		if (prop && strncmp(prop, type, strlen(type)) == 0) {
-			of_node_put(bk_node);
-			return 1;
-		}
-		of_node_put(bk_node);
-	}
-
-	return 0;
+	of_node_put(bk_node);
+	return i >= 0;
 }
 
 static void pmac_backlight_key_worker(struct work_struct *work)

@@ -178,7 +178,7 @@ static int __init ps3_setup_gelic_device(
 	return result;
 }
 
-static int __ref ps3_setup_uhc_device(
+static int __init ps3_setup_uhc_device(
 	const struct ps3_repository_device *repo, enum ps3_match_id match_id,
 	enum ps3_interrupt_type interrupt_type, enum ps3_reg_type reg_type)
 {

@@ -378,9 +378,9 @@ int ps3_send_event_locally(unsigned int virq)
 
 /**
  * ps3_sb_event_receive_port_setup - Setup a system bus event receive port.
+ * @dev: The system bus device instance.
  * @cpu: enum ps3_cpu_binding indicating the cpu the interrupt should be
  * serviced on.
- * @dev: The system bus device instance.
  * @virq: The assigned Linux virq.
 *
 * An event irq represents a virtual device interrupt. The interrupt_id

@@ -940,7 +940,7 @@ int __init ps3_repository_read_vuart_sysmgr_port(unsigned int *port)
 
 /**
  * ps3_repository_read_boot_dat_info - Get address and size of cell_ext_os_area.
- * address: lpar address of cell_ext_os_area
+ * @lpar_addr: lpar address of cell_ext_os_area
  * @size: size of cell_ext_os_area
 */

@@ -453,10 +453,9 @@ static ssize_t modalias_show(struct device *_dev, struct device_attribute *a,
 	char *buf)
 {
 	struct ps3_system_bus_device *dev = ps3_dev_to_system_bus_dev(_dev);
-	int len = snprintf(buf, PAGE_SIZE, "ps3:%d:%d\n", dev->match_id,
-			   dev->match_sub_id);
 
-	return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+	return sysfs_emit(buf, "ps3:%d:%d\n", dev->match_id,
+			  dev->match_sub_id);
 }
 
 static DEVICE_ATTR_RO(modalias);

@@ -140,6 +140,20 @@ config HV_PERF_CTRS
 
 	  If unsure, select Y.
 
+config VPA_PMU
+	tristate "VPA PMU events"
+	depends on KVM_BOOK3S_64_HV && HV_PERF_CTRS
+	help
+	  Enable access to the VPA PMU counters via perf. This enables
+	  code that support measurement for KVM on PowerVM(KoP) feature.
+	  PAPR hypervisor has introduced three new counters in the VPA area
+	  of LPAR CPUs for KVM L2 guest observability. Two for context switches
+	  from host to guest and vice versa, and one counter for getting
+	  the total time spent inside the KVM guest. This config enables code
+	  that access these software counters via perf.
+
+	  If unsure, Select N.
+
 config IBMVIO
 	depends on PPC_PSERIES
 	bool
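
For orientation only, here is a hedged user-space sketch (not part of this series) of how one of these VPA counters could be read once the vpa_pmu driver is loaded. It assumes the usual /sys/bus/event_source layout for a dynamic PMU; the attr.config value of 0 is a placeholder, not a real vpa_pmu event encoding, which would come from the PMU's events/ and format/ entries.

/* Hedged sketch: count one vpa_pmu event for the calling task via perf.
 * Assumptions: the sysfs path below exists; config value 0 is a placeholder.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu,
			    int group_fd, unsigned long flags)
{
	return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr = { .size = sizeof(attr) };
	unsigned long long count;
	unsigned int type;
	FILE *f;
	int fd;

	/* Dynamic PMU type registered by the vpa_pmu driver. */
	f = fopen("/sys/bus/event_source/devices/vpa_pmu/type", "r");
	if (!f || fscanf(f, "%u", &type) != 1)
		return 1;
	fclose(f);

	attr.type = type;
	attr.config = 0;	/* placeholder: real encoding comes from events/<name> */

	/* pid = 0, cpu = -1: per-task counting of this task on any CPU. */
	fd = perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("vpa_pmu count: %llu\n", count);

	close(fd);
	return 0;
}

The per-task/process monitoring added to the vpa_pmu driver in this series is what makes the pid = 0 form above meaningful.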

@@ -191,7 +191,7 @@ static int dtl_enable(struct dtl *dtl)
 		return -EBUSY;
 
 	/* ensure there are no other conflicting dtl users */
-	if (!read_trylock(&dtl_access_lock))
+	if (!down_read_trylock(&dtl_access_lock))
 		return -EBUSY;
 
 	n_entries = dtl_buf_entries;
@@ -199,7 +199,7 @@ static int dtl_enable(struct dtl *dtl)
 	if (!buf) {
 		printk(KERN_WARNING "%s: buffer alloc failed for cpu %d\n",
 			__func__, dtl->cpu);
-		read_unlock(&dtl_access_lock);
+		up_read(&dtl_access_lock);
 		return -ENOMEM;
 	}
 
@@ -217,7 +217,7 @@ static int dtl_enable(struct dtl *dtl)
 	spin_unlock(&dtl->lock);
 
 	if (rc) {
-		read_unlock(&dtl_access_lock);
+		up_read(&dtl_access_lock);
 		kmem_cache_free(dtl_cache, buf);
 	}
 
@@ -232,7 +232,7 @@ static void dtl_disable(struct dtl *dtl)
 	dtl->buf = NULL;
 	dtl->buf_entries = 0;
 	spin_unlock(&dtl->lock);
-	read_unlock(&dtl_access_lock);
+	up_read(&dtl_access_lock);
 }
 
 /* file interface */
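
The hunks above only touch the lock's users; the matching type change for dtl_access_lock itself is outside the lines shown. As a hedged sketch of what that conversion amounts to (the exact declaration site is assumed, not shown in this diff):

#include <linux/rwsem.h>

/* Sketch only: the reader/writer lock becomes a semaphore so holders may
 * sleep; previously something along the lines of:
 *
 *	DEFINE_RWLOCK(dtl_access_lock);
 */
DECLARE_RWSEM(dtl_access_lock);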
