mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
synced 2025-01-09 06:43:09 +00:00
arm64 updates for 5.5

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:
 "Apart from the arm64-specific bits (core arch and perf, new arm64
  selftests), it touches the generic cow_user_page() (reviewed by
  Kirill) together with a macro for x86 to preserve the existing
  behaviour on this architecture.

  Summary:

   - On ARMv8 CPUs without hardware updates of the access flag, avoid
     failing cow_user_page() on PFN mappings if the pte is old. The
     patches introduce an arch_faults_on_old_pte() macro, defined as
     false on x86. When true, cow_user_page() makes the pte young before
     attempting __copy_from_user_inatomic().

   - Convert the synchronous exception handling paths in
     arch/arm64/kernel/entry.S to C.

   - FTRACE_WITH_REGS support for arm64.

   - ZONE_DMA re-introduced on arm64 to support Raspberry Pi 4.

   - Several kselftest cases specific to arm64, together with a
     MAINTAINERS update for these files (moved to the ARM64 PORT entry).

   - Workaround for a Neoverse-N1 erratum where the CPU may fetch stale
     instructions under certain conditions.

   - Workaround for Cortex-A57 and A72 errata where the CPU may
     speculatively execute an AT instruction and associate a VMID with
     the wrong guest page tables (corrupting the TLB).

   - Perf updates for arm64: additional PMU topologies on HiSilicon
     platforms, support for the CCN-512 interconnect, AXI ID filtering
     in the IMX8 DDR PMU, support for the CCPI2 uncore PMU in ThunderX2.

   - GICv3 optimisation to avoid a heavy barrier when accessing the
     ICC_PMR_EL1 register.

   - ELF HWCAP documentation updates and clean-up.

   - SMC calling convention conduit code clean-up.

   - KASLR diagnostics printed during boot.

   - NVIDIA Carmel CPU added to the KPTI whitelist.

   - Some arm64 mm clean-ups: use generic free_initrd_mem(), remove
     stale macro, simplify calculation in __create_pgd_mapping(), typos.

   - Kconfig clean-ups: CMDLINE_FORCE to depend on CMDLINE, choice for
     endianness to help with allmodconfig"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (93 commits)
  arm64: Kconfig: add a choice for endianness
  kselftest: arm64: fix spelling mistake "contiguos" -> "contiguous"
  arm64: Kconfig: make CMDLINE_FORCE depend on CMDLINE
  MAINTAINERS: Add arm64 selftests to the ARM64 PORT entry
  arm64: kaslr: Check command line before looking for a seed
  arm64: kaslr: Announce KASLR status on boot
  kselftest: arm64: fake_sigreturn_misaligned_sp
  kselftest: arm64: fake_sigreturn_bad_size
  kselftest: arm64: fake_sigreturn_duplicated_fpsimd
  kselftest: arm64: fake_sigreturn_missing_fpsimd
  kselftest: arm64: fake_sigreturn_bad_size_for_magic0
  kselftest: arm64: fake_sigreturn_bad_magic
  kselftest: arm64: add helper get_current_context
  kselftest: arm64: extend test_init functionalities
  kselftest: arm64: mangle_pstate_invalid_mode_el[123][ht]
  kselftest: arm64: mangle_pstate_invalid_daif_bits
  kselftest: arm64: mangle_pstate_invalid_compat_toggle and common utils
  kselftest: arm64: extend toplevel skeleton Makefile
  drivers/perf: hisi: update the sccl_id/ccl_id for certain HiSilicon platform
  arm64: mm: reserve CMA and crashkernel in ZONE_DMA32
  ...
This commit is contained in:
commit 4ba380f616
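To make the first summary item concrete, here is a hedged sketch of the
generic pattern it describes; the helper name cow_copy_user() is hypothetical,
and the real logic lives in mm/memory.c's cow_user_page():

    #ifndef arch_faults_on_old_pte
    /* x86 and most others: hardware manages the access flag itself */
    static inline bool arch_faults_on_old_pte(void)
    {
        return false;
    }
    #endif

    static bool cow_copy_user(struct vm_fault *vmf, void *kaddr, void *uaddr)
    {
        /*
         * On CPUs that fault on old ptes (ARMv8 without hardware AF),
         * mark the pte young first so __copy_from_user_inatomic() cannot
         * fail and leave a zeroed page after fork() + CoW on PFN mappings.
         */
        if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
            pte_t entry = pte_mkyoung(vmf->orig_pte);
            set_pte_at(vmf->vma->vm_mm, vmf->address, vmf->pte, entry);
        }

        return __copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE) == 0;
    }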
@@ -17,7 +17,8 @@ The "format" directory describes format of the config (event ID) and config1
 (AXI filtering) fields of the perf_event_attr structure, see /sys/bus/event_source/
 devices/imx8_ddr0/format/. The "events" directory describes the events types
 hardware supported that can be used with perf tool, see /sys/bus/event_source/
-devices/imx8_ddr0/events/.
+devices/imx8_ddr0/events/. The "caps" directory describes filter features implemented
+in DDR PMU, see /sys/bus/events_source/devices/imx8_ddr0/caps/.
 
 e.g.::
   perf stat -a -e imx8_ddr0/cycles/ cmd
   perf stat -a -e imx8_ddr0/read/,imx8_ddr0/write/ cmd

@@ -25,9 +26,12 @@ devices/imx8_ddr0/events/.
 AXI filtering is only used by CSV modes 0x41 (axid-read) and 0x42 (axid-write)
 to count reading or writing matches filter setting. Filter setting is various
 from different DRAM controller implementations, which is distinguished by quirks
-in the driver.
+in the driver. You also can dump info from userspace, filter in "caps" directory
+indicates whether PMU supports AXI ID filter or not; enhanced_filter indicates
+whether PMU supports enhanced AXI ID filter or not. Value 0 for un-supported, and
+value 1 for supported.
 
-* With DDR_CAP_AXI_ID_FILTER quirk.
+* With DDR_CAP_AXI_ID_FILTER quirk(filter: 1, enhanced_filter: 0).
   Filter is defined with two configuration parts:
   --AXI_ID defines AxID matching value.
   --AXI_MASKING defines which bits of AxID are meaningful for the matching.

@@ -50,3 +54,8 @@ in the driver.
 axi_id to monitor a specific id, rather than having to specify axi_mask.
 e.g.::
   perf stat -a -e imx8_ddr0/axid-read,axi_id=0x12/ cmd, which will monitor ARID=0x12
+
+* With DDR_CAP_AXI_ID_FILTER_ENHANCED quirk(filter: 1, enhanced_filter: 1).
+  This is an extension to the DDR_CAP_AXI_ID_FILTER quirk which permits
+  counting the number of bytes (as opposed to the number of bursts) from DDR
+  read and write transactions concurrently with another set of data counters.
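As a userspace illustration of the AXI ID filtering described above, a hedged
sketch of opening a counting event with perf_event_open(2), analogous to the
"perf stat -a" examples. The PMU's dynamic type must be read from
/sys/bus/event_source/devices/imx8_ddr0/type at runtime, and the 0x41/config1
encoding is an assumption based on the event and format descriptions quoted
in this diff; verify it against the format/ directory before relying on it.

    /* Hypothetical helper, not part of this commit. */
    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int open_axid_read_counter(int pmu_type)
    {
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = pmu_type;   /* dynamic type from sysfs */
        attr.config = 0x41;     /* axid-read CSV mode */
        attr.config1 = 0x12;    /* AXI_ID filter, as in the ARID=0x12 example */

        /* system-wide counting event on CPU 0, like "perf stat -a" */
        return syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
    }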
@@ -3,24 +3,26 @@ Cavium ThunderX2 SoC Performance Monitoring Unit (PMU UNCORE)
 =============================================================
 
 The ThunderX2 SoC PMU consists of independent, system-wide, per-socket
-PMUs such as the Level 3 Cache (L3C) and DDR4 Memory Controller (DMC).
+PMUs such as the Level 3 Cache (L3C), DDR4 Memory Controller (DMC) and
+Cavium Coherent Processor Interconnect (CCPI2).
 
 The DMC has 8 interleaved channels and the L3C has 16 interleaved tiles.
 Events are counted for the default channel (i.e. channel 0) and prorated
 to the total number of channels/tiles.
 
-The DMC and L3C support up to 4 counters. Counters are independently
-programmable and can be started and stopped individually. Each counter
-can be set to a different event. Counters are 32-bit and do not support
-an overflow interrupt; they are read every 2 seconds.
+The DMC and L3C support up to 4 counters, while the CCPI2 supports up to 8
+counters. Counters are independently programmable to different events and
+can be started and stopped individually. None of the counters support an
+overflow interrupt. DMC and L3C counters are 32-bit and read every 2 seconds.
+The CCPI2 counters are 64-bit and assumed not to overflow in normal operation.
 
 PMU UNCORE (perf) driver:
 
 The thunderx2_pmu driver registers per-socket perf PMUs for the DMC and
-L3C devices. Each PMU can be used to count up to 4 events
-simultaneously. The PMUs provide a description of their available events
-and configuration options under sysfs, see
-/sys/devices/uncore_<l3c_S/dmc_S/>; S is the socket id.
+L3C devices. Each PMU can be used to count up to 4 (DMC/L3C) or up to 8
+(CCPI2) events simultaneously. The PMUs provide a description of their
+available events and configuration options under sysfs, see
+/sys/devices/uncore_<l3c_S/dmc_S/ccpi2_S/>; S is the socket id.
 
 The driver does not support sampling, therefore "perf record" will not
 work. Per-task perf sessions are also not supported.
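A minimal sketch of how a tool would resolve one of these per-socket uncore
PMUs before programming counters; the /sys/devices/uncore_<...>_S layout is
taken from the paragraph above, everything else is illustrative.

    #include <stdio.h>

    /* e.g. name = "uncore_dmc_0" or "uncore_ccpi2_0"; returns perf type or -1 */
    static int uncore_pmu_type(const char *name)
    {
        char path[128];
        int type = -1;
        FILE *f;

        snprintf(path, sizeof(path), "/sys/devices/%s/type", name);
        f = fopen(path, "r");
        if (!f)
            return -1;
        if (fscanf(f, "%d", &type) != 1)
            type = -1;
        fclose(f);
        return type;
    }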
@@ -213,6 +213,9 @@ Before jumping into the kernel, the following conditions must be met:
 
   - ICC_SRE_EL3.Enable (bit 3) must be initialiased to 0b1.
   - ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b1.
+  - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
+    all CPUs the kernel is executing on, and must stay constant
+    for the lifetime of the kernel.
 
 - If the kernel is entered at EL1:
 

@@ -168,8 +168,15 @@ infrastructure:
      +------------------------------+---------+---------+
 
 
-3) MIDR_EL1 - Main ID Register
+3) ID_AA64PFR1_EL1 - Processor Feature Register 1
+     +------------------------------+---------+---------+
+     | Name                         |  bits   | visible |
+     +------------------------------+---------+---------+
+     | SSBS                         | [7-4]   |    y    |
+     +------------------------------+---------+---------+
+
+
+4) MIDR_EL1 - Main ID Register
      +------------------------------+---------+---------+
      | Name                         |  bits   | visible |
      +------------------------------+---------+---------+

@@ -188,11 +195,15 @@ infrastructure:
      as available on the CPU where it is fetched and is not a system
      wide safe value.
 
-4) ID_AA64ISAR1_EL1 - Instruction set attribute register 1
+5) ID_AA64ISAR1_EL1 - Instruction set attribute register 1
 
      +------------------------------+---------+---------+
      | Name                         |  bits   | visible |
      +------------------------------+---------+---------+
+     | SB                           | [39-36] |    y    |
+     +------------------------------+---------+---------+
+     | FRINTTS                      | [35-32] |    y    |
+     +------------------------------+---------+---------+
      | GPI                          | [31-28] |    y    |
      +------------------------------+---------+---------+
      | GPA                          | [27-24] |    y    |

@@ -210,7 +221,7 @@ infrastructure:
      | DPB                          | [3-0]   |    y    |
      +------------------------------+---------+---------+
 
-5) ID_AA64MMFR2_EL1 - Memory model feature register 2
+6) ID_AA64MMFR2_EL1 - Memory model feature register 2
 
      +------------------------------+---------+---------+
      | Name                         |  bits   | visible |

@@ -218,7 +229,7 @@ infrastructure:
      | AT                           | [35-32] |    y    |
      +------------------------------+---------+---------+
 
-6) ID_AA64ZFR0_EL1 - SVE feature ID register 0
+7) ID_AA64ZFR0_EL1 - SVE feature ID register 0
 
      +------------------------------+---------+---------+
      | Name                         |  bits   | visible |
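The "visible" column in these tables means userspace can read the field
directly: the kernel traps EL0 MRS accesses to the ID registers and returns
a sanitised value exposing only the visible fields. A small arm64-only
illustration (assumes an assembler that accepts the named register; older
toolchains need the S3_0_C0_C6_1 encoding instead):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t isar1;

        /* trapped and emulated by the kernel for EL0 */
        asm("mrs %0, ID_AA64ISAR1_EL1" : "=r" (isar1));
        printf("SB      = %llu\n", (unsigned long long)((isar1 >> 36) & 0xf));
        printf("FRINTTS = %llu\n", (unsigned long long)((isar1 >> 32) & 0xf));
        return 0;
    }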
@@ -119,10 +119,6 @@ HWCAP_LRCPC
 HWCAP_DCPOP
     Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0001.
 
-HWCAP2_DCPODP
-
-    Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0010.
-
 HWCAP_SHA3
     Functionality implied by ID_AA64ISAR0_EL1.SHA3 == 0b0001.

@@ -141,6 +137,41 @@ HWCAP_SHA512
 HWCAP_SVE
     Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001.
 
+HWCAP_ASIMDFHM
+    Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
+
+HWCAP_DIT
+    Functionality implied by ID_AA64PFR0_EL1.DIT == 0b0001.
+
+HWCAP_USCAT
+    Functionality implied by ID_AA64MMFR2_EL1.AT == 0b0001.
+
+HWCAP_ILRCPC
+    Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0010.
+
+HWCAP_FLAGM
+    Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
+
+HWCAP_SSBS
+    Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.
+
+HWCAP_SB
+    Functionality implied by ID_AA64ISAR1_EL1.SB == 0b0001.
+
+HWCAP_PACA
+    Functionality implied by ID_AA64ISAR1_EL1.APA == 0b0001 or
+    ID_AA64ISAR1_EL1.API == 0b0001, as described by
+    Documentation/arm64/pointer-authentication.rst.
+
+HWCAP_PACG
+    Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
+    ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
+    Documentation/arm64/pointer-authentication.rst.
+
+HWCAP2_DCPODP
+    Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0010.
+
 HWCAP2_SVE2
     Functionality implied by ID_AA64ZFR0_EL1.SVEVer == 0b0001.
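Userspace normally consumes these hwcaps through the ELF auxiliary vector
rather than the ID registers; a short example checking two of the entries
documented above (HWCAP_SSBS and HWCAP2_DCPODP come from <asm/hwcap.h> with
new-enough arm64 kernel headers):

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    int main(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);
        unsigned long hwcap2 = getauxval(AT_HWCAP2);

        printf("SSBS:   %s\n", (hwcap & HWCAP_SSBS) ? "yes" : "no");
        printf("DCPODP: %s\n", (hwcap2 & HWCAP2_DCPODP) ? "yes" : "no");
        return 0;
    }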
@@ -165,38 +196,10 @@ HWCAP2_SVESM4
 
     Functionality implied by ID_AA64ZFR0_EL1.SM4 == 0b0001.
 
-HWCAP_ASIMDFHM
-    Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
-
-HWCAP_DIT
-    Functionality implied by ID_AA64PFR0_EL1.DIT == 0b0001.
-
-HWCAP_USCAT
-    Functionality implied by ID_AA64MMFR2_EL1.AT == 0b0001.
-
-HWCAP_ILRCPC
-    Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0010.
-
-HWCAP_FLAGM
-    Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
-
 HWCAP2_FLAGM2
 
     Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0010.
 
-HWCAP_SSBS
-    Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.
-
-HWCAP_PACA
-    Functionality implied by ID_AA64ISAR1_EL1.APA == 0b0001 or
-    ID_AA64ISAR1_EL1.API == 0b0001, as described by
-    Documentation/arm64/pointer-authentication.rst.
-
-HWCAP_PACG
-    Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
-    ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
-    Documentation/arm64/pointer-authentication.rst.
-
 HWCAP2_FRINT
 
     Functionality implied by ID_AA64ISAR1_EL1.FRINTTS == 0b0001.

@@ -70,8 +70,12 @@ stable kernels.
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A57      | #834220         | ARM64_ERRATUM_834220        |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Cortex-A57      | #1319537        | ARM64_ERRATUM_1319367       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A72      | #853709         | N/A                         |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Cortex-A72      | #1319367        | ARM64_ERRATUM_1319367       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A73      | #858921         | ARM64_ERRATUM_858921        |
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A55      | #1024718        | ARM64_ERRATUM_1024718       |

@@ -88,6 +92,8 @@ stable kernels.
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Neoverse-N1     | #1349291        | N/A                         |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Neoverse-N1     | #1542419        | ARM64_ERRATUM_1542419       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | MMU-500         | #841119,826419  | N/A                         |
 +----------------+-----------------+-----------------+-----------------------------+
 +----------------+-----------------+-----------------+-----------------------------+
@@ -6,6 +6,7 @@ Required properties:
 	       "arm,ccn-502"
 	       "arm,ccn-504"
 	       "arm,ccn-508"
+	       "arm,ccn-512"
 
 - reg: (standard registers property) physical address and size
        (16MB) of the configuration registers block

@@ -5,6 +5,7 @@ Required properties:
 - compatible: should be one of:
 	"fsl,imx8-ddr-pmu"
 	"fsl,imx8m-ddr-pmu"
+	"fsl,imx8mp-ddr-pmu"
 
 - reg: physical address and size

@@ -2611,6 +2611,7 @@ S:	Maintained
 F:	arch/arm64/
 X:	arch/arm64/boot/dts/
 F:	Documentation/arm64/
+F:	tools/testing/selftests/arm64/
 
 AS3645A LED FLASH CONTROLLER DRIVER
 M:	Sakari Ailus <sakari.ailus@iki.fi>
@@ -1,7 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/arm-smccc.h>
 #include <linux/kernel.h>
-#include <linux/psci.h>
 #include <linux/smp.h>
 
 #include <asm/cp15.h>

@@ -75,11 +74,8 @@ static void cpu_v7_spectre_init(void)
 	case ARM_CPU_PART_CORTEX_A72: {
 		struct arm_smccc_res res;
 
-		if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-			break;
-
-		switch (psci_ops.conduit) {
-		case PSCI_CONDUIT_HVC:
+		switch (arm_smccc_1_1_get_conduit()) {
+		case SMCCC_CONDUIT_HVC:
 			arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 			if ((int)res.a0 != 0)

@@ -90,7 +86,7 @@ static void cpu_v7_spectre_init(void)
 			spectre_v2_method = "hypervisor";
 			break;
 
-		case PSCI_CONDUIT_SMC:
+		case SMCCC_CONDUIT_SMC:
 			arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 					  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 			if ((int)res.a0 != 0)
@@ -143,6 +143,8 @@ config ARM64
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
+	select HAVE_DYNAMIC_FTRACE_WITH_REGS \
+		if $(cc-option,-fpatchable-function-entry=2)
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select HAVE_FAST_GUP
 	select HAVE_FTRACE_MCOUNT_RECORD

@@ -266,6 +268,10 @@ config GENERIC_CSUM
 config GENERIC_CALIBRATE_DELAY
 	def_bool y
 
+config ZONE_DMA
+	bool "Support DMA zone" if EXPERT
+	default y
+
 config ZONE_DMA32
 	bool "Support DMA32 zone" if EXPERT
 	default y

@@ -538,6 +544,16 @@ config ARM64_ERRATUM_1286807
 	  invalidated has been observed by other observers. The
 	  workaround repeats the TLBI+DSB operation.
 
+config ARM64_ERRATUM_1319367
+	bool "Cortex-A57/A72: Speculative AT instruction using out-of-context translation regime could cause subsequent request to generate an incorrect translation"
+	default y
+	help
+	  This option adds work arounds for ARM Cortex-A57 erratum 1319537
+	  and A72 erratum 1319367
+
+	  Cortex-A57 and A72 cores could end-up with corrupted TLBs by
+	  speculating an AT instruction during a guest context switch.
+
 	  If unsure, say Y.
 
 config ARM64_ERRATUM_1463225

@@ -558,6 +574,22 @@ config ARM64_ERRATUM_1463225
 
 	  If unsure, say Y.
 
+config ARM64_ERRATUM_1542419
+	bool "Neoverse-N1: workaround mis-ordering of instruction fetches"
+	default y
+	help
+	  This option adds a workaround for ARM Neoverse-N1 erratum
+	  1542419.
+
+	  Affected Neoverse-N1 cores could execute a stale instruction when
+	  modified by another CPU. The workaround depends on a firmware
+	  counterpart.
+
+	  Workaround the issue by hiding the DIC feature from EL0. This
+	  forces user-space to perform cache maintenance.
+
+	  If unsure, say Y.
+
 config CAVIUM_ERRATUM_22375
 	bool "Cavium erratum 22375, 24313"
 	default y

@@ -845,10 +877,26 @@ config ARM64_PA_BITS
 	default 48 if ARM64_PA_BITS_48
 	default 52 if ARM64_PA_BITS_52
 
+choice
+	prompt "Endianness"
+	default CPU_LITTLE_ENDIAN
+	help
+	  Select the endianness of data accesses performed by the CPU. Userspace
+	  applications will need to be compiled and linked for the endianness
+	  that is selected here.
+
 config CPU_BIG_ENDIAN
 	bool "Build big-endian kernel"
 	help
-	  Say Y if you plan on running a kernel in big-endian mode.
+	  Say Y if you plan on running a kernel with a big-endian userspace.
+
+config CPU_LITTLE_ENDIAN
+	bool "Build little-endian kernel"
+	help
+	  Say Y if you plan on running a kernel with a little-endian userspace.
+	  This is usually the case for distributions targeting arm64.
+
+endchoice
+
 config SCHED_MC
 	bool "Multi-core scheduler support"
@@ -1597,6 +1645,7 @@ config CMDLINE
 
 config CMDLINE_FORCE
 	bool "Always use the default kernel command string"
+	depends on CMDLINE != ""
 	help
 	  Always use the default kernel command string, even if the boot
 	  loader passes other arguments to the kernel.

@@ -95,6 +95,11 @@ ifeq ($(CONFIG_ARM64_MODULE_PLTS),y)
 KBUILD_LDS_MODULE	+= $(srctree)/arch/arm64/kernel/module.lds
 endif
 
+ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y)
+  KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
+  CC_FLAGS_FTRACE := -fpatchable-function-entry=2
+endif
+
 # Default value
 head-y		:= arch/arm64/kernel/head.o
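For context on the -fpatchable-function-entry=2 flag wired up above: it makes
the compiler emit two NOPs at each function entry instead of an _mcount call,
which the arm64 ftrace code can later live-patch into "mov x9, x30; bl
ftrace_caller". A standalone illustration of the compiler side only (the
kernel's rewrite lives in arch/arm64/kernel/ftrace.c and is not shown):

    /* Compile with: gcc -O2 -fpatchable-function-entry=2 -S demo.c
     * The generated assembly for demo() then begins with two "nop"
     * instructions reserved for the tracer to patch.
     */
    int demo(int x)
    {
        return x * 2;
    }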
@@ -58,12 +58,4 @@ alternative_else_nop_endif
 	.endm
 #endif
 
-/*
- * Remove the address tag from a virtual address, if present.
- */
-	.macro	untagged_addr, dst, addr
-	sbfx	\dst, \addr, #0, #56
-	and	\dst, \dst, \addr
-	.endm
-
 #endif

@@ -29,6 +29,18 @@
 			     SB_BARRIER_INSN"nop\n",		\
 			     ARM64_HAS_SB))
 
+#ifdef CONFIG_ARM64_PSEUDO_NMI
+#define pmr_sync()						\
+	do {							\
+		extern struct static_key_false gic_pmr_sync;	\
+								\
+		if (static_branch_unlikely(&gic_pmr_sync))	\
+			dsb(sy);				\
+	} while(0)
+#else
+#define pmr_sync()	do {} while (0)
+#endif
+
 #define mb()		dsb(sy)
 #define rmb()		dsb(ld)
 #define wmb()		dsb(st)
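The new pmr_sync() makes the DSB that some GICv3 implementations require
after an ICC_PMR_EL1 write conditional on the gic_pmr_sync static key, rather
than unconditional. A kernel-context sketch (not standalone) of the pattern
callers are expected to follow; the helper name is illustrative:

    static inline void unmask_irqs_via_pmr(void)
    {
        gic_write_pmr(GIC_PRIO_IRQON); /* raise the priority mask */
        pmr_sync();                    /* dsb(sy) only if this GIC needs it */
    }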
@@ -11,6 +11,7 @@
 #define CTR_L1IP_MASK		3
 #define CTR_DMINLINE_SHIFT	16
 #define CTR_IMINLINE_SHIFT	0
+#define CTR_IMINLINE_MASK	0xf
 #define CTR_ERG_SHIFT		20
 #define CTR_CWG_SHIFT		24
 #define CTR_CWG_MASK		15

@@ -18,7 +19,7 @@
 #define CTR_DIC_SHIFT		29
 
 #define CTR_CACHE_MINLINE_MASK	\
-	(0xf << CTR_DMINLINE_SHIFT | 0xf << CTR_IMINLINE_SHIFT)
+	(0xf << CTR_DMINLINE_SHIFT | CTR_IMINLINE_MASK << CTR_IMINLINE_SHIFT)
 
 #define CTR_L1IP(ctr)		(((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)

@@ -54,7 +54,9 @@
 #define ARM64_WORKAROUND_1463225		44
 #define ARM64_WORKAROUND_CAVIUM_TX2_219_TVM	45
 #define ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM	46
+#define ARM64_WORKAROUND_1542419		47
+#define ARM64_WORKAROUND_1319367		48
 
-#define ARM64_NCAPS				47
+#define ARM64_NCAPS				49
 
 #endif /* __ASM_CPUCAPS_H */

@@ -659,6 +659,20 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
 	default: return CONFIG_ARM64_PA_BITS;
 	}
 }
+
+/* Check whether hardware update of the Access flag is supported */
+static inline bool cpu_has_hw_af(void)
+{
+	u64 mmfr1;
+
+	if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
+		return false;
+
+	mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
+	return cpuid_feature_extract_unsigned_field(mmfr1,
+						ID_AA64MMFR1_HADBS_SHIFT);
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif
@@ -8,7 +8,9 @@
 #include <linux/irqflags.h>
 
 #include <asm/arch_gicv3.h>
+#include <asm/barrier.h>
 #include <asm/cpufeature.h>
+#include <asm/ptrace.h>
 
 #define DAIF_PROCCTX		0
 #define DAIF_PROCCTX_NOIRQ	PSR_I_BIT

@@ -65,7 +67,7 @@ static inline void local_daif_restore(unsigned long flags)
 
 		if (system_uses_irq_prio_masking()) {
 			gic_write_pmr(GIC_PRIO_IRQON);
-			dsb(sy);
+			pmr_sync();
 		}
 	} else if (system_uses_irq_prio_masking()) {
 		u64 pmr;

@@ -109,4 +111,19 @@ static inline void local_daif_restore(unsigned long flags)
 	trace_hardirqs_off();
 }
 
+/*
+ * Called by synchronous exception handlers to restore the DAIF bits that were
+ * modified by taking an exception.
+ */
+static inline void local_daif_inherit(struct pt_regs *regs)
+{
+	unsigned long flags = regs->pstate & DAIF_MASK;
+
+	/*
+	 * We can't use local_daif_restore(regs->pstate) here as
+	 * system_has_prio_mask_debugging() won't restore the I bit if it can
+	 * use the pmr instead.
+	 */
+	write_sysreg(flags, daif);
+}
 #endif

@@ -8,14 +8,15 @@
 #define __ASM_EXCEPTION_H
 
 #include <asm/esr.h>
+#include <asm/kprobes.h>
+#include <asm/ptrace.h>
 
 #include <linux/interrupt.h>
 
-#define __exception	__attribute__((section(".exception.text")))
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 #define __exception_irq_entry	__irq_entry
 #else
-#define __exception_irq_entry	__exception
+#define __exception_irq_entry	__kprobes
 #endif
 
 static inline u32 disr_to_esr(u64 disr)

@@ -31,5 +32,22 @@ static inline u32 disr_to_esr(u64 disr)
 }
 
 asmlinkage void enter_from_user_mode(void);
+void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
+void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
+void do_undefinstr(struct pt_regs *regs);
+asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr);
+void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
+			struct pt_regs *regs);
+void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
+void do_sve_acc(unsigned int esr, struct pt_regs *regs);
+void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
+void do_sysinstr(unsigned int esr, struct pt_regs *regs);
+void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
+void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
+void do_cp15instr(unsigned int esr, struct pt_regs *regs);
+void el0_svc_handler(struct pt_regs *regs);
+void el0_svc_compat_handler(struct pt_regs *regs);
+void do_el0_ia_bp_hardening(unsigned long addr,  unsigned int esr,
+			    struct pt_regs *regs);
 
 #endif	/* __ASM_EXCEPTION_H */
@@ -11,9 +11,20 @@
 #include <asm/insn.h>
 
 #define HAVE_FUNCTION_GRAPH_FP_TEST
+
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+#define ARCH_SUPPORTS_FTRACE_OPS 1
+#else
 #define MCOUNT_ADDR		((unsigned long)_mcount)
+#endif
+
+/* The BL at the callsite's adjusted rec->ip */
 #define MCOUNT_INSN_SIZE	AARCH64_INSN_SIZE
 
+#define FTRACE_PLT_IDX		0
+#define FTRACE_REGS_PLT_IDX	1
+#define NR_FTRACE_PLTS		2
+
 /*
  * Currently, gcc tends to save the link register after the local variables
  * on the stack. This causes the max stack tracer to report the function

@@ -43,6 +54,12 @@ extern void return_to_handler(void);
 
 static inline unsigned long ftrace_call_adjust(unsigned long addr)
 {
+	/*
+	 * Adjust addr to point at the BL in the callsite.
+	 * See ftrace_init_nop() for the callsite sequence.
+	 */
+	if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
+		return addr + AARCH64_INSN_SIZE;
 	/*
 	 * addr is the address of the mcount call instruction.
 	 * recordmcount does the necessary offset calculation.

@@ -50,6 +67,12 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
 	return addr;
 }
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+struct dyn_ftrace;
+int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
+#define ftrace_init_nop ftrace_init_nop
+#endif
+
 #define ftrace_return_address(n) return_address(n)
 
 /*

@@ -440,6 +440,9 @@ u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst,
 					 int shift,
 					 enum aarch64_insn_variant variant,
 					 enum aarch64_insn_logic_type type);
+u32 aarch64_insn_gen_move_reg(enum aarch64_insn_register dst,
+			      enum aarch64_insn_register src,
+			      enum aarch64_insn_variant variant);
 u32 aarch64_insn_gen_logical_immediate(enum aarch64_insn_logic_type type,
 				       enum aarch64_insn_variant variant,
 				       enum aarch64_insn_register Rn,
@@ -6,6 +6,7 @@
 #define __ASM_IRQFLAGS_H
 
 #include <asm/alternative.h>
+#include <asm/barrier.h>
 #include <asm/ptrace.h>
 #include <asm/sysreg.h>

@@ -34,14 +35,14 @@ static inline void arch_local_irq_enable(void)
 	}
 
 	asm volatile(ALTERNATIVE(
-		"msr	daifclr, #2		// arch_local_irq_enable\n"
-		"nop",
-		__msr_s(SYS_ICC_PMR_EL1, "%0")
-		"dsb	sy",
+		"msr	daifclr, #2		// arch_local_irq_enable",
+		__msr_s(SYS_ICC_PMR_EL1, "%0"),
 		ARM64_HAS_IRQ_PRIO_MASKING)
 		:
 		: "r" ((unsigned long) GIC_PRIO_IRQON)
 		: "memory");
+
+	pmr_sync();
 }
 
 static inline void arch_local_irq_disable(void)

@@ -116,14 +117,14 @@ static inline unsigned long arch_local_irq_save(void)
 static inline void arch_local_irq_restore(unsigned long flags)
 {
 	asm volatile(ALTERNATIVE(
-		"msr	daif, %0\n"
-		"nop",
-		__msr_s(SYS_ICC_PMR_EL1, "%0")
-		"dsb	sy",
-		ARM64_HAS_IRQ_PRIO_MASKING)
+		"msr	daif, %0",
+		__msr_s(SYS_ICC_PMR_EL1, "%0"),
+		ARM64_HAS_IRQ_PRIO_MASKING)
 		:
 		: "r" (flags)
 		: "memory");
+
+	pmr_sync();
 }
 
 #endif /* __ASM_IRQFLAGS_H */

@@ -600,8 +600,7 @@ static inline void kvm_arm_vhe_guest_enter(void)
 	 * local_daif_mask() already sets GIC_PRIO_PSR_I_SET, we just need a
	 * dsb to ensure the redistributor is forwards EL2 IRQs to the CPU.
 	 */
-	if (system_uses_irq_prio_masking())
-		dsb(sy);
+	pmr_sync();
 }
 
 static inline void kvm_arm_vhe_guest_exit(void)
@@ -69,12 +69,6 @@
 #define KERNEL_START		_text
 #define KERNEL_END		_end
 
-#ifdef CONFIG_ARM64_VA_BITS_52
-#define MAX_USER_VA_BITS	52
-#else
-#define MAX_USER_VA_BITS	VA_BITS
-#endif
-
 /*
  * Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual
  * address space for the shadow region respectively. They can bloat the stack

@@ -21,7 +21,7 @@ struct mod_arch_specific {
 	struct mod_plt_sec	init;
 
 	/* for CONFIG_DYNAMIC_FTRACE */
-	struct plt_entry	*ftrace_trampoline;
+	struct plt_entry	*ftrace_trampolines;
 };
 #endif

@@ -69,7 +69,7 @@
 #define PGDIR_SHIFT		ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - CONFIG_PGTABLE_LEVELS)
 #define PGDIR_SIZE		(_AC(1, UL) << PGDIR_SHIFT)
 #define PGDIR_MASK		(~(PGDIR_SIZE-1))
-#define PTRS_PER_PGD		(1 << (MAX_USER_VA_BITS - PGDIR_SHIFT))
+#define PTRS_PER_PGD		(1 << (VA_BITS - PGDIR_SHIFT))
 
 /*
  * Section address mask and size definitions.

@@ -17,7 +17,7 @@
  * VMALLOC range.
 *
 * VMALLOC_START: beginning of the kernel vmalloc space
- * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space
+ * VMALLOC_END: extends to the available space below vmemmap, PCI I/O space
 * and fixed mappings
 */
 #define VMALLOC_START		(MODULES_END)

@@ -865,6 +865,20 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 #define phys_to_ttbr(addr)	(addr)
 #endif
 
+/*
+ * On arm64 without hardware Access Flag, copying from user will fail because
+ * the pte is old and cannot be marked young. So we always end up with zeroed
+ * page after fork() + CoW for pfn mappings. We don't always have a
+ * hardware-managed access flag on arm64.
+ */
+static inline bool arch_faults_on_old_pte(void)
+{
+	WARN_ON(preemptible());
+
+	return !cpu_has_hw_af();
+}
+#define arch_faults_on_old_pte arch_faults_on_old_pte
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
@@ -9,7 +9,7 @@
 #define __ASM_PROCESSOR_H
 
 #define KERNEL_DS		UL(-1)
-#define USER_DS			((UL(1) << MAX_USER_VA_BITS) - 1)
+#define USER_DS			((UL(1) << VA_BITS) - 1)
 
 /*
  * On arm64 systems, unaligned accesses by the CPU are cheap, and so there is

@@ -26,10 +26,12 @@
 #include <linux/init.h>
 #include <linux/stddef.h>
 #include <linux/string.h>
+#include <linux/thread_info.h>
 
 #include <asm/alternative.h>
 #include <asm/cpufeature.h>
 #include <asm/hw_breakpoint.h>
+#include <asm/kasan.h>
 #include <asm/lse.h>
 #include <asm/pgtable-hwdef.h>
 #include <asm/pointer_auth.h>

@@ -214,6 +216,18 @@ static inline void start_thread(struct pt_regs *regs, unsigned long pc,
 	regs->sp = sp;
 }
 
+static inline bool is_ttbr0_addr(unsigned long addr)
+{
+	/* entry assembly clears tags for TTBR0 addrs */
+	return addr < TASK_SIZE;
+}
+
+static inline bool is_ttbr1_addr(unsigned long addr)
+{
+	/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
+	return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
+}
+
 #ifdef CONFIG_COMPAT
 static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
 				       unsigned long sp)
@@ -66,24 +66,18 @@ struct pt_regs;
 	}								\
 	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
 
-#ifndef SYSCALL_DEFINE0
 #define SYSCALL_DEFINE0(sname)						\
 	SYSCALL_METADATA(_##sname, 0);					\
 	asmlinkage long __arm64_sys_##sname(const struct pt_regs *__unused);	\
 	ALLOW_ERROR_INJECTION(__arm64_sys_##sname, ERRNO);		\
 	asmlinkage long __arm64_sys_##sname(const struct pt_regs *__unused)
-#endif
 
-#ifndef COND_SYSCALL
 #define COND_SYSCALL(name)						\
 	asmlinkage long __weak __arm64_sys_##name(const struct pt_regs *regs)	\
 	{								\
 		return sys_ni_syscall();				\
 	}
-#endif
 
-#ifndef SYS_NI
 #define SYS_NI(name) SYSCALL_ALIAS(__arm64_sys_##name, sys_ni_posix_timers);
-#endif
 
 #endif /* __ASM_SYSCALL_WRAPPER_H */

@@ -42,16 +42,6 @@ static inline int __in_irqentry_text(unsigned long ptr)
 	       ptr < (unsigned long)&__irqentry_text_end;
 }
 
-static inline int in_exception_text(unsigned long ptr)
-{
-	int in;
-
-	in = ptr >= (unsigned long)&__exception_text_start &&
-	     ptr < (unsigned long)&__exception_text_end;
-
-	return in ? : __in_irqentry_text(ptr);
-}
-
 static inline int in_entry_text(unsigned long ptr)
 {
 	return ptr >= (unsigned long)&__entry_text_start &&
@@ -13,9 +13,9 @@ CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
 
 # Object file lists.
 obj-y			:= debug-monitors.o entry.o irq.o fpsimd.o		\
-			   entry-fpsimd.o process.o ptrace.o setup.o signal.o	\
-			   sys.o stacktrace.o time.o traps.o io.o vdso.o	\
-			   hyp-stub.o psci.o cpu_ops.o insn.o	\
+			   entry-common.o entry-fpsimd.o process.o ptrace.o	\
+			   setup.o signal.o sys.o stacktrace.o time.o traps.o	\
+			   io.o vdso.o hyp-stub.o psci.o cpu_ops.o insn.o	\
 			   return_address.o cpuinfo.o cpu_errata.o		\
 			   cpufeature.o alternative.o cacheinfo.o		\
 			   smp.o smp_spin_table.o topology.o smccc-call.o	\

@@ -56,6 +56,7 @@ int main(void)
   DEFINE(S_X24,			offsetof(struct pt_regs, regs[24]));
   DEFINE(S_X26,			offsetof(struct pt_regs, regs[26]));
   DEFINE(S_X28,			offsetof(struct pt_regs, regs[28]));
+  DEFINE(S_FP,			offsetof(struct pt_regs, regs[29]));
   DEFINE(S_LR,			offsetof(struct pt_regs, regs[30]));
   DEFINE(S_SP,			offsetof(struct pt_regs, sp));
   DEFINE(S_PSTATE,		offsetof(struct pt_regs, pstate));

@@ -6,7 +6,6 @@
 */
 
 #include <linux/arm-smccc.h>
-#include <linux/psci.h>
 #include <linux/types.h>
 #include <linux/cpu.h>
 #include <asm/cpu.h>
@@ -88,13 +87,21 @@ has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
 }
 
 static void
-cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
+cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *cap)
 {
 	u64 mask = arm64_ftr_reg_ctrel0.strict_mask;
+	bool enable_uct_trap = false;
 
 	/* Trap CTR_EL0 access on this CPU, only if it has a mismatch */
 	if ((read_cpuid_cachetype() & mask) !=
 	    (arm64_ftr_reg_ctrel0.sys_val & mask))
+		enable_uct_trap = true;
+
+	/* ... or if the system is affected by an erratum */
+	if (cap->capability == ARM64_WORKAROUND_1542419)
+		enable_uct_trap = true;
+
+	if (enable_uct_trap)
 		sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
 }
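Clearing SCTLR_EL1.UCT as above makes EL0 reads of CTR_EL0 trap so the kernel
can emulate them, hiding the DIC bit (29) from userspace on CPUs affected by
erratum 1542419 and thereby forcing explicit I-cache maintenance. From
userspace the trap is invisible; this arm64-only snippet simply observes the
emulated value:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t ctr;

        asm("mrs %0, ctr_el0" : "=r" (ctr)); /* trapped and emulated */
        printf("CTR_EL0 = %#llx (DIC=%llu)\n",
               (unsigned long long)ctr,
               (unsigned long long)((ctr >> 29) & 1));
        return 0;
    }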
@ -167,9 +174,7 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn,
|
|||||||
}
|
}
|
||||||
#endif /* CONFIG_KVM_INDIRECT_VECTORS */
|
#endif /* CONFIG_KVM_INDIRECT_VECTORS */
|
||||||
|
|
||||||
#include <uapi/linux/psci.h>
|
|
||||||
#include <linux/arm-smccc.h>
|
#include <linux/arm-smccc.h>
|
||||||
#include <linux/psci.h>
|
|
||||||
|
|
||||||
static void call_smc_arch_workaround_1(void)
|
static void call_smc_arch_workaround_1(void)
|
||||||
{
|
{
|
||||||
@ -213,11 +218,8 @@ static int detect_harden_bp_fw(void)
|
|||||||
struct arm_smccc_res res;
|
struct arm_smccc_res res;
|
||||||
u32 midr = read_cpuid_id();
|
u32 midr = read_cpuid_id();
|
||||||
|
|
||||||
if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
|
switch (arm_smccc_1_1_get_conduit()) {
|
||||||
return -1;
|
case SMCCC_CONDUIT_HVC:
|
||||||
|
|
||||||
switch (psci_ops.conduit) {
|
|
||||||
case PSCI_CONDUIT_HVC:
|
|
||||||
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
|
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
|
||||||
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
|
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
|
||||||
switch ((int)res.a0) {
|
switch ((int)res.a0) {
|
||||||
@ -235,7 +237,7 @@ static int detect_harden_bp_fw(void)
|
|||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
|
|
||||||
case PSCI_CONDUIT_SMC:
|
case SMCCC_CONDUIT_SMC:
|
||||||
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
|
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
|
||||||
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
|
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
|
||||||
switch ((int)res.a0) {
|
switch ((int)res.a0) {
|
||||||
@ -309,11 +311,11 @@ void __init arm64_update_smccc_conduit(struct alt_instr *alt,
|
|||||||
|
|
||||||
BUG_ON(nr_inst != 1);
|
BUG_ON(nr_inst != 1);
|
||||||
|
|
||||||
switch (psci_ops.conduit) {
|
switch (arm_smccc_1_1_get_conduit()) {
|
||||||
case PSCI_CONDUIT_HVC:
|
case SMCCC_CONDUIT_HVC:
|
||||||
insn = aarch64_insn_get_hvc_value();
|
insn = aarch64_insn_get_hvc_value();
|
||||||
break;
|
break;
|
||||||
case PSCI_CONDUIT_SMC:
|
case SMCCC_CONDUIT_SMC:
|
||||||
insn = aarch64_insn_get_smc_value();
|
insn = aarch64_insn_get_smc_value();
|
||||||
break;
|
break;
|
||||||
default:
|
default:
|
||||||
@ -352,12 +354,12 @@ void arm64_set_ssbd_mitigation(bool state)
|
|||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
switch (psci_ops.conduit) {
|
switch (arm_smccc_1_1_get_conduit()) {
|
||||||
case PSCI_CONDUIT_HVC:
|
case SMCCC_CONDUIT_HVC:
|
||||||
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
|
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
|
||||||
break;
|
break;
|
||||||
|
|
||||||
case PSCI_CONDUIT_SMC:
|
case SMCCC_CONDUIT_SMC:
|
||||||
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
|
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
|
||||||
break;
|
break;
|
||||||
|
|
||||||
@@ -391,20 +393,13 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 		goto out_printmsg;
 	}
 
-	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
-		ssbd_state = ARM64_SSBD_UNKNOWN;
-		if (!this_cpu_safe)
-			__ssb_safe = false;
-		return false;
-	}
-
-	switch (psci_ops.conduit) {
-	case PSCI_CONDUIT_HVC:
+	switch (arm_smccc_1_1_get_conduit()) {
+	case SMCCC_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
 		break;
 
-	case PSCI_CONDUIT_SMC:
+	case SMCCC_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_2, &res);
 		break;
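All four hunks above land on the same conduit-dispatch idiom: ask arm_smccc_1_1_get_conduit() which SMCCC v1.1 conduit the firmware advertises, then issue the call via HVC or SMC accordingly. The explicit SMCCC_VERSION_1_0 bail-outs disappear because the helper already returns SMCCC_CONDUIT_NONE when v1.1 is not available. For reference, a minimal self-contained sketch of the pattern; the function name probe_arch_workaround_1 is hypothetical, while the helpers and constants are the real <linux/arm-smccc.h> API the patch uses:

#include <linux/arm-smccc.h>

/* Sketch: probe whether firmware implements ARCH_WORKAROUND_1. */
static int probe_arch_workaround_1(void)
{
	struct arm_smccc_res res;

	switch (arm_smccc_1_1_get_conduit()) {
	case SMCCC_CONDUIT_HVC:
		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
		break;
	case SMCCC_CONDUIT_SMC:
		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
		break;
	default:
		return -1;	/* no SMCCC v1.1 conduit available */
	}

	return (int)res.a0;	/* negative: workaround not implemented */
}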
@@ -650,9 +645,21 @@ needs_tx2_tvm_workaround(const struct arm64_cpu_capabilities *entry,
 	return false;
 }
 
-#ifdef CONFIG_HARDEN_EL2_VECTORS
+static bool __maybe_unused
+has_neoverse_n1_erratum_1542419(const struct arm64_cpu_capabilities *entry,
+				int scope)
+{
+	u32 midr = read_cpuid_id();
+	bool has_dic = read_cpuid_cachetype() & BIT(CTR_DIC_SHIFT);
+	const struct midr_range range = MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1);
 
-static const struct midr_range arm64_harden_el2_vectors[] = {
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+	return is_midr_in_range(midr, &range) && has_dic;
+}
+
+#if defined(CONFIG_HARDEN_EL2_VECTORS) || defined(CONFIG_ARM64_ERRATUM_1319367)
+
+static const struct midr_range ca57_a72[] = {
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
 	{},
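The matcher above only reports a CPU as affected when two conditions meet: the MIDR is a Neoverse-N1 and CTR_EL0.DIC is set, i.e. the part advertises that instruction fetch is coherent with data and software would normally elide I-cache maintenance. Read in isolation, the cache-type test is simply (illustrative fragment, not a complete function):

/* Does this CPU advertise CTR_EL0.DIC? */
u32 ctr = read_cpuid_cachetype();
bool has_dic = ctr & BIT(CTR_DIC_SHIFT);

The .cpu_enable hook wired up below, cpu_enable_trap_ctr_access(), then traps EL0 reads of CTR_EL0 so that userspace sees a view which keeps it performing the cache maintenance the erratum requires.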
@@ -881,7 +888,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 		.desc = "EL2 vector hardening",
 		.capability = ARM64_HARDEN_EL2_VECTORS,
-		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
+		ERRATA_MIDR_RANGE_LIST(ca57_a72),
 	},
 #endif
 	{
@@ -926,6 +933,23 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.capability = ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM,
 		ERRATA_MIDR_RANGE_LIST(tx2_family_cpus),
 	},
+#endif
+#ifdef CONFIG_ARM64_ERRATUM_1542419
+	{
+		/* we depend on the firmware portion for correctness */
+		.desc = "ARM erratum 1542419 (kernel portion)",
+		.capability = ARM64_WORKAROUND_1542419,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = has_neoverse_n1_erratum_1542419,
+		.cpu_enable = cpu_enable_trap_ctr_access,
+	},
+#endif
+#ifdef CONFIG_ARM64_ERRATUM_1319367
+	{
+		.desc = "ARM erratum 1319367",
+		.capability = ARM64_WORKAROUND_1319367,
+		ERRATA_MIDR_RANGE_LIST(ca57_a72),
+	},
 #endif
 	{
 	}
@@ -982,6 +982,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 		MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
+		MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL),
 		{ /* sentinel */ }
 	};
 	char const *str = "kpti command line option";
@@ -329,7 +329,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	info->reg_cntfrq = arch_timer_get_cntfrq();
 	/*
 	 * Use the effective value of the CTR_EL0 than the raw value
-	 * exposed by the CPU. CTR_E0.IDC field value must be interpreted
+	 * exposed by the CPU. CTR_EL0.IDC field value must be interpreted
 	 * with the CLIDR_EL1 fields to avoid triggering false warnings
 	 * when there is a mismatch across the CPUs. Keep track of the
 	 * effective value of the CTR_EL0 in our internal records for
arch/arm64/kernel/entry-common.c (new file, 332 lines)
@@ -0,0 +1,332 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Exception handling code
+ *
+ * Copyright (C) 2019 ARM Ltd.
+ */
+
+#include <linux/context_tracking.h>
+#include <linux/ptrace.h>
+#include <linux/thread_info.h>
+
+#include <asm/cpufeature.h>
+#include <asm/daifflags.h>
+#include <asm/esr.h>
+#include <asm/exception.h>
+#include <asm/kprobes.h>
+#include <asm/mmu.h>
+#include <asm/sysreg.h>
+
+static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	local_daif_inherit(regs);
+	far = untagged_addr(far);
+	do_mem_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el1_abort);
+
+static void notrace el1_pc(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	local_daif_inherit(regs);
+	do_sp_pc_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el1_pc);
+
+static void el1_undef(struct pt_regs *regs)
+{
+	local_daif_inherit(regs);
+	do_undefinstr(regs);
+}
+NOKPROBE_SYMBOL(el1_undef);
+
+static void el1_inv(struct pt_regs *regs, unsigned long esr)
+{
+	local_daif_inherit(regs);
+	bad_mode(regs, 0, esr);
+}
+NOKPROBE_SYMBOL(el1_inv);
+
+static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	/*
+	 * The CPU masked interrupts, and we are leaving them masked during
+	 * do_debug_exception(). Update PMR as if we had called
+	 * local_daif_mask().
+	 */
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	do_debug_exception(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el1_dbg);
+
+asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	switch (ESR_ELx_EC(esr)) {
+	case ESR_ELx_EC_DABT_CUR:
+	case ESR_ELx_EC_IABT_CUR:
+		el1_abort(regs, esr);
+		break;
+	/*
+	 * We don't handle ESR_ELx_EC_SP_ALIGN, since we will have hit a
+	 * recursive exception when trying to push the initial pt_regs.
+	 */
+	case ESR_ELx_EC_PC_ALIGN:
+		el1_pc(regs, esr);
+		break;
+	case ESR_ELx_EC_SYS64:
+	case ESR_ELx_EC_UNKNOWN:
+		el1_undef(regs);
+		break;
+	case ESR_ELx_EC_BREAKPT_CUR:
+	case ESR_ELx_EC_SOFTSTP_CUR:
+	case ESR_ELx_EC_WATCHPT_CUR:
+	case ESR_ELx_EC_BRK64:
+		el1_dbg(regs, esr);
+		break;
+	default:
+		el1_inv(regs, esr);
+	};
+}
+NOKPROBE_SYMBOL(el1_sync_handler);
+
+static void notrace el0_da(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	far = untagged_addr(far);
+	do_mem_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el0_da);
+
+static void notrace el0_ia(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	/*
+	 * We've taken an instruction abort from userspace and not yet
+	 * re-enabled IRQs. If the address is a kernel address, apply
+	 * BP hardening prior to enabling IRQs and pre-emption.
+	 */
+	if (!is_ttbr0_addr(far))
+		arm64_apply_bp_hardening();
+
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_mem_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el0_ia);
+
+static void notrace el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_fpsimd_acc(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_fpsimd_acc);
+
+static void notrace el0_sve_acc(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_sve_acc(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_sve_acc);
+
+static void notrace el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_fpsimd_exc(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_fpsimd_exc);
+
+static void notrace el0_sys(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_sysinstr(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_sys);
+
+static void notrace el0_pc(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	if (!is_ttbr0_addr(instruction_pointer(regs)))
+		arm64_apply_bp_hardening();
+
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_sp_pc_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el0_pc);
+
+static void notrace el0_sp(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+	do_sp_pc_abort(regs->sp, esr, regs);
+}
+NOKPROBE_SYMBOL(el0_sp);
+
+static void notrace el0_undef(struct pt_regs *regs)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_undefinstr(regs);
+}
+NOKPROBE_SYMBOL(el0_undef);
+
+static void notrace el0_inv(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	bad_el0_sync(regs, 0, esr);
+}
+NOKPROBE_SYMBOL(el0_inv);
+
+static void notrace el0_dbg(struct pt_regs *regs, unsigned long esr)
+{
+	/* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
+	unsigned long far = read_sysreg(far_el1);
+
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	user_exit_irqoff();
+	do_debug_exception(far, esr, regs);
+	local_daif_restore(DAIF_PROCCTX_NOIRQ);
+}
+NOKPROBE_SYMBOL(el0_dbg);
+
+static void notrace el0_svc(struct pt_regs *regs)
+{
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	el0_svc_handler(regs);
+}
+NOKPROBE_SYMBOL(el0_svc);
+
+asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	switch (ESR_ELx_EC(esr)) {
+	case ESR_ELx_EC_SVC64:
+		el0_svc(regs);
+		break;
+	case ESR_ELx_EC_DABT_LOW:
+		el0_da(regs, esr);
+		break;
+	case ESR_ELx_EC_IABT_LOW:
+		el0_ia(regs, esr);
+		break;
+	case ESR_ELx_EC_FP_ASIMD:
+		el0_fpsimd_acc(regs, esr);
+		break;
+	case ESR_ELx_EC_SVE:
+		el0_sve_acc(regs, esr);
+		break;
+	case ESR_ELx_EC_FP_EXC64:
+		el0_fpsimd_exc(regs, esr);
+		break;
+	case ESR_ELx_EC_SYS64:
+	case ESR_ELx_EC_WFx:
+		el0_sys(regs, esr);
+		break;
+	case ESR_ELx_EC_SP_ALIGN:
+		el0_sp(regs, esr);
+		break;
+	case ESR_ELx_EC_PC_ALIGN:
+		el0_pc(regs, esr);
+		break;
+	case ESR_ELx_EC_UNKNOWN:
+		el0_undef(regs);
+		break;
+	case ESR_ELx_EC_BREAKPT_LOW:
+	case ESR_ELx_EC_SOFTSTP_LOW:
+	case ESR_ELx_EC_WATCHPT_LOW:
+	case ESR_ELx_EC_BRK64:
+		el0_dbg(regs, esr);
+		break;
+	default:
+		el0_inv(regs, esr);
+	}
+}
+NOKPROBE_SYMBOL(el0_sync_handler);
+
+#ifdef CONFIG_COMPAT
+static void notrace el0_cp15(struct pt_regs *regs, unsigned long esr)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_cp15instr(esr, regs);
+}
+NOKPROBE_SYMBOL(el0_cp15);
+
+static void notrace el0_svc_compat(struct pt_regs *regs)
+{
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	el0_svc_compat_handler(regs);
+}
+NOKPROBE_SYMBOL(el0_svc_compat);
+
+asmlinkage void notrace el0_sync_compat_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	switch (ESR_ELx_EC(esr)) {
+	case ESR_ELx_EC_SVC32:
+		el0_svc_compat(regs);
+		break;
+	case ESR_ELx_EC_DABT_LOW:
+		el0_da(regs, esr);
+		break;
+	case ESR_ELx_EC_IABT_LOW:
+		el0_ia(regs, esr);
+		break;
+	case ESR_ELx_EC_FP_ASIMD:
+		el0_fpsimd_acc(regs, esr);
+		break;
+	case ESR_ELx_EC_FP_EXC32:
+		el0_fpsimd_exc(regs, esr);
+		break;
+	case ESR_ELx_EC_PC_ALIGN:
+		el0_pc(regs, esr);
+		break;
+	case ESR_ELx_EC_UNKNOWN:
+	case ESR_ELx_EC_CP14_MR:
+	case ESR_ELx_EC_CP14_LS:
+	case ESR_ELx_EC_CP14_64:
+		el0_undef(regs);
+		break;
+	case ESR_ELx_EC_CP15_32:
+	case ESR_ELx_EC_CP15_64:
+		el0_cp15(regs, esr);
+		break;
+	case ESR_ELx_EC_BREAKPT_LOW:
+	case ESR_ELx_EC_SOFTSTP_LOW:
+	case ESR_ELx_EC_WATCHPT_LOW:
+	case ESR_ELx_EC_BKPT32:
+		el0_dbg(regs, esr);
+		break;
+	default:
+		el0_inv(regs, esr);
+	}
+}
+NOKPROBE_SYMBOL(el0_sync_compat_handler);
+#endif /* CONFIG_COMPAT */
@@ -7,10 +7,137 @@
  */
 
 #include <linux/linkage.h>
+#include <asm/asm-offsets.h>
 #include <asm/assembler.h>
 #include <asm/ftrace.h>
 #include <asm/insn.h>
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+/*
+ * Due to -fpatchable-function-entry=2, the compiler has placed two NOPs before
+ * the regular function prologue. For an enabled callsite, ftrace_init_nop() and
+ * ftrace_make_call() have patched those NOPs to:
+ *
+ * 	MOV	X9, LR
+ * 	BL	<entry>
+ *
+ * ... where <entry> is either ftrace_caller or ftrace_regs_caller.
+ *
+ * Each instrumented function follows the AAPCS, so here x0-x8 and x19-x30 are
+ * live, and x9-x18 are safe to clobber.
+ *
+ * We save the callsite's context into a pt_regs before invoking any ftrace
+ * callbacks. So that we can get a sensible backtrace, we create a stack record
+ * for the callsite and the ftrace entry assembly. This is not sufficient for
+ * reliable stacktrace: until we create the callsite stack record, its caller
+ * is missing from the LR and existing chain of frame records.
+ */
+	.macro	ftrace_regs_entry, allregs=0
+	/* Make room for pt_regs, plus a callee frame */
+	sub	sp, sp, #(S_FRAME_SIZE + 16)
+
+	/* Save function arguments (and x9 for simplicity) */
+	stp	x0, x1, [sp, #S_X0]
+	stp	x2, x3, [sp, #S_X2]
+	stp	x4, x5, [sp, #S_X4]
+	stp	x6, x7, [sp, #S_X6]
+	stp	x8, x9, [sp, #S_X8]
+
+	/* Optionally save the callee-saved registers, always save the FP */
+	.if \allregs == 1
+	stp	x10, x11, [sp, #S_X10]
+	stp	x12, x13, [sp, #S_X12]
+	stp	x14, x15, [sp, #S_X14]
+	stp	x16, x17, [sp, #S_X16]
+	stp	x18, x19, [sp, #S_X18]
+	stp	x20, x21, [sp, #S_X20]
+	stp	x22, x23, [sp, #S_X22]
+	stp	x24, x25, [sp, #S_X24]
+	stp	x26, x27, [sp, #S_X26]
+	stp	x28, x29, [sp, #S_X28]
+	.else
+	str	x29, [sp, #S_FP]
+	.endif
+
+	/* Save the callsite's SP and LR */
+	add	x10, sp, #(S_FRAME_SIZE + 16)
+	stp	x9, x10, [sp, #S_LR]
+
+	/* Save the PC after the ftrace callsite */
+	str	x30, [sp, #S_PC]
+
+	/* Create a frame record for the callsite above pt_regs */
+	stp	x29, x9, [sp, #S_FRAME_SIZE]
+	add	x29, sp, #S_FRAME_SIZE
+
+	/* Create our frame record within pt_regs. */
+	stp	x29, x30, [sp, #S_STACKFRAME]
+	add	x29, sp, #S_STACKFRAME
+	.endm
+
+ENTRY(ftrace_regs_caller)
+	ftrace_regs_entry	1
+	b	ftrace_common
+ENDPROC(ftrace_regs_caller)
+
+ENTRY(ftrace_caller)
+	ftrace_regs_entry	0
+	b	ftrace_common
+ENDPROC(ftrace_caller)
+
+ENTRY(ftrace_common)
+	sub	x0, x30, #AARCH64_INSN_SIZE	// ip (callsite's BL insn)
+	mov	x1, x9				// parent_ip (callsite's LR)
+	ldr_l	x2, function_trace_op		// op
+	mov	x3, sp				// regs
+
+GLOBAL(ftrace_call)
+	bl	ftrace_stub
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+GLOBAL(ftrace_graph_call)		// ftrace_graph_caller();
+	nop				// If enabled, this will be replaced
+					// "b ftrace_graph_caller"
+#endif
+
+/*
+ * At the callsite x0-x8 and x19-x30 were live. Any C code will have preserved
+ * x19-x29 per the AAPCS, and we created frame records upon entry, so we need
+ * to restore x0-x8, x29, and x30.
+ */
+ftrace_common_return:
+	/* Restore function arguments */
+	ldp	x0, x1, [sp]
+	ldp	x2, x3, [sp, #S_X2]
+	ldp	x4, x5, [sp, #S_X4]
+	ldp	x6, x7, [sp, #S_X6]
+	ldr	x8, [sp, #S_X8]
+
+	/* Restore the callsite's FP, LR, PC */
+	ldr	x29, [sp, #S_FP]
+	ldr	x30, [sp, #S_LR]
+	ldr	x9, [sp, #S_PC]
+
+	/* Restore the callsite's SP */
+	add	sp, sp, #S_FRAME_SIZE + 16
+
+	ret	x9
+ENDPROC(ftrace_common)
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ENTRY(ftrace_graph_caller)
+	ldr	x0, [sp, #S_PC]
+	sub	x0, x0, #AARCH64_INSN_SIZE	// ip (callsite's BL insn)
+	add	x1, sp, #S_LR			// parent_ip (callsite's LR)
+	ldr	x2, [sp, #S_FRAME_SIZE]		// parent fp (callsite's FP)
+	bl	prepare_ftrace_return
+	b	ftrace_common_return
+ENDPROC(ftrace_graph_caller)
+#else
+#endif
+
+#else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
+
 /*
  * Gcc with -pg will put the following code in the beginning of each function:
  * mov x0, x30
@@ -160,11 +287,6 @@ GLOBAL(ftrace_graph_call)		// ftrace_graph_caller();
 
 	mcount_exit
 ENDPROC(ftrace_caller)
-#endif /* CONFIG_DYNAMIC_FTRACE */
-
-ENTRY(ftrace_stub)
-	ret
-ENDPROC(ftrace_stub)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
@@ -184,7 +306,15 @@ ENTRY(ftrace_graph_caller)
 
 	mcount_exit
 ENDPROC(ftrace_graph_caller)
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+#endif /* CONFIG_DYNAMIC_FTRACE */
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
+
+ENTRY(ftrace_stub)
+	ret
+ENDPROC(ftrace_stub)
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
  * void return_to_handler(void)
  *
@@ -269,8 +269,10 @@ alternative_else_nop_endif
 alternative_if ARM64_HAS_IRQ_PRIO_MASKING
 	ldr	x20, [sp, #S_PMR_SAVE]
 	msr_s	SYS_ICC_PMR_EL1, x20
-	/* Ensure priority change is seen by redistributor */
-	dsb	sy
+	mrs_s	x21, SYS_ICC_CTLR_EL1
+	tbz	x21, #6, .L__skip_pmr_sync\@	// Check for ICC_CTLR_EL1.PMHE
+	dsb	sy				// Ensure priority change is seen by redistributor
+.L__skip_pmr_sync\@:
 alternative_else_nop_endif
 
 	ldp	x21, x22, [sp, #S_PC]		// load ELR, SPSR
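The dropped unconditional dsb sy is the "heavy barrier" mentioned in the merge summary; the new tbz skips it unless ICC_CTLR_EL1.PMHE (bit 6) says PMR participates in interrupt-signalling hints, the only case where the redistributor must observe the new priority promptly. Roughly, as pseudo-C (a sketch of the assembly above, not the in-tree GIC driver):

/* Sketch: restore PMR, syncing only when the PMR hint (PMHE) is on. */
gic_write_pmr(saved_pmr);			/* saved_pmr: hypothetical local */
if (read_sysreg_s(SYS_ICC_CTLR_EL1) & BIT(6))	/* ICC_CTLR_EL1.PMHE */
	dsb(sy);	/* make the change visible to the redistributor */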
@@ -578,76 +580,9 @@ ENDPROC(el1_error_invalid)
 	.align	6
 el1_sync:
 	kernel_entry 1
-	mrs	x1, esr_el1			// read the syndrome register
-	lsr	x24, x1, #ESR_ELx_EC_SHIFT	// exception class
-	cmp	x24, #ESR_ELx_EC_DABT_CUR	// data abort in EL1
-	b.eq	el1_da
-	cmp	x24, #ESR_ELx_EC_IABT_CUR	// instruction abort in EL1
-	b.eq	el1_ia
-	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
-	b.eq	el1_undef
-	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
-	b.eq	el1_pc
-	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL1
-	b.eq	el1_undef
-	cmp	x24, #ESR_ELx_EC_BREAKPT_CUR	// debug exception in EL1
-	b.ge	el1_dbg
-	b	el1_inv
-
-el1_ia:
-	/*
-	 * Fall through to the Data abort case
-	 */
-el1_da:
-	/*
-	 * Data abort handling
-	 */
-	mrs	x3, far_el1
-	inherit_daif	pstate=x23, tmp=x2
-	untagged_addr	x0, x3
-	mov	x2, sp				// struct pt_regs
-	bl	do_mem_abort
-
-	kernel_exit 1
-el1_pc:
-	/*
-	 * PC alignment exception handling. We don't handle SP alignment faults,
-	 * since we will have hit a recursive exception when trying to push the
-	 * initial pt_regs.
-	 */
-	mrs	x0, far_el1
-	inherit_daif	pstate=x23, tmp=x2
-	mov	x2, sp
-	bl	do_sp_pc_abort
-	ASM_BUG()
-el1_undef:
-	/*
-	 * Undefined instruction
-	 */
-	inherit_daif	pstate=x23, tmp=x2
 	mov	x0, sp
-	bl	do_undefinstr
+	bl	el1_sync_handler
 	kernel_exit 1
-el1_dbg:
-	/*
-	 * Debug exception handling
-	 */
-	cmp	x24, #ESR_ELx_EC_BRK64		// if BRK64
-	cinc	x24, x24, eq			// set bit '0'
-	tbz	x24, #0, el1_inv		// EL1 only
-	gic_prio_kentry_setup tmp=x3
-	mrs	x0, far_el1
-	mov	x2, sp				// struct pt_regs
-	bl	do_debug_exception
-	kernel_exit 1
-el1_inv:
-	// TODO: add support for undefined instructions in kernel mode
-	inherit_daif	pstate=x23, tmp=x2
-	mov	x0, sp
-	mov	x2, x1
-	mov	x1, #BAD_SYNC
-	bl	bad_mode
-	ASM_BUG()
 ENDPROC(el1_sync)
 
 	.align	6
@@ -714,71 +649,18 @@ ENDPROC(el1_irq)
 	.align	6
 el0_sync:
 	kernel_entry 0
-	mrs	x25, esr_el1			// read the syndrome register
-	lsr	x24, x25, #ESR_ELx_EC_SHIFT	// exception class
-	cmp	x24, #ESR_ELx_EC_SVC64		// SVC in 64-bit state
-	b.eq	el0_svc
-	cmp	x24, #ESR_ELx_EC_DABT_LOW	// data abort in EL0
-	b.eq	el0_da
-	cmp	x24, #ESR_ELx_EC_IABT_LOW	// instruction abort in EL0
-	b.eq	el0_ia
-	cmp	x24, #ESR_ELx_EC_FP_ASIMD	// FP/ASIMD access
-	b.eq	el0_fpsimd_acc
-	cmp	x24, #ESR_ELx_EC_SVE		// SVE access
-	b.eq	el0_sve_acc
-	cmp	x24, #ESR_ELx_EC_FP_EXC64	// FP/ASIMD exception
-	b.eq	el0_fpsimd_exc
-	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
-	ccmp	x24, #ESR_ELx_EC_WFx, #4, ne
-	b.eq	el0_sys
-	cmp	x24, #ESR_ELx_EC_SP_ALIGN	// stack alignment exception
-	b.eq	el0_sp
-	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
-	b.eq	el0_pc
-	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
-	b.ge	el0_dbg
-	b	el0_inv
+	mov	x0, sp
+	bl	el0_sync_handler
+	b	ret_to_user
 
 #ifdef CONFIG_COMPAT
 	.align	6
 el0_sync_compat:
 	kernel_entry 0, 32
-	mrs	x25, esr_el1			// read the syndrome register
-	lsr	x24, x25, #ESR_ELx_EC_SHIFT	// exception class
-	cmp	x24, #ESR_ELx_EC_SVC32		// SVC in 32-bit state
-	b.eq	el0_svc_compat
-	cmp	x24, #ESR_ELx_EC_DABT_LOW	// data abort in EL0
-	b.eq	el0_da
-	cmp	x24, #ESR_ELx_EC_IABT_LOW	// instruction abort in EL0
-	b.eq	el0_ia
-	cmp	x24, #ESR_ELx_EC_FP_ASIMD	// FP/ASIMD access
-	b.eq	el0_fpsimd_acc
-	cmp	x24, #ESR_ELx_EC_FP_EXC32	// FP/ASIMD exception
-	b.eq	el0_fpsimd_exc
-	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
-	b.eq	el0_pc
-	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_CP15_32	// CP15 MRC/MCR trap
-	b.eq	el0_cp15
-	cmp	x24, #ESR_ELx_EC_CP15_64	// CP15 MRRC/MCRR trap
-	b.eq	el0_cp15
-	cmp	x24, #ESR_ELx_EC_CP14_MR	// CP14 MRC/MCR trap
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_CP14_LS	// CP14 LDC/STC trap
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_CP14_64	// CP14 MRRC/MCRR trap
-	b.eq	el0_undef
-	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
-	b.ge	el0_dbg
-	b	el0_inv
-
-el0_svc_compat:
-	gic_prio_kentry_setup tmp=x1
 	mov	x0, sp
-	bl	el0_svc_compat_handler
+	bl	el0_sync_compat_handler
 	b	ret_to_user
+ENDPROC(el0_sync)
 
 	.align	6
 el0_irq_compat:
@@ -788,140 +670,8 @@ el0_irq_compat:
 el0_error_compat:
 	kernel_entry 0, 32
 	b	el0_error_naked
-
-el0_cp15:
-	/*
-	 * Trapped CP15 (MRC, MCR, MRRC, MCRR) instructions
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_cp15instr
-	b	ret_to_user
 #endif
 
-el0_da:
-	/*
-	 * Data abort handling
-	 */
-	mrs	x26, far_el1
-	ct_user_exit_irqoff
-	enable_daif
-	untagged_addr	x0, x26
-	mov	x1, x25
-	mov	x2, sp
-	bl	do_mem_abort
-	b	ret_to_user
-el0_ia:
-	/*
-	 * Instruction abort handling
-	 */
-	mrs	x26, far_el1
-	gic_prio_kentry_setup tmp=x0
-	ct_user_exit_irqoff
-	enable_da_f
-#ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_off
-#endif
-	mov	x0, x26
-	mov	x1, x25
-	mov	x2, sp
-	bl	do_el0_ia_bp_hardening
-	b	ret_to_user
-el0_fpsimd_acc:
-	/*
-	 * Floating Point or Advanced SIMD access
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_fpsimd_acc
-	b	ret_to_user
-el0_sve_acc:
-	/*
-	 * Scalable Vector Extension access
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_sve_acc
-	b	ret_to_user
-el0_fpsimd_exc:
-	/*
-	 * Floating Point, Advanced SIMD or SVE exception
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_fpsimd_exc
-	b	ret_to_user
-el0_sp:
-	ldr	x26, [sp, #S_SP]
-	b	el0_sp_pc
-el0_pc:
-	mrs	x26, far_el1
-el0_sp_pc:
-	/*
-	 * Stack or PC alignment exception handling
-	 */
-	gic_prio_kentry_setup tmp=x0
-	ct_user_exit_irqoff
-	enable_da_f
-#ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_off
-#endif
-	mov	x0, x26
-	mov	x1, x25
-	mov	x2, sp
-	bl	do_sp_pc_abort
-	b	ret_to_user
-el0_undef:
-	/*
-	 * Undefined instruction
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, sp
-	bl	do_undefinstr
-	b	ret_to_user
-el0_sys:
-	/*
-	 * System instructions, for trapped cache maintenance instructions
-	 */
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, x25
-	mov	x1, sp
-	bl	do_sysinstr
-	b	ret_to_user
-el0_dbg:
-	/*
-	 * Debug exception handling
-	 */
-	tbnz	x24, #0, el0_inv		// EL0 only
-	mrs	x24, far_el1
-	gic_prio_kentry_setup tmp=x3
-	ct_user_exit_irqoff
-	mov	x0, x24
-	mov	x1, x25
-	mov	x2, sp
-	bl	do_debug_exception
-	enable_da_f
-	b	ret_to_user
-el0_inv:
-	ct_user_exit_irqoff
-	enable_daif
-	mov	x0, sp
-	mov	x1, #BAD_SYNC
-	mov	x2, x25
-	bl	bad_el0_sync
-	b	ret_to_user
-ENDPROC(el0_sync)
 
 	.align	6
 el0_irq:
 	kernel_entry 0
@@ -999,17 +749,6 @@ finish_ret_to_user:
 	kernel_exit 0
 ENDPROC(ret_to_user)
 
-/*
- * SVC handler.
- */
-	.align	6
-el0_svc:
-	gic_prio_kentry_setup tmp=x1
-	mov	x0, sp
-	bl	el0_svc_handler
-	b	ret_to_user
-ENDPROC(el0_svc)
-
 	.popsection				// .entry.text
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
@@ -920,7 +920,7 @@ void fpsimd_release_task(struct task_struct *dead_task)
  * would have disabled the SVE access trap for userspace during
  * ret_to_user, making an SVE access trap impossible in that case.
  */
-asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
+void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 {
 	/* Even if we chose not to use SVE, the hardware could still trap: */
 	if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) {
@@ -947,7 +947,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 /*
  * Trapped FP/ASIMD access.
  */
-asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
+void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
 {
 	/* TODO: implement lazy context saving/restoring */
 	WARN_ON(1);
@@ -956,7 +956,7 @@ asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs)
 /*
  * Raise a SIGFPE for the current process.
  */
-asmlinkage void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs)
+void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs)
 {
 	unsigned int si_code = FPE_FLTUNK;
 
@@ -62,6 +62,19 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 	return ftrace_modify_code(pc, 0, new, false);
 }
 
+static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
+{
+#ifdef CONFIG_ARM64_MODULE_PLTS
+	struct plt_entry *plt = mod->arch.ftrace_trampolines;
+
+	if (addr == FTRACE_ADDR)
+		return &plt[FTRACE_PLT_IDX];
+	if (addr == FTRACE_REGS_ADDR && IS_ENABLED(CONFIG_FTRACE_WITH_REGS))
+		return &plt[FTRACE_REGS_PLT_IDX];
+#endif
+	return NULL;
+}
+
 /*
  * Turn on the call to ftrace_caller() in instrumented function
  */
@@ -72,9 +85,11 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 	long offset = (long)pc - (long)addr;
 
 	if (offset < -SZ_128M || offset >= SZ_128M) {
-#ifdef CONFIG_ARM64_MODULE_PLTS
-		struct plt_entry trampoline, *dst;
 		struct module *mod;
+		struct plt_entry *plt;
+
+		if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
+			return -EINVAL;
 
 		/*
 		 * On kernels that support module PLTs, the offset between the
@@ -93,49 +108,13 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 		if (WARN_ON(!mod))
 			return -EINVAL;
 
-		/*
-		 * There is only one ftrace trampoline per module. For now,
-		 * this is not a problem since on arm64, all dynamic ftrace
-		 * invocations are routed via ftrace_caller(). This will need
-		 * to be revisited if support for multiple ftrace entry points
-		 * is added in the future, but for now, the pr_err() below
-		 * deals with a theoretical issue only.
-		 *
-		 * Note that PLTs are place relative, and plt_entries_equal()
-		 * checks whether they point to the same target. Here, we need
-		 * to check if the actual opcodes are in fact identical,
-		 * regardless of the offset in memory so use memcmp() instead.
-		 */
-		dst = mod->arch.ftrace_trampoline;
-		trampoline = get_plt_entry(addr, dst);
-		if (memcmp(dst, &trampoline, sizeof(trampoline))) {
-			if (plt_entry_is_initialized(dst)) {
-				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
-				return -EINVAL;
-			}
-
-			/* point the trampoline to our ftrace entry point */
-			module_disable_ro(mod);
-			*dst = trampoline;
-			module_enable_ro(mod, true);
-
-			/*
-			 * Ensure updated trampoline is visible to instruction
-			 * fetch before we patch in the branch. Although the
-			 * architecture doesn't require an IPI in this case,
-			 * Neoverse-N1 erratum #1542419 does require one
-			 * if the TLB maintenance in module_enable_ro() is
-			 * skipped due to rodata_enabled. It doesn't seem worth
-			 * it to make it conditional given that this is
-			 * certainly not a fast-path.
-			 */
-			flush_icache_range((unsigned long)&dst[0],
-					   (unsigned long)&dst[1]);
+		plt = get_ftrace_plt(mod, addr);
+		if (!plt) {
+			pr_err("ftrace: no module PLT for %ps\n", (void *)addr);
+			return -EINVAL;
 		}
-		addr = (unsigned long)dst;
-#else /* CONFIG_ARM64_MODULE_PLTS */
-		return -EINVAL;
-#endif /* CONFIG_ARM64_MODULE_PLTS */
+
+		addr = (unsigned long)plt;
 	}
 
 	old = aarch64_insn_gen_nop();
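The SZ_128M bounds in ftrace_make_call()/ftrace_make_nop() mirror the A64 BL encoding: a signed 26-bit word immediate, i.e. +/-128MiB of reach from the callsite, beyond which the branch has to bounce through a module PLT. The same test as a stand-alone helper (hypothetical name, shown only to make the bound explicit):

#include <linux/sizes.h>

/* Can a direct BL at 'pc' reach 'addr'? (imm26 * 4 = +/-128MiB) */
static bool bl_in_range(unsigned long pc, unsigned long addr)
{
	long offset = (long)pc - (long)addr;

	return offset >= -SZ_128M && offset < SZ_128M;
}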
@@ -144,6 +123,55 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 	return ftrace_modify_code(pc, old, new, true);
 }
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+		       unsigned long addr)
+{
+	unsigned long pc = rec->ip;
+	u32 old, new;
+
+	old = aarch64_insn_gen_branch_imm(pc, old_addr,
+					  AARCH64_INSN_BRANCH_LINK);
+	new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
+
+	return ftrace_modify_code(pc, old, new, true);
+}
+
+/*
+ * The compiler has inserted two NOPs before the regular function prologue.
+ * All instrumented functions follow the AAPCS, so x0-x8 and x19-x30 are live,
+ * and x9-x18 are free for our use.
+ *
+ * At runtime we want to be able to swing a single NOP <-> BL to enable or
+ * disable the ftrace call. The BL requires us to save the original LR value,
+ * so here we insert a <MOV X9, LR> over the first NOP so the instructions
+ * before the regular prologue are:
+ *
+ * | Compiled | Disabled   | Enabled    |
+ * +----------+------------+------------+
+ * | NOP      | MOV X9, LR | MOV X9, LR |
+ * | NOP      | NOP        | BL <entry> |
+ *
+ * The LR value will be recovered by ftrace_regs_entry, and restored into LR
+ * before returning to the regular function prologue. When a function is not
+ * being traced, the MOV is not harmful given x9 is not live per the AAPCS.
+ *
+ * Note: ftrace_process_locs() has pre-adjusted rec->ip to be the address of
+ * the BL.
+ */
+int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
+{
+	unsigned long pc = rec->ip - AARCH64_INSN_SIZE;
+	u32 old, new;
+
+	old = aarch64_insn_gen_nop();
+	new = aarch64_insn_gen_move_reg(AARCH64_INSN_REG_9,
+					AARCH64_INSN_REG_LR,
+					AARCH64_INSN_VARIANT_64BIT);
+	return ftrace_modify_code(pc, old, new, true);
+}
+#endif
+
 /*
  * Turn off the call to ftrace_caller() in instrumented function
  */
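In code terms, the table above means ftrace_init_nop() writes the MOV over the first NOP exactly once, and later state changes only ever touch the second slot. A condensed sketch of generating the instruction pair; rec_ip and entry are hypothetical addresses, while both generators are real insn.c helpers (aarch64_insn_gen_move_reg() is the one added by the insn.c hunk further down):

/* Sketch: the two words patched ahead of an instrumented function. */
u32 mov = aarch64_insn_gen_move_reg(AARCH64_INSN_REG_9,
				    AARCH64_INSN_REG_LR,
				    AARCH64_INSN_VARIANT_64BIT);
u32 bl = aarch64_insn_gen_branch_imm(rec_ip, entry,
				     AARCH64_INSN_BRANCH_LINK);
/* MOV lands at rec_ip - AARCH64_INSN_SIZE, BL at rec_ip itself. */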
@@ -156,9 +184,11 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
 	long offset = (long)pc - (long)addr;
 
 	if (offset < -SZ_128M || offset >= SZ_128M) {
-#ifdef CONFIG_ARM64_MODULE_PLTS
 		u32 replaced;
 
+		if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
+			return -EINVAL;
+
 		/*
 		 * 'mod' is only set at module load time, but if we end up
 		 * dealing with an out-of-range condition, we can assume it
@@ -189,9 +219,6 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
 			return -EINVAL;
 
 		validate = false;
-#else /* CONFIG_ARM64_MODULE_PLTS */
-		return -EINVAL;
-#endif /* CONFIG_ARM64_MODULE_PLTS */
 	} else {
 		old = aarch64_insn_gen_branch_imm(pc, addr,
 						  AARCH64_INSN_BRANCH_LINK);
@@ -1268,6 +1268,19 @@ u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst,
 	return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_6, insn, shift);
 }
 
+/*
+ * MOV (register) is architecturally an alias of ORR (shifted register) where
+ * MOV <*d>, <*m> is equivalent to ORR <*d>, <*ZR>, <*m>
+ */
+u32 aarch64_insn_gen_move_reg(enum aarch64_insn_register dst,
+			      enum aarch64_insn_register src,
+			      enum aarch64_insn_variant variant)
+{
+	return aarch64_insn_gen_logical_shifted_reg(dst, AARCH64_INSN_REG_ZR,
+						    src, 0, variant,
+						    AARCH64_INSN_LOGIC_ORR);
+}
+
 u32 aarch64_insn_gen_adr(unsigned long pc, unsigned long addr,
 			 enum aarch64_insn_register reg,
 			 enum aarch64_insn_adr_type type)
@@ -19,6 +19,14 @@
 #include <asm/pgtable.h>
 #include <asm/sections.h>
 
+enum kaslr_status {
+	KASLR_ENABLED,
+	KASLR_DISABLED_CMDLINE,
+	KASLR_DISABLED_NO_SEED,
+	KASLR_DISABLED_FDT_REMAP,
+};
+
+static enum kaslr_status __initdata kaslr_status;
 u64 __ro_after_init module_alloc_base;
 u16 __initdata memstart_offset_seed;
 
@@ -91,15 +99,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	 */
 	early_fixmap_init();
 	fdt = fixmap_remap_fdt(dt_phys, &size, PAGE_KERNEL);
-	if (!fdt)
+	if (!fdt) {
+		kaslr_status = KASLR_DISABLED_FDT_REMAP;
 		return 0;
+	}
 
 	/*
 	 * Retrieve (and wipe) the seed from the FDT
 	 */
 	seed = get_kaslr_seed(fdt);
-	if (!seed)
-		return 0;
 
 	/*
 	 * Check if 'nokaslr' appears on the command line, and
@@ -107,8 +115,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	 */
 	cmdline = kaslr_get_cmdline(fdt);
 	str = strstr(cmdline, "nokaslr");
-	if (str == cmdline || (str > cmdline && *(str - 1) == ' '))
+	if (str == cmdline || (str > cmdline && *(str - 1) == ' ')) {
+		kaslr_status = KASLR_DISABLED_CMDLINE;
 		return 0;
+	}
+
+	if (!seed) {
+		kaslr_status = KASLR_DISABLED_NO_SEED;
+		return 0;
+	}
 
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
@@ -170,3 +185,24 @@ u64 __init kaslr_early_init(u64 dt_phys)
 
 	return offset;
 }
+
+static int __init kaslr_init(void)
+{
+	switch (kaslr_status) {
+	case KASLR_ENABLED:
+		pr_info("KASLR enabled\n");
+		break;
+	case KASLR_DISABLED_CMDLINE:
+		pr_info("KASLR disabled on command line\n");
+		break;
+	case KASLR_DISABLED_NO_SEED:
+		pr_warn("KASLR disabled due to lack of seed\n");
+		break;
+	case KASLR_DISABLED_FDT_REMAP:
+		pr_warn("KASLR disabled due to FDT remapping failure\n");
+		break;
+	}
+
+	return 0;
+}
+core_initcall(kaslr_init)
@@ -4,6 +4,7 @@
  */
 
 #include <linux/elf.h>
+#include <linux/ftrace.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/sort.h>
@@ -330,7 +331,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 		tramp->sh_type = SHT_NOBITS;
 		tramp->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
 		tramp->sh_addralign = __alignof__(struct plt_entry);
-		tramp->sh_size = sizeof(struct plt_entry);
+		tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry);
 	}
 
 	return 0;
@@ -9,6 +9,7 @@
 
 #include <linux/bitops.h>
 #include <linux/elf.h>
+#include <linux/ftrace.h>
 #include <linux/gfp.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
@@ -470,22 +471,58 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 	return -ENOEXEC;
 }
 
-int module_finalize(const Elf_Ehdr *hdr,
-		    const Elf_Shdr *sechdrs,
-		    struct module *me)
+static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
+				    const Elf_Shdr *sechdrs,
+				    const char *name)
 {
 	const Elf_Shdr *s, *se;
 	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
 
 	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
-		if (strcmp(".altinstructions", secstrs + s->sh_name) == 0)
-			apply_alternatives_module((void *)s->sh_addr, s->sh_size);
-#ifdef CONFIG_ARM64_MODULE_PLTS
-		if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) &&
-		    !strcmp(".text.ftrace_trampoline", secstrs + s->sh_name))
-			me->arch.ftrace_trampoline = (void *)s->sh_addr;
-#endif
+		if (strcmp(name, secstrs + s->sh_name) == 0)
+			return s;
 	}
 
+	return NULL;
+}
+
+static inline void __init_plt(struct plt_entry *plt, unsigned long addr)
+{
+	*plt = get_plt_entry(addr, plt);
+}
+
+static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
+				  const Elf_Shdr *sechdrs,
+				  struct module *mod)
+{
+#if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
+	const Elf_Shdr *s;
+	struct plt_entry *plts;
+
+	s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
+	if (!s)
+		return -ENOEXEC;
+
+	plts = (void *)s->sh_addr;
+
+	__init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR);
+
+	if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
+		__init_plt(&plts[FTRACE_REGS_PLT_IDX], FTRACE_REGS_ADDR);
+
+	mod->arch.ftrace_trampolines = plts;
+#endif
 	return 0;
 }
+
+int module_finalize(const Elf_Ehdr *hdr,
+		    const Elf_Shdr *sechdrs,
+		    struct module *me)
+{
+	const Elf_Shdr *s;
+
+	s = find_section(hdr, sechdrs, ".altinstructions");
+	if (s)
+		apply_alternatives_module((void *)s->sh_addr, s->sh_size);
+
+	return module_init_ftrace_plt(hdr, sechdrs, me);
+}
@@ -158,133 +158,74 @@ armv8pmu_events_sysfs_show(struct device *dev,
 	return sprintf(page, "event=0x%03llx\n", pmu_attr->id);
 }
 
 #define ARMV8_EVENT_ATTR(name, config)					\
-	PMU_EVENT_ATTR(name, armv8_event_attr_##name,			\
-		       config, armv8pmu_events_sysfs_show)
-
-ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_SW_INCR);
-ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL);
-ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL);
-ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL);
-ARMV8_EVENT_ATTR(l1d_cache, ARMV8_PMUV3_PERFCTR_L1D_CACHE);
-ARMV8_EVENT_ATTR(l1d_tlb_refill, ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL);
-ARMV8_EVENT_ATTR(ld_retired, ARMV8_PMUV3_PERFCTR_LD_RETIRED);
-ARMV8_EVENT_ATTR(st_retired, ARMV8_PMUV3_PERFCTR_ST_RETIRED);
-ARMV8_EVENT_ATTR(inst_retired, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
-ARMV8_EVENT_ATTR(exc_taken, ARMV8_PMUV3_PERFCTR_EXC_TAKEN);
-ARMV8_EVENT_ATTR(exc_return, ARMV8_PMUV3_PERFCTR_EXC_RETURN);
-ARMV8_EVENT_ATTR(cid_write_retired, ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED);
-ARMV8_EVENT_ATTR(pc_write_retired, ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED);
-ARMV8_EVENT_ATTR(br_immed_retired, ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED);
-ARMV8_EVENT_ATTR(br_return_retired, ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED);
-ARMV8_EVENT_ATTR(unaligned_ldst_retired, ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED);
-ARMV8_EVENT_ATTR(br_mis_pred, ARMV8_PMUV3_PERFCTR_BR_MIS_PRED);
-ARMV8_EVENT_ATTR(cpu_cycles, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
-ARMV8_EVENT_ATTR(br_pred, ARMV8_PMUV3_PERFCTR_BR_PRED);
-ARMV8_EVENT_ATTR(mem_access, ARMV8_PMUV3_PERFCTR_MEM_ACCESS);
-ARMV8_EVENT_ATTR(l1i_cache, ARMV8_PMUV3_PERFCTR_L1I_CACHE);
-ARMV8_EVENT_ATTR(l1d_cache_wb, ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB);
-ARMV8_EVENT_ATTR(l2d_cache, ARMV8_PMUV3_PERFCTR_L2D_CACHE);
-ARMV8_EVENT_ATTR(l2d_cache_refill, ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL);
-ARMV8_EVENT_ATTR(l2d_cache_wb, ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB);
-ARMV8_EVENT_ATTR(bus_access, ARMV8_PMUV3_PERFCTR_BUS_ACCESS);
-ARMV8_EVENT_ATTR(memory_error, ARMV8_PMUV3_PERFCTR_MEMORY_ERROR);
-ARMV8_EVENT_ATTR(inst_spec, ARMV8_PMUV3_PERFCTR_INST_SPEC);
-ARMV8_EVENT_ATTR(ttbr_write_retired, ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED);
-ARMV8_EVENT_ATTR(bus_cycles, ARMV8_PMUV3_PERFCTR_BUS_CYCLES);
-/* Don't expose the chain event in /sys, since it's useless in isolation */
-ARMV8_EVENT_ATTR(l1d_cache_allocate, ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE);
-ARMV8_EVENT_ATTR(l2d_cache_allocate, ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE);
-ARMV8_EVENT_ATTR(br_retired, ARMV8_PMUV3_PERFCTR_BR_RETIRED);
-ARMV8_EVENT_ATTR(br_mis_pred_retired, ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED);
-ARMV8_EVENT_ATTR(stall_frontend, ARMV8_PMUV3_PERFCTR_STALL_FRONTEND);
-ARMV8_EVENT_ATTR(stall_backend, ARMV8_PMUV3_PERFCTR_STALL_BACKEND);
-ARMV8_EVENT_ATTR(l1d_tlb, ARMV8_PMUV3_PERFCTR_L1D_TLB);
-ARMV8_EVENT_ATTR(l1i_tlb, ARMV8_PMUV3_PERFCTR_L1I_TLB);
-ARMV8_EVENT_ATTR(l2i_cache, ARMV8_PMUV3_PERFCTR_L2I_CACHE);
-ARMV8_EVENT_ATTR(l2i_cache_refill, ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL);
-ARMV8_EVENT_ATTR(l3d_cache_allocate, ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE);
-ARMV8_EVENT_ATTR(l3d_cache_refill, ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL);
-ARMV8_EVENT_ATTR(l3d_cache, ARMV8_PMUV3_PERFCTR_L3D_CACHE);
-ARMV8_EVENT_ATTR(l3d_cache_wb, ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB);
-ARMV8_EVENT_ATTR(l2d_tlb_refill, ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL);
-ARMV8_EVENT_ATTR(l2i_tlb_refill, ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL);
-ARMV8_EVENT_ATTR(l2d_tlb, ARMV8_PMUV3_PERFCTR_L2D_TLB);
-ARMV8_EVENT_ATTR(l2i_tlb, ARMV8_PMUV3_PERFCTR_L2I_TLB);
-ARMV8_EVENT_ATTR(remote_access, ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS);
-ARMV8_EVENT_ATTR(ll_cache, ARMV8_PMUV3_PERFCTR_LL_CACHE);
-ARMV8_EVENT_ATTR(ll_cache_miss, ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS);
-ARMV8_EVENT_ATTR(dtlb_walk, ARMV8_PMUV3_PERFCTR_DTLB_WALK);
-ARMV8_EVENT_ATTR(itlb_walk, ARMV8_PMUV3_PERFCTR_ITLB_WALK);
-ARMV8_EVENT_ATTR(ll_cache_rd, ARMV8_PMUV3_PERFCTR_LL_CACHE_RD);
-ARMV8_EVENT_ATTR(ll_cache_miss_rd, ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD);
-ARMV8_EVENT_ATTR(remote_access_rd, ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD);
-ARMV8_EVENT_ATTR(sample_pop, ARMV8_SPE_PERFCTR_SAMPLE_POP);
-ARMV8_EVENT_ATTR(sample_feed, ARMV8_SPE_PERFCTR_SAMPLE_FEED);
-ARMV8_EVENT_ATTR(sample_filtrate, ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE);
-ARMV8_EVENT_ATTR(sample_collision, ARMV8_SPE_PERFCTR_SAMPLE_COLLISION);
+	(&((struct perf_pmu_events_attr) {				\
+		.attr = __ATTR(name, 0444, armv8pmu_events_sysfs_show, NULL), \
+		.id = config,						\
+	}).attr.attr)
 
 static struct attribute *armv8_pmuv3_event_attrs[] = {
-	&armv8_event_attr_sw_incr.attr.attr,
-	&armv8_event_attr_l1i_cache_refill.attr.attr,
-	&armv8_event_attr_l1i_tlb_refill.attr.attr,
-	&armv8_event_attr_l1d_cache_refill.attr.attr,
-	&armv8_event_attr_l1d_cache.attr.attr,
-	&armv8_event_attr_l1d_tlb_refill.attr.attr,
-	&armv8_event_attr_ld_retired.attr.attr,
-	&armv8_event_attr_st_retired.attr.attr,
-	&armv8_event_attr_inst_retired.attr.attr,
-	&armv8_event_attr_exc_taken.attr.attr,
-	&armv8_event_attr_exc_return.attr.attr,
-	&armv8_event_attr_cid_write_retired.attr.attr,
-	&armv8_event_attr_pc_write_retired.attr.attr,
-	&armv8_event_attr_br_immed_retired.attr.attr,
-	&armv8_event_attr_br_return_retired.attr.attr,
-	&armv8_event_attr_unaligned_ldst_retired.attr.attr,
-	&armv8_event_attr_br_mis_pred.attr.attr,
-	&armv8_event_attr_cpu_cycles.attr.attr,
-	&armv8_event_attr_br_pred.attr.attr,
-	&armv8_event_attr_mem_access.attr.attr,
-	&armv8_event_attr_l1i_cache.attr.attr,
-	&armv8_event_attr_l1d_cache_wb.attr.attr,
-	&armv8_event_attr_l2d_cache.attr.attr,
-	&armv8_event_attr_l2d_cache_refill.attr.attr,
-	&armv8_event_attr_l2d_cache_wb.attr.attr,
-	&armv8_event_attr_bus_access.attr.attr,
-	&armv8_event_attr_memory_error.attr.attr,
-	&armv8_event_attr_inst_spec.attr.attr,
-	&armv8_event_attr_ttbr_write_retired.attr.attr,
-	&armv8_event_attr_bus_cycles.attr.attr,
-	&armv8_event_attr_l1d_cache_allocate.attr.attr,
-	&armv8_event_attr_l2d_cache_allocate.attr.attr,
-	&armv8_event_attr_br_retired.attr.attr,
-	&armv8_event_attr_br_mis_pred_retired.attr.attr,
-	&armv8_event_attr_stall_frontend.attr.attr,
-	&armv8_event_attr_stall_backend.attr.attr,
-	&armv8_event_attr_l1d_tlb.attr.attr,
-	&armv8_event_attr_l1i_tlb.attr.attr,
-	&armv8_event_attr_l2i_cache.attr.attr,
-	&armv8_event_attr_l2i_cache_refill.attr.attr,
-	&armv8_event_attr_l3d_cache_allocate.attr.attr,
-	&armv8_event_attr_l3d_cache_refill.attr.attr,
+	ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_SW_INCR),
+	ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL),
+	ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL),
+	ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL),
+	ARMV8_EVENT_ATTR(l1d_cache, ARMV8_PMUV3_PERFCTR_L1D_CACHE),
+	ARMV8_EVENT_ATTR(l1d_tlb_refill, ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL),
+	ARMV8_EVENT_ATTR(ld_retired, ARMV8_PMUV3_PERFCTR_LD_RETIRED),
+	ARMV8_EVENT_ATTR(st_retired, ARMV8_PMUV3_PERFCTR_ST_RETIRED),
+	ARMV8_EVENT_ATTR(inst_retired, ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	ARMV8_EVENT_ATTR(exc_taken, ARMV8_PMUV3_PERFCTR_EXC_TAKEN),
+	ARMV8_EVENT_ATTR(exc_return, ARMV8_PMUV3_PERFCTR_EXC_RETURN),
+	ARMV8_EVENT_ATTR(cid_write_retired, ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED),
+	ARMV8_EVENT_ATTR(pc_write_retired, ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED),
+	ARMV8_EVENT_ATTR(br_immed_retired, ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED),
+	ARMV8_EVENT_ATTR(br_return_retired, ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED),
+	ARMV8_EVENT_ATTR(unaligned_ldst_retired, ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED),
+	ARMV8_EVENT_ATTR(br_mis_pred, ARMV8_PMUV3_PERFCTR_BR_MIS_PRED),
+	ARMV8_EVENT_ATTR(cpu_cycles, ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	ARMV8_EVENT_ATTR(br_pred, ARMV8_PMUV3_PERFCTR_BR_PRED),
+	ARMV8_EVENT_ATTR(mem_access, ARMV8_PMUV3_PERFCTR_MEM_ACCESS),
+	ARMV8_EVENT_ATTR(l1i_cache, ARMV8_PMUV3_PERFCTR_L1I_CACHE),
+	ARMV8_EVENT_ATTR(l1d_cache_wb, ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB),
+	ARMV8_EVENT_ATTR(l2d_cache, ARMV8_PMUV3_PERFCTR_L2D_CACHE),
+	ARMV8_EVENT_ATTR(l2d_cache_refill, ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL),
+	ARMV8_EVENT_ATTR(l2d_cache_wb, ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB),
+	ARMV8_EVENT_ATTR(bus_access, ARMV8_PMUV3_PERFCTR_BUS_ACCESS),
+	ARMV8_EVENT_ATTR(memory_error, ARMV8_PMUV3_PERFCTR_MEMORY_ERROR),
+	ARMV8_EVENT_ATTR(inst_spec, ARMV8_PMUV3_PERFCTR_INST_SPEC),
+	ARMV8_EVENT_ATTR(ttbr_write_retired, ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED),
+	ARMV8_EVENT_ATTR(bus_cycles, ARMV8_PMUV3_PERFCTR_BUS_CYCLES),
+	/* Don't expose the chain event in /sys, since it's useless in isolation */
+	ARMV8_EVENT_ATTR(l1d_cache_allocate, ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE),
+	ARMV8_EVENT_ATTR(l2d_cache_allocate, ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE),
+	ARMV8_EVENT_ATTR(br_retired, ARMV8_PMUV3_PERFCTR_BR_RETIRED),
+	ARMV8_EVENT_ATTR(br_mis_pred_retired, ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED),
+	ARMV8_EVENT_ATTR(stall_frontend, ARMV8_PMUV3_PERFCTR_STALL_FRONTEND),
+	ARMV8_EVENT_ATTR(stall_backend, ARMV8_PMUV3_PERFCTR_STALL_BACKEND),
+	ARMV8_EVENT_ATTR(l1d_tlb, ARMV8_PMUV3_PERFCTR_L1D_TLB),
+	ARMV8_EVENT_ATTR(l1i_tlb, ARMV8_PMUV3_PERFCTR_L1I_TLB),
+	ARMV8_EVENT_ATTR(l2i_cache, ARMV8_PMUV3_PERFCTR_L2I_CACHE),
+	ARMV8_EVENT_ATTR(l2i_cache_refill, ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL),
|
ARMV8_EVENT_ATTR(l3d_cache_allocate, ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE),
|
||||||
&armv8_event_attr_l3d_cache.attr.attr,
|
ARMV8_EVENT_ATTR(l3d_cache_refill, ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL),
|
||||||
&armv8_event_attr_l3d_cache_wb.attr.attr,
|
ARMV8_EVENT_ATTR(l3d_cache, ARMV8_PMUV3_PERFCTR_L3D_CACHE),
|
||||||
&armv8_event_attr_l2d_tlb_refill.attr.attr,
|
ARMV8_EVENT_ATTR(l3d_cache_wb, ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB),
|
||||||
&armv8_event_attr_l2i_tlb_refill.attr.attr,
|
ARMV8_EVENT_ATTR(l2d_tlb_refill, ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL),
|
||||||
&armv8_event_attr_l2d_tlb.attr.attr,
|
ARMV8_EVENT_ATTR(l2i_tlb_refill, ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL),
|
||||||
&armv8_event_attr_l2i_tlb.attr.attr,
|
ARMV8_EVENT_ATTR(l2d_tlb, ARMV8_PMUV3_PERFCTR_L2D_TLB),
|
||||||
&armv8_event_attr_remote_access.attr.attr,
|
ARMV8_EVENT_ATTR(l2i_tlb, ARMV8_PMUV3_PERFCTR_L2I_TLB),
|
||||||
&armv8_event_attr_ll_cache.attr.attr,
|
ARMV8_EVENT_ATTR(remote_access, ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS),
|
||||||
&armv8_event_attr_ll_cache_miss.attr.attr,
|
ARMV8_EVENT_ATTR(ll_cache, ARMV8_PMUV3_PERFCTR_LL_CACHE),
|
||||||
&armv8_event_attr_dtlb_walk.attr.attr,
|
ARMV8_EVENT_ATTR(ll_cache_miss, ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS),
|
||||||
&armv8_event_attr_itlb_walk.attr.attr,
|
ARMV8_EVENT_ATTR(dtlb_walk, ARMV8_PMUV3_PERFCTR_DTLB_WALK),
|
||||||
&armv8_event_attr_ll_cache_rd.attr.attr,
|
ARMV8_EVENT_ATTR(itlb_walk, ARMV8_PMUV3_PERFCTR_ITLB_WALK),
|
||||||
&armv8_event_attr_ll_cache_miss_rd.attr.attr,
|
ARMV8_EVENT_ATTR(ll_cache_rd, ARMV8_PMUV3_PERFCTR_LL_CACHE_RD),
|
||||||
&armv8_event_attr_remote_access_rd.attr.attr,
|
ARMV8_EVENT_ATTR(ll_cache_miss_rd, ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD),
|
||||||
&armv8_event_attr_sample_pop.attr.attr,
|
ARMV8_EVENT_ATTR(remote_access_rd, ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD),
|
||||||
&armv8_event_attr_sample_feed.attr.attr,
|
ARMV8_EVENT_ATTR(sample_pop, ARMV8_SPE_PERFCTR_SAMPLE_POP),
|
||||||
&armv8_event_attr_sample_filtrate.attr.attr,
|
ARMV8_EVENT_ATTR(sample_feed, ARMV8_SPE_PERFCTR_SAMPLE_FEED),
|
||||||
&armv8_event_attr_sample_collision.attr.attr,
|
ARMV8_EVENT_ATTR(sample_filtrate, ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE),
|
||||||
|
ARMV8_EVENT_ATTR(sample_collision, ARMV8_SPE_PERFCTR_SAMPLE_COLLISION),
|
||||||
NULL,
|
NULL,
|
||||||
};
|
};
|
||||||
|
|
||||||
|
@@ -455,10 +455,6 @@ int __init arch_populate_kprobe_blacklist(void)
 					(unsigned long)__irqentry_text_end);
 	if (ret)
 		return ret;
-	ret = kprobe_add_area_blacklist((unsigned long)__exception_text_start,
-					(unsigned long)__exception_text_end);
-	if (ret)
-		return ret;
 	ret = kprobe_add_area_blacklist((unsigned long)__idmap_text_start,
 					(unsigned long)__idmap_text_end);
 	if (ret)
@@ -81,7 +81,8 @@ static void cpu_psci_cpu_die(unsigned int cpu)
 
 static int cpu_psci_cpu_kill(unsigned int cpu)
 {
-	int err, i;
+	int err;
+	unsigned long start, end;
 
 	if (!psci_ops.affinity_info)
 		return 0;
@@ -91,16 +92,18 @@ static int cpu_psci_cpu_kill(unsigned int cpu)
 	 * while it is dying. So, try again a few times.
 	 */
 
-	for (i = 0; i < 10; i++) {
+	start = jiffies;
+	end = start + msecs_to_jiffies(100);
+	do {
 		err = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
 		if (err == PSCI_0_2_AFFINITY_LEVEL_OFF) {
-			pr_info("CPU%d killed.\n", cpu);
+			pr_info("CPU%d killed (polled %d ms)\n", cpu,
+				jiffies_to_msecs(jiffies - start));
 			return 0;
 		}
 
-		msleep(10);
-		pr_info("Retrying again to check for CPU kill\n");
-	}
+		usleep_range(100, 1000);
+	} while (time_before(jiffies, end));
 
 	pr_warn("CPU%d may not have shut down cleanly (AFFINITY_INFO reports %d)\n",
 		cpu, err);
@@ -2,6 +2,7 @@
 // Copyright (C) 2017 Arm Ltd.
 #define pr_fmt(fmt) "sdei: " fmt
 
+#include <linux/arm-smccc.h>
 #include <linux/arm_sdei.h>
 #include <linux/hardirq.h>
 #include <linux/irqflags.h>
@@ -161,7 +162,7 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 		return 0;
 	}
 
-	sdei_exit_mode = (conduit == CONDUIT_HVC) ? SDEI_EXIT_HVC : SDEI_EXIT_SMC;
+	sdei_exit_mode = (conduit == SMCCC_CONDUIT_HVC) ? SDEI_EXIT_HVC : SDEI_EXIT_SMC;
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	if (arm64_kernel_unmapped_at_el0()) {
@@ -8,6 +8,7 @@
  */
 
 #include <linux/compat.h>
+#include <linux/cpufeature.h>
 #include <linux/personality.h>
 #include <linux/sched.h>
 #include <linux/sched/signal.h>
@@ -17,6 +18,7 @@
 
 #include <asm/cacheflush.h>
 #include <asm/system_misc.h>
+#include <asm/tlbflush.h>
 #include <asm/unistd.h>
 
 static long
@@ -30,6 +32,15 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
 		if (fatal_signal_pending(current))
 			return 0;
 
+		if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
+			/*
+			 * The workaround requires an inner-shareable tlbi.
+			 * We pick the reserved-ASID to minimise the impact.
+			 */
+			__tlbi(aside1is, __TLBI_VADDR(0, 0));
+			dsb(ish);
+		}
+
 		ret = __flush_cache_user_range(start, start + chunk);
 		if (ret)
 			return ret;
@@ -154,14 +154,14 @@ static inline void sve_user_discard(void)
 	sve_user_disable();
 }
 
-asmlinkage void el0_svc_handler(struct pt_regs *regs)
+void el0_svc_handler(struct pt_regs *regs)
 {
 	sve_user_discard();
 	el0_svc_common(regs, regs->regs[8], __NR_syscalls, sys_call_table);
 }
 
 #ifdef CONFIG_COMPAT
-asmlinkage void el0_svc_compat_handler(struct pt_regs *regs)
+void el0_svc_compat_handler(struct pt_regs *regs)
 {
 	el0_svc_common(regs, regs->regs[7], __NR_compat_syscalls,
 		       compat_sys_call_table);
@@ -35,6 +35,7 @@
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
 #include <asm/insn.h>
+#include <asm/kprobes.h>
 #include <asm/traps.h>
 #include <asm/smp.h>
 #include <asm/stack_pointer.h>
@@ -393,7 +394,7 @@ void arm64_notify_segfault(unsigned long addr)
 	force_signal_inject(SIGSEGV, code, addr);
 }
 
-asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
+void do_undefinstr(struct pt_regs *regs)
 {
 	/* check for AArch32 breakpoint instructions */
 	if (!aarch32_break_handler(regs))
@@ -405,6 +406,7 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
 	BUG_ON(!user_mode(regs));
 	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
 }
+NOKPROBE_SYMBOL(do_undefinstr);
 
 #define __user_cache_maint(insn, address, res) \
 	if (address >= user_addr_max()) { \
@@ -470,6 +472,15 @@ static void ctr_read_handler(unsigned int esr, struct pt_regs *regs)
 	int rt = ESR_ELx_SYS64_ISS_RT(esr);
 	unsigned long val = arm64_ftr_reg_user_value(&arm64_ftr_reg_ctrel0);
 
+	if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
+		/* Hide DIC so that we can trap the unnecessary maintenance...*/
+		val &= ~BIT(CTR_DIC_SHIFT);
+
+		/* ... and fake IminLine to reduce the number of traps. */
+		val &= ~CTR_IMINLINE_MASK;
+		val |= (PAGE_SHIFT - 2) & CTR_IMINLINE_MASK;
+	}
+
 	pt_regs_write_reg(regs, rt, val);
 
 	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
@@ -667,7 +678,7 @@ static const struct sys64_hook cp15_64_hooks[] = {
 	{},
 };
 
-asmlinkage void __exception do_cp15instr(unsigned int esr, struct pt_regs *regs)
+void do_cp15instr(unsigned int esr, struct pt_regs *regs)
 {
 	const struct sys64_hook *hook, *hook_base;
 
@@ -705,9 +716,10 @@ asmlinkage void __exception do_cp15instr(unsigned int esr, struct pt_regs *regs)
 	 */
 	do_undefinstr(regs);
 }
+NOKPROBE_SYMBOL(do_cp15instr);
 #endif
 
-asmlinkage void __exception do_sysinstr(unsigned int esr, struct pt_regs *regs)
+void do_sysinstr(unsigned int esr, struct pt_regs *regs)
 {
 	const struct sys64_hook *hook;
 
@@ -724,6 +736,7 @@ asmlinkage void __exception do_sysinstr(unsigned int esr, struct pt_regs *regs)
 	 */
 	do_undefinstr(regs);
 }
+NOKPROBE_SYMBOL(do_sysinstr);
 
 static const char *esr_class_str[] = {
 	[0 ... ESR_ELx_EC_MAX] = "UNRECOGNIZED EC",
@@ -793,7 +806,7 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
 * bad_el0_sync handles unexpected, but potentially recoverable synchronous
 * exceptions taken from EL0. Unlike bad_mode, this returns.
 */
-asmlinkage void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr)
+void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr)
 {
 	void __user *pc = (void __user *)instruction_pointer(regs);
 
@@ -111,9 +111,6 @@ SECTIONS
 	}
 	.text : { /* Real text segment */
 		_stext = .; /* Text and read-only data */
-			__exception_text_start = .;
-			*(.exception.text)
-			__exception_text_end = .;
 			IRQENTRY_TEXT
 			SOFTIRQENTRY_TEXT
 			ENTRY_TEXT
@@ -12,7 +12,7 @@
 
 #include <kvm/arm_psci.h>
 
-#include <asm/arch_gicv3.h>
+#include <asm/barrier.h>
 #include <asm/cpufeature.h>
 #include <asm/kprobes.h>
 #include <asm/kvm_asm.h>
@@ -118,6 +118,20 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
 	}
 
 	write_sysreg(val, cptr_el2);
+
+	if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
+		struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
+
+		isb();
+		/*
+		 * At this stage, and thanks to the above isb(), S2 is
+		 * configured and enabled. We can now restore the guest's S1
+		 * configuration: SCTLR, and only then TCR.
+		 */
+		write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR);
+		isb();
+		write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR);
+	}
 }
 
 static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
@@ -159,6 +173,23 @@ static void __hyp_text __deactivate_traps_nvhe(void)
 {
 	u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
+	if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
+		u64 val;
+
+		/*
+		 * Set the TCR and SCTLR registers in the exact opposite
+		 * sequence as __activate_traps_nvhe (first prevent walks,
+		 * then force the MMU on). A generous sprinkling of isb()
+		 * ensure that things happen in this exact order.
+		 */
+		val = read_sysreg_el1(SYS_TCR);
+		write_sysreg_el1(val | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR);
+		isb();
+		val = read_sysreg_el1(SYS_SCTLR);
+		write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR);
+		isb();
+	}
+
 	__deactivate_traps_common();
 
 	mdcr_el2 &= MDCR_EL2_HPMN_MASK;
@@ -657,7 +688,7 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	 */
 	if (system_uses_irq_prio_masking()) {
 		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
-		dsb(sy);
+		pmr_sync();
 	}
 
 	vcpu = kern_hyp_va(vcpu);
@@ -670,18 +701,23 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 
 	__sysreg_save_state_nvhe(host_ctxt);
 
+	/*
+	 * We must restore the 32-bit state before the sysregs, thanks
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
+	 *
+	 * Also, and in order to be able to deal with erratum #1319537 (A57)
+	 * and #1319367 (A72), we must ensure that all VM-related sysreg are
+	 * restored before we enable S2 translation.
+	 */
+	__sysreg32_restore_state(vcpu);
+	__sysreg_restore_state_nvhe(guest_ctxt);
+
 	__activate_vm(kern_hyp_va(vcpu->kvm));
 	__activate_traps(vcpu);
 
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps(vcpu);
 
-	/*
-	 * We must restore the 32-bit state before the sysregs, thanks
-	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
-	 */
-	__sysreg32_restore_state(vcpu);
-	__sysreg_restore_state_nvhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
 	__set_guest_arch_workaround_state(vcpu);
@@ -117,12 +117,26 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 {
 	write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2);
 	write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1);
-	write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR);
+
+	if (!cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
+		write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR);
+		write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR);
+	} else if (!ctxt->__hyp_running_vcpu) {
+		/*
+		 * Must only be done for guest registers, hence the context
+		 * test. We're coming from the host, so SCTLR.M is already
+		 * set. Pairs with __activate_traps_nvhe().
+		 */
+		write_sysreg_el1((ctxt->sys_regs[TCR_EL1] |
+				  TCR_EPD1_MASK | TCR_EPD0_MASK),
+				 SYS_TCR);
+		isb();
+	}
+
 	write_sysreg(ctxt->sys_regs[ACTLR_EL1], actlr_el1);
 	write_sysreg_el1(ctxt->sys_regs[CPACR_EL1], SYS_CPACR);
 	write_sysreg_el1(ctxt->sys_regs[TTBR0_EL1], SYS_TTBR0);
 	write_sysreg_el1(ctxt->sys_regs[TTBR1_EL1], SYS_TTBR1);
-	write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR);
 	write_sysreg_el1(ctxt->sys_regs[ESR_EL1], SYS_ESR);
 	write_sysreg_el1(ctxt->sys_regs[AFSR0_EL1], SYS_AFSR0);
 	write_sysreg_el1(ctxt->sys_regs[AFSR1_EL1], SYS_AFSR1);
@@ -135,6 +149,23 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
 	write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1);
 	write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1);
 
+	if (cpus_have_const_cap(ARM64_WORKAROUND_1319367) &&
+	    ctxt->__hyp_running_vcpu) {
+		/*
+		 * Must only be done for host registers, hence the context
+		 * test. Pairs with __deactivate_traps_nvhe().
+		 */
+		isb();
+		/*
+		 * At this stage, and thanks to the above isb(), S2 is
+		 * deconfigured and disabled. We can now restore the host's
+		 * S1 configuration: SCTLR, and only then TCR.
+		 */
+		write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR);
+		isb();
+		write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR);
+	}
+
 	write_sysreg(ctxt->gp_regs.sp_el1, sp_el1);
 	write_sysreg_el1(ctxt->gp_regs.elr_el1, SYS_ELR);
 	write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR);
@@ -63,6 +63,22 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
 static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
 						  struct tlb_inv_context *cxt)
 {
+	if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
+		u64 val;
+
+		/*
+		 * For CPUs that are affected by ARM 1319367, we need to
+		 * avoid a host Stage-1 walk while we have the guest's
+		 * VMID set in the VTTBR in order to invalidate TLBs.
+		 * We're guaranteed that the S1 MMU is enabled, so we can
+		 * simply set the EPD bits to avoid any further TLB fill.
+		 */
+		val = cxt->tcr = read_sysreg_el1(SYS_TCR);
+		val |= TCR_EPD1_MASK | TCR_EPD0_MASK;
+		write_sysreg_el1(val, SYS_TCR);
+		isb();
+	}
+
 	__load_guest_stage2(kvm);
 	isb();
 }
@@ -100,6 +116,13 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
 						 struct tlb_inv_context *cxt)
 {
 	write_sysreg(0, vttbr_el2);
+
+	if (cpus_have_const_cap(ARM64_WORKAROUND_1319367)) {
+		/* Ensure write of the host VMID */
+		isb();
+		/* Restore the host's TCR_EL1 */
+		write_sysreg_el1(cxt->tcr, SYS_TCR);
+	}
 }
 
 static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
@@ -32,7 +32,8 @@
 #include <asm/daifflags.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
-#include <asm/kasan.h>
+#include <asm/kprobes.h>
+#include <asm/processor.h>
 #include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/pgtable.h>
@@ -101,18 +102,6 @@ static void mem_abort_decode(unsigned int esr)
 		data_abort_decode(esr);
 }
 
-static inline bool is_ttbr0_addr(unsigned long addr)
-{
-	/* entry assembly clears tags for TTBR0 addrs */
-	return addr < TASK_SIZE;
-}
-
-static inline bool is_ttbr1_addr(unsigned long addr)
-{
-	/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
-	return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
-}
-
 static inline unsigned long mm_to_pgd_phys(struct mm_struct *mm)
 {
 	/* Either init_pg_dir or swapper_pg_dir */
@@ -318,6 +307,8 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 	if (is_el1_permission_fault(addr, esr, regs)) {
 		if (esr & ESR_ELx_WNR)
 			msg = "write to read-only memory";
+		else if (is_el1_instruction_abort(esr))
+			msg = "execute from non-executable memory";
 		else
 			msg = "read from unreadable memory";
 	} else if (addr < PAGE_SIZE) {
@@ -736,8 +727,7 @@ static const struct fault_info fault_info[] = {
 	{ do_bad, SIGKILL, SI_KERNEL, "unknown 63" },
 };
 
-asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
-					 struct pt_regs *regs)
+void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf = esr_to_fault_info(esr);
 
@@ -753,43 +743,21 @@ asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
 	arm64_notify_die(inf->name, regs,
 			 inf->sig, inf->code, (void __user *)addr, esr);
 }
+NOKPROBE_SYMBOL(do_mem_abort);
 
-asmlinkage void __exception do_el0_irq_bp_hardening(void)
+void do_el0_irq_bp_hardening(void)
 {
 	/* PC has already been checked in entry.S */
 	arm64_apply_bp_hardening();
 }
+NOKPROBE_SYMBOL(do_el0_irq_bp_hardening);
 
-asmlinkage void __exception do_el0_ia_bp_hardening(unsigned long addr,
-						   unsigned int esr,
-						   struct pt_regs *regs)
+void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 {
-	/*
-	 * We've taken an instruction abort from userspace and not yet
-	 * re-enabled IRQs. If the address is a kernel address, apply
-	 * BP hardening prior to enabling IRQs and pre-emption.
-	 */
-	if (!is_ttbr0_addr(addr))
-		arm64_apply_bp_hardening();
-
-	local_daif_restore(DAIF_PROCCTX);
-	do_mem_abort(addr, esr, regs);
-}
-
-
-asmlinkage void __exception do_sp_pc_abort(unsigned long addr,
-					   unsigned int esr,
-					   struct pt_regs *regs)
-{
-	if (user_mode(regs)) {
-		if (!is_ttbr0_addr(instruction_pointer(regs)))
-			arm64_apply_bp_hardening();
-		local_daif_restore(DAIF_PROCCTX);
-	}
-
 	arm64_notify_die("SP/PC alignment exception", regs,
 			 SIGBUS, BUS_ADRALN, (void __user *)addr, esr);
 }
+NOKPROBE_SYMBOL(do_sp_pc_abort);
 
 int __init early_brk64(unsigned long addr, unsigned int esr,
 		       struct pt_regs *regs);
@@ -872,8 +840,7 @@ NOKPROBE_SYMBOL(debug_exception_exit);
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 DECLARE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
 
-static int __exception
-cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
+static int cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
 {
 	if (user_mode(regs))
 		return 0;
@@ -892,16 +859,15 @@ cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
 	return 1;
 }
 #else
-static int __exception
-cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
+static int cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
 {
 	return 0;
 }
 #endif /* CONFIG_ARM64_ERRATUM_1463225 */
+NOKPROBE_SYMBOL(cortex_a76_erratum_1463225_debug_handler);
 
-asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
-					       unsigned int esr,
-					       struct pt_regs *regs)
+void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
+			struct pt_regs *regs)
 {
 	const struct fault_info *inf = esr_to_debug_fault_info(esr);
 	unsigned long pc = instruction_pointer(regs);
@@ -20,6 +20,7 @@
 #include <linux/sort.h>
 #include <linux/of.h>
 #include <linux/of_fdt.h>
+#include <linux/dma-direct.h>
 #include <linux/dma-mapping.h>
 #include <linux/dma-contiguous.h>
 #include <linux/efi.h>
@@ -41,6 +42,8 @@
 #include <asm/tlb.h>
 #include <asm/alternative.h>
 
+#define ARM64_ZONE_DMA_BITS	30
+
 /*
 * We need to be able to catch inadvertent references to memstart_addr
 * that occur (potentially in generic code) before arm64_memblock_init()
@@ -56,7 +59,14 @@ EXPORT_SYMBOL(physvirt_offset);
 struct page *vmemmap __ro_after_init;
 EXPORT_SYMBOL(vmemmap);
 
+/*
+ * We create both ZONE_DMA and ZONE_DMA32. ZONE_DMA covers the first 1G of
+ * memory as some devices, namely the Raspberry Pi 4, have peripherals with
+ * this limited view of the memory. ZONE_DMA32 will cover the rest of the 32
+ * bit addressable memory area.
+ */
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
+static phys_addr_t arm64_dma32_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
 /*
@@ -81,7 +91,7 @@ static void __init reserve_crashkernel(void)
 
 	if (crash_base == 0) {
 		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
+		crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
 				crash_size, SZ_2M);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
@@ -169,15 +179,16 @@ static void __init reserve_elfcorehdr(void)
 {
 }
 #endif /* CONFIG_CRASH_DUMP */
+
 /*
- * Return the maximum physical address for ZONE_DMA32 (DMA_BIT_MASK(32)). It
- * currently assumes that for memory starting above 4G, 32-bit devices will
- * use a DMA offset.
+ * Return the maximum physical address for a zone with a given address size
+ * limit. It currently assumes that for memory starting above 4G, 32-bit
+ * devices will use a DMA offset.
 */
-static phys_addr_t __init max_zone_dma_phys(void)
+static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
 {
-	phys_addr_t offset = memblock_start_of_DRAM() & GENMASK_ULL(63, 32);
-	return min(offset + (1ULL << 32), memblock_end_of_DRAM());
+	phys_addr_t offset = memblock_start_of_DRAM() & GENMASK_ULL(63, zone_bits);
+	return min(offset + (1ULL << zone_bits), memblock_end_of_DRAM());
 }
 
 #ifdef CONFIG_NUMA
@@ -186,8 +197,11 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
 
+#ifdef CONFIG_ZONE_DMA
+	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
+#endif
 #ifdef CONFIG_ZONE_DMA32
-	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());
+	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit);
 #endif
 	max_zone_pfns[ZONE_NORMAL] = max;
 
@@ -200,16 +214,21 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 {
 	struct memblock_region *reg;
 	unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES];
-	unsigned long max_dma = min;
+	unsigned long max_dma32 = min;
+	unsigned long __maybe_unused max_dma = min;
 
 	memset(zone_size, 0, sizeof(zone_size));
 
-	/* 4GB maximum for 32-bit only capable devices */
-#ifdef CONFIG_ZONE_DMA32
+#ifdef CONFIG_ZONE_DMA
 	max_dma = PFN_DOWN(arm64_dma_phys_limit);
-	zone_size[ZONE_DMA32] = max_dma - min;
+	zone_size[ZONE_DMA] = max_dma - min;
+	max_dma32 = max_dma;
 #endif
-	zone_size[ZONE_NORMAL] = max - max_dma;
+#ifdef CONFIG_ZONE_DMA32
+	max_dma32 = PFN_DOWN(arm64_dma32_phys_limit);
+	zone_size[ZONE_DMA32] = max_dma32 - max_dma;
+#endif
+	zone_size[ZONE_NORMAL] = max - max_dma32;
 
 	memcpy(zhole_size, zone_size, sizeof(zhole_size));
 
@@ -219,16 +238,22 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 
 		if (start >= max)
 			continue;
-#ifdef CONFIG_ZONE_DMA32
+#ifdef CONFIG_ZONE_DMA
 		if (start < max_dma) {
-			unsigned long dma_end = min(end, max_dma);
-			zhole_size[ZONE_DMA32] -= dma_end - start;
+			unsigned long dma_end = min_not_zero(end, max_dma);
+			zhole_size[ZONE_DMA] -= dma_end - start;
 		}
 #endif
-		if (end > max_dma) {
+#ifdef CONFIG_ZONE_DMA32
+		if (start < max_dma32) {
+			unsigned long dma32_end = min(end, max_dma32);
+			unsigned long dma32_start = max(start, max_dma);
+			zhole_size[ZONE_DMA32] -= dma32_end - dma32_start;
+		}
+#endif
+		if (end > max_dma32) {
 			unsigned long normal_end = min(end, max);
-			unsigned long normal_start = max(start, max_dma);
+			unsigned long normal_start = max(start, max_dma32);
 			zhole_size[ZONE_NORMAL] -= normal_end - normal_start;
 		}
 	}
@@ -418,11 +443,15 @@ void __init arm64_memblock_init(void)
 
 	early_init_fdt_scan_reserved_mem();
 
-	/* 4GB maximum for 32-bit only capable devices */
+	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
+		zone_dma_bits = ARM64_ZONE_DMA_BITS;
+		arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);
+	}
+
 	if (IS_ENABLED(CONFIG_ZONE_DMA32))
-		arm64_dma_phys_limit = max_zone_dma_phys();
+		arm64_dma32_phys_limit = max_zone_phys(32);
 	else
-		arm64_dma_phys_limit = PHYS_MASK + 1;
+		arm64_dma32_phys_limit = PHYS_MASK + 1;
 
 	reserve_crashkernel();
 
@@ -430,7 +459,7 @@ void __init arm64_memblock_init(void)
 
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 
-	dma_contiguous_reserve(arm64_dma_phys_limit);
+	dma_contiguous_reserve(arm64_dma32_phys_limit);
 }
 
 void __init bootmem_init(void)
@@ -534,7 +563,7 @@ static void __init free_unused_memmap(void)
 void __init mem_init(void)
 {
 	if (swiotlb_force == SWIOTLB_FORCE ||
-	    max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
+	    max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit))
 		swiotlb_init(1);
 	else
 		swiotlb_force = SWIOTLB_NO_FORCE;
@@ -571,7 +600,7 @@ void free_initmem(void)
 {
 	free_reserved_area(lm_alias(__init_begin),
 			   lm_alias(__init_end),
-			   0, "unused kernel");
+			   POISON_FREE_INITMEM, "unused kernel");
 	/*
 	 * Unmap the __init region but leave the VM area in place. This
 	 * prevents the region from being reused for kernel modules, which
@@ -580,18 +609,6 @@ void free_initmem(void)
 	unmap_kernel_range((u64)__init_begin, (u64)(__init_end - __init_begin));
 }
 
-#ifdef CONFIG_BLK_DEV_INITRD
-void __init free_initrd_mem(unsigned long start, unsigned long end)
-{
-	unsigned long aligned_start, aligned_end;
-
-	aligned_start = __virt_to_phys(start) & PAGE_MASK;
-	aligned_end = PAGE_ALIGN(__virt_to_phys(end));
-	memblock_free(aligned_start, aligned_end - aligned_start);
-	free_reserved_area((void *)start, (void *)end, 0, "initrd");
-}
-#endif
-
 /*
 * Dump out memory limit information on panic.
 */
@@ -338,7 +338,7 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 				 phys_addr_t (*pgtable_alloc)(int),
 				 int flags)
 {
-	unsigned long addr, length, end, next;
+	unsigned long addr, end, next;
 	pgd_t *pgdp = pgd_offset_raw(pgdir, virt);
 
 	/*
@@ -350,9 +350,8 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 
 	phys &= PAGE_MASK;
 	addr = virt & PAGE_MASK;
-	length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));
+	end = PAGE_ALIGN(virt + size);
 
-	end = addr + length;
 	do {
 		next = pgd_addr_end(addr, end);
 		alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
@@ -60,7 +60,6 @@ KBUILD_CFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY=1 \
 		 -DFTRACE_PATCHABLE_FUNCTION_SIZE=$(NOP_COUNT)
 
 CC_FLAGS_FTRACE := -fpatchable-function-entry=$(NOP_COUNT),$(shell echo $$(($(NOP_COUNT)-1)))
-KBUILD_LDS_MODULE += $(srctree)/arch/parisc/kernel/module.lds
 endif
 
 OBJCOPY_FLAGS =-O binary -R .note -R .comment -S
@@ -43,6 +43,7 @@
 #include <linux/elf.h>
 #include <linux/vmalloc.h>
 #include <linux/fs.h>
+#include <linux/ftrace.h>
 #include <linux/string.h>
 #include <linux/kernel.h>
 #include <linux/bug.h>
@@ -862,7 +863,7 @@ int module_finalize(const Elf_Ehdr *hdr,
 	const char *strtab = NULL;
 	const Elf_Shdr *s;
 	char *secstrings;
-	int err, symindex = -1;
+	int symindex = -1;
 	Elf_Sym *newptr, *oldptr;
 	Elf_Shdr *symhdr = NULL;
 #ifdef DEBUG
@@ -946,11 +947,13 @@ int module_finalize(const Elf_Ehdr *hdr,
 			/* patch .altinstructions */
 			apply_alternatives(aseg, aseg + s->sh_size, me->name);
 
+#ifdef CONFIG_DYNAMIC_FTRACE
 		/* For 32 bit kernels we're compiling modules with
 		 * -ffunction-sections so we must relocate the addresses in the
-		 *__mcount_loc section.
+		 * ftrace callsite section.
 		 */
-		if (symindex != -1 && !strcmp(secname, "__mcount_loc")) {
+		if (symindex != -1 && !strcmp(secname, FTRACE_CALLSITE_SECTION)) {
+			int err;
 			if (s->sh_type == SHT_REL)
 				err = apply_relocate((Elf_Shdr *)sechdrs,
 						     strtab, symindex,
@@ -962,6 +965,7 @@ int module_finalize(const Elf_Ehdr *hdr,
 			if (err)
 				return err;
 		}
+#endif
 	}
 	return 0;
 }
@@ -1,7 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-
-SECTIONS {
-	__mcount_loc : {
-		*(__patchable_function_entries)
-	}
-}
@@ -329,13 +329,4 @@ struct vm_area_struct;
 #endif /* __ASSEMBLY__ */
 #include <asm/slice.h>
 
-/*
- * Allow 30-bit DMA for very limited Broadcom wifi chips on many powerbooks.
- */
-#ifdef CONFIG_PPC32
-#define ARCH_ZONE_DMA_BITS 30
-#else
-#define ARCH_ZONE_DMA_BITS 31
-#endif
-
 #endif /* _ASM_POWERPC_PAGE_H */
|
@ -31,6 +31,7 @@
|
|||||||
#include <linux/slab.h>
|
#include <linux/slab.h>
|
||||||
#include <linux/vmalloc.h>
|
#include <linux/vmalloc.h>
|
||||||
#include <linux/memremap.h>
|
#include <linux/memremap.h>
|
||||||
|
#include <linux/dma-direct.h>
|
||||||
|
|
||||||
#include <asm/pgalloc.h>
|
#include <asm/pgalloc.h>
|
||||||
#include <asm/prom.h>
|
#include <asm/prom.h>
|
||||||
@ -201,10 +202,10 @@ static int __init mark_nonram_nosave(void)
|
|||||||
* everything else. GFP_DMA32 page allocations automatically fall back to
|
* everything else. GFP_DMA32 page allocations automatically fall back to
|
||||||
* ZONE_DMA.
|
* ZONE_DMA.
|
||||||
*
|
*
|
||||||
* By using 31-bit unconditionally, we can exploit ARCH_ZONE_DMA_BITS to
|
* By using 31-bit unconditionally, we can exploit zone_dma_bits to inform the
|
||||||
* inform the generic DMA mapping code. 32-bit only devices (if not handled
|
* generic DMA mapping code. 32-bit only devices (if not handled by an IOMMU
|
||||||
* by an IOMMU anyway) will take a first dip into ZONE_NORMAL and get
|
* anyway) will take a first dip into ZONE_NORMAL and get otherwise served by
|
||||||
* otherwise served by ZONE_DMA.
|
* ZONE_DMA.
|
||||||
*/
|
*/
|
||||||
static unsigned long max_zone_pfns[MAX_NR_ZONES];
|
static unsigned long max_zone_pfns[MAX_NR_ZONES];
|
||||||
|
|
||||||
@ -237,9 +238,18 @@ void __init paging_init(void)
|
|||||||
printk(KERN_DEBUG "Memory hole size: %ldMB\n",
|
printk(KERN_DEBUG "Memory hole size: %ldMB\n",
|
||||||
(long int)((top_of_ram - total_ram) >> 20));
|
(long int)((top_of_ram - total_ram) >> 20));
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Allow 30-bit DMA for very limited Broadcom wifi chips on many
|
||||||
|
* powerbooks.
|
||||||
|
*/
|
||||||
|
if (IS_ENABLED(CONFIG_PPC32))
|
||||||
|
zone_dma_bits = 30;
|
||||||
|
else
|
||||||
|
zone_dma_bits = 31;
|
||||||
|
|
||||||
#ifdef CONFIG_ZONE_DMA
|
#ifdef CONFIG_ZONE_DMA
|
||||||
max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
|
max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
|
||||||
1UL << (ARCH_ZONE_DMA_BITS - PAGE_SHIFT));
|
1UL << (zone_dma_bits - PAGE_SHIFT));
|
||||||
#endif
|
#endif
|
||||||
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
|
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
|
||||||
#ifdef CONFIG_HIGHMEM
|
#ifdef CONFIG_HIGHMEM
|
||||||
|
@@ -177,8 +177,6 @@ static inline int devmem_is_allowed(unsigned long pfn)
 #define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
 				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
 
-#define ARCH_ZONE_DMA_BITS	31
-
 #include <asm-generic/memory_model.h>
 #include <asm-generic/getorder.h>
 
@@ -118,6 +118,7 @@ void __init paging_init(void)
 
 	sparse_memory_present_with_active_regions(MAX_NUMNODES);
 	sparse_init();
+	zone_dma_bits = 31;
 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
@@ -1463,6 +1463,12 @@ static inline bool arch_has_pfn_modify_check(void)
 	return boot_cpu_has_bug(X86_BUG_L1TF);
 }
 
+#define arch_faults_on_old_pte arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+	return false;
+}
+
 #include <asm-generic/pgtable.h>
 #endif /* __ASSEMBLY__ */
 
@@ -967,29 +967,29 @@ static int sdei_get_conduit(struct platform_device *pdev)
 	if (np) {
 		if (of_property_read_string(np, "method", &method)) {
 			pr_warn("missing \"method\" property\n");
-			return CONDUIT_INVALID;
+			return SMCCC_CONDUIT_NONE;
 		}
 
 		if (!strcmp("hvc", method)) {
 			sdei_firmware_call = &sdei_smccc_hvc;
-			return CONDUIT_HVC;
+			return SMCCC_CONDUIT_HVC;
 		} else if (!strcmp("smc", method)) {
 			sdei_firmware_call = &sdei_smccc_smc;
-			return CONDUIT_SMC;
+			return SMCCC_CONDUIT_SMC;
 		}
 
 		pr_warn("invalid \"method\" property: %s\n", method);
 	} else if (IS_ENABLED(CONFIG_ACPI) && !acpi_disabled) {
 		if (acpi_psci_use_hvc()) {
 			sdei_firmware_call = &sdei_smccc_hvc;
-			return CONDUIT_HVC;
+			return SMCCC_CONDUIT_HVC;
 		} else {
 			sdei_firmware_call = &sdei_smccc_smc;
-			return CONDUIT_SMC;
+			return SMCCC_CONDUIT_SMC;
 		}
 	}
 
-	return CONDUIT_INVALID;
+	return SMCCC_CONDUIT_NONE;
 }
 
 static int sdei_probe(struct platform_device *pdev)
@@ -53,10 +53,18 @@ bool psci_tos_resident_on(int cpu)
 }
 
 struct psci_operations psci_ops = {
-	.conduit = PSCI_CONDUIT_NONE,
+	.conduit = SMCCC_CONDUIT_NONE,
 	.smccc_version = SMCCC_VERSION_1_0,
 };
 
+enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void)
+{
+	if (psci_ops.smccc_version < SMCCC_VERSION_1_1)
+		return SMCCC_CONDUIT_NONE;
+
+	return psci_ops.conduit;
+}
+
 typedef unsigned long (psci_fn)(unsigned long, unsigned long,
 				unsigned long, unsigned long);
 static psci_fn *invoke_psci_fn;
@@ -212,13 +220,13 @@ static unsigned long psci_migrate_info_up_cpu(void)
 			      0, 0, 0);
 }
 
-static void set_conduit(enum psci_conduit conduit)
+static void set_conduit(enum arm_smccc_conduit conduit)
 {
 	switch (conduit) {
-	case PSCI_CONDUIT_HVC:
+	case SMCCC_CONDUIT_HVC:
 		invoke_psci_fn = __invoke_psci_fn_hvc;
 		break;
-	case PSCI_CONDUIT_SMC:
+	case SMCCC_CONDUIT_SMC:
 		invoke_psci_fn = __invoke_psci_fn_smc;
 		break;
 	default:
@@ -240,9 +248,9 @@ static int get_set_conduit_method(struct device_node *np)
 	}
 
 	if (!strcmp("hvc", method)) {
-		set_conduit(PSCI_CONDUIT_HVC);
+		set_conduit(SMCCC_CONDUIT_HVC);
 	} else if (!strcmp("smc", method)) {
-		set_conduit(PSCI_CONDUIT_SMC);
+		set_conduit(SMCCC_CONDUIT_SMC);
 	} else {
 		pr_warn("invalid \"method\" property: %s\n", method);
 		return -EINVAL;
@@ -583,9 +591,9 @@ int __init psci_acpi_init(void)
 	pr_info("probing for conduit method from ACPI.\n");
 
 	if (acpi_psci_use_hvc())
-		set_conduit(PSCI_CONDUIT_HVC);
+		set_conduit(SMCCC_CONDUIT_HVC);
 	else
-		set_conduit(PSCI_CONDUIT_SMC);
+		set_conduit(SMCCC_CONDUIT_SMC);
 
 	return psci_probe();
 }
@@ -87,6 +87,15 @@ static DEFINE_STATIC_KEY_TRUE(supports_deactivate_key);
 */
 static DEFINE_STATIC_KEY_FALSE(supports_pseudo_nmis);
 
+/*
+ * Global static key controlling whether an update to PMR allowing more
+ * interrupts requires to be propagated to the redistributor (DSB SY).
+ * And this needs to be exported for modules to be able to enable
+ * interrupts...
+ */
+DEFINE_STATIC_KEY_FALSE(gic_pmr_sync);
+EXPORT_SYMBOL(gic_pmr_sync);
+
 /* ppi_nmi_refs[n] == number of cpus having ppi[n + 16] set as NMI */
 static refcount_t *ppi_nmi_refs;
 
@@ -1502,6 +1511,17 @@ static void gic_enable_nmi_support(void)
 	for (i = 0; i < gic_data.ppi_nr; i++)
 		refcount_set(&ppi_nmi_refs[i], 0);
 
+	/*
+	 * Linux itself doesn't use 1:N distribution, so has no need to
+	 * set PMHE. The only reason to have it set is if EL3 requires it
+	 * (and we can't change it).
+	 */
+	if (gic_read_ctlr() & ICC_CTLR_EL1_PMHE_MASK)
+		static_branch_enable(&gic_pmr_sync);
+
+	pr_info("%s ICC_PMR_EL1 synchronisation\n",
+		static_branch_unlikely(&gic_pmr_sync) ? "Forcing" : "Relaxing");
+
 	static_branch_enable(&supports_pseudo_nmis);
 
 	if (static_branch_likely(&supports_deactivate_key))
--- a/drivers/perf/arm-cci.c
+++ b/drivers/perf/arm-cci.c
@@ -1642,7 +1642,6 @@ static struct cci_pmu *cci_pmu_alloc(struct device *dev)
 
 static int cci_pmu_probe(struct platform_device *pdev)
 {
-	struct resource *res;
 	struct cci_pmu *cci_pmu;
 	int i, ret, irq;
 
@@ -1650,8 +1649,7 @@ static int cci_pmu_probe(struct platform_device *pdev)
 	if (IS_ERR(cci_pmu))
 		return PTR_ERR(cci_pmu);
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	cci_pmu->base = devm_ioremap_resource(&pdev->dev, res);
+	cci_pmu->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(cci_pmu->base))
 		return -ENOMEM;
 
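
The same two-lines-to-one conversion repeats in the arm-ccn, SMMUv3 PMU, HiSilicon and XGene hunks below. As far as these callers are concerned, devm_platform_ioremap_resource() simply wraps the fetch-then-map sequence it replaces, roughly (a sketch of the equivalent open-coded helper, kernel context assumed):

#include <linux/platform_device.h>
#include <linux/io.h>

/*
 * Sketch: fetch MEM resource @index, then map it with device-managed
 * lifetime so the probe error paths need no explicit unwinding.
 */
static void __iomem *example_ioremap(struct platform_device *pdev,
				     unsigned int index)
{
	struct resource *res;

	res = platform_get_resource(pdev, IORESOURCE_MEM, index);
	return devm_ioremap_resource(&pdev->dev, res);
}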
--- a/drivers/perf/arm-ccn.c
+++ b/drivers/perf/arm-ccn.c
@@ -1477,8 +1477,7 @@ static int arm_ccn_probe(struct platform_device *pdev)
 	ccn->dev = &pdev->dev;
 	platform_set_drvdata(pdev, ccn);
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	ccn->base = devm_ioremap_resource(ccn->dev, res);
+	ccn->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(ccn->base))
 		return PTR_ERR(ccn->base);
 
@@ -1537,6 +1536,7 @@ static int arm_ccn_remove(struct platform_device *pdev)
 static const struct of_device_id arm_ccn_match[] = {
 	{ .compatible = "arm,ccn-502", },
 	{ .compatible = "arm,ccn-504", },
+	{ .compatible = "arm,ccn-512", },
 	{},
 };
 MODULE_DEVICE_TABLE(of, arm_ccn_match);
--- a/drivers/perf/arm_smmuv3_pmu.c
+++ b/drivers/perf/arm_smmuv3_pmu.c
@@ -727,7 +727,7 @@ static void smmu_pmu_get_acpi_options(struct smmu_pmu *smmu_pmu)
 static int smmu_pmu_probe(struct platform_device *pdev)
 {
 	struct smmu_pmu *smmu_pmu;
-	struct resource *res_0, *res_1;
+	struct resource *res_0;
 	u32 cfgr, reg_size;
 	u64 ceid_64[2];
 	int irq, err;
@@ -764,8 +764,7 @@ static int smmu_pmu_probe(struct platform_device *pdev)
 
 	/* Determine if page 1 is present */
 	if (cfgr & SMMU_PMCG_CFGR_RELOC_CTRS) {
-		res_1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-		smmu_pmu->reloc_base = devm_ioremap_resource(dev, res_1);
+		smmu_pmu->reloc_base = devm_platform_ioremap_resource(pdev, 1);
 		if (IS_ERR(smmu_pmu->reloc_base))
 			return PTR_ERR(smmu_pmu->reloc_base);
 	} else {
--- a/drivers/perf/fsl_imx8_ddr_perf.c
+++ b/drivers/perf/fsl_imx8_ddr_perf.c
@@ -45,7 +45,8 @@
 static DEFINE_IDA(ddr_ida);
 
 /* DDR Perf hardware feature */
 #define DDR_CAP_AXI_ID_FILTER		0x1	/* support AXI ID filter */
+#define DDR_CAP_AXI_ID_FILTER_ENHANCED	0x3	/* support enhanced AXI ID filter */
 
 struct fsl_ddr_devtype_data {
 	unsigned int quirks;	/* quirks needed for different DDR Perf core */
@@ -57,9 +58,14 @@ static const struct fsl_ddr_devtype_data imx8m_devtype_data = {
 	.quirks = DDR_CAP_AXI_ID_FILTER,
 };
 
+static const struct fsl_ddr_devtype_data imx8mp_devtype_data = {
+	.quirks = DDR_CAP_AXI_ID_FILTER_ENHANCED,
+};
+
 static const struct of_device_id imx_ddr_pmu_dt_ids[] = {
 	{ .compatible = "fsl,imx8-ddr-pmu", .data = &imx8_devtype_data},
 	{ .compatible = "fsl,imx8m-ddr-pmu", .data = &imx8m_devtype_data},
+	{ .compatible = "fsl,imx8mp-ddr-pmu", .data = &imx8mp_devtype_data},
 	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, imx_ddr_pmu_dt_ids);
@@ -78,6 +84,61 @@ struct ddr_pmu {
 	int id;
 };
 
+enum ddr_perf_filter_capabilities {
+	PERF_CAP_AXI_ID_FILTER = 0,
+	PERF_CAP_AXI_ID_FILTER_ENHANCED,
+	PERF_CAP_AXI_ID_FEAT_MAX,
+};
+
+static u32 ddr_perf_filter_cap_get(struct ddr_pmu *pmu, int cap)
+{
+	u32 quirks = pmu->devtype_data->quirks;
+
+	switch (cap) {
+	case PERF_CAP_AXI_ID_FILTER:
+		return !!(quirks & DDR_CAP_AXI_ID_FILTER);
+	case PERF_CAP_AXI_ID_FILTER_ENHANCED:
+		quirks &= DDR_CAP_AXI_ID_FILTER_ENHANCED;
+		return quirks == DDR_CAP_AXI_ID_FILTER_ENHANCED;
+	default:
+		WARN(1, "unknown filter cap %d\n", cap);
+	}
+
+	return 0;
+}
+
+static ssize_t ddr_perf_filter_cap_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct ddr_pmu *pmu = dev_get_drvdata(dev);
+	struct dev_ext_attribute *ea =
+		container_of(attr, struct dev_ext_attribute, attr);
+	int cap = (long)ea->var;
+
+	return snprintf(buf, PAGE_SIZE, "%u\n",
+			ddr_perf_filter_cap_get(pmu, cap));
+}
+
+#define PERF_EXT_ATTR_ENTRY(_name, _func, _var)			\
+	(&((struct dev_ext_attribute) {					\
+		__ATTR(_name, 0444, _func, NULL), (void *)_var		\
+	}).attr.attr)
+
+#define PERF_FILTER_EXT_ATTR_ENTRY(_name, _var)				\
+	PERF_EXT_ATTR_ENTRY(_name, ddr_perf_filter_cap_show, _var)
+
+static struct attribute *ddr_perf_filter_cap_attr[] = {
+	PERF_FILTER_EXT_ATTR_ENTRY(filter, PERF_CAP_AXI_ID_FILTER),
+	PERF_FILTER_EXT_ATTR_ENTRY(enhanced_filter, PERF_CAP_AXI_ID_FILTER_ENHANCED),
+	NULL,
+};
+
+static struct attribute_group ddr_perf_filter_cap_attr_group = {
+	.name = "caps",
+	.attrs = ddr_perf_filter_cap_attr,
+};
+
 static ssize_t ddr_perf_cpumask_show(struct device *dev,
 				struct device_attribute *attr, char *buf)
 {
@@ -175,9 +236,40 @@ static const struct attribute_group *attr_groups[] = {
 	&ddr_perf_events_attr_group,
 	&ddr_perf_format_attr_group,
 	&ddr_perf_cpumask_attr_group,
+	&ddr_perf_filter_cap_attr_group,
 	NULL,
 };
 
+static bool ddr_perf_is_filtered(struct perf_event *event)
+{
+	return event->attr.config == 0x41 || event->attr.config == 0x42;
+}
+
+static u32 ddr_perf_filter_val(struct perf_event *event)
+{
+	return event->attr.config1;
+}
+
+static bool ddr_perf_filters_compatible(struct perf_event *a,
+					struct perf_event *b)
+{
+	if (!ddr_perf_is_filtered(a))
+		return true;
+	if (!ddr_perf_is_filtered(b))
+		return true;
+	return ddr_perf_filter_val(a) == ddr_perf_filter_val(b);
+}
+
+static bool ddr_perf_is_enhanced_filtered(struct perf_event *event)
+{
+	unsigned int filt;
+	struct ddr_pmu *pmu = to_ddr_pmu(event->pmu);
+
+	filt = pmu->devtype_data->quirks & DDR_CAP_AXI_ID_FILTER_ENHANCED;
+	return (filt == DDR_CAP_AXI_ID_FILTER_ENHANCED) &&
+		ddr_perf_is_filtered(event);
+}
+
 static u32 ddr_perf_alloc_counter(struct ddr_pmu *pmu, int event)
 {
 	int i;
@@ -209,27 +301,17 @@ static void ddr_perf_free_counter(struct ddr_pmu *pmu, int counter)
 
 static u32 ddr_perf_read_counter(struct ddr_pmu *pmu, int counter)
 {
-	return readl_relaxed(pmu->base + COUNTER_READ + counter * 4);
-}
-
-static bool ddr_perf_is_filtered(struct perf_event *event)
-{
-	return event->attr.config == 0x41 || event->attr.config == 0x42;
-}
-
-static u32 ddr_perf_filter_val(struct perf_event *event)
-{
-	return event->attr.config1;
-}
-
-static bool ddr_perf_filters_compatible(struct perf_event *a,
-					struct perf_event *b)
-{
-	if (!ddr_perf_is_filtered(a))
-		return true;
-	if (!ddr_perf_is_filtered(b))
-		return true;
-	return ddr_perf_filter_val(a) == ddr_perf_filter_val(b);
+	struct perf_event *event = pmu->events[counter];
+	void __iomem *base = pmu->base;
+
+	/*
+	 * return bytes instead of bursts from ddr transaction for
+	 * axid-read and axid-write event if PMU core supports enhanced
+	 * filter.
+	 */
+	base += ddr_perf_is_enhanced_filtered(event) ? COUNTER_DPCR1 :
+						       COUNTER_READ;
+	return readl_relaxed(base + counter * 4);
 }
 
 static int ddr_perf_event_init(struct perf_event *event)
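
DDR_CAP_AXI_ID_FILTER_ENHANCED is deliberately 0x3, a superset of DDR_CAP_AXI_ID_FILTER (0x1), which is why the capability check masks and then compares for equality rather than testing a single bit. A standalone sketch of that check:

#include <assert.h>
#include <stdio.h>

#define DDR_CAP_AXI_ID_FILTER		0x1
#define DDR_CAP_AXI_ID_FILTER_ENHANCED	0x3	/* superset: bit 0 plus bit 1 */

static int has_enhanced_filter(unsigned int quirks)
{
	/* mask-then-compare: both bits of 0x3 must be set */
	return (quirks & DDR_CAP_AXI_ID_FILTER_ENHANCED) ==
		DDR_CAP_AXI_ID_FILTER_ENHANCED;
}

int main(void)
{
	assert(!has_enhanced_filter(DDR_CAP_AXI_ID_FILTER));		/* basic only */
	assert(has_enhanced_filter(DDR_CAP_AXI_ID_FILTER_ENHANCED));	/* enhanced */
	puts("enhanced-filter quirk check behaves as in the driver");
	return 0;
}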
--- a/drivers/perf/hisilicon/hisi_uncore_ddrc_pmu.c
+++ b/drivers/perf/hisilicon/hisi_uncore_ddrc_pmu.c
@@ -243,8 +243,6 @@ MODULE_DEVICE_TABLE(acpi, hisi_ddrc_pmu_acpi_match);
 static int hisi_ddrc_pmu_init_data(struct platform_device *pdev,
 				   struct hisi_pmu *ddrc_pmu)
 {
-	struct resource *res;
-
 	/*
 	 * Use the SCCL_ID and DDRC channel ID to identify the
 	 * DDRC PMU, while SCCL_ID is in MPIDR[aff2].
@@ -263,8 +261,7 @@ static int hisi_ddrc_pmu_init_data(struct platform_device *pdev,
 	/* DDRC PMUs only share the same SCCL */
 	ddrc_pmu->ccl_id = -1;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	ddrc_pmu->base = devm_ioremap_resource(&pdev->dev, res);
+	ddrc_pmu->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(ddrc_pmu->base)) {
 		dev_err(&pdev->dev, "ioremap failed for ddrc_pmu resource\n");
 		return PTR_ERR(ddrc_pmu->base);
--- a/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
+++ b/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
@@ -234,7 +234,6 @@ static int hisi_hha_pmu_init_data(struct platform_device *pdev,
 				  struct hisi_pmu *hha_pmu)
 {
 	unsigned long long id;
-	struct resource *res;
 	acpi_status status;
 
 	status = acpi_evaluate_integer(ACPI_HANDLE(&pdev->dev),
@@ -256,8 +255,7 @@ static int hisi_hha_pmu_init_data(struct platform_device *pdev,
 	/* HHA PMUs only share the same SCCL */
 	hha_pmu->ccl_id = -1;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	hha_pmu->base = devm_ioremap_resource(&pdev->dev, res);
+	hha_pmu->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(hha_pmu->base)) {
 		dev_err(&pdev->dev, "ioremap failed for hha_pmu resource\n");
 		return PTR_ERR(hha_pmu->base);
--- a/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
+++ b/drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
@@ -233,7 +233,6 @@ static int hisi_l3c_pmu_init_data(struct platform_device *pdev,
 				  struct hisi_pmu *l3c_pmu)
 {
 	unsigned long long id;
-	struct resource *res;
 	acpi_status status;
 
 	status = acpi_evaluate_integer(ACPI_HANDLE(&pdev->dev),
@@ -259,8 +258,7 @@ static int hisi_l3c_pmu_init_data(struct platform_device *pdev,
 		return -EINVAL;
 	}
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	l3c_pmu->base = devm_ioremap_resource(&pdev->dev, res);
+	l3c_pmu->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(l3c_pmu->base)) {
 		dev_err(&pdev->dev, "ioremap failed for l3c_pmu resource\n");
 		return PTR_ERR(l3c_pmu->base);
--- a/drivers/perf/hisilicon/hisi_uncore_pmu.c
+++ b/drivers/perf/hisilicon/hisi_uncore_pmu.c
@@ -15,6 +15,7 @@
 #include <linux/errno.h>
 #include <linux/interrupt.h>
 
+#include <asm/cputype.h>
 #include <asm/local64.h>
 
 #include "hisi_uncore_pmu.h"
@@ -338,8 +339,10 @@ void hisi_uncore_pmu_disable(struct pmu *pmu)
 
 /*
  * Read Super CPU cluster and CPU cluster ID from MPIDR_EL1.
- * If multi-threading is supported, CCL_ID is the low 3-bits in MPIDR[Aff2]
- * and SCCL_ID is the upper 5-bits of Aff2 field; if not, SCCL_ID
+ * If multi-threading is supported, on Huawei Kunpeng 920 SoC whose cpu
+ * core is tsv110, CCL_ID is the low 3-bits in MPIDR[Aff2] and SCCL_ID
+ * is the upper 5-bits of Aff2 field; while for other cpu types, SCCL_ID
+ * is in MPIDR[Aff3] and CCL_ID is in MPIDR[Aff2]; if not, SCCL_ID
 * is in MPIDR[Aff2] and CCL_ID is in MPIDR[Aff1].
 */
 static void hisi_read_sccl_and_ccl_id(int *sccl_id, int *ccl_id)
@@ -347,12 +350,19 @@ static void hisi_read_sccl_and_ccl_id(int *sccl_id, int *ccl_id)
 	u64 mpidr = read_cpuid_mpidr();
 
 	if (mpidr & MPIDR_MT_BITMASK) {
-		int aff2 = MPIDR_AFFINITY_LEVEL(mpidr, 2);
+		if (read_cpuid_part_number() == HISI_CPU_PART_TSV110) {
+			int aff2 = MPIDR_AFFINITY_LEVEL(mpidr, 2);
 
 			if (sccl_id)
 				*sccl_id = aff2 >> 3;
 			if (ccl_id)
 				*ccl_id = aff2 & 0x7;
+		} else {
+			if (sccl_id)
+				*sccl_id = MPIDR_AFFINITY_LEVEL(mpidr, 3);
+			if (ccl_id)
+				*ccl_id = MPIDR_AFFINITY_LEVEL(mpidr, 2);
+		}
 	} else {
 		if (sccl_id)
 			*sccl_id = MPIDR_AFFINITY_LEVEL(mpidr, 2);
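
The new topology decode is pure bit-slicing of MPIDR_EL1: on TSV110 (Kunpeng 920) parts Aff2 packs SCCL:CCL as 5:3 bits, while other SMT parts keep SCCL in Aff3 and CCL in Aff2. A standalone sketch of the TSV110 branch, using a made-up example register value:

#include <stdint.h>
#include <stdio.h>

#define MPIDR_MT_BITMASK	(1ULL << 24)
#define MPIDR_AFF(mpidr, lvl)	\
	((unsigned int)(((mpidr) >> ((lvl) == 3 ? 32 : (lvl) * 8)) & 0xff))

/* decode as the TSV110 branch does: Aff2 = SCCL:CCL, split 5:3 */
static void decode_tsv110(uint64_t mpidr, int *sccl, int *ccl)
{
	unsigned int aff2 = MPIDR_AFF(mpidr, 2);

	*sccl = aff2 >> 3;	/* upper 5 bits of Aff2 */
	*ccl = aff2 & 0x7;	/* lower 3 bits of Aff2 */
}

int main(void)
{
	/* hypothetical SMT part: Aff2 = 0x2b, so SCCL 5, CCL 3 */
	uint64_t mpidr = MPIDR_MT_BITMASK | (0x2bULL << 16);
	int sccl, ccl;

	decode_tsv110(mpidr, &sccl, &ccl);
	printf("SCCL %d, CCL %d\n", sccl, ccl);
	return 0;
}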
--- a/drivers/perf/thunderx2_pmu.c
+++ b/drivers/perf/thunderx2_pmu.c
@@ -16,23 +16,36 @@
 * they need to be sampled before overflow(i.e, at every 2 seconds).
 */
 
-#define TX2_PMU_MAX_COUNTERS		4
+#define TX2_PMU_DMC_L3C_MAX_COUNTERS	4
+#define TX2_PMU_CCPI2_MAX_COUNTERS	8
+#define TX2_PMU_MAX_COUNTERS		TX2_PMU_CCPI2_MAX_COUNTERS
+
 
 #define TX2_PMU_DMC_CHANNELS		8
 #define TX2_PMU_L3_TILES		16
 
 #define TX2_PMU_HRTIMER_INTERVAL	(2 * NSEC_PER_SEC)
-#define GET_EVENTID(ev)			((ev->hw.config) & 0x1f)
-#define GET_COUNTERID(ev)		((ev->hw.idx) & 0x3)
+#define GET_EVENTID(ev, mask)		((ev->hw.config) & mask)
+#define GET_COUNTERID(ev, mask)		((ev->hw.idx) & mask)
 /* 1 byte per counter(4 counters).
 * Event id is encoded in bits [5:1] of a byte,
 */
 #define DMC_EVENT_CFG(idx, val)		((val) << (((idx) * 8) + 1))
+
+/* bits[3:0] to select counters, are indexed from 8 to 15. */
+#define CCPI2_COUNTER_OFFSET		8
+
 #define L3C_COUNTER_CTL			0xA8
 #define L3C_COUNTER_DATA		0xAC
 #define DMC_COUNTER_CTL			0x234
 #define DMC_COUNTER_DATA		0x240
 
+#define CCPI2_PERF_CTL			0x108
+#define CCPI2_COUNTER_CTL		0x10C
+#define CCPI2_COUNTER_SEL		0x12c
+#define CCPI2_COUNTER_DATA_L		0x130
+#define CCPI2_COUNTER_DATA_H		0x134
+
 /* L3C event IDs */
 #define L3_EVENT_READ_REQ		0xD
 #define L3_EVENT_WRITEBACK_REQ		0xE
@@ -51,15 +64,28 @@
 #define DMC_EVENT_READ_TXNS		0xF
 #define DMC_EVENT_MAX			0x10
 
+#define CCPI2_EVENT_REQ_PKT_SENT	0x3D
+#define CCPI2_EVENT_SNOOP_PKT_SENT	0x65
+#define CCPI2_EVENT_DATA_PKT_SENT	0x105
+#define CCPI2_EVENT_GIC_PKT_SENT	0x12D
+#define CCPI2_EVENT_MAX			0x200
+
+#define CCPI2_PERF_CTL_ENABLE		BIT(0)
+#define CCPI2_PERF_CTL_START		BIT(1)
+#define CCPI2_PERF_CTL_RESET		BIT(4)
+#define CCPI2_EVENT_LEVEL_RISING_EDGE	BIT(10)
+#define CCPI2_EVENT_TYPE_EDGE_SENSITIVE	BIT(11)
+
 enum tx2_uncore_type {
 	PMU_TYPE_L3C,
 	PMU_TYPE_DMC,
+	PMU_TYPE_CCPI2,
 	PMU_TYPE_INVALID,
 };
 
 /*
- * pmu on each socket has 2 uncore devices(dmc and l3c),
- * each device has 4 counters.
+ * Each socket has 3 uncore devices associated with a PMU. The DMC and
+ * L3C have 4 32-bit counters and the CCPI2 has 8 64-bit counters.
 */
 struct tx2_uncore_pmu {
 	struct hlist_node hpnode;
@@ -69,8 +95,10 @@ struct tx2_uncore_pmu {
 	int node;
 	int cpu;
 	u32 max_counters;
+	u32 counters_mask;
 	u32 prorate_factor;
 	u32 max_events;
+	u32 events_mask;
 	u64 hrtimer_interval;
 	void __iomem *base;
 	DECLARE_BITMAP(active_counters, TX2_PMU_MAX_COUNTERS);
@@ -79,6 +107,7 @@ struct tx2_uncore_pmu {
 	struct hrtimer hrtimer;
 	const struct attribute_group **attr_groups;
 	enum tx2_uncore_type type;
+	enum hrtimer_restart (*hrtimer_callback)(struct hrtimer *cb);
 	void (*init_cntr_base)(struct perf_event *event,
 			struct tx2_uncore_pmu *tx2_pmu);
 	void (*stop_event)(struct perf_event *event);
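
Parameterising GET_EVENTID()/GET_COUNTERID() on a mask is what lets one macro serve the 5-bit DMC/L3C event ids and 2-bit counter indices as well as the CCPI2's 10-bit events and 3-bit counters. The arithmetic, checked standalone:

#include <assert.h>
#include <stdint.h>

/* mirrors GET_EVENTID(ev, mask) / GET_COUNTERID(ev, mask) */
static unsigned int get_field(uint64_t raw, uint32_t mask)
{
	return (unsigned int)(raw & mask);
}

int main(void)
{
	/* DMC/L3C: 5-bit event ids, 2-bit counter index */
	assert(get_field(0x1f, 0x1f) == 0x1f);
	assert(get_field(0x3, 0x3) == 3);

	/* CCPI2: 10-bit event ids (up to 0x1ff), 3-bit counter index */
	assert(get_field(0x12d, 0x1ff) == 0x12d);	/* gic_pktsent */
	assert(get_field(7, 0x7) == 7);
	return 0;
}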
@@ -92,7 +121,21 @@ static inline struct tx2_uncore_pmu *pmu_to_tx2_pmu(struct pmu *pmu)
 	return container_of(pmu, struct tx2_uncore_pmu, pmu);
 }
 
-PMU_FORMAT_ATTR(event,	"config:0-4");
+#define TX2_PMU_FORMAT_ATTR(_var, _name, _format)			\
+static ssize_t								\
+__tx2_pmu_##_var##_show(struct device *dev,				\
+			       struct device_attribute *attr,		\
+			       char *page)				\
+{									\
+	BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE);			\
+	return sprintf(page, _format "\n");				\
+}									\
+									\
+static struct device_attribute format_attr_##_var =			\
+	__ATTR(_name, 0444, __tx2_pmu_##_var##_show, NULL)
+
+TX2_PMU_FORMAT_ATTR(event, event, "config:0-4");
+TX2_PMU_FORMAT_ATTR(event_ccpi2, event, "config:0-9");
 
 static struct attribute *l3c_pmu_format_attrs[] = {
 	&format_attr_event.attr,
@@ -104,6 +147,11 @@ static struct attribute *dmc_pmu_format_attrs[] = {
 	NULL,
 };
 
+static struct attribute *ccpi2_pmu_format_attrs[] = {
+	&format_attr_event_ccpi2.attr,
+	NULL,
+};
+
 static const struct attribute_group l3c_pmu_format_attr_group = {
 	.name = "format",
 	.attrs = l3c_pmu_format_attrs,
@@ -114,6 +162,11 @@ static const struct attribute_group dmc_pmu_format_attr_group = {
 	.attrs = dmc_pmu_format_attrs,
 };
 
+static const struct attribute_group ccpi2_pmu_format_attr_group = {
+	.name = "format",
+	.attrs = ccpi2_pmu_format_attrs,
+};
+
 /*
 * sysfs event attributes
 */
@@ -164,6 +217,19 @@ static struct attribute *dmc_pmu_events_attrs[] = {
 	NULL,
 };
 
+TX2_EVENT_ATTR(req_pktsent, CCPI2_EVENT_REQ_PKT_SENT);
+TX2_EVENT_ATTR(snoop_pktsent, CCPI2_EVENT_SNOOP_PKT_SENT);
+TX2_EVENT_ATTR(data_pktsent, CCPI2_EVENT_DATA_PKT_SENT);
+TX2_EVENT_ATTR(gic_pktsent, CCPI2_EVENT_GIC_PKT_SENT);
+
+static struct attribute *ccpi2_pmu_events_attrs[] = {
+	&tx2_pmu_event_attr_req_pktsent.attr.attr,
+	&tx2_pmu_event_attr_snoop_pktsent.attr.attr,
+	&tx2_pmu_event_attr_data_pktsent.attr.attr,
+	&tx2_pmu_event_attr_gic_pktsent.attr.attr,
+	NULL,
+};
+
 static const struct attribute_group l3c_pmu_events_attr_group = {
 	.name = "events",
 	.attrs = l3c_pmu_events_attrs,
@@ -174,6 +240,11 @@ static const struct attribute_group dmc_pmu_events_attr_group = {
 	.attrs = dmc_pmu_events_attrs,
 };
 
+static const struct attribute_group ccpi2_pmu_events_attr_group = {
+	.name = "events",
+	.attrs = ccpi2_pmu_events_attrs,
+};
+
 /*
 * sysfs cpumask attributes
 */
@@ -213,6 +284,13 @@ static const struct attribute_group *dmc_pmu_attr_groups[] = {
 	NULL
 };
 
+static const struct attribute_group *ccpi2_pmu_attr_groups[] = {
+	&ccpi2_pmu_format_attr_group,
+	&pmu_cpumask_attr_group,
+	&ccpi2_pmu_events_attr_group,
+	NULL
+};
+
 static inline u32 reg_readl(unsigned long addr)
 {
 	return readl((void __iomem *)addr);
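
TX2_PMU_FORMAT_ATTR() exists because two sysfs attributes must both be named "event" while printing different format strings, which a single PMU_FORMAT_ATTR() cannot express. The trick is "##" pasting on the variable name while the display name stays fixed; a toy sketch of the same pasting (the struct layout here is illustrative, not the kernel's):

#include <stdio.h>

#define FORMAT_ATTR(_var, _name, _format)				\
static const char *show_##_var(void) { return _format; }		\
static const struct { const char *name; const char *(*show)(void); }	\
	format_attr_##_var = { #_name, show_##_var }

FORMAT_ATTR(event, event, "config:0-4");	/* DMC/L3C: 5-bit events */
FORMAT_ATTR(event_ccpi2, event, "config:0-9");	/* CCPI2: 10-bit events */

int main(void)
{
	printf("%s = %s\n", format_attr_event.name, format_attr_event.show());
	printf("%s = %s\n", format_attr_event_ccpi2.name,
	       format_attr_event_ccpi2.show());
	return 0;
}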
@@ -245,33 +323,58 @@ static void init_cntr_base_l3c(struct perf_event *event,
 		struct tx2_uncore_pmu *tx2_pmu)
 {
 	struct hw_perf_event *hwc = &event->hw;
+	u32 cmask;
+
+	tx2_pmu = pmu_to_tx2_pmu(event->pmu);
+	cmask = tx2_pmu->counters_mask;
 
 	/* counter ctrl/data reg offset at 8 */
 	hwc->config_base = (unsigned long)tx2_pmu->base
-		+ L3C_COUNTER_CTL + (8 * GET_COUNTERID(event));
+		+ L3C_COUNTER_CTL + (8 * GET_COUNTERID(event, cmask));
 	hwc->event_base = (unsigned long)tx2_pmu->base
-		+ L3C_COUNTER_DATA + (8 * GET_COUNTERID(event));
+		+ L3C_COUNTER_DATA + (8 * GET_COUNTERID(event, cmask));
 }
 
 static void init_cntr_base_dmc(struct perf_event *event,
 		struct tx2_uncore_pmu *tx2_pmu)
 {
 	struct hw_perf_event *hwc = &event->hw;
+	u32 cmask;
+
+	tx2_pmu = pmu_to_tx2_pmu(event->pmu);
+	cmask = tx2_pmu->counters_mask;
 
 	hwc->config_base = (unsigned long)tx2_pmu->base
 		+ DMC_COUNTER_CTL;
 	/* counter data reg offset at 0xc */
 	hwc->event_base = (unsigned long)tx2_pmu->base
-		+ DMC_COUNTER_DATA + (0xc * GET_COUNTERID(event));
+		+ DMC_COUNTER_DATA + (0xc * GET_COUNTERID(event, cmask));
+}
+
+static void init_cntr_base_ccpi2(struct perf_event *event,
+		struct tx2_uncore_pmu *tx2_pmu)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u32 cmask;
+
+	cmask = tx2_pmu->counters_mask;
+
+	hwc->config_base = (unsigned long)tx2_pmu->base
+		+ CCPI2_COUNTER_CTL + (4 * GET_COUNTERID(event, cmask));
+	hwc->event_base = (unsigned long)tx2_pmu->base;
 }
 
 static void uncore_start_event_l3c(struct perf_event *event, int flags)
 {
-	u32 val;
+	u32 val, emask;
 	struct hw_perf_event *hwc = &event->hw;
+	struct tx2_uncore_pmu *tx2_pmu;
+
+	tx2_pmu = pmu_to_tx2_pmu(event->pmu);
+	emask = tx2_pmu->events_mask;
 
 	/* event id encoded in bits [07:03] */
-	val = GET_EVENTID(event) << 3;
+	val = GET_EVENTID(event, emask) << 3;
 	reg_writel(val, hwc->config_base);
 	local64_set(&hwc->prev_count, 0);
 	reg_writel(0, hwc->event_base);
@@ -284,10 +387,17 @@ static inline void uncore_stop_event_l3c(struct perf_event *event)
 
 static void uncore_start_event_dmc(struct perf_event *event, int flags)
 {
-	u32 val;
+	u32 val, cmask, emask;
 	struct hw_perf_event *hwc = &event->hw;
-	int idx = GET_COUNTERID(event);
-	int event_id = GET_EVENTID(event);
+	struct tx2_uncore_pmu *tx2_pmu;
+	int idx, event_id;
+
+	tx2_pmu = pmu_to_tx2_pmu(event->pmu);
+	cmask = tx2_pmu->counters_mask;
+	emask = tx2_pmu->events_mask;
+
+	idx = GET_COUNTERID(event, cmask);
+	event_id = GET_EVENTID(event, emask);
 
 	/* enable and start counters.
 	 * 8 bits for each counter, bits[05:01] of a counter to set event type.
@@ -302,9 +412,14 @@ static void uncore_start_event_dmc(struct perf_event *event, int flags)
 
 static void uncore_stop_event_dmc(struct perf_event *event)
 {
-	u32 val;
+	u32 val, cmask;
 	struct hw_perf_event *hwc = &event->hw;
-	int idx = GET_COUNTERID(event);
+	struct tx2_uncore_pmu *tx2_pmu;
+	int idx;
+
+	tx2_pmu = pmu_to_tx2_pmu(event->pmu);
+	cmask = tx2_pmu->counters_mask;
+	idx = GET_COUNTERID(event, cmask);
 
 	/* clear event type(bits[05:01]) to stop counter */
 	val = reg_readl(hwc->config_base);
@@ -312,27 +427,72 @@ static void uncore_stop_event_dmc(struct perf_event *event)
 	reg_writel(val, hwc->config_base);
 }
 
+static void uncore_start_event_ccpi2(struct perf_event *event, int flags)
+{
+	u32 emask;
+	struct hw_perf_event *hwc = &event->hw;
+	struct tx2_uncore_pmu *tx2_pmu;
+
+	tx2_pmu = pmu_to_tx2_pmu(event->pmu);
+	emask = tx2_pmu->events_mask;
+
+	/* Bit [09:00] to set event id.
+	 * Bits [10], set level to rising edge.
+	 * Bits [11], set type to edge sensitive.
+	 */
+	reg_writel((CCPI2_EVENT_TYPE_EDGE_SENSITIVE |
+			CCPI2_EVENT_LEVEL_RISING_EDGE |
+			GET_EVENTID(event, emask)), hwc->config_base);
+
+	/* reset[4], enable[0] and start[1] counters */
+	reg_writel(CCPI2_PERF_CTL_RESET |
+			CCPI2_PERF_CTL_START |
+			CCPI2_PERF_CTL_ENABLE,
+			hwc->event_base + CCPI2_PERF_CTL);
+	local64_set(&event->hw.prev_count, 0ULL);
+}
+
+static void uncore_stop_event_ccpi2(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+
+	/* disable and stop counter */
+	reg_writel(0, hwc->event_base + CCPI2_PERF_CTL);
+}
+
 static void tx2_uncore_event_update(struct perf_event *event)
 {
-	s64 prev, delta, new = 0;
+	u64 prev, delta, new = 0;
 	struct hw_perf_event *hwc = &event->hw;
 	struct tx2_uncore_pmu *tx2_pmu;
 	enum tx2_uncore_type type;
 	u32 prorate_factor;
+	u32 cmask, emask;
 
 	tx2_pmu = pmu_to_tx2_pmu(event->pmu);
 	type = tx2_pmu->type;
+	cmask = tx2_pmu->counters_mask;
+	emask = tx2_pmu->events_mask;
 	prorate_factor = tx2_pmu->prorate_factor;
-
-	new = reg_readl(hwc->event_base);
-	prev = local64_xchg(&hwc->prev_count, new);
-
-	/* handles rollover of 32 bit counter */
-	delta = (u32)(((1UL << 32) - prev) + new);
+	if (type == PMU_TYPE_CCPI2) {
+		reg_writel(CCPI2_COUNTER_OFFSET +
+				GET_COUNTERID(event, cmask),
+				hwc->event_base + CCPI2_COUNTER_SEL);
+		new = reg_readl(hwc->event_base + CCPI2_COUNTER_DATA_H);
+		new = (new << 32) +
+			reg_readl(hwc->event_base + CCPI2_COUNTER_DATA_L);
+		prev = local64_xchg(&hwc->prev_count, new);
+		delta = new - prev;
+	} else {
+		new = reg_readl(hwc->event_base);
+		prev = local64_xchg(&hwc->prev_count, new);
+		/* handles rollover of 32 bit counter */
+		delta = (u32)(((1UL << 32) - prev) + new);
+	}
 
 	/* DMC event data_transfers granularity is 16 Bytes, convert it to 64 */
 	if (type == PMU_TYPE_DMC &&
-		GET_EVENTID(event) == DMC_EVENT_DATA_TRANSFERS)
+		GET_EVENTID(event, emask) == DMC_EVENT_DATA_TRANSFERS)
 		delta = delta/4;
 
 	/* L3C and DMC has 16 and 8 interleave channels respectively.
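
The split in tx2_uncore_event_update() is the crux of the CCPI2 support: the 32-bit DMC/L3C counters must be sampled before they wrap (hence the 2 s hrtimer) and the delta computed modulo 2^32, while the 64-bit CCPI2 counters just subtract. Both formulas, checked standalone:

#include <assert.h>
#include <stdint.h>

/* 32-bit counter: elapsed count assuming at most one wrap between reads */
static uint64_t delta32(uint32_t prev, uint32_t new)
{
	return (uint32_t)(((1ULL << 32) - prev) + new);
}

int main(void)
{
	/* no wrap: behaves like plain subtraction */
	assert(delta32(100, 250) == 150);
	/* wrap: counter passed 2^32 between samples */
	assert(delta32(0xfffffff0u, 0x10u) == 0x20);

	/* 64-bit CCPI2 counter: plain subtraction, no sampling timer needed */
	uint64_t prev = 0xfffffff0ull, new = 0x100000010ull;
	assert(new - prev == 0x20);
	return 0;
}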
@@ -351,6 +511,7 @@ static enum tx2_uncore_type get_tx2_pmu_type(struct acpi_device *adev)
 	} devices[] = {
 		{"CAV901D", PMU_TYPE_L3C},
 		{"CAV901F", PMU_TYPE_DMC},
+		{"CAV901E", PMU_TYPE_CCPI2},
 		{"", PMU_TYPE_INVALID}
 	};
 
@@ -380,7 +541,8 @@ static bool tx2_uncore_validate_event(struct pmu *pmu,
 * Make sure the group of events can be scheduled at once
 * on the PMU.
 */
-static bool tx2_uncore_validate_event_group(struct perf_event *event)
+static bool tx2_uncore_validate_event_group(struct perf_event *event,
+				int max_counters)
 {
 	struct perf_event *sibling, *leader = event->group_leader;
 	int counters = 0;
@@ -403,7 +565,7 @@ static bool tx2_uncore_validate_event_group(struct perf_event *event)
 	 * If the group requires more counters than the HW has,
 	 * it cannot ever be scheduled.
 	 */
-	return counters <= TX2_PMU_MAX_COUNTERS;
+	return counters <= max_counters;
 }
 
 
@@ -439,7 +601,7 @@ static int tx2_uncore_event_init(struct perf_event *event)
 	hwc->config = event->attr.config;
 
 	/* Validate the group */
-	if (!tx2_uncore_validate_event_group(event))
+	if (!tx2_uncore_validate_event_group(event, tx2_pmu->max_counters))
 		return -EINVAL;
 
 	return 0;
@@ -456,6 +618,10 @@ static void tx2_uncore_event_start(struct perf_event *event, int flags)
 	tx2_pmu->start_event(event, flags);
 	perf_event_update_userpage(event);
 
+	/* No hrtimer needed for CCPI2, 64-bit counters */
+	if (!tx2_pmu->hrtimer_callback)
+		return;
+
 	/* Start timer for first event */
 	if (bitmap_weight(tx2_pmu->active_counters,
 				tx2_pmu->max_counters) == 1) {
@@ -510,15 +676,23 @@ static void tx2_uncore_event_del(struct perf_event *event, int flags)
 {
 	struct tx2_uncore_pmu *tx2_pmu = pmu_to_tx2_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	u32 cmask;
 
+	cmask = tx2_pmu->counters_mask;
 	tx2_uncore_event_stop(event, PERF_EF_UPDATE);
 
 	/* clear the assigned counter */
-	free_counter(tx2_pmu, GET_COUNTERID(event));
+	free_counter(tx2_pmu, GET_COUNTERID(event, cmask));
 
 	perf_event_update_userpage(event);
 	tx2_pmu->events[hwc->idx] = NULL;
 	hwc->idx = -1;
+
+	if (!tx2_pmu->hrtimer_callback)
+		return;
+
+	if (bitmap_empty(tx2_pmu->active_counters, tx2_pmu->max_counters))
+		hrtimer_cancel(&tx2_pmu->hrtimer);
 }
 
 static void tx2_uncore_event_read(struct perf_event *event)
@@ -580,8 +754,12 @@ static int tx2_uncore_pmu_add_dev(struct tx2_uncore_pmu *tx2_pmu)
 			cpu_online_mask);
 
 	tx2_pmu->cpu = cpu;
-	hrtimer_init(&tx2_pmu->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	tx2_pmu->hrtimer.function = tx2_hrtimer_callback;
+
+	if (tx2_pmu->hrtimer_callback) {
+		hrtimer_init(&tx2_pmu->hrtimer,
+				CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		tx2_pmu->hrtimer.function = tx2_pmu->hrtimer_callback;
+	}
 
 	ret = tx2_uncore_pmu_register(tx2_pmu);
 	if (ret) {
@@ -653,10 +831,13 @@ static struct tx2_uncore_pmu *tx2_uncore_pmu_init_dev(struct device *dev,
 
 	switch (tx2_pmu->type) {
 	case PMU_TYPE_L3C:
-		tx2_pmu->max_counters = TX2_PMU_MAX_COUNTERS;
+		tx2_pmu->max_counters = TX2_PMU_DMC_L3C_MAX_COUNTERS;
+		tx2_pmu->counters_mask = 0x3;
 		tx2_pmu->prorate_factor = TX2_PMU_L3_TILES;
 		tx2_pmu->max_events = L3_EVENT_MAX;
+		tx2_pmu->events_mask = 0x1f;
 		tx2_pmu->hrtimer_interval = TX2_PMU_HRTIMER_INTERVAL;
+		tx2_pmu->hrtimer_callback = tx2_hrtimer_callback;
 		tx2_pmu->attr_groups = l3c_pmu_attr_groups;
 		tx2_pmu->name = devm_kasprintf(dev, GFP_KERNEL,
 				"uncore_l3c_%d", tx2_pmu->node);
@@ -665,10 +846,13 @@ static struct tx2_uncore_pmu *tx2_uncore_pmu_init_dev(struct device *dev,
 		tx2_pmu->stop_event = uncore_stop_event_l3c;
 		break;
 	case PMU_TYPE_DMC:
-		tx2_pmu->max_counters = TX2_PMU_MAX_COUNTERS;
+		tx2_pmu->max_counters = TX2_PMU_DMC_L3C_MAX_COUNTERS;
+		tx2_pmu->counters_mask = 0x3;
 		tx2_pmu->prorate_factor = TX2_PMU_DMC_CHANNELS;
 		tx2_pmu->max_events = DMC_EVENT_MAX;
+		tx2_pmu->events_mask = 0x1f;
 		tx2_pmu->hrtimer_interval = TX2_PMU_HRTIMER_INTERVAL;
+		tx2_pmu->hrtimer_callback = tx2_hrtimer_callback;
 		tx2_pmu->attr_groups = dmc_pmu_attr_groups;
 		tx2_pmu->name = devm_kasprintf(dev, GFP_KERNEL,
 				"uncore_dmc_%d", tx2_pmu->node);
@@ -676,6 +860,21 @@ static struct tx2_uncore_pmu *tx2_uncore_pmu_init_dev(struct device *dev,
 		tx2_pmu->start_event = uncore_start_event_dmc;
 		tx2_pmu->stop_event = uncore_stop_event_dmc;
 		break;
+	case PMU_TYPE_CCPI2:
+		/* CCPI2 has 8 counters */
+		tx2_pmu->max_counters = TX2_PMU_CCPI2_MAX_COUNTERS;
+		tx2_pmu->counters_mask = 0x7;
+		tx2_pmu->prorate_factor = 1;
+		tx2_pmu->max_events = CCPI2_EVENT_MAX;
+		tx2_pmu->events_mask = 0x1ff;
+		tx2_pmu->attr_groups = ccpi2_pmu_attr_groups;
+		tx2_pmu->name = devm_kasprintf(dev, GFP_KERNEL,
+				"uncore_ccpi2_%d", tx2_pmu->node);
+		tx2_pmu->init_cntr_base = init_cntr_base_ccpi2;
+		tx2_pmu->start_event = uncore_start_event_ccpi2;
+		tx2_pmu->stop_event = uncore_stop_event_ccpi2;
+		tx2_pmu->hrtimer_callback = NULL;
+		break;
 	case PMU_TYPE_INVALID:
 		devm_kfree(dev, tx2_pmu);
 		return NULL;
@@ -744,7 +943,9 @@ static int tx2_uncore_pmu_offline_cpu(unsigned int cpu,
 	if (cpu != tx2_pmu->cpu)
 		return 0;
 
-	hrtimer_cancel(&tx2_pmu->hrtimer);
+	if (tx2_pmu->hrtimer_callback)
+		hrtimer_cancel(&tx2_pmu->hrtimer);
+
 	cpumask_copy(&cpu_online_mask_temp, cpu_online_mask);
 	cpumask_clear_cpu(cpu, &cpu_online_mask_temp);
 	new_cpu = cpumask_any_and(
--- a/drivers/perf/xgene_pmu.c
+++ b/drivers/perf/xgene_pmu.c
@@ -1282,25 +1282,21 @@ static int acpi_pmu_probe_active_mcb_mcu_l3c(struct xgene_pmu *xgene_pmu,
 					     struct platform_device *pdev)
 {
 	void __iomem *csw_csr, *mcba_csr, *mcbb_csr;
-	struct resource *res;
 	unsigned int reg;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-	csw_csr = devm_ioremap_resource(&pdev->dev, res);
+	csw_csr = devm_platform_ioremap_resource(pdev, 1);
 	if (IS_ERR(csw_csr)) {
 		dev_err(&pdev->dev, "ioremap failed for CSW CSR resource\n");
 		return PTR_ERR(csw_csr);
 	}
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
-	mcba_csr = devm_ioremap_resource(&pdev->dev, res);
+	mcba_csr = devm_platform_ioremap_resource(pdev, 2);
 	if (IS_ERR(mcba_csr)) {
 		dev_err(&pdev->dev, "ioremap failed for MCBA CSR resource\n");
 		return PTR_ERR(mcba_csr);
 	}
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 3);
-	mcbb_csr = devm_ioremap_resource(&pdev->dev, res);
+	mcbb_csr = devm_platform_ioremap_resource(pdev, 3);
 	if (IS_ERR(mcbb_csr)) {
 		dev_err(&pdev->dev, "ioremap failed for MCBB CSR resource\n");
 		return PTR_ERR(mcbb_csr);
@@ -1332,13 +1328,11 @@ static int acpi_pmu_v3_probe_active_mcb_mcu_l3c(struct xgene_pmu *xgene_pmu,
 						struct platform_device *pdev)
 {
 	void __iomem *csw_csr;
-	struct resource *res;
 	unsigned int reg;
 	u32 mcb0routing;
 	u32 mcb1routing;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-	csw_csr = devm_ioremap_resource(&pdev->dev, res);
+	csw_csr = devm_platform_ioremap_resource(pdev, 1);
 	if (IS_ERR(csw_csr)) {
 		dev_err(&pdev->dev, "ioremap failed for CSW CSR resource\n");
 		return PTR_ERR(csw_csr);
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -110,17 +110,17 @@
 #endif
 
 #ifdef CONFIG_FTRACE_MCOUNT_RECORD
-#ifdef CC_USING_PATCHABLE_FUNCTION_ENTRY
-#define MCOUNT_REC()	. = ALIGN(8);				\
-			__start_mcount_loc = .;			\
-			KEEP(*(__patchable_function_entries))	\
-			__stop_mcount_loc = .;
-#else
+/*
+ * The ftrace call sites are logged to a section whose name depends on the
+ * compiler option used. A given kernel image will only use one, AKA
+ * FTRACE_CALLSITE_SECTION. We capture all of them here to avoid header
+ * dependencies for FTRACE_CALLSITE_SECTION's definition.
+ */
 #define MCOUNT_REC()	. = ALIGN(8);				\
 			__start_mcount_loc = .;			\
 			KEEP(*(__mcount_loc))			\
+			KEEP(*(__patchable_function_entries))	\
 			__stop_mcount_loc = .;
-#endif
 #else
 #define MCOUNT_REC()
 #endif
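
MCOUNT_REC() works by bracketing the collected input sections with __start_mcount_loc/__stop_mcount_loc so the ftrace core can walk the records as a flat array. The same bracketing can be reproduced in userspace, since GNU ld synthesises __start_/__stop_ symbols for any section whose name is a valid C identifier (a sketch, GCC and binutils assumed; the section name is made up):

#include <stdio.h>

/* log "call sites" into a named section, as __mcount_loc does for ftrace */
#define RECORD(sym)							\
	static void *record_##sym					\
	__attribute__((used, section("demo_loc"))) = (void *)&sym

static void foo(void) { }
static void bar(void) { }
RECORD(foo);
RECORD(bar);

extern void *__start_demo_loc[], *__stop_demo_loc[];

int main(void)
{
	for (void **p = __start_demo_loc; p < __stop_demo_loc; p++)
		printf("recorded call site: %p\n", *p);
	return 0;
}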
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -80,6 +80,22 @@
 
 #include <linux/linkage.h>
 #include <linux/types.h>
+
+enum arm_smccc_conduit {
+	SMCCC_CONDUIT_NONE,
+	SMCCC_CONDUIT_SMC,
+	SMCCC_CONDUIT_HVC,
+};
+
+/**
+ * arm_smccc_1_1_get_conduit()
+ *
+ * Returns the conduit to be used for SMCCCv1.1 or later.
+ *
+ * When SMCCCv1.1 is not present, returns SMCCC_CONDUIT_NONE.
+ */
+enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void);
+
 /**
 * struct arm_smccc_res - Result from SMC/HVC call
 * @a0-a3 result values from registers 0 to 3
|
@ -5,12 +5,6 @@
|
|||||||
|
|
||||||
#include <uapi/linux/arm_sdei.h>
|
#include <uapi/linux/arm_sdei.h>
|
||||||
|
|
||||||
enum sdei_conduit_types {
|
|
||||||
CONDUIT_INVALID = 0,
|
|
||||||
CONDUIT_SMC,
|
|
||||||
CONDUIT_HVC,
|
|
||||||
};
|
|
||||||
|
|
||||||
#include <acpi/ghes.h>
|
#include <acpi/ghes.h>
|
||||||
|
|
||||||
#ifdef CONFIG_ARM_SDE_INTERFACE
|
#ifdef CONFIG_ARM_SDE_INTERFACE
|
||||||
|
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -5,6 +5,8 @@
 #include <linux/dma-mapping.h>
 #include <linux/mem_encrypt.h>
 
+extern unsigned int zone_dma_bits;
+
 #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
 #include <asm/dma-direct.h>
 #else
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -499,7 +499,7 @@ static inline int ftrace_disable_ftrace_graph_caller(void) { return 0; }
 /**
 * ftrace_make_nop - convert code into nop
 * @mod: module structure if called by module load initialization
- * @rec: the mcount call site record
+ * @rec: the call site record (e.g. mcount/fentry)
 * @addr: the address that the call site should be calling
 *
 * This is a very sensitive operation and great care needs
@@ -520,9 +520,38 @@ static inline int ftrace_disable_ftrace_graph_caller(void) { return 0; }
 extern int ftrace_make_nop(struct module *mod,
 			   struct dyn_ftrace *rec, unsigned long addr);
 
+
+/**
+ * ftrace_init_nop - initialize a nop call site
+ * @mod: module structure if called by module load initialization
+ * @rec: the call site record (e.g. mcount/fentry)
+ *
+ * This is a very sensitive operation and great care needs
+ * to be taken by the arch. The operation should carefully
+ * read the location, check to see if what is read is indeed
+ * what we expect it to be, and then on success of the compare,
+ * it should write to the location.
+ *
+ * The code segment at @rec->ip should contain the contents created by
+ * the compiler
+ *
+ * Return must be:
+ *   0 on success
+ *   -EFAULT on error reading the location
+ *   -EINVAL on a failed compare of the contents
+ *   -EPERM  on error writing to the location
+ * Any other value will be considered a failure.
+ */
+#ifndef ftrace_init_nop
+static inline int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
+{
+	return ftrace_make_nop(mod, rec, MCOUNT_ADDR);
+}
+#endif
+
 /**
 * ftrace_make_call - convert a nop call site into a call to addr
- * @rec: the mcount call site record
+ * @rec: the call site record (e.g. mcount/fentry)
 * @addr: the address that the call site should call
 *
 * This is a very sensitive operation and great care needs
@@ -545,7 +574,7 @@ extern int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr);
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
 /**
 * ftrace_modify_call - convert from one addr to another (no nop)
- * @rec: the mcount call site record
+ * @rec: the call site record (e.g. mcount/fentry)
 * @old_addr: the address expected to be currently called to
 * @addr: the address to change to
 *
@@ -709,6 +738,11 @@ static inline unsigned long get_lock_parent_ip(void)
 
 #ifdef CONFIG_FTRACE_MCOUNT_RECORD
 extern void ftrace_init(void);
+#ifdef CC_USING_PATCHABLE_FUNCTION_ENTRY
+#define FTRACE_CALLSITE_SECTION	"__patchable_function_entries"
+#else
+#define FTRACE_CALLSITE_SECTION	"__mcount_loc"
+#endif
 #else
 static inline void ftrace_init(void) { }
 #endif
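
ftrace_init_nop() uses the standard overridable-default idiom: an arch that needs different first-time initialisation (arm64's FTRACE_WITH_REGS patching, in this series) defines the macro to its own implementation before this header is expanded; everyone else silently gets the ftrace_make_nop() fallback. The idiom in miniature (DEMO_ARCH_HAS_INIT is a made-up compile switch):

#include <stdio.h>

/* an "arch header" may provide its own hook... */
#ifdef DEMO_ARCH_HAS_INIT
#define demo_init demo_init
static inline int demo_init(void) { puts("arch-specific init"); return 0; }
#endif

/* ...otherwise the generic header supplies the default */
#ifndef demo_init
static inline int demo_init(void) { puts("generic default init"); return 0; }
#endif

int main(void)
{
	return demo_init();	/* build with -DDEMO_ARCH_HAS_INIT to override */
}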
|
@ -487,6 +487,8 @@
|
|||||||
#define ICC_CTLR_EL1_EOImode_MASK (1 << ICC_CTLR_EL1_EOImode_SHIFT)
|
#define ICC_CTLR_EL1_EOImode_MASK (1 << ICC_CTLR_EL1_EOImode_SHIFT)
|
||||||
#define ICC_CTLR_EL1_CBPR_SHIFT 0
|
#define ICC_CTLR_EL1_CBPR_SHIFT 0
|
||||||
#define ICC_CTLR_EL1_CBPR_MASK (1 << ICC_CTLR_EL1_CBPR_SHIFT)
|
#define ICC_CTLR_EL1_CBPR_MASK (1 << ICC_CTLR_EL1_CBPR_SHIFT)
|
||||||
|
#define ICC_CTLR_EL1_PMHE_SHIFT 6
|
||||||
|
#define ICC_CTLR_EL1_PMHE_MASK (1 << ICC_CTLR_EL1_PMHE_SHIFT)
|
||||||
#define ICC_CTLR_EL1_PRI_BITS_SHIFT 8
|
#define ICC_CTLR_EL1_PRI_BITS_SHIFT 8
|
||||||
#define ICC_CTLR_EL1_PRI_BITS_MASK (0x7 << ICC_CTLR_EL1_PRI_BITS_SHIFT)
|
#define ICC_CTLR_EL1_PRI_BITS_MASK (0x7 << ICC_CTLR_EL1_PRI_BITS_SHIFT)
|
||||||
#define ICC_CTLR_EL1_ID_BITS_SHIFT 11
|
#define ICC_CTLR_EL1_ID_BITS_SHIFT 11
|
||||||
|
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -359,33 +359,40 @@ struct per_cpu_nodestat {
 #endif /* !__GENERATING_BOUNDS.H */
 
 enum zone_type {
-#ifdef CONFIG_ZONE_DMA
 	/*
-	 * ZONE_DMA is used when there are devices that are not able
-	 * to do DMA to all of addressable memory (ZONE_NORMAL). Then we
-	 * carve out the portion of memory that is needed for these devices.
-	 * The range is arch specific.
+	 * ZONE_DMA and ZONE_DMA32 are used when there are peripherals not able
+	 * to DMA to all of the addressable memory (ZONE_NORMAL).
+	 * On architectures where this area covers the whole 32 bit address
+	 * space ZONE_DMA32 is used. ZONE_DMA is left for the ones with smaller
+	 * DMA addressing constraints. This distinction is important as a 32bit
+	 * DMA mask is assumed when ZONE_DMA32 is defined. Some 64-bit
+	 * platforms may need both zones as they support peripherals with
+	 * different DMA addressing limitations.
 	 *
-	 * Some examples
+	 * Some examples:
 	 *
-	 * Architecture		Limit
-	 * ---------------------------
-	 * parisc, ia64, sparc	<4G
-	 * s390, powerpc	<2G
-	 * arm			Various
-	 * alpha		Unlimited or 0-16MB.
+	 * - i386 and x86_64 have a fixed 16M ZONE_DMA and ZONE_DMA32 for the
+	 *   rest of the lower 4G.
 	 *
-	 * i386, x86_64 and multiple other arches
-	 *			<16M.
+	 * - arm only uses ZONE_DMA, the size, up to 4G, may vary depending on
+	 *   the specific device.
+	 *
+	 * - arm64 has a fixed 1G ZONE_DMA and ZONE_DMA32 for the rest of the
+	 *   lower 4G.
+	 *
+	 * - powerpc only uses ZONE_DMA, the size, up to 2G, may vary
+	 *   depending on the specific device.
+	 *
+	 * - s390 uses ZONE_DMA fixed to the lower 2G.
+	 *
+	 * - ia64 and riscv only use ZONE_DMA32.
+	 *
+	 * - parisc uses neither.
 	 */
+#ifdef CONFIG_ZONE_DMA
 	ZONE_DMA,
 #endif
 #ifdef CONFIG_ZONE_DMA32
-	/*
-	 * x86_64 needs two ZONE_DMAs because it supports devices that are
-	 * only able to do DMA to the lower 16M but also 32 bit devices that
-	 * can only do DMA areas below 4G.
-	 */
 	ZONE_DMA32,
 #endif
 	/*
|
@ -7,6 +7,7 @@
|
|||||||
#ifndef __LINUX_PSCI_H
|
#ifndef __LINUX_PSCI_H
|
||||||
#define __LINUX_PSCI_H
|
#define __LINUX_PSCI_H
|
||||||
|
|
||||||
|
#include <linux/arm-smccc.h>
|
||||||
#include <linux/init.h>
|
#include <linux/init.h>
|
||||||
#include <linux/types.h>
|
#include <linux/types.h>
|
||||||
|
|
||||||
@ -18,12 +19,6 @@ bool psci_tos_resident_on(int cpu);
|
|||||||
int psci_cpu_suspend_enter(u32 state);
|
int psci_cpu_suspend_enter(u32 state);
|
||||||
bool psci_power_state_is_valid(u32 state);
|
bool psci_power_state_is_valid(u32 state);
|
||||||
|
|
||||||
enum psci_conduit {
|
|
||||||
PSCI_CONDUIT_NONE,
|
|
||||||
PSCI_CONDUIT_SMC,
|
|
||||||
PSCI_CONDUIT_HVC,
|
|
||||||
};
|
|
||||||
|
|
||||||
enum smccc_version {
|
enum smccc_version {
|
||||||
SMCCC_VERSION_1_0,
|
SMCCC_VERSION_1_0,
|
||||||
SMCCC_VERSION_1_1,
|
SMCCC_VERSION_1_1,
|
||||||
@ -38,7 +33,7 @@ struct psci_operations {
|
|||||||
int (*affinity_info)(unsigned long target_affinity,
|
int (*affinity_info)(unsigned long target_affinity,
|
||||||
unsigned long lowest_affinity_level);
|
unsigned long lowest_affinity_level);
|
||||||
int (*migrate_info_type)(void);
|
int (*migrate_info_type)(void);
|
||||||
enum psci_conduit conduit;
|
enum arm_smccc_conduit conduit;
|
||||||
enum smccc_version smccc_version;
|
enum smccc_version smccc_version;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
init/initramfs.c

@@ -10,6 +10,7 @@
 #include <linux/syscalls.h>
 #include <linux/utime.h>
 #include <linux/file.h>
+#include <linux/memblock.h>
 
 static ssize_t __init xwrite(int fd, const char *p, size_t count)
 {
@@ -529,6 +530,13 @@ extern unsigned long __initramfs_size;
 
 void __weak free_initrd_mem(unsigned long start, unsigned long end)
 {
+#ifdef CONFIG_ARCH_KEEP_MEMBLOCK
+	unsigned long aligned_start = ALIGN_DOWN(start, PAGE_SIZE);
+	unsigned long aligned_end = ALIGN(end, PAGE_SIZE);
+
+	memblock_free(__pa(aligned_start), aligned_end - aligned_start);
+#endif
+
 	free_reserved_area((void *)start, (void *)end, POISON_FREE_INITMEM,
 			"initrd");
 }
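The new CONFIG_ARCH_KEEP_MEMBLOCK branch hands the initrd range back to memblock, and since start/end are not guaranteed to be page-aligned it first widens the range to whole pages. The rounding is the usual power-of-two mask trick; a standalone illustration (the EX_* names are local to this example, with PAGE_SIZE assumed to be 4096):

    #include <stdio.h>

    #define EX_PAGE_SIZE     4096UL
    /* previous page boundary */
    #define EX_ALIGN_DOWN(x) ((x) & ~(EX_PAGE_SIZE - 1))
    /* next page boundary */
    #define EX_ALIGN_UP(x)   (((x) + EX_PAGE_SIZE - 1) & ~(EX_PAGE_SIZE - 1))

    int main(void)
    {
            /* 0x1234 rounds down to 0x1000; 0x5678 rounds up to 0x6000 */
            printf("%#lx %#lx\n", EX_ALIGN_DOWN(0x1234UL), EX_ALIGN_UP(0x5678UL));
            return 0;
    }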
kernel/dma/direct.c

@@ -16,12 +16,11 @@
 #include <linux/swiotlb.h>
 
 /*
- * Most architectures use ZONE_DMA for the first 16 Megabytes, but
- * some use it for entirely different regions:
+ * Most architectures use ZONE_DMA for the first 16 Megabytes, but some use
+ * it for entirely different regions. In that case the arch code needs to
+ * override the variable below for dma-direct to work properly.
  */
-#ifndef ARCH_ZONE_DMA_BITS
-#define ARCH_ZONE_DMA_BITS 24
-#endif
+unsigned int zone_dma_bits __ro_after_init = 24;
 
 static void report_addr(struct device *dev, dma_addr_t dma_addr, size_t size)
 {
@@ -69,7 +68,7 @@ static gfp_t __dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
 	 * Note that GFP_DMA32 and GFP_DMA are no ops without the corresponding
 	 * zones.
 	 */
-	if (*phys_mask <= DMA_BIT_MASK(ARCH_ZONE_DMA_BITS))
+	if (*phys_mask <= DMA_BIT_MASK(zone_dma_bits))
 		return GFP_DMA;
 	if (*phys_mask <= DMA_BIT_MASK(32))
 		return GFP_DMA32;
@@ -395,7 +394,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 	u64 min_mask;
 
 	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		min_mask = DMA_BIT_MASK(ARCH_ZONE_DMA_BITS);
+		min_mask = DMA_BIT_MASK(zone_dma_bits);
 	else
 		min_mask = DMA_BIT_MASK(32);
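With the build-time #define gone, an architecture with unusual DMA addressing can size ZONE_DMA at boot simply by writing zone_dma_bits before dma-direct consults it. A sketch of such an override; the 30-bit value mirrors the 1G arm64 ZONE_DMA added elsewhere in this series (the arm64 hunk itself is not shown in this diff, and the hook name below is illustrative):

    #include <linux/dma-direct.h>   /* declares zone_dma_bits in this series */
    #include <linux/init.h>

    /* Illustrative hook; real arches do this from their mm init code. */
    static void __init example_zone_limits_init(void)
    {
            if (IS_ENABLED(CONFIG_ZONE_DMA))
                    zone_dma_bits = 30;     /* DMA_BIT_MASK(30) => lowest 1 GiB */
    }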
kernel/module.c

@@ -3222,7 +3222,7 @@ static int find_module_sections(struct module *mod, struct load_info *info)
 #endif
 #ifdef CONFIG_FTRACE_MCOUNT_RECORD
 	/* sechdrs[0].sh_size is always zero */
-	mod->ftrace_callsites = section_objs(info, "__mcount_loc",
+	mod->ftrace_callsites = section_objs(info, FTRACE_CALLSITE_SECTION,
 					     sizeof(*mod->ftrace_callsites),
 					     &mod->num_ftrace_callsites);
 #endif
kernel/trace/ftrace.c

@@ -2494,14 +2494,14 @@ struct dyn_ftrace *ftrace_rec_iter_record(struct ftrace_rec_iter *iter)
 }
 
 static int
-ftrace_code_disable(struct module *mod, struct dyn_ftrace *rec)
+ftrace_nop_initialize(struct module *mod, struct dyn_ftrace *rec)
 {
 	int ret;
 
 	if (unlikely(ftrace_disabled))
 		return 0;
 
-	ret = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
+	ret = ftrace_init_nop(mod, rec);
 	if (ret) {
 		ftrace_bug_type = FTRACE_BUG_INIT;
 		ftrace_bug(ret, rec);
@@ -2943,7 +2943,7 @@ static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)
 	 * to the NOP instructions.
 	 */
 	if (!__is_defined(CC_USING_NOP_MCOUNT) &&
-	    !ftrace_code_disable(mod, p))
+	    !ftrace_nop_initialize(mod, p))
 		break;
 
 	update_cnt++;
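The rename from ftrace_code_disable() to ftrace_nop_initialize() goes together with the new ftrace_init_nop() hook it now calls: an architecture can override the first-time initialization of a call site, which is what lets arm64 implement FTRACE_WITH_REGS via patchable function entries. The generic fallback added by this series in <linux/ftrace.h> (not shown in this diff) reduces to the old behaviour, roughly:

    /* Reconstructed from context; see <linux/ftrace.h> in this series. */
    #ifndef ftrace_init_nop
    static inline int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
    {
            return ftrace_make_nop(mod, rec, MCOUNT_ADDR);
    }
    #endif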
mm/memory.c (104 lines changed)

@@ -118,6 +118,18 @@ int randomize_va_space __read_mostly =
 					2;
 #endif
 
+#ifndef arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+	/*
+	 * Those arches which don't have hw access flag feature need to
+	 * implement their own helper. By default, "true" means pagefault
+	 * will be hit on old pte.
+	 */
+	return true;
+}
+#endif
+
 static int __init disable_randmaps(char *s)
 {
 	randomize_va_space = 0;
@@ -2145,32 +2157,82 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	return same;
 }
 
-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline bool cow_user_page(struct page *dst, struct page *src,
+				 struct vm_fault *vmf)
 {
+	bool ret;
+	void *kaddr;
+	void __user *uaddr;
+	bool force_mkyoung;
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = vmf->address;
+
 	debug_dma_assert_idle(src);
 
+	if (likely(src)) {
+		copy_user_highpage(dst, src, addr, vma);
+		return true;
+	}
+
 	/*
 	 * If the source page was a PFN mapping, we don't have
 	 * a "struct page" for it. We do a best-effort copy by
 	 * just copying from the original user address. If that
 	 * fails, we just zero-fill it. Live with it.
 	 */
-	if (unlikely(!src)) {
-		void *kaddr = kmap_atomic(dst);
-		void __user *uaddr = (void __user *)(va & PAGE_MASK);
+	kaddr = kmap_atomic(dst);
+	uaddr = (void __user *)(addr & PAGE_MASK);
+
+	/*
+	 * On architectures with software "accessed" bits, we would
+	 * take a double page fault, so mark it accessed here.
+	 */
+	force_mkyoung = arch_faults_on_old_pte() && !pte_young(vmf->orig_pte);
+	if (force_mkyoung) {
+		pte_t entry;
+
+		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
+		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+			/*
+			 * Other thread has already handled the fault
+			 * and we don't need to do anything. If it's
+			 * not the case, the fault will be triggered
+			 * again on the same address.
+			 */
+			ret = false;
+			goto pte_unlock;
+		}
+
+		entry = pte_mkyoung(vmf->orig_pte);
+		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
+			update_mmu_cache(vma, addr, vmf->pte);
+	}
+
+	/*
+	 * This really shouldn't fail, because the page is there
+	 * in the page tables. But it might just be unreadable,
+	 * in which case we just give up and fill the result with
+	 * zeroes.
+	 */
+	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
 		/*
-		 * This really shouldn't fail, because the page is there
-		 * in the page tables. But it might just be unreadable,
-		 * in which case we just give up and fill the result with
-		 * zeroes.
+		 * Give a warn in case there can be some obscure
+		 * use-case
 		 */
-		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
-			clear_page(kaddr);
-		kunmap_atomic(kaddr);
-		flush_dcache_page(dst);
-	} else
-		copy_user_highpage(dst, src, va, vma);
+		WARN_ON_ONCE(1);
+		clear_page(kaddr);
+	}
+
+	ret = true;
+
+pte_unlock:
+	if (force_mkyoung)
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+	kunmap_atomic(kaddr);
+	flush_dcache_page(dst);
+
+	return ret;
 }
 
 static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
@@ -2327,7 +2389,19 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 				vmf->address);
 		if (!new_page)
 			goto oom;
-		cow_user_page(new_page, old_page, vmf->address, vma);
+
+		if (!cow_user_page(new_page, old_page, vmf)) {
+			/*
+			 * COW failed, if the fault was solved by other,
+			 * it's fine. If not, userspace would re-fault on
+			 * the same address and we will handle the fault
+			 * from the second attempt.
+			 */
+			put_page(new_page);
+			if (old_page)
+				put_page(old_page);
+			return 0;
+		}
 	}
 
 	if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
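The generic arch_faults_on_old_pte() above defaults to true; the arm64 end of this series overrides it so the pte_mkyoung() dance in cow_user_page() is only taken on CPUs that lack hardware Access Flag updates. Roughly, as added to arch/arm64/include/asm/pgtable.h (among the files not shown in this diff):

    /* Reconstructed from the series; cpu_has_hw_af() probes ID_AA64MMFR1_EL1. */
    #define arch_faults_on_old_pte arch_faults_on_old_pte
    static inline bool arch_faults_on_old_pte(void)
    {
            WARN_ON(preemptible());

            return !cpu_has_hw_af();
    }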
tools/testing/selftests/Makefile

@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 TARGETS = android
+TARGETS += arm64
 TARGETS += bpf
 TARGETS += breakpoints
 TARGETS += capabilities
tools/testing/selftests/arm64/Makefile

@@ -1,12 +1,66 @@
 # SPDX-License-Identifier: GPL-2.0
 
-# ARCH can be overridden by the user for cross compiling
+# When ARCH not overridden for crosscompiling, lookup machine
 ARCH ?= $(shell uname -m 2>/dev/null || echo not)
 
 ifneq (,$(filter $(ARCH),aarch64 arm64))
-CFLAGS += -I../../../../usr/include/
-TEST_GEN_PROGS := tags_test
-TEST_PROGS := run_tags_test.sh
+ARM64_SUBTARGETS ?= tags signal
+else
+ARM64_SUBTARGETS :=
 endif
 
-include ../lib.mk
+CFLAGS := -Wall -O2 -g
+
+# A proper top_srcdir is needed by KSFT(lib.mk)
+top_srcdir = $(realpath ../../../../)
+
+# Additional include paths needed by kselftest.h and local headers
+CFLAGS += -I$(top_srcdir)/tools/testing/selftests/
+
+# Guessing where the Kernel headers could have been installed
+# depending on ENV config
+ifeq ($(KBUILD_OUTPUT),)
+khdr_dir = $(top_srcdir)/usr/include
+else
+# the KSFT preferred location when KBUILD_OUTPUT is set
+khdr_dir = $(KBUILD_OUTPUT)/kselftest/usr/include
+endif
+
+CFLAGS += -I$(khdr_dir)
+
+export CFLAGS
+export top_srcdir
+
+all:
+	@for DIR in $(ARM64_SUBTARGETS); do		\
+		BUILD_TARGET=$(OUTPUT)/$$DIR;		\
+		mkdir -p $$BUILD_TARGET;		\
+		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;	\
+	done
+
+install: all
+	@for DIR in $(ARM64_SUBTARGETS); do		\
+		BUILD_TARGET=$(OUTPUT)/$$DIR;		\
+		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;	\
+	done
+
+run_tests: all
+	@for DIR in $(ARM64_SUBTARGETS); do		\
+		BUILD_TARGET=$(OUTPUT)/$$DIR;		\
+		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;	\
+	done
+
+# Avoid any output on non arm64 on emit_tests
+emit_tests: all
+	@for DIR in $(ARM64_SUBTARGETS); do		\
+		BUILD_TARGET=$(OUTPUT)/$$DIR;		\
+		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;	\
+	done
+
+clean:
+	@for DIR in $(ARM64_SUBTARGETS); do		\
+		BUILD_TARGET=$(OUTPUT)/$$DIR;		\
+		make OUTPUT=$$BUILD_TARGET -C $$DIR $@;	\
+	done
+
+.PHONY: all clean install run_tests emit_tests
tools/testing/selftests/arm64/README (new file, 25 lines)

@@ -0,0 +1,25 @@
+KSelfTest ARM64
+===============
+
+- These tests are arm64 specific and so not built or run but just skipped
+  completely when env-variable ARCH is found to be different than 'arm64'
+  and `uname -m` reports other than 'aarch64'.
+
+- Holding true the above, ARM64 KSFT tests can be run within the KSelfTest
+  framework using standard Linux top-level-makefile targets:
+
+      $ make TARGETS=arm64 kselftest-clean
+      $ make TARGETS=arm64 kselftest
+
+  or
+
+      $ make -C tools/testing/selftests TARGETS=arm64 \
+		INSTALL_PATH=<your-installation-path> install
+
+  or, alternatively, only specific arm64/ subtargets can be picked:
+
+      $ make -C tools/testing/selftests TARGETS=arm64 ARM64_SUBTARGETS="tags signal" \
+		INSTALL_PATH=<your-installation-path> install
+
+  Further details on building and running KSFT can be found in:
+  Documentation/dev-tools/kselftest.rst
tools/testing/selftests/arm64/signal/.gitignore (new file, 3 lines)

@@ -0,0 +1,3 @@
+mangle_*
+fake_sigreturn_*
+!*.[ch]
tools/testing/selftests/arm64/signal/Makefile (new file, 32 lines)

@@ -0,0 +1,32 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2019 ARM Limited
+
+# Additional include paths needed by kselftest.h and local headers
+CFLAGS += -D_GNU_SOURCE -std=gnu99 -I.
+
+SRCS := $(filter-out testcases/testcases.c,$(wildcard testcases/*.c))
+PROGS := $(patsubst %.c,%,$(SRCS))
+
+# Generated binaries to be installed by top KSFT script
+TEST_GEN_PROGS := $(notdir $(PROGS))
+
+# Get Kernel headers installed and use them.
+KSFT_KHDR_INSTALL := 1
+
+# Including KSFT lib.mk here will also mangle the TEST_GEN_PROGS list
+# to account for any OUTPUT target-dirs optionally provided by
+# the toplevel makefile
+include ../../lib.mk
+
+$(TEST_GEN_PROGS): $(PROGS)
+	cp $(PROGS) $(OUTPUT)/
+
+clean:
+	$(CLEAN)
+	rm -f $(PROGS)
+
+# Common test-unit targets to build common-layout test-cases executables
+# Needs secondary expansion to properly include the testcase c-file in pre-reqs
+.SECONDEXPANSION:
+$(PROGS): test_signals.c test_signals_utils.c testcases/testcases.c signals.S $$@.c test_signals.h test_signals_utils.h testcases/testcases.h
+	$(CC) $(CFLAGS) $^ -o $@
tools/testing/selftests/arm64/signal/README (new file, 59 lines)

@@ -0,0 +1,59 @@
+KSelfTest arm64/signal/
+=======================
+
+Signals Tests
++++++++++++++
+
+- Tests are built around a common main compilation unit: such shared main
+  enforces a standard sequence of operations needed to perform a single
+  signal-test (setup/trigger/run/result/cleanup)
+
+- The above mentioned ops are configurable on a test-by-test basis: each test
+  is described (and configured) using the descriptor signals.h::struct tdescr
+
+- Each signal testcase is compiled into its own executable: a separate
+  executable is used for each test since many tests complete successfully
+  by receiving some kind of fatal signal from the Kernel, so it's safer
+  to run each test unit in its own standalone process, so as to start each
+  test from a clean slate.
+
+- New tests can be simply defined in testcases/ dir providing a proper struct
+  tdescr overriding all the defaults we wish to change (as of now providing a
+  custom run method is mandatory though)
+
+- Signals' test-cases hereafter defined belong currently to two
+  principal families:
+
+  - 'mangle_' tests: a real signal (SIGUSR1) is raised and used as a trigger
+    and then the test case code modifies the signal frame from inside the
+    signal handler itself.
+
+  - 'fake_sigreturn_' tests: a brand new custom artificial sigframe structure
+    is placed on the stack and a sigreturn syscall is called to simulate a
+    real signal return. This kind of tests does not usually use a trigger and
+    they are just fired using some simple included assembly trampoline code.
+
+- Most of these tests are successfully passing if the process gets killed by
+  some fatal signal: usually SIGSEGV or SIGBUS. Since while writing this
+  kind of tests it is extremely easy in fact to end-up injecting other
+  unrelated SEGV bugs in the testcases, it becomes extremely tricky to
+  be really sure that the tests are really addressing what they are meant
+  to address and they are not instead falling apart due to unplanned bugs
+  in the test code.
+  In order to alleviate the misery of the life of such test-developer, a few
+  helpers are provided:
+
+  - a couple of ASSERT_BAD/GOOD_CONTEXT() macros to easily parse a ucontext_t
+    and verify if it is indeed GOOD or BAD (depending on what we were
+    expecting), using the same logic/perspective as in the arm64 Kernel signals
+    routines.
+
+  - a sanity mechanism to be used in 'fake_sigreturn_'-alike tests: enabled by
+    default it takes care to verify that the test-execution had at least
+    successfully progressed up to the stage of triggering the fake sigreturn
+    call.
+
+In both cases test results are expected in terms of:
+- some fatal signal sent by the Kernel to the test process
+or
+- analyzing some final regs state
tools/testing/selftests/arm64/signal/signals.S (new file, 64 lines)

@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2019 ARM Limited */
+
+#include <asm/unistd.h>
+
+.section .rodata, "a"
+call_fmt:
+	.asciz "Calling sigreturn with fake sigframe sized:%zd at SP @%08lX\n"
+
+.text
+
+.globl fake_sigreturn
+
+/* fake_sigreturn x0:&sigframe, x1:sigframe_size, x2:misalign_bytes */
+fake_sigreturn:
+	stp	x29, x30, [sp, #-16]!
+	mov	x29, sp
+
+	mov	x20, x0
+	mov	x21, x1
+	mov	x22, x2
+
+	/* create space on the stack for fake sigframe 16 bytes-aligned */
+	add	x0, x21, x22
+	add	x0, x0, #15
+	bic	x0, x0, #15 /* round_up(sigframe_size + misalign_bytes, 16) */
+	sub	sp, sp, x0
+	add	x23, sp, x22 /* new sigframe base with misalignment if any */
+
+	ldr	x0, =call_fmt
+	mov	x1, x21
+	mov	x2, x23
+	bl	printf
+
+	/* memcpy the provided content, while still keeping SP aligned */
+	mov	x0, x23
+	mov	x1, x20
+	mov	x2, x21
+	bl	memcpy
+
+	/*
+	 * Here saving a last minute SP to current->token acts as a marker:
+	 * if we got here, we are successfully faking a sigreturn; in other
+	 * words we are sure no bad fatal signal has been raised till now
+	 * for unrelated reasons, so we should consider the possibly observed
+	 * fatal signal like SEGV coming from Kernel restore_sigframe() and
+	 * triggered as expected from our test-case.
+	 * For simplicity this assumes that current field 'token' is laid out
+	 * as first in struct tdescr
+	 */
+	ldr	x0, current
+	str	x23, [x0]
+	/* finally move SP to misaligned address...if any requested */
+	mov	sp, x23
+
+	mov	x8, #__NR_rt_sigreturn
+	svc	#0
+
+	/*
+	 * Above sigreturn should not return...looping here leads to a timeout
+	 * and ensures proper and clean test failure, instead of jumping around
+	 * on a potentially corrupted stack.
+	 */
+	b	.
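Seen from C, the trampoline follows the AArch64 procedure-call convention, so x0/x1/x2 map directly onto three parameters. The declaration the testcases use should look like the sketch below (the actual prototype lives in the selftest headers, which are among the files not shown here):

    /* C view of the asm trampoline above; on success it never returns. */
    int fake_sigreturn(void *sigframe, size_t sz, int misalign_bytes);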
tools/testing/selftests/arm64/signal/test_signals.c (new file, 29 lines)

@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 ARM Limited
+ *
+ * Generic test wrapper for arm64 signal tests.
+ *
+ * Each test provides its own tde struct tdescr descriptor to link with
+ * this wrapper. Framework provides common helpers.
+ */
+#include <kselftest.h>
+
+#include "test_signals.h"
+#include "test_signals_utils.h"
+
+struct tdescr *current;
+
+int main(int argc, char *argv[])
+{
+	current = &tde;
+
+	ksft_print_msg("%s :: %s\n", current->name, current->descr);
+	if (test_setup(current) && test_init(current)) {
+		test_run(current);
+		test_cleanup(current);
+	}
+	test_result(current);
+
+	return current->result;
+}
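main() expects every testcase to supply the global tde descriptor it links against. A hypothetical minimal testcase might look like the sketch below; only .name, .descr, and .result are visible in this diff, so the run-hook field name and its signature are assumptions inferred from the wrapper above (test_signals.h is not shown):

    /* Hypothetical testcase sketch; fields beyond .name/.descr are assumed. */
    #include "test_signals.h"

    static int demo_run(struct tdescr *td, siginfo_t *si, ucontext_t *uc)
    {
            td->pass = 1;   /* assumed pass/fail field */
            return 1;       /* non-zero: run step completed */
    }

    struct tdescr tde = {
            .name = "DEMO",
            .descr = "Do-nothing example testcase",
            .run = demo_run,
    };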
Some files were not shown because too many files have changed in this diff.